TRUST LAYER FOR LARGE LANGUAGE MODELS

    Publication Number: US20250086309A1

    Publication Date: 2025-03-13

    Application Number: US18410722

    Application Date: 2024-01-11

    Abstract: A cloud platform may include a model interface that receives, from a client, a prompt requesting a response from a large language model, where the client is associated with a set of configuration parameters via the cloud platform that supports the interface. The cloud platform may modify the prompt in accordance with the set of configuration parameters, resulting in a modified prompt, and transmit the modified prompt to the large language model. The cloud platform may receive the response generated by the large language model and provide the response to a model that determines one or more probabilities that the response contains content from one or more content categories. The cloud platform may transmit the response or the one or more probabilities to the client.
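
    A minimal sketch of the trust-layer flow described in the abstract, assuming a configurable prompt-modification step and a content classifier; the names (ClientConfig, modify_prompt, handle_request) and the stub LLM and classifier are illustrative assumptions, not the patented implementation:

        # Illustrative sketch only; every name here is an assumption.
        from dataclasses import dataclass
        from typing import Callable, Dict


        @dataclass
        class ClientConfig:
            # Per-client configuration parameters held by the cloud platform.
            system_preamble: str = "Answer concisely."
            blocked_terms: tuple = ("password",)
            return_probabilities: bool = True


        def modify_prompt(prompt: str, config: ClientConfig) -> str:
            # Apply the client's configuration to the raw prompt, e.g. redact
            # blocked terms and prepend a preamble before the LLM sees it.
            for term in config.blocked_terms:
                prompt = prompt.replace(term, "[REDACTED]")
            return f"{config.system_preamble}\n{prompt}"


        def handle_request(
            prompt: str,
            config: ClientConfig,
            llm: Callable[[str], str],
            classifier: Callable[[str], Dict[str, float]],
        ):
            # 1. Modify the prompt per the client's configuration parameters.
            modified = modify_prompt(prompt, config)
            # 2. Transmit the modified prompt to the large language model.
            response = llm(modified)
            # 3. Score the response against content categories (e.g. toxicity, PII).
            probabilities = classifier(response)
            # 4. Return the response, the probabilities, or both to the client.
            return (response, probabilities) if config.return_probabilities else (response, None)


        if __name__ == "__main__":
            # Stubs stand in for the real LLM and content-category model.
            stub_llm = lambda p: f"Echo: {p}"
            stub_classifier = lambda r: {"toxicity": 0.01, "pii": 0.02}
            print(handle_request("What is my password policy?", ClientConfig(), stub_llm, stub_classifier))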

    SYSTEMS AND METHODS FOR NEURAL NETWORK BASED RECOMMENDER MODELS

    Publication Number: US20240412059A1

    Publication Date: 2024-12-12

    Application Number: US18330488

    Application Date: 2023-06-07

    Abstract: Embodiments described herein provide a method for training a neural network based model. The method includes receiving a training dataset with a plurality of training samples and encoding those samples into representations in a feature space. For a given query, a positive sample from the training dataset is selected based on a relationship between the given query and the positive sample in the feature space. One or more negative samples from the training dataset that are within a reconfigurable distance of the positive sample in the feature space are selected, and a loss is computed based on the positive sample and the one or more negative samples. The neural network is trained based on the loss.
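
    A minimal sketch of the negative-sampling and loss step, assuming an InfoNCE-style contrastive loss and a configurable distance radius for selecting negatives near the positive sample; select_negatives, contrastive_loss, and the random toy encodings are illustrative assumptions, not the patented method:

        # Illustrative sketch only; encoder, radius, and loss form are assumptions.
        import numpy as np


        def select_negatives(positive_vec, candidate_vecs, radius, k=4):
            # Distance from each encoded candidate to the positive sample.
            dists = np.linalg.norm(candidate_vecs - positive_vec, axis=1)
            # Keep candidates inside the reconfigurable radius (hard negatives),
            # excluding the positive itself (distance ~ 0).
            mask = (dists > 1e-8) & (dists <= radius)
            hard = np.where(mask)[0]
            return candidate_vecs[hard[:k]]


        def contrastive_loss(query_vec, positive_vec, negative_vecs, temperature=0.1):
            # InfoNCE-style loss: similarity to the positive vs. the negatives.
            sims = np.concatenate(
                [[query_vec @ positive_vec], negative_vecs @ query_vec]
            ) / temperature
            sims -= sims.max()  # numerical stability
            return -np.log(np.exp(sims[0]) / np.exp(sims).sum())


        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            query = rng.normal(size=8)
            positive = query + 0.1 * rng.normal(size=8)   # close to the query in feature space
            candidates = rng.normal(size=(32, 8))         # encoded training samples
            negatives = select_negatives(positive, candidates, radius=5.0)
            print("loss:", contrastive_loss(query, positive, negatives))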

    Systems and methods for providing an automated testing pipeline for neural network models

    Publication Number: US12197317B2

    Publication Date: 2025-01-14

    Application Number: US18156323

    Application Date: 2023-01-18

    Abstract: Embodiments described herein provide an automated testing pipeline for producing a testing dataset for a trained neural network model that was trained using a first training dataset. A first testing dataset for the trained neural network model, including a first plurality of user queries, is received. A dependency parser is used to filter the first plurality of user queries based on one or more action verbs. A pretrained language model is used to rank the remaining user queries based on their respective relationships with queries in the first training dataset. User queries that a bag-of-words classifier classifies as keyword matches with the queries in the first training dataset are then removed. A second testing dataset is generated from the ranked remaining user queries, and the trained neural network model generates testing outputs using the second testing dataset.
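
    A minimal sketch of the three pipeline stages, with a simple verb check, a token-overlap score, and a Jaccard match standing in for the dependency parser, pretrained language model, and bag-of-words classifier; all function names, thresholds, and the ACTION_VERBS list are illustrative assumptions:

        # Illustrative sketch only; the real stages use a parser, an LM, and a classifier.
        from typing import List

        ACTION_VERBS = {"create", "update", "delete", "send", "export"}


        def has_action_verb(query: str) -> bool:
            # Stand-in for the dependency-parser filter on action verbs.
            return any(tok.lower().strip("?.!") in ACTION_VERBS for tok in query.split())


        def relevance_score(query: str, train_queries: List[str]) -> float:
            # Stand-in for the pretrained-language-model ranking: token overlap
            # with the closest training query.
            q = set(query.lower().split())
            return max(len(q & set(t.lower().split())) / max(len(q), 1) for t in train_queries)


        def is_keyword_match(query: str, train_queries: List[str], threshold=0.8) -> bool:
            # Stand-in for the bag-of-words classifier that flags near-duplicates.
            q = set(query.lower().split())
            return any(
                len(q & set(t.lower().split())) / len(q | set(t.lower().split())) >= threshold
                for t in train_queries
            )


        def build_second_testing_dataset(test_queries, train_queries, top_n=100):
            # 1. Filter queries that contain an action verb.
            kept = [q for q in test_queries if has_action_verb(q)]
            # 2. Rank the remaining queries by relationship to the training queries.
            ranked = sorted(kept, key=lambda q: relevance_score(q, train_queries), reverse=True)
            # 3. Drop queries classified as keyword matches with the training data.
            deduped = [q for q in ranked if not is_keyword_match(q, train_queries)]
            return deduped[:top_n]


        if __name__ == "__main__":
            train = ["create a sales report", "send invoice to customer"]
            test = ["create a quarterly report", "what is the weather", "send invoice to customer"]
            print(build_second_testing_dataset(test, train))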

    METADATA DRIVEN PROMPT GROUNDING FOR GENERATIVE ARTIFICIAL INTELLIGENCE APPLICATIONS

    Publication Number: US20250086467A1

    Publication Date: 2025-03-13

    Application Number: US18427304

    Application Date: 2024-01-30

    Abstract: The described method may include receiving user input indicating a configuration that identifies a large language model (LLM) and a subset of documents available to a tenant. The method may include generating one or more vectorizations of content of the subset of documents. The method may include receiving a request to generate a generative artificial intelligence (AI) response and generating a generative AI prompt using the content to ground the prompt. The subset of documents may be identified based on a comparison between a vectorization of the request and the one or more vectorizations, and based at least in part on a determination that a user associated with the tenant is permitted to access the subset of documents. The method may include presenting a response to the generative AI prompt, the response being generated by the LLM using the generative AI prompt.
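
    A minimal sketch of the retrieval-and-grounding flow, assuming cosine similarity over document vectorizations, a per-document permission list, and a simple prompt template; the embed and llm callables, the document fields, and the ground_and_answer helper are illustrative assumptions, not the patented implementation:

        # Illustrative sketch only; embedder, permission model, and prompt template are assumptions.
        import numpy as np
        from typing import Callable, Dict, List


        def cosine(a: np.ndarray, b: np.ndarray) -> float:
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))


        def ground_and_answer(
            request: str,
            docs: List[Dict],                    # each: {"text", "vector", "allowed_users"}
            embed: Callable[[str], np.ndarray],  # vectorizes text
            llm: Callable[[str], str],
            user: str,
            top_k: int = 2,
        ) -> str:
            # 1. Vectorize the incoming request.
            request_vec = embed(request)
            # 2. Keep only documents the requesting user is permitted to access.
            permitted = [d for d in docs if user in d["allowed_users"]]
            # 3. Rank permitted documents by similarity between vectorizations.
            ranked = sorted(permitted, key=lambda d: cosine(request_vec, d["vector"]), reverse=True)
            # 4. Ground the generative AI prompt with the retrieved content.
            context = "\n".join(d["text"] for d in ranked[:top_k])
            prompt = f"Answer using only this context:\n{context}\n\nQuestion: {request}"
            # 5. The LLM identified in the configuration generates the response.
            return llm(prompt)


        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            embed = lambda text: rng.normal(size=16)   # stub embedder
            llm = lambda prompt: f"[LLM response to {len(prompt)}-char prompt]"
            docs = [{"text": "Refund policy: 30 days.", "vector": embed(""), "allowed_users": {"alice"}}]
            print(ground_and_answer("What is the refund policy?", docs, embed, llm, user="alice"))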
