-
Publication No.: US20250086309A1
Publication Date: 2025-03-13
Application No.: US18410722
Application Date: 2024-01-11
Applicant: Salesforce, Inc.
Inventor: Shashank Harinath , Eugene Wayne Becker , Subha Melapalayam , Eric Brochu , Claire Cheng , Mario Rodriguez , Prithvi Krisnan Padmanabhan , Kathy Baxter , Kin Fai Kan
Abstract: A cloud platform may include a model interface that receives, from a client and at an interface for accessing a large language model, a prompt requesting a response from the large language model, where the client is associated with a set of configuration parameters via the cloud platform that supports the interface. The cloud platform may modify the prompt in accordance with the set of configuration parameters to produce a modified prompt and transmit the modified prompt to the large language model. The cloud platform may receive the response generated by the large language model and provide the response to a model that determines one or more probabilities that the response contains content from one or more content categories. The cloud platform may then transmit the response or the one or more probabilities to the client.
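The request flow this abstract describes can be sketched as follows. This is an illustrative stand-in, not the patented implementation: the configuration keys, the term-frequency moderation model, and the threshold behavior are all assumptions for demonstration.

```python
def modify_prompt(prompt: str, config: dict) -> str:
    """Apply the client's configuration parameters: here, an instruction
    preamble and a maximum prompt length (both illustrative)."""
    out = f"{config.get('preamble', '')} {prompt}".strip()
    return out[: config.get("max_chars", 2048)]

def moderate(response: str, category_terms: dict) -> dict:
    """Toy stand-in for the content model: per-category probability
    estimated from flagged-term frequency."""
    words = response.lower().split()
    return {cat: sum(w in terms for w in words) / max(len(words), 1)
            for cat, terms in category_terms.items()}

def handle_request(prompt, config, llm, category_terms, threshold=0.5):
    """Modify the prompt, call the model, and moderate the response."""
    response = llm(modify_prompt(prompt, config))
    probs = moderate(response, category_terms)
    # Return the probabilities instead of the response when a category is flagged.
    return probs if any(p > threshold for p in probs.values()) else response
```

The abstract leaves open whether the response, the probabilities, or both are returned; the sketch picks one simple policy.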
-
Publication No.: US20240412059A1
Publication Date: 2024-12-12
Application No.: US18330488
Application Date: 2023-06-07
Applicant: Salesforce, Inc.
Inventor: Regunathan Radhakrishnan , Zachary Alexander , Sitaram Asur , Shashank Harinath , Na Cheng , Shiva Kumar Pentyala
IPC: G06N3/08
Abstract: Embodiments described herein provide a method for training a neural network based model. The method includes receiving a training dataset with a plurality of training samples and encoding those samples into representations in a feature space. For a given query, a positive sample is selected from the training dataset based on a relationship between the given query and the positive sample in the feature space. One or more negative samples that are within a reconfigurable distance of the positive sample in the feature space are then selected from the training dataset, and a loss is computed based on the positive sample and the one or more negative samples. The neural network is trained based on the loss.
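The sample-selection step can be sketched as below. This is a minimal illustration under assumptions: Euclidean distance, a nearest-same-label positive, and `radius` standing in for the abstract's reconfigurable distance; the actual encoder, loss, and distance metric are not specified here.

```python
import math

def l2(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_contrastive_samples(query, samples, labels, query_label, radius):
    """Pick the positive closest to the query among same-label samples, then
    take negatives lying within `radius` of that positive (hard negatives)."""
    positives = [v for v, l in zip(samples, labels) if l == query_label]
    positive = min(positives, key=lambda v: l2(query, v))
    negatives = [v for v, l in zip(samples, labels)
                 if l != query_label and l2(positive, v) <= radius]
    return positive, negatives
```

Tuning `radius` controls how hard the negatives are: a small radius keeps only negatives that crowd the positive in feature space.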
-
Publication No.: US12197317B2
Publication Date: 2025-01-14
Application No.: US18156323
Application Date: 2023-01-18
Applicant: Salesforce, Inc.
Inventor: Shiva Kumar Pentyala , Shashank Harinath , Sitaram Asur , Zachary Alexander
IPC: G06F11/36
Abstract: Embodiments described herein provide an automated testing pipeline for providing a testing dataset for testing a trained neural network model trained using a first training dataset. A first testing dataset for the trained neural network including a first plurality of user queries is received. A dependency parser is used to filter the first plurality of user queries based on one or more action verbs. A pretrained language model is used to rank the remaining user queries based on respective relationships with queries in the first training dataset. Further, user queries that are classified as keyword matches with the queries in the first training dataset using a bag of words classifier are removed. A second testing dataset is generated using the ranked remaining user queries. Testing outputs are generated, by the trained neural network model, using the second testing dataset.
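The three-stage filter can be sketched as follows, with simple heuristics standing in for the components the abstract names: verb lookup for the dependency parser, exact bag-of-words matching for the classifier, and vocabulary overlap for the pretrained-LM ranker. All of these stand-ins are assumptions.

```python
def build_test_set(candidate_queries, train_queries, action_verbs):
    """Sketch of the testing-pipeline filters from the abstract."""
    # Stage 1 (dependency-parser stand-in): keep queries with an action verb.
    kept = [q for q in candidate_queries
            if set(q.lower().split()) & action_verbs]
    # Stage 2 (bag-of-words stand-in): drop exact keyword matches with training.
    train_bags = {frozenset(t.lower().split()) for t in train_queries}
    kept = [q for q in kept if frozenset(q.lower().split()) not in train_bags]
    # Stage 3 (ranker stand-in): order by vocabulary overlap with training data.
    vocab = {w for t in train_queries for w in t.lower().split()}
    kept.sort(key=lambda q: len(set(q.lower().split()) & vocab), reverse=True)
    return kept
```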
-
Publication No.: US12019984B2
Publication Date: 2024-06-25
Application No.: US17479748
Application Date: 2021-09-20
Applicant: Salesforce, Inc.
Inventor: Shilpa Bhagavath , Shubham Mehrotra , Abhishek Sharma , Shashank Harinath , Na Cheng , Zineb Laraki
IPC: G06F40/263 , G06F18/2415 , G06F18/2431 , G06F40/35 , G06F40/58 , G10L15/22
CPC classification number: G06F40/263 , G06F18/2415 , G06F18/2431 , G06F40/35 , G06F40/58 , G10L15/22
Abstract: A method that includes receiving an input at an interactive conversation service that uses an intent classification model. The method may further include generating, using an encoder model of the intent classification model, a set of output vectors corresponding to the input, where the encoder model is configured to determine a set of metrics corresponding to intent classifications. The method may further include determining, using an outlier detection model of the intent classification model, whether the input is in-domain or out-of-domain (OOD) based on a first vector of the set of output vectors satisfying a domain threshold relative to one or more of the intent classifications. The method may further include outputting, by the intent classification model, a second vector of the set of output vectors that indicates the set of metrics corresponding to the intent classifications or an indication that the input is OOD.
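The thresholding step can be illustrated as below. This is a deliberately simplified sketch: a softmax over intent logits with a max-probability cutoff stands in for the encoder's metrics and the outlier detection model, whose actual form the abstract does not detail.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify_intent(logits, domain_threshold):
    """Return intent probabilities, or 'OOD' when no intent clears the
    domain threshold (stand-in for the outlier detection model)."""
    probs = softmax(logits)
    return probs if max(probs) >= domain_threshold else "OOD"
```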
-
Publication No.: US20240303443A1
Publication Date: 2024-09-12
Application No.: US18496523
Application Date: 2023-10-27
Applicant: Salesforce, Inc.
Inventor: Na (Claire) Cheng , Jayesh Govindarajan , Zachary Alexander , Shashank Harinath , Atul Kshirsagar , Fermin Ordaz
IPC: G06F40/40 , G06F16/33 , G06F40/295
CPC classification number: G06F40/40 , G06F16/3347 , G06F40/295
Abstract: Embodiments provide a generative AI creation framework for building a customized generative AI stack on top of a foundational model (such as GPT), based on user-defined prompts, a natural language description of the task to be accomplished, and domain adaptation. In one embodiment, organization-specific knowledge may be injected into the prompt and/or the foundational model. The customized generative AI stack thus supports a full spectrum of domain-adaptive prompts, enabling personalized and adaptive AI chat applications.
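The prompt-injection path mentioned in the abstract can be sketched as a simple template. The field names and layout are illustrative assumptions, not the patent's format; the alternative path (adapting the foundational model itself) is not shown.

```python
def build_domain_prompt(task_description, org_facts, user_prompt):
    """Inject organization-specific knowledge into the prompt, one of the
    two adaptation paths the abstract describes."""
    facts = "\n".join(f"- {fact}" for fact in org_facts)
    return (f"Task: {task_description}\n"
            f"Organization knowledge:\n{facts}\n"
            f"User: {user_prompt}")
```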
-
Publication No.: US20240242022A1
Publication Date: 2024-07-18
Application No.: US18156043
Application Date: 2023-01-18
Applicant: Salesforce, Inc.
Inventor: Victor Yee , Chien-Sheng Wu , Na Cheng , Alexander R. Fabbri , Zachary Alexander , Nicholas Feinig , Sameer Abhinkar , Shashank Harinath , Sitaram Asur , Jacob Nathaniel Huffman , Wojciech Kryscinski , Caiming Xiong
IPC: G06F40/174 , G06F16/34
CPC classification number: G06F40/174 , G06F16/345
Abstract: Embodiments described herein provide a structured conversation summarization framework. A user interface may be provided which allows an agent to perform a conversation with a customer, for example regarding resolving a customer support issue. Utterances by both the agent and customer may be stored, and at the end of the conversation, the utterances may be used to generate a structured summary. The structured summary may include components such as a general summary, an issue summary, and a resolution summary. Using neural network models and heuristics, each component of the summary may be automatically generated.
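The three summary components can be sketched as below. Cue-word matching is a toy stand-in for the neural network models and heuristics the abstract mentions; the cue lists and output fields are assumptions.

```python
def structured_summary(utterances):
    """Route (speaker, utterance) pairs into the abstract's three summary
    components: general, issue, and resolution."""
    issue = [u for speaker, u in utterances if speaker == "customer" and
             any(w in u.lower() for w in ("problem", "issue", "error", "broken"))]
    resolution = [u for speaker, u in utterances if speaker == "agent" and
                  any(w in u.lower() for w in ("try", "reset", "fixed", "resolved"))]
    return {
        "general": f"Conversation with {len(utterances)} turns.",
        "issue": " ".join(issue) or "No issue stated.",
        "resolution": " ".join(resolution) or "No resolution recorded.",
    }
```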
-
Publication No.: US20240143945A1
Publication Date: 2024-05-02
Application No.: US18161767
Application Date: 2023-01-30
Applicant: Salesforce, Inc.
Inventor: Shubham Mehrotra , Zachary Alexander , Shilpa Bhagavath , Gurkirat Singh , Shashank Harinath , Anuprit Kale
Abstract: Embodiments described herein provide a cross-lingual intent classification model that predicts in multiple languages without needing training data in all of those languages. For example, the data requirement for training can be reduced to just one utterance per intent label. Specifically, when an utterance is fed to the intent classification model, the model checks whether the utterance is similar to any of the example utterances provided for each intent. If any such utterance is found, the model returns the corresponding intent; otherwise, it returns out of domain (OOD).
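The matching step can be sketched as nearest-example lookup with a similarity cutoff. Cosine similarity and the threshold value are assumptions; in the described setting the embeddings would come from a multilingual encoder, which is stubbed out here as plain vectors.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def predict_intent(utterance_embedding, intent_examples, threshold=0.8):
    """intent_examples holds one (intent, example embedding) pair per intent,
    matching the abstract's one-utterance-per-label requirement."""
    intent, score = max(((i, cosine(utterance_embedding, v))
                         for i, v in intent_examples), key=lambda p: p[1])
    return intent if score >= threshold else "OOD"
```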
-
Publication No.: US20250086467A1
Publication Date: 2025-03-13
Application No.: US18427304
Application Date: 2024-01-30
Applicant: Salesforce, Inc.
Inventor: Victor Yee , Yiqiao Liu , Shashank Harinath , Fermin Ordaz , Adam Smith , Suhail Barot , Tuan Nguyen
IPC: G06N3/0895 , G06N3/0475
Abstract: The described method may include receiving user input indicating a configuration identifying a large language model (LLM) and a subset of documents indicated in the configuration as being available to a tenant. The method may include generating one or more vectorizations of content of the subset of documents. The method may include receiving a request to generate a generative response. The method may include generating the generative artificial intelligence (AI) prompt using the content to ground the generative AI prompt. The subset of documents may be identified based on a comparison between a vectorization of the request and the one or more vectorizations and based at least in part on a determination that a user associated with the tenant is permitted to access the subset of documents. The method may include presenting a response to the generative AI prompt, the response generated by the LLM using the generative AI prompt.
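The retrieval-and-grounding step can be sketched as below. The bag-of-words vectorization, the fixed vocabulary, and the prompt layout are toy assumptions; a real system would use an embedding model and the configured LLM, and the permission check here is a simple set lookup.

```python
import math

VOCAB = ("login", "password", "billing", "invoice", "reset")  # toy vocabulary

def embed(text):
    """Toy bag-of-words vectorization over VOCAB."""
    words = text.lower().split()
    return [words.count(w) for w in VOCAB]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def ground_prompt(request, docs, permitted_ids, top_k=1):
    """Retrieve the permitted documents most similar to the request and use
    their content to ground the generative AI prompt."""
    query_vec = embed(request)
    scored = sorted(((cosine(query_vec, embed(text)), doc_id, text)
                     for doc_id, text in docs.items() if doc_id in permitted_ids),
                    reverse=True)
    context = "\n".join(text for _, _, text in scored[:top_k])
    return f"Context:\n{context}\n\nQuestion: {request}"
```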
-
Publication No.: US20240411992A1
Publication Date: 2024-12-12
Application No.: US18335898
Application Date: 2023-06-15
Applicant: Salesforce, Inc.
Inventor: Shiva Kumar Pentyala , Prafulla Kumar Choubey , Shashank Harinath , Sitaram Asur , Chien-Sheng Jason Wu , Zachary Alexander , Caiming Xiong
IPC: G06F40/284 , G06N3/08
Abstract: Embodiments described herein provide a training framework for generative NLP models. Specifically, a few spans of the training input, e.g., a sequence of tokens representing a user-agent dialogue, may be randomly masked; a span can be one or more tokens, words, sentences, or paragraphs. These masked spans are replaced with embeddings generated by pre-trained large language models, and the resulting sequences are then used to train the NLP model.
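The masking step can be sketched as below, with a stubbed embedding function standing in for the pretrained LLM. The `("EMB", vector)` marker, fixed span length, and overlap behavior are assumptions of this sketch, not the patent's design.

```python
import random

def mask_spans(tokens, span_length, num_spans, embed_fn, rng):
    """Replace randomly chosen spans with embeddings from a (stubbed)
    pretrained LLM; ('EMB', vector) marks an embedded position. Spans may
    overlap in this simple sketch."""
    out = list(tokens)
    starts = rng.sample(range(len(tokens) - span_length + 1), num_spans)
    targets = []
    for start in starts:
        for i in range(start, start + span_length):
            out[i] = ("EMB", embed_fn(tokens[i]))
        targets.append((start, tokens[start:start + span_length]))
    return out, targets
```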
-
Publication No.: US20240411991A1
Publication Date: 2024-12-12
Application No.: US18330216
Application Date: 2023-06-06
Applicant: Salesforce, Inc.
Inventor: Shiva Kumar Pentyala , Prafulla Kumar Choubey , Shashank Harinath , Sitaram Asur , Chien-Sheng Jason Wu , Zachary Alexander , Caiming Xiong
IPC: G06F40/284
Abstract: Embodiments described herein provide a training framework for generative NLP models that operate on previously learnt knowledge from pretrained large language models. Specifically, to train an NLP model to generate a response to a user utterance (e.g., “resolve login issue”), document embeddings of IT support documents encoded by a pretrained LLM are fed to an NLP decoder together with a training dialogue (e.g., a dialogue with the chat agent on how to “resolve login issue”). The NLP decoder can thus be trained with a causal language modeling loss computed based on the predicted next token and the ground-truth token from the training dialogue.
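The loss itself can be written out as below: the average negative log-likelihood of the ground-truth next tokens. In the described framework the decoder would also be conditioned on the document embeddings from the pretrained LLM; that conditioning is omitted here, so this is only the loss computation.

```python
import math

def causal_lm_loss(predicted_distributions, target_token_ids):
    """Causal language modeling loss: average negative log-likelihood of the
    ground-truth next token under each predicted distribution."""
    nll = -sum(math.log(dist[t])
               for dist, t in zip(predicted_distributions, target_token_ids))
    return nll / len(target_token_ids)
```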
-