-
Publication Number: US20190258714A1
Publication Date: 2019-08-22
Application Number: US15978445
Filing Date: 2018-05-14
Applicant: salesforce.com, inc.
Inventor: Victor Zhong , Caiming Xiong
Abstract: A method for maintaining a dialogue state associated with a dialogue between a user and a digital system includes receiving, by a dialogue state tracker associated with the digital system, a representation of a user communication, updating, by the dialogue state tracker, the dialogue state, and providing a system response based on the updated dialogue state. The dialogue state is updated by evaluating, based on the representation of the user communication, a plurality of member scores corresponding to a plurality of ontology members of an ontology set, and selecting, based on the plurality of member scores, zero or more of the plurality of ontology members to add to or remove from the dialogue state. The dialogue state tracker includes a global-local encoder with a global branch and a local branch, the global branch having global trained parameters that are shared among the plurality of ontology members and the local branch having local trained parameters that are determined separately for each of the plurality of ontology members.
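A minimal sketch of the global-local scoring idea described in this abstract, assuming a single feature vector per user communication and a fixed ontology; the class and parameter names below are illustrative, not taken from the patent.

```python
import torch
import torch.nn as nn

class GlobalLocalScorer(nn.Module):
    """Scores each ontology member given an encoded user communication."""
    def __init__(self, num_members: int, d_in: int, d_hid: int):
        super().__init__()
        self.global_enc = nn.Linear(d_in, d_hid)  # parameters shared across all members
        self.local_enc = nn.ModuleList(
            [nn.Linear(d_in, d_hid) for _ in range(num_members)]  # per-member parameters
        )
        self.mix = nn.Parameter(torch.zeros(num_members))  # learned global/local mixing weights
        self.score = nn.Linear(d_hid, 1)

    def forward(self, utterance: torch.Tensor) -> torch.Tensor:
        """utterance: (batch, d_in) -> member scores in [0, 1]: (batch, num_members)."""
        g = torch.relu(self.global_enc(utterance))          # global branch
        scores = []
        for i, local in enumerate(self.local_enc):
            l = torch.relu(local(utterance))                # local branch for member i
            beta = torch.sigmoid(self.mix[i])
            scores.append(self.score(beta * g + (1 - beta) * l))
        return torch.sigmoid(torch.cat(scores, dim=-1))

# Members whose score crosses a threshold would be added to (or kept in) the dialogue state.
scorer = GlobalLocalScorer(num_members=30, d_in=128, d_hid=64)
selected = scorer(torch.randn(2, 128)) > 0.5   # boolean mask over ontology members
```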
-
Publication Number: US20190251168A1
Publication Date: 2019-08-15
Application Number: US15974118
Filing Date: 2018-05-08
Applicant: salesforce.com, inc.
Inventor: Bryan McCann , Nitish Shirish Keskar , Caiming Xiong , Richard Socher
Abstract: Approaches for multitask learning as question answering include an input layer for encoding a context and a question, a self-attention based transformer including an encoder and a decoder, a first bi-directional long short-term memory (biLSTM) for further encoding an output of the encoder, a long short-term memory (LSTM) for generating a context-adjusted hidden state from the output of the decoder and a hidden state, an attention network for generating first attention weights based on an output of the first biLSTM and an output of the LSTM, a vocabulary layer for generating a distribution over a vocabulary, a context layer for generating a distribution over the context, and a switch for generating a weighting between the distributions over the vocabulary and the context, generating a composite distribution based on the weighting, and selecting a word of an answer using the composite distribution.
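A hedged sketch of the switch described in this abstract: a learned gate mixes the vocabulary distribution with a pointer distribution over context tokens. Shapes, names, and the scatter-based copy step are assumptions for illustration.

```python
import torch
import torch.nn as nn

class VocabContextSwitch(nn.Module):
    def __init__(self, d_hidden: int, vocab_size: int):
        super().__init__()
        self.vocab_head = nn.Linear(d_hidden, vocab_size)
        self.gate = nn.Linear(d_hidden, 1)

    def forward(self, hidden, context_attn, context_ids):
        """hidden: (B, H) decoder state; context_attn: (B, Lc) attention over context
        tokens; context_ids: (B, Lc) long tensor of the context tokens' vocabulary ids."""
        p_vocab = torch.softmax(self.vocab_head(hidden), dim=-1)              # (B, V)
        # scatter the pointer distribution over context tokens into vocabulary space
        p_context = torch.zeros_like(p_vocab).scatter_add_(1, context_ids, context_attn)
        gamma = torch.sigmoid(self.gate(hidden))                              # switch weighting
        p_final = gamma * p_vocab + (1 - gamma) * p_context                   # composite distribution
        return p_final   # the answer word can be selected as p_final.argmax(dim=-1)
```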
-
Publication Number: US12198060B2
Publication Date: 2025-01-14
Application Number: US17006570
Filing Date: 2020-08-28
Applicant: Salesforce.com, Inc.
Inventor: Junwen Bai , Weiran Wang , Yingbo Zhou , Caiming Xiong
IPC: G06N3/088 , G06F18/21 , G06F18/214 , G06N3/049
Abstract: Embodiments described herein combine masked reconstruction and predictive coding. Specifically, unlike contrastive learning, the mutual information between past states and future states is directly estimated. Context information can also be captured directly via shifted masked reconstruction: unlike standard masked reconstruction, the target reconstructed observations are shifted slightly toward the future to incorporate more predictability. The estimated mutual information and the shifted masked reconstruction loss are then combined as the loss function used to update the neural model.
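A rough sketch of how the two loss terms named in this abstract could be combined, assuming frame-level features; the critic, shift size, mutual-information estimator, and weighting below are illustrative assumptions rather than the patented formulation.

```python
import math
import torch
import torch.nn as nn

def shifted_masked_reconstruction_loss(recon, inputs, mask, shift=2):
    """recon/inputs: (B, T, D); mask: (B, T) float, 1.0 where the frame was masked.
    Targets are the inputs shifted `shift` frames toward the future."""
    target = inputs[:, shift:, :]
    pred = recon[:, :-shift, :]
    m = mask[:, :-shift].unsqueeze(-1)
    return ((pred - target) ** 2 * m).sum() / m.sum().clamp(min=1.0)

def mi_estimate(past, future, critic: nn.Module):
    """Donsker-Varadhan style estimate of mutual information between paired
    past/future states; `critic` maps a concatenated pair (B, 2D) to a score (B, 1)."""
    joint = critic(torch.cat([past, future], dim=-1)).mean()
    shuffled = future[torch.randperm(future.size(0))]
    marginal = critic(torch.cat([past, shuffled], dim=-1)).squeeze(-1)
    return joint - (torch.logsumexp(marginal, dim=0) - math.log(marginal.size(0)))

def combined_loss(recon, inputs, mask, past, future, critic, alpha=1.0):
    # minimize reconstruction error while maximizing the estimated mutual information
    return shifted_masked_reconstruction_loss(recon, inputs, mask) - alpha * mi_estimate(past, future, critic)
```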
-
Publication Number: US11797825B2
Publication Date: 2023-10-24
Application Number: US17331337
Filing Date: 2021-05-26
Applicant: salesforce.com, inc.
Inventor: Kazuma Hashimoto , Caiming Xiong , Richard Socher
IPC: G06N3/04 , G06N3/08 , G06F40/30 , G06F40/205 , G06F40/216 , G06F40/253 , G06F40/284 , G06N3/063 , G10L15/18 , G10L25/30 , G10L15/16 , G06F40/00 , G06N3/084 , G06N3/044 , G06N3/045 , G06N3/047
CPC classification number: G06N3/04 , G06F40/205 , G06F40/216 , G06F40/253 , G06F40/284 , G06F40/30 , G06N3/044 , G06N3/045 , G06N3/047 , G06N3/063 , G06N3/08 , G06N3/084 , G06F40/00 , G10L15/16 , G10L15/18 , G10L25/30
Abstract: The technology disclosed provides a so-called “joint many-task neural network model” that solves a variety of increasingly complex natural language processing (NLP) tasks using a growing depth of layers in a single end-to-end model. The model is successively trained by considering linguistic hierarchies, directly connecting word representations to all model layers, explicitly using predictions from lower tasks, and applying a so-called “successive regularization” technique to prevent catastrophic forgetting. Three examples of lower-level model layers are a part-of-speech (POS) tagging layer, a chunking layer, and a dependency parsing layer. Two examples of higher-level model layers are a semantic relatedness layer and a textual entailment layer. The model achieves state-of-the-art results on chunking, dependency parsing, semantic relatedness, and textual entailment.
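A rough sketch of two ideas from this abstract, under assumed dimensions and names: a task layer that receives the word representations together with the lower task's label predictions, and a successive-regularization penalty that keeps parameters near a snapshot from the previous training stage.

```python
import torch
import torch.nn as nn

class StackedTaskLayer(nn.Module):
    """One task layer: word representations are shortcut-connected into the layer
    together with the label predictions produced by the task below it."""
    def __init__(self, d_word: int, d_lower_labels: int, d_hid: int, n_labels: int):
        super().__init__()
        self.rnn = nn.LSTM(d_word + d_lower_labels, d_hid, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * d_hid, n_labels)

    def forward(self, words, lower_label_probs):
        """words: (B, T, d_word); lower_label_probs: (B, T, d_lower_labels)."""
        h, _ = self.rnn(torch.cat([words, lower_label_probs], dim=-1))
        return torch.softmax(self.out(h), dim=-1)   # fed to the next layer up the hierarchy

def successive_regularization(model: nn.Module, previous_params: dict, delta: float = 1e-2):
    """L2 penalty keeping current parameters close to a snapshot taken before the
    current training stage, which discourages catastrophic forgetting."""
    penalty = 0.0
    for name, p in model.named_parameters():
        if name in previous_params:
            penalty = penalty + ((p - previous_params[name]) ** 2).sum()
    return delta * penalty
```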
-
Publication Number: US11790894B2
Publication Date: 2023-10-17
Application Number: US17202077
Filing Date: 2021-03-15
Applicant: salesforce.com, inc.
Inventor: Yixin Mao , Zachary Alexander , Victor Winslow Yee , Joseph R. Zeimen , Na Cheng , Chien-Sheng Wu , Wenhao Liu , Caiming Xiong
CPC classification number: G10L15/16 , G10L15/063 , G10L15/08 , G10L15/22 , H04L51/02 , G06F16/3344 , G06F40/56
Abstract: A system uses conversation engines to process natural language requests and conduct automatic conversations with users. The system generates responses to users in an online conversation and ranks the generated responses. To do so, it generates a context vector based on the sequence of utterances in the conversation and a response vector for each generated response, then ranks the responses by comparing the context vector with the response vectors. The system uses a machine learning based model built on a pretrained neural network that supports multiple languages. The context of an utterance is determined from the utterances in the conversation, responses are generated and ranked based on that context, and the ranked responses are used to respond to the user.
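A minimal sketch of the ranking step, assuming the conversation context and the candidate responses have already been encoded into vectors by some multilingual sentence encoder (not specified here); ranking by cosine similarity is an illustrative choice.

```python
import torch
import torch.nn.functional as F

def rank_responses(context_vec: torch.Tensor, response_vecs: torch.Tensor) -> torch.Tensor:
    """context_vec: (D,) encoding of the utterance sequence; response_vecs: (N, D)
    encodings of candidate responses. Returns candidate indices, best match first."""
    sims = F.cosine_similarity(context_vec.unsqueeze(0), response_vecs, dim=-1)  # (N,)
    return torch.argsort(sims, descending=True)

# Random vectors stand in for the outputs of a multilingual sentence encoder.
ranking = rank_responses(torch.randn(256), torch.randn(5, 256))
```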
-
Publication Number: US11769013B2
Publication Date: 2023-09-26
Application Number: US16680323
Filing Date: 2019-11-11
Applicant: salesforce.com, inc.
Inventor: Michael Machado , James Douglas Harrison , Caiming Xiong , Xinyi Yang , Thomas Archie Cook , Roojuta Lalani , Jean-Marc Soumet , Karl Ryszard Skucha , Juan Rodriguez , Manju Vijayakumar , Vishal Motwani , Tian Xie , Bryan McCann , Nitish Shirish Keskar , Zhihao Zou , Chitra Gulabrani , Minal Khodani , Adarsha Badarinath , Rohiniben Thakar , Srikanth Kollu , Kevin Schoen , Qiong Liu , Amit Hetawal , Kevin Zhang , Johnson Liu , Rafael Amsili
CPC classification number: G06F40/30 , G06F40/295 , G06N3/04 , G06N3/08 , H04L51/02
Abstract: A multi-tenant system performs custom configuration of a tenant-specific chatbot to process and act upon natural language requests, without requiring tenant-specific training. The multi-tenant system provides a user interface for configuring a tenant-specific set of permitted actions and determines a set of example phrases for each of the selected permitted actions. When the multi-tenant system receives a natural language request from a user, it identifies the action that the user wants to perform by using a neural network to compare the request with the example phrases and find the example phrase that matches the request. The multi-tenant system then performs the action corresponding to the matching example phrase.
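A hedged sketch of the matching step: embed the user request and each configured example phrase, pick the closest phrase, and return its action. The embed function below is a stand-in for a trained encoder, and the action names are hypothetical; none of this is an API from the patent.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: a real system would call a trained sentence encoder here.
    rng = np.random.default_rng(abs(hash(text)) % (2 ** 32))
    return rng.standard_normal(64)

def match_action(request: str, examples: dict) -> str:
    """`examples` maps a permitted action name to its configured example phrases;
    returns the action whose example phrase is most similar to the request."""
    q = embed(request)
    best_action, best_score = None, -np.inf
    for action, phrases in examples.items():
        for phrase in phrases:
            e = embed(phrase)
            score = float(q @ e / (np.linalg.norm(q) * np.linalg.norm(e)))
            if score > best_score:
                best_action, best_score = action, score
    return best_action

action = match_action("show me my open cases",
                      {"list_cases": ["show open cases", "what cases do I have"],
                       "create_lead": ["add a new lead"]})
```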
-
Publication Number: US11669745B2
Publication Date: 2023-06-06
Application Number: US17080276
Filing Date: 2020-10-26
Applicant: salesforce.com, inc.
Inventor: Chetan Ramaiah , Peng Tang , Caiming Xiong
IPC: G06F18/21 , G06N3/082 , G06F18/214
CPC classification number: G06F18/2178 , G06F18/2155 , G06N3/082
Abstract: A method for generating a neural network for detecting one or more objects in images includes generating one or more self-supervised proposal learning losses based on one or more proposal features and corresponding proposal feature predictions. One or more consistency-based proposal learning losses are generated based on noisy proposal feature predictions and the corresponding proposal predictions without noise. A combined loss is generated using the one or more self-supervised proposal learning losses and the one or more consistency-based proposal learning losses. The neural network is updated based on the combined loss.
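A sketch of the combined loss described in this abstract, with assumed tensor shapes and loss choices (mean-squared error for the self-supervised term, KL divergence for the consistency term); the weights and shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def combined_proposal_loss(proposal_feats, proposal_feat_preds,
                           noisy_preds, clean_preds, w_ssl=1.0, w_cons=1.0):
    """proposal_feats/proposal_feat_preds: (N, D); noisy_preds/clean_preds: (N, C) logits."""
    # self-supervised proposal learning loss: predict the proposal features themselves
    ssl_loss = F.mse_loss(proposal_feat_preds, proposal_feats)
    # consistency loss: predictions from noisy proposals should match the no-noise predictions
    cons_loss = F.kl_div(noisy_preds.log_softmax(dim=-1),
                         clean_preds.softmax(dim=-1), reduction="batchmean")
    return w_ssl * ssl_loss + w_cons * cons_loss   # combined loss used to update the network
```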
-
Publication Number: US20230162490A1
Publication Date: 2023-05-25
Application Number: US17589725
Filing Date: 2022-01-31
Applicant: salesforce.com, inc.
Inventor: Shu Zhang , Junnan Li , Ran Xu , Caiming Xiong , Chetan Ramaiah
IPC: G06V10/776 , G06V10/74 , G06F40/284 , G06F40/166 , G06F40/126 , G06V10/80 , G06F16/583 , G06F16/56
CPC classification number: G06V10/776 , G06V10/761 , G06F40/284 , G06F40/166 , G06F40/126 , G06V10/806 , G06F16/5846 , G06F16/56
Abstract: Embodiments described herein provide a CROss-Modal Distribution Alignment (CROMDA) model for vision-language pretraining, which can be used for retrieval downstream tasks. In the CROMDA model, global cross-modal representations are aligned within each uni-modality. Specifically, a uni-modal global similarity between an image/text and the image/text feature queue is computed, and a softmax-normalized distribution is generated based on the computed similarity. The distribution thus takes advantage of the global structure of the queue. CROMDA then aligns the two distributions and learns a modality-invariant global representation. In this way, CROMDA obtains an invariance property in each modality: images with similar text representations should themselves be similar, and vice versa.
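A hedged sketch of the distribution-alignment idea: compute each sample's similarity to a feature queue within its own modality, softmax-normalize both sides, and penalize divergence between the image-side and text-side distributions. The temperature and the symmetric-KL choice are assumptions, not the patented formulation.

```python
import torch
import torch.nn.functional as F

def cross_modal_distribution_alignment(img_feat, txt_feat, img_queue, txt_queue, tau=0.07):
    """img_feat/txt_feat: (B, D) L2-normalized features for paired images and texts;
    img_queue/txt_queue: (K, D) feature queues for the corresponding modality."""
    p_img = torch.softmax(img_feat @ img_queue.t() / tau, dim=-1)   # (B, K) image-side distribution
    p_txt = torch.softmax(txt_feat @ txt_queue.t() / tau, dim=-1)   # (B, K) text-side distribution
    # a symmetric KL term pulls the two softmax-normalized distributions together
    return 0.5 * (F.kl_div(p_img.log(), p_txt, reduction="batchmean")
                  + F.kl_div(p_txt.log(), p_img, reduction="batchmean"))
```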
-
Publication Number: US11657233B2
Publication Date: 2023-05-23
Application Number: US17673709
Filing Date: 2022-02-16
Applicant: salesforce.com, inc.
Inventor: Nitish Shirish Keskar , Bryan McCann , Richard Socher , Caiming Xiong
IPC: G06F16/332 , G06F40/30 , G06F40/284 , G06N3/08
CPC classification number: G06F40/30 , G06F40/284 , G06F16/3329 , G06N3/08
Abstract: Systems and methods for unifying question answering and text classification via span extraction include a preprocessor for preparing a source text and an auxiliary text based on a task type of a natural language processing (NLP) task, an encoder for receiving the source text and the auxiliary text from the preprocessor and generating an encoded representation of a combination of the source text and the auxiliary text, and a span-extractive decoder for receiving the encoded representation and identifying a span of text within the source text that is a result of the NLP task. The task type is one of entailment, classification, or regression. In some embodiments, the source text includes one or more of text received as input when the task type is entailment, a list of classifications when the task type is entailment or classification, or a list of similarity options when the task type is regression.
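A minimal sketch of span-extractive decoding over an encoded representation, assuming the encoder output is already available; start and end scores over source-text positions pick the answer span. Shapes and names are assumptions.

```python
import torch
import torch.nn as nn

class SpanDecoder(nn.Module):
    """Predicts start/end positions of the answer span within the source text."""
    def __init__(self, d_hidden: int):
        super().__init__()
        self.start = nn.Linear(d_hidden, 1)
        self.end = nn.Linear(d_hidden, 1)

    def forward(self, encoded_source: torch.Tensor):
        """encoded_source: (B, L, H) -> (start_idx, end_idx), each of shape (B,)."""
        s = self.start(encoded_source).squeeze(-1)   # (B, L) start scores
        e = self.end(encoded_source).squeeze(-1)     # (B, L) end scores
        return s.argmax(dim=-1), e.argmax(dim=-1)

# For classification or regression tasks, the source text would enumerate the class
# labels or similarity options, so the extracted span names the prediction directly.
```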
-
Publication Number: US11600194B2
Publication Date: 2023-03-07
Application Number: US16006691
Filing Date: 2018-06-12
Applicant: salesforce.com, inc.
Inventor: Bryan McCann , Nitish Shirish Keskar , Caiming Xiong , Richard Socher
IPC: G09B7/02 , G06F16/9032 , G06F40/30 , G06F40/284 , G06N3/084 , G06F40/35 , G06N3/082 , G06N5/04 , G06N3/04 , G06F16/34 , G06F40/216
Abstract: Approaches for natural language processing include a multi-layer encoder for encoding words from a context and words from a question in parallel, a multi-layer decoder for decoding the encoded context and the encoded question, a pointer generator for generating distributions over the words from the context, the words from the question, and words in a vocabulary based on an output from the decoder, and a switch. The switch generates a weighting of the distributions over the words from the context, the words from the question, and the words in the vocabulary, generates a composite distribution based on that weighting, and selects words for inclusion in an answer using the composite distribution.
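A sketch of the three-way switch, under assumed shapes and names: a learned weighting mixes the distribution over context words, the distribution over question words, and the vocabulary distribution into one composite distribution.

```python
import torch
import torch.nn as nn

class ThreeWaySwitch(nn.Module):
    def __init__(self, d_hidden: int, vocab_size: int):
        super().__init__()
        self.vocab_head = nn.Linear(d_hidden, vocab_size)
        self.switch = nn.Linear(d_hidden, 3)

    def forward(self, hidden, ctx_attn, ctx_ids, q_attn, q_ids):
        """hidden: (B, H) decoder output; *_attn: (B, L) attention over context/question
        tokens; *_ids: (B, L) long tensors of those tokens' vocabulary ids."""
        p_vocab = torch.softmax(self.vocab_head(hidden), dim=-1)               # (B, V)
        p_ctx = torch.zeros_like(p_vocab).scatter_add_(1, ctx_ids, ctx_attn)   # copy from context
        p_q = torch.zeros_like(p_vocab).scatter_add_(1, q_ids, q_attn)         # copy from question
        w = torch.softmax(self.switch(hidden), dim=-1)                         # (B, 3) weighting
        p = w[:, 0:1] * p_ctx + w[:, 1:2] * p_q + w[:, 2:3] * p_vocab          # composite distribution
        return p   # answer words can be selected as p.argmax(dim=-1)
```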