1.
Publication No.: US11797825B2
Publication Date: 2023-10-24
Application No.: US17331337
Filing Date: 2021-05-26
Applicant: salesforce.com, inc.
Inventor: Kazuma Hashimoto , Caiming Xiong , Richard Socher
IPC: G06N3/04 , G06N3/08 , G06F40/30 , G06F40/205 , G06F40/216 , G06F40/253 , G06F40/284 , G06N3/063 , G10L15/18 , G10L25/30 , G10L15/16 , G06F40/00 , G06N3/084 , G06N3/044 , G06N3/045 , G06N3/047
CPC classification number: G06N3/04 , G06F40/205 , G06F40/216 , G06F40/253 , G06F40/284 , G06F40/30 , G06N3/044 , G06N3/045 , G06N3/047 , G06N3/063 , G06N3/08 , G06N3/084 , G06F40/00 , G10L15/16 , G10L15/18 , G10L25/30
Abstract: The technology disclosed provides a so-called “joint many-task neural network model” to solve a variety of increasingly complex natural language processing (NLP) tasks using growing depth of layers in a single end-to-end model. The model is successively trained by considering linguistic hierarchies, directly connecting word representations to all model layers, explicitly using predictions in lower tasks, and applying a so-called “successive regularization” technique to prevent catastrophic forgetting. Three examples of lower-level model layers are the part-of-speech (POS) tagging layer, the chunking layer, and the dependency parsing layer. Two examples of higher-level model layers are the semantic relatedness layer and the textual entailment layer. The model achieves state-of-the-art results on chunking, dependency parsing, semantic relatedness, and textual entailment.
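The successive regularization idea can be illustrated with a minimal sketch, assuming a simple L2 penalty: while training a higher-level task, the shared parameters are penalized for drifting from the snapshot taken after the previous (lower-level) task finished, which discourages catastrophic forgetting. All names and values here are illustrative, not taken from the patent.

```python
def successive_reg_loss(task_loss, params, prev_params, lambda_reg=0.1):
    """Task loss plus an L2 penalty tying params to a pre-task snapshot."""
    drift = sum((p - q) ** 2 for p, q in zip(params, prev_params))
    return task_loss + lambda_reg * drift

# Snapshot of shared parameters after the lower task (e.g. POS tagging)
prev = [0.5, -1.0, 2.0]
# Parameters after a few updates on the higher task (e.g. chunking)
curr = [0.6, -1.1, 2.0]
loss = successive_reg_loss(1.25, curr, prev, lambda_reg=0.1)
```

The penalty shrinks toward zero as the shared parameters stay close to their pre-task values, so lower-task knowledge is retained while the higher task is learned.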
2.
Publication No.: US20230055188A1
Publication Date: 2023-02-23
Application No.: US17565215
Filing Date: 2021-12-29
Applicant: salesforce.com, inc.
Inventor: Xi Ye , Semih Yavuz , Kazuma Hashimoto , Yingbo Zhou
IPC: G06N5/04 , G06N5/02 , G06F16/2457
Abstract: Embodiments described herein provide a question answering approach that answers a question by generating an executable logical form. First, a ranking model selects a set of good logical forms from a pool of logical forms obtained by searching over a knowledge graph. The selected logical forms are good in the sense that they are close to (or, in some cases, exactly match) the intents in the question and the final desired logical form. Next, a generation model conditioned on the question as well as the selected logical forms generates the target logical form and executes it to obtain the final answer. For example, at the inference stage, when a question is received, a matching logical form is identified from the question, from which the final answer can be generated using the node associated with the matching logical form in the knowledge base.
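The rank-then-generate pipeline can be sketched as follows: candidate logical forms found by searching the knowledge graph are scored against the question, the top-k survivors are kept, and a generation model would then be conditioned on the question plus those survivors. The token-overlap scorer and the toy candidates below are stand-ins, not the patent's ranking model.

```python
import re

def _tokens(s):
    """Lowercase alphanumeric tokens of a string."""
    return set(t for t in re.split(r"[^a-z0-9]+", s.lower()) if t)

def rank_logical_forms(question, candidates, k=2):
    """Keep the k candidates sharing the most tokens with the question."""
    q = _tokens(question)
    return sorted(candidates, key=lambda lf: -len(q & _tokens(lf)))[:k]

candidates = ["capital_of(france)", "population_of(france)", "capital_of(germany)"]
top = rank_logical_forms("what is the capital of france", candidates, k=2)
# A sequence-to-sequence generator would next take (question, top) as input
# and emit the target logical form to execute against the knowledge base.
```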
3.
Publication No.: US20220374459A1
Publication Date: 2022-11-24
Application No.: US17533613
Filing Date: 2021-11-23
Applicant: salesforce.com, inc.
Inventor: Ye Liu , Kazuma Hashimoto , Yingbo Zhou , Semih Yavuz , Caiming Xiong
IPC: G06F16/335 , G06F16/332 , G06F16/31
Abstract: Embodiments described herein provide dense hierarchical retrieval for open-domain question answering over a corpus of documents using a document-level and a passage-level dense retrieval model. Specifically, each document is viewed as a structural collection that has sections, subsections, and paragraphs. Each document may be split into short passages, where a document-level retrieval model and a passage-level retrieval model may be applied to return a smaller set of filtered texts. Top documents may be identified after encoding the question and the documents and determining document relevance scores against the encoded question. Thereafter, a set of top passages is further identified based on encoding of the passages and determining passage relevance scores against the encoded question. The document and passage relevance scores may be used in combination to determine a final retrieval ranking for the documents containing the set of top passages.
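The score-combination step can be illustrated with a minimal sketch. The linear blend with weight `alpha` is an assumption for illustration; the abstract only states that the two scores are used in combination.

```python
def final_ranking(doc_scores, passages, alpha=0.5):
    """Rank passages by a blend of their own score and their document's score."""
    combined = {
        pid: alpha * doc_scores[doc_id] + (1 - alpha) * p_score
        for pid, (doc_id, p_score) in passages.items()
    }
    return sorted(combined, key=combined.get, reverse=True)

doc_scores = {"d1": 0.9, "d2": 0.4}           # document relevance scores
passages = {"d1-p0": ("d1", 0.2),             # passage id -> (parent doc, score)
            "d1-p1": ("d1", 0.7),
            "d2-p0": ("d2", 0.8)}
ranking = final_ranking(doc_scores, passages, alpha=0.5)
```

A strong passage in a weak document ("d2-p0") can still outrank a weak passage in a strong document ("d1-p0"), which is the point of blending both levels.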
4.
Publication No.: US11763090B2
Publication Date: 2023-09-19
Application No.: US16718186
Filing Date: 2019-12-18
Applicant: salesforce.com, inc.
Inventor: Tian Xie , Kazuma Hashimoto , Xinyi Yang , Caiming Xiong
IPC: G06F40/00 , G06F40/30 , G06F40/216 , G06N5/04 , G06F18/2413 , G06F18/214
CPC classification number: G06F40/30 , G06F18/2148 , G06F18/2413 , G06F40/216 , G06N5/04
Abstract: An online system that allows users to interact with it using expressions in natural language form includes an intent inference module allowing it to infer the intent represented by a user expression. The intent inference module has a set of possible intents, along with a small set of example natural language expressions known to represent each intent. When a user interacts with the system using a natural language expression for which the intent is not already known, the intent inference module applies a natural language inference model to compute scores indicating whether the user expression textually entails the various example natural language expressions. Based on the scores, the intent inference module determines the intent that is most applicable for the expression. If an intent cannot be determined with sufficient confidence, the intent inference module may further attempt to determine whether the various example natural language expressions textually entail the user expression.
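The scoring step can be sketched with a pluggable scorer: for each known intent, the user expression is scored against that intent's example expressions, and the best-scoring intent wins if it clears a confidence threshold. The Jaccard-overlap `toy_score` merely stands in for the natural language inference model; all names here are illustrative.

```python
def infer_intent(user_expr, intent_examples, score_fn, threshold=0.5):
    """Return the best-scoring intent, or None if no score reaches threshold."""
    best_intent, best_score = None, 0.0
    for intent, examples in intent_examples.items():
        s = max(score_fn(user_expr, ex) for ex in examples)
        if s > best_score:
            best_intent, best_score = intent, s
    return best_intent if best_score >= threshold else None

def toy_score(a, b):
    """Jaccard token overlap, a stand-in for an NLI entailment score."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

examples = {
    "order_status": ["where is my order", "track my order"],
    "refund": ["i want a refund"],
}
intent = infer_intent("where is my order now", examples, toy_score)
```

Returning `None` below the threshold mirrors the abstract's fallback: when no intent is sufficiently confident, the system can try scoring in the reverse entailment direction.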
5.
Publication No.: US20220103491A1
Publication Date: 2022-03-31
Application No.: US17037554
Filing Date: 2020-09-29
Applicant: salesforce.com, inc.
Inventor: Xinyi Yang , Tian Xie , Caiming Xiong , Wenhao Liu , Huan Wang , Kazuma Hashimoto , Jin Qu , Feihong Wu , Yingbo Zhou
Abstract: A conversation engine performs conversations with users using chatbots customized for performing a set of tasks that can be performed using an online system. The conversation engine loads a chatbot configuration that specifies the behavior of a chatbot including the tasks that can be performed by the chatbot, the types of entities relevant to each task, and so on. The conversation may be voice based and use natural language. The conversation engine may load different chatbot configurations to implement different chatbots. The conversation engine receives a conversation engine configuration that specifies the behavior of the conversation engine across chatbots. The system may be a multi-tenant system that allows customization of the chatbots for each tenant.
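A minimal sketch of loading such a chatbot configuration, which declares the tasks a chatbot can perform and the entity types relevant to each task. The JSON field names (`bot`, `tasks`, `entities`) are illustrative assumptions; the abstract does not specify a configuration format.

```python
import json

def load_chatbot(config_json):
    """Map each task name to the entity types the task needs."""
    cfg = json.loads(config_json)
    return {task["name"]: task["entities"] for task in cfg["tasks"]}

config_json = json.dumps({
    "bot": "support_bot",
    "tasks": [
        {"name": "book_meeting", "entities": ["date", "time", "attendee"]},
        {"name": "check_status", "entities": ["order_id"]},
    ],
})
tasks = load_chatbot(config_json)
```

In a multi-tenant setting, each tenant would supply its own such configuration, and the conversation engine would load a different one per chatbot.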
6.
Publication No.: US20210383212A1
Publication Date: 2021-12-09
Application No.: US17105262
Filing Date: 2020-11-25
Applicant: salesforce.com, inc.
Abstract: Embodiments described herein provide safe policy improvement (SPI) in a batch reinforcement learning framework for a task-oriented dialogue. Specifically, a batch reinforcement learning framework for dialogue policy learning is provided, which improves the performance of the dialogue and learns to shape a reward that reasons about the intention behind the human response rather than just imitating the human demonstration.
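The "safe" criterion in safe policy improvement can be sketched as an acceptance test: a candidate dialogue policy replaces the baseline (behavior) policy only when its estimated value exceeds the baseline's by a safety margin. The margin and the scalar value estimates are illustrative assumptions, not details from the patent.

```python
def accept_policy(candidate_value, baseline_value, margin=0.05):
    """Accept the candidate only if it beats the baseline by the margin."""
    return candidate_value >= baseline_value + margin

keep_new = accept_policy(0.82, 0.70)    # clear improvement: accepted
keep_risky = accept_policy(0.72, 0.70)  # within the margin: rejected
```

Rejecting near-ties is what makes the improvement "safe" in batch settings, where value estimates computed from fixed logged dialogues carry estimation error.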
7.
Publication No.: US20210279551A1
Publication Date: 2021-09-09
Application No.: US17331337
Filing Date: 2021-05-26
Applicant: salesforce.com, inc.
Inventor: Kazuma Hashimoto , Caiming Xiong , Richard Socher
IPC: G06N3/04 , G06N3/08 , G06F40/30 , G06F40/205 , G06F40/216 , G06F40/253 , G06F40/284 , G06N3/063
Abstract: The technology disclosed provides a so-called “joint many-task neural network model” to solve a variety of increasingly complex natural language processing (NLP) tasks using growing depth of layers in a single end-to-end model. The model is successively trained by considering linguistic hierarchies, directly connecting word representations to all model layers, explicitly using predictions in lower tasks, and applying a so-called “successive regularization” technique to prevent catastrophic forgetting. Three examples of lower-level model layers are the part-of-speech (POS) tagging layer, the chunking layer, and the dependency parsing layer. Two examples of higher-level model layers are the semantic relatedness layer and the textual entailment layer. The model achieves state-of-the-art results on chunking, dependency parsing, semantic relatedness, and textual entailment.
8.
Publication No.: US11741142B2
Publication Date: 2023-08-29
Application No.: US17589522
Filing Date: 2022-01-31
Applicant: salesforce.com, inc.
Inventor: Haopeng Zheng , Semih Yavuz , Wojciech Kryscinski , Kazuma Hashimoto , Yingbo Zhou
IPC: G06F16/34 , G06F40/166 , G06N20/00 , G06F40/117 , G06F40/279
CPC classification number: G06F16/345 , G06F40/166 , G06N20/00 , G06F40/117 , G06F40/279
Abstract: Embodiments described herein provide document summarization systems and methods that utilize fine-tuning of pre-trained abstractive summarization models to produce summaries that more faithfully track the content of the documents. Such abstractive summarization models may be pre-trained using a corpus consisting of pairs of articles and associated summaries. For each article-summary pair, a pseudo label or control code is generated and represents a faithfulness of the summary with respect to the article. The pre-trained model is then fine-tuned based on the article-summary pairs and the corresponding control codes. The resulting fine-tuned models then provide improved faithfulness in document summarization tasks.
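Attaching a faithfulness pseudo label (control code) to each article–summary pair before fine-tuning can be sketched as below. The token-coverage heuristic is an assumption for illustration; the abstract only states that a label representing the summary's faithfulness is generated per pair.

```python
def control_code(article, summary):
    """Label the pair faithful if every summary token occurs in the article."""
    art = set(article.lower().split())
    summ = summary.lower().split()
    return "<faithful>" if all(t in art for t in summ) else "<hallucinated>"

pairs = [
    ("the cat sat on the mat", "the cat sat"),
    ("the cat sat on the mat", "the dog ran"),
]
codes = [control_code(a, s) for a, s in pairs]
# Fine-tuning input would then prepend the code, e.g. f"{code} {article}",
# so the model learns to condition its output style on the faithfulness label.
```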
9.
Publication No.: US11580977B2
Publication Date: 2023-02-14
Application No.: US17037556
Filing Date: 2020-09-29
Applicant: salesforce.com, inc.
Inventor: Xinyi Yang , Tian Xie , Caiming Xiong , Wenhao Liu , Huan Wang , Kazuma Hashimoto , Yingbo Zhou , Xugang Ye , Jin Qu , Feihong Wu
Abstract: A conversation engine performs conversations with users using chatbots customized for performing a set of tasks that can be performed using an online system. The conversation engine loads a chatbot configuration that specifies the behavior of a chatbot including the tasks that can be performed by the chatbot, the types of entities relevant to each task, and so on. The conversation may be voice based and use natural language. The conversation engine may load different chatbot configurations to implement different chatbots. The conversation engine receives a conversation engine configuration that specifies the behavior of the conversation engine across chatbots. The system may be a multi-tenant system that allows customization of the chatbots for each tenant.
10.
Publication No.: US20220383159A1
Publication Date: 2022-12-01
Application No.: US17534085
Filing Date: 2021-11-23
Applicant: salesforce.com, inc.
Inventor: Semih Yavuz , Kazuma Hashimoto , Yingbo Zhou
IPC: G06N5/04 , G06F40/40 , G06F40/284
Abstract: Embodiments described herein provide a fusion-in-decoder (FID) based model (referred to as “PATHID”) for open-domain multi-hop question answering. Specifically, PATHID addresses the gap between the general behavior of the FID model on single-hop and multi-hop question answering, and provides more transparency into the reasoning path. In addition to answer generation, PATHID explicitly models the full reasoning path to resolve the answer with a generative sequence-to-sequence model.
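Modeling the full reasoning path in the generation target can be sketched with simple delimiter tokens: instead of emitting the answer alone, the model is trained to emit the hops of the reasoning path followed by the answer. The `<path>`, `<hop>`, and `<answer>` markers and the example hop titles are illustrative assumptions.

```python
def linearize_target(reasoning_path, answer):
    """Serialize hop titles and the final answer into one target sequence."""
    hops = " <hop> ".join(reasoning_path)
    return f"<path> {hops} <answer> {answer}"

target = linearize_target(["Barack Obama", "Harvard Law School"], "1991")
```

Training a sequence-to-sequence model on such targets makes the reasoning path explicit in the output, which is the transparency gain the abstract describes over answer-only generation.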