-
Publication number: US20160350653A1
Publication date: 2016-12-01
Application number: US15170884
Filing date: 2016-06-01
Applicant: salesforce.com, inc.
Inventor: Richard Socher , Ankit Kumar , Ozan Irsoy , Mohit Iyyer , Caiming Xiong , Stephen Merity , Romain Paulus
CPC classification number: G06N5/04 , G06N3/0445
Abstract: A novel unified neural network framework, the dynamic memory network, is disclosed. This unified framework reduces every task in natural language processing to a question answering problem over an input sequence. Inputs and questions are used to create and connect deep memory sequences. Answers are then generated based on dynamically retrieved memories.
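As an illustration of the pipeline the abstract describes (encode inputs and a question, build a memory by repeatedly attending over the encoded facts, then generate an answer from the final memory), here is a minimal PyTorch sketch. The module sizes, the GRU encoders, and the two-episode loop are assumptions for illustration, not the patented architecture.

```python
import torch
import torch.nn as nn

class TinyDMN(nn.Module):
    """Toy dynamic-memory-style network: facts + question -> answer logits."""
    def __init__(self, vocab=1000, dim=64, episodes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.fact_rnn = nn.GRU(dim, dim, batch_first=True)
        self.question_rnn = nn.GRU(dim, dim, batch_first=True)
        self.attn = nn.Linear(3 * dim, 1)       # scores each fact
        self.memory_update = nn.GRUCell(dim, dim)
        self.answer = nn.Linear(dim, vocab)
        self.episodes = episodes

    def forward(self, facts, question):
        f, _ = self.fact_rnn(self.embed(facts))        # (B, T, D) fact states
        _, q = self.question_rnn(self.embed(question))
        q = q.squeeze(0)                               # (B, D) question state
        m = q                                          # memory starts at the question
        for _ in range(self.episodes):
            # attend over facts, conditioned on the question and current memory
            ctx = torch.cat([f, q.unsqueeze(1).expand_as(f),
                             m.unsqueeze(1).expand_as(f)], dim=-1)
            w = torch.softmax(self.attn(ctx).squeeze(-1), dim=1)   # (B, T)
            episode = (w.unsqueeze(-1) * f).sum(dim=1)             # weighted facts
            m = self.memory_update(episode, m)         # dynamically update memory
        return self.answer(m)                          # logits over answer vocab

logits = TinyDMN()(torch.randint(0, 1000, (2, 12)),
                   torch.randint(0, 1000, (2, 5)))
print(logits.shape)  # torch.Size([2, 1000])
```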
-
Publication number: US12072955B2
Publication date: 2024-08-27
Application number: US17532851
Filing date: 2021-11-22
Applicant: salesforce.com, inc.
Inventor: Chen Xing , Wenhao Liu , Chu Hong Hoi , Nitish Shirish Keskar , Caiming Xiong
IPC: G06F18/214 , G06F18/21 , G06F40/00
CPC classification number: G06F18/2148 , G06F18/2163 , G06F40/00
Abstract: Embodiments are directed to pre-training a transformer model using more parameters for sophisticated patterns (PSP++). The transformer model is divided into a held-out model and a main model. A forward pass and a backward pass are performed on the held-out model, where the forward pass determines the self-attention hidden states of the held-out model and the backward pass determines the loss of the held-out model. A forward pass on the main model is performed to determine the self-attention hidden states of the main model. The self-attention hidden states of the main model are concatenated with the self-attention hidden states of the held-out model. A backward pass is performed on the main model to determine a loss of the main model. The parameters of the held-out model are updated to reflect the loss of the held-out model, and the parameters of the main model are updated to reflect the loss of the main model.
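The training loop in the abstract can be pictured with a small PyTorch sketch: the held-out model runs its own forward and backward pass, the main model's self-attention hidden states are concatenated with the held-out states before the main loss is computed, and each model is updated by its own optimizer. The encoder sizes, the token-prediction heads, and detaching the held-out states from the main backward pass are assumptions for illustration.

```python
import torch
import torch.nn as nn

dim, vocab = 64, 1000
embed = nn.Embedding(vocab, dim)
held_out = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
main = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
head_ho = nn.Linear(dim, vocab)          # held-out prediction head (assumed)
head_main = nn.Linear(2 * dim, vocab)    # main head sees concatenated states

opt_ho = torch.optim.Adam([*held_out.parameters(), *head_ho.parameters()])
opt_main = torch.optim.Adam([*embed.parameters(), *main.parameters(),
                             *head_main.parameters()])
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, vocab, (8, 16))          # toy pre-training batch
x = embed(tokens)

# forward + backward on the held-out model
h_ho = held_out(x.detach())                        # self-attention hidden states
loss_ho = loss_fn(head_ho(h_ho).flatten(0, 1), tokens.flatten())
opt_ho.zero_grad(); loss_ho.backward(); opt_ho.step()

# forward on the main model, concatenate with held-out states, then backward
h_main = main(x)
h_cat = torch.cat([h_main, h_ho.detach()], dim=-1) # (B, T, 2 * dim)
loss_main = loss_fn(head_main(h_cat).flatten(0, 1), tokens.flatten())
opt_main.zero_grad(); loss_main.backward(); opt_main.step()
```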
-
Publication number: US11829721B2
Publication date: 2023-11-28
Application number: US17161214
Filing date: 2021-01-28
Applicant: salesforce.com, inc.
Inventor: Tong Niu , Semih Yavuz , Yingbo Zhou , Nitish Shirish Keskar , Huan Wang , Caiming Xiong
IPC: G10L15/065 , G06N3/0455 , G06F18/20 , G06F40/20 , G06F40/289 , G06F40/45 , G06F40/284 , G06F40/242 , G06F18/22 , G06F18/214 , G06N7/01
CPC classification number: G06F40/284 , G06F18/214 , G06F18/22 , G06F40/242 , G06N7/01
Abstract: Embodiments described herein provide dynamic blocking, a decoding algorithm that enables large-scale pretrained language models to generate high-quality paraphrases in an unsupervised setting. Specifically, to obtain an alternative surface form, when the language model emits a token that is present in the source sequence, the language model is prevented from generating, at the next time step, the token that immediately follows it in the source sequence. In this way, the language model is forced to generate a paraphrase of the input source sequence with mostly different wording.
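The blocking rule itself is simple enough to sketch: if the token just emitted appears at position i of the source, the source token at position i+1 is banned at the next step. The greedy decoder and the `step_logits` scoring interface below are illustrative assumptions; in the described setting the same banned set would be applied to each hypothesis inside a beam search.

```python
def blocked_next_tokens(source_ids, last_token):
    """Tokens banned at the next step, given the token just emitted."""
    return {source_ids[i + 1]
            for i in range(len(source_ids) - 1)
            if source_ids[i] == last_token}

def greedy_decode_with_blocking(step_logits, source_ids, eos_id, max_len=20):
    """step_logits(prefix) -> {token: score}; any scoring model fits here."""
    out = []
    while len(out) < max_len:
        banned = blocked_next_tokens(source_ids, out[-1]) if out else set()
        scores = {t: s for t, s in step_logits(out).items() if t not in banned}
        nxt = max(scores, key=scores.get)   # best surviving token
        if nxt == eos_id:
            break
        out.append(nxt)
    return out
```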
-
Publication number: US11822897B2
Publication date: 2023-11-21
Application number: US17463227
Filing date: 2021-08-31
Applicant: salesforce.com, inc.
Inventor: Kazuma Hashimoto , Raffaella Buschiazzo , James Bradbury , Teresa Anna Marshall , Caiming Xiong , Richard Socher
Abstract: Approaches for the translation of structured text include an embedding module for encoding and embedding source text in a first language, an encoder for encoding output of the embedding module, a decoder for iteratively decoding output of the encoder based on generated tokens in translated text from previous iterations, a beam module for constraining output of the decoder with respect to possible embedded tags to include in the translated text for a current iteration using a beam search, and a layer for selecting a token to be included in the translated text for the current iteration. The translated text is in a second language different from the first language. In some embodiments, the approach further includes scoring and pointer modules for selecting the token based on the output of the beam module, or for copying it from the source text or from reference text of the training pair that best matches the source text.
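One way to picture the beam module's constraint is a validity filter over candidate next tokens: a closing tag is only allowed when it matches the innermost open tag, so the beam never commits to malformed markup. The two-tag inventory and the stack-based check below are assumptions for illustration.

```python
OPEN = {"<b>": "</b>", "<i>": "</i>"}   # toy tag inventory (assumed)

def allowed_tags(prefix_tokens):
    """Track open tags in the hypothesis so far; only matching closes are legal."""
    stack = []
    for tok in prefix_tokens:
        if tok in OPEN:
            stack.append(tok)
        elif stack and tok == OPEN[stack[-1]]:
            stack.pop()
    allowed = set(OPEN)                  # any tag may open at any time
    if stack:
        allowed.add(OPEN[stack[-1]])     # only the innermost tag may close
    return allowed

def prune_beam(candidates):
    """candidates: list of (prefix_tokens, next_token, score). Keep hypotheses
    whose next token is a plain word or a currently legal tag."""
    kept = [(p, t, s) for p, t, s in candidates
            if t not in set(OPEN) | set(OPEN.values()) or t in allowed_tags(p)]
    return sorted(kept, key=lambda c: c[2], reverse=True)

print(prune_beam([(["<b>", "hello"], "</b>", 0.9),
                  (["<b>", "hello"], "</i>", 0.95)]))  # "</i>" is pruned
```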
-
Publication number: US11783164B2
Publication date: 2023-10-10
Application number: US17080656
Filing date: 2020-10-26
Applicant: salesforce.com, inc.
Inventor: Kazuma Hashimoto , Caiming Xiong , Richard Socher
IPC: G06N3/04 , G06N3/084 , G06F40/30 , G06F40/205 , G06F40/216 , G06F40/253 , G06F40/284 , G06N3/044 , G06N3/045 , G06N3/047 , G06N3/063 , G06N3/08 , G10L15/18 , G10L25/30 , G10L15/16 , G06F40/00
CPC classification number: G06N3/04 , G06F40/205 , G06F40/216 , G06F40/253 , G06F40/284 , G06F40/30 , G06N3/044 , G06N3/045 , G06N3/047 , G06N3/063 , G06N3/08 , G06N3/084 , G06F40/00 , G10L15/16 , G10L15/18 , G10L25/30
Abstract: The technology disclosed provides a so-called "joint many-task neural network model" that solves a variety of increasingly complex natural language processing (NLP) tasks using growing depth of layers in a single end-to-end model. The model is successively trained by considering linguistic hierarchies, directly connecting word representations to all model layers, explicitly using predictions from lower tasks, and applying a so-called "successive regularization" technique to prevent catastrophic forgetting. Three examples of lower-level model layers are a part-of-speech (POS) tagging layer, a chunking layer, and a dependency parsing layer. Two examples of higher-level model layers are a semantic relatedness layer and a textual entailment layer. The model achieves state-of-the-art results on chunking, dependency parsing, semantic relatedness, and textual entailment.
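The "successive regularization" term can be sketched in a few lines: while training a higher-level task, the shared lower-layer parameters are penalized for drifting away from a snapshot taken after the previous training stage. The penalty weight and the snapshot timing below are illustrative assumptions.

```python
import torch

def successive_regularization(shared_params, snapshot, delta=1e-2):
    """delta * ||theta - theta_prev||^2 over the shared lower-layer parameters."""
    return delta * sum(((p - s) ** 2).sum()
                       for p, s in zip(shared_params, snapshot))

# usage inside a training step for a higher-level task (sketch):
#   snapshot = [p.detach().clone() for p in lower_layers.parameters()]
#   loss = task_loss + successive_regularization(lower_layers.parameters(),
#                                                snapshot)
```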
-
Publication number: US11669712B2
Publication date: 2023-06-06
Application number: US16559196
Filing date: 2019-09-03
Applicant: salesforce.com, inc.
Inventor: Lichao Sun , Kazuma Hashimoto , Jia Li , Richard Socher , Caiming Xiong
IPC: G06N3/08 , G06F40/232 , G06N3/045 , G06N3/008 , G06N3/044
CPC classification number: G06N3/008 , G06F40/232 , G06N3/044 , G06N3/045 , G06N3/08
Abstract: A method for evaluating the robustness of one or more target neural network models using natural typos. The method includes receiving one or more natural typo generation rules associated with a first task for a first input document type, receiving a first target neural network model, and receiving a first document and its corresponding ground truth labels. The method further includes generating one or more natural typos for the first document based on the one or more natural typo generation rules, and providing, to the first target neural network model, a test document generated from the first document and the one or more natural typos as an input document to generate a first output. A robustness evaluation result of the first target neural network model is generated based on a comparison between the first output and the ground truth labels.
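The evaluation loop reads naturally as code: apply the typo generation rules to a document, feed the perturbed test document to the target model, and compare its output with the ground truth labels. The sample rule (collapsing a doubled letter, a common natural typo) and the accuracy-style metric are assumptions for illustration.

```python
import re

def drop_doubled_letter(text):
    """One example typo rule: 'occurred' -> 'occured', 'good' -> 'god'."""
    return re.sub(r"([a-z])\1", r"\1", text, count=1)

def robustness_score(model, documents, labels, typo_rules):
    correct = 0
    for doc, gold in zip(documents, labels):
        for rule in typo_rules:
            doc = rule(doc)                  # build the perturbed test document
        correct += int(model(doc) == gold)   # compare output to ground truth
    return correct / len(documents)

# toy target "model": classifies by keyword, so the typo breaks it
model = lambda text: "positive" if "good" in text else "negative"
print(robustness_score(model, ["a good book"], ["positive"],
                       [drop_doubled_letter]))  # 0.0: 'good' became 'god'
```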
-
Publication number: US20230153542A1
Publication date: 2023-05-18
Application number: US17581380
Filing date: 2022-01-21
Applicant: salesforce.com, inc.
Inventor: Tong Niu , Kazuma Hashimoto , Yingbo Zhou , Caiming Xiong
IPC: G06F40/51
CPC classification number: G06F40/51
Abstract: Embodiments described herein provide a cross-lingual sentence alignment framework that is trained only on rich-resource language pairs. To obtain an accurate aligner, a pretrained multi-lingual language model is used, and a classifier is trained on parallel data from rich-resource language pairs. This trained classifier may then be used for cross-lingual transfer with low-resource languages.
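A compact sketch of the setup: sentence pairs are embedded with a pretrained multilingual encoder, a binary classifier is trained on rich-resource parallel data to score whether a pair is aligned, and the same classifier then transfers to low-resource language pairs. The `encode` placeholder and the feature construction below are assumptions standing in for any pretrained multilingual language model.

```python
import torch
import torch.nn as nn

class PairClassifier(nn.Module):
    """Binary aligned/not-aligned classifier over a pair of sentence vectors."""
    def __init__(self, dim):
        super().__init__()
        # features: both embeddings plus elementwise |diff| and product
        self.mlp = nn.Sequential(nn.Linear(4 * dim, 128), nn.ReLU(),
                                 nn.Linear(128, 1))

    def forward(self, a, b):
        feats = torch.cat([a, b, (a - b).abs(), a * b], dim=-1)
        return self.mlp(feats).squeeze(-1)   # alignment logit

dim = 32
encode = lambda s: torch.randn(dim)          # placeholder multilingual encoder
clf = PairClassifier(dim)
logit = clf(encode("The cat sleeps."), encode("Le chat dort."))
print(torch.sigmoid(logit))                  # probability the pair is aligned
```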
-
Publication number: US11640527B2
Publication date: 2023-05-02
Application number: US16658399
Filing date: 2019-10-21
Applicant: salesforce.com, inc.
Inventor: Lichao Sun , Jia Li , Caiming Xiong , Yingbo Zhou
Abstract: Systems and methods are provided for a near-zero-cost (NZC) query framework for differentially private deep learning. To protect the privacy of training data during learning, the near-zero-cost query framework transfers knowledge from an ensemble of teacher models, each trained on a partition of the data, to a student model. Privacy guarantees may be understood intuitively and expressed rigorously in terms of differential privacy. Other features are also provided.
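The teacher-to-student transfer can be sketched with PATE-style noisy vote aggregation, which matches the abstract's setup of teachers trained on data partitions: each teacher votes on a label, calibrated noise is added to the vote histogram, and the noisy winner labels the student's training example. The Laplace noise scale and the ten-teacher ensemble are illustrative assumptions, not the patented NZC mechanism itself.

```python
import numpy as np

def noisy_aggregate(teacher_votes, num_classes, scale=1.0, rng=None):
    """Return the noisy-argmax label from an ensemble of teacher votes."""
    rng = rng or np.random.default_rng(0)
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    counts += rng.laplace(0.0, scale, size=num_classes)  # privacy noise
    return int(np.argmax(counts))

votes = np.array([2, 2, 2, 1, 2, 0, 2, 2, 1, 2])  # ten teachers, three classes
print(noisy_aggregate(votes, num_classes=3))       # noisy consensus label
```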
-
Publication number: US20230113750A1
Publication date: 2023-04-13
Application number: US17498155
Filing date: 2021-10-11
Applicant: salesforce.com, inc.
Abstract: A system performs group testing on a population of items. Group testing identifies items satisfying particular criteria, for example defective items, from a population of items. The group testing may be performed for software or hardware testing, for testing a human population, for training of deep learning applications, and so on. The system trains a machine learning based model, for example a reinforcement learning based model, to evaluate groups. The model may further determine system dynamics that represent priors of the items. An agent treats the population and the groups of items being tested as the environment and performs actions, for example adjusting the groups. The system can also perform a non-adaptive strategy based on Monte Carlo simulation of tests.
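The non-adaptive Monte Carlo piece is easy to sketch: simulate two-stage (Dorfman-style) pooled testing at several group sizes and pick the size that minimizes the expected number of tests per item. The defect prevalence and candidate group sizes are assumptions for illustration.

```python
import numpy as np

def expected_tests_per_item(group_size, prevalence, trials=10_000, seed=0):
    """Monte Carlo estimate of tests per item for two-stage pooled testing."""
    rng = np.random.default_rng(seed)
    # one pooled test per group, plus individual retests if the pool is positive
    defects = rng.random((trials, group_size)) < prevalence
    tests = 1 + defects.any(axis=1) * group_size
    return tests.mean() / group_size

prevalence = 0.02
for g in (2, 4, 8, 16):
    print(g, round(expected_tests_per_item(g, prevalence), 3))
# larger pools amortize the pooled test until positives become too frequent
```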
-
Publication number: US11615249B2
Publication date: 2023-03-28
Application number: US16996726
Filing date: 2020-08-18
Applicant: salesforce.com, inc.
Inventor: Bryan McCann , Nitish Shirish Keskar , Caiming Xiong , Richard Socher
IPC: G06F40/30 , G06N3/08 , G06N5/04 , G06N3/04 , G06F40/56 , G06F16/242 , G06F16/33 , G06F16/332 , G06N20/20 , G06N20/10 , G06N20/00 , G10L15/16 , G10L15/18 , G06N3/044 , G06N3/045
Abstract: Approaches for multitask learning as question answering include an input layer for encoding a context and a question, a self-attention based transformer including an encoder and a decoder, a first bi-directional long short-term memory (biLSTM) for further encoding an output of the encoder, a long short-term memory (LSTM) for generating a context-adjusted hidden state from the output of the decoder and a hidden state, an attention network for generating first attention weights based on an output of the first biLSTM and an output of the LSTM, a vocabulary layer for generating a distribution over a vocabulary, a context layer for generating a distribution over the context, and a switch for generating a weighting between the distributions over the vocabulary and the context, generating a composite distribution based on the weighting, and selecting a word of an answer using the composite distribution.
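The switch at the end of the abstract is the key mixing step, and it can be sketched directly: a learned gate weights a vocabulary distribution against a copy distribution scattered over the context tokens, and the answer word is selected from the composite. The gate network and the toy dimensions below are illustrative assumptions.

```python
import torch
import torch.nn as nn

vocab_size, ctx_len, dim = 1000, 12, 64
gate = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())

hidden = torch.randn(1, dim)                             # context-adjusted state
p_vocab = torch.softmax(torch.randn(1, vocab_size), dim=-1)
attn = torch.softmax(torch.randn(1, ctx_len), dim=-1)    # copy weights
context_ids = torch.randint(0, vocab_size, (1, ctx_len))

# scatter the copy weights onto vocabulary positions of the context tokens
p_copy = torch.zeros(1, vocab_size).scatter_add_(1, context_ids, attn)

g = gate(hidden)                                 # weighting between the two
p_final = g * p_vocab + (1 - g) * p_copy         # composite distribution
word = p_final.argmax(dim=-1)                    # selected answer token
print(word)
```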