-
Publication Number: US11782686B2
Publication Date: 2023-10-10
Application Number: US17459968
Filing Date: 2021-08-27
Applicant: salesforce.com, inc.
Inventor: Yue Wang , Weishi Wang , Shafiq Rayhan Joty , Chu Hong Hoi
CPC classification number: G06F8/427 , G06F18/214 , G06F40/20 , G06N3/047 , G06N3/084
Abstract: Embodiments described herein provide a code generation and understanding model that builds on a Transformer-based encoder-decoder framework. The code generation and understanding model is configured to derive generic representations for programming language (PL) and natural language (NL) in the code domain via pre-training on an unlabeled code corpus, and then to benefit many code-related downstream tasks through fine-tuning. Apart from the denoising sequence-to-sequence objectives widely adopted for pre-training on natural language, an identifier tagging and prediction pre-training objective is adopted to enable the model to better leverage the crucial token type information from PL, namely the identifiers assigned by developers.
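The identifier tagging objective described in this abstract can be pictured as a per-token binary classification on top of the encoder: each code token is labeled as an identifier or not, and the model is trained with a per-token binary loss. The sketch below is a minimal illustration under that assumption, not the patented implementation; the module name, the random stand-in tensors, and the `is_identifier` labels are all hypothetical.

```python
# Minimal sketch of an identifier-tagging pre-training objective (illustrative only):
# a binary classifier over encoder hidden states, trained with masked BCE loss.
import torch
import torch.nn as nn

class IdentifierTaggingHead(nn.Module):
    """Binary token-type classifier on top of encoder hidden states."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, 1)

    def forward(self, encoder_hidden_states: torch.Tensor) -> torch.Tensor:
        # encoder_hidden_states: (batch, seq_len, hidden_size) -> (batch, seq_len) logits
        return self.classifier(encoder_hidden_states).squeeze(-1)

def identifier_tagging_loss(logits: torch.Tensor,
                            is_identifier: torch.Tensor,
                            attention_mask: torch.Tensor) -> torch.Tensor:
    """Per-token binary cross-entropy, ignoring padding positions."""
    per_token = nn.BCEWithLogitsLoss(reduction="none")(logits, is_identifier.float())
    mask = attention_mask.float()
    return (per_token * mask).sum() / mask.sum()

# Usage with random tensors standing in for real encoder outputs and labels.
batch, seq_len, hidden = 2, 16, 768
head = IdentifierTaggingHead(hidden)
hidden_states = torch.randn(batch, seq_len, hidden)   # stand-in for encoder output
labels = torch.randint(0, 2, (batch, seq_len))        # 1 = identifier token (hypothetical)
mask = torch.ones(batch, seq_len, dtype=torch.long)   # no padding in this toy example
loss = identifier_tagging_loss(head(hidden_states), labels, mask)
```

In an actual pre-training run the identifier labels would come from parsing the code rather than from random tensors; the random data here only keeps the sketch self-contained.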
-
Publication Number: US11562147B2
Publication Date: 2023-01-24
Application Number: US16929738
Filing Date: 2020-07-15
Applicant: salesforce.com, inc.
Inventor: Yue Wang , Chu Hong Hoi , Shafiq Rayhan Joty
Abstract: A visual dialogue model receives image input and text input that includes a dialogue history between the model and a human user, along with a current utterance by the human user. The model generates a unified contextualized representation using a transformer encoder network, in which the unified contextualized representation includes a token-level encoding of the image input and text input. The model generates an encoded visual dialogue input from the unified contextualized representation using visual dialogue encoding layers. The encoded visual dialogue input includes a position-level encoding and a segment-type encoding. The model generates an answer prediction from the encoded visual dialogue input using a first self-attention mask associated with discriminative settings or a second self-attention mask associated with generative settings. Dense annotation fine-tuning may be performed to increase accuracy of the answer prediction. The model provides the answer prediction as a response to the current utterance of the human user.
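One plausible reading of the two self-attention masks mentioned in this abstract is: full bidirectional attention over image, history, and answer tokens in the discriminative setting, and a mask that lets answer tokens attend only to the context and to earlier answer positions in the generative setting. The sketch below is built on that reading and is purely illustrative; the segment layout, segment ids, and function name are assumptions, not the patented method.

```python
# Illustrative construction of discriminative vs. generative self-attention masks
# over a token sequence made of image, dialogue-history, and answer segments.
import torch

def build_self_attention_mask(segment_ids: torch.Tensor,
                              answer_segment: int,
                              generative: bool) -> torch.Tensor:
    """Return a (seq_len, seq_len) mask where 1 means the query may attend to the key."""
    seq_len = segment_ids.size(0)
    if not generative:
        # Discriminative setting: full bidirectional attention across all tokens.
        return torch.ones(seq_len, seq_len)
    is_answer = (segment_ids == answer_segment).float()
    is_context = 1.0 - is_answer
    causal = torch.tril(torch.ones(seq_len, seq_len))
    # Context queries attend only to context keys; answer queries attend to all
    # context keys and to earlier answer keys (causal within the answer segment).
    context_rows = torch.outer(is_context, is_context)
    answer_rows = torch.outer(is_answer, is_context) + torch.outer(is_answer, is_answer) * causal
    return context_rows + answer_rows

# Toy layout: segment 0 = image tokens, 1 = dialogue history + question, 2 = answer.
segment_ids = torch.tensor([0, 0, 0, 1, 1, 1, 1, 2, 2, 2])
disc_mask = build_self_attention_mask(segment_ids, answer_segment=2, generative=False)
gen_mask = build_self_attention_mask(segment_ids, answer_segment=2, generative=True)
```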
-
Publication Number: US20220382527A1
Publication Date: 2022-12-01
Application Number: US17459968
Filing Date: 2021-08-27
Applicant: salesforce.com, inc.
Inventor: Yue Wang , Weishi Wang , Shafiq Rayhan Joty , Chu Hong Hoi
Abstract: Embodiments described herein provide a code generation and understanding model that builds on a Transformer-based encoder-decoder framework. The code generation and understanding model is configured to derive generic representations for programming language (PL) and natural language (NL) in the code domain via pre-training on an unlabeled code corpus, and then to benefit many code-related downstream tasks through fine-tuning. Apart from the denoising sequence-to-sequence objectives widely adopted for pre-training on natural language, an identifier tagging and prediction pre-training objective is adopted to enable the model to better leverage the crucial token type information from PL, namely the identifiers assigned by developers.
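For contrast with the identifier-tagging sketch earlier in this listing, the denoising sequence-to-sequence objective the abstract refers to can be sketched as T5-style span corruption over code tokens: random spans are replaced with sentinel tokens on the encoder side, and the decoder learns to reconstruct them. The token list, sentinel naming, masking rate, and span length below are illustrative assumptions, not the patented procedure.

```python
# Illustrative T5-style span-corruption pairing for denoising seq2seq pre-training.
import random

def corrupt_spans(tokens, mask_rate=0.15, max_span=3, seed=0):
    """Return (encoder_input, decoder_target) for span-denoising pre-training."""
    rng = random.Random(seed)
    encoder_input, decoder_target = [], []
    i, sentinel = 0, 0
    while i < len(tokens):
        if rng.random() < mask_rate:
            span = min(rng.randint(1, max_span), len(tokens) - i)
            marker = f"<extra_id_{sentinel}>"       # sentinel token standing in for the span
            encoder_input.append(marker)
            decoder_target.append(marker)
            decoder_target.extend(tokens[i:i + span])  # decoder reconstructs the masked span
            sentinel += 1
            i += span
        else:
            encoder_input.append(tokens[i])
            i += 1
    return encoder_input, decoder_target

# Toy code snippet tokenized by whitespace/punctuation for illustration.
code_tokens = ["def", "add", "(", "a", ",", "b", ")", ":", "return", "a", "+", "b"]
enc, dec = corrupt_spans(code_tokens)
```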
-