-
Publication No.: US11599721B2
Publication Date: 2023-03-07
Application No.: US17002562
Filing Date: 2020-08-25
Applicant: salesforce.com, inc.
Inventor: Shiva Kumar Pentyala , Mridul Gupta , Ankit Chadha , Indira Iyer , Richard Socher
IPC: G06F40/253 , G10L15/19 , G06F40/30
Abstract: A natural language processing system that trains task models for particular natural language tasks programmatically generates additional utterances for inclusion in the training set, based on the existing utterances in the training set and the existing state of a task model as generated from the original (non-augmented) training set. More specifically, a training augmentation module identifies specific textual units of utterances and generates variants of the utterances based on those identified units. The identification is based on the determined importance of the textual units to the output of the task model, as well as on task rules that correspond to the natural language task for which the task model is being generated. The generation of the additional utterances improves the quality of the task model without the expense of manually labeling utterances for inclusion in the training set.
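To make the augmentation idea in the abstract concrete, the following is a minimal Python sketch, not taken from the patent: the leave-one-out importance estimate, the synonym table, and all function names are illustrative assumptions standing in for the training augmentation module's actual importance measures and task rules.

```python
from typing import Callable, Dict, List, Tuple

def token_importances(utterance: str,
                      score_fn: Callable[[str], float]) -> List[Tuple[str, float]]:
    """Estimate each token's importance as the drop in the task model's
    score when that token is removed (leave-one-out occlusion)."""
    tokens = utterance.split()
    base = score_fn(utterance)
    importances = []
    for i, tok in enumerate(tokens):
        reduced = " ".join(tokens[:i] + tokens[i + 1:])
        importances.append((tok, base - score_fn(reduced)))
    return importances

def augment_utterance(utterance: str,
                      score_fn: Callable[[str], float],
                      synonyms: Dict[str, List[str]],
                      top_k: int = 2) -> List[str]:
    """Generate variant utterances by substituting synonyms for the top-k
    most important tokens (a simple stand-in for a task-specific rule)."""
    ranked = sorted(token_importances(utterance, score_fn),
                    key=lambda pair: pair[1], reverse=True)[:top_k]
    variants = []
    for important_token, _ in ranked:
        for alt in synonyms.get(important_token, []):
            variants.append(utterance.replace(important_token, alt))
    return variants

if __name__ == "__main__":
    # Toy stand-in for a trained task model: scores an utterance by the
    # fraction of its tokens that are booking-related keywords.
    KEYWORDS = {"book", "flight", "reserve", "ticket"}
    def toy_score(text: str) -> float:
        tokens = text.split()
        return sum(tok in KEYWORDS for tok in tokens) / max(len(tokens), 1)

    SYNONYMS = {"book": ["reserve"], "flight": ["ticket"]}  # hypothetical task rule
    print(augment_utterance("please book a flight to denver", toy_score, SYNONYMS))
```

Running the example produces variants such as "please reserve a flight to denver" and "please book a ticket to denver", i.e. new utterances derived from the textual units the toy model deems most important, which is the role the patent assigns to the generated training additions.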
-
Publication No.: US20220067277A1
Publication Date: 2022-03-03
Application No.: US17002562
Filing Date: 2020-08-25
Applicant: salesforce.com, inc.
Inventor: Shiva Kumar Pentyala , Mridul Gupta , Ankit Chadha , Indira Iyer , Richard Socher
IPC: G06F40/253 , G06F40/30 , G10L15/19
Abstract: A natural language processing system that trains task models for particular natural language tasks programmatically generates additional utterances for inclusion in the training set, based on the existing utterances in the training set and the existing state of a task model as generated from the original (non-augmented) training set. More specifically, a training augmentation module identifies specific textual units of utterances and generates variants of the utterances based on those identified units. The identification is based on the determined importance of the textual units to the output of the task model, as well as on task rules that correspond to the natural language task for which the task model is being generated. The generation of the additional utterances improves the quality of the task model without the expense of manually labeling utterances for inclusion in the training set.
-