-
Publication No.: US20200372025A1
Publication Date: 2020-11-26
Application No.: US16420764
Application Date: 2019-05-23
Applicant: ADOBE INC.
Inventor: Seung-hyun Yoon, Franck Dernoncourt, Trung Huu Bui, Doo Soon Kim, Carl Iwan Dockhorn, Yu Gong
IPC: G06F16/2452, G06F16/2457, G06F16/28, G06F16/248, G06N20/00
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for identifying textual similarity and performing answer selection. A textual-similarity computing model can use a pre-trained language model to generate vector representations of a question and a candidate answer from a target corpus. The target corpus can be clustered into latent topics (or other latent groupings), and the probability of a question or candidate answer belonging to each latent topic can be calculated and condensed (e.g., downsampled) to improve performance and focus on the most relevant topics. The condensed probabilities can be aggregated and combined with a downstream vector representation of the question (or answer), so the model can use focused topical and other categorical information as an auxiliary signal in the similarity computation. In training, transfer learning may be applied from a large-scale corpus, and the conventional list-wise approach can be replaced with point-wise learning.
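The flow described in this abstract (embed the texts, cluster the corpus into latent topics, condense each text's topic probabilities, then combine them with the text representation before scoring) can be sketched in a few lines. The sketch below is only an illustration of that flow, not the patented model: TF-IDF vectors stand in for the pre-trained language model embeddings, scikit-learn's LDA supplies the latent topics, and top-k truncation with renormalization stands in for the condensing (downsampling) step. The corpus, the `topic_weight` parameter, and both helper functions are hypothetical.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation

corpus = [
    "how do I reset my password",
    "click forgot password on the login page",
    "the app crashes when exporting a pdf",
    "update to the latest version to fix export crashes",
]

# Stand-in for the pre-trained language model: one TF-IDF vector per text.
embedder = TfidfVectorizer().fit(corpus)

# Cluster the target corpus into latent topics (LDA over word counts).
counter = CountVectorizer().fit(corpus)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counter.transform(corpus))

def condensed_topic_probs(text, k=1):
    """Topic distribution for a text, downsampled to its top-k topics."""
    probs = lda.transform(counter.transform([text]))[0]
    top = np.argsort(probs)[::-1][:k]
    condensed = np.zeros_like(probs)
    condensed[top] = probs[top]
    return condensed / condensed.sum()  # renormalize the kept mass

def similarity(question, answer, topic_weight=0.5):
    """Cosine similarity over [embedding ; weighted condensed topic probs]."""
    def rep(text):
        emb = embedder.transform([text]).toarray()[0]
        return np.concatenate([emb, topic_weight * condensed_topic_probs(text)])
    q, a = rep(question), rep(answer)
    return float(q @ a / (np.linalg.norm(q) * np.linalg.norm(a)))

print(similarity("how do I reset my password",
                 "click forgot password on the login page"))
```

Concatenating the condensed topic distribution onto the text vector is one simple way to let topical information act as an auxiliary signal in the similarity score; the patent leaves the exact combination mechanism to the claims and description.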
-
Publication No.: US11113323B2
Publication Date: 2021-09-07
Application No.: US16420764
Application Date: 2019-05-23
Applicant: ADOBE INC.
Inventor: Seung-hyun Yoon, Franck Dernoncourt, Trung Huu Bui, Doo Soon Kim, Carl Iwan Dockhorn, Yu Gong
IPC: G06F7/00, G06F16/332, G06N20/00, G06F16/33
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for identifying textual similarity and performing answer selection. A textual-similarity computing model can use a pre-trained language model to generate vector representations of a question and a candidate answer from a target corpus. The target corpus can be clustered into latent topics (or other latent groupings), and the probability of a question or candidate answer belonging to each latent topic can be calculated and condensed (e.g., downsampled) to improve performance and focus on the most relevant topics. The condensed probabilities can be aggregated and combined with a downstream vector representation of the question (or answer), so the model can use focused topical and other categorical information as an auxiliary signal in the similarity computation. In training, transfer learning may be applied from a large-scale corpus, and the conventional list-wise approach can be replaced with point-wise learning.
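This granted publication shares its abstract with the pre-grant record above; its closing sentence contrasts list-wise and point-wise learning, which can be sketched separately. In the minimal sketch below, each (question, answer) pair is scored independently against a binary correct/incorrect label, rather than being ranked jointly with all other candidates for the same question. The `pair_features` helper and the toy data are hypothetical stand-ins, not the patent's combined representation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def pair_features(question_vec, answer_vec):
    """Hypothetical pair representation: elementwise product and abs difference."""
    return np.concatenate([question_vec * answer_vec,
                           np.abs(question_vec - answer_vec)])

# Toy data: 200 (question, answer) vector pairs with point-wise 0/1 labels,
# where a "correct" answer points roughly the same way as its question.
dim = 16
Q = rng.normal(size=(200, dim))
sign = np.where(rng.random(200) < 0.5, 1.0, -1.0)[:, None]
A = sign * Q + 0.1 * rng.normal(size=(200, dim))
y = (np.sum(Q * A, axis=1) > 0).astype(int)  # 1 = answer matches question

# Point-wise learning: one independent binary example per (question, answer)
# pair, instead of a list-wise loss over all candidates of a question at once.
X = np.stack([pair_features(q, a) for q, a in zip(Q, A)])
clf = LogisticRegression(max_iter=1000).fit(X, y)

# At inference, candidates for a question are still ranked, but by
# independently computed point-wise scores.
print(clf.predict_proba(X[:3])[:, 1])
```

The practical appeal of the point-wise setup, as the abstract suggests, is that training examples need no grouping by question, which simplifies transfer learning from a large-scale corpus of labeled pairs.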
-