Publication No.: US11960838B2
Publication Date: 2024-04-16
Application No.: US17120075
Filing Date: 2020-12-11
Applicant: 42Maru Inc.
Inventor: Dong Hwan Kim , Han Su Kim , Woo Tae Jeong , Ki Bong Sung , Hyeon Dey Kim
IPC: G06F40/279 , G06F40/35 , G06N3/08
CPC classification number: G06F40/279 , G06F40/35 , G06N3/08
Abstract: The present invention relates to a method for reinforcing a multiple-choice QA model based on adversarial learning techniques, in which additional incorrect answers are generated from the data set used to train the multiple-choice QA model, thereby enriching the data the model can learn from. To achieve this, the method includes: step A, in which an incorrect answer generation model encodes a natural language text and a question, generates a second incorrect answer based on the text and the question, and transmits the second incorrect answer to an incorrect answer test model; step B, in which the incorrect answer test model encodes the text, the question, a first correct answer corresponding to the text and the question, a first incorrect answer, and the second incorrect answer, and selects a second correct answer based on the results of the encoding; step C, in which the incorrect answer test model generates feedback by determining whether the first correct answer is identical to the second correct answer; and step D, in which the incorrect answer generation model and the incorrect answer test model perform self-learning based on the feedback.
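Read as a training loop, steps A through D describe one adversarial round between the two models. The following is a minimal sketch of that control flow, assuming hypothetical class names, stub update methods, and random candidate selection for illustration only; it is not the patent's implementation, which would use neural encoders for both models.

```python
import random


class IncorrectAnswerGenerator:
    """Step A: encode the text and question, then produce a second incorrect answer."""

    def generate(self, text: str, question: str) -> str:
        # Placeholder: a real model would decode a distractor conditioned on
        # the encoded text and question.
        return f"distractor({question[:20]}...)"

    def self_learn(self, feedback: bool) -> None:
        # Step D: update parameters from the feedback signal (stubbed out here).
        pass


class IncorrectAnswerTester:
    """Steps B and C: encode all candidates, select an answer, and emit feedback."""

    def select_answer(self, text: str, question: str, candidates: list[str]) -> str:
        # Placeholder: a real model would score each candidate against the
        # encoded text and question; here one is picked at random.
        return random.choice(candidates)

    def self_learn(self, feedback: bool) -> None:
        # Step D: update parameters from the feedback signal (stubbed out here).
        pass


def adversarial_round(generator, tester, text, question,
                      first_correct, first_incorrect):
    # Step A: the generator produces a second incorrect answer from text + question.
    second_incorrect = generator.generate(text, question)

    # Step B: the tester considers the correct answer and both incorrect answers,
    # then selects a "second correct answer" from the pool.
    candidates = [first_correct, first_incorrect, second_incorrect]
    second_correct = tester.select_answer(text, question, candidates)

    # Step C: feedback is whether the tester recovered the original correct answer.
    feedback = (second_correct == first_correct)

    # Step D: both models self-learn based on the feedback.
    generator.self_learn(feedback)
    tester.self_learn(feedback)
    return second_incorrect, feedback


if __name__ == "__main__":
    gen, tester = IncorrectAnswerGenerator(), IncorrectAnswerTester()
    distractor, recovered = adversarial_round(
        gen, tester,
        text="Paris is the capital of France.",
        question="What is the capital of France?",
        first_correct="Paris",
        first_incorrect="Lyon",
    )
    print(distractor, "-> tester recovered correct answer:", recovered)
```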