-
Publication No.: US12159118B2
Publication Date: 2024-12-03
Application No.: US18544209
Filing Date: 2023-12-18
Inventor: Dong Hwan Kim , Sung Ju Hwang , Seanie Lee , Dong Bok Lee , Woo Tae Jeong , Han Su Kim , You Kyung Kwon , Hyun Ok Kim
Abstract: The present invention relates to a context-based QA generation architecture, and an object of the present invention is to generate diverse QA pairs from a single context. To achieve the object, the present invention includes a latent variable generating network including at least one encoder and an artificial neural network (Multi-Layer Perceptron: MLP) and configured to train the artificial neural network using a first context, a first question, and a first answer, and generate a second question latent variable and a second answer latent variable by applying the trained artificial neural network to a second context, an answer generating network configured to generate a second answer by decoding the second answer latent variable, and a question generating network configured to generate a second question based on a second context and the second answer.
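For illustration, a minimal sketch of the three-network generation flow described in this abstract, assuming a PyTorch-style implementation; every class name, dimension, pooling choice, and decoder here is an assumption for exposition, not taken from the patent.

```python
# Illustrative sketch only: latent variable network -> answer generator -> question generator.
import torch
import torch.nn as nn

class LatentVariableNetwork(nn.Module):
    """Encoder + MLPs; maps a context to question/answer latent variables."""
    def __init__(self, vocab=10000, emb=128, hid=256, latent=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.encoder = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
        self.q_mlp = nn.Linear(2 * hid, latent)
        self.a_mlp = nn.Linear(2 * hid + latent, latent)

    def forward(self, context_ids):
        enc, _ = self.encoder(self.embed(context_ids))
        ctx = enc.mean(dim=1)                            # pooled context vector
        z_q = self.q_mlp(ctx)                            # second question latent variable
        z_a = self.a_mlp(torch.cat([ctx, z_q], dim=-1))  # second answer latent variable
        return z_q, z_a

class AnswerGenerator(nn.Module):
    """Decodes the answer latent variable into answer token ids."""
    def __init__(self, vocab=10000, latent=64, max_len=8):
        super().__init__()
        self.proj = nn.Linear(latent, vocab * max_len)
        self.vocab, self.max_len = vocab, max_len

    def forward(self, z_a):
        return self.proj(z_a).view(-1, self.max_len, self.vocab).argmax(-1)

class QuestionGenerator(nn.Module):
    """Generates a question conditioned on the context and the generated answer."""
    def __init__(self, vocab=10000, emb=128, hid=256, max_len=12):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.encoder = nn.LSTM(emb, hid, batch_first=True)
        self.proj = nn.Linear(hid, vocab * max_len)
        self.vocab, self.max_len = vocab, max_len

    def forward(self, context_ids, answer_ids):
        enc, _ = self.encoder(self.embed(torch.cat([context_ids, answer_ids], dim=1)))
        return self.proj(enc.mean(dim=1)).view(-1, self.max_len, self.vocab).argmax(-1)

# Apply the (notionally trained) networks to a new "second" context.
ctx = torch.randint(0, 10000, (1, 50))                   # toy token ids
z_q, z_a = LatentVariableNetwork()(ctx)
answer = AnswerGenerator()(z_a)
question = QuestionGenerator()(ctx, answer)
```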
-
Publication No.: US11886233B2
Publication Date: 2024-01-30
Application No.: US17096767
Filing Date: 2020-11-12
Inventor: Dong Hwan Kim , Sung Ju Hwang , Seanie Lee , Dong Bok Lee , Woo Tae Jeong , Han Su Kim , You Kyung Kwon , Hyun Ok Kim
Abstract: The present invention relates to a context-based QA generation architecture, and an object of the present invention is to generate diverse QA pairs from a single context. To achieve the object, the present invention includes a latent variable generating network including at least one encoder and an artificial neural network (Multi-Layer Perceptron: MLP) and configured to train the artificial neural network using a first context, a first question, and a first answer, and generate a second question latent variable and a second answer latent variable by applying the trained artificial neural network to a second context, an answer generating network configured to generate a second answer by decoding the second answer latent variable, and a question generating network configured to generate a second question based on a second context and the second answer.
-
Publication No.: US20250068855A1
Publication Date: 2025-02-27
Application No.: US18942853
Filing Date: 2024-11-11
Inventor: Dong Hwan Kim , Sung Ju Hwang , Seanie Lee , Dong Bok Lee , Woo Tae Jeong , Han Su Kim , You Kyung Kwon , Hyun Ok Kim
Abstract: The present invention relates to a context-based QA generation architecture for generating diverse QA pairs from a single context. The context-based QA generation architecture includes a latent variable generating network, an answer generating network and a question generating network. The latent variable generating network comprises multiple Bi-LSTM encoders that encode a first context, a first question and a first answer to generate a first context vector, a first question vector and a first answer vector, respectively; a first Multi-Layer Perceptron (MLP) that generates a first question latent variable based on the first context vector and the first question vector; and a second MLP that generates a first answer latent variable based on the first question latent variable and the first answer vector. The answer generating network and the question generating network are trained based on the first context, the first question latent variable and the first answer latent variable.
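A minimal sketch of the training-side latent variable generating network as described here (three Bi-LSTM encoders feeding two MLPs), assuming a PyTorch implementation; the dimensions and mean-pooling are illustrative assumptions, not the patent's specification.

```python
# Illustrative sketch only: Bi-LSTM encoders + two MLPs for the latent variables.
import torch
import torch.nn as nn

class BiLSTMEncoder(nn.Module):
    def __init__(self, vocab=10000, emb=128, hid=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)

    def forward(self, ids):
        out, _ = self.lstm(self.embed(ids))
        return out.mean(dim=1)                      # fixed-size vector per sequence

class LatentVariableGeneratingNetwork(nn.Module):
    def __init__(self, hid=256, latent=64):
        super().__init__()
        self.ctx_enc, self.q_enc, self.a_enc = BiLSTMEncoder(), BiLSTMEncoder(), BiLSTMEncoder()
        # First MLP: question latent variable from context vector + question vector.
        self.q_mlp = nn.Sequential(nn.Linear(4 * hid, hid), nn.ReLU(), nn.Linear(hid, latent))
        # Second MLP: answer latent variable from question latent variable + answer vector.
        self.a_mlp = nn.Sequential(nn.Linear(latent + 2 * hid, hid), nn.ReLU(), nn.Linear(hid, latent))

    def forward(self, context_ids, question_ids, answer_ids):
        c, q, a = self.ctx_enc(context_ids), self.q_enc(question_ids), self.a_enc(answer_ids)
        z_q = self.q_mlp(torch.cat([c, q], dim=-1))   # first question latent variable
        z_a = self.a_mlp(torch.cat([z_q, a], dim=-1)) # first answer latent variable
        return z_q, z_a

# Toy usage; the answer and question generating networks would then be trained on
# (first context, z_q, z_a) as the abstract describes.
net = LatentVariableGeneratingNetwork()
ids = lambda n: torch.randint(0, 10000, (1, n))
z_q, z_a = net(ids(50), ids(12), ids(6))
```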
-
Publication No.: US11960838B2
Publication Date: 2024-04-16
Application No.: US17120075
Filing Date: 2020-12-11
Applicant: 42Maru Inc.
Inventor: Dong Hwan Kim , Han Su Kim , Woo Tae Jeong , Ki Bong Sung , Hyeon Dey Kim
IPC: G06F40/279 , G06F40/35 , G06N3/08
CPC classification number: G06F40/279 , G06F40/35 , G06N3/08
Abstract: The present invention relates to a method for reinforcing a multiple-choice QA model based on adversarial learning techniques, wherein additional incorrect answers are generated based on the data set used to train the multiple-choice QA model, thereby enriching the data that the multiple-choice QA model can learn from. To achieve this object, the method includes step A of an incorrect answer generation model encoding a text based on natural language text and a question, generating a second incorrect answer based on the text and the question, and transmitting the second incorrect answer to an incorrect answer test model; step B of the incorrect answer test model encoding the text, the question, a first correct answer corresponding to the text and the question, a first incorrect answer, and the second incorrect answer, and selecting a second correct answer based on the results of the encoding; step C of the incorrect answer test model generating feedback by determining whether the first correct answer is identical to the second correct answer; and step D of the incorrect answer generation model and the incorrect answer test model performing self-learning based on the feedback.
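A minimal sketch of the adversarial feedback loop in steps A-D, shown with placeholder models; the class names, the random placeholder outputs, and the +/-1 reward scheme are assumptions for illustration, not the patent's implementation.

```python
# Illustrative sketch only: generator proposes a distractor, tester tries to still find the answer.
import random

class IncorrectAnswerGenerator:
    def generate(self, text, question):
        # Step A: produce a new ("second") incorrect answer from the text and question.
        return "distractor-" + str(random.randint(0, 999))

    def learn(self, reward):
        pass  # placeholder for e.g. a policy-gradient update on the generator

class IncorrectAnswerTester:
    def select(self, text, question, options):
        # Step B: encode text, question, and all options, then pick a "second correct answer".
        return random.choice(options)

    def learn(self, reward):
        pass  # placeholder for an update on the tester

def training_step(gen, tester, text, question, correct, first_incorrect):
    second_incorrect = gen.generate(text, question)            # step A
    options = [correct, first_incorrect, second_incorrect]
    chosen = tester.select(text, question, options)            # step B
    feedback = 1.0 if chosen == correct else -1.0              # step C
    gen.learn(-feedback)    # generator rewarded when it fools the tester
    tester.learn(feedback)  # tester rewarded when it still finds the answer
    return feedback                                            # step D: both self-learn

gen, tester = IncorrectAnswerGenerator(), IncorrectAnswerTester()
print(training_step(gen, tester, "Some passage.", "What is asked?", "A", "B"))
```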
-
Publication No.: US12130851B2
Publication Date: 2024-10-29
Application No.: US18209703
Filing Date: 2023-06-14
Applicant: 42Maru Inc.
Inventor: Dong Hwan Kim , Han Su Kim , Woo Tae Jeong , Seung Hyeon Lee , Chang Hyeon Lim
IPC: G06F16/34 , G06F16/33 , G06F16/901 , G06F40/279 , G06F40/30 , G06F40/40
CPC classification number: G06F16/345 , G06F16/3347 , G06F16/9024 , G06F40/279 , G06F40/30 , G06F40/40
Abstract: The invention relates to a method and a system for improving the performance of text summarization, with the object of improving the performance of a technique for generating a summary from a given paragraph. To achieve this object, a method for improving the performance of text summarization includes: step (a) of generating an embedding vector by vectorizing a natural language-based context; step (b) of generating a graph using the embedding vector and calculating a first likelihood of each of at least one node included in the graph; step (c) of generating a second likelihood by assigning a weight to the first likelihood according to a result of comparing at least one node included in the graph with the context; and step (d) of calculating a third likelihood for all candidate paths present in the graph based on the second likelihood, selecting a path having the highest third likelihood, and generating a summary based on the path.
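A minimal sketch of the path-scoring idea in steps (a)-(d): nodes carry a first likelihood, nodes matching the context get an up-weighted second likelihood, and candidate paths are scored to pick the summary path. The toy graph, boost factor, and exhaustive path search are assumptions for illustration, not the patent's models.

```python
# Illustrative sketch only: re-weight node likelihoods by context overlap, then pick the best path.
import math

# Steps (a)/(b): assume each candidate token (node) already has a first likelihood,
# e.g. from a decoder run over the embedded context.
graph = {
    "start": {"The": 0.6, "A": 0.4},
    "The":   {"model": 0.7, "method": 0.3},
    "A":     {"model": 0.5, "method": 0.5},
    "model": {"summarizes": 0.8},
    "method": {"summarizes": 0.6},
    "summarizes": {},
}
context_tokens = {"model", "summarizes", "text"}

def second_likelihood(token, p_first, boost=1.5):
    # Step (c): up-weight nodes whose token actually appears in the context.
    return p_first * (boost if token in context_tokens else 1.0)

def best_path(graph, start="start", max_len=4):
    # Step (d): score every candidate path by the product of second likelihoods
    # (accumulated in log space) and return the highest-scoring one.
    best, best_score = None, -math.inf
    stack = [([start], 0.0)]
    while stack:
        path, score = stack.pop()
        node = path[-1]
        if not graph[node] or len(path) > max_len:
            if score > best_score:
                best, best_score = path[1:], score
            continue
        for nxt, p in graph[node].items():
            stack.append((path + [nxt], score + math.log(second_likelihood(nxt, p))))
    return best, best_score

summary_tokens, score = best_path(graph)
print(" ".join(summary_tokens), score)
```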
-
Publication No.: US11727041B2
Publication Date: 2023-08-15
Application No.: US17125991
Filing Date: 2020-12-17
Applicant: 42Maru Inc.
Inventor: Dong Hwan Kim , Han Su Kim , Woo Tae Jeong , Seung Hyeon Lee , Chang Hyeon Lim
IPC: G06F40/279 , G06F40/40 , G06F40/30 , G06F16/34 , G06F16/33 , G06F16/901
CPC classification number: G06F16/345 , G06F16/3347 , G06F16/9024 , G06F40/279 , G06F40/30 , G06F40/40
Abstract: The invention relates to a method and a system for improving the performance of text summarization, with the object of improving the performance of a technique for generating a summary from a given paragraph. To achieve this object, a method for improving the performance of text summarization includes: step (a) of generating an embedding vector by vectorizing a natural language-based context; step (b) of generating a graph by using the embedding vector; step (c) of assigning a weight depending on whether or not a keyword corresponding to at least one node included in the graph is present in the context; and step (d) of selecting a path having the highest likelihood in the graph and generating a summary based on the path.