-
Publication No.: US20250068855A1
Publication Date: 2025-02-27
Application No.: US18942853
Filing Date: 2024-11-11
Inventor: Dong Hwan Kim , Sung Ju Hwang , Seanie Lee , Dong Bok Lee , Woo Tae Jeong , Han Su Kim , You Kyung Kwon , Hyun Ok Kim
Abstract: The present invention relates to a context-based QA generation architecture for generating diverse QA pairs from a single context. The context-based QA generation architecture includes a latent variable generating network, an answer generating network, and a question generating network. The latent variable generating network comprises multiple Bi-LSTM encoders that encode a first context, a first question, and a first answer to generate a first context vector, a first question vector, and a first answer vector, respectively; a first Multi-Layer Perceptron (MLP) that generates a first question latent variable based on the first context vector and the first question vector; and a second MLP that generates a first answer latent variable based on the first question latent variable and the first answer vector. The answer generating network and the question generating network are trained based on the first context, the first question latent variable, and the first answer latent variable.
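As a rough illustration of the latent variable generating network this abstract describes, the following PyTorch sketch wires three Bi-LSTM encoders and two MLPs together in the stated order. All dimensions, the mean-pooling of the Bi-LSTM outputs into single vectors, and the module and variable names are assumptions for illustration; the abstract does not specify them.

```python
import torch
import torch.nn as nn

class LatentVariableNetwork(nn.Module):
    """Sketch of the latent variable generating network: three Bi-LSTM
    encoders produce context/question/answer vectors, a first MLP maps
    (context vector, question vector) to a question latent variable, and
    a second MLP maps (question latent, answer vector) to an answer
    latent variable. All sizes are illustrative assumptions."""

    def __init__(self, emb_dim=128, hid_dim=128, latent_dim=64):
        super().__init__()
        # One Bi-LSTM encoder per input sequence (context, question, answer).
        self.enc_c = nn.LSTM(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        self.enc_q = nn.LSTM(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        self.enc_a = nn.LSTM(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        # First MLP: (context vector, question vector) -> question latent.
        self.mlp_q = nn.Sequential(
            nn.Linear(4 * hid_dim, hid_dim), nn.ReLU(),
            nn.Linear(hid_dim, latent_dim))
        # Second MLP: (question latent, answer vector) -> answer latent.
        self.mlp_a = nn.Sequential(
            nn.Linear(latent_dim + 2 * hid_dim, hid_dim), nn.ReLU(),
            nn.Linear(hid_dim, latent_dim))

    @staticmethod
    def _pool(lstm_out):
        # Mean-pool the Bi-LSTM hidden states into one fixed-size vector
        # (the pooling scheme is an assumption; the abstract leaves it open).
        return lstm_out.mean(dim=1)

    def forward(self, ctx_emb, q_emb, a_emb):
        c_vec = self._pool(self.enc_c(ctx_emb)[0])   # first context vector
        q_vec = self._pool(self.enc_q(q_emb)[0])     # first question vector
        a_vec = self._pool(self.enc_a(a_emb)[0])     # first answer vector
        z_q = self.mlp_q(torch.cat([c_vec, q_vec], dim=-1))  # question latent
        z_a = self.mlp_a(torch.cat([z_q, a_vec], dim=-1))    # answer latent
        return z_q, z_a
```

Note that, as in the abstract, the answer latent variable is conditioned on the question latent variable rather than directly on the context vector.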
-
Publication No.: US12159118B2
Publication Date: 2024-12-03
Application No.: US18544209
Filing Date: 2023-12-18
Inventor: Dong Hwan Kim , Sung Ju Hwang , Seanie Lee , Dong Bok Lee , Woo Tae Jeong , Han Su Kim , You Kyung Kwon , Hyun Ok Kim
Abstract: The present invention relates to a context-based QA generation architecture, and an object of the present invention is to generate diverse QA pairs from a single context. To achieve this object, the present invention includes: a latent variable generating network that includes at least one encoder and an artificial neural network (a Multi-Layer Perceptron, MLP), and that is configured to train the artificial neural network using a first context, a first question, and a first answer, and to generate a second question latent variable and a second answer latent variable by applying the trained artificial neural network to a second context; an answer generating network configured to generate a second answer by decoding the second answer latent variable; and a question generating network configured to generate a second question based on the second context and the second answer.
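Read procedurally, the generation path in this abstract applies the trained latent network to a new (second) context, decodes the answer latent variable into a second answer, and then generates a second question from the context and that answer. The sketch below shows that control flow only; `latent_net`, `answer_decoder`, and `question_generator` are placeholder callables whose interfaces are assumptions, not the patented implementations.

```python
def generate_qa_pair(context, latent_net, answer_decoder, question_generator):
    """Sketch of the generation flow described in the abstract.

    latent_net, answer_decoder, and question_generator stand in for the
    trained latent variable generating network, answer generating
    network, and question generating network; their interfaces are
    illustrative assumptions."""
    # Apply the trained network to the second context to obtain the
    # second question latent variable and second answer latent variable.
    z_question, z_answer = latent_net(context)
    # The answer generating network decodes the second answer latent
    # variable into a second answer.
    answer = answer_decoder(z_answer)
    # Per the abstract, the question generating network conditions on
    # the second context and the generated second answer (the question
    # latent variable is produced but not explicitly routed here).
    question = question_generator(context, answer)
    return question, answer
```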
-
Publication No.: US11886233B2
Publication Date: 2024-01-30
Application No.: US17096767
Filing Date: 2020-11-12
Inventor: Dong Hwan Kim , Sung Ju Hwang , Seanie Lee , Dong Bok Lee , Woo Tae Jeong , Han Su Kim , You Kyung Kwon , Hyun Ok Kim
Abstract: The present invention relates to a context-based QA generation architecture, and an object of the present invention is to generate diverse QA pairs from a single context. To achieve this object, the present invention includes: a latent variable generating network that includes at least one encoder and an artificial neural network (a Multi-Layer Perceptron, MLP), and that is configured to train the artificial neural network using a first context, a first question, and a first answer, and to generate a second question latent variable and a second answer latent variable by applying the trained artificial neural network to a second context; an answer generating network configured to generate a second answer by decoding the second answer latent variable; and a question generating network configured to generate a second question based on the second context and the second answer.
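This parent abstract also states that the artificial neural network is first trained on (first context, first question, first answer) triples before being applied to new contexts. A minimal training-step sketch follows, reusing the placeholder networks from the generation sketch above; the reconstruction losses (the hypothetical `nll` scoring helpers) are assumptions, since the abstract does not disclose the actual training criterion.

```python
def train_step(batch, latent_net, answer_decoder, question_generator, optimizer):
    """One hedged training step on a (first context, first question,
    first answer) triple. The losses are placeholder assumptions; the
    abstract only states that the network is trained on such triples."""
    context, question, answer = batch
    # During training all three inputs are available, matching the
    # encoder-side description in the related abstract above.
    z_q, z_a = latent_net(context, question, answer)
    # Hypothetical helpers: score the reference answer and question
    # under the generating networks (not part of the source abstract).
    answer_loss = answer_decoder.nll(z_a, target=answer)
    question_loss = question_generator.nll(context, answer, target=question)
    loss = answer_loss + question_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```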
-
Publication No.: US20230325423A1
Publication Date: 2023-10-12
Application No.: US18209703
Filing Date: 2023-06-14
Applicant: 42 Maru Inc.
Inventor: Dong Hwan KIM , Han Su Kim , Woo Tae Jeong , Seung Hyeon Lee , Chang Hyeon Lim
IPC: G06F16/34 , G06F16/33 , G06F16/901 , G06F40/279 , G06F40/40 , G06F40/30
CPC classification number: G06F16/345 , G06F16/3347 , G06F16/9024 , G06F40/279 , G06F40/40 , G06F40/30
Abstract: The invention relates to a method and a system for improving the performance of text summarization, and has the object of improving the performance of techniques that generate a summary from a given paragraph. To achieve this object, a method for improving the performance of text summarization includes: a step (a) of generating an embedding vector by vectorizing a natural language-based context; a step (b) of generating a graph using the embedding vector and calculating a first likelihood for each of at least one node included in the graph; a step (c) of generating a second likelihood by assigning a weight to the first likelihood according to a result of comparing each node in the graph with the context; and a step (d) of calculating a third likelihood for every candidate path present in the graph based on the second likelihood, selecting the path with the highest third likelihood, and generating a summary based on that path.
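Read as an algorithm, steps (b) through (d) amount to re-weighting per-node likelihoods against the source context and then searching the graph for the best-scoring path. A small self-contained Python sketch of that search follows; the dictionary graph format, the 1.5x overlap boost, and the additive log-likelihood path score are all illustrative assumptions, with step (a)'s embedding and graph construction taken as given.

```python
import math

def summarize(graph, context_tokens):
    """Sketch of steps (b)-(d): boost the first likelihood of nodes whose
    token also appears in the context (second likelihood), score every
    candidate path (third likelihood), and emit the best path as the
    summary. The weighting rule and path score are assumptions."""
    # Step (c): second likelihood = first likelihood, weighted up when
    # the node's token overlaps the context.
    second = {
        node: p * (1.5 if tok in context_tokens else 1.0)
        for node, (tok, p) in graph["nodes"].items()
    }
    # Step (d): third likelihood of a path = sum of log second
    # likelihoods along it; select the maximizing candidate path.
    def third(path):
        return sum(math.log(second[n]) for n in path)
    best = max(graph["paths"], key=third)
    # The summary is the token sequence along the selected path.
    return " ".join(graph["nodes"][n][0] for n in best)

# Toy usage: node 1 ("cat") overlaps the context, so its path wins.
graph = {
    "nodes": {0: ("the", 0.9), 1: ("cat", 0.6), 2: ("dog", 0.5)},
    "paths": [[0, 1], [0, 2]],
}
print(summarize(graph, {"cat"}))  # -> "the cat"
```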