-
Publication No.: US11886233B2
Publication Date: 2024-01-30
Application No.: US17096767
Filing Date: 2020-11-12
Inventor: Dong Hwan Kim , Sung Ju Hwang , Seanie Lee , Dong Bok Lee , Woo Tae Jeong , Han Su Kim , You Kyung Kwon , Hyun Ok Kim
Abstract: The present invention relates to a context-based QA generation architecture, and an object of the present invention is to generate diverse QA pairs from a single context. To achieve the object, the present invention includes a latent variable generating network including at least one encoder and an artificial neural network (Multi-Layer Perceptron: MLP) and configured to train the artificial neural network using a first context, a first question, and a first answer, and generate a second question latent variable and a second answer latent variable by applying the trained artificial neural network to a second context, an answer generating network configured to generate a second answer by decoding the second answer latent variable, and a question generating network configured to generate a second question based on a second context and the second answer.
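Illustrative sketch: the generation flow above can be read as an encoder plus MLP producing question/answer latent variables from a new context, an answer network decoding the answer latent, and a question network conditioned on the context and that answer. The minimal PyTorch sketch below assumes toy vocabulary sizes, a GRU encoder, a single-token answer head, and conditioning on the answer latent in place of the decoded answer text; none of these choices come from the patent.

```python
import torch
import torch.nn as nn

VOCAB, EMB, HID, LATENT = 10000, 128, 256, 64

embed = nn.Embedding(VOCAB, EMB)
context_encoder = nn.GRU(EMB, HID, batch_first=True)
# MLP mapping the encoded context to question and answer latent variables.
latent_mlp = nn.Sequential(nn.Linear(HID, HID), nn.ReLU(),
                           nn.Linear(HID, 2 * LATENT))
answer_decoder = nn.Linear(LATENT, VOCAB)        # toy single-token answer head
question_generator = nn.GRU(EMB + LATENT, HID, batch_first=True)
question_head = nn.Linear(HID, VOCAB)

context_ids = torch.randint(0, VOCAB, (2, 30))   # token ids of a "second context"
_, h = context_encoder(embed(context_ids))
z_question, z_answer = latent_mlp(h.squeeze(0)).chunk(2, dim=-1)

# Decode a "second answer" from the answer latent variable.
answer_logits = answer_decoder(z_answer)

# Generate a "second question" conditioned on the context and the answer
# (here the answer latent stands in for the decoded answer text).
cond = z_answer.unsqueeze(1).expand(-1, context_ids.size(1), -1)
states, _ = question_generator(torch.cat([embed(context_ids), cond], dim=-1))
question_logits = question_head(states)
print(z_question.shape, answer_logits.shape, question_logits.shape)
```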
-
Publication No.: US12039449B2
Publication Date: 2024-07-16
Application No.: US17119381
Filing Date: 2020-12-11
Inventor: Seong-Jin Park , Sung Ju Hwang , Seungju Han , Insoo Kim , Jiwon Baek , Jaejoon Han
IPC: G06N3/08 , G06F17/18 , G06F18/10 , G06F18/213 , G06F18/2415 , G06F18/2431 , G06N3/082 , G06V10/44 , G06V10/764 , G06V10/774 , G06V40/16
CPC classification number: G06N3/082 , G06F17/18 , G06F18/10 , G06F18/213 , G06F18/2415 , G06F18/2431 , G06N3/08 , G06V10/454 , G06V10/764 , G06V10/774 , G06V40/168 , G06V40/174
Abstract: A processor-implemented neural network method includes: extracting, by a feature extractor of a neural network, a plurality of training feature vectors corresponding to a plurality of training class data of each of a plurality of classes including a first class and a second class; determining, by a feature sample generator of the neural network, an additional feature vector of the second class based on a mean vector and a variation vector of the plurality of training feature vectors of each of the first class and the second class; and training a class vector of the second class included in a classifier of the neural network based on the additional feature vector and the plurality of training feature vectors of the second class.
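Illustrative sketch: one plausible reading of the feature-sample generation step is to synthesize an extra second-class feature around the second class's mean, using a variation vector informed by both classes' statistics, and then use it together with the real second-class features to train the second-class vector in the classifier. The combination rule below is an assumption, not the claimed method.

```python
import torch

def generate_additional_feature(feats_c1: torch.Tensor,
                                feats_c2: torch.Tensor) -> torch.Tensor:
    mean_c2 = feats_c2.mean(dim=0)
    # Variation vector: per-dimension spread, borrowing the richer first-class
    # statistics where the second class has too few samples to estimate them.
    variation = torch.maximum(feats_c1.std(dim=0), feats_c2.std(dim=0))
    return mean_c2 + variation * torch.randn_like(mean_c2)

torch.manual_seed(0)
feats_c1 = torch.randn(100, 16)   # many training feature vectors for class 1
feats_c2 = torch.randn(5, 16)     # few training feature vectors for class 2
extra = generate_additional_feature(feats_c1, feats_c2)
print(extra.shape)                # torch.Size([16])
```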
-
Publication No.: US11875119B2
Publication Date: 2024-01-16
Application No.: US17179178
Filing Date: 2021-02-18
Inventor: Sung Ju Hwang , Moonsu Han , Minki Kang , Hyunwoo Jung
IPC: G06F16/215 , G06N3/0442 , G06N5/00 , G06F40/30 , G06N20/00 , G06F18/21 , G06V10/776
CPC classification number: G06F40/30 , G06F18/217 , G06N20/00 , G06V10/776
Abstract: Provided is a memory-based reinforcement learning method and system capable of storing optional information in streaming data. A question-answering (QA) method using the memory-based reinforcement learning method includes receiving, in an episodic memory reader (EMR), streaming data about an input context that is input from a user; analyzing, in the EMR, the received streaming data and storing preset semantic information used for QA in an external memory; and, in response to an input of a question from the user, determining, in a pretrained QA model, an answer to the input question based on the semantic information stored in the external memory.
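Illustrative sketch of the data flow only: an episodic memory reader scores each incoming chunk of streaming text and keeps the top-scoring chunks in a fixed-size external memory, and a QA model then answers from that memory. The scoring and replacement rules below are trivial stand-ins for the reinforcement-learned policy described in the abstract.

```python
import string
from typing import List, Tuple

def words(text: str) -> set:
    # Lower-cased, punctuation-stripped word set for crude matching.
    return set(text.lower().translate(str.maketrans("", "", string.punctuation)).split())

class ExternalMemory:
    def __init__(self, capacity: int = 2):
        self.capacity = capacity
        self.slots: List[Tuple[float, str]] = []

    def write(self, score: float, chunk: str):
        self.slots.append((score, chunk))
        # Keep only the highest-scoring entries (the patent learns this policy).
        self.slots = sorted(self.slots, reverse=True)[: self.capacity]

def toy_reader_score(chunk: str) -> float:
    # Stand-in salience score: capitalized, entity-like words count most.
    return sum(w[0].isupper() for w in chunk.split()) + 0.01 * len(chunk)

def toy_qa(question: str, memory: ExternalMemory) -> str:
    # Stand-in QA model: return the stored chunk with the largest word overlap.
    return max((c for _, c in memory.slots),
               key=lambda c: len(words(question) & words(c)))

memory = ExternalMemory(capacity=2)
for chunk in ["The meeting is on Friday.", "uh okay", "Alice booked Room 7."]:
    memory.write(toy_reader_score(chunk), chunk)

print(toy_qa("Who booked the room?", memory))  # -> "Alice booked Room 7."
```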
-
Publication No.: US20210263859A1
Publication Date: 2021-08-26
Application No.: US17179178
Filing Date: 2021-02-18
Inventor: Sung Ju Hwang , Moonsu Han , Minki Kang , Hyunwoo Jung
IPC: G06F12/121 , G06F40/30 , G06K9/62 , G06N20/00
Abstract: Provided is a memory-based reinforcement learning method and system capable of storing optional information in streaming data. A question-answering (QA) method using the memory-based reinforcement learning method includes receiving, in an episodic memory reader (EMR), streaming data about an input context that is input from a user; analyzing, in the EMR, the received streaming data and storing preset semantic information used for QA in an external memory; and, in response to an input of a question from the user, determining, in a pretrained QA model, an answer to the input question based on the semantic information stored in the external memory.
-
Publication No.: US20200160212A1
Publication Date: 2020-05-21
Application No.: US16214598
Filing Date: 2018-12-10
Inventor: Jinwoo Shin , Sung Ju Hwang , Yunhun Jang
IPC: G06N20/00
Abstract: Disclosed are a method and system for transfer learning to a random target dataset and model structure based on meta learning. A transfer learning method may include determining, using a meta model, the form and amount of information of a pre-trained model to be transferred, based on the similarity between a source dataset and a new target dataset, and performing transfer learning on a target model using the form and amount of information of the pre-trained model determined by the meta model.
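Illustrative sketch: a small meta model can be read as comparing how similar the new target dataset is to the source dataset and outputting, per layer, how much pre-trained information to transfer. The dataset descriptor, gating network, and interpolation rule in the PyTorch sketch below are assumptions, not the patented method.

```python
import torch
import torch.nn as nn

def dataset_embedding(features: torch.Tensor) -> torch.Tensor:
    # Crude dataset descriptor: mean feature vector (stand-in for a learned one).
    return features.mean(dim=0)

class MetaTransferModel(nn.Module):
    def __init__(self, feat_dim: int, num_layers: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * feat_dim, 64), nn.ReLU(),
                                  nn.Linear(64, num_layers), nn.Sigmoid())

    def forward(self, src_emb, tgt_emb):
        # One transfer weight in [0, 1] per layer of the target model.
        return self.gate(torch.cat([src_emb, tgt_emb], dim=-1))

src_feats, tgt_feats = torch.randn(500, 32), torch.randn(50, 32)
meta = MetaTransferModel(feat_dim=32, num_layers=4)
alphas = meta(dataset_embedding(src_feats), dataset_embedding(tgt_feats))

# Transfer: interpolate each target layer between pre-trained and random init.
pretrained = [torch.randn(32, 32) for _ in range(4)]
random_init = [torch.randn(32, 32) for _ in range(4)]
target_layers = [a * w_pre + (1 - a) * w_rand
                 for a, w_pre, w_rand in zip(alphas, pretrained, random_init)]
print(alphas.detach())
```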
-
Publication No.: US20250068855A1
Publication Date: 2025-02-27
Application No.: US18942853
Filing Date: 2024-11-11
Inventor: Dong Hwan Kim , Sung Ju Hwang , Seanie Lee , Dong Bok Lee , Woo Tae Jeong , Han Su Kim , You Kyung Kwon , Hyun Ok Kim
Abstract: The present invention relates to a context-based QA generation architecture for generating diverse QA pairs from a single context. The context-based QA generation architecture includes a latent variable generating network, an answer generating network, and a question generating network. The latent variable generating network comprises multiple Bi-LSTM encoders that encode a first context, a first question, and a first answer to generate a first context vector, a first question vector, and a first answer vector, respectively; a first Multi-Layer Perceptron (MLP) that generates a first question latent variable based on the first context vector and the first question vector; and a second MLP that generates a first answer latent variable based on the first question latent variable and the first answer vector. The answer generating network and the question generating network are trained based on the first context, the first question latent variable, and the first answer latent variable.
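Illustrative sketch of the encoder/MLP wiring described above: Bi-LSTM encoders produce context, question, and answer vectors; a first MLP maps the context and question vectors to a question latent variable; and a second MLP maps that latent plus the answer vector to an answer latent variable. Dimensions and the mean-pooling choice in the PyTorch sketch below are assumptions.

```python
import torch
import torch.nn as nn

class BiLSTMEncoder(nn.Module):
    def __init__(self, vocab=10000, emb=128, hid=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)

    def forward(self, ids):
        out, _ = self.lstm(self.embed(ids))
        return out.mean(dim=1)             # (B, 2*hid) pooled sequence vector

def mlp(inp, out, hid=256):
    return nn.Sequential(nn.Linear(inp, hid), nn.ReLU(), nn.Linear(hid, out))

enc_c, enc_q, enc_a = BiLSTMEncoder(), BiLSTMEncoder(), BiLSTMEncoder()
first_mlp = mlp(2 * 256, 64)    # context vector + question vector -> question latent
second_mlp = mlp(64 + 256, 64)  # question latent + answer vector -> answer latent

c_ids = torch.randint(0, 10000, (2, 40))   # first context (token ids)
q_ids = torch.randint(0, 10000, (2, 12))   # first question
a_ids = torch.randint(0, 10000, (2, 6))    # first answer
c_vec, q_vec, a_vec = enc_c(c_ids), enc_q(q_ids), enc_a(a_ids)
z_q = first_mlp(torch.cat([c_vec, q_vec], dim=-1))   # first question latent
z_a = second_mlp(torch.cat([z_q, a_vec], dim=-1))    # first answer latent
print(z_q.shape, z_a.shape)   # torch.Size([2, 64]) torch.Size([2, 64])
```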
-
Publication No.: US12159118B2
Publication Date: 2024-12-03
Application No.: US18544209
Filing Date: 2023-12-18
Inventor: Dong Hwan Kim , Sung Ju Hwang , Seanie Lee , Dong Bok Lee , Woo Tae Jeong , Han Su Kim , You Kyung Kwon , Hyun Ok Kim
Abstract: The present invention relates to a context-based QA generation architecture, and an object of the present invention is to generate diverse QA pairs from a single context. To achieve the object, the present invention includes a latent variable generating network including at least one encoder and an artificial neural network (Multi-Layer Perceptron: MLP) and configured to train the artificial neural network using a first context, a first question, and a first answer, and generate a second question latent variable and a second answer latent variable by applying the trained artificial neural network to a second context, an answer generating network configured to generate a second answer by decoding the second answer latent variable, and a question generating network configured to generate a second question based on a second context and the second answer.
-