-
1.
Publication Number: US20240202454A1
Publication Date: 2024-06-20
Application Number: US18471538
Application Date: 2023-09-21
Inventor: Euisok CHUNG , Hyun Woo KIM , Hwajeon SONG , Jeongmin YANG , Byunghyun YOO , Ran HAN
IPC: G06F40/30 , G06F40/284
CPC classification number: G06F40/30 , G06F40/284
Abstract: A domain adaptation procedure, such as fine-tuning, is required to utilize a large-capacity pre-trained language model (PLM) for a specific domain. Existing research has attempted to improve PLM performance with N-gram-based domain adaptor technology so as to reduce the errors revealed by domain text error analysis of the PLM. Building on that domain adaptor research, a method is proposed for selecting semantic chunks, with an N-gram as the semantic chunk, through a domain semantic chunk graph and PageRank. Also proposed is a method of domain-adapting a large-capacity PLM using semantic chunk dynamic weight masking, which reflects an output value of the PLM rather than simply integrating the embedding values of the semantic chunks, within the semantic chunk domain adaptor technology.
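The abstract above describes selecting N-gram semantic chunks by ranking them with PageRank over a domain semantic chunk graph. The sketch below is a minimal, hypothetical illustration of that selection step only: the sentence-level co-occurrence graph, the toy sentences, and all function names are assumptions, and the dynamic weight masking step is not shown.

```python
# Hypothetical sketch: select domain "semantic chunks" (N-grams) by building a
# co-occurrence graph and ranking nodes with PageRank. The graph construction and
# the dynamic weight masking described in the abstract are not specified here.
from collections import defaultdict
from itertools import combinations

def ngrams(tokens, n=2):
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def build_chunk_graph(sentences, n=2):
    # Nodes are N-grams; an edge links N-grams that co-occur in a sentence.
    graph = defaultdict(set)
    for sent in sentences:
        chunks = ngrams(sent.split(), n)
        for a, b in combinations(set(chunks), 2):
            graph[a].add(b)
            graph[b].add(a)
    return graph

def pagerank(graph, damping=0.85, iters=50):
    nodes = list(graph)
    rank = {v: 1.0 / len(nodes) for v in nodes}
    for _ in range(iters):
        new = {}
        for v in nodes:
            incoming = sum(rank[u] / len(graph[u]) for u in graph if v in graph[u])
            new[v] = (1 - damping) / len(nodes) + damping * incoming
        rank = new
    return rank

domain_sentences = ["the cardiac stent reduces restenosis risk",
                    "drug eluting stent lowers restenosis rate"]
ranks = pagerank(build_chunk_graph(domain_sentences))
top_chunks = sorted(ranks, key=ranks.get, reverse=True)[:5]
print(top_chunks)  # candidate semantic chunks for the domain adaptor
```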
-
2.
Publication Number: US20240330649A1
Publication Date: 2024-10-03
Application Number: US18610804
Application Date: 2024-03-20
Inventor: Jeongmin YANG , Hyun Woo KIM , Hwajeon SONG , Byunghyun YOO , Euisok CHUNG , Ran HAN
IPC: G06N3/043
CPC classification number: G06N3/043
Abstract: Provided are an inference method employing a prompt-based meta-learning network, and a computer system for performing the method. The inference method includes selecting a task, generating a prompt key for the selected task using a prompt-embedding network (PEN), calculating similarities between the prompt key for the selected task and prompt keys included in a prompt key pool (PKP), acquiring a prompt value for the selected task using a memory network (MN), and generating an inference result for the selected task using a model-agnostic meta-learning (MAML)-based pre-trained model (MPM).
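The inference flow listed in the abstract (prompt key generation, similarity matching against the prompt key pool, prompt value retrieval, inference with the pre-trained model) can be pictured with the minimal sketch below. All components are random stand-in linear maps under assumed dimensions; none of the actual PEN, MN, PKP, or MPM internals are specified by the abstract.

```python
# Hypothetical sketch of the inference flow in the abstract: a prompt key is
# produced for a task, matched against a prompt key pool by similarity, the
# corresponding prompt value is retrieved, and a pre-trained model consumes both.
# All network internals here are stand-in linear maps, not the patented models.
import numpy as np

rng = np.random.default_rng(0)
DIM = 16

class PromptEmbeddingNetwork:          # PEN stand-in: task features -> prompt key
    def __init__(self): self.w = rng.normal(size=(DIM, DIM))
    def __call__(self, task_feat): return task_feat @ self.w

class MemoryNetwork:                   # MN stand-in: attention over stored prompt values
    def __init__(self, pool_size): self.values = rng.normal(size=(pool_size, DIM))
    def __call__(self, weights): return weights @ self.values

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

pen = PromptEmbeddingNetwork()
prompt_key_pool = rng.normal(size=(8, DIM))        # PKP with 8 stored keys
memory = MemoryNetwork(pool_size=8)

task_features = rng.normal(size=DIM)               # features of the selected task
prompt_key = pen(task_features)
sims = np.array([cosine(prompt_key, k) for k in prompt_key_pool])
attn = np.exp(sims) / np.exp(sims).sum()           # soft match against the pool
prompt_value = memory(attn)

def maml_pretrained_model(x, prompt):              # MPM stand-in: prompt-conditioned head
    return np.tanh(np.concatenate([x, prompt]) @ rng.normal(size=(2 * DIM,)))

print("inference result:", maml_pretrained_model(task_features, prompt_value))
```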
-
3.
Publication Number: US20230186154A1
Publication Date: 2023-06-15
Application Number: US17893628
Application Date: 2022-08-23
Inventor: Byunghyun YOO , Hyun Woo KIM , Jeon Gue PARK , Hwa Jeon SONG , Jeongmin YANG , Sungwon YI , Euisok CHUNG , Ran HAN
IPC: G06N20/00
CPC classification number: G06N20/00
Abstract: Provided is an exploration method used by an exploration apparatus to collect training samples during multi-agent reinforcement learning. The exploration method includes calculating the influence of each agent's selected action on the actions of the other agents in the current state, calculating a linear sum of the value of a utility function representing each agent's action value and the influence calculated for that selected action, and obtaining a sample for training each agent's action policy by probabilistically selecting between the action whose linear sum is the maximum and a random action.
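A minimal sketch of the exploration rule described above is given below, assuming a toy tabular utility function and an illustrative stand-in definition of the influence term; the patent's actual influence measure is not specified in the abstract.

```python
# Hypothetical sketch of the exploration rule in the abstract: each agent scores
# actions by a linear sum of its utility value and an influence term, then
# probabilistically chooses between the best-scoring action and a random action.
# The influence measure here (shift in a neighboring agent's best utility) is an
# illustrative stand-in, not the patented definition.
import numpy as np

rng = np.random.default_rng(1)
N_AGENTS, N_ACTIONS = 3, 4

# Toy utility tables Q[agent, own action, others' action index].
Q = rng.normal(size=(N_AGENTS, N_ACTIONS, N_ACTIONS))

def influence(agent, action, others_action):
    # Stand-in influence: how much this agent's action shifts a neighbor's best utility.
    base = Q[(agent + 1) % N_AGENTS, others_action].max()
    shifted = Q[(agent + 1) % N_AGENTS, action].max()
    return abs(shifted - base)

def explore_action(agent, others_action, beta=0.5, epsilon=0.1):
    scores = np.array([Q[agent, a, others_action] + beta * influence(agent, a, others_action)
                       for a in range(N_ACTIONS)])
    if rng.random() < epsilon:                 # random action with probability epsilon
        return int(rng.integers(N_ACTIONS))
    return int(scores.argmax())               # otherwise the max linear-sum action

samples = [explore_action(agent=i, others_action=0) for i in range(N_AGENTS)]
print("sampled exploration actions:", samples)
```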
-
4.
Publication Number: US20220180071A1
Publication Date: 2022-06-09
Application Number: US17540768
Application Date: 2021-12-02
Inventor: Eui Sok CHUNG , Hyun Woo KIM , Gyeong Moon PARK , Jeon Gue PARK , Hwa Jeon SONG , Byung Hyun YOO , Ran HAN
IPC: G06F40/40 , G06F40/284 , G06F40/216
Abstract: Provided are a system and method for adaptive masking and non-directional language understanding and generation. The system according to the present invention includes an encoder unit and a decoder unit. The encoder unit includes an adaptive masking block that performs masking on training data, a language generator that restores the masked words, and an encoder that detects whether the restored sentence-construction words are the original words. The decoder unit includes a generation word position detector that detects the position of the word to be generated next, a language generator that determines a word suitable for that position, and a non-directional training data generator for decoder training.
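As a rough illustration of the encoder-side flow described above (mask, restore, detect whether restored words are original), the sketch below uses random masking and a random-fill generator as stand-ins; the actual adaptive masking block, language generator, and decoder unit are not modeled.

```python
# Hypothetical sketch of the encoder-side loop in the abstract: mask some training
# tokens (random positions stand in for the adaptive masking block), have a
# generator propose replacements, and derive the detector targets that label each
# restored token as original or replaced. All components are toy stand-ins.
import random

random.seed(0)
VOCAB = ["the", "model", "masks", "words", "adaptively", "during", "training"]

def adaptive_mask(tokens, mask_rate=0.3):
    # Placeholder for the adaptive masking block: random positions here.
    return [i for i in range(len(tokens)) if random.random() < mask_rate]

def generator_restore(tokens, masked_positions):
    # Placeholder language generator: fills masked slots with vocabulary guesses.
    restored = list(tokens)
    for i in masked_positions:
        restored[i] = random.choice(VOCAB)
    return restored

def detector_labels(original, restored):
    # The encoder's detection target: 1 if the restored word matches the original.
    return [int(o == r) for o, r in zip(original, restored)]

sentence = ["the", "model", "masks", "words", "adaptively"]
positions = adaptive_mask(sentence)
restored = generator_restore(sentence, positions)
print(restored, detector_labels(sentence, restored))
```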
-
5.
Publication Number: US20240232648A1
Publication Date: 2024-07-11
Application Number: US18537588
Application Date: 2023-12-12
Inventor: Hyun-Woo KIM , Hwa-Jeon SONG , Jeong-Min YANG , Byung-Hyun YOO , Eui-Sok CHUNG , Ran HAN
IPC: G06N3/0985 , G06N3/0455 , G06N3/088
CPC classification number: G06N3/0985 , G06N3/0455 , G06N3/088
Abstract: Disclosed herein are a multimodal unsupervised meta-learning method and apparatus. The multimodal unsupervised meta-learning method includes training, by a multimodal unsupervised feature representation learning unit, an encoder configured to extract features of individual single-modal signals from a source multimodal dataset; generating, by a multimodal unsupervised task generation unit, a source task based on the features of the individual single-modal signals; deriving, by a multimodal unsupervised learning method derivation unit, a learning method from the source task using the encoder; and training, by a target task performance unit, a model based on the learning method and on features extracted by the encoder from a small number of target datasets, thereby performing the target task.
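The source-task generation step described above can be pictured with the hypothetical sketch below: unlabeled source signals are encoded and clustered into pseudo-labels that serve as a generated task. The encoder, the k-means clustering, and all shapes are assumptions, not the patented modules.

```python
# Hypothetical sketch of unsupervised task generation: encode unlabeled source
# signals, cluster the features to obtain pseudo-labels, and use those pseudo-labels
# as a "source task" for meta-learning. The encoder and clustering are toy stand-ins.
import numpy as np

rng = np.random.default_rng(2)

def encoder(x):                       # stand-in single-modal feature extractor
    return np.tanh(x @ rng.normal(size=(x.shape[1], 8)))

def kmeans_pseudo_labels(feats, k=3, iters=20):
    centers = feats[rng.choice(len(feats), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((feats[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.stack([feats[labels == c].mean(0) if (labels == c).any() else centers[c]
                            for c in range(k)])
    return labels

source_signals = rng.normal(size=(60, 12))          # unlabeled source-modality data
source_feats = encoder(source_signals)
pseudo_labels = kmeans_pseudo_labels(source_feats)  # the generated "source task"
print("pseudo-label counts:", np.bincount(pseudo_labels))
```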
-
6.
Publication Number: US20210398004A1
Publication Date: 2021-12-23
Application Number: US17353136
Application Date: 2021-06-21
Inventor: Hyun Woo KIM , Gyeong Moon PARK , Jeon Gue PARK , Hwa Jeon SONG , Byung Hyun YOO , Eui Sok CHUNG , Ran HAN
Abstract: Provided are a method and apparatus for online Bayesian few-shot learning that integrate multi-domain-based online learning and few-shot learning when the domains of tasks having data are given sequentially.
-
7.
Publication Number: US20240256885A1
Publication Date: 2024-08-01
Application Number: US18517931
Application Date: 2023-11-22
Inventor: Byunghyun YOO , Hyun Woo KIM , Hwajeon SONG , Jeongmin YANG , Sungwon YI , Euisok CHUNG , Ran HAN
IPC: G06N3/092
CPC classification number: G06N3/092
Abstract: Provided is an exploration method based on reward decomposition in multi-agent reinforcement learning. The exploration method includes: generating a positive reward estimation model through neural network training based on training data including states of all agents, actions of all the agents, and a global reward true value; generating, for each of the agents, a first individual utility function based on the global reward true value and generating a second individual utility function using the positive reward estimation model; and determining an action of each of the agents using the first individual utility function and the second individual utility function based on the state of each of the agents.
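A minimal sketch of the reward-decomposition idea is shown below, under the assumption that simple least-squares regression stands in for the neural-network training named in the abstract; the actual positive-reward model and utility-function forms are not specified there.

```python
# Hypothetical sketch of reward decomposition: fit a model that estimates the
# positive part of the global reward, keep one utility trained on the raw global
# reward and a second trained on the estimated positive reward, and pick actions
# from their combination. Least squares stands in for neural-network training.
import numpy as np

rng = np.random.default_rng(3)
N_ACTIONS, DIM = 4, 6

# Replay of (joint state-action features, global reward true value).
features = rng.normal(size=(200, DIM))
global_reward = features @ rng.normal(size=DIM) + rng.normal(size=200) * 0.1

# Positive reward estimation model: regress onto the clipped-positive reward.
w_pos, *_ = np.linalg.lstsq(features, np.clip(global_reward, 0, None), rcond=None)
# First / second individual utilities: fits to the global vs. estimated positive reward.
w_util1, *_ = np.linalg.lstsq(features, global_reward, rcond=None)
w_util2 = w_pos

def act(state_action_feats, alpha=0.5):
    # state_action_feats: one feature row per candidate action of this agent.
    scores = state_action_feats @ (alpha * w_util1 + (1 - alpha) * w_util2)
    return int(scores.argmax())

candidate_feats = rng.normal(size=(N_ACTIONS, DIM))
print("chosen action:", act(candidate_feats))
```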
-
8.
Publication Number: US20240160859A1
Publication Date: 2024-05-16
Application Number: US18507953
Application Date: 2023-11-13
Inventor: Eui Sok CHUNG , Hyun Woo KIM , Jeon Gue PARK , Hwa Jeon SONG , Jeong Min YANG , Byung Hyun YOO , Ran HAN
IPC: G06F40/40
CPC classification number: G06F40/40
Abstract: The present invention relates to a multi-modality system for recommending multiple items using an interaction and a method of operating the same. The multi-modality system includes an interaction data preprocessing module that preprocesses an interaction data set and converts the preprocessed interaction data set into interaction training data; an item data preprocessing module that preprocesses item information data and converts the preprocessed item information data into item training data; and a learning module that includes a neural network model that is trained using the interaction training data and the item training data and outputs a result including a set of recommended items using a conversation context with a user as input.
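The inference path described above (conversation context in, recommended item set out) can be pictured with the toy sketch below, where bag-of-words vectors stand in for the preprocessing modules and the trained neural network; the item catalog and scoring are illustrative assumptions.

```python
# Hypothetical sketch: item information and a conversation context are mapped into
# a shared feature space and the highest-scoring items are returned as the
# recommendation set. Bag-of-words vectors stand in for the trained model.
import numpy as np

items = {"wireless headphones": "bluetooth audio noise cancelling",
         "trail running shoes": "lightweight outdoor running grip",
         "espresso machine": "coffee kitchen brew pressure"}

vocab = sorted({w for text in items.values() for w in text.split()})

def bow(text):
    vec = np.zeros(len(vocab))
    for w in text.split():
        if w in vocab:
            vec[vocab.index(w)] += 1.0
    return vec

item_matrix = np.stack([bow(t) for t in items.values()])

def recommend(conversation_context, top_k=2):
    query = bow(conversation_context)          # conversation context as input
    scores = item_matrix @ query
    order = np.argsort(-scores)[:top_k]
    return [list(items)[i] for i in order]     # set of recommended items

print(recommend("looking for something for outdoor running"))
```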
-
9.
Publication Number: US20230274127A1
Publication Date: 2023-08-31
Application Number: US18088428
Application Date: 2022-12-23
Inventor: Hyun Woo KIM , Jeon Gue PARK , Hwajeon SONG , Jeongmin YANG , Byunghyun YOO , Euisok CHUNG , Ran HAN
IPC: G06N3/045 , G06F18/15 , G06F18/213 , G06F18/22
CPC classification number: G06N3/045 , G06F18/15 , G06F18/213 , G06F18/22
Abstract: A concept-based few-shot learning method is disclosed. The method includes estimating a task embedding corresponding to the task to be executed from support data, which is a small amount of learning data; calculating, based on the task embedding, a slot probability of the concept memory needed for the task; extracting features of the query data (the test data) and of the support data; comparing local features of the extracted features with the slots of the concept memory to extract concepts, and generating synthesis features that have maximum similarity to the extracted features through the concept-memory slots; and calculating a task execution result from the synthesis features and the extracted concepts by applying the slot probability as a weight.
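A hypothetical sketch of the concept-memory lookup described above is given below: local features are compared with memory slots, a synthesis feature is rebuilt from the most similar slots, and slot probabilities weight the final score. All embeddings, slot contents, and the scoring rule are stand-ins, not the patented formulation.

```python
# Hypothetical sketch of the concept-memory lookup: attend local features over
# concept slots, rebuild synthesis features from the memory, and weight the final
# score by slot probabilities derived from a task embedding. Everything is random.
import numpy as np

rng = np.random.default_rng(4)
N_SLOTS, DIM = 6, 8
concept_memory = rng.normal(size=(N_SLOTS, DIM))        # learned concept slots

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

task_embedding = rng.normal(size=DIM)                   # estimated from support data
slot_prob = softmax(concept_memory @ task_embedding)    # which slots the task needs

def synthesize(local_features):
    # Attend each local feature over the slots and rebuild it from the memory.
    sims = local_features @ concept_memory.T             # [n_local, n_slots]
    attn = np.apply_along_axis(softmax, 1, sims)
    concepts = attn.argmax(axis=1)                        # extracted concept per feature
    synthesis = attn @ concept_memory                     # [n_local, dim]
    return synthesis, concepts

query_local_feats = rng.normal(size=(5, DIM))             # features of one query example
synthesis, concepts = synthesize(query_local_feats)
# Slot probability acts as a weight on how much each extracted concept contributes.
score = sum(slot_prob[c] * (s @ task_embedding) for c, s in zip(concepts, synthesis))
print("extracted concepts:", concepts, "task score:", float(score))
```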
-
10.
Publication Number: US20210374545A1
Publication Date: 2021-12-02
Application Number: US17332464
Application Date: 2021-05-27
Inventor: Hyun Woo KIM , Jeon Gue PARK , Hwa Jeon SONG , Yoo Rhee OH , Byung Hyun YOO , Eui Sok CHUNG , Ran HAN
Abstract: A knowledge increasing method includes calculating the uncertainty of knowledge obtained from a neural network using an explicit memory, determining whether the knowledge is insufficient on the basis of the calculated uncertainty, obtaining additional data (learning data) to increase the insufficient knowledge, and training the neural network with the additional data so that knowledge is increased autonomously.
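The loop described above (estimate uncertainty, detect insufficient knowledge, acquire data, retrain) is sketched below under the assumption that ensemble disagreement stands in for the explicit-memory-based uncertainty estimate; the data-acquisition oracle is purely illustrative.

```python
# Hypothetical sketch of the knowledge-increasing loop: estimate uncertainty (here
# via disagreement of a small ensemble), flag inputs where knowledge is insufficient,
# acquire additional labeled data for them, and retrain. The ensemble and the
# acquisition oracle are illustrative stand-ins, not the patented design.
import numpy as np

rng = np.random.default_rng(5)
DIM, N_MODELS = 5, 4

def true_fn(x):                      # oracle used only to simulate newly acquired labels
    return np.sin(x.sum(axis=1))

def fit(x, y):                       # least-squares "training" stand-in
    w, *_ = np.linalg.lstsq(x, y, rcond=None)
    return w

x_train = rng.normal(size=(30, DIM))
y_train = true_fn(x_train)
ensemble = [fit(x_train + rng.normal(scale=0.1, size=x_train.shape), y_train)
            for _ in range(N_MODELS)]

pool = rng.normal(size=(100, DIM))                    # candidate inputs to probe
preds = np.stack([pool @ w for w in ensemble])        # [n_models, n_pool]
uncertainty = preds.std(axis=0)                       # disagreement = uncertainty

insufficient = uncertainty > np.quantile(uncertainty, 0.9)        # knowledge gaps
x_new, y_new = pool[insufficient], true_fn(pool[insufficient])    # acquire data
w_updated = fit(np.vstack([x_train, x_new]), np.concatenate([y_train, y_new]))
print(f"acquired {insufficient.sum()} samples for retraining")
```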
-