-
11.
Publication No.: US20220215204A1
Publication Date: 2022-07-07
Application No.: US17570226
Filing Date: 2022-01-06
Inventor: Ningombam Devarani Devi , Sungwon YI , Hyun Woo KIM , Hwa Jeon SONG , Byung Hyun YOO
IPC: G06K9/62
Abstract: Provided is a method for exploration based on curiosity and prioritization of experience data in multi-agent reinforcement learning, the method including the steps of: calculating a similarity between a policy of a first agent and a policy of a second agent and computing a final reward using the similarity; and performing clustering on a replay buffer using the calculated similarity and sampling data from within the resulting clusters.
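As one illustration of the abstract's two steps, the sketch below pairs a policy-similarity reward with similarity-based replay partitioning. It is a minimal sketch, not the patented method: the KL-divergence similarity measure, the bonus weight beta, the two-way buffer split, and the transition format are all assumptions made for illustration.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-8):
    """KL(p || q) between two discrete action distributions,
    used here as a stand-in for the policy-similarity measure."""
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    return float(np.sum(p * np.log(p / q)))

def final_reward(extrinsic, policy_a, policy_b, beta=0.1):
    """Extrinsic reward plus a curiosity bonus that grows as the two
    agents' policies diverge (hypothetical combination rule)."""
    return extrinsic + beta * kl_divergence(policy_a, policy_b)

def split_replay_buffer(buffer, threshold=0.5):
    """Partition stored transitions by their policy-similarity score;
    a prioritized sampler could then draw preferentially from one cluster."""
    novel = [t for t in buffer if t["similarity"] > threshold]
    familiar = [t for t in buffer if t["similarity"] <= threshold]
    return novel, familiar

# Toy usage: two softmax policies over three actions.
pi_1, pi_2 = [0.7, 0.2, 0.1], [0.3, 0.4, 0.3]
print(final_reward(1.0, pi_1, pi_2))
buffer = [{"similarity": 0.8, "obs": 0}, {"similarity": 0.2, "obs": 1}]
novel, familiar = split_replay_buffer(buffer)
print(len(novel), len(familiar))
```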
-
12.
Publication No.: US20220180071A1
Publication Date: 2022-06-09
Application No.: US17540768
Filing Date: 2021-12-02
Inventor: Eui Sok CHUNG , Hyun Woo KIM , Gyeong Moon PARK , Jeon Gue PARK , Hwa Jeon SONG , Byung Hyun YOO , Ran HAN
IPC: G06F40/40 , G06F40/284 , G06F40/216
Abstract: Provided are a system and method for adaptive masking and non-directional language understanding and generation. The system according to the present invention includes an encoder unit and a decoder unit. The encoder unit includes an adaptive masking block for performing masking on training data, a language generator for restoring the masked words, and an encoder for detecting whether or not the restored sentence-construction words are original. The decoder unit includes a generation word position detector for detecting the position of the word to be generated next, a language generator for determining a word suitable for the corresponding position, and a non-directional training data generator for decoder training.
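The encoder-side training signal described above (mask adaptively, restore, then detect which words are original) can be sketched as follows. This is a toy reading of the abstract, not the patented system: the difficulty-weighted masking probabilities, the 15% mask budget, and the token format are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_mask(tokens, difficulty):
    """Mask tokens with probability proportional to a per-token
    difficulty score (one plausible reading of 'adaptive masking')."""
    p = np.asarray(difficulty, dtype=float)
    p /= p.sum()
    n_mask = max(1, int(0.15 * len(tokens)))
    idx = rng.choice(len(tokens), size=n_mask, replace=False, p=p)
    masked = list(tokens)
    for i in idx:
        masked[i] = "[MASK]"
    return masked, sorted(idx)

def originality_labels(original, restored):
    """Encoder-side objective: label each position 1 if the restored
    word matches the original sentence-construction word, else 0."""
    return [int(o == r) for o, r in zip(original, restored)]

tokens = ["the", "model", "restores", "masked", "words"]
masked, idx = adaptive_mask(tokens, difficulty=[1, 3, 5, 4, 2])
# A language generator would fill the [MASK] slots; here the corrupted
# sequence itself stands in for an imperfect restoration.
print(masked)
print(originality_labels(tokens, masked))
```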
-
13.
Publication No.: US20200219166A1
Publication Date: 2020-07-09
Application No.: US16711934
Filing Date: 2019-12-12
Inventor: Hyun Woo KIM , Hwa Jeon SONG , Eui Sok CHUNG , Ho Young JUNG , Jeon Gue PARK , Yun Keun LEE
Abstract: Provided are a method and apparatus for estimating a user's requirement through a neural network capable of reading and writing a working memory, and for providing fashion coordination knowledge appropriate for the requirement through the neural network using a long-term memory, by employing a neural network with an explicit memory in order to provide the fashion coordination knowledge accurately. The apparatus includes a language embedding unit for embedding a user's question and a previously created answer to acquire a digitized embedding vector; a fashion coordination knowledge creation unit for creating fashion coordination through the neural network having the explicit memory by using the embedding vector as an input; and a dialog creation unit for creating dialog content for configuring the fashion coordination through the neural network having the explicit memory by using the fashion coordination knowledge and the embedding vector as inputs.
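A content-addressed read/write over an explicit memory, as the knowledge creation unit described above might use it, can be sketched as below. This is a minimal sketch under stated assumptions: the dot-product addressing, the slot-overwrite rule, and the memory sizes are hypothetical, and the embedding, coordination, and dialog networks are stubbed out.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

class ExplicitMemory:
    """Toy key-value memory: content-addressed read, slot-overwrite write."""
    def __init__(self, slots, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.keys = rng.normal(size=(slots, dim))
        self.values = rng.normal(size=(slots, dim))

    def read(self, query):
        """Attention over keys; return the weighted sum of values."""
        weights = softmax(self.keys @ query)
        return weights @ self.values

    def write(self, key, value):
        """Overwrite the slot whose key is most similar to `key`."""
        slot = int(np.argmax(self.keys @ key))
        self.keys[slot], self.values[slot] = key, value

# The question/answer pair would be embedded into a query vector, the
# long-term memory read for coordination knowledge, and the result fed
# to the coordination and dialog creation networks (not shown).
mem = ExplicitMemory(slots=8, dim=16)
query = np.random.default_rng(1).normal(size=16)
knowledge = mem.read(query)
print(knowledge.shape)
```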
-
14.
Publication No.: US20200175119A1
Publication Date: 2020-06-04
Application No.: US16671773
Filing Date: 2019-11-01
Inventor: Eui Sok CHUNG , Hyun Woo KIM , Hwa Jeon SONG , Ho Young JUNG , Byung Ok KANG , Jeon Gue PARK , Yoo Rhee OH , Yun Keun LEE
Abstract: Provided are a sentence embedding method and apparatus based on subword embedding and skip-thoughts. To integrate skip-thought sentence embedding learning with a subword embedding technique, two approaches are provided for applying intra-sentence contextual information to subword embedding during subword embedding learning: a skip-thought sentence embedding learning method based on subword embedding, and a multitask learning methodology that simultaneously learns subword embedding and skip-thought sentence embedding. This makes it possible to apply a sentence embedding approach to agglutinative languages such as Korean in a bag-of-words form. Also, the skip-thought sentence embedding learning methodology is integrated with the subword embedding technique so that intra-sentence contextual information can be used during subword embedding learning. The proposed model minimizes additional training parameters based on sentence embedding so that most training results can be accumulated in a subword embedding parameter.
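To make the bag-of-words idea concrete, the sketch below builds sentence vectors by averaging fastText-style character n-gram (subword) vectors and scores a toy skip-thought objective against neighboring sentences, so that nearly all parameters sit in the subword table. The hashed bucket table, the squared-distance surrogate for the skip-thought decoder likelihood, and all sizes are illustrative assumptions, not the proposed model itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def char_ngrams(word, n=3):
    """Subword units: character n-grams with boundary markers,
    in the style of fastText subword embedding."""
    w = f"<{word}>"
    return [w[i:i + n] for i in range(len(w) - n + 1)]

class SubwordTable:
    """Hashed lookup table of subword vectors (toy scale; Python's
    hash() is run-dependent, a fixed hash would be used in practice)."""
    def __init__(self, buckets=1000, dim=16):
        self.vecs = rng.normal(scale=0.1, size=(buckets, dim))
        self.buckets = buckets

    def word_vector(self, word):
        idx = [hash(g) % self.buckets for g in char_ngrams(word)]
        return self.vecs[idx].mean(axis=0)

    def sentence_vector(self, sentence):
        """Bag-of-words sentence embedding: the average of word vectors,
        so training signal accumulates in the subword table."""
        return np.mean([self.word_vector(w) for w in sentence.split()], axis=0)

def skip_thought_loss(table, prev_s, cur_s, next_s):
    """Toy skip-thought objective: the current sentence vector should be
    close to its neighbors (squared distance stands in for the decoder
    likelihood used in skip-thought training)."""
    c = table.sentence_vector(cur_s)
    return sum(np.sum((c - table.sentence_vector(s)) ** 2)
               for s in (prev_s, next_s))

table = SubwordTable()
print(skip_thought_loss(table, "the cat sat", "it purred softly", "then it slept"))
```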