-
Publication No.: US10929612B2
Publication Date: 2021-02-23
Application No.: US16217804
Filing Date: 2018-12-12
Inventor: Ho Young Jung , Hyun Woo Kim , Hwa Jeon Song , Eui Sok Chung , Jeon Gue Park
Abstract: Provided are a neural network memory computing system and method. The neural network memory computing system includes a first processor configured to learn a sense-making process on the basis of sense-making multimodal training data stored in a database, receive multiple modalities, and output a sense-making result on the basis of results of the learning, and a second processor configured to generate a sense-making training set for the first processor to increase knowledge for learning a sense-making process and provide the generated sense-making training set to the first processor.
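The two-processor loop in the abstract can be sketched as follows. This is a minimal illustration only, with hypothetical class names and toy data; the actual system learns the sense-making process with a neural network memory rather than the dictionary lookup used here.

```python
class SecondProcessor:
    """Generates a sense-making training set (toy stand-in for the
    training-set generator described in the abstract)."""
    def generate_training_set(self):
        # Each example pairs multiple modalities (text, image tag)
        # with a sense-making result.
        return [
            (("a dog running", "dog_photo"), "animal in motion"),
            (("a parked car", "car_photo"), "stationary vehicle"),
        ]

class FirstProcessor:
    """Accumulates knowledge from training sets and outputs a
    sense-making result for multimodal input."""
    def __init__(self):
        self.knowledge = {}  # stands in for the neural network memory

    def learn(self, training_set):
        for modalities, result in training_set:
            self.knowledge[modalities] = result

    def interpret(self, modalities):
        # Unseen modality combinations yield an "unknown" result.
        return self.knowledge.get(modalities, "unknown")

second = SecondProcessor()
first = FirstProcessor()
first.learn(second.generate_training_set())
print(first.interpret(("a dog running", "dog_photo")))  # animal in motion
```

The point of the sketch is the division of labor: the second processor exists only to expand the first processor's knowledge by feeding it generated training sets.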
-
Publication No.: US12217185B2
Publication Date: 2025-02-04
Application No.: US17332464
Filing Date: 2021-05-27
Inventor: Hyun Woo Kim , Jeon Gue Park , Hwa Jeon Song , Yoo Rhee Oh , Byung Hyun Yoo , Eui Sok Chung , Ran Han
Abstract: A knowledge increasing method includes calculating uncertainty of knowledge obtained from a neural network using an explicit memory, determining the insufficiency of the knowledge on the basis of the calculated uncertainty, obtaining additional data (learning data) for increasing insufficient knowledge, and training the neural network by using the additional data to autonomously increase knowledge.
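The uncertainty-then-retrain cycle described above can be illustrated with a common uncertainty measure. Predictive entropy is an assumption here (the abstract does not name the measure); the threshold value is likewise arbitrary.

```python
import math

def predictive_entropy(probs):
    """Shannon entropy of a predicted class distribution,
    used as an uncertainty score for the obtained knowledge."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def knowledge_insufficient(probs, threshold=0.5):
    """Determine insufficiency of knowledge: uncertainty above the
    threshold triggers obtaining additional learning data."""
    return predictive_entropy(probs) > threshold

# Confident prediction -> knowledge sufficient, no additional data needed.
confident = [0.97, 0.02, 0.01]
# Near-uniform prediction -> knowledge insufficient, fetch more data and retrain.
uncertain = [0.4, 0.35, 0.25]

print(knowledge_insufficient(confident))  # False
print(knowledge_insufficient(uncertain))  # True
```

In the method, a positive insufficiency decision drives the next step: additional data is obtained and the neural network is retrained, closing the autonomous knowledge-increasing loop.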
-
Publication No.: US11423238B2
Publication Date: 2022-08-23
Application No.: US16671773
Filing Date: 2019-11-01
Inventor: Eui Sok Chung , Hyun Woo Kim , Hwa Jeon Song , Ho Young Jung , Byung Ok Kang , Jeon Gue Park , Yoo Rhee Oh , Yun Keun Lee
IPC: G06F40/56 , G06F40/30 , G06F40/289
Abstract: Provided are a sentence embedding method and apparatus based on subword embedding and skip-thoughts. To integrate skip-thought sentence embedding learning with a subword embedding technique, a skip-thought sentence embedding learning method based on subword embedding is provided, together with a multitask learning methodology that simultaneously performs subword embedding learning and skip-thought sentence embedding learning; this serves as a way of applying intra-sentence contextual information to subword embedding during subword embedding learning. As a result, a sentence embedding approach in a bag-of-words form can be applied to agglutinative languages such as Korean. Also, because skip-thought sentence embedding learning is integrated with the subword embedding technique, intra-sentence contextual information can be used during subword embedding learning. The proposed model minimizes the training parameters added for sentence embedding, so that most training results are accumulated in the subword embedding parameters.
-
Publication No.: US09959862B2
Publication Date: 2018-05-01
Application No.: US15187581
Filing Date: 2016-06-20
Inventor: Byung Ok Kang , Jeon Gue Park , Hwa Jeon Song , Yun Keun Lee , Eui Sok Chung
CPC classification number: G10L15/16 , G10L15/063 , G10L15/07 , G10L2015/022 , G10L2015/0636
Abstract: A speech recognition apparatus based on a deep-neural-network (DNN) sound model includes a memory and a processor. As the processor executes a program stored in the memory, the processor generates sound-model state sets corresponding to a plurality of pieces of set training speech data included in multi-set training speech data, generates a multi-set state cluster from the sound-model state sets, and sets the multi-set training speech data as an input node and the multi-set state cluster as output nodes so as to learn a DNN structured parameter.
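The merge of per-set sound-model states into a multi-set state cluster can be sketched as below. Grouping states by a shared phonetic label is an assumed clustering criterion for illustration; the patent does not restrict the method to this rule.

```python
def build_state_cluster(state_sets):
    """Merge per-set sound-model state sets into a multi-set state cluster.

    state_sets: one list per training-speech set, each state given as a
    (phonetic_label, state_id) pair. States sharing a label are clustered,
    so each cluster becomes one shared DNN output node across sets.
    """
    cluster = {}
    for set_idx, states in enumerate(state_sets):
        for label, state_id in states:
            cluster.setdefault(label, []).append((set_idx, state_id))
    return cluster

# Two toy sound-model state sets from two pieces of set training speech data.
set_a = [("ah", 0), ("k", 1)]
set_b = [("ah", 0), ("s", 1)]

cluster = build_state_cluster([set_a, set_b])
print(sorted(cluster))     # ['ah', 'k', 's']
print(len(cluster["ah"]))  # 2 -> the "ah" output node is shared by both sets
```

The resulting clusters serve as the output nodes when learning the DNN parameters, while the combined multi-set training speech data feeds the input nodes.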
-
Publication No.: US10402494B2
Publication Date: 2019-09-03
Application No.: US15439416
Filing Date: 2017-02-22
Inventor: Eui Sok Chung , Byung Ok Kang , Ki Young Park , Jeon Gue Park , Hwa Jeon Song , Sung Joo Lee , Yun Keun Lee , Hyung Bae Jeon
Abstract: Provided is a method of automatically expanding input text. The method includes receiving input text composed of a plurality of documents, extracting a sentence pair that is present in different documents among the plurality of documents, setting the extracted sentence pair as an input of an encoder of a sequence-to-sequence model, setting an output of the encoder as an output of a decoder of the sequence-to-sequence model and generating a sentence corresponding to the input, and generating expanded text based on the generated sentence.
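The pair-extraction step that feeds the sequence-to-sequence encoder can be sketched as below. The function name and document representation are hypothetical; the sketch covers only pair extraction, not the encoder-decoder generation that follows it.

```python
from itertools import combinations

def extract_cross_document_pairs(documents):
    """Extract sentence pairs whose two sentences come from *different*
    documents, as required for the encoder input.

    documents: list of documents, each a list of sentences.
    """
    pairs = []
    for (_, doc_a), (_, doc_b) in combinations(enumerate(documents), 2):
        for sent_a in doc_a:
            for sent_b in doc_b:
                pairs.append((sent_a, sent_b))
    return pairs

docs = [
    ["The model expands input text."],                              # doc 1
    ["Input text has many documents.", "Pairs are extracted."],     # doc 2
]
pairs = extract_cross_document_pairs(docs)
print(len(pairs))  # 2: doc 1's sentence paired with each sentence of doc 2
```

Each extracted pair is then fed to the encoder, the decoder generates a new sentence conditioned on the encoder output, and the generated sentences form the expanded text.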