-
Publication No.: US12217185B2
Publication Date: 2025-02-04
Application No.: US17332464
Filing Date: 2021-05-27
Inventor: Hyun Woo Kim , Jeon Gue Park , Hwa Jeon Song , Yoo Rhee Oh , Byung Hyun Yoo , Eui Sok Chung , Ran Han
Abstract: A knowledge-increasing method includes calculating the uncertainty of knowledge obtained from a neural network using an explicit memory, determining whether the knowledge is insufficient on the basis of the calculated uncertainty, obtaining additional (training) data to supplement the insufficient knowledge, and training the neural network with the additional data so that it autonomously increases its knowledge.
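The selection step of the abstract can be sketched minimally. The entropy measure, the threshold, and the toy predictor below are illustrative assumptions, not the patent's actual uncertainty calculation or memory mechanism:

```python
import math

def predictive_entropy(probs):
    """Entropy of a predicted class distribution (nats); higher = more uncertain."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_uncertain(samples, predict, threshold):
    """Return the samples whose predictions are too uncertain, i.e. where the
    network's knowledge is judged insufficient and extra training data is needed."""
    return [x for x in samples if predictive_entropy(predict(x)) > threshold]

# Toy predictor: confident on even inputs, maximally uncertain on odd ones.
predict = lambda x: [0.95, 0.05] if x % 2 == 0 else [0.5, 0.5]
print(select_uncertain([1, 2, 3, 4], predict, threshold=0.5))  # flags [1, 3]
```

In a full system, the flagged samples would drive the data-acquisition and retraining steps described in the abstract.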
-
Publication No.: US11423238B2
Publication Date: 2022-08-23
Application No.: US16671773
Filing Date: 2019-11-01
Inventor: Eui Sok Chung , Hyun Woo Kim , Hwa Jeon Song , Ho Young Jung , Byung Ok Kang , Jeon Gue Park , Yoo Rhee Oh , Yun Keun Lee
IPC: G06F40/56 , G06F40/30 , G06F40/289
Abstract: Provided are a sentence embedding method and apparatus based on subword embedding and skip-thoughts. To integrate skip-thought sentence embedding learning with a subword embedding technique, two methodologies are provided for applying intra-sentence contextual information to subword embedding during subword embedding learning: a skip-thought sentence embedding learning method based on subword embedding, and a multitask learning methodology that learns subword embedding and skip-thought sentence embedding simultaneously. This makes it possible to apply a bag-of-words sentence embedding approach to agglutinative languages such as Korean. Because skip-thought sentence embedding learning is integrated with the subword embedding technique, intra-sentence contextual information can be used during subword embedding learning. The proposed model minimizes the training parameters added for sentence embedding, so that most training results accumulate in the subword embedding parameters.
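The bag-of-words composition the abstract relies on can be sketched as follows. The character n-gram scheme, the tiny embedding table, and the vector dimension are illustrative assumptions; the patent's actual subword inventory and learned parameters are not reproduced here:

```python
def subwords(word, n=3):
    """Character n-grams with boundary markers, as used in subword embedding schemes."""
    w = f"<{word}>"
    return [w[i:i + n] for i in range(len(w) - n + 1)]

def sentence_vector(sentence, emb, dim=2):
    """Bag-of-words sentence embedding: the sum of the subword vectors of every
    word. Unknown subwords contribute nothing, keeping the method open-vocabulary."""
    vec = [0.0] * dim
    for word in sentence.split():
        for sw in subwords(word):
            for i, v in enumerate(emb.get(sw, [0.0] * dim)):
                vec[i] += v
    return vec

# Hypothetical 2-dimensional embeddings for the subwords of "cat".
emb = {"<ca": [1, 0], "cat": [0, 1], "at>": [1, 1]}
print(sentence_vector("cat", emb))  # [2.0, 2.0]
```

In the described multitask setup, these subword vectors would additionally be trained with a skip-thought objective so that sentence context shapes them.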
-
Publication No.: US09959862B2
Publication Date: 2018-05-01
Application No.: US15187581
Filing Date: 2016-06-20
Inventor: Byung Ok Kang , Jeon Gue Park , Hwa Jeon Song , Yun Keun Lee , Eui Sok Chung
CPC classification number: G10L15/16 , G10L15/063 , G10L15/07 , G10L2015/022 , G10L2015/0636
Abstract: A speech recognition apparatus based on a deep-neural-network (DNN) sound model includes a memory and a processor. As the processor executes a program stored in the memory, the processor generates sound-model state sets corresponding to the pieces of set training speech data included in multi-set training speech data, generates a multi-set state cluster from the sound-model state sets, and sets the multi-set training speech data as input nodes and the multi-set state cluster as output nodes so as to learn the DNN-structured parameters.
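The multi-set state cluster can be sketched minimally as a greedy merge of near-identical states across the per-set state inventories. The vector representation of a state, the distance test, and the eps value are illustrative assumptions, not the patent's actual clustering criterion:

```python
def merge_states(state_sets, eps=0.1):
    """Greedily cluster sound-model states from several training sets: a state
    joins the first cluster whose representative (first member) is within eps
    in every dimension; otherwise it starts a new cluster. The resulting
    clusters would serve as the shared DNN output nodes."""
    clusters = []
    for states in state_sets:
        for s in states:
            for c in clusters:
                if all(abs(a - b) < eps for a, b in zip(c[0], s)):
                    c.append(s)
                    break
            else:
                clusters.append([s])
    return clusters

# Two hypothetical state sets; (0.05, 0.05) merges with (0.0, 0.0).
sets = [[(0.0, 0.0), (1.0, 1.0)], [(0.05, 0.05), (2.0, 2.0)]]
print(len(merge_states(sets)))  # 3 shared output nodes
```

A real acoustic model would cluster tied HMM states by a likelihood or divergence criterion rather than a per-dimension threshold; the structure of the merge is what this sketch shows.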
-
Publication No.: US09805716B2
Publication Date: 2017-10-31
Application No.: US15042309
Filing Date: 2016-02-12
Inventor: Sung Joo Lee , Byung Ok Kang , Jeon Gue Park , Yun Keun Lee , Hoon Chung
CPC classification number: G10L15/142 , G10L15/063 , G10L15/16 , G10L21/02
Abstract: Provided is an apparatus for large vocabulary continuous speech recognition (LVCSR) based on a context-dependent deep neural network hidden Markov model (CD-DNN-HMM) algorithm. The apparatus may include an extractor configured to extract acoustic model-state level information corresponding to an input speech signal from a training data model set using at least one of a first feature vector based on a gammatone filterbank signal analysis algorithm and a second feature vector based on a bottleneck algorithm, and a speech recognizer configured to provide a result of recognizing the input speech signal based on the extracted acoustic model-state level information.
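The gammatone filterbank underlying the first feature vector places its channels on the ERB-rate scale. A minimal sketch of that spacing, using the standard Glasberg and Moore constants (the band edges and channel count below are illustrative assumptions, not the patent's configuration):

```python
import math

def erb_space(low_hz, high_hz, n):
    """Center frequencies evenly spaced on the ERB-rate scale, the spacing
    conventionally used for a gammatone filterbank."""
    erb = lambda f: 21.4 * math.log10(1 + 0.00437 * f)          # Hz -> ERB-rate
    inv = lambda e: (10 ** (e / 21.4) - 1) / 0.00437            # ERB-rate -> Hz
    lo, hi = erb(low_hz), erb(high_hz)
    return [inv(lo + i * (hi - lo) / (n - 1)) for i in range(n)]

# 32 hypothetical channels spanning 100 Hz to 8 kHz.
centers = erb_space(100.0, 8000.0, 32)
```

The filter outputs at these center frequencies would form the gammatone-based feature vector, which the extractor could use alongside (or instead of) the bottleneck feature vector, per the "at least one of" language in the abstract.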
-