-
11.
Publication No.: US20180365579A1
Publication Date: 2018-12-20
Application No.: US16008559
Filing Date: 2018-06-14
Inventor: Shengxian Wan , Yu Sun , Dianhai Yu
Abstract: The present disclosure provides a method and apparatus for evaluating the matching degree of multi-domain information based on artificial intelligence, a device and a medium. The method comprises: respectively obtaining valid words in a query and valid words in each of at least two information domains in a to-be-queried document; respectively obtaining word expressions of the valid words in the query and of the valid words in each information domain; based on the word expressions, respectively obtaining context-based word expressions of the valid words in the query and in each information domain; generating matching features corresponding to each information domain according to the obtained expressions; and determining a matching degree score between the query and the to-be-queried document according to the matching features corresponding to each information domain.
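As a purely illustrative sketch of the idea (not the patent's learned model), the snippet below computes one matching feature per information domain with a simple bag-of-words cosine similarity and averages them into a score; the function names, the similarity measure, and the averaging rule are all assumptions:

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def matching_score(query_words, domains):
    # One matching feature per information domain (e.g. title, body),
    # combined here by a plain average in place of the learned scorer.
    q = Counter(query_words)
    feats = [cosine(q, Counter(words)) for words in domains.values()]
    return sum(feats) / len(feats)

doc = {"title": ["deep", "learning", "tutorial"],
       "body": ["an", "introduction", "to", "deep", "neural", "networks"]}
score = matching_score(["deep", "learning"], doc)
```

A learned model would replace both the word vectors and the feature aggregation, but the per-domain structure stays the same.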
-
12.
Publication No.: US11687718B2
Publication Date: 2023-06-27
Application No.: US17116846
Filing Date: 2020-12-09
Inventor: Chao Pang , Shuohuan Wang , Yu Sun , Hua Wu , Haifeng Wang
IPC: G06F17/00 , G06F40/295 , G06F40/137 , G06F40/30
CPC classification number: G06F40/295 , G06F40/137 , G06F40/30
Abstract: A method, an apparatus, a device and a storage medium for learning a knowledge representation are provided. The method can include: sampling a sub-graph of a knowledge graph from a knowledge base; serializing the sub-graph to obtain a serialized text; and reading the serialized text with a pre-trained language model in the order given by the sub-graph, to learn a knowledge representation of each word in the serialized text. The knowledge representation learning in this embodiment performs entity and relationship representation learning on the knowledge base.
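A minimal sketch of the serialization step, assuming the sub-graph is given as (head, relation, tail) triples and that reading order simply follows the triple order; the function name and the space-joined text format are hypothetical:

```python
def serialize_subgraph(triples):
    # Flatten (head, relation, tail) triples into a token sequence that a
    # language model can then read in graph order.
    tokens = []
    for head, rel, tail in triples:
        tokens.extend([head, rel, tail])
    return " ".join(tokens)

sub_graph = [("Paris", "capital_of", "France"),
             ("France", "located_in", "Europe")]
text = serialize_subgraph(sub_graph)
```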
-
13.
Publication No.: US11562150B2
Publication Date: 2023-01-24
Application No.: US17031569
Filing Date: 2020-09-24
Inventor: Han Zhang , Dongling Xiao , Yukun Li , Yu Sun , Hao Tian , Hua Wu , Haifeng Wang
Abstract: The present disclosure proposes a language generation method and apparatus. The method includes: performing encoding processing on an input sequence by using a preset encoder to generate a hidden state vector corresponding to the input sequence; in response to a granularity category of a second target segment being a phrase, decoding a first target segment vector, the hidden state vector, and a position vector corresponding to the second target segment by using N decoders to generate N second target segments; determining a loss value based on differences between respective N second target segments and a second target annotated segment; and performing parameter updating on the preset encoder, a preset classifier, and the N decoders based on the loss value to generate an updated language generation model for performing language generation.
-
14.
Publication No.: US11556715B2
Publication Date: 2023-01-17
Application No.: US16951702
Filing Date: 2020-11-18
IPC: G06F40/30 , G06N20/00 , G06F40/279
Abstract: A method for training a language model based on various word vectors, a device and a medium, which relate to the field of natural language processing technologies in artificial intelligence, are disclosed. An implementation includes inputting a first sample text language material including a first word mask into the language model, and outputting a context vector of the first word mask via the language model; acquiring a first probability distribution matrix of the first word mask based on the context vector of the first word mask and a first word vector parameter matrix, and a second probability distribution matrix of the first word mask based on the context vector of the first word mask and a second word vector parameter matrix; and training the language model based on a word vector corresponding to the first word mask.
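The two probability distributions can be sketched as softmax-normalised dot products between the mask's context vector and each row of the two word vector parameter matrices; the toy vectors below are illustrative assumptions, not the patent's parameters:

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def mask_distributions(context_vec, vocab_matrix_a, vocab_matrix_b):
    # Score the masked position against each word vector in the two
    # parameter matrices, then normalise each into a distribution.
    def scores(matrix):
        return [sum(c * w for c, w in zip(context_vec, row)) for row in matrix]
    return softmax(scores(vocab_matrix_a)), softmax(scores(vocab_matrix_b))

ctx = [0.2, 0.5, 0.1]                      # context vector of the word mask
A = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]     # first word vector parameter matrix
B = [[0.5, 0.5, 0.0], [0.0, 0.0, 1.0]]     # second word vector parameter matrix
p1, p2 = mask_distributions(ctx, A, B)
```

Training would then combine both distributions in a loss against the true masked word; that combination is not specified in the abstract.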
-
15.
Publication No.: US11520991B2
Publication Date: 2022-12-06
Application No.: US16885358
Filing Date: 2020-05-28
Inventor: Yu Sun , Haifeng Wang , Shuohuan Wang , Yukun Li , Shikun Feng , Hao Tian , Hua Wu
Abstract: The present disclosure provides a method, apparatus, electronic device and storage medium for processing a semantic representation model, and relates to the field of artificial intelligence technologies. A specific implementation solution is: collecting a training corpus set including a plurality of training corpuses; and training the semantic representation model using the training corpus set based on at least one of lexicon, grammar and semantics. By building unsupervised or weakly-supervised training tasks at three different levels, namely lexicon, grammar and semantics, the semantic representation model is enabled to learn knowledge at the levels of lexicon, grammar and semantics from massive data, enhancing its capability of universal semantic representation and improving its performance on NLP tasks.
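A hedged sketch of what self-supervised tasks at the three levels might look like over an unlabeled corpus (masked-token recovery for lexicon, sentence reordering for grammar, adjacency labeling for semantics); the concrete task designs here are illustrative assumptions, not the patent's actual tasks:

```python
import random

def build_tasks(sentences, seed=0):
    # Three hypothetical self-supervised tasks over an unlabeled corpus:
    # lexicon   - mask one token and ask the model to recover it;
    # grammar   - shuffle sentences and ask for the original order;
    # semantics - pair adjacent sentences with a positive label.
    rng = random.Random(seed)

    tokens = sentences[0].split()
    i = rng.randrange(len(tokens))
    lexicon = (tokens[:i] + ["[MASK]"] + tokens[i + 1:], tokens[i])

    order = list(range(len(sentences)))
    rng.shuffle(order)
    grammar = ([sentences[j] for j in order], order)

    semantics = [(sentences[k], sentences[k + 1], 1)
                 for k in range(len(sentences) - 1)]
    return lexicon, grammar, semantics

corpus = ["the cat sat on the mat", "it purred softly", "then it slept"]
lex, gram, sem = build_tasks(corpus)
```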
-
16.
Publication No.: US20220300763A1
Publication Date: 2022-09-22
Application No.: US17209051
Filing Date: 2021-03-22
Abstract: The present disclosure provides a method, apparatus, electronic device and storage medium for training a semantic similarity model, which relates to the field of artificial intelligence. A specific implementation solution is as follows: obtaining a target field to be used by a semantic similarity model to be trained; calculating respective correlations between the target field and the application fields corresponding to each of multiple known training datasets; and training the semantic similarity model with the training datasets in turn, according to the respective correlations between the target field and the application fields corresponding to the training datasets. According to the technical solution of the present disclosure, in the fine-tuning phase the semantic similarity model can be trained more purposefully with the training datasets, with reference to the correlations between the target field and the application fields corresponding to the training datasets, thereby effectively improving the learning capability of the semantic similarity model and the accuracy of the trained model.
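One plausible reading of "training with the datasets in turn according to the correlations" is a curriculum that ends on the most related field, since the final updates dominate fine-tuning; the sketch below orders datasets by a hypothetical keyword-overlap (Jaccard) correlation, the real correlation measure being unspecified in the abstract:

```python
def correlation(a, b):
    # Hypothetical stand-in: Jaccard overlap between field keyword sets.
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def order_datasets_by_correlation(target_field, datasets, corr):
    # Sort training datasets so the least-related fields come first and
    # the most-related field is fine-tuned on last.
    return sorted(datasets, key=lambda d: corr(target_field, d["field"]))

datasets = [{"field": "medical question answering", "data": []},
            {"field": "legal search", "data": []},
            {"field": "medical search", "data": []}]
plan = order_datasets_by_correlation("medical search", datasets, correlation)
```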
-
17.
Publication No.: US11366973B2
Publication Date: 2022-06-21
Application No.: US16691104
Filing Date: 2019-11-21
Inventor: Jingwei Wang , Ao Zhang , Jiaxiang Liu , Yu Sun , Zhi Li
IPC: G06F40/35 , G06F40/186 , G06F40/289
Abstract: Embodiments of the present disclosure disclose a method and apparatus for determining a topic. A specific embodiment of the method comprises: determining a to-be-recognized sentence sequence; calculating similarities between the to-be-recognized sentence sequence and each of the topic templates in a topic template set in a target area, each topic template in the set corresponding to one of at least one topic in the target area, a topic template including a topic section sequence, and a topic section including a topic sentence sequence; and determining a topic of the to-be-recognized sentence sequence according to an associated parameter, the associated parameter including the similarities between the to-be-recognized sentence sequence and each of the topic templates in the topic template set. This embodiment reduces labor costs during topic segmentation.
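A toy version of template matching, using sentence-set overlap as a stand-in for the similarity used in the disclosure; the template contents, the overlap measure, and the max-over-sections scoring are assumptions:

```python
def sentence_overlap(a, b):
    # Fraction of shared sentences, a crude similarity stand-in.
    sa, sb = set(a), set(b)
    return len(sa & sb) / max(len(sa | sb), 1)

def best_topic(sentences, templates):
    # templates: topic name -> list of topic-sentence sequences (sections).
    # Score each topic by its best-matching section, pick the highest.
    scores = {topic: max(sentence_overlap(sentences, sec) for sec in sections)
              for topic, sections in templates.items()}
    return max(scores, key=scores.get)

templates = {"greeting": [["hello", "how are you"]],
             "billing":  [["what is my balance", "pay my bill"]]}
topic = best_topic(["hello", "goodbye"], templates)
```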
-
18.
Publication No.: US10698932B2
Publication Date: 2020-06-30
Application No.: US15990157
Filing Date: 2018-05-25
Inventor: Shuohuan Wang , Yu Sun , Dianhai Yu
IPC: G06F16/332 , G06N20/00 , G06F40/35 , G06F40/205 , G06F40/216 , G06F40/242 , G06N3/04 , G06N3/08 , G06N5/02 , G06N7/00
Abstract: The present disclosure provides a method and apparatus for parsing a query based on artificial intelligence, and a storage medium, wherein the method comprises: for any application domain, obtaining a knowledge library corresponding to the application domain; determining training queries serving as training language materials according to the knowledge library; obtaining a deep query parsing model by training on the training language materials; and using the deep query parsing model to parse a user's query to obtain a parsing result. The solution of the present disclosure may be applied to improve the accuracy of the parsing result.
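One common way to derive training queries from a knowledge library is template expansion over its entities and attributes; the sketch below assumes that approach, and every template and name in it is hypothetical rather than taken from the patent:

```python
def generate_training_queries(knowledge_library, templates):
    # Expand query templates with entities and attributes from the
    # knowledge library to build training language materials.
    queries = []
    for entity, attrs in knowledge_library.items():
        for attr in attrs:
            for tmpl in templates:
                queries.append(tmpl.format(entity=entity, attr=attr))
    return queries

library = {"Eiffel Tower": ["height", "location"]}
templates = ["what is the {attr} of the {entity}",
             "tell me the {entity} {attr}"]
queries = generate_training_queries(library, templates)
```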
-
19.
Publication No.: US10528667B2
Publication Date: 2020-01-07
Application No.: US15900176
Filing Date: 2018-02-20
Inventor: Yukun Li , Yi Liu , Yu Sun , Dianhai Yu
Abstract: An artificial intelligence based method and apparatus for generating information are disclosed. The method in an embodiment includes: segmenting a to-be-processed text into characters to obtain a character sequence; determining a character vector for each character in the character sequence to generate a character vector sequence; generating a plurality of character vector subsequences by segmenting the character vector sequence based on a preset vocabulary; for each generated character vector subsequence, determining the sum of the character vectors composing the subsequence as a target vector, and inputting the target vector into a pre-trained first neural network to obtain a word vector corresponding to that subsequence, the first neural network being used to characterize a correspondence between target vectors and word vectors; and analyzing the to-be-processed text based on the obtained word vectors to generate an analysis result. This embodiment improves the adaptability of text processing.
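The per-subsequence step can be sketched as summing character vectors into a target vector and mapping it through the trained network; here an identity function stands in for the pre-trained first neural network, so only the summation is real:

```python
def word_vector_from_chars(char_vectors, net):
    # Sum the character vectors of a vocabulary-matched subsequence into a
    # target vector, then map it through the (stand-in) trained network.
    dim = len(char_vectors[0])
    target = [sum(vec[i] for vec in char_vectors) for i in range(dim)]
    return net(target)

# Hypothetical "network": identity mapping in place of the trained model.
identity_net = lambda v: v

chars = [[0.1, 0.2], [0.3, 0.4]]   # character vectors for one subsequence
wv = word_vector_from_chars(chars, identity_net)
```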
-
20.
Publication No.: US11914964B2
Publication Date: 2024-02-27
Application No.: US17209124
Filing Date: 2021-03-22
Inventor: Shuohuan Wang , Jiaxiang Liu , Xuan Ouyang , Yu Sun , Hua Wu , Haifeng Wang
Abstract: The present application discloses a method and apparatus for training a semantic representation model, a device and a computer storage medium, which relates to the field of natural language processing technologies in artificial intelligence. An implementation includes: acquiring a semantic representation model which has been trained for a first language as a first semantic representation model; taking a bottom layer and a top layer of the first semantic representation model as trained layers, initializing the trained layers, keeping the model parameters of the other layers unchanged, and training the trained layers using training language materials of a second language until a training ending condition is met; successively bringing the untrained layers into the trained layers from bottom to top, and for each of them: keeping the model parameters of layers other than the trained layers unchanged, and training the trained layers using the training language materials of the second language until the training ending condition is met; and obtaining a semantic representation model for the second language after all the layers are trained.
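The layer-scheduling idea can be sketched as a plan of which layers are trainable at each stage: first the bottom and top layers, then each remaining layer brought in from bottom to top. This is one reading of the abstract, not the exact procedure:

```python
def progressive_training_plan(num_layers):
    # Stage 0: only the bottom (0) and top (num_layers - 1) layers train;
    # all other parameters stay frozen. Each later stage adds the next
    # untrained layer, bottom to top, to the trainable set.
    trained = {0, num_layers - 1}
    stages = [set(trained)]
    for layer in range(1, num_layers - 1):
        trained.add(layer)
        stages.append(set(trained))
    return stages

plan = progressive_training_plan(4)
```

In a real run, each stage would train until the ending condition is met before the next layer is unfrozen.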
-