METHOD FOR TRAINING MULTILINGUAL SEMANTIC REPRESENTATION MODEL, DEVICE AND STORAGE MEDIUM

    Publication No.: US20220019743A1

    Publication Date: 2022-01-20

    Application No.: US17318577

    Filing Date: 2021-05-12

    Abstract: Technical solutions relate to the natural language processing field based on artificial intelligence. According to an embodiment, a multilingual semantic representation model is trained using a plurality of training language materials represented in a plurality of languages respectively, such that the multilingual semantic representation model learns the semantic representation capability of each language; a corresponding mixed-language language material is generated for each of the plurality of training language materials, and the mixed-language language material includes language materials in at least two languages; and the multilingual semantic representation model is trained using each mixed-language language material and the corresponding training language material, such that the multilingual semantic representation model learns semantic alignment information of different languages.
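One way to read the second stage above is that each monolingual training material gets a counterpart in which some segments are replaced by their translations, giving a material in at least two languages. A minimal sketch under that reading (the pair-based input, the random swap strategy, and all names here are illustrative assumptions, not the patented procedure):

```python
import random

def make_mixed_language_material(parallel_segments, swap_prob=0.5, seed=0):
    """Replace some segments of a monolingual training material with
    their translations to obtain a mixed-language material containing
    at least two languages. Illustrative assumption only."""
    rng = random.Random(seed)
    return " ".join(tgt if rng.random() < swap_prob else src
                    for src, tgt in parallel_segments)

# Hypothetical English/French segment pairs.
pairs = [("the cat", "le chat"), ("sat on", "s'est assis sur"),
         ("the mat", "le tapis")]
print(make_mixed_language_material(pairs))
```

Training on such a pair (the mixed material and its original) is what would let the model align the semantics of the swapped segments across languages.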

    METHOD FOR RESOURCE SORTING, METHOD FOR TRAINING SORTING MODEL AND CORRESPONDING APPARATUSES

    Publication No.: US20210374344A1

    Publication Date: 2021-12-02

    Application No.: US17094943

    Filing Date: 2020-11-11

    Abstract: A method for resource sorting, a method for training a sorting model and corresponding apparatuses, which relate to the technical field of natural language processing under artificial intelligence, are disclosed. The method according to some embodiments includes: forming an input sequence, in order, from an item to be matched and information of candidate resources; performing Embedding processing on each Token in the input sequence, the Embedding processing including word Embedding, position Embedding and statement Embedding; and inputting the result of the Embedding processing into a sorting model to obtain the sorting model's sorting scores for the candidate resources, the sorting model being obtained by pre-training a Transformer model.
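The three Embedding lookups named above are conventionally combined by element-wise addition before the sequence enters a Transformer. A minimal sketch of that combination, assuming dict-based lookup tables and an embedding dimension of 3 (the table layout and the additive combination are assumptions for illustration, not details from the abstract):

```python
def embed_token(token_id, position, statement_id,
                word_emb, pos_emb, stmt_emb):
    """Sum the word, position and statement Embedding lookups
    element-wise, in the usual Transformer-input style."""
    return [w + p + s for w, p, s in zip(word_emb[token_id],
                                         pos_emb[position],
                                         stmt_emb[statement_id])]

# Tiny hand-made tables with embedding dimension 3 (hypothetical values).
word_emb = {7: [0.1, 0.2, 0.3]}
pos_emb = {0: [0.01, 0.01, 0.01]}
stmt_emb = {1: [1.0, 0.0, 0.0]}
print(embed_token(7, 0, 1, word_emb, pos_emb, stmt_emb))
```

The statement Embedding is what distinguishes the item to be matched from the candidate-resource information within the single input sequence.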

    METHOD FOR TRAINING LANGUAGE MODEL, ELECTRONIC DEVICE AND READABLE STORAGE MEDIUM

    Publication No.: US20210374334A1

    Publication Date: 2021-12-02

    Application No.: US17117211

    Filing Date: 2020-12-10

    Inventors: Yukun LI, Zhen LI, Yu SUN

    Abstract: A method for training a language model, an electronic device and a readable storage medium, which relate to the field of natural language processing technologies in artificial intelligence, are disclosed. The method may include pre-training the language model using preset text language materials in a corpus; replacing at least one word in a sample text language material with a word mask respectively to obtain a sample text language material including at least one word mask; inputting the sample text language material including the at least one word mask into the language model, and outputting a context vector of each of the at least one word mask via the language model; determining a word vector corresponding to each word mask based on the context vector of the word mask and a word vector parameter matrix; and training the language model based on the word vector corresponding to each word mask.
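The masking and prediction steps above can be sketched as follows. A common reading of "determining a word vector ... based on the context vector of the word mask and a word vector parameter matrix" is a dot product of the context vector with each row of the matrix followed by a softmax; the exact scoring function is not fixed by the abstract, so this is an illustrative assumption:

```python
import math

MASK = "[MASK]"

def mask_word(tokens, index):
    """Replace one word of the sample text with a word mask."""
    masked = list(tokens)
    masked[index] = MASK
    return masked

def mask_word_distribution(context_vector, word_vector_matrix):
    """Score every vocabulary word by the dot product of the mask's
    context vector with a row of the word vector parameter matrix,
    then normalize with a softmax (assumed scoring, for illustration)."""
    logits = [sum(c * w for c, w in zip(context_vector, row))
              for row in word_vector_matrix]
    peak = max(logits)
    exps = [math.exp(l - peak) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

Training then pushes the distribution for each mask toward the word that was masked out.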

    METHOD AND APPARATUS FOR GENERATING SEMANTIC REPRESENTATION MODEL, AND STORAGE MEDIUM

    Publication No.: US20210248484A1

    Publication Date: 2021-08-12

    Application No.: US17205894

    Filing Date: 2021-03-18

    Abstract: The disclosure provides a method and an apparatus for generating a semantic representation model, and a storage medium. The detailed implementation includes: performing recognition and segmentation on each original text in an original text set to obtain the knowledge units and non-knowledge units in the original text; performing knowledge unit-level disorder processing on the knowledge units and the non-knowledge units in the original text to obtain a disorder text; generating a training text set based on the character attribute of each character in the disorder text; and training an initial semantic representation model with the training text set to generate the semantic representation model.
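The key point of unit-level disorder is that shuffling happens across whole units, never inside one, so each knowledge unit survives intact while the order is disturbed. A minimal sketch assuming the text is already segmented into units and using a uniform shuffle (both assumptions; the patent's disorder strategy may differ):

```python
import random

def disorder_text(units, seed=0):
    """Knowledge unit-level disorder processing: permute the sequence
    of (knowledge and non-knowledge) units so every unit stays intact
    while the overall order is disturbed. Uniform shuffle is an
    illustrative assumption."""
    shuffled = list(units)
    random.Random(seed).shuffle(shuffled)
    return shuffled

# Hypothetical segmentation into knowledge and non-knowledge units.
units = ["Harbin", "is the capital of", "Heilongjiang", "."]
print(disorder_text(units))
```

A model trained to restore the original order from such a disorder text must learn relations between whole knowledge units rather than between isolated characters.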

    METHOD FOR TRAINING LANGUAGE MODEL BASED ON VARIOUS WORD VECTORS, DEVICE AND MEDIUM

    Publication No.: US20210374352A1

    Publication Date: 2021-12-02

    Application No.: US16951702

    Filing Date: 2020-11-18

    Inventors: Zhen LI, Yukun LI, Yu SUN

    Abstract: A method for training a language model based on various word vectors, a device and a medium, which relate to the field of natural language processing technologies in artificial intelligence, are disclosed. An implementation includes inputting a first sample text language material including a first word mask into the language model, and outputting a context vector of the first word mask via the language model; acquiring a first probability distribution matrix of the first word mask based on the context vector of the first word mask and a first word vector parameter matrix, and a second probability distribution matrix of the first word mask based on the context vector of the first word mask and a second word vector parameter matrix; and training the language model based on a word vector corresponding to the first word mask.
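The two probability distributions above come from scoring the same context vector against two different word vector parameter matrices. A minimal sketch, assuming dot-product scoring followed by a softmax for each matrix (the scoring function and the decision to keep the two distributions separate are assumptions; the abstract does not fix these details):

```python
import math

def softmax(scores):
    peak = max(scores)
    exps = [math.exp(s - peak) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dual_mask_distributions(context_vector, first_matrix, second_matrix):
    """First and second probability distributions of the masked word,
    one per word vector parameter matrix (assumed dot-product scoring)."""
    def score(matrix):
        return [sum(c * w for c, w in zip(context_vector, row))
                for row in matrix]
    return softmax(score(first_matrix)), softmax(score(second_matrix))
```

Training against both distributions is what lets the single language model absorb information from two differently-trained word vector spaces.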

    METHOD, APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM FOR PROCESSING A SEMANTIC REPRESENTATION MODEL

    Publication No.: US20210182498A1

    Publication Date: 2021-06-17

    Application No.: US16885358

    Filing Date: 2020-05-28

    Abstract: The present disclosure provides a method, apparatus, electronic device and storage medium for processing a semantic representation model, and relates to the field of artificial intelligence technologies. A specific implementation solution is: collecting a training corpus set including a plurality of training corpuses; and training the semantic representation model using the training corpus set based on at least one of lexicon, grammar and semantics. By building unsupervised or weakly-supervised training tasks at three different levels, namely lexicon, grammar and semantics, the semantic representation model learns knowledge at the lexicon, grammar and semantics levels from massive data, enhancing its universal semantic representation capability and improving performance on NLP tasks.

    METHOD AND APPARATUS FOR INFORMATION PROCESSING

    Publication No.: US20190065507A1

    Publication Date: 2019-02-28

    Application No.: US16054920

    Filing Date: 2018-08-03

    Abstract: Embodiments of the present disclosure disclose a method and apparatus for processing information. A specific implementation of the method includes: acquiring a search result set related to a search statement inputted by a user; parsing the search statement to generate a first syntax tree, and parsing a search result in the search result set to generate a second syntax tree set; calculating a similarity between the search statement and the search result in the search result set using a pre-trained semantic matching model on the basis of the first syntax tree and the second syntax tree set, the semantic matching model being used to determine the similarity between the syntax trees; and sorting the search result in the search result set on the basis of the similarity between the search statement and the search result in the search result set, and pushing the sorted search result set to the user.

    SEARCH METHOD AND APPARATUS BASED ON ARTIFICIAL INTELLIGENCE

    Publication No.: US20190065506A1

    Publication Date: 2019-02-28

    Application No.: US16054842

    Filing Date: 2018-08-03

    Abstract: Embodiments of the present disclosure disclose a search method and apparatus based on artificial intelligence. A specific implementation of the method comprises: acquiring at least one candidate document related to a query sentence; determining a query word vector sequence corresponding to a segmented word sequence of the query sentence, and determining a candidate document word vector sequence corresponding to a segmented word sequence of each candidate document in the at least one candidate document; performing a similarity calculation between the query sentence and each candidate document in the at least one candidate document; and selecting, in descending order of similarity to the query sentence, a preset number of candidate documents from the at least one candidate document as a search result.
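The final selection step above is a standard similarity-ranked top-k cut. A minimal sketch, using cosine similarity over fixed document vectors as a stand-in for the patent's similarity calculation (which the abstract does not spell out):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def top_k_by_similarity(query_vector, candidate_vectors, k):
    """Rank candidate documents by similarity to the query in
    descending order and keep a preset number of them. Cosine over
    precomputed vectors is an illustrative assumption."""
    ranked = sorted(candidate_vectors.items(),
                    key=lambda item: cosine(query_vector, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]
```

In practice the query and document vectors would be derived from the word vector sequences described in the abstract, e.g. by pooling.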
