Method and apparatus for correcting character errors, electronic device and storage medium

    Publication Number: US11443100B2

    Publication Date: 2022-09-13

    Application Number: US16950975

    Application Date: 2020-11-18

Abstract: A method and apparatus for correcting character errors, an electronic device and a storage medium are disclosed, which relate to the fields of natural language processing and deep learning. The method may include: for a character to be processed, acquiring a score for each character in a pre-constructed vocabulary, the score measuring how reasonable that vocabulary character is at the position of the character to be processed; selecting the top K characters as candidates of the character to be processed, K being a positive integer greater than one; selecting an optimal candidate from the K candidates; and replacing the character to be processed with the optimal candidate if the optimal candidate is different from the character to be processed. With the solution of the present application, the accuracy of an error correction result may be improved.
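The candidate-selection steps in this abstract can be sketched as follows. This is a minimal illustration, not the patented implementation: the scoring function is a caller-supplied stand-in, whereas a real system would score each vocabulary character with a trained model at the position of the character under inspection.

```python
def correct_character(char, position, sentence, vocab, score_fn, k=3):
    """Return the best replacement for `char`, or `char` itself if it is optimal.

    score_fn(c, position, sentence) is assumed to return a reasonability
    score for vocabulary character `c` at the given position.
    """
    # Score every character in the pre-constructed vocabulary.
    scored = [(score_fn(c, position, sentence), c) for c in vocab]
    # Keep the top K candidates (K is a positive integer greater than one).
    candidates = [c for _, c in sorted(scored, reverse=True)[:k]]
    # Select the optimal candidate; here simply the highest-scoring one.
    best = candidates[0]
    # Replace only when the optimal candidate differs from the original.
    return best if best != char else char
```

A toy scoring function makes the flow concrete: if the model scores "b" highest at a position currently holding "a", the character is replaced; if it already holds "b", it is left alone.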

    Method and apparatus for generating parallel text in same language

    Publication Number: US10650102B2

    Publication Date: 2020-05-12

    Application Number: US15900166

    Application Date: 2018-02-20

    Abstract: The present disclosure discloses a method and apparatus for generating a parallel text in the same language. The method comprises: acquiring a source segmented word sequence and a pre-trained word vector table; determining a source word vector sequence corresponding to the source segmented word sequence, according to the word vector table; importing the source word vector sequence into a first pre-trained recurrent neural network model, to generate an intermediate vector of a preset dimension for characterizing semantics of the source segmented word sequence; importing the intermediate vector into a second pre-trained recurrent neural network model, to generate a target word vector sequence corresponding to the intermediate vector; and determining a target segmented word sequence corresponding to the target word vector sequence according to the word vector table, and determining the target segmented word sequence as a parallel text in the same language corresponding to the source segmented word sequence.
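The pipeline in this abstract (word vector lookup, encoding to a fixed-dimension intermediate vector, decoding back to word vectors, and nearest-neighbor lookup in the same table) can be sketched as below. The `encode` and `decode` functions are assumptions standing in for the two pre-trained recurrent neural network models; only the data flow matches the abstract.

```python
import numpy as np

def words_to_vectors(words, table):
    """Look up each segmented word in the word vector table."""
    return np.stack([table[w] for w in words])

def nearest_word(vec, table):
    """Map a vector back to the closest entry in the word vector table."""
    return min(table, key=lambda w: np.linalg.norm(table[w] - vec))

def generate_parallel_text(words, table, encode, decode):
    source_vecs = words_to_vectors(words, table)  # source word vector sequence
    intermediate = encode(source_vecs)            # fixed-dimension semantic vector
    target_vecs = decode(intermediate)            # target word vector sequence
    return [nearest_word(v, table) for v in target_vecs]
```

With trivial stand-ins (e.g. `encode` as a mean over word vectors), the function returns a sequence of words drawn from the same table, i.e. a same-language parallel text.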

    METHOD AND APPARATUS FOR GENERATING PARALLEL TEXT IN SAME LANGUAGE

    Publication Number: US20180365231A1

    Publication Date: 2018-12-20

    Application Number: US15900166

    Application Date: 2018-02-20

Abstract: The present disclosure discloses a method and apparatus for generating a parallel text in the same language. The method comprises: acquiring a source segmented word sequence and a pre-trained word vector table; determining a source word vector sequence corresponding to the source segmented word sequence, according to the word vector table; importing the source word vector sequence into a first pre-trained recurrent neural network model, to generate an intermediate vector of a preset dimension for characterizing semantics of the source segmented word sequence; importing the intermediate vector into a second pre-trained recurrent neural network model, to generate a target word vector sequence corresponding to the intermediate vector; and determining a target segmented word sequence corresponding to the target word vector sequence according to the word vector table, and determining the target segmented word sequence as a parallel text in the same language corresponding to the source segmented word sequence.

    Optimizer learning method and apparatus, electronic device and readable storage medium

    Publication Number: US12260327B2

    Publication Date: 2025-03-25

    Application Number: US17210141

    Application Date: 2021-03-23

    Abstract: The present application discloses an optimizer learning method and apparatus, an electronic device and a readable storage medium, which relates to the field of deep learning technologies. An implementation solution adopted by the present application during optimizer learning is: acquiring training data, the training data including a plurality of data sets each including neural network attribute information, neural network optimizer information, and optimizer parameter information; and training a meta-learning model by taking the neural network attribute information and the neural network optimizer information in the data sets as input and taking the optimizer parameter information in the data sets as output, until the meta-learning model converges. The present application can implement self-adaptation of optimizers, so as to improve generalization capability of the optimizers.
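The training setup in this abstract can be sketched as follows. The meta-learning model here is a linear regressor fit by gradient descent, which is an assumption (the abstract does not fix a model class): inputs are feature vectors combining neural network attribute information and optimizer information, outputs are optimizer parameters, and training runs until the loss converges.

```python
import numpy as np

def train_meta_model(features, targets, lr=0.1, tol=1e-9, max_steps=20000):
    """Fit weights W so that features @ W approximates targets.

    features: (n_samples, n_features) array of network attribute +
              optimizer features; targets: (n_samples, n_params) array
              of optimizer parameters.
    """
    w = np.zeros((features.shape[1], targets.shape[1]))
    prev_loss = np.inf
    for _ in range(max_steps):
        pred = features @ w
        grad = features.T @ (pred - targets) / len(features)
        w -= lr * grad
        loss = np.mean((pred - targets) ** 2)
        if abs(prev_loss - loss) < tol:  # stop once the model converges
            break
        prev_loss = loss
    return w
```

Once trained, such a model maps a new network's attributes to suggested optimizer parameters, which is one way to realize the self-adaptation of optimizers the abstract describes.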

    Method and apparatus for generating dialogue model

    Publication Number: US11537798B2

    Publication Date: 2022-12-27

    Application Number: US16895297

    Application Date: 2020-06-08

    Abstract: Embodiments of the present disclosure relate to a method and apparatus for generating a dialogue model. The method may include: acquiring a corpus sample set, a corpus sample including input information and target response information; classifying corpus samples in the corpus sample set, setting discrete hidden variables for the corpus samples based on a classification result to generate a training sample set, a training sample including the input information, the target response information, and a discrete hidden variable; and training a preset neural network using the training sample set to obtain the dialogue model, the dialogue model being used to represent a corresponding relationship between inputted input information and outputted target response information.
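The data-preparation step in this abstract (classifying corpus samples and attaching a discrete hidden variable based on the classification result) can be sketched as below. The `classify` function is a caller-supplied assumption; in the disclosed method the classification result determines the discrete hidden variable for each sample.

```python
def build_training_set(corpus, classify):
    """Turn corpus samples into training samples with discrete hidden variables.

    corpus: list of (input_info, target_response) pairs.
    classify(input_info, response) is assumed to return a discrete class
    index, used here as the hidden variable.
    """
    training_set = []
    for input_info, response in corpus:
        latent = classify(input_info, response)  # discrete hidden variable
        training_set.append((input_info, response, latent))
    return training_set
```

The resulting triples (input information, target response information, discrete hidden variable) are what the abstract feeds to the preset neural network to obtain the dialogue model.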

    OPTIMIZER LEARNING METHOD AND APPARATUS, ELECTRONIC DEVICE AND READABLE STORAGE MEDIUM

    Publication Number: US20220004867A1

    Publication Date: 2022-01-06

    Application Number: US17210141

    Application Date: 2021-03-23

    Abstract: The present application discloses an optimizer learning method and apparatus, an electronic device and a readable storage medium, which relates to the field of deep learning technologies. An implementation solution adopted by the present application during optimizer learning is: acquiring training data, the training data including a plurality of data sets each including neural network attribute information, neural network optimizer information, and optimizer parameter information; and training a meta-learning model by taking the neural network attribute information and the neural network optimizer information in the data sets as input and taking the optimizer parameter information in the data sets as output, until the meta-learning model converges. The present application can implement self-adaptation of optimizers, so as to improve generalization capability of the optimizers.
