Abstract:
The method includes: collecting historical operations of sample users for M items, and predicting a preference value of a target user for each of the M items according to the historical operations of the sample users for each of the M items; collecting classification data of N to-be-recommended items, and classifying the N to-be-recommended items according to the classification data of the N to-be-recommended items, to obtain X themes, where each of the X themes includes at least one of the N to-be-recommended items, and the N to-be-recommended items are some or all of the M items; calculating a preference value of the target user for each of the X themes according to a preference value of the target user for a to-be-recommended item included in each of the X themes; and pushing a target theme to the target user.
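For illustration only, the following Python sketch walks through the flow in this abstract: predict per-item preference values from the sample users' historical operations, group the to-be-recommended items into themes from their classification data, score each theme from its items' preference values, and push the highest-scoring theme. The function names, the averaging heuristics, and the toy data are assumptions, not the patented implementation.

```python
# Minimal sketch of the theme-recommendation flow; heuristics and data are toy values.
from collections import defaultdict

def predict_item_preferences(sample_ratings, target_ratings, items):
    """Predict the target user's preference value for each item.

    Toy heuristic: use the target user's own historical rating if present,
    otherwise fall back to the sample users' average rating for the item.
    """
    prefs = {}
    for item in items:
        if item in target_ratings:
            prefs[item] = target_ratings[item]
        else:
            scores = [r[item] for r in sample_ratings if item in r]
            prefs[item] = sum(scores) / len(scores) if scores else 0.0
    return prefs

def group_into_themes(classification):
    """Group the N to-be-recommended items into themes by their class label."""
    themes = defaultdict(list)
    for item, label in classification.items():
        themes[label].append(item)
    return themes

def pick_target_theme(item_prefs, themes):
    """Score each theme as the mean preference of its items; return the best one."""
    scores = {t: sum(item_prefs[i] for i in items) / len(items)
              for t, items in themes.items()}
    return max(scores, key=scores.get), scores

if __name__ == "__main__":
    sample_ratings = [{"i1": 4, "i2": 2, "i3": 5}, {"i1": 5, "i3": 4}]   # sample users
    target_ratings = {"i2": 1}                                           # target user
    classification = {"i1": "sports", "i2": "news", "i3": "sports"}      # N of the M items

    prefs = predict_item_preferences(sample_ratings, target_ratings, classification)
    target_theme, theme_scores = pick_target_theme(prefs, group_into_themes(classification))
    print("theme preference values:", theme_scores, "-> push theme:", target_theme)
```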
Abstract:
A recommendation result generation method, where the method includes obtaining article content information of at least one article and user score information of at least one user, where user score information of a first user of the at least one user includes a historical score of the first user for the at least one article; encoding the article content information and the user score information using an article neural network and a user neural network, respectively, to obtain a target article latent vector of each of the at least one article and a target user latent vector of each of the at least one user; and calculating a recommendation result for each user according to the target article latent vector and the target user latent vector.
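A minimal dual-encoder sketch of this idea, assuming NumPy and tiny two-layer encoders with random (untrained) weights in place of the article and user neural networks; the layer sizes and the inner-product scoring rule are illustrative choices, not the model described here.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w1, w2):
    """Tiny two-layer encoder: linear -> ReLU -> linear."""
    return np.maximum(x @ w1, 0.0) @ w2

# Toy data: 3 articles with 8-dim content features, 2 users scoring those articles.
article_content = rng.normal(size=(3, 8))
user_scores = np.array([[5.0, 0.0, 3.0],
                        [0.0, 4.0, 0.0]])           # 0 = no historical score

# Random weights stand in for the (trained) article and user neural networks.
wa1, wa2 = rng.normal(size=(8, 16)), rng.normal(size=(16, 4))
wu1, wu2 = rng.normal(size=(3, 16)), rng.normal(size=(16, 4))

article_latent = encode(article_content, wa1, wa2)  # target article latent vectors
user_latent = encode(user_scores, wu1, wu2)         # target user latent vectors

# Recommendation result: score every (user, article) pair by inner product.
scores = user_latent @ article_latent.T
for u, row in enumerate(scores):
    print(f"user {u}: top article {int(np.argmax(row))}, scores {np.round(row, 2)}")
```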
Abstract:
A social message monitoring method is implemented by receiving a social message from a social network server and obtaining a theme probability vector of the social message; comparing the theme probability vector of the social message with a theme probability vector of each representative message to obtain a theme similarity, and acquiring a similarity between the social message and each representative message according to the theme similarity; saving the social message in the message class that contains the representative message most similar to the social message; and outputting the message class to a social network client when a quantity of social messages in the message class reaches a first threshold or the themes of the social messages in the message class are consistent.
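A rough Python sketch of this monitoring loop, assuming each incoming message already carries a theme probability vector (e.g. from a topic model) and using cosine similarity as the theme similarity; the threshold value, the class bookkeeping, and the toy message stream are illustrative, and the alternative "themes are consistent" output trigger is omitted for brevity.

```python
import numpy as np

FIRST_THRESHOLD = 3          # assumed value of the "first threshold"

def cosine(a, b):
    """Theme similarity between two theme probability vectors (cosine here)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

class MessageClass:
    def __init__(self, representative_vec):
        self.representative = representative_vec   # theme vector of the representative message
        self.messages = []

def monitor(message, theme_vec, classes, output):
    """Save the message in the class whose representative is most similar,
    then output the class once it holds enough messages."""
    best = max(classes, key=lambda c: cosine(theme_vec, c.representative))
    best.messages.append(message)
    if len(best.messages) >= FIRST_THRESHOLD:
        output(best)                                # e.g. push to a social network client
        best.messages.clear()

if __name__ == "__main__":
    classes = [MessageClass(np.array([0.8, 0.1, 0.1])),
               MessageClass(np.array([0.1, 0.1, 0.8]))]
    stream = [("match tonight!", np.array([0.7, 0.2, 0.1])),
              ("new phone released", np.array([0.1, 0.2, 0.7])),
              ("team wins the cup", np.array([0.75, 0.15, 0.1])),
              ("league standings out", np.array([0.8, 0.1, 0.1]))]
    for msg, vec in stream:
        monitor(msg, vec, classes, lambda c: print("push message class:", c.messages))
```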
Abstract:
A route planning method includes obtaining exercise capability information of a wearer and one or more candidate routes, where the candidate routes include attribute features that comprise historical exercise capability information, and the historical exercise capability information is calculated, according to a first preset rule, from obtained exercise capability information of a plurality of users who have exercised along the candidate routes; determining a target route based on the attribute features of the candidate routes and the exercise capability information of the wearer; and outputting information about the target route.
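An illustrative sketch of the route-selection step: compare the wearer's exercise capability with each candidate route's historical capability attribute and output the closest match. The scalar capability score, the averaging "first preset rule", and the distance metric are assumptions made only for this example.

```python
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    past_user_capabilities: list  # capability scores of users who exercised along this route

    @property
    def historical_capability(self):
        # The "first preset rule" is assumed here to be a simple average.
        return sum(self.past_user_capabilities) / len(self.past_user_capabilities)

def pick_target_route(wearer_capability, candidates):
    """Choose the route whose historical capability is closest to the wearer's."""
    return min(candidates,
               key=lambda r: abs(r.historical_capability - wearer_capability))

if __name__ == "__main__":
    candidates = [Route("riverside 5k", [55, 60, 58]),
                  Route("hill loop", [80, 85, 78]),
                  Route("park interval", [65, 70, 68])]
    target = pick_target_route(wearer_capability=62, candidates=candidates)
    print("output target route:", target.name)
```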
Abstract:
A method, a system, and an apparatus for processing lifelong learning of a terminal are presented. The method for processing lifelong learning of a terminal according to the present disclosure includes sending, to a server, a request for downloading a function module, where the download request includes description information of the function module; receiving the function module that is sent by the server and corresponds to the description information; and using the function module to expand and/or update a local function. According to the embodiments of the present disclosure, lifelong learning of the terminal is implemented, and a problem in the prior art that the terminal cannot perform function expansion and updating is resolved.
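A schematic Python sketch of the download-and-extend flow, with an in-memory stand-in for the server's module catalogue; the module names, description format, and registry are invented for illustration, since the abstract does not specify how modules are packaged or transported.

```python
# Stand-in for the server's catalogue of downloadable function modules (illustrative).
SERVER_MODULES = {
    "speech_recognition:v2": lambda audio: f"transcript of {audio!r}",
}

class Terminal:
    def __init__(self):
        self.functions = {}          # local functions that can be expanded or updated

    def request_module(self, description):
        """Send a download request carrying the module's description information."""
        module = SERVER_MODULES.get(description)     # server returns the matching module
        if module is None:
            raise LookupError(f"no module matches description {description!r}")
        return module

    def expand_or_update(self, name, description):
        """Use the received function module to expand or update a local function."""
        self.functions[name] = self.request_module(description)

if __name__ == "__main__":
    t = Terminal()
    t.expand_or_update("transcribe", "speech_recognition:v2")
    print(t.functions["transcribe"]("hello.wav"))
```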
Abstract:
A user verification processing method, a user equipment, and a server, where the method includes: receiving, from a server, a notification message that includes an action verification code instruction; obtaining sensor data generated when a user performs an action corresponding to the action verification code instruction; and feeding back verification information to the server according to the sensor data. The user verification processing method, the user equipment, and the server provided in the embodiments of the present invention can increase the difficulty of cracking a verification code and improve the security of the verification code.
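A toy sketch of the verification exchange: the server issues an action verification code instruction, the user equipment collects sensor data while the user performs the action, and simple peak counting derives the verification information fed back to the server. The "shake" action, the magnitude threshold, and the message fields are assumptions for illustration.

```python
import numpy as np

def server_issue_instruction():
    """Notification message carrying an action verification code instruction."""
    return {"action": "shake", "required_peaks": 2}

def collect_sensor_data(rng):
    """Simulated accelerometer magnitudes recorded while the user performs the action."""
    return rng.normal(1.0, 0.2, size=50) + np.where(
        np.isin(np.arange(50), [10, 30]), 3.0, 0.0)   # two simulated shakes

def build_verification_info(instruction, samples, threshold=2.5):
    """Derive verification information from the sensor data (peak counting here)."""
    peaks = int(np.sum(samples > threshold))
    return {"action": instruction["action"],
            "observed_peaks": peaks,
            "passed": peaks >= instruction["required_peaks"]}

if __name__ == "__main__":
    instruction = server_issue_instruction()               # server -> user equipment
    samples = collect_sensor_data(np.random.default_rng(1))
    feedback = build_verification_info(instruction, samples)
    print("feed back to server:", feedback)                # user equipment -> server
```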
Abstract:
A data processing method related to the field of artificial intelligence includes adding an architecture parameter to each feature interaction item in a first model, to obtain a second model, where the first model is a factorization machine (FM)-based model, and the architecture parameter represents the importance of the corresponding feature interaction item; performing optimization on the architecture parameters in the second model to obtain optimized architecture parameters; and obtaining, based on the optimized architecture parameters and the first model or the second model, a third model through feature interaction item deletion.
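A compact NumPy sketch of this idea: attach an architecture parameter to every second-order FM feature interaction item, optimize the architecture parameters on toy data, and delete interactions whose learned parameter is small. The data, learning rate, pruning threshold, and the choice to train only the architecture parameters are simplifying assumptions.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n_features, latent_dim, n_samples = 4, 3, 1000

# First model (FM): fixed bias, linear weights w, and interaction factors V.
w0, w = 0.1, rng.normal(size=n_features)
V = rng.normal(size=(n_features, latent_dim))
pairs = list(combinations(range(n_features), 2))

X = rng.normal(size=(n_samples, n_features))
# Toy ground truth in which only the (0, 1) feature interaction matters.
y = w0 + X @ w + 2.0 * X[:, 0] * X[:, 1]

def interaction_items(X):
    """One column per pair (i, j): <v_i, v_j> * x_i * x_j."""
    return np.stack([(V[i] @ V[j]) * X[:, i] * X[:, j] for i, j in pairs], axis=1)

# Second model: an architecture parameter alpha per feature interaction item.
alpha = np.zeros(len(pairs))
Z = interaction_items(X)
base = w0 + X @ w
for _ in range(1000):                      # optimize only the architecture parameters
    err = base + Z @ alpha - y
    alpha -= 0.01 * (Z.T @ err) / n_samples

# Third model: delete feature interaction items whose |alpha| is small.
kept = [p for p, a in zip(pairs, alpha) if abs(a) > 0.1]
print("optimized architecture parameters:", np.round(alpha, 2))
print("feature interactions kept in the third model:", kept)
```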
Abstract:
A recommendation model training method, a selection probability prediction method, and an apparatus are provided. The training method includes obtaining a training sample, where the training sample includes a sample user behavior log, position information of a sample recommended object, and a sample label. The training method further includes performing joint training on a position-aware model and a recommendation model using the training sample, to obtain a trained recommendation model, where the position-aware model predicts probabilities that a user pays attention to a target recommended object when the target recommended object is at different positions, and the recommendation model predicts, when the user pays attention to the target recommended object, a probability that the user selects the target recommended object.
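A simplified NumPy sketch of the joint training idea, factoring the click probability as P(pays attention | position) times P(selects | features) and fitting both factors on the same synthetic logs; the data, the logistic parameterizations, and the squared-error training loop are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, n_pos = 5000, 6, 4

# Synthetic logs standing in for the training sample: features from the user
# behavior log, the display position of the recommended object, and the label.
X = rng.normal(size=(n, d))
pos = rng.integers(0, n_pos, size=n)
true_attention = np.array([0.9, 0.6, 0.4, 0.2])     # top positions draw more attention
true_w = rng.normal(size=d)
p_click = true_attention[pos] * (1 / (1 + np.exp(-(X @ true_w))))
y = (rng.random(n) < p_click).astype(float)

sigmoid = lambda z: 1 / (1 + np.exp(-z))

# Position-aware model: one logit per position. Recommendation model: linear logit.
pos_logits = np.zeros(n_pos)
w = np.zeros(d)
lr = 2.0
for _ in range(3000):                   # joint training: both models share one loss
    p_attn = sigmoid(pos_logits[pos])               # P(user pays attention | position)
    p_sel = sigmoid(X @ w)                          # P(user selects | pays attention)
    err = p_attn * p_sel - y                        # squared-error gradient w.r.t. product
    g_attn = err * p_sel * p_attn * (1 - p_attn)
    g_sel = err * p_attn * p_sel * (1 - p_sel)
    grad_pos = np.zeros(n_pos)
    np.add.at(grad_pos, pos, g_attn)
    pos_logits -= lr * grad_pos / n
    w -= lr * (X.T @ g_sel) / n

print("learned attention per position:", np.round(sigmoid(pos_logits), 2))
print("recommendation weights (used alone at serving time):", np.round(w, 2))
```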