1.
Publication (Announcement) No.: US11977850B2
Publication (Announcement) Date: 2024-05-07
Application No.: US17411917
Application Date: 2021-08-25
Inventor: Fan Wang , Siqi Bao , Huang He , Hua Wu , Jingzhou He , Haifeng Wang
CPC classification number: G06F40/35 , G06F16/325 , G06F16/3347 , G06F18/285 , G06F40/30 , G10L15/01 , G10L15/18 , G10L15/22
Abstract: A method for dialogue processing, an electronic device and a storage medium are provided. The specific technical solution includes: obtaining a dialogue history; selecting a target machine from a plurality of machines; inputting the dialogue history into a trained dialogue model in the target machine to generate a response to the dialogue history, in which the dialogue model comprises a common parameter and a specific parameter, and different machines correspond to the same common parameter.
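A rough sketch of how the parameter layout described in this abstract might look in code: each machine's model pairs the shared common parameters with its own specific parameters, and a target machine is selected to serve the dialogue history. The class and function names, the random machine selection, and the pooling step are illustrative assumptions, not the patent's implementation.

```python
import random

import torch
import torch.nn as nn

VOCAB, HIDDEN, NUM_MACHINES = 1000, 64, 4

class DialogueModel(nn.Module):
    """Pairs the shared common parameters with machine-specific parameters."""
    def __init__(self, common: nn.Module):
        super().__init__()
        self.common = common                      # identical on every machine
        self.specific = nn.Linear(HIDDEN, VOCAB)  # unique to this machine

    def forward(self, history_ids: torch.Tensor) -> torch.Tensor:
        hidden = self.common(history_ids).mean(dim=1)  # pool the dialogue history
        return self.specific(hidden)                   # next-token logits

# One common block, instantiated once and shared by every machine's model.
common_block = nn.Sequential(nn.Embedding(VOCAB, HIDDEN), nn.Linear(HIDDEN, HIDDEN))
machines = [DialogueModel(common_block) for _ in range(NUM_MACHINES)]

def generate_response(dialogue_history: torch.Tensor) -> torch.Tensor:
    target = random.choice(machines)          # "selecting a target machine"
    with torch.no_grad():
        logits = target(dialogue_history)
    return logits.argmax(dim=-1)              # toy "response": a single token id

history = torch.randint(0, VOCAB, (1, 12))    # a fake tokenized dialogue history
print(generate_response(history))
```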
2.
Publication (Announcement) No.: US11836222B2
Publication (Announcement) Date: 2023-12-05
Application No.: US17083704
Application Date: 2020-10-29
Inventor: Lihang Liu , Xiaomin Fang , Fan Wang , Jingzhou He
IPC: G06Q30/00 , G06F18/21 , G06N20/00 , G06F16/9535 , G06Q30/0207 , G05B19/418 , G06Q30/0601
CPC classification number: G06F18/2178 , G06F16/9535 , G06F18/2193 , G06N20/00 , G06Q30/0221 , G06Q30/0225 , G06Q30/0631
Abstract: A method and apparatus for optimizing a recommendation system, a device and a computer storage medium are described, which relates to the technical field of deep learning and intelligent search in artificial intelligence. A specific implementation solution is: taking the recommendation system as an agent, a user as an environment, each recommended content of the recommendation system as an action of the agent, and a long-term behavioral revenue of the user as a reward of the environment; and optimizing to-be-optimized parameters in the recommendation system by reinforcement learning to maximize the reward of the environment. The present disclosure can effectively optimize long-term behavioral revenues of users.
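The agent/environment mapping in this abstract can be pictured as a minimal policy-gradient loop. The SimulatedUser class, its reward, and the single-session REINFORCE update below are toy assumptions standing in for the real user and the unspecified reinforcement-learning algorithm.

```python
import torch
import torch.nn as nn

NUM_ITEMS, USER_DIM = 20, 8

class SimulatedUser:
    """Toy environment: the reward stands in for long-term behavioral revenue."""
    def __init__(self):
        self.preference = torch.randn(NUM_ITEMS)

    def step(self, item: int) -> float:
        return float(torch.sigmoid(self.preference[item]))

# The recommendation system ("agent"): a policy over recommendable items.
policy = nn.Sequential(nn.Linear(USER_DIM, 64), nn.ReLU(), nn.Linear(64, NUM_ITEMS))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for episode in range(200):
    user, state = SimulatedUser(), torch.randn(USER_DIM)
    log_probs, rewards = [], []
    for _ in range(10):                           # one recommendation session
        dist = torch.distributions.Categorical(logits=policy(state))
        item = dist.sample()                      # action: recommend one item
        log_probs.append(dist.log_prob(item))
        rewards.append(user.step(int(item)))      # reward from the environment
    session_return = sum(rewards)                 # undiscounted session return
    loss = -torch.stack(log_probs).sum() * session_return  # REINFORCE objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```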
3.
Publication (Announcement) No.: US11734392B2
Publication (Announcement) Date: 2023-08-22
Application No.: US17237978
Application Date: 2021-04-22
Inventor: Yang Xue , Fan Wang , Jingzhou He
CPC classification number: G06F18/40 , G06F18/2132 , G06F18/253 , G06F40/30 , G06T7/251 , G06V20/46 , G06T2207/20084
Abstract: An active interaction method, an electronic device and a readable storage medium, relating to the field of deep learning and image processing technologies, are disclosed. According to an embodiment, the active interaction method includes: acquiring a video shot in real time; extracting a visual target from each image frame of the video, and generating a first feature vector of each visual target; for each image frame of the video, fusing the first feature vector of each visual target and identification information of the image frame to which the visual target belongs, to generate a second feature vector of each visual target; aggregating the second feature vectors with the same identification information respectively to generate a third feature vector corresponding to each image frame; and initiating active interaction in response to determining that the active interaction is to be performed according to the third feature vector of a preset image frame.
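Sketched below is one way to read the three feature-vector stages (per-target features, target features fused with frame identity, aggregation per frame, then a decision). The dimensions, the mean-pooling aggregation, and the sigmoid decision threshold are assumptions, not details from the patent.

```python
import torch
import torch.nn as nn

FEAT, NUM_FRAMES, TARGETS_PER_FRAME = 32, 5, 3

frame_id_embed = nn.Embedding(NUM_FRAMES, FEAT)
fuse = nn.Linear(2 * FEAT, FEAT)          # first vector + frame id -> second vector
decide = nn.Linear(FEAT, 1)               # third vector -> interaction score

# First feature vectors: one per visual target, tagged with its frame id.
first_vecs = torch.randn(NUM_FRAMES * TARGETS_PER_FRAME, FEAT)
frame_ids = torch.arange(NUM_FRAMES).repeat_interleave(TARGETS_PER_FRAME)

# Second feature vectors: fuse each target's features with its frame identity.
second_vecs = fuse(torch.cat([first_vecs, frame_id_embed(frame_ids)], dim=-1))

# Third feature vectors: aggregate targets that share the same frame id.
third_vecs = torch.stack(
    [second_vecs[frame_ids == f].mean(dim=0) for f in range(NUM_FRAMES)]
)

# Initiate active interaction if the preset (here: last) frame scores high enough.
if torch.sigmoid(decide(third_vecs[-1])) > 0.5:
    print("initiate active interaction")
```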
4.
Publication (Announcement) No.: US11412070B2
Publication (Announcement) Date: 2022-08-09
Application No.: US16891822
Application Date: 2020-06-03
Inventor: Xiaomin Fang , Yaxue Chen , Lihang Liu , Lingke Zeng , Fan Wang , Jingzhou He
Abstract: Embodiments of the disclosure provide a method and apparatus for generating information. The method includes: acquiring vectors of a plurality of users, the vectors being used to characterize behavior habits of the users; inputting the vectors of the plurality of users and push information pushed by a push system to the plurality of users into a feedback information generating model established in advance, to generate the feedback information of the plurality of users for the push information, wherein the feedback information generating model is used to characterize a corresponding relationship between the vectors, the push information and the feedback information; and generating an evaluation report of the push system based on the feedback information.
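A minimal sketch of the evaluation flow: a feedback-generating model maps user vectors plus pushed content to predicted feedback, which is then aggregated into a simple report. The model here is an untrained placeholder, and the feature sizes and report fields are illustrative assumptions.

```python
import torch
import torch.nn as nn

USER_DIM, PUSH_DIM, NUM_USERS = 16, 8, 100

feedback_model = nn.Sequential(
    nn.Linear(USER_DIM + PUSH_DIM, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()
)  # maps (user vector, push information) -> predicted feedback, e.g. click probability

user_vectors = torch.randn(NUM_USERS, USER_DIM)     # characterize behavior habits
push_info = torch.randn(NUM_USERS, PUSH_DIM)        # what the push system sent

with torch.no_grad():
    feedback = feedback_model(torch.cat([user_vectors, push_info], dim=-1))

# A minimal "evaluation report" for the push system: aggregate predicted feedback.
report = {
    "mean_predicted_feedback": float(feedback.mean()),
    "positive_rate": float((feedback > 0.5).float().mean()),
}
print(report)
```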
5.
Publication (Announcement) No.: US11150655B2
Publication (Announcement) Date: 2021-10-19
Application No.: US16020340
Application Date: 2018-06-27
Abstract: The present disclosure provides a method and system for training an unmanned aerial vehicle control model based on artificial intelligence. The method comprises: obtaining training data by using sensor data and target state information of the unmanned aerial vehicle and state information of the unmanned aerial vehicle under action of control information output by a deep neural network; training the deep neural network with the training data to obtain an unmanned aerial vehicle control model, the unmanned aerial vehicle control model being used to obtain the control information of the unmanned aerial vehicle according to the sensor data and target state information of the unmanned aerial vehicle.
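A toy sketch of the training loop this abstract implies: a deep network maps sensor data plus the target state to control information and is fit on collected tuples. The simulated data generator, the fixed "dynamics" matrix, and the MSE loss are placeholders for data gathered from an actual vehicle.

```python
import torch
import torch.nn as nn

SENSOR_DIM, STATE_DIM, CONTROL_DIM = 12, 4, 4
TRUE_MAP = torch.randn(STATE_DIM, CONTROL_DIM)   # fixed toy "dynamics"

control_model = nn.Sequential(
    nn.Linear(SENSOR_DIM + STATE_DIM, 64), nn.ReLU(), nn.Linear(64, CONTROL_DIM)
)
optimizer = torch.optim.Adam(control_model.parameters(), lr=1e-3)

def collect_training_batch(batch_size: int = 64):
    """Stand-in for flying the UAV: record sensor data, the target state, and a
    control signal that (in this toy world) moves the vehicle toward the target."""
    sensors = torch.randn(batch_size, SENSOR_DIM)
    target_state = torch.randn(batch_size, STATE_DIM)
    good_control = torch.tanh(target_state @ TRUE_MAP)
    return sensors, target_state, good_control

for step in range(500):
    sensors, target_state, good_control = collect_training_batch()
    pred = control_model(torch.cat([sensors, target_state], dim=-1))
    loss = nn.functional.mse_loss(pred, good_control)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```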
6.
Publication (Announcement) No.: US12260327B2
Publication (Announcement) Date: 2025-03-25
Application No.: US17210141
Application Date: 2021-03-23
Inventor: Xiaomin Fang , Fan Wang , Yelan Mo , Jingzhou He
Abstract: The present application discloses an optimizer learning method and apparatus, an electronic device and a readable storage medium, which relates to the field of deep learning technologies. An implementation solution adopted by the present application during optimizer learning is: acquiring training data, the training data including a plurality of data sets each including neural network attribute information, neural network optimizer information, and optimizer parameter information; and training a meta-learning model by taking the neural network attribute information and the neural network optimizer information in the data sets as input and taking the optimizer parameter information in the data sets as output, until the meta-learning model converges. The present application can implement self-adaptation of optimizers, so as to improve generalization capability of the optimizers.
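One way to picture the meta-learning setup: a model takes encoded network attribute information and an optimizer identity as input and regresses the optimizer parameter information (say, learning rate and momentum). The encodings, dimensions, and synthetic targets below are assumptions for illustration only.

```python
import torch
import torch.nn as nn

ATTR_DIM, NUM_OPTIMIZERS, PARAM_DIM = 6, 3, 2    # PARAM_DIM e.g. (lr, momentum)

class MetaLearner(nn.Module):
    def __init__(self):
        super().__init__()
        self.opt_embed = nn.Embedding(NUM_OPTIMIZERS, 8)   # optimizer information
        self.net = nn.Sequential(nn.Linear(ATTR_DIM + 8, 32), nn.ReLU(),
                                 nn.Linear(32, PARAM_DIM))

    def forward(self, attrs, opt_id):
        return self.net(torch.cat([attrs, self.opt_embed(opt_id)], dim=-1))

meta = MetaLearner()
optimizer = torch.optim.Adam(meta.parameters(), lr=1e-3)

# Each training example: network attributes + optimizer id -> observed good params.
attrs = torch.randn(256, ATTR_DIM)               # depth, width, dataset size, ...
opt_ids = torch.randint(0, NUM_OPTIMIZERS, (256,))
best_params = torch.rand(256, PARAM_DIM)         # placeholder "optimizer parameter information"

for epoch in range(200):                         # train until the model converges
    loss = nn.functional.mse_loss(meta(attrs, opt_ids), best_params)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```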
7.
Publication (Announcement) No.: US11954449B2
Publication (Announcement) Date: 2024-04-09
Application No.: US17475073
Application Date: 2021-09-14
Inventor: Fan Wang , Siqi Bao , Xinxian Huang , Hua Wu , Jingzhou He
IPC: G06F40/40 , G06F16/33 , G06F16/332 , G06N7/01 , G10L15/22
CPC classification number: G06F40/40 , G06F16/3329 , G06F16/3344 , G06N7/01 , G10L15/22
Abstract: The disclosure discloses a method for generating a conversation, an electronic device, and a storage medium. The detailed implementation includes: obtaining a current conversation and historical conversations of the current conversation; selecting multiple reference historical conversations from the historical conversations and adding the multiple reference historical conversations to a temporary conversation set; and generating reply information of the current conversation based on the current conversation and the temporary conversation set.
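A crude sketch of the retrieve-then-generate flow: score historical conversations against the current one, keep the top-k as the temporary conversation set, and condition the reply on both. The lexical-overlap scoring and the template "reply" are simplifications, not the disclosed model.

```python
from collections import Counter

def similarity(a: str, b: str) -> float:
    """Crude lexical overlap between two conversations."""
    wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
    return sum((wa & wb).values()) / (1 + max(len(wa), len(wb)))

def build_temporary_set(current: str, history: list[str], k: int = 2) -> list[str]:
    # Select the k reference historical conversations most similar to the current one.
    return sorted(history, key=lambda h: similarity(current, h), reverse=True)[:k]

def generate_reply(current: str, temporary_set: list[str]) -> str:
    # Placeholder for the generation model: show what it would be conditioned on.
    context = " | ".join(temporary_set)
    return f"[reply conditioned on current='{current}' and context='{context}']"

history = ["we talked about booking a flight to Paris",
           "the user likes window seats",
           "yesterday we discussed the weather"]
current = "can you book that Paris flight with a window seat"
temp_set = build_temporary_set(current, history)
print(generate_reply(current, temp_set))
```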
8.
Publication (Announcement) No.: US11537798B2
Publication (Announcement) Date: 2022-12-27
Application No.: US16895297
Application Date: 2020-06-08
Inventor: Siqi Bao , Huang He , Junkun Chen , Fan Wang , Hua Wu , Jingzhou He
IPC: G06F40/30 , G06F16/332
Abstract: Embodiments of the present disclosure relate to a method and apparatus for generating a dialogue model. The method may include: acquiring a corpus sample set, a corpus sample including input information and target response information; classifying corpus samples in the corpus sample set, setting discrete hidden variables for the corpus samples based on a classification result to generate a training sample set, a training sample including the input information, the target response information, and a discrete hidden variable; and training a preset neural network using the training sample set to obtain the dialogue model, the dialogue model being used to represent a corresponding relationship between inputted input information and outputted target response information.
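The data-preparation step can be illustrated as follows: classify the corpus samples, attach the class id as a discrete hidden variable, and assemble (input, target response, latent) training samples. The length-based classifier is a deliberately crude stand-in for the unspecified classification, and the actual network training is left as a comment.

```python
corpus = [
    {"input": "how are you", "target": "fine, thanks"},
    {"input": "recommend a movie", "target": "you might enjoy a long sci-fi epic tonight"},
    {"input": "hello", "target": "hi"},
]

def classify(sample: dict) -> int:
    """Toy classification result: short vs. long target responses."""
    return 0 if len(sample["target"].split()) <= 3 else 1

training_set = [
    {**sample, "latent": classify(sample)}   # discrete hidden variable per sample
    for sample in corpus
]

for example in training_set:
    # A real system would now train a neural network on
    # (input, latent) -> target response; here we only show the sample layout.
    print(example["latent"], "|", example["input"], "->", example["target"])
```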
9.
Publication (Announcement) No.: US20220004867A1
Publication (Announcement) Date: 2022-01-06
Application No.: US17210141
Application Date: 2021-03-23
Inventor: Xiaomin Fang , Fan Wang , Yelan Mo , Jingzhou He
Abstract: The present application discloses an optimizer learning method and apparatus, an electronic device and a readable storage medium, which relates to the field of deep learning technologies. An implementation solution adopted by the present application during optimizer learning is: acquiring training data, the training data including a plurality of data sets each including neural network attribute information, neural network optimizer information, and optimizer parameter information; and training a meta-learning model by taking the neural network attribute information and the neural network optimizer information in the data sets as input and taking the optimizer parameter information in the data sets as output, until the meta-learning model converges. The present application can implement self-adaptation of optimizers, so as to improve generalization capability of the optimizers.
10.
Publication (Announcement) No.: US10578445B2
Publication (Announcement) Date: 2020-03-03
Application No.: US16008639
Application Date: 2018-06-14
Inventor: Mengting Chen , Huasheng Liang , Fan Wang , Bo Zhou
Abstract: The present disclosure provides a method and apparatus for building an itinerary-planning model and planning a traveling itinerary, wherein the method for building the itinerary-planning model comprises: obtaining a travel route with a known travel demand; training a deep learning model by regarding the travel demand, a set of candidate scenic spots determined by using the travel demand and the travel route corresponding to the travel demand as training samples, to obtain the itinerary-planning model; the itinerary-planning model is configured to use the travel demand to obtain a corresponding travel route. The method of planning a traveling itinerary comprises: obtaining the user's travel demand; according to the user's travel demand, obtaining a set of candidate scenic spots corresponding to the travel demand; inputting the user's travel demand and the set of candidate scenic spots into an itinerary-planning model, to obtain a travel route output by the itinerary-planning model.
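A toy sketch of the two stages described here: the travel demand first selects a candidate set of scenic spots, and a planning step then orders them into a route. The spot catalogue, the filtering rule, and the greedy ordering are invented placeholders for the learned itinerary-planning model.

```python
CANDIDATES = {
    "museum": {"indoor": True, "duration_h": 2},
    "mountain trail": {"indoor": False, "duration_h": 5},
    "old town walk": {"indoor": False, "duration_h": 3},
    "aquarium": {"indoor": True, "duration_h": 2},
}

def candidate_spots(demand: dict) -> list[str]:
    """Filter scenic spots that fit the stated travel demand."""
    return [name for name, spot in CANDIDATES.items()
            if spot["duration_h"] <= demand["max_hours_per_spot"]
            and (demand["prefer_indoor"] == spot["indoor"] or not demand["strict"])]

def plan_route(demand: dict, spots: list[str]) -> list[str]:
    # Placeholder for the learned itinerary-planning model: greedy by duration.
    return sorted(spots, key=lambda s: CANDIDATES[s]["duration_h"])

demand = {"max_hours_per_spot": 3, "prefer_indoor": True, "strict": False}
spots = candidate_spots(demand)
print(plan_route(demand, spots))
```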