-
Publication No.: US20250045567A1
Publication Date: 2025-02-06
Application No.: US18498257
Application Date: 2023-10-31
Applicant: Salesforce, Inc.
Inventor: Weiran Yao , Shelby Heinecke , Juan Carlos Niebles Duque , Zhiwei Liu , Yihao Feng , Le Xue , Rithesh Murthy , Zeyuan Chen , Jianguo Zhang , Devansh Arpit , Ran Xu , Lik Mui , Huan Wang , Caiming Xiong , Silvio Savarese
IPC: G06N3/0455 , G06N3/092
Abstract: Embodiments described herein provide for optimizing a language model (LM) agent. In at least one embodiment, an LM agent comprises an “actor” LM and a “retrospective” LM which provides reflections on attempts by the actor LM. The reflections are used to update subsequent prompts to the actor LM. Optimizing the LM agent comprises fine-tuning parameters of the retrospective LM while keeping parameters of the actor LM frozen. A gradient may be determined from the change in reward from the environment based on actions taken by the actor LM with and without a reflection from the retrospective LM. Using this gradient, parameters of the retrospective LM may be updated via backpropagation.
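The reward-difference gradient admits a REINFORCE-style reading: the change in reward between rollouts with and without the reflection weights the log-likelihood of the reflection tokens, and only the retrospective LM receives gradients. The sketch below assumes that formulation; `retro_lm` is any causal LM whose forward pass returns next-token `.logits`, and all names are illustrative rather than taken from the patent.

```python
# Minimal sketch, assuming a REINFORCE-style objective: the change in
# environment reward weights the log-likelihood of the reflection tokens.
import torch

def retrospective_update(retro_lm, optimizer, reflection_ids,
                         reward_with, reward_without):
    """One fine-tuning step for the retrospective LM; the actor LM is frozen
    and never appears in this computation graph."""
    advantage = reward_with - reward_without              # change in reward
    out = retro_lm(reflection_ids[:, :-1])                # next-token logits
    log_probs = torch.log_softmax(out.logits, dim=-1)
    token_lp = log_probs.gather(
        -1, reflection_ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    loss = -advantage * token_lp.sum()                    # REINFORCE loss
    optimizer.zero_grad()
    loss.backward()                                       # grads only in retro_lm
    optimizer.step()
```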
-
Publication No.: US20250139411A1
Publication Date: 2025-05-01
Application No.: US18498229
Application Date: 2023-10-31
Applicant: Salesforce, Inc.
Inventor: Rithesh Murthy , Shelby Heinecke , Juan Carlos Niebles Duque , Zhiwei Liu , Le Xue , Weiran Yao , Yihao Feng , Zeyuan Chen , Akash Gokul , Devansh Arpit , Ran Xu , Lik Mui , Huan Wang , Caiming Xiong , Silvio Savarese
IPC: G06N3/0455 , G06N3/084
Abstract: Embodiments described herein provide a large language model (LLM) based AI agent that adopts Monte-Carlo Tree Search (MCTS) to execute a task. The LLM is prompted with a task description and responds with its first attempted list of actions. Based on the success or failure of the first attempt, the LLM is prompted with an updated prompt which includes feedback from the first attempt based on a determined reward. The prompt may include a relative “score” for each action taken at each step. A numeric score may be mapped to a set of pre-defined text labels, such as “high” or “low” value, putting the score in a form better suited to an LLM prompt. In this way, the LLM is iteratively given prompts which are updated with the scores from each action taken at each previous iteration, so that it traverses different paths on the tree in each iteration.
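The score-to-label mapping and prompt update lend themselves to a short illustration. A minimal sketch follows; the thresholds, label set, and prompt wording are assumptions for illustration, not fixed by the patent.

```python
# Sketch of mapping numeric action scores to text labels and folding them
# into the next prompt. Thresholds and labels are assumed for illustration.
def score_to_label(score: float) -> str:
    if score >= 0.66:
        return "high"
    if score >= 0.33:
        return "medium"
    return "low"

def feedback_prompt(task: str, trajectory: list[tuple[str, float]]) -> str:
    """Build the updated prompt from the scored actions of a previous attempt."""
    lines = [f"Task: {task}", "Value of each action in your last attempt:"]
    for step, (action, score) in enumerate(trajectory, start=1):
        lines.append(f"  step {step}: {action} -> {score_to_label(score)} value")
    lines.append("Propose a new action list that avoids low-value steps.")
    return "\n".join(lines)
```

Re-prompting with fresh scores after each attempt is what steers the LLM down a different branch of the search tree on each iteration.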
-
Publication No.: US20250053793A1
Publication Date: 2025-02-13
Application No.: US18494393
Application Date: 2023-10-25
Applicant: Salesforce, Inc.
Inventor: Zhiwei Liu , Weiran Yao , Jianguo Zhang , Le Xue , Shelby Heinecke , Rithesh Murthy , Yihao Feng , Zeyuan Chen , Juan Carlos Niebles Duque , Devansh Arpit , Ran Xu , Lik Mui , Huan Wang , Caiming Xiong , Silvio Savarese
Abstract: Embodiments described herein provide a method of predicting an action by a plurality of language model augmented agents (LAAs). In at least one embodiment, a controller receives a task instruction to be performed using an environment. The controller receives an observation of a first state from the environment. The controller selects an LAA from the plurality of LAAs based on the task instruction and the observation. The controller obtains an output from the selected LAA, generated using an input combining the task instruction, the observation, and an LAA-specific prompt template. The controller determines the action based on the output. The controller causes the action to be performed on the environment, thereby causing the first state of the environment to change to a second state.
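Read as pseudocode, one controller step is compact. A sketch under that reading; `select_agent`, `call_llm`, and the first-line action parse are hypothetical placeholders, not names from the patent.

```python
# Sketch of a single controller step: route to an LAA, prompt it, parse an action.
from typing import Callable, Dict

def controller_step(task: str, observation: str,
                    templates: Dict[str, str],
                    select_agent: Callable[[str, str], str],
                    call_llm: Callable[[str, str], str]) -> str:
    """Select an LAA, prompt it, and return the action to apply to the environment."""
    name = select_agent(task, observation)          # route on instruction + state
    prompt = templates[name].format(task=task, observation=observation)
    output = call_llm(name, prompt)                 # query the selected LAA
    return output.strip().splitlines()[0]           # naive action parse (assumed)
```

The returned action is then applied to the environment, and the resulting second state becomes the observation for the next controller step.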
-
Publication No.: US20250053787A1
Publication Date: 2025-02-13
Application No.: US18429119
Application Date: 2024-01-31
Applicant: Salesforce, Inc.
Inventor: Liangwei Yang , Shelby Heinecke , Jianguo Zhang , Rithesh Murthy , Huan Wang , Caiming Xiong , Zhiwei Liu
IPC: G06N3/0455 , G06N3/084 , G06Q30/0601
Abstract: Embodiments described herein provide a method for training a recommendation neural network model using multiple data sources. The method may include: receiving, via a data interface, time series data indicating a user-item interaction history; transforming the time series data into a user-item graph; encoding, by a neural network encoder, the user-item graph into user embeddings and item embeddings; generating a plurality of losses according to a plurality of training tasks performed based on the user embeddings and item embeddings; training the recommendation neural network model by updating the user embeddings and the item embeddings via backpropagation based on a weighted sum of gradients of the plurality of losses; and generating, by a neural network decoder, one or more recommended items for a given user based on the updated user embeddings and the updated item embeddings.
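Since the gradient of a weighted sum of losses equals the weighted sum of the individual gradients, the multi-task update reduces to one backward pass. A sketch under that reading; `encoder`, the per-task loss callables, and the weights are hypothetical stand-ins.

```python
# Sketch of one multi-task training step: weighted sum of per-task losses,
# single backward pass. All callables here are assumed placeholders.
def training_step(encoder, graph, task_losses, weights, optimizer):
    """One multi-task update of the recommendation model's embeddings."""
    user_emb, item_emb = encoder(graph)               # encode the user-item graph
    losses = [loss_fn(user_emb, item_emb) for loss_fn in task_losses]
    total = sum(w * l for w, l in zip(weights, losses))
    optimizer.zero_grad()
    total.backward()      # equals the weighted sum of each loss's gradient
    optimizer.step()
    return [l.item() for l in losses]                 # per-task losses for logging
```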