-
Publication No.: US20250139411A1
Publication Date: 2025-05-01
Application No.: US18498229
Filing Date: 2023-10-31
Applicant: Salesforce, Inc.
Inventor: Rithesh Murthy , Shelby Heinecke , Juan Carlos Niebles Duque , Zhiwei Liu , Le Xue , Weiran Yao , Yihao Feng , Zeyuan Chen , Akash Gokul , Devansh Arpit , Ran Xu , Lik Mui , Huan Wang , Caiming Xiong , Silvio Savarese
IPC: G06N3/0455 , G06N3/084
Abstract: Embodiments described herein provide a large language model (LLM) based AI agent that adopts Monte-Carlo Tree Search (MCTS) to execute a task. The LLM is prompted with a task description and responds with its first attempted list of actions. Based on the success or failure of the first attempt, the LLM is given an updated prompt that includes feedback from the first attempt based on a determined reward. The prompt may include a relative “score” for each action taken at each step. A numeric score may be mapped to a set of pre-defined text labels, such as “high” or “low” value, putting the score in a form better suited to an LLM prompt. In this way, the LLM is iteratively given prompts updated with the scores from each action taken at each previous iteration, so that it traverses different paths on the tree in each iteration.
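The abstract's mapping of numeric action scores to coarse text labels is its most concrete mechanism. Below is a minimal Python sketch of that step, assuming illustrative thresholds, label names, and prompt wording (none of which the abstract specifies):

```python
# Minimal sketch (not the patent's implementation) of mapping numeric
# action scores to text labels before feeding them back into an LLM prompt.
# Thresholds and label names are illustrative assumptions.

def score_to_label(score: float) -> str:
    """Map a numeric reward score in [0, 1] to a coarse text label."""
    if score >= 0.75:
        return "high"
    if score >= 0.4:
        return "medium"
    return "low"

def build_feedback_prompt(task: str, trajectory: list[tuple[str, float]]) -> str:
    """Compose an updated prompt carrying per-action value labels
    from previous MCTS iterations."""
    lines = [f"Task: {task}", "Previous attempt (action -> value):"]
    for action, score in trajectory:
        lines.append(f"- {action}: {score_to_label(score)} value")
    lines.append("Propose a revised list of actions.")
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_feedback_prompt(
        "Book a flight",
        [("open airline site", 0.9), ("search flights", 0.55), ("pick date", 0.1)],
    ))
```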
-
Publication No.: US20230368078A1
Publication Date: 2023-11-16
Application No.: US17663595
Filing Date: 2022-05-16
Applicant: Salesforce, Inc.
Inventor: Devansh Arpit , Huan Wang , Yingbo Zhou , Caiming Xiong
CPC classification number: G06N20/20 , G06K9/6227 , G06K9/6262
Abstract: A computing device may perform training of a set of machine learning models on a first data set associated with a first domain. In some examples, the training may include, for each machine learning model of the set, inputting, as values for its set of parameters at an iteration of a set of iterations, a moving average of those parameters calculated over a threshold number of previous iterations. The computing device may select a set of model states generated during the training of the set of machine learning models based on a validation performance of the model states measured during training. The computing device may then generate an ensembled machine learning model by aggregating the machine learning models corresponding to the selected set of model states.
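A minimal numpy sketch of the two ideas in the abstract: feeding a moving average of the parameters over the last k iterations back into training, and keeping the model states with the best validation performance for aggregation. The optimizer, the top-n selection rule, and weight averaging as the aggregation step are assumptions for illustration:

```python
# Sketch only: moving-average parameter feedback plus validation-based
# state selection. Not the claimed method itself.
import numpy as np
from collections import deque

def train_with_param_averaging(params, grad_fn, val_fn,
                               steps=100, k=10, lr=0.1, top_n=3):
    history = deque(maxlen=k)            # last k parameter vectors
    saved = []                           # (validation score, state) candidates
    for _ in range(steps):
        history.append(params.copy())
        avg = np.mean(list(history), axis=0)   # moving average over <= k iterations
        params = avg - lr * grad_fn(avg)       # step from the averaged weights
        saved.append((val_fn(params), params.copy()))
    saved.sort(key=lambda s: s[0], reverse=True)
    best_states = [p for _, p in saved[:top_n]]
    return np.mean(best_states, axis=0)  # aggregate selected states (assumed: weight averaging)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target = np.array([1.0, -2.0])
    grad = lambda p: 2 * (p - target) + rng.normal(0, 0.5, size=p.shape)
    val = lambda p: -float(np.sum((p - target) ** 2))  # higher is better
    print(train_with_param_averaging(rng.normal(size=2), grad, val))
```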
-
Publication No.: US20250045567A1
Publication Date: 2025-02-06
Application No.: US18498257
Filing Date: 2023-10-31
Applicant: Salesforce, Inc.
Inventor: Weiran Yao , Shelby Heinecke , Juan Carlos Niebles Duque , Zhiwei Liu , Yihao Feng , Le Xue , Rithesh Murthy , Zeyuan Chen , Jianguo Zhang , Devansh Arpit , Ran Xu , Lik Mui , Huan Wang , Caiming Xiong , Silvio Savarese
IPC: G06N3/0455 , G06N3/092
Abstract: Embodiments described herein provide for optimizing a language model (LM) agent. In at least one embodiment, an LM agent comprises an “actor” LM and a “retrospective” LM which provides reflections on attempts by the actor LM. The reflections are used to update subsequent prompts to the actor LM. Optimizing the LM agent comprises fine-tuning parameters of the retrospective LM while keeping parameters of the actor LM frozen. A gradient may be determined by the change in reward from the environment based on actions taken by the actor LM with and without a reflection from the retrospective LM. Using this gradient, parameters of the retrospective LM may be updated via backpropagation.
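A toy sketch of the training signal described: the reward difference between the actor acting with and without the reflection weights the update to the retrospective model, while the actor stays frozen. The REINFORCE-style update and all stand-in callables are assumptions, not the patented method:

```python
# Toy sketch of reward-difference credit assignment for a retrospective model.
import numpy as np

def optimize_retrospective(retro_params, actor, env_reward, reflect, prompts, lr=0.01):
    """One illustrative update step.

    actor      : frozen callable, prompt -> actions
    env_reward : callable, actions -> scalar reward
    reflect    : callable, (retro_params, prompt) -> (reflection, log-prob gradient)
    """
    grads = np.zeros_like(retro_params)
    for prompt in prompts:
        r_base = env_reward(actor(prompt))                      # no reflection
        reflection, logp_grad = reflect(retro_params, prompt)
        r_refl = env_reward(actor(prompt + "\n" + reflection))  # with reflection
        advantage = r_refl - r_base    # reward change attributable to the reflection
        grads += advantage * logp_grad
    return retro_params + lr * grads / len(prompts)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    params = rng.normal(size=4)
    actor = lambda p: p.upper()                   # frozen stand-in actor
    reward = lambda a: float("REFLECT" in a)      # toy environment reward
    def reflect(theta, prompt):
        # Toy "reflection": emit advice with probability sigmoid(sum(theta))
        # and return a crude score-function gradient for that choice.
        p = 1 / (1 + np.exp(-theta.sum()))
        emit = rng.random() < p
        grad = (1 - p) * np.ones_like(theta) if emit else -p * np.ones_like(theta)
        return ("reflect: try again" if emit else "", grad)
    print(optimize_retrospective(params, actor, reward, reflect, ["do task"]))
```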
-
Publication No.: US20250124233A1
Publication Date: 2025-04-17
Application No.: US18428530
Filing Date: 2024-01-31
Applicant: Salesforce, Inc.
Inventor: Itai Izhak Feigenbaum , Devansh Arpit , Shelby Heinecke , Juan Carlos Niebles Duque , Weiran Yao , Huan Wang , Caiming Xiong , Silvio Savarese
Abstract: Systems and methods for editing a large language model are provided. The large language model generates a sequence of tokens, a first probability of a pre-edit output based on the sequence of tokens, and a second probability of a target output based on the sequence of tokens. A loss function is provided based on the first probability and the second probability. A plurality of gradients of the large language model with respect to the loss function is computed. An edit location in the large language model is determined based on the plurality of gradients. The large language model is edited by editing weights at the edit location, such that the updated large language model generates the target output for an input including the sequence of tokens.
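A hedged PyTorch sketch of the editing loop as described: form a loss from the probabilities of the pre-edit and target outputs, locate the parameter tensor receiving the largest gradient, and edit weights only there. The tiny MLP and the gradient-norm criterion for the edit location are assumptions:

```python
# Sketch of gradient-guided model editing on a toy network.
import torch
import torch.nn as nn

def edit_model(model, x, pre_edit_id, target_id, lr=0.5, steps=10):
    for _ in range(steps):
        logp = torch.log_softmax(model(x), dim=-1)
        # Loss from the two probabilities: push mass from pre-edit output to target.
        loss = logp[pre_edit_id] - logp[target_id]
        model.zero_grad()
        loss.backward()
        # Edit location: parameter tensor with the largest gradient norm (assumed rule).
        name, weights = max(model.named_parameters(), key=lambda kv: kv[1].grad.norm())
        with torch.no_grad():
            weights -= lr * weights.grad   # edit weights at that location only
    return model

if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(8, 16), nn.Tanh(), nn.Linear(16, 5))
    x = torch.randn(8)
    edit_model(model, x, pre_edit_id=2, target_id=4)
    print(model(x).argmax().item())  # ideally now the target output, 4
```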
-
Publication No.: US20250131246A1
Publication Date: 2025-04-24
Application No.: US18492408
Filing Date: 2023-10-23
Applicant: Salesforce, Inc.
Inventor: Devansh Arpit , Huan Wang , Caiming Xiong
IPC: G06N3/0455 , G06N3/084
Abstract: Embodiments provide an attention mechanism that computes attention weights for an input sequence by employing a set of multi-head learnable vectors (referred to as “binder vectors”) to attend to the input sequence.
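The abstract is a single sentence, so any code is necessarily speculative; the sketch below assumes the binder vectors act as per-head learnable queries in a softmax cross-attention over the input sequence:

```python
# Speculative sketch: a small set of multi-head learnable "binder vectors"
# attending to an input sequence. Shapes and the cross-attention form are assumptions.
import torch
import torch.nn as nn

class BinderAttention(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_binders=8):
        super().__init__()
        self.h, self.dk = n_heads, d_model // n_heads
        # Multi-head learnable vectors that attend to the input sequence.
        self.binders = nn.Parameter(torch.randn(n_heads, n_binders, self.dk))
        self.kv = nn.Linear(d_model, 2 * d_model)

    def forward(self, x):                        # x: (batch, seq, d_model)
        b, s, _ = x.shape
        k, v = self.kv(x).chunk(2, dim=-1)
        k = k.view(b, s, self.h, self.dk).transpose(1, 2)   # (b, h, s, dk)
        v = v.view(b, s, self.h, self.dk).transpose(1, 2)
        attn = (self.binders.unsqueeze(0) @ k.transpose(-2, -1)) / self.dk ** 0.5
        w = attn.softmax(dim=-1)                 # attention weights: (b, h, n_binders, s)
        return w @ v                             # (b, h, n_binders, dk)

if __name__ == "__main__":
    out = BinderAttention()(torch.randn(2, 10, 64))
    print(out.shape)  # torch.Size([2, 4, 8, 16])
```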
-
Publication No.: US20250053793A1
Publication Date: 2025-02-13
Application No.: US18494393
Filing Date: 2023-10-25
Applicant: Salesforce, Inc.
Inventor: Zhiwei Liu , Weiran Yao , Jianguo Zhang , Le Xue , Shelby Heinecke , Rithesh Murthy , Yihao Feng , Zeyuan Chen , Juan Carlos Niebles Duque , Devansh Arpit , Ran Xu , Lik Mui , Huan Wang , Caiming Xiong , Silvio Savarese
Abstract: Embodiments described herein provide a method of predicting an action by a plurality of language model augmented agents (LAAs). In at least one embodiment, a controller receives a task instruction to be performed using an environment. The controller receives an observation of a first state from the environment. The controller selects an LAA from the plurality of LAAs based on the task instruction and the observation. The controller obtains an output from the selected LAA generated using an input combining the task instruction, the observation, and an LAA-specific prompt template. The controller determines the action based on the output. The controller causes the action to be performed on the environment, thereby causing the first state of the environment to change to a second state.
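An illustrative Python sketch of one controller step as described: observe the environment, select an LAA, fill that agent's prompt template, and apply the resulting action so the environment moves from the first state to the second. All names and the action-parsing rule are assumptions:

```python
# Sketch of one controller step over a pool of LM-augmented agents (LAAs).

def controller_step(task, env, agents, select):
    """agents: dict name -> (prompt_template, llm_callable); select picks a name."""
    observation = env.observe()                           # first state
    name = select(task, observation)                      # choose an LAA
    template, llm = agents[name]
    prompt = template.format(task=task, obs=observation)  # LAA-specific prompt
    output = llm(prompt)
    action = output.strip().splitlines()[0]               # parse action (assumed rule)
    env.step(action)                                      # state changes to second state
    return action

class ToyEnv:
    def __init__(self): self.state = "start"
    def observe(self): return self.state
    def step(self, action): self.state = f"after:{action}"

if __name__ == "__main__":
    agents = {"web": ("Task: {task}\nObs: {obs}\nAction:", lambda p: "click search")}
    print(controller_step("find docs", ToyEnv(), agents, lambda t, o: "web"))
```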