1.
Publication Number: US20250022459A1
Publication Date: 2025-01-16
Application Number: US18220910
Filing Date: 2023-07-12
Applicant: Adobe Inc.
Inventor: Viet Dac Lai, Trung Bui, Seunghyun Yoon, Quan Tran, Hao Tan, Hanieh Deilamsalehy, Abel Salinas, Franck Dernoncourt
IPC: G10L15/183, G10L15/065
Abstract: The disclosed method generates useful training data for a language model, for example, a model performing a punctuation restoration task on real-world ASR text. The method applies reinforcement learning to a generative AI model that produces additional data for training the language model. Guided by gradient feedback from the language model, the generative AI model learns from real-world ASR text to generate more effective training examples.
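The abstract does not disclose implementation details, but the loop it describes can be sketched. In the illustrative Python sketch below, the generator and task-model interfaces (`sample`, `train_on`, `eval_loss`) and the loss-improvement reward are assumptions used to make a REINFORCE-style update concrete, not the patented method:

```python
# Illustrative sketch only; the abstract does not disclose code. The
# generator/task-model interfaces and the loss-improvement reward are
# assumptions (PyTorch-style autograd is assumed for .backward()).

def reinforce_data_generation_step(generator, task_model, asr_batch, gen_optimizer):
    """One policy-gradient update of the data-generating model.

    generator     -- proposes punctuated training examples; hypothetical
                     .sample() returns samples and their total log-probability
    task_model    -- punctuation-restoration model; hypothetical .train_on()
                     and .eval_loss() interfaces
    asr_batch     -- real-world ASR text that conditions generation
    gen_optimizer -- optimizer over the generator's parameters
    """
    # Sample synthetic training examples from the generator's policy.
    samples, log_prob = generator.sample(asr_batch)

    # Train the task model on the samples and measure how much they helped,
    # here via the drop in its loss on held-out real ASR text.
    loss_before = float(task_model.eval_loss(asr_batch))
    task_model.train_on(samples)
    loss_after = float(task_model.eval_loss(asr_batch))
    reward = loss_before - loss_after  # positive when the samples helped

    # REINFORCE: scale the negative log-probability by the reward so that
    # gradient descent raises the probability of helpful samples.
    policy_loss = -reward * log_prob
    gen_optimizer.zero_grad()
    policy_loss.backward()
    gen_optimizer.step()
    return reward
```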
2.
Publication Number: US20250111129A1
Publication Date: 2025-04-03
Application Number: US18478221
Filing Date: 2023-09-29
Applicant: Adobe Inc.
Inventor: Viet Dac Lai, Franck Dernoncourt
IPC: G06F40/14, G06F40/279
Abstract: A method, apparatus, and non-transitory computer-readable medium for natural language processing are described. Embodiments of the present disclosure include obtaining a document comprising a first event mention and a second event mention. Some embodiments generate a dependency tree based on the document. The dependency tree is pruned by removing irrelevant words to obtain a pruned dependency tree. Subevent relation information is then generated for the first and second event mentions based on the pruned dependency tree.
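The abstract leaves the pruning criterion unspecified beyond removing irrelevant words. A common choice in relation-extraction work, shown in this hypothetical Python sketch, is to keep only tokens on or near the dependency path connecting the two event mentions; the networkx representation and the `keep_hops` parameter are assumptions:

```python
# Hypothetical pruning criterion: keep tokens on or near the dependency path
# between the two event mentions. The abstract only states that irrelevant
# words are removed.
import networkx as nx

def prune_dependency_tree(edges, event1, event2, keep_hops=1):
    """Prune a dependency tree down to the neighborhood of the path
    connecting two event-mention tokens.

    edges          -- (head, dependent) token-index pairs from a parse
    event1, event2 -- token indices of the two event mentions
    keep_hops      -- how many extra hops of context to retain
    """
    tree = nx.Graph(edges)  # treat the dependency tree as undirected
    # Tokens on the (unique) tree path between the two mentions.
    keep = set(nx.shortest_path(tree, event1, event2))
    # Retain immediate neighbors of kept tokens for extra context.
    for _ in range(keep_hops):
        keep |= {nbr for tok in keep for nbr in tree.neighbors(tok)}
    return tree.subgraph(keep).copy()
```

The pruned subtree would then typically be fed to an encoder that scores the subevent relation between the two mentions.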
3.
Publication Number: US20240330669A1
Publication Date: 2024-10-03
Application Number: US18116129
Filing Date: 2023-03-01
Applicant: Adobe Inc.
Inventor: Amir Pouran Ben Veyseh, Viet Dac Lai, Franck Dernoncourt
IPC: G06N3/08, G06N3/0475
CPC: G06N3/08, G06N3/0475
Abstract: In various examples, reinforcement learning techniques are used during joint training of a generative model with at least one other model. For example, a first set of training data is combined with a second set of training data generated by the generative model and used to train an event detection model. A reward is then determined based on the performance of the event detection model (e.g., the agreement between the gradients of a loss function on the training data and on the synthetic data) and used at least in part to update the parameters of the generative model.
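A minimal sketch of the gradient-agreement reward mentioned in the abstract, assuming PyTorch; the cosine-similarity formulation and the function names are assumptions, since the abstract only mentions "agreement between gradients":

```python
# Sketch of a gradient-agreement reward: cosine similarity between the
# event-detection model's loss gradients on real and on synthetic batches.
# The cosine formulation is an assumption.
import torch

def gradient_agreement_reward(model, loss_fn, real_batch, synth_batch):
    """Return the cosine similarity of loss gradients on the two batches."""
    params = [p for p in model.parameters() if p.requires_grad]

    def flat_grads(batch):
        # Gradient of the loss w.r.t. all trainable parameters, flattened
        # into a single vector.
        grads = torch.autograd.grad(loss_fn(model, batch), params)
        return torch.cat([g.reshape(-1) for g in grads])

    g_real = flat_grads(real_batch)
    g_synth = flat_grads(synth_batch)
    # Agreement near 1 means the synthetic data pulls the event-detection
    # model in the same direction as the real training data.
    return torch.nn.functional.cosine_similarity(g_real, g_synth, dim=0)
```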