-
Publication Number: US11461638B2
Publication Date: 2022-10-04
Application Number: US16296076
Application Date: 2019-03-07
Applicant: ADOBE INC.
Inventor: Sungchul Kim, Scott Cohen, Ryan A. Rossi, Charles Li Chen, Eunyee Koh
Abstract: Embodiments of the present invention are generally directed to generating figure captions for electronic figures, generating a training dataset to train a set of neural networks for generating figure captions, and training a set of neural networks employable to generate figure captions. A set of neural networks is trained with a training dataset having electronic figures and corresponding captions. Sequence-level training with reinforcement learning techniques is employed to train the set of neural networks, which is configured as an encoder-decoder architecture with attention. Provided with an electronic figure, the set of neural networks can encode the electronic figure based on various aspects detected from the electronic figure, resulting in the generation of associated label map(s), feature map(s), and relation map(s). The trained set of neural networks employs a set of attention mechanisms that facilitate the generation of accurate and meaningful figure captions corresponding to visible aspects of the electronic figure.
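To make the architecture concrete, below is a minimal sketch of an encoder-decoder captioner with additive attention over encoder feature-map regions, in the spirit of the abstract above. The module names, dimensions, and the choice of an LSTM decoder with Bahdanau-style attention are illustrative assumptions, not the patent's actual implementation; the sketch also omits the label/relation maps and the sequence-level reinforcement-learning objective.

```python
# Illustrative sketch only; names, dimensions, and attention form are
# assumptions, not the patent's implementation.
import torch
import torch.nn as nn

class AttentionDecoder(nn.Module):
    def __init__(self, vocab_size, feat_dim=256, hid_dim=256, emb_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Additive (Bahdanau-style) attention over encoder feature-map regions.
        self.att_feat = nn.Linear(feat_dim, hid_dim)
        self.att_hid = nn.Linear(hid_dim, hid_dim)
        self.att_score = nn.Linear(hid_dim, 1)
        self.rnn = nn.LSTMCell(emb_dim + feat_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, feats, tokens):
        # feats: (B, R, feat_dim) region features from the encoder's maps
        # tokens: (B, T) caption token ids (teacher forcing)
        B, T = tokens.shape
        h = feats.new_zeros(B, self.rnn.hidden_size)
        c = feats.new_zeros(B, self.rnn.hidden_size)
        logits = []
        for t in range(T):
            # Attention weights over the R regions, conditioned on h.
            scores = self.att_score(torch.tanh(
                self.att_feat(feats) + self.att_hid(h).unsqueeze(1))).squeeze(-1)
            alpha = scores.softmax(dim=1)                        # (B, R)
            context = (alpha.unsqueeze(-1) * feats).sum(dim=1)   # (B, feat_dim)
            x = torch.cat([self.embed(tokens[:, t]), context], dim=-1)
            h, c = self.rnn(x, (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)                        # (B, T, vocab)

decoder = AttentionDecoder(vocab_size=1000)
feats = torch.randn(2, 49, 256)        # e.g. 7x7 grid of feature-map regions
tokens = torch.randint(0, 1000, (2, 12))
print(decoder(feats, tokens).shape)    # torch.Size([2, 12, 1000])
```

At inference time the decoder would run autoregressively, feeding each predicted token back in; the teacher-forced loop shown here corresponds only to supervised training before any sequence-level fine-tuning.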
-
Publication Number: US10558852B2
Publication Date: 2020-02-11
Application Number: US15814979
Application Date: 2017-11-16
Applicant: ADOBE INC.
Inventor: Sungchul Kim, Deepali Jain, Deepali Gupta, Eunyee Koh, Branislav Kveton, Nikhil Sheoran, Atanu Sinha, Hung Hai Bui, Charles Li Chen
IPC: G06K9/00, G06N3/04, G06N3/08, G06F16/954, G06K9/62
Abstract: Systems and methods provide for generating predictive models that are useful in predicting next-user-actions. User-specific navigation sequences are obtained, each navigation sequence representing a temporally-related series of actions performed by a user during a navigation session. A Recurrent Neural Network (RNN) is applied to each navigation sequence to encode it into a user embedding that reflects the user's time-based, sequential navigation patterns. Once a set of navigation sequences is encoded into a set of user embeddings, a variety of classifiers (prediction models) may be applied to the user embeddings to predict the probable next-user-action and/or the likelihood that the next-user-action will be a desired target action.
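As a rough illustration of the described pipeline, the sketch below encodes an action sequence with a GRU and treats its final hidden state as the user embedding, on top of which a linear classifier scores possible next actions. The choice of a GRU, the single-layer classifier, and all names and dimensions are assumptions for illustration; the patent does not prescribe this specific design.

```python
# Illustrative sketch only; the GRU encoder and linear classifier are
# stand-ins for the RNN encoder and the "variety of classifiers" described.
import torch
import torch.nn as nn

class NextActionModel(nn.Module):
    def __init__(self, num_actions, emb_dim=64, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(num_actions, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.classifier = nn.Linear(hid_dim, num_actions)

    def forward(self, sequences):
        # sequences: (B, T) temporally ordered action ids for each user
        _, h = self.rnn(self.embed(sequences))   # h: (1, B, hid_dim)
        user_embedding = h.squeeze(0)            # time-aware user embedding
        return self.classifier(user_embedding)   # logits over next actions

model = NextActionModel(num_actions=50)
sessions = torch.randint(0, 50, (4, 10))   # 4 users, 10 actions per session
probs = model(sessions).softmax(dim=-1)    # probability of each next action
print(probs.shape)                         # torch.Size([4, 50])
```

The same user embedding could instead be fed to any other classifier (e.g. a gradient-boosted model) to estimate the likelihood of a specific target action, matching the abstract's separation of the embedding step from the prediction step.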
-
Publication Number: US10783361B2
Publication Date: 2020-09-22
Application Number: US16723619
Application Date: 2019-12-20
Applicant: ADOBE INC.
Inventor: Sungchul Kim, Deepali Jain, Deepali Gupta, Eunyee Koh, Branislav Kveton, Nikhil Sheoran, Atanu Sinha, Hung Hai Bui, Charles Li Chen
IPC: G06K9/00, G06N3/04, G06N3/08, G06F16/954, G06K9/62
Abstract: Systems and methods provide for generating predictive models that are useful in predicting next-user-actions. User-specific navigation sequences are obtained, each navigation sequence representing a temporally-related series of actions performed by a user during a navigation session. A Recurrent Neural Network (RNN) is applied to each navigation sequence to encode it into a user embedding that reflects the user's time-based, sequential navigation patterns. Once a set of navigation sequences is encoded into a set of user embeddings, a variety of classifiers (prediction models) may be applied to the user embeddings to predict the probable next-user-action and/or the likelihood that the next-user-action will be a desired target action.
-
Publication Number: US20200285951A1
Publication Date: 2020-09-10
Application Number: US16296076
Application Date: 2019-03-07
Applicant: ADOBE INC.
Inventor: Sungchul Kim, Scott Cohen, Ryan A. Rossi, Charles Li Chen, Eunyee Koh
Abstract: Embodiments of the present invention are generally directed to generating figure captions for electronic figures, generating a training dataset to train a set of neural networks for generating figure captions, and training a set of neural networks employable to generate figure captions. A set of neural networks is trained with a training dataset having electronic figures and corresponding captions. Sequence-level training with reinforcement learning techniques is employed to train the set of neural networks, which is configured as an encoder-decoder architecture with attention. Provided with an electronic figure, the set of neural networks can encode the electronic figure based on various aspects detected from the electronic figure, resulting in the generation of associated label map(s), feature map(s), and relation map(s). The trained set of neural networks employs a set of attention mechanisms that facilitate the generation of accurate and meaningful figure captions corresponding to visible aspects of the electronic figure.
-