-
Publication No.: US10748324B2
Publication Date: 2020-08-18
Application No.: US16184289
Filing Date: 2018-11-08
Applicant: Adobe Inc.
Inventor: Elya Shechtman, Yijun Li, Chen Fang, Aaron Hertzmann
Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that integrate (or embed) a non-photorealistic rendering (“NPR”) generator with a style-transfer-neural network to generate stylized images that both correspond to a source image and resemble a stroke style. By integrating an NPR generator with a style-transfer-neural network, the disclosed methods, non-transitory computer readable media, and systems can accurately capture a stroke style resembling one or both of stylized edges or stylized shadings. When training such a style-transfer-neural network, the integrated NPR generator can enable the disclosed methods, non-transitory computer readable media, and systems to use real-stroke drawings (instead of conventional paired-ground-truth drawings) for training the network to accurately portray a stroke style. In some implementations, the disclosed methods, non-transitory computer readable media, and systems can either train or apply a style-transfer-neural network that captures a variety of stroke styles, such as different edge-stroke styles or shading-stroke styles.
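The abstract describes wiring a procedural NPR generator into the input path of a learned style-transfer network so that the network can be trained adversarially against unpaired, real stroke drawings. The following PyTorch sketch illustrates that idea only and is not Adobe's implementation: the difference-of-Gaussians filter standing in for the NPR generator, the tiny generator and discriminator, the placeholder tensors, and names such as `npr_edges` and `StyleTransferNet` are all assumptions made for illustration.

```python
# Minimal sketch: NPR edge generator feeding a style-transfer network trained
# adversarially on unpaired real stroke drawings. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_kernel(sigma, size=9):
    """Build a normalized 2-D Gaussian kernel shaped as a conv weight."""
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return (k / k.sum()).view(1, 1, size, size)

def npr_edges(gray, sigma=1.0, k=1.6, tau=0.98):
    """Difference-of-Gaussians edge map, used here as a stand-in NPR generator."""
    blur1 = F.conv2d(gray, gaussian_kernel(sigma), padding=4)
    blur2 = F.conv2d(gray, gaussian_kernel(k * sigma), padding=4)
    return torch.sigmoid(-20.0 * (blur1 - tau * blur2))   # soft stroke map

class StyleTransferNet(nn.Module):
    """Tiny conv net mapping an NPR edge map to a stylized stroke image."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())
    def forward(self, x):
        return self.body(x)

class StrokeDiscriminator(nn.Module):
    """Judges whether a stroke image looks like a real (unpaired) drawing."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1))
    def forward(self, x):
        return self.body(x)

gen, disc = StyleTransferNet(), StrokeDiscriminator()
g_opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=2e-4)

source_gray = torch.rand(4, 1, 128, 128)    # placeholder grayscale source images
real_strokes = torch.rand(4, 1, 128, 128)   # placeholder real stroke drawings (unpaired)

# One adversarial training step: no paired ground-truth drawings are needed.
fake = gen(npr_edges(source_gray))
real_logits, fake_logits = disc(real_strokes), disc(fake.detach())
d_loss = (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
          + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))
d_opt.zero_grad(); d_loss.backward(); d_opt.step()

gen_logits = disc(fake)
g_loss = F.binary_cross_entropy_with_logits(gen_logits, torch.ones_like(gen_logits))
g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```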
-
Publication No.: US20200226724A1
Publication Date: 2020-07-16
Application No.: US16246051
Filing Date: 2019-01-11
Applicant: Adobe Inc.
Inventor: Chen Fang, Zhe Lin, Zhaowen Wang, Yulun Zhang, Yilin Wang, Jimei Yang
Abstract: In implementations of transferring image style to content of a digital image, an image editing system includes an encoder that extracts features from a content image and features from a style image. A whitening and color transform generates coarse features from the content and style features extracted by the encoder for one pass of encoding and decoding. Hence, the processing delay and memory requirements are low. A feature transfer module iteratively transfers style features to the coarse feature map and generates a fine feature map. The image editing system fuses the fine features with the coarse features, and a decoder generates an output image with content of the content image in a style of the style image from the fused features. Accordingly, the image editing system efficiently transfers an image style to image content in real-time, without undesirable artifacts in the output image.
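The core operation named in the abstract, the whitening and color transform, is a well-documented technique: content features are decorrelated (whitened) and then re-colored with the style features' covariance. Below is a minimal PyTorch sketch of that transform alone; the encoder/decoder, the iterative feature-transfer module, and the coarse/fine fusion described in the abstract are omitted, and the feature tensors are random placeholders.

```python
# Minimal whitening-and-coloring transform (WCT) on encoder feature maps.
import torch

def whiten_color_transform(content_feat, style_feat, eps=1e-5):
    """content_feat, style_feat: (C, H, W) feature maps from an encoder."""
    c, h, w = content_feat.shape
    cf = content_feat.reshape(c, -1)
    sf = style_feat.reshape(c, -1)
    c_mean, s_mean = cf.mean(dim=1, keepdim=True), sf.mean(dim=1, keepdim=True)
    cf, sf = cf - c_mean, sf - s_mean

    # Whitening: remove the content features' channel covariance.
    c_cov = cf @ cf.t() / (cf.shape[1] - 1) + eps * torch.eye(c)
    u_c, s_c, _ = torch.linalg.svd(c_cov)
    whitened = u_c @ torch.diag(s_c.clamp(min=eps).rsqrt()) @ u_c.t() @ cf

    # Coloring: impose the style features' channel covariance and mean.
    s_cov = sf @ sf.t() / (sf.shape[1] - 1) + eps * torch.eye(c)
    u_s, s_s, _ = torch.linalg.svd(s_cov)
    colored = u_s @ torch.diag(s_s.clamp(min=eps).sqrt()) @ u_s.t() @ whitened
    return (colored + s_mean).reshape(c, h, w)

# Usage with placeholder features; a real pipeline would take these from an encoder
# and pass the result (the "coarse" features) on to refinement and decoding.
content_feat = torch.randn(64, 32, 32)
style_feat = torch.randn(64, 32, 32)
coarse = whiten_color_transform(content_feat, style_feat)
```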
-
Publication No.: US10565518B2
Publication Date: 2020-02-18
Application No.: US14748059
Filing Date: 2015-06-23
Applicant: Adobe Inc.
Inventor: Hailin Jin, Chen Fang, Jianchao Yang, Zhe Lin
Abstract: The present disclosure is directed to collaborative feature learning using social media data. For example, a machine learning system may identify social media data that includes user behavioral data, which indicates user interactions with content items. Using the identified user behavioral data, the machine learning system may determine latent representations of the content items. In some embodiments, the machine learning system may train a machine-learning model based on the latent representations. Further, the machine learning system may extract features of the content items from the trained machine-learning model.
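One common way to realize this kind of collaborative feature learning is to obtain item latent vectors from a factorization of the user-item interaction matrix and then train a content network to predict them, so that the network's hidden layer becomes the learned feature extractor. The sketch below illustrates that pattern only and is not the patented method; the dimensions, the two-layer network, and the random stand-ins for behavior data and item content are assumptions.

```python
# Minimal sketch: latent factors from user behavior supervise a content network.
import torch
import torch.nn as nn

n_users, n_items, latent_dim = 100, 50, 16
interactions = (torch.rand(n_users, n_items) > 0.9).float()   # placeholder behavior data

# 1) Collaborative step: factorize the interaction matrix into latent factors.
user_f = nn.Parameter(torch.randn(n_users, latent_dim) * 0.1)
item_f = nn.Parameter(torch.randn(n_items, latent_dim) * 0.1)
mf_opt = torch.optim.Adam([user_f, item_f], lr=1e-2)
for _ in range(200):
    mf_loss = ((user_f @ item_f.t() - interactions) ** 2).mean()
    mf_opt.zero_grad(); mf_loss.backward(); mf_opt.step()

# 2) Content step: train a network to regress each item's latent vector
#    from its raw content (random vectors stand in for real content here).
item_content = torch.randn(n_items, 128)
net = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, latent_dim))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(200):
    loss = ((net(item_content) - item_f.detach()) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# 3) Feature extraction: the trained hidden layer yields content features
#    for any item, including items with no behavioral data.
features = net[:2](item_content)    # (n_items, 64) learned content features
```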
-
14.
Publication No.: US20190251446A1
Publication Date: 2019-08-15
Application No.: US15897822
Filing Date: 2018-02-15
Applicant: Adobe Inc., The Regents of the University of California
Inventor: Chen Fang, Zhaowen Wang, Wangcheng Kang, Julian McAuley
Abstract: The present disclosure relates to a fashion recommendation system that employs a task-guided learning framework to jointly train a visually-aware personalized preference ranking network. In addition, the fashion recommendation system employs implicit feedback and generated user-based triplets to learn variances in the user's fashion preferences for items with which the user has not yet interacted. In particular, the fashion recommendation system uses triplets generated from implicit user data to jointly train a Siamese convolutional neural network and a personalized ranking model, which together produce a user preference predictor that determines personalized fashion recommendations for a user.
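The combination of a shared (Siamese) image encoder with Bayesian personalized ranking over implicit-feedback triplets can be sketched compactly. The following PyTorch snippet is an illustrative stand-in, not the patented system: the tiny `ItemCNN`, the embedding size, the random images and triplets, and the plain dot-product score between a user vector and an item's visual embedding are all assumptions.

```python
# Minimal sketch: visually-aware personalized ranking from implicit triplets.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_users, emb_dim = 100, 32

class ItemCNN(nn.Module):
    """Shared (Siamese) image encoder applied to both items of a triplet."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(32, emb_dim)
    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

cnn = ItemCNN()
user_emb = nn.Embedding(n_users, emb_dim)    # personalized preference vectors
opt = torch.optim.Adam(list(cnn.parameters()) + list(user_emb.parameters()), lr=1e-3)

# One BPR step on (user, interacted item, non-interacted item) triplets.
users = torch.randint(0, n_users, (8,))
pos_imgs = torch.rand(8, 3, 64, 64)          # items the user interacted with
neg_imgs = torch.rand(8, 3, 64, 64)          # sampled items without interaction

u = user_emb(users)
score_pos = (u * cnn(pos_imgs)).sum(dim=1)
score_neg = (u * cnn(neg_imgs)).sum(dim=1)
loss = -F.logsigmoid(score_pos - score_neg).mean()   # rank positives above negatives
opt.zero_grad(); loss.backward(); opt.step()
```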
-
15.
Publication No.: US20210342697A1
Publication Date: 2021-11-04
Application No.: US17377043
Filing Date: 2021-07-15
Applicant: Adobe Inc., The Regents of the University of California
Inventor: Chen Fang, Zhaowen Wang, Wangcheng Kang, Julian McAuley
Abstract: The present disclosure relates to a fashion recommendation system that employs a task-guided learning framework to jointly train a visually-aware personalized preference ranking network. In addition, the fashion recommendation system employs implicit feedback and generated user-based triplets to learn variances in the user's fashion preferences for items with which the user has not yet interacted. In particular, the fashion recommendation system uses triplets generated from implicit user data to jointly train a Siamese convolutional neural network and a personalized ranking model, which together produce a user preference predictor that determines personalized fashion recommendations for a user.
-
16.
Publication No.: US11100400B2
Publication Date: 2021-08-24
Application No.: US15897822
Filing Date: 2018-02-15
Applicant: Adobe Inc., The Regents of the University of California
Inventor: Chen Fang, Zhaowen Wang, Wangcheng Kang, Julian McAuley
Abstract: The present disclosure relates to a fashion recommendation system that employs a task-guided learning framework to jointly train a visually-aware personalized preference ranking network. In addition, the fashion recommendation system employs implicit feedback and generated user-based triplets to learn variances in the user's fashion preferences for items with which the user has not yet interacted. In particular, the fashion recommendation system uses triplets generated from implicit user data to jointly train a Siamese convolutional neural network and a personalized ranking model, which together produce a user preference predictor that determines personalized fashion recommendations for a user.
-
17.
Publication No.: US10963759B2
Publication Date: 2021-03-30
Application No.: US16417115
Filing Date: 2019-05-20
Applicant: Adobe Inc.
Inventor: Zhe Lin, Mai Long, Jonathan Brandt, Hailin Jin, Chen Fang
IPC: G06K9/66, G06F16/532, G06K9/46, G06K9/62, G06K9/72, G06N3/04, G06F16/583, G06K9/52, G06N3/08
Abstract: The present disclosure includes methods and systems for searching for digital visual media based on semantic and spatial information. In particular, one or more embodiments of the disclosed systems and methods identify digital visual media displaying targeted visual content in a targeted region based on a query term and a query area provided via a digital canvas. Specifically, the disclosed systems and methods can receive user input of a query term and a query area and provide the query term and query area to a query neural network to generate a query feature set. Moreover, the disclosed systems and methods can compare the query feature set to digital visual media feature sets. Further, based on the comparison, the disclosed systems and methods can identify digital visual media portraying targeted visual content corresponding to the query term within a targeted region corresponding to the query area.
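A toy way to make a query both semantic and spatial is to place an embedding of the query term into the canvas cells covered by the query area and compare that map against precomputed per-cell image features. The sketch below shows only that comparison and is not the patented query neural network; the three-word vocabulary, the 8x8 canvas grid, and the random gallery features are assumptions.

```python
# Minimal sketch: rank images by query-term similarity inside a query area.
import torch
import torch.nn.functional as F

feat_dim, grid = 64, 8
vocab = {"dog": 0, "tree": 1, "car": 2}
term_emb = torch.nn.Embedding(len(vocab), feat_dim)

def query_features(term, mask):
    """Place the query term's embedding into the cells covered by the query area."""
    e = term_emb(torch.tensor(vocab[term]))                 # (feat_dim,)
    q = e.view(feat_dim, 1, 1) * mask.view(1, grid, grid)   # (feat_dim, grid, grid)
    return F.normalize(q, dim=0)

def score(query_map, image_map, mask):
    """Cosine similarity between query and image features, averaged over the query area."""
    sim = (query_map * F.normalize(image_map, dim=0)).sum(dim=0)   # (grid, grid)
    return (sim * mask).sum() / mask.sum().clamp(min=1)

# Query: "dog" in the top-left quadrant of the digital canvas.
mask = torch.zeros(grid, grid)
mask[:4, :4] = 1.0
q = query_features("dog", mask)

# Rank a small gallery of precomputed per-cell image feature maps.
gallery = torch.randn(10, feat_dim, grid, grid)
scores = torch.stack([score(q, img, mask) for img in gallery])
ranking = scores.argsort(descending=True)
```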
-
Publication No.: US10916001B2
Publication Date: 2021-02-09
Application No.: US15457830
Filing Date: 2017-03-13
Applicant: Adobe Inc.
Inventor: Jingwan Lu, Patsorn Sangkloy, Chen Fang
Abstract: Methods and systems are provided for transforming sketches into stylized electronic paintings. A neural network system is trained where the training includes training a first neural network that converts input sketches into output images and training a second neural network that converts images into output paintings. Similarity for the first neural network is evaluated between the output image and a reference image and similarity for the second neural network is evaluated between the output painting, the output image, and a reference painting. The neural network system is modified based on the evaluated similarity. The trained neural network is used to generate an output painting from an input sketch where the output painting maintains features from the input sketch utilizing an extrapolated intermediate image and reflects a designated style from the reference painting.
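A minimal sketch of the two-stage arrangement follows, assuming tiny convolutional stand-ins for both networks: the first network is pulled toward a reference image with an L1 loss, and the second is pulled toward the intermediate image (content) and toward the reference painting's Gram statistics (style). A production system would compute these similarities on deep features rather than raw pixels; everything here, including the loss weights and placeholder tensors, is illustrative.

```python
# Minimal sketch: sketch -> image -> painting, trained with content and style losses.
import torch
import torch.nn as nn
import torch.nn.functional as F

def small_net(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, cout, 3, padding=1), nn.Sigmoid())

sketch_to_image = small_net(1, 3)     # first network: sketch -> image
image_to_paint = small_net(3, 3)      # second network: image -> painting
opt = torch.optim.Adam(
    list(sketch_to_image.parameters()) + list(image_to_paint.parameters()), lr=1e-3)

def gram(x):
    """Channel-correlation (Gram) matrix, a standard proxy for style."""
    b, c, h, w = x.shape
    f = x.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

sketch = torch.rand(2, 1, 64, 64)            # placeholder input sketches
ref_image = torch.rand(2, 3, 64, 64)         # placeholder reference images
ref_painting = torch.rand(2, 3, 64, 64)      # placeholder reference paintings

image = sketch_to_image(sketch)              # intermediate image
painting = image_to_paint(image)             # stylized output painting

loss_stage1 = F.l1_loss(image, ref_image)                    # keep sketch content
loss_content = F.l1_loss(painting, image.detach())           # stay close to intermediate image
loss_style = F.mse_loss(gram(painting), gram(ref_painting))  # match painting statistics
loss = loss_stage1 + loss_content + 10.0 * loss_style
opt.zero_grad(); loss.backward(); opt.step()
```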
-
Publication No.: US10825221B1
Publication Date: 2020-11-03
Application No.: US16392041
Filing Date: 2019-04-23
Applicant: Adobe Inc.
Inventor: Zhaowen Wang, Yipin Zhou, Trung Bui, Chen Fang
Abstract: The present disclosure provides a method for generating a video of a body moving in synchronization with music by applying a first artificial neural network (ANN) to a sequence of samples of an audio waveform of the music to generate a first latent vector describing the waveform and a sequence of coordinates of points of body parts of the body, by applying a first stage of a second ANN to the sequence of coordinates to generate a second latent vector describing movement of the body, by applying a second stage of the second ANN to static images of a person in a plurality of different poses to generate a third latent vector describing an appearance of the person, and by applying a third stage of the second ANN to the first latent vector, the second latent vector, and the third latent vector to generate the video.
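The abstract's three-latent-vector structure can be mocked up with small recurrent and convolutional encoders: one latent from the audio, one from the predicted keypoint sequence, and one from static images of the person, all fused and decoded into frames. The sketch below is only a shape-level illustration with random placeholder inputs and toy 16x16 output frames, not the patented networks; every module name and size is an assumption.

```python
# Minimal sketch: music -> keypoints -> (audio, movement, appearance) latents -> video.
import torch
import torch.nn as nn

T, n_joints, latent = 32, 17, 64

audio_enc = nn.GRU(input_size=256, hidden_size=latent, batch_first=True)
pose_head = nn.Linear(latent, n_joints * 2)                 # per-step 2-D keypoints
pose_enc = nn.GRU(input_size=n_joints * 2, hidden_size=latent, batch_first=True)
appearance_enc = nn.Sequential(                             # encodes static images of the person
    nn.Conv2d(3, 16, 4, stride=4), nn.ReLU(),
    nn.Conv2d(16, latent, 4, stride=4), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten())
video_dec = nn.Sequential(                                  # fused latents -> T low-res frames
    nn.Linear(3 * latent, T * 3 * 16 * 16), nn.Sigmoid())

waveform_chunks = torch.rand(1, T, 256)      # T chunks of raw audio samples
pose_images = torch.rand(4, 3, 64, 64)       # static images of the person in 4 poses

# First ANN: audio -> audio latent vector plus a sequence of body-keypoint coordinates.
audio_steps, audio_h = audio_enc(waveform_chunks)
audio_latent = audio_h[-1]                                  # (1, latent)
keypoints = pose_head(audio_steps)                          # (1, T, n_joints * 2)

# Second ANN, stage 1: keypoint sequence -> movement latent vector.
_, move_h = pose_enc(keypoints)
movement_latent = move_h[-1]                                # (1, latent)

# Second ANN, stage 2: static pose images -> appearance latent vector.
appearance_latent = appearance_enc(pose_images).mean(dim=0, keepdim=True)

# Second ANN, stage 3: fuse the three latent vectors and decode the video.
fused = torch.cat([audio_latent, movement_latent, appearance_latent], dim=1)
video = video_dec(fused).view(T, 3, 16, 16)                 # toy video: T frames of 16x16 RGB
```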
-
Publication No.: US20200342646A1
Publication Date: 2020-10-29
Application No.: US16392041
Filing Date: 2019-04-23
Applicant: Adobe Inc.
Inventor: Zhaowen Wang, Yipin Zhou, Trung Bui, Chen Fang
Abstract: The present disclosure provides a method for generating a video of a body moving in synchronization with music by applying a first artificial neural network (ANN) to a sequence of samples of an audio waveform of the music to generate a first latent vector describing the waveform and a sequence of coordinates of points of body parts of the body, by applying a first stage of a second ANN to the sequence of coordinates to generate a second latent vector describing movement of the body, by applying a second stage of the second ANN to static images of a person in a plurality of different poses to generate a third latent vector describing an appearance of the person, and by applying a third stage of the second ANN to the first latent vector, the second latent vector, and the third latent vector to generate the video.