Generating stylized-stroke images from source images utilizing style-transfer-neural networks with non-photorealistic-rendering

    Publication No.: US10748324B2

    Publication Date: 2020-08-18

    Application No.: US16184289

    Filing Date: 2018-11-08

    Applicant: Adobe Inc.

    Abstract: This disclosure relates to methods, non-transitory computer readable media, and systems that integrate (or embed) a non-photorealistic rendering (“NPR”) generator with a style-transfer-neural network to generate stylized images that both correspond to a source image and resemble a stroke style. By integrating an NPR generator with a style-transfer-neural network, the disclosed methods, non-transitory computer readable media, and systems can accurately capture a stroke style resembling one or both of stylized edges or stylized shadings. When training such a style-transfer-neural network, the integrated NPR generator can enable the disclosed methods, non-transitory computer readable media, and systems to use real-stroke drawings (instead of conventional paired-ground-truth drawings) for training the network to accurately portray a stroke style. In some implementations, the disclosed methods, non-transitory computer readable media, and systems can either train or apply a style-transfer-neural network that captures a variety of stroke styles, such as different edge-stroke styles or shading-stroke styles.
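
    One classic NPR edge generator that produces stroke-like output is the extended difference-of-Gaussians (XDoG) operator. The abstract does not say which NPR generator is integrated with the style-transfer-neural network, so the following numpy/scipy sketch is purely illustrative, and the function name and parameter values are assumptions:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def xdog_strokes(gray, sigma=1.0, k=1.6, tau=0.98, phi=10.0):
        """XDoG: a classic NPR edge generator with stroke-like output.

        gray: 2-D grayscale array in [0, 1]; returns values in (0, 1].
        """
        # Difference of two Gaussian blurs gives a band-pass edge response.
        d = gaussian_filter(gray, sigma) - tau * gaussian_filter(gray, sigma * k)
        # Soft thresholding turns the response into ink-like strokes.
        return np.where(d >= 0.0, 1.0, 1.0 + np.tanh(phi * d))
    ```

    An output like this could stand in for the stylized-edge channel that the network learns to reproduce from real-stroke drawings.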

    Transferring Image Style to Content of a Digital Image

    Publication No.: US20200226724A1

    Publication Date: 2020-07-16

    Application No.: US16246051

    Filing Date: 2019-01-11

    Applicant: Adobe Inc.

    Abstract: In implementations of transferring image style to content of a digital image, an image editing system includes an encoder that extracts features from a content image and features from a style image. A whitening and color transform generates coarse features from the content and style features extracted by the encoder for one pass of encoding and decoding. Hence, the processing delay and memory requirements are low. A feature transfer module iteratively transfers style features to the coarse feature map and generates a fine feature map. The image editing system fuses the fine features with the coarse features, and a decoder generates an output image with content of the content image in a style of the style image from the fused features. Accordingly, the image editing system efficiently transfers an image style to image content in real-time, without undesirable artifacts in the output image.
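
    The abstract names a whitening and color transform over encoder features. A minimal numpy sketch of the standard whitening-and-coloring transform follows; the (C, N) feature layout, the eps regularizer, and the function name are assumptions rather than the patent's implementation:

    ```python
    import numpy as np

    def wct(content_feat, style_feat, eps=1e-5):
        """Whitening and coloring transform on flattened encoder features.

        content_feat, style_feat: (C, N) arrays, one row per channel,
        N = H * W spatial positions.
        """
        cf = content_feat - content_feat.mean(axis=1, keepdims=True)
        sf = style_feat - style_feat.mean(axis=1, keepdims=True)

        # Whiten: remove channel correlations from the content features.
        c = cf @ cf.T / (cf.shape[1] - 1) + eps * np.eye(cf.shape[0])
        Uc, Sc, _ = np.linalg.svd(c)
        whitened = Uc @ np.diag(Sc ** -0.5) @ Uc.T @ cf

        # Color: impose the channel correlations of the style features.
        s = sf @ sf.T / (sf.shape[1] - 1) + eps * np.eye(sf.shape[0])
        Us, Ss, _ = np.linalg.svd(s)
        colored = Us @ np.diag(Ss ** 0.5) @ Us.T @ whitened

        # Re-add the style mean so first- and second-order statistics match.
        return colored + style_feat.mean(axis=1, keepdims=True)
    ```

    Because this is a closed-form transform applied in a single encode-decode pass, it is consistent with the low processing delay and memory requirements the abstract claims.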

    Collaborative feature learning from social media

    Publication No.: US10565518B2

    Publication Date: 2020-02-18

    Application No.: US14748059

    Filing Date: 2015-06-23

    Applicant: Adobe Inc.

    Abstract: The present disclosure is directed to collaborative feature learning using social media data. For example, a machine learning system may identify social media data that includes user behavioral data, which indicates user interactions with content items. Using the identified user behavioral data, the machine learning system may determine latent representations of the content items. In some embodiments, the machine learning system may train a machine-learning model based on the latent representations. Further, the machine learning system may extract features of the content items from the trained machine-learning model.
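
    The abstract does not specify how the latent representations are obtained; a truncated SVD of the user-item interaction matrix is one common choice and serves here only as an illustration (all names are assumptions):

    ```python
    import numpy as np

    def item_latents(interactions, dim=64):
        """Derive per-item latent vectors from behavioral data.

        interactions: (num_users, num_items) matrix of interaction counts;
        dim must not exceed min(num_users, num_items).
        Returns a (num_items, dim) latent vector per content item.
        """
        # Truncated SVD compresses the co-interaction structure:
        # items that attract similar users end up with similar latents.
        U, S, Vt = np.linalg.svd(interactions, full_matrices=False)
        return (np.diag(S[:dim]) @ Vt[:dim]).T
    ```

    One way to read "train a machine-learning model based on the latent representations" is to use these vectors as regression targets for a network that maps raw item content into the same space; that network then extracts behavior-informed features for unseen items.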

    Utilizing a digital canvas to conduct a spatial-semantic search for digital visual media

    Publication No.: US10963759B2

    Publication Date: 2021-03-30

    Application No.: US16417115

    Filing Date: 2019-05-20

    Applicant: Adobe Inc.

    Abstract: The present disclosure includes methods and systems for searching for digital visual media based on semantic and spatial information. In particular, one or more embodiments of the disclosed systems and methods identify digital visual media displaying targeted visual content in a targeted region based on a query term and a query area provided via a digital canvas. Specifically, the disclosed systems and methods can receive user input of a query term and a query area and provide the query term and query area to a query neural network to generate a query feature set. Moreover, the disclosed systems and methods can compare the query feature set to digital visual media feature sets. Further, based on the comparison, the disclosed systems and methods can identify digital visual media portraying targeted visual content corresponding to the query term within a targeted region corresponding to the query area.
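
    The abstract describes comparing a query feature set against digital-visual-media feature sets. Below is a minimal sketch of one plausible comparison, assuming both sides are (H, W, D) spatial grids of semantic embeddings and scoring with mean cosine similarity over the user-drawn cells; the names and the scoring rule are illustrative assumptions:

    ```python
    import numpy as np

    def spatial_semantic_score(query_grid, media_grid, eps=1e-8):
        """Score one media item against a digital-canvas query.

        query_grid, media_grid: (H, W, D) grids of D-dim embeddings;
        query cells the user left empty are all-zero and are ignored.
        """
        q = query_grid.reshape(-1, query_grid.shape[-1])
        m = media_grid.reshape(-1, media_grid.shape[-1])
        active = np.linalg.norm(q, axis=1) > 0   # cells inside the query area
        if not active.any():
            return 0.0
        q, m = q[active], m[active]
        cos = (q * m).sum(axis=1) / (np.linalg.norm(q, axis=1)
                                     * np.linalg.norm(m, axis=1) + eps)
        return float(cos.mean())  # higher = content and location match better
    ```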

    Facilitating sketch to painting transformations

    Publication No.: US10916001B2

    Publication Date: 2021-02-09

    Application No.: US15457830

    Filing Date: 2017-03-13

    Applicant: ADOBE INC.

    Abstract: Methods and systems are provided for transforming sketches into stylized electronic paintings. A neural network system is trained, where the training includes training a first neural network that converts input sketches into output images and training a second neural network that converts images into output paintings. Similarity for the first neural network is evaluated between the output image and a reference image, and similarity for the second neural network is evaluated between the output painting, the output image, and a reference painting. The neural network system is modified based on the evaluated similarity. The trained neural network system is used to generate an output painting from an input sketch, where the output painting maintains features from the input sketch, utilizing an extrapolated intermediate image, and reflects a designated style from the reference painting.
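
    A hypothetical sketch of one training step for the two-network arrangement the abstract describes; net1, net2, and similarity are placeholders for the unspecified networks and similarity measure:

    ```python
    def training_step(sketch, ref_image, ref_painting, net1, net2, similarity):
        """One illustrative training step (all callables are placeholders).

        net1: sketch -> intermediate image; net2: image -> painting.
        similarity: differentiable loss, lower = more similar.
        """
        image = net1(sketch)        # first network output
        painting = net2(image)      # second network output
        # The first network is scored against a reference image ...
        loss_image = similarity(image, ref_image)
        # ... the second against its input image and a reference painting.
        loss_paint = similarity(painting, image) + similarity(painting, ref_painting)
        # The combined loss drives the modification of the whole system.
        return loss_image + loss_paint
    ```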

    Music driven human dancing video synthesis

    Publication No.: US10825221B1

    Publication Date: 2020-11-03

    Application No.: US16392041

    Filing Date: 2019-04-23

    Applicant: ADOBE INC.

    Abstract: The present disclosure provides a method for generating a video of a body moving in synchronization with music by applying a first artificial neural network (ANN) to a sequence of samples of an audio waveform of the music to generate a first latent vector describing the waveform and a sequence of coordinates of points of body parts of the body, by applying a first stage of a second ANN to the sequence of coordinates to generate a second latent vector describing movement of the body, by applying a second stage of the second ANN to static images of a person in a plurality of different poses to generate a third latent vector describing an appearance of the person, and by applying a third stage of the second ANN to the first latent vector, the second latent vector, and the third latent vector to generate the video.
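
    A structural sketch of the staged pipeline in the abstract; every function name is a hypothetical placeholder for the corresponding ANN stage:

    ```python
    def synthesize_dance_video(audio_samples, pose_images,
                               audio_net, motion_net, appearance_net, renderer):
        """Compose the stages the abstract describes (names are placeholders).

        audio_net:      audio samples -> (waveform latent, keypoint sequence)
        motion_net:     keypoint sequence -> movement latent (2nd ANN, 1st stage)
        appearance_net: static pose images -> appearance latent (2nd stage)
        renderer:       the three latents -> video frames (3rd stage)
        """
        audio_latent, keypoints = audio_net(audio_samples)
        motion_latent = motion_net(keypoints)
        appearance_latent = appearance_net(pose_images)
        return renderer(audio_latent, motion_latent, appearance_latent)
    ```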

    Music driven human dancing video synthesis
    (Invention application publication)

    Publication No.: US20200342646A1

    Publication Date: 2020-10-29

    Application No.: US16392041

    Filing Date: 2019-04-23

    Applicant: ADOBE INC.

    Abstract: Identical to the abstract of the granted patent US10825221B1 above.
