-
Publication Number: US20230325992A1
Publication Date: 2023-10-12
Application Number: US17658774
Application Date: 2022-04-11
Applicant: Adobe Inc.
Inventor: Zhe Lin , Sijie Zhu , Jason Wen Yong Kuen , Scott Cohen , Zhifei Zhang
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media that utilize artificial intelligence to learn to recommend foreground object images for use in generating composite images based on geometry and/or lighting features. For instance, in one or more embodiments, the disclosed systems transform a foreground object image corresponding to a background image using at least one of a geometry transformation or a lighting transformation. The disclosed systems further generate predicted embeddings for the background image, the foreground object image, and the transformed foreground object image within a geometry-lighting-sensitive embedding space utilizing a geometry-lighting-aware neural network. Using a loss determined from the predicted embeddings, the disclosed systems update parameters of the geometry-lighting-aware neural network. The disclosed systems further provide a variety of efficient user interfaces for generating composite digital images.
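Below is a minimal, hypothetical sketch of the kind of training step the abstract describes: a foreground object image is perturbed with a geometry or lighting transformation, the background, foreground, and transformed foreground are embedded, and a contrastive-style loss updates the network. The encoder names, the torchvision transformations, and the triplet formulation are assumptions for illustration, not the patented method.

```python
# Illustrative sketch only: a triplet-style training step for a
# geometry-lighting-sensitive embedding space. Encoders, transforms,
# and the margin loss are assumptions, not the patented objective.
import torch
import torch.nn.functional as F
from torchvision import transforms

geometry_or_lighting = transforms.Compose([
    transforms.RandomAffine(degrees=15, translate=(0.1, 0.1), scale=(0.8, 1.2)),
    transforms.ColorJitter(brightness=0.4, contrast=0.4),
])

def training_step(bg_encoder, fg_encoder, optimizer, background, foreground, margin=0.3):
    """One update: keep the compatible foreground close to the background
    embedding and push its geometry/lighting-perturbed copy farther away."""
    transformed = geometry_or_lighting(foreground)

    anchor = F.normalize(bg_encoder(background), dim=-1)      # background embedding
    positive = F.normalize(fg_encoder(foreground), dim=-1)    # compatible foreground
    negative = F.normalize(fg_encoder(transformed), dim=-1)   # perturbed foreground

    loss = F.triplet_margin_loss(anchor, positive, negative, margin=margin)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```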
-
Publication Number: US11508148B2
Publication Date: 2022-11-22
Application Number: US16822878
Application Date: 2020-03-18
Applicant: Adobe Inc.
Inventor: Yijun Li , Zhifei Zhang , Richard Zhang , Jingwan Lu
Abstract: The present disclosure relates to systems, computer-implemented methods, and non-transitory computer readable medium for automatically transferring makeup from a reference face image to a target face image using a neural network trained using semi-supervised learning. For example, the disclosed systems can receive, at a neural network, a target face image and a reference face image, where the target face image is selected by a user via a graphical user interface (GUI) and the reference face image has makeup. The systems transfer, by the neural network, the makeup from the reference face image to the target face image, where the neural network is trained to transfer the makeup from the reference face image to the target face image using semi-supervised learning. The systems output for display the makeup on the target face image.
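The following sketch illustrates, under stated assumptions, how a makeup-transfer generator might be applied and how a semi-supervised objective could mix a supervised term (when pseudo-paired data exists) with an unsupervised consistency term. The function names, loss weights, and consistency term are hypothetical, not the trained network from the patent.

```python
# Minimal sketch, assuming a generator that takes (target_face, reference_face)
# and returns the target face with the reference makeup applied.
import torch
import torch.nn.functional as F

def transfer_makeup(generator, target_face, reference_face):
    """Return the target face rendered with the reference face's makeup."""
    with torch.no_grad():
        return generator(target_face, reference_face)

def semi_supervised_loss(generator, target, reference, pseudo_ground_truth=None,
                         w_sup=1.0, w_unsup=0.1):
    output = generator(target, reference)
    # Supervised term: only available when a synthetic/pseudo-paired label exists.
    sup = F.l1_loss(output, pseudo_ground_truth) if pseudo_ground_truth is not None \
        else output.new_zeros(())
    # Unsupervised term: transferring a face's own makeup should reproduce it.
    unsup = F.l1_loss(generator(target, target), target)
    return w_sup * sup + w_unsup * unsup
```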
-
Publication Number: US20250069297A1
Publication Date: 2025-02-27
Application Number: US18948839
Application Date: 2024-11-15
Applicant: Adobe Inc.
Inventor: Zhifei Zhang , Zhe Lin , Scott Cohen , Darshan Prasad , Zhihong Ding
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for transferring global style features between digital images utilizing one or more machine learning models or neural networks. In particular, in one or more embodiments, the disclosed systems receive a request to transfer a global style from a source digital image to a target digital image, identify at least one target object within the target digital image, and transfer the global style from the source digital image to the target digital image while maintaining an object style of the at least one target object.
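As a rough illustration of the described behavior, the sketch below applies a global style to the whole target image and then composites the detected target object's original pixels back through a segmentation mask. The style model, segmenter, and mask-based blend are stand-ins, not the disclosed networks.

```python
# Illustrative composition only; the style-transfer model, segmentation model,
# and blending rule are placeholders for whichever networks the system uses.
import torch

def transfer_global_style(style_model, segmenter, source_img, target_img):
    """Apply the source image's global style to the target image while
    keeping the detected target object's original appearance."""
    object_mask = segmenter(target_img)                 # ~1 inside the target object
    stylized = style_model(content=target_img, style=source_img)
    # Preserve the object's own style: original pixels inside the mask,
    # globally restyled pixels everywhere else.
    return object_mask * target_img + (1 - object_mask) * stylized
```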
-
Publication Number: US20250046055A1
Publication Date: 2025-02-06
Application Number: US18363980
Application Date: 2023-08-02
Applicant: Adobe Inc.
Inventor: Zhifei Zhang , Zhe Lin , Yixuan Ren , Yifei Fan , Jing Shi
Abstract: This disclosure describes one or more implementations of systems, non-transitory computer-readable media, and methods that train (and utilize) an image color editing diffusion neural network to generate one or more color-edited digital images for a digital image. In particular, in one or more implementations, the disclosed systems identify a digital image depicting content in a first color style. Moreover, the disclosed systems generate, from the digital image utilizing an image color editing diffusion neural network, a color-edited digital image depicting the content in a second color style different from the first color style. Further, the disclosed systems provide, for display within a graphical user interface, the color-edited digital image.
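A toy conditional-sampling loop is sketched below to show how a diffusion model conditioned on the source image and a target color style could produce a color-edited output. The noise schedule, conditioning interface, and step count are illustrative assumptions, not Adobe's implementation.

```python
# Toy DDPM-style sampler; the denoiser signature and conditioning are hypothetical.
import torch

@torch.no_grad()
def sample_color_edit(denoiser, image, style_embedding, steps=50):
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn_like(image)                         # start from pure noise
    for t in reversed(range(steps)):
        # Predict noise given the noisy sample, the source image,
        # the target color style, and the timestep.
        eps = denoiser(x, image, style_embedding, t)
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / torch.sqrt(alphas[t])
        if t > 0:
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x                                            # color-edited image
```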
-
Publication Number: US11977829B2
Publication Date: 2024-05-07
Application Number: US17362031
Application Date: 2021-06-29
Applicant: Adobe Inc.
Inventor: Zhifei Zhang , Zhaowen Wang , Hailin Jin , Matthew Fisher
IPC: G06F40/109 , G06N3/045 , G06T11/20
CPC classification number: G06F40/109 , G06N3/045 , G06T11/203
Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately and flexibly generating scalable and semantically editable font representations utilizing a machine learning approach. For example, the disclosed systems generate a font representation code from a glyph utilizing a particular neural network architecture. In particular, the disclosed systems utilize a glyph appearance propagation model and perform an iterative process to generate a font representation code from an initial glyph. Additionally, using the glyph appearance propagation model, the disclosed systems automatically propagate the appearance of the initial glyph from the font representation code to generate additional glyphs corresponding to respective glyph labels. In some embodiments, the disclosed systems propagate edits or other changes in appearance of a glyph to other glyphs within a glyph set (e.g., to match the appearance of the edited glyph).
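The sketch below illustrates one plausible reading of the iterative process: a font representation code is fit so that a decoder reproduces the initial glyph, and the same code is then decoded for other glyph labels to propagate its appearance. The decoder interface, code dimensionality, and optimizer settings are assumptions, not the patented model.

```python
# Sketch under stated assumptions: 'decoder' stands in for the glyph appearance
# propagation model; the reconstruction-based fitting loop is illustrative.
import torch
import torch.nn.functional as F

def infer_font_code(decoder, seed_glyph, seed_label, dim=128, iters=200, lr=0.05):
    """Iteratively fit a font representation code that reproduces the seed glyph."""
    code = torch.zeros(1, dim, requires_grad=True)
    opt = torch.optim.Adam([code], lr=lr)
    for _ in range(iters):
        recon = decoder(code, seed_label)
        loss = F.l1_loss(recon, seed_glyph)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return code.detach()

def propagate_appearance(decoder, font_code, glyph_labels):
    """Render additional glyphs that share the seed glyph's appearance."""
    return [decoder(font_code, label) for label in glyph_labels]
```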
-
Publication Number: US20230351566A1
Publication Date: 2023-11-02
Application Number: US17660968
Application Date: 2022-04-27
Applicant: ADOBE INC.
Inventor: Sangryul Jeon , Zhifei Zhang , Zhe Lin , Scott Cohen , Zhihong Ding
CPC classification number: G06T5/50 , G06V10/513 , G06V10/751 , G06V10/7715 , G06V10/774 , G06V10/454 , G06T2207/20221 , G06T2207/20081
Abstract: Systems and methods for image processing are described. Embodiments of the present disclosure encode a content image and a style image using a machine learning model to obtain content features and style features, wherein the content image includes a first object having a first appearance attribute and the style image includes a second object having a second appearance attribute; align the content features and the style features to obtain a sparse correspondence map that indicates a correspondence between a sparse set of pixels of the content image and corresponding pixels of the style image; and generate a hybrid image based on the sparse correspondence map, wherein the hybrid image depicts the first object having the second appearance attribute.
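To make the alignment step concrete, the hypothetical sketch below computes cosine similarities between content and style feature maps, keeps only confident matches as a sparse correspondence, and copies the matched style pixels into the content layout. It assumes the feature maps and images share spatial resolution; the encoder, threshold, and blending are illustrative, not the claimed pipeline.

```python
# Illustrative sparse correspondence via cosine similarity; all thresholds
# and shapes are assumptions (features assumed at image resolution).
import torch
import torch.nn.functional as F

def sparse_correspondence(content_feat, style_feat, threshold=0.7):
    """content_feat, style_feat: (C, H, W) feature maps from a shared encoder."""
    C, H, W = content_feat.shape
    c = F.normalize(content_feat.reshape(C, -1), dim=0)    # (C, HW)
    s = F.normalize(style_feat.reshape(C, -1), dim=0)      # (C, HW)
    sim = c.t() @ s                                        # (HW, HW) cosine similarity
    best_sim, best_idx = sim.max(dim=1)
    keep = best_sim > threshold                            # sparse, confident matches
    return best_idx, keep

def hybrid_image(content_img, style_img, best_idx, keep):
    """Copy matched style pixels onto the content layout; keep content elsewhere."""
    _, H, W = content_img.shape
    style_flat = style_img.reshape(style_img.shape[0], -1)
    out = content_img.reshape(content_img.shape[0], -1).clone()
    out[:, keep] = style_flat[:, best_idx[keep]]
    return out.reshape(-1, H, W)
```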
-
Publication Number: US11688190B2
Publication Date: 2023-06-27
Application Number: US17089865
Application Date: 2020-11-05
Applicant: ADOBE INC.
Inventor: Zhifei Zhang , Xingqian Xu , Zhaowen Wang , Brian Price
IPC: G06V30/148 , G06T7/194 , G06N20/00 , G06N3/04 , G06T11/60 , G06F18/214 , G06V30/10
CPC classification number: G06V30/153 , G06F18/214 , G06N3/04 , G06N20/00 , G06T7/194 , G06T11/60 , G06V30/10
Abstract: Systems and methods for text segmentation are described. Embodiments of the inventive concept are configured to receive an image including a foreground text portion and a background portion, classify each pixel of the image as foreground text or background using a neural network that refines a segmentation prediction using a key vector representing features of the foreground text portion, wherein the key vector is based on the segmentation prediction, and identify the foreground text portion based on the classification.
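A minimal two-pass sketch of the described refinement follows: an initial per-pixel prediction is pooled into a key vector summarizing foreground-text features, and that vector conditions a second, refined prediction. The backbone, heads, and pooling scheme are assumptions, not the claimed architecture.

```python
# Hypothetical two-pass text segmentation with key-vector refinement.
import torch
import torch.nn.functional as F

def segment_text(backbone, head, refine_head, image):
    feats = backbone(image)                               # (B, C, H, W) features
    coarse = torch.sigmoid(head(feats))                   # (B, 1, H, W) initial prediction

    # Key vector: feature average weighted by the predicted text mask.
    weights = coarse / (coarse.sum(dim=(2, 3), keepdim=True) + 1e-6)
    key = (feats * weights).sum(dim=(2, 3))               # (B, C)

    # Refinement conditioned on the key vector, broadcast over the feature map.
    key_map = key[:, :, None, None].expand_as(feats)
    refined = torch.sigmoid(refine_head(torch.cat([feats, key_map], dim=1)))
    return (refined > 0.5).float()                        # 1 = foreground text pixel
```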
-
Publication Number: US20220138483A1
Publication Date: 2022-05-05
Application Number: US17089865
Application Date: 2020-11-05
Applicant: ADOBE INC.
Inventor: Zhifei Zhang , Xingqian Xu , Zhaowen Wang , Brian Price
Abstract: Systems and methods for text segmentation are described. Embodiments of the inventive concept are configured to receive an image including a foreground text portion and a background portion, classify each pixel of the image as foreground text or background using a neural network that refines a segmentation prediction using a key vector representing features of the foreground text portion, wherein the key vector is based on the segmentation prediction, and identify the foreground text portion based on the classification.
-
Publication Number: US12217395B2
Publication Date: 2025-02-04
Application Number: US17660968
Application Date: 2022-04-27
Applicant: ADOBE INC.
Inventor: Sangryul Jeon , Zhifei Zhang , Zhe Lin , Scott Cohen , Zhihong Ding
Abstract: Systems and methods for image processing are described. Embodiments of the present disclosure encode a content image and a style image using a machine learning model to obtain content features and style features, wherein the content image includes a first object having a first appearance attribute and the style image includes a second object having a second appearance attribute; align the content features and the style features to obtain a sparse correspondence map that indicates a correspondence between a sparse set of pixels of the content image and corresponding pixels of the style image; and generate a hybrid image based on the sparse correspondence map, wherein the hybrid image depicts the first object having the second appearance attribute.
-
Publication Number: US20240005574A1
Publication Date: 2024-01-04
Application Number: US17810392
Application Date: 2022-07-01
Applicant: Adobe Inc.
Inventor: Zhifei Zhang , Zhe Lin , Scott Cohen , Darshan Prasad , Zhihong Ding
CPC classification number: G06T11/40 , G06T5/005 , G06T7/12 , G06T2207/20084
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for transferring global style features between digital images utilizing one or more machine learning models or neural networks. In particular, in one or more embodiments, the disclosed systems receive a request to transfer a global style from a source digital image to a target digital image, identify at least one target object within the target digital image, and transfer the global style from the source digital image to the target digital image while maintaining an object style of the at least one target object.