Text-Based Real Image Editing with Diffusion Models

    Publication Number: US20240355017A1

    Publication Date: 2024-10-24

    Application Number: US18302508

    Filing Date: 2023-04-18

    Applicant: Google LLC

    CPC classification number: G06T11/60 G06T3/4053

    Abstract: Methods and systems for editing an image are disclosed herein. The method includes receiving an input image and a target text, the target text indicating a desired edit for the input image, and obtaining, by the computing system, a target text embedding based on the target text. The method also includes obtaining, by the computing system, an optimized text embedding based on the target text embedding and the input image, and fine-tuning, by the computing system, a diffusion model based on the optimized text embedding. The method can further include interpolating, by the computing system, the target text embedding and the optimized text embedding to obtain an interpolated embedding, and generating, by the computing system, an edited image including the desired edit using the diffusion model based on the input image and the interpolated embedding.
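
    A minimal Python sketch of the claimed pipeline follows, tracing the five steps of the abstract. The encode_text and diffusion_model callables, the learning rate, the step count, and the interpolation weight eta are all illustrative assumptions; the patent does not fix a particular text encoder, diffusion architecture, or hyperparameters.

    import torch

    def edit_image(input_image, target_text, encode_text, diffusion_model,
                   eta=0.7):
        # Step 1: obtain the target text embedding from the target text.
        e_tgt = encode_text(target_text)

        # Step 2: optimize a copy of that embedding so the (frozen)
        # diffusion model reconstructs the input image from it.
        e_opt = e_tgt.clone().detach().requires_grad_(True)
        optimizer = torch.optim.Adam([e_opt], lr=1e-3)
        for _ in range(100):
            optimizer.zero_grad()
            recon = diffusion_model(input_image, e_opt)
            torch.nn.functional.mse_loss(recon, input_image).backward()
            optimizer.step()

        # Step 3: fine-tune the diffusion model itself on the optimized
        # embedding (standard denoising objective; omitted here).

        # Step 4: interpolate the target and optimized text embeddings.
        e_mix = eta * e_tgt + (1.0 - eta) * e_opt.detach()

        # Step 5: generate the edited image conditioned on the
        # interpolated embedding.
        return diffusion_model(input_image, e_mix)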

    Depth Determination for Images Captured with a Moving Camera and Representing Moving Features

    Publication Number: US20210090279A1

    Publication Date: 2021-03-25

    Application Number: US16578215

    Filing Date: 2019-09-20

    Applicant: Google LLC

    Abstract: A method includes obtaining a reference image and a target image each representing an environment containing moving features and static features. The method also includes determining an object mask configured to mask out the moving features and preserve the static features in the target image. The method additionally includes determining, based on motion parallax between the reference image and the target image, a static depth image representing depth values of the static features in the target image. The method further includes generating, by way of a machine learning model, a dynamic depth image representing depth values of both the static features and the moving features in the target image. The model is trained to generate the dynamic depth image by determining depth values of at least the moving features based on the target image, the object mask, and the static depth image.
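
    The Python sketch below illustrates the claimed dataflow. The segment_moving_features, depth_from_parallax, and dynamic_depth_model callables are hypothetical stand-ins; the patent does not prescribe how the mask, the parallax-based depth, or the machine learning model are implemented.

    def dynamic_depth(reference, target, segment_moving_features,
                      depth_from_parallax, dynamic_depth_model):
        # Object mask: 1 over static features, 0 over moving ones, so
        # multiplying by it masks the moving features out.
        mask = segment_moving_features(target)

        # Static depth from motion parallax between the reference and
        # target views; trusted only where the scene did not move.
        static_depth = depth_from_parallax(reference, target) * mask

        # The trained model completes depth for the moving features too,
        # conditioned on the target image, the mask, and the static depth.
        return dynamic_depth_model(target, mask, static_depth)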

    Re-timing objects in video via layered neural rendering

    Publication Number: US12243145B2

    Publication Date: 2025-03-04

    Application Number: US17927101

    Filing Date: 2020-05-22

    Applicant: Google LLC

    Abstract: A computer-implemented method for decomposing videos into multiple layers (212, 213) that can be re-combined with modified relative timings includes obtaining video data including a plurality of image frames (201) depicting one or more objects. For each of the plurality of frames, the computer-implemented method includes generating one or more object maps descriptive of a respective location of at least one object of the one or more objects within the image frame. For each of the plurality of frames, the computer-implemented method includes inputting the image frame and the one or more object maps into a machine-learned layer renderer model (220). For each of the plurality of frames, the computer-implemented method includes receiving, as output from the machine-learned layer renderer model, a background layer illustrative of a background of the video data and one or more object layers respectively associated with one of the one or more object maps. The object layers include image data illustrative of the at least one object and one or more trace effects at least partially attributable to the at least one object such that the one or more object layers and the background layer can be re-combined with modified relative timings.
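
    As a rough illustration of the decompose-then-recombine idea, the Python sketch below assumes frames and layers are PyTorch-style tensors and that layer_renderer is a hypothetical trained model returning a background plus one RGBA layer per object; none of these names come from the patent.

    def retime(frames, object_maps, layer_renderer, offsets):
        # Decompose each frame into a background layer and per-object
        # RGBA layers (trace effects such as shadows ride along).
        backgrounds, layers = [], []
        for frame, maps in zip(frames, object_maps):
            bg, obj_layers = layer_renderer(frame, maps)
            backgrounds.append(bg)
            layers.append(obj_layers)

        # Re-combine with a per-object temporal offset: the modified
        # relative timing the abstract refers to.
        n, output = len(frames), []
        for t in range(n):
            canvas = backgrounds[t].clone()
            for i, offset in enumerate(offsets):
                src = min(max(t + offset, 0), n - 1)  # clamp shifted index
                rgba = layers[src][i]
                rgb, alpha = rgba[:3], rgba[3:4]
                canvas = alpha * rgb + (1 - alpha) * canvas  # "over" blend
            output.append(canvas)
        return output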

    AUDIO-VISUAL SPEECH SEPARATION
    Invention Application

    Publication Number: US20230122905A1

    Publication Date: 2023-04-20

    Application Number: US17951002

    Filing Date: 2022-09-22

    Applicant: Google LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for audio-visual speech separation. A method includes: obtaining, for each frame in a stream of frames from a video in which faces of one or more speakers have been detected, a respective per-frame face embedding of the face of each speaker; processing, for each speaker, the per-frame face embeddings of the face of the speaker to generate visual features for the face of the speaker; obtaining a spectrogram of an audio soundtrack for the video; processing the spectrogram to generate an audio embedding for the audio soundtrack; combining the visual features for the one or more speakers and the audio embedding for the audio soundtrack to generate an audio-visual embedding for the video; determining a respective spectrogram mask for each of the one or more speakers; and determining a respective isolated speech spectrogram for each speaker.
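
    The Python sketch below mirrors the steps of the abstract. The visual_encoder, audio_encoder, and mask_head modules are hypothetical placeholders, and the concatenation-based fusion is one plausible reading of "combining"; the patent does not commit to these details.

    import torch

    def separate_speech(face_embeddings, spectrogram,
                        visual_encoder, audio_encoder, mask_head):
        # Visual features per speaker from that speaker's per-frame
        # face embeddings.
        visual = [visual_encoder(f) for f in face_embeddings]

        # Audio embedding for the soundtrack's spectrogram.
        audio = audio_encoder(spectrogram)

        # Audio-visual embedding for the video.
        av = torch.cat(visual + [audio], dim=-1)

        # One spectrogram mask per speaker; applying it to the mixture
        # yields that speaker's isolated speech spectrogram.
        masks = mask_head(av)
        return [m * spectrogram for m in masks]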

    Re-Timing Objects in Video Via Layered Neural Rendering

    Publication Number: US20230206955A1

    Publication Date: 2023-06-29

    Application Number: US17927101

    Filing Date: 2020-05-22

    Applicant: Google LLC

    CPC classification number: G11B27/005 G06V10/82 G06V20/46 G11B27/031

    Abstract: A computer-implemented method for decomposing videos into multiple layers (212, 213) that can be re-combined with modified relative timings includes obtaining video data including a plurality of image frames (201) depicting one or more objects. For each of the plurality of frames, the computer-implemented method includes generating one or more object maps descriptive of a respective location of at least one object of the one or more objects within the image frame. For each of the plurality of frames, the computer-implemented method includes inputting the image frame and the one or more object maps into a machine-learned layer renderer model (220). For each of the plurality of frames, the computer-implemented method includes receiving, as output from the machine-learned layer renderer model, a background layer illustrative of a background of the video data and one or more object layers respectively associated with one of the one or more object maps. The object layers include image data illustrative of the at least one object and one or more trace effects at least partially attributable to the at least one object such that the one or more object layers and the background layer can be re-combined with modified relative timings.

    Audio-visual speech separation
    Invention Grant

    Publication Number: US11456005B2

    Publication Date: 2022-09-27

    Application Number: US16761707

    Filing Date: 2018-11-21

    Applicant: GOOGLE LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for audio-visual speech separation. A method includes: obtaining, for each frame in a stream of frames from a video in which faces of one or more speakers have been detected, a respective per-frame face embedding of the face of each speaker; processing, for each speaker, the per-frame face embeddings of the face of the speaker to generate visual features for the face of the speaker; obtaining a spectrogram of an audio soundtrack for the video; processing the spectrogram to generate an audio embedding for the audio soundtrack; combining the visual features for the one or more speakers and the audio embedding for the audio soundtrack to generate an audio-visual embedding for the video; determining a respective spectrogram mask for each of the one or more speakers; and determining a respective isolated speech spectrogram for each speaker.

    AUDIO-VISUAL SPEECH SEPARATION
    Invention Application

    Publication Number: US20200335121A1

    Publication Date: 2020-10-22

    Application Number: US16761707

    Filing Date: 2018-11-21

    Applicant: GOOGLE LLC

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for audio-visual speech separation. A method includes: obtaining, for each frame in a stream of frames from a video in which faces of one or more speakers have been detected, a respective per-frame face embedding of the face of each speaker; processing, for each speaker, the per-frame face embeddings of the face of the speaker to generate visual features for the face of the speaker; obtaining a spectrogram of an audio soundtrack for the video; processing the spectrogram to generate an audio embedding for the audio soundtrack; combining the visual features for the one or more speakers and the audio embedding for the audio soundtrack to generate an audio-visual embedding for the video; determining a respective spectrogram mask for each of the one or more speakers; and determining a respective isolated speech spectrogram for each speaker.
