SEMANTICALLY-AWARE IMAGE EXTRAPOLATION
    1.
    Invention Publication

    Publication Number: US20230169632A1

    Publication Date: 2023-06-01

    Application Number: US17521503

    Filing Date: 2021-11-08

    Applicant: Adobe Inc.

    CPC classification number: G06T5/50 G06T7/181

    Abstract: Certain aspects and features of this disclosure relate to semantically-aware image extrapolation. In one example, an input image is segmented to produce an input segmentation map of object instances in the input image. An object generation network is used to generate an extrapolated semantic label map for an extrapolated image. The extrapolated semantic label map includes instances in the original image and instances that will appear in an outpainted region of the extrapolated image. A panoptic label map is derived from coordinates of output instances in the extrapolated image and used to identify partial instances and boundaries. Instance-aware context normalization is used to apply one or more characteristics from the input image to the outpainted region to maintain semantic continuity. The extrapolated image includes the original image and the outpainted region and can be rendered or stored for future use.
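The instance-aware context normalization named in the abstract lends itself to a short illustration. The sketch below is a rough guess assuming PyTorch: it matches the per-channel statistics of an instance's outpainted pixels to the statistics of the same instance inside the input image. The function name, the boolean-mask interface, and the mean/std matching rule are assumptions made for illustration, not the claimed method.

```python
# Minimal sketch (assumed PyTorch) of instance-aware context normalization:
# for an instance that straddles the original image and the outpainted region,
# re-normalize the generated pixels toward the input image's statistics.
import torch
import torch.nn.functional as F


def instance_aware_context_normalization(extrapolated, original, mask_in, mask_out, eps=1e-5):
    """extrapolated/original: (B, C, H, W) tensors on the same canvas.
    mask_in / mask_out: (H, W) bool masks for one instance's pixels inside the
    original image and inside the outpainted region, respectively."""
    result = extrapolated.clone()
    src = original[:, :, mask_in]        # (B, C, N_in)  known pixels of the instance
    dst = extrapolated[:, :, mask_out]   # (B, C, N_out) generated pixels of the instance
    src_mu, src_std = src.mean(-1, keepdim=True), src.std(-1, keepdim=True) + eps
    dst_mu, dst_std = dst.mean(-1, keepdim=True), dst.std(-1, keepdim=True) + eps
    # Shift the generated pixels toward the input image's color statistics.
    result[:, :, mask_out] = (dst - dst_mu) / dst_std * src_std + src_mu
    return result


if __name__ == "__main__":
    # 64x64 input image placed on a 64x96 canvas; the right 32 columns are outpainted.
    original = F.pad(torch.rand(1, 3, 64, 64), (0, 32))
    extrapolated = torch.rand(1, 3, 64, 96)
    # Per-instance masks would come from the panoptic label map; hard-coded here.
    mask_in = torch.zeros(64, 96, dtype=torch.bool)
    mask_in[20:40, 40:64] = True
    mask_out = torch.zeros(64, 96, dtype=torch.bool)
    mask_out[20:40, 64:80] = True
    out = instance_aware_context_normalization(extrapolated, original, mask_in, mask_out)
    print(out.shape)  # torch.Size([1, 3, 64, 96])
```

Matching statistics per instance, rather than over the whole outpainted strip, is what keeps each object's appearance continuous across the boundary in this toy version.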

    Object Insertion via Scene Graph
    3.
    Invention Publication

    Publication Number: US20240202876A1

    Publication Date: 2024-06-20

    Application Number: US18067989

    Filing Date: 2022-12-19

    Applicant: Adobe Inc.

    CPC classification number: G06T5/50 G06V10/82 G06V20/70 G06T2207/20221

    Abstract: Techniques are described for object insertion via scene graph. In implementations, given an input image and a region of the image where a new object is to be inserted, the input image is converted to an intermediate scene graph space. In the intermediate scene graph space, graph convolutional networks are leveraged to expand the scene graph by predicting the identity and relationships of a new object to be inserted, taking into account existing objects in the input image. The expanded scene graph and the input image are then processed by an image generator to insert a predicted visual object into the input image to produce an output image.
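The scene-graph expansion step described above (predicting the identity of the new object and its relationships to the existing objects, given a target region) can be pictured with a toy graph-convolution module. Everything in the sketch below is an assumption made for illustration: the class and relation vocabularies, the single round of message passing, and the SceneGraphExpander name do not come from the patented architecture.

```python
# Toy sketch of scene-graph expansion: a query node for the insertion region
# aggregates context from existing object nodes, then predicts the new
# object's class and its relation to each existing object.
import torch
import torch.nn as nn

OBJECT_CLASSES = ["person", "table", "lamp", "chair", "plant"]   # assumed vocabulary
RELATIONS = ["on", "next to", "behind", "in front of"]           # assumed vocabulary


class SceneGraphExpander(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.embed = nn.Embedding(len(OBJECT_CLASSES), dim)   # existing-node features
        self.region = nn.Linear(4, dim)                        # target box -> query node
        self.gconv = nn.Linear(2 * dim, dim)                   # message from (neighbor, query) pairs
        self.cls_head = nn.Linear(dim, len(OBJECT_CLASSES))    # identity of the new object
        self.rel_head = nn.Linear(2 * dim, len(RELATIONS))     # relation to each neighbor

    def forward(self, existing_labels, target_box):
        nodes = self.embed(existing_labels)                    # (N, dim)
        query = self.region(target_box)                        # (dim,)
        msgs = self.gconv(torch.cat([nodes, query.expand_as(nodes)], dim=-1))
        query = query + msgs.mean(dim=0)                       # aggregate neighborhood context
        obj_logits = self.cls_head(query)                      # which object to insert
        rel_logits = self.rel_head(torch.cat([nodes, query.expand_as(nodes)], dim=-1))
        return obj_logits, rel_logits


model = SceneGraphExpander()
labels = torch.tensor([0, 1])                 # "person" and "table" already in the image
box = torch.tensor([0.4, 0.5, 0.2, 0.2])      # normalized region chosen for insertion
obj_logits, rel_logits = model(labels, box)
print(OBJECT_CLASSES[obj_logits.argmax()])              # predicted new object
print([RELATIONS[i] for i in rel_logits.argmax(-1)])    # predicted relation to each existing object
```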

    CREATING CINEMAGRAPHS FROM A SINGLE IMAGE

    Publication Number: US20240404155A1

    Publication Date: 2024-12-05

    Application Number: US18325645

    Filing Date: 2023-05-30

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that utilize neural networks to generate cinemagraphs from single RGB images. For example, the cyclic animation system includes a cyclic animation neural network trained with synthetic data, in which different wind effects are replicated using physically based simulations to create cyclic videos more efficiently. More specifically, the cyclic animation system generalizes the solution by operating in the gradient domain and using surface normal maps. Because normal maps are invariant to appearance (color, texture, illumination, etc.), the gap between the synthetic and real data distributions is smaller in normal-map space than in RGB space. The cyclic animation system then performs a reshading step that synthesizes RGB pixels from the original image and the animated normal maps, producing plausible changes to the real image that form the cinemagraph.
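The reshading idea (animating the normal map cyclically and synthesizing RGB pixels from the original image plus the animated normals) can be approximated with a Lambertian toy model. The sketch below assumes PyTorch and invents the light direction, the sinusoidal "wind" perturbation, and the shading-ratio trick purely for illustration; it does not reproduce the patent's neural reshading network.

```python
# Toy reshading sketch: compute diffuse shading from the original and the
# animated normal maps, and modulate the original RGB frame by their ratio to
# synthesize one cinemagraph frame per loop step.
import math

import torch
import torch.nn.functional as F


def lambert_shading(normals, light_dir):
    """Per-pixel diffuse shading clamp(n . l). normals: (B, 3, H, W)."""
    l = F.normalize(light_dir, dim=0).view(1, 3, 1, 1)
    return (normals * l).sum(dim=1, keepdim=True).clamp(min=1e-3)


def reshade_frame(rgb, normals, animated_normals, light_dir):
    """Rescale the original shading to the animated normals, keeping albedo/texture."""
    ratio = lambert_shading(animated_normals, light_dir) / lambert_shading(normals, light_dir)
    return (rgb * ratio).clamp(0.0, 1.0)


def cyclic_frames(rgb, normals, n_frames=16, amplitude=0.05):
    """Loop a small sinusoidal tilt of the normals to mimic a cyclic wind effect."""
    light = torch.tensor([0.3, 0.5, 1.0])
    frames = []
    for t in range(n_frames):
        phase = 2 * math.pi * t / n_frames          # returns to the start, so the loop is seamless
        tilt = torch.zeros_like(normals)
        tilt[:, 0] = amplitude * math.sin(phase)    # sway normals along x
        animated = F.normalize(normals + tilt, dim=1)
        frames.append(reshade_frame(rgb, normals, animated, light))
    return torch.stack(frames, dim=1)               # (B, T, 3, H, W)


rgb = torch.rand(1, 3, 128, 128)                                   # placeholder input image
normals = F.normalize(torch.rand(1, 3, 128, 128) * 2 - 1, dim=1)   # placeholder normal map
video = cyclic_frames(rgb, normals)
print(video.shape)  # torch.Size([1, 16, 3, 128, 128])
```

Dividing the two shading terms rather than rendering from scratch leaves the original image's albedo and texture untouched, which is the intuition behind reshading the real photo instead of synthesizing new pixels outright.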
