Intuitive editing of three-dimensional models

    Publication Number: US10957117B2

    Publication Date: 2021-03-23

    Application Number: US16204980

    Application Date: 2018-11-29

    Applicant: ADOBE INC.

    Abstract: Embodiments of the present invention are directed towards intuitive editing of three-dimensional models. In embodiments, salient geometric features associated with a three-dimensional model defining an object are identified. Thereafter, feature attributes associated with the salient geometric features are identified. A feature set including a plurality of salient geometric features related to one another is generated based on the determined feature attributes (e.g., properties, relationships, distances). An editing handle can then be generated and displayed for the feature set enabling each of the salient geometric features within the feature set to be edited in accordance with a manipulation of the editing handle. The editing handle can be displayed in association with one of the salient geometric features of the feature set.
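    The abstract above describes grouping salient geometric features by shared attributes before attaching a common editing handle. A minimal sketch of that grouping idea follows; the patent publishes no code, so the feature attributes, relation test, and thresholds are illustrative assumptions, not the claimed method.

```python
# Hypothetical sketch: group salient geometric features into a feature set based
# on shared attributes (kind, similar size, proximity). Names and thresholds are
# illustrative assumptions, not taken from the patent.
from dataclasses import dataclass
import math

@dataclass
class Feature:
    kind: str                      # e.g. "hole", "fillet", "boss"
    center: tuple                  # (x, y, z) position on the model
    radius: float                  # characteristic size

def related(a: Feature, b: Feature, size_tol=0.05, max_gap=2.0) -> bool:
    """Two features are 'related' if they share a kind, have similar size, and are nearby."""
    same_kind = a.kind == b.kind
    similar_size = abs(a.radius - b.radius) <= size_tol * max(a.radius, b.radius)
    nearby = math.dist(a.center, b.center) <= max_gap * max(a.radius, b.radius)
    return same_kind and similar_size and nearby

def build_feature_sets(features):
    """Greedily place each feature into the first set containing a related feature."""
    sets = []
    for f in features:
        for group in sets:
            if any(related(f, g) for g in group):
                group.append(f)
                break
        else:
            sets.append([f])
    return sets

if __name__ == "__main__":
    holes = [Feature("hole", (i * 4.0, 0.0, 0.0), 2.5) for i in range(4)]
    boss = Feature("boss", (100.0, 0.0, 0.0), 8.0)
    for group in build_feature_sets(holes + [boss]):
        print([f.kind for f in group])   # the four holes group together; the boss stays alone
```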

    Automatic 3D camera alignment and object arrangement to match a 2D background image

    Publication Number: US10417833B2

    Publication Date: 2019-09-17

    Application Number: US15804908

    Application Date: 2017-11-06

    Applicant: ADOBE INC.

    Abstract: Embodiments disclosed herein provide systems, methods, and computer storage media for automatically aligning a 3D camera with a 2D background image. An automated image analysis can be performed on the 2D background image, and a classifier can predict whether the automated image analysis is accurate within a selected confidence level. As such, a feature can be enabled that allows a user to automatically align the 3D camera with the 2D background image. For example, where the automated analysis detects a horizon and one or more vanishing points from the background image, the 3D camera can be automatically transformed to align with the detected horizon and to point at a detected horizon-located vanishing point. In some embodiments, 3D objects in a 3D scene can be pivoted and the 3D camera dollied forward or backwards to reduce changes to the framing of the 3D composition resulting from the 3D camera transformation.
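    The core step the abstract describes, deriving a camera transform from a detected horizon and a horizon-located vanishing point, can be sketched for a simple pinhole camera as below; the function name, parameters, and angle conventions are assumptions for illustration, not the patented implementation.

```python
# Illustrative sketch (not the patented implementation): given a detected horizon
# line and a horizon-located vanishing point in a background image, compute the
# pitch, yaw, and roll that align a pinhole 3D camera with the 2D background.
import math

def camera_angles_from_image(horizon_y, horizon_slope, vp_x, width, height, fov_x_deg):
    """Return (pitch, yaw, roll) in degrees for a pinhole camera.

    horizon_y      vertical position of the horizon at the image centre (pixels)
    horizon_slope  slope of the detected horizon line (dy/dx)
    vp_x           horizontal position of the detected vanishing point (pixels)
    """
    cx, cy = width / 2.0, height / 2.0
    # focal length in pixels from the horizontal field of view
    f = (width / 2.0) / math.tan(math.radians(fov_x_deg) / 2.0)
    # roll: tilt of the detected horizon line
    roll = math.degrees(math.atan(horizon_slope))
    # pitch: how far the horizon sits above or below the image centre
    pitch = math.degrees(math.atan2(cy - horizon_y, f))
    # yaw: turn the camera to point at the vanishing point on the horizon
    yaw = math.degrees(math.atan2(vp_x - cx, f))
    return pitch, yaw, roll

if __name__ == "__main__":
    # horizon slightly above centre and a little tilted, vanishing point left of centre
    print(camera_angles_from_image(horizon_y=500, horizon_slope=0.02,
                                   vp_x=800, width=1920, height=1080, fov_x_deg=60))
```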

    NEURAL NETWORK-BASED CAMERA CALIBRATION
    Invention Application

    Publication Number: US20190164312A1

    Publication Date: 2019-05-30

    Application Number: US15826331

    Application Date: 2017-11-29

    Applicant: ADOBE INC.

    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed to generating training image data for a convolutional neural network, encoding parameters into a convolutional neural network, and employing a convolutional neural network that estimates camera calibration parameters of a camera responsible for capturing a given digital image. A plurality of different digital images can be extracted from a single panoramic image given a range of camera calibration parameters that correspond to a determined range of plausible camera calibration parameters. With each digital image in the plurality of extracted different digital images having a corresponding set of known camera calibration parameters, the digital images can be provided to the convolutional neural network to establish high-confidence correlations between detectable characteristics of a digital image and its corresponding set of camera calibration parameters. Once trained, the convolutional neural network can receive a new digital image, and based on detected image characteristics thereof, estimate a corresponding set of camera calibration parameters with a calculated level of confidence.
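    The training-data generation the abstract describes, many crops extracted from one panorama, each labeled with the camera parameters used to extract it, could look roughly like the sketch below for an equirectangular panorama; the parameter ranges, projection details, and function names are assumptions for illustration.

```python
# Sketch of the training-data idea: sample plausible camera parameters (field of
# view, pitch, roll, yaw), extract a rectilinear crop from an equirectangular
# panorama for each sample, and keep the parameters as the training label.
# Nearest-neighbour sampling is used for brevity; ranges are illustrative.
import numpy as np

def crop_from_panorama(pano, fov_deg, pitch_deg, roll_deg, yaw_deg, out_w=320, out_h=240):
    H, W, _ = pano.shape
    f = (out_w / 2) / np.tan(np.radians(fov_deg) / 2)
    xs, ys = np.meshgrid(np.arange(out_w) - out_w / 2, np.arange(out_h) - out_h / 2)
    rays = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    def rot(axis, deg):
        c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
        i, j = {"x": (1, 2), "y": (0, 2), "z": (0, 1)}[axis]
        R = np.eye(3)
        R[i, i] = c; R[j, j] = c; R[i, j] = -s; R[j, i] = s
        return R

    R = rot("y", yaw_deg) @ rot("x", pitch_deg) @ rot("z", roll_deg)
    d = rays @ R.T
    lon = np.arctan2(d[..., 0], d[..., 2])            # longitude in [-pi, pi]
    lat = np.arcsin(np.clip(d[..., 1], -1, 1))        # latitude in [-pi/2, pi/2]
    u = ((lon / np.pi + 1) / 2 * (W - 1)).astype(int)
    v = ((lat / (np.pi / 2) + 1) / 2 * (H - 1)).astype(int)
    return pano[v, u]

def make_training_pair(pano, rng):
    params = dict(fov_deg=rng.uniform(40, 90), pitch_deg=rng.uniform(-20, 20),
                  roll_deg=rng.uniform(-10, 10), yaw_deg=rng.uniform(-180, 180))
    return crop_from_panorama(pano, **params), params   # (image, known calibration label)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pano = rng.integers(0, 255, size=(512, 1024, 3), dtype=np.uint8)  # stand-in panorama
    img, label = make_training_pair(pano, rng)
    print(img.shape, label)
```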

    Environment map generation and hole filling

    Publication Number: US11276150B2

    Publication Date: 2022-03-15

    Application Number: US16893505

    Application Date: 2020-06-05

    Applicant: Adobe Inc.

    Abstract: In some embodiments, an image manipulation application receives a two-dimensional background image and projects the background image onto a sphere to generate a sphere image. Based on the sphere image, an unfilled environment map containing a hole area lacking image content can be generated. A portion of the unfilled environment map can be projected to an unfilled projection image using a map projection. The unfilled projection image contains the hole area. A hole filling model is applied to the unfilled projection image to generate a filled projection image containing image content for the hole area. A filled environment map can be generated by applying an inverse projection of the map projection on the filled projection image and by combining the unfilled environment map with the generated image content for the hole area of the environment map.
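    As a rough illustration of the final step in the abstract, combining the unfilled environment map with the generated hole content, the sketch below merges inpainted pixels back into the environment map through a hole mask; the hole-filling model and the projection/inverse-projection are stand-in placeholders, not the patented components.

```python
# Minimal sketch of the combining step: once a hole-filling model has produced
# content for the missing region, merge it back into the unfilled environment map
# using the hole mask. All callables and arrays here are illustrative stand-ins.
import numpy as np

def fill_environment_map(unfilled_env, hole_mask, filled_projection, inverse_project):
    """unfilled_env: HxWx3 environment map with missing pixels
    hole_mask: HxW boolean array, True where content is missing
    filled_projection: inpainted projection image covering the hole area
    inverse_project: callable mapping the projection image back to env-map space
    """
    filled_env_content = inverse_project(filled_projection)   # HxWx3 in env-map space
    out = unfilled_env.copy()
    out[hole_mask] = filled_env_content[hole_mask]             # keep original pixels elsewhere
    return out

if __name__ == "__main__":
    H, W = 256, 512
    env = np.random.rand(H, W, 3)
    mask = np.zeros((H, W), dtype=bool)
    mask[:40, :] = True                     # e.g. an unobserved zenith region
    fake_inpaint = np.random.rand(H, W, 3)  # stands in for the hole-filling model output
    result = fill_environment_map(env, mask, fake_inpaint, inverse_project=lambda x: x)
    print(result.shape)
```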

    INTUITIVE EDITING OF THREE-DIMENSIONAL MODELS

    Publication Number: US20210256775A1

    Publication Date: 2021-08-19

    Application Number: US17208627

    Application Date: 2021-03-22

    Applicant: ADOBE INC.

    Abstract: Embodiments of the present invention are directed towards intuitive editing of three-dimensional models. In embodiments, salient geometric features associated with a three-dimensional model defining an object are identified. Thereafter, feature attributes associated with the salient geometric features are identified. A feature set including a plurality of salient geometric features related to one another is generated based on the determined feature attributes (e.g., properties, relationships, distances). An editing handle can then be generated and displayed for the feature set enabling each of the salient geometric features within the feature set to be edited in accordance with a manipulation of the editing handle. The editing handle can be displayed in association with one of the salient geometric features of the feature set.
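    This application shares its abstract with the granted patent listed above; as a complementary illustration, the sketch below shows the other half of the idea, propagating a single editing-handle manipulation (here, a uniform scale) to every salient feature in a feature set. All names are hypothetical.

```python
# Complementary sketch to the grant above: a single handle manipulation is applied
# to every feature in the feature set. Names and the scale operation are illustrative.
from dataclasses import dataclass, replace
from typing import List, Tuple

@dataclass(frozen=True)
class Feature:
    center: Tuple[float, float, float]
    radius: float

def apply_handle_scale(feature_set: List[Feature], scale: float) -> List[Feature]:
    """Scale every feature in the set, as if the user dragged one shared editing handle."""
    return [replace(f, radius=f.radius * scale) for f in feature_set]

if __name__ == "__main__":
    holes = [Feature((i * 4.0, 0.0, 0.0), 2.5) for i in range(4)]
    print([f.radius for f in apply_handle_scale(holes, 1.2)])   # every hole grows together
```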

    Classifying panoramic images
    Invention Grant

    Publication Number: US10991085B2

    Publication Date: 2021-04-27

    Application Number: US16372202

    Application Date: 2019-04-01

    Applicant: ADOBE INC.

    Abstract: Embodiments herein describe a framework for classifying images. In some embodiments, it is determined whether an image includes synthetic image content. If it does, characteristics of the image are analyzed to determine whether the image includes characteristics particular to panoramic images (e.g., pixel values along the top and/or bottom boundaries of the image are equivalent within a threshold, or the difference between the summed pixel values of the right vertical boundary and the summed pixel values of the left vertical boundary is less than or equal to a threshold value). If the image includes characteristics particular to panoramic images, the image is classified as a synthetic panoramic image. If the image is determined not to include synthetic image content, a neural network is applied to the image and the image is classified as either non-synthetic panoramic or non-synthetic non-panoramic.
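    The boundary heuristics the abstract mentions, near-uniform top and bottom rows and a small left/right seam difference from horizontal wrap-around, might be sketched as follows; the specific statistics and threshold values are illustrative assumptions, not the claimed ones.

```python
# Sketch of the boundary checks for panoramic images: a 360-degree panorama wraps
# horizontally, so its left and right pixel columns should nearly match, and its
# top/bottom rows (zenith/nadir) tend to be nearly uniform. Thresholds are illustrative.
import numpy as np

def looks_panoramic(img: np.ndarray, wrap_tol=8.0, uniform_tol=12.0) -> bool:
    """img: HxWx3 image array. Returns True if the boundary checks pass."""
    img = img.astype(float)
    left, right = img[:, 0, :], img[:, -1, :]
    wrap_error = np.abs(left - right).mean()           # left/right seam difference
    top_spread = img[0, :, :].std(axis=0).mean()       # pixel variation along the top row
    bottom_spread = img[-1, :, :].std(axis=0).mean()   # pixel variation along the bottom row
    return wrap_error <= wrap_tol and max(top_spread, bottom_spread) <= uniform_tol

if __name__ == "__main__":
    # rows are constant and the left column equals the right column -> passes the checks
    pano_like = np.tile(np.linspace(0, 255, 256)[:, None, None], (1, 512, 3))
    print(looks_panoramic(pano_like))
```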

    Learning to estimate high-dynamic range outdoor lighting parameters

    Publication Number: US10936909B2

    Publication Date: 2021-03-02

    Application Number: US16188130

    Application Date: 2018-11-12

    Applicant: ADOBE INC.

    Abstract: Methods and systems are provided for determining high-dynamic range lighting parameters for input low-dynamic range images. A neural network system can be trained to estimate lighting parameters for input images, where the input images are synthetic and real low-dynamic range images. Such a neural network system can be trained using differences between a simple scene rendered using the estimated lighting parameters and the same simple scene rendered using known ground-truth lighting parameters. Such a neural network system can also be trained such that the synthetic and real low-dynamic range images are mapped to roughly the same distribution. Such a trained neural network system can then take a low-dynamic range image as input and determine high-dynamic range lighting parameters.
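    A toy version of the render-and-compare idea in the abstract, scoring estimated lighting parameters by rendering a simple scene under both the estimate and the ground truth, might look like the sketch below; the Lambertian sphere, the two-parameter sun model, and all names are simplifying assumptions, not the patented sky model or training procedure.

```python
# Illustrative sketch of a render-based comparison: lighting parameters (here just
# a sun direction and intensity) are scored by rendering the same simple scene with
# the estimated and the ground-truth parameters and measuring the pixel difference.
import numpy as np

def render_lambertian_sphere(sun_dir, sun_intensity, res=64):
    """Render a unit sphere lit by a directional sun with Lambertian shading."""
    sun_dir = np.asarray(sun_dir, dtype=float)
    sun_dir /= np.linalg.norm(sun_dir)
    ys, xs = np.meshgrid(np.linspace(-1, 1, res), np.linspace(-1, 1, res), indexing="ij")
    mask = xs**2 + ys**2 <= 1.0
    zs = np.sqrt(np.clip(1.0 - xs**2 - ys**2, 0, None))
    normals = np.stack([xs, ys, zs], axis=-1)
    shading = np.clip(normals @ sun_dir, 0, None) * sun_intensity
    return np.where(mask, shading, 0.0)

def render_loss(estimated, ground_truth):
    """Mean squared difference between renders under the two lighting estimates."""
    a = render_lambertian_sphere(*estimated)
    b = render_lambertian_sphere(*ground_truth)
    return float(np.mean((a - b) ** 2))

if __name__ == "__main__":
    gt = ((0.3, 0.8, 0.5), 2.0)    # ground-truth sun direction and intensity
    est = ((0.4, 0.7, 0.6), 1.8)   # a network's estimate
    print(render_loss(est, gt))
```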

    CLASSIFYING PANORAMIC IMAGES
    Invention Application

    Publication Number: US20200311901A1

    Publication Date: 2020-10-01

    Application Number: US16372202

    Application Date: 2019-04-01

    Applicant: ADOBE INC.

    Abstract: Embodiments herein describe a framework for classifying images. In some embodiments, it is determined whether an image includes synthetic image content. If it does, characteristics of the image are analyzed to determine whether the image includes characteristics particular to panoramic images (e.g., pixel values along the top and/or bottom boundaries of the image are equivalent within a threshold, or the difference between the summed pixel values of the right vertical boundary and the summed pixel values of the left vertical boundary is less than or equal to a threshold value). If the image includes characteristics particular to panoramic images, the image is classified as a synthetic panoramic image. If the image is determined not to include synthetic image content, a neural network is applied to the image and the image is classified as either non-synthetic panoramic or non-synthetic non-panoramic.
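    This application shares its abstract with the granted patent above; as a complementary sketch, the snippet below lays out the overall decision flow the abstract describes, with the synthetic-content detector, boundary checks, and neural-network classifier passed in as stand-in callables. The label for a synthetic image that fails the boundary checks is an assumption for illustration.

```python
# Complementary sketch of the classification flow: synthetic-content detection,
# then either the panoramic boundary checks or a neural-network classifier.
# All predicates are hypothetical stand-ins.
def classify_image(img, is_synthetic, passes_boundary_checks, cnn_predicts_panorama):
    if is_synthetic(img):
        return "synthetic panoramic" if passes_boundary_checks(img) else "synthetic non-panoramic"
    return "non-synthetic panoramic" if cnn_predicts_panorama(img) else "non-synthetic non-panoramic"

if __name__ == "__main__":
    # trivial stand-in predicates just to exercise the flow
    print(classify_image(object(),
                         is_synthetic=lambda x: True,
                         passes_boundary_checks=lambda x: True,
                         cnn_predicts_panorama=lambda x: False))   # -> "synthetic panoramic"
```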
