-
Publication number: US10957117B2
Publication date: 2021-03-23
Application number: US16204980
Filing date: 2018-11-29
Applicant: ADOBE INC.
Inventor: Duygu Ceylan Aksit , Vladimir Kim , Siddhartha Chaudhuri , Radomir Mech , Noam Aigerman , Kevin Wampler , Jonathan Eisenmann , Giorgio Gori , Emiliano Gambaretto
IPC: G06T15/00 , G06T19/20 , G06F3/0481 , G06F3/0484
Abstract: Embodiments of the present invention are directed towards intuitive editing of three-dimensional models. In embodiments, salient geometric features associated with a three-dimensional model defining an object are identified. Thereafter, feature attributes associated with the salient geometric features are identified. A feature set including a plurality of salient geometric features related to one another is generated based on the determined feature attributes (e.g., properties, relationships, distances). An editing handle can then be generated and displayed for the feature set enabling each of the salient geometric features within the feature set to be edited in accordance with a manipulation of the editing handle. The editing handle can be displayed in association with one of the salient geometric features of the feature set.
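The handle-driven editing described above lends itself to a small data model. The following Python sketch is only an illustration under assumed names (SalientFeature, FeatureSet, EditingHandle), not the patented implementation: it groups related salient features by a shared attribute and edits all of them through one handle.

from dataclasses import dataclass, field
import numpy as np

@dataclass
class SalientFeature:
    vertex_ids: list      # mesh vertices that make up this feature
    attributes: dict      # e.g. {"type": "edge_loop", "radius": 1.0}

@dataclass
class FeatureSet:
    features: list = field(default_factory=list)

    @staticmethod
    def group_by_attribute(features, key, tol=1e-3):
        """Group features whose attribute `key` agrees within `tol`."""
        groups = []
        for feat in features:
            for group in groups:
                if abs(group.features[0].attributes[key] - feat.attributes[key]) <= tol:
                    group.features.append(feat)
                    break
            else:
                groups.append(FeatureSet([feat]))
        return groups

@dataclass
class EditingHandle:
    feature_set: FeatureSet   # displayed on one feature, edits all of them

    def drag(self, vertices, translation):
        """Apply one manipulation to every feature in the set."""
        for feat in self.feature_set.features:
            vertices[feat.vertex_ids] += translation
        return vertices

# Usage: move both circular rims of a cylinder-like mesh at once.
verts = np.zeros((8, 3))
rims = FeatureSet.group_by_attribute(
    [SalientFeature([0, 1, 2, 3], {"radius": 1.0}),
     SalientFeature([4, 5, 6, 7], {"radius": 1.0})], "radius")
EditingHandle(rims[0]).drag(verts, np.array([0.0, 0.5, 0.0]))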
-
Publication number: US20200242804A1
Publication date: 2020-07-30
Application number: US16257495
Filing date: 2019-01-25
Applicant: Adobe Inc.
Inventor: Jonathan Eisenmann , Wenqi Xian , Matthew Fisher , Geoffrey Oxholm , Elya Shechtman
Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing a critical edge detection neural network and a geometric model to determine camera parameters from a single digital image. In particular, in one or more embodiments, the disclosed systems can train and utilize a critical edge detection neural network to generate a vanishing edge map indicating vanishing lines from the digital image. The system can then utilize the vanishing edge map to more accurately and efficiently determine camera parameters by applying a geometric model to the vanishing edge map. Further, the system can generate ground truth vanishing line data from a set of training digital images for training the critical edge detection neural network.
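As a rough illustration of the geometric-model step only (the critical edge detection network itself is not reproduced), the sketch below intersects a set of detected vanishing lines in the least-squares sense to recover a vanishing point; the `lines` input is assumed to come from a vanishing edge map.

import numpy as np

def vanishing_point(lines):
    """Least-squares intersection of 2D lines given as (point p, unit direction d).

    Minimizes sum_i || (I - d_i d_i^T) (x - p_i) ||^2 over x.
    Assumes the lines are not all parallel.
    """
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in lines:
        d = np.asarray(d, float) / np.linalg.norm(d)
        P = np.eye(2) - np.outer(d, d)      # projector orthogonal to the line direction
        A += P
        b += P @ np.asarray(p, float)
    return np.linalg.solve(A, b)

# Example: two lines that meet at (100, 50).
print(vanishing_point([((0, 0), (2, 1)), ((0, 100), (2, -1))]))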
-
Publication number: US20200074600A1
Publication date: 2020-03-05
Application number: US16678072
Filing date: 2019-11-08
Applicant: Adobe Inc.
Inventor: Kalyan Sunkavalli , Mehmet Ersin Yumer , Marc-Andre Gardner , Xiaohui Shen , Jonathan Eisenmann , Emiliano Gambaretto
Abstract: Systems and techniques for estimating illumination from a single image are provided. An example system may include a neural network. The neural network may include an encoder that is configured to encode an input image into an intermediate representation. The neural network may also include an intensity decoder that is configured to decode the intermediate representation into an output light intensity map. An example intensity decoder is generated by a multi-phase training process that includes a first phase to train a light mask decoder using a set of low dynamic range images and a second phase to adjust parameters of the light mask decoder using a set of high dynamic range images to generate the intensity decoder.
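A schematic sketch of the encoder / intensity-decoder split, assuming PyTorch; the layer sizes are illustrative only, and the two training phases are indicated as comments rather than implemented.

import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU())

    def forward(self, x):          # x: low dynamic range input image, N x 3 x H x W
        return self.net(x)         # intermediate representation

class IntensityDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.ReLU())

    def forward(self, z):
        return self.net(z)         # per-pixel light intensity map

# Phase 1 (assumed): train the encoder plus a light *mask* decoder of this shape
# on low dynamic range images with binary light-source masks as targets.
# Phase 2 (assumed): fine-tune that decoder on high dynamic range images so its
# output becomes a continuous intensity map, yielding the IntensityDecoder.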
-
Publication number: US10417833B2
Publication date: 2019-09-17
Application number: US15804908
Filing date: 2017-11-06
Applicant: ADOBE INC.
Inventor: Jonathan Eisenmann , Geoffrey Alan Oxholm , Elya Shechtman , Bryan Russell
Abstract: Embodiments disclosed herein provide systems, methods, and computer storage media for automatically aligning a 3D camera with a 2D background image. An automated image analysis can be performed on the 2D background image, and a classifier can predict whether the automated image analysis is accurate within a selected confidence level. As such, a feature can be enabled that allows a user to automatically align the 3D camera with the 2D background image. For example, where the automated analysis detects a horizon and one or more vanishing points from the background image, the 3D camera can be automatically transformed to align with the detected horizon and to point at a detected horizon-located vanishing point. In some embodiments, 3D objects in a 3D scene can be pivoted and the 3D camera dollied forward or backward to reduce changes to the framing of the 3D composition resulting from the 3D camera transformation.
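For the alignment step alone, the sketch below shows one way a detected horizon slope and a horizon-located vanishing point could be turned into camera angles for a pinhole camera of known focal length. The detection itself and the framing-preserving dolly are omitted, and the formulas are an assumption rather than the patented method.

import math

def camera_alignment(horizon_slope, vp_x, vp_y, cx, cy, f):
    """Roll/pitch/yaw (radians) that align a pinhole camera of focal length f
    (pixels) with a horizon of the given slope and point it at the vanishing
    point (vp_x, vp_y); (cx, cy) is the principal point."""
    roll = math.atan(horizon_slope)          # tilt matching the horizon line
    yaw = math.atan2(vp_x - cx, f)           # pan toward the vanishing point
    pitch = math.atan2(-(vp_y - cy), f)      # raise/lower to the horizon height
    return roll, pitch, yaw

# Example: horizon tilted by ~2.9 degrees, vanishing point right of and above center.
print([math.degrees(a) for a in camera_alignment(0.05, 700, 340, 640, 360, 1000)])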
-
Publication number: US20190164312A1
Publication date: 2019-05-30
Application number: US15826331
Filing date: 2017-11-29
Applicant: ADOBE INC.
Inventor: Kalyan K. Sunkavalli , Yannick Hold-Geoffroy , Sunil Hadap , Matthew David Fisher , Jonathan Eisenmann , Emiliano Gambaretto
CPC classification number: G06T7/80 , G06N3/0454 , G06N3/08 , G06T7/97 , G06T2207/20081 , G06T2207/20084
Abstract: Embodiments of the present invention provide systems, methods, and computer storage media directed to generating training image data for a convolutional neural network, encoding parameters into a convolutional neural network, and employing a convolutional neural network that estimates camera calibration parameters of a camera responsible for capturing a given digital image. A plurality of different digital images can be extracted from a single panoramic image given a range of camera calibration parameters that correspond to a determined range of plausible camera calibration parameters. With each digital image in the plurality of extracted different digital images having a corresponding set of known camera calibration parameters, the digital images can be provided to the convolutional neural network to establish high-confidence correlations between detectable characteristics of a digital image and its corresponding set of camera calibration parameters. Once trained, the convolutional neural network can receive a new digital image, and based on detected image characteristics thereof, estimate a corresponding set of camera calibration parameters with a calculated level of confidence.
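The training-data generation can be pictured as carving many perspective views out of one equirectangular panorama, each with known yaw, pitch, and field of view, so every crop arrives with ground-truth calibration labels. The sketch below (numpy, nearest-neighbour sampling, illustrative only) shows one such extraction.

import numpy as np

def crop_from_panorama(pano, yaw, pitch, fov, out_h=128, out_w=128):
    """Extract a perspective crop with known yaw/pitch/fov (radians) from an
    equirectangular panorama `pano` of shape H x W x 3."""
    H, W, _ = pano.shape
    f = 0.5 * out_w / np.tan(0.5 * fov)                       # focal length in pixels
    ys, xs = np.mgrid[0:out_h, 0:out_w]
    dirs = np.stack([xs - out_w / 2.0,
                     ys - out_h / 2.0,
                     np.full((out_h, out_w), f)], axis=-1)    # camera-space rays
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(pitch), -np.sin(pitch)],
                   [0, np.sin(pitch),  np.cos(pitch)]])       # pitch about x
    Ry = np.array([[ np.cos(yaw), 0, np.sin(yaw)],
                   [0, 1, 0],
                   [-np.sin(yaw), 0, np.cos(yaw)]])           # yaw about y
    d = dirs @ (Ry @ Rx).T                                    # rotate rays into the pano frame
    lon = np.arctan2(d[..., 0], d[..., 2])                    # longitude in [-pi, pi]
    lat = np.arcsin(np.clip(d[..., 1], -1.0, 1.0))            # latitude in [-pi/2, pi/2]
    u = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    v = np.clip(((lat / np.pi + 0.5) * H).astype(int), 0, H - 1)
    return pano[v, u]                                         # nearest-neighbour sample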
-
Publication number: US11276150B2
Publication date: 2022-03-15
Application number: US16893505
Filing date: 2020-06-05
Applicant: Adobe Inc.
Inventor: Jonathan Eisenmann , Zhe Lin , Matthew Fisher
Abstract: In some embodiments, an image manipulation application receives a two-dimensional background image and projects the background image onto a sphere to generate a sphere image. Based on the sphere image, an unfilled environment map containing a hole area lacking image content can be generated. A portion of the unfilled environment map can be projected to an unfilled projection image using a map projection. The unfilled projection image contains the hole area. A hole filling model is applied to the unfilled projection image to generate a filled projection image containing image content for the hole area. A filled environment map can be generated by applying an inverse projection of the map projection on the filled projection image and by combining the unfilled environment map with the generated image content for the hole area of the environment map.
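Structurally, the pipeline reads as a chain of projections around a pluggable inpainting step. The sketch below is only a skeleton under that reading: the projection helpers and the hole-filling model are passed in as assumed callables, since the abstract does not fix a particular map projection or network.

import numpy as np

def fill_environment_map(background, project_to_sphere, project, unproject,
                         hole_filling_model):
    """Skeleton of the fill pipeline; all callables are assumed helpers."""
    env, known = project_to_sphere(background)     # env map + mask of known pixels
    view, view_known = project(env, known)         # projection containing the hole area
    filled_view = hole_filling_model(view, ~view_known)
    filled_env, _ = unproject(filled_view)         # inverse of the map projection
    # Keep original pixels where they exist; use generated content only in the hole.
    return np.where(known[..., None], env, filled_env)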
-
Publication number: US20210256775A1
Publication date: 2021-08-19
Application number: US17208627
Filing date: 2021-03-22
Applicant: ADOBE INC.
Inventor: Duygu Ceylan Aksit , Vladimir Kim , Siddhartha Chaudhuri , Radomir Mech , Noam Aigerman , Kevin Wampler , Jonathan Eisenmann , Giorgio Gori , Emiliano Gambaretto
IPC: G06T19/20
Abstract: Embodiments of the present invention are directed towards intuitive editing of three-dimensional models. In embodiments, salient geometric features associated with a three-dimensional model defining an object are identified. Thereafter, feature attributes associated with the salient geometric features are identified. A feature set including a plurality of salient geometric features related to one another is generated based on the determined feature attributes (e.g., properties, relationships, distances). An editing handle can then be generated and displayed for the feature set enabling each of the salient geometric features within the feature set to be edited in accordance with a manipulation of the editing handle. The editing handle can be displayed in association with one of the salient geometric features of the feature set.
-
Publication number: US10991085B2
Publication date: 2021-04-27
Application number: US16372202
Filing date: 2019-04-01
Applicant: ADOBE INC.
Inventor: Qi Sun , Li-Yi Wei , Joon-Young Lee , Jonathan Eisenmann , Jinwoong Jung , Byungmoon Kim
Abstract: Embodiments herein describe a framework for classifying images. In some embodiments, it is determined whether an image includes synthetic image content. If it does, characteristics of the image are analyzed to determine whether the image includes characteristics particular to panoramic images (e.g., pixel values along the top and/or bottom boundaries of the image that are equivalent within a threshold, or a difference between the summed pixel values of the right vertical boundary and the summed pixel values of the left vertical boundary that is less than or equal to a threshold value). If the image includes characteristics particular to panoramic images, the image is classified as a synthetic panoramic image. If the image is determined not to include synthetic image content, a neural network is applied to the image and the image is classified as either non-synthetic panoramic or non-synthetic non-panoramic.
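The two boundary tests named in the parenthetical can be written down almost literally; the sketch below does so with arbitrary illustrative thresholds, and neither the synthetic-content check nor the fallback neural network is reproduced.

import numpy as np

def looks_panoramic(img, pole_tol=2.0, seam_tol=1000.0):
    """img: H x W x C array of pixel values. True if the boundary statistics
    match what is expected of an (equirectangular) panorama."""
    top_uniform = img[0].std(axis=0).max() <= pole_tol        # top row nearly constant
    bottom_uniform = img[-1].std(axis=0).max() <= pole_tol    # bottom row nearly constant
    seam_diff = np.abs(img[:, 0].sum() - img[:, -1].sum())    # left vs right column sums
    return (top_uniform or bottom_uniform) and seam_diff <= seam_tol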
-
Publication number: US10936909B2
Publication date: 2021-03-02
Application number: US16188130
Filing date: 2018-11-12
Applicant: ADOBE INC.
Inventor: Kalyan K. Sunkavalli , Sunil Hadap , Jonathan Eisenmann , Jinsong Zhang , Emiliano Gambaretto
Abstract: Methods and systems are provided for determining high-dynamic range lighting parameters for input low-dynamic range images. A neural network system can be trained to estimate lighting parameters for input images, where the input images are synthetic and real low-dynamic range images. Such a neural network system can be trained using differences between a simple scene rendered using the estimated lighting parameters and the same simple scene rendered using known ground-truth lighting parameters. Such a neural network system can also be trained such that the synthetic and real low-dynamic range images are mapped to roughly the same distribution. Once trained, the neural network system can receive a low-dynamic range image as input and determine high-dynamic range lighting parameters for it.
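One way to read the two training signals is as a render-comparison loss on a simple proxy scene plus a distribution-matching penalty between synthetic and real inputs. The sketch below combines them that way, assuming PyTorch; render_proxy_scene and the moment-matching term are assumptions, not the patented formulation.

import torch.nn.functional as F

def lighting_loss(pred_params, gt_params, render_proxy_scene,
                  feat_synth, feat_real, domain_weight=0.1):
    # Compare renders of the same simple scene under predicted vs. ground-truth lighting.
    render_loss = F.l1_loss(render_proxy_scene(pred_params),
                            render_proxy_scene(gt_params))
    # Push synthetic and real LDR inputs toward the same feature distribution
    # (here just the first and second moments of shared encoder features).
    domain_loss = (F.mse_loss(feat_synth.mean(0), feat_real.mean(0))
                   + F.mse_loss(feat_synth.std(0), feat_real.std(0)))
    return render_loss + domain_weight * domain_loss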
-
Publication number: US20200311901A1
Publication date: 2020-10-01
Application number: US16372202
Filing date: 2019-04-01
Applicant: ADOBE INC.
Inventor: Qi Sun , Li-Yi Wei , Joon-Young Lee , Jonathan Eisenmann , Jinwoong Jung , Byungmoon Kim
Abstract: Embodiments herein describe a framework for classifying images. In some embodiments, it is determined whether an image includes synthetic image content. If it does, characteristics of the image are analyzed to determine whether the image includes characteristics particular to panoramic images (e.g., pixel values along the top and/or bottom boundaries of the image that are equivalent within a threshold, or a difference between the summed pixel values of the right vertical boundary and the summed pixel values of the left vertical boundary that is less than or equal to a threshold value). If the image includes characteristics particular to panoramic images, the image is classified as a synthetic panoramic image. If the image is determined not to include synthetic image content, a neural network is applied to the image and the image is classified as either non-synthetic panoramic or non-synthetic non-panoramic.
-