-
Publication Number: US10762425B2
Publication Date: 2020-09-01
Application Number: US16134716
Application Date: 2018-09-18
Applicant: NVIDIA Corporation
Inventor: Sifei Liu , Shalini De Mello , Jinwei Gu , Ming-Hsuan Yang , Jan Kautz
Abstract: A spatial linear propagation network (SLPN) system learns the affinity matrix for vision tasks. An affinity matrix is a generic matrix that defines the similarity of two points in space. The SLPN system is trained for a particular computer vision task and refines an input map (i.e., affinity matrix) that indicates pixels that share a particular property (e.g., color, object, texture, shape, etc.). Inputs to the SLPN system are input data (e.g., pixel values for an image) and the input map corresponding to the input data to be propagated. The input data is processed to produce task-specific affinity values (guidance data). The task-specific affinity values are applied to values in the input map, with at least two weighted values from each column contributing to a value in the refined map data for the adjacent column.
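The column-to-column propagation the abstract describes can be sketched as a single left-to-right pass in which each pixel mixes three weighted neighbors from the previously refined column. This is a minimal illustration, not the patented implementation: the function name, the wrap-around boundary handling, and the three-neighbor layout of the weights tensor are assumptions.

```python
import torch

def propagate_left_to_right(x, weights):
    """One left-to-right pass of three-way spatial linear propagation.

    x:       (B, C, H, W) input map to be refined
    weights: (B, 3, H, W) task-specific affinity values (guidance data),
             one weight per neighbor (up-left, left, down-left)
    Each pixel in column t combines three weighted values from the
    previously refined column t-1 with its own input value.
    """
    B, C, H, W = x.shape
    h = x.clone()
    for t in range(1, W):
        prev = h[..., t - 1]                       # refined column t-1, (B, C, H)
        up   = torch.roll(prev, shifts=1, dims=2)  # neighbor one row up (wraps)
        down = torch.roll(prev, shifts=-1, dims=2) # neighbor one row down (wraps)
        w = weights[..., t]                        # (B, 3, H) affinities for column t
        w_sum = w.sum(dim=1, keepdim=True)         # total propagation strength
        h[..., t] = ((1 - w_sum) * x[..., t]
                     + w[:, 0:1] * up + w[:, 1:2] * prev + w[:, 2:3] * down)
    return h
```

Analogous passes in the remaining three directions would be combined to give the refinement full spatial coverage.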
-
Publication Number: US20200273207A1
Publication Date: 2020-08-27
Application Number: US16872752
Application Date: 2020-05-12
Applicant: NVIDIA Corporation
Inventor: Jinwei Gu , Samarth Manoj Brahmbhatt , Kihwan Kim , Jan Kautz
Abstract: A deep neural network (DNN) system learns a map representation for estimating a camera position and orientation (pose). The DNN is trained to learn a map representation corresponding to the environment, defining positions and attributes of structures, trees, walls, vehicles, etc. The DNN system learns a map representation that is versatile and performs well for many different environments (indoor, outdoor, natural, synthetic, etc.). The DNN system receives images of an environment captured by a camera (observations) and outputs an estimated camera pose within the environment. The estimated camera pose is used to perform camera localization, i.e., recover the three-dimensional (3D) position and orientation of a moving camera, which is a fundamental task in computer vision with a wide variety of applications in robot navigation, car localization for autonomous driving, device localization for mobile navigation, and augmented/virtual reality.
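The input/output contract of such a system (image in, 3D position and orientation out) can be illustrated with a small pose regressor. This is a sketch under stated assumptions only: the patented DNN learns a map representation of the environment, whereas the hypothetical PoseRegressor below is a plain CNN that regresses a translation vector and a unit quaternion directly from the image.

```python
import torch
import torch.nn as nn

class PoseRegressor(nn.Module):
    """Minimal camera-pose regressor: image -> (translation, rotation)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling -> (B, 64, 1, 1)
        )
        self.fc_t = nn.Linear(64, 3)   # 3D camera position
        self.fc_q = nn.Linear(64, 4)   # orientation as a quaternion

    def forward(self, image):
        f = self.features(image).flatten(1)
        t = self.fc_t(f)
        q = self.fc_q(f)
        q = q / q.norm(dim=1, keepdim=True)  # normalize to a unit quaternion
        return t, q
```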
-
Publication Number: US20240338871A1
Publication Date: 2024-10-10
Application Number: US18746911
Application Date: 2024-06-18
Applicant: NVIDIA Corporation
Inventor: Donghoon Lee , Sifei Liu , Jinwei Gu , Ming-Yu Liu , Jan Kautz
CPC classification number: G06T11/60 , G06F18/217 , G06F18/24 , G06T3/02 , G06T7/30 , G06V30/274 , G06T7/70 , G06T2207/20081 , G06T2207/20084 , G06T2210/12
Abstract: One embodiment of a method includes applying a first generator model to a semantic representation of an image to generate an affine transformation, where the affine transformation represents a bounding box associated with at least one region within the image. The method further includes applying a second generator model to the affine transformation and the semantic representation to generate a shape of an object. The method further includes inserting the object into the image based on the bounding box and the shape.
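The final step, pasting the generated shape into the image via the predicted affine bounding box, might look like the following sketch. insert_object is a hypothetical helper; it assumes the affine matrix is expressed in the normalized coordinates that torch's affine_grid expects, and it composites a flat color where the real method would insert a synthesized object.

```python
import torch
import torch.nn.functional as F

def insert_object(image, shape_mask, theta):
    """Paste a generated object shape into an image via an affine box.

    image:      (B, 3, H, W) target image
    shape_mask: (B, 1, h, w) object shape from the second generator
    theta:      (B, 2, 3) affine transform from the first generator,
                encoding the bounding box (scale + translation),
                in affine_grid's normalized-coordinate convention.
    """
    B, _, H, W = image.shape
    # Warp the canonical shape mask into image coordinates.
    grid = F.affine_grid(theta, size=(B, 1, H, W), align_corners=False)
    mask = F.grid_sample(shape_mask, grid, align_corners=False)
    # Simple alpha compositing with a flat object color (illustrative).
    object_color = torch.tensor([1.0, 0.0, 0.0]).view(1, 3, 1, 1)
    return image * (1 - mask) + object_color * mask
```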
-
Publication Number: US11270161B2
Publication Date: 2022-03-08
Application Number: US16924005
Application Date: 2020-07-08
Applicant: NVIDIA Corporation
Inventor: Orazio Gallo , Jinwei Gu , Jan Kautz , Patrick Wieschollek
Abstract: When a computer image is generated from a real-world scene having a semi-reflective surface (e.g., a window), the image will contain, at the semi-reflective surface from the viewpoint of the camera, both a reflection of the scene in front of the semi-reflective surface and a transmission of the scene located behind it. Similar to a person viewing the real-world scene from different locations, angles, etc., the reflection and transmission may change, and also move relative to each other, as the viewpoint of the camera changes. Unfortunately, the dynamic nature of the reflection and transmission negatively impacts the performance of many computer applications, but performance can generally be improved if the reflection and transmission are separated. The present disclosure uses deep learning to separate reflection and transmission at a semi-reflective surface of a computer image generated from a real-world scene.
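The separation task can be sketched as an encoder-decoder that emits the two layers. The toy architecture below is an assumption for illustration only; the actual disclosure exploits how reflection and transmission move relative to each other across viewpoints, which this single-image model ignores.

```python
import torch
import torch.nn as nn

class LayerSeparator(nn.Module):
    """Toy encoder-decoder that splits an image into two layers:
    a transmission image (scene behind the glass) and a reflection
    image (scene mirrored in it)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Conv2d(32, 6, 3, padding=1)  # 3 channels per layer

    def forward(self, image):
        feats = self.encoder(image)
        out = torch.sigmoid(self.decoder(feats))       # keep values in [0, 1]
        transmission, reflection = out[:, :3], out[:, 3:]
        return transmission, reflection
```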
-
Publication Number: US11037051B2
Publication Date: 2021-06-15
Application Number: US16565885
Application Date: 2019-09-10
Applicant: NVIDIA Corporation
Inventor: Kihwan Kim , Jinwei Gu , Chen Liu , Jan Kautz
Abstract: Planar regions in three-dimensional scenes offer important geometric cues in a variety of three-dimensional perception tasks such as scene understanding, scene reconstruction, and robot navigation. Image analysis to detect planar regions can be performed by a deep learning architecture that includes a number of neural networks configured to estimate parameters for the planar regions. The neural networks process an image to detect an arbitrary number of plane objects in the image. Each plane object is associated with a number of estimated parameters including bounding box parameters, plane normal parameters, and a segmentation mask. Global parameters for the image, including a depth map, can also be estimated by one of the neural networks. Then, a segmentation refinement network jointly optimizes (i.e., refines) the segmentation masks for each instance of the plane objects and combines the refined segmentation masks to generate an aggregate segmentation mask for the image.
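The last step, combining the refined per-plane masks into an aggregate segmentation, reduces to a per-pixel argmax over plane instances. A minimal sketch, assuming each mask holds per-pixel probabilities and using a hypothetical 0.5 threshold for non-planar pixels:

```python
import torch

def aggregate_masks(instance_masks):
    """Combine per-plane segmentation masks into one aggregate mask.

    instance_masks: (N, H, W) refined per-instance probabilities.
    Returns an (H, W) map assigning each pixel the index of the most
    probable plane, or -1 where no plane exceeds the threshold.
    """
    probs, labels = instance_masks.max(dim=0)  # best plane per pixel
    labels[probs < 0.5] = -1                   # background / non-planar region
    return labels
```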
-
Publication Number: US20200160593A1
Publication Date: 2020-05-21
Application Number: US16685538
Application Date: 2019-11-15
Applicant: NVIDIA Corporation
Inventor: Jinwei Gu , Kihwan Kim , Jan Kautz , Guilin Liu , Soumyadip Sengupta
Abstract: Inverse rendering estimates physical scene attributes (e.g., reflectance, geometry, and lighting) from image(s) and is used for gaming, virtual reality, augmented reality, and robotics. An inverse rendering network (IRN) receives a single input image of a 3D scene and generates the physical scene attributes for the image. The IRN is trained by using the estimated physical scene attributes generated by the IRN to reproduce the input image and updating parameters of the IRN to reduce differences between the reproduced input image and the input image. A direct renderer and a residual appearance renderer (RAR) reproduce the input image. The RAR predicts a residual image representing complex appearance effects of the real (not synthetic) image based on features extracted from the image and the reflectance and geometry properties. The residual image represents near-field illumination, cast shadows, inter-reflections, and realistic shading that are not provided by the direct renderer.
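The training signal described here, reproducing the input image from the estimated attributes, can be sketched with a simple direct renderer plus a learned residual. Everything below is an illustrative assumption: lambertian_render is a stand-in for the direct renderer, and irn and rar are placeholder callables for the inverse rendering network and the residual appearance renderer.

```python
import torch

def lambertian_render(albedo, normals, light_dir):
    """Minimal direct renderer: Lambertian shading under one distant light.

    albedo:    (B, 3, H, W) reflectance
    normals:   (B, 3, H, W) unit surface normals
    light_dir: (B, 3) distant light directions (unit vectors)
    """
    shading = (normals * light_dir.view(-1, 3, 1, 1)).sum(1, keepdim=True)
    return albedo * shading.clamp(min=0.0)

def train_step(irn, rar, image):
    """Self-supervised step: reproduce the input from estimated attributes."""
    albedo, normals, light_dir = irn(image)              # estimated attributes
    direct = lambertian_render(albedo, normals, light_dir)
    residual = rar(image, albedo, normals)               # effects the direct
                                                         # renderer cannot model
    return ((direct + residual - image) ** 2).mean()     # photometric L2 loss
```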
-
Publication Number: US20190095791A1
Publication Date: 2019-03-28
Application Number: US16134716
Application Date: 2018-09-18
Applicant: NVIDIA Corporation
Inventor: Sifei Liu , Shalini De Mello , Jinwei Gu , Ming-Hsuan Yang , Jan Kautz
Abstract: A spatial linear propagation network (SLPN) system learns the affinity matrix for vision tasks. An affinity matrix is a generic matrix that defines the similarity of two points in space. The SLPN system is trained for a particular computer vision task and refines an input map (i.e., affinity matrix) that indicates pixels that share a particular property (e.g., color, object, texture, shape, etc.). Inputs to the SLPN system are input data (e.g., pixel values for an image) and the input map corresponding to the input data to be propagated. The input data is processed to produce task-specific affinity values (guidance data). The task-specific affinity values are applied to values in the input map, with at least two weighted values from each column contributing to a value in the refined map data for the adjacent column.
-
Publication Number: US11328169B2
Publication Date: 2022-05-10
Application Number: US16353835
Application Date: 2019-03-14
Applicant: NVIDIA Corporation
Inventor: Sifei Liu , Shalini De Mello , Jinwei Gu , Varun Jampani , Jan Kautz
Abstract: A temporal propagation network (TPN) system learns the affinity matrix for video image processing tasks. An affinity matrix is a generic matrix that defines the similarity of two points in space. The TPN system includes a guidance neural network model and a temporal propagation module and is trained for a particular computer vision task to propagate visual properties from a key-frame, represented by dense data (color), to another frame that is represented by coarse data (grayscale). The guidance neural network model generates an affinity matrix, referred to as a global transformation matrix, from task-specific data for the key-frame and the other frame. The temporal propagation module applies the global transformation matrix to the key-frame property data to produce propagated property data (color) for the other frame. For example, the TPN system may be used to colorize several frames of grayscale video using a single manually colorized key-frame.
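The propagation step itself is a single matrix product: the global transformation matrix mixes key-frame property vectors into target-frame properties. A minimal sketch, assuming per-pixel properties flattened to N = H*W points and a dense (rather than structured) transformation matrix:

```python
import torch

def propagate_properties(key_props, G):
    """Apply a learned global transformation matrix to key-frame properties.

    key_props: (B, N, C) per-pixel property vectors of the key-frame
               (e.g., color channels), flattened to N = H*W points.
    G:         (B, N, N) global transformation (affinity) matrix produced
               by the guidance network from both frames' task data.
    Returns the propagated properties for the other frame.
    """
    return torch.bmm(G, key_props)  # each target point mixes key-frame points
```

For colorization, key_props could hold the color channels of the manually colorized key-frame, with the result recombined with the grayscale target frame.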
-
Publication Number: US10984545B2
Publication Date: 2021-04-20
Application Number: US16439539
Application Date: 2019-06-12
Applicant: NVIDIA Corporation
Inventor: Jinwei Gu , Kihwan Kim , Chao Liu
Abstract: Techniques for estimating depth for a video stream captured by a monocular image sensor are disclosed. A sequence of image frames is captured by the monocular image sensor. A first neural network is configured to process at least a portion of the sequence of image frames to generate a depth probability volume (DPV). The DPV includes a plurality of probability maps corresponding to a number of discrete depth candidate locations over a range of depths defined for the scene. The DPV can be updated using a second neural network that is configured to generate adaptive gain parameters to integrate the DPVs over time. A third neural network is configured to refine the updated DPV from a lower resolution to a higher resolution that matches the original resolution of the sequence of image frames. A depth map can be calculated based on the DPV.
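Collapsing a DPV into a depth map is typically done by taking the probability-weighted mean over the discrete depth candidates. A minimal sketch, with hypothetical depth-range defaults:

```python
import torch

def depth_from_dpv(dpv, d_min=0.5, d_max=10.0):
    """Collapse a depth probability volume into a depth map.

    dpv: (B, D, H, W) logits over D discrete depth candidates spanning
         [d_min, d_max]. The depth map is the probability-weighted
         mean of the candidate depths at each pixel.
    """
    B, D, H, W = dpv.shape
    probs = torch.softmax(dpv, dim=1)                    # per-pixel distribution
    candidates = torch.linspace(d_min, d_max, D).view(1, D, 1, 1)
    return (probs * candidates).sum(dim=1)               # (B, H, W) expected depth
```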
-
Publication Number: US20210073575A1
Publication Date: 2021-03-11
Application Number: US17081805
Application Date: 2020-10-27
Applicant: NVIDIA Corporation
Inventor: Sifei Liu , Shalini De Mello , Jinwei Gu , Varun Jampani , Jan Kautz
Abstract: A temporal propagation network (TPN) system learns the affinity matrix for video image processing tasks. An affinity matrix is a generic matrix that defines the similarity of two points in space. The TPN system includes a guidance neural network model and a temporal propagation module and is trained for a particular computer vision task to propagate visual properties from a key-frame, represented by dense data (color), to another frame that is represented by coarse data (grayscale). The guidance neural network model generates an affinity matrix, referred to as a global transformation matrix, from task-specific data for the key-frame and the other frame. The temporal propagation module applies the global transformation matrix to the key-frame property data to produce propagated property data (color) for the other frame. For example, the TPN system may be used to colorize several frames of grayscale video using a single manually colorized key-frame.
-