-
Publication Number: US20240265712A1
Publication Date: 2024-08-08
Application Number: US18599719
Application Date: 2024-03-08
Applicant: NVIDIA Corporation
Inventor: David Wehr , Ibrahim Eden , Joachim Pehserl
CPC classification number: G06V20/58 , G01B11/22 , G01S17/89 , G05D1/249 , G06N7/01 , G06T7/579 , G06T7/70 , G06T2207/10028 , G06T2207/20081 , G06T2207/30261
Abstract: In various examples, systems and methods are described that generate scene flow in 3D space by simplifying 3D LiDAR data to a "2.5D" optical flow space (e.g., x, y, and depth flow). For example, LiDAR range images may be used to generate 2.5D representations of depth flow information between frames of LiDAR data, two or more range images may be compared to generate depth flow information, and messages may be passed (e.g., using a belief propagation algorithm) to update pixel values in the 2.5D representation. The resulting images may then be used to generate 2.5D motion vectors, which may be converted back to 3D space to produce a 3D scene flow representation of the environment around an autonomous machine.
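To make the final conversion step in the abstract above concrete, here is a minimal Python sketch of turning per-pixel 2.5D motion vectors from a LiDAR range image back into 3D scene-flow vectors. The spherical projection model, field-of-view values, and all function names are illustrative assumptions, not the implementation claimed in the publication.

```python
# Hypothetical sketch of the last step the abstract describes: converting
# per-pixel "2.5D" motion vectors (pixel flow dx, dy plus depth flow)
# computed on a LiDAR range image back into 3D scene-flow vectors.
# The projection model and all names are assumptions for illustration.
import numpy as np

def pixel_to_xyz(u, v, depth, h_fov=2 * np.pi, v_fov=np.radians(30.0),
                 width=1024, height=64):
    """Back-project range-image pixel (u, v) with range `depth` to 3D."""
    azimuth = (u / width) * h_fov - h_fov / 2.0
    elevation = v_fov / 2.0 - (v / height) * v_fov
    x = depth * np.cos(elevation) * np.cos(azimuth)
    y = depth * np.cos(elevation) * np.sin(azimuth)
    z = depth * np.sin(elevation)
    return np.stack([x, y, z], axis=-1)

def scene_flow_from_25d(range_img, flow_uv, flow_depth):
    """range_img: (H, W) ranges; flow_uv: (H, W, 2) pixel flow;
    flow_depth: (H, W) change in range between frames."""
    h, w = range_img.shape
    v, u = np.mgrid[0:h, 0:w].astype(np.float32)
    p0 = pixel_to_xyz(u, v, range_img, width=w, height=h)
    p1 = pixel_to_xyz(u + flow_uv[..., 0], v + flow_uv[..., 1],
                      range_img + flow_depth, width=w, height=h)
    return p1 - p0  # (H, W, 3) 3D motion vector per range-image pixel
```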
-
Publication Number: US20240029447A1
Publication Date: 2024-01-25
Application Number: US18482183
Application Date: 2023-10-06
Applicant: NVIDIA Corporation
Inventor: Nikolai Smolyanskiy , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
CPC classification number: G06V20/584 , G01S17/931 , B60W60/0016 , B60W60/0027 , B60W60/0011 , G01S17/89 , G05D1/0088 , G06T19/006 , G06V20/58 , G06N3/045 , B60W2420/403 , G06T2207/10028 , G06T2207/20081 , G06T2207/20084 , G06T2207/30261
Abstract: One or more deep neural networks (DNNs) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., a perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., a top-down view). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
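As a rough illustration of the chained, multi-view structure the abstract describes, the following PyTorch sketch defines a first stage that segments classes in a perspective (range-image) view and a second stage that segments classes and regresses instance geometry in a top-down view. Channel counts, layer choices, and names are assumptions for illustration only, not the patented architecture.

```python
# Toy sketch of the two-stage, multi-view idea: stage 1 segments classes
# in perspective view; stage 2 segments classes and regresses instance
# geometry in a top-down view. All sizes and names are illustrative.
import torch.nn as nn

class PerspectiveStage(nn.Module):
    def __init__(self, in_ch=5, num_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_classes, 1),  # per-pixel class logits
        )

    def forward(self, range_image):
        return self.net(range_image)

class TopDownStage(nn.Module):
    def __init__(self, in_ch=4, num_classes=4, geom_ch=6):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU())
        self.seg_head = nn.Conv2d(32, num_classes, 1)   # class segmentation
        self.geom_head = nn.Conv2d(32, geom_ch, 1)      # instance geometry

    def forward(self, bev):
        f = self.trunk(bev)
        return self.seg_head(f), self.geom_head(f)
```

The stages are chained by reprojecting the first stage's outputs from the perspective view into the top-down grid; one hypothetical way to do that is sketched after a later record in this list.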
-
Publication Number: US11532168B2
Publication Date: 2022-12-20
Application Number: US16915346
Application Date: 2020-06-29
Applicant: NVIDIA Corporation
Inventor: Nikolai Smolyanskiy , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
Abstract: One or more deep neural networks (DNNs) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., a perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., a top-down view). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
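The abstract leaves the hand-off between views implicit. One plausible chaining step, sketched below under stated assumptions, splats the first stage's per-pixel outputs into a top-down grid using each range-image pixel's 3D position. The grid extent, resolution, and max-pooling of colliding points are hypothetical choices, not the patented method.

```python
# Hypothetical sketch of passing stage-1 results from perspective view to a
# top-down (bird's-eye-view) grid using each pixel's 3D position.
import numpy as np

def perspective_to_bev(logits, points_xyz, grid=(200, 200), extent=50.0):
    """logits: (H, W, C) first-stage outputs; points_xyz: (H, W, 3) 3D
    position of each range-image pixel. Returns (gh, gw, C) BEV features."""
    gh, gw = grid
    bev = np.zeros((gh, gw, logits.shape[-1]), dtype=np.float32)
    # Map metric x/y in [-extent, extent] meters to grid cells.
    gx = ((points_xyz[..., 0] + extent) / (2 * extent) * gw).astype(int)
    gy = ((points_xyz[..., 1] + extent) / (2 * extent) * gh).astype(int)
    valid = (gx >= 0) & (gx < gw) & (gy >= 0) & (gy < gh)
    for y, x, feat in zip(gy[valid], gx[valid], logits[valid]):
        bev[y, x] = np.maximum(bev[y, x], feat)  # max-pool colliding points
    return bev
```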
-
Publication Number: US20220415059A1
Publication Date: 2022-12-29
Application Number: US17895940
Application Date: 2022-08-25
Applicant: NVIDIA Corporation
Inventor: Nikolai Smolyanskiy , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
Abstract: One or more deep neural networks (DNNs) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., a perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., a top-down view). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
-
Publication Number: US20210342609A1
Publication Date: 2021-11-04
Application Number: US17377064
Application Date: 2021-07-15
Applicant: NVIDIA Corporation
Inventor: Nikolai Smolyanskiy , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
Abstract: One or more deep neural networks (DNNs) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., a perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., a top-down view). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
-
Publication Number: US20240410981A1
Publication Date: 2024-12-12
Application Number: US18810728
Application Date: 2024-08-21
Applicant: NVIDIA Corporation
Inventor: Nikolai Smolyanskiy , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
IPC: G01S7/48 , B60W60/00 , G01S17/89 , G01S17/931 , G05D1/81 , G06N3/045 , G06T19/00 , G06V10/10 , G06V10/25 , G06V10/26 , G06V10/44 , G06V10/764 , G06V10/774 , G06V10/80 , G06V10/82 , G06V20/56 , G06V20/58
Abstract: One or more deep neural networks (DNNs) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., a perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., a top-down view). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
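As a hedged sketch of the post-processing the abstract mentions (turning DNN outputs into bounding boxes and class labels), the following decodes per-cell top-down class scores and regressed box geometry into labeled 2D boxes. The output encoding (center offsets, length, width, yaw), the cell size, and the confidence threshold are assumptions for illustration.

```python
# Illustrative decoding of per-cell DNN outputs into labeled 2D boxes.
# The (dx, dy, length, width, yaw) encoding is a hypothetical convention.
import numpy as np

def decode_boxes(class_scores, geometry, cell_size=0.25, score_thresh=0.7):
    """class_scores: (C, H, W) per-cell class scores; geometry: (5, H, W)
    per-cell (dx, dy, length, width, yaw). Returns a list of detections."""
    labels = class_scores.argmax(axis=0)           # (H, W) best class
    scores = class_scores.max(axis=0)              # (H, W) its score
    ys, xs = np.nonzero(scores > score_thresh)     # confident cells only
    detections = []
    for y, x in zip(ys, xs):
        dx, dy, length, width, yaw = geometry[:, y, x]
        detections.append({
            "label": int(labels[y, x]),
            "score": float(scores[y, x]),
            "center": ((x + dx) * cell_size, (y + dy) * cell_size),
            "size": (float(length), float(width)),
            "yaw": float(yaw),
        })
    return detections  # duplicate boxes would still need NMS or clustering
```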
-
Publication Number: US20210150230A1
Publication Date: 2021-05-20
Application Number: US16915346
Application Date: 2020-06-29
Applicant: NVIDIA Corporation
Inventor: Nikolai Smolyanskiy , Ryan Oldja , Ke Chen , Alexander Popov , Joachim Pehserl , Ibrahim Eden , Tilman Wekel , David Wehr , Ruchi Bhargava , David Nister
Abstract: One or more deep neural networks (DNNs) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., a perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., a top-down view). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
-
Publication Number: US11954914B2
Publication Date: 2024-04-09
Application Number: US17392050
Application Date: 2021-08-02
Applicant: NVIDIA Corporation
Inventor: David Wehr , Ibrahim Eden , Joachim Pehserl
CPC classification number: G06V20/58 , G01B11/22 , G01S17/89 , G05D1/0231 , G06N7/01 , G06T7/579 , G06T7/70 , G06T2207/10028 , G06T2207/20081 , G06T2207/30261
Abstract: In various examples, systems and methods are described that generate scene flow in 3D space by simplifying 3D LiDAR data to a "2.5D" optical flow space (e.g., x, y, and depth flow). For example, LiDAR range images may be used to generate 2.5D representations of depth flow information between frames of LiDAR data, two or more range images may be compared to generate depth flow information, and messages may be passed (e.g., using a belief propagation algorithm) to update pixel values in the 2.5D representation. The resulting images may then be used to generate 2.5D motion vectors, which may be converted back to 3D space to produce a 3D scene flow representation of the environment around an autonomous machine.
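The message passing the abstract refers to can be illustrated with a minimal min-sum belief-propagation sweep over a range-image grid, shown below. A real system would use a richer label space, cost terms, and multiple sweep directions; the quadratic smoothness cost and single left-to-right pass here are simplifying assumptions.

```python
# Minimal, illustrative min-sum belief-propagation update over a range-image
# grid, in the spirit of the message passing the abstract describes.
import numpy as np

def bp_pass_right(unary, smooth_weight=0.5):
    """unary: (H, W, L) data costs for L candidate depth-flow labels.
    Returns beliefs after one left-to-right message sweep."""
    h, w, num_labels = unary.shape
    labels = np.arange(num_labels, dtype=np.float32)
    # Quadratic smoothness cost between neighboring pixels' labels.
    pairwise = smooth_weight * (labels[:, None] - labels[None, :]) ** 2
    msg = np.zeros((h, num_labels), dtype=np.float32)  # message into column x
    beliefs = unary.copy()
    for x in range(w):
        beliefs[:, x, :] += msg
        # Message to column x+1: minimize over this column's labels.
        msg = (beliefs[:, x, :, None] + pairwise[None]).min(axis=1)
        msg -= msg.min(axis=1, keepdims=True)          # normalize for stability
    return beliefs  # per-pixel label estimate = beliefs.argmin(axis=-1)
```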
-
Publication Number: US20230054759A1
Publication Date: 2023-02-23
Application Number: US17409052
Application Date: 2021-08-23
Applicant: NVIDIA Corporation
IPC: G01S17/66 , G01S17/931 , G01S17/58 , B25J9/16
Abstract: In various examples, an obstacle detector is capable of tracking the velocity state of detected objects or obstacles using LiDAR data. For example, using LiDAR data alone, an iterative closest point (ICP) algorithm may be used to determine the current state of detected objects for a current frame, and a Kalman filter may be used to maintain a tracked state of the one or more objects detected over time. The obstacle detector may be configured to estimate a velocity for one or more detected objects, compare the estimated velocity to one or more previously tracked states for previously detected objects, determine that a detected object corresponds to a particular previously detected object, and update the tracked state for that object with the estimated velocity.
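As a hedged sketch of the tracking loop described above, the following constant-velocity Kalman filter maintains a per-obstacle tracked state, with an ICP-style position estimate from the current LiDAR frame acting as the measurement. The state layout and matrix values are illustrative assumptions, and the data-association step is only hinted at in the comments.

```python
# Minimal constant-velocity Kalman filter for one tracked obstacle,
# assuming an ICP-derived 2D position as the per-frame measurement.
import numpy as np

class VelocityTrack:
    """State x = [px, py, vx, vy]; measurement z = [px, py] from ICP."""
    def __init__(self, px, py, dt=0.1):
        self.x = np.array([px, py, 0.0, 0.0])
        self.P = np.eye(4)                                    # state covariance
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt  # motion model
        self.H = np.eye(2, 4)                                 # observe position
        self.Q = 0.01 * np.eye(4)                             # process noise
        self.R = 0.1 * np.eye(2)                              # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z):
        y = z - self.H @ self.x                        # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)       # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

    @property
    def velocity(self):
        # Tracked (vx, vy); comparing this against a new frame's estimated
        # velocity is one way to associate detections with existing tracks.
        return self.x[2:]
```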
-
Publication Number: US20230033470A1
Publication Date: 2023-02-02
Application Number: US17392050
Application Date: 2021-08-02
Applicant: NVIDIA Corporation
Inventor: David Wehr , Ibrahim Eden , Joachim Pehserl
Abstract: In various examples, systems and methods are described that generate scene flow in 3D space by simplifying 3D LiDAR data to a "2.5D" optical flow space (e.g., x, y, and depth flow). For example, LiDAR range images may be used to generate 2.5D representations of depth flow information between frames of LiDAR data, two or more range images may be compared to generate depth flow information, and messages may be passed (e.g., using a belief propagation algorithm) to update pixel values in the 2.5D representation. The resulting images may then be used to generate 2.5D motion vectors, which may be converted back to 3D space to produce a 3D scene flow representation of the environment around an autonomous machine.
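Completing the picture from the two scene-flow sketches earlier in this list, the comparison step ("two or more range images may be compared to generate depth flow information") can be illustrated by building a per-pixel data-cost volume over candidate depth-flow labels, which would then feed a message-passing step like the one sketched above. The absolute-difference cost and label set are assumptions for illustration.

```python
# Illustrative data-cost construction: compare two consecutive LiDAR range
# images over a small set of candidate depth-flow labels.
import numpy as np

def depth_flow_costs(range_t0, range_t1, labels=(-0.4, -0.2, 0.0, 0.2, 0.4)):
    """range_t0, range_t1: (H, W) range images from consecutive frames.
    Returns (H, W, L) costs: how badly each candidate per-pixel depth
    change explains the next frame."""
    h, w = range_t0.shape
    costs = np.empty((h, w, len(labels)), dtype=np.float32)
    for i, dz in enumerate(labels):
        costs[..., i] = np.abs((range_t0 + dz) - range_t1)
    return costs
```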