-
Publication No.: US11774582B2
Publication Date: 2023-10-03
Application No.: US17149588
Filing Date: 2021-01-14
Inventors: Sanling Song, Yang Zheng, Izzat H. Izzat
IPC Classification: G01S13/89, G01S7/41, G01S13/72, G01S13/86, G01S13/931, G06V10/764, G06V10/80, G06V10/82, G06V20/58
CPC Classification: G01S13/89, G01S7/415, G01S7/417, G01S13/726, G01S13/865, G01S13/867, G01S13/931, G06V10/764, G06V10/803, G06V10/82, G06V20/58
Abstract: This document describes methods and systems directed at imaging-sensor and radar fusion for multiple-object tracking. Using tracking-by-detection, an object is first detected in a frame captured by an imaging sensor, and the object is then tracked over several consecutive frames by both the imaging sensor and a radar system. The object is tracked by assigning a probability that the object identified in one frame is the same object identified in the consecutive frame. For each data set captured by a sensor, a probability is calculated by a supervised-learning neural-network model using the data collected from the sensors. The probabilities associated with each sensor are then fused into a refined probability. By fusing the data gathered by the imaging sensor and the radar system across consecutive frames, a safety system can track multiple objects more accurately and reliably than it could using each sensor's data separately.
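The abstract does not name a specific fusion rule. As a minimal sketch of one plausible reading — treating the two per-sensor association probabilities as independent evidence and fusing them Bayesian-style — the function names below are hypothetical, not from the patent:

```python
import numpy as np

def fuse_association_probs(p_camera: float, p_radar: float) -> float:
    """Fuse two per-sensor association probabilities into one refined
    probability via an independent-likelihood (Bayesian product) rule.

    p_camera / p_radar: each sensor model's probability that a detection
    in the current frame is the same object as a track from the previous
    frame.
    """
    # Multiply the evidence for "same object" and for "different object",
    # then renormalize so the two hypotheses sum to 1.
    same = p_camera * p_radar
    diff = (1.0 - p_camera) * (1.0 - p_radar)
    return same / (same + diff)

def associate(fused_probs: np.ndarray) -> int:
    """Pick, for one track, the detection with the highest fused probability."""
    return int(np.argmax(fused_probs))
```

Note how a neutral sensor (probability 0.5) leaves the other sensor's estimate unchanged, while two agreeing sensors reinforce each other — the qualitative behavior the abstract attributes to the refined probability.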
-
Publication No.: US11693090B2
Publication Date: 2023-07-04
Application No.: US17650661
Filing Date: 2022-02-10
Inventors: Yang Zheng, Izzat H. Izzat
IPC Classification: G01S7/41, G01S17/894, G05D1/02, G06V20/56, G06F18/22, G06F18/2136, G06V10/764, G06V10/82
CPC Classification: G01S7/414, G01S17/894, G05D1/0251, G05D1/0257, G05D1/0278, G06F18/2136, G06F18/22, G06V10/764, G06V10/82, G06V20/56, G05D2201/0213
Abstract: This document describes “Multi-domain Neighborhood Embedding and Weighting” (MNEW) for use in processing point cloud data, including sparsely populated data obtained from a lidar, a camera, a radar, or combination thereof. MNEW is a process based on a dilation architecture that captures pointwise and global features of the point cloud data involving multi-scale local semantics adopted from a hierarchical encoder-decoder structure. Neighborhood information is embedded in both static geometric and dynamic feature domains. A geometric distance, feature similarity, and local sparsity can be computed and transformed into adaptive weighting factors that are reapplied to the point cloud data. This enables an automotive system to obtain outstanding performance with sparse and dense point cloud data. Processing point cloud data via the MNEW techniques promotes greater adoption of sensor-based autonomous driving and perception-based systems.
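The abstract lists the three quantities MNEW combines (geometric distance, feature similarity, local sparsity) but gives no formulas. The sketch below is one plausible NumPy reading of that weighting step only, not the patented architecture — the Gaussian kernels, the k-nearest-neighbor neighborhood, and the name `mnew_weights` are all assumptions:

```python
import numpy as np

def mnew_weights(points: np.ndarray, feats: np.ndarray, k: int = 8) -> np.ndarray:
    """For each point, combine geometric distance (static domain), feature
    similarity (dynamic domain), and local sparsity of its k nearest
    neighbors into adaptive weights.

    points: (N, 3) xyz coordinates; feats: (N, C) per-point features.
    Returns (N, k) weights that sum to 1 over each neighborhood.
    """
    # Pairwise geometric distances (static geometric domain).
    d_geo = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    # k nearest neighbors of each point, excluding the point itself.
    nbr = np.argsort(d_geo, axis=1)[:, 1:k + 1]
    d_nbr = np.take_along_axis(d_geo, nbr, axis=1)

    # Feature-space distances to those same neighbors (dynamic feature domain).
    d_feat = np.linalg.norm(feats[:, None, :] - feats[nbr], axis=-1)

    # Local sparsity: mean neighbor distance, used to rescale the geometric
    # kernel so sparse regions are not over-penalized.
    sparsity = d_nbr.mean(axis=1, keepdims=True) + 1e-8

    # Adaptive weights: Gaussian kernels over both domains, normalized.
    w = np.exp(-(d_nbr / sparsity) ** 2) * np.exp(-d_feat ** 2)
    return w / w.sum(axis=1, keepdims=True)
```

Normalizing by local sparsity is what lets the same kernel serve both sparse and dense regions of the cloud, matching the abstract's claim of good performance on both.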
-
Publication No.: US20220165065A1
Publication Date: 2022-05-26
Application No.: US17650661
Filing Date: 2022-02-10
Inventors: Yang Zheng, Izzat H. Izzat
IPC Classification: G06V20/56, G01S17/894, G05D1/02, G06K9/62
Abstract: This document describes “Multi-domain Neighborhood Embedding and Weighting” (MNEW) for use in processing point cloud data, including sparsely populated data obtained from a lidar, a camera, a radar, or combination thereof. MNEW is a process based on a dilation architecture that captures pointwise and global features of the point cloud data involving multi-scale local semantics adopted from a hierarchical encoder-decoder structure. Neighborhood information is embedded in both static geometric and dynamic feature domains. A geometric distance, feature similarity, and local sparsity can be computed and transformed into adaptive weighting factors that are reapplied to the point cloud data. This enables an automotive system to obtain outstanding performance with sparse and dense point cloud data. Processing point cloud data via the MNEW techniques promotes greater adoption of sensor-based autonomous driving and perception-based systems.
-
Publication No.: US10936861B2
Publication Date: 2021-03-02
Application No.: US16270105
Filing Date: 2019-02-07
Inventors: Yang Zheng, Izzat H. Izzat
Abstract: An object detection system includes color and infrared cameras, a controller-circuit, and instructions. The color and infrared cameras are configured to output respective color image and infrared image signals. The controller-circuit is in communication with the cameras, and includes a processor and a storage medium. The processor is configured to receive and transform the color image and infrared image signals into classification and location data associated with a detected object. The instructions are stored in the storage medium and executed by the processor, and are configured to utilize the color image and infrared image signals to form respective first and second maps. The first map has a first plurality of layers, and the second map has a second plurality of layers. Selected layers from each are paired and fused to form a feature pyramid that facilitates formulation of the classification and location data.
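As a rough illustration of the layer-pairing idea only (not the patented network), paired color and infrared feature maps of matching resolution can be fused level by level — here by channel concatenation, which is an assumption; the abstract does not specify the fusion operator:

```python
import numpy as np

def fuse_pyramid(color_maps, ir_maps):
    """Pair feature maps of matching resolution from a color branch and an
    infrared branch, and fuse each pair into one level of a feature pyramid.

    color_maps / ir_maps: lists of (C, H, W) arrays, one per selected layer,
    ordered from the finest to the coarsest resolution.
    """
    pyramid = []
    for c, ir in zip(color_maps, ir_maps):
        # Paired layers must align spatially before they can be fused.
        assert c.shape[1:] == ir.shape[1:], "paired layers must align spatially"
        # Fuse along the channel axis; a detection head would then run on
        # every pyramid level to produce classification and location data.
        pyramid.append(np.concatenate([c, ir], axis=0))
    return pyramid
```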
-
Publication No.: US11281917B2
Publication Date: 2022-03-22
Application No.: US17080822
Filing Date: 2020-10-26
Inventors: Yang Zheng, Izzat H. Izzat
IPC Classification: G06K9/00, G01S17/894, G05D1/02, G06K9/62
Abstract: This document describes “Multi-domain Neighborhood Embedding and Weighting” (MNEW) for use in processing point cloud data, including sparsely populated data obtained from a lidar, a camera, a radar, or combination thereof. MNEW is a process based on a dilation architecture that captures pointwise and global features of the point cloud data involving multi-scale local semantics adopted from a hierarchical encoder-decoder structure. Neighborhood information is embedded in both static geometric and dynamic feature domains. A geometric distance, feature similarity, and local sparsity can be computed and transformed into adaptive weighting factors that are reapplied to the point cloud data. This enables an automotive system to obtain outstanding performance with sparse and dense point cloud data. Processing point cloud data via the MNEW techniques promotes greater adoption of sensor-based autonomous driving and perception-based systems.
-
Publication No.: US20210231794A1
Publication Date: 2021-07-29
Application No.: US17149588
Filing Date: 2021-01-14
Inventors: Sanling Song, Yang Zheng, Izzat H. Izzat
IPC Classification: G01S13/86, G01S13/931, G01S13/72, G01S7/41, G01S13/89
Abstract: This document describes methods and systems directed at imaging-sensor and radar fusion for multiple-object tracking. Using tracking-by-detection, an object is first detected in a frame captured by an imaging sensor, and the object is then tracked over several consecutive frames by both the imaging sensor and a radar system. The object is tracked by assigning a probability that the object identified in one frame is the same object identified in the consecutive frame. For each data set captured by a sensor, a probability is calculated by a supervised-learning neural-network model using the data collected from the sensors. The probabilities associated with each sensor are then fused into a refined probability. By fusing the data gathered by the imaging sensor and the radar system across consecutive frames, a safety system can track multiple objects more accurately and reliably than it could using each sensor's data separately.
-
Publication No.: US20210133463A1
Publication Date: 2021-05-06
Application No.: US17080822
Filing Date: 2020-10-26
Inventors: Yang Zheng, Izzat H. Izzat
IPC Classification: G06K9/00, G06K9/62, G05D1/02, G01S17/894
Abstract: This document describes “Multi-domain Neighborhood Embedding and Weighting” (MNEW) for use in processing point cloud data, including sparsely populated data obtained from a lidar, a camera, a radar, or combination thereof. MNEW is a process based on a dilation architecture that captures pointwise and global features of the point cloud data involving multi-scale local semantics adopted from a hierarchical encoder-decoder structure. Neighborhood information is embedded in both static geometric and dynamic feature domains. A geometric distance, feature similarity, and local sparsity can be computed and transformed into adaptive weighting factors that are reapplied to the point cloud data. This enables an automotive system to obtain outstanding performance with sparse and dense point cloud data. Processing point cloud data via the MNEW techniques promotes greater adoption of sensor-based autonomous driving and perception-based systems.
-
Publication No.: US10754035B2
Publication Date: 2020-08-25
Application No.: US15407404
Filing Date: 2017-01-17
Inventors: Ronald M. Taylor, Izzat H. Izzat
Abstract: A ground-classifier system that classifies ground-cover proximate to an automated vehicle includes a lidar, a camera, and a controller. The lidar detects a point-cloud of a field-of-view. The camera renders an image of the field-of-view. The controller is configured to define a lidar-grid that segregates the point-cloud into an array of patches, and define a camera-grid that segregates the image into an array of cells. The point-cloud and the image are aligned such that a patch is aligned with a cell. A patch is determined to be ground when the height is less than a height-threshold. The controller is configured to determine a lidar-characteristic of cloud-points within the patch, determine a camera-characteristic of pixels within the cell, and determine a classification of the patch when the patch is determined to be ground, wherein the classification of the patch is determined based on the lidar-characteristic and the camera-characteristic.
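The lidar-grid and height-threshold steps can be sketched in a few lines. This is a simplified illustration under stated assumptions (a square xy-grid, ground defined as all patch points below the threshold; `classify_patches` and its defaults are hypothetical), not the patented controller logic:

```python
import numpy as np

def classify_patches(points: np.ndarray, cell_size: float = 1.0,
                     height_threshold: float = 0.2) -> dict:
    """Segregate a point-cloud into an array of patches on the xy-plane
    and flag each patch as ground when its points stay below a
    height-threshold.

    points: (N, 3) xyz lidar points. Returns a dict mapping a patch index
    (i, j) to True (ground) or False (not ground).
    """
    # Assign every point to a grid cell by its xy position.
    cells = np.floor(points[:, :2] / cell_size).astype(int)
    heights = {}
    for (i, j), z in zip(map(tuple, cells), points[:, 2]):
        heights.setdefault((i, j), []).append(z)
    # A patch is ground when all of its points lie below the threshold;
    # only ground patches would go on to lidar/camera-based classification.
    return {key: bool(max(zs) < height_threshold) for key, zs in heights.items()}
```

A full system would then compute a lidar-characteristic (e.g. intensity statistics) per ground patch and a camera-characteristic per aligned cell to classify the ground-cover type.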
-
Publication No.: US10366310B2
Publication Date: 2019-07-30
Application No.: US15680854
Filing Date: 2017-08-18
IPC Classification: G06K9/62, G06K9/00, G01S17/93, G01S13/86, G01S13/93, G06T7/11, G06T7/174, G06T7/246, G08G1/04, G08G1/16, G01S17/02
Abstract: An illustrative example object detection system includes a camera having a field of view. The camera provides an output comprising information regarding potential objects within the field of view. A processor is configured to select a portion of the camera output based on information from at least one other type of detector that indicates a potential object in the selected portion. The processor determines an Objectness of the selected portion based on information in the camera output regarding the selected portion.
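The abstract does not define how Objectness is computed. As a stand-in sketch only — using normalized edge energy in place of whatever learned measure the patent contemplates, with `objectness` and the region format as assumptions — the detector-guided scoring looks like:

```python
import numpy as np

def objectness(image: np.ndarray, region: tuple) -> float:
    """Score how 'object-like' a camera patch is, for a region that another
    detector (e.g. radar or lidar) has flagged as containing a potential
    object.

    image: (H, W) grayscale array; region: (row0, row1, col0, col1).
    Returns mean gradient magnitude — a crude proxy: textured, structured
    patches score higher than flat background.
    """
    r0, r1, c0, c1 = region
    patch = image[r0:r1, c0:c1].astype(float)
    # Finite-difference gradients; edges and texture raise the score.
    gy, gx = np.gradient(patch)
    return float(np.hypot(gx, gy).mean())
```

The point of the design is efficiency: only the portion of the camera output that the other detector flags is scored, rather than the whole frame.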
-
Publication No.: US10145951B2
Publication Date: 2018-12-04
Application No.: US15084914
Filing Date: 2016-03-30
Inventors: Izzat H. Izzat, Rohith Mv
IPC Classification: G01S13/86, G06K9/00, G06T7/521, G01S13/42, G01S7/41, G06K9/32, G01S13/93, G01S13/72, G01S17/93, G01S17/66, G01S17/02
Abstract: An object-detection system includes a radar sensor, a camera, and a controller. The radar-sensor is suitable for mounting on a vehicle and is used to detect a radar-signal reflected by an object in a radar-field-of-view. The radar-signal is indicative of a range, range-rate, and a direction to the object relative to the vehicle. The camera is used to capture an image of a camera-field-of-view that overlaps the radar-field-of-view. The controller is in communication with the radar-sensor and the camera. The controller is configured to determine a range-map for the image based on the range and the direction of the radar detection, define a detection-zone in the image based on the range-map, and process only the detection-zone of the image to determine an identity of the object.
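The detection-zone step can be illustrated with a pinhole-camera approximation: the zone shrinks as radar range grows, so that only a range-appropriate window is classified. The projection, focal length, object size, and `detection_zone` name are all assumptions for this sketch, not values from the patent:

```python
import numpy as np

def detection_zone(image_shape: tuple, pixel: tuple, range_m: float,
                   f_px: float = 800.0, obj_size_m: float = 2.0) -> tuple:
    """Derive a detection-zone in the image from one radar detection that
    has been projected to pixel coordinates.

    image_shape: (H, W); pixel: (row, col) of the projected detection;
    range_m: radar range in meters; f_px: assumed focal length in pixels;
    obj_size_m: assumed physical extent of a typical object.
    Returns (row0, row1, col0, col1) clipped to the image bounds.
    """
    h, w = image_shape
    # Pinhole model: apparent size in pixels = f * size / range.
    half = int(f_px * obj_size_m / range_m / 2)
    r, c = pixel
    r0, r1 = max(0, r - half), min(h, r + half)
    c0, c1 = max(0, c - half), min(w, c + half)
    return r0, r1, c0, c1

# Only this window of the image would then be run through the classifier,
# saving the cost of processing the full frame.
```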