Abstract:
A method is disclosed for improved target grouping of sensor measurements in an object detection system. The method uses road curvature information to improve grouping accuracy by better predicting a new location of a known target object and matching it to sensor measurements. Additional target attributes, including range rate and target cross-section, are also used for improved grouping accuracy. Distance compression is also employed, where range is compressed on a log scale in order to diminish errors in the measurement of distant objects. Grid-based techniques, including the use of hash tables and a flood fill algorithm, improve the computational performance of target object identification, where the number of computations can be reduced by an order of magnitude.
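As an illustration of the grid-based techniques above, the following Python sketch hashes polar measurements into grid cells whose range axis is log-compressed, then groups occupied cells with a flood fill. The cell resolutions and the (range, azimuth) measurement format are illustrative assumptions, not the patent's parameters.

import math
from collections import defaultdict, deque

def cell_key(r, az, log_r_res=0.1, az_res_deg=2.0):
    # Compress range on a log scale so distant measurements, whose range
    # error grows with distance, fall into proportionally coarser cells;
    # hash (log-range, azimuth) into a discrete grid cell.
    return (int(math.log1p(r) / log_r_res), int(az / az_res_deg))

def group_measurements(meas):
    # meas: list of (range_m, azimuth_deg) sensor measurements.
    grid = defaultdict(list)                 # hash table: cell -> indices
    for i, (r, az) in enumerate(meas):
        grid[cell_key(r, az)].append(i)

    groups, seen = [], set()
    for start in grid:                       # flood fill over occupied cells
        if start in seen:
            continue
        queue, members = deque([start]), []
        seen.add(start)
        while queue:
            cx, cy = queue.popleft()
            members.extend(grid[(cx, cy)])
            for nb in ((cx + dx, cy + dy)
                       for dx in (-1, 0, 1) for dy in (-1, 0, 1)):
                if nb in grid and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        groups.append(members)
    return groups

print(group_measurements([(10.0, 1.0), (10.5, 1.5), (80.0, 40.0)]))
# -> [[0, 1], [2]]: the two nearby measurements group together

Because only occupied cells are stored in the hash table, the flood fill visits a number of cells proportional to the measurements rather than the full grid, which is how this style of approach cuts the computation count.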
Abstract:
A system and method for fusing the outputs from multiple LiDAR sensors. The method includes providing object files for objects detected by the sensors at a previous sample time, where the object files identify the position, orientation and velocity of the detected objects. The method also includes receiving a plurality of scan returns from objects detected in the field-of-view of the sensors at a current sample time and constructing a point cloud from the scan returns. The method then segments the scan points in the point cloud into predicted clusters, where each cluster initially identifies an object detected by the sensors. The method matches the predicted clusters with predicted object models generated from objects being tracked during the previous sample time. The method creates new object models, deletes dying object models and updates the object files based on the object models for the current sample time.
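A minimal sketch of the cluster-to-object-model matching step, assuming 2-D point clusters and constant-velocity object models. The gating distance, data layout, and nearest-centroid assignment are illustrative assumptions, not the patented matching algorithm.

import numpy as np

def predict(objects, dt):
    # Constant-velocity prediction of each tracked object's centroid.
    return {oid: o["pos"] + o["vel"] * dt for oid, o in objects.items()}

def match_clusters(clusters, objects, dt, gate=2.0):
    # clusters: list of (N, 2) arrays of segmented scan points.
    # objects: object file {id: {"pos": xy, "vel": xy}} from the
    # previous sample time.
    predicted = predict(objects, dt)
    matches, new_clusters = {}, []
    for c in clusters:
        centroid = c.mean(axis=0)
        # Nearest predicted object model within the gating distance.
        best = min(predicted,
                   key=lambda i: np.linalg.norm(predicted[i] - centroid),
                   default=None)
        if best is not None and np.linalg.norm(predicted[best] - centroid) < gate:
            matches[best] = centroid         # update this object model
        else:
            new_clusters.append(centroid)    # spawn a new object model
    dying = set(objects) - set(matches)      # unmatched models get deleted
    return matches, new_clusters, dying

objects = {1: {"pos": np.array([10.0, 0.0]), "vel": np.array([1.0, 0.0])}}
clusters = [np.array([[10.4, 0.1], [10.6, -0.1]])]
print(match_clusters(clusters, objects, dt=0.5))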
Abstract:
A method and system for estimating the state of health of an object sensing fusion system. Target data from a vision system and a radar system, which are used by an object sensing fusion system, are also stored in a context queue. The context queue maintains the vision and radar target data for a sequence of many frames covering a sliding window of time. The target data from the context queue are used to compute matching scores, which are indicative of how well vision targets correlate with radar targets, and vice versa. The matching scores are computed within individual frames of vision and radar data, and across a sequence of multiple frames. The matching scores are used to assess the state of health of the object sensing fusion system. If the fusion system state of health is below a certain threshold, one or more faulty sensors are identified.
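The matching-score idea can be sketched as follows, assuming targets are reported as 2-D positions. The gating distance, window length, and scoring formula are illustrative assumptions; the sketch shows only the vision-to-radar direction, averaged across the sliding window, while the patent also scores radar against vision.

from collections import deque
import math

class ContextQueue:
    def __init__(self, window=30):
        self.frames = deque(maxlen=window)   # sliding window of frames

    def push(self, vision_targets, radar_targets):
        self.frames.append((vision_targets, radar_targets))

    def matching_score(self, gate=3.0):
        # Fraction of vision targets with a radar target within the
        # gate, averaged over all frames in the window.
        per_frame = []
        for vision, radar in self.frames:
            if not vision:
                continue
            hits = sum(1 for v in vision
                       if any(math.dist(v, r) < gate for r in radar))
            per_frame.append(hits / len(vision))
        return sum(per_frame) / len(per_frame) if per_frame else 1.0

q = ContextQueue()
q.push([(10.0, 0.5)], [(10.2, 0.4)])
q.push([(12.0, 0.6)], [])                    # radar missed this frame
print(q.matching_score())                    # 0.5: degraded state of health

A persistently low score across the window would then trigger the health check, and comparing the per-sensor scores indicates which sensor is faulty.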
Abstract:
A method includes receiving sensed vehicle-state data, actuation-command data, and surface-coefficient data from a plurality of remote vehicles. The sensed vehicle-state data, the actuation-command data, and the surface-coefficient data are input into a self-supervised recurrent neural network (RNN) to predict vehicle states of a host vehicle in a plurality of driving scenarios. The host vehicle is then commanded to move autonomously according to a trajectory determined using the vehicle states predicted by the self-supervised RNN.
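A minimal PyTorch sketch of such a network, assuming a 4-dimensional vehicle state, a 2-dimensional actuation command, and a scalar surface coefficient. The dimensions and single-GRU architecture are assumptions; "self-supervised" is realized here by using the next recorded state as the training target, so no manual labels are needed.

import torch
import torch.nn as nn

class StatePredictorRNN(nn.Module):
    # GRU mapping [vehicle state, actuation command, surface coefficient]
    # sequences to next-step vehicle states.
    def __init__(self, state_dim=4, cmd_dim=2, hidden=64):
        super().__init__()
        self.gru = nn.GRU(state_dim + cmd_dim + 1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, state_dim)

    def forward(self, states, cmds, mu):
        x = torch.cat([states, cmds, mu], dim=-1)
        h, _ = self.gru(x)
        return self.head(h)                  # predicted next states

# Training step: predict the state at t+1 from fleet data up to t.
model = StatePredictorRNN()
states = torch.randn(8, 20, 4)               # batch, time, state
cmds, mu = torch.randn(8, 20, 2), torch.rand(8, 20, 1)
pred = model(states[:, :-1], cmds[:, :-1], mu[:, :-1])
loss = nn.functional.mse_loss(pred, states[:, 1:])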
Abstract:
A system for real-time control selection and calibration in a vehicle using a deep-Q network (DQN) includes sensors and actuators disposed on the vehicle. A control module has a processor, memory, and input/output (I/O) ports in communication with the sensors and the actuators. The processor executes program code portions that cause the sensors and actuators to obtain vehicle dynamics and road surface estimation information and utilize that information to generate a vehicle dynamical context. The system decides which one of a plurality of predefined calibrations is appropriate for the vehicle dynamical context and generates a command to the actuators based on the selected calibration. The system continuously and recursively executes the program code portions while the vehicle is being operated.
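A sketch of the selection step, assuming the vehicle dynamical context is a fixed-length feature vector and that five predefined calibrations exist. The network shape and context features are illustrative, not the patent's.

import torch
import torch.nn as nn

N_CALIBRATIONS = 5                           # predefined calibration sets

class CalibrationDQN(nn.Module):
    # Maps a vehicle dynamical context (dynamics plus road surface
    # estimate) to a Q-value per predefined calibration.
    def __init__(self, context_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(context_dim, 64), nn.ReLU(),
            nn.Linear(64, N_CALIBRATIONS))

    def forward(self, context):
        return self.net(context)

dqn = CalibrationDQN()
# Context built from sensed dynamics and road surface estimation.
context = torch.tensor([[0.3, 0.1, 22.0, 0.02, 0.8, 0.0, 0.0, 1.0]])
calibration_id = dqn(context).argmax(dim=-1).item()   # greedy selection
# The selected calibration then parameterizes the actuator command;
# this select-and-command loop repeats while the vehicle operates.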
Abstract:
Presented are embedded control systems with logic for computation and data sharing, methods for making/using such systems, and vehicles with distributed sensors and embedded processing hardware for provisioning automated driving functionality. A method for operating embedded controllers connected with distributed sensors includes receiving a first data stream from a first sensor via a first embedded controller, and storing the first data stream with a first timestamp and data lifespan via a shared data buffer in a memory device. A second data stream is received from a second sensor via a second embedded controller. A timing impact of the second data stream is calculated based on the corresponding timestamp and data lifespan. Upon determining that the timing impact does not violate a timing constraint, the first data stream is purged from memory and the second data stream is stored with a second timestamp and data lifespan in the memory device.
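The abstract does not give the timing-impact formula, so the sketch below assumes a simple one: how far past the displaced entry's lifespan the system would run by accepting the new stream. The buffer layout, constraint value, and admit/purge policy are all illustrative assumptions.

import time

class SharedDataBuffer:
    # Shared buffer for embedded controllers: each entry carries a
    # timestamp and a data lifespan.
    def __init__(self, timing_constraint=0.05):
        self.entries = {}                    # stream_id -> (data, ts, lifespan)
        self.timing_constraint = timing_constraint

    def store(self, stream_id, data, lifespan):
        self.entries[stream_id] = (data, time.monotonic(), lifespan)

    def admit(self, stream_id, data, lifespan, replaces):
        # Timing impact of the incoming stream, judged against the
        # timestamp and lifespan of the entry it would displace.
        now = time.monotonic()
        _, ts, life = self.entries[replaces]
        impact = now - (ts + life)           # > 0 once the old data expired
        if impact <= self.timing_constraint:
            del self.entries[replaces]       # purge the first stream
            self.store(stream_id, data, lifespan)
            return True
        return False                         # would violate the constraint

buf = SharedDataBuffer()
buf.store("camera_front", b"frame0", lifespan=0.10)
print(buf.admit("radar_front", b"scan0", 0.10, replaces="camera_front"))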
Abstract:
A dynamic side blind zone method includes determining that a host vehicle is approaching a first lane that is nonparallel to a second lane. The host vehicle is moving in the second lane. The method further includes activating an adaptive side blind zone alert system of the host vehicle in response to determining that the host vehicle is approaching the first lane that is nonparallel to the second lane, determining a warning zone in response to activating the adaptive side blind zone alert system of the host vehicle, and detecting a remote vehicle inside the warning zone after determining the warning zone. The remote vehicle is moving in the first lane. The method further includes providing an alert to a vehicle user of the host vehicle in response to detecting that the remote vehicle is inside the warning zone.
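A geometric sketch of the two checks the method chains together: nonparallel-lane detection from heading difference, and a host-frame rectangular warning zone. The heading tolerance and zone dimensions are illustrative assumptions.

import math

def lanes_nonparallel(host_heading, lane_heading, tol_deg=10.0):
    # The first lane counts as nonparallel if its heading differs from
    # the host's lane by more than a tolerance (e.g., a merging ramp);
    # oncoming (anti-parallel) lanes are treated as parallel.
    diff = abs((host_heading - lane_heading + 180) % 360 - 180)
    return min(diff, 180 - diff) > tol_deg

def in_warning_zone(rel_x, rel_y, zone=(-20.0, 2.0, 1.0, 6.0)):
    # Warning zone as a host-frame rectangle (x_back, x_front,
    # y_near, y_far) on the side of the approaching lane.
    x_back, x_front, y_near, y_far = zone
    return x_back <= rel_x <= x_front and y_near <= rel_y <= y_far

if lanes_nonparallel(host_heading=90.0, lane_heading=60.0):
    # Adaptive side blind zone alert active; check the remote vehicle.
    if in_warning_zone(rel_x=-8.0, rel_y=3.5):
        print("ALERT: remote vehicle inside side blind zone warning area")

In a fuller implementation the warning zone would be skewed along the nonparallel lane's direction rather than kept axis-aligned; the rectangle here keeps the check readable.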
Abstract:
A vehicle system includes a lidar system that obtains an initial point cloud and obtains a dual density point cloud by implementing a first neural network and based on the initial point cloud. The dual density point cloud results from reducing point density of the initial point cloud outside a region of interest (ROI). Processing the dual density point cloud results in a detection result that indicates any objects in a field of view (FOV) of the lidar system. A controller obtains the detection result from the lidar system and controls an operation of the vehicle based on the detection result.
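The density reduction can be illustrated as below. In the patent a first neural network produces the dual density cloud, so the random subsampling here is only a stand-in showing the input/output relationship; the ROI bounds and keep ratio are assumptions.

import numpy as np

def dual_density(points, roi, keep_ratio=0.25, rng=np.random.default_rng(0)):
    # points: (N, 3) lidar point cloud; roi: (xmin, xmax, ymin, ymax).
    # Full density inside the ROI, decimated outside it.
    xmin, xmax, ymin, ymax = roi
    inside = ((points[:, 0] >= xmin) & (points[:, 0] <= xmax) &
              (points[:, 1] >= ymin) & (points[:, 1] <= ymax))
    outside_idx = np.flatnonzero(~inside)
    keep = rng.choice(outside_idx, int(len(outside_idx) * keep_ratio),
                      replace=False)
    return np.concatenate([points[inside], points[keep]])

cloud = np.random.default_rng(1).uniform(-50, 50, size=(10000, 3))
dual = dual_density(cloud, roi=(-10, 10, -5, 5))
print(len(cloud), "->", len(dual))           # fewer points outside the ROI

The payoff is that downstream detection processes far fewer points while the ROI, where objects matter most, keeps its original resolution.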
Abstract:
A latency masking system for use in an autonomous vehicle (AV) system includes a sensors module providing sensor data from a plurality of sensors. The sensor data includes image frames provided by a vehicle camera and vehicle motion data. A wireless transceiver transmits the sensor data to a remote server associated with a network infrastructure and receives remote state information derived from the sensor data. An on-board function module receives the sensor data from the sensors module and generates local state information. A state fusion and prediction module receives the remote state information and the local state information, and uses checkpoints in a state history data structure to update the local state information with the remote state information.
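A minimal sketch of checkpoint-based fusion, using a scalar state and a fixed blend weight for clarity. The checkpoint layout, blend rule, and replay of motion deltas are illustrative assumptions about how a late-arriving remote result can be folded back in at the frame it describes and rolled forward to the present.

import bisect

class StateHistory:
    # Checkpointed state history: when a remote result arrives for an
    # old frame, rewind to that checkpoint, fuse, and replay the local
    # motion updates so the remote latency is masked.
    def __init__(self):
        self.checkpoints = []                # [(timestamp, state)]
        self.motions = []                    # [(timestamp, delta)]

    def record(self, ts, state, delta):
        self.checkpoints.append((ts, state))
        self.motions.append((ts, delta))

    def fuse_remote(self, remote_ts, remote_state, w=0.5):
        times = [t for t, _ in self.checkpoints]
        i = bisect.bisect_right(times, remote_ts) - 1
        _, local = self.checkpoints[i]
        fused = w * local + (1 - w) * remote_state   # simple blend
        for _, d in self.motions[i + 1:]:            # replay newer motion
            fused += d
        return fused

h = StateHistory()
for t, s, d in [(0, 0.0, 0.5), (1, 0.5, 0.5), (2, 1.0, 0.5)]:
    h.record(t, s, d)
print(h.fuse_remote(remote_ts=0, remote_state=0.2))  # late remote result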