System and method for fusing outputs from multiple LiDAR sensors
    32.
    Invention grant (in force)

    Publication number: US09378463B2

    Publication date: 2016-06-28

    Application number: US14828383

    Filing date: 2015-08-17

    Inventor: Shuqing Zeng

    Abstract: A system and method for fusing the outputs from multiple LiDAR sensors. The method includes providing object files for objects detected by the sensors at a previous sample time, where the object files identify the position, orientation and velocity of the detected objects. The method also includes receiving a plurality of scan returns from objects detected in the field-of-view of the sensors at a current sample time and constructing a point cloud from the scan returns. The method then segments the scan points in the point cloud into predicted clusters, where each cluster initially identifies an object detected by the sensors. The method matches the predicted clusters with predicted object models generated from objects being tracked during the previous sample time. The method creates new object models, deletes dying object models and updates the object files based on the object models for the current sample time.

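The segment-and-match step this abstract describes can be illustrated with a greedy nearest-centroid association between current-frame clusters and predicted object models. This is a minimal sketch, not the patent's actual algorithm: the function names, the 2-D point representation, and the gating distance are all illustrative assumptions.

```python
import math

def centroid(points):
    """Mean (x, y) position of a cluster of scan points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def match_clusters(clusters, tracks, gate=2.0):
    """Associate point-cloud clusters with predicted track centroids.

    Each cluster is greedily matched to the closest unused track within
    `gate` metres.  Unmatched clusters become new object models;
    unmatched tracks are reported as dying, mirroring the create /
    delete / update bookkeeping in the abstract.
    """
    matches, new_objects = {}, []
    unused = set(tracks)
    for cid, pts in clusters.items():
        cx, cy = centroid(pts)
        best, best_d = None, gate
        for tid in unused:
            tx, ty = tracks[tid]
            d = math.hypot(cx - tx, cy - ty)
            if d < best_d:
                best, best_d = tid, d
        if best is None:
            new_objects.append(cid)
        else:
            matches[cid] = best
            unused.discard(best)
    return matches, new_objects, sorted(unused)
```

A production tracker would use a global assignment (e.g. Hungarian algorithm) rather than this greedy pass, but the create/match/delete life cycle is the same.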

    Method and apparatus for state of health estimation of object sensing fusion system
    33.
    Invention grant (in force)

    Publication number: US09152526B2

    Publication date: 2015-10-06

    Application number: US13679849

    Filing date: 2012-11-16

    Abstract: A method and system for estimating the state of health of an object sensing fusion system. Target data from a vision system and a radar system, which are used by an object sensing fusion system, are also stored in a context queue. The context queue maintains the vision and radar target data for a sequence of many frames covering a sliding window of time. The target data from the context queue are used to compute matching scores, which are indicative of how well vision targets correlate with radar targets, and vice versa. The matching scores are computed within individual frames of vision and radar data, and across a sequence of multiple frames. The matching scores are used to assess the state of health of the object sensing fusion system. If the fusion system state of health is below a certain threshold, one or more faulty sensors are identified.

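The context-queue and matching-score idea can be sketched as follows. Assumptions not in the abstract: targets are 2-D positions, the per-frame score is the fraction of vision targets with a radar target within a fixed gate, and the health estimate is the mean score over the sliding window.

```python
import math
from collections import deque

def frame_score(vision, radar, gate=2.0):
    """Fraction of vision targets with a radar target within `gate`
    metres in the same frame (1.0 if there are no vision targets)."""
    if not vision:
        return 1.0
    hits = sum(
        1 for vx, vy in vision
        if any(math.hypot(vx - rx, vy - ry) <= gate for rx, ry in radar)
    )
    return hits / len(vision)

class ContextQueue:
    """Sliding window of per-frame matching scores; the window mean
    serves as a crude state-of-health estimate for the fusion system."""
    def __init__(self, window=10):
        self.scores = deque(maxlen=window)

    def push(self, vision, radar):
        self.scores.append(frame_score(vision, radar))

    def health(self):
        return sum(self.scores) / len(self.scores) if self.scores else 1.0
```

A health value below a chosen threshold would then trigger the fault-isolation step the abstract mentions (deciding which sensor is at fault, e.g. by computing the score in both directions).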

    REAL-TIME CONTROL SELECTION AND CALIBRATION USING NEURAL NETWORK

    Publication number: US20240174243A1

    Publication date: 2024-05-30

    Application number: US18060049

    Filing date: 2022-11-30

    CPC classification number: B60W50/06 B60W50/045 B60W2050/0088 B60W2050/041

    Abstract: A system for real-time control selection and calibration in a vehicle using a deep-Q network (DQN) includes sensors and actuators disposed on the vehicle. A control module has a processor, memory, and input/output (I/O) ports in communication with the one or more sensors and the one or more actuators. The processor executes program code portions that cause the sensors and actuators to obtain vehicle dynamics and road surface estimation information and utilize that information to generate a vehicle dynamical context. The system decides which one of a plurality of predefined calibrations is appropriate for the vehicle dynamical context and generates a command to the actuators based on the selected calibration. The system continuously and recursively causes the program code portions to execute while the vehicle is being operated.
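The selection step (context in, calibration out) can be illustrated with a greedy argmax over Q-values. The trained deep-Q network itself is not described in the abstract, so this sketch substitutes a hand-set linear scorer for the network; the calibration names, context features, weights, and biases are all hypothetical.

```python
def q_values(context, weights, biases):
    """Linear stand-in for the trained DQN: one weight vector and one
    bias per predefined calibration, scored against the context."""
    return [
        sum(w * x for w, x in zip(w_a, context)) + b_a
        for w_a, b_a in zip(weights, biases)
    ]

def select_calibration(context, weights, biases, calibrations):
    """Greedy (argmax-Q) choice of calibration for the current
    vehicle dynamical context."""
    q = q_values(context, weights, biases)
    return calibrations[q.index(max(q))]
```

In the system described, this decision would run continuously in the control loop, with the selected calibration shaping the actuator commands each cycle.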

    Latency masking in an autonomous vehicle using edge network computing resources

    Publication number: US11516636B2

    Publication date: 2022-11-29

    Application number: US17162452

    Filing date: 2021-01-29

    Abstract: A latency masking system for use in an autonomous vehicle (AV) system includes a sensors module providing sensor data from a plurality of sensors. The sensor data includes image frames provided by a vehicle camera and vehicle motion data. A wireless transceiver transmits the sensor data to a remote server associated with a network infrastructure and receives remote state information derived from the sensor data. An on-board function module receives the sensor data from the sensors module and generates local state information. A state fusion and prediction module receives the remote state information and the local state information and updates the local state information with the remote state information. The state fusion and prediction module uses checkpoints in a state history data structure to update the local state information with the remote state information.
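The checkpoint mechanism can be sketched as rollback-and-replay: local motion updates are applied immediately and checkpointed per frame, and when a delayed remote estimate for an old frame arrives over the network, the state at that checkpoint is re-fused and the later local deltas are replayed forward. The scalar state, additive motion model, and fixed fusion weight below are simplifying assumptions, not the patent's actual state representation.

```python
class StatePredictor:
    """Checkpointed state history for masking edge-network latency."""

    def __init__(self, state=0.0):
        self.checkpoints = {0: state}   # frame -> fused state estimate
        self.deltas = {}                # frame -> local motion delta
        self.frame = 0

    def local_update(self, delta):
        """Apply a local motion update and checkpoint the new frame."""
        self.frame += 1
        self.deltas[self.frame] = delta
        self.checkpoints[self.frame] = self.checkpoints[self.frame - 1] + delta

    def remote_update(self, frame, remote_state, weight=0.5):
        """Fuse a (late-arriving) remote estimate for an earlier frame,
        then replay the stored local deltas up to the current frame."""
        fused = (1 - weight) * self.checkpoints[frame] + weight * remote_state
        self.checkpoints[frame] = fused
        for f in range(frame + 1, self.frame + 1):
            self.checkpoints[f] = self.checkpoints[f - 1] + self.deltas[f]

    def current(self):
        return self.checkpoints[self.frame]
```

The effect is that remote results improve the state estimate without the vehicle ever waiting on the round trip, which is the "masking" in the title.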
