Perception and motion prediction for autonomous devices

    Publication No.: US11548533B2

    Publication Date: 2023-01-10

    Application No.: US16826895

    Filing Date: 2020-03-23

    Applicant: UATC, LLC

    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices associated with object perception and prediction of object motion are provided. For example, a plurality of temporal instance representations can be generated. Each temporal instance representation can be associated with differences in the appearance and motion of objects over past time intervals. Past paths and candidate paths of a set of objects can be determined based on the temporal instance representations and current detections of objects. Predicted paths of the set of objects can be determined using a machine-learned model that uses the past paths and candidate paths. Predicted path data that includes information associated with the predicted paths can be generated for each object of the set of objects respectively.
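The abstract's pipeline, from past motion to a chosen predicted path, can be illustrated with a minimal sketch. This is not the patented machine-learned model; the function names, the displacement-based "temporal instance representation", and the constant-velocity scoring rule are all simplifying assumptions used only to make the data flow concrete.

```python
# Hedged sketch: build a crude "temporal instance representation" from an
# object's past positions, then score hypothetical candidate paths against
# the recent motion trend. All names/logic here are illustrative assumptions.

def temporal_instance_representation(past_positions):
    """Per-interval displacement vectors capturing motion over past intervals."""
    return [
        (x1 - x0, y1 - y0)
        for (x0, y0), (x1, y1) in zip(past_positions, past_positions[1:])
    ]

def predict_path(past_positions, candidate_paths):
    """Pick the candidate path most consistent with the recent motion trend."""
    motion = temporal_instance_representation(past_positions)
    # Mean displacement serves as a crude stand-in for a learned motion prior.
    mdx = sum(dx for dx, _ in motion) / len(motion)
    mdy = sum(dy for _, dy in motion) / len(motion)
    x, y = past_positions[-1]

    def cost(path):
        # Sum of squared deviations from constant-velocity extrapolation.
        total, px, py = 0.0, x, y
        for (cx, cy) in path:
            ex, ey = px + mdx, py + mdy
            total += (cx - ex) ** 2 + (cy - ey) ** 2
            px, py = ex, ey
        return total

    return min(candidate_paths, key=cost)
```

In the patent, a trained model replaces the hand-coded cost; the sketch only shows how past paths and candidate paths jointly determine a predicted path.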

    Three-dimensional object detection
    Invention Grant

    Publication No.: US11500099B2

    Publication Date: 2022-11-15

    Application No.: US16353457

    Filing Date: 2019-03-14

    Applicant: UATC, LLC

    Abstract: Generally, the disclosed systems and methods implement improved detection of objects in three-dimensional (3D) space. More particularly, an improved 3D object detection system can exploit continuous fusion of multiple sensors and/or integrated geographic prior map data to enhance effectiveness and robustness of object detection in applications such as autonomous driving. In some implementations, geographic prior data (e.g., geometric ground and/or semantic road features) can be exploited to enhance three-dimensional object detection for autonomous vehicle applications. In some implementations, object detection systems and methods can be improved based on dynamic utilization of multiple sensor modalities. More particularly, an improved 3D object detection system can exploit both LIDAR systems and cameras to perform very accurate localization of objects within three-dimensional space relative to an autonomous vehicle. For example, multi-sensor fusion can be implemented via continuous convolutions to fuse image data samples and LIDAR feature maps at different levels of resolution.
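The "continuous fusion" idea above, combining LIDAR feature maps with image features at continuous projected coordinates, can be sketched at toy scale. The projection is assumed to be precomputed, the feature maps are scalar grids, and simple averaging replaces the learned fusion layers; none of this is the patent's actual architecture.

```python
# Hedged sketch of continuous LIDAR/camera fusion: for a bird's-eye-view
# (BEV) cell, sample the camera feature map at the (continuous) pixel
# coordinates of the cell's projected LIDAR points and blend the result
# into the cell's LIDAR feature. Names and the averaging rule are assumptions.

def bilinear_sample(image_feat, u, v):
    """Bilinearly sample a 2D scalar feature map at continuous coords (u, v)."""
    h, w = len(image_feat), len(image_feat[0])
    u = min(max(u, 0.0), w - 1.0)
    v = min(max(v, 0.0), h - 1.0)
    u0, v0 = int(u), int(v)
    u1, v1 = min(u0 + 1, w - 1), min(v0 + 1, h - 1)
    fu, fv = u - u0, v - v0
    top = image_feat[v0][u0] * (1 - fu) + image_feat[v0][u1] * fu
    bot = image_feat[v1][u0] * (1 - fu) + image_feat[v1][u1] * fu
    return top * (1 - fv) + bot * fv

def fuse_bev_cell(lidar_feat, image_feat, projected_uv):
    """Fuse one BEV cell's LIDAR feature with image features sampled at the
    camera projections of that cell's LIDAR points (simple average fusion)."""
    if not projected_uv:
        return lidar_feat  # no camera evidence for this cell
    img = sum(bilinear_sample(image_feat, u, v) for u, v in projected_uv)
    return lidar_feat + img / len(projected_uv)
```

Because sampling is bilinear, the image features vary continuously with the projected coordinates, which is what lets this kind of fusion operate at different levels of resolution.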

    Automatic Annotation of Object Trajectories in Multiple Dimensions

    Publication No.: US20220153310A1

    Publication Date: 2022-05-19

    Application No.: US17528559

    Filing Date: 2021-11-17

    Applicant: UATC, LLC

    Abstract: Techniques for improving the performance of an autonomous vehicle (AV) by automatically annotating objects surrounding the AV are described herein. A system can obtain sensor data from a sensor coupled to the AV and generate an initial object trajectory for an object using the sensor data. Additionally, the system can determine a fixed value for the object size of the object based on the initial object trajectory. Moreover, the system can generate an updated initial object trajectory, wherein the object size corresponds to the fixed value. Furthermore, the system can determine, based on the sensor data and the updated initial object trajectory, a refined object trajectory. Subsequently, the system can generate a multi-dimensional label for the object based on the refined object trajectory. A motion plan for controlling the AV can be generated based on the multi-dimensional label.
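The fix-the-size-then-refine loop in the abstract can be sketched as follows. The median as the fixed-size estimator and the moving-average smoothing of box centers are stand-ins chosen for brevity; the patent's actual refinement step is not specified here and these helper names are hypothetical.

```python
# Hedged sketch of auto-annotation with a fixed object size: a rigid object's
# true size is constant over a track, so one fixed value (here, the median of
# per-frame estimates) is assigned to every frame before refining the
# trajectory. The smoothing step is an illustrative placeholder.
import statistics

def fix_object_size(initial_trajectory):
    """Pick one fixed size for the whole track from noisy per-frame sizes."""
    lengths = [box["length"] for box in initial_trajectory]
    widths = [box["width"] for box in initial_trajectory]
    return statistics.median(lengths), statistics.median(widths)

def refine_trajectory(initial_trajectory, window=3):
    """Assign the fixed size to every frame, then smooth box centers with a
    small moving average as a stand-in for the refinement step."""
    length, width = fix_object_size(initial_trajectory)
    refined, n = [], len(initial_trajectory)
    for i in range(n):
        lo, hi = max(0, i - window // 2), min(n, i + window // 2 + 1)
        cx = sum(b["cx"] for b in initial_trajectory[lo:hi]) / (hi - lo)
        cy = sum(b["cy"] for b in initial_trajectory[lo:hi]) / (hi - lo)
        refined.append({"cx": cx, "cy": cy, "length": length, "width": width})
    return refined
```

The refined boxes would then be emitted as multi-dimensional labels per the abstract; that labeling step is omitted here.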

    Sparse Convolutional Neural Networks

    Publication No.: US20210325882A1

    Publication Date: 2021-10-21

    Application No.: US17363986

    Filing Date: 2021-06-30

    Applicant: UATC, LLC

    Abstract: The present disclosure provides systems and methods that apply neural networks such as, for example, convolutional neural networks, to sparse imagery in an improved manner. For example, the systems and methods of the present disclosure can be included in or otherwise leveraged by an autonomous vehicle. In one example, a computing system can extract one or more relevant portions from imagery, where the relevant portions are less than an entirety of the imagery. The computing system can provide the relevant portions of the imagery to a machine-learned convolutional neural network and receive at least one prediction from the machine-learned convolutional neural network based at least in part on the one or more relevant portions of the imagery. Thus, the computing system can skip performing convolutions over regions of the imagery where the imagery is sparse and/or regions of the imagery that are not relevant to the prediction being sought.
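The core trick described above, convolving only where the imagery is relevant and skipping sparse regions, can be shown in a few lines. This naive scalar convolution is only a demonstration of the skipping behavior, not the disclosed network; the function name and the explicit `relevant` set are assumptions.

```python
# Hedged sketch of sparse convolution: evaluate the kernel only at output
# positions marked relevant, leaving all other outputs zero. A dense conv
# would visit every position; here the sparse/irrelevant regions cost nothing.

def sparse_conv2d(image, kernel, relevant):
    """Valid-mode 2D convolution computed only at 'relevant' (row, col)
    output positions; everywhere else the output stays zero."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = [[0.0] * (w - kw + 1) for _ in range(h - kh + 1)]
    for (i, j) in relevant:
        acc = 0.0
        for di in range(kh):
            for dj in range(kw):
                acc += kernel[di][dj] * image[i + di][j + dj]
        out[i][j] = acc
    return out
```

In practice the relevant set would come from extracting the non-sparse portions of the imagery, and the saving grows with how small that set is relative to the full output.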

    Multi-Task Multi-Sensor Fusion for Three-Dimensional Object Detection

    Publication No.: US20200160559A1

    Publication Date: 2020-05-21

    Application No.: US16654487

    Filing Date: 2019-10-16

    Applicant: UATC, LLC

    Abstract: Provided are systems and methods that perform multi-task and/or multi-sensor fusion for three-dimensional object detection in furtherance of, for example, autonomous vehicle perception and control. In particular, according to one aspect of the present disclosure, example systems and methods described herein exploit simultaneous training of a machine-learned model ensemble relative to multiple related tasks to learn to perform more accurate multi-sensor 3D object detection. For example, the present disclosure provides an end-to-end learnable architecture with multiple machine-learned models that interoperate to reason about 2D and/or 3D object detection as well as one or more auxiliary tasks. According to another aspect of the present disclosure, example systems and methods described herein can perform multi-sensor fusion (e.g., fusing features derived from image data, light detection and ranging (LIDAR) data, and/or other sensor modalities) at both the point-wise and region of interest (ROI)-wise level, resulting in fully fused feature representations.
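The two fusion levels named in the abstract, point-wise and ROI-wise, can be sketched with scalar features. The helper names, the max-pooling choice, and the additive fusion are illustrative assumptions; the patent's learnable fusion layers are not reproduced here.

```python
# Hedged sketch of two-level multi-sensor fusion:
#  1) point-wise: attach an image feature to each LIDAR point, and
#  2) ROI-wise: pool the fused point features inside a region of interest
#     and combine with an image ROI feature, yielding a fully fused feature.
# Scalar features and additive fusion keep the example self-contained.

def pointwise_fuse(lidar_points, image_feature_at):
    """Point-wise fusion: add the image feature at each point's camera
    projection ('uv') to that point's LIDAR feature."""
    return [
        {"xyz": p["xyz"], "feat": p["feat"] + image_feature_at(p["uv"])}
        for p in lidar_points
    ]

def roi_fuse(fused_points, roi, image_roi_feat):
    """ROI-wise fusion: max-pool fused point features inside an axis-aligned
    (x0, y0, x1, y1) region, then combine with the image ROI feature."""
    x0, y0, x1, y1 = roi
    inside = [
        p["feat"] for p in fused_points
        if x0 <= p["xyz"][0] <= x1 and y0 <= p["xyz"][1] <= y1
    ]
    pooled = max(inside) if inside else 0.0
    return pooled + image_roi_feat
```

Applying both levels in sequence mirrors the abstract's claim that fusion at the point level and the ROI level together produce fully fused feature representations.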
