Systems and methods for controlling the operation of an autonomous vehicle using multiple traffic light detectors

    Publication No.: US11753012B2

    Publication Date: 2023-09-12

    Application No.: US17039156

    Application Date: 2020-09-30

    CPC classification number: B60W30/18159 G06V20/584 H04W4/44 B60W2556/55

    Abstract: Systems and methods for controlling the operation of an autonomous vehicle are disclosed herein. One embodiment performs traffic light detection at an intersection using a sensor-based traffic light detector to produce a sensor-based detection output, the sensor-based detection output having an associated first confidence level; performs traffic light detection at the intersection using a vehicle-to-infrastructure-based (V2I-based) traffic light detector to produce a V2I-based detection output, the V2I-based detection output having an associated second confidence level; performs one of (1) selecting as a final traffic-light-detection output whichever of the sensor-based detection output and the V2I-based detection output has a higher associated confidence level and (2) generating the final traffic-light-detection output by fusing the sensor-based detection output and the V2I-based detection output using a first learning-based classifier; and controls the operation of the autonomous vehicle based, at least in part, on the final traffic-light-detection output.
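The selection branch described in the abstract above can be sketched in a few lines of Python. This is a hypothetical interface, not the patented implementation: `Detection` and `final_detection` are illustrative names, and the optional `fuse` callable stands in for the learning-based classifier the abstract mentions.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Detection:
    state: str         # e.g., "red", "yellow", "green"
    confidence: float  # associated confidence level in [0, 1]


def final_detection(
    sensor_det: Detection,
    v2i_det: Detection,
    fuse: Optional[Callable[[Detection, Detection], Detection]] = None,
) -> Detection:
    """Return the final traffic-light-detection output.

    If a learned fusion classifier is supplied, fuse the two outputs;
    otherwise select whichever output has the higher confidence.
    """
    if fuse is not None:
        return fuse(sensor_det, v2i_det)
    if sensor_det.confidence >= v2i_det.confidence:
        return sensor_det
    return v2i_det
```

The vehicle controller would then act on `final_detection(...).state`, e.g., braking for a high-confidence red.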

    SYSTEMS AND METHODS FOR GENERATING UNIFORM FRAMES HAVING SENSOR AND AGENT DATA

    Publication No.: US20230351244A1

    Publication Date: 2023-11-02

    Application No.: US17733476

    Application Date: 2022-04-29

    CPC classification number: G06N20/00 G07C5/0841

    Abstract: Systems, methods, and other embodiments described herein relate to a manner of generating and relating frames that improves the retrieval of sensor and agent data for processing by different vehicle tasks. In one embodiment, a method includes acquiring sensor data by a vehicle. The method also includes generating a frame including the sensor data and agent perceptions determined from the sensor data at a timestamp, the agent perceptions including multi-dimensional data that describes features for surrounding vehicles of the vehicle. The method also includes relating the frame to other frames of the vehicle by track, the other frames having processed data from various times and the track having a predetermined window of scene information associated with an agent. The method also includes training a learning model using the agent perceptions accessed from the track.
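The frame-and-track structure in the abstract above can be sketched as two small data classes. This is a minimal sketch under assumed names (`Frame`, `Track`, `relate`), not the claimed implementation: a track keeps only frames within a predetermined time window.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Frame:
    timestamp: float
    sensor_data: dict              # raw sensor readings at this timestamp
    agent_perceptions: List[dict]  # per-agent multi-dimensional features


@dataclass
class Track:
    window: float                  # predetermined window of scene time (seconds)
    frames: List[Frame] = field(default_factory=list)

    def relate(self, frame: Frame) -> None:
        """Relate a new frame to this track, dropping frames
        that fall outside the window behind the newest frame."""
        self.frames.append(frame)
        newest = frame.timestamp
        self.frames = [f for f in self.frames
                       if newest - f.timestamp <= self.window]
```

A downstream learning model would then read `agent_perceptions` from the windowed `track.frames` as training input.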

    SYSTEMS AND METHODS FOR DETECTING TRAFFIC LIGHTS OF DRIVING LANES USING A CAMERA AND MULTIPLE MODELS

    Publication No.: US20230343109A1

    Publication Date: 2023-10-26

    Application No.: US17726939

    Application Date: 2022-04-22

    CPC classification number: G06V20/584 G06V20/588 G06V10/56

    Abstract: Systems, methods, and other embodiments described herein relate to improving the detection of traffic lights associated with a driving lane using a camera instead of map data. In one embodiment, a method includes estimating, from an image using a first model, depth and orientation information of traffic lights relative to a driving lane of a vehicle. The method also includes computing, using a second model, relevancy scores for the traffic lights according to geometric inferences between the depth and the orientation information. The method also includes assigning, using the second model, a primary relevancy score for a light of the traffic lights associated with the driving lane according to the depth and the orientation information. The method also includes executing a control task by the vehicle according to the primary relevancy score and a state confidence, computed by the first model, for the light.
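The geometric relevancy scoring described above can be illustrated with a toy formula: nearer lights that face the driving lane (orientation near 0°) score higher. The formula and function names here are assumptions for illustration, not the second model from the abstract.

```python
import math


def relevancy_score(depth: float, orientation_deg: float,
                    max_depth: float = 80.0) -> float:
    """Toy geometric relevancy in [0, 1]: decays linearly with depth
    and with how far the light faces away from the lane."""
    depth_term = max(0.0, 1.0 - depth / max_depth)
    facing_term = max(0.0, math.cos(math.radians(orientation_deg)))
    return depth_term * facing_term


def primary_light(lights):
    """lights: list of (light_id, depth, orientation_deg) tuples.
    Return the id of the light with the highest relevancy score,
    i.e., the candidate for the primary relevancy score."""
    return max(lights, key=lambda l: relevancy_score(l[1], l[2]))[0]
```

The control task would then combine this primary score with the first model's state confidence for the selected light.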

    Shared vision system backbone
    Invention Grant

    Publication No.: US12148223B2

    Publication Date: 2024-11-19

    Application No.: US17732421

    Application Date: 2022-04-28

    Abstract: A method for generating a dense light detection and ranging (LiDAR) representation by a vision system includes receiving, at a sparse depth network, one or more sparse representations of an environment. The method also includes generating a depth estimate of the environment depicted in an image captured by an image capturing sensor. The method further includes generating, via the sparse depth network, one or more sparse depth estimates based on receiving the one or more sparse representations. The method also includes fusing the depth estimate and the one or more sparse depth estimates to generate a dense depth estimate. The method further includes generating the dense LiDAR representation based on the dense depth estimate and controlling an action of a vehicle based on identifying a three-dimensional object in the dense LiDAR representation.
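The fusion step above can be illustrated with a deliberately simple rule: trust the sparse (e.g., LiDAR-derived) depth wherever it is valid, and fall back to the image-based estimate elsewhere. This is a stand-in for the learned fusion in the abstract, with hypothetical names.

```python
import numpy as np


def fuse_depth(mono_depth: np.ndarray, sparse_depth: np.ndarray) -> np.ndarray:
    """Fuse a dense monocular depth estimate with a sparse depth map.

    Pixels where the sparse map is valid (> 0) take the sparse value;
    all other pixels keep the monocular estimate, yielding a dense
    depth map from which a dense LiDAR representation could be built.
    """
    return np.where(sparse_depth > 0, sparse_depth, mono_depth)
```

A real system would replace this rule with the learned fusion network, but the input/output contract is the same: two depth maps in, one dense depth map out.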
