LANE SELECTION
    Invention Application

    Publication (Announcement) No.: US20220185326A1

    Publication (Announcement) Date: 2022-06-16

    Application No.: US17121081

    Filing Date: 2020-12-14

    Abstract: According to one aspect, systems and techniques for lane selection may include receiving a current state of an ego vehicle and a traffic participant vehicle, and a goal position, projecting the ego vehicle and the traffic participant vehicle onto a graph network, where nodes of the graph network may be indicative of discretized space within an operating environment, determining a current node for the ego vehicle within the graph network, and determining a subsequent node for the ego vehicle based on identifying adjacent nodes which may be adjacent to the current node, calculating travel times associated with each of the adjacent nodes, calculating step costs associated with each of the adjacent nodes, calculating heuristic costs associated with each of the adjacent nodes, and predicting a position of the traffic participant vehicle.
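    The abstract enumerates the quantities weighed when choosing the subsequent node: a travel time, a step cost, and a heuristic cost per adjacent node. The sketch below shows one plausible A*-style reading of that selection step; the grid layout, the cost forms, and the `lane_change_penalty` weight are illustrative assumptions, not details taken from the patent.

```python
import math

# Minimal sketch of the node-expansion step described in the abstract.
# The graph, node coordinates, speed, and cost weights are assumptions.

def euclidean(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def select_subsequent_node(graph, positions, current, goal, ego_speed, lane_change_penalty=1.0):
    """Pick the adjacent node minimizing travel time + step cost + heuristic cost."""
    best_node, best_total = None, float("inf")
    for neighbor in graph[current]:                       # adjacent nodes
        travel_time = euclidean(positions[current], positions[neighbor]) / ego_speed
        # Assumed step cost: penalize lateral (lane-change) moves.
        step_cost = lane_change_penalty if positions[neighbor][1] != positions[current][1] else 0.0
        # Heuristic cost: straight-line time-to-goal from the neighbor.
        heuristic = euclidean(positions[neighbor], positions[goal]) / ego_speed
        total = travel_time + step_cost + heuristic
        if total < best_total:
            best_node, best_total = neighbor, total
    return best_node

# Example: a 2-lane, 3-column grid of discretized nodes keyed by (lane, column).
positions = {(r, c): (10.0 * c, 3.5 * r) for r in range(2) for c in range(3)}
graph = {n: [m for m in positions if abs(m[0] - n[0]) <= 1 and m[1] == n[1] + 1] for n in positions}
print(select_subsequent_node(graph, positions, current=(0, 0), goal=(1, 2), ego_speed=10.0))
```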

    MODEL-FREE REINFORCEMENT LEARNING

    Publication (Announcement) No.: US20210086798A1

    Publication (Announcement) Date: 2021-03-25

    Application No.: US16841602

    Filing Date: 2020-04-06

    Abstract: A system for generating a model-free reinforcement learning policy may include a processor, a memory, and a simulator. The simulator may be implemented via the processor and the memory. The simulator may generate a simulated traffic scenario including two or more lanes, an ego-vehicle, a dead end position, and one or more traffic participants. The dead end position may be a position by which a lane change for the ego-vehicle may be desired. The simulated traffic scenario may be associated with an occupancy map, a relative velocity map, a relative displacement map, and a relative heading map at each time step within the simulated traffic scenario. The simulator may model the ego-vehicle and one or more of the traffic participants using a kinematic bicycle model. The simulator may build a policy based on the simulated traffic scenario using an actor-critic network. The policy may be implemented on an autonomous vehicle.
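    The abstract states that the simulator models the ego-vehicle and traffic participants with a kinematic bicycle model. Below is a minimal sketch of the standard kinematic bicycle update; the wheelbase split (`lf`, `lr`), time step, and state layout are assumptions for illustration, not values from the patent.

```python
import math
from dataclasses import dataclass

# Sketch of a kinematic bicycle model as named in the abstract.
# Parameters below are illustrative assumptions.

@dataclass
class VehicleState:
    x: float      # position [m]
    y: float      # position [m]
    psi: float    # heading [rad]
    v: float      # speed [m/s]

def bicycle_step(state, accel, steer, lf=1.2, lr=1.6, dt=0.1):
    """Advance one simulation time step with the standard kinematic bicycle equations."""
    beta = math.atan(lr / (lf + lr) * math.tan(steer))    # slip angle at the center of mass
    x = state.x + state.v * math.cos(state.psi + beta) * dt
    y = state.y + state.v * math.sin(state.psi + beta) * dt
    psi = state.psi + (state.v / lr) * math.sin(beta) * dt
    v = state.v + accel * dt
    return VehicleState(x, y, psi, v)

# One step of a vehicle driving at 10 m/s with a small left steering input.
print(bicycle_step(VehicleState(0.0, 0.0, 0.0, 10.0), accel=0.0, steer=0.05))
```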

    Lane selection
    Invention Grant

    Publication (Announcement) No.: US11780470B2

    Publication (Announcement) Date: 2023-10-10

    Application No.: US17229126

    Filing Date: 2021-04-13

    IPC Classification: B60W60/00 G08G1/16 G08G1/01

    Abstract: According to one aspect, systems and techniques for lane selection may include receiving a current state of an ego vehicle and a traffic participant vehicle, and a goal position, projecting the ego vehicle and the traffic participant vehicle onto a graph network, where nodes of the graph network may be indicative of discretized space within an operating environment, determining a current node for the ego vehicle within the graph network, and determining a subsequent node for the ego vehicle based on identifying adjacent nodes which may be adjacent to the current node, calculating travel times associated with each of the adjacent nodes, calculating step costs associated with each of the adjacent nodes, calculating heuristic costs associated with each of the adjacent nodes, and predicting a position of the traffic participant vehicle.
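    This granted patent shares its abstract with the earlier application. As a complement to the cost-selection sketch above, the snippet below illustrates one way the projection onto the discretized graph network and the traffic-participant position prediction could look. Node spacing, lane width, and the constant-velocity prediction are assumptions, not details from the patent.

```python
# Sketch of projecting vehicles onto a discretized (lane, column) graph and
# predicting a traffic participant's future node. All constants are assumed.

NODE_SPACING = 10.0   # longitudinal spacing of nodes [m] (assumed)
LANE_WIDTH = 3.5      # lateral spacing of nodes, one per lane [m] (assumed)

def project_to_node(x, y):
    """Map a continuous (x, y) position to a (lane, column) node of the graph."""
    lane = round(y / LANE_WIDTH)
    column = round(x / NODE_SPACING)
    return (lane, column)

def predict_participant_node(x, y, vx, vy, horizon):
    """Predict which node a traffic participant occupies `horizon` seconds ahead,
    assuming constant velocity."""
    return project_to_node(x + vx * horizon, y + vy * horizon)

ego_node = project_to_node(4.8, 0.2)                                            # -> (0, 0)
participant_node = predict_participant_node(12.0, 3.4, 8.0, 0.0, horizon=1.0)   # -> (1, 2)
print(ego_node, participant_node)
```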

    Model-free reinforcement learning

    Publication (Announcement) No.: US11465650B2

    Publication (Announcement) Date: 2022-10-11

    Application No.: US16841602

    Filing Date: 2020-04-06

    Abstract: A system for generating a model-free reinforcement learning policy may include a processor, a memory, and a simulator. The simulator may be implemented via the processor and the memory. The simulator may generate a simulated traffic scenario including two or more lanes, an ego-vehicle, a dead end position, and one or more traffic participants. The dead end position may be a position by which a lane change for the ego-vehicle may be desired. The simulated traffic scenario may be associated with an occupancy map, a relative velocity map, a relative displacement map, and a relative heading map at each time step within the simulated traffic scenario. The simulator may model the ego-vehicle and one or more of the traffic participants using a kinematic bicycle model. The simulator may build a policy based on the simulated traffic scenario using an actor-critic network. The policy may be implemented on an autonomous vehicle.
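    This grant shares its abstract with the earlier publication. Complementing the bicycle-model sketch above, the snippet below builds one plausible per-time-step state from the maps the abstract names (occupancy, relative velocity, relative displacement, relative heading). The grid dimensions, cell assignment, and channel layout are assumptions for illustration only.

```python
import numpy as np

# Sketch of the per-time-step state described in the abstract: an ego-centered
# grid with four channels. Grid size and cell length are assumed.

N_LANES, N_CELLS = 3, 20   # assumed grid: lanes x longitudinal cells

def build_state_maps(ego, participants, cell_length=5.0):
    """Return a (4, N_LANES, N_CELLS) array: occupancy, rel. velocity, rel. displacement, rel. heading."""
    maps = np.zeros((4, N_LANES, N_CELLS))
    for p in participants:
        lane = int(p["lane"])
        cell = int((p["x"] - ego["x"]) // cell_length) + N_CELLS // 2   # ego-centered cell index
        if 0 <= lane < N_LANES and 0 <= cell < N_CELLS:
            maps[0, lane, cell] = 1.0                            # occupancy
            maps[1, lane, cell] = p["v"] - ego["v"]              # relative velocity
            maps[2, lane, cell] = p["x"] - ego["x"]              # relative displacement
            maps[3, lane, cell] = p["heading"] - ego["heading"]  # relative heading
    return maps

ego = {"x": 100.0, "v": 12.0, "lane": 1, "heading": 0.0}
participants = [{"x": 115.0, "v": 9.0, "lane": 2, "heading": 0.02}]
print(build_state_maps(ego, participants).shape)   # (4, 3, 20)
```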

    SYSTEM AND METHOD FOR COMPLETING TRAJECTORY PREDICTION FROM AGENT-AUGMENTED ENVIRONMENTS

    Publication (Announcement) No.: US20220153307A1

    Publication (Announcement) Date: 2022-05-19

    Application No.: US17161136

    Filing Date: 2021-01-28

    IPC Classification: B60W60/00 G06K9/00

    Abstract: A system and method for completing trajectory prediction from agent-augmented environments that include receiving image data associated with surrounding environment of an ego agent and processing an agent-augmented static representation of the surrounding environment of the ego agent based on the image data. The system and method also include processing a set of spatial graphs that correspond to an observation time horizon based on the agent-augmented static representation. The system and method further include predicting future trajectories of agents that are located within the surrounding environment of the ego agent based on the spatial graphs.
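    The abstract describes processing a set of spatial graphs that correspond to an observation time horizon. The sketch below constructs one such set under assumed conventions (one graph per observed frame, agents as nodes, edges between agents within a fixed interaction radius); the radius and data layout are illustrative, not taken from the patent.

```python
import math

# Sketch of building spatial graphs over an observation horizon.
# One graph per observed frame; edge rule and radius are assumed.

INTERACTION_RADIUS = 15.0   # meters (assumed)

def build_spatial_graphs(observations):
    """observations: list over time steps of {agent_id: (x, y)} dicts."""
    graphs = []
    for frame in observations:
        ids = list(frame)
        edges = [(a, b) for i, a in enumerate(ids) for b in ids[i + 1:]
                 if math.dist(frame[a], frame[b]) <= INTERACTION_RADIUS]
        graphs.append({"nodes": ids, "edges": edges})
    return graphs

# Two observed frames with three agents; the resulting graphs would feed the predictor.
obs = [{"ego": (0.0, 0.0), "car_1": (8.0, 3.5), "ped_1": (30.0, 5.0)},
       {"ego": (5.0, 0.0), "car_1": (12.0, 3.5), "ped_1": (30.5, 5.0)}]
for g in build_spatial_graphs(obs):
    print(g["edges"])
```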