-
Publication Number: US20200334476A1
Publication Date: 2020-10-22
Application Number: US16916488
Filing Date: 2020-06-30
Applicant: TUSIMPLE, INC.
Inventor: Panqu WANG , Tian Li
Abstract: A system and method for taillight signal recognition using a convolutional neural network is disclosed. An example embodiment includes: receiving a plurality of image frames from one or more image-generating devices of an autonomous vehicle; using a single-frame taillight illumination status annotation dataset and a single-frame taillight mask dataset to recognize a taillight illumination status of a proximate vehicle identified in an image frame of the plurality of image frames, the single-frame taillight illumination status annotation dataset including one or more taillight illumination status conditions of a right or left vehicle taillight signal, the single-frame taillight mask dataset including annotations to isolate a taillight region of a vehicle; and using a multi-frame taillight illumination status dataset to recognize a taillight illumination status of the proximate vehicle in multiple image frames of the plurality of image frames, the multiple image frames being in temporal succession.
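The abstract describes a two-stage recognition scheme: a per-frame illumination status constrained by a taillight mask, followed by aggregation over temporally successive frames. Below is a minimal illustrative sketch of that idea, not the patented network: the mask, brightness threshold, and majority-vote window are invented placeholders standing in for the CNN-based single-frame and multi-frame stages.
```python
import numpy as np

def frame_taillight_status(frame: np.ndarray, mask: np.ndarray,
                           on_threshold: float = 0.6) -> bool:
    """Return True if the masked taillight region looks illuminated.

    frame: H x W array of red-channel intensities in [0, 1].
    mask:  H x W boolean array isolating the taillight region (the role of the
           single-frame taillight mask in the abstract).
    """
    region = frame[mask]
    if region.size == 0:
        return False
    # Mean brightness inside the taillight region as a crude "on" score.
    return float(region.mean()) > on_threshold

def sequence_taillight_status(frames, masks, window: int = 5) -> bool:
    """Majority vote of per-frame statuses over the last `window` frames,
    standing in for the multi-frame (temporal) recognition stage."""
    votes = [frame_taillight_status(f, m) for f, m in zip(frames, masks)]
    recent = votes[-window:]
    return sum(recent) > len(recent) / 2

# Toy usage: a taillight that blinks on in even-numbered frames over 6 frames.
rng = np.random.default_rng(0)
masks = [np.zeros((8, 8), dtype=bool) for _ in range(6)]
frames = []
for i, m in enumerate(masks):
    m[2:5, 2:5] = True                        # taillight region
    f = rng.uniform(0.0, 0.2, size=(8, 8))    # dark background
    f[2:5, 2:5] = 0.9 if i % 2 == 0 else 0.1  # blink on even frames
    frames.append(f)
print(sequence_taillight_status(frames, masks))
```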
-
Publication Number: US20250086802A1
Publication Date: 2025-03-13
Application Number: US18434501
Filing Date: 2024-02-06
Applicant: TuSimple, Inc.
Inventor: Dongqiangzi YE , Zixiang ZHOU , Weijia CHEN , Yufei XIE , Yu WANG , Panqu WANG , Lingting GE
Abstract: A method of processing point cloud information includes: converting points in a point cloud obtained from a lidar sensor into a voxel grid; generating, from the voxel grid, sparse voxel features by applying a multi-layer perceptron and one or more max pooling layers that reduce the dimension of the input data; and applying a cascade of an encoder that performs an N-stage sparse-to-dense feature operation, a global context pooling (GCP) module, and an M-stage decoder that performs a dense-to-sparse feature generation operation. The GCP module, which comprises a multi-scale feature extractor, bridges an output of the last stage of the N stages with an input of the first stage of the M stages, where N and M are positive integers. The method further includes performing one or more perception operations on an output of the M-stage decoder and/or an output of the GCP module.
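A minimal sketch of the front end of this pipeline, under assumed data layouts rather than TuSimple's implementation: voxelize a lidar point cloud and produce one "sparse voxel feature" per occupied voxel by max-pooling a simple per-point feature, mimicking the MLP + max-pooling step. The encoder/GCP/decoder cascade is not reproduced.
```python
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float = 0.5):
    """points: N x 3 array of (x, y, z). Returns dict voxel_index -> point row indices."""
    indices = np.floor(points / voxel_size).astype(np.int64)
    voxels = {}
    for row, idx in enumerate(map(tuple, indices)):
        voxels.setdefault(idx, []).append(row)
    return voxels

def sparse_voxel_features(points: np.ndarray, voxel_size: float = 0.5):
    """One feature vector per occupied voxel (a sparse representation).

    Stand-in per-point feature: (x, y, z, range), a placeholder for the MLP.
    Max pooling over the points in each voxel reduces the dimension of the
    input data to a single vector per voxel, as in the abstract.
    """
    range_feat = np.linalg.norm(points, axis=1, keepdims=True)
    point_feats = np.hstack([points, range_feat])        # N x 4
    voxels = voxelize(points, voxel_size)
    coords = np.array(list(voxels.keys()))               # V x 3 voxel indices
    feats = np.stack([point_feats[rows].max(axis=0)      # V x 4 pooled features
                      for rows in voxels.values()])
    return coords, feats

pts = np.random.default_rng(1).uniform(-5, 5, size=(1000, 3))
coords, feats = sparse_voxel_features(pts)
print(coords.shape, feats.shape)
```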
-
Publication Number: US20250046075A1
Publication Date: 2025-02-06
Application Number: US18518156
Filing Date: 2023-11-22
Applicant: TuSimple, Inc.
Inventor: Long SHA , Junliang ZHANG , Rundong GE , Xiangchen ZHAO , Fangjun ZHANG , Yizhe ZHAO , Panqu WANG
IPC: G06V10/98 , B60W50/02 , B60W50/029 , B60W60/00 , G06V10/26 , G06V10/28 , G06V10/48 , G06V10/75 , G06V20/56
Abstract: A unified framework for detecting perception anomalies in autonomous driving systems is described. The perception anomaly detection framework takes an input image from a camera in or on a vehicle and identifies anomalies as belonging to one of three categories. Lens anomalies are associated with poor sensor conditions, such as water, dirt, or overexposure. Environment anomalies are associated with unfamiliar changes to an environment. Finally, object anomalies are associated with unknown objects. After perception anomalies are detected, the results are sent downstream to cause a behavior change of the vehicle.
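As a hypothetical sketch of the three-way taxonomy and the downstream hand-off described in the abstract, the snippet below routes detector scores to one of the lens/environment/object categories and maps the result to a coarse behavior change. The scores, thresholds, and behavior names are invented placeholders, not the framework's actual detectors.
```python
from dataclasses import dataclass
from enum import Enum, auto

class AnomalyType(Enum):
    LENS = auto()         # water, dirt, overexposure on the sensor
    ENVIRONMENT = auto()  # unfamiliar changes to the environment
    OBJECT = auto()       # unknown objects

@dataclass
class AnomalyReport:
    kind: AnomalyType
    score: float          # detector confidence in [0, 1]

def classify_anomaly(lens_score: float, env_score: float, obj_score: float,
                     threshold: float = 0.5):
    """Return the highest-scoring anomaly category above threshold, else None."""
    scores = {AnomalyType.LENS: lens_score,
              AnomalyType.ENVIRONMENT: env_score,
              AnomalyType.OBJECT: obj_score}
    kind = max(scores, key=scores.get)
    return AnomalyReport(kind, scores[kind]) if scores[kind] >= threshold else None

def downstream_behavior(report):
    """Map a detected anomaly to a coarse behavior change of the vehicle."""
    if report is None:
        return "continue"
    if report.kind is AnomalyType.LENS:
        return "flag_sensor_degraded"
    if report.kind is AnomalyType.ENVIRONMENT:
        return "reduce_speed"
    return "yield_to_unknown_object"

print(downstream_behavior(classify_anomaly(0.2, 0.1, 0.8)))
```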
-
Publication Number: US20240265710A1
Publication Date: 2024-08-08
Application Number: US18475647
Filing Date: 2023-09-27
Applicant: TuSimple, Inc.
Inventor: Zhe CHEN , Yizhe ZHAO , Lingting GE , Panqu WANG
CPC classification number: G06V20/58 , B60W60/001 , G06T7/248 , G06T7/74 , G06V10/776 , G06V10/87 , B60W2420/403 , B60W2552/15 , B60W2556/40 , B60W2720/10 , G06T2207/30252
Abstract: The present disclosure provides methods and systems for operating an autonomous vehicle. In some embodiments, the system may obtain, by a camera associated with an autonomous vehicle, an image of an environment of the autonomous vehicle, the environment including a road on which the autonomous vehicle is operating and an occlusion on the road. The system may identify the occlusion in the image based on map information of the environment and at least one camera parameter of the camera used to obtain the image. The system may identify an object represented in the image and determine a confidence score relating to the object. The confidence score may indicate a likelihood that the representation of the object in the image is impacted by the occlusion. The system may determine an operation algorithm based on the confidence score and cause the autonomous vehicle to operate based on the operation algorithm.
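An illustrative sketch of the confidence-score idea, not the claimed algorithm: approximate the score as one minus the fraction of a detected object's bounding box covered by a projected occlusion region, then pick an operating mode from that score. The box format and thresholds are assumptions.
```python
def overlap_fraction(obj_box, occlusion_box):
    """Boxes are (x_min, y_min, x_max, y_max) in image pixels.
    Returns the fraction of the object box covered by the occlusion box."""
    ox1, oy1, ox2, oy2 = obj_box
    cx1, cy1, cx2, cy2 = occlusion_box
    ix1, iy1 = max(ox1, cx1), max(oy1, cy1)
    ix2, iy2 = min(ox2, cx2), min(oy2, cy2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    obj_area = max(1e-9, (ox2 - ox1) * (oy2 - oy1))
    return inter / obj_area

def object_confidence(obj_box, occlusion_box):
    """Higher when the object's image representation is less impacted by the occlusion."""
    return 1.0 - overlap_fraction(obj_box, occlusion_box)

def choose_operation(confidence, trust_threshold=0.7):
    """Toy stand-in for 'determine an operation algorithm based on the confidence score'."""
    return "track_normally" if confidence >= trust_threshold else "treat_as_uncertain"

conf = object_confidence(obj_box=(100, 120, 220, 260),
                         occlusion_box=(180, 100, 400, 300))
print(conf, choose_operation(conf))
```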
-
Publication Number: US20230184931A1
Publication Date: 2023-06-15
Application Number: US17987200
Filing Date: 2022-11-15
Applicant: TuSimple, Inc.
Inventor: Panqu WANG , Lingting GE
IPC: G01S13/931 , G01S13/89 , G01S17/931 , G01S17/89 , G01S13/86
CPC classification number: G01S13/931 , G01S13/89 , G01S17/931 , G01S17/89 , G01S13/865
Abstract: Vehicles can include systems and apparatus for performing signal processing on sensor data from radar(s) and LiDAR(s) located on the vehicles. A method includes obtaining and filtering radar point cloud data of an area in an environment in which a vehicle is operating on a road to obtain filtered radar point cloud data; obtaining light detection and ranging (LiDAR) point cloud data of at least some of the area, where the LiDAR point cloud data include information about a bounding box that surrounds an object on the road; determining a set of radar point cloud data associated with the bounding box that surrounds the object; and causing the vehicle to operate based on one or more characteristics of the object determined from the set of radar point cloud data.
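A minimal numpy sketch under assumed data layouts, not the patented method: filter radar points, keep those falling inside a lidar-derived bounding box (axis-aligned here for simplicity), and estimate one object characteristic (radial speed) from the associated returns.
```python
import numpy as np

def filter_radar(points: np.ndarray, min_snr: float = 10.0) -> np.ndarray:
    """points: N x 5 array of (x, y, z, doppler_mps, snr). Drop low-SNR returns."""
    return points[points[:, 4] >= min_snr]

def points_in_box(points: np.ndarray, box_min, box_max) -> np.ndarray:
    """Keep radar points whose (x, y, z) lie inside the lidar bounding box,
    given here as axis-aligned min/max corners for simplicity."""
    xyz = points[:, :3]
    inside = np.all((xyz >= box_min) & (xyz <= box_max), axis=1)
    return points[inside]

def object_radial_speed(points_in_bbox: np.ndarray) -> float:
    """Median Doppler of the associated radar points as the object's radial speed."""
    return float(np.median(points_in_bbox[:, 3])) if len(points_in_bbox) else 0.0

radar = np.array([[12.0, 1.0, 0.5, -4.8, 18.0],
                  [12.4, 1.2, 0.6, -5.1, 22.0],
                  [30.0, 9.0, 0.4,  0.2,  6.0]])   # last return is low-SNR clutter
filtered = filter_radar(radar)
associated = points_in_box(filtered, box_min=(10, 0, 0), box_max=(14, 3, 2))
print(object_radial_speed(associated))  # ~ -4.95 m/s toward the sensor
```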
-
Publication Number: US20250050913A1
Publication Date: 2025-02-13
Application Number: US18486809
Filing Date: 2023-10-13
Applicant: TuSimple, Inc.
Inventor: Zhe CHEN , Lingting GE , Joshua Miguel RODRIGUEZ , Ji HAN , Panqu WANG , Junjun XIN , Xiaoling HAN , Yizhe ZHAO
IPC: B60W60/00
Abstract: Techniques are described for operating a vehicle using sensor data provided by one or more ultrasonic sensors located on or in the vehicle. An example method includes receiving, by a computer located in a vehicle, data from an ultrasonic sensor located on the vehicle, where the data includes a first set of coordinates of two points associated with a location where an object is detected by the ultrasonic sensor; determining a second set of coordinates associated with a point in between the two points; performing a first determination that the second set of coordinates is associated with a lane or a road on which the vehicle is operating; performing a second determination that the object is movable; and sending, in response to the first determination and the second determination, a message that causes the vehicle to perform a driving related operation while the vehicle is operating on the road.
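A sketch under assumed coordinate conventions, not the claimed implementation: take the two ultrasonic detection points, compute the point in between them, test whether that point falls inside the ego lane, and emit a message only when the object is also classified as movable. The lane model and message format are placeholders.
```python
def midpoint(p1, p2):
    """Second set of coordinates: the point in between the two detection points."""
    return ((p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0)

def in_lane(point, lane_left_y=-1.8, lane_right_y=1.8):
    """Toy lane model: the ego lane is a lateral band around the vehicle's x-axis."""
    return lane_left_y <= point[1] <= lane_right_y

def ultrasonic_check(p1, p2, object_is_movable: bool):
    """Return a driving-related message only when both determinations hold."""
    mid = midpoint(p1, p2)
    if in_lane(mid) and object_is_movable:
        return {"type": "slow_down", "obstacle_at": mid}
    return None

# Two detection points roughly 3 m ahead, slightly left of center, movable object.
print(ultrasonic_check((3.1, -0.4), (3.3, 0.2), object_is_movable=True))
```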
-
Publication Number: US20250029274A1
Publication Date: 2025-01-23
Application Number: US18488657
Filing Date: 2023-10-17
Applicant: TuSimple, Inc.
Inventor: Yizhe ZHAO , Zhe CHEN , Ye FAN , Lingting GE , Zhe HUANG , Panqu WANG , Xue MEI
Abstract: The present disclosure provides methods and systems of sampling-based object pose determination. An example method includes obtaining, for a time frame, sensor data of the object acquired by a plurality of sensors; generating a two-dimensional bounding box of the object in a projection plane based on the sensor data of the time frame; generating a three-dimensional pose model of the object based on the sensor data of the time frame and a model reconstruction algorithm; generating, based on the sensor data, the pose model, and multiple sampling techniques, a plurality of pose hypotheses of the object corresponding to the time frame; generating a hypothesis projection of the object for each of the pose hypotheses by projecting the pose hypothesis onto the projection plane; determining evaluation results by comparing the hypothesis projections with the bounding box; and determining, based on the evaluation results, an object pose for the time frame.
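A minimal sketch of the sampling-and-scoring loop under assumed conventions (a pinhole camera and a fixed box model); the real method builds its pose hypotheses from multiple sensors and a reconstructed model, which is not reproduced here. Each sampled pose is projected to an image-plane rectangle and scored by IoU against the detected 2D box.
```python
import numpy as np

K = np.array([[800.0, 0.0, 640.0],   # assumed pinhole intrinsics
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])

def box_corners(x, y, z, yaw, l=4.5, w=1.9, h=1.6):
    """3D corners of an object box for one pose hypothesis (camera frame, z forward)."""
    dx, dy = l / 2.0, w / 2.0
    base = np.array([[dx, dy], [dx, -dy], [-dx, -dy], [-dx, dy]])
    c, s = np.cos(yaw), np.sin(yaw)
    rotated = base @ np.array([[c, -s], [s, c]]).T
    corners = []
    for fwd, lat in rotated:                 # rotated ground-plane offsets
        for height in (0.0, h):
            corners.append([x + lat, y - height, z + fwd])
    return np.array(corners)                 # 8 x 3

def project_to_bbox(corners):
    """Project 3D corners onto the image plane and take their bounding rectangle:
    the 'hypothesis projection' compared with the detected 2D box."""
    uvw = (K @ corners.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]
    return np.array([uv[:, 0].min(), uv[:, 1].min(), uv[:, 0].max(), uv[:, 1].max()])

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

detected_box = np.array([520.0, 300.0, 760.0, 420.0])   # from the 2D detector
rng = np.random.default_rng(2)
hypotheses = [(rng.normal(0, 1), 1.0, rng.uniform(15, 25), rng.uniform(-0.3, 0.3))
              for _ in range(50)]                        # sampled (x, y, z, yaw)
scores = [iou(project_to_bbox(box_corners(*h)), detected_box) for h in hypotheses]
best = hypotheses[int(np.argmax(scores))]
print("best pose hypothesis:", best, "IoU:", max(scores))
```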
-
Publication Number: US20240379004A1
Publication Date: 2024-11-14
Application Number: US18784692
Filing Date: 2024-07-25
Applicant: TUSIMPLE, INC.
Inventor: Panqu WANG
Abstract: A system and method for determining car to lane distance is provided. In one aspect, the system includes a camera configured to generate an image, a processor, and a computer-readable memory. The processor is configured to receive the image from the camera, generate a wheel segmentation map representative of one or more wheels detected in the image, and generate a lane segmentation map representative of one or more lanes detected in the image. For at least one of the wheels in the wheel segmentation map, the processor is also configured to determine a distance between the wheel and at least one nearby lane in the lane segmentation map. The processor is further configured to determine a distance between a vehicle in the image and the lane based on the distance between the wheel and the lane.
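A toy sketch of the wheel-to-lane distance idea with binary segmentation masks standing in for the wheel and lane segmentation maps; pixel distances here are a placeholder for the metric distance computed by the described system, and the vehicle-to-lane distance is taken from its nearest wheel.
```python
import numpy as np

def min_pixel_distance(wheel_mask: np.ndarray, lane_mask: np.ndarray) -> float:
    """Smallest Euclidean pixel distance between any wheel pixel and any lane pixel."""
    wheel_pts = np.argwhere(wheel_mask)
    lane_pts = np.argwhere(lane_mask)
    if len(wheel_pts) == 0 or len(lane_pts) == 0:
        return float("inf")
    diffs = wheel_pts[:, None, :] - lane_pts[None, :, :]
    return float(np.sqrt((diffs ** 2).sum(axis=-1)).min())

# Synthetic 100 x 100 image: a wheel blob and a lane boundary line.
wheel = np.zeros((100, 100), dtype=bool)
lane = np.zeros((100, 100), dtype=bool)
wheel[60:70, 30:40] = True        # detected wheel region
lane[:, 55] = True                # detected lane boundary (a vertical line)
print(min_pixel_distance(wheel, lane))  # ~16 pixels
```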
-
Publication Number: US20240320990A1
Publication Date: 2024-09-26
Application Number: US18598715
Filing Date: 2024-03-07
Applicant: TuSimple, Inc.
Inventor: Rundong GE , Long SHA , Haiping WU , Xiangchen ZHAO , Fangjun ZHANG , Zilong GUO , Hongyuan DU , Pengfei CHEN , Panqu WANG
CPC classification number: G06V20/588 , B60W10/20 , G06T7/20 , G06V10/54 , G06V10/56 , G06V10/751 , G06V20/584
Abstract: Techniques are described for performing image processing on frames of a camera located on or in a vehicle. An example technique includes receiving, by a computer located in a vehicle, a first image and a second image from a camera; determining a first set of characteristics about a first set of pixels in the first image and a second set of characteristics about a second set of pixels in the second image; obtaining motion information for each pixel in the second set by comparing the second set of characteristics with the first set of characteristics; generating, using the motion information for each pixel in the second set, a combined set of characteristics; determining attributes of a road using at least some of the combined set of characteristics; and causing the vehicle to perform a driving related operation in response to determining the attributes of the road.
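An illustrative, deliberately simplified numpy sketch of the per-pixel motion and combination steps: a small displacement search between two frames stands in for the motion estimation, and a combined characteristic is built by averaging each second-frame pixel with the first-frame pixel its motion points back to. The actual feature definitions and road-attribute logic are not reproduced.
```python
import numpy as np

def per_pixel_motion(f1: np.ndarray, f2: np.ndarray, radius: int = 2):
    """For each pixel of f2, find the (dy, dx) shift into f1 (within +/- radius)
    that best matches its intensity: a crude stand-in for optical flow."""
    h, w = f2.shape
    flow = np.zeros((h, w, 2), dtype=np.int64)
    for y in range(h):
        for x in range(w):
            best, best_d = np.inf, (0, 0)
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        cost = abs(float(f2[y, x]) - float(f1[yy, xx]))
                        if cost < best:
                            best, best_d = cost, (dy, dx)
            flow[y, x] = best_d
    return flow

def combined_characteristics(f1, f2, flow):
    """Average each f2 pixel with the f1 pixel its motion points back to."""
    h, w = f2.shape
    out = np.empty_like(f2, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            dy, dx = flow[y, x]
            out[y, x] = 0.5 * (f2[y, x] + f1[y + dy, x + dx])
    return out

rng = np.random.default_rng(3)
frame1 = rng.uniform(0, 1, size=(12, 12))
frame2 = np.roll(frame1, shift=1, axis=1)     # the scene shifted one pixel right
flow = per_pixel_motion(frame1, frame2)
combined = combined_characteristics(frame1, frame2, flow)
print(flow[6, 6], combined.shape)
```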
-
Publication Number: US20240203135A1
Publication Date: 2024-06-20
Application Number: US18474812
Filing Date: 2023-09-26
Applicant: TuSimple, Inc.
Inventor: Yizhe ZHAO , Lingting GE , Panqu WANG
CPC classification number: G06V20/588 , B60W60/001 , G06T7/12 , G06T7/74 , G06T15/00 , G06V20/70 , G08G1/167 , B60W2420/42 , B60W2520/10 , B60W2552/10 , B60W2552/15 , B60W2555/60 , B60W2556/40 , B60W2710/18 , B60W2710/20 , G06T2207/30256
Abstract: Techniques are described for autonomous driving operation that includes receiving, by a computer located in a vehicle, an image from a camera located on the vehicle while the vehicle is operating on a road, wherein the image includes a plurality of lanes of the road; for each of the plurality of lanes: obtaining, from a map database stored in the computer, a set of values that describe locations of boundaries of a lane; dividing the lane into a plurality of polygons; rendering the plurality of polygons onto the image; and determining identifiers of lane segments of the lane; determining one or more characteristics of a lane segment on which the vehicle is operating based on an identifier of the lane segment; and causing the vehicle to perform a driving related operation in response to the one or more characteristics of the lane segment on which the vehicle is operating.
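A hypothetical sketch of the lane-segmentation bookkeeping: the map values, polygon division, and segment identifiers below are simplified placeholders rather than the actual map schema. It splits a lane into quadrilateral segments between its boundary polylines, finds which segment the vehicle occupies, and looks up a per-segment characteristic.
```python
def lane_polygons(left_boundary, right_boundary):
    """Boundaries are equal-length lists of (x, y) map points describing the lane
    edges. Consecutive boundary pairs form one quadrilateral lane segment each,
    keyed by a segment identifier."""
    segments = {}
    for i in range(len(left_boundary) - 1):
        segments[f"seg_{i}"] = [left_boundary[i], left_boundary[i + 1],
                                right_boundary[i + 1], right_boundary[i]]
    return segments

def point_in_polygon(point, polygon):
    """Standard ray-casting containment test for a simple polygon."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def current_segment(vehicle_xy, segments):
    """Identifier of the lane segment on which the vehicle is operating, if any."""
    for seg_id, poly in segments.items():
        if point_in_polygon(vehicle_xy, poly):
            return seg_id
    return None

left = [(0, 0), (0, 20), (0, 40)]
right = [(3.7, 0), (3.7, 20), (3.7, 40)]              # a 3.7 m wide, 40 m long lane
segment_speed_limit = {"seg_0": 25.0, "seg_1": 30.0}  # per-segment characteristic
seg = current_segment((1.8, 27.0), lane_polygons(left, right))
print(seg, segment_speed_limit.get(seg))              # seg_1 30.0
```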