Abstract:
A computing device may be configured to receive sensor information indicative of respective characteristics of vehicles on a road of travel of a first vehicle. The computing device may be configured to identify, based on the respective characteristics, a second vehicle that exhibits an aggressive driving behavior manifested as an unsafe or unlawful driving action. Also, based on the respective characteristics, the computing device may be configured to determine a type of the second vehicle. The computing device may be configured to estimate a distance between the first vehicle and the second vehicle. The computing device may be configured to modify a control strategy of the first vehicle, based on the aggressive driving behavior of the second vehicle, the type of the second vehicle, and the distance between the first vehicle and the second vehicle; and control the first vehicle based on the modified control strategy.
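The control-strategy modification described above might be sketched as follows; the function name, vehicle-type margins, and distance threshold are illustrative assumptions, not the claimed implementation:

```python
# Illustrative sketch: widen the following gap and cap speed when an
# aggressively driving vehicle is detected. All names and numeric
# thresholds are assumptions for the sake of the example.

def modify_control_strategy(base_gap_m, aggressive, vehicle_type, distance_m):
    """Return (following_gap_m, speed_factor) for the first vehicle."""
    gap, speed_factor = base_gap_m, 1.0
    if aggressive:
        # A larger vehicle type warrants a wider safety margin.
        type_margin = {"motorcycle": 1.2, "car": 1.5, "truck": 2.0}
        gap *= type_margin.get(vehicle_type, 1.5)
        if distance_m < 50.0:
            speed_factor = 0.8  # reduce speed while the vehicle is close
    return gap, speed_factor
```

A downstream controller could then apply the returned gap and speed factor when planning the first vehicle's trajectory.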
Abstract:
Methods and systems for object detection using multiple sensors are described herein. In an example embodiment, a vehicle's computing device may receive sensor data frames indicative of an environment at different rates from multiple sensors. Based on a first frame from a first sensor indicative of the environment at a first time period and a portion of a first frame from a second sensor that corresponds to the first time period, the computing device may estimate parameters of objects in the vehicle's environment. The computing device may modify the parameters in response to receiving subsequent frames or subsequent portions of frames of sensor data from the sensors, even if the frames arrive at the computing device out of order. The computing device may provide the parameters of the objects to systems of the vehicle for object detection and obstacle avoidance.
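One minimal way to tolerate out-of-order frame arrival, as described above, is to timestamp each object estimate and let a late-arriving frame overwrite an estimate only if it describes a newer time period. The class and attribute names below are illustrative assumptions:

```python
# Sketch of order-tolerant fusion of sensor frames: each object
# estimate keeps the timestamp of the data that produced it, so a
# frame that arrives late but describes an older time period is
# ignored rather than overwriting a newer estimate.

class ObjectTracker:
    def __init__(self):
        self.estimates = {}  # object_id -> (timestamp, params)

    def update(self, object_id, timestamp, params):
        current = self.estimates.get(object_id)
        # Only accept data describing a newer time period.
        if current is None or timestamp > current[0]:
            self.estimates[object_id] = (timestamp, params)

    def get(self, object_id):
        entry = self.estimates.get(object_id)
        return entry[1] if entry else None
```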
Abstract:
Methods and systems are disclosed for determining sensor degradation by actively controlling an autonomous vehicle. Determining sensor degradation may include obtaining sensor readings from a sensor of an autonomous vehicle, and determining baseline state information from the obtained sensor readings. A movement characteristic of the autonomous vehicle, such as speed or position, may then be changed. The sensor may then obtain additional sensor readings, and second state information may be determined from these additional sensor readings. Expected state information may be determined from the baseline state information and the change in the movement characteristic of the autonomous vehicle. A comparison of the expected state information and the second state information may then be performed. Based on this comparison, a determination may be made as to whether the sensor has degraded.
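The comparison of expected and measured state described above can be sketched in one dimension; the tolerance value and function name are assumptions for illustration:

```python
# Sketch of sensor-degradation detection: predict the reading the
# sensor should produce after a known change in the vehicle's
# movement (here, a position change), then flag the sensor as
# degraded if the measured reading deviates beyond a tolerance.

def sensor_degraded(baseline_pos, measured_pos, delta_pos, tolerance=0.5):
    """Return True if the sensor's reading disagrees with the state
    expected after the commanded movement change."""
    expected_pos = baseline_pos + delta_pos
    return abs(measured_pos - expected_pos) > tolerance
```

The same expected-versus-measured comparison would apply to other movement characteristics, such as speed.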
Abstract:
The present invention relates to annotating images. In an embodiment, the present invention enables users to create annotations corresponding to three-dimensional objects while viewing two-dimensional images. In one embodiment, this is achieved by projecting a selecting object onto a three-dimensional model created from a plurality of two-dimensional images. The selecting object is input by a user while viewing a first image corresponding to a portion of the three-dimensional model. A location corresponding to the projection on the three-dimensional model is determined, and content entered by the user while viewing the first image is associated with the location. The content is stored together with the location information to form an annotation. The annotation can be retrieved and displayed together with other images corresponding to the location.
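The projection step described above amounts to intersecting a viewing ray with a plane of the three-dimensional model and keying the user's content to the resulting location. The plane representation and function names below are illustrative assumptions:

```python
# Sketch of anchoring a 2D annotation to a 3D location: cast a ray
# from the camera through the user's selection, intersect it with a
# facade plane of the model, and store the content at that location.

def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Return the 3D intersection of the ray with the plane, or None."""
    denom = sum(d * n for d, n in zip(direction, plane_normal))
    if abs(denom) < 1e-9:
        return None  # ray parallel to the plane
    t = sum((p - o) * n
            for p, o, n in zip(plane_point, origin, plane_normal)) / denom
    if t < 0:
        return None  # intersection behind the camera
    return tuple(o + t * d for o, d in zip(origin, direction))

annotations = {}  # 3D location -> user content

def annotate(origin, direction, plane_point, plane_normal, content):
    location = ray_plane_intersection(origin, direction,
                                      plane_point, plane_normal)
    if location is not None:
        annotations[location] = content
    return location
```

Other images that view the same location could then look up and display the stored content.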
Abstract:
A vehicle configured to operate in an autonomous mode may engage in an obstacle evaluation technique that includes employing a sensor system to collect data relating to a plurality of obstacles, identifying from the plurality of obstacles an obstacle pair including a first obstacle and a second obstacle, engaging in an evaluation process by comparing the data collected for the first obstacle to the data collected for the second obstacle, and in response to engaging in the evaluation process, making a determination of whether the first obstacle and the second obstacle are two separate obstacles.
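The evaluation process described above might compare the collected position and velocity data for the obstacle pair; the tolerances and dictionary keys below are illustrative assumptions:

```python
# Sketch of obstacle-pair evaluation: if two candidate obstacles
# agree in position and velocity within tolerance, treat them as one
# obstacle observed twice; otherwise treat them as separate.
import math

def are_separate(obs_a, obs_b, pos_tol=1.0, vel_tol=0.5):
    """Return True if the two obstacles appear to be distinct."""
    dpos = math.dist(obs_a["pos"], obs_b["pos"])
    dvel = math.dist(obs_a["vel"], obs_b["vel"])
    return dpos > pos_tol or dvel > vel_tol
```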
Abstract:
Methods and systems for object detection using laser point clouds are described herein. In an example implementation, a computing device may receive laser data indicative of a vehicle's environment from a sensor and generate a two-dimensional (2D) range image that includes pixels indicative of respective positions of objects in the environment based on the laser data. The computing device may modify the 2D range image to provide values to given pixels that map to portions of objects in the environment lacking laser data, which may involve providing values to the given pixels based on the average value of neighboring pixels positioned near the given pixels. Additionally, the computing device may determine normal vectors of sets of pixels that correspond to surfaces of objects in the environment based on the modified 2D range image and may use the normal vectors to provide object recognition information to systems of the vehicle.
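The gap-filling step described above, in which missing pixels receive the average of their neighbors, can be sketched as follows; the nested-list image representation and the choice of a 4-neighborhood are assumptions for illustration:

```python
# Sketch of filling gaps in a 2D range image: each pixel lacking
# laser data (here marked by the sentinel value 0.0) receives the
# average of its valid 4-connected neighbors.

def fill_missing(range_image, missing=0.0):
    """Return a copy of the range image with missing pixels filled."""
    h, w = len(range_image), len(range_image[0])
    out = [row[:] for row in range_image]
    for i in range(h):
        for j in range(w):
            if range_image[i][j] == missing:
                vals = [range_image[x][y]
                        for x, y in ((i - 1, j), (i + 1, j),
                                     (i, j - 1), (i, j + 1))
                        if 0 <= x < h and 0 <= y < w
                        and range_image[x][y] != missing]
                if vals:
                    out[i][j] = sum(vals) / len(vals)
    return out
```

Normal vectors could then be estimated from the filled image, for example from gradients between adjacent pixel positions.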
Abstract:
A computing device may identify an object in an environment of a vehicle and receive a first three-dimensional (3D) point cloud depicting a first view of the object. The computing device may determine a reference point on the object in the first 3D point cloud, and receive a second 3D point cloud depicting a second view of the object. The computing device may determine a transformation between the first view and the second view, and estimate a projection of the reference point from the first view relative to the second view based on the transformation so as to trace the reference point from the first view to the second view. The computing device may determine one or more motion characteristics of the object based on the projection of the reference point.
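The projection and motion-estimation steps described above can be sketched with a rigid transform between the two views; the function names and tuple-based representation are illustrative assumptions:

```python
# Sketch of tracing a reference point between two views: apply the
# rigid transform (3x3 rotation plus translation) that relates the
# first view to the second, then derive a motion characteristic
# (average velocity) from the point's displacement.

def project_reference(point, rotation, translation):
    """Map the reference point from the first view into the second."""
    return tuple(
        sum(rotation[i][j] * point[j] for j in range(3)) + translation[i]
        for i in range(3)
    )

def velocity(ref_before, ref_after, dt):
    """Average velocity of the reference point over the interval dt."""
    return tuple((a - b) / dt for a, b in zip(ref_after, ref_before))
```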
Abstract:
The present invention relates to using image content to facilitate navigation in panoramic image data. In an embodiment, a computer-implemented method for navigating in panoramic image data includes: (1) determining an intersection of a ray and a virtual model, wherein the ray extends from a camera viewport of an image and the virtual model comprises a plurality of facade planes; (2) retrieving a panoramic image; (3) orienting the panoramic image to the intersection; and (4) displaying the oriented panoramic image.
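Step (3) above, orienting the retrieved panorama toward the computed intersection, reduces to computing a heading from the panorama's position to the intersection point; the coordinate convention and function name are assumptions for illustration:

```python
# Sketch of orienting a panorama toward an intersection point:
# compute the yaw (in radians, about the vertical y-axis) from the
# panorama's position to the ray/model intersection.
import math

def orientation_to_point(pano_position, intersection):
    """Yaw that points the panorama at the intersection (x, y, z)."""
    dx = intersection[0] - pano_position[0]
    dz = intersection[2] - pano_position[2]
    return math.atan2(dx, dz)
```

The panorama would then be rendered with this yaw so that the feature the user navigated toward appears centered in the viewport.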
Abstract:
Aspects of the disclosure relate generally to detecting the edges of lane lines. Specifically, a vehicle driving on a roadway may use a laser to collect data for the roadway. A computer may process the data received from the laser in order to extract the points which potentially reside on two lane lines defining a lane. The extracted points are used by the computer to determine a model of a left lane edge and a right lane edge for the lane. The model may be used to estimate a centerline between the two lane lines. All or some of the model and centerline estimates may be used to maneuver a vehicle in real time and also to update or generate map information used to maneuver vehicles.
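A simple version of the edge-model and centerline estimation described above fits a line to each set of extracted edge points and averages the two models; the straight-line model and function names are assumptions for illustration:

```python
# Sketch of lane-edge modeling: fit a least-squares line y = m*x + b
# to the points extracted for each lane edge, then average the left
# and right models to estimate the centerline.

def fit_line(points):
    """Least-squares line (m, b) through (x, y) points."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

def centerline(left_points, right_points):
    """Average the left and right edge models for the centerline."""
    ml, bl = fit_line(left_points)
    mr, br = fit_line(right_points)
    return (ml + mr) / 2.0, (bl + br) / 2.0
```

A practical system would likely use a higher-order model for curved roads, but the averaging step is the same.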
Abstract:
Aspects of the invention relate generally to autonomous vehicles. Specifically, the features described may be used alone or in combination in order to improve the safety, use, driver experience, and performance of these vehicles.