Abstract:
Methods and systems for estimating vehicle speed are disclosed. A light detecting and ranging (LIDAR) device obtains a set of spatial points indicative of locations of reflective surfaces in an environment of the LIDAR device. A plurality of target points that correspond to a target surface of a target vehicle is identified in the set of spatial points. The plurality of target points includes a first point indicative of a first location on the target surface obtained by the LIDAR device at a first time and a second point indicative of a second location on the target surface obtained by the LIDAR device at a second time. A speed of the target vehicle is estimated based on the first location, the first time, the second location, and the second time.
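For illustration, here is a minimal Python sketch of the two-point speed estimate this abstract describes. The function name, the straight-line-displacement assumption, and the example values are illustrative, not taken from the patent:

```python
import math

def estimate_speed(p1, t1, p2, t2):
    """Estimate target speed from two LIDAR returns off the same surface.

    p1, p2: (x, y, z) locations on the target surface, in meters.
    t1, t2: acquisition times of the two points, in seconds.
    """
    if t2 == t1:
        raise ValueError("timestamps must differ")
    distance = math.dist(p1, p2)      # straight-line displacement
    return distance / abs(t2 - t1)    # meters per second

# Example: the surface moved 1.5 m in 0.1 s -> 15 m/s.
print(estimate_speed((10.0, 2.0, 0.5), 0.00, (11.5, 2.0, 0.5), 0.10))
```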
Abstract:
Methods and systems for alignment of light detection and ranging (LIDAR) data are described. In some examples, a computing device of a vehicle may be configured to compare a three-dimensional (3D) point cloud to a reference 3D point cloud to detect obstacles on a road. However, in examples, the 3D point cloud and the reference 3D point cloud may be misaligned. To align the 3D point cloud with the reference 3D point cloud, the computing device may be configured to determine a planar feature in the 3D point cloud of the road and a corresponding planar feature in the reference 3D point cloud. Further, the computing device may be configured to determine, based on comparison of the planar feature to the corresponding planar feature, a transform. The computing device may be configured to apply the transform to align the 3D point cloud with the reference 3D point cloud.
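A sketch of the planar-feature alignment step, assuming each cloud contains one dominant road plane: fit a plane to each cloud by SVD, then build the rigid transform (Rodrigues' rotation plus a translation) that maps one plane onto the other. All names and the single-plane assumption are mine:

```python
import numpy as np

def fit_plane(points):
    """Fit a plane to an Nx3 array of points; return (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]                    # smallest singular direction

def plane_alignment_transform(cloud, reference):
    """Rotation R and translation t taking cloud's plane onto reference's."""
    c1, n1 = fit_plane(cloud)
    c2, n2 = fit_plane(reference)
    if np.dot(n1, n2) < 0:                     # SVD normal sign is arbitrary
        n1 = -n1
    v, c = np.cross(n1, n2), np.dot(n1, n2)
    vx = np.array([[0, -v[2], v[1]],
                   [v[2], 0, -v[0]],
                   [-v[1], v[0], 0]])
    r = np.eye(3) + vx + vx @ vx / (1.0 + c)   # Rodrigues' rotation formula
    t = c2 - r @ c1                            # translation after rotation
    return r, t

# Applying the transform aligns the cloud with the reference:
#   aligned = cloud @ r.T + t
```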
Abstract:
The present invention relates to annotating images. In an embodiment, the present invention enables users to create annotations corresponding to three-dimensional objects while viewing two-dimensional images. In one embodiment, this is achieved by projecting a selecting object onto a three-dimensional model created from a plurality of two-dimensional images. The selecting object is input by a user while viewing a first image corresponding to a portion of the three-dimensional model. A location corresponding to the projection on the three-dimensional model is determined, and content entered by the user while viewing the first image is associated with the location. The content is stored together with the location information to form an annotation. The annotation can be retrieved and displayed together with other images corresponding to the location.
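The projection step can be sketched as ray casting: unproject the user's 2D selection through the camera into a world-space ray, intersect it with a plane of the 3D model, and store the entered content at the hit location. The pinhole-camera convention, function names, and the dictionary store below are assumptions for illustration:

```python
import numpy as np

def pixel_to_ray(px, py, width, height, fov_deg, cam_pos, cam_rot):
    """Unproject an image pixel into a world-space ray (origin, direction)."""
    f = (width / 2) / np.tan(np.radians(fov_deg) / 2)   # focal length in px
    d = np.array([px - width / 2, py - height / 2, f])
    d = cam_rot @ (d / np.linalg.norm(d))               # camera -> world
    return cam_pos, d

def ray_plane_hit(origin, direction, plane_point, plane_normal):
    """Return the 3D intersection of a ray with a model plane, or None."""
    denom = np.dot(direction, plane_normal)
    if abs(denom) < 1e-9:                               # ray parallel to plane
        return None
    s = np.dot(plane_point - origin, plane_normal) / denom
    return origin + s * direction if s > 0 else None

annotations = {}

def annotate(pixel, camera, plane, text):
    """Project a user's 2D selection onto the model and store the note."""
    origin, direction = pixel_to_ray(*pixel, *camera)
    location = ray_plane_hit(origin, direction, *plane)
    if location is not None:
        annotations[tuple(np.round(location, 2))] = text
```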
Abstract:
Aspects of the disclosure relate generally to safe and effective use of autonomous vehicles. More specifically, objects in a vehicle's surroundings may be detected by the vehicle's various sensors and identified based on their relative location in a roadgraph. The roadgraph may include a graph network of information such as roads, lanes, intersections, and the connections between these features. The roadgraph may also include the boundaries of areas, including, for example, crosswalks or bicycle lanes. In one example, an object detected in a location corresponding to a crosswalk area of the roadgraph may be identified as a person. In another example, an object detected in a location corresponding to a bicycle area of the roadgraph may be identified as a bicycle. By identifying the type of object in this way, an autonomous vehicle may be better prepared to react to or simply avoid the object.
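The roadgraph lookup reduces to a containment test: which mapped area boundary contains the detected object's location? A minimal sketch, using a standard ray-casting point-in-polygon test and hypothetical area boundaries:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside a polygon given as [(x, y), ...]?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

# Hypothetical roadgraph areas, keyed by the object type they suggest.
ROADGRAPH_AREAS = {
    "person":  [(0, 0), (4, 0), (4, 2), (0, 2)],   # crosswalk boundary
    "bicycle": [(0, 2), (4, 2), (4, 3), (0, 3)],   # bicycle-lane boundary
}

def classify_detection(x, y):
    """Guess an object's type from which roadgraph area contains it."""
    for label, boundary in ROADGRAPH_AREAS.items():
        if point_in_polygon(x, y, boundary):
            return label
    return "unknown"

print(classify_detection(1.0, 1.0))   # -> person (inside the crosswalk area)
```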
Abstract:
The present invention relates to using image content to facilitate navigation in panoramic image data. In an embodiment, a computer-implemented method for navigating in panoramic image data includes: (1) determining an intersection of a ray and a virtual model, wherein the ray extends from a camera viewport of an image and the virtual model comprises a plurality of facade planes; (2) retrieving a panoramic image; (3) orienting the panoramic image to the intersection; and (4) displaying the oriented panoramic image.
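Steps (1) and (3) can be sketched with simple vector math: find the nearest facade plane the viewport ray hits, then compute the yaw that points the retrieved panorama at that intersection. The data layout and function names below are assumptions:

```python
import math

def facade_hit(origin, direction, facades):
    """Nearest intersection of the viewport ray with any facade plane.

    Each facade is (point_on_plane, unit_normal); all are (x, y, z) tuples.
    """
    best = None
    for p, n in facades:
        denom = sum(d * c for d, c in zip(direction, n))
        if abs(denom) < 1e-9:                  # ray parallel to this facade
            continue
        s = sum((pc - oc) * nc for pc, oc, nc in zip(p, origin, n)) / denom
        if s > 0 and (best is None or s < best[0]):
            best = (s, tuple(o + s * d for o, d in zip(origin, direction)))
    return best[1] if best else None

def orient_panorama(pano_center, hit):
    """Yaw (degrees) that points the panorama viewport at the intersection."""
    dx, dy = hit[0] - pano_center[0], hit[1] - pano_center[1]
    return math.degrees(math.atan2(dy, dx))
```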
Abstract:
Aspects of the present disclosure relate generally to generating elevation maps. More specifically, data points may be collected by a laser moving along a roadway and used to generate an elevation map of the roadway. The collected data points may be projected onto a two dimensional or "2D" grid. The grid may include a plurality of cells, each cell of the grid representing a geolocated section of the roadway. The data points of each cell may be evaluated to identify an elevation for the particular cell. For example, the data points in a particular cell may be filtered in various ways, including by occlusion, interpolation from neighboring cells, etc. The minimum value of the remaining data points within each cell may then be used as the elevation for the particular cell, and the elevations of a plurality of cells may be used to generate an elevation map of the roadway.
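A minimal sketch of the grid-and-minimum step, omitting the filtering stage; the cell size and function name are illustrative:

```python
from collections import defaultdict

CELL_SIZE = 0.5   # meters per grid cell (illustrative)

def elevation_map(points):
    """Bin (x, y, z) laser points into a 2D grid; min z per cell = elevation."""
    cells = defaultdict(list)
    for x, y, z in points:
        cells[(int(x // CELL_SIZE), int(y // CELL_SIZE))].append(z)
    # A full pipeline would first filter each cell's points (occlusion,
    # interpolation from neighboring cells, etc.); here we take the raw minimum.
    return {cell: min(zs) for cell, zs in cells.items()}

points = [(0.1, 0.2, 1.5), (0.3, 0.1, 1.2), (1.0, 0.2, 2.0)]
print(elevation_map(points))   # {(0, 0): 1.2, (2, 0): 2.0}
```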
Abstract:
Aspects of the disclosure relate generally to maneuvering autonomous vehicles. Specifically, the vehicle may use a laser to collect scan data for a section of roadway. The vehicle may access a detailed map including the section of the roadway. A disturbance indicative of an object and including a set of data points may be identified from the scan data based on the detailed map. The detailed map may also be used to estimate a heading of the disturbance. A bounding box for the disturbance may be estimated using the set of data points as well as the estimated heading. The parameters of the bounding box may then be adjusted in order to increase or maximize the average density of data points of the disturbance along the edges of the bounding box visible to the laser. This adjusted bounding box may then be used to maneuver the vehicle.
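One way to read the adjustment step is as a local search that scores each candidate box by point density along its laser-visible edges. The sketch below works in 2D, assumes the laser sees the front and one side edge, and uses a greedy one-step search; all of these simplifications are mine:

```python
import math

def edge_density(points, center, heading, length, width, tol=0.1):
    """Average density of points lying near the two box edges the laser sees.

    points: iterable of (x, y) disturbance points; heading in radians.
    """
    cos_h, sin_h = math.cos(heading), math.sin(heading)
    near_front = near_side = 0
    for x, y in points:
        dx, dy = x - center[0], y - center[1]
        u = dx * cos_h + dy * sin_h          # along the heading
        v = -dx * sin_h + dy * cos_h         # across the heading
        if abs(u - length / 2) < tol and abs(v) <= width / 2:
            near_front += 1
        if abs(v - width / 2) < tol and abs(u) <= length / 2:
            near_side += 1
    return (near_front + near_side) / (length + width)   # points per meter

def refine_box(points, center, heading, length, width, step=0.05):
    """Greedy tweak of box size to raise point density on visible edges."""
    best = (edge_density(points, center, heading, length, width),
            length, width)
    for dl in (-step, 0, step):
        for dw in (-step, 0, step):
            score = edge_density(points, center, heading,
                                 length + dl, width + dw)
            if score > best[0]:
                best = (score, length + dl, width + dw)
    return best   # (score, refined length, refined width)
```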
Abstract:
Aspects of the present disclosure relate generally to safe and effective use of autonomous vehicles. More specifically, an autonomous vehicle is able to detect objects in its surroundings which are within its sensor fields. In response to detecting objects, the vehicle's computer may adjust the autonomous vehicle's speed or change direction. In some examples, however, the sensor fields may be changed or become less reliable based on objects or other features in the vehicle's surroundings. As a result, the vehicle's computer may calculate the size and shape of the area of sensor diminution and a new sensor field based on this area of diminution. In response to identifying the area of sensor diminution or the new sensor field, the vehicle's computer may change the control strategies of the vehicle.
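In the simplest 2D reading, an occluding object casts an angular shadow over part of the sensor field; the diminished area is the sector behind it. A sketch under that disc-obstacle assumption, with names of my own choosing:

```python
import math

def occlusion_sector(sensor, obstacle_center, obstacle_radius):
    """Angular sector of the sensor field shadowed by a disc-shaped obstacle.

    Returns (center_bearing, half_angle) in radians, or None if degenerate.
    """
    dx = obstacle_center[0] - sensor[0]
    dy = obstacle_center[1] - sensor[1]
    dist = math.hypot(dx, dy)
    if dist <= obstacle_radius:
        return None                          # sensor inside the obstacle
    bearing = math.atan2(dy, dx)             # direction of the shadow
    half_angle = math.asin(obstacle_radius / dist)
    return bearing, half_angle

def is_shadowed(sensor, target, sector, occluder_dist):
    """True if a target falls inside the diminished part of the field."""
    bearing, half = sector
    dx, dy = target[0] - sensor[0], target[1] - sensor[1]
    angle = math.atan2(dy, dx)
    # wrap the angular difference into [-pi, pi]
    delta = math.atan2(math.sin(angle - bearing), math.cos(angle - bearing))
    return abs(delta) <= half and math.hypot(dx, dy) > occluder_dist
```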
Abstract:
Aspects of the invention pertain to enhanced zooming capability of user devices. A user device such as a mobile phone with a camera may capture images of different objects of interest. The capture and zooming limitations of the user device are overcome by replacing, supplementing or otherwise enhancing the image taken with one or more geo-coded images stored in a database. For instance, if the user attempts to zoom in on a feature of an object of interest and exceeds the zooming capability of the user device, a request is sent to a remote server to provide an image showing the feature of the object of interest at a desired resolution. The server determines which, if any, stored images correlate to the captured image of the object of interest. The resulting imagery is provided to the user device and is presented on a display.
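The client-side decision can be sketched as a simple capability check with a fallback request to the image service. The zoom limit, endpoint URL, and query parameters below are all hypothetical:

```python
import urllib.parse

MAX_OPTICAL_ZOOM = 4.0   # illustrative device limit

def request_enhanced_image(lat, lng, heading, requested_zoom):
    """Build a hypothetical request to a geo-coded image service.

    A real client would also upload the captured frame so the server can
    correlate it against stored imagery; the endpoint here is made up.
    """
    query = urllib.parse.urlencode({
        "lat": lat, "lng": lng, "heading": heading, "zoom": requested_zoom,
    })
    return f"https://example.com/geo-images?{query}"

def zoom(requested_zoom, lat, lng, heading):
    """Use the device's own zoom, or fall back to stored imagery."""
    if requested_zoom <= MAX_OPTICAL_ZOOM:
        return "device"                      # within the camera's capability
    return request_enhanced_image(lat, lng, heading, requested_zoom)

print(zoom(8.0, 37.42, -122.08, 90.0))
```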