Abstract:
Aspects of the disclosure relate generally to detecting the edges of lane lines. Specifically, a vehicle driving on a roadway may use a laser to collect data for the roadway. A computer may process the data received from the laser in order to extract the points which potentially reside on two lane lines defining a lane. The extracted points are used by the computer to determine a model of a left lane edge and a right lane edge for the lane. The model may be used to estimate a centerline between the two lane lines. All or some of the model and centerline estimates may be used to maneuver a vehicle in real time and also to update or generate map information used to maneuver vehicles.
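The pipeline described above might be sketched as follows. The quadratic edge model, the NumPy-based fitting, and all function names are illustrative assumptions, not the disclosed implementation:

```python
import numpy as np

# Hypothetical sketch: model each lane edge as a low-order polynomial
# fit to extracted laser points, then estimate the centerline by
# averaging the two fitted edges.

def fit_lane_edge(points, degree=2):
    """Fit a polynomial y = f(x) to points believed to lie on one lane edge."""
    xs, ys = zip(*points)
    return np.polyfit(xs, ys, degree)

def estimate_centerline(left_coeffs, right_coeffs, xs):
    """Centerline y-values: midway between the two fitted edges at each x."""
    left = np.polyval(left_coeffs, xs)
    right = np.polyval(right_coeffs, xs)
    return (left + right) / 2.0

# Example: two straight, parallel edges 3.6 m apart
left_pts = [(x, 0.0) for x in range(10)]
right_pts = [(x, 3.6) for x in range(10)]
left = fit_lane_edge(left_pts)
right = fit_lane_edge(right_pts)
center = estimate_centerline(left, right, np.arange(10))
```

A real system would also have to reject laser returns that do not belong to either lane line before fitting.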
Abstract:
Aspects of the disclosure relate generally to detecting discrete actions by traveling vehicles. The features described improve the safety, use, driver experience, and performance of autonomously controlled vehicles by performing a behavior analysis on mobile objects in the vicinity of an autonomous vehicle. Specifically, an autonomous vehicle is capable of detecting and tracking nearby vehicles and is able to determine when these nearby vehicles have performed actions of interest by comparing their tracked movements with map data.
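As a minimal illustration of comparing tracked movements with map data, the sketch below flags one "action of interest" (a lane change). The single straight boundary at y = 2.0 and the function name are hypothetical stand-ins for real map data:

```python
# Illustrative sketch: detect a lane change by checking whether a
# tracked vehicle's positions straddle a mapped lane boundary.
# The boundary at y = 2.0 is an assumed stand-in for map data.

LANE_BOUNDARY_Y = 2.0

def crossed_boundary(track):
    """True if any pair of consecutive positions straddles the boundary."""
    sides = [y > LANE_BOUNDARY_Y for (_x, y) in track]
    return any(a != b for a, b in zip(sides, sides[1:]))

drifting = [(0, 1.0), (5, 1.5), (10, 2.5)]   # crosses the boundary
steady = [(0, 1.0), (5, 1.1), (10, 1.2)]     # stays in lane
```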
Abstract:
An autonomous vehicle configured to determine the heading of an object-of-interest based on a point cloud. An example computer-implemented method involves: (a) receiving spatial-point data indicating a set of spatial points, each spatial point representing a point in three dimensions, where the set of spatial points corresponds to an object-of-interest; (b) determining, for each spatial point, an associated projected point, each projected point representing a point in two dimensions; (c) determining a set of line segments based on the determined projected points, where each respective line segment connects at least two determined projected points; (d) determining an orientation of at least one determined line segment from the set of line segments; and (e) determining a heading of the object-of-interest based on at least the determined orientation.
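Steps (a) through (e) might be sketched as follows. Dropping the z coordinate as the projection, connecting consecutive points as the segments, and taking the longest segment's orientation as the heading are all illustrative assumptions:

```python
import math

# Illustrative sketch of steps (a)-(e): project 3D points to the
# ground plane, form line segments between projected points, and use
# the orientation of the longest segment as the heading estimate.

def project(spatial_points):
    """(b) Drop the z coordinate to get 2D projected points."""
    return [(x, y) for (x, y, _z) in spatial_points]

def segments(projected):
    """(c) Connect each projected point to the next."""
    return list(zip(projected, projected[1:]))

def orientation(seg):
    """(d) Orientation of a segment in radians, normalized to [0, pi)."""
    (x1, y1), (x2, y2) = seg
    return math.atan2(y2 - y1, x2 - x1) % math.pi

def heading(spatial_points):
    """(e) Heading from the orientation of the longest segment."""
    segs = segments(project(spatial_points))
    longest = max(segs, key=lambda s: math.dist(*s))
    return orientation(longest)

# Points along the side of an object facing 45 degrees
pts = [(0, 0, 1.0), (1, 1, 1.1), (3, 3, 0.9)]
```

The longest-segment heuristic works here because points along a vehicle's long side tend to dominate a lidar return; other segment-selection rules are equally plausible.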
Abstract:
Aspects of the disclosure relate generally to safe and effective use of autonomous vehicles. More specifically, objects detected in a vehicle's surroundings may be detected by the vehicle's various sensors and identified based on their relative location in a roadgraph. The roadgraph may include a graph network of information such as roads, lanes, intersections, and the connections between these features. The roadgraph may also include the boundaries of areas, including for example, crosswalks or bicycle lanes. In one example, an object detected in a location corresponding to a crosswalk area of the roadgraph may be identified as a person. In another example, an object detected in a location corresponding to a bicycle area of the roadgraph may be identified as a bicycle. By identifying the type of object in this way, an autonomous vehicle may be better prepared to react to or simply avoid the object.
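The location-based identification above amounts to a point-in-region lookup against the roadgraph's area boundaries. A minimal sketch, in which the rectangular areas and the area-to-label mapping are invented for illustration:

```python
# Hypothetical sketch: classify a detected object by which roadgraph
# area its location falls in, using a ray-casting point-in-polygon test.

def point_in_polygon(pt, polygon):
    """Ray-casting test: count edge crossings to the right of pt."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Illustrative roadgraph areas (vertex lists) and labels
ROADGRAPH_AREAS = {
    "crosswalk": [(0, 0), (4, 0), (4, 2), (0, 2)],
    "bicycle_lane": [(0, 2), (4, 2), (4, 3), (0, 3)],
}
AREA_TO_OBJECT = {"crosswalk": "person", "bicycle_lane": "bicycle"}

def identify(location):
    for area, poly in ROADGRAPH_AREAS.items():
        if point_in_polygon(location, poly):
            return AREA_TO_OBJECT[area]
    return "unknown"
```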
Abstract:
A vehicle configured to operate in an autonomous mode may engage in an obstacle evaluation technique that includes employing a sensor system to collect data relating to a plurality of obstacles, identifying from the plurality of obstacles an obstacle pair including a first obstacle and a second obstacle, engaging in an evaluation process by comparing the data collected for the first obstacle to the data collected for the second obstacle, and in response to engaging in the evaluation process, making a determination of whether the first obstacle and the second obstacle are two separate obstacles.
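One plausible evaluation process is to compare the obstacles' positions and velocities: two detections that nearly touch and move together are likely one object. The thresholds and field names below are assumptions for illustration:

```python
import math

# Illustrative sketch: decide whether an obstacle pair is really one
# object by comparing collected position and velocity data.

def same_obstacle(a, b, max_gap=1.0, max_speed_diff=0.5):
    """True if the pair is close enough and moving similarly enough."""
    gap = math.dist(a["position"], b["position"]) - a["radius"] - b["radius"]
    speed_diff = math.dist(a["velocity"], b["velocity"])
    return gap <= max_gap and speed_diff <= max_speed_diff

# A long truck seen as two returns, and an unrelated cyclist
truck_front = {"position": (0.0, 0.0), "radius": 1.5, "velocity": (10.0, 0.0)}
truck_rear = {"position": (3.2, 0.0), "radius": 1.5, "velocity": (10.0, 0.0)}
cyclist = {"position": (8.0, 2.0), "radius": 0.5, "velocity": (4.0, 0.0)}
```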
Abstract:
A method is provided for processing an image in which only parts of the image that appear above a point on a horizon line are analyzed to identify an object. In one embodiment, the distance between the object and a vehicle is determined, and at least one of the speed and direction of the vehicle is changed when it is determined that the distance is less than the range of a sensor. The method is not limited to vehicular applications; it may be used in any application where computer vision is used to identify objects in an image.
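A toy sketch of the two steps described above, restricting analysis to rows above the horizon and slowing the vehicle when an object is closer than the sensor range. The row-list image, the fixed slowdown factor, and the function names are assumptions:

```python
# Hypothetical sketch: analyze only the image rows above the horizon,
# then reduce speed if a detected object is within the sensor's range.
# The image is modeled as a simple list of pixel rows.

def rows_above_horizon(image_rows, horizon_row):
    """Keep only rows with index less than the horizon row."""
    return image_rows[:horizon_row]

def adjust_speed(current_speed, object_distance, sensor_range, factor=0.5):
    """Reduce speed when the object is closer than the sensor range."""
    if object_distance < sensor_range:
        return current_speed * factor
    return current_speed

image = [["sky"] * 4, ["sign"] * 4, ["road"] * 4, ["road"] * 4]
upper = rows_above_horizon(image, horizon_row=2)   # "sky" and "sign" rows
new_speed = adjust_speed(20.0, object_distance=30.0, sensor_range=60.0)
```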
Abstract:
The present invention relates to annotating images. In an embodiment, the present invention enables users to create annotations corresponding to three-dimensional objects while viewing two-dimensional images. In one embodiment, this is achieved by projecting a selecting object onto a three-dimensional model created from a plurality of two-dimensional images. The selecting object is input by a user while viewing a first image corresponding to a portion of the three-dimensional model. A location corresponding to the projection on the three-dimensional model is determined, and content entered by the user while viewing the first image is associated with the location. The content is stored together with the location information to form an annotation. The annotation can be retrieved and displayed together with other images corresponding to the location.
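The core mechanism above, projecting a 2D selection onto the 3D model, storing content at the resulting location, and retrieving it for other views, might be sketched as follows. A flat ground plane stands in for the reconstructed model, and all names and thresholds are hypothetical:

```python
import math

# Illustrative sketch: intersect the viewing ray through the user's
# selection with a plane (standing in for the 3D model), store the
# annotation at that location, then retrieve it for nearby views.

def project_to_plane(camera_pos, ray_dir, plane_z=0.0):
    """Intersect a viewing ray with the horizontal plane z = plane_z."""
    cx, cy, cz = camera_pos
    dx, dy, dz = ray_dir
    t = (plane_z - cz) / dz
    return (cx + t * dx, cy + t * dy, plane_z)

annotations = []

def annotate(camera_pos, ray_dir, content):
    """Store user content at the projected 3D location."""
    loc = project_to_plane(camera_pos, ray_dir)
    annotations.append({"location": loc, "content": content})

def annotations_near(point, max_dist=2.0):
    """Retrieve content stored near a location, for display in other images."""
    return [a["content"] for a in annotations
            if math.dist(a["location"], point) <= max_dist]

# User selects a point while viewing the first image
annotate((0, 0, 10), (1, 0, -1), "fire hydrant")
```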
Abstract:
A system and method of displaying transitions between street level images is provided. In one aspect, the system and method create a plurality of polygons that are both textured with images from a 2D street level image and associated with 3D positions corresponding to the objects contained in the image. These polygons, in turn, are rendered from different perspectives to convey the appearance of moving among the objects contained in the original image.
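Re-rendering the positioned polygons from a new viewpoint can be illustrated with a bare pinhole projection; the camera model, focal length, and function names below are assumptions:

```python
# Illustrative sketch: re-project a polygon's 3D vertices from a new
# camera position with a simple pinhole model (camera looks down +z).
# Moving the camera between frames conveys motion through the scene.

def project_vertex(vertex, camera_pos, focal=1.0):
    """Project one 3D vertex into the image plane."""
    x, y, z = (v - c for v, c in zip(vertex, camera_pos))
    return (focal * x / z, focal * y / z)

def project_polygon(vertices, camera_pos):
    return [project_vertex(v, camera_pos) for v in vertices]

# A textured quad 10 units ahead of the original camera
quad = [(-1, -1, 10), (1, -1, 10), (1, 1, 10), (-1, 1, 10)]
near = project_polygon(quad, (0, 0, 0))
closer = project_polygon(quad, (0, 0, 5))  # camera moved toward the quad
```

As the camera advances, the quad's projected footprint grows, which is what makes the transition read as forward motion.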
Abstract:
A method and apparatus are provided for optimizing one or more object detection parameters used by an autonomous vehicle to detect objects in images. The autonomous vehicle may capture the images using one or more sensors. The autonomous vehicle may then determine object labels and their corresponding object label parameters for the detected objects. The captured images and the object label parameters may be communicated to an object identification server. The object identification server may request that one or more reviewers identify objects in the captured images. The object identification server may then compare the identification of objects by reviewers with the identification of objects by the autonomous vehicle. Depending on the results of the comparison, the object identification server may recommend or perform the optimization of one or more of the object detection parameters.
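The server-side comparison step might be sketched as a per-image agreement score between the vehicle's labels and the reviewers' labels, with a recommendation triggered below a threshold. The score, threshold, and names are illustrative assumptions:

```python
# Hypothetical sketch: compare vehicle-assigned object labels against
# reviewer-assigned labels and recommend re-tuning detection
# parameters when agreement falls below a threshold.

def agreement(vehicle_labels, reviewer_labels):
    """Fraction of images on which the two label sets match exactly."""
    matches = sum(set(v) == set(r)
                  for v, r in zip(vehicle_labels, reviewer_labels))
    return matches / len(vehicle_labels)

def recommend_optimization(vehicle_labels, reviewer_labels, threshold=0.9):
    """True when agreement is too low, prompting parameter optimization."""
    return agreement(vehicle_labels, reviewer_labels) < threshold

# Per-image labels: the vehicle disagrees with reviewers on one image
vehicle = [["car"], ["car", "sign"], ["pedestrian"]]
reviewer = [["car"], ["car", "sign"], ["cyclist"]]
```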
Abstract:
The present invention relates to using image content to facilitate navigation in panoramic image data. In an embodiment, a computer-implemented method for navigating in panoramic image data includes: (1) determining an intersection of a ray and a virtual model, wherein the ray extends from a camera viewport of an image and the virtual model comprises a plurality of facade planes; (2) retrieving a panoramic image; (3) orienting the panoramic image to the intersection; and (4) displaying the oriented panoramic image.
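Step (1), intersecting the viewing ray with a facade plane of the virtual model, can be sketched directly; the point-and-normal plane representation and function name are assumptions:

```python
# Illustrative sketch of step (1): intersect a viewing ray from the
# camera viewport with a facade plane given by a point and a normal.

def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Return the intersection point, or None if there is none."""
    denom = sum(d * n for d, n in zip(direction, plane_normal))
    if abs(denom) < 1e-9:
        return None  # ray parallel to the facade plane
    diff = [p - o for p, o in zip(plane_point, origin)]
    t = sum(d * n for d, n in zip(diff, plane_normal)) / denom
    if t < 0:
        return None  # facade is behind the camera
    return tuple(o + t * d for o, d in zip(origin, direction))

# Camera at the origin looking down +x toward a facade plane x = 5
hit = ray_plane_intersection((0, 0, 0), (1, 0, 0), (5, 0, 0), (1, 0, 0))
```

The resulting intersection point is what the retrieved panorama would be oriented toward in steps (3) and (4).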