Abstract:
A vehicle may receive one or more images of an environment of the vehicle. The vehicle may also receive a map of the environment. The vehicle may also match at least one feature in the one or more images with corresponding one or more features in the map. The vehicle may also identify a given area in the one or more images that corresponds to a portion of the map that is within a threshold distance of the one or more features. The vehicle may also compress the one or more images to include a lower level of detail in areas of the one or more images other than the given area. The vehicle may also provide the compressed images to a remote system, and responsively receive operation instructions from the remote system.
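A minimal sketch of this kind of region-of-interest compression, assuming the area matched against the map is already available as a bounding box; everything outside that box is blurred before JPEG encoding so that only the given area retains full detail (OpenCV/NumPy; the function name and parameters are illustrative, not the disclosed implementation).

```python
import cv2
import numpy as np

def compress_with_roi(image, roi, jpeg_quality=60, blur_ksize=31):
    """Keep detail inside `roi` (x, y, w, h); degrade detail elsewhere,
    then JPEG-encode the result for transmission to a remote system."""
    x, y, w, h = roi
    degraded = cv2.GaussianBlur(image, (blur_ksize, blur_ksize), 0)
    degraded[y:y + h, x:x + w] = image[y:y + h, x:x + w]  # restore full detail in the ROI
    ok, encoded = cv2.imencode(".jpg", degraded,
                               [int(cv2.IMWRITE_JPEG_QUALITY), jpeg_quality])
    if not ok:
        raise RuntimeError("JPEG encoding failed")
    return encoded.tobytes()
```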
Abstract:
The present disclosure is directed to an autonomous vehicle (a first vehicle) having a vehicle control system. The vehicle control system includes a vehicle detection system. The vehicle detection system receives an image of a field of view of the first vehicle and identifies a region-pair in the image with a sliding-window filter. The region-pair is made up of a first region and a second region, each determined based on a color of pixels within the sliding-window filter. The vehicle detection system also determines a potential second vehicle in the image based on the region-pair. In response to determining the potential second vehicle in the image, the vehicle detection system performs a multi-stage classification of the image to determine whether the second vehicle is present in the image. Additionally, the vehicle detection system provides instructions to control the first vehicle based at least on the determined second vehicle.
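An illustrative sketch of the sliding-window region-pair search and the cascaded classification described above (NumPy; the window size, stride, and color test are hypothetical placeholders rather than the disclosed parameters):

```python
import numpy as np

def find_region_pairs(image, win=(32, 64), stride=16, color_test=None):
    """Slide a window across the image; split each window into two halves
    (a candidate region-pair) and keep windows whose halves satisfy a
    color-based test.  `color_test` receives the two mean-color vectors."""
    h, w, _ = image.shape
    wh, ww = win
    candidates = []
    for y in range(0, h - wh + 1, stride):
        for x in range(0, w - ww + 1, stride):
            window = image[y:y + wh, x:x + ww].astype(np.float32)
            left = window[:, : ww // 2].reshape(-1, 3).mean(axis=0)
            right = window[:, ww // 2 :].reshape(-1, 3).mean(axis=0)
            if color_test is None or color_test(left, right):
                candidates.append((x, y, ww, wh))
    return candidates

def multi_stage_classify(image, candidate, stages):
    """Run a cascade of classifier stages; reject early on any failure."""
    return all(stage(image, candidate) for stage in stages)
```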
Abstract:
An autonomous vehicle is configured to detect an active turn signal indicator on another vehicle. An image-capture device of the autonomous vehicle captures an image of a field of view of the autonomous vehicle. The autonomous vehicle captures the image with a short exposure to emphasize objects having brightness above a threshold. Additionally, a bounding area for a second vehicle located within the image is determined. The autonomous vehicle identifies a group of pixels within the bounding area based on a first color of the group of pixels. The autonomous vehicle also calculates an oscillation of an intensity of the group of pixels. Based on the oscillation of the intensity, the autonomous vehicle determines a likelihood that the second vehicle has a first active turn signal. Additionally, the autonomous vehicle is controlled based at least on the likelihood that the second vehicle has a first active turn signal.
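A small sketch of the oscillation measurement, assuming the mean intensity of the identified pixel group has already been sampled once per frame; the blink frequency band is an assumed typical range, not taken from the disclosure:

```python
import numpy as np

def blink_likelihood(intensities, fps, f_lo=1.0, f_hi=2.5):
    """Estimate how strongly the mean intensity of the candidate turn-signal
    pixels oscillates in a typical blinker band (f_lo..f_hi Hz, an assumed
    range).  Returns the fraction of spectral power falling in that band."""
    x = np.asarray(intensities, dtype=np.float64)
    x = x - x.mean()                      # remove the DC component
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    total = spectrum[1:].sum()            # ignore the DC bin
    return float(spectrum[band].sum() / total) if total > 0 else 0.0
```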
Abstract:
Example methods and systems for camera calibration using structure from motion techniques are described herein. Within examples, an autonomous vehicle may receive images from a vehicle camera system and may determine an image-based pose based on the images. To determine an image-based pose, an autonomous vehicle may perform various processes related to structure from motion, such as image matching and bundle adjustment. In addition, the vehicle may determine a sensor-based pose indicative of a position and orientation of the vehicle using information provided by vehicle sensors. The vehicle may align the image-based pose with the sensor-based pose to determine any adjustments to the position or orientation that may calibrate the cameras. In an example, a computing device of the vehicle may align the different poses using transforms, rotations, and/or scaling.
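One common way to perform such an alignment step is an Umeyama-style least-squares fit of scale, rotation, and translation between corresponding image-based and sensor-based camera positions; the sketch below shows that standard technique for illustration, not necessarily the disclosed method:

```python
import numpy as np

def align_similarity(image_pts, sensor_pts):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    mapping image-based positions onto sensor-based positions.  Both inputs
    are (N, 3) arrays of corresponding camera positions."""
    P = np.asarray(image_pts, dtype=np.float64)
    Q = np.asarray(sensor_pts, dtype=np.float64)
    mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)
    Pc, Qc = P - mu_p, Q - mu_q
    U, S, Vt = np.linalg.svd(Qc.T @ Pc / len(P))
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:         # avoid a reflection
        D[2, 2] = -1.0
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / Pc.var(axis=0).sum()
    t = mu_q - s * R @ mu_p
    return s, R, t                        # sensor ≈ s * R @ image + t
```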
Abstract:
To align a first digital 3D model of a scene with a second digital 3D model of the scene, real-world photographs of the scene are received and synthetic photographs of the first digital 3D model are generated according to different camera poses of a virtual camera. Using the real-world photographs and the synthetic photographs as input photographs, points in a coordinate system of the second digital 3D model are generated. Camera poses of the input photographs in the coordinate system of the second 3D model also are determined. Alignment data for aligning the first 3D model with the second 3D model is generated using the camera poses of the virtual camera and the camera poses corresponding to the input photographs.
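A minimal sketch of how alignment data can be derived from a camera pose that is known in both coordinate systems, e.g. a virtual camera whose synthetic photograph is later localized in the second model's frame; the 4x4 pose composition is illustrative, and a robust pipeline would fit one transform (possibly with scale) over many such pose pairs, for instance with a least-squares alignment like the one sketched above:

```python
import numpy as np

def alignment_from_pose_pair(pose_in_model1, pose_in_model2):
    """Given the 4x4 camera-to-world pose of the same (synthetic) photograph
    expressed in each model's coordinate system, return the transform that
    maps model-1 coordinates into model-2 coordinates.

    If X1 = pose_in_model1 @ X_cam and X2 = pose_in_model2 @ X_cam, then
    X2 = (pose_in_model2 @ inv(pose_in_model1)) @ X1."""
    return pose_in_model2 @ np.linalg.inv(pose_in_model1)
```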
Abstract:
Methods and systems for detection of a construction zone sign are described. A computing device, configured to control a vehicle, may be configured to receive, from an image-capture device coupled to the computing device, images of the vicinity of the road on which the vehicle is travelling. Also, the computing device may be configured to determine image portions in the images that may depict sides of the road at a predetermined height range. Further, the computing device may be configured to detect a construction zone sign in the image portions, and determine a type of the construction zone sign. Accordingly, the computing device may be configured to modify a control strategy associated with a driving behavior of the vehicle, and control the vehicle based on the modified control strategy.
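An illustrative sketch of extracting roadside image portions and applying a crude color-based construction-sign cue (OpenCV/NumPy); the crop geometry, HSV thresholds, and area cutoff are assumptions, not the disclosed detector:

```python
import cv2
import numpy as np

# Approximate HSV range for the orange of construction signage (assumed values).
ORANGE_LO = np.array([5, 120, 120], dtype=np.uint8)
ORANGE_HI = np.array([25, 255, 255], dtype=np.uint8)

def roadside_portions(image, side_frac=0.3):
    """Crop left and right strips of the image as a crude stand-in for the
    portions depicting the sides of the road at a given height range."""
    h, w, _ = image.shape
    cut = int(w * side_frac)
    return [image[:, :cut], image[:, w - cut:]]

def looks_like_construction_sign(portion, min_area=400):
    """Return True if the portion contains a sufficiently large orange blob."""
    hsv = cv2.cvtColor(portion, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, ORANGE_LO, ORANGE_HI)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return any(cv2.contourArea(c) >= min_area for c in contours)
```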
Abstract:
Methods and systems for real-time road flare detection using templates and appropriate color spaces are described. A computing device of a vehicle may be configured to receive an image of an environment of the vehicle, the image comprising a plurality of pixels. The computing device may be configured to identify given pixels in the plurality of pixels having one or more of: (i) a red color value greater than a green color value, and (ii) the red color value greater than a blue color value. Further, the computing device may be configured to make a comparison between one or more characteristics of a shape of an object represented by the given pixels in the image and corresponding one or more characteristics of a predetermined shape of a road flare, and determine a likelihood that the object represents the road flare.
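A brief sketch of the red-dominance pixel test and a template shape comparison (OpenCV/NumPy); the mapping from shape distance to a likelihood-style score is a hypothetical choice, not the disclosed formula:

```python
import cv2
import numpy as np

def flare_candidate_mask(image_bgr):
    """Select pixels whose red value exceeds both the green and blue values."""
    b, g, r = cv2.split(image_bgr)
    return ((r > g) & (r > b)).astype(np.uint8) * 255

def flare_likelihood(image_bgr, template_contour):
    """Compare the largest red-dominant blob against a template flare shape.
    Returns a score in (0, 1]; higher means a closer shape match."""
    mask = flare_candidate_mask(image_bgr)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0.0
    blob = max(contours, key=cv2.contourArea)
    distance = cv2.matchShapes(blob, template_contour, cv2.CONTOURS_MATCH_I1, 0.0)
    return 1.0 / (1.0 + distance)   # map shape distance to a likelihood-like score
```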
Abstract:
Methods and systems for the use of detected objects for image processing are described. A computing device autonomously controlling a vehicle may receive images of the environment surrounding the vehicle from an image-capture device coupled to the vehicle. In order to process the images, the computing device may receive information indicating characteristics of objects in the images from one or more sources coupled to the vehicle. Examples of sources may include RADAR, LIDAR, a map, sensors, a global positioning system (GPS), or other cameras. The computing device may use the information indicating characteristics of the objects to process the received images, including determining the approximate locations of the objects within the images. Further, while processing an image, the computing device may use the information from the sources to determine which portions of the image to focus on, allowing the computing device to determine a control strategy based on those portions.
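A minimal sketch of using an object location reported by another sensor to pick an image region to focus on, via a pinhole projection; the intrinsic matrix K and the assumed object size are placeholders, not values from the disclosure:

```python
import numpy as np

def project_to_roi(obj_xyz_cam, K, half_size_m=1.0):
    """Project a detected object's 3D position (camera frame, meters) into the
    image with a pinhole model K, and return an approximate ROI around it.
    The ROI half-width scales with half_size_m / depth (an assumed object size)."""
    X, Y, Z = obj_xyz_cam
    if Z <= 0:
        return None                        # object is behind the camera
    u = K[0, 0] * X / Z + K[0, 2]
    v = K[1, 1] * Y / Z + K[1, 2]
    half_px = K[0, 0] * half_size_m / Z    # metric size projected to pixels
    return (int(u - half_px), int(v - half_px),
            int(2 * half_px), int(2 * half_px))  # (x, y, w, h)
```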