Abstract:
A method for autonomously aligning a tow hitch ball on a towing vehicle with a trailer drawbar on a trailer through a human-machine interface (HMI) assisted visual servoing process. The method includes providing rearview images from a rearview camera, touching the tow ball on a display to register the location of the tow ball in the image, and touching the drawbar on the display to register the location of a target where the tow ball will be properly aligned with the drawbar. The method provides a template pattern around the target on the image and autonomously moves the vehicle so that the tow ball moves toward the target. The method predicts a new location of the target as the vehicle moves and identifies the target in new images by comparing the previous template pattern with an image patch around the predicted location.
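A minimal sketch of the template-matching step, in Python with NumPy, assuming grayscale frames; the function name, the normalized-cross-correlation score, and the search-radius parameter are illustrative choices, not taken from the patent:

```python
import numpy as np

def locate_target(image, template, predicted_xy, search_radius=20):
    """Find the target in a new frame: compare the stored template against
    image patches around the predicted location (normalized cross-correlation)."""
    th, tw = template.shape
    px, py = predicted_xy                      # predicted top-left corner of the target
    t = template - template.mean()
    t_norm = np.linalg.norm(t) + 1e-9
    best_score, best_xy = -1.0, predicted_xy
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            y, x = py + dy, px + dx
            if y < 0 or x < 0 or y + th > image.shape[0] or x + tw > image.shape[1]:
                continue                       # candidate patch falls outside the frame
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            score = float((p * t).sum()) / ((np.linalg.norm(p) + 1e-9) * t_norm)
            if score > best_score:
                best_score, best_xy = score, (x, y)
    return best_xy, best_score
```

As the vehicle moves, the match found in each frame would refresh the predicted location used for the next frame's search window.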
Abstract:
An apparatus for capturing an image includes a plurality of lens elements coaxially encompassed within a lens housing. One of the lens elements includes an aspheric lens element having a surface profile configured to enhance a desired region of a captured image. At least one glare-reducing element coaxial with the plurality of lens elements receives light subsequent to the light sequentially passing through each of the lens elements. An imaging chip receives the light subsequent to the light passing through the at least one glare-reducing element. The imaging chip includes a plurality of green, blue and red pixels.
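This abstract describes hardware rather than an algorithm, but the light-path ordering can be sketched as a toy model; everything below (class name, intensity-only rays, attenuation factors) is a hypothetical illustration of the sequence lenses, then glare-reducing element, then imaging chip:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ImagingApparatus:
    """Toy model of the described ordering: light passes through each lens
    element in sequence, then the glare-reducing element, then the chip."""
    lens_elements: List[Callable[[float], float]]   # each maps light intensity in -> out
    glare_reducer: Callable[[float], float]
    chip: Callable[[float], float]

    def capture(self, light: float) -> float:
        for element in self.lens_elements:          # sequential pass through the lenses
            light = element(light)
        light = self.glare_reducer(light)           # glare reduction after the lens stack
        return self.chip(light)                     # imaging chip receives the light last

# e.g. three lens elements, an attenuating glare filter, and a linear sensor:
camera = ImagingApparatus([lambda x: x * 0.98] * 3, lambda x: x * 0.9, lambda x: x)
print(camera.capture(1.0))
```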
Abstract:
A method for displaying a captured image on a display device. A real image is captured by a vision-based imaging device. A virtual image is generated from the captured real image based on a mapping by a processor. The mapping utilizes a virtual camera model with a non-planar imaging surface. The virtual image formed on the non-planar imaging surface of the virtual camera model is projected to the display device.
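A sketch of one possible non-planar virtual camera model, assuming a cylindrical imaging surface (the patent leaves the surface shape general); the function name and parameterization are illustrative:

```python
import numpy as np

def cylinder_view_ray(u, v, width, height, hfov_rad):
    """Map a display pixel (u, v) to a viewing ray through a cylindrical
    virtual imaging surface, one common non-planar choice."""
    f = width / hfov_rad                              # cylinder radius, in pixels
    theta = (u - (width - 1) / 2.0) / f               # azimuth along the cylinder
    y = v - (height - 1) / 2.0                        # vertical offset on the surface
    ray = np.array([f * np.sin(theta), y, f * np.cos(theta)])
    return ray / np.linalg.norm(ray)                  # unit ray in camera coordinates
```

The mapping would then sample the real captured image along each such ray to fill in the corresponding virtual-image pixel before projection to the display.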
Abstract:
Examples of techniques for dynamically selecting a batch size used in vehicle camera image processing are disclosed. In one example implementation, a method includes generating, by a processing device, a batch table and a mode table. The method further includes determining, by the processing device, image processing performance requirements for a current mode of a vehicle using the mode table, the vehicle comprising a plurality of cameras configured to capture a plurality of images. The method further includes selecting, by the processing device, a batch size and a processing frequency based at least in part on the image processing performance requirements for the current mode of the vehicle. The method further includes processing, by an accelerator, at least a subset of the plurality of images based at least in part on the batch size and processing frequency.
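A sketch of the selection step under assumed table contents; the patent defines a batch table and a mode table but not their values, so the numbers, mode names, and frequency rule below are hypothetical:

```python
# Hypothetical table contents; the patent specifies the tables, not these values.
BATCH_TABLE = {1: 9.0, 2: 5.5, 4: 3.8, 8: 3.1}   # batch size -> per-image time (ms)
MODE_TABLE = {                                    # mode -> (max latency ms, required fps)
    "parking": (40.0, 30),
    "highway": (25.0, 15),
}

def select_batch_and_frequency(mode):
    """Choose the largest batch size whose whole-batch latency still meets the
    current mode's requirement, then derive how often batches must run."""
    max_latency_ms, required_fps = MODE_TABLE[mode]
    batch_size = 1
    for batch, per_image_ms in sorted(BATCH_TABLE.items()):
        if batch * per_image_ms <= max_latency_ms:
            batch_size = batch
    frequency_hz = required_fps / batch_size      # batch dispatch rate for the accelerator
    return batch_size, frequency_hz

print(select_batch_and_frequency("parking"))      # e.g. (8, 3.75)
```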
Abstract:
Techniques for road scene primitive detection using a vehicle camera system are disclosed. In one example implementation, a computer-implemented method includes receiving, by a processing device having at least two parallel processing cores, at least one image from a camera associated with a vehicle on a road. The processing device generates a plurality of views from the at least one image that include a feature primitive. The feature primitive is indicative of a vehicle or another road scene entity of interest. Using each of the parallel processing cores, a set of primitives is identified from one or more of the plurality of views, using one or more of machine learning and classic computer vision techniques. The processing device then outputs result primitives based on the primitives identified from the multiple views.
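A sketch of the fan-out over parallel cores, using Python's thread pool as a stand-in for the processing device's cores; the stub detector and the merge rule are assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def detect_primitives(view):
    """Per-view detector; the patent allows machine learning or classic
    computer vision here, so this is left as a stub."""
    return []   # list of primitives found in this view

def detect_across_views(views, n_cores=2):
    # Fan one detection task per view out across the parallel cores...
    with ThreadPoolExecutor(max_workers=n_cores) as pool:
        per_view = list(pool.map(detect_primitives, views))
    # ...then merge the per-view detections into the output result primitives.
    return [p for primitives in per_view for p in primitives]
```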
Abstract:
Systems, methods, and apparatuses are provided for detecting surface conditions. A camera captures an image scene that includes a set of regions of interest (ROIs), and a processor is configured to receive the image scene to: extract at least a first and a second ROI from the set of ROIs; associate the first ROI with an above-horizon region and the second ROI with a surface region; analyze the first ROI and the second ROI in parallel, for a condition related to ambient lighting in the first ROI and for an effect related to the ambient lighting in the second ROI; and extract from the first ROI features of the ambient-lighting condition and from the second ROI features of the effect of the ambient lighting on the surface region.
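A sketch of the parallel two-ROI analysis, assuming a grayscale NumPy image split at a known horizon row; the specific features computed below are illustrative, not the patent's feature set:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def lighting_features(roi):
    """Features of the ambient-lighting condition in the above-horizon ROI
    (illustrative statistics only)."""
    return {"mean": float(roi.mean()), "std": float(roi.std())}

def surface_features(roi):
    """Features of the lighting's effect on the surface ROI."""
    return {"mean": float(roi.mean()), "glare_frac": float((roi > 0.9).mean())}

def analyze(image, horizon_row):
    sky_roi, surface_roi = image[:horizon_row], image[horizon_row:]
    with ThreadPoolExecutor(max_workers=2) as pool:   # analyze both ROIs in parallel
        sky_future = pool.submit(lighting_features, sky_roi)
        surface_future = pool.submit(surface_features, surface_roi)
        return sky_future.result(), surface_future.result()
```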
Abstract:
A vehicle includes a plurality of on-vehicle cameras, and a controller executes a method to evaluate a travel surface by capturing images of the fields of view of the respective cameras. Corresponding regions of interest are identified in the images, each associated with the portion of the respective camera's field of view that includes the travel surface. Portions of the images are extracted, each associated with its region of interest, and one extracted portion of the respective image includes the sky. The extracted portions of the images are compiled into a composite image datafile, and an image analysis of the composite image datafile is executed to determine a travel surface state. The travel surface state is communicated to another controller.
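A sketch of the compilation step, assuming NumPy frames and per-camera ROIs given as (top, bottom, left, right) bounds; side-by-side tiling is one assumed layout for the composite datafile:

```python
import numpy as np

def build_composite(frames, rois):
    """Crop each camera frame to its region of interest (travel surface, or sky
    for one camera) and tile the crops side by side into one composite array."""
    crops = []
    for frame, (top, bottom, left, right) in zip(frames, rois):
        crops.append(frame[top:bottom, left:right])
    height = min(c.shape[0] for c in crops)          # normalize heights before tiling
    crops = [c[:height] for c in crops]
    return np.hstack(crops)                          # single composite image datafile
```

The composite would then be passed to a single image-analysis step, so the travel surface state is determined once per cycle rather than once per camera.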
Abstract:
A system and method are provided for detecting and identifying elongated objects relative to a host vehicle. The method includes detecting objects relative to the host vehicle using a plurality of object detection devices, identifying patterns in detection data that correspond to an elongated object, wherein the detection data includes data fused from at least two of the plurality of object detection devices, determining initial object parameter estimates for the elongated object using each of the plurality of object detection devices, calculating object parameter estimates for the elongated object by fusing the initial object parameter estimates from each of the plurality of object detection devices, and determining an object type classification for the elongated object by fusing the initial object parameter estimates from each of the plurality of object detection devices.
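A sketch of the parameter-fusion step using inverse-variance weighting, one standard fusion rule; the patent does not commit to a specific estimator, so this choice and the (value, variance) representation are assumptions:

```python
def fuse_estimates(estimates):
    """Fuse per-device initial parameter estimates, each a (value, variance)
    pair, into one estimate via inverse-variance weighting."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused_value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    fused_variance = 1.0 / total                 # fused estimate is more certain
    return fused_value, fused_variance

# e.g. fusing a radar and a camera estimate of an elongated object's length (m):
print(fuse_estimates([(12.1, 0.5), (11.6, 0.8)]))
```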
Abstract:
A vehicle, system, and method of navigating a vehicle. The vehicle and system include a digital camera for capturing a target image of a target domain of the vehicle, and a processor. The processor is configured to: determine a target segmentation loss for training a neural network to perform semantic segmentation of the target image in the target domain; determine a value of a pseudo-label of the target image by reducing the target segmentation loss while providing supervision of the training over the target domain; perform semantic segmentation on the target image using the trained neural network to segment the target image and classify an object in the target image; and navigate the vehicle based on the classified object in the target image.
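A sketch of one common way to realize the pseudo-label step: per-pixel argmax self-training with a confidence threshold, which assigns each pixel the label value minimizing its cross-entropy term; the threshold and the ignore-index convention are assumptions beyond the abstract:

```python
import numpy as np

def pseudo_labels(probs, threshold=0.9, ignore_index=255):
    """Derive pseudo-labels for an unlabeled target-domain image from the
    network's softmax outputs, keeping only confident pixels so the labels
    can supervise training over the target domain."""
    # probs: (H, W, C) softmax outputs of the segmentation network.
    labels = probs.argmax(axis=-1)                 # per-pixel loss-minimizing class
    confidence = probs.max(axis=-1)
    labels[confidence < threshold] = ignore_index  # leave uncertain pixels unsupervised
    return labels
```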
Abstract:
A method is used to evaluate a camera-related subsystem in a digital network, e.g., aboard a vehicle or fleet, by receiving, via a camera diagnostic module (CDM), sensor reports from the subsystem and possibly from a door sensor, rain/weather sensor, or other sensor. The CDM includes data tables corresponding to subsystem-specific fault modes. The method includes evaluating performance of the camera-related subsystem by comparing potential fault indicators in the received sensor reports to one of the data tables, and determining a pattern of fault indicators in the reports. The pattern is indicative of a health characteristic of the camera-related subsystem. A control action is executed with respect to the digital network in response to the health characteristic, including recording a diagnostic or prognostic code indicative of the health characteristic. The digital network and vehicle are also disclosed.
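A sketch of the pattern-matching idea, with a hypothetical fault-mode table and indicator names; the CDM's actual table contents and matching rule are not given in the abstract:

```python
# Hypothetical fault-mode table: fault mode -> set of indicators that signal it.
FAULT_TABLE = {
    "lens_obstruction": {"low_contrast", "static_frame"},
    "connector_fault":  {"frame_drop", "bus_error"},
}

def evaluate_reports(reports):
    """Compare fault indicators in received sensor reports against each fault
    mode's entry and tally recurring patterns; a mode whose indicators keep
    appearing suggests a health characteristic worth a diagnostic code."""
    counts = {mode: 0 for mode in FAULT_TABLE}
    for report in reports:                      # each report is a set of indicator strings
        for mode, indicators in FAULT_TABLE.items():
            if indicators <= report:            # all of the mode's indicators present
                counts[mode] += 1
    return counts

reports = [{"low_contrast", "static_frame"}, {"frame_drop"},
           {"low_contrast", "static_frame"}]
print(evaluate_reports(reports))   # {'lens_obstruction': 2, 'connector_fault': 0}
```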