Abstract:
A method for generating a glare-reduced image from images captured by a camera device of a subject vehicle includes obtaining a short-exposure image and a long-exposure image and generating a resulting high dynamic range (HDR) image based on the short-exposure and long-exposure images. Pixel values are monitored within both the short- and long-exposure images. A light source region is identified within both the short- and long-exposure images based on the monitored pixel values. A glaring region is identified based on the identified light source region and either calculated pixel ratios or calculated pixel differences between the monitored pixel values of the long- and short-exposure images. The identified glaring region in the resulting HDR image is modified using the identified light source region within the short-exposure image. The glare-reduced image is generated based on the modified glaring region in the resulting HDR image.
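The ratio test and pixel replacement described above can be sketched as follows; the threshold values, the ratio limit, and the list-of-lists image representation are illustrative assumptions, not details taken from the abstract.

```python
# Sketch of the glare-identification step: a pixel belongs to a light
# source if it is near saturation even in the short-exposure image, and
# to a glaring region if the long/short pixel ratio is abnormally low
# (the long exposure has clipped while the short one has not).
# SAT and RATIO_LIMIT are assumed values for illustration.

SAT = 250          # near-saturation pixel value (assumed)
RATIO_LIMIT = 2.0  # long/short ratio below which a clipped pixel is glare

def find_regions(short_img, long_img):
    light, glare = set(), set()
    for r, row in enumerate(short_img):
        for c, s in enumerate(row):
            l = long_img[r][c]
            if s >= SAT:                        # saturated at short exposure
                light.add((r, c))
            elif l >= SAT and (l / max(s, 1)) < RATIO_LIMIT:
                glare.add((r, c))               # long exposure clipped: glare
    return light, glare

def reduce_glare(hdr_img, short_img, glare):
    # Replace glaring HDR pixels with the corresponding short-exposure values.
    out = [row[:] for row in hdr_img]
    for r, c in glare:
        out[r][c] = short_img[r][c]
    return out
```

In this sketch the light source region stays untouched, while the surrounding glare halo is rebuilt from the short exposure, which is the only capture where that halo is not clipped.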
Abstract:
A system and method for determining when to display frontal curb view images to a driver of a vehicle, and what types of images to display. A variety of factors—such as vehicle speed, GPS/location data, the existence of a curb in forward-view images, and vehicle driving history—are evaluated as potential triggers for the curb view display, which is intended for situations where the driver is pulling the vehicle into a parking spot that is bounded in front by a curb or other structure. When the forward curb view display is triggered, a second evaluation is performed to determine which image or images will provide the best view of the vehicle's position relative to the curb. The selected images are digitally synthesized or enhanced, and displayed on a console-mounted or in-dash display device.
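The two-stage decision (trigger, then view selection) could be sketched as below; the factor names, speed threshold, distance bands, and view labels are all illustrative assumptions rather than details from the abstract.

```python
# Sketch of the two-stage curb-view decision. Stage 1 decides whether to
# trigger the display from parking-related factors; stage 2 picks which
# image(s) best show the vehicle's position relative to the curb.
# All thresholds and view names are assumed for illustration.

def should_show_curb_view(speed_kph, curb_detected,
                          near_known_parking, parked_here_before):
    """Trigger only at parking speeds, when any factor suggests a curb ahead."""
    if speed_kph > 15:                      # too fast for a parking maneuver
        return False
    return curb_detected or near_known_parking or parked_here_before

def select_views(curb_distance_m):
    """Closer to the curb, favor a downward-looking synthesized view."""
    if curb_distance_m < 0.5:
        return ["top_down_synthesized"]
    if curb_distance_m < 2.0:
        return ["front_wide", "top_down_synthesized"]
    return ["front_wide"]
```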
Abstract:
An apparatus for capturing an image includes a plurality of lens elements coaxially encompassed within a lens housing. A split-sub-pixel imaging chip includes an IR-pass filter coating applied on selected sub-pixels. The sub-pixels include a long-exposure sub-pixel and a short-exposure sub-pixel for each of a plurality of green, blue, and red pixels.
Abstract:
A system and method for providing calibration and de-warping for ultra-wide FOV cameras. The method includes estimating intrinsic parameters, such as the focal length of the camera and an image center of the camera, using multiple measurements of near-optical-axis object points and a pinhole camera model. The method further includes estimating distortion parameters of the camera using an angular distortion model that defines an angular relationship between an incident optical ray passing an object point in an object space and an image point on an image plane that is an image of the object point on the incident optical ray. The method can include a parameter optimization process to refine the parameter estimation.
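Angular distortion models for ultra-wide FOV lenses are commonly expressed as a polynomial in the incident-ray angle; the sketch below contrasts such a model with the pinhole projection used for the near-axis intrinsic estimation. The focal length and polynomial coefficients are illustrative assumptions, not values from the abstract.

```python
import math

# Sketch of an angular distortion model of the kind the method estimates:
# the distorted image radius is a polynomial in the incident angle theta,
# while the pinhole model (r = f * tan(theta)) holds near the optical
# axis. F and K are assumed values for illustration.

F = 300.0                 # focal length in pixels (assumed)
K = [1.0, -0.05, 0.002]   # coefficients of odd powers of theta (assumed)

def image_radius(theta):
    """Distorted radius: r(theta) = f * (k0*theta + k1*theta^3 + k2*theta^5)."""
    return F * sum(k * theta ** (2 * i + 1) for i, k in enumerate(K))

def pinhole_radius(theta):
    """Undistorted pinhole projection, valid near the optical axis."""
    return F * math.tan(theta)
```

Near the axis the two projections agree to first order (tan θ ≈ θ), which is why near-axis object points suffice to estimate the focal length and image center before the distortion coefficients are fit.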
Abstract:
A system for analyzing images includes a processing device including a receiving module configured to receive an image associated with a target domain, and a domain adaptation module configured to characterize one or more features represented in the received image based on a domain adaptation model. The domain adaptation model is generated using a machine learning algorithm to train the domain adaptation model, and the machine learning algorithm is configured to train the domain adaptation model based on one or more source domain images associated with a source domain, one or more previously acquired images associated with the target domain, and acquired characterization data associated with the target domain. The system also includes an output module configured to output the received image with characterization data identifying one or more features characterized by the domain adaptation module.
Abstract:
Systems, methods, and apparatuses are provided for detecting surface conditions, which include: an image scene captured by a camera, wherein the image scene includes a set of a plurality of regions of interest (ROIs); and a processor configured to receive the image scene to: extract at least a first and a second ROI from the set of the plurality of ROIs of the image scene; associate the first ROI with an above-horizon region and the second ROI with a surface region; analyze the first ROI and the second ROI in parallel for a condition related to ambient lighting in the first ROI and for an effect related to the ambient lighting in the second ROI; and extract from the first ROI features of the condition of the ambient lighting and from the second ROI features of the effect of the ambient lighting on the surface region.
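The split-ROI analysis above can be sketched as follows; the mean-intensity features, the thresholds, and the class labels are illustrative assumptions standing in for whatever features the system actually extracts.

```python
# Sketch of the parallel ROI analysis: the above-horizon ROI is
# classified for an ambient-lighting condition, and the surface ROI for
# the effect of that lighting. Features and thresholds are assumed.

def mean_intensity(roi):
    vals = [p for row in roi for p in row]
    return sum(vals) / len(vals)

def classify_lighting(above_horizon_roi):
    """Condition in the first (above-horizon) ROI."""
    return "bright" if mean_intensity(above_horizon_roi) > 180 else "dim"

def classify_surface_effect(surface_roi, lighting):
    """Effect of the ambient lighting on the second (surface) ROI."""
    if lighting == "bright" and mean_intensity(surface_roi) > 200:
        return "glare_on_surface"
    return "no_glare"
```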
Abstract:
A method is used to evaluate a camera-related subsystem in a digital network, e.g., aboard a vehicle or fleet, by receiving, via a camera diagnostic module (CDM), sensor reports from the subsystem and possibly from a door sensor, rain/weather sensor, or other sensor. The CDM includes data tables corresponding to subsystem-specific fault modes. The method includes evaluating performance of the camera-related subsystem by comparing potential fault indicators in the received sensor reports to one of the data tables, and determining a pattern of fault indicators in the reports. The pattern is indicative of a health characteristic of the camera-related subsystem. A control action is executed with respect to the digital network in response to the health characteristic, including recording a diagnostic or prognostic code indicative of the health characteristic. The digital network and vehicle are also disclosed.
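The pattern-matching step (comparing fault indicators in sensor reports against a subsystem-specific data table) might look like the sketch below; the fault modes, indicator names, table contents, and counting rule are all illustrative assumptions.

```python
# Sketch of deriving a health characteristic from sensor reports: each
# report is a set of fault indicators, and a data table maps fault modes
# to the indicator patterns that imply them. A mode seen repeatedly is
# flagged. Table entries and the min_hits rule are assumed.

from collections import Counter

FAULT_TABLE = {
    # fault mode -> indicators whose co-occurrence suggests it (assumed)
    "lens_obstruction": {"low_contrast", "static_scene"},
    "connector_fault": {"frame_drop", "bus_error"},
}

def health_characteristic(reports, min_hits=3):
    """Count indicator patterns across reports; flag persistent fault modes."""
    hits = Counter()
    for report in reports:                    # each report: set of indicators
        for mode, pattern in FAULT_TABLE.items():
            if pattern <= report:             # all of the mode's indicators seen
                hits[mode] += 1
    flagged = [m for m, n in hits.items() if n >= min_hits]
    return ("degraded" if flagged else "healthy"), flagged
```

Requiring a repeated pattern rather than a single hit mirrors the abstract's emphasis on a *pattern* of fault indicators, so a one-off transient does not trip a diagnostic code.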
Abstract:
An autonomic vehicle control system is described, and includes a vehicle spatial monitoring system including a subject spatial sensor that is disposed to monitor a spatial environment proximal to the autonomous vehicle. A controller is in communication with the subject spatial sensor, and the controller includes a processor and a memory device including an instruction set. The instruction set is executable to evaluate the subject spatial sensor, which includes determining first, second, third, fourth, and fifth state-of-health (SOH) parameters associated with the subject spatial sensor, and determining an integrated SOH parameter for the subject spatial sensor based thereupon.
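One plausible way to fold the five SOH parameters into a single integrated value is sketched below; the abstract only states that an integrated SOH is determined from the five parameters, so the weighted-mean-with-worst-case rule and the weights are illustrative assumptions.

```python
# Sketch of combining five per-sensor SOH parameters (each in [0, 1])
# into one integrated SOH. The blending rule and weights are assumed:
# a weighted mean is capped by the worst parameter so one failing
# aspect cannot be hidden by four healthy ones.

WEIGHTS = [0.3, 0.2, 0.2, 0.15, 0.15]  # assumed relative importance

def integrated_soh(soh):
    assert len(soh) == len(WEIGHTS)
    mean = sum(w * s for w, s in zip(WEIGHTS, soh))
    return min(mean, 2 * min(soh))     # worst parameter caps the result
```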
Abstract:
A method for determining a thickness of water on a path of travel. A plurality of images of a surface of the path of travel is captured by an image capture device over a predetermined sampling period. A plurality of wet surface detection techniques are applied to each of the images. A detection rate is determined in real-time for each wet surface detection technique. A detection rate trigger condition is determined as a function of a velocity of the vehicle for each detection rate. The real-time determined detection rate trigger conditions are compared to predetermined detection rate trigger conditions in a classification module to identify a matching results pattern. A water film thickness associated with the matching results pattern is identified in the classification module. A water film thickness signal is provided to a control device. The control device applies the water film thickness signal to mitigate the wet surface condition.
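The classification step can be sketched as below: per-technique detection rates become speed-dependent trigger conditions, and the resulting boolean pattern is matched against stored patterns, each mapped to a film thickness. All thresholds, patterns, and thickness values are illustrative assumptions.

```python
# Sketch of the detection-rate classification: each wet-surface
# technique yields a real-time detection rate; a speed-dependent
# threshold turns the rates into a trigger-condition pattern, which is
# looked up in a table of predetermined patterns. Values are assumed.

def trigger_conditions(rates, speed_kph):
    """One boolean per technique; the required rate is assumed to fall
    with speed as spray features become more pronounced."""
    threshold = 0.6 if speed_kph < 50 else 0.4
    return tuple(r >= threshold for r in rates)

PATTERNS = {
    # trigger pattern -> water film thickness in mm (assumed)
    (True, True, True): 2.0,
    (True, True, False): 1.0,
    (True, False, False): 0.3,
}

def water_film_thickness_mm(rates, speed_kph):
    return PATTERNS.get(trigger_conditions(rates, speed_kph), 0.0)
```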
Abstract:
A method for determining a wet surface condition of a road. An image of a road surface is captured by an image capture device of the host vehicle. The image capture device is mounted on a side of the host vehicle and captures an image in a downward direction. A processor identifies a region of interest in the captured image. The region of interest is in a region sideways to a face of the wheel and is representative of where sideways splash generated by the wheel occurs. A determination is made whether water is present in the region of interest. A wet road surface signal is generated in response to the identification of water in the region of interest.
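The splash check within the sideways ROI might be sketched as below; using the fraction of bright pixels as the water cue, along with the thresholds, is an illustrative assumption rather than the abstract's stated feature.

```python
# Sketch of the sideways-splash determination: within the ROI beside
# the wheel, a sufficiently large fraction of bright pixels is taken as
# water splash. The brightness cue and thresholds are assumed.

def splash_detected(roi, bright=200, min_fraction=0.2):
    vals = [p for row in roi for p in row]
    frac_bright = sum(v >= bright for v in vals) / len(vals)
    return frac_bright >= min_fraction

def wet_road_signal(roi):
    """Generate the wet road surface signal from the ROI contents."""
    return "WET" if splash_detected(roi) else "DRY"
```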