Abstract:
A method of determining a wet surface condition of a road. An image of a road surface is captured by an image capture device of the host vehicle. The image capture device is mounted on a side of the host vehicle and captures the image in a downward direction. A region of interest is identified in the captured image by a processor. The region of interest lies rearward of a tire of the host vehicle and is representative of where a tire track is generated by the tire rotating on the road when the road surface is wet. A determination is made whether water is present in the region of interest as a function of identifying the tire track. A wet road surface signal is generated in response to the identification of water in the region of interest.
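A minimal sketch of how the track-identification step could be realised. The abstract does not specify how the tire track is detected, so the variance test, the threshold value, and all function names below are illustrative assumptions, not the patented method:

```python
import numpy as np

def detect_tire_track(frame, roi, variance_threshold=400.0):
    """Return True if the region of interest rearward of the tire shows the
    high-contrast streak pattern typical of a tire track on a wet road.

    frame: 2-D grayscale image array; roi: (row0, row1, col0, col1).
    The intensity-variance test is a stand-in for the patent's
    (unspecified) track-identification step.
    """
    r0, r1, c0, c1 = roi
    patch = frame[r0:r1, c0:c1].astype(float)
    # A wet tire track produces bright reflective streaks against darker
    # pavement, raising local intensity variance.
    return float(patch.var()) > variance_threshold

def wet_road_signal(frame, roi):
    # Generate the wet-road-surface signal when water is identified.
    return "WET_ROAD" if detect_tire_track(frame, roi) else None
```

On a uniform (dry) patch the variance is near zero and no signal is generated; a striped, reflective patch trips the threshold.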
Abstract:
A system and method for creating an enhanced perspective view of an area in front of a vehicle, using images from left-front and right-front cameras. The enhanced perspective view removes the distortion and exaggerated perspective effects which are inherent in wide-angle lens images. The enhanced perspective view uses a camera model including a virtual image surface and other processing techniques which provide corrections for two types of problems typically present in de-warped perspective images: a stretching effect at the peripheral area of a wide-angle image de-warped by rectilinear projection, and a double image of objects in the area where the left-front and right-front camera images overlap.
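The peripheral stretching effect mentioned above can be made concrete with a small geometric sketch. Assuming an equidistant fisheye lens model (r = f·θ), which the abstract does not state, the following shows that a plain rectilinear de-warp pulls source pixels apart faster and faster toward the image edge:

```python
import math

def rectilinear_to_fisheye(u, v, f):
    """Map a pixel (u, v) of an ideal rectilinear (pinhole) output image,
    centred on the optical axis, back to the radius it came from in an
    equidistant fisheye input (r = f * theta).

    Illustrates why naive rectilinear de-warping stretches the periphery:
    the output radius grows as tan(theta) while the source radius grows
    only linearly in theta, so the stretch ratio rises off-axis.
    """
    r_out = math.hypot(u, v)          # radius in the de-warped image
    theta = math.atan2(r_out, f)      # incidence angle for a pinhole model
    r_in = f * theta                  # equidistant fisheye radius
    stretch = r_out / r_in if r_in else 1.0
    return r_in, stretch
```

Evaluating the stretch ratio at increasing radii shows it is 1.0 on-axis and grows monotonically, which is the effect the virtual-image-surface camera model is designed to correct.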
Abstract:
A system and method are provided for detecting remote vehicles relative to a host vehicle using wheel detection. The system and method include tracking wheel candidates based on wheel detection data received from a plurality of object detection devices, comparing select parameters relating to the wheel detection data for each of the tracked wheel candidates, and identifying a remote vehicle by determining if a threshold correlation exists between any of the tracked wheel candidates based on the comparison of select parameters.
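A sketch of the compare-and-correlate step, assuming hypothetical candidate parameters (range, azimuth, speed), normalisation scales, and a threshold; the abstract names none of these, so everything below is illustrative:

```python
from dataclasses import dataclass

@dataclass
class WheelCandidate:
    # Parameters a detector might report for a tracked wheel candidate;
    # the specific fields are assumptions, not the patent's list.
    range_m: float      # distance from the host vehicle
    azimuth_deg: float  # bearing relative to the host vehicle
    speed_mps: float    # estimated ground speed

def parameter_correlation(a, b):
    """Score how well two tracked wheel candidates agree (1.0 = identical).
    Each difference is normalised by an illustrative plausibility scale."""
    dr = abs(a.range_m - b.range_m) / 5.0        # wheels of one car sit a few metres apart
    da = abs(a.azimuth_deg - b.azimuth_deg) / 30.0
    dv = abs(a.speed_mps - b.speed_mps) / 3.0    # same vehicle -> near-equal speed
    return max(0.0, 1.0 - (dr + da + dv) / 3.0)

def identify_remote_vehicle(candidates, threshold=0.8):
    """Pair up tracked wheel candidates whose parameter correlation exceeds
    the threshold; a qualifying pair is reported as one remote vehicle."""
    vehicles = []
    for i in range(len(candidates)):
        for j in range(i + 1, len(candidates)):
            if parameter_correlation(candidates[i], candidates[j]) >= threshold:
                vehicles.append((i, j))
    return vehicles
```

Two wheel candidates at similar range, bearing, and speed correlate above the threshold and are grouped into one remote vehicle, while an unrelated distant candidate is not paired with either.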
Abstract:
An embodiment contemplates a method of calibrating multiple image capture devices of a vehicle. A plurality of image capture devices having different poses are provided. At least one of the plurality of image capture devices is identified as a reference device. An image of a patterned display exterior of the vehicle is captured by the plurality of image capture devices. The vehicle traverses across the patterned display to capture images at various instances of time. A processor identifies common landmarks of the patterned display between each of the images captured by the plurality of image capture devices and the reference device. Images of the patterned display captured by each of the plurality of image capture devices are stitched using the identified common landmarks. Extrinsic parameters of each image capture device are adjusted relative to the reference device based on the stitched images for calibrating the plurality of image capture devices.
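One standard way to realise the "adjust extrinsics relative to the reference device" step, once common landmarks are matched, is a least-squares rigid alignment (Kabsch/Procrustes) of the landmark positions on the ground plane. This is a sketch under that assumption, not necessarily the patented method:

```python
import numpy as np

def relative_pose_2d(landmarks_cam, landmarks_ref):
    """Estimate the planar rotation R and translation t mapping landmark
    positions seen by one camera onto the same landmarks as seen by the
    reference camera, i.e. ref ~= R @ cam + t.

    Kabsch algorithm on N x 2 ground-plane point arrays.
    """
    a = np.asarray(landmarks_cam, float)
    b = np.asarray(landmarks_ref, float)
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    # Cross-covariance of the centred point sets.
    h = (a - ca).T @ (b - cb)
    u, _, vt = np.linalg.svd(h)
    # Guard against a reflection solution.
    d = np.sign(np.linalg.det(vt.T @ u.T))
    r = vt.T @ np.diag([1.0, d]) @ u.T
    t = cb - r @ ca
    return r, t
```

For noise-free correspondences the transform is recovered exactly; with real landmark detections it is the least-squares best fit, which is what makes the common-landmark stitching usable for extrinsic adjustment.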
Abstract:
A method for providing a high resolution display image that includes providing a camera image that can be processed into at least two different camera views. The method also includes identifying a warped grid in each of the at least two different camera views and identifying a minimum field of view for displaying each of the at least two different camera views. The method further includes cropping the camera image based on the identified minimum field of view and de-warping the at least two different camera views to provide the high resolution display image.
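The cropping step can be sketched directly: given the warped-grid sample points a camera view actually needs, the minimum field of view is simply their bounding rectangle in the source image. The function name and tuple layout below are assumptions for illustration:

```python
def min_fov_crop(grid_points, image_shape):
    """Given the warped-grid sample points (row, col) that a camera view
    needs, return the smallest crop rectangle of the source camera image
    covering all of them -- the 'minimum field of view'.

    Returns (row0, row1, col0, col1), clamped to the image bounds.
    """
    rows = [p[0] for p in grid_points]
    cols = [p[1] for p in grid_points]
    h, w = image_shape
    return (max(0, min(rows)), min(h, max(rows) + 1),
            max(0, min(cols)), min(w, max(cols) + 1))
```

Cropping to this rectangle before de-warping keeps the full available pixel density inside the displayed views, which is what preserves resolution in the final display image.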
Abstract:
An apparatus for capturing an image includes a plurality of lens elements coaxially encompassed within a lens housing. A split-sub-pixel imaging chip includes an IR-pass filter coating applied on selected sub-pixels. The sub-pixels include a long-exposure sub-pixel and a short-exposure sub-pixel for each of a plurality of green, blue, and red pixels.
Abstract:
A method for applying super-resolution to images captured by a camera device of a vehicle includes receiving a plurality of image frames captured by the camera device. For each image frame, a region of interest requiring increased per-pixel detail resolution is identified within the image frame. Spatially-implemented super-resolution is applied to the region of interest within each image to enhance image sharpness within the region of interest.
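As a minimal stand-in for the spatially-implemented super-resolution step, the sketch below upsamples only the identified region of interest with separable linear interpolation; a real super-resolution stage would combine several registered frames, and all names and the interpolation choice here are assumptions:

```python
import numpy as np

def upsample_roi(frame, roi, factor=2):
    """Upsample a region of interest by `factor` using separable linear
    interpolation -- an illustrative placeholder for the abstract's
    spatially-implemented super-resolution.

    frame: 2-D grayscale array; roi: (row0, row1, col0, col1).
    """
    r0, r1, c0, c1 = roi
    patch = frame[r0:r1, c0:c1].astype(float)
    rows = np.linspace(0, patch.shape[0] - 1, factor * patch.shape[0])
    cols = np.linspace(0, patch.shape[1] - 1, factor * patch.shape[1])
    # Interpolate along rows (one column at a time), then along columns.
    interp_rows = np.array([np.interp(rows, np.arange(patch.shape[0]), patch[:, j])
                            for j in range(patch.shape[1])]).T
    out = np.array([np.interp(cols, np.arange(patch.shape[1]), interp_rows[i])
                    for i in range(interp_rows.shape[0])])
    return out
```

Restricting the operation to the region of interest is the point of the abstract: only the patch that needs more detail per pixel pays the processing cost.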
Abstract:
A method of determining a road surface condition for a vehicle driving on a road. Probabilities associated with a plurality of road surface conditions based on an image of a captured scene are determined by a first probability module. Probabilities associated with the plurality of road surface conditions based on vehicle operating data are determined by a second probability module. The probabilities determined by the first and second probability modules are input to a data fusion unit for fusing the probabilities and determining a road surface condition. A refined probability is output from the data fusion unit that is a function of the fused first and second probabilities. The refined probability from the data fusion unit is provided to an adaptive learning unit. The adaptive learning unit generates output commands that refine tunable parameters of at least the first and second probability modules for determining the respective probabilities.
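A sketch of one plausible realisation of the data fusion unit: product-rule fusion of the two modules' per-condition distributions with renormalisation. The abstract does not specify the fusion function, so this choice and the condition labels are assumptions:

```python
def fuse(p_vision, p_vehicle_data):
    """Fuse two probability dicts over the same road surface conditions
    (e.g. from the image-based module and the vehicle-operating-data
    module) into one refined, renormalised distribution.
    """
    # Independent-evidence product rule: conditions both modules rate
    # highly dominate the fused result.
    fused = {c: p_vision[c] * p_vehicle_data[c] for c in p_vision}
    total = sum(fused.values())
    return {c: p / total for c, p in fused.items()}
```

For example, a vision module leaning "dry" and a vehicle-data module leaning "wet" fuse into a distribution whose mode reflects the stronger combined evidence, and which still sums to one.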
Abstract:
A method of displaying a captured image on a display device of a driven vehicle. A scene exterior of the driven vehicle is captured by at least one vision-based imaging device and at least one sensing device. A time-to-collision is determined for each object detected. A comprehensive time-to-collision is determined for each object as a function of each of the determined time-to-collisions for that object. An image of the captured scene is generated by a processor. The image is dynamically expanded to include sensed objects in the image. Sensed objects are highlighted in the dynamically expanded image. The highlighting identifies objects proximate to the driven vehicle that are potential collision threats to the driven vehicle. The dynamically expanded image is displayed on the display device with the highlighted objects and, for each highlighted object determined to be a potential collision, its associated comprehensive time-to-collision.
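The per-object and comprehensive time-to-collision computations can be sketched as follows. The abstract does not state how the individual estimates are combined, so taking the minimum (most conservative) value is an assumption, as are the function names:

```python
def time_to_collision(range_m, closing_speed_mps):
    """Classic range / range-rate TTC for one sensor's view of an object;
    returns None when the object is not closing on the host vehicle."""
    if closing_speed_mps <= 0:
        return None
    return range_m / closing_speed_mps

def comprehensive_ttc(ttc_estimates):
    """Combine the per-sensor TTC estimates for one object into a single
    comprehensive value.  Taking the minimum is a conservative, assumed
    combining function -- the abstract leaves it unspecified."""
    valid = [t for t in ttc_estimates if t is not None]
    return min(valid) if valid else None
```

An object 30 m ahead closing at 10 m/s yields a 3 s TTC from one sensor; fusing that with a second sensor's 2.5 s estimate gives a comprehensive value of 2.5 s, which would accompany the highlighted object on the display.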