Abstract:
A method for determining a wet road surface condition for a vehicle driving on a road. A first image exterior of the vehicle is captured by an image capture device. A second image exterior of the vehicle is captured by the image capture device. A section of the road is identified in the first and second captured images. A texture of the road in the first and second captured images is compared by a processor. A determination is made whether the texture of the road in the first image is different from the texture of the road in the second image. A wet driving surface indicating signal is generated in response to the determination that the texture of the road in the first image is different from the texture of the road in the second image.
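The texture comparison described above can be sketched with toy pixel grids. The variance-based texture metric and the threshold are assumptions for illustration; the abstract does not specify how texture is measured.

```python
# Minimal sketch of the texture-comparison idea (hypothetical metric and
# threshold; a real system would operate on camera frames, not toy grids).
from statistics import pvariance

def road_texture(patch):
    """Texture metric for a road patch: variance of pixel intensities."""
    pixels = [p for row in patch for p in row]
    return pvariance(pixels)

def wet_surface_signal(patch1, patch2, threshold=50.0):
    """Return True (wet driving surface indicating signal) when the road
    texture differs between two captured images by more than `threshold`."""
    return abs(road_texture(patch1) - road_texture(patch2)) > threshold

# Dry asphalt shows high-frequency texture; a mirror-like water film smooths it.
dry = [[10, 200, 30], [220, 15, 180], [40, 210, 25]]
wet = [[120, 125, 118], [122, 119, 124], [121, 123, 120]]
print(wet_surface_signal(dry, wet))  # texture drops sharply on the wet patch
```

Comparing texture across two frames, rather than thresholding a single frame, makes the signal robust to absolute lighting changes.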
Abstract:
A method of calibrating multiple vehicle-based image capture devices of a vehicle. An image is captured by at least one image capture device. A reference object is identified in the captured image. The reference object has known world coordinates. Known features of the vehicle are extracted in the captured image. A relative location and orientation of the vehicle in world coordinates is determined relative to the reference object. Each of the multiple image capture devices is calibrated utilizing intrinsic and extrinsic parameters of the at least one image capture device as a function of the relative location and orientation of the vehicle in world coordinates.
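The pose-from-reference step can be illustrated in 2-D. The planar geometry, the measured relative yaw of the reference object, and the fixed mounting offsets are simplifying assumptions; a production calibration solves a full 6-DOF pose from image correspondences.

```python
# Hedged 2-D sketch: recover the vehicle's world pose from a reference object
# with known world coordinates, then place each camera via its mounting offset.
import math

def vehicle_pose(ref_world_xy, ref_world_yaw, ref_rel_xy, ref_rel_yaw):
    """World position and heading of the vehicle, given the reference object's
    known world pose and its pose as observed in the vehicle frame."""
    yaw = ref_world_yaw - ref_rel_yaw            # vehicle heading in world
    c, s = math.cos(yaw), math.sin(yaw)
    # world = vehicle + R(yaw) @ rel  =>  vehicle = world - R(yaw) @ rel
    x = ref_world_xy[0] - (c * ref_rel_xy[0] - s * ref_rel_xy[1])
    y = ref_world_xy[1] - (s * ref_rel_xy[0] + c * ref_rel_xy[1])
    return (x, y, yaw)

def camera_extrinsics(vehicle, mount_xy, mount_yaw):
    """Compose a camera's known mounting offset with the vehicle pose to get
    that camera's extrinsic pose in world coordinates."""
    x, y, yaw = vehicle
    c, s = math.cos(yaw), math.sin(yaw)
    return (x + c * mount_xy[0] - s * mount_xy[1],
            y + s * mount_xy[0] + c * mount_xy[1],
            yaw + mount_yaw)

# A landmark at world (10, 5) facing 0 rad, seen 4 m dead ahead of the vehicle:
pose = vehicle_pose((10.0, 5.0), 0.0, (4.0, 0.0), 0.0)
print(pose)  # vehicle sits at (6, 5), heading 0
```

Once the vehicle pose is known, every camera on the vehicle can be localized from its rigid mounting offset, which is what lets one reference object calibrate multiple image capture devices.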
Abstract:
A method for displaying a captured image on a display device. A real image is captured by a vision-based imaging device. A virtual image is generated from the captured real image based on a mapping by a processor. The mapping utilizes a virtual camera model with a non-planar imaging surface. The virtual image formed on the non-planar imaging surface of the virtual camera model is projected to the display device.
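A cylindrical surface is one common non-planar imaging surface; the projection below is a sketch under that assumption, not the specific model of the abstract.

```python
# Toy projection onto a cylindrical imaging surface: the horizontal image
# coordinate maps to an angle rather than a planar perspective coordinate.
import math

def project_to_cylinder(point, focal=1.0):
    """Project a 3-D point (x, y, z), z forward, onto a cylindrical surface:
    the column is the horizontal angle, the row is height over radius."""
    x, y, z = point
    u = focal * math.atan2(x, z)          # column: angle around the cylinder
    v = focal * y / math.hypot(x, z)      # row: perspective in the vertical
    return (u, v)

# A ray 45 degrees off-axis lands at a finite column (pi/4) on the cylinder,
# whereas a planar surface stretches wide-angle content toward infinity at 90.
print(project_to_cylinder((1.0, 0.0, 1.0)))
```

This bounded horizontal mapping is why non-planar virtual surfaces suit wide field-of-view captures.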
Abstract:
A method for displaying a captured image on a display device. A scene is captured by at least one vision-based imaging device. A virtual image of the captured scene is generated by a processor using a camera model. A view synthesis technique is applied to the captured image by the processor for generating a de-warped virtual image. A dynamic rearview mirror display mode is actuated for enabling a viewing mode of the de-warped image on the rearview mirror display device. The de-warped image is displayed in the enabled viewing mode on the rearview mirror display device.
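View synthesis typically fills each virtual pixel by looking up where it came from in the warped capture. The one-term radial model and its coefficient below are assumptions for illustration; a real system uses the camera's calibrated distortion model.

```python
# Sketch of an inverse (de-warp) lookup for view synthesis: for each virtual
# pixel, find its source coordinates under a hypothetical radial model
# r_d = r * (1 + k1 * r^2).
import math

def dewarp_lookup(u, v, k1=-0.3):
    """Source coordinates in the warped capture for a virtual pixel at
    normalized coordinates (u, v). `k1` is a hypothetical coefficient."""
    r = math.hypot(u, v)
    scale = 1.0 + k1 * r * r
    return (u * scale, v * scale)

# Edge pixels are pulled in more than center pixels, which is what flattens
# the fisheye bow in the de-warped rearview display.
print(dewarp_lookup(0.0, 0.0))   # center is unchanged
print(dewarp_lookup(1.0, 0.0))   # edge sample is pulled toward the center
```

Iterating this lookup over every output pixel, with interpolation at the sampled source coordinates, produces the de-warped virtual image shown in the enabled viewing mode.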
Abstract:
A vehicle imaging system includes an image capture device capturing an image exterior of a vehicle. The captured image includes at least a portion of a sky scene. A processor generates a virtual image of a virtual sky scene from the portion of the sky scene captured by the image capture device. The processor determines a brightness of the virtual sky scene from the virtual image. The processor dynamically adjusts a brightness of the captured image based on the determined brightness of the virtual image. A rearview mirror display device displays the adjusted captured image.
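The adjustment step can be sketched as gaining the captured image toward a display target derived from the sky brightness. The mean-intensity estimate and the target level are assumptions; the abstract does not specify the adjustment rule.

```python
# Toy adaptive-brightness sketch: estimate scene brightness from the (virtual)
# sky view, then gain the full captured image toward a hypothetical target.
def adjust_brightness(image, sky_region, target=128.0):
    """Scale every pixel of `image` by target / mean(sky), clipped to 8-bit."""
    sky = [p for row in sky_region for p in row]
    gain = target / (sum(sky) / len(sky))
    return [[min(255, int(p * gain)) for p in row] for row in image]

# A dim dusk frame whose sky measures a mean of 64 gets doubled in brightness.
frame = [[40, 60], [80, 100]]
sky = [[60, 68], [62, 66]]
print(adjust_brightness(frame, sky))  # [[80, 120], [160, 200]]
```

Using the sky as the reference keeps the gain stable even when dark foreground objects (vehicles, road) dominate the frame.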
Abstract:
A system for analyzing images includes a processing device including a receiving module configured to receive an image associated with a target domain, and a domain adaptation module configured to characterize one or more features represented in the received image based on a domain adaptation model. The domain adaptation model is generated using a machine learning algorithm to train the domain adaptation model, and the machine learning algorithm is configured to train the domain adaptation model based on one or more source domain images associated with a source domain, one or more previously acquired images associated with the target domain, and acquired characterization data associated with the target domain. The system also includes an output module configured to output the received image with characterization data identifying one or more features characterized by the domain adaptation module.
Abstract:
Examples of techniques for controlling a vehicle based on trailer position are disclosed. In one example implementation according to aspects of the present disclosure, a computer-implemented method includes extracting, by a processing device, a feature point on a trailer from an image captured by a camera associated with a vehicle, the trailer being coupled to the vehicle. The method further includes determining, by the processing device, a distance between the feature point on the trailer and a virtual boundary. The method further includes, responsive to determining that the distance between the feature point on the trailer and the virtual boundary is less than a threshold, controlling, by the processing device, the vehicle to cause the distance between the feature point on the trailer and the virtual boundary to increase.
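The distance test and corrective response can be sketched in 2-D. The line-form boundary, the threshold, and the proportional command convention are assumptions standing in for the production control law.

```python
# Sketch: distance from a trailer feature point to a virtual boundary line
# a*x + b*y + c = 0, and a corrective command when it falls under a threshold.
import math

def point_to_boundary(pt, a, b, c):
    """Signed distance from feature point `pt` to the boundary line."""
    return (a * pt[0] + b * pt[1] + c) / math.hypot(a, b)

def control_action(pt, boundary, threshold=0.5):
    """Return 0 when the feature point is clear of the boundary; otherwise a
    corrective magnitude that grows as the point closes on the boundary
    (hypothetical convention: the command increases the distance)."""
    d = abs(point_to_boundary(pt, *boundary))
    return 0.0 if d >= threshold else (threshold - d)

# Boundary x = 2 (i.e. 1*x + 0*y - 2 = 0); the feature point sits 0.2 m away,
# inside the 0.5 m threshold, so a corrective command is issued.
print(control_action((1.8, 0.0), (1.0, 0.0, -2.0)))  # ~0.3
```

A proportional response like this naturally fades to zero once the feature point has been pushed back past the threshold.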
Abstract:
Examples of techniques for controlling a vehicle based on trailer sway are disclosed. In one example implementation according to aspects of the present disclosure, a computer-implemented method includes estimating, by a processing device, an estimated articulation angle between a vehicle and a trailer coupled to the vehicle. The method further includes calculating, by the processing device, an expected articulation angle between the vehicle and the trailer. The method further includes comparing, by the processing device, the estimated articulation angle and the expected articulation angle to determine whether the trailer is experiencing trailer sway. The method further includes, responsive to determining that the trailer is experiencing sway, controlling, by the processing device, the vehicle to reduce the trailer sway.
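The comparison step can be sketched with a toy kinematic expectation. The steady-state bicycle-model formula and the tolerance are assumptions; the abstract does not specify how the expected articulation angle is calculated.

```python
# Sketch: declare trailer sway when the estimated articulation angle departs
# from a kinematic expectation by more than a tolerance.
import math

def expected_articulation(steer_angle, wheelbase=3.0, hitch_len=5.0):
    """Hypothetical steady-state expectation: on a constant turn, the trailer
    tracks the vehicle's turn radius (toy bicycle-model geometry)."""
    if steer_angle == 0.0:
        return 0.0
    radius = wheelbase / math.tan(steer_angle)
    return math.atan2(hitch_len, radius)

def sway_detected(estimated, expected, tolerance=math.radians(5)):
    """Trailer sway: camera-estimated angle disagrees with the expectation."""
    return abs(estimated - expected) > tolerance

# Driving straight (expected 0 rad) while the camera estimates 10 degrees of
# articulation: the mismatch flags trailer sway.
print(sway_detected(math.radians(10), expected_articulation(0.0)))
```

Comparing against an expectation, rather than against zero, avoids false positives during ordinary cornering, when a large articulation angle is legitimate.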
Abstract:
An autonomous vehicle control system is described, and includes a vehicle spatial monitoring system including a subject spatial sensor that is disposed to monitor a spatial environment proximal to the autonomous vehicle. A controller is in communication with the subject spatial sensor, and the controller includes a processor and a memory device including an instruction set. The instruction set is executable to evaluate the subject spatial sensor, which includes determining first, second, third, fourth, and fifth state of health (SOH) parameters associated with the subject spatial sensor, and determining an integrated SOH parameter for the subject spatial sensor based thereupon.
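One way the five SOH parameters could be integrated is a weighted mean; the weights and the fusion rule below are assumptions, since the abstract leaves the integration unspecified.

```python
# Sketch: fuse five per-aspect SOH parameters (each in [0, 1]) into one
# integrated SOH figure via a hypothetical weighted mean.
def integrated_soh(soh, weights=(0.3, 0.25, 0.2, 0.15, 0.1)):
    """Weighted mean of the five SOH parameters; weights sum to 1."""
    assert len(soh) == 5
    return sum(s * w for s, w in zip(soh, weights))

# Five hypothetical per-aspect checks (e.g. noise, bias, dropout rate,
# detection range, latency), mostly healthy:
print(integrated_soh((1.0, 0.9, 1.0, 0.8, 1.0)))
```

A weighted combination lets critical failure modes dominate the integrated figure without any single check vetoing the sensor outright.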
Abstract:
A vehicle subsystem includes an on-vehicle camera that is disposed to monitor a field of view (FOV) that includes a travel surface for the vehicle. A controller captures, via the on-vehicle camera, an image file associated with the FOV and segments the image file into a first set of regions associated with the travel surface and a second set of regions associated with an above-horizon portion. Image features on each of the first set of regions and the second set of regions are extracted and classified. A surface condition for the travel surface for the vehicle is identified based upon the classified extracted image features from each of the first set of regions and the second set of regions. Operation of the vehicle is controlled based upon the identified surface condition.
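The segment-extract-classify pipeline can be sketched end to end. The fixed horizon row, the mean-intensity feature, and the brightness rule are toy stand-ins for the trained segmentation and classification stages.

```python
# Sketch: split the image at the horizon, extract one feature per region set,
# and classify the travel-surface condition from the combined features.
def segment_at_horizon(image, horizon_row):
    """Split an image (list of pixel rows) into the above-horizon region set
    and the travel-surface region set at an assumed fixed horizon row."""
    return image[:horizon_row], image[horizon_row:]

def region_feature(region):
    """Toy image feature: mean intensity of the region."""
    pixels = [p for row in region for p in row]
    return sum(pixels) / len(pixels)

def classify_surface(sky_feat, road_feat):
    """Hypothetical rule standing in for the trained classifier: a bright sky
    over a similarly bright, low-contrast road suggests snow cover."""
    if sky_feat > 180 and road_feat > 180:
        return "snow"
    return "dry"

# Bright sky rows over dark asphalt rows classify as a dry travel surface.
sky, road = segment_at_horizon([[200, 210], [195, 205], [60, 70], [65, 75]], 2)
print(classify_surface(region_feature(sky), region_feature(road)))  # dry
```

Classifying the two region sets jointly is what lets the above-horizon context (overcast sky, glare) disambiguate surface conditions that look alike from the road pixels alone.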