-
Publication Number: US20180211121A1
Publication Date: 2018-07-26
Application Number: US15415733
Filing Date: 2017-01-25
Applicant: Ford Global Technologies, LLC
Inventor: Maryam Moosaei , Guy Hotson , Vidya Nariyambut Murali , Madeline J. Goh
IPC: G06K9/00 , G06T5/20 , G06K9/46 , H04N9/77 , H04N7/18 , G01S17/93 , B60R1/00 , G05D1/00 , G05D1/02
CPC classification number: G06K9/00825 , G01S17/023 , G01S17/936 , G06K9/6273 , H04N7/183
Abstract: The present invention extends to methods, systems, and computer program products for detecting vehicles in low light conditions. Cameras are used to obtain RGB images of the environment around a vehicle. RGB images are converted to LAB images. The "A" channel is filtered to extract contours from LAB images. The contours are filtered based on their shapes and sizes to reduce false positives from contours unlikely to correspond to vehicles. A neural network classifies an object as a vehicle or non-vehicle based on the contours. Accordingly, aspects provide reliable autonomous driving with lower cost sensors and improved aesthetics. Vehicles can be detected at night as well as in other low light conditions using their headlights and taillights, enabling autonomous vehicles to better detect other vehicles in their environment. Vehicle detections can be facilitated using a combination of virtual data, deep learning, and computer vision.
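A minimal OpenCV sketch of the kind of pipeline this abstract describes. The threshold value, size limits, and aspect-ratio filter below are illustrative assumptions, not values from the patent, and the downstream CNN classifier is omitted:

```python
import cv2

def candidate_light_contours(bgr_image, min_area=20, max_area=5000, max_aspect=4.0):
    """Extract contours from the LAB 'A' channel that could correspond to vehicle
    lights, then filter them by size and shape to reduce false positives."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    a_channel = lab[:, :, 1]                       # 'A' channel (green-red axis)
    _, mask = cv2.threshold(a_channel, 150, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)
        area = cv2.contourArea(contour)
        aspect = max(w, h) / max(min(w, h), 1)
        if min_area <= area <= max_area and aspect <= max_aspect:
            candidates.append((x, y, w, h))        # each crop would go to the CNN classifier
    return candidates
```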
-
Publication Number: US20180018527A1
Publication Date: 2018-01-18
Application Number: US15210670
Filing Date: 2016-07-14
Applicant: Ford Global Technologies, LLC
Abstract: A method for generating training data is disclosed. The method may include executing a simulation process. The simulation process may include traversing a virtual camera through a virtual driving environment comprising at least one virtual precipitation condition and at least one virtual no-precipitation condition. During the traversing, the virtual camera may be moved with respect to the virtual driving environment as dictated by a vehicle-motion model modeling motion of a vehicle driving through the virtual driving environment while carrying the virtual camera. Virtual sensor data characterizing the virtual driving environment in both virtual precipitation and virtual no-precipitation conditions may be recorded. The virtual sensor data may correspond to what a real sensor would have output had it sensed the virtual driving environment in the real world.
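A rough sketch of the recording loop this abstract describes. The `simulator` and `vehicle_motion_model` objects are hypothetical interfaces standing in for the virtual driving environment and vehicle-motion model, which the abstract does not specify:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    image: object        # whatever the virtual camera renders at this step
    precipitation: bool  # ground-truth label for the current weather condition

def record_training_data(simulator, vehicle_motion_model, n_steps=1000):
    """Traverse a virtual camera through a simulated driving environment and
    record frames labeled with the active precipitation condition."""
    frames = []
    for _ in range(n_steps):
        pose = vehicle_motion_model.step()            # camera follows the vehicle model
        image = simulator.render(camera_pose=pose)    # virtual sensor output
        frames.append(Frame(image=image, precipitation=simulator.precipitation_active()))
    return frames
```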
-
Publication Number: US20170206434A1
Publication Date: 2017-07-20
Application Number: US14995482
Filing Date: 2016-01-14
Applicant: Ford Global Technologies, LLC
Inventor: Vidya Nariyambut Murali , Madeline Jane Schrier
CPC classification number: G06K9/628 , G06K9/00818 , G06K9/00993 , G06K9/4628 , G06K9/4642 , G06K9/6232 , G06K9/6256 , G06K9/627 , G06K9/6273 , G06T1/20 , G06T7/70 , G06T2207/20084 , G06T2207/30252
Abstract: Disclosures herein teach applying a set of sections spanning a down-sampled version of an image of a road-scene to a low-fidelity classifier to determine a set of candidate sections for depicting one or more objects in a set of classes. The set of candidate sections of the down-sampled version may be mapped to a set of potential sectors in a high-fidelity version of the image. A high-fidelity classifier may be used to vet the set of potential sectors, determining the presence of one or more objects from the set of classes. The low-fidelity classifier may include a first Convolutional Neural Network (CNN) trained on a first training set of down-sampled versions of cropped images of objects in the set of classes. Similarly, the high-fidelity classifier may include a second CNN trained on a second training set of high-fidelity versions of cropped images of objects in the set of classes.
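A simplified sketch of the two-stage cascade this abstract describes. The grid scan, section size, down-sampling factor, and score threshold are assumptions; `low_fi_cnn` and `high_fi_cnn` are hypothetical callables that return a confidence score for an image patch:

```python
def cascade_detect(image, low_fi_cnn, high_fi_cnn, section=64, scale=4, threshold=0.5):
    """Scan a down-sampled copy of the image with a cheap classifier, then vet each
    candidate section at full resolution (image is a NumPy array)."""
    small = image[::scale, ::scale]
    detections = []
    for y in range(0, small.shape[0] - section + 1, section):
        for x in range(0, small.shape[1] - section + 1, section):
            patch = small[y:y + section, x:x + section]
            if low_fi_cnn(patch) >= threshold:               # candidate section
                Y, X, S = y * scale, x * scale, section * scale
                sector = image[Y:Y + S, X:X + S]             # corresponding high-fidelity sector
                if high_fi_cnn(sector) >= threshold:         # vet with the high-fidelity classifier
                    detections.append((X, Y, S, S))
    return detections
```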
-
Publication Number: US20170109644A1
Publication Date: 2017-04-20
Application Number: US14887031
Filing Date: 2015-10-19
Applicant: Ford Global Technologies, LLC
Inventor: Vidya Nariyambut Murali , Sneha Kadetotad , Daniel Levine
CPC classification number: G06N7/005 , G06N99/005
Abstract: Systems, methods, and devices for sensor fusion are disclosed herein. A system for sensor fusion includes one or more sensors, a model component, and an inference component. The model component is configured to calculate values in a joint-probabilistic graphical model based on the sensor data. The graphical model includes nodes corresponding to random variables and edges indicating correlations between the nodes. The inference component is configured to detect and track obstacles near a vehicle based on the sensor data and the model using a weighted-integrals-and-sums-by-hashing (WISH) algorithm.
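A toy illustration of the joint-probabilistic graphical model structure described here: binary random variables as nodes and weighted edges encoding their correlations. The brute-force marginal below is only for clarity; the patented approach replaces this exponential enumeration with the WISH estimator, which is not implemented here:

```python
import itertools
import math

# Toy pairwise model; node names and edge weights are illustrative assumptions.
nodes = ["radar_detection", "camera_detection", "obstacle_present"]
edges = {("radar_detection", "obstacle_present"): 2.0,
         ("camera_detection", "obstacle_present"): 1.5}

def factor_product(assignment):
    """Unnormalized probability: exp(weight) for every edge whose endpoints agree."""
    score = sum(w for (a, b), w in edges.items() if assignment[a] == assignment[b])
    return math.exp(score)

# Brute-force marginal of 'obstacle_present' over all 2^3 assignments.
total = present = 0.0
for values in itertools.product([0, 1], repeat=len(nodes)):
    assignment = dict(zip(nodes, values))
    p = factor_product(assignment)
    total += p
    present += p * assignment["obstacle_present"]
print("P(obstacle present) =", present / total)
```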
-
Publication Number: US12233912B2
Publication Date: 2025-02-25
Application Number: US17571944
Filing Date: 2022-01-10
Applicant: Ford Global Technologies, LLC
Inventor: Nikhil Nagraj Rao , Francois Charette , Shruthi Venkat , Sandhya Sridhar , Vidya Nariyambut Murali
Abstract: A location of a first object can be determined in an image. A line can be drawn on the image based on the location of the first object. A deep neural network can be trained to determine a relative location between the first object in the image and a second object in the image based on the line. The deep neural network can be optimized by determining a fitness score that divides a number of deep neural network parameters by a performance score. The deep neural network can be output.
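A small sketch of the two ideas in this abstract. Drawing a vertical line through the object's center is an assumption (the abstract does not say how the line is placed); the fitness score follows the stated ratio of parameter count to performance:

```python
import cv2

def annotate_with_line(image, object_box):
    """Draw a vertical reference line through the detected object's center; a DNN
    can then learn whether a second object lies to one side of that line."""
    x, y, w, h = object_box
    cx = x + w // 2
    annotated = image.copy()
    cv2.line(annotated, (cx, 0), (cx, annotated.shape[0] - 1), (0, 255, 0), 2)
    return annotated

def fitness_score(num_parameters, performance_score):
    """Fitness used to compare candidate networks during optimization: parameter
    count divided by performance, so smaller and better-performing models score lower."""
    return num_parameters / performance_score
```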
-
Publication Number: US20240046625A1
Publication Date: 2024-02-08
Application Number: US17817235
Filing Date: 2022-08-03
Applicant: Ford Global Technologies, LLC
Inventor: Nikita Jaipuria , Xianling Zhang , Katherine Stevo , Jinesh Jain , Vidya Nariyambut Murali , Meghana Laxmidhar Gaopande
IPC: G06V10/778 , G06V10/77
CPC classification number: G06V10/778 , G06V10/7715
Abstract: A computer includes a processor and a memory storing instructions executable by the processor to receive a dataset of images; extract feature data from the images; optimize a number of clusters into which the images are classified based on the feature data; for each cluster, optimize a number of subclusters into which the images in that cluster are classified; determine a metric indicating a bias of the dataset toward at least one of the clusters or subclusters based on the number of clusters, the numbers of subclusters, distances between the respective clusters, and distances between the respective subclusters; and after determining the metric, train a machine-learning program using a training set constructed from the clusters and the subclusters.
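A plausible sketch of the clustering step, assuming image features extracted by a pretrained CNN and using silhouette score to choose the cluster count; the actual optimization criterion and bias metric in the patent are not specified, so the size-imbalance indicator below is only illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def best_k(features, k_range=range(2, 10)):
    """Choose the number of clusters that maximizes the silhouette score,
    one plausible way to 'optimize' the cluster count."""
    scores = {k: silhouette_score(features, KMeans(n_clusters=k, n_init=10).fit_predict(features))
              for k in k_range}
    return max(scores, key=scores.get)

def cluster_and_measure_bias(features):
    k = best_k(features)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(features)
    counts = np.bincount(labels, minlength=k)
    imbalance = counts.max() / counts.min()    # crude bias indicator: cluster-size imbalance
    return labels, imbalance
```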
-
Publication Number: US20230196740A1
Publication Date: 2023-06-22
Application Number: US17552913
Filing Date: 2021-12-16
Applicant: Ford Global Technologies, LLC
Inventor: Vidya Nariyambut Murali , Nikita Jaipuria , Xianling Zhang
IPC: G06V10/774 , G06V20/70 , G06T7/00 , G06T7/11
CPC classification number: G06V10/7747 , G06V20/70 , G06T7/0002 , G06T7/11 , G06T2207/20081 , G06T2207/20084
Abstract: This disclosure describes systems and methods for improved training data acquisition. An example method may include sending, by a processor, an indication for a user to capture data relating to a first area of interest using a first mobile device. The example method may also include determining, by the processor, that first data captured by the first mobile device would fail to satisfy a quality requirement. The example method may also include causing, by the processor, to present an indication through the first mobile device to the user to adjust the first mobile device. The example method may also include determining, by the processor, that second data captured by the first mobile device after being adjusted would satisfy the quality requirement. The example method may also include receiving, by the processor, the second data from the first mobile device. The example method may also include receiving, by the processor, third data from a second mobile device, wherein the second data and third data are used to train a neural network associated with a vehicle.
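The abstract leaves the "quality requirement" unspecified; one common choice for camera captures is a blur and brightness check, sketched below as an assumption rather than the patented criterion:

```python
import cv2

def passes_quality_check(image, blur_threshold=100.0, min_brightness=40.0):
    """One possible quality requirement: reject frames that are too blurry
    (low Laplacian variance) or too dark (low mean intensity)."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    brightness = gray.mean()
    return sharpness >= blur_threshold and brightness >= min_brightness

def capture_guidance(image):
    """Return the kind of adjustment prompt the method describes presenting on the
    user's device when a capture would fail the quality requirement."""
    if passes_quality_check(image):
        return "OK"
    return "Hold the phone steady and move to better lighting, then retry."
```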
-
Publication Number: US20210291832A1
Publication Date: 2021-09-23
Application Number: US16821034
Filing Date: 2020-03-17
Applicant: Ford Global Technologies, LLC
Inventor: Kyle Simmons , Luke Niewiadomski , Roger Trombley , Frederic Christen , Christoph Kessler , Katherine Rouen , Erick Michael Lavoie , Hamid M. Golgiri , Bruno Sielly Jales Costa , Nikhil Nagraj Rao , Vidya Nariyambut Murali , John Michael Celli , Frank Golub , Seyed Armin Raeis Hosseiny , Bo Bao , Siyuan Ma , Hemanth Yadav Aradhyula
Abstract: A system for assisting in aligning a vehicle for hitching with a trailer includes a vehicle steering system, a wireless communication module, a detection system outputting a signal including scene data of an area to a rear of the vehicle, and a controller. The controller receives, via the wireless communication module, an automated hitching initiation command from an external wireless device, receives the scene data and identifies the trailer within the area to the rear of the vehicle, derives a backing path to align a hitch ball mounted on the vehicle to a coupler of the trailer, and controls the vehicle steering system to maneuver the vehicle, including reversing along the backing path.
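The patent describes full backing-path planning and steering control; the toy proportional controller below is a much simpler stand-in that only reduces the bearing error from hitch ball to coupler, offered as a hedged illustration of the alignment idea rather than the claimed method:

```python
import math

def reverse_steering_command(hitch_xy, coupler_xy, vehicle_heading, gain=1.0, max_angle=0.5):
    """Toy proportional controller: steer while reversing to reduce the bearing error
    between the vehicle heading and the direction from hitch ball to coupler."""
    dx = coupler_xy[0] - hitch_xy[0]
    dy = coupler_xy[1] - hitch_xy[1]
    bearing = math.atan2(dy, dx)
    error = math.atan2(math.sin(bearing - vehicle_heading),
                       math.cos(bearing - vehicle_heading))   # wrap error to [-pi, pi]
    return max(-max_angle, min(max_angle, gain * error))      # steering angle in radians
```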
-
Publication Number: US11087186B2
Publication Date: 2021-08-10
Application Number: US16657327
Filing Date: 2019-10-18
Applicant: Ford Global Technologies, LLC
Inventor: Madeline Jane Schrier , Vidya Nariyambut Murali
Abstract: The disclosure extends to methods, systems, and apparatuses for automated fixation generation and more particularly relates to generation of synthetic saliency maps. A method for generating saliency information includes receiving a first image and an indication of one or more sub-regions within the first image corresponding to one or more objects of interest. The method includes generating and storing a label image by creating an intermediate image having one or more random points. The random points have a first color in regions corresponding to the sub-regions, and the remainder of the intermediate image has a second color. Generating and storing the label image further includes applying a Gaussian blur to the intermediate image.
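A minimal NumPy/OpenCV sketch of the label-image construction described here. The point count, blur sigma, and normalization are illustrative assumptions; foreground points are white on a black background, standing in for the "first" and "second" colors:

```python
import cv2
import numpy as np

def synthetic_saliency(image_shape, regions, points_per_region=50, blur_sigma=15):
    """Scatter random points inside the regions of interest, leave the rest black,
    then Gaussian-blur the intermediate image to form the saliency label."""
    h, w = image_shape[:2]
    intermediate = np.zeros((h, w), dtype=np.float32)        # second color (background)
    rng = np.random.default_rng()
    for (x, y, rw, rh) in regions:                           # sub-regions of interest
        xs = rng.integers(x, x + rw, points_per_region)
        ys = rng.integers(y, y + rh, points_per_region)
        intermediate[ys, xs] = 1.0                           # first color (random points)
    ksize = int(6 * blur_sigma) | 1                          # odd kernel covering ~3 sigma
    saliency = cv2.GaussianBlur(intermediate, (ksize, ksize), blur_sigma)
    return saliency / max(saliency.max(), 1e-6)              # normalize to [0, 1]
```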
-
Publication Number: US11042758B2
Publication Date: 2021-06-22
Application Number: US16460066
Filing Date: 2019-07-02
Applicant: Ford Global Technologies, LLC
Inventor: Nikita Jaipuria , Gautham Sholingar , Vidya Nariyambut Murali , Rohan Bhasin , Akhil Perincherry
Abstract: A computer, including a processor and a memory, the memory including instructions to be executed by the processor to generate a synthetic image and corresponding ground truth and generate a plurality of domain adapted synthetic images by processing the synthetic image with a variational autoencoder-generative adversarial network (VAE-GAN), wherein the VAE-GAN is trained to adapt the synthetic image from a first domain to a second domain. The instructions can include further instructions to train a deep neural network (DNN) based on the domain adapted synthetic images and the corresponding ground truth and process images with the trained deep neural network to determine objects.
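A skeleton of the pipeline this abstract outlines. The `vae_gan` translator and `detector` objects are hypothetical placeholders for the trained VAE-GAN and the DNN, not the patent's implementation; the point of the sketch is the ordering of the steps:

```python
def domain_adapt_and_train(synthetic_images, ground_truth, vae_gan, detector, epochs=10):
    """Adapt synthetic images to the target domain, then train the detector on them."""
    # 1. Adapt each synthetic image from the simulation domain to the real-image domain.
    adapted = [vae_gan.translate(img) for img in synthetic_images]
    # 2. Train the DNN on adapted images with the original labels, which remain valid
    #    because domain adaptation changes appearance rather than scene geometry.
    for _ in range(epochs):
        for img, label in zip(adapted, ground_truth):
            detector.train_step(img, label)
    return detector

# 3. At inference time, the trained detector processes real camera images:
#    objects = detector.predict(camera_image)
```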