-
Publication No.: US11798139B2
Publication Date: 2023-10-24
Application No.: US17099995
Filing Date: 2020-11-17
Applicant: GM Global Technology Operations LLC
Inventor: Michael Slutsky
CPC classification number: G06T5/003 , G06N3/045 , G06N3/08 , G06T5/002 , G06T5/005 , G06T2207/20081 , G06T2207/20084 , G06T2207/30248
Abstract: Systems and methods to perform noise-adaptive non-blind deblurring on an input image that includes blur and noise involve implementing a first neural network on the input image to obtain one or more parameters and performing regularized deconvolution to obtain a deblurred image from the input image. The regularized deconvolution uses the one or more parameters to control noise in the deblurred image. A method includes implementing a second neural network to remove artifacts from the deblurred image and provide an output image.
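The regularized deconvolution step can be sketched as a frequency-domain Tikhonov (Wiener-style) filter, in which a scalar `reg_param` stands in for the network-predicted parameter that controls noise in the deblurred image. This is a minimal sketch under assumed names; the patent does not specify this exact filter form:

```python
import numpy as np

def regularized_deconvolution(blurred, kernel, reg_param):
    """Frequency-domain Tikhonov-regularized (Wiener-style) deconvolution.

    `reg_param` plays the role of the network-predicted parameter that
    trades residual blur against noise amplification.
    """
    H = np.fft.fft2(kernel, s=blurred.shape)   # blur-kernel spectrum
    B = np.fft.fft2(blurred)                   # blurred-image spectrum
    # conj(H) / (|H|^2 + lambda): a larger reg_param suppresses noise more
    F = np.conj(H) * B / (np.abs(H) ** 2 + reg_param)
    return np.real(np.fft.ifft2(F))
```

With a small `reg_param` and low noise, the filter approaches plain inverse filtering and recovers the sharp image; raising it damps the frequencies where the kernel spectrum is weak.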
-
Publication No.: US20230150429A1
Publication Date: 2023-05-18
Application No.: US17528715
Filing Date: 2021-11-17
Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventor: Michael Slutsky , Albert Shalumov
IPC: B60R1/00 , B60K35/00 , H04N5/247 , G06K9/00 , G06K9/62 , G06T7/55 , G06T7/70 , H04N13/111 , G06T19/00
CPC classification number: B60R1/00 , B60K35/00 , H04N5/247 , G06K9/00791 , G06K9/6267 , G06K9/6261 , G06T7/55 , G06T7/70 , H04N13/111 , G06T19/006 , B60R2300/105 , B60R2300/207 , B60R2300/307 , B60R2300/303 , B60R2300/60 , B60K2370/166 , B60K2370/178 , B60K2370/21 , B60K2370/31 , B60K2370/52 , B60Q9/00
Abstract: Presented are intelligent vehicle systems that use networked vehicle-mounted cameras with camera-view augmentation capabilities, methods for making/using such systems, and vehicles equipped with such systems. A method for operating a motor vehicle includes a system controller receiving, from a network of vehicle-mounted cameras, camera image data containing a target object from the perspective of one or more cameras. The controller analyzes the camera image data to identify characteristics of the target object and classifies these characteristics to a corresponding model collection set associated with the type of target object. The controller then identifies a 3D object model assigned to the model collection set associated with the target object type. A new “virtual” image is generated by replacing the target object with the 3D object model positioned in a new orientation. The controller commands a resident vehicle system to execute a control operation using the new image.
-
Publication No.: US11532165B2
Publication Date: 2022-12-20
Application No.: US16801587
Filing Date: 2020-02-26
Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventor: Michael Slutsky , Daniel Kigli
Abstract: In various embodiments, methods and systems are provided for processing camera data from a camera system associated with a vehicle. In one embodiment, a method includes: storing a plurality of photorealistic scenes of an environment; training, by a processor, a machine learning model to produce a surround view approximating a ground truth surround view using the plurality of photorealistic scenes as training data; and processing, by a processor, the camera data from the camera system associated with the vehicle based on the trained machine learning model to produce a surround view of an environment of the vehicle.
-
Publication No.: US20200309530A1
Publication Date: 2020-10-01
Application No.: US16371365
Filing Date: 2019-04-01
Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventor: Michael Slutsky , Daniel I. Dobkin
Abstract: A vehicle pose determining system and method for accurately estimating the pose of a vehicle (i.e., the location and/or orientation of a vehicle). The system and method use a form of sensor fusion, where output from vehicle dynamics sensors (e.g., accelerometers, gyroscopes, encoders, etc.) is used with output from vehicle radar sensors to improve the accuracy of the vehicle pose data. Uncorrected vehicle pose data derived from dynamics sensor data is compensated with correction data that is derived from occupancy grids that are based on radar sensor data. The occupancy grids, which are 2D or 3D mathematical objects that are somewhat like radar-based maps, must correspond to the same geographic location. The system and method use mathematical techniques (e.g., cost functions) to rotate and shift multiple occupancy grids until a best fit solution is determined, and the best fit solution is then used to derive the correction data that, in turn, improves the accuracy of the vehicle pose data.
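The rotate-and-shift alignment described above can be sketched as a brute-force cost-function search over candidate shifts between two occupancy grids; the best-fit shift is then the pose-correction estimate. Rotation is omitted for brevity, and the SSD cost and function names are illustrative assumptions, not the patent's exact method:

```python
import numpy as np

def best_fit_shift(grid_ref, grid_new, max_shift=3):
    """Search integer (dy, dx) shifts of grid_new that best align it with
    grid_ref under a sum-of-squared-differences cost function."""
    best, best_cost = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(grid_new, dy, axis=0), dx, axis=1)
            cost = np.sum((grid_ref - shifted) ** 2)   # SSD cost
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best
```

In a full system the recovered shift (and rotation) would be converted from grid cells to meters and fed back as the correction term for the dynamics-derived pose.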
-
Publication No.: US10605924B2
Publication Date: 2020-03-31
Application No.: US15667062
Filing Date: 2017-08-02
Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventor: Michael Slutsky , Ariel Lipson , Itai Afek
Abstract: The present application generally relates to communications and hazard avoidance within a monitored driving environment. More specifically, the application teaches a system for improved target object detection in a vehicle equipped with a laser detection and ranging (LIDAR) system by simultaneously transmitting multiple lasers in an array and resolving angular ambiguities using a plurality of horizontal detectors and a plurality of vertical detectors.
-
Publication No.: US20190353778A1
Publication Date: 2019-11-21
Application No.: US15979963
Filing Date: 2018-05-15
Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventor: Michael Slutsky , Daniel I. Dobkin
Abstract: A vehicle, system and method of mapping the environment is disclosed. The system includes a sensor and a processor. The sensor is configured to obtain a detection from an object in an environment surrounding the vehicle. The processor is configured to compute a plurality of radial components and a plurality of angular components for a positive inverse sensor model (ISM) of an occupancy grid, select a radial component corresponding to a range of the detection from the plurality of radial components and select an angular component corresponding to an angle of the detection from the plurality of angular components, multiply the selected radial component and the selected angular component to create an occupancy grid for the detection, and map the environment using the occupancy grid.
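The radial-times-angular construction can be sketched as an outer product of two 1-D profiles, one over range bins and one over bearing bins. The Gaussian profiles and parameter names below are illustrative assumptions; the patent only specifies that the two selected components are multiplied:

```python
import numpy as np

def positive_ism_grid(ranges, angles, det_range, det_angle,
                      sigma_r=0.5, sigma_a=0.05):
    """Occupancy grid for one detection as the outer product of a radial
    component (peaked at the detected range) and an angular component
    (peaked at the detected bearing)."""
    radial = np.exp(-0.5 * ((ranges - det_range) / sigma_r) ** 2)
    angular = np.exp(-0.5 * ((angles - det_angle) / sigma_a) ** 2)
    return np.outer(radial, angular)   # shape: (n_ranges, n_angles)
```

The resulting grid peaks at the cell nearest the detection and falls off separably in range and angle, which is what makes the factored (radial x angular) representation cheap to evaluate.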
-
Publication No.: US10416679B2
Publication Date: 2019-09-17
Application No.: US15634070
Filing Date: 2017-06-27
Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventor: Ariel Lipson , Michael Slutsky , Oded Bialer
Abstract: The present application generally relates to communications and hazard avoidance within a monitored driving environment. More specifically, the application teaches a system and method for improved target object detection in a vehicle equipped with a laser detection and ranging (LIDAR) system by transmitting a light pulse of known duration and comparing the duration of the received pulse to that of the transmitted pulse in order to determine the orientation of a surface of a target.
-
Publication No.: US12008817B2
Publication Date: 2024-06-11
Application No.: US17198954
Filing Date: 2021-03-11
Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventor: Albert Shalumov , Michael Slutsky
IPC: G06V20/58 , G06F18/214 , G06F18/22 , G06N3/04 , G06N3/08 , G06T5/70 , G06T7/13 , G06T7/55 , G06V30/262 , H04N7/18 , H04N23/698 , H04N23/90 , G05D1/00
CPC classification number: G06V20/58 , G06F18/214 , G06F18/22 , G06N3/04 , G06N3/08 , G06T5/70 , G06T7/13 , G06T7/55 , G06V30/274 , H04N7/181 , H04N23/698 , H04N23/90 , G05D1/0246 , G06T2207/20081 , G06T2207/20084 , G06T2207/30261
Abstract: Methods and systems for training a neural network for depth estimation in a vehicle. The methods and systems receive respective training image data from at least two cameras. Fields of view of adjacent cameras of the at least two cameras partially overlap. The respective training image data is processed through a neural network providing depth data and semantic segmentation data as outputs. The neural network is trained based on a loss function. The loss function combines a plurality of loss terms including at least a semantic segmentation loss term and a panoramic loss term. The panoramic loss term includes a similarity measure regarding overlapping image patches of the respective image data that each correspond to a region of overlapping fields of view of the adjacent cameras. The semantic segmentation loss term quantifies a difference between ground truth semantic segmentation data and the semantic segmentation data output from the neural network.
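A minimal numpy sketch of such a combined loss: cross-entropy for the semantic segmentation term and mean absolute difference over the overlapping patches for the panoramic term. The weights and the specific similarity measure are assumptions; the patent only states that the terms are combined:

```python
import numpy as np

def cross_entropy(pred_probs, gt_labels):
    """Semantic segmentation loss: mean per-pixel cross-entropy.
    pred_probs: (H, W, C) class probabilities; gt_labels: (H, W) ints."""
    n = gt_labels.size
    picked = pred_probs.reshape(n, -1)[np.arange(n), gt_labels.ravel()]
    return -np.mean(np.log(picked + 1e-12))

def panoramic_loss(patch_a, patch_b):
    """Similarity term over overlapping patches from adjacent cameras:
    mean absolute difference (lower = more cross-camera consistency)."""
    return np.mean(np.abs(patch_a - patch_b))

def total_loss(pred_probs, gt_labels, patch_a, patch_b,
               w_seg=1.0, w_pan=0.1):
    """Weighted combination of the two loss terms."""
    return (w_seg * cross_entropy(pred_probs, gt_labels)
            + w_pan * panoramic_loss(patch_a, patch_b))
```

In training, the panoramic term pushes the depth network to produce predictions that render consistently in the region where the adjacent cameras' fields of view overlap.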
-
Publication No.: US20230196790A1
Publication Date: 2023-06-22
Application No.: US17557448
Filing Date: 2021-12-21
Applicant: GM GLOBAL TECHNOLOGY OPERATIONS LLC
Inventor: Tzvi Philipp , Eran Kishon , Michael Slutsky
CPC classification number: G06V20/584 , B60W50/14 , G06V10/25 , G02B27/01 , B60W2050/146 , B60W2420/42 , G02B2027/0196
Abstract: A visual perception system includes a scanning camera, color sensor with color filter array (CFA), and a classifier node. The camera captures full color pixel images of a target object, e.g., a traffic light, and processes the pixel images through a narrow band pass filter (BPF), such that the narrow BPF outputs monochromatic images of the target object. The color sensor and CFA receive the monochromatic images. The color sensor has at least three color channels each corresponding to different colors of spectral data in the monochromatic images. The classifier node uses a predetermined classification decision tree to classify constituent pixels of the monochromatic images into different color bins as a corresponding color of interest. The color of interest may be used to perform a control action, e.g., via an automated driver assist system (ADAS) control unit or an indicator device.
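The per-pixel classification step can be sketched as a tiny hard-coded decision tree over the three channel responses. The thresholds, bin names, and tree shape here are purely illustrative, not the patent's predetermined tree:

```python
def classify_pixel(r, g, b):
    """Toy decision tree binning one pixel's channel responses into a
    traffic-light color of interest (hypothetical thresholds)."""
    # red and green both strong and comparable, blue weak -> yellow/amber
    if r > 2 * b and g > 2 * b and abs(r - g) < 0.3 * max(r, g):
        return "yellow"
    if r > g and r > b:
        return "red"
    if g > r and g > b:
        return "green"
    return "unknown"
```

A real CFA-based classifier would operate on the sensor's filtered channel responses to the narrow-band monochromatic image rather than on raw RGB, but the branch-on-channel-ratios structure is the same.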
-
Publication No.: US20230050264A1
Publication Date: 2023-02-16
Application No.: US17402000
Filing Date: 2021-08-13
Applicant: GM Global Technology Operations LLC
Inventor: Michael Slutsky , Albert Shalumov
Abstract: Systems and methods for generating a virtual view of a virtual camera based on an input image are described. A system for generating a virtual view of a virtual camera based on an input image can include a capturing device including a physical camera and a depth sensor. The system also includes a controller configured to determine an actual pose of the capturing device; determine a desired pose of the virtual camera for showing the virtual view; define an epipolar geometry between the actual pose of the capturing device and the desired pose of the virtual camera; and generate a virtual image depicting objects within the input image according to the desired pose of the virtual camera for the virtual camera based on an epipolar relation between the actual pose of the capturing device, the input image, and the desired pose of the virtual camera.
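The epipolar relation between the two poses can be sketched as a depth-based reprojection: back-project each pixel with its sensed depth, transform the 3-D point by the relative pose, and project it into the virtual camera. This is a minimal pinhole-model sketch under assumed names, not the patent's full pipeline:

```python
import numpy as np

def reproject_to_virtual(depth, K, T_virt_from_actual):
    """Map each actual-camera pixel to virtual-camera pixel coordinates.

    depth: (h, w) per-pixel depth from the depth sensor
    K: 3x3 pinhole intrinsics (assumed shared by both cameras)
    T_virt_from_actual: 4x4 rigid transform between the two poses
    """
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    pix = np.stack([u, v, np.ones_like(u)],
                   axis=-1).reshape(-1, 3).T.astype(float)   # 3 x N
    rays = np.linalg.inv(K) @ pix          # unit-depth rays, actual frame
    pts = rays * depth.ravel()             # 3-D points at sensed depth
    pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])
    pts_v = (T_virt_from_actual @ pts_h)[:3]   # into the virtual frame
    proj = K @ pts_v
    return (proj[:2] / proj[2]).T.reshape(h, w, 2)   # (x, y) per pixel
```

Warping the input image's pixel values to these coordinates (and handling occlusions) yields the virtual image for the desired pose; with an identity transform the mapping reduces to the original pixel grid.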