Abstract:
Disclosed herein are an autonomous driving method for avoiding a stopped vehicle and an apparatus for the same. The autonomous driving method for avoiding a stopped vehicle is performed by an autonomous driving control apparatus provided in an autonomous vehicle, and includes obtaining taillight recognition information for a stopped vehicle identified ahead of the autonomous vehicle, determining whether the stopped vehicle is to be avoided in consideration of the taillight recognition information, when it is determined that the stopped vehicle is to be avoided, setting an avoidance method in consideration of whether lane returning is to be performed, which is determined based on an autonomous driving task, and setting an avoidance time point corresponding to the avoidance method and controlling the autonomous vehicle to avoid the stopped vehicle by traveling along an avoidance path generated in conformity with the avoidance time point.
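As an illustration only, the following Python sketch shows one way the avoidance decision described above could be organized. The names (TaillightInfo, AvoidanceMethod, plan_avoidance) and the rule that hazard lights indicate a long-term stop are assumptions, not the patented logic, and selection of the avoidance time point is omitted.

    from dataclasses import dataclass
    from enum import Enum, auto


    class AvoidanceMethod(Enum):
        LANE_CHANGE = auto()          # move to the adjacent lane and stay there
        BORROW_AND_RETURN = auto()    # briefly borrow the adjacent lane, then return


    @dataclass
    class TaillightInfo:
        hazard_lights_on: bool
        brake_lights_on: bool


    def plan_avoidance(taillight: TaillightInfo, must_return_to_lane: bool):
        """Decide whether to avoid a stopped vehicle and how, from taillight cues.

        Hypothetical rule: hazard lights suggest a long-term stop (avoid),
        brake lights alone suggest a temporary stop (wait).
        """
        if taillight.brake_lights_on and not taillight.hazard_lights_on:
            return None  # vehicle is likely to move again; no avoidance needed
        return (AvoidanceMethod.BORROW_AND_RETURN
                if must_return_to_lane else AvoidanceMethod.LANE_CHANGE)


    if __name__ == "__main__":
        info = TaillightInfo(hazard_lights_on=True, brake_lights_on=False)
        print(plan_avoidance(info, must_return_to_lane=True))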
Abstract:
Disclosed are a method and an apparatus for generating a map for autonomous driving and recognizing a location based on the generated map. When generating the map, a spherical range image is obtained by projecting 3D coordinate information corresponding to a 3D space onto a 2D plane, and semantic segmentation is performed on the spherical range image to generate a semantic segmented image. Then, map data including the spherical range image, the semantic segmented image, and lane attribute information is generated.
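The spherical range image projection is a standard operation; below is a minimal NumPy sketch assuming illustrative sensor parameters (a 64x1024 image, +3°/-25° vertical field of view). The semantic segmentation and lane-attribute steps are not shown.

    import numpy as np


    def to_spherical_range_image(points, height=64, width=1024,
                                 fov_up_deg=3.0, fov_down_deg=-25.0):
        """Project Nx3 LiDAR points (x, y, z) onto a 2D spherical range image.

        Each pixel stores the range of the point that falls into it
        (0 where no point lands). FOV and resolution are illustrative.
        """
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        r = np.sqrt(x ** 2 + y ** 2 + z ** 2) + 1e-9

        yaw = np.arctan2(y, x)      # azimuth in [-pi, pi]
        pitch = np.arcsin(z / r)    # elevation

        fov_up = np.radians(fov_up_deg)
        fov_down = np.radians(fov_down_deg)

        u = 0.5 * (1.0 - yaw / np.pi) * width                        # column
        v = (1.0 - (pitch - fov_down) / (fov_up - fov_down)) * height  # row

        u = np.clip(np.floor(u), 0, width - 1).astype(np.int32)
        v = np.clip(np.floor(v), 0, height - 1).astype(np.int32)

        image = np.zeros((height, width), dtype=np.float32)
        image[v, u] = r
        return image


    if __name__ == "__main__":
        pts = np.random.uniform(-50, 50, size=(10000, 3))
        print(to_spherical_range_image(pts).shape)  # (64, 1024)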
Abstract:
Disclosed is a system performing a method for detecting intersection traffic light information. The system includes a traffic light detection module including an image sensor that generates first signal data based on traffic light image data in which a traffic light is included, a communication module that receives second signal data via communication with a surrounding object and an external device, an object information collection module that collects dynamic data of the surrounding object, and a signal information inference module that infers third signal data based on the dynamic data. The dynamic data of the surrounding object includes at least one of: whether the surrounding object is moving, a moving direction of the surrounding object, and whether the surrounding object is accelerating or decelerating. Each piece of signal data includes information about a type of the traffic light and a signal direction of the traffic light.
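A hedged sketch of the third-signal-data inference step follows, assuming a simple voting rule over surrounding-object dynamics; the ObjectDynamics fields and the green/red heuristic are hypothetical and not taken from the disclosure.

    from dataclasses import dataclass
    from typing import List, Optional


    @dataclass
    class ObjectDynamics:
        is_moving: bool
        heading: str          # e.g. "straight", "left", "right" relative to the ego route
        is_accelerating: bool
        is_decelerating: bool


    def infer_signal_state(objects: List[ObjectDynamics],
                           direction: str = "straight") -> Optional[str]:
        """Infer a likely signal state for one direction from surrounding traffic.

        Hypothetical rule: vehicles in that direction moving or accelerating
        suggest "green"; vehicles stopped or decelerating suggest "red".
        """
        same_dir = [o for o in objects if o.heading == direction]
        if not same_dir:
            return None  # nothing to infer from
        go_votes = sum(1 for o in same_dir if o.is_moving or o.is_accelerating)
        stop_votes = sum(1 for o in same_dir if o.is_decelerating or not o.is_moving)
        if go_votes == stop_votes:
            return None
        return "green" if go_votes > stop_votes else "red"


    if __name__ == "__main__":
        objs = [ObjectDynamics(True, "straight", True, False),
                ObjectDynamics(True, "straight", False, False)]
        print(infer_signal_state(objs))  # green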
Abstract:
Disclosed is a system for executing a traffic light recognition model learning and inference method. The system includes a data collection platform including a camera for collecting image data, and a first processor that samples traffic light image data including a traffic light from among the image data, generates annotation data based on the traffic light image data, and generates a traffic light data set using the traffic light image data and the annotation data. The traffic light data set includes information on a location of the traffic light, a type of the traffic light, an on/off state of the traffic light, and a traffic signal direction of the traffic light.
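A minimal sketch of what one record of such a traffic light data set might look like, with hypothetical field names covering the four kinds of information listed above (location, type, on/off state, signal direction); the JSON-lines layout is an assumption.

    import json
    from dataclasses import dataclass, asdict
    from typing import List, Tuple


    @dataclass
    class TrafficLightAnnotation:
        image_path: str
        bbox: Tuple[int, int, int, int]   # x, y, width, height in pixels (location)
        light_type: str                   # e.g. "vehicle_3_lamp", "pedestrian"
        is_on: bool                       # on/off state of the lamp
        signal_direction: str             # e.g. "straight", "left_turn"


    def build_dataset(annotations: List[TrafficLightAnnotation], out_path: str) -> None:
        """Write sampled traffic-light annotations as a JSON-lines data set."""
        with open(out_path, "w", encoding="utf-8") as f:
            for ann in annotations:
                f.write(json.dumps(asdict(ann)) + "\n")


    if __name__ == "__main__":
        sample = TrafficLightAnnotation("frames/000123.png", (412, 88, 36, 92),
                                        "vehicle_3_lamp", True, "straight")
        build_dataset([sample], "traffic_light_dataset.jsonl")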
Abstract:
Disclosed herein are an apparatus and method for providing location and heading information of an autonomous driving vehicle on a road within a housing complex. The apparatus includes an image sensor installed on the autonomous driving vehicle and configured to detect images of the surroundings depending on motion of the autonomous driving vehicle. A wireless communication unit is installed on the autonomous driving vehicle and is configured to wirelessly receive a Geographic Information System (GIS) map of the inside of the housing complex transmitted from an in-housing-complex management device. A location/heading recognition unit is installed on the autonomous driving vehicle and is configured to recognize the location and heading of the autonomous driving vehicle based on the image information received from the image sensor and the GIS map of the inside of the housing complex received via the wireless communication unit.
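One conventional way to recover 2D location and heading from landmarks matched between the imagery and the GIS map is closed-form rigid alignment; the sketch below assumes such point correspondences are already available, which the abstract does not specify.

    import numpy as np


    def estimate_pose_2d(map_points: np.ndarray, observed_points: np.ndarray):
        """Recover the vehicle's 2D location and heading from matched landmarks.

        map_points:      Nx2 landmark coordinates from the GIS map (world frame).
        observed_points: Nx2 coordinates of the same landmarks in the vehicle frame
                         (e.g. derived from the image sensor).
        Returns (x, y, heading_rad) such that world = R(heading) @ vehicle + t,
        using the closed-form 2D rigid (Kabsch/Procrustes) alignment.
        """
        mu_m = map_points.mean(axis=0)
        mu_o = observed_points.mean(axis=0)
        dm = map_points - mu_m
        do = observed_points - mu_o

        heading = np.arctan2((do[:, 0] * dm[:, 1] - do[:, 1] * dm[:, 0]).sum(),
                             (do[:, 0] * dm[:, 0] + do[:, 1] * dm[:, 1]).sum())
        c, s = np.cos(heading), np.sin(heading)
        R = np.array([[c, -s], [s, c]])
        t = mu_m - R @ mu_o
        return t[0], t[1], heading


    if __name__ == "__main__":
        world = np.array([[10.0, 5.0], [12.0, 7.0], [9.0, 9.0]])
        theta = np.radians(30)
        R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
        vehicle = (R.T @ (world - np.array([3.0, 1.0])).T).T  # synthetic observation
        print(estimate_pose_2d(world, vehicle))  # ~ (3.0, 1.0, 0.5236)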
Abstract:
Disclosed is a processor which includes a camera image feature extractor that extracts a camera image feature based on a camera image, a LIDAR image feature extractor that extracts a LIDAR image feature based on a LIDAR image, a sampling unit that performs a sampling operation based on the camera image feature and the LIDAR image feature and generates a sampled LIDAR image feature, a fusion unit that fuses the camera image feature and the sampled LIDAR image feature and generates a fusion map, and a decoding unit that decodes the fusion map and generates a depth map. The sampling operation includes back-projecting a pixel location of the camera image feature onto a camera coordinate system to generate a back-projection point, and projecting the back-projection point onto a plane of the LIDAR image to calculate sampling coordinates.
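A minimal NumPy sketch of the described sampling operation, assuming a pinhole camera model with intrinsic matrix K, a depth hypothesis for the pixel, a camera-to-LiDAR transform, and a spherical range-image parameterization for the LIDAR image; these conventions are assumptions for illustration, not the disclosed implementation.

    import numpy as np


    def sample_coords_on_lidar_image(u, v, depth, K, T_cam_to_lidar,
                                     lidar_h=64, lidar_w=1024,
                                     fov_up_deg=3.0, fov_down_deg=-25.0):
        """Back-project a camera pixel to 3D and project it onto a spherical
        LiDAR range image to obtain sampling coordinates (row, col).

        depth is a depth hypothesis for the pixel; K is the 3x3 camera intrinsic
        matrix; T_cam_to_lidar is a 4x4 transform from camera to LiDAR frame.
        """
        # Back-projection: pixel (u, v) with the given depth -> camera coordinates
        p_cam = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))

        # Move the back-projection point into the LiDAR frame
        p_lidar = (T_cam_to_lidar @ np.append(p_cam, 1.0))[:3]

        # Project onto the spherical LiDAR image plane
        x, y, z = p_lidar
        r = np.linalg.norm(p_lidar) + 1e-9
        yaw, pitch = np.arctan2(y, x), np.arcsin(z / r)
        fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)

        col = 0.5 * (1.0 - yaw / np.pi) * lidar_w
        row = (1.0 - (pitch - fov_down) / (fov_up - fov_down)) * lidar_h
        return row, col  # fractional coordinates, suitable for bilinear sampling


    if __name__ == "__main__":
        K = np.array([[700.0, 0.0, 640.0], [0.0, 700.0, 360.0], [0.0, 0.0, 1.0]])
        T = np.eye(4)
        T[:3, :3] = [[0, 0, 1], [-1, 0, 0], [0, -1, 0]]  # camera axes -> LiDAR axes
        print(sample_coords_on_lidar_image(640.0, 360.0, 10.0, K, T))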
Abstract:
Disclosed herein are an apparatus and method for automatic parking based on recognition of a parking area environment. The method may include searching for an available parking space, determining whether reverse parking is possible by recognizing the environment of the available parking space, recognizing at least one additional vehicle located in the vicinity of the available parking space, and setting a parking destination based on the determination of whether reverse parking is possible and the result of recognition of the at least one additional vehicle.
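As a rough illustration, the sketch below encodes the destination-setting step as a rule over the recognized space environment and neighboring vehicles; the ParkingSpace/NeighborVehicle fields and the preference for reverse parking are hypothetical.

    from dataclasses import dataclass
    from typing import List


    @dataclass
    class ParkingSpace:
        center_x: float
        center_y: float
        heading: float                 # orientation of the space, in radians
        reverse_maneuver_clear: bool   # environment allows the reverse approach


    @dataclass
    class NeighborVehicle:
        parked_nose_in: bool           # True if the adjacent vehicle is parked front-first


    def choose_parking_pose(space: ParkingSpace,
                            neighbors: List[NeighborVehicle]) -> dict:
        """Set a parking destination for an available space.

        Hypothetical rule: park in reverse when the environment allows it,
        unless most neighboring vehicles are parked nose-in, in which case
        match their orientation.
        """
        nose_in_majority = bool(neighbors) and (
            sum(n.parked_nose_in for n in neighbors) > len(neighbors) / 2)
        reverse = space.reverse_maneuver_clear and not nose_in_majority
        return {"x": space.center_x, "y": space.center_y,
                "heading": space.heading, "reverse": reverse}


    if __name__ == "__main__":
        space = ParkingSpace(12.0, 3.5, 1.57, reverse_maneuver_clear=True)
        print(choose_parking_pose(space, [NeighborVehicle(parked_nose_in=False)]))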
Abstract:
Disclosed herein is a method of controlling an autonomous vehicle driving in a lane of a main line. The method may include determining whether the autonomous vehicle is driving in a target lane to accommodate merging traffic, determining whether a merge request message is received from a merging vehicle when the autonomous vehicle is determined to be driving in the target lane, determining whether a collision with the merging vehicle will occur based on the merge request message when the merge request message is received, and sending a merge approval message to the merging vehicle when the collision with the merging vehicle is expected.
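The collision-determination step could, for example, be a time-gap test at the merge point, as in the sketch below; the MergeRequest fields and the 2-second threshold are assumptions, and the decision of when to send the merge approval message is left to the logic described in the abstract.

    from dataclasses import dataclass


    @dataclass
    class MergeRequest:
        distance_to_merge_point_m: float   # merging vehicle's distance to the merge point
        speed_mps: float                   # merging vehicle's current speed


    def collision_expected(request: MergeRequest,
                           ego_distance_to_merge_point_m: float,
                           ego_speed_mps: float,
                           safe_gap_s: float = 2.0) -> bool:
        """Predict whether the ego vehicle and the merging vehicle would reach the
        merge point too close together, using a constant-speed time-gap test.
        """
        t_merge = request.distance_to_merge_point_m / max(request.speed_mps, 0.1)
        t_ego = ego_distance_to_merge_point_m / max(ego_speed_mps, 0.1)
        return abs(t_merge - t_ego) < safe_gap_s


    if __name__ == "__main__":
        req = MergeRequest(distance_to_merge_point_m=80.0, speed_mps=20.0)
        print(collision_expected(req, ego_distance_to_merge_point_m=75.0,
                                 ego_speed_mps=22.0))  # True: ~4.0 s vs ~3.4 s arrivals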
Abstract:
Disclosed herein are an object recognition apparatus of an automated driving system using error removal based on object classification and a method using the same. The object recognition method includes training a multi-object classification model based on deep learning using training data including a data set corresponding to a noise class, into which false-positive objects are classified, among classes classified by the types of objects; acquiring a point cloud and image data using a LiDAR sensor and a camera provided in an autonomous vehicle, respectively; extracting a crop image, corresponding to at least one object recognized based on the point cloud, from the image data and inputting it to the multi-object classification model; and removing a false-positive object classified into the noise class, among the at least one object, by the multi-object classification model.
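A minimal sketch of the false-positive removal step, assuming the LiDAR detections' bounding boxes have already been projected into the image and using a placeholder callable in place of the trained multi-object classification model.

    from typing import Callable, List, Tuple

    import numpy as np

    # A detected object with its 2D bounding box already projected into the image
    Detection = Tuple[str, Tuple[int, int, int, int]]   # (object_id, (x, y, w, h))


    def remove_false_positives(image: np.ndarray,
                               detections: List[Detection],
                               classify: Callable[[np.ndarray], str],
                               noise_class: str = "noise") -> List[Detection]:
        """Drop detections whose image crop the classifier assigns to the noise
        class; `classify` stands in for the trained multi-object classifier.
        """
        kept = []
        for obj_id, (x, y, w, h) in detections:
            crop = image[y:y + h, x:x + w]
            if classify(crop) != noise_class:
                kept.append((obj_id, (x, y, w, h)))
        return kept


    if __name__ == "__main__":
        frame = np.zeros((720, 1280, 3), dtype=np.uint8)
        dets = [("obj_1", (100, 200, 64, 128)), ("obj_2", (900, 400, 32, 32))]

        def dummy_classifier(crop: np.ndarray) -> str:
            # Placeholder: flag very small crops as noise
            return "noise" if crop.shape[0] * crop.shape[1] < 2048 else "car"

        print(remove_false_positives(frame, dets, dummy_classifier))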