-
Publication No.: US10650548B1
Publication Date: 2020-05-12
Application No.: US16731093
Filing Date: 2019-12-31
Applicant: STRADVISION, INC.
Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
Abstract: A method for detecting a location of a subject vehicle capable of autonomous driving by using landmark detection is provided. The method includes steps of: (a) a computing device, if a live feature map is acquired, detecting feature map coordinates on the live feature map for each of the reference objects included in a subject data region corresponding to a location and a posture of the subject vehicle, by referring to (i) reference feature maps corresponding to the reference objects and (ii) the live feature map; (b) the computing device detecting image coordinates of the reference objects on a live image by referring to the feature map coordinates; and (c) the computing device detecting an optimized subject coordinate of the subject vehicle by referring to 3-dimensional coordinates of the reference objects in the real world.
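The abstract describes a matching step (reference feature maps located on the live feature map) followed by a coordinate conversion. The sketch below illustrates only that portion under stated assumptions: plain cross-correlation as the similarity measure and a fixed backbone stride for mapping feature-map coordinates to live-image coordinates; the pose optimization of step (c) is not reproduced.

```python
import numpy as np

def detect_feature_map_coordinate(live_fmap, ref_fmap):
    """Slide ref_fmap (C, h, w) over live_fmap (C, H, W) and return the (row, col)
    on the live feature map where the cross-correlation score is highest."""
    _, H, W = live_fmap.shape
    _, h, w = ref_fmap.shape
    best_score, best_rc = -np.inf, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            window = live_fmap[:, r:r + h, c:c + w]
            score = float(np.sum(window * ref_fmap))  # plain cross-correlation
            if score > best_score:
                best_score, best_rc = score, (r, c)
    return best_rc

def to_image_coordinate(fmap_rc, stride=8):
    """Map a feature-map coordinate to a live-image pixel coordinate, assuming
    the backbone downsamples the image by a fixed stride (hypothetical value)."""
    r, c = fmap_rc
    return (r * stride, c * stride)

live = np.random.randn(64, 40, 60).astype(np.float32)   # toy live feature map
ref = live[:, 10:14, 20:24].copy()                       # toy reference feature map
print(to_image_coordinate(detect_feature_map_coordinate(live, ref)))  # -> (80, 160)
```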
-
Publication No.: US10650279B1
Publication Date: 2020-05-12
Application No.: US16724301
Filing Date: 2019-12-22
Applicant: STRADVISION, INC.
Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
Abstract: A learning method for generating integrated object detection information of an integrated image by integrating first object detection information and second object detection information is provided. The method includes steps of: (a) a learning device, if the first object detection information and the second object detection information are acquired, instructing a concatenating network included in a DNN to generate pair feature vectors including information on pairs of first original ROIs and second original ROIs; (b) the learning device instructing a determining network included in the DNN to apply FC operations to the pair feature vectors, to thereby generate (i) determination vectors and (ii) box regression vectors; and (c) the learning device instructing a loss unit to generate an integrated loss and performing backpropagation processes by using the integrated loss, to thereby learn at least part of the parameters included in the DNN.
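As a rough illustration of steps (a) and (b), the sketch below builds a pair feature vector for each (first ROI, second ROI) pair and applies FC operations to produce a determination vector and a box regression vector. The feature dimensions, the use of box coordinates inside the pair vector, and the two-layer head are assumptions, not the patented architecture.

```python
import torch
import torch.nn as nn

class PairDeterminingNetwork(nn.Module):
    """Concatenating network + determining network for ROI pairs (illustrative sizes)."""
    def __init__(self, roi_feat_dim=256, hidden_dim=128):
        super().__init__()
        pair_dim = 2 * (roi_feat_dim + 4)  # [feat1, box1, feat2, box2]
        self.fc = nn.Sequential(nn.Linear(pair_dim, hidden_dim), nn.ReLU())
        self.determination = nn.Linear(hidden_dim, 2)   # same object or not
        self.box_regression = nn.Linear(hidden_dim, 4)  # merged-box offsets

    def forward(self, feats1, boxes1, feats2, boxes2):
        pair = torch.cat([feats1, boxes1, feats2, boxes2], dim=1)  # pair feature vectors
        h = self.fc(pair)                                          # FC operations
        return self.determination(h), self.box_regression(h)

net = PairDeterminingNetwork()
f1, b1, f2, b2 = torch.randn(5, 256), torch.randn(5, 4), torch.randn(5, 256), torch.randn(5, 4)
det, reg = net(f1, b1, f2, b2)  # (5, 2) determination vectors, (5, 4) box regression vectors
```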
-
Publication No.: US20200090047A1
Publication Date: 2020-03-19
Application No.: US16132479
Filing Date: 2018-09-17
Applicant: Stradvision, Inc.
Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
Abstract: A learning method for a CNN (Convolutional Neural Network) capable of encoding at least one training image with multiple feeding layers is provided, wherein the CNN includes 1st to n-th convolutional layers, which respectively generate 1st to n-th main feature maps by applying convolution operations to the training image, and 1st to h-th feeding layers respectively corresponding to h of the convolutional layers (1≤h≤(n−1)). The learning method includes steps of: a learning device instructing the convolutional layers to generate the 1st to the n-th main feature maps, wherein the learning device instructs a k-th convolutional layer to acquire a (k−1)-th main feature map and an m-th sub feature map, and to generate a k-th main feature map by applying the convolution operations to a (k−1)-th integrated feature map generated by integrating the (k−1)-th main feature map and the m-th sub feature map.
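A minimal sketch of one feeding step follows, assuming the "integration" is channel-wise concatenation and the m-th sub feature map is the training image resized by the feeding layer to the spatial size of the (k−1)-th main feature map; neither detail is fixed by the abstract.

```python
import torch
import torch.nn.functional as F

def feeding_step(prev_main_fmap, image, k_th_conv):
    """One k-th convolution over the (k-1)-th integrated feature map."""
    # m-th sub feature map: the image resized to the spatial size of the
    # (k-1)-th main feature map (assumed role of the feeding layer).
    sub_fmap = F.interpolate(image, size=prev_main_fmap.shape[2:],
                             mode="bilinear", align_corners=False)
    integrated = torch.cat([prev_main_fmap, sub_fmap], dim=1)  # (k-1)-th integrated feature map
    return k_th_conv(integrated)                               # k-th main feature map

image = torch.randn(1, 3, 128, 128)                     # toy training image
prev_main = torch.randn(1, 32, 32, 32)                  # toy (k-1)-th main feature map
k_th_conv = torch.nn.Conv2d(32 + 3, 64, 3, padding=1)   # expects the integrated channels
print(feeding_step(prev_main, image, k_th_conv).shape)  # torch.Size([1, 64, 32, 32])
```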
-
Publication No.: US10551846B1
Publication Date: 2020-02-04
Application No.: US16257993
Filing Date: 2019-01-25
Applicant: Stradvision, Inc.
Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
Abstract: A learning method for improving segmentation performance, to be used for detecting road user events including pedestrian events and vehicle events, by using a double embedding configuration in a multi-camera system is provided. The learning method includes steps of: a learning device instructing a similarity convolutional layer to generate a similarity embedding feature by applying similarity convolution operations to a feature outputted from a neural network; instructing a similarity loss layer to output a similarity loss by referring to a similarity between two points sampled from the similarity embedding feature and its corresponding GT label image; instructing a distance convolutional layer to generate a distance embedding feature by applying distance convolution operations to the similarity embedding feature; instructing a distance loss layer to output a distance loss for increasing inter-class differences among mean values of instance classes and decreasing intra-class variance values of the instance classes; and backpropagating at least one of the similarity loss and the distance loss.
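The abstract names two losses but not their formulas. The sketch below uses illustrative choices: a cross-entropy on an exponential-distance similarity between sampled point pairs for the similarity loss, and intra-class variance plus a hinged inter-class mean separation for the distance loss.

```python
import torch
import torch.nn.functional as F

def similarity_loss(emb_a, emb_b, same_instance):
    """emb_a, emb_b: (P, D) embeddings of sampled point pairs; same_instance: (P,) in {0, 1}."""
    sim = torch.exp(-torch.norm(emb_a - emb_b, dim=1))  # similarity in (0, 1]
    return F.binary_cross_entropy(sim, same_instance.float())

def distance_loss(embeddings, labels, margin=1.5):
    """embeddings: (N, D) distance-embedding features; labels: (N,) instance class ids."""
    means, variances = [], []
    for inst in labels.unique():
        feats = embeddings[labels == inst]
        means.append(feats.mean(dim=0))
        variances.append(feats.var(dim=0, unbiased=False).mean())
    means = torch.stack(means)
    intra = torch.stack(variances).mean()                # decrease intra-class variance values
    inter = torch.tensor(0.0)
    if len(means) > 1:
        d = torch.cdist(means, means)
        off_diag = ~torch.eye(len(means), dtype=torch.bool)
        inter = torch.relu(margin - d[off_diag]).mean()  # increase inter-class mean differences
    return intra + inter

emb = torch.randn(40, 8)
print(similarity_loss(emb[:20], emb[20:], torch.randint(0, 2, (20,))),
      distance_loss(emb, torch.randint(0, 4, (40,))))
```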
-
Publication No.: US10528867B1
Publication Date: 2020-01-07
Application No.: US16154060
Filing Date: 2018-10-08
Applicant: StradVision, Inc.
Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
Abstract: A method for learning a neural network by adjusting a learning rate each time an accumulated number of iterations reaches one of first to n-th specific values is provided. The method includes steps of: a learning device, while increasing k from 1 to (n−1), (a) performing a k-th learning process of repeating the learning of the neural network at a k-th learning rate by using a part of the training data while the accumulated number of iterations is greater than a (k−1)-th specific value and is equal to or less than a k-th specific value, and (b) (i) changing a k-th gamma to a (k+1)-th gamma by referring to k-th losses of the neural network obtained by the k-th learning process and (ii) changing the k-th learning rate to a (k+1)-th learning rate by using the (k+1)-th gamma.
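A plain-Python sketch of the staged schedule follows. The rule for deriving the (k+1)-th gamma from the k-th losses is not given in the abstract, so the sketch uses a hypothetical rule (shrink gamma further when the mean loss stops improving).

```python
def train_with_staged_lr(train_step, specific_values, initial_lr=0.01, initial_gamma=0.5):
    """Repeat training at the k-th learning rate until the accumulated number of
    iterations reaches the k-th specific value, then update gamma and the rate."""
    lr, gamma = initial_lr, initial_gamma
    iteration, prev_mean_loss = 0, None
    for boundary in specific_values:
        k_th_losses = []
        while iteration < boundary:            # k-th learning process
            k_th_losses.append(train_step(lr))
            iteration += 1
        mean_loss = sum(k_th_losses) / len(k_th_losses)
        if prev_mean_loss is not None and mean_loss >= prev_mean_loss:
            gamma *= 0.9                       # k-th gamma -> (k+1)-th gamma (assumed rule)
        prev_mean_loss = mean_loss
        lr *= gamma                            # k-th learning rate -> (k+1)-th learning rate

# toy usage with a fake train_step that just reports a loss for the given rate
train_with_staged_lr(lambda lr: 1.0 / (lr + 1.0), specific_values=[100, 200, 300])
```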
-
Publication No.: US10438082B1
Publication Date: 2019-10-08
Application No.: US16171645
Filing Date: 2018-10-26
Applicant: Stradvision, Inc.
Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
IPC Classification: G06K9/00, G06K9/32, G06N5/04, G06N3/08, G06K9/34, G06K9/20, G06T7/70, G06K9/62, G06N20/00, G06N3/04
Abstract: A method for learning parameters of a CNN capable of detecting ROIs determined based on bottom lines of nearest obstacles in an input image is provided. The method includes steps of: a learning device instructing first to n-th convolutional layers to generate first to n-th encoded feature maps from the input image; instructing n-th to first deconvolutional layers to generate n-th to first decoded feature maps from the n-th encoded feature map; generating, with a specific decoded feature map divided along the directions of its rows and columns, an obstacle segmentation result by referring to features of the n-th to the first decoded feature maps; instructing an RPN to generate an ROI bounding box by referring to each anchor box, and losses by referring to the ROI bounding box and its corresponding GT; and backpropagating the losses to learn the parameters.
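The column-wise reading of a decoded feature map can be illustrated as below: each column is treated as a distribution over rows, and the highest-scoring row per column is taken as the bottom line of the nearest obstacle in that column. The softmax-over-rows reading is an assumption.

```python
import numpy as np

def obstacle_bottom_lines(decoded_fmap):
    """decoded_fmap: (rows, cols) scores of a decoded feature map. Returns, per
    column, the row index taken as the bottom line of the nearest obstacle."""
    exp = np.exp(decoded_fmap - decoded_fmap.max(axis=0, keepdims=True))
    row_probs = exp / exp.sum(axis=0, keepdims=True)  # softmax over the row direction
    return row_probs.argmax(axis=0)                   # one row per column

scores = np.random.randn(36, 60)              # toy 36x60 decoded feature map
print(obstacle_bottom_lines(scores).shape)    # (60,): one bottom-line row per column
```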
-
Publication No.: US10430691B1
Publication Date: 2019-10-01
Application No.: US16254541
Filing Date: 2019-01-22
Applicant: Stradvision, Inc.
Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
Abstract: A method for learning parameters of an object detector based on a CNN adaptable to customers' requirements, such as KPI, by using a target object merging network and a target region estimating network is provided. The CNN can be redesigned when the scales of objects change as a focal length or a resolution changes depending on the KPI. The method includes steps of: a learning device (i) instructing the target region estimating network to search for k-th estimated target regions, (ii) instructing an RPN to generate (k_1)-st to (k_n)-th object proposals corresponding to an object in the (k_1)-st to the (k_n)-th manipulated images, and (iii) instructing the target object merging network to merge the object proposals and to merge (k_1)-st to (k_n)-th object detection information outputted from an FC layer. The method can be useful for multi-camera setups, SVM (surround view monitor), and the like, as the accuracy of 2D bounding boxes improves.
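The merging of object proposals coming from differently manipulated images can be sketched as a simple IoU-based merge, assuming the proposals have already been mapped back into the original image's coordinates; the IoU threshold and the keep-the-highest-score rule are illustrative, not the patented merging network.

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def merge_proposals(boxes, scores, iou_threshold=0.5):
    """boxes: (N, 4) proposals from all manipulated images, already in original-image
    coordinates; scores: (N,). Keeps the highest-scoring box of each overlapping group."""
    order = np.argsort(scores)[::-1]
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_threshold for j in kept):
            kept.append(i)
    return boxes[kept], scores[kept]

boxes = np.array([[10, 10, 50, 50], [12, 11, 52, 49], [100, 80, 140, 120]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(merge_proposals(boxes, scores)[0])   # the two overlapping boxes collapse into one
```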
-
Publication No.: US10423860B1
Publication Date: 2019-09-24
Application No.: US16254522
Filing Date: 2019-01-22
Applicant: Stradvision, Inc.
Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
Abstract: A method for learning parameters of an object detector based on a CNN adaptable to customers' requirements, such as KPI, by using an image concatenation and a target object merging network is provided. The CNN can be redesigned when the scales of objects change as a focal length or a resolution changes depending on the KPI. The method includes steps of: a learning device instructing an image-manipulating network to generate n manipulated images; instructing an RPN to generate first to n-th object proposals respectively in the manipulated images, and instructing an FC layer to generate first to n-th object detection information; and instructing the target object merging network to merge the object proposals and to merge the object detection information. In this method, the object proposals can also be generated by using lidar. The method can be useful for multi-camera setups, SVM (surround view monitor), and the like, as the accuracy of 2D bounding boxes improves.
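The image-manipulating step can be pictured as producing a resize pyramid so that objects appear at n different scales for the RPN; treating the manipulated images as plain resized copies is an assumption, since the abstract does not specify how they are formed.

```python
import torch
import torch.nn.functional as F

def make_manipulated_images(image, scales=(1.0, 0.75, 0.5)):
    """image: (1, 3, H, W). Returns the 1st to n-th manipulated images as a resize pyramid."""
    _, _, H, W = image.shape
    return [F.interpolate(image, size=(int(H * s), int(W * s)),
                          mode="bilinear", align_corners=False) for s in scales]

image = torch.randn(1, 3, 480, 640)
for k, m in enumerate(make_manipulated_images(image), start=1):
    print(f"manipulated image {k}:", tuple(m.shape))
```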
-
Publication No.: US10423840B1
Publication Date: 2019-09-24
Application No.: US16263168
Filing Date: 2019-01-31
Applicant: Stradvision, Inc.
Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
Abstract: A post-processing method for detecting lanes, used to plan the driving path of an autonomous vehicle, by using a segmentation score map and a clustering map is provided. The method includes steps of: a computing device acquiring the segmentation score map and the clustering map from a CNN; instructing a post-processing module to detect lane elements, including the pixels forming the lanes, by referring to the segmentation score map, and to generate seed information by referring to the lane elements, the segmentation score map, and the clustering map; instructing the post-processing module to generate base models by referring to the seed information and to generate lane anchors by referring to the base models; instructing the post-processing module to generate lane blobs by referring to the lane anchors; and instructing the post-processing module to detect lane candidates by referring to the lane blobs and to generate a lane model by applying line-fitting operations to the lane candidates.
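The final line-fitting step can be sketched as fitting one polynomial per lane blob; a 2nd-degree fit of x as a function of the image row y is an assumed, common choice, since the abstract only says "line-fitting operations".

```python
import numpy as np

def fit_lane_models(lane_candidates, degree=2):
    """lane_candidates: list of (N_i, 2) arrays of (x, y) lane-element pixels, one per
    lane blob. Returns one polynomial per lane mapping image row y to column x."""
    return [np.poly1d(np.polyfit(pts[:, 1], pts[:, 0], degree)) for pts in lane_candidates]

# toy usage: two slightly noisy lane blobs
ys = np.arange(200, 400)
left = np.stack([300 + 0.1 * (ys - 200) + np.random.randn(ys.size), ys], axis=1)
right = np.stack([500 - 0.1 * (ys - 200) + np.random.randn(ys.size), ys], axis=1)
left_model, right_model = fit_lane_models([left, right])
print(left_model(300), right_model(300))   # predicted lane x-positions at image row 300
```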
-
Publication No.: US10402695B1
Publication Date: 2019-09-03
Application No.: US16255044
Filing Date: 2019-01-23
Applicant: Stradvision, Inc.
Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
Abstract: A method for learning parameters of a CNN for image recognition, to be used for hardware optimization that satisfies a KPI, is provided. The method includes steps of: a learning device (a) instructing a first transposing layer or a pooling layer to generate an integrated feature map by concatenating pixels, per ROI, of pooled ROI feature maps; (b) instructing a 1×H1 convolutional layer to generate a first adjusted feature map from a first reshaped feature map, generated by concatenating features in H1 channels of the integrated feature map, and instructing a 1×H2 convolutional layer to generate a second adjusted feature map from a second reshaped feature map, generated by concatenating features in H2 channels of the first adjusted feature map; and (c) instructing a second transposing layer or a classifying layer to divide the second adjusted feature map by each pixel, to thereby generate pixel-wise feature maps.
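The hardware-oriented idea behind the 1×H1 and 1×H2 convolutional layers is that an FC operation over a reshaped feature map can be computed as a 1×H convolution. The sketch below demonstrates only that equivalence and does not reproduce the patented reshaping.

```python
import torch
import torch.nn as nn

H, D, N = 256, 128, 5                      # input length, output length, number of ROIs
fc = nn.Linear(H, D, bias=False)
conv_1xH = nn.Conv2d(1, D, kernel_size=(1, H), bias=False)
conv_1xH.weight.data.copy_(fc.weight.data.view(D, 1, 1, H))   # share the same weights

x = torch.randn(N, H)                                 # one feature vector per ROI
out_fc = fc(x)                                        # (N, D) via a fully connected layer
out_conv = conv_1xH(x.view(N, 1, 1, H)).view(N, D)    # the same result via a 1xH convolution
print(torch.allclose(out_fc, out_conv, atol=1e-5))    # True
```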
-