-
Publication Number: US11875535B2
Publication Date: 2024-01-16
Application Number: US17339177
Application Date: 2021-06-04
Inventor: Zhikang Zou , Xiaoqing Ye , Xiao Tan , Hao Sun
CPC classification number: G06T7/80 , G06T7/74 , G06T2207/10028
Abstract: A method and an apparatus for calibrating an external parameter of a camera are provided. The method may include: acquiring a time-synchronized data set of three-dimensional point clouds and a two-dimensional image of a calibration reference object, the two-dimensional image being acquired by a camera with a to-be-calibrated external parameter; establishing a transformation relationship between a point cloud coordinate system and an image coordinate system, the transformation relationship including a transformation parameter; back-projecting the data set of the three-dimensional point clouds, through the transformation relationship, onto the plane where the two-dimensional image is located to obtain a set of projection points of the three-dimensional point clouds; adjusting the transformation parameter to map the set of the projection points onto the two-dimensional image; and obtaining the external parameter of the camera based on the adjusted transformation parameter and the data set of the three-dimensional point clouds.
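The back-projection and parameter-adjustment steps described in this abstract can be illustrated with a short sketch. The following Python fragment is only a minimal, hedged illustration, not the patented implementation; the intrinsic matrix K, the rvec/tvec transformation parameters, and the lidar_points/image_points arrays are assumed placeholder names.

```python
# Illustrative sketch only (not the patented implementation): K, rvec, tvec,
# lidar_points and image_points are assumed placeholder names.
import numpy as np
import cv2

def project_point_cloud(lidar_points, rvec, tvec, K, dist=None):
    """Back-project Nx3 point-cloud coordinates onto the image plane."""
    if dist is None:
        dist = np.zeros(5)
    proj, _ = cv2.projectPoints(lidar_points.astype(np.float64),
                                rvec, tvec, K, dist)
    return proj.reshape(-1, 2)   # the set of projection points (pixels)

def reprojection_residual(params, lidar_points, image_points, K):
    """Residual used to adjust the transformation parameter (rvec | tvec)."""
    rvec, tvec = params[:3], params[3:6]
    return (project_point_cloud(lidar_points, rvec, tvec, K)
            - image_points).ravel()

# A generic optimizer, e.g. scipy.optimize.least_squares(reprojection_residual,
# x0, args=(lidar_points, image_points, K)), can then adjust rvec/tvec until
# the projection points map onto the two-dimensional image; the adjusted
# rvec/tvec give the camera's external parameter.
```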
-
Publication Number: US20210304438A1
Publication Date: 2021-09-30
Application Number: US17346835
Application Date: 2021-06-14
Inventor: Xiaoqing Ye , Zhikang Zou , Xiao Tan , Hao Sun
Abstract: The present disclosure provides an object pose obtaining method and an electronic device, and relates to the technical fields of image processing, computer vision, and deep learning. A detailed implementation is: extracting an image block of an object from an image, and generating a local coordinate system corresponding to the image block; obtaining 2D projection key points in an image coordinate system corresponding to a plurality of 3D key points on a 3D model of the object; converting the 2D projection key points into the local coordinate system to generate corresponding 2D prediction key points; obtaining direction vectors between each pixel point in the image block and each 2D prediction key point, and obtaining a 2D target key point corresponding to each 2D prediction key point based on the direction vectors; and determining a pose of the object according to the 3D key points and the 2D target key points.
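The final step of this abstract (determining the pose from the 3D key points and the recovered 2D target key points) can be sketched as a standard perspective-n-point solve. The sketch below is illustrative only: the per-pixel direction-vector voting that produces the 2D target key points is omitted, and keypoints_3d, keypoints_2d, and K are assumed placeholder inputs.

```python
# Illustrative sketch of the final pose-solving step only; the direction-vector
# voting that yields the 2D target key points is not shown here.
# keypoints_3d (Nx3), keypoints_2d (Nx2) and K (3x3 intrinsics) are assumed.
import numpy as np
import cv2

def solve_object_pose(keypoints_3d, keypoints_2d, K):
    """Determine the object pose from 3D key points and 2D target key points."""
    ok, rvec, tvec = cv2.solvePnP(keypoints_3d.astype(np.float64),
                                  keypoints_2d.astype(np.float64),
                                  K, None, flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)   # 3x3 rotation of the object in the camera frame
    return R, tvec               # object pose: rotation and translation
```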
-
Publication Number: US11887388B2
Publication Date: 2024-01-30
Application Number: US17346835
Application Date: 2021-06-14
Inventor: Xiaoqing Ye , Zhikang Zou , Xiao Tan , Hao Sun
CPC classification number: G06V20/647 , G06N3/08 , G06T7/75 , G06V10/82 , G06T2200/08 , G06T2207/20081 , G06T2207/30244 , G06T2210/04 , G06V10/462
Abstract: The present disclosure provides an object pose obtaining method and an electronic device, and relates to the technical fields of image processing, computer vision, and deep learning. A detailed implementation is: extracting an image block of an object from an image, and generating a local coordinate system corresponding to the image block; obtaining 2D projection key points in an image coordinate system corresponding to a plurality of 3D key points on a 3D model of the object; converting the 2D projection key points into the local coordinate system to generate corresponding 2D prediction key points; obtaining direction vectors between each pixel point in the image block and each 2D prediction key point, and obtaining a 2D target key point corresponding to each 2D prediction key point based on the direction vectors; and determining a pose of the object according to the 3D key points and the 2D target key points.
-
Publication Number: US20210350541A1
Publication Date: 2021-11-11
Application Number: US17382871
Application Date: 2021-07-22
Inventor: Qu CHEN , Xiaoqing Ye , Zhikang Zou , Hao Sun
Abstract: The disclosure provides a portrait extracting method, a portrait extracting apparatus, and a storage medium. The method includes: obtaining an image to be processed; obtaining a semantic segmentation result and an instance segmentation result of the image, in which the semantic segmentation result includes a mask image of a portrait area of the image, and the instance segmentation result includes a mask image of at least one portrait in the image; fusing the mask image of the at least one portrait and the mask image of the portrait area to generate a fused mask image of the at least one portrait; and extracting the at least one portrait in the image based on the fused mask image of the at least one portrait.
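The mask-fusion and extraction steps of this abstract can be illustrated with a minimal sketch. It is not the patented pipeline: image (HxWx3), semantic_mask (HxW portrait-area mask), and instance_masks (one HxW mask per detected portrait) are assumed inputs.

```python
# Illustrative sketch only: image (HxWx3), semantic_mask (HxW) and
# instance_masks (list of HxW masks, one per portrait) are assumed inputs.
import numpy as np

def extract_portraits(image, semantic_mask, instance_masks):
    """Fuse each instance mask with the portrait-area mask, then cut out
    the corresponding portrait pixels from the image."""
    portraits = []
    for inst_mask in instance_masks:
        fused = np.logical_and(inst_mask > 0, semantic_mask > 0)  # fused mask
        portraits.append(np.where(fused[..., None], image, 0))    # extraction
    return portraits
```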
-