-
Publication Number: US20230196520A1
Publication Date: 2023-06-22
Application Number: US17974383
Application Date: 2022-10-26
Inventor: Seung Yong LEE , Hyeong Seok SON , Sung Hyun CHO , Jun Yong LEE
Abstract: The present disclosure provides a method of effectively removing defocus blur from an input image based on an inverse kernel. The defocus deblurring method includes: generating, by an encoder network, an input feature map by encoding the input image; filtering, by an atrous convolution network comprising a plurality of atrous convolutional layers arranged in parallel, the input feature map to generate an output feature map having a reduced blur component; and generating, by a decoder network, an output image with reduced blur from the output feature map and the input image.
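The parallel atrous layers described in this abstract can be illustrated with a minimal NumPy sketch. The kernel, the dilation rates, the centre-crop alignment, and the averaging fusion below are illustrative assumptions, not the patented network:

```python
import numpy as np

def dilated_conv2d(x, kernel, rate):
    """Naive single-channel 2-D atrous (dilated) convolution, valid padding.

    The effective receptive field grows with the dilation rate while the
    number of kernel weights stays fixed.
    """
    kh, kw = kernel.shape
    eh = (kh - 1) * rate + 1  # effective kernel height
    ew = (kw - 1) * rate + 1  # effective kernel width
    H, W = x.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + eh:rate, j:j + ew:rate]  # sample with gaps of `rate`
            out[i, j] = float(np.sum(patch * kernel))
    return out

def parallel_atrous(x, kernel, rates=(1, 2, 4)):
    """Run the same kernel at several dilation rates in parallel and average
    the centre-cropped responses (a stand-in for the learned fusion)."""
    outs = [dilated_conv2d(x, kernel, r) for r in rates]
    h = min(o.shape[0] for o in outs)
    w = min(o.shape[1] for o in outs)
    cropped = []
    for o in outs:
        top = (o.shape[0] - h) // 2
        left = (o.shape[1] - w) // 2
        cropped.append(o[top:top + h, left:left + w])
    return np.mean(cropped, axis=0)
```

Because every branch reuses the same small kernel at a different dilation, the parallel bank covers several inverse-kernel sizes at once, which is the property the abstract relies on for handling spatially varying defocus blur.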
-
Publication Number: US20220198616A1
Publication Date: 2022-06-23
Application Number: US17497824
Application Date: 2021-10-08
Inventor: Seung Yong LEE , Jun Yong LEE , Hyeong Seok SON , Sung Hyun CHO
Abstract: A video quality improvement method may comprise: inputting a structure feature map, converted from a current target frame by a first convolution layer, to a first multi-task unit and to a second multi-task unit connected to an output side of the first multi-task unit, among the plurality of multi-task units; inputting, to the first multi-task unit, a main input obtained by adding the structure feature map to a feature space converted by a second convolution layer from a channel-wise concatenation of a previous target frame and a correction frame of the previous frame; and inputting the current target frame to an Nth multi-task unit connected to an end of the output side of the second multi-task unit, wherein the Nth multi-task unit outputs a correction frame of the current target frame, and machine learning of the video quality improvement model is performed using an objective function calculated from the correction frame of the current target frame.
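The data flow in this abstract (structure map injected into early units, concatenated previous frame and correction entering the first unit, current frame entering the last unit) can be sketched as a toy NumPy pipeline. The random projections stand in for the learned convolutions, and the unit count and fusion rules are illustrative assumptions:

```python
import numpy as np

def conv_stub(x, out_ch, seed):
    """Stand-in for a learned convolution layer: a fixed random channel
    projection followed by tanh (the real layers are trained)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((x.shape[-1], out_ch)) / np.sqrt(x.shape[-1])
    return np.tanh(x @ w)

def multi_task_chain(curr, prev, prev_corr, n_units=4):
    """Toy sketch of the data flow: H x W x C frames in, correction frame out."""
    c = curr.shape[-1]
    struct = conv_stub(curr, c, seed=0)                   # structure feature map (1st conv)
    concat = np.concatenate([prev, prev_corr], axis=-1)   # channel-wise concatenation
    main = struct + conv_stub(concat, c, seed=1)          # main input to the first unit (2nd conv)
    h = main
    for k in range(n_units - 1):                          # early/middle multi-task units
        h = conv_stub(np.concatenate([h, struct], axis=-1), c, seed=2 + k)
    # the current frame enters the Nth (last) unit, which emits the correction
    return conv_stub(np.concatenate([h, curr], axis=-1), c, seed=100)
```

The point of the sketch is only the wiring: the structure feature map is shared by the early units, while the raw current frame re-enters at the end so the final unit can produce a correction aligned to it.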
-
Publication Number: US20240212279A1
Publication Date: 2024-06-27
Application Number: US18396421
Application Date: 2023-12-26
Inventor: Seung Yong LEE , Hyo Min KIM
IPC: G06T17/20
CPC classification number: G06T17/20
Abstract: A detailed 3-dimensional (3D) object reconstruction method, executed by a computing device, may comprise: obtaining Laplacian coordinates that carry local details and the direction and size information of curvatures on a 3D surface defined by an input point cloud; and converting the Laplacian coordinates to absolute coordinates using a mesh for the 3D surface, the Laplacian coordinates of each vertex of the mesh, and a Laplace-Beltrami operator.
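The conversion from Laplacian to absolute coordinates is classically done by solving a linear system L v = δ with a few anchored vertices. A minimal NumPy sketch, using the uniform graph Laplacian as a simple discrete stand-in for the Laplace-Beltrami operator (the patent's operator and constraints may differ):

```python
import numpy as np

def uniform_laplacian(n, edges):
    """Uniform graph Laplacian L = D - A of a mesh's vertex graph, a common
    discrete stand-in for the Laplace-Beltrami operator."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

def to_absolute(L, delta, anchors):
    """Recover absolute coordinates v from Laplacian coordinates delta by
    solving L v = delta together with anchored vertex positions
    (stacked as soft constraints, solved in the least-squares sense)."""
    n = L.shape[0]
    rows, rhs = [L], [delta]
    for idx, pos in anchors:
        r = np.zeros((1, n))
        r[0, idx] = 1.0                      # pin vertex `idx` ...
        rows.append(r)
        rhs.append(np.asarray(pos, dtype=float)[None, :])  # ... to position `pos`
    A = np.vstack(rows)
    b = np.vstack(rhs)
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v
```

At least one anchor is needed because the Laplacian is translation-invariant: δ = L v determines the shape only up to a global offset.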
-
Publication Number: US20220366539A1
Publication Date: 2022-11-17
Application Number: US17770993
Application Date: 2020-11-11
Inventor: Seung Yong LEE , Sung Hyun CHO , Hyeong Seok SON
Abstract: An image processing method and apparatus based on machine learning are disclosed. The image processing method based on machine learning, according to the present invention, may comprise the steps of: generating a first corrected image by inputting an input image to a first convolutional neural network; generating an intermediate image on the basis of the input image; performing machine learning on a first loss function of the first convolutional neural network on the basis of the first corrected image and the intermediate image; and performing machine learning on a second loss function of the first convolutional neural network on the basis of the first corrected image and a natural image.
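The two-loss training scheme can be sketched numerically. The abstract does not give the concrete loss forms, so the MSE reconstruction term and the mean/std statistics-matching "naturalness" term below are illustrative assumptions:

```python
import numpy as np

def reconstruction_loss(corrected, intermediate):
    """First loss: pull the corrected image toward the intermediate image."""
    return float(np.mean((corrected - intermediate) ** 2))

def naturalness_loss(corrected, natural):
    """Second loss (crude stand-in): match the global mean/std statistics of
    a natural image; the patent leaves the concrete form to the claims."""
    return float((corrected.mean() - natural.mean()) ** 2
                 + (corrected.std() - natural.std()) ** 2)

def total_loss(corrected, intermediate, natural, w1=1.0, w2=0.1):
    """Weighted sum of the two training objectives."""
    return (w1 * reconstruction_loss(corrected, intermediate)
            + w2 * naturalness_loss(corrected, natural))
```

Splitting the objective this way lets the first term supervise pixel-level correction while the second term keeps the output distribution close to natural images, which is the balance the abstract's two-loss setup describes.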
-
Publication Number: US20240212093A1
Publication Date: 2024-06-27
Application Number: US18396411
Application Date: 2023-12-26
Inventor: Seung Yong LEE , Sung Hyun CHO , Jun Yong LEE
IPC: G06T3/4053 , G06T3/4046
CPC classification number: G06T3/4053 , G06T3/4046
Abstract: A method for generating a super-resolution video by using a multi-camera video may comprise: generating a resolution-improved ultra-wide-angle video frame at an arbitrary time step by inputting, to a bidirectional neural network, an ultra-wide-angle video frame of a first resolution at the arbitrary time step, the ultra-wide-angle video frames immediately before and after the arbitrary time step, and a wide-angle video frame for reference at the arbitrary time step, wherein the generating of the resolution-improved ultra-wide-angle video frame uses information accumulated from time steps before the arbitrary time step and information accumulated from time steps after it, and wherein a second resolution, the resolution of the generated ultra-wide-angle video frame, is greater than the first resolution.
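The bidirectional accumulation plus reference-frame fusion can be illustrated with a toy NumPy sketch. The exponential-style accumulation, nearest-neighbour upscaling, and equal-weight blending are illustrative assumptions replacing the learned bidirectional network:

```python
import numpy as np

def upscale2x(frame):
    """Nearest-neighbour 2x upsampling, a stand-in for the learned upscaler."""
    return frame.repeat(2, axis=0).repeat(2, axis=1)

def bidirectional_sr(frames, ref, t, alpha=0.5):
    """Fuse information accumulated forward (start -> t) and backward
    (end -> t), then blend the upscaled result with the high-resolution
    reference wide-angle frame at time step t."""
    past = frames[0]
    for f in frames[1:t + 1]:
        past = (1 - alpha) * past + alpha * f       # forward (past) accumulation
    future = frames[-1]
    for f in frames[t:-1][::-1]:
        future = (1 - alpha) * future + alpha * f   # backward (future) accumulation
    fused = 0.5 * (past + future)                   # bidirectional fusion at time t
    return 0.5 * upscale2x(fused) + 0.5 * ref       # reference frame guides the output
```

The sketch shows why the method is bidirectional: the frame at time t receives evidence propagated from both temporal directions before the reference wide-angle frame lifts it to the second (higher) resolution.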
-
Publication Number: US20230206515A1
Publication Date: 2023-06-29
Application Number: US17974399
Application Date: 2022-10-26
Inventor: Seung Yong LEE , Yu Cheol JUNG , Gwang Jin JU , Won Jong JANG
IPC: G06T11/00 , G06T11/60 , G06V10/771 , G06V10/82
CPC classification number: G06T11/001 , G06T11/60 , G06V10/771 , G06V10/82 , G06T2210/44
Abstract: The present disclosure provides a caricature generation method capable of expressing detailed, realistic facial exaggerations while reducing training labor and cost. A caricature generating method includes: providing a generation network comprising a plurality of layers connected in series, including coarse layers of the lowest resolutions, pre-trained to synthesize the shape of a caricature, and fine layers of the highest resolutions, pre-trained to tune the texture of the caricature; applying input feature maps representing an input facial photograph to the coarse layers to generate shape feature maps, and deforming the shape feature maps by shape exaggeration blocks to generate deformed shape feature maps; applying the deformed shape feature maps to the fine layers to change the texture represented by the deformed shape feature maps and generate output feature maps; and generating a caricature image according to the output feature maps.
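The coarse-to-fine pipeline with an exaggeration stage in between can be sketched in NumPy. The random-projection layer stubs and the deviation-amplifying exaggeration rule are illustrative assumptions; the patented network uses pre-trained generator layers and learned deformation blocks:

```python
import numpy as np

def layer_stub(fmap, seed):
    """Stand-in for a pre-trained synthesis layer: fixed random channel
    mixing plus tanh (the real layers come from a pre-trained generator)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((fmap.shape[-1], fmap.shape[-1])) / np.sqrt(fmap.shape[-1])
    return np.tanh(fmap @ w)

def exaggerate(fmap, strength=1.5):
    """Shape exaggeration block: amplify each feature's deviation from the
    spatial mean, a simplified stand-in for the learned deformation."""
    mean = fmap.mean(axis=(0, 1), keepdims=True)
    return mean + strength * (fmap - mean)

def caricature_pipeline(input_fmap, n_coarse=2, n_fine=2):
    """Coarse layers -> shape exaggeration -> fine layers, as in the abstract."""
    h = input_fmap
    for k in range(n_coarse):            # coarse layers: shape synthesis
        h = layer_stub(h, seed=k)
    h = exaggerate(h)                    # deform the shape feature maps
    for k in range(n_fine):              # fine layers: texture tuning
        h = layer_stub(h, seed=100 + k)
    return h                             # output feature maps -> caricature image
```

Placing the exaggeration between the coarse (shape) and fine (texture) stages mirrors the abstract's key design: geometry is distorted at low resolution before texture is re-synthesized on top of it.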
-