-
Publication No.: US20230177693A1
Publication Date: 2023-06-08
Application No.: US17743448
Application Date: 2022-05-13
Inventors: Junyong NOH, Jung Eun YOO, Kwanggyoon SEO, Sanghun PARK, Jaedong KIM, Dawon LEE
CPC classification numbers: G06T7/11, G06T15/205, G06T7/62
Abstract: Provided is a method of framing a three-dimensional (3D) target object for generation of a virtual camera layout. The method may include analyzing a reference video image to extract a framing rule for at least one reference object in the image, generating a framing rule for at least one 3D target object from the extracted rule, and using the framing rule for the 3D target object to generate a virtual camera layout.
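A minimal sketch of how such a pipeline might look, assuming a hypothetical `detect_objects` detector and `solve_camera_pose` solver (neither is named in the abstract), with the framing rule reduced to a normalized screen position and size:

```python
from dataclasses import dataclass

@dataclass
class FramingRule:
    center: tuple        # normalized (x, y) of the object center in the frame
    height_ratio: float  # object height / frame height

def extract_framing_rule(frame, detect_objects):
    # detect_objects(frame) -> list of (x, y, w, h) pixel boxes (hypothetical)
    x, y, w, h = detect_objects(frame)[0]
    fh, fw = frame.shape[:2]  # assumes a NumPy-style image array
    return FramingRule(center=((x + w / 2) / fw, (y + h / 2) / fh),
                       height_ratio=h / fh)

def place_virtual_camera(target_object, rule, solve_camera_pose):
    # solve_camera_pose is a hypothetical solver that searches for a camera
    # whose projection of the 3D target reproduces the reference composition.
    return solve_camera_pose(target_object,
                             desired_center=rule.center,
                             desired_height_ratio=rule.height_ratio)
```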
-
Publication No.: US20240290356A1
Publication Date: 2024-08-29
Application No.: US18586642
Application Date: 2024-02-26
Inventors: Junyong NOH, Dawon LEE, Jung Eun YOO, Kyungmin CHO, Bumki KIM, GyeongHun IM
IPC: G11B27/031, G06V20/40, G06V40/16, G11B27/10
CPC classification numbers: G11B27/031, G06V20/46, G06V20/49, G06V40/171, G11B27/10
Abstract: Disclosed is an apparatus for generating a cross-edited video and a method of operating the apparatus. The method includes: obtaining a plurality of videos; sequentially generating video pieces such that one of the videos is played along a timeline, based on a first transition reward, which depends on the similarity between the video before and the video after a transition at a specific frame and on the continuous play time, and on a second transition reward, which depends on the elapsed time since the previous transition point; and generating a cross-edited video by connecting the video pieces.
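A hedged sketch of the two rewards and a greedy selection loop built on them; the weights, the `similarity` function, the threshold, and the greedy policy itself (the apparatus may well use a learned policy instead) are all illustrative assumptions:

```python
def transition_reward(sim, play_time, gap, w_sim=1.0, w_play=0.05, w_gap=0.05):
    # First reward: favor transitions between visually similar frames
    # after a sufficiently long continuous play.
    r1 = w_sim * sim + w_play * play_time
    # Second reward: grows with the elapsed time since the previous
    # transition point, discouraging cuts that come too soon.
    r2 = w_gap * gap
    return r1 + r2

def cross_edit(videos, similarity, num_frames, threshold=1.5):
    """Greedy sketch: at each frame, switch sources when the transition
    reward for some other video exceeds a threshold."""
    current, last_cut, timeline = 0, 0, []
    for t in range(num_frames):
        candidates = [
            (transition_reward(similarity(current, v, t),
                               t - last_cut, t - last_cut), v)
            for v in range(len(videos)) if v != current
        ]
        best_r, best_v = max(candidates) if candidates else (0.0, current)
        if best_r > threshold:
            current, last_cut = best_v, t
        timeline.append(current)  # which video plays at frame t
    return timeline
```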
-
Publication No.: US20230206528A1
Publication Date: 2023-06-29
Application No.: US17847472
Application Date: 2022-06-23
Inventors: Junyong NOH, Ha Young CHANG, Kwanggyoon SEO, Jung Eun YOO
CPC classification numbers: G06T13/20, G06T7/194, G06T2207/20081, G06T2207/20084
Abstract: A method of training a deep neural network (DNN) for generating a cinemagraph is disclosed. The method may include preparing a foreground layer input from an input video, preparing a background layer input from the input video, obtaining a foreground layer output and a background layer output by feeding the two layer inputs to the DNN, producing an output video by synthesizing the foreground layer output with the background layer output, and updating the intrinsic parameters of the DNN a plurality of times based on the input video, the foreground layer input, the foreground layer output, the background layer output, and the output video.
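A minimal PyTorch-style sketch of such a two-layer training loop, assuming the layer inputs are prepared with a foreground mask and using a plain reconstruction loss; the actual network architecture and loss terms are not specified in the abstract:

```python
import torch

def train_cinemagraph(dnn, input_video, fg_mask, num_steps=1000, lr=1e-4):
    # input_video: (T, C, H, W) tensor; fg_mask: (T, 1, H, W) in [0, 1]
    optimizer = torch.optim.Adam(dnn.parameters(), lr=lr)
    fg_input = input_video * fg_mask         # foreground layer input
    bg_input = input_video * (1 - fg_mask)   # background layer input
    for _ in range(num_steps):
        fg_out, bg_out = dnn(fg_input, bg_input)   # assumed DNN interface
        # Synthesize the output video from the two layer outputs.
        output = fg_out * fg_mask + bg_out * (1 - fg_mask)
        # Illustrative reconstruction loss; the patented update rule over
        # the layer inputs/outputs may differ.
        loss = torch.nn.functional.mse_loss(output, input_video)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return dnn
```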
-