-
Publication No.: US20240378695A1
Publication Date: 2024-11-14
Application No.: US18032825
Filing Date: 2022-11-04
Applicant: KOREA ELECTRONICS TECHNOLOGY INSTITUTE
Inventor: SA IM SHIN , Bo Eun KIM , Han Mu PARK , Chung Il KIM
Abstract: The present invention relates to an image inpainting apparatus and an image inpainting method, the image inpainting apparatus including: a background inpainting part configured to generate a background-inpainted image by inpainting the background of an input image in which a region to be inpainted is set; an object inpainting part configured to generate an object image by inpainting an object; and an image overlapping part configured to generate an output image by overlapping the generated background-inpainted image and object image.
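The abstract describes a three-part pipeline: background inpainting, object inpainting, and overlapping of the two results. A minimal sketch of how such a pipeline could be composed is shown below; the `background_model` and `object_model` callables and the alpha-blended overlap are assumptions for illustration, since the abstract does not specify the models or the compositing rule.

```python
import numpy as np

def composite_inpainted(input_image: np.ndarray,
                        inpaint_mask: np.ndarray,
                        background_model,
                        object_model) -> np.ndarray:
    """Sketch of the three parts named in the abstract.

    input_image : H x W x 3 float array in [0, 1]
    inpaint_mask: H x W binary array, 1 where the region to be inpainted is set
    background_model, object_model: hypothetical callables standing in for
        the background inpainting part and the object inpainting part
    """
    # Background inpainting part: fill the masked region with background content.
    background_inpainted = background_model(input_image, inpaint_mask)

    # Object inpainting part: synthesize the object image and an alpha matte.
    object_rgb, object_alpha = object_model(input_image, inpaint_mask)

    # Image overlapping part: overlap the two generated images (here, alpha blending).
    alpha = object_alpha[..., None]                      # H x W x 1
    output = alpha * object_rgb + (1.0 - alpha) * background_inpainted
    return np.clip(output, 0.0, 1.0)
```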
-
Publication No.: US20240193797A1
Publication Date: 2024-06-13
Application No.: US18531940
Filing Date: 2023-12-07
Applicant: Korea Electronics Technology Institute
Inventor: Bo Eun KIM , Jung Ho KIM , Sa Im SHIN
Abstract: There are provided a method and a system for generating human motions, which generate the motions of empty frames by using the motions of given frames. A human motion generation method according to an embodiment includes: a first step of transforming, by a system, a domain of pose information of a frame; a second step of generating, by the system, motion features of an empty frame in the transformed domain; and a third step of inversely transforming, by the system, the generated motion features into the time domain. Accordingly, the method and system may effectively generate motions by training a deep learning-based transform model to obtain a basis vector used for transforming the domain of motion trajectory information, transforming a motion trajectory through the basis vector, and inputting the transformed motion trajectory to a motion generation model.
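The three steps of the abstract (domain transform, generation in the transformed domain, inverse transform back to the time domain) can be sketched as below. The orthonormal basis, the NaN-marked empty frames, and the `motion_model` callable are assumptions for illustration; the abstract only states that the basis vectors come from a trained deep learning-based transform model.

```python
import numpy as np

def generate_motion(trajectory: np.ndarray,
                    basis: np.ndarray,
                    motion_model) -> np.ndarray:
    """Sketch of the three steps in the abstract.

    trajectory  : T x J pose trajectory with empty frames marked as NaN
    basis       : K x T basis vectors (assumed orthonormal) from the transform model
    motion_model: hypothetical model that completes features in the transformed domain
    """
    # Step 1: transform the observed pose trajectory into the basis domain.
    observed = np.nan_to_num(trajectory, nan=0.0)
    features = basis @ observed                  # K x J coefficients

    # Step 2: generate motion features for the empty frames in that domain.
    completed = motion_model(features)           # assumed shape-preserving

    # Step 3: inversely transform the generated features back to the time domain.
    return basis.T @ completed                   # T x J full trajectory
```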
-
Publication No.: US20190286931A1
Publication Date: 2019-09-19
Application No.: US16043338
Filing Date: 2018-07-24
Applicant: Korea Electronics Technology Institute
Inventor: Bo Eun KIM , Choong Sang CHO , Hye Dong JUNG , Young Han LEE
Abstract: A method and a system for automatic image caption generation are provided. The automatic image caption generation method according to an embodiment of the present disclosure includes: extracting a distinctive attribute from example captions of a learning image; training a first neural network for predicting a distinctive attribute from an image, by using a pair of the extracted distinctive attribute and the learning image; inferring a distinctive attribute by inputting the learning image to the trained first neural network; and training a second neural network for generating a caption of an image by using a pair of the inferred distinctive attribute and the learning image. Accordingly, a caption that well indicates a feature of a given image is automatically generated, so that the image can be explained more precisely and distinguished more clearly from other images.
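The abstract outlines a two-stage training scheme: first learn to predict the distinctive attribute from an image, then condition the caption generator on the inferred attribute. A minimal PyTorch sketch of that scheme follows; the linear stand-in networks, tensor shapes, and MSE losses are placeholders chosen for brevity, since the abstract does not specify the architectures.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the two networks named in the abstract.
attribute_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))
caption_net = nn.Sequential(nn.Linear(3 * 32 * 32 + 64, 256))

def train_two_stage(images, attributes, caption_embeds, steps=100):
    """Stage 1: image -> distinctive attribute.  Stage 2: (image, inferred attribute) -> caption."""
    mse = nn.MSELoss()

    # Stage 1: train the first network on pairs of extracted attribute and learning image.
    opt1 = torch.optim.Adam(attribute_net.parameters())
    for _ in range(steps):
        opt1.zero_grad()
        loss = mse(attribute_net(images), attributes)
        loss.backward()
        opt1.step()

    # Infer the distinctive attribute with the trained first network.
    with torch.no_grad():
        inferred = attribute_net(images)

    # Stage 2: train the second network on pairs of inferred attribute and learning image.
    opt2 = torch.optim.Adam(caption_net.parameters())
    for _ in range(steps):
        opt2.zero_grad()
        inp = torch.cat([images.flatten(1), inferred], dim=1)
        loss = mse(caption_net(inp), caption_embeds)
        loss.backward()
        opt2.step()

# Toy usage: train_two_stage(torch.rand(8, 3, 32, 32), torch.rand(8, 64), torch.rand(8, 256))
```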
-
Publication No.: US20240096071A1
Publication Date: 2024-03-21
Application No.: US18368600
Filing Date: 2023-09-15
Applicant: KOREA ELECTRONICS TECHNOLOGY INSTITUTE
Inventor: Sa Im SHIN , Jung Ho KIM , Bo Eun KIM
CPC classification number: G06V10/7747 , G06T7/251 , G06V10/255 , G06V10/82 , G06V10/95 , G06V20/46 , G06V40/23 , G06T2207/20081 , G06T2207/20084 , G06T2207/30196
Abstract: There is provided a video processing method performed by a computing device, the method including the steps of: collecting video from an external device; generating preprocessed data by extracting two-dimensional or three-dimensional skeleton information from the video; pre-training a first artificial intelligence model including N transformer blocks on the preprocessed data by applying an attention from the body of an object to a plurality of joints, an attention from each of the plurality of joints to the body, and an attention between persons; and, when parameters determined as a result of the pre-training of the first artificial intelligence model are transferred, learning to recognize an action from the video received from the external device by using a second artificial intelligence model that includes the N transformer blocks and is based on the transferred parameters, wherein N is a natural number equal to or larger than 2.
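The abstract names three attention paths inside each transformer block: from the body of an object to its joints, from each joint to the body, and between persons. The sketch below shows one way such a block could be wired with standard multi-head attention; the token layout, attention directions, and dimensions are assumptions, as the abstract does not define them, and the parameter transfer to the second model is illustrated with an ordinary `load_state_dict` call.

```python
import torch
import torch.nn as nn

class SkeletonAttentionBlock(nn.Module):
    """One of the N transformer blocks, sketching the three attention paths."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.body_to_joints = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.joints_to_body = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.person_to_person = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, joints, body, persons):
        # joints: (B, J, dim) joint tokens; body: (B, 1, dim) body token;
        # persons: (B, P, dim) per-person tokens.
        b2j, _ = self.body_to_joints(body, joints, joints)          # body queries the joints
        j2b, _ = self.joints_to_body(joints, body, body)            # each joint queries the body
        p2p, _ = self.person_to_person(persons, persons, persons)   # attention between persons
        return joints + j2b, body + b2j, persons + p2p

# N stacked blocks (N >= 2), pre-trained first, then transferred:
# model1 = nn.ModuleList([SkeletonAttentionBlock() for _ in range(4)])
# model2 = nn.ModuleList([SkeletonAttentionBlock() for _ in range(4)])
# model2.load_state_dict(model1.state_dict())   # transfer the pre-trained parameters
```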
-
Publication No.: US20210117723A1
Publication Date: 2021-04-22
Application No.: US17016654
Filing Date: 2020-09-10
Applicant: Korea Electronics Technology Institute
Inventor: Bo Eun KIM , Hye Dong JUNG
Abstract: A method and a system for automatically generating multiple captions of an image are provided. A method for training an auto image caption generation model according to an embodiment of the present disclosure includes: generating a caption attention map by using an image; converting the generated caption attention map into a latent variable by projecting the caption attention map onto a latent space; deriving a guide map by using the latent variable; and training to generate captions of an image by using the guide map and the image. Accordingly, a plurality of captions describing various characteristics of an image and including various expressions can be automatically generated.
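The training path in the abstract runs image, caption attention map, latent variable, guide map, then caption generation. A compact sketch of that data flow is below; the linear layers, dimensions, and module names are illustrative placeholders, not the architecture of the source.

```python
import torch
import torch.nn as nn

class CaptionGuideSketch(nn.Module):
    """Sketch of the flow: attention map -> latent variable -> guide map -> caption."""
    def __init__(self, feat_dim=256, latent_dim=32, map_size=49):
        super().__init__()
        self.attn_map = nn.Linear(feat_dim, map_size)     # caption attention map from the image
        self.to_latent = nn.Linear(map_size, latent_dim)  # projection onto the latent space
        self.to_guide = nn.Linear(latent_dim, map_size)   # guide map derived from the latent variable
        self.captioner = nn.Linear(feat_dim + map_size, feat_dim)

    def forward(self, image_feat):
        a = self.attn_map(image_feat)                     # (B, map_size) caption attention map
        z = self.to_latent(a)                             # (B, latent_dim) latent variable
        g = self.to_guide(z)                              # (B, map_size) guide map
        return self.captioner(torch.cat([image_feat, g], dim=1))

# Sampling different latent variables yields different guide maps, which is how
# multiple captions for one image could be produced under this sketch.
```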