-
Publication Number: US20180096222A1
Publication Date: 2018-04-05
Application Number: US15724118
Application Date: 2017-10-03
Inventor: Hyunjin YOON , Siadari T. SUPRAPTO , Hoon Ki LEE , Mi Kyong HAN
IPC: G06K9/62
CPC classification number: G06K9/627 , G06K9/00744 , G06K9/00758 , G06K9/00765 , G06K9/4609 , G06K9/6215 , G06K9/6256 , G06K9/6261 , G06K9/6277 , G06K2009/00738 , G06K2209/27
Abstract: A method and an apparatus for authoring machine learning-based immersive media are provided. The apparatus determines an immersive effect type for an original image of image content to be converted into immersive media by using an immersive effect classifier trained on existing immersive media in which immersive effects have already been added to images, detects an immersive effect section of the original image based on the determined effect type, and generates metadata for the detected immersive effect section.
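A minimal Python sketch of the kind of section-detection and metadata-generation step the abstract describes, assuming a pre-trained per-frame classifier that returns effect-type probabilities; the effect labels, threshold, frame rate, and metadata fields are illustrative assumptions, not details taken from the patent.

```python
# Hypothetical sketch: detect immersive-effect sections from per-frame classifier
# scores and emit simple metadata. The classifier itself is stubbed; in the patent
# it would be trained on existing immersive media whose effects are already added.

from dataclasses import dataclass
from typing import Callable, Dict, List, Optional


@dataclass
class EffectSection:
    effect_type: str      # e.g. "vibration", "wind" (illustrative labels)
    start_frame: int
    end_frame: int        # inclusive


def detect_sections(frames: List[object],
                    classify: Callable[[object], Dict[str, float]],
                    threshold: float = 0.5) -> List[EffectSection]:
    """Classify every frame and merge consecutive frames sharing the same
    above-threshold effect type into immersive-effect sections."""
    sections: List[EffectSection] = []
    current: Optional[EffectSection] = None
    for idx, frame in enumerate(frames):
        scores = classify(frame)                       # {effect_type: probability}
        best_type, best_score = max(scores.items(), key=lambda kv: kv[1])
        if best_score >= threshold:
            if current and current.effect_type == best_type:
                current.end_frame = idx                # extend the open section
            else:
                if current:
                    sections.append(current)
                current = EffectSection(best_type, idx, idx)
        elif current:
            sections.append(current)
            current = None
    if current:
        sections.append(current)
    return sections


def to_metadata(sections: List[EffectSection], fps: float = 30.0) -> List[dict]:
    """Convert detected sections into metadata records with times in seconds."""
    return [{"effect": s.effect_type,
             "start_sec": round(s.start_frame / fps, 3),
             "end_sec": round((s.end_frame + 1) / fps, 3)} for s in sections]


if __name__ == "__main__":
    # Dummy frames and a dummy classifier standing in for the learned model.
    dummy_frames = list(range(10))

    def dummy_classifier(frame_idx):
        return {"vibration": 0.9 if 3 <= frame_idx <= 6 else 0.1, "wind": 0.2}

    print(to_metadata(detect_sections(dummy_frames, dummy_classifier)))
```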
-
Publication Number: US20200334553A1
Publication Date: 2020-10-22
Application Number: US16854002
Application Date: 2020-04-21
Inventor: Hyunjin YOON , Mi Kyong HAN
Abstract: An apparatus and a method for predicting error possibility are provided. The method includes: generating a first annotation for training input data by using an algorithm; machine-learning an annotation evaluation model based on the first annotation and a correction history for the first annotation; generating a second annotation for evaluation input data by using the algorithm; and predicting the error probability of the second annotation based on the annotation evaluation model.
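A small Python sketch of the pipeline the abstract outlines, under the assumption that an automatic annotator exposes a confidence score and a box size as features and that the correction history marks which annotations a human had to fix; the feature set, labels, and use of logistic regression are illustrative choices, not the patented model.

```python
# Hypothetical sketch of the error-possibility pipeline: an algorithm produces
# first annotations, the correction history says which of them were erroneous,
# and an evaluation model learns to predict the error probability of second
# (new) annotations produced by the same algorithm.

import numpy as np
from sklearn.linear_model import LogisticRegression


def annotation_features(annotation: dict) -> list:
    """Turn one auto-generated annotation into a feature vector (assumed fields)."""
    return [annotation["confidence"], annotation["box_area"]]


# --- Steps 1-2: first annotations + correction history -> training data --------
first_annotations = [
    {"confidence": 0.95, "box_area": 0.20}, {"confidence": 0.40, "box_area": 0.02},
    {"confidence": 0.88, "box_area": 0.15}, {"confidence": 0.35, "box_area": 0.01},
    {"confidence": 0.70, "box_area": 0.10}, {"confidence": 0.30, "box_area": 0.03},
]
# 1 = the annotation was corrected by a human (erroneous), 0 = kept as generated.
correction_history = [0, 1, 0, 1, 0, 1]

X_train = np.array([annotation_features(a) for a in first_annotations])
y_train = np.array(correction_history)

# --- Step 3: machine-learn the annotation evaluation model ---------------------
evaluation_model = LogisticRegression().fit(X_train, y_train)

# --- Steps 4-5: second annotations -> predicted error probability --------------
second_annotations = [{"confidence": 0.92, "box_area": 0.18},
                      {"confidence": 0.33, "box_area": 0.02}]
X_eval = np.array([annotation_features(a) for a in second_annotations])
error_probability = evaluation_model.predict_proba(X_eval)[:, 1]
print(error_probability)   # higher value = more likely to need correction
```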
-
Publication Number: US20180262716A1
Publication Date: 2018-09-13
Application Number: US15917313
Application Date: 2018-03-09
Inventor: Jin Ah KANG , Hyunjin YOON , Deockgu JEE , Jong Hyun JANG , Mi Kyong HAN
IPC: H04N7/15 , G06K9/00 , G10L25/03 , H04N19/136
CPC classification number: H04N7/152 , G06K9/00268 , G06K9/00295 , G06K9/00335 , G06K9/0061 , G06K9/00664 , G10L25/03 , H04M3/567 , H04M3/568 , H04N19/136
Abstract: Provided are a method of providing a video conference service and apparatuses for performing the same. The method includes determining contributions of a plurality of participants to a video conference based on first video signals and first audio signals from the participants' devices, and generating a second video signal and a second audio signal to be transmitted to those devices based on the contributions.
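A brief Python sketch of the contribution-based composition the abstract describes, assuming a contribution score built from audio energy and video motion and used to weight the mixed audio and pick the main view; the scoring formula and the 0.7/0.3 weights are illustrative assumptions, not the patented computation.

```python
# Hypothetical sketch: score each participant's contribution from their first
# audio/video signals, then use the scores to build the second (output) audio
# mix and to promote the highest-contribution participant to the main view.

import numpy as np


def contribution(audio: np.ndarray, video_motion: np.ndarray,
                 w_audio: float = 0.7, w_video: float = 0.3) -> float:
    """Score one participant from speech activity and video motion (assumed cues)."""
    speech_activity = float(np.mean(audio ** 2))             # mean audio energy
    motion_activity = float(np.mean(np.abs(video_motion)))   # mean frame difference
    return w_audio * speech_activity + w_video * motion_activity


def compose_outputs(participants: dict):
    """Return a contribution-weighted audio mix, the main-view participant,
    and the per-participant mixing weights."""
    scores = {name: contribution(p["audio"], p["motion"])
              for name, p in participants.items()}
    total = sum(scores.values()) or 1.0
    weights = {name: s / total for name, s in scores.items()}
    mixed_audio = sum(weights[name] * p["audio"] for name, p in participants.items())
    main_speaker = max(scores, key=scores.get)
    return mixed_audio, main_speaker, weights


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo = {
        "alice": {"audio": 0.8 * rng.standard_normal(1600), "motion": rng.random(100)},
        "bob":   {"audio": 0.1 * rng.standard_normal(1600), "motion": 0.2 * rng.random(100)},
    }
    _, main_view, mix_weights = compose_outputs(demo)
    print("main view:", main_view, "| mixing weights:", mix_weights)
```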
-