-
Publication No.: US20220221981A1
Publication Date: 2022-07-14
Application No.: US17500351
Application Date: 2021-10-13
Inventor: Yongho LEE, Wookho SON, Beom Ryeol LEE
IPC: G06F3/0484, G06F3/01, G02B27/00, G02B27/01, G06N20/00
Abstract: A computing device adapts an interface for extended reality. The computing device collects user information and external environment information when a user loads a virtual interface to experience extended reality content, and selects a highest interaction accuracy from among one or more interaction accuracies mapped to the collected user information and external environment information. The computing device determines content information mapped to the highest interaction accuracy, and reloads the virtual interface based on a state of the virtual interface that is determined based on the determined content information.
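A minimal Python sketch of the general idea in this abstract, under stated assumptions: the lookup tables (accuracy_table, content_table), the InterfaceAdapter class, and the example context values are all hypothetical illustration names, not the patented implementation. It only shows the flow of looking up mapped interaction accuracies, picking the highest one, and deriving the interface state to reload.

    # Sketch only: hypothetical names, not the patented implementation.
    from dataclasses import dataclass

    @dataclass
    class Context:
        user_info: str          # e.g. "standing", "seated" (assumed categories)
        environment_info: str   # e.g. "bright_room", "dim_room"

    class InterfaceAdapter:
        def __init__(self, accuracy_table, content_table):
            # accuracy_table: (user_info, environment_info) -> list of interaction accuracies
            # content_table: interaction accuracy -> content information
            self.accuracy_table = accuracy_table
            self.content_table = content_table

        def reload_interface(self, ctx: Context):
            # 1. Accuracies mapped to the collected user and environment information.
            accuracies = self.accuracy_table.get((ctx.user_info, ctx.environment_info), [])
            if not accuracies:
                return None  # keep the currently loaded interface
            # 2. Select the highest interaction accuracy.
            best = max(accuracies)
            # 3. Content information mapped to that accuracy determines the new state.
            content = self.content_table[best]
            return {"accuracy": best, "state": content["interface_state"]}

    adapter = InterfaceAdapter(
        accuracy_table={("standing", "bright_room"): [0.72, 0.91]},
        content_table={0.91: {"interface_state": "large_buttons_hand_tracking"}},
    )
    print(adapter.reload_interface(Context("standing", "bright_room")))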
-
12.
Publication No.: US20190171280A1
Publication Date: 2019-06-06
Application No.: US16140210
Application Date: 2018-09-24
Inventor: Wook Ho SON, Seung Woo NAM, Hee Seok OH, Beom Ryeol LEE
Abstract: Disclosed is an apparatus and method of generating a VR sickness prediction model. A method of generating a VR sickness prediction model according to the present disclosure includes: displaying virtual reality content on a display unit; detecting first VR sickness information of a user who experiences the virtual reality content using a sensor; determining second VR sickness information using a user input that is input from the user in response to a request for inputting a degree of VR sickness for the virtual reality content; performing machine learning based on supervised learning using VR sickness-inducing factors for the virtual reality content, the first VR sickness information, and the second VR sickness information; and determining a correlation between the VR sickness-inducing factors and a list of VR sickness symptoms on a basis of the performed machine learning.
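A hedged Python sketch of the kind of supervised-learning setup the abstract describes, using scikit-learn's RandomForestRegressor as a stand-in model. The factor names, the averaging of the sensor-based and user-reported sickness scores, and the per-factor Pearson correlation are illustrative assumptions, not the disclosed method.

    # Sketch only: generic supervised learning on sickness-inducing factors.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Rows: content clips. Columns: hypothetical VR sickness-inducing factors
    # (camera rotation speed, field of view in degrees, frame rate).
    factors = np.array([
        [120.0, 110.0, 72.0],
        [ 30.0,  90.0, 90.0],
        [200.0, 100.0, 60.0],
    ])
    sensor_score = np.array([0.7, 0.2, 0.9])    # "first" VR sickness info (sensor-based)
    reported_score = np.array([0.6, 0.1, 1.0])  # "second" VR sickness info (user-reported)

    # Combine the two measurements into one training target (an assumption).
    target = (sensor_score + reported_score) / 2.0

    model = RandomForestRegressor(n_estimators=50, random_state=0)
    model.fit(factors, target)

    # Rough proxy for the factor/symptom correlation: per-factor Pearson
    # correlation with the combined sickness score.
    for i, name in enumerate(["rotation_speed", "fov_deg", "frame_rate"]):
        r = np.corrcoef(factors[:, i], target)[0, 1]
        print(name, round(r, 2))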
-
13.
Publication No.: US20160261855A1
Publication Date: 2016-09-08
Application No.: US15058292
Application Date: 2016-03-02
Inventor: Jeung Chul PARK, Hyung Jae SON, Beom Ryeol LEE, Il Kwon JEONG
CPC classification number: H04N13/282, G06T7/11, G06T7/174, G06T7/194, G06T2207/10024, H04N13/257, H04N2013/0092
Abstract: Provided is a method of generating multi-view immersive content. The method includes obtaining a multi-view background image from a plurality of cameras arranged in a curved shape, modeling the obtained multi-view background image to generate a codebook corresponding to the multi-view background image, obtaining a multi-view image including an object from the plurality of cameras and separating a foreground and a background from the obtained multi-view image by using the generated codebook, and synthesizing the object included in the separated foreground with a virtual background to generate multi-view immersive content.
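A toy Python sketch of codebook-style background modeling and foreground separation in the spirit of this abstract. The per-pixel min/max "codebook", the margin threshold, and the compositing step are simplifying assumptions rather than the disclosed algorithm; in the multi-view setting one codebook would be built per camera and the steps repeated for every view.

    # Sketch only: toy per-pixel codebook (min/max range) background model.
    import numpy as np

    def build_codebook(background_frames):
        # background_frames: (num_frames, H, W, 3) uint8 images of the empty scene.
        stack = background_frames.astype(np.float32)
        return {"low": stack.min(axis=0), "high": stack.max(axis=0)}

    def separate_foreground(frame, codebook, margin=15.0):
        # A pixel is foreground if any channel falls outside the learned
        # range by more than `margin` intensity levels.
        f = frame.astype(np.float32)
        outside = (f < codebook["low"] - margin) | (f > codebook["high"] + margin)
        return outside.any(axis=-1)  # boolean foreground mask, shape (H, W)

    def composite(frame, mask, virtual_background):
        # Paste the separated foreground object onto a virtual background.
        out = virtual_background.copy()
        out[mask] = frame[mask]
        return out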
-