-
1.
Publication No.: US20250021168A1
Publication Date: 2025-01-16
Application No.: US18657297
Filing Date: 2024-05-07
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Ravi SHARMA
Abstract: Disclosed is a method, implemented in a Visual See Through (VST) device, for interpreting user gestures in multi-reality scenarios. The method includes identifying one or more camera view zones based on fields of view of one or more cameras, determining one or more contexts based on an analysis of each of the one or more camera view zones, classifying the one or more camera view zones for each of the determined one or more contexts, and recognizing a user gesture as an input based on the classified one or more camera view zones.
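The four claimed steps (identify zones from camera fields of view, determine a context per zone, classify zones per context, recognize a gesture against the classification) might be sketched as follows. This is a minimal illustration, not the patented implementation: the zone/context data structures, the `scene` labels, and all function names are assumptions introduced here.

```python
from dataclasses import dataclass

@dataclass
class CameraViewZone:
    zone_id: str
    fov_degrees: float
    context: str = "unknown"   # e.g. "real", "virtual", "mixed"

def identify_zones(cameras):
    # Step 1: one view zone per camera field of view
    return [CameraViewZone(c["id"], c["fov"]) for c in cameras]

def determine_context(zone, scene):
    # Step 2 (hypothetical analysis): label each zone by the
    # reality it is showing, falling back to "real"
    zone.context = scene.get(zone.zone_id, "real")
    return zone

def recognize_gesture(zones, gesture, target_zone_id):
    # Steps 3-4: interpret the gesture as input relative to the
    # classified context of the zone it was performed in
    zone = next(z for z in zones if z.zone_id == target_zone_id)
    return {"gesture": gesture, "context": zone.context}

cameras = [{"id": "left", "fov": 90.0}, {"id": "right", "fov": 90.0}]
scene = {"left": "virtual"}   # the right camera is left unlabeled
zones = [determine_context(z, scene) for z in identify_zones(cameras)]
print(recognize_gesture(zones, "pinch", "left"))
```

The same "pinch" would thus be routed as a virtual-world input in the left zone but a real-world input in the right zone, which is the multi-reality disambiguation the abstract describes.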
-
2.
Publication No.: US20250147804A1
Publication Date: 2025-05-08
Application No.: US19017043
Filing Date: 2025-01-10
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Ravi SHARMA
Abstract: A method performed by a constraint internet of things (IoT) device for enabling a decentralized block chain of a plurality of constraint IoT devices, may include: creating a virtual resource pool by collecting resources of the plurality of constraint IoT devices connected in the decentralized block chain; creating a plurality of virtual nodes corresponding to the plurality of constraint IoT devices, wherein the virtual resource pool is accessible by the plurality of virtual nodes; receiving at least one first request to be executed by at least one other constraint IoT device; assigning the at least one first request to the at least one of the plurality of virtual nodes corresponding to the at least one other constraint IoT device; and allocating at least a portion of resources from the virtual resource pool to the at least one of the plurality of virtual nodes.
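The pooling idea in the claims (collect resources of many constrained devices into one virtual pool, mirror each device with a virtual node, and let an assigned request draw on the pool rather than on the target device alone) could be sketched like this. All class names, the CPU/memory resource model, and the request strings are illustrative assumptions, not from the patent:

```python
class VirtualResourcePool:
    """Aggregates resources contributed by constrained IoT devices."""
    def __init__(self):
        self.total_cpu = 0
        self.total_mem = 0

    def add(self, cpu, mem):
        self.total_cpu += cpu
        self.total_mem += mem

    def allocate(self, cpu, mem):
        # Draw shared capacity for a virtual node's assigned request
        if cpu > self.total_cpu or mem > self.total_mem:
            raise RuntimeError("insufficient pooled resources")
        self.total_cpu -= cpu
        self.total_mem -= mem

class VirtualNode:
    """Virtual counterpart of one constrained device, backed by the pool."""
    def __init__(self, device_id, pool):
        self.device_id = device_id
        self.pool = pool
        self.requests = []

    def execute(self, request, cpu, mem):
        self.pool.allocate(cpu, mem)
        self.requests.append(request)

# Pool the resources of three constrained devices in the chain
pool = VirtualResourcePool()
devices = [("dev-a", 1, 64), ("dev-b", 2, 128), ("dev-c", 1, 32)]
for dev_id, cpu, mem in devices:
    pool.add(cpu, mem)
nodes = {dev_id: VirtualNode(dev_id, pool) for dev_id, _, _ in devices}

# A request destined for dev-b is assigned to its virtual node, which
# can spend more than dev-b alone owns (3 CPU > dev-b's 2 CPU)
nodes["dev-b"].execute("validate-block", cpu=3, mem=96)
print(pool.total_cpu, pool.total_mem)
```

The point of the sketch is the last call: the virtual node for `dev-b` completes a request that exceeds that single device's contribution, because allocation is made against the shared pool.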
-
3.
Publication No.: US20240127551A1
Publication Date: 2024-04-18
Application No.: US18238831
Filing Date: 2023-08-28
Applicant: Samsung Electronics Co., Ltd.
Inventor: Ravi SHARMA
CPC classification number: G06T19/006 , G06F3/017 , G06T19/20 , G06V20/44 , G06T2219/2004 , G06T2219/2016 , G06V2201/07
Abstract: A method for handling an event of a real environment in a virtual reality (VR) environment, includes: identifying an object, a user, and an event occurring in the real environment around a primary user wearing a VR module implemented by at least one hardware processor; determining a first set of parameters of the identified object and the user, and a second set of parameters of the event and an actor in the event, wherein the actor performs the event and is the user; analyzing the VR environment to determine a suitable part in the VR environment; creating an augmented reality (AR) of the actor; scaling, based on the first set of parameters and the second set of parameters, the AR; aligning the AR in relation to the real environment; and merging the aligned AR into the suitable part of the VR environment while maintaining a context of the VR environment.
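The scale/align/merge pipeline in the claims might be sketched as follows. The two parameter sets are reduced here to a single distance and urgency value, the "suitable part" heuristic is simply the least occupied VR region, and every name and number is an assumption made for illustration only:

```python
from dataclasses import dataclass

@dataclass
class ARRepresentation:
    actor: str
    width: float
    height: float

def scale_ar(ar, actor_params, event_params):
    # Scale the AR of the actor using the first parameter set
    # (object/user, here: distance) and the second (event, here: urgency)
    factor = actor_params["distance_m"] * event_params["urgency"]
    return ARRepresentation(ar.actor, ar.width / factor, ar.height / factor)

def find_suitable_part(vr_regions):
    # Choose the least occupied region so merging preserves the
    # context of the ongoing VR environment
    return min(vr_regions, key=lambda r: r["occupancy"])

def merge(ar, region):
    region["overlays"].append(ar)
    return region

vr = [{"name": "periphery", "occupancy": 0.1, "overlays": []},
      {"name": "center", "occupancy": 0.9, "overlays": []}]
ar = ARRepresentation("visitor", width=1.8, height=1.8)
scaled = scale_ar(ar, {"distance_m": 2.0}, {"urgency": 1.0})
region = merge(scaled, find_suitable_part(vr))
print(region["name"], scaled.width)
```

Under these assumptions the real-world visitor appears, scaled down by distance, in the peripheral region of the VR scene rather than interrupting its center, mirroring the abstract's goal of merging the event "while maintaining a context of the VR environment".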
-