Abstract:
Techniques and systems are described for providing recommendations for extended reality systems. In some examples, a system determines one or more environmental features associated with a real-world environment of an extended reality system. The system determines one or more user features associated with a user of the extended reality system. The system also outputs, based on the one or more environmental features and the one or more user features, a notification associated with at least one application supported by the extended reality system.
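As an informal illustration of the idea above (not the claimed implementation), the following Python sketch scores hypothetical applications against detected environmental and user features and emits a notification string for any match; the App class, the feature tags, and the overlap-count scoring are all assumptions made for this example.

from dataclasses import dataclass

@dataclass
class App:
    name: str
    # Hypothetical tags describing contexts the app suits, e.g. "indoor", "seated".
    context_tags: set

def recommend_apps(environmental_features: set, user_features: set, apps: list) -> list:
    """Return notification strings for apps whose context tags overlap the observed features."""
    notifications = []
    for app in apps:
        # Simple overlap score between app context tags and detected features.
        score = len(app.context_tags & (environmental_features | user_features))
        if score > 0:
            notifications.append(f"Suggested app: {app.name} (matched {score} features)")
    return notifications

# Example usage with made-up features.
apps = [App("RoomPainter", {"indoor", "standing"}), App("TrailGuide", {"outdoor", "walking"})]
print(recommend_apps({"indoor", "low_light"}, {"standing"}, apps))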
Abstract:
A head-mounted device may include a processor configured to receive information from a sensor that is indicative of a position of the head-mounted device relative to a reference point on a face of a user; and adjust a rendering of an item of virtual content based on the position or a change in the position of the device relative to the face. The sensor may be a distance sensor, and the processor may be configured to adjust the rendering of the item of virtual content based on a measured distance or change of distance between the head-mounted device and the point of reference on the user's face. The point of reference on the user's face may be one or both of the user's eyes.
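A minimal sketch of the rendering adjustment described above, assuming a simple proportional model in which the apparent size of virtual content is corrected by the ratio of the measured eye-to-device distance to a nominal distance; the function name and the millimeter units are illustrative assumptions, not the device's actual algorithm.

def adjusted_render_scale(nominal_scale: float,
                          nominal_eye_distance_mm: float,
                          measured_eye_distance_mm: float) -> float:
    """Scale virtual content so its apparent size stays stable as the headset
    shifts toward or away from the eyes (a simple proportional model, assumed
    here for illustration)."""
    if measured_eye_distance_mm <= 0:
        return nominal_scale  # ignore invalid sensor readings
    return nominal_scale * (measured_eye_distance_mm / nominal_eye_distance_mm)

# Example: headset slipped 3 mm farther from the eyes than its nominal 15 mm position.
print(adjusted_render_scale(1.0, 15.0, 18.0))  # -> 1.2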
Abstract:
Systems, apparatuses (or devices), methods, and computer-readable media are provided for generating virtual content. For example, a device (e.g., an extended reality device) can obtain an image of a scene of a real-world environment, wherein the real-world environment is viewable through a display of the extended reality device as virtual content is displayed by the display. The device can detect at least a part of a physical hand of a user in the image. The device can generate a virtual keyboard based on detecting at least the part of the physical hand. The device can determine a position for the virtual keyboard on the display of the extended reality device relative to at least the part of the physical hand. The device can display the virtual keyboard at the position on the display.
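The placement step could look something like the following sketch, which anchors a virtual keyboard at a fixed offset from a detected palm center and clamps it to the display bounds; the pixel offsets, keyboard size, and the assumption that hand detection yields a palm-center coordinate are illustrative, not the disclosed method.

from typing import Tuple

def place_virtual_keyboard(palm_center_px: Tuple[int, int],
                           display_size_px: Tuple[int, int],
                           offset_px: Tuple[int, int] = (0, 120),
                           keyboard_size_px: Tuple[int, int] = (600, 220)) -> Tuple[int, int]:
    """Return the top-left display position for a virtual keyboard placed at a
    fixed offset below the detected palm center, clamped to the display bounds."""
    x = palm_center_px[0] + offset_px[0] - keyboard_size_px[0] // 2
    y = palm_center_px[1] + offset_px[1]
    # Clamp so the keyboard stays fully on screen.
    x = max(0, min(x, display_size_px[0] - keyboard_size_px[0]))
    y = max(0, min(y, display_size_px[1] - keyboard_size_px[1]))
    return x, y

# Example: palm detected near the center of a 1920x1080 display.
print(place_virtual_keyboard((960, 540), (1920, 1080)))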
Abstract:
Methods and apparatuses are disclosed for assisting a user in performing a three-dimensional (3D) scan of an object. An example user device to assist with scanning may include a processor. The user device may further include a scanner coupled to the processor and configured to perform a 3D scan of an object. The user device may also include a display to display a graphical user interface, wherein the display is coupled to the processor. The user device may further include a memory coupled to the processor and the display, the memory including one or more instructions that when executed by the processor cause the graphical user interface to display a target marker for the 3D scan and display a scanner position marker to assist in moving the scanner to a preferred location and direction.
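One way to drive such markers is sketched below, under the assumption that the preferred scanner pose is known as a 3D point and that the guidance UI only needs a positional offset and an "in position" flag; the tolerance value and function name are arbitrary choices for this example.

import math

def scanner_guidance(scanner_pos, preferred_pos, tolerance=0.05):
    """Return the offset from the scanner's current position to the preferred
    position and whether the scanner is close enough to begin capturing.
    Positions are (x, y, z) tuples in meters; the tolerance is an assumption."""
    offset = tuple(p - s for s, p in zip(scanner_pos, preferred_pos))
    distance = math.sqrt(sum(d * d for d in offset))
    return offset, distance <= tolerance

# Example: scanner is 10 cm left of the preferred capture position.
print(scanner_guidance((0.0, 1.2, 0.5), (0.1, 1.2, 0.5)))  # not yet in position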
Abstract:
A method of generating metadata includes using at least one digital image to select a plurality of objects, wherein the at least one digital image depicts the plurality of objects in relation to a physical space. The method also includes, by at least one processor and based on information indicating positions of the selected objects in a location space, producing metadata that identifies one among a plurality of candidate geometrical arrangements of the selected objects.
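As a rough sketch of what identifying one among a plurality of candidate geometrical arrangements could involve, the snippet below distinguishes two hypothetical candidates (roughly linear versus scattered) from 2-D object positions by measuring deviation from a reference line; the candidate set, the tolerance, and the 2-D simplification are assumptions for illustration only.

import math

def classify_arrangement(positions, line_tolerance=0.1):
    """Pick one of two hypothetical candidate arrangements ("linear" or
    "scattered") for a set of 2-D object positions, by measuring how far the
    points deviate from the line through the first and last point."""
    (x0, y0), (x1, y1) = positions[0], positions[-1]
    length = math.hypot(x1 - x0, y1 - y0) or 1.0
    # Perpendicular distance of each point from the reference line.
    max_dev = max(abs((y1 - y0) * (x - x0) - (x1 - x0) * (y - y0)) / length
                  for x, y in positions)
    return {"arrangement": "linear" if max_dev <= line_tolerance else "scattered"}

# Example: three objects lying almost on a straight line.
print(classify_arrangement([(0.0, 0.0), (1.0, 0.02), (2.0, 0.0)]))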
Abstract:
Systems, methods, and non-transitory media are provided for generating virtual private spaces for extended reality (XR) experiences. An example method can include initiating a virtual session for presenting virtual content and identifying, for the virtual session, a portion of a physical space for use as a virtual private space for presenting at least a portion of the virtual content. The method can include outputting boundary information defining a boundary of the virtual private space, and generating at least the portion of the virtual content for the virtual private space. At least the portion of the virtual content is viewable in the virtual private space by one or more authorized users of the virtual session and is not viewable by one or more unauthorized users.
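A compact sketch of the boundary and visibility logic, assuming an axis-aligned rectangular boundary on the floor plane and a set of authorized user identifiers; the data layout and the rule that content outside the boundary is visible to everyone are assumptions made for this example.

from dataclasses import dataclass
from typing import Set, Tuple

@dataclass
class VirtualPrivateSpace:
    # Axis-aligned boundary of the private region, in meters: (min_x, min_z, max_x, max_z).
    boundary: Tuple[float, float, float, float]
    authorized_users: Set[str]

    def contains(self, position_xz: Tuple[float, float]) -> bool:
        x, z = position_xz
        min_x, min_z, max_x, max_z = self.boundary
        return min_x <= x <= max_x and min_z <= z <= max_z

    def is_visible_to(self, user_id: str, content_position_xz: Tuple[float, float]) -> bool:
        """Content inside the boundary is rendered only for authorized users."""
        if not self.contains(content_position_xz):
            return True  # content outside the private space is visible to everyone
        return user_id in self.authorized_users

# Example: a 2 m x 2 m private corner of the room.
space = VirtualPrivateSpace((0.0, 0.0, 2.0, 2.0), {"alice"})
print(space.is_visible_to("alice", (1.0, 1.0)), space.is_visible_to("bob", (1.0, 1.0)))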
Abstract:
A method, an apparatus, and a computer program product for communication are provided. A content providing device is operable to use engagement level information to modify presentation of content. In one aspect, the content providing device may determine a first engagement level of a first user in the presentation environment, compare the first engagement level to an engagement threshold to determine if the first engagement level is less than the engagement threshold, and provide the content in the presentation environment using a second set of presentation attributes upon a determination that the first engagement level is less than the engagement threshold.
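The threshold comparison described above maps directly to a small selection function; the attribute dictionaries and the 0-to-1 engagement scale below are assumptions for illustration, not the disclosed presentation attributes.

def select_presentation_attributes(first_engagement_level: float,
                                   engagement_threshold: float,
                                   first_attributes: dict,
                                   second_attributes: dict) -> dict:
    """Return the second set of presentation attributes when the first user's
    engagement level falls below the threshold, otherwise keep the first set."""
    if first_engagement_level < engagement_threshold:
        return second_attributes
    return first_attributes

# Example: a distracted viewer triggers louder audio and larger captions.
print(select_presentation_attributes(
    0.3, 0.5,
    {"volume": 0.6, "caption_size": 14},
    {"volume": 0.8, "caption_size": 20}))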
Abstract:
A method of generating metadata includes using at least one digital image to select at least one among a plurality of objects, wherein the at least one digital image depicts the plurality of objects in relation to a physical space. The method also includes, in response to the selecting at least one object, determining a position of the at least one object in a location space. The method also includes, based on said determined position, producing metadata that identifies one among a plurality of separate regions that divide the location space, wherein said plurality of separate regions includes regions of unequal size.
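A minimal sketch of mapping a determined position to one among several regions of unequal size, here simplified to one dimension with illustrative boundary values; the actual location space and region shapes are not specified by the abstract and are assumed for this example.

def region_for_position(x: float, region_boundaries=(0.0, 1.0, 3.0, 6.0)) -> int:
    """Map a 1-D position to the index of the region containing it, where
    consecutive boundary values define regions of unequal size."""
    if x < region_boundaries[0]:
        return 0  # clamp positions before the first boundary into the first region
    for index in range(len(region_boundaries) - 1):
        if x < region_boundaries[index + 1]:
            return index
    return len(region_boundaries) - 2  # positions past the last boundary fall in the final region

# Example: position 2.4 falls in the second region (1.0 to 3.0).
print(region_for_position(2.4))  # -> 1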