Abstract:
Systems, apparatuses (or devices), methods, and computer-readable media are provided for generating virtual content. For example, a device (e.g., an extended reality device) can obtain an image of a scene of a real-world environment, wherein the real-world environment is viewable through a display of the extended reality device while virtual content is displayed on the display. The device can detect at least a part of a physical hand of a user in the image. The device can generate a virtual keyboard based on detecting at least the part of the physical hand. The device can determine a position for the virtual keyboard on the display of the extended reality device relative to at least the part of the physical hand. The device can display the virtual keyboard at the position on the display.
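As a rough illustration of the flow described above, the following Python sketch detects a hand in a camera frame and anchors a virtual keyboard at a display position derived from it. The detector, the image-to-display mapping, and the specific offset are hypothetical placeholders for illustration only, not an actual XR SDK.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class HandDetection:
    wrist_xy: Tuple[float, float]  # wrist location in image pixels
    palm_width_px: float           # rough hand scale in pixels

def detect_hand(image) -> Optional[HandDetection]:
    # Placeholder: a real device would run a hand-detection model here.
    return HandDetection(wrist_xy=(320.0, 400.0), palm_width_px=90.0)

def image_to_display(xy: Tuple[float, float]) -> Tuple[float, float]:
    # Placeholder: a real device would map camera-image coordinates to
    # display coordinates using calibration between camera and display.
    return xy

def place_virtual_keyboard(image) -> Optional[dict]:
    hand = detect_hand(image)
    if hand is None:
        return None  # no hand detected, so no keyboard is generated
    # Position the keyboard slightly above the wrist, scaled to hand size.
    anchor_img = (hand.wrist_xy[0], hand.wrist_xy[1] - 1.5 * hand.palm_width_px)
    return {
        "position": image_to_display(anchor_img),
        "scale": hand.palm_width_px / 100.0,
    }

if __name__ == "__main__":
    print(place_virtual_keyboard(image=None))  # dummy frame for illustration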
Abstract:
A device includes a memory and one or more processors configured to process image data corresponding to a user's face to generate face data. The one or more processors are configured to process sensor data to generate feature data and to generate a representation of an avatar based on the face data and the feature data. The one or more processors are also configured to generate an audio output for the avatar based on the sensor data.
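A minimal sketch of that pipeline is shown below, assuming hypothetical helpers for face analysis, sensor feature extraction, and avatar audio generation; none of these names come from the abstract.

from dataclasses import dataclass
from typing import List

@dataclass
class AvatarFrame:
    face_params: List[float]     # e.g., expression coefficients from the face image
    feature_params: List[float]  # e.g., pose or motion features from other sensors
    audio_samples: List[float]   # audio output generated for the avatar

def face_data_from_image(image_data) -> List[float]:
    # Placeholder for processing image data of the user's face into face data.
    return [0.1, 0.4, 0.0]

def feature_data_from_sensors(sensor_data) -> List[float]:
    # Placeholder for processing sensor data into feature data.
    return [0.2, 0.7]

def audio_for_avatar(sensor_data) -> List[float]:
    # Placeholder for generating the avatar's audio output from sensor data,
    # e.g., captured or synthesized speech.
    return [0.0] * 160

def render_avatar(image_data, sensor_data) -> AvatarFrame:
    return AvatarFrame(
        face_params=face_data_from_image(image_data),
        feature_params=feature_data_from_sensors(sensor_data),
        audio_samples=audio_for_avatar(sensor_data),
    )

if __name__ == "__main__":
    print(render_avatar(image_data=None, sensor_data=None))  # dummy inputs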
Abstract:
Methods, systems, computer-readable media, and apparatuses for audio signal processing are presented. Some configurations include determining that first audio activity in at least one microphone signal is voice activity; determining whether the voice activity is voice activity of a participant in an application session active on a device; generating, based at least on a result of that determination, an antinoise signal to cancel the first audio activity; and producing, by a loudspeaker, an acoustic signal that is based on the antinoise signal. Applications relating to shared virtual spaces are described.
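One way to read that processing chain is sketched below, under the assumption that voice activity from non-participants is what gets cancelled (the abstract leaves the policy open). The voice-activity and participant checks are placeholders, and a deployed system would use adaptive filtering rather than the naive signal inversion shown here.

import numpy as np

def is_voice_activity(frame: np.ndarray) -> bool:
    # Placeholder VAD: a real system would use an energy/spectral or ML detector.
    return float(np.mean(frame ** 2)) > 1e-4

def is_session_participant(frame: np.ndarray) -> bool:
    # Placeholder check against participants of the active application session
    # (e.g., a shared virtual space); a real system might compare speaker
    # embeddings or use session metadata.
    return False

def antinoise(frame: np.ndarray) -> np.ndarray:
    # Naive antinoise: phase-inverted copy of the frame. A practical ANC path
    # would use an adaptive filter that models the acoustic path to the ear.
    return -frame

def process_mic_frame(frame: np.ndarray) -> np.ndarray:
    """Return the signal to drive the loudspeaker for this microphone frame."""
    if is_voice_activity(frame) and not is_session_participant(frame):
        return antinoise(frame)   # cancel voice that is not part of the session
    return np.zeros_like(frame)   # otherwise, emit no antinoise

if __name__ == "__main__":
    mic = 0.05 * np.random.randn(256)  # stand-in microphone frame
    print(process_mic_frame(mic).shape)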
Abstract:
A method, an apparatus, and a computer program product for communication are provided. A content providing device is operable to use engagement level information to modify presentation of content. In one aspect, the content providing device may determine that an engagement level at a first time is less than an engagement threshold at the first time. The engagement level may be based at least on one or more contextual items associated with presentation of content by the content providing device. Further, the content providing device may store, in storage, a marker associated with the content at the first time. Moreover, the content providing device may determine whether the engagement level at a second time is greater than or equal to the engagement threshold.
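A small sketch of that control loop follows, assuming a scalar engagement score, a fixed threshold, and a resume-from-marker behavior; these specifics are illustrative assumptions rather than details taken from the abstract.

class ContentSession:
    """Tracks engagement against a threshold and bookmarks the drop point."""

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold
        self.marker = None  # content position stored when engagement drops

    def update(self, engagement: float, content_position: float) -> str:
        if engagement < self.threshold and self.marker is None:
            # Engagement fell below the threshold at this (first) time:
            # store a marker associated with the content at this position.
            self.marker = content_position
            return f"marker stored at {content_position:.1f}s"
        if engagement >= self.threshold and self.marker is not None:
            # Engagement recovered at a later (second) time: modify the
            # presentation, e.g., resume from the stored marker.
            resume_at, self.marker = self.marker, None
            return f"resume content from {resume_at:.1f}s"
        return "no change"

if __name__ == "__main__":
    session = ContentSession(threshold=0.5)
    for t, score in [(10.0, 0.8), (20.0, 0.3), (30.0, 0.2), (40.0, 0.7)]:
        print(t, session.update(score, t))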