Abstract:
A method is provided, including: activating a plurality of glove emitters positioned on a glove interface object; using a plurality of proximity sensors positioned at fingertip portions of the glove interface object to determine a proximity of the fingertip portions to the glove emitters; in response to determining a location of the glove interface object within a predefined distance of a peripheral device, activating a plurality of peripheral emitters positioned at the peripheral device, and transitioning, from using the proximity sensors to determine the proximity of the fingertip portions to the glove emitters, to using the proximity sensors to determine a proximity of the fingertip portions to the peripheral emitters.
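Purely as an illustration of the hand-off described above, the following Python sketch switches the proximity reference from the glove's own emitters to the peripheral's emitters once the glove comes within a predefined distance of the peripheral. The threshold value, positions, and function names are assumptions for illustration, not details from the abstract.

```python
import math

PREDEFINED_DISTANCE = 0.30  # meters; assumed hand-off threshold

def fingertip_proximities(fingertips, emitters):
    """For each fingertip, report its distance to the nearest active emitter."""
    return [min(math.dist(tip, e) for e in emitters) for tip in fingertips]

def update(glove_pos, peripheral_pos, fingertips, glove_emitters, peripheral_emitters):
    # When the glove is within the predefined distance of the peripheral,
    # activate the peripheral emitters and measure proximity against them;
    # otherwise keep using the glove-mounted emitters.
    if math.dist(glove_pos, peripheral_pos) <= PREDEFINED_DISTANCE:
        active_emitters = peripheral_emitters
    else:
        active_emitters = glove_emitters
    return fingertip_proximities(fingertips, active_emitters)
```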
Abstract:
A method is provided, including the following method operations: receiving captured images of an interactive environment in which a head-mounted display (HMD) is disposed; receiving inertial data processed from at least one inertial sensor of the HMD; analyzing the captured images and the inertial data to determine a current location and a predicted future location of the HMD; using the predicted future location of the HMD to adjust a beamforming direction of an RF transceiver towards the predicted future location of the HMD; tracking a gaze of a user of the HMD; generating image data depicting a view of a virtual environment for the HMD, wherein regions of the view are differentially rendered; generating audio data depicting sounds from the virtual environment, the audio data being configured to enable localization of the sounds by the user; and transmitting the image data and the audio data via the RF transceiver to the HMD.
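A minimal sketch of the prediction and beam-steering step, assuming a constant-velocity predictor and a simple azimuth/elevation steer; the positions, velocity, and 50 ms look-ahead are illustrative, and the real system would fuse the captured images and inertial data far more carefully.

```python
import math

def predict_future_position(position, velocity, dt):
    """Constant-velocity prediction of where the HMD will be dt seconds from now."""
    return tuple(p + v * dt for p, v in zip(position, velocity))

def beamforming_direction(transceiver_pos, target_pos):
    """Azimuth/elevation (radians) from the RF transceiver toward the target."""
    dx, dy, dz = (t - s for s, t in zip(transceiver_pos, target_pos))
    azimuth = math.atan2(dy, dx)
    elevation = math.atan2(dz, math.hypot(dx, dy))
    return azimuth, elevation

# Steer the beam toward where the HMD is expected to be in 50 ms.
hmd_position, hmd_velocity = (1.0, 2.0, 1.6), (0.4, -0.1, 0.0)
future = predict_future_position(hmd_position, hmd_velocity, dt=0.05)
print(beamforming_direction((0.0, 0.0, 2.5), future))
```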
Abstract:
A method for processing graphics for a GPU program is provided, including: translating instructions from a shading language into an intermediate language with a front end of a GPU compiler; and translating the instructions from the intermediate language into a GPU object language with a back end of the GPU compiler; wherein the instructions in the shading language include instructions defining a layout of resources for the GPU program.
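The two-stage structure (front end to an intermediate language, back end to a GPU object language) can be sketched as below; the toy "layout" syntax, dictionary-based IR, and output format are invented stand-ins, not a real shader compiler.

```python
def front_end(shader_source):
    """Translate shading-language source (with resource layout declarations)
    into a simple intermediate representation."""
    ir = {"ops": [], "resource_layout": {}}
    for line in shader_source.splitlines():
        line = line.strip()
        if line.startswith("layout"):
            # e.g. "layout binding=0 texture2D albedo"
            _, binding, kind, name = line.split()
            ir["resource_layout"][name] = {"binding": int(binding.split("=")[1]),
                                           "kind": kind}
        elif line:
            ir["ops"].append(line)
    return ir

def back_end(ir):
    """Translate the intermediate representation into a mock GPU object format."""
    header = [f"BIND {r['binding']} {r['kind']} {name}"
              for name, r in ir["resource_layout"].items()]
    body = [f"OP {op}" for op in ir["ops"]]
    return "\n".join(header + body)

print(back_end(front_end("layout binding=0 texture2D albedo\nsample albedo uv")))
```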
Abstract:
Certain aspects of the present disclosure include systems and techniques for generating content that indicates a sensation associated with audio. One example method generally includes monitoring audio to be played during display of an associated portion of an interactive content stream provided over a communication network to at least one viewing device during an interactive session, and analyzing, via a machine learning component, the audio to determine a sensation associated with at least a portion of the audio. The method may also include determining an effect indicating the sensation, wherein the effect is associated with one or more output devices associated with the at least one viewing device, and outputting an indication of the effect to the associated output devices, wherein the effect is configured to be output along with the audio in real-time with the display of the associated portion of the interactive content stream.
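A hedged sketch of the pipeline: a placeholder classifier (a simple energy heuristic standing in for the machine learning component) labels an audio chunk with a sensation, which is mapped to an effect for the viewer's output devices. The sensation labels, effect table, and device name are assumptions.

```python
import math

def classify_sensation(samples):
    """Placeholder for the machine learning component: infer a sensation label."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return "rumble" if rms > 0.5 else "calm"

SENSATION_TO_EFFECT = {
    "rumble": {"device": "controller_haptics", "intensity": 0.8},
    "calm":   {"device": "controller_haptics", "intensity": 0.0},
}

def process_chunk(samples):
    sensation = classify_sensation(samples)
    effect = SENSATION_TO_EFFECT[sensation]
    # In the described system the effect indication would be sent to the
    # viewer's output devices in real time with the audio playback.
    return sensation, effect

print(process_chunk([0.9, -0.8, 0.7, -0.9]))
```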
Abstract:
A method for dynamically altering an avatar in a video game is provided, including: streaming gameplay video of a session of a video game over a network to a plurality of spectator devices, wherein the session enables gameplay by a player represented by an avatar in the video game; receiving, over the network from the plurality of spectator devices, comments from spectators viewing the gameplay video via the spectator devices; analyzing the comments to determine content of the comments during the session; using the determined content of the comments to generate an avatar modification for the player; implementing the avatar modification to alter the avatar of the player.
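One way to picture the comment-analysis step is the sketch below, which tallies spectator comments against a keyword table and lets the dominant theme choose an avatar modification; the keyword table and modification names are invented for illustration.

```python
from collections import Counter

THEME_KEYWORDS = {
    "fire": "flaming_armor",
    "speed": "speed_trail",
    "king": "golden_crown",
}

def analyze_comments(comments):
    """Count how often each theme keyword appears across spectator comments."""
    counts = Counter()
    for comment in comments:
        for keyword in THEME_KEYWORDS:
            if keyword in comment.lower():
                counts[keyword] += 1
    return counts

def avatar_modification(comments):
    counts = analyze_comments(comments)
    if not counts:
        return None
    top_theme, _ = counts.most_common(1)[0]
    return THEME_KEYWORDS[top_theme]

print(avatar_modification(["That play was FIRE", "fire fire fire", "so fast"]))
```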
Abstract:
Methods and systems for providing streaming content of a video game at a client device include receiving frames of streaming content from a game server. The frames represent a current game state. The frames are analyzed to generate predicted frames that are likely to occur following the current frames. The predicted frames are stored in a prediction frame buffer and used to fill any gaps in the subsequent frames received from the game server, which represent a subsequent game state.
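A minimal sketch of the gap-filling idea, assuming a naive repeat-last-frame predictor in place of real frame prediction; the frame IDs and buffer API are illustrative.

```python
class PredictionFrameBuffer:
    def __init__(self):
        self.predicted = {}

    def store_prediction(self, frame_id, frame):
        self.predicted[frame_id] = frame

    def fill_gap(self, frame_id):
        return self.predicted.get(frame_id)

def present(received, expected_ids, buffer):
    """Return the frames to display, substituting predictions for gaps."""
    out = []
    for frame_id in expected_ids:
        frame = received.get(frame_id)
        if frame is None:                       # gap in the stream
            frame = buffer.fill_gap(frame_id)   # use the predicted frame
        if frame is not None:
            # Naive predictor: assume the following frame resembles this one.
            buffer.store_prediction(frame_id + 1, frame)
            out.append((frame_id, frame))
    return out

buf = PredictionFrameBuffer()
print(present({1: "F1", 3: "F3"}, [1, 2, 3], buf))  # frame 2 filled from prediction
```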
Abstract:
To improve the fidelity of a motion sensor, voice-induced and haptic-induced components in signals from the motion sensor are canceled prior to outputting the final motion signal to an app requiring knowledge of device motion, such as motion of an HMD for a computer game.
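A deliberately simplified stand-in for the described cancellation: scaled copies of voice and haptic reference signals are subtracted from the raw motion signal before it reaches the consuming app. The coupling coefficients are illustrative; a real system would estimate them adaptively.

```python
VOICE_COUPLING = 0.2    # assumed coupling of voice vibration into the sensor
HAPTIC_COUPLING = 0.5   # assumed coupling of haptic actuation into the sensor

def clean_motion_signal(raw, voice_ref, haptic_ref):
    """Cancel voice- and haptic-induced components sample by sample."""
    return [m - VOICE_COUPLING * v - HAPTIC_COUPLING * h
            for m, v, h in zip(raw, voice_ref, haptic_ref)]

raw = [0.10, 0.60, 0.35]
voice = [0.00, 0.50, 0.25]
haptic = [0.20, 0.80, 0.40]
print(clean_motion_signal(raw, voice, haptic))  # motion signal handed to the app
```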
Abstract:
A user's eyes and, if desired, head are tracked as the user's gaze follows a moving object on a display. Motion blur of the moving object is keyed to the eye/head tracking. Motion blur of other objects in the frame may also be keyed to the eye/head tracking.
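A sketch of how blur could be keyed to the tracking: blur length grows with an object's on-screen velocity relative to the tracked eye/head velocity, so the followed object gets little blur while others get more. The blur scale constant is arbitrary.

```python
BLUR_SCALE = 0.02  # seconds of simulated shutter, assumed

def motion_blur_length(object_velocity, gaze_velocity):
    """Blur length in pixels from velocity relative to the gaze (pixels/sec)."""
    rel_vx = object_velocity[0] - gaze_velocity[0]
    rel_vy = object_velocity[1] - gaze_velocity[1]
    return BLUR_SCALE * (rel_vx ** 2 + rel_vy ** 2) ** 0.5

gaze_velocity = (300.0, 0.0)            # eyes tracking a ball moving right
print(motion_blur_length((300.0, 0.0), gaze_velocity))  # followed object: ~0 blur
print(motion_blur_length((0.0, 0.0), gaze_velocity))    # static background: blurred
```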
Abstract:
Systems and methods for customized dialogue support in virtual environments are provided. Dialogue maps stored in memory may specify dialogue triggers each associated with a corresponding dialogue instruction. Data regarding an interactive session associated with a user device may be monitored based on one or more of the stored dialogue maps. The presence of one of the dialogue triggers specified by the one or more dialogue maps may be detected based on the monitored data. Customized dialogue output may be generated in response to the detected dialogue trigger and based on the dialogue instruction corresponding to the detected dialogue trigger. The customized dialogue output may be provided to the interactive session in real-time with detection of the detected dialogue trigger.
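A hedged sketch of a dialogue map: each trigger is a predicate over monitored session data paired with an instruction that templates the customized output. The trigger names and session fields are invented for illustration.

```python
DIALOGUE_MAP = {
    "low_health": {
        "trigger": lambda s: s.get("player_health", 100) < 20,
        "instruction": "Hang in there, {player_name}! Look for a health pack.",
    },
    "boss_room": {
        "trigger": lambda s: s.get("location") == "boss_room",
        "instruction": "{player_name}, this is it. Stay behind cover.",
    },
}

def monitor_session(session_data):
    """Check session data against the dialogue map and emit customized dialogue."""
    outputs = []
    for name, entry in DIALOGUE_MAP.items():
        if entry["trigger"](session_data):            # dialogue trigger detected
            outputs.append(entry["instruction"].format(**session_data))
    return outputs

print(monitor_session({"player_name": "Aki", "player_health": 12,
                       "location": "boss_room"}))
```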
Abstract:
The present technology provides solutions for crowd-sourcing stream productions for a virtual esports environment. A method can include generating a virtual environment associated with an interactive session that includes a plurality of spectator devices, wherein each of the spectator devices is presented with a different display based on a corresponding vantage point located within the virtual environment; receiving a plurality of media captures from the spectator devices, wherein each of the media captures is captured from the corresponding vantage point of the spectator device within the virtual environment; selecting one of the media captures based on a comparison of visibility of an asset in the virtual environment; and streaming the selected media capture to a primary display on a requesting device.
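The selection step might look like the following sketch, where each spectator capture carries a visibility score for an asset of interest and the best-scoring capture is streamed to the primary display; the scoring scheme and record format are assumptions.

```python
def asset_visibility(capture, asset_id):
    """Fraction of the frame (0..1) in which the asset is visible in this capture."""
    return capture["visibility"].get(asset_id, 0.0)

def select_capture(media_captures, asset_id):
    """Pick the spectator capture that shows the asset best."""
    return max(media_captures, key=lambda c: asset_visibility(c, asset_id))

captures = [
    {"spectator": "A", "visibility": {"goal_play": 0.15}},
    {"spectator": "B", "visibility": {"goal_play": 0.62}},
    {"spectator": "C", "visibility": {"goal_play": 0.40}},
]
best = select_capture(captures, "goal_play")
print(best["spectator"])  # capture streamed to the requesting device
```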