Abstract:
A method is provided, including: receiving captured images of an interactive environment in which a head-mounted display (HMD) is disposed; receiving inertial data processed from at least one inertial sensor of the HMD; analyzing the captured images of the interactive environment and the inertial data to determine a predicted future location of the HMD; using the predicted future location of the HMD to adjust a beamforming direction of an RF transceiver towards the predicted future location of the HMD; tracking a gaze of a user of the HMD; predicting a movement of the gaze of the user; generating video depicting a view of a virtual environment for the HMD, wherein regions of the view are rendered differently based on the predicted movement of the gaze of the user; and wirelessly transmitting the video via the RF transceiver to the HMD using the adjusted beamforming direction.
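For illustration, here is a minimal Python sketch of the prediction-and-steering step described above: a constant-velocity extrapolation of the HMD position (as might be fused from image and inertial tracking), followed by computing the beam azimuth and elevation toward the predicted point. The constant-velocity model, the types, and all function names are assumptions for illustration, not the patented implementation.
```python
import math
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

def predict_future_location(position: Vec3, velocity: Vec3, horizon_s: float) -> Vec3:
    """Constant-velocity extrapolation of the HMD position (assumed model)."""
    return Vec3(
        position.x + velocity.x * horizon_s,
        position.y + velocity.y * horizon_s,
        position.z + velocity.z * horizon_s,
    )

def beam_direction(transceiver: Vec3, target: Vec3) -> tuple[float, float]:
    """Azimuth/elevation (radians) from the transceiver toward the target."""
    dx, dy, dz = target.x - transceiver.x, target.y - transceiver.y, target.z - transceiver.z
    azimuth = math.atan2(dy, dx)
    elevation = math.atan2(dz, math.hypot(dx, dy))
    return azimuth, elevation

# Example: HMD moving 0.5 m/s along x; steer the beam 100 ms ahead.
future = predict_future_location(Vec3(1.0, 0.0, 1.5), Vec3(0.5, 0.0, 0.0), 0.1)
az, el = beam_direction(Vec3(0.0, 0.0, 2.0), future)
```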
Abstract:
Predictive pre-fetching of streams for 360 degree video is described. User view orientation metadata is obtained for a 360 degree video stream that includes data for a plurality of viewports. Data corresponding to one or more high-resolution frames for a particular one of the viewports is pre-fetched based on the user view orientation metadata, and those frames are displayed. The high-resolution frames are characterized by a higher resolution than that of the remaining viewports.
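One plausible shape of the pre-fetch step, sketched below: pick the viewport whose center is angularly closest to the predicted view orientation and request its high-resolution frames ahead of display, leaving the other viewports at lower resolution. The eight-viewport yaw partition and the stream.request_frame interface are hypothetical assumptions.
```python
import math

# Assumed layout: 8 viewports partitioned by yaw, 45 degrees apart.
VIEWPORT_CENTERS_DEG = {i: i * 45.0 for i in range(8)}

def angular_distance(a_deg: float, b_deg: float) -> float:
    """Shortest angular distance between two yaw angles, in degrees."""
    d = abs(a_deg - b_deg) % 360.0
    return min(d, 360.0 - d)

def select_viewport(predicted_yaw_deg: float) -> int:
    """Choose the viewport closest to the predicted view orientation."""
    return min(VIEWPORT_CENTERS_DEG,
               key=lambda v: angular_distance(VIEWPORT_CENTERS_DEG[v], predicted_yaw_deg))

def prefetch(stream, viewport_id: int, start_frame: int, count: int):
    """Request high-resolution frames for one viewport; others stay low-res."""
    return [stream.request_frame(viewport_id, f, high_res=True)
            for f in range(start_frame, start_frame + count)]
```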
Abstract:
A wireless communication system for use with a head-mounted display (HMD) is provided, the HMD having a band for attachment to a head of a user. The wireless communication system includes an antenna disposed on a top side of the band of the HMD, the antenna configured to receive an RF signal from an RF transmitter, the RF signal including video data that is encoded in a compressed format. The wireless communication system further includes a receiver configured to decode the RF signal received through the antenna, and the HMD includes a display configured to render the video data.
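A minimal sketch of the receiver-side pipeline this abstract implies: packets arrive through the band-mounted antenna, the receiver decodes the compressed video payload, and the HMD display renders the frames. The antenna, decoder, and display interfaces here are hypothetical placeholders.
```python
def receive_loop(antenna, decoder, display):
    """Receive compressed video over RF, decode it, and render it on the HMD."""
    while True:
        packet = antenna.read_packet()   # compressed video payload from the RF link
        if packet is None:
            break                        # end of stream
        frame = decoder.decode(packet)   # e.g., a hardware video decode step
        if frame is not None:            # decoders may buffer before emitting a frame
            display.render(frame)
```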
Abstract:
A method is provided, including the following method operations: tracking a location of a head-mounted display (HMD) in a real space; rendering to the HMD a first view of a virtual reality (VR) space, the first view of the VR space being defined from a perspective determined by the location of the HMD in the real space; tracking a location of a portable device in the real space; rendering to the portable device a second view of the VR space, the second view of the VR space being defined from a perspective determined by the location of the portable device in the real space relative to the location of the HMD in the real space.
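To make the two-perspective rendering concrete, the sketch below derives a camera pose for each device from its tracked real-space location: the HMD view looks along the HMD's own forward direction, while the portable device's view is positioned at the device and oriented relative to the HMD. The look-at construction is standard; the tracker and renderer interfaces are assumed placeholders.
```python
import numpy as np

def look_at(eye: np.ndarray, target: np.ndarray,
            up=np.array([0.0, 1.0, 0.0])) -> np.ndarray:
    """Standard right-handed look-at view matrix."""
    f = target - eye
    f = f / np.linalg.norm(f)
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)
    u = np.cross(s, f)
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye
    return view

def render_both_views(tracker, renderer):
    hmd_pos = tracker.location("hmd")          # tracked real-space HMD position
    device_pos = tracker.location("portable")  # tracked real-space device position
    # First view: from the HMD's own perspective.
    renderer.render("hmd", look_at(hmd_pos, hmd_pos + tracker.forward("hmd")))
    # Second view: from the device's position relative to the HMD's location.
    renderer.render("portable", look_at(device_pos, hmd_pos))
```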
Abstract:
A method is provided, including the following method operations: receiving captured images of an interactive environment in which a head-mounted display (HMD) is disposed; receiving inertial data processed from at least one inertial sensor of the HMD; analyzing the captured images and the inertial data to determine a current and predicted future location of the HMD; using the predicted future location of the HMD to adjust a beamforming direction of an RF transceiver towards the predicted future location of the HMD; tracking a gaze of a user of the HMD; generating image data depicting a view of a virtual environment for the HMD, wherein regions of the view are differentially rendered; generating audio data representing sounds from the virtual environment, the audio data being configured to enable localization of the sounds by the user; and transmitting the image data and the audio data via the RF transceiver to the HMD.
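The tracking and beam-steering steps mirror the earlier abstract; the new element is audio configured for localization. One simple way to enable localization, sketched below, is to approximate the interaural time and level differences for a virtual source; the spherical-head (Woodworth) model and the constants are illustrative assumptions, not the patented method.
```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air
HEAD_RADIUS = 0.0875     # m, assumed average head radius

def interaural_cues(source_azimuth_rad: float) -> tuple[float, float]:
    """Return (itd_seconds, right_ear_gain) for a source at the given azimuth
    (0 = straight ahead, positive = to the user's right)."""
    # Woodworth approximation of the interaural time difference.
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (source_azimuth_rad + math.sin(source_azimuth_rad))
    # Crude interaural level difference: favor the ear nearer the source;
    # the left-ear gain would be 1.0 - right_gain.
    right_gain = 0.5 * (1.0 + math.sin(source_azimuth_rad))
    return itd, right_gain
```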
Abstract:
The performance of a player of a computer game is noted and the player accorded a latency handicap based thereon. The latency handicap is used to slow down play of the computer game, preferably only during times of high player activity. The latency handicap can be reduced over time or owing to improvement in the player's performance.
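A minimal sketch of that idea, under assumed thresholds: hold a per-player delay, apply it only while activity is high, and decay it over time (a performance-based reduction could adjust delay_ms the same way). The class, thresholds, and decay rate are illustrative assumptions.
```python
class LatencyHandicap:
    """Per-player latency handicap applied only during high activity."""

    def __init__(self, initial_ms: float, decay_ms_per_min: float = 1.0,
                 activity_threshold: float = 0.7):
        self.delay_ms = initial_ms
        self.decay = decay_ms_per_min
        self.threshold = activity_threshold

    def tick(self, minutes_elapsed: float) -> None:
        """Reduce the handicap gradually over time."""
        self.delay_ms = max(0.0, self.delay_ms - self.decay * minutes_elapsed)

    def effective_delay(self, activity_level: float) -> float:
        """Apply the delay only during periods of high player activity."""
        return self.delay_ms if activity_level >= self.threshold else 0.0
```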
Abstract:
A method is provided, including the following operations: receiving video of a first user; processing the video to identify signed communications in a first sign language made by the first user; translating the signed communications in the first sign language into signed communications in a second sign language; rendering an avatar of the first user performing the translated signed communications in the second sign language; and presenting the avatar on a display for viewing by a second user.
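The operations form a straight pipeline, sketched below. The recognizer, translator, avatar, and display components are hypothetical placeholders; in practice each stage would be a trained model or an animation system.
```python
def translate_signing(video_frames, recognizer, translator, avatar, display):
    # 1. Identify signed communications in the first sign language.
    source_signs = recognizer.recognize(video_frames)   # e.g., glosses in language A
    # 2. Translate into signed communications in the second sign language.
    target_signs = translator.translate(source_signs)   # e.g., glosses in language B
    # 3. Render an avatar performing the translated signs.
    animation = avatar.perform(target_signs)
    # 4. Present the avatar for viewing by the second user.
    display.show(animation)
```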
Abstract:
Methods and systems for interacting with an augmented reality application include providing an array of barometric pressure sensors within a housing of a wearable device to capture pressure variances detected from motion of one or more facial features that are proximate to the array of barometric pressure sensors. The pressure variances are analyzed to identify motion metrics related to the motion of the facial features. The motion metrics are used to derive metrics of the user's engagement with content of the augmented reality application presented to the user.
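An illustrative take on the analysis step: compute per-sensor pressure variance over a recent window and reduce the array's variances to simple motion metrics, then map those to an engagement estimate. The variance threshold and the engagement mapping are assumptions for illustration only.
```python
import statistics

def motion_metrics(samples_per_sensor: list[list[float]]) -> dict[str, float]:
    """samples_per_sensor: recent pressure readings for each sensor in the array."""
    variances = [statistics.pvariance(s) for s in samples_per_sensor if len(s) > 1]
    mean_var = sum(variances) / len(variances) if variances else 0.0
    return {
        "mean_variance": mean_var,
        "peak_variance": max(variances, default=0.0),
        "active_sensors": sum(v > 1e-4 for v in variances),  # assumed threshold
    }

def engagement_score(metrics: dict[str, float]) -> float:
    """Map facial-motion metrics to a 0..1 engagement estimate (assumed model)."""
    return min(1.0, metrics["mean_variance"] * 10.0 + 0.05 * metrics["active_sensors"])
```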
Abstract:
A method to identify positions of fingers of a hand is described. The method includes capturing images of a first hand using a plurality of cameras that are part of a wearable device. The wearable device is attached to a wrist of a second hand, and the plurality of cameras of the wearable device is disposed around the wearable device. The method includes repeating capturing of additional images of the first hand, the images and the additional images captured to produce a stream of captured image data during a session of presenting a virtual environment in a head-mounted display (HMD). The method includes sending the stream of captured image data to a computing device that is interfaced with the HMD. The computing device is configured to process the captured image data to identify changes in positions of the fingers of the first hand.
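The flow splits naturally into a capture-and-stream loop on the wearable and a processing loop on the computing device, as in the sketch below. The camera, link, and pose-estimator interfaces are hypothetical placeholders for the described components.
```python
def stream_session(cameras, link, session_active):
    """On the wearable: repeatedly capture the opposite hand and stream frames."""
    while session_active():
        # One image per camera disposed around the wearable band.
        frames = [cam.capture() for cam in cameras]
        link.send(frames)  # stream of captured image data to the computing device

def process_stream(link, pose_estimator, on_change):
    """On the computing device: identify changes in finger positions."""
    previous = None
    while (frames := link.receive()) is not None:
        fingers = pose_estimator.finger_positions(frames)
        if previous is not None and fingers != previous:
            on_change(previous, fingers)   # report the detected change
        previous = fingers
```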
Abstract:
Methods and systems are provided for processing feature similarity for a real-world space used for augmented reality (AR) gameplay. The method includes receiving captured sensor data from the real-world space used by a user for said AR gameplay of a game. The sensor data provides data to identify characteristics of physical objects in the real-world space. The method includes generating a user space score using the characteristics of the physical objects identified in the real-world space. The method includes comparing the user space score to a game space score that is predefined for the game. The comparing is used to produce a fit handicap for the AR gameplay of the game in the real-world space by the user. Adjustments to gameplay parameters may be made to compensate for or adjust the fit handicap.
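A hedged sketch of the scoring comparison: summarize detected physical-object characteristics into a user space score, compare it against the game's predefined score, and derive a fit handicap plus a gameplay adjustment. The object features, weights, and handicap formula are illustrative assumptions.
```python
GAME_SPACE_SCORE = 0.8   # predefined for the game (assumed value)

def user_space_score(objects: list[dict]) -> float:
    """objects: e.g. [{'flat_surface': 1.0, 'open_area_m2': 4.0}, ...] (assumed schema)."""
    surface = sum(o.get("flat_surface", 0.0) for o in objects)
    area = sum(o.get("open_area_m2", 0.0) for o in objects)
    return min(1.0, 0.1 * surface + 0.05 * area)

def fit_handicap(space_score: float, game_score: float = GAME_SPACE_SCORE) -> float:
    """Positive handicap means the room falls short of the game's needs."""
    return max(0.0, game_score - space_score)

def adjust_gameplay(params: dict, handicap: float) -> dict:
    """Compensate for the handicap, e.g. by shrinking the required play area."""
    params = dict(params)
    params["required_area_m2"] = params["required_area_m2"] * (1.0 - handicap)
    return params
```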