Abstract:
An example system includes a first computing device comprising a first graphics processing unit (GPU) implemented in circuitry, and a second computing device comprising a second GPU implemented in circuitry. The first GPU is configured to determine graphics primitives of a computer graphics scene that are visible from a camera viewpoint, generate a primitive atlas that includes data representing the graphics primitives that are visible from the camera viewpoint, and shade the visible graphics primitives in the primitive atlas to produce a shaded primitive atlas. The second GPU is configured to render an image using the shaded primitive atlas.
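As a rough, non-authoritative illustration of the first GPU's role (not part of the abstract itself), the Python sketch below culls primitives against a camera position, packs the survivors into a simple atlas keyed by primitive id, and applies a Lambertian shade. The backface-only culling test, the dictionary atlas layout, and the shading model are all illustrative assumptions.

```python
# A minimal sketch, assuming triangle primitives as (3, 3) numpy arrays;
# the culling, atlas layout, and shading are placeholders, not the
# patented method.
import numpy as np

def visible_primitives(triangles, camera_pos):
    """Keep triangles whose front face points toward the camera."""
    visible = []
    for tri in triangles:
        normal = np.cross(tri[1] - tri[0], tri[2] - tri[0])
        to_camera = camera_pos - tri[0]
        if np.dot(normal, to_camera) > 0:      # backface culling only
            visible.append(tri)
    return visible

def build_primitive_atlas(visible):
    """Pack visible triangles into a flat atlas keyed by primitive id."""
    return {pid: tri for pid, tri in enumerate(visible)}

def shade_atlas(atlas, light_dir, albedo=0.8):
    """Shade each atlas entry (Lambertian) to produce a shaded atlas."""
    shaded = {}
    for pid, tri in atlas.items():
        n = np.cross(tri[1] - tri[0], tri[2] - tri[0])
        n = n / np.linalg.norm(n)
        intensity = albedo * max(np.dot(n, -light_dir), 0.0)
        shaded[pid] = (tri, intensity)         # geometry + shading result
    return shaded
```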
Abstract:
An example system includes a first computing device comprising a first graphics processing unit (GPU) implemented in circuitry, and a second computing device comprising a second GPU implemented in circuitry. The first GPU is configured to perform a first portion of an image rendering process to generate intermediate graphics data and send the intermediate graphics data to the second computing device. The second GPU is configured to perform a second portion of the image rendering process to render an image from the intermediate graphics data. The first computing device may be a video game console, and the second computing device may be a virtual reality (VR) headset that warps the rendered image to produce a stereoscopic image pair.
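A minimal sketch of the split pipeline follows, assuming the intermediate graphics data is an ordinary serializable structure and standing in for the headset's warp with a toy per-eye horizontal shift. The names first_gpu_stage and second_gpu_stage are hypothetical.

```python
# A minimal sketch, assuming the "intermediate graphics data" fits in a
# dict; the warp here is a trivial horizontal shift standing in for a
# real stereoscopic reprojection.
import numpy as np

def first_gpu_stage(scene, camera):
    """Console side: run the first portion of rendering (hypothetical)."""
    intermediate = {"geometry": scene, "camera": camera}
    return intermediate                        # sent over the link in practice

def second_gpu_stage(intermediate, width=64, height=64):
    """Headset side: finish rendering, then warp into a stereo pair."""
    image = np.zeros((height, width))          # placeholder render target
    # ... rasterize intermediate["geometry"] into image ...
    left = np.roll(image, 2, axis=1)           # toy per-eye disparity shift
    right = np.roll(image, -2, axis=1)
    return left, right
```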
Abstract:
Disclosed are a system, apparatus, and method for multiple-client simultaneous localization and mapping. Tracking and mapping may be performed locally and independently by each of a plurality of clients. At configurable points in time, map data may be sent to a server for stitching and fusion. In response to successful stitching and fusion with one or more maps known to the server, updated position and orientation information relative to the server's maps may be sent back to the clients. Clients may update their local map data with the received server location data. Clients may receive additional map data from the server, which can be used for extending their maps. Clients may send queries to the server for 3D maps, and the queries may include metadata.
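The sketch below illustrates one possible shape of the client/server exchange, assuming poses are 4x4 homogeneous transforms and reducing "stitching and fusion" to a placeholder overlap check that returns an identity correction. The Server and Client classes and their methods are hypothetical names, not an API from the disclosure.

```python
# A minimal sketch, assuming maps are dicts of feature id -> 3D point
# and that a successful stitch yields a rigid correction transform.
import numpy as np

class Server:
    def __init__(self):
        self.global_map = {}                   # feature id -> 3D point

    def stitch(self, client_map, metadata=None):
        """Fuse a client map; return a correction transform on success."""
        shared = set(self.global_map) & set(client_map)
        self.global_map.update(client_map)     # fuse new features either way
        if len(shared) < 3:
            return None                        # not enough overlap to stitch
        # Placeholder alignment: identity correction once maps overlap.
        return np.eye(4)

    def query(self, region_ids, metadata=None):
        """Return server map data a client can use to extend its map."""
        return {fid: pt for fid, pt in self.global_map.items()
                if fid in region_ids}

class Client:
    def __init__(self, server):
        self.server = server
        self.local_map = {}
        self.pose = np.eye(4)

    def sync(self):
        """Send local map at a configurable point; apply server correction."""
        correction = self.server.stitch(self.local_map)
        if correction is not None:
            self.pose = correction @ self.pose
```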
Abstract:
A method for spatial interaction in Augmented Reality (AR) includes displaying an AR scene that includes an image of a real-world scene, a virtual target object, and a virtual cursor. A position of the virtual cursor is provided according to a first coordinate system within the AR scene. A user device tracks a pose of the user device relative to a user hand according to a second coordinate system. The second coordinate system is mapped to the first coordinate system to control movements of the virtual cursor. In a first mapping mode, virtual cursor movement is controlled to change a distance between the virtual cursor and the virtual target object. In a second mapping mode, virtual cursor movement is controlled to manipulate the virtual target object. User input is detected to control which of the first mapping mode or the second mapping mode is used.
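The following sketch shows how the two mapping modes might route the same tracked hand motion, assuming 3D positions as vectors. The mode names, the gain parameter, and the toggle mechanism are illustrative assumptions, not the claimed method.

```python
# A minimal sketch: the same hand_delta (measured in the device/hand
# coordinate system) either moves the cursor or manipulates the target,
# depending on the active mapping mode.
import numpy as np

def update_cursor(cursor, target, hand_delta, mode, gain=1.0):
    """Map tracked hand motion (second coordinate system) into the AR
    scene (first coordinate system) under the active mapping mode."""
    if mode == "approach":
        # First mode: move the cursor, changing cursor-target distance.
        return cursor + gain * hand_delta, target
    else:
        # Second mode: hand motion manipulates the target object instead.
        return cursor, target + gain * hand_delta

cursor = np.array([0.0, 0.0, 1.0])
target = np.array([0.0, 0.0, 2.0])
mode = "approach"                              # toggled by detected user input
cursor, target = update_cursor(cursor, target, np.array([0.0, 0.0, 0.1]), mode)
```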
Abstract:
A method, device, and apparatus for determining optical flow from a plurality of images is described and includes receiving a first image frame from a first plurality of images, where the first plurality of images has a first resolution and a first frame rate. A second image frame may be received from a second plurality of images, where the second plurality of images has a second resolution less than the first resolution and a second frame rate greater than the first frame rate. A first optical flow may be computed from the first image frame to the second image frame. Additionally, based at least in part on the first optical flow from the first image frame to the second image frame, a third image frame may be output as part of an output stream. The output stream may have a frame rate greater than or equal to the first frame rate, where the third image frame has a resolution greater than or equal to the second resolution.
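One way to picture the hybrid-stream idea is the sketch below, which estimates a single global translational flow between downsampled frames and warps the last high-resolution frame accordingly. Real systems would use dense optical flow; the brute-force search and stride-based downsampling here are purely illustrative.

```python
# A minimal sketch, assuming the low-res stream matches the high-res
# frame downsampled by an integer factor, and that flow is a single
# global (dy, dx) translation.
import numpy as np

def downsample(img, factor):
    return img[::factor, ::factor]

def estimate_global_flow(ref_lo, cur_lo, search=3):
    """Brute-force integer (dy, dx) minimizing SSD between small frames."""
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(ref_lo, dy, axis=0), dx, axis=1)
            err = np.sum((shifted - cur_lo) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def synthesize_frame(hi_frame, lo_frame, factor=4):
    """Warp the last high-res frame toward the newer low-res frame to
    produce an extra high-res output frame."""
    dy, dx = estimate_global_flow(downsample(hi_frame, factor), lo_frame)
    return np.roll(np.roll(hi_frame, dy * factor, axis=0), dx * factor, axis=1)
```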
Abstract:
Disclosed are a system, apparatus, and method for monocular visual simultaneous localization and mapping that handles general 6DOF and panorama camera movements. A 3D map of an environment containing features with finite or infinite depth observed in regular or panorama keyframes is received. The camera is tracked in 6DOF from finite, infinite, or mixed feature sets. Upon detection of a panorama camera movement toward unmapped scene regions, a reference panorama keyframe with infinite features is created and inserted into the 3D map. When the panorama camera movement extends further toward unmapped scene regions, the reference keyframe is extended with additional dependent panorama keyframes. Panorama keyframes are robustly localized in 6DOF with respect to finite 3D map features. Localized panorama keyframes contain 2D observations of infinite map features that are matched with 2D observations in other localized keyframes. The 2D-2D correspondences are triangulated, yielding new finite 3D map features.
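The final step, promoting infinite (direction-only) features to finite 3D points once keyframes are localized in 6DOF, can be illustrated with standard linear (DLT) triangulation, sketched below under the assumption that each localized keyframe supplies a 3x4 projection matrix. This is the textbook construction, not necessarily the disclosure's specific formulation.

```python
# A minimal sketch of DLT triangulation for one 2D-2D correspondence;
# P1 and P2 are 3x4 projection matrices of two localized keyframes,
# x1 and x2 the matched 2D observations in pixel coordinates.
import numpy as np

def triangulate(P1, x1, P2, x2):
    """Triangulate one 2D-2D correspondence into a finite 3D point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)                # least-squares null vector
    X = vt[-1]
    return X[:3] / X[3]                        # dehomogenize to 3D
```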