Abstract:
Methods, devices, and apparatuses are provided to facilitate positioning of an item of virtual content in an extended reality environment. For example, a placement position for an item of virtual content can be transmitted to one or more of a first device and a second device. The placement position can be based on correlated map data generated from first map data obtained from the first device and second map data obtained from the second device. In some examples, the first device can transmit the placement position to the second device.
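As an illustration of the correlation step described above, the following Python sketch aligns two devices' map points with a Kabsch rigid-transform fit and expresses a placement position in the shared frame. The abstract does not name an alignment algorithm, and it does not say how landmark correspondences are found; Kabsch, the assumption of pre-matched landmarks, and every name below are illustrative.

    import numpy as np

    def correlate_maps(points_a: np.ndarray, points_b: np.ndarray) -> np.ndarray:
        """Estimate a 4x4 rigid transform mapping device B's map frame into
        device A's frame from corresponding 3D landmarks (Kabsch)."""
        ca, cb = points_a.mean(axis=0), points_b.mean(axis=0)
        H = (points_b - cb).T @ (points_a - ca)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = ca - R @ cb
        return T

    def place_content(anchor_in_b: np.ndarray, T_b_to_a: np.ndarray) -> np.ndarray:
        """Express a placement position chosen in B's frame in the shared frame,
        ready to be transmitted to both devices."""
        p = np.append(anchor_in_b, 1.0)
        return (T_b_to_a @ p)[:3]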
Abstract:
A wearable display device is described that is connected to a host device. The wearable display device includes one or more sensors configured to generate eye pose data indicating a user's field of view, one or more displays, and one or more processors. The one or more processors are configured to output a representation of the eye pose data to the host device and extract one or more depth values for a rendered frame from depth data output by the host device. The rendered frame is generated using the eye pose data. The one or more processors are further configured to modify one or more pixel values of the rendered frame using the one or more depth values to generate a warped rendered frame and output, for display at the one or more displays, the warped rendered frame.
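A minimal sketch of the warp step, assuming a pinhole camera model and a 4x4 pose delta between the render-time and display-time eye poses; the abstract does not specify the warp, so the per-pixel forward reprojection below, and names like K and pose_delta, are assumptions rather than the described implementation.

    import numpy as np

    def warp_frame(frame, depth, K, K_inv, pose_delta):
        """Forward-warp each pixel of `frame` using its extracted depth value
        and the change in eye pose since the frame was rendered."""
        h, w = depth.shape
        ys, xs = np.mgrid[0:h, 0:w]
        pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
        rays = K_inv @ pix                      # back-project pixels to rays
        pts = rays * depth.reshape(1, -1)       # scale rays by per-pixel depth
        pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])
        moved = (pose_delta @ pts_h)[:3]        # apply the head-motion delta
        proj = K @ moved                        # project into the new view
        u = np.round(proj[0] / proj[2]).astype(int)
        v = np.round(proj[1] / proj[2]).astype(int)
        out = np.zeros_like(frame)
        ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        out[v[ok], u[ok]] = frame.reshape(-1, frame.shape[-1])[ok]
        return out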
Abstract:
Methods, systems, and devices for split rendering of multiple graphic layers are described. An extended reality (XR) system may include a processing device that generates and renders multiple graphic layers and a display device that displays the graphic layers to create a virtual environment. The processing device may divide the multiple graphic layers into sets of graphic layers and composite each set into a composite layer for transmission to the display device over a respective stream. Each set of graphic layers may include graphic layers of the same type that are consecutively ordered with respect to their Z orders and that have similar frame rates.
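The grouping rule stated above can be sketched as follows; the Layer fields, the frame-rate tolerance, and the treatment of "consecutively ordered" as runs in Z-sorted order are all assumptions for illustration.

    from dataclasses import dataclass

    @dataclass
    class Layer:
        z: int          # Z order (position in the composition stack)
        kind: str       # layer type, e.g. "opaque", "transparent", "overlay"
        fps: float      # layer update rate

    def group_layers(layers, fps_tol=1.0):
        """Split Z-ordered layers into runs of the same type with similar
        frame rates, each run destined for one composite layer/stream."""
        groups, run = [], []
        for layer in sorted(layers, key=lambda l: l.z):
            if run and (layer.kind != run[-1].kind
                        or abs(layer.fps - run[-1].fps) > fps_tol):
                groups.append(run)
                run = []
            run.append(layer)
        if run:
            groups.append(run)
        return groups

Each resulting run would then be flattened into a single composite layer and transmitted to the display device over its own stream.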
Abstract:
Methods, systems, and devices for image processing are described. A device may include a plurality of buffer components, each of which may receive pixel lines that may each be associated with a respective raw image. An arbitration component of the device may combine at least some of the pixel lines into one or more data packets. The arbitration component may pass, using an arbitration scheme such as a time division multiplexing scheme, the one or more data packets from the arbitration component to a shared image signal processor (ISP) of the device. The shared ISP may generate a respective processed image based at least in part on the one or more data packets. In some examples, the device may maintain a respective set of image statistics, registers, and the like for at least some of the raw images.
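An illustrative time-division-multiplexing arbiter in this spirit: pixel lines from several camera buffers are tagged with their source and handed to one shared ISP, which keeps per-camera state. The packet format, the round-robin slot order, and the ISP interface are assumptions, not the disclosed design.

    from collections import deque

    def tdm_arbitrate(buffers):
        """Yield (camera_id, pixel_line) packets, one buffer per time slot."""
        queues = [deque(b) for b in buffers]
        while any(queues):
            for cam_id, q in enumerate(queues):   # fixed slot per camera
                if q:
                    yield (cam_id, q.popleft())

    def shared_isp(packets, num_cameras):
        """Demultiplex packets by camera_id, keeping a per-camera set of
        statistics while assembling each processed image."""
        stats = [dict(lines=0) for _ in range(num_cameras)]
        images = [[] for _ in range(num_cameras)]
        for cam_id, line in packets:
            stats[cam_id]["lines"] += 1
            images[cam_id].append(process_line(line))
        return images, stats

    def process_line(line):
        return line  # placeholder for demosaic/denoise/etc.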
Abstract:
A method performed by an electronic device is described. The method includes receiving a first plurality of images from a first camera with a first field of view and a second plurality of images from a second camera with a second field of view. An overlapping region exists between the first field of view and the second field of view. The method also includes predicting a disparity of a moving object present in a first image of the first plurality of images. The moving object is not present in a corresponding second image of the second plurality of images. The method further includes determining warp vectors based on the predicted disparity. The method additionally includes combining an image from the first plurality of images with an image from the second plurality of images based on the determined warp vectors.
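A toy version of the idea, under loud assumptions: the abstract does not give the prediction model, so a constant-velocity extrapolation of past disparities stands in for it, and the "warp vectors" collapse to a single horizontal shift applied inside a given object mask.

    import numpy as np

    def predict_disparity(d_prev2: float, d_prev1: float) -> float:
        """Constant-velocity guess for the object's disparity in the frame
        where only one camera sees it (both inputs are past measurements)."""
        return d_prev1 + (d_prev1 - d_prev2)

    def combine(img_a, img_b, object_mask, disparity):
        """Warp img_a by the predicted disparity inside the object region,
        then take img_b elsewhere (seamline handling omitted)."""
        shift = int(round(disparity))
        warped = np.roll(img_a, shift, axis=1)
        return np.where(object_mask[..., None], warped, img_b)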
Abstract:
Certain aspects of the present disclosure relate to a method for compressed sensing (CS). CS is a signal processing concept wherein significantly fewer sensor measurements than suggested by the Shannon/Nyquist sampling theorem can be used to recover signals with arbitrarily fine resolution. In this disclosure, the CS framework is applied to sensor signal processing in order to support low-power robust sensors and reliable communication in Body Area Networks (BANs) for healthcare and fitness applications.
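A minimal worked example of the CS idea (not of the BAN pipeline itself): a k-sparse signal of length n is recovered from m << n random measurements with orthogonal matching pursuit, a standard greedy recovery algorithm chosen here for brevity.

    import numpy as np

    def omp(A, y, k):
        """Greedy recovery of a k-sparse x from y = A @ x (minimal OMP,
        no guard against re-selecting an index)."""
        residual, support = y.copy(), []
        for _ in range(k):
            j = int(np.argmax(np.abs(A.T @ residual)))
            support.append(j)
            x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ x_s
        x = np.zeros(A.shape[1])
        x[support] = x_s
        return x

    rng = np.random.default_rng(0)
    n, m, k = 256, 64, 5                  # far fewer measurements than samples
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_hat = omp(A, A @ x_true, k)
    print(np.allclose(x_hat, x_true, atol=1e-6))   # exact recovery here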
Abstract:
Generally described, aspects of the present disclosure relate to generation of an image representing a panned shot of an object by an image capture device. In one embodiment, a panned shot may be performed on a series of images of a scene. The series of images may include at least a subject object moving within the scene. Motion data of the subject object may be captured by comparing the subject object in a second image of the series of images to the subject object in a first image of the series of images. A background image is generated by implementing a blur process using the first image and the second image based on the motion data. A final image is generated by including the image of the subject object in the background image.
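A sketch of that pipeline, with simplifying assumptions throughout: a brute-force horizontal shift search stands in for the motion estimate, a directional box blur stands in for the blur process, and the subject mask is taken as given.

    import numpy as np

    def estimate_shift(patch1, patch2, max_shift=16):
        """Horizontal shift (in px) that best aligns two subject patches."""
        errs = [np.mean((np.roll(patch1, s, axis=1).astype(float)
                         - patch2) ** 2)
                for s in range(-max_shift, max_shift + 1)]
        return int(np.argmin(errs)) - max_shift

    def motion_blur_h(img, length):
        """Horizontal box blur approximating motion streaks of `length` px."""
        out = np.zeros_like(img, dtype=float)
        for s in range(length):
            out += np.roll(img, s, axis=1)
        return (out / max(length, 1)).astype(img.dtype)

    def panned_shot(frame1, frame2, subject_mask):
        dx = estimate_shift(frame1 * subject_mask[..., None],
                            frame2 * subject_mask[..., None])
        background = motion_blur_h(frame2, max(abs(dx), 1))
        # keep the subject sharp, streak the background along its motion
        return np.where(subject_mask[..., None], frame2, background)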
Abstract:
An electronic device and method capture multiple images of a scene of the real world at several zoom levels, the scene containing text of one or more sizes. The electronic device and method then extract one or more text regions from each of the multiple images, followed by analyzing an attribute that is relevant to optical character recognition (OCR) in one or more versions of a first text region as extracted from one or more of the multiple images. When the attribute has a value that meets a limit of OCR in a version of the first text region, that version of the first text region is provided as input to OCR.
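A sketch of the selection loop, assuming the attribute is text height in pixels and the OCR limit is a fixed minimum height; both the attribute and the threshold are illustrative, as is the run_ocr() call.

    MIN_TEXT_HEIGHT_PX = 24   # assumed OCR limit, not from the abstract

    def pick_ocr_version(region_versions):
        """region_versions: list of (image_crop, text_height_px), one per
        zoom level, all depicting the same physical text region."""
        for crop, height in sorted(region_versions, key=lambda v: v[1]):
            if height >= MIN_TEXT_HEIGHT_PX:
                return crop      # least-magnified version meeting the limit
        return None              # no zoom level made the text legible

    # text = run_ocr(pick_ocr_version(versions))  # hypothetical OCR call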
Abstract:
An electronic device and method identify a block of text in a portion of an image of the real world captured by a camera of a mobile device, slice sub-blocks from the block, identify characters in the sub-blocks that form a first sequence, and compare the first sequence to a predetermined set of sequences to identify a second sequence therein. The second sequence may be identified as recognized (as a modifier-absent word) when it is not associated with additional information. When the second sequence is associated with additional information, a check is made on pixels in the image, based on a test specified in the additional information. When the test is satisfied, a copy of the second sequence in combination with the modifier is identified as recognized (as a modifier-present word). Storing and using modifier information in addition to a set of character sequences enables recognition of words with or without modifiers.
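The lookup logic can be sketched as follows; the dictionary layout, the pixel test, and string concatenation as the "combination with the modifier" are assumptions standing in for whatever representation the device actually stores.

    def recognize_word(sequence, image, dictionary):
        """dictionary maps a base sequence to None (modifier-absent word)
        or to (pixel_test, modifier) when the word may carry a modifier."""
        if sequence not in dictionary:
            return None                      # not a known word
        extra = dictionary[sequence]
        if extra is None:
            return sequence                  # modifier-absent word
        pixel_test, modifier = extra
        if pixel_test(image):                # e.g. dark pixels above the line
            return sequence + modifier       # modifier-present word
        return sequence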