Abstract:
Systems and techniques are described herein for processing data. For instance, an apparatus for processing data is provided. The apparatus may include an image signal processor (ISP) configured to: receive image data and an indication of a region of interest (ROI) from an image sensor; determine image-processing settings for processing the image data based on the ROI; and process the image data based on the image-processing settings.
Abstract:
A method and a system for warping a rendered frame are disclosed. On a host device of a split-rendering system, the method includes generating the rendered frame based on head tracking information of a user. The method also includes identifying a region of interest (ROI) of the rendered frame. The method also includes generating metadata for a warping operation from the ROI. The method further includes transmitting the rendered frame and the metadata for a warping operation of the rendered frame. On a client device of the split-rendering system, the method includes transmitting the head tracking information of the user. The method also includes receiving the rendered frame and the metadata. The method further includes warping the rendered frame using the metadata and display pose information. The host device and the client device may be combined into an all-in-one head-mounted display.
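The client-side warping step described above can be illustrated with a highly simplified sketch. This is an assumption for illustration only, not the patented operation: the metadata is taken to be an ROI rectangle, and the pose difference is reduced to a horizontal pixel shift applied inside that ROI. All names (`warp`, `roi`, `dx`) are hypothetical.

```python
# Hypothetical sketch: warp only the ROI named by the host's metadata,
# shifting its pixels by dx columns (a stand-in for a pose-derived offset).
def warp(frame, roi, dx):
    top, left, h, w = roi
    out = [row[:] for row in frame]  # copy the rendered frame
    for r in range(top, top + h):
        for c in range(left, left + w):
            src = c - dx
            if left <= src < left + w:  # only sample from inside the ROI
                out[r][c] = frame[r][src]
    return out

frame = [[1, 2, 3],
         [4, 5, 6]]
warped = warp(frame, (0, 0, 2, 3), 1)  # shift the whole 2x3 ROI right by 1
```

A real reprojection would use the full display pose (rotation and translation) rather than a scalar shift; the sketch only shows where the ROI metadata enters the warp.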
Abstract:
Techniques and systems are provided for determining one or more camera settings. For example, an indication of a selection of an image quality metric for adjustment can be received, and a target image quality metric value for the selected image quality metric can be determined. A data point can be determined from a plurality of data points. The data point corresponds to a camera setting having an image quality metric value closest to the target image quality metric value.
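The selection step described above (finding the data point whose image quality metric value is closest to the target) can be sketched as a nearest-value search. The data layout and names below are illustrative assumptions, not from the source.

```python
# Hypothetical sketch: each data point pairs a camera setting with the
# image quality metric value it produces; pick the point whose metric
# value is closest to the target.
def closest_setting(data_points, target):
    """data_points: list of (camera_setting, metric_value) tuples."""
    return min(data_points, key=lambda p: abs(p[1] - target))

points = [("setting_a", 0.20), ("setting_b", 0.55), ("setting_c", 0.90)]
best = closest_setting(points, 0.60)  # -> ("setting_b", 0.55)
```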
Abstract:
Methods, devices, and apparatuses are provided to facilitate a positioning of an item of virtual content in an extended reality environment. For example, a placement position for an item of virtual content can be transmitted to one or more of a first device and a second device. The placement position can be based on correlated map data generated based on first map data obtained from the first device and second map data obtained from the second device. In some examples, the first device can transmit the placement position to the second device.
Abstract:
Disclosed embodiments pertain to a method on a Mobile Station (MS) for input of text for abugida writing systems. In some embodiments, the method may comprise obtaining a base character by performing Optical Character Recognition (OCR) on written user-input on the MS. A conjunct character may also be obtained by applying one or more functional or diacritical operators to the base character. The conjunct character may then be displayed.
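The operator-application step can be illustrated for Devanagari, one abugida script, where applying the virama (halant, U+094D) to an OCR-recognized base consonant and joining a following consonant yields a conjunct. This is a minimal sketch of that general idea, not the patented method.

```python
# Illustrative sketch: apply a functional operator (the virama) to a base
# character to form a conjunct with a following consonant.
VIRAMA = "\u094D"  # Devanagari halant; suppresses the inherent vowel

def conjunct(base, following):
    """Join a base consonant and a following consonant via the virama."""
    return base + VIRAMA + following

# ka (U+0915) + virama + ssa (U+0937) renders as the conjunct "ksha"
ksha = conjunct("\u0915", "\u0937")
```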
Abstract:
Systems and techniques are described for performing foveated sensing. In some aspects, a method (e.g., implemented by an image sensor) can include capturing sensor data for a frame associated with a scene, obtaining information corresponding to a region of interest (ROI) associated with the scene, generating a first portion (having a first resolution) of the frame corresponding to the ROI, generating a second portion of the frame having a second resolution, and outputting (e.g., to an image signal processor (ISP)) the first portion and the second portion. In some aspects, a method (e.g., implemented by an ISP) can include receiving, from an image sensor, sensor data for a frame associated with a scene, generating a first version (having a first resolution) of the frame based on an ROI associated with the scene, and generating a second version of the frame having a second resolution (lower than the first resolution).
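The two-portion output described above can be sketched with plain Python lists standing in for sensor data: the ROI is kept at full resolution while the rest of the frame is reduced (here by 2x2 averaging). The function names, the crop geometry, and the averaging kernel are all illustrative assumptions.

```python
# Hypothetical sketch of foveated output: a full-resolution ROI crop plus
# a lower-resolution version of the frame.
def crop_roi(frame, top, left, height, width):
    """Extract the ROI at the sensor's native resolution."""
    return [row[left:left + width] for row in frame[top:top + height]]

def downsample2x(frame):
    """Halve resolution by averaging non-overlapping 2x2 blocks."""
    out = []
    for r in range(0, len(frame) - 1, 2):
        out.append([
            (frame[r][c] + frame[r][c + 1] +
             frame[r + 1][c] + frame[r + 1][c + 1]) / 4.0
            for c in range(0, len(frame[r]) - 1, 2)
        ])
    return out

frame = [[float(r * 4 + c) for c in range(4)] for r in range(4)]
roi_portion = crop_roi(frame, 0, 0, 2, 2)  # first portion: full resolution
context_portion = downsample2x(frame)      # second portion: lower resolution
```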
Abstract:
Techniques and systems are provided for light estimation. In some examples, a system receives a plurality of frames associated with a scene. The plurality of frames includes a first frame and a second frame occurring after the first frame. The system determines, based on image data of the first frame, a first light estimate associated with the scene. The system also determines, based on image data of the second frame, a second light estimate associated with the scene. The system further generates an aggregate light estimate associated with the scene based on combining the second light estimate with at least the first light estimate.
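The aggregation step described above can be sketched as a running blend of per-frame light estimates. The abstract does not name a specific combination rule, so the exponential moving average below is an assumption for illustration.

```python
# Hypothetical sketch: fold each new frame's light estimate into a running
# aggregate. alpha weights the newest estimate; the rule is assumed, not
# taken from the source.
def aggregate(previous, current, alpha=0.5):
    """Blend the current light estimate into the running aggregate."""
    return (1 - alpha) * previous + alpha * current

estimate = 100.0                     # first frame's estimate (e.g., intensity)
estimate = aggregate(estimate, 120.0)  # second frame's estimate folded in
```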