Abstract:
Systems, apparatuses, and methods to relate images of words to a list of words are provided. A trellis-based word decoder analyzes a set of OCR characters and their probabilities using a forward pass across a forward trellis and a reverse pass across a reverse trellis. Multiple paths may result; however, the most likely path through the trellises has the highest probability over valid links. A link is valid when some dictionary word traverses it. The most likely path is compared with a list of words to find the word closest to the most likely path.
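The forward pass described above can be sketched in Python. This is a hypothetical illustration, not the patented decoder: it covers only the forward trellis, restricts links to character bigrams used by same-length dictionary words, and measures closeness by Hamming distance (the abstract does not specify a distance metric).

```python
def decode_word(char_probs, dictionary):
    """Find the most likely character path whose links (character bigrams
    at each position) are traversed by some dictionary word, then return
    the dictionary word closest to that path (Hamming distance)."""
    n = len(char_probs)
    words = [w for w in dictionary if len(w) == n]
    # A link (position i, char a -> char b) is valid if some
    # same-length dictionary word uses it.
    valid = set()
    for w in words:
        for i in range(len(w) - 1):
            valid.add((i, w[i], w[i + 1]))
    # Forward pass: best probability and path ending in each character.
    best = {c: (p, c) for c, p in char_probs[0].items()}
    for i in range(1, n):
        nxt = {}
        for c, p in char_probs[i].items():
            cands = [(bp * p, path + c) for prev, (bp, path) in best.items()
                     if (i - 1, prev, c) in valid]
            if cands:
                nxt[c] = max(cands)
        best = nxt
    path = max(best.values())[1]
    # Compare the path with the word list to pick the closest word.
    return min(words, key=lambda w: sum(a != b for a, b in zip(w, path)))
```

Note how the link-validity constraint can override per-character greedy choices: a high-probability character is discarded if no dictionary word traverses the resulting link.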
Abstract:
An attribute is computed based on pixel intensities in an image of the real world, and thereafter used to identify at least one input for processing the image to identify at least a first maximally stable extremal region (MSER) therein. The at least one input is one of (A) a parameter used in MSER processing or (B) a portion of the image to be subject to MSER processing. The attribute may be a variance of pixel intensities, or may be computed from a histogram of pixel intensities. The attribute may be used with a look-up table to identify the parameter(s) used in MSER processing. The attribute may be a stroke width of a second MSER of a subsampled version of the image. The attribute may be used in checking whether a portion of the image satisfies a predetermined test and, if so, including that portion in a region to be subject to MSER processing.
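Option (A) above, mapping an intensity attribute through a look-up table to MSER parameters, can be sketched as follows. The variance thresholds and the `delta`/`min_area` values are made-up placeholders, not values from the disclosure:

```python
def mser_params_from_variance(pixels):
    """Compute an intensity attribute (variance) and map it through a
    small look-up table to MSER processing parameters.
    All table entries below are illustrative assumptions."""
    n = len(pixels)
    mean = sum(pixels) / n
    variance = sum((p - mean) ** 2 for p in pixels) / n
    # Hypothetical table rows: (max variance, delta, min area fraction).
    table = [(500.0, 8, 0.001),
             (2000.0, 5, 0.0005),
             (float('inf'), 2, 0.0001)]
    for max_var, delta, min_area in table:
        if variance <= max_var:
            return {'delta': delta, 'min_area': min_area}
```

The intuition: a low-variance (flat) image can use a stricter stability threshold, while a busy, high-variance image gets a more permissive one.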
Abstract:
An electronic device and method may capture an image of an environment, followed by identification of blocks of connected components in the image. A test for overlap of spans may be made between a span of a selected block (e.g., one having a line of pixels) and a span of an adjacent block located above, below, to the left, or to the right of the selected block; when the test is satisfied, the two blocks are merged. Blocks may additionally be tested, e.g., for the relative heights of the two blocks and/or the aspect ratio of either or both blocks. Classification of a merged block as text or non-text may use attributes of the merged block, such as the location of a horizontal pixel line, the number of vertical pixel lines, and the numbers of black-white and white-black transitions in a subset of rows located below the horizontal pixel line.
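The span-overlap merge test can be sketched as below. The block representation (dicts of inclusive `x`/`y` pixel spans), the height-ratio threshold, and the merge details are illustrative assumptions:

```python
def spans_overlap(a, b):
    """1-D span overlap test; spans are inclusive (start, end) pairs."""
    return a[0] <= b[1] and b[0] <= a[1]

def maybe_merge(block_a, block_b, max_height_ratio=2.0):
    """Merge two adjacent blocks if a span of one overlaps a span of the
    other and their heights are comparable; return the merged bounding
    box, or None. The 2.0 ratio threshold is an assumed placeholder."""
    ha = block_a['y'][1] - block_a['y'][0] + 1
    hb = block_b['y'][1] - block_b['y'][0] + 1
    if max(ha, hb) > max_height_ratio * min(ha, hb):
        return None
    # Left/right neighbours need overlapping vertical (y) spans;
    # above/below neighbours need overlapping horizontal (x) spans.
    if spans_overlap(block_a['y'], block_b['y']) or \
       spans_overlap(block_a['x'], block_b['x']):
        return {'x': (min(block_a['x'][0], block_b['x'][0]),
                      max(block_a['x'][1], block_b['x'][1])),
                'y': (min(block_a['y'][0], block_b['y'][0]),
                      max(block_a['y'][1], block_b['y'][1]))}
    return None
```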
Abstract:
Methods, devices, and apparatuses are provided to facilitate a positioning of an item of virtual content in an extended reality environment. For example, a placement position for an item of virtual content can be transmitted to one or more of a first device and a second device. The placement position can be based on correlated map data generated based on first map data obtained from the first device and second map data obtained from the second device. In some examples, the first device can transmit the placement position to the second device.
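A heavily simplified sketch of deriving a placement position from correlated map data might look like this. The 2-D point maps, the known translation between device frames, and the centroid-plus-anchor placement rule are all assumptions for illustration; a real system would derive the transform by matching features between the two maps:

```python
def correlate_and_place(map_a, map_b, transform_b_to_a, anchor):
    """Express the second device's map in the first device's frame via a
    known 2-D translation, correlate the maps by pooling their points,
    and report a placement position offset from a shared anchor.
    All geometric conventions here are hypothetical."""
    tx, ty = transform_b_to_a
    map_b_in_a = [(x + tx, y + ty) for x, y in map_b]
    correlated = map_a + map_b_in_a
    cx = sum(x for x, _ in correlated) / len(correlated)
    cy = sum(y for _, y in correlated) / len(correlated)
    # The placement position, ready to transmit to either device.
    return (cx + anchor[0], cy + anchor[1])
```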
Abstract:
An example method of capturing a 360° field-of-view image includes capturing, with one or more processors, a first portion of a 360° field-of-view using a first camera module and capturing, with the one or more processors, a second portion of the 360° field-of-view using a second camera module. The method further includes determining, with the one or more processors, a target overlap region based on a disparity in a scene captured by the first portion and the second portion and causing, with the one or more processors, the first camera module, the second camera module, or both camera modules to reposition to a target camera setup based on the target overlap region. The method further includes capturing, with the one or more processors, the 360° field-of-view image with the first camera module and the second camera module arranged at the target camera setup.
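One hypothetical way to turn measured disparity into a target overlap region and a repositioning amount is sketched below; the linear gain, the clamping limits, and the pixels-per-degree mapping are invented for illustration and are not from the disclosure:

```python
def target_overlap_width(disparity_px, base_overlap_px=32,
                         gain=2.0, max_overlap_px=256):
    """Widen the stitching overlap between the two camera modules in
    proportion to the measured scene disparity, clamped to assumed
    sensor limits (all constants are placeholders)."""
    target = base_overlap_px + gain * disparity_px
    return min(max_overlap_px, max(base_overlap_px, int(target)))

def reposition_angle_deg(overlap_px, px_per_degree=20.0):
    """Convert the target overlap into an angular adjustment for the
    target camera setup, assuming a linear pixels-per-degree mapping."""
    return overlap_px / px_per_degree
```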
Abstract:
The present disclosure relates to methods and apparatus for graphics processing. In some aspects, the apparatus may receive, by a first device from a second device, first position information corresponding to a first orientation of the second device. The apparatus can also generate, by the first device, first graphical content based on the first position information. Further, the apparatus can generate, by the first device, motion information for warping the first graphical content. The apparatus can also encode, by the first device, the first graphical content. Additionally, the apparatus can provide, by the first device to the second device, the motion information and the encoded first graphical content.
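The first device's side of this exchange can be sketched as a small pipeline. The `RenderPacket` container and the three callables are hypothetical stand-ins, not the disclosed apparatus:

```python
from dataclasses import dataclass

@dataclass
class RenderPacket:
    """Hypothetical bundle the first device sends back: encoded
    graphical content plus motion information for warping."""
    encoded_content: bytes
    motion_vectors: list  # e.g. per-tile (dx, dy) warping hints

def produce_packet(position_info, render, estimate_motion, encode):
    """Render from the received position information, derive motion
    information for warping, encode the content, and bundle both.
    render/estimate_motion/encode stand in for a real renderer,
    motion estimator, and codec."""
    content = render(position_info)
    motion = estimate_motion(content)
    return RenderPacket(encoded_content=encode(content),
                        motion_vectors=motion)
```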
Abstract:
Certain aspects of the present disclosure relate to a wearable system including one or more wearable acquisition devices. Each acquisition device includes a sensor to capture samples of a biomedical signal and circuitry to process the samples for transmission to a mobile device. The samples are encoded for transmission and decoded at the mobile device to reconstruct the biomedical signal and, based on the reconstructed biomedical signal, provide output through a user interface of the mobile device. The wearable system includes at least one acquisition device for capturing an electrocardiogram (ECG) signal. Other biomedical signals, such as a photoplethysmogram (PPG) signal, may also be captured. The wearable system may comprise a Body Area Network (BAN).
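A minimal sketch of the encode-for-transmission/decode-to-reconstruct round trip, assuming simple delta coding (the abstract does not name a codec):

```python
def encode_samples(samples):
    """Delta-encode a sample stream on the acquisition device.
    Delta coding is an illustrative choice of codec."""
    out, prev = [], 0
    for s in samples:
        out.append(s - prev)
        prev = s
    return out

def decode_samples(deltas):
    """Reconstruct the biomedical signal on the mobile device by
    accumulating the received deltas."""
    out, acc = [], 0
    for d in deltas:
        acc += d
        out.append(acc)
    return out
```

Because slowly varying signals such as ECG produce small deltas, this kind of stream compresses well before radio transmission within a Body Area Network.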
Abstract:
Methods, devices, and apparatuses are provided to facilitate a positioning of an item of virtual content in an extended reality environment. For example, a first user may access the extended reality environment through a display of a mobile device, and in some examples, the methods may determine positions and orientations of the first user and a second user within the extended reality environment. The methods may also determine a position for placement of the item of virtual content in the extended reality environment based on the determined positions and orientations of the first user and the second user, and perform operations that insert the item of virtual content into the extended reality environment at the determined placement position.
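One illustrative placement heuristic consistent with the description, an assumption rather than the claimed method, places the item between the two users and nudges it along the average of their facing directions so both can see it:

```python
import math

def placement_position(pos_a, yaw_a_deg, pos_b, yaw_b_deg, offset=1.0):
    """Hypothetical placement rule using both users' 2-D positions and
    yaw orientations: midpoint of the two users, offset along their
    average facing direction. Constants and conventions are assumed."""
    mx = (pos_a[0] + pos_b[0]) / 2.0
    my = (pos_a[1] + pos_b[1]) / 2.0
    avg_yaw = math.radians((yaw_a_deg + yaw_b_deg) / 2.0)
    return (mx + offset * math.cos(avg_yaw),
            my + offset * math.sin(avg_yaw))
```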
Abstract:
Example techniques are described for image processing. Processing circuitry may warp the image content of a previous frame, based on the pose of the device when it requested the image content information of the previous frame and its pose when it requested the image content information of a current frame, to generate warped image content. The circuitry may then blend image content from the warped image content with image content of the current frame to generate an error-concealed frame. A display screen may display image content based on the error-concealed frame.
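A toy version of the warp-and-blend step can be sketched as follows, approximating the pose change between the two requests as a horizontal pixel shift (a simplifying assumption; real warps use full pose transforms):

```python
def conceal(prev_frame, cur_frame, pose_shift, alpha=0.5):
    """Warp the previous frame by a horizontal pixel shift derived from
    the pose difference, then alpha-blend it with the current frame to
    form an error-concealed frame. Frames are lists of rows of
    grayscale values; exposed pixels are filled with 0."""
    w = len(prev_frame[0])
    warped = []
    for row in prev_frame:
        if pose_shift >= 0:
            warped.append([0] * pose_shift + row[:w - pose_shift])
        else:
            warped.append(row[-pose_shift:] + [0] * (-pose_shift))
    return [[round(alpha * wv + (1 - alpha) * cv)
             for wv, cv in zip(wr, cr)]
            for wr, cr in zip(warped, cur_frame)]
```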