Abstract:
Embodiments disclosed facilitate robust, accurate, and reliable recovery of words and/or characters in the presence of non-uniform lighting and/or shadows. In some embodiments, a method to recover text from an image may comprise: expanding a Maximally Stable Extremal Region (MSER) in the image to obtain a neighborhood of the MSER, the neighborhood comprising a plurality of sub-blocks; thresholding a subset of the plurality of sub-blocks in the neighborhood, the subset comprising sub-blocks with text, wherein each sub-block in the subset is thresholded using a corresponding threshold associated with the sub-block; and obtaining a thresholded neighborhood.
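
The sketch below illustrates one way such per-sub-block thresholding could look in practice. It is a minimal sketch, not the disclosed embodiment: the expansion factor, the 16-pixel sub-block size, the use of Otsu's method as the per-sub-block threshold, and the use of overlap with the MSER mask as the "contains text" test are all assumptions made here for illustration.

# Illustrative sketch only. Assumes: `gray` is a 2-D uint8 NumPy array,
# `mser_box` is the (x, y, w, h) bounding box of a detected MSER, `mser_mask`
# is a full-image binary mask of the MSER's pixels, and a sub-block "contains
# text" when it overlaps that mask.
import cv2
import numpy as np

def threshold_mser_neighborhood(gray, mser_box, mser_mask,
                                expand=0.5, block=16):
    """Expand an MSER's bounding box into a neighborhood, split the
    neighborhood into sub-blocks, and Otsu-threshold only the sub-blocks
    that overlap the MSER (i.e., plausibly contain text)."""
    x, y, w, h = mser_box
    dx, dy = int(w * expand), int(h * expand)
    x0, y0 = max(x - dx, 0), max(y - dy, 0)
    x1 = min(x + w + dx, gray.shape[1])
    y1 = min(y + h + dy, gray.shape[0])

    neigh = gray[y0:y1, x0:x1]
    mask = mser_mask[y0:y1, x0:x1]            # nonzero where the MSER covers a pixel
    out = np.zeros_like(neigh)                # thresholded neighborhood

    for by in range(0, neigh.shape[0], block):
        for bx in range(0, neigh.shape[1], block):
            sub = neigh[by:by + block, bx:bx + block]
            sub_mask = mask[by:by + block, bx:bx + block]
            if sub_mask.any():                # sub-block overlaps the MSER
                # per-sub-block threshold: robust to shadows / uneven lighting
                _, binar = cv2.threshold(sub, 0, 255,
                                         cv2.THRESH_BINARY + cv2.THRESH_OTSU)
                out[by:by + block, bx:bx + block] = binar
    return out

Because each sub-block gets its own threshold, a shadow falling across part of the neighborhood does not drag a single global threshold away from the text in the unshadowed part.
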
Abstract:
Systems, apparatuses, and methods to relate images of words to a list of words are provided. A trellis-based word decoder analyzes a set of OCR characters and their probabilities using a forward pass across a forward trellis and a reverse pass across a reverse trellis. Multiple paths may result; however, the most likely path through the trellises has the highest probability over valid links. A link is valid if at least one dictionary word traverses it. The most likely path is then compared with a list of words to find the word closest to that path.
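
A minimal sketch of a dictionary-constrained trellis decode is given below. The function name, the use of a single forward Viterbi pass (a reverse pass over a right-to-left trellis would be built analogously), and the Hamming-distance comparison at the end are assumptions for illustration, not the claimed decoder.

# Illustrative sketch only; not the patented decoder.  Assumes dictionary
# words use only characters from `alphabet`.
import numpy as np

def decode_word(char_probs, alphabet, dictionary):
    """char_probs: (N, K) array of OCR probabilities for N character slots
    over K alphabet symbols.  A link (position i, char a -> char b) is
    'valid' only if some dictionary word of length N traverses it."""
    n = char_probs.shape[0]
    words = [w for w in dictionary if len(w) == n]
    if not words:
        return None
    idx = {c: i for i, c in enumerate(alphabet)}

    # valid links per position, derived from the dictionary
    links = [set() for _ in range(n - 1)]
    for w in words:
        for i in range(n - 1):
            links[i].add((idx[w[i]], idx[w[i + 1]]))

    # forward Viterbi pass over the trellis, restricted to valid links
    # (the reverse pass over a right-to-left trellis is built the same way)
    score = np.log(char_probs[0] + 1e-12)
    back = np.zeros((n, len(alphabet)), dtype=int)
    for i in range(1, n):
        new = np.full(len(alphabet), -np.inf)
        for (a, b) in links[i - 1]:
            s = score[a] + np.log(char_probs[i, b] + 1e-12)
            if s > new[b]:
                new[b], back[i, b] = s, a
        score = new

    # backtrack the most likely valid path
    best = [int(np.argmax(score))]
    for i in range(n - 1, 0, -1):
        best.append(back[i, best[-1]])
    path = ''.join(alphabet[j] for j in reversed(best))

    # finally, pick the dictionary word closest to the decoded path
    return min(words, key=lambda w: sum(a != b for a, b in zip(w, path)))

Restricting transitions to valid links prunes character sequences that no dictionary word could produce before the final closest-word comparison is made.
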
Abstract:
A difference in intensities of a pair of pixels in an image is repeatedly compared to a threshold, with the pair of pixels being separated by at least one pixel (“skipped pixel”). When the threshold is found to be exceeded, a selected position of a selected pixel in the pair, and at least one additional position adjacent to the selected position are added to a set of positions. The comparing and adding are performed multiple times to generate multiple such sets, each set identifying a region in the image, e.g. an MSER. Sets of positions, identifying regions whose attributes satisfy a test, are merged to obtain a merged set. Intensities of pixels identified in the merged set are used to generate binary values for the region, followed by classification of the region as text/non-text. Regions classified as text are supplied to an optical character recognition (OCR) system.
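
The sketch below covers only the compare-and-add step described above, under assumed simplifications: pairs are taken along rows two columns apart (one skipped pixel between them), the darker pixel of each pair is taken as the "selected" pixel, and the threshold is fixed. Grouping the collected positions into per-region sets, merging sets, binarization, and text/non-text classification are not shown.

# Minimal sketch of the pair-comparison step; thresholds and the choice of
# horizontal pairs are illustrative assumptions.
import numpy as np

def collect_edge_positions(gray, thresh=40):
    """Compare pixels two columns apart (one 'skipped' pixel between them);
    when the intensity difference exceeds `thresh`, record the position of
    the darker pixel of the pair plus the adjacent skipped position."""
    positions = set()
    rows, cols = gray.shape
    g = gray.astype(np.int16)                 # avoid uint8 wrap-around
    for y in range(rows):
        for x in range(cols - 2):
            if abs(g[y, x] - g[y, x + 2]) > thresh:
                # select the darker pixel (text is assumed darker here)
                sel = x if g[y, x] < g[y, x + 2] else x + 2
                positions.add((y, sel))
                positions.add((y, x + 1))     # adjacent skipped pixel
    return positions
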
Abstract:
An electronic device and method receive (for example, from a memory) a grayscale image of a real-world scene captured by a camera of a mobile device. The electronic device and method also receive the color image from which the grayscale image is generated, wherein each color pixel is stored as a tuple of multiple components. The electronic device and method determine a new intensity for at least one grayscale pixel in the grayscale image, based on at least one component of a tuple of a color pixel located in correspondence to the at least one grayscale pixel. The determination may be done conditionally, by checking whether a local variance of intensities is below a predetermined threshold in a subset of grayscale pixels located adjacent to the at least one grayscale pixel, and by selecting the component that provides the most local variance of intensities.
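
A minimal sketch of the conditional replacement is shown below. The 5x5 window, the variance threshold, and the direct per-pixel substitution are illustrative assumptions; only the overall idea (replace locally flat gray intensities with the color component showing the most local variance) follows the description above.

# Sketch under assumptions: a 5x5 local window and illustrative thresholds.
import numpy as np

def enhance_gray_from_color(gray, color, var_thresh=25.0, win=2):
    """Where the grayscale image is locally flat (variance below a threshold),
    replace the gray intensity with the color component (R, G, or B) whose
    local variance is largest, so text that differs only in hue survives."""
    out = gray.astype(np.float32).copy()
    rows, cols = gray.shape
    for y in range(win, rows - win):
        for x in range(win, cols - win):
            patch = gray[y - win:y + win + 1, x - win:x + win + 1]
            if patch.var() < var_thresh:      # gray patch is nearly flat
                cpatch = color[y - win:y + win + 1, x - win:x + win + 1, :]
                # pick the component with the most local variance
                c = int(np.argmax([cpatch[..., k].var() for k in range(3)]))
                out[y, x] = color[y, x, c]
    return out.astype(np.uint8)
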
Abstract:
An electronic device and method identify a block of text in a portion of an image of the real world captured by a camera of a mobile device, slice sub-blocks from the block, identify characters in the sub-blocks that form a first sequence, and compare the first sequence to a predetermined set of sequences to identify a second sequence therein. The second sequence may be identified as recognized (as a modifier-absent word) when it is not associated with additional information. When the second sequence is associated with additional information, a check is made on pixels in the image, based on a test specified in the additional information. When the test is satisfied, a copy of the second sequence in combination with the modifier is identified as recognized (as a modifier-present word). Storage and use of modifier information, in addition to a set of sequences of characters, enables recognition of words with or without modifiers.
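
The sketch below is a hypothetical illustration of the modifier check. The lexicon entries, the "accent above the last character" pixel test, and the way the modifier is combined with the second sequence are stand-ins for the "additional information" referred to above, not the disclosed format.

# Hypothetical sketch: lexicon contents and the pixel test are illustrative.
import numpy as np

# each entry: base sequence -> (modifier, pixel test) or None
LEXICON = {
    "resume": ("é", "accent_above_last_char"),   # resume vs. resumé
    "cafe":   ("é", "accent_above_last_char"),
    "table":  None,                              # modifier-absent word
}

def recognize(first_sequence, block_pixels, char_boxes):
    """first_sequence: characters decoded from the sliced sub-blocks.
    block_pixels: binarized text block.  char_boxes: (x0, x1) column spans
    of each sliced sub-block, used by the pixel test."""
    if first_sequence not in LEXICON:
        return None
    info = LEXICON[first_sequence]
    if info is None:
        return first_sequence                    # modifier-absent word
    modifier, test = info
    if test == "accent_above_last_char":
        x0, x1 = char_boxes[-1]
        # check for ink in the strip above the last character's sub-block
        strip = block_pixels[: block_pixels.shape[0] // 4, x0:x1]
        if strip.any():
            return first_sequence[:-1] + modifier  # modifier-present word
    return first_sequence                          # test failed: plain word
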
Abstract:
An attribute is computed based on pixel intensities in an image of the real world, and thereafter used to identify at least one input for processing the image to identify at least a first maximally stable extremal region (MSER) therein. The at least one input is one of (A) a parameter used in MSER processing or (B) a portion of the image to be subject to MSER processing. The attribute may be a variance of pixel intensities, or computed from a histogram of pixel intensities. The attribute may be used with a look-up table, to identify parameter(s) used in MSER processing. The attribute may be a stroke width of a second MSER of a subsampled version of the image. The attribute may be used in checking whether a portion of the image satisfies a predetermined test, and if so including the portion in a region to be subject to MSER processing.
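
A minimal sketch of attribute-driven parameter selection is shown below, assuming the attribute is the variance of pixel intensities, the look-up table values are illustrative, and OpenCV's MSER implementation stands in for the MSER stage.

# Sketch under assumptions: the variance bounds and delta values in the
# look-up table are illustrative only.
import cv2
import numpy as np

# look-up table: (upper bound on intensity variance) -> MSER delta parameter
DELTA_LUT = [(500.0, 2), (2000.0, 5), (float("inf"), 8)]

def mser_with_adaptive_delta(gray):
    """Compute an attribute of the image (here, the variance of pixel
    intensities) and use it to select the MSER 'delta' parameter from a
    look-up table before running MSER on the image."""
    attribute = float(np.var(gray))           # could also be derived from a histogram
    delta = next(d for bound, d in DELTA_LUT if attribute < bound)
    mser = cv2.MSER_create(delta)             # first positional argument is delta
    regions, bboxes = mser.detectRegions(gray)
    return regions, bboxes, delta

Low-contrast images (small intensity variance) get a smaller delta so faint extremal regions are still detected, while high-contrast images get a larger delta to suppress spurious regions.
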