Abstract:
Providing improved card art for display comprises receiving, by one or more computing devices, an image of a card and performing an image recognition algorithm on the image. The computing device identifies images represented on the card image and compares the identified images to an image database. The computing device determines a standard card art image associated with the identified image based at least in part on the comparison and associates the standard card art image with an account of a user, the account being associated with the card in the image. The computing device displays the standard card art image as a representation of the account.
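A minimal sketch of the card-art lookup described above, in Python, using a simple perceptual-hash comparison; the function names, the Pillow-based hashing, and the in-memory art_database are illustrative assumptions, not the disclosed implementation.

    from PIL import Image

    def average_hash(path, size=8):
        # Downscale to a grayscale thumbnail and threshold against the mean
        # to get a coarse 64-bit fingerprint of the card image.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        return [1 if p > mean else 0 for p in pixels]

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def match_card_art(card_image_path, art_database):
        # Compare the captured card image against each standard card art
        # fingerprint and return the identifier of the closest match,
        # which would then be associated with the user's account.
        query = average_hash(card_image_path)
        best_id, _ = min(art_database.items(),
                         key=lambda item: hamming(query, item[1]))
        return best_id

    # art_database maps card-art identifiers to precomputed fingerprints, e.g.
    # {"issuer_gold_card": average_hash("gold_card_art.png"), ...}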
Abstract:
Comparing extracted card data from a continuous scan comprises receiving, by one or more computing devices, a digital scan of a card; obtaining a plurality of images of the card from the digital scan of the physical card; performing an optical character recognition algorithm on each of the plurality of images; comparing results of the application of the optical character recognition algorithm for each of the plurality of images; determining if a configured threshold of the results for each of the plurality of images match each other; and verifying the results when the results for each of the plurality of images match each other. A threshold confidence level for the extracted card data can be employed to determine the accuracy of the extraction. Data is further extracted from blended images and three-dimensional models of the card. Embossed text and holograms in the images may be used to prevent fraud.
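The frame-agreement check described above can be sketched as follows, assuming the per-frame OCR results arrive as strings; the function name, the majority-vote scheme, and the default threshold are assumptions for illustration.

    from collections import Counter

    def verify_scan(frame_results, threshold=0.8):
        # frame_results: OCR output for each image taken from the continuous scan.
        # Accept the extraction only if a configured fraction of frames agree.
        counts = Counter(result for result in frame_results if result)
        if not counts:
            return None
        value, votes = counts.most_common(1)[0]
        if votes / len(frame_results) >= threshold:
            return value   # verified card data
        return None        # agreement below the configured threshold

    # Example: four of five frames agree, so the result is verified.
    frames = ["4111111111111111"] * 4 + ["4111111111111171"]
    print(verify_scan(frames))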
Abstract:
An optimal recognition of handwritten input received as a touch input from a user may be selected by applying both a delayed stroke recognizer and an overlapping recognizer to the handwritten input. A score may be generated for both the delayed stroke recognition and the overlapping recognition, and the recognition corresponding to the higher score may be presented as the overall recognition.
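A sketch of selecting between the two recognitions by score, assuming each recognizer is a callable that returns a (text, score) pair; both recognizers and the score scale are placeholders, not part of the original disclosure.

    def recognize(strokes, delayed_stroke_recognizer, overlapping_recognizer):
        # Run both recognizers on the same handwritten input, score each
        # candidate, and present the higher-scoring recognition overall.
        delayed_text, delayed_score = delayed_stroke_recognizer(strokes)
        overlap_text, overlap_score = overlapping_recognizer(strokes)
        if delayed_score >= overlap_score:
            return delayed_text
        return overlap_text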
Abstract:
Embodiments herein provide computer-implemented techniques for allowing a user computing device to extract financial card information using optical character recognition (“OCR”). Extracting financial card information may be improved by applying various classifiers and other transformations to the image data. For example, applying a linear classifier to the image to determine digit locations before applying the OCR algorithm allows the user computing device to use less processing capacity to extract accurate card data. The OCR application may train a classifier to use the wear patterns of a card to improve OCR algorithm performance. The OCR application may apply a linear classifier and then a nonlinear classifier to improve the performance and the accuracy of the OCR algorithm. The OCR application uses the known digit patterns used by typical credit and debit cards to improve the accuracy of the OCR algorithm.
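One way to read the two-stage approach is a cheap linear scorer that proposes digit windows for the heavier OCR or nonlinear stage, plus a checksum test over the known digit pattern of typical card numbers. In the sketch below, the sliding-window parameters, the untrained weights, and the choice of the Luhn checksum as the "known digit pattern" are assumptions.

    import numpy as np

    def candidate_digit_windows(gray, weights, bias, win=(32, 24), stride=4, thresh=0.0):
        # Stage 1: slide a cheap linear classifier (dot product plus bias) over
        # the grayscale card image and keep only windows that look like digits.
        # The expensive OCR / nonlinear classifier then runs on these windows only.
        h, w = gray.shape
        wh, ww = win
        hits = []
        for y in range(0, h - wh + 1, stride):
            for x in range(0, w - ww + 1, stride):
                patch = gray[y:y + wh, x:x + ww].ravel()
                if patch @ weights + bias > thresh:
                    hits.append((x, y))
        return hits

    def luhn_valid(card_number):
        # Reject OCR readings that cannot be a real card number by checking the
        # checksum shared by typical credit and debit card numbers.
        digits = [int(d) for d in card_number]
        total = 0
        for i, d in enumerate(reversed(digits)):
            if i % 2 == 1:
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    # `weights`/`bias` stand in for a trained linear model (e.g. fit on embossed
    # digit patches, including worn examples); luhn_valid("4111111111111111") is True.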
Abstract:
Techniques are provided for segmenting an input by cut point classification and training a cut classifier. A method may include receiving, by a computerized text recognition system, an input in a script. A heuristic may be applied to the input to insert multiple cut points. For each of the cut points, a probability may be generated and the probability may indicate a likelihood that the cut point is correct. Multiple segments of the input may be selected, and the segments may be defined by cut points having a probability over a threshold. Next, the segments of the input may be provided to a character recognizer. Additionally, a method may include training a cut classifier using a machine learning technique, based on multiple text training examples, to determine the correctness of a cut point in an input.
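A sketch of the selection step, assuming the trained cut classifier is a callable that returns the probability that a given cut point is correct; the names and the 0.5 threshold are illustrative.

    def segment(strokes, heuristic_cut_points, cut_classifier, threshold=0.5):
        # Keep only cut points whose probability of being correct exceeds the
        # threshold; the surviving cuts bound the segments that are passed on
        # to the character recognizer.
        kept = sorted(c for c in heuristic_cut_points
                      if cut_classifier(strokes, c) >= threshold)
        bounds = [0] + kept + [len(strokes)]
        return [strokes[a:b] for a, b in zip(bounds, bounds[1:])]

The cut classifier itself could, for example, be a probabilistic model trained with a machine learning technique on labeled text examples, as the abstract describes; its internals are not assumed here.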
Abstract:
A set S is initialized. Initially, S is empty; as the disclosed process is performed, items are added to it. It may contain one or more samples (e.g., items) from each class. One or more labeled samples for one or more classes may be obtained. A series of operations may be performed iteratively, until a stopping criterion is reached, to obtain the reduced set. For each class of the one or more classes, a point may be generated based on at least one sample in the class having a nearest neighbor in the set S with a different class label than the sample. The point may be added to the set S. The process may be repeated unless a stopping criterion is reached. A nearest neighbor for a submitted point in the set S may be identified and a candidate nearest neighbor may be output for the submitted point.
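This reads like a variant of condensed nearest neighbor. The sketch below assumes samples are numeric feature tuples, that the "generated point" is simply the misclassified sample itself, and that all misclassified samples are added per round, which simplifies the per-class generation step described above.

    import math

    def nearest(point, S):
        # Return the (sample, label) pair in S whose sample is closest to `point`.
        return min(S, key=lambda entry: math.dist(point, entry[0]))

    def build_reduced_set(labeled_samples, max_rounds=10):
        # Seed S with one sample per class, then iteratively add samples whose
        # current nearest neighbor in S carries a different class label.
        S, seen = [], set()
        for x, y in labeled_samples:
            if y not in seen:
                S.append((x, y))
                seen.add(y)
        for _ in range(max_rounds):
            added = False
            for x, y in labeled_samples:
                if nearest(x, S)[1] != y:
                    S.append((x, y))
                    added = True
            if not added:
                break  # stopping criterion: S classifies every training sample correctly
        return S

    def classify(point, S):
        # Candidate nearest-neighbor label for a submitted point, looked up in S.
        return nearest(point, S)[1]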