Abstract:
This disclosure provides a method and system for creating an educational assessment using an image processing system. According to an exemplary method, a preexisting printed assessment is scanned to produce an image file, and the image processing system generates and executes a search query based on a user-selected preexisting question. The executed query searches a Data Warehouse/Repository (DW/R) to retrieve one or more predefined questions and associated metadata. The retrieved questions and metadata are used to create the educational assessment.
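A minimal sketch of how such a query might be built and run against a question repository. The QuestionRepository class, its search method, and the metadata fields are illustrative assumptions, not the disclosed implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    subject: str
    metadata: dict = field(default_factory=dict)

class QuestionRepository:
    """Hypothetical stand-in for the Data Warehouse/Repository (DW/R)."""
    def __init__(self, questions):
        self.questions = questions

    def search(self, keywords, subject=None):
        # Naive keyword match; a real DW/R would use indexed full-text search.
        hits = []
        for q in self.questions:
            if any(k.lower() in q.text.lower() for k in keywords):
                if subject is None or q.subject == subject:
                    hits.append(q)
        return hits

def build_assessment(selected_question_text, repository):
    # Derive query terms from the user-selected question extracted from the scanned image.
    keywords = [w for w in selected_question_text.split() if len(w) > 3]
    retrieved = repository.search(keywords)
    # Assemble the new assessment from the retrieved questions and their metadata.
    return [(q.text, q.metadata) for q in retrieved]
```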
Abstract:
A store profile generation system includes a mobile base and an image capture assembly mounted on the base. The assembly includes at least one image capture device for acquiring images of product display units in a retail environment. A control unit acquires the images captured by the at least one image capture device at a sequence of locations of the mobile base in the retail environment. The control unit extracts product-related data from the acquired images and generates a store profile indicating the locations of products and their associated tags throughout the retail environment, based on the extracted product-related data. The store profile can be used to generate new product labels for a sale, ordered so that a person can match them to the appropriate locations in a single pass through the store.
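A minimal sketch of how product detections might be aggregated into a store profile as the mobile base visits a sequence of locations. The BaseLocation structure and the extract_product_data helper are illustrative assumptions, not the system's actual interfaces.

```python
from dataclasses import dataclass

@dataclass
class BaseLocation:
    aisle: int
    offset_m: float  # distance along the aisle at capture time

def extract_product_data(image):
    """Placeholder for the control unit's label/tag extraction (returns product IDs)."""
    raise NotImplementedError

def build_store_profile(captures):
    """captures: iterable of (BaseLocation, image) pairs from one traversal."""
    profile = {}
    for location, image in captures:
        for product_id in extract_product_data(image):
            # Map each product seen in the image to where the base was at capture time.
            profile[product_id] = (location.aisle, location.offset_m)
    return profile
```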
Abstract:
The present disclosure relates to systems and methods for use in a retail store. An example system includes a mobile base, a printer, an image capture subsystem on the mobile base and coupled to the printer, and a control subsystem coupled to the printer and to the image capture subsystem. The image capture subsystem includes at least one image capture device and at least one image processor; the image capture device is configured to obtain images of items in the retail store, and the image processor is configured to derive item identification data from the images of items. The control subsystem is configured to receive information identifying items requiring signage, acquire item identification data from the image capture subsystem, determine, based on that information and on the item identification data, which items require signage, and direct the printer to print signage for those items.
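A minimal sketch of the control subsystem's matching-and-print logic under assumed data shapes. The PrinterQueue class and the item record format are hypothetical stand-ins for illustration.

```python
def items_needing_signage(signage_requests, observed_items):
    """signage_requests: item IDs flagged for new signage (e.g. an upcoming sale).
    observed_items: item identification data derived from shelf images."""
    observed_ids = {item["item_id"] for item in observed_items}
    # Only print signage for requested items that were actually observed in the store.
    return [item_id for item_id in signage_requests if item_id in observed_ids]

class PrinterQueue:
    """Hypothetical wrapper around the on-board printer."""
    def print_sign(self, item_id):
        print(f"printing sign for {item_id}")

def run(signage_requests, observed_items, printer):
    for item_id in items_needing_signage(signage_requests, observed_items):
        printer.print_sign(item_id)
```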
Abstract:
This disclosure provides an image processing method and system for recognizing barcodes and/or product labels. According to an exemplary embodiment, the method uses a multifaceted detection process that combines image enhancement of a candidate barcode region with other product label information associated with that region to identify a product label, where the candidate barcode region includes a non-readable barcode. According to one exemplary application, a store profile is generated from the identified product labels, each of which is associated with the location of a product within the store.
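A minimal sketch of a decode-then-fall-back flow for this kind of multifaceted detection. The decode_barcode and read_label_text helpers are hypothetical wrappers (e.g. around a barcode library and an OCR engine); only the OpenCV calls are real APIs, and the Otsu-binarization enhancement is an illustrative choice.

```python
import cv2

def decode_barcode(region):
    """Hypothetical wrapper around a barcode decoder; returns a string or None."""
    raise NotImplementedError

def read_label_text(region):
    """Hypothetical wrapper around an OCR engine for price/label text."""
    raise NotImplementedError

def identify_label(image, barcode_roi, label_roi):
    """ROIs are (y0, y1, x0, x1) pixel bounds within the shelf image."""
    y0, y1, x0, x1 = barcode_roi
    region = image[y0:y1, x0:x1]
    code = decode_barcode(region)
    if code is None:
        # Enhance the candidate barcode region (grayscale + Otsu binarization) and retry.
        gray = cv2.cvtColor(region, cv2.COLOR_BGR2GRAY)
        _, enhanced = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        code = decode_barcode(enhanced)
    if code is None:
        # Barcode still non-readable: fall back to other product label information.
        ly0, ly1, lx0, lx1 = label_roi
        return read_label_text(image[ly0:ly1, lx0:lx1])
    return code
```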
Abstract:
This disclosure eliminates or reduces the impact of glare in printed information tag recognition applications using single- and multi-pose external illumination coupled with intelligent processing. A shelf imager can acquire shelf images for printed information tag localization and recognition. An external illuminator can provide at least one illumination condition/pose for shelf image acquisition in addition to the lighting associated with the enclosed environment. A glare region of interest (ROI) detector can analyze all or a portion of the acquired shelf images for glare to determine whether additional images need to be acquired using different illumination conditions provided by the single- or multi-pose external illuminator, or whether all or a portion of the acquired images can be analyzed by a printed information tag locator and recognizer. The printed information tag locator and recognizer can analyze all or a portion of the acquired images to localize and recognize the data printed on the printed information tags.
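A minimal sketch of a glare ROI check based on the fraction of near-saturated pixels, assuming 8-bit grayscale images. The threshold values and the re-acquisition policy are illustrative assumptions, not the disclosed detector.

```python
import numpy as np

def glare_fraction(gray_roi, saturation_level=250):
    """Fraction of near-saturated pixels in a region of an 8-bit grayscale image."""
    return float(np.mean(gray_roi >= saturation_level))

def regions_needing_reacquisition(image, tag_rois, max_glare_fraction=0.05):
    """Return the tag regions whose glare level calls for another illumination pose."""
    flagged = []
    for (y0, y1, x0, x1) in tag_rois:
        if glare_fraction(image[y0:y1, x0:x1]) > max_glare_fraction:
            flagged.append((y0, y1, x0, x1))
    return flagged

# Usage sketch: if any regions are flagged, trigger the external illuminator's next
# pose and re-image; otherwise pass the image to the tag locator and recognizer.
```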
Abstract:
Methods, devices, and systems replace the solid lines of user-fillable areas of a print job with patterned lines and then print the print job with the patterned lines to produce user-fillable pre-printed forms, using a printing device. These methods, devices, and systems also scan at least one of the user-fillable pre-printed forms bearing user markings to produce a scan, using an optical scanner. Further, such methods, devices, and systems produce an altered scan by removing only the patterned lines from the scan, leaving the user markings in the altered scan, using an image processor. These methods, devices, and systems can then identify user-supplied characters by performing automated character recognition on the user markings in the altered scan, using the image processor, and output the user-supplied characters from the image processor.
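A minimal sketch of removing a known dash-patterned line from a binarized scan while preserving user markings. The pattern period and the row-matching heuristic are illustrative assumptions; a production system would also handle skew and vertical lines.

```python
import numpy as np

def remove_patterned_lines(binary, period=8, duty=4, min_on=0.9, max_off=0.1):
    """binary: 2-D uint8 array, 1 = ink, 0 = background.
    A row is treated as a pre-printed patterned line if its ink closely follows
    the known dash pattern: on for `duty` pixels out of every `period` pixels."""
    h, w = binary.shape
    template = (np.arange(w) % period) < duty     # expected dash positions
    cleaned = binary.copy()
    for y in range(h):
        row = binary[y] > 0
        on_ratio = row[template].mean()           # ink where dashes should be
        off_ratio = row[~template].mean()         # ink in the gaps between dashes
        if on_ratio > min_on and off_ratio < max_off:
            # Clear only the dash positions so user strokes crossing the line survive.
            cleaned[y, template] = 0
    return cleaned
```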
Abstract:
A configuration system generates a calibration target to be printed, the target including a set of machine-readable and visually identifiable landmarks and associated location-encoding marks which encode known locations of the landmarks. A plurality of test images of the printed calibration target is acquired by the system from an image capture assembly. Positions of the landmarks in the acquired test images and the location-encoding marks in the acquired test images are detected by the system. The system decodes the locations of the landmarks from the detected location-encoding marks and spatially characterizes the image capture assembly, based on the detected positions of the landmarks in the acquired test images and their decoded known locations.
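A minimal sketch of the spatial characterization step, assuming landmark detections and decoded target locations are already available as point lists. cv2.findHomography is a real OpenCV call; detect_landmarks and decode_locations are hypothetical placeholders, and a homography is only one possible characterization model.

```python
import numpy as np
import cv2

def detect_landmarks(test_image):
    """Hypothetical detector: returns landmark pixel positions and the raw
    location-encoding marks found next to them."""
    raise NotImplementedError

def decode_locations(encoding_marks):
    """Hypothetical decoder: maps each encoding mark to the landmark's known
    physical location on the printed calibration target (e.g. in millimetres)."""
    raise NotImplementedError

def characterize_camera(test_image):
    pixel_points, marks = detect_landmarks(test_image)
    target_points = decode_locations(marks)
    # Homography mapping printed-target coordinates to image pixels; it captures
    # how the image capture assembly views the target's plane.
    H, _ = cv2.findHomography(np.float32(target_points), np.float32(pixel_points), cv2.RANSAC)
    return H
```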
Abstract:
A store profile generation system includes a mobile base and an image capture assembly mounted on the base. The assembly includes at least one image capture device for acquiring images of product display units in a product facility, with product labels that include product-related data being associated with the product display units. A control unit acquires the images captured by the at least one image capture device at a sequence of locations of the mobile base in the product facility. The control unit extracts the product-related data from the acquired images and constructs a store profile indicating the locations of the product labels throughout the product facility, based on the extracted product-related data. The store profile can be used to generate new product labels for a sale, ordered so that a person can match them to the appropriate locations in a single pass through the store.
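A minimal sketch of ordering the new sale labels for a single pass through the store using the profile's aisle/offset locations. The serpentine traversal (reversing direction on alternating aisles) is an illustrative assumption about what a convenient single-pass order looks like.

```python
def order_labels_for_single_pass(store_profile, sale_product_ids):
    """store_profile: product_id -> (aisle_number, offset_m along the aisle).
    Returns sale product IDs sorted so a worker walks each aisle once,
    reversing direction on every other aisle (a serpentine route)."""
    located = [(store_profile[pid], pid) for pid in sale_product_ids if pid in store_profile]

    def walk_key(entry):
        (aisle, offset), _ = entry
        # Walk odd aisles in reverse so consecutive aisles connect at the same end.
        return (aisle, -offset if aisle % 2 else offset)

    return [pid for _, pid in sorted(located, key=walk_key)]
```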
Abstract:
A system for identifying one or more fillable fields of an electronic form may include an electronic device and a computer-readable storage medium that includes one or more programming instructions. The programming instructions are configured to instruct the electronic device to receive an electronic form, identify fillable field candidates of the electronic form, and determine, for each fillable field candidate, whether the candidate is a fillable field. The system updates metadata associated with the electronic form by applying a sequencing framework to only the fillable fields: obtaining position information for each fillable field that indicates the position of the field on the form, sorting the fillable fields based on the position information to form a sequence of fillable fields, assigning to each fillable field a designator that indicates its position in the sequence, and storing the designator.
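A minimal sketch of the sequencing step, assuming each confirmed fillable field carries a bounding-box position. The reading-order sort (top-to-bottom, then left-to-right) and the metadata layout are illustrative assumptions rather than the disclosed sequencing framework.

```python
from dataclasses import dataclass, field

@dataclass
class FillableField:
    name: str
    x: float          # left edge of the field's bounding box on the form
    y: float          # top edge of the field's bounding box on the form
    metadata: dict = field(default_factory=dict)

def apply_sequencing(fillable_fields, row_tolerance=5.0):
    """Sort fields in reading order and store a sequence designator in each
    field's metadata, updating the form's metadata in place."""
    ordered = sorted(fillable_fields, key=lambda f: (round(f.y / row_tolerance), f.x))
    for index, fld in enumerate(ordered, start=1):
        fld.metadata["sequence_designator"] = index
    return ordered
```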