Abstract:
Systems and methods for automating an image rejection process. Features including texture, spatial structure, and image quality characteristics can be extracted from one or more images to train a classifier. Features can be calculated with respect to a test image for submission of the features to the classifier, given an operating point corresponding to a desired false positive rate. An output can be generated from the classifier as a confidence value corresponding to a likelihood of, for example: a license plate being absent in the image, the license plate being unreadable, or the license plate being obstructed. The confidence value can be compared against a threshold to determine if the image(s) should be removed from a human review pipeline, thereby reducing the number of images requiring human review.
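The rejection decision described above reduces to thresholding the classifier's confidence, with the threshold chosen at an operating point that bounds the false positive rate. A minimal Python sketch, assuming the threshold is selected from classifier scores on a validation set of readable-plate images (function names and the rank-based selection are illustrative assumptions, not the disclosed implementation):

```python
def threshold_for_fpr(scores_on_readable, target_fpr):
    """Pick a rejection threshold so that at most target_fpr of
    readable-plate images (whose rejection would be a false positive)
    score at or above it. Assumes distinct validation scores."""
    s = sorted(scores_on_readable)              # ascending
    k = int(target_fpr * len(s))                # tolerated false positives
    return s[len(s) - k] if k > 0 else s[-1] + 1.0

def should_remove(confidence, threshold):
    """Remove the image from the human-review pipeline when the
    classifier is confident no readable plate is present."""
    return confidence >= threshold
```

With ten validation scores and a 20% target rate, exactly the top two scores fall at or above the returned threshold.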
Abstract:
Methods and systems for tag recognition in captured images. A candidate region can be localized from regions of interest with respect to a tag and a tag number shown in the regions of interest within a side image of a vehicle. A number of confidence levels can then be calculated with respect to each digit recognized as a result of an optical character recognition operation performed with respect to the tag number. Optimal candidates within the candidate region can be determined for the tag number based on individual character confidence levels among the confidence levels. The optimal candidates can then be validated against a pool of valid tag numbers using prior appearance probabilities, and data indicative of the most probable tag to be detected can be returned to improve image recognition accuracy.
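The validation step can be illustrated by scoring each valid tag number as the product of its per-character OCR confidences and its prior appearance probability, then returning the highest-scoring tag. A sketch under that assumption (the data structures and scoring rule are hypothetical, not taken from the disclosure):

```python
def best_valid_tag(char_confidences, valid_tags, priors):
    """char_confidences: one dict per character position, mapping a
    character to its OCR confidence. valid_tags: pool of known-valid
    tag numbers. priors: prior appearance probability per tag.
    Returns the most probable valid tag under a naive product score."""
    def score(tag):
        if len(tag) != len(char_confidences):
            return 0.0
        p = priors.get(tag, 0.0)
        for pos, ch in enumerate(tag):
            p *= char_confidences[pos].get(ch, 0.0)
        return p
    return max(valid_tags, key=score)
```

Note how a strong prior can overturn a weak OCR read, which is the point of validating against the pool of valid tag numbers.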
Abstract:
Methods and systems for localizing numbers and characters in captured images. A side image of a vehicle captured by one or more cameras can be preprocessed to determine a region of interest. A confidence value can be calculated for a series of windows of different sizes and aspect ratios within the regions of interest, indicating the likelihood that each window contains a structure of interest. Highest confidence candidate regions can then be identified with respect to the regions of interest and at least one region adjacent to the highest confidence candidate regions. An OCR operation can then be performed in the adjacent region. An identifier can then be returned from the adjacent region in order to localize numbers and characters in the side image of the vehicle.
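The window search can be sketched as an exhaustive scan over sizes and aspect ratios, keeping the highest-confidence window and deriving an adjacent region for OCR. The scoring function stands in for the trained structure-of-interest classifier, and the "region to the right" layout is an assumption for illustration:

```python
def best_window(score_fn, image_w, image_h, sizes, aspects, stride=8):
    """Slide windows of several sizes/aspect ratios over the ROI and
    return the highest-confidence (x, y, w, h). score_fn(x, y, w, h)
    is a placeholder for the trained classifier's confidence."""
    best, best_s = None, float("-inf")
    for size in sizes:
        for ar in aspects:
            w, h = int(size * ar), size
            for y in range(0, image_h - h + 1, stride):
                for x in range(0, image_w - w + 1, stride):
                    s = score_fn(x, y, w, h)
                    if s > best_s:
                        best, best_s = (x, y, w, h), s
    return best, best_s

def adjacent_region(win, image_w):
    """Region immediately to the right of the best window, where the
    identifier is assumed to appear (hypothetical layout)."""
    x, y, w, h = win
    return (min(x + w, image_w), y, w, h)
```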
Abstract:
A video sequence can be continuously acquired at a predetermined frame rate and resolution by an image capturing unit installed at a location. A video frame can be extracted from the video sequence when a vehicle is detected at an optimal position for license plate recognition by detecting a blob corresponding to the vehicle and a virtual line on an image plane. The video frame can be pruned to eliminate false positives and multiple frames with respect to the same vehicle before transmitting the frame via a network. A license plate detection/localization can be performed on the extracted video frame to identify a sub-region of the video frame that is most likely to contain a license plate. A license plate recognition operation can then be performed and an overall confidence assigned to the license plate recognition result.
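The virtual-line trigger and the pruning of multiple frames from the same vehicle can be sketched with a per-frame blob centroid, assuming top-to-bottom travel in the image plane and a minimum frame gap between triggers (both assumptions for illustration):

```python
def crossed_line(prev_y, curr_y, line_y):
    """True when the blob centroid crosses the virtual line between
    two consecutive frames (downward travel assumed)."""
    return prev_y < line_y <= curr_y

def extract_frames(centroid_ys, line_y, min_gap=10):
    """Return indices of frames where a crossing occurs, pruning
    triggers closer than min_gap frames apart, which are assumed to
    come from the same vehicle."""
    hits, last = [], -min_gap
    for i in range(1, len(centroid_ys)):
        if crossed_line(centroid_ys[i - 1], centroid_ys[i], line_y) \
                and i - last >= min_gap:
            hits.append(i)
            last = i
    return hits
```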
Abstract:
This disclosure provides a method and system for automated sequencing of vehicles in side-by-side drive-thru configurations via appearance-based classification. According to an exemplary embodiment, a computer-implemented method of automated sequencing of vehicles in a side-by-side drive-thru comprises: a) an image capturing device capturing video of a merge-point area associated with multiple lanes of traffic merging; b) detecting in the video a vehicle as it traverses the merge-point area; c) classifying the detected vehicle associated with traversing the merge-point area as coming from one of the merging lanes; and d) aggregating the vehicle classifications performed in step c) to generate a merge sequence of detected vehicles.
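Step d) above amounts to ordering the per-vehicle lane classifications by the time each vehicle traversed the merge point. A minimal sketch, assuming each detection carries a timestamp and per-lane appearance-classifier scores (both hypothetical data shapes):

```python
def merge_sequence(detections):
    """detections: list of (timestamp, lane_scores) pairs, one per
    vehicle seen at the merge point, where lane_scores maps a lane id
    to the appearance classifier's score. Returns the lane labels in
    the order the vehicles crossed the merge point."""
    ordered = sorted(detections, key=lambda d: d[0])
    return [max(scores, key=scores.get) for _, scores in ordered]
```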
Abstract:
Methods and devices acquire images using a stereo camera or camera network aimed at a first location. The first location comprises multiple parallel primary lanes merging into a reduced number of at least one secondary lane, and moving items within the primary lanes initiate transactions while in the primary lanes and complete the transactions while in the secondary lane. Such methods and devices calculate distances of the moving items from the camera to identify in which of the primary lanes each of the moving items was located before merging into the secondary lane. These methods and devices then order the transactions in a merge order corresponding to a sequence in which the moving items entered the secondary lane from the primary lanes. Also, the methods and devices output the transactions in the merge order.
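The lane identification and transaction ordering can be illustrated by mapping each measured stereo distance to the nearest calibrated lane center, then emitting each lane's pending transactions in first-in-first-out order as items enter the secondary lane. Lane names, distances, and the FIFO assumption are illustrative:

```python
def lane_from_distance(distance_m, lane_centers):
    """Assign a moving item to the primary lane whose calibrated
    center distance from the camera (lane_centers: lane -> meters)
    is closest to the measured stereo depth."""
    return min(lane_centers, key=lambda l: abs(lane_centers[l] - distance_m))

def order_transactions(items, pending, lane_centers):
    """items: (entry_time_into_secondary_lane, distance_m) per item.
    pending: lane -> FIFO list of open transactions for that lane.
    Outputs the transactions in the merge order."""
    out = []
    for _, dist in sorted(items):
        lane = lane_from_distance(dist, lane_centers)
        out.append(pending[lane].pop(0))
    return out
```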
Abstract:
Methods and systems for bootstrapping an OCR engine for license plate recognition. One or more OCR engines can be trained utilizing purely synthetically generated characters. A subset of classifiers requiring augmentation with real examples, along with how many real examples are required for each, can be identified. The OCR engine can then be deployed to the field with constraints on automation based on this analysis to operate in a “bootstrapping” period wherein some characters are automatically recognized while others are sent for human review. The previously determined number of real examples required for augmenting the subset of classifiers can be collected. Each classifier in the identified subset can then be retrained as the required number of real examples becomes available.
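The bootstrapping period can be sketched as routing logic: characters whose classifiers are already reliable on synthetic data are automated, the rest go to human review, and a classifier is flagged for retraining once its quota of real examples has been collected. Function names and data shapes are illustrative assumptions:

```python
def route_characters(recognized, reliable_classes):
    """recognized: (character, confidence) pairs from the OCR engine.
    Characters in reliable_classes are automated; the rest are sent
    for human review, whose labels become real training examples."""
    auto, review = [], []
    for ch, conf in recognized:
        (auto if ch in reliable_classes else review).append((ch, conf))
    return auto, review

def ready_to_retrain(collected_counts, required_counts):
    """Classes whose previously determined quota of real examples
    (required_counts) has been met by collection so far."""
    return [c for c, need in required_counts.items()
            if collected_counts.get(c, 0) >= need]
```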
Abstract:
A method for detecting a vehicle running a stop signal positioned at an intersection includes acquiring a sequence of frames from at least one video camera monitoring an intersection controlled by the stop signal. The method includes defining a first region of interest (ROI) including a road region located before the intersection on the image plane. The method includes searching the first ROI for a candidate violating vehicle. In response to detecting the candidate violating vehicle, the method includes tracking at least one trajectory of the detected candidate violating vehicle across a number of frames. The method includes classifying the candidate violating vehicle as either a violating vehicle or a non-violating vehicle based on the at least one trajectory.
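The trajectory classification can be illustrated with a simplified rule in place of a trained classifier: a candidate is non-violating if its speed drops to near zero while still inside the first ROI, and violating otherwise. The per-frame position/speed representation and the threshold are assumptions for illustration:

```python
def classify_trajectory(positions, speeds, stop_zone_end, stop_speed=0.5):
    """positions: distance along the lane per frame; speeds: per-frame
    speed estimates for the tracked candidate. Returns 'non-violating'
    if the vehicle effectively stops before leaving the first ROI
    (position <= stop_zone_end), else 'violating'."""
    for p, s in zip(positions, speeds):
        if p <= stop_zone_end and s <= stop_speed:
            return "non-violating"
    return "violating"
```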
Abstract:
Methods, systems, and processor-readable media for video anomaly detection based upon a sparsity model. A video input can be received and two or more diverse descriptors of an event can be computed from the video input. The descriptors can be combined to form an event matrix. A sparse reconstruction of the event matrix can be performed with respect to an overcomplete dictionary of training events represented by the diverse descriptors. A step can then be performed to determine if the event is anomalous by computing an outlier rejection measure on the sparse reconstruction.
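The sparse reconstruction and outlier rejection can be sketched with greedy orthogonal matching pursuit and a simple concentration-based measure: if the recovered coefficients spread across many training-event classes instead of concentrating on one, the event is likely anomalous. This is a generic stand-in (OMP plus a sparsity-concentration-style index), not the specific measure of the disclosure:

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: approximate y as a sparse
    combination of at most k dictionary atoms (columns of D)."""
    residual, idx = y.copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in idx:
            idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x[idx] = coef
    return x

def outlier_measure(x, labels):
    """Concentration-based rejection measure: labels assign each atom
    to a training-event class; the measure is high (near 1) when the
    coefficient energy spreads evenly across classes, i.e. the event
    resembles no single known class and is likely anomalous."""
    classes = set(labels)
    energy = {c: np.sum(np.abs(x[[i for i, l in enumerate(labels) if l == c]]))
              for c in classes}
    total = sum(energy.values()) + 1e-12
    return 1.0 - max(energy.values()) / total
```

With an identity dictionary, an event matching one class gives a measure near 0, while an event split evenly between two classes gives 0.5.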