Abstract:
Methods and systems for enhancing the accuracy of license plate state identification in an ALPR (Automated License Plate Recognition) system. This is accomplished through use of individual character-by-character image-based classifiers that are trained to distinguish between the fonts for different states. At runtime, the OCR result for the license plate code can be used to determine which character in the plate would provide the highest discriminatory power for arbitrating between candidate state results. This classifier is then applied to the individual character image to provide a final selection of the estimated state/jurisdiction for the plate.
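The character-selection step described above can be sketched as follows; the discriminability scores and the table of glyphs are illustrative stand-ins, not values from the disclosure.

```python
# Hypothetical sketch of character selection for state arbitration.
# The discriminability table (how differently two candidate state fonts
# render each glyph) is illustrative only.

def best_arbitration_index(plate_code, discriminability):
    """Return the index of the character with the highest discriminatory
    power between the candidate states."""
    return max(range(len(plate_code)),
               key=lambda i: discriminability.get(plate_code[i], 0.0))

# Example: the two candidate fonts differ most on '0', so the classifier
# trained for that glyph would be applied to the character image at
# that index to make the final state selection.
scores = {'0': 0.9, '1': 0.7, 'A': 0.2, 'B': 0.1}
print(best_arbitration_index("AB01", scores))  # -> 2 (the '0')
```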
Abstract:
Methods and systems for tag recognition in captured images. A candidate region can be localized from regions of interest with respect to a tag and a tag number shown in the regions of interest within a side image of a vehicle. A number of confidence levels can then be calculated with respect to each digit recognized as a result of an optical character recognition operation performed with respect to the tag number. Optimal candidates within the candidate region can be determined for the tag number based on individual character confidence levels among the confidence levels. The optimal candidates can then be validated against a pool of valid tag numbers using prior appearance probabilities and returned data indicative of the most probable tag to be detected, thereby improving image recognition accuracy.
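A minimal sketch of the candidate-ranking step, assuming per-character OCR confidences and prior appearance probabilities are available; all tag numbers, confidences, and priors below are invented for illustration.

```python
from math import prod

def rank_candidates(candidates, valid_pool, priors):
    """candidates maps a tag-number string to its per-character OCR
    confidences. Keep only candidates in the pool of valid tag numbers,
    then rank them by the product of character confidences weighted by
    the tag's prior appearance probability."""
    scored = [(prod(confs) * priors.get(tag, 0.0), tag)
              for tag, confs in candidates.items() if tag in valid_pool]
    return [tag for _, tag in sorted(scored, reverse=True)]

candidates = {"T1234": [0.9, 0.8, 0.9, 0.95, 0.9],   # high confidence
              "T1284": [0.9, 0.8, 0.7, 0.95, 0.9],   # one weak character
              "X9999": [0.99] * 5}                   # not a valid tag
print(rank_candidates(candidates, {"T1234", "T1284"},
                      {"T1234": 0.6, "T1284": 0.4}))
# -> ['T1234', 'T1284']; the invalid tag is filtered out entirely
```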
Abstract:
A method for detecting a vehicle running a stop signal positioned at an intersection includes acquiring a sequence of frames from at least one video camera monitoring an intersection being signaled by the stop signal. The method includes defining a first region of interest (ROI) including a road region located before the intersection on the image plane. The method includes searching the first ROI for a candidate violating vehicle. In response to detecting the candidate violating vehicle, the method includes tracking at least one trajectory of the detected candidate violating vehicle across a number of frames. The method includes classifying the candidate violating vehicle as belonging to one of a violating vehicle and a non-violating vehicle based on the at least one trajectory.
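The trajectory-based classification step can be illustrated with a simple one-dimensional sketch; the stop-line position, speed threshold, and the stopped-vehicle heuristic are assumptions, not the patented method.

```python
def classify_trajectory(positions, stop_line_y, stop_speed=1.0):
    """positions: per-frame y-coordinates of the tracked candidate on the
    image plane (y grows toward the intersection). The vehicle is
    classified as violating if its trajectory crosses the stop line while
    the frame-to-frame displacement never falls below the stop threshold
    (a crude proxy for coming to a full stop)."""
    steps = list(zip(positions, positions[1:]))
    crossed = any(p1 <= stop_line_y < p2 for p1, p2 in steps)
    stopped = any(abs(p2 - p1) < stop_speed for p1, p2 in steps)
    return "violating" if crossed and not stopped else "non-violating"

print(classify_trajectory([0, 5, 10, 15, 20], stop_line_y=12))
# -> violating (crossed the line at constant speed)
print(classify_trajectory([0, 5, 10, 10.2, 15, 20], stop_line_y=12))
# -> non-violating (paused just before the line)
```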
Abstract:
A method for detecting a vehicle running a stop signal includes acquiring at least two evidentiary images of a candidate violating vehicle captured from at least one camera monitoring an intersection. The method includes extracting feature points in each of the at least two evidentiary images. The method includes computing feature descriptors for each of the extracted feature points. The method includes determining a correspondence between feature points having matching feature descriptors at different locations in the at least two evidentiary images. The method includes extracting at least one attribute for each correspondence. The method includes determining if the candidate violating vehicle is in violation of running the stop signal using the extracted attribute.
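The correspondence and attribute-extraction steps can be sketched with toy descriptors; the brute-force matcher, the displacement attribute, and the decision threshold are illustrative stand-ins for whatever descriptors and rules an implementation would use.

```python
def match_features(desc_a, desc_b):
    """Brute-force nearest-neighbor matching of feature descriptors
    (a stand-in for e.g. SIFT/ORB matching): each descriptor in image A
    is paired with the closest descriptor, by squared L2 distance, in B."""
    matches = []
    for i, da in enumerate(desc_a):
        j = min(range(len(desc_b)),
                key=lambda k: sum((x - y) ** 2 for x, y in zip(da, desc_b[k])))
        matches.append((i, j))
    return matches

def in_violation(points_a, points_b, matches, min_mean_disp=4.0):
    """Per-correspondence attribute: displacement of the matched point
    between the two evidentiary images. A mean displacement above the
    (illustrative) threshold suggests the vehicle kept moving."""
    disps = [((points_b[j][0] - points_a[i][0]) ** 2 +
              (points_b[j][1] - points_a[i][1]) ** 2) ** 0.5
             for i, j in matches]
    return sum(disps) / len(disps) > min_mean_disp

m = match_features([[1, 0], [0, 1]], [[0, 1], [1, 0]])
print(m)  # [(0, 1), (1, 0)]
print(in_violation([(0, 0), (0, 0)], [(0, 0), (10, 0)], m))  # True
```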
Abstract:
A method for updating an event sequence includes acquiring video data of a queue area from at least one image source; searching the frames for subjects located at least near a region of interest (ROI) of defined start points in the video data; tracking a movement of each detected subject through the queue area over a subsequent series of frames; using the tracking, determining if a location of a tracked subject reaches a predefined merge point where multiple queues in the queue area converge into a single queue lane; in response to the tracked subject reaching the predefined merge point, computing an observed sequence of where the tracked subject places among other subjects approaching an end-event point; and, updating a sequence of end-events to match the observed sequence of subjects in the single queue lane.
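The final sequence-update step can be sketched as a reordering of pending end-events; the subject IDs and event records below are invented for illustration.

```python
def update_end_events(end_events, observed_sequence):
    """Reorder pending end-events so that their order matches the
    observed sequence in which tracked subjects reached the merge point.
    Subjects without a pending event are simply skipped."""
    by_subject = {e["subject"]: e for e in end_events}
    return [by_subject[s] for s in observed_sequence if s in by_subject]

events = [{"subject": "A", "order_no": 101},
          {"subject": "B", "order_no": 102}]
# Subject B was observed reaching the merge point before subject A,
# so B's end-event is moved ahead of A's:
print(update_end_events(events, ["B", "A"]))
```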
Abstract:
A system and method of monitoring a region of interest comprises obtaining visual data comprising image frames of the region of interest over a period of time, analyzing individual subjects within the region of interest, the analyzing including at least one of tracking movement of individual subjects over time within the region of interest or extracting an appearance attribute of the individual subjects, and defining a group to include individual subjects having at least one of similar movement profiles or similar appearance attributes. The tracking movement includes detecting at least one of a trajectory of an individual subject within the region of interest, a dwell of an individual subject in at least one location within the region of interest, or an entrance or exit location within the region of interest.
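The grouping step can be sketched as a greedy clustering on shared attributes; the field names (entrance location, dominant color) are illustrative choices of movement profile and appearance attribute.

```python
def group_subjects(subjects):
    """Greedy grouping sketch: subjects that share an entrance location
    and an appearance attribute (here, a dominant color) are assigned to
    the same group. Larger groups are listed first."""
    groups = {}
    for s in subjects:
        key = (s["entrance"], s["color"])
        groups.setdefault(key, []).append(s["id"])
    return sorted(groups.values(), key=len, reverse=True)

subjects = [{"id": 1, "entrance": "north", "color": "red"},
            {"id": 2, "entrance": "north", "color": "red"},
            {"id": 3, "entrance": "south", "color": "blue"}]
print(group_subjects(subjects))  # [[1, 2], [3]]
```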
Abstract:
This disclosure provides a method and system for automated sequencing of vehicles in side-by-side drive-thru configurations via appearance-based classification. According to an exemplary embodiment, the automated sequencing method is a computer-implemented method comprising: a) capturing, with an image-capturing device, video of a merge-point area associated with multiple lanes of merging traffic; b) detecting in the video a vehicle as it traverses the merge-point area; c) classifying the detected vehicle as coming from one of the merging lanes; and d) aggregating the vehicle classifications performed in step c) to generate a merge sequence of detected vehicles.
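Steps c) and d) can be sketched as follows; a simple position-based rule stands in for the appearance-based classifier, and the lane boundary and track coordinates are invented for illustration.

```python
def classify_lane(track, boundary_x=320):
    """Position-based stand-in for the appearance classifier: assign the
    detected vehicle to a source lane by which side of the merge-point
    area its track entered from (boundary is illustrative)."""
    return "lane1" if track[0][0] < boundary_x else "lane2"

def merge_sequence(tracks, boundary_x=320):
    """Aggregate the per-vehicle classifications (step c) into the
    post-merge sequence of detected vehicles (step d)."""
    return [classify_lane(t, boundary_x) for t in tracks]

tracks = [[(100, 40), (300, 60)],   # entered from the left lane
          [(500, 40), (330, 60)]]   # entered from the right lane
print(merge_sequence(tracks))  # ['lane1', 'lane2']
```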
Abstract:
Multi-stage vehicle detection systems and methods for side-by-side drive-thru configurations. One or more video cameras (or an image-capturing unit) can be employed to capture video of a drive-thru of interest in a monitored area. A group of modules can be provided, which define multiple virtual detection loops in the video and sequentially perform classification with respect to each virtual detection loop, starting from the virtual detection loop closest to an order point, to determine when a vehicle having a car ID is sitting in the drive-thru queue, so as to improve vehicle detection performance in automated post-merge sequencing.
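The sequential evaluation of virtual detection loops can be sketched as an early-exit scan; the loop IDs, occupancy flags, and the no-gaps assumption are illustrative.

```python
def queue_from_loops(loop_states):
    """loop_states: (loop_id, occupied) pairs ordered from the virtual
    detection loop closest to the order point outward. Classification
    stops at the first empty loop, on the assumption that a drive-thru
    queue cannot have gaps; this prunes spurious detections farther
    from the order point."""
    queue = []
    for loop_id, occupied in loop_states:
        if not occupied:
            break
        queue.append(loop_id)
    return queue

states = [("loop1", True), ("loop2", True), ("loop3", False), ("loop4", True)]
print(queue_from_loops(states))  # ['loop1', 'loop2']
```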
Abstract:
Methods, systems, and processor-readable media for video anomaly detection based upon a sparsity model. A video input can be received and two or more diverse descriptors of an event can be computed from the video input. The descriptors can be combined to form an event matrix. A sparse reconstruction of the event matrix can be performed with respect to an overcomplete dictionary of training events represented by the diverse descriptors. A step can then be performed to determine if the event is anomalous by computing an outlier rejection measure on the sparse reconstruction.
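The sparse-reconstruction and outlier-rejection idea can be sketched with a tiny greedy matching-pursuit routine; a full l1-based sparse solver and real event descriptors would replace the hand-built vectors and unit-norm atoms used here.

```python
def outlier_rejection_measure(event, dictionary, n_iters=2):
    """Greedy matching-pursuit sketch: reconstruct the event vector with
    a few unit-norm atoms from the dictionary of training events, and
    return the normalized residual. A residual near 0 means the event is
    well explained by training data; a large residual flags an anomaly."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    residual = list(event)
    for _ in range(n_iters):
        # pick the atom most correlated with the current residual
        atom = max(dictionary, key=lambda a: abs(dot(residual, a)))
        c = dot(residual, atom)  # atoms are assumed unit-norm
        residual = [r - c * a for r, a in zip(residual, atom)]
    return (dot(residual, residual) ** 0.5) / (dot(event, event) ** 0.5)

D = [[1, 0, 0], [0, 1, 0]]                      # two unit-norm training atoms
print(outlier_rejection_measure([2, 0, 0], D))  # -> 0.0 (normal event)
print(outlier_rejection_measure([0, 0, 3], D))  # -> 1.0 (anomalous event)
```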