Abstract:
Methods, systems, and processor-readable media for adaptive character segmentation in an automatic license plate recognition application. A region of interest can be identified in an image of a license plate acquired via an automatic license plate recognition engine. Characters in the image with respect to the region of interest can be segmented using a histogram projection associated with particular segmentation threshold parameters. The characters in the image can be iteratively validated, until a minimum number of valid characters is determined based on the histogram projection and the particular segmentation threshold parameters, to produce character images sufficient to identify the license plate.
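Below is a minimal Python sketch of the segmentation step described above, assuming the region of interest has already been binarized (foreground character pixels = 1). The function names, projection thresholds, and character-width limits are illustrative assumptions, not values taken from the abstract:

```python
import numpy as np

def segment_characters(binary_plate, col_thresh, min_width=4, max_width=40):
    """Segment characters from a binarized plate ROI (foreground = 1) using
    a vertical histogram projection. All threshold values are illustrative."""
    height, width = binary_plate.shape
    # Vertical projection: fraction of foreground pixels in each column.
    projection = binary_plate.sum(axis=0) / float(height)
    is_char_col = projection > col_thresh

    # Group consecutive above-threshold columns into candidate character spans.
    spans, start = [], None
    for x in range(width):
        if is_char_col[x] and start is None:
            start = x
        elif not is_char_col[x] and start is not None:
            spans.append((start, x))
            start = None
    if start is not None:
        spans.append((start, width))

    # Validate spans by width: reject noise (too narrow) and merged blobs (too wide).
    valid = [(a, b) for a, b in spans if min_width <= b - a <= max_width]
    return [binary_plate[:, a:b] for a, b in valid]

def adaptive_segment(binary_plate, min_valid_chars=5):
    """Sweep the segmentation threshold, validating iteratively, until at
    least min_valid_chars valid character images are produced."""
    for thresh in (0.15, 0.10, 0.05, 0.02):  # illustrative parameter schedule
        chars = segment_characters(binary_plate, col_thresh=thresh)
        if len(chars) >= min_valid_chars:
            return chars
    return []  # insufficient valid characters at every threshold
```

The outer loop mirrors the iterative validation: the segmentation threshold parameters are varied until the projection yields the minimum number of width-valid character images.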
Abstract:
Methods and systems for continuously monitoring the gaze direction of a driver of a vehicle over time. Video is received, which is captured by a camera associated with, for example, a mobile device within the vehicle, the camera and/or mobile device mounted facing the driver. Frames can then be extracted from the video. A facial region corresponding to the face of the driver can then be detected within the extracted frames. Feature descriptors can then be computed from the facial region. A gaze classifier, derived from the vehicle, the driver, and the camera, can then be applied, wherein the gaze classifier receives the feature descriptors as inputs and outputs at least one label corresponding to one of a predefined, finite number of gaze classes, thereby identifying the gaze direction of the driver of the vehicle.
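A rough Python sketch of this pipeline follows, using OpenCV. The Haar-cascade face detector, the HOG feature descriptors, the illustrative gaze-class labels, and the external `classifier` object (any trained model with a scikit-learn-style `predict`, trained for the specific vehicle, driver, and camera) are all assumptions; the abstract does not name particular detectors, descriptors, or classifiers:

```python
import cv2

# Illustrative gaze classes; the actual predefined set is not specified.
GAZE_CLASSES = ["road", "left_mirror", "right_mirror",
                "rearview_mirror", "instrument_panel", "down"]

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
hog = cv2.HOGDescriptor()  # default 64x128 window; descriptor choice is assumed

def classify_gaze(video_path, classifier):
    """Label the driver's gaze direction frame by frame.
    `classifier` is a hypothetical pre-trained model exposing predict()."""
    labels = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break  # end of video
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_detector.detectMultiScale(gray, 1.1, 5)
        if len(faces) == 0:
            labels.append(None)  # no facial region detected in this frame
            continue
        # Keep the largest detection, assumed to be the driver's face.
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        face = cv2.resize(gray[y:y + h, x:x + w], (64, 128))
        descriptor = hog.compute(face).reshape(1, -1)
        labels.append(GAZE_CLASSES[int(classifier.predict(descriptor)[0])])
    cap.release()
    return labels
```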
Abstract:
Methods and systems for detecting anomalies in transportation-related video footage. In an offline training phase, video footage of a traffic location can be received, and event encodings can be extracted from the video footage and collected or compiled into a training dictionary. In an online detection phase, one or more input video sequences captured at the traffic location or a similar traffic location can be received. Then, an event encoding corresponding to the input video sequence can be extracted. The event encoding can be reconstructed with a low-rank sparsity prior model applied with respect to the training dictionary. The reconstruction error between the actual and reconstructed event encodings can then be computed in order to determine whether the event is anomalous by comparing the reconstruction error with a threshold.
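A minimal Python sketch of the online detection step follows. It substitutes plain sparse coding via orthogonal matching pursuit (scikit-learn's `orthogonal_mp`) for the low-rank sparsity prior model named in the abstract; the sparsity level and error threshold are illustrative:

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def is_anomalous(event_encoding, training_dictionary, sparsity=10, threshold=0.3):
    """Flag an event as anomalous if it cannot be sparsely reconstructed
    from the training dictionary of normal events.

    training_dictionary: (d, n) matrix whose columns are encodings of normal
        events collected during the offline training phase.
    event_encoding: (d,) vector extracted from the input video sequence.
    """
    x = event_encoding.reshape(-1, 1)
    # Sparse code: approximate x as a combination of a few dictionary atoms.
    coef = orthogonal_mp(training_dictionary, x, n_nonzero_coefs=sparsity)
    reconstruction = training_dictionary @ coef
    # Normalized reconstruction error; a large error means the event
    # resembles no combination of the normal training events.
    error = np.linalg.norm(x.ravel() - reconstruction.ravel()) / np.linalg.norm(x)
    return error > threshold, error
```

An event whose normalized reconstruction error exceeds the threshold is unlike any combination of the normal events collected offline and is therefore flagged as anomalous.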