Abstract:
A method, non-transitory computer readable medium and apparatus for calculating a by-spot occupancy of a parking lot are disclosed. For example, the method includes receiving an indication of a triggering event, sending a query to receive a first image and a second image in response to the triggering event, receiving the first image and the second image, analyzing the first image and the second image to determine a change in an occupancy status of a parking spot within the parking lot, and calculating the by-spot occupancy of the parking lot based on the change in the occupancy status of the parking spot.
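The event-triggered flow in this abstract might be sketched as follows. All function and variable names here are illustrative assumptions, not from the patent, and the pixel-difference change detector merely stands in for whatever image analysis the method actually uses.

```python
def images_differ(first_image, second_image, threshold=30):
    """Crude change detector: mean absolute pixel difference (stand-in
    for the patent's unspecified image analysis)."""
    diffs = [abs(a - b) for a, b in zip(first_image, second_image)]
    return sum(diffs) / len(diffs) > threshold

def update_by_spot_occupancy(occupancy, spot_id, first_image, second_image):
    """On a triggering event, compare the two queried images of a spot and
    flip that spot's occupancy status when they differ."""
    if images_differ(first_image, second_image):
        occupancy[spot_id] = not occupancy[spot_id]
    return occupancy

# Tiny worked example: flat lists stand in for cropped spot images.
occupancy = {"A1": False, "A2": True}
empty_spot = [10] * 100
occupied_spot = [200] * 100
update_by_spot_occupancy(occupancy, "A1", empty_spot, occupied_spot)
print(occupancy["A1"])  # -> True: the detected change flips A1 to occupied
```

The by-spot occupancy of the whole lot is then simply the resulting dictionary; summing its values would give an aggregate count.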
Abstract:
A method and system for parking occupancy detection comprises collecting video of a parking facility with a video capturing device, counting an occupancy of the parking facility using at least one sensor to establish a sensor occupancy count, classifying each of a plurality of parking spots as occupied or vacant with a classifier according to a classification threshold in order to establish a video occupancy count, determining a difference between the sensor occupancy count and the video occupancy count, and setting the sensor occupancy count to equal the video occupancy count if the difference between the sensor occupancy count and the video occupancy count exceeds a difference threshold.
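The final reconciliation step, resetting a drifting sensor count from the video-derived count, reduces to a small comparison. This is a minimal sketch with illustrative names and an assumed threshold value; the patent does not specify either.

```python
def reconcile_counts(sensor_count, video_count, difference_threshold=5):
    """Reset the running sensor occupancy count to the video occupancy count
    when the two have drifted apart by more than the difference threshold;
    otherwise the sensor count is trusted as-is."""
    if abs(sensor_count - video_count) > difference_threshold:
        return video_count
    return sensor_count

print(reconcile_counts(120, 112))  # -> 112: drift of 8 exceeds 5, reset
print(reconcile_counts(120, 118))  # -> 120: drift of 2 is tolerated
```

The point of the design is that in-ground or gate sensors accumulate error over time, while the per-spot video classification provides an absolute, if noisier, reference to re-anchor against.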
Abstract:
A method, system, and apparatus for vehicle occupancy detection comprises collecting image data of a vehicle, creating a first value according to a plurality of characteristics of a driver position of the vehicle, and then creating at least one other value according to a plurality of characteristics of at least one other candidate occupant position in the vehicle. The characteristics of the driver position of the vehicle and candidate occupant position of the vehicle are compared in order to determine the vehicle occupancy.
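The driver-versus-candidate comparison might be sketched like this. The characteristic names, the weighted-sum combination, and the presence-ratio rule are all assumptions for illustration; the patent leaves the comparison unspecified.

```python
def seat_value(characteristics):
    """Collapse a seat region's characteristics into one score; a weighted
    sum stands in for the patent's unspecified value computation."""
    weights = {"edge_density": 0.5, "skin_tone_ratio": 0.3, "symmetry": 0.2}
    return sum(weights[k] * characteristics.get(k, 0.0) for k in weights)

def estimate_occupancy(driver_chars, candidate_chars_list, presence_ratio=0.6):
    """Count the driver plus every candidate occupant position whose score
    is at least presence_ratio times the driver position's score."""
    driver_score = seat_value(driver_chars)
    count = 1  # the driver seat is taken as occupied
    for chars in candidate_chars_list:
        if seat_value(chars) >= presence_ratio * driver_score:
            count += 1
    return count

driver = {"edge_density": 0.9, "skin_tone_ratio": 0.8, "symmetry": 0.7}
passenger = {"edge_density": 0.8, "skin_tone_ratio": 0.7, "symmetry": 0.6}
empty_seat = {"edge_density": 0.1, "skin_tone_ratio": 0.05, "symmetry": 0.2}
print(estimate_occupancy(driver, [passenger, empty_seat]))  # -> 2
```

Using the driver position as the reference is the key idea: it is known to be occupied, so its feature response calibrates what "occupied" looks like under the current lighting and camera geometry.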
Abstract:
A store profile generation system includes a mobile base and an image capture assembly mounted on the base. The assembly includes at least one image capture device for acquiring images of product display units in a product facility, product labels being associated with the product display units which include product-related data. A control unit acquires the images captured by the at least one image capture device at a sequence of locations of the mobile base in the product facility. The control unit extracts the product-related data from the acquired images and constructs a store profile indicating locations of the product labels throughout the product facility, based on the extracted product-related data. The store profile can be used to generate new product labels for a sale, ordered so that a person can match them to the appropriate locations in a single pass through the store.
Abstract:
A system and method for detecting electronic device use by a driver of a vehicle including acquiring an image including a vehicle from an associated image capture device positioned to view oncoming traffic, locating a windshield region of the vehicle in the captured image, processing pixels of the windshield region of the image for computing a feature vector describing the windshield region of the vehicle, applying the feature vector to a classifier for classifying the image into respective classes including at least classes for candidate electronic device use and candidate electronic device non-use, and outputting the classification.
Abstract:
A system and method for triggering image re-capture in image processing by receiving a first image captured using a first mode, performing a computer vision task on the first image to produce a first result, generating a confidence score of the first result using a machine learning technique, triggering an image re-capture using a second mode in response to the confidence score of the first result, and performing the computer vision task on a result of the image re-capture using the second mode.
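The re-capture loop described above can be sketched in a few lines. The function and mode names are illustrative, the 0.8 threshold is an assumed parameter, and the stubbed license-plate reads below merely stand in for a real vision task and confidence model.

```python
def process_with_recapture(capture, run_task, score_confidence,
                           min_confidence=0.8):
    """Run the vision task on a first-mode image; if the confidence score
    of the result is too low, re-capture with the second mode and re-run.
    capture(mode) -> image."""
    first_image = capture("first_mode")
    result = run_task(first_image)
    if score_confidence(result) < min_confidence:
        second_image = capture("second_mode")
        result = run_task(second_image)
    return result

# Stubbed example: a blurry first-mode capture yields a low-confidence read,
# so the second mode (e.g. a different exposure) is triggered.
captures = {"first_mode": "blurry_plate", "second_mode": "sharp_plate"}
reads = {"blurry_plate": ("AB?123", 0.4), "sharp_plate": ("ABC123", 0.95)}
result = process_with_recapture(
    capture=lambda mode: captures[mode],
    run_task=lambda img: reads[img],
    score_confidence=lambda res: res[1],
)
print(result)  # -> ('ABC123', 0.95)
```

The benefit of the scheme is that the slower or more intrusive second mode is only invoked when the learned confidence score says the cheap first-mode result cannot be trusted.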
Abstract:
Methods and systems for reducing the required footprint of SNoW-based classifiers via optimization of classifier features. A compression technique involves two training cycles. The first cycle proceeds normally, and the classifier weights from this cycle are used to rank the Successive Mean Quantization Transform (SMQT) features according to several criteria. The top N features (out of 512) are then chosen, and the training cycle is repeated using only those N features. It has been found that OCR accuracy is maintained using only 60 of the 512 features, leading to an 88% reduction in RAM utilization at runtime. This, coupled with packing the weights from doubles to single-byte integers, yields a further 8× reduction in RAM footprint, for a total reduction of 68× over the baseline SNoW method.
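The two compression steps, ranking features by the first-cycle weights and packing doubles into single bytes, might look like this in outline. Weight magnitude is used here as a stand-in for the abstract's unspecified "several criteria", and the min/max quantizer is one simple way to realize the double-to-byte packing.

```python
def top_features(weights, n):
    """Rank feature indices by first-cycle weight magnitude and keep the
    top n; retraining then uses only these features."""
    ranked = sorted(range(len(weights)),
                    key=lambda i: abs(weights[i]), reverse=True)
    return sorted(ranked[:n])

def pack_weights(weights):
    """Quantize double-precision weights to single-byte integers (0..255)
    via a linear min/max mapping, an 8x storage reduction per weight."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 if hi > lo else 1.0
    return [round((w - lo) / scale) for w in weights]

weights = [0.01, -2.5, 0.3, 1.8, -0.02, 0.9]
print(top_features(weights, 3))       # -> [1, 3, 5]
print(pack_weights([0.0, 0.5, 1.0]))  # -> [0, 128, 255]
```

The arithmetic in the abstract is consistent: keeping 60 of 512 features is roughly an 8.5× reduction, and a further 8× from byte packing compounds to about 68× overall.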
Abstract:
Methods and systems for continuously monitoring the gaze direction of a driver of a vehicle over time. Video is received, which is captured by a camera associated with, for example, a mobile device within a vehicle, the camera and/or mobile device mounted facing the driver of the vehicle. Frames can then be extracted from the video. A facial region can then be detected, which corresponds to the face of the driver within the extracted frames. Feature descriptors can then be computed from the facial region. A gaze classifier derived from the vehicle, the driver, and the camera can then be applied, wherein the gaze classifier receives the feature descriptors as inputs and outputs at least one label corresponding to one of a predefined finite number of gaze classes to identify the gaze direction of the driver of the vehicle.
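The per-frame pipeline, detect face, compute descriptors, classify into a finite label set, can be sketched as a loop over extracted frames. The stage functions are passed in as stubs here; all names and the "unknown" fallback label are illustrative assumptions.

```python
def monitor_gaze(frames, detect_face, compute_descriptors, classify):
    """Label each frame with a gaze class: detect the driver's facial
    region, compute feature descriptors from it, and map them to one of a
    finite set of gaze classes. Frames with no detectable face get
    'unknown'."""
    labels = []
    for frame in frames:
        face = detect_face(frame)
        if face is None:
            labels.append("unknown")
            continue
        labels.append(classify(compute_descriptors(face)))
    return labels

# Stubbed example: the face detector fails on the second frame.
labels = monitor_gaze(
    frames=["f1", "f2", "f3"],
    detect_face=lambda f: None if f == "f2" else f + "_face",
    compute_descriptors=lambda face: len(face),
    classify=lambda d: "road" if d % 2 == 1 else "mirror",
)
print(labels)  # -> ['road', 'unknown', 'road']
```

Because the classifier is derived from this specific vehicle, driver, and camera mounting, the label set and decision boundaries can stay small and calibrated to one fixed viewpoint.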
Abstract:
A method for compensation of banding in a marking platform includes: initiating a calibration stage; marking a test pattern over multiple intervals of a lowest fundamental frequency among marking modules; obtaining image data for the test pattern from a sensor; obtaining 1× signals from sensors associated with the marking modules; and processing the image data in relation to the 1× signals to form banding profiles for multiple marking modules. Alternatively, the method may include: processing image data in relation to 1× signals to form banding profiles for multiple marking modules; determining that amplitudes in multiple banding profiles exceed a threshold to identify dominant banding profiles; and processing dominant banding profiles to form dominant banding signatures. Alternatively, the method may include: initiating a correction stage; obtaining 1× signals from sensors associated with dominant marking modules; and periodically processing dominant banding signatures and 1× signals to determine a banding compensation value.
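The dominant-profile selection and the correction-stage lookup might be sketched as follows. Peak-to-peak amplitude, the phase-indexed signature lookup, and simple sign inversion are assumed simplifications; the patent does not fix how the signature is processed into a compensation value.

```python
def dominant_profiles(banding_profiles, amplitude_threshold):
    """Keep only marking modules whose banding profile amplitude
    (peak-to-peak) exceeds the threshold; these profiles become the
    dominant banding signatures."""
    dominant = {}
    for module, profile in banding_profiles.items():
        amplitude = max(profile) - min(profile)
        if amplitude > amplitude_threshold:
            dominant[module] = profile
    return dominant

def compensation_value(signature, phase):
    """During the correction stage, sample the stored signature at the
    phase indicated by the module's 1x signal and invert it, so the
    applied correction cancels the predicted banding."""
    return -signature[phase % len(signature)]

profiles = {"M1": [0.0, 0.6, 0.0, -0.6], "M2": [0.0, 0.1, 0.0, -0.1]}
dom = dominant_profiles(profiles, amplitude_threshold=0.5)
print(sorted(dom))  # -> ['M1']: only M1's 1.2 amplitude exceeds 0.5
print(compensation_value(dom["M1"], phase=1))  # -> -0.6
```

Restricting the correction stage to dominant modules is the economy of the scheme: only the few modules whose once-per-revolution (1×) banding is visible above threshold need a stored signature and runtime compensation.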