Abstract:
A camera outputs video as a sequence of video frames having pixel values in a first (e.g., relatively low-dimensional) color space, where the first color space has a first number of channels. An image-processing device maps the video frames to a second (e.g., relatively higher-dimensional) color representation of video frames. The mapping causes the second color representation of video frames to have a greater number of channels relative to the first number of channels. The image-processing device extracts a second color representation of a background frame of the scene. The image-processing device can then detect foreground objects in a current frame of the second color representation of video frames by comparing the current frame with the second color representation of the background frame. The image-processing device then outputs an identification of the foreground objects in the current frame of the video.
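One way to picture the mapping described above is to stack several standard color spaces into a single higher-dimensional representation and compare each incoming frame against a background frame held in that same representation. The sketch below does this with OpenCV and NumPy; the choice of BGR + HSV + Lab (nine channels), the Euclidean distance, and the threshold value are illustrative assumptions, not details taken from the abstract.

    import cv2
    import numpy as np

    def to_high_dim(frame_bgr):
        """Map a 3-channel BGR frame to a 9-channel color representation."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB)
        return np.concatenate([frame_bgr, hsv, lab], axis=2).astype(np.float32)

    def detect_foreground(frame_bgr, background_hd, threshold=60.0):
        """Flag pixels whose high-dimensional distance from the background is large."""
        current_hd = to_high_dim(frame_bgr)
        dist = np.linalg.norm(current_hd - background_hd, axis=2)  # per-pixel distance
        return (dist > threshold).astype(np.uint8) * 255

    # Usage: background_hd = to_high_dim(background_frame_bgr), then call
    # detect_foreground(frame_bgr, background_hd) for each incoming frame.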
Abstract:
This disclosure provides a method and system to locate/detect static occlusions in a scene, captured as images, that includes a tracked object. According to an exemplary method, static occlusions are automatically located by monitoring the motion of one or more objects in the scene over time and with the use of an associated accumulator array.
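A minimal sketch of the accumulator-array idea, assuming that for each frame we already have the tracked object's predicted bounding box and a binary foreground mask: pixels where the object is predicted to be but no foreground is observed accumulate evidence, and cells whose counts cross a threshold are marked as static occlusions. The class, box format, and threshold are hypothetical, not taken from the disclosure.

    import numpy as np

    class OcclusionAccumulator:
        def __init__(self, frame_shape, threshold=50):
            self.acc = np.zeros(frame_shape[:2], dtype=np.int32)
            self.threshold = threshold

        def update(self, predicted_box, foreground_mask):
            """predicted_box = (x0, y0, x1, y1) where the tracker expects the object."""
            x0, y0, x1, y1 = predicted_box
            missing = foreground_mask[y0:y1, x0:x1] == 0
            self.acc[y0:y1, x0:x1] += missing  # evidence that something hides the object

        def occlusion_mask(self):
            """Binary mask of locations repeatedly flagged as occluding."""
            return (self.acc >= self.threshold).astype(np.uint8) * 255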
Abstract:
This disclosure provides a static occlusion handling method and system for use with appearance-based video tracking algorithms where static occlusions are present. The method and system assume that the objects to be tracked move according to structured motion patterns within a scene, such as vehicles moving along a roadway. A primary concept is to replicate pixels associated with the tracked object from previous frames to current or future frames when the tracked object coincides with a static occlusion, where the predicted motion of the tracked object is the basis for replication of the pixels.
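The pixel-replication step could look roughly like the sketch below, assuming the previous frame, the object's previous bounding box, its predicted displacement, and a static-occlusion mask are already available. Bounds checking is omitted and all names are illustrative; this is not the claimed implementation.

    import numpy as np

    def replicate_occluded_pixels(prev_frame, curr_frame, prev_box, motion, occlusion_mask):
        """Copy the object's pixels from the previous frame, shifted by the
        predicted motion, into the parts of the current frame covered by the
        static occlusion (boxes assumed to stay inside the image)."""
        x0, y0, x1, y1 = prev_box
        dx, dy = motion
        patch = prev_frame[y0:y1, x0:x1]
        nx0, ny0 = x0 + dx, y0 + dy
        nx1, ny1 = nx0 + (x1 - x0), ny0 + (y1 - y0)
        out = curr_frame.copy()
        occluded = occlusion_mask[ny0:ny1, nx0:nx1] > 0
        out[ny0:ny1, nx0:nx1][occluded] = patch[occluded]
        return out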
Abstract:
A method for automatically determining a dynamic queue configuration includes acquiring a series of frames from an image source surveying a queue area. The method includes detecting at least one subject in a frame. The method includes tracking locations of each detected subject across the series of frames. The method includes generating calibrated tracking data by mapping the tracking locations to a predefined coordinate system. The method includes localizing a queue configuration descriptor based on the calibrated tracking data.
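Under some simplifying assumptions (per-frame subject detections already tracked, and a calibration homography that maps image points into the predefined coordinate system), the calibration and descriptor steps might be sketched as below, with a principal-axis line fit standing in for a real queue configuration descriptor.

    import cv2
    import numpy as np

    def calibrate_points(image_points, homography):
        """Map Nx2 image-plane track points into the predefined coordinate system."""
        pts = np.asarray(image_points, dtype=np.float32).reshape(-1, 1, 2)
        return cv2.perspectiveTransform(pts, homography).reshape(-1, 2)

    def localize_queue_descriptor(ground_points):
        """Summarize calibrated track points as an anchor plus a principal direction."""
        pts = np.asarray(ground_points, dtype=np.float32)
        anchor = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - anchor)
        return {"anchor": anchor, "direction": vt[0]}  # crude queue-axis descriptor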
Abstract:
A system and method for detecting customer drive-off/walk-off from a customer queue. An embodiment includes acquiring images of a retail establishment, said images including at least a portion of a customer queue region, determining a queue configuration within the images, analyzing the images to detect entry of a customer into the customer queue, tracking a customer detected in the customer queue as the customer progresses within the queue, analyzing the images to detect if the customer leaves the customer queue, and generating a drive-off notification if the customer leaves the queue.
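A toy sketch of the drive-off test, assuming each tracked customer yields a list of (x, y) points in the queue's coordinate system, the queue region is given as a polygon, and being served is approximated by proximity to a service point. The geometry test uses OpenCV's pointPolygonTest; the radius and helper names are assumptions.

    import cv2
    import numpy as np

    def check_drive_off(track_points, queue_polygon, service_point, service_radius=1.0):
        """Return True if the customer joined the queue and left it without being served."""
        poly = np.asarray(queue_polygon, dtype=np.float32).reshape(-1, 1, 2)
        inside = [cv2.pointPolygonTest(poly, (float(x), float(y)), False) >= 0
                  for x, y in track_points]
        if not any(inside):
            return False  # the customer never entered the queue region
        lx, ly = track_points[-1]
        served = np.hypot(lx - service_point[0], ly - service_point[1]) <= service_radius
        left_queue = cv2.pointPolygonTest(poly, (float(lx), float(ly)), False) < 0
        return left_queue and not served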
Abstract:
A method for removing false foreground image content in a foreground detection process performed on a video sequence includes, for each current frame, comparing a feature value of each current pixel against a feature value of a corresponding pixel in a background model. Each current pixel is classified as belonging to one of a candidate foreground image and a background based on the comparing. A first classification image representing the candidate foreground image is generated using the current pixels classified as belonging to the candidate foreground image. Each pixel in the first classification image is classified as belonging to one of a foreground image and a false foreground image using a previously trained classifier. A modified classification image representing the foreground image is generated using the pixels classified as belonging to the foreground image, while the pixels classified as belonging to the false foreground image are removed.
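One plausible reading of this pipeline, sketched below: a per-pixel comparison against a background model produces the candidate-foreground mask, connected components of that mask are described by simple shape features, and a previously trained classifier (any model exposing a scikit-learn-style predict()) keeps only the components it labels as true foreground. The feature set, threshold, and label convention are assumptions, not details from the abstract.

    import cv2
    import numpy as np

    def remove_false_foreground(frame, background, classifier, diff_threshold=30):
        """Build the candidate-foreground mask from a color frame and background
        model, then keep only components the trained classifier accepts as true
        foreground (label 1 assumed)."""
        diff = np.abs(frame.astype(np.float32) - background.astype(np.float32))
        candidate = (diff.max(axis=2) > diff_threshold).astype(np.uint8)

        num, labels, stats, _ = cv2.connectedComponentsWithStats(candidate, connectivity=8)
        cleaned = np.zeros_like(candidate)
        for i in range(1, num):  # label 0 is the image background
            x, y, w, h, area = stats[i]
            feats = np.array([[area, w, h, w / max(h, 1)]], dtype=np.float32)
            if classifier.predict(feats)[0] == 1:
                cleaned[labels == i] = 255
        return cleaned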
Abstract:
A system and method of providing annotated trajectories by receiving image frames from a video camera and determining a location based on the image frames. The system and method can further include the steps of determining that the location is associated with a preexisting annotation and displaying the preexisting annotation. Additionally or alternatively, the system and method can further include the steps of generating a new annotation automatically or based on a user input and associating the new annotation with the location.
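A toy sketch of the annotation lookup, assuming the location recovered from the frames can be quantized into a dictionary key; the storage, key scheme, and display call are placeholders, not part of the described system.

    annotations = {}  # location key -> annotation text

    def location_key(location, precision=1):
        """Quantize an (x, y) location so nearby positions share a key."""
        return (round(location[0], precision), round(location[1], precision))

    def handle_location(location, new_text=None):
        key = location_key(location)
        if key in annotations:
            print("Existing annotation:", annotations[key])  # display it
        elif new_text is not None:
            annotations[key] = new_text  # associate the new annotation with the location
            print("Added annotation:", new_text)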