Abstract:
Embodiments include methods and systems for context-adaptive pixel processing based, in part, on a respective weighting-value for each pixel or group of pixels. The weighting-values provide an indication as to which pixels are more pertinent to pixel processing computations. Computational resources and effort can be focused on pixels with higher weights, which are generally more pertinent for certain pixel processing determinations.
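The weighting idea above can be sketched in a few lines. This is a minimal illustration, not the patented method: the function name, the thresholding rule, and the placeholder per-pixel computation are all assumptions.

```python
import numpy as np

def weighted_pixel_process(pixels, weights, weight_threshold=0.25):
    """Illustrative sketch: restrict a per-pixel computation to pixels whose
    weighting value exceeds a threshold, so effort is focused on the pixels
    most pertinent to the determination. Names are assumptions."""
    pixels = np.asarray(pixels, dtype=float)
    weights = np.asarray(weights, dtype=float)
    result = np.zeros_like(pixels)
    mask = weights > weight_threshold       # only high-weight pixels are processed
    result[mask] = pixels[mask] * weights[mask]  # placeholder computation
    return result, mask
```

Low-weight pixels are skipped entirely, which is where the computational savings come from.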
Abstract:
A method for face detection is disclosed. The method includes evaluating a scanning window using a first weak classifier in a first stage classifier. The method also includes evaluating the scanning window using a second weak classifier in the first stage classifier based on the evaluation using the first weak classifier.
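The staged evaluation above can be sketched as follows. The running-score early-exit rule is one plausible way the second weak classifier's evaluation can depend on the first's; the helper names are illustrative, not the claimed API.

```python
def evaluate_stage(window, weak_classifiers, stage_threshold):
    """Sketch of one cascade stage: each weak classifier scores the scanning
    window in turn, and whether the next weak classifier runs depends on the
    running score so far (an assumed early-exit rule)."""
    score = 0.0
    for classifier in weak_classifiers:
        score += classifier(window)
        # If the running score cannot pass the stage, skip later classifiers.
        if score < stage_threshold:
            return False, score
    return True, score
```

Windows rejected by an early weak classifier never pay for the later ones, which is the point of cascaded detection.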
Abstract:
Examples are described of segmenting an image into image regions based on depicted categories of objects, and of refining the image regions semantically. For example, a system can determine that a first image region in an image depicts a first category of object. The system can generate a color distance map of the first image region that maps color distance values to each pixel in the first image region. A color distance value quantifies a difference between a color value of a pixel in the first image region and a color value of a sample pixel in a second image region in the image. The system can process the image based on a refined variant of the first image region that is refined based on the color distance map, for instance by removing pixels from the first image region whose color distances fall below a color distance threshold.
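The color-distance refinement can be sketched as below, assuming a boolean region mask and a Euclidean color distance (the distance metric and all names are assumptions). Pixels too close in color to the sample pixel from the other region are dropped from the first region.

```python
import numpy as np

def refine_region(image, region_mask, sample_pixel, distance_threshold):
    """Sketch of the described refinement: build a per-pixel color distance
    map against a sample pixel from a second region, then remove pixels of
    the first region whose distance falls below the threshold."""
    img = np.asarray(image, dtype=float)
    mask = np.asarray(region_mask, dtype=bool)
    # Euclidean distance in color space between each pixel and the sample.
    color_distance = np.linalg.norm(img - np.asarray(sample_pixel, dtype=float),
                                    axis=-1)
    refined = mask & (color_distance >= distance_threshold)
    return refined, color_distance
```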
Abstract:
A method performed by an electronic device is described. The method includes receiving a set of frames. The set of frames describes a moving three-dimensional (3D) object. The method also includes registering the set of frames based on a canonical model. The canonical model includes geometric information and optical information. The method additionally includes fusing frame information of each frame to the canonical model based on the registration. The method further includes reconstructing the 3D object based on the canonical model.
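The fusion step can be sketched as a running average of registered frame information into a canonical model holding geometric and optical channels. This is a toy illustration: real systems perform non-rigid registration per frame (e.g. a deformation field against the canonical model), whereas here frames are assumed pre-registered, and the running-average fusion rule is an assumption.

```python
import numpy as np

def fuse_frames(frames, colors):
    """Toy sketch of fusing frame information into a canonical model with
    geometric information (points) and optical information (colors).
    Frames are assumed already registered to the canonical model."""
    canonical_points = np.zeros_like(np.asarray(frames[0], dtype=float))
    canonical_colors = np.zeros_like(np.asarray(colors[0], dtype=float))
    for n, (pts, cols) in enumerate(zip(frames, colors), start=1):
        # Running average: each registered frame refines the canonical model.
        canonical_points += (np.asarray(pts, dtype=float) - canonical_points) / n
        canonical_colors += (np.asarray(cols, dtype=float) - canonical_colors) / n
    return canonical_points, canonical_colors
```

The reconstructed 3D object would then be read out of the canonical model after all frames are fused.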
Abstract:
In various implementations, object tracking in a video content analysis system can be augmented with an image-based object re-identification system (e.g., for person re-identification or re-identification of other objects) to improve object tracking results for objects moving in a scene. The object re-identification system can use image recognition principles, which can be enhanced by considering data provided by object trackers that can be output by an object tracking system. In a testing stage, the object re-identification system can selectively test object trackers against object models. For most input video frames, not all object trackers need be tested against all object models. Additionally, different types of object trackers can be tested differently, so that a context provided by each object tracker can be considered. In a training stage, object models can also be selectively updated.
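The selective-testing idea can be sketched as follows. The `needs_reid` flag standing in for tracker context, the dict-based tracker/model representation, and the argmax matching rule are all assumptions for illustration.

```python
def selective_reid(trackers, models, match_score):
    """Sketch of selective re-identification testing: only trackers whose
    context calls for it (here an assumed 'needs_reid' flag) are tested
    against object models, rather than testing every tracker against every
    model on every frame."""
    assignments = {}
    for tracker_id, tracker in trackers.items():
        if not tracker.get("needs_reid", False):
            continue  # most trackers skip the test on most frames
        scores = {model_id: match_score(tracker["feature"], feature)
                  for model_id, feature in models.items()}
        assignments[tracker_id] = max(scores, key=scores.get)
    return assignments
```

Different tracker types could branch to different scoring rules inside the loop; the single `match_score` here is a simplification.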
Abstract:
A device includes a memory buffer and a processor. The memory buffer is configured to store background image-blocks corresponding to image-blocks of a plurality of image frames of a video stream. The processor is configured to partition a particular image frame of the video stream into multiple image-blocks. The processor is also configured to generate a predicted background image-block based on one or more of the background image-blocks. The processor is further configured to determine a background prediction error based on a comparison of the predicted background image-block and a corresponding image-block of the particular image frame. The processor is also configured, based on determining that the background prediction error is greater than a threshold, to extract, from the corresponding image-block, at least one of a background image-block or a foreground image-block.
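The prediction-error test can be sketched per block as below. The mean-absolute-error metric and the per-pixel foreground rule are assumptions; the abstract only requires comparing the predicted background block with the actual block against a threshold.

```python
import numpy as np

def classify_block(predicted_bg, block, threshold):
    """Sketch of the background-prediction test: a large error between the
    predicted background block and the actual block suggests the block
    contains foreground to extract (MAE metric is an assumption)."""
    diff = np.abs(np.asarray(block, dtype=float)
                  - np.asarray(predicted_bg, dtype=float))
    error = float(np.mean(diff))
    if error > threshold:
        # Pixels far from the background prediction are treated as foreground.
        fg_mask = diff > threshold
        return error, fg_mask
    return error, None  # block matches the predicted background
```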
Abstract:
A method for detecting and tracking a target object is described. The method includes performing motion-based tracking for a current video frame by comparing a previous video frame and the current video frame. The method also includes selectively performing object detection in the current video frame based on a tracked parameter.
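The control flow above is simple to sketch. Using tracking confidence as the "tracked parameter" is an assumed choice (the abstract leaves the parameter open), and the callables stand in for real tracker and detector implementations.

```python
def track_and_maybe_detect(prev_frame, cur_frame, track, detect,
                           confidence_threshold=0.5):
    """Sketch of the described flow: motion-based tracking compares the
    previous and current frames on every frame, and the costlier object
    detector runs only when the tracked parameter (here, an assumed
    tracking confidence) falls too low."""
    box, confidence = track(prev_frame, cur_frame)
    if confidence < confidence_threshold:
        box = detect(cur_frame)  # fall back to full detection
    return box
```

Skipping detection while tracking is confident is what makes the method cheap on most frames.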
Abstract:
A method performed by an electronic device is described. The method includes determining a local motion pattern by determining a set of local motion vectors within a region of interest between a previous frame and a current frame. The method also includes determining a global motion pattern by determining a set of global motion vectors between the previous frame and the current frame. The method further includes calculating a separation metric based on the local motion pattern and the global motion pattern. The separation metric indicates a motion difference between the local motion pattern and the global motion pattern. The method additionally includes tracking an object based on the separation metric.
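One plausible separation metric is sketched below: the Euclidean distance between the mean local motion vector (inside the region of interest) and the mean global motion vector. The specific formula is an assumption; the abstract only requires that the metric indicate the motion difference between the two patterns.

```python
import numpy as np

def separation_metric(local_vectors, global_vectors):
    """Sketch: distance between the mean local motion vector and the mean
    global motion vector. A large value suggests the region of interest
    moves differently from the scene, supporting object tracking."""
    local_mean = np.mean(np.asarray(local_vectors, dtype=float), axis=0)
    global_mean = np.mean(np.asarray(global_vectors, dtype=float), axis=0)
    return float(np.linalg.norm(local_mean - global_mean))
```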
Abstract:
A method for object classification by an electronic device is described. The method includes obtaining an image frame that includes an object. The method also includes determining samples from the image frame. Each of the samples represents a multidimensional feature vector. The method further includes adding the samples to a training set for the image frame. The method additionally includes pruning one or more samples from the training set to produce a pruned training set. One or more non-support vector negative samples are pruned first. One or more non-support vector positive samples are pruned second if necessary to avoid exceeding a sample number threshold. One or more support vector samples are pruned third if necessary to avoid exceeding the sample number threshold. The method also includes updating classifier model weights based on the pruned training set.
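The three-tier pruning order can be sketched as follows. Representing samples as dicts with `support_vector` and `positive` flags is an assumption made for illustration; the ordering (non-support-vector negatives first, then non-support-vector positives, then support vectors, stopping once the set fits) follows the abstract.

```python
def prune_training_set(samples, max_samples):
    """Sketch of the described pruning order: remove non-support-vector
    negatives first, then non-support-vector positives, then support
    vectors, only as far as needed to respect the sample number threshold."""
    tiers = [
        [s for s in samples if not s["support_vector"] and not s["positive"]],
        [s for s in samples if not s["support_vector"] and s["positive"]],
        [s for s in samples if s["support_vector"]],
    ]
    pruned = list(samples)
    for tier in tiers:
        for sample in tier:
            if len(pruned) <= max_samples:
                return pruned  # already fits; later tiers are untouched
            pruned.remove(sample)
    return pruned
```

Support vectors are sacrificed last because they carry the classifier's decision boundary.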