Abstract:
A method of dynamically updating a feature database that contains features corresponding to a known target object includes providing a captured image, extracting a first set of features from within the captured image, and comparing the first set of features to the features stored in the feature database. If the target object is determined to be present in the image, at least one extracted feature of the first set that is not already included in the feature database is added to the feature database.
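A minimal sketch of this update loop is given below, assuming OpenCV ORB descriptors, a brute-force Hamming matcher with a Lowe ratio test, and an illustrative match-count threshold for deciding that the target is present; none of these particular choices are specified by the abstract.

    import cv2
    import numpy as np

    MATCH_RATIO = 0.75   # Lowe ratio test threshold (illustrative)
    MIN_MATCHES = 30     # matches needed to declare the target present (illustrative)

    def update_feature_database(image, db_descriptors):
        """Extract features from the image; if enough of them match the stored
        target features, append the unmatched (new) descriptors to the database."""
        orb = cv2.ORB_create()
        keypoints, descriptors = orb.detectAndCompute(image, None)
        if descriptors is None:
            return db_descriptors, False

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
        matches = matcher.knnMatch(descriptors, db_descriptors, k=2)
        good = [m[0] for m in matches
                if len(m) == 2 and m[0].distance < MATCH_RATIO * m[1].distance]

        target_present = len(good) >= MIN_MATCHES
        if target_present:
            # Add only the extracted descriptors that did not match anything stored.
            matched_idx = {m.queryIdx for m in good}
            new_rows = [d for i, d in enumerate(descriptors) if i not in matched_idx]
            if new_rows:
                db_descriptors = np.vstack([db_descriptors, np.array(new_rows)])
        return db_descriptors, target_present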
Abstract:
A method of recognizing an object of interest in an image includes extracting a first set of features from within the image. Each extracted feature in the first set of features is then categorized as either blob-like or edge-like. A second set of features is then taken from the first set, where a number of the edge-like features to include in the second set of features is based on a relative number of edge-like features to blob-like features included in the first set of extracted features. An object of interest within the image is detected according to the second set of features.
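A rough sketch of the categorization and selection steps follows, assuming each extracted feature already carries a curvature-ratio edge score (as in SIFT's edge-response test); the edge_threshold, the target_count budget, and the exact rule linking the number of retained edge-like features to the blob/edge balance are illustrative assumptions rather than details from the abstract.

    def categorize(features, edge_scores, edge_threshold=10.0):
        """Split the first set of features into blob-like and edge-like groups
        using an assumed curvature-ratio score per feature."""
        blob_like = [f for f, s in zip(features, edge_scores) if s < edge_threshold]
        edge_like = [f for f, s in zip(features, edge_scores) if s >= edge_threshold]
        return blob_like, edge_like

    def select_second_set(blob_like, edge_like, target_count=200):
        """Keep all blob-like features; add edge-like features only to the extent
        the blob-like ones fall short of the budget, so the number of edge-like
        features retained depends on the relative counts of the two categories."""
        n_edges = max(0, min(len(edge_like), target_count - len(blob_like)))
        return blob_like + edge_like[:n_edges]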
Abstract:
Disclosed are a system, apparatus, and method for 3D object segmentation within an environment. Image frames are obtained from one or more depth cameras and/or at different times, and planar segments are extracted from data obtained from the image frames. Candidate segments that comprise a non-planar object surface are identified from the extracted planar segments; in one aspect, certain extracted planar segments are identified as comprising a non-planar object surface and are referred to as candidate segments. Confidence of preexisting candidate segments is adjusted in response to determining correspondence with a candidate segment. In one aspect, one or more preexisting candidate segments are determined to comprise a surface of a preexisting non-planar object hypothesis, and confidence in the non-planar object hypothesis is updated in response to determining correspondence with a candidate segment.
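One possible way to express the confidence-update step is sketched below; the ObjectHypothesis class, the corresponds() overlap test, and the fixed update step size are assumptions for illustration, not details given in the abstract.

    from dataclasses import dataclass

    @dataclass
    class ObjectHypothesis:
        """A preexisting non-planar object hypothesis built from earlier frames."""
        segments: list            # candidate segments believed to bound the object
        confidence: float = 0.5

    def update_hypotheses(hypotheses, new_candidates, corresponds, step=0.1):
        """Raise the confidence of any hypothesis whose segments correspond to a
        candidate segment from the current frame; decay the others."""
        for hyp in hypotheses:
            matched = any(corresponds(seg, cand)
                          for seg in hyp.segments for cand in new_candidates)
            if matched:
                hyp.confidence = min(1.0, hyp.confidence + step)
            else:
                hyp.confidence = max(0.0, hyp.confidence - step)
        return hypotheses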
Abstract:
In one example, a method for exiting an object detection pipeline includes: determining, while in the object detection pipeline, a number of features within a first tile of an image, wherein the image consists of a plurality of tiles; performing a matching procedure using at least a subset of the features within the first tile if the number of features within the first tile meets a threshold value; exiting the object detection pipeline if a result of the matching procedure indicates an object is recognized in the image; and presenting the result of the matching procedure.
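A minimal sketch of such an early-exit loop is shown below; the extract_features() and match() callables and the min_features threshold are hypothetical placeholders standing in for whatever feature extractor and matcher the pipeline uses.

    def detect_with_early_exit(tiles, extract_features, match, min_features=20):
        """Walk the image tile by tile; only tiles with enough features are matched,
        and the pipeline exits as soon as the matcher reports a recognized object."""
        for tile in tiles:
            features = extract_features(tile)
            if len(features) < min_features:
                continue                  # too few features: skip matching for this tile
            result = match(features)
            if result is not None:        # object recognized: exit the pipeline early
                return result
        return None                       # no object recognized in any tile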
Abstract:
Systems, apparatus and methods for triggering a depth sensor and/or limiting bandwidth and/or maintaining privacy are presented. By limiting use of a depth sensor to times when an optical image alone is insufficient, mobile device power is saved. Furthermore, by reducing a size of an optical image to only the portion of the image needed to detect an object, bandwidth is saved and privacy is maintained by not communicating unneeded or undesired information.
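A rough sketch of this triggering and cropping logic follows, where detect_2d(), acquire_depth(), detect_3d(), and the (x, y, w, h) region format are all assumed placeholders rather than interfaces described in the abstract.

    def process_frame(rgb_image, detect_2d, acquire_depth, detect_3d):
        """Attempt detection on the optical image alone; trigger the depth sensor only
        when that is inconclusive, and crop to the region of interest before any
        further processing or transmission."""
        result, roi = detect_2d(rgb_image)   # roi = (x, y, w, h) around the candidate object
        if result is not None:
            return result                    # optical image was sufficient; depth sensor stays off
        if roi is None:
            return None                      # nothing worth triggering the depth sensor for

        depth_map = acquire_depth()          # depth sensing is triggered only on demand
        x, y, w, h = roi
        # Sending only the cropped region limits bandwidth and withholds the rest of the scene.
        return detect_3d(rgb_image[y:y + h, x:x + w], depth_map[y:y + h, x:x + w])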
Abstract:
A difference in intensities of a pair of pixels in an image is repeatedly compared to a threshold, with the pair of pixels being separated by at least one pixel (“skipped pixel”). When the threshold is found to be exceeded, a selected position of a selected pixel in the pair and at least one additional position adjacent to the selected position are added to a set of positions. The comparing and adding are performed multiple times to generate multiple such sets, each set identifying a region in the image, e.g. an MSER. Sets of positions, identifying regions whose attributes satisfy a test, are merged to obtain a merged set. Intensities of pixels identified in the merged set are used to generate binary values for the region, followed by classification of the region as text/non-text. Regions classified as text are supplied to an optical character recognition (OCR) system.
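The core skipped-pixel comparison might look roughly like the following, assuming an 8-bit grayscale 2D array, illustrative threshold and skip values, and a darker-pixel selection rule; only horizontal pairs are scanned here for brevity, and the later merging, binarization, and text/non-text classification steps are not shown.

    def seed_positions(gray, threshold=30, skip=1):
        """Scan each row comparing a pixel to the one skip+1 columns ahead; where the
        intensity difference exceeds the threshold, record the darker pixel of the
        pair and one adjacent position as seeds for region extraction."""
        seeds = set()
        h, w = gray.shape
        for y in range(h):
            for x in range(w - skip - 1):
                a = int(gray[y, x])
                b = int(gray[y, x + skip + 1])
                if abs(a - b) > threshold:
                    sel_x = x if a < b else x + skip + 1   # select the darker pixel of the pair
                    seeds.add((y, sel_x))
                    seeds.add((y, min(w - 1, sel_x + 1)))  # one additional adjacent position
        return seeds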