Abstract:
A system and method for accurately mapping between image coordinates and geo-coordinates, called geo-spatial registration. The system utilizes the imagery and terrain information contained in the geo-spatial database to precisely align geodetically calibrated reference imagery with an input image, e.g., dynamically generated video images, and thus achieve highly accurate identification of locations within the scene. When a sensor, such as a video camera, images a scene contained in the geo-spatial database, the system recalls a reference image pertaining to the imaged scene. This reference image is aligned very accurately with the sensor's images using a parametric transformation. Thereafter, other information associated with the reference image can easily be overlaid upon or otherwise associated with the sensor imagery.
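As one illustration of the alignment step, the sketch below estimates a parametric (projective) transformation between a geodetically calibrated reference image and a sensor frame using standard OpenCV feature matching, then maps a sensor pixel into reference-image coordinates. The feature detector, matcher, and RANSAC threshold are assumptions chosen for illustration, not the patented technique.

```python
# Illustrative sketch only: aligns a sensor frame to a geodetically calibrated
# reference image with a projective (homography) model, then maps a pixel in
# the sensor frame back to reference-image coordinates, where the reference's
# geodetic calibration can yield geo-coordinates.
import cv2
import numpy as np

def align_to_reference(reference_gray, sensor_gray):
    orb = cv2.ORB_create(nfeatures=2000)
    kp_ref, des_ref = orb.detectAndCompute(reference_gray, None)
    kp_sen, des_sen = orb.detectAndCompute(sensor_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_sen, des_ref), key=lambda m: m.distance)

    src = np.float32([kp_sen[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Parametric transformation (sensor -> reference), robust to outliers.
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

def sensor_pixel_to_reference(H, x, y):
    # Map a sensor-frame pixel into reference-image coordinates.
    pt = np.float32([[[x, y]]])
    mapped = cv2.perspectiveTransform(pt, H)
    return tuple(mapped[0, 0])
```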
Abstract:
By adding a side network to a face recognition network, the output of early convolution blocks may be used to determine relative bounding box values. The relative bounding box values may be used to refine existing bounding box values, with the aim of improving the embedding vectors generated by the face recognition network.
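The sketch below shows one way such a side network could be attached in PyTorch: a small head reads an early convolutional feature map and regresses relative offsets (dx, dy, dw, dh) that refine an existing bounding box before the face crop is passed on for embedding. The layer sizes and offset parameterization are assumptions, not the disclosed architecture.

```python
# Minimal sketch, assuming an early feature map of shape (B, C, H, W) is
# available from the face recognition backbone. All layer sizes and the
# offset parameterization are illustrative assumptions.
import torch
import torch.nn as nn

class SideBoxHead(nn.Module):
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 4),  # relative bounding box values: dx, dy, dw, dh
        )

    def forward(self, early_features):
        return self.net(early_features)

def refine_box(box, rel):
    # box: (x, y, w, h) in pixels; rel: length-4 tensor (dx, dy, dw, dh)
    # expressed relative to the box width and height.
    x, y, w, h = box
    dx, dy, dw, dh = rel
    return (x + dx * w, y + dy * h, w * torch.exp(dw), h * torch.exp(dh))
```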
Abstract:
Iris recognition is achieved by (1) iris acquisition that permits a user to self-position his or her eye into an imager's field of view without the need for any physical contact, (2) spatially locating, within a digitized video image of the user's eye, the data that defines solely the iris, without any initial spatial condition of the iris being provided, and (3) pattern matching the spatially located iris data against stored data defining a model iris by means of normalized spatial correlation: the distinctive spatial characteristics of the two irises, spatially registered with one another, are first compared at each of a plurality of spatial scales to quantitatively determine a goodness value of match at that scale, and whether the pattern of the user's iris matches the model iris is then judged according to a certain combination of the quantitatively determined goodness values across those scales.
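A minimal sketch of the matching step follows, assuming the probe and model images are already spatially registered, same-size crops containing only the iris. A Gaussian pyramid supplies the plurality of spatial scales, normalized correlation yields a per-scale goodness value, and a simple mean-and-threshold rule stands in for the combination used to judge a match; the pyramid depth, combination rule, and threshold are illustrative, not the claimed method.

```python
# Sketch under assumptions: iris_probe and iris_model are registered grayscale
# crops of the iris alone, with identical dimensions.
import cv2
import numpy as np

def normalized_correlation(a, b):
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def multiscale_match(iris_probe, iris_model, levels=4, threshold=0.5):
    goodness = []
    p = iris_probe.astype(np.float32)
    m = iris_model.astype(np.float32)
    for _ in range(levels):
        # Goodness of match at the current spatial scale.
        goodness.append(normalized_correlation(p, m))
        p, m = cv2.pyrDown(p), cv2.pyrDown(m)
    # Illustrative combination rule: mean goodness over scales vs. a threshold.
    return np.mean(goodness) > threshold, goodness
```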
Abstract:
The present invention is embodied in a method for representing and analyzing spatiotemporal data in order to make qualitative yet semantically meaningful distinctions among various regions of the data at an early processing stage. In one embodiment of the invention, successive frames of image data are analyzed to classify spatiotemporal regions as being stationary, exhibiting coherent motion, exhibiting incoherent motion, exhibiting scintillation, or being so lacking in structure as to not support further inference. The exemplary method includes filtering the image data in a spatiotemporal plane to identify regions that exhibit these various spatiotemporal characteristics. The output data provided by these filters is then used to classify the data.
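The sketch below conveys the flavor of such an early, qualitative classification, not the disclosed spatiotemporal-plane filters: simple per-block statistics over a frame stack (spatial gradient energy, temporal change, and the residual of a single-velocity brightness-constancy fit) stand in for the filter outputs, and all thresholds and labels are assumptions.

```python
# Illustrative sketch only: classify small blocks of a (T, H, W) grayscale
# frame stack from crude spatiotemporal statistics. Thresholds are assumptions.
import numpy as np

def classify_blocks(frames, block=16, t_struct=1.0, t_change=2.0, t_coherent=0.5):
    frames = frames.astype(np.float32)
    It = np.diff(frames, axis=0)                      # temporal derivative
    Iy, Ix = np.gradient(frames[:-1], axis=(1, 2))    # spatial derivatives
    H, W = frames.shape[1:]
    labels = {}
    for r in range(0, H - block + 1, block):
        for c in range(0, W - block + 1, block):
            sl = (slice(None), slice(r, r + block), slice(c, c + block))
            struct = np.hypot(Ix[sl], Iy[sl]).mean()
            change = np.abs(It[sl]).mean()
            if struct < t_struct:
                labels[(r, c)] = "unstructured"       # too little structure to infer
            elif change < t_change:
                labels[(r, c)] = "stationary"
            else:
                # Coherent motion: temporal change largely explained by the
                # spatial gradients carried by a single block velocity (u, v),
                # fit here by least squares on the brightness-constancy equation.
                A = np.c_[Ix[sl].ravel(), Iy[sl].ravel()]
                b = -It[sl].ravel()
                uv, *_ = np.linalg.lstsq(A, b, rcond=None)
                residual = np.abs(A @ uv - b).mean() / (change + 1e-8)
                if residual < t_coherent:
                    labels[(r, c)] = "coherent motion"
                else:
                    # This crude proxy does not separate incoherent motion
                    # from scintillation (flicker); finer filters would.
                    labels[(r, c)] = "incoherent motion or scintillation"
    return labels
```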