Abstract:
Separations or images relating to film or other fields may be registered using a variety of techniques, such as: (1) correcting one or more film distortions; (2) automatically determining a transformation to reduce a film distortion; (3) applying multiple criteria of merit to a set of features to select which features to use in determining a transformation; (4) determining transformations for areas in an image or a separation in a radial order; (5) comparing areas in images or separations by weighting feature pixels differently than non-feature pixels; (6) determining distortion values for transformations by applying a partial distortion measure and/or using a spiral search configuration; (7) determining transformations by using different sets of features to determine corresponding transformation parameters in an iterative manner; and (8) applying a feathering technique to neighboring areas within an image or separation.
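As a rough illustration of item (6) above, the following Python sketch scores candidate block translations with a partial distortion measure evaluated in a closest-first (spiral-like) search order. The function names, the wrap-around shift via np.roll, and the sum-of-absolute-differences measure are assumptions made for brevity, not the claimed method.

```python
import numpy as np

def spiral_offsets(radius):
    """All integer (dy, dx) shifts within `radius`, ordered closest-first so the
    search behaves like an outward spiral from the zero shift."""
    offsets = [(dy, dx) for dy in range(-radius, radius + 1)
                        for dx in range(-radius, radius + 1)]
    return sorted(offsets, key=lambda o: o[0] ** 2 + o[1] ** 2)

def best_shift(block, reference, radius=8):
    """Find the translation aligning `block` to `reference`, using a partial
    distortion measure: the row-by-row sum of absolute differences is abandoned
    as soon as it exceeds the best distortion seen so far."""
    block = block.astype(float)
    reference = reference.astype(float)
    best, best_offset = np.inf, (0, 0)
    for dy, dx in spiral_offsets(radius):
        candidate = np.roll(np.roll(reference, dy, axis=0), dx, axis=1)
        dist = 0.0
        for block_row, cand_row in zip(block, candidate):
            dist += np.abs(block_row - cand_row).sum()
            if dist >= best:   # partial distortion: this shift can no longer win
                break
        else:
            best, best_offset = dist, (dy, dx)
    return best_offset
```

Because near-zero shifts are visited first, the running best distortion tends to drop quickly, which lets the partial-distortion early exit prune most of the remaining candidates.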
Abstract:
A blotch may be identified and processed to reduce or eliminate it. The blotch may be in just one of several separations, and multiple separations may be used, for example, to identify the blotch. An implementation (i) compares a first component image of an image with a first component image of a reference image, (ii) compares a second component image of the image with a second component image of the reference image, and (iii) determines, based on these comparisons, whether the first component image of the image includes a blotch. Multiple image separations also, or alternatively, may be used, for example, to modify the blotch, as well as to evaluate whether a modification is beneficial.
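A minimal Python sketch of the comparison idea, under the assumption that a large frame-to-frame difference in one separation that is absent from another separation flags a candidate blotch; the threshold value and function name are illustrative, not taken from the disclosure.

```python
import numpy as np

def candidate_blotch_mask(comp_a, ref_a, comp_b, ref_b, threshold=30.0):
    """Flag pixels that differ strongly from the reference frame in one
    component image (separation) but not in another; genuine scene motion
    would normally change both components, whereas a blotch typically
    affects only one."""
    diff_a = np.abs(comp_a.astype(float) - ref_a.astype(float))
    diff_b = np.abs(comp_b.astype(float) - ref_b.astype(float))
    return (diff_a > threshold) & (diff_b <= threshold)
```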
Abstract:
Images may be registered using temporal (time-based) and spatial information. In a film implementation, because film is a sequence of frames, using information from neighboring frames may enable a temporally smoother visual experience. In addition, it may be beneficial to take advantage of the fact that consecutive frames are often shifted similarly during the photographic process. Distortion measures may be used that discount candidate transformations considered to be too far from one or more preferred transformations, such as, for example, an optimal transformation from another frame or block, or a currently-optimal transformation from the same frame or block. Composite color images may be processed to provide registration of underlying components.
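One possible form of such a biased distortion measure, sketched in Python; the additive penalty, its weight, and the translation-only model are assumptions made for illustration rather than the disclosed formulation.

```python
import numpy as np

def biased_distortion(block, reference, candidate, preferred, weight=0.5):
    """Distortion of a candidate shift, with a penalty that grows with its
    distance from a preferred shift (e.g., the optimal shift found for the
    previous frame or a neighboring block), discounting far-away candidates."""
    dy, dx = candidate
    shifted = np.roll(np.roll(reference.astype(float), dy, axis=0), dx, axis=1)
    pixel_term = np.abs(block.astype(float) - shifted).mean()
    bias_term = np.hypot(dy - preferred[0], dx - preferred[1])
    return pixel_term + weight * bias_term
```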
Abstract:
Certain disclosed implementations use digital image processing to reduce the differential resolution among separations or images in film frames, such as that resulting from red flare. A location in the red image may be selected using information from another image. The selected location may be modified using information from that other image. The selection may include comparing features of an edge in the red image with features of a corresponding edge in the other image. The modification may include performing wavelet transformations of the two images and copying certain coefficients (or a function of those coefficients) produced by applying the transformation to the other image into the coefficients produced by applying the transformation to the red image. The copied coefficients may be correlated with the selected location. Other disclosed techniques vary from the above and may be applied to other fields.
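A minimal sketch of the coefficient-copying step using the PyWavelets package; the single-level Haar transform, the choice of a sharper companion separation, and the mask handling are simplifying assumptions, not the disclosed procedure.

```python
import numpy as np
import pywt  # PyWavelets

def copy_detail_coefficients(red, other, mask, wavelet="haar"):
    """At locations flagged by `mask`, replace the red separation's detail
    (high-frequency) wavelet coefficients with those of a sharper separation,
    then reconstruct the red image from the modified coefficients."""
    cA_r, (cH_r, cV_r, cD_r) = pywt.dwt2(red.astype(float), wavelet)
    _,    (cH_o, cV_o, cD_o) = pywt.dwt2(other.astype(float), wavelet)

    # Map the pixel-domain mask onto the coefficient grid (one DWT level halves
    # each dimension), padding or truncating so the shapes line up.
    m = np.zeros(cH_r.shape, dtype=bool)
    ds = mask[::2, ::2].astype(bool)
    m[:ds.shape[0], :ds.shape[1]] = ds[:m.shape[0], :m.shape[1]]

    for red_band, other_band in ((cH_r, cH_o), (cV_r, cV_o), (cD_r, cD_o)):
        red_band[m] = other_band[m]

    return pywt.idwt2((cA_r, (cH_r, cV_r, cD_r)), wavelet)
```

Because only detail coefficients inside the mask are replaced, the red image's overall brightness (carried by the approximation band) is left untouched while its local sharpness is increased.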
Abstract:
Computer-implemented systems and methods for identifying an object in an image are provided. In one example, the method includes identifying a first object related to an electronic image. The image includes at least a second object. Based at least in part on the identity of the first object, social networking information related to the first object is used to programmatically identify the second object. The first object and/or the second object may be a person. In some embodiments, metadata associated with the image may be used to identify the second object. Based at least in part on the identifications, a social networking association may be made between the first object and the second object.
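One way to picture that flow is the hedged Python sketch below, in which every name (the friends and events_attended dictionaries, the score_face callable, the "event" metadata key) is a hypothetical stand-in rather than an API from the disclosure.

```python
def identify_second_person(first_person, image_meta, friends, events_attended, score_face):
    """Guess the identity of an unrecognized face by restricting candidates to
    people socially connected to an already-identified person in the image,
    optionally narrowed further by image metadata such as an event tag."""
    candidates = set(friends.get(first_person, ()))
    event = image_meta.get("event")
    if event:
        candidates &= {p for p, evs in events_attended.items() if event in evs}
    # score_face(person) -> similarity between that person's enrolled face and
    # the unidentified face in the image; higher means a more likely match.
    return max(candidates, key=score_face, default=None)
```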
Abstract:
Similar faces may be determined within images based on human perception of facial similarity. The user may provide an image including a query face for which the user wishes to find similar faces. Similar faces may be determined based on similarity information, which may be generated from information related to a human perception of facial similarity. Images that include faces determined to be similar, based on the similarity information, may be provided to the user as search result images. The user may then provide feedback to indicate the user's perception of similarity between the query face and the search result images.
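A toy Python sketch of the ranking-and-feedback loop; the attribute-vector representation, the weighted L1 distance, and the update rule are illustrative assumptions standing in for the "similarity information" described above.

```python
import numpy as np

def rank_similar_faces(query_vec, gallery_vecs, weights):
    """Order gallery faces by a weighted distance to the query face; the
    weights play the role of similarity information derived from human
    judgments about which facial attributes drive perceived similarity."""
    diffs = np.abs(gallery_vecs - query_vec)   # per-attribute differences
    scores = diffs @ weights                   # lower score = more similar
    return np.argsort(scores)

def update_weights_from_feedback(weights, query_vec, face_vec, judged_similar, lr=0.05):
    """Fold user feedback back into the weights: attributes that differ on a
    pair the user calls similar are down-weighted, and vice versa."""
    diff = np.abs(face_vec - query_vec)
    weights = weights + (-lr if judged_similar else lr) * diff
    return np.clip(weights, 0.0, None)
```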