Abstract:
Computer-implemented systems and methods for identifying an object in an image are provided. In one example, the method includes identifying a first object related to an electronic image. The image includes at least a second object. Based at least in part on the identity of the first object, social networking information related to the first object is used to programmatically identify the second object. The first object and/or the second object may be a person. In some embodiments, metadata associated with the image may be used to identify the second object. Based at least in part on the identifications, social networking information may be associated between the first object and the second object.
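A minimal sketch of how such an identification step might be composed is given below. The social graph structure, the metadata fields, the scoring weights, and all function names are illustrative assumptions for the sketch, not elements recited by the abstract.

```python
# Hypothetical sketch: rank candidate identities for an unknown second face in
# an image, given one already-identified person, that person's social
# connections, and image metadata. All data structures and weights are assumed.

from dataclasses import dataclass, field


@dataclass
class ImageMetadata:
    location: str | None = None           # e.g. a GPS-derived place name
    tagged_people: set[str] = field(default_factory=set)


def rank_candidates(first_person: str,
                    social_graph: dict[str, set[str]],
                    metadata: ImageMetadata) -> list[tuple[str, float]]:
    """Score possible identities for the second object (a person) using the
    first person's social connections and metadata associated with the image."""
    scores: dict[str, float] = {}

    # Direct social connections of the identified person are likely candidates.
    for friend in social_graph.get(first_person, set()):
        scores[friend] = scores.get(friend, 0.0) + 1.0

    # People already tagged in the image's metadata receive an additional boost.
    for tagged in metadata.tagged_people:
        if tagged != first_person:
            scores[tagged] = scores.get(tagged, 0.0) + 2.0

    # Highest score first; ties broken alphabetically for determinism.
    return sorted(scores.items(), key=lambda kv: (-kv[1], kv[0]))


if __name__ == "__main__":
    graph = {"alice": {"bob", "carol"}, "bob": {"alice"}}
    meta = ImageMetadata(location="beach", tagged_people={"carol"})
    print(rank_candidates("alice", graph, meta))
    # [('carol', 3.0), ('bob', 1.0)]
```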
Abstract:
Images are searched to locate faces that are the same as a query face. Images that include a face that is the same as the query face may be presented to a user as search result images. Images also may be sorted by the faces included in the images and presented to the user as sorted search result images. The user may provide explicit or implicit feedback regarding the search result images. Additional feedback may be inferred regarding the search result images based on the user-provided feedback, and the results may be updated based on the user-provided and inferred feedback.
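One way such a same-face search with feedback could be sketched is shown below; the embedding function is a stand-in for a real face-recognition model, and the threshold, feedback rules, and names are assumptions made for illustration.

```python
# Illustrative sketch only: match a query face against a gallery by embedding
# similarity, then re-rank using explicit user feedback plus a simple
# inferred-feedback rule. The embedding function is a placeholder assumption.

import numpy as np


def embed(face_image: np.ndarray) -> np.ndarray:
    """Stand-in for a real face-embedding model (e.g. a CNN); here it simply
    flattens and normalizes the pixels so the example runs end to end."""
    v = face_image.astype(float).ravel()
    return v / (np.linalg.norm(v) + 1e-9)


def search_same_face(query: np.ndarray,
                     gallery: list[np.ndarray],
                     threshold: float = 0.9) -> list[int]:
    """Return indices of gallery faces whose cosine similarity to the query
    exceeds a threshold, i.e. faces judged to show the same person."""
    q = embed(query)
    sims = [float(np.dot(q, embed(g))) for g in gallery]
    return [i for i, s in sorted(enumerate(sims), key=lambda t: -t[1])
            if s >= threshold]


def apply_feedback(results: list[int],
                   confirmed: set[int],
                   rejected: set[int]) -> list[int]:
    """Explicit feedback removes rejected images; a simple inferred rule
    promotes confirmed images to the front of the updated result list."""
    kept = [i for i in results if i not in rejected]
    return sorted(kept, key=lambda i: (i not in confirmed, results.index(i)))
```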
Abstract:
Similar faces may be determined within images based on human perception of facial similarity. The user may provide an image including a query face for which the user wishes to find similar faces. Similar faces may be determined based on similarity information, which may be generated from information related to a human perception of facial similarity. Images that include faces determined to be similar, based on the similarity information, may be provided to the user as search result images. The user then may provide feedback to indicate the user's perception of similarity between the query face and the search result images.
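A minimal sketch is given below, assuming the "similarity information" takes the form of per-feature weights learned from human pairwise similarity judgments; the feature representation and weights are illustrative placeholders, not the described method.

```python
# Sketch under an assumption: human similarity judgments are distilled into a
# weight vector, so features people rely on when judging facial similarity
# count more heavily in the distance computation.

import numpy as np


def perceptual_similarity(query_feat: np.ndarray,
                          candidate_feat: np.ndarray,
                          weights: np.ndarray) -> float:
    """Weighted similarity in feature space; larger weights emphasize the
    features that matter most to human perception of facial similarity."""
    diff = query_feat - candidate_feat
    return float(np.exp(-np.dot(weights, diff * diff)))


def rank_by_similarity(query_feat: np.ndarray,
                       gallery_feats: list[np.ndarray],
                       weights: np.ndarray,
                       top_k: int = 5) -> list[int]:
    """Return indices of the top_k gallery faces most similar to the query."""
    scored = [(perceptual_similarity(query_feat, f, weights), i)
              for i, f in enumerate(gallery_feats)]
    scored.sort(reverse=True)
    return [i for _, i in scored[:top_k]]
```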
Abstract:
Images may be registered using temporal (time-based) and spatial information. In a film implementation, because film is a sequence of frames, using information from neighboring frames may enable a temporally smoother visual experience. In addition, it may be beneficial to take advantage of the fact that consecutive frames are often shifted similarly during the photographic process. Distortion measures may be used that discount candidate transformations considered to be too far from one or more preferred transformations, such as, for example, an optimal transformation from another frame or block, or the currently optimal transformation from the same frame or block. Composite color images may be processed to provide registration of the underlying components.
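The sketch below illustrates one way such a penalized distortion measure might look, restricting the transformation to an integer translation for simplicity; the penalty weight, wrap-around shifting, and search window are assumptions made for the example.

```python
# Sketch of a distortion measure that discounts candidate transformations far
# from a preferred one (for example the transformation chosen for the previous
# frame), encouraging temporally smooth registration. Integer translations only.

import numpy as np


def shifted(block: np.ndarray, dx: int, dy: int) -> np.ndarray:
    """Circularly shift a block by (dx, dy); wrap-around is used only to keep
    the example short."""
    return np.roll(np.roll(block, dy, axis=0), dx, axis=1)


def penalized_distortion(ref: np.ndarray, mov: np.ndarray,
                         dx: int, dy: int,
                         preferred: tuple[int, int],
                         weight: float = 10.0) -> float:
    """Sum of squared differences plus a penalty that grows with the distance
    from the preferred (e.g. previous-frame) translation."""
    ssd = float(np.sum((ref - shifted(mov, dx, dy)) ** 2))
    penalty = weight * ((dx - preferred[0]) ** 2 + (dy - preferred[1]) ** 2)
    return ssd + penalty


def best_translation(ref: np.ndarray, mov: np.ndarray,
                     preferred: tuple[int, int],
                     search_radius: int = 4) -> tuple[int, int]:
    """Exhaustively test translations in a small window and keep the one with
    the lowest penalized distortion, favoring temporal smoothness."""
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            cost = penalized_distortion(ref, mov, dx, dy, preferred)
            if cost < best_cost:
                best_cost, best = cost, (dx, dy)
    return best
```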
Abstract:
Separations or images relating to film or other fields may be registered using a variety of features, such as, for example: (1) correcting one or more film distortions; (2) automatically determining a transformation to reduce a film distortion; (3) applying multiple criteria of merit to a set of features to determine a set of features to use in determining a transformation; (4) determining transformations for areas in an image or a separation in a radial order; (5) comparing areas in images or separations by weighting feature pixels differently than non-feature pixels; (6) determining distortion values for transformations by applying a partial distortion measure and/or using a spiral search configuration; (7) determining transformations by using different sets of features to determine corresponding transformation parameters in an iterative manner; and (8) applying a feathering technique to neighboring areas within an image or separation.
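As a concrete illustration of two of the listed features, the sketch below shows feature pixels weighted differently than non-feature pixels (feature 5) together with a partial distortion measure that aborts early and a spiral-ordered candidate search (feature 6); the weights, abort rule, and ordering are assumptions for the example rather than the recited method.

```python
# Illustrative sketch: weighted, partial distortion with early termination,
# plus a roughly spiral ordering of candidate offsets so promising candidates
# near the current best are evaluated first. Parameters are assumed.

import numpy as np


def weighted_partial_distortion(ref: np.ndarray,
                                cand: np.ndarray,
                                feature_mask: np.ndarray,
                                feature_weight: float = 4.0,
                                abort_above: float = float("inf")) -> float:
    """Accumulate weighted squared differences row by row, stopping early once
    the running total exceeds the best distortion found so far."""
    total = 0.0
    for r in range(ref.shape[0]):
        diff = (ref[r] - cand[r]) ** 2
        w = np.where(feature_mask[r], feature_weight, 1.0)   # feature pixels count more
        total += float(np.sum(w * diff))
        if total > abort_above:        # partial distortion: no need to finish the block
            return total
    return total


def spiral_offsets(radius: int) -> list[tuple[int, int]]:
    """Yield (dx, dy) offsets ordered by distance from the origin, approximating
    a spiral search configuration around the current best candidate."""
    offsets = [(dx, dy)
               for dy in range(-radius, radius + 1)
               for dx in range(-radius, radius + 1)]
    return sorted(offsets, key=lambda o: o[0] ** 2 + o[1] ** 2)
```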