Abstract:
An image capturing apparatus, method, and storage medium for identifying image metadata based on user interest. The image capturing apparatus includes a first image capturing unit configured to capture an image of a scene for a user, a second image capturing unit configured to capture an image of the user, an identification unit configured to identify at least one region of interest of the scene based on a combination of eye and facial characteristics of the user of the image capturing apparatus during an image capturing operation, a processing unit configured to analyze at least facial characteristics of the user associated with each region of interest during the image capturing operation, a determining unit configured to determine a facial expression classification associated with each region of interest based on corresponding analyzed facial characteristics for each region during the image capturing operation, a recording unit configured to record facial expression metadata based on information representing the at least one region of interest and the facial expression classification associated with an image captured during the image capturing operation, and a rendering unit configured to render the image using the recorded facial expression metadata.
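As a rough illustration of the kind of record such an apparatus might produce, the following Python sketch pairs each region of interest with a facial expression classification and attaches the result to a captured image. The class names, fields, and example values are assumptions made for illustration, not the claimed design.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RegionOfInterest:
    # Bounding box of the region in the captured scene image (pixels).
    x: int
    y: int
    width: int
    height: int

@dataclass
class FacialExpressionMetadata:
    # One entry per region of interest identified during capture.
    region: RegionOfInterest
    expression: str          # e.g. "smile", "neutral", "surprise"
    confidence: float        # classifier confidence in [0, 1]

@dataclass
class CapturedImageRecord:
    image_path: str
    metadata: List[FacialExpressionMetadata] = field(default_factory=list)

def record_expression(record: CapturedImageRecord,
                      region: RegionOfInterest,
                      expression: str,
                      confidence: float) -> None:
    """Attach a facial-expression classification to a region of interest."""
    record.metadata.append(
        FacialExpressionMetadata(region, expression, confidence))

# Example: tag one region of a captured frame as "smile".
shot = CapturedImageRecord("IMG_0001.jpg")
record_expression(shot, RegionOfInterest(120, 80, 200, 150), "smile", 0.92)
```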
Abstract:
An image selected to be printed is rendered for display, prior to printing, based on the relative position and orientation of a display in relation to a user's head, where the displayed rendered image is a representation of what the rendered image will look like when printed. The user's eye movement relative to the rendered image is tracked, with at least one area of interest in the image to the viewer being determined based on the viewer's eye movement, an imaging property of the at least one area of interest is adjusted, the image to be printed is rendered based on adjusting the imaging property, and the image is printed.
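The adjustment step can be pictured as a local edit applied only to the region the viewer fixated on before the image is sent to the printer. The sketch below is a minimal illustration, assuming a grayscale image stored as nested lists and a brightness gain as the adjusted imaging property; the function name and parameters are hypothetical.

```python
def adjust_area_of_interest(image, area, gain=1.2):
    """Boost brightness inside a rectangular area of interest.

    image : list of rows of grayscale pixel values in [0, 255]
    area  : (x, y, width, height) rectangle reported by the eye tracker
    gain  : hypothetical imaging-property adjustment (brightness gain)
    """
    x, y, w, h = area
    adjusted = [row[:] for row in image]          # work on a copy
    for row in range(y, min(y + h, len(image))):
        for col in range(x, min(x + w, len(image[0]))):
            adjusted[row][col] = min(255, int(image[row][col] * gain))
    return adjusted

# Example: brighten a 4x4 patch the viewer fixated on in an 8x8 preview.
preview = [[100] * 8 for _ in range(8)]
print(adjust_area_of_interest(preview, (2, 2, 4, 4))[3][3])   # 120
```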
Abstract:
An image is displayed by determining a relative position and orientation of a display in relation to a viewer's head, and rendering an image based on the relative position and orientation. The viewer's eye movement relative to the rendered image is tracked, with at least one area of interest in the image to the viewer being determined based on the viewer's eye movement, and an imaging property of the at least one area of interest is adjusted. Computer-generated data is obtained for display based on the at least one area of interest. At least one imaging property of the computer-generated data is adjusted according to the at least one imaging property that was adjusted for the at least one area of interest and the computer-generated data is displayed in the at least one area of interest along with a section of the image displayed in the at least one area of interest.
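To illustrate how computer-generated data could take on the same adjusted imaging property as the area of interest it is displayed in, the sketch below scales an overlay tile by the same hypothetical gain before blending it into that region of the image. The blending weight, gain, and layout are illustrative assumptions.

```python
def composite_overlay(image, overlay, area, gain=1.2, alpha=0.5):
    """Blend computer-generated data into the area of interest.

    The overlay is scaled by the same gain that was applied to the area
    of interest, so both share the adjusted imaging property.
    """
    x, y, _, _ = area
    out = [row[:] for row in image]
    for r, overlay_row in enumerate(overlay):
        for c, value in enumerate(overlay_row):
            row, col = y + r, x + c
            if 0 <= row < len(out) and 0 <= col < len(out[0]):
                cg = min(255, int(value * gain))              # matched property
                out[row][col] = int(alpha * cg + (1 - alpha) * out[row][col])
    return out

# Example: place a 2x2 computer-generated patch at the fixated region.
base = [[100] * 8 for _ in range(8)]
print(composite_overlay(base, [[200, 200], [200, 200]], (2, 2, 2, 2))[2][2])  # 170
```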
Abstract:
An image is displayed by determining a relative position and orientation of a display in relation to a viewer's head, and rendering an image based on the relative position and orientation. The viewer's eye movement relative to the rendered image is tracked, with at least one area of interest in the image to the viewer being determined based on the viewer's eye movement, and an imaging property of the at least one area of interest is adjusted.
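One simple way to turn tracked eye movement into an area of interest is to cluster gaze samples and take a padded bounding box around the dominant fixation. The sketch below assumes that heuristic (a fixed radius around the median gaze position); it is an illustration, not the claimed tracking method.

```python
from statistics import median

def area_of_interest(gaze_points, radius=50, margin=20):
    """Estimate an area of interest from tracked gaze samples.

    gaze_points : list of (x, y) display coordinates from the eye tracker
    radius      : samples within this distance of the median gaze position
                  count as one fixation cluster (assumed heuristic)
    margin      : padding added around the cluster's bounding box
    """
    mx = median(p[0] for p in gaze_points)
    my = median(p[1] for p in gaze_points)
    cluster = [(x, y) for x, y in gaze_points
               if (x - mx) ** 2 + (y - my) ** 2 <= radius ** 2]
    xs = [x for x, _ in cluster]
    ys = [y for _, y in cluster]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) - min(xs) + 2 * margin, max(ys) - min(ys) + 2 * margin)

# Example: most samples dwell near (310, 200); the outlier is ignored.
samples = [(300, 200), (310, 205), (305, 210), (900, 50), (315, 198)]
print(area_of_interest(samples))   # (280, 178, 55, 52)
```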
Abstract:
The present invention provides for determining a gamut boundary description for a color device, the color device being characterized at least by a destination transform which converts colors from a device-independent color space to a device-dependent color space and which reports out-of-gamut colors. A set of sample values is determined in the device-independent color space. For each of the sample values within the set of sample values, the destination transform is applied to the sample value, and in a case where the sample value is in gamut, the sample value is included within a set of gamut boundary values. The gamut boundary description is determined by forming a set of polygonal surfaces based on the set of gamut boundary values. Accordingly, a gamut boundary description is determined without necessarily having to sample additional color values as the number of colorant channels for the color device increases.
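The sampling loop lends itself to a short prototype. In the sketch below, the device-independent space is a toy unit cube, the destination transform is a stand-in that reports out-of-gamut colors by returning None, and scipy's convex hull serves as one possible way to form the polygonal surfaces; all of these choices are assumptions, not the described transform.

```python
import itertools
import numpy as np
from scipy.spatial import ConvexHull

def destination_transform(ci):
    """Stand-in destination transform: device-independent -> device values.

    Returns None to report an out-of-gamut color (here: any channel of the
    toy device mapping falling outside [0, 1]).
    """
    device = [0.8 * ci[0] + 0.1, 1.2 * ci[1] - 0.05, ci[2] ** 1.5]
    return device if all(0.0 <= c <= 1.0 for c in device) else None

# 1. Determine a set of sample values in the device-independent space.
steps = np.linspace(0.0, 1.0, 9)
samples = list(itertools.product(steps, repeat=3))

# 2. Apply the destination transform; keep samples reported as in gamut.
gamut_boundary_values = [s for s in samples
                         if destination_transform(s) is not None]

# 3. Form polygonal surfaces over the in-gamut samples (convex hull here).
hull = ConvexHull(np.array(gamut_boundary_values))
print(len(gamut_boundary_values), "in-gamut samples,",
      len(hull.simplices), "triangular faces")
```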
Abstract:
A device model object which numerically constructs colorimetric measurements based on access to a spectrally-based device profile. In situations where a color management module issues a request for spectral measurements, the device model object provides spectral measurements directly from the spectrally-based device profile. However, in situations where the color management module issues a request for colorimetric measurements, the device model object numerically constructs colorimetric measurements based on numerical integration of spectral measurements from the spectrally-based device profile against a viewing condition white point. The constructed measurements are provided to the color management module and are also cached for possible future use. In this way, the device model object is able to support requests for both measurement-based device profiles and spectrally-based device profiles.
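The numerical construction amounts to a weighted sum of spectral samples against color-matching functions and the viewing-condition white point, with the result cached for reuse. The sketch below uses a deliberately coarse, made-up spectral grid and weights purely to show the integrate-then-cache pattern; real profiles supply their own sampling, color-matching data, and normalization.

```python
from functools import lru_cache

# Coarse, made-up spectral grid and color-matching weights (illustrative only).
WAVELENGTHS = (450, 500, 550, 600, 650)
CMF_X = (0.34, 0.05, 0.43, 1.06, 0.28)
CMF_Y = (0.04, 0.32, 0.99, 0.63, 0.11)
CMF_Z = (1.77, 0.27, 0.01, 0.00, 0.00)
ILLUMINANT = (1.0, 1.0, 1.0, 1.0, 1.0)   # placeholder viewing-condition white

def spectral_measurement(patch_id):
    """Stand-in for reading a spectral reflectance from the device profile."""
    profile = {"patch0": (0.2, 0.4, 0.8, 0.6, 0.3)}
    return profile[patch_id]

@lru_cache(maxsize=None)                  # cache constructed colorimetry
def colorimetric_measurement(patch_id):
    """Numerically integrate spectral data against the white point and CMFs."""
    refl = spectral_measurement(patch_id)
    k = 100.0 / sum(i * y for i, y in zip(ILLUMINANT, CMF_Y))
    X = k * sum(r * i * x for r, i, x in zip(refl, ILLUMINANT, CMF_X))
    Y = k * sum(r * i * y for r, i, y in zip(refl, ILLUMINANT, CMF_Y))
    Z = k * sum(r * i * z for r, i, z in zip(refl, ILLUMINANT, CMF_Z))
    return (X, Y, Z)

print(colorimetric_measurement("patch0"))   # computed once, cached afterwards
```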
Abstract:
An overall color transformation is constructed from multiple individual color transformation steps, the overall color transformation being used by a color management system to transform colors from one color space to another. A sequence of add operations is executed sequentially, each add operation adding a single one of the individual transformation steps to an intermediate transformation constructed from preceding add operations, and each add operation returning at least one value which characterizes the add operation. The sequence of subsequent add operations is altered based on preceding ones of the returned values.
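One way to picture the feedback loop: each add returns a value characterizing what it did, and the driver uses those returned values to alter the remaining sequence of adds. In the sketch below the returned value is a hypothetical flag indicating that a step was merged with the previous one, and the driver drops a queued no-op step when it sees that flag; both rules are assumptions chosen only to show the control flow.

```python
class TransformBuilder:
    """Builds an overall color transformation from individual steps."""

    def __init__(self):
        self.steps = []

    def add(self, step):
        """Add one transformation step to the intermediate transformation.

        Returns a value characterizing the add operation: here, True if the
        step collapsed into the previous step (two scale steps merge into
        one), False if it was appended as-is.
        """
        if self.steps and step[0] == "scale" and self.steps[-1][0] == "scale":
            self.steps[-1] = ("scale", self.steps[-1][1] * step[1])
            return True
        self.steps.append(step)
        return False

    def apply(self, value):
        for kind, amount in self.steps:
            value = value * amount if kind == "scale" else value + amount
        return value

# Driver: alter the remaining sequence based on preceding returned values.
builder = TransformBuilder()
pending = [("scale", 2.0), ("scale", 0.5),
           ("offset", 0.0), ("offset", 0.5), ("scale", 3.0)]
while pending:
    merged = builder.add(pending.pop(0))
    if merged and pending and pending[0] == ("offset", 0.0):
        pending.pop(0)   # drop a queued no-op step after a merge

print(builder.steps)        # [('scale', 1.0), ('offset', 0.5), ('scale', 3.0)]
print(builder.apply(1.0))   # 4.5 -> (1.0 * 1.0 + 0.5) * 3.0
```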
Abstract:
Managing color data to transform source color image data from a source device into destination color image data for rendering by a destination device, including accessing a source color data file corresponding to the source device, the source color data file containing source device color characteristic data, constructing a source color transform based on the source device color characteristic data contained in the source color data file, and applying the source color transform to the source color image data to transform the source color image data from a source device color space into interim color image data in an interim color space.
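A minimal picture of the construct-then-apply flow, assuming a toy source color data file that characterizes the source device by a gamma value and a 3x3 matrix into an interim XYZ-like space. The file contents, function names, and matrix values are illustrative assumptions.

```python
def construct_source_transform(source_color_data):
    """Build a source transform from device color characteristic data."""
    gamma = source_color_data["gamma"]
    matrix = source_color_data["rgb_to_interim"]

    def transform(rgb):
        linear = [channel ** gamma for channel in rgb]          # linearize
        return [sum(m * c for m, c in zip(row, linear))         # matrix into
                for row in matrix]                              # interim space
    return transform

# Hypothetical contents of a source color data file for the source device.
source_data = {
    "gamma": 2.2,
    "rgb_to_interim": [[0.41, 0.36, 0.18],
                       [0.21, 0.72, 0.07],
                       [0.02, 0.12, 0.95]],
}

to_interim = construct_source_transform(source_data)
print(to_interim([1.0, 0.5, 0.25]))   # interim color image data for one pixel
```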
Abstract:
A color management architecture includes multiple color transform modules chainable together by a framework, with each color transform module having access to color profiles which provide data necessary to convert color data in accordance with algorithmic functionality in the transform modules. The color profiles are stored in accordance with a pre-designated format, such as a standardized format that is neither vendor specific nor platform specific. Each color transform module further includes the functionality to read from and write to a phantom profile. The phantom profile is also organized in the same pre-designated format, and thus serves as a primary conduit for data transfer between chained ones of the color transform modules.
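The chaining and the phantom-profile conduit can be pictured as modules sharing a dictionary kept in one pre-designated layout: each module reads what an earlier module wrote and writes what a later module will need. The module names, keys, and trivial color math below are assumptions used only to show the data flow.

```python
class ColorTransformModule:
    """Base class: each module reads from and writes to the phantom profile."""
    def run(self, colors, phantom):
        raise NotImplementedError

class SourceModule(ColorTransformModule):
    def run(self, colors, phantom):
        phantom["working_space"] = "interim"      # write for the next module
        return [[c / 255.0 for c in pixel] for pixel in colors]

class DestinationModule(ColorTransformModule):
    def run(self, colors, phantom):
        assert phantom.get("working_space") == "interim"   # read what was written
        return [[round(c * 255) for c in pixel] for pixel in colors]

def run_framework(modules, colors):
    """Chain the modules; the phantom profile is the conduit between them."""
    phantom = {}              # organized in the same pre-designated format
    for module in modules:
        colors = module.run(colors, phantom)
    return colors

print(run_framework([SourceModule(), DestinationModule()], [[255, 128, 0]]))
# [[255, 128, 0]]
```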