Abstract:
An image capturing apparatus, method, and storage medium identifying image metadata based on user interest. The image capturing apparatus includes a first image capturing unit configured to capture an image of a scene for a user, a second image capturing unit configured to capture an image of the user, an identification unit configured to identify at least one region of interest of the scene based on a combination of eye and facial characteristics of the user of the image capturing apparatus during an image capturing operation, a processing unit configured to analyze at least facial characteristics of the user associated with each region of interest during the image capturing operation, a determining unit configured to determine a facial expression classification associated with each region of interest based on corresponding analyzed facial characteristics for each region during the image capturing operation, a recording unit configured to record facial expression metadata based on information representing the at least one region of interest and the facial expression classification associated with an image captured during the image capturing operation, and a rendering unit configured to render the image using the recorded facial expression metadata.
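The recording step described above can be sketched in a few lines. This is a hypothetical illustration only: the region format, the `classify_expression` thresholds, and all names are assumptions, not drawn from the patent.

```python
from dataclasses import dataclass

# Hypothetical region format; the patent does not specify one.
@dataclass
class RegionOfInterest:
    x: int
    y: int
    w: int
    h: int

def classify_expression(smile_score: float, brow_raise: float) -> str:
    """Toy mapping from analyzed facial characteristics to a label."""
    if smile_score > 0.6:
        return "happy"
    if brow_raise > 0.6:
        return "surprised"
    return "neutral"

def record_metadata(regions, characteristics):
    """Pair each gaze-derived region with an expression classification,
    yielding metadata a rendering unit could later consult."""
    return [
        {"region": (r.x, r.y, r.w, r.h),
         "expression": classify_expression(*c)}
        for r, c in zip(regions, characteristics)
    ]

meta = record_metadata([RegionOfInterest(10, 20, 64, 64)], [(0.8, 0.1)])
print(meta[0]["expression"])  # happy
```

A renderer could then, for example, emphasize regions tagged "happy" when producing the final image.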
Abstract:
An image is displayed by determining a relative position and orientation of a display in relation to a viewer's head, and rendering an image based on the relative position and orientation. The viewer's eye movement relative to the rendered image is tracked, at least one area of interest to the viewer in the image is determined based on the viewer's eye movement, and an imaging property of the at least one area of interest is adjusted. Computer-generated data is obtained for display based on the at least one area of interest. At least one imaging property of the computer-generated data is adjusted according to the at least one imaging property that was adjusted for the at least one area of interest, and the computer-generated data is displayed in the at least one area of interest along with a section of the image displayed in that area.
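The matching step, adjusting the computer-generated data to agree with the area of interest, can be illustrated with a toy example. The gain-based brightness model and the names below are assumptions for illustration, not the patent's method.

```python
# Toy sketch: propagate the imaging-property adjustment made to the
# area of interest (here, a brightness gain) onto the computer-generated
# overlay data before compositing it into that area.
def match_overlay(aoi_brightness_gain, overlay):
    """Apply the same brightness gain the AOI received to the overlay."""
    adjusted = dict(overlay)
    adjusted["brightness"] = overlay["brightness"] * aoi_brightness_gain
    return adjusted

overlay = match_overlay(2.0, {"brightness": 0.5, "label": "annotation"})
print(overlay["brightness"])  # 1.0
```

Matching the adjustment keeps the inserted overlay visually consistent with the surrounding section of the image.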
Abstract:
An image selected to be printed is rendered for display, prior to printing, based on the relative position and orientation of a display in relation to a user's head, where the displayed rendered image is a representation of what the rendered image will look like when printed. The user's eye movement relative to the rendered image is tracked, at least one area of interest to the user in the image is determined based on the user's eye movement, an imaging property of the at least one area of interest is adjusted, the image to be printed is rendered based on the adjusted imaging property, and the image is printed.
Abstract:
An image is displayed by determining a relative position and orientation of a display in relation to a viewer's head, and rendering an image based on the relative position and orientation. The viewer's eye movement relative to the rendered image is tracked, at least one area of interest to the viewer in the image is determined based on the viewer's eye movement, and an imaging property of the at least one area of interest is adjusted.
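The eye-tracking step above can be sketched with a toy fixation model: average the tracked gaze samples into a padded box (the area of interest), then boost an imaging property inside that box. The padding, the 0-to-1 property value, and the boost factor are illustrative assumptions.

```python
def area_of_interest(gaze_points, pad=50):
    """Return a (left, top, right, bottom) box around the mean fixation."""
    cx = sum(p[0] for p in gaze_points) / len(gaze_points)
    cy = sum(p[1] for p in gaze_points) / len(gaze_points)
    return (cx - pad, cy - pad, cx + pad, cy + pad)

def adjust_property(value, boost=1.2):
    """Boost a generic 0..1 imaging property (e.g. local sharpness),
    clamped to the valid range."""
    return min(1.0, value * boost)

box = area_of_interest([(100, 100), (110, 90), (90, 110)])
print(box)  # (50.0, 50.0, 150.0, 150.0)
sharp = adjust_property(0.5)
```

A production tracker would also weight samples by dwell time and discard saccades, which this sketch omits.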
Abstract:
An image capturing apparatus and method for selective real-time focus/parameter adjustment. The image capturing apparatus includes a display unit, an interface unit, an adjustment unit, and a generation unit. The display unit is configured to display an image. The interface unit is configured to enable a user to select a plurality of regions of the image displayed on the display unit. The adjustment unit is configured to enable the user to adjust at least one focus/parameter of at least one selected region of the image displayed on the display unit. The generation unit is configured to convert the image including at least one adjusted selected region into image data, where at least one focus/parameter of the at least one adjusted selected region has been adjusted by the adjustment unit prior to conversion.
Abstract:
Image capture using an image capture device that includes an imaging assembly having a spectral sensitivity tunable in accordance with a spectral capture mask, and imaging optics having a polarization filter tunable in accordance with a polarization control mask for projecting an image of a scene onto the imaging assembly. A default polarization control mask is applied to the imaging optics, and a default spectral capture mask is applied to the imaging assembly. An image of the scene is captured using the imaging assembly.
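The capture sequence above (apply default masks, then capture) can be sketched as follows. The 2-D-list mask representation, the angle and channel values, and the `capture` callback are all assumptions; the patent does not specify a data format.

```python
def apply_masks(default_polarization, default_spectral, capture):
    """Apply the default polarization control mask to the imaging optics
    and the default spectral capture mask to the imaging assembly, then
    capture an image of the scene."""
    optics_state = {"polarization_mask": default_polarization}
    sensor_state = {"spectral_mask": default_spectral}
    return capture(optics_state, sensor_state)

image = apply_masks(
    [[0, 90], [90, 0]],          # per-pixel polarization angles (degrees)
    [["R", "G"], ["G", "B"]],    # per-pixel spectral sensitivities
    lambda optics, sensor: {"rows": len(sensor["spectral_mask"])},
)
```

In a real device the masks would drive tunable filter and sensor hardware rather than a stubbed callback.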
Abstract:
An image capture device includes an image sensor for capturing image data of a scene, and a display screen for displaying a preview of the captured image of the scene. Additionally, the image capture device includes a photovoltaic solar cell for outputting electrical energy responsive to environmental lighting conditions. A control section determines whether the image capture device is or is not currently being used in a bright environment. Responsive to a determination that the image capture device is currently being used in a bright environment, the control section increases brightness of the display screen, and switches electrical energy outputted from the photovoltaic cell for use by the display screen.
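The control section's decision logic can be sketched as a small pure function. The lux threshold and power figures below are invented for illustration and are not from the patent.

```python
def control_display(ambient_lux, pv_output_mw, bright_threshold=10000.0):
    """In a bright environment, raise display brightness and route
    photovoltaic output to the display screen; otherwise run normally."""
    bright = ambient_lux >= bright_threshold
    return {
        "brightness": "high" if bright else "normal",
        "pv_routed_to_display": bright and pv_output_mw > 0,
    }

outdoor = control_display(20000.0, 150.0)  # bright sunlight, PV producing
indoor = control_display(300.0, 5.0)       # dim room, little PV output
```

The design is self-reinforcing: the same sunlight that forces higher screen brightness also supplies the extra energy through the photovoltaic cell.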
Abstract:
A system for displaying three-dimensional (3-D) content and enabling a user to interact with the content in an immersive, realistic environment is described. The system has a display component that is non-planar and provides the user with an extended field-of-view (FOV), one factor in creating the immersive user environment. The system also has a tracking sensor component for tracking a user's face. The tracking sensor may include one or more 3-D and 2-D cameras. In addition to tracking the face or head, it may also track other body parts, such as hands and arms. An image perspective adjustment module processes data from the face tracking and enables the user to perceive the 3-D content with motion parallax. The hand and other body-part tracking data are used by gesture detection modules to detect collisions between the user's hand and the 3-D content. When a collision is detected, there may be tactile feedback to the user to indicate that there has been contact with a 3-D object. All of these components contribute toward creating an immersive and realistic environment for viewing and interacting with 3-D content.
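The motion-parallax effect described above can be illustrated with a toy perspective-offset function: content is shifted against the tracked head motion, attenuated with depth, so nearer content appears to move more. The depth model, units, and constants are illustrative assumptions, not the system's actual algorithm.

```python
def parallax_offset(head_dx, head_dy, depth_mm, screen_mm=600.0):
    """Screen-space offset for content depth_mm behind the display,
    given the head's lateral offset and an assumed viewing distance."""
    scale = screen_mm / (screen_mm + depth_mm)
    return (-head_dx * (1.0 - scale), -head_dy * (1.0 - scale))

near = parallax_offset(10.0, 0.0, 0.0)    # content at the screen plane
far = parallax_offset(10.0, 0.0, 600.0)   # content well behind the screen
```

Content at the screen plane does not shift at all, while deeper content shifts opposite the head motion, which is what produces the parallax depth cue.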