Abstract:
A device and a method for generating an output image in which a subject has been captured are provided. A method of performing image processing may include: obtaining a raw image by a camera sensor of the device, by using a first processor configured to control the device; inputting the raw image to a first artificial intelligence (AI) model trained to scale image brightness, by using a second processor configured to perform AI-based image processing on the raw image; obtaining tone map data output from the first AI model, by using the second processor; and storing an output image generated based on the tone map data.
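A rough sketch of this split between a controlling processor and an AI processor is given below; the fake_tone_map_model stub, its gain heuristic, and the array shapes are illustrative assumptions rather than the patented model.

```python
import numpy as np

def fake_tone_map_model(raw: np.ndarray) -> np.ndarray:
    """Stand-in for the trained brightness-scaling AI model:
    returns a per-pixel gain map (the 'tone map data')."""
    luma = raw.mean(axis=-1, keepdims=True)
    # Boost dark regions, compress bright ones (illustrative heuristic only).
    return 1.0 / (0.5 + luma / (luma.max() + 1e-6))

def process_frame(raw: np.ndarray) -> np.ndarray:
    # Second processor: run AI inference on the raw frame to get tone map data.
    tone_map = fake_tone_map_model(raw.astype(np.float32))
    # Generate the output image from the tone map data, ready for storage.
    out = np.clip(raw * tone_map, 0, 255)
    return out.astype(np.uint8)

if __name__ == "__main__":
    frame = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
    print(process_frame(frame).shape)
```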
Abstract:
Provided is a display device that illuminates a printed material taking into consideration the characteristics of the printed material. The display device obtains an image for illuminating the printed material, and information pertaining to the optical characteristics, such as light reflectance or transmission, of the printed material. Then, on the basis of metadata pertaining to a luminance or color of the obtained image and the obtained information, the display device controls a light source illuminating the printed material. Through this, the display device enhances the dynamic range or the color gamut of the printed material.
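Controlling the light source from image metadata and the print's optical characteristics might, under simple assumptions, look like the following; the Lambertian reflection model and the backlight_level parameters are assumptions for illustration, not the claimed control method.

```python
def backlight_level(peak_luminance_nits: float,
                    reflectance: float,
                    max_source_nits: float = 1000.0) -> float:
    """Pick an illumination level so the reflected highlight approaches the
    peak luminance recorded in the image metadata.
    Assumes a simple Lambertian model: reflected = source * reflectance."""
    if reflectance <= 0:
        return 0.0
    needed = peak_luminance_nits / reflectance
    return min(needed, max_source_nits)

# e.g. metadata reports a 400-nit peak; the print reflects 80% of incident light
print(backlight_level(400.0, 0.8))  # -> 500.0
```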
Abstract:
An image capturing unit captures an image including an embedded image that is printed on the basis of image data in which at least a color component has been modulated according to additional information. An adjustment unit adjusts a white balance of the image captured by the image capturing unit on the basis of an adjustment value associated with the embedded image. A processing unit processes image data of the image captured by the image capturing unit whose white balance has been adjusted by the adjustment unit to read the additional information in the image captured by the image capturing unit.
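One minimal way to sketch the white-balance adjustment that precedes reading the additional information is shown below; the specific gains, the blue-difference channel, and the thresholding rule are illustrative assumptions, not the patented decoding scheme.

```python
import numpy as np

def read_embedded_bits(captured: np.ndarray, wb_gains=(1.9, 1.0, 1.6)) -> np.ndarray:
    """Adjust white balance with gains associated with the embedded image,
    then threshold a color-difference channel to recover the modulated data."""
    img = captured.astype(np.float32) * np.asarray(wb_gains)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    chroma = b - (r + g) / 2          # crude proxy for the modulated color component
    return (chroma > chroma.mean()).astype(np.uint8)
```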
Abstract:
An automatic white balance (AWB) method is performed on an image to adjust color gains of the image. The illuminant of the image is determined from among a set of candidate illuminants, where each candidate illuminant is described by a corresponding coordinate pair (p, q) in a chromaticity coordinate system. The illuminant of the image can be determined by calculating an indicator value for each candidate illuminant; determining a threshold for the indicator values of the candidate illuminants; identifying a subset of the candidate illuminants whose indicator values are not greater than the threshold; and, for all candidate illuminants in the subset, calculating a weighted average of the corresponding coordinate pairs to obtain an averaged coordinate pair. Chromatic adaptation of the illuminant in the PQ domain can also be performed on the image.
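A minimal sketch of the candidate-selection and weighted-averaging steps is given below, assuming a generic indicator_fn where lower values mean a better match; the threshold rule and inverse-indicator weighting are illustrative choices, not those claimed.

```python
import numpy as np

def estimate_illuminant(candidates, indicator_fn, image_stats):
    """candidates: list of (p, q) chromaticity pairs.
    indicator_fn(image_stats, pq) -> non-negative score, lower = better (assumed).
    Returns the weighted-average (p, q) over candidates under the threshold."""
    pq = np.asarray(candidates, dtype=np.float64)
    ind = np.array([indicator_fn(image_stats, c) for c in candidates])
    # Illustrative threshold: keep candidates within 10% of the score range of the best.
    threshold = ind.min() + 0.1 * (ind.max() - ind.min())
    keep = ind <= threshold
    weights = 1.0 / (ind[keep] + 1e-9)   # illustrative inverse-indicator weighting
    p_avg, q_avg = np.average(pq[keep], axis=0, weights=weights)
    return p_avg, q_avg
```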
Abstract:
In one example, a method for white balancing image data includes obtaining, for a zone of a color space, a mesh defining a plurality of polygons having vertices that are each associated with one or more white balance parameters; identifying a polygon of the plurality of polygons that includes a pixel of the image data; determining one or more white balance parameters for the pixel of the image data based on an interpolation of white balance parameters associated with vertices of the polygon that includes the pixel of the image data; and performing, based on the one or more white balance parameters for the pixel of the image data, a white balance operation on the pixel of the image data.
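Assuming a triangular mesh and barycentric interpolation (one common choice; the abstract only requires polygons with per-vertex parameters), the per-pixel interpolation step might be sketched as follows.

```python
import numpy as np

def barycentric_weights(p, tri):
    """Barycentric coordinates of point p inside triangle tri (3x2 array)."""
    a, b, c = tri
    m = np.column_stack((b - a, c - a))
    u, v = np.linalg.solve(m, p - a)
    return np.array([1.0 - u - v, u, v])

def interpolate_wb(pixel_chroma, triangle_vertices, vertex_gains):
    """Interpolate per-vertex white balance gains at the pixel's chromaticity."""
    w = barycentric_weights(np.asarray(pixel_chroma, float),
                            np.asarray(triangle_vertices, float))
    return w @ np.asarray(vertex_gains, float)   # weighted blend of vertex gains

# Example: triangle in (R/G, B/G) space with an (r_gain, b_gain) pair per vertex
tri = [(0.4, 0.4), (0.8, 0.4), (0.6, 0.9)]
gains = [(2.1, 1.4), (1.6, 1.7), (1.8, 1.5)]
print(interpolate_wb((0.6, 0.5), tri, gains))
```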
Abstract:
An image processing device includes a color reproduction characteristic acquiring unit that acquires a color reproduction characteristic of a display device, a color reproduction characteristic correction unit that corrects the color reproduction characteristic, a feature amount extraction unit that extracts a feature amount which is an object of the correction of the color reproduction characteristic acquired by the color reproduction characteristic acquiring unit, and an evaluation image selecting unit that selects, based on the feature amount, an evaluation image which is a source for generating a confirmation image including (i) an image when the color reproduction characteristic acquired by the color reproduction characteristic acquiring unit and before the correction is used and (ii) an image when the color reproduction characteristic after the correction by the color reproduction characteristic correction unit is used.
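A toy sketch of selecting an evaluation image from a stored library according to the extracted feature amount could look like the following; the feature vectors, library names, and Euclidean match score are all assumptions for illustration.

```python
import numpy as np

def select_evaluation_image(target_feature, library):
    """library: dict name -> feature vector describing a stored evaluation image.
    Picks the image whose features best match the feature amount being corrected."""
    best = min(library.items(),
               key=lambda kv: np.linalg.norm(np.asarray(kv[1]) -
                                             np.asarray(target_feature)))
    return best[0]

library = {"skin_tones": [0.8, 0.2, 0.1],
           "foliage":    [0.1, 0.9, 0.2],
           "sky":        [0.1, 0.2, 0.9]}
print(select_evaluation_image([0.7, 0.3, 0.1], library))  # -> "skin_tones"
```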
Abstract:
A method comprising: performing a first object recognition round on an image to detect at least a first object; matching the first detected object to a first reference object, thereby recognizing the first object; determining a chromatic adaptation transform between the first recognized object and the first reference object; applying the chromatic adaptation transform to the image; performing a second object recognition round on the chromatically adapted image to detect a second object that is different than the first recognized object; and matching the second detected object with a second reference object, thereby recognizing the second object.
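The chromatic adaptation transform between the recognized object and its reference could, for example, be realized as a von Kries-style diagonal matrix; the sketch below works in plain RGB rather than an LMS cone space, which is a simplification and not necessarily the transform used.

```python
import numpy as np

def von_kries_cat(recognized_rgb_white, reference_rgb_white):
    """Diagonal (von Kries-style) adaptation matrix mapping the white point
    observed on the recognized object to that of the reference object."""
    gains = np.asarray(reference_rgb_white, float) / np.asarray(recognized_rgb_white, float)
    return np.diag(gains)

def adapt_image(image, cat_matrix):
    """Apply the 3x3 adaptation matrix to every pixel of an HxWx3 image
    before running the second object recognition round."""
    return np.clip(image.astype(np.float64) @ cat_matrix.T, 0, 255).astype(np.uint8)
```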
Abstract:
In the image processing device, the image processing method, and the recording medium, the image extractor extracts, from the captured images, those captured images regarded as having been captured in the same time range, as extracted images. The target image determiner selects, as a target image, an extracted image that was captured by the capturing person who captured the largest number of extracted images and with a capturing device of the type used to capture the largest number of extracted images. The object image determiner selects, as an object image, an extracted image showing a subject similar to a subject present in the target image. The color table generator generates a color table for matching the colors of the object image to the colors of the target image. The color conversion processor carries out the color conversion by applying the color table to the object image.
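A color table of the kind described could, for example, be a per-channel lookup table obtained by histogram matching between the object image and the target image; the sketch below makes that assumption and is not necessarily the generator claimed.

```python
import numpy as np

def build_color_table(object_img, target_img):
    """Per-channel LUT mapping the object image's tone distribution onto the
    target image's, via histogram matching of 8-bit channel values."""
    table = np.empty((3, 256), dtype=np.uint8)
    for c in range(3):
        src_cdf = np.cumsum(np.bincount(object_img[..., c].ravel(), minlength=256))
        tgt_cdf = np.cumsum(np.bincount(target_img[..., c].ravel(), minlength=256))
        src_cdf = src_cdf / src_cdf[-1]
        tgt_cdf = tgt_cdf / tgt_cdf[-1]
        table[c] = np.searchsorted(tgt_cdf, src_cdf).clip(0, 255)
    return table

def apply_color_table(object_img, table):
    """Color conversion: run every pixel of the object image through the LUT."""
    out = np.empty_like(object_img)
    for c in range(3):
        out[..., c] = table[c][object_img[..., c]]
    return out
```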
Abstract:
A tincture adjustment value used to adjust a monochrome signal to a tincture desired by a user is set, and a tincture conversion table and chromaticity line table are generated based on that tincture adjustment value and the profile of an image output apparatus. Using the generated tables, a lightness signal L* corresponding to an input monochrome signal is converted into a distance signal l on a chromaticity line, and the distance signal l is converted into a chromaticity signal (a*, b*). The lightness signal L* and chromaticity signal (a*, b*) are converted into a color signal of the image output apparatus.
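A rough sketch of the two conversions (L* to a distance l on the chromaticity line, then l to (a*, b*)) is given below; the sinusoidal tincture curve and the hue-angle parameterization of the chromaticity line are assumed shapes, not the patented tables.

```python
import math

def lightness_to_distance(L_star, max_distance=20.0):
    """Tincture conversion: map L* (0-100) to a distance l along the chromaticity
    line, strongest at mid-tones and fading to zero at black/white (assumed shape)."""
    return max_distance * math.sin(math.pi * L_star / 100.0)

def distance_to_chromaticity(l, hue_angle_deg):
    """Chromaticity line table: turn the distance l into (a*, b*) along the line
    whose direction encodes the user's desired tincture (hue angle)."""
    theta = math.radians(hue_angle_deg)
    return l * math.cos(theta), l * math.sin(theta)

L = 50.0
l = lightness_to_distance(L)
a_star, b_star = distance_to_chromaticity(l, hue_angle_deg=80.0)  # warm, sepia-like tint
print(L, a_star, b_star)
```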
Abstract:
A device and a method for dynamically correcting camera module sensitivity variation using face data are disclosed. The method includes accessing a digital image frame of a scene, where the digital image frame originates from a camera module. In response to detection of a face area in the digital image frame, a face chromaticity is calculated, by a processor, from the face area detected in the digital image frame. The method further includes determining a lighting condition at the scene associated with the digital image frame. Further, the method includes comparing the face chromaticity with a reference face chromaticity associated with the lighting condition to determine a chromaticity gain shift. Thereafter, the method includes correcting a gray point curve of the camera module based on the chromaticity gain shift to obtain a corrected gray point curve of the camera module.
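A minimal sketch of the chromaticity-gain-shift computation and the gray point curve correction is shown below; representing the curve as (R/G, B/G) points and applying a uniform multiplicative shift are assumed simplifications, not the claimed correction.

```python
import numpy as np

def face_chromaticity(face_rgb_mean):
    """Chromaticity (R/G, B/G) of the averaged face-area color."""
    r, g, b = face_rgb_mean
    return np.array([r / g, b / g])

def correct_gray_point_curve(gray_points, face_chroma, reference_chroma):
    """Shift the module's gray point curve by the gain ratio between the observed
    and reference face chromaticities for the detected lighting condition."""
    gain_shift = np.asarray(reference_chroma) / np.asarray(face_chroma)
    return np.asarray(gray_points, float) * gain_shift  # per-point (R/G, B/G) scaling
```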