Abstract:
Methods and systems for generating an image quality metric are described. A reference image and a test image are first converted to the ITP color space. Difference images ΔI, ΔT, and ΔP are calculated from the color channels of the two images, then convolved with low-pass filters: one for the intensity (I) channel and one for the chroma channels (T or P). The image quality metric is computed as a function of the sum of squares of the filtered ΔI, ΔT, and ΔP values. The chroma low-pass filter is designed to maximize agreement between the image quality metric and subjective results.
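The pipeline above can be sketched as follows. The small binomial kernels are illustrative stand-ins for the abstract's filters (the actual chroma filter is tuned against subjective data), and mean-of-squares stands in for the unspecified final function:

```python
import numpy as np

def _lowpass(channel, kernel):
    # separable 1-D convolution along rows, then columns ('same' size)
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, channel)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)

def itp_quality_metric(ref_itp, test_itp, k_intensity=None, k_chroma=None):
    """Sum-of-squares metric over low-pass-filtered ITP channel differences."""
    k_i = np.array([0.25, 0.5, 0.25]) if k_intensity is None else k_intensity
    k_c = np.array([0.0625, 0.25, 0.375, 0.25, 0.0625]) if k_chroma is None else k_chroma
    d = np.asarray(ref_itp, float) - np.asarray(test_itp, float)  # (H, W, 3)
    f_i = _lowpass(d[..., 0], k_i)  # ΔI, intensity low-pass
    f_t = _lowpass(d[..., 1], k_c)  # ΔT, chroma low-pass
    f_p = _lowpass(d[..., 2], k_c)  # ΔP, chroma low-pass
    return float(np.mean(f_i**2 + f_t**2 + f_p**2))
```

Identical images yield a metric of zero; any channel difference raises it.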
Abstract:
A handheld imaging device has a data receiver that is configured to receive reference encoded image data. The data includes reference code values, which are encoded by an external coding system. The reference code values represent reference gray levels, which are selected using a reference grayscale display function based on the perceptual non-linearity of human vision adapted at different light levels to spatial frequencies. The imaging device also has a data converter that is configured to access a code mapping between the reference code values and device-specific code values of the imaging device. The device-specific code values are configured to produce gray levels that are specific to the imaging device. Based on the code mapping, the data converter is configured to transcode the reference encoded image data into device-specific image data, which is encoded with the device-specific code values.
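The transcoding step reduces to a table lookup once the code mapping exists. A minimal sketch, with a hypothetical 4-entry mapping; in the device described above, the mapping would relate the reference grayscale display function to the gray levels the specific device can produce:

```python
import numpy as np

def transcode(reference_codes, code_mapping):
    """Map reference code values to device-specific code values via a LUT."""
    lut = np.asarray(code_mapping)
    return lut[np.asarray(reference_codes)]

# hypothetical mapping for a 2-bit reference code space
device_lut = [0, 10, 25, 63]
transcode([0, 2, 3, 1], device_lut)  # -> array([ 0, 25, 63, 10])
```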
Abstract:
Input virtual reality (VR) imagery is received. Global motions represented in the input VR imagery relative to a viewer of a VR application are extracted. A dampening factor is applied to the global motions to generate dampened global motions. VR imagery to be rendered to the viewer at a time point is generated based on the input VR imagery and the dampened global motions.
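The dampening step itself is a simple scaling of the extracted motion. An illustrative sketch, assuming the global motion is represented as a vector of components; the renderer would then use the dampened motion when generating imagery for the next time point:

```python
def dampen_global_motion(motion, dampening_factor):
    """Scale a global-motion vector by a dampening factor in [0, 1]."""
    return tuple(dampening_factor * component for component in motion)

dampen_global_motion((4.0, -2.0, 1.0), 0.5)  # -> (2.0, -1.0, 0.5)
```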
Abstract:
Dithering techniques for images are described herein. An input image of a first bit depth is separated into a luma component and one or more chroma components. A model of the optical transfer function (OTF) of the human visual system (HVS) is used to generate dither noise, which is added to the chroma components of the input image. The model of the OTF is adapted in response to viewing distances determined from the spatial resolution of the chroma components. An image based on the original luma component and the noise-modified chroma components is quantized to a second bit depth, lower than the first bit depth, to generate an output dithered image.
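The overall flow can be sketched as below. Plain uniform noise stands in for the OTF-shaped dither noise described above, and `noise_amp` is an illustrative assumption; only the structure (noise on chroma, then quantization to a lower bit depth) follows the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

def dither_chroma(luma, chroma, in_bits=16, out_bits=8, noise_amp=0.5):
    """Quantize a luma/chroma image to a lower bit depth, adding dither
    noise to the chroma component first (luma passes through unmodified)."""
    scale = (2**out_bits - 1) / (2**in_bits - 1)
    noise = rng.uniform(-noise_amp, noise_amp, size=np.shape(chroma))

    def quantize(x):
        return np.clip(np.round(x * scale), 0, 2**out_bits - 1).astype(np.uint8)

    # noise is injected at the scale of one output quantization step
    return quantize(luma), quantize(chroma + noise / scale)
```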
Abstract:
A method for delivering media to a playback device includes outputting first test media to be viewed by a first user. The method further includes receiving a first user input related to a first perception of the first test media by the first user and indicating a first personalized quality of experience of the first user with respect to the first test media. The method further includes generating a first personalized sensitivity profile comprising one or more viewing characteristics of the first user based on the first user input, and determining, based at least in part on the first personalized sensitivity profile, a first media parameter. The first media parameter is determined so as to increase the efficiency of media delivery to the playback device over a network while preserving the first personalized quality of experience of the first user.
Abstract:
Systems and methods are disclosed for dynamically adjusting the backlight of a display during video playback. Given an input video stream and associated minimum, average, or maximum luminance values of the video frames, a function of these per-frame luminance values is filtered using a temporal filter to generate a filtered output value for each frame. The instantaneous dynamic range of a target display is determined based on the filtered output value and the minimum and maximum brightness values of the display. A backlight control level is computed based on the instantaneous dynamic range, and the input signal is tone-mapped by a display management process to be displayed on the target display at the selected backlight level. The design of a temporal filter based on an exponential moving average filter and scene-change detection is presented.
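The temporal filter described above can be sketched as an exponential moving average that resets at scene changes. The smoothing constant `alpha` is an illustrative assumption, and `scene_cuts` is assumed to come from a separate scene-change detector:

```python
def ema_backlight_filter(frame_luminances, alpha=0.1, scene_cuts=frozenset()):
    """Exponential moving average over per-frame luminance values,
    reset at the first frame and at each scene change."""
    filtered, ema = [], None
    for i, value in enumerate(frame_luminances):
        if ema is None or i in scene_cuts:
            ema = value  # reset: jump straight to the new scene's value
        else:
            ema = alpha * value + (1 - alpha) * ema
        filtered.append(ema)
    return filtered
```

Resetting at scene cuts avoids the slow exponential drift a plain EMA would show when brightness changes abruptly between scenes.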
Abstract:
A method may include generating a hybrid image associated with a first interpretation corresponding to a first value of a media parameter and a second interpretation corresponding to a second value of the media parameter. The hybrid image may include a first visibility ratio between the first interpretation and the second interpretation. The method may include refining the hybrid image to create a refined hybrid image that includes a second visibility ratio different from the first visibility ratio. The method may include displaying the refined hybrid image and receiving a user input related to a first perception of the refined hybrid image by a user. The method may include determining, based at least in part on the user input, an optimized value of the media parameter, and providing output media to a playback device for display to the user according to the optimized value of the media parameter.
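As a rough sketch of the visibility-ratio idea, a simple weighted blend of the two interpretations is shown below; the actual hybrid-image construction is not specified in the abstract, so the blend is purely illustrative. Raising the ratio makes the first interpretation dominate:

```python
import numpy as np

def make_hybrid(interp_a, interp_b, visibility_ratio):
    """Blend two interpretation images with a given visibility ratio
    (weight of A relative to B)."""
    w = visibility_ratio / (1.0 + visibility_ratio)
    return w * np.asarray(interp_a, float) + (1 - w) * np.asarray(interp_b, float)
```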