Abstract:
Pixels of a display system may comprise light emitting elements whose light output levels are set based on image data, as well as background areas whose light reflectance levels are set based on the same image data. These light output levels and/or light reflectance levels may also be adjusted based on an ambient light level. The background areas may comprise light reflective elements which may be controlled individually or as a whole. The light output levels of the light emitting elements and the light reflectance levels of the light reflective elements are configured to collectively generate a pixel value for the pixel that depends on the image data. One or more modulation algorithms may be used to control the energy consumption, dynamic range, color gamut, point-spread function, etc., of the pixels in the display system.
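As an illustration only, the sketch below combines an emissive and a reflective contribution into a single pixel luminance under a simple additive model; the function name, constants, and ambient-light handling are assumptions, not the disclosed modulation algorithms.

```python
def pixel_luminance(code_value, ambient_lux, max_emission_nits=600.0,
                    max_reflectance=0.4):
    """Illustrative additive model: a pixel's luminance is the sum of an
    emissive term driven by the image code value and a reflective term
    from the background area, which is driven by the same image data and
    by the ambient light falling on the display."""
    level = code_value / 255.0                     # normalize an 8-bit code value
    emitted = level * max_emission_nits            # light-emitting element output
    # Reflective contribution: ~1 lux of diffuse illumination corresponds to
    # roughly 1/pi nits off a perfect Lambertian reflector.
    reflected = level * max_reflectance * ambient_lux / 3.14159
    return emitted + reflected

# Example: a mid-gray pixel under office lighting (~300 lux).
print(pixel_luminance(128, ambient_lux=300.0))
```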
Abstract:
A handheld imaging device has a data receiver that is configured to receive reference encoded image data. The data includes reference code values, which are encoded by an external coding system. The reference code values represent reference gray levels selected using a reference grayscale display function that is based on the perceptual non-linearity of human vision adapted at different light levels to spatial frequencies. The imaging device also has a data converter that is configured to access a code mapping between the reference code values and device-specific code values of the imaging device. The device-specific code values are configured to produce gray levels that are specific to the imaging device. Based on the code mapping, the data converter is configured to transcode the reference encoded image data into device-specific image data, which is encoded with the device-specific code values.
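The transcoding step can be pictured as a table lookup from reference code values to device-specific code values built by matching gray levels. In the sketch below, `ref_gsdf` and `device_inverse` are hypothetical stand-ins for the reference grayscale display function and the device response; they are not the functions described above.

```python
import numpy as np

# Hypothetical stand-ins (illustrative only) for the reference grayscale
# display function and a device's inverse response.
ref_gsdf = lambda codes: (codes / 1023.0) ** 2.4 * 4000.0            # code -> cd/m^2
device_inverse = lambda gray: np.clip(
    np.round((gray / 600.0) ** (1 / 2.2) * 255.0), 0, 255).astype(np.uint8)

def build_code_mapping(ref_codes):
    """For each reference code value, find the device-specific code value
    whose gray level best matches the reference gray level."""
    ref_gray = ref_gsdf(ref_codes)          # reference gray levels
    return device_inverse(ref_gray)         # closest device-specific codes

mapping = build_code_mapping(np.arange(1024))

def transcode(reference_encoded_image, mapping):
    """Transcode reference-encoded pixels into device-specific code values."""
    return mapping[reference_encoded_image]

device_image = transcode(np.array([[0, 512], [768, 1023]]), mapping)
```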
Abstract:
Embodiments are disclosed for hybrid near/far-field speaker virtualization. In an embodiment, a method comprises: receiving a source signal including channel-based audio or audio objects; generating near-field gain(s) and far-field gain(s) based on the source signal and a blending mode; generating a far-field signal based, at least in part, on the source signal and the far-field gain(s); rendering, using a speaker virtualizer, the far-field signal for playback of far-field acoustic audio through far-field speakers into an audio reproduction environment; generating a near-field signal based at least in part on the source signal and the near-field gain(s); prior to providing the far-field signal to the far-field speakers, sending the near-field signal to a near-field playback device or an intermediate device coupled to the near-field playback device; providing the far-field signal to the far-field speakers; and providing the near-field signal to the near-field speakers to synchronously overlay the far-field acoustic audio.
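A minimal sketch of the gain-generation and splitting step, assuming a mono source and a simple equal-power crossfade as the blending mode; the rendering, virtualization, and synchronization stages are outside this sketch.

```python
import numpy as np

def split_near_far(source, blend):
    """Split a source signal into near-field and far-field components.

    blend: 0.0 = all far-field speakers, 1.0 = all near-field playback device.
    Equal-power gains keep the overall loudness roughly constant.
    """
    near_gain = np.sin(blend * np.pi / 2)
    far_gain = np.cos(blend * np.pi / 2)
    near_signal = near_gain * source   # sent ahead to the near-field device
    far_signal = far_gain * source     # rendered through the speaker virtualizer
    return near_signal, far_signal

# Example: a 1 kHz test tone with 30% of its energy steered near-field.
t = np.arange(48000) / 48000.0
tone = np.sin(2 * np.pi * 1000 * t)
near, far = split_near_far(tone, blend=0.3)
```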
Abstract:
A highlight mask is generated for an image to identify one or more highlights in the image. One or more highlight classifiers are determined for the one or more highlights in the image. One or more highlight gains are applied with the highlight mask to luminance amplitudes of pixels in the one or more highlights in the image to generate a scaled image. The one or more highlight gains for the one or more highlights are determined based at least in part on the one or more highlight classifiers determined for the one or more highlights.
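One way to picture the scaling step, assuming a labeled highlight mask, per-highlight class names, and a hypothetical gain table (`GAIN_BY_CLASS`); the classifier itself and the actual gain derivation are not shown.

```python
import numpy as np

# Hypothetical gains per highlight class (e.g., specular vs. emissive vs. diffuse).
GAIN_BY_CLASS = {"specular": 2.0, "emissive": 1.5, "diffuse": 1.1}

def scale_highlights(luma, highlight_labels, classifiers):
    """Apply per-highlight gains to luminance amplitudes.

    luma:              2-D array of pixel luminance values
    highlight_labels:  2-D int array, 0 = background, k > 0 = highlight id k
    classifiers:       dict mapping highlight id -> class name
    """
    scaled = luma.astype(float).copy()
    for hid, cls in classifiers.items():
        gain = GAIN_BY_CLASS.get(cls, 1.0)
        mask = highlight_labels == hid      # highlight mask for this region
        scaled[mask] *= gain                # scale only the masked pixels
    return scaled

luma = np.random.rand(4, 4)
labels = np.zeros((4, 4), dtype=int)
labels[1:3, 1:3] = 1                        # one highlight region
scaled = scale_highlights(luma, labels, {1: "specular"})
```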
Abstract:
Several embodiments of display systems that use narrowband emitters are disclosed herein. In one embodiment, a display system comprises, for at least one primary color, a plurality of narrowband emitters distributed around the primary color point. The plurality of narrowband emitters provides a more regular spectral power distribution in a desired band of frequencies.
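The effect of clustering several narrowband emitters around a primary can be visualized by summing their individual spectra; the Gaussian line shapes, center wavelengths, and bandwidth below are illustrative assumptions.

```python
import numpy as np

def combined_spectrum(wavelengths_nm, centers_nm, fwhm_nm=10.0):
    """Sum Gaussian spectra of several narrowband emitters clustered around
    one primary, yielding a broader, more regular spectral power distribution."""
    sigma = fwhm_nm / 2.355                          # FWHM -> standard deviation
    total = np.zeros_like(wavelengths_nm, dtype=float)
    for c in centers_nm:
        total += np.exp(-0.5 * ((wavelengths_nm - c) / sigma) ** 2)
    return total / len(centers_nm)

wl = np.linspace(500, 560, 601)                      # sample the green band
single = combined_spectrum(wl, [530.0])              # one narrowband emitter
cluster = combined_spectrum(wl, [520.0, 530.0, 540.0])  # emitters around the primary
```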
Abstract:
An encoder receives a sequence of images in extended or visual dynamic range (VDR). For each image, a dynamic range compression function and associated parameters are selected to convert the input image into a second image with a lower dynamic range. Using the input image and the second image, a residual image is computed. The input VDR image sequence is coded using a layered codec that uses the second image as a base layer and a residual image, derived from the input and second images, as one or more residual layers. Using the residual image, a false contour detection (FCD) method estimates the number of perceptually visible false contours that may appear in the decoded VDR image and iteratively adjusts the dynamic range compression parameters to prevent or reduce the number of false contours. Examples that use a uniform dynamic range compression function are also described.
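The iterative adjustment loop might look roughly like the sketch below, where `compress`, `estimate_false_contours`, and the slope adjustment are placeholders rather than the disclosed compression function or FCD method.

```python
import numpy as np

def encode_with_fcd(vdr_image, compress, estimate_false_contours,
                    params, max_contours=0, max_iters=8):
    """Iteratively re-tune dynamic-range-compression parameters until the
    estimated count of visible false contours is acceptably low."""
    for _ in range(max_iters):
        base_layer = compress(vdr_image, params)       # lower-dynamic-range image
        residual = vdr_image - base_layer              # residual layer(s)
        count = estimate_false_contours(base_layer, residual)
        if count <= max_contours:
            break
        # Placeholder adjustment: relax the compression curve and retry.
        params = {**params, "slope": min(1.0, params["slope"] * 1.1)}
    return base_layer, residual, params

# Toy stand-ins: linear compression and a contour estimate that drops as the
# residual shrinks.
vdr = np.linspace(0.0, 1.0, 16)
base, res, tuned = encode_with_fcd(
    vdr,
    compress=lambda img, p: p["slope"] * img,
    estimate_false_contours=lambda base, res: int(res.max() > 0.3),
    params={"slope": 0.5},
)
```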
Abstract:
A method comprising: acquiring a set of voltage signals from a set of electrodes arranged in proximity to the ears of a user; determining, based on the set of voltage signals, an EOG gaze vector in ego-centric coordinates; determining, using a sensor device worn by the user, a head pose of the user in display coordinates; combining the EOG gaze vector and the head pose to obtain a gaze vector in display coordinates; and determining a gaze point by calculating an intersection of the gaze vector and an imaging surface having a known position in display coordinates.
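The geometric part of the combination can be sketched as rotating the ego-centric EOG gaze vector by the head pose and intersecting the resulting ray with the display plane; the coordinate conventions and example values below are assumptions.

```python
import numpy as np

def gaze_point_on_display(eog_gaze_ego, head_rotation, head_position,
                          plane_point, plane_normal):
    """Rotate the ego-centric EOG gaze vector by the head pose and intersect
    the resulting ray with the display plane (all in display coordinates)."""
    gaze_display = head_rotation @ eog_gaze_ego        # gaze direction in display coords
    denom = gaze_display @ plane_normal
    if abs(denom) < 1e-9:
        return None                                    # gaze is parallel to the display
    t = ((plane_point - head_position) @ plane_normal) / denom
    return head_position + t * gaze_display            # the gaze point

# Example: looking straight ahead at a screen 2 m in front of the user.
point = gaze_point_on_display(
    eog_gaze_ego=np.array([0.0, 0.0, 1.0]),
    head_rotation=np.eye(3),
    head_position=np.array([0.0, 1.6, 0.0]),
    plane_point=np.array([0.0, 1.5, 2.0]),
    plane_normal=np.array([0.0, 0.0, -1.0]),
)
```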