Abstract:
In an example method, one or more processing devices receive encoded image data, and cause visual content to be presented on a display device according to the encoded image data. Further, the one or more processing devices receive measurement data regarding the visual content presented on the display device, and determine, based on the measurement data, one or more first perceptual quantizer (PQ) codes corresponding to the visual content presented on the display device. Further, the one or more processing devices determine, based on the encoded image data, one or more second PQ codes, and determine one or more metrics indicative of a performance characteristic of the display device based on the first PQ codes and the second PQ codes. The one or more processing devices store a data item including the one or more metrics.
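To make the comparison concrete, the following Python sketch shows one plausible way to recover PQ codes from display luminance measurements using the SMPTE ST 2084 inverse EOTF and to derive a simple code-difference metric; the function names, the 10-bit quantization, and the mean-absolute-difference metric are illustrative assumptions, not the claimed method.

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def luminance_to_pq_code(luminance_nits, bit_depth=10):
    """Inverse PQ EOTF: map absolute luminance (cd/m^2) to an integer PQ code."""
    y = np.clip(np.asarray(luminance_nits, dtype=np.float64) / 10000.0, 0.0, 1.0)
    v = ((C1 + C2 * y**M1) / (1.0 + C3 * y**M1)) ** M2
    return np.round(v * (2**bit_depth - 1)).astype(int)

def display_pq_error_metric(measured_nits, encoded_codes, bit_depth=10):
    """Hypothetical metric: mean absolute difference between PQ codes recovered
    from display measurements and the PQ codes derived from the encoded data."""
    measured_codes = luminance_to_pq_code(measured_nits, bit_depth)
    return float(np.mean(np.abs(measured_codes - np.asarray(encoded_codes))))
```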
Abstract:
In a coding system, an encoder codes video data according to a predetermined protocol, which, when decoded, causes an associated decoder to perform a predetermined sequence of decoding operations. The encoder may perform local decodes of the coded video data, both in the manner dictated by the coding protocol in use and also by one or more alternative decoding operations. The encoder may estimate relative performance of the alternative decoding operations as compared to a decoding operation that is mandated by the coding protocol. The encoder may provide identifiers in metadata associated with the coded video data to identify the estimated levels of distortion and/or levels of resources conserved. A decoder may refer to such identifiers when determining when to engage alternative decoding operations as may be warranted under resource conservation policies.
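As a rough illustration of how a decoder might consume such identifiers, the sketch below models metadata-advertised alternative decode modes and a resource-conservation policy that selects among them; the field names, the battery-based trigger, and the distortion threshold are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class AltDecodeOption:
    mode_id: str              # identifier carried in metadata (hypothetical field names)
    extra_distortion: float   # estimated distortion vs. the protocol-mandated decode
    cycles_saved: float       # estimated decoder resources conserved

def choose_decode_mode(options, battery_low, max_extra_distortion=0.05):
    """Engage an alternative decoding operation only when a resource-conservation
    policy is active and the advertised distortion penalty is acceptable."""
    if not battery_low:
        return "protocol_mandated"
    eligible = [o for o in options if o.extra_distortion <= max_extra_distortion]
    if not eligible:
        return "protocol_mandated"
    # Prefer the option that conserves the most resources within the distortion budget.
    return max(eligible, key=lambda o: o.cycles_saved).mode_id
```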
Abstract:
A video quality assessment method may include frame-by-frame comparison of a test video sequence (often compressed) with its original (reference) counterpart, pre-conditioning elements of the test and reference frames, defining a region of interest in the pre-conditioned test frame and estimating relative errors within the region of interest between the test and reference frames, filtering the estimated errors of the region of interest temporally across adjacent frames within a perceptually relevant time window, aggregating the filtered errors within the time window, ranking the aggregated errors, selecting a subset of the ranked errors, aggregating across the selected subset of errors, and inputting said aggregated error to a quality assessment system to determine a quality classification along with an estimated quality assessment.
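A compact sketch of the error-pooling stages described above is given below, assuming grayscale frames as NumPy arrays; the specific error measure, window length, and worst-case selection fraction are placeholders rather than the claimed parameters.

```python
import numpy as np

def frame_roi_error(test_frame, ref_frame, roi):
    """Relative error within a region of interest (illustrative: mean absolute
    difference normalized by the reference energy in the same region)."""
    y0, y1, x0, x1 = roi
    t = test_frame[y0:y1, x0:x1].astype(np.float64)
    r = ref_frame[y0:y1, x0:x1].astype(np.float64)
    return np.abs(t - r).mean() / (np.abs(r).mean() + 1e-6)

def pooled_sequence_error(per_frame_errors, window=30, keep_fraction=0.1):
    """Temporal filtering across adjacent frames, windowed aggregation,
    ranking, and worst-case pooling of the per-frame ROI errors."""
    e = np.asarray(per_frame_errors, dtype=np.float64)
    k_len = min(window, len(e))
    kernel = np.ones(k_len) / k_len
    filtered = np.convolve(e, kernel, mode="valid")
    # Rank the windowed errors and keep only the worst subset before pooling.
    k = max(1, int(len(filtered) * keep_fraction))
    worst = np.sort(filtered)[-k:]
    return float(worst.mean())
```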
Abstract:
Devices and methods for determining image quality using full-reference and non-reference techniques. Full reference image quality may be determined prior to output of an image or video frame from an image sensor processor by temporarily retaining image data from the image sensor and comparing processed image data of the image to the retained, non-processed image data of the same image. Full reference image quality determination may be assisted by a heuristic-based fault indicator. Image quality may also be determined by a non-reference technique of matching the image to one of various scenarios that are associated with sets of heuristics and applying the heuristics of the particular scenario to the image. Instead of relying on a nominal frame rate, video timing quality may be determined by comparing the capture time interval between successive video frames to the presentation time interval of the same video frames.
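The timing aspect can be illustrated with a short sketch that compares capture intervals against presentation intervals for the same frames; the tolerance value and the fraction-of-agreeing-pairs score are hypothetical choices, not the disclosed metric.

```python
def timing_quality(capture_ts, present_ts, tolerance=0.005):
    """Compare the capture interval between successive frames with the
    presentation interval of the same frames, rather than assuming a nominal
    frame rate. Returns the fraction of frame pairs whose intervals agree
    within a tolerance (seconds). Hypothetical scoring, for illustration."""
    assert len(capture_ts) == len(present_ts)
    ok, pairs = 0, 0
    for i in range(1, len(capture_ts)):
        cap_dt = capture_ts[i] - capture_ts[i - 1]
        pre_dt = present_ts[i] - present_ts[i - 1]
        pairs += 1
        if abs(cap_dt - pre_dt) <= tolerance:
            ok += 1
    return ok / pairs if pairs else 1.0
```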
Abstract:
A video quality comparison tool provides for direct visual perceptual comparison of video sequences. Two inputs are presented at the same position and size, with no-look user choice of which to see and easy back-and-forth comparison while the videos are playing, single-stepping, or paused.
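For a sense of the interaction model, a minimal sketch of an A/B toggle viewer built on OpenCV follows; the key bindings and window handling are assumptions, and the sketch omits the single-stepping and synchronization details of the described tool.

```python
import cv2

def compare_videos(path_a, path_b, window="AB compare"):
    """Minimal A/B viewer: both sequences are drawn in one window at the same
    position, TAB flips which input is visible, SPACE pauses/resumes, 'q' quits."""
    caps = [cv2.VideoCapture(path_a), cv2.VideoCapture(path_b)]
    current, paused = 0, False
    frames = [None, None]
    while True:
        if not paused or frames[0] is None:
            for i, cap in enumerate(caps):
                ok, frm = cap.read()
                if not ok:
                    cv2.destroyAllWindows()
                    return
                frames[i] = frm
        cv2.imshow(window, frames[current])
        key = cv2.waitKey(30) & 0xFF
        if key == ord('q'):
            break
        elif key == 9:            # TAB: switch which input is displayed
            current = 1 - current
        elif key == ord(' '):     # SPACE: pause / resume playback
            paused = not paused
    cv2.destroyAllWindows()
```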