Abstract:
A device for encoding video data may be configured to encode video data according to a set of sample adaptive offset (SAO) types; perform a plurality of coding passes to test a subset of the SAO types for a first block of video data, wherein the subset is smaller than the set; select from the subset of SAO types an SAO type for the first block of video data; and generate, for inclusion in an encoded bitstream, information for identifying the selected SAO type for the first block.
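A minimal C++ sketch of the subset-based SAO search described above, assuming a hypothetical SaoType enumeration and a placeholder rdCostForSaoPass function standing in for one coding pass over the block; neither name comes from the abstract, and the cost model is illustrative only.

```cpp
#include <cstdint>
#include <limits>
#include <vector>

// Hypothetical labels for SAO types; HEVC-style SAO uses "off", band offset,
// and four edge-offset classes.
enum class SaoType { Off, Band, Edge0, Edge90, Edge135, Edge45 };

// Stand-in for one coding pass: apply one SAO type to the block and return a
// rate-distortion cost. A real encoder would filter the samples and measure
// distortion plus signaling rate; the value returned here is only a placeholder.
double rdCostForSaoPass(const std::vector<uint8_t>& block, SaoType type) {
    return static_cast<double>(block.size()) +
           static_cast<double>(static_cast<int>(type));
}

// Run one coding pass per type in the tested subset (smaller than the full
// set of SAO types) and return the lowest-cost type for the block. The chosen
// type is what the encoder would identify in the encoded bitstream.
SaoType selectSaoTypeFromSubset(const std::vector<uint8_t>& block,
                                const std::vector<SaoType>& subset) {
    SaoType best = SaoType::Off;
    double bestCost = std::numeric_limits<double>::max();
    for (SaoType type : subset) {
        double cost = rdCostForSaoPass(block, type);
        if (cost < bestCost) {
            bestCost = cost;
            best = type;
        }
    }
    return best;
}
```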
Abstract:
Systems and methods for encoding and decoding video data are disclosed. The method can include signaling, in syntax information, a picture parameter set (PPS) indicating a first tile size partition for a first frame of video data. The method can also include storing a plurality of tile size partitions and associated PPS identifiers (PPSID) in a database. If a second tile size partition for a second frame of video data is the same as a tile size partition stored in the database, the method can include signaling the PPSID for the corresponding tile size partition. If the second tile size partition is not the same as any tile size partition stored in the database, the method can include signaling a new PPS with the second tile size partition. The system can provide an encoder and a decoder for processing video data encoded by the method.
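The tile-partition reuse could be organized as in the following sketch. The TilePartition structure, PpsDatabase class, and ppsIdFor method are illustrative names and are not part of the disclosed method; the partition is keyed simply by its column widths and row heights.

```cpp
#include <cstdint>
#include <map>
#include <tuple>
#include <vector>

// Illustrative tile size partition: tile column widths and row heights
// (e.g., in coding tree units). Ordered so it can key a std::map.
struct TilePartition {
    std::vector<int> columnWidths;
    std::vector<int> rowHeights;
    bool operator<(const TilePartition& other) const {
        return std::tie(columnWidths, rowHeights) <
               std::tie(other.columnWidths, other.rowHeights);
    }
};

// Database of previously signaled tile size partitions and their PPS ids.
class PpsDatabase {
public:
    // Returns the PPS id to use for a frame. If the partition was stored
    // earlier, only its PPSID needs to be signaled; otherwise a new PPS
    // carrying the partition is stored and signaled.
    uint32_t ppsIdFor(const TilePartition& partition, bool& signalNewPps) {
        auto it = stored_.find(partition);
        if (it != stored_.end()) {
            signalNewPps = false;   // reuse: signal the existing PPSID only
            return it->second;
        }
        uint32_t newId = nextId_++;
        stored_.emplace(partition, newId);
        signalNewPps = true;        // signal a new PPS with this partition
        return newId;
    }

private:
    std::map<TilePartition, uint32_t> stored_;
    uint32_t nextId_ = 0;
};
```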
Abstract:
The disclosure provides a system and methods for encoding video data. The method can include storing a data structure in a memory, the data structure having a first plurality of data elements arranged to correspond to a second plurality of data elements of a first video data block and defining a periphery, the data structure further including data related to each smallest prediction unit (PU) of the first video data block. The method can also include increasing a size of the data structure in the memory by adding a plurality of extended units along the periphery of the first plurality of data elements, each extended unit having data related to a smallest data element of the first video data block, the extended units being set to default values. The method can also include encoding the first video data block based on the data structure.
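One possible reading of the extended data structure is sketched below, assuming a hypothetical per-smallest-unit record (UnitInfo) and a one-unit border of default-valued extended units around the periphery; the record contents and the border width are assumptions made for illustration.

```cpp
#include <vector>

// Hypothetical record kept for each smallest unit of the block (for example,
// one entry per 4x4 unit); the defaults are what the extended units receive.
struct UnitInfo {
    int motionX = 0;
    int motionY = 0;
    bool isIntra = true;
};

// Grow the per-unit data structure by one extended unit on every side of its
// periphery: the interior copies the block's smallest-unit data, while the
// border entries stay at their default values.
std::vector<UnitInfo> buildExtendedGrid(const std::vector<UnitInfo>& blockUnits,
                                        int widthInUnits, int heightInUnits) {
    const int extWidth = widthInUnits + 2;
    const int extHeight = heightInUnits + 2;
    std::vector<UnitInfo> grid(extWidth * extHeight);   // border = defaults
    for (int y = 0; y < heightInUnits; ++y) {
        for (int x = 0; x < widthInUnits; ++x) {
            grid[(y + 1) * extWidth + (x + 1)] = blockUnits[y * widthInUnits + x];
        }
    }
    return grid;
}
```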
Abstract:
A system and method for applying Rate Distortion Optimized Quantization (RDOQ) is disclosed. In one example, there is provided a method that includes determining at least one prediction type and at least one partition type for use in encoding at least one block of video data. The method further includes applying a non-RDOQ quantization scheme to the at least one block of the video data. The non-RDOQ quantization scheme may be applied during the determination of the at least one prediction type and the at least one partition type. The method further includes applying an RDOQ quantization scheme to the at least one block upon determining the at least one prediction type and the at least one partition type.
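A rough sketch of the two-stage quantization strategy follows. The cost functions costWithSimpleQuant and costWithRdoq are placeholders standing in for real transform and quantization passes, and the Mode structure is an assumed representation of a prediction/partition combination.

```cpp
#include <cstdint>
#include <vector>

// A candidate coding choice: one prediction type and one partition type
// (encoded here as plain integers for illustration).
struct Mode {
    int predictionType;
    int partitionType;
};

// Placeholder cost with a simple, non-RDOQ quantizer; used while the encoder
// is still searching prediction and partition types.
double costWithSimpleQuant(const std::vector<int16_t>& block, const Mode& m) {
    return static_cast<double>(block.size() + m.predictionType + m.partitionType);
}

// Placeholder RDOQ pass; in a real encoder this is the more expensive,
// rate-distortion-optimized quantization applied to the chosen mode.
double costWithRdoq(const std::vector<int16_t>& block, const Mode& m) {
    return costWithSimpleQuant(block, m);
}

// Search all candidate modes with the cheap quantizer, then apply RDOQ only
// once the prediction and partition types have been determined.
// Assumes `candidates` is non-empty.
Mode decideModeThenApplyRdoq(const std::vector<int16_t>& block,
                             const std::vector<Mode>& candidates) {
    Mode best = candidates.front();
    double bestCost = costWithSimpleQuant(block, best);
    for (const Mode& m : candidates) {
        double cost = costWithSimpleQuant(block, m);   // non-RDOQ during search
        if (cost < bestCost) {
            bestCost = cost;
            best = m;
        }
    }
    costWithRdoq(block, best);   // RDOQ applied to the winning mode only
    return best;
}
```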
Abstract:
A video encoding device comprises a memory and at least one processor configured to: determine whether a metric meets a condition based on statistics, wherein the statistics are associated with a first video encoding mode checking order and a second video encoding mode checking order; responsive to determining that the metric meets the condition, select the first encoding mode checking order to encode a first block of video data; responsive to determining that the condition is not met, select the second encoding mode checking order, which differs from the first encoding mode checking order, to encode the first block of video data; update the statistics based on the selected first or second encoding mode checking order; and encode a second block of video data, based on the updated statistics, using the first or second mode checking order.
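The statistics-driven selection between two mode checking orders might look like the following sketch. The specific metric (early-termination counts per order) and the condition are assumed examples, not the particular metric or condition of the abstract.

```cpp
#include <cstdint>

// Running statistics associated with two candidate mode checking orders; the
// specific counters (early terminations per order) are an assumed metric.
struct OrderStats {
    uint32_t earlyHitsOrder1 = 0;
    uint32_t earlyHitsOrder2 = 0;
    uint32_t blocksCoded = 0;
};

enum class CheckingOrder { First, Second };

// Example condition: keep using the first order while it has produced at
// least as many early terminations as the second order.
CheckingOrder selectOrder(const OrderStats& stats) {
    const bool conditionMet = stats.earlyHitsOrder1 >= stats.earlyHitsOrder2;
    return conditionMet ? CheckingOrder::First : CheckingOrder::Second;
}

// After a block is encoded with the selected order, fold the outcome back
// into the statistics so the next block's selection reflects it.
void updateStats(OrderStats& stats, CheckingOrder used, bool terminatedEarly) {
    ++stats.blocksCoded;
    if (terminatedEarly) {
        if (used == CheckingOrder::First) {
            ++stats.earlyHitsOrder1;
        } else {
            ++stats.earlyHitsOrder2;
        }
    }
}
```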
Abstract:
Provided are techniques for low complexity video coding. For example, a video coder may be configured to calculate a first sum of absolute difference (SAD) value between a coding unit (CU) block and a first corresponding block in a reference frame, and define branching conditions for branching of CU sizes based on the first SAD value, the branching conditions including a background condition and/or a homogeneous condition. The video coder may be configured to detect the background condition if the first SAD value of the CU block is less than a first threshold background value, and detect the homogeneous condition if a second SAD value of a sub-block of the CU block is between upper and lower homogeneous threshold values based on the first SAD value. The branching of the CU sizes may be based on detecting the background or homogeneous conditions.
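A simplified C++ illustration of the SAD-based branching conditions. The threshold handling and the ±25% band around the expected sub-block SAD are illustrative assumptions, not values taken from the abstract.

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Sum of absolute differences between a CU-sized block and the corresponding
// block in the reference frame (both vectors are assumed to be the same size).
uint32_t sad(const std::vector<uint8_t>& cu, const std::vector<uint8_t>& ref) {
    uint32_t sum = 0;
    for (size_t i = 0; i < cu.size(); ++i) {
        sum += static_cast<uint32_t>(std::abs(int(cu[i]) - int(ref[i])));
    }
    return sum;
}

// Background condition: the whole CU matches the reference closely.
bool isBackground(uint32_t cuSad, uint32_t backgroundThreshold) {
    return cuSad < backgroundThreshold;
}

// Homogeneous condition: a sub-block's SAD stays inside a band derived from
// the CU-level SAD. The quarter split and the +/-25% band are illustrative.
bool isHomogeneous(uint32_t subBlockSad, uint32_t cuSad) {
    const uint32_t expected = cuSad / 4;           // CU split into 4 sub-blocks
    const uint32_t lower = expected - expected / 4;
    const uint32_t upper = expected + expected / 4;
    return subBlockSad >= lower && subBlockSad <= upper;
}
```

When either condition is detected, the encoder can prune the branching into smaller CU sizes for that block.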
Abstract:
Techniques for coding video data include coding a plurality of blocks of video data, wherein at least one block of the plurality of blocks of video data is coded using a coding mode that is one of an intra pulse code modulation (IPCM) coding mode and a lossless coding mode. In some examples, the lossless coding mode may use prediction. The techniques further include assigning a non-zero quantization parameter (QP) value for the at least one block coded using the coding mode. The techniques also include performing deblocking filtering on one or more of the plurality of blocks of video data based on the coding mode used to code the at least one block and the assigned non-zero QP value for the at least one block.
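The assignment of a non-zero QP for deblocking purposes could be sketched as follows. Inheriting the slice-level QP is one plausible assignment, used here only as an assumption; the structures and function names are illustrative.

```cpp
enum class CodingMode { Regular, Ipcm, Lossless };

// Minimal per-block state relevant to the deblocking decision.
struct Block {
    CodingMode mode;
    int qp;          // QP used for quantization; effectively unused for
                     // IPCM and lossless blocks
};

// Assign a non-zero QP, for deblocking purposes, to blocks coded as IPCM or
// lossless; here the slice-level QP is inherited as an assumed assignment.
int qpForDeblocking(const Block& block, int sliceQp) {
    if (block.mode == CodingMode::Ipcm || block.mode == CodingMode::Lossless) {
        return sliceQp;                 // assigned non-zero QP
    }
    return block.qp;
}

// Deblocking derives its filter strength from the QPs on both sides of an
// edge; using the assigned QP keeps IPCM/lossless edges consistent with
// their neighbors.
int edgeQp(const Block& p, const Block& q, int sliceQp) {
    return (qpForDeblocking(p, sliceQp) + qpForDeblocking(q, sliceQp) + 1) / 2;
}
```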
Abstract:
An apparatus for coding video information according to certain aspects includes a memory unit and a processor in communication with the memory unit. The memory unit stores video information of a base, or reference, layer and an enhancement layer. The processor determines whether a base layer reference index is valid for the enhancement layer, and resolves mismatches between base layer and enhancement layer reference indices and reference frame picture order counts. Resolving mismatches may comprise deriving valid reference information from the base layer, using spatial motion information of video data associated with the reference information of the base and/or enhancement layers.
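A schematic of the validity check and mismatch resolution, assuming each layer's reference list is represented by the picture order counts (POCs) of its pictures; the fallback-to-index-0 rule is an assumption made for this sketch rather than part of the disclosed apparatus.

```cpp
#include <cstdint>
#include <vector>

// Illustrative per-layer reference list, represented by the picture order
// counts (POCs) of the pictures it contains.
struct ReferenceList {
    std::vector<int32_t> pocs;
};

// A base layer reference index is treated as valid for the enhancement layer
// if it is in range and its POC also appears in the enhancement layer list.
bool isBaseRefIndexValid(int refIdx,
                         const ReferenceList& baseRefs,
                         const ReferenceList& enhRefs) {
    if (refIdx < 0 || refIdx >= static_cast<int>(baseRefs.pocs.size())) {
        return false;
    }
    const int32_t poc = baseRefs.pocs[refIdx];
    for (int32_t enhPoc : enhRefs.pocs) {
        if (enhPoc == poc) {
            return true;
        }
    }
    return false;
}

// On a mismatch, derive a usable enhancement layer index: prefer an index
// referencing the same POC, otherwise fall back to index 0 (an assumption
// made for this sketch).
int resolveRefIndex(int baseRefIdx,
                    const ReferenceList& baseRefs,
                    const ReferenceList& enhRefs) {
    if (baseRefIdx >= 0 && baseRefIdx < static_cast<int>(baseRefs.pocs.size())) {
        const int32_t poc = baseRefs.pocs[baseRefIdx];
        for (size_t i = 0; i < enhRefs.pocs.size(); ++i) {
            if (enhRefs.pocs[i] == poc) {
                return static_cast<int>(i);
            }
        }
    }
    return 0;
}
```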
Abstract:
A method of coding delta quantization parameter values is described. In one example, a video decoder may receive a delta quantization parameter (dQP) value for a current quantization block of video data, wherein the dQP value is received whether or not there are non-zero transform coefficients in the current quantization block. In another example, a video decoder may receive the dQP value for the current quantization block of video data only in the case that the QP predictor for the current quantization block has a value of zero, and may infer the dQP value to be zero in the case that the QP predictor for the current quantization block has a non-zero value and there are no non-zero transform coefficients in the current quantization block.
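A decoder-side sketch of the second example's rule. BitReader and readSignedDqp are stand-ins for the entropy decoder; the case of a non-zero predictor with non-zero coefficients is not specified by the abstract and is shown here as conventional dQP parsing.

```cpp
// Stand-in for the entropy decoder; the real decoder would parse a signed
// dQP syntax element from the bitstream.
struct BitReader {
    int readSignedDqp() { return 0; }   // placeholder
};

// Reconstruct the block QP following the second example: the dQP is parsed
// only when the QP predictor is zero; with a non-zero predictor and no
// non-zero transform coefficients, the dQP is inferred to be zero. The
// remaining case follows conventional dQP signaling, which the abstract does
// not spell out.
int reconstructQp(BitReader& reader, int qpPredictor, bool hasNonZeroCoeffs) {
    int dqp = 0;
    if (qpPredictor == 0) {
        dqp = reader.readSignedDqp();   // always signaled in this case
    } else if (!hasNonZeroCoeffs) {
        dqp = 0;                        // inferred, nothing parsed
    } else {
        dqp = reader.readSignedDqp();   // conventional path
    }
    return qpPredictor + dqp;
}
```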
Abstract:
An example video encoder is configured to receive an indication of merge mode coding of a block within a parallel motion estimation region (PMER), generate a merge mode candidate list comprising one or more spatial neighbor motion vector (MV) candidates and one or more temporal motion vector prediction (TMVP) candidates, wherein motion information of at least one of the spatial neighbor MV candidates is known to be unavailable during coding of the block at the encoder, determine an index value identifying, within the merge mode candidate list, one of the TMVP candidates or the spatial neighbor MV candidates for which motion information is available during coding of the block, and merge mode code the block using the identified MV candidate.
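A compact sketch of selecting a usable merge index within a PMER. The MergeCandidate fields are illustrative, and candidate availability is modeled as a simple flag rather than derived from actual block positions.

```cpp
#include <vector>

// Illustrative merge candidate: motion vector components plus flags that say
// whether the candidate is a TMVP candidate and whether its motion
// information is available while blocks in the same parallel motion
// estimation region (PMER) are coded concurrently.
struct MergeCandidate {
    int mvx = 0;
    int mvy = 0;
    bool isTemporal = false;      // TMVP candidates are treated as available
    bool availableInPmer = true;  // spatial neighbors inside the PMER may not be
};

// Pick the index of a candidate whose motion information is available during
// coding of the block, skipping spatial neighbors whose information is known
// to be unavailable inside the PMER. The returned index is what merge mode
// coding of the block would signal.
int selectMergeIndex(const std::vector<MergeCandidate>& list) {
    for (size_t i = 0; i < list.size(); ++i) {
        if (list[i].isTemporal || list[i].availableInPmer) {
            return static_cast<int>(i);
        }
    }
    return 0;   // fallback when no candidate is marked available
}
```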