Abstract:
A block-request streaming system provides improvements in user experience and bandwidth efficiency, typically using an ingestion system that generates data in a form to be served by a conventional file server (HTTP, FTP, or the like), wherein the ingestion system intakes content and prepares it as files or data elements to be served by the file server. The system might include controlling the sequence, timing, and construction of block requests; time-based indexing; variable block sizing; optimal block partitioning; control of random access point placement, including across multiple presentation versions; dynamically updating presentation data; and/or efficiently presenting live content and time shifting.
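The time-based indexing mentioned above can be pictured as a map from presentation time to the block a client should request, even when block durations vary. The following is a minimal sketch, not taken from the disclosure; `build_time_index` and `block_for_time` are hypothetical helper names:

```python
import bisect

def build_time_index(block_durations):
    """Build cumulative start times for each block (hypothetical helper).

    block_durations: list of per-block durations in seconds (variable block sizing).
    Returns starts where starts[i] is the presentation time at which block i begins.
    """
    starts = [0.0]
    for d in block_durations:
        starts.append(starts[-1] + d)
    return starts

def block_for_time(starts, t):
    """Return the index of the block containing presentation time t."""
    return bisect.bisect_right(starts, t) - 1
```

Because the index is a sorted list of start times, a seek resolves to a block request in O(log n) via bisection, regardless of how unevenly the content was partitioned.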
Abstract:
The techniques of this disclosure generally relate to using motion information for a corresponding block from a texture view component that corresponds to a block in a depth view component when coding the block in the depth view component. In some examples, for coding purposes, the techniques may use this motion information when the spatial resolution of the texture view component is different from the spatial resolution of the depth view component.
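When the two view components have different spatial resolutions, a motion vector adopted from the texture view must be scaled to the depth view's sample grid. A minimal sketch, where the function name and the truncation toward zero are assumptions rather than the disclosure's exact method:

```python
from fractions import Fraction

def scale_motion_vector(mv, texture_size, depth_size):
    """Scale a texture-view motion vector to the depth view's resolution.

    mv: (mv_x, mv_y); texture_size / depth_size: (width, height) of each view.
    Exact rational scaling is used before truncating to integer precision.
    """
    sx = Fraction(depth_size[0], texture_size[0])  # horizontal resolution ratio
    sy = Fraction(depth_size[1], texture_size[1])  # vertical resolution ratio
    return (int(mv[0] * sx), int(mv[1] * sy))
```

For example, with a depth map at half the texture resolution in each dimension, a texture motion vector of (8, -4) maps to (4, -2) in the depth view.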
Abstract:
This disclosure describes features and techniques applicable to three-dimensional (3D) video coding. In one example, a technique may include coding a texture view video block, and coding a depth view video block, wherein the depth view video block is associated with the texture view video block. Coding the depth view video block may include coding a syntax element to indicate whether or not motion information associated with the texture view video block is adopted as motion information associated with the depth view video block.
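The syntax element described above can be pictured as a one-bit flag that selects between inheriting the texture block's motion and parsing explicitly signaled motion. This is an illustrative sketch with hypothetical callback names (`read_bit`, `decode_explicit_motion`), not the disclosure's actual syntax:

```python
def decode_depth_motion(read_bit, decode_explicit_motion, texture_motion):
    """Decode motion for a depth view video block.

    A one-bit syntax element indicates whether motion information from the
    associated texture view video block is adopted; otherwise the motion is
    decoded explicitly from the bitstream.
    """
    if read_bit():                   # syntax element set: adopt texture motion
        return texture_motion
    return decode_explicit_motion()  # fall back to explicitly coded motion
```

When texture and depth motion fields are highly correlated, the single flag replaces a full motion-vector signaling cost for the depth block.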
Abstract:
Systems, methods, and devices for coding multilayer video data are disclosed that may include encoding, decoding, transmitting, or receiving multilayer video data. The systems, methods, and devices may receive or transmit a non-entropy coded representation format within a video parameter set (VPS). The systems, methods, and devices may code (encode or decode) video data based on the non-entropy coded representation format within the VPS, wherein the representation format includes one or more of chroma format, whether different color planes are separately coded, picture width, picture height, luma bit depth, and chroma bit depth.
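Because the representation format is non-entropy coded, its fields can be written and read with plain fixed-length parsing, with no entropy-decoding state. A round-trip sketch; the field names echo the abstract, but the bit widths below are illustrative, not the widths mandated by the VPS syntax:

```python
# (field name, bit width) pairs; widths are illustrative, not normative
FIELDS = [("chroma_format_idc", 2), ("separate_colour_plane_flag", 1),
          ("pic_width_in_luma_samples", 16), ("pic_height_in_luma_samples", 16),
          ("luma_bit_depth_minus8", 4), ("chroma_bit_depth_minus8", 4)]

def write_rep_format(values):
    """Pack representation-format values as fixed-length (non-entropy) codes."""
    return "".join(format(values[name], "0%db" % width) for name, width in FIELDS)

def parse_rep_format(bits):
    """Parse the fixed-length fields back out; no entropy decoding is needed."""
    out, pos = {}, 0
    for name, width in FIELDS:
        out[name] = int(bits[pos:pos + width], 2)
        pos += width
    return out
```

Fixed-length coding lets middleboxes and session-negotiation logic read these fields directly from the VPS without instantiating an entropy decoder.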
Abstract:
In an example, a method of decoding video data includes selecting a motion information derivation mode from a plurality of motion information derivation modes for determining motion information for a current block, where each motion information derivation mode of the plurality comprises performing a motion search for a first set of reference data that corresponds to a second set of reference data outside of the current block, and where the motion information indicates motion of the current block relative to reference video data. The method also includes determining the motion information for the current block using the selected motion information derivation mode. The method also includes decoding the current block using the determined motion information and without decoding syntax elements representative of the motion information.
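One such derivation mode can be sketched as template matching: the decoder searches the reference data for the displacement whose samples best match a template of already-reconstructed samples adjacent to the current block, so no motion syntax needs to be decoded. A one-dimensional toy version under those assumptions (the names and the SAD cost are illustrative):

```python
def derive_motion(ref, template, tmpl_pos, search_range):
    """Decoder-side motion derivation by template matching (1-D toy model).

    ref: reference samples; template: reconstructed samples next to the current
    block, located at tmpl_pos in the current picture. Returns the displacement
    minimizing the sum of absolute differences (SAD).
    """
    best_d, best_sad = 0, float("inf")
    n = len(template)
    for d in range(-search_range, search_range + 1):
        start = tmpl_pos + d
        if start < 0 or start + n > len(ref):
            continue  # candidate template falls outside the reference
        sad = sum(abs(ref[start + i] - template[i]) for i in range(n))
        if sad < best_sad:
            best_sad, best_d = sad, d
    return best_d
```

Since encoder and decoder run the identical search over identical reconstructed data, they derive the same motion without any motion syntax in the bitstream.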
Abstract:
In one example, a device includes a video coder (e.g., a video encoder or a video decoder) configured to code parameter set information for a video bitstream, code video data of a base layer of the video bitstream using the parameter set information, and code video data of an enhancement layer of the video bitstream using at least a portion of the parameter set information. The parameter set information may include, for example, profile and level information and/or hypothetical reference decoder (HRD) parameters. For example, the video coder may code a sequence parameter set (SPS) for a video bitstream, code video data of a base layer of the video bitstream using the SPS, and code video data of an enhancement layer of the video bitstream using at least a portion of the SPS, without using any other SPS for the enhancement layer.
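The SPS-sharing arrangement can be pictured as a single parameter-set table that both layers resolve into. A hypothetical decoder-side sketch (the NAL-unit dictionaries and field names are assumptions for illustration):

```python
def decode_bitstream(nal_units):
    """Resolve slices of every layer against one shared SPS (sketch).

    Each slice references an SPS by id; base and enhancement layers reference
    the same id, so one parameter set serves the whole bitstream.
    """
    sps_table, decoded = {}, []
    for nal in nal_units:
        if nal["type"] == "SPS":
            sps_table[nal["sps_id"]] = nal["params"]
        else:  # a slice in some layer
            params = sps_table[nal["sps_id"]]  # both layers hit the same entry
            decoded.append((nal["layer"], params["profile"], params["level"]))
    return decoded
```

Sharing one SPS avoids redundantly signaling profile/level and HRD parameters per layer, at the cost of constraining the enhancement layer to parameters compatible with the base layer's SPS.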
Abstract:
In one example, a device includes a video coder configured to determine, for each reference picture in one or more reference picture lists for a current picture, whether the reference picture is to be included in a plurality of reference pictures based on types for the reference pictures in the reference picture lists, compare picture order count (POC) values of each of the plurality of reference pictures to a POC value of the current picture to determine a motion vector predictor for a current block based on motion vectors of a co-located block of video data in a reference picture of the plurality of reference pictures, determine whether a forward motion vector or a backward motion vector of the co-located block is to be initially used to derive the motion vector predictor, and code a motion vector for the current block of video data relative to the motion vector predictor.
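Once a co-located motion vector has been selected, POC-based scaling of that vector is the standard final step. A floating-point simplification (real codecs such as HEVC use clipped fixed-point arithmetic; the function name is an assumption):

```python
def scale_tmvp(colocated_mv, cur_poc, cur_ref_poc, col_poc, col_ref_poc):
    """Scale a co-located block's motion vector by the ratio of POC distances.

    tb is the current picture's distance to its reference; td is the co-located
    picture's distance to the reference its motion vector points at.
    """
    tb = cur_poc - cur_ref_poc
    td = col_poc - col_ref_poc
    if td == 0:
        return colocated_mv  # degenerate case: no scaling possible
    scale = tb / td
    return tuple(round(c * scale) for c in colocated_mv)
```

For instance, if the current picture is 2 POC units from its reference but the co-located motion spans 4 units, the vector is halved before being used as the predictor.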
Abstract:
Examples include a device for coding video data, the device including a memory configured to store video data, and one or more processors configured to obtain adaptive loop filtering (ALF) information for a current coding tree unit (CTU) from one or more of: (i) one or more spatial neighbor CTUs of the current CTU or (ii) one or more temporal neighbor CTUs of the current CTU, to form a candidate list based at least partially on the obtained ALF information for the current CTU, and to perform a filtering operation on the current CTU using ALF information associated with a candidate from the candidate list. Coding video data includes encoding video data, decoding video data, or both encoding and decoding video data.
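The candidate-list construction can be sketched as a simple gather over neighbor CTUs, spatial before temporal, with pruning of missing and duplicate ALF parameter sets. A hypothetical illustration (the dictionary layout and `max_cands` limit are assumptions):

```python
def build_alf_candidate_list(spatial_neighbors, temporal_neighbors, max_cands=5):
    """Form an ALF candidate list for the current CTU.

    Spatial neighbor CTUs are considered first, then temporal ones; neighbors
    without ALF info and duplicate parameter sets are skipped.
    """
    cands = []
    for ctu in list(spatial_neighbors) + list(temporal_neighbors):
        alf = ctu.get("alf_params")
        if alf is not None and alf not in cands:
            cands.append(alf)
        if len(cands) == max_cands:
            break
    return cands
```

The filtering stage then only needs to signal an index into this list rather than a full set of ALF coefficients per CTU.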
Abstract:
In an example, a method of decoding video data may include receiving a first block of video data. The first block of video data may be a sub-block of a prediction unit. The method may include receiving one or more blocks of video data that neighbor the first block of video data. The method may include determining motion information of at least one of the one or more blocks of video data that neighbor the first block of video data. The method may include decoding, using overlapped block motion compensation, the first block of video data based at least in part on the motion information of the at least one of the one or more blocks that neighbor the first block of video data.
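Overlapped block motion compensation then blends two predictions of the same boundary samples: one from the sub-block's own motion and one obtained using a neighbor's motion. A one-dimensional sketch; the dyadic weights in the test are JEM-style values used purely as an illustration, not the disclosure's weights:

```python
def obmc_blend(own_pred, neighbor_pred, weights):
    """Blend a sub-block's own prediction with a neighbor-motion prediction.

    weights[i] is the weight of the sub-block's own prediction at sample i;
    the neighbor-motion prediction receives the complementary weight.
    """
    return [round(w * o + (1 - w) * n)
            for o, n, w in zip(own_pred, neighbor_pred, weights)]
```

Position-dependent weights let the neighbor's contribution taper off with distance from the shared boundary, smoothing motion discontinuities between sub-blocks.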
Abstract:
An apparatus for decoding video information according to certain aspects includes a memory unit and a processor operationally coupled to the memory unit. The memory unit is configured to store at least one reference picture list of an enhancement layer, the at least one reference picture list comprising residual prediction reference picture information. The processor is configured to: decode signaled information about residual prediction reference picture generation; generate a residual prediction reference picture based on an enhancement layer reference picture and the decoded signaled information such that the generated residual prediction reference picture has the same motion field and the same picture order count (POC) as the enhancement layer reference picture from which it is generated; and store the generated residual prediction reference picture in the at least one reference picture list of the enhancement layer.
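The generation step can be sketched as copying the enhancement-layer reference's POC and motion field while offsetting its samples by a signaled residual contribution. This is a loose, hypothetical illustration (the dictionary layout, `weight` parameter, and per-sample residual are assumptions, not the decoded signaled information the abstract refers to):

```python
def generate_rp_reference(el_ref, residual, weight):
    """Generate a residual prediction reference picture (hypothetical sketch).

    The generated picture keeps the same POC and motion field as the
    enhancement-layer reference it is derived from; only the samples differ,
    offset by a weighted residual signal.
    """
    return {
        "poc": el_ref["poc"],                    # same POC as the source reference
        "motion_field": el_ref["motion_field"],  # same motion field
        "samples": [s + round(weight * r)
                    for s, r in zip(el_ref["samples"], residual)],
    }
```

Keeping the POC and motion field identical means the generated picture can sit in the reference picture list and be selected like any ordinary reference, without changes to motion prediction.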