Abstract:
A device comprising a video file creation module is configured to obtain a plurality of slices of coded video content. Parameter sets are associated with the coded video content. The video file creation module encapsulates the plurality of slices of coded video content within one or more access units of a video stream. A first type of parameter set may be encapsulated within one or more access units of the video stream. A second type of parameter set may be encapsulated within a sample description. The sample description may include an indicator identifying a number of temporal layers of the video stream.
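The file-creation step described above can be sketched as follows. This is a hypothetical illustration, not a real file format: the field names (`parameter_set`, `sample_description`, `num_temporal_layers`) are assumptions chosen to mirror the abstract's wording.

```python
# Hypothetical sketch of the described file-creation module: slices are
# encapsulated into access units; one type of parameter set travels in-band
# within the access units, while the other type is stored in the sample
# description together with the temporal-layer indicator.

def create_video_file(slices, in_band_param_sets, oob_param_sets,
                      num_temporal_layers):
    # Encapsulate each slice of coded video content in an access unit.
    access_units = [{'slice': s} for s in slices]
    # First type of parameter set: encapsulated within access units.
    for au, ps in zip(access_units, in_band_param_sets):
        au['parameter_set'] = ps
    # Second type: encapsulated within the sample description, alongside
    # the indicator identifying the number of temporal layers.
    sample_description = {
        'parameter_sets': list(oob_param_sets),
        'num_temporal_layers': num_temporal_layers,
    }
    return {'access_units': access_units,
            'sample_description': sample_description}

f = create_video_file(['slice0', 'slice1'], ['pps0'], ['sps0'],
                      num_temporal_layers=3)
print(f['sample_description']['num_temporal_layers'])  # 3
```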
Abstract:
Systems, methods, and devices for coding video data are described herein. In some aspects, a memory unit is configured to store the video data. The video data includes a base layer and an enhancement layer. The base layer includes a coding unit tree co-located with an enhancement layer coding unit in the enhancement layer. The coding unit tree includes a plurality of nodes arranged in a tree structure, along with motion vectors. The enhancement layer coding unit is inter-mode coded. A processor is configured to split the enhancement layer coding unit into a plurality of nodes arranged in a tree structure that is the same as the tree structure of the coding unit tree. The processor is also configured to perform motion prediction for the enhancement layer coding unit based on the motion vectors of the coding unit tree.
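The tree-replication idea above can be illustrated with a minimal sketch, assuming a simple node structure (the `CUNode` class and helper names are illustrative, not the patented implementation): the enhancement-layer coding unit is split to mirror the co-located base-layer tree, and each leaf inherits the base layer's motion vector for motion prediction.

```python
# Sketch: mirror the base-layer coding unit tree structure in the
# enhancement layer and inherit the base layer's motion vectors.

class CUNode:
    def __init__(self, motion_vector=None, children=None):
        self.motion_vector = motion_vector  # (dx, dy) at leaf nodes
        self.children = children or []      # empty list => leaf

def mirror_split(base_node):
    """Build an enhancement-layer tree with the same structure as the
    base-layer tree, inheriting each leaf's motion vector."""
    if not base_node.children:
        return CUNode(motion_vector=base_node.motion_vector)
    return CUNode(children=[mirror_split(c) for c in base_node.children])

def leaf_motion_vectors(node):
    if not node.children:
        return [node.motion_vector]
    return [mv for c in node.children for mv in leaf_motion_vectors(c)]

# Co-located base-layer coding unit tree with two leaves.
base = CUNode(children=[CUNode(motion_vector=(1, 0)),
                        CUNode(motion_vector=(0, 2))])
enh = mirror_split(base)
print(leaf_motion_vectors(enh))  # [(1, 0), (0, 2)]
```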
Abstract:
A method and apparatus for decoding and encoding multiview video data is described. An example method may include coding a block of video data using a motion vector prediction process, determining a motion vector candidate list, determining a disparity vector candidate list for the motion vector prediction process, wherein the disparity vector candidate list includes at least two types of disparity vectors from a plurality of disparity vector types, the plurality including a spatial disparity vector (SDV), a smooth temporal-view (STV) disparity vector, a view disparity vector (VDV), and a temporal disparity vector (TDV), and performing the motion vector prediction process using one of the disparity vector candidate list and the motion vector candidate list.
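The candidate-list construction can be sketched as below. The vector type names (SDV, STV, VDV, TDV) come from the abstract, but the selection logic (first vector per type, in scan order) is purely an assumption for illustration, not the claimed method.

```python
# Illustrative sketch: build a disparity vector candidate list containing
# at least two types of disparity vectors from the allowed set of types.

from collections import OrderedDict

ALLOWED_TYPES = {'SDV', 'STV', 'VDV', 'TDV'}

def build_disparity_candidate_list(candidates):
    """candidates: list of (type, vector) pairs, e.g. ('SDV', (3, 0)).
    Keep the first vector seen for each disparity vector type, in order."""
    by_type = OrderedDict()
    for dv_type, vec in candidates:
        if dv_type in ALLOWED_TYPES and dv_type not in by_type:
            by_type[dv_type] = vec
    dv_list = list(by_type.items())
    # The abstract requires at least two disparity vector types in the list.
    assert len(dv_list) >= 2, "need at least two disparity vector types"
    return dv_list

print(build_disparity_candidate_list(
    [('SDV', (3, 0)), ('SDV', (4, 0)), ('TDV', (1, -1))]))
# [('SDV', (3, 0)), ('TDV', (1, -1))]
```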
Abstract:
In general, techniques are described for separately coding depth and texture components of video data. A video coding device for coding video data that includes a view component comprised of a depth component and a texture component may perform the techniques. The video coding device may comprise, as one example, a processor configured to activate a parameter set as a texture parameter set for the texture component of the view component, and code the texture component of the view component based on the activated texture parameter set.
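The activate-then-code flow described above can be sketched as follows; the class and method names are hypothetical stand-ins chosen to follow the abstract's wording, not a real codec API.

```python
# Sketch: activate a parameter set as the texture parameter set for the
# texture component of a view component, then code the texture component
# based on the activated set.

class ViewComponentCoder:
    def __init__(self):
        self.active_texture_ps = None

    def activate_texture_parameter_set(self, parameter_set):
        # Activation makes this set govern subsequent texture coding.
        self.active_texture_ps = parameter_set

    def code_texture(self, texture_component):
        assert self.active_texture_ps is not None, "no active parameter set"
        return {'data': texture_component,
                'ps_id': self.active_texture_ps['id']}

coder = ViewComponentCoder()
coder.activate_texture_parameter_set({'id': 7})
print(coder.code_texture('texture-bits')['ps_id'])  # 7
```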
Abstract:
In general, techniques are described for separately processing depth and texture components of video data. A device configured to process video data including a view component comprised of a depth component and a texture component may perform various aspects of the techniques. The device may comprise a processor configured to determine a supplemental enhancement information message that applies when processing the view component of the video data, and determine a nested supplemental enhancement information message that applies in addition to the supplemental enhancement information message when processing the depth component of the view component.
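A minimal sketch of the message-scoping rule above (the data shapes are assumptions for illustration): one SEI message applies to the view component as a whole, and a nested SEI message applies in addition when the depth component is processed.

```python
# Sketch: determine which SEI messages apply to a given component of a
# view component; the nested message applies only for the depth component.

def applicable_sei_messages(component, view_sei, nested_sei):
    messages = [view_sei]            # applies to the whole view component
    if component == 'depth':
        messages.append(nested_sei)  # applies in addition, for depth only
    return messages

print(len(applicable_sei_messages('texture', {'id': 1}, {'id': 2})))  # 1
print(len(applicable_sei_messages('depth', {'id': 1}, {'id': 2})))    # 2
```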
Abstract:
A block-request streaming system provides for low-latency streaming of a media presentation. A plurality of media segments are generated according to an encoding protocol. Each media segment includes a random access point. A plurality of media fragments are encoded according to the same protocol. The media segments are aggregated from the media fragments.
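The aggregation step can be sketched as below; the fragment/segment structures and the rule that each segment's first fragment carries the random access point are illustrative assumptions, not the claimed protocol.

```python
# Sketch: aggregate media fragments (encoded under one protocol) into
# media segments, each of which includes a random access point so playback
# can begin at any segment boundary.

def aggregate_segments(fragments, fragments_per_segment):
    segments = []
    for i in range(0, len(fragments), fragments_per_segment):
        group = fragments[i : i + fragments_per_segment]
        # Each segment must include a random access point; here we require
        # the first fragment of every segment to carry one.
        assert group[0]['rap'], "segment must start with a random access point"
        segments.append({'fragments': group})
    return segments

# Fragments alternate: even-indexed fragments carry a random access point.
frags = [{'id': n, 'rap': n % 2 == 0} for n in range(6)]
segs = aggregate_segments(frags, fragments_per_segment=2)
print(len(segs))  # 3 segments of 2 fragments each
```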
Abstract:
A device comprising a video file creation module is configured to obtain a plurality of slices of coded video content. Parameter sets are associated with the coded video content. The video file creation module encapsulates the plurality of slices of coded video content within one or more access units of a video stream. A first type of parameter set may be encapsulated within one or more access units of the video stream. A second type of parameter set may be encapsulated within a sample description. The sample description may include an indicator identifying a number of parameter sets stored within one or more access units of the video stream.
Abstract:
An example method for encoding or decoding video data includes storing, by a video coder and in a reference picture buffer, a version of a current picture of the video data, including the current picture in a reference picture list (RPL) used to predict the current picture, and coding, by the video coder and based on the RPL, a block of video data in the current picture based on a predictor block of video data included in the version of the current picture stored in the reference picture buffer.
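The key idea above (predicting a block from the current picture itself, placed in its own reference picture list) can be sketched with assumed data layouts; the function and list names are illustrative, not the codec's API.

```python
# Sketch: the current picture is included in the reference picture list,
# so a block can be predicted from an already-reconstructed region of the
# same picture via a displacement (block vector).

def predict_block(ref_picture, x, y, block_vector, size):
    """Copy a size x size predictor block at (x, y) displaced by block_vector."""
    bvx, bvy = block_vector
    return [row[x + bvx : x + bvx + size]
            for row in ref_picture[y + bvy : y + bvy + size]]

# An 8x8 "picture" of sample values, stored in the reference picture buffer.
current_picture = [[r * 8 + c for c in range(8)] for r in range(8)]
reference_picture_list = [current_picture]  # current picture as a reference

pred = predict_block(reference_picture_list[0], x=4, y=4,
                     block_vector=(-4, -4), size=2)
print(pred)  # [[0, 1], [8, 9]]
```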
Abstract:
This disclosure describes techniques for 3D video coding. In particular, this disclosure is related to techniques for advanced residual prediction (ARP) in 3D-HEVC. According to one technique of this disclosure, when performing inter-view ARP for a bi-directionally predicted block, the video coder may determine a motion vector for a first corresponding block as part of performing ARP for a first prediction direction and reuse that determined motion vector when performing ARP for a second prediction direction. According to another technique, for a bi-directionally predicted block, a video coder may apply ARP in only one direction for a chroma component of a block but apply ARP in two directions for a luma component of the block. According to another technique, a video coder may selectively apply ARP to chroma components based on block size. These simplifications, as well as other techniques included in this disclosure, may reduce overall coding complexity.
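The two simplifications (reusing the determined motion vector for the second prediction direction, and restricting chroma ARP to one direction) can be sketched as control flow; the helper names and data shapes are assumptions, not 3D-HEVC syntax.

```python
# Sketch: for a bi-directionally predicted block, derive the corresponding
# block's motion vector once and reuse it for both prediction directions;
# apply ARP in two directions for luma but only one direction for chroma.

def determine_motion_vector(corresponding_block):
    # Stand-in for the (potentially expensive) motion-vector derivation.
    return corresponding_block['mv']

def plan_arp_passes(block):
    mv = determine_motion_vector(block['corresponding'])  # direction 0
    luma_passes = [('luma', 0, mv), ('luma', 1, mv)]      # reuse mv for dir 1
    chroma_passes = [('chroma', 0, mv)]                   # chroma: one dir only
    return luma_passes + chroma_passes

passes = plan_arp_passes({'corresponding': {'mv': (2, -1)}})
print(len(passes))  # 3: two luma directions, one chroma direction
```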
Abstract:
This disclosure describes techniques for simplifying depth inter mode coding in a three-dimensional (3D) video coding process, such as 3D-HEVC. The techniques include generating a motion parameter candidate list, e.g., merging candidate list, for a current depth prediction unit (PU). In some examples, the described techniques include determining that a sub-PU motion parameter inheritance (MPI) motion parameter candidate is unavailable for inclusion in the motion parameter candidate list for the current depth PU if motion parameters of a texture block co-located with a representative block of the current depth PU are unavailable. In some examples, the described techniques include deriving a sub-PU MPI candidate for inclusion in the motion parameter candidate list for the current depth PU only if a partition mode of the current depth PU is 2N×2N.
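The two availability rules above can be sketched as below; the dictionary shapes and helper names are illustrative assumptions, not 3D-HEVC syntax elements.

```python
# Sketch: derive the sub-PU MPI candidate for the merging candidate list
# only when (a) the depth PU's partition mode is 2Nx2N and (b) the
# co-located texture block's motion parameters are available.

def sub_pu_mpi_candidate(depth_pu, co_located_texture_motion):
    # Rule: derive the sub-PU MPI candidate only for 2Nx2N partitioning.
    if depth_pu['partition_mode'] != '2Nx2N':
        return None
    # Rule: unavailable if the co-located texture block's motion
    # parameters (for the representative block) are unavailable.
    if co_located_texture_motion is None:
        return None
    return {'type': 'sub-PU MPI', 'motion': co_located_texture_motion}

def build_merge_list(depth_pu, co_located_texture_motion, other_candidates):
    merge_list = []
    mpi = sub_pu_mpi_candidate(depth_pu, co_located_texture_motion)
    if mpi is not None:
        merge_list.append(mpi)
    merge_list.extend(other_candidates)
    return merge_list

lst = build_merge_list({'partition_mode': '2Nx2N'}, {'mv': (0, 1)},
                       [{'type': 'spatial'}])
print([c['type'] for c in lst])  # ['sub-PU MPI', 'spatial']
```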