Abstract:
An interlayer video decoding method and apparatus and an interlayer video encoding method and apparatus are provided. The decoding method includes: reconstructing, based on encoding information obtained from a bitstream, a first layer image and a first layer depth map; determining whether a disparity vector is predictable using peripheral blocks of a second layer current block; and when the disparity vector is not predictable using the peripheral blocks, determining a disparity vector of the second layer current block using a default disparity vector and the reconstructed first layer depth map.
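A short Python sketch of the fallback path described above is given below. It assumes block-level neighbor records, a list-of-lists depth map, a max-of-corners depth sampling rule, and a linear depth-to-disparity conversion with illustrative constants; none of these names or values come from the abstract itself.

```python
# Sketch: if no peripheral (neighboring) block supplies a disparity vector,
# derive one from a default disparity vector plus the reconstructed
# first-layer depth map. Helper names and constants are assumptions.

def neighbor_disparity(neighbors):
    """Return the first disparity vector found among peripheral blocks, else None."""
    for block in neighbors:
        if block is not None and block.get("disparity_vector") is not None:
            return block["disparity_vector"]
    return None

def depth_to_disparity(depth_value, scale=4, offset=0):
    """Assumed linear depth-to-disparity conversion (scale/offset would come
    from camera parameters in practice)."""
    return (depth_value * scale + offset) >> 8

def derive_disparity_vector(neighbors, depth_map, x, y, w, h, default_dv=(0, 0)):
    dv = neighbor_disparity(neighbors)
    if dv is not None:
        return dv                                  # predictable from peripheral blocks
    # Locate the co-located depth block using the default disparity vector
    # (the located block is assumed to lie inside the depth map).
    x0, y0 = x + default_dv[0], y + default_dv[1]
    corners = [depth_map[y0][x0],         depth_map[y0][x0 + w - 1],
               depth_map[y0 + h - 1][x0], depth_map[y0 + h - 1][x0 + w - 1]]
    # Convert a representative depth value (here: the maximum corner) to disparity.
    return (depth_to_disparity(max(corners)), 0)
```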
Abstract:
An interlayer video decoding method is provided. The interlayer video decoding method includes determining a disparity vector for performing interlayer prediction on a second layer current block with reference to a first layer image; determining a first layer reference location corresponding to the determined disparity vector in relation to a location of the second layer current block; obtaining motion information of at least one peripheral block located in the periphery of the first layer reference location; and adding at least one piece of the obtained motion information to a candidate list for inter prediction.
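A minimal Python sketch of this candidate derivation follows, assuming a first-layer motion field stored on a 16x16 grid and two illustrative probe positions (block center and bottom-right peripheral block) around the reference location; the data layout and candidate limit are assumptions.

```python
# Sketch: map the current block into the first layer with the disparity vector,
# look up motion information around that reference location, and append it to
# the inter-prediction candidate list.

def add_interlayer_motion_candidates(candidate_list, motion_field,
                                     x, y, w, h, dv, max_candidates=5):
    ref_x, ref_y = x + dv[0], y + dv[1]            # first-layer reference location
    probe_positions = [
        (ref_x + w // 2, ref_y + h // 2),          # block covering the center
        (ref_x + w,      ref_y + h),               # bottom-right peripheral block
    ]
    for px, py in probe_positions:
        if len(candidate_list) >= max_candidates:
            break
        # motion_field maps the top-left corner of each 16x16 cell to a
        # (motion vector, reference index) tuple.
        motion = motion_field.get(((px >> 4) << 4, (py >> 4) << 4))
        if motion is not None and motion not in candidate_list:
            candidate_list.append(motion)
    return candidate_list
```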
Abstract:
Disclosed is a video decoding method including: obtaining a disparity vector having components in sub-pixel units for interlayer prediction between images belonging to a current layer and a reference layer; determining a position of an integer pixel of the reference layer corresponding to the position indicated by the obtained disparity vector from the position of a current pixel of the current layer; and decoding the image of the current layer by using prediction information on a candidate area of the reference layer corresponding to the determined position of the integer pixel.
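The rounding from a sub-pixel disparity vector to an integer reference-pixel position might look like the sketch below, which assumes quarter-pixel disparity components; the unit and the +2 rounding offset are assumptions, not taken from the abstract.

```python
# Sketch: round a sub-pixel disparity vector (assumed quarter-pixel units) to
# the nearest integer pixel position in the reference layer.

def integer_pixel_position(cur_x, cur_y, dv_quarter_pel):
    dv_x, dv_y = dv_quarter_pel
    ref_x = cur_x + ((dv_x + 2) >> 2)   # +2 rounds to nearest; >> floors (arithmetic shift)
    ref_y = cur_y + ((dv_y + 2) >> 2)
    return ref_x, ref_y

# Example: integer_pixel_position(64, 32, (13, -5)) -> (67, 31)
```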
Abstract:
Apparatuses and methods configured to encode and decode multi-layer video are provided. A method of prediction-decoding a multi-layer video includes obtaining information indicating whether a decoded picture buffer (DPB) storing a first layer and a DPB storing a second layer operate identically, and operating the DPB storing the second layer based on the obtained information.
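A hedged sketch of the second step is given below, with the obtained information modeled as a boolean flag and DPB operations reduced to removal by picture order count (POC); the flag name and class layout are illustrative assumptions.

```python
# Sketch: a minimal DPB model and the mirroring of a removal operation onto the
# second-layer DPB when the obtained information says both DPBs operate identically.

class DecodedPictureBuffer:
    def __init__(self, max_size=6):
        self.max_size = max_size
        self.pictures = []

    def store(self, picture):
        self.pictures.append(picture)
        while len(self.pictures) > self.max_size:
            self.pictures.pop(0)                   # bump the oldest picture (simplified)

    def remove(self, poc):
        self.pictures = [p for p in self.pictures if p["poc"] != poc]

def remove_from_dpbs(poc, first_layer_dpb, second_layer_dpb, dpbs_operate_identically):
    first_layer_dpb.remove(poc)
    if dpbs_operate_identically:                   # information obtained from the bitstream
        second_layer_dpb.remove(poc)
```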
Abstract:
Provided is a method of reconstructing multilayer images, including obtaining random access point (RAP) picture information of a plurality of layers including a base layer and an enhancement layer, independently decoding a RAP picture of the base layer by using the RAP picture information, and independently decoding a RAP picture of the enhancement layer by using the RAP picture information.
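One way to picture independent per-layer random access is the sketch below; the rap_info layout, the decode_picture stub, and the POC-based start rule are assumptions for illustration only.

```python
# Sketch: each layer is decoded from its own RAP picture, without depending on
# where the other layers allow random access.

def decode_picture(pic):
    """Placeholder for actual picture decoding."""
    return {"poc": pic["poc"], "layer_id": pic["layer_id"]}

def decode_from_random_access(pictures_by_layer, rap_info):
    decoded = {}
    for layer_id, pictures in pictures_by_layer.items():
        start_poc = rap_info[layer_id]["first_rap_poc"]     # per-layer RAP position
        decoded[layer_id] = [decode_picture(pic)
                             for pic in pictures
                             if pic["poc"] >= start_poc]
    return decoded
```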
Abstract:
A multi-layer video coding method includes generating network abstraction layer (NAL) units for each data unit by dividing a multi-layer video according to data units, and adding scalable information to a video parameter set (VPS) NAL unit from among the pieces of transmission unit data for each data unit.
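As an illustration of carrying scalable information in NAL units, the sketch below packs an HEVC-style two-byte NAL unit header (forbidden_zero_bit, nal_unit_type, nuh_layer_id, nuh_temporal_id_plus1) and uses it for a VPS NAL unit; the abstract does not name the exact syntax, so this layout is an assumption based on the HEVC header format.

```python
# Sketch: pack a two-byte HEVC-style NAL unit header carrying scalable
# information (layer id, temporal id); the payload handling is omitted.

def pack_nal_header(nal_unit_type, layer_id, temporal_id):
    """Layout: 1-bit forbidden_zero_bit, 6-bit nal_unit_type,
    6-bit nuh_layer_id, 3-bit nuh_temporal_id_plus1."""
    assert 0 <= nal_unit_type < 64 and 0 <= layer_id < 64 and 0 <= temporal_id < 7
    value = (nal_unit_type << 9) | (layer_id << 3) | (temporal_id + 1)
    return value.to_bytes(2, "big")

# A VPS NAL unit (HEVC nal_unit_type 32) for the base layer, lowest temporal sub-layer:
vps_header = pack_nal_header(nal_unit_type=32, layer_id=0, temporal_id=0)
```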
Abstract:
Provided are a multi-view video decoding apparatus and method and a multi-view video encoding apparatus and method. The decoding method includes: determining whether a prediction mode of a current block being decoded is a merge mode; when the prediction mode is determined to be the merge mode, forming a merge candidate list including at least one of an inter-view candidate, a spatial candidate, a disparity candidate, a view synthesis prediction candidate, and a temporal candidate; and predicting the current block by selecting a merge candidate for predicting the current block from the merge candidate list, wherein whether to include, in the merge candidate list, at least one of a view synthesis prediction candidate for an adjacent block of the current block and a view synthesis prediction candidate for the current block is determined based on whether view synthesis prediction is performed on the adjacent block and the current block.
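A simplified Python sketch of such a merge candidate list follows; the only behavior taken from the abstract is gating the view synthesis prediction (VSP) candidates on whether VSP was used, while the candidate ordering, field names, and list size are assumptions.

```python
# Sketch of a merge candidate list with inter-view, spatial, disparity, VSP and
# temporal candidates.

def build_merge_candidate_list(current, adjacent_blocks, max_candidates=6):
    candidates = []

    def push(cand):
        if cand is not None and cand not in candidates and len(candidates) < max_candidates:
            candidates.append(cand)

    push(current.get("inter_view_candidate"))
    for nb in adjacent_blocks:                         # spatial candidates
        push(nb.get("motion"))
        # The VSP candidate of an adjacent block is included only when that
        # block was itself coded with view synthesis prediction.
        if nb.get("vsp_used"):
            push(("VSP", nb.get("position")))
    push(current.get("disparity_candidate"))
    if current.get("vsp_used"):                        # VSP candidate of the current block
        push(("VSP", "current"))
    push(current.get("temporal_candidate"))
    return candidates
```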
Abstract:
An interlayer video decoding method comprises reconstructing a first layer image based on encoding information acquired from a first layer bitstream; reconstructing a second layer current block, which is determined to have a predetermined partition mode and prediction mode, by using interlayer prediction information acquired from a second layer bitstream and a first layer reference block of the first layer reconstruction image that corresponds to the current block to be reconstructed in the second layer; determining whether to perform luminance compensation on the second layer current block in a partition mode in which the second layer current block is not split; and compensating for the luminance of the second layer current block according to whether luminance compensation is performed, and reconstructing a second layer image including the second layer current block whose luminance is compensated for.
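The partition-mode gating of the luminance-compensation decision can be sketched as follows; the PART_2Nx2N constant, the read_flag interface, and the scale/offset compensation model are illustrative assumptions rather than the claimed syntax.

```python
# Sketch: the luminance-compensation decision is read only when the second layer
# current block is not split; applying the compensation uses an assumed
# scale/offset model.

PART_2Nx2N = 0   # partition mode in which the block is not split

def parse_luminance_compensation_flag(reader, part_mode):
    """`reader` is any object with a read_flag() method (assumed interface)."""
    if part_mode == PART_2Nx2N:
        return reader.read_flag()
    return False                         # inferred off for split partition modes

def apply_luminance_compensation(pred_block, scale, offset, bit_depth=8):
    max_val = (1 << bit_depth) - 1
    return [[min(max(((s * scale) >> 6) + offset, 0), max_val) for s in row]
            for row in pred_block]
```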
Abstract:
A method of generating a parameter set includes obtaining common information to be commonly inserted into at least two lower parameter sets referring to the same upper parameter set; determining whether the common information is to be added to at least one of the upper parameter set and the at least two lower parameter sets; and adding the common information to at least one of the upper parameter set and the at least two lower parameter sets, based on a result of the determining.
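A toy Python sketch of hoisting common information into the upper parameter set follows, with parameter sets modeled as dictionaries; the rule of hoisting only fields shared with identical values is an assumption, since the abstract leaves the decision criterion open.

```python
# Sketch: fields shared with identical values by every lower parameter set are
# hoisted into the upper parameter set they refer to.

def hoist_common_info(upper_ps, lower_ps_list):
    if not lower_ps_list:
        return upper_ps, lower_ps_list
    first = lower_ps_list[0]
    shared_keys = set(first)
    for ps in lower_ps_list[1:]:
        shared_keys &= {k for k in ps if k in first and ps[k] == first[k]}
    for key in shared_keys:
        upper_ps[key] = first[key]       # add the common information to the upper set
        for ps in lower_ps_list:
            ps.pop(key, None)            # lower sets now rely on the upper set for it
    return upper_ps, lower_ps_list
```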
Abstract:
Provided are an inter-layer video encoding method and apparatus therefor and an inter-layer video decoding method and apparatus therefor. An inter-layer video decoding method involves reconstructing a first layer image based on encoding information obtained from a first layer bitstream; in order to reconstruct a second layer block determined to have a predetermined partition type and prediction mode, determining whether to perform illumination compensation for the reconstructed second layer block, which is determined by using a first layer reference block that is from among the reconstructed first layer image and corresponds to the second layer block; generating the reconstructed second layer block by using inter-layer prediction information obtained from a second layer bitstream and the first layer reference block; and generating a second layer image including the reconstructed second layer block whose illumination is determined according to whether the illumination compensation was performed.
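A minimal sketch of applying the compensation once the decision has been made is shown below, assuming a mean-based offset derived from neighboring reconstructed samples of the current and reference blocks (scale fixed to 1); the abstract does not specify the compensation model, so this is illustrative only.

```python
# Sketch: derive a mean-based illumination offset from neighboring reconstructed
# samples of the current block and of the first layer reference block, then add
# it to the reference samples.

def derive_illumination_offset(cur_neighbor_samples, ref_neighbor_samples):
    if not cur_neighbor_samples or not ref_neighbor_samples:
        return 0
    cur_mean = sum(cur_neighbor_samples) / len(cur_neighbor_samples)
    ref_mean = sum(ref_neighbor_samples) / len(ref_neighbor_samples)
    return round(cur_mean - ref_mean)

def compensate_block(ref_block, offset, bit_depth=8):
    max_val = (1 << bit_depth) - 1
    return [[min(max(sample + offset, 0), max_val) for sample in row]
            for row in ref_block]
```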