Abstract:
Provided are a video encoding method and a video decoding method based on spatial subdivisions. The methods include splitting a picture into a first tile and a second tile, splitting a current tile from among the first tile and the second tile into at least one slice segment, encoding the first tile and the second tile independently of each other, and encoding maximum coding units of a current slice segment from among the at least one slice segment included in the current tile.
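The tile-then-slice-segment partitioning described above can be sketched as follows. This is a minimal illustration with invented names and dimensions (LCU indices as integers, a single row of LCUs, fixed segment size); it is not the claimed method itself.

```python
def split_picture_into_tiles(width_in_lcus, tile_boundary):
    """Split a picture (modeled as a row of LCU indices) into two tiles."""
    lcus = list(range(width_in_lcus))
    return lcus[:tile_boundary], lcus[tile_boundary:]

def split_tile_into_slice_segments(tile_lcus, segment_size):
    """Split a tile's LCUs into slice segments of at most segment_size LCUs."""
    return [tile_lcus[i:i + segment_size]
            for i in range(0, len(tile_lcus), segment_size)]

def encode_tiles_independently(tiles, segment_size):
    """Each tile is partitioned and coded without reference to the other tile."""
    return [split_tile_into_slice_segments(tile, segment_size) for tile in tiles]
```

Note that the slice-segment boundaries never cross a tile boundary, which is what allows the two tiles to be encoded independently.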
Abstract:
Provided are a method and apparatus for interpolating an image. The method includes: selecting a first filter, from among a plurality of different filters, for interpolating between pixel values of integer pixel units, according to an interpolation location; and generating at least one pixel value of at least one fractional pixel unit by interpolating between the pixel values of the integer pixel units by using the selected first filter.
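A location-dependent filter selection of the kind described above might look like the sketch below. The tap values are placeholders chosen only so the arithmetic works out (each set sums to 64); they are not the filters claimed in the abstract.

```python
# Candidate filters keyed by interpolation location (fractional-pel offset).
FILTERS = {
    0.25: [-1, 4, -10, 58, 17, -5, 1, 0],    # example quarter-pel taps
    0.5:  [-1, 4, -11, 40, 40, -11, 4, -1],  # example half-pel taps
}

def select_filter(location):
    """Choose a first filter from the candidate set according to location."""
    return FILTERS[location]

def interpolate(integer_pixels, location):
    """Generate one fractional-pel value from eight integer-pel values."""
    taps = select_filter(location)
    value = sum(t * p for t, p in zip(taps, integer_pixels))
    return value >> 6  # normalize; each example tap set sums to 64
```

A flat region of value 100 interpolates back to 100 at any location, which is a quick sanity check that the taps are normalized.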
Abstract:
A method, performed by a device, of processing an image may include: obtaining a circular image generated by photographing a target space through a fisheye lens; generating metadata including lens shading compensation information for correcting color information of the obtained circular image; and transmitting the obtained circular image and the metadata to a terminal.
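The metadata flow above can be sketched minimally as follows. The field names and the polynomial gain model for lens shading compensation are assumptions for illustration; the abstract does not specify the compensation representation.

```python
def lens_shading_gain(radius, coeffs):
    """Compensation gain as a polynomial in normalized image radius (assumed model)."""
    return sum(c * radius ** i for i, c in enumerate(coeffs))

def build_metadata(shading_coeffs):
    """Metadata carrying lens shading compensation info for the circular image."""
    return {"lens_shading_compensation": shading_coeffs}

def transmit(circular_image, metadata):
    """Bundle the circular image and its metadata for delivery to the terminal."""
    return {"image": circular_image, "metadata": metadata}
```

The terminal would evaluate the gain per pixel radius to undo the fisheye lens's brightness/color falloff.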
Abstract:
Provided are entropy encoding and entropy decoding for video encoding and decoding. The video entropy decoding method includes: determining a bin string and a bin index for a maximum coding unit that is obtained from a bitstream; determining a value of a syntax element by comparing the determined bin string with bin strings that are assignable to the syntax element at the bin index; storing context variables for the maximum coding unit when the syntax element is a last syntax element in the maximum coding unit, a dependent slice segment is includable in a picture in which the maximum coding unit is included, and the maximum coding unit is a last maximum coding unit in a slice segment; and restoring symbols of the maximum coding unit by using the determined value of the syntax element.
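The three-part condition for storing context variables is the core of the abstract and reduces to a conjunction. The sketch below restates it directly; the function name is illustrative.

```python
def should_store_context(is_last_syntax_in_lcu,
                         dependent_slice_segments_enabled,
                         is_last_lcu_in_slice_segment):
    """Context variables are stored only when all three conditions hold, so a
    following dependent slice segment can resume entropy decoding with them."""
    return (is_last_syntax_in_lcu
            and dependent_slice_segments_enabled
            and is_last_lcu_in_slice_segment)
```

If any condition fails (for instance, dependent slice segments cannot appear in the picture), no context snapshot is needed.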
Abstract:
Provided is a method of decoding a scalable video comprising a plurality of layers, the method including: obtaining, from a bitstream, first information indicating a number that is 1 less than a maximum number of layers allowed to refer to a video parameter set with respect to the scalable video from among layers included in each coded video sequence; decoding a first picture included in a first layer; and performing, for a second picture included in a second layer, at least one of inter-layer sample prediction and inter-layer motion prediction between the first layer and the second layer by referring to the decoded first picture, wherein the first layer is a base layer corresponding to a lowest layer of the plurality of layers, the second layer is a layer using a decoding method different from that of the first layer, and when the first and second layers use different decoding methods, the first information has a value greater than 0.
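The "maximum number of layers minus 1" signalling and the constraint for hybrid (different-codec) layering can be sketched as below. Function names are illustrative, not from the abstract.

```python
def max_layers(first_information):
    """The first information signals (maximum number of layers) - 1."""
    return first_information + 1

def check_hybrid_constraint(first_information, same_decoding_method):
    """When the base layer and an enhancement layer use different decoding
    methods, the signalled value must be greater than 0 (i.e. more than one
    layer). With a single decoding method there is no such constraint."""
    if not same_decoding_method:
        return first_information > 0
    return True
```

A value of 0 (exactly one layer) is therefore inconsistent with a bitstream whose two layers use different decoding methods.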
Abstract:
Provided is a multi-layer video decoding method. The multi-layer video decoding method includes: obtaining, from a bitstream, dependency information indicating whether a first layer refers to a second layer; if the dependency information indicates that the first layer refers to the second layer, obtaining a reference picture set of the first layer, based on whether type information of the first layer and type information of the second layer are equal to each other; and decoding encoded data of a current image included in the first layer, based on the reference picture set.
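One way to read the dependency-and-type check above is sketched below. The concrete rule (same-type layers contribute inter-layer reference pictures, differing types do not) is an assumption for illustration; the abstract only says the reference picture set depends on whether the type information is equal.

```python
def inter_layer_reference_pictures(depends_on_second_layer,
                                   first_layer_type, second_layer_type,
                                   second_layer_pictures):
    """Build the first layer's inter-layer reference picture set."""
    if not depends_on_second_layer:
        return []  # dependency information says the first layer has no reference
    if first_layer_type == second_layer_type:
        return list(second_layer_pictures)  # assumed rule: same type => usable refs
    return []                               # assumed rule: type mismatch => none
```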
Abstract:
Provided is an inter-layer video decoding method. The inter-layer video decoding method includes: determining whether a current block is split into two or more regions by using a depth block corresponding to the current block; generating a merge candidate list including at least one merge candidate for the current block, based on a result of the determination; determining motion information of the current block by using motion information of one of the at least one merge candidate included in the merge candidate list; and decoding the current block by using the determined motion information, wherein the generating of the merge candidate list includes determining whether a view synthesis prediction candidate is available as the merge candidate according to the result of the determination.
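The split determination and its effect on the merge candidate list can be sketched as follows. The threshold-based split test and the specific rule "view synthesis prediction is unavailable when the block is split" are assumptions for illustration; the abstract only states that VSP availability follows from the determination result.

```python
def is_split_by_depth(depth_samples, threshold=128):
    """Treat the current block as split into two or more regions when its
    corresponding depth block has samples on both sides of a threshold."""
    return (any(s < threshold for s in depth_samples)
            and any(s >= threshold for s in depth_samples))

def build_merge_candidate_list(base_candidates, depth_samples):
    """Assemble merge candidates; gate the VSP candidate on the split result."""
    candidates = list(base_candidates)
    if not is_split_by_depth(depth_samples):  # assumed direction of the rule
        candidates.append("VSP")
    return candidates
```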
Abstract:
Provided is an inter-layer video decoding method including: obtaining prediction mode information of a depth image; generating a prediction block of a current block forming the depth image, based on the obtained prediction mode information; and decoding the depth image by using the prediction block, wherein the obtaining of the prediction mode information includes obtaining a first flag, which indicates whether the depth image allows a method of predicting the depth image by splitting blocks forming the depth image into at least two partitions using a wedgelet as a boundary, and a second flag, which indicates whether the depth image allows a method of predicting the depth image by splitting the blocks forming the depth image into at least two partitions using a contour as a boundary.
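The two flags above gate two partition-based prediction methods for the depth image: one using a wedgelet (straight-line) boundary and one using a contour boundary. A direct restatement, with illustrative names:

```python
def allowed_partition_modes(wedgelet_flag, contour_flag):
    """Map the two signalled flags to the partition-based depth prediction
    methods the depth image is allowed to use."""
    modes = []
    if wedgelet_flag:
        modes.append("wedgelet")  # split into >= 2 partitions along a wedgelet
    if contour_flag:
        modes.append("contour")   # split into >= 2 partitions along a contour
    return modes
```

When both flags are off, neither partition-based method is used for the depth image.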
Abstract:
An inter-view video decoding method may include determining a disparity vector of a current second-view depth block by using a specific sample value selected within a sample value range determined based on a preset bit-depth, detecting a first-view depth block corresponding to the current second-view depth block by using the disparity vector, and reconstructing the current second-view depth block by generating a prediction block of the current second-view depth block based on coding information of the first-view depth block.
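The bit-depth-based sample selection and the subsequent disparity derivation can be sketched as below. Choosing the mid value of the sample range, and the linear scale/offset/shift conversion, are assumptions for illustration; the abstract only says a specific sample value is selected within a range determined by a preset bit depth.

```python
def default_depth_sample(bit_depth):
    """Pick the mid value of the range [0, 2**bit_depth - 1] (assumed choice)."""
    return 1 << (bit_depth - 1)

def depth_to_disparity(depth_sample, scale, offset, shift):
    """Linear depth-to-disparity conversion (a common form in 3D video coding)."""
    return (depth_sample * scale + offset) >> shift
```

The resulting disparity vector locates the first-view depth block whose coding information predicts the current second-view depth block.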
Abstract:
Provided is a merge mode for determining motion information of pictures that constitute a multiview video by using motion information of another block. A multiview video decoding method includes obtaining motion inheritance information specifying whether motion information of a corresponding block of a first layer, which corresponds to a current block of a second layer, is available as motion information of the second layer; obtaining a merge candidate list by selectively including the motion information of the corresponding block among merge candidates when the current block, which was encoded according to the merge mode, is decoded; determining a merge candidate included in the merge candidate list according to merge candidate index information; and obtaining motion information of the current block based on the merge candidate.
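The selective inclusion and index-based selection above can be sketched as follows. Names and the candidate ordering are illustrative assumptions.

```python
def build_merge_list(motion_inheritance_available,
                     corresponding_block_motion, other_candidates):
    """Include the first-layer corresponding block's motion as a merge
    candidate only when motion inheritance information says it is available."""
    merge_list = []
    if motion_inheritance_available and corresponding_block_motion is not None:
        merge_list.append(corresponding_block_motion)
    merge_list.extend(other_candidates)
    return merge_list

def motion_for_current_block(merge_list, merge_index):
    """The decoded merge candidate index selects whose motion is inherited."""
    return merge_list[merge_index]
```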