Abstract:
A method of encoding a video signal using a graph-based transformation includes: generating a residual block using a prediction block generated according to an intra prediction mode; obtaining at least one of a self-loop weight indicating a weight of boundary pixels in the residual block or a correlation coefficient indicating an inter-pixel correlation, on the basis of a prediction angle corresponding to the intra prediction mode; generating a graph on the basis of at least one of the self-loop weight or the correlation coefficient; determining a graph-based transformation kernel on the basis of the graph; and performing a transform for the residual block using the graph-based transformation kernel.
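The steps above can be sketched numerically. Below is a minimal, hedged illustration of a graph-based transform (GBT) on a 1-D line graph: adjacent pixels are connected with an edge weight standing in for the correlation coefficient, the boundary pixel receives a self-loop weight, and the eigenvectors of the resulting graph Laplacian form the transform kernel. All function and parameter names (`gbt_kernel`, `rho`, `self_loop_w`) are illustrative, not the patent's notation.

```python
import numpy as np

def gbt_kernel(n, rho, self_loop_w):
    """Graph-based transform kernel for an n-pixel line graph.

    Adjacent pixels are linked by edges of weight `rho` (standing in for
    the inter-pixel correlation coefficient); the first (boundary) pixel
    carries a self-loop of weight `self_loop_w`, as the abstract derives
    from the intra prediction angle. Illustrative sketch only.
    """
    L = np.zeros((n, n))
    for i in range(n - 1):
        # Each edge adds rho to both endpoint degrees and -rho off-diagonal.
        L[i, i] += rho
        L[i + 1, i + 1] += rho
        L[i, i + 1] -= rho
        L[i + 1, i] -= rho
    L[0, 0] += self_loop_w  # boundary self-loop weight
    # Eigenvectors of the symmetric Laplacian give an orthogonal GBT basis.
    _, U = np.linalg.eigh(L)
    return U.T  # rows are the transform basis vectors

residual = np.array([4.0, -2.0, 1.0, 0.5])
T = gbt_kernel(4, rho=0.95, self_loop_w=0.5)
coeffs = T @ residual   # forward transform of the residual
recon = T.T @ coeffs    # inverse transform (T is orthogonal)
```

Because the Laplacian is symmetric, the kernel is orthogonal, so the inverse transform is simply its transpose.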
Abstract:
The present invention provides a method of decoding a video signal using a graph-based transform, comprising the steps of: extracting prediction unit partition information of a current coding unit from the video signal; obtaining a graph-based transform kernel from predetermined table information based on the prediction unit partition information; and performing an inverse transform on a transform unit using the graph-based transform kernel, wherein the graph-based transform kernel corresponds to at least one of the prediction unit partition information or an edge weight, and the edge weight is a predetermined value representing a correlation between pixels.
Abstract:
The present invention provides a 3D video decoding method and device. A 3D video decoding method according to the present invention comprises the steps of: determining whether to apply motion vector inheritance, which derives a motion vector of a depth picture using motion information of a texture picture; when it is determined to apply the motion vector inheritance, partitioning a current block within the depth picture into depth sub-blocks of a sub-block size for the motion vector inheritance; deriving a motion vector of each depth sub-block from a texture block within the texture picture corresponding to that depth sub-block; and deriving a reconstructed sample of the current block by generating a prediction sample of each depth sub-block on the basis of the motion vector of that depth sub-block.
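The sub-block motion vector inheritance described above can be sketched as a loop over the sub-blocks of the current depth block, copying each one's motion vector from the co-located texture position. The function and field names here (`inherit_sub_block_mvs`, `texture_mv_field`) are hypothetical, introduced only for illustration.

```python
def inherit_sub_block_mvs(block_x, block_y, block_w, block_h,
                          sub_size, texture_mv_field):
    """Motion vector inheritance sketch: split the current depth block
    into sub_size x sub_size sub-blocks and copy each one's motion
    vector from the corresponding texture block.

    `texture_mv_field` maps the (x, y) of a sub-block's top-left sample
    to the co-located texture motion vector. Illustrative names only.
    """
    mvs = {}
    for y in range(block_y, block_y + block_h, sub_size):
        for x in range(block_x, block_x + block_w, sub_size):
            # Inherit the co-located texture MV for this depth sub-block.
            mvs[(x, y)] = texture_mv_field((x, y))
    return mvs

# Example: a 16x16 depth block split into four 8x8 sub-blocks, with a
# toy texture MV field that varies across the block.
mvs = inherit_sub_block_mvs(0, 0, 16, 16, 8,
                            lambda p: (p[0] // 8, p[1] // 8))
```

Per-sub-block inheritance (rather than one vector for the whole block) lets the depth prediction follow motion that varies within the co-located texture area.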
Abstract:
The present invention relates to a 3D video decoding device and method. A 3D video decoding method according to the present invention comprises the steps of: obtaining a disparity value on the basis of a reference view and a predetermined value; deriving motion information of a current block in a depth picture on the basis of the disparity value; and generating a prediction sample of the current block on the basis of the motion information, wherein the reference view is a view of a reference picture in a reference picture list. According to the present invention, even when a base view cannot be accessed, a disparity vector can be derived on the basis of an available reference view index in a decoded picture buffer (DPB), and coding efficiency can be enhanced.
Abstract:
The present invention relates to a method for constructing a merge candidate list by using view synthesis prediction (VSP) and the like in multi-view video coding. The method for constructing the merge candidate list according to the present invention comprises the steps of: determining a prediction mode for a current block; deriving, as merge candidates, motion information from neighboring blocks of the current block when the prediction mode for the current block is a merge mode or a skip mode; and constructing the merge candidate list by using the motion information of the neighboring blocks and the disparity information derived from the neighboring blocks of the current block.
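The list-construction steps above can be sketched as follows: gather the available neighboring blocks' motion information, prune duplicates, then append a disparity-derived candidate. The names, candidate ordering, and list size limit are assumptions for illustration, not the patent's specification.

```python
def build_merge_list(spatial_neighbors, disparity_candidate, max_candidates=6):
    """Sketch of merge-candidate-list construction for merge/skip mode.

    `spatial_neighbors` holds motion-info entries from neighboring blocks
    (None when a neighbor is unavailable); `disparity_candidate` is motion
    information derived from neighbor disparity. Ordering, pruning, and
    the size limit are illustrative assumptions.
    """
    merge_list = []
    for mv in spatial_neighbors:
        if mv is not None and mv not in merge_list:  # prune duplicates
            merge_list.append(mv)
    if disparity_candidate is not None and disparity_candidate not in merge_list:
        merge_list.append(disparity_candidate)
    return merge_list[:max_candidates]

# Example: one unavailable neighbor, one duplicate, plus a disparity candidate.
lst = build_merge_list([{'mv': (1, 0)}, None, {'mv': (1, 0)}], {'mv': (3, 2)})
```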
Abstract:
The present invention relates to a video signal processing method and device capable of: obtaining a reference view block by using a predetermined motion vector; obtaining depth values of a reference depth block which corresponds to the reference view block; obtaining an inter-view motion vector for a current block by using at least one depth value of the reference depth block; and decoding the current block by using the inter-view motion vector.
Abstract:
The present invention relates to a method and an apparatus for coding a video signal, and more specifically, an inter-view motion vector is obtained by using a depth value of a depth block which corresponds to a current texture block, and an illumination difference is compensated. By doing so, the present invention can obtain an accurate prediction value of the current texture block and thus increase the accuracy of inter-view prediction.
Abstract:
The present invention provides a method for performing a transform, the method comprising the steps of: deriving a row transform set, a column transform set, and a permutation matrix on the basis of a given transform matrix (H) and an error tolerance parameter; obtaining row-column transform (RCT) coefficients on the basis of the row transform set, the column transform set, and the permutation matrix; and performing quantization and entropy encoding on the RCT coefficients, wherein the permutation matrix represents a matrix obtained by permuting the rows of an identity matrix.
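A minimal numerical sketch of a row-column transform with a permutation matrix is shown below. The composition used here (a row transform, a column transform, then a row permutation of the result) is an assumption for illustration; the abstract specifies only that the permutation matrix is an identity matrix with permuted rows, not how the factors are composed or derived from H.

```python
import numpy as np

def rct_forward(X, R, C, P):
    """Row-column transform sketch: apply a row transform R and a column
    transform C separably, then permute the result with P. The ordering
    of the factors is an illustrative assumption."""
    return P @ (R @ X @ C.T)

n = 4
# Illustrative orthogonal row/column transforms (QR of random matrices).
R = np.linalg.qr(np.random.default_rng(0).standard_normal((n, n)))[0]
C = np.linalg.qr(np.random.default_rng(1).standard_normal((n, n)))[0]
P = np.eye(n)[[2, 0, 3, 1]]  # identity matrix with permuted rows
X = np.arange(16.0).reshape(4, 4)

Y = rct_forward(X, R, C, P)          # forward RCT coefficients
X_rec = R.T @ (P.T @ Y) @ C          # inverse: undo permutation, then transforms
```

With orthogonal R and C, the inverse is exact: transposing the permutation and the two transforms recovers the input block.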
Abstract:
The present invention provides a method for decoding a video signal by using a graph-based transform, comprising the steps of: parsing a transform index from the video signal; generating a line graph on the basis of edge information on a target unit; aligning transform vectors for each segment of the line graph on the basis of a transform type corresponding to the transform index; acquiring a transform kernel by realigning the transform vectors for each segment of the line graph according to a predetermined condition; and performing an inverse transform for the target unit on the basis of the transform kernel.
Abstract:
The present invention provides a method for decoding a video signal using a graph-based transform, comprising the steps of: receiving, from the video signal, a transform index for a target block; deriving a graph-based transform kernel corresponding to the transform index, wherein the graph-based transform kernel is determined based on boundary information representing a property of a signal at a block boundary; and decoding the target block based on the graph-based transform kernel.