Abstract:
Provided is a video decoding method including: obtaining, from a bitstream, split information indicating whether a current block is to be split; splitting the current block into two or more sub-blocks when the split information indicates that the current block is to be split; determining lower horizontal coding order information of the sub-blocks according to higher horizontal coding order information applied to the current block, based on at least one of split information, size information, and neighboring block information of the current block; and decoding the sub-blocks according to the lower horizontal coding order information.
Abstract:
A method of decoding motion information includes: determining a coding factor value of a differential motion vector of a current block; when adaptive encoding has been applied to the differential motion vector, determining, based on information included in a bitstream, a first result value generated by applying the adaptive encoding to the differential motion vector; obtaining the differential motion vector by applying the determined coding factor value to the first result value according to a certain operation; and obtaining a motion vector of the current block, based on the obtained differential motion vector and a prediction motion vector of the current block.
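The steps above can be sketched in Python. Everything here is an illustrative assumption, not the claimed method: the abstract does not specify the "certain operation", so a left shift by the coding factor is used as one plausible choice, and all names (`decode_motion_vector`, `first_result`, `coding_factor`) are hypothetical.

```python
def decode_motion_vector(first_result, coding_factor, prediction_mv):
    """Recover a motion vector from an adaptively encoded differential MV.

    first_result: (x, y) value decoded from the bitstream after adaptive
                  encoding was applied to the differential motion vector.
    coding_factor: the determined coding factor value.
    prediction_mv: (x, y) prediction motion vector of the current block.
    """
    # Undo the adaptive encoding: the "certain operation" is assumed here
    # to be a left shift by the coding factor.
    dmv_x = first_result[0] << coding_factor
    dmv_y = first_result[1] << coding_factor
    # Motion vector = prediction motion vector + differential motion vector.
    return (prediction_mv[0] + dmv_x, prediction_mv[1] + dmv_y)
```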
Abstract:
Provided is a method of decoding motion information, in which information for determining motion-related information includes spatial information and temporal information: the spatial information indicates a direction of a spatial prediction candidate used in sub-units, from among spatial prediction candidates at a left side and an upper side of a current prediction unit, and the temporal information indicates a reference prediction unit of a previous picture, which is used for predicting the current prediction unit. Also provided is an encoding or decoding apparatus executing an encoding or decoding method.
Abstract:
An encoding device for encoding a bit stream including an image frame may include an encoder configured to determine a position of a pixel having a last non-zero coefficient in a conversion coefficient block constituting the image frame with reference to a predetermined pixel of the conversion coefficient block, identify a sub-block including the last non-zero coefficient, among a plurality of sub-blocks constituting the conversion coefficient block, convert a position of the last non-zero coefficient determined with reference to the predetermined pixel of the conversion coefficient block into a position with reference to a predetermined pixel of the identified sub-block, and encode the sub-block including the last non-zero coefficient and the converted position of the last non-zero coefficient.
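The coordinate conversion described above can be sketched as follows; the 4x4 sub-block size and the function name are assumptions for illustration, since the abstract does not fix a sub-block geometry.

```python
def to_subblock_position(last_x, last_y, sub_w=4, sub_h=4):
    """Convert the last non-zero coefficient's position, given relative to
    the predetermined (top-left) pixel of the coefficient block, into the
    index of the sub-block containing it and the position relative to that
    sub-block's own top-left pixel. 4x4 sub-blocks are assumed."""
    sub_col, sub_row = last_x // sub_w, last_y // sub_h   # which sub-block
    local_x, local_y = last_x % sub_w, last_y % sub_h     # position inside it
    return (sub_row, sub_col), (local_x, local_y)
```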
Abstract:
Provided is an encoding method, according to an exemplary embodiment, for encoding a last position of a significant transformation coefficient in lossless coding, the encoding method including: performing scanning from a first point to a second point of a coding unit in a predetermined order to obtain transformation coefficients included in the coding unit; determining a last position of a significant transformation coefficient, which is not 0, from among the transformation coefficients included in the coding unit; determining position information corresponding to the determined last position with respect to the second point; and encoding the determined position information.
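A minimal sketch of the scan-and-locate step, assuming the position "with respect to the second point" means the number of scan steps remaining to the end of the scan (the abstract does not define the distance metric, so this is an assumption):

```python
def last_significant_position(coeffs, scan_order):
    """Scan coefficients from the first point to the second point in the
    given order, find the last significant (nonzero) coefficient, and
    return its position expressed with respect to the second point,
    i.e. the number of scan steps remaining to the end of the scan."""
    last_idx = -1
    for i, pos in enumerate(scan_order):
        if coeffs[pos] != 0:
            last_idx = i  # keep updating: the final hit is the last position
    return len(scan_order) - 1 - last_idx
```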
Abstract:
Provided are a method and apparatus for determining a context model for entropy encoding and decoding of a transformation coefficient. According to the method and apparatus, a context set index ctxset is obtained based on color component information of a transformation unit, a location of a current subset, and whether there is a significant transformation coefficient having a value greater than a first critical value in a previous subset, and a context offset c1 is obtained based on a length of previous transformation coefficients having consecutive 1s. Also, a context index ctxidx for entropy encoding and decoding of a first critical value flag is determined based on the obtained context set index and the obtained context offset.
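An HEVC-style derivation following this description might look like the sketch below; the specific constants (two set groups per component, four contexts per set, an offset cap of 3) are assumptions borrowed from that style of scheme, not values stated in the abstract.

```python
def first_flag_context_index(is_luma, subset_index, prev_subset_has_gt1,
                             num_consecutive_ones):
    """Derive a context index ctxidx for the first-critical-value flag from
    a context set index ctxset and a context offset c1."""
    # Context set: depends on the subset location and on whether the previous
    # subset contained a coefficient greater than the first critical value.
    ctxset = (0 if subset_index == 0 else 2) + (1 if prev_subset_has_gt1 else 0)
    if not is_luma:
        ctxset += 4          # chroma assumed to use a separate group of sets
    # Context offset: length of the run of consecutive 1s, capped at 3.
    c1 = min(num_consecutive_ones, 3)
    return ctxset * 4 + c1   # four contexts per set assumed
```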
Abstract:
A method of decoding a motion vector includes: obtaining information indicating a motion vector resolution (MVR) of a current block from a bitstream; selecting one candidate block from among at least one candidate block, based on the MVR of the current block; and obtaining a motion vector of the current block corresponding to the MVR, by using a motion vector of the selected one candidate block as a prediction motion vector of the current block.
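Obtaining a motion vector "corresponding to the MVR" from a candidate's motion vector can be sketched as rounding to the signaled resolution; the shift-based representation of the MVR and the round-half-up scheme below are illustrative assumptions.

```python
def predict_mv_at_resolution(candidate_mv, mvr_shift):
    """Round a selected candidate block's motion vector to the current
    block's MVR so it can serve as the prediction motion vector.
    mvr_shift expresses the resolution as a power of two of the minimum
    motion vector unit (0 = finest resolution, no rounding)."""
    def round_component(v):
        if mvr_shift == 0:
            return v
        offset = 1 << (mvr_shift - 1)          # round half up
        return ((v + offset) >> mvr_shift) << mvr_shift
    return tuple(round_component(v) for v in candidate_mv)
```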
Abstract:
Provided is a video decoding method including: obtaining affine parameter group candidates of a current block based on whether adjacent blocks of the current block are decoded; determining an affine parameter group of the current block from among the affine parameter group candidates, according to affine parameter information of the current block; and reconstructing the current block, based on one or more affine parameters included in the affine parameter group.
Abstract:
Provided is a video decoding method including: obtaining, from a bitstream, split information indicating whether a current block is to be split; and when the split information indicates that the current block is to be split, splitting the current block into at least two lower blocks, obtaining encoding order information indicating an encoding order of the at least two lower blocks of the current block from the bitstream, determining a decoding order of the at least two lower blocks based on the encoding order information, and decoding the at least two lower blocks according to the decoding order.
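The order-controlled decoding above can be sketched as follows, assuming a binary split and a one-bit order flag (left-to-right vs. right-to-left); the split geometry, flag semantics, and all names are hypothetical.

```python
def decode_lower_blocks(block, split_flag, order_flag, decode_fn):
    """If the block is split, decode its two lower blocks in the order
    indicated by the encoding order information; otherwise decode the
    block itself. `block` is modeled as a flat list of samples."""
    if not split_flag:
        return [decode_fn(block)]
    mid = len(block) // 2
    left, right = block[:mid], block[mid:]   # binary split assumed
    # order_flag == 0: left then right; order_flag == 1: right then left.
    order = (left, right) if order_flag == 0 else (right, left)
    return [decode_fn(b) for b in order]
```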
Abstract:
The present disclosure relates to a method and device for filtering reference samples in intra-prediction. An encoded bitstream may be received, information regarding an intra-prediction mode of a current block may be obtained from the bitstream, a filter may be determined based on a signal component of the current block, a width and height of the current block, and a value of at least one of the reference samples neighboring the current block, filtered reference samples may be produced by applying the filter to the reference samples, and a predicted sample of the current block may be produced based on the filtered reference samples and the intra-prediction mode.
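The abstract leaves the filter unspecified; as one illustrative assumption, the determined filter could be the common [1, 2, 1]/4 smoothing filter applied along a line of reference samples, with the endpoints left unfiltered:

```python
def filter_reference_samples(ref):
    """Apply a [1, 2, 1]/4 smoothing filter to a line of neighboring
    reference samples, keeping the two endpoint samples unfiltered.
    The tap set is an assumption; the disclosure determines the filter
    from the block's signal component, size, and sample values."""
    out = list(ref)
    for i in range(1, len(ref) - 1):
        out[i] = (ref[i - 1] + 2 * ref[i] + ref[i + 1] + 2) // 4  # rounded
    return out
```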