Abstract:
A method and apparatus for encoding and decoding a video are provided. The method of encoding the video includes: determining, based on a size of a current prediction unit to be encoded, whether a unidirectional motion estimation mode and a bidirectional motion estimation mode are to be used; performing motion estimation and motion compensation on the current prediction unit according to a result of the determining; determining an optimum motion estimation mode of the current prediction unit based on an encoding cost of the current prediction unit obtained through the motion estimation and the motion compensation; and encoding information indicating the determined optimum motion estimation mode based on the size of the current prediction unit.
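The size-dependent mode decision described above can be sketched as follows. This is a minimal illustration, not the claimed implementation: the 4×8/8×4 size threshold, the mode names, and the cost callback are all assumptions introduced for the example.

```python
def allowed_motion_modes(pu_width, pu_height):
    """Motion-estimation modes permitted for a prediction unit of this size.

    Hypothetical rule: unidirectional estimation is always tried, while
    bidirectional estimation is skipped for the smallest PU sizes.
    """
    modes = ["uni_L0", "uni_L1"]              # unidirectional always allowed
    if (pu_width, pu_height) not in {(4, 8), (8, 4)}:
        modes.append("bi")                    # bidirectional for larger PUs
    return modes


def choose_best_mode(pu_width, pu_height, encoding_cost):
    """Pick the allowed mode with the minimum encoding cost."""
    return min(allowed_motion_modes(pu_width, pu_height), key=encoding_cost)
```

Restricting bidirectional estimation on small prediction units bounds the memory bandwidth of motion compensation, which is the usual motivation for this kind of size test.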
Abstract:
A video encoding method and apparatus and video decoding method and apparatus generate a restored image having a minimum error with respect to an original image based on offset merge information indicating whether offset parameters of a current block and at least one neighboring block from among blocks of video are identical.
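The merge-based reuse of offset parameters can be sketched as follows; the function and parameter names are hypothetical, assuming left and upper neighbors as the candidate blocks.

```python
def resolve_offset_params(merge_left, merge_up, left_params, up_params, parse_explicit):
    """Derive a block's offset parameters from offset merge information.

    When merge information signals that the current block's offset
    parameters are identical to a neighbor's, the neighbor's parameters
    are reused and no explicit offsets need to be parsed; otherwise the
    explicit offsets are read from the bitstream.
    """
    if merge_left:
        return left_params        # identical to left neighbor: reuse
    if merge_up:
        return up_params          # identical to upper neighbor: reuse
    return parse_explicit()       # otherwise parse explicit offsets
```

Reusing a neighbor's parameters is what lets the merge information reduce the bitrate spent on offsets while still minimizing the error against the original image.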
Abstract:
A motion prediction method includes determining, when a current slice is a B slice, a reference picture list to be used with respect to a current prediction unit from among prediction units included in a coding unit, and outputting, when a size of the current prediction unit is 4×8 or 8×4, inter-prediction index information of the current prediction unit indicating the reference picture list from among an L0 list and an L1 list, and when the size of the current prediction unit is not 4×8 or 8×4, the inter-prediction index information of the current prediction unit indicating the reference picture list from among the L0 list, the L1 list, and a bi-prediction list.
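The size-dependent signaling described above can be sketched as follows; the list labels are illustrative, and only the candidate-set selection (not the entropy coding of the index) is shown.

```python
def signalable_prediction_lists(pu_width, pu_height):
    """Reference-picture-list choices signalable for a B-slice PU.

    For 4x8 and 8x4 prediction units the inter-prediction index selects
    only between the L0 and L1 lists; for all other sizes bi-prediction
    is also a signalable choice.
    """
    if (pu_width, pu_height) in {(4, 8), (8, 4)}:
        return ["L0", "L1"]           # bi-prediction excluded for small PUs
    return ["L0", "L1", "BI"]
```

Shrinking the candidate set for small prediction units both avoids the worst-case memory bandwidth of small-block bi-prediction and lets the index be coded with fewer bits.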
Abstract:
Provided are a method and apparatus of encoding a video by compensating for a pixel value and a method and apparatus of decoding a video by compensating for a pixel value. The method of encoding the video includes: encoding image data; decoding the encoded image data and generating a restored image by performing loop filtering on the decoded image data; determining a compensation value corresponding to errors between a predetermined group of restored pixels in the restored image and corresponding original pixels, and a pixel group including a restored pixel to be compensated for by using the compensation value; and encoding the compensation value and transmitting the encoded compensation value and a bitstream of the encoded image data.
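The core arithmetic of such pixel-value compensation can be sketched as follows, assuming (for illustration only) that the compensation value is the mean error of the pixel group.

```python
def compensation_value(restored, original):
    """Mean error between a group of restored pixels and the originals.

    A single value per group keeps the signaling overhead small while
    still moving the restored pixels toward the original image.
    """
    return sum(o - r for r, o in zip(restored, original)) / len(restored)


def compensate(pixel_group, value):
    """Apply the compensation value to every restored pixel in the group."""
    return [p + value for p in pixel_group]
```

The decoder receives only the compensation value per group, adds it to the group's restored pixels, and thereby reduces the error against the original image without access to the original pixels.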
Abstract:
Provided are a method and apparatus for interpolating an image. The method includes: selecting a first filter, from among a plurality of different filters, for interpolating between pixel values of integer pixel units, according to an interpolation location; and generating at least one pixel value of at least one fractional pixel unit by interpolating between the pixel values of the integer pixel units by using the selected first filter.
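The location-dependent filter selection can be sketched as follows. The tap sets below are assumptions for illustration (8-tap filters whose coefficients sum to 64, resembling common video-codec interpolation filters), not the filters defined by the method.

```python
# Hypothetical filter bank: a different tap set per interpolation location.
FILTERS = {
    0.25: (-1, 4, -10, 58, 17, -5, 1, 0),
    0.5:  (-1, 4, -11, 40, 40, -11, 4, -1),
    0.75: (0, 1, -5, 17, 58, -10, 4, -1),
}


def interpolate(integer_pixels, location, shift=6):
    """Generate one fractional-pel value from 8 integer-pel neighbors.

    The first filter is selected according to the interpolation location,
    then applied to the integer-pixel-unit values with rounding.
    """
    taps = FILTERS[location]                      # select filter by location
    acc = sum(t * p for t, p in zip(taps, integer_pixels))
    return (acc + (1 << (shift - 1))) >> shift    # rounded normalization by 64
```

Choosing among several filters per fractional location lets each location use taps tuned to its phase, instead of one generic filter for all positions.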
Abstract:
An image encoding method includes generating a first frequency coefficient matrix by transforming a predetermined block to a frequency domain; determining whether the first frequency coefficient matrix includes coefficients whose absolute values are greater than a predetermined value; generating a second frequency coefficient matrix by selectively partially switching at least one of rows and columns of the first frequency coefficient matrix according to an angle parameter based on a determination result; and selectively encoding the second frequency coefficient matrix based on the determination result.
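The conditional switching step can be sketched as follows. This is a simplified illustration: the rule mapping the angle parameter to the switched rows is an invented placeholder, and a full-row swap stands in for the partial switching the abstract describes.

```python
def switch_by_angle(matrix, threshold, angle):
    """Conditionally derive a second frequency-coefficient matrix.

    If the first matrix contains a coefficient whose absolute value
    exceeds the threshold, rows selected by the (hypothetical) angle
    rule are switched and the switched matrix is encoded; otherwise the
    first matrix is kept unchanged.
    """
    has_large = any(abs(c) > threshold for row in matrix for c in row)
    if not has_large:
        return matrix, False              # encode first matrix as-is
    switched = [row[:] for row in matrix]
    i, j = angle % len(switched), (angle + 1) % len(switched)
    switched[i], switched[j] = switched[j], switched[i]
    return switched, True                 # encode second matrix instead
```

Gating the switch on the presence of large coefficients means the reordering is attempted only where it can plausibly compact the energy of the matrix.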
Abstract:
Provided are a method and apparatus for performing transformation and inverse transformation on a current block by using multi-core transform kernels in video encoding and decoding processes. A video decoding method may include obtaining, from a bitstream, multi-core transformation information indicating whether multi-core transformation kernels are to be used according to a size of a current block; obtaining horizontal transform kernel information and vertical transform kernel information from the bitstream when the multi-core transformation kernels are used according to the multi-core transformation information; determining a horizontal transform kernel for the current block according to the horizontal transform kernel information; determining a vertical transform kernel for the current block according to the vertical transform kernel information; and performing inverse transformation on the current block by using the horizontal transform kernel and the vertical transform kernel.
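The parsing flow above can be sketched as follows; the kernel names, the one-bit-per-direction coding, and the maximum block size are assumptions for illustration, not the method's actual syntax.

```python
def parse_kernels(read_bit, block_size, max_multi_core_size=32):
    """Determine horizontal and vertical transform kernels from a bitstream.

    First the multi-core transformation information is read (here, one
    flag, gated on block size); if multi-core kernels are in use, one
    further bit per direction selects the kernel for that direction.
    """
    kernels = ("DCT2", "DST7", "DCT8")            # hypothetical kernel set
    if block_size <= max_multi_core_size and read_bit():
        horizontal = kernels[1 + read_bit()]      # horizontal kernel info
        vertical = kernels[1 + read_bit()]        # vertical kernel info
        return horizontal, vertical
    return "DCT2", "DCT2"                          # default single kernel
```

Signaling the horizontal and vertical kernels independently lets the decoder apply a separable inverse transform whose row and column kernels differ.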
Abstract:
An image decoding method and apparatus according to an embodiment may extract, from a bitstream, a quantization coefficient generated through core transformation, secondary transformation, and quantization; generate an inverse-quantization coefficient by performing inverse quantization on the quantization coefficient; generate a secondary inverse-transformation coefficient by performing secondary inverse-transformation on a low frequency component of the inverse-quantization coefficient, the secondary inverse-transformation corresponding to the secondary transformation; and perform core inverse-transformation on the secondary inverse-transformation coefficient, the core inverse-transformation corresponding to the core transformation.
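The decoding pipeline above can be sketched as follows, with the three stages passed in as callables; the names and the flat-list coefficient layout are illustrative assumptions.

```python
def decode_coefficients(quantized, inverse_quantize, secondary_inverse,
                        core_inverse, low_freq_count):
    """Inverse quantization -> secondary inverse transform -> core inverse.

    The secondary inverse transformation is applied only to the
    low-frequency component of the inverse-quantization coefficients;
    the core inverse transformation then processes the full block.
    """
    deq = [inverse_quantize(q) for q in quantized]
    low = secondary_inverse(deq[:low_freq_count])   # low-frequency region only
    return core_inverse(low + deq[low_freq_count:])
```

Restricting the secondary stage to the low-frequency region keeps its cost small, since that is where the core transform concentrates most of the residual energy.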
Abstract:
Provided is a video decoding method including determining a displacement vector per unit time of pixels of a current block in a horizontal direction or a vertical direction, the pixels including a pixel adjacent to an inside of a boundary of the current block, by using values of reference pixels included in a first reference block and a second reference block, without using a stored value of a pixel located outside boundaries of the first reference block and the second reference block; and obtaining a prediction block of the current block by performing block-unit motion compensation and pixel group unit motion compensation on the current block by using a gradient value in the horizontal direction or the vertical direction of a first corresponding reference pixel in the first reference block which corresponds to a current pixel included in a current pixel group in the current block, a gradient value in the horizontal direction or the vertical direction of a second corresponding reference pixel in the second reference block which corresponds to the current pixel, a pixel value of the first corresponding reference pixel, a pixel value of the second corresponding reference pixel, and a displacement vector per unit time of the current pixel in the horizontal direction or the vertical direction. In this regard, the current pixel group may include at least one pixel.
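Two pieces of the scheme above can be sketched in isolation: computing a gradient without reading outside the reference block, and combining the two references with a gradient correction. Both sketches are illustrative assumptions, shown for the horizontal direction only, and the exact correction formula is a placeholder rather than the claimed one.

```python
def gradient_x(block, x, y):
    """Horizontal gradient using only pixels inside the reference block.

    Indices are clamped at the block boundary, so no stored value outside
    the block is ever accessed.
    """
    w = len(block[0])
    right = block[y][min(x + 1, w - 1)]   # clamp at right boundary
    left = block[y][max(x - 1, 0)]        # clamp at left boundary
    return (right - left) / 2


def refine_pixel(p0, p1, gx0, gx1, vx):
    """Bi-directional average plus a gradient correction term.

    p0/p1 are the corresponding reference pixel values, gx0/gx1 their
    horizontal gradients, and vx the per-unit-time displacement of the
    current pixel; the weighting here is an assumed placeholder.
    """
    return (p0 + p1 + vx * (gx0 - gx1)) / 2
```

Clamping the gradient accesses at the block boundary is what allows the refinement to run without fetching extra reference samples beyond the two reference blocks.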