Abstract:
A method, performed by a device, of processing an image, may include obtaining a circular image generated by photographing a target space through a fisheye lens; generating metadata including lens shading compensation information for correcting color information of the obtained circular image; and transmitting the obtained circular image and the metadata to a terminal.
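As a rough sketch of what applying such lens shading compensation could look like on the receiving terminal, the following corrects color fall-off with a per-channel radial gain polynomial over the circular image. The function names, the polynomial form, and the coefficients are illustrative assumptions only; the abstract says merely that the compensation information is carried in metadata, not what form it takes.

```python
import math

def lens_shading_gain(r_norm, coeffs):
    """Polynomial gain in normalized radius (0 at the circle center, 1 at its edge).

    `coeffs` stands in for the per-channel lens shading compensation
    information that, in the method above, would arrive in the metadata."""
    return sum(c * r_norm ** i for i, c in enumerate(coeffs))

def compensate(image, center, radius, coeffs_per_channel):
    """Scale each pixel's color channels by a radial gain to undo fisheye shading.

    `image` is a list of rows of (R, G, B) tuples; `center` and `radius`
    describe the circular image region."""
    cx, cy = center
    out = []
    for y, row in enumerate(image):
        new_row = []
        for x, px in enumerate(row):
            r = math.hypot(x - cx, y - cy) / radius
            new_row.append(tuple(
                min(255, int(v * lens_shading_gain(r, coeffs_per_channel[ch])))
                for ch, v in enumerate(px)))
        out.append(new_row)
    return out
```

With coefficients `[1.0, 0.0, 0.5]`, the gain is 1 at the center and 1.5 at the circle edge, brightening the vignetted periphery.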
Abstract:
A video encoding method, a video encoding apparatus, a video decoding method, and a video decoding apparatus are provided. The video encoding method includes reconstructing a first layer image based on encoding information of the first layer image that is obtained from a bitstream, splitting a largest coding unit of a second layer image into coding units based on split information of the second layer image that is obtained from the bitstream, splitting the coding units into prediction units for prediction encoding, determining whether to use a coding tool for decoding a current prediction unit based on at least one of a prediction mode of the current prediction unit among the prediction units, size information of the current prediction unit, and color depth information of the current prediction unit, and decoding the current prediction unit, using the coding tool, in response to determining to use the coding tool.
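The gating step above, deciding whether a coding tool applies to the current prediction unit, amounts to a predicate over the unit's prediction mode, size, and color depth. The concrete rule below (intra-only, at least 8x8, 8-bit) is a made-up example of such a condition, not the one defined by the method:

```python
def use_coding_tool(pred_mode, pu_width, pu_height, bit_depth):
    """Decide whether the decoder applies the coding tool to the current
    prediction unit.

    The thresholds and mode name here are illustrative assumptions; the
    abstract only says the decision depends on at least one of the
    prediction mode, size information, and color depth information."""
    return (pred_mode == "intra"
            and min(pu_width, pu_height) >= 8
            and bit_depth == 8)
```

The decoder would then run the tool only when the predicate holds, and fall back to the default decoding path otherwise.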
Abstract:
An interlayer video decoding method includes: reconstructing a color image and a depth image of a first layer based on coding information of the color image and the depth image of the first layer obtained from a bitstream; determining whether a prediction mode of a current block of a second layer image to be decoded is a view synthesis prediction mode that predicts the current block based on an image synthesized from a first layer image; determining, when the prediction mode is the view synthesis prediction mode, a depth-based disparity vector indicating a depth-corresponding block of the first layer with respect to the current block; performing view synthesis prediction on the current block from the depth-corresponding block of the first layer indicated by the depth-based disparity vector; and reconstructing the current block by using a prediction block generated by the prediction.
Abstract:
Provided is a multi-layer video decoding method for efficiently obtaining, from a bitstream, information indicating a maximum size of a decoded picture buffer (DPB) regarding a layer set including a plurality of layers.
Abstract:
A multilayer video encoding method includes encoding a multilayer video, generating network abstraction layer (NAL) units for data units included in the encoded multilayer video, and adding scalable extension type information, for a scalable extension of the multilayer video, to a video parameter set (VPS) NAL unit among the NAL units, the VPS NAL unit including VPS information that is information commonly applied to the multilayer video.
Abstract:
Provided are methods and apparatuses for encoding and decoding a multiview video. The method of decoding the multiview video includes obtaining a data unit including encoding information of texture pictures and depth map pictures of a multiview image related to a same point of time; obtaining, from the data unit, view information of the encoded pictures included in the data unit, type information indicating whether each of the pictures is a texture picture or a depth map picture, and reference flag information indicating whether each of the pictures is inter-layer predicted by referring to a texture picture or a depth map picture of the same point of time; determining an encoding order of the pictures based on the obtained information; and decoding the texture pictures and the depth map pictures based on the determined encoding order.
Abstract:
Provided are entropy encoding and entropy decoding for video encoding and decoding. The video entropy decoding method includes: determining a bin string and a bin index for a maximum coding unit that is obtained from a bitstream; determining a value of a syntax element by comparing the determined bin string with bin strings that are assignable to the syntax element at the bin index; storing context variables for the maximum coding unit when the syntax element is the last syntax element in the maximum coding unit, a dependent slice segment is includable in a picture in which the maximum coding unit is included, and the maximum coding unit is the last maximum coding unit in a slice segment; and restoring symbols of the maximum coding unit by using the determined value of the syntax element.
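The context-storage condition above (last syntax element of the maximum coding unit, dependent slice segments allowed in the picture, last maximum coding unit of the slice segment) can be sketched as a conjunction that triggers a snapshot of the entropy-coding contexts. Representing the context variables as a plain dict is an assumption for illustration; real CABAC context state is richer.

```python
import copy

def maybe_store_contexts(ctx, stored, *, last_syntax_element,
                         dependent_slices_allowed, last_ctu_in_segment):
    """Snapshot context variables after a maximum coding unit so a following
    dependent slice segment can initialize its entropy decoder from them.

    Stores only when all three conditions from the method hold; returns
    whether a snapshot was taken. `ctx` maps context names to states."""
    if last_syntax_element and dependent_slices_allowed and last_ctu_in_segment:
        stored.clear()
        stored.update(copy.deepcopy(ctx))
        return True
    return False
```

A subsequent dependent slice segment would then load `stored` instead of reinitializing its contexts, which is what makes the segments decodable in sequence.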
Abstract:
Provided are a method and apparatus for interpolating an image. The method includes: selecting a first filter, from among a plurality of different filters, for interpolating between pixel values of integer pixel units, according to an interpolation location; and generating at least one pixel value of at least one fractional pixel unit by interpolating between the pixel values of the integer pixel units by using the selected first filter.
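A minimal sketch of the selection step above: a first filter is chosen from several candidate coefficient sets according to the interpolation location (the fractional offset), then applied across neighboring integer-pel values. The coefficient sets below are HEVC-style luma taps used only as plausible example values; the method itself does not fix the filters or how many there are.

```python
# Hypothetical filter bank keyed by interpolation location; each tap set
# sums to 64 so results can be normalized by a 6-bit shift.
FILTERS = {
    0.25: [-1, 4, -10, 58, 17, -5, 1, 0],
    0.5:  [-1, 4, -11, 40, 40, -11, 4, -1],
    0.75: [0, 1, -5, 17, 58, -10, 4, -1],
}

def interpolate(pixels, pos, frac):
    """Value at fractional location pos + frac, from integer-pel neighbors.

    Selects a filter from FILTERS by the interpolation location, then
    convolves it with the surrounding integer pixels (clamped at borders)."""
    taps = FILTERS[frac]
    n = len(taps)
    acc = 0
    for k, c in enumerate(taps):
        idx = pos - n // 2 + 1 + k
        idx = min(max(idx, 0), len(pixels) - 1)  # clamp at image borders
        acc += c * pixels[idx]
    return (acc + 32) >> 6  # round and normalize (taps sum to 64)
```

On a flat signal every filter reproduces the input value, and on a ramp the half-pel filter lands midway between neighbors (with round-half-up from the `+ 32`).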
Abstract:
A multiview video decoding method includes: receiving a base view image stream of a base viewpoint and additional view image streams of at least two additional viewpoints; restoring base view images by performing, using the base view image stream, motion compensation that references base view anchor pictures of an I-picture type; restoring a view decoding refresh image, configured for viewpoint switching to a first additional viewpoint, by performing disparity compensation that references at least one of the restored base view images on a first additional view image stream; and restoring first additional view images of the first additional viewpoint by performing, on the first additional view image stream, at least one of disparity compensation that references the restored base view images and motion compensation that references restored images of the first additional viewpoint, excluding the view decoding refresh image that precedes the restored first additional view images.