Abstract:
A method and apparatus for image compression using a latent variable are provided. The multiple components of the latent variable may be sorted in order of importance. Because of the sorting, the quality of a reconstructed image may be improved even when the feature information of only some of the multiple components is used. Learning may be performed in various manners in order to generate a latent variable whose components are sorted in order of importance. Also, less important information may be eliminated from the latent variable, and processing, such as quantization, may be applied to the latent variable. Through such elimination and processing, the amount of data for the latent variable may be reduced.
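The sketch below illustrates the truncation-and-quantization idea in this abstract: a latent whose components are already ordered by importance is shortened to its first k components and quantized before transmission, and the decoder zero-fills the discarded components. The component count, quantization step, and function names are illustrative assumptions, not the disclosed method.

```python
import numpy as np

def compress_latent(latent_sorted: np.ndarray, k: int, step: float = 0.5) -> np.ndarray:
    """Keep the k most important components and quantize them (assumed uniform quantizer)."""
    kept = latent_sorted[:k]                                  # drop less important components
    return np.round(kept / step).astype(np.int32)             # uniform quantization

def reconstruct_latent(quantized: np.ndarray, full_size: int, step: float = 0.5) -> np.ndarray:
    """Dequantize and zero-fill the discarded components before decoding."""
    latent = np.zeros(full_size, dtype=np.float32)
    latent[:quantized.size] = quantized.astype(np.float32) * step
    return latent

# Example: transmit only the 16 most important of 64 components.
z = np.sort(np.random.randn(64))[::-1]    # stand-in for a latent sorted by importance
code = compress_latent(z, k=16)
z_hat = reconstruct_latent(code, full_size=64)
```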
Abstract:
Disclosed herein are a method and apparatus for video decoding and a method and apparatus for video encoding. A prediction block for a target block is generated by predicting the target block using a prediction network, and a reconstructed block for the target block is generated based on the prediction block and a reconstructed residual block. The prediction network includes an intra-prediction network and an inter-prediction network and uses a spatial reference block and/or a temporal reference block when it performs prediction. For learning in the prediction network, a loss function is defined, and learning in the prediction network is performed based on the loss function.
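A minimal sketch of the block reconstruction and training loss described above, with the intra- and inter-prediction networks replaced by trivial placeholders; `intra_net`, `inter_net`, and the MSE loss are assumptions, not the disclosed architectures.

```python
import numpy as np

def intra_net(spatial_ref: np.ndarray) -> np.ndarray:
    """Placeholder intra-prediction: fill the block from the spatial reference block."""
    return np.repeat(spatial_ref.mean(axis=0, keepdims=True), spatial_ref.shape[0], axis=0)

def inter_net(temporal_ref: np.ndarray) -> np.ndarray:
    """Placeholder inter-prediction: copy the temporal reference block."""
    return temporal_ref.copy()

def predict_block(spatial_ref: np.ndarray, temporal_ref: np.ndarray, use_inter: bool) -> np.ndarray:
    """Select the intra- or inter-prediction network for the target block."""
    return inter_net(temporal_ref) if use_inter else intra_net(spatial_ref)

def reconstruct_block(prediction: np.ndarray, residual: np.ndarray) -> np.ndarray:
    """Reconstructed block = prediction block + reconstructed residual block."""
    return prediction + residual

def prediction_loss(prediction: np.ndarray, original: np.ndarray) -> float:
    """Example loss for training the prediction network (MSE is an assumption)."""
    return float(np.mean((prediction - original) ** 2))

# Example usage with 8x8 blocks.
spatial_ref = np.random.rand(8, 8)
temporal_ref = np.random.rand(8, 8)
residual = 0.1 * np.random.rand(8, 8)
pred = predict_block(spatial_ref, temporal_ref, use_inter=True)
rec = reconstruct_block(pred, residual)
```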
Abstract:
Disclosed herein are a method, an apparatus, and a storage medium for image encoding/decoding. An intra-prediction mode for a target block is derived, and intra-prediction for the target block is performed using the derived intra-prediction mode. The intra-prediction mode for the target block is derived using an artificial neural network, and an MPM list for the target block is derived using information about the target block, pieces of information about blocks adjacent to the target block, and the artificial neural network. The artificial neural network outputs one or more available intra-prediction modes. Further, the artificial neural network outputs match probabilities for one or more candidate intra-prediction modes, and each of the match probabilities indicates a probability that the corresponding candidate intra-prediction mode matches the intra-prediction mode for the target block.
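The following sketch shows one plausible way network scores over candidate intra-prediction modes could be turned into match probabilities, a derived mode, and an MPM list; the mode count, MPM size, and function names are assumptions.

```python
import numpy as np

def softmax(scores: np.ndarray) -> np.ndarray:
    e = np.exp(scores - scores.max())
    return e / e.sum()

def derive_mode_and_mpm(mode_scores: np.ndarray, mpm_size: int = 3):
    """Turn network scores into match probabilities, a derived mode, and an MPM list."""
    match_probs = softmax(mode_scores)                    # probability each candidate mode matches
    mpm_list = np.argsort(match_probs)[::-1][:mpm_size]   # most probable candidate modes first
    derived_mode = int(mpm_list[0])
    return derived_mode, mpm_list.tolist(), match_probs

# Example: scores for 35 candidate intra-prediction modes produced by some network.
scores = np.random.randn(35)
mode, mpm_list, probs = derive_mode_and_mpm(scores)
```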
Abstract:
An encoding apparatus extracts features of an image by applying multiple padding operations and multiple downscaling operations to an image represented by data and transmits feature information indicating the features to a decoding apparatus. The multiple padding operations and the multiple downscaling operations are applied to the image in an order in which one padding operation is applied and thereafter one downscaling operation corresponding to the padding operation is applied. A decoding method receives feature information from an encoding apparatus, and generates a reconstructed image by applying multiple upscaling operations and multiple trimming operations to an image represented by the feature information. The multiple upscaling operations and the multiple trimming operations are applied to the image in an order in which one upscaling operation is applied and thereafter one trimming operation corresponding to the upscaling operation is applied.
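A minimal sketch of the pad-then-downscale ordering at the encoder and the matching upscale-then-trim ordering at the decoder; the factor of 2, edge padding, and averaging downscaler are assumptions used only to make the ordering concrete.

```python
import numpy as np

def pad_to_even(x: np.ndarray):
    """Padding operation: make both dimensions even so the next downscaling loses no samples."""
    ph, pw = x.shape[0] % 2, x.shape[1] % 2
    return np.pad(x, ((0, ph), (0, pw)), mode="edge"), (ph, pw)

def downscale2(x: np.ndarray) -> np.ndarray:
    """Downscaling operation: 2x2 average pooling."""
    h, w = x.shape[0] // 2, x.shape[1] // 2
    return x[:2 * h, :2 * w].reshape(h, 2, w, 2).mean(axis=(1, 3))

def upscale2(x: np.ndarray) -> np.ndarray:
    """Upscaling operation: nearest-neighbor 2x upsampling."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def trim(x: np.ndarray, pad) -> np.ndarray:
    """Trimming operation: remove the samples added by the corresponding padding operation."""
    ph, pw = pad
    return x[:x.shape[0] - ph, :x.shape[1] - pw]

img = np.random.rand(5, 7)
padded, pad1 = pad_to_even(img)     # one padding operation ...
feat = downscale2(padded)           # ... then its corresponding downscaling operation
rec = trim(upscale2(feat), pad1)    # decoder: one upscaling, then its corresponding trimming
```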
Abstract:
An inter-prediction method and apparatus use a reference frame generated based on deep learning. In the inter-prediction method and apparatus, a reference frame is selected, and a virtual reference frame is generated based on the selected reference frame. A reference picture list is configured to include the generated virtual reference frame, and inter-prediction for a target block is performed based on the virtual reference frame. The virtual reference frame may be generated based on a deep-learning network architecture, and may be generated based on video interpolation and/or video extrapolation that use the selected reference frame.
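As a rough sketch of the flow described above, the frame averaging below stands in for a trained interpolation network; frame sizes and names are assumptions, not the disclosed architecture.

```python
import numpy as np

def make_virtual_frame(ref_prev: np.ndarray, ref_next: np.ndarray) -> np.ndarray:
    """Crude temporal interpolation; a real system would use a deep-learning network."""
    return 0.5 * (ref_prev + ref_next)

# Selected reference frames.
ref0 = np.random.rand(64, 64)
ref1 = np.random.rand(64, 64)

# Virtual reference frame generated from the selected reference frames.
virtual = make_virtual_frame(ref0, ref1)

# Reference picture list configured to include the generated virtual reference frame;
# inter prediction for a target block could then use any entry in this list.
reference_picture_list = [ref0, ref1, virtual]
```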
Abstract:
A forward error correction encoding method includes: separating a first header section from an inputted packet stream; generating a second payload section by encoding a first payload section of the packet stream, from which the first header section is separated, according to a preset code rate; generating a second header section according to the code rate; and combining the first header section, the second header section, and the second payload section.
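The sketch below traces the four claimed steps on a single byte string; the repetition code, the second header's contents, and the field sizes are assumptions used only to make the steps concrete, not the disclosed FEC scheme.

```python
def fec_encode(packet: bytes, header_len: int, code_rate: float) -> bytes:
    """Sketch of the four claimed steps applied to one packet."""
    first_header = packet[:header_len]              # 1. separate the first header section
    first_payload = packet[header_len:]             #    payload with the first header removed
    repeat = max(1, round(1.0 / code_rate))         #    e.g. code rate 1/2 -> two copies
    second_payload = first_payload * repeat         # 2. encode the payload per the code rate
    second_header = bytes([repeat])                 # 3. second header describing the code rate
    return first_header + second_header + second_payload   # 4. combine the three sections

# Example: a 6-byte header followed by a payload, encoded at code rate 1/2.
encoded = fec_encode(b"HEADERpayload-bytes", header_len=6, code_rate=0.5)
```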
Abstract:
There are provided an apparatus, method, system, and recording medium for performing selective encoding/decoding on feature information. An encoding apparatus generates residual feature information. The encoding apparatus transmits the residual feature information to a decoding apparatus through a residual feature map bitstream. The residual feature information is the difference between feature information extracted from an original image and feature information extracted from a reconstructed image. Feature information of the reconstructed image is generated using the reconstructed image. Reconstructed feature information is generated using the feature information of the reconstructed image and reconstructed residual feature information.
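A minimal sketch of the residual-feature flow, with `extract_features` standing in for whatever feature network the encoder and decoder share; shapes, names, and the lossless transmission of the residual are assumptions.

```python
import numpy as np

def extract_features(image: np.ndarray) -> np.ndarray:
    """Stand-in feature extractor shared by the encoding and decoding apparatuses."""
    return image.reshape(8, -1).mean(axis=1)

original = np.random.rand(8, 16)
reconstructed = original + 0.05 * np.random.randn(8, 16)   # decoded image available to both sides

# Encoder side: residual feature information = features(original) - features(reconstructed);
# this residual would be carried in the residual feature map bitstream.
residual_features = extract_features(original) - extract_features(reconstructed)

# Decoder side: reconstructed feature information = features(reconstructed) + reconstructed residual
# (here the residual is assumed to arrive losslessly instead of being entropy-decoded).
reconstructed_features = extract_features(reconstructed) + residual_features
```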
Abstract:
Disclosed herein are a method, an apparatus and a storage medium for image encoding/decoding using a binary mask. An encoding method includes generating a latent vector using an input image, generating a selected latent vector component set using a binary mask, and generating a main bitstream by performing entropy encoding on the selected latent vector component set. A decoding method includes generating a selected latent vector component set including one or more selected latent vector components by performing entropy decoding on a main bitstream, and generating the latent vector by relocating the one or more selected latent vector components to their positions in the latent vector.
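The sketch below shows the mask-based selection and relocation steps with entropy coding omitted; the mask rule, latent size, and variable names are assumptions.

```python
import numpy as np

latent = np.random.randn(16).astype(np.float32)
mask = (np.abs(latent) > 0.5).astype(np.uint8)   # binary mask: 1 = component is selected

# Encoder: selected latent vector component set (these values would be
# entropy-encoded into the main bitstream).
selected = latent[mask == 1]

# Decoder: relocate the decoded components to their original positions in the
# latent vector according to the same binary mask.
relocated = np.zeros_like(latent)
relocated[mask == 1] = selected
```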
Abstract:
Disclosed herein are a method and apparatus for measuring video quality based on a perceptually sensitive region. The quality of a video may be measured based on a perceptually sensitive region and a change in the perceptually sensitive region. The perceptually sensitive region includes a spatial perceptually sensitive region, a temporal perceptually sensitive region, and a spatio-temporal perceptually sensitive region. Perceptual weights are applied to a detected perceptually sensitive region and a change in the detected perceptually sensitive region. Distortion is calculated based on the perceptually sensitive region and the change in the perceptually sensitive region, and a result of quality measurement for the video is generated based on the calculated distortion.
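A minimal sketch of perceptually weighted distortion and a quality score derived from it; the sensitivity map, the weight value, and the mapping from distortion to quality are assumptions.

```python
import numpy as np

def weighted_distortion(ref: np.ndarray, dist: np.ndarray,
                        sensitive_mask: np.ndarray, weight: float = 4.0) -> float:
    """Per-pixel squared error, weighted more heavily inside the perceptually sensitive region."""
    err = (ref - dist) ** 2
    w = np.where(sensitive_mask, weight, 1.0)     # perceptual weight map
    return float((w * err).sum() / w.sum())

def quality_score(ref: np.ndarray, dist: np.ndarray, sensitive_mask: np.ndarray) -> float:
    """Map the weighted distortion to a simple quality value (higher is better)."""
    return 1.0 / (1.0 + weighted_distortion(ref, dist, sensitive_mask))

ref = np.random.rand(32, 32)
dist = ref + 0.02 * np.random.randn(32, 32)
mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 8:24] = True                           # a detected perceptually sensitive region
print(quality_score(ref, dist, mask))
```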