Abstract:
A method for context-modeling coding information of a video signal for compressing or decompressing the coding information is provided. An initial value of a function for probability coding of coding information of a video signal of an enhanced layer is determined based on coding information of a video signal of a base layer.
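The following is a minimal illustrative sketch of the idea stated in this abstract: initializing a probability-coding context for an enhanced-layer syntax element from base-layer coding information. The function names, the adaptation rule, and the two candidate initial probabilities are assumptions for illustration, not the claimed method or any standard's initialization tables.

# Sketch: derive the initial state of a binary-coding context for an
# enhanced-layer element from a co-located base-layer flag (hypothetical values).

def init_context_from_base_layer(base_layer_flag: bool) -> dict:
    """Return an initial context (probability of the '1' bin) conditioned
    on coding information of the base layer."""
    # If the base layer coded the corresponding information, start from a
    # state biased toward '1'; otherwise start from a neutral state.
    p_one = 0.8 if base_layer_flag else 0.5
    return {"p_one": p_one, "p_zero": 1.0 - p_one}

def update_context(ctx: dict, bin_value: int, rate: float = 0.05) -> None:
    """Simple adaptive update of the context toward the observed bin value."""
    target = 1.0 if bin_value == 1 else 0.0
    ctx["p_one"] += rate * (target - ctx["p_one"])
    ctx["p_zero"] = 1.0 - ctx["p_one"]

if __name__ == "__main__":
    ctx = init_context_from_base_layer(base_layer_flag=True)
    for b in (1, 1, 0, 1):
        update_context(ctx, b)
    print(ctx)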
Abstract:
The present invention relates to encoding and decoding a video signal by motion compensated temporal filtering. In one embodiment, a first sequence of frames is decoded by inverse motion compensated temporal filtering by selectively adding image information to a first image block in the first sequence, the image information being based on at least one of (1) a second image block from the first sequence and (2) a third image block from an auxiliary sequence of frames.
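A simplified sketch of the "selectively adding" step described above, assuming image blocks are 2-D NumPy arrays of 8-bit samples. The selection rule, block sizes, and function names are assumptions for illustration only, not the claimed inverse MCTF procedure.

import numpy as np

def inverse_mctf_update(first_block, second_block=None, aux_block=None, use_aux=False):
    """Reconstruct a first image block by adding back image information taken
    from a block of the same sequence or of an auxiliary sequence of frames."""
    if use_aux and aux_block is not None:
        prediction = aux_block          # information from the auxiliary sequence
    elif second_block is not None:
        prediction = second_block       # information from the same (first) sequence
    else:
        return first_block              # nothing selected: leave the block unchanged
    return np.clip(first_block + prediction, 0, 255)

if __name__ == "__main__":
    residual = np.random.randint(-16, 16, (8, 8))
    reference = np.random.randint(0, 255, (8, 8))
    print(inverse_mctf_update(residual, second_block=reference).shape)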
Abstract:
The present invention relates to a method for using an interlaced video signal of a base layer in interlayer texture prediction. The present method separates the interlaced video signal of the base layer into even-field and odd-field components, interpolates the even-field and odd-field components in the vertical and/or horizontal direction, and constructs combined video data by interleaving the interpolated even-field and odd-field components.
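A sketch of the field-wise handling described above, assuming the interlaced frame is a 2-D NumPy array of luma samples. The interpolation here is simple line repetition; the actual interpolation filter and interleaving details of the method are not specified by the abstract and are placeholders.

import numpy as np

def split_fields(frame):
    """Separate an interlaced frame into its even-line and odd-line fields."""
    return frame[0::2, :], frame[1::2, :]

def upsample_vertically(field):
    """Interpolate a field back to full frame height (here: line doubling)."""
    return np.repeat(field, 2, axis=0)

def interleave_fields(even_up, odd_up):
    """Construct combined video data by interleaving the interpolated fields."""
    combined = np.empty_like(even_up)
    combined[0::2, :] = even_up[0::2, :]
    combined[1::2, :] = odd_up[1::2, :]
    return combined

if __name__ == "__main__":
    frame = np.arange(64).reshape(8, 8)
    even, odd = split_fields(frame)
    combined = interleave_fields(upsample_vertically(even), upsample_vertically(odd))
    print(combined.shape)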
Abstract:
A method and an apparatus for decoding a video signal are provided. The present invention includes the steps of parsing, from the bitstream of an enhanced layer, first coding information indicating whether residual data of an image block in the enhanced layer is predicted from a corresponding block in a base layer, and decoding the video signal based on the first coding information. The step of parsing includes performing modeling of the first coding information based on second coding information indicating whether prediction information of the corresponding block in the base layer is used to decode the image block in the enhanced layer. Accordingly, the present invention raises the efficiency of video signal processing by enabling a decoder to derive information on the prediction mode of a current block instead of having that information transferred to the decoder.
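An illustrative sketch of the parsing step above, assuming a binary decoder object with a decode_bin(context_index) method. The decoder class, context indices, and flag names are hypothetical and do not reflect the bitstream syntax of any particular standard.

class ToyBinDecoder:
    """Stand-in decoder that returns pre-recorded bins (demonstration only)."""
    def __init__(self, bins):
        self._bins = list(bins)

    def decode_bin(self, context_index: int) -> int:
        # A real decoder would use the probability state of the selected context.
        return self._bins.pop(0)

def parse_residual_prediction_flag(decoder, base_mode_flag: int) -> int:
    """Parse the first coding information (residual prediction flag), selecting
    its model from the second coding information (base-layer prediction flag)."""
    context_index = 1 if base_mode_flag else 0   # modeling based on base-layer info
    return decoder.decode_bin(context_index)

if __name__ == "__main__":
    dec = ToyBinDecoder([1])
    print(parse_residual_prediction_flag(dec, base_mode_flag=1))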
Abstract:
In one embodiment, the method includes predicting at least a portion of a current image in a current layer based on at least a residual coded portion of a base image in a base layer, a reference image, shift information for samples in the predicted current image, and offset information indicating a position offset between at least one boundary pixel of the reference image and at least one boundary pixel of the current image. The residual coded portion represents difference pixel data.
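A highly simplified sketch of the prediction described above, assuming 2-D NumPy arrays and integer shift and offset values. The variable names and the way the offset aligns the reference image to the current block are assumptions for illustration only.

import numpy as np

def predict_current_block(base_residual, reference, shift, offset):
    """Predict a current-layer block from a reference image plus the residual
    coded portion of the base image, after applying a sample shift and a
    boundary-position offset."""
    oy, ox = offset                                  # offset between boundary pixels
    h, w = base_residual.shape
    cropped = reference[oy:oy + h, ox:ox + w]        # align reference to the current block
    shifted = np.roll(cropped, shift, axis=(0, 1))   # apply shift information for samples
    return np.clip(shifted + base_residual, 0, 255)  # add difference pixel data

if __name__ == "__main__":
    ref = np.random.randint(0, 255, (16, 16))
    res = np.random.randint(-8, 8, (8, 8))
    print(predict_current_block(res, ref, shift=(1, 0), offset=(2, 2)).shape)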