Abstract:
The invention relates to an encoding apparatus for processing a video signal comprising a plurality of frames, each dividable into a plurality of video coding blocks. A first video coding block of a current frame of the video signal is partitioned into a first segment and a second segment, the first segment being associated with a first segment motion vector relative to a first reference frame of the video signal. The first video coding block is associated with a plurality of virtual partitions, each virtual partition being associated with a respective subset of the plurality of video coding blocks of the current frame. Each video coding block of the respective subset neighbors the first video coding block and is associated with a motion vector.
Abstract:
Embodiments of the disclosure relate to an encoding apparatus and a decoding apparatus. The encoding apparatus is configured to process a video signal, the video signal comprising a plurality of frames, each frame being dividable into a plurality of video coding blocks, each video coding block comprising a plurality of pixels. The encoding apparatus comprises a partitioner configured to partition a first video coding block of the plurality of video coding blocks of a first frame of the video signal into a first segment and a second segment, wherein the first segment comprises a first set of the plurality of pixels of the first video coding block and the second segment comprises a second set of the plurality of pixels of the first video coding block.
Abstract:
A decoding apparatus partitions a video coding block based on coding information into two or more segments including a first segment and a second segment. The coding information comprises a first segment motion vector associated with the first segment and a second segment motion vector associated with the second segment. A co-located first segment in a first reference frame is determined based on the first segment motion vector, and a co-located second segment in a second reference frame is determined based on the second segment motion vector. A predicted video coding block is generated based on the co-located first segment and the co-located second segment. A divergence measure is determined based on the first segment motion vector and the second segment motion vector, and, depending on the divergence measure, a first or a second filter is applied to the predicted video coding block.
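The abstract does not specify the divergence measure or the filters. The following Python sketch shows one plausible realization, assuming a Euclidean distance between the two segment motion vectors as the divergence measure and two separable smoothing kernels of different strength; the function names, the threshold, and the 5-tap/3-tap kernels are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def divergence_measure(mv1, mv2):
    """Divergence between two segment motion vectors, taken here as the
    Euclidean distance of their difference (one plausible choice)."""
    d = np.asarray(mv1, dtype=float) - np.asarray(mv2, dtype=float)
    return float(np.hypot(d[0], d[1]))

def filter_predicted_block(block, mv1, mv2, threshold=4.0):
    """Apply a stronger (5-tap) or weaker (3-tap) separable smoothing filter
    to the predicted block, depending on how much the segment MVs diverge."""
    if divergence_measure(mv1, mv2) > threshold:
        kernel = np.array([1, 4, 6, 4, 1], dtype=float)  # stronger smoothing
    else:
        kernel = np.array([1, 2, 1], dtype=float)        # weaker smoothing
    kernel /= kernel.sum()
    # Separable filtering: first along rows, then along columns.
    out = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode='same'), 1, block.astype(float))
    out = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode='same'), 0, out)
    return out
```

The intuition is that strongly diverging segment motion vectors suggest a visible seam along the segment boundary, which warrants stronger smoothing of the predicted block.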
Abstract:
A depth image filtering method, and a filtering threshold obtaining method and apparatus, are provided. The method in the embodiments of the present invention includes: determining, for each pixel in an area adjacent to a pixel of a to-be-filtered depth image, whether that neighboring pixel meets a preset condition; determining the set of pixels meeting the preset condition; and determining a pixel value of the pixel of the to-be-filtered depth image according to the pixel values of the pixels in the set. According to the embodiments of the present invention, the ringing effect at edges of a depth image is effectively removed and discontinuity of the depth image is reduced, thereby improving the quality of a video image.
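One plausible instance of such a conditional neighborhood filter is sketched below in Python; the "preset condition" is assumed here to be that a neighbor's depth differs from the center pixel's depth by less than a threshold, which keeps foreground and background values from being mixed across an edge (the window radius and threshold values are illustrative assumptions):

```python
import numpy as np

def filter_depth_pixel(depth, y, x, radius=2, thresh=8):
    """Filter one depth pixel: average only those neighbors in a
    (2*radius+1)^2 window whose depth differs from the center by less than
    `thresh` (the assumed 'preset condition')."""
    h, w = depth.shape
    center = int(depth[y, x])
    vals = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w
                    and abs(int(depth[ny, nx]) - center) < thresh):
                vals.append(int(depth[ny, nx]))
    return round(sum(vals) / len(vals))
```

Because neighbors on the far side of a depth edge fail the condition, the averaged value stays on the center pixel's side of the edge, which is what suppresses ringing there.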
Abstract:
A method, an apparatus and a system for a rapid motion search applied in template matching are disclosed. The method includes: selecting motion vectors of blocks related to a current block as candidate motion vectors of the current block; after duplicates are removed so that the candidate motion vectors of the current block are unique, calculating the cost function of each candidate motion vector in the corresponding template area of a reference frame, and obtaining the motion vector of the best-matching template from the candidate motion vectors of the current block. In the embodiments of the present invention, there is no need to determine a large search range or a corresponding search path template; it is only necessary to perform a search in a smaller range.
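The candidate-based search described above can be sketched as follows in Python, assuming an L-shaped template (rows above and columns left of the block) and a sum-of-absolute-differences cost; the function names and template width are illustrative assumptions:

```python
def template_cost(ref, frame, block_pos, mv, block=8, t=2):
    """SAD between the current block's L-shaped template (t rows above and
    t columns left of the block) and the template in the reference frame
    displaced by mv."""
    y, x = block_pos
    dy, dx = mv
    cost = 0
    for ty in range(y - t, y + block):
        for tx in range(x - t, x + block):
            if ty >= y and tx >= x:
                continue  # inside the block itself, not part of the template
            cost += abs(frame[ty][tx] - ref[ty + dy][tx + dx])
    return cost

def best_candidate_mv(ref, frame, block_pos, candidates):
    """Deduplicate the candidate list (preserving order), then pick the
    candidate with the lowest template-matching cost -- no exhaustive
    search window or search path is needed."""
    unique = list(dict.fromkeys(candidates))
    return min(unique, key=lambda mv: template_cost(ref, frame, block_pos, mv))
```

The speed-up comes from evaluating the cost only at the few (deduplicated) candidate positions inherited from related blocks, instead of scanning a full search window.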
Abstract:
A method for encoding a video signal includes generating an extension region of a first face of a reference frame, where the extension region includes a plurality of extension samples, and a sample value of each extension sample is based on a sample value of a sample of a second face of the reference frame; determining whether the extension region is used; providing, based on that determination, picture-level extension usage information for the extension region; and encoding the picture-level extension usage information into an encoded video signal.
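As a much-simplified sketch of the extension-region idea (ignoring the geometric reprojection between faces of, say, a cube map), the extension samples to the right of one face can be taken from the leftmost columns of an adjacent face; the function name and layout are illustrative assumptions, not from the patent:

```python
def extend_face_right(face_a, face_b, width):
    """Build an extension region of `width` samples to the right of face A,
    taking the sample values from the leftmost columns of the adjacent
    face B (a simplified stand-in for the geometric derivation)."""
    return [row_a + row_b[:width] for row_a, row_b in zip(face_a, face_b)]
```

Extending a face this way lets motion compensation reference samples just beyond the face boundary without hitting the discontinuity between unrelated faces of the packed frame.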
Abstract:
The present invention provides an encoder for encoding a frame of a video sequence and a corresponding decoder. The encoder comprises a partitioner and an entropy coder. The partitioner is configured to receive a current block of the frame and obtain a list of candidate geometric partitioning (GP) lines. Each of the candidate GP lines is generated based on information of one or more candidate neighbor blocks of the current block. The partitioner is further configured to determine a final GP line that partitions the current block into two segments, select a GP line from the list of candidate GP lines to obtain a selected GP line, and generate a GP parameter for the current block. The GP parameter includes offset information indicating an offset between the final GP line and the selected GP line. The entropy coder is configured to encode the GP parameter.
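The offset-based signalling can be sketched as follows, assuming a GP line is parameterized by an (angle index, distance index) pair and the candidate closest to the final line is selected so that the signalled offset is small and cheap to entropy-code; the parameterization and selection rule are illustrative assumptions:

```python
def select_candidate(final_line, candidates):
    """Pick the candidate GP line (derived from neighbor blocks) closest to
    the final line, so the signalled offset stays small."""
    return min(candidates,
               key=lambda c: abs(final_line[0] - c[0]) + abs(final_line[1] - c[1]))

def gp_offset(final_line, selected_line):
    """GP parameter: only the offset of the final line from the selected
    candidate is signalled, not the line itself."""
    return (final_line[0] - selected_line[0], final_line[1] - selected_line[1])

def reconstruct_gp_line(selected_line, offset):
    """Decoder side: recover the final GP line from candidate + offset."""
    return (selected_line[0] + offset[0], selected_line[1] + offset[1])
```

Since neighboring blocks often share partition geometry, the offset is usually near zero, which is exactly the regime where entropy coding is most efficient.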
Abstract:
An apparatus for decoding 3D video data is provided, the 3D video data comprising a plurality of texture frames and a plurality of associated depth maps, the apparatus comprising: a first texture decoder configured to decode a video coding block of a first texture frame associated with a first view; a first depth map decoder configured to decode a video coding block of a first depth map associated with the first texture frame; a depth map filter configured to generate an auxiliary depth map on the basis of the first depth map; a first view synthesis prediction unit configured to generate a predicted video coding block of a view synthesis predicted second texture frame associated with a second view on the basis of the video coding block of the first texture frame and the auxiliary depth map.
Abstract:
Embodiments of the present invention provide multi-view video coding and decoding methods and corresponding apparatuses. The multi-view video coding method includes: minimizing an error between a currently coded view image and a warped view image of a front view image to obtain an optimal warping offset; calculating disparity information between the front view image and the currently coded view image by using the optimal warping offset, a camera parameter of a view, and depth image information of the front view image; and calculating the warped view image of the front view image by using the disparity information and the front view image, and predicting a current view image by using the warped view image as a prediction signal.
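The disparity and warping steps can be sketched as follows, assuming the standard inverse-depth mapping of 8-bit depth samples to disparity (focal length times baseline over depth) with the optimal warping offset added, and a simple per-row nearest-integer warp without hole filling; all names and the camera-parameter form are illustrative assumptions:

```python
def depth_value_to_disparity(d, f, baseline, z_near, z_far, offset=0.0):
    """Convert an 8-bit depth sample d to a horizontal disparity in pixels
    via the standard inverse-depth mapping, then add the warping offset
    obtained by minimizing the error against the coded view."""
    z = 1.0 / (d / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far)
    return f * baseline / z + offset

def warp_row(row, depth_row, f, baseline, z_near, z_far, offset=0.0):
    """Warp one row of the front (reference) view toward the current view:
    each pixel moves by its rounded disparity; unfilled positions stay None."""
    out = [None] * len(row)
    for x, (v, d) in enumerate(zip(row, depth_row)):
        nx = x + round(depth_value_to_disparity(d, f, baseline, z_near, z_far,
                                                offset))
        if 0 <= nx < len(out):
            out[nx] = v
    return out
```

The warped view then serves as the prediction signal for the current view, so only the (typically small) prediction residual needs to be coded.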