Abstract:
A method and apparatus for deriving a motion vector predictor (MVP) candidate set for a block are disclosed. Embodiments according to the present invention generate a complete full MVP candidate set based on the redundancy-removed MVP candidate set if one or more redundant MVP candidates exist. In one embodiment, the method generates the complete full MVP candidate set by adding replacement MVP candidates to the redundancy-removed MVP candidate set, and a value corresponding to a non-redundant MVP is assigned to each replacement MVP candidate. In another embodiment, the method generates the complete full MVP candidate set by adding replacement MVP candidates to the redundancy-removed MVP candidate set, and a value is assigned to each replacement MVP candidate according to a rule. The procedures of assigning a value, checking redundancy, and removing redundant MVP candidates are repeated until the MVP candidate set is complete and full.
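As a rough illustration of the second embodiment, the sketch below fills a fixed-size candidate set by removing duplicates and appending rule-generated replacements. The target set size, the (dx, dy) vector representation, and the offset-based replacement rule are illustrative assumptions, not details from the abstract.

```python
def derive_full_mvp_set(candidates, target_size):
    """Build a complete, full MVP candidate set (sketch).

    Repeatedly removes redundant (duplicate) candidates and appends
    replacement candidates generated by a rule until the set holds
    `target_size` distinct motion vector predictors.
    """
    # Remove redundant MVP candidates while preserving order.
    mvp_set = []
    for mv in candidates:
        if mv not in mvp_set:
            mvp_set.append(mv)

    # Hypothetical replacement rule: offset the last surviving
    # candidate until a non-redundant value is found. The actual
    # rule is left open by the abstract.
    offset = 1
    while len(mvp_set) < target_size:
        base = mvp_set[-1] if mvp_set else (0, 0)
        replacement = (base[0] + offset, base[1])
        if replacement not in mvp_set:  # redundancy check
            mvp_set.append(replacement)
        offset += 1
    return mvp_set

# Example: the spatial/temporal candidates contain redundancies.
print(derive_full_mvp_set([(1, 2), (1, 2), (0, 0), (0, 0)], 4))
```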
Abstract:
A video processing method includes: decoding a part of a bitstream to generate a decoded frame, where the decoded frame is a projection-based frame that includes projection faces in a projection layout; and remapping sample locations of the projection-based frame to locations on the sphere, where a sample location within the projection-based frame is converted into a local sample location within a projection face packed in the projection-based frame; in response to adjustment criteria being met, an adjusted local sample location within the projection face is generated by applying an adjustment to at least one coordinate value of the local sample location within the projection face, and the adjusted local sample location within the projection face is remapped to a location on the sphere; and in response to the adjustment criteria not being met, the local sample location within the projection face is remapped to a location on the sphere.
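The convert-adjust-remap flow might look like the following sketch, which assumes faces packed in a simple grid, a hypothetical FACE_SIZE, border clamping as the adjustment criteria, and a plain cube-face-to-sphere conversion; the actual criteria and projection are defined by the method.

```python
import math

FACE_SIZE = 256  # hypothetical face resolution

def remap_to_sphere(x, y, face_cols=3):
    """Remap a sample location in a projection-based frame to the
    sphere (sketch). Faces are assumed packed in a simple grid."""
    # Convert the frame location into a local location within a face.
    face_idx = (int(y) // FACE_SIZE) * face_cols + int(x) // FACE_SIZE
    u = x % FACE_SIZE
    v = y % FACE_SIZE

    # Hypothetical adjustment criteria: the local sample lies outside
    # the valid face interior (e.g. on a border), so its coordinate
    # values are pulled back inside before remapping.
    if u < 0.5 or u > FACE_SIZE - 0.5 or v < 0.5 or v > FACE_SIZE - 0.5:
        u = min(max(u, 0.5), FACE_SIZE - 0.5)
        v = min(max(v, 0.5), FACE_SIZE - 0.5)

    # Remap the (possibly adjusted) local location to the sphere,
    # here via a plain cube-face-to-sphere conversion for one face.
    a = 2.0 * u / FACE_SIZE - 1.0  # [-1, 1] on the cube face
    b = 2.0 * v / FACE_SIZE - 1.0
    norm = math.sqrt(1.0 + a * a + b * b)
    return face_idx, (1.0 / norm, a / norm, b / norm)

print(remap_to_sphere(255.9, 10.0))  # border sample gets adjusted
```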
Abstract:
A video encoding method includes: encoding a projection-based frame to generate a part of a bitstream, wherein at least a portion of a 360-degree content of a sphere is mapped to projection faces via cube-based projection, and the projection-based frame has the projection faces packed in a cube-based projection layout; and signaling at least one syntax element via the bitstream, wherein said at least one syntax element is associated with a mapping function that is employed by the cube-based projection to determine sample locations for each of the projection faces.
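A minimal sketch of the signaling side, assuming a single hypothetical coefficient alpha that parameterizes the mapping function (alpha = 0 giving uniform cubemap spacing) and a toy fixed-length serialization in place of real entropy coding:

```python
import math
import struct

def mapping_function(t, alpha):
    """Hypothetical parameterized mapping used by the cube-based
    projection to determine sample locations on a face; alpha = 0
    degenerates to the ordinary cubemap. The real function is
    defined by the codec specification."""
    if alpha == 0.0:
        return t
    return math.tan(alpha * t * math.pi / 4.0) / math.tan(alpha * math.pi / 4.0)

def signal_mapping_syntax(alpha):
    """Serialize the syntax element associated with the mapping
    function into a (toy) bitstream chunk."""
    return struct.pack(">f", alpha)

bitstream_chunk = signal_mapping_syntax(1.0)
print(bitstream_chunk.hex(), mapping_function(0.5, 1.0))
```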
Abstract:
A video decoding method includes decoding a part of a bitstream to generate a decoded frame, and parsing at least one syntax element from the bitstream. The decoded frame is a projection-based frame that has projection faces packed in a cube-based projection layout. At least a portion of a 360-degree content of a sphere is mapped to the projection faces via cube-based projection. The at least one syntax element is indicative of packing of the projection faces in the cube-based projection layout.
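On the decoder side, parsing such a syntax element might look like this sketch; the packing-type table, the one-byte encoding, and the per-face rotation bits are all hypothetical stand-ins for the codec-defined syntax.

```python
import struct

# Hypothetical packing tables; the real syntax is codec-defined.
PACKING_TYPES = {0: "3x2", 1: "6x1", 2: "1x6", 3: "2x3"}

def parse_packing_syntax(chunk):
    """Parse a toy syntax element indicating how the six cube faces
    are packed in the cube-based projection layout (sketch)."""
    packing_type, rotation_flags = struct.unpack(">BB", chunk[:2])
    layout = PACKING_TYPES[packing_type]
    # One rotation bit per face in this illustrative encoding.
    rotations = [(rotation_flags >> i) & 1 for i in range(6)]
    return layout, rotations

print(parse_packing_syntax(struct.pack(">BB", 0, 0b000101)))
```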
Abstract:
Methods and apparatus of processing 360-degree virtual reality (VR360) pictures are disclosed. According to one method, if a leaf processing unit contains one or more face edges, the leaf processing unit is split into sub-processing units along the face edges without the need to signal the partition. In another method, if the quadtree (QT) or binary tree (BT) partition depth for a processing unit has not reached the maximum QT or BT depth, the processing unit is split. If the processing unit contains a horizontal face edge, QT or horizontal BT partition is applied. If the processing unit contains a vertical face edge, QT or vertical BT partition is applied.
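A sketch of the implicit splitting rule, assuming units are (x, y, w, h) rectangles and face edges are given as global horizontal/vertical coordinates; the single max-depth parameter is a simplification of separate QT and BT depth limits.

```python
def split_along_face_edges(unit, face_edges_h, face_edges_v,
                           depth=0, max_depth=4):
    """Recursively split a processing unit along face edges without
    signalling the partition (sketch). A unit is (x, y, w, h)."""
    x, y, w, h = unit
    crosses_h = any(y < e < y + h for e in face_edges_h)
    crosses_v = any(x < e < x + w for e in face_edges_v)
    if depth >= max_depth or not (crosses_h or crosses_v):
        return [unit]

    if crosses_h and crosses_v:
        # QT split when both edge directions are present.
        halves = [(x, y, w // 2, h // 2), (x + w // 2, y, w // 2, h // 2),
                  (x, y + h // 2, w // 2, h // 2),
                  (x + w // 2, y + h // 2, w // 2, h // 2)]
    elif crosses_h:
        # Horizontal BT split for a horizontal face edge.
        halves = [(x, y, w, h // 2), (x, y + h // 2, w, h // 2)]
    else:
        # Vertical BT split for a vertical face edge.
        halves = [(x, y, w // 2, h), (x + w // 2, y, w // 2, h)]

    result = []
    for sub in halves:
        result += split_along_face_edges(sub, face_edges_h, face_edges_v,
                                         depth + 1, max_depth)
    return result

# 128x128 unit crossed by a vertical face edge at x = 64.
print(split_along_face_edges((0, 0, 128, 128), [], [64]))
```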
Abstract:
Methods and apparatus of coding a video sequence, wherein pictures from the video sequence contain one or more discontinuous edges, are disclosed. The loop filtering process associated with the loop filter is then applied to the current reconstructed pixel to generate a filtered reconstructed pixel, where if the loop filtering process is across a virtual boundary of the current picture, one or more alternative reference pixels are used to replace unexpected reference pixels located on a different side of the virtual boundary from the current reconstructed pixel, and said one or more alternative reference pixels are generated from second reconstructed pixels on the same side of the virtual boundary as the current reconstructed pixel. According to another method, reference pixels are derived from spherical neighbouring reference pixels for the loop filtering process.
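The alternative-reference-pixel idea can be sketched with a 1-D, 3-tap filter; replacing a cross-boundary reference with the nearest same-side pixel is one plausible way to generate the alternative pixels, and the tap weights are illustrative.

```python
def filter_with_virtual_boundary(row, pos, vb, taps=(1, 2, 1)):
    """Apply a 3-tap loop filter at `pos` (sketch). References that
    fall on the other side of virtual boundary `vb` are replaced by
    alternative pixels generated from the same side as `pos` (here:
    the nearest same-side reconstructed pixel)."""
    half = len(taps) // 2
    same_side = lambda i: (i < vb) == (pos < vb)
    acc = 0
    for k, w in enumerate(taps):
        i = pos + k - half
        if 0 <= i < len(row) and same_side(i):
            acc += w * row[i]
        else:
            # Alternative reference pixel: nearest valid pixel on the
            # same side of the virtual boundary as the current pixel.
            nearest = min(max(i, 0 if pos < vb else vb),
                          (vb - 1) if pos < vb else len(row) - 1)
            acc += w * row[nearest]
    return acc // sum(taps)

row = [10, 10, 10, 200, 200, 200]  # virtual boundary between idx 2 and 3
print(filter_with_virtual_boundary(row, 2, vb=3))  # stays near 10
```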
Abstract:
A video processing method includes: receiving an omnidirectional content corresponding to a sphere, obtaining projection faces from the omnidirectional content, and creating a projection-based frame by generating at least one padding region and packing the projection faces and said at least one padding region in a 360 VR projection layout. The projection faces packed in the 360 VR projection layout include a first projection face and a second projection face, where there is an image content discontinuity edge between the first projection face and the second projection face if the first projection face connects with the second projection face. The at least one padding region packed in the 360 VR projection layout includes a first padding region, where the first padding region connects with the first projection face and the second projection face for isolating the first projection face from the second projection face in the 360 VR projection layout.
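One way to realize the first padding region is sketched below; the linear blend between the two face boundaries is a hypothetical padding rule (geometry-based padding would be another option), chosen only to make the isolation concrete.

```python
def make_padding_region(face_a_col, face_b_col, pad_width):
    """Generate a padding region that isolates two projection faces
    whose shared edge is discontinuous (sketch). Each argument is the
    boundary pixel column of one face; padding samples come from a
    hypothetical linear blend between the two boundaries."""
    height = len(face_a_col)
    region = []
    for x in range(pad_width):
        w = (x + 0.5) / pad_width  # blend weight from face A to face B
        col = [round((1 - w) * face_a_col[y] + w * face_b_col[y])
               for y in range(height)]
        region.append(col)
    return region  # pad_width columns inserted between the two faces

# Two faces whose adjacent boundaries differ sharply (discontinuity).
print(make_padding_region([0, 0, 0], [240, 240, 240], pad_width=4))
```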
Abstract:
Methods and apparatus of processing cube face images are disclosed. According to embodiments of the present invention, one or more discontinuous boundaries within each assembled cubic frame are determined and used for selective filtering, where the filtering process is skipped at said one or more discontinuous boundaries within each assembled cubic frame when the filtering process is enabled. Furthermore, the filtering process is applied to one or more continuous areas in each assembled cubic frame.
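A sketch of the selective filtering, assuming one horizontal boundary per face row and a toy averaging filter; real deblocking and the derivation of which boundaries are discontinuous are codec-specific.

```python
def deblock_rows(frame, face_height, discontinuous, strength=1):
    """Selectively filter horizontal face boundaries in an assembled
    cubic frame (sketch). Filtering is skipped at boundaries marked
    discontinuous and applied across continuous areas."""
    h = len(frame)
    for boundary in range(face_height, h, face_height):
        if boundary in discontinuous:
            continue  # skip the filter at a discontinuous boundary
        top, bot = frame[boundary - 1], frame[boundary]
        for x in range(len(top)):
            # Simple smoothing across a continuous face boundary.
            avg = (top[x] + bot[x]) // 2
            top[x] += (avg - top[x]) * strength // 2
            bot[x] += (avg - bot[x]) * strength // 2
    return frame

frame = [[100] * 4, [100] * 4, [20] * 4, [20] * 4]  # two 2-row faces
print(deblock_rows(frame, face_height=2, discontinuous={2}))  # seam kept
```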
Abstract:
A projection-based frame is generated according to an omnidirectional video frame and a triangle-based projection layout. The projection-based frame has a 360-degree image content represented by triangular projection faces assembled in the triangle-based projection layout. A 360-degree image content of a viewing sphere is mapped onto the triangular projection faces via a triangle-based projection of the viewing sphere. One side of a first triangular projection face has contact with one side of a second triangular projection face, and one side of a third triangular projection face has contact with another side of the second triangular projection face. One image content continuity boundary exists between one side of the first triangular projection face and one side of the second triangular projection face, and another image content continuity boundary exists between one side of the third triangular projection face and another side of the second triangular projection face.
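The layout's contact and continuity relations can be captured in a small table, as sketched below; the face names, side indices, and the extra discontinuous contact are hypothetical.

```python
# Sketch of a triangle-based layout description: which face sides
# touch, and whether image content is continuous across that contact.
layout_contacts = [
    # (face, side, face, side, continuous?)
    ("T1", 0, "T2", 0, True),   # image content continuity boundary
    ("T3", 0, "T2", 1, True),   # image content continuity boundary
    ("T1", 1, "T4", 2, False),  # discontinuous packing seam
]

def continuity_boundaries(contacts):
    """Return the face-side pairs across which image content is
    continuous in the assembled triangle-based layout."""
    return [(a, sa, b, sb) for a, sa, b, sb, cont in contacts if cont]

print(continuity_boundaries(layout_contacts))
```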