Abstract:
A video processing method includes: receiving a bitstream, wherein a part of the bitstream transmits encoded information of a projection-based frame that has 360-degree content represented by projection faces packed in a 360-degree Virtual Reality (360 VR) projection layout, and the projection-based frame has at least one boundary; and decoding, by a video decoder, the part of the bitstream, including: generating a reconstructed frame, parsing a flag from the bitstream, and applying an in-loop filtering operation to the reconstructed frame. The flag indicates that the in-loop filtering operation is blocked from being applied to each of the at least one boundary in the reconstructed frame; in response to the flag, the in-loop filtering operation is not applied across any such boundary.
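The flag-controlled boundary behaviour described above can be sketched as follows. This is a minimal illustration, not a real codec filter: the 3-tap smoother, the function names, and the column-based boundary set are all assumptions made here; in an actual decoder the flag would be parsed from the bitstream and the filter would be a standardized deblocking/loop filter.

```python
# Hypothetical sketch: an in-loop filter (a trivial 3-tap smoother standing
# in for a real deblocking filter) is applied across a reconstructed frame,
# except at the columns marked as projection-layout boundaries when the
# parsed flag says filtering is blocked there.

def smooth_column(frame, x):
    """Blend each pixel in column x with its horizontal neighbours (in place)."""
    for y in range(len(frame)):
        left = frame[y][x - 1]
        right = frame[y][x + 1]
        frame[y][x] = (left + 2 * frame[y][x] + right) // 4

def in_loop_filter(frame, face_boundaries, loop_filter_disabled_across_faces):
    """Filter every interior column; skip face boundaries if the flag is set."""
    width = len(frame[0])
    for x in range(1, width - 1):
        if loop_filter_disabled_across_faces and x in face_boundaries:
            continue  # the flag parsed from the bitstream blocks filtering here
        smooth_column(frame, x)
    return frame
```

With the flag set, samples on the face boundary stay untouched, so pixels from geometrically unrelated projection faces are never mixed.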
Abstract:
A video processing method includes receiving a bitstream, and decoding, by a video decoder, the bitstream to generate a decoded frame. The decoded frame is a projection-based frame that has a 360-degree image/video content represented by triangular projection faces packed in an octahedron projection layout. An omnidirectional image/video content of a viewing sphere is mapped onto the triangular projection faces via an octahedron projection of the viewing sphere. An equator of the viewing sphere is not mapped along any side of each of the triangular projection faces.
Abstract:
A video processing method includes: receiving an omnidirectional image/video content corresponding to a viewing sphere, generating a sequence of projection-based frames according to the omnidirectional image/video content and an octahedron projection layout, and encoding, by a video encoder, the sequence of projection-based frames to generate a bitstream. Each projection-based frame has a 360-degree image/video content represented by triangular projection faces packed in the octahedron projection layout. The omnidirectional image/video content of the viewing sphere is mapped onto the triangular projection faces via an octahedron projection of the viewing sphere. An equator of the viewing sphere is not mapped along any side of each of the triangular projection faces.
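The geometric property stated in the last sentence, that the equator is not mapped along any triangle side, can be illustrated with a toy face-selection routine. Everything below is an assumption for illustration: a plain sign-octant face index stands in for the octahedron projection, and the 45-degree tilt is one arbitrary rotation that moves the equator off the shared face edges.

```python
import math

def octa_face(x, y, z):
    """Index of the octahedron face containing direction (x, y, z):
    one triangular face per sign-octant of the coordinates."""
    return (x < 0) | ((y < 0) << 1) | ((z < 0) << 2)

def rotated_face(x, y, z, tilt=math.radians(45)):
    """Face index after tilting the octahedron about the x-axis, so that
    the sphere's equator (z == 0) crosses face interiors instead of
    running along shared triangle edges. The 45-degree tilt is an
    illustrative choice, not a value from the abstract."""
    yr = y * math.cos(tilt) - z * math.sin(tilt)
    zr = y * math.sin(tilt) + z * math.cos(tilt)
    return octa_face(x, yr, zr)
```

In the untilted layout, two directions just above and below the equator fall into different faces (the equator is a face edge); after the tilt they land in the interior of the same face, which is the property the abstract claims.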
Abstract:
A video processing method includes: obtaining a plurality of projection faces from an omnidirectional content of a sphere, wherein the omnidirectional content of the sphere is mapped onto the projection faces via cubemap projection, and the projection faces comprise a first projection face; obtaining, by a re-sampling circuit, a first re-sampled projection face by re-sampling at least a portion of the first projection face through non-uniform mapping; generating a projection-based frame according to a projection layout of the cubemap projection, wherein the projection-based frame comprises the first re-sampled projection face packed in the projection layout; and encoding the projection-based frame to generate a part of a bitstream.
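A minimal sketch of the re-sampling step, assuming a tangent-based curve (the kind used by equi-angular cubemap variants) as the non-uniform mapping; the abstract does not specify the mapping function, and `nonuniform_map` and `resample_row` are names invented here.

```python
import math

def nonuniform_map(u):
    """Map a normalized face coordinate u in [-1, 1] to a source
    coordinate in [-1, 1]. This tangent curve is one plausible choice
    of non-uniform mapping, not a function fixed by the abstract."""
    return math.tan(u * math.pi / 4)

def resample_row(row):
    """Re-sample one row of a projection face through the non-uniform
    mapping (nearest-neighbour for simplicity)."""
    n = len(row)
    out = []
    for i in range(n):
        u = (2 * i + 1) / n - 1          # output sample centre in [-1, 1]
        s = nonuniform_map(u)            # where to read in the source face
        j = min(n - 1, max(0, int((s + 1) * n / 2)))
        out.append(row[j])
    return out
```

Because the mapping is monotone, sample order is preserved; near the face edges the mapping pulls read positions toward the centre, so samples there are spaced non-uniformly in the source.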
Abstract:
A method and system for encoding a group of coding blocks and packetizing the compressed data into slices/packets with a hard-limited packet size are disclosed. According to the present invention, a packetization map for at least a portion of a current picture is determined. The packetization map associates coding blocks in at least a portion of the current picture with one or more packets by identifying a corresponding group of coding blocks for each packet of said one or more packets. The corresponding group of coding blocks for each packet is then encoded according to the packetization map, and the size of each packet is determined and checked. If any packet size exceeds a constrained size, a new packetization map is generated and the corresponding group of coding blocks for each packet is re-encoded according to the new packetization map.
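The encode-check-repacketize loop can be sketched as follows, with a stand-in encoder and a simple halving policy for generating the new packetization map; both are assumptions, since the abstract does not say how the new map is chosen.

```python
def encode_block(block):
    """Stand-in for a real encoder: returns the compressed size of one
    coding block (here, just its pre-computed size in bytes)."""
    return block

def packetize(block_sizes, max_packet_size, blocks_per_packet):
    """Encode groups of coding blocks per a packetization map; if any
    packet exceeds the hard limit, shrink the groups and re-encode.
    The halving policy is illustrative only."""
    while blocks_per_packet >= 1:
        packets = []
        for i in range(0, len(block_sizes), blocks_per_packet):
            group = block_sizes[i:i + blocks_per_packet]
            packets.append(sum(encode_block(b) for b in group))
        if all(p <= max_packet_size for p in packets):
            return packets               # every packet within the hard limit
        blocks_per_packet //= 2          # new packetization map: smaller groups
    raise ValueError("a single block exceeds the packet size limit")
```

For example, four 10-byte blocks with a 25-byte limit fail as one 40-byte packet, then succeed as two 20-byte packets after the map is regenerated.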
Abstract:
Apparatus and methods are disclosed for partially decoding video frames when a sub-region of the video is selected for viewing. The method identifies and decodes the data units and pixel blocks of video frames needed to display the sub-region while bypassing data units and pixel blocks identified as unnecessary for displaying it. A video encoder receives a video frame comprising a plurality of cubic surfaces in a first configuration corresponding to a full-sized 360VR image, each cubic surface corresponding to a different surface of a cube. The encoder reformats the received video frame by rearranging the plurality of cubic surfaces according to a second configuration that differs from the first. The second configuration rearranges the six surfaces of a cubic 360VR image so as to fully utilize the line buffer and allow the line buffer to be narrower than the full-sized 360VR image.
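The rearrangement into a narrower second configuration can be sketched as below, assuming a 3x2 target grid for the six faces (the abstract does not fix the layout, and `rearrange_faces` is a name invented here).

```python
def rearrange_faces(faces, cols):
    """Pack the six cube faces of a 360VR frame into a grid with `cols`
    faces per row, e.g. a 3x2 layout whose width is half that of a 6x1
    layout, so a line buffer only needs to span `cols` faces.
    Each face is a 2D list of pixel rows."""
    face_h = len(faces[0])
    rows = len(faces) // cols
    frame = []
    for r in range(rows):
        for y in range(face_h):
            line = []
            for c in range(cols):
                line.extend(faces[r * cols + c][y])
            frame.append(line)
    return frame
```

With `cols=3`, the output frame is three faces wide instead of six, which is the line-buffer saving the abstract describes.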
Abstract:
A computing system includes a plurality of processing circuits and a storage device. The processing circuits have at least a first processing circuit and a second processing circuit. The storage device is shared between at least the first processing circuit and the second processing circuit. The first processing circuit performs a whole cache flush operation to prepare exchange data in the storage device. The second processing circuit gets the exchange data from the storage device.
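The ordering between the two processing circuits can be illustrated with a toy write-back cache: data written by the first circuit only becomes visible in the shared storage device after the whole cache flush. The `Cache` class below is purely illustrative, not a model of real hardware.

```python
class Cache:
    """Toy write-back cache: writes land in the cache and only reach the
    shared storage device on a whole-cache flush."""
    def __init__(self, storage):
        self.storage = storage
        self.dirty = {}

    def write(self, key, value):
        self.dirty[key] = value          # not yet visible to other circuits

    def flush_all(self):
        self.storage.update(self.dirty)  # whole cache flush publishes the data
        self.dirty.clear()

storage = {}                             # storage device shared by both circuits
first = Cache(storage)

# The first processing circuit prepares the exchange data, then flushes.
first.write("exchange", [1, 2, 3])
before_flush = storage.get("exchange")
first.flush_all()

# The second processing circuit reads directly from the shared storage device.
after_flush = storage.get("exchange")
```

Before the flush the shared storage holds nothing; after it, the second circuit sees the complete exchange data, which is why the flush must precede the hand-off.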
Abstract:
Methods and apparatus of processing 360-degree virtual reality images are disclosed. According to one method, the method receives coded data for an extended 2D (two-dimensional) frame including an encoded 2D frame with one or more encoded guard bands, wherein the encoded 2D frame is projected from a 3D (three-dimensional) sphere using a target projection, wherein said one or more encoded guard bands are based on a blending of one or more guard bands with an overlapped region when the overlapped region exists. The method then decodes the coded data into a decoded extended 2D frame including a decoded 2D frame with one or more decoded guard bands, and derives a reconstructed 2D frame from the decoded extended 2D frame.
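One plausible reading of the guard-band blending, sketched with linear weights over the overlapped region; the abstract does not specify the blending weights, and `blend_guard_band` is a name invented here.

```python
def blend_guard_band(face_edge, guard_band):
    """Blend the overlapped region of a decoded face edge with its guard
    band using linearly increasing guard-band weights. The linear ramp
    is an assumption; the abstract only says the guard bands are based
    on a blending with the overlapped region."""
    n = len(face_edge)
    out = []
    for i in range(n):
        w = (i + 1) / (n + 1)            # weight of the guard-band sample
        out.append((1 - w) * face_edge[i] + w * guard_band[i])
    return out
```

The ramp makes the reconstructed samples transition gradually from the face content to the guard-band content, suppressing a visible seam at the face edge.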
Abstract:
According to one method, at a source side or an encoder side, a selected viewport associated with 360-degree virtual reality images is determined. One or more parameters related to a selected pyramid projection format are then determined. According to the present invention, one or more syntax elements for said one or more parameters are included in coded data of the 360-degree virtual reality images, and the coded data of the 360-degree virtual reality images are provided as output data. At a receiver side or a decoder side, the one or more syntax elements for the one or more parameters are parsed from the coded data of the 360-degree virtual reality images. The selected pyramid projection format associated with the 360-degree virtual reality images is determined based on information including said one or more parameters. The 360-degree virtual reality images are then recovered according to the selected viewport.
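The syntax-element round trip can be sketched as follows, assuming three illustrative parameters and a fixed 16-bit layout; the field names and encoding are inventions here, not a real bitstream syntax.

```python
import struct

def write_pyramid_syntax(front_face_size, viewport_yaw, viewport_pitch):
    """Encoder side: serialize illustrative pyramid-projection parameters
    as syntax elements in a byte string. The fixed big-endian 16-bit
    layout is an assumption made for this sketch."""
    return struct.pack(">Hhh", front_face_size, viewport_yaw, viewport_pitch)

def parse_pyramid_syntax(data):
    """Decoder side: parse the same syntax elements back from the coded
    data so the selected pyramid projection format can be determined."""
    front_face_size, yaw, pitch = struct.unpack(">Hhh", data)
    return {"front_face_size": front_face_size,
            "viewport_yaw": yaw,
            "viewport_pitch": pitch}
```

A decoder that round-trips these elements recovers exactly the parameters the encoder signalled, mirroring the source-side/receiver-side split in the abstract.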