Abstract:
A video processing method includes: receiving an omnidirectional image/video content corresponding to a viewing sphere, generating a sequence of projection-based frames according to the omnidirectional image/video content and a viewport-based cube projection layout, and encoding the sequence of projection-based frames to generate a bitstream. Each projection-based frame has a 360-degree image/video content represented by rectangular projection faces packed in the viewport-based cube projection layout. The rectangular projection faces include a first rectangular projection face, a second rectangular projection face, a third rectangular projection face, a fourth rectangular projection face, a fifth rectangular projection face, and a sixth rectangular projection face split into partial rectangular projection faces. The first rectangular projection face corresponds to the user's viewport, and is enclosed by a surrounding area composed of the second rectangular projection face, the third rectangular projection face, the fourth rectangular projection face, the fifth rectangular projection face, and the partial rectangular projection faces.
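The packing described above can be sketched as follows. This is a minimal illustration, not the patented layout: the face names, the 2:1 subsampling of the non-viewport faces, and the assignment of the split face's quarters to the corners are all assumptions chosen so that the tiles fill a rectangular frame exactly.

```python
import numpy as np

def pack_viewport_cube(faces, size):
    # Centre tile: the viewport face at full resolution.
    # Edge strips: the four neighbouring faces, subsampled 2:1 (an assumed
    # downscaling, so that everything fits a 2*size x 2*size frame).
    # Corner tiles: the four quarters of the sixth (split) face.
    h = size // 2
    frame = np.zeros((2 * size, 2 * size), dtype=faces['front'].dtype)
    frame[h:h + size, h:h + size] = faces['front']          # viewport face
    frame[0:h, h:h + size] = faces['top'][::2, :]           # top strip
    frame[h + size:, h:h + size] = faces['bottom'][::2, :]  # bottom strip
    frame[h:h + size, 0:h] = faces['left'][:, ::2]          # left strip
    frame[h:h + size, h + size:] = faces['right'][:, ::2]   # right strip
    b = faces['back']                                       # split face
    frame[0:h, 0:h] = b[0:h, 0:h]
    frame[0:h, h + size:] = b[0:h, h:size]
    frame[h + size:, 0:h] = b[h:size, 0:h]
    frame[h + size:, h + size:] = b[h:size, h:size]
    return frame

faces = {name: np.full((4, 4), i) for i, name in
         enumerate(['front', 'back', 'left', 'right', 'top', 'bottom'])}
frame = pack_viewport_cube(faces, 4)
```

The area bookkeeping is what makes the sketch self-consistent: one full face, four half-resolution strips, and four quarter-face corners together tile the 2×2-face frame with no gaps.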
Abstract:
Methods and apparatus of processing cube face images are disclosed. According to one method, each set of six cubic faces is converted into one rectangular assembled image by assembling each set of six cubic faces to maximize a number of continuous boundaries and to minimize a number of discontinuous boundaries. Each continuous boundary corresponds to one boundary between two connected faces with continuous contents from one face to another face. Each discontinuous boundary corresponds to one boundary between two connected faces with discontinuous contents from one face to another face. The method may further comprise applying video coding to the video sequence and outputting the compressed data of the video sequence. According to another method, a fully-connected cubic-face image representing an unfolded image from the six faces of the cube is generated and the blank areas are filled with padding data to form a rectangular assembled image.
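The boundary-maximizing assembly can be sketched as a brute-force search, under two simplifying assumptions not taken from the abstract: face rotation is ignored (any non-opposite face pair is treated as continuous), and the assembled image is a fixed 2×3 grid.

```python
from itertools import permutations

FACES = ['front', 'back', 'left', 'right', 'top', 'bottom']
OPPOSITE = {'front': 'back', 'back': 'front', 'left': 'right',
            'right': 'left', 'top': 'bottom', 'bottom': 'top'}

def continuous(a, b):
    # Two cube faces share an edge (so content can run continuously across
    # their boundary) whenever they are not an opposite pair. Face rotation,
    # which also affects continuity, is ignored in this sketch.
    return OPPOSITE[a] != b

def score(arrangement):
    # Count continuous internal boundaries of a 2x3 row-major assembly.
    grid = [arrangement[:3], arrangement[3:]]
    s = 0
    for r in range(2):
        for c in range(3):
            if c + 1 < 3 and continuous(grid[r][c], grid[r][c + 1]):
                s += 1
            if r + 1 < 2 and continuous(grid[r][c], grid[r + 1][c]):
                s += 1
    return s

# Exhaustive search over the 720 face orderings for a best assembly;
# maximizing continuous boundaries also minimizes discontinuous ones,
# since a 2x3 grid has exactly 7 internal boundaries.
best = max(permutations(FACES), key=score)
```

With 6 faces the search space is tiny; a real packer would additionally score the four rotations of each face.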
Abstract:
A method and apparatus of video encoding or decoding for a video encoding or decoding system applied to multi-face sequences corresponding to a 360-degree virtual reality sequence are disclosed. According to the present invention, one or more multi-face sequences representing the 360-degree virtual reality sequence are derived. If Inter prediction is selected for a current block in a current face, one virtual reference frame is derived for each face of said one or more multi-face sequences by assigning one target reference face to a center of said one virtual reference frame and connecting neighboring faces of said one target reference face to said one target reference face at boundaries of said one target reference face. Then, the current block in the current face is encoded or decoded using a current virtual reference frame derived for the current face to derive an Inter predictor for the current block.
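The virtual-reference-frame construction can be sketched as below. The neighbour table and boundary assignment are assumptions: which face attaches to which boundary, and with what rotation, depends on the cube orientation, and the corner regions are simply left empty here.

```python
import numpy as np

OPPOSITE = {'front': 'back', 'back': 'front', 'left': 'right',
            'right': 'left', 'top': 'bottom', 'bottom': 'top'}

def neighbours(face):
    # The four faces that share an edge with `face` (every non-opposite
    # face). A fixed attachment order is assumed for illustration.
    return [f for f in OPPOSITE if f not in (face, OPPOSITE[face])]

def virtual_reference_frame(ref_faces, face, size):
    # Assign the target reference face to the centre of the canvas and
    # connect its neighbouring faces at the four boundaries, so that motion
    # search near a face boundary sees continued content rather than a
    # frame edge. Corners stay empty in this sketch.
    canvas = np.zeros((3 * size, 3 * size), dtype=ref_faces[face].dtype)
    canvas[size:2 * size, size:2 * size] = ref_faces[face]
    top, bottom, left, right = neighbours(face)
    canvas[0:size, size:2 * size] = ref_faces[top]
    canvas[2 * size:, size:2 * size] = ref_faces[bottom]
    canvas[size:2 * size, 0:size] = ref_faces[left]
    canvas[size:2 * size, 2 * size:] = ref_faces[right]
    return canvas

ref_faces = {name: np.full((2, 2), i) for i, name in enumerate(
    ['front', 'back', 'left', 'right', 'top', 'bottom'], start=1)}
vrf = virtual_reference_frame(ref_faces, 'front', 2)
```

One such canvas would be derived per face per reference picture, and the Inter predictor for a block in that face is then fetched from the corresponding canvas.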
Abstract:
Methods and apparatus of processing omnidirectional images are disclosed. According to one method, a current set of omnidirectional images converted from each spherical image in a 360-degree panoramic video sequence using a selected projection format is received, where the selected projection format belongs to a projection format group comprising a cubic-face format, and the current set of omnidirectional images with the cubic-face format consists of six cubic faces. If the selected projection format corresponds to the cubic-face format, one or more mapping syntax elements to map the current set of omnidirectional images into a current cubemap image are signaled. The coded data are then provided in a bitstream including said one or more mapping syntax elements for the current set of omnidirectional images.
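One plausible form for such mapping syntax elements is a (face index, rotation index) pair per cubemap position, written by the encoder and parsed back by the decoder. The element layout below is purely illustrative, not the signaling defined by the patent.

```python
FACE_IDS = {'front': 0, 'back': 1, 'left': 2, 'right': 3, 'top': 4, 'bottom': 5}
ROTATIONS = [0, 90, 180, 270]

def write_mapping_syntax(order, rotations):
    # Emit one (face index, rotation index) pair per cubemap position.
    elems = []
    for face, rot in zip(order, rotations):
        elems.append(FACE_IDS[face])
        elems.append(ROTATIONS.index(rot))
    return elems

def read_mapping_syntax(elems):
    # Recover the face order and per-face rotations from the syntax elements.
    ids = {v: k for k, v in FACE_IDS.items()}
    order = [ids[elems[i]] for i in range(0, len(elems), 2)]
    rotations = [ROTATIONS[elems[i]] for i in range(1, len(elems), 2)]
    return order, rotations

order = ['right', 'left', 'top', 'bottom', 'front', 'back']
rotations = [0, 90, 0, 270, 180, 0]
elems = write_mapping_syntax(order, rotations)
```

The round trip is lossless: the decoder reconstructs exactly the face arrangement the encoder used to pack the cubemap image.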
Abstract:
A video processing method includes receiving an omnidirectional content corresponding to a sphere, generating a projection-based frame according to at least the omnidirectional content and a segmented sphere projection (SSP) format, and encoding, by a video encoder, the projection-based frame to generate a part of a bitstream. The projection-based frame has a 360-degree content represented by a first circular projection face, a second circular projection face, and at least one rectangular projection face packed in an SSP layout. A north polar region of the sphere is mapped onto the first circular projection face. A south polar region of the sphere is mapped onto the second circular projection face. At least one non-polar ring-shaped segment between the north polar region and the south polar region of the sphere is mapped onto said at least one rectangular projection face.
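The segmented sphere mapping can be sketched point-wise as below. The 45-degree segmentation latitude, the linear azimuthal projection for the polar caps, and the equirectangular mapping for the ring are assumptions chosen for simplicity.

```python
import math

def ssp_project(lat_deg, lon_deg, face_size):
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    if lat_deg >= 45.0:   # north polar cap -> first circular face
        r = (math.pi / 2 - lat) / (math.pi / 4) * (face_size / 2)
        return ('north', face_size / 2 + r * math.cos(lon),
                face_size / 2 + r * math.sin(lon))
    if lat_deg <= -45.0:  # south polar cap -> second circular face
        r = (lat + math.pi / 2) / (math.pi / 4) * (face_size / 2)
        return ('south', face_size / 2 + r * math.cos(lon),
                face_size / 2 + r * math.sin(lon))
    # non-polar ring-shaped segment -> rectangular face (equirectangular);
    # the ring covers 360 degrees of longitude by 90 degrees of latitude,
    # hence the 4:1 aspect ratio of the rectangle.
    u = (lon + math.pi) / (2 * math.pi) * (4 * face_size)
    v = (math.pi / 4 - lat) / (math.pi / 2) * face_size
    return ('ring', u, v)
```

Each pole maps to the centre of its circular face, the 45-degree parallels map to the circle rims and the rectangle's top and bottom edges, and everything in between lands inside exactly one of the three faces.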
Abstract:
A video processing method includes: receiving an omnidirectional content corresponding to a sphere, obtaining projection faces from the omnidirectional content, and creating a projection-based frame by generating at least one padding region and packing the projection faces and said at least one padding region in a 360 VR projection layout. The projection faces packed in the 360 VR projection layout include a first projection face and a second projection face, where there is an image content discontinuity edge between the first projection face and the second projection face if the first projection face connects with the second projection face. The at least one padding region packed in the 360 VR projection layout includes a first padding region, where the first padding region connects with the first projection face and the second projection face for isolating the first projection face from the second projection face in the 360 VR projection layout.
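The isolating padding region can be sketched for two horizontally adjacent faces as below. Edge-column repetition is only one plausible fill, assumed here for brevity; geometry-derived padding pixels would serve the same isolating purpose.

```python
import numpy as np

def pack_with_padding(face_a, face_b, pad):
    # Insert a padding region between two faces whose shared boundary would
    # otherwise be an image content discontinuity edge. The padding is
    # filled by repeating each face's boundary column, keeping the content
    # on each side of the seam locally smooth.
    left_fill = np.repeat(face_a[:, -1:], pad, axis=1)
    right_fill = np.repeat(face_b[:, :1], pad, axis=1)
    return np.hstack([face_a, left_fill, right_fill, face_b])

packed = pack_with_padding(np.full((2, 2), 1), np.full((2, 2), 5), 2)
```

Because the two faces no longer touch, block-based coding artifacts straddling the seam are pushed into the padding, which the renderer can discard.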
Abstract:
A video encoding method includes: generating reconstructed blocks for coding blocks within a frame, respectively, wherein the frame has a 360-degree image content represented by projection faces arranged in a 360-degree Virtual Reality (360 VR) projection layout, and there is at least one image content discontinuity edge resulting from packing of the projection faces in the frame; and configuring at least one in-loop filter, such that the at least one in-loop filter does not apply in-loop filtering to reconstructed blocks located at the least one image content discontinuity edge.
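The filter-disabling idea can be sketched in one dimension as below. The `smooth` callback stands in for a real in-loop filter (deblocking, SAO, etc.), and treating every face boundary as a multiple of the face size is an assumption of this sketch.

```python
def filter_vertical_edges(row, face_size, discontinuous, smooth):
    # Apply a 1-D stand-in for in-loop filtering across each vertical face
    # boundary in a reconstructed pixel row, skipping boundaries flagged as
    # image-content discontinuity edges (filtering across those would blend
    # unrelated content from two different projection faces).
    row = list(row)
    for k in range(1, len(row) // face_size):
        edge = k * face_size
        if edge in discontinuous:
            continue  # leave samples at the discontinuity edge unfiltered
        row[edge - 1], row[edge] = smooth(row[edge - 1], row[edge])
    return row

filtered = filter_vertical_edges(
    [0, 0, 10, 10, 20, 20], 2, {4},
    lambda a, b: ((a + b) // 2, (a + b) // 2))
```

In the example, the boundary at position 2 is smoothed while the flagged boundary at position 4 keeps its original reconstructed samples on both sides.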
Abstract:
A resonator circuit includes: a first inductive element and a second inductive element that is connected to the first inductive element in series; a first capacitive element, connected to a first end of the first inductive element and a first output end of the resonator circuit; and a set of second capacitive elements connected in series, the set of second capacitive elements having one end connected between the first and second capacitive elements and having another end connected between the second capacitive element and a second output end of the resonator circuit. The intermediate end of the set of second capacitive elements is used as a third output end of the resonator circuit.