Abstract:
A method of encoding a frame to generate an output bitstream has the following steps: dividing the frame into partitions; dividing each of the partitions into blocks, wherein each of the blocks is composed of pixels; assigning a first segmentation identifier to each of first blocks located at partition boundaries, each partition boundary lying between two adjacent partitions within the frame, wherein the first blocks belong to a first segment and the first segmentation identifier is signaled per first block; and encoding each of the blocks. The step of encoding each of the blocks includes: generating reconstructed blocks for the blocks, respectively; and configuring an in-loop filter with a predetermined in-loop filtering setting in response to the first segmentation identifier, wherein the in-loop filter with the predetermined in-loop filtering setting does not apply in-loop filtering to any reconstructed block corresponding to the first segment.
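A minimal Python sketch of the segmentation-driven control described above, assuming hypothetical names (Block, FIRST_SEGMENT_ID, and a stand-in filter that is not any real codec API): blocks lying on a partition boundary are tagged with the first segmentation identifier, and the in-loop filter's predetermined setting leaves that segment unfiltered.

# Illustrative only; names and the stand-in filter are not taken from any codec.
FIRST_SEGMENT_ID = 1

class Block:
    def __init__(self, on_partition_boundary, reconstructed):
        self.on_partition_boundary = on_partition_boundary
        self.reconstructed = reconstructed      # reconstructed samples of the block
        self.segment_id = 0

def assign_segments(blocks):
    # The first segmentation identifier is signaled per block at partition boundaries.
    for blk in blocks:
        if blk.on_partition_boundary:
            blk.segment_id = FIRST_SEGMENT_ID

def in_loop_filter(blk, settings):
    # Predetermined setting: the first segment is not filtered at all.
    if settings.get(blk.segment_id, {}).get("disable_filter", False):
        return blk.reconstructed                # pass the block through unchanged
    return [s // 2 for s in blk.reconstructed]  # stand-in for actual deblocking/smoothing

settings = {FIRST_SEGMENT_ID: {"disable_filter": True}}
blocks = [Block(True, [10, 20, 30]), Block(False, [40, 50, 60])]
assign_segments(blocks)
filtered = [in_loop_filter(b, settings) for b in blocks]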
Abstract:
A method for performing efficiency optimization of an electronic device and an associated apparatus are provided, where the method includes the steps of: performing at least one detection operation according to at least one signal of the electronic device to generate at least one detection result; and selecting a rectifier size of a plurality of rectifier sizes of a configurable rectifier within the electronic device according to the at least one detection result, to control the configurable rectifier to operate with the rectifier size, wherein the configurable rectifier is arranged for performing rectification operations, and the configurable rectifier is configurable to operate with at least one portion of the configurable rectifier being activated.
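A hedged Python sketch of the size-selection idea, with hypothetical thresholds and slice counts that the abstract does not specify: a detection result such as a sensed load or input level picks how large a portion of the configurable rectifier is activated.

# Illustrative sketch only: the detection thresholds and rectifier sizes are assumptions.
RECTIFIER_SIZES = (1, 2, 4)        # e.g. number of parallel rectifier portions enabled

def select_rectifier_size(detection_result):
    # Map a detection result (e.g. a normalized detected power level) to a rectifier size.
    if detection_result < 0.3:
        return RECTIFIER_SIZES[0]  # light condition: activate only the smallest portion
    if detection_result < 0.7:
        return RECTIFIER_SIZES[1]
    return RECTIFIER_SIZES[2]      # heavy condition: activate the full rectifier

def configure_rectifier(device, detection_result):
    size = select_rectifier_size(detection_result)
    device["active_slices"] = size  # only this portion of the configurable rectifier runs
    return size

device = {"active_slices": None}
configure_rectifier(device, detection_result=0.5)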
Abstract:
A video processing method includes: receiving a bitstream, wherein a part of the bitstream transmits encoded information of a projection-based frame that has a 360-degree content represented by projection faces packed in a 360-degree Virtual Reality (360 VR) projection layout, and the projection-based frame has at least one boundary; and decoding, by a video decoder, the part of the bitstream, including: generating a reconstructed frame, parsing a flag from the bitstream, and applying an in-loop filtering operation to the reconstructed frame. The flag indicates that the in-loop filtering operation is blocked from being applied to each of said at least one boundary in the reconstructed frame. In response to the flag, the in-loop filtering operation is blocked from being applied to each of the at least one boundary in the reconstructed frame.
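A self-contained decoder-side sketch, with illustrative names that are not a real decoder API: one flag parsed from the bitstream blocks in-loop filtering at the layout's boundaries while other edges are still filtered.

# Hedged, self-contained sketch; the bitstream is modeled as a plain dictionary.
def parse_flag(bitstream):
    return bitstream.get("loop_filter_disabled_at_boundaries", False)

def apply_in_loop_filter(frame, edge):
    frame["filtered_edges"].append(edge)

def decode_frame(bitstream, layout_boundaries):
    frame = {"filtered_edges": []}                 # stand-in reconstructed frame
    disable = parse_flag(bitstream)                # flag signaled in the bitstream
    for edge in bitstream["edges"]:
        if disable and edge in layout_boundaries:
            continue                               # filtering blocked at this boundary
        apply_in_loop_filter(frame, edge)
    return frame

bitstream = {"loop_filter_disabled_at_boundaries": True,
             "edges": ["face0|face1", "face1|face2", "interior"]}
decode_frame(bitstream, layout_boundaries={"face0|face1", "face1|face2"})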
Abstract:
A video processing method includes receiving a bitstream, and decoding, by a video decoder, the bitstream to generate a decoded frame. The decoded frame is a projection-based frame that has a 360-degree image/video content represented by triangular projection faces packed in an octahedron projection layout. An omnidirectional image/video content of a viewing sphere is mapped onto the triangular projection faces via an octahedron projection of the viewing sphere. An equator of the viewing sphere is not mapped along any side of each of the triangular projection faces.
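A geometric sketch of why the equator can avoid the triangle sides, assuming a conventional octahedron with vertices on the coordinate axes and a hypothetical 45-degree tilt (the abstract does not fix a specific rotation): without the tilt, the equator (z = 0 with poles on the z-axis) coincides with the four equatorial face sides; with the tilt, the equator only crosses sides at isolated points and is not mapped along any side.

import math

def rotate_about_x(v, angle_deg):
    a = math.radians(angle_deg)
    x, y, z = v
    return (x, y * math.cos(a) - z * math.sin(a), y * math.sin(a) + z * math.cos(a))

def octahedron_face(direction):
    # Conventional octant selection: the sign pattern of (x, y, z) picks 1 of 8 faces.
    x, y, z = direction
    return (x >= 0, y >= 0, z >= 0)

def face_for_sphere_point(direction, tilt_deg=45.0):
    # Hypothetical tilt applied before face selection so the equator does not lie
    # along the octahedron's equatorial edges.
    return octahedron_face(rotate_about_x(direction, tilt_deg))

equator_point = (0.0, 1.0, 0.0)   # a point on the equator (z = 0)
face_for_sphere_point(equator_point)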
Abstract:
A video processing method includes: receiving an omnidirectional image/video content corresponding to a viewing sphere, generating a sequence of projection-based frames according to the omnidirectional image/video content and an octahedron projection layout, and encoding, by a video encoder, the sequence of projection-based frames to generate a bitstream. Each projection-based frame has a 360-degree image/video content represented by triangular projection faces packed in the octahedron projection layout. The omnidirectional image/video content of the viewing sphere is mapped onto the triangular projection faces via an octahedron projection of the viewing sphere. An equator of the viewing sphere is not mapped along any side of each of the triangular projection faces.
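An encoder-side sketch with hypothetical function names, showing only the flow the abstract describes: each omnidirectional picture is projected onto eight triangular faces (for example with a tilted octahedral mapping as outlined above), the faces are packed into the octahedron layout, and the packed frame sequence is handed to an ordinary video encoder.

def project_to_octahedron_layout(omnidirectional_picture):
    # Placeholder: a real implementation would resample the picture onto 8 triangles.
    faces = ["face_%d" % i for i in range(8)]
    return {"layout": "octahedron", "faces": faces, "source": omnidirectional_picture}

def encode_sequence(omnidirectional_pictures, video_encoder):
    frames = [project_to_octahedron_layout(p) for p in omnidirectional_pictures]
    return video_encoder(frames)            # bitstream for the packed frame sequence

bitstream = encode_sequence(["pic0", "pic1"],
                            video_encoder=lambda frames: bytes(len(frames)))  # stand-in encoder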
Abstract:
A video processing method includes: obtaining a plurality of projection faces from an omnidirectional content of a sphere, wherein the omnidirectional content of the sphere is mapped onto the projection faces via cubemap projection, and the projection faces comprise a first projection face; obtaining, by a re-sampling circuit, a first re-sampled projection face by re-sampling at least a portion of the first projection face through non-uniform mapping; generating a projection-based frame according to a projection layout of the cubemap projection, wherein the projection-based frame comprises the first re-sampled projection face packed in the projection layout; and encoding the projection-based frame to generate a part of a bitstream.
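A sketch of re-sampling one face through a non-uniform mapping. The arctangent spacing used here is just one possible non-uniform mapping (the kind used by equi-angular cubemap variants) and is an assumption, not the mapping claimed by the abstract.

import math

def nonuniform_coord(u):
    # Map a normalized coordinate u in [0, 1] to [0, 1] along an arctangent curve,
    # so the output grid reads the source face at non-uniformly spaced positions.
    return 0.5 + math.atan(2.0 * u - 1.0) / (math.pi / 2.0)

def resample_face(face, out_size):
    in_size = len(face)
    out = []
    for j in range(out_size):
        row = []
        for i in range(out_size):
            u = nonuniform_coord((i + 0.5) / out_size)
            v = nonuniform_coord((j + 0.5) / out_size)
            src_x = min(int(u * in_size), in_size - 1)   # nearest-neighbour pick
            src_y = min(int(v * in_size), in_size - 1)
            row.append(face[src_y][src_x])
        out.append(row)
    return out

face = [[x + 4 * y for x in range(4)] for y in range(4)]   # toy 4x4 projection face
resampled = resample_face(face, out_size=4)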
Abstract:
A resonator circuit includes: a first inductive element and a second inductive element that is connected to the first inductive element in series; a first capacitive element, connected to a first end of the first inductive element and a first output end of the resonator circuit; and a set of second capacitive elements connected in series, the set of second capacitive elements having one end connected between the first and second inductive elements and having another end connected between the second inductive element and a second output end of the resonator circuit. An intermediate node of the set of second capacitive elements is used as a third output end of the resonator circuit.
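For orientation only, a short Python note on the textbook LC resonance relation; the tapped, multi-output topology above has its own resonant behaviour that depends on the actual element values, so treating the inductors and the second capacitive elements as a single lumped L and C, as done here, is purely an assumption.

import math

def lc_resonance_hz(l_henries, c_farads):
    # Textbook resonance of a simple LC tank: f0 = 1 / (2*pi*sqrt(L*C)).
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henries * c_farads))

L1, L2 = 1e-9, 1e-9        # example values for the two series inductive elements
C2A, C2B = 2e-12, 2e-12    # example values for the series second capacitive elements
c_series = 1.0 / (1.0 / C2A + 1.0 / C2B)
f0 = lc_resonance_hz(L1 + L2, c_series)   # rough, assumption-laden estimate only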
Abstract:
Apparatus and methods are disclosed for partially decoding video frames when a sub-region of the video is selected for viewing. The method identifies and decodes the data units and pixel blocks of video frames needed to display the sub-region while bypassing data units and pixel blocks identified as unnecessary for displaying the sub-region. A video encoder receives a video frame comprising a plurality of cubic surfaces in a first configuration corresponding to a full-sized 360VR image. Each cubic surface corresponds to a different surface of a cube. The encoder reformats the received video frame by rearranging the plurality of cubic surfaces according to a second configuration that is different from the first configuration. The second configuration rearranges the six surfaces of the cubic 360VR image in order to fully utilize the line buffer and allow the line buffer to be narrower than the full-sized 360VR image.
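A packing sketch with hypothetical face layouts (the abstract does not give the exact configurations): repacking six cube faces from a wide one-row arrangement into a two-row arrangement halves the frame width, so a line buffer only has to span three face widths instead of six.

FIRST_CONFIG  = [["L", "F", "R", "B", "U", "D"]]   # 1 row x 6 faces: frame is 6 faces wide
SECOND_CONFIG = [["L", "F", "R"],
                 ["B", "U", "D"]]                  # 2 rows x 3 faces: frame is 3 faces wide

def pack_faces(faces, config, face_size):
    # Copy each face's samples into a frame laid out according to config.
    rows, cols = len(config) * face_size, len(config[0]) * face_size
    frame = [[None] * cols for _ in range(rows)]
    for r, row_of_faces in enumerate(config):
        for c, name in enumerate(row_of_faces):
            for y in range(face_size):
                for x in range(face_size):
                    frame[r * face_size + y][c * face_size + x] = faces[name][y][x]
    return frame

face_size = 2
faces = {name: [[name] * face_size for _ in range(face_size)] for name in "LFRBUD"}
narrow_frame = pack_faces(faces, SECOND_CONFIG, face_size)   # line buffer spans 3 face widths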
Abstract:
An exemplary video processing method includes: receiving an omnidirectional content corresponding to a sphere; obtaining a plurality of projection faces from the omnidirectional content of the sphere according to a pyramid projection; creating at least one padding region; and generating a projection-based frame by packing the projection faces and the at least one padding region in a pyramid projection layout. The projection faces packed in the pyramid projection layout include a first projection face. The at least one padding region packed in the pyramid projection layout includes a first padding region. The first padding region connects with at least the first projection face, and forms at least a portion of one boundary of the pyramid projection layout.
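A small packing sketch with hypothetical geometry: the base face of the pyramid projection is packed with a padding region appended along one edge, so the padding connects with the first projection face and forms part of the layout boundary. Filling the padding by repeating edge samples is only one common choice and an assumption here.

def make_padding(face, pad_rows):
    # Assumed fill: repeat the first projection face's bottom edge samples.
    return [list(face[-1]) for _ in range(pad_rows)]

def pack_pyramid_layout(first_face, pad_rows=2):
    padding = make_padding(first_face, pad_rows)
    # The padding region connects with the first projection face and forms the
    # bottom boundary of the packed pyramid projection layout.
    return first_face + padding

first_face = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]   # toy 3x3 base face of the pyramid projection
frame = pack_pyramid_layout(first_face)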