Abstract:
An exemplary decoding method of an input video bitstream including a first bitstream and a second bitstream includes: decoding a first picture in the first bitstream; after the required decoded data derived from decoding the first picture is ready for a first decoding operation of a second picture in the first bitstream, performing the first decoding operation; and after the required decoded data derived from decoding the first picture is ready for a second decoding operation of a picture in the second bitstream, performing the second decoding operation, wherein the time period of decoding the second picture in the first bitstream and the time period of decoding the picture in the second bitstream overlap in time.
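A minimal Python sketch of the dependency-driven overlap described above, using threads and an event as stand-ins for the decoder's scheduling (all function names are illustrative, not the claimed decoder):

    import threading

    first_picture_decoded = threading.Event()

    def decode_first_picture():
        # ... decode the first picture of the first bitstream ...
        first_picture_decoded.set()   # required decoded data is now ready

    def decode_second_picture_first_bitstream():
        first_picture_decoded.wait()  # wait only on the data it depends on
        # ... first decoding operation on the second picture ...

    def decode_picture_second_bitstream():
        first_picture_decoded.wait()  # same dependency, different bitstream
        # ... second decoding operation, overlapping the one above in time ...

    workers = [threading.Thread(target=decode_second_picture_first_bitstream),
               threading.Thread(target=decode_picture_second_bitstream)]
    for w in workers:
        w.start()
    decode_first_picture()
    for w in workers:
        w.join()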
Abstract:
An image resizing method includes at least the following steps: receiving at least one input image; performing an image content analysis upon at least one image selected from the at least one input image to obtain an image content analysis result; and creating a target image with a target image resolution by scaling the at least one input image according to the image content analysis result, wherein the target image resolution is different from an image resolution of the at least one input image.
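A minimal Python sketch, assuming a very simple content analysis (edge density used to pick a resampling filter); analyze_content and the filter rule are illustrative assumptions, not the claimed analysis:

    import numpy as np
    from PIL import Image

    def analyze_content(img: Image.Image) -> float:
        gray = np.asarray(img.convert("L"), dtype=np.float32)
        gy, gx = np.gradient(gray)
        return float(np.mean(np.hypot(gx, gy)))          # rough edge-density score

    def resize_by_content(img: Image.Image, target_size: tuple) -> Image.Image:
        score = analyze_content(img)
        # Detail-rich images get a higher-quality (slower) resampling filter.
        resample = Image.LANCZOS if score > 10.0 else Image.BILINEAR
        return img.resize(target_size, resample)         # target resolution differs from the input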
Abstract:
An image adjustment method, applied to an image sensing system comprising an image sensor, comprises: (a) sensing a target image by the image sensor; (b) dividing the target image into a plurality of image regions; (c) acquiring location information of at least one first target feature in the image regions; (d) computing brightness information of each of the image regions; (e) generating adjustment curves according to the brightness information and the required brightness values of each of the image regions; and (f) adjusting brightness values of the image regions according to the adjustment curves. Step (d) adjusts the brightness information according to the location information, or step (e) adjusts the adjustment curves according to the location information.
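A minimal Python sketch of steps (b) through (f), assuming a fixed grid split and a per-region gamma curve as the adjustment curve; how the location information of the target feature modifies step (d) or step (e) is left out:

    import numpy as np

    def adjust_regions(image: np.ndarray, grid=(4, 4), required_brightness=128.0):
        h, w = image.shape[:2]
        out = image.astype(np.float32)
        rh, rw = h // grid[0], w // grid[1]
        for i in range(grid[0]):                                   # step (b): divide into regions
            for j in range(grid[1]):
                region = out[i*rh:(i+1)*rh, j*rw:(j+1)*rw]
                mean = float(np.clip(region.mean(), 1.0, 254.0))   # step (d): brightness info
                gamma = np.log(required_brightness / 255.0) / np.log(mean / 255.0)  # step (e)
                region[:] = 255.0 * (region / 255.0) ** gamma      # step (f): apply the curve
        return np.clip(out, 0, 255).astype(np.uint8)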
Abstract:
A method for tuning a plurality of image signal processor (ISP) parameters of a camera includes performing a first iteration. The first iteration includes extracting image features from an initial image, arranging a tuning order of the plurality of ISP parameters of the camera according to at least the plurality of ISP parameters and the image features, tuning a first set of the ISP parameters according to the tuning order to generate a first tuned set of the ISP parameters, and replacing the first set of the ISP parameters with the first tuned set of the ISP parameters in the plurality of ISP parameters to generate a plurality of updated ISP parameters.
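A minimal Python sketch of the first iteration, where extract_features, order_parameters, and tune_parameter are hypothetical stand-ins for the feature extraction, ordering, and per-parameter tuning that the abstract leaves unspecified:

    import numpy as np

    def extract_features(image: np.ndarray) -> dict:
        # Illustrative image features only: a noise estimate and mean brightness.
        return {"noise": float(np.std(image)), "brightness": float(np.mean(image))}

    def order_parameters(params: dict, features: dict) -> list:
        # Hypothetical ordering rule: tune noise-related parameters first on noisy images.
        noisy_first = features["noise"] > 20.0
        return sorted(params, key=lambda p: ("noise" not in p) if noisy_first else ("noise" in p))

    def tune_parameter(name: str, value: float, features: dict) -> float:
        # Placeholder tuning: nudge the value toward a feature-dependent target.
        return value * (1.1 if features["brightness"] < 100 else 0.9)

    def run_first_iteration(image: np.ndarray, isp_params: dict) -> dict:
        features = extract_features(image)
        order = order_parameters(isp_params, features)
        first_set = order[:3]                               # first set of ISP parameters
        tuned = {n: tune_parameter(n, isp_params[n], features) for n in first_set}
        updated = dict(isp_params)
        updated.update(tuned)                               # replace first set with the tuned set
        return updated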
Abstract:
An image processing method is applied to an operation device and includes analyzing an unprocessed image to split the unprocessed image into a first region and a second region, applying a first image processing algorithm to the first region for acquiring a first processed result, applying a second image processing algorithm different from the first image processing algorithm to the second region for acquiring a second processed result, and generating a processed image via the first processed result and the second processed result.
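A minimal Python sketch, assuming a brightness threshold as the analysis that splits the image, and blurring and sharpening as the two different algorithms; the actual split criterion and algorithms are not specified by the abstract:

    import numpy as np
    from scipy import ndimage

    def process(unprocessed: np.ndarray) -> np.ndarray:
        mask = unprocessed > unprocessed.mean()             # analysis: first vs. second region
        blurred = ndimage.gaussian_filter(unprocessed.astype(np.float32), sigma=1.5)
        sharpened = unprocessed + (unprocessed - blurred)   # simple unsharp masking
        merged = np.where(mask, sharpened, blurred)         # combine the two processed results
        return np.clip(merged, 0, 255).astype(np.uint8)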
Abstract:
An image enhancement method, applied to an image enhancement apparatus, includes acquiring a first edge feature from a first spectral image and a second edge feature from a second spectral image, analyzing the similarity between the first edge feature and the second edge feature to align the first spectral image with the second spectral image, acquiring at least one first detail feature from the first spectral image and at least one second detail feature from the second spectral image, comparing the first edge feature and the second edge feature to generate a first weight and a second weight, and fusing the first detail feature weighted by the first weight with the second detail feature weighted by the second weight to generate a fused image. The first spectral image and the second spectral image are captured at the same point in time.
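A minimal Python sketch of the weighting-and-fusion step only, assuming gradient magnitude as the edge feature and high-pass residuals as the detail feature, with the two spectral images already aligned; the similarity-based alignment itself is omitted:

    import numpy as np
    from scipy import ndimage

    def fuse(spectral_a: np.ndarray, spectral_b: np.ndarray) -> np.ndarray:
        a = spectral_a.astype(np.float32)
        b = spectral_b.astype(np.float32)
        edge_a = np.hypot(*np.gradient(a))                   # first edge feature
        edge_b = np.hypot(*np.gradient(b))                   # second edge feature
        w_a = edge_a / (edge_a + edge_b + 1e-6)               # first weight from edge comparison
        w_b = 1.0 - w_a                                       # second weight
        detail_a = a - ndimage.gaussian_filter(a, 2.0)        # first detail feature
        detail_b = b - ndimage.gaussian_filter(b, 2.0)        # second detail feature
        base = ndimage.gaussian_filter(a, 2.0)
        return base + w_a * detail_a + w_b * detail_b         # fused image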
Abstract:
A video encoding method includes: setting a 360-degree Virtual Reality (360 VR) projection layout of projection faces, wherein the projection faces comprise a plurality of triangular projection faces located at a plurality of positions in the 360 VR projection layout, respectively; encoding a frame having a 360-degree image content represented by the projection faces arranged in the 360 VR projection layout to generate a bitstream; and for each position included in at least a portion of the positions, signaling at least one syntax element via the bitstream, wherein the at least one syntax element is set to indicate at least one of an index of a triangular projection view filled into a corresponding triangular projection face located at the position and a rotation angle of content rotation applied to the triangular projection view filled into the corresponding triangular projection face located at the position.
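A minimal Python sketch of the signaling step only, assuming each position carries a face index and a rotation chosen from {0, 90, 180, 270} degrees; the two-byte-per-position layout below is illustrative and not an actual bitstream syntax:

    import struct

    ROTATION_CODE = {0: 0, 90: 1, 180: 2, 270: 3}

    def signal_layout(positions):
        """positions: (face_index, rotation_degrees) for each position in the layout."""
        payload = bytearray()
        for face_index, rotation in positions:
            payload += struct.pack("BB", face_index, ROTATION_CODE[rotation])
        return bytes(payload)

    # Example: eight triangular faces, every other face rotated by 180 degrees when packed.
    extra_bytes = signal_layout([(0, 0), (1, 180), (2, 0), (3, 180),
                                 (4, 0), (5, 180), (6, 0), (7, 180)])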
Abstract:
An exemplary image processing method includes the following steps: receiving an image input composed of at least one source image; receiving algorithm selection information corresponding to each source image; checking the corresponding algorithm selection information of each source image to determine a selected image processing algorithm from a plurality of different image processing algorithms; and performing an object-oriented image processing operation upon the source image based on the selected image processing algorithm. The algorithm selection information indicates an image quality of each source image; it is generated by an auxiliary sensor, by an image processing module of an image capture apparatus, or by a processing circuit being one of a video decoder, a frame rate converter, and an audio/video synchronization (AV-Sync) module, or it is a user-defined mode setting.
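A minimal Python sketch of the per-image selection step, assuming the algorithm selection information is a simple quality label for each source image; the processing functions are illustrative placeholders:

    import numpy as np
    from scipy import ndimage

    def denoise(img):
        return ndimage.median_filter(img, size=3)

    def sharpen(img):
        return np.clip(img + (img - ndimage.gaussian_filter(img.astype(float), 1.0)), 0, 255)

    def passthrough(img):
        return img

    ALGORITHMS = {"noisy": denoise, "soft": sharpen, "good": passthrough}

    def process_inputs(source_images, selection_info):
        # selection_info[i] indicates the image quality of source_images[i]
        return [ALGORITHMS[selection_info[i]](img) for i, img in enumerate(source_images)]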
Abstract:
A projection-based frame is generated according to an omnidirectional video frame and an octahedron projection layout. The projection-based frame has a 360-degree image content represented by triangular projection faces assembled in the octahedron projection layout. A 360-degree image content of a viewing sphere is mapped onto the triangular projection faces via an octahedron projection of the viewing sphere. One side of a first triangular projection face has contact with one side of a second triangular projection face, and one side of a third triangular projection face has contact with another side of the second triangular projection face. One image content continuity boundary exists between the side of the first triangular projection face and the side of the second triangular projection face, and another image content continuity boundary exists between the side of the third triangular projection face and the other side of the second triangular projection face.
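A minimal Python sketch of the sphere-to-face mapping only: each unit direction on the viewing sphere falls into one of the eight octants, i.e. one of the eight triangular projection faces. How the faces are then packed so that adjacent sides form image content continuity boundaries is layout-specific and omitted here:

    import numpy as np

    def octahedron_face(direction: np.ndarray) -> int:
        x, y, z = direction / np.linalg.norm(direction)
        # 3-bit octant code, one bit per coordinate sign, giving face indices 0..7.
        return (int(x < 0) << 2) | (int(y < 0) << 1) | int(z < 0)

    print(octahedron_face(np.array([0.5, -0.2, 0.8])))       # -> face index 2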
Abstract:
A method for performing image processing control and an associated apparatus are provided, where the method may include the steps of: performing image coding on image information of at least one frame to generate encoded data of the at least one frame, wherein in the encoded data, a specific frame of the at least one frame includes a plurality of tiles, and each tile of the plurality of tiles includes a plurality of superblocks; and generating a bitstream carrying the encoded data of the at least one frame, wherein at least a partition type and a transform size of each superblock within a specific tile of the plurality of tiles are derivable from information corresponding to the specific tile within the encoded data, with no need to derive the partition type and the transform size from information corresponding to another tile of the plurality of tiles within the encoded data.
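A minimal Python sketch of the data layout the abstract implies: every superblock's partition type and transform size are stored with its own tile, so they can be derived from that tile alone; the field names are illustrative, not an actual codec syntax:

    from dataclasses import dataclass, field

    @dataclass
    class Superblock:
        partition_type: str            # e.g. "NONE", "HORZ", "VERT", "SPLIT"
        transform_size: int            # e.g. 4, 8, 16, or 32

    @dataclass
    class Tile:
        superblocks: list = field(default_factory=list)

    @dataclass
    class Frame:
        tiles: list = field(default_factory=list)

    def superblock_info(frame: Frame, tile_idx: int, sb_idx: int):
        # Derivable from the specific tile only; no other tile is consulted.
        sb = frame.tiles[tile_idx].superblocks[sb_idx]
        return sb.partition_type, sb.transform_size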