Abstract:
An image generation method includes: determining at least one first image capture setting and at least one second image capture setting; and controlling an image capture device to generate a plurality of first successive captured images for a capture trigger event according to the at least one first image capture setting and generate a plurality of second successive captured images for the same capture trigger event according to the at least one second image capture setting. The variation of the at least one first image capture setting is constrained within a first predetermined range during generation of the first successive captured images. The difference between the at least one first image capture setting and the at least one second image capture setting is beyond the first predetermined range. The variation of the at least one second image capture setting is constrained within a second predetermined range during generation of the second successive captured images.
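A minimal sketch of the dual-burst capture flow described above, assuming a single numeric exposure value stands in for an image capture setting and that camera.capture() is a hypothetical device API; the random jitter models the constrained per-burst variation.

    import random

    def capture_for_trigger(camera, setting_a, setting_b, range_a, range_b, burst_len=5):
        """Capture two bursts for one trigger event under two distinct settings."""
        # The two settings must differ by more than the first predetermined range.
        assert abs(setting_a - setting_b) > range_a
        first_burst = []
        for _ in range(burst_len):
            # Variation of the first setting stays within the first predetermined range.
            value = setting_a + random.uniform(-range_a / 2, range_a / 2)
            first_burst.append(camera.capture(exposure=value))  # hypothetical camera API
        second_burst = []
        for _ in range(burst_len):
            # Variation of the second setting stays within the second predetermined range.
            value = setting_b + random.uniform(-range_b / 2, range_b / 2)
            second_burst.append(camera.capture(exposure=value))  # hypothetical camera API
        return first_burst, second_burst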
Abstract:
One video coding method includes at least the following steps: utilizing a visual quality evaluation module for evaluating visual quality based on data involved in a coding loop; and referring to at least the evaluated visual quality for performing sample adaptive offset (SAO) filtering. Another video coding method includes at least the following steps: utilizing a visual quality evaluation module for evaluating visual quality based on data involved in a coding loop; and referring to at least the evaluated visual quality for deciding a target coding parameter associated with sample adaptive offset (SAO) filtering.
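As a rough illustration only, not the claimed method: the sketch below picks an SAO-style offset for one block by maximizing an evaluated visual-quality score rather than a pure distortion measure. The single whole-block offset and the placeholder metric are simplifying assumptions.

    import numpy as np

    def visual_quality(original, reconstructed):
        # Placeholder perceptual score; a real evaluation module might use SSIM or similar.
        return -np.mean(np.abs(original.astype(np.int32) - reconstructed.astype(np.int32)))

    def choose_sao_offset(original, deblocked, candidate_offsets=(-2, -1, 0, 1, 2)):
        """Pick the offset whose filtered block scores best under the quality metric."""
        best_offset, best_score = 0, None
        for offset in candidate_offsets:
            filtered = np.clip(deblocked.astype(np.int32) + offset, 0, 255)
            score = visual_quality(original, filtered)  # visual quality drives the SAO decision
            if best_score is None or score > best_score:
                best_offset, best_score = offset, score
        return best_offset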
Abstract:
An exemplary video recording method of recording an output video sequence for an image capture module includes at least the following steps: deriving a first video sequence from an input video sequence generated by the image capture module, wherein the first video sequence is composed of a plurality of video frames; calculating an image quality metric value for each of the video frames of the first video sequence; referring to the image quality metric value to select or drop each of the video frames of the first video sequence, and accordingly obtaining a second video sequence composed of selected video frames; and generating the recorded output video sequence according to the second video sequence.
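A minimal sketch of the select-or-drop step, assuming the variance of the Laplacian (a common sharpness measure) stands in for the image quality metric and the threshold is illustrative; writing the recorded output sequence (e.g. via cv2.VideoWriter) is omitted.

    import cv2

    def quality_metric(frame):
        """Sharpness score: variance of the Laplacian of the grayscale frame."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    def select_frames(first_sequence, threshold=50.0):
        """Keep frames whose metric clears the threshold, drop the rest (second sequence)."""
        return [frame for frame in first_sequence if quality_metric(frame) >= threshold]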
Abstract:
One video coding method includes at least the following steps: utilizing a visual quality evaluation module for evaluating visual quality based on data involved in a coding loop; and referring to at least the evaluated visual quality for performing motion estimation. Another video coding method includes at least the following steps: utilizing a visual quality evaluation module for evaluating visual quality based on data involved in a coding loop; and referring to at least the evaluated visual quality for deciding a target coding parameter associated with motion estimation.
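For illustration only, not the claimed method: the sketch below ranks candidate motion vectors by an evaluated visual-quality score instead of SAD alone. The full-search window and the placeholder metric are assumptions.

    import numpy as np

    def visual_quality(block, prediction):
        # Placeholder perceptual score; the evaluation module's actual metric is not specified here.
        return -np.mean((block.astype(np.int32) - prediction.astype(np.int32)) ** 2)

    def motion_search(cur, ref, bx, by, bsize=16, search_range=8):
        """Full search over a small window, ranked by the evaluated quality score."""
        block = cur[by:by + bsize, bx:bx + bsize]
        best_mv, best_score = (0, 0), None
        for dy in range(-search_range, search_range + 1):
            for dx in range(-search_range, search_range + 1):
                y, x = by + dy, bx + dx
                if y < 0 or x < 0 or y + bsize > ref.shape[0] or x + bsize > ref.shape[1]:
                    continue
                prediction = ref[y:y + bsize, x:x + bsize]
                score = visual_quality(block, prediction)
                if best_score is None or score > best_score:
                    best_mv, best_score = (dx, dy), score
        return best_mv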
Abstract:
An image-based motion sensor has a camera system and a processing system. The camera system generates an image output including a plurality of captured images. The processing system obtains a motion sensor output by processing the image output, and identifies a user input as one of a plurality of pre-defined user actions according to the motion sensor output. Different functions of at least one application performed by one electronic device are controlled by the pre-defined user actions. The motion sensor output includes information indicative of at least one of a motion status and an orientation status of the image-based motion sensor. Each of the captured images has more than one color component, and only values of one single color component are involved in obtaining the motion sensor output.
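A minimal sketch under stated assumptions: frames are H x W x 3 arrays, only the green channel is read (the single color component), a coarse global translation stands in for the motion status, and the action names are illustrative rather than taken from the abstract.

    import numpy as np

    def estimate_shift(prev_frame, cur_frame, max_shift=8):
        """Estimate a global (dx, dy) between frames using only the green channel."""
        prev_g = prev_frame[:, :, 1].astype(np.float32)
        cur_g = cur_frame[:, :, 1].astype(np.float32)
        best, best_err = (0, 0), None
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                shifted = np.roll(np.roll(cur_g, dy, axis=0), dx, axis=1)
                err = np.mean(np.abs(shifted - prev_g))
                if best_err is None or err < best_err:
                    best, best_err = (dx, dy), err
        return best

    def classify_action(dx, dy, threshold=3):
        """Map the motion-sensor output onto one of a few pre-defined user actions."""
        if abs(dx) < threshold and abs(dy) < threshold:
            return "hold_still"
        if abs(dx) >= abs(dy):
            return "swipe_right" if dx > 0 else "swipe_left"
        return "swipe_down" if dy > 0 else "swipe_up"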
Abstract:
A video frame processing method, which comprises: (a) capturing at least one first video frame via a first camera; (b) capturing at least one second video frame via a second camera; and (c) adjusting one candidate second video frame of the second video frames based on one of the first video frames to generate a target single view video frame.
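One possible reading of the adjusting step, sketched under the assumption that "adjusting" means matching the candidate second-camera frame's per-channel brightness to the reference first-camera frame so the merged single-view stream stays consistent; the actual adjustment could equally be geometric or temporal.

    import numpy as np

    def adjust_candidate_frame(first_frame, candidate_second_frame):
        """Shift each channel of the candidate frame toward the reference frame's mean."""
        first = first_frame.astype(np.float32)
        candidate = candidate_second_frame.astype(np.float32)
        # Per-channel mean matching; the result serves as the target single view video frame.
        adjusted = candidate + (first.mean(axis=(0, 1)) - candidate.mean(axis=(0, 1)))
        return np.clip(adjusted, 0, 255).astype(np.uint8)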
Abstract:
A video coding method includes at least the following steps: utilizing a visual quality evaluation module for evaluating visual quality based on data involved in a coding loop; and referring to at least the evaluated visual quality for deciding a target bit allocation of a rate-controlled unit in video coding. Besides, a video coding apparatus has a visual quality evaluation module, a rate controller and a coding circuit. The visual quality evaluation module evaluates visual quality based on data involved in a coding loop. The rate controller refers to at least the evaluated visual quality for deciding a target bit allocation of a rate-controlled unit. The coding circuit has the coding loop included therein, and encodes the rate-controlled unit according to the target bit allocation.
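A minimal sketch of quality-driven bit allocation, not the claimed rate controller: a picture-level budget is split across rate-controlled units, with lower evaluated visual quality earning a larger share. Quality scores in [0, 1] and the weighting rule are assumptions.

    def allocate_bits(picture_budget, quality_scores, floor=0.1):
        """Return one target bit allocation per rate-controlled unit."""
        # Lower evaluated quality -> larger weight; `floor` keeps every unit above zero bits.
        weights = [max(1.0 - q, floor) for q in quality_scores]
        total = sum(weights)
        return [picture_budget * w / total for w in weights]

    # Example: allocate_bits(90_000, [0.9, 0.4, 0.7, 0.6]) gives the lowest-quality
    # unit the largest share of the 90 kbit picture budget.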