Abstract:
A video frame processing method, which comprises: (a) capturing at least two video frames via a multi-view camera system comprising a plurality of cameras; (b) recording a timestamp for each video frame; (c) determining a major camera and a first sub camera out of the multi-view camera system based on the timestamps, wherein the major camera captures a major video sequence comprising at least one major video frame, and the first sub camera captures a video sequence of a first view comprising at least one video frame of the first view; (d) generating a first reference video frame of the first view according to one first reference major video frame of the major video frames, which is at a reference timestamp corresponding to the first reference video frame of the first view, and according to at least one video frame of the first view surrounding the reference timestamp; and (e) generating a multi-view video sequence comprising a first multi-view video frame, wherein the first multi-view video frame is generated based on the first reference video frame of the first view and the first reference major video frame.
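The abstract above does not specify how the surrounding first-view frames are combined at the major camera's reference timestamp in step (d); as a minimal sketch, the following Python code assumes the first-view frame is synthesized by linear temporal interpolation between the two first-view frames that bracket the reference timestamp, clamping at the sequence ends. The function name and frame representation are illustrative, not taken from the abstract.

import numpy as np

def synthesize_first_view(ref_timestamp, first_view_frames, first_view_timestamps):
    """Blend the two first-view frames surrounding ref_timestamp (step (d) of the method)."""
    ts = np.asarray(first_view_timestamps, dtype=float)
    j = int(np.searchsorted(ts, ref_timestamp))   # first frame at or after ref_timestamp
    if j == 0:
        return first_view_frames[0]               # reference lies before the first frame: clamp
    if j >= len(ts):
        return first_view_frames[-1]              # reference lies after the last frame: clamp
    t0, t1 = ts[j - 1], ts[j]
    w = (ref_timestamp - t0) / (t1 - t0)          # temporal weight of the later frame
    blended = (1.0 - w) * first_view_frames[j - 1] + w * first_view_frames[j]
    return blended.astype(first_view_frames[0].dtype)

# Example: two 4x4 first-view frames at t=0.0 and t=1.0, major-camera timestamp 0.25.
frames = [np.zeros((4, 4)), np.ones((4, 4))]
print(synthesize_first_view(0.25, frames, [0.0, 1.0])[0, 0])   # -> 0.25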
Abstract:
An image generation method includes: determining at least one first image capture setting and at least one second image capture setting; and controlling an image capture device to generate a plurality of first successive captured images for a capture trigger event according to the at least one first image capture setting and to generate a plurality of second successive captured images for the same capture trigger event according to the at least one second image capture setting. A variation of the at least one first image capture setting is constrained within a first predetermined range during generation of the first successive captured images. A difference between the at least one first image capture setting and the at least one second image capture setting is beyond the first predetermined range. A variation of the at least one second image capture setting is constrained within a second predetermined range during generation of the second successive captured images.
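A minimal sketch of the two-burst behavior described above, assuming exposure value (EV) as the image capture setting and a hypothetical capture_image callback; the abstract does not name a specific setting, burst length, or range values, so all numbers below are placeholders.

def clamp(value, center, allowed_range):
    lo, hi = center - allowed_range / 2, center + allowed_range / 2
    return max(lo, min(hi, value))

def capture_two_bursts(capture_image, ev_first, ev_second,
                       first_range=0.3, second_range=0.3, burst_len=4):
    # The two nominal settings must differ by more than the first burst's allowed range.
    assert abs(ev_second - ev_first) > first_range
    first_burst, second_burst = [], []
    for i in range(burst_len):
        # Per-frame tweaks are constrained within the first burst's range.
        ev = clamp(ev_first + 0.05 * i, ev_first, first_range)
        first_burst.append(capture_image(ev))
    for i in range(burst_len):
        # Per-frame tweaks are constrained within the second burst's range.
        ev = clamp(ev_second + 0.05 * i, ev_second, second_range)
        second_burst.append(capture_image(ev))
    return first_burst, second_burst

# Usage with a stand-in capture function that just records the EV it was given.
firsts, seconds = capture_two_bursts(lambda ev: {"ev": ev}, ev_first=0.0, ev_second=2.0)
print([f["ev"] for f in firsts], [s["ev"] for s in seconds])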
Abstract:
A synchronization controller for a multi-sensor camera device includes a detection circuit and a control circuit. The detection circuit detects asynchronization between image outputs generated from the multi-sensor camera device, wherein the image outputs correspond to different viewing angles. The control circuit controls an operation of the multi-sensor camera device in response to the asynchronization detected by the detection circuit. In addition, a synchronization method applied to a multi-sensor camera device includes the following steps: detecting asynchronization between image outputs generated from the multi-sensor camera device, wherein the image outputs correspond to different viewing angles; and controlling an operation of the multi-sensor camera device in response to the detected asynchronization.
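The abstract leaves the detection criterion and the control action unspecified; the sketch below models both circuits as Python functions, assuming asynchronization is declared when the timestamp skew between the sensors' latest frames exceeds a threshold, and that the control action restarts capture on the lagging sensor. Both choices, and the FakeSensor class, are illustrative.

SKEW_THRESHOLD_MS = 5.0   # assumed tolerance, not specified by the abstract

def detect_asynchronization(timestamps_ms):
    """timestamps_ms: latest frame timestamp per sensor (one entry per viewing angle)."""
    skew = max(timestamps_ms) - min(timestamps_ms)
    return skew > SKEW_THRESHOLD_MS, skew

def control_camera_device(sensors, timestamps_ms):
    out_of_sync, skew = detect_asynchronization(timestamps_ms)
    if out_of_sync:
        lagging = timestamps_ms.index(min(timestamps_ms))
        sensors[lagging].restart_capture()   # hypothetical control action
    return out_of_sync, skew

class FakeSensor:
    def restart_capture(self):
        print("restarting lagging sensor")

print(control_camera_device([FakeSensor(), FakeSensor()], [100.0, 112.0]))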
Abstract:
A video frame processing method, which comprises: (a) capturing at least one first video frame via a first camera; (b) capturing at least one second video frame via a second camera; and (c) adjusting one candidate second video frame of the at least one second video frame based on one of the at least one first video frame to generate a target single-view video frame.
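The abstract does not say what the adjustment in step (c) involves; one plausible reading is photometric alignment between the two cameras, so the sketch below matches the candidate second-camera frame's mean brightness to the chosen first-camera frame. The gain-based matching is an assumption, not the claimed method.

import numpy as np

def adjust_candidate_frame(first_frame, candidate_second_frame):
    # Scale the candidate frame so its mean brightness matches the first-camera frame.
    gain = first_frame.mean() / max(candidate_second_frame.mean(), 1e-6)
    adjusted = candidate_second_frame.astype(float) * gain
    return np.clip(adjusted, 0, 255).astype(np.uint8)

first = np.full((2, 2), 120, dtype=np.uint8)
second = np.full((2, 2), 60, dtype=np.uint8)
print(adjust_candidate_frame(first, second))   # brightness raised from 60 to about 120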
Abstract:
A transmission interface includes a first pin, a second pin, a conversion unit, and a decoding unit. The conversion unit receives a serial input data stream via the first pin and receives a serial clock via the second pin. The conversion unit converts the serial input data stream to parallel input data and converts the serial clock to a parallel clock. The serial input data stream has a full-swing form. The decoding unit receives and decodes the parallel input data and generates an input data signal according to the decoded parallel input data.
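A behavioral sketch of the conversion and decoding units in Python. The abstract specifies neither the parallel word width nor the coding carried by the serial input data stream, so an 8-bit word, a parallel clock modeled as one tick per assembled word, and an XOR-based decode with a hypothetical scramble mask are all assumptions for illustration.

WORD_BITS = 8
SCRAMBLE_MASK = 0b10100101   # hypothetical decoding constant

def serial_to_parallel(serial_bits):
    """Group serial bits (MSB first) into parallel words; one parallel-clock tick per word."""
    words, parallel_clock_ticks = [], 0
    for i in range(0, len(serial_bits) - WORD_BITS + 1, WORD_BITS):
        word = 0
        for bit in serial_bits[i:i + WORD_BITS]:
            word = (word << 1) | (bit & 1)
        words.append(word)
        parallel_clock_ticks += 1
    return words, parallel_clock_ticks

def decode(parallel_words):
    # Stand-in for the decoding unit: undo the assumed scrambling word by word.
    return [w ^ SCRAMBLE_MASK for w in parallel_words]

bits = [1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1]
words, ticks = serial_to_parallel(bits)
print(words, ticks, decode(words))   # [165, 15] 2 [0, 170]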
Abstract:
An on-line stereo camera calibration method employed by an electronic device with a stereo camera device includes: retrieving a feature point set, and utilizing a stereo camera calibration circuit on the electronic device to calculate a stereo camera parameter set based on the retrieved feature point set. In addition, an on-line stereo camera calibration device on an electronic device with a stereo camera device includes a stereo camera calibration circuit. The stereo camera calibration circuit includes an input interface and a stereo camera calibration unit. The input interface is used to retrieve a feature point set. The stereo camera calibration unit is used to calculate a stereo camera parameter set based on at least the retrieved feature point set.
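The abstract does not identify which parameters make up the stereo camera parameter set; the fundamental matrix relating the two views is a common on-line calibration target, so the sketch below estimates it from a retrieved feature point set with the unnormalized eight-point algorithm. Point normalization and outlier rejection, which a practical calibration unit would add, are omitted for brevity.

import numpy as np

def eight_point_fundamental(pts_left, pts_right):
    """pts_left, pts_right: Nx2 arrays of matched feature points (N >= 8)."""
    A = np.array([[xr * xl, xr * yl, xr, yr * xl, yr * yl, yr, xl, yl, 1.0]
                  for (xl, yl), (xr, yr) in zip(pts_left, pts_right)])
    _, _, vt = np.linalg.svd(A)
    F = vt[-1].reshape(3, 3)          # null-space vector of A, reshaped to 3x3
    u, s, vt2 = np.linalg.svd(F)
    s[2] = 0.0                        # enforce the rank-2 constraint
    return u @ np.diag(s) @ vt2

# Usage with synthetic matches: the right view is the left view shifted by a fixed disparity.
rng = np.random.default_rng(0)
left = rng.uniform(0, 100, size=(8, 2))
right = left + np.array([5.0, 0.0])
F = eight_point_fundamental(left, right)
print(np.round(F, 3))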