Abstract:
Various examples of a method and apparatus for active stereo vision are described. An apparatus may include an electromagnetic (EM) wave emitter, a first sensor and a second sensor. During operation, the EM wave emitter emits EM waves toward a scene, the first sensor captures a first image of the scene in an infrared (IR) spectrum, and the second sensor captures a second image of the scene in a light spectrum. The first image and second image, when processed, may enable active stereo vision.
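A minimal sketch of one way the two captured images could be processed into a depth map, assuming the inputs are already rectified, the same size, and that a standard OpenCV semi-global block matcher is acceptable for cross-spectral matching (a simplification). The function name and the focal-length and baseline values are invented for illustration and are not part of the abstract.

```python
import cv2
import numpy as np

def depth_from_active_stereo(ir_image, light_image, focal_px=700.0, baseline_m=0.05):
    """Estimate a depth map (meters) from an IR image and a visible-light image."""
    # Bring both spectra to single-channel 8-bit so a standard matcher can run.
    left = cv2.normalize(ir_image, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    right = cv2.cvtColor(light_image, cv2.COLOR_BGR2GRAY)

    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM output is fixed-point

    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]  # Z = f * B / d
    return depth
```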
Abstract:
An on-line stereo camera calibration method employed by an electronic device with a stereo camera device includes: retrieving a feature point set, and utilizing a stereo camera calibration circuit on the electronic device to calculate a stereo camera parameter set based on the retrieved feature point set. In addition, an on-line stereo camera calibration device on an electronic device with a stereo camera device includes a stereo camera calibration circuit. The stereo camera calibration circuit includes an input interface and a stereo camera calibration unit. The input interface is used to retrieve a feature point set. The stereo camera calibration unit is used to calculate a stereo camera parameter set based on at least the retrieved feature point set.
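A minimal sketch of the calculation step, assuming the retrieved "feature point set" consists of matched 2D points from the two cameras and that the stereo camera parameter set can be represented by a fundamental matrix estimated with RANSAC. The function name and interface are illustrative, not the abstract's own.

```python
import cv2
import numpy as np

def calibrate_stereo_online(points_left, points_right):
    """Estimate a fundamental matrix from matched feature points collected during use."""
    pts_l = np.asarray(points_left, dtype=np.float32)
    pts_r = np.asarray(points_right, dtype=np.float32)
    # RANSAC rejects outlier correspondences gathered on-line.
    F, inlier_mask = cv2.findFundamentalMat(pts_l, pts_r, cv2.FM_RANSAC, 1.0, 0.99)
    return F, inlier_mask
```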
Abstract:
A method for performing image control in an electronic device and an associated apparatus are provided, where the method may include the steps of: obtaining a specific image from a camera module of the electronic device, and obtaining a specific focus-related parameter corresponding to the specific image, wherein the specific focus-related parameter is related to focus control of the camera module; determining a specific resize parameter corresponding to the specific image according to a relationship between the specific resize parameter and the specific focus-related parameter; and performing a resize operation on the specific image according to the specific resize parameter, to control the specific image to be displayed in a quasi-scale-invariant manner.
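An illustrative sketch of the resize operation, assuming the focus-related parameter is a focus distance in meters and that the relationship to the resize factor is a simple proportional mapping around a reference distance. The reference distance, the clamping range, and the function name are all assumptions made for the example.

```python
import cv2

REFERENCE_FOCUS_M = 1.0  # assumed focus distance at which the scale factor is 1.0

def resize_for_quasi_scale_invariance(image, focus_distance_m):
    """Resize an image so a focused subject appears at a roughly constant display size."""
    scale = focus_distance_m / REFERENCE_FOCUS_M  # assumed relationship to the focus parameter
    scale = max(0.25, min(scale, 4.0))            # keep the resize factor in a sane range
    new_size = (int(image.shape[1] * scale), int(image.shape[0] * scale))
    return cv2.resize(image, new_size, interpolation=cv2.INTER_LINEAR)
```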
Abstract:
A method for performing preview control in an electronic device and an associated apparatus are provided, where the method may include the steps of: obtaining a specific preview image from a camera module of the electronic device, and obtaining a specific focus-related parameter corresponding to the specific preview image, wherein the specific focus-related parameter is related to focus control of the camera module; determining a specific resize parameter corresponding to the specific preview image according to a predetermined relationship between the specific resize parameter and the specific focus-related parameter; and performing a resize operation on the specific preview image according to the specific resize parameter, to control the specific preview image to be displayed in a quasi-scale-invariant manner with respect to a plurality of preview images.
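A sketch of how the preview path might apply the same idea frame by frame, assuming the "predetermined relationship" is a lookup table from lens position to resize factor prepared ahead of time. The camera and display interfaces (is_previewing, get_preview_frame, get_lens_position, show) and the table values are hypothetical.

```python
import numpy as np
import cv2

# Assumed predetermined relationship: lens position -> resize factor.
LENS_POSITIONS = np.array([0, 128, 256, 384, 512])
RESIZE_FACTORS = np.array([0.5, 0.75, 1.0, 1.5, 2.0])

def preview_loop(camera, display):
    while camera.is_previewing():
        frame = camera.get_preview_frame()
        lens_pos = camera.get_lens_position()
        factor = float(np.interp(lens_pos, LENS_POSITIONS, RESIZE_FACTORS))
        h, w = frame.shape[:2]
        resized = cv2.resize(frame, (int(w * factor), int(h * factor)))
        display.show(resized)  # successive preview images appear quasi-scale-invariant
```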
Abstract:
A three-dimensional (3D) image capture method, employed in an electronic device with a monocular camera and a 3D display, includes at least the following steps: while the electronic device is moving, deriving a 3D preview image from a first preview image and a second preview image generated by the monocular camera, and providing 3D preview on the 3D display according to the 3D preview image, wherein at least one of the first preview image and the second preview image is generated while the electronic device is moving; and when a capture event is triggered, outputting the 3D preview image as a 3D captured image.
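An illustrative sketch only: derive a stereo pair from two preview frames generated at different positions while the device moves, and pack them into a side-by-side 3D preview image. The motion threshold, the side-by-side packing, and the function names are assumptions for the example, not the abstract's specific method.

```python
import numpy as np

MIN_BASELINE_PX = 20.0  # assumed horizontal shift needed between the two preview frames

def make_3d_preview(first_frame, second_frame, estimated_shift_px):
    """Return a side-by-side 3D preview image, or None if the baseline is too small."""
    if abs(estimated_shift_px) < MIN_BASELINE_PX:
        return None  # the device has not moved enough for a usable stereo pair
    left, right = (first_frame, second_frame) if estimated_shift_px > 0 else (second_frame, first_frame)
    return np.hstack([left, right])  # side-by-side frame for the 3D display

def on_capture_event(current_3d_preview):
    # When a capture event is triggered, the 3D preview image is output as the 3D captured image.
    return current_3d_preview.copy()
```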
Abstract:
A video frame processing method, which comprises: (a) capturing at least two video frames via a multi-view camera system comprising a plurality of cameras; (b) recording a timestamp for each video frame; (c) determining a major camera and a first sub camera out of the multi-view camera system based on the timestamps, wherein the major camera captures a major video sequence comprising at least one major video frame, and the first sub camera captures a video sequence of a first view comprising at least one video frame of the first view; (d) generating a first reference video frame of the first view according to a first reference major video frame of the major video frames, which is at a reference timestamp corresponding to the first reference video frame of the first view, and according to at least one video frame of the first view surrounding the reference timestamp; and (e) generating a multi-view video sequence comprising a first multi-view video frame, wherein the first multi-view video frame is generated based on the first reference video frame of the first view and the first reference major video frame.
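A rough sketch of steps (c) and (d), assuming each camera reports a list of (timestamp, frame) pairs with frames as NumPy arrays. Choosing the camera with the most recorded frames as the major camera and blending the two temporally nearest sub-camera frames around the reference timestamp are simplifying assumptions made for illustration.

```python
import numpy as np

def pick_major_camera(sequences):
    """sequences: dict camera_id -> list of (timestamp, frame). Return the major camera id."""
    return max(sequences, key=lambda cam: len(sequences[cam]))

def generate_reference_frame(sub_sequence, reference_timestamp):
    """Generate a frame of the first view aligned to the major camera's reference timestamp."""
    earlier = [(t, f) for t, f in sub_sequence if t <= reference_timestamp]
    later = [(t, f) for t, f in sub_sequence if t > reference_timestamp]
    if not earlier or not later:
        # Fall back to the single closest available frame.
        _, frame = min(sub_sequence, key=lambda tf: abs(tf[0] - reference_timestamp))
        return frame
    t0, f0 = max(earlier, key=lambda tf: tf[0])
    t1, f1 = min(later, key=lambda tf: tf[0])
    w = (reference_timestamp - t0) / float(t1 - t0)  # temporal interpolation weight
    blended = (1.0 - w) * f0.astype(np.float32) + w * f1.astype(np.float32)
    return blended.astype(f0.dtype)
```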