Abstract:
An electronic device includes a communication interface and a processor. The processor receives, through the communication interface, a first image predistorted and rendered at a first time by an external device; calculates a pixel shift for each pixel of the received first image at a second time different from the first time; generates a second image by reprojecting the first image based on the calculated pixel shifts; and transmits the generated second image to the external device through the communication interface.
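The reprojection step above can be sketched as a forward warp: each pixel of the first image is moved by its computed shift to form the second image. This is a minimal sketch assuming a per-pixel (dy, dx) shift map; the function name and the hole-filling choice (zeros) are illustrative assumptions, not from the abstract.

```python
import numpy as np

def reproject(first_image, pixel_shift):
    """Reproject an image by moving each pixel by its calculated shift.

    first_image : (H, W) array of intensities
    pixel_shift : (H, W, 2) array of (dy, dx) shifts per pixel
    Pixels shifted outside the frame are dropped; holes keep a fill value of 0.
    """
    h, w = first_image.shape
    second_image = np.zeros_like(first_image)
    ys, xs = np.mgrid[0:h, 0:w]
    ty = np.round(ys + pixel_shift[..., 0]).astype(int)  # target rows
    tx = np.round(xs + pixel_shift[..., 1]).astype(int)  # target columns
    valid = (ty >= 0) & (ty < h) & (tx >= 0) & (tx < w)  # keep in-frame targets
    second_image[ty[valid], tx[valid]] = first_image[ys[valid], xs[valid]]
    return second_image
```

A real device would typically interpolate and resolve collisions by depth; this sketch keeps only the core scatter step.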
Abstract:
Provided is a processor configured to compute, for a first measurement, only those elements of a Hessian matrix that affect an optimization matrix, instead of generating the whole Hessian matrix for a map point and a camera pose from all measurements, and to accumulate the computed elements into the optimization matrix used to perform optimization operations on the states of the map point and the camera pose.
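The idea of accumulating per-measurement contributions rather than forming the full Hessian at once is standard in Gauss-Newton-style bundle adjustment: each measurement touches only the pose block, the point block, and their cross block. The following sketch assumes a 6-DoF pose and a 3-D point and invented block-layout conventions; it illustrates the accumulation pattern, not the patented implementation.

```python
import numpy as np

# Assumed sizes: 6-DoF camera pose, 3-D map point.
POSE_DIM, POINT_DIM = 6, 3

def accumulate_measurement(H, b, J_pose, J_point, residual,
                           pose_idx, point_idx, n_poses):
    """Add one measurement's contribution to the system H * dx = -b.

    Only the blocks this measurement affects are computed and accumulated:
    the pose diagonal block, the point diagonal block, and the cross blocks.
    J_pose   : (m, 6) Jacobian of the residual w.r.t. the camera pose
    J_point  : (m, 3) Jacobian of the residual w.r.t. the map point
    residual : (m,) measurement residual
    """
    p = pose_idx * POSE_DIM                       # pose block offset
    q = n_poses * POSE_DIM + point_idx * POINT_DIM  # point block offset
    H[p:p+POSE_DIM, p:p+POSE_DIM] += J_pose.T @ J_pose
    H[q:q+POINT_DIM, q:q+POINT_DIM] += J_point.T @ J_point
    H[p:p+POSE_DIM, q:q+POINT_DIM] += J_pose.T @ J_point
    H[q:q+POINT_DIM, p:p+POSE_DIM] += J_point.T @ J_pose
    b[p:p+POSE_DIM] += J_pose.T @ residual
    b[q:q+POINT_DIM] += J_point.T @ residual
```

Because each measurement writes only a few small blocks, the system can be built incrementally and kept sparse, which is the efficiency the abstract points at.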
Abstract:
A simultaneous localization and mapping device is provided. The device includes an image obtaining device configured to capture color images and depth images of a surrounding environment; an initial pose estimating device configured to estimate an initial pose based on the color images and the depth images; a map constructing device configured to construct a three-dimensional map based on the depth images and the color images; and a pose determining device configured to determine a final pose based on the initial pose and the three-dimensional map.
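The data flow among the four devices described above can be sketched as a pipeline. All class and method names below are illustrative assumptions (the abstract names devices, not APIs), and trivial stubs stand in for the real components.

```python
# Illustrative stubs; a real system would wrap actual sensors and solvers.
class Camera:
    def capture(self):
        return "color", "depth"            # color image and depth image

class InitialPoseEstimator:
    def estimate(self, color, depth):
        return [0.0, 0.0, 0.0]             # rough pose from the image pair

class MapBuilder:
    def __init__(self):
        self.map3d = []
    def construct(self, color, depth):
        self.map3d.append((color, depth))  # fuse the frame into the 3D map
        return self.map3d

class PoseRefiner:
    def determine(self, initial_pose, map3d):
        return initial_pose                # final pose from initial pose + map

def slam_step(camera, estimator, builder, refiner):
    """One iteration: capture -> initial pose -> map update -> final pose."""
    color, depth = camera.capture()
    initial_pose = estimator.estimate(color, depth)
    map3d = builder.construct(color, depth)
    return refiner.determine(initial_pose, map3d)
```

The point of the skeleton is the ordering: the final pose is determined from both the initial estimate and the constructed map, matching the abstract's division of labor.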
Abstract:
Provided are lidar (light detection and ranging) apparatuses that emit non-uniform light, and autonomous robots including the same. A lidar apparatus may include a light source configured to emit light, an optical unit arranged on the optical path of the light emitted from the light source and configured to change the optical profile of the light to be non-uniform, and a 3D sensor configured to sense the location of an object by receiving light reflected from the object.
Abstract:
An apparatus and a method for acquiring depth information are disclosed. To acquire depth information, illumination light whose amount has been modulated by a modulation signal is emitted toward a subject, and an image is captured using an image sensor. An image signal is acquired sequentially from a plurality of rows while the phase of the illumination light is shifted at the exposure start time of a row belonging to an intermediate region of the pixel array of the image sensor. Depth information is calculated from the image signals acquired over a plurality of frames while the phase of the modulation signal is shifted.
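Recovering depth from image signals captured at shifted modulation phases is commonly done with the textbook four-phase time-of-flight formula; the sketch below uses that standard formula purely for illustration and is not necessarily the patented calculation.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def depth_from_four_phases(i0, i90, i180, i270, f_mod):
    """Standard four-phase ToF depth recovery (illustrative).

    i0..i270 : image signals captured with the modulation signal shifted
               by 0, 90, 180 and 270 degrees over successive frames
    f_mod    : modulation frequency in Hz
    """
    phase = math.atan2(i270 - i90, i0 - i180)  # phase delay of reflected light
    if phase < 0.0:
        phase += 2.0 * math.pi                 # wrap into [0, 2*pi)
    return C * phase / (4.0 * math.pi * f_mod)
```

The unambiguous range of this recovery is c / (2 f_mod), which is why practical systems choose the modulation frequency to match the scene scale.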
Abstract:
A depth image measuring camera includes an illumination device configured to irradiate an object with light, and a light-modulating optical system configured to receive the light reflected from the object. The camera further includes an image sensor configured to generate an image of the object by receiving the light that has passed through the light-modulating optical system. The light-modulating optical system includes a plurality of lenses sharing the same optical axis, and an optical modulator configured to operate in two modes for measuring a depth of the object.
Abstract:
A three-dimensional (3D) image sensor device and an electronic apparatus including the 3D image sensor device are provided. The 3D image sensor device includes: a shutter driver that generates, from loss-compensated recycled energy, a sine-wave driving voltage biased with a first bias voltage; an optical shutter that varies the transmittance of light reflected from a subject according to the driving voltage, and modulates the reflected light to generate at least two optical modulation signals having different phases; and an image generator that generates 3D image data for the subject, including depth information calculated based on a phase difference between the at least two optical modulation signals.