Abstract:
A gesture detecting apparatus including a light emitter configured to emit light towards an object, a camera configured to capture light emitted from the light emitter and reflected by the object, and a signal controller configured to control the light emitter and the camera, in which the light emitter comprises a first light and a second light, at least one of which is configured to emit light having non-monotonic intensity characteristics.
Abstract:
A processor-implemented method includes obtaining a visual association feature indicating an association between a first image frame and a second image frame and a visual appearance feature indicating an appearance of the same object in the first image frame and the second image frame, constructing a visual reprojection constraint based on the visual association feature, constructing a visual feature metric constraint based on the visual appearance feature, and performing localization and mapping based on the visual reprojection constraint and the visual feature metric constraint.
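As an illustrative sketch (not the patented formulation), a visual reprojection constraint of the kind described here is commonly a pinhole reprojection residual: a 3D map point is projected into the second frame and compared against the associated feature's observed pixel location. The function name and calibration values below are hypothetical:

```python
import numpy as np

def reprojection_residual(K, R, t, point_3d, observed_uv):
    """Residual between a 3D map point projected into a frame and the
    pixel where the associated feature was actually observed."""
    p_cam = R @ point_3d + t          # world -> camera frame
    uv_hom = K @ p_cam                # pinhole projection
    uv = uv_hom[:2] / uv_hom[2]       # dehomogenize
    return uv - observed_uv

# With an identity pose, a point on the optical axis projects to the
# principal point, so the residual against that observation is zero.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
r = reprojection_residual(K, np.eye(3), np.zeros(3),
                          np.array([0.0, 0.0, 2.0]),
                          np.array([320.0, 240.0]))
```

The feature metric constraint would analogously compare appearance values (descriptors or intensities) at the projected and observed locations rather than pixel coordinates.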
Abstract:
A processor-implemented method includes obtaining a first motion matrix corresponding to an extended reality (XR) system and a second motion matrix based on a conversion coefficient from an XR system coordinate system into a rolling shutter (RS) camera coordinate system, and projecting an RS color image of a current frame onto a global shutter (GS) color image coordinate system based on the second motion matrix and generating a GS color image of the current frame, wherein the second motion matrix is a motion matrix of a timestamp of a depth image, captured by a GS camera, corresponding to a timestamp of a first scanline of the RS color image captured by the RS camera.
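The RS-to-GS projection can be sketched under a deliberately simplified assumption of constant, purely horizontal camera motion (the patent's motion-matrix formulation is far more general): each scanline is captured slightly later than the one above it, so each row is shifted back to the reference time of the first scanline. The names `rs_to_gs` and `px_per_row_shift` are illustrative:

```python
import numpy as np

def rs_to_gs(rs_image, px_per_row_shift):
    """Undo rolling-shutter skew under a constant-velocity model: row r
    was exposed px_per_row_shift * r pixels of motion later than row 0,
    so shift it back to the row-0 (global-shutter-like) reference."""
    h, w = rs_image.shape
    gs = np.zeros_like(rs_image)
    for r in range(h):
        shift = int(round(px_per_row_shift * r))
        gs[r] = np.roll(rs_image[r], -shift)
    return gs
```

In the patented method the per-scanline correction comes from the second motion matrix rather than a fixed per-row pixel shift, and the warp is a full reprojection, not a row shift.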
Abstract:
An augmented reality (AR) device and a method of predicting a pose in the AR device are provided. In the AR device, inertial measurement unit (IMU) values corresponding to the movement of the AR device are obtained at an IMU rate, intermediate 6-degrees-of-freedom (6D) poses of the AR device are estimated based on the IMU values and images around the AR device via a visual-inertial simultaneous localization and mapping (VI-SLAM) module, and a pose prediction model for predicting relative 6D poses of the AR device is generated by performing learning using a deep neural network.
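As a minimal stand-in for the prediction step, a constant-velocity extrapolator over the translational part of the pose can illustrate what the learned model replaces; `predict_position` is a hypothetical helper, and the deep network described above would instead regress a full relative 6D pose conditioned on IMU values:

```python
import numpy as np

def predict_position(prev_positions, dt):
    """Constant-velocity baseline: extrapolate the next device position
    from the last two VI-SLAM position estimates. A learned predictor
    would replace this with an IMU-conditioned relative-pose model."""
    v = (prev_positions[-1] - prev_positions[-2]) / dt  # finite-difference velocity
    return prev_positions[-1] + v * dt
```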
Abstract:
Provided is a method of generating a computer-generated hologram (CGH), the method including obtaining complex data including amplitude data of object data and phase data of the object data corresponding to a spatial light modulator (SLM) plane by propagating the object data from an image plane to the SLM plane, encoding the complex data into encoded amplitude data, and generating a CGH based on the object data including the encoded amplitude data.
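One common way to propagate object data from the image plane to the SLM plane, as described above, is the angular-spectrum method; the abstract does not fix a propagation model, so the following is an assumed sketch rather than the patented computation:

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field a distance z by the angular-spectrum
    method: FFT to spatial frequencies, apply the free-space transfer
    function, and inverse FFT back. Evanescent components are passed
    with zero phase here for simplicity."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)               # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2  # (k/2pi)^2 - fx^2 - fy^2
    H = np.exp(2j * np.pi * z * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

The result is the complex data (amplitude and phase) at the SLM plane, which the method then encodes into amplitude data for the CGH.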
Abstract:
A three-dimensional (3D) image sensor device and an electronic apparatus including the 3D image sensor device are provided. The 3D image sensor device includes: a shutter driver that generates a driving voltage of a sine wave biased with a first bias voltage, from a loss-compensated recycling energy; an optical shutter that varies transmittance of reflective light reflected from a subject, according to the driving voltage, and modulates the reflective light to generate at least two optical modulation signals having different phases; and an image generator that generates 3D image data for the subject which includes depth information calculated based on a phase difference between the at least two optical modulation signals.
Abstract:
A depth image generating apparatus includes a light source configured to emit light; an optical shutter provided on a path of the light reflected by an object and configured to modulate a waveform of the reflected light by changing a transmissivity of the optical shutter with respect to the reflected light; a driver configured to apply a driving voltage to the light source and a driving voltage to the optical shutter; a temperature measurer configured to measure a temperature of the optical shutter; a controller configured to control the driving voltages; and a depth information obtainer configured to generate an image corresponding to the reflected light that passes through the optical shutter, extract a phase difference between a phase of the light emitted by the light source to the object and a phase of the reflected light, and obtain depth information regarding the object based on the phase difference.
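The phase-difference-to-depth step can be sketched as standard indirect time-of-flight demodulation; the four-sample scheme below is a common convention and is an assumption, not taken from the abstract:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def phase_from_samples(a0, a1, a2, a3):
    """Recover the phase delay from four correlation samples taken at
    0, 90, 180, and 270 degrees of the modulation period."""
    return np.arctan2(a3 - a1, a0 - a2) % (2.0 * np.pi)

def depth_from_phase(phase_rad, mod_freq_hz):
    """The phase delay of the reflected, shutter-modulated light maps
    to round-trip distance: d = c * phi / (4 * pi * f_mod)."""
    return C * phase_rad / (4.0 * np.pi * mod_freq_hz)
```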
Abstract:
Provided are a three-dimensional (3D) camera including a wavelength-variable light source for directly measuring transmittance and a method of measuring the transmittance. In addition to a light source, a transmission type shutter, and an image sensor, the 3D camera includes a wavelength-variable light source capable of irradiating light of a variable wavelength without being thermally affected by the light source, the image sensor, or the transmission type shutter. The wavelength-variable light source may directly measure a change in transmittance by irradiating light toward the transmission type shutter while the 3D camera operates.
Abstract:
Provided are an electronic device for processing computer-generated holography (CGH) and a method thereof. The electronic device generates a plurality of depth layers having different depth information from image data at a first view point, and reprojects each of the plurality of depth layers, based on the user's pose information at a second view point different from the first view point, to generate CGH.
Abstract:
An apparatus for accelerating simultaneous localization and mapping (SLAM) includes a SLAM processor including a front-end processor and a back-end processor. The front-end processor is configured to track a position of a first feature, among features extracted from a first frame, in a second frame subsequent to the first frame, and the back-end processor is configured to obtain a first measurement regarding a map point and a camera pose of the first feature based on the position of the first feature in the second frame tracked by the front-end processor, compute elements affecting an optimization matrix in relation to the first measurement, among elements of a Hessian matrix regarding the map point and the camera pose, and accumulate the computed elements in the optimization matrix used to perform an optimization operation with respect to states of the map point and the camera pose.
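The back-end accumulation step can be sketched as Gauss-Newton normal-equation assembly: each measurement contributes H += JᵀJ and b += -Jᵀr, but only into the Hessian blocks of its own camera pose and map point, which is exactly what makes selective accumulation into the optimization matrix worthwhile. The function and block offsets below are illustrative, not the patented design:

```python
import numpy as np

def accumulate_measurement(H, b, J_pose, J_point, r, pose_off, pt_off):
    """Accumulate one measurement's Gauss-Newton contribution into only
    the blocks of the optimization matrix that this measurement's
    camera pose (6 DoF) and map point (3D) actually touch."""
    Jc, Jp = J_pose, J_point
    H[pose_off:pose_off + 6, pose_off:pose_off + 6] += Jc.T @ Jc  # pose-pose
    H[pt_off:pt_off + 3,     pt_off:pt_off + 3]     += Jp.T @ Jp  # point-point
    H[pose_off:pose_off + 6, pt_off:pt_off + 3]     += Jc.T @ Jp  # pose-point
    H[pt_off:pt_off + 3,     pose_off:pose_off + 6] += Jp.T @ Jc  # point-pose
    b[pose_off:pose_off + 6] += -Jc.T @ r
    b[pt_off:pt_off + 3]     += -Jp.T @ r
```

Solving H x = b for the stacked pose and point updates then performs the optimization over the camera pose and map point states.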