Abstract:
A focus tunable optical system includes a compound lens, which includes a plurality of focus tunable lenses. Further, the focus tunable optical system comprises a controller, which is configured to shift a focus of the compound lens from a first focal plane to a second focal plane. To this end, the controller is configured to apply, individually to each focus tunable lens of the plurality of focus tunable lenses, a control signal having a first value for the first focal plane and a second value for the second focal plane.
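As an editorial illustration of the per-lens control described above, the sketch below applies an individual control value to each lens of the compound lens when shifting between two focal planes. The lens names and signal values are hypothetical, not taken from the disclosure.

```python
# Hypothetical per-lens control values: each focus tunable lens has its own
# first value (for plane_1) and second value (for plane_2), as in the abstract.
CONTROL_TABLE = {
    "lens_a": {"plane_1": 0.20, "plane_2": 0.55},
    "lens_b": {"plane_1": 0.10, "plane_2": 0.40},
}

def shift_focus(plane):
    """Return the control value applied individually to each lens
    to move the compound lens focus to the given focal plane."""
    return {lens: values[plane] for lens, values in CONTROL_TABLE.items()}
```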
Abstract:
A multifocal display device has a focus tunable lens, a controller, and a storage. The controller selectively tunes the focus of the lens to a plurality of focal planes of different index during a frame period; a focal plane of lower index has a shorter focal distance. The storage stores a plurality of focal plane groups, each group including the plurality of focal planes in a different sequence. The controller selects a first group and tunes, during a first frame period, the focus of the lens to each one of the focal planes in the first group according to their sequence; it then selects a second group from the groups allowed by a selection rule and tunes, during a second frame period, the focus of the lens to each one of the focal planes in the second group according to their sequence.
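The group selection described above can be sketched as follows. The concrete planes, the stored groups (here: all permutations), and the selection rule are hypothetical choices for illustration only.

```python
from itertools import permutations

PLANES = [1, 2, 3]  # lower index = shorter focal distance
GROUPS = [list(p) for p in permutations(PLANES)]  # stored plane sequences

def selection_rule(prev_group, candidate):
    """Hypothetical rule: consecutive frame periods must not start
    on the same focal plane."""
    return candidate[0] != prev_group[0]

def next_group(prev_group):
    """Select the next group from the groups the rule allows."""
    allowed = [g for g in GROUPS if selection_rule(prev_group, g)]
    return allowed[0]
```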
Abstract:
A device and method perform Simultaneous Localization and Mapping (SLAM). The device includes at least one processor configured to perform the SLAM method, which includes the following operations. Preprocess, in a first processing stage, a received data sequence including multiple images recorded by a camera and sensor readings from multiple sensors in order to obtain a frame sequence. Each frame of the frame sequence includes a visual feature set related to one of the images at a determined time instance and sensor readings from that time instance. Sequentially process, in a second processing stage, each frame of the frame sequence based on the visual feature set and the sensor readings included in that frame in order to generate a sequence mapping graph. Merge, in a third processing stage, the sequence mapping graph with at least one other graph, in order to generate or update a full graph.
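A heavily simplified sketch of the three-stage flow above, for orientation only: the frame model, the graph representation (plain edge lists), and all function names are hypothetical and omit the actual SLAM machinery.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    t: int          # time instance of the image
    features: list  # visual feature set extracted from that image
    sensors: dict   # sensor readings from the same time instance

def preprocess(images, readings):
    """Stage 1: combine images and sensor readings into a frame sequence."""
    return [Frame(t, feats, readings.get(t, {})) for t, feats in images]

def build_sequence_graph(frames):
    """Stage 2: process frames sequentially into a mapping graph,
    here reduced to edges between consecutive frame times."""
    return [(a.t, b.t) for a, b in zip(frames, frames[1:])]

def merge_graphs(sequence_graph, other_graph):
    """Stage 3: merge the sequence mapping graph with another graph
    into a full graph (deduplicated edge union)."""
    return sorted(set(sequence_graph) | set(other_graph))
```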
Abstract:
A method for encoding an input signal comprising signal frames into quantized bits is disclosed. The method comprises generating, for each frame of the input signal, a signal matrix comprising matrix coefficients obtained from that frame, grouping the matrix coefficients of each signal matrix into a plurality of partition vectors, and, for each partition vector, selecting one vector quantization scheme from among a plurality of vector quantization schemes and quantizing that partition vector according to the selected vector quantization scheme to obtain the quantized bits. In an adaptive mode, the method comprises grouping the matrix coefficients obtained from different frames differently, and/or selecting different vector quantization schemes for partition vectors obtained from different frames.
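The per-partition scheme selection can be sketched as below: each partition vector is quantized with whichever scheme (codebook) yields the lowest distortion. The two codebooks and the squared-error criterion are illustrative assumptions, not specified by the abstract.

```python
# Two hypothetical vector quantization "schemes", each a small codebook.
CODEBOOKS = {
    "coarse": [[0.0, 0.0], [1.0, 1.0]],
    "fine":   [[0.0, 0.5], [0.5, 0.0], [0.5, 0.5], [1.0, 0.5]],
}

def distortion(vector, code):
    """Squared-error distortion between a partition vector and a codeword."""
    return sum((a - b) ** 2 for a, b in zip(vector, code))

def quantize_partition(vector):
    """Pick the scheme and codeword index with minimal distortion;
    the index would then be emitted as quantized bits."""
    best = None
    for name, codebook in CODEBOOKS.items():
        for idx, code in enumerate(codebook):
            d = distortion(vector, code)
            if best is None or d < best[0]:
                best = (d, name, idx)
    return best[1], best[2]
```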
Abstract:
The present disclosure provides an image processing device for providing a plurality of enhanced partial images which together represent an enhanced three-dimensional, 3D, image, wherein each enhanced partial image is a two-dimensional, 2D, image associated with one of a plurality of focal planes. The image processing device includes processing circuitry configured to receive or generate a plurality of initial partial images, which together form an initial 3D image, wherein each initial partial image is a 2D image associated with one of the plurality of focal planes; and to generate, from each of the initial partial images, an enhanced partial image by generating a blurred version of the initial partial image and blending the initial partial image with the blurred version of the initial partial image.
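The blur-and-blend step can be sketched on a 1-D image row as follows. The 3-tap box blur and the blending weight `alpha` are hypothetical choices; the abstract does not specify the blur kernel or blend formula.

```python
def box_blur(row):
    """3-tap box blur of a 1-D image row (edge-clamped)."""
    n = len(row)
    return [(row[max(i - 1, 0)] + row[i] + row[min(i + 1, n - 1)]) / 3
            for i in range(n)]

def enhance(partial, alpha=0.5):
    """Blend an initial partial image with its blurred version;
    alpha is a hypothetical blending weight."""
    blurred = box_blur(partial)
    return [(1 - alpha) * p + alpha * b for p, b in zip(partial, blurred)]
```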
Abstract:
The present invention provides a device comprising circuitry configured to obtain a plurality of initial two-dimensional, 2D, images which together represent an initial three-dimensional, 3D, image, wherein each initial 2D image is associated with one of a plurality of focal planes. The device is further configured to generate one or more blurred versions of each of the initial 2D images on one or more of the focal planes other than its associated focal plane; generate a high passed version of each of the initial 2D images on its associated focal plane; and generate a plurality of final 2D images by generating for each focal plane a final 2D image based on the high passed version of the initial 2D image associated with that focal plane and one or more blurred versions generated on that focal plane from one or more of the other initial 2D images.
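A minimal sketch of the composition described above, assuming a 3-tap box blur, a high-pass defined as original minus blurred, and equal-weight summation across planes; all of these are assumptions for illustration, not the disclosed method.

```python
def box_blur(row):
    """3-tap box blur of a 1-D image row (edge-clamped)."""
    n = len(row)
    return [(row[max(i - 1, 0)] + row[i] + row[min(i + 1, n - 1)]) / 3
            for i in range(n)]

def high_pass(row):
    """High-passed version: original minus its blurred version
    (one simple choice of high-pass)."""
    return [p - b for p, b in zip(row, box_blur(row))]

def final_images(initial):
    """For each focal plane: its own high-passed image plus blurred
    versions of every other plane's image (equal weights, hypothetical)."""
    finals = []
    for k, img in enumerate(initial):
        out = high_pass(img)
        for j, other in enumerate(initial):
            if j != k:
                out = [o + b for o, b in zip(out, box_blur(other))]
        finals.append(out)
    return finals
```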
Abstract:
The present disclosure provides a device, in particular a multifocal display device. The device includes: a display element configured to generate an image; and a controller configured to control the display element according to at least a first bit sequence provided over a first determined time period and a second bit sequence provided over a second determined time period, in order to generate the image with one or more colors, the bit sequences including for each color a number of bits of different significance. Moreover, the device is configured to generate the first bit sequence from an original bit sequence based on discarding at least one bit of a color and to generate the second bit sequence from the original bit sequence based on discarding at least one other bit of the color.
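One way to derive two per-frame bit sequences from an original bit sequence is sketched below: the first sequence discards the least significant bit, the second discards the next-to-least significant bit. This particular discarding pattern is a hypothetical example, not the pattern mandated by the disclosure.

```python
def split_bit_sequences(value, bits=8):
    """Derive two bit sequences (MSB first) from an original color value:
    the first discards bit 0, the second discards bit 1 but keeps bit 0
    (one hypothetical discarding pattern)."""
    original = [(value >> i) & 1 for i in reversed(range(bits))]
    first = original[:-1]                    # drop the least significant bit
    second = original[:-2] + original[-1:]   # drop the next-to-least, keep bit 0
    return first, second
```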
Abstract:
A multifocal display device has a focus tunable lens (FTL) and a controller configured to shift a focus of the FTL from a first focal plane to a second focal plane by applying a compensated control signal to the FTL. The controller is configured to generate a current compensated control signal value, which is a value of the compensated control signal for a current point in time, based on one or more previous compensated control signal values, which are values of the compensated control signal at one or more previous points in time.
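The recursive compensation can be sketched as a first-order overdrive filter: each current compensated value is the target plus a correction based on the previous compensated value. The filter form and the gain are assumptions for illustration; the abstract does not specify the compensation law.

```python
def compensate(targets, gain=0.6):
    """Generate compensated control values from target values: each current
    value depends on the previous compensated value (hypothetical gain)."""
    out = []
    prev = targets[0]
    for t in targets:
        current = t + gain * (t - prev)  # overdrive toward the new target
        out.append(current)
        prev = current
    return out
```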
Abstract:
An audio signal encoding method is provided. The method comprises: collecting audio signal samples, determining sinusoidal components in subsequent frames, estimating amplitudes and frequencies of the components for each frame, merging the pairs thus obtained into sinusoidal trajectories, splitting particular trajectories into segments, transforming particular trajectories to the frequency domain by means of a digital transform performed on segments longer than the frame duration, quantizing and selecting transform coefficients in the segments, entropy encoding, and outputting the quantized coefficients as output data, wherein segments of different trajectories starting within a particular time period are grouped into Groups of Segments (GOS), and the partitioning of trajectories into segments is synchronized with the endpoints of a Group of Segments.
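The GOS-synchronized splitting can be sketched as below: a trajectory spanning a range of frames is cut at fixed GOS boundaries, so segments of different trajectories share endpoints. The GOS length of 4 frames is a hypothetical value.

```python
def segment_trajectory(start, length, gos_len=4):
    """Cut a trajectory covering frames [start, start + length) into
    segments whose endpoints align with GOS boundaries every gos_len
    frames (hypothetical length). Returns (begin, end) frame pairs."""
    segments, frame, end = [], start, start + length
    while frame < end:
        boundary = (frame // gos_len + 1) * gos_len  # next GOS endpoint
        cut = min(boundary, end)
        segments.append((frame, cut))
        frame = cut
    return segments
```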
Abstract:
An apparatus for estimating an overall mixing time is provided. The apparatus comprises a processing element configured to: determine differences between energy profiles of a first room impulse response of a first pair of room impulse responses and a second room impulse response of the first pair at a plurality of different sample times; set, as a mixing time for the first pair of room impulse responses, a sample time of the plurality of sample times at which the difference between the energy profiles of the first room impulse response and the second room impulse response is equal to or below a threshold value; and determine the overall mixing time based on the mixing time for the first pair of room impulse responses.
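The per-pair mixing-time step can be sketched as below: compare short-time energy profiles of the two responses and return the first sample time at which their difference drops to or below the threshold. The windowed-energy profile and the threshold value are hypothetical choices.

```python
def energy_profile(rir, win=2):
    """Short-time energy profile of a room impulse response
    (window length is a hypothetical choice)."""
    return [sum(x * x for x in rir[i:i + win])
            for i in range(len(rir) - win + 1)]

def mixing_time(rir_a, rir_b, threshold=0.1):
    """First sample time at which the energy-profile difference of the
    pair is equal to or below the threshold (None if never reached)."""
    ea, eb = energy_profile(rir_a), energy_profile(rir_b)
    for t, (a, b) in enumerate(zip(ea, eb)):
        if abs(a - b) <= threshold:
            return t
    return None
```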