Abstract:
A medical image processing apparatus may include an image data generator to generate, using X-rays, image data corresponding to at least two different energy bands; an ROI processor to highlight, in the generated image data, a tissue of interest classified based on a predetermined characteristic so that it is distinguished from normal tissue; and a display to alternately display first image data, in which the tissue of interest is not highlighted, and second image data, in which the tissue of interest is highlighted to be distinguished from the normal tissue.
Abstract:
Provided are a method and an apparatus for providing a three-dimensional (3D) image. A plurality of first projection images may be created by detecting X-rays emitted toward an object at different angles. A plurality of second projection images with respect to a partial volume of the object may be created by applying a forward projection and interpolation to at least one of the plurality of first projection images. A left image and a right image may be selected from among the plurality of second projection images, and the selected left image and right image may be displayed for a user as a 3D projection image.
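The selection of a left and a right image from the reprojected set can be illustrated with a small sketch. This is not the patented selection rule; the function name, the nearest-angle criterion, and the stereo separation value are all assumptions chosen for illustration.

```python
# Illustrative sketch: pick the two second-projection images whose view
# angles are nearest to (center - separation/2) and (center + separation/2),
# and use them as the left/right images of a 3D projection pair.

def select_stereo_pair(angles_deg, center_deg, separation_deg):
    """Return (left_index, right_index) of the projections closest to the
    two target angles of a stereo pair."""
    left_target = center_deg - separation_deg / 2.0
    right_target = center_deg + separation_deg / 2.0
    left_idx = min(range(len(angles_deg)),
                   key=lambda i: abs(angles_deg[i] - left_target))
    right_idx = min(range(len(angles_deg)),
                    key=lambda i: abs(angles_deg[i] - right_target))
    return left_idx, right_idx

# Projections acquired every 3 degrees around the central view (0 degrees).
angles = [-15, -12, -9, -6, -3, 0, 3, 6, 9, 12, 15]
print(select_stereo_pair(angles, center_deg=0, separation_deg=6))  # (4, 6)
```

With a 6-degree separation, the pair lands on the projections at -3 and +3 degrees, which is the kind of small angular baseline a stereo display needs.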
Abstract:
An X-ray imaging device includes an X-ray generator to generate an X-ray and radiate the X-ray toward an object, an X-ray detector to detect the X-ray passing through the object and acquire an image signal of the object, and a controller to analyze the image signal of the object, evaluate a characteristic of the object, and generate at least one of a single energy X-ray image and a multiple energy X-ray image according to the evaluated characteristic.
Abstract:
A video encoder is provided. The video encoder according to an example embodiment includes: a differentiable prediction (DP) module configured to output an optimal initial search position by performing a full search in a predetermined area using a pair of frames of a video as input; and a motion estimation (ME) module configured to perform motion estimation by moving a search position toward the optimal initial search position output by the DP module.
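The idea of seeding motion estimation with a predicted initial position can be sketched as follows. The greedy neighborhood search and the toy cost function stand in for real block-matching (e.g., SAD over pixel blocks); none of the names or parameters come from the abstract itself.

```python
# Illustrative sketch: local motion-vector refinement starting from a
# DP-predicted initial search position, instead of a full search.

def seeded_search(cost, start, radius=2, max_steps=10):
    """Greedy refinement: repeatedly move to the lowest-cost candidate
    within `radius` of the current position until no improvement."""
    best = start
    best_cost = cost(best)
    for _ in range(max_steps):
        candidates = [(best[0] + dx, best[1] + dy)
                      for dx in range(-radius, radius + 1)
                      for dy in range(-radius, radius + 1)]
        nxt = min(candidates, key=cost)
        if cost(nxt) >= best_cost:
            break
        best, best_cost = nxt, cost(nxt)
    return best

# Toy cost standing in for block SAD; minimized at the true motion (5, -3).
cost = lambda mv: (mv[0] - 5) ** 2 + (mv[1] + 3) ** 2
print(seeded_search(cost, start=(4, -2)))  # (5, -3)
```

A good initial position (here one pixel off) lets the refinement converge in a step or two, which is the efficiency argument for predicting the start of the search.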
Abstract:
To generate a charging path for a battery, a method includes: generating simulation data for charging currents based on a battery model indicating an internal state of the battery; generating an initial look-up table (LUT) for the charging currents and preset battery voltage limits based on the simulation data, the initial LUT representing initial charging limit conditions of the battery for stages corresponding to the charging currents; generating a modified LUT by adjusting at least one of the initial charging limit conditions of the initial LUT, in response to the initial LUT failing to satisfy a threshold; determining a final LUT based on the modified LUT, in response to the modified LUT satisfying the threshold; and generating the charging path for the battery based on the final LUT.
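The adjust-until-the-threshold-is-satisfied flow can be sketched minimally. The LUT layout, the threshold check (a total-current budget), and the adjustment rule (reduce the largest stage current) are placeholders, not the battery model or criterion of the abstract.

```python
# Illustrative sketch of the LUT loop: build an initial LUT, and if it
# fails a threshold check, adjust charging limit conditions until it passes.

def build_charging_lut(stage_currents, voltage_limit):
    """Initial LUT: one entry of charging limit conditions per stage."""
    return [{"current": c, "v_limit": voltage_limit} for c in stage_currents]

def satisfies_threshold(lut, max_total_current=10.0):
    """Placeholder check standing in for the real threshold."""
    return sum(e["current"] for e in lut) <= max_total_current

def finalize_lut(lut, step=0.5):
    """Adjust the LUT (here: lower the largest stage current) until the
    threshold is satisfied, then return it as the final LUT."""
    while not satisfies_threshold(lut):
        worst = max(lut, key=lambda e: e["current"])
        worst["current"] -= step
    return lut

lut = build_charging_lut([5.0, 4.0, 3.0], voltage_limit=4.2)
final = finalize_lut(lut)
print(sum(e["current"] for e in final))  # 10.0
```

The charging path would then be read off the final LUT stage by stage; the point of the sketch is only the generate / check / modify / finalize loop.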
Abstract:
An electronic device may operate a plurality of light sources, where each light source operates according to a light source code of a light source code set and each light source code is unique with respect to each other light source code; capture, through an event camera, a glint signal corresponding to light emitted from the plurality of light sources; obtain glint information from event data from the event camera; estimate a corneal sphere center position and an eye rotation center position based on the glint information; and determine three-dimensional (3D) gaze-related information based on the corneal sphere center position and the eye rotation center position.
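One reason for unique per-source codes is that a glint observed in the event stream can be attributed to the light source that produced it. The sketch below shows that attribution step with hypothetical 4-bit on/off codes and a nearest-code (Hamming distance) match; the code set, names, and matching rule are all assumptions.

```python
# Illustrative sketch: identify which light source produced a glint by
# matching its observed on/off pattern against the unique source codes.

CODES = {"led_0": "1010", "led_1": "1100", "led_2": "0110"}  # hypothetical

def identify_source(observed_bits, codes=CODES):
    """Return the source whose code is nearest (Hamming distance) to the
    observed glint on/off pattern."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(codes, key=lambda name: hamming(codes[name], observed_bits))

print(identify_source("1011"))  # led_0 (one bit off its code)
```

Once each glint is labeled with its source, the known source geometry and the labeled glint positions are what an estimator would use to recover the corneal sphere center.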
Abstract:
An apparatus for implicit neural video representation is provided. The apparatus for implicit neural video representation includes: a first neural network configured to output pixel-to-pixel matching information up to a keyframe by using space-time coordinates of a video as input; and a second neural network configured to output Red-Green-Blue (RGB) data by using the space-time coordinates and the output pixel-to-pixel matching information as input.
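The two-stage structure of the representation can be shown as composed functions: the first network maps space-time coordinates to pixel-to-pixel matching information toward a keyframe, and the second maps the coordinates plus that matching information to RGB. The stand-in functions below are not the actual networks; a constant-velocity "flow" and a toy color rule are assumed purely for illustration.

```python
# Illustrative sketch of the two-network pipeline for implicit neural
# video representation. Both networks are replaced by toy functions.

def matching_net(x, y, t):
    """Stand-in for network 1: matching info from (x, y, t) to the
    keyframe, here a simple (t, -t) pixel displacement."""
    return (x + t, y - t)

def color_net(coords, match):
    """Stand-in for network 2: RGB from the coordinates and the matched
    keyframe position."""
    mx, my = match
    return (mx % 256, my % 256, (mx + my) % 256)

def render_pixel(x, y, t):
    match = matching_net(x, y, t)
    return color_net((x, y, t), match)

print(render_pixel(10, 20, 3))  # (13, 17, 30)
```

The design point the sketch preserves is that the second network conditions on the matching output rather than on raw coordinates alone.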
Abstract:
A processor-implemented method includes obtaining a visual association feature indicating an association between a first image frame and a second image frame and a visual appearance feature indicating an appearance of the same object in the first image frame and the second image frame, constructing a visual reprojection constraint based on the visual association feature, constructing a visual feature metric constraint based on the visual appearance feature, and performing localization and mapping based on the visual reprojection constraint and the visual feature metric constraint.
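The core of a visual reprojection constraint is a residual between where a 3D point projects under the current pose and where it was observed. The sketch below uses a pinhole model with a translation-only pose for brevity; intrinsics and names are illustrative, and the abstract's feature-metric constraint (which compares appearance rather than pixel positions) is not shown.

```python
# Illustrative sketch: the reprojection residual that a visual
# reprojection constraint penalizes during localization and mapping.

def project(point3d, pose_t, f=500.0, cx=320.0, cy=240.0):
    """Pinhole projection after a translation-only pose (illustrative)."""
    x, y, z = (p + t for p, t in zip(point3d, pose_t))
    return (f * x / z + cx, f * y / z + cy)

def reprojection_residual(point3d, pose_t, observed_uv):
    """Difference between the predicted and observed pixel position."""
    u, v = project(point3d, pose_t)
    ou, ov = observed_uv
    return (u - ou, v - ov)

# A point observed exactly where the model predicts gives a zero residual.
pt = (1.0, 2.0, 10.0)
obs = project(pt, (0.0, 0.0, 0.0))
print(reprojection_residual(pt, (0.0, 0.0, 0.0), obs))  # (0.0, 0.0)
```

An optimizer would minimize such residuals over many point-observation pairs jointly with the poses, which is the "performing localization and mapping" step.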
Abstract:
An apparatus with video processing includes: one or more processors configured to: generate a syntax element processable by a target standard codec by inputting a quantization parameter, a pre-decoded reference image, and a plurality of frames included in a video to a neural network and compressing the plurality of frames, and generate a bitstream by performing entropy encoding on the syntax element.
Abstract:
A processor-implemented method includes obtaining a first motion matrix corresponding to an extended reality (XR) system and a second motion matrix based on a conversion coefficient from an XR system coordinate system into a rolling shutter (RS) camera coordinate system, and projecting an RS color image of a current frame onto a global shutter (GS) color image coordinate system based on the second motion matrix and generating a GS color image of the current frame, wherein the second motion matrix is a motion matrix at a timestamp of a depth image, captured by a GS camera, that corresponds to a timestamp of a first scanline of the RS color image captured by the RS camera.
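Why the timestamp of the first scanline matters can be seen from how a rolling-shutter image is exposed: each scanline is captured slightly later than the previous one, so per-scanline timestamps (and hence per-scanline motion) are anchored to the first line. The helper below is a minimal sketch of that timing model; the readout interval and integer-microsecond convention are assumptions for illustration.

```python
# Illustrative sketch: per-scanline capture timestamps of an RS image,
# anchored at the first scanline's timestamp (microseconds as integers).

def scanline_timestamps_us(t_first_us, num_lines, line_readout_us):
    """Each RS scanline is read out `line_readout_us` after the previous
    one, starting from the first scanline's timestamp."""
    return [t_first_us + i * line_readout_us for i in range(num_lines)]

print(scanline_timestamps_us(100_000, 4, 30))
# [100000, 100030, 100060, 100090]
```

Warping the RS image to a GS coordinate system then amounts to compensating, per scanline, the motion accumulated between that scanline's timestamp and the reference (depth-image) timestamp via the second motion matrix.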