Abstract:
A system and method for tracking a gaze at a distance are provided. A remote gaze tracking system may include an infrared lighting unit including a plurality of infrared light sources to emit infrared light toward a user, a gaze tracking module to track a position of a face of the user and to collect, from the tracked position of the face, an eye image including at least one reflected light among a plurality of corneal-reflected lights and a lens-reflected light, the corneal-reflected lights being reflected from a cornea by the emitted infrared light and the lens-reflected light being reflected from a lens of glasses, and a processor to compare a magnitude of the lens-reflected light with a threshold in the collected eye image and, when the magnitude of the lens-reflected light is equal to or less than the threshold, to detect coordinates of a center of each of the plurality of corneal-reflected lights and to calculate a gaze position.
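The processor's thresholding step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name, the glint intensity cutoff, and the single-centroid simplification are all assumptions (a real system would label each corneal-reflection blob separately).

```python
import numpy as np

def gaze_step(eye_image, lens_reflection_mag, threshold):
    """Illustrative sketch: proceed only when the lens (glasses) reflection
    is weak enough, then return the centroid of the bright corneal-reflection
    pixels found by simple thresholding."""
    if lens_reflection_mag > threshold:
        return None  # lens glare too strong; skip gaze estimation for this frame
    bright = eye_image >= 200  # assumed intensity cutoff for reflection glints
    ys, xs = np.nonzero(bright)
    if len(xs) == 0:
        return None
    # single-blob centroid for simplicity; the patent detects a center
    # for each of the plurality of corneal-reflected lights
    return (xs.mean(), ys.mean())
```

The centroid then feeds a calibration mapping that converts reflection geometry into an on-screen gaze position.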
Abstract:
A progressive video streaming apparatus and method based on a visual perception are provided, and the progressive video streaming apparatus may include a gaze detector to detect gaze information including at least one of a location of a focus and a viewing angle of a user, a video playback quality determiner to determine video playback quality layers, based on the detected gaze information, a progressive streaming receiver to request video data and receive the video data, using a visual perception priority based on the detected gaze information, and a visual perception-based player to play back the received video data, by controlling an interactive delay to be reduced below a selected criterion, while reducing a visually recognized quality change in the received video data below another selected criterion.
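The visual-perception priority described above can be sketched as ranking video tiles by their distance from the gaze focus, so that tiles inside the viewing angle are requested first and at more quality layers. The tile representation, layer counts, and distance metric here are illustrative assumptions, not the patent's scheme.

```python
import math

def perception_priority(tiles, focus, viewing_angle):
    """Illustrative sketch: order tiles by distance from the gaze focus and
    assign more quality layers to tiles within the viewing angle."""
    def dist(tile):
        cx, cy = tile["center"]
        return math.hypot(cx - focus[0], cy - focus[1])
    ranked = sorted(tiles, key=dist)  # nearest-to-focus tiles are fetched first
    for tile in ranked:
        # assumed layer counts: 3 layers inside the viewing angle, 1 outside
        tile["layers"] = 3 if dist(tile) <= viewing_angle else 1
    return ranked
```

Requesting tiles in this order lets playback start from the foveated region, which is one way to keep the interactive delay low while limiting visible quality changes.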
Abstract:
There is provided a scalable point cloud encoding/decoding method and apparatus. The scalable point cloud decoding method comprises: acquiring an encoded texture image, an encoded geometry image, encoded occupancy map information, and encoded auxiliary patch-info information from a bitstream; acquiring a decoded texture image for each partition using the encoded texture image; reconstructing a geometry image using at least one selected from among the encoded geometry image, the encoded occupancy map information, and the encoded auxiliary patch-info information; and reconstructing a point cloud using the decoded texture images for the respective partitions and the geometry image.
Abstract:
The present invention provides a method of performing a position measurement on the basis of a signal from a global navigation satellite system (GNSS). The method includes: receiving initial position information; estimating an initial value of a multipath error through a measurement of the GNSS signal on the basis of the received initial position information; estimating a multipath error from the initial value of the multipath error on the basis of a change in the measurement; removing the estimated multipath error from the measurement of the GNSS signal; and performing the position measurement on the basis of the GNSS measurement from which the multipath error is removed.
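One way to realize the multipath steps above is to track a slowly varying bias as a smoothed residual between each measurement and its predicted range, then subtract it. This is a hedged sketch only: the residual model, the exponential smoothing, and the constant `alpha` are assumptions, not the patent's estimator.

```python
def remove_multipath(measurements, predicted_ranges, alpha=0.3):
    """Illustrative sketch: estimate a multipath bias from the change in
    the GNSS measurement and remove it from each pseudorange."""
    # initial value of the multipath error from the initial position information
    bias = measurements[0] - predicted_ranges[0]
    corrected = []
    for m, p in zip(measurements, predicted_ranges):
        # update the bias estimate from the current measurement residual
        bias = (1 - alpha) * bias + alpha * (m - p)
        corrected.append(m - bias)  # measurement with multipath error removed
    return corrected
```

With the bias removed, the corrected measurements feed the usual position solution.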
Abstract:
A method of receiving content in a client is provided. The method may include receiving, from a server, a spatial set identifier (ID) corresponding to a tile group including at least one tile, sending, to the server, a request for first content corresponding to metadata, and receiving, from the server, the first content corresponding to the request.
Abstract:
The present invention proposes a method and apparatus for correcting motion in a panorama video captured by a plurality of cameras. The method includes performing global motion estimation to estimate smooth motion trajectories from the panorama video, performing global motion correction to correct the motion of each frame toward the estimated smooth motion trajectories, performing local motion correction to correct the motion of each of the plurality of cameras on the results of the global motion correction, and performing warping on the results of the local motion correction.
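The global estimation and correction steps can be sketched as smoothing the camera trajectory and taking the per-frame offset between the smooth and raw paths as the correction. The moving-average smoother and the 1-D trajectory are illustrative assumptions; the patent does not specify this filter.

```python
def smooth_trajectory(positions, window=3):
    """Illustrative sketch: moving-average smoothing of a per-frame camera
    trajectory, returning the smooth path and per-frame corrections."""
    n = len(positions)
    smoothed = []
    for i in range(n):
        lo = max(0, i - window // 2)
        hi = min(n, i + window // 2 + 1)
        smoothed.append(sum(positions[lo:hi]) / (hi - lo))  # local average
    # correction = how far each frame must move to land on the smooth path
    corrections = [s - p for s, p in zip(smoothed, positions)]
    return smoothed, corrections
```

The corrections would then drive the warping stage, after the per-camera local motion correction.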
Abstract:
A gaze tracking apparatus and method are provided that may calculate a three-dimensional (3D) position of a user using at least two wide angle cameras, may perform panning, tilting, and focusing based on position information of the user and eye region detection information, using a narrow angle camera, may detect pupil center information and corneal reflected light information from an eye image acquired through an operation of the narrow angle camera, and may finally calculate gaze position information from the pupil center information and the corneal reflected light information.
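The final step above, computing gaze from pupil center and corneal reflected light, resembles the standard pupil-center/corneal-reflection (PCCR) approach. The sketch below assumes a linear calibration mapping of the pupil-glint vector; the function name and coefficients are illustrative, and a real system fits the mapping during user calibration.

```python
def gaze_from_pccr(pupil, glint, calib=(1.0, 0.0, 1.0, 0.0)):
    """Illustrative sketch: map the pupil-center minus corneal-reflection
    (glint) vector to a gaze position via assumed linear calibration
    coefficients (ax, bx, ay, by)."""
    ax, bx, ay, by = calib
    dx = pupil[0] - glint[0]  # pupil-glint offset, x
    dy = pupil[1] - glint[1]  # pupil-glint offset, y
    return (ax * dx + bx, ay * dy + by)
```

Using the pupil-glint difference rather than the raw pupil position makes the estimate less sensitive to small head movements, which matters when the narrow angle camera is panning and tilting.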
Abstract:
There is provided a method of decoding point cloud data, the method comprising: decoding a bitstream to obtain attribute information of point cloud data, acquiring attributes of the point cloud data using the decoded attribute information and reconstructing the point cloud data using the obtained attributes, wherein the attribute information contains bit depth information of each attribute of the point cloud data.
Abstract:
Rate adaptation is carried out using bit error rate (BER) to enable effective multimedia transmission. The BER can be estimated using signal strength in a MAC layer and modulation information (FIGS. 7-9), and can be compatibly used in different wireless networks by means of message standardization.
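A BER estimate driven by signal strength and modulation can be sketched with the textbook AWGN formulas; the patent's exact MAC-layer estimator is not given here, so treating the signal strength as an Eb/N0 value and the Q-function expressions below are assumptions.

```python
import math

def q_func(x):
    # Gaussian tail probability Q(x) via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2))

def estimate_ber(ebn0_db, modulation):
    """Illustrative sketch: map an Eb/N0 estimate (from signal strength)
    and the modulation in use to a bit error rate."""
    ebn0 = 10 ** (ebn0_db / 10)  # dB to linear
    if modulation in ("BPSK", "QPSK"):
        # Gray-coded QPSK has the same per-bit BER as BPSK in Eb/N0 terms
        return q_func(math.sqrt(2 * ebn0))
    raise ValueError("unsupported modulation")
```

The sender can then raise or lower its rate whenever the estimated BER crosses a target, which is the rate-adaptation loop the abstract describes.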
Abstract:
A method and apparatus of recognizing a facial expression using a local feature-based adaptive decision tree are provided. A method of recognizing a facial expression by a facial expression recognition apparatus may include splitting a facial region included in an input image into local regions, extracting a facial expression feature from each of the local regions using a preset feature extracting algorithm, and recognizing a facial expression from the input image using the extracted facial expression feature, based on a decision tree generated by repeatedly classifying facial expressions into two classes until one facial expression is contained in one class, and by determining a facial expression feature for each classification.
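The tree-building rule above, splitting into two classes until each class holds one expression, can be sketched as a recursive binary partition. Splitting the set into halves is an assumption for illustration; the patent selects a discriminative facial expression feature at each split rather than splitting by position.

```python
def build_expression_tree(expressions):
    """Illustrative sketch: repeatedly classify expressions into two
    classes until one facial expression is contained in one class."""
    if len(expressions) == 1:
        return expressions[0]  # leaf: exactly one expression per class
    mid = len(expressions) // 2  # assumed split point; the patent picks a feature
    return (build_expression_tree(expressions[:mid]),
            build_expression_tree(expressions[mid:]))
```

At recognition time, the extracted local features steer a walk from the root to a leaf, whose single expression is the result.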