Abstract:
A photographing device includes a photographing unit, an image processor which separates an object from a first photographing image obtained by the photographing unit, a display which displays a background live view obtained by superimposing the separated object on a live view of a background, and a controller which obtains a second photographing image corresponding to the live view of the background when a command to shoot the background is input and generates a composite image based on the separated object and the second photographing image.
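The abstract leaves the object-separation and blending steps unspecified; as a rough, assumed sketch of the final compositing stage only, a soft-mask overlay in Python/NumPy could look like the following (the mask source and all names are hypothetical, not part of the patent):

```python
import numpy as np

def composite(foreground: np.ndarray, mask: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Overlay a separated object (HxWx3) onto a background frame of the same size.

    `mask` is a float array in [0, 1]; 1 marks object pixels. How the mask is
    produced (the patent's object-separation step) is left open here.
    """
    alpha = mask[..., None]  # broadcast to HxWx1
    return (alpha * foreground + (1.0 - alpha) * background).astype(background.dtype)

# Hypothetical usage: the first shot supplies the object and its mask,
# the second shot of the background becomes the compositing target.
h, w = 480, 640
first_shot = np.zeros((h, w, 3), dtype=np.uint8)
object_mask = np.zeros((h, w), dtype=np.float32)
second_shot = np.full((h, w, 3), 128, dtype=np.uint8)
result = composite(first_shot, object_mask, second_shot)
```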
Abstract:
A method for controlling a video segmentation apparatus is provided. The method includes receiving an image corresponding to a frame of a video; estimating a motion of an object to be extracted from the received image; determining a plurality of positions of windows corresponding to the object; adjusting at least one of a size and a spacing of at least one window located at one of the determined positions based on an image characteristic; and extracting the object from the received image based on the at least one window whose size and/or spacing has been adjusted.
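The abstract does not say which image characteristic drives the window adjustment; the sketch below assumes a local-contrast heuristic purely for illustration (the function name, thresholds, and scaling rule are hypothetical):

```python
import numpy as np

def adjust_window_size(image: np.ndarray, center: tuple, base_size: int,
                       min_size: int = 8, max_size: int = 64) -> int:
    """Shrink or grow a segmentation window around `center` using local contrast
    as the image characteristic (an assumed heuristic; the abstract does not
    name the characteristic it uses)."""
    y, x = center
    half = base_size // 2
    patch = image[max(y - half, 0):y + half, max(x - half, 0):x + half]
    contrast = float(patch.std())
    # Assumption: high contrast -> boundary easy to localize -> smaller window.
    scale = 1.0 / (1.0 + contrast / 32.0)
    return int(np.clip(base_size * (0.5 + scale), min_size, max_size))

# Hypothetical windows placed along a predicted object contour.
gray = np.random.randint(0, 256, (240, 320)).astype(np.uint8)
positions = [(60, 80), (60, 120), (100, 160)]
sizes = [adjust_window_size(gray, p, base_size=32) for p in positions]
```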
Abstract:
A master device providing an image to a slave device that provides a virtual reality service is provided. The master device includes: a content input configured to receive an input stereoscopic image; a communicator configured to communicate with the slave device providing the virtual reality service; and a processor configured to determine, in the input stereoscopic image, a viewpoint region corresponding to a motion state of the slave device on the basis of motion information received from the slave device, and to control the communicator to transmit an image of the determined viewpoint region to the slave device.
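How the viewpoint region is located in the stereoscopic image is not specified in the abstract; assuming an equirectangular layout and a fixed field of view (both assumptions), a minimal crop driven by the reported yaw/pitch might look like this sketch:

```python
import numpy as np

def viewpoint_region(frame: np.ndarray, yaw_deg: float, pitch_deg: float,
                     fov_deg: float = 90.0) -> np.ndarray:
    """Crop the viewpoint region for a reported head pose from one eye of an
    equirectangular frame (HxWx3). The layout and 90-degree field of view are
    assumptions; the abstract only says the region follows the slave device's
    motion state."""
    h, w = frame.shape[:2]
    # Map yaw in [-180, 180] and pitch in [-90, 90] to pixel coordinates.
    cx = int((yaw_deg + 180.0) / 360.0 * w) % w
    cy = int((90.0 - pitch_deg) / 180.0 * h)
    rw = int(fov_deg / 360.0 * w)
    rh = int(fov_deg / 180.0 * h)
    xs = (np.arange(cx - rw // 2, cx + rw // 2) % w)   # wrap horizontally
    ys = np.clip(np.arange(cy - rh // 2, cy + rh // 2), 0, h - 1)
    return frame[np.ix_(ys, xs)]

# Hypothetical use: motion info {yaw, pitch} received from the slave device.
stereo_left = np.zeros((1024, 2048, 3), dtype=np.uint8)
region = viewpoint_region(stereo_left, yaw_deg=30.0, pitch_deg=-10.0)
```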
Abstract:
A wearable device that is configured to be worn on a body of a user and a control method thereof are provided. The wearable device includes an image projector configured to project a virtual user interface (UI) screen, a camera configured to capture an image, and a processor configured to detect a target area from the image captured by the camera, control the image projector to project the virtual UI screen, which corresponds to at least one of a shape and a size of the target area, onto the target area, and perform a function corresponding to a user interaction that is input through the virtual UI screen.
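Neither the detection of the target area nor the projection pipeline is described in the abstract; the sketch below only illustrates, under assumed names, fitting a UI bitmap to a detected target size and mapping an interaction point to a function:

```python
import numpy as np
from typing import Dict, Optional, Tuple

def fit_ui_to_target(ui: np.ndarray, target_w: int, target_h: int) -> np.ndarray:
    """Resample a virtual UI bitmap to the size of the detected target area
    (nearest-neighbour; purely illustrative of the 'corresponds to shape and
    size' step)."""
    h, w = ui.shape[:2]
    row_idx = np.arange(target_h) * h // target_h
    col_idx = np.arange(target_w) * w // target_w
    return ui[row_idx][:, col_idx]

def hit_test(touch_xy: Tuple[int, int],
             buttons: Dict[str, Tuple[int, int, int, int]]) -> Optional[str]:
    """Map a user interaction point on the projected UI to a function name.
    The button layout here is hypothetical."""
    x, y = touch_xy
    for name, (x0, y0, x1, y1) in buttons.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

ui_bitmap = np.zeros((200, 300, 3), dtype=np.uint8)
projected = fit_ui_to_target(ui_bitmap, target_w=150, target_h=100)
action = hit_test((40, 30), {"play": (0, 0, 75, 50), "stop": (75, 0, 150, 50)})
```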
Abstract:
Provided is a method for transmitting data about an omnidirectional image by a server. The method comprises the steps of: receiving, from a terminal, information about a viewport of the terminal; selecting, on the basis of information about the viewport and the respective qualities of a plurality of tracks associated with the omnidirectional image, at least one track among the plurality of tracks; and transmitting data about the selected at least one track to the terminal.
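Assuming the tracks are per-tile variants of the omnidirectional image at different qualities (a simplification not stated in the abstract), the selection step could be sketched as follows, with all field names hypothetical:

```python
from typing import Dict, List

def select_tracks(viewport_tiles: List[int], tracks: List[Dict]) -> List[Dict]:
    """Pick, per spatial tile, the highest-quality track inside the viewport and
    a lower-quality track elsewhere (an assumed policy based on the abstract's
    'viewport and respective qualities' criterion)."""
    selected = []
    tiles = {t["tile"] for t in tracks}
    for tile in sorted(tiles):
        candidates = [t for t in tracks if t["tile"] == tile]
        if tile in viewport_tiles:
            selected.append(max(candidates, key=lambda t: t["quality"]))
        else:
            selected.append(min(candidates, key=lambda t: t["quality"]))
    return selected

# Hypothetical catalogue: two quality versions per spatial tile.
catalogue = [
    {"tile": 0, "quality": 1080, "url": "tile0_hq.mp4"},
    {"tile": 0, "quality": 360,  "url": "tile0_lq.mp4"},
    {"tile": 1, "quality": 1080, "url": "tile1_hq.mp4"},
    {"tile": 1, "quality": 360,  "url": "tile1_lq.mp4"},
]
to_send = select_tracks(viewport_tiles=[0], tracks=catalogue)
```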
Abstract:
Methods and apparatuses are provided for a server to transmit information about an omni-directional image based on user motion information. Motion parameters are received from an apparatus worn by a user for displaying an omni-directional image. User motion information is generated based on the received motion parameters. First packing information corresponding to a user position is generated based on the user motion information. Second packing information corresponding to a position in close proximity to the user position is generated based on the user motion information. Third packing information is generated based on the first packing information and the second packing information. At least one of the first packing information, the second packing information, and the third packing information is transmitted to the apparatus.
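The packing format itself is not defined in the abstract; the sketch below uses purely illustrative fields and assumes the "position in close proximity" is extrapolated from the user's motion rate:

```python
from typing import Dict

def packing_info(yaw_deg: float, pitch_deg: float, quality: str) -> Dict:
    """Describe how the omni-directional image is packed for one viewing
    position. The fields are illustrative; the abstract does not define them."""
    return {"center_yaw": yaw_deg, "center_pitch": pitch_deg, "quality": quality}

def build_packings(user_yaw: float, user_pitch: float, motion_yaw_rate: float) -> Dict:
    """First packing follows the reported user position; second packing covers a
    nearby position (assumed: extrapolated along the yaw rate); third packing is
    built from the first two."""
    first = packing_info(user_yaw, user_pitch, quality="high")
    second = packing_info(user_yaw + motion_yaw_rate, user_pitch, quality="medium")
    third = {"regions": [first, second]}
    return {"first": first, "second": second, "third": third}

packings = build_packings(user_yaw=15.0, user_pitch=0.0, motion_yaw_rate=10.0)
```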
Abstract:
A three-dimensional (3D) image conversion apparatus for converting a two-dimensional (2D) image into a 3D image and a method for controlling the 3D image conversion apparatus are provided. The method includes displaying the 2D image to be converted into the 3D image, receiving a user input designating at least one object included in the 2D image, obtaining boundaries of the at least one object included in the 2D image based on the received user input to identify each of the at least one object, analyzing the 2D image including the at least one object to obtain depth information of each of the at least one object, and arranging each of the identified at least one object based on the obtained depth information to generate the 3D image.
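The abstract does not describe how the depth-ordered arrangement yields the 3D image; one common simplification (assumed here, not stated in the patent) is to shift each designated object horizontally by a depth-dependent disparity to form a stereo pair:

```python
import numpy as np

def shift_object(canvas: np.ndarray, obj: np.ndarray, mask: np.ndarray, disparity: int) -> None:
    """Paste an extracted object into `canvas`, shifted horizontally by
    `disparity` pixels (an assumed stand-in for the abstract's 'arranging'
    step)."""
    shifted = np.roll(obj, disparity, axis=1)
    shifted_mask = np.roll(mask, disparity, axis=1).astype(bool)
    canvas[shifted_mask] = shifted[shifted_mask]

def make_stereo_pair(image: np.ndarray, masks: list, depths: list, max_disp: int = 8):
    """Build left/right views: objects with a larger (nearer) depth value get a
    larger horizontal offset between the two views."""
    left, right = image.copy(), image.copy()
    for mask, depth in zip(masks, depths):
        disp = int(round(depth * max_disp))
        shift_object(left, image, mask, +disp)
        shift_object(right, image, mask, -disp)
    return left, right

# Hypothetical single-object example: mask and depth would come from the
# user-designated boundaries and the depth analysis described in the abstract.
frame = np.zeros((240, 320, 3), dtype=np.uint8)
person_mask = np.zeros((240, 320), dtype=np.uint8)
person_mask[80:200, 120:200] = 1
left_view, right_view = make_stereo_pair(frame, [person_mask], [0.8])
```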