Abstract:
An electronic device and method for capturing an image are disclosed. The electronic device includes an image sensor configured to capture images, a location sensor configured to detect a location of the electronic device, and a processor. The processor may execute the method, which includes capturing a first image and detecting a first location where the first image is captured, detecting a second location at which a second image is to be captured, generating guidance information for travel to the second location, and, when a present location is within a predefined range of the second location, automatically capturing the second image.
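For illustration only, a minimal Kotlin sketch of the proximity check that could trigger the automatic second capture: the GeoLocation type, the haversine-based distanceMeters helper, and the 20 m range are hypothetical placeholders, not the claimed device logic.

    import kotlin.math.*

    data class GeoLocation(val latitudeDeg: Double, val longitudeDeg: Double)

    // Great-circle distance in meters between two coordinates (haversine formula).
    fun distanceMeters(a: GeoLocation, b: GeoLocation): Double {
        val r = 6_371_000.0 // mean Earth radius in meters
        val dLat = Math.toRadians(b.latitudeDeg - a.latitudeDeg)
        val dLon = Math.toRadians(b.longitudeDeg - a.longitudeDeg)
        val h = sin(dLat / 2).pow(2) +
                cos(Math.toRadians(a.latitudeDeg)) * cos(Math.toRadians(b.latitudeDeg)) * sin(dLon / 2).pow(2)
        return 2 * r * asin(sqrt(h))
    }

    // True when the present location falls within the predefined range of the
    // second (target) capture location, i.e. when the second image should be
    // captured automatically.
    fun withinCaptureRange(present: GeoLocation, target: GeoLocation, rangeMeters: Double): Boolean =
        distanceMeters(present, target) <= rangeMeters

    fun main() {
        val second = GeoLocation(37.5665, 126.9780)   // hypothetical second capture location
        val present = GeoLocation(37.5666, 126.9781)  // hypothetical present location
        if (withinCaptureRange(present, second, rangeMeters = 20.0)) {
            println("Within range: capture second image automatically")
        } else {
            println("Keep guiding the user toward the second location")
        }
    }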
Abstract:
A method and an apparatus for producing and reproducing Augmented Reality (AR) contents in a mobile terminal are provided. In the method, contents are produced. An image including an object corresponding to the contents is recognized. Recognition information for the object corresponding to the contents is obtained based on the recognition result. AR contents including the contents and the recognition information are generated. Therefore, AR contents for an input image may be easily produced and reproduced, and the AR contents may be used as independent multimedia contents rather than as an auxiliary means for other contents.
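A minimal Kotlin sketch of how produced contents and recognition information might be bundled and later matched during reproduction; RecognitionInfo, ARContent, and the string identifiers are illustrative assumptions rather than the disclosed data format.

    // Hypothetical, simplified stand-ins for the entities the abstract names.
    data class RecognitionInfo(val objectId: String, val features: List<Double>)
    data class ARContent(val contentUri: String, val recognition: RecognitionInfo)

    // Producing AR content: pair user-produced content with recognition
    // information obtained for the object in the input image.
    fun produceArContent(contentUri: String, recognition: RecognitionInfo): ARContent =
        ARContent(contentUri, recognition)

    // Reproducing AR content: when the recognizer finds the same object again,
    // return the matching content to render; otherwise nothing is shown.
    fun reproduceArContent(recognizedObjectId: String, library: List<ARContent>): ARContent? =
        library.firstOrNull { it.recognition.objectId == recognizedObjectId }

    fun main() {
        val info = RecognitionInfo("poster-42", listOf(0.12, 0.87, 0.33)) // placeholder feature vector
        val produced = produceArContent("content://notes/1", info)
        val match = reproduceArContent("poster-42", listOf(produced))
        println(match?.contentUri ?: "no AR content for this object")
    }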
Abstract:
An apparatus and a method for displaying images in an electronic device are provided. The electronic device includes a processor that obtains an image and a depth map corresponding to the image, separates the image into one or more areas based on the depth map, applies, to at least one of the separated areas, an effect that differs from the effect applied to at least one of the other areas, and connects the areas to which the different effects have been applied into a single image, and a display that displays the single image.
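A minimal Kotlin sketch of depth-based area separation followed by a per-area effect and recombination, assuming grayscale images held as 2D arrays and a simple darkening effect for the far area; the threshold, gain, and array representation are illustrative, not the claimed processing.

    // Grayscale image and depth map as 2D arrays; values are placeholders for real pixel buffers.
    typealias Plane = Array<DoubleArray>

    // Split pixel positions into "near" and "far" areas using a depth threshold.
    fun separateByDepth(depth: Plane, threshold: Double): Array<BooleanArray> =
        Array(depth.size) { y -> BooleanArray(depth[y].size) { x -> depth[y][x] < threshold } }

    // Example effect: darken far-area pixels while leaving near-area pixels untouched,
    // then reconnect both areas into a single output image.
    fun applyDepthEffect(image: Plane, nearMask: Array<BooleanArray>, farGain: Double): Plane =
        Array(image.size) { y ->
            DoubleArray(image[y].size) { x ->
                if (nearMask[y][x]) image[y][x] else image[y][x] * farGain
            }
        }

    fun main() {
        val image: Plane = arrayOf(doubleArrayOf(0.9, 0.8), doubleArrayOf(0.7, 0.6))
        val depth: Plane = arrayOf(doubleArrayOf(1.0, 4.0), doubleArrayOf(1.5, 5.0)) // depths in meters (assumed)
        val nearMask = separateByDepth(depth, threshold = 2.0)
        val output = applyDepthEffect(image, nearMask, farGain = 0.5)
        println(output.joinToString(" | ") { it.joinToString() })
    }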
Abstract:
Provided are an electronic device and a method for providing three-dimensional (3D) map processing and a 3D map service. The electronic device includes a memory configured to store an image set and a map platform module that is functionally connected with the memory and is implemented with a processor. The map platform module is configured to obtain an image set comprising a plurality of images of a path in an external space surrounding the electronic device, to determine, from at least one of the plurality of images, an area corresponding to an object included in the external space, to obtain information about the object based on whether the object is configured to communicatively connect with the electronic device, and to display the information in association with the area through a display functionally connected with the electronic device. Other embodiments are also possible.
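A minimal Kotlin sketch of attaching object information to an image area depending on whether the object is communicatively connectable; ObjectArea, ObjectAnnotation, and the simulated connectivity check are hypothetical stand-ins for the map platform module's behavior.

    data class ObjectArea(val imageIndex: Int, val x: Int, val y: Int, val width: Int, val height: Int)
    data class ObjectAnnotation(val area: ObjectArea, val label: String)

    // Decide what to display for an object area depending on whether the object is a
    // connectable (e.g. controllable) device; the connectivity check is simulated here.
    fun annotate(area: ObjectArea, objectName: String, isConnectable: (String) -> Boolean): ObjectAnnotation {
        val label = if (isConnectable(objectName)) {
            "$objectName: connectable device, controls available"
        } else {
            "$objectName: no device connection"
        }
        return ObjectAnnotation(area, label)
    }

    fun main() {
        val area = ObjectArea(imageIndex = 3, x = 120, y = 80, width = 64, height = 48)
        // Hypothetical connectivity check; a real device would probe the network or a device registry.
        val annotation = annotate(area, "living-room lamp") { name -> name.contains("lamp") }
        println("Show \"${annotation.label}\" over ${annotation.area} on the 3D map view")
    }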
Abstract:
An electronic device for providing map information associated with a space of interest is provided. The electronic device includes a display and a processor configured to display, on the display, at least a portion of a map including at least one node associated with at least one image photographed at a corresponding position in the space of interest and with additional information on the at least one image, to change, in response to an input or an event, a first image associated with a first node among the at least one node or first additional information on the first image, and to display, on the map through the display, at least a portion of the changed first image or at least a portion of the changed first additional information.
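A minimal Kotlin sketch of updating a node's image or additional information in response to an input or event before the map is redisplayed; MapNode, updateNode, and the file names are illustrative assumptions, not the disclosed structure.

    data class MapNode(val id: Int, val position: Pair<Double, Double>, val imageUri: String, val info: String)

    // Replace the image and/or additional information of the node matching nodeId,
    // leaving every other node unchanged; the caller then redraws the map portion.
    fun updateNode(nodes: List<MapNode>, nodeId: Int, newImageUri: String? = null, newInfo: String? = null): List<MapNode> =
        nodes.map { node ->
            if (node.id == nodeId)
                node.copy(imageUri = newImageUri ?: node.imageUri, info = newInfo ?: node.info)
            else node
        }

    fun main() {
        val map = listOf(
            MapNode(1, 0.0 to 0.0, "img/entrance.jpg", "Entrance"),
            MapNode(2, 3.5 to 1.2, "img/hallway.jpg", "Hallway")
        )
        // An input or event (e.g. retaking the photo at node 1) triggers the change.
        val updated = updateNode(map, nodeId = 1, newImageUri = "img/entrance_v2.jpg", newInfo = "Entrance (retaken)")
        updated.forEach { println("${it.id}: ${it.imageUri} / ${it.info}") }
    }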
Abstract:
A three-dimensional (3D) image conversion apparatus for converting a two-dimensional (2D) image into a 3D image, and a method for controlling the 3D image conversion apparatus, are provided. The method includes displaying the 2D image to be converted into the 3D image, receiving a user input designating at least one object included in the 2D image, obtaining boundaries of the at least one object included in the 2D image based on the received user input to identify each of the at least one object, analyzing the 2D image including the at least one object to obtain depth information of each of the at least one object, and arranging each of the identified at least one object based on the obtained depth information to generate the 3D image.
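A minimal Kotlin sketch of one way the identified objects could be arranged by depth, here by ordering the 2D layers back to front and assigning each a horizontal disparity for a stereoscopic view; Bounds, DesignatedObject, and the disparity formula are assumptions for illustration, not the claimed conversion method.

    data class Bounds(val x: Int, val y: Int, val width: Int, val height: Int)
    data class DesignatedObject(val name: String, val bounds: Bounds, val depth: Double)

    // Order user-designated objects back to front and assign each a horizontal
    // disparity (in pixels) that shrinks with depth, which is one simple way to
    // place 2D layers in a stereoscopic 3D scene.
    fun arrangeForStereo(objects: List<DesignatedObject>, maxDisparityPx: Double): List<Pair<DesignatedObject, Double>> {
        val farthest = objects.maxOf { it.depth } // assumes at least one designated object
        return objects
            .sortedByDescending { it.depth }                              // render far layers first
            .map { it to maxDisparityPx * (1.0 - it.depth / farthest) }   // nearer object -> larger disparity
    }

    fun main() {
        val objects = listOf(
            DesignatedObject("person", Bounds(200, 120, 80, 200), depth = 2.0),
            DesignatedObject("tree", Bounds(40, 60, 120, 260), depth = 8.0)
        )
        arrangeForStereo(objects, maxDisparityPx = 24.0).forEach { (obj, disparity) ->
            println("${obj.name}: horizontal disparity ${"%.1f".format(disparity)} px")
        }
    }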