Abstract:
An apparatus and method for controlling a plurality of terminals using gesture recognition are provided. The method includes recognizing a gesture by using an action sensor; verifying an angle of a paired second terminal; deciding an input signal by combining the gesture and the angle; and controlling a first terminal in response to the decided input signal.
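The decision step above (combining a recognized gesture with the paired terminal's angle into one input signal) can be sketched as follows. This is a minimal illustrative sketch only; the gesture names, the 45-degree threshold, and the command table are hypothetical examples, not taken from the patent.

```python
def decide_input_signal(gesture: str, angle_deg: float) -> str:
    """Combine a recognized gesture with the paired second terminal's angle.

    The mapping below is a hypothetical example of how one (gesture, angle)
    pair could select a command for the first terminal.
    """
    tilted = angle_deg >= 45.0  # hypothetical angle threshold
    command_table = {
        ("swipe", False): "NEXT_CHANNEL",
        ("swipe", True): "VOLUME_UP",
        ("tap", False): "SELECT",
        ("tap", True): "MUTE",
    }
    # Unrecognized combinations produce no control signal.
    return command_table.get((gesture, tilted), "IGNORE")
```

The first terminal would then be driven by whatever signal this function decides, e.g. `decide_input_signal("swipe", 10.0)` selects the flat-orientation swipe command.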
Abstract:
A method of processing a document by an electronic device is provided. The method includes displaying, by a display unit, a document, detecting selected areas in the displayed document, extracting information from the detected selected areas, generating the extracted information as lists, and storing the lists together with link information of documents where the lists are located, wherein the lists are stored as one document list.
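The extract-and-link flow above can be sketched in a few lines. The `doc://` link scheme and the `ListEntry` structure are hypothetical illustrations of "lists stored together with link information", not the patent's actual data format.

```python
from dataclasses import dataclass


@dataclass
class ListEntry:
    text: str  # information extracted from a selected area
    link: str  # link information back to the document where it is located


def build_document_list(document_id: str, selected_areas: list[str]) -> list[ListEntry]:
    # Each detected selected area becomes one list entry, stored together
    # with link information identifying its source location.
    return [
        ListEntry(text=area, link=f"doc://{document_id}#area{i}")
        for i, area in enumerate(selected_areas)
    ]
```

All entries produced this way can then be persisted as one document list, per the abstract.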
Abstract:
A wearable electronic device is provided. The wearable electronic device includes a camera, a display, a microphone, a sensor, memory storing one or more computer programs, and one or more processors communicatively coupled with the camera, the display, the microphone, the sensor, and the memory, wherein the one or more computer programs include computer-executable instructions that, when executed by the one or more processors individually or collectively, cause the wearable electronic device to determine an environment level indicating a degree of danger of a surrounding environment for a user of the wearable electronic device based on an image obtained through the camera, determine whether to use the sensor, the camera, and the microphone depending on the determined environment level, obtain obstacle information and an estimated impulse between an obstacle and the user based on data input through at least one of the sensor, the camera, or the microphone that is determined to be used, determine a user interface (UI) level for displaying danger of collision between the obstacle and the user based on the environment level, the obstacle information, and the estimated impulse, generate a graphic entity indicating a direction and a danger degree of the obstacle depending on the UI level, and display an obstacle-related UI including the graphic entity on the display.
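The two decision stages described above (which inputs to enable per environment level, and how to grade the collision-warning UI) can be sketched as below. The level ranges, distance cutoff, and impulse threshold are hypothetical placeholders; the patent does not specify numeric values.

```python
def sensors_for_level(env_level: int) -> set[str]:
    # Hypothetical policy: a more dangerous environment enables more inputs.
    if env_level >= 2:
        return {"sensor", "camera", "microphone"}
    if env_level == 1:
        return {"sensor", "camera"}
    return {"camera"}


def ui_level(env_level: int, obstacle_distance_m: float, estimated_impulse: float) -> int:
    # Hypothetical scoring that combines the environment level, obstacle
    # information (here, distance), and the estimated impulse, capped at 3.
    score = env_level
    if obstacle_distance_m < 2.0:   # assumed proximity cutoff
        score += 1
    if estimated_impulse > 10.0:    # assumed impulse threshold
        score += 1
    return min(score, 3)
```

A graphic entity showing the obstacle's direction and danger degree would then be rendered according to the returned UI level.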
Abstract:
A wearable device includes memory storing instructions, a camera, and at least one processor. The instructions, when executed by the at least one processor individually or collectively, cause the wearable device to obtain at least one first image having a first attribute for tracking of a body portion of a user and at least one second image having a second attribute different from the first attribute for tracking of an external electronic device, to obtain first feature values for tracking of the body portion from the at least one first image and second feature values for tracking of the external electronic device from the at least one second image, to change a mode of the wearable device from a first mode to a second mode based on the first feature values and the second feature values, and to obtain, in the second mode, other images.
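The mode change driven by the two sets of feature values might look like the sketch below. Treating the feature values as confidences and averaging them is an assumption for illustration; the thresholds are likewise hypothetical.

```python
def should_enter_second_mode(
    first_feature_values: list[float],   # from body-portion tracking images
    second_feature_values: list[float],  # from external-device tracking images
    body_threshold: float = 0.5,
    device_threshold: float = 0.5,
) -> bool:
    # Hypothetical rule: switch modes when body tracking confidence drops
    # while the external electronic device is tracked reliably.
    body_conf = sum(first_feature_values) / len(first_feature_values)
    device_conf = sum(second_feature_values) / len(second_feature_values)
    return body_conf < body_threshold and device_conf >= device_threshold
```

In the second mode, the device would then acquire further images with attributes appropriate to that mode.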
Abstract:
An electronic device and method are disclosed. The electronic device includes first and second cameras, a display, a memory, and a processor. The processor implements the method, including acquiring image data of an external environment via the first camera, detecting a plurality of objects included in the image data, identifying a first object corresponding to a detected gaze of a user among the detected plurality of objects, configuring a first precision for spatial mapping of the identified first object and a second precision for at least one other object from among the detected plurality of objects, wherein the first precision is higher than the second precision, executing 3D spatial mapping on the image data using the first precision for the identified first object and the second precision for the at least one other object, and displaying a 3D space generated based on the image data and the spatial mapping.
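The per-object precision configuration can be sketched as a simple mapping. The numeric precision values are hypothetical stand-ins; the abstract only requires that the gazed-at object's precision be higher than the others'.

```python
def configure_precisions(
    detected_objects: list[str],
    gazed_object: str,
    first_precision: float = 1.0,   # assumed high-precision value
    second_precision: float = 0.25, # assumed low-precision value
) -> dict[str, float]:
    # The object matching the detected gaze gets the higher spatial-mapping
    # precision; every other detected object gets the lower one.
    return {
        obj: (first_precision if obj == gazed_object else second_precision)
        for obj in detected_objects
    }
```

A 3D spatial-mapping pass would then consume this map, spending mesh detail where the user is actually looking.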
Abstract:
An electronic device is provided that includes a display and a processor that executes a first application based on a first language, displays a first execution screen corresponding to the first application, wherein a content, which changes over time, is displayed in a first area of the first execution screen, executes a second application in response to receiving a first user input, translates a text included in the first execution screen from the first language to a second language using the second application and displays the translation, in a state in which the second application is executed and in response to the content in the first area being changed, extracts the text included in the changed content, translates the extracted text from the first language to the second language using the second application, and displays a second execution screen corresponding to the first execution screen based on the second language.
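The re-translation behavior for the time-varying first area can be sketched as a small cache-and-refresh loop. The class name and the caching-by-equality strategy are illustrative assumptions, not the patent's implementation.

```python
class TranslatingOverlay:
    """Re-translates the first area's content only when it changes."""

    def __init__(self, translate):
        # `translate` stands in for the second application's
        # first-language-to-second-language translation step.
        self.translate = translate
        self.last_content = None
        self.displayed = None

    def on_content_change(self, content: str) -> str:
        # Extract and re-translate the text only if the content in the
        # first area actually changed; otherwise keep the displayed text.
        if content != self.last_content:
            self.last_content = content
            self.displayed = self.translate(content)
        return self.displayed
```

For example, with `str.upper` standing in for a real translator, repeated identical content does not trigger a second translation pass.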
Abstract:
An electronic device is provided. The electronic device includes a display, a communication module, a memory, and at least one processor configured to be operatively connected to the display, the communication module, and the memory. The at least one processor may be configured to detect an occurrence of an event for executing a remaining fingerprint theft prevention service. The at least one processor may be configured to receive remaining fingerprint candidate group data corresponding to a fingerprint candidate area. The at least one processor may be configured to determine authentication validity of the fingerprint candidate area. The at least one processor may be configured to transmit a security level obtained by evaluating a security risk of remaining fingerprints on the display. The at least one processor may be configured to output security guidance information on the remaining fingerprints on the display.
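One way the security level could be derived from the candidate-area validity checks is sketched below. The three-tier grading and the count thresholds are hypothetical; the abstract only states that a security level is obtained by evaluating the risk of remaining fingerprints.

```python
def security_level(candidate_areas: list[dict]) -> str:
    # Hypothetical grading: the more candidate areas that would pass
    # authentication, the higher the theft risk of the remaining fingerprints.
    valid = [area for area in candidate_areas if area.get("auth_valid")]
    if len(valid) >= 2:
        return "HIGH_RISK"
    if len(valid) == 1:
        return "MEDIUM_RISK"
    return "LOW_RISK"
```

The device would then transmit this level and output matching guidance, e.g. prompting the user to wipe the display.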
Abstract:
An electronic device includes a processor configured to: store a masking review image and store an image after the masking review image is stored, identify a first visual object of the image, store the identified first visual object of the image, identify a second visual object in the masking review image. The second visual object corresponds to the identified first visual object. The processor is further configured to: store the identified second visual object, perform a masking on the image based on first data corresponding to the first visual object, perform a masking on the masking review image based on second data corresponding to the second visual object, and encode the masking review image based on a remaining capacity of a storage device. The remaining capacity of the storage device is smaller than a designated reference capacity.
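The capacity-conditioned encoding step can be sketched as follows. Using `zlib` as the codec and 512 MB as the designated reference capacity are illustrative assumptions; the patent names neither.

```python
import zlib


def store_masking_review_image(
    masked_review_image: bytes,
    remaining_mb: float,
    reference_mb: float = 512.0,  # assumed designated reference capacity
) -> tuple[bytes, bool]:
    # Encode the (already masked) review image only when the storage
    # device's remaining capacity is below the designated reference;
    # zlib stands in for whatever codec the device actually uses.
    if remaining_mb < reference_mb:
        return zlib.compress(masked_review_image), True
    return masked_review_image, False
```

When capacity is plentiful, the masked review image is stored as-is; the boolean flag records whether encoding happened.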
Abstract:
A spin coater may include a spin chuck, a nozzle, a first temperature controller and a second temperature controller. The spin chuck may be configured to make contact with a central portion of a lower surface of a substrate and may be configured to rotate the substrate when photoresist is on the substrate. The nozzle may be arranged over a central portion of the spin chuck and configured to provide a central portion of an upper surface of the substrate with photoresist. The first temperature controller may be configured to control a temperature in a first region of the spin chuck. The second temperature controller may be configured to control a temperature in a second region of the spin chuck.
Abstract:
An electronic device is provided. The electronic device includes a camera module, a first display, a second display, and at least one processor. The at least one processor may be configured to display a first screen and a second screen for displaying at least one function of a camera application on the first display facing in a first direction and the second display facing in a second direction opposite to the first direction, respectively, wherein at least one of the first screen or the second screen includes an image obtained through the camera module, identify a first input for selecting a first function of the camera application through the first screen, identify a second input related to a subject's gaze or gesture in a state in which the first function of the camera application is selected through the first screen, and display, on the second screen, in a state in which the first function of the camera application is selected through the first screen, an indication indicating that selection of the first function by the second input related to the subject's gaze or gesture is restricted.