Abstract:
A multi-gaze recognition device according to one embodiment of the present specification includes a display module configured to display a display area containing visual content capable of being scrolled, an input module configured to receive an input of a touch gesture signal for scrolling the visual content, an image acquisition module configured to acquire an image in front of the device, and an analysis module configured to scroll the visual content using a first gaze and a second gaze detected from the image acquired in front of the device. The first gaze determines a maximum amount of a scroll area for scrolling the visual content, and the second gaze determines whether an event initiating the scroll occurs.
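A minimal sketch of the two-gaze scroll logic described above, assuming hypothetical normalized gaze coordinates already extracted from the front-facing image; the function name, threshold, and mapping are illustrative and not taken from the specification.

    # Illustrative sketch: the first gaze fixes the maximum scrollable amount,
    # the second gaze decides whether the scroll event fires.
    # All names and threshold values below are hypothetical.

    def compute_scroll(first_gaze_y, second_gaze_y, content_height, viewport_height):
        # First gaze: map vertical gaze position (0.0 top .. 1.0 bottom) to the
        # maximum amount of the scroll area that may be traversed.
        max_scroll = (content_height - viewport_height) * min(max(first_gaze_y, 0.0), 1.0)

        # Second gaze: treat a gaze near the bottom edge as the event that
        # initiates the scroll.
        scroll_initiated = second_gaze_y > 0.9

        return max_scroll if scroll_initiated else 0.0

    # Example: first gaze halfway down the content, second gaze at the bottom edge.
    print(compute_scroll(0.5, 0.95, content_height=4000, viewport_height=800))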
Abstract:
Discussed are a wearable display device and a method for controlling an augmented reality layer. The wearable display device may include a camera unit configured to capture an image of a user's face, a sensor unit configured to sense whether or not the user is turning his or her head, and a controller configured to move a virtual object belonging to a layer being gazed upon by the user's eye-gaze when at least one of a turning of the user's head and a movement in the user's eye-gaze is identified based upon the image of the user's face captured by the camera unit and information sensed by the sensor unit, and when the user's eye-gaze is gazing upon any one of a first virtual object belonging to a first layer and a second virtual object belonging to a second layer.
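A hedged sketch of the controller's decision described above, assuming boolean signals from the camera and sensor units and a simple set of layer objects; the names are illustrative only.

    # Hypothetical sketch: move the virtual object only when (a) a head turn or
    # eye-gaze movement is detected and (b) the gaze rests on an object
    # belonging to the first or second layer.

    def should_move_object(head_turning, gaze_moving, gazed_object, layer_objects):
        motion_detected = head_turning or gaze_moving
        gaze_on_layer_object = gazed_object in layer_objects
        return motion_detected and gaze_on_layer_object

    layer_objects = {"first_layer_object", "second_layer_object"}
    print(should_move_object(True, False, "first_layer_object", layer_objects))   # True
    print(should_move_object(False, False, "second_layer_object", layer_objects)) # False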
Abstract:
Disclosed is a method of controlling a display device including displaying a digital image, detecting a first control input to rotate the digital image, generating a first tactile feedback in a first part of a touch region and a second tactile feedback in a second part of the touch region when rotation of the digital image begins, changing at least one of the first and second tactile feedbacks according to a first rate if a rotation angle of the digital image is a first angle, and changing at least one of the first and second tactile feedbacks according to a second rate if the rotation angle of the digital image is a second angle. The first rate and the second rate are each a rate of the second tactile feedback in relation to the first tactile feedback, and the second rate is greater than the first rate.
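An illustrative mapping from rotation angle to the relative feedback rate described above; the angle thresholds and rate values are assumptions chosen only to show the relationship (second rate greater than the first).

    # Hypothetical values: the rate expresses the second tactile feedback
    # relative to the first tactile feedback.
    FIRST_ANGLE, SECOND_ANGLE = 45.0, 90.0
    FIRST_RATE, SECOND_RATE = 1.5, 3.0   # second rate is greater than the first

    def feedback_rate(rotation_angle_deg):
        if rotation_angle_deg >= SECOND_ANGLE:
            return SECOND_RATE
        if rotation_angle_deg >= FIRST_ANGLE:
            return FIRST_RATE
        return 1.0  # feedbacks unchanged below the first angle

    for angle in (30, 45, 90):
        print(angle, feedback_rate(angle))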
Abstract:
Disclosed herein are a head-mounted display and a method of controlling the same, and more particularly, a method of performing rotation compensation on a captured image based on an angle of rotation of a user wearing the head-mounted display and an angle of rotation of a camera detached from the head-mounted display.
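A minimal sketch of such rotation compensation; treating the compensation angle as the difference between the user's rotation and the detached camera's rotation is an assumption made here purely for illustration, using Pillow's image rotation.

    from PIL import Image

    def compensate(image, user_angle_deg, camera_angle_deg):
        # Assumed relation: compensate by the difference between the two angles.
        compensation = user_angle_deg - camera_angle_deg
        return image.rotate(compensation, expand=True)

    img = Image.new("RGB", (640, 480))
    corrected = compensate(img, user_angle_deg=20.0, camera_angle_deg=5.0)
    print(corrected.size)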
Abstract:
Disclosed herein are a head-mounted display and a method of controlling the same, and more particularly, a method of displaying an image preview interface based on the position of a camera unit when the camera unit mounted on the head-mounted display is detached.
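A hypothetical sketch of the positioning rule described above: when the camera unit is detached, the preview interface is anchored at display coordinates corresponding to the camera's sensed position. The coordinate mapping is an assumption.

    def preview_position(camera_detached, camera_pos, default_pos=(0, 0)):
        # camera_pos: (x, y) in display coordinates reported by the sensing hardware
        return camera_pos if camera_detached else default_pos

    print(preview_position(True, (120, 300)))   # follow the detached camera
    print(preview_position(False, (120, 300)))  # fixed default position when mounted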
Abstract:
A display device is disclosed. A display device according to an embodiment of the present specification includes a display unit configured to display visual information, a camera unit configured to capture an image in front of the display device, a sensor unit configured to sense user input applied to the display device, and a control unit configured to control the display device, wherein the control unit detects at least one user from the captured image; maintains display of the visual information and processes received user input when the detected user includes a predetermined master user; maintains display of the visual information but does not process the received user input when the detected user does not include the predetermined master user; and deactivates the display unit when no user is detected in the captured image.
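A sketch of the control unit's three branches described above, assuming `detected_users` is the list of users recognized in the captured image; the field names are illustrative.

    def control_state(detected_users, master_user):
        if not detected_users:
            # No user detected: deactivate the display unit.
            return {"display_on": False, "process_input": False}
        if master_user in detected_users:
            # Master user present: keep the display and process input.
            return {"display_on": True, "process_input": True}
        # Users present but no master user: keep the display, ignore input.
        return {"display_on": True, "process_input": False}

    print(control_state(["alice", "bob"], master_user="alice"))
    print(control_state(["bob"], master_user="alice"))
    print(control_state([], master_user="alice"))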
Abstract:
A display device and a method for controlling the same are disclosed. More specifically, a three-foldable display device and a method for controlling the same are disclosed, wherein the three-foldable display device has two cameras with resolutions different from each other at preset locations and controls a location for providing a preview interface in accordance with an activated one of the two cameras.
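A small illustrative sketch of that camera-dependent placement; the camera names and the camera-to-panel mapping below are assumptions, not taken from the disclosure.

    # Hypothetical mapping from the activated camera to the display region
    # where the preview interface is provided.
    PREVIEW_LOCATION = {
        "high_res_camera": "center_panel",
        "low_res_camera": "side_panel",
    }

    def preview_panel(active_camera):
        return PREVIEW_LOCATION[active_camera]

    print(preview_panel("high_res_camera"))  # center_panel
    print(preview_panel("low_res_camera"))   # side_panel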
Abstract:
A mobile device includes a camera unit configured to sense an image; a display unit configured to display the image; a sensor unit configured to detect an input signal and transmit the detected input signal to a processor; a storage unit; and the processor configured to control the display unit, the camera unit, and the sensor unit, wherein the processor is further configured to: provide an image capturing interface displaying the image sensed by the camera unit, display a pattern code indicator when a pattern code is recognized from the image, store the image in the storage unit in response to a first input signal, and store data linked to the pattern code in the storage unit in response to a second input signal.
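A sketch of the two storage paths described above, assuming a pattern-code decoder exists elsewhere and has already produced `pattern_code_data`; the signal labels and storage structure are illustrative.

    def handle_input(signal, image, pattern_code_data, storage):
        if signal == "first":
            # First input signal: store the sensed image itself.
            storage.append(("image", image))
        elif signal == "second" and pattern_code_data is not None:
            # Second input signal: store the data linked to the pattern code.
            storage.append(("pattern_data", pattern_code_data))
        return storage

    storage = []
    handle_input("first", image="frame_001", pattern_code_data=None, storage=storage)
    handle_input("second", image="frame_002", pattern_code_data="https://example.com", storage=storage)
    print(storage)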
Abstract:
A navigation device for a vehicle includes a display unit, and a processor configured to determine a location of the navigation device, detect an object loaded into the vehicle by wirelessly communicating with the object, identify the detected object based on attribute information of the detected object, save a destination history of the identified object including destination information of the vehicle in which the identified object is loaded, and display at least one recommended destination on the display unit based on the destination history of the object in response to the identified object being loaded into the vehicle again after the destination history has been saved.
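A hypothetical sketch of the destination-history logic: destinations are recorded per identified object, and re-loading that object yields the recorded destinations as recommendations. Ranking by frequency is an assumption made only for illustration.

    from collections import Counter, defaultdict

    # Destination history keyed by the identified object.
    history = defaultdict(Counter)

    def save_destination(object_id, destination):
        history[object_id][destination] += 1

    def recommend(object_id, top_n=3):
        # Most frequently visited destinations for this object, best first.
        return [dest for dest, _ in history[object_id].most_common(top_n)]

    save_destination("golf_bag", "Riverside Golf Club")
    save_destination("golf_bag", "Riverside Golf Club")
    save_destination("golf_bag", "Driving Range A")
    print(recommend("golf_bag"))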
Abstract:
A method for controlling a portable device comprising a first body in the center thereof, a second body positioned on the left side of the first body, and a third body positioned on the right side of the first body, according to one embodiment of the present specification, may comprise the steps of: detecting a first triggering signal if a first folded state is converted to a second folded state; displaying a menu interface in the converted second folded state on the basis of the detected first triggering signal; detecting a first control input with respect to the displayed menu interface; and if the second folded state is converted to a third folded state, displaying a first application in the third folded state on the basis of the first control input.
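A small state-machine sketch of the folding sequence described above: the transition from the first to the second folded state shows the menu interface, and the transition to the third folded state displays the application selected by the control input. The state names and class layout are illustrative assumptions.

    class PortableDevice:
        def __init__(self):
            self.state = "first_folded"
            self.selected_app = None
            self.screen = None

        def on_fold_change(self, new_state):
            if self.state == "first_folded" and new_state == "second_folded":
                self.screen = "menu_interface"      # first triggering signal
            elif self.state == "second_folded" and new_state == "third_folded":
                self.screen = self.selected_app     # display the first application
            self.state = new_state

        def on_control_input(self, app):
            if self.screen == "menu_interface":
                self.selected_app = app             # first control input

    device = PortableDevice()
    device.on_fold_change("second_folded")
    device.on_control_input("camera_app")
    device.on_fold_change("third_folded")
    print(device.screen)  # camera_app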