Abstract:
The present disclosure generally relates to using avatars and image data for enhanced user interactions. In some examples, user-status-dependent avatars are generated and displayed with a message associated with the user status. In some examples, a device captures image information to scan an object and create a 3D model of the object. The device determines an algorithm for the 3D model based on the captured image information and provides visual feedback on the additional image data needed for the algorithm to build the 3D model. In some examples, an application's operation on a device is restricted based on whether captured image data identifies an authorized user as using the device. In some examples, depth data is used to combine two sets of image data.
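The last example above, combining two sets of image data using depth data, can be illustrated with a minimal sketch. The function name, array layout, and the assumption that each image arrives with an aligned per-pixel depth map are illustrative, not taken from the disclosure.

```python
import numpy as np

def composite_by_depth(image_a, depth_a, image_b, depth_b):
    """Combine two RGB images by keeping, at each pixel, the sample that is
    closer to the camera according to its depth map.

    image_a, image_b: (H, W, 3) arrays; depth_a, depth_b: (H, W) arrays
    where smaller values mean closer to the camera."""
    closer_a = depth_a <= depth_b            # per-pixel mask of where A wins
    mask = closer_a[..., np.newaxis]         # broadcast the mask over channels
    return np.where(mask, image_a, image_b)
```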
Abstract:
A method, including receiving, by a computer, a sequence of three-dimensional maps containing at least a hand of a user of the computer, and identifying, in the maps, a device coupled to the computer. The maps are analyzed to detect a gesture performed by the user toward the device, and the device is actuated responsively to the gesture.
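One way to read the detection step is as a test on the hand's trajectory relative to the identified device. The sketch below assumes the 3D maps have already been reduced to a sequence of hand coordinates and a device position in the same coordinate frame; the function name and thresholds are hypothetical.

```python
import numpy as np

def is_gesture_toward(hand_positions, device_position,
                      min_travel=0.10, min_alignment=0.8):
    """Return True when the hand's overall motion heads toward the device.

    hand_positions: (N, 3) hand coordinates from successive 3D maps.
    device_position: (3,) coordinates of the identified device.
    min_travel: minimum distance the hand must cover (same units as the maps).
    min_alignment: minimum cosine similarity between the motion and the
                   direction from the hand to the device."""
    motion = hand_positions[-1] - hand_positions[0]
    travel = np.linalg.norm(motion)
    if travel < min_travel:
        return False
    to_device = device_position - hand_positions[0]
    cosine = np.dot(motion, to_device) / (travel * np.linalg.norm(to_device))
    return cosine >= min_alignment
```

When the test passes, the computer would actuate the device, e.g. toggle it on or off.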
Abstract:
Techniques and devices for creating a Forward-Reverse Loop output video and other output video variations. A pipeline may include obtaining input video and determining a start frame within the input video and a frame length parameter based on a temporal discontinuity minimization. The selected start frame and the frame length parameter may provide a reversal point within the Forward-Reverse Loop output video. The Forward-Reverse Loop output video may include a forward segment that begins at the start frame and ends at the reversal point, and a reverse segment that starts after the reversal point and plays back one or more frames of the forward segment in reverse order. The pipeline for generating the Forward-Reverse Loop output video may be part of a shared resource architecture that generates other types of output video variations, such as AutoLoop output videos and Long Exposure output videos.
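The start-frame and frame-length selection can be sketched as a search over candidate segments whose endpoints fall where the video changes least, as a stand-in for the temporal discontinuity minimization described above. The scoring heuristic, function name, and parameters below are assumptions for illustration only.

```python
import numpy as np

def select_forward_reverse_segment(frames, min_len=30, max_len=90):
    """Pick (start_frame, frame_length) for a Forward-Reverse Loop.

    frames: (N, H, W, C) array of decoded video frames.
    Playback reverses at both ends of the segment, so each candidate is
    scored by the frame-to-frame change at its two endpoints, and the
    candidate with the smallest combined change wins."""
    # Per-frame motion proxy: mean absolute difference to the next frame.
    diffs = np.mean(np.abs(np.diff(frames.astype(np.float32), axis=0)),
                    axis=(1, 2, 3))
    best = None
    for start in range(len(diffs)):
        for length in range(min_len, max_len + 1):
            end = start + length              # candidate reversal point
            if end >= len(diffs):
                break
            score = diffs[start] + diffs[end]
            if best is None or score < best[0]:
                best = (score, start, length)
    if best is None:
        raise ValueError("input video is shorter than min_len")
    _, start, length = best
    return start, length
```

The output video would then play frames[start : start + length] forward, followed by one or more of the same frames in reverse order after the reversal point.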
Abstract:
A method, including receiving a three-dimensional (3D) map of at least a part of a body of a user (22) of a computerized system, and receiving a two-dimensional (2D) image of the user, the image including an eye (34) of the user. 3D coordinates of a head (32) of the user are extracted from the 3D map and the 2D image, and a direction of a gaze performed by the user is identified based on the 3D coordinates of the head and the image of the eye.
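A rough reading of the combination step is that the head pose supplies a coarse gaze direction, which the pupil's position within the eye image refines. The sketch below assumes the head orientation has already been recovered from the 3D coordinates and that the pupil offset has been measured in the 2D image; the gain parameter and all names are hypothetical.

```python
import numpy as np

def estimate_gaze_direction(head_yaw, head_pitch,
                            pupil_offset_x, pupil_offset_y, eye_gain=0.5):
    """Combine head pose (from the 3D map) with the pupil's offset inside the
    eye region (from the 2D image) into a unit gaze vector.

    head_yaw, head_pitch: head orientation in radians.
    pupil_offset_x, pupil_offset_y: normalized pupil offset in [-1, 1].
    eye_gain: assumed scale converting pupil offset to an angular offset."""
    yaw = head_yaw + eye_gain * pupil_offset_x
    pitch = head_pitch + eye_gain * pupil_offset_y
    gaze = np.array([np.sin(yaw) * np.cos(pitch),
                     np.sin(pitch),
                     np.cos(yaw) * np.cos(pitch)])
    return gaze / np.linalg.norm(gaze)
```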
Abstract:
A method includes receiving a sequence of three-dimensional (3D) maps of at least a part of a body of a user of a computerized system and extracting, from the 3D maps, 3D coordinates of a head of the user. Based on the 3D coordinates of the head, a direction of a gaze performed by the user is identified, along with an interactive item presented in the direction of the gaze on a display coupled to the computerized system. An indication that the user is moving a limb of the body in a specific direction is extracted from the 3D maps, and the identified interactive item is repositioned on the display responsively to the indication.
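The repositioning step can be sketched as mapping the limb's displacement in the 3D maps onto a displacement of the selected item on the display. The scaling factor and the coordinate conventions below are assumptions.

```python
import numpy as np

def reposition_selected_item(item_xy, limb_positions, pixels_per_metre=800.0):
    """Move the gaze-selected on-screen item in the direction the limb moves.

    item_xy: (x, y) position of the item on the display, in pixels.
    limb_positions: (N, 3) limb coordinates from successive 3D maps.
    pixels_per_metre: assumed mapping from physical motion to screen motion."""
    motion = limb_positions[-1] - limb_positions[0]
    dx = motion[0] * pixels_per_metre
    dy = -motion[1] * pixels_per_metre        # screen y grows downward
    return (item_xy[0] + dx, item_xy[1] + dy)
```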
Abstract:
A method, including receiving, by a computer executing a non-tactile three dimensional (3D) user interface, a set of multiple 3D coordinates representing a gesture by a hand positioned within a field of view of a sensing device coupled to the computer, the gesture including a first motion in a first direction along a selected axis in space, followed by a second motion in a second direction, opposite to the first direction, along the selected axis. Upon detecting completion of the gesture, the non-tactile 3D user interface is transitioned from a first state to a second state.
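The gesture described, a motion in one direction along a selected axis followed by a motion back along the same axis, can be sketched as a check for a single turning point in the hand's coordinate on that axis. The thresholds and names are illustrative, and the convention that the first motion decreases the axis value is an assumption.

```python
import numpy as np

def detect_push_pull(coords, axis=2, min_travel=0.08):
    """Detect a motion along the chosen axis followed by a motion back in the
    opposite direction (e.g. a push toward the sensor and a release).

    coords: (N, 3) hand coordinates captured during the gesture.
    axis: index of the selected axis (default: z).
    min_travel: minimum excursion required in each direction."""
    z = coords[:, axis]
    turn = int(np.argmin(z))                  # deepest point of the first motion
    pushed = z[0] - z[turn] >= min_travel     # first motion, toward the sensor
    pulled = z[-1] - z[turn] >= min_travel    # second motion, back again
    return pushed and pulled and 0 < turn < len(z) - 1
```

On a True result, the non-tactile 3D user interface would transition from the first state to the second state.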
Abstract:
The present disclosure generally relates to displaying and editing image with depth information. Image data associated with an image includes depth information associated with a subject. In response to a request to display the image, a first modified image is displayed. Displaying the first modified image includes displaying, based on the depth information, a first level of simulated lighting on a first portion of the subject and a second level of simulated lighting on a second portion of the subject. After displaying the first modified image, a second modified image is displayed. Displaying the second modified image includes displaying, based on the depth information, a third level of simulated lighting on the first portion of the subject and a fourth level of simulated lighting on the second portion of the subject.
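One way to illustrate the two lighting levels is a per-pixel gain chosen from the depth map, so that the portion of the subject nearer the camera is lit differently from the portion farther away. The threshold, gains, and function name below are assumptions, not the disclosed technique.

```python
import numpy as np

def apply_simulated_lighting(image, depth, threshold, near_gain, far_gain):
    """Apply one simulated-lighting level to the nearer portion of the subject
    and another level to the farther portion, using the depth map.

    image: (H, W, 3) float array in [0, 1]; depth: (H, W), smaller = closer.
    near_gain, far_gain: brightness multipliers for the two portions."""
    gain = np.where(depth <= threshold, near_gain, far_gain)[..., np.newaxis]
    return np.clip(image * gain, 0.0, 1.0)
```

The first modified image might use one pair of gains and the second modified image a different pair for the same two portions of the subject.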
Abstract:
A gesture based user interface includes a movement monitor configured to monitor a user's hand and to provide a signal based on movements of the hand. A processor is configured to provide at least one interface state in which a cursor is confined to movement within a single dimension region responsive to the signal from the movement monitor, and to actuate different commands responsive to the signal from the movement monitor and the location of the cursor in the single dimension region.
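A minimal sketch of confining the cursor to a single-dimension region and dispatching commands by its position is given below, assuming hand movement has already been projected onto the region's axis; all names and the command table are hypothetical.

```python
import numpy as np

def update_cursor(cursor, hand_delta, region_min=0.0, region_max=1.0):
    """Advance a cursor confined to a one-dimensional region.

    cursor: current position within [region_min, region_max].
    hand_delta: hand movement projected onto the region's axis.
    The result is clamped so the cursor never leaves the region."""
    return float(np.clip(cursor + hand_delta, region_min, region_max))

def command_for(cursor, commands):
    """Return the command whose sub-interval contains the cursor.

    commands: list of (start, end, name) tuples covering the region,
    so different cursor locations actuate different commands."""
    for start, end, name in commands:
        if start <= cursor <= end:
            return name
    return None
```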