Abstract:
In one implementation, a device has a processor, a projector, a first infrared (IR) sensor, a second IR sensor, and instructions stored on a computer-readable medium that are executed by the processor to estimate the sensor-to-sensor extrinsic parameters. The projector projects IR pattern elements onto an environment surface. The first IR sensor captures a first image including first IR pattern elements corresponding to the projected IR pattern elements, and the device estimates 3D positions for the first IR pattern elements. The second IR sensor captures a second image including second IR pattern elements corresponding to the projected IR pattern elements, and the device matches the first IR pattern elements with the second IR pattern elements. Based on this matching, the device estimates an extrinsic parameter corresponding to the spatial relationship between the first IR sensor and the second IR sensor.
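As a rough illustration of how the matched pattern elements could yield the sensor-to-sensor extrinsics, the sketch below solves a perspective-n-point problem: 3D positions of elements in the first sensor's frame against their 2D detections in the second sensor's image. The function name, the use of OpenCV, and the assumption of known intrinsics for the second sensor are illustrative choices, not details taken from the abstract.

```python
import numpy as np
import cv2  # OpenCV, used here only to solve the perspective-n-point problem


def estimate_extrinsics(points_3d_first, points_2d_second, K_second):
    """Estimate rotation/translation of the second IR sensor relative to the first.

    points_3d_first : (N, 3) 3D positions of matched pattern elements, expressed
                      in the first sensor's coordinate frame.
    points_2d_second: (N, 2) pixel coordinates of the same elements in the
                      second sensor's image.
    K_second        : (3, 3) intrinsic matrix of the second sensor (assumed known).
    """
    ok, rvec, tvec = cv2.solvePnP(
        points_3d_first.astype(np.float32),
        points_2d_second.astype(np.float32),
        K_second.astype(np.float32),
        None,  # no lens distortion modeled in this sketch
    )
    if not ok:
        raise RuntimeError("PnP failed; check the pattern-element matches")
    R, _ = cv2.Rodrigues(rvec)   # rotation matrix from first-sensor frame
    return R, tvec.reshape(3)    # extrinsic parameters (R, t)
```

In practice the correspondences would come from associating projected pattern dots across the two IR images, for example by local descriptor matching or an epipolar search.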
Abstract:
A method, including receiving, by a computer, a sequence of three-dimensional maps containing at least a hand of a user of the computer, and identifying, in the maps, a device coupled to the computer. The maps are analyzed to detect a gesture performed by the user toward the device, and the device is actuated responsively to the gesture.
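A minimal sketch of the "gesture toward the device" step, assuming the hand centroid and the device position have already been extracted from the 3D maps; the angular threshold, helper names, and actuation callback are all hypothetical.

```python
import numpy as np


def gesture_points_at_device(hand_positions, device_center, max_angle_deg=15.0):
    """Return True if the hand's motion across the 3D map sequence is directed
    toward the device.

    hand_positions : (T, 3) hand centroid per 3D map (meters).
    device_center  : (3,) position of the identified device in the same frame.
    """
    motion = hand_positions[-1] - hand_positions[0]   # overall hand motion
    to_device = device_center - hand_positions[0]     # direction from hand to device
    if np.linalg.norm(motion) < 0.05:                 # ignore tiny motions
        return False
    cos_angle = np.dot(motion, to_device) / (
        np.linalg.norm(motion) * np.linalg.norm(to_device))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))) <= max_angle_deg


def actuate_if_gestured(hand_positions, device_center, actuate):
    """Actuate the coupled device when a gesture toward it is detected.

    `actuate` is a hypothetical callback; the real control path is not
    specified in the abstract.
    """
    if gesture_points_at_device(np.asarray(hand_positions),
                                np.asarray(device_center)):
        actuate()
```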
Abstract:
Techniques are disclosed relating to biometric authentication, e.g., facial recognition. In some embodiments, a device is configured to verify that image data from a camera unit exhibits a pseudo-random sequence of image capture modes and/or a probing pattern of illumination points (e.g., from lasers in a depth capture mode) before authenticating a user based on recognizing a face in the image data. In some embodiments, a secure circuit may control verification of the sequence and/or the probing pattern. In some embodiments, the secure circuit may verify frame numbers, signatures, and/or nonce values for captured image information. In some embodiments, a device may implement one or more lockout procedures in response to biometric authentication failures. The disclosed techniques may reduce or eliminate the effectiveness of spoofing and/or replay attacks, in some embodiments.
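One way such a verification could look, assuming the secure circuit and the camera unit share a key and a per-session nonce, and that each frame carries a frame number, a capture mode, and a keyed signature; the frame structure and the HMAC-based mode derivation below are illustrative stand-ins for whatever pseudo-random generator and signing scheme the secure circuit actually uses.

```python
import hmac
import hashlib

MODES = ("flood_ir", "depth")  # two illustrative capture modes


def expected_mode(secret_key: bytes, nonce: bytes, frame_number: int) -> str:
    """Derive the capture mode a trusted component expects for a given frame."""
    digest = hmac.new(secret_key, nonce + frame_number.to_bytes(4, "big"),
                      hashlib.sha256).digest()
    return MODES[digest[0] % len(MODES)]


def verify_frames(secret_key: bytes, nonce: bytes, frames) -> bool:
    """Check that frames arrive in order, in the expected modes, and signed.

    `frames` is a list of dicts with 'number', 'mode', 'payload', 'signature';
    authentication should only proceed if this returns True.
    """
    last = -1
    for f in frames:
        if f["number"] <= last:                     # replayed or reordered frame
            return False
        if f["mode"] != expected_mode(secret_key, nonce, f["number"]):
            return False                            # capture-mode sequence broken
        sig = hmac.new(secret_key, f["payload"], hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, f["signature"]):
            return False                            # frame not signed by the camera
        last = f["number"]
    return True
```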
Abstract:
A facial recognition process operating on a device may include one or more processes that determine if a camera and/or components associated with the camera are obstructed by an object (e.g., a user's hand or fingers). Obstruction of the device may be assessed using flood infrared illumination images when a user's face cannot be detected by a face detection process operating on the device. Obstruction of the device may also be assessed using a pattern detection process that operates after the user's face is detected by the face detection process. When obstruction of the device is detected, the device may provide a notification to the user that the device (e.g., the camera and/or an illuminator) is obstructed and that the obstruction should be removed for the facial recognition process to operate correctly.
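A simple sketch of the flood-IR obstruction check, assuming an 8-bit flood infrared frame and using mostly-dark or mostly-saturated pixel fractions as the obstruction cue; the thresholds and the notification hook are illustrative, not values from the disclosure.

```python
import numpy as np


def camera_looks_obstructed(flood_ir_image, dark_frac_thresh=0.6,
                            saturated_frac_thresh=0.6):
    """Heuristic obstruction check on a flood-IR frame when no face was found.

    An object pressed against the camera or illuminator tends to produce a
    frame that is mostly very dark or mostly saturated.
    """
    img = np.asarray(flood_ir_image, dtype=np.float32)
    dark_frac = np.mean(img < 10)      # fraction of near-black pixels (8-bit scale)
    bright_frac = np.mean(img > 245)   # fraction of blown-out pixels
    return dark_frac > dark_frac_thresh or bright_frac > saturated_frac_thresh


def check_and_notify(flood_ir_image, face_detected, notify):
    """Notify the user if no face was detected and the frame looks blocked."""
    if not face_detected and camera_looks_obstructed(flood_ir_image):
        notify("The camera appears to be covered. "
               "Please remove the obstruction.")
```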
Abstract:
A method, including receiving, by a computer, a two-dimensional (2D) image containing at least a physical surface and segmenting the physical surface into one or more physical regions. A functionality is assigned to each of the one or more physical regions, each of the functionalities corresponding to a tactile input device, and a sequence of three-dimensional (3D) maps is received, the sequence of 3D maps containing at least a hand of a user of the computer, the hand positioned on one of the physical regions. The 3D maps are analyzed to detect a gesture performed by the user, and based on the gesture, an input is simulated for the tactile input device corresponding to the one of the physical regions.
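A toy sketch of the region-to-device mapping, assuming the segmentation step has already produced rectangular regions in the surface plane and a separate classifier labels the gesture; the region layout, gesture names, and event format are hypothetical.

```python
import numpy as np

# Illustrative mapping from physical regions of the surface to the tactile
# input device each one emulates; bounds are axis-aligned rectangles in the
# surface plane, in meters.
REGIONS = {
    "keyboard": {"bounds": ((0.00, 0.00), (0.40, 0.15))},
    "touchpad": {"bounds": ((0.10, 0.18), (0.30, 0.30))},
}


def region_under_hand(hand_xy):
    """Return the name of the physical region the hand is positioned on."""
    for name, region in REGIONS.items():
        (x0, y0), (x1, y1) = region["bounds"]
        if x0 <= hand_xy[0] <= x1 and y0 <= hand_xy[1] <= y1:
            return name
    return None


def simulate_input(hand_xy, gesture, send_event):
    """Translate a detected gesture into an input event for the emulated device.

    `gesture` is a label such as "tap" or "swipe" from an upstream classifier;
    `send_event` injects the synthetic input event into the computer.
    """
    region = region_under_hand(np.asarray(hand_xy))
    if region == "keyboard" and gesture == "tap":
        send_event({"device": "keyboard", "type": "keypress", "pos": tuple(hand_xy)})
    elif region == "touchpad" and gesture in ("tap", "swipe"):
        send_event({"device": "touchpad", "type": gesture, "pos": tuple(hand_xy)})
```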
Abstract:
A method, including receiving a three-dimensional (3D) map of at least a part of a body of a user (22) of a computerized system, and receiving a two-dimensional (2D) image of the user, the image including an eye (34) of the user. 3D coordinates of a head (32) of the user are extracted from the 3D map and the 2D image, and a direction of a gaze performed by the user is identified based on the 3D coordinates of the head and the image of the eye.
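A compact sketch of how the head coordinates and the eye image could be combined into a gaze direction, assuming an upstream step supplies the head rotation from the 3D map and per-eye yaw/pitch angles from the 2D image; the frame conventions and function name are illustrative.

```python
import numpy as np


def gaze_direction(head_rotation, eye_yaw_deg, eye_pitch_deg):
    """Combine head pose (from the 3D map) with in-eye gaze angles (from the
    2D image) into a sensor-frame gaze direction.

    head_rotation : (3, 3) rotation matrix of the head in the sensor frame.
    eye_yaw_deg, eye_pitch_deg : pupil angles relative to the head, estimated
        from the 2D eye image by some upstream eye tracker (assumed).
    """
    yaw, pitch = np.radians([eye_yaw_deg, eye_pitch_deg])
    # Gaze vector in the head's own coordinate frame (z pointing forward).
    gaze_head = np.array([np.sin(yaw) * np.cos(pitch),
                          np.sin(pitch),
                          np.cos(yaw) * np.cos(pitch)])
    return head_rotation @ gaze_head   # rotate into the sensor/world frame
```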
Abstract:
A method includes receiving a sequence of three-dimensional (3D) maps of at least a part of a body of a user of a computerized system and extracting, from the 3D maps, 3D coordinates of a head of the user. Based on the 3D coordinates of the head, a direction of a gaze performed by the user and an interactive item presented in the direction of the gaze on a display coupled to the computerized system are identified. An indication that the user is moving a limb of the body in a specific direction is extracted from the 3D maps, and the identified interactive item is repositioned on the display responsively to the indication.
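A minimal sketch of the repositioning step, assuming the gazed-at item has already been identified and the limb positions come from the 3D map sequence; the 3D-to-screen mapping and the scale factor are illustrative.

```python
import numpy as np


def reposition_gazed_item(items, gaze_target, limb_positions, scale=1.0):
    """Move the on-screen item the user is gazing at in the direction of limb motion.

    items          : dict of item name -> (2,) on-screen position array.
    gaze_target    : name of the item identified in the gaze direction.
    limb_positions : (T, 3) limb positions from the 3D map sequence (meters).
    scale          : illustrative mapping from limb motion to screen units.
    """
    motion_3d = limb_positions[-1] - limb_positions[0]
    # Project the 3D limb motion onto the display plane (x right, y up).
    motion_2d = np.array([motion_3d[0], motion_3d[1]]) * scale
    items[gaze_target] = items[gaze_target] + motion_2d
    return items
```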
Abstract:
A method, including receiving, by a computer executing a non-tactile three dimensional (3D) user interface, a set of multiple 3D coordinates representing a gesture by a hand positioned within a field of view of a sensing device coupled to the computer, the gesture including a first motion in a first direction along a selected axis in space, followed by a second motion in a second direction, opposite to the first direction, along the selected axis. Upon detecting completion of the gesture, the non-tactile 3D user interface is transitioned from a first state to a second state.
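A small sketch of detecting the forward-then-back gesture along the selected axis and toggling between the two interface states, assuming the hand coordinate along that axis is tracked per frame and that the first motion moves toward the display (decreasing coordinate); the travel threshold and class names are hypothetical.

```python
import numpy as np


def detect_push_gesture(hand_z, min_travel=0.08):
    """Detect a motion in one direction along the selected axis followed by a
    motion in the opposite direction.

    hand_z     : sequence of the hand's coordinate along the selected axis,
                 one value per frame (meters).
    min_travel : minimum travel in each direction to count (illustrative).
    """
    z = np.asarray(hand_z, dtype=np.float32)
    turn = int(np.argmin(z))           # closest approach along the axis
    forward = z[0] - z[turn]           # travel in the first direction
    backward = z[-1] - z[turn]         # travel back in the opposite direction
    return forward >= min_travel and backward >= min_travel


class NonTactileUI:
    """Toy two-state interface that transitions when the gesture completes."""

    def __init__(self):
        self.state = "first"

    def update(self, hand_z):
        if detect_push_gesture(hand_z):
            self.state = "second" if self.state == "first" else "first"
```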