Abstract:
Methods and systems according to one or more embodiments are provided for enhancing interactive inputs. In an embodiment, a method includes concurrently capturing touch input data on a screen of a user device and non-touch gesture input data off the screen of the user device. The method also includes determining an input command based at least in part on a combination of the concurrently captured touch input data and non-touch gesture input data. The method further includes affecting an operation of the user device based on the determined input command.
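The sketch below is a minimal, hypothetical illustration of the fusion step described above: it accepts one on-screen touch sample and one off-screen gesture sample, checks that they were captured concurrently, and maps the combination to a command. The sample types, gesture labels, timing threshold, and command names are assumptions for illustration, not the disclosed implementation.

```python
# Minimal sketch (not the patented implementation): fusing concurrent touch and
# off-screen gesture samples into a single input command. Sensor names, gesture
# labels, and the command mapping are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class TouchSample:
    x: float          # touch position on screen (normalized 0..1)
    y: float
    timestamp: float  # seconds


@dataclass
class GestureSample:
    label: str        # e.g. "swipe_left", "pinch" (hypothetical labels)
    timestamp: float


def determine_command(touch: TouchSample, gesture: GestureSample,
                      max_skew: float = 0.15) -> str | None:
    """Return an input command only when the two inputs are effectively concurrent."""
    if abs(touch.timestamp - gesture.timestamp) > max_skew:
        return None  # not concurrent; fall back to single-modality handling

    # Hypothetical mapping of combined inputs to commands.
    if gesture.label == "swipe_left":
        return f"drag_selection_left@({touch.x:.2f},{touch.y:.2f})"
    if gesture.label == "pinch":
        return "zoom_at_touch_point"
    return None


# Example: a touch on screen plus an off-screen swipe captured 50 ms apart.
cmd = determine_command(TouchSample(0.4, 0.6, 10.00), GestureSample("swipe_left", 10.05))
print(cmd)  # -> drag_selection_left@(0.40,0.60)
```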
Abstract:
Systems and methods according to one or more embodiments of the present disclosure are provided for seamlessly extending interactive inputs. In an embodiment, a method comprises detecting, with a first sensor, at least a portion of an input by a control object. The method also comprises determining that the control object is positioned in a transition area. The method further comprises determining whether to detect a subsequent portion of the input with a second sensor based at least in part on the determination that the control object is positioned in the transition area.
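As a rough illustration of the handoff decision, the following sketch models a control object (for example, a hand) tracked by a first sensor and decides whether a second sensor should pick up the remainder of the input once the object enters a transition area. The distance thresholds, the sensors involved, and the "moving away" heuristic are assumed for illustration only.

```python
# Minimal sketch (an assumption, not the disclosed system): deciding whether to
# hand off tracking of a control object from a first sensor (e.g. a touchscreen)
# to a second sensor (e.g. a camera) when the object enters a transition area.
from dataclasses import dataclass


@dataclass
class ControlObjectState:
    distance_from_screen: float  # cm, as estimated by the first sensor
    moving_away: bool            # True if the distance is increasing


TRANSITION_NEAR = 2.0   # cm: start of the transition area (assumed value)
TRANSITION_FAR = 6.0    # cm: end of the transition area (assumed value)


def in_transition_area(state: ControlObjectState) -> bool:
    return TRANSITION_NEAR <= state.distance_from_screen <= TRANSITION_FAR


def should_use_second_sensor(state: ControlObjectState) -> bool:
    """Detect the subsequent portion of the input with the second sensor only when
    the control object is in the transition area and still moving away."""
    return in_transition_area(state) and state.moving_away


# Example: a finger lifting off the screen and continuing a gesture in the air.
state = ControlObjectState(distance_from_screen=3.5, moving_away=True)
print(should_use_second_sensor(state))  # -> True: continue detection with the camera
```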
Abstract:
Systems and methods are provided that allow a user to interact with a device using gaze detection. In the provided systems and methods, gaze detection is initiated by detecting a triggering event. Once gaze detection has been initiated, detecting a gaze of the user may allow the user to activate a display component of the device, pass a security challenge on the device, and view content and alerts on the device. The gaze detection may continue to monitor for the user's gaze and keep the display component of the device activated as long as a gaze is detected, but may deactivate the display component of the device once a gaze is no longer detected. To conserve power, the gaze detection may also be deactivated until another triggering event is detected.
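The following sketch illustrates one possible way to organize this behavior as a small state machine: a triggering event starts gaze detection, the display stays on while a gaze is reported, and both the display and the detector shut off after a period without a gaze. The triggering events, grace period, and class interface are hypothetical; a real device would connect these hooks to its own sensor and display APIs.

```python
# Minimal sketch (illustrative only): a state machine for gaze-controlled display
# power. Triggers, the gaze detector, and the timeout value are assumed placeholders.
import time


class GazeDisplayController:
    def __init__(self, grace_period: float = 2.0):
        self.display_on = False
        self.gaze_detection_active = False
        self.grace_period = grace_period     # seconds without a gaze before blanking
        self._last_gaze_time = 0.0

    def on_triggering_event(self) -> None:
        """e.g. the device is picked up or an alert arrives (assumed triggers)."""
        self.gaze_detection_active = True    # start the (power-hungry) gaze detector

    def on_gaze_sample(self, gaze_detected: bool, now: float | None = None) -> None:
        if not self.gaze_detection_active:
            return
        now = time.monotonic() if now is None else now
        if gaze_detected:
            self._last_gaze_time = now
            self.display_on = True           # keep the display active while gazed at
        elif now - self._last_gaze_time > self.grace_period:
            self.display_on = False             # no gaze for a while: blank the display
            self.gaze_detection_active = False  # and stop detection to conserve power


controller = GazeDisplayController()
controller.on_triggering_event()
controller.on_gaze_sample(True, now=100.0)   # user looks: display activates
controller.on_gaze_sample(False, now=103.5)  # gaze absent past the grace period
print(controller.display_on, controller.gaze_detection_active)  # -> False False
```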