Abstract:
An extendable augmented reality (AR) system for recognizing user-selected objects in different contexts. A user may select certain entities (text, objects, etc.) that are viewed on an electronic device and create notes or additional content associated with the selected entities. The AR system may remember those entities and indicate to the user when those entities are encountered in a different context, such as in a different application, on a different device, etc. The AR system offers the user the ability to access the user-created note or content when the entities are encountered in the new context.
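As a rough illustration of the recall behavior described above, the following sketch keeps a store of annotated entities and returns the user-created notes when those entities reappear in a different context. The identifiers, context labels, and class names are hypothetical assumptions, not part of the described system.

```python
from dataclasses import dataclass, field

@dataclass
class EntityNote:
    """A user-created note attached to a selected entity (text, object, etc.)."""
    entity_id: str          # hypothetical identifier produced by the recognizer
    note: str
    source_context: str     # e.g., application or device where the note was made

@dataclass
class AnnotationStore:
    """Remembers annotated entities and surfaces notes when they reappear."""
    notes: dict = field(default_factory=dict)

    def add_note(self, entity_id: str, note: str, context: str) -> None:
        self.notes[entity_id] = EntityNote(entity_id, note, context)

    def check_context(self, recognized_entity_ids: list, current_context: str):
        """Return notes for entities recognized again in a different context."""
        return [n for eid in recognized_entity_ids
                if (n := self.notes.get(eid)) and n.source_context != current_context]

# Usage: annotate an entity in one app, then encounter it in another.
store = AnnotationStore()
store.add_note("isbn:9780000000001", "Recommended by Sam", context="e-reader")
hits = store.check_context(["isbn:9780000000001"], current_context="shopping-app")
print([h.note for h in hits])   # -> ['Recommended by Sam']
```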
Abstract:
Approaches are described which enable a computing device (e.g., mobile phone, tablet computer) to display alternate views or layers of information within a window on the display screen when a user's finger (or other object) is detected to be within a particular range of the display screen of the device. For example, a device displaying a road map view on the display screen may detect a user's finger near the screen and, in response to detecting the finger, render a small window that shows a portion of a satellite view of the map proximate to the location of the user's finger. As the user's finger moves laterally above the screen, the window can follow the location of the user's finger and display the satellite views of the various portions of the map over which the user's finger passes.
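A minimal sketch of the window-placement logic described above, assuming hover coordinates are reported in screen pixels and the alternate layer is available as an image array aligned with the road map view; the sizes and names are illustrative.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Rect:
    x: int
    y: int
    w: int
    h: int

def hover_window(finger_x: int, finger_y: int, screen_w: int, screen_h: int,
                 win_size: int = 200) -> Rect:
    """Return a window rectangle centered under the hovering finger,
    clamped so it stays entirely on the screen."""
    half = win_size // 2
    x = min(max(finger_x - half, 0), screen_w - win_size)
    y = min(max(finger_y - half, 0), screen_h - win_size)
    return Rect(x, y, win_size, win_size)

# Usage: crop the alternate (satellite) layer, assumed aligned with the road
# map view, to the window that follows the finger as it moves.
screen_w, screen_h = 1080, 1920
satellite_layer = np.zeros((screen_h, screen_w, 3), dtype=np.uint8)
r = hover_window(finger_x=1000, finger_y=30, screen_w=screen_w, screen_h=screen_h)
patch = satellite_layer[r.y:r.y + r.h, r.x:r.x + r.w]
print(r, patch.shape)   # window clamped to the screen edge
```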
Abstract:
An electronic device can be configured to enable a user to provide input via a tap of the device without the use of touch sensors (e.g., resistive, capacitive, ultrasonic or other acoustic, infrared or other optical, or piezoelectric touch technologies) and/or mechanical switches. Such a device can include other sensors, including inertial sensors (e.g., accelerometers, gyroscopes, or a combination thereof), microphones, proximity sensors, ambient light sensors, and/or cameras, among others, that can be used to capture respective sensor data. Feature values can be extracted from the respective sensor data and analyzed using machine learning to determine when the user has tapped on the electronic device. Detection of a single tap or multiple taps performed on the electronic device can be utilized to control the device.
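The tap-detection pipeline described above can be sketched as feature extraction over a short window of inertial-sensor samples followed by a learned classifier. The feature set, synthetic training data, and choice of logistic regression below are illustrative assumptions, not the system's actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_features(accel_window: np.ndarray) -> np.ndarray:
    """Compute simple features over a short window of 3-axis accelerometer
    samples (shape: [n_samples, 3]). The feature choices are illustrative."""
    mag = np.linalg.norm(accel_window, axis=1)
    return np.array([
        mag.max() - mag.min(),       # peak-to-peak magnitude
        mag.std(),                   # spread within the window
        np.abs(np.diff(mag)).max(),  # sharpest sample-to-sample change (jerk)
    ])

# Train a toy classifier on synthetic "no tap" vs. "tap" windows.
rng = np.random.default_rng(0)
quiet = rng.normal(0.0, 0.02, size=(50, 32, 3)) + [0.0, 0.0, 9.8]
taps = quiet.copy()
taps[:, 16, :] += rng.normal(0.0, 3.0, size=(50, 3))   # inject an impulse mid-window

X = np.array([extract_features(w) for w in np.concatenate([quiet, taps])])
y = np.array([0] * 50 + [1] * 50)
clf = LogisticRegression().fit(X, y)

# At run time, classify each incoming sensor window.
print(clf.predict([extract_features(taps[0]), extract_features(quiet[0])]))  # -> [1 0]
```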
Abstract:
Touch-based input to a computing device can be improved by providing a mechanism to lock or reduce the effects of motion in unintended directions. In one example, a user can navigate in two dimensions, then provide a gesture-based locking action through motion in a third dimension. If a computing device analyzing the gesture is able to detect the locking action, the device can limit motion outside the corresponding third dimension, or lock an interface object for selection, in order to ensure that the proper touch-based input selection is received. Various thresholds, values, or motions can be used to limit motion in one or more axes for any appropriate purpose as discussed herein.
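A minimal sketch of the dimension-locking idea, assuming the device reports finger position along three axes: once a sufficiently fast approach along the z axis is detected, lateral motion is suppressed. The threshold and interface are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class AxisLock:
    """Once a deliberate push along the z axis (toward the screen) is detected,
    lock out lateral x/y motion so the intended element stays selected."""
    z_lock_threshold: float = 0.5        # illustrative: cm of approach per update
    locked: bool = False
    _last_z: Optional[float] = None

    def update(self, x: float, y: float, z: float) -> Tuple[Optional[float], Optional[float], float]:
        if self._last_z is not None and (self._last_z - z) > self.z_lock_threshold:
            self.locked = True           # locking gesture: rapid approach along z
        self._last_z = z
        if self.locked:
            return None, None, z         # suppress unintended lateral motion
        return x, y, z

# Usage: the user navigates laterally, then pushes toward the screen to lock.
lock = AxisLock()
print(lock.update(10.0, 5.0, 3.0))       # free movement: (10.0, 5.0, 3.0)
print(lock.update(11.0, 5.2, 2.0))       # fast approach in z -> (None, None, 2.0)
```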
Abstract:
A computing device can be configured to recognize when a user hovers over, or is within a determined distance of, an element displayed on the computing device in order to perform certain tasks. Information associated with the element can be displayed when such a hover input is detected. This information may comprise a description of the tasks performed by selection of the element. The information could also be an enlarged version of the element to help the user disambiguate a selection among multiple elements. The information can be displayed in a manner such that at least the substantive portions of the information are not obscured or occluded by the user.
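One way the non-occluding placement might be sketched, assuming the device knows the hover point and the size of the information box; the offset and fallback rules below are illustrative assumptions.

```python
def place_info_box(finger_x: int, finger_y: int, box_w: int, box_h: int,
                   screen_w: int, screen_h: int, offset: int = 40):
    """Position the hover information box so its substantive portion is not
    covered by the user's finger or hand: prefer above the hover point and
    fall back to below it. The offset value is an illustrative assumption."""
    # Horizontally, center on the finger but keep the box fully on-screen.
    x = min(max(finger_x - box_w // 2, 0), screen_w - box_w)
    # Vertically, place above the finger if there is room, otherwise below it.
    if finger_y - offset - box_h >= 0:
        y = finger_y - offset - box_h
    else:
        y = finger_y + offset
    return x, y

# Usage: a hover near the top edge pushes the box below the finger instead.
print(place_info_box(finger_x=300, finger_y=20, box_w=240, box_h=80,
                     screen_w=1080, screen_h=1920))   # -> (180, 60)
```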
Abstract:
Approaches are described for managing a display of content on a computing device. Content (e.g., images, application data, etc.) is displayed on an interface of the device. An activation movement performed by a user (e.g., a double-tap) can cause the device to enable a content view control mode (such as a zoom control mode) that can be used to adjust a portion of the content being displayed on the interface. The activation movement can also be used to set an area of interest and display a graphical element indicating that the content view control mode is activated. In response to a motion being detected (e.g., a forward or backward tilt of the device), the device can adjust the portion of the content being displayed on the interface, such as displaying a “zoomed-in” portion or a “zoomed-out” portion of the image.
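A sketch of how the tilt-driven zoom control mode might be modeled once activated, assuming pitch changes are reported in degrees; the sensitivity, zoom limits, and class name are illustrative assumptions.

```python
import math

class TiltZoomController:
    """After an activation movement (e.g., a double-tap) fixes an area of
    interest, map forward/backward tilt to a zoom level."""
    def __init__(self, anchor_x: float, anchor_y: float,
                 sensitivity: float = 0.05, min_zoom: float = 1.0, max_zoom: float = 8.0):
        self.anchor = (anchor_x, anchor_y)   # area of interest set at activation
        self.zoom = 1.0
        self.sensitivity = sensitivity       # illustrative scaling per degree of tilt
        self.min_zoom, self.max_zoom = min_zoom, max_zoom

    def on_tilt(self, pitch_delta_degrees: float) -> float:
        """Forward tilt (positive delta) zooms in; backward tilt zooms out."""
        self.zoom *= math.exp(self.sensitivity * pitch_delta_degrees)
        self.zoom = min(max(self.zoom, self.min_zoom), self.max_zoom)
        return self.zoom

# Usage: tilt forward to zoom in on the area of interest, backward to zoom out.
ctrl = TiltZoomController(anchor_x=512.0, anchor_y=384.0)
print(round(ctrl.on_tilt(10.0), 2))    # forward tilt -> zooms in (~1.65x)
print(round(ctrl.on_tilt(-10.0), 2))   # backward tilt -> back toward 1.0x
```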
Abstract:
A computing device can capture optical data using optical sensors. In some embodiments, the optical sensors can include front-facing light sensors, image sensors, cameras, etc. The optical data captured by each respective optical sensor can be analyzed to determine an amount of light received by that sensor. Based, at least in part, on which optical sensors are detecting light and how much light those sensors are detecting, the device can determine (e.g., deduce, predict, estimate, etc.) an area of the device's display screen that is likely to be unobstructed by the environment (or portion thereof) in which the device is situated. The area of the display screen that is likely unobstructed is therefore also likely to be visible to a user of the device. Accordingly, the computing device can provide information at the area of the display screen that is likely to be unobstructed and/or visible to the user.
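A simplified sketch of the unobstructed-area estimate, assuming the device exposes per-region readings from its front-facing light sensors (e.g., in lux); the region names and threshold are illustrative assumptions.

```python
def likely_visible_region(sensor_readings: dict, threshold: float = 50.0):
    """Given per-region light readings (lux values keyed by screen region),
    return the region most likely to be unobstructed and visible to the user.
    Region names and the threshold are illustrative assumptions."""
    unobstructed = {region: lux for region, lux in sensor_readings.items()
                    if lux >= threshold}
    if not unobstructed:
        return None                    # whole screen appears covered
    # Prefer the brightest region for displaying information.
    return max(unobstructed, key=unobstructed.get)

# Usage: a device partly covered by a pocket or a hand.
readings = {"top_left": 120.0, "top_right": 140.0,
            "bottom_left": 5.0, "bottom_right": 8.0}
print(likely_visible_region(readings))   # -> 'top_right'
```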
Abstract:
Approaches are described for providing input to a portable computing device, such as a mobile phone. A user's hand can be detected based on data (e.g., one or more images) obtained by at least one sensor of the device, such as a camera, and the images can be analyzed to locate the hand of the user. As part of the location computation, the device can determine a motion being performed by the hand of the user, and the device can determine a gesture corresponding to the motion. In the situation where the device is controlling a media player capable of playing media content, the gesture can be interpreted by the device to cause the device to, e.g., pause a media track or perform another function with respect to the media content being presented via the device.
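A sketch of the final dispatch step, mapping an already-recognized gesture to a media-player action; the gesture names and player interface are assumptions, and the camera-based hand detection and motion analysis are taken to happen upstream.

```python
from typing import Callable, Dict

class GestureMediaController:
    """Dispatch recognized hand gestures to media-player actions."""
    def __init__(self, player):
        # Illustrative gesture-to-action bindings; not the described system's set.
        self.bindings: Dict[str, Callable[[], None]] = {
            "open_palm": player.pause,           # e.g., pause the current track
            "swipe_left": player.previous_track,
            "swipe_right": player.next_track,
        }

    def on_gesture(self, gesture: str) -> bool:
        action = self.bindings.get(gesture)
        if action is None:
            return False                         # unrecognized gesture: ignore
        action()
        return True

# Usage with a stand-in player object.
class DemoPlayer:
    def pause(self): print("paused")
    def previous_track(self): print("previous track")
    def next_track(self): print("next track")

controller = GestureMediaController(DemoPlayer())
controller.on_gesture("open_palm")   # -> prints "paused"
```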
Abstract:
A computing device can capture audio data representative of audio content present in a current environment. The captured audio data can be compared with audio models to locate a matching audio model. The matching audio model can be associated with an environment. The current environment can be identified based on the environment associated with the matching audio model. In some embodiments, information about the identified current environment can be provided to at least one application executing on the computing device. The at least one application can be configured to adjust at least one functional aspect based at least in part upon the determined current environment. In some embodiments, one or more computing tasks performed by the computing device can be improved based on information relating to the identified current environment. These computing tasks can include location refinement, location classification, and speech recognition.
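The matching step described above can be sketched as reducing a captured clip to a coarse spectral fingerprint and picking the nearest stored environment model; the feature choice and nearest-neighbor matching below are illustrative stand-ins for the audio models the abstract refers to.

```python
import numpy as np

def audio_features(samples: np.ndarray, n_bands: int = 16) -> np.ndarray:
    """Reduce a short audio clip to a coarse log-spectrum 'fingerprint'.
    This is a simplified stand-in for the device's actual audio models."""
    spectrum = np.abs(np.fft.rfft(samples))
    bands = np.array_split(spectrum, n_bands)
    return np.log1p(np.array([b.mean() for b in bands]))

def identify_environment(clip_features: np.ndarray, models: dict) -> str:
    """Return the environment whose stored model is closest to the captured clip."""
    return min(models, key=lambda env: np.linalg.norm(models[env] - clip_features))

# Usage sketch: match a captured clip against previously built environment models.
rate = 16000                                   # one second of 16 kHz audio
rng = np.random.default_rng(1)
models = {
    "street": audio_features(rng.normal(size=rate)),        # loud, broadband
    "office": audio_features(0.1 * rng.normal(size=rate)),  # quiet
}
clip = 0.1 * rng.normal(size=rate)
print(identify_environment(audio_features(clip), models))   # -> 'office'
```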