Abstract:
Various arrangements for using an augmented reality device are presented. Speech spoken by a person in a real-world scene may be captured by an augmented reality (AR) device. It may be determined that a second AR device is to receive data corresponding to the speech. The second AR device may not have been present when the speech was initially spoken. Data corresponding to the speech may be transmitted to the second AR device.
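As a rough illustration of the capture-and-relay flow described above, the following Python sketch models a device that captures speech and later transmits the corresponding record to a second device that was absent. All class and method names (ARDevice, SpeechRecord, transmit) are hypothetical; the abstract specifies no API.

```python
# Illustrative sketch of the capture-and-relay flow described above.
# All names here are hypothetical; the abstract does not define an API.
import time
from dataclasses import dataclass, field

@dataclass
class SpeechRecord:
    speaker_id: str
    text: str            # e.g., output of a speech-to-text stage
    timestamp: float = field(default_factory=time.time)

class ARDevice:
    def __init__(self, device_id: str):
        self.device_id = device_id
        self.inbox: list[SpeechRecord] = []

    def capture_speech(self, speaker_id: str, text: str) -> SpeechRecord:
        """Capture speech spoken by a person in the real-world scene."""
        record = SpeechRecord(speaker_id, text)
        self.inbox.append(record)
        return record

    def transmit(self, record: SpeechRecord, recipient: "ARDevice") -> None:
        """Send data corresponding to the speech to a second AR device
        that was not present when the speech was spoken."""
        recipient.inbox.append(record)

# Usage: device A captures speech, then relays it to device B.
a, b = ARDevice("A"), ARDevice("B")
rec = a.capture_speech("person-1", "The meeting starts at noon.")
a.transmit(rec, b)
print(b.inbox[0].text)
```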
Abstract:
A mobile device includes a housing. The mobile device further includes a first earpiece accessible via a first aperture in a first side of the housing. The mobile device further includes a second earpiece accessible via a second aperture in a second side of the housing, where the second aperture is located substantially in the center of the second side.
Abstract:
A multi-channel sound (MCS) system features intelligent calibration (e.g., of acoustic echo cancellation (AEC)) for use in dynamic acoustic environments. A sensor subsystem is utilized to detect and identify changes in the acoustic environment and determine a “scene” corresponding to the resulting acoustic characteristics for that environment. This detected scene is compared to predetermined scenes corresponding to the acoustic environment. Each predetermined scene has a corresponding pre-tuned filter configuration for optimal AEC performance. Based on the results of the comparison, the pre-tuned filter configuration corresponding to the predetermined scene that most closely matches the detected scene is utilized by the AEC subsystem of the multi-channel sound system.
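The scene-matching step amounts to a nearest-neighbor lookup over the predetermined scenes. A minimal Python sketch follows; the feature vectors, the Euclidean distance metric, and the filter parameters are illustrative assumptions, not details taken from the abstract.

```python
# Illustrative nearest-scene lookup for selecting a pre-tuned AEC
# filter configuration. The scene feature vectors and the Euclidean
# distance metric are assumptions; the abstract does not specify them.
import math

# Predetermined scenes: acoustic feature vector -> pre-tuned filter config.
PREDETERMINED_SCENES = {
    "doors_open":   ([0.82, 0.10, 0.45], {"filter_len": 256, "step": 0.05}),
    "doors_closed": ([0.31, 0.60, 0.20], {"filter_len": 512, "step": 0.02}),
    "occupied":     ([0.55, 0.40, 0.70], {"filter_len": 384, "step": 0.03}),
}

def select_filter_config(detected: list[float]) -> dict:
    """Return the filter config of the predetermined scene that most
    closely matches the detected scene."""
    best_features, best_config = min(
        PREDETERMINED_SCENES.values(),
        key=lambda scene: math.dist(scene[0], detected))
    return best_config

# Usage: the sensor subsystem reports a detected scene vector.
print(select_filter_config([0.50, 0.42, 0.65]))  # -> config for "occupied"
```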
Abstract:
This disclosure provides systems, methods, and apparatus, including computer programs encoded on computer storage media, for displaying information in various display regions within wearable display devices in a manner that enhances user experience. The wearable display devices may include a flexible display region and may be capable of operating in a wrinkled state. In one aspect, a wearable display device includes a plurality of sensors configured to determine the state of the display. The sensors may, for example, be configured to detect pressure, light, and/or deformation. In some aspects, the device includes a processor configured to provide image data to the display. In some aspects, the processor is capable of changing at least one characteristic of the image data provided to the display based at least in part on input received from the sensors. For example, the processor may re-size an image and/or deactivate a portion of the display.
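One way to picture the processor's role is a function that maps sensor readings to adjusted image data. The sketch below covers only the deformation-driven resize/deactivate example mentioned above; the thresholds and data shapes are assumptions.

```python
# Illustrative sketch of adapting image data to the display state.
# Sensor representation, thresholds, and the resize/deactivate policy
# are assumptions; the abstract leaves these details open.
from dataclasses import dataclass

@dataclass
class SensorInput:
    pressure: float      # normalized 0..1
    light: float         # normalized 0..1
    deformation: float   # fraction of display area wrinkled, 0..1

@dataclass
class ImageData:
    width: int
    height: int
    active_fraction: float = 1.0  # portion of the display left active

def adapt_image(image: ImageData, sensors: SensorInput) -> ImageData:
    """Change a characteristic of the image data based on sensor input."""
    if sensors.deformation > 0.3:
        # Deactivate the wrinkled portion of the display and
        # re-size the image into the remaining region.
        image.active_fraction = 1.0 - sensors.deformation
        image.width = int(image.width * image.active_fraction)
    return image

print(adapt_image(ImageData(640, 360), SensorInput(0.2, 0.8, 0.4)))
```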
Abstract:
Various embodiments provide methods implemented by a requesting device and a responding device for collectively identifying one or more clusters of nearby computing devices by collaborating and sharing information. In various embodiments, the requesting device may send a distance threshold to responding devices, along with a request for grouping information about computing devices that are within the distance threshold of the responding devices. In response to receiving the request and distance threshold, each responding device may identify a number of other computing devices that are within the distance threshold and may send such information to the requesting device. The requesting device may identify one or more clusters of computing devices based on the received grouping information. By configuring responding devices to detect other computing devices within a modifiable distance threshold, the requesting device may dynamically adjust the size of identified clusters of computing devices.
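The exchange between requesting and responding devices can be sketched as follows. One-dimensional device positions and a merge-overlapping-reports clustering rule are simplifying assumptions made for brevity.

```python
# Illustrative sketch of the requesting/responding exchange. 1-D
# positions and merging overlapping neighbor reports into clusters
# are assumptions made for brevity.
def respond(responder_pos: float, others: dict[str, float],
            threshold: float) -> list[str]:
    """Responding device: report devices within the distance threshold."""
    return [dev for dev, pos in others.items()
            if abs(pos - responder_pos) <= threshold]

def identify_clusters(positions: dict[str, float],
                      threshold: float) -> list[set[str]]:
    """Requesting device: merge overlapping reports into clusters."""
    clusters: list[set[str]] = []
    for dev, pos in positions.items():
        others = {d: p for d, p in positions.items() if d != dev}
        group = set(respond(pos, others, threshold)) | {dev}
        for c in [c for c in clusters if c & group]:
            clusters.remove(c)
            group |= c
        clusters.append(group)
    return clusters

positions = {"A": 0.0, "B": 1.0, "C": 9.0, "D": 9.5}
print(identify_clusters(positions, threshold=2.0))   # -> {A,B} and {C,D}
# A larger threshold dynamically yields larger clusters:
print(identify_clusters(positions, threshold=10.0))  # -> one cluster
```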
Abstract:
Embodiments of the present invention are directed toward controlling electronic devices based on hand gestures detected by sensing the topography of a portion of a user's body. For example, pressure data indicative of a user's bone and tissue position corresponding to a certain movement, position, and/or pose of a user's hand may be detected. An electromyographic (EMG) sensor coupled to the user's skin can also be used to determine gestures made by the user. These sensors can be coupled to a camera that, based on recognized gestures, can be used to capture images of a device. The device can then be identified and controlled.
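A rough end-to-end sketch of this pipeline, from raw sensor samples to device control, might look like the following. The threshold-based classifier and the stand-in capture and identification functions are placeholders for components the abstract leaves unspecified.

```python
# Illustrative pipeline: pressure + EMG readings -> recognized gesture
# -> camera capture -> device identification and control. Gesture
# labels, thresholds, and device names are hypothetical.
def classify_gesture(pressure: list[float], emg: list[float]) -> str | None:
    """Map raw pressure (bone/tissue position) and EMG samples to a
    gesture label. A real system would use a trained classifier;
    this threshold rule is a placeholder assumption."""
    if max(pressure) > 0.7 and sum(emg) / len(emg) > 0.5:
        return "point"
    return None

def capture_image() -> bytes:
    """Stand-in for triggering the gesture-activated camera."""
    return b"<image bytes>"

def identify_device(image: bytes) -> str:
    """Stand-in for recognizing the pointed-at device in the image."""
    return "living-room-tv"

gesture = classify_gesture([0.2, 0.9, 0.4], [0.6, 0.7, 0.5])
if gesture == "point":
    device = identify_device(capture_image())
    print(f"Controlling {device} via gesture '{gesture}'")
```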
Abstract:
An apparatus, a method, and a computer program product for detecting a gesture of a body part relative to a surface are provided. The apparatus determines if the body part is in proximity of the surface. If the body part is in proximity of the surface, the apparatus determines if electrical activity sensed from the body part is indicative of contact between the body part and the surface. If the body part is in contact with the surface, the apparatus determines if motion activity sensed from the body part is indicative of the gesture.
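The detection logic is a sequence of three gating checks. A minimal sketch, with assumed thresholds and scalar summaries of each sensed signal:

```python
# Illustrative sketch of the three-stage check described above:
# proximity -> contact (electrical activity) -> motion. Thresholds
# and signal representations are assumptions.
def detect_gesture(proximity_cm: float,
                   electrical_activity: float,
                   motion_energy: float) -> bool:
    """Return True only if all three stages indicate a surface gesture."""
    if proximity_cm > 5.0:          # body part not in proximity of the surface
        return False
    if electrical_activity < 0.4:   # activity not indicative of contact
        return False
    return motion_energy > 0.6      # motion activity indicative of the gesture

print(detect_gesture(proximity_cm=2.0,
                     electrical_activity=0.7,
                     motion_energy=0.8))  # -> True
```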
Abstract:
Various arrangements for customizing a configuration of a mobile device are presented. The mobile device may collect proximity data. The mobile device may determine that a user has gripped the mobile device based on the proximity data. A finger length of the user may be determined using the proximity data. Configuration of the mobile device may be customized at least partially based on the determined finger length of the user.
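As a sketch of how proximity data might yield a finger-length estimate that then drives configuration, consider the following. The covered-span estimation rule and the layout thresholds are assumptions.

```python
# Illustrative sketch: estimate finger length from proximity data
# gathered while the device is gripped, then pick a UI configuration.
# The estimation rule and layout thresholds are assumptions.
def finger_length_cm(readings: list[tuple[float, int]]) -> float:
    """readings: (position_cm_along_edge, is_covered) pairs from the
    proximity sensors. Approximates finger length as the covered span."""
    covered = [pos for pos, is_covered in readings if is_covered]
    return (max(covered) - min(covered)) if covered else 0.0

def customize_configuration(length_cm: float) -> dict:
    """Pick reachable UI dimensions based on the estimated finger length."""
    if length_cm < 6.0:
        return {"keyboard": "compact", "button_size": "large"}
    return {"keyboard": "full", "button_size": "standard"}

readings = [(0.0, 0), (1.0, 1), (2.0, 1), (3.0, 1), (4.0, 1), (5.0, 0)]
print(customize_configuration(finger_length_cm(readings)))
```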
Abstract:
Various arrangements for recognizing a gesture are presented. User input may be received that causes a gesture classification context to be applied from a plurality of gesture classification contexts. This gesture classification context may be applied, such as to a gesture analysis engine. After applying the gesture classification context, data indicative of a gesture performed by a user may be received. The gesture may be identified in accordance with the applied gesture classification context.
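A small sketch of context-dependent classification: the same raw gesture maps to different meanings depending on which context the user input applied. The context names and gesture sets here are hypothetical; the abstract does not enumerate them.

```python
# Illustrative sketch of context-dependent gesture classification.
# Context names, gesture sets, and the engine API are hypothetical.
GESTURE_CONTEXTS = {
    "media_playback": {"swipe_left": "previous", "swipe_right": "next"},
    "text_entry":     {"swipe_left": "delete_word", "swipe_right": "space"},
}

class GestureAnalysisEngine:
    def __init__(self) -> None:
        self.context: dict[str, str] | None = None

    def apply_context(self, name: str) -> None:
        """User input selects which classification context is active."""
        self.context = GESTURE_CONTEXTS[name]

    def identify(self, gesture: str) -> str:
        """Identify the gesture in accordance with the applied context."""
        if self.context is None:
            raise RuntimeError("no gesture classification context applied")
        return self.context[gesture]

engine = GestureAnalysisEngine()
engine.apply_context("media_playback")
print(engine.identify("swipe_right"))  # -> "next"
```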