Abstract:
A wearable computing device can detect device-raising gestures. For example, onboard motion sensors of the device can detect movement of the device in real time and infer information about the spatial orientation of the device. Based on analysis of signals from the motion sensors, the device can detect a raise gesture, which can be a motion pattern consistent with the user moving the device's display into his line of sight. In response to detecting a raise gesture, the device can activate its display and/or other components. Detection of a raise gesture can occur in stages, and activation of different components can occur at different stages.
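The staged detection described above might be sketched as follows. This is an illustrative assumption, not the patented method: the thresholds, the use of a single accelerometer axis, and the two-stage "possible"/"detected" structure are all invented for this sketch.

```python
# Hypothetical sketch of staged raise-gesture detection. Thresholds, the
# single y-axis input, and the stage names are illustrative assumptions.

def classify_raise(samples, tilt_start=-0.8, tilt_end=0.5, settle=0.05):
    """Classify a window of accelerometer y-axis readings (in g).

    Returns one of "none", "possible", or "detected":
      - "possible" once the wrist begins rotating up from a lowered pose,
      - "detected" once the display reaches a line-of-sight orientation
        and the motion has settled (low variance over the last samples).
    """
    if not samples or samples[0] > tilt_start:
        return "none"  # window did not start from a lowered-wrist pose
    if max(samples) < tilt_end:
        return "possible"  # rotation under way, display not yet raised
    tail = samples[-3:]
    mean = sum(tail) / len(tail)
    variance = sum((s - mean) ** 2 for s in tail) / len(tail)
    return "detected" if variance < settle else "possible"
```

The staged result maps naturally onto staged activation: a "possible" stage could wake low-power components while "detected" activates the display.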
Abstract:
Measurements can be obtained from sensors to determine a state of a device. The state can be used to determine whether to provide an alert. For example, after a first alert is provided, it can be determined that the device is not accessible to the user based on the determined state, and a second alert can be suppressed at a specified time after providing the first alert. The sensor measurements can be monitored after suppressing the second alert, and a state engine can detect a change in state based on subsequent sensor measurements. If the state change indicates that the device is accessible to the user, the second alert can be provided to the user. Alerts can be dismissed based on a change in state. A first device can coordinate alerts sent to or to be provided by a second device by suppressing or dismissing such alerts.
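A minimal state engine of this kind might look like the following sketch. The two-state accessibility model and the method names are assumptions made for illustration only.

```python
# Illustrative alert-suppression state engine. The boolean accessibility
# state and the method names are assumptions, not the patented design.

class AlertStateEngine:
    def __init__(self):
        self.accessible = True  # latest inferred device state
        self.pending = None     # a suppressed second alert, if any

    def update_state(self, sensor_accessible):
        """Feed the latest state inferred from sensor measurements.

        If the device has just become accessible and a second alert was
        suppressed, return that alert so it can now be provided.
        """
        delivered = None
        if sensor_accessible and not self.accessible and self.pending:
            delivered = self.pending
            self.pending = None
        self.accessible = sensor_accessible
        return delivered

    def second_alert(self, alert):
        """Provide the follow-up alert now, or suppress it until a state
        change indicates the device is accessible again."""
        if self.accessible:
            return alert
        self.pending = alert
        return None
```

In use, the engine suppresses the second alert while the device is inaccessible and releases it on the next state change that restores accessibility.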
Abstract:
Techniques for mobile devices to subscribe to and share raw sensor data are provided. The raw sensor data associated with sensors (e.g., accelerometers, gyroscopes, compasses, pedometers, pressure sensors, audio sensors, light sensors, barometers) of a mobile device can be used to determine the movement or activity of a user. By sharing the raw or compressed sensor data with other computing devices, the other computing devices can determine a motion state based on the sensor data. Additionally, in some instances, the other computing devices can determine a functional state based on the sensor data and the motion state. For example, a functional state classification can be associated with each motion state (e.g., driving, walking) by further describing each motion state (e.g., walking on rough terrain, driving while texting).
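A two-level classification of the kind described could be sketched as below. The feature names, thresholds, and state labels are illustrative assumptions chosen to mirror the examples in the abstract.

```python
# Toy two-level classifier over shared sensor features. Feature names,
# thresholds, and labels are illustrative assumptions only.

def classify_states(features):
    """Map shared sensor features to (motion_state, functional_state)."""
    speed = features.get("speed_mps", 0.0)      # e.g., from GPS or pedometer
    bumpiness = features.get("accel_var", 0.0)  # accelerometer variance

    # First level: motion state from gross movement.
    if speed > 8.0:
        motion = "driving"
    elif speed > 0.5:
        motion = "walking"
    else:
        motion = "stationary"

    # Second level: functional state refines the motion state.
    if motion == "walking" and bumpiness > 2.0:
        functional = "walking_rough_terrain"
    elif motion == "driving" and features.get("screen_touches", 0) > 0:
        functional = "driving_while_texting"
    else:
        functional = motion
    return motion, functional
```

The point of the split is that a receiving device can compute the coarse motion state from the shared data alone, then layer a functional classification on top of it.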
Abstract:
Methods and mobile devices determine an exit from a vehicle. Sensors of a mobile device can be used to determine when the user is in a vehicle that is driving. The same or different sensors can be used to identify a disturbance (e.g., loss of communication connection from the mobile device to a car computer). After the disturbance, an exit confidence score can be determined at various times and compared to a threshold. The exit of the user can be determined based on the comparison of the exit confidence score to the threshold. The mobile device can perform one or more functions in response to the exit confidence score exceeding the threshold, such as changing a user interface (e.g., of a navigation app) or obtaining a location to designate a parking location.
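The score-versus-threshold decision could look like the following sketch. The evidence signals and their weights are assumptions invented for illustration; the abstract does not specify how the confidence score is computed.

```python
# Hypothetical exit-confidence scoring. The evidence signals and weights
# are illustrative assumptions, not the patented scoring method.

EVIDENCE_WEIGHTS = {
    "bluetooth_disconnected": 0.4,  # lost link to the car computer
    "walking_detected": 0.35,       # motion classifier reports walking
    "speed_near_zero": 0.25,        # GPS speed dropped to roughly zero
}

def exit_confidence(evidence):
    """Combine observed post-disturbance evidence into a score in [0, 1]."""
    return sum(w for key, w in EVIDENCE_WEIGHTS.items() if evidence.get(key))

def has_exited(evidence, threshold=0.6):
    """Compare the confidence score to a threshold to decide on an exit,
    e.g., to switch a navigation UI or record a parking location."""
    return exit_confidence(evidence) > threshold
```

Evaluating the score "at various times" then amounts to re-running `has_exited` as new evidence accumulates after the disturbance.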
Abstract:
A device provides user interfaces for capturing and sending media, such as audio, video, or images, from within a message application. The device detects a movement of the device and in response, plays or records an audio message. The device detects a movement of the device and in response, sends a recorded audio message. The device removes messages from a conversation based on expiration criteria. The device shares a location with one or more message participants in a conversation.
Abstract:
In some implementations, a mobile device can be configured to provide navigation instructions to a user of the mobile device. The navigation instructions can be graphical, textual, or audio instructions. The presentation of the navigation instructions can be dynamically adjusted based on the importance of individual instructions and/or environmental conditions. For example, each navigation instruction can be associated with an importance value indicating how important the instruction is. The volume of important audio instructions can be adjusted (e.g., increased) to compensate for ambient noise so that a user will be more likely to hear the navigation instruction. The timing and/or repetition of the presentation of important instructions can be adjusted based on weather conditions, traffic conditions, or road conditions and/or road features so that a user will be less likely to miss an important navigation instruction.
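The importance- and noise-aware volume adjustment could be sketched as follows. The linear scaling rule and the decibel bounds are illustrative assumptions.

```python
# Sketch of importance- and noise-aware volume adjustment. The scaling
# rule and the quiet/loud bounds are illustrative assumptions.

def instruction_volume(base_volume, importance, ambient_db,
                       quiet_db=40.0, loud_db=90.0):
    """Raise playback volume for important instructions in noisy settings.

    base_volume: normal playback level in [0.0, 1.0].
    importance:  0.0 (routine) .. 1.0 (critical turn).
    ambient_db:  ambient noise level measured by the microphone.
    Returns an adjusted volume in [0.0, 1.0].
    """
    # Normalize ambient noise to [0, 1] between the quiet and loud bounds.
    noise = min(max((ambient_db - quiet_db) / (loud_db - quiet_db), 0.0), 1.0)
    # Boost toward full volume in proportion to both noise and importance.
    boost = noise * importance * (1.0 - base_volume)
    return min(base_volume + boost, 1.0)
```

A routine instruction keeps its base volume even in a loud cabin, while a critical one is pushed toward full volume as ambient noise rises.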
Abstract:
Systems, methods, devices and computer-readable storage mediums are disclosed for correcting a compass view using map data. In an implementation, a method comprises: receiving, by one or more sensors of a mobile device, sensor data; determining, by a processor of the mobile device, compass offset data for a compass view based on the sensor data and map data; determining, by the processor, a corrected compass view based on the compass offset data; and presenting, by the processor, the corrected compass view.
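One simple form of the offset computation is sketched below. It assumes the map data supplies the true bearing of the road the user is traveling along; the wrapping convention is an illustrative choice.

```python
# Minimal sketch of compass correction against map data. Assumes the map
# supplies the true bearing of the current road; details are illustrative.

def compass_offset(sensor_heading, map_bearing):
    """Offset in degrees to add to the sensor heading, wrapped to (-180, 180]."""
    diff = (map_bearing - sensor_heading) % 360.0
    return diff - 360.0 if diff > 180.0 else diff

def corrected_heading(sensor_heading, offset):
    """Apply the offset to produce the heading for the corrected compass view."""
    return (sensor_heading + offset) % 360.0
```

Wrapping the offset into (-180, 180] keeps corrections minimal across the 0/360 boundary, e.g., a sensor heading of 350 against a map bearing of 10 yields +20 rather than -340.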
Abstract:
An electronic device includes a touch-sensitive surface, a display, and a camera sensor. The device displays a message region for displaying a message conversation and receives a request to add media to the message conversation. Responsive to receiving the request, the device displays a media selection interface concurrently with at least a portion of the message conversation. The media selection interface includes a plurality of affordances for selecting media for addition to the message conversation, the plurality of affordances includes a live preview affordance, at least a subset of the plurality of affordances includes thumbnail representations of media available for adding to the message conversation, and the live preview affordance is associated with a live camera preview. Responsive to detecting selection of the live preview affordance, the device captures a new image based on the live camera preview and selects the new image for addition to the message conversation.