Abstract:
In some implementations, a mobile device can be configured to provide simplified audio navigation instructions. The simplified audio navigation instructions provide a reduced set of audio navigation instructions so that audio instructions are only presented to the user when the user wishes or needs to hear them. The simplified audio navigation instructions can be enabled by the user or enabled automatically. The simplified audio navigation instructions can be configured with rules for when to present audio navigation instructions. For example, the rules can specify that audio navigation instructions are to be provided for complex road segments, a user-defined portion of a route, or specified road types, among other criteria. The mobile device can be configured with exceptions to the rules such that audio navigation instructions can be presented when the user has, for example, deviated from a defined route.
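The rule-based gating described above could be sketched as a small predicate that decides whether to speak an upcoming instruction. The rule names, thresholds, and the `RouteSegment` structure below are illustrative assumptions, not the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class RouteSegment:
    """Illustrative description of the road segment ahead (assumed fields)."""
    complexity: float          # e.g. number of closely spaced maneuvers
    road_type: str             # "highway", "residential", ...
    in_user_selected_portion: bool
    deviates_from_route: bool  # user has left the planned route

# Assumed rule parameters; real values would be tuned or user-configured.
COMPLEXITY_THRESHOLD = 2.0
AUDIBLE_ROAD_TYPES = {"highway", "motorway"}

def should_speak_instruction(segment: RouteSegment) -> bool:
    """Return True if an audio instruction should be presented for this segment."""
    # Exception to the rules: always speak after a route deviation.
    if segment.deviates_from_route:
        return True
    # Rule: speak for complex road segments.
    if segment.complexity >= COMPLEXITY_THRESHOLD:
        return True
    # Rule: speak for road types the user asked to hear about.
    if segment.road_type in AUDIBLE_ROAD_TYPES:
        return True
    # Rule: speak inside a user-defined portion of the route.
    return segment.in_user_selected_portion
```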
Abstract:
Motion sensors of a mobile device mounted to a vehicle are used to detect a mount angle of the mobile device. The motion sensors are used to determine whether the vehicle is accelerating or decelerating, whether the vehicle is turning, and whether the mobile device is rotating in its mount. The mount angle of the mobile device is obtained from data output from the motion sensors and can be used to correct a compass heading. Data from the motion sensors obtained while the vehicle is turning or the mobile device is rotating are not used to obtain the mount angle.
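One way to read this is as a gated estimator: accumulate accelerometer samples only while the vehicle speeds up or slows down in a straight line, and discard samples taken while the vehicle turns or the device itself rotates. The sketch below is a simplified 2-D illustration under those assumptions; the thresholds, sample fields, and turn/rotation tests are hypothetical.

```python
import math

# Hypothetical thresholds; a real system would tune these.
TURN_RATE_THRESHOLD = 0.05        # rad/s: vehicle yaw rate above this => turning
DEVICE_ROTATION_THRESHOLD = 0.05  # rad/s: device gyro rate above this => device rotating
ACCEL_THRESHOLD = 0.5             # m/s^2: minimum accel/decel magnitude to use a sample

def estimate_mount_angle(samples):
    """
    Estimate the mount angle (device x-axis vs. vehicle forward axis) from
    accelerometer samples taken while the vehicle accelerates or decelerates.

    Each sample is a dict with:
      ax, ay    -- horizontal-plane acceleration in the device frame
      yaw_rate  -- vehicle turn rate (e.g. derived from heading changes)
      gyro_rate -- device rotation rate from the gyroscope
    """
    sin_sum = cos_sum = 0.0
    used = 0
    for s in samples:
        if abs(s["yaw_rate"]) > TURN_RATE_THRESHOLD:
            continue  # vehicle is turning: acceleration direction is unreliable
        if abs(s["gyro_rate"]) > DEVICE_ROTATION_THRESHOLD:
            continue  # device is rotating in its mount: skip
        if math.hypot(s["ax"], s["ay"]) < ACCEL_THRESHOLD:
            continue  # not clearly accelerating or decelerating
        # Acceleration (or braking) points along the vehicle's forward axis, so its
        # direction in the device frame reveals the mount angle.  (This sketch
        # ignores the 180-degree ambiguity between speeding up and braking.)
        angle = math.atan2(s["ay"], s["ax"])
        sin_sum += math.sin(angle)
        cos_sum += math.cos(angle)
        used += 1
    if used == 0:
        return None  # not enough straight-line accel/decel data yet
    return math.atan2(sin_sum, cos_sum)  # circular mean of the accepted angles
```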
Abstract:
In general, in one aspect, a method includes receiving, over a period of time, data from one or more motion sensors of a mobile device and calculating, after the period of time, a statistical measurement of the motion sensor data. The method also includes comparing the calculated statistical measurement to one or more threshold values and, based on the comparing, determining a dynamic state of the mobile device. The method also includes, based on the determined dynamic state, determining an orientation of the mobile device.
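As a rough illustration of that pipeline, the sketch below computes a standard deviation of accelerometer magnitudes over a window, classifies the device as static or moving against an assumed threshold, and only estimates orientation from gravity when the device is static. All names and thresholds are assumptions.

```python
import math
import statistics

STATIC_STD_THRESHOLD = 0.1  # assumed threshold on accel-magnitude std dev (m/s^2)

def dynamic_state(accel_window):
    """Classify the device as 'static' or 'moving' from a window of (x, y, z) samples."""
    magnitudes = [math.sqrt(x * x + y * y + z * z) for x, y, z in accel_window]
    spread = statistics.pstdev(magnitudes)
    return "static" if spread < STATIC_STD_THRESHOLD else "moving"

def orientation_from_gravity(accel_window):
    """
    Estimate pitch and roll (radians) from the average gravity vector.
    Only meaningful when the dynamic state is 'static', because linear
    acceleration otherwise corrupts the gravity estimate.
    """
    n = len(accel_window)
    gx = sum(s[0] for s in accel_window) / n
    gy = sum(s[1] for s in accel_window) / n
    gz = sum(s[2] for s in accel_window) / n
    pitch = math.atan2(-gx, math.sqrt(gy * gy + gz * gz))
    roll = math.atan2(gy, gz)
    return pitch, roll

def estimate_orientation(accel_window):
    """Combine the two steps: determine the state first, then the orientation if allowed."""
    if dynamic_state(accel_window) == "static":
        return orientation_from_gravity(accel_window)
    return None  # defer the orientation estimate while the device is moving
```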
Abstract:
A wearable computing device can detect device-raising gestures. For example, onboard motion sensors of the device can detect movement of the device in real time and infer information about the spatial orientation of the device. Based on analysis of signals from the motion sensors, the device can detect a raise gesture, which can be a motion pattern consistent with the user moving the device's display into his line of sight. In response to detecting a raise gesture, the device can activate its display and/or other components. Detection of a raise gesture can occur in stages, and activation of different components can occur at different stages.
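The staged detection could be sketched as a small state machine that advances from a possible raise to a detected raise and then a confirmed one, activating more components at each stage. The stages, input features, and thresholds below are assumptions for illustration, not the device's actual detector.

```python
# Hypothetical stages of raise-gesture detection; components wake up progressively.
POSSIBLE, DETECTED, CONFIRMED = "possible", "detected", "confirmed"

# Assumed thresholds on wrist rotation (radians) and display tilt toward the face.
ROTATION_START = 0.3
TILT_IN_VIEW = 0.8

class RaiseGestureDetector:
    def __init__(self, activate):
        self.state = None
        self.activate = activate  # callback: activate(component_name)

    def on_sample(self, wrist_rotation, display_tilt, motion_settled):
        """Feed one fused motion sample; advance a stage when the evidence supports it."""
        if self.state is None and wrist_rotation > ROTATION_START:
            # Early hint of a raise: start low-cost preparation only.
            self.state = POSSIBLE
            self.activate("sensor_high_rate_mode")
        elif self.state == POSSIBLE and display_tilt > TILT_IN_VIEW:
            # Motion pattern now consistent with bringing the display into view.
            self.state = DETECTED
            self.activate("display")
        elif self.state == DETECTED and motion_settled:
            # The pose is being held: commit and wake further components.
            self.state = CONFIRMED
            self.activate("application_processor")

# Example usage with a trivial activation callback.
detector = RaiseGestureDetector(activate=lambda name: print("activating", name))
detector.on_sample(wrist_rotation=0.4, display_tilt=0.2, motion_settled=False)
detector.on_sample(wrist_rotation=0.5, display_tilt=0.9, motion_settled=False)
detector.on_sample(wrist_rotation=0.5, display_tilt=0.9, motion_settled=True)
```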
Abstract:
Systems, methods, devices and computer-readable storage mediums are disclosed for correcting a compass view using map data. In an implementation, a method comprises: receiving, by one or more sensors of a mobile device, sensor data; determining, by a processor of the mobile device, compass offset data for a compass view based on the sensor data and map data; determining, by the processor, a corrected compass view based on the compass offset data; and presenting, by the processor, the corrected compass view.
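A minimal sketch of that correction step, assuming the map data contributes the bearing of the road the device is traveling along and the offset is simply the difference between that bearing and the raw compass heading (the function names and wrap-around helper are illustrative):

```python
def wrap_degrees(angle):
    """Wrap an angle to the range [-180, 180) degrees."""
    return (angle + 180.0) % 360.0 - 180.0

def compass_offset(raw_heading_deg, map_road_bearing_deg):
    """
    Offset between the raw compass heading and the bearing of the mapped road
    the device appears to be traveling along (from map data and position fixes).
    """
    return wrap_degrees(map_road_bearing_deg - raw_heading_deg)

def corrected_compass_view(raw_heading_deg, offset_deg):
    """Apply the offset so the presented compass view agrees with the map."""
    return (raw_heading_deg + offset_deg) % 360.0

# Example: the raw compass reads 85 degrees while the map says the road runs at 90.
offset = compass_offset(85.0, 90.0)          # -> 5.0 degrees
print(corrected_compass_view(85.0, offset))  # -> 90.0
```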
Abstract:
Embodiments are disclosed for head motion prediction for spatial audio applications. In an embodiment, a method comprises: obtaining motion data from a source device and a headset; obtaining transmission delays from a wireless stack of the source device; estimating relative motion from the source device and headset motion data; calculating first and second derivatives of the relative motion data; forward predicting the estimated relative motion over the transmission delays using the first and second derivatives; determining, using a head tracker, a head pose of the user based on the forward predicted relative motion data; and rendering, using the head pose, spatial audio for playback on the headset.
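The forward-prediction step amounts to extrapolating each relative-motion component over the known delay using its first and second derivatives, i.e. a second-order Taylor step. The sketch below works on a single component (relative yaw) with finite-difference derivatives; the sample rate, delay value, and variable names are illustrative assumptions.

```python
def finite_derivatives(samples, dt):
    """
    First and second derivatives of the most recent sample via finite differences.
    `samples` holds at least the last three values of one relative-motion component
    (e.g. relative yaw in radians), spaced `dt` seconds apart.
    """
    x2, x1, x0 = samples[-3], samples[-2], samples[-1]
    first = (x0 - x1) / dt
    second = (x0 - 2.0 * x1 + x2) / (dt * dt)
    return first, second

def forward_predict(x0, first, second, delay):
    """Second-order Taylor extrapolation of the component over the delay."""
    return x0 + first * delay + 0.5 * second * delay * delay

# Example: relative yaw samples at 100 Hz, predicted 30 ms ahead to cover the
# transmission delay reported by the source device's wireless stack.
yaw_history = [0.00, 0.01, 0.025]
d1, d2 = finite_derivatives(yaw_history, dt=0.01)
predicted_yaw = forward_predict(yaw_history[-1], d1, d2, delay=0.030)
print(predicted_yaw)
```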
Abstract:
An electronic device includes a touch-sensitive surface, a display, and a camera sensor. The device displays a message region for displaying a message conversation and receives a request to add media to the message conversation. Responsive to receiving the request, the device displays a media selection interface concurrently with at least a portion of the message conversation. The media selection interface includes a plurality of affordances for selecting media for addition to the message conversation; at least a subset of the affordances includes thumbnail representations of media available for adding to the message conversation, and the plurality of affordances includes a live preview affordance associated with a live camera preview. Responsive to detecting selection of the live preview affordance, the device captures a new image based on the live camera preview and selects the new image for addition to the message conversation.
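As a rough structural sketch only (not the device's actual implementation), the media selection interface could be modeled as a list of affordances in which thumbnails of existing media and the live camera preview share one selection path:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Affordance:
    """One selectable item in the media selection interface (illustrative fields)."""
    kind: str                                       # "thumbnail" or "live_preview"
    media: Optional[bytes] = None                   # existing media shown as a thumbnail
    capture: Optional[Callable[[], bytes]] = None   # captures from the live camera preview

def select_affordance(affordance: Affordance, conversation_media: List[bytes]) -> None:
    """Add the chosen media to the message conversation."""
    if affordance.kind == "live_preview":
        # Selecting the live preview captures a new image first, then adds it.
        conversation_media.append(affordance.capture())
    else:
        # Selecting a thumbnail adds the media it represents.
        conversation_media.append(affordance.media)
```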
Abstract:
Embodiments are disclosed for disabling/re-enabling head tracking for spatial audio applications. In an embodiment, a method comprises: obtaining, using one or more processors of an auxiliary device worn by a user, motion data; tracking, using the one or more processors, the user's head based at least in part on the motion data; determining, using the one or more processors, whether or not the user is walking based at least in part on the motion data; in accordance with determining that the user is walking, determining if a source device configured to deliver spatial audio to the auxiliary device is static for a specified period of time; and in accordance with determining that the user is walking and the source device is static for the specified period of time, disabling the head tracking.
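A condensed sketch of that gating logic, assuming simple boolean classifiers and a timer for how long the source device has been static; the names, the 30-second window, and the re-enable rule are all assumptions:

```python
STATIC_PERIOD_S = 30.0  # assumed "specified period of time" for the source device

class HeadTrackingGate:
    """Disable head tracking while the user walks and the source device stays static."""

    def __init__(self):
        self.head_tracking_enabled = True
        self.source_static_for = 0.0  # seconds the source device has been static

    def update(self, dt, user_is_walking, source_is_static):
        """
        dt               -- seconds since the last update
        user_is_walking  -- walking classification from the auxiliary device's motion data
        source_is_static -- motion classification of the source device
        """
        self.source_static_for = self.source_static_for + dt if source_is_static else 0.0

        if user_is_walking and self.source_static_for >= STATIC_PERIOD_S:
            # User is walking while the source device sits still: stop head tracking.
            self.head_tracking_enabled = False
        elif not user_is_walking:
            # Re-enable once the user is no longer walking (one possible re-enable rule).
            self.head_tracking_enabled = True
```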
Abstract:
In an example method, a mobile device obtains a signal indicating an acceleration measured by a sensor over a time period. The mobile device determines, based on the signal, an impact experienced by a user of the mobile device. The mobile device also determines, based on the signal, one or more first motion characteristics of the user during a time prior to the impact and one or more second motion characteristics of the user during a time after the impact. The mobile device determines that the user has fallen based on the impact, the one or more first motion characteristics of the user, and the one or more second motion characteristics of the user, and in response, generates a notification indicating that the user has fallen.
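A minimal sketch of that decision, assuming the impact is detected as a peak in acceleration magnitude and the pre- and post-impact motion characteristics are reduced to simple activity and stillness measures; all thresholds and feature choices are assumptions:

```python
import math

IMPACT_THRESHOLD_G = 3.0   # assumed peak magnitude indicating an impact
PRE_ACTIVITY_MIN = 0.2     # assumed minimum pre-impact movement (e.g. walking)
POST_STILLNESS_MAX = 0.1   # assumed maximum post-impact movement (lying still)

def magnitude(sample):
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def mean_deviation_from_gravity(window):
    """Average |magnitude - 1 g| over a window; a crude activity measure."""
    return sum(abs(magnitude(s) - 1.0) for s in window) / len(window)

def detect_fall(pre_window, impact_window, post_window):
    """
    pre_window    -- accelerometer samples (in g) before the candidate impact
    impact_window -- samples around the candidate impact
    post_window   -- samples after the candidate impact
    Returns True when an impact is preceded by movement and followed by stillness.
    """
    impact = max(magnitude(s) for s in impact_window) >= IMPACT_THRESHOLD_G
    moving_before = mean_deviation_from_gravity(pre_window) >= PRE_ACTIVITY_MIN
    still_after = mean_deviation_from_gravity(post_window) <= POST_STILLNESS_MAX
    return impact and moving_before and still_after

def maybe_notify(pre_window, impact_window, post_window, notify):
    """Generate a notification (via the supplied callback) when a fall is detected."""
    if detect_fall(pre_window, impact_window, post_window):
        notify("Fall detected")
```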