Abstract:
Systems and methods are disclosed for tracking physiological states and parameters for calorie estimation. A start of an exercise session associated with a user of a wearable computing device is determined. Heart rate data is measured for a first period of time. An onset heart rate value of the user is determined based on the measured heart rate data, the onset heart rate value associated with a lowest valid heart rate measured during the first period of time. A resting heart rate (RHR) parameter of a calorimetry model is associated with at least one of the onset heart rate value, a preset RHR, and an RHR based on user biometric data. Energy expenditure of the user during a second period of time is estimated based on the calorimetry model and a plurality of heart rate measurements obtained by the wearable computing device during the second period of time.
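A minimal sketch of how the RHR selection and energy estimate described above might look in code. The heart-rate-reserve model, type names, and parameters below are illustrative assumptions and are not specified in the abstract.

struct HeartRateSample {
    let bpm: Double
    let isValid: Bool   // e.g. passed a sensor confidence check
}

/// Onset heart rate: lowest valid heart rate seen during the first period.
func onsetHeartRate(firstPeriod samples: [HeartRateSample]) -> Double? {
    samples.filter { $0.isValid }.map { $0.bpm }.min()
}

/// Choose the resting-heart-rate (RHR) parameter for the calorimetry model.
func restingHeartRateParameter(onset: Double?,
                               presetRHR: Double,
                               biometricRHR: Double?) -> Double {
    // Prefer a measured onset value, then a biometric estimate, then the preset.
    onset ?? biometricRHR ?? presetRHR
}

/// Very simplified calorimetry model (illustrative only): energy expenditure
/// scales with the fraction of heart-rate reserve in use at each sample.
func estimateEnergyKcal(secondPeriod samples: [HeartRateSample],
                        rhr: Double,
                        maxHR: Double,
                        minutesPerSample: Double,
                        kcalPerMinuteAtMax: Double) -> Double {
    samples.filter { $0.isValid }.reduce(0.0) { total, s in
        let reserveFraction = max(0, min(1, (s.bpm - rhr) / (maxHR - rhr)))
        return total + reserveFraction * kcalPerMinuteAtMax * minutesPerSample
    }
}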
Abstract:
In one aspect, the present disclosure relates to a method including obtaining, by a heart rate sensor of a fitness tracking device, a heart rate measurement of a user of the fitness tracking device; obtaining, by at least one motion sensor, motion data of the user; analyzing, by the fitness tracking device, the motion data of the user to estimate a step rate of the user; estimating, by the fitness tracking device, a load associated with a physical activity of the user by comparing the heart rate measurement with the step rate of the user; and estimating, by the fitness tracking device, an energy expenditure rate of the user using the load and at least one of the heart rate measurement and the step rate.
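A hedged sketch of one way the heart-rate-to-step-rate comparison could yield a load estimate; the abstract does not define the comparison, so the beats-per-step ratio and parameter names here are assumptions.

struct ActivitySample {
    let heartRateBPM: Double
    let stepsPerMinute: Double
}

/// Estimate a load factor by comparing heart rate with step rate.
/// A higher heart rate at the same step rate suggests a heavier load
/// (for example, an incline or carried weight).
func estimateLoad(sample: ActivitySample, baselineBeatsPerStep: Double) -> Double {
    guard sample.stepsPerMinute > 0 else { return 1.0 }
    let beatsPerStep = sample.heartRateBPM / sample.stepsPerMinute
    return beatsPerStep / baselineBeatsPerStep
}

/// Energy expenditure rate (kcal/min) scaled by the estimated load.
func energyRate(sample: ActivitySample,
                load: Double,
                kcalPerMinutePerStepRate: Double) -> Double {
    load * sample.stepsPerMinute * kcalPerMinutePerStepRate
}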
Abstract:
In one aspect, the present disclosure relates to a method, including obtaining, by the fitness tracking device, motion data of the user over a period of time, wherein the motion data can include a first plurality of motion measurements from a first motion sensor of the fitness tracking device; determining, by the fitness tracking device, using the motion data, an angle of the fitness tracking device relative to a plane during the period of time; estimating, by the fitness tracking device, using the motion data, a range of linear motion of the fitness tracking device through space during the period of time; and comparing, by the fitness tracking device, the angle of the fitness tracking device to a threshold angle and comparing the range of linear motion of the fitness tracking device to a threshold range of linear motion to determine whether the user is sitting or standing.
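A minimal sketch of the sit/stand decision described above. The specific thresholds, units, and the direction of each comparison are placeholder assumptions, not taken from the abstract.

enum Posture { case sitting, standing, unknown }

/// Classify posture from the device's angle relative to a reference plane
/// (degrees) and the range of linear motion of the device through space
/// (meters) over the observation period.
func classifyPosture(deviceAngleDegrees: Double,
                     rangeOfLinearMotionMeters: Double,
                     angleThresholdDegrees: Double = 30,
                     motionThresholdMeters: Double = 0.4) -> Posture {
    // A wrist held near horizontal with little linear travel is treated as
    // more consistent with sitting; larger travel as more consistent with standing.
    if deviceAngleDegrees < angleThresholdDegrees &&
        rangeOfLinearMotionMeters < motionThresholdMeters {
        return .sitting
    }
    if rangeOfLinearMotionMeters >= motionThresholdMeters {
        return .standing
    }
    return .unknown
}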
Abstract:
Embodiments are disclosed for adaptive audio centering for head tracking in spatial audio applications. In an embodiment, a method comprises: obtaining first motion data from an auxiliary device communicatively coupled to a source device, the source device configured to provide spatial audio content and the auxiliary device configured to play back the spatial audio content; obtaining second motion data from one or more motion sensors of the source device; determining whether the source device and auxiliary device are in a period of mutual quiescence based on the first and second motion data; in accordance with determining that the source device and the auxiliary device are in a period of mutual quiescence, re-centering the spatial audio in a three-dimensional (3D) virtual auditory space; and rendering the 3D virtual auditory space for playback on the auxiliary device.
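A sketch of the mutual-quiescence check that gates re-centering. The rotation-rate statistic, threshold value, and window handling are assumptions; the abstract only states that quiescence is determined from the two motion-data streams.

struct MotionWindow {
    /// Rotation-rate magnitudes (rad/s) sampled over the window.
    let rotationRates: [Double]
}

func isQuiescent(_ window: MotionWindow, threshold: Double = 0.05) -> Bool {
    guard let peak = window.rotationRates.max() else { return false }
    return peak < threshold
}

/// Re-center the virtual auditory space when both devices are quiescent:
/// the listener's current head yaw (radians) becomes the new "front".
func maybeRecenter(sourceWindow: MotionWindow,
                   auxWindow: MotionWindow,
                   currentHeadYaw: Double,
                   referenceYaw: inout Double) -> Bool {
    guard isQuiescent(sourceWindow), isQuiescent(auxWindow) else { return false }
    referenceYaw = currentHeadYaw   // audio "front" now tracks this orientation
    return true
}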
Abstract:
In some implementations, a technique includes, responsive to a trigger signal at an associated first time, generating, by a mobile device, a first location value using a first ranging session with one or more other devices. The technique may include storing the first location value in a memory. The technique may include tracking, using a motion sensor of the mobile device, motion of the mobile device to determine a present location relative to the first location value. Further, the technique may include determining that the present location for the mobile device has changed by more than a predetermined threshold amount from the first location value since the associated first time. Responsive to the present location for the mobile device having changed by more than the predetermined threshold amount since the associated first time, the technique may include generating a second location value using a second ranging session with the one or more other devices.
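An illustrative sketch of the pattern described above: dead-reckon from the last ranged location and trigger a new ranging session once the estimated displacement exceeds a threshold. The ranging step is abstracted behind a caller-supplied closure, and all names are placeholders.

struct Location { var x: Double; var y: Double }

final class RangingScheduler {
    private(set) var lastRangedLocation: Location
    private var offset = Location(x: 0, y: 0)   // displacement from motion sensing
    let thresholdMeters: Double

    init(initialLocation: Location, thresholdMeters: Double) {
        self.lastRangedLocation = initialLocation
        self.thresholdMeters = thresholdMeters
    }

    /// Accumulate displacement estimated from the motion sensor.
    func addMotionDelta(dx: Double, dy: Double) {
        offset.x += dx
        offset.y += dy
    }

    /// If the device has moved more than the threshold since the last ranging
    /// session, run a new one (supplied by the caller) and reset the offset.
    func updateIfNeeded(range: () -> Location) -> Location {
        let drift = (offset.x * offset.x + offset.y * offset.y).squareRoot()
        if drift > thresholdMeters {
            lastRangedLocation = range()
            offset = Location(x: 0, y: 0)
        }
        return Location(x: lastRangedLocation.x + offset.x,
                        y: lastRangedLocation.y + offset.y)
    }
}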
Abstract:
Embodiments are disclosed for head tracking state detection based on correlated motion of a source device and a headset communicatively coupled to the source device. In an embodiment, a method comprises: obtaining, using one or more processors of a source device, source device motion data from a source device and headset motion data from a headset; determining, using the one or more processors, correlation measures using the source device motion data and the headset motion data; updating, using the one or more processors, a motion tracking state based on the determined correlation measures; and initiating head pose tracking in accordance with the updated motion tracking state.
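A hedged sketch of one possible correlation measure and state update. The abstract does not name the statistic; Pearson correlation of rotation-rate magnitudes and the 0.9 threshold below are assumptions.

enum TrackingState { case headTracking, correlatedMotion }

func pearsonCorrelation(_ a: [Double], _ b: [Double]) -> Double {
    let n = min(a.count, b.count)
    guard n > 1 else { return 0 }
    let x = Array(a.prefix(n)), y = Array(b.prefix(n))
    let mx = x.reduce(0, +) / Double(n), my = y.reduce(0, +) / Double(n)
    var num = 0.0, dx2 = 0.0, dy2 = 0.0
    for i in 0..<n {
        let dx = x[i] - mx, dy = y[i] - my
        num += dx * dy; dx2 += dx * dx; dy2 += dy * dy
    }
    let denom = (dx2 * dy2).squareRoot()
    return denom > 0 ? num / denom : 0
}

/// Highly correlated motion suggests the headset is not moving relative to
/// the source device (for example, both in a vehicle), so the motion tracking
/// state is updated accordingly before head pose tracking is initiated.
func updateTrackingState(sourceMotion: [Double],
                         headsetMotion: [Double],
                         correlationThreshold: Double = 0.9) -> TrackingState {
    let correlated = pearsonCorrelation(sourceMotion, headsetMotion) > correlationThreshold
    return correlated ? .correlatedMotion : .headTracking
}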
Abstract:
Embodiments are disclosed for disabling/re-enabling head tracking for spatial audio applications. In an embodiment, a method comprises: obtaining, using one or more processors of an auxiliary device worn by a user, motion data; tracking, using the one or more processors, the user's head based at least in part on the motion data; determining, using the one or more processors, whether or not the user is walking based at least in part on the motion data; in accordance with determining that the user is walking, determining whether a source device configured to deliver spatial audio to the auxiliary device is static for a specified period of time; and in accordance with determining that the user is walking and the source device is static for the specified period of time, disabling the head tracking.
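A minimal sketch of the walking-plus-static-source gate described above; the walking detector and the "static" test are stand-ins, and the five-second duration is only an example value.

struct HeadTrackingGate {
    let requiredStaticSeconds: Double

    /// Disable head tracking only if the user is walking and the source
    /// device has been static for at least the required duration.
    func shouldDisableHeadTracking(userIsWalking: Bool,
                                   sourceStaticSeconds: Double) -> Bool {
        userIsWalking && sourceStaticSeconds >= requiredStaticSeconds
    }
}

// Example: the user walks away while the source device sits still for 6 seconds.
let gate = HeadTrackingGate(requiredStaticSeconds: 5)
let disable = gate.shouldDisableHeadTracking(userIsWalking: true,
                                             sourceStaticSeconds: 6)
// disable == true: the source-to-head geometry is no longer meaningful,
// so head-tracked spatial rendering is suspended.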
Abstract:
In an example method, a mobile device obtains a signal indicating an acceleration measured by a sensor over a time period. The mobile device determines an impact experienced by a user of the mobile device based on the signal. The mobile device also determines, based on the signal, one or more first motion characteristics of the user during a time prior to the impact, and one or more second motion characteristics of the user during a time after the impact. The mobile device determines that the user has fallen based on the impact, the one or more first motion characteristics of the user, and the one or more second motion characteristics of the user, and in response, generates a notification indicating that the user has fallen.
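An illustrative sketch of the decision structure: an impact (acceleration spike) preceded by active motion and followed by near-stillness triggers the fall determination. The thresholds, window sizes, and the use of peak magnitude as the motion characteristic are assumptions, not from the abstract.

struct AccelerationSignal {
    /// Acceleration magnitudes in g, sampled uniformly over the time period.
    let magnitudes: [Double]
}

func detectFall(signal: AccelerationSignal,
                impactThresholdG: Double = 2.5,
                preWindow: Int = 50,
                postWindow: Int = 50,
                stillnessThresholdG: Double = 1.1) -> Bool {
    let m = signal.magnitudes
    // 1. Find the impact: the largest spike, provided it exceeds the threshold.
    guard let impactIndex = m.indices.max(by: { m[$0] < m[$1] }),
          m[impactIndex] >= impactThresholdG else { return false }

    // 2. First motion characteristic: the user was moving before the impact.
    let pre = m[max(0, impactIndex - preWindow)..<impactIndex]
    let wasMovingBefore = (pre.max() ?? 0) > stillnessThresholdG

    // 3. Second motion characteristic: the user is comparatively still afterward.
    let postStart = min(m.count, impactIndex + 1)
    let post = m[postStart..<min(m.count, postStart + postWindow)]
    let isStillAfter = !post.isEmpty && (post.max() ?? .infinity) < stillnessThresholdG

    // A true result would prompt the notification described in the abstract.
    return wasMovingBefore && isStillAfter
}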
Abstract:
Improved techniques and systems are disclosed for determining the components of resistance experienced by a wearer of a wearable device engaged in an activity such as bicycling or running. By monitoring data using the wearable device, improved estimates can be derived for various factors contributing to the resistance experienced by the user in the course of the activity. Using these improved estimates, data sampling rates may be reduced for some or all of the monitored data.
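A hedged sketch of a standard decomposition of cycling resistance into rolling, aerodynamic, and gravitational components. The abstract does not give the model; this is textbook physics with illustrative default coefficients, shown only to make the notion of "components of resistance" concrete.

import Foundation

struct RideState {
    let speedMS: Double        // ground speed, m/s
    let airSpeedMS: Double     // speed relative to the air, m/s
    let gradeRadians: Double   // road grade expressed as an angle
    let totalMassKg: Double    // rider plus bicycle
}

struct ResistanceComponents {
    let rollingN: Double
    let aerodynamicN: Double
    let gravitationalN: Double
    var totalN: Double { rollingN + aerodynamicN + gravitationalN }
}

func resistance(for state: RideState,
                crr: Double = 0.005,          // rolling resistance coefficient
                cdA: Double = 0.32,           // drag area, m^2
                airDensity: Double = 1.225,   // kg/m^3
                g: Double = 9.81) -> ResistanceComponents {
    let rolling = crr * state.totalMassKg * g * cos(state.gradeRadians)
    let aero = 0.5 * airDensity * cdA * state.airSpeedMS * state.airSpeedMS
    let gravity = state.totalMassKg * g * sin(state.gradeRadians)
    return ResistanceComponents(rollingN: rolling,
                                aerodynamicN: aero,
                                gravitationalN: gravity)
}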