Abstract:
Disclosed are methods and apparatus for transmitting sensor timing correction messages with a host controller. The methods and apparatus determine synchronization messages that are transmitted to a sensor coupled with the host controller via an interface, where the messages indicate a beginning of a synchronization period for synchronizing timing of the host controller and the sensor. Additionally, a delay time message is determined that indicates a time delay between the beginning of the synchronization period and an actual transmission time of the synchronization message. The synchronization message is transmitted with the delay time message in an information message to the sensor, where the information message is configured to allow the sensor to correct timing of a sensor timer by accounting for the delay time.
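Purely as an illustration (not part of the disclosure), the delay-time accounting might look like the sketch below, where the bus object, the host_clock_us callable, the message layout, and the sensor_timer_set callback are all assumed names:

import struct

def send_sync_message(bus, host_clock_us, period_start_us):
    # The message rarely leaves exactly at the period boundary, so the host
    # measures how late it actually is and reports that delay to the sensor.
    actual_tx_us = host_clock_us()
    delay_us = actual_tx_us - period_start_us
    # Pack the period-start marker and the delay-time value into a single
    # information message (little-endian u64 + u32; an assumed layout).
    bus.write(struct.pack("<QI", period_start_us, delay_us))

def on_info_message(payload, sensor_timer_set):
    # Sensor side: add the reported delay so the local timer reflects the
    # actual transmission time rather than the nominal period start.
    period_start_us, delay_us = struct.unpack("<QI", payload)
    sensor_timer_set(period_start_us + delay_us)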
Abstract:
Attention evaluation by an extended reality system is described, the system determining one or more regions of interest (ROI) for an image displayed to a user. The system may also receive eye tracking information indicating an area of the image that the user is looking at. The system may further generate focus statistics based on the area of the image at which the user is looking and the one or more ROI, and output the generated focus statistics.
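A minimal sketch of one way such focus statistics could be computed, assuming rectangular ROIs and a dwell-fraction metric (neither of which is specified in the abstract):

from dataclasses import dataclass

@dataclass
class ROI:
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

def focus_statistics(gaze_samples, rois):
    """gaze_samples: iterable of (x, y) gaze points from the eye tracker."""
    samples = list(gaze_samples)
    if not samples:
        return {i: 0.0 for i in range(len(rois))}
    stats = {}
    for i, roi in enumerate(rois):
        hits = sum(1 for (x, y) in samples if roi.contains(x, y))
        stats[i] = hits / len(samples)  # fraction of gaze samples on this ROI
    return stats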
Abstract:
A device includes a memory and one or more processors. The memory is configured to store instructions. The one or more processors are configured to execute the instructions to obtain electrical activity data corresponding to electrical signals from one or more electrical sources within a user's head. The one or more processors are also configured to execute the instructions to render, based on the electrical activity data, audio data to adjust a location of a sound source in a sound field during playback of the audio data.
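Purely for illustration, one way electrical activity data could steer a sound-source location is sketched below; the hemispheric-power heuristic and the constant-power pan law are assumptions, not the disclosed rendering method:

import math

def attended_azimuth(left_activity, right_activity):
    """Toy heuristic: hemispheric power difference as a proxy for the attended
    direction; returns an azimuth in degrees (negative = left)."""
    total = left_activity + right_activity
    if total == 0:
        return 0.0
    balance = (right_activity - left_activity) / total  # in [-1, 1]
    return 90.0 * balance

def stereo_gains(azimuth_deg):
    """Constant-power pan law mapping azimuth to (left, right) playback gains."""
    pan = (azimuth_deg + 90.0) / 180.0        # 0 = hard left, 1 = hard right
    theta = pan * math.pi / 2.0
    return math.cos(theta), math.sin(theta)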
Abstract:
In some aspects, a user equipment (UE) determines, using an inertial measurement unit, an orientation of the UE and determines, using ambient light sensors, an ambient light condition of the UE. The UE determines, using a machine learning module and based on the orientation and the ambient light condition, a position of the UE. If the position comprises an on-body position, the UE uses the machine learning module and touch data received by a touchscreen of the UE to determine whether the position comprises an in-hand position. If the position comprises the in-hand position, the UE determines, using the machine learning module and based on the orientation and the touch data, a grip mode. If the position comprises an off-body position, the UE determines, using the machine learning module and at least one of the inertial measurement unit or the ambient light sensors, a user presence or a user absence.
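The classification flow can be pictured with the following sketch, where the ml object and its classify_position, classify_in_hand, classify_grip, and detect_presence methods are hypothetical stand-ins for the machine learning module:

from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Position(Enum):
    ON_BODY = auto()
    OFF_BODY = auto()

@dataclass
class UEState:
    position: Position
    in_hand: bool = False
    grip_mode: Optional[str] = None
    user_present: Optional[bool] = None

def evaluate_ue_state(ml, orientation, ambient_light, touch_data, imu, als):
    position = ml.classify_position(orientation, ambient_light)
    if position == Position.ON_BODY:
        in_hand = ml.classify_in_hand(touch_data)
        grip = ml.classify_grip(orientation, touch_data) if in_hand else None
        return UEState(position, in_hand=in_hand, grip_mode=grip)
    # Off-body: fall back to the IMU and/or ambient light sensors to decide
    # whether a user is present or absent.
    return UEState(position, user_present=ml.detect_presence(imu, als))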
Abstract:
Systems, methods, and computer programs are disclosed for reducing motion-to-photon latency and memory bandwidth in a virtual reality display system. An exemplary method involves receiving sensor data from one or more sensors tracking translational and rotational motion of a user for a virtual reality application. An updated position of the user is computed based on the received sensor data. The speed and acceleration of the user movement may be computed based on the sensor data. The updated position, the speed, and the acceleration may be provided to a warp engine configured to update a rendered image, based on one or more of the updated position, the speed, and the acceleration, before sending it to a virtual reality display.
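A rough sketch of how position, speed, and acceleration might feed a warp step, assuming a constant-acceleration extrapolation and a simple pixel-shift in place of a full reprojection:

import numpy as np

def predict_position(position, velocity, acceleration, dt):
    """Extrapolate the user's pose dt seconds ahead (constant acceleration)."""
    return position + velocity * dt + 0.5 * acceleration * dt * dt

def warp_for_display(rendered_image, render_position, position, velocity,
                     acceleration, time_to_photon_s):
    predicted = predict_position(position, velocity, acceleration, time_to_photon_s)
    correction = predicted - render_position      # pose delta since rendering
    # Placeholder warp: shift the image by the predicted delta in pixels,
    # standing in for a full homography-based reprojection.
    dx, dy = int(round(correction[0])), int(round(correction[1]))
    return np.roll(rendered_image, shift=(dy, dx), axis=(0, 1))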
Abstract:
Cardiovascular or respiratory data of a subject is measured using a multi-sensor system. The multi-sensor system includes a mm-wave FMCW radar sensor, an IMU sensor, and one or more proximity sensors. The mm-wave FMCW radar sensor may be selected and its view angle adjusted based on positioning data regarding the subject obtained from the one or more proximity sensors. Each of the mm-wave FMCW radar sensor and the IMU sensor may acquire cardiovascular or respiratory measurements of the subject, and the measurements may be fused for improved accuracy and performance.
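For illustration, a nearest-sensor selection rule and an inverse-variance fusion are sketched below; both are assumptions rather than details taken from the abstract:

import math

def select_radar(subject_xy, radar_sensors):
    """subject_xy: subject position from the proximity sensors; radar_sensors:
    list of dicts with 'id' and 'xy' entries (an assumed layout)."""
    def distance(sensor):
        return math.hypot(sensor["xy"][0] - subject_xy[0],
                          sensor["xy"][1] - subject_xy[1])
    chosen = min(radar_sensors, key=distance)
    # Steer the chosen radar's view angle toward the subject.
    view_angle_deg = math.degrees(math.atan2(subject_xy[1] - chosen["xy"][1],
                                             subject_xy[0] - chosen["xy"][0]))
    return chosen["id"], view_angle_deg

def fuse_rates(radar_bpm, radar_var, imu_bpm, imu_var):
    """Inverse-variance weighting of the radar and IMU rate estimates."""
    w_radar, w_imu = 1.0 / radar_var, 1.0 / imu_var
    return (w_radar * radar_bpm + w_imu * imu_bpm) / (w_radar + w_imu)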
Abstract:
Disclosed is a method and apparatus for power-efficiently processing sensor data. In one embodiment, the operations implemented include: configuring a sensor fusion engine and a peripheral controller with a general purpose processor; placing the general purpose processor into a low-power sleep mode; reading data from a sensor and storing the data into a companion memory with the peripheral controller; processing the data in the companion memory with the sensor fusion engine; and waking the general purpose processor from the low-power sleep mode.
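The sequence of operations might be orchestrated as in the sketch below, where the processor, controller, and fusion-engine objects are hypothetical stand-ins for the hardware blocks named above:

def process_sensor_batch(gpp, peripheral_ctrl, fusion_engine, sensor, batch_size):
    # 1. The general purpose processor configures the other blocks, then sleeps.
    gpp.configure(peripheral_ctrl, fusion_engine)
    gpp.enter_sleep()
    companion_memory = []
    # 2. The peripheral controller moves sensor samples into companion memory
    #    without waking the general purpose processor.
    for _ in range(batch_size):
        companion_memory.append(peripheral_ctrl.read(sensor))
    # 3. The sensor fusion engine processes the buffered data.
    result = fusion_engine.process(companion_memory)
    # 4. Only then is the general purpose processor woken to consume the result.
    gpp.wake()
    return result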
Abstract:
Disclosed is an apparatus and method for power-efficient processor scheduling of features. In one embodiment, features may be scheduled for sequential computing, and each scheduled feature may receive a sensor data sample as input. In one embodiment, scheduling may be based at least in part on each respective feature's estimated power usage. In one embodiment, a first feature in the sequential schedule of features may be computed and, before computing a second feature in the sequential schedule of features, a termination condition may be evaluated.
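A minimal sketch of power-ordered sequential feature computation with an early-termination check; the specific termination rule in the usage example is an assumption:

def compute_features(features, sample, terminate):
    """features: list of (name, estimated_power, fn) tuples with fn(sample) -> value.
    terminate: callable over the results so far; returning True stops computation."""
    # Schedule the cheapest features first so early exits save the most power.
    schedule = sorted(features, key=lambda f: f[1])
    results = {}
    for name, _power, fn in schedule:
        results[name] = fn(sample)
        if terminate(results):
            break  # termination condition met: skip the remaining features
    return results

# Toy usage: stop once the mean alone is decisive.
feats = [
    ("mean", 1.0, lambda s: sum(s) / len(s)),
    ("energy", 3.0, lambda s: sum(x * x for x in s)),
]
out = compute_features(feats, [0.1, 0.2, 0.3],
                       terminate=lambda r: "mean" in r and r["mean"] > 1.0)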
Abstract:
Aspects of the invention are related to a method for synchronizing a first sensor clock of a first sensor. The exemplary method comprises: correcting the first sensor clock a first time, transferring data from the first sensor, and correcting the first sensor clock a second time, wherein a time interval between two corrections of the first sensor clock is selected such that the first sensor clock is sufficiently aligned with a processor clock of a processor over the time interval.
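One way to pick such a time interval is to bound worst-case drift by an alignment tolerance, as in this sketch (the ppm drift model is an assumption):

def correction_interval_s(max_drift_ppm, tolerance_us):
    """Longest interval (seconds) for which worst-case drift stays within tolerance_us."""
    drift_us_per_s = max_drift_ppm  # 1 ppm of frequency error = 1 us of drift per second
    return tolerance_us / drift_us_per_s

# Example: a 20 ppm sensor oscillator kept within 500 us of the processor clock
# must be corrected at least once every 25 seconds.
interval = correction_interval_s(max_drift_ppm=20, tolerance_us=500)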
Abstract:
A method for energy-efficient state change detection and classification of streaming sequential data includes receiving, via a first prediction model, sequential data from a sensor. The first prediction model determines a change in an activity state based on the sequential data. An indication that the activity state has changed is transmitted to a second prediction model. The second prediction model determines an updated activity state based on the sequential data. The updated activity state is sent to the first prediction model, after which the second prediction model enters an inactive state.
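An illustrative two-stage flow: a lightweight detector runs continuously and a heavier classifier is invoked only when a change is indicated. The threshold-based detector and toy classifier below are assumptions for illustration:

class ChangeDetector:
    """First prediction model: cheap change detection on the streaming data."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.last_score = None
        self.current_state = None

    def state_changed(self, window):
        score = sum(abs(x) for x in window) / len(window)  # toy activity score
        changed = (self.last_score is None
                   or abs(score - self.last_score) > self.threshold)
        self.last_score = score
        return changed

class StateClassifier:
    """Second prediction model: invoked only when a change is indicated."""
    def classify(self, window):
        energy = sum(x * x for x in window) / len(window)
        return "active" if energy > 1.0 else "idle"  # toy two-class decision

def process_stream(windows, detector, classifier):
    states = []
    for window in windows:
        if detector.state_changed(window):
            # The updated state is handed back to the first model; the second
            # model then returns to an inactive state until the next change.
            detector.current_state = classifier.classify(window)
        states.append(detector.current_state)
    return states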