Abstract:
An apparatus for calibrating an eye tracking system of a head mounted display displays a moving object in a scene visible through the head mounted display. The object is displayed progressively at a plurality of different points (P) at corresponding different times (T). While the object is at a first point of the plurality of different points, the apparatus determines whether an offset between the point P and an eye-gaze point (E) satisfies a threshold. The eye-gaze point (E) corresponds to the point where the eye tracking system determines the user is gazing. If the threshold is not satisfied, the apparatus performs a calibration of the eye tracking system when the object is at a second point of the plurality of different points. The apparatus then repeats the determining step when the object is at a third point of the plurality of different points.
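The check-calibrate-recheck loop above can be sketched as follows. This is a minimal illustration, not the patented implementation: `gaze_at` stands in for the eye tracker's reported gaze point E, `threshold` is the offset limit, and the returned list records the (hypothetical) points at which a calibration would be performed.

```python
import math

def calibrate(points, gaze_at, threshold):
    """Sketch of the abstract's loop over displayed points P.

    gaze_at(p) -> eye-gaze point E reported while the object is at p
    (an assumed callback, not part of the source). Returns the points
    at which calibration was performed.
    """
    events = []
    i = 0
    while i < len(points):
        p = points[i]
        e = gaze_at(p)
        if math.dist(p, e) > threshold:      # offset does not satisfy the threshold
            if i + 1 < len(points):
                events.append(points[i + 1])  # calibrate at the second point
            i += 2                            # repeat the check at the third point
        else:
            i += 1                            # threshold satisfied; move on
    return events
```

With a perfectly tracking gaze, no calibration events are recorded; a constant offset larger than the threshold triggers calibration at every other point.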
Abstract:
Methods and apparatuses for providing tangible control of sound are provided and described as embodied in a system that includes a sound transducer array along with a touch surface-enabled display table. The array may include a group of transducers (multiple speakers and/or microphones) configured to perform spatial processing of signals for the group of transducers so that sound rendering (in configurations where the array includes multiple speakers), or sound pick-up (in configurations where the array includes multiple microphones), has spatial patterns (or sound projection patterns) that are focused in certain directions while reducing disturbances from other directions. Users may directly adjust parameters related to the sound projection patterns by exercising one or more commands on the touch surface while receiving visual feedback. The commands may be adjusted according to the visual feedback received from the change of the display on the touch surface.
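One way such a touch command could map to a pattern parameter is sketched below. This is an assumed illustration, not the source's method: a rotational drag around the table's center (a hypothetical `center` point) is converted into a new steering angle for the sound projection pattern.

```python
import math

def steer_from_drag(current_angle_deg, drag_start, drag_end, center):
    """Map a rotational touch drag on the display table to a new beam
    direction for the sound projection pattern (illustrative only).

    drag_start / drag_end: (x, y) touch points; center: table center.
    """
    a0 = math.atan2(drag_start[1] - center[1], drag_start[0] - center[0])
    a1 = math.atan2(drag_end[1] - center[1], drag_end[0] - center[0])
    # Rotate the current steering angle by the drag's angular sweep.
    return (current_angle_deg + math.degrees(a1 - a0)) % 360
```

A quarter-turn drag counterclockwise around the center rotates the pattern by 90 degrees, which the display could mirror as visual feedback.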
Abstract:
Some implementations provide a method for identifying a speaker. The method determines the position and orientation of a second device based on data from a first device that captures the position and orientation of the second device. The second device includes several microphones for capturing sound and has a movable position and a movable orientation. The method assigns an object as a representation of a known user. The object has a movable position. The method receives a position of the object, which corresponds to a position of the known user. The method processes the captured sound to identify a sound originating from the direction of the object, where the direction of the object is relative to the position and the orientation of the second device. The method identifies the sound originating from the direction of the object as belonging to the known user.
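The direction-matching step can be sketched as below. This is a simplified 2-D illustration under stated assumptions: `objects` maps user names to object positions, `doa_deg` is a direction-of-arrival the microphone array is assumed to report relative to the device's own heading, and the tolerance is an invented parameter.

```python
import math

def identify_speaker(device_pos, device_yaw_deg, objects, doa_deg, tolerance_deg=10.0):
    """Return the known user whose object lies along the reported
    direction-of-arrival, relative to the second device's pose (sketch)."""
    for user, obj_pos in objects.items():
        # Absolute bearing from the device to the user's object.
        bearing = math.degrees(math.atan2(obj_pos[1] - device_pos[1],
                                          obj_pos[0] - device_pos[0]))
        # Express the bearing relative to the device's orientation.
        relative = (bearing - device_yaw_deg + 180) % 360 - 180
        # Smallest angular difference to the sound's direction-of-arrival.
        diff = abs((relative - doa_deg + 180) % 360 - 180)
        if diff <= tolerance_deg:
            return user
    return None
```

A sound arriving from an object's direction is thus attributed to the user that object represents; sounds from other directions match no user.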
Abstract:
An apparatus, a method, and a computer program product are provided. The apparatus detects an eye gaze on a first region in a real world scene, sets a boundary that surrounds the first region, the boundary excluding at least a second region in the real world scene, performs an object recognition procedure on the first region within the boundary, and refrains from performing the object recognition procedure on at least the second region.
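The boundary-and-crop idea can be sketched as follows, assuming the scene is a 2-D grid of pixel values and `recognize` is a placeholder for whatever object recognition procedure is used. Only the region inside the boundary is passed to the recognizer; the rest of the scene is never processed.

```python
def recognize_in_gaze_region(frame, gaze_xy, half_size, recognize):
    """Run recognition only inside a square boundary centered on the gaze
    point; the second region (everything outside) is skipped (sketch).

    frame: 2-D list of pixel values; recognize: assumed recognizer callback.
    """
    x, y = gaze_xy
    rows = frame[max(0, y - half_size):y + half_size + 1]
    region = [row[max(0, x - half_size):x + half_size + 1] for row in rows]
    return recognize(region)  # recognition limited to the boundary
```

The payoff is that the recognizer's cost scales with the boundary size rather than the full scene.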
Abstract:
A method, an apparatus, and a computer program product provide feedback to a user of an augmented reality (AR) device having an optical see-through head mounted display (HMD). The apparatus obtains a location on the HMD corresponding to a user interaction with an object displayed on the HMD. The object may be an icon on the HMD and the user interaction may be an attempt by the user to select the icon through an eye gaze or gesture. The apparatus determines whether a spatial relationship between the location of user interaction and the object satisfies a criterion, and outputs a sensory indication, e.g., visual display, sound, vibration, when the criterion is satisfied. The apparatus may be configured to output a sensory indication when user interaction is successful, e.g., the icon was selected. Alternatively, the apparatus may be configured to output a sensory indication when the user interaction fails.
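The criterion check and sensory indication can be sketched as below. The distance criterion and the two indication names are assumptions for illustration; the abstract only requires that some spatial relationship be tested and some sensory indication be output on success or on failure.

```python
import math

def interaction_feedback(interaction_xy, icon_center, icon_radius):
    """Sketch: the user interaction (gaze or gesture location on the HMD)
    satisfies the criterion if it falls within the icon's radius; a
    hypothetical sensory indication is returned either way."""
    hit = math.dist(interaction_xy, icon_center) <= icon_radius
    return "haptic-click" if hit else "error-tone"
```

A device configured to signal only success (or only failure) would simply suppress one of the two branches.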
Abstract:
Access terminals are adapted to facilitate automated wireless communication interactions based on implicit user cues. According to one example, an access terminal can obtain a plurality of user cues, including user cues from at least two primary sensor inputs, as well as other optional user cues. The access terminal may identify the occurrence of a predefined combination of user cues from among the plurality of user cues. In response to the identification of the predefined combination of user cues, the access terminal may affect a wireless communication link with an access terminal associated with another user. Other aspects, embodiments, and features are also included.
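The cue-combination matching step can be sketched as follows. The cue names and the resulting action are invented placeholders; the point is simply that a predefined combination fires only when all of its cues are present among the observed user cues.

```python
def detect_cue_combination(observed_cues, predefined_combinations):
    """Identify the first predefined combination of user cues fully present
    among the observed cues and return its associated action (sketch).

    predefined_combinations: list of (set_of_cues, action) pairs.
    """
    observed = set(observed_cues)
    for cues, action in predefined_combinations:
        if cues <= observed:  # every cue in the combination was observed
            return action
    return None
```

For example, observing both a gaze at a contact and a raise-to-ear motion (two hypothetical primary sensor cues) could trigger a call, while either cue alone triggers nothing.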
Abstract:
Methods and apparatuses for representing a sound field in a physical space are provided and described as embodied in a system that includes a sound transducer array along with a touch surface-enabled display table. The array may include a group of transducers (multiple speakers and/or microphones). The array may be configured to perform spatial processing of signals for the group of transducers so that sound rendering (in configurations where the array includes multiple speakers), or sound pick-up (in configurations where the array includes multiple microphones), may have spatial patterns (or sound projection patterns) that are focused in certain directions while reducing disturbances from other directions.
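One standard way such directional spatial patterns are produced is delay-and-sum weighting; the sketch below assumes a uniform linear array, which the abstract does not specify, so treat it as one illustrative instance rather than the system's actual processing.

```python
import math

def delay_and_sum_weights(num_transducers, spacing_m, steer_deg, freq_hz, c=343.0):
    """Phase weights for a uniform linear array that focus the spatial
    pattern toward steer_deg (0 = broadside); illustrative sketch only."""
    k = 2 * math.pi * freq_hz / c                 # wavenumber
    phase_step = -k * spacing_m * math.sin(math.radians(steer_deg))
    # One unit-magnitude complex weight per transducer; applying them
    # aligns signals arriving from (or radiated toward) steer_deg.
    return [complex(math.cos(n * phase_step), math.sin(n * phase_step))
            for n in range(num_transducers)]
```

The same weights serve both roles named in the abstract: applied to speaker signals they focus sound rendering, and applied to microphone signals they focus sound pick-up.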