Abstract:
A method, an apparatus, and a computer program product conduct online visual searches through an augmented reality (AR) device having an optical see-through head mounted display (HMD). An apparatus identifies a portion of an object in the field of view of the HMD based on user interaction with the HMD. The portion includes searchable content, such as a barcode. The user interaction may be an eye gaze or a gesture. A user interaction point relative to the HMD screen is tracked to locate a region of the object that includes the portion, and the portion is detected within that region. The apparatus captures an image of the portion. Because the identified portion does not encompass the entire object, the captured image is smaller than the object as it appears in the field of view. The apparatus transmits the image to a visual search engine.
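As a minimal sketch of the capture step (the helper names, region size, and frame source are assumptions, not from the abstract), the tracked interaction point defines a region that is cropped from the scene frame, so only a sub-image smaller than the full object would be sent to the search engine:

```python
import numpy as np

REGION_HALF = 120  # assumed half-size, in pixels, of the region around the gaze point

def crop_gaze_region(frame: np.ndarray, gaze_xy: tuple) -> np.ndarray:
    """Crop the region around the tracked user-interaction point; the
    crop is deliberately smaller than the object in the field of view."""
    h, w = frame.shape[:2]
    x, y = gaze_xy
    x0, y0 = max(0, x - REGION_HALF), max(0, y - REGION_HALF)
    x1, y1 = min(w, x + REGION_HALF), min(h, y + REGION_HALF)
    return frame[y0:y1, x0:x1]

# The cropped portion (e.g., containing a barcode) is what would be
# transmitted to the visual search engine instead of the full frame.
```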
Abstract:
An apparatus, a method, and a computer program product are provided. The apparatus detects an eye gaze on a first region in a real-world scene, sets a boundary that surrounds the first region while excluding at least a second region in the scene, performs an object recognition procedure on the first region within the boundary, and refrains from performing the object recognition procedure on at least the second region.
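A minimal sketch of the selective processing this describes, assuming a rectangular boundary and a caller-supplied recognizer (both representations are illustrative, not from the abstract):

```python
import numpy as np
from typing import Callable

def recognize_in_boundary(frame: np.ndarray,
                          boundary: tuple,
                          recognize: Callable[[np.ndarray], object]):
    """Run the object recognition procedure only on the first region inside
    the boundary (x0, y0, x1, y1); pixels outside the boundary (the second
    region) are never handed to the recognizer, saving computation."""
    x0, y0, x1, y1 = boundary
    return recognize(frame[y0:y1, x0:x1])
```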
Abstract:
An apparatus for calibrating an augmented reality (AR) device having an optical see-through head mounted display (HMD) obtains eye coordinates in an eye coordinate system corresponding to the location of an eye of a user of the AR device, and obtains object coordinates in a world coordinate system corresponding to the location of a real-world object in the field of view of the AR device, as captured by a scene camera having a scene camera coordinate system. The apparatus calculates screen coordinates in a screen coordinate system corresponding to a display point on the HMD, where the calculation is based on the obtained eye coordinates and object coordinates. The apparatus then calculates calibration data based on the screen coordinates, the object coordinates, and a transformation from the world coordinate system to the scene camera coordinate system. From the calibration data, the apparatus derives subsequent screen coordinates for displaying AR content in relation to other real-world object points.
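The abstract does not give the underlying geometry; one common reading (assumed here) is that the display point is where the eye-to-object ray intersects the HMD screen plane, once all points are expressed in a common frame via the world-to-scene-camera transformation:

```python
import numpy as np

def screen_point(eye: np.ndarray, obj: np.ndarray,
                 plane_point: np.ndarray, plane_normal: np.ndarray) -> np.ndarray:
    """Intersect the eye->object ray with the HMD screen plane; all inputs
    must already be expressed in one common coordinate system."""
    ray = obj - eye
    t = np.dot(plane_point - eye, plane_normal) / np.dot(ray, plane_normal)
    return eye + t * ray

# e.g., eye at the origin, object 1 m ahead, screen plane 5 cm ahead:
p = screen_point(np.zeros(3), np.array([0.1, 0.0, 1.0]),
                 np.array([0.0, 0.0, 0.05]), np.array([0.0, 0.0, 1.0]))
```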
Abstract:
Aspects of the disclosed technology relate to an apparatus including a memory and at least one processor. The at least one processor can obtain at least one image of a scene and determine a portion of interest within the scene based on a first input, which can include a non-touch input. In response to the first input, the at least one processor can output content associated with the portion of interest and receive a second input from the user. The second input can include a non-eye-gaze input and be associated with the content. The at least one processor can then initiate an action based on the second input.
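A thin sketch of the two-stage flow, with hypothetical input values and a placeholder content lookup (the abstract specifies only that the first input is non-touch and the second is non-eye-gaze):

```python
from dataclasses import dataclass

@dataclass
class InteractionState:
    portion: tuple = None   # portion of interest in the scene image
    content: str = None     # content output for that portion

def on_first_input(state: InteractionState, gaze_xy: tuple) -> str:
    """Non-touch input (e.g., eye gaze) selects the portion of interest
    and triggers content output."""
    state.portion = gaze_xy
    state.content = f"content for portion at {gaze_xy}"  # placeholder lookup
    return state.content

def on_second_input(state: InteractionState, command: str) -> str:
    """Non-eye-gaze input (e.g., a voice command) associated with the
    content initiates an action."""
    if state.content is not None:
        return f"perform '{command}' on {state.content}"
```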
Abstract:
Some implementations provide a method for identifying a speaker. The method determines the position and orientation of a second device based on data from a first device that captures the position and orientation of the second device. The second device includes several microphones for capturing sound, and its position and orientation are movable. The method assigns an object as a representation of a known user; the object has a movable position. The method receives a position of the object, which corresponds to the position of the known user. The method processes the captured sound to identify a sound originating from the direction of the object, where that direction is relative to the position and orientation of the second device, and identifies the sound originating from that direction as belonging to the known user.
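A simplified 2-D sketch of the direction matching (the tolerance, coordinate conventions, and function names are assumptions; a real implementation would estimate the direction of arrival from the microphone array):

```python
import numpy as np

def sound_is_from_user(device_pos: np.ndarray, device_heading_deg: float,
                       object_pos: np.ndarray, sound_doa_deg: float,
                       tol_deg: float = 10.0) -> bool:
    """Attribute a captured sound to the known user when its direction of
    arrival, relative to the second device's pose, matches the direction
    of the object representing that user."""
    v = object_pos - device_pos
    bearing = np.degrees(np.arctan2(v[1], v[0])) - device_heading_deg
    diff = (sound_doa_deg - bearing + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    return abs(diff) <= tol_deg
```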
Abstract:
Methods and apparatuses for providing tangible control of sound are provided and described as embodied in a system that includes a sound transducer array along with a touch surface-enabled display table. The array may include a group of transducers (multiple speakers and/or microphones) configured to perform spatial processing of the group's signals so that sound rendering (where the array includes multiple speakers) or sound pick-up (where the array includes multiple microphones) has a spatial pattern (or sound projection pattern) that is focused in certain directions while reducing disturbances from other directions. Users may directly adjust parameters related to the sound projection patterns by exercising one or more commands on the touch surface, receiving visual feedback as the display on the touch surface changes in response.
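One simple instance of such spatial processing (an assumption; the abstract does not name an algorithm) is delay-and-sum steering for a uniform linear array, where the focus angle would come from the user's command on the touch surface:

```python
import numpy as np

def steering_weights(n_elems: int, spacing_m: float,
                     angle_deg: float, freq_hz: float, c: float = 343.0) -> np.ndarray:
    """Delay-and-sum weights that focus a uniform linear array's spatial
    pattern toward angle_deg (from broadside) at a single frequency."""
    k = 2.0 * np.pi * freq_hz / c               # wavenumber
    positions = np.arange(n_elems) * spacing_m  # element positions along the array
    phases = k * positions * np.sin(np.radians(angle_deg))
    return np.exp(-1j * phases) / n_elems

# e.g., an 8-element array with 5 cm spacing steered to 30 degrees at 1 kHz:
w = steering_weights(8, 0.05, 30.0, 1000.0)
```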
Abstract:
Access terminals are adapted to facilitate automated wireless communication interactions based on implicit user cues. According to one example, an access terminal can obtain a plurality of user cues, including user cues from at least two primary sensor inputs, as well as other optional user cues. The access terminal may identify the occurrence of a predefined combination of user cues from among the plurality of user cues. In response to the identification of the predefined combination of user cues, the access terminal may affect a wireless communication link with an access terminal associated with another user. Other aspects, embodiments, and features are also included.
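As a sketch only (the cue names are hypothetical; the abstract requires cues from at least two primary sensor inputs), the predefined combination can be checked as a subset test over the currently observed cues:

```python
# Hypothetical cues drawn from two primary sensor inputs (e.g., camera, motion).
PREDEFINED_COMBINATION = {"gaze_at_contact", "device_raised_to_ear"}

def combination_detected(observed_cues: set) -> bool:
    """True when every cue in the predefined combination is present,
    which would trigger the automated action on the wireless link."""
    return PREDEFINED_COMBINATION <= observed_cues

# e.g., combination_detected({"gaze_at_contact", "device_raised_to_ear", "walking"})
```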
Abstract:
A method, an apparatus, and a computer program product provide feedback to a user of an augmented reality (AR) device having an optical see-through head mounted display (HMD). The apparatus obtains a location on the HMD corresponding to a user interaction with an object displayed on the HMD. The object may be an icon on the HMD, and the user interaction may be an attempt by the user to select the icon through an eye gaze or gesture. The apparatus determines whether a spatial relationship between the location of the user interaction and the object satisfies a criterion, and outputs a sensory indication (e.g., a visual display, sound, or vibration) when the criterion is satisfied. The apparatus may be configured to output a sensory indication when the user interaction succeeds (e.g., the icon was selected) or, alternatively, when it fails.
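A minimal sketch of one plausible criterion (a circular hit region; the shape, names, and indication mapping are assumptions, not from the abstract):

```python
import math

def interaction_feedback(point: tuple, icon_center: tuple, icon_radius: float) -> str:
    """Evaluate the spatial relationship between the user-interaction point
    and the icon, then choose a sensory indication accordingly."""
    hit = math.dist(point, icon_center) <= icon_radius
    return "success_tone" if hit else "miss_vibration"  # mapped to HMD output
```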