Abstract:
Control of head-mounted display (HMD) systems is described. One method displays at least one image on a first display layer of an HMD system. While displaying the at least one image, the method adjusts a transparency setting of a second display layer from a first value to a second value, causing the second display layer to become opaque and thereby limiting the amount of light that passes through a lens of the HMD system for viewing the at least one image on the first display layer.
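A minimal sketch of how such layered control might look in code, assuming a hypothetical two-layer device interface; all class and method names below are illustrative and not taken from the disclosure:

```python
class DisplayLayer:
    def draw(self, image):
        print(f"drawing {image!r} on the first display layer")

class VariableTransparencyLayer:
    def __init__(self):
        self.transparency = 0.0  # first value: fully transparent

    def set_transparency(self, value):
        self.transparency = value
        print(f"second layer transparency set to {value:.1f}")

def show_immersive(display, shade, image):
    # Display the image on the first display layer.
    display.draw(image)
    # While the image is shown, adjust the second layer from its first
    # (transparent) value to a second (opaque) value so ambient light
    # does not pass through the lens and wash out the image.
    shade.set_transparency(1.0)

show_immersive(DisplayLayer(), VariableTransparencyLayer(), "frame_0")
```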
Abstract:
Single-touch immersion control of head-mounted display (HMD) systems is described. One method outputs video from an electronic device to an HMD system that includes a display layer and a variable-transparency layer. The electronic device controls the variable-transparency layer to operate in a first state in which the variable-transparency layer is transparent and to operate in a second state in which the variable-transparency layer is opaque. The electronic device switches between the second state and the first state in response to a single-touch event detected by the electronic device.
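A hedged sketch of the single-touch toggle, assuming the electronic device delivers touch events to a callback; the state names and structure are assumptions for illustration:

```python
TRANSPARENT, OPAQUE = "transparent", "opaque"

class ImmersionController:
    def __init__(self):
        self.state = TRANSPARENT  # first state: variable-transparency layer is transparent

    def on_single_touch(self):
        # A single-touch event flips between the opaque (immersive)
        # second state and the transparent (see-through) first state.
        self.state = OPAQUE if self.state == TRANSPARENT else TRANSPARENT
        return self.state

ctrl = ImmersionController()
print(ctrl.on_single_touch())  # -> opaque
print(ctrl.on_single_touch())  # -> transparent
```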
Abstract:
A force-sensitive touch sensor detects location and force of touches applied to the sensor. Movement of an object touching the force-sensitive touch sensor correlates to movement of a pointer on a display device. Varying levels of force applied to the force-sensitive touch sensor are interpreted as different commands. Objects displayed on the display device can be manipulated by a combination of gestures across a surface of the force-sensitive touch sensor and changes in force applied to the force-sensitive touch sensor.
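One way to picture how varying force levels could map to different commands is shown below; the normalized thresholds and command names are assumptions, not values from the disclosure:

```python
LIGHT_TOUCH_MAX = 0.3   # normalized force below this only moves the pointer
FIRM_TOUCH_MAX = 0.7    # force between the thresholds selects an object

def interpret_touch(x, y, force):
    """Return (command, position) for a touch at (x, y) with the given force."""
    if force < LIGHT_TOUCH_MAX:
        return ("move_pointer", (x, y))
    elif force < FIRM_TOUCH_MAX:
        return ("select_object", (x, y))
    else:
        return ("drag_object", (x, y))

print(interpret_touch(120, 80, 0.2))   # light touch: pointer follows the finger
print(interpret_touch(120, 80, 0.9))   # hard press: manipulate the displayed object
```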
Abstract:
Devices, systems, and methods are disclosed for processing stylus interactions with a device and drawing the results of those interactions in a manner that reduces lag. This includes the introduction of a separate overlay module layer that can be updated separately from a normal view system/process of a computing device. In this respect, the overlay module layer may be used to remove unnecessary synchronization events to allow for quick display of stylus input events in the overlay module layer while still allowing the normal rendering process of the operating system to be followed.
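A rough sketch of the overlay idea: stylus samples are painted immediately to a lightweight overlay, while the normal view system composites the same samples on its own schedule. The classes and timing below are assumptions for illustration, not the disclosed implementation:

```python
class Overlay:
    def __init__(self):
        self.pending = []

    def draw_immediately(self, point):
        # Bypass the view system's synchronization; paint the stroke segment now.
        self.pending.append(point)
        print(f"overlay: drew {point} without waiting for a frame sync")

class ViewSystem:
    def commit(self, points):
        # The operating system's normal rendering path later picks up the same
        # points and replaces the overlay content.
        print(f"view system: committed {len(points)} points")

overlay, view = Overlay(), ViewSystem()
for sample in [(10, 10), (12, 14), (15, 19)]:
    overlay.draw_immediately(sample)   # low-latency path
view.commit(overlay.pending)           # normal rendering path
overlay.pending.clear()
```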
Abstract:
Techniques for determining whether touch-input gestures approximate straight lines and for animating a display with such gestures are described. The techniques determine a linear regression line for pixel locations comprising a gesture, determine distances of the pixel locations from the linear regression line, and render the set of pixel locations to the display based on the distances and a threshold.
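A worked sketch of the straight-line test: fit a least-squares regression line to the gesture's pixel locations, then compare each point's perpendicular distance to the line against a threshold. The threshold value below is an assumption:

```python
import math

def is_straight(points, threshold=3.0):
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    sxx = sum((x - mean_x) ** 2 for x, _ in points)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in points)
    if sxx == 0:                        # vertical stroke: distance is |x - mean_x|
        return all(abs(x - mean_x) <= threshold for x, _ in points)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    # Perpendicular distance from (x, y) to the line y = slope*x + intercept.
    denom = math.hypot(slope, 1.0)
    return all(abs(slope * x - y + intercept) / denom <= threshold
               for x, y in points)

print(is_straight([(0, 0), (5, 5.2), (10, 9.8), (15, 15.1)]))  # True: near-linear
print(is_straight([(0, 0), (5, 20), (10, 0)]))                 # False: far from the fit
```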
Abstract:
Disclosed herein are techniques and systems for inter-device bearing estimation. Particularly, sensor fusion techniques are disclosed that combine motion data of a local computing device with beamforming data of the local computing device to determine a line-of-sight path between the local computing device and a remote computing device. A motion sensor(s) of the local computing device may generate motion data from movement of the local computing device. The local computing device may further determine a direction, relative to the local computing device, of a beampattern in which an antenna(s) of the local computing device radiates energy, the beampattern direction being along a communication path between the local computing device and the remote computing device. The local device may then determine, based at least in part on the motion data and the direction of the beampattern, whether the communication path corresponds to a line-of-sight path between the local and remote devices.
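One way the fusion could be pictured: if the beampattern's device-relative direction shifts by roughly the opposite of the device's own rotation, the bearing to the remote device is stable in the world frame, which is consistent with a line-of-sight path. The angular tolerance below is an assumption for illustration:

```python
def is_line_of_sight(rotation_deg, beam_before_deg, beam_after_deg, tol=10.0):
    # rotation_deg: how far the motion sensor says the device rotated.
    # beam_before_deg / beam_after_deg: beampattern direction relative to the
    # device before and after that rotation.
    observed_shift = (beam_after_deg - beam_before_deg + 180) % 360 - 180
    expected_shift = -rotation_deg
    return abs(observed_shift - expected_shift) <= tol

# Device rotates 30 degrees; a direct path's beam direction shifts by about -30.
print(is_line_of_sight(30.0, 45.0, 15.0))   # True: bearing fixed in the world frame
# A reflected path's apparent direction moves inconsistently with the rotation.
print(is_line_of_sight(30.0, 45.0, 50.0))   # False
```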
Abstract:
Automatic quotes or references are generated based on a user's interaction with one or more pieces of content. A passage for quotation may be determined based at least in part on usage data including information about interaction with one or more pieces of content. A user may begin to type a quotation and a corresponding passage is inserted. The user may vary the scope of the passage, such as adding sentences or paragraphs. User annotation of the passage while the content is presented may also generate an automatically inserted quotation. A citation descriptive of the quoted passage may also be inserted. The automatically inserted quotation may be configured with a link or script, allowing additional functions or access to source content.
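An illustrative sketch of prefix-triggered quotation completion; the sample passages, matching rule, and citation format are assumptions rather than the disclosed method:

```python
passages = [
    ("It was the best of times, it was the worst of times.",
     "Dickens, A Tale of Two Cities, ch. 1"),
    ("Call me Ishmael.",
     "Melville, Moby-Dick, ch. 1"),
]

def complete_quotation(typed_prefix, min_chars=10):
    # Once the user has typed enough of a passage, insert the full passage
    # followed by a citation describing its source.
    if len(typed_prefix) < min_chars:
        return None
    for text, citation in passages:
        if text.lower().startswith(typed_prefix.lower()):
            return f'"{text}" ({citation})'
    return None

print(complete_quotation("It was the best of"))
```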