Abstract:
In accordance with example embodiments, hand gestures can be used to provide user input to a wearable computing device, and in particular to indicate content that may be considered important or worthy of attention. A wearable computing device, which could include a head-mounted display (HMD) and a video camera, may recognize known hand gestures and carry out particular actions in response. Particular hand gestures could be used for selecting portions of a field of view of the HMD and generating images from the selected portions. The HMD could then transmit the generated images to one or more applications on a network server communicatively connected with the HMD, such as a server or server system hosting a social networking service.
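As a rough illustration of the gesture-to-image flow described above, the sketch below crops a gesture-selected region from a camera frame and hands it off for upload. The `GestureEvent` type, its coordinates, and the `upload_to_service` endpoint are hypothetical names for illustration, not part of any real HMD SDK.

```python
# A minimal sketch, assuming a hypothetical HMD API that reports recognized
# gestures as bounding boxes in camera-frame coordinates.

from dataclasses import dataclass

@dataclass
class GestureEvent:
    kind: str          # e.g. a "frame_select" region traced by the hands
    x0: int; y0: int   # top-left corner of the selected region
    x1: int; y1: int   # bottom-right corner

def crop_frame(frame, event):
    """Generate an image from the selected portion of the field of view."""
    return [row[event.x0:event.x1] for row in frame[event.y0:event.y1]]

def upload_to_service(image, endpoint):
    """Placeholder for transmitting the generated image to a network server,
    e.g. one hosting a social networking service."""
    print(f"uploading {len(image)}x{len(image[0])} image to {endpoint}")

frame = [[(r, c) for c in range(640)] for r in range(480)]  # fake 640x480 frame
event = GestureEvent("frame_select", x0=100, y0=80, x1=300, y1=240)
if event.kind == "frame_select":
    upload_to_service(crop_frame(frame, event), "https://social.example/api/share")
```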
Abstract:
A computer-implemented method includes detecting, at a wearable computing device that includes a head-mountable display unit, a first direction of a first stare; identifying a target based on the detected first direction; and, based on a determination that a first time duration of the first stare is greater than or equal to a first predetermined time threshold, identifying information relevant to the target and displaying the identified information on the display unit. Subsequent to displaying the identified information, the method includes detecting a second stare that is directed at the target or at the displayed information and, based on a determination that a second time duration of the second stare is greater than or equal to a second predetermined time threshold, identifying additional information relevant to the target and displaying the additional information on the display unit.
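The two-threshold stare logic could be organized as in the following sketch; `handle_stares`, the threshold values, and the lookup/display callbacks are all assumed names for illustration, not taken from the disclosure.

```python
# A minimal sketch of the two-stage stare logic, assuming an upstream gaze
# tracker that reports (target, dwell-duration) pairs in detection order.

FIRST_THRESHOLD = 2.0    # seconds before basic info is shown (assumed value)
SECOND_THRESHOLD = 1.5   # seconds before additional info is shown (assumed)

def handle_stares(stares, lookup, display):
    """stares: iterable of (target, duration) pairs in the order detected."""
    shown = None  # target whose basic info is currently displayed
    for target, duration in stares:
        if shown is None:
            # First stare: identify the target and show relevant info.
            if duration >= FIRST_THRESHOLD:
                display(lookup(target, level="basic"))
                shown = target
        elif target in (shown, "displayed_info"):
            # Second stare at the target or at the displayed info itself.
            if duration >= SECOND_THRESHOLD:
                display(lookup(shown, level="additional"))
                shown = None

handle_stares(
    [("storefront", 2.4), ("displayed_info", 1.8)],
    lookup=lambda t, level: f"{level} info for {t}",
    display=print,
)
```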
Abstract:
Methods and systems involving a virtual window in a head-mounted display (HMD) are disclosed herein. An exemplary system may be configured to: (i) receive head-movement data that is indicative of head movement; (ii) cause an HMD to operate in a first mode in which the HMD is configured to: (a) simultaneously provide a virtual window and a physical-world view in the HMD; (b) display, in the virtual window, a portion of a media item that corresponds to a field of view; (c) determine movement of the field of view; and (d) update the portion of the media item that is displayed in the virtual window; (iii) receive mode-switching input data and responsively cause the HMD to switch between the first mode and a second mode; and (iv) responsive to the mode-switching input data, cause the HMD to operate in the second mode.
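One way to picture the first mode's pan-and-clamp behavior and the mode switch is the sketch below, which treats the media item as a large 2D grid panned by head-movement deltas; the `VirtualWindow` class and its dimensions are illustrative assumptions.

```python
# A minimal sketch, assuming head-movement data arrives as yaw/pitch deltas
# already converted to pixel offsets within the media item.

class VirtualWindow:
    def __init__(self, media_w, media_h, win_w, win_h):
        self.media_w, self.media_h = media_w, media_h
        self.win_w, self.win_h = win_w, win_h
        self.x = self.y = 0   # top-left of the field of view within the media
        self.mode = 1         # first mode: the window pans with head movement

    def on_head_movement(self, dx, dy):
        # (ii)(c)-(d): move the field of view and clamp it to the media item,
        # updating the portion displayed in the virtual window.
        if self.mode == 1:
            self.x = max(0, min(self.media_w - self.win_w, self.x + dx))
            self.y = max(0, min(self.media_h - self.win_h, self.y + dy))

    def on_mode_switch(self):
        # (iii)-(iv): mode-switching input toggles between the two modes.
        self.mode = 2 if self.mode == 1 else 1

    def visible_region(self):
        return (self.x, self.y, self.x + self.win_w, self.y + self.win_h)

hmd = VirtualWindow(media_w=4000, media_h=3000, win_w=800, win_h=600)
hmd.on_head_movement(120, -40)
print(hmd.visible_region())   # (120, 0, 920, 600)
hmd.on_mode_switch()          # second mode: head movement no longer pans
```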
Abstract:
Autonomous vehicles use various computing systems to transport passengers from one location to another. A control computer sends messages to the various systems of the vehicle in order to maneuver the vehicle safely to the destination. The control computer may display information on an electronic display in order to allow the passenger to understand what actions the vehicle may be taking in the immediate future. Various icons and images may be used to provide this information to the passenger.
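A minimal sketch of how planned maneuvers might map to passenger-facing icons follows; the action strings and icon filenames are assumptions, not taken from the disclosure.

```python
# A minimal sketch, assuming a hypothetical planner that emits action strings
# for maneuvers the vehicle may take in the immediate future.

ICONS = {
    "turn_left": "icon_arrow_left.png",
    "turn_right": "icon_arrow_right.png",
    "lane_change_left": "icon_merge_left.png",
    "stop": "icon_stop.png",
}

def render_upcoming_actions(planned_actions, display):
    """Show the passenger what the vehicle intends to do next."""
    for action in planned_actions:
        display(ICONS.get(action, "icon_generic.png"))

render_upcoming_actions(["lane_change_left", "turn_left", "stop"], print)
```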
Abstract:
A passenger in an automated vehicle may relinquish control of the vehicle to a control computer when the control computer has determined that it may maneuver the vehicle safely to a destination. The passenger may relinquish or regain control of the vehicle by applying different degrees of pressure, for example, on a steering wheel of the vehicle. The control computer may convey status information to a passenger in a variety of ways including by illuminating elements of the vehicle. The color and location of the illumination may indicate the status of the control computer, for example, whether the control computer has been armed, is ready to take control of the vehicle, or is currently controlling the vehicle.
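The pressure-based handoff and status lighting might be modeled as in the sketch below; the pressure thresholds, status names, colors, and illuminated elements are assumed values for illustration only.

```python
# A minimal sketch, assuming steering-wheel pressure is reported in newtons
# and the control computer exposes a simple status string.

RELINQUISH_MAX = 5.0   # light grip or less: passenger cedes control (assumed)
REGAIN_MIN = 20.0      # firm grip: passenger takes control back (assumed)

STATUS_LIGHTS = {
    "armed":       ("amber", "steering_wheel_rim"),
    "ready":       ("green", "steering_wheel_rim"),
    "controlling": ("blue",  "dashboard_strip"),
}

def update_control(status, wheel_pressure):
    """Return the new control-computer status given steering-wheel pressure."""
    if status == "ready" and wheel_pressure <= RELINQUISH_MAX:
        return "controlling"            # passenger relinquishes control
    if status == "controlling" and wheel_pressure >= REGAIN_MIN:
        return "ready"                  # passenger regains control
    return status

status = update_control("ready", wheel_pressure=2.0)
color, element = STATUS_LIGHTS[status]
print(status, "->", f"illuminate {element} {color}")
```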
Abstract:
Disclosed are methods and devices for transitioning a mixed-mode autonomous vehicle from a human-driven mode to an autonomously driven mode. Transitioning may include stopping the vehicle on a predefined landing strip and detecting a reference indicator. Based on the reference indicator, the vehicle may determine its exact position. Additionally, the vehicle may use the reference indicator to obtain an autonomous vehicle instruction via a URL. After the vehicle knows its precise location and has an autonomous vehicle instruction, it can operate in autonomous mode.
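The handoff sequence could look like the following sketch, assuming the reference indicator is a marker (e.g. a QR code) that encodes a position and a URL; `read_reference_indicator` and `fetch_instruction` are stubs, not a real vehicle API.

```python
# A minimal sketch of the transition: stop on the landing strip, read the
# reference indicator, fix the exact position, then fetch the instruction.

def read_reference_indicator():
    """Stub: decode a marker seen by the vehicle's camera into position + URL."""
    return {"lat": 37.4220, "lon": -122.0841,
            "url": "https://example.com/autonomy/instruction"}

def fetch_instruction(url):
    """Stub: a real vehicle would fetch and verify the instruction over HTTPS."""
    return {"route": "depot-to-dock", "speed_limit_mps": 10}

def begin_autonomous_mode(vehicle):
    indicator = read_reference_indicator()
    vehicle["position"] = (indicator["lat"], indicator["lon"])  # exact position
    vehicle["instruction"] = fetch_instruction(indicator["url"])
    vehicle["mode"] = "autonomous"

vehicle = {"mode": "human", "stopped_on_landing_strip": True}
if vehicle["stopped_on_landing_strip"]:
    begin_autonomous_mode(vehicle)
print(vehicle["mode"])   # autonomous
```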
Abstract:
The present application discloses systems and methods for a virtual input device. In one example, the virtual input device includes a projector and a camera. The projector projects a pattern onto a surface. The camera captures images that can be interpreted by a processor to determine actions. The projector may be mounted on an arm of a pair of eyeglasses and the camera may be mounted on an opposite arm of the eyeglasses. A pattern for a virtual input device can be projected onto a “display hand” of a user, and the camera may be able to detect when the user uses an opposite hand to select items of the virtual input device. In another example, the camera may detect when the display hand is moving and interpret display hand movements as inputs to the virtual input device, and/or realign the projection onto the moving display hand.
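Interpreting the camera images might reduce to the mapping sketched below, assuming an upstream vision stage already reports the selecting fingertip's position and the display hand's displacement; the key layout and helper names are hypothetical.

```python
# A minimal sketch, assuming fingertip and hand-motion estimates are available
# in the camera's image coordinates.

KEY_LAYOUT = {  # projected key -> region on the display hand (x0, y0, x1, y1)
    "1": (0, 0, 40, 40), "2": (40, 0, 80, 40), "3": (80, 0, 120, 40),
}

def key_under_fingertip(fingertip):
    """Map a fingertip position in the camera image to a projected key."""
    fx, fy = fingertip
    for key, (x0, y0, x1, y1) in KEY_LAYOUT.items():
        if x0 <= fx < x1 and y0 <= fy < y1:
            return key
    return None

def realign_projection(offset, dx, dy):
    """Shift the projected pattern to follow a moving display hand."""
    return (offset[0] + dx, offset[1] + dy)

print(key_under_fingertip((55, 10)))      # "2": opposite hand selects a key
print(realign_projection((0, 0), 5, -3))  # projector tracks the display hand
```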