Abstract:
A computing device can be controlled based on changes in the angle of a user's head with respect to the device, such as when the user tilts the device and/or tilts his head with respect to the device. Such head-angle control can be achieved even when the user is operating the device “off-axis,” i.e., when the device is not orthogonal and/or not centered with respect to the user. This can be accomplished by using an elastic reference point that dynamically adjusts to the detected angle of the user's head with respect to the device. Such an approach can distinguish between the user changing his natural resting position and/or the resting position of the device and the user intending to perform a gesture based on the angle of his head relative to the device.
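One way to read the elastic-reference idea: the baseline slowly absorbs gradual changes in resting position, while fast, large deviations from it are treated as deliberate gestures. Below is a minimal sketch of that behavior in Python; the class name, thresholds, and adaptation rate are illustrative assumptions, not the disclosed implementation.

```python
# Minimal sketch of an "elastic" reference point for head-angle gestures.
# All names and constants here are illustrative assumptions.

GESTURE_THRESHOLD_DEG = 12.0  # deviation treated as a deliberate gesture
ADAPT_RATE = 0.02             # how quickly the reference drifts toward the input

class ElasticReference:
    def __init__(self, initial_angle_deg: float = 0.0):
        self.reference = initial_angle_deg

    def update(self, measured_angle_deg: float) -> bool:
        """Return True if the new measurement looks like a gesture.

        Slow changes are absorbed into the reference (the user settling
        into a new resting position); fast, large deviations are treated
        as intentional gestures relative to the current baseline.
        """
        deviation = measured_angle_deg - self.reference
        if abs(deviation) >= GESTURE_THRESHOLD_DEG:
            return True  # deliberate tilt relative to the elastic baseline
        # Otherwise let the reference relax toward the measurement,
        # tracking gradual changes in resting position.
        self.reference += ADAPT_RATE * deviation
        return False
```

In use, per-frame head angles from a head tracker would be fed into `update`, and only the deviations that outrun the slowly adapting baseline would trigger gesture handling.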
Abstract:
A video capture device may include multiple cameras that simultaneously capture video data. The video capture device and/or one or more remote computing resources may stitch the video data captured by the multiple cameras to generate stitched video data that corresponds to 360° video. The remote computing resources may apply one or more algorithms to the stitched video data to adjust its color characteristics, such as lighting, exposure, white balance, contrast, and saturation. The remote computing resources may further smooth the transitions between the video data captured by the multiple cameras to reduce artifacts, such as abrupt changes in color that result from the individual cameras of the video capture device having different video capture settings. The video capture device and/or the remote computing resources may generate a panoramic video that may include up to a 360° field of view.
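The transition smoothing can be pictured as a crossfade across the seam between two adjacent cameras' frames after roughly matching their exposure. The abstract does not specify the blending method; the sketch below uses a linear crossfade with a simple gain-matching heuristic as one plausible illustration, with assumed array shapes and function names.

```python
# Illustrative sketch of seam smoothing between two adjacent camera frames.
# A linear crossfade over the overlap region reduces abrupt color changes
# caused by per-camera exposure/white-balance differences.

import numpy as np

def blend_seam(left: np.ndarray, right: np.ndarray, overlap: int) -> np.ndarray:
    """Stitch two HxWx3 RGB frames that share `overlap` columns."""
    # Roughly equalize exposure by matching mean intensity in the overlap.
    gain = left[:, -overlap:].mean() / max(right[:, :overlap].mean(), 1e-6)
    right = np.clip(right.astype(np.float32) * gain, 0, 255)

    # Linear crossfade weights across the overlap, 1 -> 0 for the left frame.
    w = np.linspace(1.0, 0.0, overlap)[None, :, None]
    seam = left[:, -overlap:].astype(np.float32) * w + right[:, :overlap] * (1 - w)

    return np.concatenate(
        [left[:, :-overlap].astype(np.float32), seam, right[:, overlap:]],
        axis=1,
    ).astype(np.uint8)
```

A full stitcher would repeat this around each camera boundary of the 360° frame and would typically work in a perceptually uniform color space rather than raw RGB.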
Abstract:
The hand a user is using to hold an electronic device can be determined by analyzing data captured by one or more motion sensors on the device. The curvature of the motion can be indicative of handedness, and processing motion features with a classifier algorithm can enable a determination of handedness with a corresponding confidence. In some embodiments, motion data is collected over a monitoring window, and a handedness value is accepted only when it remains the same, with at least a minimum confidence, for at least a minimum number of window periods. A determination of handedness enables the operating system and/or applications executing on the device to adjust one or more operational or interface aspects in order to make it easier for the user to operate the device using the hand currently holding it.
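The acceptance rule described above (same value, at least a minimum confidence, for at least a minimum number of window periods) can be sketched as a small monitor wrapped around the classifier's per-window output. The classifier itself is abstracted away here; the class name and thresholds are hypothetical.

```python
# Minimal sketch of windowed confirmation of a handedness prediction:
# a value is accepted only after it has held steady with sufficient
# confidence across several monitoring windows.

from collections import deque

MIN_CONFIDENCE = 0.8      # assumed minimum per-window confidence
MIN_STABLE_WINDOWS = 3    # assumed minimum number of consistent windows

class HandednessMonitor:
    def __init__(self):
        self.recent = deque(maxlen=MIN_STABLE_WINDOWS)
        self.accepted = None  # "left", "right", or None

    def observe(self, label: str, confidence: float):
        """Feed one (label, confidence) classifier result per monitoring window."""
        self.recent.append((label, confidence))
        if len(self.recent) == MIN_STABLE_WINDOWS and all(
            lbl == label and conf >= MIN_CONFIDENCE
            for lbl, conf in self.recent
        ):
            self.accepted = label  # stable and confident: accept the value
        return self.accepted
```

Once `accepted` is set, the operating system or an application could, for example, shift touch targets toward the thumb of the detected holding hand.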
Abstract:
Devices, systems and methods are disclosed for identifying content in video data and creating content-based zooming and panning effects to emphasize that content. Content may be detected and analyzed in the video data using computer vision or machine learning algorithms, or may be specified through a user interface. Panning and zooming controls may be associated with the content, with panning and zooming based on the location and size of the content within the video data. The device may determine a number of pixels associated with the content and may frame the content to occupy a certain percentage of the edited video data, such as a close-up shot in which a subject fills 50% of the viewing frame. The device may identify an event of interest, determine multiple frames associated with the event of interest, and pan and zoom between those frames based on the size and location of the content within them.
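The framing rule (e.g., a subject occupying 50% of the viewing frame) amounts to choosing a crop whose area makes the subject's pixel count the target fraction of the output. A rough sketch follows, assuming an area-based definition of the percentage and hypothetical names; clamping the crop to the frame bounds is omitted for brevity.

```python
# Illustrative sketch of percentage-based framing: given a subject
# bounding box, choose a crop so the subject fills a target fraction
# of the output frame (e.g., 0.5 for a close-up).

def framing_crop(box, frame_w, frame_h, target_fraction=0.5):
    """Return (cx, cy, crop_w, crop_h) centered on the subject."""
    x, y, w, h = box                  # subject bounding box in pixels
    subject_area = w * h
    # Pick a crop area such that subject_area / crop_area == target_fraction.
    crop_area = subject_area / target_fraction
    aspect = frame_w / frame_h        # preserve the output aspect ratio
    crop_h = (crop_area / aspect) ** 0.5
    crop_w = crop_h * aspect
    cx, cy = x + w / 2, y + h / 2     # pan so the crop centers on the subject
    return cx, cy, crop_w, crop_h
```

Interpolating `(cx, cy, crop_w, crop_h)` between the frames associated with an event of interest would then yield the smooth pan-and-zoom effect the abstract describes.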
Abstract:
A device may recognize a tilt gesture when the device rotates about an axis and then back again. The gesture may be recognized using a state machine. Recognition of the gesture may be performed based on a context of the device, where the specific movement that constitutes a tilt gesture may change based on the context. The tilt gesture may be confirmed using a classifier trained on features describing the gesture and the context.
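A tilt-and-return gesture of this kind can be modeled as a two-state machine: idle until the rotation exceeds a threshold, then waiting for the angle to return within a time limit. The sketch below is one plausible reading; the thresholds, timing, and drift handling are assumptions, and the context gating and classifier confirmation are omitted.

```python
# Minimal sketch of a tilt-gesture state machine: the device rotates
# past a threshold about one axis, then returns, within a time limit.
# All state names and constants are illustrative assumptions.

IDLE, TILTED = 0, 1
TILT_THRESHOLD = 0.35    # radians of rotation away from the resting angle
RETURN_THRESHOLD = 0.10  # radians; close enough to count as "back again"
MAX_DURATION = 1.0       # seconds allowed for the full tilt-and-return

class TiltGestureRecognizer:
    def __init__(self):
        self.state = IDLE
        self.rest_angle = 0.0
        self.start_time = 0.0

    def update(self, angle: float, t: float) -> bool:
        """Feed (angle, timestamp) samples; return True when a gesture completes."""
        if self.state == IDLE:
            if abs(angle - self.rest_angle) > TILT_THRESHOLD:
                self.state, self.start_time = TILTED, t
        else:  # TILTED: wait for rotation back toward the resting angle
            if t - self.start_time > MAX_DURATION:
                self.state = IDLE        # too slow; abandon and re-baseline
                self.rest_angle = angle
            elif abs(angle - self.rest_angle) < RETURN_THRESHOLD:
                self.state = IDLE
                return True              # rotated out and back: gesture
        if self.state == IDLE:
            # Track the resting angle slowly so drift doesn't trigger gestures.
            self.rest_angle += 0.05 * (angle - self.rest_angle)
        return False
```

In the disclosed approach, a candidate gesture emitted by such a state machine would additionally be confirmed by a classifier trained on features of the gesture and the device context before any action is taken.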