Abstract:
A device with a display and a touch-sensitive surface: displays a geographic map in a first mode of an application, the geographic map including a plurality of landmarks, the geographic map being displayed at a first magnification level; detects a first input, the first input including a first finger contact at a location on the touch-sensitive surface that corresponds to a first landmark on the display; in response to detecting the first input: when the first input does not satisfy one or more predefined mode-change conditions, changes the magnification level in accordance with the first input and remains in the first mode; and when the first input satisfies the mode-change conditions, selects the first landmark and enters a second mode of the application; while in the second mode, detects a second input; and, in response to detecting the second input, displays information about the first landmark.
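A minimal sketch of the mode-change logic this abstract describes, in Python. The predicate used for the mode-change conditions (a long, nearly stationary press on a landmark) and all names here are assumptions for illustration, not taken from the patent:

```python
# Sketch of the described two-mode map interaction; the mode-change
# condition and all identifiers are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TouchInput:
    landmark: Optional[str]   # landmark under the finger contact, if any
    duration: float           # seconds the contact was held
    movement: float           # pixels the contact moved

class MapApp:
    MODE_MAP, MODE_LANDMARK = "map", "landmark"

    def __init__(self):
        self.mode = self.MODE_MAP
        self.magnification = 1.0
        self.selected = None

    def satisfies_mode_change(self, inp):
        # Assumed condition: a long, nearly stationary press on a landmark.
        return inp.landmark is not None and inp.duration > 0.5 and inp.movement < 10

    def on_first_input(self, inp):
        if self.satisfies_mode_change(inp):
            self.selected = inp.landmark     # select the landmark...
            self.mode = self.MODE_LANDMARK   # ...and enter the second mode
        else:
            self.magnification *= 2.0        # change magnification; stay in first mode

    def on_second_input(self):
        if self.mode == self.MODE_LANDMARK:
            print(f"Information about {self.selected}")

app = MapApp()
app.on_first_input(TouchInput(landmark="Space Needle", duration=0.8, movement=2))
app.on_second_input()   # -> Information about Space Needle
```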
Abstract:
A hybrid positioning system for continuously and accurately determining the location of a mobile device is provided. Samples of GPS locations from a pool of mobile devices, together with accompanying cell tower data, WLAN data, or other comparable network signals, are used to construct a dynamic map of particular regions. The dynamic map(s) may be sent to and stored on individual mobile devices so that a mobile device can compare its less accurate but more readily available data, such as cell tower signals, to the recorded samples and estimate its position more accurately and continuously. The position data may be sent to a server for use in location-based services.
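One way to read the matching step is as a signal-fingerprint lookup: the device compares its current cell tower observations against recorded (signal, GPS) samples in the dynamic map and adopts the location of the closest match. A minimal sketch under that assumption; the map layout and distance metric below are illustrative, not the patent's:

```python
# Illustrative nearest-fingerprint position estimate; the dynamic-map
# structure and similarity metric are assumptions.
from math import inf

# dynamic map: list of (cell_signal_fingerprint, (lat, lon)) samples,
# where a fingerprint maps tower_id -> observed signal strength (dBm)
dynamic_map = [
    ({"tower_a": -70, "tower_b": -85}, (47.6419, -122.1305)),
    ({"tower_a": -90, "tower_b": -60}, (47.6500, -122.1400)),
]

def fingerprint_distance(f1, f2):
    """Mean squared difference over towers seen in both fingerprints."""
    shared = f1.keys() & f2.keys()
    if not shared:
        return inf
    return sum((f1[t] - f2[t]) ** 2 for t in shared) / len(shared)

def estimate_position(observed):
    """Return the GPS location of the closest recorded fingerprint."""
    best = min(dynamic_map, key=lambda s: fingerprint_distance(observed, s[0]))
    return best[1]

print(estimate_position({"tower_a": -72, "tower_b": -83}))  # near the first sample
```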
Abstract:
Users may view web pages, play games, send emails, take photos, and perform other tasks using mobile devices. Unfortunately, the limited screen size and resolution of mobile devices may restrict users from adequately viewing virtual objects, such as maps, images, email, user interfaces, etc. Accordingly, one or more systems and/or techniques for displaying portions of virtual objects on a mobile device are disclosed herein. A mobile device may be configured with one or more sensors (e.g., a digital camera, an accelerometer, or a magnetometer) configured to detect motion of the mobile device (e.g., a pan, tilt, or forward/backward motion). A portion of a virtual object may be determined based upon the detected motion and displayed on the mobile device. For example, a view of a top portion of an email may be displayed on a cell phone based upon the user panning the cell phone in an upward direction.
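A rough sketch of how sensed motion might be mapped to the displayed portion of a larger virtual object; the gain constants, clamping policy, and class names are assumptions for illustration, not taken from the abstract:

```python
# Sketch: map detected device motion to the visible portion of a larger
# virtual object; gains and clamping are illustrative assumptions.

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

class VirtualObjectView:
    def __init__(self, obj_w, obj_h, screen_w, screen_h):
        self.obj_w, self.obj_h = obj_w, obj_h
        self.screen_w, self.screen_h = screen_w, screen_h
        self.x = (obj_w - screen_w) / 2    # start centered on the object
        self.y = (obj_h - screen_h) / 2
        self.zoom = 1.0

    def on_motion(self, pan_x, pan_y, forward):
        """pan_x/pan_y: sensed pan in device units; forward > 0 = toward the object."""
        GAIN = 40.0    # assumed pixels of scroll per unit of sensed pan
        self.x = clamp(self.x + GAIN * pan_x, 0, self.obj_w - self.screen_w)
        self.y = clamp(self.y - GAIN * pan_y, 0, self.obj_h - self.screen_h)  # pan up -> show top
        self.zoom = clamp(self.zoom + 0.1 * forward, 0.5, 4.0)

    def visible_portion(self):
        return (self.x, self.y, self.screen_w / self.zoom, self.screen_h / self.zoom)

view = VirtualObjectView(obj_w=2000, obj_h=3000, screen_w=320, screen_h=480)
view.on_motion(0.0, 1.0, 0.0)   # user pans the phone upward
print(view.visible_portion())   # the viewport moves toward the top of the email
```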
Abstract:
Keyboards, mice, joysticks, customized gamepads, and other peripherals are continually being developed to enhance a user's experience when playing computer video games. Unfortunately, many of these devices provide users with limited input control because of the complexity of today's gaming applications. For example, many computer video games require a combination of mouse and keyboard to control even the simplest of in-game tasks (e.g., walking into a room and looking around may require several keyboard keystrokes and mouse movements). Accordingly, one or more systems and/or techniques for performing in-game tasks based upon user input within a multi-touch mouse are disclosed herein. User input comprising one or more user interactions detected by spatial sensors within the multi-touch mouse may be received. A wide variety of in-game tasks (e.g., character movements, character actions, character view, etc.) may be performed based upon the user interactions (e.g., a swipe gesture, a mouse position change, etc.).
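At its core, this is a dispatch from detected interactions to in-game tasks. A minimal sketch; the gesture names and the task table are invented for illustration, not taken from the abstract:

```python
# Sketch: dispatch spatial-sensor interactions from a multi-touch mouse
# to in-game tasks; gestures and tasks here are illustrative assumptions.
from typing import Optional

IN_GAME_TASKS = {
    ("swipe", "up"):           "character_move_forward",
    ("swipe", "down"):         "character_move_backward",
    ("swipe", "left"):         "character_strafe_left",
    ("pinch", None):           "zoom_character_view",
    ("two_finger_drag", None): "rotate_character_view",
}

def handle_interaction(kind: str, direction: Optional[str] = None) -> str:
    """Map one detected user interaction to an in-game task."""
    return IN_GAME_TASKS.get((kind, direction), "noop")   # ignore unknown gestures

# A single swipe on the mouse surface walks the character forward,
# replacing a keyboard-and-mouse combination.
print(handle_interaction("swipe", "up"))   # -> character_move_forward
```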
Abstract:
Technologies for a camera-based multi-touch input device operable to provide conventional mouse movement data as well as three-dimensional multi-touch data. Such a device is based on an internal camera focused on a mirror or set of mirrors that enable the camera to image the inside of a working surface of the device. The working surface allows light to pass through. An internal light source illuminates the inside of the working surface and reflects off of any objects proximate to the outside of the device. This reflected light is received by the mirror and then directed to the camera. Images from the camera can be processed to extract touch points corresponding to the positions of one or more objects outside the working surface, as well as to detect gestures performed by the objects. Thus, the device can provide conventional mouse functionality as well as three-dimensional multi-touch functionality.
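The touch-point extraction step could, for instance, be read as thresholding the reflected-light image and taking the centroid of each bright region. A rough sketch under that assumption (a real pipeline would likely use proper connected-component labeling and calibration):

```python
# Sketch: extract touch points from the internal camera image by
# thresholding and centroiding bright regions; an assumed reading of
# the abstract, not the device's actual processing.

def touch_points(image, threshold=200):
    """image: 2D list of grayscale pixels (0-255). Returns one centroid
    per bright region, found by a naive flood fill."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    points = []
    for y in range(h):
        for x in range(w):
            if image[y][x] >= threshold and not seen[y][x]:
                # flood-fill one bright region (one fingertip reflection)
                stack, region = [(y, x)], []
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and not seen[cy][cx] \
                            and image[cy][cx] >= threshold:
                        seen[cy][cx] = True
                        region.append((cy, cx))
                        stack += [(cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)]
                cy = sum(p[0] for p in region) / len(region)
                cx = sum(p[1] for p in region) / len(region)
                points.append((cx, cy))
    return points

img = [[0, 0, 0, 0],
       [0, 255, 255, 0],
       [0, 255, 255, 0],
       [0, 0, 0, 0]]
print(touch_points(img))   # -> [(1.5, 1.5)], one fingertip centroid
```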
Abstract:
Labels of elements in images may be compared to known elements to determine a region from which an image was created. Using this information, the approximate image position can be found, additional elements may be recognized, labels may be checked for accuracy and additional labels may be added.
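Read as an algorithm, this amounts to scoring each candidate region by how many of the image's labels match that region's known elements. A minimal sketch with invented data (the gazetteer and scoring are assumptions, not from the abstract):

```python
# Sketch: infer an image's region by matching its element labels against
# known elements per region; data and scoring are illustrative.

KNOWN_ELEMENTS = {
    "seattle_downtown": {"Space Needle", "Pike Place Market", "5th Ave"},
    "paris_center":     {"Eiffel Tower", "Seine", "Louvre"},
}

def infer_region(image_labels):
    """Return the region whose known elements best overlap the labels."""
    return max(KNOWN_ELEMENTS, key=lambda r: len(KNOWN_ELEMENTS[r] & image_labels))

labels = {"Space Needle", "5th Ave", "Monorail"}
print(infer_region(labels))   # -> seattle_downtown
# With the region known, unmatched labels ("Monorail") can be checked
# against that region's catalog, and missing labels can be proposed.
```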
Abstract:
A first image may be displayed adjacent to a second image, where the second image is a three-dimensional image. An element may be selected in the first image and a matching element may be selected in the second image. A selection may be permitted to view a merged view, where the merged view is the first image displayed over the second image with the opaqueness of the images varied. If the merged view is not acceptable, the method may repeat; if the merged view is acceptable, the mapping of the first view onto the second view and the merged view may be stored as a merged image.
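The merged view can be modeled as plain alpha blending of the two images, with the user adjusting the opacity to judge alignment. A sketch under that assumption; the pixel format and blend formula are standard choices, not taken from the abstract:

```python
# Sketch: merged view as alpha blending of the first image over the
# second; an assumed model, not the patent's exact method.

def merge_views(first, second, opacity):
    """Blend two same-sized RGB images; opacity in [0, 1] applies to `first`.
    Each image is a 2D list of (r, g, b) tuples."""
    return [
        [tuple(int(opacity * a + (1 - opacity) * b) for a, b in zip(p1, p2))
         for p1, p2 in zip(row1, row2)]
        for row1, row2 in zip(first, second)
    ]

first  = [[(255, 0, 0)]]          # 1x1 red image
second = [[(0, 0, 255)]]          # 1x1 blue rendering of the 3-D view
print(merge_views(first, second, 0.5))   # -> [[(127, 0, 127)]]
# The user varies `opacity` to judge alignment; once acceptable, the
# mapping and the merged result are stored.
```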
Abstract:
An interest center-point and a start point are created in an image. A potential function is created, where the potential function creates a potential field and guides traversal from the start point to the interest center-point. The potential field is adjusted to include a sum of potential fields directed toward the center-point, where each potential field corresponds to an image. Images are displayed in the potential field at intervals along the traversal from the start point toward the interest center-point.
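One plausible reading is gradient descent on an attractive potential centered at the interest point, with one additional attractive term summed in per image, and an image displayed every few steps of the traversal. A sketch under those assumptions; the field shape, gains, and interval are illustrative:

```python
# Sketch: potential-field traversal from a start point toward an
# interest center-point; field shape and parameters are assumptions.

def attractive_step(pos, center, gain=0.2):
    """One gradient-descent step on the quadratic potential 0.5*|pos-center|^2."""
    return (pos[0] + gain * (center[0] - pos[0]),
            pos[1] + gain * (center[1] - pos[1]))

def traverse(start, center, per_image_fields=(), steps=10):
    """Walk from start toward center; extra fields (one per image) sum in."""
    pos = start
    for i in range(steps):
        pos = attractive_step(pos, center)
        for img_center in per_image_fields:   # sum of per-image fields
            pos = attractive_step(pos, img_center, gain=0.05)
        if i % 3 == 0:                        # display an image at intervals
            print(f"step {i}: display image near {pos}")
    return pos

traverse(start=(0.0, 0.0), center=(10.0, 10.0),
         per_image_fields=[(4.0, 6.0), (7.0, 9.0)])
```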