Abstract:
A multitouch device can interpret and disambiguate different gestures related to manipulating a displayed image of a 3D object, scene, or region. Examples of manipulations include pan, zoom, rotation, and tilt. The device can define a number of manipulation modes, including one or more single-control modes such as a pan mode, a zoom mode, a rotate mode, and/or a tilt mode. The manipulation modes can also include one or more multi-control modes, such as a pan/zoom/rotate mode that allows multiple parameters to be modified simultaneously.
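For illustration, here is a minimal Swift sketch of such mode disambiguation. The abstract does not specify an algorithm, so the type names, thresholds, and classification rule below are assumptions: a two-finger sample is assigned a single-control mode when one parameter clearly dominates, and the multi-control pan/zoom/rotate mode otherwise.

```swift
import Foundation

/// A sketch of gesture disambiguation between single-control and
/// multi-control manipulation modes. Mode names, thresholds, and the
/// classification rule are illustrative assumptions, not the patent's method.
enum ManipulationMode {
    case pan, zoom, rotate, tilt
    case panZoomRotate          // multi-control: several parameters change at once
}

struct TwoFingerSample {
    var centroidDelta: Double   // movement of the midpoint between the fingers
    var spanDelta: Double       // change in the distance between the fingers
    var angleDelta: Double      // change in the finger-axis angle, in radians
}

func classify(_ s: TwoFingerSample) -> ManipulationMode {
    // Dominance thresholds decide whether one parameter clearly wins.
    let panning  = s.centroidDelta > 8.0
    let zooming  = abs(s.spanDelta) > 6.0
    let rotating = abs(s.angleDelta) > 0.05

    switch (panning, zooming, rotating) {
    case (true, false, false): return .pan
    case (false, true, false): return .zoom
    case (false, false, true): return .rotate
    default:                   return .panZoomRotate  // ambiguous: allow all three
    }
}
```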
Abstract:
Methods, systems, and apparatus are described to provide visual feedback of a change in map view. Various embodiments may display a map in a two-dimensional map view mode. Embodiments may obtain input indicating a change to a three-dimensional map view mode; the input may be obtained through touch, auditory, or other well-known input technologies. Some embodiments may allow the input to request a specific display position. In response to the input, embodiments may then display an animation that moves a virtual camera for the map display through different virtual camera positions to illustrate that the map view mode has changed to a three-dimensional map view mode.
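A minimal sketch of such a transition animation, assuming a hypothetical VirtualCamera type and a smoothstep easing curve (neither is specified by the abstract): intermediate camera positions are generated between the top-down 2D pose and the tilted 3D pose.

```swift
import Foundation

/// Sketch of the 2D-to-3D transition: a virtual camera is moved through
/// intermediate positions so the user sees the view mode change.
struct VirtualCamera {
    var tiltDegrees: Double   // 0 = straight down (2D look), >0 = perspective
    var altitude: Double
}

func animationFrames(from flat: VirtualCamera,
                     to perspective: VirtualCamera,
                     steps: Int) -> [VirtualCamera] {
    (0...steps).map { i in
        let t = Double(i) / Double(steps)
        let eased = t * t * (3 - 2 * t)   // smoothstep easing
        return VirtualCamera(
            tiltDegrees: flat.tiltDegrees + (perspective.tiltDegrees - flat.tiltDegrees) * eased,
            altitude: flat.altitude + (perspective.altitude - flat.altitude) * eased)
    }
}

// Example: animate from a top-down 2D view to a 45-degree 3D view.
let frames = animationFrames(from: VirtualCamera(tiltDegrees: 0, altitude: 1000),
                             to: VirtualCamera(tiltDegrees: 45, altitude: 600),
                             steps: 30)
```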
Abstract:
Methods, systems, and apparatus are described to dynamically generate map textures. A client device may obtain map data, which may include one or more shapes described by vector graphics data. The map data may also include texture indicators linked to the one or more shapes. Embodiments may render the map data. For one or more shapes, a texture definition may be obtained. Based on the texture definition, a client device may dynamically generate a texture for the shape. The texture may then be applied to the shape to render a current fill portion of the shape. In some embodiments, the rendered map view is displayed.
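As a sketch of what client-side texture generation could look like, assuming a hypothetical TextureDefinition format and a simple stripe pattern (the abstract leaves both unspecified): the client builds a small bitmap from the definition, which a real renderer would then apply to the shape's fill region.

```swift
import Foundation

/// Sketch of generating a fill texture on the client from a texture
/// definition shipped with vector map data.
struct TextureDefinition {
    var foreground: UInt32    // packed RGBA
    var background: UInt32
    var stripeWidth: Int
}

func generateTexture(_ def: TextureDefinition, size: Int) -> [[UInt32]] {
    (0..<size).map { y in
        (0..<size).map { x in
            // Diagonal stripes: alternate colors along x + y.
            ((x + y) / def.stripeWidth) % 2 == 0 ? def.foreground : def.background
        }
    }
}

// Example: a striped green fill, e.g. for a park shape.
let parkFill = generateTexture(
    TextureDefinition(foreground: 0x77CC77FF, background: 0x559955FF, stripeWidth: 4),
    size: 64)
```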
Abstract:
A device that includes at least one processing unit and stores a multi-mode mapping program for execution by the at least one processing unit is described. The program includes a user interface (UI). The UI includes a display area for displaying a two-dimensional (2D) presentation of a map or a three-dimensional (3D) presentation of the map. The UI includes a selectable 3D control for directing the program to transition between the 2D and 3D presentations.
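A minimal UIKit sketch of such a selectable 3D control; the map view and its perspective-setting call are hypothetical stand-ins, since the abstract describes only the UI behavior:

```swift
import UIKit

/// Sketch of a selectable "3D" control that directs the program to
/// transition between 2D and 3D map presentations.
final class MapModeViewController: UIViewController {
    private let threeDButton = UIButton(type: .system)
    private var showing3D = false

    override func viewDidLoad() {
        super.viewDidLoad()
        threeDButton.setTitle("3D", for: .normal)
        threeDButton.addTarget(self, action: #selector(toggleMode), for: .touchUpInside)
        view.addSubview(threeDButton)
    }

    @objc private func toggleMode() {
        showing3D.toggle()
        // Reflect the active mode in the control and re-render the map.
        threeDButton.setTitle(showing3D ? "2D" : "3D", for: .normal)
        // mapView.setPerspective(showing3D)   // hypothetical map API
    }
}
```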
Abstract:
A mapping program for execution by at least one processing unit of a device is described. The device includes a touch-sensitive screen and a multi-touch input interface. The program renders and displays a presentation of a map from a particular view of the map. The program generates an instruction to rotate the displayed map in response to a multi-touch input from the multi-touch input interface. To generate a rotating presentation of the map, the program changes the particular view while receiving the multi-touch input and for a duration of time after the multi-touch input has terminated, thereby providing a degree of inertia motion for the rotating presentation of the map.
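A sketch of the inertia behavior in Swift, with an assumed exponential decay model (the abstract does not specify one): while the gesture is active the angle tracks the fingers; after release, the last angular velocity decays over time so the rotation eases to a stop.

```swift
import Foundation

/// Sketch of inertial rotation after a multi-touch rotate gesture ends.
/// The decay constant and stop threshold are illustrative assumptions.
struct RotationInertia {
    var angle: Double            // current map rotation, radians
    var angularVelocity: Double  // radians per second at release
    let decayPerSecond = 0.15    // fraction of velocity remaining after one second

    mutating func step(dt: Double) -> Bool {
        angle += angularVelocity * dt
        angularVelocity *= pow(decayPerSecond, dt)
        return abs(angularVelocity) > 0.001   // false once motion is negligible
    }
}

var inertia = RotationInertia(angle: 0, angularVelocity: 1.2)
while inertia.step(dt: 1.0 / 60.0) { }        // run until the spin settles
print(inertia.angle)                          // final resting angle
```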
Abstract:
Systems and methods for rendering 3D maps may highlight a feature in a 3D map while preserving depth. A map tool of a mapping or navigation application that detects the selection of a feature in a 3D map (e.g., by touch) may perform a ray intersection to determine the feature that was selected. The map tool may capture the frame to be displayed (with the selected feature highlighted) in several steps. Each step may translate the map about a pivot point of the selected map feature (e.g., in three or four directions) to capture a new frame. The captured frames may be blended together to create a blurred map view that depicts 3D depth in the scene. A crisp version of the selected feature may then be rendered within the otherwise blurred 3D map. Color, brightness, contrast, or saturation values may be modified to further highlight the selected feature.
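A sketch of the blur-by-blending step, using grayscale pixel arrays for simplicity; the render and compositing calls are hypothetical stand-ins. The scene is rendered several times, each translated slightly about the selected feature's pivot point, and the frames are averaged: pixels far from the pivot smear (conveying depth) while the pivot area stays stable, and the selected feature is then drawn crisply on top.

```swift
import Foundation

/// Average several offset renders into one blurred frame.
func blendFrames(_ frames: [[Double]]) -> [Double] {
    guard let first = frames.first else { return [] }
    var out = [Double](repeating: 0, count: first.count)
    for frame in frames {
        for i in frame.indices { out[i] += frame[i] }
    }
    return out.map { $0 / Double(frames.count) }   // per-pixel average
}

// Four hypothetical renders, offset up/down/left/right about the pivot.
let offsets = [(0.0, 1.0), (0.0, -1.0), (1.0, 0.0), (-1.0, 0.0)]
// let blurred = blendFrames(offsets.map { renderScene(pivotOffset: $0) })
// drawCrisp(selectedFeature, over: blurred)       // hypothetical compositing
```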
Abstract:
Techniques for performing context-sensitive actions in response to touch input are provided. A user interface of an application can be displayed. Touch input can be received in a region of the displayed user interface, and a context can be determined. A first action may be performed if the context is a first context and a second action may instead be performed if the context is a second context different from the first context. In some embodiments, an action may be performed if the context is a first context and the touch input is a first touch input, and may also be performed if the context is a second context and the touch input is a second touch input.
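A minimal sketch of such context-sensitive dispatch; the context names and actions below are illustrative assumptions, chosen to show the same touch producing different behavior in different contexts:

```swift
import Foundation

/// Sketch of dispatching an action based on both the touch input
/// and the application's current context.
enum TouchContext { case browsing, navigating }
enum TouchKind { case tap, longPress }

func action(for touch: TouchKind, in context: TouchContext) -> String {
    switch (context, touch) {
    case (.browsing, .tap):         return "select point of interest"
    case (.browsing, .longPress):   return "drop a pin"
    case (.navigating, .tap):       return "show route overview"
    case (.navigating, .longPress): return "report an incident"
    }
}
```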
Abstract:
The described embodiments provide a system for performing an action based on a change in a status of a wired or wireless network connection for the system. During operation, the system detects the change in the status of the network connection. In response to detecting the change, the system determines a state of the system. The system then performs one or more actions using the determined state.
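A sketch of this pattern using Apple's Network framework: a path monitor detects the status change, a relevant piece of system state is determined, and an action is chosen accordingly. The deferred-uploads policy shown is an illustrative assumption, not the patent's.

```swift
import Foundation
import Network

/// Sketch: detect a connection-status change, inspect system state,
/// then perform an action based on that state.
let monitor = NWPathMonitor()
monitor.pathUpdateHandler = { path in
    let connected = path.status == .satisfied
    let onExpensiveLink = path.isExpensive        // e.g., a cellular link
    if connected && !onExpensiveLink {
        print("resume deferred uploads")          // hypothetical action
    } else if connected {
        print("sync essentials only")
    } else {
        print("queue work for later")
    }
}
monitor.start(queue: DispatchQueue(label: "net.monitor"))
```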
Abstract:
A mobile device including a touchscreen display presents an image of a three-dimensional object. The display can concurrently present a user interface element that can be in the form of a virtual button. While the device's user touches and maintains fingertip contact with the virtual button via the touchscreen, the mobile device can operate in a special mode in which physical tilting of the mobile device about physical spatial axes causes the mobile device to adjust the presentation of the image of the three-dimensional object on the display, causing the object to be rendered from different viewpoints in the virtual space that the object virtually occupies. The mobile device can detect such physical tilting based on feedback from a gyroscope and accelerometer contained within the device.
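A sketch of the hold-to-tilt mode using UIKit and CoreMotion: while the user keeps a finger on the virtual button, device-motion updates drive the 3D viewpoint, and lifting the finger freezes it. The renderer call is a hypothetical stand-in; everything else uses standard framework APIs.

```swift
import UIKit
import CoreMotion

final class TiltViewController: UIViewController {
    private let motion = CMMotionManager()
    private let tiltButton = UIButton(type: .system)

    override func viewDidLoad() {
        super.viewDidLoad()
        tiltButton.setTitle("Tilt", for: .normal)
        // Enter the special mode on touch-down, leave it when contact ends.
        tiltButton.addTarget(self, action: #selector(beginTilt), for: .touchDown)
        tiltButton.addTarget(self, action: #selector(endTilt),
                             for: [.touchUpInside, .touchUpOutside, .touchCancel])
        view.addSubview(tiltButton)
    }

    @objc private func beginTilt() {
        guard motion.isDeviceMotionAvailable else { return }
        motion.deviceMotionUpdateInterval = 1.0 / 60.0
        motion.startDeviceMotionUpdates(to: .main) { data, _ in
            guard let attitude = data?.attitude else { return }
            // Map physical pitch/roll onto the virtual camera's orientation.
            // renderer.setViewpoint(pitch: attitude.pitch, roll: attitude.roll)
            _ = attitude
        }
    }

    @objc private func endTilt() {
        motion.stopDeviceMotionUpdates()
    }
}
```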