Abstract:
Some embodiments of the invention provide a navigation application that allows a user to peek ahead or behind during a turn-by-turn navigation presentation that the application provides while tracking the traversal of a physical route by a device (e.g., a mobile device, a vehicle, etc.). As the device traverses along the physical route, the navigation application generates a navigation presentation that shows a representation of the device on a map traversing along a virtual route that represents the physical route on the map. While providing the navigation presentation, the navigation application can receive user input to look ahead or behind along the virtual route. Based on the user input, the navigation application moves the navigation presentation to show locations on the virtual route that are ahead or behind the displayed current location of the device on the virtual route. This movement can cause the device representation to no longer be visible in the navigation presentation. Also, the virtual route often includes several turns, and the peek ahead or behind movement of the navigation presentation passes the presentation through one or more of these turns. In some embodiments, the map can be defined and presented as a two-dimensional (2D) or a three-dimensional (3D) scene.
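The peek mechanics described above can be pictured as sliding the presentation's camera target along the route polyline by a signed arc-length offset. The Python sketch below is a minimal illustration under that assumption; names such as Route and peek_target are invented for the example, not the claimed implementation.

    from bisect import bisect_right
    import math

    class Route:
        """A virtual route as a 2D polyline with precomputed cumulative arc lengths."""
        def __init__(self, points):
            self.points = points
            self.cum = [0.0]
            for (x0, y0), (x1, y1) in zip(points, points[1:]):
                self.cum.append(self.cum[-1] + math.hypot(x1 - x0, y1 - y0))

        def point_at(self, s):
            """Interpolate the route point at arc length s, clamped to the route."""
            s = max(0.0, min(s, self.cum[-1]))
            i = min(max(bisect_right(self.cum, s), 1), len(self.cum) - 1)
            t = (s - self.cum[i - 1]) / ((self.cum[i] - self.cum[i - 1]) or 1.0)
            (x0, y0), (x1, y1) = self.points[i - 1], self.points[i]
            return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))

    def peek_target(route, device_s, peek_offset):
        """Camera target when peeking ahead (+) or behind (-) the device,
        which sits at arc length device_s along the route."""
        return route.point_at(device_s + peek_offset)

    route = Route([(0, 0), (100, 0), (100, 50)])  # one 90-degree turn at (100, 0)
    print(peek_target(route, 40.0, 80.0))   # peek ahead, past the turn -> (100.0, 20.0)
    print(peek_target(route, 40.0, -60.0))  # peek behind, clamped to the start -> (0.0, 0.0)

Because the camera target can move arbitrarily far from the device's own position, the device representation naturally falls out of view once the offset exceeds the viewport's reach, and a large offset carries the presentation through any intervening turns, as the abstract notes.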
Abstract:
A multitouch device can interpret and disambiguate different gestures related to manipulating a displayed image of a 3D object, scene, or region. Examples of manipulations include pan, zoom, rotation, and tilt. The device can define a number of manipulation modes, including one or more single-control modes such as a pan mode, a zoom mode, a rotate mode, and/or a tilt mode. The manipulation modes can also include one or more multi-control modes, such as a pan/zoom/rotate mode that allows multiple parameters to be modified simultaneously.
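One way to picture the disambiguation is as a threshold race: whichever gesture parameter (zoom, rotate, pan) first exceeds its threshold locks a single-control mode, and when several exceed together the device falls into a multi-control mode. The Python sketch below is a rough illustration of that idea; the thresholds and function names are assumptions, not the device's actual logic.

    import math

    # Thresholds (hypothetical values) for locking into a single-control mode.
    ZOOM_THRESHOLD = 20.0      # change in finger separation, in points
    ROTATE_THRESHOLD = 0.15    # change in inter-finger angle, in radians
    PAN_THRESHOLD = 15.0       # centroid travel, in points

    def classify(start, current):
        """Pick a manipulation mode from two-finger start/current positions.

        Returns 'zoom', 'rotate', or 'pan' when exactly one parameter has
        exceeded its threshold, 'pan/zoom/rotate' when several have, and
        None while the gesture is still ambiguous.
        """
        (a0, b0), (a1, b1) = start, current
        d0, d1 = math.dist(a0, b0), math.dist(a1, b1)
        ang0 = math.atan2(b0[1] - a0[1], b0[0] - a0[0])
        ang1 = math.atan2(b1[1] - a1[1], b1[0] - a1[0])
        c0 = ((a0[0] + b0[0]) / 2, (a0[1] + b0[1]) / 2)
        c1 = ((a1[0] + b1[0]) / 2, (a1[1] + b1[1]) / 2)

        exceeded = [m for m, hit in [
            ('zoom', abs(d1 - d0) > ZOOM_THRESHOLD),
            ('rotate', abs(ang1 - ang0) > ROTATE_THRESHOLD),
            ('pan', math.dist(c0, c1) > PAN_THRESHOLD),
        ] if hit]
        if len(exceeded) == 1:
            return exceeded[0]           # lock a single-control mode
        if len(exceeded) > 1:
            return 'pan/zoom/rotate'     # multi-control mode
        return None

    # Fingers spread apart with no rotation or centroid motion -> zoom.
    print(classify(((0, 0), (50, 0)), ((-15, 0), (65, 0))))  # 'zoom'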
Abstract:
For a device that runs a mapping application, a method provides maneuver indicators along a route of a map. The maneuver indicators are arrows that identify the direction and orientation of a maneuver. A maneuver arrow may be selected and displayed differently from unselected maneuver arrows. Maneuver arrows may be selected automatically based on a user's current location. The mapping application transitions between maneuver arrows and provides an animation for the transition. Complex maneuvers may be indicated by multiple arrows, providing more detailed guidance for a user of the mapping application.
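The automatic selection can be sketched as picking the first maneuver the user has not yet passed. The Python below is a hypothetical illustration; ManeuverArrow and the distance-along-route selection rule are assumptions, not the application's actual data model.

    from dataclasses import dataclass

    @dataclass
    class ManeuverArrow:
        route_distance: float    # distance along the route, e.g. in meters
        turn: str                # e.g. 'left', 'right', 'u-turn'
        selected: bool = False

    def select_next_maneuver(arrows, user_distance):
        """Select the first maneuver the user has not yet passed; deselect the rest.

        A complex maneuver (e.g. a quick left-then-right) would appear here as
        multiple consecutive arrows, each selected in turn as the user advances.
        """
        for arrow in arrows:
            arrow.selected = False
        upcoming = [a for a in arrows if a.route_distance >= user_distance]
        if upcoming:
            min(upcoming, key=lambda a: a.route_distance).selected = True
        return arrows

    arrows = [ManeuverArrow(120.0, 'left'), ManeuverArrow(450.0, 'right')]
    for a in select_next_maneuver(arrows, user_distance=200.0):
        style = 'highlighted' if a.selected else 'dimmed'
        print(f"{a.turn} at {a.route_distance} m -> {style}")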
Abstract:
Methods, systems and apparatus are described to provide visual feedback of a change in map view. Various embodiments may display a map view of a map in a two-dimensional map view mode. Embodiments may obtain input indicating a change to a three-dimensional map view mode. Input may be obtained through touch, auditory, or other well-known input technologies. Some embodiments may allow the input to request a specific display position. In response to the input indicating a change to a three-dimensional map view mode, embodiments may then display an animation that moves a virtual camera for the map display through different virtual camera positions to illustrate that the map view mode has changed to a three-dimensional map view mode.
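The animation can be pictured as interpolating the virtual camera between a top-down pose and a tilted pose. A rough Python sketch under that assumption follows; the tilt angles, altitudes, and easing curve are illustrative values, not the described embodiments.

    def ease_in_out(t):
        """Smoothstep easing so the camera accelerates and settles gently."""
        return t * t * (3 - 2 * t)

    def camera_keyframes(start_tilt=0.0, end_tilt=45.0, start_alt=1000.0,
                         end_alt=600.0, frames=10):
        """Yield (tilt_degrees, altitude) virtual-camera positions animating
        the transition from a top-down 2D view to a tilted 3D view."""
        for i in range(frames + 1):
            t = ease_in_out(i / frames)
            yield (start_tilt + t * (end_tilt - start_tilt),
                   start_alt + t * (end_alt - start_alt))

    for tilt, alt in camera_keyframes():
        print(f"tilt={tilt:5.1f} deg  altitude={alt:7.1f} m")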
Abstract:
Methods and systems are provided for efficiently identifying which map tiles of a raised-relief map to retrieve from a server. An electronic device can use estimates of the heights of various regions of the map to determine the map tiles that are likely viewable from a given position of a virtual camera. The device can calculate the intersection of the virtual camera's field of view with the estimated heights to determine the locations of the needed map tiles (e.g., as determined by a 2D grid). In this manner, the electronic device can retrieve, from a map server, the map tiles needed to display the image without retrieving extraneous tiles that are not needed. Identifying such tiles can reduce the amount of data sent across a network and reduce the number of requests for tiles, since the correct tiles can be obtained with the first request.
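The core geometric step can be sketched as intersecting the frustum's corner rays with a plane at the estimated terrain height and enumerating the grid tiles under the resulting footprint. The simplified Python below assumes a single flat height plane and a fixed tile size; a real raised-relief map would use per-region height estimates.

    def ray_plane_hit(origin, direction, plane_z):
        """Intersect a ray with the horizontal plane z = plane_z.
        Returns the (x, y) hit point, or None if the ray points away."""
        oz, dz = origin[2], direction[2]
        if dz == 0:
            return None
        t = (plane_z - oz) / dz
        if t <= 0:
            return None
        return (origin[0] + t * direction[0], origin[1] + t * direction[1])

    def visible_tiles(camera_pos, corner_rays, est_height, tile_size=100.0):
        """Tiles of a 2D grid likely visible: project the four frustum-corner
        rays onto the estimated terrain height and take the bounding box."""
        hits = [ray_plane_hit(camera_pos, d, est_height) for d in corner_rays]
        hits = [h for h in hits if h is not None]
        if not hits:
            return set()
        xs, ys = zip(*hits)
        x0, x1 = int(min(xs) // tile_size), int(max(xs) // tile_size)
        y0, y1 = int(min(ys) // tile_size), int(max(ys) // tile_size)
        return {(x, y) for x in range(x0, x1 + 1) for y in range(y0, y1 + 1)}

    # Camera 500 m up, looking down and forward; terrain estimated at 50 m.
    cam = (0.0, 0.0, 500.0)
    rays = [(0.3, 0.2, -1.0), (0.3, -0.2, -1.0), (0.9, 0.2, -1.0), (0.9, -0.2, -1.0)]
    print(sorted(visible_tiles(cam, rays, est_height=50.0)))

Raising the estimated height pulls the ray hits closer to the camera and shrinks the footprint, which is why height estimates keep extraneous tiles out of the first request.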
Abstract:
Some embodiments provide a method for generating road data. The method receives data regarding several road segments and several junctions for a map region. The road segments include a first road segment and a second road segment that intersect at a particular junction. The method determines whether the first road segment and the second road segment are separate segments of a same road. When the first and second road segments are separate segments of the same road, the method defines an aggregate road that references the first and second road segments. In some embodiments, the method determines whether the first and second road segments are separate segments of the same road by using location data and road properties of the first and second road segments. In some embodiments, the aggregate road is stored as an ordered list of road segments that link together at junctions.
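A toy version of this aggregation might chain segments that share a junction and have matching road properties into an ordered list, as the abstract describes. In the Python sketch below, a matching road name stands in for the road-property comparison; the greedy chaining strategy is an assumption for illustration.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class RoadSegment:
        seg_id: int
        name: str            # stand-in for the road properties in the abstract
        start_junction: int
        end_junction: int

    def same_road(a, b):
        """Heuristic: segments belong to the same road when they meet at a
        junction and their properties (here, just the name) match."""
        shared = {a.start_junction, a.end_junction} & {b.start_junction, b.end_junction}
        return bool(shared) and a.name == b.name

    def aggregate(segments):
        """Greedily chain same-road segments into ordered aggregate roads."""
        roads, used = [], set()
        for seg in segments:
            if seg.seg_id in used:
                continue
            chain = [seg]
            used.add(seg.seg_id)
            extended = True
            while extended:
                extended = False
                for other in segments:
                    if other.seg_id not in used and same_road(chain[-1], other):
                        chain.append(other)
                        used.add(other.seg_id)
                        extended = True
            roads.append(chain)
        return roads

    segs = [RoadSegment(1, 'Main St', 10, 11),
            RoadSegment(2, 'Main St', 11, 12),   # continues Main St past junction 11
            RoadSegment(3, 'Oak Ave', 11, 20)]   # crosses at junction 11
    for road in aggregate(segs):
        print([s.seg_id for s in road], road[0].name)  # [1, 2] Main St / [3] Oak Ave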
Abstract:
Methods, systems and apparatus are described to dynamically generate map textures. A client device may obtain map data, which may include one or more shapes described by vector graphics data. Along with the one or more shapes, embodiments may include texture indicators linked to the one or more shapes. Embodiments may render the map data. For one or more shapes, a texture definition may be obtained. Based on the texture definition, a client device may dynamically generate a texture for the shape. The texture may then be applied to the shape to render a current fill portion of the shape. In some embodiments, the rendered map view is displayed.
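The pipeline of texture indicator, texture definition, and dynamically generated fill can be illustrated with a small procedural-texture sketch. In the Python below, the pattern vocabulary and the TEXTURE_DEFS table are invented for the example and are not the described texture-definition format.

    def generate_texture(definition, size=8):
        """Dynamically build a small tile of pixel values from a texture
        definition (a pattern plus two colors) instead of shipping bitmaps."""
        fg, bg = definition['fg'], definition['bg']
        if definition['pattern'] == 'stripes':
            return [[fg if (y // 2) % 2 == 0 else bg for x in range(size)]
                    for y in range(size)]
        if definition['pattern'] == 'checker':
            return [[fg if (x + y) % 2 == 0 else bg for x in range(size)]
                    for y in range(size)]
        return [[bg] * size for _ in range(size)]

    # Texture indicators attached to shapes resolve to definitions at render time.
    TEXTURE_DEFS = {
        'park':  {'pattern': 'checker', 'fg': 'dark_green', 'bg': 'green'},
        'water': {'pattern': 'stripes', 'fg': 'blue', 'bg': 'light_blue'},
    }

    shape = {'vertices': [(0, 0), (10, 0), (10, 10)], 'texture_indicator': 'park'}
    texture = generate_texture(TEXTURE_DEFS[shape['texture_indicator']])
    print(texture[0][:4])   # first few pixels of the generated fill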