Abstract:
Some embodiments provide a method for a mapping service. For a set of road segments that intersect at a junction in a map region, the method generates an initial set of geometries for use in generating downloadable map information for the map region. For each corner formed by the geometries at the junction, the method determines whether to perform a smoothing operation. When a particular corner meets a set of criteria, the method modifies the geometries of at least one road segment to smooth the corner.
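The smoothing step described above can be sketched as follows. This is a minimal illustration, not the patented method: the angle threshold, the pull-back fraction `t`, and the quadratic Bézier arc are all assumptions standing in for the unspecified "set of criteria" and geometry modification.

```python
import math

def corner_angle(p_junction, p_a, p_b):
    """Angle (degrees) of the corner formed at the junction by the
    directions toward segment endpoints p_a and p_b."""
    ax, ay = p_a[0] - p_junction[0], p_a[1] - p_junction[1]
    bx, by = p_b[0] - p_junction[0], p_b[1] - p_junction[1]
    cos_theta = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    return math.degrees(math.acos(cos_theta))

def smooth_corner(p_junction, p_a, p_b, max_angle=60.0, t=0.25):
    """If the corner is sharper than max_angle (an assumed criterion),
    replace its apex with points sampled from a quadratic Bezier arc."""
    if corner_angle(p_junction, p_a, p_b) > max_angle:
        return [p_junction]  # corner is already gentle; keep geometry as-is
    # Pull back a fraction t along each segment away from the junction.
    qa = (p_junction[0] + t * (p_a[0] - p_junction[0]),
          p_junction[1] + t * (p_a[1] - p_junction[1]))
    qb = (p_junction[0] + t * (p_b[0] - p_junction[0]),
          p_junction[1] + t * (p_b[1] - p_junction[1]))
    # Sample the Bezier curve using the junction as the control point.
    arc = []
    for i in range(5):
        s = i / 4.0
        x = (1 - s) ** 2 * qa[0] + 2 * (1 - s) * s * p_junction[0] + s ** 2 * qb[0]
        y = (1 - s) ** 2 * qa[1] + 2 * (1 - s) * s * p_junction[1] + s ** 2 * qb[1]
        arc.append((x, y))
    return arc
```

A production pipeline would apply this per corner while generating the downloadable map geometry, so clients receive pre-smoothed road outlines.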
Abstract:
Systems and methods for rendering 3D maps may highlight a feature in a 3D map while preserving depth. A map tool of a mapping or navigation application that detects the selection of a feature in a 3D map (e.g., by touch) may perform a ray intersection to determine the feature that was selected. The map tool may capture the frame to be displayed (with the selected feature highlighted) in several steps. Each step may translate the map about a pivot point of the selected map feature (e.g., in three or four directions) to capture a new frame. The captured frames may be blended together to create a blurred map view that depicts 3D depth in the scene. A crisp version of the selected feature may then be rendered within the otherwise blurred 3D map. Color, brightness, contrast, or saturation values may be modified to further highlight the selected feature.
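The blend-then-composite idea can be sketched in a few lines. The `render` callback and the flat-list frame representation are assumptions made for illustration; the abstract does not specify the rendering interface.

```python
def blur_by_blending(render, pivot, offsets):
    """Average several frames rendered with the camera translated about the
    pivot point of the selected feature. render(pivot, offset) is a
    hypothetical callback returning one frame as a flat list of pixels."""
    frames = [render(pivot, off) for off in offsets]
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

def composite_highlight(blurred, crisp, mask):
    """Where mask is 1.0, show the crisply rendered selected feature;
    elsewhere keep the blurred map that conveys 3D depth."""
    return [m * c + (1 - m) * b for m, c, b in zip(mask, crisp, blurred)]
```

Because each offset frame sees slightly different parallax, nearer geometry smears more than distant geometry in the average, which is what preserves the sense of depth in the blur.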
Abstract:
For a device that runs a mapping application, a method provides maneuver indicators along a route of a map. The maneuver indicators are arrows that identify the direction and orientation of a maneuver. A maneuver arrow may be selected and displayed differently from unselected maneuver arrows. Maneuver arrows may be selected automatically based on a user's current location. The mapping application transitions between maneuver arrows and provides an animation for the transition. Complex maneuvers may be indicated by multiple arrows, providing more detailed guidance for a user of the mapping application.
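Automatic selection "based on a user's current location" could be as simple as highlighting the maneuver point nearest the device. This nearest-point rule is a simplifying assumption; the abstract does not state the selection criterion.

```python
import math

def select_maneuver(maneuvers, current_pos):
    """Return the index of the maneuver arrow to highlight: the maneuver
    point closest to the device's current location (assumed criterion)."""
    def dist(p):
        return math.hypot(p[0] - current_pos[0], p[1] - current_pos[1])
    return min(range(len(maneuvers)), key=lambda i: dist(maneuvers[i]))
```

The application would then render the arrow at the returned index in its selected style and animate the transition whenever the index changes.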
Abstract:
The embodiments described relate to techniques and systems for utilizing a portable electronic device to monitor, process, present and manage data captured by a series of sensors and location awareness technologies to provide a context aware map and navigation application. The context aware map application offers a user interface including visual and audio input and output, and provides several map modes that can change based upon context determined by data captured by a series of sensors and location awareness technologies.
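A context-to-mode mapping of the kind described might look like the sketch below. The mode names, the speed cutoff, and the specific sensor inputs are all illustrative assumptions; the abstract only says modes change based on sensor-derived context.

```python
def select_map_mode(speed_mps, activity, indoors):
    """Pick a map mode from sensor-derived context (all labels and the
    8 m/s speed cutoff are assumptions for illustration)."""
    if indoors:
        return "indoor"
    if activity == "driving" or speed_mps > 8.0:
        return "driving"
    if activity == "walking":
        return "pedestrian"
    return "standard"
```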
Abstract:
Some embodiments of the invention provide a navigation application that allows a user to peek ahead or behind during a turn-by-turn navigation presentation that the application provides while tracking the traversal of a physical route by a device (e.g., a mobile device, a vehicle, etc.). As the device traverses along the physical route, the navigation application generates a navigation presentation that shows a representation of the device on a map traversing along a virtual route that represents the physical route on the map. While providing the navigation presentation, the navigation application can receive user input to look ahead or behind along the virtual route. Based on the user input, the navigation application moves the navigation presentation to show locations on the virtual route that are ahead or behind the displayed current location of the device on the virtual route. This movement can cause the device representation to no longer be visible in the navigation presentation. Also, the virtual route often includes several turns, and the peek ahead or behind movement of the navigation presentation passes the presentation through one or more of these turns. In some embodiments, the map can be defined and presented as a two-dimensional (2D) or a three-dimensional (3D) scene.
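Moving the presentation ahead or behind along the virtual route reduces to finding the point at a signed distance offset along the route polyline. The sketch below assumes the route is a list of 2D points and distances are in the same units; none of this interface comes from the abstract itself.

```python
import math

def peek_position(route, current_dist, peek_dist):
    """Return the point on the route polyline at (current_dist + peek_dist)
    from the route start; peek_dist may be negative to look behind.
    Positions beyond the route ends are clamped."""
    target = max(0.0, current_dist + peek_dist)
    walked = 0.0
    for (x0, y0), (x1, y1) in zip(route, route[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        if walked + seg >= target:
            s = (target - walked) / seg
            return (x0 + s * (x1 - x0), y0 + s * (y1 - y0))
        walked += seg
    return route[-1]  # past the end: clamp to the destination
```

Because the returned point simply walks the polyline, peeking naturally carries the camera through any intermediate turns, matching the behavior described above.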
Abstract:
A device provides a map and/or navigation application that displays items on the map and/or navigation instructions differently in different modes. The application of some embodiments provides a day mode and a night mode. In some embodiments the application uses the day mode as a default and activates the night mode when the time is after sunset at the location of the device. Some embodiments activate night mode only when multiple conditions are satisfied (for example, when (1) the time is after sunset at the location of the device and (2) the ambient light level is below a threshold brightness).
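The two-condition activation rule is directly expressible as a conjunction. The lux threshold value and sensor units below are assumptions for illustration; the abstract only requires "below a threshold brightness".

```python
from datetime import datetime

def should_use_night_mode(now, sunset, ambient_lux, lux_threshold=50.0):
    """Activate night mode only when both conditions hold: (1) local time is
    after sunset at the device's location AND (2) ambient light is below a
    threshold (the 50-lux default is an illustrative assumption)."""
    return now > sunset and ambient_lux < lux_threshold
```

Requiring both conditions avoids false activations, e.g. a device used after sunset in a brightly lit room stays in day mode.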
Abstract:
A multitouch device can interpret and disambiguate different gestures related to manipulating a displayed image of a 3D object, scene, or region. Examples of manipulations include pan, zoom, rotation, and tilt. The device can define a number of manipulation modes, including one or more single-control modes such as a pan mode, a zoom mode, a rotate mode, and/or a tilt mode. The manipulation modes can also include one or more multi-control modes, such as a pan/zoom/rotate mode that allows multiple parameters to be modified simultaneously.
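Disambiguating two-finger gestures into single-control modes typically compares per-frame deltas against thresholds. The sketch below is an assumed scheme: the inputs (centroid shift, pinch-distance change, inter-finger angle change) and all threshold values are illustrative, not taken from the patent.

```python
import math

def classify_gesture(d_centroid, d_distance, d_angle,
                     pan_thresh=10.0, zoom_thresh=0.15, rot_thresh=0.1):
    """Pick a single-control manipulation mode from two-finger deltas:
    d_centroid  -- (dx, dy) movement of the touch centroid, in points
    d_distance  -- fractional change in inter-finger distance (pinch)
    d_angle     -- change in inter-finger angle, in radians
    All thresholds are assumptions for illustration."""
    if abs(d_distance) > zoom_thresh:
        return "zoom"
    if abs(d_angle) > rot_thresh:
        return "rotate"
    if math.hypot(*d_centroid) > pan_thresh:
        return "pan"
    return "none"
```

A multi-control mode such as pan/zoom/rotate would instead apply all three deltas simultaneously rather than committing to the first threshold crossed.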