Abstract:
Some embodiments provide a non-transitory machine-readable medium that stores a mapping application which, when executed on a device by at least one processing unit, provides automated animation of a three-dimensional (3D) map along a navigation route. The mapping application identifies a first set of attributes for determining a first position of a virtual camera in the 3D map at a first instance in time. Based on the identified first set of attributes, the mapping application determines the first position of the virtual camera in the 3D map at the first instance in time. The mapping application identifies a second set of attributes for determining a second position of the virtual camera in the 3D map at a second instance in time. Based on the identified second set of attributes, the mapping application determines the second position of the virtual camera in the 3D map at the second instance in time. The mapping application renders an animated 3D map view of the 3D map from the first instance in time to the second instance in time based on the first and second positions of the virtual camera in the 3D map.
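As an illustrative sketch only (not the claimed implementation), the two-keyframe camera animation described above might interpolate the virtual camera's attributes between the two instances in time; the attribute names and the linear easing are assumptions:

```python
from dataclasses import dataclass

@dataclass
class CameraAttributes:
    x: float         # position over the map plane
    y: float
    altitude: float  # height above the 3D map
    pitch: float     # tilt angle, in degrees

def camera_at(t: float, first: CameraAttributes, second: CameraAttributes) -> CameraAttributes:
    """Interpolate the camera between the positions determined for the
    first (t = 0.0) and second (t = 1.0) instances in time."""
    lerp = lambda a, b: a + (b - a) * t
    return CameraAttributes(
        x=lerp(first.x, second.x),
        y=lerp(first.y, second.y),
        altitude=lerp(first.altitude, second.altitude),
        pitch=lerp(first.pitch, second.pitch),
    )

# Render one animated 3D map view per intermediate time step.
start = CameraAttributes(x=0.0, y=0.0, altitude=500.0, pitch=45.0)
end = CameraAttributes(x=120.0, y=80.0, altitude=300.0, pitch=60.0)
frames = [camera_at(i / 30, start, end) for i in range(31)]
```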
Abstract:
Some embodiments provide a mapping application for rendering map portions. The mapping application includes a map receiver for receiving map tiles from a mapping service in response to a request for the map tiles needed for a particular map view. Each map tile includes vector data describing a map region. The mapping application includes a set of mesh building modules, each of which uses the vector data in at least one map tile to build a mesh for a particular layer of the particular map view. The mapping application includes a mesh aggregation module for combining the layers from the mesh building modules into a renderable tile for the particular map view. The mapping application includes a rendering engine for rendering the particular map view.
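A minimal sketch of this tile-processing pipeline, assuming an invented tile format and illustrative class names (the abstract does not specify these):

```python
class MeshBuilder:
    """Builds a mesh for one layer (roads, buildings, ...) from vector data."""
    def __init__(self, layer: str):
        self.layer = layer

    def build(self, tiles: list[dict]) -> dict:
        # Gather this layer's vector data from every received map tile.
        vertices = [pt for tile in tiles
                    for pt in tile["vector_data"].get(self.layer, [])]
        return {"layer": self.layer, "vertices": vertices}

def build_renderable_tile(tiles: list[dict], layers: list[str]) -> dict:
    # One mesh builder per layer; the aggregation step combines the
    # per-layer meshes into a single renderable tile for the map view.
    meshes = [MeshBuilder(layer).build(tiles) for layer in layers]
    return {"meshes": meshes}

tiles = [{"vector_data": {"roads": [(0, 0), (1, 1)], "buildings": [(2, 2)]}}]
renderable = build_renderable_tile(tiles, layers=["roads", "buildings"])
```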
Abstract:
The described embodiments provide a system for performing an action based on a change in a status of a wired or wireless network connection for the system. During operation, the system detects the change in the status of the network connection. In response to detecting the change, the system determines a state of the system. The system then performs one or more actions using the determined state.
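A minimal sketch of the detect-change, determine-state, act sequence; the state fields and the actions are invented examples, not the described embodiments' actual behavior:

```python
def flush_pending_uploads() -> None:
    print("flushing queued uploads")

def pause_streaming() -> None:
    print("pausing media stream")

def on_connection_change(connected: bool, state: dict) -> None:
    # The change has been detected (by the caller); determine the system's
    # state, then perform an action appropriate to that state.
    if connected and state.get("pending_uploads"):
        flush_pending_uploads()
    elif not connected and state.get("streaming"):
        pause_streaming()

on_connection_change(True, {"pending_uploads": True})
```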
Abstract:
Systems and methods for rendering 3D maps may highlight a feature in a 3D map while preserving depth. A map tool of a mapping or navigation application that detects the selection of a feature in a 3D map (e.g., by touch) may perform a ray intersection to determine the feature that was selected. The map tool may capture the frame to be displayed (with the selected feature highlighted) in several steps. Each step may translate the map about a pivot point of the selected map feature in a different direction (e.g., one of three or four directions) and capture a new frame. The captured frames may be blended together to create a blurred map view that depicts 3D depth in the scene. A crisp version of the selected feature may then be rendered within the otherwise blurred 3D map. Color, brightness, contrast, or saturation values may be modified to further highlight the selected feature.
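A conceptual sketch of the blur-and-highlight pass described above, with frames represented as flat lists of grayscale pixels; the helper names and the pixel representation are assumptions, and only the control flow mirrors the abstract:

```python
OFFSETS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # small translations about the pivot

def blend(frames: list[list[float]]) -> list[float]:
    # Average the frames captured at each offset into one blurred view
    # that still conveys the 3D depth of the scene.
    return [sum(pixels) / len(pixels) for pixels in zip(*frames)]

def highlight(frames: list[list[float]], crisp_feature: list) -> list[float]:
    blurred = blend(frames)
    # Composite the crisply rendered feature (None = outside the feature)
    # over the blurred map so only the selection stays sharp.
    return [c if c is not None else b for c, b in zip(crisp_feature, blurred)]

# Stub data: four captures of a 4-pixel view, plus the crisp feature layer.
captures = [[10, 20, 30, 40], [12, 18, 33, 41], [9, 21, 29, 38], [11, 19, 31, 42]]
selected = [None, None, 255, None]  # only pixel 2 belongs to the selected feature
print(highlight(captures, selected))
```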
Abstract:
A mapping program for execution by at least one processing unit of a device is described. The device includes a touch-sensitive screen and a multi-touch input interface. The program renders and displays a presentation of a map from a particular view of the map. The program generates an instruction to rotate the displayed map in response to a multi-touch input from the multi-touch input interface. To generate a rotating presentation of the map, the program changes the particular view while receiving the multi-touch input and for a duration of time after the multi-touch input has terminated, thereby providing a degree of inertia motion for the rotating presentation of the map.
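The post-gesture inertia might be sketched as a decaying angular velocity applied for some frames after the multi-touch input ends; the decay constant and cutoff below are assumptions:

```python
def inertia_angles(release_velocity: float, frame_dt: float = 1 / 60,
                   decay: float = 0.95, cutoff: float = 0.01):
    """Yield per-frame rotation deltas (degrees) after the gesture terminates,
    so the rotating presentation keeps turning and gradually comes to rest."""
    v = release_velocity  # angular velocity at the moment the touch ended
    while abs(v) > cutoff:
        yield v * frame_dt
        v *= decay        # exponential decay of the rotation speed

total = sum(inertia_angles(release_velocity=90.0))  # extra degrees of spin
```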
Abstract:
Implementations described and claimed herein provide systems and methods for interaction between a user and a machine. In one implementation, machine status information for the machine is received at a dedicated machine component. The machine status information is published onto a distributed node system network of the machine. The machine status information is ingested at a primary interface controller, and an interactive user interface is generated using the primary interface controller. The interactive user interface is generated based on the machine status information. In some implementations, input is received from the user at the primary interface controller through the interactive user interface, and a corresponding action is delegated to one or more subsystems of the machine using the distributed node system network.
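A minimal publish/subscribe sketch of this flow, with all class, handler, and topic names invented for illustration:

```python
from collections import defaultdict

class NodeNetwork:
    """Stand-in for the distributed node system network of the machine."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, status: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(status)

network = NodeNetwork()
# Primary interface controller ingests status and regenerates the UI from it.
network.subscribe("machine/status", lambda s: print("render UI from", s))
# Dedicated machine component publishes its status onto the node network.
network.publish("machine/status", {"battery": 0.82, "charging": True})
```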
Abstract:
A multitouch device can interpret and disambiguate different gestures related to manipulating a displayed image of a 3D object, scene, or region. Examples of manipulations include pan, zoom, rotation, and tilt. The device can define a number of manipulation modes, including one or more single-control modes such as a pan mode, a zoom mode, a rotate mode, and/or a tilt mode. The manipulation modes can also include one or more multi-control modes, such as a pan/zoom/rotate mode that allows multiple parameters to be modified simultaneously.
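One way such disambiguation might work is to compare per-frame changes in the touch midpoint, finger spacing, and finger-axis angle against thresholds; the thresholds below are invented for illustration, and real devices likely use more robust heuristics:

```python
import math

def classify_gesture(d_center, d_distance, d_angle,
                     pan_thresh=10.0, zoom_thresh=8.0, rot_thresh=math.radians(5)):
    """Return a single-control mode, the multi-control mode, or None."""
    panning = math.hypot(*d_center) > pan_thresh   # touch midpoint moved
    zooming = abs(d_distance) > zoom_thresh        # finger spacing changed
    rotating = abs(d_angle) > rot_thresh           # finger-axis angle changed
    modes = [m for m, on in (("pan", panning), ("zoom", zooming),
                             ("rotate", rotating)) if on]
    if not modes:
        return None
    # Exactly one signal fired: lock into that single-control mode;
    # otherwise fall back to the simultaneous multi-control mode.
    return modes[0] if len(modes) == 1 else "pan/zoom/rotate"

print(classify_gesture(d_center=(25.0, 3.0), d_distance=1.5, d_angle=0.01))  # 'pan'
```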
Abstract:
Methods, systems, and apparatus are described to dynamically generate map textures. A client device may obtain map data, which may include one or more shapes described by vector graphics data. Along with the one or more shapes, embodiments may include texture indicators linked to the one or more shapes. Embodiments may render the map data. For one or more shapes, a texture definition may be obtained. Based on the texture definition, the client device may dynamically generate a texture for the shape. The texture may then be applied to the shape to render a current fill portion of the shape. In some embodiments, the rendered map view is displayed.
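An illustrative sketch of generating a fill texture from a texture definition at render time; the definition format (foreground/background colors and a stripe width) is an assumption:

```python
def generate_texture(definition: dict, size: int = 8) -> list[list[tuple]]:
    """Procedurally build a small striped fill pattern from a texture
    definition, rather than shipping a prerendered image with the map data."""
    fg, bg, stripe = definition["fg"], definition["bg"], definition["stripe_width"]
    return [[fg if (x // stripe) % 2 == 0 else bg for x in range(size)]
            for _ in range(size)]

# Texture to apply as the current fill portion of a shape (e.g., a park).
texture = generate_texture({"fg": (0, 120, 0), "bg": (200, 230, 200),
                            "stripe_width": 2})
```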
Abstract:
Some embodiments provide a non-transitory machine-readable medium that stores a program which, when executed on a device by at least one processing unit, performs panning operations on a three-dimensional (3D) map. The program displays a first 3D perspective view of the 3D map. In response to input to pan the 3D map, the program determines a panning movement based on the input and a two-dimensional (2D) view of the 3D map. The program pans the first 3D perspective view of the 3D map to a second 3D perspective view of the 3D map based on the determined panning movement. The program renders the second 3D perspective view of the 3D map for display on the device.
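A hedged sketch of this panning scheme: the screen-space drag is converted into a movement in the flat 2D view of the map, and that movement is applied to the 3D camera's ground position; the conversion below is a simplification, not the program's actual unprojection:

```python
def pan_3d_view(camera: dict, drag_dx: float, drag_dy: float,
                pixels_per_meter: float) -> dict:
    # Determine the panning movement in the 2D map plane, ignoring
    # perspective foreshortening so the map moves uniformly under the finger.
    move_x = drag_dx / pixels_per_meter
    move_y = drag_dy / pixels_per_meter
    # Apply the 2D movement to the camera's ground position; altitude and
    # pitch are unchanged, yielding the second 3D perspective view.
    camera["x"] -= move_x
    camera["y"] += move_y
    return camera

camera = pan_3d_view({"x": 0.0, "y": 0.0, "altitude": 400.0},
                     drag_dx=50, drag_dy=-20, pixels_per_meter=10.0)
```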