Abstract:
Techniques for performing context-sensitive actions in response to touch input are provided. A user interface of an application can be displayed. Touch input can be received in a region of the displayed user interface, and a context can be determined. A first action may be performed if the context is a first context and a second action may instead be performed if the context is a second context different from the first context. In some embodiments, an action may be performed if the context is a first context and the touch input is a first touch input, and may also be performed if the context is a second context and the touch input is a second touch input.
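As a rough illustration of this kind of dispatch, consider the following Swift sketch; the contexts, inputs, actions, and the handleTouch function itself are hypothetical stand-ins for illustration, not the claimed implementation:

    // Same touch input, different action depending on context.
    enum Context { case editing, viewing }
    enum TouchInput { case tap, longPress }

    func handleTouch(_ input: TouchInput, in context: Context) {
        switch (context, input) {
        case (.editing, .tap):
            print("Place cursor at touch location")   // first action, first context
        case (.viewing, .tap):
            print("Follow link at touch location")    // second action, second context
        case (.editing, .longPress), (.viewing, .longPress):
            // One action reachable from more than one (context, input) pair.
            print("Show contextual menu")
        }
    }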
Abstract:
Some embodiments provide a method for an application that operates on a mobile device. The method predicts several likely destinations for a vehicle to which the mobile device is connected based on data from several different sources. The method generates, for a display screen of the vehicle, a display that includes the several likely destinations. In some embodiments, the method receives a first type of input through a control of the vehicle to select one of the likely destinations, and enters a turn-by-turn navigation mode to the selected destination in response to the received input. In some embodiments, the display is for a first destination of the several likely destinations. The method receives a second type of input through a control of the vehicle to step through the set of likely destinations, and generates a display for a second destination in response to the input.
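A compact Swift sketch of the two input types described above; the source protocol, the scoring, and the control and type names are assumptions made for illustration:

    struct Destination { let name: String; let score: Double }

    // Hypothetical source abstraction; embodiments merge several such sources.
    protocol DestinationSource { func candidates() -> [Destination] }

    struct Predictor {
        let sources: [any DestinationSource]
        // Merge candidates from all sources and rank them by score.
        func likelyDestinations(limit: Int = 3) -> [Destination] {
            Array(sources.flatMap { $0.candidates() }
                         .sorted { $0.score > $1.score }
                         .prefix(limit))
        }
    }

    enum VehicleControlInput { case select, step }

    final class DestinationBrowser {
        private let destinations: [Destination]
        private var index = 0
        init(_ destinations: [Destination]) { self.destinations = destinations }

        func handle(_ input: VehicleControlInput) {
            switch input {
            case .select:   // first input type: start navigating
                print("Entering turn-by-turn navigation to \(destinations[index].name)")
            case .step:     // second input type: step to the next candidate
                index = (index + 1) % destinations.count
                print("Displaying \(destinations[index].name)")
            }
        }
    }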
Abstract:
A method of providing a sequence of turn-by-turn navigation instructions on a device traversing a route is provided. Each turn-by-turn navigation instruction is associated with a location on the route. As the device traverses the route, the method displays the turn-by-turn navigation instruction associated with the current location of the device. The method receives a touch input through a touch input interface of the device while displaying a first turn-by-turn navigation instruction and a first map region that displays the current location and a first location associated with the first turn-by-turn navigation instruction. In response to receiving the touch input, the method displays a second turn-by-turn navigation instruction and a second map region that displays a second location associated with the second turn-by-turn navigation instruction. Without receiving additional input, the method automatically returns to the display of the first turn-by-turn navigation instruction and the first map region.
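The peek-and-return behavior can be sketched in a few lines of Swift; the three-second interval and the type names are illustrative assumptions:

    import Foundation

    struct Instruction { let text: String }

    final class TurnByTurnDisplay {
        private let instructions: [Instruction]
        private var current = 0
        init(instructions: [Instruction]) { self.instructions = instructions }

        private func show(_ i: Int) { print("Display: \(instructions[i].text)") }

        // A touch input peeks at the next instruction and its map region;
        // after a short interval with no further input, the display snaps
        // back to the current instruction automatically.
        func peekNext(returnAfter seconds: TimeInterval = 3) {
            show(min(current + 1, instructions.count - 1))
            DispatchQueue.main.asyncAfter(deadline: .now() + seconds) {
                self.show(self.current)   // automatic return, no extra input
            }
        }
    }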
Abstract:
A mobile device that displays a list of traveling maneuvers or driving directions according to a route from a start location to a destination location is provided. The displayed list of driving directions includes a series of graphical items that each corresponds to a maneuver in the route. The displayed list of driving directions is updated dynamically according to the current position of the mobile device. When a maneuver is taken, the mobile device displays the graphical item that corresponds to that maneuver differently. After a number of maneuvers have been taken, the graphical items that correspond to the taken maneuvers are removed from the display and new maneuvers are brought into view.
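A minimal Swift sketch of such a dynamically updated list; the two-item retention window and the rendering are invented for illustration:

    struct Maneuver { let summary: String; var taken = false }

    final class DirectionsList {
        private var items: [Maneuver]
        private let visibleTaken = 2   // completed maneuvers kept on screen
        init(_ items: [Maneuver]) { self.items = items }

        // Called when the device's position indicates a maneuver was taken.
        func markNextTaken() {
            if let i = items.firstIndex(where: { !$0.taken }) {
                items[i].taken = true        // rendered differently below
            }
            // Once enough maneuvers are complete, scroll them out of view
            // so that new maneuvers come into view.
            while items.prefix(while: { $0.taken }).count > visibleTaken {
                items.removeFirst()
            }
        }

        func render() {
            for m in items { print((m.taken ? "[done] " : "       ") + m.summary) }
        }
    }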
Abstract:
Some embodiments of the invention provide a novel prediction engine that (1) can formulate predictions about current or future destinations and/or routes to such destinations for a user, and (2) can relay information to the user about these predictions. In some embodiments, this engine includes a machine-learning engine that facilitates the formulation of predicted future destinations and/or future routes to destinations based on stored, user-specific data. The user-specific data is different in different embodiments. In some embodiments, the stored, user-specific data includes data about any combination of the following: (1) previous destinations traveled to by the user, (2) previous routes taken by the user, (3) locations of calendared events in the user's calendar, (4) locations of events for which the user has electronic tickets, and (5) addresses parsed from recent e-mails and/or messages sent to the user. In some embodiments, the prediction engine relies only on user-specific data stored on the device on which this engine executes. Alternatively, in other embodiments, it relies only on user-specific data stored outside of the device by external devices/servers. In still other embodiments, the prediction engine relies on user-specific data stored both by the device and by other devices/servers.
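The aggregation step might look like the following Swift sketch, which merges weighted candidates from pluggable stores; the store protocol, the weights, and the single toy store are assumptions, and the machine-learning ranking is reduced here to a simple weight sum:

    struct Candidate { let address: String; let weight: Double }

    protocol UserDataStore { func candidates() -> [Candidate] }

    // One toy store; real embodiments draw on route history, calendars,
    // ticketed events, and addresses parsed from messages.
    struct CalendarStore: UserDataStore {
        func candidates() -> [Candidate] {
            [Candidate(address: "1 Infinite Loop", weight: 0.8)]
        }
    }

    struct PredictionEngine {
        // Stores may live on the device, on external servers, or both.
        let stores: [any UserDataStore]

        func predictedDestinations() -> [Candidate] {
            var merged: [String: Double] = [:]
            for store in stores {
                for c in store.candidates() {
                    merged[c.address, default: 0] += c.weight
                }
            }
            return merged.map { Candidate(address: $0.key, weight: $0.value) }
                         .sorted { $0.weight > $1.weight }
        }
    }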
Abstract:
A mobile computing device can be used to locate a vehicle parking location in weak location signal scenarios (e.g., when GPS or another location technology is weak, unreliable, or unavailable). In particular, the mobile device can determine when a vehicle in which the mobile device is located has entered a parked state. GPS or other primary location technology may be unavailable at the time the vehicle enters the parked state (e.g., inside a parking structure). The location of the mobile device at the time the vehicle is identified as being parked can be determined using the primary location technology supplemented with sensor data of the mobile device. Once that location is determined, it can be associated with an identifier for the current parking location.
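A simplified dead-reckoning sketch in Swift of how sensor data could supplement the last good fix; the flat-earth meters-per-degree conversion and the type names are illustrative assumptions:

    import Foundation

    struct Coordinate { var latitude: Double; var longitude: Double }
    struct MotionStep { let headingRadians: Double; let meters: Double }

    // Step the last good fix forward using motion-sensor estimates, then
    // associate the result with a parking-location identifier.
    func estimateParkedLocation(lastFix: Coordinate,
                                steps: [MotionStep]) -> Coordinate {
        let metersPerDegree = 111_000.0    // rough; varies with latitude
        var loc = lastFix
        for s in steps {
            loc.latitude  += s.meters * cos(s.headingRadians) / metersPerDegree
            loc.longitude += s.meters * sin(s.headingRadians)
                             / (metersPerDegree * cos(loc.latitude * .pi / 180))
        }
        return loc
    }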
Abstract:
A multitouch device can interpret and disambiguate different gestures related to manipulating a displayed image of a 3D object, scene, or region. Examples of manipulations include pan, zoom, rotation, and tilt. The device can define a number of manipulation modes, including one or more single-control modes such as a pan mode, a zoom mode, a rotate mode, and/or a tilt mode. The manipulation modes can also include one or more multi-control modes, such as a pan/zoom/rotate mode that allows multiple parameters to be modified simultaneously.
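One plausible shape for the disambiguation, sketched in Swift with invented thresholds (the tilt mode is omitted for brevity):

    enum ManipulationMode { case pan, zoom, rotate, panZoomRotate }

    struct TwoFingerGesture {
        let panMagnitude: Double      // points of finger translation
        let zoomMagnitude: Double     // |pinch scale - 1|
        let rotateMagnitude: Double   // radians of twist
    }

    // Compare each gesture component against an activation threshold; if
    // two or more components are active, enter the multi-control mode so
    // several parameters can be modified simultaneously.
    func classify(_ g: TwoFingerGesture) -> ManipulationMode {
        let pan = g.panMagnitude > 10
        let zoom = g.zoomMagnitude > 0.05
        let rotate = g.rotateMagnitude > 0.1
        switch (pan, zoom, rotate) {
        case (true, false, false), (false, false, false): return .pan
        case (false, true, false): return .zoom
        case (false, false, true): return .rotate
        default: return .panZoomRotate   // multiple components active
        }
    }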
Abstract:
A method of displaying navigational instructions when a navigation application is running in a background mode of an electronic device is provided. The method displays a non-navigation application in the foreground on a display screen of the electronic device. The method displays a navigation bar without a navigation instruction when the device is not near a navigation point. The method displays the navigation bar with a navigation instruction when the device is near a navigation point. In some embodiments, the method receives a command to switch from running the navigation application in the foreground to running another screen view in the foreground. The method then runs the other screen view in the foreground while displaying a navigation status display on an electronic display of the device.
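The navigation-bar behavior reduces to a small predicate; in this Swift sketch the 500-meter threshold and the type names are assumptions:

    struct NavigationPoint { let distanceMeters: Double; let instruction: String }

    // Near a navigation point: show the instruction in the bar.
    // Otherwise: show the bar without an instruction.
    func navigationBarText(nextPoint: NavigationPoint,
                           nearThresholdMeters: Double = 500) -> String {
        nextPoint.distanceMeters <= nearThresholdMeters ? nextPoint.instruction : ""
    }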
Abstract:
Some embodiments provide a device that stores a novel navigation application. The application in some embodiments includes a user interface (UI) that has a display area for displaying a two-dimensional (2D) navigation presentation or a three-dimensional (3D) navigation presentation. The UI includes a selectable 3D control for directing the program to transition between the 2D and 3D presentations.
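In Swift, the control's effect could be as simple as toggling a presentation mode; the names here are illustrative:

    enum PresentationMode { case twoD, threeD }

    final class NavigationUI {
        private(set) var mode: PresentationMode = .twoD

        // Invoked by the selectable 3D control in the display area.
        func toggle3DControl() {
            mode = (mode == .twoD) ? .threeD : .twoD
            print("Transitioning to \(mode == .threeD ? "3D" : "2D") presentation")
        }
    }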
Abstract:
Some embodiments provide a navigation application that presents a novel navigation presentation on a device. The application identifies a location of the device, and identifies a style of road signs associated with the identified location. The application then generates navigation instructions in the form of road signs that match the identified style. To generate a road sign, the application in some embodiments identifies a road sign template image for the identified style, and generates the road sign by compositing the identified template with at least one of a text instruction and a graphical instruction. In some embodiments, the road sign is generated as a composite textured image that has a texture and a look associated with the road signs at the identified location.
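A toy Swift sketch of the style lookup and compositing steps, with the composite reduced to a string for illustration; the region codes and template names are invented:

    struct RoadSignStyle { let template: String; let textColor: String }

    // Invented region-to-style table; a real one would match local signage.
    func signStyle(forRegion region: String) -> RoadSignStyle {
        switch region {
        case "US": return RoadSignStyle(template: "us-interstate-green", textColor: "white")
        case "DE": return RoadSignStyle(template: "de-autobahn-blue", textColor: "white")
        default:   return RoadSignStyle(template: "generic", textColor: "white")
        }
    }

    // Composite the template with a text and/or graphical instruction.
    func composeRoadSign(region: String, text: String, glyph: String) -> String {
        let style = signStyle(forRegion: region)
        return "[\(style.template)] \(glyph) \(text) in \(style.textColor)"
    }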