Abstract:
Some embodiments provide a mapping application that provides a variety of UI elements for allowing a user to specify a location (e.g., for viewing or for serving as a route destination). In some embodiments, these location-input UI elements appear in succession on a sequence of pages, according to a hierarchy in which UI elements that require less user interaction appear on earlier pages in the sequence than UI elements that require more user interaction. In some embodiments, the location-input UI elements that successively appear in the mapping application include (1) selectable predicted-destination notifications, (2) a list of selectable predicted destinations, (3) a selectable voice-based search affordance, and (4) a keyboard. In some of these embodiments, these UI elements appear successively on the following sequence of pages: (1) a default page for presenting the predicted-destination notifications, (2) a destination page for presenting the list of predicted destinations, (3) a search page for receiving voice-based search requests, and (4) a keyboard page for receiving character input.
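The page hierarchy described above can be sketched as an ordered sequence, with pages requiring less user interaction preceding pages requiring more. This is a minimal illustration only; the page names are taken from the abstract, but the function and its behavior are hypothetical, not from any actual implementation.

```python
# Pages ordered from least to most required user interaction,
# as described in the abstract. Names are illustrative.
PAGE_SEQUENCE = [
    ("default",     "predicted-destination notifications"),
    ("destination", "list of predicted destinations"),
    ("search",      "voice-based search affordance"),
    ("keyboard",    "character input"),
]

def next_page(current: str) -> str:
    """Advance to the page requiring the next level of user interaction,
    staying on the last page once the sequence is exhausted."""
    names = [name for name, _ in PAGE_SEQUENCE]
    i = names.index(current)
    return names[min(i + 1, len(names) - 1)]
```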
Abstract:
A method of providing navigation on an electronic device when the display screen is locked. The method receives a verbal request to start navigation while the display is locked. The method identifies a route from a current location to a destination based on the received verbal request. While the display screen is locked, the method provides navigational directions on the electronic device from the current location of the electronic device to the destination. Some embodiments provide a method for processing a verbal search request. The method receives a navigation-related verbal search request and prepares a sequential list of the search results based on the received request. The method then provides audible information to present a search result from the sequential list. The method presents the search results in a batch form until the user selects a search result, the user terminates the search, or the search items are exhausted.
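The sequential presentation loop described above can be sketched as follows. This is a hypothetical outline under assumed callback interfaces (`select`, `terminate`), not the patented method itself; presenting a result stands in for providing audible information.

```python
def present_results(results, select=None, terminate=None):
    """Present search results one at a time (audibly, in the abstract's
    terms) until the user selects one, the user terminates the search,
    or the results are exhausted. `select` and `terminate` are
    hypothetical callbacks returning True/False for each result."""
    presented = []
    for r in results:
        presented.append(r)          # stands in for speaking the result aloud
        if terminate and terminate(r):
            return None, presented   # user ended the search
        if select and select(r):
            return r, presented      # user picked this result
    return None, presented           # items exhausted without a selection
```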
Abstract:
Some embodiments provide a mapping application that includes a novel dynamic scale that can be used to perform different zoom operations. In some embodiments, the scale also serves as a distance measurement indicator for a corresponding zoom level. The application continuously adjusts several different attributes of the scale, including the scale size, the number of segments on the scale, and the representative distance of a segment on the scale. In some embodiments, the mapping application provides a smart zoom feature that guides a user during a zoom to a location. In particular, the smart zoom detects whether the location of a zoom is near a pin on the map and, if so, zooms to the pin on the map. Otherwise, if the location is near a cloud of pins, the application zooms to the cloud of pins. Otherwise, the zoom is directed towards the user's selected location.
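The three-way smart-zoom decision (pin, cloud of pins, or selected location) can be sketched as a simple distance test. The radii, units, and function names here are illustrative assumptions, not values from the abstract.

```python
from math import hypot

def smart_zoom_target(tap, pins, pin_radius=10.0, cloud_radius=40.0):
    """Hypothetical smart-zoom decision: zoom to a nearby pin if one is
    close enough, else to a nearby cluster ('cloud') of pins, else to
    the tapped location itself. Coordinates are screen points."""
    def dist(a, b):
        return hypot(a[0] - b[0], a[1] - b[1])

    near_pin = [p for p in pins if dist(tap, p) <= pin_radius]
    if near_pin:
        # Zoom to the single closest pin.
        return ("pin", min(near_pin, key=lambda p: dist(tap, p)))
    cloud = [p for p in pins if dist(tap, p) <= cloud_radius]
    if len(cloud) >= 2:
        # Zoom to the cluster of pins near the tap.
        return ("cloud", cloud)
    # Fall back to the user's selected location.
    return ("location", tap)
```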
Abstract:
A method of providing a sequence of turn-by-turn navigation instructions on a device traversing a route is provided. Each turn-by-turn navigation instruction is associated with a location on the route. As the device traverses along the route, the method displays a turn-by-turn navigation instruction associated with a current location of the device. The method receives a touch input through a touch input interface of the device while displaying a first turn-by-turn navigation instruction and a first map region that displays the current location and a first location associated with the first turn-by-turn navigation instruction. In response to receiving the touch input, the method displays a second turn-by-turn navigation instruction and a second map region that displays a second location associated with the second turn-by-turn navigation instruction. Without receiving additional input, the method automatically returns to the display of the first turn-by-turn navigation instruction and the first map region.
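The "peek at a later instruction, then snap back" behavior described above can be sketched as a small state machine. The class, its method names, and the timeout-driven return are illustrative assumptions; a real implementation would tie `on_timeout` to a timer fired some interval after the last touch input.

```python
class TurnByTurnDisplay:
    """Hypothetical sketch of the display state for the behavior above."""

    def __init__(self, instructions):
        self.instructions = instructions
        self.current = 0   # instruction for the device's actual location
        self.shown = 0     # instruction currently on screen

    def on_touch(self):
        """Touch input: preview the next instruction and its map region."""
        self.shown = min(self.shown + 1, len(self.instructions) - 1)
        return self.instructions[self.shown]

    def on_timeout(self):
        """No further input: automatically return to the instruction
        (and map region) for the device's current location."""
        self.shown = self.current
        return self.instructions[self.shown]
```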
Abstract:
A multitouch device can interpret and disambiguate different gestures related to manipulating a displayed image of a 3D object, scene, or region. Examples of manipulations include pan, zoom, rotation, and tilt. The device can define a number of manipulation modes, including one or more single-control modes such as a pan mode, a zoom mode, a rotate mode, and/or a tilt mode. The manipulation modes can also include one or more multi-control modes, such as a pan/zoom/rotate mode that allows multiple parameters to be modified simultaneously.
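The disambiguation between single-control and multi-control modes can be sketched as thresholding the per-frame change in each gesture parameter. The thresholds and parameter encoding (translation in points, scale as a ratio, rotation in degrees) are illustrative assumptions.

```python
def classify_gesture(d_translate, d_scale, d_angle,
                     pan_t=5.0, zoom_t=0.05, rot_t=3.0):
    """Hypothetical disambiguation of a two-finger gesture into a
    single-control mode (pan, zoom, or rotate), or a combined
    pan/zoom/rotate mode when several parameters change at once."""
    active = []
    if abs(d_translate) > pan_t:
        active.append("pan")
    if abs(d_scale - 1.0) > zoom_t:   # scale is a ratio; 1.0 means no zoom
        active.append("zoom")
    if abs(d_angle) > rot_t:
        active.append("rotate")
    if len(active) > 1:
        return "pan/zoom/rotate"      # multi-control mode
    return active[0] if active else "none"
```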
Abstract:
A method of providing navigation instructions in a locked mode of a device is disclosed. The method, while the display screen of the device is turned off, determines that the device is near a navigation point. The method turns on the display screen and provides navigation instructions. In some embodiments, the method identifies the ambient light level around the device and turns on the display at a brightness level determined by the identified ambient light level. The method turns off the display after the navigation point is passed.
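The wake-near-a-navigation-point behavior with ambient-light-derived brightness can be sketched as below. The wake radius, lux scale, and brightness clamp are illustrative assumptions, not values from the abstract.

```python
def display_state(distance_to_point, ambient_lux,
                  wake_radius=200.0, max_lux=1000.0):
    """Hypothetical sketch: wake the display only when the device is near
    a navigation point, at a brightness derived from the ambient light
    level; otherwise keep the screen off. Units are illustrative
    (meters, lux, brightness as a fraction of full)."""
    if distance_to_point > wake_radius:
        return {"on": False, "brightness": 0.0}
    # Map ambient light to a brightness fraction, clamped to [0.1, 1.0]
    # so the screen is dim in the dark and readable in sunlight.
    brightness = min(1.0, max(0.1, ambient_lux / max_lux))
    return {"on": True, "brightness": brightness}
```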
Abstract:
Some embodiments of the invention provide a mobile device with a novel route prediction engine that (1) can formulate predictions about current or future destinations and/or routes to such destinations for the device's user, and (2) can relay information to the user about these predictions. In some embodiments, this engine includes a machine-learning engine that facilitates the formulation of predicted future destinations and/or future routes to destinations based on stored, user-specific data. The user-specific data is different in different embodiments. In some embodiments, the stored, user-specific data includes data about any combination of the following: (1) previous destinations traveled to by the user, (2) previous routes taken by the user, (3) locations of calendared events in the user's calendar, (4) locations of events for which the user has electronic tickets, and (5) addresses parsed from recent e-mails and/or messages sent to the user. The device's prediction engine relies only on user-specific data stored on the device in some embodiments, relies only on user-specific data stored outside of the device by external devices/servers in other embodiments, and relies on user-specific data stored both by the device and by other devices/servers in still other embodiments.
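As a toy stand-in for the machine-learning engine described above, candidate destinations drawn from the listed user-specific data sources can simply be ranked by frequency. The function, its parameters, and the frequency heuristic are illustrative assumptions; the abstract does not specify the ranking model.

```python
from collections import Counter

def predict_destinations(travel_history, calendar_locations=(),
                         parsed_addresses=(), top_n=3):
    """Hypothetical frequency-based predictor: rank candidate
    destinations by how often they appear across the user-specific
    data sources (past trips, calendar events, addresses parsed from
    e-mails/messages)."""
    counts = Counter(travel_history)
    counts.update(calendar_locations)
    counts.update(parsed_addresses)
    return [place for place, _ in counts.most_common(top_n)]
```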
Abstract:
Some embodiments provide a mapping application with novel navigation and/or search tools. In some embodiments, the mapping application formulates predictions about future destinations of a device that executes the mapping application, and provides dynamic notifications regarding these predicted destinations. For instance, when a particular destination is a likely destination (e.g., most likely destination) of the device, the mapping application in some embodiments presents a notification regarding the particular destination (e.g., plays an animation that presents the notification). This notification in some embodiments provides some information about (1) the predicted destination (e.g., a name and/or address for the predicted destination) and (2) a route to this predicted destination (e.g., an estimated time of arrival, distance, and/or traffic along the route to the predicted destination). In some embodiments, the notification is dynamic not only because it is presented dynamically as the device travels, but also because the information that the notification displays about the destination and/or route to the destination is dynamically updated by the mapping application as the device travels.
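The dynamic updating of the notification's route information can be sketched as recomputing the text from the device's current distance and speed each time it is rendered. The function, its units, and the ETA formula are illustrative assumptions.

```python
def notification_text(name, distance_km, speed_kmh):
    """Hypothetical rendering of the dynamic notification: the ETA and
    remaining distance are recomputed from the device's current
    position and speed, so the text updates as the device travels."""
    if speed_kmh > 0:
        eta = f"{round(60.0 * distance_km / speed_kmh)} min"
    else:
        eta = "ETA unavailable"   # stationary: cannot estimate arrival
    return f"{name}: {distance_km:.1f} km, {eta}"
```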