Abstract:
Some embodiments of the invention provide a novel prediction engine that (1) can formulate predictions about current or future destinations and/or routes to such destinations for a user, and (2) can relay information to the user about these predictions. In some embodiments, this engine includes a machine-learning engine that facilitates the formulation of predicted future destinations and/or future routes to destinations based on stored, user-specific data. The user-specific data is different in different embodiments. In some embodiments, the stored, user-specific data includes data about any combination of the following: (1) previous destinations traveled to by the user, (2) previous routes taken by the user, (3) locations of calendared events in the user's calendar, (4) locations of events for which the user has electronic tickets, and (5) addresses parsed from recent e-mails and/or messages sent to the user. In some embodiments, the prediction engine only relies on user-specific data stored on the device on which this engine executes. Alternatively, in other embodiments, it relies only on user-specific data stored outside of the device by external devices/servers. In still other embodiments, the prediction engine relies on user-specific data stored both by the device and by other devices/servers.
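By way of a rough, hypothetical illustration of how such an engine might combine these data sources, the following plain-Swift sketch scores candidate destinations by visit frequency, recency, and presence in calendar, ticket, or message data. The type names, fields, and weights are assumptions made for the example and are not taken from the disclosure.

    // A candidate destination assembled from stored, user-specific data.
    // All names and weights here are illustrative placeholders.
    struct CandidateDestination {
        let name: String
        let visitCount: Int                   // previous trips to this destination
        let daysSinceLastVisit: Double        // recency of the most recent visit
        let appearsInCalendar: Bool           // location of a calendared event
        let appearsInTicketsOrMessages: Bool  // address from a ticket, e-mail, or message
    }

    // Combine the signals into one score; higher means a more likely destination.
    func predictionScore(_ c: CandidateDestination) -> Double {
        let frequency = Double(c.visitCount)
        let recency = 1.0 / (1.0 + c.daysSinceLastVisit)  // decays with time since last visit
        var score = 0.5 * frequency + 2.0 * recency
        if c.appearsInCalendar { score += 3.0 }            // upcoming calendared event
        if c.appearsInTicketsOrMessages { score += 1.5 }   // parsed address signal
        return score
    }

    // Rank candidates so the most likely destinations can be relayed to the user.
    func predictedDestinations(_ candidates: [CandidateDestination]) -> [CandidateDestination] {
        return candidates.sorted { predictionScore($0) > predictionScore($1) }
    }

The engine could then relay the top-ranked destinations, or routes to them, to the user.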
Abstract:
At least certain embodiments of the present disclosure include an environment with user interface software interacting with a software application to provide gesture operations for a display of a device. A method for operating through an application programming interface (API) in this environment includes transferring, from platform code configured to provide a common framework for handling gesture events, to a program, a rotation transform function call in response to an input that corresponds to a gesture based on two or more concurrent touches. The method includes transferring, via the API, a gesture change function call from the platform code to the program in response to detecting a change in the gesture that corresponds to a change in one or more touches. The method includes, in response to transferring the rotation transform function call and the gesture change function call, performing a rotation transform to rotate a view of the program.
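A minimal sketch of how platform code could transfer such calls to a program through an API is given below; the protocol, class, and function names are hypothetical stand-ins and do not represent the actual interface described in the disclosure.

    import Foundation

    // A point of contact on the touch-sensitive display.
    struct TouchPoint { var x: Double; var y: Double }

    // The program adopts this protocol; the platform code calls it through the API.
    protocol GestureObserver: AnyObject {
        func rotationTransform(by angleRadians: Double)  // rotation transform function call
        func gestureChanged(touches: [TouchPoint])       // gesture change function call
    }

    // Platform code providing a common framework for handling gesture events.
    final class GesturePlatform {
        weak var observer: GestureObserver?
        private var previousTouches: [TouchPoint] = []

        // Called by lower-level input handling whenever the set of touches changes.
        func touchesUpdated(_ touches: [TouchPoint]) {
            defer { previousTouches = touches }
            guard touches.count >= 2, previousTouches.count >= 2 else { return }

            // Angle of the line between the first two touches, before and after the change.
            let oldAngle = atan2(previousTouches[1].y - previousTouches[0].y,
                                 previousTouches[1].x - previousTouches[0].x)
            let newAngle = atan2(touches[1].y - touches[0].y,
                                 touches[1].x - touches[0].x)

            // Transfer the rotation transform call and the gesture change call to the
            // program, which can then rotate its view by the reported angle.
            observer?.rotationTransform(by: newAngle - oldAngle)
            observer?.gestureChanged(touches: touches)
        }
    }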
Abstract:
A portable multifunction device includes a touch screen display and one or more programs configured to be executed by one or more processors. The one or more programs include instructions for displaying an application, wherein the application includes a plurality of user input elements that include a respective user input element. The device detects a first input that corresponds to selection of the respective user input element. In response to detecting the first input, the device enlarges the respective user input element and displays an input interface for selecting input for the respective user input element, wherein the input interface includes a plurality of text input choices. While displaying the input interface, the device detects a second input that corresponds to selection of a respective text input choice of the plurality of text input choices. After detecting the second input, the device uses text that corresponds to the respective text input choice as input for the respective user input element.
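The two-step flow can be illustrated with the following plain-Swift sketch; the types and function names are hypothetical stand-ins for the device's actual view and input machinery.

    // Illustrative model of the described flow.
    struct UserInputElement {
        var text: String = ""
        var scale: Double = 1.0
    }

    struct InputInterface {
        let textInputChoices: [String]   // plurality of text input choices
    }

    // First input: the user selects the element, which is enlarged, and an input
    // interface with text choices is displayed.
    func handleFirstInput(on element: inout UserInputElement,
                          choices: [String]) -> InputInterface {
        element.scale = 2.0              // enlarge the respective user input element
        return InputInterface(textInputChoices: choices)
    }

    // Second input: the user selects one of the displayed choices, and its text is
    // used as the input for the element.
    func handleSecondInput(choiceIndex: Int,
                           from interface: InputInterface,
                           into element: inout UserInputElement) {
        element.text = interface.textInputChoices[choiceIndex]
    }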
Abstract:
At least certain embodiments of the present disclosure include an environment with user interface software interacting with a software application to provide gesture operations for a display of a device. A method for operating through an application programming interface (API) in this environment includes transferring a scaling transform call. The gesture operations include performing a scaling transform such as a zoom in or zoom out in response to a user input having two or more input points. The gesture operations also include performing a rotation transform to rotate an image or view in response to a user input having two or more input points.
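For the scaling transform, the scale factor is commonly derived from the change in distance between the two input points, as in the hedged sketch below; the rotation angle can be derived analogously from the change in the angle of the line between the points, as in the earlier gesture sketch. The names here are illustrative only.

    struct InputPoint { var x: Double; var y: Double }

    // Distance between two input points.
    func distance(_ a: InputPoint, _ b: InputPoint) -> Double {
        return ((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y)).squareRoot()
    }

    // Scale factor for a zoom in/out: ratio of the current spread between the two
    // input points to the spread when the gesture began. A value above 1 zooms in,
    // a value below 1 zooms out.
    func scalingFactor(start: (InputPoint, InputPoint),
                       current: (InputPoint, InputPoint)) -> Double {
        return distance(current.0, current.1) / distance(start.0, start.1)
    }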
Abstract:
A navigation application can generate and display a composite representation of multiple POIs when POI icons representing the POIs appear to be overlapping. Some embodiments display the composite representation when a certain zoom level is reached for a map including the multiple POI icons. In some embodiments, the navigation application can determine POIs that may be of interest to the user based on the user's attributes and activity history and generate the composite representation based on those attributes. The composite representation can include multiple POI icons that are displayed adjacent to each other such that a user of the navigation application can readily identify POIs that are likely to be of interest to the user within a region.
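One simple way such overlap could be detected and grouped is sketched below; the bounding-box overlap test and the greedy grouping are assumptions made for the example, not the application's actual clustering method.

    // Hypothetical on-screen POI icon with a square bounding box.
    struct POIIcon {
        let name: String
        var x: Double       // icon center, in screen points
        var y: Double
        var size: Double    // icon width/height, in screen points
    }

    // Two icons "appear to be overlapping" when their bounding boxes intersect.
    func overlaps(_ a: POIIcon, _ b: POIIcon) -> Bool {
        return abs(a.x - b.x) < (a.size + b.size) / 2 &&
               abs(a.y - b.y) < (a.size + b.size) / 2
    }

    // Greedily group overlapping icons; any group with more than one icon would be
    // drawn as a single composite representation of multiple POIs.
    func compositeGroups(of icons: [POIIcon]) -> [[POIIcon]] {
        var groups: [[POIIcon]] = []
        for icon in icons {
            if let i = groups.firstIndex(where: { group in
                group.contains(where: { overlaps($0, icon) })
            }) {
                groups[i].append(icon)
            } else {
                groups.append([icon])
            }
        }
        return groups
    }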
Abstract:
Some embodiments of the invention provide a navigation application that allows a user to peek ahead or behind during a turn-by-turn navigation presentation that the application provides while tracking the traversal of a physical route by a device (e.g., a mobile device, a vehicle, etc.). As the device traverses along the physical route, the navigation application generates a navigation presentation that shows a representation of the device on a map traversing along a virtual route that represents the physical route on the map. While providing the navigation presentation, the navigation application can receive user input to look ahead or behind along the virtual route. Based on the user input, the navigation application moves the navigation presentation to show locations on the virtual route that are ahead or behind the displayed current location of the device on the virtual route. This movement can cause the device representation to no longer be visible in the navigation presentation. Also, the virtual route often includes several turns, and the peek ahead or behind movement of the navigation presentation passes the presentation through one or more of these turns. In some embodiments, the map can be defined and presented as a two-dimensional (2D) or a three-dimensional (3D) scene.
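Assuming, for illustration, that the virtual route is represented as a polyline and that the peek input is converted into a signed distance offset along that polyline, the presentation's new focus point could be computed as in the following sketch; the names and the polyline representation are hypothetical.

    // A 2D point on the map; a virtual route is a polyline of such points.
    struct MapPoint { var x: Double; var y: Double }

    func dist(_ a: MapPoint, _ b: MapPoint) -> Double {
        return ((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y)).squareRoot()
    }

    // Point that lies `target` map units along the polyline from its start, clamped to
    // the route's ends. Peeking ahead uses target = current + offset (possibly passing
    // through several turns); peeking behind uses a negative offset.
    func pointAlongRoute(_ route: [MapPoint], at target: Double) -> MapPoint {
        precondition(!route.isEmpty, "route must contain at least one point")
        var remaining = max(0, target)
        for i in 1..<route.count {
            let segment = dist(route[i - 1], route[i])
            if remaining <= segment {
                let t = segment == 0 ? 0 : remaining / segment
                return MapPoint(x: route[i - 1].x + t * (route[i].x - route[i - 1].x),
                                y: route[i - 1].y + t * (route[i].y - route[i - 1].y))
            }
            remaining -= segment
        }
        return route[route.count - 1]   // past the end: clamp to the destination
    }

The presentation would then be re-centered on the returned point, which, as noted above, may leave the device representation off-screen.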
Abstract:
An improved navigation application can generate and display a composite representation of multiple POIs when POI icons representing the POIs appear to be overlapping. Some embodiments display the composite representation when a certain zoom level is reached for a map including the multiple POI icons. In some embodiments, the navigation application can determine POIs that may be of interest to the user based on the user's attributes and activity history and generate the composite representation based on those attributes. The composite representation can include multiple POI icons that are displayed adjacent to each other such that a user of the navigation application can readily identify POIs that are likely to be of interest to the user within a region.
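Complementing the overlap-grouping sketch above, the following hedged sketch shows how POIs might be scored against the user's attributes and activity history before building the composite representation; the profile fields, weights, and threshold are placeholders, not values from the disclosure.

    // Hypothetical interest model: categories drawn from the user's attributes and
    // activity history are matched against each POI's category.
    struct UserProfile {
        let preferredCategories: Set<String>   // e.g., from stated attributes
        let visitedCategories: [String: Int]   // category -> visit count from history
    }

    struct POI {
        let name: String
        let category: String
    }

    // Score how likely a POI is to interest the user; weights are placeholders.
    func interestScore(_ poi: POI, for user: UserProfile) -> Double {
        var score = 0.0
        if user.preferredCategories.contains(poi.category) { score += 2.0 }
        score += Double(user.visitedCategories[poi.category, default: 0]) * 0.5
        return score
    }

    // Keep only POIs likely to interest the user within the region.
    func poisOfInterest(_ pois: [POI], for user: UserProfile,
                        threshold: Double = 1.0) -> [POI] {
        return pois.filter { interestScore($0, for: user) >= threshold }
    }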
Abstract:
Methods and apparatus for a map tool displaying a three-dimensional view of a map based on a three-dimensional model of the surrounding environment. The three-dimensional map view may be based on a model constructed from multiple data sets, where the multiple data sets include mapping information for an overlapping area of the map displayed in the map view. For example, one data set may include two-dimensional data including object footprints, where the object footprints may be extruded into a three-dimensional object based on data from a data set composed of three-dimensional data. In this example, the three-dimensional data may include height information that corresponds to the two-dimensional object, where the height may be obtained by correlating the location of the two-dimensional object within the three-dimensional data.
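The extrusion step can be illustrated as follows, assuming for the example that the three-dimensional data set can be queried for a height near a given location; the types and the nearest-sample lookup are hypothetical.

    // A 2D footprint polygon from one data set is extruded into a prism using a
    // height looked up in a 3D data set at the footprint's location.
    struct Point2D { var x: Double; var y: Double }
    struct Point3D { var x: Double; var y: Double; var z: Double }

    struct Footprint {
        let vertices: [Point2D]   // 2D object footprint from the first data set
        let location: Point2D     // position used to correlate with the 3D data set
    }

    // Stand-in for the 3D data set: returns the height recorded nearest to a location.
    struct HeightDataSet {
        let samples: [(location: Point2D, height: Double)]
        func height(near p: Point2D) -> Double {
            let nearest = samples.min { a, b in
                let da = (a.location.x - p.x) * (a.location.x - p.x)
                       + (a.location.y - p.y) * (a.location.y - p.y)
                let db = (b.location.x - p.x) * (b.location.x - p.x)
                       + (b.location.y - p.y) * (b.location.y - p.y)
                return da < db
            }
            return nearest?.height ?? 0
        }
    }

    // Extrude the footprint into a 3D object: bottom ring at z = 0, top ring at the
    // height obtained by correlating the footprint's location within the 3D data.
    func extrude(_ footprint: Footprint, using heights: HeightDataSet) -> [Point3D] {
        let h = heights.height(near: footprint.location)
        let bottom = footprint.vertices.map { Point3D(x: $0.x, y: $0.y, z: 0) }
        let top = footprint.vertices.map { Point3D(x: $0.x, y: $0.y, z: h) }
        return bottom + top
    }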
Abstract:
A device with a touch screen display displays an electronic document that includes a respective user input element. The device detects a first input that corresponds to selection of the respective user input element that is displayed with text having a first size. In response to detecting the first input, the device enlarges the respective user input element, moves the respective user input element toward a center of a first portion of the display and displays an input interface for selecting input for the respective user input element in a second portion of the display that is different from the first portion of the display. The input interface includes a plurality of text input choices for entering text that are displayed at a second size larger than the first size. The device uses text that corresponds to a selected text input choice as input for the respective user input element.
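The geometric part of this behavior, enlarging the element, moving it toward the center of the first display portion, and showing choices at a larger text size, can be sketched as below; the rectangle model and the scale factors are assumptions made for the example.

    // Illustrative geometry only; rectangles are axis-aligned.
    struct Rect {
        var x: Double
        var y: Double
        var width: Double
        var height: Double
        var centerX: Double { x + width / 2 }
        var centerY: Double { y + height / 2 }
    }

    // Enlarge the selected element and move it toward the center of the first portion
    // of the display (the input interface would occupy a separate second portion).
    func enlargeAndCenter(element: Rect, firstPortion: Rect, scale: Double) -> Rect {
        let newWidth = element.width * scale
        let newHeight = element.height * scale
        return Rect(x: firstPortion.centerX - newWidth / 2,
                    y: firstPortion.centerY - newHeight / 2,
                    width: newWidth,
                    height: newHeight)
    }

    // Text input choices are shown at a second size larger than the element's
    // original (first) text size.
    func choiceTextSize(originalSize firstSize: Double, factor: Double = 1.5) -> Double {
        return firstSize * factor
    }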
Abstract:
Methods, systems and apparatus are described to provide visual feedback of a change in map view. Various embodiments may display a map view of a map in a two-dimensional map view mode. Embodiments may obtain input indicating a change to a three-dimensional map view mode. Input may be obtained through touch, auditory, or other well-known input technologies. Some embodiments may allow the input to request a specific display position. In response to the input indicating a change to a three-dimensional map view mode, embodiments may then display an animation that moves a virtual camera for the map display to different virtual camera positions to illustrate that the map view mode is changed to a three-dimensional map view mode.
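One way such an animation could be produced is to interpolate a virtual camera between a top-down (2D) pose and a tilted (3D) pose over a series of frames, as in the hypothetical sketch below; the camera model and the values are illustrative only.

    // Hypothetical virtual camera: a tilt of 0 degrees looks straight down (2D map
    // view mode); a larger tilt produces a perspective, three-dimensional map view.
    struct VirtualCamera {
        var tiltDegrees: Double
        var altitude: Double
    }

    // Intermediate camera positions for the animation that illustrates the change from
    // the two-dimensional view mode to the three-dimensional view mode.
    func animationFrames(from start: VirtualCamera,
                         to end: VirtualCamera,
                         frameCount: Int) -> [VirtualCamera] {
        guard frameCount > 1 else { return [end] }
        return (0..<frameCount).map { i in
            let t = Double(i) / Double(frameCount - 1)   // 0 at the start, 1 at the end
            return VirtualCamera(
                tiltDegrees: start.tiltDegrees + t * (end.tiltDegrees - start.tiltDegrees),
                altitude: start.altitude + t * (end.altitude - start.altitude))
        }
    }

    // Example: animate from a top-down 2D view to a 45-degree tilted 3D view over 30 frames.
    let frames = animationFrames(from: VirtualCamera(tiltDegrees: 0, altitude: 1000),
                                 to: VirtualCamera(tiltDegrees: 45, altitude: 400),
                                 frameCount: 30)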