Abstract:
An electronic device with a touch-sensitive display can detect a contact with the display, and in response to detecting the contact, the device can display a user interface screen representing a corresponding application. The user interface screen can include an affordance for launching the application, and a set of information obtained from the application, where the set of information is updated in accordance with data from the application.
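A minimal Swift sketch of the idea, using hypothetical names (AppInfoEntry, update(_:with:)): an entry pairs an identifier for the application the affordance launches with a set of information that is refreshed from application data.

```swift
import Foundation

// Hypothetical sketch: a user interface entry that pairs a launch affordance
// with a set of information obtained from the application.
struct AppInfoEntry {
    let appIdentifier: String          // application the affordance launches
    var information: [String: String]  // set of information obtained from the app
}

// Refresh the displayed information in accordance with new application data.
func update(_ entry: inout AppInfoEntry, with appData: [String: String]) {
    entry.information.merge(appData) { _, new in new }  // newer data wins
}

var weather = AppInfoEntry(appIdentifier: "com.example.weather",
                           information: ["temperature": "68°F"])
update(&weather, with: ["temperature": "72°F", "conditions": "Sunny"])
print(weather.information)
```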
Abstract:
Some embodiments of the invention provide a mobile device with multiple access modes. The device in some embodiments has at least two access modes, a primary access mode and a secondary access mode, that provide different restrictions for accessing the applications and/or data that are stored on the device. In some embodiments, the mobile device automatically selects applications to share or keep private based on metadata associated with the applications.
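A minimal Swift sketch of metadata-driven selection, with hypothetical types and an assumed policy (the abstract does not specify the rule): applications flagged as containing personal data or belonging to sensitive categories are kept private; everything else is shared in the secondary access mode.

```swift
import Foundation

// Hypothetical metadata associated with an application.
struct AppMetadata {
    let bundleID: String
    let category: String          // e.g. "games", "finance"
    let containsPersonalData: Bool
}

// One possible policy for automatically selecting apps to share or keep private.
func isSharedInSecondaryMode(_ metadata: AppMetadata) -> Bool {
    let privateCategories: Set<String> = ["finance", "health", "messaging"]
    return !metadata.containsPersonalData && !privateCategories.contains(metadata.category)
}

let apps = [
    AppMetadata(bundleID: "com.example.chess", category: "games", containsPersonalData: false),
    AppMetadata(bundleID: "com.example.bank", category: "finance", containsPersonalData: true),
]
let shared = apps.filter(isSharedInSecondaryMode).map(\.bundleID)
print(shared)   // ["com.example.chess"]
```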
Abstract:
Some embodiments of the invention provide a mobile device with multiple access modes. The device in some embodiments has at least two access modes, a primary access mode and a secondary access mode, that provide different restrictions for accessing the applications and/or data that are stored on the device. In some embodiments, the primary access mode of the device provides unfettered access to all of the device's applications and/or data that are available to a user, while its secondary access mode provides access to a limited set of applications and/or data that are stored on the device. In some embodiments, the device provides tools to select applications for the secondary access mode.
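A minimal Swift sketch of the two access modes, assuming a hypothetical AccessMode type: the primary mode returns every installed application, while the secondary mode filters down to an allowed subset.

```swift
import Foundation

// Hypothetical access modes with different restrictions.
enum AccessMode {
    case primary                              // unfettered access
    case secondary(allowedApps: Set<String>)  // limited set of applications
}

func accessibleApps(installed: [String], mode: AccessMode) -> [String] {
    switch mode {
    case .primary:
        return installed
    case .secondary(let allowed):
        return installed.filter { allowed.contains($0) }
    }
}

let installed = ["Phone", "Photos", "Mail", "Calculator"]
print(accessibleApps(installed: installed,
                     mode: .secondary(allowedApps: ["Phone", "Calculator"])))
```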
Abstract:
In some implementations, a user device can predictively route media content to a remote playback device based on playback context information obtained by the user device. The playback context can include local playback context information related to the state and/or context of the user device. The playback context can include remote playback context information related to the state and/or context of available remote playback devices. Based on the playback context information obtained by the user device, the user device can generate a predictive score for each available playback device that indicates or predicts the likelihood that the user will want to send media content to the corresponding playback device. The user device can generate and present a graphical user interface that can identify the playback devices having predictive scores over a threshold score. In some instances, the user device can automatically route selected media content to a predicted playback device.
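A minimal Swift sketch of the scoring step, with hypothetical context signals and weights (the abstract does not specify how the score is computed): each available device gets a predictive score, and only devices above a threshold are surfaced.

```swift
import Foundation

// Hypothetical local and remote playback context signals for one device.
struct PlaybackDevice {
    let name: String
    let isOnSameNetwork: Bool   // remote context signal
    let recentlyUsed: Bool      // local context signal
    let isCurrentlyIdle: Bool   // remote context signal
}

// Assumed weighting; a real system would derive these from richer context.
func predictiveScore(for device: PlaybackDevice) -> Double {
    var score = 0.0
    if device.isOnSameNetwork { score += 0.4 }
    if device.recentlyUsed    { score += 0.4 }
    if device.isCurrentlyIdle { score += 0.2 }
    return score
}

let devices = [
    PlaybackDevice(name: "Living Room TV",  isOnSameNetwork: true,  recentlyUsed: true,  isCurrentlyIdle: true),
    PlaybackDevice(name: "Bedroom Speaker", isOnSameNetwork: true,  recentlyUsed: false, isCurrentlyIdle: false),
    PlaybackDevice(name: "Office Display",  isOnSameNetwork: false, recentlyUsed: false, isCurrentlyIdle: true),
]

let threshold = 0.5
let candidates = devices
    .map { ($0, predictiveScore(for: $0)) }
    .filter { $0.1 > threshold }
    .sorted { $0.1 > $1.1 }

// Devices identified in the user interface; the top-scoring one could be the
// target of automatic routing.
for (device, score) in candidates {
    print("\(device.name): \(score)")
}
```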
Abstract:
An electronic device having a camera, while displaying a live preview for the camera, detects activation of a shutter button at a first time. In response, the electronic device acquires, by the camera, a representative image that represents a first sequence of images, and a plurality of images after acquiring the representative image, and also displays an indication in the live preview that the camera is capturing images for the first sequence of images. The electronic device groups images acquired by the camera in temporal proximity to the activation of the shutter button at the first time into the first sequence of images, such that the first sequence of images includes a plurality of images acquired by the camera prior to detecting activation of the shutter button at the first time, the representative image, and the plurality of images acquired by the camera after acquiring the representative image.
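A minimal Swift sketch of the grouping step, with hypothetical types (Frame, SequenceCapture): frames are buffered while the live preview runs, and a shutter press groups buffered frames, a representative frame, and subsequently acquired frames into one sequence.

```swift
import Foundation

struct Frame { let timestamp: TimeInterval }

final class SequenceCapture {
    private var recentFrames: [Frame] = []   // frames buffered before the press
    private let preCount = 10
    private let postCount = 10

    // Called for each frame produced while the live preview is displayed.
    func livePreviewDidProduce(_ frame: Frame) {
        recentFrames.append(frame)
        if recentFrames.count > preCount { recentFrames.removeFirst() }
    }

    // Called when shutter activation is detected; `upcoming` stands in for the
    // frames the camera keeps acquiring after the representative image.
    func shutterPressed(representative: Frame, upcoming: [Frame]) -> [Frame] {
        recentFrames + [representative] + Array(upcoming.prefix(postCount))
    }
}
```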
Abstract:
A host device can establish a verified session with a wearable device. The host device can determine whether the verified session is in progress. In accordance with a determination that the verified session is in progress, the host device can provide a user interface to request confirmation that a user identifier is to be provided to the wearable device. The host device can receive an input at the user interface and, in accordance with a determination that the input indicates a confirmation that the user identifier is to be provided to the wearable device, the host device can identify the user identifier to provide to the wearable device and transmit it to the wearable device.
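A minimal Swift sketch of the confirmation flow, with hypothetical closures standing in for the user interface and the transport: the identifier is transmitted only when a verified session is in progress and the input confirms the transfer.

```swift
import Foundation

enum UserInput { case confirm, decline }

func provideIdentifierIfConfirmed(
    verifiedSessionInProgress: Bool,
    presentConfirmationUI: () -> UserInput,
    userIdentifier: () -> String,
    transmit: (String) -> Void
) {
    guard verifiedSessionInProgress else { return }        // no verified session
    guard presentConfirmationUI() == .confirm else { return }
    transmit(userIdentifier())                             // send to the wearable
}

provideIdentifierIfConfirmed(
    verifiedSessionInProgress: true,
    presentConfirmationUI: { .confirm },
    userIdentifier: { "user-1234" },
    transmit: { print("Transmitting \($0) to wearable") }
)
```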
Abstract:
A computer system or electronic device displays, at a first time, a first user interface that includes a placement location and is configured to spatially accommodate a respective user interface object of a plurality of user interface objects, including first and second user interface objects. At the first time, the first user interface object is displayed at the placement location. At a second time, the first user interface is displayed with the second user interface object at the placement location. The second user interface object is automatically selected for display based on a current context of the device at the second time. In response to detecting a gesture of a first type directed to the placement location, the second user interface object is replaced with a different user interface object from the plurality of user interface objects associated with the placement location.
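A minimal Swift sketch, with hypothetical names (PlacementLocation, handleGesture): the placement location holds several candidate objects, context selects which one is displayed, and a gesture of the first type replaces it with a different object from the same set.

```swift
import Foundation

struct PlacementLocation {
    var candidates: [String]      // user interface objects associated with the location
    var displayedIndex: Int = 0

    var displayed: String { candidates[displayedIndex] }

    // Automatic, context-based selection (e.g. at a later time of day).
    mutating func select(forContext context: String) {
        if let index = candidates.firstIndex(where: { $0.contains(context) }) {
            displayedIndex = index
        }
    }

    // A gesture of the first type cycles to a different object from the set.
    mutating func handleGesture() {
        displayedIndex = (displayedIndex + 1) % candidates.count
    }
}

var slot = PlacementLocation(candidates: ["Calendar (morning)", "Workout (evening)", "Weather"])
slot.select(forContext: "evening")   // second time: context selects Workout
slot.handleGesture()                 // gesture replaces it with Weather
print(slot.displayed)
```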
Abstract:
A computer system displays a first user interface that includes a placement location configured to spatially accommodate a respective user interface object of a plurality of user interface objects corresponding to different applications. The plurality of user interface objects includes a first user interface object corresponding to a first application and a second user interface object corresponding to a second application different from the first application. The first user interface object corresponding to the first application is displayed at the placement location. The computer system detects an occurrence of an event associated with a change in status of the second application corresponding to the second user interface object. In response, the computer system ceases to display the first user interface object at the placement location and displays the first user interface with the second user interface object at the placement location.
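A minimal Swift sketch of the event-driven variant, with hypothetical types (StatusEvent, PlacementSlot): an event signalling a status change in another application swaps which user interface object occupies the placement location.

```swift
import Foundation

struct StatusEvent { let appID: String }

final class PlacementSlot {
    private(set) var displayedAppID: String
    private let objects: [String: String]   // appID -> user interface object

    init(displayedAppID: String, objects: [String: String]) {
        self.displayedAppID = displayedAppID
        self.objects = objects
    }

    // Cease displaying the current object and show the one for the application
    // whose status changed, if that application has an object for this location.
    func handle(_ event: StatusEvent) {
        if objects[event.appID] != nil {
            displayedAppID = event.appID
        }
    }
}

let slot = PlacementSlot(displayedAppID: "com.example.calendar",
                         objects: ["com.example.calendar": "Next event",
                                   "com.example.timer": "Timer running"])
slot.handle(StatusEvent(appID: "com.example.timer"))
print(slot.displayedAppID)   // com.example.timer
```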
Abstract:
An electronic device with a display and a fingerprint sensor displays a fingerprint enrollment interface and detects, on the fingerprint sensor, a plurality of finger gestures performed with a finger. The device collects fingerprint information from the plurality of finger gestures performed with the finger. After collecting the fingerprint information, the device determines whether the collected fingerprint information is sufficient to enroll a fingerprint of the finger. When the collected fingerprint information for the finger is sufficient to enroll the fingerprint of the finger, the device enrolls the fingerprint of the finger with the device. When the collected fingerprint information for the finger is not sufficient to enroll the fingerprint of the finger, the device displays a message in the fingerprint enrollment interface prompting a user to perform one or more additional finger gestures on the fingerprint sensor with the finger.
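A minimal Swift sketch of the enrollment decision, with a hypothetical coverage metric (the abstract does not define what counts as sufficient): fingerprint information accumulates across finger gestures, and the device either enrolls the fingerprint or prompts for additional gestures.

```swift
import Foundation

struct EnrollmentSession {
    var coverage = 0.0              // fraction of the fingerprint captured so far
    let requiredCoverage = 0.9      // assumed sufficiency threshold

    mutating func recordGesture(addedCoverage: Double) {
        coverage = min(1.0, coverage + addedCoverage)
    }

    var isSufficient: Bool { coverage >= requiredCoverage }
}

var session = EnrollmentSession()
for added in [0.3, 0.25, 0.2] {         // fingerprint information from each gesture
    session.recordGesture(addedCoverage: added)
}

if session.isSufficient {
    print("Fingerprint enrolled.")
} else {
    print("Please perform one or more additional finger gestures on the sensor.")
}
```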