Abstract:
An application can generate multiple user interfaces for display across multiple electronic devices. After the electronic devices establish communication, an application running on at least one of the devices can present a first set of information items on a touch-enabled display of one of the electronic devices. The electronic device can receive a user selection of one of the first set of information items. In response to receiving the user selection, the application can generate a second set of information items for display on the other electronic device. The second set of information items can represent an additional level of information related to the selected information item.
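The drill-down behavior above can be sketched as a small Python example. All names here (InfoItem, on_item_selected, FakeDisplay) are illustrative assumptions, not terms from the abstract: selecting an item on the first device's touch display pushes that item's next level of information onto the second device's display.

```python
# Hypothetical sketch of the two-device drill-down: a selection on one
# device generates a deeper level of items on the other device's display.

class InfoItem:
    """An information item that may have child items one level deeper."""
    def __init__(self, title, children=None):
        self.title = title
        self.children = children or []

class FakeDisplay:
    """Stand-in for the second electronic device's display."""
    def __init__(self):
        self.lines = []
    def clear(self):
        self.lines = []
    def show(self, text):
        self.lines.append(text)

def on_item_selected(selected, second_display):
    # In response to the user selection, generate the second set of
    # information items (the selected item's children) for the other device.
    second_display.clear()
    for child in selected.children:
        second_display.show(child.title)

albums = InfoItem("Albums", [InfoItem("Track 1"), InfoItem("Track 2")])
second_device = FakeDisplay()
on_item_selected(albums, second_device)   # second device now lists the tracks
```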
Abstract:
A device with a display and a touch-sensitive surface displays a user interface including a user interface object at a first location. While displaying the user interface, the device detects a portion of an input, including a contact at a location on the touch-sensitive surface corresponding to the user interface object. In response to detecting the portion of the input: upon determining that the portion of the input meets menu-display criteria, the device displays a plurality of selectable options that corresponds to the user interface object on the display; and, upon determining that the portion of the input meets object-move criteria, the device moves the user interface object or a representation thereof from the first location to a second location according to the movement of the contact.
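The two criteria above can be illustrated with a minimal classifier. The thresholds and names below are assumptions for the sketch, not values from the abstract: a contact that travels far enough meets the object-move criteria, while a stationary contact held long enough meets the menu-display criteria.

```python
# Illustrative decision logic for the input described above.
# Thresholds are placeholder values, not taken from the abstract.

HOLD_TIME = 0.5       # seconds of stationary contact before a menu shows
MOVE_DISTANCE = 10.0  # points of travel before the object starts moving

def classify_input(hold_seconds, travel_points):
    """Return the response for a contact on a user interface object."""
    if travel_points >= MOVE_DISTANCE:
        return "move-object"   # object-move criteria met: move the object
    if hold_seconds >= HOLD_TIME:
        return "show-menu"     # menu-display criteria met: show options
    return "none"              # neither criterion met yet
```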
Abstract:
The present disclosure provides a method for mirrored control between devices performed at a first electronic device including one or more processors, memory, and a touch-sensitive display. The method includes: sending an item from a first instant messenger application running on the first electronic device to a second instant messenger application running on a second electronic device; displaying the item in the first instant messenger application, wherein the item is concurrently displayed in the second instant messenger application; receiving information corresponding to an interaction with the item; and in response to receiving information corresponding to the interaction, updating the item on the first electronic device, wherein the update to the item is mirrored on the second electronic device.
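A minimal sketch of the mirroring idea, assuming a shared item identifier and a direct peer link between the two messenger applications (both assumptions for illustration; the abstract does not specify the transport):

```python
# Hedged sketch: an item sent from one instant messenger app is displayed
# concurrently in the peer app, and any interaction is mirrored back.

class MessengerApp:
    def __init__(self):
        self.items = {}   # item_id -> displayed state
        self.peer = None  # the mirrored app on the other device

    def send_item(self, item_id, state):
        # Display the item locally; it is concurrently displayed on the peer.
        self.items[item_id] = state
        if self.peer:
            self.peer.items[item_id] = state

    def interact(self, item_id, new_state):
        # An interaction updates the item here and the update is mirrored.
        self.items[item_id] = new_state
        if self.peer:
            self.peer.items[item_id] = new_state

first, second = MessengerApp(), MessengerApp()
first.peer, second.peer = second, first
first.send_item("sticker-1", "idle")
second.interact("sticker-1", "tapped")   # interaction on the second device
```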
Abstract:
A device displays a camera user interface including a live view from a camera. While displaying the live view from the camera, the device records media images that are captured by the camera while continuing to display the live view, and further displays, as frames scrolling across the display in a first direction, representations of a plurality of media images that were recorded while the live view was displayed.
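The scrolling frame strip can be sketched as a sliding window over the recorded frames. The window size and names below are illustrative assumptions: as recording continues, the newest frames occupy the strip, so thumbnails appear to scroll across the display in one direction.

```python
# Sketch of the scrolling-frames idea: while recording, each captured
# frame is appended, and only the newest frames are visible in the strip.

STRIP_CAPACITY = 5  # number of frame thumbnails visible at once (assumed)

def visible_frames(recorded, capacity=STRIP_CAPACITY):
    """Return the thumbnails currently on screen: the newest frames.
    As new frames arrive, older ones slide off in a fixed direction."""
    return recorded[-capacity:]

frames = [f"frame-{i}" for i in range(8)]   # 8 frames recorded so far
```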
Abstract:
The present application is related to a computer for providing output to a user. The computer includes a processor and an input device in communication with the processor. The input device includes a feedback surface and at least one sensor in communication with the feedback surface, the at least one sensor configured to detect a user input to the feedback surface. The processor varies a down-stroke threshold based on a first factor and varies an up-stroke threshold based on a second factor. The down-stroke threshold determines a first output of the computer, the up-stroke threshold determines a second output of the computer, and at least one of the first factor or the second factor is determined based on the user input.
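The variable-threshold behavior can be sketched as click/release hysteresis. The base forces and the scaling formula below are illustrative assumptions only: both thresholds are varied by a factor derived from the user input (here, contact size), the down-stroke threshold gates the first output and the up-stroke threshold gates the second.

```python
# Hedged sketch of variable down-stroke/up-stroke thresholds.
# Values and the scaling formula are placeholders, not from the abstract.

BASE_DOWN = 100.0  # force needed to register a down-stroke (first output)
BASE_UP = 60.0     # force below which a release registers (second output)

def thresholds(contact_size):
    """Vary both thresholds by a factor based on the user input:
    a larger contact gets proportionally higher thresholds."""
    factor = 1.0 + 0.2 * contact_size
    return BASE_DOWN * factor, BASE_UP * factor

def process(force, pressed, contact_size):
    """Return (new pressed state, output event or None)."""
    down, up = thresholds(contact_size)
    if not pressed and force >= down:
        return True, "click"      # down-stroke threshold crossed
    if pressed and force <= up:
        return False, "release"   # up-stroke threshold crossed
    return pressed, None          # between thresholds: no new output
```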
Abstract:
An example method is performed at a device with a display and a biometric sensor. While the device is in a locked state, the method includes displaying a log-in user interface that is associated with logging in to a first user account and a second user account. While displaying the log-in user interface, the method includes receiving biometric information and, in response to receiving the biometric information: when the biometric information is consistent with biometric information for the first user account and the first user account does not have an active session on the device, displaying a prompt to input a log-in credential for the first user account; and when the biometric information is consistent with biometric information for the second user account and the second user account does not have an active session on the device, displaying a prompt to input a log-in credential for the second user account.
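The account-matching logic can be sketched as a single lookup over the device's accounts. The record fields and matching-by-equality below are simulations for illustration, not the real biometric pipeline: the first account whose enrolled biometric matches and which lacks an active session gets a credential prompt.

```python
# Illustrative multi-account biometric log-in flow; matching is simulated
# with string equality, and field names are assumptions.

def login_prompt(biometric, accounts):
    """accounts: list of dicts with 'name', 'biometric', 'active' keys.
    Return which credential prompt to show, or None if no match applies."""
    for acct in accounts:
        if biometric == acct["biometric"] and not acct["active"]:
            return f"enter credential for {acct['name']}"
    return None   # no matching account without an active session

users = [
    {"name": "alice", "biometric": "print-A", "active": False},
    {"name": "bob",   "biometric": "print-B", "active": False},
]
```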
Abstract:
A device displays a first user interface that includes a first user interface object at a first location in the first user interface and detects a first portion of a first input directed to the first user interface object. In response, if the first portion of the first input meets first criteria including a first input threshold, the device displays selectable options that correspond to the first user interface object; and, if the first portion of the first input meets the first criteria and meets second criteria that require first movement, the device ceases to display the selectable options and moves the first user interface object or a representation thereof to a second location in accordance with the first movement.
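Unlike a one-shot classification, this abstract describes a sequence: meeting the first criteria shows the options, and later movement meeting the second criteria hides them and starts the move. A tiny state machine sketch, with thresholds and state names assumed for illustration:

```python
# Sketch of the two-stage behavior: first criteria show the selectable
# options; subsequent qualifying movement ceases displaying them and
# begins moving the object. Thresholds are placeholder assumptions.

class ObjectInput:
    def __init__(self, hold_threshold=0.5, move_threshold=10.0):
        self.hold_threshold = hold_threshold   # first input threshold
        self.move_threshold = move_threshold   # required first movement
        self.options_shown = False
        self.moving = False

    def update(self, hold_seconds, travel_points):
        """Feed the latest portion of the input; returns (options, moving)."""
        if not self.options_shown and hold_seconds >= self.hold_threshold:
            self.options_shown = True          # first criteria met
        if self.options_shown and travel_points >= self.move_threshold:
            self.options_shown = False         # cease displaying options
            self.moving = True                 # move with the input
        return self.options_shown, self.moving
```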
Abstract:
An electronic device responds to a user input that includes a movement detected while displaying a first user interface object at a first position relative to a first user interface of a first application. If the user input is directed to a location that corresponds to the first user interface object at a start of the movement, the electronic device moves a respective representation of the first user interface in accordance with the movement, relative to a background located behind the first user interface and remaining visible during the movement. If the user input is directed to a portion of the first user interface that is away from the location that corresponds to the first user interface object at the start of the movement, the electronic device performs an operation within the first user interface without moving the respective representation of the first user interface relative to the background.
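The branch above turns on where the movement starts. A sketch with simplified rectangle hit-testing (the coordinates and names are assumptions for illustration):

```python
# Hedged sketch: whether the drag starts on the designated object decides
# between moving the interface's representation over the background or
# performing an operation inside the interface. Hit-testing is simplified.

def handle_drag(start_point, object_rect):
    """object_rect = (x, y, width, height). Return the response taken
    for a movement that starts at start_point."""
    x, y, w, h = object_rect
    sx, sy = start_point
    on_object = x <= sx <= x + w and y <= sy <= y + h
    if on_object:
        # Move the representation; the background stays visible behind it.
        return "move-interface-representation"
    # Away from the object: handle the gesture within the interface itself.
    return "in-interface-operation"
```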
Abstract:
A device displays, in a first display region of a camera user interface, a first live view from a first camera. While displaying the first live view from the first camera, the device detects movement of a first user input (e.g., a first contact on a touch-sensitive surface). In accordance with a determination that the movement of the first user input meets first movement criteria, the device moves the first live view in the first display region in accordance with the movement of the first user input, displays a second live view from a second camera, and moves the second live view in the first display region in accordance with the movement of the first user input.
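The camera-switching gesture can be sketched as offsets for the two live views. The distance criterion and view width below are illustrative assumptions: until the movement criterion is met only the first live view is shown; after that, both views track the drag together.

```python
# Illustrative swipe-between-cameras logic: once movement meets the
# criterion, the first live view slides with the input and the second
# camera's live view slides in alongside it. Values are assumed.

SWITCH_DISTANCE = 50.0  # points of travel needed to begin the switch
VIEW_WIDTH = 320        # width of the display region (assumed)

def live_view_offsets(drag_dx, width=VIEW_WIDTH):
    """Return (first_view_x, second_view_x). second_view_x is None while
    the first movement criterion is not met."""
    if abs(drag_dx) < SWITCH_DISTANCE:
        return 0, None   # criterion not met: only the first live view
    # Both views move in accordance with the same user input.
    return drag_dx, drag_dx + (width if drag_dx < 0 else -width)
```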