Abstract:
The present disclosure relates to user interfaces for manipulating user interface objects. A device including a display and a rotatable input mechanism is described in relation to manipulating user interface objects. In some examples, the manipulation is a scroll, zoom, or rotation of the object. In other examples, objects are selected in accordance with simulated magnetic properties.
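The "simulated magnetic properties" selection can be pictured as each candidate object exerting an inverse-square pull on the current scroll position, with the strongest pull winning. The following is a minimal illustrative sketch, not the patented method; the function name, the `(position, strength)` element encoding, and the inverse-square model are all assumptions for clarity.

```python
def select_element(scroll_pos, elements):
    """Pick the element whose simulated magnetic pull on the current
    scroll position is strongest. Each element is a (position, strength)
    pair; pull is modeled as strength / distance**2 (an assumption),
    so nearer and 'stronger' elements win the snap."""
    best_pos, best_pull = None, 0.0
    for pos, strength in elements:
        dist = abs(scroll_pos - pos)
        pull = strength / max(dist, 1e-6) ** 2  # avoid division by zero
        if pull > best_pull:
            best_pos, best_pull = pos, pull
    return best_pos

# A stronger element can win even when it is farther away:
# select_element(40, [(0, 1.0), (100, 9.0)]) snaps to 100.
```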
Abstract:
While in a first mode, a first electronic device displays on a touch-sensitive display a first application view that corresponds to a first application. In response to detecting a first input, the electronic device enters a second mode, and concurrently displays in a first predefined area an initial group of application icons with at least a portion of the first application view adjacent to the first predefined area. While in the second mode, in response to detecting a first touch gesture on an application icon that corresponds to a second application, the electronic device displays a popup view corresponding to a full-screen-width view of the second application on a second electronic device. In response to detecting one or more second touch gestures within the popup view, the electronic device performs an action in the second application that updates a state of the second application.
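The key behavior above is that gestures made inside the popup view are forwarded to the second application, mutating its state rather than the first application's. A minimal sketch of that forwarding, assuming hypothetical `App` and popup structures invented here for illustration:

```python
class App:
    """Toy stand-in for an application whose state a popup can update."""
    def __init__(self, name):
        self.name = name
        self.state = {}

    def handle(self, gesture):
        # An action in the app updates its state (per the abstract).
        self.state["last_gesture"] = gesture


def open_popup(app):
    """Display a popup corresponding to a full-screen-width view of `app`."""
    return {"app": app, "width": "full-screen"}


def gesture_in_popup(popup, gesture):
    """Gestures inside the popup are routed to the underlying second app."""
    popup["app"].handle(gesture)
```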
Abstract:
An electronic device with a display displays a first user interface and detects a first input that includes a first movement. In response to detecting the first input, the device slides the first user interface off the display in a first direction in accordance with the first movement, where a magnitude of the sliding of the first user interface is determined based on a magnitude of the first movement and a first movement proportionality factor; and concurrently slides a second user interface on in the first direction over the first user interface in accordance with the first movement while sliding the first user interface off the display. A magnitude of the sliding of the second user interface over the first user interface is determined based on a magnitude of the first movement and a second movement proportionality factor that is different from the first movement proportionality factor.
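Because the two views scale the same input movement by different proportionality factors, they travel at different rates, which reads visually as a parallax transition. A minimal arithmetic sketch, with the function name, offset convention, and default factor values all chosen here for illustration:

```python
def slide_offsets(movement, screen_width, first_factor=1.0, second_factor=0.5):
    """Compute horizontal offsets for a dual-rate slide transition.

    The outgoing (first) view's displacement is movement * first_factor;
    the incoming (second) view starts one screen-width away and moves by
    movement * second_factor, so the two travel at different rates.
    """
    first_offset = -movement * first_factor          # outgoing view slides off
    second_offset = screen_width - movement * second_factor  # incoming slides on
    return first_offset, second_offset
```

For example, a 100-point movement on a 320-point-wide screen with factors 1.0 and 0.5 moves the first view 100 points off-screen while the second view has advanced only 50 points from its starting edge.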
Abstract:
Methods and systems related to interfaces for interacting with a digital assistant in a desktop environment are disclosed. In some embodiments, a digital assistant is invoked on a user device by a gesture following a predetermined motion pattern on a touch-sensitive surface of the user device. In some embodiments, a user device selectively invokes a dictation mode or a command mode to process a speech input depending on whether an input focus of the user device is within a text input area displayed on the user device. In some embodiments, a digital assistant performs various operations in response to one or more objects being dragged and dropped onto an iconic representation of the digital assistant displayed on a graphical user interface. In some embodiments, a digital assistant is invoked to cooperate with the user to complete a task that the user has already started on a user device.
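The dictation-versus-command decision described above hinges on a single condition: whether the device's input focus is inside a text input area when the speech input arrives. A minimal sketch of that routing, with the function and mode names invented here for illustration:

```python
def choose_speech_mode(focus_in_text_area):
    """Select how to process a speech input on the device.

    If the input focus is within a text input area, the speech is
    transcribed into that area (dictation mode); otherwise it is
    interpreted as an instruction to the assistant (command mode).
    """
    return "dictation" if focus_in_text_area else "command"
```

The same dispatch shape extends naturally to the other triggers the abstract mentions (a motion-pattern gesture, or objects dropped onto the assistant's icon), each selecting a different assistant behavior.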