Abstract:
A device with a display and a touch-sensitive keyboard with one or more character keys: displays a text entry area; detects a first input on the touch-sensitive keyboard; in accordance with a determination that the first input corresponds to activation of a character key, enters a first character corresponding to the character key into the text entry area; in accordance with a determination that the first input corresponds to a character drawn on the touch-sensitive keyboard: determines one or more candidate characters for the drawn character, and displays a candidate character selection interface that includes at least one of the candidate characters; while displaying the candidate character selection interface, detects a second input that selects a respective candidate character within the candidate character selection interface; and in response to detecting the second input, enters the selected respective candidate character into the text entry area.
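The branching this abstract describes — a key tap enters a character directly, while a drawn stroke goes through candidate recognition and a selection step — can be sketched as follows. This is a minimal illustration, not the patented implementation; every type, function, and the stand-in recognizer are hypothetical.

```swift
// Minimal sketch of the input-branching logic described above.
// All names are hypothetical; a real implementation would sit on
// the platform's keyboard and handwriting-recognition APIs.

struct Point { var x: Double; var y: Double }

enum KeyboardInput {
    case keyActivation(Character)        // tap on a character key
    case drawnCharacter(stroke: [Point]) // stroke drawn over the keyboard
}

struct TextEntryArea {
    private(set) var text = ""
    mutating func enter(_ c: Character) { text.append(c) }
}

// Stand-in for a real handwriting recognizer: maps a drawn stroke
// to one or more ranked candidate characters.
func candidateCharacters(for stroke: [Point]) -> [Character] {
    ["乐", "东", "车"]
}

func handle(_ input: KeyboardInput,
            in area: inout TextEntryArea,
            presentCandidates: ([Character]) -> Character?) {
    switch input {
    case .keyActivation(let c):
        // First branch: the input activated a character key.
        area.enter(c)
    case .drawnCharacter(let stroke):
        // Second branch: determine candidates and show a selection
        // interface; the closure models the second input that picks one.
        let candidates = candidateCharacters(for: stroke)
        if let chosen = presentCandidates(candidates) {
            area.enter(chosen)
        }
    }
}
```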
Abstract:
Detecting a signal from a touch and hover sensing device, in which the signal can be indicative of concurrent touch events and/or hover events, is disclosed. A touch event can indicate an object touching the device. A hover event can indicate an object hovering over the device. The touch and hover sensing device can ensure that a desired hover event is not masked by an incidental touch event, e.g., a hand holding the device, by compensating for the touch event in the detected signal that represents both events. Conversely, when both a hover event and a touch event are desired, the touch and hover sensing device can ensure that both events are detected by adjusting the device sensors and/or the detected signal. The touch and hover sensing device can also detect concurrent hover events by identifying multiple peaks in the detected signal, each peak corresponding to a position of a hovering object.
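The two signal operations described here — subtracting an incidental touch contribution so it does not mask a hover, and reading concurrent hover positions off multiple peaks — can be sketched under a toy one-dimensional signal model. The arrays, the subtraction rule, and the peak test below are illustrative assumptions, not the device's actual signal processing.

```swift
// Toy model: a row of sensor readings as an array of Doubles.

/// Remove an estimated touch contribution (e.g., from a hand holding
/// the device) so a weaker hover signal is not masked.
func compensate(signal: [Double], touchProfile: [Double]) -> [Double] {
    zip(signal, touchProfile).map { max($0.0 - $0.1, 0.0) }
}

/// Find local maxima above a noise floor; each surviving peak index
/// is taken as the position of one hovering object.
func hoverPeaks(in signal: [Double], threshold: Double) -> [Int] {
    guard signal.count >= 3 else { return [] }
    return (1..<signal.count - 1).filter { i in
        signal[i] > threshold &&
        signal[i] > signal[i - 1] &&
        signal[i] > signal[i + 1]
    }
}

// Example: a hand touch at one end plus two hovering fingertips.
let raw       = [0.9, 1.0, 0.3, 0.6, 0.2, 0.5, 0.1]
let touchEst  = [0.9, 0.9, 0.1, 0.0, 0.0, 0.0, 0.0]
let hoverOnly = compensate(signal: raw, touchProfile: touchEst)
print(hoverPeaks(in: hoverOnly, threshold: 0.3))  // → [3, 5]
```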
Abstract:
While in a first mode, a first electronic device displays on a touch-sensitive display a first application view that corresponds to a first application. In response to detecting a first input, the electronic device enters a second mode, and concurrently displays in a first predefined area an initial group of application icons with at least a portion of the first application view adjacent to the first predefined area. While in the second mode, in response to detecting a first touch gesture on an application icon that corresponds to a second application, the electronic device displays a popup view corresponding to a full-screen-width view of the second application on a second electronic device. In response to detecting one or more second touch gestures within the popup view, the electronic device performs an action in the second application that updates a state of the second application.
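One way to picture the mode transitions in this abstract is as a small state machine: a first input moves the device from the full application view into the icon-browsing mode, a tap on an icon raises the popup view, and gestures inside the popup act on the second application. The sketch below uses invented type names and an invented event vocabulary.

```swift
// Hedged sketch of the mode transitions; not the patented design.

enum Mode {
    case fullApp(appID: String)                          // first mode
    case switcher(visibleAppID: String, icons: [String]) // second mode
}

enum Event {
    case firstInput                      // enters the second mode
    case tapIcon(appID: String)          // shows the popup view
    case gestureInPopup(action: String)  // acts in the second application
}

struct Device {
    var mode: Mode = .fullApp(appID: "Mail")
    var popupApp: String?

    mutating func handle(_ event: Event) {
        switch (mode, event) {
        case (.fullApp(let app), .firstInput):
            // Show the initial group of icons alongside a portion
            // of the first application view.
            mode = .switcher(visibleAppID: app,
                             icons: ["Safari", "Notes", "Maps"])
        case (.switcher, .tapIcon(let app)):
            // Popup view corresponding to a full-screen-width view
            // of the second application.
            popupApp = app
        case (.switcher, .gestureInPopup(let action)):
            if let app = popupApp {
                print("Performing \(action) in \(app); its state updates")
            }
        default:
            break
        }
    }
}
```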
Abstract:
In some embodiments, an electronic device with a touch screen display: detects a single finger contact on the touch screen display; creates a touch area that corresponds to the single finger contact; determines a representative point within the touch area; determines if the touch area overlaps an object displayed on the touch screen display, which includes determining if one or more portions of the touch area other than the representative point overlap the object; connects the object with the touch area if the touch area overlaps the object, where connecting maintains the overlap of the object and the touch area; after connecting the object with the touch area, detects movement of the single finger contact; determines movement of the touch area that corresponds to movement of the single finger contact; and moves the object connected with the touch area in accordance with the determined movement of the touch area.
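The key point of this abstract is that the overlap test is not limited to a single representative point: any overlapping portion of the touch area can connect the object, and a connected object then tracks the touch area's movement. A minimal sketch, assuming a rectangular touch-area model and hypothetical names:

```swift
// Illustrative only; real touch areas are irregular contact patches.

struct Rect {
    var x, y, width, height: Double
    // Representative point: here, simply the center of the area.
    var center: (x: Double, y: Double) { (x + width / 2, y + height / 2) }
    func intersects(_ o: Rect) -> Bool {
        x < o.x + o.width && o.x < x + width &&
        y < o.y + o.height && o.y < y + height
    }
}

struct DragSession {
    var touchArea: Rect
    var object: Rect
    var connected = false

    mutating func tryConnect() {
        // The test considers the whole touch area, so portions other
        // than the representative point can establish the overlap.
        connected = touchArea.intersects(object)
    }

    mutating func moveFinger(dx: Double, dy: Double) {
        touchArea.x += dx; touchArea.y += dy
        if connected {
            // Maintain the overlap: the object follows the touch area.
            object.x += dx; object.y += dy
        }
    }
}
```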
Abstract:
Methods and systems related to interfaces for interacting with a digital assistant in a desktop environment are disclosed. In some embodiments, a digital assistant is invoked on a user device by a gesture following a predetermined motion pattern on a touch-sensitive surface of the user device. In some embodiments, a user device selectively invokes a dictation mode or a command mode to process a speech input depending on whether an input focus of the user device is within a text input area displayed on the user device. In some embodiments, a digital assistant performs various operations in response to one or more objects being dragged and dropped onto an iconic representation of the digital assistant displayed on a graphical user interface. In some embodiments, a digital assistant is invoked to cooperate with the user to complete a task that the user has already started on a user device.
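The dictation-versus-command rule lends itself to a compact sketch: speech is dictated into the text input area when the input focus sits inside one, and otherwise interpreted as an assistant command. All types and behaviors below are illustrative assumptions.

```swift
// Sketch of the focus-dependent mode selection; names are invented.

enum SpeechMode { case dictation, command }

struct FocusState {
    var focusIsInTextInputArea: Bool
}

func mode(for focus: FocusState) -> SpeechMode {
    focus.focusIsInTextInputArea ? .dictation : .command
}

func process(speech: String, focus: FocusState) {
    switch mode(for: focus) {
    case .dictation:
        print("Inserting at cursor: \(speech)")
    case .command:
        print("Interpreting as assistant command: \(speech)")
    }
}

process(speech: "send this to Alex",
        focus: FocusState(focusIsInTextInputArea: false))
```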
Abstract:
In some embodiments, an electronic device receives handwritten inputs in text entry fields and converts the handwritten inputs into font-based text. In some embodiments, an electronic device selects and deletes text based on inputs from a stylus. In some embodiments, an electronic device inserts text into pre-existing text based on inputs from a stylus. In some embodiments, an electronic device manages the timing of converting handwritten inputs into font-based text. In some embodiments, an electronic device presents a handwritten entry menu. In some embodiments, an electronic device controls the characteristics of handwritten inputs based on selections on the handwritten entry menu. In some embodiments, an electronic device presents autocomplete suggestions. In some embodiments, an electronic device converts handwritten input to font-based text. In some embodiments, an electronic device displays options in a content entry palette.
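Managing the timing of conversion might look like the following sketch: strokes are buffered as they arrive and committed to font-based text only after the writer pauses. The idle threshold, the recognizer closure, and all names are assumptions, not the patent's mechanism.

```swift
import Foundation

// Buffers handwritten strokes and commits them after a pause.
struct HandwritingBuffer {
    private(set) var strokes: [[Double]] = []
    private(set) var lastStrokeAt: Date?

    mutating func add(stroke: [Double], at time: Date = Date()) {
        strokes.append(stroke)
        lastStrokeAt = time
    }

    /// Commit pending ink to font-based text once the writer has
    /// paused for at least `idleThreshold` seconds.
    mutating func commitIfIdle(now: Date = Date(),
                               idleThreshold: TimeInterval = 0.6,
                               recognize: ([[Double]]) -> String) -> String? {
        guard let last = lastStrokeAt,
              now.timeIntervalSince(last) >= idleThreshold,
              !strokes.isEmpty else { return nil }
        defer { strokes.removeAll(); lastStrokeAt = nil }
        return recognize(strokes)   // font-based text replaces the ink
    }
}
```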
Abstract:
Systems and processes for operating an intelligent automated assistant are provided. In one example process, a speech input is received from a user. In response to determining that the speech input corresponds to a user intent of obtaining information associated with a user experience of the user, one or more parameters referencing a user experience of the user are identified. Metadata associated with the referenced user experience is obtained from an experiential data structure. Based on the metadata, one or more media items associated with the referenced user experience are retrieved. The retrieved media items are then output together.
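The retrieval flow reads naturally as a metadata join: parameters from the speech input select an experience, its metadata comes out of the experiential data structure, and media items matching that metadata are gathered for output. The sketch below invents every type and field for illustration.

```swift
// Illustrative stand-ins for the experiential data structure and library.

struct ExperienceMetadata {
    var experienceID: String
    var location: String
    var dateRange: ClosedRange<Int>   // e.g., days since epoch
}

struct MediaItem {
    var id: String
    var dayStamp: Int
    var location: String
}

// Experiential data structure, keyed by experience ID.
let experiences = [
    "hawaii-2023": ExperienceMetadata(experienceID: "hawaii-2023",
                                      location: "Maui",
                                      dateRange: 19500...19507)
]

func mediaItems(forExperience id: String,
                in library: [MediaItem]) -> [MediaItem] {
    guard let meta = experiences[id] else { return [] }
    // Retrieve media items whose metadata matches the experience.
    return library.filter {
        meta.dateRange.contains($0.dayStamp) && $0.location == meta.location
    }
}
```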
Abstract:
In some embodiments, an electronic device optionally identifies a person's face, and optionally performs an action in accordance with the identification. In some embodiments, an electronic device optionally determines a gaze location in a user interface, and optionally performs an action in accordance with the determination. In some embodiments, an electronic device optionally designates a user as being present at a sound-playback device in accordance with a determination that sound-detection criteria and verification criteria have been satisfied. In some embodiments, an electronic device optionally determines whether a person is further or closer than a threshold distance from a display device, and optionally provides a first or second user interface for display on the display device in accordance with the determination. In some embodiments, an electronic device optionally modifies the playing of media content in accordance with a determination that one or more presence criteria are not satisfied.
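Of the several presence signals listed, the distance rule is the easiest to sketch: viewers beyond a threshold get one interface, nearer viewers get another, and playback is modified when presence criteria stop being satisfied. The threshold value, interface names, and pause behavior below are assumptions.

```swift
// Sketch of the distance-threshold rule; values are illustrative.

enum PresenceUI { case farInterface, nearInterface }

func interface(forViewerDistance meters: Double,
               threshold: Double = 2.5) -> PresenceUI {
    meters > threshold ? .farInterface : .nearInterface
}

// One possible modification: pause when presence criteria fail.
func updatePlayback(presenceCriteriaSatisfied: Bool, isPlaying: inout Bool) {
    if !presenceCriteriaSatisfied { isPlaying = false }
}

print(interface(forViewerDistance: 4.0))  // → farInterface
```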
Abstract:
The present disclosure generally relates to interacting with an electronic device without touching a display screen or other physical input mechanisms. In some examples, the electronic device performs an operation in response to a positioning of a user's hand and/or an orientation of the electronic device.
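A toy sketch of the idea, mapping a detected hand pose and device orientation to an operation; the gesture vocabulary and the chosen operations are entirely illustrative assumptions:

```swift
// Illustrative only; real devices fuse many sensors for this.

enum Orientation { case faceUp, faceDown, onEdge }
enum HandPose { case hoveringAbove, coveringScreen, absent }

func operation(for pose: HandPose, _ orientation: Orientation) -> String? {
    switch (pose, orientation) {
    case (.coveringScreen, _):      return "silence alert"
    case (.hoveringAbove, .faceUp): return "show glanceable info"
    case (_, .faceDown):            return "enter do-not-disturb"
    default:                        return nil
    }
}
```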
Abstract:
An electronic device has multiple cameras and displays a digital viewfinder user interface for previewing visual information provided by the cameras. The multiple cameras may have different properties, such as focal lengths. When a single digital viewfinder is provided, the user interface allows zooming over a zoom range that includes the respective zoom ranges of both cameras. The zoom setting determines which camera provides visual information to the viewfinder and which camera is used to capture visual information. The user interface also allows the simultaneous display of content provided by different cameras. When two digital viewfinders are provided, the user interface allows zooming, freezing, and panning of one digital viewfinder independently of the other. The device allows storing of composite images and/or videos using both digital viewfinders and corresponding cameras.
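Zoom-driven camera selection over a combined range can be sketched briefly: the single viewfinder spans both cameras' zoom ranges, and the current zoom setting picks which camera feeds the preview and captures the image. The focal-length figures and crossover rule below are illustrative assumptions.

```swift
// Sketch of zoom-based camera selection; numbers are invented.

struct Camera {
    var name: String
    var zoomRange: ClosedRange<Double>  // zoom factors this camera covers
}

let wide = Camera(name: "wide",      zoomRange: 1.0...2.0)
let tele = Camera(name: "telephoto", zoomRange: 2.0...10.0)

/// The single viewfinder spans both ranges; the zoom setting decides
/// which camera provides (and captures) the visual information.
func activeCamera(forZoom zoom: Double) -> Camera {
    tele.zoomRange.contains(zoom) ? tele : wide
}

print(activeCamera(forZoom: 1.5).name)  // → "wide"
print(activeCamera(forZoom: 4.0).name)  // → "telephoto"
```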