Abstract:
The present disclosure relates to user interfaces for viewing, creating, editing, and sharing content on an electronic device. A device detects a plurality of discrete inputs that includes a first input followed by a second input. In response to detecting the plurality of discrete inputs, the device performs a sequence of operations that includes a first operation that corresponds to the first input followed by a second operation that corresponds to the second input. In accordance with a determination that the plurality of discrete inputs meets output-acceleration criteria, the first operation is performed with a first magnitude and the second operation is performed with a second magnitude that is greater than the first magnitude. In accordance with a determination that the plurality of discrete inputs does not meet the output-acceleration criteria, the first operation and the second operation are performed with the same magnitude.
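As an illustration only, the following Swift sketch shows one way the described output acceleration could be modeled; the type names, the base and accelerated magnitudes, and the specific criterion used (consecutive inputs arriving within a short interval) are assumptions introduced here, not details taken from the disclosure.

```swift
import Foundation

// Hypothetical model of a discrete input (e.g. a button press or key press).
struct DiscreteInput {
    let timestamp: TimeInterval
}

/// Returns the magnitude to use for each operation in the sequence.
/// The output-acceleration criteria assumed here: every input follows the
/// previous one within `maxInterval` seconds.
func magnitudes(for inputs: [DiscreteInput],
                baseMagnitude: Double = 1.0,
                accelerationFactor: Double = 2.0,
                maxInterval: TimeInterval = 0.3) -> [Double] {
    guard inputs.count >= 2 else { return inputs.map { _ in baseMagnitude } }

    let meetsCriteria = zip(inputs, inputs.dropFirst()).allSatisfy { pair in
        pair.1.timestamp - pair.0.timestamp <= maxInterval
    }

    return inputs.indices.map { index -> Double in
        if meetsCriteria && index > 0 {
            // Criteria met: later operations are performed with a greater magnitude.
            return baseMagnitude * pow(accelerationFactor, Double(index))
        } else {
            // Criteria not met (or first input): the base magnitude is used.
            return baseMagnitude
        }
    }
}
```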
Abstract:
The computing system includes a primary display, memory, and a housing at least partially containing a physical input mechanism and a touch screen adjacent to the physical input mechanism. The computing system displays, on the primary display, a first user interface comprising one or more user interface elements, and identifies an active user interface element among the one or more user interface elements that is in focus on the primary display. In accordance with a determination that the active user interface element that is in focus on the primary display is associated with an application executed by the computing system, the computing system displays a second user interface on the touch screen, including: (A) a first set of affordances corresponding to the application; and (B) at least one system-level affordance corresponding to a system-level functionality.
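Purely as a sketch of the conditional behavior described above, the Swift code below chooses what to show on the adjacent touch screen based on the element in focus on the primary display; the types FocusedElement and Affordance and the function name are hypothetical and not part of any real API.

```swift
// Hypothetical affordance shown on the touch screen adjacent to the
// physical input mechanism.
struct Affordance {
    let title: String
}

// Hypothetical description of the user interface element that is in focus
// on the primary display.
enum FocusedElement {
    case applicationElement(appAffordances: [Affordance])
    case systemElement
}

/// Builds the second user interface for the touch screen: when the focused
/// element belongs to an application, show (A) affordances corresponding to
/// that application and (B) at least one system-level affordance.
func touchScreenContent(for focused: FocusedElement,
                        systemAffordance: Affordance = Affordance(title: "System")) -> [Affordance] {
    switch focused {
    case .applicationElement(let appAffordances):
        return appAffordances + [systemAffordance]
    case .systemElement:
        // No application element in focus: fall back to system-level controls only.
        return [systemAffordance]
    }
}
```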
Abstract:
Improved capacitive touch and hover sensing with a sensor array is provided. An AC ground shield positioned behind the sensor array and stimulated with signals of the same waveform as the signals driving the sensor array may concentrate the electric field extending from the sensor array and enhance hover sensing capability. The hover position and/or height of an object that is nearby, but not directly above, a touch surface of the sensor array, e.g., in the border area at the end of a touch screen, may be determined using capacitive measurements of sensors near the end of the sensor array by fitting the measurements to a model. Other improvements relate to the joint operation of touch and hover sensing, such as determining when and how to perform touch sensing, hover sensing, both touch and hover sensing, or neither.
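As one hedged example of fitting edge-sensor measurements to a model, the Swift sketch below estimates a hover position by fitting a parabola to the capacitive measurements of the three sensors closest to the end of a linear array and extrapolating the peak, which may fall beyond the array; the parabolic model and the index-based coordinates are assumptions made here for illustration.

```swift
/// Estimates the hover position (in sensor-index units) of an object near the
/// end of a sensor array. `m` holds the measurements of the sensors at
/// indices n-2, n-1, and n; the returned position may exceed n, i.e. lie in
/// the border area beyond the last sensor.
func estimatedHoverPosition(lastThreeMeasurements m: [Double],
                            endIndex n: Double) -> Double? {
    guard m.count == 3 else { return nil }
    // Vertex of the parabola through (n - 2, m[0]), (n - 1, m[1]), (n, m[2]).
    let denominator = m[0] - 2 * m[1] + m[2]
    guard denominator != 0 else { return nil }
    let offset = 0.5 * (m[0] - m[2]) / denominator
    return (n - 1) + offset
}
```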
Abstract:
Compensation for sensors in a touch and hover sensing device is disclosed. Compensation can be for sensor resistance and/or sensor sensitivity variation that can adversely affect touch and hover measurements at the sensors. To compensate for sensor resistance, the device can gang adjacent sensors together so as to reduce the overall resistance of the sensors. In addition or alternatively, the device can drive the sensors with voltages from multiple directions so as to reduce the effects of the sensors' resistance. To compensate for sensor sensitivity variation (generally at issue for hover measurements), the device can apply a gain factor to the measurements, where the gain factor is a function of the sensor location, so as to reduce the sensitivity variation at different sensor locations on the device.
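To make the gain-factor idea concrete, here is a minimal Swift sketch that applies a location-dependent gain to raw hover measurements; the calibration table and type names are assumptions for illustration, and the sensor ganging and multi-direction driving described above are not modeled here.

```swift
// Hypothetical sensor coordinate on the panel.
struct SensorLocation: Hashable {
    let row: Int
    let column: Int
}

struct SensitivityCompensator {
    // Per-location gain factors, e.g. from a (hypothetical) factory calibration.
    let gain: [SensorLocation: Double]

    /// Applies the gain factor for the sensor's location to its raw measurement,
    /// reducing sensitivity variation across the panel.
    func compensated(_ measurement: Double, at location: SensorLocation) -> Double {
        return measurement * (gain[location] ?? 1.0)
    }
}
```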
Abstract:
The present disclosure generally relates to interacting with an electronic device without touching a display screen or other physical input mechanisms. In some examples, the electronic device performs an operation in response to a positioning of a user's hand and/or an orientation of the electronic device.
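A minimal Swift sketch, assuming hypothetical hand poses, orientations, and operations, of how a detected hand position and/or device orientation could be mapped to an operation without any touch input; none of the specific mappings below come from the disclosure.

```swift
enum HandPose { case coveringDevice, raisedPalm, none }
enum DeviceOrientation { case faceUp, faceDown, upright }
enum Operation { case silenceAlert, answerCall, noOperation }

/// Maps a hand pose and device orientation to an operation (illustrative only).
func operation(for pose: HandPose, orientation: DeviceOrientation) -> Operation {
    switch (pose, orientation) {
    case (.coveringDevice, _), (_, .faceDown):
        // Covering the device or turning it face down silences an alert.
        return .silenceAlert
    case (.raisedPalm, .upright):
        return .answerCall
    default:
        return .noOperation
    }
}
```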
Abstract:
An electronic device has multiple cameras and displays a digital viewfinder user interface for previewing visual information provided by the cameras. The multiple cameras may have different properties such as focal lengths. When a single digital viewfinder is provided, the user interface allows zooming over a zoom range that includes the respective zoom ranges of both cameras. The zoom setting determines which camera provides visual information to the viewfinder and which camera is used to capture visual information. The user interface also allows the simultaneous display of content provided by different cameras. When two digital viewfinders are provided, the user interface allows zooming, freezing, and panning of one digital viewfinder independently of the other. The device allows storing of composite images and/or videos using both digital viewfinders and corresponding cameras.
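As an illustration of the zoom-based camera selection, the short Swift sketch below picks which of the two cameras feeds the single digital viewfinder; the crossover zoom factor and the names are assumed here, since the disclosure only states that the zoom setting determines which camera provides visual information.

```swift
enum Camera {
    case wideAngle   // shorter focal length, lower zoom factors
    case telephoto   // longer focal length, higher zoom factors
}

/// Chooses the camera for the viewfinder given a zoom factor spanning the
/// combined zoom range of both cameras.
func cameraForZoom(_ zoomFactor: Double, crossoverZoom: Double = 2.0) -> Camera {
    // Below the crossover the wide-angle camera feeds the viewfinder;
    // at or above it, the telephoto camera is used instead.
    return zoomFactor < crossoverZoom ? .wideAngle : .telephoto
}
```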
Abstract:
The present disclosure relates to user interfaces for viewing, creating, editing, and sharing content on an electronic device. In accordance with some embodiments, a shortcut hint user interface is displayed in response to detection of a downstroke input of a modifier key. The shortcut hint user interface includes information identifying shortcuts associated with the modifier key.
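Sketched below, under assumed type names (ModifierKey, Shortcut, ShortcutHintController), is one way a shortcut hint user interface could be populated when the downstroke of a modifier key is detected; it is illustrative only and not the disclosed implementation.

```swift
enum ModifierKey: Hashable {
    case command, option, control, shift
}

struct Shortcut {
    let keyEquivalent: String
    let explanation: String
}

struct ShortcutHintController {
    // Shortcuts registered per modifier key (hypothetical registry).
    let shortcuts: [ModifierKey: [Shortcut]]

    /// Called when the downstroke of a modifier key is detected; returns the
    /// lines identifying the shortcuts associated with that modifier key.
    func hintLines(forDownstrokeOf key: ModifierKey) -> [String] {
        return (shortcuts[key] ?? []).map { "\($0.keyEquivalent): \($0.explanation)" }
    }
}
```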
Abstract:
In some embodiments, an electronic device receives handwritten inputs in text entry fields and converts the handwritten inputs into font-based text. In some embodiments, an electronic device selects and deletes text based on inputs from a stylus. In some embodiments, an electronic device inserts text into pre-existing text based on inputs from a stylus. In some embodiments, an electronic device manages the timing of converting handwritten inputs into font-based text. In some embodiments, an electronic device presents a handwritten entry menu. In some embodiments, an electronic device controls the characteristics of handwritten inputs based on selections on the handwritten entry menu. In some embodiments, an electronic device presents autocomplete suggestions. In some embodiments, an electronic device converts handwritten input to font-based text. In some embodiments, an electronic device displays options in a content entry palette.
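Of the behaviors listed above, the timing of conversion is the most algorithmic; the Swift sketch below commits pending handwriting to font-based text after an assumed idle interval since the last stroke, a policy invented here for illustration rather than taken from the disclosure.

```swift
import Foundation

struct HandwritingConversionPolicy {
    /// How long the stylus must be idle before pending strokes are converted
    /// to font-based text (value chosen by the caller; 0.6 s is only an example).
    let idleThreshold: TimeInterval

    /// Returns true when the pending handwritten input should be committed as
    /// font-based text, given the time of the last stroke.
    func shouldConvert(lastStrokeAt lastStroke: Date, now: Date = Date()) -> Bool {
        return now.timeIntervalSince(lastStroke) >= idleThreshold
    }
}

// Example usage with an assumed 0.6-second pause threshold.
let policy = HandwritingConversionPolicy(idleThreshold: 0.6)
```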
Abstract:
In some embodiments, an electronic device optionally identifies a person's face, and optionally performs an action in accordance with the identification. In some embodiments, an electronic device optionally determines a gaze location in a user interface, and optionally performs an action in accordance with the determination. In some embodiments, an electronic device optionally designates a user as being present at a sound-playback device in accordance with a determination that sound-detection criteria and verification criteria have been satisfied. In some embodiments, an electronic device optionally determines whether a person is farther or closer than a threshold distance from a display device, and optionally provides a first or second user interface for display on the display device in accordance with the determination. In some embodiments, an electronic device optionally modifies the playing of media content in accordance with a determination that one or more presence criteria are not satisfied.
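As a hedged sketch of the distance-based interface selection and presence-based playback control, the Swift code below uses an assumed threshold distance and type names introduced only for illustration.

```swift
struct PresenceController {
    enum DisplayUI {
        case nearUI   // denser layout for a close viewer
        case farUI    // larger elements for a distant viewer
    }

    /// Threshold distance in meters (assumed value, for illustration only).
    let thresholdDistance: Double = 2.5

    /// Picks the interface variant based on the detected viewer distance.
    func interface(forViewerDistance distance: Double) -> DisplayUI {
        return distance > thresholdDistance ? .farUI : .nearUI
    }

    /// Playback is modified (here: paused) when presence criteria are not satisfied.
    func shouldPausePlayback(presenceCriteriaSatisfied: Bool) -> Bool {
        return !presenceCriteriaSatisfied
    }
}
```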