Abstract:
Systems and processes for scanning a user interface are disclosed. One process can include scanning multiple elements within a user interface by highlighting the elements. The process can further include receiving a selection while one of the elements is highlighted and performing an action on the element that was highlighted when the selection was received. The action can include scanning the contents of the selected element or performing an action associated with the selected element. The process can be used to navigate an array of application icons, a menu of options, a standard desktop or laptop operating system interface, or the like. The process can also be used to perform gestures on a touch-sensitive device, as well as mouse and trackpad gestures (e.g., flick, tap, or freehand gestures).
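The scanning-and-select interaction described above can be sketched as a simple loop that cycles a highlight through elements and, on selection, either drills into the highlighted element's contents or performs its associated action. This is an illustrative sketch only; the element structure, names, and selection mechanism are assumptions, not the disclosed implementation.

```python
class Element:
    """A hypothetical UI element that can hold nested children or an action."""
    def __init__(self, name, children=None, action=None):
        self.name = name
        self.children = children or []   # contents to scan into when selected
        self.action = action             # callable to run when selected

def scan(elements, selections):
    """Cycle a highlight through `elements`. `selections` is an iterator
    yielding True when the user makes a selection while the current
    element is highlighted, False otherwise."""
    i = 0
    for selected in selections:
        current = elements[i]
        if selected:
            if current.children:         # scan the selected element's contents
                return scan(current.children, selections)
            if current.action:           # or perform its associated action
                return current.action()
        i = (i + 1) % len(elements)      # advance the highlight to the next element

# Example: skip past "Clock", open the folder, then activate its one item.
leaf = Element("Open", action=lambda: "opened")
home = [Element("Clock"), Element("Folder", children=[leaf])]
result = scan(home, iter([False, True, True]))
```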
Abstract:
While an electronic device with a display and a touch-sensitive surface is in a screen reader accessibility mode, the device displays a character input area and a keyboard, the keyboard including a plurality of key icons. The device detects a sequence of one or more gestures on the touch-sensitive surface that correspond to one or more characters. A respective gesture of the one or more gestures that corresponds to a respective character is a single finger gesture that moves across the touch-sensitive surface along a respective path that corresponds to the respective character. The respective path traverses one or more locations on the touch-sensitive surface that correspond to one or more key icons of the plurality of key icons without activating the one or more key icons. In response to detecting the respective gesture, the device enters the corresponding respective character in the character input area of the display.
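The key idea above is that the finger's path is interpreted as a character shape rather than as key activations. A minimal way to sketch such path-to-character matching is to quantize the path into a sequence of movement directions and look that sequence up in a template table. The templates and quantization here are simplified assumptions for illustration, not the recognizer the abstract describes.

```python
def directions(path):
    """Quantize a path of (x, y) touch points into N/E/S/W move directions,
    collapsing consecutive repeats. Screen y is assumed to grow downward."""
    dirs = []
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        dx, dy = x1 - x0, y1 - y0
        if abs(dx) >= abs(dy):
            d = "E" if dx > 0 else "W"
        else:
            d = "S" if dy > 0 else "N"
        if not dirs or dirs[-1] != d:    # merge runs of the same direction
            dirs.append(d)
    return dirs

# Hypothetical character templates keyed by direction sequence.
TEMPLATES = {("S",): "i", ("E",): "-", ("S", "E"): "L"}

def recognize(path):
    """Return the character matching the gesture path, or None if no
    template matches (keys crossed by the path are never activated)."""
    return TEMPLATES.get(tuple(directions(path)))
```

A real recognizer would use something far more robust (e.g., resampling and template distance), but the control flow is the same: the path, not the keys beneath it, determines the entered character.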
Abstract:
While an electronic device with a display and a touch-sensitive surface is in a screen reader accessibility mode, the device displays an application launcher screen including a plurality of application icons. A respective application icon corresponds to a respective application stored in the device. The device detects a sequence of one or more gestures on the touch-sensitive surface that correspond to one or more characters. A respective gesture that corresponds to a respective character is a single finger gesture that moves across the touch-sensitive surface along a respective path that corresponds to the respective character. The device determines whether the detected sequence of one or more gestures corresponds to a respective application icon of the plurality of application icons, and, in response to determining that the detected sequence of one or more gestures corresponds to the respective application icon, performs a predefined operation associated with the respective application icon.
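The matching step above, determining whether an entered character sequence corresponds to an application icon, can be sketched as prefix matching against application names. The app names and the exactly-one-match rule below are illustrative assumptions, not the patent's matching criteria.

```python
# Hypothetical installed applications represented by their icon labels.
APPS = ["Mail", "Maps", "Music", "Photos"]

def match_app(chars):
    """Return the app names whose label starts with the characters entered
    so far (case-insensitive). A caller might perform the predefined
    operation once exactly one candidate remains."""
    prefix = "".join(chars).lower()
    return [name for name in APPS if name.lower().startswith(prefix)]

# Example: "m" is ambiguous; "mu" narrows to a single icon.
candidates = match_app(["m", "u"])
```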
Abstract:
The present disclosure generally relates to techniques and interfaces for generating synthesized speech outputs. For example, a user interface for a text-to-speech service can include ranked and/or categorized phrases, which can be selected to enter as text. A synthesized speech output is then generated to deliver any entered text, for example, using a personalized voice model.
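One way to picture the ranked phrase list mentioned above is ordering a phrase catalog by how often each phrase has been used, so frequent phrases surface first. The ranking criterion (usage frequency, with catalog order breaking ties) is an assumption for illustration; the abstract does not specify how phrases are ranked.

```python
from collections import Counter

def rank_phrases(history, catalog):
    """Order `catalog` phrases by descending usage count in `history`.
    Python's stable sort preserves catalog order among ties."""
    counts = Counter(history)
    return sorted(catalog, key=lambda phrase: -counts[phrase])

# Example: "Thank you" was used twice, so it ranks first.
ordered = rank_phrases(
    history=["Thank you", "Thank you", "Hello"],
    catalog=["Hello", "Thank you", "I need help"],
)
```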
Abstract:
The present disclosure generally relates to detecting text. The present disclosure describes at least methods for managing a text detection mode, identifying targeted text, and managing modes of a computer system.
Abstract:
In some embodiments, a computer system detects objects, such as physical objects in the physical environment of the computer system. In some embodiments, the computer system presents indications of characteristics of the physical objects. In some embodiments, the physical objects are entry points to physical locations.