Abstract:
The present disclosure generally relates to user interfaces and techniques for managing audible descriptions for visual media. In some embodiments, the user interfaces and techniques provide different audible descriptions for a portion of a representation of the media: one audible description is provided before the portion of the representation of the media has been changed, and a different audible description is provided after the portion has been changed.
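A minimal Swift sketch of the before-and-after description logic; the MediaPortion type and its property names are illustrative assumptions, not the disclosed implementation:

```swift
// Hypothetical model of one portion of a media representation.
struct MediaPortion {
    let baseDescription: String      // e.g. "a dog in the lower left"
    var isModified: Bool             // set after the portion is edited
    var modifiedDescription: String? // e.g. "a dog, cropped and brightened"
}

// Returns the description a screen reader would speak for the portion:
// the original description before the change, a different one after.
func audibleDescription(for portion: MediaPortion) -> String {
    if portion.isModified, let changed = portion.modifiedDescription {
        return changed
    }
    return portion.baseDescription
}
```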
Abstract:
Systems and processes for scanning a user interface are disclosed. One process can include scanning multiple elements within a user interface by highlighting the elements. The process can further include receiving a selection while one of the elements is highlighted and performing an action on the element that was highlighted when the selection was received. The action can include scanning the contents of the selected element or performing an action associated with the selected element. The process can be used to navigate an array of application icons, a menu of options, a standard desktop or laptop operating system interface, or the like. The process can also be used to perform gestures (e.g., flick, tap, or freehand gestures) on a touch-sensitive device, or to perform mouse and trackpad gestures.
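A minimal Swift sketch of the scanning loop described above: elements are highlighted in turn, and a selection either descends into the highlighted container to scan its contents or performs the highlighted element's action. ScannableElement and performDefaultAction are hypothetical names, not the disclosed implementation:

```swift
// A scannable element is either a leaf with a default action or a
// container whose children can be scanned in turn.
protocol ScannableElement {
    var children: [any ScannableElement] { get } // non-empty for containers
    func performDefaultAction()
}

final class ElementScanner {
    private var elements: [any ScannableElement]
    private var highlightedIndex = 0

    init(elements: [any ScannableElement]) {
        self.elements = elements
    }

    // Advance the highlight to the next element, e.g. on a timer tick
    // or a step input.
    func advanceHighlight() {
        guard !elements.isEmpty else { return }
        highlightedIndex = (highlightedIndex + 1) % elements.count
    }

    // A selection acts on whichever element is highlighted at that
    // moment: descend into a container, or perform the leaf's action.
    func select() {
        guard elements.indices.contains(highlightedIndex) else { return }
        let element = elements[highlightedIndex]
        if element.children.isEmpty {
            element.performDefaultAction()
        } else {
            elements = element.children
            highlightedIndex = 0
        }
    }
}
```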
Abstract:
An electronic device obtains one or more images of a scene, and displays a preview of the scene. If the electronic device meets levelness criteria, the electronic device provides a first audible and/or tactile output indicating that the camera is obtaining level images of the scene. In some embodiments, the electronic device detects, using one or more sensors, an orientation of a first axis of the electronic device relative to a respective vector, and the levelness criteria include a criterion that is met when the first axis of the electronic device moves within a predefined range of the respective vector. In some embodiments, if the orientation of the first axis of the electronic device moves outside of the predefined range of the respective vector, a second audible and/or tactile output, indicating that the camera is not obtaining level images of the scene, is provided.
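One way the levelness criterion could be evaluated, sketched in Swift: the angle between the device's first axis and the respective vector is compared against a predefined range, and an output is emitted only when the axis crosses into or out of that range. The vector math below stands in for the actual sensor pipeline (e.g., reading device attitude), which is omitted:

```swift
import Foundation

struct Vector3 {
    var x, y, z: Double
    var length: Double { (x * x + y * y + z * z).squareRoot() }
    func dot(_ other: Vector3) -> Double { x * other.x + y * other.y + z * other.z }
}

// Angle in radians between the device's first axis and the respective vector.
func angle(between a: Vector3, and b: Vector3) -> Double {
    let cosine = a.dot(b) / (a.length * b.length)
    return acos(max(-1, min(1, cosine))) // clamp for floating-point safety
}

// Emits the first output when the axis enters the predefined range and
// the second output when it leaves it.
final class LevelnessMonitor {
    let reference: Vector3 // the respective vector, e.g. gravity
    let maxAngle: Double   // the predefined range, in radians
    private var wasLevel = false

    init(reference: Vector3, maxAngle: Double) {
        self.reference = reference
        self.maxAngle = maxAngle
    }

    func update(deviceAxis: Vector3) {
        let isLevel = angle(between: deviceAxis, and: reference) <= maxAngle
        if isLevel != wasLevel {
            // Placeholder for the audible and/or tactile outputs.
            print(isLevel ? "level: first output" : "not level: second output")
            wasLevel = isLevel
        }
    }
}
```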
Abstract:
An electronic device displays a first user interface including user interface objects. While displaying the first user interface, the device detects a first input on the touch-sensitive surface. In response, if the first input is detected at a location on the touch-sensitive surface that corresponds to a first user interface object of the first user interface and the first input satisfies first input intensity criteria, the device performs a first operation, including displaying a zoomed-in view of at least a first portion of the first user interface; and, if the first input is detected at a location on the touch-sensitive surface that corresponds to the first user interface object of the first user interface and the first input does not satisfy the first input intensity criteria, the device performs a second operation that is distinct from the first operation.
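A Swift sketch of the intensity branch: the same press location yields the zoom operation only when the input's intensity meets a threshold. The types, and the 0.75 threshold, are illustrative stand-ins for the device's actual input pipeline:

```swift
struct Point { var x, y: Double }

struct PressInput {
    let location: Point
    let intensity: Double // normalized 0...1 force reading
}

enum Operation {
    case zoomIn(around: Point) // first operation: zoomed-in view
    case activate              // second, distinct operation
}

// Hypothetical stand-in for the "first input intensity criteria".
let intensityThreshold = 0.75

func operation(for input: PressInput, hitsObject: Bool) -> Operation? {
    guard hitsObject else { return nil } // input not on the object
    return input.intensity >= intensityThreshold
        ? .zoomIn(around: input.location)
        : .activate
}
```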
Abstract:
An electronic device with a camera obtains, with the camera, one or more images of a scene. The electronic device detects a respective feature within the scene. In accordance with a determination that a first mode is active on the device, the electronic device provides a first audible description of the scene. The first audible description provides information indicating a size and/or position of the respective feature relative to a first set of divisions applied to the one or more images of the scene. In accordance with a determination that the first mode is not active on the device, the electronic device provides a second audible description of the scene. The second audible description is distinct from the first audible description and does not include the information indicating the size and/or position of the respective feature relative to the first set of divisions.
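A Swift sketch of the first audible description, assuming a 3x3 grid as the first set of divisions (the abstract does not fix the grid size): the feature's center is mapped to a grid cell and its coverage of the image is reported. All names are illustrative:

```swift
// Feature bounding box in normalized 0...1 image coordinates.
struct Rect {
    var x, y, width, height: Double
    var midX: Double { x + width / 2 }
    var midY: Double { y + height / 2 }
}

func gridDescription(of feature: Rect) -> String {
    // Map the feature's center into one cell of a 3x3 grid.
    let column = min(Int(feature.midX * 3), 2)
    let row = min(Int(feature.midY * 3), 2)
    let vertical = ["top", "middle", "bottom"]
    let horizontal = ["left", "center", "right"]
    let coverage = Int(feature.width * feature.height * 100)
    return "Feature in the \(vertical[row]) \(horizontal[column]) of the frame, "
        + "covering about \(coverage)% of the image"
}

// Example: a feature whose bounding box sits in the upper-left ninth.
print(gridDescription(of: Rect(x: 0.05, y: 0.1, width: 0.2, height: 0.25)))
// "Feature in the top left of the frame, covering about 5% of the image"
```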
Abstract:
While an electronic device with a display and a touch-sensitive surface is in a screen reader accessibility mode, the device displays a character input area and a keyboard, the keyboard including a plurality of key icons. The device detects a sequence of one or more gestures on the touch-sensitive surface that correspond to one or more characters. A respective gesture of the one or more gestures that corresponds to a respective character is a single finger gesture that moves across the touch-sensitive surface along a respective path that corresponds to the respective character. The respective path traverses one or more locations on the touch-sensitive surface that correspond to one or more key icons of the plurality of key icons without activating the one or more key icons. In response to detecting the respective gesture, the device enters the corresponding respective character in the character input area of the display.
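A toy Swift sketch of mapping a single-finger path to a character: the path is resampled to a fixed length and matched against stored per-character stroke templates by point-wise distance, with the gesture consumed by the recognizer rather than routed to the key icons it crosses. Real handwriting recognition is far more involved; all names and templates here are illustrative:

```swift
import Foundation

struct PathPoint { var x, y: Double }

// Resample a path to a fixed number of points so paths of different
// lengths can be compared point-by-point (uniform index sampling, a
// deliberate simplification).
func resample(_ path: [PathPoint], to count: Int = 16) -> [PathPoint] {
    guard path.count > 1 else {
        return Array(repeating: path.first ?? PathPoint(x: 0, y: 0), count: count)
    }
    return (0..<count).map { i in
        let t = Double(i) / Double(count - 1) * Double(path.count - 1)
        return path[min(Int(t), path.count - 1)]
    }
}

// Sum of point-wise distances between two equal-length paths.
func distance(_ a: [PathPoint], _ b: [PathPoint]) -> Double {
    zip(a, b).reduce(0) { $0 + hypot($1.0.x - $1.1.x, $1.0.y - $1.1.y) }
}

// Pick the template character whose stroke is closest to the gesture path;
// the matched character is then entered in the character input area.
func recognizeCharacter(path: [PathPoint],
                        templates: [Character: [PathPoint]]) -> Character? {
    let sample = resample(path)
    return templates.min {
        distance(sample, resample($0.value)) < distance(sample, resample($1.value))
    }?.key
}
```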
Abstract:
While an electronic device with a display and a touch-sensitive surface is in a screen reader accessibility mode, the device displays an application launcher screen including a plurality of application icons. A respective application icon corresponds to a respective application stored in the device. The device detects a sequence of one or more gestures on the touch-sensitive surface that correspond to one or more characters. A respective gesture that corresponds to a respective character is a single finger gesture that moves across the touch-sensitive surface along a respective path that corresponds to the respective character. The device determines whether the detected sequence of one or more gestures corresponds to a respective application icon of the plurality of application icons, and, in response to determining that the detected sequence of one or more gestures corresponds to the respective application icon, performs a predefined operation associated with the respective application icon.
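A Swift sketch of matching the recognized character sequence to an application icon by unambiguous name prefix; the AppIcon type and the sample application names are illustrative assumptions:

```swift
struct AppIcon {
    let name: String
    let launch: () -> Void // the predefined operation for this icon
}

// Returns the icon only when exactly one application name matches the
// characters drawn so far.
func icon(matching typed: String, in icons: [AppIcon]) -> AppIcon? {
    let matches = icons.filter { $0.name.lowercased().hasPrefix(typed.lowercased()) }
    return matches.count == 1 ? matches.first : nil
}

// Example: after the gestures "m", "a", "i" the only prefix match launches.
let icons = [
    AppIcon(name: "Mail") { print("launching Mail") },
    AppIcon(name: "Maps") { print("launching Maps") },
    AppIcon(name: "Music") { print("launching Music") },
]
if let match = icon(matching: "mai", in: icons) {
    match.launch() // performs the predefined operation
}
```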