Abstract:
An electronic device obtains one or more images of a scene. After obtaining the one or more images of the scene, the electronic device detects a plurality of objects within the scene, provides a first audible description of the scene, and detects a user input that selects a respective object of the plurality of objects within the scene. The first audible description of the scene provides information corresponding to the plurality of objects as a group. In response to the user input selecting the respective object within the scene, the electronic device provides a second audible description of the respective object. The second audible description is distinct from the first audible description and includes a description of one or more characteristics specific to the respective object.
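A minimal Swift sketch of this two-tier description flow; the types, attribute strings, and speech hook are illustrative assumptions, not the patented implementation (on iOS the speech output could be backed by AVSpeechSynthesizer):

```swift
import Foundation

// Hypothetical model of a detected object; fields are illustrative.
struct DetectedObject {
    let label: String          // e.g. "dog"
    let attributes: [String]   // characteristics specific to this object
}

struct SceneDescriber {
    let speak: (String) -> Void   // stand-in for a speech synthesis call

    // First audible description: information about the objects as a group.
    func describeScene(_ objects: [DetectedObject]) {
        let labels = objects.map(\.label).joined(separator: ", ")
        speak("Scene with \(objects.count) objects: \(labels).")
    }

    // Second, distinct description: characteristics specific to one object,
    // provided when the user input selects that object.
    func describeObject(_ object: DetectedObject) {
        speak("\(object.label): \(object.attributes.joined(separator: "; ")).")
    }
}

let objects = [
    DetectedObject(label: "dog", attributes: ["brown", "sitting", "left of frame"]),
    DetectedObject(label: "bicycle", attributes: ["red", "leaning against a wall"])
]
let describer = SceneDescriber(speak: { print($0) })
describer.describeScene(objects)       // group-level description
describer.describeObject(objects[0])   // user selected the first object
```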
Abstract:
An electronic device with a display showing a user interface (UI) automatically adjusts the zoom level of a magnification region. The electronic device receives a request to magnify at least a portion of the display showing the UI. The electronic device determines the context in which it was operating at the time of the magnification request; the context comprises display parameters, environmental parameters, or both. The electronic device displays the UI at a zoom level determined based on user preferences. Upon detecting a text input condition, the device resizes and optionally moves the magnification region so that the resized magnification region does not overlap the newly displayed composition interface window. The device also applies an auto-snap feature when scrolling the content within the magnification region away from a first boundary brings a second, opposite boundary into view.
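A brief Swift sketch of how context-dependent zooming and the non-overlap resizing might work; the context parameters, thresholds, and `Rect` type below are assumptions for illustration, not the patented method:

```swift
import Foundation

// Illustrative context model; the parameters and thresholds are assumptions.
struct MagnificationContext {
    let displayScale: Double        // a display parameter
    let ambientLightLux: Double     // an environmental parameter
}

// Start from the user's preferred zoom, then adjust for the current context.
func zoomLevel(for context: MagnificationContext, userPreferred: Double) -> Double {
    var zoom = userPreferred
    if context.ambientLightLux < 50 { zoom *= 1.25 }   // dim surroundings: magnify more
    if context.displayScale > 2 { zoom *= 1.1 }        // dense display: magnify slightly more
    return min(zoom, 8.0)                              // clamp to a maximum zoom level
}

struct Rect { var x, y, width, height: Double }

// Resize the magnification region so it does not overlap a newly shown
// composition window (vertical overlap only, for brevity).
func avoidOverlap(region: Rect, composer: Rect) -> Rect {
    var r = region
    if r.y + r.height > composer.y && r.y < composer.y + composer.height {
        r.height = max(composer.y - r.y, 0)   // end the region just above the composer
    }
    return r
}

let context = MagnificationContext(displayScale: 3.0, ambientLightLux: 30)
print(zoomLevel(for: context, userPreferred: 2.0))      // ≈ 2.75
let region = Rect(x: 0, y: 300, width: 320, height: 300)
let composer = Rect(x: 0, y: 500, width: 320, height: 168)
print(avoidOverlap(region: region, composer: composer)) // height shrinks from 300 to 200
```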
Abstract:
Methods, systems, and user interfaces enable users to identify a user-selected location of a user interface with reduced movement and motor effort. A first selection indicator is overlaid on the user interface and moved in a first direction. Responsive to receiving a first user input to stop movement of the first selection indicator, movement of the first selection indicator is ceased over a first location of the user interface. A second selection indicator is overlaid on the user interface and moved in a second direction. Responsive to receiving a second user input to stop movement of the second selection indicator, movement of the second selection indicator is ceased over a second location of the user interface. The user-selected location of the user interface is selected based at least in part on the first and second locations of the user interface.
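A compact Swift model of this two-pass selection flow; the step size, the callbacks that stand in for the two user inputs, and the type names are illustrative assumptions:

```swift
import Foundation

// A compact model of two-pass point selection; the step size and callbacks
// that stand in for user inputs are illustrative assumptions.
struct PointScanner {
    let width: Double
    let height: Double
    let step: Double

    // Sweep a selection indicator from 0 toward `limit`; `stopAt` simulates
    // the user input that ceases the indicator's movement.
    func sweep(limit: Double, stopAt: (Double) -> Bool) -> Double {
        var position = 0.0
        while position < limit {
            if stopAt(position) { break }   // user input: stop here
            position += step
        }
        return min(position, limit)
    }

    // The first pass fixes the x coordinate, the second fixes the y coordinate;
    // the user-selected location derives from both.
    func selectPoint(stopX: (Double) -> Bool, stopY: (Double) -> Bool) -> (x: Double, y: Double) {
        (sweep(limit: width, stopAt: stopX), sweep(limit: height, stopAt: stopY))
    }
}

let scanner = PointScanner(width: 320, height: 568, step: 4)
let point = scanner.selectPoint(stopX: { $0 >= 120 }, stopY: { $0 >= 300 })
print(point)   // (x: 120.0, y: 300.0)
```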
Abstract:
An electronic device, in communication with a haptic display that includes a touch-sensitive surface, sends instructions to the haptic display to display a document with multiple characters, where a respective character is displayed at a respective character size. While the haptic display is displaying the document, the device receives an input that corresponds to a finger contact at a first location on the haptic display. In response to receiving the input, the device associates a first cursor position with the first location, determines a first character in the plurality of characters adjacent to the first cursor position, and sends instructions to the haptic display to output a Braille character, at the first location, that corresponds to the first character. A respective Braille character is output on the haptic display at a respective Braille character size that is larger than the corresponding displayed character size.
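A minimal Swift sketch of the contact-to-Braille mapping, assuming a fixed-width text layout; the dot patterns shown are standard single-cell letters, but the type names and sizes are illustrative assumptions:

```swift
import Foundation

// Example Braille cells for a few letters.
let braillePatterns: [Character: String] = ["a": "⠁", "b": "⠃", "c": "⠉", "t": "⠞"]

struct HapticDocument {
    let characters: [Character]
    let characterWidth: Double   // the displayed character size
    let brailleScale: Double     // > 1: Braille output is larger than the displayed text

    // Associate a cursor position with the contact location, find the adjacent
    // character, and return the Braille cell to output at a larger size.
    func brailleCell(atContactX x: Double) -> (pattern: String, width: Double)? {
        let index = Int(x / characterWidth)          // cursor position nearest the contact
        guard characters.indices.contains(index),
              let pattern = braillePatterns[characters[index]] else { return nil }
        return (pattern, characterWidth * brailleScale)
    }
}

let document = HapticDocument(characters: Array("cat"), characterWidth: 12, brailleScale: 2.5)
if let cell = document.brailleCell(atContactX: 20) {
    print(cell.pattern, cell.width)   // ⠁ 30.0 — the cell for "a", larger than the glyph
}
```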
Abstract:
An electronic device obtains one or more images of a scene, and displays a preview of the scene. If the electronic device meets levelness criteria, the electronic device provides a first audible and/or tactile output indicating that the camera is obtaining level images of the scene. In some embodiments, the electronic device detects, using one or more sensors, an orientation of a first axis of the electronic device relative to a respective vector, and the levelness criteria include a criterion that is met when the first axis of the electronic device moves within a predefined range of the respective vector. In some embodiments, if the orientation of the first axis of the electronic device moves outside of the predefined range of the respective vector, a second audible and/or tactile output, indicating that the camera is not obtaining level images of the scene, is provided.
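A small Swift sketch of the levelness check, assuming a single tilt angle between the device's first axis and the respective vector (e.g., gravity); the tolerance value and feedback hooks are assumptions, and on iOS the device attitude could come from Core Motion:

```swift
import Foundation

// Sketch of the levelness check; the tolerance and feedback hooks are assumptions.
struct LevelMonitor {
    let toleranceDegrees: Double      // the predefined range around the respective vector
    let levelFeedback: () -> Void     // first audible and/or tactile output
    let notLevelFeedback: () -> Void  // second audible and/or tactile output
    var wasLevel = false

    // `tiltDegrees` is the angle between the device's first axis and the
    // respective vector (e.g., gravity).
    mutating func update(tiltDegrees: Double) {
        let isLevel = abs(tiltDegrees) <= toleranceDegrees
        guard isLevel != wasLevel else { return }   // only signal on transitions
        wasLevel = isLevel
        if isLevel { levelFeedback() } else { notLevelFeedback() }
    }
}

var monitor = LevelMonitor(toleranceDegrees: 2.0,
                           levelFeedback: { print("chime: level") },
                           notLevelFeedback: { print("buzz: not level") })
monitor.update(tiltDegrees: 1.2)   // within the predefined range -> "chime: level"
monitor.update(tiltDegrees: 5.7)   // outside the range -> "buzz: not level"
```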
Abstract:
An electronic device with a touch-sensitive surface displays a first user interface including user interface objects. While displaying the first user interface, the device detects a first input on the touch-sensitive surface. In response, if the first input is detected at a location on the touch-sensitive surface that corresponds to a first user interface object of the first user interface and the first input satisfies first input intensity criteria, the device performs a first operation, including displaying a zoomed-in view of at least a first portion of the first user interface; if the first input is detected at a location on the touch-sensitive surface that corresponds to the first user interface object but does not satisfy the first input intensity criteria, the device performs a second operation that is distinct from the first operation.
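A hedged Swift sketch of this intensity-gated dispatch (hit-testing plus a force threshold); the 0.6 threshold, the operations shown, and the type names are assumptions for illustration:

```swift
import Foundation

// Illustrative intensity-gated dispatch; threshold and names are assumptions.
struct Input {
    let x: Double, y: Double
    let intensity: Double        // normalized contact force, 0...1
}

struct InterfaceObject {
    let name: String
    let minX, minY, maxX, maxY: Double
    func contains(_ input: Input) -> Bool {
        (minX...maxX).contains(input.x) && (minY...maxY).contains(input.y)
    }
}

func handle(_ input: Input, over object: InterfaceObject) {
    guard object.contains(input) else { return }   // hit-test the first user interface object
    if input.intensity >= 0.6 {
        // First operation: display a zoomed-in view of a portion of the interface.
        print("Zoom in around \(object.name)")
    } else {
        // Second, distinct operation (e.g., activating the object instead).
        print("Activate \(object.name)")
    }
}

let button = InterfaceObject(name: "Send", minX: 100, minY: 400, maxX: 160, maxY: 440)
handle(Input(x: 120, y: 420, intensity: 0.8), over: button)   // Zoom in around Send
handle(Input(x: 120, y: 420, intensity: 0.3), over: button)   // Activate Send
```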
Abstract:
While an electronic device with a display and a touch-sensitive surface is in a screen reader accessibility mode, the device displays an application launcher screen including a plurality of application icons. A respective application icon corresponds to a respective application stored in the device. The device detects a sequence of one or more gestures on the touch-sensitive surface that correspond to one or more characters. A respective gesture that corresponds to a respective character is a single finger gesture that moves across the touch-sensitive surface along a respective path that corresponds to the respective character. The device determines whether the detected sequence of one or more gestures corresponds to a respective application icon of the plurality of application icons, and, in response to determining that the detected sequence of one or more gestures corresponds to the respective application icon, performs a predefined operation associated with the respective application icon.
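A minimal Swift sketch of matching a recognized character sequence to an application icon; the character recognizer itself is out of scope (recognized characters arrive as input), and the unique-prefix matching rule is an illustrative assumption:

```swift
import Foundation

// Sketch of matching handwritten characters to app icons; names are illustrative.
struct AppIcon {
    let name: String
    let launch: () -> Void
}

struct GestureLauncher {
    let icons: [AppIcon]
    var typed = ""

    // Called once per single-finger path recognized as a character; the sequence
    // "corresponds" to an icon here when it uniquely prefixes one icon name.
    mutating func didRecognize(_ character: Character) -> AppIcon? {
        typed.append(character)
        let matches = icons.filter { $0.name.lowercased().hasPrefix(typed) }
        return matches.count == 1 ? matches.first : nil
    }
}

var launcher = GestureLauncher(icons: [
    AppIcon(name: "Mail",  launch: { print("Opening Mail") }),
    AppIcon(name: "Maps",  launch: { print("Opening Maps") }),
    AppIcon(name: "Music", launch: { print("Opening Music") })
])
_ = launcher.didRecognize("m")                               // ambiguous: Mail, Maps, Music
_ = launcher.didRecognize("a")                               // still ambiguous: Mail, Maps
if let icon = launcher.didRecognize("i") { icon.launch() }   // unique -> Opening Mail
```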
Abstract:
Systems and processes for scanning a user interface are disclosed. One process can include scanning multiple elements within a user interface by highlighting the elements. The process can further include receiving a selection while one of the elements is highlighted and performing an action on the element that was highlighted when the selection was received. The action can include scanning the contents of the selected element or performing an action associated with the selected element. The process can be used to navigate an array of application icons, a menu of options, a standard desktop or laptop operating system interface, or the like. The process can also be used to perform gestures on a touch-sensitive device, as well as mouse and trackpad gestures (e.g., flick, tap, or freehand gestures).
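A short Swift model of this element scanning, where selecting a group scans its contents; highlighting is simulated with print, and the selection hook standing in for the user's switch input is a simplifying assumption:

```swift
import Foundation

// A minimal model of element scanning; highlighting is simulated with print,
// and `selectWhenHighlighted` stands in for the user's switch input.
enum ScanElement {
    case action(name: String, perform: () -> Void)
    case group(name: String, contents: [ScanElement])  // selecting a group scans its contents
}

func scan(_ elements: [ScanElement], selectWhenHighlighted: (String) -> Bool) {
    for element in elements {
        switch element {
        case .action(let name, let perform):
            print("highlight: \(name)")
            if selectWhenHighlighted(name) { perform(); return }
        case .group(let name, let contents):
            print("highlight: \(name)")
            if selectWhenHighlighted(name) {
                scan(contents, selectWhenHighlighted: selectWhenHighlighted)  // descend
                return
            }
        }
    }
}

let ui: [ScanElement] = [
    .group(name: "Row 1", contents: [
        .action(name: "Mail",  perform: { print("open Mail") }),
        .action(name: "Notes", perform: { print("open Notes") })
    ]),
    .action(name: "Dock", perform: { print("scan the Dock") })
]
// Simulated input: select "Row 1" when highlighted, then "Notes" inside it.
scan(ui) { name in name == "Row 1" || name == "Notes" }
```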