Abstract:
An electronic device, while in an interaction configuration mode: displays a first user interface that includes a plurality of user interface objects; and, while displaying the first user interface, detects one or more gesture inputs on a touch-sensitive surface. For a respective gesture input, the device determines whether one or more user interface objects of the plurality of user interface objects correspond to the respective gesture input. The device visually distinguishes a first set of user interface objects in the plurality of user interface objects that correspond to the detected one or more gesture inputs from a second set of user interface objects in the plurality of user interface objects that do not correspond to the detected one or more gesture inputs. The device detects an input; and, in response to detecting the input, exits the interaction configuration mode and enters a restricted interaction mode.
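The core determination above — which on-screen objects "correspond to" the detected gesture inputs — can be sketched as a simple hit test. The sketch below is illustrative only and not the patented implementation; the `Rect` type, the point-in-rectangle test, and the `partition_objects` helper are all assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    # Hypothetical axis-aligned bounding box for a user interface object.
    x: float
    y: float
    w: float
    h: float

    def contains(self, px, py):
        return self.x <= px <= self.x + self.w and self.y <= py <= self.y + self.h

def partition_objects(objects, gesture_paths):
    """Split UI objects into those touched by any recorded gesture point
    (to be left interactive) and the rest (to be visually distinguished,
    e.g. dimmed, in the restricted interaction mode)."""
    matched, unmatched = [], []
    for name, rect in objects.items():
        hit = any(rect.contains(px, py)
                  for path in gesture_paths for (px, py) in path)
        (matched if hit else unmatched).append(name)
    return matched, unmatched
```

A gesture here is reduced to a list of sampled touch points; a production hit test would also consider gesture shape and timing.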
Abstract:
Methods for presenting symbolic expressions such as mathematical, scientific, or chemical expressions, formulas, or equations are performed by a computing device. One method includes: displaying a first portion of a symbolic expression within a first area of a display screen; while in a first state in which the first area is selected for aural presentation, aurally presenting first information related to the first portion of the symbolic expression; while in the first state, detecting particular user input; in response to detecting the particular user input, performing the steps of: transitioning from the first state to a second state in which a second area, of the display, is selected for aural presentation; determining second information associated with a second portion, of the symbolic expression, that is displayed within the second area; in response to determining the second information, aurally presenting the second information.
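The first-state/second-state transition described above behaves like a small state machine over display areas. The following sketch is a hypothetical reduction: the class name, the `(area_id, spoken_info)` representation, and the wrap-around advance are all assumptions, not the patent's method.

```python
class SymbolicExpressionReader:
    """Steps through areas of a displayed symbolic expression, returning
    the information to present aurally for the currently selected area."""

    def __init__(self, portions):
        self.portions = portions  # list of (area_id, spoken_info) pairs
        self.index = 0            # first state: first area selected

    def current_info(self):
        return self.portions[self.index][1]

    def advance(self):
        # Particular user input transitions selection to the next area
        # and returns the information associated with that area.
        self.index = (self.index + 1) % len(self.portions)
        return self.current_info()
```

For a fraction, for example, the first area might hold the numerator and the second the denominator.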
Abstract:
Systems and processes for activating a screen reading program are disclosed. One process can include receiving a request to activate the screen reading program and prompting the user to perform an action to confirm the request. The action can include making a swiping gesture, shaking the device, covering a proximity sensor, tapping a display, or the like. In some examples, the confirming action must be received within a time limit or the input can be ignored. In response to receipt of the confirmation (e.g., within the time limit), the screen reading program can be activated. The time limit can be indicated using audible notifications at the start and end of the time limit. In another example, a device can detect an event associated with a request to activate a screen reading program. The event can be detected at any time to cause the device to activate the screen reading program.
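The time-limited confirmation check lends itself to a one-line predicate. This is a minimal sketch under the assumption that both events carry comparable timestamps; the function name and the 5-second default are illustrative, not from the disclosure.

```python
def confirm_activation(request_time, confirm_time, time_limit=5.0):
    """Return True if the confirming action (swipe, shake, tap, etc.)
    arrives within the time limit after the activation request;
    otherwise the input is ignored."""
    return 0.0 <= (confirm_time - request_time) <= time_limit
```

Audible notifications at `request_time` and `request_time + time_limit` would bracket the window for the user.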
Abstract:
Disclosed herein are systems and methods that enable low-vision users to interact with touch-sensitive secondary displays. An example method includes, while operating a touch-sensitive secondary display in an accessibility mode: displaying, on the primary display, a first user interface for an application, and displaying, on the touch-sensitive secondary display, a second user interface that includes: (i) application-specific affordances, and (ii) a system-level affordance, where each application-specific affordance and the system-level affordance are displayed with a first display size. The method includes detecting an input at at least one application-specific affordance. In response to detecting the input, and while the input remains in contact with the secondary display: continuing to display the first user interface for the application; and displaying, on the primary display, a zoomed-in representation of the at least one application-specific affordance, where the zoomed-in representation of the application-specific affordance is displayed with a second display size that is larger than the first display size.
Abstract:
An electronic device with a display and a touch-sensitive surface displays, on the display, a first visual indicator that corresponds to a virtual touch. The device receives a first input from an adaptive input device. In response to receiving the first input from the adaptive input device, the device displays a first menu on the display. The first menu includes a virtual touches selection icon. In response to detecting selection of the virtual touches selection icon, a menu of virtual multitouch contacts is displayed.
Abstract:
Disclosed herein are systems and methods that enable low-vision users to interact with touch-sensitive secondary displays. An example method includes: displaying, on a primary display, a first user interface for an application and displaying, on a touch-sensitive secondary display, a second user interface that includes a plurality of application-specific affordances that control functions of the application. Each respective affordance is displayed with a first display size. The method also includes: detecting, via the secondary display, an input that contacts at least one application-specific affordance. In response to detecting the input and while it remains in contact with the secondary display, the method includes: (i) continuing to display the first user interface on the primary display and (ii) displaying, on the primary display, a zoomed-in representation of the at least one application-specific affordance. The zoomed-in representation is displayed with a second display size that is larger than the first display size.
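The hold-to-magnify behavior in the two abstracts above can be modeled as a function of the current touch state: while a contact rests on a secondary-display affordance, the primary display keeps the application UI and additionally shows a magnified copy. The function name, the layer-list representation, and the 3x zoom factor below are assumptions for illustration.

```python
def primary_display_content(app_ui, touched_affordance, zoom=3.0):
    """Return the layers shown on the primary display. While a touch rests
    on an affordance of the secondary display, a zoomed-in representation
    (second display size > first display size) is overlaid."""
    assert zoom > 1.0  # second size must be larger than the first
    layers = [app_ui]
    if touched_affordance is not None:
        name, first_size = touched_affordance
        layers.append((f"zoomed:{name}", first_size * zoom))
    return layers
```

When the contact lifts, `touched_affordance` becomes `None` and only the first user interface remains.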
Abstract:
An electronic device with a display showing a user interface (UI) automatically adjusts the zoom level of a magnification region. The electronic device receives a request to magnify at least a portion of the display showing the UI. The electronic device determines the context in which it was operating at the time of the magnification request. The context comprises display parameters, environmental parameters, or both. The electronic device displays the UI at a zoom level determined based on user preferences. Upon detecting a text input condition, the device resizes and optionally moves the magnification region so that the resized magnification region does not overlap with the newly displayed composition interface window. The device uses an auto-snap feature when, upon scrolling the content within the magnification region away from a first boundary, a second boundary opposite the first boundary is displayed within the magnification region.
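Choosing a zoom level from stored user preferences keyed by context can be sketched as a lookup. The context key (application plus ambient light) and the fallback default are hypothetical choices, not the patented parameter set.

```python
def zoom_for_context(context, preferences, default=2.0):
    """Pick the user's stored zoom level for the current operating context
    (display parameters, environmental parameters, or both), falling back
    to a default magnification when no preference has been recorded."""
    key = (context.get("app"), context.get("ambient_light"))
    return preferences.get(key, default)
```

A fuller implementation would also handle the text-input resize and the auto-snap at content boundaries.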
Abstract:
Methods, systems, and user interfaces enable users to identify a user-selected location of a user interface with reduced movement and motor effort. A first selection indicator is overlaid on the user interface and moved in a first direction. Responsive to receiving a first user input to stop movement of the first selection indicator, movement of the first selection indicator is ceased over a first location of the user interface. A second selection indicator is overlaid on the user interface and the second selection indicator is moved in a second direction. Responsive to receiving a second user input to stop movement of the second selection indicator, movement of the second selection indicator is ceased over a second location of the user interface. The user-selected location of the user interface is selected based at least in part on the first and the second locations of the user interface.
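The two-pass selection above reduces to intersecting two swept indicator positions. Assuming, purely for illustration, a constant sweep speed and indicators that wrap at the screen edge, the selected point is determined by the two stop times:

```python
def select_point(stop_time_1, stop_time_2, speed, width, height):
    """Two-pass point selection: a first indicator sweeps in one direction
    (here horizontally) until the first input stops it, then a second
    indicator sweeps in a second direction (vertically) until the second
    input. The selected location combines the two stopped positions."""
    x = (speed * stop_time_1) % width   # first location (first direction)
    y = (speed * stop_time_2) % height  # second location (second direction)
    return (x, y)
```

Two timed single-switch inputs thus select any point on screen, which is what reduces the required movement and motor effort.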
Abstract:
A method for controlling a peripheral from a group of computing devices is provided. The method sets up a group of computing devices for providing media content and control settings to a peripheral device such as a hearing aid. The computing devices in the group are interconnected by a network and exchange data with each other regarding the peripheral. A master device in the group is directly paired with the peripheral device and can use the pairing connection to provide media content or to apply the control settings to the peripheral device. The peripheral device is paired with only the master device of the group. A slave device can request to directly pair with the peripheral device and become the master device in order to provide media content to the peripheral.
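The master/slave arrangement — only one group member holds the direct pairing, and a slave can take over the master role — can be sketched as a small class. The names below (`PeripheralGroup`, `request_master`) are illustrative; the actual pairing handoff would involve network coordination among the devices.

```python
class PeripheralGroup:
    """A group of computing devices serving one peripheral (e.g. a hearing
    aid). Only the master is directly paired with the peripheral; a slave
    may request the master role in order to stream media to it."""

    def __init__(self, devices, master):
        assert master in devices
        self.devices = set(devices)
        self.master = master  # sole device paired with the peripheral

    def paired_devices(self):
        return {self.master}

    def request_master(self, device):
        # A slave becomes the master, taking over the direct pairing.
        assert device in self.devices
        self.master = device
        return self.paired_devices()
```

Control settings, by contrast, could be forwarded to whichever device is currently master rather than requiring a role change.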