Abstract:
Disclosed herein are systems and methods that enable low-vision users to interact with touch-sensitive secondary displays. An example method includes: displaying, on a primary display, a first user interface for an application and displaying, on a touch-sensitive secondary display, a second user interface that includes a plurality of application-specific affordances that control functions of the application. Each respective affordance is displayed with a first display size. The method also includes: detecting, via the secondary display, an input that contacts at least one application-specific affordance. In response to detecting the input and while it remains in contact with the secondary display, the method includes: (i) continuing to display the first user interface on the primary display and (ii) displaying, on the primary display, a zoomed-in representation of the at least one application-specific affordance. The zoomed-in representation is displayed with a second display size that is larger than the first display size.
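The interaction described in this abstract amounts to a small piece of state: while a contact persists on the secondary display, the primary display carries an enlarged copy of the touched affordance. The following Swift sketch is illustrative only; the types Affordance and AccessibilityZoomController and the 3x zoom factor are assumptions, not anything recited in the abstract.

```swift
// Minimal sketch (hypothetical types): mirroring a touched secondary-display
// affordance onto the primary display at a larger size while contact persists.
struct Affordance {
    let identifier: String
    var displaySize: Double   // points; the "first display size"
}

final class AccessibilityZoomController {
    private(set) var primaryOverlay: Affordance?   // zoomed-in representation, if any
    let zoomFactor: Double

    init(zoomFactor: Double = 3.0) {
        self.zoomFactor = zoomFactor
    }

    /// Called when a contact begins on an application-specific affordance
    /// shown on the touch-sensitive secondary display.
    func contactBegan(on affordance: Affordance) {
        // Show a zoomed-in representation on the primary display with a
        // second display size larger than the first; the first user
        // interface underneath is left as it is.
        primaryOverlay = Affordance(
            identifier: affordance.identifier,
            displaySize: affordance.displaySize * zoomFactor
        )
    }

    /// Called when the contact lifts off the secondary display.
    func contactEnded() {
        primaryOverlay = nil   // remove the zoomed-in representation
    }
}

// Usage: touch a 30-point affordance; a 90-point representation appears.
let controller = AccessibilityZoomController()
controller.contactBegan(on: Affordance(identifier: "bold", displaySize: 30))
print(controller.primaryOverlay?.displaySize ?? 0)   // 90.0
controller.contactEnded()
```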
Abstract:
Techniques are disclosed relating to providing audio prompts. In one embodiment, a computing device includes a display, an audio circuit coupled to a speaker, first and second processors, and memory. The memory has first program instructions executable by the first processor to provide, via a first operating system of the computing device, a visual prompt to the display to cause the display to present the visual prompt to a user, and to send, to the second processor, a request to provide an audio prompt corresponding to the visual prompt via the speaker to the user. The computing device also includes memory having second program instructions executable by the second processor to, in response to the request, provide, via a second operating system, an instruction to the audio circuit to play the audio prompt via the speaker.
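As a rough illustration of the two-processor split, the sketch below models the first operating system presenting the visual prompt and forwarding a request, and the second operating system instructing the audio circuit. All names (AudioPromptRequest, SecondProcessorAudioService, ConsoleAudioCircuit, and so on) are hypothetical stand-ins, not interfaces from the disclosure.

```swift
// Minimal sketch (hypothetical names): the first processor presents a visual
// prompt and forwards a request; the second processor's handler tells the
// audio circuit to play the matching audio prompt.
struct AudioPromptRequest {
    let promptID: String
}

protocol AudioCircuit {
    func play(promptID: String)
}

struct ConsoleAudioCircuit: AudioCircuit {
    func play(promptID: String) {
        print("Audio circuit: playing prompt \(promptID) through the speaker")
    }
}

/// Runs under the second operating system; services requests from the first processor.
final class SecondProcessorAudioService {
    private let circuit: AudioCircuit
    init(circuit: AudioCircuit) { self.circuit = circuit }

    func handle(_ request: AudioPromptRequest) {
        circuit.play(promptID: request.promptID)
    }
}

/// Runs under the first operating system; drives the display and requests audio.
final class FirstProcessorPromptController {
    private let audioService: SecondProcessorAudioService
    init(audioService: SecondProcessorAudioService) { self.audioService = audioService }

    func present(promptID: String, text: String) {
        print("Display: \(text)")                                     // visual prompt
        audioService.handle(AudioPromptRequest(promptID: promptID))   // matching audio prompt
    }
}

// Usage
let service = SecondProcessorAudioService(circuit: ConsoleAudioCircuit())
let promptController = FirstProcessorPromptController(audioService: service)
promptController.present(promptID: "low-battery", text: "Battery is low")
```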
Abstract:
An electronic device with a display showing a user interface (UI) automatically adjusts the zoom level of a magnification region. The electronic device receives a request to magnify at least a portion of the display showing the UI. The electronic device determines the context in which it was operating at the time of the magnification request. The context comprises display parameters, environmental parameters, or both. The electronic device displays the UI at a zoom level determined based on user preferences. Upon detecting a text input condition, the device resizes and optionally moves the magnification region so that the resized magnification region does not overlap the newly displayed composition interface window. The device uses an auto-snap feature when scrolling the content within the magnification region away from a first boundary causes a second boundary, opposite the first, to be displayed.
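A minimal sketch of the context-based zoom selection and the region resizing follows. The context buckets, the brightness and ambient-light fields, and the resizing rule are illustrative assumptions; the abstract does not specify particular parameters or thresholds.

```swift
// Minimal sketch (hypothetical types and rules): choosing a zoom level from
// context and resizing the magnification region so it does not overlap a
// newly shown composition (text-entry) window.
struct Rect {
    var x, y, width, height: Double
    func intersects(_ other: Rect) -> Bool {
        x < other.x + other.width && other.x < x + width &&
        y < other.y + other.height && other.y < y + height
    }
}

struct Context {
    var displayBrightness: Double     // display parameter, 0...1
    var ambientLight: Double          // environmental parameter, lux
}

struct UserPreferences {
    var zoomByContextBucket: [String: Double]  // e.g. "dim": 3.0, "bright": 2.0
}

func zoomLevel(for context: Context, preferences: UserPreferences) -> Double {
    // Illustrative rule: dim surroundings map to a stronger preferred zoom.
    let bucket = context.ambientLight < 50 ? "dim" : "bright"
    return preferences.zoomByContextBucket[bucket] ?? 2.0
}

/// Shrinks the magnification region so it stays clear of the composition
/// window, e.g. an on-screen keyboard along the bottom edge.
func resizedMagnificationRegion(_ region: Rect, avoiding composition: Rect) -> Rect {
    guard region.intersects(composition) else { return region }
    var resized = region
    resized.height = max(0, composition.y - region.y)
    return resized
}

// Usage
let prefs = UserPreferences(zoomByContextBucket: ["dim": 3.0, "bright": 2.0])
let zoom = zoomLevel(for: Context(displayBrightness: 0.4, ambientLight: 20), preferences: prefs)
let magnifier = Rect(x: 0, y: 0, width: 400, height: 600)
let keyboard = Rect(x: 0, y: 450, width: 400, height: 150)
print(zoom)                                                       // 3.0
print(resizedMagnificationRegion(magnifier, avoiding: keyboard))  // height becomes 450
```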
Abstract:
In accordance with some embodiments, a method is performed at a first device with one or more processors, non-transitory memory, and a display. The method includes displaying, on the display, a device control transfer affordance while operating the first device based on user input from an input device that is in communication with the first device. The method includes receiving a device control transfer user input from the input device selecting the device control transfer affordance that is displayed on the display of the first device. In response to receiving the device control transfer user input, the method includes configuring a second device to be operated based on user input from the input device and ceasing to operate the first device based on user input from the input device.
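The control-transfer step can be pictured as re-pointing a router from the first device to the second, as in the hypothetical sketch below (Device and InputDeviceRouter are invented names used only for illustration).

```swift
// Minimal sketch (hypothetical names): selecting the control-transfer
// affordance reconfigures the shared input device to operate the second
// device and stops it from operating the first.
final class Device {
    let name: String
    private(set) var isOperatedByInputDevice = false
    init(name: String) { self.name = name }
    func attachInput() { isOperatedByInputDevice = true }
    func detachInput() { isOperatedByInputDevice = false }
}

final class InputDeviceRouter {
    private(set) var target: Device

    init(initialTarget: Device) {
        target = initialTarget
        target.attachInput()
    }

    /// Invoked when the device control transfer affordance displayed on the
    /// first device is selected via the input device.
    func transferControl(to second: Device) {
        target.detachInput()     // cease operating the first device
        second.attachInput()     // configure the second device for this input
        target = second
    }
}

// Usage
let laptop = Device(name: "first device")
let tablet = Device(name: "second device")
let router = InputDeviceRouter(initialTarget: laptop)
router.transferControl(to: tablet)
print(laptop.isOperatedByInputDevice, tablet.isOperatedByInputDevice)  // false true
```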
Abstract:
In an example method, an electronic device receives data regarding a graphical user interface to be presented on a display of the electronic device. The electronic device identifies one or more key regions of the graphical user interface based on the received data and one or more rules. The one or more rules pertain to at least one of a geometric shape, a geometric size, a location, or a hierarchical property. The graphical user interface is presented on the display of the electronic device, and at least one of the key regions of the graphical user interface is indicated using the electronic device.
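One way to picture the rule-based identification is as a set of predicates applied to candidate regions, as in this sketch. The specific rules (size, position, hierarchy depth) and their thresholds are assumptions chosen for illustration, not rules recited in the abstract.

```swift
// Minimal sketch (hypothetical rules): identifying key regions of an
// interface from received layout data using simple geometric and
// hierarchical rules, then indicating them (here, by printing).
struct UIRegion {
    let name: String
    let width: Double
    let height: Double
    let yPosition: Double     // distance from the top of the screen
    let hierarchyDepth: Int   // 0 = root container
}

typealias KeyRegionRule = (UIRegion) -> Bool

// Rules pertaining to geometric size, location, and hierarchical property.
let largeEnough: KeyRegionRule   = { $0.width * $0.height >= 10_000 }
let nearTop: KeyRegionRule       = { $0.yPosition < 100 }
let shallowInTree: KeyRegionRule = { $0.hierarchyDepth <= 2 }

func keyRegions(in regions: [UIRegion], rules: [KeyRegionRule]) -> [UIRegion] {
    regions.filter { region in rules.allSatisfy { $0(region) } }
}

// Usage: indicate the identified key regions to the user.
let received = [
    UIRegion(name: "navigation bar", width: 400, height: 44, yPosition: 0, hierarchyDepth: 1),
    UIRegion(name: "content list", width: 400, height: 600, yPosition: 44, hierarchyDepth: 1),
    UIRegion(name: "thumbnail", width: 40, height: 40, yPosition: 120, hierarchyDepth: 4),
]
for region in keyRegions(in: received, rules: [largeEnough, nearTop, shallowInTree]) {
    print("Key region:", region.name)   // prints "navigation bar" and "content list"
}
```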
Abstract:
Methods for presenting symbolic expressions such as mathematical, scientific, or chemical expressions, formulas, or equations are performed by a computing device. One method includes: displaying a first portion of a symbolic expression within a first area of a display screen; while in a first state in which the first area is selected for aural presentation, aurally presenting first information related to the first portion of the symbolic expression; while in the first state, detecting particular user input; and, in response to detecting the particular user input, performing the steps of: transitioning from the first state to a second state in which a second area of the display is selected for aural presentation; determining second information associated with a second portion of the symbolic expression that is displayed within the second area; and, in response to determining the second information, aurally presenting the second information.
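The state transition between areas can be sketched as a small reader that tracks which area is selected for aural presentation and speaks the associated information when user input advances the selection. The types below and the printed output are invented stand-ins for an actual speech interface.

```swift
// Minimal sketch (hypothetical model): display areas each hold a portion of a
// symbolic expression; user input moves the aural-presentation selection from
// one area to the next and "speaks" (here, prints) the associated information.
struct ExpressionArea {
    let portion: String        // portion of the symbolic expression shown here
    let spokenInfo: String     // information aurally presented for this portion
}

final class SymbolicExpressionReader {
    private let areas: [ExpressionArea]
    private var selectedIndex = 0   // first state: first area selected

    init(areas: [ExpressionArea]) {
        self.areas = areas
        speakCurrent()
    }

    private func speakCurrent() {
        print("Speaking:", areas[selectedIndex].spokenInfo)
    }

    /// Handles the particular user input (e.g. a swipe) by transitioning to
    /// the state in which the next area is selected, determining its
    /// information, and aurally presenting it.
    func handleNextAreaInput() {
        guard selectedIndex + 1 < areas.count else { return }
        selectedIndex += 1
        speakCurrent()
    }
}

// Usage with a quadratic-formula fragment split across two areas.
let reader = SymbolicExpressionReader(areas: [
    ExpressionArea(portion: "-b", spokenInfo: "negative b"),
    ExpressionArea(portion: "± √(b² − 4ac)",
                   spokenInfo: "plus or minus the square root of b squared minus four a c"),
])
reader.handleNextAreaInput()   // transitions to the second area and speaks its info
```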
Abstract:
Methods, systems, and user interfaces enable users to identify a user-selected location of a user interface with reduced movement and motor effort. A first selection indicator is overlaid on the user interface and moved in a first direction. Responsive to receiving a first user input to stop movement of the first selection indicator, movement of the first selection indicator is ceased over a first location of the user interface. A second selection indicator is overlaid on the user interface and moved in a second direction. Responsive to receiving a second user input to stop movement of the second selection indicator, movement of the second selection indicator is ceased over a second location of the user interface. The user-selected location of the user interface is selected based at least in part on the first and the second locations of the user interface.
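The two-step selection can be modeled as two successive sweeps, each halted by a single user input, with the selected point derived from the two stop positions. The sketch below assumes a simple linear sweep and uses invented type names.

```swift
// Minimal sketch (hypothetical scanning model): a first indicator sweeps in
// one direction until the user stops it, a second sweeps in the other
// direction, and the selected point is derived from the two stop locations.
struct Point { var x: Double; var y: Double }

final class TwoStepLocationSelector {
    private(set) var firstStop: Double?    // e.g. the x coordinate
    private(set) var secondStop: Double?   // e.g. the y coordinate

    private var indicatorPosition = 0.0
    private let step: Double

    init(step: Double = 5.0) { self.step = step }

    /// Advances the currently moving selection indicator by one step.
    func tick() { indicatorPosition += step }

    /// First user input: stop the first indicator over its current location.
    func stopFirstIndicator() {
        firstStop = indicatorPosition
        indicatorPosition = 0        // second indicator starts its own sweep
    }

    /// Second user input: stop the second indicator and resolve the point.
    func stopSecondIndicator() -> Point? {
        guard let x = firstStop else { return nil }
        secondStop = indicatorPosition
        return Point(x: x, y: indicatorPosition)
    }
}

// Usage: two inputs select a point with minimal motor effort.
let selector = TwoStepLocationSelector()
(0..<20).forEach { _ in selector.tick() }      // first indicator moves across the UI
selector.stopFirstIndicator()                  // first input: x = 100
(0..<8).forEach { _ in selector.tick() }       // second indicator moves down
print(selector.stopSecondIndicator() ?? Point(x: 0, y: 0))  // Point(x: 100.0, y: 40.0)
```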
Abstract:
Broadly speaking, the embodiments disclosed herein describe replacing a current hearing aid profile stored in a hearing aid. In one embodiment, the hearing aid profile is updated by sending a hearing aid profile update request to a hearing aid profile service, receiving the updated hearing aid profile from the hearing aid profile service, and replacing the current hearing aid profile in the hearing aid with the updated hearing aid profile.
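A compact sketch of the request/receive/replace flow is below. The HearingAidProfileService protocol, the profile fields, and the stub service are hypothetical; the abstract does not name a concrete interface.

```swift
// Minimal sketch (hypothetical service and profile types): requesting an
// updated hearing aid profile from a profile service and replacing the
// profile currently stored in the hearing aid.
struct HearingAidProfile {
    let version: Int
    let gainByBand: [Double]   // per-frequency-band amplification settings
}

protocol HearingAidProfileService {
    /// Returns the updated profile for a given hearing aid identifier.
    func fetchUpdatedProfile(for hearingAidID: String) -> HearingAidProfile
}

final class HearingAid {
    let id: String
    private(set) var currentProfile: HearingAidProfile
    init(id: String, profile: HearingAidProfile) {
        self.id = id
        self.currentProfile = profile
    }
    func replaceProfile(with updated: HearingAidProfile) {
        currentProfile = updated
    }
}

func updateProfile(of hearingAid: HearingAid, using service: HearingAidProfileService) {
    // Send the update request, receive the updated profile, and replace
    // the current profile stored in the hearing aid.
    let updated = service.fetchUpdatedProfile(for: hearingAid.id)
    hearingAid.replaceProfile(with: updated)
}

// Usage with a stub service.
struct StubService: HearingAidProfileService {
    func fetchUpdatedProfile(for hearingAidID: String) -> HearingAidProfile {
        HearingAidProfile(version: 2, gainByBand: [6.0, 9.0, 12.0])
    }
}
let aid = HearingAid(id: "left", profile: HearingAidProfile(version: 1, gainByBand: [5.0, 8.0, 10.0]))
updateProfile(of: aid, using: StubService())
print(aid.currentProfile.version)   // 2
```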
Abstract:
Disclosed herein are systems and methods that enable low-vision users to interact with touch-sensitive secondary displays. An example method includes, while operating a touch-sensitive secondary display in an accessibility mode: displaying, on a primary display, a first user interface for an application, and displaying, on the touch-sensitive secondary display, a second user interface that includes: (i) application-specific affordances, and (ii) a system-level affordance, where each application-specific affordance and the system-level affordance are displayed with a first display size. The method includes detecting an input at at least one of the application-specific affordances. In response to detecting the input, and while the input remains in contact with the touch-sensitive secondary display: continuing to display the first user interface for the application; and displaying, on the primary display, a zoomed-in representation of the at least one application-specific affordance, where the zoomed-in representation is displayed with a second display size that is larger than the first display size.
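This abstract adds an accessibility-mode condition and a system-level affordance to the method summarized in the first abstract. The sketch below (all names invented) illustrates only that gating decision: while accessibility mode is active, a touch yields a zoomed-in preview instead of immediately activating the control.

```swift
// Minimal sketch (hypothetical types): while the secondary display is in an
// accessibility mode, a touch on either an application-specific or the
// system-level affordance yields a zoomed-in preview instead of immediately
// activating the control.
enum AffordanceKind { case applicationSpecific, systemLevel }

struct SecondaryAffordance {
    let title: String
    let kind: AffordanceKind
    let displaySize: Double    // the shared first display size
}

enum TouchResult {
    case activated(String)                 // normal mode: the control fires
    case zoomPreview(String, Double)       // accessibility mode: preview on primary display
}

func handleTouch(on affordance: SecondaryAffordance,
                 accessibilityModeEnabled: Bool,
                 zoomFactor: Double = 3.0) -> TouchResult {
    guard accessibilityModeEnabled else {
        return .activated(affordance.title)
    }
    // Show a zoomed-in representation at a second, larger display size while
    // the first user interface on the primary display stays as it is.
    return .zoomPreview(affordance.title, affordance.displaySize * zoomFactor)
}

// Usage
let escapeKey = SecondaryAffordance(title: "esc", kind: .systemLevel, displaySize: 30)
print(handleTouch(on: escapeKey, accessibilityModeEnabled: true))
// zoomPreview("esc", 90.0)
```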