Abstract:
An electronic device with a display and a touch-sensitive surface displays, on the display, a first visual indicator that corresponds to a virtual touch. The device receives a first input from an adaptive input device. In response to receiving the first input from the adaptive input device, the device displays a first menu on the display. The first menu includes a virtual touches selection icon. In response to detecting selection of the virtual touches selection icon, a menu of virtual multitouch contacts is displayed.
Abstract:
Methods, systems, and user interfaces enable users to identify a user-selected location of a user interface with reduced movement and motor effort. A first selection indicator is overlaid on the user interface and moved in a first direction. Responsive to receiving a first user input to stop movement of the first selection indicator, movement of the first selection indicator is ceased over a first location of the user interface. A second selection indicator is overlaid on the user interface and the second selection indicator is moved in a second direction. Responsive to receiving a second user input to stop movement of the second selection indicator, movement of the second selection indicator is ceased over a second location of the user interface. The user-selected location of the user interface is selected based at least in part on the first and the second locations of the user interface.
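The two-pass selection described above can be sketched in code: a first sweep fixes one coordinate, a second sweep fixes the other, and the user-selected location combines the two stop positions. The functions `sweep` and `select_location` and the tick-based stop model are hypothetical illustrations of the idea, not the patented implementation.

```python
def sweep(length, stop_tick, speed=1):
    """Advance an indicator across `length` pixels, one step per tick,
    and return the position where the user's stop input arrives.
    The indicator clamps at the far edge rather than wrapping."""
    position = 0
    for _ in range(stop_tick):
        position = min(position + speed, length - 1)
    return position

def select_location(width, height, first_stop_tick, second_stop_tick):
    """Combine the stop positions of a horizontal sweep (first direction)
    and a vertical sweep (second direction) into one (x, y) location."""
    x = sweep(width, first_stop_tick)    # first selection indicator
    y = sweep(height, second_stop_tick)  # second selection indicator
    return (x, y)
```

A single switch press per sweep is enough to pin down each coordinate, which is what reduces the motor effort compared with freehand pointing.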
Abstract:
An electronic device with a display showing a user interface (UI) automatically adjusts the zoom level of a magnification region. The electronic device receives a request to magnify at least a portion of the display showing the UI. The electronic device determines the context in which it was operating at the time of the magnification request. The context comprises display parameters, environmental parameters, or both. The electronic device displays the UI at a zoom level determined based on user preferences. Upon detecting a text input condition, the device resizes and optionally moves the magnification region so that the resized magnification region does not overlap with the newly displayed composition interface window. The device uses an auto-snap feature when scrolling the content within the magnification region away from a first boundary causes a second, opposite boundary to be displayed.
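The preference- and context-based zoom selection above can be sketched as follows. The `choose_zoom` function, its per-app preference lookup, and the `zoom_limits` context key are hypothetical assumptions used to illustrate the idea of starting from a user preference and clamping it to the current operating context; they are not the device's actual API.

```python
def choose_zoom(user_prefs, context):
    """Pick a zoom level for the magnification region: start from the
    user's preferred level for the current app (falling back to a
    default), then clamp it to limits derived from the display context."""
    base = user_prefs.get(context.get("app"),
                          user_prefs.get("default", 2.0))
    lo, hi = context.get("zoom_limits", (1.0, 5.0))
    return max(lo, min(hi, base))
```

The same shape extends naturally to environmental parameters (e.g. ambient light) by folding them into the limits or the base level.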
Abstract:
While an electronic device with a display and a touch-sensitive surface is in a screen reader accessibility mode, the device displays an application launcher screen including a plurality of application icons. A respective application icon corresponds to a respective application stored in the device. The device detects a sequence of one or more gestures on the touch-sensitive surface that correspond to one or more characters. A respective gesture that corresponds to a respective character is a single finger gesture that moves across the touch-sensitive surface along a respective path that corresponds to the respective character. The device determines whether the detected sequence of one or more gestures corresponds to a respective application icon of the plurality of application icons, and, in response to determining that the detected sequence of one or more gestures corresponds to the respective application icon, performs a predefined operation associated with the respective application icon.
Abstract:
Some embodiments of the invention provide a mobile device with multiple access modes. The device in some embodiments has at least two access modes, a primary access mode and a secondary access mode, that provide different restrictions for accessing the applications and/or data that are stored on the device. In some embodiments, the primary access mode of the device provides unfettered access to all of the device's applications and/or data that are available to a user, while its secondary access mode provides access to a limited set of applications and/or data that are stored on the device.
Abstract:
Systems and processes for scanning a user interface are disclosed. One process can include scanning multiple elements within a user interface by highlighting the elements. The process can further include receiving a selection while one of the elements is highlighted and performing an action on the element that was highlighted when the selection was received. The action can include scanning the contents of the selected element or performing an action associated with the selected element. The process can be used to navigate an array of application icons, a menu of options, a standard desktop or laptop operating system interface, or the like. The process can also be used to perform gestures (e.g., flick, tap, or freehand gestures) on a touch-sensitive device, as well as mouse and trackpad gestures.
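The core scanning loop above can be sketched in a few lines: the highlight cycles through the elements, and the element highlighted at the moment the selection input arrives is returned. The `scan` function and its step-indexed selection model are a hypothetical simplification for illustration; a real implementation would be timer-driven and event-based.

```python
import itertools

def scan(elements, selection_at_step):
    """Cycle a highlight through `elements`, wrapping around, and return
    the element that is highlighted when the selection arrives at step
    `selection_at_step` (step 0 highlights the first element)."""
    cycle = itertools.cycle(elements)
    current = None
    for _ in range(selection_at_step + 1):
        current = next(cycle)  # advance the highlight to the next element
    return current
```

Nested scanning (scanning the contents of a selected element) amounts to calling the same loop again on the selected element's children.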
Abstract:
A dictation computer that includes a fan speed regulator is described. The fan speed regulator monitors a speech recognition unit to determine when the speech recognition unit is activated. Upon detection that the speech recognition unit is activated, the fan speed regulator ducks the speed of a cooling fan embedded within the dictation computer to an optimized speed of rotation over a delay time interval. The fan speed regulator may include components to adapt the optimized speed and delay time to the characteristics of the dictation computer and the user. Other embodiments are also described.
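The fan-speed ducking described above — ramping to an optimized speed over a delay interval rather than dropping instantly — can be sketched as a linear ramp. The `duck_fan_speed` function, its tick model, and the RPM figures in the test are illustrative assumptions, not values from the patent.

```python
def duck_fan_speed(current_rpm, target_rpm, delay_seconds, tick_seconds=1.0):
    """Ramp the fan from its current speed to the optimized target speed
    over the delay interval, yielding one RPM setpoint per tick.
    A gradual ramp avoids an audible step change during dictation."""
    steps = max(1, int(delay_seconds / tick_seconds))
    delta = (target_rpm - current_rpm) / steps
    rpms = [current_rpm + delta * (i + 1) for i in range(steps)]
    rpms[-1] = target_rpm  # land exactly on the optimized speed
    return rpms
```

Adapting the target speed and delay to the machine and user, as the abstract suggests, would amount to choosing `target_rpm` and `delay_seconds` from measured thermal and acoustic characteristics.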
Abstract:
An electronic device with a display and a touch-sensitive surface displays, on the display, a first visual indicator. The electronic device receives a first single touch input on the touch-sensitive surface at a location that corresponds to the first visual indicator; and, in response to detecting the first single touch input on the touch-sensitive surface at a location that corresponds to the first visual indicator, replaces display of the first visual indicator with display of a first menu. The first menu includes a virtual touches selection icon. In response to detecting selection of the virtual touches selection icon, the electronic device displays a menu of virtual multitouch contacts.
Abstract:
Disclosed herein are systems and methods that enable low-vision users to interact with touch-sensitive secondary displays. An example method includes, while operating a touch-sensitive secondary display in an accessibility mode: displaying, on the primary display, a first user interface for an application, and displaying, on the touch-sensitive secondary display, a second user interface that includes: (i) application-specific affordances, and (ii) a system-level affordance, where each application-specific affordance and the system-level affordance are displayed with a first display size. The method includes detecting an input at an application-specific affordance. In response to detecting the input, and while the input remains in contact with the touch-sensitive secondary display: continuing to display the first user interface for the application; and displaying, on the primary display, a zoomed-in representation of the application-specific affordance, where the zoomed-in representation is displayed with a second display size that is larger than the first display size.