Abstract:
An electronic device with a camera obtains, with the camera, one or more images of a scene. The electronic device detects a respective feature within the scene. In accordance with a determination that a first mode is active on the device, the electronic device provides a first audible description of the scene. The first audible description provides information indicating a size and/or position of the respective feature relative to a first set of divisions applied to the one or more images of the scene. In accordance with a determination that the first mode is not active on the device, the electronic device provides a second audible description of the scene. The second audible description is distinct from the first audible description and does not include the information indicating the size and/or position of the respective feature relative to the first set of divisions.
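As an illustrative sketch only, not the patented implementation, the following Swift snippet models the division-relative description logic described above. All names (Feature, describe) and the choice of a 3×3 grid are hypothetical assumptions.

```swift
// Hypothetical sketch: describe a detected feature relative to a 3x3 grid
// of divisions applied to the image. Names and thresholds are illustrative.
struct Feature {
    let label: String
    let centerX: Double  // normalized 0...1 in image coordinates
    let centerY: Double
    let area: Double     // normalized fraction of the image
}

func describe(_ feature: Feature, firstModeActive: Bool) -> String {
    guard firstModeActive else {
        // Second description: no division-relative information.
        return "\(feature.label) detected in the scene."
    }
    // First description: size and position relative to the set of divisions.
    let columns = ["left", "center", "right"]
    let rows = ["top", "middle", "bottom"]
    let col = columns[min(Int(feature.centerX * 3), 2)]
    let row = rows[min(Int(feature.centerY * 3), 2)]
    let size = feature.area > 0.33 ? "large" : feature.area > 0.1 ? "medium" : "small"
    return "\(size) \(feature.label) in the \(row) \(col) of the frame."
}

// Example: a face occupying part of the upper-left ninth of the image.
let face = Feature(label: "face", centerX: 0.15, centerY: 0.2, area: 0.08)
print(describe(face, firstModeActive: true))   // small face in the top left of the frame.
print(describe(face, firstModeActive: false))  // face detected in the scene.
```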
Abstract:
A device implementing a system for machine-learning based gesture recognition includes at least one processor configured to receive, from a first sensor of the device, first sensor output of a first type, and to receive, from a second sensor of the device, second sensor output of a second type that differs from the first type. The at least one processor is further configured to provide the first sensor output and the second sensor output as inputs to a machine learning model, the machine learning model having been trained to output a predicted gesture based on sensor output of the first type and sensor output of the second type. The at least one processor is further configured to determine the predicted gesture based on an output from the machine learning model, and to perform, in response to determining the predicted gesture, a predetermined action on the device.
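A minimal Swift sketch of the two-sensor pipeline described above, assuming accelerometer and gyroscope outputs as the two sensor types. The threshold model is a toy stand-in for a trained ML model; all names are hypothetical.

```swift
// Hypothetical sketch of the two-sensor gesture pipeline.
enum Gesture { case tap, swipe, shake, none }

protocol GestureModel {
    // Trained to predict a gesture from sensor outputs of two different types.
    func predict(accelerometer: [Double], gyroscope: [Double]) -> Gesture
}

struct ThresholdModel: GestureModel {
    // Toy stand-in: a real model would be learned, not hand-coded.
    func predict(accelerometer: [Double], gyroscope: [Double]) -> Gesture {
        let accelPeak = accelerometer.map(abs).max() ?? 0
        let gyroPeak = gyroscope.map(abs).max() ?? 0
        if accelPeak > 2.0 && gyroPeak > 1.0 { return .shake }
        if accelPeak > 1.0 { return .tap }
        return .none
    }
}

func handle(accel: [Double], gyro: [Double], model: GestureModel) {
    // Determine the predicted gesture, then perform a predetermined action.
    switch model.predict(accelerometer: accel, gyroscope: gyro) {
    case .tap:   print("action: toggle playback")
    case .swipe: print("action: next item")
    case .shake: print("action: undo")
    case .none:  break
    }
}

handle(accel: [0.1, 2.4, 0.3], gyro: [1.5, 0.2], model: ThresholdModel())  // action: undo
```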
Abstract:
An electronic device obtains one or more images of a scene. After obtaining the one or more images of the scene, the electronic device detects a plurality of objects within the scene, provides a first audible description of the scene, and detects a user input that selects a respective object of the plurality of objects within the scene. The first audible description of the scene provides information corresponding to the plurality of objects as a group. In response to the user input selecting the respective object within the scene, the electronic device provides a second audible description of the respective object. The second audible description is distinct from the first audible description and includes a description of one or more characteristics specific to the respective object.
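A brief Swift sketch, under assumed names (DetectedObject, groupDescription, objectDescription), of the two description levels described above: a group-level description first, then an object-specific one on selection. This is illustrative only.

```swift
// Hypothetical sketch: group description first, then a per-object description
// when the user selects one object of the plurality.
struct DetectedObject {
    let name: String
    let traits: [String]  // characteristics specific to this object
}

func groupDescription(of objects: [DetectedObject]) -> String {
    // First description: information about the plurality of objects as a group.
    "Scene contains \(objects.count) objects: " + objects.map(\.name).joined(separator: ", ") + "."
}

func objectDescription(of object: DetectedObject) -> String {
    // Second description: characteristics specific to the selected object.
    "\(object.name): " + object.traits.joined(separator: ", ") + "."
}

let scene = [
    DetectedObject(name: "mug", traits: ["blue", "on the table"]),
    DetectedObject(name: "laptop", traits: ["open", "screen lit"]),
]
print(groupDescription(of: scene))      // Scene contains 2 objects: mug, laptop.
print(objectDescription(of: scene[1]))  // laptop: open, screen lit.
```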
Abstract:
An electronic device with a display and a touch-sensitive surface displays, on the display, a first visual indicator that corresponds to a virtual touch. The device receives a first input from an adaptive input device. In response to receiving the first input from the adaptive input device, the device displays a first menu on the display. The first menu includes a virtual touches selection icon. In response to detecting selection of the virtual touches selection icon, the device displays a menu of virtual multitouch contacts.
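A small Swift sketch, with hypothetical names (Screen, VirtualTouchUI), of the menu flow described above: the adaptive-input event opens the first menu, and selecting its virtual touches icon opens the multitouch-contacts menu.

```swift
// Hypothetical sketch of the menu flow; a real implementation would be a UI,
// not a plain state machine.
enum Screen { case indicator, firstMenu, multitouchMenu }

struct VirtualTouchUI {
    private(set) var screen: Screen = .indicator  // visual indicator for a virtual touch

    mutating func receiveAdaptiveInput() {
        // First input from the adaptive input device shows the first menu.
        if screen == .indicator { screen = .firstMenu }
    }

    mutating func selectVirtualTouchesIcon() {
        // Selecting the icon shows the menu of virtual multitouch contacts.
        if screen == .firstMenu { screen = .multitouchMenu }
    }
}

var ui = VirtualTouchUI()
ui.receiveAdaptiveInput()
ui.selectVirtualTouchesIcon()
print(ui.screen)  // multitouchMenu
```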
Abstract:
Systems and processes are disclosed for initiating and controlling content speaking on touch-sensitive devices. A gesture that causes text to be spoken can be detected on a touchscreen. Displayed content can be analyzed, and a determination can be made, based on size, position, and other attributes, as to which portion of the displayed text should be spoken. In response to detecting the gesture, the identified portion of text can be spoken using a text-to-speech process. A menu of controls can be displayed for controlling the speaking. The menu can be hidden automatically, and a persistent virtual button can be displayed that remains available on the touchscreen even as the user navigates to another view. Selecting the persistent virtual button can restore the full menu of controls, thereby allowing the user to continue to control the speaking even after navigating away from the content being spoken.
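A Swift sketch of the flow described above, under assumed names (SpeakController and its methods); text-to-speech is stubbed with a print, and the largest-area heuristic for choosing the portion to speak is an assumption.

```swift
// Hypothetical sketch of the speak-and-persistent-button flow.
struct SpeakController {
    var isSpeaking = false
    var menuVisible = false
    var persistentButtonVisible = false

    mutating func handleSpeakGesture(candidates: [(text: String, area: Double)]) {
        // Pick the displayed portion to speak, e.g. the largest visible block.
        guard let chosen = candidates.max(by: { $0.area < $1.area }) else { return }
        print("speaking: \(chosen.text)")  // stand-in for a TTS call
        isSpeaking = true
        menuVisible = true
    }

    mutating func navigateToAnotherView() {
        // Hide the full menu but keep a persistent button so the user can
        // still control speaking after navigating away.
        menuVisible = false
        persistentButtonVisible = true
    }

    mutating func tapPersistentButton() {
        // Restore the full menu of controls.
        menuVisible = true
        persistentButtonVisible = false
    }
}

var controller = SpeakController()
controller.handleSpeakGesture(candidates: [("Caption", 0.1), ("Article body", 0.7)])
controller.navigateToAnotherView()
controller.tapPersistentButton()
print(controller.menuVisible)  // true
```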
Abstract:
Methods for presenting symbolic expressions such as mathematical, scientific, or chemical expressions, formulas, or equations are performed by a computing device. One method includes: displaying a first portion of a symbolic expression within a first area of a display screen; while in a first state in which the first area is selected for aural presentation, aurally presenting first information related to the first portion of the symbolic expression; while in the first state, detecting particular user input; and, in response to detecting the particular user input, performing the steps of: transitioning from the first state to a second state in which a second area of the display screen is selected for aural presentation; determining second information associated with a second portion of the symbolic expression that is displayed within the second area; and, in response to determining the second information, aurally presenting the second information.
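A Swift sketch of the state transition between presentation areas, using a fragment of the quadratic formula as sample data; the names (ExpressionArea, AuralPresenter) and the sequential-navigation behavior are hypothetical.

```swift
// Hypothetical sketch of transitioning between areas of a symbolic expression
// selected for aural presentation.
struct ExpressionArea {
    let portion: String   // displayed portion of the symbolic expression
    let spokenInfo: String
}

struct AuralPresenter {
    let areas: [ExpressionArea]
    private(set) var selected = 0  // index of the area selected for aural output

    func presentCurrent() {
        print("speaking: \(areas[selected].spokenInfo)")  // stand-in for TTS
    }

    mutating func handleNavigationInput() {
        // Particular user input transitions selection to the next area,
        // whose associated information is then presented aurally.
        guard selected + 1 < areas.count else { return }
        selected += 1
        presentCurrent()
    }
}

var presenter = AuralPresenter(areas: [
    ExpressionArea(portion: "-b", spokenInfo: "negative b"),
    ExpressionArea(portion: "±√(b²−4ac)",
                   spokenInfo: "plus or minus the square root of b squared minus 4 a c"),
])
presenter.presentCurrent()         // speaking: negative b
presenter.handleNavigationInput()  // speaking: plus or minus the square root of ...
```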
Abstract:
A method for controlling a peripheral from a group of computing devices is provided. The method sets up a group of computing devices for providing media content and control settings to a peripheral device such as a hearing aid. The computing devices in the group are interconnected by a network and exchange data with each other regarding the peripheral. A master device in the group is directly paired with the peripheral device and can use the pairing connection to provide media content or to apply the control settings to the peripheral device. The peripheral device is paired with only the master device of the group. A slave device can request to directly pair with the peripheral device and become the master device in order to provide media content to the peripheral.
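A Swift sketch of the master-handoff protocol described above, under assumed names (DeviceGroup, requestMasterRole): only the master holds the direct pairing, and a slave requests the role before it may stream.

```swift
// Hypothetical sketch of master handoff for a shared hearing-aid peripheral.
final class DeviceGroup {
    private(set) var masterID: String
    var members: Set<String>

    init(members: Set<String>, masterID: String) {
        self.members = members
        self.masterID = masterID
    }

    func sendMedia(from deviceID: String, media: String) -> Bool {
        // Only the directly paired master may stream to the peripheral.
        guard deviceID == masterID else { return false }
        print("\(deviceID) streams '\(media)' over the pairing connection")
        return true
    }

    func requestMasterRole(for deviceID: String) {
        // A slave asks to pair directly with the peripheral; the old master
        // releases the pairing and the requester becomes the new master.
        guard members.contains(deviceID), deviceID != masterID else { return }
        print("\(masterID) unpairs; \(deviceID) pairs with the hearing aid")
        masterID = deviceID
    }
}

let group = DeviceGroup(members: ["phone", "watch", "tablet"], masterID: "phone")
_ = group.sendMedia(from: "watch", media: "podcast")  // rejected: not master
group.requestMasterRole(for: "watch")
_ = group.sendMedia(from: "watch", media: "podcast")  // streams as new master
```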
Abstract:
An electronic device in communication with a haptic feedback device that includes a touch-sensitive surface (a haptic display) sends instructions to the haptic display to display a document with a plurality of characters. A respective character is displayed at a respective character size. While the haptic display is displaying the document, the device receives an input that corresponds to a finger contact at a first location on the haptic display. In response to receiving the input, the device associates a first cursor position with the first location, determines a first character in the plurality of characters adjacent to the first cursor position, and sends instructions to the haptic display to output a Braille character, at the first location, that corresponds to the first character. A respective Braille character is output on the haptic display at a respective Braille character size that is larger than the corresponding displayed character size.
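A Swift sketch of the touch-to-Braille mapping described above; the names, the three-letter Braille table, and the 2× size scale are hypothetical placeholders.

```swift
// Hypothetical sketch: map a touch location to a cursor position, find the
// adjacent character, and emit its Braille cell at a larger output size.
struct HapticBrailleDocument {
    let characters: [Character]
    let characterWidth: Double      // displayed character size (points)
    var brailleScale: Double = 2.0  // Braille cells are output larger

    // Illustrative 6-dot patterns; a real table would cover the full alphabet.
    static let braille: [Character: String] = ["a": "⠁", "b": "⠃", "c": "⠉"]

    func brailleOutput(atTouchX x: Double) -> (cell: String, size: Double)? {
        // Associate a cursor position with the touch location, then take the
        // character adjacent to that cursor position.
        let cursor = min(Int(x / characterWidth), characters.count - 1)
        guard cursor >= 0, let cell = Self.braille[characters[cursor]] else { return nil }
        return (cell, characterWidth * brailleScale)
    }
}

let doc = HapticBrailleDocument(characters: ["a", "b", "c"], characterWidth: 10)
if let out = doc.brailleOutput(atTouchX: 14) {
    print("output \(out.cell) at size \(out.size)")  // output ⠃ at size 20.0
}
```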
Abstract:
An electronic device, while in an interaction configuration mode for a first application, concurrently displays: a first user interface, one or more interaction control user interface objects, and an application restriction controls display user interface object for the first application. The device detects a first gesture and, in response, displays application restriction control user interface objects for the first application. A respective application restriction control user interface object indicates whether a corresponding feature of the first application is configured to be enabled in a restricted interaction mode. The device detects a second gesture and, in response, changes the setting displayed in a first application restriction control user interface object for the first application. The device then detects a second input and, in response, enters the restricted interaction mode for the first application, in which the corresponding feature is restricted in accordance with the setting in the first application restriction control user interface object.
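A Swift sketch of the per-feature restriction settings and mode entry described above; the names (AppRestrictions, isAllowed) and the example features are hypothetical.

```swift
// Hypothetical sketch of per-feature restriction settings for one application
// and entry into the restricted interaction mode.
struct AppRestrictions {
    // Each entry indicates whether a feature is enabled in restricted mode.
    var featureEnabled: [String: Bool]
    var restrictedModeActive = false

    mutating func toggle(_ feature: String) {
        // Second gesture: change the displayed setting for one control.
        featureEnabled[feature]?.toggle()
    }

    mutating func enterRestrictedMode() {
        // Second input: enter the restricted interaction mode.
        restrictedModeActive = true
    }

    func isAllowed(_ feature: String) -> Bool {
        // In restricted mode, features are restricted per their settings.
        guard restrictedModeActive else { return true }
        return featureEnabled[feature] ?? false
    }
}

var settings = AppRestrictions(featureEnabled: ["in-app purchase": true, "video": true])
settings.toggle("in-app purchase")  // now disabled in restricted mode
settings.enterRestrictedMode()
print(settings.isAllowed("video"))            // true
print(settings.isAllowed("in-app purchase"))  // false
```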
Abstract:
The present disclosure generally relates to providing time feedback on an electronic device, and in particular to providing non-visual time feedback on the electronic device. Techniques for providing non-visual time feedback include detecting an input and, in response to detecting the input, initiating output of either a first type or a second type of non-visual indication of the current time, based on which set of non-visual time output criteria the input meets. Techniques for providing non-visual time feedback also include, in response to detecting that the current time has reached a first predetermined time of a set of one or more predetermined times, outputting either a first non-visual alert or a second non-visual alert based on the type of watch face that the electronic device is configured to display.
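A Swift sketch of choosing between two non-visual time indications based on which criteria an input meets; the specific inputs (two-finger long press, crown turn) and the spoken/haptic pairing are illustrative assumptions, not the disclosed criteria.

```swift
import Foundation

// Hypothetical sketch: select a spoken time versus a haptic tap pattern
// depending on which set of output criteria the detected input meets.
enum TimeInput { case twoFingerLongPress, crownTurn }
enum NonVisualIndication { case spokenTime, hapticTaps }

func indication(for input: TimeInput) -> NonVisualIndication {
    switch input {
    case .twoFingerLongPress: return .spokenTime
    case .crownTurn: return .hapticTaps
    }
}

func output(_ kind: NonVisualIndication, hour: Int, minute: Int) {
    switch kind {
    case .spokenTime:
        print("speaking: it's \(hour):\(String(format: "%02d", minute))")
    case .hapticTaps:
        // e.g. long taps for hours, short taps for five-minute increments
        print("haptics: \(hour) long taps, \(minute / 5) short taps")
    }
}

output(indication(for: .twoFingerLongPress), hour: 9, minute: 5)  // speaking: it's 9:05
output(indication(for: .crownTurn), hour: 9, minute: 5)           // haptics: 9 long taps, 1 short taps
```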