Abstract:
An electronic device in communication with a haptic display that includes a touch-sensitive surface sends instructions to the haptic display to display a document with a plurality of characters. A respective character is displayed at a respective character size. While the haptic display is displaying the document, the device receives an input that corresponds to a finger contact at a first location on the haptic display. In response to receiving the input, the device associates a first cursor position with the first location, determines a first character in the plurality of characters adjacent to the first cursor position, and sends instructions to the haptic display to output a Braille character, at the first location, that corresponds to the first character. A respective Braille character is output on the haptic display at a respective Braille character size that is larger than the corresponding displayed character size.
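The flow this abstract describes — touch location to cursor position, cursor position to adjacent character, character to Braille output — can be sketched as follows. This is a hedged illustration only, not the patented implementation; the grid geometry, the `cursor_position_for_touch` helper, and the dot patterns in `BRAILLE_PATTERNS` are all assumptions made for the example.

```python
# Illustrative sketch: map a touch at (x, y) on a character grid to a cursor
# position, find the character adjacent to that position, and return the
# Braille dot pattern (dots 1-6) to raise at the touch location.

# Minimal, illustrative dot patterns for a few characters.
BRAILLE_PATTERNS = {
    "a": (1,), "b": (1, 2), "c": (1, 4),
}

def cursor_position_for_touch(x, y, char_width, line_height, line_chars):
    """Map an (x, y) touch location to a character index, assuming the
    document is laid out as a simple fixed-size character grid."""
    col = int(x // char_width)
    row = int(y // line_height)
    return row * line_chars + col

def braille_for_touch(document, x, y, char_width=10, line_height=20, line_chars=40):
    """Return the Braille dot pattern for the character adjacent to the
    cursor position associated with the touch, or None if unknown."""
    pos = cursor_position_for_touch(x, y, char_width, line_height, line_chars)
    pos = max(0, min(pos, len(document) - 1))  # clamp to document bounds
    return BRAILLE_PATTERNS.get(document[pos].lower())
```

For example, with the default grid a touch at x = 15 falls in the second character cell, so `braille_for_touch("abc", 15, 5)` yields the pattern for "b".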
Abstract:
Calibration of eye tracking is improved by collecting additional calibration pairs while the user is using applications with eye tracking. A user input component is presented on a display of an electronic device, a dwelling action for the user input component is detected, and, in response to detecting the dwelling action, a calibration pair comprising an uncalibrated gaze point and a screen location of the user input component is obtained, wherein the uncalibrated gaze point is determined based on an eye pose during the dwelling action. A screen gaze estimation is determined based on the uncalibrated gaze point, and, in response to determining that the calibration pair is a valid calibration pair, a calibration model is trained using the calibration pair.
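The pipeline above — pair the uncalibrated gaze point with the dwelled-on component's screen location, validate the pair, and train a calibration model on valid pairs — can be sketched as below. The distance-based validity check and the running-offset `CalibrationModel` are assumptions chosen for illustration; the abstract does not specify either.

```python
# Hedged sketch of dwell-based eye-tracking calibration: collect a
# (gaze, target) pair during a dwell, validate it, and update a toy model.

def collect_calibration_pair(uncalibrated_gaze, component_location):
    """Pair the uncalibrated gaze point with the component's screen location."""
    return {"gaze": uncalibrated_gaze, "target": component_location}

def is_valid_pair(pair, max_offset=100.0):
    """Assumed validity check: reject pairs whose gaze estimate is
    implausibly far from the dwelled-on component."""
    gx, gy = pair["gaze"]
    tx, ty = pair["target"]
    return ((gx - tx) ** 2 + (gy - ty) ** 2) ** 0.5 <= max_offset

class CalibrationModel:
    """Toy per-axis offset model, trained incrementally from valid pairs."""
    def __init__(self):
        self.dx = self.dy = 0.0
        self.n = 0

    def train(self, pair):
        gx, gy = pair["gaze"]
        tx, ty = pair["target"]
        self.n += 1
        # Running mean of the gaze-to-target offset.
        self.dx += ((tx - gx) - self.dx) / self.n
        self.dy += ((ty - gy) - self.dy) / self.n

    def calibrate(self, gaze):
        """Apply the learned offset to a raw (uncalibrated) gaze point."""
        return (gaze[0] + self.dx, gaze[1] + self.dy)
```

A real system would use a richer model (e.g. a 2-D polynomial fit over many pairs), but the collect/validate/train loop is the same shape.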
Abstract:
An electronic device can provide an interactive map with non-visual output, thereby making the map accessible to visually impaired users. The map can be based on a starting location defined based on a current location of the electronic device or on a location entered by the user. Nearby paths, nearby points of interest, or directions from the starting location to an ending location can be identified via audio output. Users can touch a screen of the electronic device in order to virtually explore a neighborhood. A user can be alerted when he is moving along or straying from a path, approaching an intersection or point of interest, or changing terrains. Thus, the user can familiarize himself with city-level spatial relationships without needing to physically explore unfamiliar surroundings.
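The "moving along or straying from a path" alert described above amounts to a point-to-path distance test. A minimal sketch, assuming the path is a straight segment and a fixed straying threshold (both assumptions not specified by the abstract):

```python
# Illustrative straying-from-path alert: warn when the user's distance to the
# path segment exceeds a threshold. Geometry and threshold are assumptions.

def distance_to_segment(p, a, b):
    """Distance from point p to segment a-b (all 2-D tuples)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    # Project p onto the segment, clamping to its endpoints.
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def path_alert(user_pos, path_start, path_end, threshold=15.0):
    """Return the alert state to convey via non-visual (e.g. audio) output."""
    d = distance_to_segment(user_pos, path_start, path_end)
    return "on path" if d <= threshold else "straying from path"
```

In practice a path would be a polyline and the device would take the minimum distance over its segments, but each segment test reduces to this projection.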