Abstract:
In some embodiments, a device performs character recognition based on spatial and temporal components of touch input detected on a touch-sensitive surface. In some embodiments, a device provides feedback about handwritten input and its recognition by the device. In some embodiments, a device presents a user interface for changing previously-inputted characters.
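A minimal Swift sketch of how a stroke carrying both a spatial and a temporal component might be represented (the type and property names are illustrative, not the device's actual data model):

    import Foundation

    // Each touch sample carries a spatial component (x, y) and a temporal
    // component (timestamp), so a recognizer can use timing as well as shape.
    struct StrokeSample {
        let x: Double
        let y: Double
        let timestamp: TimeInterval
    }

    // A stroke is the ordered sequence of samples from touch-down to lift-off.
    struct Stroke {
        var samples: [StrokeSample] = []

        // Duration of the stroke (its temporal extent).
        var duration: TimeInterval {
            guard let first = samples.first, let last = samples.last else { return 0 }
            return last.timestamp - first.timestamp
        }
    }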
Abstract:
An electronic device includes a touch-sensitive surface. The device detects a touch input on the touch-sensitive surface. In response to detecting the touch input, the device, in accordance with a determination that the touch input is at a location on the touch-sensitive surface that is associated with a first intensity model of a plurality of different intensity models, processes the touch input in accordance with an intensity applied by the touch input on the touch-sensitive surface and the first intensity model without generating a tactile output; and, in accordance with a determination that the touch input is at a location on the touch-sensitive surface that is associated with a second intensity model different from the first intensity model, processes the touch input in accordance with an intensity applied by the touch input on the touch-sensitive surface and the second intensity model, including conditionally generating a tactile output.
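One way the two kinds of intensity model could be distinguished is sketched below; the enum cases, the threshold value, and the function name are assumptions for illustration, not the disclosed implementation:

    import Foundation

    // Hypothetical intensity models: the model associated with the touched
    // location decides whether a tactile output may be generated.
    enum IntensityModel {
        case passive                 // first model: never generates a tactile output
        case thresholded(Double)     // second model: tactile output above a threshold
    }

    // Returns true if a tactile output should be generated for this touch.
    func shouldGenerateTactileOutput(forIntensity intensity: Double,
                                     model: IntensityModel) -> Bool {
        switch model {
        case .passive:
            // Process the input using its intensity, but emit no tactile output.
            return false
        case .thresholded(let threshold):
            // Conditionally emit a tactile output when the applied intensity
            // crosses the model's threshold.
            return intensity >= threshold
        }
    }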
Abstract:
The subject technology provides for receiving a new input stroke. The subject technology determines whether the new input stroke is associated with an existing line group based on a writing direction estimate of the existing line group. The subject technology merges the new input stroke with the existing line group in response to determining that the new input stroke is associated with the existing line group. The subject technology determines a local orientation of the existing line group including the new input stroke based on an estimate of a direction of writing and a scale of each stroke. The subject technology normalizes the existing line group including the new input stroke using the determined local orientation.
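A rough Swift sketch of the merge-and-normalize flow under assumed names (the direction estimate here is a simple running average and ignores stroke scale, which the disclosure also takes into account):

    import Foundation

    struct Point { var x: Double; var y: Double }

    struct LineGroup {
        var strokes: [[Point]] = []
        var writingDirection: Double = 0   // estimated orientation, in radians

        // Merge the stroke only if its offset from the group roughly follows
        // the group's estimated writing direction.
        mutating func mergeIfAssociated(_ stroke: [Point],
                                        tolerance: Double = .pi / 4) -> Bool {
            guard let lastPoint = strokes.last?.last, let firstPoint = stroke.first else {
                strokes.append(stroke)
                return true
            }
            let angle = atan2(firstPoint.y - lastPoint.y, firstPoint.x - lastPoint.x)
            guard abs(angle - writingDirection) < tolerance else { return false }
            strokes.append(stroke)
            // Fold the new observation into the running direction estimate.
            writingDirection = 0.8 * writingDirection + 0.2 * angle
            return true
        }

        // Normalize by rotating every point so the writing direction becomes horizontal.
        func normalized() -> [[Point]] {
            let c = cos(-writingDirection)
            let s = sin(-writingDirection)
            return strokes.map { stroke in
                stroke.map { p in Point(x: c * p.x - s * p.y, y: s * p.x + c * p.y) }
            }
        }
    }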
Abstract:
Some embodiments of the invention provide a novel method for recognizing characters that are input through touch strokes on a touch-sensitive sensor (e.g., a touch-sensitive display screen or a touch-sensitive surface) of a device (e.g., a mobile device, a remote control, a trackpad, etc.). In some embodiments, the sensor has a space-constrained area for receiving the touch input. In some embodiments, the method places no limitations on where the user can write in the space provided by the device. As such, successive characters might not follow each other in the space. In fact, later characters might overlap earlier characters or they might appear before earlier characters.
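Because spatial position cannot be relied on when characters may overlap or appear out of order, a recognizer of this kind would order strokes by when they were drawn rather than by where they land; a minimal sketch with invented names:

    import Foundation

    struct TimedStroke {
        let points: [(x: Double, y: Double)]
        let startTime: TimeInterval
    }

    // Temporal order stands in for spatial order on a space-constrained sensor.
    func strokesInWritingOrder(_ strokes: [TimedStroke]) -> [TimedStroke] {
        strokes.sorted { $0.startTime < $1.startTime }
    }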
Abstract:
A device receives a user input that corresponds to a sequence of characters. In response to the user input, the device displays simulated handwritten text, including varying the appearance of characters in the simulated handwritten text based on variations in handwritten text of a respective user. In response to receiving the user input and in accordance with a determination that a first criterion is met, a first character in the sequence of characters has a first appearance that corresponds to the appearance of the first character in handwritten text of the respective user. In accordance with a determination that a second criterion is met, the first character in the sequence of characters has a second appearance that corresponds to the appearance of the first character in handwritten text of the respective user. The second appearance of the first character is different from the first appearance of the first character.
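A hedged sketch of one way variant selection could work, assuming a library of per-character glyph variants sampled from the user's handwriting; the names and the occurrence-based criterion are illustrative only:

    import Foundation

    struct GlyphVariant {
        let strokePaths: [String]   // placeholder for stored stroke geometry
    }

    // Choose a variant for this occurrence of the character so that repeated
    // characters do not all look identical.
    func glyphVariant(for character: Character,
                      occurrenceIndex: Int,
                      library: [Character: [GlyphVariant]]) -> GlyphVariant? {
        guard let variants = library[character], !variants.isEmpty else { return nil }
        // Example criterion: alternate variants by occurrence, so the first and
        // second appearances of the same character differ.
        return variants[occurrenceIndex % variants.count]
    }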
Abstract:
The present disclosure generally relates to handwriting on touch-sensitive surfaces. In some examples, text suggestions for strokes entered on a touch-sensitive surface are viewed and selected in response to a rotatable input mechanism. In some examples, text determined from a set of strokes on the touch-sensitive surface is revised based on a subsequently entered stroke on the touch-sensitive surface. In some examples, a determination is made whether to include a stroke in a set of strokes based on a time between the stroke and the previous stroke. In some examples, determining text based on a set of strokes is interrupted to determine revised text based on the set of strokes and a second stroke.
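An illustrative sketch of the time-based grouping and interruptible recognition described above; the class, the 0.6-second window, and the recognizer callback are assumptions, not the disclosed API:

    import Foundation

    struct InkStroke {
        let points: [(x: Double, y: Double)]
        let endTime: TimeInterval
    }

    final class StrokeSetRecognizer {
        private(set) var currentSet: [InkStroke] = []
        private var pendingRecognition: Task<Void, Never>?
        let groupingWindow: TimeInterval = 0.6   // assumed time threshold

        func add(_ stroke: InkStroke,
                 recognize: @escaping @Sendable ([InkStroke]) async -> String) {
            if let last = currentSet.last,
               stroke.endTime - last.endTime > groupingWindow {
                currentSet = []                  // gap too long: start a new stroke set
            }
            currentSet.append(stroke)
            pendingRecognition?.cancel()         // interrupt the in-flight determination
            let snapshot = currentSet
            pendingRecognition = Task {
                // The recognizer is expected to check for cancellation cooperatively;
                // the result is the revised text over the full, enlarged set.
                _ = await recognize(snapshot)
            }
        }
    }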
Abstract:
A device displays a drawing region. While displaying the drawing region, the device detects a sequence of drawing inputs on a touch-sensitive display. In response to the sequence of drawing inputs, the device draws a plurality of strokes in the drawing region. The plurality of strokes correspond to a plurality of characters. After detecting the sequence of drawing inputs, the device detects a predefined gesture that corresponds to a request to perform an operation based on the plurality of characters represented by the plurality of strokes. In response to detecting the predefined gesture, the device concurrently displays a first visual prompt indicating that a first subset of one or more characters in the plurality of characters can be used to perform the operation and a second visual prompt indicating that a second subset of one or more characters in the plurality of characters can be used to perform the operation.
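The abstract does not say how the usable subsets of characters are found; one plausible sketch uses Foundation's NSDataDetector over the text recognized from the strokes to locate candidate ranges for the visual prompts:

    import Foundation

    // Find substrings of the recognized text that could drive an operation
    // (phone numbers, dates, addresses, links) and could each be surfaced as
    // a separate visual prompt.
    func actionableRanges(in recognizedText: String) -> [Range<String.Index>] {
        let types: NSTextCheckingResult.CheckingType = [.phoneNumber, .date, .address, .link]
        guard let detector = try? NSDataDetector(types: types.rawValue) else { return [] }
        let nsRange = NSRange(recognizedText.startIndex..., in: recognizedText)
        return detector.matches(in: recognizedText, options: [], range: nsRange)
            .compactMap { Range($0.range, in: recognizedText) }
    }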
Abstract:
An electronic device includes a touch-sensitive surface. The electronic device includes one or more sensors to detect intensity of contacts with the touch-sensitive surface. The device detects a first touch input on the touch-sensitive surface, and, in response to detecting the first touch input on the touch-sensitive surface, determines a first intensity applied by the first touch input on the touch-sensitive surface. The device identifies a first intensity model identifier from a plurality of predefined intensity model identifiers, and, in accordance with the first intensity applied by the first touch input on the touch-sensitive surface and one or more thresholds associated with the first intensity model identifier, determines a first touch characterization parameter. Subsequent to determining the first touch characterization parameter, the device sends first touch information to a first software application. The first touch information includes the first intensity model identifier and the first touch characterization parameter.
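A hypothetical sketch of mapping an intensity through a model's thresholds to a characterization parameter before forwarding it to the application (all names and the stage-counting rule are assumptions):

    import Foundation

    struct IntensityModelSpec {
        let identifier: String
        let thresholds: [Double]        // ascending intensity thresholds for the model
    }

    struct TouchInfo {
        let modelIdentifier: String
        let characterizationParameter: Int
    }

    // One plausible characterization: how many of the model's thresholds the
    // applied intensity has crossed.
    func touchInfo(forIntensity intensity: Double, model: IntensityModelSpec) -> TouchInfo {
        let stage = model.thresholds.filter { intensity >= $0 }.count
        return TouchInfo(modelIdentifier: model.identifier,
                         characterizationParameter: stage)
    }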
Abstract:
An electronic device detects a first touch input on a first touch region of a touch-sensitive surface, and identifies a first intensity model identifier associated with the first touch region. In response to detecting the first touch input, the device determines a first intensity of the first touch input on the first touch region; determines a first touch characterization parameter; and subsequently sends to a first software application the first touch characterization parameter. The device also detects a second touch input on a second touch region of the touch-sensitive surface, and identifies a second intensity model identifier associated with the second touch region. In response to detecting the second touch input, the device determines a second intensity of the second touch input on the second touch region; determines a second touch characterization parameter; and subsequently sends to the first software application the second touch characterization parameter.
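A short sketch of the region-to-model lookup, with assumed names and axis-aligned rectangular regions used purely for illustration:

    import Foundation

    struct TouchRegion {
        let minX, minY, maxX, maxY: Double
        let intensityModelIdentifier: String
    }

    // Resolve a touch location to the intensity model identifier registered
    // for the containing region, if any.
    func intensityModelIdentifier(forTouchAt x: Double, _ y: Double,
                                  regions: [TouchRegion]) -> String? {
        regions.first { x >= $0.minX && x <= $0.maxX && y >= $0.minY && y <= $0.maxY }?
            .intensityModelIdentifier
    }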
Abstract:
While an electronic device with a display and a touch-sensitive surface is in a screen reader accessibility mode, the device displays an application launcher screen including a plurality of application icons. A respective application icon corresponds to a respective application stored in the device. The device detects a sequence of one or more gestures on the touch-sensitive surface that correspond to one or more characters. A respective gesture that corresponds to a respective character is a single finger gesture that moves across the touch-sensitive surface along a respective path that corresponds to the respective character. The device determines whether the detected sequence of one or more gestures corresponds to a respective application icon of the plurality of application icons, and, in response to determining that the detected sequence of one or more gestures corresponds to the respective application icon, performs a predefined operation associated with the respective application icon.
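A minimal sketch of matching the recognized characters against application names; the prefix-matching rule is an assumption, since the abstract states only that the gesture sequence is matched to a respective application icon:

    import Foundation

    // Match the characters recognized from the single-finger gesture paths
    // against application names; a unique prefix match identifies the icon on
    // which to perform the predefined operation.
    func matchingApplicationName(forRecognizedCharacters characters: String,
                                 in applicationNames: [String]) -> String? {
        let query = characters.lowercased()
        let matches = applicationNames.filter { $0.lowercased().hasPrefix(query) }
        return matches.count == 1 ? matches.first : nil
    }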