Abstract:
A computing device outputs a keyboard for display, receives an indication of a first gesture to select a first sequence of one or more keys, determines a set of candidate strings based in part on the first sequence of keys, and outputs for display at least one of the set of candidate strings. The computing device receives an indication of a second gesture to select a second sequence of one or more keys, and determines that characters associated with the second sequence of keys are included in a first candidate word based at least in part on the set of candidate strings, or are included in a second candidate word not based on the first sequence of keys. The computing device modifies the set of candidate strings based at least in part on the determination and outputs for display at least one of the modified candidate strings.
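A minimal sketch of the described update, assuming a toy lexicon and a simple prefix test; the function names, lexicon, and word-split rule are hypothetical placeholders, not the claimed implementation.

```python
# Sketch: decide whether the characters selected by a second gesture continue
# the word begun by the first gesture (first candidate word) or start a new
# word (second candidate word), then modify the candidate set accordingly.
LEXICON = {"this", "thin", "think", "is", "in", "it"}  # hypothetical lexicon

def is_prefix(prefix, lexicon):
    """Return True if any lexicon word starts with the given prefix."""
    return any(word.startswith(prefix) for word in lexicon)

def update_candidates(candidates, second_chars, lexicon=LEXICON):
    """Modify the candidate strings based on the second sequence of keys."""
    updated = []
    for cand in candidates:
        extended = cand + second_chars
        if is_prefix(extended, lexicon):
            # Second gesture extends the candidate begun by the first gesture.
            updated.append(extended)
        else:
            # Second gesture begins a new word not based on the first sequence.
            updated.append(cand + " " + second_chars)
    return updated

if __name__ == "__main__":
    print(update_candidates(["thi", "the"], "n"))   # -> ['thin', 'the n']
```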
Abstract:
A computing device can be configured to receive an indication of a first input gesture, a first portion of the first input gesture indicating a first character key of a plurality of character keys of a graphical keyboard and a second portion of the first input gesture indicating a second character key of the plurality of character keys. The computing device also can be configured to determine, based at least in part on the first character key and the second character key, a candidate word. The computing device can be configured to output, for display at a region of a display device at which the graphical keyboard is displayed, a gesture completion path extending from the second character key. Further, the computing device can be configured to select, in response to receiving an indication of a second input gesture substantially traversing the gesture completion path, the candidate word.
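A minimal sketch of a gesture completion path, assuming hypothetical key-center coordinates and a simple distance tolerance for deciding that a second gesture substantially traverses the path.

```python
import math

# Hypothetical (x, y) centers of character keys on a graphical keyboard.
KEY_CENTERS = {"t": (4.5, 0), "h": (5.5, 1), "e": (2.5, 0), "m": (6.5, 2)}

def completion_path(candidate, typed_prefix):
    """Path of key centers for the characters remaining after the prefix,
    i.e. the path extending from the last character key already indicated."""
    remaining = candidate[len(typed_prefix):]
    return [KEY_CENTERS[ch] for ch in remaining]

def traverses(gesture_points, path, tolerance=0.75):
    """Treat the path as 'substantially traversed' when every path point has
    some gesture point within the tolerance radius."""
    return all(
        any(math.dist(g, p) <= tolerance for g in gesture_points)
        for p in path
    )

if __name__ == "__main__":
    path = completion_path("them", "th")              # path from 'h' toward 'e', 'm'
    second_gesture = [(2.4, 0.1), (4.0, 1.0), (6.4, 1.9)]
    if traverses(second_gesture, path):
        print("candidate selected: them")
```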
Abstract:
In response to determining that a first series of user inputs corresponds to a first character string, a computing device outputs, for display at a display device, the first character string. In response to determining that the first character string does not match a word in a lexicon and in response to determining that the first character string ends with a word delimiter, the computing device replaces the first character string with a second character string. After receiving the first series of user inputs, the computing device receives a second series of user inputs. In response to determining that the second series of user inputs corresponds to a third character string, the computing device outputs the third character string. The computing device determines, based at least in part on the first and second series of user inputs, a fourth character string and outputs, for display, the fourth character string.
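A minimal sketch of the delimiter-triggered replacement and of the later joint re-interpretation of both input series, assuming a toy lexicon and a longest-shared-prefix chooser; all names and scoring here are illustrative assumptions.

```python
LEXICON = ["hello", "help", "held", "world"]   # hypothetical lexicon

def autocorrect(first_string, lexicon=LEXICON):
    """If the string ends with a word delimiter and its body is not a lexicon
    word, replace it with the lexicon word sharing the longest prefix."""
    if not first_string.endswith(" "):
        return first_string
    body = first_string.strip()
    if body in lexicon:
        return first_string
    def shared_prefix(word):
        return sum(1 for a, b in zip(body, word) if a == b)
    return max(lexicon, key=shared_prefix) + " "   # second character string

def combined_correction(first_series, second_series, lexicon=LEXICON):
    """Fourth character string: re-interpret both series together, e.g. the
    two inputs may form a single intended word once joined."""
    joined = (first_series + second_series).replace(" ", "")
    return joined if joined in lexicon else None

if __name__ == "__main__":
    print(autocorrect("helo "))                 # -> 'hello ' (replacement)
    print(combined_correction("hel ", "d"))     # -> 'held'
```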
Abstract:
In general, aspects of this disclosure are directed to techniques for predictive text correction and completion for text entry using virtual keyboards on touch-sensitive displays. A user may be able to type on a representation of a virtual keyboard displayed on a touch-sensitive display by contacting representations of virtual keys included in the virtual keyboard, and the word the user intended to type on the virtual keyboard may be predicted and displayed in place of characters associated with the virtual keys actually contacted by the user. In some examples of the present disclosure, a virtual spacebar key included in the virtual keyboard may be treated in a probabilistic fashion to determine whether a contact received by the touch-sensitive display is intended to select the virtual spacebar key to perform an autocorrect or autocomplete function.
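A minimal sketch of treating the spacebar probabilistically, assuming Gaussian key likelihoods over hypothetical key centers; the geometry, variance, and decision threshold are assumptions, not the disclosed model.

```python
import math

# Hypothetical key centers near the bottom row of a virtual keyboard.
KEYS = {"space": (5.0, 3.0), "v": (4.0, 2.0), "b": (5.0, 2.0), "n": (6.0, 2.0)}

def key_probabilities(touch, keys=KEYS, sigma=0.6):
    """Normalized Gaussian likelihood of each key given the touch location."""
    scores = {
        k: math.exp(-math.dist(touch, center) ** 2 / (2 * sigma ** 2))
        for k, center in keys.items()
    }
    total = sum(scores.values())
    return {k: s / total for k, s in scores.items()}

def spacebar_intended(touch, threshold=0.5):
    """Decide whether the contact selects the spacebar and hence whether an
    autocorrect/autocomplete pass should run on the pending characters."""
    return key_probabilities(touch)["space"] >= threshold

if __name__ == "__main__":
    print(spacebar_intended((5.1, 2.8)))   # True: contact is closest to the spacebar
    print(spacebar_intended((5.0, 2.1)))   # False: contact is closest to 'b'
```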
Abstract:
In some examples, a computing device includes at least one processor; and at least one module, operable by the at least one processor to: output, for display at an output device, a graphical keyboard; receive an indication of a gesture detected at a location of a presence-sensitive input device, wherein the location of the presence-sensitive input device corresponds to a location of the output device that outputs the graphical keyboard; determine, based on at least one spatial feature of the gesture that is processed by the computing device using a neural network, at least one character string, wherein the at least one spatial feature indicates at least one physical property of the gesture; and output, for display at the output device, based at least in part on the processing of the at least one spatial feature of the gesture using the neural network, the at least one character string.
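A minimal sketch of mapping spatial features of a gesture to a character string with a small feed-forward network; the untrained random weights, the feature choice (x, y, speed), and the alphabet are assumptions for illustration only.

```python
import numpy as np

ALPHABET = list("abcdefghijklmnopqrstuvwxyz")

# Hypothetical, untrained two-layer network: 3 spatial features -> 16 hidden units
# -> one score per character.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, len(ALPHABET))), np.zeros(len(ALPHABET))

def decode(spatial_features):
    """Run each gesture point's feature vector (x, y, speed) through the
    network and return the most likely character string."""
    hidden = np.maximum(0, spatial_features @ W1 + b1)   # ReLU hidden layer
    logits = hidden @ W2 + b2
    return "".join(ALPHABET[i] for i in logits.argmax(axis=1))

if __name__ == "__main__":
    gesture = np.array([[0.1, 0.2, 1.5],
                        [0.4, 0.3, 0.9],
                        [0.8, 0.1, 0.2]])   # one row per sampled gesture point
    print(decode(gesture))                  # untrained weights, so arbitrary output
```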
Abstract:
In one example, a computing device includes at least one processor that is operatively coupled to a presence-sensitive display and a gesture module operable by the at least one processor. The gesture module may be operable by the at least one processor to output, for display at the presence-sensitive display, a graphical keyboard comprising a plurality of keys and receive an indication of a continuous gesture detected at the presence-sensitive display, the continuous gesture to select a group of keys of the plurality of keys. The gesture module may be further operable to determine, in response to receiving the indication of the continuous gesture and based at least in part on the group of keys of the plurality of keys, a candidate phrase comprising a group of candidate words.
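A minimal sketch of segmenting the keys selected by one continuous gesture into a candidate phrase of candidate words, assuming a toy lexicon and a greedy recursive split.

```python
LEXICON = {"this", "is", "a", "test", "phrase"}   # hypothetical lexicon

def candidate_phrase(keys, lexicon=LEXICON):
    """Return one segmentation of the selected key characters into known words,
    or None if no segmentation exists."""
    if not keys:
        return []
    for end in range(len(keys), 0, -1):     # try the longest word first
        word = keys[:end]
        if word in lexicon:
            rest = candidate_phrase(keys[end:], lexicon)
            if rest is not None:
                return [word] + rest
    return None

if __name__ == "__main__":
    selected_keys = "thisisatest"           # characters from one continuous gesture
    print(" ".join(candidate_phrase(selected_keys)))   # -> 'this is a test'
```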
Abstract:
In one example, a method may include outputting, by a computing device and for display, a graphical keyboard comprising a plurality of keys, and receiving an indication of a gesture. The method may include determining an alignment score that is based at least in part on a word prefix and an alignment point traversed by the gesture. The method may include determining at least one alternative character that is based at least in part on a misspelling that includes at least a portion of the word prefix. The method may include determining an alternative alignment score based at least in part on the alternative character; and outputting, by the computing device and for display, based at least in part on the alternative alignment score, a candidate word based at least in part on the alternative character.
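A minimal sketch of an alignment score and its alternative-character rescoring, assuming hypothetical key centers, a tiny misspelling table, and simple distance-based scoring.

```python
import math

KEY_CENTERS = {"t": (4.5, 0), "h": (5.5, 1), "e": (2.5, 0)}   # hypothetical layout
# Word prefixes mapped to an alternative next character suggested by a common
# misspelling (e.g. 'teh' is typically a transposition of 'the').
MISSPELLINGS = {"te": "h"}

def alignment_score(next_char, alignment_point):
    """Lower is better: distance from the gesture's alignment point to the key
    associated with the character that would extend the word prefix."""
    return math.dist(alignment_point, KEY_CENTERS[next_char])

def best_extension(prefix, literal_char, alignment_point):
    """Compare the literal character's alignment score with the score of an
    alternative character implied by a known misspelling of the prefix."""
    score = alignment_score(literal_char, alignment_point)
    alt_char = MISSPELLINGS.get(prefix)
    if alt_char is not None:
        alt_score = alignment_score(alt_char, alignment_point)
        if alt_score < score:
            return alt_char, alt_score
    return literal_char, score

if __name__ == "__main__":
    # The alignment point lands near 'h' even though the literal next key was 'e'.
    print(best_extension("te", "e", alignment_point=(5.3, 0.9)))   # -> ('h', ...)
```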
Abstract:
A computing device outputs, for display at a display device, a graphical keyboard including a number of keys. The computing device receives an indication of a gesture to select at least two of the keys based at least in part on detecting an input unit at locations of a presence-sensitive input device. In response to the detecting and while the input unit is detected at the presence-sensitive input device, the computing device determines a candidate word for the gesture based at least in part on the at least two keys, and the candidate word is output for display at a first location of the display device. In response to determining that the input unit is no longer detected at the presence-sensitive input device, the displayed candidate word is output for display at a second location of the display device.
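A minimal sketch of showing the candidate word at a first location while the input unit is still detected and at a second location once it is no longer detected, assuming a hypothetical event stream, lexicon, and location names.

```python
LEXICON = ["keyboard", "key", "kettle"]   # hypothetical lexicon

def candidate_for(keys):
    """Pick the first lexicon word consistent with the selected keys."""
    prefix = "".join(keys)
    return next((w for w in LEXICON if w.startswith(prefix)), prefix)

def handle_gesture(events):
    """events: (key, still_detected) pairs from the presence-sensitive device."""
    keys = []
    for key, still_detected in events:
        keys.append(key)
        word = candidate_for(keys)
        if still_detected:
            location = "first location (floating preview near the gesture)"
        else:
            location = "second location (text entry region)"
        print(f"display {word!r} at {location}")

if __name__ == "__main__":
    handle_gesture([("k", True), ("e", True), ("y", False)])
```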
Abstract:
In some examples, a method includes outputting a graphical keyboard (120) for display and responsive to receiving an indication of a first input (124), determining a new character string that is not included in a language model. The method may include adding the new character string to the language model and associating a likelihood value with the new character string. The method may include, responsive to receiving an indication of a second input, predicting the new character string, and responsive to receiving an indication of a third input that rejects the new character string, decreasing the likelihood value associated with the new character string.
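A minimal sketch of a language model that adds a new character string with a likelihood value, predicts it for later input, and decreases the likelihood when the prediction is rejected; the storage layout, initial likelihood, and decay factor are assumptions.

```python
class LanguageModel:
    def __init__(self):
        # Hypothetical per-string likelihood values.
        self.likelihood = {"hello": 0.9, "help": 0.6}

    def observe(self, string, initial_likelihood=0.5):
        """Add a character string not yet in the model (first input)."""
        if string not in self.likelihood:
            self.likelihood[string] = initial_likelihood

    def predict(self, prefix):
        """Most likely known string for the prefix (second input)."""
        matches = [s for s in self.likelihood if s.startswith(prefix)]
        return max(matches, key=self.likelihood.get) if matches else None

    def reject(self, string, decay=0.5):
        """The user rejected the prediction (third input): lower its likelihood."""
        if string in self.likelihood:
            self.likelihood[string] *= decay

if __name__ == "__main__":
    lm = LanguageModel()
    lm.observe("helsinki")             # new string added with a likelihood value
    print(lm.predict("hels"))          # -> 'helsinki'
    lm.reject("helsinki")              # rejection decreases its likelihood
    print(lm.likelihood["helsinki"])   # -> 0.25
```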