Abstract:
Power consumption for a computing device may be managed by one or more keywords. For example, if an audio input obtained by the computing device includes a keyword, a network interface module and/or an application processing module of the computing device may be activated. The audio input may then be transmitted via the network interface module to a remote computing device, such as a speech recognition server. Alternatively, the computing device may be provided with a speech recognition engine configured to process the audio input for on-device speech recognition.
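A minimal sketch of this keyword-gated flow, assuming stubbed network and application modules and a placeholder spot_keyword() check (none of these names come from the source):

```python
# Hypothetical sketch, not the patented implementation: the network and
# application modules stay asleep until an on-device keyword spotter fires.

class Module:
    def __init__(self, name):
        self.name, self.active = name, False

    def activate(self):
        self.active = True
        print(f"{self.name} activated")

def spot_keyword(audio_frame: bytes) -> bool:
    # Placeholder for a low-power keyword spotter (e.g., a small acoustic model).
    return audio_frame.startswith(b"WAKEWORD")

def handle_audio(audio_frame: bytes, network: Module, app: Module):
    if not spot_keyword(audio_frame):
        return  # remain in the low-power state; the audio is discarded
    network.activate()  # power up the network interface module
    app.activate()      # power up the application processing module
    # The audio would then be transmitted to a remote speech recognition
    # server, e.g., network.send(audio_frame) in a real implementation.

handle_audio(b"WAKEWORD...utterance", Module("network"), Module("app-processor"))
```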
Abstract:
Features are disclosed for maintaining data that can be used to personalize spoken language processing, such as automatic speech recognition (“ASR”), natural language understanding (“NLU”), natural language processing (“NLP”), etc. The data may be obtained from various data sources, such as applications or services used by the user. User-specific data maintained by the data sources can be retrieved and stored for use in generating personal models. Updates to data at the data sources may be reflected by separate data sets in the personalization data, such that other processes can obtain the update data sets separately from other data.
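An illustrative sketch of the update-as-separate-data-set idea, assuming a simple in-memory store (the class and method names are invented for illustration):

```python
# Each refresh from a data source is kept as its own timestamped data set,
# so downstream model builders can fetch only the updates they have not seen.

import time
from collections import defaultdict

class PersonalizationStore:
    def __init__(self):
        # user_id -> list of (timestamp, source_name, records)
        self.data_sets = defaultdict(list)

    def ingest(self, user_id, source_name, records):
        # Store the update as a separate data set rather than merging in place.
        self.data_sets[user_id].append((time.time(), source_name, records))

    def updates_since(self, user_id, since_ts):
        # Other processes retrieve update data sets separately from older data.
        return [ds for ds in self.data_sets[user_id] if ds[0] > since_ts]

store = PersonalizationStore()
store.ingest("user-1", "contacts-app", ["Alice", "Bob"])
checkpoint = time.time()
store.ingest("user-1", "music-service", ["playlist: road trip"])
print(store.updates_since("user-1", checkpoint))  # only the music update
```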
Abstract:
According to one or more embodiments of the disclosure, a method is provided. The method may include executing playback of a video. The method may also include receiving user input to rewind at least one portion of the video. Further, the method may include restarting playback of the video at a previous position before the at least one portion of the video. The method may also include activating subtitles associated with the video during playback of the video from the previous position, wherein the subtitles are displayed during playback of the at least one portion of the video. Additionally, the method may include deactivating subtitles during playback of the video after a predetermined amount of time.
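A hypothetical player sketch of this rewind-with-subtitles behavior; the timeout value and class structure are assumptions, not taken from the disclosure:

```python
SUBTITLE_TIMEOUT_S = 30.0  # assumed "predetermined amount of time"

class VideoPlayer:
    def __init__(self):
        self.position = 0.0
        self.subtitles_on = False
        self.subtitle_off_at = None

    def rewind(self, seconds):
        # Restart playback at a previous position before the rewound portion.
        self.position = max(0.0, self.position - seconds)
        self.subtitles_on = True  # activate subtitles for the replayed portion
        self.subtitle_off_at = self.position + SUBTITLE_TIMEOUT_S

    def tick(self, dt):
        self.position += dt
        if self.subtitles_on and self.position >= self.subtitle_off_at:
            self.subtitles_on = False  # deactivate after the timeout elapses

player = VideoPlayer()
player.tick(120.0)       # play two minutes
player.rewind(10.0)      # user missed some dialogue
player.tick(35.0)
print(player.subtitles_on)  # False: the predetermined time has passed
```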
Abstract:
In some cases, a handheld device that includes a microphone and a scanner may be used for voice-assisted scanning. For example, a user may provide a voice input via the microphone and may activate the scanner to scan an item identifier (e.g., a barcode). The handheld device may communicate voice data and item identifier information to a remote system for voice-assisted scanning. The remote system may perform automatic speech recognition (ASR) operations on the voice data and may perform item identification operations based on the scanned identifier. Natural language understanding (NLU) processing may be improved by combining ASR information with item information obtained based on the scanned identifier. An action may be executed based on the likely user intent.
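A sketch of the client-side flow under assumed interfaces: the voice data and scanned identifier travel together so the remote system can combine ASR output with item information during NLU. The endpoint URL and payload shape are illustrative, not from the source:

```python
import json
import urllib.request

def voice_assisted_scan(voice_wav: bytes, barcode: str):
    payload = {
        "barcode": barcode,            # item identifier from the scanner
        "audio_hex": voice_wav.hex(),  # voice input from the microphone
    }
    req = urllib.request.Request(
        "https://voice-scan.example.com/interpret",  # hypothetical endpoint
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    # The remote system runs ASR on the audio, looks up the item by its
    # identifier, fuses both signals for NLU, and returns the likely intent.
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # e.g., {"intent": "add_to_cart", "item": ...}
```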
Abstract:
Features are disclosed for applying maximum likelihood methods to channel normalization in automatic speech recognition (“ASR”). Feature vectors computed from an audio input of a user utterance can be compared to a Gaussian mixture model. The Gaussian that corresponds to each feature vector can be determined, and statistics (e.g., constrained maximum likelihood linear regression statistics) can then be accumulated for each feature vector. Using these statistics, or some subset thereof, offsets and/or a diagonal transform matrix can be computed for each feature vector. The offsets and/or diagonal transform matrix can be applied to the corresponding feature vector to generate a feature vector normalized based on maximum likelihood methods. The ASR process can then proceed using the transformed feature vectors.
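A toy NumPy sketch of the described pipeline, not a full constrained MLLR estimator: each feature vector is assigned to its closest Gaussian, per-dimension statistics are accumulated, and a diagonal scale plus offset is derived and applied. The crude moment matching in step 3 stands in for the actual maximum-likelihood solve:

```python
import numpy as np

def normalize_features(feats, means, variances):
    # feats: (T, D) feature vectors; means/variances: (K, D) diagonal GMM.
    # 1. Determine the best Gaussian per frame via negative log-likelihood.
    nll = (((feats[:, None, :] - means[None]) ** 2) / variances[None]
           + np.log(variances[None])).sum(-1)          # shape (T, K)
    best = nll.argmin(axis=1)                          # shape (T,)

    # 2. Accumulate statistics against the chosen Gaussians.
    target_mean = means[best]                          # (T, D)
    target_var = variances[best]                       # (T, D)

    # 3. Derive a diagonal transform and offset (moment matching here,
    #    standing in for the maximum-likelihood estimation in the source).
    scale = np.sqrt(target_var.mean(0) / feats.var(0))
    offset = target_mean.mean(0) - scale * feats.mean(0)

    # 4. Apply the transform; ASR would proceed on the normalized features.
    return feats * scale + offset

rng = np.random.default_rng(0)
feats = rng.normal(2.0, 3.0, size=(200, 13))
means = rng.normal(0.0, 1.0, size=(4, 13))
variances = np.ones((4, 13))
print(normalize_features(feats, means, variances).mean(0).round(2))
```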
Abstract:
A speech recognition system is disclosed that also automatically recognizes and acts in response to significant audio interruptions. Received audio is compared with stored acoustic signatures of noises that may trigger a change in device operation, such as pausing content playback or raising or attenuating its volume after a certain audio interruption (e.g., a doorbell) is heard. If the received audio matches a stored acoustic model, the system alters an operational state of one or more devices, which may or may not include the system itself.
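A hedged illustration of the matching step: incoming audio features are compared against stored signatures, and a match triggers an operational change. The similarity metric, templates, threshold, and action names are invented placeholders:

```python
import numpy as np

SIGNATURES = {
    "doorbell": np.array([0.9, 0.1, 0.4, 0.7]),      # toy spectral templates
    "smoke_alarm": np.array([0.2, 0.95, 0.8, 0.1]),
}
ACTIONS = {"doorbell": "pause_playback", "smoke_alarm": "attenuate_volume"}
THRESHOLD = 0.95  # cosine-similarity match threshold (assumed)

def check_interruption(features: np.ndarray):
    for name, sig in SIGNATURES.items():
        sim = features @ sig / (np.linalg.norm(features) * np.linalg.norm(sig))
        if sim >= THRESHOLD:
            # A match against a stored acoustic signature alters the
            # operational state of one or more devices.
            return ACTIONS[name]
    return None

print(check_interruption(np.array([0.88, 0.12, 0.41, 0.69])))  # "pause_playback"
```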
Abstract:
An audio buffer is used to capture audio in anticipation of a user command to do so. Sensors and processor activity may be monitored for indicia suggesting that the user command may be forthcoming. Upon detecting such indicia, a circular buffer is activated. Audio correction may be applied to the audio stored in the circular buffer. After receiving the user command instructing the device to process or record audio, at least a portion of the audio that was stored in the buffer before the command is combined with audio received after the command. The combined audio may then be processed, transmitted or stored.
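A sketch of the buffering scheme with assumed frame handling: a circular buffer starts capturing when pre-command indicia are detected, and on the actual command its contents are prepended to newly captured audio. The capacity and class names are assumptions:

```python
from collections import deque

BUFFER_FRAMES = 50  # ring capacity (assumed to hold roughly 1 s of frames)

class PreCaptureRecorder:
    def __init__(self):
        self.ring = deque(maxlen=BUFFER_FRAMES)  # circular buffer
        self.armed = False

    def on_indicia(self):
        # Sensor/processor activity suggests a command may be forthcoming.
        self.armed = True

    def on_frame(self, frame: bytes):
        if self.armed:
            self.ring.append(frame)  # audio correction could be applied here

    def on_command(self, post_command_frames):
        # Combine buffered pre-command audio with audio after the command.
        combined = list(self.ring) + list(post_command_frames)
        self.ring.clear()
        self.armed = False
        return b"".join(combined)  # ready to process, transmit, or store

rec = PreCaptureRecorder()
rec.on_indicia()
rec.on_frame(b"pre ")
print(rec.on_command([b"post"]))  # b"pre post"
```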