Abstract:
Disclosed herein are systems, methods, and computer-readable storage devices for processing audio signals. An example system configured to practice the method receives audio at a device to be transmitted to a remote speech processing system. The system analyzes one of noise conditions, a need for enhanced speech quality, and network load to yield an analysis. Based on the analysis, the system determines to bypass user-defined options for enhancing audio for speech processing. Then, based on the analysis, the system can modify an audio transmission parameter used to transmit the audio from the device to the remote speech processing system. The audio transmission parameter can be one of an amount of coding, a chosen codec, or a number of audio channels, for example.
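For illustration only, a minimal Python sketch of the parameter-selection step described above; the names (AudioTxParams, choose_tx_params), the codecs, and all thresholds are hypothetical stand-ins, not the disclosed implementation:

```python
# Hypothetical sketch: pick transmission parameters from the analysis,
# bypassing any user-defined audio-enhancement options. All names,
# codecs, and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AudioTxParams:
    codec: str          # chosen codec
    bitrate_kbps: int   # amount of coding
    channels: int       # number of audio channels

def choose_tx_params(noise_db: float, needs_enhanced_quality: bool,
                     network_load: float) -> AudioTxParams:
    """Map the analysis to an audio transmission parameter choice."""
    if network_load > 0.8:                    # congested: favor low bitrate
        return AudioTxParams("opus", 16, 1)
    if noise_db > 60 or needs_enhanced_quality:
        return AudioTxParams("opus", 64, 2)   # spend bits on quality
    return AudioTxParams("amr-wb", 24, 1)     # default operating point

params = choose_tx_params(noise_db=65.0, needs_enhanced_quality=False,
                          network_load=0.3)
print(params)  # AudioTxParams(codec='opus', bitrate_kbps=64, channels=2)
```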
Abstract:
Disclosed herein are systems, methods, and non-transitory computer-readable storage media for combining frame and segment level processing, via temporal pooling, for phonetic classification. A frame processor unit receives an input and extracts the time-dependent features from the input. A plurality of pooling interface units generates a plurality of feature vectors based on pooling the time-dependent features and selecting a plurality of time-dependent features according to a plurality of selection strategies. Next, a plurality of segmental classification units generates scores for the feature vectors. Each segmental classification unit (SCU) can be dedicated to a specific pooling interface unit (PIU) to form a PIU-SCU combination. Multiple PIU-SCU combinations can be further combined to form an ensemble of combinations, and the ensemble can be diversified by varying the pooling operations used by the PIU-SCU combinations. Based on the scores, the plurality of segmental classification units selects a class label and returns a result.
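A minimal sketch of the PIU-SCU pipeline above, assuming numpy; the two pooling strategies (mean, max) and the random linear scorers standing in for trained segmental classifiers are illustrative assumptions:

```python
# Hypothetical sketch: frame features -> pooling interface units (PIUs)
# -> segmental classification units (SCUs) -> ensemble -> class label.
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_feat, n_classes = 50, 13, 4
frames = rng.normal(size=(n_frames, n_feat))   # time-dependent frame features

# Pooling interface units: each turns T x D frames into one feature vector,
# using a different pooling operation to diversify the ensemble.
pooling_units = {
    "mean": lambda f: f.mean(axis=0),
    "max":  lambda f: f.max(axis=0),
}

# One segmental classification unit per pooling unit (a PIU-SCU combination);
# here a random linear scorer stands in for a trained classifier.
scus = {name: rng.normal(size=(n_feat, n_classes)) for name in pooling_units}

# Ensemble of PIU-SCU combinations: average class scores, then pick a label.
scores = np.mean(
    [pooling_units[name](frames) @ scus[name] for name in pooling_units],
    axis=0,
)
label = int(np.argmax(scores))
print("class label:", label)
```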
Abstract:
Devices, systems, methods, media, and programs for detecting an emotional state change in an audio signal are provided. A plurality of segments of the audio signal is received, with the plurality of segments being sequential. Each segment of the plurality of segments is analyzed, and, for each segment, an emotional state and a confidence score of the emotional state are determined. The emotional state and the confidence score of each segment are sequentially analyzed, and a current emotional state of the audio signal is tracked throughout each of the plurality of segments. For each segment, it is determined whether the current emotional state of the audio signal changes to another emotional state based on the emotional state and the confidence score of the segment.
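A minimal sketch of the sequential tracking step; the per-segment classifier is assumed to have already produced (emotional state, confidence) pairs, and the 0.6 confidence threshold is an illustrative assumption:

```python
# Hypothetical sketch: track the current emotional state across sequential
# segments and report a change only when confidence is sufficient.
from typing import Iterable, Tuple

def track_state_changes(segments: Iterable[Tuple[str, float]],
                        threshold: float = 0.6):
    """Yield (segment_index, new_state) whenever the tracked state changes."""
    current = None
    for i, (state, confidence) in enumerate(segments):
        if state != current and confidence >= threshold:
            current = state
            yield i, current

segments = [("neutral", 0.9), ("neutral", 0.8),
            ("angry", 0.4),   # low confidence: no change registered
            ("angry", 0.7), ("angry", 0.8)]
for idx, state in track_state_changes(segments):
    print(f"segment {idx}: state changed to {state}")
```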
Abstract:
A system and method for processing speech includes receiving a first information stream associated with speech, the first information stream comprising micro-modulation features, and receiving a second information stream associated with the speech, the second information stream comprising features. The method includes combining, via a non-linear multilayer perceptron, the first information stream and the second information stream to yield a third information stream. The system performs automatic speech recognition on the third information stream. The third information stream can also be used for training hidden Markov models (HMMs).
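A minimal sketch of the stream combination, using an untrained two-layer perceptron in numpy; the feature dimensions, layer sizes, and tanh nonlinearity are assumptions for illustration:

```python
# Hypothetical sketch: combine two per-frame feature streams through a
# non-linear multilayer perceptron to yield a third stream.
import numpy as np

rng = np.random.default_rng(0)
n_frames = 100
micro_mod = rng.normal(size=(n_frames, 20))  # first stream: micro-modulation features
other     = rng.normal(size=(n_frames, 39))  # second stream: other features

# Untrained two-layer perceptron (weights would normally be learned).
w1 = rng.normal(scale=0.1, size=(20 + 39, 64))
w2 = rng.normal(scale=0.1, size=(64, 40))

hidden = np.tanh(np.concatenate([micro_mod, other], axis=1) @ w1)
combined = np.tanh(hidden @ w2)  # third stream, usable for ASR / HMM training
print(combined.shape)            # (100, 40)
```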
Abstract:
Disclosed herein are systems, methods, and computer-readable storage media for detecting voice activity in a media signal in an augmented, multi-tier classifier architecture. A system configured to practice the method can receive, from a first classifier, a first voice activity indicator detected in a first modality for a human subject. Then, the system can receive, from a second classifier, a second voice activity indicator detected in a second modality for the human subject, wherein the first voice activity indicator and the second voice activity indicator are based on the human subject at a same time, and wherein the first modality and the second modality are different. The system can concatenate, via a third classifier, the first voice activity indicator and the second voice activity indicator with original features of the human subject, to yield a classifier output, and determine voice activity based on the classifier output.
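A minimal sketch of the third-tier fusion step; the two first-tier indicators are assumed given, and the fused classifier is a stand-in logistic unit rather than a trained model:

```python
# Hypothetical sketch: concatenate the two modality indicators with the
# original features and threshold a logistic score to decide voice activity.
import numpy as np

def fused_vad(audio_indicator: float, video_indicator: float,
              original_features: np.ndarray,
              weights: np.ndarray, bias: float) -> bool:
    """Third-tier classifier over [indicator1, indicator2, original features]."""
    x = np.concatenate([[audio_indicator, video_indicator], original_features])
    score = 1.0 / (1.0 + np.exp(-(weights @ x + bias)))
    return score > 0.5

rng = np.random.default_rng(0)
features = rng.normal(size=10)   # original per-frame features of the subject
weights = rng.normal(size=12)    # 2 indicators + 10 features
print(fused_vad(0.9, 0.8, features, weights, bias=0.0))
```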