Abstract:
Systems, methods, and apparatus for using different interfaces to receive from different devices representations of at least one audio signal. In some embodiments, each representation may be generated using at least one microphone of the respective device during a meeting attended by a plurality of participants. In some further embodiments, a first representation may be received from a first device via a telephone network, while a second representation may be received from a second device via a data network. In yet some further embodiments, the first and second representations may be processed to obtain a processed representation of the at least one audio signal.
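As an illustrative sketch (not part of the abstract), processing two representations of the same audio — one narrowband from a telephone network, one wideband from a data network — into a single processed representation could look like the following. The function name, the naive repetition-based upsampling, and the averaging combination are all hypothetical simplifications; a real system would use proper resampling and time alignment.

```python
def mix_representations(tel_samples, data_samples, tel_rate=8000, data_rate=16000):
    """Combine a narrowband telephone-network representation with a
    wideband data-network representation of the same audio signal.

    Hypothetical sketch: upsample the telephone stream by sample
    repetition to match the data-network rate, then average the two
    streams sample by sample to obtain one processed representation.
    """
    factor = data_rate // tel_rate
    # Naive upsampling by repetition (stand-in for real resampling).
    upsampled = [s for s in tel_samples for _ in range(factor)]
    # Truncate to the shorter stream, then mix by simple averaging.
    n = min(len(upsampled), len(data_samples))
    return [(upsampled[i] + data_samples[i]) / 2 for i in range(n)]
```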
Abstract:
Systems, methods and apparatus for capturing at least one audio signal using a plurality of microphones that generate a plurality of representations of the at least one audio signal. In some embodiments, the plurality of microphones are disposed in a multiple-microphone setting so that the at least one audio signal is captured by at least two of the plurality of microphones. In some embodiments, at least one of the plurality of microphones is a microphone of a mobile device. The plurality of representations of the at least one audio signal may be processed to obtain a processed representation of the at least one audio signal.
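One common way to process multiple microphone representations of the same signal is delay-and-sum combination. The sketch below is a hypothetical illustration, not the claimed method: it assumes integer sample delays for aligning the channels are already known, which in practice would be estimated (e.g. by cross-correlation).

```python
def delay_and_sum(channels, delays):
    """Combine several representations of one audio signal captured by
    different microphones (a delay-and-sum sketch).

    channels: list of equal-rate sample sequences, one per microphone.
    delays:   integer sample delay per channel, assumed known here.
    """
    # Align each channel by dropping its leading delay samples.
    aligned = [ch[d:] for ch, d in zip(channels, delays)]
    # Average the aligned channels over their common length.
    n = min(len(a) for a in aligned)
    return [sum(a[i] for a in aligned) / len(aligned) for i in range(n)]
```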
Abstract:
Systems, methods, and apparatus for using at least one mobile device to receive a representation of at least one audio signal. In some embodiments, the at least one audio signal comprises speech of at least one of a plurality of first participants of a meeting, the plurality of first participants participating in the meeting from a first location, and the at least one audio signal may be audibly rendered to at least one second participant of the meeting at a second location different from the first location. In some embodiments, the at least one mobile device may further receive an indication of an identity of a leading speaker of the speech in the at least one audio signal, the leading speaker being identified from among the plurality of first participants, and may render the identity of the leading speaker to the at least one second participant.
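Identifying a leading speaker from among several first-location participants could, as one hypothetical criterion, select the participant whose audio carries the most energy over a recent window. The function and the energy-based criterion below are illustrative assumptions, not the method recited in the abstract.

```python
def leading_speaker(frame_energies):
    """Identify the leading speaker from per-participant frame energies.

    frame_energies: dict mapping a participant identity to a sequence of
    recent frame energies for that participant's audio (hypothetical
    input shape). Returns the identity with the highest total energy.
    """
    return max(frame_energies, key=lambda pid: sum(frame_energies[pid]))
```

The returned identity could then be rendered to the second-location participant, for example as an on-screen label on the mobile device.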
Abstract:
Techniques for combining the results of multiple recognizers in a distributed speech recognition architecture. Speech data input to a client device is encoded and processed both locally and remotely by different recognizers configured to be proficient at different speech recognition tasks. The client/server architecture is configurable to enable network providers to specify a policy directed to a trade-off between reducing recognition latency perceived by a user and usage of network resources. The results of the local and remote speech recognition engines are combined based, at least in part, on logic stored by one or more components of the client/server architecture.
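The latency/network trade-off described above could be sketched as client-side combination logic: accept the fast on-device result when its confidence clears a policy threshold (avoiding or ignoring the network round trip), and otherwise prefer the higher-confidence result. The tuple format, the threshold parameter, and the selection rule are all assumptions for illustration, not the architecture's actual policy logic.

```python
def combine_results(local, remote, confidence_threshold=0.8):
    """Combine local and remote speech recognition results.

    local:  (transcript, confidence) from the on-device recognizer.
    remote: (transcript, confidence) from the server recognizer, or
            None if the policy chose not to wait for the network.
    """
    text, conf = local
    # Low-latency path: a confident local result is returned immediately.
    if conf >= confidence_threshold or remote is None:
        return text
    # Otherwise fall back to whichever recognizer is more confident.
    r_text, r_conf = remote
    return r_text if r_conf > conf else text
```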