Abstract:
A method of selectively authorizing access includes obtaining, at an authentication device, first information corresponding to first synthetic biometric data. The method also includes obtaining, at the authentication device, first common synthetic data and second biometric data. The method further includes generating, at the authentication device, second common synthetic data based on the first information and the second biometric data. The method also includes selectively authorizing, by the authentication device, access based on a comparison of the first common synthetic data and the second common synthetic data.
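A minimal sketch of the comparison-based authorization flow described above, assuming the "first information" is a seeded random projection and the "common synthetic data" is the projected, normalized biometric feature vector. Every name, the projection scheme, and the similarity threshold here are hypothetical, since the abstract does not specify how the synthetic data are generated.

```python
import numpy as np

def make_projection(seed: int, dim_in: int, dim_out: int) -> np.ndarray:
    """First information: a seeded random projection (hypothetical stand-in)."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((dim_out, dim_in))

def common_synthetic(projection: np.ndarray, biometric: np.ndarray) -> np.ndarray:
    """Map a biometric feature vector into the 'common synthetic' domain."""
    v = projection @ biometric
    return v / np.linalg.norm(v)

def authorize(first_common: np.ndarray, second_common: np.ndarray,
              threshold: float = 0.9) -> bool:
    """Grant access only if the two common representations agree closely."""
    return float(first_common @ second_common) >= threshold

# Enrollment: first information + first biometric -> first common synthetic data.
proj = make_projection(seed=42, dim_in=128, dim_out=64)
enrolled = np.random.default_rng(1).standard_normal(128)
first_common = common_synthetic(proj, enrolled)

# Authentication: same first information + freshly captured second biometric.
probe = enrolled + 0.05 * np.random.default_rng(2).standard_normal(128)
second_common = common_synthetic(proj, probe)
print("access granted:", authorize(first_common, second_common))
```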
Abstract:
A method for encoding three-dimensional audio by a wireless communication device is disclosed. The wireless communication device detects an indication of a plurality of localizable audio sources. The wireless communication device also records a plurality of audio signals associated with the plurality of localizable audio sources. The wireless communication device further encodes the plurality of audio signals.
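One way to picture the detect/record/encode pipeline, with stand-ins for each stage: source detection is reduced to an energy threshold over candidate directions, capture to a synthetic tone, and encoding to 16-bit PCM. All of these are placeholders, as the abstract names neither the detection method nor the codec.

```python
import numpy as np

SAMPLE_RATE = 16_000

def detect_sources(energy_per_direction: dict[float, float],
                   threshold: float = 0.1) -> list[float]:
    """Flag directions whose energy exceeds a threshold as localizable sources."""
    return [az for az, e in energy_per_direction.items() if e > threshold]

def record(azimuth: float, duration_s: float = 0.1) -> np.ndarray:
    """Stand-in for per-source capture (e.g., a beam steered toward `azimuth`)."""
    t = np.arange(int(duration_s * SAMPLE_RATE)) / SAMPLE_RATE
    return np.sin(2 * np.pi * (440 + azimuth) * t)  # placeholder content

def encode(signal: np.ndarray) -> bytes:
    """Toy encoder: 16-bit PCM; a real device would use a speech/audio codec."""
    return (np.clip(signal, -1, 1) * 32767).astype("<i2").tobytes()

directions = detect_sources({0.0: 0.8, 90.0: 0.5, 180.0: 0.02})
packets = [encode(record(az)) for az in directions]
print(f"{len(packets)} sources encoded")
```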
Abstract:
In general, techniques are described for forming a collaborative sound system. A headend device comprising one or more processors may perform the techniques. The processors may be configured to identify mobile devices that each include a speaker and that are available to participate in a collaborative surround sound system. The processors may configure the collaborative surround sound system to utilize the speaker of each of the mobile devices as one or more virtual speakers of the system, and then render audio signals from an audio source such that, when the audio signals are played by the speakers of the mobile devices, the audio playback appears to originate from the one or more virtual speakers of the collaborative surround sound system. The processors may then transmit the rendered audio signals to the mobile devices participating in the collaborative surround sound system.
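A sketch of the rendering step under one simple assumption: the headend pans a mono frame across the participating devices with gains that favor devices nearest the virtual speaker's position. The gain law, device model, and all names are hypothetical; the abstract does not specify the rendering algorithm.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class MobileDevice:
    name: str
    azimuth_deg: float  # reported position relative to the listener

def render_for_virtual_speaker(devices, virtual_azimuth_deg, mono_frame):
    """Simple gain panning: devices nearer the virtual speaker play louder."""
    az = np.radians([d.azimuth_deg for d in devices])
    target = np.radians(virtual_azimuth_deg)
    weights = np.cos(az - target).clip(min=0.0)
    weights /= weights.sum() or 1.0
    return {d.name: w * mono_frame for d, w in zip(devices, weights)}

devices = [MobileDevice("phoneA", -30), MobileDevice("phoneB", 30)]
frame = np.random.default_rng(0).standard_normal(480)  # 10 ms at 48 kHz
signals = render_for_virtual_speaker(devices, virtual_azimuth_deg=0, mono_frame=frame)
for name, sig in signals.items():
    print(name, "peak:", round(float(np.abs(sig).max()), 3))  # then transmit
```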
Abstract:
In general, techniques are described for performing constrained dynamic amplitude panning in collaborative sound systems. A headend device comprising one or more processors may perform the techniques. The processors may be configured to identify, for a mobile device participating in a collaborative surround sound system, a specified location of a virtual speaker of the collaborative surround sound system and determine a constraint that impacts playback of audio signals rendered from an audio source by the mobile device. The processors may be further configured to perform dynamic spatial rendering of the audio source, with the determined constraint, to render audio signals in a manner that reduces the impact of the determined constraint during playback of the audio signals by the mobile device.
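To make "reducing the impact of the determined constraint" concrete, here is one toy constraint model: a device's panning gain is scaled down as its battery approaches a floor, letting the headend reallocate level to other devices. The battery constraint and the linear scaling are assumptions; the abstract leaves the constraint abstract.

```python
def constrained_gain(ideal_gain: float, battery_level: float,
                     min_battery: float = 0.2) -> float:
    """Scale back a device's panning gain as its battery nears a floor.
    Hypothetical constraint model; real constraints might also include
    frequency response or maximum output level."""
    if battery_level <= min_battery:
        return 0.0
    headroom = (battery_level - min_battery) / (1.0 - min_battery)
    return ideal_gain * headroom

# A device at 60% battery plays the virtual speaker's feed at reduced level;
# the headend can redistribute the remainder to other devices.
print(constrained_gain(ideal_gain=1.0, battery_level=0.6))  # 0.5
```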
Abstract:
Apparatus and methods for audio noise attenuation are disclosed. An audio signal analyzer can determine whether an input audio signal received from a microphone device includes a noise signal having identifiable content. If there is a noise signal having identifiable content, a content source is accessed to obtain a copy of the noise signal. An audio canceller can generate a processed audio signal, having an attenuated noise signal, based on comparing the copy of the noise signal to the input audio signal. Additionally or alternatively, data may be communicated on a communication channel to a separate media device to receive at least a portion of the copy of the noise signal from the separate media device, or to receive content-identification data corresponding to the content source.
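A minimal sketch of the cancellation step once the copy of the noise signal has been obtained: align the copy to the microphone signal by cross-correlation, fit a scalar gain by least squares, and subtract. A production canceller would more likely use an adaptive filter (e.g., NLMS); the alignment-plus-gain approach here is only illustrative.

```python
import numpy as np

def cancel_known_noise(mic: np.ndarray, noise_copy: np.ndarray) -> np.ndarray:
    """Align the noise copy to the mic signal, estimate a scalar gain by
    least squares, and subtract the scaled copy."""
    corr = np.correlate(mic, noise_copy, mode="full")
    lag = int(np.argmax(corr)) - (len(noise_copy) - 1)
    shifted = np.roll(noise_copy, lag)
    gain = float(shifted @ mic) / float(shifted @ shifted)
    return mic - gain * shifted

rng = np.random.default_rng(0)
noise = rng.standard_normal(1000)          # copy obtained from the content source
speech = 0.3 * rng.standard_normal(1000)   # stand-in for the desired signal
mic = speech + 0.8 * np.roll(noise, 5)     # mic hears delayed, scaled noise
out = cancel_known_noise(mic, noise)
print("residual noise power:", round(float(np.mean((out - speech) ** 2)), 4))
```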
Abstract:
A method for encoding multiple directional audio signals using an integrated codec by a wireless communication device is disclosed. The wireless communication device records a plurality of directional audio signals. The wireless communication device also generates a plurality of audio signal packets based on the plurality of directional audio signals. At least one of the audio signal packets includes an averaged signal. The wireless communication device further transmits the plurality of audio signal packets.
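The packet structure can be illustrated in a few lines, assuming one packet per directional signal plus one packet carrying their sample-wise average. The 16-bit PCM payload and the layout are hypothetical, since the abstract requires only that at least one packet include an averaged signal.

```python
import numpy as np

def packetize(directional: list[np.ndarray]) -> list[bytes]:
    """Build one packet per directional signal plus one carrying their average."""
    def to_packet(sig: np.ndarray) -> bytes:
        return (np.clip(sig, -1, 1) * 32767).astype("<i2").tobytes()
    packets = [to_packet(sig) for sig in directional]
    packets.append(to_packet(np.mean(directional, axis=0)))  # averaged signal
    return packets

rng = np.random.default_rng(0)
channels = [0.1 * rng.standard_normal(160) for _ in range(4)]  # e.g., 4 beams
print(len(packetize(channels)), "packets ready to transmit")
```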
Abstract:
In general, techniques are described that enable voice activation for computing devices. A computing device comprising a memory and one or more processors, and configured to support an audible interface, may be configured to perform the techniques. The memory may store a first audio signal representative of an environment external to a user associated with the computing device and a second audio signal sensed by a microphone coupled to a housing of the computing device. The one or more processors may verify, based on the first audio signal and the second audio signal, that the user activated the audible interface of the computing device, and obtain, based on the verification, additional audio signals representative of one or more audible commands.
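One heuristic reading of the two-signal verification: accept the activation only when both captures contain the same utterance (high normalized correlation) and the housing-mounted microphone hears it substantially louder, suggesting the device's own user spoke. Both tests and their thresholds are assumptions; the abstract leaves the verification method open.

```python
import numpy as np

def verify_activation(external: np.ndarray, housing: np.ndarray,
                      corr_floor: float = 0.5, level_ratio: float = 2.0) -> bool:
    """Require the same utterance in both signals and a louder housing mic."""
    a = external - external.mean()
    b = housing - housing.mean()
    corr = float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    louder = float(b @ b) > level_ratio * float(a @ a)
    return corr > corr_floor and louder

rng = np.random.default_rng(0)
utterance = rng.standard_normal(1600)
external = 0.3 * utterance + 0.05 * rng.standard_normal(1600)
housing = 1.0 * utterance + 0.05 * rng.standard_normal(1600)
print("activation verified:", verify_activation(external, housing))
```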
Abstract:
A device includes one or more processors configured to determine, based on data descriptive of two or more audio environments, a geometry of a mutual audio environment. The one or more processors are also configured to process audio data, based on the geometry of the mutual audio environment, for output at an audio device disposed in a first audio environment of the two or more audio environments.
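A toy interpretation of the geometry step: if each environment is modeled as a box, a "mutual" environment that fits inside every room is the per-dimension minimum, and its volume can drive a rendering parameter such as a reverb send level. Both the box model and the volume-to-gain mapping are hypothetical; the abstract does not define how the geometry is derived.

```python
from dataclasses import dataclass

@dataclass
class Room:
    width: float
    depth: float
    height: float

def mutual_geometry(rooms: list[Room]) -> Room:
    """One plausible reading: the mutual environment fits inside every room,
    so take the per-dimension minimum (assumption, not the patented method)."""
    return Room(min(r.width for r in rooms),
                min(r.depth for r in rooms),
                min(r.height for r in rooms))

def reverb_gain(room: Room) -> float:
    """Toy mapping from room volume to a reverb send level for the renderer."""
    volume = room.width * room.depth * room.height
    return min(1.0, volume / 100.0)

shared = mutual_geometry([Room(5, 4, 3), Room(3, 6, 2.5)])
print(shared, "reverb gain:", reverb_gain(shared))
```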
Abstract:
A device for speech generation includes one or more processors configured to receive one or more control parameters indicating target speech characteristics. The one or more processors are also configured to process, using a multi-encoder, an input representation of speech based on the one or more control parameters to generate encoded data corresponding to an audio signal that represents a version of the speech based on the target speech characteristics.
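A structural sketch of a multi-encoder in PyTorch: one branch encodes the input speech representation, another encodes the control parameters, and the two are fused per frame into the encoded data. Layer choices, dimensions, and names are placeholders; the abstract does not give the architecture.

```python
import torch
from torch import nn

class MultiEncoder(nn.Module):
    """Two encoder branches whose outputs are fused per frame (sketch only)."""
    def __init__(self, feat_dim=80, ctrl_dim=8, hidden=128):
        super().__init__()
        self.speech_enc = nn.GRU(feat_dim, hidden, batch_first=True)
        self.ctrl_enc = nn.Linear(ctrl_dim, hidden)
        self.fuse = nn.Linear(2 * hidden, hidden)

    def forward(self, speech_feats, controls):
        h, _ = self.speech_enc(speech_feats)            # (B, T, hidden)
        c = self.ctrl_enc(controls).unsqueeze(1)        # (B, 1, hidden)
        c = c.expand(-1, h.size(1), -1)                 # broadcast over time
        return self.fuse(torch.cat([h, c], dim=-1))     # encoded data

enc = MultiEncoder()
feats = torch.randn(1, 50, 80)      # e.g., 50 frames of mel features
controls = torch.randn(1, 8)        # target speech characteristics
print(enc(feats, controls).shape)   # torch.Size([1, 50, 128])
```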
Abstract:
A device includes a memory configured to store untransformed ambisonic coefficients at different time segments. The device also includes one or more processors configured to obtain the untransformed ambisonic coefficients at the different time segments, where the untransformed ambisonic coefficients at the different time segments represent a soundfield at the different time segments. The one or more processors are also configured to apply an adaptive network, based on a constraint, to the untransformed ambisonic coefficients at the different time segments to generate transformed ambisonic coefficients at the different time segments, where the transformed ambisonic coefficients at the different time segments represent, at the different time segments, a soundfield modified based on the constraint.
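An illustrative shape-level sketch of applying an adaptive network to ambisonic coefficients, with the constraint supplied as a conditioning vector concatenated to first-order coefficients at each time segment. The network, the constraint encoding, and all dimensions are assumptions; the abstract describes neither.

```python
import torch
from torch import nn

N_COEFFS = 4   # first-order ambisonics: W, X, Y, Z

class AdaptiveHOATransform(nn.Module):
    """Maps untransformed ambisonic coefficients to transformed ones, with the
    constraint supplied as a conditioning vector (illustrative only)."""
    def __init__(self, constraint_dim=4, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_COEFFS + constraint_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, N_COEFFS),
        )

    def forward(self, hoa, constraint):
        # hoa: (segments, N_COEFFS); constraint: (constraint_dim,)
        cond = constraint.expand(hoa.size(0), -1)
        return self.net(torch.cat([hoa, cond], dim=-1))

segments = torch.randn(10, N_COEFFS)        # coefficients at 10 time segments
constraint = torch.tensor([1.0, 0, 0, 0])   # hypothetical constraint encoding
print(AdaptiveHOATransform()(segments, constraint).shape)  # (10, 4)
```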