Abstract:
An audio coding and decoding apparatus is disclosed. The audio coding apparatus may include an audio signal encoding unit to encode an audio signal; and a bitstream transmission unit to convert the audio signal into a bitstream and transmit the bitstream, wherein the audio signal comprises a channel audio signal, an object audio signal, and a reverberation signal of the object audio signal.
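As a rough, non-authoritative illustration of the bitstream transmission unit described above, the sketch below packs the three signal types (channel audio, object audio, and the object's reverberation signal) into one bitstream; the helper names (encode_pcm, build_bitstream) and the trivial float32 packing are assumptions standing in for the actual encoding unit.

```python
# Minimal sketch only; the float32 packing is a placeholder for the real coder.
import numpy as np


def encode_pcm(signal: np.ndarray) -> bytes:
    # Placeholder "codec": clipped float32 samples stand in for the encoded payload.
    return np.clip(signal, -1.0, 1.0).astype(np.float32).tobytes()


def build_bitstream(channel: np.ndarray,
                    obj: np.ndarray,
                    obj_reverb: np.ndarray) -> bytes:
    """Concatenate the three encoded components with simple length headers."""
    out = b""
    for part in (encode_pcm(channel), encode_pcm(obj), encode_pcm(obj_reverb)):
        out += len(part).to_bytes(4, "little") + part
    return out
```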
Abstract:
An encoding method, a decoding method, an encoder for performing the encoding method, and a decoder for performing the decoding method are provided. The encoding method includes outputting a linear prediction (LP) coefficient bitstream and a residual signal by performing an LP analysis on an input signal, outputting a first latent signal obtained by encoding a periodic component of the residual signal, a second latent signal obtained by encoding a non-periodic component of the residual signal, and a weight vector for each of the first latent signal and the second latent signal, using a first neural network module, and outputting a first bitstream obtained by quantizing the first latent signal, a second bitstream obtained by quantizing the second latent signal, and a weight bitstream obtained by quantizing the weight vector, using a quantization module.
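The signal flow through the first neural network module and the quantization module might be sketched as follows; the split of the residual into periodic and non-periodic parts is assumed to be given, and the layer sizes, the softmax weight head, and the uniform quantizer are illustrative assumptions rather than the disclosed design.

```python
# Hedged sketch of the encoder-side flow only.
import torch
import torch.nn as nn


class ResidualSplitEncoder(nn.Module):
    """Encodes periodic and non-periodic residual parts into two latents plus a weight vector."""

    def __init__(self, frame: int = 256, latent: int = 64):
        super().__init__()
        self.periodic_enc = nn.Linear(frame, latent)
        self.aperiodic_enc = nn.Linear(frame, latent)
        self.weight_head = nn.Linear(2 * latent, 2)

    def forward(self, periodic: torch.Tensor, aperiodic: torch.Tensor):
        z1 = self.periodic_enc(periodic)        # first latent signal
        z2 = self.aperiodic_enc(aperiodic)      # second latent signal
        w = torch.softmax(self.weight_head(torch.cat([z1, z2], dim=-1)), dim=-1)
        return z1, z2, w                        # w: weight vector for the two latents


def quantize(x: torch.Tensor, step: float = 0.05) -> torch.Tensor:
    # Simple uniform quantizer standing in for the quantization module.
    return torch.round(x / step) * step


enc = ResidualSplitEncoder()
periodic, aperiodic = torch.randn(1, 256), torch.randn(1, 256)   # split residual (assumed given)
z1, z2, w = enc(periodic, aperiodic)
q1, q2, qw = quantize(z1), quantize(z2), quantize(w)             # payloads of the three bitstreams
```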
Abstract:
Provided is a method and apparatus for designing and testing an audio codec using quantization based on white noise modeling. A neural network-based audio encoder design method includes generating a quantized latent vector and a reconstructed signal corresponding to an input signal by using a white noise modeling-based quantization process, computing a total loss for training a neural network-based audio codec, based on the input signal, the reconstructed signal, and the quantized latent vector, training the neural network-based audio codec by using the total loss, and validating the trained neural network-based audio codec to select the best neural network-based audio codec.
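A common way to realize white noise modeling-based quantization during training is to add uniform noise to the latent vector and switch to hard rounding at inference; the sketch below follows that assumption, and the loss terms, weights, and rate surrogate are illustrative only, not the disclosed formulation.

```python
# Hedged sketch: additive uniform noise models the quantization error during
# training; hard rounding is used when testing the trained codec.
import torch


def noisy_quantize(latent: torch.Tensor, training: bool, step: float = 1.0) -> torch.Tensor:
    if training:
        noise = (torch.rand_like(latent) - 0.5) * step    # white-noise model of quantization
        return latent + noise
    return torch.round(latent / step) * step              # hard quantization for testing


def total_loss(x: torch.Tensor, x_hat: torch.Tensor, z_q: torch.Tensor,
               rate_weight: float = 0.01) -> torch.Tensor:
    distortion = torch.mean((x - x_hat) ** 2)             # reconstruction term
    rate_proxy = torch.mean(torch.abs(z_q))               # crude bit-rate surrogate
    return distortion + rate_weight * rate_proxy
```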
Abstract:
A method and apparatus for rendering an object-based audio signal considering an obstacle are disclosed. A method for rendering an object-based audio signal according to an example embodiment includes identifying an object-based input signal and metadata for the input signal, generating a binaural filter based on the metadata using a binaural room impulse response (BRIR), determining, based on the metadata, whether an obstacle is present between a listener and an object, modifying the generated binaural filter when it is determined that the obstacle is present, and generating a rendered output signal by convolving the modified binaural filter and the input signal.
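One way to picture the obstacle handling is sketched below; the line-of-sight test and the broadband attenuation used to modify the binaural filter are assumptions, not the disclosed rules, and the metadata keys are hypothetical.

```python
# Illustrative sketch of the rendering flow described in the abstract.
import numpy as np


def obstacle_between(listener_pos, object_pos, obstacles) -> bool:
    # Hypothetical check: an obstacle lies close to the listener-object segment.
    lp, op = np.asarray(listener_pos, float), np.asarray(object_pos, float)
    seg = op - lp
    for obs in obstacles:
        obs = np.asarray(obs, float)
        t = np.clip(np.dot(obs - lp, seg) / np.dot(seg, seg), 0.0, 1.0)
        if np.linalg.norm(lp + t * seg - obs) < 0.5:     # within 0.5 m of the path
            return True
    return False


def render(input_sig: np.ndarray, brir_lr: np.ndarray, metadata: dict) -> np.ndarray:
    filt = brir_lr.astype(float).copy()                  # binaural filter from the BRIR
    if obstacle_between(metadata["listener"], metadata["object"],
                        metadata.get("obstacles", [])):
        filt *= 0.3                                      # modify the filter (simple attenuation)
    left = np.convolve(input_sig, filt[0])               # convolve filter and input signal
    right = np.convolve(input_sig, filt[1])
    return np.stack([left, right])
```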
Abstract:
Disclosed is a method and apparatus for label encoding in a multi-sound event interval. The method includes identifying an event interval in which a plurality of sound events occurs in a sound signal, separating a sound source into sound event signals corresponding to each sound event by performing sound source separation on the event interval, determining energy information for each of the sound event signals, and performing label encoding based on the energy information.
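A minimal sketch of the energy-based label encoding is given below, assuming the separated per-event signals are already available from the sound source separation step; the energy-proportional soft labels are an illustrative choice, not necessarily the disclosed encoding.

```python
# Sketch: energy-weighted label vector over the known sound-event classes.
import numpy as np


def energy(signal: np.ndarray) -> float:
    return float(np.sum(signal ** 2))


def soft_label_encode(event_signals: dict[str, np.ndarray],
                      class_list: list[str]) -> np.ndarray:
    """Label vector whose entries are proportional to each event's energy."""
    energies = np.array([energy(event_signals.get(c, np.zeros(1)))
                         for c in class_list])
    total = energies.sum()
    return energies / total if total > 0 else energies
```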
Abstract:
Disclosed are a training method for a learning model for recognizing an acoustic signal, a method of recognizing an acoustic signal using the learning model, and devices for performing the methods. The method of recognizing an acoustic signal using a learning model includes identifying an acoustic signal including an acoustic event or acoustic scene, determining an acoustic feature of the acoustic signal, dividing the determined acoustic feature for each of a plurality of frequency band intervals, and determining the acoustic event or acoustic scene included in the acoustic signal by inputting the divided acoustic features to a trained learning model.
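The band-wise feature splitting can be sketched as below; the number of bands, the feature type, and the network layout are illustrative assumptions rather than the disclosed model.

```python
# Sketch of per-band acoustic features feeding a classifier for events/scenes.
import torch
import torch.nn as nn


class BandwiseClassifier(nn.Module):
    def __init__(self, n_bands: int = 4, bins_per_band: int = 32,
                 frames: int = 100, n_classes: int = 10):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Flatten(),
                          nn.Linear(bins_per_band * frames, 64),
                          nn.ReLU())
            for _ in range(n_bands)])
        self.head = nn.Linear(64 * n_bands, n_classes)

    def forward(self, feature: torch.Tensor) -> torch.Tensor:
        # feature: (batch, freq_bins, frames); split along frequency into band intervals
        bands = torch.chunk(feature, len(self.branches), dim=1)
        embeddings = [branch(b) for branch, b in zip(self.branches, bands)]
        return self.head(torch.cat(embeddings, dim=-1))


model = BandwiseClassifier()
logits = model(torch.randn(2, 128, 100))   # e.g. 128 mel bins split into 4 bands
```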
Abstract:
Disclosed is an LPC residual signal encoding/decoding apparatus of an MDCT-based unified voice and audio encoding device. The LPC residual signal encoding apparatus analyzes a property of an input signal, selects an encoding method for the LPC filtered signal, and encodes the LPC residual signal based on one of a real filterbank, a complex filterbank, and algebraic code-excited linear prediction (ACELP).
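A schematic of the mode selection is sketched below; the spectral-flatness decision rule, the thresholds, and the stub payload are assumptions standing in for the actual signal analysis and the three coding tools.

```python
# Sketch of property-based selection among the three residual coding modes.
import numpy as np


def spectral_flatness(x: np.ndarray) -> float:
    mag = np.abs(np.fft.rfft(x)) + 1e-12
    return float(np.exp(np.mean(np.log(mag))) / np.mean(mag))


def encode_lpc_residual(residual: np.ndarray) -> tuple[str, bytes]:
    flat = spectral_flatness(residual)
    if flat > 0.6:                        # noise-like frame (assumed rule)
        mode = "acelp"
    elif flat > 0.3:
        mode = "real_filterbank"
    else:                                 # strongly tonal frame
        mode = "complex_filterbank"
    payload = residual.astype(np.float32).tobytes()   # stand-in for the real coder output
    return mode, payload
```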
Abstract:
An audio signal encoding and decoding method using a learning model, a training method of the learning model, and an encoder and decoder that perform the methods are disclosed. The audio signal decoding method may include extracting a first residual signal and a first linear prediction coefficient by decoding a bitstream received from an encoder, generating a first audio signal from the first residual signal using the first linear prediction coefficient, generating a second linear prediction coefficient and a second residual signal from the first audio signal, obtaining a third linear prediction coefficient by inputting the second linear prediction coefficient into a trained learning model, and generating a second audio signal from the second residual signal using the third linear prediction coefficient.
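The decoder-side flow might be sketched as follows, using standard LP analysis and synthesis filters; enhance_lpc is a hypothetical stand-in for the trained learning model, and the autocorrelation-based re-analysis is an assumption about how the second coefficients are obtained.

```python
# Sketch of the decoding flow: synthesize, re-analyze, enhance coefficients, re-synthesize.
import numpy as np
from scipy.signal import lfilter
from scipy.linalg import solve_toeplitz


def lp_synthesis(residual: np.ndarray, lpc: np.ndarray) -> np.ndarray:
    # 1/A(z) filtering, where lpc holds a_1..a_p of A(z) = 1 + sum a_k z^-k
    return lfilter([1.0], np.concatenate(([1.0], lpc)), residual)


def lp_analysis(audio: np.ndarray, order: int = 16):
    r = np.correlate(audio, audio, "full")[len(audio) - 1:len(audio) + order]
    lpc = solve_toeplitz(r[:-1], -r[1:])                       # second LP coefficients
    residual = lfilter(np.concatenate(([1.0], lpc)), [1.0], audio)
    return lpc, residual


def decode(first_residual: np.ndarray, first_lpc: np.ndarray, enhance_lpc):
    first_audio = lp_synthesis(first_residual, first_lpc)       # first audio signal
    second_lpc, second_residual = lp_analysis(first_audio)      # re-analysis
    third_lpc = enhance_lpc(second_lpc)                         # trained learning model
    return lp_synthesis(second_residual, third_lpc)             # second audio signal
```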
Abstract:
An audio metadata providing apparatus and method and a multichannel audio data playback apparatus and method to support a dynamic format conversion are provided. Dynamic format conversion information may include information about a plurality of format conversion schemes that are used to convert a first format set by a writer of multichannel audio data into a second format that is based on a playback environment of the multichannel audio data and that are set for each of playback periods of the multichannel audio data. The audio metadata providing apparatus may provide audio metadata including the dynamic format conversion information. The multichannel audio data playback apparatus may identify the dynamic format conversion information from the audio metadata, may convert the first format of the multichannel audio data into the second format based on the identified dynamic format conversion information, and may play back the multichannel audio data with the second format.
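A possible layout for the dynamic format conversion information is sketched below; the field names are assumptions chosen to mirror the abstract (a first format set by the writer, per-playback-period conversion schemes, and a second format based on the playback environment).

```python
# Illustrative metadata layout only, not the disclosed syntax.
from dataclasses import dataclass


@dataclass
class FormatConversionRule:
    start_time: float            # playback period start (seconds)
    end_time: float              # playback period end (seconds)
    scheme: str                  # e.g. an identifier of a downmix/conversion scheme


@dataclass
class DynamicFormatConversionInfo:
    first_format: str            # format set by the writer, e.g. "7.1.4"
    rules: list[FormatConversionRule]

    def scheme_at(self, t: float, second_format: str) -> str:
        """Pick the conversion scheme for playback time t and the target format."""
        for r in self.rules:
            if r.start_time <= t < r.end_time:
                return r.scheme
        return f"default_downmix_to_{second_format}"
```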
Abstract:
Disclosed are methods of encoding and decoding an audio signal, and an encoder and a decoder for performing the methods. The method of encoding an audio signal includes identifying an input signal corresponding to a low frequency band of the audio signal, windowing the input signal, generating a first latent vector by inputting the windowed input signal to a first encoding model, transforming the windowed input signal into a frequency domain, generating a second latent vector by inputting the transformed input signal to a second encoding model, generating a final latent vector by combining the first latent vector and the second latent vector, and generating a bitstream corresponding to the final latent vector.
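The two-branch encoding path can be sketched as follows; the window, the magnitude-spectrum transform, and the layer sizes are illustrative assumptions rather than the disclosed configuration.

```python
# Sketch: time-domain and frequency-domain latents combined into a final latent.
import torch
import torch.nn as nn


class DualDomainEncoder(nn.Module):
    def __init__(self, frame: int = 512, latent: int = 64):
        super().__init__()
        self.window = torch.hann_window(frame)
        self.time_enc = nn.Linear(frame, latent)            # first encoding model
        self.freq_enc = nn.Linear(frame // 2 + 1, latent)   # second encoding model
        self.merge = nn.Linear(2 * latent, latent)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        xw = x * self.window                                # windowing the input signal
        z_time = self.time_enc(xw)                          # first latent vector
        spec = torch.abs(torch.fft.rfft(xw, dim=-1))        # frequency-domain representation
        z_freq = self.freq_enc(spec)                        # second latent vector
        return self.merge(torch.cat([z_time, z_freq], dim=-1))   # final latent vector


z = DualDomainEncoder()(torch.randn(1, 512))                # latent to be written to the bitstream
```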