Abstract:
Disclosed is a bitstream generation method performed by an acoustic data transmission (ADT) encoder, the method including receiving a first audio signal, receiving additional information converted into a bitstream, and transmitting a second audio signal obtained by inserting the bitstream into the first audio signal, to an ADT decoder.
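The abstract leaves the insertion mechanism unspecified; a minimal sketch, assuming a least-significant-bit embedding over 16-bit PCM samples (the function names and the LSB scheme are illustrative stand-ins, not the patented ADT method):

```python
import numpy as np

def adt_encode(first_signal, bits):
    # Hypothetical sketch: embed an auxiliary bitstream into the audio
    # signal by replacing the least significant bit of each 16-bit sample.
    second_signal = first_signal.copy()
    second_signal[: len(bits)] = (second_signal[: len(bits)] & ~1) | bits
    return second_signal

def adt_decode(second_signal, n_bits):
    # Recover the embedded bitstream from the sample LSBs.
    return second_signal[:n_bits] & 1
```

Under this toy scheme, the "second audio signal" differs from the first by at most one quantization step per sample, so the embedded data is inaudible at 16-bit depth.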
Abstract:
Disclosed are an apparatus and a method of controlling an eye-to-eye contact function, which provide natural eye-to-eye contact by controlling the eye-to-eye contact function based on gaze information about a local participant and position information about a remote participant on a screen when the eye-to-eye contact function is provided using an image combination method or the like in a teleconference system, thereby improving immersion in the teleconference.
Abstract:
Disclosed are an apparatus and a method for encoding/decoding an audio signal using information of a previous frame. An audio signal encoding method includes: generating a current latent vector by reducing the dimensionality of a current frame of an audio signal; generating a concatenation vector by concatenating the current latent vector with a previous latent vector generated by reducing the dimensionality of a previous frame of the audio signal; and encoding and quantizing the concatenation vector.
Abstract:
An audio signal encoding method performed by an encoder includes identifying an audio signal of a time domain in units of a block, generating a combined block by combining i) a current original block of the audio signal and ii) a previous original block chronologically adjacent to the current original block, extracting a first residual signal of a frequency domain from the combined block using linear predictive coding of a time domain, overlapping chronologically adjacent first residual signals among first residual signals converted into a time domain, and quantizing a second residual signal of a time domain extracted from the overlapped first residual signal by converting the second residual signal of the time domain into a frequency domain using linear predictive coding of a frequency domain.
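The overlapping of chronologically adjacent residual signals described above can be pictured as a windowed overlap-add; the sketch below assumes a Hann window and a fixed hop size, neither of which is specified in the abstract:

```python
import numpy as np

def overlap_adjacent(residual_blocks, hop):
    # Hypothetical sketch: overlap-add chronologically adjacent residual
    # blocks with a Hann window (block length, window, and hop size are
    # assumptions for illustration only).
    block_len = len(residual_blocks[0])
    window = np.hanning(block_len)
    out = np.zeros(hop * (len(residual_blocks) - 1) + block_len)
    for i, block in enumerate(residual_blocks):
        out[i * hop : i * hop + block_len] += window * block
    return out
```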
Abstract:
A method and apparatus for performing binaural rendering of an audio signal are provided. The method includes identifying an input signal that is based on an object, and metadata that includes distance information indicating a distance to the object, generating a binaural filter that is based on the metadata, using a binaural room impulse response, obtaining a binaural filter to which a low-pass filter (LPF) is applied, using a frequency response control that is based on the distance information, and generating a binaural-rendered output signal by performing a convolution of the input signal and the binaural filter to which the LPF is applied.
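One way to realize the distance-based frequency response control is sketched below, assuming a one-pole low-pass filter whose cutoff falls as distance grows; the cutoff law, sample rate, and filter order are assumptions, not details given in the abstract:

```python
import numpy as np

def distance_lpf(brir, distance, fs=48000):
    # Hypothetical frequency-response control: attenuate high frequencies
    # of the binaural filter more strongly for larger distances, using a
    # one-pole low-pass whose cutoff falls with distance (assumed law).
    cutoff = max(200.0, 16000.0 / (1.0 + distance))
    alpha = np.exp(-2.0 * np.pi * cutoff / fs)
    out = np.zeros(len(brir))
    y = 0.0
    for n, x in enumerate(brir):
        y = (1.0 - alpha) * x + alpha * y
        out[n] = y
    return out

def binaural_render(input_signal, brir, distance):
    # Convolve the input with the LPF-adjusted binaural filter.
    return np.convolve(input_signal, distance_lpf(brir, distance))
```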
Abstract:
Disclosed are an audio signal encoding method and audio signal decoding method, and an encoder and decoder performing the same. The audio signal encoding method includes applying an audio signal to a training model including N autoencoders provided in a cascade structure, encoding an output result derived through the training model, and generating a bitstream with respect to the audio signal based on the encoded output result.
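The abstract does not define how the N cascaded autoencoders interact; one plausible reading, sketched here with toy linear autoencoders, is that each stage encodes the reconstruction residual left by the previous stage:

```python
import numpy as np

class LinearAE:
    # Toy linear autoencoder standing in for one trained autoencoder in
    # the cascade (the network type is not fixed by the abstract).
    def __init__(self, dim, code_dim, seed):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((code_dim, dim)) / np.sqrt(dim)

    def encode(self, x):
        return self.W @ x

    def decode(self, z):
        return self.W.T @ z

def cascade_encode(x, autoencoders):
    # Assumed cascade structure: each stage encodes the residual left by
    # the previous stage; the bitstream would be built from the (later
    # quantized) codes of all N stages.
    codes, residual = [], x.astype(float)
    for ae in autoencoders:
        z = ae.encode(residual)
        codes.append(z)
        residual = residual - ae.decode(z)
    return codes, residual
```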
Abstract:
Disclosed are a method of processing a residual signal for audio coding and an audio coding apparatus. The method learns a feature map of a reference signal through a residual signal learning engine including a convolutional layer and a neural network, and performs learning based on a result obtained by mapping a node of an output layer of the neural network to a quantization level index of the residual signal.
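A forward pass of such a learning engine might look like the following sketch, in which a 1-D convolution produces the feature map and each output-layer node corresponds to one quantization level; the layer shapes and the argmax readout are assumptions:

```python
import numpy as np

def predict_quant_index(reference, conv_kernel, W_out):
    # Hypothetical sketch of the residual signal learning engine: a 1-D
    # convolution yields a feature map of the reference signal, and an
    # output layer maps features to one node per quantization level;
    # the argmax node is read out as the residual's quantization index.
    feature_map = np.convolve(reference, conv_kernel, mode="valid")
    hidden = np.maximum(feature_map, 0.0)   # ReLU nonlinearity (assumed)
    logits = W_out @ hidden                 # one logit per quantization level
    return int(np.argmax(logits))
```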
Abstract:
Disclosed are a method for coding a residual signal of LPC coefficients based on collaborative quantization and a computing device for performing the method. The residual signal coding method includes: generating encoded LPC coefficients and an LPC residual signal by performing LPC analysis and quantization on an input speech; determining a predicted LPC residual signal by applying the LPC residual signal to cross-module residual learning; performing LPC synthesis using the encoded LPC coefficients and the predicted LPC residual signal; and determining an output speech that is a synthesized output according to a result of the LPC synthesis.
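The deterministic part of this pipeline, the LPC analysis and synthesis filters that bracket the learned residual predictor, can be sketched directly; the cross-module residual learning itself is a trained network and is not reproduced here:

```python
import numpy as np

def lpc_analysis(speech, lpc):
    # LPC residual: e[n] = s[n] - sum_k a_k * s[n-k]
    e = speech.astype(float).copy()
    for k, a_k in enumerate(lpc, start=1):
        e[k:] -= a_k * speech[:-k]
    return e

def lpc_synthesis(residual, lpc):
    # Invert the analysis filter: s[n] = e[n] + sum_k a_k * s[n-k]
    s = np.zeros(len(residual))
    for n in range(len(residual)):
        acc = residual[n]
        for k, a_k in enumerate(lpc, start=1):
            if n - k >= 0:
                acc += a_k * s[n - k]
        s[n] = acc
    return s
```

Because synthesis exactly inverts analysis, any coding gain in the disclosed method comes from how well the predicted residual and the quantized coefficients approximate the true ones.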
Abstract:
An audio signal encoding method performed by an encoder includes identifying a time-domain audio signal in a unit of blocks, quantizing a linear prediction coefficient extracted, using frequency-domain linear predictive coding (LPC), from a combined block in which a current original block of the audio signal and a previous original block chronologically adjacent to the current original block are combined, generating a temporal envelope by dequantizing the quantized linear prediction coefficient, extracting a residual signal from the combined block based on the temporal envelope, quantizing the residual signal by one of time-domain quantization and frequency-domain quantization, and transforming the quantized residual signal and the quantized linear prediction coefficient into a bitstream.
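The choice between time-domain and frequency-domain quantization could be made per block by comparing reconstruction errors; the sketch below assumes a uniform scalar quantizer and an FFT as the transform, neither of which is fixed by the abstract:

```python
import numpy as np

def quantize(x, step=0.05):
    # Uniform scalar quantizer (assumed for this sketch).
    return np.round(x / step) * step

def quantize_residual(residual, step=0.05):
    # Hypothetical realization of "one of time-domain quantization and
    # frequency-domain quantization": quantize in both domains and keep
    # whichever yields the smaller reconstruction error.
    t_hat = quantize(residual, step)
    spec = np.fft.rfft(residual)
    q_spec = quantize(spec.real, step) + 1j * quantize(spec.imag, step)
    f_hat = np.fft.irfft(q_spec, n=len(residual))
    if np.sum((residual - t_hat) ** 2) <= np.sum((residual - f_hat) ** 2):
        return "time", t_hat
    return "freq", f_hat
```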