Abstract:
An encoding apparatus and a decoding apparatus for a transition between a Modified Discrete Cosine Transform (MDCT)-based coder and a different coder are provided. The encoding apparatus may encode additional information needed to restore an input signal encoded according to the MDCT-based coding scheme when switching occurs between the MDCT-based coder and the different coder. Accordingly, generation of an unnecessary bitstream may be prevented, and only the minimum additional information may be encoded.
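A minimal sketch of the idea, with hypothetical helper functions passed in as parameters: the extra restoration data (for example, the overlap region needed to reconstruct the first MDCT frame) is emitted only at the frame where the coder type switches, so no unnecessary bits are spent while the coder type stays the same.

```python
# Sketch only (hypothetical names): emit restoration data solely at a coder switch.
def encode_frames(frames, coder_types, mdct_encode, other_encode, encode_overlap):
    """frames: list of sample blocks; coder_types: 'mdct' or 'other' per frame."""
    bitstream = []
    previous = None
    for frame, coder in zip(frames, coder_types):
        payload = mdct_encode(frame) if coder == "mdct" else other_encode(frame)
        packet = {"coder": coder, "payload": payload}
        # Additional information is encoded only when switching into the MDCT coder,
        # e.g. the overlap data the decoder needs to restore the first MDCT frame.
        if coder == "mdct" and previous == "other":
            packet["restore_info"] = encode_overlap(frame)
        bitstream.append(packet)
        previous = coder
    return bitstream
```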
Abstract:
An encoder and an encoding method for a multi-channel signal, and a decoder and a decoding method for a multi-channel signal are disclosed. A multi-channel signal may be efficiently processed by consecutive downmixing or upmixing.
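A rough sketch of consecutive downmixing, assuming pairwise downmix stages and a channel count that halves cleanly toward the target; upmixing would apply the inverse stages in reverse order. The stage structure here is an illustration, not the claimed encoder.

```python
import numpy as np

def downmix_stage(channels):
    """Average adjacent channel pairs; an odd trailing channel is passed through."""
    pairs = [(channels[i] + channels[i + 1]) / 2 for i in range(0, len(channels) - 1, 2)]
    if len(channels) % 2:
        pairs.append(channels[-1])
    return pairs

def consecutive_downmix(channels, target_count):
    # Apply downmix stages consecutively until the target channel count is reached.
    while len(channels) > target_count:
        channels = downmix_stage(channels)
    return channels

# e.g. 8 input channels downmixed to 2
signal = [np.random.randn(1024) for _ in range(8)]
stereo = consecutive_downmix(signal, 2)
```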
Abstract:
The present invention relates to a system for transmitting and receiving audio and, particularly, to a method and apparatus for transmitting and receiving object-based audio contents that packetize audio objects having the same characteristic. To achieve this, the present invention includes filtering a plurality of elementary streams (ESs) according to common information, adding a packet header to each filtered ES to generate ES packets, aggregating all the generated ES packets and adding a multi-object packet header to the aggregated ES packets to generate an object packet, multiplexing the generated object packet, packetizing the multiplexed object packet according to a transmission medium, and transmitting the packetized object packet.
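A sketch of the packetizing flow described above; the field names are illustrative, not the actual packet syntax. ESs sharing common information are grouped, each ES receives a packet header, the group receives a multi-object packet header, and the resulting object packets are handed to a transport packetizer.

```python
def build_object_packet(elementary_streams, common_info):
    # Filter the ESs that share the given common information (e.g. the same object group).
    filtered = [es for es in elementary_streams if es["common_info"] == common_info]
    # Add a packet header to each filtered ES to form ES packets.
    es_packets = [{"es_header": {"es_id": es["es_id"], "length": len(es["payload"])},
                   "payload": es["payload"]} for es in filtered]
    # Aggregate the ES packets and add a multi-object packet header.
    return {"multi_object_header": {"common_info": common_info, "num_es": len(es_packets)},
            "es_packets": es_packets}

def transmit(object_packets, packetize_for_medium, send):
    # Multiplex the object packets, packetize them per the transmission medium, and transmit.
    for transport_packet in packetize_for_medium(object_packets):
        send(transport_packet)
```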
Abstract:
An audio metadata providing apparatus and method and a multichannel audio data playback apparatus and method to support a dynamic format conversion are provided. Dynamic format conversion information may include information about a plurality of format conversion schemes that are used to convert a first format set by an author of multichannel audio data into a second format that is based on a playback environment of the multichannel audio data and that are each set for corresponding playback periods of the multichannel audio data. The audio metadata providing apparatus may provide audio metadata including the dynamic format conversion information. The multichannel audio data playback apparatus may identify the dynamic format conversion information from the audio metadata, may convert the first format of the multichannel audio data into the second format based on the identified dynamic format conversion information, and may play back the multichannel audio data in the second format.
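An illustrative data layout for the dynamic format conversion information, not the normative metadata syntax: one conversion scheme is set per playback period, and the playback apparatus selects the scheme for the current period before converting the authored (first) format into the playback (second) format. The scheme names and the `convert`/`render` callbacks are hypothetical.

```python
dynamic_format_conversion = [
    {"start_s": 0.0,  "end_s": 60.0,  "scheme": "downmix_22_2_to_5_1_energy_preserving"},
    {"start_s": 60.0, "end_s": 180.0, "scheme": "downmix_22_2_to_5_1_dialogue_priority"},
]

def scheme_for_time(metadata, t):
    # Pick the format conversion scheme set for the playback period containing time t.
    for entry in metadata:
        if entry["start_s"] <= t < entry["end_s"]:
            return entry["scheme"]
    return None

def play_back(audio_blocks, metadata, convert, render):
    """convert(block, scheme): hypothetical first-to-second-format converter."""
    for t, block in audio_blocks:
        render(convert(block, scheme_for_time(metadata, t)))
```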
Abstract:
Disclosed is a unified speech and audio coding (USAC) audio signal encoding/decoding apparatus and method for digital radio services. An audio signal encoding method may include receiving an audio signal, determining a coding method for the received audio signal, encoding the audio signal based on the determined coding method, and configuring, as an audio superframe of a fixed size, an audio stream generated as a result of encoding the audio signal, wherein the coding method may include a first coding method associated with extended high-efficiency advanced audio coding (xHE-AAC) and a second coding method associated with existing advanced audio coding (AAC).
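A sketch of the superframe packing step under stated assumptions: the coding method is selected per audio frame, the encoded frames are concatenated, and the stream is carried in audio superframes of a fixed size, padded when the payload is short. The superframe size and the encoder callbacks are placeholders, not values from the disclosure.

```python
SUPERFRAME_BYTES = 1880  # assumed fixed superframe size, for illustration only

def encode_superframe(pcm_frames, select_method, encode_xhe_aac, encode_aac):
    payload = bytearray()
    for frame in pcm_frames:
        method = select_method(frame)             # 'xhe-aac' or 'aac'
        coder = encode_xhe_aac if method == "xhe-aac" else encode_aac
        payload += coder(frame)
    if len(payload) > SUPERFRAME_BYTES:
        raise ValueError("payload exceeds the fixed superframe size")
    # Pad to the fixed size so every audio superframe has the same length.
    return bytes(payload) + bytes(SUPERFRAME_BYTES - len(payload))
```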
Abstract:
An audio encoding apparatus and method that encodes hybrid contents including an object sound, a background sound, and metadata, and an audio decoding apparatus and method that decodes the encoded hybrid contents are provided. The audio encoding apparatus may include a mixing unit to generate an intermediate channel signal by mixing a background sound and an object sound, a matrix information encoding unit to encode matrix information used for the mixing, an audio encoding unit to encode the intermediate channel signal, and a metadata encoding unit to encode metadata including control information of the object sound.
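A minimal sketch, assuming simple matrix mixing: the intermediate channel signal is a matrix mix of the background-sound channels and the object sounds, and the mixing matrix and object metadata are encoded alongside the intermediate channels. The shapes and encoder callbacks are assumptions for illustration.

```python
import numpy as np

def mix_hybrid(background, objects, mixing_matrix):
    """background: (C, T), objects: (K, T), mixing_matrix: (C, C + K)."""
    sources = np.vstack([background, objects])       # (C + K, T)
    return mixing_matrix @ sources                   # (C, T) intermediate channel signal

def encode_hybrid(background, objects, mixing_matrix, metadata,
                  encode_audio, encode_matrix, encode_metadata):
    intermediate = mix_hybrid(background, objects, mixing_matrix)
    return {"audio": encode_audio(intermediate),
            "matrix_info": encode_matrix(mixing_matrix),  # lets the decoder un-mix
            "metadata": encode_metadata(metadata)}        # control info for the object sound
```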
Abstract:
Methods and apparatuses for hiding and extracting data based on a pilot code sequence are provided. A data hiding method may include converting an input audio signal to a frequency domain, distorting phase information of the audio signal converted to the frequency domain based on a pilot code sequence representing data to be hidden, and converting the audio signal with the distorted phase information to a time domain and transmitting the audio signal converted to the time domain. The pilot code sequence may be a set of phase values corresponding to a bit value “0” or “1” of data.
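A sketch of the hiding and extraction sides; the carrier bins and the two pilot phase values are assumptions for illustration. The frame is taken to the frequency domain, the phases of selected bins are replaced by the pilot-code phase for each data bit, and the frame is returned to the time domain; the extractor reads the sign of those phases. A frame length of, say, 1024 samples keeps the assumed bins in range.

```python
import numpy as np

PILOT_BINS = np.arange(40, 40 + 64)          # assumed carrier bins
PILOT_PHASE = {0: -np.pi / 2, 1: np.pi / 2}  # pilot code: one phase value per bit value

def hide_bits(frame, bits):
    spectrum = np.fft.rfft(frame)
    magnitude = np.abs(spectrum)
    phase = np.angle(spectrum)
    for k, bit in zip(PILOT_BINS, bits):
        phase[k] = PILOT_PHASE[bit]                  # distort phase per the pilot code
    spectrum = magnitude * np.exp(1j * phase)
    return np.fft.irfft(spectrum, n=len(frame))      # back to the time domain

def extract_bits(frame, n_bits):
    phase = np.angle(np.fft.rfft(frame))
    return [1 if phase[k] > 0 else 0 for k in PILOT_BINS[:n_bits]]
```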
Abstract:
Disclosed are a speech processing apparatus and method using a densely connected hybrid neural network. The speech processing method includes inputting an N×1-dimensional time-domain sample of an input speech into the densely connected hybrid network, passing the time-domain sample through a plurality of dense blocks in the network, reshaping the output of the dense blocks into M subframes, inputting the M subframes into N/M-dimensional gated recurrent unit (GRU) components, and outputting, by passing the M subframes through the GRU components, clean speech from which the noise in the input speech has been removed.
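A sketch of that pipeline in PyTorch; the layer sizes, growth rate, and block count are assumptions, not the disclosed topology. N time-domain samples pass through densely connected blocks, are reshaped into M subframes of N/M samples, run through a GRU of N/M dimension, and are reassembled into the clean-speech estimate.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, n, growth=64, layers=3):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.Linear(n + i * growth, growth) for i in range(layers)])
        self.out = nn.Linear(n + layers * growth, n)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(torch.relu(layer(torch.cat(feats, dim=-1))))  # dense connectivity
        return self.out(torch.cat(feats, dim=-1))

class DenselyConnectedHybrid(nn.Module):
    def __init__(self, n=512, m=8, blocks=4):
        super().__init__()
        self.n, self.m = n, m
        self.dense = nn.Sequential(*[DenseBlock(n) for _ in range(blocks)])
        self.gru = nn.GRU(n // m, n // m, batch_first=True)

    def forward(self, noisy):                     # noisy: (batch, N)
        x = self.dense(noisy)                     # pass through the dense blocks
        x = x.view(-1, self.m, self.n // self.m)  # reshape into M subframes of N/M samples
        x, _ = self.gru(x)                        # N/M-dimensional GRU over the subframes
        return x.reshape(-1, self.n)              # clean-speech estimate, (batch, N)

model = DenselyConnectedHybrid()
clean = model(torch.randn(2, 512))
```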
Abstract:
An audio signal encoding method performed by an encoder includes identifying a time-domain audio signal in units of blocks, quantizing a linear prediction coefficient extracted, using frequency-domain linear predictive coding (LPC), from a combined block in which a current original block of the audio signal and a previous original block chronologically adjacent to the current original block are combined, generating a temporal envelope by dequantizing the quantized linear prediction coefficient, extracting a residual signal from the combined block based on the temporal envelope, quantizing the residual signal by one of time-domain quantization and frequency-domain quantization, and transforming the quantized residual signal and the quantized linear prediction coefficient into a bitstream.
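A simplified sketch of that pipeline; the quantizer and bitstream-packing callbacks are hypothetical stubs, and `quantize_lpc` is assumed to return the reconstructed (quantized-then-dequantized) coefficients. LPC is applied across the spectrum of the combined block, so by time/frequency duality its coefficients describe a temporal envelope, and the residual is the envelope-flattened combined block.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_from_spectrum(spectrum, order=8):
    # Autocorrelation of the spectral coefficients, then the LPC normal equations.
    r = np.correlate(spectrum, spectrum, mode="full")[len(spectrum) - 1:]
    return solve_toeplitz(r[:order], r[1:order + 1])

def temporal_envelope(lpc, length):
    # Magnitude response of the reconstructed LPC filter evaluated over 'length' points,
    # interpreted as a temporal envelope because the LPC was taken across frequency.
    h = np.fft.rfft(np.concatenate(([1.0], -lpc)), n=2 * length)[:length]
    return 1.0 / np.maximum(np.abs(h), 1e-9)

def encode_block(current, previous, quantize_lpc, quantize_residual, pack_bits, order=8):
    combined = np.concatenate([previous, current])      # chronologically adjacent blocks
    spectrum = np.abs(np.fft.rfft(combined))
    lpc_q = quantize_lpc(lpc_from_spectrum(spectrum, order))
    envelope = temporal_envelope(lpc_q, len(combined))   # from the dequantized coefficients
    residual_q = quantize_residual(combined / envelope)  # time- or frequency-domain quantizer
    return pack_bits(lpc_q, residual_q)
```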