Abstract:
An example electronic apparatus includes a memory configured to store at least one instruction and at least one processor connected to the memory to control the electronic apparatus. The at least one processor is configured to, by executing the at least one instruction, obtain a first audio signal including a voice signal and a noise signal, convert the first audio signal in a time domain to a second audio signal in a frequency domain, obtain a first gain value representing a Signal-to-Noise Ratio (SNR) from the second audio signal, obtain a second gain value with a first dynamic range by filtering the first gain value, obtain a third gain value by inputting the second gain value to a neural network model trained to output a signal from which noise is removed, and convert the second audio signal to a third audio signal from which at least a portion of the noise signal is removed, using the third gain value.
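The gain pipeline described above (SNR-based first gain, filtered second gain, model-derived third gain, then spectral multiplication) can be sketched roughly as below. This is a minimal illustration, not the claimed implementation: the function name, frame parameters, the Wiener-style SNR-to-gain mapping, and the sigmoid stand-in for the trained neural network model are all assumptions.

```python
import numpy as np

def denoise_sketch(x, frame=512, hop=256):
    """Hypothetical sketch of the described gain pipeline.

    A sigmoid stands in for the trained neural network model, since
    the model itself is not specified in the abstract.
    """
    window = np.hanning(frame)
    out = np.zeros(len(x))
    noise_psd = None
    smoothed_gain = None
    for start in range(0, len(x) - frame, hop):
        seg = x[start:start + frame] * window
        spec = np.fft.rfft(seg)                    # first -> second signal: time to frequency domain
        power = np.abs(spec) ** 2
        if noise_psd is None:                      # crude noise estimate from the first frame
            noise_psd = power + 1e-12
        snr = power / noise_psd
        g1 = snr / (1.0 + snr)                     # first gain value: per-bin SNR mapped to [0, 1]
        if smoothed_gain is None:
            smoothed_gain = g1
        else:                                      # second gain value: filtered over time
            smoothed_gain = 0.7 * smoothed_gain + 0.3 * g1
        g2 = np.clip(smoothed_gain, 0.1, 1.0)      # limited dynamic range
        g3 = 1.0 / (1.0 + np.exp(-8.0 * (g2 - 0.5)))   # stand-in for the neural model
        out[start:start + frame] += np.fft.irfft(g3 * spec) * window
    return out                                     # third signal: noise at least partly removed
```

Overlap-add with a Hann window is one conventional way to return the gain-scaled spectrum to the time domain; the abstract does not commit to a specific resynthesis method.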
Abstract:
Various embodiments of the present disclosure may provide an electronic device for detecting an input. The electronic device according to various embodiments of the present disclosure may comprise: a speaker; a microphone; and a processor coupled to the speaker and the microphone. The processor may be configured to output a sound through the speaker on the basis of a first audio signal; obtain a second audio signal from the sound through the microphone; detect an input on the basis of a result of comparison between the first audio signal and the obtained second audio signal; and perform an operation according to the input.
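One plausible form of the comparison step is a normalized correlation between the played and captured signals: an input (for example, a touch damping the speaker's sound) would reduce the similarity below a threshold. The function name and threshold below are illustrative assumptions, not the disclosed method.

```python
import numpy as np

def detect_input(played, captured, threshold=0.8):
    """Hypothetical comparison: flag an input when the captured signal
    no longer correlates strongly with the played signal."""
    played = played - played.mean()
    captured = captured - captured.mean()
    denom = np.linalg.norm(played) * np.linalg.norm(captured)
    if denom == 0:
        return True  # captured sound vanished entirely
    corr = float(np.dot(played, captured) / denom)
    return corr < threshold
```

A real device would also need to compensate for the acoustic path delay between speaker and microphone before comparing.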
Abstract:
A wearable device capable of synchronizing a plurality of wearable devices using body conductivity is provided. The wearable device includes a clock generator configured to generate a clock signal, a signal generator configured to generate a first synchronization signal based on the clock signal, an electrode configured to transmit and receive an electrical signal through a body while contacting the body, a switch configured to connect the signal generator and the electrode or block a connection between the signal generator and the electrode, and at least one processor configured to control the switch to connect the signal generator and the electrode for transmitting the first synchronization signal generated in the signal generator to the electrode in a master mode, and control the switch to block the connection between the signal generator and the electrode in a slave mode.
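The master/slave switch behavior can be modeled as a small state machine: only in master mode is the signal generator connected to the electrode, so only a master drives the synchronization signal onto the body. The class and method names here are illustrative assumptions.

```python
class SyncSwitch:
    """Toy model of the described switch control: master mode connects
    the signal generator to the electrode; slave mode blocks the
    connection so the electrode only receives."""

    def __init__(self, mode="slave"):
        self.mode = mode

    def connected(self):
        # The processor closes the switch only in master mode.
        return self.mode == "master"

    def electrode_output(self, sync_signal):
        # Only a master transmits the generated synchronization signal.
        return sync_signal if self.connected() else None
```

In this sketch, every device in slave mode leaves its generator disconnected and recovers timing from the signal received at the electrode.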
Abstract:
A device for outputting sound and a method therefor are provided. The sound output method includes predicting external sound to be received from an external environment, variably adjusting sound to be output from the device, based on the predicted external sound, and outputting the adjusted sound.
Abstract:
An electronic apparatus is provided. The electronic apparatus includes a memory configured to store at least one instruction and a processor, wherein the processor is configured to, by executing the at least one instruction, acquire a plurality of training data, acquire a plurality of embedding vectors that are mappable to an embedding space for the plurality of training data, respectively, train an artificial intelligence model classifying the plurality of training data based on the plurality of embedding vectors, identify an embedding vector misclassified by the artificial intelligence model among the plurality of embedding vectors, identify an embedding vector closest to the misclassified embedding vector in the embedding space, acquire a synthetic embedding vector corresponding to a path connecting the misclassified embedding vector to the embedding vector closest to the misclassified embedding vector in the embedding space, and re-train the artificial intelligence model by adding the synthetic embedding vector to the training data.
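The augmentation step described above amounts to interpolating along the straight path between a misclassified embedding and its nearest neighbor. A minimal sketch, with assumed function and parameter names and linear interpolation as the path:

```python
import numpy as np

def synthesize_embeddings(embeddings, labels, predictions, n_points=3):
    """For each misclassified embedding, find its closest embedding in
    the space and sample synthetic vectors along the connecting path."""
    synthetic = []
    for i, (y, y_hat) in enumerate(zip(labels, predictions)):
        if y == y_hat:
            continue  # only misclassified vectors spawn synthetic data
        dists = np.linalg.norm(embeddings - embeddings[i], axis=1)
        dists[i] = np.inf                 # exclude the vector itself
        j = int(np.argmin(dists))         # closest embedding in the space
        # Interior points of the segment between embedding i and embedding j.
        for t in np.linspace(0.0, 1.0, n_points + 2)[1:-1]:
            synthetic.append((1 - t) * embeddings[i] + t * embeddings[j])
    return np.array(synthetic)
```

The synthetic vectors would then be labeled (for example, with the misclassified vector's true label) and appended to the training data before re-training.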
Abstract:
An electronic device includes a communication module, and a processor. The processor is configured to identify context information. The processor is also configured to select a specific task corresponding to the context information from among predetermined inference tasks relating to processing of an audio signal. The processor is further configured to select an external electronic device, which is to process the specific task, from among external electronic devices that are establishing a communication connection to the electronic device. Additionally, the processor is configured to assign processing of the specific task to the external electronic device.
Abstract:
A method of processing an audio signal includes: obtaining an audio signal; transmitting the obtained audio signal to an external electronic device; receiving, from the external electronic device, second information for adjusting a third artificial intelligence model configured to process an audio signal in real time; inputting the received second information and the obtained audio signal to the third artificial intelligence model to adjust the third artificial intelligence model; inputting the obtained audio signal to the adjusted third artificial intelligence model to obtain a processed audio signal; and reproducing the processed audio signal.
Abstract:
Various embodiments of the disclosure relate to devices and methods for user authentication on an electronic device. The electronic device comprises a plurality of lighting devices, a camera, a display, and a processor. The processor is configured to display an object on the display, obtain, through the camera, a plurality of images based on a randomly generated image obtaining pattern, obtain biometric information using at least one of the plurality of images, obtain information about a variation in a user's gaze corresponding to a movement of the object displayed on the display using the plurality of images, and perform authentication on the user based on the biometric information and the user gaze variation information. Other embodiments are available as well.
Abstract:
A voice synthesis apparatus is provided. The voice synthesis apparatus includes: an electrode array configured to, in response to voiceless speeches of a user, detect an electromyogram (EMG) signal from skin of the user; a speech activity detection module configured to detect a voiceless speech period of the user; a feature extractor configured to extract a signal descriptor indicating a feature of the EMG signal for the voiceless speech period; and a voice synthesizer configured to synthesize speeches by using the extracted signal descriptor.
Abstract:
A method of determining emotion information from a voice is provided. The method includes receiving a voice frame obtained by converting a sound generated by a user into an electrical signal, detecting phonation information and articulation information, the phonation information being related to phonation of the user and the articulation information being related to articulation of the user, from the voice frame, and determining user emotion information corresponding to the phonation information and the articulation information.