Abstract:
There is provided a learning method and system of a backbone network for visual intelligence based on self-supervised learning and multi-head. A network learning system according to an embodiment generates a plurality of first modified vectors by modifying a first feature vector outputted from a teacher network, generates a plurality of second modified vectors by modifying a second feature vector outputted from a student network, calculates a loss by using the first modified vectors and the second modified vectors, and optimizes parameters of the student network. Accordingly, the effect of learning by knowledge distillation may be enhanced by training the backbone network for visual intelligence as if group learning were performed by various teacher networks and student networks.
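The distillation scheme above can be illustrated with a minimal PyTorch sketch; the linear projection heads, the head count, and the pairwise MSE loss are assumptions, since the abstract does not fix how the modified vectors or the loss are computed:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHead(nn.Module):
    """Generates a plurality of modified vectors from one feature vector."""
    def __init__(self, dim, num_heads=4):
        super().__init__()
        # Assumed: simple linear heads; the abstract does not specify the form.
        self.heads = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_heads))

    def forward(self, feat):
        return [head(feat) for head in self.heads]

teacher = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
t_heads, s_heads = MultiHead(16), MultiHead(16)

x = torch.randn(8, 32)
with torch.no_grad():                       # the teacher side is frozen
    t_vecs = t_heads(teacher(x))            # first modified vectors
s_vecs = s_heads(student(x))                # second modified vectors

# Loss over all (teacher head, student head) pairs, emulating group
# learning by various teacher and student networks.
loss = sum(F.mse_loss(s, t) for t in t_vecs for s in s_vecs) / (len(t_vecs) * len(s_vecs))
loss.backward()                             # then step an optimizer over student parameters
```

Averaging over all head pairs is one simple choice; weighting the pairs differently would also fit the abstract's wording.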
Abstract:
An image region segmentation method and system using self-spatial adaptive normalization are provided. The image region segmentation system includes: an encoder configured to encode an image for segmenting a region by using a plurality of encoding blocks; and a decoder configured to decode the image encoded by the encoder and to generate a region-segmented image by using a plurality of decoding blocks, wherein each of the encoding blocks processes an inputted image through a convolution layer, performs spatial adaptive normalization, and then reduces the image and delivers it to the next encoding block. Accordingly, spatial characteristics of the image are considered in the encoding process and the decoding process, so that region segmentation can be performed accurately on various images.
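One encoding block could look as below, assuming "self-spatial adaptive normalization" means predicting per-pixel scale and shift maps from the feature map itself (SPADE-style modulation without an external segmentation map); the channel counts and the max-pool reduction are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfSPADE(nn.Module):
    """Spatially adaptive normalization whose per-pixel scale (gamma) and
    shift (beta) maps are predicted from the feature map itself."""
    def __init__(self, channels):
        super().__init__()
        self.norm = nn.BatchNorm2d(channels, affine=False)
        self.gamma = nn.Conv2d(channels, channels, 3, padding=1)
        self.beta = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        return self.norm(x) * (1 + self.gamma(x)) + self.beta(x)

class EncodingBlock(nn.Module):
    """Convolution -> self-spatial adaptive normalization -> reduction,
    following the order described in the abstract."""
    def __init__(self, cin, cout):
        super().__init__()
        self.conv = nn.Conv2d(cin, cout, 3, padding=1)
        self.spade = SelfSPADE(cout)

    def forward(self, x):
        x = F.relu(self.spade(self.conv(x)))
        return F.max_pool2d(x, 2)   # reduce and deliver to the next block

block = EncodingBlock(3, 16)
print(block(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 16, 32, 32])
```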
Abstract:
There is provided a training dataset construction method for speech synthesis through fusion of language, speaker, and emotion within an utterance. A training dataset construction method of a speech synthesis model according to an embodiment collects speech data having different speech utterance information, augments the speech data by fusing the collected speech data within one utterance, and generates a training dataset by using the augmented speech data. Accordingly, a training dataset for speech synthesis is constructed through fusion of language, speaker, and emotion within one utterance, so that the quality of multi-speaker/multi-language/emotional speech synthesis can be enhanced.
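As an illustration of fusing speech within one utterance, the hypothetical helper below splices two recordings that differ in language, speaker, or emotion, with a short cross-fade at the boundary; the cross-fade length and the sine-tone stand-ins for real speech are assumptions:

```python
import numpy as np

def fuse_utterances(seg_a, seg_b, sr=16000, fade_ms=20):
    """Splice two speech segments (e.g., different language/speaker/emotion)
    into a single utterance with a short cross-fade at the boundary."""
    n = int(sr * fade_ms / 1000)
    fade = np.linspace(0.0, 1.0, n)
    head = seg_a[:-n]
    blend = seg_a[-n:] * (1 - fade) + seg_b[:n] * fade  # cross-faded joint
    return np.concatenate([head, blend, seg_b[n:]])

# Toy stand-ins: two one-second tones in place of real recordings.
t = np.linspace(0, 1, 16000, endpoint=False)
korean_neutral = np.sin(2 * np.pi * 220 * t)
english_happy = np.sin(2 * np.pi * 330 * t)
fused = fuse_utterances(korean_neutral, english_happy)
print(fused.shape)  # roughly two seconds of speech in one utterance
```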
Abstract:
A deep learning-based automatic gesture recognition method and system are provided. The training method according to an embodiment includes: extracting a plurality of contours from an input image; generating training data by normalizing pieces of contour information forming each of the contours; and training an AI model for gesture recognition by using the generated training data. Accordingly, robust and high-performance automatic gesture recognition can be performed without being influenced by environments and conditions, even while using less training data.
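The contour extraction and normalization steps could be sketched with OpenCV as below; the fixed-length resampling and the centroid/scale normalization are assumed details, since the abstract only states that the contour information is normalized:

```python
import cv2
import numpy as np

def contours_to_training_data(binary_img, num_points=64):
    """Extract contours and normalize each into a fixed-length,
    translation- and scale-invariant point sequence."""
    contours, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    samples = []
    for c in contours:
        pts = c.reshape(-1, 2).astype(np.float32)
        idx = np.linspace(0, len(pts) - 1, num_points).astype(int)
        pts = pts[idx]                       # fixed-length resampling
        pts -= pts.mean(axis=0)              # translation invariance
        pts /= (np.abs(pts).max() + 1e-8)    # scale invariance
        samples.append(pts.flatten())
    return np.array(samples)

img = np.zeros((100, 100), np.uint8)
cv2.circle(img, (50, 50), 30, 255, -1)       # stand-in for a hand silhouette
print(contours_to_training_data(img).shape)  # (1, 128)
```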
Abstract:
A method and a system for automatic image caption generation are provided. The automatic image caption generation method according to an embodiment of the present disclosure includes: extracting a distinctive attribute from example captions of a learning image; training a first neural network for predicting a distinctive attribute from an image, by using a pair of the extracted distinctive attribute and the learning image; inferring a distinctive attribute by inputting the learning image to the trained first neural network; and training a second neural network for generating a caption of an image by using a pair of the inferred distinctive attribute and the learning image. Accordingly, a caption that well indicates a feature of a given image is automatically generated, such that an image can be explained more precisely and differences from other images can be clearly distinguished.
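For illustration, the "distinctive attribute" extraction step might score words by how strongly they separate one image's example captions from the whole caption corpus; the TF-IDF-style scoring below is an assumption, not the claimed method:

```python
import math
from collections import Counter

def distinctive_attribute(captions_for_image, captions_all, top_k=1):
    """Pick the word(s) that best distinguish one image's example captions
    from the corpus: high frequency locally, rare globally (TF-IDF)."""
    tf = Counter(w for c in captions_for_image for w in c.split())
    df = Counter(w for c in captions_all for w in set(c.split()))
    n = len(captions_all)
    score = {w: tf[w] * math.log(n / df[w]) for w in tf}
    return sorted(score, key=score.get, reverse=True)[:top_k]

corpus = ["a dog runs on grass", "a red kite flies high", "a dog sleeps"]
print(distinctive_attribute(["a red kite flies high"], corpus))
# ['red'] (common word 'a' scores zero; ties keep first-seen order)
```

The extracted attribute would then be paired with the learning image to train the first network, as the abstract describes.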
Abstract:
There is provided a self-directed visual intelligence system. The self-directed visual intelligence system according to an embodiment prepares data necessary for training a visual intelligence model when a change in a visual context of a real world is recognized, configures a visual intelligence model and configures training data of the visual intelligence model, based on the changed visual context of the real world, trains the configured visual intelligence model with the training data, and evaluates performance of the trained visual intelligence model. Accordingly, the visual intelligence model is corrected/improved in a self-directed way according to a change in a visual context of a real world, and is grown/advanced by itself, so that performance of the visual intelligence model is maintained at its best even in response to any change in the context of the real world.
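The self-directed cycle reads as a control loop; the skeleton below is hypothetical, and monitor, configure_model, configure_data, train, evaluate, deploy, and the threshold are all placeholder callbacks rather than parts of the disclosed system:

```python
def self_directed_loop(monitor, configure_model, configure_data,
                       train, evaluate, deploy, threshold=0.9):
    """Hypothetical skeleton of the self-directed cycle: on each recognized
    change in the real-world visual context, (re)configure the model and its
    training data, train, evaluate, and deploy only if performance passes."""
    for context in monitor():               # yields changed visual contexts
        model = configure_model(context)    # configure the visual intelligence model
        data = configure_data(context)      # configure its training data
        trained = train(model, data)
        if evaluate(trained, data) >= threshold:
            deploy(trained)                 # self-directed correction/improvement
```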
Abstract:
An audio synthesis method adapted to video characteristics is provided. The audio synthesis method according to an embodiment includes: extracting characteristics x from a video in a time-series manner; extracting characteristics p of phonemes from a text; and generating an audio spectrum characteristic S_t, which is used to generate the audio to be synthesized with the video at time t, based on correlations between the audio spectrum characteristic S_(t-1), which is used to generate the audio to be synthesized with the video at time t-1, and the characteristics x. Accordingly, audio can be synthesized according to video characteristics, and speech matching a video can be easily added.
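One plausible reading of "based on correlations" is dot-product attention from S_(t-1) over the video characteristics x; the sketch below assumes exactly that, and omits the phoneme characteristics p for brevity:

```python
import torch
import torch.nn.functional as F

def next_spectrum(s_prev, x, w_q, w_k, w_v):
    """One autoregressive step: attend from the previous audio spectrum
    characteristic S_(t-1) over time-series video characteristics x,
    using their correlations to produce S_t."""
    q = s_prev @ w_q                       # query from S_(t-1)
    k, v = x @ w_k, x @ w_v                # keys/values from video features
    attn = F.softmax(q @ k.T / k.shape[-1] ** 0.5, dim=-1)
    return attn @ v                        # S_t

d_a, d_v, T = 80, 128, 50                  # spectrum dim, video dim, frames
s = torch.zeros(1, d_a)
x = torch.randn(T, d_v)
w_q = torch.randn(d_a, 64)
w_k = torch.randn(d_v, 64)
w_v = torch.randn(d_v, d_a)
for _ in range(5):                         # generate five steps autoregressively
    s = next_spectrum(s, x, w_q, w_k, w_v)
print(s.shape)  # torch.Size([1, 80])
```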
Abstract:
There are provided an AI model learning method and system based on self-learning for focusing on specific areas. According to an embodiment, a network learning system includes: a detection module configured to detect a specific area from unlabeled images and to generate unlabeled area images; a configuration module configured to configure self-learning data by using the generated area images; and a learning module configured to cause a backbone network to perform self-learning by using the configured self-learning data. Accordingly, an AI model may be trained based on self-learning focusing on a desired specific area according to a desired purpose, and high-performance analysis specialized for various purposes and for the characteristics of various types of specific areas is possible.
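The three modules could be sketched as below, assuming a hypothetical area detector and a simplified two-view cosine-similarity objective standing in for the unspecified self-learning method:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def crop_areas(images, detector):
    """Detection module: cut out the specific areas a detector finds in
    unlabeled images, producing unlabeled area images."""
    crops = []
    for img in images:
        for (x, y, w, h) in detector(img):   # hypothetical detector
            crops.append(img[:, y:y + h, x:x + w])
    return crops

def self_learning_step(backbone, crop, optimizer):
    """Learning module: one simplified self-supervised step on an area crop,
    pulling two augmented views of the crop together (assumed objective)."""
    v1 = F.interpolate(crop.unsqueeze(0), size=(32, 32))
    v2 = torch.flip(v1, dims=[3])            # horizontal flip as a second view
    z1, z2 = backbone(v1), backbone(v2)
    loss = -F.cosine_similarity(z1, z2.detach()).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))
opt = torch.optim.SGD(backbone.parameters(), lr=0.01)
faces = lambda img: [(8, 8, 48, 48)]         # stand-in face-area detector
crops = crop_areas([torch.randn(3, 64, 64)], faces)
print(self_learning_step(backbone, crops[0], opt))
```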
Abstract:
There are provided a method and a system for image segmentation utilizing a GAN architecture. A method for training an image segmentation network according to an embodiment includes: inputting an image to a first network, which is trained to output a region segmentation result regarding an input image, and generating a region segmentation result; inputting the generated region segmentation result and a ground truth (GT) to a second network and acquiring a discrimination result, the second network being trained to discriminate whether an inputted region segmentation result was generated by the first network or is a GT; and training the first network and the second network by using the discrimination result. Accordingly, region segmentation performance of a semantic segmentation network regarding various images can be enhanced, and a very small image region can be exactly segmented.
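A minimal PyTorch sketch of this adversarial training loop is given below; the tiny convolutional networks and the added supervised cross-entropy term for the first network are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# First network: outputs a region segmentation map for an input image.
seg_net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())
# Second network: discriminates generated segmentations from ground truth.
disc = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                     nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))

opt_g = torch.optim.Adam(seg_net.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

img = torch.randn(4, 3, 32, 32)
gt = (torch.rand(4, 1, 32, 32) > 0.5).float()

# Train the second network: GT -> real (1), generated result -> fake (0).
pred = seg_net(img)
d_loss = bce(disc(gt), torch.ones(4, 1)) + bce(disc(pred.detach()), torch.zeros(4, 1))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Train the first network: fool the discriminator and match the GT.
g_loss = bce(disc(pred), torch.ones(4, 1)) + F.binary_cross_entropy(pred, gt)
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```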
Abstract:
A method for separating audio sources and an audio system using the same are provided. The method introduces the concept of a residual signal into the separation of a mixed audio signal into audio sources: an audio signal corresponding to at least two of the audio sources is separated out as a residual signal and processed separately, so that audio separation performance can be improved. In addition, the method re-separates the separated residual signal and adds the resulting parts back to the corresponding audio sources, so that audio sources can be separated more reliably.
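The residual-signal pipeline can be sketched with toy stand-in separators as below; the final check illustrates that adding the re-separated residual back to the sources keeps the separation lossless with respect to the mixture:

```python
import numpy as np

def separate(mix, separators, residual_separator):
    """Hypothetical pipeline: split a mixture into per-source estimates plus
    a residual covering what the per-source models miss, then re-separate
    the residual and add its parts back to the corresponding sources."""
    sources = [f(mix) for f in separators]   # e.g., vocals, drums, ...
    residual = mix - sum(sources)            # leftovers of at least two sources
    refined = residual_separator(residual)   # re-separate the residual
    return [s + r for s, r in zip(sources, refined)]

# Toy stand-ins: "separators" that each keep part of the signal energy.
mix = np.random.randn(16000)
seps = [lambda m: 0.4 * m, lambda m: 0.3 * m]
resid_sep = lambda r: [0.5 * r, 0.5 * r]     # split residual evenly
out = separate(mix, seps, resid_sep)
print(np.allclose(sum(out), mix))            # True: nothing is lost
```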