Abstract:
A perceptual semantic content estimation method includes: (A) inputting, to a data processing means, brain activity induced in a subject by a training stimulation and detected as an output of a brain activity detection means, together with an annotation of a perceptual content; (B) associating a semantic space representation of the training stimulation with the output of the brain activity detection means in a stored semantic space, and storing the association in a training result information storage means; (C) inputting, to the data processing means, an output produced when the brain activity detection means detects brain activity induced by a novel stimulation, and obtaining, on the basis of the association, a probability distribution in the semantic space that represents perceptual semantic contents for that output; and (D) estimating a highly probable perceptual semantic content on the basis of the probability distribution.
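Steps (B)–(D) above can be sketched in code. This is a minimal illustration, not the patented implementation: the abstract does not fix the model class, so ridge regression is assumed for the association in step (B), and a softmax over negative distances to candidate semantic vectors is assumed for the probability distribution in step (C); all array sizes are toy values.

```python
import numpy as np

def train_association(brain_responses, semantic_vectors, alpha=1.0):
    """Step (B): learn a linear map from brain-activity outputs to
    semantic-space coordinates via ridge regression (an assumed
    model; the abstract leaves the association method open)."""
    X, Y = brain_responses, semantic_vectors
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ Y)

def estimate_distribution(W, novel_response, candidate_vectors, tau=1.0):
    """Step (C): project a novel brain response into the semantic
    space and return a probability distribution over candidate
    perceptual contents, scored by negative Euclidean distance."""
    point = novel_response @ W
    d = np.linalg.norm(candidate_vectors - point, axis=1)
    logits = -d / tau
    p = np.exp(logits - logits.max())   # stable softmax
    return p / p.sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))           # 50 training trials x 20 sensor channels
Y = rng.normal(size=(50, 5))            # 5-dimensional semantic space
W = train_association(X, Y)
probs = estimate_distribution(W, rng.normal(size=20),
                              rng.normal(size=(8, 5)))  # 8 candidate contents
best = int(np.argmax(probs))            # step (D): most probable content
```

The key design point is that steps (A)–(B) happen once on annotated training data, while steps (C)–(D) reuse the stored association `W` for each novel stimulation.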
Abstract:
A viewing material evaluating method includes: a brain activity measuring step of measuring a brain activity of a test subject who views a viewing material by using a brain activity measuring unit; a first matrix generating step of generating a first matrix estimating a semantic content of perception of the test subject on the basis of a measurement result acquired in the brain activity measuring step by using a first matrix generating unit; a second matrix generating step of generating a second matrix by performing natural language processing for text information representing a planning intention of the viewing material by using a second matrix generating unit; and a similarity calculating step of calculating similarity between the first matrix and the second matrix by using a similarity calculating unit.
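The similarity calculating step can be illustrated with a small sketch. The abstract does not specify the similarity measure or the matrix layout, so this assumes matrices with rows as time windows and columns as semantic dimensions, and uses cosine similarity of the flattened matrices as one common choice.

```python
import numpy as np

def matrix_similarity(first, second):
    """Cosine similarity between the flattened first matrix (decoded
    from brain activity) and second matrix (from natural language
    processing of the planning-intention text). An assumed measure;
    the abstract leaves the choice open."""
    a = first.ravel().astype(float)
    b = second.ravel().astype(float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy data: 10 time windows x 5 semantic dimensions.
rng = np.random.default_rng(1)
first = rng.normal(size=(10, 5))                  # from brain measurement
second = first + 0.1 * rng.normal(size=(10, 5))   # from planning text
score = matrix_similarity(first, second)
```

A high score would indicate that the test subject's perceived semantic content matches the planning intention of the viewing material.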
Abstract:
Provided is a DNN learning method that can reduce DNN learning time when using data belonging to a plurality of categories. The method includes training a language-independent sub-network 120 and language-dependent sub-networks 122 and 124 with Japanese and English training data. This includes: a first step of training, with Japanese training data, a DNN formed by connecting neurons in the output layer of the sub-network 120 to neurons in the input layer of the sub-network 122; a second step of forming a DNN by connecting the sub-network 124 to the sub-network 120 in place of the sub-network 122 and training it with English training data; repeating these steps alternately until all the training data has been used; and, after training is complete, separating the sub-network 120 from the other sub-networks and storing it in a storage medium as a category-independent sub-network.
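The alternating scheme can be sketched as follows. This is a toy illustration under assumed choices (single-layer linear sub-networks, tanh activation, mean-squared-error loss, plain gradient descent, and illustrative layer sizes); the patent does not specify these details.

```python
import numpy as np

rng = np.random.default_rng(0)

# Language-independent sub-network (120) and language-dependent
# heads (122: Japanese, 124: English). Sizes are illustrative.
W_shared = rng.normal(scale=0.1, size=(8, 6))
heads = {"ja": rng.normal(scale=0.1, size=(6, 3)),
         "en": rng.normal(scale=0.1, size=(6, 3))}

def train_step(x, y, lang, lr=0.05):
    """One gradient step on the DNN formed by connecting the shared
    sub-network's output to the selected language head; both the
    shared weights and the head are updated."""
    global W_shared
    h = np.tanh(x @ W_shared)            # shared hidden representation
    err = (h @ heads[lang]) - y          # MSE gradient at the output
    g_head = h.T @ err
    g_hidden = (err @ heads[lang].T) * (1.0 - h ** 2)
    heads[lang] -= lr * g_head
    W_shared -= lr * (x.T @ g_hidden)

# Alternate Japanese and English mini-batches until the data ends.
ja = [(rng.normal(size=(4, 8)), rng.normal(size=(4, 3))) for _ in range(5)]
en = [(rng.normal(size=(4, 8)), rng.normal(size=(4, 3))) for _ in range(5)]
for (xj, yj), (xe, ye) in zip(ja, en):
    train_step(xj, yj, "ja")
    train_step(xe, ye, "en")

# After training, W_shared is what the method separates out and
# stores as the category-independent sub-network.
```

Because every alternating step updates `W_shared` regardless of language, the shared sub-network is pushed toward features useful for both categories, which is what allows it to be detached and reused afterward.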