Abstract:
A non-transitory computer-readable recording medium records a learning program for causing a computer to execute processing, the processing including: generating, for each piece of input data input to a machine learning model, restored data using a plurality of restorers that respectively correspond to a plurality of features generated by the machine learning model for the piece of input data; and training the plurality of restorers so that each of the plurality of pieces of restored data respectively generated by the restorers approaches the input data.
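A minimal sketch of the per-feature restorers described above, using fixed linear maps as stand-ins for the machine learning model's features and least-squares-fitted linear restorers. The linear setting and all names are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 8, 200

X = rng.normal(size=(n, d))                  # pieces of input data
A1 = rng.normal(size=(d, d))                 # stand-in for the model's feature map 1
A2 = rng.normal(size=(d, d))                 # stand-in for the model's feature map 2
F1, F2 = X @ A1.T, X @ A2.T                  # a plurality of features per input

# Fit each restorer (here a linear decoder) so that the restored data
# it generates from its own feature approaches the input data.
R1, *_ = np.linalg.lstsq(F1, X, rcond=None)
R2, *_ = np.linalg.lstsq(F2, X, rcond=None)

err1 = float(np.mean((F1 @ R1 - X) ** 2))    # reconstruction error of restorer 1
err2 = float(np.mean((F2 @ R2 - X) ** 2))    # reconstruction error of restorer 2
```

Because the stand-in features are invertible linear maps, both restorers can drive their reconstruction error essentially to zero; with nonlinear features the same objective would be minimized by gradient descent instead.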
Abstract:
A learning device generates a first feature value and a second feature value by inputting original training data to a first neural network included in a learning model. The learning device learns at least one parameter of the learning model and a parameter of a decoder that reconstructs the data input to the first neural network, such that reconstruction data output from the decoder when the first feature value and the second feature value are input to the decoder becomes close to the original training data, and such that output data from a second neural network included in the learning model, obtained by inputting the second feature value to the second neural network, becomes close to correct data for the original training data.
Abstract:
A non-transitory computer-readable storage medium storing a program that causes a computer to execute a process, the process including obtaining a feature quantity of input data by using a feature generator, generating a first output based on the feature quantity by using a supervised learner for labeled data, generating a second output based on the feature quantity by using unsupervised learning processing for unlabeled data, and changing a contribution ratio between a first error and a second error in learning by the feature generator, the first error being generated from the labeled data and the first output, and the second error being generated from the unlabeled data and the second output.
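The changing contribution ratio can be sketched as a weighted sum of the two errors with a scheduled weight. The linear ramp below is an assumed schedule; the abstract only states that the ratio is changed:

```python
def total_loss(first_error, second_error, epoch, max_epoch):
    """Combine the supervised (first) and unsupervised (second) errors with
    a contribution ratio r that changes over training: here r ramps
    linearly from 0 to 1, so early learning of the feature generator is
    driven by labeled data and later learning by the unsupervised signal."""
    r = min(1.0, epoch / max_epoch)
    return (1.0 - r) * first_error + r * second_error
```

For example, halfway through training both errors contribute equally; at the start only the supervised error drives the feature generator.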
Abstract:
An initializable array has a plurality of blocks, each having an address word and a data word; a boundary indicating the position at which the plurality of blocks is divided into two areas; and an initial value stored for each element of the array. The boundary is a position at which the ratio between the number of unwritten blocks in a first area and the number of written blocks in a second area is an integer ratio. An array control program causes a computer to execute: shifting the boundary to extend the first area and generating an initialized written block in the first area; in a case where a write destination block is an unwritten block in the second area, forming a link between the initialized written block in the first area and the write destination block; and writing a write value to the write destination block.
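The abstract is a variant of the classic constant-time-initializable array, in which links between blocks certify which entries have really been written. A simplified sketch of that underlying trick (using a separate stack of written indices rather than the patent's in-array boundary shifting) looks like this:

```python
class InitArray:
    """Array with O(1) initialization, reads, and writes.

    A never-written slot returns the initial value without any O(n)
    clearing pass. Validity of a slot is certified by a crosslink:
    idx[i] points into the `written` stack, and the stack entry must
    point back at i. (The patented variant stores these links inside
    the array itself, on either side of a shifting boundary.)"""

    def __init__(self, n, init):
        self.init = init
        self.val = [None] * n     # data words (conceptually uninitialized)
        self.idx = [0] * n        # address words (conceptually uninitialized)
        self.written = []         # indices written so far, in write order

    def _is_written(self, i):
        j = self.idx[i]
        return j < len(self.written) and self.written[j] == i

    def read(self, i):
        return self.val[i] if self._is_written(i) else self.init

    def write(self, i, v):
        if not self._is_written(i):
            self.idx[i] = len(self.written)
            self.written.append(i)
        self.val[i] = v

a = InitArray(5, init=0)
before = a.read(2)        # never written: returns the initial value
a.write(2, 7)
after = a.read(2)
untouched = a.read(4)     # still reads as initialized
```

The crosslink test is what makes leftover garbage in `val` and `idx` harmless: a stale pointer cannot point into the stack *and* be pointed back at unless the slot was genuinely written.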
Abstract:
When a second pattern is to be generated by adding an event to a first pattern including events, an extraction program causes a computer to execute the following process based on combinations of events. That is, the extraction program causes the computer to generate the second pattern when the number of occurrences, in the second pattern, of each of the events included in the combinations is not more than a threshold. The extraction program causes the computer to calculate, based on data including a plurality of events, a frequency at which one or more of the generated second patterns occur in the data. The extraction program causes the computer to extract second patterns whose frequency satisfies a predetermined condition, and to add a new event to each extracted second pattern.
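The grow-then-count loop above can be sketched with two helpers: candidate generation that rejects extensions in which any event occurs more than the threshold, and a support count over event sequences. Treating patterns as subsequences is an assumed reading of "occur in the data":

```python
from collections import Counter

def extend(pattern, events, max_occurrence):
    """Generate second patterns by appending one event to `pattern`,
    keeping only candidates in which no event occurs more than
    max_occurrence times."""
    out = []
    for e in events:
        cand = pattern + (e,)
        if max(Counter(cand).values()) <= max_occurrence:
            out.append(cand)
    return out

def support(pattern, sequences):
    """Fraction of sequences that contain `pattern` as a subsequence."""
    def contains(seq):
        it = iter(seq)                      # membership tests consume the
        return all(e in it for e in pattern)  # iterator, enforcing order
    return sum(contains(s) for s in sequences) / len(sequences)

cands = extend(("a",), ["a", "b"], max_occurrence=1)
freq = support(("a", "b"), [("a", "b", "c"), ("b", "a")])
```

Patterns whose `support` meets the predetermined condition would then be fed back into `extend` with a new event, repeating the growth step.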
Abstract:
A computer-implemented method of determination processing, the method including: calculating, in response to occurrence of deterioration of a classification model, a similarity between a first determination result and each of a plurality of second determination results, the first determination result being output from the classification model by inputting first input data obtained after the deterioration occurred, and the plurality of second determination results being output from the classification model by inputting, to the classification model, a plurality of pieces of post-conversion data obtained by inputting second input data from before the deterioration to a plurality of data converters; selecting a data converter from the plurality of data converters on the basis of the similarity; and performing preprocessing on data input to the classification model by using the selected data converter.
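The selection step can be sketched as ranking converters by how similar their determination results are to the post-deterioration result. Cosine similarity of predicted-label histograms is an assumed similarity measure; the abstract does not fix one:

```python
import numpy as np

def select_converter(first_result, second_results):
    """Return the index of the data converter whose determination results
    are most similar to the post-deterioration determination result,
    plus the similarity scores. Results are represented here as
    predicted-label histograms (an illustrative assumption)."""
    def cos(a, b):
        a, b = np.asarray(a, float), np.asarray(b, float)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    sims = [cos(first_result, r) for r in second_results]
    return int(np.argmax(sims)), sims

# Post-deterioration results mostly predict class 0; converter 1's
# results match that profile most closely.
best, sims = select_converter([8, 1, 1], [[1, 8, 1], [7, 2, 1], [1, 1, 8]])
```

The chosen converter would then be installed as the preprocessing stage in front of the classification model.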
Abstract:
A non-transitory computer-readable storage medium storing a data presentation program that causes at least one computer to execute a process, the process including: acquiring certain data from an estimation target data set for which an estimation model is used, based on an estimation result for the estimation target data set; and presenting data obtained by changing the certain data in a direction orthogonal to a direction in which the loss of the estimation model fluctuates, in a feature space of feature amounts obtained from the estimation target data set.
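Taking the loss gradient as "the direction in which the loss fluctuates", the change above amounts to moving the data within the hyperplane orthogonal to that gradient, leaving the loss unchanged to first order. The random choice among orthogonal directions is an assumed detail:

```python
import numpy as np

def orthogonal_shift(x, loss_grad, step, seed=0):
    """Return x moved by `step` in a direction orthogonal to the loss
    gradient at x, so the estimation model's loss does not change to
    first order."""
    g = np.asarray(loss_grad, float)
    g = g / np.linalg.norm(g)
    v = np.random.default_rng(seed).normal(size=g.shape)
    v = v - (v @ g) * g            # remove the component along the gradient
    v = v / np.linalg.norm(v)
    return np.asarray(x, float) + step * v

x0 = np.zeros(4)
grad = np.array([1.0, 0.0, 0.0, 0.0])
x1 = orthogonal_shift(x0, grad, step=0.5)
move_along_grad = float((x1 - x0) @ grad)   # should be ~0
move_norm = float(np.linalg.norm(x1 - x0))  # should equal the step size
```

Presenting such variants shows which changes to a sample the model is indifferent to, as opposed to changes along the gradient, which it is sensitive to.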
Abstract:
A learning method executed by a computer, the learning method including: inputting first data, which is a data set of a transfer source, and second data, which is one of the data sets of a transfer destination, to an encoder to generate first distributions of feature values of the first data and second distributions of feature values of the second data; selecting one or more feature values from among the feature values such that, for each of the selected feature values, the first distribution of the feature value for the first data is similar to the second distribution of the feature value for the second data; inputting the one or more selected feature values to a classifier to calculate prediction labels for the first data; and learning parameters of the encoder and the classifier such that the prediction labels approach correct answer labels of the first data.
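The selection step can be sketched as scoring each feature dimension by a distance between its source and target distributions and keeping the closest ones. The 1-D Wasserstein distance on sorted samples is an assumed similarity measure; the abstract does not fix one:

```python
import numpy as np

def select_similar_features(src_feats, tgt_feats, k):
    """Select the k feature dimensions whose source (transfer-source)
    and target (transfer-destination) value distributions are most
    similar, via the 1-D Wasserstein distance between sorted samples."""
    n = min(len(src_feats), len(tgt_feats))
    dists = [
        float(np.mean(np.abs(np.sort(src_feats[:n, j]) - np.sort(tgt_feats[:n, j]))))
        for j in range(src_feats.shape[1])
    ]
    return sorted(np.argsort(dists)[:k].tolist())

x = np.linspace(0.0, 1.0, 50)
src = np.column_stack([x, x])           # source: both features span [0, 1]
tgt = np.column_stack([x, x + 10.0])    # target: feature 1 is shifted far away
selected = select_similar_features(src, tgt, k=1)
```

Only the selected, distribution-matched features would then be fed to the classifier, so the parameters learned on source labels remain usable on the transfer destination.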
Abstract:
An anomaly detection apparatus generates pieces of image data using a generator and trains the generator and a discriminator that discriminates whether image data generated by the generator is real or fake. The anomaly detection apparatus trains the generator such that, in generating the pieces of image data to maximize the discrimination error of the discriminator, the generator generates, at a fixed rate with respect to the pieces of image data, at least one piece of specified image data that reduces the discrimination error; and it trains the discriminator, based on the pieces of image data and the at least one piece of specified image data, to minimize the discrimination error.
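The generator's mixed objective can be sketched per batch: most generated samples are trained to fool the discriminator, while a fixed fraction of "specified" samples is trained the opposite way. The exact loss form below is an assumed sketch, not the patented formulation:

```python
def generator_loss(d_scores, specified_rate):
    """Generator objective over a batch, where d_scores[i] is the
    discriminator's probability that generated image i is real.
    A fixed fraction of 'specified' samples is trained to REDUCE the
    discrimination error (remain recognizably fake), while the rest
    are trained to maximize it (look real)."""
    n_spec = int(len(d_scores) * specified_rate)
    loss = 0.0
    for i, s in enumerate(d_scores):
        if i < n_spec:            # specified image data: keep discriminator right
            loss += s             # penalize looking real
        else:                     # ordinary generated data: fool the discriminator
            loss += 1.0 - s       # penalize looking fake
    return loss / len(d_scores)
```

Keeping some deliberately detectable fakes in every batch gives the discriminator a steady training signal, which is what the apparatus relies on when it later reuses the discriminator for anomaly detection.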
Abstract:
A non-transitory computer-readable recording medium has a machine learning program recorded therein for enabling a computer to perform processing including: generating augmented data by data-augmenting at least some of the training data, or at least some of the data input to a convolutional layer included in a learner, using a filter whose size depends on details of the processing of the convolutional layer or corresponds to the size of an identification target for the learner; and training the learner using the training data and the augmented data.
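A minimal sketch of such size-matched augmentation: smoothing the input with a mean filter whose window equals the convolutional layer's kernel size. The mean filter itself is an assumed choice; the abstract only ties the filter size to the layer's processing or to the identification target:

```python
import numpy as np

def mean_filter_augment(image, kernel_size):
    """Generate augmented data by smoothing `image` with a mean filter
    whose window matches the convolutional layer's kernel size, so the
    perturbation operates at the same spatial scale as the layer."""
    k = kernel_size
    pad = k // 2
    padded = np.pad(np.asarray(image, float), pad, mode="edge")
    out = np.empty(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

augmented = mean_filter_augment(np.ones((4, 4)), kernel_size=3)
```

The augmented images keep the original shape and would be mixed with the unmodified training data when training the learner.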