ONTOLOGY ALIGNMENT APPARATUS, PROGRAM, AND METHOD

    Publication No.: EP3407208A1

    Publication Date: 2018-11-28

    Application No.: EP17172311.7

    Filing Date: 2017-05-22

    Applicant: Fujitsu Limited

    Inventor: HU, Bo

    IPC Classification: G06F17/30 G06N3/04

    CPC Classification: G06F17/30734 G06N3/0454

    Abstract: Embodiments include: an ontology alignment apparatus, comprising processor hardware and memory hardware coupled to the processor hardware, the memory hardware storing a plurality of ontologies for alignment, each of the plurality of ontologies defining a hierarchical structure of named concepts, each ontology comprising named concepts and relationships linking pairs of named concepts, the relationships including subsumption relationships, each named concept being named by a respectively assigned label, the processor hardware being configured to perform, for each of the plurality of ontologies:
    - an initial embedding process, comprising: for every word appearing in the labels assigned to named concepts defined by the ontology, obtaining an initial word vector representation of the word;
    - a syntactical embedding process, comprising: in respect of each concept defined by the ontology, forming a set of words comprising: each word in the label assigned to the respective concept; and each word in the label assigned to each concept linked to the respective concept by a subsumption relationship; compiling a syntactical concept matrix comprising, for each concept defined by the ontology, a syntactical concept vector representing the concept and comprising the initial word vector representation of each word in the respective set of words;
    - a projecting process comprising: obtaining a set of projections by projecting each of the syntactical concept vectors for the ontology onto a notional space common to the plurality of ontologies.
    The processor being further configured to perform, for the plurality of ontologies collectively:
    - a rotational alignment process comprising: rotating the sets of projections relative to one another in the notional space to maximise the mutual rotational alignment between the sets; and
    - a mapping process comprising: determining, for target projections among the set of projections for an ontology among the plurality of ontologies, a spatially closest projection from each of the other sets of projections in the notional space after said rotating, and adding to a mapping registry a registry entry mapping the concept represented by the vector of the target projection to the or each of the concepts respectively represented by the vector or vectors of the spatially closest projections.
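    The rotational alignment and mapping steps described above can be read as an orthogonal Procrustes problem followed by a nearest-neighbour search. A minimal sketch in Python/NumPy (the SVD-based solver and all function names are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def rotational_alignment(source, target):
    """Find the rotation best aligning two sets of projections
    (orthogonal Procrustes via SVD); rows are concept projections."""
    m = target.T @ source            # cross-covariance of paired rows
    u, _, vt = np.linalg.svd(m)
    return u @ vt                    # orthogonal R with source @ R.T ≈ target

def map_concepts(source, target, rotation):
    """After rotating, map each source projection to the index of its
    spatially closest target projection."""
    rotated = source @ rotation.T
    # Pairwise squared distances between rotated source rows and target rows.
    d = ((rotated[:, None, :] - target[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)
```

In practice the two projection sets come from different ontologies, so the "closest projection" assignments populate the mapping registry rather than recovering an identity pairing.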

    OPTICAL FLOW DETERMINATION SYSTEM

    Publication No.: EP3385909A1

    Publication Date: 2018-10-10

    Application No.: EP18165671.1

    Filing Date: 2018-04-04

    IPC Classification: G06T7/277

    Abstract: A generative adversarial network (GAN) system 100 includes a generator sub-network 102 configured to examine images 108, 110 of an object 112 moving relative to a viewer of the object 112. The generator sub-network 102 is also configured to generate one or more distribution-based images 300, 302, 304 based on the images 108, 110 that were examined. The system 100 also includes a discriminator sub-network 104 configured to examine the one or more distribution-based images 300, 302, 304 to determine whether they accurately represent the object 112. A predicted optical flow of the object 112 is represented by the relative movement of the object 112 as shown in the one or more distribution-based images 300, 302, 304.
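    The abstract does not spell out the adversarial objective pairing the generator and discriminator; a sketch of the standard GAN losses such a system would typically minimise (purely illustrative, not from the patent):

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Standard GAN discriminator loss: real frames should score near 1,
    generated (distribution-based) frames near 0."""
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def generator_loss(d_fake):
    """Generator loss: push the discriminator to score fakes as real."""
    return -np.mean(np.log(d_fake))
```

A confident discriminator (high scores for real frames, low for fakes) yields a low discriminator loss; the generator's loss falls as its distribution-based images fool the discriminator.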

    DETERMINING ORDERS OF EXECUTION OF A NEURAL NETWORK

    Publication No.: EP3369045A1

    Publication Date: 2018-09-05

    Application No.: EP16810587.2

    Filing Date: 2016-11-30

    Applicant: Google LLC

    IPC Classification: G06N3/063

    CPC Classification: G06N3/063 G06N3/0454

    Abstract: Systems and methods are provided for determining an order of execution of a neural network. For instance, data indicative of a neural network and data indicative of an amount of available memory in a constrained memory space can be obtained. The neural network can include a plurality of operators. An order of execution associated with the neural network can then be determined. The order of execution specifies an order in which to execute each of the plurality of operators. The order of execution is determined based at least in part on the available memory in the constrained memory space. In particular, one or more graph search algorithms can be performed on a graph that is representative of the neural network to obtain the order of execution.
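    For small operator graphs, the described search can be sketched by brute force: enumerate topological orders, score each by peak live memory, and keep one that fits the budget. A toy illustration (the liveness model and all names are assumptions; a real system would use guided graph search rather than enumeration):

```python
import itertools

def peak_memory(order, out_size, consumers):
    """Peak live memory of executing operators in `order`: an operator's
    output stays resident until all of its consumers have executed
    (final outputs, with no consumers, are modelled as staying live)."""
    live, done = set(), set()
    used = peak = 0
    for op in order:
        used += out_size[op]
        live.add(op)
        done.add(op)
        peak = max(peak, used)
        # Free any output whose consumers have now all run.
        for p in list(live):
            if consumers[p] and consumers[p] <= done:
                used -= out_size[p]
                live.discard(p)
    return peak

def best_order(ops, deps, out_size, budget):
    """Exhaustively score every topological order (toy-sized graphs only)
    and return one whose peak memory fits the budget, or None."""
    consumers = {op: {o for o in ops if op in deps[o]} for op in ops}
    best, best_peak = None, float("inf")
    for order in itertools.permutations(ops):
        seen, valid = set(), True
        for op in order:
            if not deps[op] <= seen:   # dependency not yet executed
                valid = False
                break
            seen.add(op)
        if not valid:
            continue
        p = peak_memory(order, out_size, consumers)
        if p < best_peak:
            best, best_peak = list(order), p
    return best if best_peak <= budget else None
```

On a graph with two producer/consumer chains joining at a final operator, finishing one chain before starting the other avoids holding both large intermediate tensors at once, which is exactly the kind of ordering decision the search makes.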

    GENERATING LARGER NEURAL NETWORKS

    Publication No.: EP3360084A1

    Publication Date: 2018-08-15

    Application No.: EP16809576.8

    Filing Date: 2016-11-11

    Applicant: Google LLC

    IPC Classification: G06N3/04 G06N3/08

    CPC Classification: G06N3/082 G06N3/04 G06N3/0454

    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating a larger neural network from a smaller neural network. In one aspect, a method includes obtaining data specifying an original neural network; generating a larger neural network from the original neural network, wherein the larger neural network has a larger neural network structure including the original neural network units and a plurality of additional neural network units not in the original neural network structure; initializing values of the parameters of the original neural network units and the additional neural network units so that the larger neural network generates the same outputs from the same inputs as the original neural network; and training the larger neural network to determine trained values of the parameters of the original neural network units and the additional neural network units from the initialized values.
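    The function-preserving initialization can be illustrated by Net2Net-style widening of one hidden layer: each new unit copies an existing unit and the copied units split their outgoing weights, so the larger network initially computes exactly the same function. A sketch under that assumption (function names are illustrative, not from the patent):

```python
import numpy as np

def widen(w1, b1, w2, new_units):
    """Grow a hidden layer by replicating existing units and splitting
    their outgoing weights, preserving the network's function."""
    rng = np.random.default_rng(0)
    old = w1.shape[1]
    # Each new unit copies a randomly chosen existing unit.
    mapping = np.concatenate([np.arange(old), rng.integers(0, old, new_units)])
    w1_new = w1[:, mapping]
    b1_new = b1[mapping]
    # Divide each unit's outgoing weights by its replication count.
    counts = np.bincount(mapping, minlength=old)[mapping]
    w2_new = w2[mapping, :] / counts[:, None]
    return w1_new, b1_new, w2_new

def forward(x, w1, b1, w2):
    """Two-layer network with a ReLU hidden layer."""
    h = np.maximum(x @ w1 + b1, 0.0)
    return h @ w2
```

Because duplicated units produce identical activations and their outgoing weights sum to the original weight, the widened network's outputs match the original's before any further training.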

    CONVOLUTIONAL GATED RECURRENT NEURAL NETWORKS

    Publication No.: EP3360081A1

    Publication Date: 2018-08-15

    Application No.: EP16806343.6

    Filing Date: 2016-11-11

    Applicant: Google LLC

    IPC Classification: G06N3/04 G06N3/10

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for implementing a convolutional gated recurrent neural network (CGRN). In one of the systems, the CGRN is configured to maintain a state that is a tensor having dimensions x by y by m, wherein x, y, and m are each greater than one, and for each of a plurality of time steps, update a currently maintained state by processing the currently maintained state through a plurality of convolutional gates.
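    A single CGRN time step, with convolutional update and reset gates acting on the x-by-y-by-m state tensor, can be sketched as follows (the naive convolution and the GRU-style gate wiring are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def conv2d_same(x, k):
    """'Same' 2-D convolution of an (h, w, c_in) tensor with a
    (kh, kw, c_in, c_out) kernel, zero-padded at the borders."""
    kh, kw, cin, cout = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw), (0, 0)))
    h, w, _ = x.shape
    out = np.zeros((h, w, cout))
    for i in range(h):
        for j in range(w):
            patch = xp[i:i + kh, j:j + kw, :]
            out[i, j] = np.tensordot(patch, k, axes=([0, 1, 2], [0, 1, 2]))
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cgrn_step(s, ku, kr, kc):
    """One CGRN time step: update the x-by-y-by-m state through
    convolutional update (u), reset (r), and candidate gates."""
    u = sigmoid(conv2d_same(s, ku))       # update gate
    r = sigmoid(conv2d_same(s, kr))       # reset gate
    c = np.tanh(conv2d_same(r * s, kc))   # candidate state
    return u * s + (1.0 - u) * c
```

With all-zero kernels the gates output 0.5 and the candidate is zero, so the state simply halves each step, which is a convenient sanity check on the gate wiring.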

    CONFIGURABLE ACCELERATOR FRAMEWORK, SYSTEM AND METHOD

    Publication No.: EP3346427A1

    Publication Date: 2018-07-11

    Application No.: EP17197155.9

    Filing Date: 2017-10-18

    IPC Classification: G06N3/063 G06N3/04

    CPC Classification: G06N3/063 G06N3/0454

    Abstract: Embodiments are directed towards a configurable accelerator framework device (400) that includes a stream switch (500) and a plurality of convolution accelerators (600). The stream switch (500) has a plurality of input ports and a plurality of output ports. Each of the input ports is configurable at run time to unidirectionally pass data to any one or more of the output ports via a stream link. Each one of the plurality of convolution accelerators (600) is configurable at run time to unidirectionally receive input data via at least two of the plurality of stream switch output ports, and each one of the plurality of convolution accelerators (600) is further configurable at run time to unidirectionally communicate output data via an input port of the stream switch.
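    The run-time-reconfigurable, unidirectional routing of the stream switch can be sketched as a simple routing table (a behavioural toy model, not the hardware design; all names are illustrative):

```python
class StreamSwitch:
    """Behavioural sketch of a stream switch: each input port forwards
    data unidirectionally to any configured subset of output ports."""

    def __init__(self, n_in, n_out):
        self.n_out = n_out
        self.routes = {i: set() for i in range(n_in)}   # input -> outputs
        self.sinks = {o: [] for o in range(n_out)}      # delivered data

    def connect(self, in_port, out_ports):
        """Reconfigure at run time: point an input at one or more outputs."""
        assert all(0 <= o < self.n_out for o in out_ports)
        self.routes[in_port] = set(out_ports)

    def push(self, in_port, data):
        """Stream a datum through the switch to every routed output."""
        for o in self.routes[in_port]:
            self.sinks[o].append(data)
```

In the framework described, the "sinks" would be convolution accelerator input buses, and an accelerator's output bus would feed back into a switch input port.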

    HARDWARE ACCELERATOR ENGINE AND METHOD

    Publication No.: EP3346425A1

    Publication Date: 2018-07-11

    Application No.: EP17197096.5

    Filing Date: 2017-10-18

    IPC Classification: G06N3/063 G06N3/04

    CPC Classification: G06N3/063 G06N3/0454

    Abstract: Embodiments are directed towards a hardware accelerator engine (600) that supports efficient mapping of convolutional stages of deep neural network algorithms. The hardware accelerator engine includes a plurality of convolution accelerators (600A), and each one of the plurality of convolution accelerators (600A) includes a kernel buffer (616), a feature line buffer (618), and a plurality of multiply-accumulate (MAC) units (620). The MAC units (620) are arranged to multiply and accumulate data received from both the kernel buffer (616) and the feature line buffer (618). The hardware accelerator engine also includes at least one input bus coupled to an output bus port of a stream switch, at least one output bus coupled to an input bus port of the stream switch, or at least one input bus and at least one output bus hard-wired to respective output bus and input bus ports of the stream switch.
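    The interplay of the feature line buffer, which holds only the most recent kernel-height rows of the streamed feature map, and the MAC units can be sketched in software (a behavioural illustration, not the hardware; 'valid' convolution, no padding):

```python
from collections import deque

def convolve_streamed(rows, kernel):
    """Convolve a row-streamed image with a kh-by-kw kernel using a
    line buffer that retains only the last kh rows, with the inner
    multiply-accumulate loops standing in for the MAC units."""
    kh, kw = len(kernel), len(kernel[0])
    line_buffer = deque(maxlen=kh)   # feature line buffer: last kh rows
    out = []
    for row in rows:                 # rows arrive one at a time (stream)
        line_buffer.append(row)
        if len(line_buffer) < kh:
            continue                 # not enough rows buffered yet
        out_row = []
        for j in range(len(row) - kw + 1):
            acc = 0
            for di in range(kh):     # MAC loop over the kernel window
                for dj in range(kw):
                    acc += kernel[di][dj] * line_buffer[di][j + dj]
            out_row.append(acc)
        out.append(out_row)
    return out
```

The point of the line buffer is that a full feature map never needs to be resident: only kh rows are held while the MAC array sweeps each window.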

    INFORMATION ESTIMATION APPARATUS AND INFORMATION ESTIMATION METHOD

    Publication No.: EP3343456A1

    Publication Date: 2018-07-04

    Application No.: EP17203449.8

    Filing Date: 2017-11-24

    Inventor: Adachi, Jingo

    IPC Classification: G06N3/04 G06N3/08 G06N7/00

    Abstract: Provided is a technique for stable and fast computation of a variance representing a confidence interval for an estimation result in an estimation apparatus using a neural network that includes an integrated layer combining a dropout layer, which drops out part of the input data, and an FC layer, which computes a weight. When input data having a multivariate distribution is supplied to the integrated layer, a data analysis unit 30 determines, based on a numerical distribution of the terms formed by the respective products of each vector element of the input data and the weight, a data type for each vector element of the output data from the integrated layer. An estimated confidence interval computation unit 20 then applies an approximate computation method associated with that data type to analytically compute a variance of each vector element of the output data from the integrated layer, based on the input data to the integrated layer.
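    The analytic variance computation can be illustrated for the simplest case: treating the dropout mask elements as independent Bernoulli variables, the mean and variance of each output element of the integrated (dropout + FC) layer follow in closed form. A sketch under that assumption (names illustrative; the patent's data-type-dependent approximations are not reproduced here):

```python
import numpy as np

def dropout_fc_moments(x, w, keep_prob):
    """Analytic mean and variance of a dropout + fully connected layer,
    modelling y_j = sum_i m_i * x_i * w_ij with independent masks
    m_i ~ Bernoulli(keep_prob)."""
    terms = x[:, None] * w                       # products x_i * w_ij
    mean = keep_prob * terms.sum(axis=0)         # E[y_j]
    var = keep_prob * (1.0 - keep_prob) * (terms ** 2).sum(axis=0)
    return mean, var
```

This replaces the usual Monte Carlo dropout sampling with a single closed-form pass, which is the "stable and fast" property the abstract emphasises.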