Abstract:
Disclosed herein is a process-based inter-thing collaboration apparatus and method for a Web of Things (WoT) environment, which can perform dynamic collaboration between things. The presented apparatus includes an inter-thing collaboration design tool unit for designing an Inter-Thing Collaboration Process (ITCP) based on information about things, including devices, services, and processes, and an inter-thing collaboration management unit for dynamically configuring an inter-thing collaboration community based on the things' context information and executing the ITCP designed by the inter-thing collaboration design tool unit.
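The following is a minimal sketch of the idea described above, not the patented implementation: an ITCP is modeled as an ordered list of steps, and a manager dynamically builds a collaboration community by picking, for each step, a thing whose capabilities and context match. All class and field names (Thing, ITCPStep, CollaborationManager) are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Thing:
    name: str
    capabilities: set                                  # e.g. {"temperature", "display"}
    context: dict = field(default_factory=dict)        # e.g. {"location": "room-1", "online": True}

@dataclass
class ITCPStep:
    capability: str              # capability required for this step of the process
    required_context: dict       # context a thing must satisfy to join the community

class CollaborationManager:
    def __init__(self, things):
        self.things = things

    def build_community(self, itcp):
        """Dynamically pick one suitable thing per ITCP step based on current context."""
        community = {}
        for step in itcp:
            for thing in self.things:
                if step.capability in thing.capabilities and all(
                    thing.context.get(k) == v for k, v in step.required_context.items()
                ):
                    community[step.capability] = thing
                    break
        return community

    def execute(self, itcp):
        community = self.build_community(itcp)
        for step in itcp:
            thing = community.get(step.capability)
            if thing is None:
                print(f"No thing available for capability '{step.capability}'")
            else:
                print(f"Executing '{step.capability}' on {thing.name}")

things = [
    Thing("sensor-1", {"temperature"}, {"location": "room-1", "online": True}),
    Thing("panel-1", {"display"}, {"location": "room-1", "online": True}),
]
itcp = [ITCPStep("temperature", {"online": True}), ITCPStep("display", {"location": "room-1"})]
CollaborationManager(things).execute(itcp)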
Abstract:
An apparatus and method for controlling the execution of a mashup WoT service are disclosed herein. The apparatus includes a WoT mashup service functionality entity, a WoT service execution functional entity, and a WoT service repository. The WoT mashup service functionality entity responds to a mashup WoT service request from a WoT service user by executing a mashup WoT service optimized for that user. The WoT service execution functional entity, when requested by the WoT mashup service functionality entity to execute the simple WoT services that make up the mashup WoT service, executes them and returns their results. The WoT service repository stores WoT service execution descriptions, each of which describes the execution logic of one of the simple WoT services.
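As an illustrative sketch only (the class names ServiceRepository, ExecutionEntity, and MashupEntity are assumptions, not the disclosed interfaces), the division of roles could look like this: the repository holds one execution description per simple service, the execution entity runs simple services on request, and the mashup entity composes their results for the user.

class ServiceRepository:
    """Stores one execution description (here, a plain callable) per simple WoT service."""
    def __init__(self):
        self._descriptions = {}

    def register(self, name, execution_logic):
        self._descriptions[name] = execution_logic

    def lookup(self, name):
        return self._descriptions[name]

class ExecutionEntity:
    """Executes simple WoT services and returns their results to the caller."""
    def __init__(self, repository):
        self.repository = repository

    def execute(self, name, **params):
        return self.repository.lookup(name)(**params)

class MashupEntity:
    """Answers a mashup request by invoking the simple services it is composed of."""
    def __init__(self, executor):
        self.executor = executor

    def handle_request(self, simple_services, combine):
        results = [self.executor.execute(name, **params) for name, params in simple_services]
        return combine(results)

repo = ServiceRepository()
repo.register("weather", lambda city: {"city": city, "temp_c": 21})
repo.register("traffic", lambda city: {"city": city, "level": "low"})

mashup = MashupEntity(ExecutionEntity(repo))
print(mashup.handle_request(
    [("weather", {"city": "Seoul"}), ("traffic", {"city": "Seoul"})],
    combine=lambda results: {"summary": results},
))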
Abstract:
The present invention relates to an apparatus and method for augmented cognition that can improve a worker's cognitive ability by collecting various sensed data in a workplace, generating artificial sensory information based on the collected data, and providing the generated artificial sensory information in the form of an augmented cognition service.
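A rough pipeline sketch under stated assumptions (the thresholds, field names, and delivery channel are illustrative, not taken from the disclosure): raw workplace sensor readings are fused into simple "artificial sensory" cues that could be pushed to a worker as an augmented cognition service.

def collect_sensed_data():
    # Stand-in for real sensors deployed in the workplace
    return {"gas_ppm": 120, "noise_db": 88, "temperature_c": 41}

def generate_artificial_sense(sensed):
    cues = []
    if sensed["gas_ppm"] > 100:
        cues.append("hazardous gas nearby")
    if sensed["noise_db"] > 85:
        cues.append("noise level unsafe without hearing protection")
    if sensed["temperature_c"] > 40:
        cues.append("heat stress risk")
    return cues

def augmented_cognition_service():
    cues = generate_artificial_sense(collect_sensed_data())
    for cue in cues:
        print(f"[augmented cue] {cue}")   # e.g. delivered via an AR display or haptic device

augmented_cognition_service()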
Abstract:
Exemplary embodiments of the present invention relate to a method and apparatus for big data parallel inference. A method for data parallel inference according to an embodiment of the present invention comprises generating a predetermined network, comprising a pattern network and a join network, based on rule files and a predetermined algorithm; performing a pattern matching test on input data in parallel across a plurality of pattern matching means by loading the pattern network into each of the pattern matching means and distributing the input data among them; and inferring new data by performing a join matching test on the data that has passed the pattern matching test. According to embodiments of the present invention, new data can be inferred by analyzing big data accurately and quickly.
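A highly simplified sketch of this flow (an assumed structure, not the patent's algorithm): a pattern network of per-fact tests, conceptually derived from rule files, is applied to input facts in parallel, and a join network combines the facts that passed to infer new data. The sample rule, pattern names, and fact fields are all assumptions.

from concurrent.futures import ThreadPoolExecutor

# Pattern network: one test per pattern, conceptually built from rule files
pattern_network = {
    "adult":    lambda fact: fact.get("type") == "person" and fact.get("age", 0) >= 18,
    "employed": lambda fact: fact.get("type") == "person" and fact.get("job") is not None,
}

def pattern_match(fact):
    """Run every pattern test on one fact; each worker holds a copy of the pattern network."""
    return {name for name, test in pattern_network.items() if test(fact)}

def join_match(facts_with_patterns):
    """Join network: infer new data from facts that passed the patterns a rule requires."""
    inferred = []
    for fact, matched in facts_with_patterns:
        if {"adult", "employed"} <= matched:           # join condition of a sample rule
            inferred.append({"type": "taxpayer", "name": fact["name"]})
    return inferred

facts = [
    {"type": "person", "name": "Kim", "age": 34, "job": "engineer"},
    {"type": "person", "name": "Lee", "age": 15, "job": None},
]

with ThreadPoolExecutor() as pool:                     # input data distributed across workers
    matches = list(pool.map(pattern_match, facts))

print(join_match(list(zip(facts, matches))))           # -> [{'type': 'taxpayer', 'name': 'Kim'}]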
Abstract:
Exemplary embodiments of the present invention relate to resource sharing for M2M service capabilities. A local resource sharing method of an M2M component according to an embodiment of the present invention comprises generating a replacement Network Service Capability Layer (NSCL) when a predetermined component included in a device/gateway domain is not connected to a network domain, or when a predetermined request is made; and communicating with at least one of the other components included in the device/gateway domain through the generated replacement NSCL. In exemplary embodiments of the present invention, M2M service capabilities may be replaced or changed according to system conditions.
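A sketch under stated assumptions (ReplacementNSCL and M2MComponent are hypothetical names): when a device/gateway component loses its connection to the network-domain NSCL, a local replacement NSCL is generated on demand, and components in the same domain keep exchanging resources through it.

class ReplacementNSCL:
    """Minimal local stand-in for the network-domain service capability layer."""
    def __init__(self):
        self._resources = {}

    def store(self, path, value):
        self._resources[path] = value

    def retrieve(self, path):
        return self._resources.get(path)

class M2MComponent:
    def __init__(self, name, network_connected):
        self.name = name
        self.network_connected = network_connected
        self.local_nscl = None

    def ensure_scl(self):
        if self.network_connected:
            return "network-domain NSCL"       # normal case: use the remote NSCL
        if self.local_nscl is None:            # otherwise generate a replacement NSCL once
            self.local_nscl = ReplacementNSCL()
        return self.local_nscl

gateway = M2MComponent("gateway-1", network_connected=False)
scl = gateway.ensure_scl()
scl.store("/sensors/door", "open")
print(scl.retrieve("/sensors/door"))           # another local component reads via the same replacement NSCL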
Abstract:
There is provided a system for providing information. The system includes a data classifying device configured to receive original data and classify it as real-time data or general data; a real-time data analyzing device configured to receive the real-time data from the data classifying device and generate condensed information containing only those parts of the real-time data's attribute information that satisfy predefined conditions; and a distributed parallel processing device configured to receive the general data from the data classifying device, perform a predetermined distributed parallel computation on it, and generate analysis information.
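An illustrative sketch only (the classification rule, condition functions, and field names are assumptions): original records are classified as real-time or general, real-time records are condensed to the attributes that satisfy predefined conditions, and general records are handed to a stubbed distributed batch analysis step.

def classify(record):
    return "real_time" if record.get("priority") == "realtime" else "general"

def condense_real_time(record, conditions):
    """Keep only the attributes whose values satisfy their predefined condition."""
    return {k: v for k, v in record.items() if k in conditions and conditions[k](v)}

def distributed_batch_analysis(records):
    # Placeholder for a distributed parallel computation (e.g. a MapReduce-style job)
    return {"count": len(records)}

conditions = {"temperature": lambda v: v > 70, "vibration": lambda v: v > 0.8}
stream = [
    {"priority": "realtime", "temperature": 85, "vibration": 0.2, "device": "pump-3"},
    {"priority": "batch",    "temperature": 40, "vibration": 0.1, "device": "pump-1"},
]

general = []
for record in stream:
    if classify(record) == "real_time":
        print("condensed:", condense_real_time(record, conditions))
    else:
        general.append(record)
print("analysis:", distributed_batch_analysis(general))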
Abstract:
Exemplary embodiments of the present invention relate to a resource processing scheme that can be used for Web of Things services. The Web of Things plug-in system according to exemplary embodiments of the present invention comprises: a Web of Things resource storing unit configured to store a Web of Things resource, which represents at least one of a thing itself and any data produced by the thing; and a Web of Things resource processing unit configured to perform resource processing on the Web of Things resources stored in the Web of Things resource storing unit. According to exemplary embodiments of the present invention, information from things can be provided in the form of web resources, through the web, to Web of Things service users as well as to other things.
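A minimal sketch with assumed interfaces (WoTResourceStore and WoTResourceProcessor are illustrative names, not the disclosed units): the storing unit keeps WoT resources addressed by web-style paths, and the processing unit serves them so that services and other things can read and write them as web resources.

class WoTResourceStore:
    def __init__(self):
        self._resources = {}

    def put(self, path, representation):
        self._resources[path] = representation

    def get(self, path):
        return self._resources.get(path)

class WoTResourceProcessor:
    def __init__(self, store):
        self.store = store

    def handle(self, method, path, body=None):
        """Very small REST-like dispatcher over the stored resources."""
        if method == "GET":
            return self.store.get(path)
        if method == "PUT":
            self.store.put(path, body)
            return body
        raise ValueError(f"unsupported method: {method}")

store = WoTResourceStore()
processor = WoTResourceProcessor(store)
processor.handle("PUT", "/things/lamp-1", {"state": "on"})      # the thing itself as a resource
processor.handle("PUT", "/things/lamp-1/power", {"watts": 9})   # data produced by the thing
print(processor.handle("GET", "/things/lamp-1/power"))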