Abstract:
An adaptive block cache management method and a DBMS applying the same are provided. A DB system according to an exemplary embodiment of the present disclosure includes: a cache configured to temporarily store DB data; a disk configured to permanently store the DB data; and a processor configured to determine whether to operate the cache according to a state of the DB system. Accordingly, a high-speed cache is adaptively managed according to a current state of a DBMS, such that the DB processing speed can be improved.
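The state-dependent cache decision described above can be sketched as follows. This is a minimal illustration, not the patented method: the state fields (`free_memory_ratio`, `cache_hit_ratio`) and thresholds are hypothetical stand-ins for whatever "state of the DB system" the processor actually evaluates.

```python
# Hypothetical sketch of adaptive block-cache control: the processor
# enables or bypasses the cache depending on the current DBMS state.
# State fields and thresholds below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SystemState:
    free_memory_ratio: float   # fraction of memory still available
    cache_hit_ratio: float     # recent cache hit ratio

def should_use_cache(state: SystemState,
                     min_free_memory: float = 0.1,
                     min_hit_ratio: float = 0.2) -> bool:
    """Operate the cache only when memory is available and it pays off."""
    if state.free_memory_ratio < min_free_memory:
        return False           # memory pressure: bypass the cache
    if state.cache_hit_ratio < min_hit_ratio:
        return False           # cache is not helping: go straight to disk
    return True

def read_block(block_id, cache: dict, disk: dict, state: SystemState):
    """Serve a block through the cache only while the cache is enabled."""
    if should_use_cache(state):
        if block_id in cache:
            return cache[block_id]
        data = disk[block_id]
        cache[block_id] = data  # populate the cache on a miss
        return data
    return disk[block_id]       # cache disabled for the current state
```

In this sketch the cache is simply bypassed when the state check fails; a real DBMS would also have to decide what to do with already-cached blocks when switching modes.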
Abstract:
A cache management method for optimizing read performance in a distributed file system is provided. The cache management method includes: acquiring metadata of a file system; generating a list regarding data blocks based on the metadata; and pre-loading data blocks into a cache with reference to the list. Accordingly, read performance in analyzing big data in a Hadoop distributed file system environment can be optimized in comparison to a related-art method.
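The three steps above (acquire metadata, generate a block list, pre-load the cache) can be sketched as follows. The metadata layout and cache API are illustrative assumptions, not the HDFS-specific structures the abstract refers to.

```python
# Hypothetical sketch of the pre-loading flow: build a block list from
# file-system metadata, then warm the cache before reads arrive.
# The metadata shape and cache API here are illustrative assumptions.

def build_block_list(metadata: dict) -> list:
    """Flatten per-file block ids from the metadata into one ordered list."""
    blocks = []
    for entry in metadata.get("files", []):
        blocks.extend(entry.get("blocks", []))
    return blocks

def preload(block_list: list, cache: dict, disk: dict, capacity: int) -> dict:
    """Load blocks into the cache up to its capacity, in list order."""
    for block_id in block_list:
        if len(cache) >= capacity:
            break                      # cache full: stop pre-loading
        if block_id not in cache:
            cache[block_id] = disk[block_id]
    return cache
```

Ordering the list by expected access pattern (rather than metadata order, as here) is where a real implementation would earn its read-performance gain.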
Abstract:
Fabrics with a multi-layered circuit of high reliability and a manufacturing method thereof are provided. The fabrics with the multi-layered circuit include: a base layer; a first conductive pattern which is formed on the base layer; a second conductive pattern which is formed to intersect with the first conductive pattern at least in part; and an insulating pattern which is formed on an intersection portion which is a region where the first conductive pattern and the second conductive pattern intersect.
Abstract:
There is provided a method for applying a learning model-based power saving model in an intelligent BMC. According to an embodiment, a BMC includes: a prediction module configured to predict future computing resource usage and a future CPU temperature from monitoring data on computing resources; a power capping module configured to control power capping based on the predicted future computing resource usage; and a fan control module configured to control a cooling fan based on the predicted future CPU temperature. Accordingly, the BMC effectively and efficiently controls power capping and cooling fans based on predictions made by interworking with the on-device AI, thereby reducing the power consumption of a data center infrastructure.
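The three modules above can be sketched as follows. Here a simple moving average stands in for the on-device AI predictor, and the capping and fan thresholds are illustrative assumptions, not values from the disclosure.

```python
# Hypothetical sketch of the three BMC modules: a predictor (a moving
# average standing in for the on-device AI model), a power-capping rule,
# and a fan-control rule. All thresholds are illustrative assumptions.

def predict_next(history: list, window: int = 3) -> float:
    """Predict the next value as the mean of the last `window` samples."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def power_cap_watts(predicted_usage: float,
                    base_cap: float = 400.0,
                    headroom: float = 1.2) -> float:
    """Cap power at predicted usage plus headroom, never above the base cap."""
    return min(base_cap, predicted_usage * headroom)

def fan_duty(predicted_temp_c: float) -> int:
    """Map the predicted CPU temperature to a fan duty cycle (percent)."""
    if predicted_temp_c < 50:
        return 30
    if predicted_temp_c < 70:
        return 60
    return 100
```

The point of prediction-based control, as the abstract argues, is that the cap and fan speed track where the load is heading rather than where it was.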
Abstract:
There is provided an intelligent BMC for predicting a fault by interworking with on-device AI. A fault prediction method of a BMC according to an embodiment includes: collecting monitoring data regarding computing modules installed on a main board; calculating a FOFL from the collected monitoring data; and constructing an AI model related to the calculated FOFL and predicting a FOFL from the monitoring data. Accordingly, faults occurring in various patterns may be predicted based on monitoring data by interworking with on-device AI.
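The collect-calculate-predict pipeline above can be sketched as follows. The abstract does not define how a FOFL is computed, so the scalar score here is a stand-in, and a trivial linear extrapolation stands in for the AI model; both are illustrative assumptions.

```python
# Hypothetical sketch of the fault-prediction flow. The FOFL here is a
# stand-in scalar computed from monitoring samples, and the "AI model"
# is a trivial linear trend; both are illustrative assumptions.

def collect_monitoring(modules: dict) -> list:
    """Gather one numeric health sample per computing module."""
    return [m["temperature"] + m["error_count"] * 10.0
            for m in modules.values()]

def calculate_fofl(samples: list) -> float:
    """Reduce the samples to a single FOFL-style score (higher = worse)."""
    return max(samples) if samples else 0.0

def predict_fofl(history: list) -> float:
    """Extrapolate the next FOFL from the last two observations."""
    if len(history) < 2:
        return history[-1] if history else 0.0
    return history[-1] + (history[-1] - history[-2])  # linear trend
```

A real on-device model would be trained on historical FOFL series; the structure of the loop (collect, score, predict) is what this sketch illustrates.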
Abstract:
There is provided an edge server system management and control method for a rugged environment. An edge server management apparatus according to an embodiment of the present disclosure includes: a communication unit configured to communicate with an edge server; and a processor configured to collect environmental information of the edge server through the communication unit, and to control an external environment of the edge server and resource configuration for an edge service based on the collected environmental information. Accordingly, it is possible to manage and control the configuration modules of an edge server system (e.g., a fan, a heater) even in a severe industrial site, and to operate an edge service by reconfiguring the resources of the edge server.
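The two control paths above (environment control and resource reconfiguration) can be sketched as follows. The temperature thresholds and the CPU-quota policy are illustrative assumptions.

```python
# Hypothetical sketch of environment-driven control: the management
# apparatus reads environmental info and decides fan/heater actions and
# whether to shrink the edge service's resource allocation.
# Thresholds and the quota policy are illustrative assumptions.

def control_environment(temp_c: float) -> dict:
    """Return actuator commands for the edge enclosure."""
    return {"fan_on": temp_c > 35.0, "heater_on": temp_c < 5.0}

def reconfigure_resources(temp_c: float, cpu_quota: int) -> int:
    """Throttle the edge service's CPU quota under thermal stress."""
    if temp_c > 45.0:
        return max(1, cpu_quota // 2)  # halve the quota in a severe state
    return cpu_quota
```

Coupling actuator control with resource reconfiguration is what lets the service keep running (degraded) when the enclosure cannot be cooled further.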
Abstract:
There are provided a cloud management method and a cloud management apparatus for rapidly scheduling arrangements of service resources by considering equal distribution of resources in a large-scale, distributed-collaboration container environment. The cloud management method according to an embodiment includes: receiving, by a cloud management apparatus, a resource allocation request for a specific service; monitoring, by the cloud management apparatus, the current status of available resources of a plurality of clusters, and selecting clusters to which the requested resource can be allocated; calculating, by the cloud management apparatus, a suitability score for each of the selected clusters; and selecting, by the cloud management apparatus, the cluster most suitable for executing the requested service from among the selected clusters, based on the respective suitability scores. Accordingly, a model that selects a candidate group and then finally selects the cluster suitable for a required resource can be supported as a method for determining equal resource arrangements among associated clusters according to the characteristics of the required resource.
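The filter-score-select scheduling steps above can be sketched as follows. The scoring formula, which rewards large and evenly balanced remaining headroom, is an illustrative assumption standing in for the disclosure's scoring model.

```python
# Hypothetical sketch of the scheduling steps: filter clusters that can
# hold the requested resources, score each candidate by how much and how
# evenly CPU/memory headroom would remain, and pick the best.
# The scoring formula is an illustrative assumption.

def filter_candidates(clusters: dict, req_cpu: int, req_mem: int) -> list:
    """Keep only clusters that can be allocated the requested resource."""
    return [name for name, c in clusters.items()
            if c["free_cpu"] >= req_cpu and c["free_mem"] >= req_mem]

def suitability_score(cluster: dict, req_cpu: int, req_mem: int) -> float:
    """Higher when remaining CPU/memory headroom stays large and balanced."""
    cpu_left = (cluster["free_cpu"] - req_cpu) / cluster["total_cpu"]
    mem_left = (cluster["free_mem"] - req_mem) / cluster["total_mem"]
    balance = 1.0 - abs(cpu_left - mem_left)   # reward even distribution
    return (cpu_left + mem_left) / 2 * balance

def select_cluster(clusters: dict, req_cpu: int, req_mem: int):
    """Return the best-scoring candidate cluster, or None if none fits."""
    names = filter_candidates(clusters, req_cpu, req_mem)
    if not names:
        return None
    return max(names,
               key=lambda n: suitability_score(clusters[n], req_cpu, req_mem))
```

The balance term is one way to encode the abstract's "equal distribution of resources": a cluster left with lopsided CPU-versus-memory headroom scores lower than one left evenly provisioned.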
Abstract:
A method for generating firmware by allowing a developer to freely select the functions to be included in firmware installed on a main board of a server, and by building a firmware image, is provided. The method for generating firmware includes: listing functions that are allowed to be included in firmware installed on a main board of a server; receiving a selection of at least one of the listed functions from a user; and building a firmware image including the functions selected by the user. Accordingly, since a firmware image is built by a developer freely selecting the functions to be included in firmware installed on a main board of a server, firmware optimized for the requirements of the developer can be generated.
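The list-select-build flow above can be sketched as follows. The function names and the image descriptor format are hypothetical; the disclosure does not specify them.

```python
# Hypothetical sketch of selectable firmware generation: list the
# available functions, take the user's selection, and assemble an image
# containing only those functions. Function names and the image
# descriptor format are illustrative assumptions.

AVAILABLE_FUNCTIONS = ["sensor_monitor", "remote_console", "power_control"]

def list_functions() -> list:
    """Step 1: list the functions allowed to be included in the firmware."""
    return list(AVAILABLE_FUNCTIONS)

def build_image(selected: list) -> dict:
    """Steps 2-3: validate the selection and build an image descriptor."""
    unknown = [f for f in selected if f not in AVAILABLE_FUNCTIONS]
    if unknown:
        raise ValueError(f"unknown functions: {unknown}")
    return {"functions": sorted(selected), "size_kb": 64 * len(selected)}
```

The claimed benefit falls out directly: an image built from two selected functions carries only those two, so its footprint tracks the developer's requirements rather than a fixed full-feature build.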
Abstract:
A module-type PDU for supplying different kinds of power is provided. The PDU includes: a base configured to transmit different kinds of power; and a multi-socket module connected to the base to transmit one kind of power to devices whose plugs are connected to the multi-socket module. Accordingly, dual power supply can be achieved through a single PDU, so that the PDU installation cost can be reduced, and, as the number of PDUs is reduced, electrical equipment can be simplified.
Abstract:
There are provided a method and an apparatus for managing a hybrid cloud to perform consistent resource management for all resources in a heterogeneous cluster environment comprised of an on-premise cloud and a plurality of public clouds. Accordingly, the method and apparatus for hybrid cloud management provide an integration support function between different cluster orchestrations in a heterogeneous cluster environment comprised of an on-premise cloud and a plurality of public clouds, support consistent resource management for all resources, and provide optimal workload deployment, flexible optimal reconfiguration, migration and restoration, and integrated scaling of all resources.