Abstract:
Systems, methods and tools for managing the job queues of virtual machines, maintaining a low energy profile and a quality of service within the contractual service agreement. The systems migrate jobs to a new VM queue when an assigned VM has failed. The systems employ machine learning techniques to decide whether to reallocate the job to a VM running in an active mode (non-scalable mode) or to a VM operating under a dynamic voltage and frequency scaling (DVFS) mode. The systems reconcile job failures and transfer and/or complete jobs using the network of VMs without degrading service quality, maintaining a lower power consumption policy through scalable modes, including idle, busy, sleep, DVFS gradient and DVFS maximum modes. Switching jobs to scalable nodes improves the overall reliability of the data center and increases the recoverability of the systems in virtualized environments.
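The reallocation decision described above can be sketched as a simple policy: jobs with tight deadlines go to an active-mode VM, while others prefer a lower-power DVFS-mode VM. This is a minimal stand-in for the learned policy; the `Vm` fields, the 0.5 slack threshold, and the queue-depth tiebreak are all hypothetical, not taken from the abstract.

```python
from dataclasses import dataclass

@dataclass
class Vm:
    name: str
    mode: str          # "active" (non-scalable) or "dvfs" (scalable)
    queue_depth: int   # jobs already waiting in this VM's queue

def reallocate(job: str, candidates: list[Vm], deadline_slack: float) -> Vm:
    """Pick a target queue for a job whose assigned VM has failed.

    Stand-in for the learned decision: little deadline slack -> an
    active-mode VM; otherwise prefer the DVFS-mode VM with the
    shortest queue, falling back to any VM if none match.
    """
    preferred = "active" if deadline_slack < 0.5 else "dvfs"
    pool = [v for v in candidates if v.mode == preferred] or candidates
    return min(pool, key=lambda v: v.queue_depth)
```

A real system would replace the fixed threshold with a model trained on past job outcomes; the structure (classify, filter candidates, break ties) stays the same.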
Abstract:
An information handling system (IHS) is disclosed wherein the system includes a processor associated with at least one performance state (P-state), and a memory in communication with the processor. The memory is operable to store a virtualization software and a basic input/output system (BIOS). The BIOS is configured to report a parameter of the P-state to the virtualization software. In addition, the BIOS is configured to transition the processor into a desired P-state. A method for managing performance states in an information handling system (IHS) is further disclosed wherein the method includes providing a basic input/output system (BIOS) in communication with a processor, the processor associated with at least one performance state (P-state), and reporting a parameter of the at least one P-state to a virtualization software via the BIOS. The method further includes transitioning the processor to a desired P-state via the BIOS.
Abstract:
Technologies are generally described for systems, devices and methods effective to schedule access to a core. In some examples, a first dynamic voltage and frequency scaling (DVFS) value of a first virtual machine may be received by a virtual machine manager. A second DVFS value of a second virtual machine may be received by the virtual machine manager. A third DVFS value of a third virtual machine may be received by the virtual machine manager. The third DVFS value may be substantially the same as the first DVFS value and different from the second DVFS value. A dispatch cycle may be generated to execute the first, second and third virtual machines on the core. After execution of the first virtual machine, the dispatch cycle may require execution of the third virtual machine before execution of the second virtual machine.
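The scheduling idea above can be sketched directly: group VMs that share a DVFS value so they run back to back, avoiding a voltage/frequency transition between them. The tuple representation and values below are illustrative, not from the patent.

```python
def dispatch_cycle(vms):
    """Order VMs so those sharing a DVFS value run consecutively,
    minimizing voltage/frequency transitions on the core.

    vms: list of (name, dvfs_value) tuples in arrival order.
    """
    by_dvfs = {}
    for name, dvfs in vms:
        by_dvfs.setdefault(dvfs, []).append(name)
    cycle = []
    for names in by_dvfs.values():  # dict preserves first-seen DVFS order
        cycle.extend(names)
    return cycle

# vm1 and vm3 share a DVFS value, so vm3 is dispatched before vm2:
dispatch_cycle([("vm1", 1.2), ("vm2", 0.9), ("vm3", 1.2)])
# -> ["vm1", "vm3", "vm2"]
```

This reproduces the abstract's example ordering: the third VM, matching the first VM's DVFS value, executes before the second.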
Abstract:
In a virtual machine system, a fault-tolerant arrangement of virtual machines is performed. A virtual machine managing apparatus includes a similar group generating unit and an arrangement restriction generating unit. The similar group generating unit generates, out of plural virtual machines, a group of virtual machines having a similarity relationship, which indicates that the performance values of the virtual machines at each timing are approximately the same. The arrangement restriction generating unit outputs the group of virtual machines having the similarity relationship as a distributed-arrangement restriction indicating a group of virtual machines to be arranged on different processing apparatuses among the plural processing apparatuses carrying out the processes of the virtual machines.
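A minimal sketch of the similar-group generation: VMs whose performance samples stay within a relative tolerance at every timing are grouped, and only multi-member groups are emitted as distributed-arrangement restrictions. The tolerance value and the dict-of-samples representation are assumptions for illustration.

```python
def similar_groups(perf, tol=0.05):
    """Group VMs whose performance values at each timing are
    approximately the same (within tol, relative, at every sample).

    perf: dict mapping VM name -> list of performance samples.
    Returns groups whose members should be spread across different
    processing apparatuses (distributed-arrangement restrictions).
    """
    groups = []
    for name in perf:
        for group in groups:
            rep = perf[group[0]]  # compare against the group's first member
            if all(abs(a - b) <= tol * max(abs(a), abs(b), 1e-9)
                   for a, b in zip(perf[name], rep)):
                group.append(name)
                break
        else:
            groups.append([name])
    return [g for g in groups if len(g) > 1]  # restrictions need >= 2 VMs
```

Two VMs that track each other (e.g. replicas behind the same load balancer) end up in one group, so a single host failure cannot take down both.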
Abstract:
An information processing apparatus connected to another information processing apparatus includes an arithmetic processing device, and one or more processors configured to detect an exception event of its own main memory when the arithmetic processing device requests access to data in a main memory possessed by the other information processing apparatus, and to vary a clock frequency or a voltage of the arithmetic processing device on the basis of the detection of the exception event.
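The mechanism can be sketched as follows: an access outside the node's own main memory range raises the exception, and the arithmetic device's clock is dropped while the slow remote access is in flight. The dictionary fields and frequency values are hypothetical; real hardware would expose this through P-states or voltage rails.

```python
def on_memory_access(addr, local_range, cpu):
    """Exception-driven frequency scaling, sketched.

    addr:        address the arithmetic device is accessing.
    local_range: (lo, hi) bounds of this node's own main memory.
    cpu:         dict with hypothetical "freq_mhz" / "min_mhz" fields.
    Returns the (possibly throttled) CPU state.
    """
    lo, hi = local_range
    if lo <= addr < hi:
        return dict(cpu)                        # local access: no exception
    return {**cpu, "freq_mhz": cpu["min_mhz"]}  # remote access: throttle
```

The point of the design is that remote-memory latency makes the high clock wasted energy; throttling only on the exception path leaves local accesses at full speed.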
Abstract:
A system and method for virtualization and cloud security are disclosed. According to one embodiment, a system comprises a first multi-core processing cluster and a second multi-core processing cluster in communication with a network interface card and software instructions. When the software instructions are executed by the second multi-core processing cluster, they cause the second multi-core processing cluster to receive a request for a service, create a new virtual machine or invoke an existing virtual machine to service the request, and return a desired result indicative of successful completion of the service to the first multi-core processing cluster.
Abstract:
An optimized placement of virtual machines may be determined by optimizing an energy cost for a group of virtual machines in various configurations. For various hardware platforms, an energy cost per performance value may be determined. Based on the performance usage of a group of virtual machines, a total power cost may be determined and used for optimization. In some implementations, an optimized placement may include operating a group of virtual machines in a manner that does not exceed a total energy cost for a period of time.
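The optimization described above can be illustrated with a brute-force sketch: assign each VM to a host, reject placements that exceed a host's capacity, and keep the assignment with the lowest total energy cost (performance usage times the host's energy cost per performance value). Host/VM names and the capacity model are assumptions; a real placer would use heuristics rather than exhaustive search.

```python
from itertools import product

def best_placement(vm_perf, hosts):
    """Choose a host for each VM to minimize total energy cost.

    vm_perf: VM name -> performance usage.
    hosts:   host name -> (cost_per_perf, capacity).
    Cost of a placement = sum of usage * host cost-per-perf;
    placements exceeding any host's capacity are rejected.
    """
    vms, names = list(vm_perf), list(hosts)
    best, best_cost = None, float("inf")
    for assign in product(names, repeat=len(vms)):
        load = {h: 0.0 for h in names}
        for v, h in zip(vms, assign):
            load[h] += vm_perf[v]
        if any(load[h] > hosts[h][1] for h in names):
            continue  # over capacity
        cost = sum(vm_perf[v] * hosts[h][0] for v, h in zip(vms, assign))
        if cost < best_cost:
            best, best_cost = dict(zip(vms, assign)), cost
    return best, best_cost
```

Here the energy-efficient host fills first, and the overflow lands on the costlier platform, which is exactly the trade-off the per-platform cost-per-performance figures are meant to capture.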
Abstract:
A bridge logic device for a heterogeneous computer system is disclosed, the system having at least one performance processor, processor supporting logic that supports the at least one performance processor in executing software tasks, and a hypervisor processor consuming less power than the at least one performance processor. The bridge logic device comprises a hypervisor operation logic that maintains the status of the system under the at least one performance processor; a processor language translator logic that translates between the processor languages of the at least one performance processor and the hypervisor processor; and a high-speed bus switch that has first, second and third ports for relaying data across any two of the three ports bidirectionally. The switch is connected at the first, second, and third ports to the at least one performance processor, to the hypervisor processor via the processor language translator logic, and to the processor supporting logic, respectively.
Abstract:
A programmable processor and method for improving the performance of processors by expanding at least two source operands, or a source and a result operand, to a width greater than the width of either the general purpose register or the data path width. The present invention provides operands which are substantially larger than the data path width of the processor by using the contents of a general purpose register to specify a memory address at which a plurality of data path widths of data can be read or written, as well as the size and shape of the operand. In addition, several instructions and apparatus for implementing these instructions are described which obtain performance advantages if the operands are not limited to the width and accessible number of general purpose registers.
Abstract:
Technologies are generally provided for reactive loop sensing in multi-datacenter deployments. In some examples, tagged metrics from deployment elements on different datacenter or platform providers may be used by a stability analysis module to generate a synthetic generalized deployment model that aliases multiple system elements into general state vectors. The state vectors may include a transfer vector on the border between each datacenter or platform, and the feedback from the metrics may cause the states of the datacenters/platforms to match the deployment's unobserved variables, allowing stability analysis before failure. For example, the metrics may be associated with a portion of the deployment on one of the multiple datacenters. The stability analysis module may compare the received metrics with model metrics derived from a model of the multi-datacenter deployment to determine the stability of the deployment and/or adjust the model for increased stability.
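The metric comparison at the heart of this can be sketched as a relative-drift check: each received metric is compared with its model-derived counterpart, and components drifting beyond a tolerance are flagged before the deployment actually fails. Metric names and the tolerance are illustrative assumptions.

```python
def stability_check(observed, model, tol=0.1):
    """Compare received deployment metrics with model-derived metrics.

    observed: metric name -> measured value from tagged metrics.
    model:    metric name -> value predicted by the deployment model.
    Returns (stable, drifting) where drifting maps each metric whose
    relative deviation exceeds tol to its absolute deviation.
    """
    drifting = {}
    for key, expected in model.items():
        if key in observed:
            delta = abs(observed[key] - expected)
            if delta > tol * abs(expected):
                drifting[key] = delta
    return len(drifting) == 0, drifting
```

A flagged metric could then drive either of the responses the abstract names: declare the deployment unstable, or feed the deviation back to adjust the model.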