Managing data center resources to achieve a quality of service

    Publication No.: US10554505B2

    Publication Date: 2020-02-04

    Application No.: US13630545

    Filing Date: 2012-09-28

    Abstract: In accordance with some embodiments, a cloud service provider may operate a data center in a way that dynamically reallocates resources across nodes within the data center based on both utilization and service level agreements. In other words, the allocation of resources may be adjusted dynamically based on current conditions. The current conditions in the data center may be a function of the nature of all the current workloads. Instead of simply managing the workloads in a way that increases overall execution efficiency, the data center may manage the workloads to achieve quality of service requirements for particular workloads according to service level agreements.
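
    A minimal sketch in plain C of the idea described above, assuming an invented workload structure, latency-based SLA targets, and unit-based allocation; it only illustrates adjusting per-workload resources from current utilization and SLA status rather than for raw efficiency, and is not the patented mechanism:

    /* Hypothetical sketch: periodically rebalance resources so workloads with
     * service level agreements (SLAs) receive enough capacity, instead of
     * optimizing overall efficiency. Types and thresholds are invented. */
    #include <stdio.h>

    struct workload {
        const char *name;
        double utilization;    /* fraction of allocated capacity in use */
        double sla_latency_ms; /* latency target from the SLA */
        double latency_ms;     /* currently measured latency */
        int    allocated_units;
    };

    static void rebalance(struct workload *w, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            if (w[i].latency_ms > w[i].sla_latency_ms) {
                /* SLA at risk: grant more resources regardless of efficiency. */
                w[i].allocated_units++;
            } else if (w[i].utilization < 0.5 && w[i].allocated_units > 1) {
                /* Comfortably within SLA and under-utilized: reclaim capacity. */
                w[i].allocated_units--;
            }
            printf("%s -> %d units\n", w[i].name, w[i].allocated_units);
        }
    }

    int main(void)
    {
        struct workload nodes[] = {
            { "analytics", 0.40, 100.0, 60.0, 4 },
            { "web-tier",  0.95,  20.0, 35.0, 2 },
        };
        rebalance(nodes, 2);
        return 0;
    }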

    Cache-aware adaptive thread scheduling and migration

    Publication No.: US10339023B2

    Publication Date: 2019-07-02

    Application No.: US14496216

    Filing Date: 2014-09-25

    Abstract: In one embodiment, a processor includes: a plurality of cores each to independently execute instructions; a shared cache memory coupled to the plurality of cores and having a plurality of clusters each associated with one or more of the plurality of cores; a plurality of cache activity monitors each associated with one of the plurality of clusters, where each cache activity monitor is to monitor one or more performance metrics of the corresponding cluster and to output cache metric information; a plurality of thermal sensors each associated with one of the plurality of clusters and to output thermal information; and a logic coupled to the plurality of cores to receive the cache metric information from the plurality of cache activity monitors and the thermal information and to schedule one or more threads to a selected core based at least in part on the cache metric information and the thermal information for the cluster associated with the selected core. Other embodiments are described and claimed.
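
    A rough sketch in C of how a scheduler might combine per-cluster cache metrics with thermal readings when choosing a destination core; the structures, weights, and scoring rule are assumptions for illustration only, not the claimed logic:

    #include <stddef.h>
    #include <stdio.h>

    struct cluster_info {
        int    first_core;      /* first core id belonging to this cluster */
        double cache_miss_rate; /* from the cluster's cache activity monitor */
        double temperature_c;   /* from the cluster's thermal sensor */
    };

    /* Lower score is better: prefer cool clusters with low miss rates. */
    static double score(const struct cluster_info *c)
    {
        return 0.6 * c->cache_miss_rate + 0.4 * (c->temperature_c / 100.0);
    }

    static int pick_core(const struct cluster_info *clusters, size_t n)
    {
        size_t best = 0;
        for (size_t i = 1; i < n; i++)
            if (score(&clusters[i]) < score(&clusters[best]))
                best = i;
        return clusters[best].first_core;
    }

    int main(void)
    {
        struct cluster_info clusters[] = {
            { 0, 0.12, 78.0 },
            { 4, 0.30, 55.0 },
            { 8, 0.10, 62.0 },
        };
        printf("schedule thread on core %d\n", pick_core(clusters, 3));
        return 0;
    }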

    Processors and methods for managing cache tiering with gather-scatter vector semantics

    Publication No.: US10268580B2

    Publication Date: 2019-04-23

    Application No.: US15282483

    Filing Date: 2016-09-30

    Abstract: Processors and methods implementing a machine instruction to perform cache line demotion on multiple cache lines, enabling efficient sharing of cache lines between processor cores. One general aspect includes a processor comprising a plurality of hardware processor cores, each of which includes a first cache. The processor also includes a second cache communicatively coupled to and shared by the plurality of hardware processor cores. The processor supports a first machine instruction that includes a vector register operand identifying a vector register, which contains a plurality of data elements each used to identify a cache line. Execution of the first machine instruction by one of the hardware processor cores causes the identified cache lines to be demoted, such that the demoted cache lines are moved from the first cache to the second cache.
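
    The vector demote operation described here is not exposed as a compiler intrinsic, so the sketch below approximates it in C by walking an array of addresses and issuing the scalar CLDEMOTE hint for each one (_mm_cldemote, available when compiling with -mcldemote); the loop and fallback are assumptions for illustration:

    #include <stddef.h>

    #if defined(__CLDEMOTE__)
    #include <immintrin.h>
    #define DEMOTE(p) _mm_cldemote(p)
    #else
    #define DEMOTE(p) ((void)(p)) /* no-op where CLDEMOTE is unavailable */
    #endif

    /* Demote every cache line identified by the address elements, hinting that
     * the lines move from the core-private first cache toward the shared cache
     * so another core can read them with less coherence traffic. */
    static void demote_lines(const void **addrs, size_t count)
    {
        for (size_t i = 0; i < count; i++)
            DEMOTE(addrs[i]);
    }

    int main(void)
    {
        int shared_data[32] = { 0 };
        const void *lines[] = { &shared_data[0], &shared_data[16] };
        demote_lines(lines, 2);
        return 0;
    }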

    Timing compensation for a timestamp counter
    Invention Application

    Publication No.: US20190042295A1

    Publication Date: 2019-02-07

    Application No.: US16024554

    Filing Date: 2018-06-29

    Abstract: Particular embodiments described herein provide for an electronic device that can be configured to receive a request for a timestamp associated with a virtual machine, determine a current time from a timestamp counter, and subtract a timing compensation from the current time from the timestamp counter to create the timestamp, where the timing compensation includes an amount of time that execution of the virtual machine was suspended. In an example, a VM_EXIT instruction was used to suspend execution of the virtual machine and the timestamp counter was read before the VM_EXIT instruction was processed by a hypervisor.
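
    A simplified sketch in C of the compensation step: subtract the accumulated time the virtual machine spent suspended from the raw counter before handing the timestamp to the guest. The names (vm_state, suspended_cycles) and the stubbed counter read standing in for RDTSC are invented for illustration:

    #include <stdint.h>
    #include <stdio.h>

    struct vm_state {
        uint64_t suspended_cycles; /* total cycles execution was suspended */
    };

    /* Stand-in for reading the hardware timestamp counter (e.g. RDTSC). */
    static uint64_t read_timestamp_counter(void) { return 1000000; }

    static uint64_t guest_timestamp(const struct vm_state *vm)
    {
        /* Subtract the timing compensation so the guest does not observe
         * time spent suspended while the hypervisor handled VM exits. */
        return read_timestamp_counter() - vm->suspended_cycles;
    }

    int main(void)
    {
        struct vm_state vm = { .suspended_cycles = 2500 };
        printf("guest timestamp: %llu\n",
               (unsigned long long)guest_timestamp(&vm));
        return 0;
    }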
