Probabilistic distance-based arbitration
    1.
    Invention grant (in force)

    Publication number: US09391871B1

    Publication date: 2016-07-12

    Application number: US14254996

    Filing date: 2014-04-17

    Applicant: Google Inc.

    Abstract: Probabilistic arbitration is combined with distance-based weights to achieve equality of service in interconnection networks, such as those used with chip multiprocessors. The arbitration desirably incorporates nonlinear weights that are assigned to requests. The nonlinear weights incorporate different arbitration weight metrics, namely fixed weight, constantly increasing weight, and variably increasing weight. Probabilistic arbitration for an on-chip router avoids the need for additional buffers or virtual channels, creating a simple, low-cost mechanism for achieving equality of service. The nonlinearly weighted probabilistic arbitration provides additional benefits, such as quality-of-service features and fairness, in terms of both throughput and latency, that approaches the global fairness achieved with age-based arbitration. This yields a more stable network by sustaining high throughput beyond saturation. Each router or switch in the network may include an arbiter to apply the weighted probabilistic arbitration.
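
    A minimal sketch of how such a weighted probabilistic arbiter might grant requests, assuming each request carries the hop distance its packet has already traveled and using a simple power-law weight as a stand-in for the patent's nonlinear weight metrics (the weight function, field layout, and exponent below are illustrative assumptions, not the patented formulas):

        import random

        def arbitrate(requests, exponent=2.0):
            """Grant one request per cycle with probability proportional to a
            nonlinear, distance-based weight (illustrative sketch only)."""
            # requests: list of (request_id, hops_traveled) pairs
            weights = [max(hops, 1) ** exponent for _, hops in requests]
            winner_id, _ = random.choices(requests, weights=weights, k=1)[0]
            return winner_id

        # Example: a request that has traveled 6 hops is granted far more often
        # than a locally injected one, approximating equality of service.
        print(arbitrate([("local", 1), ("remote", 6)]))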


Systems and methods for energy proportional multiprocessor networks
    2.
    Invention grant (in force)

    Publication number: US08806244B1

    Publication date: 2014-08-12

    Application number: US14084054

    Filing date: 2013-11-19

    Applicant: Google Inc.

    Abstract: Energy proportional solutions are provided for computer networks such as datacenters. Congestion sensing heuristics are used to adaptively route traffic across links. Traffic intensity is sensed, and links are dynamically activated as they are needed. As the offered load decreases, the lower channel utilization is sensed and the link speed is reduced to save power. Flattened butterfly topologies can be used as a further power-saving approach. Switch mechanisms exploit the topology's capabilities by reconfiguring link speeds on the fly to match bandwidth and power with the traffic demand. For instance, the system may estimate the future bandwidth needs of each link and reconfigure its data rate to meet those requirements while consuming less power. In one configuration, a mechanism is provided in which the switch tracks the utilization of each of its links over an epoch and then makes an adjustment at the end of the epoch.
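
    As an illustration of the epoch-based mechanism described in the last sentence, the sketch below counts bytes sent on a link during an epoch and, at the end of the epoch, selects the slowest supported data rate that still covers the measured demand. The rate ladder and headroom margin are assumptions made for illustration, not values from the patent:

        # Assumed discrete link rates (Gb/s) and an assumed safety margin over measured demand.
        SUPPORTED_GBPS = [2.5, 10.0, 40.0]
        HEADROOM = 1.25

        def end_of_epoch_rate(bytes_sent, epoch_seconds):
            """Pick the lowest link speed that meets projected demand for the next epoch."""
            demand_gbps = (bytes_sent * 8) / (epoch_seconds * 1e9)
            for rate in SUPPORTED_GBPS:
                if rate >= demand_gbps * HEADROOM:
                    return rate
            return SUPPORTED_GBPS[-1]   # saturated: run the link at full speed

        # Example: ~3 Gb/s of measured traffic steps the link down to the 10 Gb/s rate.
        print(end_of_epoch_rate(bytes_sent=375_000_000, epoch_seconds=1))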


    DEDICATED-CORE COMPUTER HARDWARE COMPONENT
    3.
    Invention application

    Publication number: US20180191623A1

    Publication date: 2018-07-05

    Application number: US15393529

    Filing date: 2016-12-29

    Applicant: Google Inc.

    Abstract: A computing system dedicates one or more processing units, such as cores, to packet processing software, while other processing units simultaneously run application software. In some examples, the system uses dynamic load information to dynamically increase and decrease the number of processing units dedicated to packet processing. The system may further include a mechanism for establishing shared-memory regions for interacting with other applications' users. The shared-memory mechanisms provide an abstraction of per-application "command" and "completion" queues. The system may poll per-application command queues to detect the arrival of new requests. The mechanism also provides detection of application termination, as well as an ability for an application to expose portions of its address space for the reception and transmission of data. In some examples, the system further includes a framework for executing software-defined handlers inline with the threads that run packet processing and transport software.
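
    A rough sketch of the per-application command/completion queue polling described above, using ordinary Python queues in place of shared-memory rings and omitting core pinning; the queue layout and the packet handler are assumptions made for illustration:

        from queue import Queue, Empty

        def handle_packet(request):
            # Placeholder for the packet-processing / transport work done on the dedicated core.
            return ("completed", request)

        def dedicated_core_loop(apps, iterations=1000):
            """Poll each application's command queue and post results to its completion queue.
            apps: dict of app_id -> (command_queue, completion_queue)."""
            for _ in range(iterations):
                for app_id, (cmd_q, done_q) in apps.items():
                    try:
                        request = cmd_q.get_nowait()   # detect newly arrived requests
                    except Empty:
                        continue
                    done_q.put(handle_packet(request))

        # Example: one application submits a request and later reads the completion.
        cmd, done = Queue(), Queue()
        cmd.put("send 1500-byte frame")
        dedicated_core_loop({"app0": (cmd, done)})
        print(done.get_nowait())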

Transparent upgrade of a system service or application
    4.

    Publication number: US10261780B2

    Publication date: 2019-04-16

    Application number: US15583849

    Filing date: 2017-05-01

    Applicant: Google Inc.

    Abstract: Systems and methods for updating an application without a restart are provided. A processor can start a second application instance while a first application instance is still executing. The first application instance can transfer a first set of state information to the second application instance. The second application instance can declare its readiness for activation in response to completion of the transfer. The first application instance can deactivate in response to the declaration. Deactivation includes transferring a second set of state information from the first application instance to the second application instance and releasing single-access resources. The second application instance can activate. Activation includes receiving the second set of state information, and accessing the single-access resources. The second application instance can declare that activation is complete in response to completion of the activation. The first application instance can terminate in response to the declaration.
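
    The handoff described in this abstract can be sketched as a simple in-process sequence; the class, the two state transfers, and the "listen socket" standing in for a single-access resource are illustrative assumptions rather than the patented implementation:

        class AppInstance:
            def __init__(self, version):
                self.version = version
                self.state = {}
                self.owns_listen_socket = False    # example of a single-access resource

        def transparent_upgrade(old, new):
            # New instance starts while the old one is still serving.
            new.state.update(old.state)            # transfer the first set of state
            ready_for_activation = True            # new instance declares readiness
            if ready_for_activation:
                late_state = {"in_flight_requests": list(old.state.get("in_flight_requests", []))}
                old.owns_listen_socket = False     # old deactivates: release single-access resources...
                new.state.update(late_state)       # ...and hand over the second set of state
                new.owns_listen_socket = True      # new activates and acquires the resources
                old.state.clear()                  # activation declared complete; old instance terminates
            return new

        old, new = AppInstance("v1"), AppInstance("v2")
        old.state = {"sessions": 3, "in_flight_requests": ["req-42"]}
        old.owns_listen_socket = True
        upgraded = transparent_upgrade(old, new)
        print(upgraded.version, upgraded.state, upgraded.owns_listen_socket)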

TRANSPARENT UPGRADE OF A SYSTEM SERVICE OR APPLICATION
    5.

    Publication number: US20180314515A1

    Publication date: 2018-11-01

    Application number: US15583849

    Filing date: 2017-05-01

    Applicant: Google Inc.

    CPC classification number: G06F8/656 H04L67/34

    Abstract: Systems and methods for updating an application without a restart are provided. A processor can start a second application instance while a first application instance is still executing. The first application instance can transfer a first set of state information to the second application instance. The second application instance can declare its readiness for activation in response to completion of the transfer. The first application instance can deactivate in response to the declaration. Deactivation includes transferring a second set of state information from the first application instance to the second application instance and releasing single-access resources. The second application instance can activate. Activation includes receiving the second set of state information, and accessing the single-access resources. The second application instance can declare that activation is complete in response to completion of the activation. The first application instance can terminate in response to the declaration.
