Reducing batch completion time in a computer network with max-min fairness
    1.
    Granted invention patent (in force)

    Publication No.: US09307013B1

    Publication Date: 2016-04-05

    Application No.: US13905353

    Filing Date: 2013-05-30

    Applicant: Google Inc.

    CPC classification number: G06F9/4843 G06F3/1262 H04L67/1002

    Abstract: The present disclosure describes a system and method for reducing total batch completion time using a max-min fairness process. In some implementations, the max-min fairness process described herein reduces the batch completion time by collectively routing the batches in a way that targets providing the same effective path capacity across all requests. More particularly, given a network shared by batches of flows, total throughput is increased with max-min fairness (and therefore batch completion time decreased) if the nth percentile fastest flow of a batch cannot increase its throughput without decreasing the nth percentile fastest flow of another batch whose throughput is not greater than the throughput of the first batch.
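The max-min criterion stated in this abstract follows the classic progressive-filling idea. Below is a minimal sketch of that textbook procedure for flows sharing a single capacity; the function name and tolerance are illustrative, and this is a generic allocation routine, not the patent's batch-level routing process:

```python
def max_min_allocate(demands, capacity):
    """Progressive filling: raise every unsatisfied flow's allocation by
    the same increment until a flow's demand is met or capacity runs out.
    A textbook max-min fair allocation, sketched for illustration."""
    alloc = {f: 0.0 for f in demands}
    active = {f for f, d in demands.items() if d > 0}
    remaining = float(capacity)
    while active and remaining > 1e-9:
        # Largest uniform increment every active flow can still absorb.
        inc = min(min(demands[f] - alloc[f] for f in active),
                  remaining / len(active))
        for f in active:
            alloc[f] += inc
        remaining -= inc * len(active)
        active = {f for f in active if demands[f] - alloc[f] > 1e-9}
    return alloc
```

For demands of 1, 2, and 5 units sharing a capacity of 6, the procedure yields 1, 2, and 3: no flow can gain throughput without taking it from a flow that has no more, which is the max-min property the abstract invokes.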

    Reducing batch completion time in a computer network with per-destination max-min fairness
    2.
    Granted invention patent (in force)

    Publication No.: US09288146B1

    Publication Date: 2016-03-15

    Application No.: US13903676

    Filing Date: 2013-05-28

    Applicant: Google Inc.

    CPC classification number: H04L47/10 H04L45/125

    Abstract: The present disclosure describes a system and method for reducing total batch completion time using a per-destination max-min fairness scheme. In a distributed computer system, worker nodes often simultaneously return responses to a server node. In some distributed computer systems, multiple batches can traverse a network at any one given time. The nodes of the network are often unaware of the batches other nodes are sending through the network. Accordingly, in some implementations, the different batches encounter different effective path capacities as nodes send flows through links that are or become bottlenecked. The per-destination max-min fairness scheme described herein reduces the total batch completion time by collectively routing the batches in a way that targets providing substantially uniform transmission times without underutilizing the network.

    Logical topology in a dynamic data center network
    3.
    Granted invention patent (in force)

    Publication No.: US09184999B1

    Publication Date: 2015-11-10

    Application No.: US13872626

    Filing Date: 2013-04-29

    Applicant: Google Inc.

    CPC classification number: H04L41/0823 H04L41/12 H04L41/145

    Abstract: A system for configuring a network topology in a data center is disclosed. The data center includes nodes having ports capable of supporting data links that can be connected to other nodes. The system includes a memory and a processing unit coupled to the memory. The processing unit receives demand information indicative of demands between nodes. The processing unit determines a set of constraints on the network topology based on the nodes, feasible data links between the nodes, and the demand information. The processing unit determines an objective function based on a sum of data throughput across data links satisfying demands. The processing unit performs an optimization of the objective function subject to the set of constraints using a linear program. The processing unit configures the network topology by establishing data links between the nodes according to results of the optimization.
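The pipeline in this abstract (throughput variables over feasible links, capacity and demand constraints, an objective summing throughput) can be sketched as linear-program assembly. The variable and constraint layout below is an illustrative assumption, no solver is invoked, and this is not the patent's exact formulation:

```python
def build_topology_lp(feasible_links, demands, link_capacity=1.0):
    """Assemble (not solve) an LP in the spirit of the abstract:
    one throughput variable per (feasible link, demand) pair,
    per-link capacity constraints, per-demand upper bounds, and
    an objective that sums throughput. Layout is illustrative."""
    variables = [(link, dem) for link in feasible_links for dem in demands]
    objective = {v: 1.0 for v in variables}  # maximize total throughput
    constraints = []
    for link in feasible_links:
        # Traffic routed over a link must fit within its capacity.
        row = {v: 1.0 for v in variables if v[0] == link}
        constraints.append((row, "<=", link_capacity))
    for dem, amount in demands.items():
        # Throughput carried for a demand must not exceed the demand.
        row = {v: 1.0 for v in variables if v[1] == dem}
        constraints.append((row, "<=", amount))
    return variables, objective, constraints
```

The resulting matrices would be handed to any LP solver; the abstract's system additionally derives link establishment (the topology itself) from the optimization result.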

    Weighted load balancing in a multistage network using hierarchical ECMP
    4.

    Publication No.: US09716658B1

    Publication Date: 2017-07-25

    Application No.: US14539796

    Filing Date: 2014-11-12

    Applicant: GOOGLE INC.

    CPC classification number: H04L47/125 H04L45/24 H04L45/7453

    Abstract: A method for weighted routing of data traffic can include generating a first hash value based on a header of a data packet and performing a lookup in a first ECMP table using the first hash value to select a secondary ECMP table from at least two secondary un-weighted ECMP tables, the first ECMP table including a weighted listing of the at least two secondary un-weighted ECMP tables. The method can also include generating a second hash value based on the header of the data packet and performing a lookup in the selected secondary ECMP table based on the second hash value to select an egress port of a plurality of egress ports of the data switch and forwarding the data packet on the selected egress port.
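The two-stage lookup this abstract describes can be sketched as follows. Here the primary table encodes weights by repetition and each secondary table is a plain port list; the table encoding, hash function, and salts are illustrative assumptions, not the patent's exact hardware layout:

```python
import zlib

def _stage_hash(header, salt):
    # Stand-in for the switch's hardware hash; any uniform hash works here.
    return zlib.crc32(salt + header)

def hierarchical_ecmp_select(header, primary_table, secondary_tables):
    """Two-stage ECMP sketch: a first header hash indexes the weighted
    primary table to pick a secondary-table id; a second, independent
    hash indexes that unweighted secondary table to pick the egress
    port. Illustrative, not the patent's exact encoding."""
    table_id = primary_table[_stage_hash(header, b"s1") % len(primary_table)]
    ports = secondary_tables[table_id]
    return ports[_stage_hash(header, b"s2") % len(ports)]
```

Listing one table id three times and the other once, for example, steers roughly three quarters of flows through the first table's ports while keeping each per-packet decision deterministic for its flow.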

    Systems and methods for routing data through data centers using an indirect generalized hypercube network
    5.

    Publication No.: US09705798B1

    Publication Date: 2017-07-11

    Application No.: US14149469

    Filing Date: 2014-01-07

    Applicant: Google Inc.

    CPC classification number: H04L47/122 H04L43/0888 H04L45/22

    Abstract: Aspects and implementations of the present disclosure are directed to an indirect generalized hypercube network in a data center. Servers in the data center participate in both an over-subscribed fat tree network hierarchy culminating in a gateway connection to external networks and in an indirect hypercube network interconnecting a plurality of servers in the fat tree. The participant servers have multiple network interface ports, including at least one port for a link to an edge layer network device of the fat tree and at least one port for a link to a peer server in the indirect hypercube network. Servers are grouped by edge layer network device to form virtual switches in the indirect hypercube network and data packets are routed between servers using routes through the virtual switches. Routes leverage properties of the hypercube topology. Participant servers function as destination points and as virtual interfaces for the virtual switches.
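The "properties of the hypercube topology" that routes leverage include the standard dimension-order rule: each hop corrects one differing bit of the virtual-switch address. A minimal sketch for a binary hypercube follows; the patent's generalized (non-binary, indirect) hypercube admits richer routes, so this is illustrative only:

```python
def hypercube_route(src, dst, dim):
    """Dimension-order routing between two virtual-switch addresses in
    a binary dim-cube: flip one differing address bit per hop, so the
    path length equals the Hamming distance between the addresses."""
    path, cur = [src], src
    for bit in range(dim):
        if (cur ^ dst) >> bit & 1:
            cur ^= 1 << bit
            path.append(cur)
    return path
```

In the abstract's scheme each address would name a virtual switch (a group of servers under one edge device), with participant servers acting as the virtual interfaces along such a path.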

    Weighted load balancing in a multistage network using hierarchical ECMP
    6.
    Granted invention patent (in force)

    Publication No.: US09571400B1

    Publication Date: 2017-02-14

    Application No.: US14217937

    Filing Date: 2014-03-18

    Applicant: GOOGLE INC.

    CPC classification number: H04L47/125 H04L45/24 H04L45/7453

    Abstract: A method for weighted routing of data traffic can include generating a first hash value based on a header of a data packet and performing a lookup in a first equal cost multi-path (ECMP) table using the first hash value to select a secondary ECMP table. The first ECMP table can include a weighted listing of at least two secondary ECMP tables. The method can further include generating a second hash value based on the header of the data packet and performing a lookup in the selected secondary ECMP table based on the second hash value to select an egress port of a plurality of egress ports of the data switch. The method can further include forwarding the data packet on the selected egress port.

    Logical topology in a dynamic data center network
    7.

    Publication No.: US09197509B1

    Publication Date: 2015-11-24

    Application No.: US13872630

    Filing Date: 2013-04-29

    Applicant: Google Inc.

    CPC classification number: H04L41/0823 H04L41/12 H04L41/145

    Abstract: A system for configuring a network topology in a data center is disclosed. The data center includes nodes having ports capable of supporting data links that can be connected to other nodes. The system includes a memory and a processing unit coupled to the memory. The processing unit receives demand information indicative of demands between nodes. The processing unit determines a set of constraints on the network topology based on the nodes, feasible data links between the nodes, and the demand information. The processing unit determines an objective function based on a sum of data throughput across data links satisfying demands. The processing unit performs an optimization of the objective function subject to the set of constraints using a linear program. The processing unit configures the network topology by establishing data links between the nodes according to results of the optimization.

    Systems and methods for routing data through data centers using an indirect generalized hypercube network
    8.

    Publication No.: US09929960B1

    Publication Date: 2018-03-27

    Application No.: US15609847

    Filing Date: 2017-05-31

    Applicant: Google Inc.

    CPC classification number: H04L47/122 H04L43/0888 H04L45/22

    Abstract: Aspects and implementations of the present disclosure are directed to an indirect generalized hypercube network in a computer network facility. Servers in the computer network facility participate in both an over-subscribed fat tree network hierarchy culminating in a gateway connection to external networks and in an indirect hypercube network interconnecting a plurality of servers in the fat tree. The participant servers have multiple network interface ports, including at least one port for a link to an edge layer network device of the fat tree and at least one port for a link to a peer server in the indirect hypercube network. Servers are grouped by edge layer network device to form virtual switches in the indirect hypercube network and data packets are routed between servers using routes through the virtual switches. Routes leverage properties of the hypercube topology. Participant servers function as destination points and as virtual interfaces for the virtual switches.

    Weighted load balancing using scaled parallel hashing
    9.

    Publication No.: US20170149877A1

    Publication Date: 2017-05-25

    Application No.: US15396512

    Filing Date: 2016-12-31

    Applicant: Google Inc.

    Abstract: A method for weighted data traffic routing can include receiving a data packet at a data switch, where the data switch includes a plurality of egress ports. The method can also include, for each of the egress ports, generating an independent hash value based on one or more fields of the data packet and generating a weighted hash value by scaling the hash value using a scaling factor. The scaling factor can be based on at least two traffic routing weights of a plurality of respective traffic routing weights associated with the plurality of egress ports. The method can further include selecting an egress port of the plurality of egress ports based on the weighted hash value for each of the egress ports and transmitting the data packet using the selected egress port.
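The per-port independent-hash-then-scale scheme described here resembles weighted rendezvous (highest-random-weight) hashing. The sketch below uses the rendezvous-style scaling -w / ln(h) as a stand-in; the patent's exact scaling factor, derived from pairs of routing weights, is not reproduced here:

```python
import hashlib
import math

def weighted_port_select(header, port_weights):
    """One independent hash per egress port, scaled by that port's
    routing weight; the largest scaled value wins. The -w/ln(h)
    scaling is the weighted rendezvous-hashing form, assumed here
    in place of the patent's exact scaling function."""
    best_port, best_score = None, float("-inf")
    for port, weight in port_weights.items():
        digest = hashlib.sha256(header + str(port).encode()).digest()
        h = (int.from_bytes(digest[:8], "big") + 1) / (2**64 + 1)  # (0, 1]
        score = float("inf") if h >= 1.0 else -weight / math.log(h)
        if score > best_score:
            best_port, best_score = port, score
    return best_port
```

Because each port's hash is independent of the others, adding or removing a port only remaps the flows that port would have won, which is the usual motivation for this family of schemes.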

    Weighted load balancing using scaled parallel hashing
    10.
    Granted invention patent (in force)

    Publication No.: US09565114B1

    Publication Date: 2017-02-07

    Application No.: US14217921

    Filing Date: 2014-03-18

    Applicant: GOOGLE INC.

    Abstract: A method for weighted data traffic routing can include receiving a data packet at a data switch, where the data switch includes a plurality of egress ports. The method can also include, for each of the egress ports, generating an independent hash value based on one or more fields of the data packet and generating a weighted hash value by scaling the hash value using a scaling factor. The scaling factor can be based on at least two traffic routing weights of a plurality of respective traffic routing weights associated with the plurality of egress ports. The method can further include selecting an egress port of the plurality of egress ports based on the weighted hash value for each of the egress ports and transmitting the data packet using the selected egress port.
