Abstract:
Embodiments include receiving configuration information including a match criterion for packets received at a network device in a network and a pool of layer 3 addresses associated with a set of servers in the network, resolving layer 2 destination addresses based on the layer 3 addresses of the servers, and programming a hardware layer of the network device based, at least in part, on the match criterion, the pool of layer 3 addresses, and the layer 2 destination addresses. Specific embodiments include configuring a policy to indicate that packets from an external source are to be forwarded to a server of the set of servers. Further embodiments include receiving a packet at the network device, and matching the packet to the pool of layer 3 addresses and the resolved layer 2 addresses based, at least in part, on the match criterion programmed in the hardware layer.
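A minimal control-plane sketch of the steps this abstract describes, under assumed names: a match criterion and a pool of layer 3 server addresses arrive as configuration, layer 2 destination addresses are resolved from an assumed ARP-style table, and the resulting entries stand in for what would be programmed into the hardware layer. Nothing here is taken from the disclosure itself.

```python
# Hypothetical, pre-populated layer 3 -> layer 2 mapping used for resolution.
ARP_TABLE = {
    "10.0.0.11": "00:11:22:33:44:01",
    "10.0.0.12": "00:11:22:33:44:02",
}

def program_hardware(match_criterion, server_ips, arp_table=ARP_TABLE):
    """Build the entries that would be pushed to the device's hardware layer."""
    entries = []
    for ip in server_ips:
        mac = arp_table.get(ip)          # resolve the layer 2 destination address
        if mac is None:
            continue                     # unresolved servers are skipped in this sketch
        entries.append({
            "match": match_criterion,    # e.g. a destination VIP / protocol criterion
            "rewrite_dst_ip": ip,        # layer 3 address drawn from the pool
            "rewrite_dst_mac": mac,      # resolved layer 2 destination address
        })
    return entries                       # stand-in for a hardware programming call

if __name__ == "__main__":
    for e in program_hardware({"dst_ip": "192.0.2.10"}, ["10.0.0.11", "10.0.0.12"]):
        print(e)
```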
Abstract:
In an example, a network switch is configured to natively act as a high-speed load balancer. Numerous load-balancing techniques may be used, including one that bases the traffic “bucket” on a source IP address of an incoming packet. This particular technique gives a network administrator a powerful tool for shaping network traffic. For example, by assigning particular IP addresses to certain classes of computers on the network, the network administrator can ensure that traffic is load balanced in a desirable fashion. To further increase flexibility, the network administrator may apply a bit mask to the IP address, exposing only a portion selected from a desired octet of the address.
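An illustrative sketch of the bucketing idea, assuming a simple bit mask over one octet of the source IP address; the octet, mask, and bucket count are example values, not the claimed encoding.

```python
import ipaddress

def bucket_for_source(src_ip, octet=3, mask=0x07, num_buckets=8):
    """Select one of `num_buckets` by masking a chosen octet of the source IP."""
    packed = ipaddress.ip_address(src_ip).packed   # 4 raw bytes for IPv4
    return (packed[octet] & mask) % num_buckets

print(bucket_for_source("192.168.1.37"))   # 37 & 0x07 -> bucket 5
print(bucket_for_source("192.168.1.38"))   # 38 & 0x07 -> bucket 6
```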
Abstract:
A network apparatus for providing native load balancing within a switch, including: a first network interface operable to communicatively couple to a first network; a plurality of second network interfaces operable to communicatively couple to a second network; one or more logic elements providing a switching engine operable for providing network switching or routing; and one or more logic elements providing a load balancing engine operable for: load balancing network traffic among a plurality of service nodes; probing a first service node; determining that the first service node is unavailable; and reassigning the buckets associated with the first service node to a next available service node.
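A hedged sketch of the reassignment step only: when a service node is determined to be unavailable, the traffic buckets it owned are moved to the next available node. The bucket map, node names, and round-robin choice are illustrative assumptions.

```python
def reassign_buckets(bucket_map, failed_node, available_nodes):
    """Return a new bucket->node map with the failed node's buckets moved."""
    if not available_nodes:
        raise RuntimeError("no available service nodes")
    new_map = {}
    i = 0
    for bucket, node in bucket_map.items():
        if node == failed_node:
            node = available_nodes[i % len(available_nodes)]  # next available node
            i += 1
        new_map[bucket] = node
    return new_map

buckets = {0: "node-a", 1: "node-b", 2: "node-a", 3: "node-c"}
print(reassign_buckets(buckets, "node-a", ["node-b", "node-c"]))
# {0: 'node-b', 1: 'node-b', 2: 'node-c', 3: 'node-c'}
```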
Abstract:
In an example, there is disclosed a network apparatus for providing native load balancing within a switch, including: a first network interface operable to communicatively couple to a first network; a plurality of second network interfaces operable to communicatively couple to a second network; one or more logic elements forming a switching engine operable for providing network switching or routing; and one or more logic elements providing a load balancing engine operable for: load balancing network traffic among a plurality of service nodes; probing a first service node; and determining that the first service node is unavailable.
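A minimal health-probe sketch, assuming a plain TCP connect check stands in for the probe mechanism the abstract refers to; the host and port values are examples only.

```python
import socket

def probe_service_node(host, port, timeout=1.0):
    """Return True if the node answers a TCP connect within `timeout` seconds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

node_up = probe_service_node("10.0.0.11", 80)
print("available" if node_up else "unavailable")
```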
Abstract:
In one embodiment, a packet of data is received at a network element. At least one field is parsed from the packet of data. A forwarding entry is identified from a plurality of forwarding entries based on the at least one field. The forwarding entry of the plurality of forwarding entries is formed by merging information from at least one load balancing entry and at least one access control list (ACL) entry. The packet of data is forwarded through a port of the network element in accordance with the identified forwarding entry.
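An illustrative merge of a load-balancing entry with an ACL entry into one forwarding entry, followed by a lookup keyed on fields parsed from a packet. The field names and the list-based table emulating hardware lookup are assumptions for the sketch.

```python
def merge_entries(lb_entry, acl_entry):
    """Combine match fields from both entries; the ACL supplies permit/deny."""
    return {
        "match": {**lb_entry["match"], **acl_entry["match"]},
        "action": acl_entry["action"],          # e.g. "permit" or "deny"
        "out_port": lb_entry["out_port"],       # port toward the chosen node
    }

def lookup(forwarding_entries, parsed_fields):
    """Return the first forwarding entry whose match fields all hit."""
    for entry in forwarding_entries:
        if all(parsed_fields.get(k) == v for k, v in entry["match"].items()):
            return entry
    return None

lb = {"match": {"dst_ip": "192.0.2.10"}, "out_port": 7}
acl = {"match": {"protocol": "tcp"}, "action": "permit"}
table = [merge_entries(lb, acl)]
packet_fields = {"dst_ip": "192.0.2.10", "protocol": "tcp", "src_ip": "198.51.100.4"}
print(lookup(table, packet_fields))   # forwarded via out_port 7 if permitted
```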
Abstract:
In one embodiment, load balancing criteria and an indication of a plurality of network nodes are received. A plurality of forwarding entries are created based on the load balancing criteria and the indication of the plurality of network nodes. A content addressable memory of a network element is programmed with the plurality of forwarding entries. The network element selectively load balances network traffic by applying the plurality of forwarding entries to the network traffic, wherein network traffic meeting the load balancing criteria is load balanced among the plurality of network nodes.
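A sketch of building forwarding entries from load-balancing criteria and a node list, in the spirit of the abstract; the criteria format and the per-node bucket field are assumptions, and the returned dicts merely stand in for content-addressable-memory programming requests.

```python
def build_forwarding_entries(criteria, nodes):
    """One entry per node: traffic matching `criteria` plus a source-IP bucket."""
    entries = []
    n = len(nodes)
    for i, node in enumerate(nodes):
        entries.append({
            "match": dict(criteria, src_ip_low_bits=i),  # bucket i of n
            "num_buckets": n,
            "redirect_to": node,
        })
    return entries

entries = build_forwarding_entries({"dst_ip": "192.0.2.10", "l4_port": 80},
                                   ["10.0.0.11", "10.0.0.12"])
for e in entries:
    print(e)
```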
Abstract:
In an example, a network switch is configured to operate natively as a load balancer. The switch receives incoming traffic on a first interface communicatively coupled to a first network, and assigns the traffic to one of a plurality of traffic buckets. This may include looking up a destination IP address of an incoming packet in a fast memory such as a ternary content-addressable memory (TCAM) to determine whether the packet is directed to a virtual IP (VIP) address that is to be load balanced. If so, part of the source IP address may be used as a search tag in the TCAM to assign the incoming packet to a traffic bucket or to the IP address of a service node.
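A two-step lookup sketch mirroring this abstract: first check whether the destination IP is a load-balanced VIP, then use low-order bits of the source IP as the bucket key. The dictionaries emulating the TCAM tables, the mask width, and all addresses are assumptions.

```python
import ipaddress

VIP_TABLE = {"192.0.2.10"}                       # VIPs subject to load balancing
BUCKET_TABLE = {0: "10.0.0.11", 1: "10.0.0.12",  # bucket -> service node IP
                2: "10.0.0.13", 3: "10.0.0.14"}

def forward(dst_ip, src_ip):
    """Return the address traffic should be sent to after the two lookups."""
    if dst_ip not in VIP_TABLE:
        return dst_ip                            # not a VIP: forward normally
    src_bits = ipaddress.ip_address(src_ip).packed[3] & 0x03   # search tag
    return BUCKET_TABLE[src_bits]                # service node for this bucket

print(forward("192.0.2.10", "198.51.100.7"))     # 7 & 0x03 -> bucket 3
print(forward("203.0.113.9", "198.51.100.7"))    # not a VIP, unchanged
```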