Abstract:
In an example, a network switch is configured to natively act as a high-speed load balancer. Numerous load-balancing techniques may be used, including one that bases the traffic “bucket” on the source IP address of an incoming packet. This technique gives a network administrator a powerful tool for shaping network traffic. For example, by assigning certain classes of computers on the network particular IP addresses, the network administrator can ensure that traffic is load balanced in a desirable fashion. To further increase flexibility, the network administrator may apply a bit mask to the IP address, exposing only a portion of it selected from a desired octet of the address.
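As a rough Python sketch of the bucketing idea described above (not the switch's actual hardware logic), the helper below derives a bucket index from one masked octet of the source IP address; the octet choice, mask value, bucket count, and function name are illustrative assumptions.

    import ipaddress

    def bucket_for_source_ip(src_ip: str, octet: int = 3, mask: int = 0x0F, num_buckets: int = 4) -> int:
        """Select a load-balancing bucket from one octet of the source IP.

        octet: which octet of the IPv4 address to inspect (0 = most significant).
        mask:  bit mask applied to that octet so only a portion of its bits
               contributes to the bucket choice.
        """
        addr = ipaddress.IPv4Address(src_ip)
        selected_octet = addr.packed[octet]   # raw byte of the chosen octet
        masked = selected_octet & mask        # expose only the masked bits
        return masked % num_buckets           # map the masked value to a bucket

    # Hosts given addresses in distinct low-octet ranges land in predictable
    # buckets, which lets an administrator shape the distribution.
    for ip in ("10.0.0.17", "10.0.0.18", "10.0.0.33"):
        print(ip, "->", bucket_for_source_ip(ip))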
Abstract:
In one embodiment, a network element that performs network traffic bridging receives load balancing criteria comprising an indication of at least one transport layer port number and an indication of a plurality of network nodes. A plurality of forwarding entries are created based on the load balancing criteria. A forwarding entry specifies the at least one transport layer port number and a network node of the plurality of network nodes. The network element applies the plurality of forwarding entries to network traffic to load balance, among the plurality of network nodes, network traffic that matches the at least one transport layer port number.
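The following Python sketch illustrates, under assumed data structures, how forwarding entries keyed on a transport-layer port might be built and consulted; the ForwardingEntry class, the flow-hash selection, and the node names are hypothetical and not taken from the disclosure.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ForwardingEntry:
        # Hypothetical shape of a forwarding entry: traffic whose destination
        # transport-layer port matches `port` may be forwarded to `node`.
        port: int
        node: str

    def build_forwarding_entries(ports, nodes):
        """Create a forwarding entry for each (port, node) pair, so traffic that
        matches a configured transport-layer port can be spread across all of
        the listed network nodes."""
        return [ForwardingEntry(port=p, node=n) for p in ports for n in nodes]

    def select_node(entries, dst_port, flow_hash):
        """Pick the node for a packet: collect the entries that match the
        packet's destination port, then hash the flow onto one of them."""
        matches = [e for e in entries if e.port == dst_port]
        if not matches:
            return None                      # traffic not subject to load balancing
        return matches[flow_hash % len(matches)].node

    entries = build_forwarding_entries(ports=[80, 443], nodes=["node-a", "node-b", "node-c"])
    print(select_node(entries, dst_port=443, flow_hash=7))   # one of the three nodes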
Abstract:
In an example, there is disclosed a network apparatus for providing native load balancing, including: a first network interface to communicatively couple to a first network; a plurality of second network interfaces to communicatively couple to a second network; one or more logic elements providing a switching engine to provide network switching or routing; and one or more logic elements, including at least one hardware logic element, providing a load balancing engine to: load balance network traffic among a plurality of service nodes; probe a service node with a first probe for a first service; and probe the service node with a second probe for a second service, the second probe different in kind from the first probe.
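To illustrate the notion of probes that differ in kind, here is a minimal Python sketch in which the same service node is checked with a TCP-connect probe for one service and an HTTP probe for another; the ports, URL path, and helper names are assumptions.

    import socket
    from urllib.request import urlopen

    def tcp_probe(host: str, port: int, timeout: float = 2.0) -> bool:
        """First probe kind: verify that the service node accepts a TCP connection."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def http_probe(url: str, timeout: float = 2.0) -> bool:
        """Second probe kind: verify that an HTTP service answers with a 2xx status."""
        try:
            with urlopen(url, timeout=timeout) as resp:
                return 200 <= resp.status < 300
        except OSError:
            return False

    def node_is_healthy(host: str) -> bool:
        # Probe the same service node once per service, each with a probe of a
        # different kind; the port numbers and health path are hypothetical.
        return tcp_probe(host, 3306) and http_probe(f"http://{host}:8080/health")

    # Example (requires a reachable node): node_is_healthy("192.0.2.20")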
Abstract:
The present disclosure relates to providing shared resources to virtual devices on a network switch. In one example, a switch comprises a plurality of virtual device contexts (VDCs) and a default virtual device context (DVDC). The DVDC stores configuration data that identifies a network resource. The DVDC transmits a reference to the configuration data to each of the plurality of VDCs, and each of the plurality of VDCs receives the reference from the DVDC. When the DVDC receives, from at least one of the plurality of VDCs, a request to access the configuration data via the reference, the DVDC transmits at least a portion of the configuration data to the at least one of the plurality of VDCs. That portion of the configuration data is operable to initiate a connection between the at least one of the plurality of VDCs and the network resource.
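A minimal Python sketch of this reference-based sharing pattern, assuming in-process objects stand in for the DVDC and VDCs; the class names, the "shared-storage" key, and the example address are hypothetical.

    class DefaultVDC:
        """Default virtual device context: owns configuration data that
        identifies a shared network resource and hands out references to it."""

        def __init__(self):
            self._config = {"shared-storage": {"address": "192.0.2.10", "port": 2049}}

        def reference_for(self, key: str) -> str:
            # A reference (here simply the key), rather than the data itself,
            # is distributed to the other VDCs.
            return key

        def resolve(self, reference: str) -> dict:
            # Called when a VDC asks to access the configuration via its
            # reference; only the relevant portion is returned.
            return self._config[reference]

    class VDC:
        """A non-default virtual device context that holds only the reference."""

        def __init__(self, name: str, dvdc: DefaultVDC, reference: str):
            self.name, self._dvdc, self._ref = name, dvdc, reference

        def connect(self) -> str:
            cfg = self._dvdc.resolve(self._ref)   # request the config via the reference
            return f"{self.name} connecting to {cfg['address']}:{cfg['port']}"

    dvdc = DefaultVDC()
    ref = dvdc.reference_for("shared-storage")
    vdcs = [VDC(f"vdc{i}", dvdc, ref) for i in range(1, 4)]
    print([v.connect() for v in vdcs])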
Abstract:
A method is disclosed that includes, in a network element that includes one or more hardware memory resources of fixed storage capacity for storing data used to configure a plurality of networking features of the network element, and a utilization management process running on the network element, the utilization management process performing operations including: obtaining a plurality of entries of the one or more hardware memory resources representing utilization of the one or more hardware memory resources by network traffic passing through the network element; sorting the plurality of entries by statistics associated with the network traffic passing through the network element to produce sorted entries; extracting a subset of the sorted entries; and sending the extracted subset to a network management application for display. An apparatus and one or more non-transitory computer readable storage media to execute the method are also provided.
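The sort-and-export step can be pictured with a short Python sketch; the entry fields ("resource", "entry_id", "hits"), the sample values, and the export function are illustrative only.

    from operator import itemgetter

    # Hypothetical snapshot of hardware-table entries, each with a hit counter
    # reflecting how much passing traffic matched the entry.
    entries = [
        {"resource": "acl-tcam", "entry_id": 12, "hits": 48210},
        {"resource": "acl-tcam", "entry_id": 3,  "hits": 91},
        {"resource": "fib-tcam", "entry_id": 7,  "hits": 130455},
        {"resource": "fib-tcam", "entry_id": 21, "hits": 5620},
    ]

    def top_entries(entries, count):
        """Sort the obtained entries by their traffic statistics and extract
        the busiest ones for display by a management application."""
        ordered = sorted(entries, key=itemgetter("hits"), reverse=True)
        return ordered[:count]

    def send_to_management_app(rows):
        # Stand-in for the export step; a real implementation might use SNMP,
        # NETCONF, or a REST call.
        for row in rows:
            print(f"{row['resource']:<9} entry {row['entry_id']:<3} hits={row['hits']}")

    send_to_management_app(top_entries(entries, count=3))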
Abstract:
The present disclosure describes several key features of an agent deployable on a service appliance: agent architecture and design, transport and channel abstractions of the agent, new message definition components, channel switching (e.g., platform-independent processing), a channel state machine, platform-dependent hooks (e.g., memory, timers), a service key data store, and secure channel infrastructure. Many of these features alleviate the vendor of the service appliance from having to provide them. The features, and their standardization, make the system more robust and increase code quality. The time required for integration is decreased, and the risk of integration issues is also decreased. Updates to the agent can be deployed in a controlled and efficient manner. Furthermore, the agent can ensure security between a switch and the agent. The agent, deployed and running on vendor appliances, provides a unique way to present transport channels that run between the switch, the agent, and other service appliance components.
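As one way to picture the channel state machine mentioned above, the Python sketch below drives a channel through assumed states and events; the state names, event names, and transition table are illustrative, not taken from the disclosure.

    from enum import Enum, auto

    class ChannelState(Enum):
        DOWN = auto()
        CONNECTING = auto()
        SECURING = auto()      # securing the channel between switch and agent
        UP = auto()

    # Allowed transitions for the agent's channel state machine.
    TRANSITIONS = {
        (ChannelState.DOWN, "open"): ChannelState.CONNECTING,
        (ChannelState.CONNECTING, "connected"): ChannelState.SECURING,
        (ChannelState.SECURING, "handshake_ok"): ChannelState.UP,
        (ChannelState.CONNECTING, "error"): ChannelState.DOWN,
        (ChannelState.SECURING, "error"): ChannelState.DOWN,
        (ChannelState.UP, "error"): ChannelState.DOWN,
    }

    class Channel:
        """Platform-independent channel object; platform-dependent pieces such
        as timers and memory hooks would be supplied by the agent's host."""

        def __init__(self):
            self.state = ChannelState.DOWN

        def handle(self, event: str) -> ChannelState:
            # Unknown (state, event) pairs leave the channel where it is.
            self.state = TRANSITIONS.get((self.state, event), self.state)
            return self.state

    ch = Channel()
    for event in ("open", "connected", "handshake_ok"):
        print(event, "->", ch.handle(event))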
Abstract:
A method for setting up standby links on a link failure may be provided. The method comprises, for a set of N link ports and M standby link ports, where N and M are integers and N is not equal to M, performing the following functions: determining the status of a link from a first link port of the N link ports; after the link from the first link port has failed, determining when a standby link port from the M standby link ports has been assigned to the first link port; after the standby link port has been assigned, determining the health of the standby link port; and after the standby link port has been assigned and is healthy, redirecting traffic from the first link port to the standby link port.
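A simplified Python sketch of that failover sequence, assuming placeholder callables for the platform-specific checks; the function names, port names, and single-pass structure are hypothetical.

    def monitor_links(active_ports, standby_ports, link_is_up, standby_is_healthy,
                      assign_standby, redirect_traffic):
        """One pass over N active ports with M standby ports available.

        The callables stand in for platform operations:
          link_is_up(port)          -> bool, current status of an active link
          standby_is_healthy(port)  -> bool, health check on a standby link
          assign_standby(port)      -> standby port assigned to a failed port, or None
          redirect_traffic(a, b)    -> move traffic from port a to port b
        """
        assignments = {}
        for port in active_ports:
            if link_is_up(port):
                continue                           # link still good, nothing to do
            standby = assignments.get(port) or assign_standby(port)
            if standby is None or standby not in standby_ports:
                continue                           # no standby assigned yet; check again later
            assignments[port] = standby
            if standby_is_healthy(standby):        # redirect only once the standby is healthy
                redirect_traffic(port, standby)
        return assignments

    # Toy run: port "eth1" has failed and standby "sb1" is healthy.
    print(monitor_links(
        active_ports=["eth0", "eth1"], standby_ports=["sb1", "sb2"],
        link_is_up=lambda p: p != "eth1",
        standby_is_healthy=lambda p: True,
        assign_standby=lambda p: "sb1",
        redirect_traffic=lambda a, b: print(f"redirect {a} -> {b}"),
    ))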
Abstract:
Methods and apparatus for providing one-arm node clustering using a port channel are provided herein. An example application node may be communicatively connected to at least one other application node, and the application node may be connected to a network through a port channel. The application node may include: a link included in the port channel for accommodating network data communicated between a remote client and a server; and a processor configured to send/receive a cluster control packet to/from the at least one other application node through the link included in the port channel.
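The control-packet exchange can be sketched in Python as follows, assuming UDP datagrams on a hypothetical port stand in for cluster control packets carried over a port-channel member link; the class, port number, and message format are assumptions.

    import json
    import socket

    CLUSTER_CONTROL_PORT = 9999   # hypothetical UDP port for cluster control traffic

    class OneArmClusterNode:
        """Sketch of an application node that exchanges cluster control packets
        with its peers over a link that is also a member of the port channel."""

        def __init__(self, node_id: str, local_addr: str):
            self.node_id = node_id
            self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            self.sock.bind((local_addr, CLUSTER_CONTROL_PORT))

        def send_control(self, peer_addr: str, payload: dict) -> None:
            # Cluster control packets share the port-channel link with the
            # client/server data traffic described in the abstract.
            msg = json.dumps({"node": self.node_id, **payload}).encode()
            self.sock.sendto(msg, (peer_addr, CLUSTER_CONTROL_PORT))

        def receive_control(self) -> dict:
            data, _ = self.sock.recvfrom(4096)
            return json.loads(data.decode())

    # Example (requires two hosts): OneArmClusterNode("node-1", "0.0.0.0")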
Abstract:
Methods and apparatuses for automating return traffic redirection to a service appliance by injecting forwarding policies in a packet-forwarding element are disclosed herein. An example method for automating return traffic redirection can include: establishing a communication channel between a service appliance and a packet-forwarding element; and transmitting an out-of-band message over the communication channel to the packet-forwarding element. The message can include a forwarding policy that requests the packet-forwarding element to forward predetermined packets to the service appliance.
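As an illustration of an out-of-band message carrying a forwarding policy, the Python sketch below serializes a hypothetical JSON policy and sends it over a plain TCP connection; the message layout, field names, and transport are assumptions, since the disclosure only requires an out-of-band channel between the service appliance and the packet-forwarding element.

    import json
    import socket

    def send_forwarding_policy(switch_addr: str, port: int, match_prefix: str,
                               redirect_to: str) -> None:
        """Send an out-of-band message asking the packet-forwarding element to
        redirect matching return traffic to the service appliance."""
        policy = {
            "type": "forwarding-policy",
            "match": {"src_prefix": match_prefix},   # the predetermined return traffic
            "action": {"redirect": redirect_to},     # forward it to the appliance
        }
        with socket.create_connection((switch_addr, port), timeout=5) as conn:
            conn.sendall(json.dumps(policy).encode() + b"\n")

    # Example (requires a listening element):
    # send_forwarding_policy("192.0.2.1", 8443, "198.51.100.0/24", "appliance-1")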