Abstract:
In one embodiment, an apparatus includes a switch core that has a multi-stage switch fabric. A first set of peripheral processing devices is coupled to the multi-stage switch fabric by a set of connections that have a protocol. Each peripheral processing device from the first set of peripheral processing devices is a storage node that has virtualized resources. The virtualized resources of the first set of peripheral processing devices collectively define a virtual storage resource interconnected by the switch core. A second set of peripheral processing devices is coupled to the multi-stage switch fabric by a set of connections that have the protocol. Each peripheral processing device from the second set of peripheral processing devices is a compute node that has virtualized resources. The virtualized resources of the second set of peripheral processing devices collectively define a virtual compute resource interconnected by the switch core.
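The pooling of virtualized storage and compute resources across the switch core can be pictured with a short Python sketch. The class names (PeripheralNode, VirtualPool) and the resource labels are hypothetical illustrations, not anything defined in this embodiment.

```python
from dataclasses import dataclass, field

@dataclass
class PeripheralNode:
    """A storage or compute node attached to the multi-stage switch fabric."""
    node_id: str
    role: str                                # "storage" or "compute"
    virtual_resources: list = field(default_factory=list)

@dataclass
class VirtualPool:
    """Aggregates the virtualized resources of one set of peripheral nodes."""
    role: str
    nodes: list = field(default_factory=list)

    def resources(self):
        # The pool is the union of every member node's resources, each
        # reachable from any other node through the switch core.
        return [r for n in self.nodes for r in n.virtual_resources]

storage_pool = VirtualPool("storage", [PeripheralNode("s1", "storage", ["lun-0"]),
                                       PeripheralNode("s2", "storage", ["lun-1"])])
compute_pool = VirtualPool("compute", [PeripheralNode("c1", "compute", ["vm-0"])])
print(storage_pool.resources())              # ['lun-0', 'lun-1']
```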
Abstract:
In one embodiment, edge devices can be configured to be coupled to a multi-stage switch fabric and peripheral processing devices. The edge devices and the multi-stage switch fabric can collectively define a single logical entity. A first edge device from the edge devices can be configured to be coupled to a first peripheral processing device from the peripheral processing devices. A second edge device from the edge devices can be configured to be coupled to a second peripheral processing device from the peripheral processing devices. The first edge device can be configured such that virtual resources, including a first virtual resource, can be defined at the first peripheral processing device. A network management module is coupled to the edge devices and is configured to provision the virtual resources such that the first virtual resource can be migrated from the first peripheral processing device to the second peripheral processing device.
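The provisioning and migration step described above can be sketched as follows. The NetworkManager class, its method names, and the device identifiers are assumptions made for illustration rather than the patent's implementation.

```python
class NetworkManager:
    """Hypothetical network management module tracking where each
    virtual resource is provisioned."""

    def __init__(self):
        self.placement = {}                  # virtual resource id -> peripheral device id

    def provision(self, resource_id, device_id):
        self.placement[resource_id] = device_id

    def migrate(self, resource_id, target_device_id):
        source = self.placement.get(resource_id)
        if source is None:
            raise KeyError(f"{resource_id} is not provisioned")
        # Because the edge devices and fabric act as one logical entity,
        # only the placement record changes here; forwarding toward the
        # resource follows it to the new peripheral processing device.
        self.placement[resource_id] = target_device_id
        return source, target_device_id

nm = NetworkManager()
nm.provision("vm-1", "peripheral-1")
print(nm.migrate("vm-1", "peripheral-2"))    # ('peripheral-1', 'peripheral-2')
```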
Abstract:
A first network client requests initiation of a data transfer with a second network client. An admission control facility (ACF) responds to the initiation request by performing admission analysis to determine whether to initiate the data transfer. The ACF sends one or more packets to the second network client. In response, the second network client sends acknowledgment packets back to the ACF. The ACF performs admission analysis based on the packets sent and the acknowledgment packets, and determines whether the data transfer should be initiated based on the analysis. The admission analysis may be based on a variety of factors, such as the average time to receive an acknowledgment for each packet, the variance of the time to receive an acknowledgment for each packet, a combination of these factors, or a combination of these and other factors.
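A minimal sketch of the admission analysis, assuming the measured factors are compared against simple thresholds (the abstract leaves the exact combination of factors open). The function name and threshold values below are illustrative only.

```python
import statistics

def admit_transfer(ack_delays_s, mean_limit_s=0.050, variance_limit_s2=0.001):
    """Decide whether to initiate a data transfer from probe results.

    ack_delays_s: time to receive an acknowledgment for each packet the
    ACF sent to the second network client. The limits are assumed
    thresholds; the abstract does not specify how factors are weighted.
    """
    mean_delay = statistics.mean(ack_delays_s)
    delay_variance = statistics.pvariance(ack_delays_s)
    return mean_delay <= mean_limit_s and delay_variance <= variance_limit_s2

print(admit_transfer([0.021, 0.024, 0.019, 0.023]))   # True on a quiet path
```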
Abstract:
Methods and devices for processing packets are provided. The processing device may include an input interface for receiving data units containing header information of respective packets; a first module configurable to perform packet filtering based on the received data units; a second module configurable to perform traffic analysis based on the received data units; a third module configurable to perform load balancing based on the received data units; and a fourth module configurable to perform route lookups based on the received data units.
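A toy version of the four-module pipeline might look like the following; the header-field names, filter set, and routing table are assumptions used only to make the example runnable.

```python
FILTERED_PROTOCOLS = {"telnet"}                       # assumed filter rule
ROUTES = {"10.": "port-1", "192.168.": "port-2"}      # assumed prefix table

def packet_filter(hdr):
    return hdr["proto"] not in FILTERED_PROTOCOLS

def traffic_analysis(hdr, counters):
    key = (hdr["src"], hdr["dst"])
    counters[key] = counters.get(key, 0) + 1          # per-flow packet count

def load_balance(hdr, n_links=4):
    return hash((hdr["src"], hdr["dst"])) % n_links   # pick an egress link

def route_lookup(hdr):
    for prefix, port in ROUTES.items():
        if hdr["dst"].startswith(prefix):
            return port
    return "default-port"

def process(hdr, counters):
    if not packet_filter(hdr):
        return None                                   # dropped by the filtering module
    traffic_analysis(hdr, counters)
    return load_balance(hdr), route_lookup(hdr)

counters = {}
print(process({"src": "10.1.1.1", "dst": "192.168.0.5", "proto": "tcp"}, counters))
```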
Abstract:
In some embodiments, an apparatus includes a core network node configured to be operatively coupled to a set of wired network nodes and a set of wireless network nodes. The core network node is configured to receive, at a first time, a first data packet to be sent to a wired device operatively coupled to a wired network node from the set of wired network nodes. The core network node is configured to also receive, at a second time, a second data packet to be sent to a wireless device operatively coupled to a wireless network node from the set of wireless network nodes. The core network node is configured to apply a common policy to the first data packet and the second data packet based on an identifier of a user associated with both the wireless device and the wired device.
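One way to picture the common-policy lookup keyed on a user identifier is the sketch below; the device-to-user mapping, policy table, and rate and VLAN values are invented for illustration.

```python
USER_POLICY = {"user-42": {"rate_limit_mbps": 100, "vlan": 20}}   # assumed policy table
DEVICE_TO_USER = {"laptop-mac": "user-42", "phone-mac": "user-42"}

def apply_policy(packet):
    user = DEVICE_TO_USER.get(packet["device"])
    policy = USER_POLICY.get(user, {})
    # The same entry is applied whether the packet arrived through a
    # wired network node or a wireless network node, because the lookup
    # key is the user, not the access type.
    return {**packet, "policy": policy}

print(apply_policy({"device": "laptop-mac", "access": "wired"}))
print(apply_policy({"device": "phone-mac", "access": "wireless"}))
```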
Abstract:
A system selectively drops data from queues. The system includes a drop table that stores drop probabilities. The system selects one of the queues to examine and generates an index into the drop table to identify one of the drop probabilities for the examined queue. The system then determines whether to drop data from the examined queue based on the identified drop probability.
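The drop decision can be sketched roughly as below, assuming the drop table is indexed by the examined queue's fill ratio; the eight-entry table and the indexing scheme are illustrative, not the system's actual index computation.

```python
import random

DROP_TABLE = [0.0, 0.0, 0.01, 0.02, 0.05, 0.10, 0.25, 0.50]   # assumed probabilities

def should_drop(queue_len, queue_capacity):
    # Map the examined queue's occupancy onto a table index, then draw
    # against the identified drop probability.
    index = min(int(queue_len / queue_capacity * len(DROP_TABLE)),
                len(DROP_TABLE) - 1)
    return random.random() < DROP_TABLE[index]

print(should_drop(queue_len=900, queue_capacity=1000))
```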
Abstract:
In one embodiment, a method includes sending a first flow control signal to a first stage of transmit queues when a receive queue is in a congestion state. The method also includes sending a second flow control signal to a second stage of transmit queues different from the first stage of transmit queues when the receive queue is in the congestion state.
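A minimal sketch of the two-stage signalling, assuming a single fill-ratio threshold marks the congestion state; the signal names and threshold are placeholders.

```python
class QueueStage:
    """Stand-in for a stage of transmit queues that can receive a signal."""
    def __init__(self, name): self.name = name
    def send(self, signal):   print(f"{self.name} <- {signal}")

CONGESTION_THRESHOLD = 0.8                    # assumed congestion trigger

def on_receive_queue_update(fill_ratio, first_stage, second_stage):
    """Send distinct flow control signals to both transmit-queue stages
    when the receive queue enters a congestion state."""
    if fill_ratio >= CONGESTION_THRESHOLD:
        first_stage.send("pause-first-stage")
        second_stage.send("pause-second-stage")

on_receive_queue_update(0.9, QueueStage("first stage"), QueueStage("second stage"))
```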
Abstract:
In some embodiments, an apparatus comprises a core network node and a control module within an enterprise network architecture. The core network node is configured to be operatively coupled to a set of wired network nodes and a set of wireless network nodes. The core network node is configured to receive a first tunneled packet associated with a first session from a wired network node from the set of wired network nodes. The core network node is configured to also receive a second tunneled packet associated with a second session from a wireless network node from the set of wireless network nodes through intervening wired network nodes from the set of wired network nodes. The control module is operatively coupled to the core network node. The control module is configured to manage the first session and the second session.
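A rough sketch of a control module tracking both sessions the same way, regardless of whether the tunneled packets arrived from a wired or a wireless network node; the class and field names are hypothetical.

```python
class ControlModule:
    """Hypothetical control module coupled to the core network node."""

    def __init__(self):
        self.sessions = {}

    def handle_tunneled_packet(self, session_id, origin):
        state = self.sessions.setdefault(session_id, {"packets": 0})
        state["packets"] += 1
        state["origin"] = origin     # "wired" or "wireless"; managed identically
        return state

cm = ControlModule()
cm.handle_tunneled_packet("sess-1", "wired")
print(cm.handle_tunneled_packet("sess-2", "wireless"))
```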