Abstract:
An access network is described in which a centralized controller provides seamless end-to-end service from a core-facing edge of a service provider network, through aggregation and access infrastructure, out to access nodes located proximate to subscriber devices. The controller provides a central configuration point for configuring aggregation nodes (AGs) of the service provider network so as to provide transport services that carry traffic between access nodes (AXs) and edge routers on opposite borders of the network.
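As a rough illustration of the central configuration point described above, the sketch below models a hypothetical controller that records transport services on aggregation nodes. All type and field names (Controller, TransportService, VLAN) are assumptions made for illustration; the abstract does not define an API.

```go
package main

import "fmt"

// TransportService is a hypothetical record the centralized controller could
// push to an aggregation node (AG): it binds an access node (AX) on one border
// of the network to an edge router on the other.
type TransportService struct {
	AccessNode string // AX at the subscriber-facing border
	EdgeRouter string // edge router at the core-facing border
	VLAN       int    // illustrative transport identifier
}

// Controller sketches the central configuration point.
type Controller struct {
	agConfig map[string][]TransportService // AG name -> provisioned services
}

// Provision records a transport service on the given aggregation node.
func (c *Controller) Provision(ag string, svc TransportService) {
	if c.agConfig == nil {
		c.agConfig = make(map[string][]TransportService)
	}
	c.agConfig[ag] = append(c.agConfig[ag], svc)
}

func main() {
	var ctl Controller
	ctl.Provision("ag-1", TransportService{AccessNode: "ax-7", EdgeRouter: "er-2", VLAN: 100})
	fmt.Println(ctl.agConfig["ag-1"])
}
```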
Abstract:
A method is performed by an I/O unit connected to another I/O unit in a network device. The method includes receiving a packet; segmenting the packet into a group of data blocks; storing the group of data blocks in a data memory; generating data protection information for a data block of the group of data blocks; creating a control block for the data block; storing, in a control memory, a group of data items for the control block, the group of data items including information associated with a location, of the data block, within the data memory and the data protection information for the data block; performing a data integrity check on the data block, using the data protection information, to determine whether the data block contains a data error; and outputting the data block when the data integrity check indicates that the data block does not contain a data error.
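A minimal sketch of the segmentation and integrity-check flow described above, assuming CRC-32 as the data protection information and a 16-byte block size; neither choice is specified by the abstract.

```go
package main

import (
	"fmt"
	"hash/crc32"
)

const blockSize = 16 // illustrative block size; the abstract does not fix one

// controlBlock sketches the per-block record kept in control memory: where the
// block lives in data memory plus its data protection information.
type controlBlock struct {
	offset   int    // location of the data block within data memory
	length   int    // length of the data block
	checksum uint32 // data protection information (CRC-32 here, as an assumption)
}

func main() {
	packet := []byte("example packet payload to be segmented into data blocks")

	var dataMemory []byte
	var controlMemory []controlBlock

	// Segment the packet, store each block, and record its control block.
	for off := 0; off < len(packet); off += blockSize {
		end := off + blockSize
		if end > len(packet) {
			end = len(packet)
		}
		block := packet[off:end]
		controlMemory = append(controlMemory, controlBlock{
			offset:   len(dataMemory),
			length:   len(block),
			checksum: crc32.ChecksumIEEE(block),
		})
		dataMemory = append(dataMemory, block...)
	}

	// Integrity check on read-out: only output blocks whose CRC still matches.
	for _, cb := range controlMemory {
		block := dataMemory[cb.offset : cb.offset+cb.length]
		if crc32.ChecksumIEEE(block) == cb.checksum {
			fmt.Printf("output %d bytes\n", len(block))
		} else {
			fmt.Println("data error detected; block withheld")
		}
	}
}
```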
Abstract:
In one embodiment, an apparatus includes a switch core that has a multi-stage switch fabric. A first set of peripheral processing devices is coupled to the multi-stage switch fabric by a set of connections that have a protocol. Each peripheral processing device from the first set of peripheral processing devices is a storage node that has virtualized resources. The virtualized resources of the first set of peripheral processing devices collectively define a virtual storage resource interconnected by the switch core. A second set of peripheral processing devices is coupled to the multi-stage switch fabric by a set of connections that have the protocol. Each peripheral processing device from the second set of peripheral processing devices is a compute node that has virtualized resources. The virtualized resources of the second set of peripheral processing devices collectively define a virtual compute resource interconnected by the switch core.
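The sketch below is one illustrative reading of how the virtualized resources of the two device sets could be aggregated into a virtual storage resource and a virtual compute resource; the role labels and capacity units are assumptions, not terms from the abstract.

```go
package main

import "fmt"

// peripheral sketches a peripheral processing device attached to the switch
// core; the role and capacity fields are illustrative, not patent terms.
type peripheral struct {
	name     string
	role     string // "storage" or "compute"
	capacity int    // GB of storage or number of vCPUs, as an assumption
}

// pool sums the virtualized resources of one set of peripherals, mimicking how
// the devices collectively define a single virtual storage or compute resource.
func pool(devs []peripheral, role string) int {
	total := 0
	for _, d := range devs {
		if d.role == role {
			total += d.capacity
		}
	}
	return total
}

func main() {
	devs := []peripheral{
		{"p1", "storage", 512}, {"p2", "storage", 1024},
		{"p3", "compute", 32}, {"p4", "compute", 64},
	}
	fmt.Println("virtual storage resource (GB):", pool(devs, "storage"))
	fmt.Println("virtual compute resource (vCPUs):", pool(devs, "compute"))
}
```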
Abstract:
In one embodiment, edge devices can be configured to be coupled to a multi-stage switch fabric and peripheral processing devices. The edge devices and the multi-stage switch fabric can collectively define a single logical entity. A first edge device from the edge devices can be configured to be coupled to a first peripheral processing device from the peripheral processing devices. A second edge device from the edge devices can be configured to be coupled to a second peripheral processing device from the peripheral processing devices. The first edge device can be configured such that virtual resources, including a first virtual resource, can be defined at the first peripheral processing device. A network management module is coupled to the edge devices and configured to provision the virtual resources such that the first virtual resource can be migrated from the first peripheral processing device to the second peripheral processing device.
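A hedged sketch of the provisioning step: a hypothetical network management module tracks which peripheral processing device hosts each virtual resource and migrates one between devices. The networkManager type and its Migrate method are illustrative inventions, not the patented interface.

```go
package main

import (
	"errors"
	"fmt"
)

// networkManager sketches the network management module: it records which
// peripheral processing device currently hosts each virtual resource.
type networkManager struct {
	placement map[string]string // virtual resource -> hosting peripheral device
}

// Migrate moves a virtual resource from one peripheral processing device to another.
func (m *networkManager) Migrate(resource, from, to string) error {
	if m.placement[resource] != from {
		return errors.New("resource is not hosted on the source device")
	}
	m.placement[resource] = to
	return nil
}

func main() {
	mgr := networkManager{placement: map[string]string{"vr-1": "peripheral-1"}}
	if err := mgr.Migrate("vr-1", "peripheral-1", "peripheral-2"); err != nil {
		fmt.Println("migration failed:", err)
		return
	}
	fmt.Println("vr-1 now hosted on", mgr.placement["vr-1"])
}
```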
Abstract:
In some examples, a device includes at least two integrated circuits (ICs) and a first multi-chip module (MCM) substrate coupled to the at least two ICs, the first MCM substrate comprising a first ball grid array (BGA), wherein the first BGA comprises a first pitch indicative of a distance between balls of the first BGA. The device further includes a second MCM substrate coupled to the first MCM substrate with the first BGA, the second MCM substrate comprising a second BGA, wherein the second BGA comprises a second pitch indicative of a distance between balls of the second BGA, and wherein the second pitch is greater than the first pitch. The device further includes a printed circuit board (PCB) coupled to the second MCM substrate with the second BGA, wherein the first MCM substrate and the second MCM substrate comprise organic, non-silicon insulating material.
Abstract:
Methods and devices for processing packets are provided. The processing device may include an input interface for receiving data units containing header information of respective packets; a first module configurable to perform packet filtering based on the received data units; a second module configurable to perform traffic analysis based on the received data units; a third module configurable to perform load balancing based on the received data units; and a fourth module configurable to perform route lookups based on the received data units.
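One way to picture the four configurable modules is as a pipeline of stages that each consume the same header data unit. The sketch below uses simple placeholder rules (a port filter, a flow counter, a modulo-based link choice, and a print standing in for a route lookup) purely for illustration.

```go
package main

import "fmt"

// dataUnit sketches the header information extracted from a packet.
type dataUnit struct {
	srcIP, dstIP string
	dstPort      int
}

// module is one configurable processing stage; the four stages below stand in
// for the filtering, traffic-analysis, load-balancing, and route-lookup modules.
type module func(dataUnit)

func main() {
	flowCounts := map[string]int{}

	pipeline := []module{
		func(d dataUnit) { // packet filtering (illustrative rule)
			if d.dstPort == 23 {
				fmt.Println("filtered:", d.srcIP)
			}
		},
		func(d dataUnit) { flowCounts[d.srcIP+"->"+d.dstIP]++ },     // traffic analysis
		func(d dataUnit) { fmt.Println("egress link", d.dstPort%4) }, // load balancing
		func(d dataUnit) { fmt.Println("route lookup for", d.dstIP) }, // route lookup
	}

	du := dataUnit{srcIP: "10.0.0.1", dstIP: "10.0.0.2", dstPort: 443}
	for _, m := range pipeline {
		m(du)
	}
	fmt.Println(flowCounts)
}
```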
Abstract:
In some embodiments, an apparatus comprises a processing module, disposed within a first switch fabric element, configured to detect a second switch fabric element having a routing module when the second switch fabric element is operatively coupled to the first switch fabric element. The processing module is configured to define a virtual processing module configured to be operatively coupled to the second switch fabric element. The virtual processing module is configured to receive a request from the second switch fabric element for forwarding information and the virtual processing module is configured to send the forwarding information to the routing module.
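A minimal sketch of the virtual processing module answering a forwarding-information request from the second element's routing module; the map-based forwarding table and the HandleRequest signature are assumptions made for illustration.

```go
package main

import "fmt"

// virtualProcessingModule sketches the module the first switch fabric element
// defines on behalf of a newly detected second element; it answers requests
// for forwarding information. Field names are illustrative.
type virtualProcessingModule struct {
	forwarding map[string]string // destination prefix -> next hop
}

// HandleRequest returns the forwarding information the second element's
// routing module asked for.
func (v *virtualProcessingModule) HandleRequest(prefix string) (string, bool) {
	nh, ok := v.forwarding[prefix]
	return nh, ok
}

func main() {
	// A second switch fabric element was detected; define a virtual
	// processing module for it and serve a forwarding-information request.
	vpm := &virtualProcessingModule{forwarding: map[string]string{"10.1.0.0/16": "fabric-port-3"}}
	if nh, ok := vpm.HandleRequest("10.1.0.0/16"); ok {
		fmt.Println("forwarding information sent to routing module:", nh)
	}
}
```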
Abstract:
One example method includes receiving, by a central analytics system, a query for traffic flow data associated with a geographically distributed network of network devices, outputting, by the central analytics system, the query to a plurality of analytics pods, wherein each of the plurality of analytics pods is coupled to a storage unit of a network device within the geographically distributed network, and, responsive to outputting the query, receiving, by the central analytics system and from the plurality of analytics pods, results of the query, wherein the results include at least the traffic flow data from the plurality of analytics pods based on the query.
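The query fan-out and result collection can be pictured as below, where each analyticsPod holds the traffic-flow data stored at its network device and the central loop gathers per-site answers; the flow-key format and site names are illustrative assumptions.

```go
package main

import "fmt"

// analyticsPod sketches one pod coupled to a network device's storage unit;
// it answers a traffic-flow query from its local data.
type analyticsPod struct {
	site  string
	flows map[string]int // flow key -> byte count
}

func (p analyticsPod) query(flowKey string) int { return p.flows[flowKey] }

func main() {
	pods := []analyticsPod{
		{site: "nyc", flows: map[string]int{"10.0.0.1->10.0.0.2": 1200}},
		{site: "sfo", flows: map[string]int{"10.0.0.1->10.0.0.2": 800}},
	}

	// Central analytics system: output the query to every pod, then gather results.
	results := map[string]int{}
	for _, p := range pods {
		results[p.site] = p.query("10.0.0.1->10.0.0.2")
	}
	fmt.Println("per-site results:", results)
}
```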
Abstract:
In some embodiments, an apparatus includes a core network node configured to be operatively coupled to a set of wired network nodes and a set of wireless network nodes. The core network node is configured to receive, at a first time, a first data packet to be sent to a wired device operatively coupled to a wired network node from the set of wired network nodes. The core network node is configured to also receive, at a second time, a second data packet to be sent to a wireless device operatively coupled to a wireless network node from the set of wireless network nodes. The core network node is configured to apply a common policy to the first data packet and the second data packet based on an identifier of a user associated with both the wireless device and the wired device.
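A small sketch of the common-policy idea: the policy is keyed by the user identifier, so the same entry applies whether a packet is destined for the user's wired or wireless device. The rate-limit action shown is only an example.

```go
package main

import "fmt"

// packet sketches the two arrivals: one destined for a wired device, one for a
// wireless device, both belonging to the same user. Fields are illustrative.
type packet struct {
	user   string
	access string // "wired" or "wireless"
}

// policy is the common policy keyed by user identifier rather than by access type.
type policy map[string]string // user -> action (e.g. a rate-limit class)

func main() {
	common := policy{"alice": "rate-limit-10mbps"}

	for _, p := range []packet{{user: "alice", access: "wired"}, {user: "alice", access: "wireless"}} {
		// The same policy entry applies regardless of wired or wireless attachment.
		fmt.Printf("%s packet for %s -> %s\n", p.access, p.user, common[p.user])
	}
}
```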
Abstract:
In general, the invention is directed to techniques for reducing deadlocks that may arise when performing fabric replication. For example, as described herein, a network device includes packet replicators that each comprise a plurality of resource partitions. A replication data structure for a packet received by the network device includes packet replicator nodes that are arranged hierarchically to occupy one or more levels of the replication data structure. Each of the resource partitions in each of the packet replicators is associated with a different level of the replication data structure. The packet replicators replicate the packet according to the replication data structure, and each packet replicator handles the packet using the resource partition associated with the level of the replication data structure occupied by the node that corresponds to that packet replicator.
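The deadlock-avoidance idea can be sketched as follows: each node of the hierarchical replication data structure carries its level, and a replicator that appears at several levels uses a different resource partition for each level. The tree shape, replicator names, and partition labels below are illustrative assumptions.

```go
package main

import "fmt"

// replicationNode sketches one node of the hierarchical replication data
// structure: a packet replicator plus the level it occupies.
type replicationNode struct {
	replicator string
	level      int
	children   []*replicationNode
}

// replicate walks the structure; each replicator handles the packet using the
// resource partition associated with the node's level, so a replicator that
// appears at different levels never reuses the same partition (the
// deadlock-avoidance idea).
func replicate(n *replicationNode, partitions map[string][]string) {
	partitions[n.replicator] = append(partitions[n.replicator], fmt.Sprintf("level-%d partition", n.level))
	for _, c := range n.children {
		replicate(c, partitions)
	}
}

func main() {
	tree := &replicationNode{replicator: "pfe-0", level: 0, children: []*replicationNode{
		{replicator: "pfe-1", level: 1},
		{replicator: "pfe-2", level: 1, children: []*replicationNode{{replicator: "pfe-0", level: 2}}},
	}}
	used := map[string][]string{}
	replicate(tree, used)
	fmt.Println(used) // pfe-0 appears at two levels and uses two different partitions
}
```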