Abstract:
An apparatus comprises a given multimode optical waveguide extending in a given direction and another multimode optical waveguide extending in another direction and intersecting the given multimode optical waveguide. The apparatus further comprises a bi-stable optical switch positioned at the intersection of the two multimode optical waveguides. In a redirection state, the switch redirects a multimode optical signal transmitted on the given multimode optical waveguide to the other multimode optical waveguide; in a pass-through state, it passes the signal across the intersection. The bi-stable optical switch can comprise a gap extending diagonally from a given corner of the intersection to the opposing corner.
Abstract:
Described is a switch architecture that combines address management with simplified hardware to implement fast route lookup within network switches such as Ethernet switches. A managed address includes a cluster ID which is shared by all endpoints in a cluster, and a member ID which is unique for each node in the cluster. The switch extracts the cluster ID from a target address for a packet and compares it against at least one cluster ID stored in a cluster identification memory. Responsive to a match, the switch generates a port identification for the packet using a fast lookup table. Responsive to no match, the target address is considered an unmanaged address. In one implementation, a slow lookup table can be used to generate a port identification for the unmanaged address.
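The fast-path lookup described above can be sketched as follows. The 16-bit field widths, the dictionary-backed tables, and all names are illustrative assumptions, not details taken from the abstract:

```python
CLUSTER_BITS = 16
MEMBER_BITS = 16

def split_address(addr: int) -> tuple[int, int]:
    """Split a managed address into (cluster_id, member_id)."""
    return addr >> MEMBER_BITS, addr & ((1 << MEMBER_BITS) - 1)

def route(addr: int, cluster_ids: set[int],
          fast_table: dict[int, int],
          slow_table: dict[int, int]):
    """Return the egress port for addr, or None if unknown."""
    cluster_id, member_id = split_address(addr)
    if cluster_id in cluster_ids:
        # Managed address: the member ID indexes the fast lookup table.
        return fast_table.get(member_id)
    # No cluster ID match: treat as unmanaged, fall back to slow lookup.
    return slow_table.get(addr)
```

The fast path never consults the full address: once the cluster ID matches, the (small) member ID alone selects the port, which is what keeps the table compact.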
Abstract:
Methods and systems for caching data from a head end of a queue are described. The cached data can then be selectively forwarded from the data producer to the data consumer upon request.
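A minimal sketch of head-of-queue caching with on-request forwarding follows; the class name, the cache size, and the refill policy are illustrative assumptions:

```python
from collections import deque

class HeadCachedQueue:
    """Keep a small cache of items from the head of the queue;
    cached items are forwarded to the consumer only on request."""

    def __init__(self, cache_size: int = 4):
        self._queue = deque()   # items produced but not yet cached
        self._cache = deque()   # items staged from the head of the queue
        self._cache_size = cache_size

    def produce(self, item):
        self._queue.append(item)
        self._refill()

    def _refill(self):
        # Top the cache up from the head of the queue.
        while self._queue and len(self._cache) < self._cache_size:
            self._cache.append(self._queue.popleft())

    def request(self):
        """Consumer pulls the next cached item, or None if empty."""
        item = self._cache.popleft() if self._cache else None
        self._refill()
        return item
```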
Abstract:
A system installer is operable to configure hardware components in a reconfigurable data center for a hardware platform and to install software on the hardware platform.
Abstract:
A branch operation is processed using a branch predict instruction and an associated branch instruction. The branch predict instruction indicates a predicted direction, a target address, and an instruction address for the associated branch instruction. When the branch predict instruction is detected, the target address is stored at an entry indicated by the associated branch instruction address and a prefetch request is triggered to the target address. The branch predict instruction may also include hint information for managing the storage and use of the branch prediction information.
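The mechanism above can be modeled in a few lines; the class and attribute names are hypothetical, and the table is a plain dictionary standing in for a hardware structure indexed by the branch instruction's address:

```python
class BranchPredictUnit:
    """Model of the scheme above: a branch predict instruction carries
    (predicted direction, target, associated branch address); the unit
    stores the prediction under the branch address and triggers a
    prefetch of the target."""

    def __init__(self):
        self.table = {}        # branch_addr -> (predicted_taken, target)
        self.prefetched = []   # targets a real core would prefetch

    def on_branch_predict(self, predicted_taken, target, branch_addr):
        # Store the prediction at the entry indexed by the branch's address.
        self.table[branch_addr] = (predicted_taken, target)
        # Trigger a prefetch request to the target address.
        self.prefetched.append(target)

    def predict(self, branch_addr):
        """At fetch of the associated branch, return (taken?, target)."""
        return self.table.get(branch_addr)
```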
Abstract:
A configurable Clos network includes leafs and spines and a switch fabric that connects the leafs and the spines. The switch fabric couples each leaf port of each leaf to at least one spine port of each spine.
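One way to picture the coupling is a full-mesh wiring list between leaf ports and spine ports; the port-numbering convention below is an illustrative assumption, simplified to exactly one link per leaf/spine pair:

```python
def clos_wiring(num_leafs: int, num_spines: int):
    """Generate leaf/spine links so each leaf reaches every spine.
    A link is ((leaf, leaf_port), (spine, spine_port))."""
    links = []
    for leaf in range(num_leafs):
        for spine in range(num_spines):
            # Leaf port number mirrors the spine index, and vice versa.
            links.append(((leaf, spine), (spine, leaf)))
    return links
```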
Abstract:
Embodiments herein relate to the addition of entries to, or the modification of, a forwarding table based on an address. A first packet having a source address and a location value may be received. The source address identifies a source of the first packet, and the location value indicates at least part of a route along a network to that source. If the forwarding table does not include the source address, the forwarding table is not modified and no new entry is added to it.
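The update rule reduces to a short conditional; note that, per the abstract, an unknown source address is left unlearned rather than inserted. The dictionary table and function name are illustrative assumptions:

```python
def update_forwarding_table(table: dict, source_addr, location_value) -> bool:
    """Refresh the route info of an existing entry only; do not create
    a new entry when the source address is absent from the table."""
    if source_addr in table:
        table[source_addr] = location_value
        return True
    return False   # table left unchanged
```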
Abstract:
A dynamic pinning remote direct memory access is performed by creating sections of data to be transferred through a remote direct memory access. Each section includes a subset of the data to be transferred or received. To perform the remote direct memory access, each section is pinned, used for the remote direct memory access, and released after the transfer is complete.
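The pin/transfer/release loop can be sketched as follows; `pin` and `release` here are stand-ins for real memory-registration calls (e.g. to a NIC driver), and the section size is an illustrative parameter:

```python
def dynamic_pin_transfer(data: bytes, section_size: int, transfer):
    """Split data into sections; for each section, pin it, perform the
    RDMA transfer for that section, then release the pin."""
    pinned = set()

    def pin(offset):
        pinned.add(offset)       # stand-in for memory registration

    def release(offset):
        pinned.discard(offset)   # stand-in for deregistration

    for offset in range(0, len(data), section_size):
        section = data[offset:offset + section_size]
        pin(offset)                 # pin only this section
        transfer(offset, section)   # RDMA read/write of this section
        release(offset)             # release once the transfer completes
    assert not pinned               # nothing stays pinned at the end
```

Because only one section is pinned at a time, the amount of memory locked down is bounded by the section size rather than by the total transfer size.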
Abstract:
A method for load balancing Ethernet traffic within a fat tree network (315, 455) includes randomly assigning incoming messages (510) into hash classes using a hash function (520); allocating the hash classes among uplinks (550); and transmitting the incoming messages on the uplinks (550) according to the hash class. A network switch (515) for load balancing communication flows in a fat tree network (315, 455) includes downlinks (545) and uplinks (550); the network switch (515) being configured to route communication flows among the downlinks (545) and uplinks (550); a hash module (520) which receives a MAC address from a message (510) and outputs a hash address; and a TCAM lookup module (535) which allocates the hash address into a hash class and allocates the hash class to one of the uplinks (550).
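The hash-class allocation can be sketched as follows; MD5 is an illustrative stand-in for the switch's hash function, and the class-to-uplink mapping is a plain list rather than a TCAM:

```python
import hashlib

def hash_class(mac: bytes, num_classes: int) -> int:
    """Map a MAC address into one of num_classes hash classes."""
    digest = hashlib.md5(mac).digest()
    return digest[0] % num_classes

def uplink_for(mac: bytes, num_classes: int, class_to_uplink: list) -> int:
    """Resolve a MAC address to an uplink via its hash class."""
    return class_to_uplink[hash_class(mac, num_classes)]
```

All frames from a given MAC land in the same class and therefore on the same uplink, which preserves per-flow ordering while spreading distinct flows across the uplinks.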
Abstract:
A service configuration for a service is generated using a service specification and at least one library including at least one of a hardware component and a software component available to be implemented for the service.
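A minimal sketch of generating a configuration from a specification and a component library follows; the spec and library shapes, and the first-match selection policy, are illustrative assumptions:

```python
def generate_service_config(spec: dict, library: list) -> dict:
    """For each capability named in the service specification, select an
    available hardware or software component from the library."""
    config = {}
    for capability in spec["capabilities"]:
        for component in library:
            if capability in component["provides"]:
                config[capability] = component["name"]
                break   # first matching component wins
    return config
```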