Abstract:
The invention provides a data communication method, including: sending, by a first aggregation node in a first pod, data packets of a first data flow to at least one core node in a core layer in a packet-based balancing manner; receiving, by the at least one core node, the data packets of the first data flow, and sending the received data packets to a second aggregation node in a second pod; and receiving, by the second aggregation node, the data packets of the first data flow, and sorting the data packets of the first data flow to obtain the first data flow.
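The mechanism above can be sketched in a few lines of Python. This is a minimal illustration, not the patented implementation: the round-robin spraying policy, the sequence-number tagging, and all names (`spray`, `reorder`, the list-based "core nodes") are assumptions made for clarity.

```python
from itertools import cycle

def spray(packets, core_nodes):
    """First aggregation node: distribute sequence-numbered packets of one
    flow across the core nodes per-packet (round-robin is one possible
    balancing policy)."""
    chooser = cycle(core_nodes)
    for seq, payload in enumerate(packets):
        next(chooser).append((seq, payload))

def reorder(received):
    """Second aggregation node: sort received packets by sequence number
    to recover the original first data flow."""
    return [payload for _, payload in sorted(received)]

# Example: six packets of one flow sprayed over three core nodes.
cores = [[], [], []]
spray(["p0", "p1", "p2", "p3", "p4", "p5"], cores)
arrived = [pkt for core in cores for pkt in core]  # arrival order differs from send order
flow = reorder(arrived)
```

Because each packet may traverse a different core node, packets can arrive out of order; the receive-side sort is what makes per-packet balancing safe for a single flow.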
Abstract:
This application provides example traffic balancing methods, network devices, and electronic devices, and relates to the field of communications technologies. An example electronic device can create or maintain a first flowpac, and classify a packet that uses a first node as a destination into the first flowpac. When a network balancing parameter meets a preset condition, a second flowpac is created or maintained. A subsequent packet that uses the first node as a destination is classified into the second flowpac, where packets belonging to a same flowpac have a same destination node and a same sending path, and a sending path of a packet in the second flowpac is different from a sending path of a packet in the first flowpac.
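The flowpac behavior described above can be modeled as a small state machine. The sketch below is illustrative only: the class name, the use of a byte-count threshold as the "network balancing parameter", and the round-robin path selection are all assumptions, not details taken from the application.

```python
import itertools

class FlowpacBalancer:
    """Illustrative flowpac-style balancer: packets destined for the same
    node share one sending path (one flowpac) until the balancing
    parameter meets a preset condition, at which point a new flowpac is
    created on a different path."""

    def __init__(self, paths, threshold):
        self.paths = itertools.cycle(paths)
        self.threshold = threshold        # preset condition (assumed: bytes per flowpac)
        self.current_path = next(self.paths)
        self.bytes_in_flowpac = 0

    def classify(self, pkt_len):
        if self.bytes_in_flowpac >= self.threshold:
            # Condition met: open a second flowpac on a different path.
            self.current_path = next(self.paths)
            self.bytes_in_flowpac = 0
        self.bytes_in_flowpac += pkt_len
        return self.current_path
```

For example, with a 3000-byte threshold and 1500-byte packets, the first two packets travel the first path and the next two travel a different one, while packets inside each flowpac stay on a single path and therefore stay in order.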
Abstract:
The invention provides a data communication method, including: sending, by a first electrical node, request information to a second electrical node, where the request information is used to request an expected data volume quota for a first VOQ, and the first VOQ stores at least one first data packet to be sent to the second electrical node; receiving response information, where the response information includes a target data volume quota; and sending the at least one first data packet to the second electrical node via at least one optical node based on the target data volume quota.
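The request/grant exchange can be sketched as three small functions. This is a simplified model under stated assumptions: the quota request equals the VOQ backlog, the grant is capped by available capacity, and the function names are hypothetical.

```python
def request_quota(voq_backlog_bytes):
    """Requesting node: the expected data-volume quota for the first VOQ
    (assumed here to equal the queued backlog in bytes)."""
    return voq_backlog_bytes

def grant_quota(requested, available):
    """Responding node: the target data-volume quota, capped by the
    capacity currently available toward the requester."""
    return min(requested, available)

def send_from_voq(voq, quota):
    """Drain packets from the VOQ without exceeding the granted quota;
    remaining packets wait for the next request/grant round."""
    sent, used = [], 0
    while voq and used + len(voq[0]) <= quota:
        pkt = voq.pop(0)
        used += len(pkt)
        sent.append(pkt)
    return sent
```

The quota acts as a credit: the sender never injects more traffic through the optical nodes than the receiving side has agreed to absorb.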
Abstract:
This application provides a networking processor. The networking processor is provided with a plurality of ports, and further includes a plurality of buses and a plurality of switch units. Each switch unit is bound to at least one of the ports, and is configured to receive data from outside of the networking processor and send data to the outside of the networking processor through its bound port. Each of the plurality of buses is bound to at least two of the switch units, each of the switch units is bound to at least one of the plurality of buses, and a switch unit forwards data to other switch units through the network formed by the plurality of buses. In this application, latency of data switching is reduced.
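A minimal object model makes the binding constraints concrete. The sketch below is an assumption-laden illustration: the class names, the port-ownership lookup, and the single-hop bus traversal are not taken from the application.

```python
class SwitchUnit:
    def __init__(self, name, ports):
        self.name = name
        self.ports = set(ports)   # each switch unit is bound to >= 1 port
        self.buses = []           # each switch unit is bound to >= 1 bus

class Bus:
    """Each bus is bound to at least two switch units and carries data
    between them."""
    def __init__(self, units):
        assert len(units) >= 2
        self.units = units
        for u in units:
            u.buses.append(self)

def forward(src, dst_port):
    """Deliver data arriving at src to the switch unit that owns dst_port,
    crossing a shared bus when the destination is a different unit."""
    if dst_port in src.ports:
        return src.name           # local delivery through the bound port
    for bus in src.buses:
        for unit in bus.units:
            if dst_port in unit.ports:
                return unit.name
    raise LookupError("destination port unreachable")
```

Because any switch unit can reach any other over the shared bus network, data never needs to leave the processor and re-enter to move between ports, which is the source of the latency reduction the abstract claims.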
Abstract:
Embodiments of the present invention relate to the communications field, and provide a multi-chassis cascading apparatus. The apparatus includes line card chassis (LCCs). A fabric interface chip FIC and a switch element SE 1/3 are deployed in each LCC, and the FIC is connected to the SE 1/3 located in the same LCC. A switch element SE 2 is also deployed in each LCC; the SE 1/3 is connected both to the SE 2 located in the same LCC and to the SE 2 located in each other LCC.
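The wiring rules above can be generated programmatically. The sketch below is illustrative only; the element labels (`FIC{i}`, `SE13_{i}`, `SE2_{j}`) and the full-mesh SE 1/3-to-SE 2 assumption are mine, not the patent's.

```python
def build_cascade(num_lcc):
    """Build the link set for the cascading apparatus: each LCC holds one
    FIC, one SE 1/3, and one SE 2; the FIC connects to its local SE 1/3,
    and each SE 1/3 connects to the SE 2 in its own LCC and in every
    other LCC."""
    links = set()
    for i in range(num_lcc):
        links.add((f"FIC{i}", f"SE13_{i}"))        # intra-chassis FIC link
        for j in range(num_lcc):
            links.add((f"SE13_{i}", f"SE2_{j}"))   # intra- and inter-chassis
    return links
```

For two chassis this yields six links: two FIC-to-SE 1/3 links and a 2x2 full mesh between the SE 1/3 and SE 2 elements, giving every chassis a path through every SE 2.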
Abstract:
The present disclosure provides a switch fabric system, the system including M first crossbar units (CUs) and N second CUs, where each first CU includes L first input ports, a first arbiter, a first crossbar, and N first output ports. Each second CU includes M second input ports, a second arbiter, a second crossbar, and one second output port. The M×N first output ports of the M first CUs are respectively coupled to the N×M second input ports of the N second CUs, such that the N first output ports of each first CU are coupled, in a one-to-one correspondence, to one second input port of each of the N second CUs. In the example system, N equals M×L, and M, N, and L are all positive integers.
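The port-coupling rule can be enumerated directly. The sketch below is a hypothetical rendering of the wiring pattern described (output j of first CU i couples to input i of second CU j); the tuple encoding of ports is an assumption.

```python
def fabric_wiring(m, l):
    """Enumerate the couplings of the two-stage fabric: M first CUs with
    L inputs and N outputs each, and N = M*L second CUs with M inputs
    and one output each. Output j of first CU i is coupled to input i
    of second CU j, giving the one-to-one correspondence."""
    n = m * l
    wiring = {}
    for i in range(m):        # first CU index
        for j in range(n):    # first-CU output index == second CU index
            wiring[("cu1", i, j)] = ("cu2", j, i)
    return n, wiring
```

With M = 2 and L = 3, this produces N = 6 second CUs and 12 couplings, matching the M×N first-output / N×M second-input count in the abstract.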
Abstract:
Embodiments of the present invention provide a method and an apparatus for forwarding traffic of a switching system. The switching system includes a first LCC, at least one second LCC, and at least one third LCC that are interconnected in a mesh topology; and the method includes: receiving, by the first LCC, a packet, and parsing the packet to acquire a destination address of the packet; and when the destination address indicates that the packet is to be sent to the third LCC, if a currently preset configuration mode of the switching system is a first configuration mode, bearing, by the first LCC, the packet on a third link, and forwarding the packet to the third LCC, where the first configuration mode indicates that an N-hop mode is currently applied to the switching system, where N is a natural number greater than or equal to 3.
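The forwarding decision reduces to a mode-and-destination check. The sketch below is a loose illustration under stated assumptions: the string-valued modes, destinations, and link names are hypothetical placeholders, and only the first-configuration-mode branch from the abstract is modeled.

```python
def forward_packet(dest, config_mode, n_hops=3):
    """First LCC forwarding decision: in the first configuration mode
    (an N-hop mode with N >= 3), a packet destined for the third LCC is
    borne on the third link toward that LCC; other traffic takes an
    assumed default link."""
    if config_mode == "first" and dest == "third_lcc":
        assert n_hops >= 3, "first configuration mode implies N >= 3"
        return "third_link"
    return "default_link"
```

The point of the mode check is that link selection depends not only on the destination address parsed from the packet but also on the currently configured hop mode of the whole switching system.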
Abstract:
A cell processing method and apparatus are provided. The method includes: obtaining, by a first sending end, a first timestamp compensation time; adding, by the first sending end, the first timestamp compensation time to a first timestamp carried in a first cell, where the first timestamp is a sending time of the first cell; and sending, by the first sending end to a receiving end, the first cell to which the first timestamp compensation time has been added, so that the receiving end forwards the first cell according to the compensated first timestamp. In the present invention, adding a timestamp compensation time to the first timestamp carried in a first cell improves cell forwarding efficiency at the receiving end and prevents cell accumulation on a link.
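The sender/receiver roles can be sketched as two small functions. This is an illustrative model: the dict-based cell with a `timestamp` field and the "forward once local time reaches the compensated timestamp" policy are assumptions, not details from the patent.

```python
def compensate(cell, compensation):
    """Sending end: add the timestamp compensation time to the sending-time
    timestamp carried in the cell before transmission."""
    cell = dict(cell)             # leave the caller's cell untouched
    cell["timestamp"] += compensation
    return cell

def ready_to_forward(cell, now):
    """Receiving end: forward the cell once local time reaches the
    compensated timestamp, pacing forwarding so cells do not accumulate
    on the link."""
    return now >= cell["timestamp"]
```

The compensation effectively shifts the cell's release time at the receiver, absorbing clock or path skew between the two ends.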