-
Publication No.: US11558309B1
Publication Date: 2023-01-17
Application No.: US17369992
Application Date: 2021-07-08
Applicant: MELLANOX TECHNOLOGIES, LTD.
Inventor: Ilan Pardo
IPC: H04L47/625 , H04L49/90 , H04L49/9005 , H04L47/62
Abstract: A network device includes packet processing circuitry and queue management circuitry. The packet processing circuitry is configured to transmit and receive packets to and from a network. The queue management circuitry is configured to store, in a memory, a queue for queuing data relating to processing of the packets, the queue including a primary buffer and an overflow buffer, to choose between a normal mode and an overflow mode based on a defined condition, to queue the data only in the primary buffer when operating in the normal mode, and, when operating in the overflow mode, to queue the data in a concatenation of the primary buffer and the overflow buffer.
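The primary-buffer/overflow-buffer arrangement in this abstract can be illustrated with a small sketch. The class name, the use of a capacity threshold as the "defined condition," and the deque-backed storage below are illustrative assumptions, not details taken from the patent.

```python
from collections import deque

class OverflowQueue:
    """Sketch of a queue that spills into an overflow buffer when a condition is met."""

    def __init__(self, primary_capacity, overflow_capacity):
        self.primary = deque()
        self.overflow = deque()
        self.primary_capacity = primary_capacity
        self.overflow_capacity = overflow_capacity
        self.overflow_mode = False          # normal mode by default

    def enqueue(self, item):
        # The "defined condition" here is simply primary-buffer occupancy;
        # the patent leaves the exact condition open.
        if not self.overflow_mode and len(self.primary) >= self.primary_capacity:
            self.overflow_mode = True
        if not self.overflow_mode:
            self.primary.append(item)
        else:
            # In overflow mode the queue is the concatenation primary + overflow.
            if len(self.overflow) >= self.overflow_capacity:
                raise BufferError("queue full")
            self.overflow.append(item)

    def dequeue(self):
        if self.primary:
            item = self.primary.popleft()
        elif self.overflow:
            item = self.overflow.popleft()
        else:
            raise IndexError("queue empty")
        # Return to normal mode once the overflow buffer has drained.
        if self.overflow_mode and not self.overflow and len(self.primary) < self.primary_capacity:
            self.overflow_mode = False
        return item
```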
-
Publication No.: US20220368644A1
Publication Date: 2022-11-17
Application No.: US17702332
Application Date: 2022-03-23
Applicant: FUJITSU LIMITED
Inventor: Jun KATO
IPC: H04L47/41 , H04L47/2441 , H04L47/27 , H04L49/9005
Abstract: An information processing apparatus including: a memory; and a processor coupled to the memory, the processor being configured to perform processing including: executing a buffer management processing that, under flow control over communication executed by an arithmetic processing device, sequentially obtains a plurality of packets transmitted and destined for the arithmetic processing device, stores the packets in a buffer, generates one aggregated packet by aggregating the packets, and transmits the aggregated packet to the arithmetic processing device; executing an ACK management processing that decides transmission timing for ACKs to a transmission source of the packets based on a flow rate for the aggregated packet; and executing a window management processing that decides a receive window size representing a data amount to be transmitted by one flow to the arithmetic processing device based on the flow rate for the aggregated packet.
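A minimal sketch of the buffer-management, ACK-timing, and window-sizing split described above is given below. The class name, the aggregation threshold, and the specific timing and window formulas are assumptions made only for illustration.

```python
import time

class PacketAggregator:
    """Sketch: aggregate buffered packets and derive ACK timing / window size
    from the measured flow rate of the aggregated output."""

    def __init__(self, aggregate_threshold_bytes=4096):
        self.buffer = []
        self.buffered_bytes = 0
        self.threshold = aggregate_threshold_bytes
        self.last_emit_time = time.monotonic()
        self.flow_rate_bps = 0.0            # bytes per second of aggregated output

    def on_packet(self, payload: bytes):
        # Buffer management: store packets until enough data has accumulated.
        self.buffer.append(payload)
        self.buffered_bytes += len(payload)
        if self.buffered_bytes >= self.threshold:
            return self.emit_aggregated()
        return None

    def emit_aggregated(self):
        aggregated = b"".join(self.buffer)
        elapsed = max(time.monotonic() - self.last_emit_time, 1e-6)
        self.flow_rate_bps = len(aggregated) / elapsed
        self.buffer.clear()
        self.buffered_bytes = 0
        self.last_emit_time = time.monotonic()
        return aggregated

    def ack_interval(self):
        # ACK management: space ACKs so the sender's pace roughly matches
        # the aggregated flow rate (illustrative formula).
        return self.threshold / self.flow_rate_bps if self.flow_rate_bps else 0.0

    def receive_window(self, round_trip_time):
        # Window management: advertise roughly one bandwidth-delay product.
        return int(self.flow_rate_bps * round_trip_time)
```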
-
Publication No.: US20220353205A1
Publication Date: 2022-11-03
Application No.: US17866473
Application Date: 2022-07-16
Applicant: ARTERIS, INC.
Inventor: John CODDINGTON
IPC: H04L49/9005 , H04L49/9047 , H04L43/0852 , H04L45/00 , H04L45/44
Abstract: A buffered switch system for end-to-end prevention of data congestion and traffic drops. More specifically, and without limitation, the various aspects and embodiments of the invention relate to the management of a buffered switch so as to avoid the balancing act among buffer sizing, latency, and traffic drops.
-
Publication No.: US20220255884A1
Publication Date: 2022-08-11
Application No.: US17594624
Application Date: 2020-03-23
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Partha Pratim Kundu , David Charles Hewson
IPC: H04L49/9005 , H04L49/90 , H04L49/9047 , G06F13/40
Abstract: A network interface controller (NIC) capable of efficiently utilizing an output buffer is provided. The NIC can be equipped with an output buffer, a host interface, an injector logic block, and an allocation logic block. The output buffer can include a plurality of cells, each of which can be a unit of storage in the output buffer. If the host interface receives a command from a host device, the injector logic block can generate a packet based on the command. The allocation logic block can then determine whether the packet is a multi-cell packet. If the packet is a multi-cell packet, the allocation logic block can determine a virtual index for the packet. The allocation logic block can then store, in an entry in a data structure, the virtual index, and a set of physical indices of cells storing the packet.
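The multi-cell allocation described above, with a virtual index standing in for the set of physical cells, can be sketched roughly as follows. The class name, the free-list allocation strategy, and the dictionary used as the "data structure" are assumptions for illustration only.

```python
class OutputBufferAllocator:
    """Sketch: map a multi-cell packet to a virtual index that refers to the
    set of physical cells holding it."""

    def __init__(self, num_cells, cell_size):
        self.cell_size = cell_size
        self.free_cells = list(range(num_cells))
        self.virtual_table = {}             # virtual index -> list of physical cell indices
        self.next_virtual = 0

    def allocate(self, packet: bytes):
        cells_needed = -(-len(packet) // self.cell_size)   # ceiling division
        if cells_needed > len(self.free_cells):
            raise MemoryError("not enough free cells")
        physical = [self.free_cells.pop() for _ in range(cells_needed)]
        if cells_needed == 1:
            # A single-cell packet needs no virtual index in this sketch.
            return ("physical", physical[0])
        # Multi-cell packet: record the virtual index and its physical cells.
        virtual = self.next_virtual
        self.next_virtual += 1
        self.virtual_table[virtual] = physical
        return ("virtual", virtual)

    def release(self, kind, index):
        if kind == "physical":
            self.free_cells.append(index)
        else:
            self.free_cells.extend(self.virtual_table.pop(index))
```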
-
Publication No.: US20220217098A1
Publication Date: 2022-07-07
Application No.: US17606715
Application Date: 2020-04-02
Applicant: Microsoft Technology Licensing, LLC
Inventor: Zhixiong Niu , Ran Shu , Lei Qu , Peng Chen , Yongqiang Xiong , Guo Chen
IPC: H04L49/102 , H04L49/9005 , H04L49/90 , H04L67/1097
Abstract: In accordance with implementations of the subject matter described herein, a solution for streaming communication between devices is provided. In this solution, a region of memory in a first device, comprising a ring buffer, is allocated and dedicated to storing a data stream of an application to be transmitted to a second device. The application of the first device writes data to be transmitted into the ring buffer, forming a portion of the data stream, and the write pointer of the ring buffer is updated accordingly. A portion of the data is read from the ring buffer, based on a source memory address, via an interface device, which transmits the data portion to the second device, where the read data portion is stored in a dedicated ring buffer in memory. In accordance with the solution, an efficient streaming communication interface is provided between the devices.
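The producer/consumer ring buffer described above can be illustrated with a short sketch. The class name, the byte-level storage, and the monotonic pointer arithmetic are assumptions; the patent itself does not prescribe this layout.

```python
class StreamRingBuffer:
    """Sketch: fixed-size ring buffer with a write pointer advanced by the
    producing application and a read pointer advanced by the interface device."""

    def __init__(self, capacity):
        self.buf = bytearray(capacity)
        self.capacity = capacity
        self.write_ptr = 0                  # next byte the application will write
        self.read_ptr = 0                   # next byte the interface device will read

    def free_space(self):
        return self.capacity - (self.write_ptr - self.read_ptr)

    def write(self, data: bytes):
        # Application side: append data, then advance the write pointer.
        if len(data) > self.free_space():
            raise BufferError("ring buffer full")
        for byte in data:
            self.buf[self.write_ptr % self.capacity] = byte
            self.write_ptr += 1

    def read(self, length):
        # Interface-device side: read up to `length` bytes starting at the
        # read pointer (the "source memory address" in this sketch).
        length = min(length, self.write_ptr - self.read_ptr)
        out = bytes(self.buf[(self.read_ptr + i) % self.capacity] for i in range(length))
        self.read_ptr += length
        return out
```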
-
Publication No.: US20220210097A1
Publication Date: 2022-06-30
Application No.: US17477782
Application Date: 2021-09-17
Applicant: Intel Corporation
Inventor: Ziye YANG
IPC: H04L49/90 , H04L49/103 , H04L49/9005 , H04L9/40 , H04L67/1097
Abstract: Examples described herein relate to at least one processor and circuitry, when operational, to: cause a first number of processors of the at least one processor to access queues exclusively allocated for packets to be processed by the first number of processors; cause a second number of processors of the at least one processor to identify commands consistent with Non-volatile Memory Express (NVMe) over Quick User Data Protocol Internet Connections (QUIC), wherein the commands are received in the packets and the second number is based at least in part on a rate of received commands; and cause performance of the commands using a third number of processors. In some examples, the circuitry, when operational, is to: based on detection of a new connection on a first port, associate the new connection with a second port, wherein the second port is different than the first port and select at least one processor to identify and process commands received on the new connection.
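The three-way split of processors described above, with the command-identification group sized by the rate of received commands, might be sketched as below. The function name, the sizing formula, and the per-group minimums are assumptions made for illustration only.

```python
import math

def plan_processor_allocation(total_cpus, packet_queues, command_rate, commands_per_cpu):
    """Sketch: split processors into a queue-polling group, a command-
    identification group sized by command rate, and an execution group."""
    # First group: one processor per exclusively allocated packet queue.
    poll_cpus = max(1, min(len(packet_queues), total_cpus - 2))
    # Second group: scaled to the rate of received commands.
    identify_cpus = max(1, math.ceil(command_rate / commands_per_cpu))
    # Third group: whatever remains performs the commands.
    execute_cpus = max(1, total_cpus - poll_cpus - identify_cpus)
    return {"poll": poll_cpus, "identify": identify_cpus, "execute": execute_cpus}
```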
-
Publication No.: US12231342B1
Publication Date: 2025-02-18
Application No.: US18117290
Application Date: 2023-03-03
Applicant: Marvell Asia Pte Ltd
Inventor: Bruce Kwan , William Brad Matthews
IPC: H04L47/25 , H04L47/11 , H04L47/62 , H04L49/9005
Abstract: A network device includes ingress queues for storing data units while the data units are being processed by ingress packet processors, and a plurality of egress buffer memories for storing data units received from the ingress queues while the data units are being processed by the egress packet processors. First circuitry controls respective rates at which data units are transferred from ingress queues to egress buffer memories. Second circuitry monitors the egress buffer memories for congestion and sends, to the first circuitry, flow control messages related to congestion of the egress buffer memories. The first circuitry progressively increases over time a rate at which data from each ingress queue are transferred to an egress buffer memory in response to receiving a flow control message that indicates that congestion corresponding to the egress buffer memory has ended.
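The progressive rate increase after a congestion-ended message can be illustrated with the sketch below. The class name, the reduced-rate floor, and the linear step-per-tick ramp are illustrative assumptions; the patent does not specify the ramp shape.

```python
class IngressTransferRateController:
    """Sketch: per-ingress-queue transfer rate that is cut while an egress
    buffer reports congestion and progressively ramped back up over time
    once the congestion-ended message arrives."""

    def __init__(self, full_rate_gbps, reduced_rate_gbps, ramp_step_gbps):
        self.full_rate = full_rate_gbps
        self.reduced_rate = reduced_rate_gbps
        self.ramp_step = ramp_step_gbps
        self.current_rate = full_rate_gbps
        self.ramping = False

    def on_flow_control(self, congestion_active: bool):
        if congestion_active:
            # Egress buffer is congested: drop to the reduced transfer rate.
            self.current_rate = self.reduced_rate
            self.ramping = False
        else:
            # Congestion has ended: begin a progressive increase over time.
            self.ramping = True

    def on_tick(self):
        # Called periodically; each tick raises the rate by one step
        # until the full rate is restored.
        if self.ramping and self.current_rate < self.full_rate:
            self.current_rate = min(self.full_rate, self.current_rate + self.ramp_step)
        return self.current_rate
```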
-
Publication No.: US20250030627A1
Publication Date: 2025-01-23
Application No.: US18907686
Application Date: 2024-10-07
Applicant: Hewlett Packard Enterprise Development LP
Inventor: David Charles Hewson , Partha Kundu
IPC: H04L45/28 , G06F9/50 , G06F9/54 , G06F12/0862 , G06F12/1036 , G06F12/1045 , G06F13/14 , G06F13/16 , G06F13/28 , G06F13/38 , G06F13/40 , G06F13/42 , G06F15/173 , H04L1/00 , H04L43/0876 , H04L43/10 , H04L45/00 , H04L45/02 , H04L45/021 , H04L45/028 , H04L45/12 , H04L45/122 , H04L45/125 , H04L45/16 , H04L45/24 , H04L45/42 , H04L45/745 , H04L45/7453 , H04L47/10 , H04L47/11 , H04L47/12 , H04L47/122 , H04L47/20 , H04L47/22 , H04L47/24 , H04L47/2441 , H04L47/2466 , H04L47/2483 , H04L47/30 , H04L47/32 , H04L47/34 , H04L47/52 , H04L47/62 , H04L47/625 , H04L47/6275 , H04L47/629 , H04L47/76 , H04L47/762 , H04L47/78 , H04L47/80 , H04L49/00 , H04L49/101 , H04L49/15 , H04L49/90 , H04L49/9005 , H04L49/9047 , H04L67/1097 , H04L69/22 , H04L69/28 , H04L69/40
Abstract: A network interface controller (NIC) capable of efficient load balancing among the hardware engines is provided. The NIC can be equipped with a plurality of ordering control units (OCUs), a queue, a selection logic block, and an allocation logic block. The selection logic block can determine, from the plurality of OCUs, an OCU for a command from the queue, which can store one or more commands. The allocation logic block can then determine a selection setting for the OCU, select an egress queue for the command based on the selection setting, and send the command to the egress queue.
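The OCU-then-egress-queue selection described above might look roughly like the sketch below. Modelling the "selection setting" as a per-OCU round-robin cycle, and choosing the OCU by hashing an ordering key, are illustrative assumptions not taken from the patent.

```python
import itertools

class CommandLoadBalancer:
    """Sketch: pick an ordering control unit (OCU) for each command, then use
    that OCU's selection setting to choose an egress queue."""

    def __init__(self, egress_queues_per_ocu):
        # Each OCU's "selection setting" is modelled as a round-robin cycle
        # over the egress queues that OCU is allowed to use.
        self.ocu_settings = [itertools.cycle(queues) for queues in egress_queues_per_ocu]

    def dispatch(self, command):
        # Commands with the same ordering key map to the same OCU so that
        # their relative order is preserved.
        ocu = hash(command["ordering_key"]) % len(self.ocu_settings)
        egress_queue = next(self.ocu_settings[ocu])
        return ocu, egress_queue


# Usage sketch: two OCUs, each with its own pair of egress queues.
balancer = CommandLoadBalancer([["eq0", "eq1"], ["eq2", "eq3"]])
print(balancer.dispatch({"ordering_key": "flow-7", "opcode": "put"}))
```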
-
Publication No.: US12177277B2
Publication Date: 2024-12-24
Application No.: US17313353
Application Date: 2021-05-06
Applicant: Intel Corporation
Inventor: Lokpraveen Mosur , Ilango Ganga , Robert Cone , Kshitij Arun Doshi , John J. Browne , Mark Debbage , Stephen Doyle , Patrick Fleming , Doddaballapur Jayasimha
IPC: H04L65/61 , H04L47/50 , H04L49/9005
Abstract: In one embodiment, a system includes a device and a host. The device includes a device stream buffer. The host includes a processor to execute at least a first application and a second application, a host stream buffer, and a host scheduler. The first application is associated with a first transmit streaming channel to stream first data from the first application to the device stream buffer. The first transmit streaming channel has a first allocated amount of buffer space in the device stream buffer. The host scheduler schedules enqueue of the first data from the first application to the first transmit streaming channel based at least in part on availability of space in the first allocated amount of buffer space in the device stream buffer. Other embodiments are described and claimed.
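The space-aware scheduling described above resembles credit-based accounting of the channel's allocation in the device stream buffer; a rough sketch follows. The class and function names, and treating drained data as the event that returns space, are assumptions for illustration only.

```python
class TransmitStreamingChannel:
    """Sketch: a transmit streaming channel with a fixed allocation of device
    stream-buffer space, consumed on enqueue and returned when the device
    drains the data."""

    def __init__(self, allocated_bytes):
        self.allocated = allocated_bytes
        self.in_flight = 0                  # bytes currently occupying device buffer space

    def can_enqueue(self, size):
        return self.in_flight + size <= self.allocated

    def enqueue(self, size):
        if not self.can_enqueue(size):
            return False                    # scheduler must wait for space
        self.in_flight += size
        return True

    def on_device_drained(self, size):
        self.in_flight -= size


def host_scheduler(pending, channel):
    """Enqueue pending (app_id, size) items only while allocated space remains."""
    sent = []
    while pending and channel.can_enqueue(pending[0][1]):
        app_id, size = pending.pop(0)
        channel.enqueue(size)
        sent.append(app_id)
    return sent
```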
-
Publication No.: US20240323113A1
Publication Date: 2024-09-26
Application No.: US18677994
Application Date: 2024-05-30
Applicant: Hewlett Packard Enterprise Development LP
Inventor: Jonathan P. Beecroft , Abdulla M. Bataineh , Thomas L. Court
IPC: H04L45/28 , G06F9/50 , G06F9/54 , G06F12/0862 , G06F12/1036 , G06F12/1045 , G06F13/14 , G06F13/16 , G06F13/28 , G06F13/38 , G06F13/40 , G06F13/42 , G06F15/173 , H04L1/00 , H04L43/0876 , H04L43/10 , H04L45/00 , H04L45/02 , H04L45/021 , H04L45/028 , H04L45/12 , H04L45/122 , H04L45/125 , H04L45/16 , H04L45/24 , H04L45/42 , H04L45/745 , H04L45/7453 , H04L47/10 , H04L47/11 , H04L47/12 , H04L47/122 , H04L47/20 , H04L47/22 , H04L47/24 , H04L47/2441 , H04L47/2466 , H04L47/2483 , H04L47/30 , H04L47/32 , H04L47/34 , H04L47/52 , H04L47/62 , H04L47/625 , H04L47/6275 , H04L47/629 , H04L47/76 , H04L47/762 , H04L47/78 , H04L47/80 , H04L49/00 , H04L49/101 , H04L49/15 , H04L49/90 , H04L49/9005 , H04L49/9047 , H04L67/1097 , H04L69/22 , H04L69/28 , H04L69/40
CPC classification number: H04L45/28 , G06F9/505 , G06F9/546 , G06F12/0862 , G06F12/1036 , G06F12/1063 , G06F13/14 , G06F13/16 , G06F13/1642 , G06F13/1673 , G06F13/1689 , G06F13/28 , G06F13/385 , G06F13/4022 , G06F13/4068 , G06F13/4221 , G06F15/17331 , H04L1/0083 , H04L43/0876 , H04L43/10 , H04L45/02 , H04L45/021 , H04L45/028 , H04L45/122 , H04L45/123 , H04L45/125 , H04L45/16 , H04L45/20 , H04L45/22 , H04L45/24 , H04L45/38 , H04L45/42 , H04L45/46 , H04L45/566 , H04L45/70 , H04L45/745 , H04L45/7453 , H04L47/11 , H04L47/12 , H04L47/122 , H04L47/18 , H04L47/20 , H04L47/22 , H04L47/24 , H04L47/2441 , H04L47/2466 , H04L47/2483 , H04L47/30 , H04L47/32 , H04L47/323 , H04L47/34 , H04L47/39 , H04L47/52 , H04L47/621 , H04L47/6235 , H04L47/626 , H04L47/6275 , H04L47/629 , H04L47/76 , H04L47/762 , H04L47/781 , H04L47/80 , H04L49/101 , H04L49/15 , H04L49/30 , H04L49/3009 , H04L49/3018 , H04L49/3027 , H04L49/90 , H04L49/9005 , H04L49/9021 , H04L49/9036 , H04L49/9047 , H04L67/1097 , H04L69/22 , H04L69/40 , G06F2212/50 , G06F2213/0026 , G06F2213/3808 , H04L69/28
Abstract: Data-driven intelligent networking systems and methods are provided. The system can accommodate dynamic traffic with fast, effective flow control of individual applications and traffic flows in conjunction with an end host. The system can maintain state information of individual packet flows, which can be set up or released dynamically based on injected data. Each flow can be provided with a flow-specific input queue upon arriving at a switch. Packets of a respective flow can be acknowledged after reaching the egress point of the network, and the acknowledgement packets can be sent back to the ingress point of the flow along the same data path. As a result, an ingress edge switch can perform fine-grained flow control of individual sources of the flows residing on an end host.
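The ingress-edge behaviour described above, with per-flow state set up dynamically and released by end-to-end acknowledgements, can be sketched as below. The class name, the per-flow outstanding-byte cap, and the hold/forward decision are illustrative assumptions, not the patent's specific mechanism.

```python
class IngressEdgeFlowControl:
    """Sketch: per-flow state set up on first packet; outstanding (unacknowledged)
    data is capped per flow, and end-to-end ACKs returning along the data path
    release that budget."""

    def __init__(self, max_outstanding_bytes):
        self.flows = {}                     # flow_id -> bytes awaiting an egress ACK
        self.max_outstanding = max_outstanding_bytes

    def on_packet_from_host(self, flow_id, size):
        outstanding = self.flows.setdefault(flow_id, 0)   # dynamic flow setup
        if outstanding + size > self.max_outstanding:
            return "hold"                   # throttle this source at the ingress edge
        self.flows[flow_id] = outstanding + size
        return "forward"

    def on_ack_from_egress(self, flow_id, size):
        # ACK generated at the network egress point and returned along the
        # same data path; it frees the flow's outstanding budget.
        self.flows[flow_id] = max(0, self.flows.get(flow_id, 0) - size)
        if self.flows[flow_id] == 0:
            del self.flows[flow_id]         # release flow state dynamically
```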