RECEIVING PACKET DATA
    Invention Publication

    Publication No.: US20230336503A1

    Publication Date: 2023-10-19

    Application No.: US17721771

    Filing Date: 2022-04-15

    Inventor: Shachar RAINDEL

    CPC classification number: H04L49/9036 H04L49/9084 H04L49/9068

    Abstract: Embodiments of the present disclosure include techniques for receiving and processing packets. A program configures a network interface to store data from each received packet in one or more packet buffers. If data from a packet exceeds the capacity of the assigned packet buffers, remaining data from the packet may be stored in an overflow buffer. The packet may then be deleted efficiently without delays resulting from handling the remaining data.
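The buffer arrangement described above can be sketched roughly as follows. This is an illustrative model only; the class, constant, and method names are assumptions, not taken from the patent.

```python
BUFFER_SIZE = 64  # capacity of each assigned packet buffer (assumed value)

class Receiver:
    def __init__(self, num_buffers=2):
        # per-packet buffers configured ahead of time by the program
        self.packet_buffers = [bytearray() for _ in range(num_buffers)]
        self.overflow = bytearray()  # overflow buffer for excess data

    def receive(self, packet: bytes):
        """Spread packet data across the assigned buffers; spill the rest."""
        for buf in self.packet_buffers:
            chunk, packet = packet[:BUFFER_SIZE], packet[BUFFER_SIZE:]
            buf[:] = chunk
            if not packet:
                return
        # data exceeding the assigned buffers lands in the overflow buffer
        self.overflow[:] = packet

    def drop(self):
        """Deleting the packet is cheap: only the overflow state is reset,
        so no per-byte handling of the remaining data is needed."""
        self.overflow.clear()

rx = Receiver()
rx.receive(b"x" * 150)   # 150 bytes exceed the 2 * 64 assigned capacity
print(len(rx.overflow))  # 22 bytes spilled into the overflow buffer
rx.drop()
print(len(rx.overflow))  # 0
```

The point of the overflow buffer in this sketch is that dropping the packet never touches the excess data itself, which matches the abstract's claim of deletion "without delays resulting from handling the remaining data."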

    SCHEDULING PAGE MIGRATIONS USING LATENCY TOLERANCE

    Publication No.: US20240004683A1

    Publication Date: 2024-01-04

    Application No.: US17853655

    Filing Date: 2022-06-29

    CPC classification number: G06F9/45558 G06F2009/4557

    Abstract: Solutions for scheduling page migrations use the latency tolerance of coupled devices, such as external peripheral devices (e.g., network adapters), to prevent buffer overflows or other performance degradation. A latency tolerance of a device coupled to a virtual object, such as a virtual machine (VM), is determined. This may include the device exposing its latency tolerance using latency tolerance reporting (LTR). When a page migration for the virtual object is pending, a determination is made whether sufficient time exists to perform the page migration, based on at least the latency tolerance of the device. The page migration is performed if sufficient time exists. Otherwise, the page migration is delayed. In some examples, latency tolerances of multiple devices are considered. In some examples, multiple page migrations are performed contemporaneously, based on latency tolerances. Various options are disclosed, such as the page migration being performed by the virtual object software or the device.
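The scheduling decision can be sketched as a budget check against the tightest device tolerance. All names and the microsecond units are assumptions for illustration; real LTR values come from PCIe latency tolerance reporting, not from software constants.

```python
def can_migrate(device_tolerances_us, migration_cost_us):
    """The tightest latency tolerance across coupled devices bounds the
    stall a migration may introduce; migrate only when the cost fits."""
    return migration_cost_us <= min(device_tolerances_us)

def schedule(pending_pages, device_tolerances_us, per_page_cost_us):
    """Migrate as many pending pages as the latency budget allows;
    delay the rest (mirroring the perform-or-delay decision above)."""
    migrated, delayed = [], []
    budget_us = min(device_tolerances_us)  # e.g. exposed via LTR
    for page in pending_pages:
        if per_page_cost_us <= budget_us:
            migrated.append(page)
            # contemporaneous migrations share the same latency budget
            budget_us -= per_page_cost_us
        else:
            delayed.append(page)
    return migrated, delayed

done, later = schedule(["p0", "p1", "p2"], [250, 120], per_page_cost_us=50)
print(done, later)  # ['p0', 'p1'] migrated; ['p2'] delayed
```

Deducting each migration's cost from a shared budget is one plausible reading of performing "multiple page migrations ... contemporaneously, based on latency tolerances."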

    CACHE REPLACEMENT POLICY OPTIMIZATION FOR PRODUCER-CONSUMER FLOWS

    Publication No.: US20230305968A1

    Publication Date: 2023-09-28

    Application No.: US17706044

    Filing Date: 2022-03-28

    Abstract: Embodiments of the present disclosure include techniques for cache memory replacement in a processing unit. A first data production operation to store first data to a first cache line of the cache memory is detected at a first time. A retention status of the first cache line is updated to a first retention level as a result of the first data production operation. Protection against displacement of the first data in the first cache line is increased based on the first retention level. A first data consumption operation retrieving the first data from the first cache line is detected at a second time after the first time. The retention status of the first cache line is updated to a second retention level as a result of the first data consumption operation, the second retention level being a lower level of retention than the first retention level.
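The produce-then-consume retention policy can be modeled with two retention levels and a victim-selection rule. This is a minimal sketch under assumed names and constants; a hardware policy would track retention per line in the replacement logic, not in a dictionary.

```python
PRODUCED, CONSUMED = 2, 1  # higher level = stronger protection (assumed)

class Cache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = {}  # addr -> (data, retention_level)

    def _evict(self):
        # victim selection prefers lines with the lowest retention level
        victim = min(self.lines, key=lambda a: self.lines[a][1])
        del self.lines[victim]

    def produce(self, addr, data):
        if addr not in self.lines and len(self.lines) >= self.capacity:
            self._evict()
        # production raises retention, protecting data until it is consumed
        self.lines[addr] = (data, PRODUCED)

    def consume(self, addr):
        data, _ = self.lines[addr]
        # consumption lowers retention: the line becomes a cheap victim
        self.lines[addr] = (data, CONSUMED)
        return data

c = Cache(capacity=2)
c.produce(0x10, "a")
c.produce(0x20, "b")
c.consume(0x20)       # 0x20 drops to the lower retention level
c.produce(0x30, "c")  # eviction picks 0x20, not the unconsumed 0x10
print(sorted(hex(a) for a in c.lines))  # ['0x10', '0x30']
```

The design choice this illustrates: in a producer-consumer flow, data that has already been consumed is unlikely to be reused, so demoting it on consumption frees the cache to protect data still awaiting its consumer.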

    PARTIAL MEMORY UPDATES USING PRESET TEMPLATES

    Publication No.: US20230305739A1

    Publication Date: 2023-09-28

    Application No.: US17706088

    Filing Date: 2022-03-28

    CPC classification number: G06F3/0655 G06F3/0604 G06F3/0679

    Abstract: Embodiments of the present disclosure include techniques for partial memory updates in a computer system. A data structure template is received. A first write data of a first write operation is received from a first data source, the first write operation performed in connection with provisioning of a first data payload to memory communicatively coupled with a processing unit. A first merge operation is performed involving the first write data and the first data structure template to obtain a first data structure update. The first data structure update is written to the memory, thereby improving efficiency of updating a first data structure associated with the first data payload.
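The merge of partial write data into a preset template can be sketched as an overlay operation. The field names and template shape are invented for illustration; the patent does not specify the structure's layout.

```python
# Preset template: fixed fields are pre-populated, variable fields are blank.
TEMPLATE = {"header": 0xAB, "flags": 0, "length": 0, "payload": b""}

def merge_update(template, write_data):
    """Overlay only the fields the write operation actually carries onto
    the preset template, yielding a complete structure update in one pass
    instead of assembling the whole structure from scratch."""
    update = dict(template)    # start from the preset template
    update.update(write_data)  # partial write data wins where present
    return update

# The data source supplies only the fields it knows about.
update = merge_update(TEMPLATE, {"length": 4, "payload": b"abcd"})
print(update["header"], update["length"])  # 171 4
```

The efficiency claim in the abstract maps here to the writer never having to reconstruct or re-send the fixed fields: the template carries them, and only the delta travels with each write.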

    NETWORK LATENCY ESTIMATION IN DISTRIBUTED COMPUTING SYSTEMS

    Publication No.: US20220360511A1

    Publication Date: 2022-11-10

    Application No.: US17812525

    Filing Date: 2022-07-14

    Inventor: Shachar RAINDEL

    Abstract: Techniques for network latency estimation in a computer network are disclosed herein. One example technique includes instructing first and second nodes in the computer network to individually perform traceroute operations along a first round-trip route and a second round-trip route between the first and second nodes. The first round-trip route includes an inbound network path of an existing round-trip route between the first and second nodes and an outbound network path that is a reverse of the inbound network path. The second round-trip route has an outbound network path of the existing round-trip route and an inbound network path that is a reverse of the outbound network path. The example technique further includes, upon receiving traceroute information from the traceroute operations, determining a latency difference between the inbound and outbound network paths of the existing round-trip route based on the received traceroute information.
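The arithmetic behind the two constructed round trips can be sketched directly. Function and variable names are assumptions; the key observation is that each constructed route traverses a single path in both directions, so its round-trip time is roughly twice that path's one-way latency.

```python
def path_latency_difference(rtt_inbound_both_ways_ms, rtt_outbound_both_ways_ms):
    """First route: existing inbound path out and back -> RTT ~ 2 * L_in.
    Second route: existing outbound path out and back -> RTT ~ 2 * L_out.
    Halving each RTT and subtracting estimates the latency asymmetry of
    the existing (possibly asymmetric) round-trip route."""
    inbound_one_way = rtt_inbound_both_ways_ms / 2
    outbound_one_way = rtt_outbound_both_ways_ms / 2
    return inbound_one_way - outbound_one_way

print(path_latency_difference(12.0, 9.0))  # 1.5 ms more on the inbound path
```

This decomposition is the reason for running traceroutes over the two mirrored routes rather than the existing route itself: a single asymmetric RTT cannot separate inbound from outbound delay, but two symmetric RTTs can.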
