VIRTUAL MACHINE MIGRATION WHILE MAINTAINING LIVE NETWORK LINKS

    Publication number: US20210117224A1

    Publication date: 2021-04-22

    Application number: US17134305

    Application date: 2020-12-26

    Abstract: Disclosed is a source host including a processor. The processor operates a virtual machine (VM) to communicate network traffic over a communication link. The processor also initiates migration of the VM to a destination host. The processor also suspends the VM during migration of the VM to the destination host. The source host also includes a live migration circuit coupled to the processor. The live migration circuit manages a session associated with the communication link while the VM is suspended during migration. The live migration circuit buffers changes to a session state and transfers the buffered session state changes to the destination host for replay after the VM is reactivated on the destination host. The live migration circuit keeps the sessions alive during migration to alleviate connection losses.

    Virtual machine migration while maintaining live network links

    Publication number: US11537419B2

    Publication date: 2022-12-27

    Application number: US15395884

    Application date: 2016-12-30

    Abstract: Disclosed is a source host including a processor. The processor operates a virtual machine (VM) to communicate network traffic over a communication link. The processor also initiates migration of the VM to a destination host. The processor also suspends the VM during migration of the VM to the destination host. The source host also includes a live migration circuit coupled to the processor. The live migration circuit manages a session associated with the communication link while the VM is suspended during migration. The live migration circuit buffers changes to a session state and transfers the buffered session state changes to the destination host for replay after the VM is reactivated on the destination host. The live migration circuit keeps the sessions alive during migration to alleviate connection losses.
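    The two records above belong to the same patent family and describe the same mechanism: while the VM is suspended for migration, a live migration circuit keeps its network sessions alive, buffers changes to the session state, and ships the buffered changes to the destination host for replay once the VM resumes. The C sketch below illustrates that buffer-and-replay idea in software under assumed data layouts; every identifier in it (session_delta, delta_ring, buffer_delta, replay_deltas) is hypothetical and not taken from the patent.

/* Minimal sketch of the session buffer-and-replay idea described above.
 * All names and field layouts here are illustrative assumptions. */
#include <stdint.h>

#define MAX_DELTAS 1024

/* One buffered change to a connection's state (hypothetical layout). */
struct session_delta {
    uint32_t session_id;   /* which connection the change belongs to */
    uint32_t seq;          /* last sequence number seen              */
    uint32_t ack;          /* last acknowledgment number seen        */
    uint16_t window;       /* advertised receive window              */
};

/* Ring of deltas accumulated while the VM is suspended. */
struct delta_ring {
    struct session_delta deltas[MAX_DELTAS];
    uint32_t head, tail;
};

/* Record a state change instead of losing it while the VM is down. */
int buffer_delta(struct delta_ring *r, const struct session_delta *d)
{
    uint32_t next = (r->tail + 1) % MAX_DELTAS;
    if (next == r->head)
        return -1;                      /* ring full: caller must flush first */
    r->deltas[r->tail] = *d;
    r->tail = next;
    return 0;
}

/* On the destination host, after the VM is reactivated, replay every
 * buffered change into the resumed VM's session table. */
void replay_deltas(struct delta_ring *r,
                   void (*apply)(const struct session_delta *))
{
    while (r->head != r->tail) {
        apply(&r->deltas[r->head]);
        r->head = (r->head + 1) % MAX_DELTAS;
    }
}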

    TECHNOLOGIES FOR SCALABLE PACKET RECEPTION AND TRANSMISSION

    Publication number: US20190327190A1

    Publication date: 2019-10-24

    Application number: US16460424

    Application date: 2019-07-02

    Abstract: Technologies for scalable packet reception and transmission include a network device. The network device establishes a ring that is defined as a circular buffer and includes a plurality of slots to store entries representative of packets. The network device also generates and assigns receive descriptors to the slots in the ring. Each receive descriptor includes a pointer to a corresponding memory buffer to store packet data. The network device further determines whether the network interface controller (NIC) has received one or more packets and, in response to a determination that the NIC has received one or more packets, copies the packet data of the received packets with direct memory access (DMA) from the NIC to the memory buffers associated with the receive descriptors assigned to the slots in the ring.
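    The abstract above describes a receive ring: a circular buffer of slots, each assigned a receive descriptor that points at a memory buffer the NIC fills by DMA. The C sketch below shows one plausible layout for such a ring and a poll loop that drains completed slots; the names (rx_desc, rx_ring, rx_ring_init, rx_ring_poll) and the completion flag are assumptions for illustration, not the patent's design.

/* Illustrative sketch of a receive descriptor ring; names are hypothetical. */
#include <stdint.h>
#include <stdlib.h>

#define RING_SIZE 256          /* number of slots in the circular buffer */
#define BUF_SIZE  2048         /* bytes available per packet buffer      */

/* A receive descriptor: points at the memory buffer that will hold the
 * packet data the NIC copies in via DMA. */
struct rx_desc {
    uint8_t  *buf;             /* packet data destination                */
    uint16_t  len;             /* filled in once a packet has arrived    */
    uint8_t   done;            /* set when DMA for this slot completed   */
};

struct rx_ring {
    struct rx_desc slots[RING_SIZE];
    uint32_t next_to_clean;    /* next slot the driver will process      */
};

/* Establish the ring: assign a descriptor with a fresh buffer to every slot. */
int rx_ring_init(struct rx_ring *r)
{
    for (uint32_t i = 0; i < RING_SIZE; i++) {
        r->slots[i].buf = malloc(BUF_SIZE);
        if (!r->slots[i].buf)
            return -1;
        r->slots[i].len  = 0;
        r->slots[i].done = 0;
    }
    r->next_to_clean = 0;
    return 0;
}

/* Drain completed slots: for each descriptor the NIC has marked done,
 * hand the packet to the stack and recycle the slot. */
void rx_ring_poll(struct rx_ring *r,
                  void (*deliver)(uint8_t *data, uint16_t len))
{
    while (r->slots[r->next_to_clean].done) {
        struct rx_desc *d = &r->slots[r->next_to_clean];
        deliver(d->buf, d->len);
        d->done = 0;                       /* slot is reusable again */
        d->len  = 0;
        r->next_to_clean = (r->next_to_clean + 1) % RING_SIZE;
    }
}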

    Technologies for scalable network packet processing with lock-free rings

    Publication number: US10999209B2

    Publication date: 2021-05-04

    Application number: US15635581

    Application date: 2017-06-28

    Abstract: Technologies for network packet processing include a computing device that receives incoming network packets. The computing device adds the incoming network packets to an input lockless shared ring, and then classifies the network packets. After classification, the computing device adds the network packets to multiple lockless shared traffic class rings, with each ring associated with a traffic class and output port. The computing device may allocate bandwidth between network packets active during a scheduling quantum in the traffic class rings associated with an output port, schedule the network packets in the traffic class rings for transmission, and then transmit the network packets in response to scheduling. The computing device may perform traffic class separation in parallel with bandwidth allocation and traffic scheduling. In some embodiments, the computing device may perform bandwidth allocation and/or traffic scheduling on each traffic class ring in parallel. Other embodiments are described and claimed.
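    The abstract above describes an input lockless shared ring that feeds per-traffic-class rings keyed by output port. Below is a minimal single-producer/single-consumer lock-free ring using C11 atomics, plus a classification stage that fans packets out to per-port, per-class rings. This is a sketch under assumed types and limits (pkt, pkt_ring, eight traffic classes); none of the identifiers come from the patent, and a multi-producer design would need additional synchronization.

/* Lock-free SPSC packet ring and a classification stage; names are illustrative. */
#include <stdatomic.h>
#include <stdint.h>
#include <stddef.h>

#define RING_SLOTS 1024        /* must be a power of two */
#define NUM_CLASSES 8          /* assumed number of traffic classes */

struct pkt {
    uint8_t  traffic_class;    /* set during classification          */
    uint16_t out_port;         /* output port chosen by classification */
    uint16_t len;
    uint8_t  data[1500];
};

/* One lockless shared ring: one producer enqueues, one consumer dequeues,
 * synchronised only through the atomic head/tail indices. */
struct pkt_ring {
    struct pkt *slots[RING_SLOTS];
    _Atomic uint32_t head;     /* consumer index */
    _Atomic uint32_t tail;     /* producer index */
};

int ring_enqueue(struct pkt_ring *r, struct pkt *p)
{
    uint32_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    uint32_t head = atomic_load_explicit(&r->head, memory_order_acquire);
    if (tail - head == RING_SLOTS)
        return -1;                                    /* ring full  */
    r->slots[tail & (RING_SLOTS - 1)] = p;
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
    return 0;
}

struct pkt *ring_dequeue(struct pkt_ring *r)
{
    uint32_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    uint32_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (head == tail)
        return NULL;                                  /* ring empty */
    struct pkt *p = r->slots[head & (RING_SLOTS - 1)];
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return p;
}

/* Classification stage: drain the shared input ring and fan packets out to
 * per-traffic-class rings, one array of rings per output port. */
void classify(struct pkt_ring *input,
              struct pkt_ring tc_rings[][NUM_CLASSES], int num_ports)
{
    struct pkt *p;
    while ((p = ring_dequeue(input)) != NULL) {
        if (p->out_port < (uint16_t)num_ports && p->traffic_class < NUM_CLASSES)
            ring_enqueue(&tc_rings[p->out_port][p->traffic_class], p);
    }
}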

    MEMORY RING-BASED JOB DISTRIBUTION FOR PROCESSOR CORES AND CO-PROCESSORS

    Publication number: US20180285154A1

    Publication date: 2018-10-04

    Application number: US15473885

    Application date: 2017-03-30

    Abstract: An apparatus includes a processor, a co-processor and a memory ring. The memory ring includes a plurality of slots that are associated with a plurality of jobs. The processor applies a set of rules and, based on that application, selectively accesses a first slot of the plurality of slots to read first data stored in the first slot representing a first job of the plurality of jobs, and processes the first job based on the first data. The co-processor applies the same set of rules and, based on that application, accesses a second slot of the plurality of slots other than the first slot to read second data representing a second job of the plurality of jobs, and processes the second job based on the second data.
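    The abstract above describes a processor and a co-processor that apply one shared set of rules so that each claims different slots from the same memory ring. The C sketch below models that idea with a rule keyed on job kind plus an atomic claim flag to resolve any residual overlap; the job kinds, names (job, job_ring, rule_allows, process_ring), and the rule itself are assumptions for illustration only, not the patent's rules.

/* Sketch of rule-based slot selection on a shared job ring; names are illustrative. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define JOB_SLOTS 64

enum job_kind { JOB_GENERAL = 0, JOB_CRYPTO = 1 };   /* example rule input */

struct job {
    enum job_kind kind;
    _Atomic int   claimed;     /* 0 = free, 1 = taken by some worker */
    void        (*run)(struct job *);
};

struct job_ring {
    struct job slots[JOB_SLOTS];
};

/* The shared rule: the co-processor takes offloadable jobs, the processor
 * takes the rest. Because both sides apply the same rule, they target
 * different slots; the atomic claim flag settles any remaining overlap. */
bool rule_allows(enum job_kind kind, bool is_coprocessor)
{
    return is_coprocessor ? (kind == JOB_CRYPTO) : (kind == JOB_GENERAL);
}

/* Worker loop shared by the processor and the co-processor. */
void process_ring(struct job_ring *r, bool is_coprocessor)
{
    for (int i = 0; i < JOB_SLOTS; i++) {
        struct job *j = &r->slots[i];
        if (j->run == NULL)
            continue;                               /* empty slot            */
        if (!rule_allows(j->kind, is_coprocessor))
            continue;                               /* rule says: not ours   */
        int expected = 0;
        if (atomic_compare_exchange_strong(&j->claimed, &expected, 1))
            j->run(j);                              /* read the slot's data and process the job */
    }
}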
