Method and system for managing network-to-network interconnection

    Publication Number: US11575540B2

    Publication Date: 2023-02-07

    Application Number: US17671265

    Application Date: 2022-02-14

    Abstract: This disclosure describes methods and systems to externally manage network-to-network interconnect configuration data in conjunction with a centralized database subsystem. An example of the methods includes receiving and storing, in the centralized database subsystem, data indicative of user intent to interconnect at least a first network and a second network. The example method further includes, based at least in part on the data indicative of user intent, determining and storing, in the centralized database subsystem, a network intent that corresponds to the user intent. The example method further includes providing data indicative of the network intent from the centralized database subsystem to a first data plane adaptor, associated with the first network, and a second data plane adaptor, associated with the second network.
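    A minimal illustrative sketch of the flow this abstract describes: a user intent is stored centrally, a network intent is derived from it, and that intent is pushed to a data plane adaptor for each network. The names (CentralizedDB, DataPlaneAdaptor, derive_and_store_network_intent) and the toy VLAN translation are hypothetical, not taken from the patent.

# Hypothetical sketch of the user-intent -> network-intent -> adaptor flow.
from dataclasses import dataclass, field

@dataclass
class DataPlaneAdaptor:
    """Receives network intent for one network and applies it locally."""
    network: str

    def apply(self, network_intent: dict) -> None:
        print(f"[{self.network}] configuring interconnect: {network_intent}")

@dataclass
class CentralizedDB:
    """Central store for user intents and the network intents derived from them."""
    user_intents: list = field(default_factory=list)
    network_intents: list = field(default_factory=list)

    def store_user_intent(self, intent: dict) -> None:
        self.user_intents.append(intent)

    def derive_and_store_network_intent(self, user_intent: dict) -> dict:
        # Toy translation: expand "interconnect A and B" into per-network
        # configuration (here, just a shared VLAN and the peer list).
        network_intent = {
            "vlan": 100,
            "peers": [user_intent["first_network"], user_intent["second_network"]],
        }
        self.network_intents.append(network_intent)
        return network_intent

# Usage: record the intent, derive configuration, hand it to both adaptors.
db = CentralizedDB()
user_intent = {"first_network": "net-A", "second_network": "net-B"}
db.store_user_intent(user_intent)
net_intent = db.derive_and_store_network_intent(user_intent)
for adaptor in (DataPlaneAdaptor("net-A"), DataPlaneAdaptor("net-B")):
    adaptor.apply(net_intent)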

    HIGHLY-AVAILABLE DISTRIBUTED NETWORK ADDRESS TRANSLATION (NAT) ARCHITECTURE WITH FAILOVER SOLUTIONS

    Publication Number: US20210103507A1

    Publication Date: 2021-04-08

    Application Number: US16592613

    Application Date: 2019-10-03

    Abstract: This disclosure describes techniques for providing a distributed scalable architecture for Network Address Translation (NAT) systems with high availability and mitigations for flow breakage during failover events. The NAT servers may include functionality to serve as fast-path servers and/or slow-path servers. A fast-path server may include a NAT worker that includes a cache of NAT mappings to perform stateful network address translation and to forward packets with minimal latency. A slow-path server may include a mapping server that creates new NAT mappings, deprecates old ones, and answers NAT worker state requests. The NAT system may use virtual mapping servers (VMSs) running on primary physical servers with state-duplicated VMSs on different physical failover servers. Additionally, the NAT servers may implement failover solutions for dynamically allocated routable address/port pairs assigned to new sessions by assigning new outbound address/port pairs when a session starts and broadcasting pairing information.
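    A minimal sketch of the fast-path/slow-path split described above: a NAT worker translates from a local mapping cache and asks a mapping server only on a cache miss. The class names, the port allocator, and the IP addresses are hypothetical illustrations, not the patent's implementation.

# Hypothetical sketch: fast-path NAT worker backed by a slow-path mapping server.
import itertools

class MappingServer:
    """Slow path: creates new NAT mappings and answers worker state requests."""
    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self._ports = itertools.count(10000)   # toy allocator for outbound ports
        self.mappings = {}                     # (src_ip, src_port) -> (pub_ip, pub_port)

    def create_mapping(self, flow):
        mapping = (self.public_ip, next(self._ports))
        self.mappings[flow] = mapping
        # A real system would also broadcast the pairing to failover replicas here.
        return mapping

class NATWorker:
    """Fast path: translates packets from a local cache of NAT mappings."""
    def __init__(self, mapping_server: MappingServer):
        self.mapping_server = mapping_server
        self.cache = {}

    def translate(self, src_ip: str, src_port: int):
        flow = (src_ip, src_port)
        if flow not in self.cache:                       # miss -> slow path
            self.cache[flow] = self.mapping_server.create_mapping(flow)
        return self.cache[flow]

worker = NATWorker(MappingServer("203.0.113.7"))
print(worker.translate("10.0.0.5", 43210))   # slow path on the first packet
print(worker.translate("10.0.0.5", 43210))   # served from the cache thereafter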

    REACTIVE APPROACH TO RESOURCE ALLOCATION FOR MICRO-SERVICES BASED INFRASTRUCTURE

    Publication Number: US20200328977A1

    Publication Date: 2020-10-15

    Application Number: US16380401

    Application Date: 2019-04-10

    Abstract: Systems, methods, and computer-readable media are provided for predictive content pre-fetching and allocation of resources for providing network service access. In some examples, traffic in a network environment is monitored and a network service related to a requested network service is recognized. A UDP probe for the related network service is sent to at least one candidate server of a plurality of candidate servers within the network environment. A candidate server of the plurality of candidate servers is selected for provisioning of the related network service. The candidate server gathers one or more pre-fetched resources for provisioning the related network service. Accordingly, traffic associated with provisioning of the related network service can be steered to the candidate server by a load balancer for provisioning of the related network service using the one or more pre-fetched resources.
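    A minimal sketch of the probing step: send a small UDP probe to each candidate server and pick the fastest responder, which would then pre-fetch resources before the load balancer steers traffic to it. The probe payload, port, addresses, and selection rule (lowest round-trip time) are assumptions for illustration only.

# Hypothetical sketch: probe candidate servers over UDP and pick one to pre-fetch.
import socket
import time

def probe(host: str, port: int, payload: bytes = b"probe", timeout: float = 0.5):
    """Send a small UDP probe and return the round-trip time, or None on failure."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        start = time.monotonic()
        try:
            sock.sendto(payload, (host, port))
            sock.recvfrom(1024)                 # candidate echoes or acknowledges
            return time.monotonic() - start
        except OSError:                         # timeout or unreachable network
            return None

def select_candidate(candidates):
    """Choose the candidate with the lowest probe latency."""
    timed = [(probe(h, p), (h, p)) for h, p in candidates]
    reachable = [(rtt, c) for rtt, c in timed if rtt is not None]
    return min(reachable)[1] if reachable else None

# Usage (addresses are placeholders): the selected server gathers the
# pre-fetched resources for the related service.
chosen = select_candidate([("192.0.2.10", 9000), ("192.0.2.11", 9000)])
print("selected candidate:", chosen)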

    EFFICIENT AND FLEXIBLE LOAD-BALANCING FOR CLUSTERS OF CACHES UNDER LATENCY CONSTRAINT

    Publication Number: US20200244758A1

    Publication Date: 2020-07-30

    Application Number: US16261462

    Application Date: 2019-01-29

    Abstract: The present technology provides a system, method, and computer-readable medium for steering a content request among a plurality of cache servers based on a multi-level assessment of content popularity. In some embodiments, three levels of popularity may be determined, comprising popular, semi-popular, and unpopular designations for the queried content. The processing of the query and delivery of the requested content depend on the aforementioned popularity level designation and comprise acceptance of the query at the edge cache server to which the query was originally directed, rejection of the query and redirection to a second edge cache server, or redirection of the query to an origin server to thereby deliver the requested content. The proposed technology results in a higher hit ratio for edge cache clusters by steering requests for semi-popular content to one or more additional cache servers while forwarding requests for unpopular content to the origin server.
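    A minimal sketch of the three-level decision at an edge cache: classify the queried content by request count and either serve it locally, redirect it to a second edge cache, or send it to the origin. The counters, thresholds, and returned actions are illustrative assumptions; the patent's actual popularity assessment and latency constraint are not reproduced here.

# Hypothetical sketch of the three-level popularity decision at an edge cache.
from collections import Counter

request_counts = Counter()

# Illustrative thresholds; a real deployment would tune these under a latency constraint.
POPULAR_THRESHOLD = 100
SEMI_POPULAR_THRESHOLD = 10

def classify(content_id: str) -> str:
    count = request_counts[content_id]
    if count >= POPULAR_THRESHOLD:
        return "popular"
    if count >= SEMI_POPULAR_THRESHOLD:
        return "semi-popular"
    return "unpopular"

def steer(content_id: str) -> str:
    """Decide where a query for content_id should be served."""
    request_counts[content_id] += 1
    level = classify(content_id)
    if level == "popular":
        return "accept at this edge cache"
    if level == "semi-popular":
        return "reject and redirect to a second edge cache"
    return "redirect to the origin server"

request_counts["video-42"] = 150     # simulate prior demand for one item
print(steer("video-42"))             # popular -> served locally
print(steer("clip-7"))               # unpopular -> sent to the origin server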

    Multi-homed load-balanced rate-based tunnels

    Publication Number: US10425339B2

    Publication Date: 2019-09-24

    Application Number: US15332020

    Application Date: 2016-10-24

    Abstract: In one embodiment, a splitting device in a computer network transmits to a combining device first and second portions of a data stream via first and second tunnels, respectively, where packets of the data stream indicate a time of transmission of the packets from the splitting device, a first and second transmission rate of the packets on a respective one of the first and second tunnels, and sequencing information of the packets within the data stream. The splitting device receives from the combining device a first and second receive rate of the packets for each of the first and second tunnels, respectively. In response to the first receive rate being less than the first transmission rate, the splitting device reduces the first transmission rate and increases the second transmission rate.
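    A minimal sketch of the splitting device's reaction described in the last sentence: when the combining device reports a receive rate below the transmission rate on one tunnel, part of that rate is moved onto the other tunnel. The step size, units, and function name are assumptions for illustration.

# Hypothetical sketch of the splitting device's rate rebalancing between two tunnels.
def rebalance(tx1, tx2, rx1, rx2, step=0.1):
    """If the reported receive rate on the first tunnel is below its transmission
    rate, reduce the first transmission rate and increase the second by the same
    amount (and symmetrically for the second tunnel). Rates are, e.g., Mbit/s."""
    if rx1 < tx1:
        delta = min(step * tx1, tx1 - rx1)
        return tx1 - delta, tx2 + delta
    if rx2 < tx2:
        delta = min(step * tx2, tx2 - rx2)
        return tx1 + delta, tx2 - delta
    return tx1, tx2

# Tunnel 1 reports 8 Mbit/s received against 10 Mbit/s sent, so rate shifts to tunnel 2.
print(rebalance(tx1=10.0, tx2=10.0, rx1=8.0, rx2=10.0))   # -> (9.0, 11.0)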

    DATA STREAM PIPELINING AND REPLICATION AT A DELIVERY NODE OF A CONTENT DELIVERY NETWORK

    Publication Number: US20190116246A1

    Publication Date: 2019-04-18

    Application Number: US15784361

    Application Date: 2017-10-16

    Abstract: A content delivery node receives data packets carrying content from an upstream source of content, and writes segments of the received content directly to a memory buffer of a memory using direct memory access (DMA) data transfers. The node derives, for each segment, respective segment-specific metadata based on contents of the segment, and stores the respective segment-specific metadata in the memory. The node receives from multiple downstream client devices respective requests for the same content. Each request includes client-specific information. Responsive to the requests, the node: identifies one or more segments that satisfy the requests; generates, for each client device, client-specific metadata using the client-specific information and the segment-specific metadata for the one or more segments; constructs, for each client, a client-specific data packet that includes the one or more segments and the client-specific metadata; and transmits the client-specific data packets to the downstream client devices.
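    A minimal sketch of the metadata and packet-assembly steps: derive metadata from each ingested segment, then combine the requested segments with client-specific information into a per-client packet. The DMA transfer into the memory buffer cannot be shown in plain Python, and the metadata fields (length, digest) and packet layout are illustrative assumptions.

# Hypothetical sketch: per-segment metadata plus client-specific packet assembly.
import hashlib
import json

segment_store = {}     # segment_id -> segment bytes (stands in for the memory buffer)
segment_meta = {}      # segment_id -> metadata derived from the segment contents

def ingest_segment(segment_id: str, data: bytes) -> None:
    segment_store[segment_id] = data
    segment_meta[segment_id] = {
        "length": len(data),
        "digest": hashlib.sha256(data).hexdigest(),
    }

def build_client_packet(client_info: dict, segment_ids: list) -> bytes:
    """Combine the requested segments with metadata specific to one client."""
    header = {
        "client": client_info,
        "segments": [segment_meta[s] for s in segment_ids],
    }
    payload = b"".join(segment_store[s] for s in segment_ids)
    return json.dumps(header).encode() + b"\n" + payload

ingest_segment("seg-1", b"first chunk of content")
pkt = build_client_packet({"session": "abc", "codec": "h264"}, ["seg-1"])
print(pkt[:80])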

    System and method for providing a bit indexed service chain

    Publication Number: US10225187B2

    Publication Date: 2019-03-05

    Application Number: US15465764

    Application Date: 2017-03-22

    Abstract: Disclosed is a method that modifies a bit indexed explicit replication (BIER) algorithm. The method includes receiving a packet at a node, wherein the packet includes a BIER header identifying a bitstring, the bitstring including a first bit indicating a first destination and a second bit indicating a second destination and forwarding the packet through one or more networks toward the first destination and the second destination based on the bitstring and a predetermined bit selection order. The predetermined bit selection order causes a sequential delivery of the packet to the first destination and the second destination. After the packet arrives at the first destination, the method includes setting the first bit to zero in the bitstring and forwarding the packet through the one or more networks toward the second destination according to the updated bitstring.
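    A minimal sketch of the bit-selection mechanic: walk the bitstring in a predetermined order, deliver to each destination whose bit is set, and clear that bit before forwarding on. Representing the bitstring as a Python integer and the destination names are illustrative assumptions.

# Hypothetical sketch of sequential delivery driven by a BIER-style bitstring.
def deliver_sequentially(bitstring: int, destinations: dict, order: list):
    """Visit destinations in the predetermined bit-selection order, setting each
    bit to zero after the packet has been delivered to that destination."""
    for bit in order:
        if bitstring & (1 << bit):               # this destination is still pending
            print("deliver to", destinations[bit])
            bitstring &= ~(1 << bit)             # clear the bit, then forward onward
    return bitstring

# Bit 0 -> first destination, bit 1 -> second destination; the order forces sequencing.
remaining = deliver_sequentially(
    bitstring=0b11,
    destinations={0: "service-node-A", 1: "service-node-B"},
    order=[0, 1],
)
print("remaining bitstring:", bin(remaining))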

    Transparent and efficient multi-destination TCP communications based on bit indexed explicit replication

    Publication Number: US10135756B2

    Publication Date: 2018-11-20

    Application Number: US15252101

    Application Date: 2016-08-30

    Abstract: Systems, methods, and computer-readable storage media for multi-destination TCP communications using bit indexed explicit replication (BIER). In some examples, a system can generate a TCP packet associated with a TCP session involving a set of destination devices, and encode an array of bits into the TCP packet to yield a TCP multicast packet. The array of bits can define the destination devices as destinations for the multicast packet. The system can transmit the TCP multicast packet towards the destination devices through a BIER domain. The system can receive acknowledgements from a first subset of the destination devices. Based on the acknowledgements, the system can determine that the first subset of the destination devices received the multicast packet and a second subset of the destination devices did not receive the multicast packet. The system can then retransmit the multicast packet to the second subset of the destination devices.
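    A minimal sketch of the acknowledgement-driven retransmission step: encode the destination set as a bit array, and after collecting acknowledgements, retransmit only to the bits that remain unacknowledged. No real BIER forwarding or TCP stack is involved; the encoding by list position and the destination names are assumptions.

# Hypothetical sketch of ack-driven retransmission for a bit-indexed destination set.
def bits_for(destinations, all_destinations):
    """Encode a set of destinations as a bit array indexed by position."""
    return sum(1 << all_destinations.index(d) for d in destinations)

def unacked(sent_bits, acked_destinations, all_destinations):
    """Return the bit array for destinations that did not acknowledge the packet."""
    acked_bits = bits_for(acked_destinations, all_destinations)
    return sent_bits & ~acked_bits

all_dests = ["rcv-1", "rcv-2", "rcv-3"]
sent = bits_for(all_dests, all_dests)               # 0b111: send to every destination
retransmit = unacked(sent, ["rcv-1", "rcv-3"], all_dests)
# Only rcv-2 (bit 1) is left unacknowledged, so it alone is targeted again.
print(bin(retransmit))                              # -> 0b10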
