Technique for content delivery over the internet
    62.
    Invention application (in force)

    Publication No.: US20020169890A1

    Publication date: 2002-11-14

    Application No.: US09851267

    Filing date: 2001-05-08

    Abstract: A content delivery system for a content provider that comprises at least two content delivery servers for delivering contents, a preference database for storing an estimated distance between each of the at least two content delivery servers and a client, and a content provider domain name server for mapping the name of the content provider to the at least two content delivery servers and selecting, to deliver the content, the one of the content delivery servers that has the shortest estimated distance to the client in the preference database.
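
    The selection rule in this abstract reduces to a minimum-distance lookup. The following is a minimal sketch of that step, assuming a preference table keyed by (delivery server, client network) pairs; the names and values are illustrative, not part of the patent.

        # Illustrative preference database: estimated distance (e.g. RTT in ms)
        # from each content delivery server to a client network. Hypothetical data.
        PREFERENCE_DB = {
            ("cdn1.example.net", "198.51.100.0/24"): 12.0,
            ("cdn2.example.net", "198.51.100.0/24"): 45.0,
        }

        def select_delivery_server(client_prefix, servers):
            """Pick the delivery server with the shortest estimated distance to the client."""
            return min(servers,
                       key=lambda s: PREFERENCE_DB.get((s, client_prefix), float("inf")))

        # The content provider's DNS would answer with the address of this server.
        print(select_delivery_server("198.51.100.0/24",
                                     ["cdn1.example.net", "cdn2.example.net"]))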

Method for predicting file download time from mirrored data centers in a global computer network
    63.
    Invention application (in force)

    Publication No.: US20020124080A1

    Publication date: 2002-09-05

    Application No.: US09867141

    Filing date: 2001-05-30

    Abstract: An intelligent traffic redirection system performs global load balancing for Web sites located at mirrored data centers. The system relies on a network map that is generated continuously for the user-base of the entire Internet. Instead of probing each local name server (or other host) that is connectable to the mirrored data centers, the network map identifies connectivity with respect to a much smaller set of proxy points, called "core" (or "common") points. A core point then becomes representative of a set of local name servers (or other hosts) that, from a data center's perspective, share the point. Once core points are identified, a systematic methodology is used to estimate predicted actual download times to a given core point from each of the mirrored data centers. Preferably, ICMP (or so-called "ping" packets) are used to measure roundtrip time (RTT) and latency between a data center and a core point. Using such data, an average latency is calculated, preferably using an exponentially time-weighted average of all previous measurements and the new measurement. A similar function is used to calculate average packet loss. Using the results, a score is generated for each path between one of the data centers and the core point, and the score is representative of a file download time. Preferably, the score is generated by modifying an average latency with a penalty factor dependent on the time-weighted average loss function. Whichever data center has the best score (representing the best-performing network connectivity for that time slice) is then associated with the core point.
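
    As a rough illustration of the scoring step, the sketch below keeps an exponentially time-weighted average of RTT and of packet loss per (data center, core point) path and combines them into a single score by penalizing the latency with the loss term. The smoothing constant and the penalty form are assumptions made for the example, not the formula used in the patent.

        class PathScore:
            """Running score for one (data center, core point) path."""

            def __init__(self, alpha=0.3, loss_penalty=100.0):
                self.alpha = alpha            # weight of the newest measurement
                self.loss_penalty = loss_penalty
                self.avg_rtt = None           # time-weighted average RTT (ms)
                self.avg_loss = 0.0           # time-weighted average loss fraction

            def update(self, rtt_ms=None, lost=False):
                """Fold one ICMP probe result into the averages and return the score."""
                if not lost and rtt_ms is not None:
                    self.avg_rtt = (rtt_ms if self.avg_rtt is None
                                    else self.alpha * rtt_ms + (1 - self.alpha) * self.avg_rtt)
                sample_loss = 1.0 if lost else 0.0
                self.avg_loss = self.alpha * sample_loss + (1 - self.alpha) * self.avg_loss
                return self.score()

            def score(self):
                """Lower is better; the data center with the best score wins the core point."""
                if self.avg_rtt is None:
                    return float("inf")
                return self.avg_rtt + self.loss_penalty * self.avg_loss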

Loading balancing across servers in a computer network
    65.
    Invention grant (expired)

    Publication No.: US06351775B1

    Publication date: 2002-02-26

    Application No.: US08866461

    Filing date: 1997-05-30

    Abstract: A dynamic routing of object requests among a collection or cluster of servers factors the caching efficiency of the servers and the load balance or just the load balance. The routing information on server location can be dynamically updated by piggybacking meta information with the request response. To improve the cache hit at the server, the server selection factors the identifier (e.g. URL) of the object requested. A partitioning method can map object identifiers into classes; and requester nodes maintain a server assignment table to map each class into a server selection. The class-to-server assignment table can change dynamically as the workload varies and also factors the server capacity. The requester node need only be informed on an “on-demand” basis on the dynamic change of the class-to-server assignment (and thus reduce communication traffic). In the Internet, the collection of servers can be either a proxy or Web server cluster and can include a DNS and/or TCP-router. The PICS protocol can be used by the server to provide the meta information on the “new” class-to-server mapping when a request is directed to a server based on an invalid or obsolete class-to-server mapping. DNS based routing for load balancing of a server cluster can also benefit. By piggybacking meta data with the returned object to reassign the requester to another server for future requests, adverse effects of the TTL on the load balance are overcome without increasing traffic.
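
    A minimal sketch of the requester-side lookup described here, assuming object identifiers (URLs) are partitioned into a fixed number of classes by hashing and a class-to-server table maps each class to a server; in the scheme above that table would be updated on demand from meta information piggybacked on responses. All names are placeholders.

        import hashlib

        NUM_CLASSES = 16

        # Class-to-server assignment table; it can be reassigned dynamically as the
        # workload and server capacities change (values here are placeholders).
        class_to_server = {c: "server%d.example.com" % (c % 4) for c in range(NUM_CLASSES)}

        def url_class(url):
            """Partition object identifiers (URLs) into classes by hashing."""
            return hashlib.sha1(url.encode()).digest()[0] % NUM_CLASSES

        def pick_server(url):
            """Requester-side server selection for one object request."""
            return class_to_server[url_class(url)]

        print(pick_server("http://www.example.com/index.html"))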

Client-side resource-based load-balancing with delayed-resource-binding using TCP state migration to WWW server farm
    66.
    Invention grant (expired)

    Publication No.: US06182139B2

    Publication date: 2001-01-30

    Application No.: US09103336

    Filing date: 1998-06-23

    Inventor: Juergen Brendel

    Abstract: A client-side dispatcher resides on a client machine below high-level client applications and TCP/IP layers. The client-side dispatcher performs TCP state migration to relocate the client-server TCP connection to a new server by storing packets locally and later altering them before transmission. The client-side dispatcher operates in several modes. In an error-recovery mode, when a server fails, error packets from the server are intercepted by the client-side dispatcher. Stored connection packets' destination addresses are changed to the address of a relocated server. The altered packets then establish a connection with the relocated server. Source addresses of packets from the server are changed to that of the original server that crashed so that the client application is not aware of the error. In a delayed URL-based dispatch mode, the client-side dispatcher intercepts connection packets before they are sent over the network. Reply packets are faked by the client-side dispatcher to appear to be from a server and are then sent up to the client TCP/IP layers. The client's TCP then sends a URL packet identifying the resource requested. The client-side dispatcher decodes the URL, picks a server, and sends the packet to the server. Reply packets from the server are intercepted, and data packets are altered to have the source address of the faked server. Multicast of the initial packet to multiple servers is used for empirical load-balancing by the client. The first server to respond is chosen while the others are reset. Thus the client-side dispatcher picks the fastest of several servers.
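
    The closing idea, send the initial packet to several servers and keep the fastest responder, can be illustrated with an ordinary connection race. The sketch below shows only that racing step, not the dispatcher's packet rewriting or TCP state migration, and the host names are assumptions.

        import socket
        from concurrent.futures import ThreadPoolExecutor, as_completed

        CANDIDATES = [("server1.example.com", 80), ("server2.example.com", 80)]

        def connect(addr):
            return addr, socket.create_connection(addr, timeout=2.0)

        def fastest_server(candidates=CANDIDATES):
            """Open connections to all candidates; keep the first to respond."""
            winner = None
            with ThreadPoolExecutor(max_workers=len(candidates)) as pool:
                futures = [pool.submit(connect, a) for a in candidates]
                for fut in as_completed(futures):
                    try:
                        addr, sock = fut.result()
                    except OSError:
                        continue
                    if winner is None:
                        winner = (addr, sock)   # first responder wins
                    else:
                        sock.close()            # slower servers are dropped
            if winner is None:
                raise OSError("no server responded")
            return winner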

Load balancing of client connections across a network using server based algorithms
    67.
    Invention grant (expired)

    Publication No.: US06178160B1

    Publication date: 2001-01-23

    Application No.: US08996683

    Filing date: 1997-12-23

    Abstract: A plurality of web servers (16, 18, and 20) have a common host name, and their authoritative domain server (24 or 26) responds to requests from a local domain-name server (22) for the network address corresponding to their common host name by making an estimate of the performance costs of adding a further client to each of the web servers and then sending the local domain-name server (22) the network address of the server for which the addition of a further client will result in the least performance cost. The performance cost is defined as the difference in the average number of waiting clients, and it takes into account both the additional response time for existing clients and the projected response time for the prospective new client.
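
    To make the cost rule concrete, the sketch below treats each web server as a simple queue, estimates the change in the average number of waiting clients if one more client's load is added, and answers with the server whose increase is smallest. The queueing model and the numbers are assumptions used only for illustration.

        def avg_waiting(arrival_rate, service_rate):
            """Average number of waiting clients for a simple M/M/1-style queue."""
            rho = arrival_rate / service_rate
            return float("inf") if rho >= 1.0 else rho * rho / (1.0 - rho)

        def added_client_cost(arrival_rate, service_rate, delta=0.1):
            """Increase in average waiting clients if the offered load grows by delta."""
            return (avg_waiting(arrival_rate + delta, service_rate)
                    - avg_waiting(arrival_rate, service_rate))

        # Hypothetical (arrival rate, service rate) per server address.
        servers = {
            "192.0.2.16": (8.0, 10.0),
            "192.0.2.18": (5.0, 10.0),
            "192.0.2.20": (9.5, 10.0),
        }

        best = min(servers, key=lambda ip: added_client_cost(*servers[ip]))
        print("answer the DNS request with:", best)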

In path edge relay insertion
    68.
    Invention grant

    Publication No.: US12120186B1

    Publication date: 2024-10-15

    Application No.: US18363415

    Filing date: 2023-08-01

    CPC classification number: H04L67/148 H04L67/1038 H04L67/141

    Abstract: The present disclosure describes systems and methods for migrating communications between a client device and an application hosted by a cloud server. The method includes receiving, at an edge server, a signal from a client device requesting the establishment of a new communication path between the client device and the cloud server through the edge server. A first connection between the client device and the edge server is established, and a second connection between the edge server and the cloud server is also established. Once the connections are established, the communication between the client device and the application is migrated from a direct connection between the client device and the cloud server to the first and second connections.
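
    As a rough sketch of the migrated path, the edge server below accepts the client's first connection, opens the second connection to the cloud server, and then copies bytes in both directions. The address and the plain-TCP framing are assumptions, not details from the patent.

        import socket
        import threading

        CLOUD_ADDR = ("cloud.example.net", 443)   # placeholder address

        def pump(src, dst):
            """Copy bytes from src to dst until src closes."""
            try:
                while True:
                    data = src.recv(4096)
                    if not data:
                        break
                    dst.sendall(data)
            finally:
                try:
                    dst.shutdown(socket.SHUT_WR)
                except OSError:
                    pass

        def relay(client_sock):
            """First connection: client_sock. Second connection: edge server to cloud."""
            cloud_sock = socket.create_connection(CLOUD_ADDR)
            threading.Thread(target=pump, args=(client_sock, cloud_sock), daemon=True).start()
            pump(cloud_sock, client_sock)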

Service request handling
    69.
    Invention grant

    Publication No.: US12069126B2

    Publication date: 2024-08-20

    Application No.: US17926021

    Filing date: 2021-05-19

    CPC classification number: H04L67/1038 H04L67/561 H04L67/63 H04L69/40

    Abstract: There is provided a method for handling a service request. The method is performed by a first service communication proxy (SCP) node. If no response is received from a second network function (NF) node of a service producer to a first request transmitted towards the second NF node via the first SCP node, where the first request is for the second NF node to execute a service requested by a first NF node of a service consumer, transmission of information is initiated towards the first NF node. The information is indicative that no response is received from the second NF node to the first request.
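
    A hedged sketch of the behaviour described here: the SCP forwards the consumer's request to the producer NF and, if no response arrives within a timeout, returns an indication to the consumer NF that the producer did not respond. The HTTP framing, status code, and cause value below are illustrative assumptions, not the 3GPP-defined encoding.

        import requests  # third-party HTTP client, assumed available

        def forward_via_scp(producer_url, payload, timeout_s=2.0):
            """Forward a consumer's request; report back if the producer never answers."""
            try:
                resp = requests.post(producer_url, json=payload, timeout=timeout_s)
                return resp.status_code, resp.text
            except requests.Timeout:
                # No response from the producer NF: indicate this to the consumer NF
                # so it can retry or reselect another producer.
                return 504, '{"cause": "NO_RESPONSE_FROM_PRODUCER_NF"}'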
