SYSTEM AND METHOD FOR GRAPH BASED MONITORING AND MANAGEMENT OF DISTRIBUTED SYSTEMS

    Publication No.: US20190286548A1

    Publication Date: 2019-09-19

    Application No.: US16434106

    Application Date: 2019-06-06

    Abstract: A controller can receive first and second metrics respectively indicating distributed computing system servers' CPU, memory, or disk utilization, throughput, or latency for a first time. The controller can receive third and fourth metrics for a second time. The controller can determine a first graph including vertices corresponding to the servers and edges indicating data flow between the servers, a second graph including edges indicating the first metrics satisfy a first threshold, a third graph including edges indicating the second metrics satisfy a second threshold, a fourth graph including edges indicating the third metrics fail to satisfy the first threshold, and a fifth graph including edges indicating the fourth metrics fail to satisfy the second threshold. The controller can display a sixth graph indicating at least one of first changes between the second graph and the fourth graph or second changes between the third graph and the fifth graph.
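
    As a rough illustration of the graph comparison this abstract describes, the short Python sketch below builds per-metric edge sets over a topology at two sample times and diffs them. The server names, metric values, and threshold are hypothetical, and the code is only a sketch of the idea, not the claimed implementation.

```python
# Illustrative sketch (not the patented implementation): per-metric edge sets at
# two sample times, diffed to find edges whose state changed. All names and
# values below (flows, cpu_t1, threshold) are hypothetical.

def metric_edges(flows, metric, threshold, satisfied=True):
    """Return topology edges whose source server's metric value
    satisfies (or fails to satisfy) the threshold."""
    keep = (lambda v: v >= threshold) if satisfied else (lambda v: v < threshold)
    return {(src, dst) for src, dst in flows if keep(metric[src])}

# Topology: data flow between servers (the "first graph").
flows = {("web", "app"), ("app", "db")}

# CPU utilization at the first and second times.
cpu_t1 = {"web": 0.9, "app": 0.4, "db": 0.7}
cpu_t2 = {"web": 0.3, "app": 0.4, "db": 0.7}

second_graph = metric_edges(flows, cpu_t1, 0.8, satisfied=True)   # t1, satisfies threshold
fourth_graph = metric_edges(flows, cpu_t2, 0.8, satisfied=False)  # t2, fails threshold

# "Sixth graph": edges that changed state between the two sample times.
changes = second_graph & fourth_graph
print(changes)  # {('web', 'app')} -- the web server dropped below the CPU threshold
```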

    SYSTEM AND METHOD FOR GRAPH BASED MONITORING AND MANAGEMENT OF DISTRIBUTED SYSTEMS

    Publication No.: US20190114247A1

    Publication Date: 2019-04-18

    Application No.: US15786790

    Application Date: 2017-10-18

    Abstract: Systems, methods, and computer-readable media are disclosed for graph based monitoring and management of network components of a distributed streaming system. In one aspect, a method includes generating, by a processor, first metrics and second metrics based on data collected on a system; generating, by the processor, a topology graph representing data flow within the system; generating, by the processor, at least one first metrics graph corresponding to the first metrics based in part on the topology graph; generating, by the processor, at least one second metrics graph corresponding to the second metrics based in part on the topology graph; identifying, by the processor, a malfunction within the system based on a change in at least one of the first metrics graph and the second metrics graph; and sending, by the processor, feedback on the malfunction to an operational management component of the system.
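
    The following sketch illustrates, with hypothetical names, the malfunction-detection step from this abstract: comparing a current metrics graph against a previous snapshot and sending feedback to an operational management component, which is stood in for here by a simple callback.

```python
# Minimal sketch of malfunction detection over metrics-graph snapshots, each
# represented as a set of healthy edges. `ops_mgmt` is a hypothetical stand-in
# for the operational management component.

def detect_malfunction(previous_edges, current_edges, ops_mgmt):
    """Flag any topology edge that was healthy before but is missing now."""
    lost = previous_edges - current_edges
    if lost:
        ops_mgmt({"event": "malfunction", "affected_edges": sorted(lost)})
    return lost

previous = {("ingest", "parse"), ("parse", "store")}
current = {("ingest", "parse")}
detect_malfunction(previous, current, ops_mgmt=print)
# {'event': 'malfunction', 'affected_edges': [('parse', 'store')]}
```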

    VIRTUALIZED NETWORK FUNCTIONS AND SERVICE CHAINING IN SERVERLESS COMPUTING INFRASTRUCTURE

    Publication No.: US20180302277A1

    Publication Date: 2018-10-18

    Application No.: US15485948

    Application Date: 2017-04-12

    CPC classification number: H04L41/5054 H04L41/5045 H04L41/5096

    Abstract: In one embodiment, a method implements virtualized network functions in a serverless computing system having networked hardware resources. An interface of the serverless computing system receives a specification for a network service including a virtualized network function (VNF) forwarding graph (FG). A mapper of the serverless computing system determines an implementation graph comprising edges and vertices based on the specification. A provisioner of the serverless computing system provisions a queue in the serverless computing system for each edge. The provisioner further provisions a function in the serverless computing system for each vertex, wherein, for at least one or more functions, each one of said at least one or more functions reads incoming messages from at least one queue. The serverless computing system processes data packets through the queues and functions in accordance with the VNF FG.
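
    A minimal sketch of the edge-to-queue and vertex-to-function mapping described above; `provision_queue` and `provision_function` are hypothetical stand-ins for a serverless platform's provisioning calls, not an actual API.

```python
# Illustrative only: map a VNF forwarding graph onto per-edge queues and
# per-vertex functions, where each function reads from its incoming queues.
from collections import defaultdict

def provision_queue(edge):
    return f"queue::{edge[0]}->{edge[1]}"

def provision_function(vertex, input_queues):
    return {"name": f"fn::{vertex}", "reads_from": input_queues}

def provision_vnf_fg(edges):
    queues = {edge: provision_queue(edge) for edge in edges}
    inputs = defaultdict(list)
    for (src, dst), q in queues.items():
        inputs[dst].append(q)  # each vertex's function reads its incoming edge queues
    vertices = {v for e in edges for v in e}
    functions = {v: provision_function(v, inputs.get(v, [])) for v in vertices}
    return queues, functions

# Example forwarding graph: firewall -> NAT -> load balancer.
queues, functions = provision_vnf_fg([("fw", "nat"), ("nat", "lb")])
print(functions["nat"])  # {'name': 'fn::nat', 'reads_from': ['queue::fw->nat']}
```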

    PERFORMANCE OF OBJECT STORAGE SYSTEMS
    Patent Application

    Publication No.: US20170371558A1

    Publication Date: 2017-12-28

    Application No.: US15192255

    Application Date: 2016-06-24

    Abstract: Approaches are disclosed for improving performance of logical disks. A logical disk can comprise several storage devices. In an object storage system (OSS), when a logical disk stores a file, fragments of the file are stored distributed across the storage devices. Each of the fragments of the file is asymmetrically stored in (write) and retrieved from (read) the storage devices. The performance of the logical disk is improved by reconfiguring one or more of the storage devices based on an influence that each of the storage devices has on performance of the logical disk and the asymmetric read and write operations of each of the storage devices. For example, latency of the logical disk can be reduced by reconfiguring one or more of the plurality of storage disks based on a proportion of the latency of the logical device that is attributable to each of the plurality of storage devices.
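
    The sketch below illustrates the latency-attribution idea in this abstract: estimating each backing device's share of the logical disk's latency and selecting the largest contributor as the reconfiguration candidate. The device names and latency figures are hypothetical.

```python
# Sketch of latency attribution across the devices backing one logical disk.
# The observed per-device latencies are made-up example numbers.

def latency_shares(per_device_latency_ms):
    total = sum(per_device_latency_ms.values())
    return {dev: lat / total for dev, lat in per_device_latency_ms.items()}

# Hypothetical latency contributions of the devices backing one logical disk.
observed = {"ssd-0": 2.0, "ssd-1": 2.5, "hdd-0": 11.5}

shares = latency_shares(observed)
candidate = max(shares, key=shares.get)
print(candidate, round(shares[candidate], 2))  # hdd-0 0.72 -- reconfigure this device first
```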

    TENANT-LEVEL SHARDING OF DISKS WITH TENANT-SPECIFIC STORAGE MODULES TO ENABLE POLICIES PER TENANT IN A DISTRIBUTED STORAGE SYSTEM
    Patent Application (Pending, Published)

    Publication No.: US20160334998A1

    Publication Date: 2016-11-17

    Application No.: US14713851

    Application Date: 2015-05-15

    Abstract: Embodiments include receiving an indication of a data storage module to be associated with a tenant of a distributed storage system, allocating a partition of a disk for data of the tenant, creating a first association between the data storage module and the disk partition, creating a second association between the data storage module and the tenant, and creating rules for the data storage module based on one or more policies configured for the tenant. Embodiments further include receiving an indication of a type of subscription model selected for the tenant, and selecting the disk partition to be allocated based, at least in part, on the subscription model selected for the tenant. More specific embodiments include generating a storage map indicating the first association between the data storage module and the disk partition and indicating the second association between the data storage module and the tenant.
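
    A small sketch of the storage-map bookkeeping this abstract describes (the first association, the second association, and per-tenant rules); the class and field names are hypothetical, and the subscription-to-partition sizing is only an illustration.

```python
# Illustrative storage-map bookkeeping for tenant-level sharding.
from dataclasses import dataclass, field

@dataclass
class StorageMap:
    module_to_partition: dict = field(default_factory=dict)  # first association
    module_to_tenant: dict = field(default_factory=dict)     # second association
    module_rules: dict = field(default_factory=dict)         # rules from tenant policies

    def register(self, module, tenant, partition, policies):
        self.module_to_partition[module] = partition
        self.module_to_tenant[module] = tenant
        self.module_rules[module] = [f"enforce:{p}" for p in policies]

def partition_for_subscription(subscription):
    # Hypothetical sizing (GB) based on the tenant's subscription model.
    return {"basic": 64, "premium": 512}.get(subscription, 128)

smap = StorageMap()
smap.register("dsm-7", tenant="acme",
              partition=("disk-3", partition_for_subscription("premium")),
              policies=["encrypt-at-rest", "iops-limit"])
print(smap.module_to_partition["dsm-7"])  # ('disk-3', 512)
```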

    Flow based network service insertion using a service chain identifier
    Granted Patent (In Force)

    Publication No.: US09203765B2

    Publication Date: 2015-12-01

    Application No.: US14014742

    Application Date: 2013-08-30

    Abstract: Techniques are provided to generate and store a network graph database comprising information that indicates a service node topology, and virtual or physical network services available at each node in a network. A service request is received for services to be performed on packets traversing the network between at least first and second endpoints. A subset of the network graph database is determined that can provide the services requested in the service request. A service chain and service chain identifier is generated for the service based on the network graph database subset. A flow path is established through the service chain by flow programming network paths between the first and second endpoints using the service chain identifier.
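
    The following sketch shows one way the described service-chain derivation could look: selecting, from a node-to-services map, nodes that provide the requested services in order, and deriving a chain identifier. The graph structure and identifier format are assumptions, not the patented scheme.

```python
# Illustrative derivation of a service chain and identifier from a
# node-to-services map; the identifier format is an assumption.
import hashlib

def build_service_chain(graph_db, requested_services):
    chain = []
    for service in requested_services:  # preserve the requested service order
        node = next((n for n, svcs in graph_db.items() if service in svcs), None)
        if node is None:
            raise ValueError(f"no node offers {service}")
        chain.append((node, service))
    chain_id = hashlib.sha1(repr(chain).encode()).hexdigest()[:8]
    return chain, chain_id

graph_db = {"node-a": {"firewall"}, "node-b": {"ids", "nat"}, "node-c": {"waf"}}
chain, chain_id = build_service_chain(graph_db, ["firewall", "nat", "waf"])
print(chain_id, chain)
```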

    ALLOCATING RESOURCES FOR MULTI-PHASE, DISTRIBUTED COMPUTING JOBS
    Patent Application (In Force)

    Publication No.: US20150199208A1

    Publication Date: 2015-07-16

    Application No.: US14156149

    Application Date: 2014-01-15

    CPC classification number: G06F9/45533 G06F9/44505 G06F9/5005

    Abstract: In one embodiment, data indicative of the size of an intermediate data set generated by a first resource device is received at a computing device. The intermediate data set is associated with a virtual machine to process the intermediate data set. A virtual machine configuration is determined based on the size of the intermediate data set. A second resource device is selected to execute the virtual machine based on the virtual machine configuration and on an available bandwidth between the first and second resource devices. The virtual machine is then assigned to the second resource device to process the intermediate data set.
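
    A brief sketch of the placement decision described above: sizing a virtual machine configuration from the intermediate data set and choosing the second resource device by available bandwidth to the first. The sizing tiers and bandwidth table are hypothetical.

```python
# Illustrative placement of a VM that will process an intermediate data set.

def vm_config_for(intermediate_size_gb):
    # Hypothetical sizing tiers keyed on the intermediate data set size.
    if intermediate_size_gb <= 10:
        return {"vcpus": 2, "ram_gb": 8}
    if intermediate_size_gb <= 100:
        return {"vcpus": 8, "ram_gb": 32}
    return {"vcpus": 16, "ram_gb": 64}

def pick_second_device(first_device, candidates, bandwidth_gbps, config):
    # Among candidates that can host the VM configuration, prefer the one with
    # the most available bandwidth to the first resource device.
    feasible = [c for c in candidates if c["free_vcpus"] >= config["vcpus"]]
    return max(feasible, key=lambda c: bandwidth_gbps[(first_device, c["name"])])

config = vm_config_for(40)  # intermediate data set of 40 GB
candidates = [{"name": "r2", "free_vcpus": 16}, {"name": "r3", "free_vcpus": 8}]
bandwidth = {("r1", "r2"): 10, ("r1", "r3"): 40}
target = pick_second_device("r1", candidates, bandwidth, config)
print(config, target["name"])  # {'vcpus': 8, 'ram_gb': 32} r3
```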

    LEVERAGING HARDWARE ACCELERATORS FOR SCALABLE DISTRIBUTED STREAM PROCESSING IN A NETWORK ENVIRONMENT
    Patent Application (In Force)

    Publication No.: US20150103837A1

    Publication Date: 2015-04-16

    Application No.: US14054542

    Application Date: 2013-10-15

    Inventor: Debojyoti Dutta

    CPC classification number: H04L47/125

    Abstract: An example method for leveraging hardware accelerators for scalable distributed stream processing in a network environment is provided and includes allocating a plurality of hardware accelerators to a corresponding plurality of bolts of a distributed stream in a network, facilitating a handshake between the hardware accelerators and the corresponding bolts to allow the hardware accelerators to execute respective processing logic according to the corresponding bolts, and performing elastic allocation of hardware accelerators and load balancing of stream processing in the network. The distributed stream comprises a topology of at least one spout and the plurality of bolts. In specific embodiments, the allocating includes receiving capability information from the bolts and the hardware accelerators, and mapping the hardware accelerators to the bolts based on the capability information. In some embodiments, facilitating the handshake includes executing a shadow process to interface between the hardware accelerator and the distributed stream.
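
    The sketch below illustrates the capability-based mapping step from this abstract: assigning each bolt an unused accelerator that advertises the capability the bolt needs. The capability strings and names are illustrative, and the handshake/shadow-process step is only noted as a comment.

```python
# Illustrative capability-based mapping of hardware accelerators to bolts.

def map_accelerators(bolts, accelerators):
    """Assign each bolt an unused accelerator that advertises its capability."""
    mapping, used = {}, set()
    for bolt, needed in bolts.items():
        for acc, caps in accelerators.items():
            if acc not in used and needed in caps:
                mapping[bolt] = acc  # handshake / shadow process would start here
                used.add(acc)
                break
    return mapping

bolts = {"parse-bolt": "regex", "crypto-bolt": "aes"}
accelerators = {"fpga-0": {"regex", "compress"}, "fpga-1": {"aes"}}
print(map_accelerators(bolts, accelerators))
# {'parse-bolt': 'fpga-0', 'crypto-bolt': 'fpga-1'}
```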
