ENTITY-CENTRIC LOG INDEXING WITH CONTEXT EMBEDDING

    Publication Number: US20180285397A1

    Publication Date: 2018-10-04

    Application Number: US15478304

    Filing Date: 2017-04-04

    Abstract: In one embodiment, a device in a network tokenizes a plurality of strings from unstructured log data into entity tokens and non-entity tokens. The entity tokens identify entities in the network. The device identifies patterns of tokens in the tokenized strings. The device determines entity-centric contexts from the identified patterns. A particular entity-centric context comprises a sequence of tokens that precede or follow an entity token in the tokenized strings. The device associates similar ones of the entity-centric contexts. The device generates a lookup index based in part on the entities and the similar entity-centric contexts.
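
    As a rough illustration of the indexing approach this abstract describes, the sketch below tokenizes log lines, treats tokens matching an assumed IPv4 regex as entities, takes the surrounding tokens as an entity-centric context, and builds a lookup index keyed by entity. The entity pattern, window size, and the exact-match notion of "similar" contexts are simplifying assumptions, not the patented method.

```python
# A minimal illustrative sketch, not the patented implementation.
# Entity detection (a hypothetical IPv4 regex) and the similarity rule
# (exact match on surrounding tokens) are simplifying assumptions.
import re
from collections import defaultdict

IP_RE = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")  # assumed entity pattern: IPv4 addresses

def tokenize(line):
    """Split an unstructured log line into (token, is_entity) pairs."""
    return [(tok, bool(IP_RE.match(tok))) for tok in line.split()]

def build_index(log_lines, window=2):
    """Map each entity to the contexts (token sequences around it) it appears in."""
    index = defaultdict(set)
    for line in log_lines:
        tokens = tokenize(line)
        for i, (tok, is_entity) in enumerate(tokens):
            if not is_entity:
                continue
            before = tuple(t for t, _ in tokens[max(0, i - window):i])
            after = tuple(t for t, _ in tokens[i + 1:i + 1 + window])
            # contexts sharing the same surrounding tokens are treated as "similar"
            index[tok].add((before, after))
    return index

logs = [
    "failed login from 10.0.0.5 port 22",
    "failed login from 10.0.0.9 port 22",
]
for entity, contexts in build_index(logs).items():
    print(entity, contexts)
```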

    AUTOMATED LOG ANALYSIS

    Publication Number: US20180157713A1

    Publication Date: 2018-06-07

    Application Number: US15368373

    Filing Date: 2016-12-02

    Abstract: There is disclosed in an example a computer-implemented method of providing automated log analysis, including: receiving a log stream comprising a plurality of transaction log entries, the log entries comprising a time stamp, a component identification (ID), and a name value pair identifying a transaction; creating an index comprising mapping a key ID to a name value pair of a log entry; and selecting from the index a key ID having a relatively large number of repetitions. There is also disclosed an apparatus and computer-readable medium for performing the method.
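
    The sketch below gives one possible reading of the indexing and selection steps: parse entries carrying a timestamp, component ID, and name-value pair, index the name-value pairs by a key ID, and return the keys seen with a relatively large number of repetitions. The entry tuple layout, key derivation, and repetition threshold are illustrative assumptions.

```python
# A minimal sketch of the indexing step described in the abstract; the log format,
# the key derivation (here the name of the name-value pair), and the repetition
# threshold are all illustrative assumptions.
from collections import Counter, defaultdict

def analyze(log_entries, min_repetitions=2):
    index = defaultdict(list)   # key ID -> name-value pairs seen with that key
    counts = Counter()
    for entry in log_entries:
        # each entry: (timestamp, component ID, (name, value))
        _ts, _component, (name, value) = entry
        index[name].append((name, value))
        counts[name] += 1
    # select key IDs with a relatively large number of repetitions
    frequent = [k for k, n in counts.items() if n >= min_repetitions]
    return {k: index[k] for k in frequent}

entries = [
    ("2016-12-02T10:00:01", "auth", ("txn_id", "42")),
    ("2016-12-02T10:00:02", "db",   ("txn_id", "42")),
    ("2016-12-02T10:00:03", "auth", ("user", "alice")),
]
print(analyze(entries))
```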

    CLOUD RESOURCE PLACEMENT OPTIMIZATION AND MIGRATION EXECUTION IN FEDERATED CLOUDS

    Publication Number: US20170149687A1

    Publication Date: 2017-05-25

    Application Number: US14951110

    Filing Date: 2015-11-24

    CPC classification number: H04L47/78 H04L67/1002

    Abstract: The present disclosure describes a method for cloud resource placement optimization. A resources monitor monitors state information associated with cloud resources and physical hosts in the federated cloud having a plurality of clouds managed by a plurality of cloud providers. A rebalance trigger triggers a rebalancing request to initiate cloud resource placement optimization based on one or more conditions. A cloud resource placement optimizer determines an optimized placement of cloud resources on physical hosts across the plurality of clouds in the federated cloud based on (1) costs including migration costs, (2) the state information, and (3) constraints, wherein each physical host is identified in the constraints-driven optimization solver by an identifier of a respective cloud provider and an identifier of the physical host. A migrations enforcer determines an ordered migration plan and transmits requests to place or migrate cloud resources according to the ordered migration plan.
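
    As a toy illustration of cost-plus-migration placement across a federated cloud, the sketch below greedily assigns each resource to the cheapest feasible (provider, host) pair, penalizing moves off the current host, and reports the resulting migrations. The abstract describes a constraints-driven optimization solver; this greedy stand-in, the costs, capacities, and MIGRATION_COST constant are all hypothetical.

```python
# Toy cost-based placement sketch; not the patented constraints-driven solver.
MIGRATION_COST = 5  # assumed fixed cost of moving a resource off its current host

def place(resources, hosts, current_placement):
    """resources: {name: size}; hosts: {(provider, host): (capacity, cost_per_unit)}."""
    placement, used = {}, {h: 0 for h in hosts}
    for res, size in resources.items():
        best, best_cost = None, float("inf")
        for host, (capacity, unit_cost) in hosts.items():
            if used[host] + size > capacity:      # capacity constraint
                continue
            cost = size * unit_cost
            if current_placement.get(res) not in (None, host):
                cost += MIGRATION_COST            # penalize migrations
            if cost < best_cost:
                best, best_cost = host, cost
        placement[res] = best
        used[best] += size
    # an "ordered migration plan": resources whose host changed
    migrations = [(r, current_placement.get(r), h)
                  for r, h in placement.items() if current_placement.get(r) != h]
    return placement, migrations

hosts = {("cloudA", "h1"): (10, 1.0), ("cloudB", "h2"): (10, 0.5)}
print(place({"vm1": 4, "vm2": 4}, hosts, {"vm1": ("cloudA", "h1")}))
```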

    FPGA acceleration for serverless computing

    Publication Number: US11740935B2

    Publication Date: 2023-08-29

    Application Number: US17519395

    Filing Date: 2021-11-04

    CPC classification number: G06F9/4881 G06F9/5038 G06F9/5066 G06F9/5088

    Abstract: In one embodiment, a method for FPGA accelerated serverless computing comprises receiving, from a user, a definition of a serverless computing task comprising one or more functions to be executed. A task scheduler performs an initial placement of the serverless computing task to a first host determined to be a first optimal host for executing the serverless computing task. The task scheduler determines a supplemental placement of a first function to a second host determined to be a second optimal host for accelerating execution of the first function, wherein the first function is not able to be accelerated by one or more FPGAs in the first host. The serverless computing task is executed on the first host and the second host according to the initial placement and the supplemental placement.
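
    A highly simplified sketch of the two-stage placement described above appears below: pick an initial host for the whole task, then re-home any function the initial host's FPGAs cannot accelerate to a supplemental host. The host capability sets and scoring rule are made-up assumptions, not the patented scheduler.

```python
# Simplified two-stage placement sketch; hosts' FPGA capabilities are invented.
def schedule(task_functions, hosts):
    """hosts: {host_name: set of function names its FPGAs can accelerate}."""
    # initial placement: host that can accelerate the most of the task's functions
    first_host = max(hosts, key=lambda h: len(hosts[h] & set(task_functions)))
    placement = {fn: first_host for fn in task_functions}
    # supplemental placement: re-home functions the first host cannot accelerate
    for fn in task_functions:
        if fn not in hosts[first_host]:
            candidates = [h for h in hosts if fn in hosts[h]]
            if candidates:
                placement[fn] = candidates[0]
    return first_host, placement

hosts = {"host1": {"resize", "encode"}, "host2": {"infer"}}
print(schedule(["resize", "encode", "infer"], hosts))
```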

    FPGA ACCELERATION FOR SERVERLESS COMPUTING

    Publication Number: US20220058054A1

    Publication Date: 2022-02-24

    Application Number: US17519395

    Filing Date: 2021-11-04

    Abstract: In one embodiment, a method for FPGA accelerated serverless computing comprises receiving, from a user, a definition of a serverless computing task comprising one or more functions to be executed. A task scheduler performs an initial placement of the serverless computing task to a first host determined to be a first optimal host for executing the serverless computing task. The task scheduler determines a supplemental placement of a first function to a second host determined to be a second optimal host for accelerating execution of the first function, wherein the first function is not able to be accelerated by one or more FPGAs in the first host. The serverless computing task is executed on the first host and the second host according to the initial placement and the supplemental placement.

    Serverless computing and task scheduling

    Publication Number: US10884807B2

    Publication Date: 2021-01-05

    Application Number: US15485910

    Filing Date: 2017-04-12

    Abstract: In one embodiment, a method for serverless computing comprises: receiving a task definition, wherein the task definition comprises a first task and a second task chained to the first task; adding the first task and the second task to a task queue; executing the first task from the task queue using hardware computing resources in a first serverless environment associated with a first serverless environment provider; and executing the second task from the task queue using hardware computing resources in a second serverless environment selected based on a condition on an output of the first task.
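
    The sketch below illustrates one way to read this chaining behavior: two tasks go through a queue, the first runs in one serverless environment, and a condition on its output selects the environment for the second. The provider names, the run_on callable, and the condition are hypothetical.

```python
# Illustrative sketch of chained task execution across serverless providers.
from collections import deque

def run_on(provider, task, payload):
    print(f"running {task} on {provider}")
    return {"size": len(str(payload))}  # stand-in for the task's real output

def execute_chain(first_task, second_task, payload):
    queue = deque([first_task, second_task])          # task queue from the definition
    output = run_on("provider_a", queue.popleft(), payload)
    # condition on the first task's output decides the second environment
    provider = "provider_b" if output["size"] > 10 else "provider_a"
    return run_on(provider, queue.popleft(), output)

execute_chain("extract", "transform", {"records": [1, 2, 3]})
```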

    Optimizing serverless computing using a distributed computing framework

    Publication Number: US10678444B2

    Publication Date: 2020-06-09

    Application Number: US15943640

    Filing Date: 2018-04-02

    Abstract: Aspects of the technology provide improvements to a Serverless Computing (SLC) workflow by determining when and how to optimize SLC jobs for computing in a Distributed Computing Framework (DCF). DCF optimization can be performed by abstracting SLC tasks into different workflow configurations to determine optimal arrangements for execution in a DCF environment. A process of the technology can include steps for receiving an SLC job including one or more SLC tasks, executing one or more of the tasks to determine a latency metric and a throughput metric for the SLC tasks, and determining if the SLC tasks should be converted to a Distributed Computing Framework (DCF) format based on the latency metric and the throughput metric. Systems and machine-readable media are also provided.
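
    A schematic version of the decision step in this abstract is sketched below: probe a task's latency and throughput on a sample input, then decide whether it should be converted to a DCF job. The thresholds, timing harness, and example task are illustrative assumptions.

```python
# Schematic latency/throughput probe; thresholds are invented for illustration.
import time

LATENCY_THRESHOLD_S = 0.5      # assumed: tasks slower than this may benefit from DCF
THROUGHPUT_THRESHOLD = 1000    # assumed: items/sec below which DCF conversion helps

def should_convert_to_dcf(task, sample_input):
    start = time.perf_counter()
    results = task(sample_input)
    latency = time.perf_counter() - start
    throughput = len(results) / latency if latency > 0 else float("inf")
    return latency > LATENCY_THRESHOLD_S or throughput < THROUGHPUT_THRESHOLD

def word_count(lines):                      # example SLC task
    return [len(line.split()) for line in lines]

print(should_convert_to_dcf(word_count, ["a b c"] * 10000))
```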

    SYSTEM AND METHOD FOR GRAPH BASED MONITORING AND MANAGEMENT OF DISTRIBUTED SYSTEMS

    Publication Number: US20190286548A1

    Publication Date: 2019-09-19

    Application Number: US16434106

    Filing Date: 2019-06-06

    Abstract: A controller can receive first and second metrics respectively indicating distributed computing system servers' CPU, memory, or disk utilization, throughput, or latency for a first time. The controller can receive third and fourth metrics for a second time. The controller can determine a first graph including vertices corresponding to the servers and edges indicating data flow between the servers, a second graph including edges indicating the first metrics satisfy a first threshold, a third graph including edges indicating the second metrics satisfy a second threshold, a fourth graph including edges indicating the third metrics fail to satisfy the first threshold, and a fifth graph including edges indicating the fourth metrics fail to satisfy the second threshold. The controller can display a sixth graph indicating at least one of first changes between the second graph and the fourth graph or second changes between the third graph and the fifth graph.
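
    The compact sketch below mirrors the graph comparison described above: edges whose metric satisfies a threshold at the first time, edges whose metric fails it at the second time, and the change set between the two. The metric values and threshold are invented for illustration.

```python
# Sketch of threshold-filtered metric graphs and their difference between two times.
def edges_where(metrics, predicate):
    """metrics: {(src_server, dst_server): value} -> set of edges passing predicate."""
    return {edge for edge, value in metrics.items() if predicate(value)}

cpu_threshold = 0.8
cpu_t1 = {("web", "db"): 0.4, ("web", "cache"): 0.6}   # first metrics (time 1)
cpu_t2 = {("web", "db"): 0.9, ("web", "cache"): 0.5}   # third metrics (time 2)

ok_at_t1  = edges_where(cpu_t1, lambda v: v <= cpu_threshold)   # "second graph"
bad_at_t2 = edges_where(cpu_t2, lambda v: v > cpu_threshold)    # "fourth graph"
changes   = ok_at_t1 & bad_at_t2   # edges that were healthy and are now failing
print(changes)   # {('web', 'db')}
```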

    SYSTEM AND METHOD FOR GRAPH BASED MONITORING AND MANAGEMENT OF DISTRIBUTED SYSTEMS

    Publication Number: US20190114247A1

    Publication Date: 2019-04-18

    Application Number: US15786790

    Filing Date: 2017-10-18

    Abstract: Systems, methods, and computer-readable media are disclosed for graph based monitoring and management of network components of a distributed streaming system. In one aspect, a method includes generating, by a processor, a first metrics and a second metrics based on data collected on a system; generating, by the processor, a topology graph representing data flow within the system; generating, by the processor, at least one first metrics graph corresponding to the first metrics based in part on the topology graph; generating, by the processor, at least one second metrics graph corresponding to the second metrics based in part on the topology graph; identifying, by the processor, a malfunction within the system based on a change in at least one of the first metrics graph and the second metrics graph; and sending, by the processor, a feedback on the malfunction to an operational management component of the system.
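
    As a toy sketch of the flow in this abstract, the code below derives a metrics graph from a topology graph, compares two snapshots, and flags links that dropped out as a malfunction to report. The component names, threshold, and change rule are illustrative only.

```python
# Toy sketch: topology-derived metrics graphs and malfunction detection via change.
def metrics_graph(topology_edges, metric, threshold):
    """Keep only topology edges whose metric currently satisfies the threshold."""
    return {e for e in topology_edges if metric.get(e, 0.0) <= threshold}

topology = {("ingest", "parse"), ("parse", "store")}
latency_before = {("ingest", "parse"): 5.0, ("parse", "store"): 8.0}
latency_after  = {("ingest", "parse"): 5.0, ("parse", "store"): 40.0}

g_before = metrics_graph(topology, latency_before, threshold=10.0)
g_after  = metrics_graph(topology, latency_after,  threshold=10.0)

malfunctioning_links = g_before - g_after      # edges that dropped out of the graph
if malfunctioning_links:
    print("send feedback to operational management:", malfunctioning_links)
```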
