MANAGING RESOURCE ALLOCATION IN A STREAM PROCESSING FRAMEWORK

    Publication number: US20170083380A1

    Publication date: 2017-03-23

    Application number: US14994131

    Filing date: 2016-01-12

    CPC classification number: G06F9/5083 G06F9/52

    Abstract: The technology disclosed relates to managing resource allocation to task sequences in a stream processing framework. In particular, it relates to operating a computing grid that includes machine resources, with heterogeneous containers defined over whole machines and some containers including multiple machines. It also includes initially allocating multiple machines to a first container, initially allocating a first set of stateful task sequences to the first container, and running the first set of stateful task sequences as multiplexed units of work under control of a container-scheduler, where each unit of work for a first task sequence runs to completion on first machine resources in the first container, unless it overruns a time-out, before a next unit of work for a second task sequence runs multiplexed on the first machine resources. It further includes automatically modifying the number of machine resources and/or the number of task sequences assigned to a container.
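
    The multiplexing behavior this abstract describes can be sketched in a few lines. The following is a minimal illustration with invented names and an invented time-out value, not the patented implementation: each unit of work runs to completion unless it overruns a time-out, before the next task sequence's unit runs on the same machine resources.

```python
import time

TIME_OUT = 0.05  # seconds; illustrative value, not from the patent

def run_unit(unit, time_out=TIME_OUT):
    """Run one unit of work; report whether it finished within the time-out."""
    start = time.monotonic()
    for step in unit:  # a unit of work is modeled here as an iterable of steps
        step()
        if time.monotonic() - start > time_out:
            return False  # overran the time-out; yield the shared resources
    return True

def schedule(task_sequences):
    """Round-robin the task sequences' units of work over shared resources."""
    results = []
    pending = [list(seq) for seq in task_sequences]
    while any(pending):
        for seq in pending:
            if seq:
                unit = seq.pop(0)
                results.append(run_unit(unit))
    return results
```

    In this sketch, a slow unit does not block the container forever; it is cut off at the time-out so the next task sequence's unit can run multiplexed on the same resources.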

    RECOVERY STRATEGY FOR A STREAM PROCESSING SYSTEM

    Publication number: US20200183796A1

    Publication date: 2020-06-11

    Application number: US16793936

    Filing date: 2020-02-18

    Abstract: The technology disclosed relates to discovering multiple previously unknown and undetected technical problems in fault tolerance and data recovery mechanisms of modern stream processing systems. In addition, it relates to providing technical solutions to these previously unknown and undetected problems. In particular, the technology disclosed relates to discovering the problem of modification of batch size of a given batch during its replay after a processing failure. This problem results in over-count when the input during replay is not a superset of the input fed at the original play. Further, the technology disclosed discovers the problem of inaccurate counter updates in replay schemes of modern stream processing systems when one or more keys disappear between a batch's first play and its replay. This problem is exacerbated when data in batches is merged or mapped with data from an external data store.

    Recovery strategy for a stream processing system

    Publication number: US10606711B2

    Publication date: 2020-03-31

    Application number: US15954014

    Filing date: 2018-04-16

    Abstract: The technology disclosed relates to discovering multiple previously unknown and undetected technical problems in fault tolerance and data recovery mechanisms of modern stream processing systems. In addition, it relates to providing technical solutions to these previously unknown and undetected problems. In particular, the technology disclosed relates to discovering the problem of modification of batch size of a given batch during its replay after a processing failure. This problem results in over-count when the input during replay is not a superset of the input fed at the original play. Further, the technology disclosed discovers the problem of inaccurate counter updates in replay schemes of modern stream processing systems when one or more keys disappear between a batch's first play and its replay. This problem is exacerbated when data in batches is merged or mapped with data from an external data store.
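
    The over-count and stale-key problems the abstract identifies can be reproduced in a few lines of Python; the batch contents and counter below are invented purely for illustration:

```python
from collections import Counter

counts = Counter()

def play(batch):
    """Naive counter update with no replay awareness."""
    counts.update(batch)

original_batch = ["a", "a", "b"]
play(original_batch)        # original play commits counter state...
# ...a failure occurs before the batch is acknowledged, so it is replayed,
# but the replayed input is not a superset of the original input:
replay_batch = ["a", "c"]   # key "b" disappeared between play and replay
play(replay_batch)

# Naive replay double-counts "a" and leaves a stale count for the vanished "b":
# counts["a"] == 3 (should be 2 under exactly-once semantics), counts["b"] == 1.
```

    A recovery scheme that simply re-feeds the batch therefore needs to either restore the counter to its pre-batch state before replay or deduplicate updates per batch, which is the class of solution the patent addresses.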

    Stream Processing Task Deployment Using Precompiled Libraries

    Publication number: US20190250947A1

    Publication date: 2019-08-15

    Application number: US16396522

    Filing date: 2019-04-26

    Abstract: The technology disclosed provides a novel and innovative technique for compact deployment of application code to stream processing systems. In particular, the technology disclosed relates to obviating the need to accompany application code with its dependencies during deployment (i.e., creating fat jars) by operating a stream processing system within a container defined over worker nodes of whole machines and initializing the worker nodes with precompiled dependency libraries having precompiled classes. Accordingly, the application code is deployed to the container without its dependencies, and, once deployed, the application code is linked with the locally stored precompiled dependencies at runtime. In implementations, the application code is deployed to the container running the stream processing system in between 300 milliseconds and 6 seconds. This is drastically faster than existing deployment techniques, which take anywhere from 5 to 15 minutes.
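
    A loose Python analogy of the idea (assumed for illustration, not taken from the patent): only thin application source crosses the wire at deploy time, and its dependencies are resolved against libraries already present on the worker rather than being bundled into the payload.

```python
import types

def deploy_and_link(app_source, module_name="app"):
    """Compile shipped application source on the worker; dependencies are
    linked at import time against libraries already installed locally."""
    module = types.ModuleType(module_name)
    exec(compile(app_source, module_name, "exec"), module.__dict__)
    return module

# Only this thin source string is deployed; its dependency (here the
# standard-library json module, standing in for a precompiled dependency
# library) is resolved on the worker, not shipped with the application.
APP_SOURCE = """
import json  # resolved locally on the worker at link time

def handle(event):
    return json.dumps({"seen": event})
"""

app = deploy_and_link(APP_SOURCE)
```

    The payload is a few hundred bytes instead of a fat bundle, which is the intuition behind the sub-second deployment times the abstract claims.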

    Providing strong ordering in multi-stage streaming processing

    Publication number: US10191768B2

    Publication date: 2019-01-29

    Application number: US14986365

    Filing date: 2015-12-31

    Abstract: The technology disclosed relates to providing strong ordering in multi-stage processing of near real-time (NRT) data streams. In particular, it relates to maintaining current batch-stage information for a batch at a grid-scheduler in communication with a grid-coordinator that controls dispatch of batch-units to the physical threads for a batch-stage. This includes operating a computing grid, and queuing data from the NRT data streams as batches in pipelines for processing over multiple stages in the computing grid. Also included is determining, for a current batch-stage, batch-units pending dispatch, in response to receiving the current batch-stage information; identifying physical threads that processed batch-units for a previous batch-stage on which the current batch-stage depends and have registered pending tasks for the current batch-stage; and dispatching the batch-units for the current batch-stage to the identified physical threads subsequent to complete processing of the batch-units for the previous batch-stage.
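
    The dispatch rule in this abstract can be sketched as follows; the data structures and names are invented for the example, and the real grid-scheduler/grid-coordinator protocol is more involved. Batch-units for the current stage go to the physical threads that processed the previous stage's batch-units, and only after that previous stage has fully completed:

```python
def dispatch(current_stage, stage_threads, stage_done, batch_units):
    """Return (thread, batch_unit) assignments for the current batch-stage.

    stage_threads maps a stage to the physical threads that processed it;
    stage_done records which stages have completed all their batch-units.
    """
    previous_stage = current_stage - 1
    if not stage_done.get(previous_stage, False):
        return []  # hold dispatch until the dependency stage completes
    threads = stage_threads[previous_stage]
    # Reuse the same physical threads, preserving batch-unit order.
    return [(threads[i % len(threads)], unit)
            for i, unit in enumerate(batch_units)]
```

    Holding dispatch until the previous stage completes, and pinning the work to the same physical threads, is what gives the strong per-batch ordering across stages.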

    PROVIDING STRONG ORDERING IN MULTI-STAGE STREAMING PROCESSING (invention application, under examination and published)

    Publication number: US20170075721A1

    Publication date: 2017-03-16

    Application number: US14986365

    Filing date: 2015-12-31

    CPC classification number: G06F9/4881 G06F9/466 G06F2209/484 G06F2209/485

    Abstract: The technology disclosed relates to providing strong ordering in multi-stage processing of near real-time (NRT) data streams. In particular, it relates to maintaining current batch-stage information for a batch at a grid-scheduler in communication with a grid-coordinator that controls dispatch of batch-units to the physical threads for a batch-stage. This includes operating a computing grid, and queuing data from the NRT data streams as batches in pipelines for processing over multiple stages in the computing grid. Also included is determining, for a current batch-stage, batch-units pending dispatch, in response to receiving the current batch-stage information; identifying physical threads that processed batch-units for a previous batch-stage on which the current batch-stage depends and have registered pending tasks for the current batch-stage; and dispatching the batch-units for the current batch-stage to the identified physical threads subsequent to complete processing of the batch-units for the previous batch-stage.


    Managing resource allocation in a stream processing framework

    Publication number: US11086688B2

    Publication date: 2021-08-10

    Application number: US16200365

    Filing date: 2018-11-26

    Abstract: The technology disclosed herein relates to method, system, and computer program product (computer-readable storage device) embodiments for managing resource allocation in a stream processing framework. An embodiment operates by configuring an allocation of a task sequence and machine resources to a container, partitioning a data stream into a plurality of batches arranged for parallel processing by the container via the machine resources allocated to the container, and running the task sequence, including running at least one batch of the plurality of batches. Some embodiments may also include changing the allocation responsive to a determination of an increase in data volume, and may further include changing the allocation to a previous state of the allocation, responsive to a determination of a decrease in data volume. Additionally, time-based throughput of the data stream may be monitored for a given worker node configured to run a batch of the plurality of batches.
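
    The partitioning and allocation-adjustment steps described in this embodiment can be sketched as follows; the function names, batch size, and volume thresholds are invented for illustration:

```python
def partition(stream, batch_size):
    """Split a stream of records into fixed-size batches for parallel processing."""
    batch = []
    for record in stream:
        batch.append(record)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch  # final, possibly short, batch

def adjust_allocation(machines, data_volume, high=1000, low=100):
    """Scale a container's machine allocation with observed data volume."""
    if data_volume > high:
        return machines + 1  # grow the allocation on a volume increase
    if data_volume < low and machines > 1:
        return machines - 1  # fall back toward the previous, smaller allocation
    return machines
```

    The hysteresis between the high and low thresholds in this sketch stands in for the embodiment's notion of returning to a previous allocation state when data volume drops.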

    Managing Resource Allocation in a Stream Processing Framework

    Publication number: US20190138367A1

    Publication date: 2019-05-09

    Application number: US16200360

    Filing date: 2018-11-26

    Abstract: The technology disclosed herein relates to method, system, and computer program product (computer-readable storage device) embodiments for managing resource allocation in a stream processing framework. An embodiment operates by configuring an allocation of a task sequence and machine resources to a container, and by running the task sequence, wherein the task sequence is configured to be run continuously as a plurality of units of work corresponding to the task sequence. Some embodiments further include changing the allocation responsive to a determination of an increase in data volume. A query may be taken from the task sequence and processed. Responsive to the query, a real-time result may be returned. Query processing may involve continuously applying a rule to the data stream, in real time or near real time. The rule may be set via a query language. Additionally, the data stream may be partitioned into batches for parallel processing.
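
    The continuous query processing this embodiment describes, applying a rule to a data stream and returning results in near real time, can be sketched as a generator; the rule and event records below are invented for the example:

```python
def continuous_query(stream, rule):
    """Apply `rule` to each record as it arrives, yielding matches immediately."""
    for record in stream:
        if rule(record):
            yield record

# Example rule, standing in for one set via a query language.
rule = lambda event: event.get("level") == "error"
events = [{"level": "info"}, {"level": "error", "msg": "disk full"}]
matches = list(continuous_query(iter(events), rule))
```

    Because the generator yields per record rather than per batch, a caller sees each match as soon as the matching record arrives, which is the real-time result behavior the abstract describes.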

    Handling multiple task sequences in a stream processing framework

    Publication number: US10198298B2

    Publication date: 2019-02-05

    Application number: US14986351

    Filing date: 2015-12-31

    Abstract: The technology disclosed improves existing stream processing systems by enabling resources within the infrastructure of a stream processing system to be both scaled up and scaled down. In particular, the technology disclosed relates to a dispatch system for a stream processing system that adapts its behavior according to a computational capacity of the system based on a run-time evaluation. The technical solution includes, during run-time execution of a pipeline, comparing a count of available physical threads against a set number of logically parallel threads. When the count of available physical threads equals or exceeds the number of logically parallel threads, the solution includes concurrently processing the batches at the physical threads. Further, when there are fewer available physical threads than the number of logically parallel threads, the solution includes multiplexing the batches sequentially over the available physical threads.
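
    The run-time decision in this abstract, concurrent processing when physical threads suffice versus sequential multiplexing when they do not, can be sketched by grouping batches into dispatch rounds; this is a simplified illustration, not the patented dispatcher:

```python
def schedule_rounds(batches, available_threads):
    """Group logically parallel batches into dispatch rounds.

    With at least as many physical threads as batches, everything fits in a
    single round (concurrent processing); with fewer physical threads, the
    batches are multiplexed sequentially across multiple rounds.
    """
    n = len(available_threads)
    return [batches[i:i + n] for i in range(0, len(batches), n)]
```

    For example, three logically parallel batches over two physical threads produce two rounds, the second carrying the multiplexed leftover batch, whereas three threads would handle all three in one concurrent round.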

    Managing resource allocation in a stream processing framework

    Publication number: US10146592B2

    Publication date: 2018-12-04

    Application number: US14994131

    Filing date: 2016-01-12

    Abstract: The technology disclosed relates to managing resource allocation to task sequences in a stream processing framework. In particular, it relates to operating a computing grid that includes machine resources, with heterogeneous containers defined over whole machines and some containers including multiple machines. It also includes initially allocating multiple machines to a first container, initially allocating a first set of stateful task sequences to the first container, and running the first set of stateful task sequences as multiplexed units of work under control of a container-scheduler, where each unit of work for a first task sequence runs to completion on first machine resources in the first container, unless it overruns a time-out, before a next unit of work for a second task sequence runs multiplexed on the first machine resources. It further includes automatically modifying the number of machine resources and/or the number of task sequences assigned to a container.
