Managing resource allocation in a stream processing framework

    Publication No.: US11086687B2

    Publication Date: 2021-08-10

    Application No.: US16200360

    Filing Date: 2018-11-26

    Abstract: The technology disclosed herein relates to method, system, and computer program product (computer-readable storage device) embodiments for managing resource allocation in a stream processing framework. An embodiment operates by configuring an allocation of a task sequence and machine resources to a container, and by running the task sequence, wherein the task sequence is configured to be run continuously as a plurality of units of work corresponding to the task sequence. Some embodiments further include changing the allocation responsive to a determination of an increase in data volume. A query may be taken from the task sequence and processed. Responsive to the query, a real-time result may be returned. Query processing may involve continuously applying a rule to the data stream, in real time or near real time. The rule may be set via a query language. Additionally, the data stream may be partitioned into batches for parallel processing.
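    The allocation scheme described in this abstract can be illustrated with a minimal sketch. All names here (`Container`, `scale`, `run_continuously`) are hypothetical illustrations, not identifiers from the patent: a container holds an allocation of a task sequence and machine resources, the allocation changes when data volume increases, and the stream is partitioned into batches that the task sequence processes as units of work.

```python
# Hypothetical sketch of the allocation scheme; names are illustrative.

class Container:
    def __init__(self, workers):
        self.workers = workers          # machine resources allocated
        self.task_sequences = []

    def allocate(self, task_sequence):
        # Configure an allocation of a task sequence to this container.
        self.task_sequences.append(task_sequence)

    def scale(self, observed_volume, threshold, step=1):
        # Change the allocation responsive to an increase in data volume.
        if observed_volume > threshold:
            self.workers += step
        return self.workers


def run_continuously(task_sequence, stream, batch_size):
    # The task sequence runs as a plurality of units of work: the data
    # stream is partitioned into batches for parallel processing.
    for i in range(0, len(stream), batch_size):
        batch = stream[i:i + batch_size]
        yield [task_sequence(event) for event in batch]
```

    For example, a container holding two workers would grow to three when the observed volume crosses the threshold, while the generator yields one processed batch per unit of work.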

    RECOVERY STRATEGY FOR A STREAM PROCESSING SYSTEM

    Publication No.: US20180307571A1

    Publication Date: 2018-10-25

    Application No.: US15954014

    Filing Date: 2018-04-16

    Abstract: The technology disclosed relates to discovering multiple previously unknown and undetected technical problems in fault tolerance and data recovery mechanisms of modern stream processing systems. In addition, it relates to providing technical solutions to these previously unknown and undetected problems. In particular, the technology disclosed relates to discovering the problem of modification of batch size of a given batch during its replay after a processing failure. This problem results in over-count when the input during replay is not a superset of the input fed at the original play. Further, the technology disclosed discovers the problem of inaccurate counter updates in replay schemes of modern stream processing systems when one or more keys disappear between a batch's first play and its replay. This problem is exacerbated when data in batches is merged or mapped with data from an external data store.
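    The over-count problem described above can be made concrete with a small sketch. The names (`play`, `replay_modified`, `replay_exact`) are hypothetical, not from the patent: a counter updated during an original play is corrupted when the replayed batch differs from the original, whereas replaying the identical batch after subtracting the partial progress leaves each key counted exactly once.

```python
# Illustrative sketch of the replay over-count problem; names are hypothetical.

from collections import Counter

def play(counter, batch):
    # Original play: count every event in the batch.
    counter.update(batch)

def replay_modified(counter, new_batch):
    # Unsafe replay: the replayed batch is not identical to the original,
    # so events already counted before the failure are counted again.
    counter.update(new_batch)

def replay_exact(counter, original_batch, applied_before_failure):
    # Safe replay: subtract the partially applied progress of the original
    # play, then replay the identical batch.
    counter.subtract(applied_before_failure)
    counter.update(original_batch)
```

    If a failure strikes after only `['a']` of the batch `['a', 'b', 'a']` was applied, the safe scheme ends with `a` counted twice and `b` once, while the unsafe scheme over-counts `a`.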

    Stream processing task deployment using precompiled libraries

    Publication No.: US10275278B2

    Publication Date: 2019-04-30

    Application No.: US15265817

    Filing Date: 2016-09-14

    Abstract: The technology disclosed provides a novel and innovative technique for compact deployment of application code to stream processing systems. In particular, the technology disclosed relates to obviating the need to accompany application code with its dependencies during deployment (i.e., creating fat jars) by operating a stream processing system within a container defined over worker nodes of whole machines and initializing the worker nodes with precompiled dependency libraries having precompiled classes. Accordingly, the application code is deployed to the container without its dependencies, and, once deployed, the application code is linked with the locally stored precompiled dependencies at runtime. In implementations, the application code is deployed to the container running the stream processing system in between 300 milliseconds and 6 seconds. This is drastically faster than existing deployment techniques, which take anywhere from 5 to 15 minutes.

    Maintaining throughput of a stream processing framework while increasing processing load

    Publication No.: US09965330B2

    Publication Date: 2018-05-08

    Application No.: US14986401

    Filing Date: 2015-12-31

    CPC classification number: G06F9/505 G06F3/0613 G06F3/0631 G06F3/067 G06F9/5061

    Abstract: The technology disclosed relates to maintaining throughput of a stream processing framework while increasing processing load. In particular, it relates to defining a container over at least one worker node that has a plurality of workers, with one worker utilizing a whole core within a worker node, and queuing data from one or more incoming near real-time (NRT) data streams in multiple pipelines that run in the container and have connections to at least one common resource external to the container. It further relates to concurrently executing the pipelines at a number of workers as batches, and limiting simultaneous connections to the common resource to the number of workers by providing a shared connection to a set of batches running on a same worker regardless of the pipelines to which the batches in the set belong.
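    The shared-connection idea admits a minimal sketch. The names (`Worker`, `run_batches`) are hypothetical illustrations: each worker lazily opens a single connection to the common external resource and every batch scheduled on that worker reuses it, so the number of open connections is bounded by the worker count rather than the batch count.

```python
# Hypothetical sketch of one shared connection per worker; names are illustrative.

class Worker:
    def __init__(self, connect):
        self._connect = connect   # factory for the external connection
        self._conn = None

    def connection(self):
        # Open at most one connection per worker, shared by all batches
        # running on it, regardless of which pipeline each batch belongs to.
        if self._conn is None:
            self._conn = self._connect()
        return self._conn

    def run_batches(self, batches, process):
        conn = self.connection()
        return [process(batch, conn) for batch in batches]
```

    Running three batches on one worker opens the connection exactly once.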

    Managing processing of long tail task sequences in a stream processing framework

    Publication No.: US09842000B2

    Publication Date: 2017-12-12

    Application No.: US14986419

    Filing Date: 2015-12-31

    CPC classification number: G06F9/5038 G06F9/5072 G06F9/5088 G06F17/30516

    Abstract: The technology disclosed relates to managing processing of long tail task sequences in a stream processing framework. In particular, it relates to operating a computing grid that includes a plurality of physical threads that process data from one or more near real-time (NRT) data streams for multiple task sequences, and queuing data from the NRT data streams as batches in multiple pipelines using a grid-coordinator that controls dispatch of the batches to the physical threads. The method also includes assigning a priority-level to each of the pipelines using a grid-scheduler, wherein the grid-scheduler initiates execution of a first number of batches from a first pipeline before execution of a second number of batches from a second pipeline, responsive to respective priority levels of the first and second pipelines.
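    The priority-driven dispatch can be sketched with a standard priority queue. `GridScheduler`, `enqueue`, and `dispatch` are hypothetical names, not from the patent: batches from a higher-priority pipeline drain before batches from a lower-priority one, with FIFO order preserved within each priority level.

```python
# Sketch of priority-based batch dispatch; names are hypothetical.

import heapq
from itertools import count

class GridScheduler:
    def __init__(self):
        self._queue = []
        self._order = count()   # tie-breaker: FIFO within equal priorities

    def enqueue(self, priority, batch):
        # Lower number = higher priority.
        heapq.heappush(self._queue, (priority, next(self._order), batch))

    def dispatch(self):
        # Yield batches highest-priority first.
        while self._queue:
            _, _, batch = heapq.heappop(self._queue)
            yield batch
```

    Enqueuing a low-priority batch before two high-priority ones still dispatches both high-priority batches first.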

    HANDLING MULTIPLE TASK SEQUENCES IN A STREAM PROCESSING FRAMEWORK

    Publication No.: US20170075693A1

    Publication Date: 2017-03-16

    Application No.: US14986351

    Filing Date: 2015-12-31

    CPC classification number: G06F9/5088 G06F9/4881 G06F2209/483

    Abstract: The technology disclosed improves existing stream processing systems by allowing resources within the infrastructure of a stream processing system to be both scaled up and scaled down. In particular, the technology disclosed relates to a dispatch system for a stream processing system that adapts its behavior according to a computational capacity of the system based on a run-time evaluation. The technical solution includes, during run-time execution of a pipeline, comparing a count of available physical threads against a set number of logically parallel threads. When the count of available physical threads equals or exceeds the number of logically parallel threads, the solution includes concurrently processing the batches at the physical threads. Further, when there are fewer available physical threads than the number of logically parallel threads, the solution includes multiplexing the batches sequentially over the available physical threads.
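    The run-time dispatch decision above can be sketched as follows. The function and parameter names are hypothetical: with enough physical threads, each batch gets its own thread and runs concurrently; with fewer, the thread pool multiplexes the batches sequentially over the available threads.

```python
# Sketch of the physical-vs-logical thread dispatch decision; names are hypothetical.

from concurrent.futures import ThreadPoolExecutor

def dispatch(batches, available_physical_threads, process):
    logically_parallel = len(batches)
    if available_physical_threads >= logically_parallel:
        workers = logically_parallel          # one thread per batch: concurrent
    else:
        workers = available_physical_threads  # multiplex batches over the pool
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process, batches))
```

    Either branch returns results in batch order; only the degree of concurrency changes with capacity.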

    Recovery strategy for a stream processing system

    Publication No.: US11288142B2

    Publication Date: 2022-03-29

    Application No.: US16793936

    Filing Date: 2020-02-18

    Abstract: The technology disclosed relates to discovering multiple previously unknown and undetected technical problems in fault tolerance and data recovery mechanisms of modern stream processing systems. In addition, it relates to providing technical solutions to these previously unknown and undetected problems. In particular, the technology disclosed relates to discovering the problem of modification of batch size of a given batch during its replay after a processing failure. This problem results in over-count when the input during replay is not a superset of the input fed at the original play. Further, the technology disclosed discovers the problem of inaccurate counter updates in replay schemes of modern stream processing systems when one or more keys disappear between a batch's first play and its replay. This problem is exacerbated when data in batches is merged or mapped with data from an external data store.

    Modifying task dependencies at worker nodes using precompiled libraries

    Publication No.: US11216302B2

    Publication Date: 2022-01-04

    Application No.: US16396522

    Filing Date: 2019-04-26

    Abstract: The technology disclosed provides a novel and innovative technique for compact deployment of application code to stream processing systems. In particular, the technology disclosed relates to obviating the need to accompany application code with its dependencies during deployment (i.e., creating fat jars) by operating a stream processing system within a container defined over worker nodes of whole machines and initializing the worker nodes with precompiled dependency libraries having precompiled classes. Accordingly, the application code is deployed to the container without its dependencies, and, once deployed, the application code is linked with the locally stored precompiled dependencies at runtime. In implementations, the application code is deployed to the container running the stream processing system in between 300 milliseconds and 6 seconds. This is drastically faster than existing deployment techniques, which take anywhere from 5 to 15 minutes.

    Providing strong ordering in multi-stage streaming processing

    Publication No.: US10592282B2

    Publication Date: 2020-03-17

    Application No.: US16259745

    Filing Date: 2019-01-28

    Abstract: The technology disclosed relates to providing strong ordering in multi-stage processing of near real-time (NRT) data streams. In particular, it relates to maintaining current batch-stage information for a batch at a grid-scheduler in communication with a grid-coordinator that controls dispatch of batch-units to the physical threads for a batch-stage. This includes operating a computing grid, and queuing data from the NRT data streams as batches in pipelines for processing over multiple stages in the computing grid. Also included is determining, for a current batch-stage, batch-units pending dispatch, in response to receiving the current batch-stage information; identifying physical threads that processed batch-units for a previous batch-stage on which the current batch-stage depends and have registered pending tasks for the current batch-stage; and dispatching the batch-units for the current batch-stage to the identified physical threads subsequent to complete processing of the batch-units for the previous batch-stage.
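    The stage-affinity dispatch described above admits a small sketch. `GridCoordinator`, `record_completion`, and `dispatch` are hypothetical names: a batch-unit for the current stage is dispatched only after the previous stage on which it depends has completed, and it goes to the same physical thread that processed that previous stage.

```python
# Sketch of stage-affinity dispatch for strong ordering; names are hypothetical.

class GridCoordinator:
    def __init__(self):
        self._affinity = {}      # (batch_id, stage) -> physical thread id
        self._completed = set()  # (batch_id, stage) pairs fully processed

    def record_completion(self, batch_id, stage, thread_id):
        # Remember which physical thread processed this batch-stage.
        self._affinity[(batch_id, stage)] = thread_id
        self._completed.add((batch_id, stage))

    def dispatch(self, batch_id, stage):
        # Dispatch only after the previous stage completes, and only to the
        # physical thread that processed it.
        prev = (batch_id, stage - 1)
        if prev not in self._completed:
            return None   # hold the batch-unit back
        return self._affinity[prev]
```

    A batch-unit for stage 1 is held back until stage 0 completes, then routed to stage 0's thread.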

    PROVIDING STRONG ORDERING IN MULTI-STAGE STREAMING PROCESSING

    Publication No.: US20190155646A1

    Publication Date: 2019-05-23

    Application No.: US16259745

    Filing Date: 2019-01-28

    CPC classification number: G06F9/4881 G06F2209/484 G06F2209/485

    Abstract: The technology disclosed relates to providing strong ordering in multi-stage processing of near real-time (NRT) data streams. In particular, it relates to maintaining current batch-stage information for a batch at a grid-scheduler in communication with a grid-coordinator that controls dispatch of batch-units to the physical threads for a batch-stage. This includes operating a computing grid, and queuing data from the NRT data streams as batches in pipelines for processing over multiple stages in the computing grid. Also included is determining, for a current batch-stage, batch-units pending dispatch, in response to receiving the current batch-stage information; identifying physical threads that processed batch-units for a previous batch-stage on which the current batch-stage depends and have registered pending tasks for the current batch-stage; and dispatching the batch-units for the current batch-stage to the identified physical threads subsequent to complete processing of the batch-units for the previous batch-stage.
