Scheduling mapreduce job sets
    41.
    Invention grant (in force)

    Publication No.: US09141430B2

    Publication Date: 2015-09-22

    Application No.: US13460687

    Filing Date: 2012-04-30

    IPC Classes: G06F9/46 G06F9/50

    CPC Classes: G06F9/5038 G06F2209/5017

    Abstract: Determining a schedule of a batch workload of MapReduce jobs is disclosed. A set of multi-stage jobs for processing in a MapReduce framework is received, for example, in a master node. Each multi-stage job includes a duration attribute, and each duration attribute includes a stage duration and a stage type. The MapReduce framework is separated into a plurality of resource pools. The multi-stage jobs are separated into a plurality of subgroups corresponding with the plurality of pools. Each subgroup is configured for concurrent processing in the MapReduce framework. The multi-stage jobs in each of the plurality of subgroups are placed in an order according to increasing stage duration. For each pool, the multi-stage jobs in increasing order of stage duration are sequentially assigned from either a front of the schedule or a tail of the schedule by stage type.

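The front/tail placement described in the abstract resembles Johnson's rule for two-stage flow shops. A minimal sketch, assuming a two-stage (map/reduce) model in which each job's duration attribute names its shorter stage; the function name and tuple layout are illustrative, not taken from the patent claims:

```python
def schedule_pool(jobs):
    """jobs: list of (name, stage_duration, stage_type) tuples, where
    stage_type ('map' or 'reduce') identifies the job's shorter stage.
    Returns job names in execution order for one resource pool."""
    front, tail = [], []
    for name, duration, stage_type in sorted(jobs, key=lambda j: j[1]):
        if stage_type == "map":
            front.append(name)       # short map stage: assign from the front
        else:
            tail.insert(0, name)     # short reduce stage: assign from the tail
    return front + tail

jobs = [("j1", 4, "reduce"), ("j2", 1, "map"), ("j3", 2, "reduce"), ("j4", 3, "map")]
print(schedule_pool(jobs))
```

Jobs with short map stages end up at the front in increasing order, and jobs with short reduce stages at the tail in decreasing order, which minimizes makespan in the classical two-stage setting.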

    Methods and systems for distributed processing on consumer devices
    42.
    Invention grant (in force)

    Publication No.: US09059996B2

    Publication Date: 2015-06-16

    Application No.: US14164642

    Filing Date: 2014-01-27

    Applicant: CSC Holdings, LLC

    Abstract: Systems and methods are used to provide distributed processing on a service provider network that includes a plurality of remotely located consumer devices. Each of the remotely located consumer devices includes a processing device. A service is provided from the service provider network to the remotely located consumer devices. A task is processed in a distributed manner on the processing devices of the remotely located consumer devices, this distributed processing being unrelated to the service provided to the consumers. The distributed processing occurs even when the processing devices are in use by their corresponding consumer devices.


    SCHEDULING, INTERPRETING AND RASTERISING TASKS IN A MULTI-THREADED RASTER IMAGE PROCESSOR
    43.
    Invention application (in force)

    Publication No.: US20150145872A1

    Publication Date: 2015-05-28

    Application No.: US14548168

    Filing Date: 2014-11-19

    IPC Classes: G06T1/20 G06F9/46

    Abstract: A method of rasterising a document using a plurality of threads interprets objects of the document by performing interpreting tasks associated with the objects. Objects associated with different pages are interpreted in parallel. A plurality of rasterising tasks associated with the performed interpreting tasks are established, each performed interpreting task establishing a plurality of rasterising tasks. The method estimates an amount of parallelisable work available to be performed using the plurality of threads. The amount of parallelisable work is estimated using the established rasterising tasks and an expected number of interpreting tasks to be performed. The method selects, based on the estimated amount of parallelisable work, one of (i) an interpreting task to interpret objects of the document, and (ii) a rasterising task from the established plurality of rasterising tasks, and then executes the selected task using at least one thread to rasterise the document.

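The selection step, in which a thread picks either an interpreting or a rasterising task based on the estimated parallelisable work, can be sketched as follows. The threshold policy and all names here are assumptions for illustration, not the patent's claimed method:

```python
def select_task(pending_rasterise, expected_interpreting, num_threads):
    """Estimate parallelisable work as queued rasterising tasks plus the
    expected number of interpreting tasks still to run. Prefer rasterising
    when there is enough work to keep all threads busy; otherwise interpret
    more pages to generate new rasterising tasks."""
    parallelisable_work = len(pending_rasterise) + expected_interpreting
    if pending_rasterise and parallelisable_work >= num_threads:
        return ("rasterise", pending_rasterise.pop(0))
    return ("interpret", None)

print(select_task(["page1-band1", "page1-band2"], 2, 4))
```

When the rasterising queue plus the expected interpreting work falls below the thread count, the scheduler biases toward interpreting so the pipeline does not starve.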

    Systems and methods to process a request received at an application program interface
    44.
    Invention grant (in force)

    Publication No.: US09043401B2

    Publication Date: 2015-05-26

    Application No.: US12576097

    Filing Date: 2009-10-08

    IPC Classes: G06F15/16 G06F9/50

    Abstract: Methods and systems to process a request received at an application program interface are described. The system receives a request from a client machine that includes a job associated with data. The request is received at an application program interface. Next, a peer-to-peer network of processing nodes generates a plurality of sub-jobs based on the job. The peer-to-peer network schedules the plurality of sub-jobs for parallel processing based on the availability of the resources respectively utilized by the sub-jobs, and processes the sub-jobs in parallel before generating task results respectively associated with the plurality of sub-jobs.

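One way to read the resource-aware scheduling step is as a greedy wave schedule: each sub-job runs as soon as the resource it uses is free. This sketch assumes each sub-job needs exactly one named resource and that every named resource exists in the pool; all names are hypothetical:

```python
def schedule_sub_jobs(sub_jobs, available):
    """sub_jobs: list of (name, resource) pairs; available: set of resource
    names, each usable by one sub-job at a time. Returns waves of sub-job
    names that can run in parallel. Assumes every requested resource is
    present in `available`."""
    waves, pending = [], list(sub_jobs)
    while pending:
        busy, wave, rest = set(), [], []
        for name, resource in pending:
            if resource in available and resource not in busy:
                wave.append(name)        # resource free this wave: run now
                busy.add(resource)
            else:
                rest.append((name, resource))  # wait for the next wave
        waves.append(wave)
        pending = rest
    return waves

print(schedule_sub_jobs([("s1", "cpu"), ("s2", "gpu"), ("s3", "cpu")], {"cpu", "gpu"}))
```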

    Runtime task with inherited dependencies for batch processing
    46.
    Invention grant (in force)

    Publication No.: US08990820B2

    Publication Date: 2015-03-24

    Application No.: US12339083

    Filing Date: 2008-12-19

    IPC Classes: G06F9/46 G06F9/48

    CPC Classes: G06F9/4843 G06F2209/5017

    Abstract: A batch job processing architecture that dynamically creates runtime tasks for batch job execution and optimizes parallelism. Task creation can be based on the amount of processing power available locally or across batch servers. The work can be allocated across multiple threads in as many batch server instances as are available. A master task splits the items to be processed into smaller parts and creates a runtime task for each. The batch server picks up and executes as many runtime tasks as it is configured to handle. The runtime tasks can be run in parallel to maximize hardware utilization. Scalability is provided by splitting runtime task execution across available batch server instances, and also across machines. During runtime task creation, all dependency and batch group information is propagated from the master task to all runtime tasks. Dependencies and batch group configuration are honored by the batch engine.

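The master-task split with dependency and batch-group propagation might look like this minimal sketch. The chunk size, dict layout, and field names are assumptions for illustration:

```python
def create_runtime_tasks(items, chunk_size, master_deps, batch_group):
    """Split a master task's items into chunk-sized runtime tasks, copying
    the master's dependency list and batch-group label onto each one so the
    batch engine can honor them independently."""
    tasks = []
    for i in range(0, len(items), chunk_size):
        tasks.append({
            "items": items[i:i + chunk_size],
            "depends_on": list(master_deps),  # inherited from the master task
            "batch_group": batch_group,       # propagated batch-group info
        })
    return tasks

print(create_runtime_tasks(list(range(10)), 4, ["upstream"], "nightly"))
```

Each runtime task carries its own copy of the inherited metadata, so it can be picked up by any batch server instance without consulting the master.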

    COMPUTATION HARDWARE WITH HIGH-BANDWIDTH MEMORY INTERFACE
    47.
    Invention application (under examination, published)

    Publication No.: US20150067273A1

    Publication Date: 2015-03-05

    Application No.: US14015872

    Filing Date: 2013-08-30

    IPC Classes: G06F3/06

    Abstract: Various embodiments relating to performing multiple computations are provided. In one embodiment, a computing system includes an off-chip storage device configured to store a plurality of stream elements and associated tags, and a computation device. The computation device includes an on-chip storage device configured to store a plurality of independently addressable resident elements, and a plurality of parallel processing units. Each parallel processing unit may be configured to receive one or more stream elements and associated tags from the off-chip storage device and select one or more resident elements from a subset of resident elements driven in parallel from the on-chip storage device. A selected resident element may be indicated by an associated tag as matching a stream element. Each parallel processing unit may be configured to perform one or more computations using the one or more stream elements and the one or more selected resident elements.

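The tag-matching selection can be modelled in software by treating the resident elements as a tag-indexed table and the per-pair computation as, say, a product. This is an illustrative software analogue of the data flow, not the claimed hardware, and all names are assumptions:

```python
def match_and_compute(stream, residents):
    """stream: list of (value, tag) pairs arriving from off-chip storage;
    residents: dict mapping tag -> resident value held on-chip.
    Each stream element selects the resident whose tag matches, and a
    computation (a product here) is performed on the matched pair.
    Unmatched stream elements are skipped."""
    return [value * residents[tag] for value, tag in stream if tag in residents]

print(match_and_compute([(2, "a"), (3, "b"), (5, "x")], {"a": 10, "b": 100}))
```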

    CPU SCHEDULER CONFIGURED TO SUPPORT LATENCY SENSITIVE VIRTUAL MACHINES
    48.
    Invention application (in force)

    Publication No.: US20150058861A1

    Publication Date: 2015-02-26

    Application No.: US14468121

    Filing Date: 2014-08-25

    Applicant: VMware, Inc.

    IPC Classes: G06F9/50

    Abstract: A host computer has one or more physical central processing units (CPUs) that support the execution of a plurality of containers, where the containers each include one or more processes. Each process of a container is assigned to execute exclusively on a corresponding physical CPU when the corresponding container is determined to be latency sensitive. The assignment of a process to execute exclusively on a corresponding physical CPU includes the migration of tasks from the corresponding physical CPU to one or more other physical CPUs of the host system, and the directing of task and interrupt processing to the one or more other physical CPUs. Tasks of the process corresponding to the container are then executed on the corresponding physical CPU.

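A software sketch of the exclusive-assignment bookkeeping: each latency-sensitive process gets a dedicated physical CPU, and the remaining CPUs absorb migrated tasks and interrupt processing. On Linux the actual pinning could be done with `os.sched_setaffinity`; the planning function and its names here are assumptions:

```python
def plan_exclusive_assignment(all_cpus, latency_sensitive_procs):
    """Dedicate one physical CPU to each latency-sensitive process and
    return the remaining CPUs, to which other tasks and interrupt
    processing would be migrated. Assumes len(all_cpus) exceeds the
    number of latency-sensitive processes."""
    remaining = list(all_cpus)
    assignment = {proc: remaining.pop(0) for proc in latency_sensitive_procs}
    return assignment, remaining

dedicated, shared = plan_exclusive_assignment([0, 1, 2, 3], ["vcpu0", "vcpu1"])
print(dedicated, shared)
```

The key invariant is that a dedicated CPU never appears in the shared set, so no general task or interrupt handler is steered back onto it.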

    Task Scheduling for Highly Concurrent Analytical and Transaction Workloads
    50.
    Invention application (in force)

    Publication No.: US20140380322A1

    Publication Date: 2014-12-25

    Application No.: US13925629

    Filing Date: 2013-06-24

    Applicant: SAP AG

    IPC Classes: G06F9/48

    Abstract: Systems and methods for a task scheduler with dynamic adjustment of concurrency levels and task granularity are disclosed, for improved execution of highly concurrent analytical and transactional systems. The task scheduler can avoid both overcommitment and underutilization of computing resources by monitoring and controlling the number of active worker threads. The number of active worker threads can be adapted to avoid underutilization of computing resources by giving the OS control of additional worker threads that process blocked application tasks. The task scheduler can dynamically determine a number of parallel operations for a particular task based on the number of available threads. The number of available worker threads can be determined based on the average availability of worker threads in the recent history of the application. Based on the number of available worker threads, a partitionable operation can be partitioned into a number of sub-operations and executed in parallel.

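The availability-driven partitioning step can be sketched as follows; the ceiling-division chunking and all names are assumptions for illustration:

```python
def partition_operation(work_items, avg_free_threads):
    """Split a partitionable operation into at most avg_free_threads
    sub-operations, where avg_free_threads reflects the recent average
    availability of worker threads. Never creates more sub-operations
    than there are work items."""
    n = max(1, min(avg_free_threads, len(work_items)))
    size = -(-len(work_items) // n)  # ceiling division: items per sub-operation
    return [work_items[i:i + size] for i in range(0, len(work_items), size)]

print(partition_operation(list(range(10)), 4))
```

Because the partition count tracks recent thread availability rather than a static configuration, the same operation yields fewer, larger sub-operations when the system is loaded and more, smaller ones when threads are idle.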