-
Publication Number: US10552226B2
Publication Date: 2020-02-04
Application Number: US15838187
Filing Date: 2017-12-11
Applicant: Apple Inc.
Inventor: Aaftab Munshi , Jeremy Sandmel
Abstract: A method and an apparatus that allocate one or more physical compute devices, such as CPUs (Central Processing Units) or GPUs (Graphics Processing Units), attached to a host processing unit running an application for executing one or more threads of the application are described. The allocation may be based on data representing a processing capability requirement from the application for executing an executable in the one or more threads. A compute device identifier may be associated with the allocated physical compute devices to schedule and execute the executable in the one or more threads concurrently in one or more of the allocated physical compute devices.
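The device-allocation step this abstract describes can be illustrated with the OpenCL host API; the patent text itself names no specific API, so the mapping to OpenCL calls is an assumption. A minimal C sketch: the application states a capability requirement (double-precision support, chosen here purely as an example), scans the attached GPUs or CPUs, and keeps the identifier of the matching device for later scheduling.

```c
/* Minimal sketch: pick a physical compute device that satisfies an
 * application-stated capability requirement, then keep its identifier
 * (cl_device_id) for later scheduling.  Assumes an OpenCL runtime;
 * the capability chosen here (FP64 support) is illustrative only. */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platform;
    cl_uint num_devices = 0;
    cl_device_id devices[8];
    cl_device_id chosen = NULL;

    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS)
        return 1;
    /* Ask for GPUs first; fall back to CPUs if none are attached. */
    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 8, devices,
                       &num_devices) != CL_SUCCESS || num_devices == 0)
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 8, devices,
                       &num_devices);

    for (cl_uint i = 0; i < num_devices; ++i) {
        cl_uint fp64_width = 0;   /* capability requirement: FP64 support */
        clGetDeviceInfo(devices[i],
                        CL_DEVICE_PREFERRED_VECTOR_WIDTH_DOUBLE,
                        sizeof(fp64_width), &fp64_width, NULL);
        if (fp64_width > 0) { chosen = devices[i]; break; }
    }
    if (chosen == NULL && num_devices > 0)
        chosen = devices[0];      /* no device met the requirement; take any */

    printf("allocated device id: %p\n", (void *)chosen);
    return chosen ? 0 : 1;
}
```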
-
Publication Number: US10534647B2
Publication Date: 2020-01-14
Application Number: US15698587
Filing Date: 2017-09-07
Applicant: Apple Inc.
Inventor: Aaftab Munshi , Jeremy Sandmel
Abstract: A method and an apparatus that execute a parallel computing program in a programming language for a parallel computing architecture are described. The parallel computing program is stored in memory in a system with parallel processors and allocates threads between a host processor and a GPU. The programming language includes an API that allows an application to make calls to allocate execution of the threads between the host processor and the GPU. The programming language includes host function data tokens for host functions performed in the host processor and kernel function data tokens for compute kernel functions performed in one or more compute processors, e.g., GPUs or CPUs, separate from the host processor.
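The host-function / kernel-function split described above maps naturally onto the OpenCL host API; the abstract names no specific API, so the calls below are an illustrative assumption (the kernel source is likewise a stand-in example). A minimal C sketch: the host function hands kernel source to the runtime, has it compiled for the allocated device, and launches the kernel function across many device threads while host code continues on the CPU.

```c
/* Minimal sketch of the host-function / kernel-function split, using the
 * OpenCL host API as an illustration.  Assumes an OpenCL runtime. */
#include <stdio.h>
#include <CL/cl.h>

static const char *kernel_src =
    "__kernel void scale(__global float *buf, float k) {\n"
    "    size_t i = get_global_id(0);\n"
    "    buf[i] *= k;\n"
    "}\n";

int main(void) {
    float data[1024];
    for (int i = 0; i < 1024; ++i) data[i] = (float)i;

    cl_platform_id platform;  cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, NULL);

    /* Host side: hand the kernel source to the runtime and compile it
     * online for the allocated device. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel kern = clCreateKernel(prog, "scale", NULL);

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(data), data, NULL);
    float k = 2.0f;
    clSetKernelArg(kern, 0, sizeof(buf), &buf);
    clSetKernelArg(kern, 1, sizeof(k), &k);

    /* Kernel side: 1024 threads of the kernel function run on the device
     * while the host function continues on the CPU. */
    size_t global = 1024;
    clEnqueueNDRangeKernel(q, kern, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);

    printf("data[10] = %f\n", data[10]);   /* expect 20.0 */
    return 0;
}
```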
-
Publication Number: US09858122B2
Publication Date: 2018-01-02
Application Number: US15236317
Filing Date: 2016-08-12
Applicant: Apple Inc.
Inventor: Aaftab Munshi , Jeremy Sandmel
CPC classification number: G06F9/5044 , G06F9/4843 , G06F2209/5018
Abstract: A method and an apparatus that allocate one or more physical compute devices, such as CPUs (Central Processing Units) or GPUs (Graphics Processing Units), attached to a host processing unit running an application for executing one or more threads of the application are described. The allocation may be based on data representing a processing capability requirement from the application for executing an executable in the one or more threads. A compute device identifier may be associated with the allocated physical compute devices to schedule and execute the executable in the one or more threads concurrently in one or more of the allocated physical compute devices.
-
Publication Number: US09411550B2
Publication Date: 2016-08-09
Application Number: US14601080
Filing Date: 2015-01-20
Applicant: Apple Inc.
Inventor: John S. Harper , Kenneth C. Dyke , Jeremy Sandmel
CPC classification number: G06F3/1431 , G06F2200/1614 , G09G5/12 , G09G5/373 , G09G5/377 , G09G2340/04 , G09G2340/0407 , G09G2340/0435 , G09G2340/0485 , G09G2360/04
Abstract: A data processing system composites graphics content, generated by an application program running on the data processing system, to generate image data. The data processing system stores the image data in a first framebuffer and displays an image generated from the image data in the first framebuffer on an internal display device of the data processing system. A scaler in the data processing system performs scaling operations on the image data in the first framebuffer, stores the scaled image data in a second framebuffer and displays an image generated from the scaled image data in the second framebuffer on an external display device coupled to the data processing system. The scaler performs the scaling operations asynchronously with respect to the compositing of the graphics content. The data processing system automatically mirrors the image on the external display device unless the application program is publishing additional graphics content for display on the external display device.
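As a rough illustration of the asynchronous mirroring path described above, the sketch below models the compositor writing into a first framebuffer while a separate scaler thread resamples it into a second framebuffer for the external display. The names, dimensions, and nearest-neighbour scaler are illustrative assumptions; a real system would perform these steps in dedicated display hardware.

```c
/* Minimal sketch: compositor fills fb1 (internal panel); a scaler thread
 * asynchronously resamples fb1 into fb2 (external display mirror). */
#include <pthread.h>
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define SRC_W 320
#define SRC_H 240
#define DST_W 640
#define DST_H 480

static uint32_t fb1[SRC_H][SRC_W];   /* composited image, internal panel */
static uint32_t fb2[DST_H][DST_W];   /* scaled mirror, external display  */
static pthread_mutex_t fb1_lock = PTHREAD_MUTEX_INITIALIZER;

/* Scaler: nearest-neighbour resample of fb1 into fb2, run on its own
 * thread so it stays asynchronous with respect to compositing. */
static void *scaler(void *arg) {
    (void)arg;
    pthread_mutex_lock(&fb1_lock);
    for (int y = 0; y < DST_H; ++y)
        for (int x = 0; x < DST_W; ++x)
            fb2[y][x] = fb1[y * SRC_H / DST_H][x * SRC_W / DST_W];
    pthread_mutex_unlock(&fb1_lock);
    return NULL;
}

int main(void) {
    /* "Compositing": fill fb1 with a solid colour for the example. */
    pthread_mutex_lock(&fb1_lock);
    memset(fb1, 0x7f, sizeof(fb1));
    pthread_mutex_unlock(&fb1_lock);

    pthread_t t;
    pthread_create(&t, NULL, scaler, NULL);   /* mirror asynchronously */
    pthread_join(t, NULL);

    printf("fb2[0][0] = 0x%08x\n", fb2[0][0]);
    return 0;
}
```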
-
Publication Number: US09292340B2
Publication Date: 2016-03-22
Application Number: US13723014
Filing Date: 2012-12-20
Applicant: Apple Inc.
Inventor: Aaftab AbdulLatif Munshi , Jeremy Sandmel
CPC classification number: G06F9/5027 , G06F8/314 , G06F8/41 , G06F8/445 , G06F8/458 , G06F9/4843 , G06F9/505 , G06F9/541 , G06T1/20 , G06T2200/28
Abstract: A method and an apparatus that execute a parallel computing program in a programming language for a parallel computing architecture are described. The parallel computing program is stored in memory in a system with parallel processors and allocates threads between a host processor and a GPU. The programming language includes an API that allows an application to make calls to allocate execution of the threads between the host processor and the GPU. The programming language includes host function data tokens for host functions performed in the host processor and kernel function data tokens for compute kernel functions performed in one or more compute processors, e.g., GPUs or CPUs, separate from the host processor.
-
Publication Number: US20150317192A1
Publication Date: 2015-11-05
Application Number: US14713144
Filing Date: 2015-05-15
Applicant: Apple Inc.
Inventor: Aaftab Munshi , Jeremy Sandmel
CPC classification number: G06F9/445 , G06F8/41 , G06F8/447 , G06F9/44542 , G06F9/4843 , G06F9/5044 , G06F9/541
Abstract: A method and an apparatus that schedule a plurality of executables in a schedule queue for execution in one or more physical compute devices such as CPUs or GPUs concurrently are described. One or more executables are compiled online from a source having an existing executable for a type of physical compute devices different from the one or more physical compute devices. Dependency relations among elements corresponding to scheduled executables are determined to select an executable to be executed by a plurality of threads concurrently in more than one of the physical compute devices. A thread initialized for executing an executable in a GPU of the physical compute devices is initialized for execution in a CPU of the physical compute devices if the GPU is busy with graphics processing threads. Sources and existing executables for an API function are stored in an API library to execute a plurality of executables in a plurality of physical compute devices, including the existing executables and online compiled executables from the sources.
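The scheduling behaviour in this abstract, a queue of executables with dependency tracking and a fallback from a busy GPU to a CPU, can be sketched as below. The task structure, queue contents, and busy flag are illustrative assumptions rather than the patent's implementation, and online compilation is represented only by the task names.

```c
/* Minimal sketch: executables wait in a queue, each tagged with a
 * dependency and a preferred device; if the GPU is busy with graphics
 * work, the dispatcher retargets the executable to a CPU. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { DEV_GPU, DEV_CPU } device_t;

typedef struct {
    const char *name;
    device_t    preferred;
    int         depends_on;   /* index of prerequisite task, or -1 */
    bool        done;
} task_t;

static bool gpu_busy_with_graphics = true;   /* illustrative flag */

static device_t pick_device(device_t preferred) {
    if (preferred == DEV_GPU && gpu_busy_with_graphics)
        return DEV_CPU;        /* fall back: GPU is rendering */
    return preferred;
}

int main(void) {
    task_t queue[] = {
        { "compile+run kernel A", DEV_GPU, -1, false },
        { "run kernel B",         DEV_GPU,  0, false },  /* needs A */
        { "host post-process",    DEV_CPU,  1, false },  /* needs B */
    };
    int n = sizeof(queue) / sizeof(queue[0]);

    /* Dispatch only tasks whose dependencies are already satisfied. */
    for (int remaining = n; remaining > 0; ) {
        for (int i = 0; i < n; ++i) {
            task_t *t = &queue[i];
            if (t->done || (t->depends_on >= 0 && !queue[t->depends_on].done))
                continue;
            device_t dev = pick_device(t->preferred);
            printf("%-22s -> %s\n", t->name, dev == DEV_GPU ? "GPU" : "CPU");
            t->done = true;
            --remaining;
        }
    }
    return 0;
}
```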
-
Publication Number: US11836506B2
Publication Date: 2023-12-05
Application Number: US18147984
Filing Date: 2022-12-29
Applicant: Apple Inc.
Inventor: Aaftab Munshi , Jeremy Sandmel
CPC classification number: G06F9/445 , G06F8/41 , G06F8/447 , G06F9/44542 , G06F9/4843 , G06F9/5044 , G06F9/541
Abstract: A method and an apparatus that schedule a plurality of executables in a schedule queue for execution in one or more physical compute devices such as CPUs or GPUs concurrently are described. One or more executables are compiled online from a source having an existing executable for a type of physical compute devices different from the one or more physical compute devices. Dependency relations among elements corresponding to scheduled executables are determined to select an executable to be executed by a plurality of threads concurrently in more than one of the physical compute devices. A thread initialized for executing an executable in a GPU of the physical compute devices is initialized for execution in a CPU of the physical compute devices if the GPU is busy with graphics processing threads. Sources and existing executables for an API function are stored in an API library to execute a plurality of executables in a plurality of physical compute devices, including the existing executables and online compiled executables from the sources.
-
Publication Number: US20230185583A1
Publication Date: 2023-06-15
Application Number: US18147984
Filing Date: 2022-12-29
Applicant: Apple Inc.
Inventor: Aaftab Munshi , Jeremy Sandmel
CPC classification number: G06F9/445 , G06F8/41 , G06F8/447 , G06F9/541 , G06F9/4843 , G06F9/5044 , G06F9/44542
Abstract: A method and an apparatus that schedule a plurality of executables in a schedule queue for execution in one or more physical compute devices such as CPUs or GPUs concurrently are described. One or more executables are compiled online from a source having an existing executable for a type of physical compute devices different from the one or more physical compute devices. Dependency relations among elements corresponding to scheduled executables are determined to select an executable to be executed by a plurality of threads concurrently in more than one of the physical compute devices. A thread initialized for executing an executable in a GPU of the physical compute devices is initialized for execution in a CPU of the physical compute devices if the GPU is busy with graphics processing threads. Sources and existing executables for an API function are stored in an API library to execute a plurality of executables in a plurality of physical compute devices, including the existing executables and online compiled executables from the sources.
-
Publication Number: US11544075B2
Publication Date: 2023-01-03
Application Number: US15234199
Filing Date: 2016-08-11
Applicant: Apple Inc.
Inventor: Aaftab Munshi , Jeremy Sandmel
Abstract: A method and an apparatus that schedule a plurality of executables in a schedule queue for execution in one or more physical compute devices such as CPUs or GPUs concurrently are described. One or more executables are compiled online from a source having an existing executable for a type of physical compute devices different from the one or more physical compute devices. Dependency relations among elements corresponding to scheduled executables are determined to select an executable to be executed by a plurality of threads concurrently in more than one of the physical compute devices. A thread initialized for executing an executable in a GPU of the physical compute devices is initialized for execution in a CPU of the physical compute devices if the GPU is busy with graphics processing threads. Sources and existing executables for an API function are stored in an API library to execute a plurality of executables in a plurality of physical compute devices, including the existing executables and online compiled executables from the sources.
-
Publication Number: US11106504B2
Publication Date: 2021-08-31
Application Number: US16741578
Filing Date: 2020-01-13
Applicant: Apple Inc.
Inventor: Aaftab Munshi , Jeremy Sandmel
Abstract: A method and an apparatus that execute a parallel computing program in a programming language for a parallel computing architecture are described. The parallel computing program is stored in memory in a system with parallel processors and allocates threads between a host processor and a GPU. The programming language includes an API that allows an application to make calls to allocate execution of the threads between the host processor and the GPU. The programming language includes host function data tokens for host functions performed in the host processor and kernel function data tokens for compute kernel functions performed in one or more compute processors, e.g., GPUs or CPUs, separate from the host processor.