Abstract:
A compilation system generates one or more energy windows in a program to be executed on a data processor such that the power/energy consumption of the data processor can be adjusted within each window, so as to minimize the overall power/energy consumption of the data processor during the execution of the program. The size(s) of the energy window(s) and/or the power option(s) in each window can be determined according to one or more parameters of the data processor and/or one or more characteristics of the energy window(s).
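A minimal sketch of the window-selection idea, assuming a made-up energy model, made-up window workloads, and a small set of hypothetical power options; it is not the patented compilation method itself.

```python
# Hypothetical sketch: choose a power option for each compiler-generated
# energy window so that total energy over the program run is reduced.
# Window workloads, power options, and the energy model are assumed values.

# Each power option: (frequency scaling factor, relative power draw).
POWER_OPTIONS = [(1.0, 1.00), (0.8, 0.65), (0.6, 0.45)]

# Each window: estimated cycles of work at nominal frequency.
windows = [2.0e9, 5.0e8, 3.5e9]

def window_energy(cycles, freq_scale, power):
    """Energy = power * time, with time stretched by the slower clock."""
    time = cycles / freq_scale          # arbitrary time units
    return power * time

def pick_options(windows, deadline=None):
    """Greedily pick the lowest-energy option per window,
    optionally respecting a total-time budget."""
    plan, total_time = [], 0.0
    for cycles in windows:
        best = min(POWER_OPTIONS,
                   key=lambda o: window_energy(cycles, o[0], o[1]))
        plan.append(best)
        total_time += cycles / best[0]
    if deadline is not None and total_time > deadline:
        # Fall back to the fastest option everywhere if the budget is blown.
        plan = [POWER_OPTIONS[0]] * len(windows)
    return plan

print(pick_options(windows))
```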
Abstract:
Methods, apparatus, and a computer software product for optimizing data transfer between two memories include determining accesses to master data stored in one memory and/or to local data stored in another memory such that either or both of the total size of the data transferred and the number of transfers required to move that data can be minimized. The master and/or local accesses are based, at least in part, on the respective structures of the master and local data.
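One way to picture the transfer minimization, under assumptions not taken from the abstract: the element indices a task touches, a contiguous layout, and the gap threshold below are all illustrative.

```python
# Hypothetical sketch: given the indices of master-memory elements a task
# touches, coalesce them into contiguous ranges so that the number of
# transfers (and, with a small gap allowance, the total data moved) is reduced.

def coalesce(indices, max_gap=0):
    """Merge sorted element indices into [start, end) transfer ranges,
    allowing gaps of up to max_gap wasted elements per range."""
    ranges = []
    for i in sorted(set(indices)):
        if ranges and i <= ranges[-1][1] + max_gap:
            ranges[-1][1] = i + 1
        else:
            ranges.append([i, i + 1])
    return ranges

accessed = [0, 1, 2, 7, 8, 9, 10, 40]
print(coalesce(accessed))            # -> [[0, 3], [7, 11], [40, 41]]
print(coalesce(accessed, max_gap=4)) # fewer transfers, a few wasted elements
```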
Abstract:
A system for compiling programs for execution on a hierarchical processing system having two or more levels of memory hierarchy can perform memory-level-specific optimizations without exceeding a specified maximum compilation time. To this end, the compiler system employs a polyhedral model and limits the dimensions of the polyhedral program representation processed at each level using a focalization operator that temporarily removes one or more dimensions of the polyhedral representation. Semantic correctness is preserved via a defocalization operator that can restore all polyhedral dimensions that had been temporarily removed.
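A toy analogue of the focalize/defocalize idea, not the actual polyhedral operators: an iteration domain is modeled as simple per-dimension bounds, one dimension is set aside while the rest are processed, and the saved bounds are restored afterwards. The domain, dimension choice, and stand-in "optimization" are assumed examples.

```python
def focalize(domain, dim):
    """Remove one dimension's bounds, remembering them for later."""
    saved = domain[dim]
    reduced = {d: b for d, b in domain.items() if d != dim}
    return reduced, (dim, saved)

def defocalize(domain, saved):
    """Restore the dimension that was temporarily removed."""
    dim, bounds = saved
    restored = dict(domain)
    restored[dim] = bounds
    return restored

domain = {"i": (0, 1024), "j": (0, 1024), "k": (0, 64)}
reduced, saved = focalize(domain, "k")     # work on a 2-D view
reduced["i"] = (0, 512)                    # e.g. restrict the outer loop
print(defocalize(reduced, saved))          # all three dimensions restored
```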
Abstract:
Methods, apparatus, and a computer software product for source code optimization are provided. In an exemplary embodiment, a first custom computing apparatus is used to optimize the execution of source code on a second computing apparatus. In this embodiment, the first custom computing apparatus contains a memory, a storage medium, and at least one processor with at least one multi-stage execution unit. The second computing apparatus contains at least one vector execution unit that allows for parallel execution of tasks on constant-strided memory locations. The first custom computing apparatus optimizes the code for parallelism, locality of operations, constant-strided memory accesses, and vectorized execution on the second computing apparatus.
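The kind of transformation alluded to can be illustrated as follows; the stride, array sizes, and the use of NumPy to stand in for hardware vector execution are assumptions for illustration only.

```python
# Illustrative sketch: a loop with a constant-strided access pattern
# (stride 2 here, an assumed value) can be replaced by a single vectorized
# operation over a strided slice, the form a vector unit executes efficiently.
import numpy as np

a = np.arange(32, dtype=np.float32)
b = np.zeros(16, dtype=np.float32)

# Scalar form: one element per iteration, addresses advance by a fixed stride.
for i in range(16):
    b[i] = 2.0 * a[2 * i]

# Vectorized form: one strided operation standing in for vector loads/stores.
b_vec = 2.0 * a[::2]
assert np.allclose(b, b_vec)
```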
Abstract:
A multiresolution parser (MRP) can selectively extract one or more information units from a dataset based on the available processing capacity and/or the arrival rate of the dataset. Should any of these parameters change, the MRP can adaptively change the information units to be extracted such that the benefit or value of the extracted information is maximized while the cost of extraction is minimized. This tradeoff is facilitated, at least in part, by an analysis of the spectral energy of the datasets expected to be processed by the MRP. The MRP can also determine its state after a processing iteration and use that state information in subsequent iterations to minimize the computations required in those iterations, thereby improving processing efficiency.
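A hypothetical sketch of the selection trade-off only: the information units, their values and costs, and the capacity and arrival-rate figures are made up, and the abstract's spectral-energy analysis is simply assumed to have produced the value estimates.

```python
# Pick the information units whose extraction value is highest per unit of
# cost, within a per-record budget derived from capacity and arrival rate.

UNITS = {               # name: (value, extraction cost in microseconds)
    "timestamp": (5, 1),
    "src_ip":    (8, 2),
    "payload":   (9, 20),
    "checksum":  (2, 4),
}

def select_units(capacity_us_per_s, arrival_rate_per_s):
    budget = capacity_us_per_s / arrival_rate_per_s   # us per record
    chosen, spent = [], 0.0
    # Greedy by value density (value / cost).
    for name, (value, cost) in sorted(UNITS.items(),
                                      key=lambda kv: kv[1][0] / kv[1][1],
                                      reverse=True):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen

print(select_units(1_000_000, 50_000))   # ample budget: most units extracted
print(select_units(1_000_000, 200_000))  # tight budget: only cheap, high-value units
```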
Abstract:
In a system for automatic generation of event-driven, tuple-space based programs from a sequential specification, a hierarchical mapping solution can target different runtimes relying on event-driven tasks (EDTs). The solution uses loop types to encode short, transitive relations among EDTs that can be evaluated efficiently at runtime. Specifically, permutable loops translate directly into conservative point-to-point synchronizations of distance one. The mapping is runtime-agnostic and can be used to target the transformed code to different runtimes.
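A small sketch of the distance-one point-to-point synchronization pattern for a permutable loop band; the tile counts and the simple in-process scheduler standing in for an EDT runtime are assumptions.

```python
# Each task (i, j) of a 2-D permutable tile band waits only on its
# distance-one neighbours (i-1, j) and (i, j-1) before it can run.
from collections import defaultdict

NI, NJ = 3, 3   # number of tiles per dimension (assumed)

def deps(i, j):
    """Distance-one predecessors along each permutable dimension."""
    preds = []
    if i > 0:
        preds.append((i - 1, j))
    if j > 0:
        preds.append((i, j - 1))
    return preds

def run():
    successors = defaultdict(list)
    for i in range(NI):
        for j in range(NJ):
            for d in deps(i, j):
                successors[d].append((i, j))
    done, order, ready = set(), [], [(0, 0)]
    while ready:
        task = ready.pop()
        order.append(task)          # the EDT body would execute here
        done.add(task)
        for succ in successors[task]:
            if succ not in done and succ not in ready \
                    and all(d in done for d in deps(*succ)):
                ready.append(succ)
    return order

print(run())   # one legal order respecting the distance-one synchronizations
```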
Abstract:
A network is designed based on its topology and the expected flow patterns in the network. Taking the latter into account can lead to efficient use of network resources and can reduce or even minimize waste. Non-interference properties of the expected flows can yield an improved or even optimal design.
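A minimal illustration of how non-interference can shrink a capacity estimate; the flow rates and the interference relation are assumed inputs, not taken from the abstract.

```python
# Size a link for the flows expected to cross it. If two flows are known
# never to be active at the same time (non-interfering), the link only needs
# to carry the larger of the two rather than their sum.

flows_on_link = {"A": 40, "B": 25, "C": 10}     # expected rates, e.g. Gb/s

# Sets of flows that can be active simultaneously (A and B never overlap).
concurrent_sets = [{"A", "C"}, {"B", "C"}]

naive_capacity = sum(flows_on_link.values())    # topology-only worst case
aware_capacity = max(sum(flows_on_link[f] for f in s) for s in concurrent_sets)

print(naive_capacity, aware_capacity)           # 75 vs 50
```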
Abstract:
A processor-implemented method includes approximating an optimization problem for training an artificial neural network as a nested polynomial optimization problem. The method also includes dividing the nested polynomial optimization problem into a sequence of sub-problems. The method further includes hierarchically solving the sequence of sub-problems to train the artificial neural network.
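A loose, toy analogue of the hierarchical decomposition, not the claimed training procedure: a small polynomial objective in two variable blocks is minimized by solving a sequence of one-variable polynomial sub-problems exactly. The objective and the split into blocks are assumptions for illustration.

```python
# Minimize f(x, y) = (x^2 + y - 3)^2 + (x + y^2 - 2)^2 by alternating exact
# solutions of the quartic sub-problem in x (y fixed) and in y (x fixed).
import numpy as np

def minimize_1d(coeffs):
    """Global minimizer of a 1-D polynomial with positive leading coefficient
    (coefficients given highest degree first)."""
    crit = [r.real for r in np.roots(np.polyder(coeffs)) if abs(r.imag) < 1e-9]
    return min(crit, key=lambda t: np.polyval(coeffs, t))

def solve(iters=25):
    x, y = 0.0, 0.0
    for _ in range(iters):
        # Sub-problem 1: f expanded as a quartic in x with y held fixed.
        x = minimize_1d([1.0, 0.0, 2*(y - 3) + 1, 2*(y**2 - 2),
                         (y - 3)**2 + (y**2 - 2)**2])
        # Sub-problem 2: f expanded as a quartic in y with x held fixed.
        y = minimize_1d([1.0, 0.0, 2*(x - 2) + 1, 2*(x**2 - 3),
                         (x**2 - 3)**2 + (x - 2)**2])
    return x, y

x, y = solve()
print(x, y, (x**2 + y - 3)**2 + (x + y**2 - 2)**2)  # final point and objective
```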
Abstract:
A compilation system can define, at compile time, the data blocks to be managed by an Event-Driven Task (EDT) based runtime/platform, and can also guide the runtime/platform on when to create and/or destroy those data blocks, so as to improve the performance of the runtime/platform. The compilation system can also guide, at compile time, how different tasks may access the data blocks they need in a manner that can improve the performance of those tasks.
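A hypothetical sketch of the compile-time analysis only: task names, block names, and the task order are assumed, and the emitted "plan" stands in for the guidance such a system would hand to an EDT runtime.

```python
# Given which tasks touch which data blocks and the intended task order,
# emit the points at which the runtime should create each block (just before
# its first use) and destroy it (just after its last use).

TASK_ORDER = ["init", "stencil_0", "stencil_1", "reduce"]
USES = {
    "init":      {"A", "B"},
    "stencil_0": {"A", "B", "tmp"},
    "stencil_1": {"B", "tmp"},
    "reduce":    {"B", "out"},
}

def datablock_plan(order, uses):
    first, last = {}, {}
    for step, task in enumerate(order):
        for blk in uses[task]:
            first.setdefault(blk, step)
            last[blk] = step
    plan = []
    for step, task in enumerate(order):
        plan += [f"create {b}" for b, s in first.items() if s == step]
        plan.append(f"run {task}")
        plan += [f"destroy {b}" for b, s in last.items() if s == step]
    return plan

print("\n".join(datablock_plan(TASK_ORDER, USES)))
```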