Method of entropy randomization on a parallel computer

    Publication number: US09335970B2

    Publication date: 2016-05-10

    Application number: US13785515

    Filing date: 2013-03-05

    CPC classification number: G06F7/58 H04L9/0861 H04L2209/125

    Abstract: Method, system, and computer program product for randomizing entropy on a parallel computing system using network arithmetic logic units (ALUs). In one embodiment, network ALUs on nodes of the parallel computing system pseudorandomly modify entropy data during broadcast operations through application of arithmetic and/or logic operations. That is, each compute node's ALU may modify the entropy data during broadcasts, thereby mixing, and thus improving, the entropy data with every hop of entropy data packets from one node to another. At each compute node, the respective ALUs may further deposit modified entropy data in, e.g., local entropy pools such that software running on the compute nodes and needing entropy data may fetch it from the entropy pools. In some embodiments, entropy data may be broadcast via dedicated packets or included in unused portions of existing broadcast packets.
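
    The per-hop mixing described in the abstract can be illustrated with a toy model. This is a minimal sketch, not IBM's implementation: the mixing constant, rotate amount, and function names are invented for illustration; a real network ALU would apply its operations in hardware as packets traverse the broadcast tree.

```python
MASK64 = 0xFFFFFFFFFFFFFFFF

def alu_mix(payload: int, node_id: int) -> int:
    """Pseudorandomly modify entropy data, as a per-hop network ALU might."""
    mixed = (payload ^ (node_id * 0x9E3779B97F4A7C15)) & MASK64
    # Rotate left by 7 bits (a simple arithmetic/logic mixing step).
    return ((mixed << 7) | (mixed >> 57)) & MASK64

def broadcast_entropy(payload: int, route: list[int]) -> dict[int, int]:
    """Broadcast along a route; each node mixes, then pools, the entropy."""
    pools = {}
    for node_id in route:
        payload = alu_mix(payload, node_id)  # mixing improves with every hop
        pools[node_id] = payload             # deposit in the local entropy pool
    return pools

pools = broadcast_entropy(0xDEADBEEF, route=[0, 1, 2, 3])
```

    Software on any node would then draw from its local pool rather than waiting on a central entropy source.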

    Method of entropy distribution on a parallel computer
    2.
    Invention grant (in force)

    Publication number: US09092285B2

    Publication date: 2015-07-28

    Application number: US13778715

    Filing date: 2013-02-27

    CPC classification number: G06F7/582 G06F7/588

    Abstract: Method for performing an operation, the operation including, responsive to receiving a file system request at a file system, retrieving a first entropy pool element from the file system, and inserting, at the file system, the first entropy pool element into a network packet sent from the file system responsive to the file system request.
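
    The claimed flow is small enough to sketch directly. The class and field names below (EntropyFS, Packet) are invented for illustration; the point is the piggybacking of an entropy pool element on an ordinary file-system reply.

```python
import os
from collections import deque

class Packet:
    """Reply packet with an auxiliary field that can carry entropy."""
    def __init__(self, data: bytes, entropy: bytes = b""):
        self.data = data
        self.entropy = entropy

class EntropyFS:
    def __init__(self):
        # Pool of pre-gathered entropy elements (here: OS randomness).
        self.pool = deque(os.urandom(16) for _ in range(4))

    def handle_request(self, path: str) -> Packet:
        reply = Packet(data=b"contents of " + path.encode())
        if self.pool:
            # Retrieve the first entropy pool element and insert it
            # into the packet sent in response to the request.
            reply.entropy = self.pool.popleft()
        return reply

fs = EntropyFS()
pkt = fs.handle_request("/etc/motd")
```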


    Determining instruction execution history in a debugger

    Publication number: US10977160B2

    Publication date: 2021-04-13

    Application number: US16597444

    Filing date: 2019-10-09

    Abstract: Determining instruction execution history in a debugger, including: retrieving, from an instruction cache, cache data that includes an age value for each cache line in the instruction cache; sorting, by the age value for each cache line, entries in the instruction cache; retrieving, using an address contained in each cache line, one or more instructions associated with the address contained in each cache line; and displaying the one or more instructions.
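
    The abstract's steps can be sketched as follows. The cache-line format and the instruction memory are invented for illustration; the idea is simply that sorting instruction-cache entries by age reconstructs an approximate execution order.

```python
# Hypothetical instruction memory: address -> disassembled instruction.
MEMORY = {0x100: "push rbp", 0x104: "mov rbp, rsp", 0x108: "ret"}

def execution_history(icache: list[dict]) -> list[str]:
    """Order cache lines by age, then look up each line's instruction."""
    # Higher age value = filled earlier; list oldest entries first.
    ordered = sorted(icache, key=lambda e: e["age"], reverse=True)
    return [MEMORY[e["addr"]] for e in ordered]

history = execution_history([
    {"addr": 0x108, "age": 1},  # most recently filled line
    {"addr": 0x100, "age": 9},  # oldest line
    {"addr": 0x104, "age": 5},
])
# history lists instructions from oldest to newest cache line
```

    A debugger could then display `history` without any instrumentation of the debugged program itself.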

    Determining instruction execution history in a debugger

    Publication number: US10552297B2

    Publication date: 2020-02-04

    Application number: US16415503

    Filing date: 2019-05-17

    Abstract: Determining instruction execution history in a debugger, including: retrieving, from an instruction cache, cache data that includes an age value for each cache line in the instruction cache; sorting, by the age value for each cache line, entries in the instruction cache; retrieving, using an address contained in each cache line, one or more instructions associated with the address contained in each cache line; and displaying the one or more instructions.

    Determining instruction execution history in a debugger

    Publication number: US10372590B2

    Publication date: 2019-08-06

    Application number: US14088030

    Filing date: 2013-11-22

    Abstract: Determining instruction execution history in a debugger, including: retrieving, from an instruction cache, cache data that includes an age value for each cache line in the instruction cache; sorting, by the age value for each cache line, entries in the instruction cache; retrieving, using an address contained in each cache line, one or more instructions associated with the address contained in each cache line; and displaying the one or more instructions.

    METHOD OF ENTROPY RANDOMIZATION ON A PARALLEL COMPUTER

    Publication number: US20140223149A1

    Publication date: 2014-08-07

    Application number: US13785515

    Filing date: 2013-03-05

    CPC classification number: G06F7/58 H04L9/0861 H04L2209/125

    Abstract: Method, system, and computer program product for randomizing entropy on a parallel computing system using network arithmetic logic units (ALUs). In one embodiment, network ALUs on nodes of the parallel computing system pseudorandomly modify entropy data during broadcast operations through application of arithmetic and/or logic operations. That is, each compute node's ALU may modify the entropy data during broadcasts, thereby mixing, and thus improving, the entropy data with every hop of entropy data packets from one node to another. At each compute node, the respective ALUs may further deposit modified entropy data in, e.g., local entropy pools such that software running on the compute nodes and needing entropy data may fetch it from the entropy pools. In some embodiments, entropy data may be broadcast via dedicated packets or included in unused portions of existing broadcast packets.

    Method of entropy randomization on a parallel computer
    7.
    Invention grant (in force)

    Publication number: US09335969B2

    Publication date: 2016-05-10

    Application number: US13760510

    Filing date: 2013-02-06

    CPC classification number: G06F7/58 H04L9/0861 H04L2209/125

    Abstract: Method, system, and computer program product for randomizing entropy on a parallel computing system using network arithmetic logic units (ALUs). In one embodiment, network ALUs on nodes of the parallel computing system pseudorandomly modify entropy data during broadcast operations through application of arithmetic and/or logic operations. That is, each compute node's ALU may modify the entropy data during broadcasts, thereby mixing, and thus improving, the entropy data with every hop of entropy data packets from one node to another. At each compute node, the respective ALUs may further deposit modified entropy data in, e.g., local entropy pools such that software running on the compute nodes and needing entropy data may fetch it from the entropy pools. In some embodiments, entropy data may be broadcast via dedicated packets or included in unused portions of existing broadcast packets.


    Calculating a checksum with inactive networking components in a computing system
    8.
    Invention grant (in force)

    Publication number: US08943199B2

    Publication date: 2015-01-27

    Application number: US13740525

    Filing date: 2013-01-14

    CPC classification number: H04L43/04 H04L1/00 H04L1/0061

    Abstract: Calculating a checksum utilizing inactive networking components in a computing system, including: identifying, by a checksum distribution manager, an inactive networking component, wherein the inactive networking component includes a checksum calculation engine for computing a checksum; sending, to the inactive networking component by the checksum distribution manager, metadata describing a block of data to be transmitted by an active networking component; calculating, by the inactive networking component, a checksum for the block of data; transmitting, to the checksum distribution manager from the inactive networking component, the checksum for the block of data; and sending, by the active networking component, a data communications message that includes the block of data and the checksum for the block of data.
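
    The division of labor in the abstract can be sketched with two small classes. The names are invented, and CRC-32 stands in for whatever checksum the hardware engine would compute; the structure (manager hands metadata to an idle component, active component sends data plus checksum) follows the claim.

```python
import zlib

class InactiveNIC:
    """Idle networking component exposing its checksum calculation engine."""
    def checksum(self, block: bytes) -> int:
        # zlib.crc32 stands in for a hardware checksum engine.
        return zlib.crc32(block)

class ChecksumDistributionManager:
    def __init__(self, idle_nic: InactiveNIC):
        self.idle_nic = idle_nic

    def build_message(self, block: bytes) -> dict:
        # Offload the calculation to the otherwise-idle component...
        csum = self.idle_nic.checksum(block)
        # ...then the active component transmits data plus checksum.
        return {"data": block, "checksum": csum}

msg = ChecksumDistributionManager(InactiveNIC()).build_message(b"payload")
```

    The benefit is that the active component's own checksum engine stays free for traffic already in flight.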


    Remote direct memory access (‘RDMA’) in a parallel computer
    9.
    Invention grant (in force)

    Publication number: US08874681B2

    Publication date: 2014-10-28

    Application number: US13688706

    Filing date: 2012-11-29

    CPC classification number: G06F15/167 G06F12/00 G06F12/1081

    Abstract: Remote direct memory access (‘RDMA’) in a parallel computer, the parallel computer including a plurality of nodes, each node including a messaging unit, including: receiving an RDMA read operation request that includes a virtual address representing a memory region at which to receive data to be transferred from a second node to the first node; responsive to the RDMA read operation request: translating the virtual address to a physical address; creating a local RDMA object that includes a counter set to the size of the memory region; sending a message that includes a DMA write operation request, the physical address of the memory region on the first node, the physical address of the local RDMA object on the first node, and a remote virtual address on the second node; and receiving the data to be transferred from the second node.
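
    A rough sketch of the first node's handling of the read request follows. The page table and message fields are invented for illustration; a real messaging unit would build hardware descriptors, and `id(rdma_obj)` merely stands in for the object's physical address.

```python
# Hypothetical virtual -> physical translation table on the first node.
PAGE_TABLE = {0x7F0000: 0x12000}

def handle_rdma_read(virt_addr: int, region_size: int, remote_virt: int) -> dict:
    """Service an RDMA read request per the claimed steps."""
    phys = PAGE_TABLE[virt_addr]         # translate virtual -> physical
    rdma_obj = {"counter": region_size}  # counts bytes still expected
    # Message asking the second node to DMA-write its data to our region.
    return {
        "op": "dma_write_request",
        "region_phys": phys,
        "rdma_obj_phys": id(rdma_obj),   # stand-in for the object's address
        "remote_virt": remote_virt,
        "rdma_obj": rdma_obj,
    }

msg = handle_rdma_read(0x7F0000, region_size=4096, remote_virt=0x80000)
```

    The counter lets the first node detect completion: each arriving DMA write decrements it until the whole region has landed.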


    Adaptive recovery for parallel reactive power throttling

    Publication number: US08799696B2

    Publication date: 2014-08-05

    Application number: US13706882

    Filing date: 2012-12-06

    CPC classification number: G06F1/3234 G06F1/206 Y02D10/16

    Abstract: Power throttling may be used to conserve power and reduce heat in a parallel computing environment. Compute nodes in the parallel computing environment may be organized into groups based on, for example, whether they execute tasks of the same job or receive power from the same converter. Once one of compute nodes in the group detects that a parameter (i.e., temperature, current, power consumption, etc.) has exceeded a first threshold, power throttling on all the nodes in the group may be activated. However, before deactivating power throttling, a plurality of parameters associated with the group of compute nodes may be monitored to ensure they are all below a second threshold. If so, the power throttling for all of the compute nodes is deactivated.
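
    The two-threshold policy above is a hysteresis loop, sketched here as a toy model. The threshold values and the use of temperature as the sole parameter are illustrative assumptions; the abstract allows current or power consumption as well.

```python
T_ACTIVATE = 85.0    # first threshold: any node above this throttles the group
T_DEACTIVATE = 70.0  # second threshold: all parameters must drop below this

def update_throttle(temps: list[float], throttled: bool) -> bool:
    """Return the group's new throttling state given per-node temperatures."""
    if not throttled:
        # One node exceeding the first threshold throttles the whole group.
        return any(t > T_ACTIVATE for t in temps)
    # Deactivate only once every monitored parameter is below the
    # second threshold (hysteresis prevents rapid on/off cycling).
    return not all(t < T_DEACTIVATE for t in temps)

state = update_throttle([60.0, 90.0, 65.0], throttled=False)  # activates
state = update_throttle([72.0, 69.0, 65.0], throttled=state)  # stays on
state = update_throttle([68.0, 69.0, 65.0], throttled=state)  # releases
```

    Using two thresholds rather than one is the adaptive part: the gap between them absorbs transient spikes so the group does not oscillate.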
