    1. Transactional memory that performs a direct 32-bit lookup operation
    Granted patent (In force)

    Publication number: US09100212B2

    Publication date: 2015-08-04

    Application number: US13552605

    Filing date: 2012-07-18

    CPC classification number: H04L12/4625 G06F9/3004 G06F12/06 G06F15/163

    Abstract: A transactional memory (TM) receives a lookup command across a bus from a processor. The command includes a base address, a starting bit position, and a mask size. In response to the lookup command, the TM pulls an input value (IV). The TM uses the starting bit position and the mask size to select a portion of the IV. A first sub-portion of the portion of the IV and the base address are summed to generate a memory address. The memory address is used to read a word containing multiple result values (RVs) from memory. One RV from the word is selected using a multiplexing circuit and a second sub-portion of the portion of the IV. If the selected RV is a final value, then the lookup operation is complete and the TM sends the RV to the processor; otherwise the TM performs another lookup operation based upon the selected RV.

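    The dataflow of this lookup can be sketched in software. Below is a minimal Python model; the word width of four result values, the final-flag bit, and the reuse of a non-final RV as the next base address are illustrative assumptions, not details from the patent.

```python
# Minimal Python model of the direct lookup datapath described above.
# Assumptions not taken from the abstract: a memory word holds four
# result values, the top bit of an RV flags it as final, and a
# non-final RV supplies the base address of the next lookup.

WORD_RVS = 4          # result values packed per memory word (assumed)
FINAL_BIT = 1 << 31   # assumed flag bit marking a final result value

def direct_lookup(memory, base_addr, iv, start_bit, mask_size):
    """One lookup iteration; returns (rv, is_final)."""
    # Select a portion of the input value using the start bit and mask size.
    portion = (iv >> start_bit) & ((1 << mask_size) - 1)
    sub1 = portion // WORD_RVS       # first sub-portion: word offset
    sub2 = portion % WORD_RVS        # second sub-portion: RV within the word
    word = memory[base_addr + sub1]  # read a word of WORD_RVS result values
    rv = word[sub2]                  # multiplexing circuit selects one RV
    return rv & ~FINAL_BIT, bool(rv & FINAL_BIT)

def lookup(memory, base_addr, iv, start_bit, mask_size):
    """Repeat the lookup until a final result value is found."""
    rv, final = direct_lookup(memory, base_addr, iv, start_bit, mask_size)
    while not final:
        rv, final = direct_lookup(memory, rv, iv, start_bit, mask_size)
    return rv
```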

    2. Transactional memory that performs a direct 24-bit lookup operation
    Granted patent (In force)

    Publication number: US09098264B2

    Publication date: 2015-08-04

    Application number: US13552627

    Filing date: 2012-07-18

    Abstract: A transactional memory (TM) receives a lookup command across a bus from a processor. Only final result values are stored in memory. The command includes a base address, a starting bit position, and a mask size. In response to the lookup command, the TM pulls an input value (IV). A selecting circuit within the TM uses the starting bit position and mask size to select a portion of the IV. The portion of the IV and the base address are used to generate a memory address. The memory address is used to read a word containing multiple result values (RVs) from memory. One RV from the word is selected using a multiplexing circuit and a result location value (RLV) generated from the portion of the IV. A word selector circuit and arithmetic circuits are used to generate the memory address and RLV. The TM sends the selected RV to the processor.


    3. Recursive lookup with a hardware trie structure that has no sequential logic elements
    Granted patent (In force)

    Publication number: US08902902B2

    Publication date: 2014-12-02

    Application number: US13552555

    Filing date: 2012-07-18

    CPC classification number: H03K17/00 G06F9/467 G06F13/40 H04L45/745 H04L45/748

    Abstract: A hardware trie structure includes a tree of internal node circuits and leaf node circuits. Each internal node is configured by a corresponding multi-bit node control value (NCV). Each leaf node can output a corresponding result value (RV). An input value (IV) supplied onto input leads of the trie causes signals to propagate through the trie such that one of the leaf nodes outputs one of the RVs onto output leads of the trie. In a transactional memory, a memory stores a set of NCVs and RVs. In response to a lookup command, the NCVs and RVs are read out of memory and are used to configure the trie. The IV of the lookup is supplied to the input leads, and the trie looks up an RV. A non-final RV initiates another lookup in a recursive fashion, whereas a final RV is returned as the result of the lookup command.

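    The propagation through the trie can be modeled combinationally. In the sketch below, each internal node's NCV is assumed to select which bit of the input value steers that node, and the trie is assumed to be a perfect binary tree of fixed depth; the actual NCV encoding is not given in the abstract.

```python
# Combinational model of the hardware trie: internal nodes are stored in a
# heap-style array, each configured by a node control value (NCV) that here
# is assumed to pick the steering bit of the input value; leaves hold the
# result values (RVs). Depth and NCV meaning are illustrative assumptions.

def trie_lookup(ncvs, rvs, iv, depth=3):
    """Walk a perfect trie of `depth` internal levels; return the RV."""
    node = 0  # index of the current internal node
    for _ in range(depth):
        bit = (iv >> ncvs[node]) & 1   # NCV selects the steering bit
        node = 2 * node + 1 + bit      # descend to left/right child
    # after `depth` levels, node indexes past the 2**depth - 1 internal
    # nodes into the leaf (result value) array
    return rvs[node - (2 ** depth - 1)]
```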

    4. Transactional memory that performs an atomic metering command
    Granted patent (In force)

    Publication number: US08775686B2

    Publication date: 2014-07-08

    Application number: US13598533

    Filing date: 2012-08-29

    Applicant: Gavin J. Stark

    Inventor: Gavin J. Stark

    Abstract: A transactional memory (TM) receives an Atomic Metering Command (AMC) across a bus from a processor. The command includes a memory address and a meter pair indicator value. In response to the AMC, the TM pulls an input value (IV). The TM uses the memory address to read a word including multiple credit values from a memory unit. Circuitry within the TM selects a pair of credit values, subtracts the IV from each of the pair of credit values thereby generating a pair of decremented credit values, compares the pair of decremented credit values with a threshold value, respectively, thereby generating a pair of indicator values, performs a lookup based upon the pair of indicator values and the meter pair indicator value, and outputs a selector value and a result value that represents a meter color. The selector value determines the credit values written back to the memory unit.

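    The metering dataflow reads as follows in a software sketch. The color names, the zero threshold, and the write-back policy (debit both credits on green, only the first credit on yellow, none on red, in the style of a two-rate meter) are assumptions for illustration; only the select/subtract/compare/lookup sequence follows the abstract.

```python
# Software sketch of the atomic metering command's datapath. A memory
# word holds credit pairs; the meter pair indicator selects one pair,
# the IV (e.g. a packet length) is subtracted from both credits, the
# results are compared against a threshold, and the resulting indicator
# pair is looked up to produce a color and a write-back selection.

GREEN, YELLOW, RED = 0, 1, 2  # assumed encoding of the meter color

def atomic_meter(credits, addr, pair_index, iv, threshold=0):
    """Meter `iv` against credit pair `pair_index` in the word at `addr`."""
    word = credits[addr]                       # word of credit values
    c1, c2 = word[2 * pair_index], word[2 * pair_index + 1]
    dec1, dec2 = c1 - iv, c2 - iv              # decremented credit values
    ind = (dec1 >= threshold, dec2 >= threshold)  # pair of indicator values
    # lookup on the indicator pair (assumed two-rate-meter policy)
    if ind == (True, True):
        color, wb = GREEN, (dec1, dec2)        # debit both credits
    elif ind[0]:
        color, wb = YELLOW, (dec1, c2)         # debit first credit only
    else:
        color, wb = RED, (c1, c2)              # debit nothing
    # selector value determines what is written back to the memory unit
    word[2 * pair_index], word[2 * pair_index + 1] = wb
    return color
```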

    5. Staggered Island Structure In An Island-Based Network Flow Processor
    Published application (In force)

    Publication number: US20130219100A1

    Publication date: 2013-08-22

    Application number: US13399433

    Filing date: 2012-02-17

    Applicant: Gavin J. Stark

    Inventor: Gavin J. Stark

    Abstract: An island-based network flow processor (IB-NFP) integrated circuit includes rectangular islands disposed in rows. In one example, the configurable mesh data bus is configurable to form a command/push/pull data bus over which multiple transactions can occur simultaneously on different parts of the integrated circuit. The rectangular islands of one row are oriented in staggered relation with respect to the rectangular islands of the next row. The left and right edges of islands in a row align with the left and right edges of islands two rows down in the row structure. The data bus involves multiple meshes. In each mesh, each island has a centrally located crossbar switch, six radiating half links, and half links down to the functional circuitry of the island. The staggered orientation of the islands, and the structure of the half links, allows half links of adjacent islands to align with one another.


    6. Multiple coprocessor architecture to process a plurality of subtasks in parallel
    Granted patent (Lapsed)

    Publication number: US07007156B2

    Publication date: 2006-02-28

    Application number: US09751943

    Filing date: 2000-12-28

    CPC classification number: G06F9/5044 G06F2209/5017

    Abstract: A programmed state processing machine architecture and method that provides improved efficiency for processing data manipulation tasks. In one embodiment, the processing machine comprises a control engine, a plurality of coprocessors, a data memory, and an instruction memory. A sequence of instructions having a plurality of portions is issued by the instruction memory, wherein the control engine and each of the coprocessors is caused to perform a specific task based on the portion of the instructions designated for that component. Accordingly, a data manipulation task can be divided into a plurality of subtasks that are processed in parallel by respective processing components in the architecture.


    7. Global event chain in an island-based network flow processor

    Publication number: US09626306B2

    Publication date: 2017-04-18

    Application number: US13399983

    Filing date: 2012-02-17

    CPC classification number: G06F13/00 G06F13/4022

    Abstract: An island-based network flow processor (IB-NFP) integrated circuit includes islands organized in rows. A configurable mesh event bus extends through the islands and is configured to form one or more local event rings and a global event chain. The configurable mesh event bus is configured with configuration information received via a configurable mesh control bus. Each local event ring involves event ring circuits and event ring segments. In one example, an event packet being communicated along a local event ring reaches an event ring circuit. The event ring circuit examines the event packet and determines whether it meets a programmable criterion. If the event packet meets the criterion, then the event packet is inserted into the global event chain. The global event chain communicates the event packet to a global event manager that logs events and maintains statistics and other information.

    8. Picoengine pool transactional memory architecture
    Granted patent (In force)

    Publication number: US09268600B2

    Publication date: 2016-02-23

    Application number: US13970601

    Filing date: 2013-08-20

    Applicant: Gavin J. Stark

    Inventor: Gavin J. Stark

    CPC classification number: G06F9/467 G06F15/163 H04L45/745 H04L45/7457

    Abstract: A transactional memory (TM) includes a selectable bank of hardware algorithm prework engines, a selectable bank of hardware lookup engines, and a memory unit. The memory unit stores result values (RVs), instructions, and lookup data operands. The transactional memory receives a lookup command across a bus from one of a plurality of processors. The lookup command includes a source identification value, data, a table number value, and a table set value. In response to the lookup command, the transactional memory selects one hardware algorithm prework engine and one hardware lookup engine to perform the lookup operation. The selected hardware algorithm prework engine modifies data included in the lookup command. The selected hardware lookup engine performs a lookup operation using the modified data and lookup operands provided by the memory unit. In response to performing the lookup operation, the transactional memory returns a result value and optionally an instruction.


    9. Network appliance that determines what processor to send a future packet to based on a predicted future arrival time
    Granted patent (In force)

    Publication number: US09071545B2

    Publication date: 2015-06-30

    Application number: US13668251

    Filing date: 2012-11-03

    CPC classification number: H04L45/30 H04L43/0852 H04L47/245 H04L47/283

    Abstract: A network appliance includes a network processor and several processing units. Packets of a flow pair are received onto the network appliance. Without performing deep packet inspection on any packet of the flow pair, the network processor analyzes the flows, estimates therefrom the application protocol used, and determines a predicted future time when the next packet will likely be received. The network processor determines to send the next packet to a selected one of the processing units based in part on the predicted future time. In some cases, the network processor causes a cache of the selected processing unit to be preloaded shortly before the predicted future time. When the next packet is actually received, the packet is directed to the selected processing unit. In this way, packets are directed to processing units within the network appliance based on predicted future packet arrival times without the use of deep packet inspection.

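    The scheduling idea can be illustrated with a toy predictor: estimate the flow's inter-arrival period from past arrival times alone (no payload inspection), then assign the packet to the unit that will be free soonest. Both helper functions below are hypothetical simplifications, not the patented method.

```python
# Toy sketch of flow-aware dispatch: predict the next packet's arrival
# from timing alone, then choose a processing unit. The mean-gap
# estimator and the earliest-free selection rule are assumptions.

def predict_next_arrival(arrival_times):
    """Predict the next arrival as last arrival + mean inter-arrival gap."""
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    return arrival_times[-1] + sum(gaps) / len(gaps)

def pick_processor(free_at, predicted_time):
    """Choose the unit free earliest; its cache could be preloaded
    shortly before `predicted_time`."""
    return min(range(len(free_at)), key=lambda i: free_at[i])
```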

    10. Distributed credit FIFO link of a configurable mesh data bus
    Granted patent (In force)

    Publication number: US09069649B2

    Publication date: 2015-06-30

    Application number: US13399846

    Filing date: 2012-02-17

    CPC classification number: G06F13/4022 G06F13/00 H04L47/39 H04L49/901

    Abstract: An island-based integrated circuit includes a configurable mesh data bus. The data bus includes four meshes. Each mesh includes, for each island, a crossbar switch and radiating half links. The half links of adjacent islands align to form links between crossbar switches. A link is implemented as two distributed credit FIFOs. In one direction, a link portion involves a FIFO associated with an output port of a first island, a first chain of registers, and a second FIFO associated with an input port of a second island. When a transaction value passes through the FIFO and through the crossbar switch of the second island, an arbiter in the crossbar switch returns a taken signal. The taken signal passes back through a second chain of registers to a credit count circuit in the first island. The credit count circuit maintains a credit count value for the distributed credit FIFO.

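    The credit mechanism of one link direction can be sketched as follows; the initial credit count and the collapsing of the register chains and far-end FIFO into a single queue are simplifications for illustration.

```python
# Model of credit-based flow control on one direction of a link: the
# sender may only push a transaction value while it holds credits, and
# each "taken" signal returning from the receiving crossbar restores
# one credit. Depth of 4 credits is an assumed parameter.

from collections import deque

class CreditLink:
    def __init__(self, credits=4):
        self.credits = credits   # free slots believed available downstream
        self.fifo = deque()      # stands in for registers + far-end FIFO

    def send(self, value):
        """Push a transaction value; returns False when out of credit."""
        if self.credits == 0:
            return False         # back-pressure: wait for a taken signal
        self.credits -= 1
        self.fifo.append(value)
        return True

    def take(self):
        """Far-end crossbar consumes a value; the returning taken signal
        lets the credit count circuit increment its count."""
        value = self.fifo.popleft()
        self.credits += 1
        return value
```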
