Packet traffic control in a network processor

    Publication No.: US09906468B2

    Publication Date: 2018-02-27

    Application No.: US13283252

    Filing Date: 2011-10-27

    CPC classification number: H04L49/15 H04L47/52

    Abstract: A network processor controls packet traffic in a network by maintaining a count of pending packets. In the network processor, a pipe identifier (ID) is assigned to each of a number of paths connecting a packet output to respective network interfaces receiving those packets. A corresponding pipe ID is attached to each packet as it is transmitted. A counter employs the pipe ID to maintain a count of packets to be transmitted by a network interface. As a result, the network processor manages traffic on a per-pipe ID basis to ensure that traffic thresholds are not exceeded.
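
    The abstract describes per-pipe bookkeeping: each output path gets a pipe ID, a counter tracks packets in flight per pipe, and transmission is gated by a threshold. Below is a minimal C sketch of that idea; the structure names, the threshold value, and the completion callback are illustrative assumptions, not the patent's actual hardware interface.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_PIPES      64   /* assumed number of pipe IDs       */
#define PIPE_THRESHOLD 32   /* assumed per-pipe in-flight limit */

/* One in-flight counter per pipe ID. */
static uint32_t pending[NUM_PIPES];

/* Called when a packet is handed to the interface behind `pipe_id`.
 * Returns false if the per-pipe threshold would be exceeded, so the
 * caller can back off instead of oversubscribing the interface.     */
static bool pipe_try_send(uint8_t pipe_id)
{
    if (pending[pipe_id] >= PIPE_THRESHOLD)
        return false;               /* traffic threshold reached */
    pending[pipe_id]++;             /* packet now in flight      */
    return true;
}

/* Called when the interface reports that a packet tagged with
 * `pipe_id` has gone out on the wire.                          */
static void pipe_complete(uint8_t pipe_id)
{
    if (pending[pipe_id] > 0)
        pending[pipe_id]--;
}

int main(void)
{
    /* Simulate a burst on pipe 3: sends succeed until the limit. */
    int sent = 0;
    while (pipe_try_send(3))
        sent++;
    printf("accepted %d packets before throttling\n", sent);

    pipe_complete(3);               /* one completion frees one slot */
    printf("retry after completion: %s\n",
           pipe_try_send(3) ? "accepted" : "throttled");
    return 0;
}
```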

    Scalable efficient I/O port protocol
    3.
    Invention Grant (In Force)

    Publication No.: US08364851B2

    Publication Date: 2013-01-29

    Application No.: US10677583

    Filing Date: 2003-10-02

    CPC classification number: G06F15/17381 G06F12/0817 G06F2212/621

    Abstract: A system that supports a high performance, scalable, and efficient I/O port protocol to connect to I/O devices is disclosed. A distributed multiprocessing computer system contains a number of processors each coupled to an I/O bridge ASIC implementing the I/O port protocol. One or more I/O devices are coupled to the I/O bridge ASIC, each I/O device capable of accessing machine resources in the computer system by transmitting and receiving message packets. Machine resources in the computer system include data blocks, registers and interrupt queues. Each processor in the computer system is coupled to a memory module capable of storing data blocks shared between the processors. Coherence of the shared data blocks in this shared memory system is maintained using a directory based coherence protocol. Coherence of data blocks transferred during I/O device read and write accesses is maintained using the same coherence protocol as for the memory system. Data blocks transferred during an I/O device read or write access may be buffered in a cache by the I/O bridge ASIC only if the I/O bridge ASIC has exclusive copies of the data blocks. The I/O bridge ASIC includes a DMA device that supports both in-order and out-of-order DMA read and write streams of data blocks. An in-order stream of reads of data blocks performed by the DMA device always results in the DMA device receiving coherent data blocks that do not have to be written back to the memory module.
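
    One detail in the abstract is that the I/O bridge ASIC may buffer a transferred block in its cache only when it holds an exclusive copy under the directory protocol. The short C sketch below models that gating decision; the state enum, structure layout, and function names are illustrative assumptions rather than the protocol's real interface.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified coherence states as seen by the I/O bridge. */
typedef enum { STATE_INVALID, STATE_SHARED, STATE_EXCLUSIVE } coherence_state_t;

typedef struct {
    uint64_t          addr;
    coherence_state_t state;
    uint8_t           data[64];   /* one cache block */
} bridge_block_t;

/* The bridge may buffer a block for a DMA stream only if it owns the
 * block exclusively; shared copies are not cached, so later processor
 * writes cannot leave the bridge holding stale data.                  */
static bool bridge_may_buffer(const bridge_block_t *blk)
{
    return blk->state == STATE_EXCLUSIVE;
}

int main(void)
{
    bridge_block_t shared    = { .addr = 0x1000, .state = STATE_SHARED };
    bridge_block_t exclusive = { .addr = 0x2000, .state = STATE_EXCLUSIVE };

    printf("shared block buffered?    %s\n",
           bridge_may_buffer(&shared) ? "yes" : "no");
    printf("exclusive block buffered? %s\n",
           bridge_may_buffer(&exclusive) ? "yes" : "no");
    return 0;
}
```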

    METHOD AND APPARATUS FOR POWER CONTROL
    5.
    Invention Application (In Force)

    Publication No.: US20110185203A1

    Publication Date: 2011-07-28

    Application No.: US12695648

    Filing Date: 2010-01-28

    Abstract: Embodiments of the present invention relate to limiting the maximum power dissipated in a processor. When an application that requires an excessive amount of power is being executed, execution of that application may be prevented in order to reduce the power dissipated or consumed. Example embodiments may stall the issue or execution of instructions by the processor, allowing software or hardware to reduce the power drawn by an application by imposing a decrease in the application's performance.
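
    The mechanism described is a throttle: when estimated power exceeds a limit, instruction issue is stalled for a time, trading performance for lower dissipation. The loop below is a minimal software model of that policy; the power-estimate source, the limit, and the stall duration are assumptions made only for illustration.

```c
#include <stdint.h>
#include <stdio.h>

#define POWER_LIMIT   90u   /* assumed maximum allowed power (arbitrary units) */
#define STALL_CYCLES  4u    /* assumed number of cycles to hold instruction issue */

/* Stand-in for a hardware activity/power estimate for the next cycle. */
static uint32_t estimate_power(uint32_t cycle)
{
    return 60u + (cycle % 8u) * 10u;   /* synthetic ramp for the demo */
}

int main(void)
{
    uint32_t stall_left = 0;

    for (uint32_t cycle = 0; cycle < 24; cycle++) {
        if (stall_left > 0) {
            stall_left--;              /* issue held: no instructions this cycle */
            printf("cycle %2u: stalled\n", cycle);
            continue;
        }

        uint32_t power = estimate_power(cycle);
        if (power > POWER_LIMIT) {
            stall_left = STALL_CYCLES; /* throttle: trade performance for power */
            printf("cycle %2u: power %u > limit, throttling\n", cycle, power);
        } else {
            printf("cycle %2u: issue instructions (power %u)\n", cycle, power);
        }
    }
    return 0;
}
```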

    Method and apparatus for establishing secure sessions
    7.
    Invention Grant (In Force)

    Publication No.: US07240203B2

    Publication Date: 2007-07-03

    Application No.: US10025509

    Filing Date: 2001-12-19

    CPC classification number: H04L63/164 G06F21/606

    Abstract: A method and apparatus for processing security operations are described. In one embodiment, a processor includes a number of execution units that process requests for security operations. The execution units output the results of the requests to output data structures associated with those requests within a remote memory, based on pointers stored in the requests. The execution units can output the results in an order different from the order of the requests in a request queue. The processor also includes a request unit coupled to the execution units. The request unit retrieves a portion of the requests from the request queue within the remote memory, along with the associated input data structures for those requests. Additionally, the request unit distributes the retrieved requests to the execution units based on their availability for processing.
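
    The abstract's request unit pulls security requests from a queue in remote memory and hands each one to whichever execution unit is free, with results allowed to complete out of order. The C sketch below models only the dispatch-by-availability step; the queue layout, field names, and unit count are assumptions, not the processor's documented format.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_EXEC_UNITS 4

typedef struct {
    uint32_t id;          /* request identifier                    */
    uint64_t output_ptr;  /* pointer to result structure in memory */
} sec_request_t;

typedef struct {
    bool          busy;
    sec_request_t current;
} exec_unit_t;

static exec_unit_t units[NUM_EXEC_UNITS];

/* Hand a request to the first idle execution unit.
 * Returns the unit index, or -1 if every unit is busy. */
static int dispatch(const sec_request_t *req)
{
    for (int i = 0; i < NUM_EXEC_UNITS; i++) {
        if (!units[i].busy) {
            units[i].busy    = true;
            units[i].current = *req;
            return i;
        }
    }
    return -1;   /* caller leaves the request in the queue */
}

int main(void)
{
    /* A small batch of requests pulled from the (remote) queue. */
    sec_request_t queue[] = {
        { 1, 0x8000 }, { 2, 0x8100 }, { 3, 0x8200 },
        { 4, 0x8300 }, { 5, 0x8400 },
    };

    for (size_t i = 0; i < sizeof queue / sizeof queue[0]; i++) {
        int unit = dispatch(&queue[i]);
        if (unit < 0)
            printf("request %u waits: all units busy\n", queue[i].id);
        else
            printf("request %u -> execution unit %d\n", queue[i].id, unit);
    }
    return 0;
}
```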

    Apparatus and method for data deskew
    8.
    Invention Grant (In Force)

    Publication No.: US07209531B1

    Publication Date: 2007-04-24

    Application No.: US10397083

    Filing Date: 2003-03-26

    CPC classification number: H04L7/0338 H03K5/133 H04L7/005 H04L7/10

    Abstract: A deskew circuit utilizing a coarse delay adjustment and a fine delay adjustment centers the received data in the proper data window and aligns the data for proper sampling. In one scheme, bit-state transitions of a training sequence for the SPI-4 protocol are used to adjust the delays and align the transition points.
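
    The circuit centers sampling in the data eye with a coarse step followed by a fine step, driven by where the training-pattern transitions land. Below is a rough software analogy of that two-stage search; the delay range, step sizes, and the edge_offset() measurement are illustrative assumptions, not the hardware's actual behavior.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_DELAY    64
#define COARSE_STEP   8

/* Stand-in for the measured distance (in delay taps) between the
 * training-pattern transition and the desired sampling point for a
 * given delay setting.  In hardware this would come from sampling
 * the SPI-4 training sequence; here it is a synthetic linear model. */
static int edge_offset(int delay)
{
    return delay - 37;   /* pretend the ideal setting is tap 37 */
}

/* Search delays in [lo, hi) with the given step, returning the
 * setting whose transition lands closest to the sampling point. */
static int best_delay(int lo, int hi, int step)
{
    int best = lo;
    for (int d = lo; d < hi; d += step)
        if (abs(edge_offset(d)) < abs(edge_offset(best)))
            best = d;
    return best;
}

int main(void)
{
    /* Coarse pass narrows the window, fine pass centers within it. */
    int coarse = best_delay(0, MAX_DELAY, COARSE_STEP);
    int lo     = coarse - COARSE_STEP < 0 ? 0 : coarse - COARSE_STEP;
    int hi     = coarse + COARSE_STEP > MAX_DELAY ? MAX_DELAY : coarse + COARSE_STEP;
    int fine   = best_delay(lo, hi, 1);

    printf("coarse delay: %d taps, fine delay: %d taps\n", coarse, fine);
    return 0;
}
```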

    Broadcast invalidate scheme
    9.
    Invention Grant (In Force)

    Publication No.: US07076597B2

    Publication Date: 2006-07-11

    Application No.: US10685039

    Filing Date: 2003-10-14

    CPC classification number: G06F12/0826

    Abstract: A directory-based multiprocessor cache control scheme distributes invalidate messages to change the state of shared data in a computer system. The processors are grouped into a plurality of clusters. A directory controller tracks copies of shared data sent to processors in the clusters. Upon receiving an exclusive request from a processor seeking permission to modify a shared copy of the data, the directory controller generates invalidate messages requesting that other processors sharing the same data invalidate that data. These invalidate messages are sent via point-to-point transmission only to master processors in clusters actually containing a shared copy of the data. Upon receiving the invalidate message, each master processor broadcasts the invalidate message in an ordered fan-in/fan-out process to each processor in its cluster. All processors within the cluster invalidate their local copy of the shared data if it exists, and once the master processor receives acknowledgements from all processors in the cluster, it sends an invalidate acknowledgment message to the processor that originally requested exclusive rights to the shared data. The cache coherency scheme is scalable and may be implemented using the hybrid point-to-point/broadcast scheme or a conventional point-to-point-only directory-based invalidate scheme.
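
    The scheme sends a point-to-point invalidate only to the master processor of each cluster that actually holds a shared copy; the master then fans the message out inside its cluster and returns a single acknowledgement. A compact C sketch of that two-level flow follows; the cluster sizes, the bitmask bookkeeping, and the function names are assumptions made for illustration.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_CLUSTERS       4
#define PROCS_PER_CLUSTER  4

/* sharers[c] is a bitmask of processors in cluster c that hold a copy. */
static uint8_t sharers[NUM_CLUSTERS];

/* Master processor: broadcast the invalidate inside its own cluster,
 * collect local acknowledgements, then acknowledge to the requester. */
static void cluster_invalidate(int cluster)
{
    for (int p = 0; p < PROCS_PER_CLUSTER; p++) {
        if (sharers[cluster] & (1u << p)) {
            sharers[cluster] &= (uint8_t)~(1u << p);  /* local copy dropped */
            printf("  cluster %d proc %d: invalidated\n", cluster, p);
        }
    }
    printf("cluster %d master: ack to requester\n", cluster);
}

/* Directory controller: on an exclusive request, send one point-to-point
 * invalidate per cluster that actually contains a shared copy.           */
static void directory_exclusive_request(void)
{
    for (int c = 0; c < NUM_CLUSTERS; c++)
        if (sharers[c] != 0)
            cluster_invalidate(c);
}

int main(void)
{
    sharers[0] = 0x05;   /* cluster 0: procs 0 and 2 share the block */
    sharers[2] = 0x08;   /* cluster 2: proc 3 shares the block       */

    directory_exclusive_request();
    return 0;
}
```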

    Interface for a security coprocessor
    10.
    Invention Grant (In Force)

    Publication No.: US06789147B1

    Publication Date: 2004-09-07

    Application No.: US10025512

    Filing Date: 2001-12-19

    CPC classification number: H04L63/166 G06F21/602

    Abstract: A method and apparatus for processing security operations are described. In one embodiment, a processor includes a number of execution units that process requests for security operations. The execution units output the results of the requests to output data structures associated with those requests within a remote memory, based on pointers stored in the requests. The execution units can output the results in an order different from the order of the requests in a request queue. The processor also includes a request unit coupled to the execution units. The request unit retrieves a portion of the requests from the request queue within the remote memory, along with the associated input data structures for those requests. Additionally, the request unit distributes the retrieved requests to the execution units based on their availability for processing.
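
    This abstract shares with the earlier security-processor entry the detail that each request carries a pointer to its own output data structure, so results written out of order still land in the right place. The sketch below models only that pointer-directed write-back; the structure layouts and field names are assumptions for illustration.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Result structure that lives in the host's (remote) memory. */
typedef struct {
    uint32_t request_id;
    uint32_t status;
    uint8_t  payload[16];
} sec_result_t;

/* Each request carries the address of its own result slot, so the
 * coprocessor can complete requests in any order and still deliver
 * every result to the right place.                                 */
typedef struct {
    uint32_t      id;
    sec_result_t *output;   /* pointer stored in the request */
} sec_request_t;

static void complete_request(const sec_request_t *req, uint32_t status,
                             const uint8_t *payload, size_t len)
{
    req->output->request_id = req->id;
    req->output->status     = status;
    memcpy(req->output->payload, payload, len < 16 ? len : 16);
}

int main(void)
{
    sec_result_t slots[2] = { 0 };
    sec_request_t reqs[2] = {
        { .id = 7, .output = &slots[0] },
        { .id = 8, .output = &slots[1] },
    };

    /* Complete the second request first: out-of-order is harmless. */
    complete_request(&reqs[1], 0, (const uint8_t *)"cipher-b", 8);
    complete_request(&reqs[0], 0, (const uint8_t *)"cipher-a", 8);

    printf("slot 0 holds result for request %u\n", slots[0].request_id);
    printf("slot 1 holds result for request %u\n", slots[1].request_id);
    return 0;
}
```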
