Co-processor including a media access controller
    1.
    Invention Grant (In Force)

    Publication No.: US06898673B2

    Publication Date: 2005-05-24

    Application No.: US10105973

    Filing Date: 2002-03-25

    Abstract: A compute engine includes a central processing unit coupled to a coprocessor. The coprocessor includes a media access controller engine and a data transfer engine. The media access controller engine couples the compute engine to a communications network. The data transfer engine couples the media access controller engine to a set of cache memory. In further embodiments, a compute engine includes two media access controller engines. A reception media access controller engine receives data from the communications network. A transmission media access controller engine transmits data to the communications network. The compute engine also includes two data transfer engines. A streaming output engine stores network data from the reception media access controller engine in cache memory. A streaming input engine transfers data from cache memory to the transmission media access controller engine. In one implementation, the compute engine performs different network services, including but not limited to: 1) virtual private networking; 2) secure sockets layer processing; 3) web caching; 4) hypertext mark-up language compression; 5) virus checking; 6) firewall support; and 7) web switching.
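    As a rough illustration of the coupling this abstract describes, the C sketch below models a compute engine with a reception and a transmission media access controller engine, a streaming output engine, and a streaming input engine. Every type and field name is an illustrative assumption, not an identifier from the patent.

        /* Hypothetical model of the compute-engine topology described in the
         * abstract; every type and field name here is illustrative. */
        typedef struct cache_set  cache_set_t;   /* a set of cache memory          */
        typedef struct mac_engine mac_engine_t;  /* media access controller engine */

        typedef struct {
            mac_engine_t *rx_mac;   /* reception MAC engine: network -> coprocessor */
            cache_set_t  *cache;    /* cache memory that receives the network data  */
        } streaming_output_engine_t;  /* stores incoming data in cache              */

        typedef struct {
            mac_engine_t *tx_mac;   /* transmission MAC engine: coprocessor -> network */
            cache_set_t  *cache;    /* cache memory supplying the outgoing data        */
        } streaming_input_engine_t;   /* feeds cached data to the transmitter          */

        typedef struct {
            void                      *cpu;  /* central processing unit           */
            streaming_output_engine_t  so;   /* coprocessor receive-side transfer */
            streaming_input_engine_t   si;   /* coprocessor transmit-side transfer */
        } compute_engine_t;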


Method and apparatus for determining precedence in a classification engine
    4.
    Invention Application (Expired)

    Publication No.: US20070174537A1

    Publication Date: 2007-07-26

    Application No.: US11726940

    Filing Date: 2007-03-23

    IPC Classification: G06F12/06 G06F12/00

    CPC Classification: G11C15/00 Y10S707/99936

    Abstract: Disclosed is a precedence determination system including a first type memory bank configured to receive a first search signal and to provide first search result indications; a second type memory bank configured to receive a second search signal and to provide second search result indications; a precedence number table coupled to the first and second type memory banks and configured to provide programmable precedence numbers; and a precedence determination circuit coupled to the first and second type memory banks and to the precedence number table and configured to provide a third search result indication. In one embodiment, the first type memory bank can be a static random access memory (SRAM) and the second type memory bank can be a ternary content addressable memory (TCAM).
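    The selection step can be pictured with a small C sketch: each memory bank returns a search result indication, and the precedence determination circuit picks the winner using the programmable precedence numbers. The match encoding and the "lower number wins" rule below are assumptions made for illustration, not details taken from the patent.

        /* Illustrative sketch of the precedence-selection step; the match encoding
         * and the "lower number wins" rule are assumptions, not from the patent. */
        #include <stdbool.h>
        #include <stdint.h>

        #define NO_MATCH UINT32_MAX

        typedef struct {
            bool     hit;          /* search result indication from one memory bank */
            uint32_t entry_index;  /* index of the matching entry within that bank  */
        } bank_result_t;

        /* precedence[0] and precedence[1] are the programmable precedence numbers
         * assigned to the SRAM bank and the TCAM bank respectively. */
        static uint32_t resolve_precedence(bank_result_t sram, bank_result_t tcam,
                                           const uint32_t precedence[2])
        {
            if (sram.hit && tcam.hit)
                /* Both banks matched: the bank with the smaller precedence number
                 * supplies the final (third) search result indication. */
                return (precedence[0] <= precedence[1]) ? sram.entry_index
                                                        : tcam.entry_index;
            if (sram.hit)
                return sram.entry_index;
            if (tcam.hit)
                return tcam.entry_index;
            return NO_MATCH;  /* neither bank matched */
        }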


Processing packets in cache memory
    5.
    Invention Grant (In Force)

    Publication No.: US06745289B2

    Publication Date: 2004-06-01

    Application No.: US10105151

    Filing Date: 2002-03-25

    IPC Classification: G06F12/00

    Abstract: A system for processing data includes a first set of cache memory and a second set of cache memory that are each coupled to a main memory. A compute engine coupled to the first set of cache memory transfers data from a communications medium into the first set of cache memory. The system transfers the data from the first set of cache memory to the second set of cache memory, in response to a request for the data from a compute engine coupled to the second set of cache memory. Data is transferred between the sets of cache memory without accessing main memory, regardless of whether the data has been modified. The data is also transferred directly between sets of cache memory when the data is exclusively owned by a set of cache memory or shared by sets of cache memory. In one implementation, the above-described cache memory arrangement is employed with a compute engine that provides different network services, including but not limited to: 1) virtual private networking; 2) secure sockets layer processing; 3) web caching; 4) hypertext mark-up language compression; 5) virus checking; 6) firewall support; and 7) web switching.
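    A toy C sketch of the forwarding decision described above: when the second set of cache memory requests data held by the first set, the data is supplied cache-to-cache whether the line is modified, exclusively owned, or shared, and main memory is consulted only when the line is absent. The state and function names are illustrative assumptions.

        /* Toy model of the cache-to-cache forwarding decision: whether the line is
         * modified, exclusively owned, or shared, the requesting cache set is served
         * directly by the owning cache set rather than by main memory.
         * State and function names are illustrative only. */
        typedef enum { LINE_MODIFIED, LINE_EXCLUSIVE, LINE_SHARED, LINE_INVALID } line_state_t;
        typedef enum { FROM_PEER_CACHE, FROM_MAIN_MEMORY } data_source_t;

        static data_source_t choose_source(line_state_t state_in_first_cache_set)
        {
            switch (state_in_first_cache_set) {
            case LINE_MODIFIED:
            case LINE_EXCLUSIVE:
            case LINE_SHARED:
                return FROM_PEER_CACHE;   /* direct transfer between cache sets   */
            default:
                return FROM_MAIN_MEMORY;  /* line not cached: fall back to memory */
            }
        }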


    Bandwidth allocation for a data path

    Publication No.: US06938093B2

    Publication Date: 2005-08-30

    Application No.: US10105508

    Filing Date: 2002-03-25

    Abstract: A compute engine allocates data path bandwidth among different classes of packets. The compute engine identifies a packet's class and determines whether to transmit the packet based on the class's available bandwidth. If the class has available bandwidth, the compute engine grants the packet access to the data path. Otherwise, the compute engine only grants the packet access to the data path if none of the other packets waiting for data path access have a class with available bandwidth. After a packet is provided to the data path, the compute engine decrements a bandwidth allocation count for the packet's class. Once the bandwidth count for each class is exhausted, the compute engine sets each count to a respective starting value reflecting the amount of bandwidth available to a class relative to the other classes. A compute engine employing the above-described bandwidth allocation can perform different networking services, including but not limited to: 1) virtual private networking; 2) secure sockets layer processing; 3) web caching; 4) hypertext mark-up language compression; 5) virus checking; 6) firewall support; and 7) web switching.
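    The accounting described above resembles per-class credit counting, and the C sketch below shows one way it could look. The number of classes, the unit of bandwidth (whole packets), and all names are assumptions made for illustration only.

        /* Minimal sketch of the per-class bandwidth accounting described above.
         * The class count, the unit of bandwidth, and all names are assumptions. */
        #include <stdbool.h>
        #include <stddef.h>

        #define NUM_CLASSES 4

        /* Starting values reflect each class's share relative to the others. */
        static const unsigned start_count[NUM_CLASSES] = { 8, 4, 2, 1 };
        static unsigned       count[NUM_CLASSES]       = { 8, 4, 2, 1 };

        /* Grant data path access if the packet's class has bandwidth left, or if
         * no other waiting packet belongs to a class that still has bandwidth. */
        static bool grant_access(unsigned cls, bool other_waiting_class_has_credit)
        {
            if (count[cls] > 0)
                return true;
            return !other_waiting_class_has_credit;
        }

        /* After a packet is put on the data path, charge its class; when every
         * class is exhausted, reload each count from its starting value. */
        static void charge_class(unsigned cls)
        {
            if (count[cls] > 0)
                count[cls]--;

            bool all_exhausted = true;
            for (size_t i = 0; i < NUM_CLASSES; i++) {
                if (count[i] > 0) {
                    all_exhausted = false;
                    break;
                }
            }
            if (all_exhausted) {
                for (size_t i = 0; i < NUM_CLASSES; i++)
                    count[i] = start_count[i];
            }
        }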

Method and apparatus for providing internal table extensibility with external interface
    9.
    Invention Application (Expired)

    Publication No.: US20050086374A1

    Publication Date: 2005-04-21

    Application No.: US10687785

    Filing Date: 2003-10-17

    IPC Classification: G06F15/16 H04L12/56 H04L29/06

    Abstract: Disclosed is a configurable lookup table extension system including a plurality of lookup tables arranged in an internal memory, an external memory, and a flexible controller configured to couple at least one of the plurality of lookup tables to the external memory through a single memory interface. Implementations of this system can support the flexible allocation of IP and MAC table entries so that a router/switch can flexibly support applications suited to a particular allocation. This approach provides an efficient scheme for extending multiple internal tables to external memory via a single external interface. Further, such extensibility is programmable, allowing the size and number of external tables to be configured by software. This solution can provide the flexibility of customizing table sizes for different markets and/or customer requirements.
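    One way to picture the single-interface extension is as an address-translation step: software programs a descriptor (base offset and size) for each internal table, and the flexible controller maps a (table, index) pair onto the one external memory. The descriptor layout, table sizes, and names in the C sketch below are illustrative assumptions, not the patent's design.

        /* Hypothetical address-translation sketch for extending internal tables
         * into one external memory through a single interface; the descriptor
         * layout and table sizes are illustrative, not the patent's register map. */
        #include <stdint.h>

        typedef struct {
            uint32_t ext_base;     /* software-programmed base offset in external memory */
            uint32_t num_entries;  /* software-programmed number of entries in the table  */
            uint32_t entry_bytes;  /* size of one entry                                   */
        } table_descriptor_t;

        /* One descriptor per internal table (for example an IP table and a MAC table). */
        static table_descriptor_t tables[2] = {
            { .ext_base = 0x00000u, .num_entries = 4096u, .entry_bytes = 16u }, /* "IP"  */
            { .ext_base = 0x10000u, .num_entries = 8192u, .entry_bytes = 8u  }, /* "MAC" */
        };

        /* The flexible controller maps a (table, index) pair onto the single
         * external memory interface; out-of-range lookups return an invalid address. */
        static uint32_t external_address(unsigned table, uint32_t index)
        {
            const table_descriptor_t *d = &tables[table];
            if (index >= d->num_entries)
                return UINT32_MAX;
            return d->ext_base + index * d->entry_bytes;
        }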
