-
Publication No.: US11509750B2
Publication Date: 2022-11-22
Application No.: US16132424
Filing Date: 2018-09-16
Applicant: Cavium, Inc.
IPC Class: H04L69/22
Abstract: A system with co-resident data-plane and network interface controllers is disclosed. The system embodies a method for network switching of a data packet, incoming from a network at a packet input processor portion of a network interface resource implemented on a chip and comprising the packet input processor, a packet output processor, and a network interface controller, to a target entity. The system additionally embodies a method for network switching of a data packet outgoing from an internal-facing interface of the network interface controller portion of the network interface resource to a network.
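A minimal sketch in C of the ingress decision this abstract describes: a packet arriving at the packet input processor is switched either to an on-chip target entity or back out through the packet output processor. All structure and function names (pkt_meta, is_local_entity, switch_ingress) are hypothetical illustrations, not the patented implementation.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

enum target { TO_NIC_FUNCTION, TO_PACKET_OUTPUT };

struct pkt_meta {
    uint8_t  dst_mac[6];   /* destination MAC parsed by the input processor */
    uint16_t vlan;         /* VLAN tag, 0 if untagged                       */
};

/* Hypothetical lookup: does this MAC/VLAN belong to a local NIC entity?    */
static bool is_local_entity(const struct pkt_meta *m, unsigned *entity_id)
{
    /* A real device would consult a forwarding table in on-chip memory.    */
    static const uint8_t local_mac[6] = {0x02, 0, 0, 0, 0, 0x01};
    for (int i = 0; i < 6; i++)
        if (m->dst_mac[i] != local_mac[i])
            return false;
    *entity_id = m->vlan;           /* illustrative mapping only */
    return true;
}

/* Switching decision taken at the packet input processor.                  */
static enum target switch_ingress(const struct pkt_meta *m, unsigned *entity_id)
{
    return is_local_entity(m, entity_id) ? TO_NIC_FUNCTION : TO_PACKET_OUTPUT;
}

int main(void)
{
    struct pkt_meta m = { .dst_mac = {0x02, 0, 0, 0, 0, 0x01}, .vlan = 7 };
    unsigned id = 0;
    enum target t = switch_ingress(&m, &id);
    printf("packet -> %s (entity %u)\n",
           t == TO_NIC_FUNCTION ? "NIC function" : "packet output", id);
    return 0;
}
```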
-
Publication No.: US10782907B2
Publication Date: 2020-09-22
Application No.: US15923851
Filing Date: 2018-03-16
Applicant: CAVIUM, INC.
Inventors: Anh T. Tran, Gerald Schmidt, Tsahi Daniel, Saurabh Shrivastava
IPC Class: G06F3/06, G11C15/04, H03K19/17728, H04L12/741, G06F12/0864, H04L12/743
Abstract: Embodiments of the present invention relate to multiple parallel lookups using a pool of shared memories by proper configuration of interconnection networks. The number of shared memories reserved for each lookup is reconfigurable based on the memory capacity needed by that lookup. The shared memories are grouped into homogeneous tiles. Each lookup is allocated a set of tiles based on the memory capacity it needs. The tiles allocated to each lookup do not overlap with those of other lookups, so all lookups can be performed in parallel without collision. Each lookup is reconfigurable to be either hash-based or direct-access. The interconnection networks are programmed based on how the tiles are allocated for each lookup.
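A rough C sketch, under assumed sizes and names, of the allocation idea in the abstract: each lookup owns a disjoint set of memory tiles and is configured as either hash-based or direct-access over those tiles, so independent lookups cannot collide.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_TILES      8
#define TILE_ENTRIES 256

struct tile { uint32_t key[TILE_ENTRIES]; uint32_t val[TILE_ENTRIES]; };

static struct tile tiles[NUM_TILES];     /* pool of shared memory tiles */

struct lookup_cfg {
    int first_tile;        /* tiles [first_tile, first_tile+num_tiles) */
    int num_tiles;         /* reconfigurable per lookup                */
    int hash_based;        /* 1 = hash-based, 0 = direct-access        */
};

static uint32_t hash32(uint32_t k) { k ^= k >> 16; k *= 0x7feb352d; k ^= k >> 15; return k; }

static uint32_t do_lookup(const struct lookup_cfg *cfg, uint32_t key)
{
    uint32_t span = (uint32_t)cfg->num_tiles * TILE_ENTRIES;
    uint32_t idx  = cfg->hash_based ? hash32(key) % span : key % span;
    const struct tile *t = &tiles[cfg->first_tile + idx / TILE_ENTRIES];
    uint32_t slot = idx % TILE_ENTRIES;
    return (t->key[slot] == key) ? t->val[slot] : 0;   /* 0 = miss */
}

int main(void)
{
    /* Two lookups with disjoint tile sets can run in parallel.         */
    struct lookup_cfg a = { .first_tile = 0, .num_tiles = 2, .hash_based = 1 };
    struct lookup_cfg b = { .first_tile = 2, .num_tiles = 6, .hash_based = 0 };

    /* Populate one entry for lookup 'a' at the slot its hash selects.  */
    uint32_t key = 42, idx = hash32(key) % (2 * TILE_ENTRIES);
    tiles[idx / TILE_ENTRIES].key[idx % TILE_ENTRIES] = key;
    tiles[idx / TILE_ENTRIES].val[idx % TILE_ENTRIES] = 1234;

    printf("a(42) = %u, b(42) = %u\n",
           (unsigned)do_lookup(&a, key), (unsigned)do_lookup(&b, key));
    return 0;
}
```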
-
Publication No.: US10523567B2
Publication Date: 2019-12-31
Application No.: US16126644
Filing Date: 2018-09-10
Applicant: Cavium, Inc.
Inventors: Martin Leslie White
IPC Class: H04L12/803, H04L12/26, G06F9/50, H04L12/801
Abstract: A data processing system includes a phantom queue for each of a plurality of output ports, each associated with an output link for outputting data. The phantom queues receive/monitor traffic on the respective ports and/or the associated links such that the congestion or traffic volume on the output ports/links is able to be determined by a congestion mapper coupled with the phantom queues. Based on the determined congestion level on each of the ports/links, the congestion mapper selects one or more non-congested or less congested ports/links as the destination of one or more packets. A link selection logic element then processes the packets according to the selected path or multi-path, thereby reducing congestion in the system.
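An illustrative C sketch of the congestion-mapping step described above: each output port has a phantom counter tracking its recent traffic, and the mapper picks the least-congested eligible port as the packet's destination. The counters and the selection rule are simplifying assumptions, not the patented logic.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_PORTS 4

static uint32_t phantom_bytes[NUM_PORTS];   /* per-port traffic estimate */

static void phantom_observe(int port, uint32_t bytes) { phantom_bytes[port] += bytes; }

/* Congestion mapper: choose the eligible port with the lowest estimate.   */
static int select_port(const int *eligible, int n)
{
    int best = eligible[0];
    for (int i = 1; i < n; i++)
        if (phantom_bytes[eligible[i]] < phantom_bytes[best])
            best = eligible[i];
    return best;
}

int main(void)
{
    phantom_observe(0, 9000);    /* port 0 is busy  */
    phantom_observe(1, 1500);    /* port 1 is light */
    phantom_observe(2, 4500);

    int candidates[] = {0, 1, 2};
    printf("forward on port %d\n", select_port(candidates, 3));   /* -> 1 */
    return 0;
}
```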
-
Publication No.: US10419571B2
Publication Date: 2019-09-17
Application No.: US14667488
Filing Date: 2015-03-24
Applicant: CAVIUM, INC.
Inventors: Martin Leslie White
IPC Class: H04L29/08, G06F16/2455
Abstract: A forwarding database cache system is described herein. The forwarding database cache system includes a main forwarding database and one or more forwarding database caches. When a packet is received, the cache is searched first for information such as address information, and if the information is found, the packet is forwarded to the appropriate destination. If the address information is not found in the cache, the main forwarding database is searched, and the packet is forwarded to the appropriate destination based on the information in the main forwarding database.
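A small C sketch, with illustrative table sizes and hash, of the cache-then-main lookup order in the abstract: the forwarding database cache is consulted first, and on a miss the main forwarding database is searched and the result installed into the cache.

```c
#include <stdint.h>
#include <stdio.h>

#define CACHE_SLOTS 64
#define MAIN_SLOTS  4096

struct fdb_entry { uint64_t mac; int port; int valid; };

static struct fdb_entry cache[CACHE_SLOTS];
static struct fdb_entry main_fdb[MAIN_SLOTS];

static unsigned h(uint64_t mac, unsigned slots)
{
    return (unsigned)(mac * 0x9e3779b97f4a7c15ULL >> 32) % slots;
}

/* Returns the output port, or -1 if the address is unknown (e.g., flood). */
static int fdb_lookup(uint64_t dst_mac)
{
    struct fdb_entry *c = &cache[h(dst_mac, CACHE_SLOTS)];
    if (c->valid && c->mac == dst_mac)
        return c->port;                       /* cache hit */

    struct fdb_entry *m = &main_fdb[h(dst_mac, MAIN_SLOTS)];
    if (m->valid && m->mac == dst_mac) {
        *c = *m;                              /* install into the cache */
        return m->port;                       /* main-database hit */
    }
    return -1;
}

int main(void)
{
    uint64_t mac = 0x0002c9aabbccULL;
    main_fdb[h(mac, MAIN_SLOTS)] = (struct fdb_entry){ .mac = mac, .port = 5, .valid = 1 };

    printf("first lookup : port %d (from main FDB)\n", fdb_lookup(mac));
    printf("second lookup: port %d (from cache)\n",    fdb_lookup(mac));
    return 0;
}
```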
-
Publication No.: US10216780B2
Publication Date: 2019-02-26
Application No.: US15675336
Filing Date: 2017-08-11
Applicant: Cavium, Inc.
Inventors: Weihuang Wang, Gerald Schmidt, Tsahi Daniel, Mohan Balan
IPC Class: G06F17/30, G06F12/1009, H04L12/931, H04L12/935
Abstract: Embodiments of the present invention relate to a centralized table aging module that efficiently and flexibly utilizes an embedded memory resource, and that enables and facilitates separate network controllers. The centralized table aging module performs aging of tables in parallel using the embedded memory resource. The table aging module performs an age marking process and an age refreshing process. The memory resource includes an age mark memory and an age mask memory. Age marking is applied to the age mark memory. The age mask memory provides per-entry control granularity over the aging of table entries.
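A rough C model, with hypothetical names, of the two bitmaps the abstract describes: an age-mark bit per table entry, set by the periodic marking pass and cleared when traffic refreshes the entry, and an age-mask bit giving per-entry control over whether aging applies at all.

```c
#include <stdint.h>
#include <stdio.h>

#define ENTRIES 16

static uint8_t age_mark[ENTRIES];   /* 1 = not touched since last pass  */
static uint8_t age_mask[ENTRIES];   /* 1 = aging enabled for this entry */

/* Periodic marking pass: entries already marked are expired; the rest
 * get marked and must be refreshed before the next pass to survive.    */
static void aging_pass(void)
{
    for (int i = 0; i < ENTRIES; i++) {
        if (!age_mask[i])
            continue;                       /* aging disabled per entry */
        if (age_mark[i])
            printf("entry %d aged out\n", i);
        age_mark[i] = 1;
    }
}

/* Called on a table hit: refreshing clears the mark.                   */
static void refresh(int i) { age_mark[i] = 0; }

int main(void)
{
    for (int i = 0; i < ENTRIES; i++) age_mask[i] = 1;
    age_mask[3] = 0;                        /* entry 3 is static: never ages */

    aging_pass();       /* marks everything (nothing aged yet)           */
    refresh(0);         /* entry 0 saw traffic                           */
    aging_pass();       /* entries 1..15 except 3 age out; 0 survives    */
    return 0;
}
```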
-
Publication No.: US20190007323A1
Publication Date: 2019-01-03
Application No.: US16126644
Filing Date: 2018-09-10
Applicant: Cavium, Inc.
Inventors: Martin Leslie White
IPC Class: H04L12/803, H04L12/26, G06F9/50, H04L12/801
CPC Class: H04L47/125, G06F9/505, G06F9/5083, H04L43/0882, H04L43/16, H04L47/11, H04L47/122
Abstract: A data processing system includes a phantom queue for each of a plurality of output ports, each associated with an output link for outputting data. The phantom queues receive/monitor traffic on the respective ports and/or the associated links such that the congestion or traffic volume on the output ports/links is able to be determined by a congestion mapper coupled with the phantom queues. Based on the determined congestion level on each of the ports/links, the congestion mapper selects one or more non-congested or less congested ports/links as the destination of one or more packets. A link selection logic element then processes the packets according to the selected path or multi-path, thereby reducing congestion in the system.
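As a companion to the sketch under the granted patent above, this illustrative C fragment shows the link-selection step that follows the congestion mapper: once a set of less congested links is known, a flow hash spreads traffic across that multi-path set while keeping each flow on one link. The flow key and hash are assumptions for illustration only.

```c
#include <stdint.h>
#include <stdio.h>

struct flow_key { uint32_t src_ip, dst_ip; uint16_t src_port, dst_port; };

static uint32_t flow_hash(const struct flow_key *f)
{
    uint32_t h = f->src_ip ^ (f->dst_ip * 2654435761u);
    return h ^ ((uint32_t)f->src_port << 16) ^ f->dst_port;
}

/* Pick one link out of the multi-path set chosen by the congestion mapper. */
static int select_link(const struct flow_key *f, const int *links, int n)
{
    return links[flow_hash(f) % (uint32_t)n];
}

int main(void)
{
    int uncongested[] = {1, 3};                 /* output of the mapper */
    struct flow_key f = {0x0a000001, 0x0a000002, 40000, 80};
    printf("flow pinned to link %d\n", select_link(&f, uncongested, 2));
    return 0;
}
```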
-
Publication No.: US20180373635A1
Publication Date: 2018-12-27
Application No.: US15631085
Filing Date: 2017-06-23
Applicant: Cavium, Inc.
IPC Class: G06F12/0846, G06F12/0864, G06F12/084
Abstract: Partition information includes entries that each include an entity identifier and associated cache configuration information. A controller manages memory requests originating from processor cores, including: comparing at least a portion of an address included in a memory request with tags stored in a cache to determine whether the memory request results in a hit or a miss, and comparing an entity identifier included in the memory request with stored entity identifiers to determine a matched entry. The cache configuration information associated with the entity identifier in a matched entry is updated based at least in part on the hit or miss result. The associated cache configuration information includes cache usage information that tracks usage of the cache by the entity associated with the particular entity identifier, and partition descriptors that each define a different group of one or more regions of the cache.
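A simplified C sketch, with assumed field names, of the per-entity bookkeeping the abstract describes: partition information records which cache regions an entity may use (its partition descriptor) and accumulates hit/miss counts for that entity once the tag comparison has produced a result.

```c
#include <stdint.h>
#include <stdio.h>

#define MAX_ENTITIES 4

struct partition_info {
    uint32_t entity_id;
    uint32_t region_mask;   /* partition descriptor: allowed cache regions */
    uint64_t hits, misses;  /* cache usage tracked per entity              */
};

static struct partition_info table[MAX_ENTITIES] = {
    { .entity_id = 7, .region_mask = 0x3 },   /* entity 7 gets regions 0-1 */
    { .entity_id = 9, .region_mask = 0xC },   /* entity 9 gets regions 2-3 */
};

/* Called once the tag comparison has produced a hit/miss result.          */
static void account(uint32_t entity_id, int hit)
{
    for (int i = 0; i < MAX_ENTITIES; i++) {
        if (table[i].entity_id == entity_id) {
            if (hit) table[i].hits++; else table[i].misses++;
            return;
        }
    }
}

int main(void)
{
    account(7, 1);
    account(7, 0);
    account(9, 1);
    printf("entity 7: %llu hits / %llu misses\n",
           (unsigned long long)table[0].hits, (unsigned long long)table[0].misses);
    return 0;
}
```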
-
Publication No.: US20180349185A1
Publication Date: 2018-12-06
Application No.: US15613889
Filing Date: 2017-06-05
Applicant: Cavium, Inc.
Inventors: Timothy Toshio Nakada, Jason Daniel Zebchuk, Gregg Alan Bouchard, Tejas Maheshbhai Bhatt, Hong Jik Kim, Ahmed Shahid, Mark Jon Kwong
Abstract: A method, and a system embodying the method, for programmable scheduling is disclosed, encompassing: enqueueing at least one command into one of a plurality of queues having a plurality of entries; determining a category of the command at the head entry of each of the plurality of queues; processing each determined non-job category command by a non-job command arbitrator; and processing each determined job category command by a job arbitrator and assignor.
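A toy C sketch of the dispatch loop the abstract outlines: commands wait in queues, the head entry of each queue is classified, and job-category commands go to a job arbitrator/assignor while non-job commands go to a separate arbitrator. All names and the queue layout are assumptions.

```c
#include <stdio.h>

enum cmd_cat { CMD_JOB, CMD_NON_JOB };

struct cmd { enum cmd_cat cat; int id; };

#define NUM_QUEUES 2
#define QUEUE_LEN  4

static struct cmd queues[NUM_QUEUES][QUEUE_LEN] = {
    { {CMD_JOB, 1}, {CMD_NON_JOB, 2} },
    { {CMD_NON_JOB, 3}, {CMD_JOB, 4} },
};
static int head[NUM_QUEUES];
static int count[NUM_QUEUES] = {2, 2};

static void job_arbitrate(struct cmd c)     { printf("job arbiter+assignor: cmd %d\n", c.id); }
static void non_job_arbitrate(struct cmd c) { printf("non-job arbiter:      cmd %d\n", c.id); }

int main(void)
{
    /* Examine the head entry of each queue and dispatch by category. */
    for (int pass = 0; pass < QUEUE_LEN; pass++) {
        for (int q = 0; q < NUM_QUEUES; q++) {
            if (head[q] >= count[q]) continue;
            struct cmd c = queues[q][head[q]++];
            if (c.cat == CMD_JOB) job_arbitrate(c); else non_job_arbitrate(c);
        }
    }
    return 0;
}
```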
-
Publication No.: US20180323789A1
Publication Date: 2018-11-08
Application No.: US16019780
Filing Date: 2018-06-27
Applicant: CAVIUM, INC.
IPC Class: H03K21/02, H04L12/861
CPC Class: H03K21/026, H03K21/00, H03K23/00, H03K23/004, H03K23/005, H04L47/62, H04L49/9084
Abstract: Embodiments of the present invention relate to an architecture that uses hierarchical statistically multiplexed counters to extend counter life by orders of magnitude. Each level includes statistically multiplexed counters. The statistically multiplexed counters include P base counters and S subcounters, wherein the S subcounters are dynamically concatenated with the P base counters. When a row overflow occurs in a level, counters in the next level above are used to extend counter life. The hierarchical statistically multiplexed counters can be used with an overflow FIFO to further extend counter life.
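A very simplified C model, with assumed counter widths, of the hierarchy described above: a narrow base counter is extended by a higher-level counter that accumulates its overflows, so the effective width, and therefore counter lifetime, grows with each level. The statistical multiplexing of subcounters and the overflow FIFO are omitted here.

```c
#include <stdint.h>
#include <stdio.h>

#define BASE_BITS  8                    /* narrow level-0 counter          */
#define BASE_MAX   ((1u << BASE_BITS) - 1)

struct hier_counter {
    uint8_t  base;                      /* level 0                         */
    uint32_t upper;                     /* level 1: counts base overflows  */
};

static void count_event(struct hier_counter *c)
{
    if (c->base == BASE_MAX) {          /* overflow: carry into next level */
        c->base = 0;
        c->upper++;
    } else {
        c->base++;
    }
}

static uint64_t read_counter(const struct hier_counter *c)
{
    return ((uint64_t)c->upper << BASE_BITS) | c->base;
}

int main(void)
{
    struct hier_counter c = {0, 0};
    for (int i = 0; i < 1000; i++) count_event(&c);
    printf("value = %llu\n", (unsigned long long)read_counter(&c));  /* 1000 */
    return 0;
}
```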
-
Publication No.: US20180321983A1
Publication Date: 2018-11-08
Application No.: US15588240
Filing Date: 2017-05-05
Applicant: Cavium, Inc.
Inventors: Kalyana Sundaram Venkataraman, Tejas Maheshbhai Bhatt, Hong Jik Kim, Eric Marenger, Ahmed Shahid, Jason Daniel Zebchuk, Gregg Alan Bouchard
CPC Class: G06F9/5044, G06F9/4881
Abstract: A method, and a system embodying the method, for job pre-scheduling in a processing system comprising distributed job management is disclosed, encompassing: determining a maximum number of pre-schedulable jobs for each of a plurality of engines; setting, for each of the plurality of engines, a threshold less than or equal to that maximum; pre-scheduling, by a scheduler, a number of jobs less than or equal to the threshold to at least one of a plurality of job managers; determining, at the at least one of the plurality of job managers managing one of the plurality of engines, one of a plurality of data processing devices in order for each pre-scheduled job; and assigning the job to the determined data processing device.
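An illustrative C sketch of the pre-scheduling flow in the abstract: the scheduler hands each engine's job manager no more jobs than that engine's configured threshold, and the job manager then selects a data processing device for each pre-scheduled job. Names, limits, and the device-selection rule are assumptions.

```c
#include <stdio.h>

#define NUM_ENGINES 2
#define NUM_DEVICES 3

struct engine {
    int max_preschedulable;   /* hardware limit                         */
    int threshold;            /* configured <= max_preschedulable       */
    int prescheduled;         /* jobs currently held by the job manager */
};

static struct engine engines[NUM_ENGINES] = {
    { .max_preschedulable = 4, .threshold = 2 },
    { .max_preschedulable = 8, .threshold = 3 },
};

/* Job manager side: choose a device for one pre-scheduled job.           */
static int assign_device(int job_id) { return job_id % NUM_DEVICES; }

/* Scheduler side: pre-schedule a job to engine e if under its threshold. */
static int preschedule(int e, int job_id)
{
    if (engines[e].prescheduled >= engines[e].threshold)
        return -1;                                   /* back-pressure */
    engines[e].prescheduled++;
    int dev = assign_device(job_id);
    printf("job %d -> engine %d, device %d\n", job_id, e, dev);
    return dev;
}

int main(void)
{
    for (int job = 0; job < 5; job++)
        if (preschedule(job % NUM_ENGINES, job) < 0)
            printf("job %d deferred (engine %d at threshold)\n", job, job % NUM_ENGINES);
    return 0;
}
```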
-