METADATA BASED EFFICIENT PACKET PROCESSING
    Invention Publication

    Publication No.: US20240015109A1

    Publication Date: 2024-01-11

    Application No.: US17810856

    Application Date: 2022-07-06

    Inventor: Oren Markovitz

    Abstract: A method and device are presented for decreasing the processing cycles spent forwarding packets of a communication from receive queues to at least one transmit queue of a network interface controller. When received, packets are placed into a receive queue based on one or more properties of a leading packet. Buffer metadata including transmit information is associated with each communication. Processor circuitry transfers the packets from each receive queue to a transmit queue, and the buffer metadata is used to determine how to transmit each packet and how to process it before transmission.
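
    The forwarding scheme above hinges on per-communication buffer metadata that is resolved once (from the leading packet) and then reused for every packet of that communication. Below is a minimal C sketch of that idea; the structures and function names (flow_meta, tx_enqueue, forward, and so on) are hypothetical illustrations, not the patent's actual data model or driver API.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical per-communication buffer metadata: resolved once from the
 * leading packet of a flow, then reused for every later packet. */
struct flow_meta {
    uint16_t tx_queue_id;    /* transmit queue chosen for this communication */
    bool     needs_checksum; /* example of pre-transmit processing to apply  */
};

struct packet {
    const char       *payload;
    struct flow_meta *meta;  /* attached when the packet was received */
};

/* Stand-ins for NIC/driver primitives; names are illustrative only. */
static void apply_checksum(struct packet *p) { (void)p; }

static void tx_enqueue(uint16_t queue_id, struct packet *p)
{
    printf("tx queue %u <- %s\n", queue_id, p->payload);
}

/* Per-packet forwarding step: the work is reduced to consulting metadata
 * that was computed once per communication, not once per packet. */
static void forward(struct packet *p)
{
    if (p->meta->needs_checksum)
        apply_checksum(p);
    tx_enqueue(p->meta->tx_queue_id, p);
}

int main(void)
{
    struct flow_meta flow = { .tx_queue_id = 3, .needs_checksum = true };
    struct packet pkts[] = { { "pkt-0", &flow }, { "pkt-1", &flow } };

    for (size_t i = 0; i < sizeof pkts / sizeof pkts[0]; i++)
        forward(&pkts[i]);
    return 0;
}
```

    The point of the sketch is that the per-packet path only reads fields that were computed once per flow, which is where the claimed reduction in processing cycles would come from.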

    Digital Signal Processing Over Data Streams
    Invention Application

    Publication No.: US20170331881A1

    Publication Date: 2017-11-16

    Application No.: US15152369

    Application Date: 2016-05-11

    Abstract: The techniques and systems described herein are directed to providing deep integration of digital signal processing (DSP) operations with a general-purpose query processor. The techniques and systems provide a unified query language for processing tempo-relational and signal data, provide mechanisms for defining DSP operators, and support incremental computation in both offline and online analysis. The techniques and systems include receiving streaming data, aggregating and performing uniformity processing to generate a uniform signal, and storing the uniform signal in a batched columnar representation. Data can be copied from the batched columnar representation to a circular buffer, where DSP operations are applied to the data. Incremental processing can avoid redundant computation. Improvements to the functioning of a computer are provided by reducing the amount of data that must be passed back and forth between separate query databases and DSP processors, and by reducing processing latency and/or memory usage.
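
    One concrete way to picture the circular buffer and incremental computation described above is a moving-average operator whose running sum is updated as samples enter and leave the window, rather than rescanning the window on every batch. The sketch below is only an illustration under that assumption; the ring and push_and_average names are invented here, and the real system's unified query language and columnar store are not modeled.

```c
#include <stdio.h>

#define WINDOW 4   /* circular-buffer capacity for the hypothetical DSP op */

/* Hypothetical circular buffer over which a DSP operator runs. The batched
 * columnar representation is modeled as a plain array of uniform samples. */
struct ring {
    double buf[WINDOW];
    size_t head;
    size_t count;
    double sum;      /* running sum enables incremental recomputation */
};

/* Incrementally update a moving average: subtract the evicted sample and
 * add the new one, instead of rescanning the whole window. */
static double push_and_average(struct ring *r, double sample)
{
    if (r->count == WINDOW)
        r->sum -= r->buf[r->head];          /* evict the oldest sample */
    else
        r->count++;
    r->buf[r->head] = sample;
    r->sum += sample;
    r->head = (r->head + 1) % WINDOW;
    return r->sum / (double)r->count;
}

int main(void)
{
    /* A batch copied from the columnar store: one column of uniform samples. */
    const double column[] = { 1.0, 2.0, 3.0, 4.0, 5.0, 6.0 };
    struct ring r = { 0 };

    for (size_t i = 0; i < sizeof column / sizeof column[0]; i++)
        printf("avg after %.1f = %.2f\n", column[i],
               push_and_average(&r, column[i]));
    return 0;
}
```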

    Decoupled packet and data processing rates in switch devices

    Publication No.: US09787613B2

    Publication Date: 2017-10-10

    Application No.: US14755485

    Application Date: 2015-06-30

    Abstract: Continuing to integrate more aggregate bandwidth and higher radix into switch devices is an economic imperative because it creates value for both the supplier and the customer in large data center environments, which are an increasingly important part of the marketplace. While new silicon processes continue to shrink transistor and other chip feature dimensions, process technology cannot be relied upon as a key driver of power reduction. The transition from 28 nm to 16 nm is a special case in which FinFET provides additional power scaling, but subsequent FinFET nodes are not expected to deliver power reductions substantial enough to meet the desired increases in integration. The disclosed switch architecture attacks the power consumption problem by controlling the rate at which power-consuming activities occur.
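
    The abstract does not spell out the rate-control mechanism, so the sketch below shows one generic way to decouple the rate of a power-consuming activity from the packet arrival rate: a token-style gate that admits the expensive stage only at a configured rate and otherwise leaves packets queued. The rate_gate structure and its fields are hypothetical and stand in for whatever hardware mechanism the patent actually uses.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical token bucket that caps how often the power-hungry
 * processing stage may run, independently of the arrival rate. */
struct rate_gate {
    uint32_t tokens;
    uint32_t max_tokens;
    uint32_t refill_per_tick;   /* sets the decoupled processing rate */
};

static void gate_tick(struct rate_gate *g)
{
    g->tokens += g->refill_per_tick;
    if (g->tokens > g->max_tokens)
        g->tokens = g->max_tokens;
}

/* Returns true when the expensive activity is allowed this cycle;
 * otherwise the packet simply waits in its queue. */
static bool gate_allow(struct rate_gate *g)
{
    if (g->tokens == 0)
        return false;
    g->tokens--;
    return true;
}

int main(void)
{
    struct rate_gate g = { .tokens = 0, .max_tokens = 2, .refill_per_tick = 1 };

    for (int tick = 0; tick < 5; tick++) {
        gate_tick(&g);
        /* Two packets arrive per tick, but processing is rate-limited. */
        for (int p = 0; p < 2; p++)
            printf("tick %d pkt %d: %s\n", tick, p,
                   gate_allow(&g) ? "processed" : "deferred");
    }
    return 0;
}
```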

    Efficient memory bandwidth utilization in a network device

    Publication No.: US09712442B2

    Publication Date: 2017-07-18

    Application No.: US14072744

    Application Date: 2013-11-05

    Abstract: A system for efficient memory bandwidth utilization may include a depacketizer, a packetizer, and a processor core. The depacketizer may generate header information items from received packets, where the header information items include sufficient information for the processor core to process the packets without accessing the payloads from off-chip memory. The depacketizer may accumulate multiple payloads and may write the multiple payloads to the off-chip memory in a single memory transaction when a threshold amount of payload data has been accumulated. The processor core may receive the header information items and may generate a single descriptor for accessing the multiple payloads corresponding to the header information items from the off-chip memory. The packetizer may generate a header for each payload based at least on on-chip information and without accessing off-chip memory. Thus, the subject system provides efficient memory bandwidth utilization, e.g., at least by reducing the number of off-chip memory accesses.
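
    The accumulate-then-write behavior described above can be pictured as a small staging buffer that defers the off-chip write until a threshold of payload bytes has been gathered, so many payloads share one memory transaction. The following C sketch illustrates only that batching idea; payload_batch, offchip_write, and BATCH_THRESHOLD are invented names, and the single-descriptor and packetizer paths are not modeled.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BATCH_THRESHOLD 4096   /* hypothetical accumulation threshold in bytes */

/* Hypothetical staging buffer: payloads accumulate on-chip and are written
 * to off-chip memory in one transaction once the threshold is reached. */
struct payload_batch {
    uint8_t  staging[BATCH_THRESHOLD];
    uint32_t used;
    uint32_t payload_count;
};

/* Stand-in for a single off-chip memory transaction. */
static void offchip_write(const uint8_t *buf, uint32_t len, uint32_t payloads)
{
    printf("one memory transaction: %u bytes covering %u payloads\n",
           len, payloads);
    (void)buf;
}

static void flush(struct payload_batch *b)
{
    if (b->used == 0)
        return;
    offchip_write(b->staging, b->used, b->payload_count);
    b->used = 0;
    b->payload_count = 0;
}

/* Accumulate one payload; flush only when the threshold would be crossed,
 * so many small payloads share a single off-chip access. */
static void accumulate(struct payload_batch *b,
                       const uint8_t *payload, uint32_t len)
{
    if (b->used + len > BATCH_THRESHOLD)
        flush(b);
    memcpy(b->staging + b->used, payload, len);
    b->used += len;
    b->payload_count++;
}

int main(void)
{
    struct payload_batch b = { .used = 0, .payload_count = 0 };
    uint8_t payload[1500] = { 0 };       /* one example received payload */

    for (int i = 0; i < 10; i++)         /* ten received payloads */
        accumulate(&b, payload, sizeof payload);
    flush(&b);                           /* drain whatever remains */
    return 0;
}
```

    Deferring the write this way trades a little on-chip staging space for fewer, larger off-chip transactions, which is the bandwidth saving the abstract points to.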