Method, system, and apparatus for supporting multiple address spaces to facilitate data movement

    Publication number: US12130737B2

    Publication date: 2024-10-29

    Application number: US18331754

    Application date: 2023-06-08

    CPC classification number: G06F12/063 G06F13/28 G06F13/4221 G06F2212/206

    Abstract: Methods, systems, and apparatuses provide support for multiple address spaces in order to facilitate data movement. One apparatus includes an input/output memory management unit (IOMMU) comprising: a plurality of memory-mapped input/output (MMIO) registers that map memory address spaces belonging to the IOMMU and at least a second IOMMU; and hardware control logic operative to: synchronize the plurality of MMIO registers of the at least the second IOMMU; receive, from a peripheral component endpoint coupled to the IOMMU, a direct memory access (DMA) request, the DMA request to a memory address space belonging to the at least the second IOMMU; access the plurality of MMIO registers of the IOMMU based on context data of the DMA request; and access, from the IOMMU, a function assigned to the memory address space belonging to the at least the second IOMMU based on the accessed plurality of MMIO registers.
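The routing mechanism in the abstract can be illustrated with a minimal conceptual model. This is a hypothetical sketch, not the patented implementation: the MMIO register file is reduced to a dictionary mapping address-space identifiers to their owning IOMMU, and the DMA request's "context data" is reduced to the target address-space id. All names (`Iommu`, `synchronize`, `handle_dma`) are illustrative assumptions.

```python
class Iommu:
    def __init__(self, name, owned_spaces):
        self.name = name
        # MMIO registers modeled as a mapping: address-space id -> owning IOMMU
        self.mmio = {space: name for space in owned_spaces}

    def synchronize(self, other):
        # Mirror the peer's register contents so each IOMMU sees the
        # address spaces belonging to both.
        merged = {**self.mmio, **other.mmio}
        self.mmio = dict(merged)
        other.mmio = dict(merged)

    def handle_dma(self, space_id):
        # Use the DMA request's context (here, just the target space id)
        # to look up which IOMMU owns the requested address space.
        owner = self.mmio.get(space_id)
        if owner is None:
            raise KeyError(f"unmapped address space {space_id}")
        return owner
```

With two synchronized instances, a request arriving at either IOMMU for a space owned by the other still resolves, which is the point of mirroring the registers.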

    Image processing accelerator
    Invention grant

    Publication number: US12111778B2

    Publication date: 2024-10-08

    Application number: US17558252

    Application date: 2021-12-21

    CPC classification number: G06F13/1668 G06F13/28 G06T1/20 H04N5/765

    Abstract: A processing accelerator includes a shared memory, and a stream accelerator, a memory-to-memory accelerator, and a common DMA controller coupled to the shared memory. The stream accelerator is configured to process a real-time data stream, and to store stream accelerator output data generated by processing the real-time data stream in the shared memory. The memory-to-memory accelerator is configured to retrieve input data from the shared memory, to process the input data, and to store, in the shared memory, memory-to-memory accelerator output data generated by processing the input data. The common DMA controller is configured to retrieve stream accelerator output data from the shared memory and transfer the stream accelerator output data to memory external to the processing accelerator; and to retrieve the memory-to-memory accelerator output data from the shared memory and transfer the memory-to-memory accelerator output data to memory external to the processing accelerator.
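The data flow described above can be sketched as a toy pipeline. This is a hypothetical model under stated assumptions: the stream and memory-to-memory accelerators are stand-in functions (doubling and incrementing values), and the shared and external memories are plain dictionaries keyed by buffer name.

```python
shared_memory = {}
external_memory = {}

def stream_accelerate(stream):
    # Stream accelerator: process a (simulated) real-time stream and
    # store its output in shared memory.
    shared_memory["stream_out"] = [x * 2 for x in stream]

def mem_to_mem_accelerate(src_key, dst_key):
    # Memory-to-memory accelerator: read input from shared memory,
    # process it, and write the result back to shared memory.
    shared_memory[dst_key] = [x + 1 for x in shared_memory[src_key]]

def common_dma_transfer(key):
    # Common DMA controller: copy either accelerator's output buffer
    # from shared memory to memory external to the accelerator.
    external_memory[key] = list(shared_memory[key])
```

The single `common_dma_transfer` serving both producers mirrors the abstract's point that one DMA controller moves both kinds of output off-chip.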

    DESCRIPTOR FETCHING FOR A MULTI-QUEUE DIRECT MEMORY ACCESS SYSTEM

    Publication number: US20240330215A1

    Publication date: 2024-10-03

    Application number: US18191365

    Application date: 2023-03-28

    Applicant: Xilinx, Inc.

    CPC classification number: G06F13/28

    Abstract: Descriptor fetch for a direct memory access system includes obtaining a descriptor for processing a received data packet. A determination is made as to whether the descriptor is a head descriptor of a chain descriptor. In response to determining that the descriptor is a head descriptor, one or more tail descriptors are fetched from a descriptor table specified by the head descriptor. The number of tail descriptors fetched is determined by comparing a running count of the chain descriptor's buffer size, updated as each tail descriptor is fetched, against the size of the data packet.
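The fetch rule can be sketched as a short loop. This is a minimal sketch under assumptions, not the claimed hardware: descriptors are dictionaries with a single `buf_size` field, the descriptor table is a list, and tail descriptors are fetched until the running buffer-size count covers the packet.

```python
def fetch_chain(head, descriptor_table, packet_size):
    """Return the head plus however many tail descriptors are needed
    to provide at least packet_size bytes of buffer space."""
    fetched = [head]
    running = head["buf_size"]   # running count of chain buffer size
    i = 0
    # Fetch tail descriptors one at a time, updating the running count,
    # until the accumulated buffer size covers the received packet.
    while running < packet_size and i < len(descriptor_table):
        tail = descriptor_table[i]
        fetched.append(tail)
        running += tail["buf_size"]
        i += 1
    return fetched
```

For a 1500-byte packet and 512-byte buffers, the head plus two tails (1536 bytes) suffice, so fetching stops there rather than draining the whole table.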

    Intra-chip and inter-chip data protection

    Publication number: US12105658B2

    Publication date: 2024-10-01

    Application number: US17477185

    Application date: 2021-09-16

    Applicant: XILINX, INC.

    CPC classification number: G06F13/4027 G06F13/1668 G06F13/28

    Abstract: In one example, an integrated circuit (IC) is provided that includes data circuitry and processing circuitry. The data circuitry is configured to provide data to be transferred to a different circuitry within the IC or to an external IC. The processing circuitry is configured to: read the data provided by the data circuitry before it is transferred to the different circuitry or the external IC; calculate a first signature for the data; attach the first signature to the data; calculate, after transferring the data to the different circuitry or the external IC, a second signature for the data; extract the first signature corresponding to the data; compare the first signature to the second signature; and generate a signal based on a comparison of the first signature to the second signature.
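The attach-then-verify flow can be illustrated end to end. This is a hypothetical sketch: CRC32 stands in for whatever signature the circuitry actually computes, and the error signal is modeled as a boolean return value.

```python
import zlib

def attach_signature(data: bytes) -> bytes:
    # First signature, computed before transfer and attached to the data.
    sig = zlib.crc32(data).to_bytes(4, "big")
    return data + sig

def verify_after_transfer(frame: bytes) -> bool:
    # After transfer: recompute the signature over the data, extract the
    # attached first signature, and compare the two.
    data, sig = frame[:-4], frame[-4:]
    recomputed = zlib.crc32(data).to_bytes(4, "big")
    return recomputed == sig   # False would raise the error signal
```

Any corruption of the payload in flight changes the recomputed signature, so the comparison fails and the error signal can be raised.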

    STORAGE DEVICE PROVIDING DIRECT MEMORY ACCESS, COMPUTING SYSTEM INCLUDING THE SAME, AND OPERATING METHOD OF THE STORAGE DEVICE

    Publication number: US20240320173A1

    Publication date: 2024-09-26

    Application number: US18610528

    Application date: 2024-03-20

    CPC classification number: G06F13/28 G06F2213/28

    Abstract: A storage device includes a buffer memory, a first direct memory access (DMA) circuit configured to provide data from a host to the buffer memory or data stored in the buffer memory to the host and to output a first virtual address, a second DMA circuit configured to provide data read from a non-volatile memory to the buffer memory or the data stored in the buffer memory to the non-volatile memory and to output a second virtual address, and an address translation circuit configured to translate the first or second virtual address into a physical address when the first or second virtual address is included in a reference range and to skip the translation operation when the first or second virtual address is excluded from the reference range. A buffer controller is configured to access the buffer memory based on the physical address, or based on the first or second virtual address when that address is excluded from the reference range.
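The bypassable translation can be sketched in a few lines. This is a hypothetical model: the reference range bounds and the linear translation offset are invented values, and a real address translation circuit would typically use a table rather than a fixed offset.

```python
REF_BASE, REF_LIMIT = 0x1000, 0x2000   # assumed reference range
OFFSET = 0x8000                        # assumed linear translation offset

def resolve(vaddr):
    # Virtual addresses inside the reference range are translated to a
    # physical address; addresses outside it skip translation and are
    # passed to the buffer controller as-is.
    if REF_BASE <= vaddr < REF_LIMIT:
        return vaddr + OFFSET
    return vaddr
```

The buffer controller then accesses the buffer memory with whichever address `resolve` returns, translated or not.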

    MULTI-CORE SYSTEM AND READING METHOD
    Invention publication

    Publication number: US20240311321A1

    Publication date: 2024-09-19

    Application number: US18437924

    Application date: 2024-02-09

    Inventor: Ryosuke SUGAI

    CPC classification number: G06F13/28

    Abstract: A multi-core system includes a first processor core, a first memory coupled to the first processor core, a first communication IF including a first DMA unit coupled to the first memory, a second processor core, a second memory coupled to the second processor core, a second communication IF including a second DMA unit coupled to the second memory, and an MMU. In a case where page data of a page designated as a read destination by the first processor core is stored in the second memory, the MMU causes the first DMA unit to set a first transmission descriptor based on a page number of the page. The first communication IF transmits a data request including the page number and destination data to the second communication IF according to the first transmission descriptor.
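The remote-page read can be sketched as two small steps. This is a hypothetical model, not the claimed hardware: the transmission descriptor set up by the MMU via the DMA unit, and the data request formed from it by the communication interface, are both reduced to plain dictionaries.

```python
def build_descriptor(page_number, destination):
    # First transmission descriptor, set by the MMU through the first
    # DMA unit when the requested page resides in the second memory.
    return {"page": page_number, "dest": destination}

def send_data_request(descriptor):
    # The first communication IF forms a data request carrying the page
    # number and destination data, per the transmission descriptor.
    return {"type": "data_request",
            "page": descriptor["page"],
            "dest": descriptor["dest"]}
```

The peer communication IF would use the page number to locate the page data in its own memory and DMA it back to the requesting core's memory.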
