Method for Processing Non-Cache Data Write Request, Cache, and Node

    Publication No.: US20220276960A1

    Publication Date: 2022-09-01

    Application No.: US17749612

    Filing Date: 2022-05-20

    Abstract: A method for processing a non-cache data write request, a cache, and a node are provided. The method includes: A cache receives a first non-cache data write request from a first processor, and sends the first non-cache data write request to a node, where the first non-cache data write request includes a first address. If the cache determines that the first address is stored in the cache, the cache obtains first data corresponding to the first non-cache data write request from the first processor. When receiving a first data buffer identifier from the node, the cache sends the first data to the node. After receiving the first non-cache data write request, if the cache determines that the first address is locally stored, the cache may obtain the first data from the processor. After receiving the first data buffer identifier, the cache may send the first data to the node.
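The request/grant flow described above (forward the write request, fetch the data early if the address is locally stored, send the data once the node returns a buffer identifier) can be sketched as follows. This is a minimal illustration only; all class and method names are hypothetical, not from the patent.

```python
class Node:
    """Downstream node: grants a data buffer identifier, then accepts data."""
    def __init__(self):
        self.received = {}       # buffer_id -> data
        self._next_id = 0

    def accept_request(self, address):
        buffer_id = self._next_id          # allocate a data buffer for this write
        self._next_id += 1
        return buffer_id

    def accept_data(self, buffer_id, data):
        self.received[buffer_id] = data


class Cache:
    def __init__(self, node, local_addresses):
        self.node = node
        self.local = set(local_addresses)  # addresses this cache has stored

    def handle_write(self, processor, address):
        # Forward the non-cache data write request to the node first.
        buffer_id = self.node.accept_request(address)
        # If the first address is locally stored, obtain the data from the
        # processor without waiting, then send it to the node once the data
        # buffer identifier has been received.
        if address in self.local:
            data = processor.get_data(address)
            self.node.accept_data(buffer_id, data)
        return buffer_id
```

In this sketch the early fetch overlaps with the node's buffer grant, which is the latency saving the abstract implies.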

    Refresh processing method, apparatus, and system, and memory controller

    Publication No.: US11037615B2

    Publication Date: 2021-06-15

    Application No.: US16932255

    Filing Date: 2020-07-17

    Abstract: A refresh processing method, apparatus, and system, and memory controllers are provided, to improve memory access efficiency. The refresh processing apparatus includes a plurality of memory controllers that are in one-to-one correspondence with a plurality of memory spaces. Any first memory controller in the plurality of memory controllers is configured to: receive N first indication signals and N second indication signals that are output by N memory controllers other than the first memory controller, where N is greater than or equal to 1; and determine a refresh policy of a first memory space based on at least one of the following information: the N first indication signals, the N second indication signals, and refresh indication information of the first memory space.
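The decision rule sketched in the abstract — combine the N peers' indication signals with the local refresh indication to choose a refresh policy — might look like the following. The semantics of the two signal types and the policy names here are assumptions for illustration, not taken from the patent.

```python
def choose_refresh_policy(first_signals, second_signals, local_refresh_needed):
    """Pick a refresh policy for the first memory space.

    first_signals / second_signals: N booleans output by the N other memory
    controllers (N >= 1). Here we assume, purely for illustration, that a
    first signal means the peer sees heavy access traffic and a second
    signal means the peer is currently refreshing.
    """
    peers_busy = any(first_signals)
    peers_refreshing = any(second_signals)
    # Refresh immediately only when the local space needs it and no peer
    # activity argues for deferring; otherwise defer to preserve access
    # efficiency. Policy names are hypothetical.
    if local_refresh_needed and not (peers_busy or peers_refreshing):
        return "refresh-now"
    return "defer"
```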

    CHIP AND RELATED DEVICE
    Invention Application

    Publication No.: US20200328911A1

    Publication Date: 2020-10-15

    Application No.: US16914492

    Filing Date: 2020-06-29

    Abstract: Embodiments provide a chip and a related device. In those embodiments, the chip includes a ring network. The ring network includes a first node and a second node. The first node determines whether a first injection buffer value is greater than a first threshold and whether a first injection bandwidth is less than a first expected bandwidth. When the first injection buffer value is greater than the first threshold and the first injection bandwidth is less than the first expected bandwidth, the first node sends a first request to the second node, where the first request instructs at least one node in the ring network, other than the first node, to reduce a transmission quantity of first data packets. According to the embodiments of this application, network bandwidth can be properly allocated according to the actual operating status of the system.
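The trigger condition for the throttle request is a simple conjunction of the two tests the abstract names. A sketch of that predicate, with illustrative parameter names:

```python
def should_request_throttle(injection_buffer, buffer_threshold,
                            injection_bandwidth, expected_bandwidth):
    # The first node asks other nodes to reduce traffic only when BOTH hold:
    # its injection buffer has built up past the threshold, AND it is not
    # achieving its expected injection bandwidth (i.e. it is being starved
    # by traffic already on the ring).
    return (injection_buffer > buffer_threshold
            and injection_bandwidth < expected_bandwidth)
```

Requiring both conditions avoids throttling the ring when the buffer is full but the node is still meeting its bandwidth target, which matches the abstract's goal of allocating bandwidth according to actual operating status.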

    Bufferless ring network
    Invention Grant

    Publication No.: US10313097B2

    Publication Date: 2019-06-04

    Application No.: US15886912

    Filing Date: 2018-02-02

    Abstract: A bufferless ring network including at least two nodes and at least two timeslots, the at least two timeslots include a dedicated timeslot, and a first node in the bufferless ring network has use permission for the dedicated timeslot. The first node is configured to, in a state of having the use permission for the dedicated timeslot, detect whether all dedicated timeslots that pass through the first node are available, set a permission switch signal, and cancel the use permission for the dedicated timeslot according to the permission switch signal after detecting that all the dedicated timeslots that pass through the first node are available. A remaining node in the bufferless ring network is configured to obtain the use permission for the dedicated timeslot according to the permission switch signal. The remaining node is a node that needs to use the dedicated timeslot.
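The handover described above — hold the use permission, wait until every dedicated timeslot passing through the node is available, then set the permission switch signal so a remaining node can take over — can be sketched as below. The classes and attributes are hypothetical simplifications of the patented mechanism.

```python
class Slot:
    """A dedicated timeslot circulating on the ring."""
    def __init__(self, is_empty=True):
        self.is_empty = is_empty   # True once it carries no in-flight data


class RingNode:
    def __init__(self, name):
        self.name = name
        self.has_permission = False

    def all_dedicated_slots_available(self, slots):
        # The holder may only hand over after every dedicated timeslot
        # that passes through it has drained.
        return all(slot.is_empty for slot in slots)


def transfer_permission(holder, successor, slots):
    # Models the permission switch signal: the holder cancels its use
    # permission and the remaining node (one that needs the dedicated
    # timeslot) obtains it, but only once all slots are available.
    if holder.has_permission and holder.all_dedicated_slots_available(slots):
        holder.has_permission = False
        successor.has_permission = True
```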

    Chip and transmission scheduling method

    Publication No.: US10135758B2

    Publication Date: 2018-11-20

    Application No.: US15461666

    Filing Date: 2017-03-17

    Abstract: A chip is provided, where the chip is formed by packaging at least two dies, and the at least two dies form at least one die group. The die group includes a first die and a second die. A first processing unit and n groups of ports are disposed on the first die, and a second processing unit and m groups of ports are disposed on the second die. The first processing unit is configured to: switch at least one group of first type ports in the n groups of ports from input to output and switch a second type port that is in the m groups of ports and that is coupled to each group of the first type ports from output to input.
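The port-direction switch can be illustrated as follows, assuming a simple coupling map from each first-type port on the first die to the second-type port on the second die it is coupled to. The data model and names are assumptions for illustration only.

```python
def switch_coupled_ports(first_type_ports, coupling):
    """Flip a group of first-type ports from input to output, and flip each
    coupled second-type port the opposite way.

    first_type_ports: ports in the selected group on the first die.
    coupling: maps each first-type port to its coupled second-type port
              on the second die (a hypothetical representation).
    Returns the new direction of every affected port.
    """
    directions = {}
    for port in first_type_ports:
        directions[port] = "output"            # first die: input -> output
        directions[coupling[port]] = "input"   # second die: output -> input
    return directions
```

Flipping both ends of each coupled pair together keeps every inter-die link unidirectional and consistent after the switch.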

    Method for processing non-cache data write request, cache, and node

    Publication No.: US11789866B2

    Publication Date: 2023-10-17

    Application No.: US17749612

    Filing Date: 2022-05-20

    CPC classification number: G06F12/0802 G06F2212/60

    Abstract: A method for processing a non-cache data write request includes a cache receiving a first non-cache data write request from a first processor, and sending the first non-cache data write request to a node, where the first non-cache data write request includes a first address. If the cache determines that the first address is stored in the cache, the cache obtains first data corresponding to the first non-cache data write request from the first processor. When receiving a first data buffer identifier from the node, the cache sends the first data to the node. After receiving the first non-cache data write request, if the cache determines that the first address is locally stored, the cache may obtain the first data from the processor. After receiving the first data buffer identifier, the cache may send the first data to the node.

    Memory Interleaving Method and Apparatus

    Publication No.: US20210149804A1

    Publication Date: 2021-05-20

    Application No.: US17162287

    Filing Date: 2021-01-29

    Abstract: A memory interleaving method includes dividing an access capacity into P partial access capacities of the same size based on N pieces of configuration information, where the N pieces of configuration information are of N memory channels and each piece corresponds to one memory channel of the N memory channels. Each piece of configuration information indicates a quantity of the P partial access capacities that correspond to its memory channel; for example, a first quantity of partial access capacities correspond to a first memory channel, and two partial access capacities correspond to a second memory channel. The total quantity of memory channels is N, where N is an integer greater than or equal to 2. The method further includes mapping the P partial access capacities to the N memory channels.
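A minimal sketch of the division-and-mapping step: split the access capacity into P equal parts, where each channel's configuration information gives the quantity of parts it receives. The sequential address-range assignment below is an assumption for illustration; the patent does not specify this particular mapping order.

```python
def interleave(access_capacity, quantities):
    """Map P equal partial access capacities onto N memory channels.

    quantities[i] = quantity of partial access capacities that the i-th
    memory channel should receive (its configuration information);
    P = sum(quantities), and len(quantities) = N >= 2.
    Returns (start, end, channel) address ranges.
    """
    P = sum(quantities)
    assert access_capacity % P == 0, "capacity must split into equal parts"
    part = access_capacity // P          # all P parts have the same size
    mapping, start = [], 0
    for channel, quantity in enumerate(quantities):
        for _ in range(quantity):        # this channel gets `quantity` parts
            mapping.append((start, start + part, channel))
            start += part
    return mapping
```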

    Network-on-chip, data transmission method, and first switching node

    Publication No.: US10476697B2

    Publication Date: 2019-11-12

    Application No.: US15890856

    Filing Date: 2018-02-07

    Abstract: A network-on-chip and a corresponding method are provided. The network-on-chip includes at least one bufferless ring network in at least one dimension of the network-on-chip. At least one bufferless ring network includes multiple routing nodes, and at least one of the multiple routing nodes is a switching node. Two bufferless ring networks in different dimensions may intersect, in which case the two bufferless ring networks exchange data by using switching nodes. A dedicated slot and a public slot are configured in each bufferless ring network. Only one switching node has permission to use a dedicated slot at a given moment in each bufferless ring network; the permission to use the dedicated slot is transferred successively between switching nodes in each bufferless ring network, and the transfer occurs after transmission of the data in the dedicated slot is completed.

    Cache memory system and method for accessing cache line

    Publication No.: US10114749B2

    Publication Date: 2018-10-30

    Application No.: US15606428

    Filing Date: 2017-05-26

    Inventors: Zhenxi Tu; Jing Xia

    Abstract: A cache memory system is provided. The cache memory system includes multiple upper level caches and a current level cache. Each upper level cache includes multiple cache lines. The current level cache includes an exclusive tag random access memory (Exclusive Tag RAM) and an inclusive tag random access memory (Inclusive Tag RAM). The Exclusive Tag RAM is configured to preferentially store an index address of a cache line that is in each upper level cache and whose status is unique dirty (UD). The Inclusive Tag RAM is configured to store an index address of a cache line that is in each upper level cache and whose status is unique clean (UC), shared clean (SC), or shared dirty (SD).
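The tag-placement rule the abstract describes — index addresses of unique dirty (UD) lines go preferentially to the Exclusive Tag RAM, while unique clean (UC), shared clean (SC), and shared dirty (SD) lines go to the Inclusive Tag RAM — reduces to a small dispatch on the cache line state. The function below is an illustrative sketch; the state names follow the abstract, but the return values are hypothetical labels.

```python
def place_index_address(state):
    """Choose which tag RAM stores an upper-level cache line's index address."""
    if state == "UD":                    # unique dirty -> Exclusive Tag RAM
        return "exclusive"
    if state in ("UC", "SC", "SD"):      # clean/shared -> Inclusive Tag RAM
        return "inclusive"
    raise ValueError(f"unknown cache line state: {state}")
```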
