-
Publication number: US20230161916A1
Publication date: 2023-05-25
Application number: US18151890
Filing date: 2023-01-09
Applicant: HUAWEI TECHNOLOGIES CO., LTD.
Inventor: Yongzheng WU , Zhiguo GE , Tao HUANG , Chenyu WANG
CPC classification number: G06F21/64 , G06F12/023 , G06F2212/1052
Abstract: In response to a read request for target data, a processor obtains a first child node when that node has not yet been verified in the processor. The first child node is the child node in an integrity tree that is related to the target data. The integrity tree includes a plurality of root nodes, which are in a decompressed state, and a plurality of child nodes, which are in a compressed state. The processor decompresses the first child node and caches the decompressed node in the processor for integrity verification of the target data. Because the child nodes of the integrity tree are stored in memory in compressed form, memory storage space is saved and the sizes of the child nodes are reduced.
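The read path described in the abstract can be illustrated with a toy Python model. This is only a sketch of the idea, not the patented implementation: the class name, the use of `zlib` for compression, and a SHA-256 digest as the integrity value stored in the child node are all illustrative assumptions.

```python
import hashlib
import zlib

class IntegrityTreeCache:
    """Toy model: child nodes of the integrity tree live compressed in
    memory; a node is decompressed and cached on-chip the first time it
    is needed to verify a read."""

    def __init__(self, memory):
        self.memory = memory   # addr -> compressed child node (bytes)
        self.on_chip = {}      # addr -> decompressed, verified node

    def read(self, addr, target_data):
        # Fetch and decompress the related child node only if it has
        # not already been verified in the processor.
        if addr not in self.on_chip:
            self.on_chip[addr] = zlib.decompress(self.memory[addr])
        # Integrity verification: compare the node's stored digest
        # with the digest of the data actually read.
        return self.on_chip[addr] == hashlib.sha256(target_data).digest()

digest = hashlib.sha256(b"secret page").digest()
cache = IntegrityTreeCache({0x40: zlib.compress(digest)})
assert cache.read(0x40, b"secret page")        # verifies and caches node
assert not cache.read(0x40, b"tampered page")  # cached node rejects tampering
```

The memory saving follows directly: only the root nodes ever occupy uncompressed storage, while every child node is held compressed until it is actually needed.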
-
Publication number: US20170351612A1
Publication date: 2017-12-07
Application number: US15686846
Filing date: 2017-08-25
Inventor: Yuan YAO , Tulika MITRA , Zhiguo GE , Naxin ZHANG
IPC: G06F12/0817 , G06F12/02
CPC classification number: G06F12/0817 , G06F12/023 , G06F2212/604
Abstract: Embodiments of the disclosure provide a data processing method and device in a cache coherence directory architecture. The method includes: allocating a tag entry in a tag array for a data block; allocating a data entry in a data array for the data block when the data block is actively shared; and de-allocating the data entry when the data block becomes temporarily private or is evicted from the data array. Because a data entry is allocated only when a data block is actively shared, and never for a block that is not actively shared, a smaller directory size can be achieved.
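The allocation policy in the abstract can be sketched as a small Python model. This is a simplified illustration of the described policy, not the patented design: the class and method names are hypothetical, and a plain dictionary stands in for the tag and data arrays.

```python
class SparseDirectory:
    """Toy model of the described policy: every tracked block gets a tag
    entry, but a data (sharer-list) entry is allocated only once the
    block becomes actively shared, and freed when it turns private."""

    def __init__(self):
        self.tags = {}   # block -> last owning core (tag array entry)
        self.data = {}   # block -> set of sharer cores (data array entry)

    def access(self, block, core):
        if block not in self.tags:
            self.tags[block] = core   # tag entry on first access
        elif core != self.tags[block] or block in self.data:
            # A second core touches the block: it is now actively
            # shared, so allocate the data entry only at this point.
            self.data.setdefault(block, {self.tags[block]}).add(core)

    def make_private(self, block, core):
        # Block becomes temporarily private (or is evicted):
        # de-allocate the data entry, keep only the tag entry.
        self.data.pop(block, None)
        self.tags[block] = core

d = SparseDirectory()
d.access("A", core=0)
assert "A" in d.tags and "A" not in d.data   # private block: tag entry only
d.access("A", core=1)
assert d.data["A"] == {0, 1}                 # actively shared: data entry allocated
d.make_private("A", core=1)
assert "A" not in d.data                     # data entry de-allocated
```

The size reduction comes from the fact that, in typical workloads, most blocks are private most of the time, so the data array can be provisioned much smaller than the tag array.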
-
Publication number: US20150234744A1
Publication date: 2015-08-20
Application number: US14183238
Filing date: 2014-02-18
Inventor: Mihai PRICOPI , Zhiguo GE , Yuan YAO , Tulika MITRA , Naxin ZHANG
IPC: G06F12/08
CPC classification number: G06F12/0813 , G06F12/0815 , G06F12/084 , G06F12/0842 , G06F2212/1016 , G06F2212/1021 , G06F2212/1024 , G06F2212/1028 , G06F2212/601 , G06F2212/6012 , G06F2212/6042 , Y02D10/13
Abstract: A reconfigurable cache architecture is provided. In processor design, as the density of on-chip components increases, the number and complexity of processing cores increase as well. To take advantage of the increased processing capability, many applications exploit instruction-level parallelism. The reconfigurable cache architecture provides a cache memory that is capable of being configured in a private mode and a fused mode for an associated multi-core processor. In the fused mode, individual cores of the multi-core processor can write data to and read data from certain cache banks of the cache memory, with greater control over address routing. The cache architecture further provides control and configurability of the memory size and associativity of the cache memory itself.
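The private/fused distinction in the abstract can be sketched with a toy Python model. This is an illustrative assumption of how such a cache might behave, not the patented architecture: the class name, the per-core-bank layout, and the modulo address routing are all hypothetical simplifications.

```python
class ReconfigurableCache:
    """Toy model: one bank per core. In private mode each core sees only
    its own bank; in fused mode a group of cores shares its banks as one
    larger cache, with addresses routed across the group's banks."""

    def __init__(self, n_banks):
        self.banks = [dict() for _ in range(n_banks)]
        self.mode = "private"
        self.fused = []   # groups of bank indices when fused

    def fuse(self, groups):
        self.mode = "fused"
        self.fused = groups

    def _banks_for(self, core):
        if self.mode == "fused":
            for group in self.fused:
                if core in group:
                    return [self.banks[i] for i in group]
        return [self.banks[core]]   # private mode (or core not fused)

    def write(self, core, addr, value):
        banks = self._banks_for(core)
        # Simple address routing: home bank chosen by address modulo
        # the number of banks visible to this core.
        banks[addr % len(banks)][addr] = value

    def read(self, core, addr):
        banks = self._banks_for(core)
        return banks[addr % len(banks)].get(addr)
```

In private mode a write by core 0 is invisible to core 1; after fusing cores 0 and 1 into one group, both cores route to the same set of banks and see each other's data.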
-
Publication number: US20160321177A1
Publication date: 2016-11-03
Application number: US15208295
Filing date: 2016-07-12
Inventor: Mihai PRICOPI , Zhiguo GE , Yuan YAO , Tulika MITRA , Naxin ZHANG
IPC: G06F12/0813 , G06F12/084 , G06F12/0842 , G06F12/0815
CPC classification number: G06F12/0813 , G06F12/0815 , G06F12/084 , G06F12/0842 , G06F2212/1016 , G06F2212/1021 , G06F2212/1024 , G06F2212/1028 , G06F2212/601 , G06F2212/6012 , G06F2212/6042 , Y02D10/13
Abstract: A reconfigurable cache architecture is provided. In processor design, as the density of on-chip components increases, the number and complexity of processing cores increase as well. To take advantage of the increased processing capability, many applications exploit instruction-level parallelism. The reconfigurable cache architecture provides a cache memory that is capable of being configured in a private mode and a fused mode for an associated multi-core processor. In the fused mode, individual cores of the multi-core processor can write data to and read data from certain cache banks of the cache memory, with greater control over address routing. The cache architecture further provides control and configurability of the memory size and associativity of the cache memory itself.