CACHE LINE RE-REFERENCE INTERVAL PREDICTION USING PHYSICAL PAGE ADDRESS

    Publication Number: US20210182213A1

    Publication Date: 2021-06-17

    Application Number: US16716165

    Application Date: 2019-12-16

    Abstract: Systems, apparatuses, and methods for implementing cache line re-reference interval prediction using a physical page address are disclosed. When a cache line is accessed, a controller retrieves a re-reference interval counter value associated with the line. If the counter is less than a first threshold, then the address of the cache line is stored in a small re-use page buffer. If the counter is greater than a second threshold, then the address is stored in a large re-use page buffer. When a new cache line is inserted in the cache, if its address is stored in the small re-use page buffer, then the controller assigns a high priority to the line to cause it to remain in the cache to be re-used. If a match is found in the large re-use page buffer, then the controller assigns a low priority to the line to bias it towards eviction.
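
    The abstract above describes a two-threshold scheme: a per-line re-reference counter routes page addresses into a small re-use or large re-use buffer, which then biases the insertion priority of new lines. A minimal software sketch of that flow follows; the class, threshold values, and priority labels are illustrative assumptions, not details from the patent.

    ```python
    # Hypothetical model of the abstract's mechanism: re-reference counters
    # drive two page-address buffers, which bias insertion priority.
    SMALL_REUSE_MAX = 2   # counter below this -> small re-use page buffer
    LARGE_REUSE_MIN = 8   # counter above this -> large re-use page buffer

    class ReusePredictor:
        def __init__(self):
            self.counters = {}      # physical page address -> re-reference counter
            self.small_buf = set()  # pages whose lines are re-used quickly
            self.large_buf = set()  # pages whose lines are rarely re-used

        def on_access(self, page_addr):
            count = self.counters.get(page_addr, 0)
            if count < SMALL_REUSE_MAX:
                self.small_buf.add(page_addr)
                self.large_buf.discard(page_addr)
            elif count > LARGE_REUSE_MIN:
                self.large_buf.add(page_addr)
                self.small_buf.discard(page_addr)
            self.counters[page_addr] = count + 1

        def insertion_priority(self, page_addr):
            if page_addr in self.small_buf:
                return "high"   # bias toward retention in the cache
            if page_addr in self.large_buf:
                return "low"    # bias toward eviction
            return "default"
    ```

    A page accessed only a few times lands in the small re-use buffer (high priority); a page whose counter grows past the second threshold migrates to the large re-use buffer (low priority).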

    Setting Operating Points for Circuits in an Integrated Circuit Chip

    Publication Number: US20190123648A1

    Publication Date: 2019-04-25

    Application Number: US16130136

    Application Date: 2018-09-13

    Abstract: The described embodiments include an apparatus that controls voltages for an integrated circuit chip having a set of circuits. The apparatus includes a switching voltage regulator separate from the integrated circuit chip and two or more low dropout (LDO) regulators fabricated on the integrated circuit chip. The switching voltage regulator provides an output voltage that is received as an input voltage by each of the two or more LDO regulators, and each of the two or more LDO regulators provides a local output voltage, each local output voltage received as a local input voltage by a different subset of the circuits in the set of circuits. During operation, a controller sets an operating point for each of the subsets of circuits based on a combined power efficiency for the subsets of the circuits and the LDO regulators, each operating point including a corresponding frequency and voltage.
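
    The controller above picks each subset's frequency/voltage pair by combined efficiency of the circuits and their LDO regulator. A linear LDO dissipates roughly (Vin - Vout) x I, so its efficiency is about Vout/Vin; the sketch below uses that standard model, with an invented candidate table, to show one way such a selection could be computed.

    ```python
    # Illustrative per-subset operating-point selection. The efficiency model
    # (Vout/Vin for a linear LDO) is standard; the candidate operating points
    # and circuit efficiencies are assumptions for the sketch.
    def ldo_efficiency(v_in, v_out):
        # A linear LDO burns (v_in - v_out) * I, so efficiency ~ Vout / Vin.
        return v_out / v_in

    def pick_operating_point(v_in, candidates):
        """candidates: list of (freq_hz, v_out, circuit_efficiency) tuples."""
        def combined(c):
            _freq, v_out, circ_eff = c
            return circ_eff * ldo_efficiency(v_in, v_out)
        return max(candidates, key=combined)
    ```

    Note that a higher local output voltage narrows the LDO dropout and raises its efficiency, so a faster operating point can win on combined efficiency even if the circuits themselves are slightly less efficient there.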

    METHOD AND APPARATUS FOR MASKING AND TRANSMITTING DATA

    Publication Number: US20180081818A1

    Publication Date: 2018-03-22

    Application Number: US15268974

    Application Date: 2016-09-19

    CPC classification number: G06F12/0897 G06F2212/1024 G06F2212/60

    Abstract: A method and apparatus for transmitting data includes determining, based upon a first criterion, whether to apply a mask to a cache line that includes a first type of data and a second type of data for transmission. The second type of data is filtered from the cache line, and the first type of data along with an identifier of the applied mask is transmitted. The first type of data and the identifier are received, and the second type of data is combined with the first type of data to recreate the cache line based upon the received identifier.
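
    A small software model of the mask-and-transmit flow described above may help: a mask, identified by a compact ID, selects which words of the cache line carry payload; the filtered words are reconstructed at the receiver. The mask table contents and the choice to reconstruct filtered words as zeros are assumptions for illustration.

    ```python
    # Hypothetical mask table: each mask ID names a pattern of words to keep.
    MASK_TABLE = {
        0: [True] * 8,                               # no filtering
        1: [True, False] * 4,                        # even-indexed words only
        2: [True, True, True, True] + [False] * 4,  # lower half only
    }

    def transmit(line, mask_id):
        mask = MASK_TABLE[mask_id]
        payload = [w for w, keep in zip(line, mask) if keep]
        return payload, mask_id  # second-type words are filtered out

    def receive(payload, mask_id):
        mask = MASK_TABLE[mask_id]
        it = iter(payload)
        # Recreate the full cache line; filtered words come back as zeros.
        return [next(it) if keep else 0 for keep in mask]
    ```

    Transmitting only the mask ID alongside the surviving words is what saves bandwidth: the receiver can rebuild the full line from the ID alone.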

    Selecting a Precision Level for Executing a Workload in an Electronic Device

    Publication Number: US20190310864A1

    Publication Date: 2019-10-10

    Application Number: US15948795

    Application Date: 2018-04-09

    Abstract: An electronic device includes a controller functional block and a computational functional block. During operation, while the computational functional block executes a test portion of a workload at one or more precision levels, the controller functional block monitors a behavior of the computational functional block. Based on the behavior of the computational functional block while executing the test portion of the workload at the one or more precision levels, the controller functional block selects a given precision level from among a set of two or more precision levels at which the computational functional block is to execute a remaining portion of the workload. The controller functional block then configures the computational functional block to execute the remaining portion of the workload at the given precision level.
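
    The selection loop the abstract describes can be sketched in a few lines: execute a test portion at candidate precision levels, observe the resulting behavior, and pick a level for the remaining work. The "behavior" metric used here (deviation from a full-precision reference on a quantized summation) and the tolerance are assumptions for the sketch, not the patent's method.

    ```python
    # Illustrative precision selection: try the cheapest level that keeps the
    # test portion's result within a tolerance of a full-precision reference.
    def run_test_portion(data, precision_bits):
        # Stand-in for reduced-precision execution: quantize each value to
        # the given number of fractional bits, then sum.
        scale = 2 ** precision_bits
        return sum(round(x * scale) / scale for x in data)

    def select_precision(data, levels, tolerance):
        reference = sum(data)  # full-precision behavior of the test portion
        for bits in sorted(levels):  # try cheapest (fewest bits) first
            if abs(run_test_portion(data, bits) - reference) <= tolerance:
                return bits
        return max(levels)  # fall back to the most precise level
    ```

    Tightening the tolerance pushes the selection toward higher-precision levels, mirroring the trade-off the controller functional block is arbitrating.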

    RECONFIGURABLE PREDICTION ENGINE FOR GENERAL PROCESSOR COUNTING

    Publication Number: US20190286971A1

    Publication Date: 2019-09-19

    Application Number: US15922875

    Application Date: 2018-03-15

    Abstract: Systems, methods, and devices for determining a derived counter value based on a hardware performance counter. Example devices include input circuitry configured to input a hardware performance counter value; counter engine circuitry configured to determine the derived counter value by applying a model to the hardware performance counter value; and output circuitry configured to communicate the derived counter value to a consumer. In some examples, the consumer includes an operating system scheduler, a memory controller, a power manager, a data prefetcher, or a cache controller. In some examples, the processor includes circuitry configured to dynamically change the model during operation of the processor. In some examples, the model includes or is generated by an artificial neural network (ANN).
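
    The counter engine's role, applying a swappable model to raw hardware counter values to produce a derived value for a consumer, can be sketched as below. The linear MPKI-style model and its fixed instruction window are invented for the sketch; the abstract also allows the model to be an artificial neural network.

    ```python
    # Illustrative counter engine: applies a replaceable model to a raw
    # hardware counter value to produce a derived counter for a consumer.
    class CounterEngine:
        def __init__(self, model):
            self.model = model

        def set_model(self, model):
            # The abstract notes the model can be changed dynamically
            # during operation.
            self.model = model

        def derive(self, hw_counter_value):
            return self.model(hw_counter_value)

    # Example model: estimate misses-per-kilo-instruction from a raw miss
    # count, assuming a fixed instruction window (hypothetical numbers).
    def mpki_model(miss_count, window_insns=100_000):
        return miss_count * 1000 / window_insns

    engine = CounterEngine(mpki_model)
    ```

    Because the model is just a callable, swapping it at run time (the "reconfigurable" part of the title) is a single assignment in this sketch.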

    Method and apparatus for masking and transmitting data

    Publication Number: US10042774B2

    Publication Date: 2018-08-07

    Application Number: US15268974

    Application Date: 2016-09-19

    Abstract: A method and apparatus for transmitting data includes determining, based upon a first criterion, whether to apply a mask to a cache line that includes a first type of data and a second type of data for transmission. The second type of data is filtered from the cache line, and the first type of data along with an identifier of the applied mask is transmitted. The first type of data and the identifier are received, and the second type of data is combined with the first type of data to recreate the cache line based upon the received identifier.

    Cache management based on access type priority

    Publication Number: US11768779B2

    Publication Date: 2023-09-26

    Application Number: US16716194

    Application Date: 2019-12-16

    Abstract: Systems, apparatuses, and methods for cache management based on access type priority are disclosed. A system includes at least a processor and a cache. During a program execution phase, certain access types are more likely to cause demand hits in the cache than others. Demand hits are load and store hits to the cache. A run-time profiling mechanism is employed to find which access types are more likely to cause demand hits. Based on the profiling results, the cache lines that will likely be accessed in the future are retained based on their most recent access type. The goal is to increase demand hits and thereby improve system performance. An efficient cache replacement policy can potentially reduce redundant data movement, thereby improving system performance and reducing energy consumption.
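
    The run-time profiling step can be modeled simply: count demand (load/store) hits grouped by the access type that most recently touched the hitting line, then treat lines last touched by the highest-hit types as retention candidates. The access type names and the top-N cutoff below are illustrative assumptions.

    ```python
    from collections import Counter

    # Sketch of the profiling mechanism: tally demand hits by the access
    # type that most recently touched the hitting cache line.
    class AccessTypeProfiler:
        def __init__(self):
            self.demand_hits = Counter()  # access type -> demand hits seen

        def record_demand_hit(self, last_access_type):
            self.demand_hits[last_access_type] += 1

        def high_priority_types(self, top_n=1):
            # Types most likely to precede a demand hit; lines last touched
            # by these types would be retained longest.
            return {t for t, _ in self.demand_hits.most_common(top_n)}

    prof = AccessTypeProfiler()
    for t in ["load", "load", "store", "prefetch", "load"]:
        prof.record_demand_hit(t)
    ```

    In this toy trace, lines last touched by loads profile as the most likely to see a demand hit, so a replacement policy built on the profiler would bias toward keeping them.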

    CACHE MANAGEMENT BASED ON ACCESS TYPE PRIORITY

    Publication Number: US20210182216A1

    Publication Date: 2021-06-17

    Application Number: US16716194

    Application Date: 2019-12-16

    Abstract: Systems, apparatuses, and methods for cache management based on access type priority are disclosed. A system includes at least a processor and a cache. During a program execution phase, certain access types are more likely to cause demand hits in the cache than others. Demand hits are load and store hits to the cache. A run-time profiling mechanism is employed to find which access types are more likely to cause demand hits. Based on the profiling results, the cache lines that will likely be accessed in the future are retained based on their most recent access type. The goal is to increase demand hits and thereby improve system performance. An efficient cache replacement policy can potentially reduce redundant data movement, thereby improving system performance and reducing energy consumption.
