QoS-CLASS BASED SERVICING OF REQUESTS FOR A SHARED RESOURCE

    Publication No.: US20170293578A1

    Publication Date: 2017-10-12

    Application No.: US15274665

    Application Date: 2016-09-23

    Abstract: Systems and methods are directed to managing access to a shared memory. A request received at a memory controller, for access to the shared memory from a client of one or more clients configured to access the shared memory, is placed in at least one queue in the memory controller. A series of one or more timeout values is assigned to the request, based, at least in part, on a priority associated with the client that generated the request. The priority may be fixed or based on a Quality-of-Service (QoS) class of the client. A timer is incremented while the request remains in the queue. As the timer traverses each one of the one or more timeout values in the series, a criticality level of the request is incremented. A request with a higher criticality level may be prioritized for servicing over a request with a lower criticality level.
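    The mechanism can be illustrated with a short sketch. The Python below models a queue whose requests carry a series of timeout values chosen by QoS class and a criticality level that escalates as a per-request timer traverses those timeouts; the class names, the QOS_TIMEOUTS table, and the concrete cycle counts are assumptions for illustration, not values from the patent.

```python
# Minimal sketch of timeout-driven criticality escalation for shared-memory
# requests. Names (Request, MemoryControllerQueue, QOS_TIMEOUTS) and the
# timeout values are illustrative assumptions, not taken from the patent.

from dataclasses import dataclass, field

# Hypothetical mapping from QoS class to a series of timeout thresholds
# (in cycles); higher-priority classes escalate sooner.
QOS_TIMEOUTS = {
    "high":   [4, 8, 16],
    "medium": [16, 32, 64],
    "low":    [64, 128, 256],
}

@dataclass
class Request:
    client_id: int
    qos_class: str
    timer: int = 0           # cycles spent waiting in the queue
    criticality: int = 0     # incremented as the timer crosses each timeout
    timeouts: list = field(default_factory=list)

    def tick(self):
        """Advance the timer one cycle and escalate criticality for each
        timeout value in the series that the timer has now traversed."""
        self.timer += 1
        while (self.criticality < len(self.timeouts)
               and self.timer >= self.timeouts[self.criticality]):
            self.criticality += 1

class MemoryControllerQueue:
    def __init__(self):
        self.pending = []

    def enqueue(self, client_id, qos_class):
        self.pending.append(
            Request(client_id, qos_class, timeouts=QOS_TIMEOUTS[qos_class]))

    def cycle(self):
        """One controller cycle: age all pending requests, then service the
        one with the highest criticality (FIFO order among equals)."""
        for req in self.pending:
            req.tick()
        if not self.pending:
            return None
        best = max(self.pending, key=lambda r: r.criticality)
        self.pending.remove(best)
        return best
```

    A usage loop would call enqueue() for each arriving client request and cycle() once per controller cycle; among waiting requests, the one whose timer has crossed the most of its assigned timeouts is serviced first.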

    BRANCH PREDICTION BASED ON LOAD-PATH HISTORY

    Publication No.: US20200089504A1

    Publication Date: 2020-03-19

    Application No.: US16136151

    Application Date: 2018-09-19

    Abstract: Branch prediction methods and systems include, for a branch instruction fetched by a processor, indexing a branch identification (ID) table based on a function of a program counter (PC) value of the branch instruction, wherein each entry of the branch ID table comprises at least a tag field and an accuracy counter. For a tag hit at an entry indexed by the PC value, if the value of the corresponding accuracy counter is greater than or equal to zero, a prediction counter is selected from a prediction counter pool based on a function of the PC value and a load-path history, wherein the prediction counters comprise respective confidence values and prediction values. If the associated confidence value is greater than zero, the memory-dependent branch prediction of the branch instruction is assigned the prediction value of the selected prediction counter, overriding the prediction from a conventional branch predictor.
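    A minimal sketch of this prediction path is shown below, assuming simple modulo indexing, an 8-bit tag, and Python's built-in hash of (PC, load-path history) in place of the patent's index functions; table sizes are arbitrary and the training/accuracy-update logic is omitted.

```python
# Minimal sketch of a load-path-history-indexed branch prediction override.
# Table sizes, hash functions, and field widths are illustrative assumptions.

BID_ENTRIES = 1024       # branch ID table size (assumed)
POOL_ENTRIES = 4096      # prediction counter pool size (assumed)

class PredictionCounter:
    def __init__(self):
        self.confidence = 0      # how trustworthy this counter is
        self.prediction = 0      # 0 = not-taken, 1 = taken

class BranchIDEntry:
    def __init__(self, tag):
        self.tag = tag
        self.accuracy = 0        # signed accuracy counter

branch_id_table = {}             # index -> BranchIDEntry
counter_pool = [PredictionCounter() for _ in range(POOL_ENTRIES)]

def predict(pc, load_path_history, conventional_prediction):
    """Return a branch prediction, overriding the conventional predictor
    when the load-path-history-indexed counter is confident."""
    index = pc % BID_ENTRIES                     # assumed index function
    tag = (pc // BID_ENTRIES) & 0xFF             # assumed tag function
    entry = branch_id_table.get(index)

    if entry is not None and entry.tag == tag and entry.accuracy >= 0:
        # Select a prediction counter from the pool using a function of
        # the PC and the load-path history (the hash is an assumption).
        pool_index = hash((pc, load_path_history)) % POOL_ENTRIES
        counter = counter_pool[pool_index]
        if counter.confidence > 0:
            return counter.prediction            # override conventional
    return conventional_prediction
```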

    SLICE CONSTRUCTION FOR PRE-EXECUTING DATA DEPENDENT LOADS

    Publication No.: US20190087192A1

    Publication Date: 2019-03-21

    Application No.: US15712119

    Application Date: 2017-09-21

    Abstract: Systems and methods for constructing an instruction slice for prefetching data of a data-dependent load instruction include a slicer for identifying a load instruction in an instruction sequence as a first occurrence of a qualified load instruction which will miss in a last-level cache. A commit buffer stores information pertaining to the first occurrence of the qualified load instruction and the shadow instructions that follow it. For a second occurrence of the qualified load instruction, an instruction slice is constructed from the information in the commit buffer to form a slice payload. A pre-execution engine pre-executes the instruction slice based on the slice payload to determine an address from which data is to be fetched for execution of a third and any subsequent occurrences of the qualified load instruction. The data is prefetched from the determined address for the third and any subsequent occurrences of the qualified load instruction.
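    The per-occurrence behavior can be sketched as follows, assuming committed instructions arrive as simple records with pc, srcs, dests, and is_load fields; the backward dependency walk and the stubbed pre-execution step are simplified stand-ins for the commit buffer and pre-execution engine described above.

```python
# Minimal sketch of per-occurrence slice construction for a qualified
# (last-level-cache-missing) load. Record formats and the dependency walk
# are simplifying assumptions.

class SliceBuilder:
    def __init__(self):
        self.occurrence = {}      # load PC -> number of qualified misses seen
        self.commit_buffer = []   # load plus the shadow instructions that
                                  # commit after its first occurrence
        self.slices = {}          # load PC -> constructed slice payload

    def on_commit(self, instr, misses_llc):
        """Called as each instruction commits; instr is a dict with at
        least 'pc', 'srcs', 'dests', and 'is_load' fields (assumed format)."""
        if instr.get("is_load") and misses_llc:
            count = self.occurrence.get(instr["pc"], 0) + 1
            self.occurrence[instr["pc"]] = count
            if count == 1:
                self.commit_buffer = [instr]      # start recording shadows
            elif count == 2:
                self.slices[instr["pc"]] = self._build_slice(instr)
            else:
                self._pre_execute_and_prefetch(instr["pc"])
        elif self.commit_buffer:
            self.commit_buffer.append(instr)      # record shadow instruction

    def _build_slice(self, load):
        """Backward walk over the commit buffer keeping only instructions
        that feed the load's address registers (the slice payload)."""
        needed = set(load["srcs"])
        slice_payload = []
        for instr in reversed(self.commit_buffer):
            if needed & set(instr["dests"]):
                slice_payload.append(instr)
                needed |= set(instr["srcs"])
        slice_payload.reverse()
        return slice_payload

    def _pre_execute_and_prefetch(self, load_pc):
        # Placeholder: a pre-execution engine would run the stored slice
        # to compute the load address, then issue a prefetch to it.
        pass
```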

    METHOD AND APPARATUS FOR LOAD VALUE PREDICTION

    Publication No.: US20190065964A1

    Publication Date: 2019-02-28

    Application No.: US15691741

    Application Date: 2017-08-30

    Abstract: A method and apparatus for predicting instruction load values in a processor. While a program is executing, the processor trains predictors in order to predict load values. In particular, four different kinds of predictors are trained: the Last Value Predictor (LVP), which captures loads that encounter very few values; the Stride Address Predictor (SAP), which captures loads with stride (offset) addresses; the Content Address Predictor (CAP), which captures load addresses that are non-stride; and the Context Value Predictor (CVP), which captures non-stride load values that occur in a particular context. Training methods and the use of such predictors are disclosed.
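    The four predictors can be sketched as small lookup tables, as below; the table organization, the context key used by CAP and CVP, and the absence of confidence or replacement logic are all simplifying assumptions of this sketch rather than the patented mechanisms.

```python
# Minimal sketch of the four load value/address predictors named in the
# abstract. Table structures and keying are illustrative assumptions.

class LastValuePredictor:
    """LVP: captures loads that keep returning the same few values."""
    def __init__(self):
        self.table = {}                 # load PC -> last committed value

    def train(self, pc, value):
        self.table[pc] = value

    def predict(self, pc):
        return self.table.get(pc)

class StrideAddressPredictor:
    """SAP: captures loads whose addresses advance by a fixed stride."""
    def __init__(self):
        self.table = {}                 # load PC -> (last address, stride)

    def train(self, pc, address):
        last, _ = self.table.get(pc, (address, 0))
        self.table[pc] = (address, address - last)

    def predict(self, pc):
        if pc in self.table:
            last, stride = self.table[pc]
            return last + stride        # predicted next load address
        return None

class ContentAddressPredictor:
    """CAP: captures non-stride load addresses; keying on a hash of recent
    history ('context') is an assumption of this sketch."""
    def __init__(self):
        self.table = {}                 # (load PC, context) -> address

    def train(self, pc, context, address):
        self.table[(pc, context)] = address

    def predict(self, pc, context):
        return self.table.get((pc, context))

class ContextValuePredictor:
    """CVP: captures non-stride load values seen in a particular context."""
    def __init__(self):
        self.table = {}                 # (load PC, context) -> value

    def train(self, pc, context, value):
        self.table[(pc, context)] = value

    def predict(self, pc, context):
        return self.table.get((pc, context))
```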
