Next line prefetchers employing initial high prefetch prediction confidence states for throttling next line prefetches in a processor-based system

    Publication No.: US10353819B2

    Publication Date: 2019-07-16

    Application No.: US15192416

    Filing Date: 2016-06-24

    Abstract: Next line prefetchers employing initial high prefetch prediction confidence states for throttling next line prefetches in a processor-based system are disclosed. A next line prefetcher prefetches a next memory line into cache memory in response to a read operation. To mitigate prefetch mispredictions, the next line prefetcher is throttled to cease prefetching after its prefetch prediction confidence state becomes a no next line prefetch state, indicating a number of incorrect predictions. Instead of the initial prefetch prediction confidence state being set to the no next line prefetch state, which would require confidence to be built up through correct predictions before a next line prefetch is performed, the initial prefetch prediction confidence state is set to a next line prefetch state to allow next line prefetching. Thus, the next line prefetcher starts prefetching next lines before correct predictions are "built up" in the prefetch prediction confidence state. CPU performance may be increased, because prefetching begins sooner rather than waiting for correct predictions to occur.
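    The throttling scheme described above amounts to a saturating confidence counter that is initialized to its high "prefetch" state rather than zero. The sketch below is a hypothetical model of that idea; the class name, counter width, and thresholds are illustrative and not taken from the patent.

```python
class NextLinePrefetcher:
    """Toy model of a confidence-throttled next line prefetcher whose
    confidence counter starts in the high (next line prefetch) state."""

    def __init__(self, max_confidence=3, initial_confidence=3):
        # Initializing at max_confidence lets prefetching begin immediately,
        # before any correct predictions have been observed.
        self.max_confidence = max_confidence
        self.confidence = initial_confidence

    def should_prefetch(self):
        # Confidence of 0 is the "no next line prefetch" (throttled) state.
        return self.confidence > 0

    def on_prefetch_outcome(self, was_used):
        # A correct prediction (the prefetched line was actually demanded)
        # builds confidence back up; a misprediction decays it toward 0.
        if was_used:
            self.confidence = min(self.max_confidence, self.confidence + 1)
        else:
            self.confidence = max(0, self.confidence - 1)


p = NextLinePrefetcher()
assert p.should_prefetch()        # prefetching enabled from the start
for _ in range(3):
    p.on_prefetch_outcome(False)  # three mispredictions in a row
assert not p.should_prefetch()    # now in the no next line prefetch state
```

    A conventional design would start with `initial_confidence=0` and forgo prefetching until correct predictions accumulate; starting high trades some early mispredictions for earlier prefetch coverage.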

    Selective flushing of instructions in an instruction pipeline in a processor back to an execution-resolved target address, in response to a precise interrupt

    Publication No.: US10255074B2

    Publication Date: 2019-04-09

    Application No.: US14851238

    Filing Date: 2015-09-11

    Abstract: Selective flushing of instructions in an instruction pipeline in a processor back to an execution-determined target address in response to a precise interrupt is disclosed. A selective instruction pipeline flush controller determines if a precise interrupt has occurred for an executed instruction in the instruction pipeline. The selective instruction pipeline flush controller determines if an instruction at the correct resolved target address of the instruction that caused the precise interrupt is contained in the instruction pipeline. If so, the selective instruction pipeline flush controller can selectively flush instructions back to the instruction in the pipeline that contains the correct resolved target address to reduce the amount of new instruction fetching. In this manner, as an example, the performance penalty of precise interrupts can be lessened through less instruction refetching and reduced delay in instruction pipeline refilling when the instruction containing the correct target address is already contained in the pipeline.
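    The selective flush decision can be modeled as a search of the younger pipeline contents for the resolved target address. The following is a minimal sketch under assumed conventions (the pipeline is a list of instruction PCs, oldest first); it is not the patented implementation.

```python
def selective_flush(younger_pipeline, resolved_target_pc):
    """Hypothetical model of the selective flush. `younger_pipeline` holds
    the PCs of instructions fetched after the instruction that caused the
    precise interrupt, oldest first. If the instruction at the resolved
    target address is already among them, only the wrong-path instructions
    ahead of it are flushed and no refetch is needed; otherwise everything
    is flushed and fetching must restart at the target address."""
    for i, pc in enumerate(younger_pipeline):
        if pc == resolved_target_pc:
            return younger_pipeline[i:], False  # keep from the target onward
    return [], True                             # full flush, refetch needed


# Target 0x108 is already in the pipeline: flush only 0x100 and 0x104.
kept, refetch = selective_flush([0x100, 0x104, 0x108, 0x10C], 0x108)
assert kept == [0x108, 0x10C] and refetch is False
```

    The performance benefit claimed in the abstract corresponds to the first return path: the pipeline refill starts from an instruction already in flight instead of an empty fetch stage.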

    Speculative history forwarding in overriding branch predictors, and related circuits, methods, and computer-readable media
    Type: Invention Grant
    Status: In Force

    Publication No.: US09582285B2

    Publication Date: 2017-02-28

    Application No.: US14223091

    Filing Date: 2014-03-24

    CPC classification number: G06F9/3848 G06F9/3844

    Abstract: Speculative history forwarding in overriding branch predictors, and related circuits, methods, and computer-readable media are disclosed. In one embodiment, a branch prediction circuit including a first branch predictor and a second branch predictor is provided. The first branch predictor generates a first branch prediction for a conditional branch instruction, and the first branch prediction is stored in a first branch prediction history. The first branch prediction is also speculatively forwarded to a second branch prediction history. The second branch predictor subsequently generates a second branch prediction based on the second branch prediction history, including the speculatively forwarded first branch prediction. By enabling the second branch predictor to base its branch prediction on the speculatively forwarded first branch prediction, an accuracy of the second branch predictor may be improved.
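    The forwarding mechanism in this abstract can be illustrated with two history registers: the first predictor's result is pushed into the second predictor's history before the second prediction is made. The sketch below is a hypothetical model with toy predictor internals (a majority vote stands in for the real second predictor); all names are illustrative.

```python
class OverridingBranchPredictor:
    """Toy two-predictor arrangement demonstrating speculative history
    forwarding: the fast first predictor's output is appended to the
    slower overriding predictor's history before it predicts."""

    def __init__(self):
        self.first_history = []   # first predictor's branch history
        self.second_history = []  # second (overriding) predictor's history

    def predict(self, first_guess):
        # First predictor records its prediction in its own history and
        # speculatively forwards it to the second predictor's history.
        self.first_history.append(first_guess)
        self.second_history.append(first_guess)
        # Second predictor (toy majority vote) now sees a history that
        # already includes the in-flight first prediction.
        taken_votes = sum(self.second_history)
        second_guess = taken_votes * 2 >= len(self.second_history)
        return first_guess, second_guess


bp = OverridingBranchPredictor()
bp.predict(True)
bp.predict(True)
bp.predict(False)
# With forwarding, the second history never lags the first.
assert bp.second_history == bp.first_history == [True, True, False]
```

    Without forwarding, the second predictor's history would only be updated at branch resolution, so back-to-back branches would be predicted from stale history.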


    PROVIDING EARLY INSTRUCTION EXECUTION IN AN OUT-OF-ORDER (OOO) PROCESSOR, AND RELATED APPARATUSES, METHODS, AND COMPUTER-READABLE MEDIA
    Type: Invention Application
    Status: Pending (Published)

    Publication No.: US20160170770A1

    Publication Date: 2016-06-16

    Application No.: US14568637

    Filing Date: 2014-12-12

    Abstract: Providing early instruction execution in an out-of-order (OOO) processor, and related apparatuses, methods, and computer-readable media are disclosed. In one aspect, an apparatus comprises an early execution engine communicatively coupled to a front-end instruction pipeline and a back-end instruction pipeline of an OOO processor. The early execution engine is configured to receive an incoming instruction from the front-end instruction pipeline, and determine whether an input operand of one or more input operands of the incoming instruction is present in a corresponding entry of one or more entries in an early register cache. The early execution engine is also configured to, responsive to determining that the input operand is present in the corresponding entry, substitute the input operand with a non-speculative immediate value stored in the corresponding entry. In some aspects, the early execution engine may execute the incoming instruction using an early execution unit and update the early register cache.
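    The operand-substitution step described above can be sketched as a lookup in a small map of known non-speculative register values. This is a hypothetical model, assuming a simple two-operand instruction form; the engine class, opcode names, and cache shape are illustrative.

```python
class EarlyExecutionEngine:
    """Toy model of early instruction execution: an early register cache
    maps register names to known non-speculative immediate values, and an
    incoming instruction whose source operands all hit in the cache is
    executed early instead of waiting for the back-end pipeline."""

    def __init__(self):
        self.early_register_cache = {}  # reg name -> non-speculative value

    def try_early_execute(self, op, dst, srcs):
        # If any source operand misses in the early register cache, the
        # instruction is deferred to the normal back-end pipeline.
        if not all(r in self.early_register_cache for r in srcs):
            return None
        # Substitute each source operand with its cached immediate value.
        vals = [self.early_register_cache[r] for r in srcs]
        if op == "add":
            result = vals[0] + vals[1]
        elif op == "mov":
            result = vals[0]
        else:
            return None  # op not supported by the early execution unit
        self.early_register_cache[dst] = result  # update the early cache
        return result


eng = EarlyExecutionEngine()
eng.early_register_cache["r1"] = 5
eng.early_register_cache["r2"] = 7
assert eng.try_early_execute("add", "r3", ["r1", "r2"]) == 12
assert eng.try_early_execute("add", "r4", ["r3", "r9"]) is None  # r9 unknown
```

    Because the cached values are non-speculative, an early result can feed later instructions' operands without a recovery mechanism.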


    Providing load address predictions using address prediction tables based on load path history in processor-based systems

    Publication No.: US11709679B2

    Publication Date: 2023-07-25

    Application No.: US15087069

    Filing Date: 2016-03-31

    CPC classification number: G06F9/3832

    Abstract: Aspects disclosed in the detailed description include providing load address predictions using address prediction tables based on load path history in processor-based systems. In one aspect, a load address prediction engine provides a load address prediction table containing multiple load address prediction table entries. Each load address prediction table entry includes a predictor tag field and a memory address field for a load instruction. The load address prediction engine generates a table index and a predictor tag based on an identifier and a load path history for a detected load instruction. The table index is used to look up a corresponding load address prediction table entry. If the predictor tag matches the predictor tag field of the load address prediction table entry corresponding to the table index, the memory address field of the load address prediction table entry is provided as a predicted memory address for the load instruction.
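    The table lookup described in this abstract can be sketched as a direct-mapped, tagged structure indexed by a mix of the load's identifier and the load path history. The sketch below is hypothetical: the table size, tag width, and hash function are illustrative stand-ins for whatever the real engine uses.

```python
TABLE_SIZE = 256  # illustrative number of load address prediction entries


class LoadAddressPredictor:
    """Toy load address prediction table: each entry holds a predictor tag
    and a predicted memory address for a load instruction."""

    def __init__(self):
        self.table = [None] * TABLE_SIZE  # entries: (predictor_tag, address)

    @staticmethod
    def _index_and_tag(load_pc, load_path_history):
        # Derive both the table index and the predictor tag from the load's
        # identifier (its PC here) and the load path history.
        mixed = hash((load_pc, load_path_history))
        return mixed % TABLE_SIZE, (mixed // TABLE_SIZE) & 0xFFFF

    def predict(self, load_pc, load_path_history):
        idx, tag = self._index_and_tag(load_pc, load_path_history)
        entry = self.table[idx]
        if entry is not None and entry[0] == tag:
            return entry[1]  # tag match: provide the predicted address
        return None          # miss or tag mismatch: no prediction

    def train(self, load_pc, load_path_history, actual_address):
        # On resolution, record the observed address for this (PC, path).
        idx, tag = self._index_and_tag(load_pc, load_path_history)
        self.table[idx] = (tag, actual_address)


lap = LoadAddressPredictor()
assert lap.predict(0x400, 0b1010) is None          # cold table: no prediction
lap.train(0x400, 0b1010, 0xDEADBEEF)
assert lap.predict(0x400, 0b1010) == 0xDEADBEEF    # tag match: predicted address
```

    Including the load path history in both the index and the tag lets the same static load map to different predicted addresses along different call/branch paths, at the cost of occasional aliasing between unrelated loads.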

    PROVIDING LOAD ADDRESS PREDICTIONS USING ADDRESS PREDICTION TABLES BASED ON LOAD PATH HISTORY IN PROCESSOR-BASED SYSTEMS

    Publication No.: US20170286119A1

    Publication Date: 2017-10-05

    Application No.: US15087069

    Filing Date: 2016-03-31

    Abstract: Aspects disclosed in the detailed description include providing load address predictions using address prediction tables based on load path history in processor-based systems. In one aspect, a load address prediction engine provides a load address prediction table containing multiple load address prediction table entries. Each load address prediction table entry includes a predictor tag field and a memory address field for a load instruction. The load address prediction engine generates a table index and a predictor tag based on an identifier and a load path history for a detected load instruction. The table index is used to look up a corresponding load address prediction table entry. If the predictor tag matches the predictor tag field of the load address prediction table entry corresponding to the table index, the memory address field of the load address prediction table entry is provided as a predicted memory address for the load instruction.
