PC-based instruction group permissions

    Publication No.: US12079142B2

    Publication Date: 2024-09-03

    Application No.: US18343145

    Filing Date: 2023-06-28

    Applicant: Apple Inc.

    Abstract: A permissions model for a processor in which permissions are based on the instruction group of an instruction. These permissions may be stored in permissions tables and indexed using the program counter of the instruction. The permissions may identify which of a plurality of instruction groups of an instruction set architecture (ISA) of a processor are permitted to execute from that program counter value. Accordingly, the instruction group of the instruction can be compared to the permitted instruction groups to determine if the instruction has execution permission. In some cases, the instruction-group-based permissions are secondary execution privileges; additional primary execution permissions that are determined using the program counter may also be used.
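    The lookup the abstract describes can be modeled as a table of program-counter regions, each mapping to the set of instruction groups allowed to execute from that region. The sketch below is illustrative only; the region layout, group names, and deny-by-default policy are assumptions, not the patented implementation.

```python
# Hypothetical permissions table: each program-counter region maps to the
# instruction groups permitted to execute from that region.
permissions_table = {
    range(0x1000, 0x2000): {"load_store", "arithmetic"},
    range(0x2000, 0x3000): {"load_store", "branch", "arithmetic", "system"},
}

def group_permitted(pc: int, group: str) -> bool:
    """Index the table by the program counter, then test whether the
    instruction's group is among the permitted groups."""
    for region, allowed in permissions_table.items():
        if pc in region:
            return group in allowed
    return False  # assumption: no matching entry means deny

group_permitted(0x1004, "system")  # → False (not allowed from this region)
```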

    Branch predictor storing encrypted information

    Publication No.: US11995446B2

    Publication Date: 2024-05-28

    Application No.: US17661696

    Filing Date: 2022-05-02

    Applicant: Apple Inc.

    Abstract: Techniques are disclosed relating to protecting branch prediction information. In various embodiments, an integrated circuit includes branch prediction logic having a table that maintains a plurality of entries storing encrypted target address information for branch instructions. The branch prediction logic is configured to receive machine context information for a branch instruction having a target address being predicted by the branch prediction logic, the machine context information including a program counter associated with the branch instruction. The branch prediction logic is configured to use the machine context information to decrypt encrypted target address information stored in one of the plurality of entries identified based on the program counter. In some embodiments, the branch prediction logic decrypts the encrypted target address information by performing a cipher to encrypt the machine context information and performing a Boolean exclusive-OR operation of the encrypted machine context information and the encrypted target address information.
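    The decryption dataflow in the last sentence (cipher the machine context, then XOR with the stored encrypted target) round-trips like a one-time pad. The sketch below models only that dataflow; the keyed-hash "cipher", key, and context encoding are stand-ins for the hardware cipher, not the patented design.

```python
import hashlib

def toy_cipher(context: bytes, key: bytes) -> bytes:
    # Stand-in for the hardware cipher named in the abstract: a keyed
    # hash truncated to 8 bytes. Only the dataflow is being modeled.
    return hashlib.sha256(key + context).digest()[:8]

KEY = b"per-boot secret"  # hypothetical key material

def encrypt_target(context: bytes, target: int) -> bytes:
    pad = toy_cipher(context, KEY)
    return bytes(p ^ t for p, t in zip(pad, target.to_bytes(8, "little")))

def decrypt_target(context: bytes, stored: bytes) -> int:
    # Cipher the machine context, then XOR it with the stored
    # (encrypted) target address information.
    pad = toy_cipher(context, KEY)
    return int.from_bytes(bytes(p ^ s for p, s in zip(pad, stored)), "little")

entry = encrypt_target(b"pc=0x1000|ctx=A", 0x400123)
```

    A matching machine context recovers the target; a mismatched context yields an unrelated value, which is the protective property.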

    INDIRECT BRANCH PREDICTOR SECURITY PROTECTION

    Publication No.: US20220326957A1

    Publication Date: 2022-10-13

    Application No.: US17661696

    Filing Date: 2022-05-02

    Applicant: Apple Inc.

    Abstract: Techniques are disclosed relating to protecting branch prediction information. In various embodiments, an integrated circuit includes branch prediction logic having a table that maintains a plurality of entries storing encrypted target address information for branch instructions. The branch prediction logic is configured to receive machine context information for a branch instruction having a target address being predicted by the branch prediction logic, the machine context information including a program counter associated with the branch instruction. The branch prediction logic is configured to use the machine context information to decrypt encrypted target address information stored in one of the plurality of entries identified based on the program counter. In some embodiments, the branch prediction logic decrypts the encrypted target address information by performing a cipher to encrypt the machine context information and performing a Boolean exclusive-OR operation of the encrypted machine context information and the encrypted target address information.

    Range Mapping of Input Operands for Transcendental Functions

    Publication No.: US20200241876A1

    Publication Date: 2020-07-30

    Application No.: US16847068

    Filing Date: 2020-04-13

    Applicant: Apple Inc.

    Abstract: In an embodiment, a processor (e.g. a CPU) may offload transcendental computation to a computation engine that may efficiently perform transcendental functions. The computation engine may implement a range instruction that may be included in a program being executed by the CPU. The CPU may dispatch the range instruction to the computation engine. The range instruction may take an input operand (that is to be evaluated in a transcendental function, for example) and may reference a range table that defines a set of ranges for the transcendental function. The range instruction may identify one of the set of ranges that includes the input operand. For example, the range instruction may output an interval number identifying which interval of an overall set of valid input values contains the input operand. In an embodiment, the range instruction may take an input vector operand and output a vector of interval identifiers.
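    Functionally, the range instruction maps an input value to the interval of a range table that contains it. A minimal sketch, assuming an interval number equal to the count of boundaries at or below the input (the actual boundary convention and table contents are not specified by the abstract):

```python
import bisect

# Illustrative range table: interval boundaries over the valid input
# domain of some transcendental function (values are made up).
boundaries = [0.25, 0.5, 1.0, 2.0, 4.0]

def range_instruction(x: float) -> int:
    # Interval number = number of boundaries <= x.
    return bisect.bisect_right(boundaries, x)

def range_instruction_vec(xs):
    # Vector form: one interval identifier per input element.
    return [range_instruction(x) for x in xs]
```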

    Computation Engine with Strided Dot Product
    Invention Application

    Publication No.: US20200225958A1

    Publication Date: 2020-07-16

    Application No.: US16837631

    Filing Date: 2020-04-01

    Applicant: Apple Inc.

    Abstract: In an embodiment, a computation engine may perform dot product computations on input vectors. The dot product operation may have a first operand and a second operand, and the dot product may be performed on a subset of the vector elements in the first operand and each of the vector elements in the second operand. The subset of vector elements may be separated in the first operand by a stride that skips one or more elements between each element to which the dot product operation is applied. More particularly, in an embodiment, the input operands of the dot product operation may be a first vector having second vectors as elements, and the stride may select a specified element of each second vector.
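    One way to read the described operation: the first operand is traversed with a stride, and the selected subset is dotted with every element of the second operand. In the vector-of-vectors case, this corresponds to a flattened first operand where the stride is the inner-vector length and the offset selects which element of each inner vector participates. A sketch under those assumptions:

```python
def strided_dot(a, b, stride, offset=0):
    # Dot product of every `stride`-th element of `a` (starting at
    # `offset`) with the elements of `b`. Interpretation is illustrative,
    # not the patented encoding.
    selected = a[offset::stride]
    assert len(selected) == len(b)
    return sum(x * y for x, y in zip(selected, b))

# a = three 2-element inner vectors, flattened; stride 2 picks one
# element of each inner vector, offset picks which one.
a = [1, 2, 3, 4, 5, 6]
```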

    Computation Engine that Operates in Matrix and Vector Modes

    Publication No.: US20200034145A1

    Publication Date: 2020-01-30

    Application No.: US16043772

    Filing Date: 2018-07-24

    Applicant: Apple Inc.

    Abstract: In an embodiment, a computation engine is configured to perform vector multiplications, producing either vector results or outer product (matrix) results. The instructions provided to the computation engine specify a matrix mode or a vector mode for the instructions. The computation engine performs the specified operation. The computation engine may perform numerous computations in parallel, in an embodiment. In an embodiment, the instructions may also specify an offset with the input memories, providing additional flexibility in the location of operands. More particularly, the computation engine may be configured to perform numerous multiplication operations in parallel and to accumulate results in a result memory, performing multiply-accumulate operations for each matrix/vector element in the targeted locations of the output memory.
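    The mode bit described in the abstract selects between a vector (elementwise) result and an outer-product (matrix) result, with accumulation into a result memory. A sequential model of that behavior (the real engine performs these multiplications in parallel; accumulator shapes are assumptions):

```python
def compute(x, y, acc, mode):
    # `mode` selects the result shape: a vector of elementwise
    # multiply-accumulates, or an outer-product matrix of accumulates.
    if mode == "vector":
        for i in range(len(x)):
            acc[i] += x[i] * y[i]           # vector result
    elif mode == "matrix":
        for i in range(len(x)):
            for j in range(len(y)):
                acc[i][j] += x[i] * y[j]    # outer product result
    return acc
```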

    Range Mapping of Input Operands for Transcendental Functions

    Publication No.: US20190250917A1

    Publication Date: 2019-08-15

    Application No.: US15896582

    Filing Date: 2018-02-14

    Applicant: Apple Inc.

    CPC classification number: G06F9/30076 G06F9/3004 G06F9/3802

    Abstract: In an embodiment, a computation engine may offload a processor (e.g. a CPU) and efficiently perform transcendental functions. The computation engine may implement a range instruction that may be included in a program being executed by the CPU. The CPU may dispatch the range instruction to the computation engine. The range instruction may take an input operand (that is to be evaluated in a transcendental function, for example) and may reference a range table that defines a set of ranges for the transcendental function. The range instruction may identify one of the set of ranges that includes the input operand. For example, the range instruction may output an interval number identifying which interval of an overall set of valid input values contains the input operand. In an embodiment, the range instruction may take an input vector operand and output a vector of interval identifiers.

    Matrix computation engine
    Invention Grant

    Publication No.: US10346163B2

    Publication Date: 2019-07-09

    Application No.: US15800342

    Filing Date: 2017-11-01

    Applicant: Apple Inc.

    Abstract: In an embodiment, a matrix computation engine is configured to perform matrix computations (e.g. matrix multiplications). The matrix computation engine may perform numerous matrix computations in parallel, in an embodiment. More particularly, the matrix computation engine may be configured to perform numerous multiplication operations in parallel on input matrix elements, generating resulting matrix elements. In an embodiment, the matrix computation engine may be configured to accumulate results in a result memory, performing multiply-accumulate operations for each matrix element of each matrix.
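    The multiply-accumulate behavior described here, executed over all matrix elements, reduces to the familiar matrix-multiply-and-accumulate. A sequential sketch of what the engine computes in parallel (loop order and data layout are illustrative):

```python
def matmul_accumulate(result, a, b):
    # result[i][j] += sum over k of a[i][k] * b[k][j], accumulating into
    # the result memory rather than overwriting it.
    for i in range(len(a)):
        for j in range(len(b[0])):
            for k in range(len(b)):
                result[i][j] += a[i][k] * b[k][j]
    return result
```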

    Outer Product Engine
    Invention Application

    Publication No.: US20180074824A1

    Publication Date: 2018-03-15

    Application No.: US15264002

    Filing Date: 2016-09-13

    Applicant: Apple Inc.

    Abstract: In an embodiment, an outer product engine is configured to perform outer product operations. The outer product engine may perform numerous multiplication operations in parallel on input vectors, in an embodiment, generating a resulting outer product matrix. In an embodiment, the outer product engine may be configured to accumulate results in a result matrix, performing fused multiply add (FMA) operations to produce the outer product elements (multiply) and to accumulate the outer product elements with previous elements from the result matrix memory (add). A processor may fetch outer product instructions, and may transmit the instructions to the outer product engine when the instructions become non-speculative in an embodiment. The processor may be configured to retire the outer product instructions responsive to transmitting them to the outer product engine.
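    The fused multiply-add step in the abstract (multiply to form an outer-product element, add it to the previous element from the result matrix memory) can be modeled element-by-element. A minimal sketch of that accumulation, again sequential where the engine is parallel:

```python
def outer_product_fma(z, x, y):
    # Each outer-product element x[i]*y[j] is accumulated into the
    # result matrix z, mirroring the FMA described in the abstract.
    for i in range(len(x)):
        for j in range(len(y)):
            z[i][j] += x[i] * y[j]
    return z
```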

    Hazard check instructions for enhanced predicate vector operations

    Publication No.: US09600280B2

    Publication Date: 2017-03-21

    Application No.: US14034651

    Filing Date: 2013-09-24

    Applicant: Apple Inc.

    Inventor: Jeffry E. Gonion

    Abstract: A hazard check instruction has operands that specify addresses of vector elements to be read by first and second vector memory operations. The hazard check instruction outputs a dependency vector identifying, for each element position of the first vector corresponding to the first vector memory operation, which element position of the second vector that the element of the first vector depends on (if any). In an embodiment, at least one of the vector memory operations has addresses specified using a scalar address in the operands (and a vector attribute associated with the vector). In an embodiment, the operands may include predicates for one or both of the vector memory operations, indicating which vector elements are active. The dependency vector may be qualified by the predicates, indicating dependencies only for active elements.
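    One reading of the dependency vector the instruction produces: for each active element of the first vector memory operation, the position of an element of the second operation that touches the same address, or a no-dependency marker. The sketch below assumes address equality as the hazard condition and -1 as the marker; neither is specified by the abstract.

```python
def hazard_check(addrs_a, addrs_b, pred_a=None, pred_b=None):
    # For each active element i of the first vector memory op, report an
    # element position j of the second op with the same address, or -1.
    # Predicates mask inactive elements, qualifying the dependency vector.
    pred_a = pred_a if pred_a is not None else [True] * len(addrs_a)
    pred_b = pred_b if pred_b is not None else [True] * len(addrs_b)
    dep = []
    for i, a in enumerate(addrs_a):
        d = -1
        if pred_a[i]:
            for j, b in enumerate(addrs_b):
                if pred_b[j] and b == a:
                    d = j  # keep the last matching position
        dep.append(d)
    return dep
```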
