DYNAMIC POWER SCALING OF DIGITAL MODEMS
    2.
    Invention application (granted)

    Publication number: US20140071869A1

    Publication date: 2014-03-13

    Application number: US13968153

    Filing date: 2013-08-15

    Abstract: A system and method dynamically scale power consumed by the circuitry of an electronic device based on channel state and/or data rate. The electronic device then operates according to the power scaling. The scaling may be in accordance with an effective data rate, a number of multiple input multiple output (MIMO) layers, receiver type, a cell scenario, or a number of carriers. A number of MIMO layers can be predicted based on at least one of channel conditions or a channel quality index (CQI).

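The scaling policy in the abstract above can be sketched as a small decision function: predict the number of MIMO layers from a channel quality index (CQI), combine it with the carrier count into an effective workload, and pick a power level. All function names, thresholds, and the CQI-to-layers mapping below are illustrative assumptions, not the patented method.

```python
# Hypothetical sketch of CQI-driven power scaling; thresholds are invented.

def predict_mimo_layers(cqi: int) -> int:
    """Predict supportable MIMO layers from a CQI value (illustrative cutoffs)."""
    if cqi >= 12:
        return 4
    if cqi >= 8:
        return 2
    return 1

def select_power_level(cqi: int, num_carriers: int) -> str:
    """Scale modem power with the effective workload (layers x carriers)."""
    workload = predict_mimo_layers(cqi) * num_carriers
    if workload >= 8:
        return "high"
    if workload >= 3:
        return "medium"
    return "low"
```

The point of the sketch is only the shape of the decision: channel state feeds a prediction, and the predicted workload, not the peak capability, sets the power state.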

    QUANTIZED NEURAL NETWORK ARCHITECTURE
    3.
    Invention publication

    Publication number: US20240104356A1

    Publication date: 2024-03-28

    Application number: US17934476

    Filing date: 2022-09-22

    CPC classification number: G06N3/0481

    Abstract: Certain aspects of the present disclosure provide techniques and apparatus for quantized machine learning. A quantized input matrix is accessed at a layer of a neural network, and a first interim value is generated in an accumulator by performing matrix multiplication, using the accumulator, of the quantized input matrix and a quantized weight matrix associated with the layer of the neural network. The first interim value is normalized based at least in part on one or more leading sign bits of the first interim value, and the normalized first interim value is dequantized. A second interim value is generated by applying a rounded right-shift operation to the dequantized normalized first interim value, and activation data is generated by applying an activation function to the second interim value.
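The pipeline in the abstract (accumulate, normalize by leading sign bits, dequantize, rounded right-shift, activate) can be modeled with plain Python integers standing in for the hardware accumulator. The function names, the 32-bit width, and the fixed-point conventions below are assumptions for illustration, and the normalization shift is undone during dequantization so the toy result stays numerically consistent.

```python
def leading_sign_bits(v: int, width: int = 32) -> int:
    """Count redundant leading sign bits of a signed `width`-bit value."""
    if v == 0:
        return width - 1
    mag = v if v >= 0 else ~v          # magnitude bits, sign dropped
    return width - 1 - mag.bit_length()

def rounded_rshift(v: int, n: int) -> int:
    """Right shift with round-to-nearest (add half the LSB weight first)."""
    if n == 0:
        return v
    return (v + (1 << (n - 1))) >> n

def quantized_layer(x_q, w_q, scale: float, out_shift: int) -> int:
    """Toy single-output version of the quantized-layer pipeline."""
    acc = sum(a * b for a, b in zip(x_q, w_q))   # first interim value (accumulator)
    shift = leading_sign_bits(acc)               # normalization from sign bits
    norm = acc << shift                          # normalized first interim value
    deq = norm * scale / (1 << shift)            # dequantize, undoing the shift
    interim2 = rounded_rshift(int(deq), out_shift)  # second interim value
    return max(interim2, 0)                      # e.g. ReLU activation
```

In hardware the normalization keeps the accumulator's significant bits in a fixed range before the dequantization multiply; here it is shown explicitly even though Python's unbounded integers do not need it.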

    PERMUTATION INSTRUCTION
    4.
    Invention application

    Publication number: US20230102564A1

    Publication date: 2023-03-30

    Application number: US17448816

    Filing date: 2021-09-24

    Abstract: A device includes a vector register file, a memory, and a processor. The vector register file includes a plurality of vector registers. The memory is configured to store a permutation instruction. The processor is configured to access a periodicity parameter of the permutation instruction. The periodicity parameter indicates a count of a plurality of data sources that contain source data for the permutation instruction. The processor is also configured to execute the permutation instruction to, for each particular element of multiple elements of a first permutation result register of the plurality of vector registers, select a data source of the plurality of data sources based at least in part on the count of the plurality of data sources and populate the particular element based on a value in a corresponding element of the selected data source.
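Under one plausible reading of the abstract, the periodicity parameter makes the instruction cycle through its data sources, so result element i is drawn from source i modulo the source count. The sketch below models that reading only; the element-selection rule is an assumption, not a claim about the actual instruction semantics.

```python
def permute(sources, result_len):
    """Toy model of the permutation instruction (interleave reading).

    `sources` is the plurality of data sources; its length plays the role of
    the periodicity parameter. Each result element selects a source by its
    index modulo that count and copies the corresponding element.
    """
    period = len(sources)                        # periodicity parameter
    return [sources[i % period][i] for i in range(result_len)]
```

With two sources this reading reduces to a classic interleave, which is one common use of vector permute hardware.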

    Instruction Set Architecture for Neural Network Quantization and Packing
    5.

    Publication number: US20230350678A1

    Publication date: 2023-11-02

    Application number: US17732361

    Filing date: 2022-04-28

    CPC classification number: G06F9/30101 G06N3/04

    Abstract: This application is directed to using a single instruction to initiate a sequence of computational operations related to a neural network. An electronic device receives a single instruction to apply a neural network operation to a set of M-bit elements stored in one or more input vector registers. In response to the single instruction, the electronic device implements the neural network operation on the set of M-bit elements to generate a set of P-bit elements by obtaining the set of M-bit elements from the one or more input vector registers, quantizing each of the set of M-bit elements from M bits to P bits, and packing the set of P-bit elements into an output vector register. P is smaller than M. In some embodiments, the neural network operation is a quantization operation including at least a multiplication with a quantization factor and an addition with a zero point.
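The quantize-and-pack operation the abstract describes (multiply by a quantization factor, add a zero point, narrow each element to P bits, pack into one output register) can be sketched with a Python integer standing in for the output vector register. Lane layout, saturation behavior, and the default P = 8 are illustrative assumptions.

```python
def quantize_and_pack(elements, scale: float, zero_point: int, p_bits: int = 8) -> int:
    """Toy model of the single quantize-and-pack instruction.

    Each input element is quantized (scaled, offset by a zero point,
    saturated to a signed P-bit range) and packed into consecutive
    P-bit lanes of one integer acting as the output vector register.
    """
    lo, hi = -(1 << (p_bits - 1)), (1 << (p_bits - 1)) - 1
    packed = 0
    for lane, v in enumerate(elements):
        q = int(round(v * scale)) + zero_point            # quantize
        q = max(lo, min(hi, q))                           # saturate to P bits
        packed |= (q & ((1 << p_bits) - 1)) << (lane * p_bits)  # pack the lane
    return packed
```

The key point the abstract makes is that this whole sequence is triggered by one instruction rather than separate multiply, add, narrow, and pack instructions.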
