-
Publication Number: US12099789B2
Publication Date: 2024-09-24
Application Number: US17118442
Filing Date: 2020-12-10
Applicant: Advanced Micro Devices, Inc.
Inventor: Kevin Y. Cheng , Sooraj Puthoor , Onur Kayiran
IPC: G06F30/331 , G06F9/38 , G06F30/34
CPC classification number: G06F30/331 , G06F9/3877 , G06F30/34
Abstract: Methods, devices, and systems for information communication. Information transmitted from a host to a graphics processing unit (GPU) is received by information analysis circuitry of a field-programmable gate array (FPGA). A pattern in the information is determined by the information analysis circuitry. A predicted information pattern is determined, by the information analysis circuitry, based on the information. An indication of the predicted information pattern is transmitted to the host. Responsive to a signal from the host based on the predicted information pattern, the FPGA is reprogrammed to implement decompression circuitry based on the predicted information pattern. In some implementations, the information includes a plurality of packets. In some implementations, the predicted information pattern includes a pattern in a plurality of packets. In some implementations, the predicted information pattern includes a zero data pattern.
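The core of the pattern analysis described above is deciding whether observed traffic matches a predictable pattern (such as the zero data pattern the abstract mentions) strongly enough to justify reprogramming the FPGA. A minimal sketch of that decision, with a hypothetical `threshold` parameter not taken from the patent:

```python
def detect_zero_pattern(packets, threshold=0.5):
    """Return True if enough packets carry all-zero payloads to justify
    signaling the host to reprogram decompression circuitry."""
    zero_count = sum(1 for p in packets if all(b == 0 for b in p))
    return zero_count / len(packets) >= threshold

# Three of the four packets below are all-zero, so the predicted
# information pattern is a zero data pattern.
packets = [bytes(64), bytes(64), bytes(range(64)), bytes(64)]
assert detect_zero_pattern(packets)
```

The real circuitry would operate on packet streams in hardware; this only illustrates the classification step.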
-
Publication Number: US20220197647A1
Publication Date: 2022-06-23
Application Number: US17126977
Filing Date: 2020-12-18
Applicant: Advanced Micro Devices, Inc.
Inventor: Onur Kayiran , Mohamed Assem Ibrahim , Shaizeen Aga
Abstract: A memory module includes register selection logic to select alternate local source and/or destination registers to process PIM commands. The register selection logic uses an address-based register selection approach to select an alternate local source and/or destination register based upon address data specified by a PIM command and a split address maintained by a memory module. The register selection logic may alternatively use a register data-based approach to select an alternate local source and/or destination register based upon data stored in one or more local registers. A PIM-enabled memory module configured with the register selection logic described herein is capable of selecting an alternate local source and/or destination register to process PIM commands at or near the PIM execution unit where the PIM commands are executed.
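The address-based selection approach described above can be sketched as a simple comparison against the split address maintained by the memory module. The register identifiers and argument names here are hypothetical, not from the patent:

```python
def select_register(address, split_address, low_reg=0, high_reg=1):
    """Address-based register selection: PIM commands targeting addresses
    below the split address use one local register; commands at or above
    it use the alternate register."""
    return low_reg if address < split_address else high_reg

assert select_register(0x1000, 0x8000) == 0  # below split -> primary
assert select_register(0x9000, 0x8000) == 1  # at/above split -> alternate
```

The register data-based approach mentioned in the abstract would instead inspect values already held in the local registers to make the same choice.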
-
Publication Number: US20200167328A1
Publication Date: 2020-05-28
Application Number: US16202082
Filing Date: 2018-11-27
Applicant: Advanced Micro Devices, Inc.
Inventor: Mohamed Assem Ibrahim , Onur Kayiran , Yasuko Eckert
IPC: G06F16/22 , G06F16/901
Abstract: A portion of a graph dataset is generated for each computing node in a distributed computing system by, for each subject vertex in a graph, recording for the computing node an offset for the subject vertex, where the offset references a first position in an edge array for the computing node, and, for each edge of a set of edges coupled with the subject vertex in the graph, calculating an edge value for the edge based on a connected vertex identifier identifying a vertex coupled with the subject vertex via the edge. When the edge value is assigned to the first position, the edge value is determined by a first calculation, and when the edge value is assigned to a position subsequent to the first position, the edge value is determined by a second calculation. In the computing node, the edge value is recorded in the edge array.
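The offset/edge-array layout described above is essentially a compressed sparse row (CSR) representation. A minimal sketch of building it for one computing node, with a deliberate simplification: the patent's two edge-value calculations are not specified in the abstract, so the edge value here is just the connected vertex identifier:

```python
def build_csr(num_vertices, edges):
    """Build the offset array and edge array (CSR layout) for one node.
    Each offset references the first position in the edge array holding
    edge values for that subject vertex."""
    adjacency = {v: [] for v in range(num_vertices)}
    for src, dst in edges:
        adjacency[src].append(dst)
    offsets, edge_array = [], []
    for v in range(num_vertices):
        offsets.append(len(edge_array))   # first position for vertex v
        edge_array.extend(adjacency[v])   # simplified edge values
    return offsets, edge_array

offsets, edge_array = build_csr(3, [(0, 1), (0, 2), (2, 0)])
# offsets == [0, 2, 2]; edge_array == [1, 2, 0]
```

Vertex 1 has no outgoing edges, so its offset equals the next vertex's offset, marking an empty span.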
-
Publication Number: US10310981B2
Publication Date: 2019-06-04
Application Number: US15268953
Filing Date: 2016-09-19
Applicant: Advanced Micro Devices, Inc.
Inventor: Yasuko Eckert , Nuwan Jayasena , Reena Panda , Onur Kayiran , Michael W. Boyer
IPC: G06F12/00 , G06F12/0862 , G06F13/00 , G06F13/28
Abstract: A method and apparatus for performing memory prefetching includes determining whether to initiate prefetching. Upon a determination to initiate prefetching, a first memory row is determined as a suitable prefetch candidate, and it is determined whether a particular set of one or more cachelines of the first memory row is to be prefetched.
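The candidate-selection step described above can be sketched as scoring memory rows by observed accesses and prefetching only past a confidence threshold. The scoring heuristic and `min_hits` parameter here are assumptions for illustration, not the patented method:

```python
def choose_prefetch(row_access_counts, min_hits=4):
    """Pick the memory row with the most recorded accesses as the
    prefetch candidate; return None below the confidence threshold,
    i.e. decline to initiate prefetching."""
    row, hits = max(row_access_counts.items(), key=lambda kv: kv[1])
    return row if hits >= min_hits else None

assert choose_prefetch({0x10: 6, 0x20: 2}) == 0x10  # confident candidate
assert choose_prefetch({0x10: 1}) is None           # too few accesses
```

A second stage would then select which particular cachelines of the chosen row to prefetch.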
-
Publication Number: US12164923B2
Publication Date: 2024-12-10
Application Number: US17853790
Filing Date: 2022-06-29
Applicant: Advanced Micro Devices, Inc.
Inventor: Elliott David Binder , Onur Kayiran , Masab Ahmad
Abstract: Methods and systems are disclosed for processing a vector by a vector processor. Techniques disclosed include receiving predicated instructions by a scheduler, each of which is associated with an opcode, a vector of elements, and a predicate. The techniques further include executing the predicated instructions. Executing a predicated instruction includes compressing, based on an index derived from a predicate of the instruction, elements in a vector of the instruction, where the elements in the vector are contiguously mapped, then, after the mapped elements are processed, decompressing the processed mapped elements, where the processed mapped elements are reverse mapped based on the index.
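The compress/process/decompress flow described above maps active lanes contiguously using an index derived from the predicate, then reverse-maps the results. A minimal software sketch of the idea (the vector processor would do this with hardware lane mapping):

```python
def compress(vector, predicate):
    """Derive an index from the predicate and contiguously map the
    predicated-on elements."""
    index = [i for i, p in enumerate(predicate) if p]
    return [vector[i] for i in index], index

def decompress(processed, index, length, fill=0):
    """Reverse-map the processed elements to their original lanes."""
    out = [fill] * length
    for pos, i in enumerate(index):
        out[i] = processed[pos]
    return out

packed, index = compress([5, 6, 7, 8], [1, 0, 1, 0])
result = decompress([x * 2 for x in packed], index, 4)
# packed == [5, 7]; result == [10, 0, 14, 0]
```

Compressing first means the execution units process only the active elements, which is the point of the technique.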
-
Publication Number: US20230098421A1
Publication Date: 2023-03-30
Application Number: US17490703
Filing Date: 2021-09-30
Applicant: Advanced Micro Devices, Inc.
Inventor: Onur Kayiran , Mohamed Assem Abd ElMohsen Ibrahim , Shaizeen Aga
Abstract: Methods and apparatuses include a processing unit which helps control the speed and computational resources required for arithmetic operations of two numbers in a first format. The control unit of the processing unit approximates the arithmetic operations using a plurality of decomposed numbers in a second format that facilitates faster calculations than the first format, such that performing arithmetic operations using the decomposed numbers is capable of approximating the results of the arithmetic operations of the two numbers in the first format.
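One well-known instance of this decomposition idea, used here purely as an illustrative assumption about the two formats, splits an FP32 value into a sum of two bfloat16 terms, so that products of the cheaper terms approximate the FP32 product:

```python
import struct

def to_bf16(x):
    """Truncate a float to bfloat16 precision (drop the low 16 mantissa bits)."""
    bits, = struct.unpack("<I", struct.pack("<f", x))
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

def decompose(x):
    """Split x into two lower-precision terms whose sum approximates x."""
    hi = to_bf16(x)
    lo = to_bf16(x - hi)
    return hi, lo

def approx_mul(a, b):
    """Approximate a*b from products of decomposed terms; the tiny
    lo*lo cross term is dropped."""
    a_hi, a_lo = decompose(a)
    b_hi, b_lo = decompose(b)
    return a_hi * b_hi + a_hi * b_lo + a_lo * b_hi

x, y = 3.14159265, 2.71828182
assert abs(approx_mul(x, y) - x * y) < 1e-3
```

The faster second-format arithmetic trades a small, bounded approximation error for speed, which matches the trade-off the abstract describes.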
-
Publication Number: US20230065546A1
Publication Date: 2023-03-02
Application Number: US17489576
Filing Date: 2021-09-29
Applicant: Advanced Micro Devices, Inc.
Inventor: Mohamed Assem Abd ElMohsen Ibrahim , Onur Kayiran , Shaizeen Aga
IPC: G06F16/2457
Abstract: An electronic device includes a plurality of nodes, each node having a processor that performs operations for processing instances of input data through a model, a local memory that stores a separate portion of model data for the model, and a controller. The controller identifies model data that meets one or more predetermined conditions in the separate portion of the model data in the local memory in some or all of the nodes that is accessible by the processors when processing the instances of input data through the model. The controller then copies the model data that meets the one or more predetermined conditions from the separate portion of the model data in the local memory in some or all of the nodes to local memories in other nodes. In this way, the controller distributes model data that meets the one or more predetermined conditions among the nodes, making that model data available to the nodes without performing remote memory accesses.
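The replication step described above can be sketched as copying entries meeting a condition from each node's local portion into the other nodes' local memories. The dictionary-based node layout and the example condition are hypothetical simplifications:

```python
def replicate_hot_data(nodes, condition):
    """Copy model-data entries meeting the condition from each node's
    separate portion into every other node's local memory, avoiding
    remote memory accesses at processing time."""
    for src in nodes:
        hot = {k: v for k, v in src["local"].items() if condition(v)}
        for dst in nodes:
            if dst is not src:
                dst["local"].update(hot)

nodes = [
    {"local": {"w0": 0.9, "w1": 0.1}},
    {"local": {"w2": 0.8}},
]
replicate_hot_data(nodes, lambda v: v > 0.5)
# node 0 now also holds w2; node 1 also holds w0; w1 stays local to node 0
```

The predetermined condition in practice might be access frequency rather than a value threshold; the structure of the copy step is the same.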
-
Publication Number: US10303602B2
Publication Date: 2019-05-28
Application Number: US15475435
Filing Date: 2017-03-31
Applicant: Advanced Micro Devices, Inc.
Inventor: Onur Kayiran , Gabriel H. Loh , Yasuko Eckert
IPC: G06F12/08 , G06F12/0806 , G06F12/0804 , G06F12/0817
Abstract: A processing system includes at least one central processing unit (CPU) core, at least one graphics processing unit (GPU) core, a main memory, and a coherence directory for maintaining cache coherence. The at least one CPU core receives a CPU cache flush command to flush cache lines stored in cache memory of the at least one CPU core prior to launching a GPU kernel. The coherence directory transfers data associated with a memory access request by the at least one GPU core from the main memory without issuing coherence probes to caches of the at least one CPU core.
-
Publication Number: US20230401154A1
Publication Date: 2023-12-14
Application Number: US17835810
Filing Date: 2022-06-08
Applicant: Advanced Micro Devices, Inc.
Inventor: Mohamed Assem Abd ElMohsen Ibrahim , Onur Kayiran , Shaizeen Dilawarhusen Aga , Yasuko Eckert
IPC: G06F12/0862
CPC classification number: G06F12/0862 , G06F2212/602
Abstract: A system and method for efficiently accessing sparse data for a workload are described. In various implementations, a computing system includes an integrated circuit and a memory for storing tasks of a workload that includes sparse accesses of data items stored in one or more tables. The integrated circuit receives a user query, and generates a result based on multiple data items targeted by the user query. To reduce the latency of processing the workload even with sparse lookup operations performed on the one or more tables, a prefetch engine of the integrated circuit stores a subset of data items in prefetch data storage. The prefetch engine also determines which data items to store in the prefetch data storage based on one or more of a frequency of reuse, a distance or latency of access of a corresponding table of the one or more tables, or other criteria.
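The selection criteria described above can be sketched as scoring data items by reuse frequency weighted by the access latency of their table, then keeping the top scorers that fit in the prefetch data storage. The field names and the multiplicative scoring rule are assumptions for illustration:

```python
def score(item_stats):
    """Rank data items for the prefetch store: favor frequently reused
    items from high-latency tables."""
    return item_stats["reuse_count"] * item_stats["access_latency_ns"]

def select_for_prefetch(items, capacity):
    """Keep the top-scoring items that fit in the prefetch data storage."""
    ranked = sorted(items, key=score, reverse=True)
    return ranked[:capacity]

items = [
    {"id": "a", "reuse_count": 10, "access_latency_ns": 300},
    {"id": "b", "reuse_count": 50, "access_latency_ns": 40},
    {"id": "c", "reuse_count": 2,  "access_latency_ns": 500},
]
top = select_for_prefetch(items, 2)
# scores: a=3000, b=2000, c=1000 -> "a" and "b" are kept
```

Weighting by latency captures the abstract's point that items from distant or slow tables are the most valuable to hold locally.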
-
Publication Number: US20230205872A1
Publication Date: 2023-06-29
Application Number: US17561170
Filing Date: 2021-12-23
Applicant: Advanced Micro Devices, Inc.
Inventor: Jagadish B. Kotra , Onur Kayiran , John Kalamatianos , Alok Garg
IPC: G06F21/55
CPC classification number: G06F21/554 , G06F2221/034
Abstract: A method includes receiving an indication that a number of activations of a memory structure exceeds a threshold number of activations for a time period, and in response to the indication, throttling instruction execution for a thread issuing the activations.
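The throttling decision described above reduces to counting activations per thread within a time window and comparing against the threshold (a rowhammer-style mitigation). The class structure and names here are hypothetical:

```python
def should_throttle(activation_count, threshold):
    """Throttle when activations in the current window exceed the threshold."""
    return activation_count > threshold

class ThreadThrottle:
    def __init__(self, threshold):
        self.threshold = threshold
        self.counts = {}          # thread id -> activations this window

    def record_activation(self, thread_id):
        """Record one activation; return True if the issuing thread
        should now have its instruction execution throttled."""
        self.counts[thread_id] = self.counts.get(thread_id, 0) + 1
        return should_throttle(self.counts[thread_id], self.threshold)

t = ThreadThrottle(threshold=3)
results = [t.record_activation("t0") for _ in range(4)]
# -> [False, False, False, True]: the fourth activation exceeds the threshold
```

A real implementation would also reset the counts at each time-period boundary; that bookkeeping is omitted here.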