SCALABLE MEMORY INTERFACE FOR GRAPHICAL PROCESSOR UNIT

    Publication Number: US20190213707A1

    Publication Date: 2019-07-11

    Application Number: US15867688

    Application Date: 2018-01-10

    Abstract: Embodiments are generally directed to a scalable memory interface for a graphical processor unit. An embodiment of an apparatus includes a graphical processing unit (GPU) including multiple autonomous engines; a common memory interface for the autonomous engines; and a memory management unit for the common memory interface, the memory management unit including multiple engine modules, wherein each of the engine modules includes a translation-lookaside buffer (TLB) that is dedicated to providing address translation for memory requests for a respective autonomous engine of the plurality of autonomous engines, and a TLB miss tracking mechanism that provides tracking for the respective autonomous engine.
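
    The abstract describes per-engine translation state sitting behind one shared memory path: each autonomous engine has a dedicated TLB and a tracker for its outstanding misses, while a common interface serves all engines. The sketch below illustrates that arrangement only; it is not the patented design, and the names (EngineModule, CommonMemoryInterface, pending_misses), the LRU policy, and the 4 KiB page size are assumptions made for illustration.

# Illustrative sketch (not the patented design): each engine gets its own TLB
# plus a miss tracker, all funneling into one common memory interface.
from collections import OrderedDict

PAGE_SHIFT = 12  # assumed 4 KiB pages

class EngineModule:
    """Per-engine TLB with a simple outstanding-miss tracker (illustrative only)."""
    def __init__(self, capacity=64):
        self.tlb = OrderedDict()        # virtual page -> physical page, LRU order
        self.capacity = capacity
        self.pending_misses = set()     # virtual pages awaiting a page walk

    def translate(self, vaddr):
        vpage = vaddr >> PAGE_SHIFT
        if vpage in self.tlb:
            self.tlb.move_to_end(vpage)                 # refresh LRU position
            return (self.tlb[vpage] << PAGE_SHIFT) | (vaddr & ((1 << PAGE_SHIFT) - 1))
        self.pending_misses.add(vpage)                  # track the outstanding miss
        return None

    def fill(self, vpage, ppage):
        self.pending_misses.discard(vpage)
        if len(self.tlb) >= self.capacity:
            self.tlb.popitem(last=False)                # evict the LRU entry
        self.tlb[vpage] = ppage

class CommonMemoryInterface:
    """One shared front end for all engines; translation state stays per engine."""
    def __init__(self, engine_names, page_table):
        self.modules = {name: EngineModule() for name in engine_names}
        self.page_table = page_table                    # vpage -> ppage, stands in for a page walk

    def access(self, engine, vaddr):
        module = self.modules[engine]
        paddr = module.translate(vaddr)
        if paddr is None:                               # miss: walk, fill, retry
            vpage = vaddr >> PAGE_SHIFT
            module.fill(vpage, self.page_table[vpage])
            paddr = module.translate(vaddr)
        return paddr

# Usage: two engines share one interface but keep independent TLB state.
mmu = CommonMemoryInterface(["render", "copy"], page_table={0x1: 0x80, 0x2: 0x81})
print(hex(mmu.access("render", 0x1234)))   # miss then hit -> 0x80234
print(hex(mmu.access("copy", 0x2050)))     # the copy engine's TLB is independent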

    Apparatus and method for shared resource partitioning through credit management

    Publication Number: US11023998B2

    Publication Date: 2021-06-01

    Application Number: US16373477

    Application Date: 2019-04-02

    Abstract: An apparatus is provided which comprises: a first engine buffer to receive a first engine request; a first engine register coupled to the first engine buffer, wherein the first engine register is to store first engine credits associated with the first engine buffer; a second engine buffer to receive a second engine request; a second engine register coupled to the second engine buffer, wherein the second engine register is to store second engine credits associated with the second engine buffer; and a common buffer which is common to the first and second engines, wherein the first engine credits represents one or more slots in the common buffer for servicing the first engine request for access to a common resource, and wherein the second engine credits represents one or more slots in the common buffer for servicing the second engine request for access to the common resource.
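
    The credit scheme in this abstract can be pictured as per-engine counters that gate how many slots of a shared buffer each engine may occupy at once, so neither engine can starve the other out of the common resource. The following is a minimal software sketch of that reading, not the claimed hardware; the CreditManagedBuffer name, the deque-based buffer, and the specific credit counts are illustrative assumptions.

# Minimal sketch (not the claimed hardware): per-engine credit counters that
# represent slots in a shared buffer feeding a common resource.
from collections import deque

class CreditManagedBuffer:
    def __init__(self, credit_allocation):
        # credit_allocation: engine name -> number of common-buffer slots it may hold
        self.credits = dict(credit_allocation)          # per-engine credit "registers"
        self.common_buffer = deque()                    # shared slots, tagged by owner

    def submit(self, engine, request):
        """Accept a request only if the engine still holds a credit (a free slot)."""
        if self.credits[engine] == 0:
            return False                                # engine must wait for a credit
        self.credits[engine] -= 1                       # consume one credit
        self.common_buffer.append((engine, request))
        return True

    def service_one(self):
        """Service the oldest request and return its credit to the owning engine."""
        if not self.common_buffer:
            return None
        engine, request = self.common_buffer.popleft()
        self.credits[engine] += 1                       # slot freed, credit restored
        return engine, request

# Usage: engine A owns two slots, engine B owns one, so neither starves the other.
buf = CreditManagedBuffer({"A": 2, "B": 1})
print(buf.submit("A", "read0"))   # True
print(buf.submit("A", "read1"))   # True
print(buf.submit("A", "read2"))   # False: A is out of credits
print(buf.submit("B", "write0"))  # True: B's slot is still reserved
buf.service_one()                 # frees one of A's slots
print(buf.submit("A", "read2"))   # True again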

    Apparatus and method for shared resource partitioning through credit management

    Publication Number: US10249017B2

    Publication Date: 2019-04-02

    Application Number: US15234773

    Application Date: 2016-08-11

    Abstract: An apparatus is provided which comprises: a first engine buffer to receive a first engine request; a first engine register coupled to the first engine buffer, wherein the first engine register is to store first engine credits associated with the first engine buffer; a second engine buffer to receive a second engine request; a second engine register coupled to the second engine buffer, wherein the second engine register is to store second engine credits associated with the second engine buffer; and a common buffer which is common to the first and second engines, wherein the first engine credits represents one or more slots in the common buffer for servicing the first engine request for access to a common resource, and wherein the second engine credits represents one or more slots in the common buffer for servicing the second engine request for access to the common resource.

    PAGE TRANSLATION PREFETCH MECHANISM
    Invention Application

    Publication Number: US20190163641A1

    Publication Date: 2019-05-30

    Application Number: US15822948

    Application Date: 2017-11-27

    Abstract: An apparatus to facilitate page translation prefetching is disclosed. The apparatus includes a translation lookaside buffer (TLB), including a first table to store page table entries (PTEs) and a second table to store tags corresponding to each of the PTEs; and prefetch logic to detect a miss of a first requested address in the TLB during a page translation, retrieve a plurality of physical addresses from memory in response to the TLB miss and store the plurality of physical addresses as a plurality of PTEs in a first TLB entry.
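
    The prefetch idea here is that a single TLB entry, identified by one tag in the tag table, holds a block of PTEs fetched together when a translation misses, so later accesses to neighboring pages hit without another memory walk. The sketch below illustrates that behavior under assumed parameters (4 KiB pages, four PTEs per block, a page_walk callable standing in for the memory fetch); it is not the patented circuit.

# Illustrative sketch of the prefetch idea (assumed sizes, not the patented circuit):
# on a miss, fetch a whole block of PTEs for neighboring pages and store them
# under one tag, so one TLB entry covers several consecutive virtual pages.
PAGE_SHIFT = 12       # assumed 4 KiB pages
BLOCK_PAGES = 4       # assumed number of PTEs prefetched per miss

class PrefetchingTLB:
    def __init__(self, page_walk):
        self.page_walk = page_walk    # callable: vpage -> ppage; stands in for the memory walk
        self.tags = {}                # tag table: block tag -> index into the PTE table
        self.ptes = []                # PTE table: each entry is a list of BLOCK_PAGES translations

    def translate(self, vaddr):
        vpage = vaddr >> PAGE_SHIFT
        tag, slot = divmod(vpage, BLOCK_PAGES)
        if tag not in self.tags:                         # miss: prefetch the whole block
            base = tag * BLOCK_PAGES
            block = [self.page_walk(base + i) for i in range(BLOCK_PAGES)]
            self.tags[tag] = len(self.ptes)
            self.ptes.append(block)                      # one TLB entry holds several PTEs
        ppage = self.ptes[self.tags[tag]][slot]
        return (ppage << PAGE_SHIFT) | (vaddr & ((1 << PAGE_SHIFT) - 1))

# Usage: the first access misses and prefetches pages 0..3; the second access hits.
tlb = PrefetchingTLB(page_walk=lambda vpage: 0x100 + vpage)
print(hex(tlb.translate(0x0123)))   # miss, prefetches the block -> 0x100123
print(hex(tlb.translate(0x3ABC)))   # hit in the same TLB entry -> 0x103abc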

    Scalable memory interface for graphical processor unit

    Publication Number: US10552937B2

    Publication Date: 2020-02-04

    Application Number: US15867688

    Application Date: 2018-01-10

    Abstract: Embodiments are generally directed to a scalable memory interface for a graphical processor unit. An embodiment of an apparatus includes a graphical processing unit (GPU) including multiple autonomous engines; a common memory interface for the autonomous engines; and a memory management unit for the common memory interface, the memory management unit including multiple engine modules, wherein each of the engine modules includes a translation-lookaside buffer (TLB) that is dedicated to providing address translation for memory requests for a respective autonomous engine of the plurality of autonomous engines, and a TLB miss tracking mechanism that provides tracking for the respective autonomous engine.

    APPARATUS AND METHOD FOR SHARED RESOURCE PARTITIONING THROUGH CREDIT MANAGEMENT

    Publication Number: US20190228499A1

    Publication Date: 2019-07-25

    Application Number: US16373477

    Application Date: 2019-04-02

    Abstract: An apparatus is provided which comprises: a first engine buffer to receive a first engine request; a first engine register coupled to the first engine buffer, wherein the first engine register is to store first engine credits associated with the first engine buffer; a second engine buffer to receive a second engine request; a second engine register coupled to the second engine buffer, wherein the second engine register is to store second engine credits associated with the second engine buffer; and a common buffer which is common to the first and second engines, wherein the first engine credits represents one or more slots in the common buffer for servicing the first engine request for access to a common resource, and wherein the second engine credits represents one or more slots in the common buffer for servicing the second engine request for access to the common resource.

    APPARATUS AND METHOD FOR SHARED RESOURCE PARTITIONING THROUGH CREDIT MANAGEMENT

    Publication Number: US20180047131A1

    Publication Date: 2018-02-15

    Application Number: US15234773

    Application Date: 2016-08-11

    Abstract: An apparatus is provided which comprises: a first engine buffer to receive a first engine request; a first engine register coupled to the first engine buffer, wherein the first engine register is to store first engine credits associated with the first engine buffer; a second engine buffer to receive a second engine request; a second engine register coupled to the second engine buffer, wherein the second engine register is to store second engine credits associated with the second engine buffer; and a common buffer which is common to the first and second engines, wherein the first engine credits represents one or more slots in the common buffer for servicing the first engine request for access to a common resource, and wherein the second engine credits represents one or more slots in the common buffer for servicing the second engine request for access to the common resource.
