-
Publication No.: US20240338238A1
Publication Date: 2024-10-10
Application No.: US18574849
Filing Date: 2022-01-26
Applicant: Intel Corporation
Inventors: Wei Wang, Kun Tian, Guang Zeng, Gilbert Neiger, Rajesh Sankaran, Asit Mallick, Jr-Shian Tsai, Jacob Jun Pan, Mesut Ergin
CPC Classification: G06F9/45558, G06F9/3016, G06F9/45545, G06F2009/45579
Abstract: A method and system of host-to-guest (H2G) notification are disclosed. The H2G notification is provided via a single instruction, a send user inter-processor interrupt instruction. An exemplary processor includes decoder circuitry to decode the single instruction and execute the decoded instruction according to at least the opcode to cause a host-to-guest notification from a virtual device running in a host machine on a first physical processor to a virtual device driver running on a virtual processor in a guest machine on a second physical processor.
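A minimal user-space sketch of the notification flow described above, written in C. All structure and function names (h2g_channel, host_notify, guest_poll) are illustrative assumptions; an atomic flag stands in for the send-user-interrupt instruction and the hardware delivery path so the example stays runnable without the patented hardware.

/* Sketch only: the host-side virtual device posts a notification that the
 * guest's virtual device driver observes.  In the patented design the posting
 * is a single send-user-inter-processor-interrupt instruction; here it is
 * simulated with an atomic flag. */
#include <stdatomic.h>
#include <stdio.h>

struct h2g_channel {
    atomic_uint pending;   /* notification bit the guest driver watches   */
    unsigned    vector;    /* interrupt vector the hardware would deliver */
};

/* Host side: the virtual device raises a notification for the guest vCPU. */
static void host_notify(struct h2g_channel *ch)
{
    /* Hardware equivalent: execute the send-user-IPI instruction with an
     * operand identifying the target guest notification descriptor. */
    atomic_store_explicit(&ch->pending, 1u, memory_order_release);
}

/* Guest side: the virtual device driver consumes the notification. */
static int guest_poll(struct h2g_channel *ch)
{
    return (int)atomic_exchange_explicit(&ch->pending, 0u, memory_order_acquire);
}

int main(void)
{
    struct h2g_channel ch = { .pending = 0, .vector = 0x30 };
    host_notify(&ch);
    if (guest_poll(&ch))
        printf("guest driver received H2G notification (vector 0x%x)\n", ch.vector);
    return 0;
}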
-
Publication No.: US20240289160A1
Publication Date: 2024-08-29
Application No.: US18658822
Filing Date: 2024-05-08
Applicant: Intel Corporation
Inventors: Barry E. Huntley, Jr-Shian Tsai, Gilbert Neiger, Rajesh M. Sankaran, Mesut A. Ergin, Ravi L. Sahita, Andrew J. Herdrich, Wei Wang
CPC Classification: G06F9/45558, G06F9/3004, G06F9/45533, G06F12/0292, G06F12/10, G06F12/109, G11C7/1072, G06F2009/45583, G06F2009/45591, G06F2009/45595, G06F2212/151
Abstract: A processor of an aspect includes a decode unit to decode an aperture access instruction, and an execution unit coupled with the decode unit. The execution unit, in response to the aperture access instruction, is to read a host physical memory address, which is to be associated with an aperture that is to be in system memory, from an access protected structure, and access data within the aperture at a host physical memory address that is not to be obtained through address translation. Other processors are also disclosed, as are methods, systems, and machine-readable media storing aperture access instructions.
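The following toy model in C, with assumed names (aperture_ctrl, aperture_read), illustrates the flow the abstract describes: the host physical address of the aperture comes from an access-protected structure, and the data access skips address translation. Ordinary memory stands in for both the protected structure and the aperture.

/* Sketch only: no real protection or physical addressing is modeled. */
#include <stdint.h>
#include <stdio.h>

#define APERTURE_BYTES 64

struct aperture_ctrl {            /* stands in for the access-protected structure */
    uint8_t *aperture_hpa;        /* host physical address of the aperture        */
};

/* Model of the aperture-read step: no page-table walk, direct access. */
static uint8_t aperture_read(const struct aperture_ctrl *ctl, size_t off)
{
    return ctl->aperture_hpa[off];
}

int main(void)
{
    static uint8_t aperture[APERTURE_BYTES] = { [0] = 0xAB };
    struct aperture_ctrl ctl = { .aperture_hpa = aperture };
    printf("byte 0 of aperture: 0x%02X\n", aperture_read(&ctl, 0));
    return 0;
}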
-
Publication No.: US11943340B2
Publication Date: 2024-03-26
Application No.: US17437342
Filing Date: 2019-04-19
Applicant: Intel Corporation
Inventors: Bo Cui, Cunming Liang, Jr-Shian Tsai, Ping Yu, Xiaobing Qian, Xuekun Hu, Lin Luo, Shravan Nagraj, Xiaowen Zhang, Mesut A. Ergin, Tsung-Yuan C. Tai, Andrew J. Herdrich
CPC Classification: H04L9/0825, H04L9/0631, H04L9/085, H04L63/0236
Abstract: In some examples, for process-to-process communication, such as in function linking, a virtual channel can be provisioned to provide virtual machine to virtual machine communications. In response to a transmit request from a source virtual machine, the virtual channel can cause a data copy from a source buffer associated with the source virtual machine without decryption or encryption. The virtual channel provisions a key identifier for the copied data. The destination virtual machine can receive an indication that data is available and can cause the data to be decrypted using a key accessed via the key identifier and the source address of the copied data. In addition, the data can be encrypted using a second, different key for storage in a destination buffer associated with the destination virtual machine. In some examples, the key identifier and source address are managed by the virtual channel and are not visible to the virtual machines or the hypervisor.
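Below is a conceptual C sketch of the transfer path, with hypothetical names (vchan_desc, key_table) and XOR as a stand-in for the real memory encryption: the channel copies ciphertext as-is and only tracks the key identifier and source address, while the destination decrypts with the key looked up via the key identifier and re-encrypts with its own, different key.

/* Sketch only: XOR is a placeholder for real per-VM memory encryption. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define BUF_LEN 16

struct vchan_desc {            /* managed by the virtual channel, not by   */
    uint8_t  key_id;           /* either VM or the hypervisor               */
    uint8_t *src_addr;
    size_t   len;
};

static uint8_t key_table[4] = { 0x11, 0x22, 0x33, 0x44 };   /* key_id -> key */

static void xor_crypt(uint8_t *buf, size_t n, uint8_t key)  /* stand-in crypto */
{
    for (size_t i = 0; i < n; i++)
        buf[i] ^= key;
}

int main(void)
{
    uint8_t src_vm_buf[BUF_LEN] = "hello dest vm!";
    uint8_t channel_copy[BUF_LEN];
    uint8_t dst_vm_buf[BUF_LEN];

    /* Source VM's data is at rest under key_id 1. */
    xor_crypt(src_vm_buf, BUF_LEN, key_table[1]);

    /* Transmit request: channel copies ciphertext without touching keys. */
    struct vchan_desc d = { .key_id = 1, .src_addr = src_vm_buf, .len = BUF_LEN };
    memcpy(channel_copy, d.src_addr, d.len);

    /* Destination VM: decrypt with the key found via the key id, then
     * re-encrypt with its own, different key for its destination buffer. */
    memcpy(dst_vm_buf, channel_copy, d.len);
    xor_crypt(dst_vm_buf, d.len, key_table[d.key_id]);       /* decrypt (source key) */
    printf("plaintext seen by destination: %s\n", (const char *)dst_vm_buf);
    xor_crypt(dst_vm_buf, d.len, key_table[2]);              /* re-encrypt (dest key) */
    return 0;
}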
-
Publication No.: US11283723B2
Publication Date: 2022-03-22
Application No.: US16144384
Filing Date: 2018-09-27
Applicant: Intel Corporation
Inventors: Jiayu Hu, Cunming Liang, Ren Wang, Jr-Shian Tsai, Jingjing Wu, Zhaoyan Chen
IPC Classification: H04L12/835, H04L47/30, H04L49/9005, H04L12/42, G06F15/173, H04L49/901
Abstract: Technologies for managing a single-producer and single-consumer ring include a producer of a compute node that is configured to allocate data buffers, produce work, and indicate that work has been produced. The compute node is configured to insert reference information for each of the allocated data buffers into respective elements of the ring and store the produced work into the data buffers. The compute node includes a consumer configured to request the produced work from the ring. The compute node is further configured to dequeue the reference information from each of the elements of the ring that correspond to the portion of data buffers in which the produced work has been stored, and set each of the elements of the ring for which the reference information has been dequeued to an empty (i.e., NULL) value. Other embodiments are described herein.
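A minimal C sketch of such a single-producer/single-consumer ring follows. Names are illustrative; the key property from the abstract is that each ring element holds a buffer reference and is reset to NULL (empty) when the consumer dequeues it, so no shared head/tail counters are needed.

/* Sketch only: a pointer ring where NULL marks an empty element. */
#include <stdatomic.h>
#include <stdio.h>

#define RING_SIZE 8                       /* power of two for cheap wrap */

struct spsc_ring {
    _Atomic(void *) slot[RING_SIZE];      /* NULL == empty element        */
    unsigned prod_idx;                    /* private to the producer      */
    unsigned cons_idx;                    /* private to the consumer      */
};

/* Producer: publish a reference to a filled data buffer. */
static int ring_enqueue(struct spsc_ring *r, void *buf)
{
    unsigned i = r->prod_idx & (RING_SIZE - 1);
    if (atomic_load_explicit(&r->slot[i], memory_order_acquire) != NULL)
        return -1;                        /* ring full                    */
    atomic_store_explicit(&r->slot[i], buf, memory_order_release);
    r->prod_idx++;
    return 0;
}

/* Consumer: take the reference and reset the element to NULL (empty). */
static void *ring_dequeue(struct spsc_ring *r)
{
    unsigned i = r->cons_idx & (RING_SIZE - 1);
    void *buf = atomic_load_explicit(&r->slot[i], memory_order_acquire);
    if (buf == NULL)
        return NULL;                      /* nothing produced yet         */
    atomic_store_explicit(&r->slot[i], NULL, memory_order_release);
    r->cons_idx++;
    return buf;
}

int main(void)
{
    static struct spsc_ring r;            /* zero-initialized: all empty  */
    int work = 42;
    ring_enqueue(&r, &work);
    int *got = ring_dequeue(&r);
    printf("consumed work item: %d\n", got ? *got : -1);
    return 0;
}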
-
Publication No.: US10860709B2
Publication Date: 2020-12-08
Application No.: US16024547
Filing Date: 2018-06-29
Applicant: Intel Corporation
Inventors: Michael Lemay, David M. Durham, Michael E. Kounavis, Barry E. Huntley, Vedvyas Shanbhogue, Jason W. Brandt, Josh Triplett, Gilbert Neiger, Karanvir Grewal, Baiju V. Patel, Ye Zhuang, Jr-Shian Tsai, Vadim Sukhomlinov, Ravi Sahita, Mingwei Zhang, James C. Farwell, Amitabh Das, Krishna Bhuyan
Abstract: Disclosed embodiments relate to encoded inline capabilities. In one example, a system includes a trusted execution environment (TEE) to partition an address space within a memory into a plurality of compartments each associated with code to execute a function, the TEE further to assign a message object in a heap to each compartment, receive a request from a first compartment to send a message block to a specified destination compartment, respond to the request by authenticating the request, generating a corresponding encoded capability, conveying the encoded capability to the destination compartment, and scheduling the destination compartment to respond to the request, and subsequently, respond to a check capability request from the destination compartment by checking the encoded capability and, when the check passes, providing a memory address to access the message block, and, otherwise, generating a fault, wherein each compartment is isolated from other compartments.
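The sketch below illustrates the encode/check handshake in C, with assumed names and a toy keyed hash in place of a real, hardware-protected encoding: the TEE encodes a capability for a message block, the destination compartment presents it back, and a failed check generates a fault.

/* Sketch only: toy_mac is a placeholder, not a secure MAC. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct capability {
    uintptr_t msg_addr;      /* address of the message block in the heap  */
    uint64_t  tag;           /* authentication tag over addr + secret     */
};

static const uint64_t tee_secret = 0x5a5a1234abcd9876ull;   /* TEE-private */

static uint64_t toy_mac(uintptr_t addr)        /* stand-in for a real MAC */
{
    uint64_t x = (uint64_t)addr ^ tee_secret;
    x *= 0x9e3779b97f4a7c15ull;
    return x ^ (x >> 31);
}

/* TEE: encode a capability for the destination compartment. */
static struct capability encode_capability(void *msg_block)
{
    uintptr_t a = (uintptr_t)msg_block;
    return (struct capability){ .msg_addr = a, .tag = toy_mac(a) };
}

/* TEE: check a presented capability; return the address or fault. */
static void *check_capability(struct capability cap)
{
    if (cap.tag != toy_mac(cap.msg_addr)) {
        fprintf(stderr, "capability check failed: fault\n");
        abort();                          /* "generate a fault"           */
    }
    return (void *)cap.msg_addr;
}

int main(void)
{
    char msg_block[64] = "request from compartment A";
    struct capability cap = encode_capability(msg_block);
    char *p = check_capability(cap);      /* destination compartment side */
    printf("destination may read: %s\n", p);
    return 0;
}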
-
Publication No.: US10178054B2
Publication Date: 2019-01-08
Application No.: US15088910
Filing Date: 2016-04-01
Applicant: INTEL CORPORATION
Inventors: Stephen T. Palermo, Iosif Gasparakis, Scott P. Dubal, Kapil Sood, Trevor Cooper, Jr-Shian Tsai, Jesse C. Brandeburg, Andrew J. Herdrich, Edwin Verplanke
IPC Classification: H04L12/861, H04L12/715, H04L12/931, G06F15/173
Abstract: Methods and apparatus for accelerating VM-to-VM network traffic using the CPU cache. A virtual queue manager (VQM) manages data that is to be kept in VM-to-VM shared data buffers in the CPU cache. The VQM stores a list of VM-to-VM allow entries identifying data transfers between VMs that may use VM-to-VM cache "fast-path" forwarding. Packets are sent from VMs to the VQM for forwarding to destination VMs. Indicia in the packets (e.g., in a tag or header) are inspected to determine whether a packet is to be forwarded via the VM-to-VM cache fast path or via a virtual switch. The VQM tracks which VM data is already in the CPU cache domain while concurrently coordinating data to and from external shared memory, and also ensures coherency between the data kept in cache and the data kept in shared memory.
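A hedged C sketch of the forwarding decision follows; the allow-list contents, structure names, and fields are assumptions. The VQM checks the packet's indicia against its VM-to-VM allow entries and picks either the cache fast path or the virtual switch.

/* Sketch only: the actual cache handling and switch path are not modeled. */
#include <stdbool.h>
#include <stdio.h>

struct allow_entry { int src_vm, dst_vm; };

static const struct allow_entry allow_list[] = {
    { .src_vm = 1, .dst_vm = 2 },
    { .src_vm = 3, .dst_vm = 1 },
};

struct packet {
    int src_vm, dst_vm;       /* indicia carried in the tag/header */
    const char *payload;
};

static bool fast_path_allowed(const struct packet *p)
{
    for (size_t i = 0; i < sizeof(allow_list) / sizeof(allow_list[0]); i++)
        if (allow_list[i].src_vm == p->src_vm && allow_list[i].dst_vm == p->dst_vm)
            return true;
    return false;
}

static void vqm_forward(const struct packet *p)
{
    if (fast_path_allowed(p))
        printf("VM%d -> VM%d via CPU-cache fast path: %s\n",
               p->src_vm, p->dst_vm, p->payload);
    else
        printf("VM%d -> VM%d via virtual switch: %s\n",
               p->src_vm, p->dst_vm, p->payload);
}

int main(void)
{
    struct packet a = { 1, 2, "cached transfer" };
    struct packet b = { 2, 3, "regular transfer" };
    vqm_forward(&a);
    vqm_forward(&b);
    return 0;
}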
-
Publication No.: US11409506B2
Publication Date: 2022-08-09
Application No.: US16142401
Filing Date: 2018-09-26
Applicant: Intel Corporation
Inventors: Yipeng Wang, Ren Wang, Tsung-Yuan C. Tai, Jr-Shian Tsai, Xiangyang Guo
Abstract: Examples may include a method of compiling a declarative language program for a virtual switch. The method includes parsing the declarative language program, the program defining a plurality of match-action tables (MATs), translating the plurality of MATs into intermediate code, and parsing a core identifier (ID) assigned to each one of the plurality of MATs. When the core IDs of the plurality of MATs are the same, the method includes connecting the intermediate code of the plurality of MATs using function calls, and translating the intermediate code of the plurality of MATs into machine code to be executed by a core identified by the core IDs.
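The short C sketch below models the linking step with assumed names: match-action tables that carry the same core ID are connected by direct function calls, so they execute as one pipeline on that core.

/* Sketch only: real intermediate-code generation and JIT are not modeled. */
#include <stdio.h>

struct packet { int field; };

typedef void (*mat_fn)(struct packet *);

struct mat {
    const char *name;
    int core_id;              /* core ID assigned in the declarative program */
    mat_fn action;
};

static void mat_l2(struct packet *p)  { printf("L2 MAT on field %d\n",  p->field); }
static void mat_acl(struct packet *p) { printf("ACL MAT on field %d\n", p->field); }

static struct mat tables[] = {
    { "l2",  .core_id = 0, .action = mat_l2  },
    { "acl", .core_id = 0, .action = mat_acl },
};

/* "Connect intermediate code using function calls": run every MAT that shares
 * the requested core ID back-to-back, as one fused pipeline for that core. */
static void run_pipeline_for_core(int core_id, struct packet *p)
{
    for (size_t i = 0; i < sizeof(tables) / sizeof(tables[0]); i++)
        if (tables[i].core_id == core_id)
            tables[i].action(p);
}

int main(void)
{
    struct packet p = { .field = 7 };
    run_pipeline_for_core(0, &p);
    return 0;
}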
-
Publication No.: US10972371B2
Publication Date: 2021-04-06
Application No.: US14671863
Filing Date: 2015-03-27
Applicant: Intel Corporation
Inventors: Alexander W. Min, Jr-Shian Tsai, Janet Tseng, Kapil Sood, Tsung-Yuan C. Tai
IPC Classification: G06F15/16, H04L12/26, H04L12/911, H04L12/917, H04L12/813, H04L12/721
Abstract: Technologies for monitoring network traffic include a computing device that monitors network traffic at a graphics processing unit (GPU) of the computing device. The computing device manages its computing resources based on the results of the monitored network traffic. The computing resources may include one or more virtual machines to process the network traffic that is to be monitored at the GPU of the computing device. Other embodiments are described and claimed.
-
Publication No.: US10929323B2
Publication Date: 2021-02-23
Application No.: US16601137
Filing Date: 2019-10-14
Applicant: Intel Corporation
Inventors: Ren Wang, Yipeng Wang, Andrew Herdrich, Jr-Shian Tsai, Tsung-Yuan C. Tai, Niall D. McDonnell, Hugh Wilkinson, Bradley A. Burres, Bruce Richardson, Namakkal N. Venkatesan, Debra Bernstein, Edwin Verplanke, Stephen R. Van Doren, An Yan, Andrew Cunningham, David Sonnier, Gage Eads, James T. Clee, Jamison D. Whitesell, Jerry Pirog, Jonathan Kenny, Joseph R. Hasting, Narender Vangati, Stephen Miller, Te K. Ma, William Burroughs
IPC Classification: G06F13/37, G06F9/54, G06F12/0868, G06F12/0811, G06F13/16, G06F12/04, G06F9/38
Abstract: Apparatus and methods implementing a hardware queue management device for reducing inter-core data transfer overhead by offloading request management and data coherency tasks from the CPU cores. The apparatus includes multi-core processors, a shared L3 or last-level cache ("LLC"), and a hardware queue management device to receive, store, and process inter-core data transfer requests. The hardware queue management device further comprises a resource management system to control the rate at which the cores may submit requests, in order to reduce core stalls and dropped requests. Additionally, software instructions are introduced to optimize communication between the cores and the queue management device.
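A small C sketch of the resource-management idea follows, with assumed names and credit counts: each core holds a credit budget and may only submit a request to the queue-management device while it has credit, which bounds the submission rate instead of dropping requests.

/* Sketch only: the queuing and scheduling inside the device are not modeled. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_CORES 4
#define CREDITS_PER_CORE 2

struct hqm {
    int credits[NUM_CORES];   /* outstanding-request budget per core   */
    int queued;               /* requests currently held by the device */
};

static bool hqm_submit(struct hqm *q, int core)
{
    if (q->credits[core] == 0)
        return false;         /* core must back off instead of dropping */
    q->credits[core]--;
    q->queued++;
    return true;
}

static void hqm_complete(struct hqm *q, int core)
{
    q->queued--;
    q->credits[core]++;       /* credit returned when the device finishes */
}

int main(void)
{
    struct hqm q = { 0 };
    for (int c = 0; c < NUM_CORES; c++)
        q.credits[c] = CREDITS_PER_CORE;

    printf("submit 1: %s\n", hqm_submit(&q, 0) ? "accepted" : "back off");
    printf("submit 2: %s\n", hqm_submit(&q, 0) ? "accepted" : "back off");
    printf("submit 3: %s\n", hqm_submit(&q, 0) ? "accepted" : "back off");
    hqm_complete(&q, 0);
    printf("submit 4: %s\n", hqm_submit(&q, 0) ? "accepted" : "back off");
    return 0;
}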
-
Publication No.: US20190044869A1
Publication Date: 2019-02-07
Application No.: US15999133
Filing Date: 2018-08-17
Applicant: Intel Corporation
Inventors: Yipeng Wang, Ren Wang, Janet Tseng, Jr-Shian Tsai, Tsung-Yuan Tai
IPC Classification: H04L12/851, H04L12/931, H04L12/713, H04L12/26, H04L29/08, G06F11/34
Abstract: Technologies for classifying network flows using adaptive virtual routing include a network appliance with one or more processors. The network appliance is configured to identify a set of candidate classification algorithms from a plurality of classification algorithm designs to perform a flow classification operation and to deploy each of the candidate classification algorithms to a processor. Additionally, the network appliance is configured to monitor the performance level of each of the deployed candidate classification algorithms and identify the deployed candidate classification algorithm with the highest performance level. The network appliance is further configured to deploy that identified candidate classification algorithm on each of the one or more processors that are configured to perform the flow classification operation. Other embodiments are described herein.
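The compact C sketch below, with illustrative names and placeholder performance numbers, shows the selection loop: measure each deployed candidate and then choose the one with the highest observed performance for every core that performs flow classification.

/* Sketch only: real deployment and performance monitoring are not modeled. */
#include <stdio.h>

struct candidate {
    const char *name;
    double measured_mpps;     /* performance observed while deployed */
};

static int pick_best(const struct candidate *c, int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (c[i].measured_mpps > c[best].measured_mpps)
            best = i;
    return best;
}

int main(void)
{
    /* Performance numbers here are placeholders for the monitored values. */
    struct candidate cands[] = {
        { "hash-based",    11.2 },
        { "tuple-space",    9.8 },
        { "decision-tree", 13.4 },
    };
    int n = (int)(sizeof(cands) / sizeof(cands[0]));
    int best = pick_best(cands, n);
    printf("deploying '%s' on all classification cores\n", cands[best].name);
    return 0;
}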