-
Publication No.: US20230030060A1
Publication Date: 2023-02-02
Application No.: US17838872
Filing Date: 2022-06-13
Applicant: Intel Corporation
Inventor: Ren Wang , Mia PRIMORAC , Tsung-Yuan C. Tai , Saikrishna EDUPUGANTI , John J. Browne
Abstract: Technologies for dynamically managing a batch size of packets include a network device. The network device is to receive, into a queue, packets from a remote node to be processed by the network device, determine a throughput provided by the network device while the packets are processed, determine whether the determined throughput satisfies a predefined condition, and adjust a batch size of packets in response to a determination that the determined throughput satisfies a predefined condition. The batch size is indicative of a threshold number of queued packets required to be present in the queue before the queued packets in the queue can be processed by the network device.
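The throughput-feedback idea in the abstract can be sketched as a simple control loop: grow the batch while throughput improves (amortizing per-batch overhead), shrink it when throughput drops (cutting queueing latency). This is a minimal illustrative sketch; the function name, thresholds, and doubling/halving policy are assumptions, not taken from the claims.

```c
/* Hypothetical batch-size adaptation driven by measured throughput.
 * Bounds and the multiplicative policy are illustrative choices. */

#define BATCH_MIN 1
#define BATCH_MAX 64

static int adjust_batch_size(int batch, double throughput, double prev_throughput)
{
    if (throughput >= prev_throughput) {
        /* Throughput improved or held: try a larger batch. */
        if (batch * 2 <= BATCH_MAX)
            batch *= 2;
    } else {
        /* Throughput regressed: back off to reduce latency. */
        if (batch / 2 >= BATCH_MIN)
            batch /= 2;
    }
    return batch;
}
```

A driver would call this once per measurement interval, passing the packets-per-second observed in the current and previous intervals.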
-
Publication No.: US11379342B2
Publication Date: 2022-07-05
Application No.: US16827410
Filing Date: 2020-03-23
Applicant: Intel Corporation
Inventor: Ren Wang , Bin Li , Andrew J. Herdrich , Tsung-Yuan C. Tai , Ramakrishna Huggahalli
IPC: G06F11/30 , G06F11/34 , G06F12/0811 , G06F12/121 , G06F13/16 , G06F13/42 , G06F12/128 , G06F12/084 , G06F12/0888 , H04L67/1097 , G06F13/28
Abstract: There is disclosed in one example a computing apparatus, including: a processor; a multilevel cache including a plurality of cache levels; a peripheral device configured to write data directly to a selected cache level; and a cache monitoring circuit, including a cache counter to track cache lines evicted from the selected cache level without being processed; and logic to provide a direct write policy according to the cache counter.
-
Publication No.: US20220138156A1
Publication Date: 2022-05-05
Application No.: US17521630
Filing Date: 2021-11-08
Applicant: Intel Corporation
Inventor: Ren Wang , Christian Maciocco , Kshitij Doshi , Francesc Guim Bernat , Ned Smith , Satish Jha , Vesh Raj Sharma Banjade , S M Iftekharul Alam
IPC: G06F16/16 , G06F16/13 , G06F16/182
Abstract: Methods, apparatus, systems, and articles of manufacture providing tiered elastic cloud storage to increase data resiliency are disclosed. Example instructions cause one or more processors to at least: generate a storage scheme for files based on a categorization of the files and resource capabilities of an edge-based device and a cloud-based device, the categorization including a first group of files to be stored locally at an end user computing device, a second group of files to be stored externally at the edge-based device, and a third group of files to be stored externally at the cloud-based device; in response to an acknowledgement from at least one of the edge-based device or the cloud-based device, generate a map corresponding to locations of the files; store the first group of files in local storage; and cause transmission of the second group of files to the edge-based device and the third group of files to the cloud-based device.
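The three-way categorization above can be illustrated with a tiny placement function: hot files stay local, warm files go to the edge, cold files go to the cloud. The criterion (access frequency) and the thresholds are assumptions made for illustration; the patent's actual categorization may weigh other factors such as file size or device capabilities.

```c
/* Illustrative three-tier placement decision. Thresholds are assumed. */

enum tier { TIER_LOCAL, TIER_EDGE, TIER_CLOUD };

struct file_info {
    const char *name;
    int accesses_per_day;   /* how "hot" the file is */
    long size_bytes;
};

static enum tier place_file(const struct file_info *f)
{
    if (f->accesses_per_day >= 10)
        return TIER_LOCAL;   /* hot: keep on the end-user device */
    if (f->accesses_per_day >= 1)
        return TIER_EDGE;    /* warm: push to the edge-based device */
    return TIER_CLOUD;       /* cold: push to the cloud-based device */
}
```

A storage scheme would run every file through such a classifier and record the resulting tier in the location map mentioned in the abstract.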
-
Publication No.: US11283723B2
Publication Date: 2022-03-22
Application No.: US16144384
Filing Date: 2018-09-27
Applicant: Intel Corporation
Inventor: Jiayu Hu , Cunming Liang , Ren Wang , Jr-Shian Tsai , Jingjing Wu , Zhaoyan Chen
IPC: H04L12/835 , H04L47/30 , H04L49/9005 , H04L12/42 , G06F15/173 , H04L49/901
Abstract: Technologies for managing a single-producer and single-consumer ring include a producer of a compute node that is configured to allocate data buffers, produce work, and indicate that work has been produced. The compute node is configured to insert reference information for each of the allocated data buffers into respective elements of the ring and store the produced work into the data buffers. The compute node includes a consumer configured to request the produced work from the ring. The compute node is further configured to dequeue the reference information from each of the elements of the ring that correspond to the portion of data buffers in which the produced work has been stored, and set each of the elements of the ring for which the reference information has been dequeued to an empty (i.e., NULL) value. Other embodiments are described herein.
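The ring mechanics described above, with an empty (NULL) value marking a drained element, can be sketched as a minimal single-producer/single-consumer ring. This single-threaded sketch omits the atomics and memory barriers a real concurrent implementation needs; the structure and function names are illustrative, not from the patent.

```c
#include <stddef.h>

#define RING_SIZE 8  /* power of two, so index wrap is a cheap mask */

struct spsc_ring {
    void *slots[RING_SIZE];  /* each slot holds a data-buffer reference or NULL */
    unsigned head;           /* next slot the producer fills */
    unsigned tail;           /* next slot the consumer drains */
};

/* Producer side: insert a buffer reference into the next element. */
static int ring_enqueue(struct spsc_ring *r, void *buf)
{
    unsigned i = r->head & (RING_SIZE - 1);
    if (r->slots[i] != NULL)
        return -1;            /* ring full: slot not yet drained */
    r->slots[i] = buf;
    r->head++;
    return 0;
}

/* Consumer side: dequeue the reference and reset the element to NULL,
 * matching the empty-value convention in the abstract. */
static void *ring_dequeue(struct spsc_ring *r)
{
    unsigned i = r->tail & (RING_SIZE - 1);
    void *buf = r->slots[i];
    if (buf == NULL)
        return NULL;          /* ring empty */
    r->slots[i] = NULL;
    r->tail++;
    return buf;
}
```

Because the slot contents themselves signal full/empty, the producer and consumer never touch each other's index, which is what makes the single-producer/single-consumer case cheap.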
-
Publication No.: US20210240252A1
Publication Date: 2021-08-05
Application No.: US17234681
Filing Date: 2021-04-19
Applicant: Intel Corporation
Inventor: Ren Wang , Christian Maciocco , Sanjay Bakshi , Tsung-Yuan Charles Tai
IPC: G06F1/3287 , G06F1/329 , G06F1/3203 , G06F9/4401
Abstract: The present invention relates to platform power management.
-
Publication No.: US20210180965A1
Publication Date: 2021-06-17
Application No.: US16941163
Filing Date: 2020-07-28
Applicant: Intel Corporation
Inventor: Ren Wang , Zhonghong Ou , Arvind Kumar , Kristoffer Fleming , Tsung-Yuan C. Tai , Timothy J. Gresham , John C. Weast , Corey Kukis
IPC: G01C21/34 , H04W4/024 , H04W4/029 , H04B17/318 , H04W24/08
Abstract: Technologies for providing information to a user while traveling include a mobile computing device to determine network condition information associated with a route segment. The route segment may be one of a number of route segments defining at least one route from a starting location to a destination. The mobile computing device may determine a route from the starting location to the destination based on the network condition information. The mobile computing device may upload the network condition information to a crowdsourcing server. A mobile computing device may predict a future location of the device based on device context, determine a safety level for the predicted location, and notify the user if the safety level is below a threshold safety level. The device context may include location, time of day, and other data. The safety level may be determined based on predefined crime data. Other embodiments are described and claimed.
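Choosing a route based on per-segment network condition can be reduced to scoring candidate routes and picking the best. A minimal sketch, assuming a 0–100 signal-quality score per segment (e.g. from the crowdsourcing server) and a distance-weighted average as the route metric; both are illustrative choices, not the patent's method.

```c
/* Distance-weighted average signal quality along a route. */

struct segment {
    double distance_km;
    int signal_quality;   /* 0 (no coverage) .. 100 (excellent), assumed scale */
};

static double route_quality(const struct segment *segs, int n)
{
    double total_km = 0.0, weighted = 0.0;
    for (int i = 0; i < n; i++) {
        total_km += segs[i].distance_km;
        weighted += segs[i].distance_km * segs[i].signal_quality;
    }
    return total_km > 0.0 ? weighted / total_km : 0.0;
}
```

With such a metric, the device compares candidate routes from the starting location to the destination and selects the one with the highest score, possibly traded off against travel time.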
-
Publication No.: US10817425B2
Publication Date: 2020-10-27
Application No.: US14583389
Filing Date: 2014-12-26
Applicant: Intel Corporation
Inventor: Ren Wang , Andrew J. Herdrich , Yen-cheng Liu , Herbert H. Hum , Jong Soo Park , Christopher J. Hughes , Namakkal N. Venkatesan , Adrian C. Moga , Aamer Jaleel , Zeshan A. Chishti , Mesut A. Ergin , Jr-shian Tsai , Alexander W. Min , Tsung-yuan C. Tai , Christian Maciocco , Rajesh Sankaran
IPC: G06F12/0842 , G06F12/0831 , G06F12/0893 , G06F12/109 , G06F12/0813 , G06F9/455
Abstract: Methods and apparatus implementing hardware/software co-optimization to improve performance and energy for inter-VM communication for NFVs and other producer-consumer workloads. The apparatus includes multi-core processors with multi-level cache hierarchies including L1 and L2 caches for each core and a shared last-level cache (LLC). One or more machine-level instructions are provided for proactively demoting cachelines from lower cache levels to higher cache levels, including demoting cachelines from L1/L2 caches to an LLC. Techniques are also provided for implementing hardware/software co-optimization in multi-socket NUMA architecture systems, wherein cachelines may be selectively demoted and pushed to an LLC in a remote socket. In addition, techniques are disclosed for implementing early snooping in multi-socket systems to reduce latency when accessing cachelines on remote sockets.
-
Publication No.: US10606755B2
Publication Date: 2020-03-31
Application No.: US15640060
Filing Date: 2017-06-30
Applicant: Intel Corporation
Inventor: Anil Vasudevan , Venkata Krishnan , Andrew J. Herdrich , Ren Wang , Robert G. Blankenship , Vedaraman Geetha , Shrikant M. Shah , Marshall A. Millier , Raanan Sade , Binh Q. Pham , Olivier Serres , Chyi-Chang Miao , Christopher B. Wilkerson
IPC: G06F12/0868 , G06F12/0897 , G06F3/06 , G06F12/0811 , G06F12/0871
Abstract: Method and system for performing data movement operations is described herein. One embodiment of a method includes: storing data for a first memory address in a cache line of a memory of a first processing unit, the cache line associated with a coherency state indicating that the memory has sole ownership of the cache line; decoding an instruction for execution by a second processing unit, the instruction comprising a source data operand specifying the first memory address and a destination operand specifying a memory location in the second processing unit; and responsive to executing the decoded instruction, copying data from the cache line of the memory of the first processing unit as identified by the first memory address, to the memory location of the second processing unit, wherein responsive to the copy, the cache line is to remain in the memory and the coherency state is to remain unchanged.
-
Publication No.: US10482017B2
Publication Date: 2019-11-19
Application No.: US15721223
Filing Date: 2017-09-29
Applicant: Intel Corporation
Inventor: Karl I. Taht , Christopher B. Wilkerson , Ren Wang , James J. Greensky
IPC: G06F12/08 , G06F12/0831 , G06F12/0846 , G06F12/128
Abstract: A processor, method, and system for tracking partition-specific statistics across cache partitions that apply different cache management policies are described herein. One embodiment of a processor includes: a cache; a cache controller circuitry to partition the cache into a plurality of cache partitions based on one or more control addresses; a cache policy assignment circuitry to apply different cache policies to different subsets of the plurality of cache partitions; and a cache performance monitoring circuitry to track cache events separately for each of the cache partitions and to provide partition-specific statistics to allow comparison between the plurality of cache partitions as a result of applying the different cache policies in a same time period.
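The per-partition monitoring idea can be illustrated in software: keep a separate hit/miss counter pair for each partition so that two partitions running different policies over the same interval can be compared directly. Field and function names here are assumptions for illustration; the patent describes dedicated hardware circuitry, not software counters.

```c
/* Illustrative per-partition cache event counters. */

#define MAX_PARTITIONS 4

struct partition_stats {
    unsigned long hits;
    unsigned long misses;
};

static struct partition_stats stats[MAX_PARTITIONS];

/* Record one cache access attributed to a given partition. */
static void record_access(int partition, int hit)
{
    if (hit)
        stats[partition].hits++;
    else
        stats[partition].misses++;
}

/* Integer hit rate in percent, the kind of partition-specific statistic
 * used to compare policies over the same time period. */
static unsigned hit_rate_pct(int partition)
{
    unsigned long total = stats[partition].hits + stats[partition].misses;
    return total ? (unsigned)(stats[partition].hits * 100 / total) : 0;
}
```

Because every event is attributed to exactly one partition, differences in hit rate between partitions can be credited to the differing policies rather than to workload drift.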
-
Publication No.: US10339023B2
Publication Date: 2019-07-02
Application No.: US14496216
Filing Date: 2014-09-25
Applicant: Intel Corporation
Inventor: Ren Wang , Tsung-Yuan C. Tai , Paul S. Diefenbaugh , Andrew J. Herdrich
IPC: G06F12/084 , G06F11/30 , G06F11/34 , G06F12/0895 , G06F1/3206 , G06F1/3234 , G06F1/3287
Abstract: In one embodiment, a processor includes: a plurality of cores each to independently execute instructions; a shared cache memory coupled to the plurality of cores and having a plurality of clusters each associated with one or more of the plurality of cores; a plurality of cache activity monitors each associated with one of the plurality of clusters, where each cache activity monitor is to monitor one or more performance metrics of the corresponding cluster and to output cache metric information; a plurality of thermal sensors each associated with one of the plurality of clusters and to output thermal information; and a logic coupled to the plurality of cores to receive the cache metric information from the plurality of cache activity monitors and the thermal information and to schedule one or more threads to a selected core based at least in part on the cache metric information and the thermal information for the cluster associated with the selected core. Other embodiments are described and claimed.