Abstract:
Systems and methods for managing a shared cache of a multi-core processor. An example processing system comprises: a plurality of processing cores, each processing core communicatively coupled to a last level cache (LLC) slice; and a cache control logic coupled to the plurality of processing cores, the cache control logic configured to perform one of: making an LLC slice of an inactive processing core available to an active processing core or power gating the LLC slice, based on estimated cache requirements of the active processing cores.
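The slice-management decision above can be illustrated with a minimal sketch; the names below (Core, estimated_demand_kb, manage_idle_slice) are hypothetical and stand in for whatever the cache control logic actually tracks.

```python
# Hypothetical sketch only: Core, estimated_demand_kb and manage_idle_slice are
# illustrative names, not terms from the abstract.
from dataclasses import dataclass

@dataclass
class Core:
    core_id: int
    active: bool
    estimated_demand_kb: int  # estimated cache requirement of threads on this core
    llc_slice_kb: int         # capacity of the LLC slice local to this core

def manage_idle_slice(inactive: Core, active_cores: list[Core]) -> str:
    """Decide what to do with the LLC slice of an inactive core."""
    demand = sum(c.estimated_demand_kb for c in active_cores if c.active)
    capacity = sum(c.llc_slice_kb for c in active_cores if c.active)
    if demand > capacity:
        # Active cores need more cache than their own slices provide:
        # make the idle core's slice available to them.
        return f"map slice of core {inactive.core_id} to active cores"
    # Otherwise keep the unused slice dark to save leakage power.
    return f"power-gate slice of core {inactive.core_id}"

print(manage_idle_slice(Core(3, False, 0, 2048),
                        [Core(0, True, 3072, 2048), Core(1, True, 2048, 2048)]))
```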
Abstract:
Technologies for identifying a cache line of a network packet for eviction from an on-processor cache of a network device communicatively coupled to a network controller. The network device is configured to determine whether a cache line of the cache corresponding to the network packet is to be evicted from the cache based on a determination that the network packet is not needed subsequent to processing the network packet, and provide an indication that the cache line is to be evicted from the cache based on an eviction policy received from the network controller.
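A rough sketch of the eviction-hint decision follows, assuming a hypothetical EvictionPolicy received from the network controller and a flag indicating whether the packet is needed after processing.

```python
# Illustrative sketch: EvictionPolicy and the "needed after processing" flag are
# assumptions, not the patent's terminology.
from enum import Enum

class EvictionPolicy(Enum):
    EVICT_WHEN_DONE = 1  # policy received from the network controller
    KEEP = 2

def eviction_hint(needed_after_processing: bool, policy: EvictionPolicy) -> bool:
    """Return True if the cache line holding the packet should be marked for eviction."""
    return policy is EvictionPolicy.EVICT_WHEN_DONE and not needed_after_processing

# A packet that will not be touched again after processing gets its line evicted.
print(eviction_hint(needed_after_processing=False,
                    policy=EvictionPolicy.EVICT_WHEN_DONE))  # True
```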
Abstract:
Methods and systems may provide for determining whether a runtime disablement condition is met with respect to a sleep state and disabling the sleep state if the runtime disablement condition is met. Additionally, the sleep state may be enabled if a runtime reinstatement condition is met. In one example, determining whether the runtime disablement condition is met includes determining a false entry rate for the sleep state, and comparing the false entry rate to an energy-based threshold, wherein the sleep state is disabled if the false entry rate exceeds the energy-based threshold.
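The false-entry-rate check lends itself to a short sketch; the threshold value and function names are placeholders, since the abstract only characterizes the threshold as energy-based.

```python
# Rough sketch of the runtime disablement check; the threshold value is a
# placeholder, since the abstract only says it is energy-based.

def false_entry_rate(false_entries: int, total_entries: int) -> float:
    """Fraction of sleep-state entries that were woken before the energy break-even point."""
    return false_entries / total_entries if total_entries else 0.0

def should_disable_sleep_state(false_entries: int, total_entries: int,
                               energy_based_threshold: float) -> bool:
    return false_entry_rate(false_entries, total_entries) > energy_based_threshold

# Example: 40 of the last 100 entries were "false" against a threshold of 0.25,
# so the sleep state is disabled until a reinstatement condition is met.
print(should_disable_sleep_state(40, 100, 0.25))  # True
```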
Abstract:
Methods and systems may provide for determining a status of a mobile platform, wherein the status indicates whether the mobile platform is stationary, and adapting a detection schedule of one or more location sensors on the mobile platform based at least in part on whether the mobile platform is stationary. Additionally, one or more location updates may be generated based at least in part on information from the one or more location sensors. In one example, a location request is received, wherein the detection schedule is adapted further based on quality of service (QoS) information associated with the location request, and wherein the one or more location updates are generated in response to the location request.
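A minimal sketch of how the detection schedule might be adapted, with illustrative polling intervals and a hypothetical QoS staleness bound standing in for the QoS information mentioned above:

```python
# Hypothetical scheduling sketch; the interval values and the QoS staleness bound
# are assumptions.
from typing import Optional

def detection_interval_s(stationary: bool,
                         qos_max_staleness_s: Optional[float] = None) -> float:
    """Pick how often the location sensors are polled."""
    base = 300.0 if stationary else 10.0  # poll rarely while the platform is stationary
    if qos_max_staleness_s is not None:
        # A pending location request with tight QoS shortens the interval.
        base = min(base, qos_max_staleness_s)
    return base

print(detection_interval_s(stationary=True))                            # 300.0
print(detection_interval_s(stationary=True, qos_max_staleness_s=30.0))  # 30.0
```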
Abstract:
In one embodiment, a processor includes: a plurality of cores each to independently execute instructions; a shared cache memory coupled to the plurality of cores and having a plurality of clusters each associated with one or more of the plurality of cores; a plurality of cache activity monitors each associated with one of the plurality of clusters, where each cache activity monitor is to monitor one or more performance metrics of the corresponding cluster and to output cache metric information; a plurality of thermal sensors each associated with one of the plurality of clusters and to output thermal information; and a logic coupled to the plurality of cores to receive the cache metric information from the plurality of cache activity monitors and the thermal information and to schedule one or more threads to a selected core based at least in part on the cache metric information and the thermal information for the cluster associated with the selected core. Other embodiments are described and claimed.
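One way to picture the scheduling decision is the sketch below; the scoring heuristic and the field names (cache_miss_rate, temperature_c) are assumptions, not the embodiment's actual logic.

```python
# Simplified selection logic; the scoring and field names are illustrative, not
# the embodiment's actual heuristic.
from dataclasses import dataclass

@dataclass
class Cluster:
    cores: list[int]
    cache_miss_rate: float  # reported by the cluster's cache activity monitor
    temperature_c: float    # reported by the cluster's thermal sensor

def pick_core(clusters: list[Cluster], thermal_limit_c: float = 85.0) -> int:
    """Schedule the thread onto a core in the coolest, least cache-contended cluster."""
    eligible = [c for c in clusters if c.temperature_c < thermal_limit_c] or clusters
    best = min(eligible, key=lambda c: (c.cache_miss_rate, c.temperature_c))
    return best.cores[0]

print(pick_core([Cluster([0, 1], 0.30, 70.0), Cluster([2, 3], 0.10, 60.0)]))  # 2
```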
Abstract:
A network interface device (NID) may determine whether the received data units of the computer system are to be compressed before transmitting the data units. The NID may determine the compression energy value consumed to compress the first K1 data units and a second transmission energy value to transmit the compressed first K1 data units. The NID may then estimate a first transmission energy value that may be consumed by the NID to transmit the uncompressed first K1 data units, using the second transmission energy value. The NID may then use the first and second transmission energy values and the compression energy value to determine whether the remaining (N−K1) data units of the first data stream are to be compressed.
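The energy comparison can be sketched as follows; scaling the measured (compressed) transmission energy by the compression ratio to estimate the uncompressed transmission energy is an assumption about the estimation step.

```python
# Sketch of the energy comparison; deriving the first (uncompressed) transmission
# energy value by scaling the measured compressed transmission energy by the
# compression ratio is an assumption.

def should_compress_remaining(compression_energy_j: float,
                              tx_energy_compressed_j: float,
                              bytes_uncompressed: int,
                              bytes_compressed: int) -> bool:
    # Estimated energy to transmit the first K1 data units uncompressed.
    tx_energy_uncompressed_j = tx_energy_compressed_j * (bytes_uncompressed / bytes_compressed)
    # Compress the remaining (N - K1) units only if compressing plus sending the
    # smaller payload costs less energy than sending the raw payload.
    return compression_energy_j + tx_energy_compressed_j < tx_energy_uncompressed_j

print(should_compress_remaining(0.2, 0.5, 1_000_000, 400_000))  # True
```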
Abstract:
Method and system for performing data movement operations is described herein. One embodiment of a method includes: storing data for a first memory address in a cache line of a memory of a first processing unit, the cache line associated with a coherency state indicating that the memory has sole ownership of the cache line; decoding an instruction for execution by a second processing unit, the instruction comprising a source data operand specifying the first memory address and a destination operand specifying a memory location in the second processing unit; and responsive to executing the decoded instruction, copying data from the cache line of the memory of the first processing unit as identified by the first memory address, to the memory location of the second processing unit, wherein responsive to the copy, the cache line is to remain in the memory and the coherency state is to remain unchanged.
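A toy model of the described copy semantics is sketched below, using a hypothetical CacheLine structure and an "E" (exclusive) state label to show that the source line and its coherency state are left untouched.

```python
# Toy model of the copy semantics; CacheLine and the "E" state label are
# illustrative, not an actual coherency protocol encoding.
from dataclasses import dataclass

@dataclass
class CacheLine:
    addr: int
    data: bytes
    state: str  # e.g. "E": this memory has sole ownership of the line

def remote_copy(src_cache: dict[int, CacheLine], src_addr: int,
                dst_memory: dict[int, bytes], dst_loc: int) -> None:
    """Copy the line's data to the second processing unit without disturbing the source."""
    line = src_cache[src_addr]
    dst_memory[dst_loc] = line.data  # data lands at the destination memory location
    # The line stays in the source memory and its coherency state is unchanged.

cache = {0x1000: CacheLine(0x1000, b"payload", "E")}
dst: dict[int, bytes] = {}
remote_copy(cache, 0x1000, dst, 0x2000)
print(dst[0x2000], cache[0x1000].state)  # b'payload' E
```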
Abstract:
Examples disclosed herein include a mobile computing device to determine network condition information associated with a route segment. The route segment may be one of a number of route segments defining at least one route from a starting location to a destination. The mobile computing device may determine a route from the starting location to the destination based on the network condition information. The mobile computing device may upload the network condition information to a crowdsourcing server. A mobile computing device may predict a future location of the device based on device context, determine a safety level for the predicted location, and notify the user if the safety level is below a threshold safety level. The device context may include location, time of day, and other data. The safety level may be determined based on predefined crime data.
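The route-selection step can be sketched briefly; the per-segment signal scores and the worst-segment aggregation are illustrative assumptions about the crowdsourced network condition information.

```python
# Illustrative route-selection sketch; per-segment signal scores and the
# worst-segment aggregation are assumptions about the crowdsourced data.

def best_route(routes: dict[str, list[float]]) -> str:
    """Pick the route whose segments have the best aggregate network condition."""
    def route_score(segment_scores: list[float]) -> float:
        return min(segment_scores)  # a route is only as good as its worst segment
    return max(routes, key=lambda name: route_score(routes[name]))

print(best_route({"highway": [0.9, 0.4, 0.8], "surface": [0.7, 0.7, 0.6]}))  # surface
```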