TECHNOLOGIES FOR DYNAMIC BATCH SIZE MANAGEMENT

    Publication No.: US20230030060A1

    Publication Date: 2023-02-02

    Application No.: US17838872

    Filing Date: 2022-06-13

    Abstract: Technologies for dynamically managing a batch size of packets include a network device. The network device is to receive, into a queue, packets from a remote node to be processed by the network device, determine a throughput provided by the network device while the packets are processed, determine whether the determined throughput satisfies a predefined condition, and adjust a batch size of packets in response to a determination that the determined throughput satisfies the predefined condition. The batch size is indicative of a threshold number of queued packets required to be present in the queue before the queued packets can be processed by the network device.
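
    The abstract describes adjusting the batch threshold whenever measured throughput crosses a predefined condition. The C sketch below shows one way such a policy could look; the structure fields, throughput target, and doubling/halving step are illustrative assumptions rather than the claimed implementation.

        /* Minimal sketch of throughput-driven batch sizing (hypothetical names). */
        #include <stddef.h>
        #include <stdio.h>

        struct batch_ctl {
            size_t batch_size;   /* threshold of queued packets before processing starts */
            size_t min_batch;
            size_t max_batch;
            double target_mpps;  /* the predefined throughput condition to check against */
        };

        /* Grow the batch to amortize per-batch overhead when throughput is below
         * target; shrink it to reduce latency when throughput is above target. */
        static void adjust_batch_size(struct batch_ctl *c, double measured_mpps)
        {
            if (measured_mpps < c->target_mpps && c->batch_size < c->max_batch)
                c->batch_size *= 2;
            else if (measured_mpps > c->target_mpps && c->batch_size > c->min_batch)
                c->batch_size /= 2;
        }

        /* Packets are only drained from the queue once the threshold is met. */
        static int batch_ready(const struct batch_ctl *c, size_t queued_packets)
        {
            return queued_packets >= c->batch_size;
        }

        int main(void)
        {
            struct batch_ctl c = { .batch_size = 32, .min_batch = 8,
                                   .max_batch = 256, .target_mpps = 10.0 };
            adjust_batch_size(&c, 7.5);                 /* below target: batch grows to 64 */
            printf("batch size now %zu, ready=%d\n",
                   c.batch_size, batch_ready(&c, 40));  /* 40 queued < 64: not ready */
            return 0;
        }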

    METHOD AND APPARATUS PROVIDING A TIERED ELASTIC CLOUD STORAGE TO INCREASE DATA RESILIENCY

    Publication No.: US20220138156A1

    Publication Date: 2022-05-05

    Application No.: US17521630

    Filing Date: 2021-11-08

    Abstract: Methods, apparatus, systems, and articles of manufacture providing a tiered elastic cloud storage to increase data resiliency are disclosed. Example instructions cause one or more processors to at least: generate a storage scheme for files based on a categorization of the files and resource capabilities of an edge-based device and a cloud-based device, the categorization including a first group of files to be stored locally at an end user computing device, a second group of files to be stored externally at the edge-based device, and a third group of files to be stored externally at the cloud-based device; in response to an acknowledgement from at least one of the edge-based device or the cloud-based device, generate a map corresponding to locations of the files; store the first group of files in local storage; and cause transmission of the second group of files to the edge-based device and the third group of files to the cloud-based device.
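
    As a rough illustration of the three-tier placement the abstract describes, the C sketch below categorizes files into local, edge, and cloud groups and prints the resulting location map. The access-frequency and size thresholds are invented for the example and are not taken from the disclosure.

        /* Sketch of a three-tier placement decision (illustrative thresholds). */
        #include <stdio.h>

        enum tier { TIER_LOCAL, TIER_EDGE, TIER_CLOUD };

        struct file_info {
            const char *name;
            double access_freq;   /* accesses per day (assumed categorization signal) */
            long   size_bytes;
        };

        /* Hot, small files stay on the end-user device; warm files go to the edge
         * device; everything else is sent to cloud storage. The returned tier is
         * what the location map would record for each file. */
        static enum tier place_file(const struct file_info *f,
                                    long local_budget, long edge_budget)
        {
            if (f->access_freq > 10.0 && f->size_bytes <= local_budget)
                return TIER_LOCAL;
            if (f->access_freq > 1.0 && f->size_bytes <= edge_budget)
                return TIER_EDGE;
            return TIER_CLOUD;
        }

        int main(void)
        {
            struct file_info files[] = {
                { "notes.txt",   50.0,      4096 },
                { "photos.zip",   2.0,  50000000 },
                { "archive.tar",  0.1, 900000000 },
            };
            const char *names[] = { "local", "edge", "cloud" };
            for (int i = 0; i < 3; i++)   /* print the resulting location map */
                printf("%s -> %s\n", files[i].name,
                       names[place_file(&files[i], 1 << 20, 1L << 30)]);
            return 0;
        }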

    Technologies for managing single-producer and single consumer rings

    Publication No.: US11283723B2

    Publication Date: 2022-03-22

    Application No.: US16144384

    Filing Date: 2018-09-27

    Abstract: Technologies for managing a single-producer and single-consumer ring include a producer of a compute node that is configured to allocate data buffers, produce work, and indicate that work has been produced. The compute node is configured to insert reference information for each of the allocated data buffers into respective elements of the ring and store the produced work into the data buffers. The compute node includes a consumer configured to request the produced work from the ring. The compute node is further configured to dequeue the reference information from each of the elements of the ring that correspond to the portion of data buffers in which the produced work has been stored, and set each of the elements of the ring for which the reference information has been dequeued to an empty (i.e., NULL) value. Other embodiments are described herein.
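
    The ring described here is a single-producer/single-consumer design in which an empty element is marked by a NULL reference, so each side only needs to touch its own index. A minimal C11 sketch of that idea follows; the names, ring size, and memory orderings are assumptions for illustration rather than the claimed implementation.

        /* SPSC ring where a NULL slot means "empty": the producer stores buffer
         * references, the consumer dequeues them and resets the slot to NULL. */
        #include <stdatomic.h>
        #include <stdio.h>

        #define RING_SIZE 8   /* power of two for cheap wrap-around */

        struct spsc_ring {
            _Atomic(void *) slots[RING_SIZE];
            unsigned prod;   /* only written by the producer */
            unsigned cons;   /* only written by the consumer */
        };

        /* Producer: store a buffer reference into the next slot if it is free. */
        static int ring_enqueue(struct spsc_ring *r, void *buf)
        {
            unsigned i = r->prod & (RING_SIZE - 1);
            if (atomic_load_explicit(&r->slots[i], memory_order_acquire) != NULL)
                return 0;                                   /* ring full */
            atomic_store_explicit(&r->slots[i], buf, memory_order_release);
            r->prod++;
            return 1;
        }

        /* Consumer: take the reference out and set the slot back to NULL (empty). */
        static void *ring_dequeue(struct spsc_ring *r)
        {
            unsigned i = r->cons & (RING_SIZE - 1);
            void *buf = atomic_load_explicit(&r->slots[i], memory_order_acquire);
            if (buf == NULL)
                return NULL;                                /* ring empty */
            atomic_store_explicit(&r->slots[i], NULL, memory_order_release);
            r->cons++;
            return buf;
        }

        int main(void)
        {
            static struct spsc_ring ring;                   /* zero-initialized: all NULL */
            int work = 42;
            ring_enqueue(&ring, &work);
            printf("dequeued %d\n", *(int *)ring_dequeue(&ring));
            return 0;
        }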

    TECHNOLOGIES FOR PROVIDING INFORMATION TO A USER WHILE TRAVELING

    Publication No.: US20210180965A1

    Publication Date: 2021-06-17

    Application No.: US16941163

    Filing Date: 2020-07-28

    Abstract: Technologies for providing information to a user while traveling include a mobile computing device to determine network condition information associated with a route segment. The route segment may be one of a number of route segments defining at least one route from a starting location to a destination. The mobile computing device may determine a route from the starting location to the destination based on the network condition information. The mobile computing device may upload the network condition information to a crowdsourcing server. A mobile computing device may predict a future location of the device based on device context, determine a safety level for the predicted location, and notify the user if the safety level is below a threshold safety level. The device context may include location, time of day, and other data. The safety level may be determined based on predefined crime data. Other embodiments are described and claimed.
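
    One element of the abstract is choosing a route from crowd-sourced network condition information about its segments. The C sketch below scores candidate routes by length-weighted signal quality and picks the better one; the scoring formula and the data are purely illustrative assumptions.

        /* Sketch of route selection from per-segment network condition data. */
        #include <stdio.h>

        struct segment {
            double length_km;
            double signal_quality;   /* 0.0 (no coverage) .. 1.0 (excellent) */
        };

        /* Score a route as its length-weighted average signal quality. */
        static double route_score(const struct segment *segs, int n)
        {
            double covered = 0.0, total = 0.0;
            for (int i = 0; i < n; i++) {
                covered += segs[i].length_km * segs[i].signal_quality;
                total   += segs[i].length_km;
            }
            return total > 0.0 ? covered / total : 0.0;
        }

        int main(void)
        {
            struct segment highway[] = { { 12.0, 0.90 }, { 8.0, 0.95 } };
            struct segment scenic[]  = { { 10.0, 0.40 }, { 9.0, 0.30 } };
            double a = route_score(highway, 2), b = route_score(scenic, 2);
            printf("chose %s route\n", a >= b ? "highway" : "scenic");
            return 0;
        }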

    Cache-aware adaptive thread scheduling and migration

    Publication No.: US10339023B2

    Publication Date: 2019-07-02

    Application No.: US14496216

    Filing Date: 2014-09-25

    Abstract: In one embodiment, a processor includes: a plurality of cores each to independently execute instructions; a shared cache memory coupled to the plurality of cores and having a plurality of clusters each associated with one or more of the plurality of cores; a plurality of cache activity monitors each associated with one of the plurality of clusters, where each cache activity monitor is to monitor one or more performance metrics of the corresponding cluster and to output cache metric information; a plurality of thermal sensors each associated with one of the plurality of clusters and to output thermal information; and a logic coupled to the plurality of cores to receive the cache metric information from the plurality of cache activity monitors and the thermal information and to schedule one or more threads to a selected core based at least in part on the cache metric information and the thermal information for the cluster associated with the selected core. Other embodiments are described and claimed.
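
    To make the scheduling decision concrete, the C sketch below combines per-cluster cache-miss and thermal readings into a single penalty and selects the best cluster for the next thread. The weighting is an assumption for illustration; the abstract does not specify how the cache metric information and thermal information are combined.

        /* Sketch of picking a cluster from cache activity and thermal telemetry. */
        #include <stdio.h>

        #define NUM_CLUSTERS 4

        struct cluster_state {
            double cache_miss_rate;   /* from the cluster's cache activity monitor */
            double temperature_c;     /* from the cluster's thermal sensor */
        };

        /* Lower penalty means a better home for the next thread. */
        static double penalty(const struct cluster_state *c)
        {
            return 0.7 * c->cache_miss_rate + 0.3 * (c->temperature_c / 100.0);
        }

        static int pick_cluster(const struct cluster_state *clusters, int n)
        {
            int best = 0;
            for (int i = 1; i < n; i++)
                if (penalty(&clusters[i]) < penalty(&clusters[best]))
                    best = i;
            return best;
        }

        int main(void)
        {
            struct cluster_state clusters[NUM_CLUSTERS] = {
                { 0.30, 85.0 }, { 0.10, 70.0 }, { 0.12, 95.0 }, { 0.25, 60.0 },
            };
            printf("schedule thread on a core of cluster %d\n",
                   pick_cluster(clusters, NUM_CLUSTERS));
            return 0;
        }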
