CALCULATING AND EXPOSING NETWORK CAPACITY AND CONGESTION TO APPLICATIONS

    Publication No.: US20230413076A1

    Publication Date: 2023-12-21

    Application No.: US17841445

    Filing Date: 2022-06-15

    Abstract: Described are examples for calculating and exposing network capacity and congestion to applications. A network entity such as a radio access network (RAN) intelligent controller (RIC) or virtual base station component receives measurements of a signal quality for a plurality of user devices connected to a RAN. The network entity estimates a deliverable throughput of a wireless link for a user device of the plurality of user devices based on at least the measurements. The network entity can consider other factors such as a number of competing users, queue sizes of the user device and of the competing users, or a scheduling policy. The network entity provides the deliverable throughput to an application server for an application of the user device communicating with the application server via the RAN. The application server can adapt a data rate for the application and the user device based on the deliverable throughput.
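
    To make the estimation step concrete, below is a minimal Python sketch of how a RIC-side component might turn per-user signal-quality reports into a deliverable-throughput estimate. The UserReport fields, the even airtime split among backlogged users, and the 20 MHz example bandwidth are illustrative assumptions, not the method specified in the patent.

```python
from dataclasses import dataclass

@dataclass
class UserReport:
    """Per-UE state reported to the RIC (fields are illustrative)."""
    user_id: str
    spectral_efficiency: float  # bits/s/Hz implied by reported signal quality
    queue_bytes: int            # bytes waiting in this UE's downlink queue

def estimate_deliverable_throughput(reports, bandwidth_hz, target_id):
    """Estimate one UE's deliverable throughput on a shared wireless link.

    Assumes a scheduler that splits airtime evenly among users with
    non-empty queues (a deliberate simplification of real MAC schedulers).
    """
    target = next(r for r in reports if r.user_id == target_id)
    # Only backlogged users compete for airtime in this simple model.
    competing = [r for r in reports if r.queue_bytes > 0 or r.user_id == target_id]
    airtime_share = 1.0 / len(competing)
    # Rate the target UE would see with the whole channel to itself.
    full_rate_bps = target.spectral_efficiency * bandwidth_hz
    return full_rate_bps * airtime_share

reports = [
    UserReport("ue-1", spectral_efficiency=4.5, queue_bytes=200_000),
    UserReport("ue-2", spectral_efficiency=1.2, queue_bytes=50_000),
    UserReport("ue-3", spectral_efficiency=2.8, queue_bytes=0),
]
# A 20 MHz carrier; an application server could adapt its bitrate to this.
print(estimate_deliverable_throughput(reports, 20e6, "ue-1"))
```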

    WIRELESS PARAMETER LIMITS FOR PREDICTED VRAN RESOURCE LOADS

    Publication No.: US20230388856A1

    Publication Date: 2023-11-30

    Application No.: US17825596

    Filing Date: 2022-05-26

    Abstract: A method for utilizing computing resources in a vRAN is described. A predicted resource load for data traffic processing of the wireless communication channels served by the vRAN is determined using a trained neural network model. The data traffic processing comprises at least one of PHY data processing or MAC processing for a 5G RAN. Computing resources are allocated for the data traffic processing based on the predicted resource load. Using the trained neural network model, wireless parameter limits that constrain utilization of the allocated computing resources are determined for the wireless communication channels, including setting one or more of a maximum number of radio resource units per timeslot or a maximum MCS index. The data traffic processing is performed using the wireless parameter limits to reduce load spikes that would cause a violation of real-time deadlines for the data traffic processing.
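
    As a rough illustration of the limit-setting step, the sketch below clamps the per-slot resource-block and MCS caps in proportion to the predicted compute headroom. The proportional rule and the load units are assumptions standing in for the trained neural network model; the 273-RB ceiling is the NR maximum for a 100 MHz carrier at 30 kHz subcarrier spacing.

```python
def wireless_parameter_limits(predicted_load, capacity, max_rbs=273, max_mcs=27):
    """Derive per-slot wireless parameter limits from a predicted resource load.

    predicted_load and capacity share one unit (e.g. CPU core-seconds per
    second). The proportional clamp is an illustrative stand-in for the
    model-driven limit selection described in the abstract.
    """
    headroom = 1.0 if predicted_load <= 0 else min(capacity / predicted_load, 1.0)
    # Scale both knobs so worst-case PHY/MAC processing fits the budget.
    return {
        "max_resource_blocks_per_slot": max(1, int(max_rbs * headroom)),
        "max_mcs_index": max(0, int(max_mcs * headroom)),
    }

# Predicted load 20% above allocated capacity: tighten both limits.
print(wireless_parameter_limits(predicted_load=1.2, capacity=1.0))
```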

    GUARANTYING SLA THRU EDGE CLOUD PATH ORCHESTRATION

    Publication No.: US20230018685A1

    Publication Date: 2023-01-19

    Application No.: US17376653

    Filing Date: 2021-07-15

    Abstract: The present application relates to communications between a partner network and a wide area network (WAN) via the Internet. Although Internet service providers may act as autonomous systems, the WAN can control routing from the partner network by advertising unicast border gateway protocol (BGP) address prefixes for a plurality of front-end devices in the WAN. An agent in the partner network measures a plurality of paths to a service within the WAN; each path is associated with one of the front-end devices and a respective unicast BGP address prefix. The WAN selects a path within the WAN for the service and exports a routing rule to the agent. The agent forwards data packets for the service to the respective BGP address prefix via the Internet, and the WAN receives the data packets for the service of the partner network at the selected front-end device.
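
    The agent's measurement loop could look roughly like the following sketch. The probe function, the RTT-plus-loss score, and the example prefixes are all hypothetical; in the scheme the abstract describes, the agent probes addresses inside each advertised prefix and the WAN, not the agent, makes the final selection and exports the routing rule.

```python
import random

def measure_path(prefix):
    """Hypothetical probe; a real agent would time probes sent toward an
    address inside this advertised prefix."""
    return {"rtt_ms": random.uniform(5, 80), "loss": random.uniform(0.0, 0.02)}

def select_front_end(prefixes):
    """Score each front-end path and return the prefix to route toward.

    The RTT-plus-loss-penalty score is an illustrative stand-in for
    whatever SLA metric the WAN's path orchestration optimizes.
    """
    measurements = {p: measure_path(p) for p in prefixes}
    best = min(measurements, key=lambda p: measurements[p]["rtt_ms"]
                                           + 1000.0 * measurements[p]["loss"])
    return best, measurements[best]

# Hypothetical unicast prefixes advertised for three WAN front-end devices.
front_ends = ["203.0.113.0/26", "203.0.113.64/26", "203.0.113.128/26"]
prefix, stats = select_front_end(front_ends)
print(f"forward service traffic toward {prefix}: {stats}")
```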

    CONTINUOUS LEARNING MODELS ACROSS EDGE HIERARCHIES

    Publication No.: US20220414534A1

    Publication Date: 2022-12-29

    Application No.: US17362115

    Filing Date: 2021-06-29

    IPC Classification: G06N20/00

    Abstract: Systems and methods are provided for continuous learning of models across hierarchies in a multi-access edge computing environment. In particular, an on-premises edge server, using a model, generates inference data associated with captured stream data. A data drift determiner detects data drift in the inference data by comparing it against reference data generated using a golden model; the data drift indicates a loss of accuracy in the inference data. A gateway server maintains one or more models in a model cache for updating the model. The gateway server instructs one or more servers to train the new model and transmits the trained model to update the model in the on-premises edge server. Training the new model includes determining an on-premises edge server with computing resources available to train the new model while generating other inference data for incoming stream data in the data analytics pipeline.
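
    A minimal sketch of the drift check follows, assuming label agreement with the golden model as the drift signal and an arbitrary 90% threshold; the data drift determiner in the abstract is not limited to this particular metric.

```python
def detect_data_drift(edge_labels, golden_labels, threshold=0.9):
    """Flag drift when the edge model's inferences diverge from a golden
    (reference) model's output on the same sampled frames.

    Label-agreement rate is one simple drift signal; the 0.9 threshold
    is an illustrative assumption.
    """
    assert len(edge_labels) == len(golden_labels)
    agree = sum(e == g for e, g in zip(edge_labels, golden_labels))
    accuracy = agree / len(edge_labels)
    return accuracy < threshold, accuracy

drift, acc = detect_data_drift(
    edge_labels=["car", "car", "truck", "car", "bus"],
    golden_labels=["car", "truck", "truck", "car", "truck"],
)
if drift:
    print(f"agreement {acc:.0%}: request a cached or retrained model "
          "from the gateway server")
```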

    ALLOCATING COMPUTING RESOURCES DURING CONTINUOUS RETRAINING

    Publication No.: US20220188569A1

    Publication Date: 2022-06-16

    Application No.: US17124172

    Filing Date: 2020-12-16

    Abstract: Examples are disclosed that relate to methods and computing devices for allocating computing resources and selecting hyperparameter configurations during continuous retraining and operation of a machine learning model. In one example, a computing device configured to be located at a network edge between a local network and a cloud service comprises a processor and a memory storing instructions executable by the processor to operate a machine learning model. During a retraining window, a portion of a video stream is selected for labeling. At least a portion of a labeled retraining data set is selected for profiling a superset of hyperparameter configurations. For each configuration of the superset of hyperparameter configurations, a profiling test is performed; the test is then terminated, and the change in inference accuracy that resulted from the profiling test is extrapolated. Based upon the extrapolated inference accuracies, a set of selected hyperparameter configurations is output.
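
    To illustrate the profile, terminate, and extrapolate loop, here is a hedged Python sketch: each hyperparameter configuration trains for only a few probe epochs, and a linear extrapolation of its accuracy trend stands in for whatever extrapolation the claimed method uses. toy_train_step and all constants are fabricated for the example.

```python
def profile_configs(configs, train_step, probe_epochs=3, horizon=20):
    """Cheaply rank hyperparameter configurations.

    Each configuration trains for only probe_epochs epochs; its accuracy
    trend is then extrapolated linearly out to the full horizon. Linear
    extrapolation is a simplification chosen for this sketch.
    """
    ranked = []
    for cfg in configs:
        history = [train_step(cfg, epoch) for epoch in range(probe_epochs)]
        slope = (history[-1] - history[0]) / max(probe_epochs - 1, 1)
        projected = min(history[-1] + slope * (horizon - probe_epochs), 1.0)
        ranked.append((projected, cfg))
    ranked.sort(key=lambda item: item[0], reverse=True)
    return ranked

# Toy trainer: larger learning rates improve accuracy faster in this fake.
def toy_train_step(cfg, epoch):
    return min(0.4 + cfg["lr"] * 4.0 * (epoch + 1), 0.95)

configs = [{"lr": 0.001}, {"lr": 0.005}, {"lr": 0.01}]
for projected, cfg in profile_configs(configs, toy_train_step):
    print(cfg, f"projected accuracy ~ {projected:.2f}")
```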

    DETERMINING REFERENCE SIGNAL TRANSMISSION TIMES

    Publication No.: US20230412335A1

    Publication Date: 2023-12-21

    Application No.: US17825766

    Filing Date: 2022-05-26

    Abstract: Aspects of the present disclosure relate to determining reference symbol transmission times. In some examples, a method for determining reference symbol transmission times for cellular communications includes receiving signal feedback based on a wireless communication channel between a wireless communication device and a base station, identifying a periodic exchange of reference symbols that are used to adjust beamforming between the wireless communication device and the base station, generating a vector based on the signal feedback, and providing the vector as an input to a trained machine learning model. Training of the trained machine learning model includes calculating a plurality of rewards for a respective plurality of transmission time delays. The rewards are each calculated based on a function of downlink throughput and uplink overhead, and that function is based upon a priority level of the wireless communication device.
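
    The reward calculation might be sketched as below. The linear throughput-minus-weighted-overhead form, the priority weights, and the candidate numbers are assumptions, since the abstract only states that the reward is a function of downlink throughput and uplink overhead that depends on device priority.

```python
def reward(downlink_throughput_mbps, uplink_overhead_mbps, priority):
    """Reward for one candidate reference-symbol transmission delay.

    The weighting scheme is an illustrative assumption about how a
    device's priority level could shape the reward function.
    """
    overhead_weight = {"high": 0.5, "normal": 1.0, "low": 2.0}[priority]
    return downlink_throughput_mbps - overhead_weight * uplink_overhead_mbps

def best_delay(candidates, priority):
    """Pick the transmission time delay with the highest reward."""
    return max(candidates, key=lambda c: reward(c["dl"], c["ul"], priority))

# Longer delays save uplink feedback overhead but let beamforming go stale.
candidates = [
    {"delay_ms": 5,  "dl": 320.0, "ul": 12.0},
    {"delay_ms": 20, "dl": 300.0, "ul": 4.0},
    {"delay_ms": 80, "dl": 240.0, "ul": 1.5},
]
print(best_delay(candidates, priority="normal"))
```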

    EFFICIENCY OF ROUTING TRAFFIC TO AN EDGE COMPUTE SERVER AT THE FAR EDGE OF A CELLULAR NETWORK

    Publication No.: US20230110752A1

    Publication Date: 2023-04-13

    Application No.: US17500441

    Filing Date: 2021-10-13

    IPC Classification: H04W28/10

    Abstract: A method for improving efficiency of routing edge compute traffic from a user equipment (UE) to an edge compute server at a far edge of a cellular network includes provisioning a near edge control unit (CU) and a near edge user plane function (UPF) at a near edge of the cellular network. The method also includes provisioning a far edge CU, a far edge UPF, and an edge compute workload at the far edge. The method also includes receiving UE traffic at one or more distributed units located at the far edge. The UE traffic includes the edge compute traffic and non-edge compute traffic. The method also includes identifying the edge compute traffic among the UE traffic, routing the edge compute traffic to the edge compute workload at the far edge, and routing the non-edge compute traffic to the near edge UPF at the near edge.
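
    The identify-and-route step at the distributed units can be pictured with a small classifier like the sketch below; matching on a destination subnet is only one possible rule (a deployment could also key on network slice or QoS flow identifiers), and the subnet and addresses are made up.

```python
import ipaddress

# Destinations served by the far-edge compute workload (illustrative subnet).
FAR_EDGE_SUBNET = ipaddress.ip_network("10.50.0.0/24")

def route_packet(dst_ip):
    """Classify a UE packet arriving at a far-edge distributed unit."""
    if ipaddress.ip_address(dst_ip) in FAR_EDGE_SUBNET:
        # Edge compute traffic stays local via the far edge UPF.
        return "far-edge UPF -> edge compute workload"
    # Everything else heads to the near edge UPF and onward to the core.
    return "near-edge UPF -> core network"

print(route_packet("10.50.0.7"))    # edge compute traffic
print(route_packet("142.250.4.1"))  # non-edge compute traffic
```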

    ALLOCATING COMPUTING RESOURCES DURING CONTINUOUS RETRAINING

    Publication No.: US20230030499A1

    Publication Date: 2023-02-02

    Application No.: US17948736

    Filing Date: 2022-09-20

    Abstract: Examples are disclosed that relate to methods and computing devices for allocating computing resources and selecting hyperparameter configurations during continuous retraining and operation of a machine learning model. In one example, a computing device configured to be located at a network edge between a local network and a cloud service comprises a processor and a memory storing instructions executable by the processor to operate a machine learning model. During a retraining window, a portion of a video stream is selected for labeling. At least a portion of a labeled retraining data set is selected for profiling a superset of hyperparameter configurations. For each configuration of the superset of hyperparameter configurations, a profiling test is performed; the test is then terminated, and the change in inference accuracy that resulted from the profiling test is extrapolated. Based upon the extrapolated inference accuracies, a set of selected hyperparameter configurations is output.
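
    This publication shares its abstract with US20220188569A1 above. Complementing the profiling sketch given there, the sketch below illustrates the resource-allocation half of the claim: splitting a fixed compute budget between live inference jobs and retraining jobs. The inference-first policy and the proportional split among retraining jobs are illustrative assumptions, not the claimed allocator.

```python
def allocate_resources(total_units, inference_jobs, retraining_jobs):
    """Split an edge server's compute units between inference and retraining.

    Inference jobs receive their stated demand first (so live video keeps
    flowing); retraining jobs share whatever remains, proportionally to
    their requests. Units are abstract (e.g. GPU fractions or cores).
    """
    allocation = {}
    remaining = total_units
    for job, demand in inference_jobs.items():
        allocation[job] = min(demand, max(remaining, 0.0))
        remaining -= allocation[job]
    requested = sum(retraining_jobs.values()) or 1.0
    for job, demand in retraining_jobs.items():
        allocation[job] = max(remaining, 0.0) * demand / requested
    return allocation

print(allocate_resources(
    total_units=8.0,
    inference_jobs={"camera-1-infer": 2.0, "camera-2-infer": 2.0},
    retraining_jobs={"camera-1-retrain": 3.0, "camera-2-retrain": 1.0},
))
```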