TRUSTED 5G NETWORK SLICES
    Invention Application

    Publication Number: US20220408262A1

    Publication Date: 2022-12-22

    Application Number: US17355033

    Filing Date: 2021-06-22

    Abstract: Slice control elements in a 5G slicing framework are instantiated in trusted hardware to provide for sealed data transmission in a trusted slice. In addition to sealing the data plane in the trusted slice, the control plane for the slice may be secured by the instantiation into the trusted hardware of layer 2 (medium access control, MAC) scheduling functions for radio resources (e.g., subcarriers and time slots). Layer 1 (physical, PHY) may also be configured to further enhance security of the trusted slice by isolating its PHY layer from that of other trusted and non-trusted slices. Such isolation may be implemented, for example, by using dedicated PHY resources, or by limiting resource time sharing to provide temporal isolation.
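
The MAC-level scheduling and PHY isolation described in the abstract can be illustrated with a toy scheduler. This is a minimal sketch, not the patented design: the `SliceScheduler` class, the (subcarrier, time_slot) resource grid, and the `dedicated` flag are all assumptions introduced for illustration.

```python
# Toy layer-2 scheduler: a trusted slice receives dedicated radio resources
# that are never returned to the shared pool (temporal isolation).

class SliceScheduler:
    def __init__(self, subcarriers, time_slots):
        self.free = {(sc, ts) for sc in range(subcarriers)
                     for ts in range(time_slots)}
        self.allocations = {}   # slice_id -> set of (subcarrier, time_slot)
        self.dedicated = set()  # resources exempt from time sharing

    def allocate(self, slice_id, count, dedicated=False):
        """Grant `count` resources to a slice; dedicated grants isolate a
        trusted slice's PHY resources from all other slices."""
        if count > len(self.free):
            raise RuntimeError("insufficient radio resources")
        grant = set(sorted(self.free)[:count])
        self.free -= grant
        self.allocations.setdefault(slice_id, set()).update(grant)
        if dedicated:
            self.dedicated |= grant
        return grant

    def reclaim_for_sharing(self, resources):
        """Return resources to the shared pool; a trusted slice's dedicated
        resources are exempt, preserving its isolation."""
        shareable = set(resources) - self.dedicated
        self.free |= shareable
        return shareable

sched = SliceScheduler(subcarriers=12, time_slots=10)
trusted = sched.allocate("trusted-slice", 20, dedicated=True)
best_effort = sched.allocate("best-effort", 30)
assert trusted.isdisjoint(best_effort)              # no shared PHY resources
assert sched.reclaim_for_sharing(trusted) == set()  # cannot be reclaimed
```

The key design point mirrored here is that isolation is enforced at allocation time: once marked dedicated, a resource can never flow back into the pool other slices draw from.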

    SECURITY FOR 5G NETWORK SLICING
    Invention Application

    Publication Number: US20220407890A1

    Publication Date: 2022-12-22

    Application Number: US17355056

    Filing Date: 2021-06-22

    Abstract: Slices of a 5G network may be configured to implement a trust model by which network customers are provided with assurances that slice properties meet agreed-upon criteria specified by customer policy so that slices can be trusted. Illustrative slice properties may pertain to service types, geographic area of operations, and attributes associated with software, firmware, and hardware used in the infrastructure of nodes in a trusted slice. Particular values of the properties describe a slice configuration that may be measured, digested, and attested to the customer to provide assurances that the configuration conforms with the policy. The 5G slice trust model may be implemented as a two-way model in which a slice provider performs checks to verify slice properties while customers ensure that only authenticated and authorized user equipment (UE) will access a trusted slice.
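
The measure-digest-attest flow described above can be sketched as follows. This is a hedged illustration, not the patent's implementation: it assumes JSON-serializable slice properties and a SHA-256 digest, and the function names `measure` and `attest` are hypothetical.

```python
import hashlib
import json

def measure(slice_properties):
    """Produce a reproducible digest of a slice configuration by hashing a
    canonical (sorted-key) serialization of its properties."""
    canonical = json.dumps(slice_properties, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def attest(slice_properties, policy):
    """Provider-side check that measured slice properties satisfy the
    customer policy; returns (conforms, digest) for attestation."""
    conforms = all(slice_properties.get(k) == v for k, v in policy.items())
    return conforms, measure(slice_properties)

props = {"service_type": "URLLC", "region": "EU", "fw_version": "1.4.2"}
policy = {"service_type": "URLLC", "region": "EU"}
ok, digest = attest(props, policy)
assert ok and len(digest) == 64  # SHA-256 hex digest
```

The other half of the two-way model, verifying that only authenticated and authorized UE attaches to the trusted slice, would run on the access side and is omitted here.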

    ORCHESTRATING EDGE SERVICE WORKLOADS ACROSS EDGE HIERARCHIES

    Publication Number: US20220400085A1

    Publication Date: 2022-12-15

    Application Number: US17348701

    Filing Date: 2021-06-15

    Abstract: Computing resources are managed in a computing environment comprising a computing service provider and an edge computing network. The edge computing network comprises computing and storage devices configured to extend computing resources of the computing service provider to remote users of the computing service provider. The edge computing network collects capacity and usage data for computing and network resources at the edge computing network. The capacity and usage data is sent to the computing service provider. Based on the capacity and usage data, the computing service provider, using a cost function, determines a distribution of workloads pertaining to a processing pipeline that has been partitioned into the workloads. The workloads can be executed at the computing service provider or the edge computing network.
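
A cost-function-driven distribution of pipeline workloads of the kind described can be sketched as a greedy placement. The toy cost model (cloud execution pays a 1.5x backhaul premium) and all names below are illustrative assumptions, not the patent's method.

```python
def place_workloads(workloads, edge_capacity, cost_fn):
    """Greedy placement over a partitioned pipeline: run a stage at the
    edge while capacity remains and the edge cost is no higher;
    otherwise fall back to the computing service provider."""
    placement = {}
    remaining = edge_capacity
    # Place the largest stages first so they get edge capacity if it pays.
    for name, demand in sorted(workloads.items(), key=lambda kv: -kv[1]):
        if demand <= remaining and cost_fn(demand, "edge") <= cost_fn(demand, "cloud"):
            placement[name] = "edge"
            remaining -= demand
        else:
            placement[name] = "cloud"
    return placement

# Toy cost function: cloud adds a data-transfer premium over the edge.
cost = lambda demand, site: demand * (1.0 if site == "edge" else 1.5)
plan = place_workloads({"decode": 4, "detect": 8, "render": 6},
                       edge_capacity=10, cost_fn=cost)
assert plan == {"detect": "edge", "render": "cloud", "decode": "cloud"}
```

In the described system the capacity and usage data collected at the edge would feed both `edge_capacity` and the cost function itself.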

    METHODS FOR OFFLOADING A TASK FROM A PROCESSOR TO HETEROGENEOUS ACCELERATORS

    Publication Number: US20220374262A1

    Publication Date: 2022-11-24

    Application Number: US17324039

    Filing Date: 2021-05-18

    Abstract: Systems and methods are provided for offloading a task from a central processor in a radio access network (RAN) server to one or more heterogeneous accelerators. For example, a task associated with one or more operational partitions (or a service application) associated with processing data traffic in the RAN is dynamically allocated for offloading from the central processor based on workload status information. One or more accelerators are dynamically allocated for executing the task, where the accelerators may be heterogeneous and may not comprise pre-programming for executing the task. The disclosed technology further enables generating specific application programs for execution on the respective heterogeneous accelerators based on a single set of program instructions. The methods automatically generate the specific application programs by identifying common functional blocks for processing data traffic and mapping the functional blocks to the single set of program instructions to generate code native to the respective accelerators.
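
The mapping of common functional blocks to accelerator-native code can be sketched with a per-backend lookup table. The backend names and kernel identifier strings below are invented for illustration; they do not correspond to real accelerator APIs or to the patent's code generator.

```python
# One set of program instructions, "lowered" per accelerator type.
BACKENDS = {
    "gpu":  {"fft": "gpu_fft_kernel",  "filter": "gpu_fir_kernel"},
    "fpga": {"fft": "fpga_fft_core",   "filter": "fpga_fir_core"},
}

def lower(program, accelerator):
    """Map each common functional block in the single program to the
    named accelerator's native kernel, yielding accelerator-specific code."""
    table = BACKENDS[accelerator]
    unknown = [block for block in program if block not in table]
    if unknown:
        raise ValueError(f"no native mapping for blocks: {unknown}")
    return [table[block] for block in program]

program = ["fft", "filter"]  # single set of program instructions
assert lower(program, "gpu") == ["gpu_fft_kernel", "gpu_fir_kernel"]
assert lower(program, "fpga") == ["fpga_fft_core", "fpga_fir_core"]
```

The point the sketch captures is that the heterogeneous targets need no pre-programming for the task: each program variant is generated on demand from the one shared instruction set.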

    GENERATING AND IMPLEMENTING CONTEXT PROFILES IN PROCESSING QUERIES USING FOUNDATION MODELS

    Publication Number: US20240419698A1

    Publication Date: 2024-12-19

    Application Number: US18335787

    Filing Date: 2023-06-15

    Abstract: A context analysis system receives a query from a user. The system generates one or more context profiles and generates a prompt for a foundation model for each profile. The system analyzes each context profile, generates a relevancy score, and selects one of the profiles based on that score. In some examples, the context analysis system iteratively determines predicted latencies and relevancies of processing a query in conjunction with a generated context and, based on the predicted latencies and/or relevancies, processes the query using a foundation model, such as a large language model (LLM).
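
Selecting a context profile by trading relevancy against predicted latency can be sketched as below. The profile fields, the latency budget, and the fall-back rule are assumptions made for illustration, not details from the application.

```python
def select_profile(profiles, latency_budget):
    """Pick the highest-relevancy context profile whose predicted latency
    fits the budget; if none fits, fall back to the fastest profile."""
    feasible = [p for p in profiles if p["latency"] <= latency_budget]
    if feasible:
        return max(feasible, key=lambda p: p["relevancy"])
    return min(profiles, key=lambda p: p["latency"])

profiles = [
    {"id": "recent-chats", "relevancy": 0.9, "latency": 450},  # ms, predicted
    {"id": "user-docs",    "relevancy": 0.7, "latency": 120},
    {"id": "no-context",   "relevancy": 0.3, "latency": 20},
]
best = select_profile(profiles, latency_budget=200)
assert best["id"] == "user-docs"
```

The selected profile would then be turned into the prompt that is actually sent to the foundation model.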

    INTEGRATING MODEL REUSE WITH MODEL RETRAINING FOR VIDEO ANALYTICS

    Publication Number: US20240096063A1

    Publication Date: 2024-03-21

    Application Number: US18078402

    Filing Date: 2022-12-09

    CPC classification number: G06V10/7715 G06V2201/10

    Abstract: Systems and methods are provided for reusing and retraining an image recognition model for video analytics. The image recognition model is used for inferring frames of video data captured at edge devices. The edge devices periodically, or under predetermined conditions, transmit a captured frame of video data for inferencing. The disclosed technology selects an image recognition model from a model store for reuse or for retraining. A model selector uses a gating network model to determine ranked candidate models for validation. Validation includes iterations of retraining the image recognition model, stopping when an iteration's accuracy improvement becomes smaller than that of the previous iteration. Retraining a model includes generating reference data using a teacher model and retraining the model on that reference data. Integrating reuse and retraining of models improves both accuracy and efficiency.
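
The early-stopping rule, halting retraining once a step's accuracy gain drops below the previous step's gain, can be sketched as a simple scan over per-iteration validation accuracies. The function name, tolerance, and accuracy values are illustrative assumptions.

```python
def retrain_until_plateau(step_accuracies, tol=1e-9):
    """Scan per-iteration validation accuracies; stop once a step's gain
    is smaller than the previous step's gain (diminishing returns).
    Returns (stop_iteration, accuracy_at_stop)."""
    prev_gain = None
    for i in range(1, len(step_accuracies)):
        gain = step_accuracies[i] - step_accuracies[i - 1]
        if prev_gain is not None and gain < prev_gain - tol:
            return i - 1, step_accuracies[i - 1]  # stop before this step
        prev_gain = gain
    return len(step_accuracies) - 1, step_accuracies[-1]

# Accuracy improves 0.70 -> 0.78 -> 0.86 -> 0.88: the final gain (0.02)
# is smaller than the prior gain (0.08), so retraining stops at 0.86.
stop_iter, acc = retrain_until_plateau([0.70, 0.78, 0.86, 0.88])
assert stop_iter == 2 and abs(acc - 0.86) < 1e-9
```

In the described system each accuracy value would come from validating a candidate model retrained on teacher-generated reference data.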

    PROVISIONING EDGE BACKHAULS FOR DYNAMIC WORKLOADS

    Publication Number: US20230088681A1

    Publication Date: 2023-03-23

    Application Number: US17478369

    Filing Date: 2021-09-17

    Abstract: Network capacity is provisioned in a computing environment comprising a computing service provider and an edge computing network. A cost function is applied to usage data for a number of user endpoints at the edge computing network, a number and type of workloads at the edge computing network, offload capability of the edge computing network, and resource capacities at the edge computing network. An estimated network capacity is determined, where the workloads are dynamic, and the cost function is usable to optimize the network capacity with respect to one or more criteria.
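
A cost-function-style capacity estimate over the inputs the abstract lists (endpoint count, workload mix, and offload capability) can be sketched as follows. The per-workload bandwidth figures and the headroom factor are invented for illustration and are not from the application.

```python
def estimate_backhaul(endpoints, workloads, offload_fraction, headroom=1.2):
    """Toy backhaul estimate: sum per-endpoint demand across workload
    types, subtract traffic the edge can serve locally (offload), and
    apply a provisioning headroom factor."""
    per_type_mbps = {"video": 4.0, "telemetry": 0.1, "ar": 12.0}  # assumed
    demand = sum(per_type_mbps[w] * n for w, n in workloads.items())
    return demand * endpoints * (1 - offload_fraction) * headroom

cap = estimate_backhaul(endpoints=100,
                        workloads={"video": 2, "telemetry": 5},
                        offload_fraction=0.6)
assert abs(cap - 408.0) < 1e-6  # Mbps, under the assumed figures
```

Because the workloads are dynamic, a real provisioner would re-evaluate such an estimate as the collected usage data changes, optimizing against the chosen criteria.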
