FAST ADAPTATION FOR DEEP LEARNING APPLICATION THROUGH BACKPROPAGATION

    Publication Number: US20240256922A1

    Publication Date: 2024-08-01

    Application Number: US18104073

    Application Date: 2023-01-31

    CPC classification number: G06N5/04 G16Y20/10 G16Y30/00

    Abstract: Systems and methods are provided for dynamically adapting a configuration setting associated with capturing content as input data for inferencing in Multi-Access Edge Computing in a 5G telecommunication network. The inferencing is based on use of a deep neural network. In particular, the method includes determining a gradient of a change in inference data over a change in the configuration setting for capturing input data (the inference-configuration gradient). The method further updates the configuration setting based on this gradient. The inference-configuration gradient is based on a combination of an input-configuration gradient and an inference-input gradient. The input-configuration gradient indicates a change in input data as the configuration setting value changes. The inference-input gradient indicates, as a saliency of the deep neural network, a change in the inference result as the input data changes.
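    The gradient composition described in the abstract is a chain-rule product: the inference-configuration gradient equals the inference-input gradient (network saliency) times the input-configuration gradient. A minimal sketch of that composition follows; the function names, the finite-difference estimate, and the toy capture/saliency functions are illustrative assumptions, not the patent's actual implementation.

    ```python
    # Sketch: inference-configuration gradient as a chain-rule product.
    # d(inference)/d(config) = d(inference)/d(input) * d(input)/d(config)

    def input_configuration_gradient(capture, config, delta=1e-3):
        """Finite-difference estimate of d(input)/d(config): how captured
        input data changes as the configuration setting value changes."""
        x_plus = capture(config + delta)
        x_minus = capture(config - delta)
        return (x_plus - x_minus) / (2 * delta)

    def inference_configuration_gradient(capture, saliency, config):
        """Combine the two gradients named in the abstract via the chain rule."""
        x = capture(config)
        inference_input_grad = saliency(x)   # network saliency at input x
        input_config_grad = input_configuration_gradient(capture, config)
        return inference_input_grad * input_config_grad

    def update_configuration(config, grad, step=0.1):
        """One gradient step on the capture configuration setting."""
        return config + step * grad

    # Toy stand-ins: capture maps a setting (e.g. brightness) to a scalar
    # "input" as input = 2 * config, and the model's saliency is constant 3
    # (inference = 3 * input), so the composed gradient is 3 * 2 = 6.
    capture = lambda c: 2.0 * c
    saliency = lambda x: 3.0

    g = inference_configuration_gradient(capture, saliency, config=1.0)
    new_config = update_configuration(1.0, g)
    ```

    With differentiable stand-ins removed, the same structure applies when the saliency comes from backpropagation through a real deep network.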

    SECURE DATA INGESTION WITH EDGE COMPUTING

    Publication Number: US20240171391A1

    Publication Date: 2024-05-23

    Application Number: US17990646

    Application Date: 2022-11-18

    CPC classification number: H04L9/0897 H04L9/0822 H04L9/14 H04L2209/80

    Abstract: The techniques described herein use an edge device to manage the security for a data stream being ingested by a tenant and a cloud platform. The creation of the data stream for ingestion occurs in an environment that is trusted by a tenant (e.g., an on-premises enterprise network). The cloud platform that is part of the data stream ingestion process is outside this trusted environment, and thus, the tenant loses an element of security when ingesting data streams for cloud storage and/or cloud processing. Accordingly, the edge device is configured on a trust boundary so that the data stream ingestion process associated with a cloud platform is secured, or trusted by the tenant. The edge device is configured to encrypt the data stream using a data encryption key and/or manage the protection of the data encryption key.
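    The pattern described here is envelope encryption at a trust boundary: the edge device encrypts the stream with a data encryption key (DEK) and then wraps the DEK so only the tenant can recover it. A minimal sketch under that assumption follows; the XOR keystream "cipher" is a toy stand-in for a real AEAD cipher, and all names are illustrative, not the patent's implementation.

    ```python
    # Sketch: DEK encrypts the data stream at the edge; a tenant-held key
    # encryption key (KEK) wraps the DEK, so the cloud never sees it in clear.
    import os
    import hashlib

    def xor_stream(key: bytes, data: bytes) -> bytes:
        """Toy stream cipher: XOR with a SHA-256-derived keystream (demo only,
        NOT secure). XOR is involutive, so the same call en/decrypts."""
        out = bytearray()
        counter = 0
        while len(out) < len(data):
            block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            out.extend(block)
            counter += 1
        return bytes(b ^ k for b, k in zip(data, out[:len(data)]))

    def edge_encrypt(stream: bytes, kek: bytes):
        """On the trust boundary: encrypt the stream with a fresh DEK,
        then wrap the DEK so it leaves the edge only in protected form."""
        dek = os.urandom(32)
        ciphertext = xor_stream(dek, stream)
        wrapped_dek = xor_stream(kek, dek)
        return ciphertext, wrapped_dek

    def tenant_decrypt(ciphertext: bytes, wrapped_dek: bytes, kek: bytes) -> bytes:
        """Inside the trusted environment: unwrap the DEK, then decrypt."""
        dek = xor_stream(kek, wrapped_dek)
        return xor_stream(dek, ciphertext)

    kek = os.urandom(32)                       # held only by the tenant
    ct, wrapped = edge_encrypt(b"sensor readings", kek)
    plaintext = tenant_decrypt(ct, wrapped, kek)
    ```

    In practice the wrapped DEK would travel with the ciphertext to cloud storage, while the KEK stays in the tenant's trusted environment (or an HSM).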

    CONTINUOUS LEARNING MODELS ACROSS EDGE HIERARCHIES

    Publication Number: US20220414534A1

    Publication Date: 2022-12-29

    Application Number: US17362115

    Application Date: 2021-06-29

    Abstract: Systems and methods are provided for continuous learning of models across hierarchies under multi-access edge computing. In particular, an on-premises edge server, using a model, generates inference data associated with captured stream data. A data drift determiner detects data drift in the inference data by comparing it against reference data generated using a golden model. The data drift indicates a loss of accuracy in the inference data. A gateway maintains one or more models in a model cache for updating the model, instructs one or more servers to train the new model, and transmits the trained model to update the model in the on-premises edge server. Training the new model includes determining an on-premises edge server with computing resources available to train the new model while generating other inference data for incoming stream data in the data analytics pipeline.
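    The drift check described above can be sketched as a simple agreement test between the edge model's labels and the golden model's reference labels; drift is flagged when agreement falls below a threshold. The threshold value and all names are illustrative assumptions, not the patent's method.

    ```python
    # Sketch: detect data drift by comparing edge-model inferences against
    # reference labels from a golden model; low agreement signals accuracy
    # loss and would trigger retraining via the gateway.

    def detect_data_drift(edge_labels, golden_labels, threshold=0.9):
        """Return True when the fraction of edge labels that match the golden
        model's reference labels drops below the threshold."""
        matches = sum(e == g for e, g in zip(edge_labels, golden_labels))
        agreement = matches / len(golden_labels)
        return agreement < threshold

    # Toy example: the edge model disagrees with the golden model on 3 of
    # 10 frames, so agreement is 0.7 and drift is flagged.
    edge   = ["car", "car", "bus", "car", "bus", "car", "car", "bus", "car", "car"]
    golden = ["car", "bus", "bus", "car", "car", "car", "bus", "bus", "car", "car"]
    drifted = detect_data_drift(edge, golden)
    ```

    In the pipeline the abstract describes, a True result would prompt the gateway to pick a cached model and schedule retraining on an edge server with spare capacity.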

    ALLOCATING COMPUTING RESOURCES DURING CONTINUOUS RETRAINING

    Publication Number: US20220188569A1

    Publication Date: 2022-06-16

    Application Number: US17124172

    Application Date: 2020-12-16

    Abstract: Examples are disclosed that relate to methods and computing devices for allocating computing resources and selecting hyperparameter configurations during continuous retraining and operation of a machine learning model. In one example, a computing device configured to be located at a network edge between a local network and a cloud service comprises a processor and a memory storing instructions executable by the processor to operate a machine learning model. During a retraining window, a selected portion of a video stream is selected for labeling. At least a portion of a labeled retraining data set is selected for profiling a superset of hyperparameter configurations. For each configuration of the superset of hyperparameter configurations, a profiling test is performed. The profiling test is terminated, and a change in inference accuracy that resulted from the profiling test is extrapolated. Based upon the extrapolated inference accuracies, a set of selected hyperparameter configurations is output.
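    The profiling loop described above (short test per configuration, early termination, extrapolated accuracy, then selection) can be sketched as follows. The linear extrapolation, the epoch budgets, and all names are illustrative assumptions; the patent does not specify this particular extrapolation scheme.

    ```python
    # Sketch: profile each hyperparameter configuration briefly, terminate
    # the test early, extrapolate the accuracy change to the full retraining
    # budget, and output the best configurations.

    def extrapolate_accuracy(acc_start, acc_end, profile_epochs, full_epochs):
        """Linearly extrapolate the accuracy gain observed in the short
        profiling test out to the full budget (capped at 1.0)."""
        gain_per_epoch = (acc_end - acc_start) / profile_epochs
        return min(1.0, acc_end + gain_per_epoch * (full_epochs - profile_epochs))

    def select_configurations(profiles, full_epochs, k=2):
        """profiles: {config_name: (acc_start, acc_end, profile_epochs)}.
        Return the k configurations with the highest extrapolated accuracy."""
        scored = {
            name: extrapolate_accuracy(a0, a1, n, full_epochs)
            for name, (a0, a1, n) in profiles.items()
        }
        return sorted(scored, key=scored.get, reverse=True)[:k]

    # Toy superset of configurations, each profiled for 5 epochs of a
    # 30-epoch retraining window before early termination.
    profiles = {
        "lr=0.01,bs=8":  (0.50, 0.70, 5),   # fast improvement
        "lr=0.001,bs=8": (0.50, 0.58, 5),   # slow improvement
        "lr=0.1,bs=16":  (0.50, 0.55, 5),
    }
    best = select_configurations(profiles, full_epochs=30)
    ```

    The point of terminating each profiling test early is that the edge device can rank many configurations while spending only a small fraction of the retraining window's compute on each.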
