Method for network management based on NETCONF protocol, and associated network device

    Publication number: US11075793B2

    Publication date: 2021-07-27

    Application number: US16331646

    Application date: 2016-09-19

    Inventors: Wei Luo, Yun Li

    Abstract: The present disclosure discloses a method used in a network device for network management based on NETwork CONFiguration (NETCONF) protocol, and an associated network device. The method includes: receiving a Remote Procedure Call (RPC) message from a network management system, the RPC message instructing the network device to perform an edit operation on the network device's configuration; generating a configuration change notification based on the received RPC message to indicate that the network device's configuration has changed, the configuration change notification indicating a target of the edit operation, a type of the edit operation, and a value of the edit operation; and transmitting the configuration change notification to the network management system.
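    The flow in the abstract can be sketched as follows. This is a minimal illustration, not the patented implementation: the `EditOperation` fields and the notification element names are assumptions standing in for the NETCONF `<edit-config>` content and the change-notification payload the claims describe.

```python
from dataclasses import dataclass

# Hypothetical in-memory view of one edit carried by the RPC message;
# field names are illustrative, not taken from the patent claims.
@dataclass
class EditOperation:
    target: str   # path of the edited node
    op_type: str  # NETCONF operation: "merge", "replace", "create", "delete"
    value: str    # new value carried by the edit

def build_change_notification(edit: EditOperation) -> str:
    """Render a configuration change notification that reports the
    target, type, and value of the edit back to the manager."""
    return (
        "<netconf-config-change>"
        f"<edit><target>{edit.target}</target>"
        f"<operation>{edit.op_type}</operation>"
        f"<value>{edit.value}</value></edit>"
        "</netconf-config-change>"
    )

notification = build_change_notification(
    EditOperation("/interfaces/interface[name='eth0']/mtu", "merge", "9000"))
```

Because the notification echoes target, type, and value, the management system can mirror the change without re-reading the whole datastore.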

    METHOD FOR NETWORK MANAGEMENT BASED ON NETCONF PROTOCOL, AND ASSOCIATED NETWORK DEVICE

    Publication number: US20190245732A1

    Publication date: 2019-08-08

    Application number: US16331646

    Application date: 2016-09-19

    Inventors: Wei Luo, Yun Li

    CPC classification numbers: H04L41/0273, H04L41/0813, H04L41/0859, H04L67/40

    Abstract: The present disclosure discloses a method used in a network device for network management based on NETwork CONFiguration (NETCONF) protocol, and an associated network device. The method includes: receiving a Remote Procedure Call (RPC) message from a network management system, the RPC message instructing the network device to perform an edit operation on the network device's configuration; generating a configuration change notification based on the received RPC message to indicate that the network device's configuration has changed, the configuration change notification indicating a target of the edit operation, a type of the edit operation, and a value of the edit operation; and transmitting the configuration change notification to the network management system.

    Method and system for image compressing and coding with deep learning

    Publication number: US12155849B2

    Publication date: 2024-11-26

    Application number: US18012316

    Application date: 2020-06-25

    Inventor: Yun Li

    Abstract: An image processing system (100) and methods therein for compressing and coding an image and for optimizing its parameters are disclosed. The embodiments herein provide an improved system and a simplified method with deterministic uniform quantization with integer levels based on the Softmax function for image compression and coding. The embodiments herein produce an exact discrete probability mass function for the latent variables to be coded from side information. The embodiments herein enable training of the image processing system to minimize the bit rate through backpropagation at the same time. Moreover, the embodiments herein create the possibility to encode region-of-interest (ROI) areas during coding.
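    A Softmax over negative squared distances to integer levels is one common way to realize the kind of quantization the abstract names: it yields both a discrete probability mass function over the levels and a deterministic integer choice. The sketch below illustrates that idea only; the level count, temperature, and exact formulation are assumptions, not details from the patent.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of logits.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def soft_quantize(y, num_levels=8, temperature=0.1):
    """Map a continuous latent y onto integer levels 0..num_levels-1.

    The softmax of negative squared distances gives a probability mass
    function over the levels; its argmax is the deterministic quantized
    integer, and its expectation is a differentiable surrogate usable
    for backpropagation during training."""
    levels = list(range(num_levels))
    logits = [-(y - l) ** 2 / temperature for l in levels]
    pmf = softmax(logits)
    soft = sum(l * p for l, p in zip(levels, pmf))  # differentiable value
    hard = max(levels, key=lambda l: pmf[l])        # deterministic level
    return soft, hard, pmf

soft, hard, pmf = soft_quantize(3.4)
```

Lowering the temperature sharpens the mass function toward the nearest integer, which is how the soft (trainable) and hard (deterministic) views are kept consistent.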

    LOAD DISTRIBUTION FOR A DISTRIBUTED NEURAL NETWORK

    Publication number: US20220101100A1

    Publication date: 2022-03-31

    Application number: US17426075

    Application date: 2019-02-13

    Abstract: A method for dynamic load distribution for a distributed neural network is disclosed. The method comprises estimating, in a device of the neural network, an energy usage for processing at least one non-processed layer in the device, and estimating, in the device of the neural network, an energy usage for transmitting layer output of at least one processed layer to a cloud service of the neural network for processing. The method further comprises comparing, in the device of the neural network, the estimated energy usage for processing the at least one non-processed layer in the device with the estimated energy usage for transmitting the layer output of the at least one processed layer to the cloud service. The method furthermore comprises determining to process the at least one non-processed layer in the device when the estimated energy usage for transmitting the layer output of the at least one processed layer to the cloud service is equal to or greater than the estimated energy usage for processing the at least one non-processed layer, and determining to transmit the layer output of the at least one processed layer to the cloud service for processing subsequent layers when the estimated energy usage for transmitting the layer output of the at least one processed layer to the cloud service is less than the estimated energy usage for processing the at least one non-processed layer in the device. Corresponding computer program product, apparatus, cloud service assembly, and system are also disclosed.
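    The decision rule in the abstract reduces to a single comparison of two energy estimates. The sketch below follows that rule; the linear per-byte and per-MAC energy models are hypothetical stand-ins, since the abstract does not say how the estimates are obtained.

```python
def transmit_energy_j(num_bytes: int, joules_per_byte: float = 5e-7) -> float:
    """Hypothetical radio model: transmit energy grows with payload size."""
    return num_bytes * joules_per_byte

def compute_energy_j(num_macs: int, joules_per_mac: float = 1e-9) -> float:
    """Hypothetical on-device model: compute energy grows with
    the layer's multiply-accumulate count."""
    return num_macs * joules_per_mac

def place_next_layer(e_local_j: float, e_transmit_j: float) -> str:
    """Rule from the abstract: process the layer on-device when
    transmitting the previous layer's output would cost at least as
    much energy as local processing; otherwise offload to the cloud."""
    return "process-on-device" if e_transmit_j >= e_local_j else "offload-to-cloud"

# A layer whose output is large relative to its compute favours staying local.
decision = place_next_layer(compute_energy_j(2_000_000),
                            transmit_energy_j(1_000_000))
```

Ties go to the device, matching the abstract's "equal to or greater than" condition for local processing.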

    Inference Processing of Data

    Publication number: US20220351422A1

    Publication date: 2022-11-03

    Application number: US17624160

    Application date: 2019-07-02

    Abstract: A method for processing data in a system configured to operate in either of at least a first power mode and a second power mode, wherein the first power mode is associated with a first power level and the second power mode is associated with a second power level, the second power level being higher than the first power level, wherein the first and second power modes each are configured to prepare a respective model for inference processing, is disclosed. The method comprises acquiring (101) compressed data and determining (102) whether the system operates in the first power mode or in the second power mode. The method further comprises, when the system operates in the first power mode, determining (103) whether the acquired compressed data comprises a self-contained frame, and if so, partly decoding (104) the self-contained frame, performing (105) feature extraction of the decoded self-contained frame, preparing (107) the model for inference processing in the first power mode in the system, wherein the model comprises inference parameters for the first power mode, and performing (108) inference processing by a neural network based on the extracted features and the prepared model for inference processing. Corresponding computer program product, apparatus, and system are also disclosed.
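    The first-power-mode branch of the abstract can be sketched as the control flow below. It is an illustration only: the dictionary frame format and the callables standing in for the codec, feature extractor, and model are assumptions, since the abstract does not specify those internals.

```python
def low_power_inference(frame, partly_decode, extract_features,
                        prepare_model, run_inference):
    """First-power-mode path from the abstract: only self-contained
    frames (decodable without reference frames) are handled here."""
    if not frame.get("self_contained", False):
        return None                        # needs the second (high-power) mode
    decoded = partly_decode(frame)         # step (104): partly decode the frame
    features = extract_features(decoded)   # step (105): feature extraction
    model = prepare_model()                # step (107): low-power model params
    return run_inference(model, features)  # step (108): neural-network inference

# Toy stand-ins: decoding returns the payload, features are its sum,
# and "inference" just pairs the model tag with the features.
result = low_power_inference(
    {"self_contained": True, "data": [1, 2, 3]},
    partly_decode=lambda f: f["data"],
    extract_features=sum,
    prepare_model=lambda: "low-power-model",
    run_inference=lambda m, x: (m, x))
```

Gating on self-contained frames is what lets the low-power mode skip full decoding and still feed the neural network valid features.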
