-
Publication No.: US10902293B2
Publication Date: 2021-01-26
Application No.: US16177581
Filing Date: 2018-11-01
Applicant: Cisco Technology, Inc.
Inventor: Purushotham Kamath , Abhishek Singh , Debojyoti Dutta
Abstract: In one embodiment, a device forms a neural network envelope cell that comprises a plurality of convolution-based filters in series or parallel. The device constructs a convolutional neural network by stacking copies of the envelope cell in series. The device trains, using a training dataset of images, the convolutional neural network to perform image classification by iteratively collecting variance metrics for each filter in each envelope cell, pruning filters with low variance metrics from the convolutional neural network, and appending a new copy of the envelope cell into the convolutional neural network.
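A minimal Python sketch of the prune-and-grow loop this abstract describes; the names (EnvelopeCell, train_step, variance_of) and the 20% pruning quantile are illustrative assumptions, not the patented implementation:

```python
# Illustrative only: filter handles, the variance callback and the pruning
# quantile are assumptions made to show the iterative prune-and-grow flow.
import numpy as np


class EnvelopeCell:
    """A bundle of convolution-based filters applied in series or parallel."""

    def __init__(self, num_filters):
        self.filters = list(range(num_filters))   # placeholder filter handles

    def prune(self, keep_mask):
        self.filters = [f for f, keep in zip(self.filters, keep_mask) if keep]


def search(train_step, variance_of, rounds=5, filters_per_cell=8, quantile=0.2):
    """Stack envelope cells, train, prune low-variance filters, then grow."""
    network = [EnvelopeCell(filters_per_cell)]           # initial stack of cells
    for _ in range(rounds):
        train_step(network)                              # train on the image dataset
        for cell in network:
            var = np.asarray([variance_of(cell, f) for f in cell.filters])
            threshold = np.quantile(var, quantile)       # low-variance cut-off
            cell.prune(var > threshold)                  # drop uninformative filters
        network.append(EnvelopeCell(filters_per_cell))   # append a fresh cell copy
    return network
```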
-
Publication No.: US20200302272A1
Publication Date: 2020-09-24
Application No.: US16358554
Filing Date: 2019-03-19
Applicant: Cisco Technology, Inc.
Inventor: Abhishek Singh , Debojyoti Dutta
Abstract: The present disclosure provides systems, methods, and computer-readable media for optimizing neural architecture search for an automated machine learning process. In one aspect, a neural architecture search method includes selecting a neural architecture for training as part of an automated machine learning process; collecting statistical parameters on individual nodes of the neural architecture during the training; determining, based on the statistical parameters, active nodes of the neural architecture to form a candidate neural architecture; and validating the candidate neural architecture to produce a trained neural architecture to be used in implementing an application or a service.
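As a rough illustration of the node-selection step, the sketch below keeps only nodes whose collected statistics exceed an activity floor; the statistic used (mean absolute activation) and all helper names are assumptions, not the claimed method:

```python
# Hypothetical sketch: collect_stats, train, validate and restrict_to are
# assumed callbacks/objects supplied by the surrounding AutoML pipeline.
from statistics import mean


def select_active_nodes(node_stats, min_activity=0.05):
    """Keep nodes whose statistical parameters suggest they contribute to the output."""
    return {node for node, samples in node_stats.items()
            if mean(abs(x) for x in samples) >= min_activity}


def search_step(architecture, train, validate, collect_stats):
    train(architecture)                          # partial training run
    stats = collect_stats(architecture)          # per-node statistical parameters
    candidate = architecture.restrict_to(select_active_nodes(stats))
    return candidate if validate(candidate) else None   # validated candidate or None
```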
-
Publication No.: US20200279187A1
Publication Date: 2020-09-03
Application No.: US16288563
Filing Date: 2019-02-28
Applicant: Cisco Technology, Inc.
Inventor: Xinyuan Huang , Debojyoti Dutta
Abstract: Joint hyper-parameter optimizations and infrastructure configurations for deploying a machine learning model can be generated based upon each other and output as a recommendation. A model hyper-parameter optimization may tune model hyper-parameters based on an initial set of hyper-parameters and resource configurations. The resource configurations may then be adjusted or generated based on the tuned model hyper-parameters. Further model hyper-parameter optimizations and resource configuration adjustments can be performed sequentially in a loop until a threshold performance for training the model based on the model hyper-parameters or a threshold improvement between loops is detected.
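A small sketch of the alternating loop described above; the callback names, the performance target, and the improvement threshold are illustrative assumptions:

```python
# Illustrative only: tune_hyperparams, tune_resources and measure_performance
# stand in for the tuner, the infrastructure recommender and the benchmark.
def co_optimize(hyperparams, resources, tune_hyperparams, tune_resources,
                measure_performance, target=0.95, min_gain=0.01, max_loops=10):
    best = measure_performance(hyperparams, resources)
    for _ in range(max_loops):
        hyperparams = tune_hyperparams(hyperparams, resources)    # model-side step
        resources = tune_resources(hyperparams, resources)        # infrastructure step
        score = measure_performance(hyperparams, resources)
        if score >= target or score - best < min_gain:            # stop on threshold
            break                                                 # performance or gain
        best = score
    return hyperparams, resources                                 # joint recommendation
```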
-
Publication No.: US20200021663A1
Publication Date: 2020-01-16
Application No.: US16581636
Filing Date: 2019-09-24
Applicant: Cisco Technology, Inc.
Inventor: Marc Solanas Tarre , Ralf Rantzau , Debojyoti Dutta , Manoj Sharma
IPC: H04L29/08
Abstract: Approaches are disclosed for distributing messages across multiple data centers where the data centers do not store messages using the same message queue protocol. In some embodiments, a network element translates messages from a message queue protocol (e.g., Kestrel, RABBITMQ, APACHE Kafka, or ACTIVEMQ) to an application layer messaging protocol (e.g., XMPP, MQTT, WebSocket protocol, or other application layer messaging protocols). In other embodiments, a network element translates messages from an application layer messaging protocol to a message queue protocol. Using the approaches disclosed herein, data centers communicate using, at least in part, application layer messaging protocols to decouple the message queue protocols used by the data centers and enable sharing of messages between message queues in the data centers. Consequently, the data centers can share messages regardless of whether the underlying message queue protocols used by the data centers (and the network devices therein) are compatible with one another.
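A toy sketch of the translation step, assuming a generic publish-capable client object; the Translator class and its topic naming are illustrative, not the patented network element:

```python
# Hypothetical bridge: messages pulled from a data center's message queue are
# re-published over an application layer messaging protocol (e.g. MQTT, XMPP).
from dataclasses import dataclass


@dataclass
class QueueMessage:
    queue: str
    payload: bytes


class Translator:
    def __init__(self, app_layer_client, topic_prefix="dc-bridge/"):
        self.client = app_layer_client     # assumed to expose publish(topic, payload)
        self.prefix = topic_prefix

    def forward(self, msg: QueueMessage):
        # Map the queue name onto an application layer topic and re-publish, so
        # data centers with incompatible queue protocols can still exchange messages.
        self.client.publish(self.prefix + msg.queue, msg.payload)
```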
-
Publication No.: US20190196879A1
Publication Date: 2019-06-27
Application No.: US15850230
Filing Date: 2017-12-21
Applicant: Cisco Technology, Inc.
Inventor: Debojyoti Dutta , Xinyuan Huang
CPC classification number: G06F9/5083 , G06F9/5038 , G06F9/5044 , G06F9/505 , H04L41/0803
Abstract: Systems, methods, and computer-readable media are disclosed for determining a point of delivery (POD) device or network component on a cloud for workload and resource placement in a multi-cloud environment. A method includes determining a first amount of data for transitioning from performing a first function on input data to performing a second function on a first outcome of the first function; determining a second amount of data for transitioning from performing the second function on the first outcome to performing a third function on a second outcome of the second function; determining a processing capacity for each of one or more network nodes on which the first function and the third function are implemented; and selecting the network node for implementing the second function based on the first amount of data, the second amount of data, and the processing capacity for each of the network nodes.
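A simplified placement heuristic in the spirit of this abstract: choose the node for the second function by weighing the data moved into and out of it against each candidate node's capacity. The scoring formula is an assumption made for illustration:

```python
# Illustrative only: real placement would also account for link bandwidth,
# co-location of the first/third functions, and other constraints.
def place_second_function(data_in, data_out, node_capacity, per_byte_cost):
    """data_in/data_out: bytes entering/leaving the second function.
    node_capacity: {node: available processing capacity}.
    per_byte_cost: {node: cost of moving one byte to/from that node}."""
    def score(node):
        transfer_cost = (data_in + data_out) * per_byte_cost[node]
        return transfer_cost - node_capacity[node]    # lower score is better
    return min(node_capacity, key=score)              # selected network node
```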
-
Publication No.: US10333958B2
Publication Date: 2019-06-25
Application No.: US15350717
Filing Date: 2016-11-14
Applicant: Cisco Technology, Inc.
Inventor: Xinyuan Huang , Sarvesh Ranjan , Olivia Zhang , Yathiraj B. Udupi , Debojyoti Dutta
Abstract: In one embodiment, a device in a network receives a first plurality of measurements for network metrics captured during a first time period. The device determines a first set of correlations between the network metrics using the first plurality of measurements captured during the first time period. The device receives a second plurality of measurements for the network metrics captured during a second time period. The device determines a second set of correlations between the network metrics using the second plurality of measurements captured during the second time period. The device identifies a difference between the first and second sets of correlations between the network metrics as a network anomaly.
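A compact NumPy sketch of the correlation-shift check; the fixed 0.5 threshold is an illustrative assumption, not a value from the patent:

```python
# Each window is an (observations x metrics) array of measurements captured
# during one time period.
import numpy as np


def correlation_anomalies(window_a, window_b, threshold=0.5):
    """Flag metric pairs whose correlation changed markedly between the windows."""
    corr_a = np.corrcoef(window_a, rowvar=False)       # first set of correlations
    corr_b = np.corrcoef(window_b, rowvar=False)       # second set of correlations
    delta = np.abs(corr_a - corr_b)
    i, j = np.where(np.triu(delta, k=1) > threshold)   # upper triangle, no diagonal
    return list(zip(i.tolist(), j.tolist()))           # anomalous metric pairs
```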
-
Publication No.: US20190166221A1
Publication Date: 2019-05-30
Application No.: US15827969
Filing Date: 2017-11-30
Applicant: Cisco Technology, Inc.
Inventor: Komei Shimamura , Amit Kumar Saha , Debojyoti Dutta
CPC classification number: H04L67/2847 , G06F9/5027 , G06F9/5033 , G06F9/5044 , G06F9/5072 , G06F9/5077 , H04L67/10
Abstract: A method for data provisioning a serverless computing cluster. A plurality of user defined functions (UDFs) are received for execution on worker nodes of the serverless computing cluster. For a first UDF, one or more data locations of UDF data needed to execute the first UDF are determined. At a master node of the serverless computing cluster, a plurality of worker node tickets are received, each ticket indicating a resource availability of a corresponding worker node. The one or more data locations and the plurality of worker node tickets are analyzed to determine eligible worker nodes capable of executing the first UDF. The master node transmits a pre-fetch command to one or more of the eligible worker nodes, causing each of them to become a provisioned worker node for the first UDF by storing pre-fetched first UDF data before the first UDF is assigned for execution.
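An illustrative version of the master node's provisioning step; the ticket fields and the eligibility rule are assumptions for the sketch:

```python
# Hypothetical sketch: send_prefetch stands in for whatever transport the
# master node uses to push pre-fetch commands to worker nodes.
from dataclasses import dataclass


@dataclass
class Ticket:
    worker: str
    free_cpu: float
    free_mem: float


def provision_udf(data_locations, tickets, cpu_needed, mem_needed, send_prefetch):
    """Pick eligible workers and tell them to pre-fetch the first UDF's input data."""
    eligible = [t.worker for t in tickets
                if t.free_cpu >= cpu_needed and t.free_mem >= mem_needed]
    for worker in eligible:
        # The command carries the data locations so the worker can stage the
        # UDF data before the UDF itself is assigned for execution.
        send_prefetch(worker, data_locations)
    return eligible                       # provisioned worker nodes
```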
-
Publication No.: US20190147070A1
Publication Date: 2019-05-16
Application No.: US15811124
Filing Date: 2017-11-13
Applicant: Cisco Technology, Inc.
Inventor: Ralf Rantzau , Madhu S. Kumar , Johnu George , Amit Kumar Saha , Debojyoti Dutta
Abstract: Systems, methods, and computer-readable media for managing storing of data in a data storage system using a client tag. In some examples, a first portion of a data load as part of a transaction and a client identifier that uniquely identifies a client is received from the client at a data storage system. The transaction can be tagged with a client tag including the client identifier and the first portion of the data load can be stored in storage at the data storage system. A first log entry including the client tag is added to a data storage log in response to storing the first portion of the data load in the storage. The first log entry is then written from the data storage log to a persistent storage log in persistent memory which is used to track progress of storing the data load in the storage.
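A toy illustration of the client-tagged logging flow; plain Python lists stand in for the data storage log and the persistent storage log:

```python
# Illustrative only: a real system would write the persistent entry to
# persistent memory, not to an in-memory list.
def store_portion(storage, data_log, persistent_log, client_id, txn_id, portion):
    storage.setdefault(txn_id, []).append(portion)     # store this portion of the load
    entry = {"client_tag": client_id,                  # tag uniquely identifying client
             "txn": txn_id,
             "bytes": len(portion)}
    data_log.append(entry)                             # data storage log entry
    persistent_log.append(entry)                       # written on to the persistent
                                                       # log to track storing progress
```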
-
Publication No.: US10222986B2
Publication Date: 2019-03-05
Application No.: US14713851
Filing Date: 2015-05-15
Applicant: CISCO TECHNOLOGY, INC.
Inventor: Johnu George , Kai Zhang , Yathiraj B. Udupi , Debojyoti Dutta
Abstract: Embodiments include receiving an indication of a data storage module to be associated with a tenant of a distributed storage system, allocating a partition of a disk for data of the tenant, creating a first association between the data storage module and the disk partition, creating a second association between the data storage module and the tenant, and creating rules for the data storage module based on one or more policies configured for the tenant. Embodiments further include receiving an indication of a type of subscription model selected for the tenant, and selecting the disk partition to be allocated based, at least in part, on the subscription model selected for the tenant. More specific embodiments include generating a storage map indicating the first association between the data storage module and the disk partition and indicating the second association between the data storage module and the tenant.
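A schematic of the bookkeeping described above; the field names and the subscription-to-partition-size mapping are illustrative assumptions:

```python
# Hypothetical tiers; allocate_partition stands in for the disk allocator.
PARTITION_GB_BY_PLAN = {"basic": 100, "premium": 500}


def provision_tenant(storage_map, tenant, storage_module, plan, allocate_partition):
    partition = allocate_partition(PARTITION_GB_BY_PLAN[plan])   # sized by subscription
    storage_map[storage_module] = {
        "partition": partition,            # first association: module -> disk partition
        "tenant": tenant,                  # second association: module -> tenant
        "rules": {"plan": plan},           # rules derived from the tenant's policies
    }
    return storage_map                     # storage map recording both associations
```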
-
Publication No.: US20180343131A1
Publication Date: 2018-11-29
Application No.: US15907018
Filing Date: 2018-02-27
Applicant: Cisco Technology, Inc.
Inventor: Johnu George , Amit Kumar Saha , Arun Saha , Debojyoti Dutta
Abstract: Aspects of the disclosed technology relate to ways to determine the optimal storage of data structures across different memory devices associated with physically disparate network nodes. In some aspects, a process of the technology can include steps for receiving a first retrieval request for a first object, searching a local PMEM device for the first object based on the first retrieval request, and, in response to a failure to find the first object on the local PMEM device, transmitting a second retrieval request to a remote node, wherein the second retrieval request is configured to cause the remote node to retrieve the first object from a remote PMEM device. Systems and machine-readable media are also provided.
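A minimal sketch of the local-then-remote lookup; the PMEM device and remote node interfaces are placeholders, not a real API:

```python
# Illustrative only: local_pmem is assumed to expose get(), and remote_node a
# retrieve() call that reads the object from its own PMEM device.
def get_object(object_id, local_pmem, remote_node):
    obj = local_pmem.get(object_id)        # first retrieval request, local PMEM search
    if obj is not None:
        return obj
    # Local miss: send a second retrieval request to the remote node, which
    # fetches the object from the remote PMEM device.
    return remote_node.retrieve(object_id)
```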