Abstract:
A cloud proxy communicates over a public network with a network proxy in a private network. The network proxy communicates with the public network through a firewall. The cloud proxy receives from a requester a request to communicate with a remote device in the private network. The cloud proxy creates first forwarding rules to forward traffic from the requester to the network proxy and from the network proxy to the requester. The cloud proxy sends the request to the network proxy, which causes the network proxy to create second forwarding rules to forward traffic from the network proxy to the remote device and from the remote device to the cloud proxy. A communication tunnel between the cloud proxy and the network proxy is established through the firewall, over which the southbound traffic (toward the remote device) and northbound traffic (toward the requester) is forwarded based on the first forwarding rules and the second forwarding rules.
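As a rough illustration of the forwarding-rule bookkeeping described above (not the patented implementation; all class and endpoint names are invented), a cloud proxy might record the first forwarding rules for a requester/remote-device pair and hand the request off to the network proxy:

```python
# Illustrative sketch only: record the first forwarding rules
# (requester <-> network proxy) and forward the request so the network
# proxy can install its own second rules behind the firewall.
from dataclasses import dataclass, field

@dataclass
class ForwardingRule:
    src: str   # logical source endpoint
    dst: str   # logical destination endpoint

@dataclass
class CloudProxy:
    rules: list = field(default_factory=list)

    def handle_request(self, requester: str, remote_device: str, network_proxy: str) -> None:
        # First forwarding rules: traffic from the requester is sent toward the
        # network proxy, and traffic from the network proxy back to the requester.
        self.rules.append(ForwardingRule(requester, network_proxy))
        self.rules.append(ForwardingRule(network_proxy, requester))
        # Sending the request prompts the network proxy to create the second
        # forwarding rules (network proxy -> remote device, remote device -> cloud proxy).
        print(f"forwarding request for {remote_device} to {network_proxy}")

proxy = CloudProxy()
proxy.handle_request("requester-1", "device-42", "net-proxy-A")
```

Once both sets of rules exist, the tunnel through the firewall simply carries whatever traffic matches them in either direction.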
Abstract:
Network topology information may be determined for a plurality of network devices on a network. System identifier information may then be determined for each of the plurality of network devices on the network. The system identifier information may be a list of network solutions that each network device actually or potentially belongs to. The system may then flag the system identifier information to indicate whether each solution is an actual or a potential solution.
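A minimal sketch, with invented device and solution names, of what the flagged system identifier information might look like once topology discovery has run:

```python
# Hypothetical example: each discovered device maps to the network solutions
# it actually belongs to and those it could potentially belong to.
system_identifiers = {
    "switch-1": {"actual": ["campus-lan"], "potential": ["sd-access"]},
    "router-7": {"actual": ["wan-edge"], "potential": []},
}

for device, solutions in system_identifiers.items():
    flagged = [(name, "actual") for name in solutions["actual"]]
    flagged += [(name, "potential") for name in solutions["potential"]]
    print(device, flagged)
```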
Abstract:
A context-driven publication option is received over a network at an adaptive publish/subscribe broker from a publishing network device. The context-driven publication option is presented over the network to a subscribing network device. A selection of a context-driven subscription is received over the network at the adaptive publish/subscribe broker from the subscribing network device. A publication configured for network management and operations is received at the adaptive publish/subscribe broker. Publications are filtered at the adaptive publish/subscribe broker for the subscribing network device according to the selection of the context-driven subscription.
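A hedged sketch of the filtering step (class and field names are assumptions, not the broker's actual interface): the broker remembers each subscriber's selected context-driven subscription and delivers only matching publications.

```python
# Sketch of context-driven filtering at an adaptive publish/subscribe broker.
class AdaptiveBroker:
    def __init__(self):
        self.subscriptions = {}   # subscriber -> set of selected context tags

    def subscribe(self, subscriber: str, contexts: set) -> None:
        self.subscriptions[subscriber] = contexts

    def publish(self, publication: dict) -> None:
        # Deliver only to subscribers whose selected contexts match this publication.
        for subscriber, contexts in self.subscriptions.items():
            if publication.get("context") in contexts:
                self.deliver(subscriber, publication)

    def deliver(self, subscriber: str, publication: dict) -> None:
        print(f"{subscriber} <- {publication}")

broker = AdaptiveBroker()
broker.subscribe("nms-1", {"fault", "performance"})
broker.publish({"context": "fault", "device": "core-sw-2", "event": "link-down"})
```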
Abstract:
Techniques are provided herein for enabling a virtual private network (VPN) using a bidirectional, full duplex transport channel configured to send and receive application layer data packets. At a source network device that hosts a VPN client, the VPN client is configured with a bidirectional, full duplex transport channel that is configured to send and receive Open Systems Interconnection application layer data packets. The VPN client is also configured with a virtual network interface that operates to virtually link the VPN client with the transport channel.
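A minimal sketch, under the assumption that the transport channel and virtual network interface are exposed as simple read/write objects (placeholder classes, not any real VPN client API), of how the virtual link between them could pump packets in both directions:

```python
# Placeholder types: a full-duplex application-layer channel and a virtual
# network interface. The two pump loops would typically run concurrently.
class FullDuplexChannel:
    def send(self, data: bytes) -> None: ...
    def recv(self) -> bytes: ...

class VirtualInterface:
    def read_packet(self) -> bytes: ...
    def write_packet(self, data: bytes) -> None: ...

def pump_outbound(vif: VirtualInterface, channel: FullDuplexChannel) -> None:
    # Encapsulate packets read from the virtual interface as application-layer messages.
    while True:
        channel.send(vif.read_packet())

def pump_inbound(vif: VirtualInterface, channel: FullDuplexChannel) -> None:
    # Write messages received on the channel back to the virtual interface as packets.
    while True:
        vif.write_packet(channel.recv())
```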
Abstract:
Disclosed is a method and system for using a credit-based approach to scheduling workload in a compute environment. The method includes determining server capacity and load of a compute environment and running a first benchmark job to calibrate a resource scheduler. The method includes partitioning, based on the calibration, the compute environment into multiple priority portions (e.g., a first portion, a second portion, etc.) and optionally a reserve portion. Credits are assigned to allocate system capacity or resources per time quantum. The method includes running a second benchmark job to calibrate the complexity of supported job types to be run in the compute environment. When a request for capacity is received, the workload is assigned one or more credits, and credits are withdrawn from the submitting entity's account for access to the compute environment at a scheduled time.
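An illustrative sketch of the credit accounting step (the job types, costs, and account balances are invented for the example): the request is priced from a calibration-derived table and the credits are withdrawn from the submitter's account.

```python
# Hypothetical credit accounting for a scheduled request.
credit_cost_per_job_type = {"small": 1, "medium": 4, "large": 10}  # from benchmark calibration
accounts = {"team-a": 25}

def schedule(submitter: str, job_type: str) -> bool:
    cost = credit_cost_per_job_type[job_type]
    if accounts.get(submitter, 0) < cost:
        return False                 # not enough credits for this time quantum
    accounts[submitter] -= cost      # withdraw credits for access at the scheduled time
    return True

print(schedule("team-a", "large"), accounts["team-a"])   # True 15
```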
Abstract:
Techniques are provided herein for establishing, at a network management server, a presence on a network. A presence associated with one or more managed devices on the network is detected. An instant messaging (IM) session is established with the one or more managed devices. The IM session forms a virtual chat room for performing a management function on the one or more managed devices, and IM messages are sent that are configured to perform the management function on the one or more managed devices. Techniques are also provided herein for establishing, on a network, an enriched presence by a network management server that is configured to perform a management function via a presence function of a messaging and presence protocol.
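A hedged sketch of the chat-room idea (the IMSession class is a placeholder, not a real XMPP or other IM client API): the management server addresses a management command to every managed device whose presence it has detected.

```python
# Placeholder IM session acting as a virtual chat room of managed devices.
class IMSession:
    def __init__(self, members):
        self.members = members       # managed devices whose presence was detected

    def send(self, message: str) -> None:
        # The management function is carried as an ordinary IM message.
        for device in self.members:
            print(f"IM to {device}: {message}")

session = IMSession(["edge-router-1", "access-switch-3"])
session.send("show interface status")
```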