Abstract:
Embodiments of the present invention provide an approach for mapping requirements (e.g., functional and/or non-functional requirements) to components and/or policies of a system topology in a networked computing environment (e.g., a cloud computing environment). In a typical embodiment, a set of functional requirements is mapped to a set of components. A set of dependencies between the set of functional requirements is then identified so that a set of interrelationships between the set of components may be identified. A set of non-functional requirements is then mapped to a set of policies that are then applied to the set of components. Based on the set of components, the set of interrelationships, and the set of policies, a system topology is generated. Upon implementation of the system topology, runtime metrics may be collected as feedback that is utilized to refine the system topology, as well as system topologies deployed in the future.
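The mapping flow above can be illustrated with a minimal sketch. All names here (REQUIREMENT_CATALOG, POLICY_CATALOG, generate_topology, and the sample requirements) are hypothetical illustrations rather than part of the described embodiments; they only show one way the functional-requirement-to-component mapping, the dependency-derived interrelationships, and the policy application could be composed into a topology.

```python
# Minimal sketch of the requirement-to-topology mapping flow described above.
# All catalog contents and names are hypothetical illustrations.

REQUIREMENT_CATALOG = {
    # functional requirement -> component that satisfies it
    "serve_web_pages": "web_server",
    "persist_orders": "database",
    "cache_sessions": "cache",
}

POLICY_CATALOG = {
    # non-functional requirement -> policy applied to components
    "99.9%_availability": {"policy": "ha_cluster", "applies_to": ["web_server", "database"]},
    "encrypt_at_rest": {"policy": "disk_encryption", "applies_to": ["database"]},
}


def map_functional_requirements(functional_reqs):
    """Map each functional requirement to a component."""
    return {req: REQUIREMENT_CATALOG[req] for req in functional_reqs}


def derive_interrelationships(dependencies, req_to_component):
    """Turn requirement-level dependencies into component-level links."""
    return [(req_to_component[a], req_to_component[b]) for a, b in dependencies]


def map_non_functional_requirements(non_functional_reqs):
    """Map each non-functional requirement to a policy definition."""
    return [POLICY_CATALOG[req] for req in non_functional_reqs]


def generate_topology(functional_reqs, dependencies, non_functional_reqs):
    """Combine components, interrelationships, and policies into a topology."""
    req_to_component = map_functional_requirements(functional_reqs)
    components = set(req_to_component.values())
    links = derive_interrelationships(dependencies, req_to_component)
    policies = map_non_functional_requirements(non_functional_reqs)
    return {"components": components, "links": links, "policies": policies}


if __name__ == "__main__":
    topology = generate_topology(
        functional_reqs=["serve_web_pages", "persist_orders"],
        dependencies=[("serve_web_pages", "persist_orders")],
        non_functional_reqs=["99.9%_availability"],
    )
    print(topology)
```

In this sketch, runtime feedback would simply amount to adjusting the catalogs before regenerating the topology; the abstract's feedback loop is not modeled here.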
Abstract:
Embodiments of the present invention provide a distributed approach to request processing. Specifically, in a typical embodiment, a request is received via a cloud dispatcher, which generates and places a corresponding message in a cloud manager queue associated with a set (at least one) of cloud managers. The message is then placed in a cloud node queue associated with a set of cloud nodes that process the message and provide state information related to request processing in an audit queue associated with an audit database. In addition, cloud manager state information is placed in a dispatcher queue associated with the cloud dispatcher. This state information is used by the cloud dispatcher to determine where to place incoming requests. Under these embodiments, each cloud resource runs self-contained management code and performs actions by receiving instructions from a queue. Thus, messages may be directed to a specific resource or broadcast to a “pool” of resources, from which any resource can take the request and process it.
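A minimal sketch of the queue-based message flow follows, using in-process Python queues as stand-ins for the broker-backed cloud manager, cloud node, audit, and dispatcher queues; the function names (dispatcher_receive, manager_step, node_step) are hypothetical.

```python
import queue

# Hypothetical in-process stand-ins for the queues named in the abstract;
# a real deployment would use a message broker rather than queue.Queue.
cloud_manager_queue = queue.Queue()
cloud_node_queue = queue.Queue()
audit_queue = queue.Queue()
dispatcher_queue = queue.Queue()


def dispatcher_receive(request):
    """Cloud dispatcher: wrap the incoming request and hand it to a manager."""
    cloud_manager_queue.put({"request": request})


def manager_step():
    """Cloud manager: forward the message to the node pool and report state."""
    message = cloud_manager_queue.get()
    cloud_node_queue.put(message)
    dispatcher_queue.put({"manager_state": "forwarded", "request": message["request"]})


def node_step():
    """Cloud node: process the message and record state in the audit queue."""
    message = cloud_node_queue.get()
    # ... self-contained management code would act on the request here ...
    audit_queue.put({"request": message["request"], "state": "completed"})


if __name__ == "__main__":
    dispatcher_receive("provision_vm")
    manager_step()
    node_step()
    print(audit_queue.get())       # node's audit record
    print(dispatcher_queue.get())  # manager state used for future placement
```

Because each step only reads from and writes to queues, either a specific resource or any member of a pool of resources could consume a given message, as the abstract describes.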
Abstract:
Embodiments of the present invention provide an approach for validating deployment patterns/topologies (e.g., prior to being deployed) against existing patterns that have already been determined to be compliant (e.g., against a set of policies/standards). In a typical embodiment, individual components of a proposed deployment pattern are identified and then evaluated against previously approved deployment patterns (e.g., based on standards and/or policies). Components of the proposed deployment pattern that are deemed non-compliant are identified, and corrective action(s) may be determined to address any non-compliance (e.g., to bring the non-compliant components into compliance, to remove the non-compliant components, etc.).
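The validation step can be sketched as a simple set comparison. The approved-pattern store, the corrective-action table, and the (component, configuration) encoding below are hypothetical; a real implementation would evaluate richer pattern descriptions against standards and policies.

```python
# Hypothetical approved-pattern store: each approved pattern is a set of
# (component, configuration) pairs already found compliant.
APPROVED_PATTERNS = [
    {("web_server", "tls_enabled"), ("database", "encrypted")},
    {("web_server", "tls_enabled"), ("cache", "private_subnet")},
]

# Hypothetical corrective actions for known non-compliant configurations.
CORRECTIVE_ACTIONS = {
    ("database", "unencrypted"): "enable disk encryption",
}


def validate_pattern(proposed):
    """Flag components of a proposed pattern that no approved pattern contains."""
    approved_components = set().union(*APPROVED_PATTERNS)
    non_compliant = [c for c in proposed if c not in approved_components]
    actions = {c: CORRECTIVE_ACTIONS.get(c, "remove component") for c in non_compliant}
    return non_compliant, actions


if __name__ == "__main__":
    proposed = {("web_server", "tls_enabled"), ("database", "unencrypted")}
    bad, fixes = validate_pattern(proposed)
    print(bad)    # [('database', 'unencrypted')]
    print(fixes)  # {('database', 'unencrypted'): 'enable disk encryption'}
```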
Abstract:
Embodiments of the present invention provide an approach for analyzing operating costs (e.g., metered cost effects) for deployment patterns (and changes thereto) in a networked computing environment. In a typical embodiment, a deployment pattern for the networked computing environment is identified. The deployment pattern may comprise a set of components arranged in a network topology. Moreover, the set of components may be associated with a set of policies (e.g., stored in a computer memory medium and/or computer storage device). A cost analysis algorithm(s) may then be selected for the deployment pattern. The selected algorithm(s) may then be applied (e.g., to the deployment pattern and/or networked computing environment) to analyze the operating costs of the deployment pattern.
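One way to picture the selection and application of a cost analysis algorithm is sketched below. The rates, algorithm names (flat_rate_cost, peak_load_cost), and the peak surcharge factor are hypothetical placeholders, not metered figures from any provider.

```python
# Hypothetical metered hourly rates for components of a deployment pattern.
HOURLY_RATES = {"vm_small": 0.05, "vm_large": 0.20, "block_storage_gb": 0.0001}


def flat_rate_cost(components, hours):
    """Sum metered hourly rates over the requested duration."""
    return sum(HOURLY_RATES[c] for c in components) * hours


def peak_load_cost(components, hours, peak_factor=1.5):
    """Apply a surcharge policy for peak-load windows."""
    return flat_rate_cost(components, hours) * peak_factor


# Selectable cost analysis algorithms, keyed by name.
COST_ALGORITHMS = {"flat": flat_rate_cost, "peak": peak_load_cost}


def analyze_deployment_cost(pattern, algorithm="flat", hours=24 * 30):
    """Select a cost algorithm and apply it to the deployment pattern."""
    return COST_ALGORITHMS[algorithm](pattern["components"], hours)


if __name__ == "__main__":
    pattern = {"components": ["vm_small", "vm_large", "block_storage_gb"]}
    print(analyze_deployment_cost(pattern, algorithm="flat"))
    print(analyze_deployment_cost(pattern, algorithm="peak"))
```

Comparing the output for two candidate patterns (or for a pattern before and after a change) would give the kind of cost-effect analysis the abstract describes.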
Abstract:
Embodiments of the present invention provide an intelligent node controller (e.g., for an endpoint/node such as a cloud node) to process requests. Specifically, (among other things) the node controller will read a request message from a cloud node queue that is associated with the endpoint. The request message typically includes details related to a request for cloud resources and/or services received from a consumer. The node controller executes program code in an attempt to process the request. As the request is being processed, the node controller can place state messages indicating a state of fulfillment of the request on a cloud manager queue that is associated with a cloud manager from which the request message was received. In addition, the node controller can update an audit database via an audit queue with the state messages. When a request cannot be processed, the node controller can place a failure message in a triage queue or the like.
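A minimal sketch of the node controller's read-process-report loop follows, assuming in-process queues in place of the cloud node, cloud manager, audit, and triage queues; process_request and the message fields are hypothetical.

```python
import queue

# Hypothetical in-process stand-ins for the broker-backed queues in the abstract.
cloud_node_queue = queue.Queue()
cloud_manager_queue = queue.Queue()
audit_queue = queue.Queue()
triage_queue = queue.Queue()


def process_request(request):
    """Placeholder for the program code that fulfils a request."""
    if request.get("action") != "provision":
        raise ValueError("unsupported action")
    return "fulfilled"


def node_controller_step():
    """Read one request message, process it, and report state or failure."""
    message = cloud_node_queue.get()
    try:
        state = process_request(message["request"])
        state_message = {"request_id": message["id"], "state": state}
        cloud_manager_queue.put(state_message)  # report back to the cloud manager
        audit_queue.put(state_message)          # update the audit trail
    except Exception as error:
        triage_queue.put({"request_id": message["id"], "error": str(error)})


if __name__ == "__main__":
    cloud_node_queue.put({"id": 1, "request": {"action": "provision"}})
    cloud_node_queue.put({"id": 2, "request": {"action": "unknown"}})
    node_controller_step()
    node_controller_step()
    print(audit_queue.get())   # state message for request 1
    print(triage_queue.get())  # failure message for request 2
```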
Abstract:
Embodiments of the present invention provide a self-updating node controller (e.g., for an endpoint/node such as a cloud node). In general, the node controller will autonomously and automatically obtain program code (e.g., scripts) from a central repository. Among other things, the program code enables the node controller to: receive a request message from a cloud node queue associated with the endpoint; process a request corresponding to the request message; automatically update the program code as needed (e.g., when requests cannot be processed/fulfilled); place a state message indicating a state of fulfillment of the request in a cloud manager queue associated with a cloud manager from which the request message was received; update an audit database to reflect the state of fulfillment; and/or place a failure message in a triage queue if the request cannot be processed by the node controller.
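The self-updating behavior can be sketched as follows. CENTRAL_REPOSITORY, the version numbering, and the handler functions are hypothetical stand-ins for scripts that a real controller would fetch over the network from a central repository.

```python
# Hypothetical central repository of controller scripts, keyed by version.
CENTRAL_REPOSITORY = {
    1: {"provision": lambda req: "provisioned"},
    2: {"provision": lambda req: "provisioned", "resize": lambda req: "resized"},
}


class SelfUpdatingNodeController:
    def __init__(self, version=1):
        self.version = version
        self.handlers = dict(CENTRAL_REPOSITORY[version])

    def update_program_code(self):
        """Pull newer scripts from the central repository if any are available."""
        latest = max(CENTRAL_REPOSITORY)
        if latest > self.version:
            self.version = latest
            self.handlers = dict(CENTRAL_REPOSITORY[latest])

    def handle(self, request):
        """Process a request, self-updating when the action is unknown."""
        action = request["action"]
        if action not in self.handlers:
            self.update_program_code()  # update the program code as needed
        handler = self.handlers.get(action)
        if handler is None:
            # in the abstract, this failure would be placed in a triage queue
            return {"state": "failed", "triage": True}
        return {"state": handler(request)}


if __name__ == "__main__":
    controller = SelfUpdatingNodeController()
    print(controller.handle({"action": "provision"}))  # handled by the initial code
    print(controller.handle({"action": "resize"}))     # triggers a self-update first
```

State and audit reporting would follow the same queue-based pattern shown in the node controller sketch above and is omitted here for brevity.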
Abstract:
Embodiments of the present invention provide an approach for managing application template artifacts throughout an application's lifecycle in a networked computing environment (e.g., a cloud computing environment). In a typical embodiment, a workload template is assigned to each phase of a set of successive phases of the application's lifecycle. Each template typically refers to a template in a preceding phase of the lifecycle. Moreover, the templates may contain pointers to artifacts used in the phases assigned thereto. Any changes occurring in the artifacts/phases are propagated to the corresponding templates so as to automatically manage application lifecycle operations.
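A minimal sketch of linked phase templates with artifact-pointer propagation follows. The PhaseTemplate class, the phase names, and the pointer format are hypothetical; they only illustrate how a change recorded in one phase's template could propagate automatically to the templates of later phases.

```python
class PhaseTemplate:
    """Workload template for one lifecycle phase, linked to its predecessor."""

    def __init__(self, phase, parent=None):
        self.phase = phase
        self.parent = parent          # template of the preceding phase
        self.artifact_pointers = {}   # artifact name -> location/version pointer
        self.children = []
        if parent is not None:
            parent.children.append(self)
            # inherit pointers from the preceding phase's template
            self.artifact_pointers.update(parent.artifact_pointers)

    def update_artifact(self, name, pointer):
        """Record an artifact change and propagate it to later phases."""
        self.artifact_pointers[name] = pointer
        for child in self.children:
            child.update_artifact(name, pointer)


if __name__ == "__main__":
    dev = PhaseTemplate("development")
    test = PhaseTemplate("test", parent=dev)
    prod = PhaseTemplate("production", parent=test)

    # A hypothetical artifact change in development propagates forward.
    dev.update_artifact("app_binary", "builds/app-1.0")
    print(prod.artifact_pointers)  # change propagated through test to production
```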