Abstract:
An energy-efficient cloud-based environment includes multiple users requesting delivery of cloud-based services from a cloud service provider. Each user provides inputs for the delivery of one or more specific cloud-based services. The inputs include the type of service, the time frame for using the service, and either an energy-efficiency level or a performance level pertaining to the service's delivery. The service provider allocates different resources to the users for delivering the requested services, and calculates an actual price and an operating energy cost for delivering the requested services to each user. The profit of the service provider attributable to each user is calculated. An overall profit of the service provider associated with delivering the cloud-based services is calculated, and a fraction of the overall profit is distributed as an incentive among the users, with each user's incentive proportional to that user's profit contribution.
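The proportional incentive split described above can be sketched as follows; the function name, the 10% incentive fraction, and the per-user profit dictionary are illustrative assumptions, not details from the disclosure:

```python
def distribute_incentive(profits, incentive_fraction=0.1):
    """Distribute a fraction of overall profit as incentives,
    proportional to each user's profit contribution.

    profits: mapping of user -> profit attributable to that user.
    incentive_fraction: share of overall profit returned to users
    (the 0.1 default is an assumption for illustration).
    """
    overall = sum(profits.values())
    pool = incentive_fraction * overall
    # Each user's incentive is proportional to their contribution.
    return {user: pool * p / overall for user, p in profits.items()}
```

For example, with profits of 30 and 70 units and a 10% incentive pool, the users would receive 3 and 7 units respectively.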
Abstract:
A method and apparatus for providing a resource allocation policy in a network are disclosed. For example, the method constructs a queuing model for each application. The method defines a utility function for each application and for each transaction type of each application, and defines an overall utility in a system. The method performs an optimization to identify an optimal configuration that maximizes the overall utility for a given workload, and determines one or more adaptation policies for configuring the system in accordance with the optimal configuration.
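The optimization step above, selecting the configuration that maximizes overall utility for a given workload, can be sketched minimally; the exhaustive search over candidate configurations and the utility-function signatures are assumptions standing in for the unspecified optimizer:

```python
def overall_utility(config, workload, utility_fns):
    # Overall utility is taken here as the sum of per-application
    # (and per-transaction-type) utilities for this configuration.
    return sum(u(config, workload) for u in utility_fns)

def best_configuration(candidates, workload, utility_fns):
    # Exhaustive search over candidate configurations stands in
    # for the optimization described in the abstract.
    return max(candidates,
               key=lambda c: overall_utility(c, workload, utility_fns))
```

A derived adaptation policy would then map observed workloads to the configurations this search identifies.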
Abstract:
An embodiment generally relates to systems and methods for improving system performance by reducing fragility of computing systems. A processing module can identify separate ensemble files each comprising interpretations, by separate entities of a workflow, of a phrase in a file. The processing module can compare the interpretations to determine if the interpretations are the same or essentially the same. If the interpretations are neither the same nor essentially the same, a subsequent entity in the workflow can create a new file that replaces an associated interpretation of the phrase with a common interpretation. The subsequent entity can proceed with an intended operation.
Abstract:
A method for computing an energy rating for cloud-based software services is disclosed. For each service, the following steps are performed. The method includes identifying configuration parameters impacting energy consumption. The method further includes determining a value for each configuration parameter. Further, the method includes determining a relative energy rating using a pre-determined equation, based on the values of the configuration parameters. Finally, the method includes assigning a discrete value based on the range of the relative energy rating.
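The rating steps can be sketched as follows; the linear weighted form of the "pre-determined equation" and the A/B/C rating bands are assumptions, since the abstract does not specify them:

```python
def relative_energy_rating(params, weights):
    """Combine configuration-parameter values into a relative rating.
    A weighted linear sum is assumed; the actual equation is unspecified."""
    return sum(weights[k] * v for k, v in params.items())

def discrete_rating(relative, bands=((0.33, 'A'), (0.66, 'B'), (1.0, 'C'))):
    """Assign a discrete value based on the range the relative
    rating falls into (band boundaries are illustrative)."""
    for upper, label in bands:
        if relative <= upper:
            return label
    return bands[-1][1]
```

A service with a relative rating of 0.5 under these assumed bands would receive the discrete rating 'B'.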
Abstract:
A method, a system and a computer program product for handling requests in a network are disclosed. A load pattern at a first service component is extracted. A capacity and pending requests at the first service component are calculated based on the load pattern. Thereafter, an insertion delay is calculated based on the capacity, pending requests, and a time period required to increase the capacity by applying various alternative adaptation techniques. The insertion delay is then distributed among a plurality of upstream service components.
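One simplified reading of the insertion-delay computation and its upstream distribution can be sketched as follows; the overload formula and the weight-proportional split are assumptions for illustration:

```python
def insertion_delay(capacity, pending, ramp_up_time):
    """Delay new requests when pending work exceeds capacity,
    scaled by the time needed to bring extra capacity online
    (a simplified stand-in for the abstract's calculation)."""
    if pending <= capacity:
        return 0.0
    overload = (pending - capacity) / pending
    return overload * ramp_up_time

def distribute_delay(delay, upstream_weights):
    # Spread the delay among upstream service components,
    # proportional to an assumed per-component weight.
    total = sum(upstream_weights.values())
    return {c: delay * w / total for c, w in upstream_weights.items()}
```

With capacity 10, 20 pending requests, and a 30-second ramp-up, this sketch yields a 15-second delay, which two equally weighted upstream components would absorb as 7.5 seconds each.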
Abstract:
A service workflow including ordered services is received, and a heuristic utility value is calculated for each service. A best node having a smallest heuristic utility value for a service is selected, and a best node identifier is placed in a node list. If the best node includes a parallel sub-workflow, potential next nodes are identified by generating potential next nodes from a data center that can perform a service associated with the best node with a minimum run-time value. Otherwise, potential next nodes are generated based on a data center associated with the service. A heuristic utility value is determined for each potential next node, and a new best node is selected based on the heuristic utility values. The identifying, determining, and selecting operations are repeated until the best node contains only the last ordered service. Data centers for each ordered service are identified based on the best node.
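The repeated identify/determine/select loop above is a best-first search over a frontier of candidate nodes ordered by heuristic utility. A generic sketch, with the node representation, expansion, and goal test left as caller-supplied assumptions:

```python
import heapq

def best_first(start, heuristic, expand, is_goal):
    """Repeatedly select the node with the smallest heuristic utility,
    record it, and expand its potential next nodes, until the selected
    node satisfies the goal test (here: contains only the last service)."""
    frontier = [(heuristic(start), start)]
    node_list = []  # identifiers of selected best nodes, in order
    while frontier:
        _, node = heapq.heappop(frontier)
        node_list.append(node)
        if is_goal(node):
            return node, node_list
        for nxt in expand(node):
            heapq.heappush(frontier, (heuristic(nxt), nxt))
    return None, node_list
```

In the abstract's setting, `expand` would generate candidate data-center assignments for the next ordered service (restricted to minimum-run-time data centers inside parallel sub-workflows), and the final best node determines the data center for each service.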
Abstract:
In a method and system for providing one or more cloud services in a cloud, one or more high-level parameters are received from a customer through a service requisition interface; a cloud configuration is generated along with a Service Level Agreement (SLA) based on the received high-level parameters; the generated cloud configuration along with the SLA is recommended to the customer through an output interface; the customer is allowed to negotiate the SLA recommendation through the service requisition interface, based on a tradeoff among the one or more high-level parameters at a service layer of the cloud, through an SLA negotiation system; and the one or more cloud services are provided to the customer based on the negotiated SLA.
Abstract:
A method, a system, and a computer program product for translating a document are disclosed. A document in a source language is received, and text snippets are extracted from it. The text snippets are sent to a first set of remote workers for translation and a second set of remote workers for validation. The words in the validated text snippets are assigned a probability score. The words with the highest probability score are combined to generate the translated document.
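The combination step can be sketched as word-level voting, using each word's frequency among the validated snippets as its probability score; aligned equal-length candidate translations are an assumption for illustration:

```python
from collections import Counter

def combine_translations(candidates):
    """For each word position, keep the candidate word with the highest
    empirical probability (frequency among validated translations).
    Assumes the candidates are word-aligned and of equal length."""
    combined = []
    for position_words in zip(*candidates):
        word, _ = Counter(position_words).most_common(1)[0]
        combined.append(word)
    return ' '.join(combined)
```

For three validated candidates ['the', 'cat'], ['the', 'dog'], ['the', 'cat'], the sketch emits "the cat", since 'cat' has the higher score at the second position.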
Abstract:
Methods and systems of performing data mining may include receiving a plurality of web log records and a plurality of call log records; associating one or more web log records with a call log record, wherein the associated user for each of the associated one or more web log records and the call log record is the same; identifying one or more patterns among the web log records for the plurality of call log records, wherein each pattern comprises one or more web accesses, a time stamp at which each of the one or more web accesses is performed, and the call topic for the call log record; identifying one or more web log records associated with a new call; and predicting a call topic for the new call based on at least one pattern and the one or more web log records.
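The prediction step can be sketched as matching the new call's web accesses against the mined patterns; the overlap-based score and the pattern dictionaries are illustrative assumptions (the abstract's patterns also carry time stamps, omitted here):

```python
def predict_call_topic(new_web_accesses, patterns):
    """Score each mined pattern by how many of its web accesses appear
    in the new call's web log records, and return the best-scoring
    pattern's call topic (None if nothing matches)."""
    def score(pattern):
        return len(set(pattern['accesses']) & set(new_web_accesses))
    best = max(patterns, key=score)
    return best['topic'] if score(best) > 0 else None
```

For example, a new call preceded by web accesses to a password-reset page would match a mined pattern whose accesses include 'reset' and be predicted as a password-related call.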