Abstract:
Computer applications may generate a large volume of different types of record data. In one example, the large volume of record data may represent millions of different processes occurring every second. Described herein are systems, methods, and devices for generating parsed data based on the large volume of record data. The parsed data may be consumed by computing nodes within a designated amount of time after the record data is generated.
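The abstract does not specify a record format or freshness bound, so the following is only a minimal sketch of the idea: parse raw records into a structured form and keep those that can still be consumed within a designated time of their generation. The JSON field names and the `MAX_CONSUMPTION_DELAY_S` constant are illustrative assumptions, not part of the described system.

```python
import json
import time
from dataclasses import dataclass

# Hypothetical freshness window (seconds) between generation and consumption.
MAX_CONSUMPTION_DELAY_S = 5.0

@dataclass
class ParsedRecord:
    source: str
    event_type: str
    generated_at: float  # epoch seconds when the record was produced
    payload: dict

def parse_record(raw_line: str) -> ParsedRecord:
    """Parse one raw JSON-encoded record into a structured form."""
    fields = json.loads(raw_line)
    return ParsedRecord(
        source=fields["source"],
        event_type=fields["type"],
        generated_at=float(fields["ts"]),
        payload=fields.get("data", {}),
    )

def consume(records: list[str]) -> list[ParsedRecord]:
    """Parse records and keep only those still within the freshness window."""
    fresh = []
    now = time.time()
    for raw in records:
        record = parse_record(raw)
        if now - record.generated_at <= MAX_CONSUMPTION_DELAY_S:
            fresh.append(record)
    return fresh
```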
Abstract:
Examples of systems and methods are described for managing computing capacity by a provider of computing resources. The computing resources may include program execution capabilities, data storage or management capabilities, network bandwidth, etc. Multiple user programs can consume a single computing resource, and a single user program can consume multiple computing resources. Changes in usage and other environmental factors can require scaling of the computing resources to reduce or prevent a negative impact on performance. In some implementations, a fuzzy logic engine can be used to determine the appropriate adjustments to make to the computing resources associated with a program in order to keep a system metric within a desired operating range.
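As a rough illustration of the fuzzy-logic idea, the sketch below maps a single system metric (CPU utilization is assumed here for concreteness) onto "low" and "high" membership functions and defuzzifies the rule outputs into a signed node-count adjustment. The thresholds and step size are invented for the example; the abstract does not prescribe them.

```python
def membership_low(x, low=40.0, mid=60.0):
    """Degree to which the metric is 'low' (1 below low, 0 above mid)."""
    if x <= low:
        return 1.0
    if x >= mid:
        return 0.0
    return (mid - x) / (mid - low)

def membership_high(x, mid=60.0, high=80.0):
    """Degree to which the metric is 'high' (0 below mid, 1 above high)."""
    if x <= mid:
        return 0.0
    if x >= high:
        return 1.0
    return (x - mid) / (high - mid)

def recommend_adjustment(cpu_utilization, max_step=4):
    """Fuzzy rules: 'if high, add capacity; if low, remove capacity'.
    The defuzzified output is a signed number of compute nodes to add."""
    add_strength = membership_high(cpu_utilization)
    remove_strength = membership_low(cpu_utilization)
    return round((add_strength - remove_strength) * max_step)

# A heavily loaded group gains nodes, a lightly loaded one sheds them,
# and a group inside the desired operating range is left alone.
print(recommend_adjustment(90.0))   #  4
print(recommend_adjustment(20.0))   # -4
print(recommend_adjustment(60.0))   #  0
```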
Abstract:
The techniques described herein provide software testing that may concurrently process a user request using a live version of software and a shadow request, derived from the user request, using a shadow version of the software (e.g., a trial or test version). The live version, unlike the shadow version, is user-facing and transmits data back to the user; the shadow request produces no output to the user. An allocation module may vary the allocation of shadow requests to enable a ramp-up (or ramp-down) of traffic to the shadow version of the software. The allocation module may use allocation rules to dynamically initiate shadow requests based on factors such as load balancing, user attributes, and/or other rules or logic. Thus, not all user requests may be issued as shadow requests.
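A minimal sketch of such an allocation decision follows. The shadow-traffic fraction, the excluded user tier, and the load guard are hypothetical allocation rules chosen for illustration; the abstract only says that allocation may depend on load balancing, user attributes, and other logic.

```python
import random

# Hypothetical allocation rules; names and values are illustrative only.
SHADOW_TRAFFIC_FRACTION = 0.10      # ramp this up or down over time
EXCLUDED_USER_TIERS = {"premium"}   # example user-attribute rule

def should_shadow(user_tier: str, current_shadow_load: float,
                  max_shadow_load: float = 0.8) -> bool:
    """Decide whether a user request should also be issued as a shadow request."""
    if user_tier in EXCLUDED_USER_TIERS:
        return False
    if current_shadow_load >= max_shadow_load:   # simple load-balancing guard
        return False
    return random.random() < SHADOW_TRAFFIC_FRACTION

def handle_request(request, user_tier, current_shadow_load,
                   live_service, shadow_service):
    """Always serve the live response; optionally mirror to the shadow version."""
    live_response = live_service(request)        # user-facing result
    if should_shadow(user_tier, current_shadow_load):
        shadow_service(request)                  # result is recorded, never returned
    return live_response
```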
Abstract:
Systems and methods are provided for analyzing operating metrics of monitored metric sources. Aspects of the present disclosure may present for display information associated with the monitored metric source and the analysis of its operating metrics. Analysis comprises determining reference values and tolerance levels, which represent allowable deviations from the reference values. Input data includes a measurement of an operating parameter and a time stamp. Input data may be saved to a data store for use in future analysis of other input data. When input data is determined to be outside the tolerance level, notifications may be issued to alert administrators or systems of the anomaly.
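A minimal sketch of the reference-value/tolerance check is shown below, assuming the reference value is a historical mean and the tolerance is a multiple of the standard deviation; the abstract does not fix how either is computed, so those choices are assumptions.

```python
import statistics
from typing import NamedTuple

class Measurement(NamedTuple):
    timestamp: float   # time stamp of the sample
    value: float       # measurement of the operating parameter

def reference_and_tolerance(history: list[Measurement], num_sigmas: float = 3.0):
    """Derive a reference value and the allowable deviation from stored history."""
    values = [m.value for m in history]
    reference = statistics.mean(values)
    tolerance = num_sigmas * statistics.pstdev(values)
    return reference, tolerance

def check_measurement(sample: Measurement, history: list[Measurement], notify) -> None:
    """Compare new input data against the tolerance band; notify on anomalies."""
    reference, tolerance = reference_and_tolerance(history)
    if abs(sample.value - reference) > tolerance:
        notify(f"Anomaly at {sample.timestamp}: {sample.value} "
               f"outside {reference} +/- {tolerance}")
    history.append(sample)   # retained for future analysis of later input data
```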
Abstract:
Techniques are described for performing automated predictions of program execution capacity or other capacity of computing-related hardware resources that will be used to execute software programs in the future, such as for a group of computing nodes that execute one or more programs for a user. The predictions may, in at least some situations, be based on historical data regarding prior actual usage of execution-related capacity (e.g., for one or more prior years), and may include long-term predictions for particular future time periods that are multiple months or years in the future. In addition, the predicted execution-related capacity for particular future time periods may be used in various manners, including to manage execution-related capacity at or before those future time periods, such as by preparing sufficient execution-related capacity to be available at those times.
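The following is only a rough sketch of one way such a long-term prediction could be formed from prior-year usage: take the peak usage observed for each calendar month across prior years and scale it by an assumed year-over-year growth rate. The growth rate, the use of per-month peaks, and the data values are illustrative assumptions, not the described method.

```python
from collections import defaultdict

def predict_monthly_capacity(history, growth_rate=1.15):
    """history: (year, month, nodes_used) tuples of prior actual usage.
    Returns a per-month capacity prediction for the coming year."""
    by_month = defaultdict(list)
    for year, month, nodes_used in history:
        by_month[month].append(nodes_used)
    return {month: max(values) * growth_rate for month, values in by_month.items()}

# Example: seasonal usage peaking in December over two prior years.
usage = [(2022, 11, 120), (2022, 12, 300), (2023, 11, 150), (2023, 12, 340)]
forecast = predict_monthly_capacity(usage)
print(forecast)   # e.g. prepare ~391 nodes for December, well in advance
```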
Abstract:
A method is provided for estimating past data by identifying a high-frequency data set for a defined time period. A pattern is calculated for the high-frequency data set, and the pattern is then applied to a low-frequency data set in a past time period to estimate a high-frequency query point.
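A minimal sketch of the pattern-transfer idea follows, assuming the high-frequency data is hourly and the low-frequency data is a daily total: derive each hour's share of the daily total from the recent high-frequency series, then apply that share to an old daily total to estimate the value at a specific past hour. The eight-"hour" toy data and function names are illustrative.

```python
def hourly_pattern(recent_hourly):
    """recent_hourly: list of (hour_of_day, value) samples from the
    high-frequency data set. Returns each hour's share of the total."""
    totals = {}
    for hour, value in recent_hourly:
        totals[hour] = totals.get(hour, 0.0) + value
    grand_total = sum(totals.values())
    return {hour: value / grand_total for hour, value in totals.items()}

def estimate_past_point(daily_total, hour_of_day, pattern):
    """Estimate a high-frequency query point in a past period from a
    low-frequency (daily) total and the learned hourly pattern."""
    return daily_total * pattern[hour_of_day]

recent = list(enumerate([2, 1, 1, 4, 8, 6, 3, 2]))   # 8 "hours" for brevity
pattern = hourly_pattern(recent)
print(estimate_past_point(daily_total=270, hour_of_day=4, pattern=pattern))  # 80.0
```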