Abstract:
Methods and systems for load generation for scalable load testing are disclosed. A plurality of job descriptions are generated based on a load step description. The load step description specifies a total transaction frequency or a total number of concurrent connections for a load test of a service over a period of time. The job descriptions specify subdivisions of the total transaction frequency or the total number of concurrent connections and subdivisions of the period of time. The job descriptions are placed in a job queue. A plurality of worker hosts remove the job descriptions from the job queue and concurrently execute local jobs based on the job descriptions.
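As a rough illustration of the flow in this abstract, the sketch below (Python, with invented names such as generate_jobs and a fixed worker count; none of this comes from the abstract itself) subdivides a load step into per-worker, per-interval jobs, places them in a queue, and has worker threads drain the queue concurrently:

# Hypothetical sketch: subdividing a load step into jobs and executing them
# with concurrent workers. All names and parameters are illustrative.
import queue
import threading
import time

def generate_jobs(total_tps, duration_s, num_workers, step_s=10):
    """Split a load step (total transactions/sec over duration_s) into jobs,
    each covering a slice of the rate and a slice of the time period."""
    jobs = []
    for start in range(0, duration_s, step_s):
        for _ in range(num_workers):
            jobs.append({
                "tps": total_tps / num_workers,          # subdivision of the rate
                "start_offset_s": start,                 # subdivision of the period
                "duration_s": min(step_s, duration_s - start),
            })
    return jobs

def worker(job_queue):
    while True:
        try:
            job = job_queue.get_nowait()
        except queue.Empty:
            return
        time.sleep(0.01)   # a real worker would issue requests at job["tps"] here
        job_queue.task_done()

job_queue = queue.Queue()
for job in generate_jobs(total_tps=1000, duration_s=60, num_workers=4):
    job_queue.put(job)
threads = [threading.Thread(target=worker, args=(job_queue,)) for _ in range(4)]
for t in threads:
    t.start()
job_queue.join()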
Abstract:
A network-based production service is configured to process client requests for the production service via a network, capture production request data defining the requests, and store the production request data in a data store. A test system comprising one or more controllers creates test jobs according to a test plan for testing the production service. The test plan defines a test profile for using specified production request data to simulate a load on the production service, and each job created according to the test plan specifies a portion of that production request data. A job queue receives test jobs from the one or more controllers, which add jobs to the queue according to the test plan. Workers take jobs from the job queue, retrieve the production request data specified by each job from the data store, and replay that data to the production service.
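A minimal sketch of such a replay worker, assuming a hypothetical job format (dataset name plus start/end offsets) and a data_store object with a read() method; only the Python standard library is used:

# Illustrative replay worker; the job fields and data_store interface are assumptions.
import urllib.request

def replay_job(job, data_store, service_url):
    """Replay the portion of captured production requests named by the job."""
    for record in data_store.read(job["dataset"], job["start"], job["end"]):
        body = record.get("body")
        req = urllib.request.Request(
            service_url + record["path"],
            data=body.encode() if body else None,
            method=record["method"],
            headers=record.get("headers", {}),
        )
        with urllib.request.urlopen(req) as resp:
            resp.read()   # a real worker would also record status and latency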
Abstract:
Proposed updates to systems are evaluated in a manner that is automated and horizontally scalable. Input provided to a first system is also provided to a second system. Both systems process the input and each generates output. The output from the two systems is analyzed and differences in the output data are identified. The analysis may be performed by a fleet of data processing units, and the work may be divided so that differences in the output data are traceable to the subsystems of the second system that caused them.
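The comparison step might look like the following sketch, which assumes (purely for illustration) that each system exposes a process() call returning output fields annotated with the subsystem that produced them:

# Hypothetical dual-run comparison; the process() interface and output shape are assumptions.
def compare_outputs(input_record, system_a, system_b):
    out_a = system_a.process(input_record)   # e.g. {"field": ("value", "subsystem")}
    out_b = system_b.process(input_record)
    differences = []
    for field in out_a.keys() | out_b.keys():
        value_a, _ = out_a.get(field, (None, None))
        value_b, subsystem_b = out_b.get(field, (None, None))
        if value_a != value_b:
            # Record which subsystem of the proposed (second) system produced the difference.
            differences.append({"field": field, "expected": value_a,
                                "actual": value_b, "subsystem": subsystem_b})
    return differences

Because each input record is compared independently, a fleet of such comparison units can each take a shard of the inputs.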
Abstract:
Embodiments presented herein provide techniques for evaluating an asynchronous application using a test framework. The test framework may perform a load test of an asynchronous application or service composed from a collection of applications or services. To do so, the test framework may submit transactions to a distributed application at a specified transaction rate and monitor how the distributed application operates at that transaction rate. An aggregate load test component may evaluate the remaining work pending at work accumulation points of the distributed application to determine whether the distributed application can sustain the specified transaction rate. A transaction tracking component may initiate transactions to generate load at the specified transaction rate without blocking while the transactions are processed by the distributed application.
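One way to picture the two components is the asyncio sketch below; the client.submit_transaction() call and the pending_work() accessor on accumulation points are invented for illustration:

# Hypothetical non-blocking load generation and backlog monitoring.
import asyncio

async def generate_load(client, tps, duration_s):
    """Fire transactions at the target rate without waiting for each to finish."""
    pending = []
    for _ in range(int(tps * duration_s)):
        pending.append(asyncio.create_task(client.submit_transaction()))
        await asyncio.sleep(1.0 / tps)   # pace submissions; completions proceed concurrently
    await asyncio.gather(*pending, return_exceptions=True)

async def backlog_is_stable(accumulation_points, interval_s=10, samples=3):
    """Sample pending work at each accumulation point; a steadily growing backlog
    suggests the application cannot sustain the offered transaction rate."""
    history = []
    for _ in range(samples):
        history.append([p.pending_work() for p in accumulation_points])
        await asyncio.sleep(interval_s)
    return all(later <= earlier * 1.05
               for earlier, later in zip(history[0], history[-1]))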
Abstract:
Methods, systems, and computer-readable media for ordered test execution based on code coverage are disclosed. A suite of tests is executed on a first version of program code to generate data indicative of code coverage of respective tests with respect to the program code. A mapping of the tests to the program code is determined based at least in part on the data indicative of code coverage and is stored. The mapping comprises data indicative of one or more portions of the program code exercised by respective tests from the suite. Based at least in part on the mapping of the tests to the program code and on data indicative of one or more modified or new portions of a second version of the program code, a subset of the tests is determined that is likely to exercise the modified or new portions of the second version of the program code.
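A compact sketch of the selection logic, assuming a simple mapping of test names to covered code portions (the file:function keys are illustrative):

# Hypothetical coverage-based test selection.
def build_mapping(coverage_data):
    """coverage_data: {test_name: iterable of code portions exercised by that test}."""
    return {test: set(portions) for test, portions in coverage_data.items()}

def select_tests(mapping, changed_portions):
    """Return the subset of tests whose covered portions intersect the changed code."""
    changed = set(changed_portions)
    return [test for test, portions in mapping.items() if portions & changed]

mapping = build_mapping({
    "test_checkout": {"cart.py:total", "cart.py:apply_coupon"},
    "test_login":    {"auth.py:verify_password"},
})
select_tests(mapping, ["cart.py:apply_coupon"])   # -> ["test_checkout"]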
Abstract:
Methods, systems, and computer-readable media for determining the maximum throughput of a service are disclosed. A first sequence of load tests is initiated for a service host. Individual ones of the load tests comprise determining a respective throughput at the service host for a respective number of concurrent connections to the service host. The number of concurrent connections increases nonlinearly in at least a portion of the first sequence of load tests. The first sequence of load tests is discontinued when the throughput does not increase by at least a threshold amount from one load test to the next. An estimated maximum throughput for the service host is determined based at least in part on the first sequence of load tests. The estimated maximum throughput corresponds to a particular number of concurrent connections to the service host.
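The search described here can be pictured as follows; run_load_test, the geometric growth factor, and the 5% gain threshold are assumptions chosen for illustration:

# Hypothetical throughput search with a nonlinearly increasing connection count.
def estimate_max_throughput(run_load_test, start_connections=1, max_connections=4096,
                            growth_factor=2, min_gain=0.05):
    """Increase concurrent connections geometrically and stop when throughput
    no longer improves by at least min_gain (as a fraction)."""
    connections = start_connections
    best_throughput, best_connections = 0.0, connections
    while connections <= max_connections:
        throughput = run_load_test(connections)   # observed transactions/sec at this level
        if best_throughput and throughput < best_throughput * (1 + min_gain):
            break                                  # gain below threshold: discontinue the sequence
        if throughput > best_throughput:
            best_throughput, best_connections = throughput, connections
        connections *= growth_factor               # nonlinear (geometric) increase
    return best_throughput, best_connections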
Abstract:
Methods and systems for automated tuning of a service configuration are disclosed. An optimal configuration for a test computer is selected by performing one or more load tests using the test computer for each of a plurality of test configurations. The performance of a plurality of additional test computers configured with the optimal configuration is automatically determined by performing additional load tests using the additional test computers. A plurality of production computers are automatically configured with the optimal configuration if the performance of the additional test computers is improved with the optimal configuration.
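The three stages can be sketched as below; measure() and apply() are hypothetical hooks that run a load test against a host and push a configuration to it, and candidate configurations are referred to by hashable identifiers:

# Hypothetical tuning flow: pick a configuration on one host, confirm on more, then roll out.
def tune(candidate_configs, baseline_config, test_host, extra_hosts,
         production_hosts, measure, apply):
    # Step 1: load-test each candidate configuration on a single test host.
    scores = {cfg: measure(test_host, cfg) for cfg in candidate_configs}
    optimal = max(scores, key=scores.get)

    # Step 2: confirm the optimal configuration on additional test hosts.
    baseline = sum(measure(h, baseline_config) for h in extra_hosts) / len(extra_hosts)
    tuned = sum(measure(h, optimal) for h in extra_hosts) / len(extra_hosts)

    # Step 3: configure production hosts only if performance actually improved.
    if tuned > baseline:
        for host in production_hosts:
            apply(host, optimal)
    return optimal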
Abstract:
A generic transaction generator framework for testing a network-based production service may work in conjunction with a product-specific transaction creator module that executes transactions on the service. The transaction creator module may include runtime-discoverable information, such as source code annotations, to communicate product-specific details to the framework. Runtime-discoverable information may identify transaction types and transaction methods, as well as dependencies between different transaction types and transaction methods. The framework may generate and execute various test transactions and, prior to calling a generated transaction, may call a substituted transaction method for a transaction type on which the generated transaction depends. The output from the substituted transaction may then be used as input when the generated transaction is subsequently executed. Various data structures may be used to maintain information regarding which transactions have been substituted and to store data for use as input to subsequent transactions.
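In Python terms, runtime-discoverable metadata and dependency-driven substitution could look roughly like this (decorators stand in for the source code annotations mentioned above; every name here is illustrative):

# Hypothetical transaction registry with runtime-discoverable dependencies.
TRANSACTIONS = {}

def transaction(name, depends_on=None):
    """Register a transaction method and its dependency so the framework can
    discover both at runtime."""
    def register(func):
        TRANSACTIONS[name] = {"method": func, "depends_on": depends_on}
        return func
    return register

@transaction("create_order")
def create_order(service):
    return service.post("/orders", {})

@transaction("cancel_order", depends_on="create_order")
def cancel_order(service, order):
    return service.post(f"/orders/{order['id']}/cancel", {})

def run(name, service, substituted_outputs):
    """Run the dependency first (or reuse its stored output), then feed that output
    into the generated transaction."""
    entry = TRANSACTIONS[name]
    dep = entry["depends_on"]
    if dep and dep not in substituted_outputs:
        substituted_outputs[dep] = run(dep, service, substituted_outputs)
    args = (substituted_outputs[dep],) if dep else ()
    return entry["method"](service, *args)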
Abstract:
Techniques are described for aggregating code coverage data generated from various types of testing of software modules, and automatically determining whether to promote software upwards in a multi-level software deployment hierarchy based on the aggregated code coverage data. In embodiments, a code coverage metric is determined for a software module, and the metric is compared to a set of promotion criteria, including whether the metric meets a predetermined threshold for quality. In some cases, the threshold may be a general threshold, a threshold based on the level of possible promotion, and/or a threshold that is based on an identified category for the software module such as whether the module is a front-end module, a shared module, a legacy module, or a critical module.
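A small sketch of the promotion check; the per-category thresholds below are invented for illustration and not taken from the description:

# Hypothetical aggregation of coverage reports and threshold-based promotion decision.
CATEGORY_THRESHOLDS = {"front-end": 70.0, "shared": 85.0, "legacy": 50.0, "critical": 90.0}

def aggregate_coverage(coverage_reports):
    """Combine covered/total line counts from unit, integration, and other test runs."""
    covered = sum(r["covered_lines"] for r in coverage_reports)
    total = sum(r["total_lines"] for r in coverage_reports)
    return 100.0 * covered / total if total else 0.0

def should_promote(coverage_reports, module_category, level_threshold=None):
    metric = aggregate_coverage(coverage_reports)
    threshold = CATEGORY_THRESHOLDS.get(module_category, 80.0)
    if level_threshold is not None:
        threshold = max(threshold, level_threshold)   # a promotion level may impose a stricter bar
    return metric >= threshold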