Abstract:
A software testing framework provides functionality for utilizing pre-existing tests to load and performance test a network service. Methods can be tagged with annotations indicating that they are tests, such as integration tests. The methods implementing the integration tests can also be tagged with other types of annotations that can be used to select individual tests for use in testing, such as annotations indicating whether a test is a positive or negative test, annotations specifying dependencies upon other tests, or annotations indicating that a test is a member of a test suite. The annotations can be utilized in conjunction with test selection criteria to select individual integration tests for use in load and performance testing of the network service. The selected integration tests can be deployed to and executed upon load-generating instances in order to generate requests to the network service at high throughput.
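The annotation-and-selection scheme described above can be sketched in Python, using decorators in place of annotations. This is a minimal illustrative sketch; the names `test_annotation` and `select_tests` are invented, not part of any framework named in the abstract.

```python
# Hypothetical annotation-based test registry and selection (all names illustrative).
TEST_REGISTRY = []

def test_annotation(test_type="integration", polarity="positive",
                    suite=None, depends_on=()):
    """Tag a function as a test, recording metadata used for selection."""
    def decorator(func):
        func.metadata = {
            "type": test_type,
            "polarity": polarity,      # positive or negative test
            "suite": suite,            # membership in a test suite
            "depends_on": tuple(depends_on),  # dependencies on other tests
        }
        TEST_REGISTRY.append(func)
        return func
    return decorator

def select_tests(**criteria):
    """Return registered tests whose metadata matches all selection criteria."""
    return [t for t in TEST_REGISTRY
            if all(t.metadata.get(k) == v for k, v in criteria.items())]

@test_annotation(polarity="positive", suite="checkout")
def test_place_order():
    return "order placed"

@test_annotation(polarity="negative", suite="checkout")
def test_invalid_payment():
    return "payment rejected"

# Selection criteria pick out individual integration tests, e.g. for load testing.
selected = select_tests(polarity="positive", suite="checkout")
```

The selected test functions would then be the units deployed to load-generating instances.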
Abstract:
A network-based production service is configured to process client requests for the production service via a network, capture production request data defining the requests and store the production request data in a data store. A test system comprising one or more controllers creates test jobs according to a test plan for testing the production service. The test plan creates a test profile for using specified production request data to simulate a load on the production service. Each job created by the test plan specifies a portion of production request data. A job queue receives and queues test jobs from one or more controllers configured to add test jobs to the job queue according to the test plan. Workers access jobs from the job queue and the production request data from the data store as specified in each job and replay the production request data to the production service.
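The controller/job-queue/worker replay pattern above can be illustrated with an in-memory sketch. The data store, the job format, and the `replay` stand-in are assumptions for illustration; a real system would queue jobs durably and send captured requests over the network.

```python
# Illustrative controller -> job queue -> worker replay pipeline.
import queue

# Stand-in for the data store of captured production request data.
data_store = {f"req-{i}": {"path": f"/item/{i}"} for i in range(6)}
job_queue = queue.Queue()
replayed = []

def controller_add_jobs(test_plan):
    """Create jobs per the test plan; each job specifies a portion of request data."""
    ids = sorted(data_store)
    size = test_plan["batch_size"]
    for start in range(0, len(ids), size):
        job_queue.put({"request_ids": ids[start:start + size]})

def worker():
    """Access jobs from the queue, fetch the specified data, and replay it."""
    while not job_queue.empty():
        job = job_queue.get()
        for rid in job["request_ids"]:
            replayed.append(data_store[rid])  # stand-in for replaying to the service

controller_add_jobs({"batch_size": 2})
worker()
```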
Abstract:
Systems and associated processes for testing a reverse proxy server are disclosed. A backend proxy server test system can receive a request from a reverse proxy server under test. The request may be generated in response to a request from a test client to access a backend service. In responding to the received request, the backend proxy server test system can include a copy of the received request. Upon receiving the proxy server's response to its request, the test client can extract the embedded copy of the request that the reverse proxy server generated and determine whether it matches the request that a functioning reverse proxy server would generate. Based, at least in part, on the result of this comparison, the test client can determine whether the reverse proxy server is malfunctioning.
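The echo-and-compare check can be sketched as follows. The proxy stub, the header rewrite, and the response format are all invented for illustration; the point is only the flow: the backend test system embeds a copy of what it received, and the client compares that copy against what a functioning proxy should have produced.

```python
# Illustrative reverse-proxy test: backend echoes the request it received.
def backend_test_server(received_request):
    """Respond normally, but embed a copy of the request as received."""
    return {"body": "ok", "echoed_request": dict(received_request)}

def reverse_proxy_under_test(client_request):
    """A (correctly functioning) proxy stub: rewrites the host and forwards."""
    forwarded = dict(client_request)
    forwarded["host"] = "backend.internal"
    return backend_test_server(forwarded)

def expected_proxied_request(client_request):
    """What a functioning reverse proxy would generate from this request."""
    expected = dict(client_request)
    expected["host"] = "backend.internal"
    return expected

client_request = {"host": "service.example", "path": "/status"}
response = reverse_proxy_under_test(client_request)
# The test client extracts the embedded copy and compares.
proxy_ok = response["echoed_request"] == expected_proxied_request(client_request)
```

A mismatch in `proxy_ok` would indicate a malfunctioning proxy.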
Abstract:
A network-based scalable production load test service may be implemented on a provider network including a plurality of computing devices in order to provide load testing for network-based production systems. In some embodiments, the plurality of computing devices is configured to receive a request to capture to a load test data repository items of transaction data for a network-based production service. In some embodiments, the plurality of computing devices is configured to capture to the load test data repository the items of transaction data. The transaction data include input to the network-based production service over a network. In some embodiments, in response to a load test specification received by the scalable production load test service, the plurality of computing devices is configured to dynamically allocate one or more resources to perform a load test of the network-based production service according to the load test specification.
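A toy version of the capture step and spec-driven resource allocation is sketched below. The repository is an in-memory list, "resources" are plain records, and the per-worker rate used to size the pool is an invented assumption.

```python
# Illustrative capture + dynamic allocation for a scalable load test service.
load_test_repository = []

def capture_transaction(item):
    """Capture one item of production input to the load test data repository."""
    load_test_repository.append(item)

def allocate_resources(spec):
    """Dynamically size a worker pool from the load test specification.

    Assumes each worker can replay 100 requests/sec (illustrative figure).
    """
    per_worker_rate = 100
    count = -(-spec["target_rps"] // per_worker_rate)  # ceiling division
    return [{"worker_id": i} for i in range(count)]

# Capture a few items of transaction data.
for i in range(3):
    capture_transaction({"txn": i})

# A load test specification arrives; allocate resources to meet it.
workers = allocate_resources({"target_rps": 250})
```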
Abstract:
Embodiments presented herein provide techniques for dynamically generating synthetic test data used to test a web service. In one embodiment, a service description language document defining a web service may include a test interface definition. The test interface definition specifies rules for generating the synthetic data to use when testing API interfaces exposed by the web service, e.g., to generate synthetic data needed to carry out load and performance testing. Including rules for generating test data in the service description language document provides a centralized and authoritative source for both building and testing the web service.
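The idea of driving synthetic data generation from rules embedded in the service description document can be sketched as below. The rule schema (`type`, `prefix`, `min`, `max`, etc.) is invented here; a real service description language would define its own test interface syntax.

```python
# Illustrative synthetic data generation from a test interface definition.
import random

service_description = {
    "operation": "CreateUser",
    "test_interface": {  # hypothetical rule schema for generating test data
        "user_name": {"type": "string", "prefix": "test-user-", "max_len": 16},
        "age": {"type": "int", "min": 18, "max": 99},
    },
}

def generate_synthetic_request(doc, rng):
    """Build one synthetic request from the document's test interface rules."""
    request = {}
    for field, rule in doc["test_interface"].items():
        if rule["type"] == "int":
            request[field] = rng.randint(rule["min"], rule["max"])
        elif rule["type"] == "string":
            suffix = str(rng.randint(0, 9999))
            request[field] = (rule["prefix"] + suffix)[: rule["max_len"]]
    return request

req = generate_synthetic_request(service_description, random.Random(0))
```

Because the rules live in the same document that defines the service, generators and builders read from one authoritative source.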
Abstract:
Technologies are disclosed herein for generating comments in a source code review tool using code analysis tools. A producer module can be executed in order to obtain source code from a source code review tool. One or more source code analysis modules can then be executed in order to analyze the source code. A reporter module can then store the output of the source code analysis modules as comments in the source code review tool for use by a developer of the source code. The producer, reporter, and source code analysis modules can be executed in response to a request from the source code developer to perform a source code review, by a job scheduler, or in another manner. An application programming interface (API) exposed by the source code review tool can be utilized to obtain the source code and to store the comments associated with the source code.
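The producer/analyzer/reporter flow can be sketched with a fake in-memory review tool standing in for the real API. The `== None` analysis module is one invented example of a source code analysis module.

```python
# Illustrative producer -> analysis modules -> reporter pipeline.
review_tool = {"source": "def f(x):\n    return x == None\n", "comments": []}

def producer():
    """Obtain the source code under review from the review tool (stubbed API)."""
    return review_tool["source"]

def analyze_none_comparison(source):
    """One analysis module: flag `== None` comparisons."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if "== None" in line:
            findings.append((lineno, "Use `is None` instead of `== None`."))
    return findings

def reporter(findings):
    """Store analysis output as comments in the review tool."""
    for lineno, message in findings:
        review_tool["comments"].append({"line": lineno, "text": message})

reporter(analyze_none_comparison(producer()))
```

In practice the same pipeline could run several analysis modules and be triggered on demand or by a job scheduler.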
Abstract:
Methods, systems, and computer-readable media for determining the maximum throughput of a service are disclosed. A first sequence of load tests is initiated for a service host. Individual ones of the load tests comprise determining a respective throughput at the service host for a respective number of concurrent connections to the service host. The number of concurrent connections increases nonlinearly in at least a portion of the first sequence of load tests. The first sequence of load tests is discontinued when the throughput is not increased by a threshold from one of the load tests to the next. An estimated maximum throughput for the service host is determined based at least in part on the first sequence of load tests. The estimated maximum throughput corresponds to a particular number of concurrent connections to the service host.
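The ramp-and-stop logic above can be sketched as follows. The doubling schedule, the threshold value, and the saturating fake host are illustrative assumptions; a real load test would measure throughput against an actual service host.

```python
# Illustrative maximum-throughput search: connections grow nonlinearly
# (doubling), and the sequence stops once the throughput gain is below
# a threshold.
def measure_throughput(connections):
    """Fake host whose throughput saturates near 500 req/s."""
    return min(500.0, connections * 10.0)

def estimate_max_throughput(threshold=1.0):
    connections, prev = 1, 0.0
    best = (0.0, 0)
    while True:
        tput = measure_throughput(connections)
        if tput > best[0]:
            best = (tput, connections)  # track throughput and its connection count
        if tput - prev < threshold:     # gain too small: discontinue the sequence
            return best
        prev = tput
        connections *= 2                # nonlinear (exponential) increase

max_tput, at_connections = estimate_max_throughput()
```

The returned pair gives the estimated maximum throughput and the number of concurrent connections at which it was observed.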
Abstract:
One or more computers is configured to run an end-to-end test including at least a plurality of independent tests of multiple stages of an asynchronous multi-stage data processing system. One of the set of independent tests is configured to send a request for test input data from a test data repository service for a particular stage. A converted version of the test input data is obtained. The converted version is compared to the output of the particular stage to verify operation of that stage. The output of the particular stage is transmitted to the test data repository service. One or more computers is configured to provide the test data repository service. The test data repository service is configured to store in the test data storage the output of the particular stage as test input data for a next stage of the asynchronous multi-stage data processing system.
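The stage-by-stage verification and handoff can be sketched as below. The doubling "stage" and the independently computed "converted version" are invented stand-ins; what matters is that each stage test reads its input from the repository, verifies the stage output against the converted version, and stores the output as the next stage's input.

```python
# Illustrative per-stage test against a test data repository service.
test_data_repository = {"stage-1": [1, 2, 3]}  # initial test input data
results = {}

def stage_transform(items):
    """Fake pipeline stage: doubles each item."""
    return [x * 2 for x in items]

def expected_converted(items):
    """Independently converted version of the input, used for verification."""
    return [x + x for x in items]

def run_stage_test(stage, next_stage):
    inputs = test_data_repository[stage]            # request test input data
    output = stage_transform(inputs)
    results[stage] = output == expected_converted(inputs)  # verify the stage
    test_data_repository[next_stage] = output       # store for the next stage

run_stage_test("stage-1", "stage-2")
run_stage_test("stage-2", "stage-3")
```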
Abstract:
Methods, systems, and computer-readable media for ordered test execution to enable faster feedback are disclosed. A likelihood of failure is estimated for individual tests in a set of tests. Based at least in part on the likelihood of failure, an ordered sequence is determined for the set of tests, such that the tests are ordered by estimated likelihood of failure. The set of tests is initiated in the ordered sequence, such that one or more computer-executable programs are subjected to individual ones of the tests. A failure of one or more of the tests is determined prior to performing one or more remaining tests in the ordered sequence.
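The ordering scheme can be sketched as below. The failure likelihoods and the stop-on-first-failure policy are illustrative; real estimates might come from historical test results.

```python
# Illustrative failure-likelihood ordering: most-likely-to-fail tests run
# first, and remaining tests are skipped once a failure is found.
tests = [  # invented likelihoods standing in for historical estimates
    {"name": "test_stable", "fail_likelihood": 0.01, "passes": True},
    {"name": "test_flaky", "fail_likelihood": 0.60, "passes": False},
    {"name": "test_new", "fail_likelihood": 0.30, "passes": True},
]

def run_in_failure_order(tests):
    """Order by estimated likelihood of failure, stop at the first failure."""
    ordered = sorted(tests, key=lambda t: t["fail_likelihood"], reverse=True)
    executed, failed = [], None
    for t in ordered:
        executed.append(t["name"])
        if not t["passes"]:
            failed = t["name"]
            break                      # faster feedback: skip remaining tests
    return executed, failed

executed, failed = run_in_failure_order(tests)
```

Here the failure surfaces after one test instead of three, which is the faster-feedback benefit the ordering targets.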
Abstract:
Automated methods create a predictive test load from production data, using clustering, for testing in a production environment. Client requests to a production system are captured and processed into production request data. The production request data undergoes a clustering analysis to determine cluster definitions. In some embodiments, the production request data is turned into vectors and the vectors undergo the clustering analysis instead of the production data. A specification may be received that specifies modifications to be made to the production data. The production request data may be processed using the cluster definitions and the specified modifications to create a predictive test load. In some embodiments, the predictive test load is replayed to a production system to simulate a predictive load according to a test plan. The test plan may specify the rate at which the test load is replayed.
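The vectorize/cluster/modify pipeline can be sketched with a deliberately trivial clusterer. The one-dimensional vectorization, the fixed-boundary "clustering," and the per-cluster scaling rule are all simplified stand-ins for the real analysis.

```python
# Illustrative predictive test load built from clustered production requests.
requests = [{"latency_ms": v} for v in (10, 12, 11, 200, 210, 205)]

def vectorize(request):
    """Turn a production request into a (here, 1-D) vector."""
    return request["latency_ms"]

def cluster(vectors, boundary=100):
    """Trivial two-cluster definition: split at a fixed boundary."""
    return {
        "fast": [v for v in vectors if v < boundary],
        "slow": [v for v in vectors if v >= boundary],
    }

def build_test_load(clusters, modifications):
    """Scale each cluster's share of the load per the specification."""
    load = []
    for name, members in clusters.items():
        count = int(len(members) * modifications.get(name, 1.0))
        load.extend([{"cluster": name}] * count)
    return load

clusters = cluster([vectorize(r) for r in requests])
test_load = build_test_load(clusters, {"slow": 2.0})  # spec: double slow traffic
```

A test plan would then control the rate at which `test_load` is replayed against the production system.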