Online Testing of RESTful APIs with RESTest
Abstract—Online testing of web APIs—testing APIs in production—is gaining traction in industry. Platforms such as RapidAPI and Sauce Labs provide online testing and monitoring services of web APIs 24/7; however, they require test cases to be designed manually, which are then continuously executed at regular intervals. In this talk, we present the RESTest testing ecosystem as an alternative for automated and thorough online testing of RESTful APIs. First, we describe its architecture and functionality. Then, we explain how it can be used to test your own APIs, including a demonstration. Lastly, we delve into our latest results from deploying this testing ecosystem in practice. On the one hand, we uncovered over 200 bugs in industrial APIs over the course of 15 days of testing. On the other hand, we identified challenges posed by online testing at scale, which open exciting research opportunities in the areas of search-based software engineering and machine learning.
I. INTRODUCTION
Web APIs provide access to data and functionality over the Internet, via HTTP interactions. They are the cornerstone of software integration, especially RESTful web APIs [1], currently considered the de facto standard for Web integration. As RESTful APIs become more pervasive and widespread in industry, their validation becomes more critical than ever before. A single bug in an API may affect tens or hundreds of other services using it. In this scenario, test thoroughness and automation are of utmost importance. Recently, academia and industry have made great efforts to address this problem, which has led to an explosion in the number of approaches and tools for testing RESTful APIs. Research approaches focus on the automated generation of test cases, especially from a black-box perspective (i.e., without requiring access to the source code of the API). Industrial solutions, on the other hand, are mostly concerned with automating test case execution and providing online testing services, where APIs are continuously tested while in production. Customers of online testing platforms such as RapidAPI [2] or Sauce Labs [3] may choose among different pricing plans that determine features such as the test execution frequency and the integration with CI/CD platforms.
In this talk, we present the RESTest testing ecosystem as a powerful alternative for automated and thorough online testing of RESTful APIs at scale. First, we describe its architecture and its main features. Then, we demonstrate how it can be used to test your own APIs. Lastly, we report on our latest results from deploying the ecosystem to test industrial APIs. In particular, we uncovered over 200 bugs over the course of 15 days, but we also identified challenges of online testing when performed at scale, some of which could be tackled with search- and machine learning-based approaches.
Fig. 1. Testing ecosystem architecture.
II. RESTEST TESTING ECOSYSTEM
The RESTest testing ecosystem is specifically designed for online testing of RESTful web APIs; that is, it allows APIs in production to be continuously tested and monitored for any given period of time (e.g., days or months). It follows a black-box strategy, where test cases are automatically derived from the API specification (e.g., an OAS document [4]). Test cases are continuously generated, executed and reported, and the test results can be monitored in a user-friendly dashboard.
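To make this black-box loop concrete, the following minimal sketch shows how test cases derived from a specification could be repeatedly executed against a live API and their outcomes logged for reporting. It is not RESTest's actual code; the endpoint, the hard-coded "generated" test cases, and the class names are illustrative assumptions.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.List;

// Minimal, self-contained sketch of a continuous black-box test loop.
// The endpoint and the hard-coded "generated" test cases are illustrative;
// a real tool derives them automatically from the API specification.
public class OnlineTestLoop {

    record TestCase(String method, String url, String body) { }

    // Stand-in for specification-driven test generation (e.g., from an OAS document).
    static List<TestCase> generateTestCases() {
        return List.of(
                new TestCase("GET", "https://api.example.com/v1/pets?limit=10", ""),
                new TestCase("POST", "https://api.example.com/v1/pets", "{\"name\":\"kitty\"}"));
    }

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        while (true) {                                        // keep testing the API in production
            for (TestCase tc : generateTestCases()) {
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create(tc.url()))
                        .method(tc.method(), tc.body().isEmpty()
                                ? HttpRequest.BodyPublishers.noBody()
                                : HttpRequest.BodyPublishers.ofString(tc.body()))
                        .build();
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                // 5xx responses typically reveal server-side bugs; record every outcome for reporting.
                System.out.printf("%s %s -> %d%n", tc.method(), tc.url(), response.statusCode());
            }
            Thread.sleep(Duration.ofMinutes(5).toMillis());   // test execution frequency
        }
    }
}
```

In a deployed ecosystem, the generation, execution and reporting steps of this loop are carried out by separate components rather than a single program, as described next.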
A. Architecture
Figure 1 depicts the architecture of the testing ecosystem. As illustrated, the architecture is decoupled into multiple types of bots, i.e., highly cohesive and autonomous programs that perform specific tasks within the testing process (e.g., test reporting). Bots can be independently developed and deployed using different technologies. We distinguish between input bots (which support the generation and execution of test cases) and output bots (which are responsible for analyzing and leveraging test outputs). Bots are started, stopped and monitored automatically by a controller component, and they can optionally interact with each other, e.g., by triggering the update of test reports.
There are two types of input bots: test bots, which generate and execute test cases, and garbage collectors, which delete resources created by test bots (e.g., playlists in the Spotify web API). Regarding output bots, we conceive two types: test reporters, which generate graphical test reports, and test coverage computers, which compute the API coverage achieved by test bots [5].
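As a rough illustration of this decomposition, the sketch below uses hypothetical interface and class names (Bot, TestBot, Controller, and so on, which are assumptions rather than RESTest's actual types) to show how a controller could start, monitor and restart the four kinds of bots.

```java
import java.util.List;

// Illustrative sketch of the bot-based architecture; all names are hypothetical.
interface Bot {
    void start();
    void stop();
    boolean isAlive();
}

// Input bots: support the generation and execution of test cases.
class TestBot implements Bot {
    public void start() { /* generate and execute test cases derived from the OAS document */ }
    public void stop() { }
    public boolean isAlive() { return true; }
}

class GarbageCollectorBot implements Bot {
    public void start() { /* delete resources created by test bots (e.g., playlists) */ }
    public void stop() { }
    public boolean isAlive() { return true; }
}

// Output bots: analyze and leverage test outputs.
class TestReporterBot implements Bot {
    public void start() { /* build graphical test reports from the stored test results */ }
    public void stop() { }
    public boolean isAlive() { return true; }
}

class CoverageBot implements Bot {
    public void start() { /* compute the API coverage achieved by the test bots */ }
    public void stop() { }
    public boolean isAlive() { return true; }
}

// The controller starts, stops and monitors every bot, restarting any that dies.
public class Controller {
    private final List<Bot> bots = List.of(
            new TestBot(), new GarbageCollectorBot(),
            new TestReporterBot(), new CoverageBot());

    public void run() {
        bots.forEach(Bot::start);
        for (Bot bot : bots) {
            if (!bot.isAlive()) { bot.stop(); bot.start(); }  // simple supervision policy
        }
    }

    public static void main(String[] args) {
        new Controller().run();
    }
}
```

Hiding every bot behind a common interface is what allows the bots to be developed, deployed and replaced independently, using different technologies if needed.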
Alberto Martin-Lopez
SCORE Lab, I3US Institute, Universidad de Sevilla, Seville, Spain
alberto.martin@us.es