Performance assessment of microservices with strong guarantees

SFScon 2021

Towards agile formal methods

The term “microservices” denotes applications built as suites of loosely coupled, specialized services, each running in its own process and communicating through lightweight mechanisms, such as an HTTP resource API. This architectural style lends itself to decentralization and to the adoption of continuous integration and deployment practices, as reported by several companies (e.g., Amazon and Netflix) that have successfully developed and deployed microservices applications in their production environments.
Several configuration alternatives are possible for microservice deployment: serverless functions, containers, physical or virtual machines, or combinations thereof, each with different hardware capacity constraints.
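To make the size of this design space concrete, here is a minimal Python sketch; all dimensions and values are illustrative placeholders, not taken from the talk:

```python
from itertools import product

# Hypothetical deployment dimensions; names and values are illustrative.
runtimes = ["serverless-function", "container", "virtual-machine", "physical-machine"]
cpu_limits = [0.5, 1, 2, 4]       # vCPUs per service instance
memory_limits = [256, 512, 1024]  # MiB per service instance
replicas = [1, 2, 4, 8]           # number of instances

# Even these few dimensions yield hundreds of candidate configurations
# for a single service, and the space grows multiplicatively with the
# number of services in the application.
configurations = list(product(runtimes, cpu_limits, memory_limits, replicas))
print(f"{len(configurations)} candidate configurations for one service")
```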
Introducing a microservices architecture typically adds communication between the different services. This affects performance (i.e., how the target microservices system behaves upon user requests) and scalability (i.e., how the target system behaves when the scale of operation increases), two fundamental quality attributes that must be assured with systematic methods supporting the engineering life-cycle. The available architecture alternatives and their parameters lead to non-trivial choices, and these choices (i.e., the underlying deployment environment and its configuration) significantly influence the performance of the whole application.
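As an illustration of the two attributes, the following sketch probes a hypothetical endpoint (the URL, load levels, and request counts are invented placeholders) and reports mean response time as the number of concurrent clients grows, i.e., performance under an increasing scale of operation:

```python
import concurrent.futures
import time
import urllib.request

TARGET = "http://localhost:8080/api/orders"  # hypothetical endpoint

def one_request(_) -> float:
    """Time a single request to the target service."""
    start = time.perf_counter()
    urllib.request.urlopen(TARGET, timeout=5).read()
    return time.perf_counter() - start

for clients in (1, 2, 4, 8, 16):  # increasing scale of operation
    with concurrent.futures.ThreadPoolExecutor(max_workers=clients) as pool:
        latencies = list(pool.map(one_request, range(clients * 10)))
    mean_ms = 1000 * sum(latencies) / len(latencies)
    print(f"{clients:>2} concurrent clients: mean response time {mean_ms:.1f} ms")
```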
For this reason, performance testing of microservices is challenging in an IT industry that usually adopts agile practices combining software development (Dev) and IT operations (Ops). In this context, there is still a lack of rigorous foundations that can be put into practice and systematically replicated.
Indeed, the work on agile and DevOps has been driven mainly by industry, with little contribution from the formal approaches envisioned by academic research. Mainstream approaches to performance assessment in industry focus on passive monitoring of system response time or resource utilization to detect anomalous performance and scalability issues, such as bottlenecks. Even approaches based on load or stress testing usually extract a set of performance indices and statistics that are difficult to use for guiding engineering decisions because they lack connections to system-level requirements.
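For example, a load test typically reduces to summary indices like the ones below; the samples and the 500 ms threshold are invented for illustration, and the point is that such indices only become actionable once tied back to a system-level requirement:

```python
import statistics

# Hypothetical response-time samples (seconds) from a load test run.
response_times = [0.12, 0.15, 0.11, 0.42, 0.13, 0.95, 0.14, 0.16, 0.13, 0.18]

# Typical indices reported by mainstream load-testing practice.
mean_rt = statistics.mean(response_times)
p95_rt = statistics.quantiles(response_times, n=20, method="inclusive")[-1]
print(f"mean = {mean_rt:.3f} s, p95 = {p95_rt:.3f} s")

# The same numbers support a decision only when checked against a
# system-level requirement, e.g. "95% of requests complete within 500 ms"
# (an invented requirement for this sketch).
REQUIREMENT_P95_S = 0.5
verdict = "met" if p95_rt <= REQUIREMENT_P95_S else "violated"
print(f"requirement p95 <= {REQUIREMENT_P95_S} s: {verdict}")
```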

The main goal of this work is to overcome the aforementioned limitations by enabling automated decision gates in the performance testing of microservices that allow requirements traceability. We seek to achieve this goal by endowing the common agile practices used in microservice performance testing with the ability to automatically learn, and then formally verify, a performance model of the System Under Test (SUT), achieving strong assurances of quality. Even though the separation between agile and formal methods has grown over the years, we support the claim that formal methods are at a stage where they can be effectively incorporated into agile methods to give them rigorous engineering foundations and make them systematic and effective, with strong guarantees.
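As a rough illustration of the learn-then-verify idea, and not the method presented in the talk, the sketch below fits a simple M/M/1 queueing model to hypothetical measurements and checks a mean response-time requirement as an automated decision gate:

```python
# Minimal learn-then-verify sketch: fit M/M/1 parameters from measured
# data, then check a closed-form performance property as a pass/fail gate.

def fit_mm1(interarrival_times, service_times):
    """'Learn' M/M/1 parameters from measurements: arrival rate lambda and
    service rate mu are the reciprocals of the sample means."""
    lam = len(interarrival_times) / sum(interarrival_times)
    mu = len(service_times) / sum(service_times)
    return lam, mu

def verify_response_time(lam, mu, bound):
    """Check the M/M/1 mean response time W = 1 / (mu - lambda) against a
    system-level requirement, yielding a verdict usable as a CI/CD gate."""
    if lam >= mu:
        return False  # unstable system: the queue grows without bound
    return 1.0 / (mu - lam) <= bound

# Hypothetical measurements (seconds) gathered during a test run.
interarrivals = [0.11, 0.09, 0.10, 0.12, 0.08]
services = [0.05, 0.06, 0.04, 0.05, 0.05]

lam, mu = fit_mm1(interarrivals, services)
assert verify_response_time(lam, mu, bound=0.5), "performance requirement violated"
print(f"gate passed: lambda = {lam:.2f} req/s, mu = {mu:.2f} req/s")
```

In a realistic setting the learned model and the verified properties would be far richer, but the verdict plays the same role: an automated pass/fail gate that traces directly back to a requirement.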

Speaker: Matteo Camilli