CIPM
Short Summary
The main goal of the Continuous Integration of architectural Performance Models (CIPM) approach is to enable an accurate architecture-based performance prediction at each point of a system's development life cycle. For this goal, the CIPM approach continuously updates the architecture-level performance model of a software system according to observed changes at development time and at operation time.
Context
If measurement-based performance evaluation is followed during agile software development, only the current state of the performance is captured. Evaluating design alternatives (i.e., unseen states) with measurement-based approaches is expensive: it requires setting up a test environment and measuring every design alternative (e.g., different deployments, system compositions, or execution environments). In contrast, architecture-based performance prediction can support design decisions at lower cost by simulating or analyzing architectural performance models. However, building architectural performance models and keeping them up-to-date during agile software development is challenging: frequent changes in the source code and in operations (e.g., the system composition or deployment) quickly render the models outdated. The CIPM approach addresses these issues by observing the impacting changes and updating the architectural performance models accordingly.
CIPM Processes
The CIPM processes keep the consistency between the software system artifacts (source code, performance models, and the measurements) during the software development life cycle. The following figure shows the processes of CIPM and how they can be integrated into DevOps as an example of agile software development.
At development time, CIPM extends the Continuous Integration (CI) of source code with a CI-based update of software models (1). This process extracts the source code changes from each commit and updates the corresponding source code model (1.1) accordingly in a Virtual Single Underlying Model predefined in the Vitruvius platform. Based on consistency preservation rules defined at the metamodel level, CIPM updates the repository model (1.2) (i.e., components, interfaces, and abstract behavior) and identifies instrumentation points (1.3) for the changed parts of the source code. In addition, CIPM extracts the system model (1.4) (i.e., the composition of components). To estimate performance parameters such as resource demands, CIPM applies an automatic adaptive instrumentation (2) to the changed parts of the source code.
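The adaptive instrumentation can be pictured as wrapping only the changed service actions with lightweight probes that record execution times. The following Java snippet is a minimal, hypothetical sketch of such a probe; the class and method names are illustrative and not part of the actual CIPM implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of an instrumentation probe around a changed service
// method (illustrative names, not CIPM's actual code).
public class InstrumentationSketch {
    record Measurement(String servicePoint, long durationNanos) {}

    static final List<Measurement> RECORDS = new ArrayList<>();

    // An instrumented version of a changed service method.
    static int bookSale(int items) {
        long start = System.nanoTime();            // probe: method entry
        int total = 0;
        for (int i = 0; i < items; i++) {          // placeholder business logic
            total += i;
        }
        // probe: method exit, record the observed execution time
        RECORDS.add(new Measurement("bookSale", System.nanoTime() - start));
        return total;
    }

    public static void main(String[] args) {
        bookSale(10);
        System.out.println(RECORDS.size() + " measurement(s) collected");
    }
}
```

In the actual approach, such probes are inserted automatically and only for the parts of the code that changed in a commit, which keeps the monitoring overhead low.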
Performance testing (3) with the instrumented source code allows collecting the measurements required for calibrating the performance parameters. The incremental calibration (4) updates the affected performance parameters, taking impacting parametric dependencies (for example, on the input data) into account. The self-validation (5) process validates and improves the accuracy of the calibrated models. If the models are deemed accurate, developers can apply architecture-based performance prediction (6) to unseen states. Otherwise, the accuracy can be improved by collecting more measurements from test or production environments.
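As an illustration of calibrating a performance parameter with a parametric dependency, the following sketch fits a linear resource demand d(n) = a + b·n to (input size, measured demand) pairs via ordinary least squares. The actual CIPM calibration is considerably more elaborate (see the QUDOS21 paper); the method names and the synthetic data here are purely illustrative:

```java
// Sketch: estimating a parametric resource demand d(n) = a + b*n
// from (inputSize, measuredDemand) pairs via ordinary least squares.
public class CalibrationSketch {
    static double[] fitLinear(double[] n, double[] d) {
        int k = n.length;
        double sumN = 0, sumD = 0, sumNN = 0, sumND = 0;
        for (int i = 0; i < k; i++) {
            sumN += n[i];
            sumD += d[i];
            sumNN += n[i] * n[i];
            sumND += n[i] * d[i];
        }
        double b = (k * sumND - sumN * sumD) / (k * sumNN - sumN * sumN);
        double a = (sumD - b * sumN) / k;
        return new double[] {a, b}; // intercept and slope of the demand
    }

    public static void main(String[] args) {
        // Synthetic measurements: demand grows linearly with input size (d = 5 + 2n).
        double[] sizes = {10, 20, 30, 40};
        double[] demands = {25, 45, 65, 85};
        double[] ab = fitLinear(sizes, demands);
        System.out.printf("a=%.1f b=%.1f%n", ab[0], ab[1]);
    }
}
```

The incremental aspect means that only the parameters affected by a change are re-estimated, instead of re-calibrating the whole model.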
At operation time, the continuous adaptive monitoring (8) of the system allows collecting the runtime measurements required for detecting operation changes and keeping the performance models up-to-date. The self-validation (9) process compares the monitoring data with simulation results to detect inaccurate parts of the models. The results of the self-validation serve as input to the Ops-time calibration (10), which updates the models according to the detected changes and improves their accuracy. The updated models enable model-based analyses (11), for instance, model-based auto-scaling. Moreover, they support the development planning (12) by evaluating design alternatives.
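The self-validation idea can be sketched as comparing monitored and simulated response times with a simple error metric. In this hypothetical example, a mean relative error below a threshold marks a model part as accurate; otherwise it would be scheduled for re-calibration (the metric and threshold are illustrative, not the ones used by CIPM):

```java
public class SelfValidationSketch {
    // Mean relative error between monitored and simulated response times.
    static double meanRelativeError(double[] monitored, double[] simulated) {
        double sum = 0;
        for (int i = 0; i < monitored.length; i++) {
            sum += Math.abs(simulated[i] - monitored[i]) / monitored[i];
        }
        return sum / monitored.length;
    }

    public static void main(String[] args) {
        double[] monitored = {100, 200, 400}; // observed response times (ms)
        double[] simulated = {110, 190, 400}; // simulated response times (ms)
        double err = meanRelativeError(monitored, simulated);
        // Illustrative 10% threshold for deciding whether to re-calibrate.
        System.out.println(err < 0.1 ? "model accurate" : "re-calibrate");
    }
}
```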
Publications
- QUDOS18: "Continuous Integration of Performance Model" introduces the initial idea of the approach.
- ICSA20: "Incremental Calibration of Architectural Performance Models with Parametric Dependencies" describes the CIPM approach in more detail and evaluates the accuracy of the performance models that are incrementally calibrated.
- ICSA21: "Enabling Consistency between Software Artifacts for Software Adaption and Evolution" describes how the performance models are continuously updated at operation time, using the self-validation results as input for the calibration to improve the models' accuracy and to reduce the monitoring overhead.
- QUDOS21: "Optimizing Parametric Dependencies for Incremental Performance Model Extraction" applies a genetic algorithm to optimize the performance parameters with parametric dependencies.
- Preprint: "Continuous Integration of Architectural Performance Models with Parametric Dependencies - The CIPM Approach". The new contributions in this work are the commit-based update of performance models at development time and further evaluation of the approach.
For full bibliographic details and BibTeX entries, see https://orcid.org/0000-0003-4261-8477.
Case studies
We perform experiments based on the following case studies:
CoCoME
CoCoME is a trading system designed to be used in supermarkets. It supports several processes such as scanning products at a cash desk or processing sales with a credit card. We used a cloud-based implementation of CoCoME.
TeaStore
TeaStore is a microservice-based webshop for buying different types of tea (publication). The webshop consists of 8 microservices that register themselves with a registry microservice to allow client-side load balancing. The microservices communicate with each other over representational state transfer (REST)-based APIs. This case study is designed to evaluate approaches to performance modeling.
TEAMMATES
TEAMMATES is a cloud-based tool to manage students' feedback. It consists of a Web-based frontend and a Java-based backend. CIPM is evaluated based on the real Git repository and history of TEAMMATES. For more details about the evaluation, see CIPM Evaluation Details.
Details for Published and Submitted Papers
ICSA2020
The incremental calibration pipeline is available on GitHub and documented in its wiki. However, we extended this pipeline after the publication with more features (see the CIPM pipeline on GitHub).
Evaluation using CoCoME
This evaluation scenario assumes that the "bookSale" service is newly added. Under this assumption, "bookSale" is instrumented and calibrated. The service consists of several internal and external actions and two loops. The following figure visualizes the abstract behavior of "bookSale".
Experiment Reproduction: The results can be reproduced in two steps:
- The incremental calibration of CoCoME by executing the calibration pipeline. To run it, please follow these instructions.
- The configuration of the calibration pipeline can be found on this link.
- The experiment data (e.g., the monitoring configuration, measurements, the original and calibrated models) can be found in the case study data folder.
- The evaluation of the calibrated performance model can be reproduced by executing the automatic evaluation of the calibrated CoCoMe model. The evaluation covers different aspects, for instance, the accuracy of the calibrated models, the required monitoring overhead, and the calibration pipeline's performance. The data used by the evaluation (e.g., monitoring configuration, the used monitoring data and calibrated models) are stored in the evaluation folder.
Evaluation using TeaStore
For the evaluation, we used the source code in our fork. The evaluation covers three scenarios described in the paper: (A) the incremental calibration of TeaStore to evaluate the accuracy of the incrementally calibrated models (Train service calibration), (B) the incremental calibration of TeaStore over incremental evolution to evaluate the stability of the models' accuracy over development, and (C) the incremental calibration of TeaStore after changes in the parametric dependencies to evaluate the identification of different types of parametric dependencies and the improvement of the models' accuracy.
Experiment Reproduction: The results can be reproduced in two steps:
- The incremental calibration of TeaStore by executing the calibration pipeline following these instructions.
- The experiment configuration can be found on this link.
- The experiment data (e.g., the monitoring configuration, measurements, the original and calibrated models) are found in the case study data folder.
- The evaluation of the calibrated performance model can be reproduced by running the automatic evaluation of the calibrated TeaStore models. Please note that there are three different tests for the above-mentioned evaluation scenarios (A: TeastoreEvolutionScenarioEvaluation.java, B: TeastoreSingleRecommenderEvaluation.java, and C: TeastoreParameterizedEvaluation.java). The evaluation covers the accuracy of the calibrated models, the required monitoring overhead, and the calibration pipeline's performance. The evaluation data for all evaluation scenarios (A, B, and C) are stored in the evaluation folder.
ICSA2021
The source code that is used for the evaluation of this publication is frozen on the ICSA21 branch of the CIPM pipeline repository. The source code is documented using this wiki.
Information on the evaluation and the reproduction of results is documented on this link in the wiki.
Journal Publication
A preprint of this publication is available on KitOpen.
Reproducibility of Results
- The experiment E1 is performed by the commit-based update of software models. The source code and information on reproducing the E1 results are on GitHub.
- The experiment E2 was performed based on the CIPM pipeline repository. The source code documentation and information on the reproduction of the results are on its wiki.
The source code of the commit-based update of software models will be later integrated into the CIPM pipeline repository.
Technical Report for the Java Model Parser and Printer
The current implementation of CIPM keeps the consistency between a software system written in Java and Palladio performance models. Here, the consistency preservation rules are defined at the metamodel level. As the Java metamodel, we build upon JaMoPP. We extended the JaMoPP metamodel to support Java versions 7-15. Our extensions add new language features, for instance, the diamond operator, lambda expressions, and modules. Moreover, we implemented our own parser and printer. The parser is based on the Eclipse Java Development Tools (JDT), from which Abstract Syntax Trees (ASTs) and binding information are retrieved. The ASTs are converted to EMF model instances, while the bindings support the resolution of references between model instances introduced by, e.g., imports. Conversely, the printer outputs model instances as valid Java code.
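To illustrate the printer side of this round trip, the following strongly simplified sketch prints a tiny, hand-rolled "model instance" as Java source code. The real printer operates on JaMoPP EMF model instances; the record types and names below are purely illustrative:

```java
// Strongly simplified sketch of printing a code model back to Java source.
// (The real printer works on JaMoPP EMF model instances.)
public class PrinterSketch {
    record MethodModel(String name, String returnType) {}
    record ClassModel(String name, java.util.List<MethodModel> methods) {}

    // Emit a model instance as (syntactically valid) Java code.
    static String print(ClassModel c) {
        StringBuilder sb = new StringBuilder("public class " + c.name() + " {\n");
        for (MethodModel m : c.methods()) {
            sb.append("    public ").append(m.returnType()).append(' ')
              .append(m.name()).append("() {}\n");
        }
        return sb.append("}\n").toString();
    }

    public static void main(String[] args) {
        ClassModel model = new ClassModel("CashDesk",
            java.util.List.of(new MethodModel("bookSale", "void")));
        System.out.print(print(model));
    }
}
```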
More details on our extension of JaMoPP are documented in a technical report. The source code is also available on GitHub.
Foundations and Related Projects
The following approaches are useful as background information:
- Vitruvius
- Palladio
- IObserve
- Kieker
- Automated Extraction of Palladio Component Models from Running Enterprise Java Applications
Related projects:
Contact
Please contact Manar Mazkatli or Martin Armbruster by mail if you have any questions.