Attribut:Publikation/Abstract
This is a property of type Text.
Performance is a pervasive quality of software systems; everything affects it, from the software itself to all underlying layers, such as operating system, middleware, hardware, communication networks, etc. Software Performance Engineering encompasses efforts to describe and improve performance, with two distinct approaches: an early-cycle predictive model-based approach, and a late-cycle measurement-based approach. Current progress and future trends within these two approaches are described, with a tendency (and a need) for them to converge, in order to cover the entire development cycle.
The model-driven development of systems involves multiple models, metamodels and transformations, and relationships between them. A bidirectional transformation (bx) is usually defined as a means of maintaining consistency between "two (or more)" models. This includes cases where one model may be generated from one or more others, as well as more complex ("symmetric") cases where models record partially overlapping information. In recent years, binary bx, those relating two models, have been extensively studied. Multiary bx, those relating more than two models, have received less attention. In this paper we consider how a multiary consistency relation may be defined in terms of binary consistency relations, and how consistency restoration may be carried out on a network of models and the relationships between them. We relate this to megamodelling and discuss further research that is needed.
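As one possible formalization of the question this abstract poses (our reading, not the paper's own definitions): given models m_1, ..., m_n and binary consistency relations R_ij attached to the edges E of the network of model relationships, the induced multiary consistency relation can be written as

```latex
% A hedged sketch: E and R_{ij} are our notation, not the paper's.
R(m_1, \dots, m_n) \;\Longleftrightarrow\; \bigwedge_{(i,j) \in E} R_{ij}(m_i, m_j)
```

Consistency restoration would then amount to re-establishing each violated R_ij without breaking edges that already hold, which is where the network (megamodel) structure becomes essential.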
There exist many solutions to solve a given design problem, and it is difficult to capture the essence of a solution and make it reusable for future designs. Furthermore, many variations of a given solution may exist, and choosing the best alternative depends on application-specific high-level goals and non-functional requirements. This paper proposes Concern-Oriented Software Design, a modelling technique that focuses on concerns as units of reuse. A concern groups related models serving the same purpose, and provides three interfaces to facilitate reuse. The variation interface presents the design alternatives and their impact on non-functional requirements. The customization interface of the selected alternative details how to adapt the generic solution to a specific context. Finally, the usage interface specifies the provided behavior. We illustrate our approach by presenting the concern models of variations of the Observer design pattern, which internally depends on the Association concern to link observers and subjects.
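To make the running example concrete, here is a minimal Python sketch of the Observer pattern the abstract refers to; the plain list below stands in for the linking behavior that the paper's Association concern would provide (class and method names are illustrative, not taken from the paper):

```python
class Subject:
    """Minimal Observer pattern: the list of observers stands in for
    what the Association concern would contribute in the paper."""
    def __init__(self):
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    def notify(self, event):
        # Push the event to every registered observer.
        for obs in self._observers:
            obs.update(self, event)

class ConsoleObserver:
    def update(self, subject, event):
        print(f"observed {event!r} from {type(subject).__name__}")

stock = Subject()
stock.attach(ConsoleObserver())
stock.notify({"price": 42})  # -> observed {'price': 42} from Subject
```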
In this paper, we present an integrated model-driven approach for the specification and the enforcement of secure object flows in process-driven service-oriented architectures (SOA). In this context, a secure object flow ensures the confidentiality and the integrity of important objects (such as business contracts or electronic patient records) that are passed between different participants in SOA-based business processes. We specify a formal and generic metamodel for secure object flows that can be used to extend arbitrary process modeling languages. To demonstrate our approach, we present a UML extension for secure object flows. Moreover, we describe how platform-independent models are mapped to platform-specific software artifacts via automated model transformations. In addition, we give a detailed description of how we integrated our approach with the Eclipse modeling tools.
The specification of workloads is required in order to evaluate performance characteristics of application systems using load testing and model-based performance prediction. Defining workload specifications that represent the real workload as accurately as possible is one of the biggest challenges in both areas. To overcome this challenge, this paper presents an approach that aims to automate the extraction and transformation of workload specifications for load testing and model-based performance prediction of session-based application systems. The approach (WESSBAS) comprises three main components. First, a system- and tool-agnostic domain-specific language (DSL) allows the layered modeling of workload specifications of session-based systems. Second, instances of this DSL are automatically extracted from recorded session logs of production systems. Third, these instances are transformed into executable workload specifications of load generation tools and model-based performance evaluation tools. We present transformations to the common load testing tool Apache JMeter and to the Palladio Component Model. Our approach is evaluated using the industry-standard benchmark SPECjEnterprise2010 and the World Cup 1998 access logs. Workload-specific characteristics (e.g., session lengths and arrival rates) and performance characteristics (e.g., response times and CPU utilizations) show that the extracted workloads match the measured workloads with high accuracy.
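As an illustration of the extraction step described above, the following Python sketch derives two of the workload characteristics named in the abstract (session lengths and session arrival rate) from a session log. The whitespace-separated log format is an assumption for the example, not WESSBAS's actual input format:

```python
from collections import defaultdict
from datetime import datetime

def extract_sessions(log_lines):
    """Group (timestamp, session_id, request) records into sessions and
    derive session lengths and the overall session arrival rate."""
    sessions = defaultdict(list)
    for line in log_lines:
        ts, sid, request = line.split(maxsplit=2)  # assumed log format
        sessions[sid].append((datetime.fromisoformat(ts), request))
    lengths = {sid: len(reqs) for sid, reqs in sessions.items()}
    starts = sorted(min(t for t, _ in reqs) for reqs in sessions.values())
    span = (starts[-1] - starts[0]).total_seconds() or 1.0
    arrival_rate = len(starts) / span  # sessions per second
    return lengths, arrival_rate

log = [
    "2021-06-01T10:00:00 s1 GET /home",
    "2021-06-01T10:00:05 s1 GET /cart",
    "2021-06-01T10:00:30 s2 GET /home",
]
print(extract_sessions(log))  # ({'s1': 2, 's2': 1}, ~0.067 sessions/s)
```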
Multiobjective evolutionary algorithms (EAs) that use nondominated sorting and sharing have been criticized mainly for their: 1) O(MN^3) computational complexity (where M is the number of objectives and N is the population size); 2) nonelitism approach; and 3) the need for specifying a sharing parameter. In this paper, we suggest a nondominated sorting-based multiobjective EA (MOEA), called nondominated sorting genetic algorithm II (NSGA-II), which alleviates all three of the above difficulties. Specifically, a fast nondominated sorting approach with O(MN^2) computational complexity is presented. Also, a selection operator is presented that creates a mating pool by combining the parent and offspring populations and selecting the best (with respect to fitness and spread) N solutions. Simulation results on difficult test problems show that, in most problems, the proposed NSGA-II is able to find a much better spread of solutions and better convergence near the true Pareto-optimal front than Pareto-archived evolution strategy and strength-Pareto EA, two other elitist MOEAs that pay special attention to creating a diverse Pareto-optimal front. Moreover, we modify the definition of dominance in order to solve constrained multiobjective problems efficiently. Simulation results of the constrained NSGA-II on a number of test problems, including a five-objective seven-constraint nonlinear problem, are compared with another constrained multiobjective optimizer, and much better performance of NSGA-II is observed.
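The fast nondominated sorting procedure summarized above is well documented in the paper; a minimal Python rendering (assuming minimization and plain objective vectors; variable names are ours) looks like this:

```python
def fast_nondominated_sort(objs):
    """objs: list of objective vectors (minimization). Returns fronts as
    lists of indices. O(M*N^2), matching the complexity in the abstract."""
    n = len(objs)
    dominates = lambda a, b: (all(x <= y for x, y in zip(a, b))
                              and any(x < y for x, y in zip(a, b)))
    dominated_by = [[] for _ in range(n)]  # solutions that i dominates
    counts = [0] * n                       # how many solutions dominate i
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if p == q:
                continue
            if dominates(objs[p], objs[q]):
                dominated_by[p].append(q)
            elif dominates(objs[q], objs[p]):
                counts[p] += 1
        if counts[p] == 0:
            fronts[0].append(p)  # p is on the Pareto-optimal front
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in dominated_by[p]:
                counts[q] -= 1
                if counts[q] == 0:  # all dominators already ranked
                    nxt.append(q)
        i += 1
        fronts.append(nxt)
    fronts.pop()  # the last front is always empty
    return fronts

print(fast_nondominated_sort([(1, 2), (2, 1), (3, 3)]))  # [[0, 1], [2]]
```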
The Eclipse Modeling Framework (EMF) provides modeling and code generation facilities for Java applications based on structured data models. Henshin is a new language and associated tool set for in-place transformations of EMF models. The Henshin transformation language uses pattern-based rules on the lowest level, which can be structured into nested transformation units with well-defined operational semantics. So-called amalgamation units are a special type of transformation units that provide a forall-operator for pattern replacement. For all of these concepts, Henshin offers a visual syntax, sophisticated editing functionalities, execution and analysis tools. The Henshin transformation language has its roots in attributed graph transformations, which offer a formal foundation for validation of EMF model transformations. The transformation concepts are demonstrated using two case studies: EMF model refactoring and meta-model evolution.
Context: As autonomous driving technology matures toward series production, it is necessary to take a deeper look at various aspects of electrical/electronic (E/E) architectures for autonomous driving. Objective: This paper describes a functional reference architecture for autonomous driving, along with various considerations that influence such an architecture. The functionality is described at the logical level, without dependence on specific implementation technologies. Method: Engineering design has been used as the research method, which focuses on creating solutions intended for practical application. The architecture has been refined and applied over a 5-year period to the construction of prototype autonomous vehicles in three different categories, with both academic and industrial stakeholders. Results: The architectural components are divided into categories pertaining to (i) perception, (ii) decision and control, and (iii) vehicle platform manipulation. The architecture itself is divided into two layers comprising the vehicle platform and a cognitive driving intelligence. The distribution of components among the architectural layers considers two extremes: one where the vehicle platform is as “dumb” as possible, and the other, where the vehicle platform can be treated as an autonomous system with limited intelligence. We recommend a clean split between the driving intelligence and the vehicle platform. The architecture description includes identification of stakeholder concerns, which are grouped under the business and engineering categories. A comparison with similar architectures is also made, wherein we claim that the presence of explicit components for world modeling, semantic understanding, and vehicle platform abstraction seems unique to our architecture. Conclusion: The concluding discussion examines the influences of implementation technologies on functional architectures and how an architecture is affected when a human driver is replaced by a computer. The discussion also proposes that reduction and acceleration of testing, verification, and validation processes is the key to incorporating continuous deployment processes.
Keywords: Autonomous driving; Functional architecture; E/E architecture; Reference architecture
Systems of systems exhibit characteristics that pose difficulty in modelling and predicting their overall performance capabilities, including the presence of operational independence, emergent behaviour, and evolutionary development. When considering systems of systems within the autonomous defence systems context, these aspects become increasingly critical, as constraints on the performance of the final system are typically driven by hard constraints on space, weight and power. System execution modelling languages and tools permit early prediction of the performance of model-driven systems; however, the focus to date has been on understanding the performance of a model rather than determining whether it meets performance requirements, and only subsequently carrying out analysis to reveal the causes of any requirement violations. Moreover, such an analysis is even more difficult when applied to several systems cooperating to achieve a common goal: a system of systems. In this article, we propose an integrated approach to performance prediction of model-driven real-time embedded defence systems and systems of systems. Our architectural prototyping system supports a scenario-driven experimental platform for evaluating model suitability within a set of deployment and real-time performance constraints. We present an overview of our performance prediction system, demonstrating the integration of modelling, execution and performance analysis, and discuss a case study to illustrate our approach.
In this study we report on a Systematic Mapping Study (SMS) for Domain-Specific Languages (DSLs), based on an automatic search including primary studies from journals, conferences, and workshops during the period from 2006 until 2012. Objective: The main objective of the described work was to perform an SMS on DSLs to better understand the DSL research field and identify research trends and any possible open issues. The set of research questions was inspired by a DSL survey paper published in 2005. Method: We conducted an SMS over 5 stages: defining research questions, conducting the search, screening, classifying, and data extraction. Our SMS included 1153 candidate primary studies from the ISI Web of Science and ACM Digital Library, of which 390 primary studies were classified after screening. Results: This SMS discusses two main research questions: the research space and the trends/demographics of the literature within the field of DSLs. Both research questions are further subdivided into several research sub-questions. The results from the first research question clearly show that the DSL community focuses more on the development of new techniques/methods than on investigating the integration of DSLs with other software engineering processes or measuring the effectiveness of DSL approaches. Furthermore, there is a clear lack of evaluation research. Among the different DSL development phases, more attention is needed with regard to domain analysis, validation, and maintenance. The second research question revealed that the number of publications has remained stable and has not increased over the years. Top-cited papers and venues are mentioned, as are the more active institutions carrying out DSL research. Conclusion: The statistical findings regarding the research questions paint an interesting picture of the mainstreams of the DSL community, as well as open issues where researchers can improve their research in future work.
Safety has often been equated with reliability and robustness. However, safety needs to be treated as a separate and important system quality. In this paper, software safety is distinguished from these other qualities and formally defined. The paper also examines the possibility of using three different verification approaches - state machines, temporal logic, and fault trees - to verify software safety.
Reviewing software system architecture to pinpoint potential security flaws before proceeding with system development is a critical milestone in secure software development lifecycles. This includes identifying possible attacks or threat scenarios that target the system and may result in a breach of system security. Additionally, we may also assess the strength of the system and its security architecture using well-known security metrics such as system attack surface, compartmentalization, least privilege, etc. However, existing efforts are limited to specific, predefined security properties or scenarios that are checked either manually or using limited toolsets. We introduce a new approach to support architecture security analysis using security scenarios and metrics. Our approach is based on formalizing attack scenario and security metric signature specifications using the Object Constraint Language (OCL). Using formal signatures, we analyse a target system to locate signature matches (for attack scenarios) or to take measurements (for security metrics). New scenarios and metrics can be incorporated and calculated provided that a formal signature can be specified. Our approach supports defining security metrics and scenarios at the architecture, design, and code levels. We have developed a prototype software system architecture security analysis tool. To the best of our knowledge, this is the first extensible architecture security risk analysis tool that supports both metric-based and scenario-based architecture security analysis. We have validated our approach by using it to capture and evaluate signatures from the NIST security principles and attack scenarios defined in the CAPEC database.
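The paper expresses these signatures in OCL; purely as a language-neutral illustration of what a metric measurement over an architecture model might look like, here is a toy attack-surface count in Python (the model shape and field names are our assumptions, not the paper's metamodel):

```python
def attack_surface(components):
    """Toy analogue of an attack-surface measurement: count interfaces
    exposed to untrusted callers. `components` maps a component name to
    a dict with 'interfaces' and 'exposed' fields (assumed model shape)."""
    return sum(
        len(c["interfaces"])
        for c in components.values()
        if c.get("exposed", False)
    )

model = {
    "WebGateway": {"interfaces": ["login", "search"], "exposed": True},
    "OrderService": {"interfaces": ["placeOrder"], "exposed": False},
}
print(attack_surface(model))  # -> 2: only the gateway faces untrusted input
```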
With the recent success of embeddings in natural language processing, research has been conducted into applying similar methods to code analysis. Most works attempt to process the code directly or use a syntactic tree representation, treating it like sentences written in a natural language. However, none of the existing methods are sufficient to comprehend program semantics robustly, due to structural features such as function calls, branching, and interchangeable order of statements. In this paper, we propose a novel processing technique to learn code semantics, and apply it to a variety of program analysis tasks. In particular, we stipulate that a robust distributional hypothesis of code applies to both human- and machine-generated programs. Following this hypothesis, we define an embedding space, inst2vec, based on an Intermediate Representation (IR) of the code that is independent of the source programming language. We provide a novel definition of contextual flow for this IR, leveraging both the underlying data- and control-flow of the program. We then analyze the embeddings qualitatively using analogies and clustering, and evaluate the learned representation on three different high-level tasks. We show that even without fine-tuning, a single RNN architecture and fixed inst2vec embeddings outperform specialized approaches for performance prediction (compute device mapping, optimal thread coarsening); and algorithm classification from raw code (104 classes), where we set a new state-of-the-art.
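As a drastically simplified stand-in for the embedding step (the real inst2vec derives contexts from a contextual-flow graph over LLVM IR, not from linear statement order), one could train a skip-gram model over IR statements, e.g. with gensim:

```python
from gensim.models import Word2Vec  # pip install gensim

# Each "sentence" is a sequence of IR statements treated as opaque tokens.
# inst2vec instead builds contexts from data- and control-flow edges; the
# linear window below is a deliberate simplification for illustration.
ir_corpus = [
    ["%1 = load i32", "%2 = add i32 %1, 1", "store i32 %2"],
    ["%1 = load i32", "%2 = mul i32 %1, 2", "store i32 %2"],
]
model = Word2Vec(sentences=ir_corpus, vector_size=32, window=2,
                 min_count=1, sg=1, epochs=50)
print(model.wv.most_similar("%1 = load i32", topn=1))
```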
For computer-based simulations to be scientifically useful and scientifically credible, they need to be developed to high standards, and argued fit-for-purpose. The CoSMoS project has developed an approach to support such development, and codified its approach in a pattern language. Here we overview this pattern language, and discuss several example simulation development patterns and antipatterns.
Software systems are increasingly open, handle large amounts of personal or other sensitive data, and are intricately linked with the daily lives of individuals and communities. This poses a range of privacy requirements. Such privacy requirements are typically treated as instances of requirements pertaining to compliance, traceability, access control, verification, or usability. Though important, such approaches assume that the scope for the privacy requirements can be established a priori and that such scope does not vary drastically once the system is deployed. User data and information, however, exist in an open, hyper-connected and potentially “unbounded” environment. Furthermore, “privacy requirements – present” and “privacy requirements – future” may differ significantly, as the privacy implications are often emergent a posteriori. Effective treatment of privacy requirements, therefore, requires techniques and approaches that fit with the inherent openness and fluidity of the environment through which user data and information flow. This paper surveys the state of the art and presents some potential directions in the way privacy requirements should be treated. We reflect on the limitations of existing approaches with regard to unbounded privacy requirements and highlight a set of key challenges for requirements engineering research with regard to managing privacy in such unbounded settings.
The feedback from architectural decisions to the elaboration of requirements is an established concept in the software engineering community. However, pinpointing the nature of this feedback in a precise way is a largely open problem. Often, the feedback is generically characterized as additional qualities that might be affected by an architect’s choice. This paper provides a practical perspective on this problem by leveraging architectural security patterns. The contribution of this paper is the Security Twin Peaks model, which serves as an operational framework to co-develop security in the requirements and the architectural artifacts.
Architecture is the set of principal design decisions about a software system. In practice, new architectural decisions are added and existing ones reversed or modified throughout a system's lifetime. Frequently, these decisions deviate from the architect's well-considered intent, and software systems regularly exhibit increased architectural decay as they evolve. The manifestations of such ill-considered design decisions are known as “architectural smells”. To date, there has been no in-depth study of the characteristics or trends involving this phenomenon. Instead, when referring to architectural smells and their negative effects, both researchers and practitioners have had to rely on folklore and their personal, inherently limited experience. In this paper, we report on a systematic step we have taken toward investigating the nature and impact of architectural smells. We have selected a set of representative architectural smells from the literature and analyzed their instances in 421 versions of 8 open-source software systems. We have (1) developed algorithms to automatically detect instances of multiple architectural smell types, and (2) analyzed relationships between the detected smells and the issues reported in the systems' respective issue trackers. Our study shows that architectural smells have tangible negative consequences in the form of implementation issues, as well as code commits requiring increased maintenance effort, throughout a system's lifetime.
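For a flavor of what detecting one representative architectural smell might involve (the paper's detectors and smell catalogue are richer than this), here is a minimal dependency-cycle finder in Python:

```python
def dependency_cycles(deps):
    """Detect one representative architectural smell, the dependency
    cycle, via DFS. `deps` maps a module to the modules it depends on."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {m: WHITE for m in deps}
    cycles = []

    def dfs(m, path):
        color[m] = GRAY  # m is on the current DFS stack
        for n in deps.get(m, []):
            if color.get(n, WHITE) == GRAY:
                # Back edge: n is on the stack, so path from n to m cycles.
                cycles.append(path[path.index(n):] + [n])
            elif color.get(n, WHITE) == WHITE:
                dfs(n, path + [n])
        color[m] = BLACK

    for m in deps:
        if color[m] == WHITE:
            dfs(m, [m])
    return cycles

print(dependency_cycles({"ui": ["core"], "core": ["db"], "db": ["ui"]}))
# -> [['ui', 'core', 'db', 'ui']]
```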
The need to support software architecture evolution has been well recognized, even more so since the rise of agile methods. However, assuring conformance between architecture descriptions and the implementation remains challenging. Inconsistencies emerge among multiple architecture descriptions, and between architecture descriptions and code. As a consequence, architecture descriptions are not always trusted and used to the extent that their authors wish. In this paper, we present two surveys, with 93 and 72 participants, examining architectural inconsistencies, with a focus on how they evolve over time and can be mitigated using practical guidelines. We identified the importance of capturing emerging elements to keep the architecture description consistent with the implementation, and of considering the current-state and future-state architecture separately. Consequences of inconsistencies typically arise at later stages, especially if an architecture description concerns multiple teams. Our guidelines suggest limiting the upfront architecture to stable decisions, while paying attention to concerns that matter across team borders. Ideally, companies should aim to integrate architects into the teams to capture emerging aspects over time.
Recent advances in machine learning have stimulated widespread interest within the Information Technology sector in integrating AI capabilities into software and services. This goal has forced organizations to evolve their development processes. We report on a study that we conducted on observing software teams at Microsoft as they develop AI-based applications. We consider a nine-stage workflow process informed by prior experiences developing AI applications (e.g., search and NLP) and data science tools (e.g., application diagnostics and bug reporting). We found that various Microsoft teams have united this workflow into preexisting, well-evolved, Agile-like software engineering processes, providing insights about several essential engineering challenges that organizations may face in creating large-scale AI solutions for the marketplace. We collected some best practices from Microsoft teams to address these challenges. In addition, we have identified three aspects of the AI domain that make it fundamentally different from prior software application domains: 1) discovering, managing, and versioning the data needed for machine learning applications is much more complex and difficult than in other types of software engineering; 2) model customization and model reuse require very different skills than are typically found in software teams; and 3) AI components are more difficult to handle as distinct modules than traditional software components: models may be "entangled" in complex ways and experience non-monotonic error behavior. We believe that the lessons learned by Microsoft teams will be valuable to other organizations.
A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution. Digital signatures provide part of the solution, but the main benefits are lost if a trusted third party is still required to prevent double-spending. We propose a solution to the double-spending problem using a peer-to-peer network. The network timestamps transactions by hashing them into an ongoing chain of hash-based proof-of-work, forming a record that cannot be changed without redoing the proof-of-work. The longest chain not only serves as proof of the sequence of events witnessed, but proof that it came from the largest pool of CPU power. As long as a majority of CPU power is controlled by nodes that are not cooperating to attack the network, they'll generate the longest chain and outpace attackers. The network itself requires minimal structure. Messages are broadcast on a best effort basis, and nodes can leave and rejoin the network at will, accepting the longest proof-of-work chain as proof of what happened while they were gone.
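The proof-of-work chaining described in this abstract can be sketched in a few lines of Python; the difficulty, header fields, and JSON encoding below are simplifications for illustration, not Bitcoin's actual block format:

```python
import hashlib
import json

def mine_block(prev_hash, transactions, difficulty=4):
    """Find a nonce whose block hash has `difficulty` leading hex zeros.
    Changing any earlier block changes prev_hash here, invalidating this
    proof-of-work and every one after it, which is the property the
    abstract relies on."""
    nonce = 0
    while True:
        header = json.dumps({"prev": prev_hash, "txs": transactions,
                             "nonce": nonce}, sort_keys=True)
        digest = hashlib.sha256(header.encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return digest, nonce
        nonce += 1

# Each block commits to the previous block's hash, forming the chain.
genesis, _ = mine_block("0" * 64, ["coinbase -> alice"])
block2, _ = mine_block(genesis, ["alice -> bob: 5"])
print(block2)
```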