PhD and Habilitation theses created at our chair

Continuous Rationale Management

Author Anja Kleebaum
Type of work PhD thesis
Published in 2023
Abstract

Continuous Software Engineering (CSE) is a software life cycle model open to frequent changes in requirements or technology. During CSE, software developers continuously make decisions on the requirements and design of the software or the development process. They establish essential decision knowledge, which they need to document and share so that it supports the evolution and changes of the software. The management of decision knowledge is called rationale management. Rationale management provides an opportunity to support the change process during CSE.

However, rationale management is not well integrated into CSE. The overall goal of this dissertation is to provide workflows and tool support for continuous rationale management. The dissertation contributes an interview study with practitioners from industry, which investigates rationale management problems, current practices, and features that would make continuous rationale management beneficial for practitioners. The problems of rationale management in practice are threefold: First, documenting decision knowledge intrudes on the development process and requires additional effort. Second, the large amount of distributed decision knowledge documentation is difficult to access and use. Third, the documented knowledge can be of low quality, e.g., outdated, which impedes its use. The dissertation contributes a systematic mapping study on recommendation and classification approaches to treat these rationale management problems.

The major contribution of this dissertation is a validated approach for continuous rationale management consisting of the ConRat life cycle model extension and the comprehensive ConDec tool support. To reduce intrusiveness and additional effort, ConRat integrates rationale management activities into existing workflows, such as requirements elicitation, development, and meetings. ConDec integrates into standard development tools instead of providing a separate tool. ConDec enables lightweight capturing and use of decision knowledge from various artifacts and reduces the developers' effort through automatic text classification, recommendation, and nudging mechanisms for rationale management. To enable access and use of distributed decision knowledge documentation, ConRat defines a knowledge model of decision knowledge and other artifacts. ConDec instantiates the model as a knowledge graph and offers interactive knowledge views with useful tailoring, e.g., transitive linking. To operationalize high quality, ConRat introduces the rationale backlog, the definition of done for knowledge documentation, and metrics for intra-rationale completeness and decision coverage of requirements and code. ConDec implements these agile concepts for rationale management and a knowledge dashboard. ConDec also supports consistent changes through change impact analysis.
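For illustration only, the knowledge graph and its transitive linking can be pictured with a minimal sketch in Python; the node types, names, and traversal below are hypothetical simplifications, not ConDec's actual data model:

```python
# Minimal sketch of a decision knowledge graph with transitive linking.
# Node and link types are illustrative; ConDec's actual model is richer.
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        self.types = {}                # node -> "issue" | "decision" | "requirement" | "code"
        self.links = defaultdict(set)  # node -> directly linked nodes

    def add_node(self, name, node_type):
        self.types[name] = node_type

    def add_link(self, a, b):
        self.links[a].add(b)
        self.links[b].add(a)

    def transitive_neighbors(self, start, node_type):
        """All nodes of the given type reachable from start (transitive linking)."""
        seen, stack, result = {start}, [start], []
        while stack:
            node = stack.pop()
            for neighbor in self.links[node]:
                if neighbor not in seen:
                    seen.add(neighbor)
                    stack.append(neighbor)
                    if self.types[neighbor] == node_type:
                        result.append(neighbor)
        return result

graph = KnowledgeGraph()
graph.add_node("REQ-1", "requirement")
graph.add_node("Which database?", "issue")
graph.add_node("Use PostgreSQL!", "decision")
graph.add_link("REQ-1", "Which database?")
graph.add_link("Which database?", "Use PostgreSQL!")
# Transitive linking lets a view show decisions for REQ-1 even without a direct link:
print(graph.transitive_neighbors("REQ-1", "decision"))  # ['Use PostgreSQL!']
```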

The dissertation shows the feasibility, effectiveness, and user acceptance of ConRat and ConDec in six case study projects in an industrial setting. In addition, it comprehensively analyses the rationale documentation created in the projects. The validation indicates that ConRat and ConDec benefit CSE projects. Based on the dissertation, continuous rationale management should become a standard part of CSE, like automated testing or continuous integration.

Supporting Software Development by an Integrated Documentation Model for Decisions

Author Tom-Michael Hesse
Type of work PhD thesis
Published in 2020
Abstract

Decision-making is a vital activity during software development. Decisions made during requirements engineering, software design, and implementation guide the development process. In order to make decisions, developers may apply different strategies. For instance, they can search for alternatives and evaluate them according to given criteria, or they may rely on their personal experience and heuristics to make single solution claims. Thereby, knowledge emerges during the process of decision-making, as the content, outcome, and context of decisions are explored by developers. For instance, different solution options may be considered to address a given decision problem. Such knowledge grows rapidly, in particular when multiple developers are involved. Therefore, it should be documented to make decisions comprehensible in the future.

However, this documentation is often not performed by developers in practice. First, developers need to find and use a documentation approach that supports the decision-making strategies applied for the decision to be documented. Thus, documentation approaches are required to support multiple strategies. Second, due to the collaborative nature of decision-making during one or more development activities, decision knowledge needs to be captured and structured according to one integrated model, which can be applied during all these development activities.

This thesis uncovers two important reasons why the aforementioned requirements are currently not sufficiently fulfilled. First, it investigates which decision-making strategies can be identified in the documentation of decisions within issue tickets from the Firefox project. Interestingly, most documented decision knowledge originates from naturalistic decision making, whereas most current documentation approaches structure the captured knowledge according to rational decision-making strategies. Second, most decision documentation approaches focus on one development activity, so that, for instance, decision documentation during requirements engineering and implementation is not supported within the same documentation model.

The main contribution of this thesis is a documentation model for decision knowledge, which addresses these two findings. In detail, the documentation model supports the documentation of decision knowledge resulting from both naturalistic and rational decision-making strategies and integrates this knowledge within flexible documentation structures. Also, it is suitable for capturing decision knowledge during the three development activities of requirements engineering, design, and implementation. Furthermore, tool support for the model is presented, which allows developers to integrate decision capturing and documentation into their activities using the Eclipse IDE.
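As an illustration of what such an integrated model can look like, the following sketch uses invented element names; the thesis defines its own metamodel:

```python
# Illustrative sketch of an integrated decision documentation model.
# Element names and kinds are hypothetical, not the thesis's exact metamodel.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DecisionComponent:
    text: str
    kind: str  # e.g., "issue", "alternative", "criterion", "claim", "context"

@dataclass
class Decision:
    name: str
    activity: str                           # "requirements", "design", or "implementation"
    components: List[DecisionComponent] = field(default_factory=list)

# A naturalistic decision may consist of a single solution claim ...
nat = Decision("Logging", "implementation",
               [DecisionComponent("Use the existing logger.", "claim")])

# ... while a rational decision documents an issue, alternatives, and criteria.
rat = Decision("Persistence", "design", [
    DecisionComponent("Which storage technology?", "issue"),
    DecisionComponent("Relational database", "alternative"),
    DecisionComponent("Object store", "alternative"),
    DecisionComponent("Maintainability", "criterion"),
])
```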

Interaction-Based Creation and Maintenance of Continuously Usable Trace Links

Author Paul Hübner
Type of work PhD thesis
Published in 2020
Abstract

Traceability is a major concern for all software engineering artefacts. At the core of traceability are the trace links between artefacts. Among the links between all kinds of artefacts, trace links between requirements and source code are fundamental, since they connect the user's point of view of a requirement with its actual implementation. Trace links are important for many software engineering tasks, such as maintenance, program comprehension, and verification. Furthermore, the direct availability of trace links during a project improves the performance of developers.
The manual creation of trace links is too time-consuming to be practical. Thus, traceability research has a strong focus on automatic trace link creation. The most common automatic approaches use information retrieval techniques to measure the textual similarity between artefacts. The results of the textual similarity measurement are then used to decide whether links between artefacts should be created. The application of such information retrieval techniques results in many wrong link candidates, and further expert knowledge is required to make the automatically created links usable, since it is necessary to manually vet the link candidates. This prevents the use of information retrieval techniques to create trace links continuously and provide them directly to developers during a project.
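The information retrieval baseline described above can be sketched as TF-IDF cosine similarity with a link-candidate threshold; the artefact texts and threshold below are invented:

```python
# Sketch of IR-based trace link candidate creation via TF-IDF cosine
# similarity. Artefact texts and the threshold are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

requirement = "The user shall export reports as PDF files."
code_files = {
    "PdfExporter.java": "class PdfExporter { /* export a report as PDF */ void exportReport() {} }",
    "LoginService.java": "class LoginService { boolean login(String user) {} }",
}

matrix = TfidfVectorizer().fit_transform([requirement] + list(code_files.values()))
similarities = cosine_similarity(matrix[:1], matrix[1:])[0]

THRESHOLD = 0.2  # every pair above the threshold becomes a link candidate
for file_name, score in zip(code_files, similarities):
    if score > THRESHOLD:
        print("candidate link: REQ-1 ->", file_name, round(score, 2))
# Lowering the threshold also surfaces LoginService.java (via the shared
# word "user"), a wrong candidate of the kind that requires manual vetting.
```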
Thus, this thesis addresses the problem of continuously providing trace links of good quality to developers during a project and of maintaining these links along with changing artefacts. To achieve this, a novel automatic trace link creation approach called ILog has been designed and evaluated. ILog utilizes the interactions of developers with source code while they implement requirements. In addition, ILog uses the common development convention of providing issue identifiers in commit messages to assign the recorded interactions to requirements. Thus, ILog avoids additional manual effort for link creation on the part of developers.
ILog has been implemented in a set of tools. The tools enable the recording of interactions in different integrated development environments and the subsequent creation of trace links. Trace links are created between the source code files touched by interactions and the requirement currently being worked on. The initially created trace links are further improved by utilizing interaction data, such as interaction duration, frequency, and type, as well as source code structure, i.e., references between the source code files involved in trace links. ILog's link improvement removes potentially wrong links and subsequently adds further correct links.
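The following sketch illustrates the two ideas in combination, extracting the issue identifier from a commit message and weighting touched files by interaction data; the identifier pattern, weights, and threshold are invented, not ILog's actual rules:

```python
# Sketch: interaction-based trace link creation in the style described above.
# The issue-ID pattern, interaction records, and threshold are illustrative.
import re
from collections import defaultdict

ISSUE_ID = re.compile(r"\b([A-Z]+-\d+)\b")  # e.g. "PROJ-42: implement export"

def links_from_interactions(commit_message, interactions):
    """interactions: list of (file, kind, duration_seconds) recorded in the IDE."""
    match = ISSUE_ID.search(commit_message)
    if not match:
        return {}
    requirement = match.group(1)
    effort = defaultdict(float)
    for file, kind, duration in interactions:
        weight = 2.0 if kind == "edit" else 1.0   # edits count more than selects
        effort[file] += weight * duration
    # Link improvement: drop files with negligible interaction effort.
    return {requirement: [f for f, e in effort.items() if e >= 10.0]}

print(links_from_interactions(
    "PROJ-42: implement report export",
    [("PdfExporter.java", "edit", 300.0), ("Util.java", "select", 4.0)]))
# {'PROJ-42': ['PdfExporter.java']}
```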
ILog was evaluated in three empirical studies using gold standards created by experts. One of the studies used data from an open source project; the two other studies used student projects involving a real-world customer. The results of the studies showed that ILog can create trace links with perfect precision and good recall, which enables the direct usage of the links. The studies also showed that the ILog approach achieves better precision and recall than other automatic trace link creation approaches, such as those based on information retrieval.
To identify trace link maintenance capabilities suitable for integration into ILog, a systematic literature review on trace link maintenance was performed. The trace link maintenance approaches found in the review are discussed on the basis of a standardized trace link maintenance process. Furthermore, the extension of ILog with suitable trace link maintenance capabilities from the approaches found is illustrated.

Retrospective Semi-automated Software Feature Extraction from Natural Language User Manuals

Author Thomas Quirchmayr
Type of work PhD thesis
Published in 2018
Abstract

Mature software systems comprise a vast number of heterogeneous system capabilities, which are usually requested by different groups of stakeholders and which evolve over time. Software features describe and bundle low-level capabilities logically on an abstract level and thus provide a structured and comprehensive overview of the entire capabilities of a software system. However, software features are often not explicitly managed. Quite the contrary: software feature-relevant information is often spread across several software engineering artifacts (e.g., user manuals, issue tracking systems). It requires huge manual effort to (1) identify and extract software feature-relevant information from these artifacts in order to make software feature knowledge explicit and, furthermore, to (2) determine which software features the disclosed software feature-relevant information belongs to. This thesis presents a three-step approach to semi-automatically enhance software features with software feature-relevant information from a user manual: first, a domain terminology is semi-automatically extracted from a natural language user manual based on linguistic patterns. Second, the extracted domain terminology, structural sentence information, and natural language processing techniques are used to automatically identify and extract atomic software feature-relevant information with an F1-score of at least 92.00%.
Finally, the atomic software feature-relevant information is semi-automatically assigned to existing, logically related software features. The approach is empirically evaluated by means of a user manual and corresponding gold standards of an industrial partner.
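For reference, the reported F1-score is the harmonic mean of precision and recall (standard definitions, shown here with made-up counts):

```python
# Standard precision/recall/F1 computation on made-up example counts.
tp, fp, fn = 92, 8, 8                 # true positives, false positives, false negatives
precision = tp / (tp + fp)            # 0.92
recall = tp / (tp + fn)               # 0.92
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))                   # 0.92
```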

Domain-specific Adaption of Requirements Engineering Methods

Author Christian Kücherer
Type of work PhD thesis
Published in 2018
Abstract

Requirements are fundamental for the development of software-based information systems (ISs). Stakeholder needs for such ISs are documented as requirements following a requirements engineering (RE) method. Requirements are specific to the application domain for which ISs are developed and in which they are used. A system domain is represented by ISs that share a minimal set of common requirements to solve similar domain-independent problems. Both domain-specific aspects need to be considered explicitly during the specification of ISs. Generic RE methods can be used in different domains but do not explicitly consider domain-specific details. A solution to this problem is the domain-specific adaptation of RE methods. Domain-specific modeling languages (DSMLs) allow conceptual modeling in specific system domains. Domain ontologies provide formalized domain knowledge of an application domain.

The objective of this thesis is to investigate, using the example of the task-oriented RE conceptual framework TORE, (1) how a generic RE method can be adapted to consider system domain-specifics with the use of a DSML, and (2) how a generic RE method can be adapted to use an application domain ontology. For the system domain adaptation, we use a personal decision support system (PDSS). The PDSS supports a Chief Information Officer (CIO) in decision-making with tasks in information management (IM). For the adaptation to the application domain, we use IM in hospitals, represented by the semantic network of information management in hospitals (SNIK) domain ontology.

The results of this investigation consist of two method adaptations: first, the system domain-specific DsTORE, and second, the application domain-specific TOREOnto. The contributions of the system domain-specific adaptation DsTORE are fourfold. First, an as-is domain study provides details about the information management department of a specific hospital in order to understand the organizational context for the PDSS to be employed. Second, an exploratory case study shows the extent to which task-oriented requirements engineering (TORE) supports the RE specification of a PDSS. Third, the design of DsTORE provides the system domain-specific adaptation of TORE to support the specification of PDSS. Fourth, a case study documents the evaluation of DsTORE. The application domain-specific adaptation TOREOnto consists of three contributions. First, a literature review provides the state of the art regarding the use of domain ontologies in RE, describing nine different usage scenarios of domain ontologies to improve the quality of requirements. Second, the design of TOREOnto provides the application domain-specific adaptation to support the improvement of requirements quality. Third, a case study shows the retrospective evaluation of TOREOnto with RE artifacts created in this thesis.

The overall research method of this thesis is Design Science according to Wieringa. The problem investigation of domain ontology usage in RE is based on a systematic literature review following Kitchenham and Charters.

Measuring Anticipated Satisfaction

Author Rumyana Proynova
Type of work PhD thesis
Published in 2018
Abstract

When developing a software system, one of the early steps is to create a requirements specification. Validating this specification saves implementation effort which might otherwise be spent on building a system with the wrong features. Ideally, this validation should involve many stakeholders representing different groups, to ensure coverage of a variety of viewpoints. However, the usual requirements validation methods, such as personal interviews, only allow the involvement of a few stakeholders before the costs become prohibitive, so it is difficult to apply them at the needed scale. If the requirements specification contains undesirable features, they are likely to be discovered during usability testing. Many usability methods can involve a high number of users at a low cost, for example satisfaction surveys and A/B testing in production. They can give high-quality information about improving the system, but they require a completed system, or at least an advanced prototype, before they can be used.

We create a method for measuring user satisfaction before building the system, which we call anticipated satisfaction to distinguish it from the actual satisfaction measured after the user has experienced the system. The method uses a questionnaire which contains short descriptions of the software system's features and asks the users to imagine how satisfied they would be when using a system with the described features. The method is flexible, as we do not create a single questionnaire to use. Instead, we give guidance on which variables can be measured with the questionnaire and how to create questions for them. This allows the development team to tailor the questionnaire to the specific situation in their project. When we applied it in two validation studies, it discovered significant issues and was rated favorably by both the software development team and the users.

Our method contributes to the discipline of software engineering by offering a new option for validating software requirements. It is more scalable than interviewing users, and can be employed before the implementation phase, allowing for early problem detection. The effort required to apply it is low, and the information gained is seen as useful by both developers and managers, which makes it a good candidate for use in commercial projects.

 

Identification of Software Features in Issue Tracking System Data

Author Thorsten Merten
Type of work PhD thesis
Published in 2017
Abstract

The knowledge of Software Features (SFs) is vital for software developers and requirements specialists during all software engineering phases: to understand and derive software requirements, to plan and prioritize implementation tasks, to update documentation, or to test whether the final product correctly implements the requested SF. In most software projects, SFs are managed in conjunction with other information, such as bug reports, programming tasks, or refactoring tasks, with the aid of Issue Tracking Systems (ITSs). Hence, ITSs contain a variety of information that is only partly related to SFs.

In practice, however, the usage of ITSs to store SFs comes with two major problems: (1) ITSs are neither designed nor used as documentation systems. Therefore, the data inside an ITS is often uncategorized, and SF descriptions are concealed in rather lengthy issue texts. (2) Although an SF is often requested in a single sentence, related information can be scattered among many issues. For example, implementation tasks related to an SF are often reported in additional issues. Hence, the detection of SFs in ITSs is complicated: a manual search for SFs implies reading, understanding, and exploiting the Natural Language (NL) in many issues in detail. This is cumbersome and labor-intensive, especially if related information is spread over more than one issue.

This thesis investigates whether SF detection can be supported automatically. First, the problem is analyzed: (i) An empirical study shows that requests for important SFs reside in ITSs, making ITSs a good target for SF detection. (ii) A second study identifies characteristics of the information and related NL in issues. These characteristics represent opportunities as well as challenges for the automatic detection of SFs.

Based on these problem studies, the Issue Tracking Software Feature Detection Method (ITSoFD) is proposed. The method has two main components and includes an approach to preprocess issues. Both components address one of the problems associated with storing SFs in ITSs. ITSoFD is validated in three solution studies: (I) An empirical study researches how NL that describes SFs can be detected with techniques from Natural Language Processing (NLP) and Machine Learning. Issues are parsed, and different characteristics of the issue and its NL are extracted. These characteristics are used to classify the issue's content and identify SF description candidates, thereby approaching problem (1). (II) An empirical study researches how issues that carry information potentially related to an SF can be detected with techniques from NLP and Information Retrieval. Characteristics of the issue's NL are utilized to create a traceability network of related issues, thereby approaching problem (2). (III) An empirical study researches how NL data in issues can be preprocessed using heuristics and hierarchical clustering. Code, stack traces, and other technical information are separated from NL. Heuristics are used to identify candidates for technical information, and clustering improves the heuristics' results. The technique can be applied to support components (I) and (II).
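Study (III) can be pictured with simple line-based heuristics that separate technical content from NL; the patterns below are illustrative examples, not the thesis's actual rules:

```python
# Sketch of heuristics that separate technical content from natural language
# in issue text; the patterns are illustrative examples only.
import re

TECHNICAL_PATTERNS = [
    re.compile(r"^\s+at [\w.$]+\(.*\)$"),                 # Java stack trace frame
    re.compile(r"^[A-Za-z_.]+(Exception|Error)(:.*)?$"),  # exception header
    re.compile(r"[{};]\s*$"),                             # line ends like code
]

def split_issue_text(text):
    natural, technical = [], []
    for line in text.splitlines():
        if any(p.search(line) for p in TECHNICAL_PATTERNS):
            technical.append(line)
        else:
            natural.append(line)
    return natural, technical

nl, tech = split_issue_text(
    "The export button should save a PDF.\n"
    "java.lang.NullPointerException\n"
    "    at app.PdfExporter.exportReport(PdfExporter.java:42)")
print(nl)    # ['The export button should save a PDF.']
```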

 

User-Developer Communication in Large-Scale IT Projects

Author Ulrike Abelein
Type of work PhD thesis
Published in 2015
Abstract

User participation and involvement (UPI) in software development has been studied for a long time and is considered essential for a successful software system. The positive effects of involving users in software development include improving quality thanks to information about precise requirements, avoiding unnecessarily expensive features through enhanced alignment between developers and users, creating a positive attitude toward the system among users, and enabling effective use of the system. However, large-scale IT (LSI) projects that use traditional development methods tend to involve users only at the beginning of the development process (i.e., in the specification phase) and at the end (i.e., in the verification and validation phases), or not at all. Yet even if developers involve users at the beginning and the end, there are important decisions that affect users in the phases in between (i.e., design and implementation), which are rarely communicated to the users. This lack of communication between users and developers in the design and implementation phases results in users who do not feel integrated into the project, have little motivation to participate, and do not see their requirements manifested in the resulting system. Therefore, it is important to study how user-developer communication (UDC) in the design and implementation phases can be enhanced in LSI projects in order to increase system success.
The thesis follows the technical action research (TAR) approach with the four phases of problem investigation, treatment design, design validation, and implementation evaluation. In the problem investigation phase, we conducted a systematic mapping study and assessed the state of UDC practice with experts. In the treatment design phase, we designed the UDC-LSI method with experts, and we validated its design with experts in the design validation phase. Finally, in the implementation evaluation phase, we evaluated the implementation of the method in a case study. This thesis first presents a meta-analysis of evidence of the effects of UPI on system success in general and explores the methods in the literature that aim to increase UPI in software development. Second, we investigate the state of UDC practice with experts, analyzing current practices and obstacles of UDC in LSI projects. Third, we propose the UDC-LSI method, which supports the enhancement of UDC in LSI projects, and present a descriptive classification containing user-relevant decisions (and, therefore, trigger points to start UDC) that can be used with our method. We also show the validity of the method through an assessment by experts, who see potential in the UDC-LSI method. Fourth, we demonstrate the results of a retrospective validation of the method in the real-life context of a large-scale IT project. The evaluation showed that the method is feasible to implement, has a positive effect on system success, and is efficient from the perspective of project participants. Furthermore, project participants consider the UDC-LSI method usable and are likely to use it in future projects.

 

Supporting the Quality Assurance of a Scientific Framework

Author Hanna Remmel
Type of work PhD thesis
Published in 2014
Abstract

The quality assurance of scientific software has to deal with special challenges of this type of software, including missing test oracles, the need for high performance computing, and the high priority of non-functional requirements. A scientific framework consists of common code, which provides solutions for several similar mathematical problems. The various possible uses of a scientific framework lead to a large variability in the framework. In addition to the challenges of scientific software, the quality assurance of a scientific framework needs to find a way of dealing with the large variability.
In software product line engineering (SPLE), the idea is to develop a software platform and then use mass customization for the creation of a group of similar applications. In this thesis, we show how SPLE, in particular variability modeling, can be applied to support the quality assurance of scientific frameworks.
One of the main contributions of this thesis is a process for the creation of reengineering variability models for a scientific framework based on its mathematical requirements. Reengineering means the adjustment of a software system to improve the software quality, mostly without changing the software’s functionality. In our research, the variability models are created for existing software and therefore we call them reengineering variability models. The created variability models are used for a systematic development of system test applications for the framework. Additionally, we developed a model-based method for test case derivation for the system test applications based on the variability models.
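As a toy illustration of systematically deriving test applications from a variability model, one can enumerate the valid configurations of a small feature model; the features and constraints below are invented and much simpler than real scientific frameworks:

```python
# Toy sketch: enumerate valid configurations of a small variability model
# and use each one as a system test application. Features are invented.
from itertools import product

features = ["grid_2d", "grid_3d", "adaptive_refinement"]

def valid(config):
    on = {f for f, chosen in zip(features, config) if chosen}
    if not ({"grid_2d", "grid_3d"} & on):       # at least one grid dimension
        return False
    if {"grid_2d", "grid_3d"} <= on:            # but not both at once
        return False
    if "adaptive_refinement" in on and "grid_3d" not in on:
        return False                            # invented cross-tree constraint
    return True

test_applications = [
    {f for f, chosen in zip(features, config) if chosen}
    for config in product([False, True], repeat=len(features)) if valid(config)
]
print(test_applications)
# three valid configurations: {grid_3d}, {grid_3d, adaptive_refinement}, {grid_2d}
```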
Furthermore, we contribute a software product line test strategy for scientific frameworks. A test strategy strongly influences the test activities performed. Another main contribution of this thesis is the design of a quality assurance process for scientific frameworks, which combines the test activities of the test strategy with other quality assurance activities. We introduce a list of special characteristics for scientific software, which we use as rationale for the design of this process.
We report on a case study analyzing the feasibility and the acceptance by developers of two parts of the designed quality assurance process: variability model creation and desk-checking, a kind of lightweight review. Using FeatureIDE, an environment for feature-oriented software development, as well as an automated test environment, we prototypically demonstrate the applicability of our approach.

Aligning Business Process Quality and Information System Quality

Author Robert Heinrich
Type of work PhD thesis
Published in 2013
Abstract

Business processes and information systems mutually affect each other in non-trivial ways. Frequently, the business process design and the information system design are not well aligned. This means that business processes are designed without taking the information system impact into account, and vice versa. Missing alignment at design time often results in quality problems at runtime, such as long response times of information systems, long process execution times, overloaded information systems, or interrupted processes.
Aligning business process quality and information system quality at design time requires solving the following problems (P). Business process quality and information system quality have to be characterized. P1: In contrast to information system quality, which is specified, for example, in the ISO/IEC 9126 standard, there is no common and comprehensive understanding of business process quality. P2: Beyond that, current business process modeling notations do not aim to represent quality aspects. The impact of a business process on the quality of an information system, and vice versa, is unknown at design time. P3: The mutual impact between business processes and information systems must be predicted at design time.
In this thesis, the Business Process Quality Reference-Model (BPQRM), a quality model for business processes, is introduced. The model allows for a comprehensive characterization of business process quality (P1). The BPQRM is applied successfully in a case study to identify potential for process quality improvement in practice. Based on the BPQRM, an existing process modeling notation is extended by model elements to represent quality aspects (P2). Simulation is a powerful means to predict the impact of a business process on the quality of an information system, and vice versa, at design time. This thesis proposes two simulation approaches to predict the mutual impact between business processes and information systems in terms of performance (P3). The approach Business IT Impact Simulation (BIIS) defines interfaces between the business process simulation and the information system simulation; performance-relevant information is exchanged between both simulations via these interfaces. When business process simulation and information system simulation are used in isolation, workload burstiness is not adequately reflected. This is especially true for occasional, volatile peak loads. Workload burstiness can significantly affect the performance of business processes and information systems. The approach Integrated Business IT Impact Simulation (IntBIIS), which integrates business processes and information systems in a single simulation, allows workload burstiness to be reflected correctly. The simulation approaches support the comparison of design alternatives and the verification of a certain design against requirements. A case study confirms the feasibility in practice and the acceptance from the practitioners' point of view.
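To make the mutual performance impact concrete: the workload a business process generates drives information system response times, which in turn lengthen process execution. Even a simple analytic M/M/1 queueing model, used here purely as an illustration (the thesis uses full simulations, not this formula), shows the effect:

```python
# Illustration only: an M/M/1 queue shows how process-generated workload
# affects information system response time. The thesis itself integrates
# full business process and information system simulations.
def mm1_response_time(arrival_rate, service_rate):
    if arrival_rate >= service_rate:
        raise ValueError("system is overloaded")
    return 1.0 / (service_rate - arrival_rate)

service_rate = 10.0                      # IS completes 10 requests per second
for arrival_rate in (2.0, 8.0, 9.5):     # workload produced by the process
    print(arrival_rate, "->", round(mm1_response_time(arrival_rate, service_rate), 2), "s")
# 2.0 -> 0.12 s, 8.0 -> 0.5 s, 9.5 -> 2.0 s: bursty peak loads dominate
```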

Tracing Requirements and Source Code During Software Development

Author Alexander Delater
Type of work PhD thesis
Published in 2013
Abstract

Traceability supports the software development process in various ways, amongst others change management, software maintenance, and the prevention of misunderstandings. Traceability links between requirements and code are vital to support these development activities, e.g., for navigating from a requirement to its realization in the code, and vice versa. However, in practice, traceability links between requirements and code are often not created during development, because this would require increased development effort. This reduces the possibilities for developers to use these links during development.
To address this weakness, this thesis presents an approach that (semi-)automatically captures traceability links between requirements and code during development. We do this by using work items from project management, which are typically stored in issue trackers. The presented approach consists of three parts. The first part comprises a Traceability Information Model (TIM) consisting of artifacts from three different areas, namely requirements engineering, project management, and code, as well as the traceability links between them. The second part presents three processes for capturing traceability links between requirements, work items, and code during development. The third part defines an algorithm that automatically infers traceability links between requirements and code based on the interlinked work items. The traceability approach is implemented as an extension to the model-based CASE tool UNICASE, called the UNICASE Trace Client.
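The inference step can be sketched as a transitive rule: if a requirement is linked to a work item and the work item is linked to code, a requirement-to-code link is inferred. The link data below is invented, and UNICASE's actual algorithm is more elaborate:

```python
# Sketch of inferring requirement-to-code links through interlinked work items.
# Link data is invented; the UNICASE Trace Client implements its own algorithm.
req_to_work_item = {"REQ-1": ["WI-7"], "REQ-2": ["WI-8"]}
work_item_to_code = {"WI-7": ["PdfExporter.java"], "WI-8": []}

def infer_links(req_to_wi, wi_to_code):
    inferred = {}
    for req, work_items in req_to_wi.items():
        code = sorted({f for wi in work_items for f in wi_to_code.get(wi, [])})
        if code:
            inferred[req] = code
    return inferred

print(infer_links(req_to_work_item, work_item_to_code))
# {'REQ-1': ['PdfExporter.java']}
```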
Practitioners and researchers have discussed the practice of using work items to capture links between requirements and code, but there has been no systematic study of this practice. This thesis provides a first empirical study based on the application of the presented approach. The approach and its tool support are applied in three different software development projects conducted with undergraduate students, and their feasibility and practicability are evaluated. The feasibility results indicate that the approach creates correct traceability links between all artifacts with high precision and recall during development. At the same time, the practicability results indicate that the subjects found the approach and its tool support easy to use. In a second empirical study, we compare the presented approach with an existing technique for the automatic creation of traceability links between requirements and code. The results indicate that the presented approach outperforms the existing technique in terms of the quality of the created traceability links.

 

Justified Test Foci Definition: An Empirical Approach

Author Timea Illes-Seifert
Type of work PhD thesis
Published in 2011
Abstract

Since complete testing is not possible, testers have to focus their effort on those parts of the software which they expect to contain defects, the test foci. Despite the crucial importance of a systematic and justified definition of the test foci, this task is not well established in practice. Usually, testing resources are uniformly distributed among all parts of the software. A risk of this approach is that parts which contain defects are not sufficiently tested, whereas areas that do not contain defects receive too much consideration. In this thesis, a systematic approach is introduced that allows testers to make justified decisions on the test foci.

For this purpose, structural as well as historical characteristics of the software's past releases are analysed visually and statistically in order to find indicators for the software's defects. Structural characteristics refer to the internal structure of the software; this thesis concentrates on the analysis of bad software characteristics, also known as "bad smells". The historical characteristics considered in this thesis are the software's change history and the software's age. Simple and combined analyses of defect variance are introduced in order to determine indicators for defects in software. For this purpose, the defect variance analysis diagram is used to explore the relationship between the software's characteristics and its faultiness visually. Then, statistical procedures are applied in order to determine whether the results obtained visually are statistically significant.

The approach is validated in the context of open source development as well as in an industrial setting. For this purpose, seven open source programs as well as several releases of a commercial program are analysed. Thus, the thesis increases the empirical body of knowledge concerning the empirical validation of indicators for defects in software. The results show that there is a subset of bad smells that are well suited as indicators for defects in software. A good indicator in most of the analysed programs is the "God Class" bad smell. Among the historical characteristics analysed in the industrial context, the number of distinct authors as well as the number of changes performed to a file proved to be useful indicators for defects in software.
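The statistical step can be pictured as a rank correlation between a characteristic and defect counts across releases; the data below is made up, and the thesis combines visual defect variance analysis with such significance tests:

```python
# Sketch: test whether a file characteristic correlates with defect counts.
# Data is made up; the thesis pairs visual defect variance analysis
# with statistical significance tests.
from scipy.stats import spearmanr

changes_per_file = [3, 15, 1, 22, 8, 30, 2, 12]   # e.g., commits across past releases
defects_per_file = [0, 4, 0, 6, 2, 9, 1, 3]

rho, p_value = spearmanr(changes_per_file, defects_per_file)
if p_value < 0.05 and rho > 0:
    print(f"changes are a defect indicator (rho={rho:.2f}, p={p_value:.3f})")
```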

 

Executable Dialog Models for Rich and Context-Dependent User Interfaces

Author Jürgen Rückert
Type of work PhD thesis
Published in 2010
Abstract

In today's software development projects, creating the user interface is one of the most labor-intensive activities. Numerous methods are available for requirements specification, architecture definition, detailed design, implementation, and quality assurance. The detailed design of user interfaces can benefit from the insights of model-based software development. User interfaces are then modeled with the help of presentation and dialog models. Presentation models describe the appearance of the user interface, dialog models its behavior. Modern user interfaces must fulfill particular non-functional requirements that strongly shape the runtime behavior of the user interface: they should offer rich graphical objects, adapt to the context of use, be usable in service-oriented architectures, and be portable between software and hardware platforms. This thesis addresses these non-functional requirements with the help of executable behavior models, so-called Guilets.

 

Integration Testing: Test Process, Test Focus, and Integration Order

Author Lars Borner
Type of work PhD thesis
Published in 2010
Abstract

The goal of integration testing is to test the dependencies between the components of a software system. The huge number of dependencies in today's systems challenges the roles participating in the integration testing process. This PhD thesis describes new and innovative approaches to support these roles.
The first part of the thesis defines a testing process that takes the specific characteristics of integration testing into account. This integration testing process focuses on the decisions to be made during the process and describes the decisions that have to be made by the individual roles. For every decision, the decisions it depends on and the decisions that depend on it are worked out.
The second part of the thesis introduces new approaches to support two important decisions of the process: the test focus selection and the determination of the integration testing order.
Due to resource limitations in real software development projects, testing all dependencies is not possible. The few available resources have to be spent on error-prone dependencies. This PhD thesis introduces a newly developed approach to identify these error-prone dependencies and, with that, the test focus for the integration testing process. The new approach uses previous versions of a software system to uncover statistically significant correlations between the properties of a dependency and the number of errors in the participating components. These correlations are used to select the dependencies of the current version that have to be tested.
In integration testing, the components of the system are integrated stepwise to test the dependencies between them. This stepwise approach eases the detection of the error cause in case an error is uncovered. The disadvantage of this approach is the effort for simulating components which have not yet been integrated but are used by components in the current integration step. Therefore, the goal is to find an integration testing order that causes minimal simulation effort. Additionally, the order has to take the test focus into account, i.e., the dependencies selected as test focus have to be integrated as early as possible. This PhD thesis introduces a newly developed approach that determines an integration testing order considering both the test focus and the simulation effort. This approach uses heuristic algorithms such as simulated annealing or genetic algorithms.
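A minimal sketch of determining an integration order with simulated annealing, where the cost mixes stub (simulation) effort with late placement of test-focus dependencies; the cost function, weights, and dependency data are invented:

```python
# Sketch: simulated annealing over integration orders. The cost function,
# weights, and dependency data are invented for illustration.
import math, random

components = ["A", "B", "C", "D"]
dependencies = {("A", "B"), ("B", "C"), ("C", "D"), ("A", "D")}  # X uses Y
test_focus = {("B", "C")}

def cost(order):
    position = {c: i for i, c in enumerate(order)}
    # A stub is needed when a used component is integrated after its user.
    stubs = sum(1 for user, used in dependencies if position[used] > position[user])
    # Test-focus dependencies should be testable as early as possible.
    focus_delay = sum(max(position[a], position[b]) for a, b in test_focus)
    return stubs + 0.5 * focus_delay

def anneal(order, steps=2000, temp=2.0, cooling=0.999):
    best = current = order[:]
    for _ in range(steps):
        i, j = random.sample(range(len(order)), 2)
        candidate = current[:]
        candidate[i], candidate[j] = candidate[j], candidate[i]
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = candidate
            if cost(current) < cost(best):
                best = current[:]
        temp *= cooling
    return best

print(anneal(components))  # e.g. ['D', 'C', 'B', 'A']: no stubs, focus early
```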
All newly developed approaches were evaluated in several case studies, in which they were applied to real, large software systems.

 

Decision-Making in Requirements Engineering and Architectural Design

Author Andrea Herrmann
Type of work Habilitation thesis
Published in 2009
Abstract

Decision-making during software engineering, requirements engineering, and architectural design of software systems is the topic of the present habilitation thesis. During my theoretical and empirical research on these topics, I found that these decision-making processes are only partly understood by scientists and only superficially put into operation by existing methods. In particular, integrated methods are missing, in which the output of one activity is used as input for subsequent activities. Decision-making is significantly complicated by dependencies. It is difficult to treat these dependencies in a simple and time-efficient way that can be applied in daily software engineering practice.
Therefore, this book pursues two main objectives: firstly, the scientific striving for a better understanding of these topics, and secondly, the support of practitioners in their software engineering work. Scientific, theoretical work serves to define the decisions to be made during requirements engineering and architectural design, in order to understand the challenges of these decisions and how these challenges can be tackled. In these analyses, we take interdisciplinary research results into account. Some of the insights won by these analyses are new to science in this form. For instance, we identified six ways in which existing requirements prioritisation methods usually treat requirements dependencies, and, based on a mathematical model, we can qualitatively describe the advantages and disadvantages of each of these ways.
In order to support practitioners, we develop new software engineering methods which integrate rational decision-making in requirements engineering (requirements elicitation, prioritisation and conflict solution) and in architectural design more systematically, encompassing more activities than any other available method. These new methods can be applied as we define them, but can also be understood as a modular toolbox from which single modules or concepts can be chosen.
These new methods aim to satisfy the following major requirements: they should be custom-made for each type of decision, but integrated with each other; be able to cope with uncertain predictions of the future; reuse knowledge; and consider dependencies among decisions, because these are critical, such as the alignment of technical decisions with business objectives. The methods must be easy to understand and to apply. Support for tailoring the methods to specific practical needs must be provided; the modularity of the methods facilitates this tailoring. As we will show, some of these requirements are not well satisfied by existing methods.
In this work, the new methods MOQARE (for the elicitation and prioritisation of non-functional requirements) and ICRAD (for requirements conflict solution and architectural design) are presented. These methods use the best principles from state-of-the-art methods in an integrated way. One important principle is to consider negative, unwanted scenarios and to quantify their risk. The methods are complemented by a collection of reusable knowledge and by prototypical tool support. The conceptual models of both methods were derived using Grounded Theory, which included intense literature research and iterative improvement of the conceptual models.
The applicability of MOQARE and ICRAD in software projects, their understandability for non-scientists, and the satisfaction of the other criteria defined above have been evaluated by different types of empirical work, such as case studies, large examples, and student experiments. We present some of these empirical evaluations and the lessons learned from them. These lessons learned, for instance, describe what is important when applying the methods and how they can be tailored to specific project settings.

 

 

Generating Meaningful Test Databases

Author Carsten Binnig
Type of work PhD thesis
Published in 2008
Abstract

Testing is one of the most time-consuming and cost-intensive tasks in software development projects today. A report of the NIST [RTI02] estimated the costs caused by software errors for the economy of the United States of America in the year 2000 to range from $22.2 to $59.5 billion. Consequently, in the past few years, many techniques and tools have been developed to reduce the high testing costs. Many of these techniques and tools are devoted to automating various testing tasks (e.g., test case generation, test case execution, and test result checking). However, almost no research work has been carried out to automate the testing of database applications (e.g., an e-shop application) and relational database management systems (DBMSs). The testing of a database application and of a DBMS requires different solutions, because the application logic of a database application or of a DBMS strongly depends on the contents of the database (i.e., the database state). Consequently, when testing database applications or DBMSs, new problems arise compared to traditional software testing.

This thesis focuses on a specific problem: test database generation. Test database generation is a crucial task in the functional testing of a database application and in the testing of a DBMS (also called the test object further on). In order to test a certain behavior of the test object, we need to generate one or more test databases which are adequate for a given set of test cases. Currently, a number of academic and commercial test database generation tools are available. However, most of these generators are general-purpose solutions which create the test databases independently from the test cases that are to be executed on the test object. Hence, the generated test databases often do not comprise the necessary data characteristics to enable the execution of all test cases. In this thesis, we present two innovative techniques (Reverse Query Processing and Symbolic Query Processing), which tackle this problem for different applications (i.e., the functional testing of database applications and DBMSs). The idea is to let the user specify the constraints on the test database individually for each test case in an explicit way. These constraints are then used directly to generate one or more test databases which exactly meet the needs of the test cases that are to be executed on the test object.
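The core idea, generating a database that fits the constraints of a given test case instead of generating data first and hoping all test cases remain executable, can be pictured with a toy generate-and-check loop; Reverse and Symbolic Query Processing in the thesis derive such databases directly from the query and the desired result, which is far more efficient:

```python
# Toy sketch of test-case-aware database generation: rows are generated
# until the test case's constraint on the database state is satisfied.
# The thesis's Reverse/Symbolic Query Processing derives such databases
# directly from queries and expected results instead of guessing.
import random
import sqlite3

def generate_test_db(constraint_sql, min_rows):
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
    next_id = 1
    while db.execute(constraint_sql).fetchone()[0] < min_rows:
        db.execute("INSERT INTO orders VALUES (?, ?)",
                   (next_id, round(random.uniform(1, 500), 2)))
        next_id += 1
    return db

# A test case needs at least 5 "large" orders to exercise the discount logic:
db = generate_test_db("SELECT COUNT(*) FROM orders WHERE amount > 100", 5)
print(db.execute("SELECT COUNT(*) FROM orders").fetchone()[0], "rows generated")
```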