
HSST 2018 - 8th Halmstad Summer School on Testing

Date: 2018-06-11 - 2018-06-14

Deadline: 2018-05-15

Venue: Halmstad, Sweden

Website: http://ceres.hh.se/mediawiki/HSST_2018

Topics / Call for Papers

Testing and debugging account for a major part of software development cost and effort, yet the current practice of software testing is often insufficiently structured and disciplined. There have been various attempts in the past decades to bring more rigour and structure into this field, resulting in several industrial-strength processes, techniques, and tools for different levels of testing. The 8th Halmstad Summer School on Testing provides an overview of the state of the art in testing, including theory, industrial cases, tools, and hands-on tutorials by internationally renowned researchers.
Tutorials
Model-based Mutation Testing: the Science of Killing Bugs in a Black Box, Bernhard Aichernig, TU Graz, Austria
In this tutorial I will discuss the combination of model-based testing and mutation testing. Model-based testing is a black-box testing technique that avoids the labour of manually writing hundreds of test cases; instead, it advocates capturing the expected behaviour in a model of the system under test. From this abstract model, test cases can be generated automatically. In model-based mutation testing we inject faults into models and generate test cases that cover these faults. The resulting test suite aims at detecting these kinds of faults in a system under test. Hence, we anticipate what could go wrong and generate test cases that protect against these bugs. The tutorial will introduce the toolset MoMuT (momut.org), which implements this technique. We will cover its different modeling styles, its scientific foundations, and industrial applications.
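To make the idea concrete, here is a minimal, self-contained Python sketch (not the MoMuT API; the vending-machine model, the mutation operator, and all names are illustrative assumptions). A fault is injected into a tiny state-machine model, and a test case is generated that distinguishes the mutant from the original model, i.e. a test that covers the injected fault.

# Minimal sketch of model-based mutation testing (illustrative only, not MoMuT).
# The model is a tiny deterministic state machine; the mutation flips one
# transition; a generated test is an input sequence on which the original
# model and the mutant produce different outputs (the test "kills" the mutant).
from itertools import product

# Model: state -> {input: (next_state, output)}
ORIGINAL = {
    "idle":  {"coin":   ("ready", "ok")},
    "ready": {"button": ("idle",  "coffee")},
}

def mutate(model):
    """Inject a hypothetical fault: the mutant serves tea instead of coffee."""
    mutant = {state: dict(trans) for state, trans in model.items()}
    mutant["ready"]["button"] = ("idle", "tea")
    return mutant

def run(model, inputs, start="idle"):
    """Execute an input sequence on the model and collect its outputs."""
    state, outputs = start, []
    for i in inputs:
        if i not in model[state]:
            outputs.append("refused")
            break
        state, out = model[state][i]
        outputs.append(out)
    return outputs

def generate_killing_test(model, mutant, alphabet, max_len=3):
    """Search for an input sequence on which model and mutant disagree."""
    for length in range(1, max_len + 1):
        for seq in product(alphabet, repeat=length):
            if run(model, seq) != run(mutant, seq):
                return list(seq)
    return None

print(generate_killing_test(ORIGINAL, mutate(ORIGINAL), ["coin", "button"]))
# -> ['coin', 'button']: this test case covers (detects) the injected fault.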
Atif Memon, University of Maryland, USA (To Be Confirmed)
Testing concurrent and distributed systems, Mauro Pezzé, University of Lugano, Switzerland
Concurrent systems find applications in many domains, from multi-core architectures to distributed systems, and are rapidly spreading in commercial applications. Concurrency introduces new types of faults that are particularly difficult to find, due both to the nondeterministic nature of their occurrence and to the combinatorial explosion of thread interleavings. Disciplined design approaches, modern programming languages, and novel analysis techniques can reduce but not eliminate the presence of concurrency faults, which can lead to severe and critical problems. Testing approaches address the distinguishing features of concurrent systems with techniques to control the nondeterminism and the combinatorial explosion of execution interleavings. We will discuss the nature of concurrency faults, identify the problems of testing concurrent systems, and define a coherent framework for comparing the different testing approaches. We will learn the main classic as well as modern approaches for testing concurrent systems, and identify current and future research directions in the field.
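As a concrete illustration (not taken from the tutorial; the shared counter and the deliberately widened race window are assumptions made for demonstration), the following Python sketch shows a fault that only manifests under particular thread interleavings:

# Sketch of an interleaving-dependent fault: two threads perform a
# non-atomic read-modify-write on a shared counter. Whether the lost
# update occurs depends on the schedule, which is why such faults are
# hard to reproduce with ordinary test runs.
import threading
import time

counter = 0

def increment(delay):
    global counter
    local = counter      # read
    time.sleep(delay)    # widen the race window so the bad interleaving
                         # is likely in this demonstration
    counter = local + 1  # write: may overwrite the other thread's update

threads = [threading.Thread(target=increment, args=(0.01,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("counter =", counter)  # expected 2, but typically prints 1 here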
Model-Based Testing: Theory, Tools, and Applications, Jan Tretmans, ESI by TNO and Radboud University Nijmegen, The Netherlands and Halmstad University, Sweden
We build ever larger and more complex systems. Systematic testing plays an important role in assessing the quality of such systems. Testing, however, turns out to be error-prone, expensive, and laborious. Consequently, better testing methods are needed that can detect more bugs faster and more cheaply. Classical test automation helps, but only for test execution. Model-based testing (MBT) is a promising technology that enables the next step in test automation by combining automatic test generation and test-result analysis with test execution, providing more, longer, and more diversified test cases with less effort.
In the lecture, we first discuss the basic ideas and principles of MBT, with motivation, perspectives, and pitfalls. Second, we discuss the ioco theory for model-based testing. The ioco theory, on the one hand, provides a sound and well-defined foundation for testing labelled transition systems, having its roots in the theoretical area of testing equivalences and refusal testing. On the other hand, it has proved to be a practical basis for model-based test generation tools and applications. The latter is illustrated in the third part of the lecture, where we introduce TorXakis (https://github.com/TorXakis/TorXakis), an MBT tool that implements the ioco testing theory. We will discuss how TorXakis deals with aspects such as compositionality, concurrency, abstraction, and uncertainty (nondeterminism), which are ubiquitous in current systems. Finally, some simple examples, with the opportunity for hands-on experience, and some industrial applications are presented.
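The following Python sketch conveys a radically simplified flavour of the ioco intuition; it is not TorXakis and not the full theory (quiescence, internal transitions, and input-enabledness are ignored, and the coffee-machine models are invented for illustration). After each trace, the outputs the implementation can produce must be among the outputs the specification allows.

# Simplified sketch of the ioco intuition (illustrative only).
# Labels starting with "?" are inputs, labels starting with "!" are outputs.
SPEC = {
    "s0": [("?coin", "s1")],
    "s1": [("!coffee", "s0"), ("!tea", "s0")],
}

IMPL = {
    "i0": [("?coin", "i1")],
    "i1": [("!coffee", "i0"), ("!soup", "i0")],  # "!soup" is not allowed by SPEC
}

def outputs_after(lts, state, trace):
    """Follow a trace from `state` and return the outputs enabled afterwards."""
    for label in trace:
        successors = [nxt for (lab, nxt) in lts[state] if lab == label]
        if not successors:
            return set()
        state = successors[0]
    return {lab for (lab, _) in lts[state] if lab.startswith("!")}

def conforms(impl, impl_start, spec, spec_start, traces):
    """Check output inclusion after every given specification trace."""
    return all(outputs_after(impl, impl_start, t) <= outputs_after(spec, spec_start, t)
               for t in traces)

print(conforms(IMPL, "i0", SPEC, "s0", [[], ["?coin"]]))  # False: "!soup" reveals a failure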
Literature:
Jan Tretmans, Model Based Testing with Labelled Transition Systems. In R. Hierons, et al., Formal Methods and Testing. LNCS 4949, pp. 1-38, Springer, 2008. http://dx.doi.org/10.1007/978-3-540-78917-8_1.
Jan Tretmans, On the Existence of Practical Testers. In J.-P. Katoen, et al., ModelEd, TestEd, TrustEd. LNCS 10500, pp. 87-106, Springer, 2017. http://dx.doi.org/10.1007/978-3-319-68270-9_5.
Automated testing of applications at the GUI level, Tanja Vos, Technical University of Valencia, Spain and the Open University, The Netherlands
Graphical User Interfaces (GUIs) represent the main connection point between a software system's components and its end users, and can be found in almost all modern applications. This makes them attractive for testers, since testing at the GUI level means testing from the user's perspective and is thus the ultimate way of verifying a program's correct behaviour. To be effective, GUI testing should be automated.
A substantial part of the current state of the art for automating UI testing is still script-based. Scripts can, for example, be recorded with Capture and Replay (CR) tools or Visual GUI Testing (VGT) tools. A known concern with these approaches is that they create a critical maintenance problem: the test cases easily break when the UI evolves, which happens often.
For a couple of years now we have been researching a scriptless approach to automated GUI testing called TESTAR (http://www.testar.org) (Test Automation at the user inteRface level). Using TESTAR, you can start testing immediately; you do not need to specify test cases in advance. TESTAR automatically generates and executes test sequences based on a structure that is automatically derived from the UI through the accessibility API. Without specifying anything, TESTAR can detect violations of general-purpose system requirements through implicit oracles, such as those stating that the SUT should not crash, that the SUT should not find itself in an unresponsive state (freeze), and that the GUI state should not contain any widget with suspicious words like error, problem, exception, etc.
While the application is automatically being tested for stability, crashes, and undesired outputs, we can start adding more and more oracles that test more specific requirements of the application. In this way we incrementally create the requirements, which turns out to be very helpful when dealing with legacy systems.
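The sketch below illustrates the oracle idea in Python; the helper names and the GUI-state representation are hypothetical and do not reflect the TESTAR API. A generic implicit oracle scans widget texts for suspicious words, and application-specific oracles can be appended to the suite incrementally.

# Sketch of implicit and incrementally added oracles (hypothetical names).
import re

SUSPICIOUS = re.compile(r"\b(error|problem|exception|crash)\b", re.IGNORECASE)

def suspicious_text_oracle(gui_state):
    """Generic implicit oracle: no widget should display suspicious words."""
    return [w for w in gui_state["widgets"] if SUSPICIOUS.search(w.get("text", ""))]

def make_oracle_suite(app_specific_oracles=()):
    """Start with the implicit oracle; add application-specific checks over time."""
    return [suspicious_text_oracle, *app_specific_oracles]

# A GUI state as it might be derived through an accessibility API (invented data).
state = {"widgets": [{"text": "Save"}, {"text": "Unhandled exception in module X"}]}
for oracle in make_oracle_suite():
    violations = oracle(state)
    if violations:
        print(oracle.__name__, "flagged:", violations)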
In this tutorial we will explain GUI testing and its challenges. We will introduce TESTAR and show the internals of how the tool works, the different options that exist for action selection, and online and offline oracles. Moreover, we will do a hands-on, do-it-yourself session.
T.E.J. Vos, P.M. Kruse, N. Condori-Fernández, S. Bauersfeld, J. Wegener: TESTAR: Tool Support for Test Automation at the User Interface Level. International Journal of Information System Modeling and Design (IJISMD), vol. 6(3), July-September 2015, pp. 46-83.
Real Bugs, Real Projects, Real Impact, Andrzej Wasowski, IT University of Copenhagen, Denmark
I will present a summary of collaborations with open source projects on understanding historical bugs and building tools for detecting new bugs.
In the first part, I will report the state of the union for bugs in two complex software systems: the Linux kernel (which is likely the most popular operating system on the planet) and the ROS operating system (which is not an operating system at all, but likely the most popular robotics middleware on the planet). In both cases, we will look into programming-language aspects of problems observed in the very rich histories stored in the source code repositories of these projects. In the Linux kernel case, we will additionally focus on challenges introduced by its high configurability (bugs well hidden in rare configurations). For ROS, we will emphasize robotics-specific bugs and problems hidden in the intensive modularity of ROS application architectures.
In the second part, we will ask basic methodological questions: Why might a tool builder fighting bugs want to search for and understand real bugs? What is the cost of studying bugs? What kinds of benchmarks can reduce this cost, and how? What can be learnt from real bugs? What are the challenges of studying real bugs, and what methods do we have to reproduce real bugs in order to use them for experiments later? How can we create benchmarks that will survive a long time without bit-rotting?
Finally, in the third part, I will show the EBA tool, specifically designed to uncover resource manipulation bugs in the Linux kernel, in response to some of the requirements presented in the first part of this lecture. EBA (an effect-based analyzer) creates a program abstraction by performing type inference in a rich type system that tracks regions (a memory abstraction), simple shapes (hierarchies of pointers nested in structures), and the side effects of computations. This abstraction is then overlaid on a control-flow graph and fed to a reachability-checking algorithm that identifies suspicious sequences of effects: for instance, a lock taken twice. I will describe how EBA works, and how we used it to uncover about a dozen sequential double-lock bugs in the Linux kernel.
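As a simplified illustration of this kind of effect-sequence check (this is not EBA and not its type-and-effect inference; the effect encoding and the region names are invented), the Python sketch below scans the lock/unlock effects abstracted from one control-flow path and flags any region that is locked again while already held:

# Sketch of a double-lock check over an abstracted effect sequence.
def find_double_locks(effects):
    """`effects` is a list of ("lock" | "unlock", region) pairs along one path."""
    held, bugs = set(), []
    for position, (op, region) in enumerate(effects):
        if op == "lock":
            if region in held:
                bugs.append((position, region))  # lock taken twice: suspicious
            held.add(region)
        elif op == "unlock":
            held.discard(region)
    return bugs

# Effects from a hypothetical kernel path where an error branch re-acquires
# a mutex that is still held.
path = [("lock", "dev->mutex"), ("unlock", "dev->mutex"),
        ("lock", "dev->mutex"), ("lock", "dev->mutex")]
print(find_double_locks(path))  # -> [(3, 'dev->mutex')]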
Joint work with: Claus Brabrand (ITU), Iago Abal (Prover), Stefan Stanciulescu (ABB), Jean Melo (ConfigIt), Marcio Ribeiro (UFAL), Gijs van der Storm (TU Delft), Jon Azpiazu (Tecnalia, San Sebastian), Jon Tjerngren (ABB Vesteras), Jonathan Hechtbauer (Fraunhofer IPA), Andre Santos (U. Minho), Chris Timperley (CMU), Harshavardhan Deshpande (Fraunhofer IPA).
