TESTBEDS 2011 - Third International Workshop on TESTing Techniques & Experimentation Benchmarks for Event-Driven Software
Topics/Call for Papers
We're doing this for the third time! TESTBEDS 2009 and TESTBEDS 2010 were extremely successful, with several interesting talks and discussions. We are holding the workshop again because testing of several classes of event-driven software (EDS) applications is becoming increasingly important.
Common examples of EDS include graphical user interfaces (GUIs), web applications, network protocols, embedded software, software components, and device drivers. An EDS takes internal/external events (e.g., commands, messages) as input (e.g., from users, other applications), changes its state, and sometimes outputs an event sequence. An EDS is typically implemented as a collection of event handlers designed to respond to individual events. EDS is gaining popularity because of the advantages this "event-handler architecture" offers to both developers and users. From the developer's point of view, event handlers may be created and maintained fairly independently; hence, complex systems may be built from these loosely coupled pieces of code. In interconnected/distributed systems, event handlers may also be distributed, migrated, and updated independently. From the user's point of view, an EDS offers many degrees of usage freedom. For example, in GUIs, users may choose to perform a given task by inputting GUI events (mouse clicks, selections, typing in text fields) in many different ways in terms of their type, number, and execution order.
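To make the event-handler architecture concrete, here is a minimal Python sketch (all class, event, and handler names are hypothetical and not tied to any particular toolkit) of loosely coupled handlers registered against event types, each free to update shared state:

    # A minimal sketch of an event-handler architecture (hypothetical names).
    class EventDrivenApp:
        def __init__(self):
            self.state = {"text": "", "saved": True}
            # Handlers are registered independently and can be added,
            # removed, or updated without touching the others.
            self.handlers = {
                "type": self.on_type,
                "save": self.on_save,
            }

        def on_type(self, payload):
            # A handler mutates its own slice of state ...
            self.state["text"] += payload
            self.state["saved"] = False

        def on_save(self, payload):
            # ... and its outcome depends on the events seen so far.
            self.state["saved"] = True

        def dispatch(self, event, payload=None):
            self.handlers[event](payload)  # route each incoming event

    # Users may reach the same goal through many different event orders.
    app = EventDrivenApp()
    for event, payload in [("type", "hello"), ("save", None)]:
        app.dispatch(event, payload)

Because dispatch only looks handlers up by event type, a handler can be replaced, or a new event added, without touching the rest of the system.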
Software testing is a popular QA technique employed during software development and deployment to help improve software quality. During testing, test cases are created and executed on the software. One way to test an EDS is to execute each event individually and observe its outcome, thereby testing each event handler in isolation. However, the execution outcome of an event handler may depend on its internal state, the state of other entities (objects, event handlers), and/or the external environment. Its execution may change its own state or that of other entities. Moreover, the outcome of an event's execution may vary based on the sequence of preceding events seen thus far. Consequently, in EDS testing, each event needs to be tested in different states. EDS testing therefore may involve generating and executing sequences of events, and checking the correctness of the EDS after each event. Test coverage may be evaluated not only in terms of code, but also in terms of the event space of the EDS. Regression testing requires not only test selection, but also the repair of obsolete test cases. The first major goal of this workshop is to bring together researchers and practitioners to discuss some of these topics.
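As a sketch of what sequence-based testing might look like in practice (reusing the hypothetical EventDrivenApp above; the oracle and event set are illustrative assumptions, not a prescribed technique), one can enumerate short event sequences and check an oracle after every individual event, since the same event may behave differently depending on the state reached so far:

    import itertools

    # Sketch of sequence-based EDS testing; events and oracle are hypothetical.
    EVENTS = [("type", "x"), ("save", None)]

    def oracle(app, event):
        # Per-event postcondition: 'save' must mark the app saved,
        # 'type' must mark it unsaved.
        if event == "save":
            return app.state["saved"]
        if event == "type":
            return not app.state["saved"]
        return True

    # Cover the event space with all sequences up to length 2, checking
    # correctness after each individual event, not just at the end.
    for length in (1, 2):
        for seq in itertools.product(EVENTS, repeat=length):
            app = EventDrivenApp()
            for event, payload in seq:
                app.dispatch(event, payload)
                assert oracle(app, event), f"oracle failed after {event} in {seq}"

Even this tiny example shows why the event space explodes: the number of candidate sequences grows exponentially with sequence length, which is what makes test-case generation and selection for EDS challenging.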
One of the biggest obstacles to conducting research in EDS testing is the lack of freely available standardized benchmarks for experimentation, containing artifacts (software subjects and their versions, test cases, coverage-adequate test suites, fault matrices, coverage matrices, bug reports, change requests), tools (test-case generators, test-case replayers, fault seeders, regression testers), and processes (how an experimenter may use the tools and artifacts together); see http://comet.unl.edu for examples. The second major goal of this workshop is to promote the development of concrete benchmarks for EDS.
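As one hypothetical illustration of what such artifacts might look like in machine-readable form (formats vary; real repositories such as COMET, linked above, define their own), fault and coverage matrices can be as simple as per-test tables that an experimenter combines to answer questions such as which seeded faults a selected regression suite still detects:

    # Hypothetical benchmark artifacts as simple matrices (illustrative only).
    fault_matrix = {
        # test id -> seeded fault ids that the test detects
        "t1": {"f3"},
        "t2": {"f1", "f3"},
    }
    coverage_matrix = {
        # test id -> events (or code entities) that the test covers
        "t1": {"open", "type"},
        "t2": {"open", "save"},
    }

    # Example experiment step: faults still detected by a selected suite.
    selected = ["t2"]
    detected = set().union(*(fault_matrix[t] for t in selected))
    print(detected)  # {'f1', 'f3'}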
To provide focus, this event will examine only GUI-based applications and web applications, which share many testing challenges. As the workshop matures, we hope to expand to other types of EDS.
Important Dates
- Submission of Full Papers: 10 January 2011
- Notification: 1 February 2011
- Camera-Ready: 1 March 2011
- Workshop: March 2011
Submission
The workshop solicits submission of:
- Full Papers (max 10 pages)
- Position Papers (max 6 pages)
- Demo Papers (max 6 pages) [usually papers describing implementation-level details (e.g., tool, file format, structure) that are of interest to the community]
- Industrial Presentations (slides)
All submissions will be handled through http://www.easychair.org/conferences/?conf=testbed....
Industrial presentations are submitted in the form of presentation slides and will be evaluated by at least two members of the Program Committee for relevance and soundness.
Each paper will be reviewed by at least three referees. Papers should be submitted as PDF files in standard IEEE two-column conference format (LaTeX, Word). The workshop proceedings will be published on this workshop's web page. Papers accepted for the workshop will appear in the IEEE digital library, providing a lasting archived record of the workshop proceedings.
Organization
General Chair
- Atif M. Memon, University of Maryland, USA.
Program Committee
- Cristiano Bertolini, Federal University of Pernambuco, Brazil.
- Zhenyu Chen, Nanjing University, China.
- Myra Cohen, University of Nebraska-Lincoln, USA.
- Cyntrica Eaton, Norfolk State University, USA.
- Anna-Rita Fasolino, University of Naples Federico II, Italy.
- Mark Grechanik, Accenture Labs, USA.
- Matthias Hauswirth, University of Lugano, Switzerland.
- Chin-Yu Huang, National Tsing Hua University, Taiwan.
- Ana Paiva, University of Porto, Portugal.
- Brian Robinson, ABB Inc., US Corporate Research, USA.