ATSE 2012 - 3rd International Workshop on Automating Test Case Design, Selection and Evaluation (ATSE)
Topics/Call for Papers
3rd International Workshop on Automating Test Case Design, Selection and Evaluation (ATSE)
Wrocław, Poland, September 9-12, 2012
Trends such as globalisation, standardisation and shorter lifecycles place great demands on the flexibility of the software industry. In order to compete and cooperate on an international scale, a constantly decreasing time to market and an increasing level of quality are essential. Software and systems testing is currently the most important and most widely used quality assurance technique in industry. However, the complexity of software systems, and hence of their development, is increasing. Systems are getting bigger, connect large numbers of components that interact in many different ways on the Future Internet, and have constantly changing requirements of different types (functionality, dependability, real-time, etc.). Consequently, the development of cost-effective and high-quality systems poses new challenges that cannot be met with traditional testing approaches alone. New techniques for the systematisation and automation of testing are required.
Even though many test automation tools are currently available to aid test planning and control as well as test case execution and monitoring, all of these tools share a similarly passive philosophy towards test case design, the selection of test data and test evaluation: they leave these crucial, time-consuming and demanding activities to the human tester. This is not without reason; test case design and test evaluation are difficult to automate with the techniques available in current industrial practice. The domain of possible inputs (potential test cases), even for a trivial program, is typically too large to be explored exhaustively. Consequently, one of the major challenges of test case design is selecting test cases that are effective at finding flaws without requiring an excessive number of tests to be carried out. This is the problem that this workshop aims to attack.
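To make this reduction concrete, here is a minimal sketch (our illustration, not part of the official call; the four-parameter configuration space is hypothetical) of greedy pairwise test selection in Python, one of the combinatorial techniques in scope for the workshop. Production tools such as PICT or ACTS implement far more sophisticated variants of the same idea.

from itertools import combinations, product

# Hypothetical configuration space of a system under test.
parameters = {
    "os":      ["linux", "windows", "macos"],
    "browser": ["firefox", "chrome"],
    "locale":  ["en", "de", "pl"],
    "proto":   ["http", "https"],
}
names = list(parameters)

def pairs_of(test):
    # All (parameter, value) pairs exercised by one test case.
    return set(combinations(zip(names, test), 2))

candidates = list(product(*parameters.values()))  # 3*2*3*2 = 36 tests
uncovered = set().union(*(pairs_of(t) for t in candidates))

# Greedy cover: repeatedly pick the candidate test that covers
# the most still-uncovered value pairs.
suite = []
while uncovered:
    best = max(candidates, key=lambda t: len(pairs_of(t) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)

print(len(candidates), "exhaustive tests vs.", len(suite), "pairwise tests")

Covering every pair of parameter values typically needs only around 9-12 of the 36 exhaustive tests in this toy example, and the gap grows dramatically with more parameters, which is exactly why automated test case selection matters.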
This workshop will provide researchers and practitioners with a forum for exchanging ideas, experiences, understanding of the problems, visions for the future, and promising solutions to the problems of automated test case generation, selection and evaluation. The workshop will also provide a platform for researchers and developers of testing tools to work together to identify the problems in the theory and practice of software test automation, and to set an agenda and lay the foundation for future development.
Topics
Topics include (but are not limited to):
Techniques and tools for automating test case design:
model-based,
combinatorial-based,
optimization-based,
etc.
Evaluation of testing techniques and tools on real systems, not only toy problems.
Benchmarks for evaluating software testing techniques.
Types of submissions
We expect the following types of submissions:
Research in progress, including research results at an early stage.
Experience reports on the use of testing techniques and tools.
Positive experiences should present techniques and tools that work and the situations in which they work.
Negative experiences should be used to highlight new research challenges.
Surveys, case studies and comparative studies that investigate pros, cons and complementarities of existing tools.
Vision papers stating where research in the field should be heading.
Tool and technique demonstrations.
Paper Submission and Publication
Papers will be refereed and accepted on the basis of their scientific merit and relevance to the workshop.
Accepted and presented papers will be published in the Conference Proceedings and included in the IEEE Xplore® database. They will also be submitted for indexing in the DBLP Computer Science Bibliography, Google Scholar, Inspec, Scirus, SciVerse Scopus and the Thomson Reuters Conference Proceedings Citation Index.
Authors should submit draft papers (as PostScript, PDF or MS Word files).
The total length of a paper should not exceed 8 pages (IEEE style). IEEE style templates are available here.
Extended versions of selected papers will be considered for Special Issues of:
Applied Artificial Intelligence (ISI indexed)
and possibly other journals that will be announced later.
The organizers reserve the right to move accepted papers between FedCSIS events.
Important Dates
Paper submission: April 22, 2012
Author notification: June 17, 2012
Final submission and registration: July 8, 2012
Conference date: September 9-12, 2012