SERE-C 2012 - 2012 International Conference on Software Security and Reliability Companion
Topics/Call for Papers
The goal of this workshop is to bring together researchers and practitioners to (1) understand the state of the art and state of practice in software testing, (2) map work needed for improved methods and tools for software testing, and (3) list any important problems needing to be solved.
Software has been tested for decades, yet there is still so much we desire of testing but cannot get: creating a small set of tests to expose bugs or build assurance, understanding how much confidence is justified by certain coverage, and combining the strengths of static analysis with testing. Why is fuzzing (still) so effective? How does the nondeterminism of scheduling, memory use, parallelism, virtual machines, and cloud computing change testing? What is the measure of effectiveness of testing in light of a determined adversary? How can we supply information for decisions based on assurance, robustness, resilience, security, portability, maintainability, and so forth? Just as civil engineering uses building codes, what styles, constructs, or patterns can be used in software so the product can be analyzed and tested for assurance that it satisfies requirements for safety or confidentiality or scalability?
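As a concrete illustration of the coverage-versus-confidence question raised above, the following Python sketch (all names here are illustrative, not taken from the workshop) shows a toy function for which a two-test suite achieves full statement and branch coverage yet misses a divide-by-zero failure that a naive random fuzzer finds almost immediately.

import random

def scaled_ratio(a: int, b: int) -> float:
    """Toy function: the divisor is zero only when a == b, a case that
    statement or branch coverage alone does not force a suite to exercise."""
    scale = 10
    if a > b:
        scale = 100
    return scale / (a - b)   # fails with ZeroDivisionError when a == b

def handwritten_suite() -> None:
    # Both branches of the `if` are executed, so statement and branch
    # coverage are 100%, yet the a == b failure is never triggered.
    assert scaled_ratio(5, 2) > 0
    assert scaled_ratio(2, 5) < 0

def fuzz(trials: int = 10_000, seed: int = 0) -> None:
    # A naive random fuzzer over a small input domain hits the failing
    # input within a handful of trials despite the "full" coverage numbers.
    rng = random.Random(seed)
    for _ in range(trials):
        a, b = rng.randint(-10, 10), rng.randint(-10, 10)
        try:
            scaled_ratio(a, b)
        except ZeroDivisionError:
            print(f"failure found: a={a}, b={b}")
            return
    print("no failure found")

if __name__ == "__main__":
    handwritten_suite()
    fuzz()

The sketch is only meant to make the workshop's question tangible: coverage metrics summarize which code ran, not which input conditions were probed, which is one reason fuzzing remains effective.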
There are severe theoretical limits, such as the halting problem, on our ability to measure software properties. However, Heisenberg's uncertainty principle has not stopped the advancement of theoretical or applied atomic physics. NIST seeks to identify needed research and technology in measurement, metrics, and standards of software testing, based on sound principles of science and engineering.
Topics of interest include, but are not limited to, the following areas:
Advanced measurement techniques for properties of software related to testing,
Theoretically justified means of comparing different coverage metrics,
Practical results comparing the effectiveness of testing based on different coverage metrics,
The sources of assurance in testing or static analysis,
Languages to express testable policy or software requirements,
Standards needed for software testing, in both senses of the word: étalon (a reference measure) and norme (a normative specification),
Gaps and future directions of software analysis using static or dynamic analysis tools,
Research topics to advance the state of the art in software testing,
Technology transfer approaches so that current practice benefits from the state of the art, and
Software testing metrics and measurements.