TTC 2017 - 10th Transformation Tool Contest

Date: 2017-07-17 - 2017-07-21

Deadline: 2017-04-28

Venue: Marburg, Germany

Website: http://www.informatik.uni-marburg.de/staf2017

Topics/Call for Papers

To facilitate the comparison of transformation tools, we are soliciting potential case studies. The areas of transformation case studies relevant to TTC are described on our aims and scope page. If you have a suitable case study, please describe it briefly, but in as much detail as needed, and submit it to the online submission system. Please include a reference solution to your case to support evaluating the correctness of submitted solutions.
Our program committee will select a small but representative set of case studies to be used for the contest. Case descriptions should answer the following questions:
What is the context of the case? (provide a short description and references)
What is the subject to be modeled? (what are the input and output modeling languages?)
What is the purpose of the models? (what are they typically used for from a larger perspective than the proposed case study?)
What are the variation points in the case? (divide your case into core characteristics and extensions)
What are the criteria for evaluating the submitted solutions to the case?
Correctness test: what are the reference input/output documents (models/graphs), and how should they be used? Ideally, a case description includes a test suite as well as a test driver; the test driver can be an online web service or a local script that can be deployed in SHARE (see http://is.tm.tue.nl/staff/pvgorp/share). A minimal test-driver sketch is given after this list.
Which transformation tool-related features are important and how can they be classified? (e.g., formal analysis of the transformation program, rule debugging support, ...)
What transformation language-related challenges are important and how can they be classified? (e.g., declarative bidirectionality, declarative change propagation, declarative subgraph copying, cyclic graph support, typing issues, ...)
How should the quality of submitted solutions be measured at the design level? (e.g., by the number of rules, the conciseness of rules, ...)
How can the solutions be evaluated (ranked) systematically using information technology? Please provide one of the following:
a simple spreadsheet, i.e., an evaluation form that can be aggregated easily (see, for example, http://goo.gl/QwxTAs; a sketch of such an aggregation is given after this list), or
a so-called "classification scheme" in ResearchR (see http://goo.gl/QA7npw) or a similar Web 2.0 platform.
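
For illustration, the following is a minimal sketch of what a local test-driver script could look like. It is not part of the call: the command-line tool name (transform), the directory layout (tests/<case>/input.xmi and expected.xmi), and the byte-for-byte comparison are all assumptions made for this sketch; a real driver would typically compare models up to isomorphism rather than textually.

```python
#!/usr/bin/env python3
"""Minimal test-driver sketch for a TTC case.

Assumptions (invented for illustration): a CLI tool named 'transform',
and one directory per test case containing input.xmi and expected.xmi.
"""
import filecmp
import subprocess
import sys
from pathlib import Path

TESTS = Path("tests")  # hypothetical layout: tests/<case>/input.xmi, expected.xmi


def run_case(case_dir: Path) -> bool:
    actual = case_dir / "actual.xmi"
    # Invoke the (hypothetical) transformation tool under test.
    subprocess.run(
        ["transform", str(case_dir / "input.xmi"), "-o", str(actual)],
        check=True,
    )
    # Naive correctness check: byte-for-byte comparison against the
    # reference output. Real cases would compare models modulo
    # isomorphism or element ordering instead.
    return filecmp.cmp(actual, case_dir / "expected.xmi", shallow=False)


def main() -> int:
    results = {d.name: run_case(d) for d in sorted(TESTS.iterdir()) if d.is_dir()}
    for name, ok in results.items():
        print(f"{name}: {'PASS' if ok else 'FAIL'}")
    return 0 if all(results.values()) else 1


if __name__ == "__main__":
    sys.exit(main())
```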
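
Similarly, as a sketch of the "aggregated easily" requirement for evaluation forms, the script below assumes each reviewer submits a CSV file with columns solution, criterion, and score; the file layout and column names are invented for illustration, and the ranking is simply the mean score across all reviewers and criteria.

```python
#!/usr/bin/env python3
"""Sketch: aggregate per-reviewer evaluation forms into a ranking.

The CSV layout (columns 'solution', 'criterion', 'score') and the
forms/ directory are assumptions made for this sketch.
"""
import csv
from collections import defaultdict
from pathlib import Path


def aggregate(forms_dir: str = "forms") -> list[tuple[str, float]]:
    scores: dict[str, list[float]] = defaultdict(list)
    # Collect every score given to each solution across all reviewer forms.
    for form in Path(forms_dir).glob("*.csv"):
        with form.open(newline="") as fh:
            for row in csv.DictReader(fh):
                scores[row["solution"]].append(float(row["score"]))
    # Rank solutions by mean score over all reviewers and criteria.
    ranking = [(name, sum(vals) / len(vals)) for name, vals in scores.items()]
    return sorted(ranking, key=lambda pair: pair[1], reverse=True)


if __name__ == "__main__":
    for rank, (solution, mean) in enumerate(aggregate(), start=1):
        print(f"{rank}. {solution}: {mean:.2f}")
```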
