Motivation
Empirical evidence is essential to producing research with long-lasting impact. We feel that the tools and experiments used to produce or validate research results do not receive as much attention as they deserve. To counteract this tendency, Artifact Evaluation (AE) rewards well-crafted tools that allow researchers to replicate the experiments presented in papers. The main purpose of the AE process is to improve the reproducibility of computational results.
ECRTS was the first real-time systems conference to introduce artifact evaluation, in 2016, and has continued the practice ever since.
Authors of accepted papers with a computational component will be invited to submit their code and/or data to an optional AE process. We seek to achieve the benefits of AE without disturbing the process through which ECRTS has assembled high-quality programs in the past. In particular, the decision whether or not to submit an artifact has no impact on whether a paper is accepted at ECRTS. Moreover, the titles and authors of papers whose artifacts did not pass the repeatability evaluation will not be disclosed.
Authors of papers whose artifacts pass the evaluation may display a seal indicating that the artifact passed the repeatability test, and the artifact will be published in the Dagstuhl Artifacts Series (DARTS).
We recognize that not all results are repeatable. For instance, the experiments may take too long to execute, or they may require a complete infrastructure that is not available to the evaluators. We encourage submissions, but we can only guarantee to repeat experiments that are reasonably repeatable with ordinary computing resources. Our focus is on (1) replicating the tests that are repeatable, and (2) improving the repeatability infrastructure so that more tests become repeatable in the future.
Formatting Instructions
Artifacts should include two components:
- a document explaining how to use the artifact and which of the experiments presented in the paper are repeatable (with references to specific numbers, figures, and tables in the paper), along with the system requirements and instructions for installing and using the artifact;
- the software and any accompanying data.
A good guide to preparing an artifact evaluation package is available at http://bit.ly/HOWTO-AEC.
The evaluation process is single-blind. It is non-competitive, and we hope that all submitted artifacts will pass the evaluation criteria.
Special Artifacts
If you are not in a position to prepare the artifact as described above, or if your artifact requires special libraries, commercial tools (e.g., MATLAB or specific toolboxes), or particular hardware, please contact the AE chairs as soon as possible.
Recommendation: Use Virtual Machines
Based on past experience, the biggest hurdle to successful reproducibility is the setup and installation of the necessary libraries and dependencies. Authors are therefore encouraged to prepare a virtual machine (VM) image containing their artifact (if possible) and to make it available via HTTP throughout the evaluation process (and, ideally, afterwards). As the basis of the VM image, please choose a commonly used OS version that has been tested with the virtualization software and that evaluators are likely to be familiar with. We encourage authors to use VirtualBox (https://www.virtualbox.org) and to save the VM image as an Open Virtual Appliance (OVA) file. To facilitate preparation of the VM, we suggest starting from the VM images available at https://www.osboxes.org/.
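As an illustration, the following Python sketch exports a VirtualBox VM to an OVA file and prints its SHA-256 checksum; the VM name and output path are hypothetical placeholders, and the script assumes the standard VBoxManage command-line tool is on the PATH. Publishing such a checksum next to the download link lets evaluators verify that a large image arrived intact.

    #!/usr/bin/env python3
    """Export a VirtualBox VM to an OVA and print its SHA-256 checksum.

    Minimal sketch: assumes VirtualBox's VBoxManage CLI is installed and
    on the PATH; the VM name and output file below are placeholders.
    """
    import hashlib
    import subprocess

    VM_NAME = "ecrts25-artifact"       # hypothetical VM name
    OVA_PATH = "ecrts25-artifact.ova"  # hypothetical output file

    # Package the VM as an Open Virtual Appliance (OVA) file.
    subprocess.run(["VBoxManage", "export", VM_NAME, "--output", OVA_PATH],
                   check=True)

    # Hash the image in 1 MiB chunks so large files do not exhaust memory.
    sha256 = hashlib.sha256()
    with open(OVA_PATH, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha256.update(chunk)
    print(f"{sha256.hexdigest()}  {OVA_PATH}")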
Timeline
- Artifact Abstract Submission (Platform Dependencies): April 28th
- Artifact Evaluation Submission Deadline: May 2nd
- Author Notification: May 21st
- Camera-Ready Deadline: May 27th
Submission Process
All authors of accepted papers are highly encouraged to submit to the ECRTS’25 Artifact Evaluation (AE). Authors should submit an abstract by April 28th, including details about any platform dependencies, so that reviewers can determine which artifacts they have the resources to evaluate.
Authors of artifacts that pass the evaluation will be asked to submit the final artifact version to Dagstuhl Artifacts Series (DARTS).
Submission Site: https://easychair.org/conferences/?conf=ecrtsae25
Organizers
Artifact Evaluation co-chairs:
- Catherine E. Nemitz, Davidson College, North Carolina, USA
- Bryan C. Ward, Vanderbilt University, Tennessee, USA
Conflicts-of-Interest Chair:
- Matthias Becker, KTH Royal Institute of Technology, Sweden
Evaluation committee:
- Tanya Amert, Carleton College, USA
- Federico Aromolo, Scuola Superiore Sant’Anna, Pisa, Italy
- Anne Friebe, Mälardalen University, Sweden
- Arpan Gujarati, University of British Columbia, Canada
- Christian Hakert, TU Dortmund, Germany
- Seonyeong Heo, Kyung Hee University, Korea
- Sims Osborne, Elon University, USA
- Marion Sudvarg, Washington University in St. Louis, USA
- Binqi Sun, TU Munich, Germany
- Corey Tessler, University of Nevada, USA