Evaluation criteria

In order to be reviewed, papers must be in scope (that is, address some form of timing requirement) and follow the submission instructions. Submissions are evaluated by the program committee according to the following criteria:


We welcome theoretical and practical contributions to the state of the art in the design, implementation, verification and validation of real-time embedded and cyber-physical systems. This includes case studies, implementations, tools, benchmarks, application scenarios, technology transfer success stories (or failures) and open problems.

We particularly encourage papers on industrial case studies, application of real-time technology to real systems, and system-level implementations. We believe that our community needs to validate the proposed assumptions and methodologies with respect to realistic applications. We welcome practical contributions even without novel theoretical insights, provided they are of interest to the research community and/or to industry. In such cases, authors should make an effort to provide proper motivation and justification of the relevance of their work and report on lessons learned.

The models, assumptions and application scenarios used in the paper must be properly motivated.

Authors (of both practical and theoretical works) must demonstrate the applicability of their approach to real systems (examples of realistic task models can be found here). Deviations from such models, or strong simplifications, require justification. In particular, papers relying on simple models are expected to offer deeper theoretical insights than papers that include more detail by construction, such as system-level implementations.

Whenever relevant, we strongly encourage authors to present experimental results. These can be obtained on real implementations, from simulations or via the use of data from case studies, benchmarks or models of real systems. The use of synthetic workloads or models is permitted, but authors are strongly encouraged to properly motivate and justify the choices made in designing their synthetic evaluation.


Authors are expected to clearly explain the relationship between their work and the existing state of the art (including their own prior work). In addition, they must, whenever relevant, compare their work with the state of the art to show the size, scope and significance of the improvements they have made. Experimental evaluations should cover a broad but realistic range of parameters. Authors should avoid selecting specific cases where there may be a bias in favor of their approach, or comparing only against their own previous work (unless there is no other work to compare against). As well as explaining the advantages of their new approach with respect to the state of the art, authors should also give a balanced view of its disadvantages.

Technical correctness

Technical correctness is a mandatory requirement for acceptance. We consider it to be the duty of the authors to convince the reviewers of the technical correctness of their paper by whatever means the authors consider appropriate. We encourage open-source initiatives and computer-assisted proofs in order to increase confidence in practical and theoretical results and to improve their reusability. We acknowledge that due to IP constraints, implementation papers may not be able to provide all details. In this case, the authors should strive to provide sufficient information for the paper to remain of interest to the reader.

Writing quality

The writing quality of the paper is very important. The paper must be well structured and clearly written to ensure that the reader can precisely understand and evaluate the technical contribution. We expect the authors to proofread their paper and to eliminate typos and grammatical mistakes.

Prospective authors can find here suggestions (not mandatory rules) for structuring and writing their paper in a way that is likely to be appreciated by the reviewers.
