The FMTV'16 Challenge

The FMTV'16 challenge and its solutions to be presented at WATERS'16
Sophie Quinton
Site Admin
Posts: 46
Joined: Tue Apr 28, 2015
Location: Inria Grenoble - Rhône-Alpes, France
Contact:

The FMTV'16 Challenge

Post by Sophie Quinton » Tue Dec 01, 2015

About the FMTV Challenge
The purpose of the Formal Methods for Timing Verification (FMTV) challenge is to share ideas, experiences and solutions to a concrete timing verification problem drawn from real industrial case studies. It also aims to promote discussion, closer interaction, cross-fertilization of ideas and synergies across the breadth of the real-time research community, as well as to attract industrial practitioners from different domains with a specific interest in timing verification.

The 2016 FMTV Challenge
We are glad to announce that the 2016 challenge is proposed by Arne Hamann, Simon Kramer, Martin Lukasiewycz and Dirk Ziegenbein from Bosch GmbH. An initial version of the challenge is available here:
FMTV_challenge_2016_Bosch.pdf
Sophie Quinton
INRIA Grenoble - Rhône-Alpes
655 Avenue de l'Europe - Montbonnot
38334 St Ismier Cedex - FRANCE
tel: +33 4 76 61 55 31
https://team.inria.fr/spades/quinton/

Sakthivel Manikandan
Posts: 2
Joined: Sat Jul 25, 2015
Location: Luxembourg

Re: The FMTV'16 Challenge

Post by Sakthivel Manikandan » Tue Jan 26, 2016

Hi,

Is the Amalthea performance model mentioned in the challenge already available? (Mid-December is mentioned.)

Regards,
Sakthivel


Re: The FMTV'16 Challenge

Post by Sophie Quinton » Tue Jan 26, 2016

Hi,

In fact the model should be available this week, sorry for the delay!

Best,
Sophie

Dirk Ziegenbein
Posts: 1
Joined: Thu Jan 28, 2016

Re: The FMTV'16 Challenge

Post by Dirk Ziegenbein » Fri Jan 29, 2016

Sorry, there are still issues with the tool we want to use to validate the model. We will definitely post the model next week.
Best regards,
Dirk

arne.hamann
Posts: 27
Joined: Thu Jan 28, 2016
Location: Renningen

Re: The FMTV'16 Challenge

Post by arne.hamann » Fri Feb 05, 2016

Dear all,

We are finally able to submit the promised Amalthea model for the challenge. Sorry for the delay.
FMTV_Challenge_2016_Bosch_Engine_Control_Model.zip
The model can be inspected and processed using the Amalthea tool chain that can be downloaded here (version 1.1.1): http://www.amalthea-project.org/index.php/downloads.

If you have any questions regarding the model, just post them here; we will answer as quickly as possible.

Arne


Re: The FMTV'16 Challenge

Post by Sophie Quinton » Tue Feb 16, 2016

Hi,

I have downloaded Amalthea, and when trying to open the challenge I get the following error: "The editor could not be opened because the file located at GeneratedModel.amxmi is either not a model file or a model file that is out of scope."

Can you help me out? Thanks,
Sophie


Re: The FMTV'16 Challenge

Post by arne.hamann » Tue Feb 16, 2016

Hi Sophie,

The simplest way is the following:

1) Create an empty project: File->New->Project, then General->Project
2) Drag and drop the file GeneratedModel.amxmi into the project and choose 'copy file'

I hope that helps,
Arne


Re: The FMTV'16 Challenge

Post by Sophie Quinton » Tue Feb 16, 2016

Hi again, and thanks for the help!

Step 1) went fine (once I realized I had to close the Welcome tab to see the model...). It took me a while, however, to understand how exactly I was supposed to drag and drop (drag from my file browser and drop into the Amalthea project), but now I've got it and I can see the model. Let the challenge begin!

Best,
Sophie


Re: The FMTV'16 Challenge

Post by arne.hamann » Tue Feb 16, 2016

Yes, as you mentioned, the file browser is the source and the empty project is the target of the drag and drop action ;-)

iceslj
Posts: 6
Joined: Wed Jul 01, 2015
Location: Verimag @ Grenoble

Re: The FMTV'16 Challenge

Post by iceslj » Thu Feb 18, 2016

I also spent quite a few minutes understanding this drag and drop: one has to drop the file exactly onto the "project folder" in the "Project Explorer" pane, instead of just dropping it into the editor pane. I guess some other people could get stuck here too.
Lijun SHAN
E-mail: lijun.shan@imag.fr.
VERIMAG
2, Avenue de Vignate
Centre Equation
F - 38610 GRENOBLE - Gières, FRANCE
Phone : (33) 04.56.52.03.76


Re: The FMTV'16 Challenge

Post by arne.hamann » Wed Mar 30, 2016

We corrected some errors in the previous challenge model. We validated the attached updated version, and everything should work fine now.
Attachments
ChallengeModel_updated.zip

paolo.burgio
Posts: 1
Joined: Wed Apr 06, 2016
Location: HipertLab@ UniMoRe
Contact:

Re: The FMTV'16 Challenge

Post by paolo.burgio » Wed Apr 06, 2016

Hello everybody,

could anyone explain to me how I can verify a solution to the challenge once I have one? Honestly, I have no experience with the Amalthea IDE, and I don't know how to run the project package.

Thanks for the help
Best
Paolo

----------------------------------
Addendum/clarification:
----------------------------------

I saw that the challenge asks either to calculate end-to-end latencies (with or without memory and arbitration latencies) or to optimize them. I'd like to validate my solution(s) to one or all of the challenges, so I am asking whether I can "simulate" the deployed, running system to get some profiling measures.

Or is the project a simple "read-only" model for inspecting the HW/SW platform, which doesn't allow actually seeing it run?


Re: The FMTV'16 Challenge

Post by arne.hamann » Mon Apr 11, 2016

Dear Paolo,

the AMALTHEA model is not "executable" using the AMALTHEA tool chain. Basically, the AMALTHEA tool only helps you access the model data. We use tools like SymTA/S and Timing Architects to get the numbers for the end-to-end latencies, and during the workshop we would like to compare them with the numbers that you got. So for now, there is no baseline for comparison.
If you are at CPS Week, we can give you a demo (http://2016.rtas.org/demos/) using one of the above-mentioned tools. That might give you an idea :-).

Arne

rivasjm
Posts: 7
Joined: Sun Jul 05, 2015

Re: The FMTV'16 Challenge

Post by rivasjm » Mon Apr 18, 2016

Hello again. We are progressing with the challenge, but we are struggling with how the EventChain constraint works in the model. For example, EffectChain_2 traverses the following runnables:

Runnable_100ms_7 – Runnable_10ms_19 – Runnable_2ms_8

In our interpretation, the event chain means that Runnable_100ms_7 (which is in Task_100ms) must execute before Runnable_10ms_19 (which is in Task_10ms), and so on. The problem we see with this example is that Runnable_10ms_19 is released with a period of 10 ms, while Runnable_100ms_7 has a period of 100 ms. Which release of Runnable_100ms_7 must Runnable_10ms_19 wait for? How is this precedence relationship maintained if, in general, each element in the chain can have a different period?

Thanks in advance,
Juan

Simon Kramer
Posts: 9
Joined: Mon Jul 13, 2015

Re: The FMTV'16 Challenge

Post by Simon Kramer » Mon Apr 18, 2016

Hello Juan,

effect chains here don't have blocking/waiting semantics; they are driven by data. In that example, Runnable_100ms_7 produces data that is read by Runnable_10ms_19. As you figured out correctly, the two runnables have different rates, so here the result of Runnable_100ms_7 is read multiple times by Runnable_10ms_19 (last-is-best semantics). This is called oversampling (or, in the other direction, undersampling). In general this leads to varying latencies for an effect chain, which is what we want to have analyzed.

Best,
Simon
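The over- and undersampling Simon describes can be seen in a tiny sketch. This is a hypothetical illustration of last-is-best communication between a 100 ms producer (Runnable_100ms_7) and a 10 ms consumer (Runnable_10ms_19): the write-completion instants below are made up, and reads are assumed to happen exactly at the consumer's release times.

```python
import math

# Hypothetical last-is-best (shared register) communication:
# the 10 ms consumer always reads the most recently written value.

READ_PERIOD = 10  # ms, the consumer's period

def first_read_after(t_write, read_period=READ_PERIOD):
    """First consumer release at or after the write instant."""
    return math.ceil(t_write / read_period) * read_period

# Three made-up write-completion times inside successive 100 ms
# producer periods: the delay until each value is first read varies.
for w in (5, 103, 217):
    r = first_read_after(w)
    print(f"write at {w} ms -> first read at {r} ms (delay {r - w} ms)")
```

The three writes are picked up after 5, 7 and 3 ms respectively, which is exactly the varying effect-chain latency the challenge asks to bound.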


Re: The FMTV'16 Challenge

Post by rivasjm » Tue Apr 19, 2016

Thank you very much, Simon; with your thorough explanation it now makes sense.

Best
Juan


Re: The FMTV'16 Challenge

Post by arne.hamann » Wed Apr 20, 2016

Hi all,

I got the following question via Email.

1- We understand that tasks are preemptible, but can the runnables be preempted?
Generally, we distinguish preemptive and cooperative tasks. A higher-priority preemptive task can preempt lower-priority tasks (of any type) at any point in time, including inside runnables. Cooperative tasks preempt lower-priority cooperative tasks only at runnable borders. In this system, all tasks with a period >= 20 ms are cooperative. You can find the task type under Property->Misc->Preemption.

2- What is the meaning of the attribute "Is Buffered" at the runnable item level?
It can be ignored. There is no buffering of execution backlog, etc.

3- We found some attributes defined as "_undefined_", for example:
* The "sched policy" of local bus and crossbar
* "Is Buffered" on runnable items
* "Severity" in deadline constraints

_undefined_ attributes can be ignored. Only the sched_policy for arbitrating memory accesses is important. Details about that are mentioned in the original paper of the FMTV challenge:
"The cores are interconnected by a crossbar (full connectivity, FIFO arbitration at memories). The system-wide frequency is 200 MHz."

Arne

Nacho_S
Posts: 7
Joined: Wed Apr 20, 2016
Location: @Unimore

Re: The FMTV'16 Challenge

Post by Nacho_S » Wed Apr 20, 2016

Thank you Arne for the response.

In regard to question 1, we would like to ask about the possibility of having mixed types of tasks (i.e., both preemptive and cooperative) on the same core. What should the scheduling policy be in this case?

Best regards
Nacho


Re: The FMTV'16 Challenge

Post by Simon Kramer » Wed Apr 20, 2016

Hi Nacho,

preemptive and cooperative tasks can be located on the same core; that's possible.
The scheduling policy in this case is still priority-based, and preemptive tasks here have higher priorities than cooperative ones. This means that cooperative tasks still preempt each other only at runnable borders, but can be preempted by preemptive tasks at any time.

Best,
Simon
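The preemption rule Simon states can be captured in a small decision helper. This is a sketch: the task records and field names are hypothetical, not part of the Amalthea model or API.

```python
# Hypothetical helper capturing the rule for mixed preemptive /
# cooperative tasks on one core (larger "prio" = higher priority).
# In this model preemptive tasks always outrank cooperative ones, so
# a cooperative task never tries to preempt a preemptive one.

def can_preempt(high, low, low_at_runnable_border):
    """May `high` take the core from the running task `low` right now?"""
    if high["prio"] <= low["prio"]:
        return False
    if high["type"] == "preemptive":
        return True  # preemptive tasks may preempt at any instant
    # a cooperative task preempts a lower-priority cooperative task
    # only at runnable borders
    return low_at_runnable_border

coop_lo = {"prio": 1, "type": "cooperative"}
coop_hi = {"prio": 2, "type": "cooperative"}
preempt = {"prio": 5, "type": "preemptive"}

print(can_preempt(preempt, coop_lo, False))  # True: anywhere
print(can_preempt(coop_hi, coop_lo, False))  # False: mid-runnable
print(can_preempt(coop_hi, coop_lo, True))   # True: at a runnable border
```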


Re: The FMTV'16 Challenge

Post by Nacho_S » Thu Apr 21, 2016

Thank you so much, we understand all better :)


Re: The FMTV'16 Challenge

Post by arne.hamann » Thu Apr 21, 2016

Hi all,

I got some new questions:
--
Q:
We are getting interested in the WATERS challenge, but we cannot figure out the cooperative scheduling among the tasks that are tagged as such, nor the meaning of end-to-end latency in these cases (what is the definition, given that it is a cooperation among periodic tasks sharing variables written asynchronously?). Can you help us?

A:
1) Concerning the cooperative scheduling: I think this question is answered above.

2) Concerning the meaning of end-to-end latency:

Communication between runnables in the same task: if data is shared among successive runnables inside a task, there is no additional latency due to the communication. However, if there is backward communication (i.e., an earlier runnable reads data written by a later runnable), there is a one-cycle delay until the data is read, i.e., until the effect takes place.

Communication between runnables in different tasks: here you need to take over- and undersampling effects into account. We are generally interested in two different semantics:
  • Reaction: useful when the first point in time at which (new) data is read is critical. This is usually important when, as the name suggests, system reactivity is required.
  • Age: useful when the latest point in time at which (old) data is read is critical. This is usually important for control applications.
In a first step, it is sufficient to focus on the reaction semantics.
--

I hope the answers help,
Arne
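One rough way to see the two semantics side by side is to enumerate the write and read instants of two periodic runnables over one hyperperiod and measure, per write, the delay until it is first read (reaction) and, per read, how old its data is (age). This sketch assumes zero offsets and zero execution/response times, and that a read coinciding with a write sees the new value; these are simplifications, not the challenge's exact definitions.

```python
from math import gcd

def reaction_and_age(t_write, t_read):
    """Worst-case reaction and age (same time unit as the periods) for
    a periodic writer and reader sharing a last-is-best label."""
    horizon = t_write * t_read // gcd(t_write, t_read)  # hyperperiod
    writes = range(0, horizon + 1, t_write)
    reads = range(0, horizon + 1, t_read)
    # reaction: per write, delay until the first read at or after it
    reaction = max(min(r for r in reads if r >= w) - w for w in writes)
    # age: per read, time since the most recent write at or before it
    age = max(r - max(w for w in writes if w <= r) for r in reads)
    return reaction, age

print(reaction_and_age(100, 10))  # oversampling: (0, 90)
print(reaction_and_age(10, 100))  # undersampling: (90, 0)
```

With these idealized assumptions, oversampling stresses the age semantics (stale data is re-read for up to 90 time units) while undersampling stresses the reaction semantics (a write can wait up to 90 time units before being read at all).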

hinomk2
Posts: 3
Joined: Mon Apr 25, 2016

Re: The FMTV'16 Challenge

Post by hinomk2 » Mon Apr 25, 2016

Hi,

I have two simple questions.

1) What is the unit of the runnable execution lower and upper bounds? Is it CPU cycles?

2) There are four schedulers that have 0 ns delay. Also, every stimulus has a 0 ms offset. I guess this means that all schedulers and stimuli start at the same time (the time the system is activated), which is the static offset model. Am I right?

Your help would be very much appreciated.

Thank you
Junchul


Re: The FMTV'16 Challenge

Post by Simon Kramer » Mon Apr 25, 2016

Hi Junchul,

the unit of runnable execution is instructions. The number of instructions divided by the instructions-per-cycle (IPC) value of the core (a HW element) gives the actual time taken in CPU cycles. In our model, the IPC is 1, so the number of instructions equals the number of cycles.

The 0 ns delay in the hardware scheduler denotes that there is no scheduling overhead: every scheduling decision is made in zero time. The 0 ms offset at the stimuli means, as you already guessed, that all tasks start at the same time. Yes, all offsets are static.

Best regards,
Simon
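Simon's conversion rule (instructions, divided by IPC, at the system-wide 200 MHz clock stated in the challenge description) can be written out directly:

```python
# Converting a runnable's instruction count into execution time.
# IPC = 1 and f = 200 MHz are the values stated in the thread.

F_HZ = 200_000_000   # system-wide clock, 200 MHz
IPC = 1              # instructions per cycle in this model

def exec_time_us(instructions, ipc=IPC, f_hz=F_HZ):
    """Execution time in microseconds."""
    cycles = instructions / ipc   # with IPC = 1, instructions == cycles
    return cycles / f_hz * 1e6

# e.g. 2,000,000 instructions take 10 ms (= 10000 us) at 200 MHz
print(exec_time_us(2_000_000))  # 10000.0
```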


Re: The FMTV'16 Challenge

Post by Nacho_S » Wed Apr 27, 2016

Hi, we are still working on the challenge and have some questions:

- Are the R/W dependencies protected by some kind of semaphore or mutex? If so, what shared-resource protocol (if any) is used to arbitrate access to shared resources (SRP, priority ceiling, ...)?
- Is the model of a runnable Read-Execute-Write? That is, is the order of R/W operations as in the XML? Are all read accesses performed before all write accesses?
- We see many R/W relations through a shared label; why don't these relations belong to an effect chain?

Best regards

Nacho


Re: The FMTV'16 Challenge

Post by arne.hamann » Mon May 02, 2016

Some more questions I received via Email:

1- Are the R/W accesses protected by some kind of semaphore or mutex? If so, what is the shared-resource protocol (if any) used to arbitrate access to shared resources (SRP, priority ceiling, ...)?

R/W accesses are arbitrated by the memory in a FIFO manner (no contention on the interconnect), so accesses to the memory might get blocked by pending accesses from other cores. No locking is performed, since the accesses are assumed to be atomic.

2- Is the model of a runnable a Read-Execution-Write (i.e., all read accesses are performed before all the write accesses) or they may be arbitrarily sparse in the code?

Yes, the former.

3- We see many R/W relations through a shared label: why only a very limited subset of these relations belong to an effect chain?

We just defined a few typical effect chains. If your analysis method is in place and general, you are free to add more relations ;-)
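The read-execute-write structure confirmed in answer 2 makes the per-activation cost of a runnable easy to sketch: all label reads first, then computation, then all writes. The parameter values in the usage line are placeholders, not taken from the challenge model.

```python
# Hypothetical cost sketch for one runnable activation under the
# read-execute-write model, assuming every label access pays the same
# (uncontended) cost and accesses are never interleaved with computation.

def runnable_cycles(n_reads, exec_cycles, n_writes, access_cycles):
    """Total cycles: read phase + execution phase + write phase."""
    return n_reads * access_cycles + exec_cycles + n_writes * access_cycles

# e.g. 3 reads and 2 writes at 9 cycles each around 1000 cycles of work
print(runnable_cycles(3, 1000, 2, 9))  # 1045
```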


Re: The FMTV'16 Challenge

Post by arne.hamann » Mon May 02, 2016

And sorry that I did not reply earlier to the forum post ...that one slipped through :/


Re: The FMTV'16 Challenge

Post by Nacho_S » Fri May 06, 2016

Thanks for the response,

I got some new questions:

- With regard to challenge 1: does "ignoring memory accesses" mean that fetching data in the read/write phases takes zero time?
- With regard to challenge 2, when read/write time is not neglected: what happens if a task/runnable/core reads the same label multiple times from GRAM? Does it always pay the same penalty for the access to GRAM, or is there a caching effect? (For example, the data is brought to LRAM the first time, so that successive reads/writes take less time.)

Thanks,

Nacho

kangdongh
Posts: 1
Joined: Wed May 11, 2016
Location: Seoul National University, Korea

Re: The FMTV'16 Challenge

Post by kangdongh » Wed May 11, 2016

Hi,

I have a question about the given Amalthea challenge model.

I started to analyze the tasks in the model and got a mysterious result.

I added up the upper execution-time bounds of all runnables in Task_10ms, whose deadline is 2,000,000 cycles (0.01 s * 200 MHz). Surprisingly, the total comes to 2,342,546 cycles.

I understand this to mean that Task_10ms overruns its deadline in the worst case, even without considering other tasks.

Should I then use the 'mean' execution time for the analysis (e.g., lower: 452, upper: 1782, mean: 979; use 979)?

Or did I analyze the model in a wrong way?

I'll wait for your answer :)

Thanks,

Donghyun
Donghyun, Kang
Seoul National University, Korea
Contact: kangdonghyun93 at gmail.com


Re: The FMTV'16 Challenge

Post by arne.hamann » Thu May 12, 2016

Nacho_S wrote: [questions quoted above]
Dear Nacho,

here are your answers:
1) yes ;-)
2) There is no caching. Conflicts are arbitrated in a FIFO manner as explained above. The data mapping to LRAM and GRAM is static.

I hope that helps,
Arne


Re: The FMTV'16 Challenge

Post by hinomk2 » Mon May 16, 2016

Hi, thanks Arne for the previous answers.

I have more questions on the memory access model:

1. What happens if a preemptive task with higher priority is released during a resource access of a lower-priority task? Is the preemptive task blocked until the resource access of the lower-priority task finishes?

2. Should a label be stored in local memory when it is fetched from global memory? If my guess is right, the total access delay is 9 (to global memory) plus 1 (to local memory). Or is there temporary storage (such as a stack memory area) in a core to hold labels during runnable execution?

3. The crossbar has full connectivity, so I guess there is no contention on the crossbar. Then, for challenges 2 and 3, the access delay at the memory must be known to compute the arbitration delay, but the access delay in the description seems to include not only the memory access delay but also the crossbar transfer delay. I guess the pure memory access delay is 1 cycle for local/global memory and 8 cycles are crossbar transfer delay, since the cores are symmetric and a remote local-memory access consists of the transfer delay plus the local-memory access delay. Am I right?
For example, suppose a runnable tries to access a label in global memory and there are two access requests already in the FIFO queue. Then the access arbitration delay is 3*9 (9 cycles per access) if a memory access takes 9 cycles. If my guess is right, the access arbitration delay is instead 8 (transfer delay) + 3*1 (1 cycle per access).


Re: The FMTV'16 Challenge

Post by arne.hamann » Thu May 19, 2016

hinomk2 wrote: [questions quoted above]
1. The lower-priority task finishes its memory access and is only then preempted.

2. There is no local buffering. The data is fetched from global memory and stored in a register; each access thus pays the full access delay again.

3. Very good question. Right, arbitration is done at the memory; we assume no contention on the crossbar. Actually, we forgot to specify the split of the access time between crossbar and memory. Your guess is right: take 8 cycles as transfer delay and 1 cycle for each access.
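The delay split confirmed above (8 cycles of crossbar transfer plus 1 cycle of service per access, with FIFO arbitration at the memory) gives a simple formula for an access that finds requests already queued:

```python
# Access-delay model from the thread: 8 cycles crossbar transfer,
# 1 cycle per access serviced at the memory, FIFO arbitration.

TRANSFER = 8  # crossbar transfer delay, cycles
SERVICE = 1   # per-access service time at the memory, cycles

def access_delay(pending_before):
    """Delay for one access that finds `pending_before` requests
    already queued at the target memory."""
    return TRANSFER + (pending_before + 1) * SERVICE

print(access_delay(0))  # 9: the uncontended global-access figure
print(access_delay(2))  # 11: the 8 + 3*1 example from the question
```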


Re: The FMTV'16 Challenge

Post by Nacho_S » Thu May 19, 2016

Hello Arne,
arne.hamann wrote:
3. Very good question. Right, arbitration is done at the memory, we assume no contention on the crossbar. Actually, we forgot to specify the split in access time between crossbar and memory. So your guess is right: take 8 cycles as transfer delay and 1 cycle for each access.
So the 8 cycles of transfer delay are for the round trip, that is, 4 "to" and 4 "back from" the memory. Is this right?

Thanks,

Nacho.


Re: The FMTV'16 Challenge

Post by hinomk2 » Fri May 20, 2016

Thanks Arne, I think I understand the problem well enough to tackle it.

When I analyzed the worst-case end-to-end latencies of tasks and chains, quite a few tasks (6 out of 21 in my results) violate their deadlines even without memory access delays. Chains that contain runnables in unschedulable tasks are of course unanalyzable. I'm not sure these unschedulable results are meaningful for the challenge, since they make challenge results incomparable.

Is it OK to leave them unschedulable, or should I do something like changing some periods to make the whole system schedulable?

Thanks,
Junchul


Re: The FMTV'16 Challenge

Post by arne.hamann » Fri May 20, 2016

hinomk2 wrote: [questions quoted above]
Hi Junchul,

I would suggest scaling the execution times of the runnables in those tasks, and trying to answer the question: "What is the maximum execution time for those tasks that leads to a schedulable system?"

Arne
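A minimal sketch of this suggestion: shrink the runnables' execution times in an overloaded task by a common factor until their sum fits the deadline. The per-runnable WCETs below are placeholders that merely sum to the 2,342,546 cycles reported earlier in the thread; a real analysis would use a proper response-time computation (including interference from other tasks), not just the plain sum.

```python
# Find the largest common scaling factor f such that the scaled WCET
# sum of a task's runnables fits its deadline. Simplification: for a
# sum constraint, f is just deadline / sum(wcets).

DEADLINE = 2_000_000  # cycles: 0.01 s * 200 MHz (Task_10ms)

def max_scale(wcets, deadline=DEADLINE):
    """Largest factor f with f * sum(wcets) <= deadline."""
    return deadline / sum(wcets)

wcets = [800_000, 900_000, 642_546]  # hypothetical per-runnable split
f = max_scale(wcets)
print(f"scale runnables by {f:.3f} to fit the 10 ms deadline")
```

Under these assumptions the overloaded task fits its deadline once its runnables are scaled to roughly 85% of their modeled worst-case execution times.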
