
TASKers: A Whole-System Generator for Benchmarking Real-Time-System Analyses

Authors: Christian Eichler, Tobias Distler, Peter Ulbrich, Peter Wägemann, Wolfgang Schröder-Preikschat




File

OASIcs.WCET.2018.6.pdf
  • Filesize: 2.05 MB
  • 12 pages

Document Identifiers
  • DOI: 10.4230/OASIcs.WCET.2018.6
Author Details

Christian Eichler
  • Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Germany
Tobias Distler
  • Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Germany
Peter Ulbrich
  • Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Germany
Peter Wägemann
  • Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Germany
Wolfgang Schröder-Preikschat
  • Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Germany

Cite As

Christian Eichler, Tobias Distler, Peter Ulbrich, Peter Wägemann, and Wolfgang Schröder-Preikschat. TASKers: A Whole-System Generator for Benchmarking Real-Time-System Analyses. In 18th International Workshop on Worst-Case Execution Time Analysis (WCET 2018). Open Access Series in Informatics (OASIcs), Volume 63, pp. 6:1-6:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018) https://doi.org/10.4230/OASIcs.WCET.2018.6

Abstract

Implementation-based benchmarking of timing and schedulability analyses requires system code that can be executed on real hardware and has defined properties, for example, known worst-case execution times (WCETs) of tasks. Traditional approaches for creating benchmarks with such characteristics often result in implementations that do not resemble real-world systems, either because work is only simulated by means of busy waiting, or because tasks have no control-flow dependencies on one another. In this paper, we address this problem with TASKers, a generator that constructs realistic benchmark systems with predefined properties. To achieve this, TASKers composes patterns of real-world programs to generate tasks that produce known outputs and exhibit preconfigured WCETs when being executed with certain inputs. Using this knowledge during the generation process, TASKers is able to specifically introduce inter-task control-flow dependencies by mapping the output of one task to the input of another.
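The core idea described above can be illustrated with a small sketch. This is a hypothetical toy example, not the authors' implementation (TASKers generates C code from real-world program patterns): two generated "tasks" where the worst-case-triggering input of each task is known by construction, and where the second task branches on the output of the first, creating the kind of inter-task control-flow dependency the abstract describes. All names (`task_a`, `task_b`, `THRESHOLD`) are illustrative assumptions.

```python
THRESHOLD = 8  # illustrative: chosen at generation time

def task_a(x: int) -> int:
    """Generated task: inputs >= THRESHOLD force the longest path,
    so the input that triggers the worst case is known by construction."""
    if x >= THRESHOLD:
        acc = 0
        for i in range(64):       # loop bound fixed at generation time
            acc += x ^ i
        return acc                # long path, largest execution time
    return x + 1                  # short path

def task_b(y: int) -> int:
    """Second generated task: its control flow (and thus its timing
    behavior) depends on the value produced by task_a."""
    return 3 * y + 1 if y % 2 else y // 2

# Output-to-input mapping: task_a's result becomes task_b's input,
# so which path task_b takes is determined by task_a.
out_a = task_a(10)   # drives task_a down its worst-case path
out_b = task_b(out_a)
```

Because the generator knows which input forces each task's longest path and what output that path produces, it can reason about the whole chain's behavior rather than about each task in isolation.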

Subject Classification

ACM Subject Classification
  • Computer systems organization → Real-time systems
Keywords
  • benchmarking real-time-system analyses
  • task-set generation
  • whole-system generation
  • static timing analysis
  • WCET analysis
