Quantifying Energy Consumption for Practical Fork-Join Parallelism on an Embedded Real-Time Operating System

  1. Quantifying Energy Consumption for Practical Fork-Join Parallelism on an Embedded Real-Time Operating System Antonio Paolillo - Ph.D. student & software engineer 24th ACM International Conference on Real-Time Networks and Systems 21st October 2016
  2. (image-only slide; no extracted text)
  3. Goal: 3
  4. Goal: Parallelism helps to reduce energy while meeting real-time requirements 4
  5.–12. (title slides repeating the presentation title; no additional text)
  13.–54. (image-only slides; no extracted text)
  55. Combine: ● parallel task model ● Power-aware multi-core scheduling 55 Save more with parallelism? Yes!
  56. Combine: ● parallel task model ● Power-aware multi-core scheduling → validated in theory → malleable jobs (RTCSA’14) 56 Save more with parallelism? Yes!
  57. This work Goal: Reduce energy consumption as much as possible while meeting real-time requirements. 57
  58. This work Goal: Reduce energy consumption as much as possible while meeting real-time requirements. Validate it in practice 58
  59. Validate in practice ● Parallel programming model ● RTOS techniques → power-aware multi-core hard real-time scheduling ● Evaluated with full experimental stack (RTOS + hardware in the loop) ● Simple, but real application use cases 59
  60. 1. Run-time framework 2. Task model and analysis 3. Experiments 60
  61. 1. Run-time framework 2. Task model and analysis 3. Experiments 61
  62. Run-time framework 62 MPSoC platform
  63. Run-time framework 63 MPSoC platform RTOS & lib support
  64. Run-time framework 64 MPSoC platform RTOS & lib support
  65. Run-time framework 65 MPSoC platform RTOS & lib support Parallel app. benchmark
  66. Run-time framework 66 MPSoC platform RTOS & lib support Parallel app. benchmark
  67. Parallel programming model 67
  68. (Simple) Parallel Program 68
  69. (Simple) Parallel Program Flow 69
  70. (Simple) Parallel Program Flow 70
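
The fork-join flow sketched on these slides maps directly onto an OpenMP parallel region. As a minimal illustration (not the actual benchmark code of the talk), a three-stage fork-join program in C looks like this:

    #include <stdio.h>
    #include <omp.h>

    #define N 1024

    /* Minimal fork-join sketch: sequential stage, parallel stage, sequential
     * (join) stage -- the simple three-stage shape used in the talk. */
    int main(void)
    {
        static double data[N];
        double sum = 0.0;

        /* Sequential stage: prepare the work. */
        for (int i = 0; i < N; i++)
            data[i] = (double)i;

        /* Parallel stage: the master thread forks worker threads, loop
         * iterations are split among them, then all threads join. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += data[i] * data[i];

        /* Sequential stage after the join: use the combined result. */
        printf("sum = %f (threads available: %d)\n", sum, omp_get_max_threads());
        return 0;
    }
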
  71. An embedded RTOS 71
  72. An embedded RTOS 72 ● Micro-kernel based ● Natively supports multi-core ● Provides hard real-time scheduling ● Power management fixed at the task level We developed HOMPRTL → OpenMP run-time support for HIPPEROS.
  73. 1. Run-time framework 2. Task model and analysis 3. Experiments 73
  74. Parallel task model 74
  75. Parallel task model 75
  76. ● Sporadic fork-join ● Arbitrary deadlines ● Rate monotonic based analysis ● Threads are partitioned ● 3 stages, very simple Parallel task model 76
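
As a rough illustration of this task model (field names are hypothetical, not the data structures of the paper or of HIPPEROS), one sporadic fork-join task with its static thread-to-core partition could be described as:

    #include <stdint.h>

    #define MAX_THREADS 4

    /* Hypothetical descriptor of one sporadic fork-join task: a sequential
     * stage, a parallel stage split into nr_threads threads, and a final
     * sequential stage.  Deadlines are arbitrary (not tied to the period). */
    struct forkjoin_task {
        uint32_t period_us;        /* minimum inter-arrival time */
        uint32_t deadline_us;      /* arbitrary relative deadline */
        uint32_t wcet_seq1_us;     /* WCET of the first sequential stage */
        uint32_t wcet_par_us;      /* WCET of one parallel thread */
        uint32_t wcet_seq2_us;     /* WCET of the final sequential stage */
        uint8_t  nr_threads;       /* degree of parallelism (1..MAX_THREADS) */
        uint8_t  core_of_thread[MAX_THREADS]; /* static partition: thread -> core */
    };
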
  77. ● Symmetric Multi-Core ● Global DVFS, finite set of operating points 〈 Vdd , f 〉 ● Power increases monotonically with frequency Platform model 77
  78. Optimisation & partitioning process 78 Power optimisation Partitioner Schedulability test
  79. Optimisation & partitioning process 79 Power optimisation Partitioner Schedulability test (note raised during shepherding: the test from Axer et al. was later shown to be flawed; see the backup slides)
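
A hedged sketch of how the three boxes on this slide could fit together (the real tooling is a Python framework; the helper functions below are assumptions, not actual APIs): iterate over the global operating points from cheapest to most expensive, partition the threads, and keep the first point that passes the schedulability test.

    #include <stdbool.h>
    #include <stddef.h>

    struct op_point { unsigned freq_mhz; unsigned vdd_mv; };
    struct task_set;   /* opaque: the set of fork-join tasks */

    /* Assumed helpers (hypothetical, not real APIs): */
    bool partition_threads(const struct task_set *ts, const struct op_point *op);
    bool rm_schedulability_test(const struct task_set *ts, const struct op_point *op);

    /* Return the lowest-power operating point for which a feasible partition
     * exists; 'ops' must be sorted by increasing power consumption. */
    const struct op_point *
    select_operating_point(const struct task_set *ts,
                           const struct op_point *ops, size_t n_ops)
    {
        for (size_t i = 0; i < n_ops; i++) {
            if (partition_threads(ts, &ops[i]) &&
                rm_schedulability_test(ts, &ops[i]))
                return &ops[i];   /* cheapest feasible point */
        }
        return NULL;              /* not schedulable on this platform */
    }
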
  80. 1. Run-time framework 2. Task model and analysis 3. Experiments 80
  81. 1. Run-time framework 2. Task model and analysis 3. Experiments a. Testbed description b. Single use cases c. Task systems 81
  82. 1. Run-time framework 2. Task model and analysis 3. Experiments a. Testbed description b. Single use cases c. Task systems 82
  83. Experimental stack 83 MPSoC platform RTOS & lib support Parallel app. benchmark
  84. Experimental stack 84 MPSoC platform RTOS & lib support Parallel app. benchmark
  85. Experimental stack 85 MPSoC platform RTOS & lib support Parallel app. benchmark Measurement framework
  86. Experimental stack 86 MPSoC platform RTOS & lib support Parallel app. benchmark Measurement framework
  87. Target platform - i.MX6q SabreLite 87 ● Embedded board ARM Cortex A9 MP ● Supported by HIPPEROS ● 4 cores, but global DVFS ● Operating points
  88. Operating points 88
  89. Operating points 89
  90. 90 MPSoC Embedded board
  91. 91 Power supply (220 V) → Transformer (220 V → 5 V) → Embedded board (MPSoC)
  92. 92 Power supply (220 V) → Transformer (220 V → 5 V) → R → Embedded board (MPSoC); Oscilloscope: voltage measurements across R, P = V² / R
  93. 93 Power supply (220 V) → Transformer (220 V → 5 V) → R → Embedded board (MPSoC running HIPPEROS); Oscilloscope: voltage measurements across R, P = V² / R; Control computer: serial, JTAG and USB links for inputs/outputs, oscilloscope trigger and captured data transfer
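
Assuming the formula on the slide (P = V² / R) and a fixed oscilloscope sampling period, turning captured voltage samples into power, peak power and energy is a few lines of C. All numeric values below are illustrative, not the actual testbed parameters:

    #include <stdio.h>

    /* Convert oscilloscope voltage samples to power (P = V^2 / R, as on the
     * slide) and integrate to obtain energy.  Resistor value, sampling period
     * and samples are made-up illustration values. */
    int main(void)
    {
        const double R  = 0.1;    /* ohm, hypothetical sense resistor */
        const double dt = 1e-3;   /* s, hypothetical sampling period */
        const double v[] = { 0.28, 0.31, 0.30, 0.27, 0.29 };  /* volts */
        const int n = sizeof v / sizeof v[0];

        double energy = 0.0, peak = 0.0;
        for (int i = 0; i < n; i++) {
            double p = v[i] * v[i] / R;   /* instantaneous power */
            energy += p * dt;             /* rectangle rule is enough here */
            if (p > peak)
                peak = p;
        }
        printf("peak power = %.3f W, energy = %.6f J\n", peak, energy);
        return 0;
    }
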
  94. Use cases 94 MPSoC platform RTOS & lib support Parallel app. benchmark Measurement framework
  95. Use cases 95 MPSoC platform RTOS & lib support Parallel app. benchmark Measurement framework π AES Image
  96. Use cases 96 Image AES π ● Different workloads ● Easy OpenMP implementation ● Good scaling expected
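
The slides do not show the benchmark sources; as one plausible OpenMP-friendly version of the π use case (an assumption, not necessarily the implementation used in the talk), midpoint integration of 4/(1+x²) parallelises trivially, which is why good scaling is expected:

    #include <stdio.h>
    #include <omp.h>

    /* Illustrative pi workload: midpoint integration of 4/(1+x^2) over [0,1].
     * Embarrassingly parallel, hence a good OpenMP scaling candidate. */
    int main(void)
    {
        const long steps = 10000000L;
        const double h = 1.0 / (double)steps;
        double sum = 0.0;

        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < steps; i++) {
            double x = ((double)i + 0.5) * h;
            sum += 4.0 / (1.0 + x * x);
        }
        printf("pi ~= %.10f (%d threads)\n", sum * h, omp_get_max_threads());
        return 0;
    }
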
  97. 1. Run-time framework 2. Task model and analysis 3. Experiments a. Testbed description b. Single use cases c. Task systems 97
  98. Use case alone - voltage probe output 98
  99. Use case alone - converted to power values 99
  100. Use case alone - execution time measurement 100
  101. Use case alone - peak power measurement 101
  102. Use case alone - energy measurement 102
  103. Use case alone - execution time 103
  104. Use case alone - peak power 104
  105. Single core power consumption 105 P ∝ Vdd² · f, Vdd ∝ f
  106. Single core power consumption 106 P ∝ Vdd² · f, Vdd ∝ f → P ∝ f³
  107. Multi-core power consumption 107 P ∝ k f³ (k active cores)
  108. Use case alone - peak power 108 P ∝ k f³
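
This P ∝ k f³ model is what makes parallelism attractive for energy: ignoring static power and assuming perfect speedup (both simplifications), running k cores at f/k finishes the same work in the same time while dividing the dynamic power, and hence the energy, by k². A quick numeric check:

    #include <stdio.h>

    /* Back-of-the-envelope check: with P ~ k * f^3, running k cores at f/k
     * (perfect speedup assumed, static power ignored) yields relative dynamic
     * power/energy of k * (1/k)^3 = 1/k^2. */
    int main(void)
    {
        const double f = 1.0;                  /* normalised frequency */
        for (int k = 1; k <= 4; k++) {
            double fk = f / k;                 /* per-core frequency */
            double power = k * fk * fk * fk;   /* P ~ k * f^3 */
            printf("k = %d: relative dynamic power/energy = %.4f\n", k, power);
        }
        return 0;
    }
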
  109. Use case alone - energy 109
  110. Execution time speedup 110 Frequency scaling OpenMP parallelism
  111. 1. Run-time framework 2. Task model and analysis 3. Experiments a. Testbed description b. Single use cases c. Task systems 111
  112. System experiments 112
  113. ● Generate 232 random task systems (U = 0.6 → 2.9) System experiments 113
  114. ● Generate 232 random task systems (U = 0.6 → 2.9) ● Bind generated tasks to use cases System experiments 114
  115. ● Generate 232 random task systems (U = 0.6 → 2.9) ● Bind generated tasks to use cases ● Scheduling test & optimisation in Python for 1, 2, 3, 4 threads System experiments 115
  116. ● Generate 232 random task systems (U = 0.6 → 2.9) ● Bind generated tasks to use cases ● Scheduling test & optimisation in Python for 1, 2, 3, 4 threads ● Operating points and threads partition for each task System experiments 116
  117. ● Generate 232 random task systems (U = 0.6 → 2.9) ● Bind generated tasks to use cases ● Scheduling test & optimisation in Python for 1, 2, 3, 4 threads ● Operating points and threads partition for each task ● Generate HIPPEROS builds System experiments 117
  118. ● Generate 232 random task systems (U = 0.6 → 2.9) ● Bind generated tasks to use cases ● Scheduling test & optimisation in Python for 1, 2, 3, 4 threads ● Operating points and threads partition for each task ● Generate HIPPEROS builds ● Run feasible systems and take measurements System experiments 118
  119. ● Generate 232 random task systems (U = 0.6 → 2.9) ● Bind generated tasks to use cases ● Scheduling test & optimisation in Python for 1, 2, 3, 4 threads ● Operating points and threads partition for each task ● Generate HIPPEROS builds ● Run feasible systems and take measurements System experiments 119
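
To make the flow concrete, here is what one generated system might look like once tasks are bound to use cases; the total utilisation U = Σ Ci/Ti is the quantity swept from 0.6 to 2.9 across the 232 systems. All numbers are invented for illustration:

    #include <stdio.h>

    /* Hypothetical generated task system: each task is bound to a use case
     * and gets a WCET and period; U = sum(C_i / T_i). */
    struct gen_task { const char *use_case; double wcet_ms; double period_ms; };

    int main(void)
    {
        const struct gen_task sys[] = {
            { "pi",    12.0, 40.0 },
            { "aes",    8.0, 20.0 },
            { "image", 30.0, 60.0 },
        };
        double u = 0.0;
        for (unsigned i = 0; i < sizeof sys / sizeof sys[0]; i++)
            u += sys[i].wcet_ms / sys[i].period_ms;
        printf("total utilisation U = %.2f\n", u);  /* 0.30 + 0.40 + 0.50 = 1.20 */
        return 0;
    }
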
  120. Energy consumption 120
  121. Relative energy savings 121
  122. Conclusions 122 ● Practical experimental framework and flow for parallel real-time applications ● Parallelisation saves up to 25% energy (measured on the whole board) ● Confronted theory with practice ○ Speedup factors ○ Power measurements ● Challenge: integrate “OpenMP-like” programs into industrial systems
  123. Conclusion 123 Parallelism helps to reduce energy while meeting real-time requirements
  124. Thank you. 124
  125. Questions? 125
  126. Dynamic power vs static leakage 126 P ∝ k f³ + α k
  127. Dynamic power vs static leakage 127 P ∝ k f³ (dynamic) + α k (static)
  128. Dynamic power vs static leakage 128 P ∝ k f³ (dynamic) + α k (static) ● α is platform-dependent ● Comparison between DVFS and DPM
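
A hedged sketch of the DVFS-vs-DPM trade-off behind these backup slides: with P ∝ k f³ + α k, executing a fixed amount of work at frequency f takes time ∝ 1/f, so E(f) ∝ k (f² + α/f). Lowering f reduces the dynamic term but stretches the time during which static power is paid; below a critical frequency, slowing down costs energy and race-to-idle (DPM) wins. The constant α and the frequency grid below are purely illustrative, not measured values:

    #include <stdio.h>

    /* Energy for W normalised cycles on k cores at frequency f with
     * P ~ k*f^3 + alpha*k:
     *   time = W / f,  E(f) = (k*f^3 + alpha*k) * W / f = k*W*(f^2 + alpha/f).
     * E(f) is minimised at f = (alpha/2)^(1/3); below that, leakage dominates. */
    int main(void)
    {
        const double alpha = 0.25;     /* hypothetical static/dynamic ratio */
        const double k = 4.0, W = 1.0; /* normalised core count and work */

        for (double f = 0.2; f <= 1.001; f += 0.2) {
            double energy = k * W * (f * f + alpha / f);
            printf("f = %.1f  ->  E = %.3f\n", f, energy);
        }
        return 0;
    }
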
  129. Energy consumption 129 ● 232 systems for each degree of parallelism ○ No high-utilisation systems → partitioned Rate Monotonic ○ Only systems feasible for 1 → 4 threads ● More threads consume less ● Low-utilisation systems don’t need high operating points (often idle) ● Analysis is pessimistic for a high number of threads
  130. Relative energy savings 130 ● Same systems, but relative to 1 thread ● Parallelism helps to save energy :-) ● < 0.13, but not a lot of systems... ● “4 threads” dominates ● Higher Utot ⇒ lower energy ○ Until Utot = 2.4 ○ For high Utot : ■ Analysis pessimism ■ Only schedulable systems are included (selection bias), so already well distributed ■ Some systems are only schedulable in parallel
  131. Questions on experiment results 131 ● “3 threads” is bad: ○ Loss of symmetry with 4 cores ○ Different tasks execute simultaneously (1+3) ● Measurements on the whole platform ○ CPU alone would give better results ● Need more data ○ Charts would be smoother ○ But it takes time… ■ 1 minute/execution ■ 4 executions/system ■ 232 systems, ≈ 15 hours
  132. Analysis technique 132 ● Flawed, but does not impact results ○ Axer et al. (ECRTS’13) ○ Flaw pointed out by Fonseca et al. (SIES’16) ● Scheduling framework ○ Part of the technical framework ○ Not the important contribution/result ● Will be improved in future work