This document introduces the concept of application scenarios for streaming-oriented embedded system design. It defines application scenarios as sets of similar operation modes grouped by their resource usage. The document outlines a three-step methodology for incorporating application scenarios into the design process: 1) discovering scenarios by identifying and clustering similar operation modes, 2) deriving predictors to determine the active scenario, and 3) exploiting scenarios to optimize design aspects like energy efficiency. It also discusses different ways to classify and discover scenarios, and provides examples of how previous works have used scenarios to optimize memory usage, voltage scaling, and multi-task scheduling.
Abstract: The increasing number of applications using digital multimedia technologies has accentuated the need to provide copyright protection for multimedia data. This paper reviews one of the data hiding techniques: digital image watermarking. We explore some basic concepts of digital image watermarking and briefly discuss two classes of methods, spatial-domain watermarking and transform-domain watermarking. Furthermore, two different algorithms for digital image watermarking are discussed, along with a comparison between them, tests performed for robustness, and applications of digital image watermarking.
Self Attested Images for Secured Transactions using Superior SOM (IDES Editor)
Separate digital signals are usually used as digital watermarks. This paper instead proposes using rebuffed untrained minute values of the vital image itself as a digital watermark, since no host image is then needed to hide the vital image for its safety. The vital images can be transformed with the self attestation. Superior Self Organizing Maps are used to derive a self signature from the vital image. This work constructs a framework with Superior Self Organizing Maps (SSOM) against a Counter Propagation Network for watermark generation and detection. The required features, such as robustness, imperceptibility and security, were analyzed to determine which neural network is appropriate for mining the watermark from the host image. The SSOM network proved to be an efficient neural trainer for the proposed watermarking technique. The paper thus presents one more contribution to the watermarking area.
This paper presents a rotation- and scale-invariant digital color image watermarking technique using the Scale Invariant Feature Transform (SIFT), which is invariant to geometric transformations. Image descriptors extracted by SIFT from the original image and the watermarked image are used to estimate the scaling factor and rotation angle of the attacked image. Using the estimated factors, the attacked image is then restored to its original size for synchronization, after which watermark detection succeeds. In the proposed approach, the offline signature, a biometric characteristic of the owner, is embedded in the second-level detail coefficients of the discrete wavelet transform of the cover image. Simulation results show that the algorithm is robust against signal processing and geometric attacks.
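The synchronization step can be illustrated with a small sketch. This is not the paper's code: SIFT matching is replaced here by hypothetical pre-matched keypoint pairs, but the geometry is the same, the scale factor is the ratio of inter-point distances and the rotation angle is the difference of inter-point orientations.

```python
import math

def estimate_scale_rotation(orig_pts, attacked_pts):
    """Estimate scale factor and rotation angle (radians) from two
    matched keypoints, assuming attacked = scale * R(angle) @ orig."""
    (x1, y1), (x2, y2) = orig_pts
    (u1, v1), (u2, v2) = attacked_pts
    # Scale: ratio of distances between the matched point pairs.
    scale = math.hypot(u2 - u1, v2 - v1) / math.hypot(x2 - x1, y2 - y1)
    # Rotation: difference of the segment orientations.
    angle = math.atan2(v2 - v1, u2 - u1) - math.atan2(y2 - y1, x2 - x1)
    return scale, angle
```

With the estimated factors, the attacked image would be rescaled by 1/scale and rotated by -angle before running the watermark detector.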
Edinburgh Data-Intensive Research
Data-intensive refers to huge volumes of data, complex patterns of data integration and analysis, and intricate interactions between data and users. Current methods and tools are failing to address data-intensive challenges effectively. They fail for several reasons, all of which are aspects of scalability: the deluge of computational methods and the plethora of computational systems prevent effective and efficient use of resources; user interfaces are not adopted at a sufficient rate to satisfy demand for scientific computing; and data and knowledge are created outside contexts suitable for effective collaborative research. The Edinburgh Data-Intensive Research group addresses these scalability issues by providing mappings from abstract formulations to concrete, optimised executions of research challenges, by developing intuitive interfaces that enable access to steer these executions, and by developing systems to aid in creating new research challenges. In this talk I will present several exemplars where we have dealt with scalability issues in scientific scenarios.
FPGA Based Design of High Performance Decimator using DALUT Algorithm (IDES Editor)
This paper presents a multiplierless approach to implementing a high-speed, area-efficient decimator for the down converter of Software Defined Radios. The technique substitutes multiply-and-accumulate (MAC) operations with look-up table (LUT) accesses. The proposed decimator has been implemented using the partitioned distributed arithmetic look-up table (DALUT) algorithm, taking optimal advantage of the embedded LUTs of the target FPGA device. The method enhances system performance in terms of both speed and area. The proposed decimator uses a half-band polyphase-decomposition FIR structure. It has been designed with Matlab 7.6, simulated with the Modelsim 6.3XE simulator, synthesized with Xilinx Synthesis Tool (XST) 10.1, and implemented on a Spartan-3E based 3s500efg320-4 FPGA device. Compared to the MAC-based approach, the proposed DALUT approach shows a 24% improvement in speed while saving almost 50% of the target device's resources.
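The core idea, replacing MACs with LUT accesses, can be sketched in software. This is not the paper's FPGA design (which is partitioned and pipelined in hardware); it is a minimal distributed-arithmetic sketch for unsigned samples, showing how a precomputed LUT of coefficient sums turns a dot product into shift-and-add over bit planes.

```python
def build_dalut(coeffs):
    """LUT indexed by a bit-vector over the inputs: entry m holds the
    sum of the coefficients whose corresponding input bit is set."""
    n = len(coeffs)
    return [sum(c for k, c in enumerate(coeffs) if m >> k & 1)
            for m in range(1 << n)]

def dalut_dot(lut, samples, nbits=8):
    """Bit-serial dot product: one LUT access per bit plane, accumulated
    with shifts -- no multiplier is used anywhere."""
    acc = 0
    for b in range(nbits):
        # Gather bit b of every sample into one LUT address.
        addr = 0
        for k, x in enumerate(samples):
            addr |= ((x >> b) & 1) << k
        acc += lut[addr] << b
    return acc
```

For a 4-tap filter the LUT has 16 entries; an FPGA stores these in its embedded LUT fabric, which is where the speed and area savings come from.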
A DWT based Dual Image Watermarking Technique for Authenticity and Watermark ... (sipij)
In this paper we propose a DWT-based dual watermarking technique wherein both blind and non-blind algorithms are used for the copyright protection of the cover/host image and the watermark, respectively. We use the concept of embedding two watermarks into the cover image by actually embedding only one, to authenticate the source image and protect the watermark simultaneously. Here the DWT coefficients of the primary watermark (logo) are modified using another, smaller secondary binary image (sign) and the mid-frequency coefficients of the cover/host image. Since the watermark has some features of the host image embedded in it, security is increased two-fold, and the watermark is also protected from misuse or copy attacks. For this purpose a new pseudorandom generator based on the mathematical constant π has been developed and used successfully in various stages of the algorithm. We have also proposed a new approach of applying pseudo-randomness in selecting the watermark pixel values to embed in the cover image; in existing techniques the randomness is incorporated only in selecting the location at which to embed the watermark. This makes the embedding process more unpredictable. The cover image watermarked with the signed logo is subjected to various attacks such as cropping, rotation, JPEG compression, scaling and noising. The results show that the scheme is very robust and has good invisibility as well.
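The general mechanism of DWT-coefficient watermarking can be illustrated with a minimal sketch. This is not the paper's scheme: it uses a one-level 1D Haar transform instead of the full 2D DWT, and a made-up additive embedding rule, but it shows how bits are hidden in detail coefficients and recovered non-blindly with the cover image.

```python
def haar_fwd(x):
    """One-level 1D Haar transform: (approximation, detail) coefficients."""
    a = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    d = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return a, d

def haar_inv(a, d):
    """Inverse one-level Haar transform (perfect reconstruction)."""
    x = []
    for ai, di in zip(a, d):
        x += [ai + di, ai - di]
    return x

def embed(cover, watermark_bits, alpha=4.0):
    """Hypothetical additive rule: nudge each detail coefficient up or
    down according to one watermark bit."""
    a, d = haar_fwd(cover)
    d = [di + (alpha if bit else -alpha) for di, bit in zip(d, watermark_bits)]
    return haar_inv(a, d)

def extract(watermarked, cover):
    """Non-blind extraction: compare detail coefficients to the cover's."""
    _, d_w = haar_fwd(watermarked)
    _, d_c = haar_fwd(cover)
    return [1 if dw > dc else 0 for dw, dc in zip(d_w, d_c)]
```

A real scheme would use 2D wavelets, mid-frequency subbands, and a pseudorandom selection of coefficients, as the abstract describes.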
Calibration of Deployment Simulation Models - A Multi-Paradigm Modelling Appr... (Daniele Gianni)
Presentation at the 2nd International Workshop on Model-driven Approaches for Simulation Engineering
(held within the SCS/IEEE Symposium on Theory of Modeling and Simulation part of SpringSim 2012)
Please see: http://www.sel.uniroma2.it/mod4sim12/ for further details
IRJET-Security Based Data Transfer and Privacy Storage through Watermark Dete... (IRJET Journal)
Gowtham T., Pradeep Kumar G., "Security Based Data Transfer and Privacy Storage through Watermark Detection", International Research Journal of Engineering and Technology (IRJET), Volume 2, Issue 01, April 2015. e-ISSN: 2395-0056, p-ISSN: 2395-0072. www.irjet.net. Published by Fast Track Publications.
Abstract
Digital watermarking has been proposed as a technology to ensure copyright protection by embedding an imperceptible, yet detectable, signal in visual multimedia content such as images or video. Security is a key aspect in every field, and privacy is a critical issue when data owners outsource data storage or processing to a third-party computing service. Several attempts have been made to strengthen security and avoid data loss. The existing system has reached its limits, leaving room for further parameter refinement. In this paper, improvements are made to the successive compressive sensing (CS) reconstruction stage and the Peak Signal-to-Noise Ratio (PSNR). A further goal is to increase the CS rate by de-emphasizing the effect of predictive variables that become uncorrelated with the measurement data, which eliminates the need for CS reconstruction.
Image Authentication Using Digital Watermarking (ijceronline)
This paper introduces an efficient multi-resolution watermarking methodology for copyright protection of digital images. By adapting the watermark signal to the wavelet coefficients, the proposed method is highly image-adaptive, and the watermark signal can be strengthened in the most significant parts of the image. As this property also increases watermark visibility, a human visual system model is incorporated to prevent perceptual visibility of the embedded watermark signal. Experimental results show that the proposed system preserves image quality and is robust against the most common image processing distortions. Furthermore, the hierarchical nature of the wavelet transform allows detection of the watermark at various resolutions, reducing the computational load needed for watermark detection depending on the noise level. The performance of the proposed system is shown to be superior to that of other available schemes reported in the literature.
An Approach Towards Lossless Compression Through Artificial Neural Network Te... (IJERA Editor)
An image carries significant information and demands considerable memory space. This significant information also leads to longer transmission time from transmitter to receiver. The time consumption can be reduced by using data compression techniques, which remove the redundant information in an image. The compressed image requires less storage and less time to transmit from transmitter to receiver. An artificial neural network with feed-forward back propagation can be used for image compression. In this paper, the Bipolar Coding technique is proposed and implemented for image compression, obtaining better results than the Principal Component Analysis (PCA) technique. In addition, the LM algorithm is proposed and implemented, which acts as a powerful technique for image compression. It is observed that Bipolar Coding and the LM algorithm fit best for image compression and processing applications.
Challenges as a third-party developer for the public transport industry - Kollektivt... (Johan Nilsson)
A presentation about the challenges I have experienced and foresee as a third-party developer for the public transport industry.
The presentation was given at Kollektivtrafikdagen before about 300 people. http://www.kollektivtrafikdagen.se/
Multilevel Hybrid Cognitive Load Balancing Algorithm for Private/Public Cloud... (IDES Editor)
Cloud computing is an emerging computing paradigm. It aims to share data, resources and services transparently among users of a massive grid. Although the industry has started selling cloud-computing products, research challenges in various areas, such as architectural design, task decomposition, task distribution, load distribution, load scheduling and task coordination, are still unclear. We therefore study methods to reason about and model cloud computing as a step towards identifying fundamental research questions in this paradigm. In this paper, we propose a model for load distribution in cloud computing by modeling clouds as cognitive systems, using aspects which depend not only on the present state of the system but also on a set of predefined transitions and conditions. The entire model is then bundled to cater to the task of job distribution using the concept of application metadata. Later, we draw a qualitative and simulation-based summarization of the proposed model. We finally evaluate the results and draw a series of key conclusions on cloud computing for future exploration.
TASK & RESOURCE SELF-ADAPTIVE EMBEDDED REAL-TIME OPERATING SYSTEM MICROKERNEL... (cscpconf)
Wireless Sensor Networks (WSNs) are used in many application fields, such as military, healthcare and environment surveillance. WSN operating systems based on the event-driven model do not support real-time, multi-task application types, while OSs based on the thread-driven model consume much energy because of frequent context switches. Due to the high-density, large-scale deployment of sensor nodes, it is very difficult to collect the nodes to update their software. Furthermore, sensor nodes are vulnerable to security attacks because of the characteristics of broadcast communication and unattended operation. This paper presents a task- and resource-self-adaptive embedded real-time microkernel, which proposes a hybrid programming model and offers a two-level scheduling strategy to support real-time multi-tasking. A communication scheme, which takes the "tuple" space and "IN/OUT" primitives from LINDA, is proposed to support collaborative and distributed tasks. In addition, the kernel implements a run-time over-the-air updating mechanism and provides a security policy to avert attacks and ensure the reliable operation of nodes. A performance evaluation is presented, and the experimental results show that the kernel is task-oriented and resource-aware and can be used for event-driven and real-time multi-task applications.
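The LINDA-style coordination mentioned above can be sketched briefly. This is not the kernel's implementation (which runs in C on sensor nodes and blocks on IN); it is a minimal, non-blocking stand-in showing the OUT/IN semantics over a shared tuple space, with None acting as a wildcard in templates.

```python
class TupleSpace:
    """Minimal LINDA-style tuple space: out() publishes a tuple, in_()
    removes and returns the first tuple matching a template.
    The real primitive blocks until a match exists; this sketch
    returns None instead when nothing matches."""

    def __init__(self):
        self._tuples = []

    def out(self, *t):
        self._tuples.append(t)

    def _matches(self, t, template):
        # A template matches when lengths agree and every non-None
        # field equals the corresponding tuple field.
        return len(t) == len(template) and all(
            p is None or p == v for p, v in zip(template, t))

    def in_(self, *template):
        for i, t in enumerate(self._tuples):
            if self._matches(t, template):
                return self._tuples.pop(i)
        return None
```

A sensor task might publish readings with ts.out("temp", node_id, value) while a collaborating task consumes them with ts.in_("temp", None, None), never naming its peer directly, which is what makes the scheme suit distributed tasks.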
Truly dependable software systems should be built with structuring techniques able to decompose the software complexity without hiding important hypotheses and assumptions, such as those regarding the target execution environment and the expected fault and system models. A judicious assessment of what can be made transparent and what should be translucent is necessary. This paper discusses a practical example of a structuring technique built with these principles in mind: reflective and refractive variables. We show that our technique offers an acceptable degree of separation of the design concerns, with limited code intrusion; at the same time, by construction, it separates but does not hide the complexity required for managing fault tolerance. In particular, our technique offers access to collected system-wide information and to the knowledge extracted from that information. This can be used to devise architectures that minimize the hazard of a mismatch between dependable software and its target execution environments.
DIVISION AND REPLICATION OF DATA IN GRID FOR OPTIMAL PERFORMANCE AND SECURITY (ijgca)
Using grid storage, users can remotely store their data and enjoy on-demand, high-quality applications and services from a shared network of configurable computing resources, without the burden of local data storage and maintenance. Based on dynamic secrets, this project designs an encryption scheme for SG wireless communication, named dynamic secret-based encryption (DSE). The basic idea of dynamic secrets is to generate a series of secrets from unavoidable transmission errors and other random factors in wireless communications. In DSE, previous packets are coded as binary values 0 and 1 according to whether they were retransmitted due to channel error; this 0/1 sequence, called the retransmission sequence (RS), is applied to generate the dynamic secret (DS). The dynamic encryption key (DEK) is then updated by XORing the previous DEK with the current DS.
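The key-update chain RS -> DS -> DEK can be sketched in a few lines. The abstract does not specify how the retransmission sequence is condensed into a secret, so the hash step below is an assumption; the XOR update rule is as stated.

```python
import hashlib

def dynamic_secret(retransmission_seq):
    """Derive a dynamic secret from the 0/1 retransmission sequence.
    Assumption: the abstract only says RS generates DS, so a SHA-256
    digest stands in for whatever derivation DSE actually uses."""
    bits = "".join(str(b) for b in retransmission_seq)
    return hashlib.sha256(bits.encode()).digest()

def update_dek(dek, ds):
    """Update rule from the abstract: new DEK = old DEK XOR current DS."""
    return bytes(a ^ b for a, b in zip(dek, ds))
```

Because both endpoints observe the same retransmissions, they can derive the same DS independently and keep their keys synchronized without transmitting key material; XOR also makes the update an involution, so applying the same DS twice restores the previous key.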
Using Semi-supervised Classifier to Forecast Extreme CPU Utilization (gerogepatton)
A semi-supervised classifier is used in this paper to investigate a model for forecasting unpredictable load on IT systems and to predict extreme CPU utilization in a complex enterprise environment with a large number of applications running concurrently. The proposed model forecasts the likelihood of a scenario where an extreme load of web traffic impacts the IT systems, and it predicts CPU utilization under extreme stress conditions. The enterprise IT environment consists of a large number of applications running in a real-time system. Load features are extracted by analysing an envelope of the workload-traffic patterns hidden in the transactional data of these applications. The method simulates and generates synthetic workload demand patterns, runs high-priority use-case scenarios in a test environment, and uses the model to predict excessive CPU utilization under peak-load conditions for validation. An Expectation Maximization classifier with forced learning attempts to extract and analyse the parameters that maximize the model's likelihood after accounting for the unknown labels. As a result, the likelihood of excessive CPU utilization can be predicted in a short duration, compared to a few days, in a complex enterprise environment. Workload demand prediction and profiling have enormous potential for optimizing usage of IT resources with minimal risk.
USING SEMI-SUPERVISED CLASSIFIER TO FORECAST EXTREME CPU UTILIZATIONijaia
A semi-supervised classifier is used in this paper is to investigate a model for forecasting unpredictable load on the IT systems and to predict extreme CPU utilization in a complex enterprise environment with large number of applications running concurrently. This proposed model forecasts the likelihood of a scenario where extreme load of web traffic impacts the IT systems and this model predicts the CPU utilization under extreme stress conditions. The enterprise IT environment consists of a large number of applications running in a real time system. Load features are extracted while analysing an envelope of the patterns of work-load traffic which are hidden in the transactional data of these applications. This method simulates and generates synthetic workload demand patterns, run use-case high priority scenarios in a test environment and use our model to predict the excessive CPU utilization under peak load conditions for validation. Expectation Maximization classifier with forced-learning, attempts to extract and analyse the parameters that can maximize the chances of the model after subsiding the unknown labels. As a result of this model, likelihood of an excessive CPU utilization can be predicted in short duration as compared to few days in a complex enterprise environment. Workload demand prediction and profiling has enormous potential in optimizing usages of IT resources with minimal risk.
USING SEMI-SUPERVISED CLASSIFIER TO FORECAST EXTREME CPU UTILIZATIONgerogepatton
A semi-supervised classifier is used in this paper is to investigate a model for forecasting unpredictable load on the IT systems and to predict extreme CPU utilization in a complex enterprise environment with large number of applications running concurrently. This proposed model forecasts the likelihood of a scenario where extreme load of web traffic impacts the IT systems and this model predicts the CPU utilization under extreme stress conditions. The enterprise IT environment consists of a large number of applications running in a real time system. Load features are extracted while analysing an envelope of the patterns of work-load traffic which are hidden in the transactional data of these applications. This method simulates and generates synthetic workload demand patterns, run use-case high priority scenarios in a test environment and use our model to predict the excessive CPU utilization under peak load conditions for validation. Expectation Maximization classifier with forced-learning, attempts to extract and analyse the parameters that can maximize the chances of the model after subsiding the unknown labels. As a result of this model, likelihood of an excessive CPU utilization can be predicted in short duration as compared to few days in a complex enterprise environment. Workload demand prediction and profiling has enormous potential in optimizing usages of IT resources with minimal risk.
Background differencing algorithm for moving object detection using system ge...eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
Run-Time Adaptive Processor Allocation of Self-Configurable Intel IXP2400 Net...CSCJournals
An ideal Network Processor, that is, a programmable multi-processor device must be capable of offering both the flexibility and speed required for packet processing. But current Network Processor systems generally fall short of the above benchmarks due to traffic fluctuations inherent in packet networks, and the resulting workload variation on individual pipeline stage over a period of time ultimately affects the overall performance of even an otherwise sound system. One potential solution would be to change the code running at these stages so as to adapt to the fluctuations; a near robust system with standing traffic fluctuations is the dynamic adaptive processor, reconfiguring the entire system, which we introduce and study to some extent in this paper. We achieve this by using a crucial decision making model, transferring the binary code to the processor through the SOAP protocol.
Similar to Application scenarios in streaming oriented embedded-system design (20)
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Epistemic Interaction - tuning interfaces to provide information for AI support
Application scenarios in streaming oriented embedded-system design
Application Scenarios in Streaming-Oriented Embedded System Design

Stefan Valentin Gheorghita, Twan Basten and Henk Corporaal
EE Department, ES Group, Eindhoven University of Technology, The Netherlands
{s.v.gheorghita,a.a.basten,h.corporaal}@tue.nl
Abstract— In the past decade real-time embedded systems became more and more complex and pervasive. From the user perspective, these systems have stringent requirements regarding size, performance and energy consumption, and due to business competition, their time-to-market is a crucial factor. Therefore, much work has been done in developing design methodologies for embedded systems to cope with these tight requirements. In this paper, we introduce the concept of application scenarios that group operation modes of an application that are similar from the resource usage perspective, and we describe how to incorporate them in the overall real-time embedded system design process. A case study shows the use of application scenarios for low energy design, under both soft and hard real-time constraints.

I. INTRODUCTION

Embedded systems usually consist of processors that execute domain-specific programs. Much of their functionality is implemented in software, which runs on one or multiple generic processors, leaving only the high performance functions implemented in hardware. Typical examples include TV sets, cellular phones, MP3 players and printers. Most of these systems run multimedia and/or telecom applications, like video and audio decoders. These applications are usually implemented as a main loop, called the loop of interest, that is executed over and over again, reading, processing and writing out individual stream objects (see figure 1). A stream object might be a bit belonging to a compressed bitstream representing a coded video clip, a macro-block, a video frame, or an audio sample. Usually, these applications have to deliver a given throughput (number of objects per second), which imposes a time constraint on each loop iteration.

Fig. 1. Typical streaming application processing a stream object. (Figure: an input bitstream of alternating header/data stream objects is read, processed by a type-dependent subset of kernels, e.g. the processing path for one object type passes through kernels 1 and 3, and written out together with the application's internal state.)

The read part of the loop of interest takes a stream object from the input stream and separates it into a header and the object's data. The processing part consists of several kernels. For each stream object some of these kernels are used, depending on the object type. The write part sends the processed data to output devices, like a screen or speakers, and saves the internal state of the application for further use (e.g. in a video decoder, the previously decoded frame may be necessary for decoding the current frame). The actions executed in a loop iteration form an internal operation mode of the application.

In this work, we introduce ways of detecting and exploiting a characteristic of the application that has not been fully used in embedded system design previously, namely the different internal operation modes, each with their own typical resource consumption. Operation modes that are closely related to each other from a resource consumption perspective are clustered in so-called application scenarios, distinguishing operation modes that are really different. If these scenarios are considered in different steps of the embedded system design, a faster or lower energy implementation (e.g. by using different source code optimizations per scenario), or a better estimation of required resources (e.g. the number of computation cycles or bandwidth) may be derived. These intermediate results lead to a smaller, cheaper and more energy efficient system that can deliver the required performance.

The paper is organized as follows. Section II presents the role of application scenarios in an embedded system design flow, illustrating the difference between them and the well known use-case scenarios, and it provides examples of scenario exploitation found in the literature. A classification of application scenarios is given in section III. A case study showing how we reduced the energy consumption of a single task system under both hard and soft real-time constraints is presented in section IV. Some conclusions are discussed in the last section.

II. SCENARIOS IN DESIGN

A. Use-case vs. Application Scenarios

Scenario-based design has been in use for a long time in different areas [1], [2], like human-computer interaction or object oriented software engineering. In these cases, scenarios concretely describe, in an early phase of the development process, the use of a future system. Moreover, they appear as narrative descriptions of envisioned usage episodes, or as unified modeling language (UML) use-case diagrams that enumerate, from a functional and timing point of view, all possible user actions and system reactions that are required to meet a proposed system functionality. These scenarios are called use-case scenarios, and characterize the system from the user perspective. In the embedded systems area, they were used in both hardware [3], [4] and software design [5].

In this work, we concentrate on a different kind of scenario, so-called application scenarios, that characterize the system from the resource usage perspective.

Definition: An application scenario is a detectable set of operation modes of an application that are sufficiently similar in a multi-dimensional resource-based cost space (e.g. execution cycles, memory usage, source code).
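The loop of interest and its operation modes can be sketched in C, the language the design trajectory in section IV starts from. This is a minimal illustration, not code from the paper: the object types, the kernel names and their arithmetic are hypothetical stand-ins for a real streaming kernel set.

```c
#include <assert.h>

/* Hypothetical stream-object types; each type selects a different
 * processing path (operation mode) through the kernels. */
typedef enum { OBJ_MONO, OBJ_STEREO } obj_type;

typedef struct {
    obj_type type;  /* parsed from the stream object's header */
    int      data;  /* stands in for the object's payload     */
} stream_object;

/* Kernels: for each object, only a subset is used, depending on its type. */
static int kernel_mono(int d)   { return d * 2; }
static int kernel_stereo(int d) { return d * 3; }

/* The processing part of one iteration of the loop of interest.
 * The kernels actually executed here form one internal operation mode. */
int process_object(const stream_object *obj)
{
    switch (obj->type) {
    case OBJ_MONO:   return kernel_mono(obj->data);
    case OBJ_STEREO: return kernel_stereo(obj->data);
    }
    return -1; /* unknown type: would fall into a backup scenario */
}
```

Each distinct path through `process_object` corresponds to an operation mode; modes with similar cost in the chosen cost space would then be clustered into one application scenario.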
(This work was supported by the Dutch Science Foundation, NWO, project FAME, number 612.064.101.)

The cost space is defined over the dimensions of interest for a specific problem. For example, we might be interested in operation modes that share the same source code, or that execute in the same number of CPU cycles. To be exploited, these modes must be detectable in the application, preferably as soon as the application starts to execute in one of them. As the definition is very general, it is commonly tailored to the specific design problem at hand (e.g. the application behavior for a specific type of input data [6]).

Fig. 2. A scenario based design flow for embedded systems: (a) full design flow, from product idea via manually defined use-case scenarios, augmented with semi-automatically extracted application scenarios, to the final system; (b) application scenario usage methodology: 1. operation mode identification & clustering (scenario discovery), 2. scenario predictor/detector derivation, 3. scenario exploitation.

Figure 2(a) depicts a design flow using scenarios. It starts from a product idea, for which the stakeholders define the future utilization as use-case scenarios. These are used in a user-centric development process to design an embedded system that includes both software and hardware components. In order to optimize the design of the system, we suggest augmenting this trajectory with application scenarios (the bottom gray box in figure 2(a)). Once the application is coded, its scenarios related to resource utilization are extracted in a semi-automatic way, and they are considered in the decisions made during the following phases of the system design. The sets of use-case scenarios and application scenarios are not necessarily disjoint. One or more use-case scenarios may be merged into one application scenario, a use-case scenario may be split into several application scenarios, or several application scenarios may intersect several use-case scenarios.

Example: We want to design a portable MP3 player as a USB stick. At first sight, there are two main use-case scenarios: (i) the player is connected to the computer and music files are transferred between them, and (ii) the player is used to listen to music. These scenarios can be divided into more detailed use-case scenarios, like, for the second one, song selection, play or fast forward scenarios. Let us consider the play scenario. From the software point of view, this use-case can be split into two different application scenarios: (i) mono mode and (ii) stereo mode. If these scenarios are used during the design, the system battery lifetime may be increased, as in the case of playing in mono mode less computation power is needed, so a lower supply voltage may be used to meet the timing constraints of the decoding.

B. Application Scenario Usage Methodology

The methodology to introduce application scenarios into the current embedded system design trajectory consists of three steps depicted in figure 2(b): (1) discovery, (2) predictor/detector derivation and (3) exploitation.

1: Scenario discovery starts from the original application and identifies its different operation modes. Their resource usage (in the cost space of interest) is characterized, and the modes with similar needs are clustered into an application scenario. The methods used for scenario discovery can be divided into three categories: (i) analytical, (ii) profiling and (iii) hybrid. Independent of the method, the set of identified application scenarios must cover all possible application operation modes.

In an analytical method, the application structure is statically analyzed to identify similar operation modes. This method is restrictive, as it cannot automatically collect information about how the application is really used and how it behaves at runtime (e.g. which scenario is the most frequently used at runtime). The real runtime behavior of the application can be captured using a profiling method, but in this case it is more difficult to derive scenario predictors than it is in the analytical case, and in general not all scenarios may be discovered, as not all possible distinct operation modes may be covered by profiling. To overcome this problem, an extra scenario, called the backup scenario, must be considered. It is selected at runtime when the application is running in an operation mode that did not appear during profiling. A hybrid method combines the advantages of the previous two methods and is the most powerful way to discover scenarios.

Especially for profiling and hybrid methods, an explosion in the number of operation modes may appear during their identification. In this case, decisions must be made using partial information, applying mode identification and clustering simultaneously. There is a trade-off between how many different scenarios and modes may be handled during the discovery process (from which the process speed and memory usage are derived) and the resulting quality. Discovery and clustering may be performed in a bottom-up approach, but also in a top-down refinement based approach.

2: Runtime scenario detector and/or predictor derivation is the step of finding a way to determine in which scenario the application runs at a certain moment in time. The current scenario of an application can be either detected, based on already known information (e.g. variable values), or predicted, with a certain confidence. Detection can be seen as prediction with 100% confidence.

Different ways of implementing predictors may be considered, like static vs. runtime adaptive or centralized vs. distributed. Independent of the predictor implementation, the following information may be used: (i) runtime application internal information like variable values and executed code (i.e. a basic block that appears only in one scenario); (ii) statistical information obtained by profiling or from the application designer (e.g. how often a scenario may appear at runtime); (iii) a probabilistic scenario transition model, like a Markov chain; and (iv) a history of active scenarios in the current execution. Predictors may be of two types:
• Reactive: Only information already computed by the application is used.
• Proactive: A part of the application control-flow that follows the predictor is duplicated/extracted in the predictor source code. This allows early decision making. There is a trade-off between the amount of code duplicated and how early in the execution the current application scenario can be predicted. Usually, the earlier the better, but the prediction overhead must be limited.

3: Scenario exploitation is the step that uses scenarios to optimize a design. As this step strongly depends on what the designer wants to achieve, we present an overview of several papers that use application scenarios (although in general they do not give an explicit definition and/or identify the concept).

In [7], the authors concentrate on saving energy for a single task application. For each manually identified scenario they select the most energy efficient architecture configuration that can be used to meet the timing constraints. The architecture has a single processor with reconfigurable components (e.g. number and type of function units), and its supply voltage can be changed. It is not clear how scenarios are predicted at runtime. In [8], a reactive predictor is used to select the lowest supply voltage for which the timing constraints are met. An extension [9] considers two simultaneous resources for scenario characterization. It looks for the most energy efficient configuration for encoding video on a mobile platform, exploring the trade-off between computation and compression efficiency.

To reduce the number of memory accesses, in [10], the authors selectively duplicate parts of the application source code, enabling global loop transformations across data dependent conditions. They have a systematic way of detecting operation modes based on profiling and of clustering them into scenarios based on a trade-off between the number of memory accesses and the code size increase. The final application implementation, including scenarios and the predictor, is done manually.

In the context of multi-task applications, the scenario concept was first used in [11] to capture the data-dependent dynamic behavior inside a thread, to better schedule a multi-thread application on a heterogeneous multi-processor architecture, allowing the change of voltage level for each individual processor. The use of application scenarios for reducing the energy consumed by a multi-task application mapped on a voltage scaling aware processor is also investigated in [12]. Other work in the multi-task context is [13]. The considered scenarios are characterized by different communication requirements (e.g. bandwidth, latency) and traffic patterns. The paper presents a method to map application communication onto a network on chip architecture, satisfying the design constraints of each individual scenario.

Most of the mentioned papers (except [10]) emphasize scenario exploitation and do not go into detail on discovery and prediction. Our work focuses on these last two problems.

III. SCENARIO CLASSIFICATION

This section presents several possible criteria that can be used for scenario classification. Considering how scenario switches are driven at runtime, two main scenario categories can be considered: data flow driven and event driven. Data flow driven scenarios characterize different actions executed in an application that are selected at runtime based on the input data characteristics (e.g. the type of streaming object). Usually each scenario has its own implementation within the application source code. Event driven scenarios are selected at runtime based on events external to the application, such as user requests or system status changes (e.g. battery level). They typically characterize different quality levels for the same functionality, which may be implemented as different algorithms or different quality parameters in the same algorithm. They are also called quality scenarios. The two types of scenarios may form a hierarchy. For different quality levels, a data flow driven scenario corresponding to the same application source code may require different amounts of resources.

The runtime switches that appear between scenarios are differentiated by the tolerable amount of side-effects. Usually, in the case of data flow driven scenarios side-effects are not acceptable, whereas in the case of event driven scenarios different potential side-effects may be acceptable.

Example: A switch between quality scenarios in a TV set may appear as an image format change (e.g. from 4:3 to 16:9). A side-effect of image flickering generated during system reconfiguration is acceptable. But when the application switches from one data driven scenario to another, no side-effects that visibly affect the image of the channel being watched are acceptable.

As design methods for single and multi-task systems concentrate on different aspects, scenarios can also be classified into (i) intra-task scenarios, which appear within a sequential part of an application (i.e. a task); and (ii) inter-task scenarios, which represent operation modes of a multi-task application. This classification can also be seen as a hierarchy. Usually, the scenario in which a multi-task application is running is derived from the scenarios in which each application task is currently running. Data flow driven intra- and inter-task scenarios are conceptually the same from the resource usage and runtime switching perspectives, but they have a different impact on the intra- and inter-task parts of the design flow, and their exploitation is in general different.

Finally, scenario usage differs for soft and hard real-time systems. Not all the methods presented above for each step of the methodology can be applied. For example, for hard real-time systems, scenario discovery can only use static analysis, and only detectors may be used to identify the current scenario at runtime, whereas for soft real-time systems predictors and statistical information from profilers may be used.

IV. OUR TRAJECTORY FOR LOW ENERGY DESIGN

This section presents our semi-automatic trajectory for discovering, predicting and exploiting application scenarios to reduce the energy consumed by a single task application on a dynamic voltage scaling (DVS) aware processor. The trajectory is adapted for both hard and soft real-time constraints. It starts from an application written in C, as C is the
most used language to write embedded systems software, and
III. A PPLICATION S CENARIO C LASSIFICATION generates the final energy-aware implementation also in C. The
The different classes of embedded systems (e.g. hard vs. soft numerical results presented bellow, are obtained for an MP3
real-time) and the problem that must be solved lead to multiple decoder running on a processor similar to an ARM7TDMI [14]
for which the frequency and the supply voltage can be set continuously within the operating range. A frequency change introduces a transition overhead of 70 µs during which the processor stops running. References to papers that detail the methodology and the results are included.

A. Hard Real-Time

Our trajectory may generate different energy-saving implementations, from a purely static one to an implementation that uses a fine-grain DVS-aware scheduler. A comparison of their energy reduction is shown in figure 3 and discussed below.

Fig. 3. Normalized energy for different MP3 hard real-time implementations.

In [15], we describe a method for the automatic discovery of scenarios that incorporate correlations between different parts of an application. These correlations differentiate between the source code parts that never execute together and the ones that may execute together in the same iteration of the loop of interest. To avoid an explosion in the number of detected scenarios, the correlations are extracted using only information about the automatically detected application parameters with a large impact on the execution time.

For the MP3 decoder, using the detected scenarios, the estimated worst-case number of execution cycles (WCEC) for the entire application, which is the maximum over the ones obtained for each scenario, was reduced by 16%. As no deadlines may be missed in hard real-time systems, the processor must be able to execute at least the WCEC per decoding time period for an audio sample. Thus, after reducing the estimated WCEC by 15.9%, a processor with a 15.9% lower frequency is good enough. This saves 36% in energy consumption.

By exploiting the different WCEC of each scenario, the energy can be further reduced. Our trajectory may generate a proactive predictor that acts like a DVS-aware coarse-grain scheduler and selects the supply voltage level once per loop iteration. In the MP3 case, the average energy reduction is up to 50% of the original energy, depending on the input stream.

Our trajectory can also introduce scenarios in a fine-grain scheduler, which changes the processor frequency multiple times during an iteration of the loop of interest. In [6], we showed that the combination of scenarios with a state-of-the-art fine-grain DVS-aware scheduler for hard real-time systems reduces the average energy by 16% compared to using only the DVS-aware scheduler. Fine-grain DVS gives better results than coarse-grain DVS if the frequency switching time is small enough. For larger switching times, fine-grain DVS is infeasible or coarse-grain DVS outperforms it.

B. Soft Real-Time

As a certain deadline miss ratio is acceptable for soft real-time systems, information collected by profiling the application can be used for scenario discovery and prediction. In [16], we describe a method and a tool that can automatically detect the most important application parameters and use them to define and dynamically predict scenarios. Using the generated code, the average over-estimation of the cycle budget required by the MP3 decoder to decode stereo songs is decreased by 46%, reducing the average reserved cycle budget for an audio sample from 3.9×10⁶ to 3.5×10⁶ cycles. The cost paid is 1.74% missed deadlines, but this can be reduced to 0% if an output buffer of the size of one audio sample is used. By using a proactive predictor that acts like a DVS-aware coarse-grain scheduler, the average energy reduction is 15% compared to the original soft real-time implementation. Up to 35% reduction is obtained if mono songs are also considered.

V. CONCLUSIONS

In this paper, we introduced the concept of application scenarios, which group operation modes that are similar from the resource usage perspective. Moreover, we presented their role in an embedded system design flow, illustrating the difference from the well-known use-case scenarios.

Our scenario-based design trajectory for reducing energy consumption under both hard and soft real-time constraints was presented. We showed that, applied to a benchmark under different scheduling constraints, it may reduce the energy consumption by 16% to 50% compared to state-of-the-art methods. To reduce energy, our trajectory exploits the difference in computation cycles between different scenarios. It can be easily adapted to consider another resource (e.g. the number of memory accesses or memory size) for scenario discovery and clustering.

REFERENCES

[1] J. M. Carroll, Ed., Scenario-Based Design: Envisioning Work and Technology in System Development. John Wiley & Sons, 1995.
[2] M. B. Rosson and J. M. Carroll, "Scenario-based design," in The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications. LEA, 2002, ch. 53, pp. 1032–1050.
[3] M. T. Ionita, "Scenario-based system architecting: A systematic approach to developing future-proof system architectures," Ph.D. dissertation, Technische Universiteit Eindhoven, Netherlands, May 2005.
[4] J. M. Paul, "Scenario-oriented design for single chip heterogeneous multiprocessors," in Proc. of the 19th IEEE Int. Parallel and Distributed Processing Symposium (IPDPS'05) - Workshop 10, 2005, p. 227b.
[5] B. P. Douglass, Real Time UML: Advances in the UML for Real-Time Systems. Addison Wesley, 2004.
[6] S. V. Gheorghita, T. Basten, and H. Corporaal, "Intra-task scenario-aware voltage scheduling," in Proc. of the Int. Conf. on Compilers, Architecture and Synthesis for Embedded Systems (CASES). ACM, 2005, pp. 177–184.
[7] R. Sasanka, C. J. Hughes, and S. V. Adve, "Joint local and global hardware adaptations for energy," ACM SIGARCH Computer Architecture News, vol. 30, no. 5, pp. 144–155, 2002.
[8] M. Pedram et al., "Frame-based dynamic voltage and frequency scaling for a MPEG decoder," in Proc. of the IEEE/ACM Int. Conf. on Computer-Aided Design (ICCAD), USA, 2002, pp. 732–737.
[9] D. G. Sachs, S. V. Adve, and D. L. Jones, "Cross-layer adaptive video coding to reduce energy on general-purpose processors," in Proc. of the IEEE Int. Conf. on Image Processing, 2003, pp. 109–112.
[10] M. Palkovic et al., "Global memory optimisation for embedded systems allowed by code duplication," in Proc. of SCOPES, 2005.
[11] P. Yang et al., "Cost-efficient mapping of dynamic concurrent tasks in embedded real-time multimedia systems," in Multi-Processor Systems on Chip. Morgan Kaufmann, 2003.
[12] S. Lee, S. Yoo, and K. Choi, "An intra-task dynamic voltage scaling method for SoC design with hierarchical FSM and synchronous dataflow model," in Proc. of the Int. Symp. on Low Power Electronics and Design. ACM, 2002, pp. 84–87.
[13] S. Murali et al., "A methodology for mapping multiple use-cases onto networks on chips," in Proc. of DATE. IEEE, 2006.
[14] ARM7TDMI processor, http://www.arm.com/products/CPUs/ARM7TDMI.html.
[15] S. V. Gheorghita, S. Stuijk, T. Basten, and H. Corporaal, "Automatic scenario detection for improved WCET estimation," in Proc. of the 42nd Design Automation Conf. (DAC). ACM, 2005, pp. 101–104.
[16] S. V. Gheorghita, T. Basten, and H. Corporaal, "Profiling driven scenario detection and prediction for multimedia applications," in Proc. of the IEEE Int. Conf. on Embedded Computer Systems: Architectures, MOdeling, and Simulation (IC-SAMOS), Greece, 2006, pp. 63–70.