ITA: The Improved Throttled Algorithm of Load Balancing on Cloud Computing (IJCNCJournal)
Cloud computing has fueled a boom in the information technology industry. It is a great solution for businesses that want to save costs while ensuring quality of service. One of the key issues behind cloud computing's success is the load-balancing technique used in the load balancer to minimize time costs and optimize economic costs. This paper proposes an algorithm that enhances the processing time of tasks and thereby helps improve load-balancing capacity on cloud computing. The algorithm, named the Improved Throttled Algorithm (ITA), is an improvement of the Throttled Algorithm. The paper uses the Cloud Analyst tool for simulation, comparing ITA against the selected algorithms Equally Load, Round Robin, Throttled, and TMA. The simulation results show that ITA improves task processing time and request-handling time, and reduces data-center cost, compared with these popular algorithms. The improvement of ITA comes from selecting available virtual machines from an index table in order of priority: response and processing times remain stable, idle resources are limited, and cloud costs are minimized compared with the selected algorithms.
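A hypothetical sketch of the throttled-style selection the abstract describes: the balancer keeps an index table of VMs ordered by priority and returns the first available one. The names and data structures here are illustrative, not taken from the paper.

```python
def select_vm(index_table):
    """index_table: list of (vm_id, available) pairs, highest priority first."""
    for vm_id, available in index_table:
        if available:
            return vm_id
    return None  # no VM free: the request must wait in the queue

# Example: VM 0 is busy, so the next-highest-priority available VM is chosen.
table = [(0, False), (1, True), (2, True)]
print(select_vm(table))  # -> 1
```

Keeping the table sorted by priority is what distinguishes this from plain Throttled allocation, which scans in arbitrary index order.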
CONTEXT-AWARE DECISION MAKING SYSTEM FOR MOBILE CLOUD OFFLOADING (IJCNCJournal)
In this study, a mobile cloud offloading system has been developed to decide whether a process should run on the cloud or on the mobile platform, together with a context-aware decision algorithm. The low performance and battery consumption of mobile devices have been fundamental challenges in mobile computing. To overcome such challenges, recent advances in mobile cloud computing propose a selective mobile-to-cloud offloading service that moves a mobile application from a slow mobile device to a fast server in the cloud at run time. Determining whether a process should run on the cloud is an important issue, and power consumption and time limits are vital to the decision. Because calculating power consumption directly is very difficult, this study uses the PowerTutor application, a dynamic power-measurement modelling tool. Another important factor is the process completion time.
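A minimal, hypothetical version of the cloud-vs-local decision the study describes: compare estimated local energy against the energy spent transmitting the task and idling while the cloud computes. The linear energy model and all parameter names are assumptions for illustration, not the paper's model.

```python
def should_offload(cycles, data_bytes, speed_local, speed_cloud,
                   bandwidth, p_compute, p_transmit, p_idle):
    # Energy to run locally: compute power times local execution time.
    e_local = p_compute * (cycles / speed_local)
    # Energy to offload: send the input, then idle while the cloud computes.
    e_offload = (p_transmit * (data_bytes / bandwidth)
                 + p_idle * (cycles / speed_cloud))
    return e_offload < e_local

# A compute-heavy task with little data to ship favors offloading.
print(should_offload(cycles=5e9, data_bytes=1e5, speed_local=1e9,
                     speed_cloud=1e10, bandwidth=1e6,
                     p_compute=0.9, p_transmit=1.3, p_idle=0.3))  # -> True
```

The same comparison flips for data-heavy, compute-light tasks, which is why context (bandwidth, signal, task size) must feed the decision.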
Tremendous usage of the Internet has placed huge volumes of data on the network, yet end users must obtain the best service without compromising network performance. Because the cloud provides different services on a leasing basis, many companies are migrating from their own infrastructure to the cloud. This migration should not compromise cloud performance, which can be improved by an excellent load-balancing strategy that keeps the end user satisfied. This paper presents a method by which a cloud can be partitioned, together with a comparative study of different algorithms for balancing dynamic load. The comparison between the Ant Colony and Honey Bee algorithms shows which algorithm is optimal under normal load, while the simpler Round Robin algorithm is applied when partitions are in the idle state.
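The partition-aware dispatch described above can be sketched as follows. The status names, the least-loaded rule for normal load, and the node layout are all assumptions for illustration; the survey's actual algorithms (Ant Colony, Honey Bee) are more elaborate.

```python
from itertools import cycle

rr = cycle([0, 1, 2])  # round-robin order over nodes of an idle partition

def dispatch(partition_status, node_loads):
    if partition_status == "idle":
        return next(rr)                       # round robin is enough when idle
    # normal load: send work to the least-loaded node (honey-bee-like pick)
    return min(range(len(node_loads)), key=node_loads.__getitem__)

print(dispatch("idle",   [0.0, 0.0, 0.0]))  # -> 0
print(dispatch("normal", [0.7, 0.2, 0.9]))  # -> 1
```

The point of partitioning is exactly this: a cheap strategy suffices where load is light, and the expensive strategy runs only where it pays off.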
Using Grid Technologies in the Cloud for High Scalability (mabuhr)
An unstated assumption is that clouds are scalable. But are they? Stick thousands upon thousands of machines together and there are a lot of potential bottlenecks just waiting to choke off your scalability supply. And if the cloud is scalable, what are the chances that your application is really linearly scalable? At 10 machines all may be well. Even at 50 machines the seas look calm. But at 100, 200, or 500 machines all hell might break loose. How do you know?
You know through real-life testing. These kinds of tests are brutally hard and complicated. Who wants to do all the incredibly precise and difficult work of producing cloud scalability tests? GridDynamics has stepped up to the challenge and has just released their Cloud Performance Reports.
SPEED-UP IMPROVEMENT USING PARALLEL APPROACH IN IMAGE STEGANOGRAPHY (csandit)
This paper presents a parallel approach to the time-complexity problem associated with sequential algorithms. An image steganography algorithm in the transform domain is considered for implementation. Image steganography is a technique for hiding a secret message in an image. With the parallel implementation, a large message can be hidden in a large image without taking much processing time. The algorithm is implemented on GPU systems, with parallel programming done using OpenCL on CUDA cores from NVIDIA. The speed-up obtained is very good, with reasonably good output signal quality, when a large amount of data is processed.
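The paper embeds in the transform domain on a GPU via OpenCL; as a much simpler CPU stand-in, the sketch below embeds message bits into pixel LSBs with vectorized NumPy. The per-pixel independence that makes this loop-free is the same property that makes the real kernel easy to parallelize. Plain LSB is used here only to keep the illustration short; it is not the paper's transform-domain scheme.

```python
import numpy as np

def embed(pixels, bits):
    out = pixels.copy()
    out[:bits.size] = (out[:bits.size] & 0xFE) | bits  # overwrite LSBs
    return out

def extract(pixels, n):
    return pixels[:n] & 1                              # read LSBs back

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=64, dtype=np.uint8)    # flattened cover image
msg = rng.integers(0, 2, size=16, dtype=np.uint8)      # message bits
stego = embed(img, msg)
print(np.array_equal(extract(stego, 16), msg))         # -> True
```

Each pixel changes by at most 1 gray level, which is why the output signal quality stays high even for large payloads.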
Comparative Study of Neural Networks Algorithms for Cloud Computing CPU Sched... (IJECEIAES)
Cloud Computing is the most powerful computing model of our time. While the major IT providers and consumers are competing to exploit the benefits of this computing model to grow their profits, most cloud computing platforms are still built on operating systems that use basic CPU (Central Processing Unit) scheduling algorithms lacking the intelligence needed for such an innovative computing model. Correspondingly, this paper presents the benefits of applying Artificial Neural Network algorithms to enhancing CPU scheduling for the Cloud Computing model. Furthermore, a set of characteristics and theoretical metrics are proposed for comparing the different Artificial Neural Network algorithms and finding the most accurate algorithm for Cloud Computing CPU scheduling.
Detailed Simulation of Large-Scale Wireless Networks (Gabriele D'Angelo)
WiFra is a new framework for the detailed simulation of very large-scale wireless networks. It is based on the parallel and distributed simulation approach and provides high scalability in terms of the size of the simulated networks and the number of execution units running the simulation. To improve the performance of distributed simulation, additional techniques are proposed whose aim is to reduce communication overhead and maintain a good level of load balancing. Simulation architectures composed of low-cost Commercial-Off-The-Shelf (COTS) hardware are specifically supported by WiFra. The framework dynamically reconfigures the simulation, tracking the performance of each part of the execution architecture and dealing with unpredictable fluctuations in the available computation power and communication load on the single execution units. A fine-grained model of the 802.11 DCF protocol has been used for the performance evaluation of the proposed framework. The results demonstrate that the distributed approach is suitable for the detailed simulation of very large-scale wireless networks.
CONFIGURABLE TASK MAPPING FOR MULTIPLE OBJECTIVES IN MACRO-PROGRAMMING OF WIR... (ijassn)
Macro-programming is a new-generation method of using Wireless Sensor Networks (WSNs), where application developers extract data from sensor nodes through a high-level abstraction of the system. Instead of developing the entire application, a task-graph representation of the WSN model presents a simplified approach to data collection. However, mapping tasks onto sensor nodes raises several problems in energy consumption and routing delay. In this paper, we present an efficient hybrid task-mapping approach for WSNs, a Hybrid Genetic Algorithm, considering multiple optimization objectives: energy consumption, routing delay, and soft real-time requirements. We also present a method to configure the algorithm to the user's needs by changing the heuristics used for optimization. A trade-off analysis between energy consumption and delivery delay was performed, and simulation results are presented. The algorithm is applicable during macro-programming, enabling developers to choose a better mapping according to their application requirements.
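A toy sketch of the multi-objective idea above: a genetic algorithm searches task-to-node assignments scored by a weighted sum of per-assignment energy and delay, with the weights playing the role of the user-configurable trade-off. The cost tables, weights, and GA settings are all invented for illustration; the paper's actual operators and heuristics differ.

```python
import random

random.seed(1)
TASKS, NODES = 8, 4
energy = [[random.uniform(1, 5) for _ in range(NODES)] for _ in range(TASKS)]
delay  = [[random.uniform(1, 5) for _ in range(NODES)] for _ in range(TASKS)]
W_E, W_D = 0.6, 0.4                      # user-configurable trade-off weights

def cost(assign):
    return sum(W_E * energy[t][n] + W_D * delay[t][n]
               for t, n in enumerate(assign))

def evolve(pop_size=20, generations=40, p_mut=0.2):
    pop = [[random.randrange(NODES) for _ in range(TASKS)]
           for _ in range(pop_size)]
    best = min(pop, key=cost)
    initial_best = cost(best)
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            p = min(random.sample(pop, 2), key=cost)   # tournament parents
            q = min(random.sample(pop, 2), key=cost)
            cut = random.randrange(1, TASKS)           # one-point crossover
            child = p[:cut] + q[cut:]
            if random.random() < p_mut:                # point mutation
                child[random.randrange(TASKS)] = random.randrange(NODES)
            nxt.append(child)
        pop = nxt
        best = min(pop + [best], key=cost)             # keep best-so-far
    return initial_best, best

c0, best = evolve()
print(cost(best) <= c0)   # True: the search never loses its best mapping
```

Shifting W_E toward 1.0 biases the search toward energy-frugal mappings at the cost of delay, which is the trade-off the abstract analyzes.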
Elliptic Curve Cryptography (ECC) is capable of constructing public-key cryptosystems. Specifically, the security of ECC reduces to the difficulty of solving the Discrete Logarithm Problem (DLP) in the group of points of an elliptic curve (ECDLP). ECC based on the ECDLP is on the list of algorithms recommended for use by NIST (National Institute of Standards and Technology) and the NSA (National Security Agency). Given that ECDLP-based cryptosystems are in widespread use, continuous effort on monitoring the effectiveness of new attacks, and of improvements to pre-existing attacks, on the ECDLP over large prime fields remains significant. This paper aims to provide a secure, effective, and flexible method to improve data security in cloud computing: a novel algorithm using MapReduce and Pollard's rho approach to solve ECDLP instances and enhance the security level.
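A sketch of Pollard's rho for a discrete logarithm, the sequential kernel that the paper parallelizes with MapReduce. For brevity it works in a prime-order subgroup of the multiplicative group mod p rather than on an elliptic curve; the random-walk-and-collision idea carries over to the ECDLP. Parameters are toy-sized, and the partition function is the crudest possible choice.

```python
from math import gcd

def pollard_rho_dlog(g, h, p, q):
    """Solve g^x = h (mod p), where g has prime order q."""
    def step(x, a, b):
        # Invariant: x = g^a * h^b (mod p); partition the group by x mod 3.
        s = x % 3
        if s == 0:
            return (x * x) % p, (2 * a) % q, (2 * b) % q
        if s == 1:
            return (x * g) % p, (a + 1) % q, b
        return (x * h) % p, a, (b + 1) % q

    x1, a1, b1 = step(1, 0, 0)          # tortoise: one step per iteration
    x2, a2, b2 = step(*step(1, 0, 0))   # hare: two steps per iteration
    while x1 != x2:                     # Floyd cycle detection
        x1, a1, b1 = step(x1, a1, b1)
        x2, a2, b2 = step(*step(x2, a2, b2))
    if gcd((b1 - b2) % q, q) != 1:      # degenerate collision: fall back
        return next(x for x in range(q) if pow(g, x, p) == h)
    # g^a1 h^b1 = g^a2 h^b2  =>  x = (a2 - a1) / (b1 - b2)  (mod q)
    return (a2 - a1) * pow(b1 - b2, -1, q) % q

p, q, g = 107, 53, 4                    # 4 has prime order 53 in Z_107^*
h = pow(g, 29, p)
x = pollard_rho_dlog(g, h, p, q)
print(pow(g, x, p) == h)                # -> True
```

The MapReduce angle in the paper corresponds to running many such walks with distinguished points in parallel and collecting collisions in the reduce phase; a single Floyd walk is shown here for clarity.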
MCCVA: A NEW APPROACH USING SVM AND KMEANS FOR LOAD BALANCING ON CLOUD (ijccsa)
Nowadays, demand for resources and services via intranet systems or the Internet is growing rapidly, and the attendant problem is how to use these resources effectively in terms of time and quality. Network QoS and its economics are therefore common concerns, and cloud computing arose as an inevitable trend. However, managing resources and scheduling tasks in virtualized data centres on the cloud are challenging tasks. Many load-balancing algorithms for clouds have been proposed by authors, scholars, and experts, but these existing methods are mostly nature-inspired and heuristic; the application of AI and modern data-mining technologies to load balancing is not yet popular, owing to the particular characteristics of the cloud. In this paper, we propose an algorithm to reduce processing time (makespan) on cloud computing, helping load balancing work more efficiently. We use the SVM algorithm to classify incoming requests and K-means to cluster the VMs in the cloud; the load balancer then allocates requests to VMs in the most reasonable way, so that requests with the least processing time are allocated to the VMs with the lowest usage. We name this proposal MCCVA, the Makespan Classification & Clustering VM Algorithm. We have experimented with and evaluated this algorithm in CloudSim, a cloud simulation environment, and obtained better results than some other well-known algorithms. MCCVA shows the large potential of AI and data mining in load balancing, which can be developed further to achieve ever better QoS.
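A stand-in sketch of the MCCVA pipeline: requests are classified into makespan classes (the paper uses an SVM; simple thresholds stand in here), VMs are clustered by current usage with a tiny k-means, and each request class is routed to a VM cluster, shortest jobs to the least-used machines. All numbers, thresholds, and the one-dimensional usage feature are illustrative assumptions.

```python
import numpy as np

def classify(makespan):                     # SVM stand-in: 0=short .. 2=long
    return 0 if makespan < 10 else (1 if makespan < 50 else 2)

def kmeans(usage, k=3, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = rng.choice(usage, size=k, replace=False)
    for _ in range(iters):
        labels = np.abs(usage[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):         # guard against empty clusters
                centers[j] = usage[labels == j].mean()
    return labels, centers

usage = np.array([0.9, 0.1, 0.5, 0.15, 0.85, 0.55])   # per-VM utilization
labels, centers = kmeans(usage)
order = np.argsort(centers)        # cluster indices sorted by mean usage

def allocate(makespan):
    cluster = order[classify(makespan)]     # short job -> least-used cluster
    vms = np.where(labels == cluster)[0]
    return vms[usage[vms].argmin()]         # least-used VM in that cluster

print(allocate(5))    # a short request lands on a lightly loaded VM
```

The division of labor matches the abstract: classification prices the request, clustering prices the machines, and allocation just matches the two rankings.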
Energy Optimized Link Selection Algorithm for Mobile Cloud Computing (Eswar Publications)
Mobile cloud computing is a revolutionary distributed-computing research area that spans three domains: cloud computing, wireless networks, and mobile computing. It targets improving the computational capabilities of mobile devices while minimizing energy consumption: heavy computations can be offloaded to the cloud to decrease the energy consumed by the mobile device. In some mobile cloud applications, however, using the cloud has proved less energy efficient than conventional computing on the local device. Despite mobile cloud computing being a reliable idea, it still faces several problems on mobile phones, such as limited storage and short battery life, and low energy consumption is one of the most important concerns for mobile devices. Different network links have different uplink and downlink bandwidths for task and data transmission between mobile and cloud. In this paper, a novel optimal link-selection algorithm is proposed to minimize mobile energy. In the first phase, all available networks are scanned and their signal strengths are calculated; the calculated signals, along with network locations, are given as input to the optimal link-selection algorithm, which then selects an optimal network link.
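A hypothetical sketch of the two-phase selection outlined above: scan the available networks, estimate per-network transmission energy from signal strength and bandwidth, and pick the link that minimizes mobile energy. The energy model (transmit power growing linearly as the signal weakens) and all constants are assumptions for illustration.

```python
def transmit_power(rssi_dbm):
    # Weaker signal -> higher transmit power (illustrative linear model).
    return 0.5 + 0.02 * max(0.0, -50.0 - rssi_dbm)       # watts

def select_link(networks, payload_bits):
    """networks: list of (name, rssi_dbm, bandwidth_bps) from the scan phase."""
    def energy(net):
        _, rssi, bw = net
        return transmit_power(rssi) * payload_bits / bw   # joules to transmit
    return min(networks, key=energy)[0]

scanned = [("wifi-a", -40, 20e6), ("wifi-b", -75, 54e6), ("lte", -90, 10e6)]
print(select_link(scanned, payload_bits=8e6))  # -> wifi-b
```

Note that the strongest signal (wifi-a) does not win: its lower bandwidth keeps the radio on longer, so the faster link costs less energy overall.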
UnaCloud is an opportunistic cloud infrastructure (IaaS) that allows on-demand access to computing capabilities using commodity desktops. Although UnaCloud tries to maximize the use of idle resources by deploying virtual machines on them, it does not use energy-efficient resource-allocation algorithms. In this paper, we design and implement different energy-aware techniques that operate in an energy-efficient way while guaranteeing performance to users. Performance tests with different algorithms and scenarios, using real trace workloads from UnaCloud, show how different policies can change energy-consumption patterns and reduce energy consumption in opportunistic cloud infrastructures. The results show that some algorithms can reduce energy consumption by up to 30% relative to the baseline opportunistic environment.
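A sketch of one energy-aware allocation policy of the kind evaluated above: place each VM on the host whose power draw would increase the least, modeling host power as idle power plus a term linear in CPU utilization. The power model, its constants, and the single-dimension utilization are assumptions, not the paper's.

```python
IDLE_W, PEAK_W = 70.0, 180.0    # per-host power at 0% and 100% CPU (assumed)

def power(util):
    # A switched-off host (util == 0) draws nothing; a running one pays idle cost.
    return IDLE_W + (PEAK_W - IDLE_W) * util if util > 0 else 0.0

def place(vm_util, hosts):
    """hosts: list of current utilizations; returns the chosen host index."""
    def delta(i):
        u = hosts[i]
        if u + vm_util > 1.0:
            return float("inf")          # host would be overloaded
        return power(u + vm_util) - power(u)
    best = min(range(len(hosts)), key=delta)
    hosts[best] += vm_util
    return best

hosts = [0.6, 0.0, 0.3]                  # host 1 is currently switched off
print(place(0.2, hosts))                 # consolidates onto a running host
```

Because waking host 1 would pay its whole idle power, the policy naturally consolidates load onto already-running machines, which is where the energy savings in opportunistic infrastructures come from.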
Empirical studies have revealed that a significant amount of energy is lost unnecessarily in network architectures, protocols, routers, and various other network devices. There is thus a need for techniques that achieve green networking in computer architecture and lead to energy savings. Green networking is an emerging phenomenon in the computer industry because of its economic and environmental benefits: saving energy leads to cost-cutting and to lower emissions of greenhouse gases, which are among the major threats to the environment. 'Greening', as the name suggests, is the process of constructing a network architecture so as to avoid unnecessary loss of power and energy in its various components. It can be implemented using various techniques, four of which are covered in this review paper: Adaptive Link Rate (ALR), Dynamic Voltage and Frequency Scaling (DVFS), interface proxying, and energy-aware applications and software.
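As background for the DVFS technique listed above, the textbook CMOS dynamic-power relation (a standard result, not taken from this review) shows why scaling voltage and frequency together saves energy:

```latex
P_{\mathrm{dyn}} = \alpha\, C\, V^{2} f
```

where \(\alpha\) is the activity factor, \(C\) the switched capacitance, \(V\) the supply voltage, and \(f\) the clock frequency. Since the maximum feasible \(f\) scales roughly linearly with \(V\), lowering both gives a near-cubic drop in power, traded against longer execution time.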
Cobe framework cloud ontology blackboard environment for enhancing discovery ... (ijccsa)
The relatively new concept of cloud computing and its associated methodologies has many advantages in today's world. Such advantages range from providing solutions for integrating miscellaneous systems to guaranteeing the distribution of search facilities and the integration of software tools used by consumers and different providers. In this paper, we construct an ontology-based cloud framework with a view to identifying the interoperability of its external agents. The proposed framework is designed in the blackboard style and is composed of two main components: a controller and a cloud ontology blackboard environment. The controller interacts with consumers: on receipt of a request, it spontaneously uses the ontology base to distribute it and constitute the required related responses. The second component interacts with different cloud providers and systems, using the meta-ontology framework to restructure data via AI reasoning tools and map them to the corresponding redistributed request. Finally, an applicable e-tourism case study will be explored.
Professor Richard Eckard's extensive presentation details a host of events and organisations geared around understanding greenhouse gases in agriculture and working towards an adaptive, productive future.
Efficient Data Compression of ECG Signal Using Discrete Wavelet Transform (eSAT Journals)
Abstract: Data compression reduces the number of bits required to store or transmit biomedical signals. A compression algorithm for biomedical signals is implemented using the Discrete Wavelet Transform. A high threshold value (λ) gives high data reduction but poor signal fidelity, while a low threshold value gives low data reduction and high signal fidelity; the threshold should therefore be selected so that the quality of the ECG signal is not distorted on reconstruction while a good amount of data reduction is still achieved. The database is the lead II (ML II) signal collected from the MIT-BIH Arrhythmia Database. The ECG signal to be compressed is decomposed to level 5 using the biorthogonal 4.4 wavelet family. The paper demonstrates different issues in compression, with results shown using MATLAB. Index Terms: Electrocardiogram (ECG), Discrete Wavelet Transform (DWT), Compression Ratio (CR), Percentage Root Mean Deviation (PRD), etc.
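A minimal sketch of the threshold trade-off described above: a one-level Haar DWT (standing in for the paper's level-5 biorthogonal 4.4 decomposition), hard thresholding of the detail coefficients, and a PRD quality measure. The test signal is synthetic, not MIT-BIH data, and the transform choice is an assumption made to keep the code short.

```python
import numpy as np

def haar_forward(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_inverse(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def compress(x, lam):
    a, d = haar_forward(x)
    d = np.where(np.abs(d) < lam, 0.0, d)  # hard threshold at lambda
    return a, d

def prd(x, y):
    return 100 * np.sqrt(np.sum((x - y) ** 2) / np.sum(x ** 2))

t = np.linspace(0, 1, 256)
sig = np.sin(2 * np.pi * 5 * t) + 0.05 * np.sin(2 * np.pi * 60 * t)
for lam in (0.01, 0.2):
    a, d = compress(sig, lam)
    rec = haar_inverse(a, d)
    print(f"lambda={lam}: zeroed={np.mean(d == 0):.2f}, PRD={prd(sig, rec):.2f}%")
```

Raising λ zeroes more detail coefficients (more data reduction) and monotonically raises PRD, which is exactly the fidelity trade-off the abstract describes.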
A Review on Image Compression using DCT and DWT (IJSRD)
Image compression addresses the problem of reducing the amount of data needed to represent a digital image. Several transformation techniques are used for data compression; the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) are the most widely used. The DCT transforms an image from the spatial domain to the frequency domain; it has a high energy-compaction property and requires few computational resources. The DWT, on the other hand, is a multi-resolution transformation. This paper reviews the various approaches that different researchers have used for image compression. The analysis is carried out in terms of the performance parameters peak signal-to-noise ratio, bit error rate, compression ratio, mean square error, and the time taken for decomposition and reconstruction.
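A compact illustration of the two DCT properties the review cites: the orthonormal DCT-II concentrates a block's energy into a few low-frequency coefficients, and the transform is exactly invertible. The code builds the textbook DCT matrix; the smooth ramp block is an invented example, and any real 8x8 image block could replace it.

```python
import numpy as np

def dct_matrix(n):
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)           # DC row gets its own normalization
    return c

C = dct_matrix(8)
block = np.add.outer(np.arange(8.0), np.arange(8.0))  # smooth 8x8 ramp
coef = C @ block @ C.T                                # 2-D DCT-II
rec = C.T @ coef @ C                                  # inverse transform

energy_share = np.sum(coef[:2, :2] ** 2) / np.sum(coef ** 2)
print(np.allclose(rec, block), round(float(energy_share), 3))
```

For smooth content almost all of the energy sits in the top-left 2x2 corner of the coefficient block; quantizing away the rest is what JPEG-style DCT compression exploits.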
Predictive Business Process Monitoring with Structured and Unstructured DataMarlon Dumas
Presentation delivered by Irene Teinemaa at the BPM'2016 conference, Rio de Janeiro, 22 September 2016. Paper available at: http://kodu.ut.ee/~dumas/pubs/bpm2016predictivemonitoring.pdf
Managing Smartphone Crowdsensing Campaigns through the OrganiCity Smart City ...Dimitrios Amaxilatis
Presentation in the Second International Workshop on Mobile and Situated Crowdsourcing (WMSC’16), co-located with the UbiComp’16 conference in Heidelberg (Germany) on 13th of September 2016.
Automated Discovery of Structured Process Models: Discover Structured vs Disc...Marlon Dumas
Research paper presentation at the 35th International Conference on Conceptual Modeling (ER'2016), Gifu, Japan, 15 Nov. 2016
Presentation delivered by Raffaele Conforti.
Paper available at: http://goo.gl/5EN3l2
Marco Trombetti - How Translated used Big Data and Artificial Intelligence to reinvent one of the oldest and least technological markets: professional human translation.
Mobile Fog: A Programming Model for Large–Scale Applications on the Internet ...HarshitParkar6677
creating a new environment, namely the Internet of Things (IoT), that enables a wide range of future Internet applications. In this work, we present Mobile Fog, a high-level programming model for future Internet applications that are geospatially distributed, large-scale, and latency-sensitive. We analyze use cases for the programming model with camera network and connected vehicle applications to show the efficacy of Mobile Fog. We also evaluate application performance through simulation.
An advanced ensemble load balancing approach for fog computing applicationsIJECEIAES
Fog computing has emerged as a viable concept for expanding the capabilities of cloud computing to the periphery of the network, allowing for efficient data processing and analysis from internet of things (IoT) devices. Load balancing is essential in fog computing because it ensures optimal resource utilization and performance among distributed fog nodes. This paper proposes an ensemble-based load-balancing approach for fog computing environments. The advanced ensemble load balancing approach (AELBA) uses real-time monitoring and analysis of fog node metrics, such as resource utilization, network congestion, and service response times, to facilitate effective load distribution. These metrics are fed into a centralized load-balancing controller, which dynamically adjusts the load distribution across fog nodes based on the ensemble's collective decision-making. The performance of the proposed ensemble load-balancing approach is evaluated and compared to traditional load-balancing techniques in fog using extensive simulation experiments. The results demonstrate that our ensemble-based approach outperforms individual load-balancing algorithms regarding response time, resource utilization, and scalability. It adapts to dynamic fog environments, providing efficient load balancing even under varying workload conditions.
Intelligent task processing using mobile edge computing: processing time opti...IAESIJAI
The fast-paced development of the internet of things has led to an increase in computing resource services that require a fast response time, a feature left unsatisfied when using cloud infrastructures due to network latency. Therefore, mobile edge computing became an emerging model by extending computation and storage resources to the network edge, to meet the demands of delay-sensitive and heavy computing applications. Computation offloading is the main feature that makes edge computing surpass existing cloud-based technologies by breaking limitations such as computing capabilities, battery resources, and storage availability; it enhances the durability and performance of mobile devices by offloading local intensive computation tasks to edge servers. However, the optimal solution is not always guaranteed by offloading computation; therefore, the offloading decision is a crucial step depending on many parameters that should be taken into consideration. In this paper, we use a simulator to compare a two-tier edge orchestrator architecture with the results obtained by implementing a system model that aims to minimize a task's processing time constrained by time delay and the device's limited computational resources and usage, based on a modified version.
Evaluation of load balancing approaches for Erlang concurrent application in ...TELKOMNIKA JOURNAL
Cloud systems accommodate computing environments including PaaS (platform as a service), SaaS (software as a service), and IaaS (infrastructure as a service) that enable cloud services. A cloud system allows multiple users to employ computing services through browsers, which reflects an alternative service model that shifts the local computing workload to a distant site. Cloud virtualization is another characteristic of clouds that delivers virtual computing services and imitates the functionality of physical computing resources. It refers to elastic load-balancing management that provides a flexible model of on-demand services. Virtualization allows organizations to achieve high levels of reliability, accessibility, and scalability by having the capability to execute applications on multiple resources simultaneously. In this paper, we use a queuing model to consider flexible load balancing and evaluate performance metrics such as mean queue length, throughput, mean waiting time, utilization, and mean traversal time. The model is aware of the arrival of concurrent applications with an Erlang distribution. Simulation results regarding performance metrics are investigated. Results point out that in cloud systems both fairness and load balancing need to be significantly considered.
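The abstract's metrics (mean queue length, throughput, mean waiting time, utilization) have closed forms in simple queueing models. The paper itself uses Erlang-distributed arrivals, but as a baseline the classic M/M/1 formulas for the same metrics are easy to state (illustrative only, not the paper's model):

```python
def mm1_metrics(lam, mu):
    """Steady-state M/M/1 metrics: arrival rate lam, service rate mu."""
    rho = lam / mu                     # utilization, must be < 1 for stability
    assert rho < 1, "queue is unstable when lam >= mu"
    return {
        "utilization": rho,
        "L": rho / (1 - rho),          # mean number in system
        "W": 1 / (mu - lam),           # mean time in system
        "Lq": rho ** 2 / (1 - rho),    # mean queue length
        "Wq": rho / (mu - lam),        # mean waiting time
    }
```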
Task Scheduling using Hybrid Algorithm in Cloud Computing Environmentsiosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
Providing a multi-objective scheduling tasks by Using PSO algorithm for cost ...Editor IJCATR
This article uses the multi-objective PSO algorithm for scheduling tasks for cost management in cloud computing. Any migration cost due to supply failure is treated as one objective; each task is a particle, recognized through an appropriate fitness (scheduling) function describing the particle arrangement that incurs the least total expense. In addition, a weight is assigned to each expenditure, reflecting the importance of its cost. The data used to simulate the proposed method are a series of academic and research data prepared from the Internet, and MATLAB is used for simulation. We simulate two cases: in the first, we consider four tasks by four vehicles and divide the tasks; in the second, we make the problem more complicated and consider six tasks by four vehicles. We record PSO's output for both cases over various iterations. Finally, the particle dispersion as well as the output of the cost function were computed for each pa
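The PSO mechanics the abstract alludes to (particles, personal/global bests, velocity updates) can be sketched compactly. This is a generic minimal PSO minimizer, not the article's multi-objective variant; all parameter values and the cost function are illustrative assumptions:

```python
import random

def pso_minimize(cost, dim, n_particles=20, iters=100, bounds=(0.0, 10.0)):
    """Minimal particle swarm: each particle is a candidate solution vector."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal bests
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]  # global best
    w, c1, c2 = 0.7, 1.5, 1.5                       # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost
```

In a scheduling setting, `cost` would encode total expense including the weighted migration-failure penalty the article describes.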
International Journal of Engineering Research and Development (IJERD)IJERD Editor
Resource allocation for fog computing based on software-defined networksIJECEIAES
With the emergence of cloud computing as a processing backbone for the internet of things (IoT), fog computing has been proposed as a solution for delay-sensitive applications. In fog computing, this is done by placing computing servers near the IoT. IoT networks are inherently very dynamic, and their topology and resources may change drastically in a short period. So, using the traditional networking paradigm to build their communication backbone may lower network performance and increase network-configuration convergence latency. Thus, it seems more beneficial to employ the software-defined networking paradigm to implement their communication network. In software-defined networking (SDN), separating the network's control and data forwarding planes makes it possible to manage the network in a centralized way. Managing a network using a centralized controller can make it more flexible and agile in response to any possible network topology and state changes. This paper presents a software-defined fog platform to host real-time applications in the IoT. The effectiveness of the mechanism has been evaluated by conducting a series of simulations. The results of the simulations show that the proposed mechanism is able to find near-optimal solutions in a much lower execution time compared to the brute-force method.
Contemporary Energy Optimization for Mobile and Cloud Environmentijceronline
Cloud and mobile computing applications are increasing heavily in terms of usage, and these two areas extend the usability of systems. This review paper gives information about cloud and mobile applications in terms of the resources they consume, the need to choose a variety of features for users in several locations, and the evolutionary provisions for service providers and end users. Both fields are combined to provide good functionality, efficiency and effectiveness with mobile phones. Enhancement is considered in terms of power consumption, given the resource-constrained nature of devices, the communication media and cost effectiveness. This paper discusses the concepts related to power consumption, the underlying protocols and other performance issues.
Adaptive offloading in Mobile Cloud Computing by an automatic task-partitioning approach is the idea of augmenting execution by migrating heavy computation from mobile devices to resourceful cloud servers and then receiving the results from them via wireless networks. Offloading is an effective way to overcome the resource and functionality constraints of mobile devices, since it can release them from intensive processing and increase the performance of mobile applications in terms of response time. Offloading brings many potential benefits, such as energy saving, performance improvement, reliability improvement, ease for software developers and better exploitation of contextual information. Parameters about method transitions, response times, cost and energy consumption are dynamically re-estimated at runtime during application executions.
CONFIGURABLE TASK MAPPING FOR MULTIPLE OBJECTIVES IN MACRO-PROGRAMMING OF WIR...ijassn
Macro-programming is the new-generation advanced method of using Wireless Sensor Networks (WSNs), where application developers can extract data from sensor nodes through a high-level abstraction of the system. Instead of developing the entire application, a task-graph representation of the WSN model presents a simplified approach to data collection.
Time and resource constrained offloading with multi-task in a mobile edge co...IJECEIAES
In recent years, the importance of the mobile edge computing (MEC) paradigm, along with 5G, the Internet of Things (IoT) and the virtualization of network functions, has been well noticed. Besides, the implementation of computation-intensive applications at the mobile device level is limited by battery capacity, processing capabilities and execution time. To increase battery life and improve the quality of experience for computationally intensive and latency-sensitive applications, offloading some parts of these applications to the MEC is proposed. This paper presents a solution for a hard decision problem that jointly optimizes the processing time and computing resources in a mobile edge-computing node. Hence, we consider a mobile device with an offloadable list of heavy tasks, and we jointly optimize the offloading decisions and the allocation of IT resources to reduce the latency of task processing. Thus, we developed a heuristic solution based on the simulated annealing algorithm, which can improve the offloading rate and reduce the total task latency while meeting short decision times. We performed a series of experiments to show its efficiency. The obtained results in terms of full treatment time are very encouraging. In addition, our solution makes offloading decisions within acceptable and achievable deadlines.
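A simulated-annealing search over binary offloading decisions, as described above, follows a standard template. This is a generic sketch under assumed names, not the paper's actual cost model: `latency` is a caller-supplied function scoring a decision vector (1 = offload the task, 0 = run locally):

```python
import math
import random

def anneal_offload(latency, n_tasks, iters=2000, t0=1.0, cooling=0.995):
    """Simulated-annealing search over binary offload decisions."""
    cur = [random.randint(0, 1) for _ in range(n_tasks)]
    cur_cost = latency(cur)
    best, best_cost = cur[:], cur_cost
    t = t0
    for _ in range(iters):
        cand = cur[:]
        cand[random.randrange(n_tasks)] ^= 1   # flip one offloading decision
        cand_cost = latency(cand)
        # Accept improvements always; accept worse moves with Boltzmann probability
        if cand_cost < cur_cost or random.random() < math.exp((cur_cost - cand_cost) / t):
            cur, cur_cost = cand, cand_cost
            if cur_cost < best_cost:
                best, best_cost = cur[:], cur_cost
        t *= cooling                            # cool the temperature
    return best, best_cost
```

The decision-time budget the abstract mentions maps naturally onto `iters`: fewer iterations give a faster but rougher decision.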
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
ENERGY EFFICIENT COMPUTING FOR SMART PHONES IN CLOUD ASSISTED ENVIRONMENTIJCNCJournal
In recent years, the use of smart mobile phones has increased enormously, and they have become an integral part of human life. Smartphones are capable of supporting an immense range of complicated and intensive applications, resulting in shortened battery life and lower performance. Mobile cloud computing is a newly rising paradigm that integrates the features of cloud computing and mobile computing to overcome the constraints of mobile devices. Mobile cloud computing employs computational offloading, which migrates computations from mobile devices to remote servers. In this paper, a novel model is proposed for dynamic task offloading to attain energy optimization and better performance for mobile applications in the cloud environment. The paper proposes an optimum offloading algorithm by introducing new criteria, such as benchmarking, for offloading decision making. It also supports the concept of partitioning to divide the computing problem into various sub-problems, which can be executed in parallel on the mobile device and the cloud. Performance evaluation results proved that the proposed model can reduce around 20% to 53% of energy for low-complexity problems and up to 98% for high-complexity problems.
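Offloading decision criteria of this kind typically compare local computation energy against transmission energy. A textbook-style sketch, not this paper's actual model (all parameter names are assumptions):

```python
def should_offload(c_cycles, d_bits, f_local, p_compute, bw, p_tx):
    """Offload when uploading the input costs less energy than computing locally.

    c_cycles:  CPU cycles the task needs
    d_bits:    input data to transmit (bits)
    f_local:   local CPU frequency (cycles/s)
    p_compute: local compute power draw (W)
    bw:        uplink bandwidth (bits/s)
    p_tx:      radio transmit power draw (W)
    """
    e_local = (c_cycles / f_local) * p_compute   # energy to compute locally
    e_offload = (d_bits / bw) * p_tx             # energy to upload the input
    return e_offload < e_local
```

Compute-heavy tasks with small inputs favor offloading; data-heavy tasks with light computation favor local execution, which is why a per-task decision (and partitioning into sub-problems) pays off.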
Similar to Just in-time code offloading for wearable computing (20)
An efficient tree based self-organizing protocol for internet of thingsredpel dot com
An efficient tree based self-organizing protocol for internet of things.
for more ieee paper / full abstract / implementation , just visit www.redpel.com
Web Service QoS Prediction Based on Adaptive Dynamic Programming Using Fuzzy ...redpel dot com
Web Service QoS Prediction Based on Adaptive Dynamic Programming Using Fuzzy Neural Networks for Cloud Services
Privacy preserving and delegated access control for cloud applicationsredpel dot com
Privacy preserving and delegated access control for cloud applications
Performance evaluation and estimation model using regression method for hadoo...redpel dot com
Performance evaluation and estimation model using regression method for hadoop word count.
Frequency and similarity aware partitioning for cloud storage based on space ...redpel dot com
Frequency and similarity aware partitioning for cloud storage based on space time utility maximization model.
Multiagent multiobjective interaction game system for service provisoning veh...redpel dot com
Multiagent multiobjective interaction game system for service provisoning vehicular cloud
Efficient multicast delivery for data redundancy minimization over wireless d...redpel dot com
Efficient multicast delivery for data redundancy minimization over wireless data centers
Cloud assisted io t-based scada systems security- a review of the state of th...redpel dot com
Cloud assisted io t-based scada systems security- a review of the state of the art and future challenges.
I-Sieve: An inline High Performance Deduplication System Used in cloud storageredpel dot com
I-Sieve: An inline High Performance Deduplication System Used in cloud storage
Architecture harmonization between cloud radio access network and fog networkredpel dot com
Architecture harmonization between cloud radio access network and fog network
A tutorial on secure outsourcing of large scalecomputation for big dataredpel dot com
A tutorial on secure outsourcing of large scalecomputation for big data
A parallel patient treatment time prediction algorithm and its applications i...redpel dot com
A parallel patient treatment time prediction algorithm and its applications in hospital.
Unit 8 - Information and Communication Technology (Paper I).pdfThiyagu K
This slides describes the basic concepts of ICT, basics of Email, Emerging Technology and Digital Initiatives in Education. This presentations aligns with the UGC Paper I syllabus.
Embracing GenAI - A Strategic ImperativePeter Windle
Artificial Intelligence (AI) technologies such as Generative AI, Image Generators and Large Language Models have had a dramatic impact on teaching, learning and assessment over the past 18 months. The most immediate threat AI posed was to Academic Integrity with Higher Education Institutes (HEIs) focusing their efforts on combating the use of GenAI in assessment. Guidelines were developed for staff and students, policies put in place too. Innovative educators have forged paths in the use of Generative AI for teaching, learning and assessments leading to pockets of transformation springing up across HEIs, often with little or no top-down guidance, support or direction.
This Gasta posits a strategic approach to integrating AI into HEIs to prepare staff, students and the curriculum for an evolving world and workplace. We will highlight the advantages of working with these technologies beyond the realm of teaching, learning and assessment by considering prompt engineering skills, industry impact, curriculum changes, and the need for staff upskilling. In contrast, not engaging strategically with Generative AI poses risks, including falling behind peers, missed opportunities and failing to ensure our graduates remain employable. The rapid evolution of AI technologies necessitates a proactive and strategic approach if we are to remain relevant.
Instructions for Submissions thorugh G- Classroom.pptxJheel Barad
This presentation provides a briefing on how to upload submissions and documents in Google Classroom. It was prepared as part of an orientation for new Sainik School in-service teacher trainees. As a training officer, my goal is to ensure that you are comfortable and proficient with this essential tool for managing assignments and fostering student engagement.
Francesca Gottschalk - How can education support child empowerment.pptxEduSkills OECD
Francesca Gottschalk from the OECD’s Centre for Educational Research and Innovation presents at the Ask an Expert Webinar: How can education support child empowerment?
2024.06.01 Introducing a competency framework for languag learning materials ...Sandy Millin
http://sandymillin.wordpress.com/iateflwebinar2024
Published classroom materials form the basis of syllabuses, drive teacher professional development, and have a potentially huge influence on learners, teachers and education systems. All teachers also create their own materials, whether a few sentences on a blackboard, a highly-structured fully-realised online course, or anything in between. Despite this, the knowledge and skills needed to create effective language learning materials are rarely part of teacher training, and are mostly learnt by trial and error.
Knowledge and skills frameworks, generally called competency frameworks, for ELT teachers, trainers and managers have existed for a few years now. However, until I created one for my MA dissertation, there wasn’t one drawing together what we need to know and do to be able to effectively produce language learning materials.
This webinar will introduce you to my framework, highlighting the key competencies I identified from my research. It will also show how anybody involved in language teaching (any language, not just English!), teacher training, managing schools or developing language learning materials can benefit from using the framework.
Honest Reviews of Tim Han LMA Course Program.pptxtimhan337
Personal development courses are widely available today, with each one promising life-changing outcomes. Tim Han’s Life Mastery Achievers (LMA) Course has drawn a lot of interest. In addition to offering my frank assessment of Success Insider’s LMA Course, this piece examines the course’s effects via a variety of Tim Han LMA course reviews and Success Insider comments.
Operation “Blue Star” is the only event in the history of independent India where the state went to war with its own people. Even after about 40 years, it is not clear whether it was the culmination of the state's anger toward the people of the region, a political game of power, or the start of a dictatorial chapter in the democratic setup.
The people of Punjab felt alienated from the mainstream due to the denial of their just demands during a long democratic struggle since independence. As happens all over the world, this led to a militant struggle with great loss of lives among military, police and civilian personnel. The killing of Indira Gandhi and the massacre of innocent Sikhs in Delhi and other Indian cities were also associated with this movement.
Macroeconomics- Movie Location
This will be used as part of your Personal Professional Portfolio once graded.
Objective:
Prepare a presentation or a paper using research, basic comparative analysis, data organization and application of economic information. You will make an informed assessment of an economic climate outside of the United States to accomplish an entertainment industry objective.
Introduction to AI for Nonprofits with Tapp NetworkTechSoup
Dive into the world of AI! Experts Jon Hill and Tareq Monaur will guide you through AI's role in enhancing nonprofit websites and basic marketing strategies, making it easy to understand and apply.
Just in-time code offloading for wearable computing
IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTING
Received 15 July 2014; revised 9 November 2014; accepted 21 November 2014. Date of publication 7 January 2015; date of current version 6 March 2015.
Digital Object Identifier 10.1109/TETC.2014.2387688

Just-in-Time Code Offloading for Wearable Computing

ZIXUE CHENG (Member, IEEE), PENG LI (Member, IEEE), JUNBO WANG (Member, IEEE), AND SONG GUO (Senior Member, IEEE)
School of Computer Science and Engineering, University of Aizu, Aizuwakamatsu 965-8580, Japan
CORRESPONDING AUTHOR: P. LI (lipengcs@gmail.com)
ABSTRACT Wearable computing is becoming an emerging computing paradigm for various recently developed wearable devices, such as Google Glass and the Samsung Galaxy Smartwatch, which have significantly changed our daily life with new functions. To magnify the applications on wearable devices with limited computational capability, storage, and battery capacity, in this paper we propose a novel three-layer architecture consisting of wearable devices, mobile devices, and a remote cloud for code offloading. In particular, we offload a portion of computation tasks from wearable devices to local mobile devices or the remote cloud such that even applications with a heavy computation load can still be upheld on wearable devices. Furthermore, considering the special characteristics and requirements of wearable devices, we investigate a code offloading strategy with a novel just-in-time objective, i.e., maximizing the number of tasks that should be executed on wearable devices with guaranteed delay requirements. Because of the NP-hardness of this problem, as we prove, we propose a fast heuristic algorithm based on the genetic algorithm to solve it. Finally, extensive simulations are conducted to show that our proposed algorithm significantly outperforms the other three offloading strategies.

INDEX TERMS Wearable computing, just-in-time, code offloading, cloud.
I. INTRODUCTION
Along with the popularity of various wearable devices, such as Google Glass [1] and Magic Ring [2], wearable computing has attracted more and more attention, since it facilitates a new form of cyber-physical interaction comprising small body-worn devices that are always powered on and accessible [3]–[6]. Various emerging applications, such as health monitoring, reality augmentation, and gesture or object recognition, require wearable devices to provide fast processing and communication capability in an energy-efficient manner. On the other hand, the hardware equipped on wearable devices is usually limited in size and weight, and can hardly provide enough capability and power for complicated applications.

To fill the gap between resource demand and supply on wearable devices, we propose a novel architecture that offloads some codes to nearby mobile devices with stronger processing capability or a remote cloud with unlimited computation resources. Specifically, we consider a three-layer architecture as shown in Fig. 1. Wearable devices with limited computation capability form the first layer, closest to users.

FIGURE 1. Architecture.

Several mobile devices, such as smartphones or tablets, are in the middle layer, which can communicate with wearable devices using short-range communication technologies like ZigBee or Bluetooth. Meanwhile, these mobile devices
2168-6750 © 2015 IEEE. Translations and content mining are permitted for academic research only. Personal use is also permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
VOLUME 3, NO. 1, MARCH 2015
can communicate with the remote cloud as the third layer via WiFi or LTE networks.

Under this three-layer architecture, we investigate how to efficiently offload codes from wearable devices in the first layer to computation resources in the second and third layers. In this paper, wearable applications are represented as task graphs, in which methods or functions are denoted by nodes and their relationships by edges. Note that some tasks, such as sensing or display, cannot be offloaded, i.e., they should be executed only on wearable devices. These tasks are referred to as w-tasks in the rest of our paper. For the other, non-w-tasks, we propose a code offloading algorithm to schedule them on mobile devices or the cloud. To guarantee a certain level of user experience, we consider a just-in-time objective for code offloading, i.e., maximizing the number of w-tasks that are executed within a given delay from their direct previous ones. It is motivated by the fact that w-tasks directly interact with users, who cannot tolerate a long delay between any two adjacent w-tasks.
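The just-in-time objective can be stated compactly: given the completion times of consecutive w-tasks, count the adjacent pairs whose gap stays within the delay bound. A minimal sketch of this counting (our own illustration, not the paper's formal model; schedule-dependent completion times are assumed given):

```python
def jit_satisfied(w_finish_times, delay_bound):
    """Count adjacent w-task pairs whose inter-completion gap meets the bound.

    w_finish_times: completion times of consecutive w-tasks on the wearable
    delay_bound:    maximum tolerable delay between two adjacent w-tasks
    """
    return sum(1 for a, b in zip(w_finish_times, w_finish_times[1:])
               if b - a <= delay_bound)
```

An offloading schedule changes `w_finish_times` (by shortening the non-w-task computation between w-tasks), and the optimization seeks the schedule maximizing this count.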
The main contributions of this paper are summarized as follows.
• We propose a novel three-layer architecture for code offloading from wearable devices to local mobile devices and a remote cloud. Different layers have distinct processing capability, and they communicate with each other using different wireless communication technologies.
• We consider an optimization problem for code offloading with a just-in-time objective with respect to user experience. This problem is proved to be NP-hard, and we develop a formulation that deals with the challenges of both task assignment and task scheduling, i.e., to determine where and in which order these tasks should be executed, respectively.
• We develop a fast algorithm based on the genetic algorithm to approximate the optimal solution. Instead of directly applying the standard genetic algorithm with high complexity, we propose an enhanced algorithm by creating chromosomes for global scheduling only, and leaving the determination of other variables to a simplified optimization problem.
• Finally, extensive simulations are conducted to evaluate the performance of our proposed algorithm. The results show that our algorithm can quickly converge to performance close to the optimal solution.
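The enhanced genetic algorithm itself is detailed later in the paper; as a rough, hypothetical sketch of the chromosome-for-global-scheduling-only idea, a permutation-encoded GA could look like the following, where `cost` stands in for the simplified subproblem that scores an ordering (all parameter values are illustrative, not the authors'):

```python
import random

def ga_schedule(cost, n_tasks, pop_size=30, gens=100, mut=0.2):
    """Genetic search over global task orderings (chromosome = permutation)."""
    def mutate(perm):
        p = perm[:]
        i, j = random.sample(range(n_tasks), 2)
        p[i], p[j] = p[j], p[i]            # swap mutation
        return p

    def crossover(a, b):
        # Order crossover: keep a slice of `a`, fill the rest in `b`'s order
        i, j = sorted(random.sample(range(n_tasks), 2))
        middle = a[i:j]
        rest = [g for g in b if g not in middle]
        return rest[:i] + middle + rest[i:]

    pop = [random.sample(range(n_tasks), n_tasks) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)                 # lower cost is better
        elite = pop[: pop_size // 2]       # keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            child = crossover(a, b)
            if random.random() < mut:
                child = mutate(child)
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)
```

Encoding only the ordering keeps chromosomes short; the placement of each task (wearable, mobile device, or cloud) is then resolved by the simplified per-ordering optimization, which is the complexity reduction the bullet above describes.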
The rest of this paper is organized as follows. We review some important related work in Section 2. The system model is presented in Section 3. Section 4 formulates the problem, whose hardness is analyzed in Section 5. Section 6 presents our proposed algorithm. The performance evaluation is given in Section 7. Section 8 finally concludes this paper.
II. RELATED WORK
Code offloading is a critical technique to enable mobile cloud computing [7]–[10], in which resource-constrained mobile devices can outsource their computation and storage to the remote cloud. Luo et al. [11] have proposed the idea of using cloud computing to enhance the capabilities of mobile devices. Hyrax [12] has been proposed as a mobile cloud computing platform that allows mobile devices to use cloud computing platforms for data processing. Oberheide et al. [13] have proposed to outsource antivirus services from mobile devices to the cloud.

However, these works simply offload the whole application to the cloud, which would lead to high communication cost. Thus, partition schemes have emerged to partially offload applications to the cloud for better performance. CloneCloud [14] seamlessly offloads parts of applications from devices to their clones residing in virtual machines in the cloud. Li et al. [15] use a static partitioning method to improve the battery lifetime of mobile devices. Rudenko et al. [16] show that the Gaussian application (i.e., solving a system of linear algebraic equations) can be offloaded to a remote server. Later, several solutions have been proposed to find the optimal decision for partitioning applications before offloading. In [17], the authors present a partition scheme based on profiling information about computation time and data sharing at the level of procedure calls. The scheme constructs a cost graph, on which a branch-and-bound algorithm [18] is applied with the objective of minimizing the total energy consumption of computation and the total data communication cost. The idea of this algorithm is to prune the search space to obtain an approximate solution. In [19], the authors present an approach to decide which components of Java programs should be offloaded. The approach first divides a Java program into methods and uses input parameters to compute the execution costs for these methods. Then, it makes an optimal execution decision by comparing the local execution cost of each method with the remote execution cost estimated based on the status of the current wireless channel condition. Wang et al. [20] present a computation offloading scheme for mobile devices and propose a polynomial-time algorithm to find an optimal program partition. The proposed scheme partitions a program into distributed subprograms by producing a program abstraction, where all physical memory references are mapped into references of abstract memory locations. Yang et al. [21] extend this work by focusing on the system throughput rather than the makespan of the application. Moreover, they propose a genetic algorithm that converges to the global optimal partition, running on the cloud side.

Different from existing work, we extend the idea of code offloading to wearable computing by proposing a three-layer architecture. Moreover, we focus on the just-in-time objective with respect to user experience, which has been little studied in existing work.
III. SYSTEM MODEL
A. NETWORK MODEL
We consider a network consisting of a wearable device (WD),
several local mobile devices (MD) and a remote cloud (RC),
VOLUME 3, NO. 1, MARCH 2015
IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTING
Cheng et al.: Just-in-Time Code Offloading for Wearable Computing
which can be represented by a graph Gn = (V, E), where
V denotes the set of nodes, including all devices and the cloud,
and E represents the set of communication links among nodes in V.
The processing capability of each node i ∈ V is denoted by c_i.
Typically, we have c_WD ≤ c_MD ≤ c_RC, i.e., the wearable
device has the weakest processing capability because of its
low-end hardware, and the processing speed of mobile devices is
faster than that of the wearable device but slower than that of the cloud.
Each edge (i, j) ∈ E is associated with a transmission
rate r_ij depending on the adopted communication technology.
Local mobile devices communicate with the wearable device
through short-range wireless technologies (e.g., Bluetooth
or ZigBee). On the other hand, mobile devices communicate
with each other through direct link (e.g., WiFi direct or
LTE direct) or wide area network (e.g., 3G networks).
B. APPLICATION MODEL
A wearable application can be represented by a directed
acyclic graph Ga = (N, A), where the set N = {1, 2, ..., n}
denotes the tasks, and each task i ∈ N is associated
with a weight s_i that represents the number of instructions to
be executed. An example of task graph is shown in Fig. 2. The
tasks in set N can be divided into two subsets NW and NnoW ,
which include w-tasks and non-w-tasks, respectively.
We have NW ∪NnoW = N, and NW ∩NnoW = ∅. For example,
tasks 1, 6, and 8 in Fig. 2 are w-tasks, which may represent
user input, picture capture, and result display, respectively,
that must be executed on wearable devices. The relationship
among tasks is represented by directed links in set A. For a
directed link (i, j) ∈ A, we say that task i is a predecessor of
task j and task j is a successor of task i. A task can execute
only when all its predecessors have finished. In addition, each
link (i, j) ∈ A is associated with a weight e_ij that represents
the amount of intermediate data from task i to j. We use P(j)
to denote the set of predecessors of task j. For example,
P(5) = {2, 3} in Fig. 2.
FIGURE 2. An example of task graph.
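As a concrete illustration, the task-graph model above can be sketched in a few lines of Python. The edge set and all weights below are hypothetical, chosen only to agree with the examples in the text (tasks 1, 6, and 8 are w-tasks, and P(5) = {2, 3}):

```python
# Sketch of the application model G_a = (N, A). Task weights s_i and edge
# data e_ij are placeholder values; the edges are hypothetical but
# consistent with the Fig. 2 examples described in the text.

tasks = {i: 100 for i in range(1, 9)}    # i -> s_i (instructions to execute)
w_tasks = {1, 6, 8}                      # must run on the wearable device
non_w_tasks = set(tasks) - w_tasks

# (i, j) -> e_ij: amount of intermediate data sent from task i to task j
edges = {(1, 2): 10, (1, 3): 10, (2, 4): 10, (2, 5): 10, (3, 5): 10,
         (4, 7): 10, (5, 6): 10, (5, 7): 10, (6, 8): 10, (7, 8): 10}

def predecessors(j):
    """P(j): the set of predecessors of task j in the task graph."""
    return {i for (i, k) in edges if k == j}

print(predecessors(5))  # -> {2, 3}, as in the example above
```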
IV. PROBLEM STATEMENT
Due to size and weight constraints, wearable devices are
usually equipped with low-end hardware and powered by
batteries with limited capacity. Therefore, they can run only
some simple applications with low computation requirement.
To support more advanced applications with improved energy
efficiency, we propose to offload some codes from wearable
devices to local mobile devices and cloud. In other words,
instead of executing all tasks in the task graph on the wearable
device, we assign some of them to nearby mobile devices with
more powerful hardware and more energy supply, or a remote
cloud without resource constraints.
In our system model, some tasks, e.g., display or sensing,
must be executed on the wearable device. To guarantee user
experience, we target a novel just-in-time objective, i.e.,
the duration between any two w-tasks should be within a
threshold δ. In some cases, this requirement is
too strict to generate a feasible solution. For example, there
are too many non-w-tasks between two w-tasks, such that the
duration between them cannot satisfy the threshold δ under
any scheduling. Therefore, we investigate a code offloading
problem for wearable devices with a relaxed objective, i.e.,
seeking a task scheduling to maximize the number of w-tasks
that can be executed within δ time after the previous w-task.
The problem is formally defined as follows.
Definition 1 [The JCOW (Just-in-time Code Offloading for
Wearable Computing) Problem]: given a network consisting
of a wearable device, several mobile devices, and a remote
cloud, and an application represented by a task graph,
we attempt to find a code offloading scheme that maximizes
the number of w-tasks, each of which starts within δ time after
its previous w-task.
Theorem 1: The JCOW problem is NP-hard.
Proof: It is easy to see that the JCOW problem is in
the class NP, as the objective function associated with a given
task scheduling can be evaluated in polynomial time.
The remaining proof is done by reducing the well-known
multiprocessor scheduling problem to the JCOW problem.
The multiprocessor scheduling problem can be formally
described as follows.
INSTANCE: Given a set T of n tasks and a set P of m
processors, where each task t ∈ T has a length lt.
QUESTION: Is there a task scheduling such that all tasks
can be finished within time δ?
We now describe the reduction from the multiprocessor
scheduling problem to an instance of the JCOW problem. First, we create
a wearable device and a remote cloud. For each processor
in P, we create a corresponding local device with the same
processing capability. We also create a task graph as shown
in Fig. 3, which consists of two w-tasks, i.e., i and j, and a set
T of non-w-tasks that can be executed in parallel.
In the following, we show that the multiprocessor schedul-
ing problem has a solution if and only if the resulting
instance of JCOW problem has a scheduling scheme that
satisfies the delay requirement. First, we suppose that there
exists a feasible scheduling of multiprocessor scheduling
problem such that all tasks can be finished before time δ.
It is straightforward to verify that the corresponding solution
in JCOW problem guarantees that the delay between two
FIGURE 3. An instance of task graph.
w-tasks is less than δ. Then, we suppose that the JCOW
problem has a feasible solution such that the delay between
w-tasks i and j is less than δ. We schedule the non-w-tasks
assigned to each local device to the corresponding processors,
which forms a solution of the multiprocessor scheduling
problem. Based on the above analysis, we conclude that the
decision form of the JCOW problem is NP-complete, and hence
the optimization form of the original problem is NP-hard.
To solve the JCOW problem, we need to deal with the
challenges of both task assignment and task scheduling. Task
assignment determines on which device each task should
be executed. Except for w-tasks, which must be executed only
on the wearable device, all tasks can be offloaded to
mobile devices and the cloud. Compared with the cloud, mobile
devices have limited processing capability, but they are
closer to the wearable device, leading to small latency
for data delivery among tasks. In addition to the tradeoff
between processing speed and transmission delay, the exis-
tence of multiple mobile devices further complicates task
assignment.
Task scheduling determines the execution sequence
of tasks assigned to the same device. For example, if
tasks 2, 3 and 5 in Fig. 2 are assigned to the same mobile
device, we have two possible execution sequences, i.e.,
{2, 3, 5} and {3, 2, 5}, with different performance. When
tasks are executed according to {2, 3, 5}, task 4 can quickly
get its input data after task 2, and run in parallel with
task 3 or 5 on the other device. Alternatively, if execution
sequence {3, 2, 5} is chosen, we can start w-task 6 earlier
while delaying the execution of task 4. Therefore, task
scheduling on all devices should be jointly considered to
achieve the optimal performance.
V. PROBLEM FORMULATION
In this section, we develop an optimization framework for the
JCOW problem by jointly considering both task assignment
and task scheduling. First, we define a binary variable x_ik for
task assignment as follows:
    x_ik = 1 if task i ∈ N is assigned to node k ∈ V, and
    x_ik = 0 otherwise.
Since each task can be assigned to one and only one node
in the network, we have the following constraint:
    ∑_{k∈V} x_ik = 1,  ∀i ∈ N.  (1)
For task scheduling, we first define a global scheduling
that determines the execution sequence when all tasks are
assigned to the same node. When tasks are assigned to multiple
devices, the tasks on the same device cannot violate the
execution sequence defined by the global scheduling. On the
other hand, we do not impose any sequence requirement
for tasks on different devices. For example, a global
sequence {1, 2, 3, 4, 5, 6, 7, 8} for the tasks in Fig. 2
generates three local scheduling lists {1, 6, 8}, {2, 3, 5}, and
{4, 7} when they are assigned to three devices. Only the local
scheduling must be obeyed, e.g., task 3 starts after task 2,
but task 5 can start before task 4, although it is after task 4 in
the global scheduling.
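The relationship between a global scheduling and its induced local schedulings can be sketched as follows; the device assignment is hypothetical but reproduces the example above:

```python
def local_schedulings(global_schedule, assignment):
    """Split one global scheduling list into per-device local scheduling lists."""
    locals_ = {}
    for task in global_schedule:
        locals_.setdefault(assignment[task], []).append(task)
    return locals_

# hypothetical assignment reproducing the example in the text
assignment = {1: "WD", 6: "WD", 8: "WD",     # w-tasks on the wearable device
              2: "MD1", 3: "MD1", 5: "MD1",  # first mobile device
              4: "MD2", 7: "MD2"}            # second mobile device

print(local_schedulings([1, 2, 3, 4, 5, 6, 7, 8], assignment))
# -> {'WD': [1, 6, 8], 'MD1': [2, 3, 5], 'MD2': [4, 7]}
```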
We define a binary variable u_ij to specify the global
scheduling as follows: u_ij = 1 if task j ∈ N is scheduled
immediately after task i ∈ N, and u_ij = 0 otherwise.
If we consider a virtual task n′ as both the origin
and termination of a circular scheduling, then any task in
N′ = {n′} ∪ N should have exactly one successor and one
predecessor. These can be described by the constraints:
    ∑_{j∈N′} u_ij = 1,  ∀i ∈ N′,  (2)
    ∑_{i∈N′} u_ij = 1,  ∀j ∈ N′.  (3)
Now we only need to consider the scheduling of tasks in N
by removing n′. To guarantee that the resulting scheduling is
acyclic, we define an integer variable a_i to denote that task i
is scheduled in the a_i-th place in the global scheduling. Then,
we have the following constraints for a_i:
    1 ≤ a_i ≤ n,  ∀i ∈ N,  (4)
    n·u_ij − n + 1 ≤ a_j − a_i ≤ n − 1 − (n − 2)·u_ij,  ∀i, j ∈ N.  (5)
Note that constraint (5) becomes a_j − a_i = 1 if task j is
scheduled immediately after task i, i.e., u_ij = 1; otherwise,
1 − n ≤ a_j − a_i ≤ n − 1 (i.e., |a_j − a_i| ≤ n − 1), which is
always satisfied because of (4).
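The effect of constraint (5) can be checked numerically; the helper below simply evaluates its two bounds on a_j − a_i:

```python
def position_gap_bounds(n, u_ij):
    """Bounds that constraint (5) places on a_j - a_i for given n and u_ij."""
    lower = n * u_ij - n + 1
    upper = n - 1 - (n - 2) * u_ij
    return lower, upper

n = 8
print(position_gap_bounds(n, 1))  # -> (1, 1): j must occupy the next position
print(position_gap_bounds(n, 0))  # -> (-7, 7): vacuous given 1 <= a_i <= n
```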
To simplify the calculation of task execution times later,
we define another binary variable y_ij for task scheduling:
y_ij = 1 if task j ∈ N is scheduled after task i ∈ N, and
y_ij = 0 otherwise.
We use an example to explain the difference between u_ij
and y_ij. In the global scheduling {1, 2, 3, 4, 5, 6, 7, 8}, task 3
is before task 5, so we have y_35 = 1, but u_35 = 0 because
task 5 is not the direct successor of task 3. The relationship
between y_ij and the position variables a_i and a_j can be
represented by:
    (a_j − a_i)/n ≤ y_ij ≤ a_j/a_i,  ∀i, j ∈ N.  (6)
If task i is scheduled before task j, i.e., a_i < a_j, the above
constraint leads to y_ij = 1 because 0 < (a_j − a_i)/n < 1 and
a_j/a_i > 1. Otherwise, i.e., a_i > a_j, we have y_ij = 0 because
(a_j − a_i)/n < 0 and 0 < a_j/a_i < 1.
Any task j ∈ N can start to execute when two conditions
are satisfied. First, the assigned device should be available,
which means no other task is currently executing on it. The
device available time T^a_j of task j should satisfy the
following constraint:
    T^a_j ≥ ∑_{k∈V} x_jk x_ik y_ij t^e_i,  ∀i ∈ P(j), ∀j ∈ N,  (7)
where t^e_i is the finish time of task i.
Second, task j should be ready, i.e., it has received data
from all its predecessors in the task graph. The task ready
time T^r_j can be calculated by:
    T^r_j = max_{i∈P(j)} [ t^e_i + ∑_{(k,l)∈E} e_ij x_ik x_jl / r_kl ],  ∀j ∈ N.  (8)
If task j and one of its predecessors i ∈ P(j) are assigned to
different devices k and l, respectively, we need to consider the
communication delay e_ij / r_kl. Otherwise, data delivery between
them can be implemented via shared memory without
communication delay; in this case, the second term on the
right-hand side of (8) is zero.
The execution start time t^s_j of task j is determined by:
    t^s_j = max{T^a_j, T^r_j},  ∀j ∈ N.  (9)
The relationship between t^s_j and t^e_j can be expressed as:
    t^e_j = t^s_j + ∑_{k∈V} x_jk s_j / c_k,  ∀j ∈ N.  (10)
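Equations (7)–(10) can be evaluated by walking the tasks in a topological global order, since the finish times of all predecessors are then known before a task is visited. The sketch below uses a hypothetical two-task instance:

```python
def schedule_times(order, assign, preds, s, e, c, r):
    """Start/finish times t^s, t^e per (7)-(10) for a fixed assignment and
    a global order that is topological for the task graph."""
    avail = {}                  # per-node time at which the node is next free
    ts, te = {}, {}
    for j in order:
        k = assign[j]
        ready = 0.0             # T^r_j from (8)
        for i in preds.get(j, ()):
            # transfer delay e_ij / r_kl applies only across different nodes
            delay = 0.0 if assign[i] == k else e[(i, j)] / r[(assign[i], k)]
            ready = max(ready, te[i] + delay)
        ts[j] = max(avail.get(k, 0.0), ready)     # (9), with T^a_j = avail
        te[j] = ts[j] + s[j] / c[k]               # (10)
        avail[k] = te[j]
    return ts, te

# hypothetical instance: task 1 on a slow node A, task 2 on a fast node B
s = {1: 100, 2: 100}; e = {(1, 2): 200}
c = {"A": 10, "B": 100}; r = {("A", "B"): 50}
ts, te = schedule_times([1, 2], {1: "A", 2: "B"}, {2: [1]}, s, e, c, r)
print(te[2])  # -> 15.0: 10 to run task 1, 4 to transfer, 1 to run task 2
```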
Finally, we define a binary variable z_ij to describe whether
w-task j starts within δ time from its previous w-task i:
z_ij = 1 if w-task j starts within δ time from the previous
w-task i, and z_ij = 0 otherwise.
We have the following constraints for z_ij:
    y_ij [δ − (t^s_j − t^s_i)] / T ≤ z_ij ≤ δ·y_ij / (t^s_j − t^s_i),  ∀i, j ∈ N_W,  (11)
where T is a large constant. By defining a binary variable w_j
to represent whether w-task j starts within δ time from any
previous w-task, the JCOW problem can be formulated as a
Algorithm 1 The Genetic Algorithm Framework
Input:
A task graph Ga, a network Gn, a threshold δ;
Output:
The scheduling of tasks on devices;
1: generate a set of feasible solutions as an initial population.
2: while number of generations is not exhausted do
3: for each population do
4: randomly select two chromosomes and apply the
crossover operation
5: randomly select one chromosome and apply the
mutation operation
6: end for
7: evaluate all chromosomes in the population and perform
selection
8: end while
mixed-integer nonlinear programming (MINLP) problem as
follows.
    JCOW:  max ∑_{j∈N_W} w_j
    subject to:  w_j ≤ ∑_{i∈N_W} z_ij,  ∀j ∈ N_W,  (12)
                 (1)–(11).
Note that the MINLP problem is in general NP-hard, and
standard mathematical solvers cannot handle it directly because
of the nonlinear constraints. Thus, we are motivated to design
a fast heuristic algorithm to approximate the optimal solution
in the next section.
VI. ALGORITHM
A. BASIC IDEA
In this section, we propose a fast algorithm based on the
genetic algorithm [22] to solve the JCOW problem. The basic idea is
to start with a population consisting of a set of feasible solu-
tions that are represented by chromosomes. Chromosomes
in one population are randomly selected to produce a new
population by crossover and mutation operations. The chro-
mosomes in the new population, which are also referred to as
offspring, are selected for survival according to their fitness
that is evaluated using our objective function. This heuristic
selection mimics the process of natural selection, i.e., the
more suitable chromosomes are, the more chances they have
to reproduce. This process is repeated until some stopping
condition, for example, the number of generations or the
improvement of the best solution, is satisfied. The pseudocode
of our proposed algorithm is shown in Algorithm 1.
To apply the genetic algorithm, we need to first define
chromosomes that represent feasible solutions of our
problem. A straightforward method is to define a variable as
a gene, such that a chromosome contains all variables to form
a feasible solution of the problem. However, by including all
kinds of variables, such as the ones for task assignment (x_ik)
and scheduling (u_ij, a_j, and y_ij) in a chromosome, it would
be difficult to guarantee its feasibility after crossover and
mutation operations.
Instead of including all variables in a chromosome,
we propose to create chromosomes for global schedul-
ing only, and leave the determination of other variables
to a simplified optimization framework. We still use the
task graph example in Fig. 2 to illustrate our chromosome
construction. As shown in Fig. 4(a), we create two chromo-
somes {1, 2, 3, 4, 5, 6, 7, 8} and {1, 3, 2, 6, 5, 7, 4, 8}, which
represent two possible global scheduling sequences. In the
following, we give the detailed design of crossover and
mutation operations on our defined chromosomes.
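Generating the initial population of feasible solutions (line 1 of Algorithm 1) amounts to sampling global scheduling lists that respect the task graph. One simple way, sketched here with a hypothetical edge list, is a randomized topological sort:

```python
import random

def random_topological_order(n_tasks, graph_edges, rng):
    """Random feasible global scheduling: Kahn's algorithm with random picks."""
    indeg = {t: 0 for t in range(1, n_tasks + 1)}
    for (_, j) in graph_edges:
        indeg[j] += 1
    order, ready = [], [t for t, d in indeg.items() if d == 0]
    while ready:
        t = ready.pop(rng.randrange(len(ready)))   # pick a random ready task
        order.append(t)
        for (i, j) in graph_edges:
            if i == t:
                indeg[j] -= 1
                if indeg[j] == 0:
                    ready.append(j)
    return order

rng = random.Random(1)
graph_edges = [(1, 2), (1, 3), (2, 4), (2, 5), (3, 5),
               (4, 7), (5, 6), (6, 8), (7, 8)]
population = [random_topological_order(8, graph_edges, rng) for _ in range(10)]
pos = {t: i for i, t in enumerate(population[0])}
assert all(pos[i] < pos[j] for (i, j) in graph_edges)  # precedence respected
```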
FIGURE 4. An example of crossover operation. (a) Standard
crossover. (b) Order crossover.
B. DETAILED DESIGN
1) CROSSOVER OPERATION
To conduct crossover operations in the standard genetic
algorithm, we randomly select a point as the crossover point,
and exchange the portions beyond the crossover point to generate
two new chromosomes. Unfortunately, this operation can lead
to an infeasible scheduling that violates the precedence
constraints imposed by our task graph. As the example in
Fig. 4(a) shows, the standard crossover operation generates
two children that are both infeasible: task 4 appears twice,
while task 6 does not appear at all in the generated chromosome
{1, 2, 3, 4, 5, 7, 4, 8}.
As standard crossover operations may violate the prece-
dence constraints, we adopt the order crossover opera-
tion [22], [23] that always generates valid scheduling lists
from two valid parent chromosomes. Specifically, given
any two parent chromosomes, we first randomly choose a
crossover point and pass the left segment from the first parent
to the child. Then, we construct the right fragment of the child
by taking the remaining parts of the first parent, but in the
order of the other parent. For example, we set the crossover
point in the middle of two chromosomes shown in Fig. 4(b).
Then, we create a child chromosome with {1, 2, 3, 4} as its
first 4 elements, and other tasks are scheduled according to
their order in parent 2, i.e., {6, 5, 7, 8}. The other child can be
constructed in a similar way.
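The order crossover described above can be sketched as follows; it reproduces the child {1, 2, 3, 4, 6, 5, 7, 8} from the Fig. 4(b) example:

```python
# Order crossover: keep the left segment of parent 1, then fill the right
# fragment with the remaining tasks in the order they appear in parent 2.

def order_crossover(parent1, parent2, point):
    left = parent1[:point]
    remaining = set(parent1) - set(left)
    right = [t for t in parent2 if t in remaining]
    return left + right

p1 = [1, 2, 3, 4, 5, 6, 7, 8]
p2 = [1, 3, 2, 6, 5, 7, 4, 8]
print(order_crossover(p1, p2, 4))  # -> [1, 2, 3, 4, 6, 5, 7, 8]
print(order_crossover(p2, p1, 4))  # -> [1, 3, 2, 6, 4, 5, 7, 8] (other child)
```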
2) MUTATION OPERATION
We conduct mutation operation by swapping two randomly
selected tasks in the global scheduling list. Note that such
a mutation operation may generate invalid scheduling. For
example, if we swap tasks 3 and 5 of the chromosome
in Fig. 5, the resulting scheduling {1, 2, 5, 4, 3, 6, 7, 8} is
invalid because task 5 cannot start before task 3 according to
the task graph. To guarantee the feasibility, we check every
chromosome after mutation and abandon invalid ones.
FIGURE 5. An example of mutation operation.
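The swap mutation and the feasibility check can be sketched as follows. The edge list is hypothetical, but it contains the edge 3 → 5 used in the Fig. 5 example:

```python
def is_valid(schedule, graph_edges):
    """True if the sequence respects every precedence edge i -> j."""
    pos = {t: idx for idx, t in enumerate(schedule)}
    return all(pos[i] < pos[j] for (i, j) in graph_edges)

def swap_mutation(schedule, a, b):
    """Swap the tasks at positions a and b of the scheduling list."""
    mutated = schedule[:]
    mutated[a], mutated[b] = mutated[b], mutated[a]
    return mutated

graph_edges = [(1, 2), (1, 3), (2, 4), (3, 5), (5, 6), (6, 7), (7, 8)]
chromosome = [1, 2, 3, 4, 5, 6, 7, 8]
mutated = swap_mutation(chromosome, 2, 4)  # swap tasks 3 and 5
print(mutated)                             # -> [1, 2, 5, 4, 3, 6, 7, 8]
print(is_valid(mutated, graph_edges))      # -> False: 5 now precedes 3
```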
3) FITNESS EVALUATION
The fitness of each chromosome is evaluated by the number
of w-tasks that satisfy the just-in-time requirement. Since
each chromosome determines a task scheduling sequence,
we only need to deal with task assignment now, which can
be obtained by solving a simplified optimization framework.
Given a chromosome, variables related with task scheduling,
i.e., u_ij, a_j, and y_ij, can be fixed, which significantly
reduces the complexity of solving the MINLP problem
formulated in the last section. We use û_ij, â_j, and ŷ_ij to
denote the fixed values of u_ij, a_j, and y_ij, respectively, and
the task assignment problem can be formulated as:
    max ∑_{j∈N_W} w_j,
    subject to:
        T^a_j ≥ ∑_{k∈V} x_jk x_ik ŷ_ij t^e_i,  ∀i ∈ P(j), ∀j ∈ N,  (13)
        ŷ_ij [δ − (t^s_j − t^s_i)] / T ≤ z_ij ≤ δ·ŷ_ij / (t^s_j − t^s_i),  ∀i, j ∈ N_W,  (14)
        (1), (8), (9), (10), and (12).
Although many variables and constraints are eliminated, the
above formulation is still difficult to solve because of the
nonlinear constraints (8) and (13). To linearize these constraints,
we define a new binary variable v^kl_ij as:
    v^kl_ij = x_ik x_jl,  ∀i, j ∈ N, ∀k, l ∈ V,  (15)
such that constraint (8) can be written in a linear form as:
    T^r_j ≥ t^e_i + ∑_{(k,l)∈E} e_ij v^kl_ij / r_kl,  ∀i ∈ P(j), ∀j ∈ N.  (16)
Constraint (15) can be equivalently replaced by the
following linear constraints:
    0 ≤ v^kl_ij ≤ x_ik,  ∀i, j ∈ N, ∀k, l ∈ V,  (17)
    x_ik + x_jl − 1 ≤ v^kl_ij ≤ x_jl,  ∀i, j ∈ N, ∀k, l ∈ V.  (18)
To linearize constraint (13), we define a new auxiliary
variable φ^k_ij as:
    φ^k_ij = v^kk_ij t^e_i,  ∀i ∈ P(j), ∀j ∈ N, ∀k ∈ V,  (19)
which can be equivalently replaced by:
    0 ≤ φ^k_ij ≤ t^e_i,  ∀i ∈ P(j), ∀j ∈ N, ∀k ∈ V,  (20)
    t^e_i − T(1 − v^kk_ij) ≤ φ^k_ij ≤ T·v^kk_ij,  ∀i ∈ P(j), ∀j ∈ N, ∀k ∈ V.  (21)
In a similar way, constraint (14) can be linearized by
introducing a new variable ψ_ij = z_ij (t^s_j − t^s_i), such that the
task assignment problem can be formulated as follows:
    max ∑_{j∈N_W} w_j,
    subject to:
        T^a_j ≥ ∑_{k∈V} φ^k_ij ŷ_ij,  ∀i ∈ P(j), ∀j ∈ N,  (22)
        ŷ_ij (δ − t^s_j + t^s_i) ≤ z_ij·T,  ∀i, j ∈ N_W,  (23)
        ψ_ij ≤ δ·ŷ_ij,  ∀i, j ∈ N_W,  (24)
        0 ≤ ψ_ij ≤ t^s_j − t^s_i,  ∀i, j ∈ N_W,  (25)
        t^s_j − t^s_i − T(1 − z_ij) ≤ ψ_ij ≤ T·z_ij,  ∀i, j ∈ N_W,  (26)
        (1), (9), (10), (16)–(18), (20), and (21).
Although the above formulation is a mixed-integer linear
program (MILP), which is generally NP-hard, it can be
solved quickly by advanced algorithms such as branch-and-bound
and mathematical tools such as CPLEX. Since we focus
on problem formulation and genetic-based algorithm design
in this paper, the discussion of solving the MILP problem is
omitted due to space limits.
VII. PERFORMANCE EVALUATION
In this section, we conduct extensive simulations to evaluate
the performance of our proposed algorithm. The simulation
settings will be first introduced, followed by simulation
results that demonstrate the advantages of our proposed
algorithm.
A. SIMULATION SETTINGS
We first describe a default simulation setting with a number
of parameters, and then study the performance by changing
one parameter while fixing others. We randomly generate
task graphs [21] with 15 tasks, whose node and link weights
are Gaussian distributed with mean 100 and variance 10.
Among these tasks, 40% are randomly selected as
w-tasks. We create random networks, each consisting of a
wearable device, 3 mobile devices, and the cloud. The link rate
relationship can be described as r_MM−MM = γ·r_MM−WD =
γ²·r_MM−RC, and the default value of γ is 50. For comparison,
we also consider three other schemes as follows.
Offloading nothing (OLN): all tasks are executed at the
wearable device.
Offloading all to cloud (OLAC): we offload all tasks except
w-tasks to the cloud.
Simple greedy offloading (SGO): starting from the first
task in the task graph, we greedily assign tasks one by one
to the network node that results in the earliest finish time.
Our proposed algorithm is denoted by OLGA in the
following. All simulation results are averaged over 30 random
instances.
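The instance generation described above can be sketched as follows. The single-parent random-DAG construction is a hypothetical stand-in for the task-graph generator of [21]:

```python
import random

def random_instance(n_tasks=15, w_fraction=0.4, seed=0):
    """Random task graph: Gaussian node/link weights (mean 100, variance 10),
    a random acyclic edge set, and a random subset of w-tasks."""
    rng = random.Random(seed)
    sigma = 10 ** 0.5                      # variance 10 -> std dev sqrt(10)
    s = {i: rng.gauss(100, sigma) for i in range(1, n_tasks + 1)}
    # each task (except the first) receives an edge from one earlier task,
    # which keeps the generated graph acyclic
    edges = {(rng.randint(1, j - 1), j): rng.gauss(100, sigma)
             for j in range(2, n_tasks + 1)}
    w_tasks = set(rng.sample(sorted(s), round(w_fraction * n_tasks)))
    return s, edges, w_tasks

s, edges, w_tasks = random_instance()
print(len(w_tasks), all(i < j for (i, j) in edges))  # -> 6 True
```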
B. SIMULATION RESULTS
We first investigate the influence of the number of tasks, and
show the percentage of w-tasks that satisfy the just-in-time
requirement in Fig. 6. As the number of tasks grows, the
performance of all algorithms decreases. For example, the
percentage of just-in-time tasks under OLGA is 94.3% when
the total number of tasks is 5. As the number of tasks
increases to 25, the corresponding percentage is 79.5%,
leading to an 18% degradation. Meanwhile, the performance of OLN,
OLAC, and SGO is always lower than OLGA, and their
performance degradation is more obvious. For example, OLN
has about 28% performance degradation as the number of
tasks increases from 5 to 25. OLN shows poor performance
because the processing capability of the wearable device is very
limited, and assigning all tasks to it seriously delays the
execution of w-tasks. Although OLAC improves on OLN, its
performance is still much lower than OLGA's because
FIGURE 6. Percentage of just-in-time w-task versus different
number of tasks.
when each w-task finishes, it delivers its results to the cloud,
which later returns data for the next w-task. The frequent
communication between the wearable device and the cloud still
incurs significant delay.
We then study the effect of the w-task portion by fixing the
total number of tasks to 15. As shown in Fig. 7, the performance
of all algorithms increases as the number of w-tasks grows.
For example, when 10% of the tasks in the task graph are w-tasks,
the percentage of just-in-time w-tasks is 18.6%, 49.8%,
71.2% and 81.4% under OLN, OLAC, SGO and OLGA,
respectively. As the portion of w-tasks grows to 50%, their
performance increases to 39.6%, 60%, 75.1% and 92.4%,
respectively. The reason is that when more w-tasks exist,
there are fewer other tasks between any two w-tasks, and the
just-in-time requirement can be satisfied more easily. Also, OLGA
always outperforms the other three algorithms because too
many tasks are assigned to the wearable device with low
processing speed in OLN, and frequent message exchange
happens between wearable device and cloud under OLAC.
FIGURE 7. Percentage of just-in-time w-task versus different
number of w-tasks.
The influence of mobile devices is investigated by
changing their number from 1 to 5. As shown in Fig. 8,
the performance of our proposed algorithm increases as the
number of mobile devices grows. For example, when there
is only one mobile device, the percentage of just-in-time
w-tasks is 25.5%. The performance increases to 87.5% as the
number of mobile devices grows to 5. Moreover, we observe
that the performance improvement becomes smaller as more mobile
devices join the network. For example, two devices bring
about 20% performance improvement compared with the
case with only one device. However, the performance gap
decreases to 6% when the number of mobile devices increases
from 4 to 5. There are two reasons for this phenomenon.
First, the computation capability of mobile devices has been
fully exploited by our algorithm as more devices are added
into the network. Second, the overhead of data exchange
among mobile devices will overwhelm the benefits of code
offloading when more mobile devices are involved.
FIGURE 8. Percentage of just-in-time w-task versus different
number of mobile devices.
FIGURE 9. Percentage of just-in-time w-task versus different
value of γ .
We study the influence of γ by changing its value from
10 to 200. Since OLN is not affected by γ , we only show
the performance of OLAC and OLGA in Fig. 9. As the
value of γ grows, the percentage of just-in-time w-tasks
increases under both algorithms. For example, when γ = 10,
there are 40% and 80% w-tasks that satisfy the just-in-time
requirement under OLAC and OLGA, respectively. As γ
grows to 200, the corresponding percentage increases to
84.8% and 96.7%, respectively. We also observe that the
performance gap between OLAC and OLGA decreases from
40% to 14% as γ grows from 10 to 200. That is because
the communication overhead becomes smaller under larger
values of γ.
FIGURE 10. Percentage of just-in-time w-task versus different
number of tasks.
FIGURE 11. Execution time versus different number of tasks.
Finally, we compare our proposed algorithm with tradi-
tional genetic algorithm (GA) that uses all binary variables as
genes. We apply the crossover operation by randomly select-
ing a crossover point for two chromosomes and exchang-
ing their portions after the point. The mutation operation
can be conducted by randomly mutating a binary variable.
If the generated chromosomes represent infeasible solutions,
we abandon them and repeat the above crossover and mutation
operations until we obtain feasible chromosomes. As shown
in Fig. 10, our proposed algorithm always outperforms
traditional GA. On the other hand, the execution time of GA
is significantly higher than OLGA because GA spends a large
portion of time to generate feasible chromosomes. As shown
in Fig. 11, when there are 10 tasks, GA needs more then
2 minutes to guarantee 86% just-in-time w-tasks, while
OLGA achieves the percentage of 91.3% within 5 seconds.
VIII. CONCLUSION
In this paper, we investigate just-in-time code offloading for
wearable computing. Instead of offloading all codes directly
to the remote cloud, we employ mobile devices nearby to
form a local mobile cloud with low communication delay
with the wearable device. In such a three-layer architecture,
we study the problem of task assignment and scheduling for a
given task graph with the just-in-time objective, i.e., the time
interval between any two consecutive w-tasks, which must be
executed on the wearable device, cannot exceed a threshold. This
problem is proved to be NP-hard, and an efficient code offloading
algorithm based on the genetic algorithm is proposed. Extensive
simulation results show that our proposal significantly
outperforms the other three offloading strategies.
ACKNOWLEDGMENT
The authors would like to thank members in Computer Net-
work Lab of the University of Aizu, especially Xin Fan,
Zhitao Deng, and Nariyoshi Chida, for discussions and
simulations on this paper.
REFERENCES
[1] GoogleGlass. [Online]. Available: https://www.google.com/glass/start/
[2] L. Jing, Y. Zhou, Z. Cheng, and T. Huang, ‘‘Magic ring: A finger-
worn device for multiple appliances control using static finger gestures,’’
Sensors, vol. 12, no. 5, pp. 5775–5790, 2012.
[3] G. Ngai, S. C. Chan, J. C. Y. Cheung, and W. W. Y. Lau, ‘‘Deploying
a wearable computing platform for computing education,’’ IEEE Trans.
Learn. Technol., vol. 3, no. 1, pp. 45–55, Jan./Mar. 2010.
[4] M.-H. Cho and C.-H. Lee, ‘‘A low-power real-time operating system for
ARC (actual remote control) wearable device,’’ IEEE Trans. Consum.
Electron., vol. 56, no. 3, pp. 1602–1609, Aug. 2010.
[5] C. Setz, B. Arnrich, J. Schumm, R. La Marca, G. Troster, and U. Ehlert,
‘‘Discriminating stress from cognitive load using a wearable EDA device,’’
IEEE Trans. Inf. Technol. Biomed., vol. 14, no. 2, pp. 410–417, Mar. 2010.
[6] A. Gruebler and K. Suzuki, ‘‘Design of a wearable device for reading
positive expressions from facial EMG signals,’’ IEEE Trans. Affective
Comput., vol. 5, no. 3, pp. 227–237, Jul./Sep. 2014.
[7] A. R. Khan, M. Othman, S. A. Madani, and S. U. Khan, ‘‘A survey of
mobile cloud computing application models,’’ IEEE Commun. Surveys
Tuts., vol. 16, no. 1, pp. 393–413, Feb. 2014.
[8] S. Abolfazli, Z. Sanaei, E. Ahmed, A. Gani, and R. Buyya, ‘‘Cloud-
based augmentation for mobile devices: Motivation, taxonomies, and open
challenges,’’ IEEE Commun. Surveys Tuts., vol. 16, no. 1, pp. 337–368,
Feb. 2014.
[9] R. Kaewpuang, D. Niyato, P. Wang, and E. Hossain, ‘‘A framework for
cooperative resource management in mobile cloud computing,’’ IEEE J.
Sel. Areas Commun., vol. 31, no. 12, pp. 2685–2700, Dec. 2013.
[10] M. R. Rahimi, N. Venkatasubramanian, and A. V. Vasilakos, ‘‘MuSIC:
Mobility-aware optimal service allocation in mobile cloud computing,’’ in
Proc. IEEE 6th Int. Conf. Cloud Comput., Jun. 2013, pp. 75–82.
[11] X. Luo, ‘‘From augmented reality to augmented computing: A look at
cloud-mobile convergence,’’ in Proc. Int. Symp. Ubiquitous Virtual Reality,
Jul. 2009, pp. 29–32.
[12] E. E. Marinelli, ‘‘Hyrax: Cloud computing on mobile devices using
MapReduce,’’ Carnegie Mellon Univ.: Pittsburgh, PA, USA, Tech.
Rep. CMU-CS-09-164, Sep. 2009.
82 VOLUME 3, NO. 1, MARCH 2015
Cheng et al.: Just-in-Time Code Offloading for Wearable Computing, IEEE Transactions on Emerging Topics in Computing
[13] J. Oberheide, K. Veeraraghavan, E. Cooke, J. Flinn, and F. Jahanian,
‘‘Virtualized in-cloud security services for mobile devices,’’ in Proc. 1st
Workshop Virtualization Mobile Comput., 2008, pp. 31–35.
[14] B.-G. Chun, S. Ihm, P. Maniatis, M. Naik, and A. Patti, ‘‘CloneCloud:
Elastic execution between mobile device and cloud,’’ in Proc. ACM 6th
Int. Conf. Comput. Syst., 2011, pp. 301–314.
[15] Z. Li, C. Wang, and R. Xu, ‘‘Task allocation for distributed multimedia
processing on wirelessly networked handheld devices,’’ in Proc. IEEE-IEE
Veh. Navigat. Inf. Syst. Conf., Oct. 1993, pp. 15–19.
[16] A. Rudenko, P. Reiher, G. J. Popek, and G. H. Kuenning, ‘‘Saving
portable computer battery power through remote process execution,’’ ACM
SIGMOBILE Mobile Comput. Commun. Rev., vol. 2, no. 1, pp. 19–26,
Jan. 1998.
[17] Z. Li, C. Wang, and R. Xu, ‘‘Computation offloading to save energy on
handheld devices: A partition scheme,’’ in Proc. Int. Conf. Compil., Archit.,
Synth. Embedded Syst., 2001, pp. 238–246.
[18] W. Jigang and S. Thambipillai, ‘‘A branch-and-bound algorithm for
hardware/software partitioning,’’ in Proc. 4th IEEE Int. Symp. Signal
Process. Inf. Technol., Dec. 2004, pp. 526–529.
[19] G. Chen, B.-T. Kang, M. Kandemir, N. Vijaykrishnan, M. J. Irwin, and
R. Chandramouli, ‘‘Studying energy trade offs in offloading
computation/compilation in Java-enabled mobile devices,’’ IEEE Trans.
Parallel Distrib. Syst., vol. 15, no. 9, pp. 795–809, Sep. 2004.
[20] C. Wang and Z. Li, ‘‘A computation offloading scheme on handheld
devices,’’ J. Parallel Distrib. Comput., vol. 64, no. 6, pp. 740–746,
Jun. 2004.
[21] L. Yang, J. Cao, S. Tang, T. Li, and A. T. S. Chan, ‘‘A framework for
partitioning and execution of data stream applications in mobile cloud
computing,’’ in Proc. IEEE 5th Int. Conf. Cloud Comput., Jun. 2012,
pp. 794–802.
[22] D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine
Learning, 1st ed. Boston, MA, USA: Addison-Wesley, 1989.
[23] K. Shahookar and P. Mazumder, ‘‘A genetic approach to standard cell
placement using meta-genetic parameter optimization,’’ IEEE Trans.
Comput.-Aided Design Integr. Circuits Syst., vol. 9, no. 5, pp. 500–511,
May 1990.
ZIXUE CHENG (M’95) received the master’s and doctoral degrees in engineering from Tohoku University, Sendai, Japan, in 1990 and 1993, respectively. He joined the University of Aizu, Aizuwakamatsu, Japan, in 1993, as an Assistant Professor, became an Associate Professor in 1999, and has been a Full Professor since 2002. His interests are the design and implementation of protocols, distributed algorithms, distance education, ubiquitous computing, ubiquitous learning, embedded systems, functional safety, and the Internet of Things. He served as the Director of the University Business Innovation Center from 2006 to 2010 and as the Head of the Division of Computer Engineering from 2010 to 2014, and has been the Vice President of the University of Aizu since 2014. He is a member of the Association for Computing Machinery, the Institute of Electronics, Information and Communication Engineers, and the Information Processing Society of Japan.
PENG LI (M’11) received the B.S. degree from the Huazhong University of Science and Technology, Wuhan, China, in 2007, and the M.S. and Ph.D. degrees from the University of Aizu, Aizuwakamatsu, Japan, in 2009 and 2012, respectively, where he is currently an Associate Professor. His research interests include network modeling, cross-layer optimization, network coding, cooperative communications, cloud computing, smart grid, and the performance evaluation of wireless and mobile networks for reliable, energy-efficient, and cost-effective communications.
JUNBO WANG received the B.E. degree in electrical engineering and automation and the M.E. degree in electric circuits and systems from Yanshan University, Qinhuangdao, China, in 2004 and 2007, respectively, and the Ph.D. degree in computer science from the University of Aizu, Aizuwakamatsu, Japan, in 2011, where he is currently an Associate Professor. His current research interests include the Internet of Things, ubiquitous computing, context/situation awareness, and wireless sensor networks.
SONG GUO (M’02–SM’11) received the Ph.D. degree in computer science from the University of Ottawa, Ottawa, ON, Canada, in 2005. He is currently a Full Professor at the School of Computer Science and Engineering, University of Aizu, Aizuwakamatsu, Japan. His research interests are mainly in the areas of protocol design and performance analysis for wireless networks and distributed systems. He has authored over 250 papers in refereed journals and conferences in these areas and has received three IEEE/ACM best paper awards. Dr. Guo currently serves as an Associate Editor of the IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, an Associate Editor of the IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTING with duties on emerging paradigms in computational communication systems, and on the editorial boards of many others. He has also served on the Organizing and Technical Committees of numerous international conferences. He is a Senior Member of the ACM.