This document provides a survey of cyber foraging systems and discusses open issues and research challenges. It summarizes key cyber foraging applications like LOCUSTS, Slingshot, and Puppeteer that allow resource-deficient mobile devices to offload tasks to stronger surrogate machines. The document outlines the general process of cyber foraging including surrogate discovery, task distribution, migration, and remote execution control. It also discusses design challenges like efficient task distribution, migration, and security concerns. Finally, it provides more detail on the LOCUSTS framework, describing its architecture, approach to task distribution, and ability to migrate tasks between surrogates.
Internet of things-based photovoltaics parameter monitoring system using Node... (IJECEIAES)
The use of the internet of things (IoT) in solar photovoltaic (PV) systems is a critical feature for remote monitoring, supervision, and performance evaluation. It also improves the long-term viability, consistency, efficiency, and maintenance of energy production. However, the PV monitoring systems proposed by previous researchers are relatively complex and expensive, and the existing systems keep no backup data, so the acquired data can be lost if the network connection fails. This paper presents a simple, low-cost IoT-based PV parameter monitoring system with backup data stored on a microSD card. A NodeMCU ESP8266 development board is chosen as the main controller because it is a system-on-chip (SoC) microcontroller with integrated Wi-Fi and low-power support, all in one chip, which reduces the cost of the proposed system. The solar irradiance, ambient temperature, PV output voltage, and PV output current are measured with photodiodes, a DHT22, impedance dividers, and an ACS712, while the PV output power is computed as the product of the PV voltage and PV current. ThingSpeak, an open-source platform, is used as the cloud database and as a data monitoring tool in the form of interactive graphics. The results showed that the system is highly accurate, reliable, simple to use, and low-cost.
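The measurement pipeline the abstract describes reduces to two small steps: derive power as the product of measured voltage and current, and fall back to local storage when the cloud is unreachable. The sketch below illustrates that logic; all function and field names are assumptions for illustration, not from the paper or the ThingSpeak API.

```python
# Sketch of the described pipeline: PV power is the product of measured
# voltage and current; each reading goes to the cloud when the network is
# up, and to a local backup log (standing in for the microSD card) otherwise.

def pv_power(voltage_v: float, current_a: float) -> float:
    """PV output power (W) as the product of PV voltage and PV current."""
    return voltage_v * current_a

def log_reading(reading: dict, cloud_ok: bool, backup: list) -> str:
    """Send to the cloud when connected; otherwise keep a backup copy."""
    if cloud_ok:
        return "sent-to-cloud"
    backup.append(reading)          # microSD-style fallback so data is not lost
    return "stored-locally"

backup_log = []
reading = {"irradiance_wm2": 850.0, "temp_c": 31.2,
           "voltage_v": 17.5, "current_a": 2.4}
reading["power_w"] = pv_power(reading["voltage_v"], reading["current_a"])
status = log_reading(reading, cloud_ok=False, backup=backup_log)
```

The backup list is what guards against the data-loss scenario the abstract criticises in earlier systems.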
An Efficient and Optimal Systems For Medical & Industries With Concept of IoT (IJERA Editor)
It is interesting to start in a philosophical manner rather than with the aim of the paper: humankind wants an ever more simplified daily life. People want simplification in cooking, travel, education, fashion design, dressing, information availability, communication from one place to another, and so on; in other words, people want an intelligent, automated life in which artificial things interact with humans and serve their needs. For example, someone may have lost something somewhere but forgotten where. The difficulty is how to find the thing; RFID technology is one solution, but it works only within a certain range. We therefore need an autonomous mechanism that can trace the location of, or information about, what was lost; in simple terms, a common platform that integrates the entire world of things. For now, that common platform is the internet, and hence we speak of the Internet of Things (IoT). That is what this paper deals with: the principal objective is to monitor the parameters of things from anywhere in the world. The paper does not aim to integrate the entire world of things at once but is dedicated to two fields, medicine and small and medium enterprises (SMEs). In the medical field, it monitors the patient through a camera interface and the patient's movements through a WSN interface; for SMEs, it monitors temperature, water level, and machine motion through a WSN, with visual monitoring through a camera interface. The system is built with a fast, low-cost Raspberry Pi controller.
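The SME side of the system boils down to comparing WSN readings against configured limits and alerting on violations. The sketch below shows that threshold logic under stated assumptions; the sensor names and limits are hypothetical, not values from the paper.

```python
# Illustrative threshold-alert logic for the SME monitoring described above.
# Sensor names and limit values are assumptions for the example only.

THRESHOLDS = {"temp_c": 45.0, "water_level_pct": 20.0}

def check_reading(sensor: str, value: float) -> bool:
    """Return True when a reading violates its configured threshold."""
    limit = THRESHOLDS[sensor]
    if sensor == "water_level_pct":
        return value < limit        # alert when the water level runs low
    return value > limit            # alert when the temperature runs high

# One polling cycle: collect the sensors that currently need an alert.
alerts = [s for s, v in [("temp_c", 51.0), ("water_level_pct", 35.0)]
          if check_reading(s, v)]
```

On the real device this check would run per WSN packet, with the camera feed handled separately.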
The increased usage of mobile devices has exposed their scarcity of resources: memory, processing speed and, above all, the energy needed to carry out a task. Offloading addresses this for mobile devices: a task or computation that would demand substantial resources on an Android system is shifted to a resourceful server (for example, the cloud), and the finished results are fetched back. The decision of whether to offload a computation depends on the energy the device would spend running the application locally versus the energy it spends uploading the task to the cloud and retrieving the result. Since energy is the main constraint in this decision, the Cuckoo framework acts as an interface between the cloud and the Android environment to support task offloading. The framework thus bridges the gap between smartphones and the cloud environment so that computation-intensive tasks can be performed with less energy consumed. In this work, two applications, eyeDentify and PhotoShoot, are used to measure the energy consumed in the cloud and on the smartphones.
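The offloading decision the abstract describes is a direct energy comparison: offload only when transferring the task and its result costs less energy than running it locally. The sketch below captures that rule; the energy figures are illustrative, and the function is not part of Cuckoo's actual API.

```python
# Hedged sketch of the energy-based offloading decision: offload a task only
# when upload + download energy is lower than local execution energy.

def should_offload(e_local_j: float, e_upload_j: float, e_download_j: float) -> bool:
    """True when remote execution costs the device less energy than local."""
    return e_upload_j + e_download_j < e_local_j

# A heavy task (8 J locally) against a cheap transfer (1.5 J up + 0.5 J down):
decision = should_offload(e_local_j=8.0, e_upload_j=1.5, e_download_j=0.5)
```

A real framework would also weigh latency and availability, but the energy inequality above is the core of the decision described here.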
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars, and Students of related fields of Engineering and Technology.
CIRCUIT BREAK CONNECT MONITORING TO 5G MOBILE APPLICATION (ijcsit)
With the continuous improvement of complex electronic devices, the safety of technicians has become a matter of serious concern: technicians' lives are in jeopardy when they work after shutting down circuit breakers, because even once a breaker has been switched off, someone can inadvertently flip it back on while a technician is still working. A system is needed to guarantee technicians' safety. In addition, people dislike constantly switching appliances such as fans, lighting, and air conditioners on and off, and an appliance left running unnecessarily wastes energy. To address these issues, we propose a system of mobile app-controlled circuit breakers that provides wireless management of home appliances through an Android app. It replaces a traditional breaker with a mobile app-controlled on/off system in which no one can activate the breaker without the password. Remote control of home appliances helps the user save electricity and enhances quality of life and comfort. Additionally, the system includes a home security mechanism against intrusion, using a mobile app-controlled door lock, and a mechanism for detecting dangerous gas leaks. The system is built around an ESP32 microcontroller, a Bluetooth module, a 4x4 matrix keypad, and a gas-leak detector, all connected to an Android mobile application. The entire system is compact.
A virtual analysis on various techniques using ANN with data mining (eSAT Journals)
Abstract: This paper first discusses monitoring video quality in a network, proposing a tool called VQMT (Video Quality Measurement Tool) for automatic assessment of video quality and comparing it against the MOS (mean opinion score). Second, it covers ReGIMviZ, a tool for video data visualization and personalization based on semantic classification that also uses fuzzy logic. Finally, it focuses on SOFAIT (SIFT and Optical Flow Affine Image Transform), a technique for face registration in video that improves action-unit detection, and its various algorithms. In every system, the common element is an ANN architecture based on a supervised learning algorithm.
A survey on context aware systems & intelligent middlewares (IOSR Journals)
Abstract: The context-aware (or sentient) system is one of the most profound concepts in ubiquitous computing. In cloud or distributed computing, building a context-aware system is a difficult task, and the programmer should use a generic programming framework. On the basis of a layered conceptual design, we introduce context-aware systems together with context-aware middlewares, and on the basis of the presented system we analyze different approaches to context-aware computing. A distributed system has many components, and these components must interact with each other because many applications require it. Plenty of context middlewares have been built, but they offer only partial solutions. In this paper we analyze different middlewares and their comprehensive application to context caching.
Keywords: context-aware system, context-aware middleware, context cache
Efficient Data Aggregation in Wireless Sensor Networks (IJAEMS Journal)
Sensor network is a term used to refer to a heterogeneous system combining tiny sensors and actuators with general- and special-purpose processors. Sensor networks are expected to grow to include hundreds or thousands of low-power, low-cost, static or mobile nodes. This work starts from the observation that in any densely deployed sensor network there is high redundancy in the information gathered by sensor nodes that are close to each other; we exploit this redundancy and design schemes that secure different kinds of aggregation processing against both inside and outside attacks.
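The redundancy argument above can be made concrete: when nearby nodes report near-duplicate values, collapsing each neighbourhood to a robust summary both compresses the data and blunts a single lying node. The sketch below is a minimal illustration under the assumption that "nearby" nodes share a cluster id; it is not the paper's actual scheme.

```python
# Redundancy-exploiting aggregation sketch: each cluster of nearby nodes is
# collapsed to its median before forwarding upstream. The median also limits
# the influence of one compromised (inside-attacker) node in a cluster.

from statistics import median
from collections import defaultdict

def aggregate(readings):
    """readings: iterable of (cluster_id, value) -> {cluster_id: median}."""
    groups = defaultdict(list)
    for cluster_id, value in readings:
        groups[cluster_id].append(value)
    return {cid: median(vals) for cid, vals in groups.items()}

summary = aggregate([("A", 21.0), ("A", 21.2), ("A", 35.0),   # 35.0: outlier
                     ("B", 19.8), ("B", 20.0)])
```

Note how cluster A's median ignores the 35.0 outlier that a simple mean would absorb.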
Real-time monitoring of the prototype design of electric system by the ubido... (IJECEIAES)
In this paper, a prototype DC electric system was practically designed. The idea of the proposed system is derived from the microgrid concept. The system contains two houses, each with a DC generator and a load consisting of four 12 V DC lamps. Each house is fully controlled by an Arduino UNO microcontroller to work in island mode or connected to the second house or to the main electric network. A house's operating mode depends on the power generated by its source and the availability of the main network; under all operating cases, the price of electricity consumption should be kept as low as possible. Information about the operating modes and the state of the main network is exchanged between the houses wirelessly with the help of the RF HC-12 module and uploaded to the Ubidots platform through the Wi-Fi ESP8266 included in the NodeMCU microcontroller. This platform offers capture, visualization, analysis, and management of data. The system was examined under different cases, varying the load in each building, to verify its operation. All tested states showed that the houses transfer from one mode to another automatically, with high reliability and minimum energy cost. Information about the main grid states and the houses' sources was monitored and stored on the Ubidots platform.
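The mode-switching behaviour described above reduces to a priority rule per house: stay islanded while local generation covers the load, otherwise cover the deficit from the neighbour house or, failing that, the main grid. The sketch below states that rule; the function and mode names are assumptions for illustration, not the paper's firmware.

```python
# Illustrative controller decision for one house in the two-house prototype.
# Mode names and the cost-ordering (neighbour before grid) are assumptions.

def select_mode(gen_w: float, load_w: float,
                neighbour_surplus_w: float, grid_up: bool) -> str:
    if gen_w >= load_w:
        return "island"               # local source covers the load
    deficit = load_w - gen_w
    if neighbour_surplus_w >= deficit:
        return "house-to-house"       # neighbour's surplus covers the deficit
    if grid_up:
        return "grid-connected"       # fall back to the main network
    return "shed-load"                # no source can cover the deficit

mode = select_mode(gen_w=30.0, load_w=48.0, neighbour_surplus_w=20.0, grid_up=True)
```

Preferring the neighbour house before the grid is one way to realise the minimum-cost objective the abstract mentions.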
Many sources are available on every side of the human environment, genuinely provided to us by nature. Using an available source again and again makes it a "re-source" for us, and interaction among multiple resources extends how long they remain available to the environment. The involvement of both natural and artificial intelligence creates an interaction environment that works every time and everywhere. Quality work, bounded in time for impactful effect, is known as "smart work", and a theme and framework are required to create an environment for it. In this article we propose a theorem for theme declaration, based on a suggested lemma, with the theme declaration measured in probabilistic terms. To perform work in a smart fashion we require an appropriate proportion of both kinds of intelligence (natural and artificial) and their interaction. We also cite three varied examples discussing the relevance of intelligent interaction for smart work.
Wireless sensor networks play a vital role in today's life. A WSN is a collection of sensors scattered in different directions, used to control and measure the physical conditions of an environment and to organize the data at a central location. In the context of a greenhouse, we can measure various parameters such as temperature, humidity, water level, insect activity, and light intensity.
Evaluation of network intrusion detection using markov chain (IJCI JOURNAL)
Internet threats in daily life have increased significantly, so models must be developed to maintain system security. Among the most effective techniques is the Intrusion Detection System (IDS), whose purpose is to detect intrusions through security devices and deal with them. In this paper, a mathematical approach is used to predict and detect intrusion in the network. We discuss a two-algorithm method, "K-Means + Apriori", which classifies normal and abnormal activities in a computer network. The K-Means stage partitions the training set into K clusters using Euclidean distance and introduces an outlier factor; the Apriori stage then prunes the data by removing infrequent items from the database. Based on the defined states, the degree of incoming data is evaluated experimentally using a sample of the DARPA 2000 dataset, achieving high detection performance across the stages of an attack.
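The two stages above can be sketched compactly: nearest-centroid assignment by Euclidean distance with an outlier flag, followed by an Apriori-style first pass that drops items below a minimum support. This is a toy illustration of the idea, not the paper's algorithm; the radius-based outlier test and the thresholds are assumptions.

```python
# Stage 1: assign a record to the nearest of K centroids (Euclidean distance)
# and flag it as an outlier when it is far from every centroid.
# Stage 2: Apriori-style support pruning of infrequent items.

import math

def assign(record, centroids, radius=2.0):
    """Return (cluster_index, is_outlier) for a numeric feature vector."""
    dists = [math.dist(record, c) for c in centroids]
    best = min(range(len(dists)), key=dists.__getitem__)
    return best, dists[best] > radius      # far from all centroids -> outlier

def prune_infrequent(transactions, min_support):
    """Apriori first pass: keep only items meeting the support threshold."""
    counts = {}
    for t in transactions:
        for item in set(t):
            counts[item] = counts.get(item, 0) + 1
    return {i for i, c in counts.items() if c >= min_support}

cluster, outlier = assign((9.0, 9.0), centroids=[(0.0, 0.0), (1.0, 1.0)])
frequent = prune_infrequent([["tcp", "http"], ["tcp", "ssh"], ["tcp"]],
                            min_support=2)
```

In an IDS setting, records flagged as outliers and patterns surviving the support filter are the candidates examined for attack stages.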
International Journal of Ubiquitous Computing (IJU) is a quarterly open-access, peer-reviewed journal that provides an excellent international forum for sharing knowledge and results in the theory, methodology, and applications of ubiquitous computing. The current information age is witnessing a dramatic use of digital and electronic devices in the workplace and beyond. Ubiquitous computing presents rather arduous requirements of robustness, reliability, and availability to the end user, and it has received significant and sustained research interest in designing and deploying large-scale, high-performance computational applications in real life. The aim of the journal is to provide a platform for researchers and practitioners from both academia and industry to meet and share cutting-edge developments in the field.
A Survey on Features Combination for Image Watermarking (Editor IJMTER)
As internet users increase day by day, transferring digital data becomes easy, and with this the problem of data piracy grows. Different watermarking methods have therefore been developed to protect digital data such as video, audio, and images; many researchers have worked in the image watermarking field over the last few decades. This paper focuses on combinations of image watermarking features with various techniques, which are broadly categorized into the spatial and frequency domains. Many features are studied, along with their different requirements and functionality. It has been observed that most researchers combine several features to achieve the primary goal of the watermark: to embed the watermark in, and extract it from, the carrier image in the presence of different attacks.
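The embed/extract cycle in the spatial domain can be illustrated with the simplest such technique, least-significant-bit (LSB) substitution: each watermark bit overwrites a pixel's lowest bit, changing its value by at most 1. This is a toy example of one spatial-domain method the survey categorises, not a robust scheme from the paper.

```python
# Toy spatial-domain watermarking: write one watermark bit into the
# least-significant bit (LSB) of each 8-bit pixel, then read it back.

def embed_lsb(pixels, bits):
    """Replace each pixel's LSB with the corresponding watermark bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_lsb(pixels):
    """Recover the watermark bits from the pixels' LSBs."""
    return [p & 1 for p in pixels]

cover = [200, 13, 96, 255]
mark = [1, 0, 1, 1]
stego = embed_lsb(cover, mark)      # each pixel shifts by at most 1
recovered = extract_lsb(stego)
```

LSB embedding is imperceptible but fragile under attacks such as compression, which is exactly why the surveyed works combine features and move to the frequency domain.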
The common challenges of mobile internet for upcoming generation (eSAT Journals)
Abstract: In this survey we concentrate on the mobile internet, focusing on two cases: a fixed connection provided by the telecommunication network provider, and a wireless network reached through an internet access point, which may be a home network, an education campus, etc. In the latter case we also discuss the network-layer and transport-layer protocols.
Many mobile devices suffer from limited computational resources (memory and processors), limited network connection and bandwidth, and limited battery life. Mobile agents are a promising technology for minimizing these problems; however, most mobile agent systems are very resource-demanding for clients and servers. This paper describes an approach to running mobile agents on devices ranging from mobile phones and Personal Digital Assistants (PDAs) to powerful PCs. It proposes a simple mobile agent architecture and middleware that make a mobile agent system accessible on different devices. In this architecture, clients state their abilities, and depending on those abilities a client runs either the full mobile agent or only a light-weight version of it. The mobile agents are basically the same on all clients, but the agent's code is removed for small devices, which means that on mobile devices with minimal resources only the agent's data can be changed. The code of such an agent is stored at the server; when the agent returns to the server, the two parts are joined and the agent is ready to be executed. The joined mobile agent can then migrate to other agent servers and clients. A middleware is also proposed that establishes communication between heterogeneous devices.
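The split-and-rejoin scheme above can be sketched in a few lines: a full agent carries code plus data, a resource-poor client receives and mutates only the data, and the server re-attaches its stored code when the agent returns. All class and function names below are assumptions for illustration, not the paper's middleware.

```python
# Sketch of the split-agent architecture: code stays on the server, only the
# mutable data travels to small devices, and the two are rejoined on return.

class AgentCode:
    """Stand-in for the agent's behaviour, kept at the server."""
    def run(self, data):
        return {"visited": data["visited"] + 1}

class FullAgent:
    def __init__(self, code, data):
        self.code, self.data = code, data

def strip_for_small_device(agent):
    """Send only the mutable data to a resource-poor client (PDA/phone)."""
    return dict(agent.data)

def rejoin(server_code, returned_data):
    """Server re-attaches the stored code to the data that came back."""
    return FullAgent(server_code, returned_data)

server_code = AgentCode()
agent = FullAgent(server_code, {"visited": 0})
light = strip_for_small_device(agent)       # on the device: data only
light["visited"] += 1                       # small clients may change data only
restored = rejoin(server_code, light)       # back at the server
result = restored.code.run(restored.data)   # rejoined agent is executable again
```

Keeping the code server-side is what lets the same agent serve both powerful PCs (full agent) and minimal devices (data only).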
A virtual analysis on various techniques using ann with data miningeSAT Journals
Abstract In this paper, Firstly we discussed on monitoring the quality of video in network and proposing a tool called “VQMT” (Video Quality Measurement Tool) for automatic assessment of video quality and comparing it with MOS (mean opinion score). Secondly; author had proposed a tool called “ReGIMviZ” for video data visualization and personalization system based on semantic classification also used fuzzy logic. And lastly we focus on “SOFAIT” (SIFT and Optical flow affine image Transform) technique for face registration in video to improve action unit and its various algorithms. Here, in every system the common area is ANN architecture based on supervised learning algorithm.
A survey on context aware system & intelligent Middleware’sIOSR Journals
Abstract: Context aware system or Sentient system is the most profound concept in the ubiquitous computing.
In the cloud system or in distributed computing building a context aware system is difficult task and
programmer should use more generic programming framework. On the basis of layered conceptual design, we
introduce Context aware systems with Context aware middleware’s. On the basis of presented system we will
analyze different approaches of context aware computing. There are many components in the distributed system
and these components should interact with each other because it is the need of many applications. Plenty
Context middleware’s have been made but they are giving partial solutions. In this paper we are giving analysis
of different middleware’s and comprehensive application of it in context caching.
Keywords: Context aware system, Context aware Middleware’s, Context Cache
Efficient Data Aggregation in Wireless Sensor NetworksIJAEMSJORNAL
Sensor network is a term used to refer to a heterogeneous system combining tiny sensors and actuators with general/special-purpose processors. Sensor networks are assumed to grow in size to include hundreds or thousands of low-power, low-cost, static or mobile nodes. This system is created by observing that for any densely deployed sensor network, high redundancy exists in the gathered information from the sensor nodes that are close to each other we have exploited the redundancy and designed schemes to secure different kinds of aggregation processing against both inside and outside attacks.
Real-time monitoring of the prototype design of electric system by the ubido...IJECEIAES
In this paper, a prototype DC electric system was practically designed. The idea of the proposed system was derived from the microgrid concept. The system contained two houses each have a DC generator and load that consists of four 12 V DC lamps. Each house is controlled fully by Arduino UNO microcontroller to work in Island mode or connected it with the second house or main electric network. House operating mode depends on the power generated by its source and the availability of the main network. Under all operating cases, the minimum price of electricity consumption should satisfy as possible. Information between the houses about the operating mode and the main network state was exchanging wirelessly with the help of the RFHC12. This information uploaded to the Ubidots platform by the Wi-FiESP8266 included in the node MCU microcontroller. This platform has several advantages such as capture, visualization, analysis, and management of data. The system was examined for different cases to verify its working by varying the load in each building. All tested states showed that the houses transfer from one mode to another automatically with high reliability and minimum energy cost. The information about the main grid states and the sources of the houses were monitored and stored at the Ubidots platform.
A lot of sources are available to every side of human environment. These sources are genuinely provided by
nature to us. The utilization of these available sources again and again results in “re-source” for us. The
interaction among multiple resources increases the level of availability for maximum possible duration to
the environment. The involvement of both (natural & artificial) intelligence create interaction environment
which works every time and everywhere. The quality work done bounded with time for impactful effect is
known as “smart work”. The theme and framework are required for creating an environment for smart
work. In this article we propose a theorem for theme declaration which is based on suggested lemma. The
theme declaration is measured in probabilistic terms. To perform work in smart fashion we require the
appropriate proportion of both intelligence (natural and human) and their interaction. We have also cited
three varied examples discussing the relevance of intelligent interaction for smart work.
Wireless sensor network plays vital role in today’s life, it is a collection of sensors that are scattered in different directions which are further used to control and measure the physical conditions of environment as well as to organize to the data somewhere at centre location. As in context of greenhouse we can measure various parameters such as temperature, humidity, water level, insect monitoring and light intensity.
Evaluation of network intrusion detection using markov chainIJCI JOURNAL
Day today life internet threat has been increased significantly. There is a need to develop model in order to
maintain security of system. The most effective techniques are Intrusion Detection System (IDS).The
purpose of intrusion system through the security devices detect and deal with it. In this paper, a
mathematical approach is used effectively to predict and detect intrusion in the network. Here we discuss
about two algorithms ‘K-Means + Apriori’, a method which classify normal and abnormal activities in
computer network. In K-Means process, it partitions the training set into K-clusters using Euclidean
distance and introduce an outlier factor, then it build Apriori Algorithm to prune the data by removing
infrequent data in the database. Based on defined state the degree of incoming data is evaluated through
the experiment using sample DARPA2000 dataset, and achieves high detection performance in level of
attack in stages.
International Journal of Ubiquitous Computing (IJU) is a quarterly open access peer-reviewed journal provides excellent international forum for sharing knowledge and results in theory, methodology and applications of ubiquitous computing. Current information age is witnessing a dramatic use of digital and electronic devices in the workplace and beyond. Ubiquitous Computing presents a rather arduous requirement of robustness, reliability and availability to the end user. Ubiquitous computing has received a significant and sustained research interest in terms of designing and deploying large scale and high performance computational applications in real life. The aim of the journal is to provide a platform to the researchers and practitioners from both academia as well as industry to meet and share cutting-edge development in the field.
A Survey on Features Combination for Image Watermarking (Editor IJMTER)
As internet users increase day by day, it becomes easy to transfer digital data, and with this the problem of data piracy grows. Different watermarking methods have therefore been developed to protect digital data such as video, audio and images, and many researchers have worked in the image watermarking field over the last few decades. This paper focuses on combining image watermarking features with various techniques, which are broadly categorized into the spatial and frequency domains. Many features are studied along with their different requirements and functionality. It has been observed that most researchers combine several features to achieve the primary goal of watermarking: to embed a watermark in, and extract it from, the carrier image in the presence of different attacks.
The Common Challenges of Mobile Internet for the Upcoming Generation (eSAT Publishing House)
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
The Common Challenges of Mobile Internet for the Upcoming Generation (eSAT Journals)
Abstract: In this survey we concentrate on the mobile internet. Our main focus is on mobile internet in two different cases: a fixed connection provided by a telecommunication network provider, and a wireless connection obtained from an internet access point such as a home network or an education campus. In the latter case we also discuss network-layer and transport-layer protocols.
Many mobile devices suffer from limited computational resources (memory and processors), limited network connectivity and bandwidth, and limited battery life. Mobile agents are a promising technology for minimizing these problems; however, most mobile agent systems are very resource-demanding for both clients and servers. This research paper describes an approach to running mobile agents on devices ranging from mobile phones and Personal Digital Assistants (PDAs) to powerful PCs. It proposes a simple mobile agent architecture and middleware that make it possible to access a mobile agent system from different devices. In this architecture, clients state their capabilities, and depending on those capabilities a client either runs the full mobile agent on the device or only a lightweight version of the agent. The mobile agents are essentially the same on all clients, but the agent's code is stripped off for small devices: on mobile devices with minimal resources only the agent's data can be changed, while the code is stored at the server. When the agent returns to the server, the two parts are joined and the agent is ready to be executed. The joined mobile agent can then migrate to other agent servers and clients. A middleware is also proposed that makes it possible to establish communication between heterogeneous devices.
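The split-and-rejoin idea above can be sketched in a few lines. Everything here is a hypothetical simplification, not the paper's middleware: the agent's behaviour travels as source text defining a `task(data)` entry point, a name invented for this sketch.

```python
class Agent:
    """Toy mobile agent: `data` always travels with the agent, while `code`
    (the behaviour, as source text defining a task(data) function) can be
    stripped off for resource-poor clients and kept at the server."""
    def __init__(self, code_src, data):
        self.code_src = code_src
        self.data = data

    def split(self):
        """Light client carries only the data; the server retains the code."""
        return dict(self.data), self.code_src

    @staticmethod
    def join(data, code_src):
        """Server side: recombine stored code with data the client returns."""
        return Agent(code_src, data)

    def run(self):
        ns = {}
        exec(self.code_src, ns)  # define the agent's task in a fresh namespace
        return ns["task"](self.data)
```

A small device would mutate only the `data` dictionary between `split()` and `join()`, exactly as the abstract describes; a full client could call `run()` directly.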
Ant Colony Optimization: A Solution for Load Balancing in the Cloud (dannyijwest)
Cloud computing is a new style of computing over the internet. It has many advantages, along with some crucial issues that must be resolved to improve the reliability of the cloud environment; these issues relate to load management, fault tolerance and security. In this paper the main concern is load balancing in cloud computing. The load can be CPU load, memory capacity, delay or network load. Load balancing is the process of distributing load among the various nodes of a distributed system to improve both resource utilization and job response time, while avoiding situations where some nodes are heavily loaded and others are idle or doing very little work. Load balancing ensures that every processor in the system, or every node in the network, does approximately the same amount of work at any instant. Many methods have been proposed to solve this problem, such as Particle Swarm Optimization, hash methods, genetic algorithms and several scheduling-based algorithms. In this paper we propose a method based on Ant Colony Optimization to solve the load balancing problem in the cloud environment.
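A minimal sketch of how ant colony optimization can balance tasks across nodes follows. The pheromone-update rule, the parameters, and the cost function (minimize the maximum relative load) are assumptions for illustration, not the paper's algorithm.

```python
import random

def aco_balance(tasks, capacities, ants=30, iters=40, evap=0.5, seed=1):
    """Assign task sizes to nodes so the maximum relative load is small.
    Pheromone tau[t][n] records how well assigning task t to node n worked."""
    rng = random.Random(seed)
    nodes = range(len(capacities))
    tau = [[1.0] * len(capacities) for _ in tasks]
    best, best_cost = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            load = [0.0] * len(capacities)
            assign = []
            for t, size in enumerate(tasks):
                # desirability = pheromone x heuristic (prefer lighter nodes)
                w = [tau[t][n] / (1.0 + load[n] / capacities[n]) for n in nodes]
                n_sel = rng.choices(list(nodes), weights=w)[0]
                assign.append(n_sel)
                load[n_sel] += size
            cost = max(load[n] / capacities[n] for n in nodes)
            if cost < best_cost:
                best, best_cost = assign, cost
        for t, n_sel in enumerate(best):  # evaporate, reinforce the global best
            for n in nodes:
                tau[t][n] *= 1.0 - evap
            tau[t][n_sel] += 1.0 / best_cost
    return best, best_cost
```

With four tasks of sizes 4, 4, 2, 2 on two equal nodes, the colony quickly settles on a perfectly balanced split with maximum load 6.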
A survey of Cyber Foraging and Cloud Offload Techniques.
We present a survey of cyber foraging and
cloud computing techniques as a solution to aid computing on resource-constrained mobile devices. We also explain some
important cyber foraging systems and present a categorization of the existing approaches considering various factors like
their dynamics, granularity, metrics used, surrogate types and their overheads.
Abstract: Cloud computing is a recent trend and a hot topic in today's global world, in which resources are provided to users on demand, typically over the internet. Mobile cloud computing is simply cloud computing in which at least some of the devices involved are mobile wireless equipment. This paper examines multiple procedures for mobile cloud computing, covering both general mobile cloud computing solutions and application-specific solutions. It also considers the setting in which mobile phones are used to browse the web, write e-mails, watch videos and so on: mobile phones have become a universal interface to online services, and cloud computing applications generally run locally on them.
Contemporary Energy Optimization for Mobile and Cloud Environments (ijceronline)
Cloud and mobile computing applications are growing heavily in terms of usage, and these two areas are extending the usability of systems. This review paper covers cloud and mobile applications in terms of the resources they consume, the need for users in several locations to choose among a variety of features, and the evolving provisions for service providers and end users. The two fields are combined to provide good functionality, efficiency and effectiveness on mobile phones, with enhancements that account for power consumption arising from the resource-constrained nature of the devices, the communication media and cost-effectiveness. The paper discusses concepts related to power consumption, the underlying protocols and other performance issues.
DESIGN AND IMPLEMENTATION OF INTELLIGENT COMMUNITY SYSTEM BASED ON THIN CLIEN... (ijasuc)
With the continuous development of science and technology, the intelligent development of community systems has become a trend. Meanwhile, smart mobile devices and cloud computing technology are increasingly used in intelligent information systems; however, smart mobile devices such as smartphones and smart pads, also known as thin clients, are limited by their capacities (CPU, memory and battery) and their network resources, and do not always satisfy users of mobile services. Mobile cloud computing, in which resource-rich virtual machines backing smart mobile devices are provided to customers as a service, can be a terrific solution for overcoming the limitations of real smart mobile devices, but resource utilization is low and information cannot easily be shared. To address these problems, this paper proposes an information system for intelligent communities composed of thin clients, a wideband network and cloud computing servers. On one hand, the thin clients, characterized by energy efficiency, high robustness and high computing capacity, efficiently avoid the problems encountered with PC architectures and mobile devices. On the other hand, the cloud computing servers in the proposed system remove the barriers to resource sharing. Finally, the system is built in a real environment to evaluate its performance: we deploy it in a community of more than 2000 residents and demonstrate that it is robust and efficient.
Content-Based Image Retrieval (CBIR) systems have been used to search for relevant images in various research areas. CBIR systems use features such as shape, texture and color, and feature extraction is the main step on which retrieval results depend. Color features in CBIR take the form of color histograms, color moments and the conventional color correlogram, with color-space selection used to represent the color information of the query image's pixels. Shape is the basic characteristic of the segmented regions of an image, and different methods using different shape representations have been introduced for better retrieval; earlier, global shape representations were used, but the field has moved over time toward local shape representations, which relate more to expressing the result than to the method. Local shape features may be derived from texture properties and color derivatives. Texture features have been used for document images, segmentation-based recognition and satellite images, and appear in different CBIR systems alongside color, shape, geometrical-structure and SIFT features.
Cyber attacks have become most prevalent in the past few years. During this time, attackers have discovered new vulnerabilities to carry out malicious activities on the internet, and both clients and servers have been victimized. Clickjacking is one of the attacks adopted by attackers to deceive innocuous internet users into initiating some action. The clickjacking attack exploits a vulnerability in web applications: it uses a technique that allows cross-domain attacks with the help of user-initiated clicks and performs unintended actions. This paper traces out the vulnerabilities that make a website susceptible to clickjacking and proposes a solution.
Performance Analysis of Audio and Video Synchronization Using Spreaded Code D... (Eswar Publications)
Audio and video synchronization plays an important role in speech recognition and multimedia communication. Audio-video sync is a significant problem in live video conferencing, caused by the variable delay introduced by various hardware components and software environments. The objective of synchronization is to preserve the temporal alignment between the audio and video signals. This paper proposes audio-video synchronization using a spreading-code delay-measurement technique. The proposed method, evaluated on a home database, achieves 99% synchronization efficiency. The audio-visual signature technique provides a significant reduction in audio-video sync problems and enables effective performance analysis of audio and video synchronization. The paper also implements an audio-video synchronizer and analyses its performance in terms of synchronization efficiency, audio-video time drift and audio-video delay. The simulation is carried out using MATLAB and Simulink; the system automatically estimates and corrects the timing relationship between the audio and video signals while maintaining quality of service.
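The paper's exact signature scheme is not described in the abstract. Below is a generic sketch of the underlying idea: a known pseudo-random spreading code is located in each stream by cross-correlation, and the difference of the two peak offsets gives the audio-video skew. The signal shapes, lengths and offsets are all invented for illustration.

```python
import random

def code_offset(code, signal):
    """Sample offset where the known spreading code best matches the signal:
    the peak of a sliding cross-correlation."""
    n, m = len(signal), len(code)
    scores = [sum(c * signal[i + j] for j, c in enumerate(code))
              for i in range(n - m + 1)]
    return max(range(len(scores)), key=scores.__getitem__)

rng = random.Random(7)
code = [rng.choice((-1, 1)) for _ in range(64)]  # +/-1 chip spreading code
audio = [0] * 512
audio[100:164] = code                            # code embedded at sample 100
video = [0] * 512
video[130:194] = code                            # same code, 30 samples later
av_skew = code_offset(code, video) - code_offset(code, audio)  # -> 30
```

Because the code is +/-1 valued, its full-overlap correlation (64) strictly exceeds any partial overlap, so the peak uniquely identifies each embedding point.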
Due to the availability of complicated devices in industry, models aimed at consumers with a lower cost of resources have been developed. Home automation systems have been built by several researchers, but their limitations include architectural complexity, high equipment costs and inflexible interfaces. This paper proposes a working system based on the PIC 16F72 microcontroller that is secure, cost-efficient and flexible, leading to efficient home automation. The system can control various home appliances such as fans, bulbs and tube lights. The paper describes the components used, how they are connected and how they operate. The home automation system uses an Android app entitled "Home App", which provides flexibility and an easy-to-use GUI.
Semantically Enhanced Personalised Adaptive E-Learning for General and Dysle... (Eswar Publications)
E-learning plays an important role in providing required, well-formed knowledge to a learner, and the medium has advanced in various directions such as adaptive e-learning systems. Enhancing e-learning semantically can improve the retrieval and adaptability of the learning curriculum. This paper provides a semantically enhanced, module-based e-learning system for a computer science programme from a learner-centric perspective. Learners are categorized by proficiency to provide a personalized learning environment. Learning disorders on e-learning platforms still require much research; therefore, this paper also provides a personalized assessment model for alphabet learning, with learning objects for children who face dyslexia.
Agriculture plays an important role in the economy of our country: over 58 percent of rural households depend on the agriculture sector as their means of livelihood, and agriculture is one of the major contributors to Gross Domestic Product (GDP). Seeds are the soul of agriculture. This application reduces the time researchers and farmers need to determine seedling parameters. It helps farmers know the percentage of seedlings that will grow, which is essential for estimating the yield of a particular crop; manual calculation may introduce errors, and the developed app minimizes them. Scientists and farmers can use the app to obtain physiological seed-quality parameters and to make decisions about their farming activities. In this article a desktop app for calculating seed germination percentage and vigour index is developed in the PHP scripting language.
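The abstract's app is written in PHP; as a language-neutral sketch, the two quantities it computes are commonly defined as below. The vigour-index formula (germination percentage times mean seedling length, the standard Abdul-Baki and Anderson form) is an assumption about what the app uses.

```python
def germination_percentage(germinated, total):
    """Share of sown seeds that germinated, as a percentage."""
    return 100.0 * germinated / total

def vigour_index(germinated, total, mean_seedling_length_cm):
    """Vigour Index I: germination percentage x mean seedling length (cm)."""
    return germination_percentage(germinated, total) * mean_seedling_length_cm
```

For example, 85 germinated seeds out of 100 with a mean seedling length of 12.5 cm give a germination percentage of 85% and a vigour index of 1062.5.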
What Happens When Adaptive Video Streaming Players Compete in Time-Varying Ba... (Eswar Publications)
Competition among adaptive video streaming players severely diminishes user-QoE. When players compete at a bottleneck link many do not obtain adequate resources. This imbalance eventually causes ill effects such as screen flickering and video stalling. There have been many attempts in recent years to overcome some of these problems. However, added to the competition at the bottleneck link there is also the possibility of varying network bandwidth which can make the situation even worse. This work focuses on such a situation. It evaluates current heuristic adaptive video players at a bottleneck link with time-varying bandwidth conditions. Experimental setup includes the TAPAS player and emulated network conditions. The results show PANDA outperforms FESTIVE, ELASTIC and the Conventional players.
WLI-FCM and Artificial Neural Network Based Cloud Intrusion Detection System (Eswar Publications)
Security and performance are the major aspects of cloud computing that have to be addressed, and intrusion is one of the basic and most important security problems. Consequently, it is essential to create an Intrusion Detection System (IDS) that detects both inside and outside attacks with high precision in a cloud environment. In this paper, a cloud intrusion detection system at the hypervisor layer is developed and assessed to detect malicious activities in the cloud computing environment. The system uses a hybrid algorithm, a fusion of the WLI-FCM clustering algorithm and a back-propagation artificial neural network, to improve detection accuracy. The proposed system is implemented and compared with K-means and classic FCM, using DARPA's KDD Cup 1999 dataset for simulation. The detailed performance analysis shows that the proposed system detects anomalies with high accuracy and a low false-alarm rate.
Spreading Trade Union Activities through Cyberspace: A Case Study (Eswar Publications)
This report presents the outcome of an investigative study of the modus operandi of the Academic Staff Union of Polytechnics (ASUP), YabaTech. The investigation covered the logistics and cost implications of spreading union activities among members. It was discovered that the cost of managing and disseminating information to members was high, and that logistics problems caused loss of information in transit, cutting some members off from union activities. To curtail these problems, we proposed the design of a secure, dynamic website for spreading union activities among members and the public. The proposed system was implemented using HTML5 and interface frameworks such as Bootstrap and jQuery, which enable the responsive features of the application interface; the backend was designed using PHP and MySQL. Evaluation of the new system showed that the cost of managing information has been reduced considerably and that the logistics problems identified in the old system are no longer an issue.
Identifying an Appropriate Model for Information Systems Integration in the O... (Eswar Publications)
Nowadays organizations use information systems to optimize processes and to increase coordination and interoperability across the organization. Since the oil and gas industry is one of the largest industries in the world, its information systems (IS), which fall into three categories (field IS, plant IS and enterprise IS), need to be made compatible so as to achieve interoperability and, as a result, optimized processes. In this paper we introduce different models of information systems integration, identify the types of information systems used in the upstream and downstream sectors of the petroleum industry, and finally, based on expert opinion, identify a suitable model for information systems integration in this industry.
Link- and Node-Disjoint Evaluation of the Ad Hoc On-Demand Multi-path Distance... (Eswar Publications)
This work illustrates the AOMDV routing protocol. Its ancestor, the AODV routing protocol is also described. This tutorial demonstrates how forward and reverse paths are created by the AOMDV routing protocol. Loop free paths formulation is described, together with node and link disjoint paths. Finally, the performance of the AOMDV routing protocol is investigated along link and node disjoint paths. The WSN with the AOMDV routing protocol using link disjoint paths is better than the WSN with the AOMDV routing protocol using node disjoint paths for energy consumption.
Bridging Centrality: Identifying Bridging Nodes in a Transportation Network (Eswar Publications)
Several centrality measures are used to identify the importance of a node in a network. Most of these measures are dominated by node degree because of the way they examine network topology. We propose a centrality-based identification model, bridging centrality, that combines information flow with topological aspects. We apply bridging centrality to real-world networks, including a transportation network, and show that the nodes distinguished by it are well located at the connecting positions between highly connected regions. Bridging centrality can single out bridging nodes, i.e. nodes with more information flowing through them that lie between highly connected regions, while other centrality measures cannot.
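The abstract does not give the formula; one published definition of bridging centrality multiplies a node's betweenness by a bridging coefficient built from neighbour degrees. A from-scratch sketch under that assumption, with the toy barbell network invented for illustration:

```python
from collections import deque

def betweenness(adj):
    """Unnormalized shortest-path betweenness (Brandes' algorithm)."""
    bet = {v: 0.0 for v in adj}
    for s in adj:
        dist, sigma = {s: 0}, {s: 1.0}
        preds = {v: [] for v in adj}
        order, q = [], deque([s])
        while q:
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    sigma[w] = 0.0
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bet[w] += delta[w]
    return {v: b / 2 for v, b in bet.items()}  # undirected pairs counted twice

def bridging_centrality(adj):
    """Betweenness x bridging coefficient, where the bridging coefficient of v
    is (1/deg(v)) divided by the sum of 1/deg(u) over v's neighbours u."""
    bet = betweenness(adj)
    out = {}
    for v in adj:
        inv = sum(1.0 / len(adj[u]) for u in adj[v])
        out[v] = bet[v] * ((1.0 / len(adj[v])) / inv if inv else 0.0)
    return out
```

On two dense clusters joined through a single connector node, the connector receives the highest bridging centrality, matching the abstract's claim that such nodes sit between highly connected regions.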
Nowadays we are living in an era of information technology in which almost every person uses IT, either intentionally or unintentionally. Technology has played a vital role in our day-to-day lives for the last few decades, and we all depend on it to obtain maximum benefit and comfort. This new era, equipped with the latest advances in technology, is enlightening the world in the form of the Internet of Things (IoT). The Internet of Things is a domain in which each object in the real world can perform some task while communicating with other objects: a world full of devices, sensors and other objects that communicate and make human life far better and easier than ever. This paper provides an overview of current research work on IoT in terms of architecture, the technologies used and applications, and after a literature review it highlights the issues related to those technologies. The main purpose of this survey is to present the latest technologies, their trends and details in the field of IoT in a systematic manner; it will be helpful for further research.
Automatic Monitoring of Soil Moisture and Controlling of Irrigation System (Eswar Publications)
In the past couple of decades there has been rapid growth in agricultural technology. Properly applied drip irrigation is very reasonable and proficient. Various drip irrigation methods have been proposed, but they have been found to be expensive and cumbersome to use: in a conventional drip irrigation system the farmer has to keep watch over the irrigation schedule, which differs for different types of crops. Remotely monitored embedded irrigation systems have become essential for farmers to save energy, time and money, irrigating only when water is actually required. In this approach, soil-test data on chemical constituents, water content, salinity and fertilizer requirements are collected wirelessly and processed to produce a better drip irrigation plan. This paper reviews different monitoring systems and proposes an automatic monitoring model using a Wireless Sensor Network (WSN) that helps the farmer improve yield.
Multi-Level Data Security Model for Big Data on Public Cloud: A New Model (Eswar Publications)
With the advent of cloud computing, big data has emerged as a crucial technology. Certain types of cloud provide consumers with free services such as storage and computational power. This paper makes use of infrastructure as a service, where the storage service of public cloud providers is leveraged by an individual or organization. The paper emphasizes a model that anyone can use without cost: users can store confidential data without security concerns, because the data is altered in such a way that an intruder cannot understand it, while the user can retrieve the original data in no time. The proposed security model effectively and efficiently provides robust security both while data resides on the cloud infrastructure and while it is being migrated to or from it.
Impact of Technology on E-Banking: Cameroon Perspectives (Eswar Publications)
The financial services industry is experiencing rapid change in service delivery and channel usage. Financial companies and users of financial services watch new technologies as they emerge and decide whether or not to embrace them and the new opportunities to save and manage enormous amounts of time, cost and stress. There is no doubt about the favourable and manifold impact of technology on e-banking, as pictured in this review paper: almost all banks offer at least the most accessible e-banking equipment, such as ATMs and cards. On the other hand, cheap and readily available technology has opened favourable competition in the e-banking services business, with a wide range of competitors challenging commercial banks in Cameroon in providing digital financial services.
Classification Algorithms with Attribute Selection: An Evaluation Study Using... (Eswar Publications)
Attribute (feature) selection plays an important role in the data mining process. In general a data set contains many attributes, but not all of them are relevant to effective classification. Attribute selection is a technique used to rank attributes. This paper therefore presents a comparative evaluation of classification algorithms before and after attribute selection using the Waikato Environment for Knowledge Analysis (WEKA). The study concludes that the performance metrics of the classification algorithms improve after attribute selection, which also removes the work of processing irrelevant attributes.
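Attribute ranking of the kind WEKA offers (for example, its information-gain evaluator with a ranker search) can be sketched from scratch; the scoring criterion shown here and the tiny dataset are assumptions for illustration, not the paper's exact setup.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    """Information gain from splitting the data on attribute index `attr`."""
    n = len(rows)
    groups = {}
    for row, y in zip(rows, labels):
        groups.setdefault(row[attr], []).append(y)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

def rank_attributes(rows, labels):
    """Attribute indices ordered from most to least informative."""
    k = len(rows[0])
    return sorted(range(k), key=lambda a: info_gain(rows, labels, a),
                  reverse=True)
```

An attribute that perfectly predicts the class scores a full bit of gain, while an uninformative one scores zero, so the ranking immediately exposes which attributes a classifier can safely drop.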
Mining Frequent Patterns and Associations from Smart Meters Using Bayesia... (Eswar Publications)
In today's world, migration from rural to urban areas is quite common, and health care services are among the most challenging needs of people in poor health. Advances in technology have led to smart homes containing various sensors and smart meters that automate other electronic devices. These smart meters can also capture the daily activities of patients and monitor their health conditions by mining the frequent patterns and association rules generated from the meter data. In this work we propose a model that monitors the activities of patients at home and sends the daily activities to the corresponding doctor; frequent patterns and association rules extracted from the log data are used to predict patients' health conditions and give suggestions accordingly. Our work is divided into three stages. First, we record the daily activities of the patient over a specific time period at three regular intervals. Second, we apply FP-growth to extract association rules from the log file. Finally, we apply k-means clustering to the input and a Bayesian network model to predict the health behaviour of the patient, with precautions given accordingly.
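As a sketch of the rule-extraction stage, support and confidence over a toy activity log can be computed by brute force. The activities and thresholds are invented, and the paper uses FP-growth, which avoids enumerating every subset as this sketch does; the support and confidence definitions are the standard ones.

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support):
    """Support of every itemset meeting min_support (brute-force counting,
    fine for small activity logs)."""
    n = len(transactions)
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        for k in range(1, len(items) + 1):
            for combo in combinations(items, k):
                counts[combo] += 1
    return {s: c / n for s, c in counts.items() if c / n >= min_support}

def rules(freq, min_conf):
    """Association rules lhs -> rhs with confidence at least min_conf."""
    out = []
    for itemset, sup in freq.items():
        if len(itemset) < 2:
            continue
        for k in range(1, len(itemset)):
            for lhs in combinations(itemset, k):
                if freq.get(lhs) and sup / freq[lhs] >= min_conf:
                    rhs = tuple(i for i in itemset if i not in lhs)
                    out.append((lhs, rhs, sup / freq[lhs]))
    return out
```

On a log of four mornings, "medicine -> breakfast" emerges with confidence 2/3: the patient took breakfast after medicine on two of the three medicine days, the kind of pattern the proposed model would report to the doctor.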
Network as a Service Model in Cloud Authentication by HMAC Algorithm (Eswar Publications)
The cloud is the pay-per-use, internet-accessed resource-pooling technology that now rules the IT field. Today every organization trusts the web: information must flow but must not be held, so customers rely on the cloud, which protects information in transit with security protocols. Third-party observation, and in certain circumstances direct interception, threatens packets flowing through and stored in a virtual private cloud. Global security statistics for 2017 estimated that roughly 75.35% of sensitive information in the cloud was exposed to hacking, and security analysts warned this figure could approach 100%. For this reason, the proposed research work concentrates on an Authentication-Message-Digest-Key scheme for authenticating the routing of Network-as-a-Service packets with OSPF (Open Shortest Path First), implemented on a cloud testbed with GNS3 and tested for resistance to attackers.
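The core primitive the title names can be shown with Python's standard library. The packet bytes below are invented, and note that classic OSPF message-digest authentication is a keyed MD5 digest rather than a true HMAC; the idea of appending and verifying a keyed tag is the same.

```python
import hmac
import hashlib

def sign(key: bytes, packet: bytes) -> bytes:
    """Keyed digest appended to each routing update (HMAC-SHA-256 here)."""
    return hmac.new(key, packet, hashlib.sha256).digest()

def verify(key: bytes, packet: bytes, tag: bytes) -> bool:
    """Constant-time check that the tag matches, rejecting tampered packets."""
    return hmac.compare_digest(sign(key, packet), tag)
```

A receiver sharing the key accepts the original packet and rejects any modified one, which is exactly the property that keeps forged routing updates out of the OSPF domain.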
Microstrip patch antennas are widely used in wireless detection applications due to their low power consumption, low cost, versatility, field excitation and ease of fabrication. Microstrip patch antennas, also called printed antennas, suffer from narrow bandwidth and array-element limitations. To overcome these drawbacks, a flame-retardant (FR-4) material is used as the substrate: a rectangular microstrip patch antenna on an FR-4 substrate is well suited to explosive detection applications. The proposed printed antenna was designed with dimensions of 60 x 60 mm2. The FR-4 material has a dielectric constant of 4.3 and a thickness of 1.56 mm, with length and width of 60 mm each. One side of the substrate carries the copper ground plane of 60 x 60 mm2, and the other side carries the copper patch of 34 x 29 mm2 with a thickness of 0.03 mm. RMPA structures without a slot and with a vertical slot, a double horizontal slot and a centre slot were designed, and the antennas' performance was analysed with parameters such as gain, directivity, E-field, VSWR and return loss. From the performance analysis, the double-horizontal-slot RMPA provides the best result, with maximum gain (8.61 dB) and minimum return loss (-33.918 dB). Based on the E-field excitation value, the SEMTEX explosive material is detected; the design was simulated using CST software.
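The abstract does not state how the 34 x 29 mm2 patch was sized, but the standard transmission-line design equations (as in Balanis) reproduce a similar length for the stated FR-4 substrate at an assumed 2.45 GHz operating frequency; the frequency and the use of these particular equations are assumptions, not claims about the paper's method.

```python
from math import sqrt

C = 299_792_458.0  # speed of light, m/s

def patch_dimensions(f0, eps_r, h):
    """Rectangular patch width and length (metres) from the transmission-line
    model: effective permittivity and fringing-field length extension
    included."""
    W = C / (2 * f0) * sqrt(2 / (eps_r + 1))
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 / sqrt(1 + 12 * h / W)
    dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)
                      / ((eps_eff - 0.258) * (W / h + 0.8)))
    L = C / (2 * f0 * sqrt(eps_eff)) - 2 * dL
    return W, L

W, L = patch_dimensions(2.45e9, 4.3, 1.56e-3)
```

With the substrate values from the abstract (dielectric constant 4.3, thickness 1.56 mm), the length comes out near 29 mm, consistent with the stated 34 x 29 mm2 patch.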
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
De-mystifying Zero to One: Design Informed Techniques for Greenfield Innovati...
A Survey of Cyber foraging systems: Open Issues, Research Challenges
Int. J. Advanced Networking and Applications
Volume: 08 Issue: 06 Pages: 3274-3282 (2017) ISSN: 0975-0290
Manas Kumar Yogi
Department of Computer Science and Engineering, Pragati Engineering College (Autonomous)
Email: manas.yogi@gmail.com
Darapu Uma
Department of Computer Science and Engineering, Pragati Engineering College (Autonomous)
Email: umadarapu03@gmail.com
-------------------------------------------------------------------ABSTRACT---------------------------------------------------------------
This paper presents a survey of current applications that practice the pervasive mechanism of cyber foraging. The applications include the LOCUSTS framework, Slingshot, and Puppeteer. These applications embody the operating principle of task sharing among resource-deficient mobile devices. Apart from the security aspect, they face design issues, such as task distribution and task migration, that affect their performance. The general operating mechanism of the cyber foraging technique is also discussed, and design options for improving the throughput of the inherent mechanism are presented.
Keywords - Cyber foraging, resource deficient, LOCUSTS, Slingshot, Puppeteer, Task migration.
--------------------------------------------------------------------------------------------------------------------------------------------------
Date of Submission: May 01, 2017 Date of Acceptance: May 16, 2017
--------------------------------------------------------------------------------------------------------------------------------------------------
1. INTRODUCTION
Cyber foraging refers to a pervasive computing mechanism in which resource-deficient mobile devices offload some of their heavy tasks to stronger surrogate machines in the surroundings. The term cyber foraging was coined by M. Satyanarayanan in his 2001 paper. Cyber foraging helps mobile devices take on more resource-intensive tasks by leveraging unused resources on larger computers in the vicinity. Cyber foraging targets multiple resource types, not just processing power: among the resources that can be foraged for are network connectivity, storage, processing power, bandwidth, and more. All of these resource types are equally important in a cyber foraging scenario.
There are many possible usage scenarios where cyber foraging can be utilized. Some designs for pervasive computing advocate wearable computing devices: small computing devices that may be worn by their users like clothes [1]. Users of such devices are not interested in carrying around heavy equipment, so the devices must be as lightweight as possible. This conflicts with the user's need for as powerful a device as possible. The desired computing power can be added to these small wearable devices through techniques such as cyber foraging. Consider the following situation: a doctor doing house calls is
wearing a small headset (similar in size and form to the
well-known Bluetooth headsets for mobile phones). Using
this headset he would like to be able to enter data
pertaining to his patients into an electronic journal. This
means that the headset is faced with the difficult task of
continuous voice recognition. The headset is unable to
perform this translation task by itself, so instead of
performing the actual voice recognition it only records the
words uttered by the doctor. Whenever the headset comes within range of usable computing resources, it forwards some of the recordings to these machines, which respond by returning the translated text [2]. If the surrogate has an Internet connection, it may even be given the task of updating the patient's journal directly. After translation, the headset may discard the recording and thus free storage for additional recordings. In the preceding scenario the
application running on the mobile device works in two
modes: high fidelity and low fidelity. When no surrogates
are within range the headset simply saves the recordings
(low fidelity), and when surrogates can be used the
recordings are immediately translated into text (high
fidelity). This high/low fidelity aspect is inherent in all cyber foraging applications: when surrogates are available, high-quality work may be done, but this does not mean that the applications work only in the presence of surrogates. For cyber foraging to be usable, a low fidelity setting must also be possible, in which the mobile device itself runs the application, but at a diminished fidelity. In the scenario, low fidelity means that the headset only stores the recordings, but it would also be possible to ask the mobile device to do the processing itself, or even to do it in combination with other mobile devices in the doctor's personal area network. To be able to
perform the actions described above, a number of things are needed. First, the mobile device must be able to monitor the network for any available surrogates [3]. Once one is found, the mobile device must be able to distribute tasks to surrogate machines, and, in case the user moves while tasks are being performed, surrogates must be able to migrate tasks between each other so that the results may be returned. These amount to complex operations, and implementing them should not be the work of the application programmer.
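The high/low fidelity pattern from the headset scenario can be sketched in a few lines of Python (the language LOCUSTS itself uses for distributable tasks). This is purely illustrative: the function names and the idea of modelling the surrogate as any callable that performs the heavy translation step are assumptions of this sketch, not part of any framework.

```python
# Toy sketch of the two fidelity modes. "surrogate" stands in for a nearby
# machine able to do the heavy work (e.g. voice recognition); when none is
# in range, recordings are queued locally instead (low fidelity).
def handle_recording(recording, surrogate, backlog):
    """High fidelity when a surrogate is in range, low fidelity otherwise."""
    if surrogate is not None:
        return surrogate(recording)      # high fidelity: translate now
    backlog.append(recording)            # low fidelity: store for later
    return None

def drain_backlog(backlog, surrogate):
    """When a surrogate comes within range, forward the stored recordings."""
    results = [surrogate(r) for r in backlog]
    backlog.clear()
    return results
```

A usage example: with no surrogate, `handle_recording("rec1", None, backlog)` queues the recording; once a surrogate appears, `drain_backlog` forwards everything that accumulated.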
2. CURRENT RESEARCH CHALLENGES
There are a considerable number of challenges that must
be addressed when designing a framework for cyber
foraging.
2.1 Distribution of task
How can complex tasks be delegated to surrogates, and exactly how complex should a task be before it is moved onto the surrogates?
2.1.2 Migration of task
When mobile devices are using surrogates, the tasks distributed to surrogates must be transferable: it must be possible to move running tasks between surrogates and also to move a task back to the mobile device. Apart from these remote-execution-specific challenges, a cyber foraging framework poses a number of others, such as device discovery, capability announcement, and data staging [4]. One final important challenge for cyber foraging is security.
Surrogates must execute code on behalf of, possibly
unknown and thus untrusted, mobile devices and data must
be transmitted over wireless links that are easy for an
eavesdropper to monitor. Finally, the client must be able to trust that the surrogate actually performs the task it is asked to perform. How can this be done in a secure manner? A fine balance between security and flexibility must be found here.
2.2 Cyber Foraging Steps
A cyber foraging approach includes a number of steps; every available cyber foraging system considers all or some of them. These steps can be summarized as follows.
2.2.1 Surrogate discovery
At the onset, available idle surrogates that are ready to share their resources with the mobile device must be identified. Several researchers have addressed surrogate discovery in depth.
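As an illustration of what surrogate discovery can look like on a local network, the following sketch uses a UDP probe/offer exchange: the client sends a probe, and willing surrogates reply with a capability announcement. The message format, port number, and JSON payload are invented for this example; real systems typically build on established service-discovery protocols.

```python
# Hypothetical surrogate discovery sketch. PROBE, DISCOVERY_PORT, and the
# JSON capability payload are illustrative choices, not from the paper.
import json
import socket
import threading

PROBE = b"CF_PROBE"           # client -> surrogates
DISCOVERY_PORT = 50007        # arbitrary port chosen for this sketch

def surrogate_responder(capabilities, stop, port=DISCOVERY_PORT):
    """Run on a surrogate: answer probes with a capability announcement."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    sock.settimeout(0.2)
    while not stop.is_set():
        try:
            data, addr = sock.recvfrom(1024)
        except socket.timeout:
            continue
        if data == PROBE:
            sock.sendto(json.dumps(capabilities).encode(), addr)
    sock.close()

def discover(target="127.0.0.1", port=DISCOVERY_PORT, wait=0.5):
    """Run on the mobile device: probe the network and collect offers."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.settimeout(wait)
    sock.sendto(PROBE, (target, port))
    offers = []
    try:
        while True:
            data, addr = sock.recvfrom(1024)
            offers.append((addr[0], json.loads(data)))
    except socket.timeout:
        pass
    sock.close()
    return offers
```

On a real network the probe would be sent to the broadcast address rather than loopback; the sketch uses loopback so it can be run on a single machine.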
2.2.2 Context gathering
To make a good decision about the target execution location, the available resources on surrogates and mobile devices must be monitored and application resource consumption must be estimated; this is regarded as context gathering in some cyber foraging systems.
2.2.2.1 Partitioning
Here, a task is divided into smaller subtasks, and undividable, i.e. unmovable, parts are identified. Some researchers perform this partitioning automatically.
2.2.2.2 Scheduling
The most important step of cyber foraging is to assign each individual task to the surrogate(s) or the mobile device most capable of executing it, based on the context information and the estimated cost of doing so.
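The scheduling decision can be illustrated with a toy cost model. The model below (estimated completion time = compute time plus transfer time) is a deliberate simplification invented for this sketch; it is not the estimator used by LOCUSTS or any other particular framework.

```python
# Illustrative cost-based scheduler: given per-location context (available
# compute rate, link bandwidth), choose where to run a task.
def schedule(task, locations):
    """Return the location with the lowest estimated completion time.

    task: dict with 'cycles' (CPU work) and 'bytes' (state to transfer)
    locations: dict name -> {'cps': cycles/sec available there,
               'bps': bytes/sec link bandwidth (None = local, no transfer)}
    """
    def cost(loc):
        compute = task["cycles"] / loc["cps"]
        transfer = 0.0 if loc["bps"] is None else task["bytes"] / loc["bps"]
        return compute + transfer
    return min(locations, key=lambda name: cost(locations[name]))
```

Even this toy model reproduces the behaviour discussed later in the paper: a large compute-heavy task goes to the surrogate, while a small task with a lot of state to transfer stays local because the transfer cost dominates.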
2.2.3 Remote execution control
The final step includes establishing a reliable connection between the mobile device and the appropriate surrogate to pass the required information, performing the remote execution, and receiving the returned results. Various researchers have considered remote execution control.
2.3 Cyber Foraging Goals
Cyber foraging is a way to execute resource intensive
applications on resource constrained mobile devices. In
fact, researchers in cyber foraging have tried to augment
some resources of mobile devices in terms of effective
metrics to obtain more efficient application execution.
The most important resources considered by offloading approaches are as follows:
2.3.1 Energy
One of the most crucial limits of mobile devices is energy consumption, because a mobile device cannot replenish its own energy. Many researchers have considered energy consumption as a factor for offloading.
2.3.2 Memory and storage
The memory capacity of mobile devices is smaller than that of stationary computers, and memory-intensive applications usually cannot run on mobile devices. Many researchers have considered the availability of memory and storage as another effective parameter for the offloading decision.
2.3.3 Response time
When the processing power of mobile devices is considerably lower than that of static computers, task offloading is advantageous for decreasing execution time. Many researchers have considered response time and latency as major factors affecting the offloading decision.
2.3.4 I/O
Showing a movie on a bigger screen, playing music through more powerful speakers, and printing are examples of task offloading that improve I/O quality or exploit more I/O devices. Some researchers have focused on augmenting I/O as an effective parameter for the offloading decision.
3. THE LOCUSTS FRAMEWORK
The LOCUSTS framework aims to provide developers with a complete cyber foraging toolbox that eases the process of developing applications that utilize cyber foraging [6]. In the following, the architecture of LOCUSTS is briefly described in Section 3.1, the chosen approach to task distribution is described in Section 3.2, and task migration is illustrated in Section 3.3.
3.1 Architecture
The current architecture of the LOCUSTS system is shown in Figure 1. The LOCUSTS daemon runs as a separate process on both client and surrogate devices, and individual applications interact with the local LOCUSTS instance. As depicted, a cyber foraging enabled
application includes some local code, which is executed by the local device, and a number of distributable tasks. Every time a task is executed, the local LOCUSTS instance is consulted so that it may determine a suitable execution plan. This plan is developed by the scheduler, which depends on resource measurements, both local and remote.
Fig. 1. LOCUSTS architecture
Once an execution plan has been obtained, the task may either be distributed to one or more surrogates, or it may be handed to the task processor for local execution. At the bottom layer of the LOCUSTS client sits the network services layer, which helps the mobile node perform all required P2P operations, such as peer discovery and network roaming. All devices can opt to act as surrogates, and thus the software running on surrogates and clients is identical. When operating as a surrogate, a device generally offers three things:
1) Execution of known tasks on behalf of clients
2) The ability for clients to author new tasks
3) Support for full task migration
The storage manager, shown below the LOCUSTS daemon in Figure 1, provides a simple virtual file system that can be accessed from within tasks executed by LOCUSTS. When a task executes on the local device, files in this virtual file system simply refer to local files, but when a task is delegated for remote execution, the storage managers of the client and the surrogate are linked so that remote files may be read transparently. This small distributed file system is simple in design and built specifically for the purpose of cyber foraging. It synchronizes file data on demand to minimize the amount of data transferred, and it has built-in support for temporary files that are only synchronized when the task is migrated.
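The on-demand synchronization idea can be sketched as two linked storage managers where file contents cross the link only on first read. The class and method names are invented for this sketch; a real storage manager would also handle writes, invalidation, and temporary files.

```python
# Minimal sketch of on-demand file synchronization between the client's
# and the surrogate's storage managers: a remote file is transferred only
# when a task actually reads it, then cached locally.
class StorageManager:
    def __init__(self, local_files):
        self.local = dict(local_files)   # path -> bytes on this device
        self.peer = None                 # linked remote StorageManager

    def link(self, peer):
        """Link client and surrogate managers for a remote execution."""
        self.peer, peer.peer = peer, self

    def read(self, path):
        if path in self.local:
            return self.local[path]
        if self.peer is not None and path in self.peer.local:
            data = self.peer.local[path]   # transferred only on first read
            self.local[path] = data        # cache so later reads are free
            return data
        raise FileNotFoundError(path)
```

For example, after `client.link(surrogate)`, a task running on the surrogate can `surrogate.read("/data/rec.wav")` and transparently receive the client's file, which is then cached on the surrogate side.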
3.2. Task Distribution
Task distribution is at the core of cyber foraging. The
delegation of complex work to surrogates is the
fundamental idea of cyber foraging. When designing a cyber foraging framework, decisions must be made about exactly what is delegated, when it is delegated, and at what granularity. The question of task granularity is difficult to answer definitively, because the answer depends on numerous factors. The main factors are network bandwidth and latency, the processing power of the mobile device and the surrogate, the amount of energy used by the mobile device when communicating with the surrogate, and the rate of mobility of the device. When
delegating tasks to a surrogate the mobile device needs to
send the task to the surrogate, and similarly the surrogate
must transmit a response back to the mobile device. This
means that data must be transmitted over the wireless link
between the mobile device and the surrogate. It must be determined whether the cost of this transmission, in both time and energy, is acceptable, i.e. whether the cost of distributing the task is smaller than the cost of doing the processing locally. For this reason, only larger, longer-running tasks are distributed, as the cost of delegating a small task exceeds the cost of local execution. But how is a task designated as small? It varies with context, depending on the factors mentioned above. To meet this challenge, the decision of when to distribute a task should be made dynamically, based on current resource availability. This is achieved by monitoring resource usage at the client and surrogates and feeding this data to the scheduler when future execution is planned. The LOCUSTS framework takes this approach to task distribution: monitoring resource usage and dynamically deciding where and when to distribute tasks. In LOCUSTS, tasks can also be resized. A resizable task can be solved to different degrees, meaning a surrogate may opt to solve only a small fraction of the task before returning it to the client; this happens when the surrogate is under timing constraints given by the client. LOCUSTS also supports migratable tasks, as described in the section below. The
next important issue to face is exactly what is distributed
and how it is done. When a mobile device is executing an
application that can use cyber foraging, a part of the
application will always be running locally while other
parts may or may not be distributed to surrogates. The
parts of the program that can be distributed must be
identified and, possibly, modified to make the distribution
possible. After identifying the parts of a program that can
be delegated to surrogates a mechanism for actually
distributing these tasks must be found. LOCUSTS does not require anything to be preinstalled on the surrogates. A task in LOCUSTS is therefore more than just an RPC invocation: it also contains the actual source code of the task, presented in such a way that any surrogate, regardless of architecture, is able to execute it. This means that the portions of code that designate distributable tasks must be written in a specific, interpreted language so that they can be moved onto surrogates, allowing clients to author new tasks on a surrogate. Allowing clients to execute unknown code on surrogates of course raises an abundance of security issues that must be addressed, but these are beyond the scope of this paper.
Currently the language used for distributable tasks is
Python.
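Shipping a task as interpreted source code can be sketched in a few lines: the client sends the task body as text, and the surrogate compiles and invokes it. The function names and the `run()` convention are inventions of this sketch, and the sketch deliberately omits sandboxing, which is exactly the class of security issue the paper notes and defers.

```python
# Hypothetical sketch of a distributable task shipped as Python source.
# A real surrogate would sandbox the received code; exec() here is unsafe
# and used only to illustrate the mechanism.
TASK_SOURCE = """
def run(words):
    # toy stand-in for a heavy task such as voice recognition
    return " ".join(w.upper() for w in words)
"""

def execute_on_surrogate(source, args):
    """Surrogate side: compile the received source and invoke its run()."""
    namespace = {}
    exec(compile(source, "<task>", "exec"), namespace)  # no sandboxing!
    return namespace["run"](*args)
```

Because the task travels as source, any surrogate with a Python interpreter can execute it, regardless of its CPU architecture, which is the portability property the paper describes.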
3.3. Task Migration
Distributing very large tasks increases the merits of remote execution, since it reduces the overhead of sending tasks back and forth. But in existing cyber foraging frameworks, working with large tasks requires the user to stay within range of a specific surrogate for an extended period of time. To remove this limitation, tasks are allowed to span multiple surrogates throughout their lifetime. The solution to the problem is task migration, which enables surrogates to shift running tasks to other surrogates, or even back onto the mobile device. With migration, a client no longer needs to stay within range of a surrogate while a task is performed, and it is thus possible to distribute larger tasks, which decreases the considerable overhead of remote execution.
Task migration is depicted in Figure 2. The ways such migration could be implemented range from simple surrogate-to-surrogate proxies to task checkpointing; both methods are used in LOCUSTS. Proxies are used in scenarios where high-speed network connections exist between surrogates. Take for example the scenario in Figure 2: the task originates at S2, but when M moves out of range of S2 the task is migrated to S4. If, for some reason, it makes sense to let S2 keep the task, S4 is simply asked to proxy for S2; from the viewpoint of the client M, surrogate S4 is the one executing the task, and all communication about the task goes through S4. Alternatively, the task could be moved entirely to S4, in which case the currently executing task is checkpointed by S2 and its code and state are sent to S4.
Fig. 2 Task migration in LOCUSTS
Many factors must be considered when choosing which
kind of migration to use – factors such as network
bandwidth between the surrogates, current resource usage
at the surrogates, checkpoint size, estimated finish time of
the task etc. This preceding description of task migration
touches lightly on a very complex matter.
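The checkpointing alternative can be sketched as follows: a resumable task exposes its state, a surrogate freezes it into a byte blob, and another surrogate continues from the same point. The task class and function names are illustrative; this sketch does not show the proxy alternative, and real checkpointing of arbitrary running code is far harder than pickling a cooperative object.

```python
# Illustrative checkpoint-style migration between surrogates: serialize a
# cooperative task's state, ship the bytes, and resume elsewhere.
import pickle

class CountdownTask:
    """Toy long-running task: incrementally summing 0..n-1."""
    def __init__(self, n):
        self.i, self.total, self.n = 0, 0, n

    def step(self):
        self.total += self.i
        self.i += 1
        return self.i < self.n           # True while work remains

def checkpoint(task):
    """Freeze the task; the bytes are what S2 would send to S4."""
    return pickle.dumps(task)

def resume(blob):
    """Reconstruct the task on the receiving surrogate."""
    return pickle.loads(blob)
```

Running a few steps on one surrogate, checkpointing, and finishing on another yields the same result as uninterrupted execution, which is the correctness property migration must preserve.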
4. SLINGSHOT
Slingshot is an architecture for deploying mobile services at wireless hotspots [8]. Slingshot replicates applications on surrogate computers located at hotspots. A first-class replica of each application executes on a remote server owned by the mobile user, and Slingshot instantiates second-class replicas on surrogates at or near the hotspot where the user is located. A proxy running on the handheld broadcasts each application request to all replicas and returns the first response it receives to the application. Second-class replicas improve interactive response time since they are reachable through low-latency, high-bandwidth connections (e.g. 54 Mb/s for 802.11g). Additionally, the first-class replica is a trusted repository for application state, which means that state is not lost but preserved upon surrogate failure.
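The proxy's "broadcast and take the first response" behaviour can be sketched with threads, one per replica. Replica latencies are simulated with sleeps here; in reality they reflect network distance and load, and the function names are inventions of this sketch.

```python
# Sketch of Slingshot-style request handling: send each request to every
# replica and return whichever response arrives first.
import queue
import threading
import time

def broadcast_request(request, replicas):
    """replicas: list of (name, latency_seconds, handler) tuples."""
    first = queue.Queue()

    def ask(name, latency, handler):
        time.sleep(latency)              # simulated network + compute time
        first.put((name, handler(request)))

    for name, latency, handler in replicas:
        threading.Thread(target=ask, args=(name, latency, handler),
                         daemon=True).start()
    return first.get()                   # first response wins
```

With a slow first-class replica and a fast nearby second-class replica, the second-class replica's answer is returned, which is exactly why second-class replicas improve interactive response time while the first-class replica still guarantees durability.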
Slingshot also makes surrogate management easy: it uses virtual machine encapsulation to remove the need to install application-specific code on surrogates. Replication prevents the loss of application state when a surrogate crashes or even permanently fails, and the performance impact of surrogate failure is reduced by the other replicas, which continue to service client requests. Harnessing surrogate computation is a complex problem with many challenges. The Slingshot work addresses these challenges, including improving interactive response time, hiding the perceived cost of migration, recovering from surrogate failure, and simplifying surrogate management. It also presents results that measure the substantial benefit of surrogate computation for stateless and stateful applications. Other research challenges remain to be addressed: Slingshot does not yet address privacy concerns, provide protocols for secure replica management, manage surrogate load, or decide when to instantiate and destroy replicas. The current implementation of Slingshot has two services: a speech recognizer and a remote desktop. The reported results show that instantiating a second-class replica on a surrogate lets these applications run nearly 2.6 times faster, and that replication lets Slingshot move services between surrogates with little user-perceived latency and recover gracefully from surrogate failure.
4.1 DESIGN PRINCIPLES
This subsection discusses the three principles followed in the design of Slingshot:
4.1.1 Location
Server location can be crucial to the performance of
remote execution. Suppose a handheld device is connected
to the Internet at a wireless hotspot. If the handheld device
executes code on a remote server, its network
communication not only passes through the wireless
medium; it also traverses the hotspot’s backhaul
connection and the wide-area Internet link. In a typical hotspot, the backhaul connection is the bottleneck. For instance, the nominal bandwidth of an 802.11g network (54 Mb/s) is more than an order of magnitude greater than that
of a T1 connection. If the handheld could instead execute
code on a server located at the hotspot, it could reduce
the communication delay associated with the bottleneck
link. For interactive applications that require sub-second
response time, server location can make the difference
between acceptable and unacceptable performance.
Network latency is also a concern. A server that is nearby in physical distance can often be quite distant in network topology due to the vagaries of Internet routing. Firewalls, VPNs, and NAT components add latency when connections cross administrative boundaries [9]. For mobile users, a journey of only a few hundred feet can substantially increase the round-trip time for communication with a remote server, whereas a server located at the current hotspot is only a network hop away.
4.1.2 Replicate instead of migrate
The need to locate services near mobile users implies that services must move over time. When a handheld user moves to a new location, a surrogate at the new hotspot will generally offer better response time than a surrogate at the previous hotspot. One option for moving functionality is migration: suspend the application on the previous surrogate, send its state to the new surrogate, and resume it there. This approach hurts the availability of the application while it is migrating. Slingshot deploys an alternative strategy that instantiates multiple replicas of each service: while new replicas are instantiated, existing replicas continue to serve the user. Slingshot replication is a form of primary-backup fault tolerance; it tolerates the failure of any number of surrogates. For each application, Slingshot creates a first-class replica on a reliable server known to the mobile user; this server is referred to as the home server. Slingshot
ensures that all application state can be reconstructed from
information stored on the client and the home server. This
allows all state on a surrogate to be regarded as soft state.
Even if all surrogates crash, Slingshot continues to service
requests using the first-class replica on the home server. In
contrast, a naive approach that migrates applications
between surrogates might lose state when a surrogate fails.
Note that Slingshot handles both stateful and stateless applications [10]. The result of a remote operation for a
stateful application depends upon the operations that have
previously executed. Slingshot assumes that applications
are deterministic; i.e. that given two replicas in the same
initial state, an identical sequence of operations sent to
each replica will produce identical results. Slingshot instantiates a new replica by checkpointing the first-class replica, shipping its volatile state to a surrogate, and replaying any operations that occurred after the checkpoint. Instantiating a new replica can take several minutes, since the volatile state must travel through the bandwidth-constrained backhaul connection. However,
existing replicas mitigate the perceived performance
impact. Until the new replica is instantiated, existing
replicas service application requests.
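The checkpoint-and-replay scheme described above can be sketched in a few lines of Python. This is a minimal illustration assuming deterministic, log-structured operations; the `Replica` class and function names are our own illustrative assumptions, not Slingshot's actual interfaces.

```python
# Sketch of Slingshot-style replica instantiation: checkpoint the
# first-class replica, ship its state to a surrogate, then replay the
# operations logged after the checkpoint.

class Replica:
    """A deterministic stateful service: state is the list of applied ops."""
    def __init__(self, state=None):
        self.state = list(state or [])

    def apply(self, op):
        self.state.append(op)      # deterministic: same ops -> same state
        return len(self.state)     # result depends on prior operations

def instantiate_replica(first_class, op_log, checkpoint_index):
    """Create a second-class replica from a checkpoint plus log replay."""
    # 1. Ship the checkpointed (volatile) state to the surrogate.
    new_replica = Replica(first_class.state[:checkpoint_index])
    # 2. Replay every operation that arrived after the checkpoint.
    for op in op_log[checkpoint_index:]:
        new_replica.apply(op)
    return new_replica

home = Replica()
log = ["open", "type:a", "type:b", "save"]
for op in log:
    home.apply(op)

surrogate = instantiate_replica(home, log, checkpoint_index=2)
assert surrogate.state == home.state   # replicas converge after replay
```

While this instantiation runs, the existing replicas keep serving requests, which is why the several-minute transfer over the backhaul is not visible to the user.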
4.1.3 Ease of maintenance
The business case for deploying a surrogate is similar to
that for deploying a wireless access point: desktop
computers have become cheap enough to cost not much
more than an access point. Further, researchers have
shown that surrogates can provide significant added value
to wireless users in the form of improved interactive
performance. Nevertheless, surrogates must be easy to
manage if they are to be widely deployed. Since we
envision surrogates at hotspots in airport lounges, coffee
shops, and bookstores, they must need negligible or no
supervision. They should be appliances that require little
configuration, and most troubleshooting should be
possible with a simple restart. For easy management of
surrogates, Slingshot adopts the following policies.
4.1.3.1 Minimizes the surrogate computing base
Each replica runs within its own virtual machine, which
encapsulates all application-specific state such as the guest
OS, shared libraries, executables, and data files. The
surrogate computing base consists of only the host
operating system (Linux), the virtual machine monitor
(VMware), and Slingshot. No configuration or setup is
needed to enable a surrogate to run new applications, since
each VM is self-contained.
4.1.3.2 Uses a heavyweight virtual machine
While paravirtualization and other lightweight
approaches to virtualization offer scalability and
performance benefits, they also restrict the types of
applications that can run within a VM. In contrast, by
deploying a heavyweight VMM (VMware), Slingshot
runs the two applications described in Section 4.2 without
modifying source code, even though their guest OS
(Windows XP) differs substantially from the surrogate
host OS (Linux)[11].
4.1.3.3 Places no hard state on surrogates
As surrogates hold only soft state, a normal or abnormal
restart does not result in incorrect application behavior or
data loss. If a surrogate crashes or is restarted, the only
effect observable by the user is that performance goes
down to the level that would have been available in the
absence of the surrogate.
4.2 Slingshot implementation
Figure 1 shows an overview of Slingshot. For simplicity of
exposition, this figure assumes that the mobile client is
executing a single application and that a single surrogate is
being used. In practice, we expect a Slingshot user to run
only one or two applications concurrently, with each
service replicated two or three times. Each Slingshot
application is partitioned into a local component that runs
on the mobile client and a remote service that is replicated
on the home server and surrogates. Ideally, we partition
the application so that resource-intensive functionality
executes as part of the remote service; the local component
typically contains only the user interface. This partitioning
enables demanding applications to run on clients such as
handhelds that are highly portable but also resource-
impoverished. The applications that we have studied so far
(speech recognition and remote desktops) already had
client server partitioning that fit this model. For some
applications, the best partitioning may not be immediately
clear—in these cases, we could leverage prior work to
choose a partition that fits our model.
Int. J. Advanced Networking and Applications
Volume: 08 Issue: 06 Pages: 3274-3282 (2017) ISSN: 0975-0290
In Figure 1, a first-class replica executes on the home
server and a second-class replica executes on the
surrogate. The home server,
described in Section 3.2, is a well-maintained server under
the administrative control of the user, e.g. the user's
desktop or a shared server maintained by the user's IT
department. Surrogates, in contrast, are administered by
third parties and are not assumed to be reliable. Slingshot creates the first-
class replica when the user starts the application—this
replica is required for execution of stateful services. As the
application runs, Slingshot dynamically instantiates one or
more second-class replicas on nearby surrogates. These
replicas improve interactive performance because they are
located closer to the user and respond faster than the first-
class replica on the distant home server. If no second-class
replicas are instantiated, Slingshot’s behavior is identical
to that of remote execution. Each replica executes within
its own virtual machine. Replica state consists of the
persistent state, or disk image of the virtual machine, and
the volatile state, which includes its memory image and
registers. On the home server, requests are redirected to a
service database that stores the disk blocks of every
remote service. On a surrogate, VMware reads are first
directed to a service cache; if a block is not found in the
cache, it is fetched from the service database on the home
server.
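The read path just described amounts to a fetch-on-miss cache backed by the home server. The sketch below illustrates this; the dict-based storage and all class names are illustrative assumptions, not Slingshot's implementation.

```python
# Sketch of the surrogate's disk-block path: reads go to a local service
# cache first and fall back to the service database on the home server.

class HomeServerDB:
    """Stores the disk blocks of every remote service (authoritative copy)."""
    def __init__(self, blocks):
        self.blocks = dict(blocks)
        self.fetches = 0               # count backhaul round trips

    def fetch(self, block_id):
        self.fetches += 1
        return self.blocks[block_id]

class SurrogateCache:
    """Soft state only: losing the cache costs performance, never data."""
    def __init__(self, db):
        self.db = db
        self.cache = {}

    def read(self, block_id):
        if block_id not in self.cache:     # miss: go over the backhaul
            self.cache[block_id] = self.db.fetch(block_id)
        return self.cache[block_id]        # hit: served locally

db = HomeServerDB({0: b"boot", 1: b"app"})
cache = SurrogateCache(db)
cache.read(0); cache.read(0); cache.read(1)
assert db.fetches == 2     # second read of block 0 hit the cache
```

Because the cache holds only copies of blocks whose master lives in the service database, discarding it on a surrogate restart is always safe.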
4.3 Slingshot applications
We have adapted the IBM Via Voice speech recognizer
and the VNC remote desktop to use Slingshot[5]. Due to
Slingshot’s use of virtual machine encapsulation, we did
not need to modify the source code of either application.
All Slingshot-specific functionality is performed within
proxies that intercept and redirect network traffic.
4.3.1 Speech recognition
We chose speech recognition as our first service because
of its natural application to handheld computers. We used
IBM Via Voice in our work. We created a server side
proxy that accepts audio input from a remote client and
passes it to Via Voice through that application’s API. Via
Voice returns a text string which the proxy sends to the
client. Via Voice and our server run on a Windows XP
guest OS executing within a VMware virtual machine.
The local component of this application displays the
speech recognition output. We chose to implement speech
recognition as a stateless service. One can certainly make
a reasonable argument that speech recognition should be a
stateful service in order to allow a user to train the
recognizer. However, we wanted to explore the
optimizations that Slingshot could provide for stateless
services.
4.3.2 Virtual desktop
VNC allows users to view and interact with another
computer from a mobile device. In the case of Slingshot,
the remote desktop is a Windows XP guest OS executing
within a VMware virtual machine. This allows users to
remotely execute any Windows application from their
handhelds. This is clearly a stateful service; i.e., a user
who edits a Word document expects the document to exist
when the service is next instantiated. Adapting VNC to
Slingshot presented interesting challenges[12]. First, the
VNC server sends display updates to the client in a non-
deterministic fashion. When pixels on the screen change, it
reports the new values to the client in a series of updates.
Two identical replicas may communicate the same change
with a different sequence of updates. The resulting screen
image at the end of the updates is identical but the
intermediary states may not be equivalent. A second
challenge is that some applications are inherently non-
deterministic. One annoying example is the Windows
system clock; two surrogates can send different updates
because their clocks differ. We noted that some non-
determinism is unlikely to be relevant to the user (e.g. a
slightly different clock value). Unfortunately, other non-
determinism affects correct execution. For example, a key
stroke or mouse click is often dependent upon the window
state. If a user opens a text editor and enters some text, the
key strokes must be sent to each replica only after the
editor has opened on that replica. If this is not done, the
key strokes will be sent to another application. To solve
this problem, we associate a precondition with each input
event. When the user executes the event, we log the state
of the window on the client to which that event was
delivered. When replaying the event on a server, we
require that the window be in an identical state before the
event is delivered. Since each event is associated with a
screen coordinate, we check state equality by comparing
the surrounding pixel values of the original execution and
the background execution. In the above example, this
strategy causes Slingshot to wait until the editor is
displayed before it delivers the text entry events. A second
issue with VNC is that its non-determinism prevents us
from mixing updates from different replicas. We designate
the best-performing replica as the foreground replica and
the remainder as background replicas. Only events from
the foreground replica are delivered to the client. If
performance changes, we quiesce the replicas before
choosing a new foreground replica. Two replicas are
quiesced by ensuring that the same events have been
delivered to each, and by requesting a full-screen update
from the new foreground replica to eliminate transition
artifacts. New events are logged while quiescing replicas.
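The precondition mechanism described above can be sketched as follows. The one-dimensional "screen" and the helper names are illustrative simplifications of comparing the pixel values that surround an event's coordinate; they are not the actual Slingshot code.

```python
# Sketch of the event-precondition check: each logged input event
# carries the pixels that surrounded it on the client; a replica may
# not receive the event until its own screen matches.

def surrounding_pixels(screen, x, radius=1):
    return tuple(screen[max(0, x - radius): x + radius + 1])

def log_event(screen, x, event):
    """On the client: record the event together with its precondition."""
    return {"x": x, "event": event,
            "precondition": surrounding_pixels(screen, x)}

def try_deliver(replica_screen, logged):
    """On a replica: deliver only once the surrounding pixels match."""
    return surrounding_pixels(replica_screen, logged["x"]) == logged["precondition"]

client_screen = ["w", "E", "w"]              # editor "E" open at x=1
ev = log_event(client_screen, 1, "keystroke:a")

assert not try_deliver(["w", "w", "w"], ev)  # editor not yet drawn: hold event
assert try_deliver(["w", "E", "w"], ev)      # screens match: safe to deliver
```

In the text-editor example, this check is what makes Slingshot hold the keystrokes until the editor window has actually appeared on each replica.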
Note that the foreground replica is rarely the first-class
replica since nearby surrogates provide better performance
in the common case. We were encouraged that VNC can
fit within the Slingshot model, since its behavior is
relatively nondeterministic. Based on this result, we
suspect that application-specific wrappers can be used to
enforce determinism for many applications. For those
applications where this approach proves infeasible, we
could use a VMM that enforces determinism at the ISA
level. Handhelds can improve interactive response time by
leveraging surrogate computers located at wireless
hotspots. Slingshot’s use of replication offers several
improvements over a strategy that simply migrates remote
services between computers. Replication provides good
response time for mobile users who move between
wireless hotspots; while a new replica is being
instantiated, other replicas continue to service user
requests. Replication also lets Slingshot recover gracefully
from surrogate failure, even when running stateful
services. Slingshot minimizes the cost of operating
surrogates. For these computers to be of maximum benefit,
they must be located at wireless hotspots, rather than in
machine rooms that are under the supervision of trained
operators. Slingshot uses off-the-shelf virtual machine
software to eliminate the need to install custom operating
systems, libraries, or applications to service mobile users.
All application-specific state associated with each service
is encapsulated within its virtual machine. Further,
Slingshot’s replication strategy means that surrogates need
not provide 24/7 availability. If a surrogate fails or is
rebooted, no state is lost. Harnessing surrogate
computation is a complex problem. Slingshot currently
provides several pieces of the puzzle, including the use of
replication to improve response time and the elimination
of hard surrogate state to improve ease of management.
Other pieces of the puzzle remain. Slingshot does not yet
address the privacy issues inherent to running computation
on third-party hardware[13]. Trusted computing efforts
provide promise in this area. Slingshot does not provide a
mechanism for securely controlling replica instantiation
and termination. Other areas of potential investigation are
load management and policies for creating and destroying
replicas. We believe that Slingshot will be an extremely
useful platform on which to conduct such investigations.
5. PUPPETEER
5.1 Introduction
Puppeteer is a system for adapting component-based
applications in mobile environments. It takes advantage of
the exported interfaces of these applications and the
structured nature of the documents they manipulate to
perform adaptation without changing the applications. The
system is structured in a modular fashion, allowing easy
addition of new applications and adaptation policies. The
initial prototype concentrated on adaptation to limited
bandwidth. It was designed to run on Windows NT, and
included support for a variety of adaptation policies for
Microsoft PowerPoint and Internet Explorer 5[7]. This
paper shows that Puppeteer can support complex policies
without any major change to the application and with
negligible cost.
Fig 3 Overall architecture
The figure above shows the four-tier Puppeteer system
architecture. It includes the applications to be adapted, the
Puppeteer client proxy, the Puppeteer server proxy, and
the data server. The application and data server are left
completely untouched. The Puppeteer client proxy and
server proxy work together to perform the adaptation. The
client proxy is in charge of executing the policies that
adapt the applications. The server proxy parses the
documents, exposing their structure and transcoding
components as needed and as requested by the client
proxy. The server proxy is assumed to have a robust
connection to the data server; in the most common
situation, it executes on the same machine as the data
server. Data servers can be arbitrary repositories of data
such as Web servers, file servers, or databases.
5.2 Puppeteer Architecture
The Puppeteer architecture consists of four types of
modules: Kernel, Driver, Transcoder, and Policy (see
Figure 2). The Kernel appears once in both the client and
the server Puppeteer proxy. A driver supports adaptation
for a particular component type. A driver for one
component type may call on a driver for another type if a
component of the second type is embedded in a
component of the first. The driver for a specific
application sits at the top of this driver hierarchy. Drivers
may execute in both the client and the server Puppeteer
proxies, as may Transcoders, which implement particular
transformations on component types. Policies implement
specific adaptation mechanisms and execute in the client
Puppeteer proxy.
5.2.1 Kernel
The Kernel is an application-independent module that
implements the Puppeteer protocol. The Kernel executes
in both the client and server proxies and drives the transfer
of document components. The Kernel has no knowledge
of the specifics of the documents being transmitted. It
operates on a format-neutral description of the documents,
referred to as the Puppeteer Intermediate Format (PIF). A
PIF consists of a skeleton of components, each of which
possesses a set of related data items. The skeleton captures
the structure of the data used by the application. It has the
form of a tree, with the root representing the document
and the children being pages, slides, or other elements of
the document. The skeleton is a multi-level data structure,
as elements at any level can contain sub-elements. The
skeleton itself is format-independent, but the data items
attached to its elements are format-specific[8]. To improve
performance, the Kernel batches requests for multiple
elements into a single message and provides support for
asynchronous requests.
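A PIF skeleton of this kind can be sketched as a small tree structure. The `PIFElement` class and its fields are our own illustrative assumptions, not Puppeteer's actual data structures.

```python
# Sketch of a Puppeteer Intermediate Format (PIF) skeleton: a tree
# whose root is the document and whose children are pages, slides,
# images, and so on, each carrying format-specific data items.

class PIFElement:
    def __init__(self, kind, data_items=None):
        self.kind = kind
        self.data_items = dict(data_items or {})  # format-specific payload
        self.children = []                        # multi-level: any depth

    def add(self, child):
        self.children.append(child)
        return child

doc = PIFElement("document", {"title": "talk.ppt"})
slide1 = doc.add(PIFElement("slide"))
slide1.add(PIFElement("image", {"bytes": 120_000}))
slide2 = doc.add(PIFElement("slide"))

assert [c.kind for c in doc.children] == ["slide", "slide"]
```

Note how the tree shape (document, slides, embedded image) is independent of any file format, while the `data_items` attached to each node would hold the format-specific payload.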
5.2.2 Drivers
Puppeteer requires an import and an export driver; a
tracking driver is necessary to implement compound
policies. The import driver parses the documents,
extracting their component structure and converting them
from their application-specific file formats to PIF. In the
general case where the application's file format is
parsable, either because it is human readable (e.g., XML)
or there is sufficient documentation to create a parser,
Puppeteer can parse the files directly to uncover the
structure of the data[13]. This leads to good performance,
and enables clients and servers to run on different
platforms. When the application only exposes a DMI, but
has an opaque file format, Puppeteer runs an instance of
the application on the server and uses the DMI to uncover
the structure of the data, in some cases using the
application as a parser. This configuration allows for a
high degree of flexibility, since Puppeteer need not
understand the application's file format. It creates,
however, more overhead on the server proxy, and requires
both the client and the server to run the application's
environment, which in most cases amounts to running the
same operating system on both servers and clients.
Parsing at the server does not work well for documents
that choose what data to fetch and display by executing a
script. Instead, import drivers for dynamic content run in
the Puppeteer client proxy and rely on an interception
mechanism that traces requests. Regardless of whether the
skeleton is built statically in the server proxy or
dynamically in the client proxy, any changes to the
skeleton are propagated by the Kernel at both ends to
maintain a consistent view of the skeleton. Export drivers
unparse the PIF and update the application using the DMI
interfaces the application exposes. A minimal export
driver has to support inserting new components into a
running application. Tracking drivers are important for
many complex policies: a tracking driver tracks which
components are being viewed by the user and intercepts
load and save requests. Tracking drivers can be
implemented using polling or event-registration
mechanisms.
5.2.3 Transcoders
Puppeteer makes extensive use of transcoding to perform
conversions on component data. The transcoders include
conventional ones, such as compression and reduction of
image resolution. A special transcoding mechanism is
used to support loading subsets of components. Each
element of the PIF skeleton has a number of associated
data items that, among other things, encode in a
component-specific format the relationship between the
component and its children. To load a subset of the
children of a given node, it is necessary to change the data
items associated with the parent node to reflect the fact
that only some of its children are being loaded. In effect,
by transcoding the parent node's data items, we create a
new temporary component that consists of only a subset
of the children of the original component.
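The subset-loading transcoder can be sketched as follows. The dict-based structure and field names are illustrative assumptions; the point is that the parent's data items are rewritten so they remain consistent with the reduced set of children.

```python
# Sketch of the subset-loading transcoder: rewrite a parent node's
# data items so it references only some of its children, yielding a
# temporary component the application can load on its own.

def transcode_subset(parent, keep):
    """Return a temporary parent that lists only the selected children."""
    subset = [c for i, c in enumerate(parent["children"]) if i in keep]
    return {
        "kind": parent["kind"],
        # Rewrite the child count so it is consistent with the subset.
        "data_items": {"child_count": len(subset)},
        "children": subset,
    }

presentation = {
    "kind": "document",
    "data_items": {"child_count": 3},
    "children": [{"kind": "slide", "n": i} for i in range(3)],
}

first_two = transcode_subset(presentation, keep={0, 1})
assert first_two["data_items"]["child_count"] == 2
assert presentation["data_items"]["child_count"] == 3  # original untouched
```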
5.2.4 Policies
Policies are modules that run on the client proxy and
control the fetching of components. Policies traverse the
skeleton, choosing which components to fetch and at what
integrity. Puppeteer provides support for two types of
Policies: general-purpose Policies that are independent of
the component type being adapted, and component-
specific Policies that use their knowledge about the
component to drive the adaptation. Typical Policies
choose components and integrities based on available
bandwidth and user-defined preferences. Other Policies
track the user or react to the way the user moves through
the document. Regardless of whether the decision to fetch
a component is made by a general-purpose Policy or by a
component-specific one, the actual data transfer is
performed by the Kernel, freeing the policy from the
details of communication.
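A general-purpose Policy of the kind described above can be sketched as a simple mapping from component size and link speed to an integrity level. The thresholds and level names are illustrative assumptions, not values from Puppeteer.

```python
# Sketch of a bandwidth-driven Puppeteer policy: walk the skeleton
# and pick, for each component, an integrity level that fits the
# available bandwidth.

def choose_integrity(component, bandwidth_kbps):
    """Map component size and link speed to an integrity level."""
    size = component["size_kb"]
    if bandwidth_kbps >= 1000 or size < 10:
        return "full"
    if bandwidth_kbps >= 100:
        return "reduced"        # e.g. transcoded to lower resolution
    return "placeholder"        # fetch later, on demand

skeleton = [
    {"name": "text", "size_kb": 5},
    {"name": "photo", "size_kb": 800},
]

plan = {c["name"]: choose_integrity(c, bandwidth_kbps=200) for c in skeleton}
assert plan == {"text": "full", "photo": "reduced"}
```

The actual transfer of the chosen components would then be handed off to the Kernel, as the text describes.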
5.3 The Adaptation Process
The adaptation process in Puppeteer is divided into three
stages: parsing the document to uncover the structure of
the data; fetching the initially selected components at
chosen integrity levels and supplying them to the
application; and, if the policy so dictates, updating the
application with newly fetched data. When the user opens
a document, the Kernel on the Puppeteer server proxy
instantiates an import driver for the appropriate document
type. The import driver parses the document, extracts its
skeleton and data, and generates a PIF. The Kernel then
transfers the document's skeleton to the Puppeteer client
proxy. The policies running on the client proxy ask the
Kernel to fetch an initial set of components at a chosen
integrity. This set of components is supplied to the
application in response to its open call. Once the
application has finished loading the document, it returns
control to the user. Meanwhile, Puppeteer knows that only
a portion of the document has been loaded. The policies in
the client proxy now decide which further components to
fetch; they instruct the Kernel to do so, and the client
proxy then uses the DMI to feed the newly fetched
components to the application.
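The three-stage flow above can be summarized in a short sketch. All function names are illustrative assumptions; the point is that the open call returns after a partial fetch, and the remainder is fed to the application later.

```python
# Sketch of Puppeteer's three-stage adaptation flow: parse the
# document into a skeleton, fetch an initial component set so the
# open call returns quickly, then upgrade the application later.

def parse_to_skeleton(document):
    return list(document)                    # stage 1: import driver

def initial_fetch(skeleton, budget):
    return skeleton[:budget]                 # stage 2: partial, fast open

def background_upgrade(skeleton, loaded):
    return [c for c in skeleton if c not in loaded]   # stage 3: the rest

doc = ["slide1", "slide2", "slide3", "slide4"]
skeleton = parse_to_skeleton(doc)
loaded = initial_fetch(skeleton, budget=2)   # user regains control here
later = background_upgrade(skeleton, loaded)

assert loaded == ["slide1", "slide2"]
assert loaded + later == doc                 # eventually the whole document
```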
6. CONCLUSION
As discussed in this paper, the cyber foraging technique is
becoming more popular day by day due to the increased
mobility of users' devices. Users want a cost-effective way
to meet their computing needs, and cyber foraging
provides exactly that functionality. Many applications
similar to systems like LOCUSTS, Slingshot, and
Puppeteer are expected to emerge in the near future and to
be released for commercial use on a massive scale within
a couple of years. The popularity of these applications is
set to surge in the years to come, and researchers in this
area are actively working to realize this.
REFERENCES
[1] Perry, Mark, O'Hara, Kenton, Sellen, Abigail, Brown,
Barry & Harper, Richard, (2001) "Dealing with Mobility:
Understanding Access Anytime, Anywhere," Transactions
on Computer-Human Interaction (TOCHI), Vol. 8, No. 4,
pp. 323-347.
[2] M. Satyanarayanan, (2001) "Pervasive Computing:
Vision and Challenges," IEEE Personal Communications,
Vol. 8, No. 4, pp. 10-17.
[3] Balan, Rajesh Krishna, Flinn, Jason, Satyanarayanan,
Mahadev, Sinnamohideen, S. & Yang, H.I., (2002) "The
Case for Cyber Foraging," presented at the 10th Workshop
on ACM SIGOPS European Workshop: Beyond the PC,
New York, NY, USA, pp. 87-92.
[4] Chun, Byung-Gon & Maniatis, Petros, (2010)
"Dynamically Partitioning Applications between Weak
Devices and Clouds," in 1st ACM Workshop on Mobile
Cloud Computing and Services (MCS 2010), San
Francisco, pp. 1-5.
[5] Kemp, Roelof, Palmer, Nicholas, Kielmann, Thilo,
Seinstra, Frank, Drost, Niels, Maassen, Jason & Bal,
Henri, (2009) "eyeDentify: Multimedia Cyber Foraging
from a Smartphone," in IEEE International Symposium on
Multimedia (ISM2009), San Diego, pp. 392-399.
[6] Fox, A., Gribble, S. D., Brewer, E. A. & Amir, E.,
(1996) "Adapting to Network and Client Variability via
On-Demand Dynamic Distillation," SIGPLAN Notices,
Vol. 31, No. 9, pp. 160-170.
[7] Katz, Randy H., (1994) "Adaptation and Mobility in
Wireless Information Systems," IEEE Personal
Communications, Vol. 1, No. 1, pp. 6-17.
[8] Noble, Brian D., Satyanarayanan, M., Narayanan,
Dushyanth, Tilton, James Eric, Flinn, Jason & Walker,
Kevin R., (1997) "Agile Application-Aware Adaptation
for Mobility," Operating Systems Review (ACM), Vol.
31, No. 5, pp. 276-287.
[9] M. Satyanarayanan, (2001) "Pervasive Computing:
Vision and Challenges," IEEE Personal Communications,
Vol. 8, No. 4, pp. 10-17.
[10] Balan, Rajesh Krishna, Flinn, Jason, Satyanarayanan,
Mahadev, Sinnamohideen, S. & Yang, H.I., (2002) "The
Case for Cyber Foraging," presented at the 10th Workshop
on ACM SIGOPS European Workshop: Beyond the PC,
New York, NY, USA, pp. 87-92.
[11] Balan, Rajesh Krishna, Satyanarayanan, Mahadev,
Park, SoYoung & Okoshi, Tadashi, (2003) "Tactics-Based
Remote Execution for Mobile Computing," in 1st
International Conference on Mobile Systems, Applications
and Services, San Francisco, pp. 273-286.
[12] Gu, Xiaohui, Messer, Alan, Greenberg, Ira, Milojicic,
Dejan & Nahrstedt, Klara, (2004) "Adaptive Offloading
for Pervasive Computing," IEEE Pervasive Computing
Magazine, Vol. 3, No. 3, pp. 66-73.
[13] Ou, Shumao, Yang, Kun & Liotta, Antonio, (2006)
"An Adaptive Multi-Constraint Partitioning Algorithm for
Offloading in Pervasive Systems," in 4th Annual IEEE
International Conference on Pervasive Computing and
Communications (PERCOM'06), Pisa, Italy, pp. 116-125.