Cloud computing currently plays a crucial role in the delivery of vital information technology services. A unique aspect of cloud computing is the cloud middleware and the related entities that support applications and networks. Cloud middleware and services at all levels form a distinct field of research, one that calls for analysis and literature surveys to elucidate possible study limitations. The purpose of this paper is to perform a systematic mapping of studies that capture cloud computing middleware, stacks, tools and services. The methodology adopted for this study is a systematic mapping review. The results showed that, on the contribution facet, the most published paper types were tool, model, method and process, at 18.10%, 13.79%, 6.03% and 8.62% respectively. In addition, in terms of the tool facet, evaluation and solution research had the largest numbers of articles, at 14.17% and 26.77% respectively. A striking feature of the systematic map is the high number of solution-research articles across all facets applied in the studies. This study showed clearly that there are gaps in cloud computing middleware and delivery services that should interest researchers and industry professionals wishing to work in this area.
Virtual Machine Allocation Policy in Cloud Computing Environment using CloudSim IJECEIAES
This document discusses virtual machine allocation policies in cloud computing environments using the CloudSim simulation tool. It begins with an introduction to cloud computing and discusses challenges related to resource management and energy consumption. It then reviews previous research on modeling approaches, energy optimization techniques, and network topologies. A UML class model is presented for analyzing energy consumption when accessing cloud servers arranged in a step network topology. The methodology section outlines how energy consumption by system components like processors, RAM, hard disks, and motherboards will be calculated. Simulation results will depict response times and cost details for different data center configurations and allocation policies.
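The summary above describes calculating energy consumption from the power drawn by components such as processors, RAM, hard disks, and motherboards. A minimal sketch of that per-component accounting follows; the wattages, utilizations, and the linear utilization model are illustrative assumptions, not values from the paper:

```python
def component_energy(power_watts, utilization, seconds):
    """Energy in joules for one component running at a given utilization."""
    return power_watts * utilization * seconds

def host_energy(components, seconds):
    """Sum energy over components such as CPU, RAM, disk, and motherboard.

    `components` maps a name to (rated power in W, utilization in [0, 1]).
    """
    return sum(component_energy(p, u, seconds) for p, u in components.values())

# Illustrative wattages and utilizations only -- not taken from the paper.
host = {
    "cpu":         (95.0, 0.60),
    "ram":         (8.0,  0.40),
    "disk":        (6.0,  0.20),
    "motherboard": (25.0, 1.00),  # assumed to draw constant power
}
energy_j = host_energy(host, seconds=3600)  # one hour of operation
```

A simulator such as CloudSim would feed time-varying utilizations into a model like this per scheduling interval rather than a single flat hour.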
Resource Allocation for Task Using Fair Share Scheduling AlgorithmIRJET Journal
1) The document discusses resource allocation algorithms for tasks in cloud computing. It analyzes the existing Spectral Clustering Scheduling (SCS) algorithm and proposes a Fair Share Scheduling algorithm to address SCS's limitations.
2) The SCS algorithm focuses only on start and end times, which can lead to overlapping tasks assigned to different resources. The proposed Fair Share Scheduling algorithm aims to reduce task overlapping and minimize resource utilization costs by prioritizing tasks based on time requirements and resource availability.
3) The paper evaluates the proposed algorithm and finds that it maximizes resource utilization and minimizes task overlapping compared to the existing SCS approach.
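The summary describes prioritizing tasks by time requirement and resource availability so tasks do not overlap on a resource. One plausible reading of that idea, sketched as a longest-first, earliest-free-resource scheduler (the ordering rule and data shapes are assumptions, not the paper's exact algorithm):

```python
import heapq

def fair_share_schedule(tasks, n_resources):
    """Assign (task_id, duration) pairs to resources without overlap.

    Tasks are ordered by their time requirement, and each goes to the
    resource that frees up earliest, which balances load and keeps two
    tasks from overlapping on the same resource.
    """
    # Heap of (time the resource becomes free, resource id).
    free_at = [(0, r) for r in range(n_resources)]
    heapq.heapify(free_at)
    schedule = {}
    for task_id, duration in sorted(tasks, key=lambda t: t[1], reverse=True):
        start, resource = heapq.heappop(free_at)
        schedule[task_id] = (resource, start, start + duration)
        heapq.heappush(free_at, (start + duration, resource))
    return schedule

plan = fair_share_schedule([("t1", 4), ("t2", 2), ("t3", 3), ("t4", 1)], 2)
```

Each task gets a (resource, start, end) triple; by construction no two intervals on the same resource can overlap.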
The concept of the genetic algorithm is specifically useful in load balancing for the best distribution of virtual machines across servers. In this paper, we focus on load balancing and on the efficient use of resources to reduce energy consumption without degrading cloud performance. Cloud computing is an on-demand service in which shared resources, information, software and other devices are provided according to the client's requirements at a specific time. The term is generally used in the context of the Internet, and the whole Internet can be viewed as a cloud. Capital and operational costs can be cut using cloud computing. Cloud computing is defined as a large-scale distributed computing paradigm, driven by economies of scale, in which a pool of abstracted, virtualized, dynamically scalable, managed computing power, storage, platforms and services is delivered on demand to external customers over the Internet. Cloud computing is a recent field in computational intelligence techniques which aims at surmounting computational complexity and dynamically provides services using very large, scalable and virtualized resources over the Internet. It is defined as a distributed system containing a collection of computing and communication resources located in distributed data centers which are shared by several end users. It has been widely adopted by industry, though many open issues remain, such as load balancing, virtual machine migration, server consolidation and energy management.
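As a rough illustration of how a genetic algorithm can drive VM distribution across servers, the toy sketch below encodes a placement as a chromosome and penalizes load imbalance as a proxy for wasted energy; the operators, parameters, and fitness function are illustrative choices, not the paper's:

```python
import random

def ga_place(vm_loads, n_servers, pop=30, gens=60, seed=1):
    """Toy genetic algorithm for VM placement: a chromosome maps each VM
    index to a server, and fitness (lower is better) penalizes the spread
    between the most and least loaded servers."""
    rng = random.Random(seed)

    def fitness(chrom):
        loads = [0.0] * n_servers
        for vm, server in enumerate(chrom):
            loads[server] += vm_loads[vm]
        return max(loads) - min(loads)

    population = [[rng.randrange(n_servers) for _ in vm_loads] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness)
        survivors = population[: pop // 2]           # elitist selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(vm_loads))
            child = a[:cut] + b[cut:]                # one-point crossover
            if rng.random() < 0.2:                   # mutation
                child[rng.randrange(len(child))] = rng.randrange(n_servers)
            children.append(child)
        population = survivors + children
    return min(population, key=fitness)

best = ga_place([5, 3, 8, 2, 7, 4], n_servers=3)
```

A production fitness function would also fold in energy and SLA terms rather than imbalance alone.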
IRJET - Cost Effective Workflow Scheduling in Bigdata (IRJET Journal)
This document summarizes research on cost-effective workflow scheduling in big data environments. It discusses a Pointer Gossip Content Addressable Network Montage Framework that allows automatic selection of target clouds, uniform access to clouds, and workflow data management while meeting service level agreements at lowest cost. The framework is evaluated using a real scientific workflow application in different deployment scenarios. Results show it can execute workflows with expected performance and quality at lowest cost.
The document proposes an Inter-Cloud Architecture (ICA) to address integration and interoperability issues in heterogeneous multi-domain cloud environments. The key components of the ICA include: (1) a multi-layer Cloud Services Model for vertical integration of cloud services, (2) an Inter-Cloud Control and Management Plane for application/infrastructure control across clouds, and (3) an Inter-Cloud Federation Framework to federate independently managed cloud infrastructure from different providers. The ICA aims to facilitate on-demand provisioning of complex cloud infrastructure involving multiple providers and integration with legacy services.
COMPLETE END-TO-END LOW COST SOLUTION TO A 3D SCANNING SYSTEM WITH INTEGRATED... (ijcsit)
3D reconstruction is a technique used in computer vision with a wide range of applications in areas like object recognition, city modelling, virtual reality, physical simulations, video games and special effects. Previously, performing a 3D reconstruction required specialized hardware. Such systems were often very expensive and were only available for industrial or research purposes. With the rise in availability of high-quality, low-cost 3D sensors, it is now possible to design inexpensive complete 3D scanning systems. The objective of this work was to design an acquisition and processing system that can perform 3D scanning and reconstruction of objects seamlessly. In addition, the goal of this work also included making the 3D scanning process fully automated by building a turntable and integrating it with the software, which means the user can perform a full 3D scan with only a press of a few buttons in our dedicated graphical user interface. Three main steps take the system from acquisition of point clouds to the finished reconstructed 3D model. First, the system acquires point cloud data of a person or object using an inexpensive camera sensor. Second, it aligns and converts the acquired point cloud data into a watertight mesh of good quality. Third, it exports the reconstructed model to a 3D printer to obtain a proper 3D print of the model.
CCCORE: Cloud Container for Collaborative Research IJECEIAES
Cloud-based research collaboration platforms provide scalable, secure and innovative environments that enable academic and scientific researchers to share research data and applications and to access high-performance computing resources. Dynamic allocation of resources according to the unpredictable needs of the applications used by researchers is a key challenge in collaborative research environments. We propose the design of a Cloud Container based Collaborative Research (CCCORE) framework to address dynamic resource provisioning under the variable workload of the compute- and data-intensive applications and analysis tools used by researchers. Our approach relies on on-demand, customized containerization and a comprehensive assessment of resource requirements to achieve optimal resource allocation in a dynamic collaborative research environment. We propose algorithms for the dynamic resource allocation problem in a collaborative research environment which aim to minimize finish time, improve throughput and achieve optimal resource utilization by employing underutilized residual resources.
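The residual-resource idea above can be illustrated with a greedy tightest-fit sketch: each container request goes to the host whose leftover capacity fits it most snugly, so underutilized residuals get consumed first. The host/container shapes and the fit metric are assumptions for illustration, not CCCORE's published algorithm:

```python
def allocate(containers, hosts):
    """Place (name, (cpu, mem)) container requests onto hosts.

    `hosts` maps a host name to its remaining (cpu, mem) capacity and is
    updated in place as containers are placed. Unplaceable requests map
    to None (they must wait or trigger a scale-out).
    """
    placement = {}
    for name, (cpu, mem) in containers:
        candidates = [h for h, (c, m) in hosts.items() if c >= cpu and m >= mem]
        if not candidates:
            placement[name] = None
            continue
        # Tightest fit: smallest total residual left after placement.
        best = min(candidates,
                   key=lambda h: (hosts[h][0] - cpu) + (hosts[h][1] - mem))
        c, m = hosts[best]
        hosts[best] = (c - cpu, m - mem)
        placement[name] = best
    return placement

hosts = {"h1": (4, 8), "h2": (2, 4)}  # remaining (cpu cores, mem GB) -- illustrative
placement = allocate([("a", (2, 4)), ("b", (2, 4)), ("c", (2, 4))], hosts)
```

A real scheduler would also weigh finish time and throughput, as the abstract's objectives require, rather than packing density alone.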
This document discusses security protocols for high performance grid computing architectures. It analyzes the different network layers in grid computing protocols and identifies various security disciplines. It also analyzes various security suites available in the TCP/IP protocol architecture. The paper aims to define security disciplines at different levels of cluster computing architecture and propose applicable security suites from the TCP/IP security protocol suite. Grid computing allows sharing and aggregation of distributed computing resources to enable more powerful applications. Security is an important consideration in grid computing due to sharing resources across administrative domains.
The increasing demands on data centers, especially for provisioning cloud applications (i.e. data-intensive applications), have drastically increased energy consumption, which is becoming a critical issue. Failing to handle the increase in energy consumption harms the environment and also hurts cloud providers' profits through rising costs. Various surveys have been carried out to address and classify energy-aware approaches and solutions. As this is an active research area with a growing number of proposals, more surveys are needed to support researchers. Thus, in this paper, we review the current state of existing related surveys, serving as a guideline for researchers and potential reviewers who wish to embark on new concerns and dimensions that complement existing related surveys. Our review highlights four main topics and concludes with some recommendations for future surveys.
Fog Computing – Enhancing the Maximum Energy Consumption of Data Servers (dbpublications)
Fog computing and IoT systems make use of end-user premises devices as local servers. Here, we identify the scenarios in which running applications from NDCs is more energy-efficient than running the same applications from an MDC. Through a survey and analysis of various energy consumption factors, such as flow-variants and time-variants with respect to the network equipment, we identify two energy consumption use cases and their respective results. Parameters such as current load, Pmax, Cmax and incremental energy, evaluated with respect to the system structure and various data-related parameters, lead to the conclusion that the NDC uses relatively less energy than the MDC. The study reveals that NDCs, as part of the fog, complement MDCs in supporting their applications, especially in scenarios with IoT-based applications where end users are the source data providers and server utilization can be maximized.
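The load, Pmax, Cmax, and incremental-energy parameters mentioned above fit a standard linear server power model, sketched below; all figures, and the flat network-overhead term added to the MDC path, are illustrative assumptions rather than numbers from the study:

```python
def server_power(load, p_idle, p_max, c_max):
    """Linear power model: idle draw plus incremental power per unit load.

    `load` and `c_max` share units (e.g. requests/s); the incremental
    term (p_max - p_idle) / c_max is the marginal power per request.
    """
    return p_idle + (p_max - p_idle) * (load / c_max)

# Illustrative figures only: a small edge node (NDC) serving locally,
# versus a large data center (MDC) reached across network equipment.
ndc_power = server_power(load=20, p_idle=5, p_max=25, c_max=100)
mdc_power = server_power(load=20, p_idle=60, p_max=300, c_max=1000) + 15  # + network gear (assumed)
```

Whether the NDC actually wins depends on how the MDC's idle power is amortized over its many tenants; the flat attribution here is the simplest possible accounting, which is why such comparisons hinge on the use case.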
Agent based Aggregation of Cloud Services - A Research Agenda (idescitation)
Cloud computing has come to the forefront as it overcomes some issues in computing such as storage space and processing power. It enables ubiquitous accessing and processing of information without the need for excessive computing facilities. In this work, we plan to outline some of the issues in aggregating cloud services, discover futuristic cloud service requests, develop a repository of the same, and propose an agent-based Quality of Service (QoS) provisioning system for cloud clients.
CLOUD COMPUTING IN THE PUBLIC SECTOR: MAPPING THE KNOWLEDGE DOMAIN (ijmpict)
This document summarizes a research study that mapped the knowledge domain of public sector cloud computing. The study utilized 188 research publications from the Web of Science database and the CiteSpace visualization software. Key findings included:
1) The top publishing countries were Canada, the US, Australia, China, and the UK. The top institutions were from Malaysia, Australia, and the UK.
2) Author collaboration was limited, though a few nascent research clusters were identified. The most published authors were from the Middle East, Malaysia, Australia, and China. The most co-cited authors focused on cloud standards and technologies.
3) The top co-cited journals were Communications of the ACM, NIST Special
International Journal on Web Service Computing (IJWSC) (ijwscjournal)
Web service computing is a recent evolution in the distributed computing series and an emerging, fast-growing paradigm. It is a diversified discipline suite related to the technologies of business process integration and management; grid, utility and cloud computing paradigms; autonomic computing; and business and scientific applications. It applies the theories of science and technology to bridging the gap between business services and IT services. Service-oriented computing addresses how to enable technology to help people perform business processes more efficiently and effectively, ultimately creating a win-win strategy between business organizations and end users. The greatest significance of web services is their interoperability, which allows businesses to dynamically publish, discover and aggregate a range of web services through the Internet to more easily create innovative products, business processes and value chains from both organizational and end-user points of view. For these reasons, this cross-discipline attracts researchers from various fields to conduct versatile research and experiments in this area.
An Efficient Cloud Scheduling Algorithm for the Conservation of Energy throug... (IJECEIAES)
Broadcasting is a well-known operation used to support different computing protocols in cloud computing. Attaining energy efficiency is a prominent challenge in the scheduling process used in cloud computing, as there are fixed limits that have to be met by the system. In this research paper we focus on cloud server maintenance and scheduling, using an interactive broadcasting energy-efficient computing technique together with the cloud computing server. Additionally, the remote host machines used for cloud services dissipate more power and thereby consume more and more energy. Power consumption is one of the main factors determining the cost of computing resources. The idea is to use avoidance technology to assign data center resources dynamically, depending on application demands, supporting cloud computing by optimizing the number of servers in use.
Providing a multi-objective scheduling tasks by Using PSO algorithm for cost ... (Editor IJCATR)
This article uses a multi-objective PSO algorithm for scheduling tasks for cost management in cloud computing. Any migration cost due to supply failure is considered as one objective; each task is a particle, recognized through an appropriate fitness function (governing how the particles are arranged) that minimizes the total expense. In addition, a weight is granted to each expenditure, reflecting the importance of that cost. The data used to simulate the proposed method are a series of academic and research data prepared from the Internet, and MATLAB software is used for the simulation. We simulate two problems: in the first, we consider four tasks on four vehicles and divide the tasks; in the second, we make the problem more complicated and consider six tasks on four vehicles. We record PSO's output for both problems over various iterations. Finally, the particle dispersion as well as the output of the cost function were computed for each particle.
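For readers unfamiliar with PSO, here is a minimal single-objective sketch of the particle update at the heart of such schedulers; the inertia and attraction coefficients and the toy cost function are generic textbook choices, not the paper's multi-objective weighting or its MATLAB setup:

```python
import random

def pso_min(cost, dim, bounds, n_particles=20, iters=100, seed=7):
    """Minimal particle swarm optimizer: each particle is pulled toward
    its personal best and the swarm's global best position."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)[:]
    w, c1, c2 = 0.7, 1.5, 1.5            # inertia, cognitive and social pulls
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest

# Toy cost: squared distance from a target allocation of 3.0 per dimension.
best = pso_min(lambda x: sum((xi - 3.0) ** 2 for xi in x), dim=4, bounds=(0, 10))
```

A multi-objective variant like the paper's would replace the scalar cost with a weighted sum (or Pareto ranking) over expense terms such as migration cost.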
This curriculum vitae summarizes the qualifications of Yang Hu, a Ph.D. candidate in computer engineering at the University of Florida. It outlines his education history, awards, academic talks, publications, patents, research projects and experience. Some of his research has focused on optimizing network function virtualization platforms, software-defined data center management, and renewable energy powered cloud computing infrastructures.
An efficient resource sharing technique for multi-tenant databases IJECEIAES
Multi-tenancy is a key component of the Software as a Service (SaaS) paradigm. Multi-tenant software has gained a lot of attention in academia, research and business. It provides scalability and economic benefits for both cloud service providers and tenants by sharing the same resources and infrastructure, with isolation across shared databases, network and computing resources, under service level agreement (SLA) compliance. In a multi-tenant scenario, active tenants compete for resources in order to access the database. If one tenant ties up the resources, the performance of all the other tenants may be restricted and fair sharing of the resources may be compromised. The performance of tenants must not be affected by the resource-intensive activities and volatile workloads of other tenants. Moreover, the prime goal of providers is to accomplish a low cost of operation while satisfying the specific schemas and SLAs of each tenant. Consequently, there is a need to design and develop effective, dynamic resource sharing algorithms which can handle the above issues. This work presents a model, referred to as the Multi-Tenant Dynamic Resource Scheduling Model (MTDRSM), embracing a query classification and worker sorting technique that enables efficient and dynamic resource sharing among tenants. The experiments show significant performance improvement over the existing model.
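The query classification and worker sorting technique attributed to MTDRSM might be sketched as follows; the light/heavy threshold, the worker capacities, and the round-robin hand-off are invented for illustration and are not taken from the paper:

```python
def schedule_tenant_queries(queries, workers):
    """Classify (tenant, query, est_cost) items as light or heavy, sort
    workers by capacity, and serve heavy queries first on the strongest
    workers so one tenant's heavy load cannot starve the rest."""
    HEAVY = 10.0                                   # assumed cost threshold
    heavy = [q for q in queries if q[2] >= HEAVY]
    light = [q for q in queries if q[2] < HEAVY]
    # Worker sorting: strongest (highest-capacity) workers first.
    ranked = sorted(workers, key=lambda w: w[1], reverse=True)
    assignment, i = {}, 0
    for tenant, query, cost in heavy + light:
        worker = ranked[i % len(ranked)][0]        # round-robin over ranked pool
        assignment.setdefault(worker, []).append((tenant, query))
        i += 1
    return assignment

queries = [("t1", "q1", 12.0), ("t2", "q2", 3.0), ("t1", "q3", 15.0), ("t3", "q4", 1.0)]
workers = [("w1", 4), ("w2", 8)]                   # (name, capacity) -- illustrative
plan = schedule_tenant_queries(queries, workers)
```

A real implementation would re-estimate query costs continuously and enforce per-tenant quotas rather than rely on a fixed threshold.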
FDMC: Framework for Decision Making in Cloud for Efficient Resource Management (IJECEIAES)
Effective resource management is one of the critical success factors for a precise virtualization process in cloud computing in the presence of dynamic user demands. A review of existing research on resource management in the cloud shows that there is still large scope for enhancement: existing techniques do not completely exploit the features of virtual machines when performing resource allocation. This paper presents FDMC, a Framework for Decision Making in Cloud, which gives VMs a better capability to perform resource allocation. The contribution of FDMC is a joint operation of VMs that ensures faster processing of tasks and thereby withstands growing traffic. The study outcome was compared with existing systems, showing that FDMC performs better on the scales of task allocation time, number of cores wasted, amount of storage wasted, and communication cost.
WiSANCloud: a set of UML-based specifications for the integration of Wireless... (Priscill Orue Esquivel)
Given the current trend of combining the advantages of Wireless Sensor and Actor Networks (WSANs) with cloud computing technology, this work proposes a set of specifications based on the Unified Modeling Language (UML) to provide a general framework for the design of the integration of these components. One of the keys to the integration is the architecture of the WSAN, due to its structural relationship with the cloud in the definition of the combination. Regarding the standard applied in the integration, UML and its subset, the Systems Modeling Language (SysML), are proposed by the Object Management Group (OMG) to deal with cloud applications, which marks the starting point of the process of designing specifications for WSAN-Cloud integration. Based on the current state of UML tools for analysis and design, several aspects must be taken into account in order to define the integration process.
Most downloaded article for a year in academia - Advanced Computing: An Inte... (acijjournal)
Advanced Computing: An International Journal (ACIJ) is a bimonthly open-access, peer-reviewed journal that publishes articles which contribute new results in all areas of advanced computing. The journal focuses on all technical and practical aspects of high performance computing, green computing, pervasive computing, cloud computing, etc. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on understanding advances in computing and establishing new collaborations in these areas.
This document provides an introduction and overview of cloud computing. It discusses the instructor's background and credentials. The class objectives are outlined, including describing cloud concepts, technologies, and approaches. Key aspects of building and migrating systems to the cloud are also covered, along with associated costs, benefits, security issues and standards. Several reference articles on cloud computing are listed. The document concludes with an overview of cloud service models, deployment models, providers such as Amazon and Google, and a brief comparison of cloud platforms.
The document discusses the integration of cloud computing and internet of things (IoT). It describes how cloud and IoT are complementary technologies that can benefit each other by merging together. The cloud can provide unlimited storage, processing and communication resources to address IoT's constraints, while IoT can extend the cloud's reach to real-world devices and enable new application scenarios. The document reviews literature on this emerging CloudIoT paradigm and analyzes challenges, application scenarios, platforms and open issues in further integrating cloud and IoT technologies.
Novel holistic architecture for analytical operation on sensory data relayed... (IJECEIAES)
With the increasing adoption of sensor-based applications, there is an exponential rise in sensory data, which eventually takes the shape of big data. However, the practicality of executing high-end analytical operations over resource-constrained big data has never been studied closely. A review of existing approaches shows no cost-effective scheme for big data analytics over large-scale sensory data processing that can be used directly as a service. Therefore, the proposed system introduces a holistic architecture where streamed data, after knowledge extraction, can be offered in the form of services. Implemented in MATLAB, the proposed study uses a very simplistic approach, considering the energy constraints of the sensor nodes, and finds that the proposed system offers better accuracy, reduced mining duration (i.e. faster response time), and reduced memory dependencies, proving that it offers a cost-effective analytical solution in contrast to existing systems.
Efficient and reliable hybrid cloud architecture for big database (ijccsa)
The objective of our paper is to propose a cloud computing framework which is feasible and necessary for handling huge data. In our prototype system we considered the national ID database structure of Bangladesh, which is prepared by the Election Commission of Bangladesh. Using this database we propose an interactive graphical user interface for Bangladeshi People Search (BDPS) that uses a hybrid structure of cloud computing handled by Apache Hadoop, with the database implemented in HiveQL. The infrastructure divides into two parts: a locally hosted cloud based on Eucalyptus and a remote cloud implemented on the well-known Amazon Web Services (AWS). Some common problems in the Bangladesh context, including data traffic congestion, server timeouts and server-down issues, are also discussed.
The document proposes a three-stage model for energy utilization in cloud environments. The first stage defines cloud server capabilities using quantitative vectors. The second stage performs clustering to categorize cloud servers based on a cost analysis. The final stage maps user requests to cloud server clusters to identify the most efficient allocation based on energy and resource constraints.
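The three stages described above can be sketched end to end; the capability vectors, the scalar cost function, and the tier thresholds below are illustrative assumptions, not the paper's actual cost analysis:

```python
def cluster_servers(servers, cost):
    """Stage 2: bucket servers into tiers by a scalar cost score
    (a stand-in for the paper's unspecified cost analysis)."""
    clusters = {"cheap": [], "mid": [], "expensive": []}
    for name, caps in servers.items():
        c = cost(caps)
        tier = "cheap" if c < 1.0 else "mid" if c < 2.0 else "expensive"
        clusters[tier].append(name)
    return clusters

def map_request(request, servers, clusters, cost):
    """Stage 3: from the cheapest tier that can satisfy the request's
    (cpu, mem) vector, pick the lowest-cost capable server."""
    cpu, mem = request
    for tier in ("cheap", "mid", "expensive"):
        capable = [s for s in clusters[tier]
                   if servers[s][0] >= cpu and servers[s][1] >= mem]
        if capable:
            return min(capable, key=lambda s: cost(servers[s]))
    return None  # no server satisfies the energy/resource constraints

# Stage 1: quantitative capability vectors (cpu cores, mem GB) -- illustrative.
servers = {"s1": (2, 4), "s2": (8, 32), "s3": (4, 8)}
cost = lambda caps: 0.1 * caps[0] + 0.02 * caps[1]   # assumed linear energy cost
tiers = cluster_servers(servers, cost)
choice = map_request((3, 6), servers, tiers, cost)
```

The request (3, 6) skips s1 (too small) and lands on s3, the cheapest capable server, rather than the over-provisioned s2.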
ADVANCES IN HIGHER EDUCATIONAL RESOURCE SHARING AND CLOUD SERVICES FOR KSA (IJCSES Journal)
This document summarizes research on cloud computing services and resource sharing for higher education in Saudi Arabia. It discusses several frameworks and tools for evaluating cloud migration options, including decision support systems that allow users to select suitable cloud providers based on their requirements. Case studies on cloud adoption in various fields like healthcare, oil and gas, and education are also reviewed. The document concludes that cloud computing provides opportunities to reduce costs while improving access to resources, but security, reliability and control issues must be considered during migration planning.
Review and Classification of Cloud Computing Research (iosrjce)
IOSR journal of VLSI and Signal Processing (IOSRJVSP) is a double blind peer reviewed International Journal that publishes articles which contribute new results in all areas of VLSI Design & Signal Processing. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on advanced VLSI Design & Signal Processing concepts and establishing new collaborations in these areas.
Design and realization of microelectronic systems using VLSI/ULSI technologies require close collaboration among scientists and engineers in the fields of systems architecture, logic and circuit design, chips and wafer fabrication, packaging, testing and systems applications. Generation of specifications, design and verification must be performed at all abstraction levels, including the system, register-transfer, logic, circuit, transistor and process levels.
Grid computing is a model of distributed computing that uses geographically and administratively disparate resources to solve large problems. It involves sharing computing power, data, and other resources across organizational boundaries. Key aspects include applying resources from many computers to a single problem, combining resources from multiple administrative domains for tasks requiring large processing power or data, and using middleware to coordinate resources as a virtual system. The document then discusses definitions of grid computing from various organizations and the core functional requirements and characteristics needed for grid applications and users.
Cloud management and monitoring: a systematic mapping study (nooriasukmaningtyas)
A key component of ensuring that services are available on the cloud at the right time and in the right manner is adequate cloud management. This makes it possible to provide services that meet user demands. The purpose of this research is to carry out a systematic study of management and monitoring on the cloud. Three facets were applied in conducting the categorization: the contribution, research, and topic facets. The purpose was to determine the level of work carried out so far in the field of cloud management. This enabled the creation of a pictorial representation of the research coverage. The result of the study showed that there is no opinion research on cloud management. Generally, articles on experience research, philosophical research and metric are the lowest at 6.62%, 4.41% and 1.90% respectively, while articles on models, solution research and evaluation research are the highest with 52.38%, 46.32% and 31.62% respectively. The outcome of this study will stimulate further research in the area of cloud management and systematic studies.
A systematic mapping study of performance analysis and modelling of cloud sys... (IJECEIAES)
Cloud computing is a paradigm that uses utility-driven models in providing dynamic services to clients at all levels. Performance analysis and modelling are essential because of service level agreement guarantees. Studies on performance analysis and modelling are increasing productively across the cloud landscape, on issues like virtual machines and data storage. The objective of this study is to conduct a systematic mapping study of performance analysis and modelling of cloud systems and applications. A systematic mapping study is useful in visualizing and summarizing the research carried out in an area of interest. The systematic study provided an overview of studies on this subject by using a structure based on categorization. The results are presented in terms of research, such as evaluation and solution, and contribution, such as the tools and methods utilized. The results showed that there were more discussions on optimization in relation to tool, method and process with 6.42%, 14.29% and 7.62% respectively. In addition, analysis based on designs in terms of model had 14.29%, and publications relating to optimization in terms of evaluation research had 9.77%, validation 7.52%, experience 3.01%, and solution 10.51%. Research gaps were identified and should motivate researchers in pursuing further research directions.
The increasing demands on the usage of data centers, especially in provisioning cloud applications (i.e. data-intensive applications), have drastically increased energy consumption, which is becoming a critical issue. Failing to handle the increase in energy consumption leads to a negative impact on the environment, and also negatively affects cloud providers' profits due to increasing costs. Various surveys have been carried out to address and classify energy-aware approaches and solutions. As an active research area with an increasing number of proposals, more surveys are needed to support researchers in the field. Thus, in this paper, we intend to provide the current state of existing related surveys, which serves as a guideline for researchers as well as potential reviewers to embark on new concerns and dimensions that complement existing related surveys. Our review highlights four main topics and concludes with some recommendations for future surveys.
Fog Computing – Enhancing the Maximum Energy Consumption of Data Servers (dbpublications)
Fog Computing and IoT systems make use of devices on end-user premises as local servers. Here, we identify the scenarios in which running applications from NDCs is more energy-efficient than running the same applications from the MDC. Through a survey and analysis of various energy consumption factors, such as different flow-variants and time-variants with respect to the network equipment, we derive two energy consumption use cases and their respective results. Parameters such as current load, Pmax, Cmax and incremental energy, evaluated with respect to the system structure and various data-related parameters, lead to the conclusion that the NDC uses relatively less energy than the MDC. The study reveals that NDCs, as part of the Fog, complement MDCs in serving the respective applications, especially in scenarios where IoT-based applications are used, where end users are the source data providers and server utilization can be maximized.
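As a rough illustration of why an edge server can come out ahead, the kind of per-request energy accounting the abstract alludes to (idle power shared across active requests, plus an incremental term derived from Pmax and Cmax) can be sketched as below. This interprets NDC as a small edge data centre and MDC as a large remote one; every wattage and capacity figure, and the network-transport term charged to the MDC, is an invented assumption, not data from the study:

```python
def energy_per_request(p_idle, p_max, c_max, active_requests):
    """Toy power-attribution model: idle power is shared among the
    currently active requests, and the incremental power (p_max - p_idle)
    is attributed per unit of capacity used."""
    incremental = (p_max - p_idle) / c_max        # watts per request slot
    shared_idle = p_idle / max(active_requests, 1)
    return shared_idle + incremental

# A small, well-utilised edge box (NDC) -- illustrative figures
ndc = energy_per_request(p_idle=5.0, p_max=10.0, c_max=10, active_requests=9)

# A large remote data centre (MDC) plus assumed WAN transport overhead
transport = 2.0  # extra watts per request attributed to network equipment
mdc = energy_per_request(p_idle=200.0, p_max=400.0, c_max=500,
                         active_requests=100) + transport
```

Under these made-up numbers the NDC figure is lower than the MDC figure, matching the direction of the abstract's conclusion; with a lightly loaded edge box the comparison can flip, which is why the study frames the result in terms of specific scenarios.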
Agent based Aggregation of Cloud Services - A Research Agenda (idescitation)
Cloud computing has come to the forefront as it overcomes some of the issues in computing, such as storage space and processing power. It enables ubiquitous access to and processing of information without the need for excessive computing facilities. In this work, we briefly discuss some of the issues in aggregating cloud services, discover futuristic cloud service requests, develop a repository of the same, and propose an agent-based Quality of Service (QoS) provisioning system for cloud clients.
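A broker agent of the kind proposed might, for instance, rank candidate services by a weighted QoS score. The sketch below is purely illustrative; the service names, attributes and weights are invented and do not come from the authors' system:

```python
# Toy agent-style aggregation: a broker scores candidate cloud services
# on QoS attributes and picks the best weighted match for a client.
services = {
    "svc-a": {"availability": 0.99, "latency_ms": 120, "cost": 0.10},
    "svc-b": {"availability": 0.95, "latency_ms": 40,  "cost": 0.25},
}
weights = {"availability": 0.5, "latency_ms": 0.3, "cost": 0.2}

def score(qos):
    # Higher availability is good; lower latency and cost are good,
    # so those terms are subtracted (latency scaled to seconds).
    return (weights["availability"] * qos["availability"]
            - weights["latency_ms"] * qos["latency_ms"] / 1000.0
            - weights["cost"] * qos["cost"])

best = max(services, key=lambda s: score(services[s]))
```

A real provisioning agent would also negotiate SLAs and re-rank as QoS measurements change, but the weighted-score core is the same idea.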
CLOUD COMPUTING IN THE PUBLIC SECTOR: MAPPING THE KNOWLEDGE DOMAIN (ijmpict)
This document summarizes a research study that mapped the knowledge domain of public sector cloud computing. The study utilized 188 research publications from the Web of Science database and the CiteSpace visualization software. Key findings included:
1) The top publishing countries were Canada, the US, Australia, China, and the UK. The top institutions were from Malaysia, Australia, and the UK.
2) Author collaboration was limited, though a few nascent research clusters were identified. The most published authors were from the Middle East, Malaysia, Australia, and China. The most co-cited authors focused on cloud standards and technologies.
3) The top co-cited journals were Communications of the ACM, NIST Special
International Journal on Web Service Computing (IJWSC) (ijwscjournal)
Web Service Computing is a recent evolution in the Distributed Computing series and an emerging, fast-growing paradigm in the present scenario. Web Service Computing is a diversified discipline suite related to the technologies of Business Process Integration and Management, Grid/Utility/Cloud Computing paradigms, autonomic computing, as well as business and scientific applications. It applies the theories of science and technology to bridge the gap between Business Services and IT Services. Service-oriented computing addresses how to enable technology to help people perform business processes more efficiently and effectively, ultimately creating a win-win strategy between business organizations and end users. The greatest significance of web services is their interoperability, which allows businesses to dynamically publish, discover, and aggregate a range of Web services through the Internet to more easily create innovative products, business processes and value chains, from both the organization and end-user points of view. Because of this, the cross-discipline attracts a variety of researchers from various disciplines to conduct versatile research and experiments in this area.
An Efficient Cloud Scheduling Algorithm for the Conservation of Energy throug... (IJECEIAES)
Broadcasting is a well-known operation used to support different computing protocols in cloud computing. Attaining energy efficiency is one of the prominent challenges in the scheduling process used in cloud computing, as there are fixed limits that have to be met by the system. In this research paper, we focus particularly on cloud server maintenance and the scheduling process, using an interactive broadcasting energy-efficient computing technique along with the cloud computing server. Additionally, the remote host machines used for cloud services dissipate more power and, with that, consume more and more energy. Power consumption is one of the main factors determining the cost of computing resources. The idea is to use avoidance technology to assign data center resources dynamically, depending on application demands, and to support cloud computing by optimizing the servers in use.
Providing a multi-objective scheduling tasks by Using PSO algorithm for cost ... (Editor IJCATR)
This article uses a multi-objective PSO algorithm for scheduling tasks for cost management in cloud computing. Any migration cost due to supply failure is considered as one objective; each task is a small particle, recognized by use of an appropriate fitness schedule function (describing how the particles are arranged), so that the total expense is minimized. In addition, a weight is assigned to each expenditure, reflecting the importance of its cost. The data used to simulate the proposed method are a series of academic and research data prepared from the Internet, and MATLAB software is used for the simulation. We simulate two cases: in the first, we consider four tasks on four vehicles and divide the tasks; in the second, we make the problem more complicated and consider six tasks on four vehicles. We record PSO's output for both cases over various iterations. Finally, the particle dispersion as well as the output of the cost function were computed for each particle.
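For readers unfamiliar with the approach, a weighted-sum PSO over task-to-machine assignments (the abstract's "vehicles" are treated as machines here) can be sketched as below. The cost matrix, weights and migration penalty are invented for illustration; the paper's own simulation is in MATLAB, so this Python version is only a conceptual analogue, not the authors' code:

```python
import random

def pso(cost_fn, dim, lo, hi, n_particles=20, iters=60, seed=1):
    """Basic particle swarm: each particle tracks its personal best and
    is pulled toward it and toward the global best each iteration."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [cost_fn(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and attraction weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            v = cost_fn(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Illustrative 4-tasks-on-4-machines case: execution cost plus a migration
# penalty, combined into one fitness via assumed weights.
exec_cost = [[4, 2, 8, 6], [3, 7, 2, 5], [6, 3, 4, 9], [5, 8, 6, 2]]
migration_penalty, w_exec, w_mig = 1.0, 0.8, 0.2
current = [0, 0, 0, 0]                 # where each task currently runs

def fitness(x):
    assign = [min(int(v), 3) for v in x]   # round position to machine index
    e = sum(exec_cost[t][assign[t]] for t in range(4))
    m = sum(migration_penalty for t in range(4) if assign[t] != current[t])
    return w_exec * e + w_mig * m

best, val = pso(fitness, dim=4, lo=0.0, hi=3.999)
```

The weighted sum is the simplest way to fold two objectives (execution cost and migration cost) into one fitness; the swarm reliably beats the no-migration baseline of 14.4 on this tiny landscape.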
This curriculum vitae summarizes the qualifications of Yang Hu, a Ph.D. candidate in computer engineering at the University of Florida. It outlines his education history, awards, academic talks, publications, patents, research projects and experience. Some of his research has focused on optimizing network function virtualization platforms, software-defined data center management, and renewable energy powered cloud computing infrastructures.
An efficient resource sharing technique for multi-tenant databases (IJECEIAES)
Multi-tenancy is a key component of the Software as a Service (SaaS) paradigm. Multi-tenant software has gained a lot of attention in academia, research and the business arena. It provides scalability and economic benefits for both cloud service providers and tenants by sharing the same resources and infrastructure, in isolation of shared databases, network and computing resources, with Service Level Agreement (SLA) compliance. In a multi-tenant scenario, active tenants compete for resources in order to access the database. If one tenant blocks the resources, the performance of all the other tenants may be restricted and fair sharing of the resources may be compromised. The performance of tenants must not be affected by resource-intensive activities and volatile workloads of other tenants. Moreover, the prime goal of providers is to achieve a low cost of operation while satisfying the specific schemas/SLAs of each tenant. Consequently, there is a need to design and develop effective and dynamic resource sharing algorithms that can handle the above-mentioned issues. This work presents a model referred to as the Multi-Tenant Dynamic Resource Scheduling Model (MTDRSM), embracing a query classification and worker sorting technique that enables efficient and dynamic resource sharing among tenants. The experiments show significant performance improvement over existing models.
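One plausible reading of "query classification and worker sorting" is sketched below: queries are classified by estimated cost, workers are kept sorted by current load, and a per-tenant cap stops any one tenant from monopolising the heavy slots. The thresholds, cap and class names are assumptions for illustration, not MTDRSM's actual design:

```python
import heapq
from collections import namedtuple

Query = namedtuple("Query", "tenant sql est_cost")

def classify(q):
    """Very rough classification by estimated cost (threshold assumed)."""
    return "heavy" if q.est_cost > 50 else "light"

class Scheduler:
    def __init__(self, n_workers, heavy_cap=1):
        self.workers = [(0, i) for i in range(n_workers)]  # (load, id) heap
        heapq.heapify(self.workers)
        self.heavy_running = {}
        self.heavy_cap = heavy_cap

    def dispatch(self, q):
        kind = classify(q)
        if kind == "heavy" and self.heavy_running.get(q.tenant, 0) >= self.heavy_cap:
            return None  # defer: tenant already at its heavy-query cap
        load, wid = heapq.heappop(self.workers)   # least-loaded worker
        heapq.heappush(self.workers, (load + q.est_cost, wid))
        if kind == "heavy":
            self.heavy_running[q.tenant] = self.heavy_running.get(q.tenant, 0) + 1
        return wid

s = Scheduler(n_workers=2)
a = s.dispatch(Query("t1", "SELECT ... JOIN ...", 80))  # heavy, admitted
b = s.dispatch(Query("t1", "SELECT ... JOIN ...", 90))  # heavy, deferred
c = s.dispatch(Query("t2", "SELECT 1", 5))              # light, least-loaded worker
```

The heap keeps worker selection at the least-loaded node cheap, while the per-tenant counter is the isolation mechanism: tenant t1's second heavy query is deferred instead of starving t2.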
FDMC: Framework for Decision Making in Cloud for Efficient Resource Management (IJECEIAES)
Effective resource management is one of the critical success factors for a precise virtualization process in cloud computing in the presence of dynamic user demands. After reviewing the existing research work on resource management in the cloud, it was found that there is still large scope for enhancement. The existing techniques were found not to fully utilize the potential features of virtual machines in performing resource allocation. This paper presents a framework called FDMC, or Framework for Decision Making in Cloud, that gives VMs a better capability to perform resource allocation. The contribution of FDMC is a joint operation of VMs to ensure faster processing of tasks and thereby withstand increasing traffic. The study outcome was compared with some existing systems, finding that FDMC delivers better performance on the scales of task allocation time, amount of cores wasted, amount of storage wasted, and communication cost.
WiSANCloud: a set of UML-based specifications for the integration of Wireless... (Priscill Orue Esquivel)
Given the current trend of combining the advantages of Wireless Sensor and Actor Networks (WSANs) with Cloud Computing technology, this work proposes a set of specifications, based on the Unified Modeling Language (UML), to provide a general framework for the design of the integration of said components. One of the keys to the integration is the architecture of the WSAN, due to its structural relationship with the Cloud in the definition of the combination. Regarding the standard applied in the integration, UML and its subset, the Systems Modeling Language (SysML), are proposed by the Object Management Group (OMG) to deal with cloud applications; this indicates the starting point of the process of designing specifications for WSAN-Cloud integration. Based on the current state of UML tools for analysis and design, there are several aspects to take into account in order to define the integration process.
Most downloaded article for a year in academia - Advanced Computing: An Inte... (acijjournal)
Advanced Computing: An International Journal (ACIJ) is a bi monthly open access peer-reviewed journal that publishes articles which contribute new results in all areas of the advanced computing. The journal focuses on all technical and practical aspects of high performance computing, green computing, pervasive computing, cloud computing etc. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on understanding advances in computing and establishing new collaborations in these areas.
This document provides an introduction and overview of cloud computing. It discusses the instructor's background and credentials. The class objectives are outlined, including describing cloud concepts, technologies, and approaches. Key aspects of building and migrating systems to the cloud are also covered, along with associated costs, benefits, security issues and standards. Several reference articles on cloud computing are listed. The document concludes with an overview of cloud service models, deployment models, providers such as Amazon and Google, and a brief comparison of cloud platforms.
The document discusses the integration of cloud computing and internet of things (IoT). It describes how cloud and IoT are complementary technologies that can benefit each other by merging together. The cloud can provide unlimited storage, processing and communication resources to address IoT's constraints, while IoT can extend the cloud's reach to real-world devices and enable new application scenarios. The document reviews literature on this emerging CloudIoT paradigm and analyzes challenges, application scenarios, platforms and open issues in further integrating cloud and IoT technologies.
Task Scheduling methodology in cloud computing (Qutub-ud-Din)
This document outlines a proposed methodology for developing efficient task scheduling strategies in cloud computing. It begins with introductions to cloud computing and task scheduling. It then reviews several relevant existing task scheduling algorithms from literature that focus on objectives like reducing costs, minimizing completion time, and maximizing resource utilization. The problem statement indicates the goals are to reduce costs, minimize completion time, and maximize resource allocation. An overview of the proposed methodology's flow is then provided, followed by references.
A Survey on the Security Issues of Software Defined Networking Tool in Cloud ... (ijcnes)
The advent of the digital age has led to a rise in different types of data with every passing day. In fact, it is expected that half of the total data used around the world will soon be on the cloud. This complex data needs to be stored, processed and analyzed to gain information that can be used by several organizations. Cloud computing provides an appropriate platform for Software Defined Networking (SDN), meeting the communication and computing requirements of the latter. This makes cloud-based networking a viable research field in the current scenario. However, several issues must be addressed and risks mitigated in the L2 cloud server, with virtual networks and cloud federation being considered in network virtualization over the L3 cloud router. This research work explores the existing research challenges and discusses open issues for security in cloud computing and its uses in the relevant field by means of a comparative analysis of the L2 server and L3 router based on SDN tools. An analysis of such issues is discussed and summarized. Finally, the best tool is identified for use in cloud security.
Cloud Computing: A Perspective on Next Basic Utility in IT World (IRJET Journal)
This document discusses cloud computing and its architecture. It begins with an introduction to cloud computing, defining it as a model that provides infrastructure, platforms, and software as services. The key characteristics and service models of cloud computing are described.
The document then discusses the architecture of cloud computing, including the layers of Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). It also describes the deployment models of private cloud, public cloud, community cloud, and hybrid cloud.
The document outlines several challenges of cloud computing, such as resource allocation and scheduling, cost optimization, processing time and speed, memory management, load balancing, security issues, and fault tolerance.
Toward Cloud Network Infrastructure Approach Service and Security Perspective (ijtsrd)
This document summarizes research on cloud network infrastructure from a service and security perspective. It discusses cloud computing concepts including essential characteristics, deployment models, and service models. It also reviews the open-source cloud management platform OpenStack and software-defined networking (SDN). Additionally, it covers integrating OpenStack and SDN, designing demilitarized zones (DMZs) for security, and implementing firewalls in cloud infrastructure. The goal is to provide a secure cloud network infrastructure that allows for additional service implementation and functionality.
Researching How Cloud Computing Enhances the Businesses Growth (AJASTJournal)
The Internet of Things (IoT) is changing several sectors, including education, logistics, and manufacturing. The IoT is a web-based system that integrates and syncs a wide range of machines or appliances with gateways, third-party technologies, and apps on machinery and cloud computing networks. Cloud computing plays a significant part in the framework layer amid the thriving development of IoT. Across a range of industries, cloud computing technology has been built into distinctive forms: Everything-as-a-Service (XaaS), Education-as-a-Service (EaaS), Logistics-as-a-Service (LaaS), and Manufacturing-as-a-Service (MaaS). Researchers and professionals have paid substantial attention to the applicability of cloud computing in different sectors. Essential evaluation, framework architecture, and structural study are the primary forms of this science. Big data, computational technology, service orientation, and IoT all relate to cloud computing systems (e.g., XaaS, EaaS, LaaS, MaaS). This research analyzed the technological patterns in new cloud computing technology and examined cloud computing studies. The results demonstrate that knowledge and innovation are the main concerns guiding cloud science.
Cloud computing has been an attractive research area for the last few years, and there has been tremendous growth in the number of educational institutions all over the world that have either adopted or are considering migrating to cloud computing. However, there are many concerns and reservations about adopting conventional or public cloud based solutions. A new paradigm of cloud based solution has been proposed, namely, private cloud based solutions, which have become an attractive choice for educational institutions. This paper presents the adjustment and implementation of a private cloud based solution for a multi-campus educational institution, namely, Al-Balqa Applied University (BAU) in Jordan.
This document discusses 10 key research areas in cloud computing:
1. The Green Cloud - Improving energy efficiency and reducing consumption in cloud data centers.
2. Denial of Service issues - Addressing attacks that restrict access to cloud resources.
3. Cloud Verification, Validation and Testing - Developing strategies for testing cloud software, applications and designs.
4. Cloud Security - Ensuring secure architectures and managing security across distributed cloud networks.
The document summarizes a presentation on cloud computing given by Norman Wiseman and Rachel Bruce of JISC. It defines cloud computing and discusses different cloud service models (SaaS, PaaS, IaaS). It outlines benefits of cloud like flexibility, cost savings, and environmental benefits but also risks around costs, data protection, contractual issues. JISC activities are described to help institutions develop cloud strategies including research on cloud adoption, brokerage services to negotiate deals, and support for flexible service delivery pilots across UK universities.
This document provides a comprehensive study of cloud computing. It discusses cloud computing models including infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). It explores the benefits of cloud computing such as scalability, flexibility, and reduced costs. However, it also notes concerns such as security, privacy, internet dependency, and availability. The document concludes by stating that vertical scalability presents a technical challenge in cloud environments.
This document discusses the potential benefits of implementing cloud computing technologies in libraries and e-libraries. It begins by providing background on cloud computing and how it can enable on-demand access to configurable computing resources. It then reviews related literature on cloud computing applications in educational and library settings. The document outlines some of the key advantages of cloud computing for libraries, such as improved efficiency, reduced expenses, enhanced collaboration, backup of information, and improved access and management of files. It also notes some potential disadvantages, such as issues relating to security, lack of control, and dependence on network performance. Overall, the document argues that cloud computing can be a useful tool for libraries to automate services and processes while reducing the need for on-site management.
This document provides an overview of cloud computing. It begins with definitions of cloud computing and discusses concepts like service-oriented architecture, cyber infrastructure, and virtualization. It describes different types of cloud architectures like public, private and hybrid clouds. It outlines the key components of cloud computing including cloud types, virtualization, and users. It discusses how cloud computing works and reviews the merits and demerits. Finally, it concludes that cloud computing allows for more efficient use of IT resources and flexible access to computing power and data from any internet-connected device.
AN EMPIRICAL STUDY OF USING CLOUD-BASED SERVICES IN CAPSTONE PROJECT DEVELOPMENT (csandit)
Cloud computing is gaining prominence and popularity in three important forms: Software as a Service, Platform as a Service, and Infrastructure as a Service. In this paper, we present an empirical study of how these cloud-based services were used in an undergraduate Computer Science capstone class to enable agile and effective development, testing, and deployment of sophisticated software systems, facilitate team collaboration among students, and ease project assessment and grading for teachers. In particular, students and teachers could leverage time, talent, and resources collaboratively and in a distributed manner, on their own schedules, from convenient locations, and using heterogeneous programming platforms, thanks to a completely All-In-Cloud environment. This environment eliminated the need to spend valuable development time on local setup, configuration, and maintenance, streamlined version control and group management, and greatly increased the collective productivity of student groups. Despite the relatively steep learning curve at the beginning of the semester, all nine groups of students benefited tremendously from the All-In-Cloud experience, and eight of them completed their substantial software projects successfully. The paper concludes with a vision for expanding and standardizing the adoption of the Cloud ecosystem in other Computer Science classes in the future.
ANALYSIS OF THE COMPARISON OF SELECTIVE CLOUD VENDORS SERVICES (ijccsa)
Cloud computing refers to a model that allows us to preserve our precious data and use computing and networking services on a pay-as-you-go basis without the need for a physical infrastructure. Cloud computing now provides powerful data processing and storage, exceptional availability and security, rapid accessibility and adaptation, flexibility and interoperability, and time and cost efficiency. It offers three platforms (IaaS, PaaS, and SaaS) with unique capabilities that promise to make it easier for a customer, organization, or trade to establish any type of IT business. In this article we compared a variety of cloud service characteristics across three chosen cloud providers, Amazon, Microsoft Azure, and Digital Ocean; after the comparison, it is straightforward to pick a specific cloud service from the available options. The findings of this study can be used not only to identify similarities and contrasts across various aspects of cloud computing, but also to suggest some areas for further study.
IRJET - A Workflow Management System for Scalable Data Mining on Clouds (IRJET Journal)
1. The document discusses a workflow management system for scalable data mining on clouds. It proposes using MapReduce and Hadoop frameworks to parallelize k-means clustering of large datasets on cloud infrastructure.
2. The system aims to improve efficiency, security, and transmission speed over existing cloud systems by generating hash codes for files before classification and storage on cloud. It uses deduplication to avoid redundant uploads.
3. The document outlines the system implementation, including user modules for registration, login, profile editing, training-data upload, file upload and download while avoiding redundancy, changing passwords, and logging out. It also discusses testing the system functionality using unit-testing libraries.
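The hash-before-upload deduplication idea in point 2 can be sketched in Python; the `DedupStore` class and the choice of SHA-256 are illustrative assumptions for this sketch, not details taken from the paper.

```python
import hashlib

def file_fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest used as the file's hash code."""
    return hashlib.sha256(data).hexdigest()

class DedupStore:
    """Toy cloud store that skips uploads whose hash is already known."""
    def __init__(self):
        self._by_hash = {}   # hash -> stored bytes
        self.uploads = 0     # count of actual (non-duplicate) uploads

    def upload(self, data: bytes) -> str:
        h = file_fingerprint(data)
        if h not in self._by_hash:   # only transmit genuinely new content
            self._by_hash[h] = data
            self.uploads += 1
        return h                     # reference the client keeps

store = DedupStore()
a = store.upload(b"training data v1")
b = store.upload(b"training data v1")   # duplicate: no second upload
c = store.upload(b"training data v2")
```

Computing the hash client-side before transfer is what lets the redundant upload be skipped entirely, which is where the transmission-speed gain comes from.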
An advanced ensemble load balancing approach for fog computing applications (IJECEIAES)
Fog computing has emerged as a viable concept for expanding the capabilities of cloud computing to the periphery of the network, allowing for efficient data processing and analysis from internet of things (IoT) devices. Load balancing is essential in fog computing because it ensures optimal resource utilization and performance among distributed fog nodes. This paper proposed an ensemble-based load-balancing approach for fog computing environments. The advanced ensemble load balancing approach (AELBA) uses real-time monitoring and analysis of fog node metrics, such as resource utilization, network congestion, and service response times, to facilitate effective load distribution. Based on the ensemble's collective decision-making, these metrics are fed into a centralized load-balancing controller, which dynamically adjusts the load distribution across fog nodes. The performance of the proposed ensemble load-balancing approach is evaluated and compared to traditional fog load-balancing techniques using extensive simulation experiments. The results demonstrate that our ensemble-based approach outperforms individual load-balancing algorithms regarding response time, resource utilization, and scalability. It adapts to dynamic fog environments, providing efficient load balancing even under varying workload conditions.
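As a rough illustration of ensemble-style load balancing, the sketch below has several simple scorers each rank fog nodes by one metric, with the controller picking the node with the best combined rank. The node fields, scorers, and rank-voting rule are assumptions made for this sketch, not the actual AELBA algorithm.

```python
# Each scorer votes using one monitored metric; lower is better for all three.
def by_cpu(node):        return node["cpu_util"]
def by_congestion(node): return node["net_congestion"]
def by_latency(node):    return node["resp_time_ms"]

SCORERS = (by_cpu, by_congestion, by_latency)

def pick_node(nodes):
    """Rank nodes under each scorer; the lowest total rank wins."""
    totals = {n["id"]: 0 for n in nodes}
    for scorer in SCORERS:
        for rank, node in enumerate(sorted(nodes, key=scorer)):
            totals[node["id"]] += rank
    return min(totals, key=totals.get)

nodes = [
    {"id": "fog-1", "cpu_util": 0.90, "net_congestion": 0.7, "resp_time_ms": 40},
    {"id": "fog-2", "cpu_util": 0.35, "net_congestion": 0.2, "resp_time_ms": 12},
    {"id": "fog-3", "cpu_util": 0.60, "net_congestion": 0.5, "resp_time_ms": 25},
]
```

Rank-voting rather than averaging raw metrics sidesteps the problem that CPU utilization, congestion, and latency live on different scales.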
This document discusses various computing paradigms such as fog computing, cloud computing, edge computing, mobile cloud computing, and fog-based computing. It provides an overview of fog computing, describing its layered architecture and comparing it to similar paradigms like cloud and edge computing. Some key points discussed include:
- Fog computing enhances cloud computing by extending services and resources to the network edge, supporting low-latency applications.
- It has a 3-layer architecture with end devices, fog nodes, and cloud layers, placing resources closer to end users than the cloud.
- Characteristics of fog computing include low latency, mobility support, location awareness, and decentralized storage and analytics.
- Challenges facing fog computing are also discussed.
This document is Priyanka R. Nayak's seminar report on cloud computing submitted to Visvesvaraya Technological University. It discusses key concepts of cloud computing including cyber infrastructure, service-oriented architecture, cloud types (public, private, hybrid), cloud architecture, and cloud components. The report provides an introduction to cloud computing and covers topics such as virtualization and users.
Similar to "Cloud middleware and services: a systematic mapping review" (20)
Square transposition: an approach to the transposition process in block cipher (journalBEEI)
The transposition process is needed in cryptography to create a diffusion effect on data encryption standard (DES) and advanced encryption standard (AES) algorithms as standard information security algorithms by the National Institute of Standards and Technology. The problem with DES and AES algorithms is that their transposition index values form patterns and do not form random values. This condition will certainly make it easier for a cryptanalyst to look for a relationship between ciphertexts because some processes are predictable. This research designs a transposition algorithm called square transposition. Each process uses square 8 × 8 as a place to insert and retrieve 64-bits. The determination of the pairing of the input scheme and the retrieval scheme that have unequal flow is an important factor in producing a good transposition. The square transposition can generate random and non-pattern indices so that transposition can be done better than DES and AES.
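The idea of pairing unequal insertion and retrieval flows over an 8 × 8 square can be sketched as follows; the specific row-wise/column-wise schemes here are illustrative assumptions, not the paper's actual square-transposition schemes.

```python
# Illustrative sketch: write 64 bits into an 8x8 square row by row,
# then read them out column by column, so the insertion scheme and the
# retrieval scheme have unequal "flows" as the abstract requires.

def square_transpose(bits):
    assert len(bits) == 64
    square = [bits[r * 8:(r + 1) * 8] for r in range(8)]       # row-wise insert
    return [square[r][c] for c in range(8) for r in range(8)]  # column-wise read

def square_untranspose(bits):
    """Inverse: insert column by column, read row by row."""
    assert len(bits) == 64
    square = [bits[c * 8:(c + 1) * 8] for c in range(8)]
    return [square[c][r] for r in range(8) for c in range(8)]

msg = list(range(64))
scrambled = square_transpose(msg)
```

A real design would vary the two schemes (and avoid this fixed, easily predicted mapping); the point of the sketch is only the mechanics of inserting and retrieving 64 bits with unequal flows.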
Hyper-parameter optimization of convolutional neural network based on particl... (journalBEEI)
The document proposes using a particle swarm optimization (PSO) algorithm to optimize the hyperparameters of a convolutional neural network (CNN) for image classification. The PSO algorithm is used to find optimal values for CNN hyperparameters like the number and size of convolutional filters. In experiments on the MNIST handwritten digit dataset, the optimized CNN achieved a testing error rate of 0.87%, which is competitive with state-of-the-art models. The proposed approach finds optimized CNN architectures automatically without requiring manual design or encoding strategies during training.
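A minimal PSO sketch on a toy two-dimensional objective, standing in for the CNN's validation error; the swarm size, inertia weight, and acceleration constants below are conventional defaults, not values from the paper.

```python
import random

random.seed(0)

def objective(x, y):
    """Toy stand-in for a CNN validation-error surface: minimum at (3, -2)."""
    return (x - 3) ** 2 + (y + 2) ** 2

def pso(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-10, 10), random.uniform(-10, 10)]
           for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # per-particle best
    gbest = min(pbest, key=lambda p: objective(*p))[:]  # swarm best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(*pos[i]) < objective(*pbest[i]):
                pbest[i] = pos[i][:]
                if objective(*pos[i]) < objective(*gbest):
                    gbest = pos[i][:]
    return gbest

best = pso()
```

For real CNN hyperparameters the continuous positions would be rounded or mapped to discrete choices (filter counts, kernel sizes) before each training run.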
Supervised machine learning based liver disease prediction approach with LASS... (journalBEEI)
In this contemporary era, the use of machine learning techniques is increasing rapidly in the field of medical science for detecting various diseases such as liver disease (LD). Around the globe, a large number of people die because of this deadly disease. By diagnosing the disease at an early stage, treatment can help cure the patient. In this research paper, a method is proposed to diagnose LD using supervised machine learning classification algorithms, namely logistic regression, decision tree, random forest, AdaBoost, KNN, linear discriminant analysis, gradient boosting and support vector machine (SVM). We also deployed a least absolute shrinkage and selection operator (LASSO) feature-selection technique on our dataset to suggest the attributes most highly correlated with LD. The predictions made by the algorithms with 10-fold cross-validation (CV) are tested in terms of accuracy, sensitivity, precision and f1-score to forecast the disease. It is observed that the decision tree algorithm has the best performance, with accuracy, precision, sensitivity and f1-score values of 94.295%, 92%, 99% and 96% respectively with the inclusion of LASSO. Furthermore, a comparison with recent studies is shown to prove the significance of the proposed system.
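The 10-fold cross-validation protocol used to score the classifiers can be sketched as follows; the rule-based `predict` stand-in is purely illustrative, not one of the paper's trained models.

```python
def k_fold_indices(n, k=10):
    """Split indices 0..n-1 into k contiguous, near-equal folds."""
    base, extra = divmod(n, k)
    folds, start = [], 0
    for i in range(k):
        size = base + (1 if i < extra else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_val_accuracy(xs, ys, predict, k=10):
    """Average the per-fold accuracy of `predict` over k held-out folds."""
    folds = k_fold_indices(len(xs), k)
    accs = []
    for test_idx in folds:
        correct = sum(predict(xs[i]) == ys[i] for i in test_idx)
        accs.append(correct / len(test_idx))
    return sum(accs) / k

# Toy data: the "classifier" is a fixed rule, so accuracy is perfect.
xs = list(range(100))
ys = [x >= 50 for x in xs]
acc = cross_val_accuracy(xs, ys, predict=lambda x: x >= 50)
```

In the real protocol the model is re-trained on the nine remaining folds before each held-out fold is scored; the sketch omits training since its placeholder rule has nothing to fit.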
A secure and energy saving protocol for wireless sensor networks (journalBEEI)
The research domain of wireless sensor networks (WSN) has been extensively studied due to innovative technologies and research directions addressing the usability of WSNs under various schemes. This domain permits dependable tracking of a diversity of environments for both military and civil applications. The key management mechanism is a primary protocol for keeping the privacy and confidentiality of the data transmitted among different sensor nodes in WSNs. Since nodes are small, they are intrinsically limited by inadequate resources such as battery lifetime and memory capacity. The proposed secure and energy saving protocol (SESP) for wireless sensor networks has a significant impact on overall network lifetime and energy dissipation. To encrypt sent messages, SESP uses the concept of public-key cryptography. It relies on sensor nodes' identities (IDs) to prevent messages from being replayed, achieving the security goals of authentication, confidentiality, integrity, availability, and freshness. Finally, simulation results show that the proposed approach produced better energy consumption and network lifetime than the LEACH protocol: sensors die after 900 rounds under the proposed SESP protocol, while under the low-energy adaptive clustering hierarchy (LEACH) scheme they die after 750 rounds.
Plant leaf identification system using convolutional neural network (journalBEEI)
This paper proposes a leaf identification system using a convolutional neural network (CNN). The system can identify five types of local Malaysian leaves: acacia, papaya, cherry, mango and rambutan. Using deep learning, the network is trained for image classification on a database of leaf images captured by mobile phone. ResNet-50 was the architecture used for the image-classification network and for training it for leaf identification. Recognizing photographed leaves requires several steps, starting with image pre-processing, feature extraction, plant identification, matching and testing, and finally extracting the results in MATLAB. The testing set consists of three types of images: white background, noise added, and random background. Finally, an interface for the leaf identification system was developed as the end software product using MATLAB App Designer. As a result, the accuracy achieved for each training set on the five leaf classes is above 98%, so the recognition process was successfully implemented.
Customized moodle-based learning management system for socially disadvantaged... (journalBEEI)
This study aims to develop a Moodle-based LMS with customized learning content and a modified user interface to facilitate pedagogical processes during the COVID-19 pandemic, and to investigate how teachers of socially disadvantaged schools perceived its usability and technology acceptance. A co-design process was conducted with two activities: 1) a needs-assessment phase using an online survey and interview sessions with teachers, and 2) the development phase of the LMS. The system was evaluated by 30 teachers from socially disadvantaged schools for relevance to their distance-learning activities. We employed the computer software usability questionnaire (CSUQ) to measure perceived usability and the technology acceptance model (TAM) with its 3 original variables (perceived usefulness, perceived ease of use, and intention to use) and 5 external variables (attitude toward the system, perceived interaction, self-efficacy, user interface design, and course design). The average CSUQ rating exceeded 5.0 on a 7-point scale, indicating that teachers agreed the information quality, interaction quality, and user interface quality were clear and easy to understand. TAM results concluded that the LMS design was usable, interactive, and well developed. Teachers reported an effective user interface that allows effective teaching operations, leading to prompt adoption of the system.
Understanding the role of individual learner in adaptive and personalized e-l... (journalBEEI)
Dynamic learning environments have emerged as powerful platforms in modern e-learning systems. Constantly changing learning situations force the platform to adapt and personalize its learning resources for students. Evidence suggests that adaptation and personalization of e-learning systems (APLS) can be achieved by utilizing learner modeling, domain modeling, and instructional modeling. In the APLS literature, questions have been raised about which individual characteristics are relevant for adaptation; a further problem is that the student attributes used in APLS often overlap and are inconsistent between studies. Therefore, this study proposes a list of learner-model attributes in dynamic learning to support adaptation and personalization. The study was conducted by exploring concepts from literature selected against defined criteria. We then describe the key concepts in student modeling and provide definitions and examples of the data values researchers have used. We also discuss the implementation of the selected learner model in providing adaptation in dynamic learning.
Prototype mobile contactless transaction system in traditional markets to sup... (journalBEEI)
1) Researchers developed a prototype contactless transaction system using QR codes and digital payments to support physical distancing during the COVID-19 pandemic in traditional markets.
2) The system allows sellers and buyers in traditional markets to conduct fast, secure transactions via smartphones without direct cash exchange. Buyers scan sellers' QR codes to view product details and make e-wallet payments.
3) Testing showed the system's functions worked properly and users found it easy to use and useful for supporting contactless transactions and digital transformation of traditional markets. However, further development is needed to increase trust in digital payments for users unfamiliar with the technology.
Wireless HART stack using multiprocessor technique with laxity algorithm (journalBEEI)
The use of a real-time operating system (RTOS) is required for the demarcation of industrial wireless sensor network (IWSN) stacks. In the industrial world, a vast number of sensors are utilised to gather various types of data. The data gathered by the sensors cannot be prioritised ahead of time, because all of the information is equally essential. As a result, a protocol stack is employed to guarantee that data is acquired and processed fairly. In IWSN, the protocol stack is implemented using an RTOS. The data collected from IWSN sensor nodes is processed using non-preemptive scheduling and the protocol stack, and then sent in parallel to the IWSN's central controller. The RTOS mediates between hardware and software. Packets must be sent at a specific time, and some packets may collide during transmission; this project is undertaken to avoid such collisions. As a prototype, the project is divided into two parts: the first uses an RTOS with the LPC2148 as a master node, while the second serves as a standard data-collection node to which sensors are attached. Any controller may be used in the second part, depending on the situation. WirelessHART allows the two nodes to communicate with each other.
Implementation of double-layer loaded on octagon microstrip yagi antenna (journalBEEI)
This document describes the implementation of a double-layer structure on an octagon microstrip yagi antenna (OMYA) to improve its performance at 5.8 GHz. The double-layer consists of two double positive (DPS) substrates placed above the OMYA. Simulation and experimental results show that the double-layer configuration increases the gain of the OMYA by 2.5 dB compared to without the double-layer. The measured bandwidth of the OMYA with double-layer is 14.6%, indicating the double-layer can increase both the gain and bandwidth of the OMYA.
The calculation of the field of an antenna located near the human head (journalBEEI)
In this work, a numerical calculation was carried out in one of the universal programs for automatic electrodynamic design. The calculation is aimed at obtaining numerical values of the specific absorption rate (SAR). The SAR value can be used to determine the effect of a wireless device's antenna on biological objects; the dipole parameters were selected for GSM1800. Investigation of the influence of distance to the cell phone on radiation shows that the electromagnetic radiation absorbed in a person's head, and its effect on the brain, decreases by almost three times. This is a very important result: the SAR value decreased by almost a factor of three, which is an acceptable outcome.
Exact secure outage probability performance of uplink-downlink multiple access... (journalBEEI)
In this paper, we study uplink-downlink non-orthogonal multiple access (NOMA) systems by considering secure performance at the physical layer. In the considered system model, the base station acts as a relay to allow two users on the left side to communicate with two users on the right side. Considering imperfect channel state information (CSI), secure performance must be studied since an eavesdropper wants to overhear signals processed on the downlink. To provide a secure performance metric, we derive exact expressions for the secrecy outage probability (SOP) and evaluate the impact of the main parameters on the SOP metric. The important finding is that higher secrecy performance can be achieved at a high signal-to-noise ratio (SNR). Moreover, the numerical results demonstrate that the SOP tends to a constant at high SNR. Finally, our results show that the power allocation factors and target rates are the main factors affecting the secrecy performance of the considered uplink-downlink NOMA systems.
Design of a dual-band antenna for energy harvesting application (journalBEEI)
This report presents an investigation into how to improve a current dual-band antenna to obtain better antenna parameters for energy-harvesting applications, and to develop and validate a new design operating at 2.4 GHz and 5.4 GHz. At 5.4 GHz, more data can be transmitted than at 2.4 GHz; however, 2.4 GHz has a longer radiation range, so it can be used farther from the antenna module than 5 GHz, which has a shorter range. The development of this project includes designing and testing the antenna using Computer Simulation Technology (CST) 2018 software and vector network analyzer (VNA) equipment. During the design process, the fundamental parameters of the antenna are measured and validated in order to identify the better antenna performance.
Transforming data-centric eXtensible markup language into relational database... (journalBEEI)
eXtensible markup language (XML) appeared internationally as the format for data representation over the web. Yet, most organizations are still utilising relational databases as their database solutions. As such, it is crucial to provide seamless integration via effective transformation between these database infrastructures. In this paper, we propose XML-REG to bridge these two technologies based on node-based and path-based approaches. The node-based approach is good to annotate each positional node uniquely, while the path-based approach provides summarised path information to join the nodes. On top of that, a new range labelling is also proposed to annotate nodes uniquely by ensuring the structural relationships are maintained between nodes. If a new node is to be added to the document, re-labelling is not required as the new label will be assigned to the node via the new proposed labelling scheme. Experimental evaluations indicated that the performance of XML-REG exceeded XMap, XRecursive, XAncestor and Mini-XML concerning storing time, query retrieval time and scalability. This research produces a core framework for XML to relational databases (RDB) mapping, which could be adopted in various industries.
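The range-labelling idea described above can be sketched as follows: each node gets a (start, end) interval such that one node is an ancestor of another exactly when its interval encloses the other's, and gaps are left between labels so new nodes can be added without re-labelling. The gap size, helper names, and tree encoding are assumptions of this sketch; XML-REG's actual labelling scheme may differ.

```python
STEP = 100  # leave room between consecutive labels for future inserts

def label(node, counter=None, out=None):
    """node = (tag, [children]); returns {tag: (start, end)} labels."""
    if counter is None:
        counter, out = [0], {}
    counter[0] += STEP
    start = counter[0]
    tag, children = node
    for child in children:
        label(child, counter, out)
    counter[0] += STEP
    out[tag] = (start, counter[0])
    return out

def is_ancestor(labels, a, b):
    """a is an ancestor of b iff a's interval strictly encloses b's."""
    (s1, e1), (s2, e2) = labels[a], labels[b]
    return s1 < s2 and e2 < e1

tree = ("bib", [("book", [("title", []), ("author", [])]), ("journal", [])])
labels = label(tree)
```

Encoding the hierarchy in intervals is what lets structural joins (ancestor/descendant tests) be answered from the relational table with simple numeric comparisons.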
Key performance requirement of future next wireless networks (6G) (journalBEEI)
The document provides an overview of the key performance indicators (KPIs) for 6G wireless networks compared to 5G networks. Some of the major KPIs discussed for 6G include: achieving data rates of up to 1 Tbps and individual user data rates up to 100 Gbps; reducing latency below 10 milliseconds; supporting up to 10 million connected devices per square kilometer; improving spectral efficiency by up to 100 times through technologies like terahertz communications and smart surfaces; and achieving an energy efficiency of 1 pico-joule per bit transmitted through techniques like wireless power transmission and energy harvesting. The document outlines how 6G aims to integrate terrestrial, aerial and maritime communications into a single network to provide ubiquitous connectivity.
Noise resistance territorial intensity-based optical flow using inverse confi... (journalBEEI)
This paper presents the use of the inverse confidential technique on a bilateral function with territorial intensity-based optical flow to prove its effectiveness in noise-resistant environments. In general, an image's motion vector is coded by the technique called optical flow, where sequences of images are used to determine the motion vector. However, the accuracy of the motion vector is reduced when the source image sequence is affected by noise. This work proved that the inverse confidential technique on a bilateral function can increase the accuracy of motion-vector determination by territorial intensity-based optical flow under noisy environments. We performed testing with several kinds of non-Gaussian noise on several patterns of standard image sequences, analyzing the resulting motion vectors in the form of the error vector magnitude (EVM) and comparing them with several noise-resistance techniques in the territorial intensity-based optical flow method.
Modeling climate phenomenon with software grids analysis and display system i... (journalBEEI)
This study aims to model climate change based on rainfall, air temperature, pressure, humidity and wind with GrADS software and to create a global warming module. The research uses a 3D model: define, design, and develop. The modeling results for the five climate elements are as follows: the annual average temperature in Indonesia in 2009-2015 was between 29 °C and 30.1 °C; the horizontal distribution of the annual average pressure in Indonesia in 2009-2018 was between 800 mBar and 1000 mBar; the horizontal distribution of the average annual humidity in Indonesia ranged between 27-57 in 2009 and 2011, and between 30-60 in 2012-2015, 2017 and 2018; during the east monsoon, wind circulation moved from northern to southern Indonesia, while during the west monsoon it moved from southern to northern Indonesia. The global warming module produced for SMA/MA is feasible to use, in accordance with the validator's score of 69, which is in the appropriate category, and a 91% positive response from teachers and students via questionnaire.
An approach of re-organizing input dataset to enhance the quality of emotion ... (journalBEEI)
The purpose of this paper is to propose an approach of re-organizing input data to recognize emotion based on short signal segments and increase the quality of emotional recognition using physiological signals. MIT's long physiological signal set was divided into two new datasets, with shorter and overlapped segments. Three different classification methods (support vector machine, random forest, and multilayer perceptron) were implemented to identify eight emotional states based on statistical features of each segment in these two datasets. By re-organizing the input dataset, the quality of recognition results was enhanced. The random forest shows the best classification result among three implemented classification methods, with an accuracy of 97.72% for eight emotional states, on the overlapped dataset. This approach shows that, by re-organizing the input dataset, the high accuracy of recognition results can be achieved without the use of EEG and ECG signals.
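The dataset re-organisation step, slicing one long signal into shorter segments with and without overlap, can be sketched as below. The window and step sizes are illustrative; the paper's actual segment lengths are not given here.

```python
def segment(signal, window, step):
    """Return fixed-length windows taken every `step` samples."""
    return [signal[i:i + window]
            for i in range(0, len(signal) - window + 1, step)]

signal = list(range(10))
plain      = segment(signal, window=4, step=4)  # non-overlapping segments
overlapped = segment(signal, window=4, step=2)  # 50% overlap
```

Choosing `step < window` is what produces the overlapped dataset: the same samples appear in adjacent segments, multiplying the number of training examples drawn from one recording.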
Parking detection system using background subtraction and HSV color segmentation (journalBEEI)
In a manual vehicle parking system, finding a vacant parking lot is difficult because drivers have to check vacant spaces directly. When many people park, this takes considerable time or requires many attendants. This research develops a real-time system to detect vacant parking spaces. The system uses the HSV color segmentation method to determine the background image, and the detection process uses the background subtraction method. Applying these two methods requires image preprocessing with several methods such as grayscaling and blurring (low-pass filtering), followed by thresholding and filtering to obtain the best image for detection. A region of interest (ROI) is determined to focus on the areas identified as empty parking. The parking detection process produces a best average accuracy of 95.76%, with a minimum threshold value of 0.4 at 255 pixels. This value is the best among 33 test data under several criteria, such as capture time, vehicle composition and color, the shape of shadows in the object's environment, and light intensity. This parking detection system can be implemented in real time to determine the position of an empty space.
Quality of service performances of video and voice transmission in universal ... (journalBEEI)
The universal mobile telecommunications system (UMTS) has the distinct benefit of supporting the wide range of quality of service (QoS) criteria that users require. The transmission of video and audio in real-time applications places a high demand on the cellular network, so QoS is a major concern in these applications. Providing QoS in the UMTS backbone network necessitates an active QoS mechanism to maintain the necessary level of service on UMTS networks. For UMTS networks, models of end-to-end QoS, total transmitted and received data, packet loss, and throughput-provisioning techniques are run and assessed, and the simulation results are examined. According to the results, appropriate QoS adaptation allows for specific voice and video transmission. Finally, by analyzing existing QoS parameters, the QoS performance of 4G/UMTS networks may be improved.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA) pavement, however, RCA pavement has received fewer comprehensive studies and sustainability assessments.
TIME DIVISION MULTIPLEXING TECHNIQUE FOR COMMUNICATION SYSTEM (HODECEDSIET)
Time Division Multiplexing (TDM) is a method of transmitting multiple signals over a single communication channel by dividing the channel's time into many segments, each of very short duration. These time slots are then allocated to different data streams, allowing multiple signals to share the same transmission medium efficiently. TDM is widely used in telecommunications and data communication systems.
### How TDM Works
1. **Time Slots Allocation**: The core principle of TDM is to assign distinct time slots to each signal. During each time slot, the respective signal is transmitted, and then the process repeats cyclically. For example, if there are four signals to be transmitted, the TDM cycle will divide time into four slots, each assigned to one signal.
2. **Synchronization**: Synchronization is crucial in TDM systems to ensure that the signals are correctly aligned with their respective time slots. Both the transmitter and receiver must be synchronized to avoid any overlap or loss of data. This synchronization is typically maintained by a clock signal that ensures time slots are accurately aligned.
3. **Frame Structure**: TDM data is organized into frames, where each frame consists of a set of time slots. Each frame is repeated at regular intervals, ensuring continuous transmission of data streams. The frame structure helps in managing the data streams and maintaining the synchronization between the transmitter and receiver.
4. **Multiplexer and Demultiplexer**: At the transmitting end, a multiplexer combines multiple input signals into a single composite signal by assigning each signal to a specific time slot. At the receiving end, a demultiplexer separates the composite signal back into individual signals based on their respective time slots.
### Types of TDM
1. **Synchronous TDM**: In synchronous TDM, time slots are pre-assigned to each signal, regardless of whether the signal has data to transmit or not. This can lead to inefficiencies if some time slots remain empty due to the absence of data.
2. **Asynchronous TDM (or Statistical TDM)**: Asynchronous TDM addresses the inefficiencies of synchronous TDM by allocating time slots dynamically based on the presence of data. Time slots are assigned only when there is data to transmit, which optimizes the use of the communication channel.
### Applications of TDM
- **Telecommunications**: TDM is extensively used in telecommunication systems, such as in T1 and E1 lines, where multiple telephone calls are transmitted over a single line by assigning each call to a specific time slot.
- **Digital Audio and Video Broadcasting**: TDM is used in broadcasting systems to transmit multiple audio or video streams over a single channel, ensuring efficient use of bandwidth.
- **Computer Networks**: TDM is used in network protocols and systems to manage the transmission of data from multiple sources over a single network medium.
### Advantages of TDM
- **Efficient Use of Bandwidth**: TDM all
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
Batteries -Introduction – Types of Batteries – discharging and charging of battery - characteristics of battery –battery rating- various tests on battery- – Primary battery: silver button cell- Secondary battery :Ni-Cd battery-modern battery: lithium ion battery-maintenance of batteries-choices of batteries for electric vehicle applications.
Fuel Cells: Introduction- importance and classification of fuel cells - description, principle, components, applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...University of Maribor
Slides from talk presenting:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presentation at IcETRAN 2024 session:
"Inter-Society Networking Panel GRSS/MTT-S/CIS
Panel Session: Promoting Connection and Cooperation"
IEEE Slovenia GRSS
IEEE Serbia and Montenegro MTT-S
IEEE Slovenia CIS
11TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONIC AND COMPUTING ENGINEERING
3-6 June 2024, Niš, Serbia
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...
Cloud middleware and services-a systematic mapping review
Bulletin of Electrical Engineering and Informatics
Vol. 10, No. 5, October 2021, pp. 2696~2706
ISSN: 2302-9285, DOI: 10.11591/eei.v10i5.3169
Journal homepage: http://beei.org
Cloud middleware and services-a systematic mapping review
Isaac Odun-Ayo¹, Marion Adebiyi², Olatunji Okesola³, Olufunke Vincent⁴
¹Department of Computer and Information Sciences, Covenant University, Ota, Nigeria
²Department of Computer Science, Landmark University, Omu-Aran, Nigeria
³Department of Computational Sciences, First Technical University, Ibadan, Nigeria
⁴Department of Computer Science, Federal University of Agriculture, Abeokuta, Nigeria
Article Info
Article history:
Received Nov 12, 2020
Revised Mar 17, 2021
Accepted Aug 23, 2021

ABSTRACT
Cloud computing currently plays a crucial role in the delivery of vital information technology services. A unique aspect of cloud computing is the cloud middleware and other related entities that support applications and networks. Cloud middleware and services at all levels constitute a specific field of research, and thus need analysis and paper surveys to elucidate possible study limitations. The purpose of this paper is to perform a systematic mapping of studies that capture cloud computing middleware, stacks, tools and services. The methodology adopted for this study is a systematic mapping review. The results showed that more papers were published on the contribution facet, with tool, model, method and process having 18.10%, 13.79%, 6.03% and 8.62% respectively. In addition, in terms of tool, evaluation and solution research had the largest number of articles, with 14.17% and 26.77% respectively. A striking feature of the systematic map is the high number of articles in solution research with respect to all aspects of the features applied in the studies. This study showed clearly that there are gaps in cloud computing middleware and delivery services that would interest researchers and industry professionals desirous of research in this area.
Keywords:
Cloud computing
Cloud delivery network
Cloud middleware
Cloud tools
Services
Services at all layers
Systematic mapping
This is an open access article under the CC BY-SA license.
Corresponding Author:
Isaac Odun-Ayo
Department of Computer and Information Sciences
Covenant University
Ota, Nigeria
Email: isaac.odun-ayo@covenantuniversity.edu.ng
1. INTRODUCTION
Cloud computing is characterized as a type of distributed and parallel system that consists of a series
of dynamically supplied interlinked and virtualized computers which are provided by the service provider as
several centralized computing resources with adequate service level agreements [1]. Cloud computing
provides environments for managing resources concerning flexible infrastructure, middleware, application
development frameworks and business applications. Cloud services are operated as pay-as-go or free vendor
services. Software-as-a-service (SaaS), platform-as-a-service (PaaS), and infrastructure-as-a-service (IaaS)
are the primary categories of cloud offerings. In addition to the cloud services, there are four cloud deployment models, which are private, public, community, and hybrid clouds, used to supply a variety of services that enhance operations in data centres, the internet of things, and edge and fog computing [2].
The cloud middleware and services at all layers (XaaS) enhance operations on the cloud by
connecting computers and devices to either cloud mobile or web applications which is also relevant to
performance analysis of such applications [3], [4]. In the traditional computing environment, the entire stack
which consists of applications, data, runtime, middleware, operating system, virtualization, servers, storage
and networking all reside in the data centre under the control of the organisation. Conversely, this entire stack
is now supplied via a cloud service provider (CSP) under SaaS. The middleware is under the control of the
user in PaaS, but managed by the CSP under IaaS. Various CSPs like Microsoft, Google, IBM and others
have highly robust middleware and other microservices made available to users. Express is a server-side web
framework for Node.js which helps to create APIs very quickly and easily by using hypertext transfer
protocol. Express has built-in middleware functions, but middleware also comes from third-party packages
and custom packages. For example, the middleware functions in Express have access to the request and
response objects.
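The request/response/next chaining described above is a general pattern, not specific to any one framework. The following sketch (illustrative only, in Python rather than the actual Express/Node.js API; all names are hypothetical) shows how each middleware function receives the request, the response, and a `next` callback that passes control down the chain:

```python
# Minimal sketch of the Express-style middleware pattern (illustrative,
# not the real Express API): each middleware gets the request, the
# response, and a `next` callback; omitting next() short-circuits.

def logger(req, res, next):
    res.setdefault("log", []).append(f"{req['method']} {req['path']}")
    next()

def auth(req, res, next):
    if req.get("token") == "secret":
        next()                   # authorised: continue down the chain
    else:
        res["status"] = 401      # short-circuit without calling next()

def handler(req, res, next):
    res["status"] = 200
    res["body"] = "hello"

def run(middlewares, req):
    """Dispatch the request through the middleware chain in order."""
    res = {}
    def dispatch(i):
        if i < len(middlewares):
            middlewares[i](req, res, lambda: dispatch(i + 1))
    dispatch(0)
    return res

res = run([logger, auth, handler],
          {"method": "GET", "path": "/", "token": "secret"})
```

A request without the token stops at `auth` with status 401, which mirrors how third-party and custom middleware can intercept requests before they reach a handler.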
CloudSim offers the ability to seamlessly model, simulate and experiment with evolving cloud
computing infrastructure and application services, with a modern widespread and extensible simulation
framework [5]. This includes support of large-scale cloud computing network modeling and simulation, and
an integrated framework for data center modeling, service providers, planning, and allocation policies. The
CloudSim toolkit is used to model, simulate, and execute cloud-enhancing applications. It enables extension
and policy specification of every component of the software stack as a completely personalizable device. It
helps users to handle the complexity of the specification, delivery and design of resources in physical
environments [6]. The Cloudbus toolkit is a layered, seven-module architecture at the top of the stack, utilizing
cloud instances for everyday applications that are used for various purposes [7].
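CloudSim's pluggable allocation policies are written against its Java API, but the core idea can be illustrated with a short sketch (hypothetical names, not the CloudSim classes): a first-fit policy places each VM request on the first host with enough free capacity, and the policy can be swapped out without touching the rest of the simulation.

```python
# Toy first-fit VM allocation policy (illustrative sketch only; CloudSim
# models this in Java via VmAllocationPolicy subclasses).

class Host:
    def __init__(self, name, mips):
        self.name = name
        self.free_mips = mips    # remaining processing capacity

def first_fit(hosts, vm_mips):
    """Place the VM on the first host with enough capacity, else None."""
    for host in hosts:
        if host.free_mips >= vm_mips:
            host.free_mips -= vm_mips
            return host.name
    return None  # allocation failed: no host has enough capacity

hosts = [Host("h0", 1000), Host("h1", 2000)]
placements = [first_fit(hosts, mips) for mips in (800, 500, 1500, 400)]
# 800 -> h0, 500 -> h1, 1500 -> h1, 400 -> fails (h0 has 200, h1 has 0)
```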
Cloud resources are acquired via third-party brokering services that promote access to the
infrastructure. Common adoption patterns include the virtual voyager, architect, network, native and mobile mover. The virtual voyager comprises the SaaS, PaaS and IaaS stacks; it leverages technology to access virtual computing resources with increased versatility, and takes advantage of better response times, thereby reducing
costs [8]. An application delivery network (ADN) is an integrated infrastructure that includes communication
tools, layer of service packets and network layer services for host applications [9]. However, the trend now is
a software-defined network (SDN). By proposing a new design based on the separation of the control and data planes, SDN offers solutions to protocol design problems and configuration errors in large-scale distributed systems.
Cloud middleware is an interesting field of research and the choice of a research subject can be a
difficult process. A systematic mapping evaluation presents an overview of the number of publications within a field of interest and categorizes them using a scheme that allows the distribution of publications across various fields of study to be described [10]. Here, a systematic mapping study is built by
examining issues relating to cloud middleware, and services. Through this process, the number of
publications and work in this area will be defined. This study was carried out by establishing three facets for
the scheme, they include the contribution, topic, and research facets. The contribution facet considered
method and model amongst others, while the research facet dealt with the kind of research carried out. The
topic facet was completed by extracting key areas of cloud middleware, stacks, tools and services on the
cloud. In this paper, section 2 examines related work. The materials and method applied in the study are
provided in section 3. The results and discussion on the outcome of the mapping process are presented in
section 4. Section 5 shows the conclusion and recommendation for further research.
2. RELATED WORK
Santos et al. [11] conducted a systematic mapping study to enhance the utilization of conceptual diagrams in computer science, identifying and analyzing current work by means of backward snowballing as well as manual searches of five digital libraries. Through proper analysis, the research focused on the teaching and learning process of concept maps, and the key goals were achieved. Digital archives and repositories were used for research in the field of study.
Odun-Ayo et al. [2] presented a systematic mapping study of edge computing and the internet of things
with the cloud. A combination of topical, research and contribution facets were used to analyze publications
on applications, concerning tool and model. They were closely followed by publications on architecture of
edge computing. Opinion papers were the least published in the aspects of architecture. These were followed
by publications on evaluation and solution research related to application. Most papers on ubiquitous
networks were in the area of validation research. Furthermore, there was more work on optimization in the
areas of philosophical, evaluation and experience studies, with all following the procedures in [12]. SCOPUS,
ScienceDirect, Compendex, ACM DL and IEEExplore were used for the work's search strings.
Petersen et al. [13] examines the manner in which researchers perform the systematic mapping
process and how recommendations are to be revised based on lessons derived from the systematic maps and
the procedures for systematic literature review. Certainly, the authors performed a comprehensive mapping
study on systematic maps, taking into account relevant patterns in systematic reviews. They found that
several recommendations have been used and merged in the vast number of studies carried out, leading to systematic mapping studies with more valuable conclusions.
The authors in Brocke et al. [14] presented six major aspects for the effective planning and communication of the design of science-based research projects. They also emphasized the importance of the literature review during scientific investigation and in the ever-expanding sphere of information systems. The authors addressed the challenges of literature research and offered suggestions on how to resolve
these challenges. They also offered practical instructions and checklists to help scientists plan and coordinate
their literature searches.
The study in Odun-Ayo et al. [15] systematically documented cloud computing models and
implementation patterns. Design, service deployment, implementation, configuration, data protection,
security and usage models were considered by the categorization framework. A total of 131 primary studies was utilized, and the map was drawn up based on the ideas of [13]. ScienceDirect, ACM
digital library, Springer and IEEExplore provided results from different search strings of the exploration.
The work in Odun-Ayo et al. [16] is a systematic mapping analysis of networking for cloud and data
centre. In the work, only 119 of the 131 articles consulted gave a concise contribution, which in this context was broken down into five major areas: processes, metrics, devices, models and approaches. The result indicated that out of the 119 articles, contributions based on process and model had
the highest percentage of 38.66% and 26.05%. However, contributions based on methods, tools, and metrics
were 19.33%, 10.08%, and 5.88% respectively. SCOPUS, ScienceDirect, Compendex, ACM DL and
IEEExplore were utilized in the work's search strings.
The paper in Mahamad et al. [17] designed a cloud-based people counter which could be used for tracking and business marketing utilizing Raspberry Pi embedded systems, such that the received data is transmitted to the ThingSpeak IoT platform. Performance evaluation of the system was done in a real-life
environment to ascertain its efficiency and capability to enhance business decisions. The basic stages of the
work are; development of people counting algorithms, application of the developed algorithms into the
hardware, and the integration of the acquired data onto ThingSpeak.
The study in Reshma and Chetanaprakash [18] examined developments in the automotive infotainment system with the vehicular cloud network, for the enhancement and protection of in-car systems for smart and advanced entertainment operations. The delivery of this infotainment service relies on the vehicular ad-hoc network and the supporting cloud system. This work offers a summary of the
research being done to exploit the infotainment system to get a genuine representation of input device power,
weaknesses and open-ended problems. The research contribution was summarized into the infotainment system
over vehicular, schema-based approach, security-based approach, and traffic-based scheme.
The research in Kumar and Shantala [19] is a broad overview of data integrity, deduplication and privacy in cloud storage, motivated by the fragility of cloud data, which demands improved security. Consequently, this work discussed the research commitment towards the integrity of records, privacy, and deduplication of information. This also helps to highlight new open research issues and to explore the possible directions in which the existing challenges can be addressed. According to the work, open issues in this field include unrealistic assumptions, non-applicability to external intruders, and unconsidered computational costs.
The study in Farahzadi et al. [20] performed a cloud of things middleware survey. To offer IoT
services, the cloud is pivotal to giving intelligent conditions in houses, structures, and towns. There are,
however, problems such as handling, aggregating, and preserving the big data involved. Cloud computing has
enabled IoT as a web of things (WoT), providing almost limitless web services to expand broad-based IoT
platforms. Middleware is capable of bridging the heterogeneity of objects, leading to efficient communication between interfaces, operating systems, and architectures. The research study analyzed CoT middleware technologies and identified the key characteristics of middleware, exploring various architectural styles and service areas. It also recognized middleware relevant to CoT-based platforms and provided a rundown of current difficulties and issues in the improvement of CoT-based middleware.
The research in Endo et al. [21] systematically examined high availability in clouds and its research challenges. Before achieving a cloud service that is compliant with service level agreements, many challenges
must be overcome. The work covers checks, load balancing, and redundancies for the enhanced quality of
cloud service while adding several research problems in this field without neglecting network and
middleware solutions. To identify methods that cover high-availability clouds, it adapted the systematic
analysis suggested by [22]. The work involved the requirements for assessment, research and search queries, research resources, inclusion and exclusion criteria, and the procedure of information extraction: identifying primary studies, assessing the quality of studies, extracting relevant information and presenting an overview of the studies.
The study in Birje et al. [23] is a cloud study focused on cloud computing ideas, technology,
challenges, and health. This paper investigates the model of cloud computing in the area of domain,
principles, technology, instruments, and diverse challenges. A systematic literary review of 77 selected
papers, released between 2000 and 2015, was performed to explain correctly the complexities of the cloud
computing model. Since security was identified as the greatest problem, the topic is discussed in depth separately. The concepts discussed are interoperability, resource planning, protection, data leakage, resource sharing, load balancing, multi-tenancy, virtualization, and service level agreement (SLA).
The research in Chiregi and Navimipour [24] investigated cloud trust assessment and the leading techniques utilized in the cloud. Certain processes were evaluated in terms of credibility, safety, efficiency, protection, complexity, and scalability. A total of 224 papers were listed and screened down to 28 based on the selection process. The survey would enable scholars, researchers, and experts to
understand the developments in trust assessment processes in cloud environments through the presentation of
up-to-date knowledge and problem issues. The strategy for selecting the article involves an automated,
keyword-based search. The online repositories reviewed for this work are ScienceDirect, Google Scholar,
Springerlink, IEEExplore, Elsevier, Wiley, Springer, and Emerald.
The work in Muhasin et al. [25] reviewed the issues of security and privacy affecting critical cloud
data and identified solutions to tackle challenges such as user access management, sensitive data protection
privacy, user identity anonymity, and data file protection. To explore the fundamental targets of the proposed
system, a pilot study utilizing an organized survey was led by specialists working in cloud security. This also
provides a multi-level structure to enhance the management of cloud-based information systems. It is a
systematic analysis of literature carried out in compliance with Kitchenham's guidelines [13]. The systematic literature review starts with review planning, study discovery, data gathering and information assessment. The paper made inquiries regarding the effects of critical data management in cloud computing. From the related work discussed so far, no prior study has addressed this exact area; to the authors' knowledge, there are no similar papers in this field, hence this work is novel. However, there are several related works on systematic mapping studies, discussed in this paper, which apply a similar methodology in arriving at their results.
3. MATERIALS AND METHOD
This study focuses on developing a systematic map of cloud computing middleware, stacks, tools,
delivery network and services at all layers. It was carried out based on the formal guidelines for systematic mapping studies in [11], [26]. A systematic mapping study is a repeatable method in which resources for
research purposes are collected and interpreted. For a proper systematic mapping study as defined in Figure 1
there are several required steps. Research questions have to be defined, which focuses on the scope of review
to be conducted. All articles in the specific study area under examination are also checked. The papers are
screened after the search to decide which paper is acceptable for the research. By examining the abstracts in
the respective papers, a scheme for classification is designed with keywording. The data extraction process
led to the creation of a systematic map. The above steps were involved in creating a comprehensive map of middleware, stacks, tools, delivery networks and services at all layers of the cloud.
Figure 1. The systematic mapping process [13]
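The staged process in Figure 1 can be sketched as a simple pipeline, where each stage's output feeds the next (all helper names and the tiny corpus below are hypothetical, purely to make the flow concrete):

```python
# Sketch of the systematic mapping pipeline of Figure 1 (hypothetical
# helpers): search -> screen -> keywording -> data extraction -> map.

def run_mapping(all_papers, matches_query, is_relevant, keywords_of):
    found = [p for p in all_papers if matches_query(p)]        # conduct search
    screened = [p for p in found if is_relevant(p)]            # screen papers
    scheme = sorted({k for p in screened for k in keywords_of(p)})  # keywording
    # data extraction: count papers per category -> the systematic map
    return {k: sum(k in keywords_of(p) for p in screened) for k in scheme}

papers = ["cloud middleware tool", "cloud stack model", "unrelated survey"]
m = run_mapping(
    papers,
    matches_query=lambda p: "cloud" in p,
    is_relevant=lambda p: "unrelated" not in p,
    keywords_of=lambda p: [w for w in p.split() if w != "cloud"],
)
```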
3.1. Definition of research questions
Systematic maps give a review of the amount and class of work that has been completed in a
specific research domain. This allows the visualization of patterns over time by illustrating the rates of
publications. The venues in which the work was published may also be important. It might likewise be fundamental to have a good knowledge of the areas in which publications have
been done. These relevant issues decide the proper research question to be used in the study. The questions
that govern the research are enumerated below:
RQ 1: What aspects of cloud computing middleware, stacks, and services are examined, and how many papers cover the different areas?
RQ 2: What kinds of articles are published, and in particular, what is the pattern of papers published in this field?
3.2. Conduction of primary studies research
Primary studies are typically accomplished by exploration of major digital libraries. That is achieved
by looking online for conference papers and articles. Searches were done on different digital libraries accessible online to acquire papers for this systematic mapping study. The search did not concentrate on books and printed material. Due to the high impact of the conference and journal publications in those databases, four databases were utilized for the search. The searched digital libraries are
shown in Table 1.
Table 1. Electronic databases used for the systematic mapping study
Electronic databases URL
ACM http://dl.acm.org/
IEEE http://ieeexplore.ieee.org/xplore
SCIENCE DIRECT http://www.sciencedirect.com/
SPRINGER http://www.springerlink.com/
The search query was structured to elicit outcomes with respect to information, population and
comparisons. All aspects of the study focused on the keywords used in the search string. The query used on the leading digital libraries for this research on cloud middleware, stacks, tools, delivery networks and services at all layers is:
(TITLE ("cloud middleware") OR TITLE ("cloud computing middleware") OR (ALL("cloud") AND ALL("middleware"))) AND (TITLE (stacks) OR TITLE (tools) OR TITLE ("delivery networks") OR TITLE (XaaS) OR TITLE ("services at all layers")).
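The boolean structure of the query above can be approximated as a predicate over simple paper metadata (a sketch only, not the syntax of any particular digital library's search API):

```python
# Approximate the study's boolean search string as a predicate over
# paper metadata (illustrative sketch only).

TOPIC_TERMS = ("stacks", "tools", "delivery networks", "xaas",
               "services at all layers")

def matches(title, full_text):
    """True if the paper satisfies both halves of the AND query."""
    title, full_text = title.lower(), full_text.lower()
    middleware = ("cloud middleware" in title
                  or "cloud computing middleware" in title
                  or ("cloud" in full_text and "middleware" in full_text))
    topic = any(term in title for term in TOPIC_TERMS)
    return middleware and topic

hit = matches("A survey of cloud middleware tools", "cloud ... middleware ...")
miss = matches("Middleware for sensor networks", "no relevant terms here")
```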
For the search of the document metadata the above search string was used to ensure all relevant
articles were obtained. All findings from the four digital repositories which relate to cloud computing and
computer science were taken into account during the analysis. One hundred and twenty-seven (127) papers, from an initial list of 1,158 publications, were considered relevant for inclusion under the paper selection criteria, which were defined by the review emphasis and the research question requirements. The search covered research published between 2001 and 2018.
3.3. Inclusion and exclusion criteria paper screening
Selection processes are used to ensure that the search and inclusion of all relevant research
publications occurs. These processes were utilized to exclude documents irrelevant to the study, while papers pertaining to the study were included. This method also helps to exclude studies which give no direct answers to the research questions. Some abstracts related to the key issues without enough information, and documents like these were also omitted. Furthermore, papers on tutorials, summaries, discussions, prefaces, editorials and presentation slides were excluded. The primary focal point of this study was considered most important in the inclusion of papers, although secondary aspects were also extracted to a certain degree. Middleware, stacks, tools, delivery networks and XaaS are the main focus of this report. The criteria for inclusion and exclusion are therefore defined in Table 2.
Table 2. Inclusion and exclusion criteria
Inclusion criteria: In the abstract, middleware, stacks, tools, delivery networks and cloud services are addressed specifically. In addition, the entire paper contributes to the study subject.
Exclusion criteria: The paper is beyond the field of cloud and computer science and does not address middleware, stacks, tools, delivery networks and services at all layers. The papers are not cloud-related.
3.4. Keywording of abstracts
The classification scheme used for this study was developed through a systematic process.
Keywording reduced the time required to develop the classification scheme for this work. Furthermore,
keywording enabled the researchers to consider all relevant publications through the
6. Bulletin of Electr Eng & Inf ISSN: 2302-9285
Cloud middleware and services-a systematic mapping review (Isaac Odun-Ayo)
2701
classification scheme. The process involves studying abstracts to identify concepts and keywords that
relate to this study. After the extraction process, keywords from the numerous middleware-related articles
were combined to provide comprehensive research insights, and the categories for this study were then
established. However, in order to choose appropriate keywords, it was often necessary to review the
introduction and conclusion of a publication as well. Finally, a keyword cluster was used to evaluate the
various classes utilized in constructing the systematic map. Three facets were utilized in this analysis of
cloud middleware, stacks, networks and services. The first facet was based on the topic, which was directly
linked to the various aspects of the research. The second facet covered the types of contributions made by
the research papers in terms of metric, tool, model, method and process. The third facet comprised the
research types undertaken.
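The keywording step described above can be sketched as simple term counting over abstracts. This is an illustrative reconstruction rather than the authors' tooling; the sample abstracts and the stop-word list are invented:

```python
# Hypothetical keywording sketch: count frequent terms in abstracts to
# seed candidate keywords for the classification scheme.
import re
from collections import Counter

STOP_WORDS = {"the", "of", "and", "a", "to", "in", "for", "is", "on", "we", "over"}

abstracts = [
    "We survey cloud middleware and middleware stacks for service delivery.",
    "A tool for cloud service delivery over a delivery network is proposed.",
]

counts = Counter()
for text in abstracts:
    words = re.findall(r"[a-z]+", text.lower())
    counts.update(w for w in words if w not in STOP_WORDS)

# the most frequent terms become candidate keywords for the scheme
for term, n in counts.most_common(3):
    print(term, n)
```

Terms that recur across abstracts (here "delivery", "cloud", "middleware") would then be clustered manually into the facet categories.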
3.5. Categories and descriptions of facets on research types
The third facet dealt with the kind of research undertaken. An existing classification of research
approaches [27] was adopted, with the following categories and definitions.
a. Validation research: the techniques used in the papers are novel but have not yet been implemented in
practice, for example an experiment conducted in the laboratory.
b. Evaluation research: the techniques have been implemented and evaluated, and the results of the
findings are discussed in the papers.
c. Solution proposal: the paper points towards a novel solution to a challenge; the applications and
benefits of the solution are also discussed.
d. Philosophical papers: these papers offer a new way of looking at existing concepts and structures, for
example by framing them as a taxonomy or conceptual framework.
e. Opinion papers: no defined research method underpins the work; the paper simply communicates the
opinion of the researcher.
f. Experience papers: such papers draw on the author's personal experience and explain how things have
been done in practice.
The classification scheme for this study adopted the work in [27] as suitable for use, and these research
types were used to examine the papers included in this study.
3.6. Data extraction and mapping studies
The included papers were analysed according to specific categories, based on the scheme of
classification. The extraction of data from the specific documents used in the analysis was therefore possible.
The classification scheme was determined by the data extraction method. New categories may be added
during the process, while certain categories could be merged and some others may not be considered
relevant. The data extraction process was conducted via a Spreadsheet table from Microsoft Excel. Each
classification scheme category was included in the Excel tables. The publication frequency was achieved by
combining the topic/contribution tables with the topic/research type tables. The frequencies from the findings
on the Excel tables was the pre-requisite n creating the systematic map. It was important to assess which part
of the research field was more prominent. In this way, the gaps in cloud middleware, stacks, and service were
easily identified, allowing further work to be recommended in fields with low publications.
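The frequency step just described can be sketched as a cross-tabulation of (topic, contribution) pairs; the classifications below are hypothetical stand-ins for the extracted data:

```python
# Cross-tabulate topic/contribution classifications, mirroring the
# combination of Excel tables described above (sample data is invented).
from collections import Counter

classified = [
    ("Tools", "Tool"), ("Tools", "Model"), ("Tools", "Model"),
    ("Stack", "Method"), ("Middleware", "Tool"),
]

freq = Counter(classified)          # count at each facet intersection
total = sum(freq.values())

for (topic, contribution), count in sorted(freq.items()):
    share = 100.0 * count / total   # percentage as reported in the tables
    print(f"{topic:<12} {contribution:<8} {count}  ({share:.2f}%)")
```

The resulting counts and percentages are exactly what the topic/contribution and topic/research tables hold, and what the bubble plot later visualizes.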
From the results in the spreadsheet tables, the frequencies were presented via a bubble plot at the
intersections of the categories. The map comprising the plot was an x-y scatter chart with bubbles. The
three facets used in the analysis gave rise to two quadrants. Each quadrant provided a visual map relating
the topic classifications to either the contribution or the research classes. Furthermore, the bubbles also
provided descriptive statistics for easy comprehension. The map presented a simple overview of the
analysis in the field of middleware, stacks, and services.
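A bubble map of this kind can be reproduced with standard plotting tools. The sketch below uses matplotlib (an assumption; the paper does not state how Figure 4 was drawn) with illustrative counts in the spirit of Table 3:

```python
# Render one quadrant of a systematic map as an x-y scatter chart whose
# bubble areas encode publication frequency (illustrative counts only).
import matplotlib
matplotlib.use("Agg")  # off-screen rendering
import matplotlib.pyplot as plt

topics = ["Stack", "Tools", "Delivery Network", "XaaS", "Middleware"]
contribs = ["Metric", "Tool", "Model", "Method", "Process"]
freq = [  # rows = topics, columns = contribution types
    [3, 4, 2, 7, 3],
    [6, 16, 20, 7, 9],
    [1, 2, 2, 2, 3],
    [2, 5, 1, 2, 2],
    [0, 8, 0, 4, 2],
]

fig, ax = plt.subplots(figsize=(6, 4))
for i, topic in enumerate(topics):
    for j, contrib in enumerate(contribs):
        n = freq[i][j]
        if n:  # draw a bubble only at non-empty intersections
            ax.scatter(j, i, s=n * 60, alpha=0.5, color="tab:blue")
            ax.annotate(str(n), (j, i), ha="center", va="center")
ax.set_xticks(range(len(contribs)))
ax.set_xticklabels(contribs)
ax.set_yticks(range(len(topics)))
ax.set_yticklabels(topics)
ax.set_title("Topic vs contribution facet (bubble area = paper count)")
fig.tight_layout()
fig.savefig("systematic_map.png")
```

The second quadrant is produced the same way by swapping the contribution axis for the research-type axis.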
4. RESULTS AND ANALYSIS
The core motivation of the systematic mapping study on cloud computing middleware, stacks, and
services was thematic analysis, classification and identification of the likely publication fora. Based on the
analysis, research gaps were highlighted on the map, depicting which areas of the field of study had a
shortage of publications. Conversely, the study also indicated the areas in which more articles had been
published. In this systematic mapping study, the primary studies were assessed via high-level classes,
which were then used to produce the rates of occurrence of articles and to create the map.
4.1. Topic and contribution facet
The classification scheme topics reflected the different aspects of cloud middleware and related
resources. Hence the topics considered were stacks, tools, delivery network, services at all layers (XaaS),
middleware and orthogonal. The tables in this study are the outcomes of the categorization of the primary
studies. The primary studies were considered based on the topics used in the systematic process, hence the
7. ISSN: 2302-9285
Bulletin of Electr Eng & Inf, Vol. 10, No. 5, October 2021 : 2696 – 2706
2702
data generated are displayed in the tables. The details of this dataset are too large to be included in the
paper. The numerical results in the tables simply provide the quantity of papers resulting from the
classification scheme and data extraction process, as shown in Figure 1. This is the standard way it is
represented in the literature. The listing of primary studies used for categorizing the topics vis-à-vis the
contribution types is displayed in Table 3, while the percentage of each category is depicted in Figure 2. In
Table 3, suitable topics from the papers extracted during screening and keywording are examined to
determine which ones discussed metric, tool, model, method and process. Hence Table 3 contains the list
of primary studies used to examine the various topics against the types of contributions. Figure 3 shows
the percentage of topics in the research category. The systematic map on cloud computing middleware,
stacks, tools, delivery network and services at all layers is shown in Figure 4. On the x-axis of the left
quadrant is the contribution facet. This category dealt with the type of contribution a paper offered in terms
of metric, method, tool, process and model. In this study, metric accounted for 10.34% of the 116 papers in
this facet, tool for 28.45%, model for 27.59%, method for 18.10% and process for 15.52%.
Furthermore, the left quadrant indicated that the model contribution accounted for 1.72% each on
orthogonal, middleware, delivery network and stacks. Model contributed 2.59% to services at all layers,
while its contribution to the tools topic was the largest at 18.1%. Figure 2 displays the pie chart for each
topic within the contribution category. Of the 116 papers within this category, 16.38% were based on
stacks, 51% on tools, 8.62% on delivery networks, 10.34% on XaaS and 12.06% on middleware.
Table 3. Primary studies for topic and contribution facets

Topic | Metric | Tool | Model | Method | Process
Stack | 23, 24, 122 | 18, 25, 103, 111 | 7, 26 | 21, 27, 29, 59, 110, 120, 126 | 2, 45, 116
Tools | 4, 13, 28, 102, 105, 125 | 31, 33, 65, 66, 67, 71, 77, 82, 85, 86, 89, 91, 99, 100, 113, 117 | 34, 37, 38, 39, 40, 60, 61, 62, 63, 75, 76, 79, 80, 81, 82, 84, 104, 112, 115, 124 | 32, 47, 56, 57, 64, 74, 93 | 6, 10, 16, 72, 73, 90, 113, 119, 123
Delivery Networks | 15 | 22, 48 | 14, 50 | 42, 43 | 51, 83, 127
Services at all layers (XaaS) | 1, 55 | 19, 20, 69, 70, 118 | 108 | 87, 109 | 98, 106
Middleware | - | 5, 12, 17, 78, 88, 92, 94, 107 | - | 9, 11, 49, 68 | 30, 96
Percentage | 10.34% | 28.45% | 27.59% | 18.10% | 15.52%
Figure 2. Percentage of topics in the contribution category
4.2. Topic and research facet
The listing of primary studies used for categorizing the topics with respect to the research types is
displayed in Table 4. Table 4 contains the primary studies data used to examine the various topics against
the types of research. In Table 4, suitable topics from the papers extracted during screening and
keywording are examined to determine which ones involved evaluation, validation, solution, philosophical,
experience and opinion research types. The x-axis of the right quadrant of Figure 4 shows the research type
category. Evaluation research accounted for 28.35% of the 127 articles reviewed, solution research for
54.33%, philosophical research for 5.51% and experience research for 11.81%. No publications on
validation or opinion research were present in the field of study under review. The breakdown of the
28.35% for evaluation research indicated that 0.79% dealt with orthogonal topics, 14.17% was on tools and
3.94% was on stacks. Of the 127 papers within this group, 15.75% were based on stacks, 44.09% on tools,
7.87% on delivery networks, 9.45% on XaaS and 22.83% on middleware.
Table 4. Primary studies for topic and research facets

Topic | Evaluation | Validation | Solution | Philosophical | Experience | Opinion
Stack | 23, 24, 103, 111, 122 | - | 7, 8, 18, 25, 26, 27, 29, 45, 59, 110, 120, 126 | 21 | 2, 116 | -
Tools | 4, 13, 28, 31, 33, 65, 66, 67, 71, 85, 86, 89, 91, 99, 100, 102, 105, 125 | - | 6, 10, 16, 34, 37, 38, 39, 40, 47, 56, 57, 60, 61, 62, 63, 64, 72, 73, 74, 75, 76, 77, 79, 80, 81, 82, 84, 104, 112, 115, 117, 119, 123, 124 | 32, 93 | 90, 113 | -
Delivery Networks | 15, 48 | - | 14, 22, 42, 43, 50 | - | 51, 83, 127 | -
Services at all layers (XaaS) | 1, 55 | - | 19, 20, 69, 70, 87, 108, 109, 118 | - | 98, 106 | -
Middleware | 3, 5, 12, 17, 52, 53, 97, 101, 121 | - | 9, 11, 49, 54, 68, 78, 88, 92, 94, 107 | 30, 58, 114, 96 | 35, 36, 41, 44, 46, 95 | -
Percentage | 28.35% | 0.00% | 54.33% | 5.51% | 11.81% | 0.00%
Figure 3. Percentage of topics in the research category
4.3. Major findings
The systematic map on middleware, stacks, tools, delivery networks and services at all layers in the
cloud can be visualized in Figure 4. The left quadrant provides an x-y scatter chart with bubble plots at the
positions where the topic and contribution facets intersect. The right quadrant is the map depicting the
connection between the topic and research type facets, likewise using an x-y scatter plot with bubbles. The
map made it easier to distinguish which classifications carried more weight. From Figure 4:
a. More papers were published in the metric area as it concerns the tools topic. In fact, the tools topic
clearly attracted the most publications across the contribution facet: model, tool, method and process had
18.10%, 13.79%, 6.03% and 8.62% respectively.
b. Evaluation and solution research had the largest numbers of articles on the tools topic, with 14.17%
and 26.77% respectively. In terms of experience research, middleware had the highest number of papers
with 4.72%, and middleware also had the most articles in philosophical research with 2.36%. Based on the
authors' investigations there were no publications dealing with validation or opinion research, and no
philosophical papers on delivery networks or XaaS.
c. The most striking aspect of the map is that solution research has the highest frequency of publications
across all the topics. Such a result, shown on the systematic map, should readily generate the interest of
researchers. The visual presentation of the systematic map has helped to summarize the
publication gaps, providing results that can be used for further studies. This is a novel work, and it is
hoped that more papers will be published in this field to improve on what has been done in this research.
Figure 4. Systematic map of cloud computing middleware and services
The significance is that researchers at all levels could utilize this outcome for additional study.
This study focused on six topics, namely stacks, tools, delivery network, services at all layers, middleware
and orthogonal, in relation to cloud computing middleware. In addition, the studies were classified both by
contribution type (metric, tool, model, method and process) and by research type (evaluation, validation,
solution, philosophical, experience and opinion). These fields, among others, are recommended for future
research. The essential lesson learnt from this analysis is that the spectrum of research in any field is
inexhaustible. A comparison with similar work is not possible because no similar publications related to
the title of this paper were found in the literature, which makes the contributions of this paper unique. It is
expected that upcoming researchers in the field of cloud computing will undertake more in-depth studies
as a follow-up to this paper.
5. CONCLUSION
Cloud computing is evolving very fast, and so are the various challenges regarding the services
being offered. Cloud users and providers alike are benefitting from cloud activities, and this has inspired
research efforts within both industry and research institutions. Prospective clients require appropriate
resources to use cloud services, and cloud providers must strive to perform at an optimum level. This has
led to many studies on cloud middleware and services, several of which were presented in this research.
Notwithstanding, a shortage of publications in several aspects still exists. Through the analysis carried out
in this study, areas that need more work were highlighted on the systematic map. The contribution to the
body of knowledge lies in identifying deficiencies in different aspects of the research. The research gaps
established are useful for further studies, and this forms the recommendation of this paper. The study can
be strengthened in future work by in-depth analysis of the topics extracted here. In summary, this study
should enable researchers to identify important gaps in cloud middleware and services at all layers that
were not addressed in past studies, thus increasing research opportunities.
ACKNOWLEDGEMENTS
We appreciate Covenant University Nigeria, through the Centre for Research, Innovation and
Discovery (CUCRID) for the support granted to this work.
REFERENCES
[1] R. Buyya, J. Broberg, and A. M. Goscinski, “Cloud computing principles and paradigms,” John Wiley and Sons, pp.
4-10, 2011.
[2] I. Odun-Ayo, I. Eweoya, W. T.-Abasi, and S. Bogle, “A Systematic Mapping Study of Edge Computing and
Internet of Things with the Cloud,” International Journal of Engineering Research and Technology, vol. 12, no. 10,
pp. 1824-1836, 2019.
[3] I. Odun-Ayo, T.-A. William, J. Yahaya, and M. Odusami, “A Systematic Mapping Study of Performance Analysis
and Modelling of Cloud Systems and Applications,” International Journal of Electrical and Computer Engineering
(IJECE), vol. 10, no. 1, pp. 101-118, 2020, doi: 10.11591/ijece.v11i2.pp1839-1848.
[4] I. Odun-Ayo, S. Misra, N. A. I. Omoregbe, E. Onibere, Y. Bulamana, and R. Damasevicius, “Cloud-based security
driven human resource management system,” Frontiers in Artificial Intelligence and Applications, vol. 295, pp. 96-
106, 2017. doi: 10.3233/978-1-61499-773-3-96.
[5] R. Buyya, R. Ranjan and R. N. Calheiros, "Modeling and simulation of scalable Cloud computing environments
and the CloudSim toolkit: Challenges and opportunities," 2009 International Conference on High Performance
Computing & Simulation, 2009, pp. 1-11, doi: 10.1109/HPCSIM.2009.5192685.
[6] A. Ranabahu, and E. M. Maximilien, “A best practice model for cloud middleware systems,” IBM Almaden
Research Center, San Jose, 2010.
[7] R. Buyya, S. Pandey, and C. Vecchiola, “Cloud bus toolkit for market-oriented cloud computing,” in IEEE
International Conference on Cloud Computing, The University of Melbourne, Australia, 2010, pp. 24-44, doi:
10.1007/978-3-642-10665-1_4.
[8] A. T. Kearney, “Clearing the fog from cloud computing,” Forester, 2012.
[9] P. Subharthi, “Software-defined Application Delivery Network,” All Theses and Dissertations, p. 1331, 2014,
[Online]. Available: http://openscholarship.wustl.edu/etd/1331.
[10] K. Petersen, R. Feldt, S. Mujtaba, and M. Mattsson, “Systematic Mapping Studies in Software Engineering,”
EASE'08 Proceedings of the 12th international conference on Evaluation and Assessment in Software Engineering,
2008, pp. 68-77, doi: 10.5555/2227115.2227123.
[11] V. Santos, E. F. de Souza, N. L. Vijaykumar, and K. Felizardo, “Analyzing the Use of Concept Maps in Computer
Science: A Systematic Mapping Study,” Informatics in Education, vol. 16, no. 2, pp. 257-288, 2017. doi:
10.15388/infedu.2017.13.
[12] B. Kitchenham, O. P. Brereton, D. Budgen, M. Turner, J. Bailey, and S. Linkman, “Systematic literature reviews in
software engineering–a systematic literature review,” Information and Software Technology, vol. 51, no. 1, pp. 7-
15, 2009, doi: 10.1016/j.infsof.2008.09.009.
[13] K. Petersen, S. Vakkalanka, and L. Kuzniarz, “Guidelines for conducting systematic mapping studies in software
engineering: An update,” Information and Software Technology, vol. 64, pp. 1-18, 2015, doi:
10.1016/j.infsof.2015.03.007.
[14] J. V. Brocke, A. Simons, K. Riemer, B. Niehaves, R. Plattfaut, and A. Cleven, “Standing on the Shoulders of
Giants: Challenges and Recommendations of Literature Search in Information Systems Research,”
Communications of the Association for Information Systems, vol. 37, no. 1, p. 9, 2015, doi: 10.17705/1CAIS.03709.
[15] I. Odun-Ayo, R. G. Worlu, and V. Samuel, “A Systematic Mapping Study of Designs and Employment Models for
Cloud: Private, Public, Hybrid, Federated and Aggregated,” International Journal of Advances in Computer
Science and Cloud Computing, vol. 6, no. 1, pp. 39-47, 2018.
[16] I. Odun-Ayo, I. Eweoya, W. T.-Abasi, and S. Bogle, “Networking for Cloud and Data Centre - A Systematic
Mapping Study,” International Journal of Engineering Research and Technology, vol. 12, no. 12, pp. 2559-2573,
2019.
[17] A. K. Mahamad, S. Saon, H. Hashim, M. A. Ahmadon, and S. Yamaguchi, “Cloud-based people counter,” Bulletin
of Electrical Engineering and Informatics, vol. 9, no. 1, pp. 284-291, 2020, doi: 10.11591/eei.v9i1.1849.
[18] S. Reshma and C. Chetanaprakash, “Advancement in infotainment system in automotive sector with vehicular
cloud network and current state of art,” International Journal of Electrical and Computer Engineering (IJECE),
vol. 10, no. 2, pp. 2077-2087, 2020. doi: 10.11591/ijece.v10i2.pp2077-2087.
[19] G. Kumar and C. P. Shantala. “An extensive research survey on data integrity and deduplication towards privacy in
cloud storage,” International Journal of Electrical and Computer Engineering (IJECE), vol. 10, no. 2, pp. 2011-
2022, 2020, doi: 10.11591/ijece.v10i2.pp2011-2022.
[20] A. Farahzadi, P. Shams, J. Rezazadah, and R. Farahbakhsh, “A survey of Middleware Technologies for Cloud of
Things,” Digital Communications and Networks, vol. 4, no. 3, pp. 176-188, 2018, doi: 10.1016/j.dcan.2017.04.005.
[21] P. T. Endol, M. Rodrigues, G. E. Goncalves, J. Kelner, D. Sadok, and C. Curescu, “High availability in clouds:
systematic review and research challenges,” Journal of Cloud Computing: Advances, Systems and Applications,
vol. 5, no. 1, pp. 1-15, 2016, doi: 10.1186/s13677-016-0066-8.
[22] E. F. Coutinho, F. R. C. Sousa, P. A. L. Rego, D. G. Gomes, and J. Souza, “Elasticity in cloud computing: a
survey,” annals of telecommunications - annales des télécommunications, vol. 70, no. 7-8, pp. 289-309, 2015, doi:
10.1007/s12243-014-0450-7.
[23] M. N. Birje, P. Challagidad, R. H. Goudar, and M. T. Tapale, “Cloud computing review: concepts, technology,
challenges and security,” International Journal of Cloud Computing, vol. 6, no. 1, pp.32-57, 2017, doi:
10.1504/IJCC.2017.083905.
[24] M. Chiregi and N. J. Navimipour, "Cloud computing and trust evaluation: A systematic literature review of the
state-of-the-art mechanisms," Journal of Electrical Systems and Information Technology, vol. 5, no. 1, pp. 608-622,
2018, doi: 10.1016/j.jesit.2017.09.001.
[25] H. J. Muhasin, R. Atan, M. A. Jabar, and S. Abdullah, “The Factors Affecting on Managing Sensitive Data in
Cloud Computing,” Indonesian Journal of Electrical Engineering and Computer Science, vol. 11, no. 3, pp. 1168-
1175, 2018, doi: 10.11591/ijeecs.v11.i3.pp1168-1175.
[26] B. Kitchenham and S. Charter, “Guidelines for performing systematic literature review in software engineering,”
Technical report, Ver. 2.3 EBSE Technical Report. EBSE Version, vol. 2 Ed, pp. 101-117, 2007.
[27] R. Wieringa, N. A. M. Maiden, N. R. Mead, and C. Rolland, “Requirement engineering paper classification and
evaluation criteria. A proposal and a discussion,” Requirement Engineering, vol. 11, no. 1, pp. 102-107, 2006, doi:
10.1007/s00766-005-0021-6.
BIOGRAPHIES OF AUTHORS
Isaac A. Odun-Ayo was born in Ilesha, Nigeria in 1962. He received the B.S, M.S and Ph.D
degrees in Computer Science from the University of Benin, Benin City, Nigeria. Between 2010
and 2013 he was a faculty and Director Information and Communication Technology at the
National Defence College, Abuja, Nigeria. He joined the faculty of Covenant University, Ota,
Nigeria as a Senior Lecturer in October 2016. He is the author of one book and more than 40
journal and conference articles on cloud computing. His research interests include cloud
computing, human resource management, e-governance and software engineering. Dr. Odun-
Ayo is a recipient of the National Productivity Order of Merit Award, Nigeria for his
contribution to computing. He is a member of the Nigeria Computer Society (NCS), Computer
Professionals of Nigeria (CPN), International Association of Engineers (IAENG), Institute of
Electrical and Electronics Engineers (IEEE) and Member Information Science Institute (ISI).
Toro-Abasi Williams received a BSc. degree in computer science from Afe Babalola
University, Ado-Ekiti, Ekiti State, Nigeria in 2016. He received a MSc. Degree from the
Department of Computer and Information Sciences, Covenant University, Nigeria in 2020. He
takes tutorials for several courses at the postgraduate level. He has a passion for academics and
research in computer science. Williams has some cloud computing publications. His research
interests include cloud computing, mobile computing, artificial intelligence, machine learning,
and software engineering.
Marion Adebiyi received a BSc. degree in computer science from the University of Ilorin, Kwara
State, Nigeria in 2000, and her MSc. and Ph.D degrees, also in computer science (bioinformatics
option), from Covenant University, Ota, Nigeria in 2008 and 2014 respectively. She is a Senior
Lecturer in the Computer and Information Sciences Department of Covenant University, was
deployed to Landmark University, Omu-Aran, Kwara State, Nigeria in 2018, and is currently a
PDF researcher at Durban University of Technology, South Africa. She has authored two books,
over ten book chapters, more than 40 articles, and 25 conference and workshop papers. Dr.
Adebiyi heads the H3Africa projects coordinating Division, and the Entomology and Data
Management Division of the Covenant University Bioinformatics Research (CUBRe) group.
Her research interests include bioinformatics, high-throughput data analytics, genomics,
proteomics, transcriptomics, homology modelling and organisms' inter-pathway analysis.
Olatunji Okesola is a Professor of Cybersecurity at the First Technical University, Ibadan -
Nigeria. He is a Certified Information Security Manager (CISM) and a Certified Information
Systems (IS) Auditor (CISA) with a Ph.D in Computer Sciences. He is a member of IS Audit
and Control Association (ISACA), Computer Professionals of Nigeria (CPN), and a fellow of
Nigerian Computer Society (NCS). Okesola is a scholar, an Information Security expert and a
seasoned banker. Until November 2016, he was the Group Head, for Information Systems
Control and Revenue Assurance at Keystone Bank (Nig.) Ltd, Lagos. An alumnus of the University
of South Africa, his research interests include cyber security, biometrics, and software
engineering. He has several publications in scholarly journals and conference proceedings both
local and international.