INFRASTRUCTURE CONSOLIDATION FOR INTERCONNECTED SERVICES IN A SMART CITY USIN... (csandit)
Sustainability, the appropriate use of natural resources, and a better quality of life for citizens have become prerequisites for changing the traditional concept of a city into that of a smart city. A smart city needs to use latest-generation information technologies (IT) and hardware to improve services and data, creating a balanced environment between the ecosystem and its inhabitants. This paper analyses the advantages of using a private cloud architecture to share hardware and software resources when required. Our case study is Guadalajara, which has seven municipalities, each of which monitors air quality. Each municipality has its own set of servers to process information independently, together with information systems for transmitting data to and storing data with the other municipalities. We analysed the behaviour of the carbon footprint during the years 1999-2013 and observed a seasonal pattern. Our proposal therefore calls for the municipalities to adopt a cloud-based solution that manages and consolidates infrastructure, minimizing maintenance costs and electricity consumption and thereby reducing the carbon footprint generated by the city.
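The abstract's core argument is that consolidating many lightly used municipal servers into a shared private cloud cuts electricity use. A back-of-the-envelope sketch of that reasoning, with all figures (server counts, utilisation levels, power draws) invented for illustration rather than taken from the paper:

```python
# Hypothetical illustration: energy saved when municipalities consolidate
# dedicated servers into a shared private cloud. All figures are assumptions
# for the sketch, not measurements from the paper.

IDLE_W, PEAK_W = 120.0, 300.0   # assumed per-server power draw (watts)
HOURS_PER_YEAR = 8760

def annual_kwh(servers, avg_util):
    """Linear power model: idle draw plus utilisation-proportional draw."""
    watts = servers * (IDLE_W + (PEAK_W - IDLE_W) * avg_util)
    return watts * HOURS_PER_YEAR / 1000.0

# Before: 7 municipalities x 4 lightly used servers each (10% utilisation).
before = annual_kwh(servers=28, avg_util=0.10)
# After: a consolidated cloud runs the same load on 6 busier servers (60%).
after = annual_kwh(servers=6, avg_util=0.60)

savings_pct = 100 * (1 - after / before)
print(f"before: {before:.0f} kWh/yr, after: {after:.0f} kWh/yr, saved {savings_pct:.0f}%")
```

Because idle servers still draw a large fraction of peak power, running fewer machines at higher utilisation dominates the savings in this toy model.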
Zpryme Report on Cloud and SAS Solutions (Paula Smith)
The document provides an overview of the history and development of cloud computing and software-as-a-service (SaaS) technologies and their potential benefits for utilities. It discusses how utilities initially struggled with smart grid modernization due to fragmented systems and big data challenges. The emergence of cloud hosting, SaaS and managed services has enabled even small and mid-sized utilities to realize the benefits of a fully integrated smart grid infrastructure. The document then covers key concepts around cloud computing models, virtualization, and the opportunities that SaaS and cloud-based analytics present for improved utility operations and planning.
The project aims to create an integrated digital platform for managing SUAP (Single Authorization Procedure) in compliance with current legislation. The platform will allow citizens, companies, and professionals to complete administrative procedures online in a more efficient and transparent manner. It will be used by the back office and users to open, close, or modify business activities. The work was commissioned by the SUAP office of the Murgia area to provide SUAP services completely online.
IRJET - Cloud Computing and IoT Convergence (IRJET Journal)
This document discusses the convergence of cloud computing and the Internet of Things (IoT). It first provides background on both cloud computing and IoT, noting how cloud computing enables distributed computing resources and how IoT involves billions of interconnected devices. It then argues that the cloud features of on-demand access, scalability, and resource pooling are essential for supporting the IoT world. The document also discusses how cloud computing can offer sharing of resources, location independence, virtualization, and elasticity to benefit IoT. Finally, it outlines some challenges of combining IoT and cloud technologies, such as handling large volumes of real-time and unstructured IoT data from distributed sources.
A Cloud-based Online Access on Smart Energy Metering in the Philippines (IJAEMSJORNAL)
This document discusses a study on using cloud computing to access smart energy metering in the Philippines. Key findings from interviews with industry professionals include:
1) Most participants believed that smart energy meters can be controlled remotely through cloud computing and offer reliable communication.
2) Some applications currently provide accurate data analysis, management, and customer engagement.
3) Participants saw advantages in the scalability, central data storage, cost efficiency, and security provided by cloud-based systems.
Combining cloud and sensors in a smart city environment (Ngoc Thanh Dinh)
This document discusses combining cloud computing and sensors to create smart city environments. It proposes an architecture based on sensor web enablement standards that abstracts heterogeneous sensor data and makes it available as a service. The architecture is hierarchical, with a high-level intelligence layer and peripheral decision makers that analyze and aggregate sensor data. It aims to provide scalable and reactive data access to meet user requirements. Smart cities are presented as an example application area.
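The hierarchy described above, in which peripheral decision makers aggregate raw sensor data before the high-level intelligence layer sees it, can be sketched as follows. The zone names, readings, and alert threshold are invented for the example, not taken from the paper:

```python
# Sketch of the two-level idea: peripheral "decision makers" aggregate raw
# readings per zone, and only summaries reach the high-level layer.
# Zone names, samples, and the threshold are illustrative assumptions.
from statistics import mean

raw_readings = {              # zone -> raw sensor samples (e.g. PM10 ug/m3)
    "centro":     [41, 44, 39, 47],
    "industrial": [88, 91, 95, 84],
}

def peripheral_aggregate(samples, alert_above=75):
    """A peripheral node reduces its samples to a summary plus a local decision."""
    avg = mean(samples)
    return {"avg": avg, "alert": avg > alert_above}

# The high-level intelligence layer sees one summary per zone, not every sample.
summaries = {zone: peripheral_aggregate(s) for zone, s in raw_readings.items()}
alerts = [z for z, s in summaries.items() if s["alert"]]
print(summaries, alerts)
```

The aggregation step is what makes the architecture scalable: the volume of data crossing each layer boundary shrinks by the per-zone sample count.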
This document discusses the impact of cloud computing on the IT industry. It begins by defining cloud computing and the various types of cloud services, including Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). It then reviews how cloud computing transforms the IT industry by making software available as an online utility rather than installed locally. The document also examines cloud computing's ability to improve flexibility and reduce costs for IT services compared to traditional computing models. Finally, it analyzes how cloud computing architecture works by provisioning resources on-demand from large pools of virtualized hardware and software.
This document discusses using the Sensing and Actuation as a Service (SAaaS) paradigm to provide an infrastructure approach for deploying Internet of Things (IoT) applications. SAaaS provides a framework that abstracts, virtualizes, and manages distributed sensing resources to enable on-demand provisioning of these resources for IoT applications. The document outlines the SAaaS architecture and how it can address the needs of IoT applications through its ability to aggregate and provide access to geographically distributed sensing devices and sensor networks connected over the Internet. It then describes the process for deploying an IoT application using the SAaaS infrastructure, including provisioning the system and enrolling sensing resources, and then deploying and custom
Cloud Computing Relation With Smart Cities (Tareq Alemadi)
Cloud computing is a computing paradigm where information and computational resources are delivered as a service over the Internet. It allows users to access technology-enabled services from anywhere without requiring specialized hardware or software. The main advantages are low cost, scalability, and simplified access to applications and data. Potential disadvantages include security concerns, loss of control over certain aspects of the IT infrastructure, and vendor lock-in. A final piece of advice is to start with a pilot project to test cloud adoption before fully committing resources.
With the rapid growth of mobile applications and the development of the cloud computing concept, mobile cloud computing (MCC) has been introduced as a potential technology for mobile services. MCC integrates cloud computing into the mobile environment and overcomes obstacles related to performance, security, and other issues discussed in mobile computing. This paper gives an overview of MCC, including its definition, architecture, and applications, and presents the issues, existing solutions, and approaches.
IRJET- Resource Management in Mobile Cloud Computing: MSaaS & MPaaS with Femt... (IRJET Journal)
This document discusses resource management in mobile cloud computing using Mobile Software as a Service (MSaaS) and Mobile Platform as a Service (MPaaS) with femtocell and Wi-Fi networks. It proposes using femtocell and Wi-Fi private cloud networks to overcome mobile performance issues like limited battery life, storage, and bandwidth. MSaaS and MPaaS can further improve quality of service, pricing, and standard interfaces. The document suggests this approach can effectively manage resources and improve the performance of mobile cloud computing.
Self-tuning data centers aim to minimize human intervention through machine learning techniques. Current challenges include meeting service level agreements for performance and uptime while maximizing efficiency of resources and minimizing costs. A self-tuning architecture uses monitoring data to detect issues and make recommendations for scaling, migration, or tuning of resources without human input. This approach aims to optimize data centers so they can scale efficiently to support growing workloads and applications.
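The monitor-detect-recommend loop described above can be sketched as a simple rule-based recommender. Real self-tuning systems use learned models; here the thresholds, metric names, and recommendation strings are assumptions for illustration:

```python
# Minimal sketch of the monitor -> detect -> recommend loop: one monitoring
# sample in, a scaling/migration recommendation out, no human input.
# Thresholds and the metric set are illustrative assumptions.
def recommend(metrics, cpu_high=0.85, cpu_low=0.20, mem_high=0.90):
    """Turn one monitoring sample into scaling/tuning recommendations."""
    recs = []
    if metrics["cpu"] > cpu_high:
        recs.append("scale-out: add a node")
    elif metrics["cpu"] < cpu_low:
        recs.append("scale-in: consolidate onto fewer nodes")
    if metrics["mem"] > mem_high:
        recs.append("migrate: move the largest VM to a less loaded host")
    return recs or ["no action"]

print(recommend({"cpu": 0.92, "mem": 0.95}))
print(recommend({"cpu": 0.10, "mem": 0.40}))
```

A machine-learning variant would replace the fixed thresholds with predictions from historical monitoring data, but the control loop shape stays the same.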
A concept based on the vision described by Mark Weiser nearly a decade ago:
“The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it”
Asyma E3 2014 The Impact of Cloud Computing on SME's (asyma)
The document discusses how cloud computing can provide benefits to small and medium-sized enterprises (SMEs). It outlines how cloud services have evolved from time-sharing mainframes to today's software-as-a-service (SaaS) models. The cloud offers SMEs important advantages like reduced costs through economies of scale, lower barriers to entry since they don't need to purchase their own software and infrastructure, and improved scalability. While concerns around data security and control remain for some businesses, the cloud is becoming increasingly important for SMEs to remain competitive through improved productivity and flexibility.
The document summarizes key telecom trends including the growth of connected devices and machines, big data challenges, cloud computing advances, emerging applications, and the evolution of networks towards more intelligent, automated, and distributed architectures. Major technology directions include the internet of things, content-centric networking, heterogeneous networks, virtualization, and the changing role of telecom operators.
Cloud computing and artificial intelligence (Furqan Haider)
Hi friends, I created this simple yet helpful slide deck on the application of AI in cloud computing for a presentation held at my class. I hope it helps those looking for this topic, since it is a rare one and hardly anyone else has made a presentation about it. Thanks, and have a nice day :)
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. It allows users to access technology-based services from the Internet without knowledge of, expertise with, or control over the technology infrastructure that supports them.
QoS-Aware Middleware for Optimal Service Allocation in Mobile Cloud Computing (Reza Rahimi)
- The document discusses QoS-aware middleware for optimal service allocation in mobile cloud computing.
- It proposes a 2-tier cloud architecture consisting of local clouds and public clouds and develops algorithms to optimally allocate services for mobile users across these tiers.
- A location-time workflow model is used to represent mobile applications and QoS metrics like delay, power consumption and price are considered for optimal service allocation.
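The allocation idea in the summary above can be sketched as a weighted-cost comparison across the two cloud tiers, scoring each candidate on the named QoS metrics (delay, power consumption, price). The candidate values and weights below are invented for the example and are not from the paper:

```python
# Hedged sketch: score each cloud tier on the QoS metrics named above and
# allocate the service to the tier with the lowest weighted cost.
# Candidate metrics and weights are illustrative assumptions.
candidates = {
    "local-cloud":  {"delay_ms": 15, "power_mw": 80,  "price": 0.05},
    "public-cloud": {"delay_ms": 90, "power_mw": 120, "price": 0.02},
}

def allocate(cands, w_delay=1.0, w_power=0.5, w_price=100.0):
    """Return the tier minimising the weighted QoS cost."""
    def cost(m):
        return w_delay * m["delay_ms"] + w_power * m["power_mw"] + w_price * m["price"]
    return min(cands, key=lambda name: cost(cands[name]))

print(allocate(candidates))
```

Shifting the weights changes the outcome: a user who cares only about price would pick the public cloud, while the delay-weighted default favours the local cloud.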
Emerging cloud computing paradigm vision, research challenges and development... (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of engineering and technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of engineering and technology. We bring together scientists, academicians, field engineers, scholars, and students of related fields of engineering and technology.
This document provides a high-level overview of cloud computing, including its history and key concepts. Cloud computing involves delivering computing resources such as hardware and software over a network, typically the internet. It allows users to access applications from anywhere through a web browser or light-weight application. The concept originated in the 1950s with mainframe computing and time-sharing, and cloud computing has grown with improvements in networking, virtualization, and utility computing.
Ambiences: on-the-fly usage of available resources through personal devices (ijasuc)
In smart spaces such as smart homes, computation is embedded everywhere: in toys, appliances, or the home's infrastructure. Most of these devices provide a pool of available resources of which the user can take advantage, interacting and creating a friendly environment. The inherent composability of these systems and other unique characteristics, such as low-cost energy, simplicity in module programming, and even their small size, make them a suitable candidate for dynamic and adaptive ambient systems. This research work focuses on what is defined as an "ambience", a space with a user-defined set of computational devices. A smart home is modeled as a collection of ambiences, where every ambience is capable of providing a pool of available resources to the user. In turn, the user is supposed to carry one or several personal devices able to interact with the ambiences, taking advantage of his inherent mobility. In this way, the whole system can benefit from resources discovered in the spatial proximity. A software architecture is designed, based on the implementation of low-cost algorithms able to detect and update the system when changes in an ambience occur. The ambience middleware implementation works in a wide range of architectures and OSs, while showing a negligible overhead in the time to perform the basic output operations.
Secured Communication Model for Mobile Cloud Computing (ijceronline)
International Journal of Computational Engineering Research (IJCER) is dedicated to protecting personal information and will make every reasonable effort to handle collected information appropriately. All information collected, as well as related requests, will be handled as carefully and efficiently as possible in accordance with IJCER standards for integrity and objectivity.
Security & privacy issues of cloud & grid computing networks (ijcsa)
Cloud computing is a new field in Internet computing that provides novel perspectives in internetworking technologies, and it has become a significant technology in the field of information technology. The security of confidential data is a very important area of concern, as it can lead to serious problems if unauthorized users gain access to it. Cloud computing should have proper techniques whereby data is segregated for security and confidentiality. This paper strives to compare and contrast cloud computing with grid computing, along with tools, simulation environments, and tips for storing data and files safely in the cloud.
This is a small and simple presentation on the topic of Mobile Cloud Computing, made for a symposium. The content in the slides is taken from Google and various research papers; the deck is purely for educational purposes and not meant for commercial publication.
A Smart ITS based Sensor Network for Transport System with Integration of Io... (IRJET Journal)
This document discusses a proposed smart transportation system that integrates Internet of Things (IoT), big data approaches, and cloud computing. The system would use sensors to capture transportation data from vehicles and infrastructure in real-time. This IoT data would generate large volumes of diverse data (the "4Vs" of big data) that could be stored and analyzed in the cloud to provide insights for transportation planning and management. The proposed system aims to combine these technologies to develop intelligent transportation system cloud services to help optimize traffic flow and infrastructure usage.
Cloud computing provides on-demand access to shared computing resources like servers, storage, databases, networking, software and analytics over the internet. It delivers computing as a utility or service rather than a product. There are different types of cloud services including Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). Clouds can be public, private, hybrid or community and are offered by major companies like Amazon, Microsoft, Google and IBM.
With the expanding volume of knowledge production and the variety of its themes, origins, forms, and languages, the most visible challenges facing researchers concern providing storage space for this information, the variety of processing strategies, and the flow of information between methods. Such significance, however, requires the support of a substantial infrastructure that includes large data centers comprising thousands of server units and other supporting equipment. The cloud is a type of computing based on the Internet. As a developed technology, cloud computing potentially offers an overall economic benefit, in that end users share a large, centrally managed pool of storage and computing resources rather than owning and managing their own systems. But it also needs to be environmentally friendly. This review paper gives a general overview of cloud computing: it describes cloud computing itself, its architecture, its characteristics, and its different services and deployment models. The paper is for anyone who has recently heard about cloud computing and wants to learn more about it.
The improvement of end to end delays in network management system using netwo... (IJCNCJournal)
The document summarizes research on improving end-to-end delays in a network management system using network coding. Specifically, it applies network coding to manage radio and television broadcast stations in a wireless network. The study shows that a proposed "Fast Forwarding Strategy" using network coding outperforms a classical routing strategy in reducing end-to-end delays from source to destination. It analyzes end-to-end delays theoretically using network calculus and conducts a practical study on a network of broadcast stations, finding the proposed strategy reduces delays compared to the classical strategy.
In the last decade, Peer-to-Peer technology has been thoroughly explored because it overcomes many limitations of the traditional client-server paradigm. Despite its advantages over a traditional approach, the ubiquitous availability of high-speed, high-bandwidth, and low-latency networks has favoured the traditional client-server paradigm. Recently, however, the surge of streaming services has spawned renewed interest in Peer-to-Peer technologies. In addition, services like geolocation databases and browser technologies like Web-RTC make a hybrid approach attractive.
In this paper we present algorithms for the construction and maintenance of a hybrid P2P overlay multicast tree based on topological distances. The essential idea of these algorithms is to build a multicast tree by choosing neighbours close to each other. The topological distances can easily be obtained by the browser using the geolocation API, so the algorithms can be implemented web-based in a distributed manner.
We present proofs of our algorithms as well as experimental results and evaluations.
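The tree-construction idea, attaching each joining peer to the topologically closest node already in the tree, can be sketched greedily. The coordinates below stand in for the geolocation data and are made up for the example; the paper's actual algorithms also handle maintenance as peers leave:

```python
# Sketch of the overlay-tree idea: each joining peer picks as its parent the
# nearest node already in the tree. Coordinates are illustrative stand-ins
# for geolocation data.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def build_tree(root, peers, positions):
    """Greedy overlay: parent of each new peer = nearest node already joined."""
    parent, joined = {}, [root]
    for peer in peers:
        parent[peer] = min(joined, key=lambda n: dist(positions[peer], positions[n]))
        joined.append(peer)
    return parent

positions = {"src": (0, 0), "p1": (1, 0), "p2": (5, 0), "p3": (5, 1)}
tree = build_tree("src", ["p1", "p2", "p3"], positions)
print(tree)
```

Because each hop of the resulting tree is short in topological distance, multicast traffic avoids long detours across the underlying network.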
Cloud Computing Relation With Smart CitiesTareq Alemadi
Cloud computing is a computing paradigm where information and computational resources are delivered as a service over the Internet. It allows users to access technology-enabled services from anywhere without requiring specialized hardware or software. The main advantages are low cost, scalability, and simplified access to applications and data. Potential disadvantages include security concerns, loss of control over certain aspects of the IT infrastructure, and vendor lock-in. A final piece of advice is to start with a pilot project to test cloud adoption before fully committing resources.
With a rapid growth of the mobile applications and development of cloud computing concept, mobile cloud
computing (MCC) has been introduced to be a potential technology for mobile services. MCC integrates the cloud
computing into the mobile environment and overcomes obstacles related to the performance, security etc discussed in
mobile computing. This paper gives an overview of the MCC including the definition, architecture, and applications. The
issues, existing solutions and approaches are presented.
IRJET- Resource Management in Mobile Cloud Computing: MSaaS & MPaaS with Femt...IRJET Journal
This document discusses resource management in mobile cloud computing using Mobile Software as a Service (MSaaS) and Mobile Platform as a Service (MPaaS) with femtocell and Wi-Fi networks. It proposes using femtocell and Wi-Fi private cloud networks to overcome mobile performance issues like limited battery life, storage, and bandwidth. MSaaS and MPaaS can further improve quality of service, pricing, and standard interfaces. The document suggests this approach can effectively manage resources and improve the performance of mobile cloud computing.
Self-tuning data centers aim to minimize human intervention through machine learning techniques. Current challenges include meeting service level agreements for performance and uptime while maximizing efficiency of resources and minimizing costs. A self-tuning architecture uses monitoring data to detect issues and make recommendations for scaling, migration, or tuning of resources without human input. This approach aims to optimize data centers so they can scale efficiently to support growing workloads and applications.
A concept based on the vision described by Mark Weiser nearly a decade ago:
“The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it”
Asyma E3 2014 The Impact of Cloud Computing on SME'sasyma
The document discusses how cloud computing can provide benefits to small and medium-sized enterprises (SMEs). It outlines how cloud services have evolved from time-sharing mainframes to today's software-as-a-service (SaaS) models. The cloud offers SMEs important advantages like reduced costs through economies of scale, lower barriers to entry since they don't need to purchase their own software and infrastructure, and improved scalability. While concerns around data security and control remain for some businesses, the cloud is becoming increasingly important for SMEs to remain competitive through improved productivity and flexibility.
The document summarizes key telecom trends including the growth of connected devices and machines, big data challenges, cloud computing advances, emerging applications, and the evolution of networks towards more intelligent, automated, and distributed architectures. Major technology directions include the internet of things, content-centric networking, heterogeneous networks, virtualization, and the changing role of telecom operators.
Cloud computing and artificial intelligenceFurqan Haider
Hi friends, I have created this simple yet helpful slide about application of AI in cloud computing for a presentation held at my class. i hope it would help those who looking for this topic since Its a rare topic and hardly anyone else ever made ppt about this topic. thanks
Have a nice day :)
Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. It allows users to access technology-based services from the Internet without knowledge of, expertise with, or control over the technology infrastructure that supports them.
QoS-Aware Middleware for Optimal Service Allocation in Mobile Cloud ComputingReza Rahimi
- The document discusses QoS-aware middleware for optimal service allocation in mobile cloud computing.
- It proposes a 2-tier cloud architecture consisting of local clouds and public clouds and develops algorithms to optimally allocate services for mobile users across these tiers.
- A location-time workflow model is used to represent mobile applications and QoS metrics like delay, power consumption and price are considered for optimal service allocation.
Emerging cloud computing paradigm vision, research challenges and development...eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
This document provides a high-level overview of cloud computing, including its history and key concepts. Cloud computing involves delivering computing resources such as hardware and software over a network, typically the internet. It allows users to access applications from anywhere through a web browser or light-weight application. The concept originated in the 1950s with mainframe computing and time-sharing, and cloud computing has grown with improvements in networking, virtualization, and utility computing.
Ambiences: on-the-fly usage of available resources through personal devicesijasuc
In smart spaces such as smart homes, computation is embedded everywhere: in toys, appliances, or the home's infrastructure. Most of these devices provide a pool of available resources which the user can take advantage of, interacting and creating a friendly environment. The inherent composability of these systems and other unique characteristics such as low-cost energy, simplicity in module programming, and even their small size, make them a suitable candidate for dynamic and adaptive ambient systems. This research work focuses on what is defined as an "ambience", a space with a user-defined set of computational devices. A smart home is modeled as a collection of ambiences, where every ambience is capable of providing a pool of available resources to the user. In turn, the user is supposed to carry one or several personal devices able to interact with the ambiences, taking advantage of his inherent mobility. In this way, the whole system can benefit from resources discovered in the spatial proximity. A software architecture is designed, which is based on the implementation of low-cost algorithms able to detect and update the system when changes in an ambience occur. The ambience middleware implementation works in a wide range of architectures and OSs, while showing a negligible overhead in the time to perform the basic output operations.
Secured Communication Model for Mobile Cloud Computingijceronline
Security & privacy issues of cloud & grid computing networksijcsa
Cloud computing is a new field in Internet computing that provides novel perspectives on internetworking technologies, and it has become a significant technology in the field of information technology. Security of confidential data is a very important area of concern, as unauthorized access to it can cause very big problems. Cloud computing should have proper techniques whereby data is segregated properly for security and confidentiality. This paper strives to compare and contrast cloud computing with grid computing, along with the tools and simulation environments, and tips for storing data and files safely in the cloud.
This is a small and simple presentation on the topic of mobile cloud computing, made for a symposium. The content of the slides is taken from Google and various research papers; the slides are purely for educational purposes and not meant for commercial publication.
A Smart ITS based Sensor Network for Transport System with Integration of Io...IRJET Journal
This document discusses a proposed smart transportation system that integrates Internet of Things (IoT), big data approaches, and cloud computing. The system would use sensors to capture transportation data from vehicles and infrastructure in real-time. This IoT data would generate large volumes of diverse data (the "4Vs" of big data) that could be stored and analyzed in the cloud to provide insights for transportation planning and management. The proposed system aims to combine these technologies to develop intelligent transportation system cloud services to help optimize traffic flow and infrastructure usage.
Cloud computing provides on-demand access to shared computing resources like servers, storage, databases, networking, software and analytics over the internet. It delivers computing as a utility or service rather than a product. There are different types of cloud services including Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). Clouds can be public, private, hybrid or community and are offered by major companies like Amazon, Microsoft, Google and IBM.
With expanding volumes of knowledge production, and the variety of its themes, origins, forms, and languages, researchers face significant problems in providing storage space for this information, in the diversity of processing strategies, and in managing the flow of information. Supporting such workloads requires substantial infrastructure, including large data centers comprising thousands of server units and other supporting equipment. Cloud computing, a form of Internet-based computing, possibly offers an overall economic benefit, in that end users share a large, centrally managed pool of storage and computing resources rather than owning and managing their own systems. But it needs to be environmentally friendly as well. This review paper gives a general overview of cloud computing: it describes the architecture, characteristics, and the different service and deployment models of cloud computing. This paper is for anyone who has recently heard about cloud computing and wants to learn more about it.
The improvement of end to end delays in network management system using netwo...IJCNCJournal
The document summarizes research on improving end-to-end delays in a network management system using network coding. Specifically, it applies network coding to manage radio and television broadcast stations in a wireless network. The study shows that a proposed "Fast Forwarding Strategy" using network coding outperforms a classical routing strategy in reducing end-to-end delays from source to destination. It analyzes end-to-end delays theoretically using network calculus and conducts a practical study on a network of broadcast stations, finding the proposed strategy reduces delays compared to the classical strategy.
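The core benefit of network coding in such broadcast settings can be shown with the classic XOR example. This is an illustrative sketch of the general technique, not the paper's exact Fast Forwarding Strategy: a relay broadcasts the XOR of two packets in one transmission, and each receiver recovers the packet it is missing by XOR-ing with the one it already holds.

```python
# XOR network coding at a relay (generic illustration): one coded
# broadcast replaces two separate unicast transmissions.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

pkt_a = b"station-A-frame"
pkt_b = b"station-B-frame"
coded = xor_bytes(pkt_a, pkt_b)   # the single coded broadcast

# Receiver 1 already holds pkt_a and wants pkt_b:
assert xor_bytes(coded, pkt_a) == pkt_b
# Receiver 2 already holds pkt_b and wants pkt_a:
assert xor_bytes(coded, pkt_b) == pkt_a
```

Halving the number of transmissions at the relay is what reduces the end-to-end delay that the study measures.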
In the last decade Peer to Peer technology has been thoroughly explored, because it overcomes many limitations compared to the traditional client server paradigm. Despite its advantages over a traditional approach, the ubiquitous availability of high speed, high bandwidth and low latency networks has supported the traditional client-server paradigm. Recently, however, the surge of streaming services has spawned renewed interest in Peer to Peer technologies. In addition, services like geolocation databases and browser technologies like Web-RTC make a hybrid approach attractive.
In this paper we present algorithms for the construction and the maintenance of a hybrid P2P overlay multicast tree based on topological distances. The essential idea of these algorithms is to build a multicast tree by choosing neighbours close to each other. The topological distances can be easily obtained by the browser using the geolocation API. Thus the implementation of algorithms can be done web-based in a distributed manner.
We present proofs of our algorithms as well as experimental results and evaluations.
THE IMPACT OF NODE MISBEHAVIOR ON THE PERFORMANCE OF ROUTING PROTOCOLS IN MANETIJCNCJournal
This document compares the performance of four routing protocols (AODV, DSR, OLSR, GRP) under the security attack of node misbehavior in mobile ad hoc networks (MANETs). The document presents background information on MANETs and the four routing protocols. It then discusses two types of misbehaving nodes (partially selfish and fully selfish) that are modeled in simulations. The simulations vary the percentage of misbehaving nodes and measure performance metrics like packet delivery ratio, end-to-end delay, data dropped, and load. The results show that packet delivery ratio decreases and data dropped increases as the percentage of misbehaving nodes increases for all protocols. OLSR generally has the lowest delay while
Managing, searching, and accessing iot devicesIJCNCJournal
In this paper a new method is proposed for the management of REST-based services acting as proxies for Internet-of-Things devices. The method is based on a novel way of monitoring REST resources via a hierarchical set of directories, with the possibility of smart searching for "the best" device according to at-the-place device availability and functionality, overall context (including geo-location), and personal preferences. The system is resistant to changes in the network addresses of the devices and their services, as well as of core system points such as directories. Thus, we successfully deal with the problem of (dis)connectivity and mobility of network nodes, and the problem of a "newcomer" device trying to connect to the network at an incidental place and time.
The main novelty of the approach is the sum of three basic achievements. Firstly, the system introduces unifying tools for efficient monitoring: on one hand, we may track the availability and load (statistics) of devices and services; on the other hand, we are able to search for "the best" device or service with different criteria, also formulated ad hoc and personalized. Secondly, the system is resistant to sudden changes of network topology and connections (basically IP addressing), and to frequent disconnections of any system element, including core nodes such as central directories. As a result, we may have a common view of the whole system at any time and place, with respect to its current state, even if the elements of the system are distributed across a wider area. Thirdly, any element of the system, from simple devices to global directories, is able to self-adjust to evolving parameters of the environment (including other devices as part of this environment). In particular, it is possible for a mobile "newcomer" device to interact with the system at any place and time without the need for prior installation, re-programming, determination of actual parameters, etc. The presented approach is a coherent all-in-one solution to basic problems related to the efficient usage of IoT devices and services, well suited to the hardware- and software-restricted world of the Internet of Things and Services. Fully implemented, the system is now being applied in an "intelligent" home and workplace with user-centric e-comfort management.
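Searching for "the best" device by combining availability, load, proximity and personal preferences could be sketched as a simple scoring function. The criteria names, weights, and device records below are illustrative assumptions, not the paper's actual directory schema.

```python
# Hypothetical device-ranking sketch: availability is a hard filter,
# then per-user preference, current load and distance are combined.

def device_score(d, prefs):
    if not d["available"]:
        return float("-inf")        # unavailable devices never win
    return (prefs.get(d["kind"], 1.0)
            - 2.0 * d["load"]       # prefer lightly loaded devices
            - 0.1 * d["distance_m"])  # prefer nearby devices

def best_device(devices, prefs):
    return max(devices, key=lambda d: device_score(d, prefs))["name"]

devices = [
    {"name": "lamp-1", "kind": "lamp", "available": True,
     "load": 0.1, "distance_m": 2},
    {"name": "tv-1",   "kind": "tv",   "available": True,
     "load": 0.8, "distance_m": 1},
    {"name": "lamp-2", "kind": "lamp", "available": False,
     "load": 0.0, "distance_m": 1},
]
print(best_device(devices, prefs={"lamp": 1.0, "tv": 1.5}))  # lamp-1
```

Here the heavily loaded TV loses despite the user's preference for it, and the unavailable lamp is filtered out entirely.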
A novel secure e contents system for multi-media interchange workflows in e-l...IJCNCJournal
The goal of e-learning is to benefit from the capabilities offered by new information technology (such as remote digital communications, multimedia, the Internet, cell phones, teleconferences, etc.) and to enhance the security of several government organizations, taking into consideration almost all the contents of e-learning: information content covering most queries from citizens, state firms, and corporations. Content provides most if not all basic and business services; communicative-link content brings citizens and state agencies together at all times and provides content security so that all workers on this network can work in a secure environment. Access to information is safeguarded as well. The main objective of this research is to build a novel multimedia security system (an encrypting/decrypting system) that will enable e-learning platforms to exchange multimedia data/information more securely.
Packet delivery ratio, delay, throughput, routing overhead, etc., are strict quality-of-service requirements for applications in ad hoc networks. So the routing protocol must not only find a suitable path, but the path should also satisfy the QoS constraints. Quality-of-service (QoS) aware routing is performed on the basis of resource availability in the network and the QoS requirements of the flow. In this paper we develop a source routing protocol that satisfies link-bandwidth and end-to-end delay constraints. Our protocol finds multiple paths between the source and the destination; one of these is selected for data transfer and the others are kept in reserve at the source node, where they can be used for route maintenance. Path selection is based strictly on bandwidth and end-to-end delay; if two or more paths have the same values for the QoS constraints, hop count is used as the tie-breaking parameter.
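The selection rule described above (filter by bandwidth and delay, keep one primary path, reserve the rest, tie-break on hop count) can be sketched directly. The path records and thresholds are illustrative, not the protocol's actual message format.

```python
# Sketch of QoS path selection: keep feasible paths, rank them, and
# split into one primary path plus reserves for route maintenance.

def select_paths(paths, min_bw, max_delay):
    feasible = [p for p in paths
                if p["bw"] >= min_bw and p["delay"] <= max_delay]
    # prefer high bandwidth, then low delay, then fewer hops (tie-break)
    feasible.sort(key=lambda p: (-p["bw"], p["delay"], p["hops"]))
    return (feasible[0], feasible[1:]) if feasible else (None, [])

paths = [
    {"id": 1, "bw": 5,  "delay": 40, "hops": 4},   # fails bandwidth check
    {"id": 2, "bw": 10, "delay": 30, "hops": 5},
    {"id": 3, "bw": 10, "delay": 30, "hops": 3},
]
primary, reserve = select_paths(paths, min_bw=8, max_delay=35)
print(primary["id"])  # path 3 wins the hop-count tie-break
```

Paths 2 and 3 tie on both QoS constraints, so the shorter path 3 becomes primary while path 2 is held in reserve at the source.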
A Mobile Ad hoc Network (MANET) is a self-configuring, infrastructure-less network of mobile devices connected by wireless links. Loopholes such as the wireless medium, the lack of a fixed infrastructure, dynamic topology, rapid deployment practices, and the hostile environments in which they may be deployed make MANETs vulnerable to a wide range of security attacks, and the wormhole attack is one of them. During this attack a malicious node captures packets from one location in the network and tunnels them to another colluding malicious node at a distant point, which replays them locally. This paper presents a cluster-based wormhole attack avoidance technique. The concept of hierarchical clustering with a novel hierarchical 32-bit node addressing scheme is used to avoid the attacking path during the route discovery phase of the DSR protocol, which is considered as the underlying routing protocol. A method for pinpointing the location of the wormhole nodes in the case of an exposed attack is also given.
Performance comparison of coded and uncoded ieee 802.16 d systems under stanf...IJCNCJournal
Worldwide Interoperability for Microwave Access (WiMAX), standardized as IEEE 802.16d, is a popular technology for broadband wireless communication systems. Orthogonal Frequency Division Multiplexing (OFDM) is the core of this technology. OFDM reduces Inter-Symbol Interference (ISI) and hence improves system performance (i.e., Bit Error Rate (BER)). To improve system performance further, error correction coding schemes have been included in WiMAX. It is widely accepted that a coded system outperforms an uncoded system. But the performance improvement of a coded system depends on the channel conditions. In this paper, we investigated and compared the performance of a coded and an uncoded WiMAX system under a practical channel model called Stanford University Interim (SUI). Different modulation schemes, namely BPSK, QPSK, 16-QAM, and 64-QAM, have been considered in this work. It is shown that the selection of a coded or uncoded WiMAX system should depend on the channel condition as well as on the modulation used. It is also shown that an uncoded system outperforms a coded system under some channel conditions.
Evaluation of scalability and bandwidthIJCNCJournal
Multi-Point to Multi-Point Traffic Engineering (MP2MP-TE) brings important scalability to Multi-Protocol Label Switching Traffic Engineering (MPLS-TE) networks. This paper emphasizes the support of Fast Reroute (FRR) in MPLS-TE networks by using MP2MP bypass TE-tunnels. Hence, one MP2MP bypass TE-tunnel can be used to protect several primary TE-tunnels. During failure, the primary TE-tunnel is encapsulated into the MP2MP bypass TE-tunnel, which calls for defining a new type of MPLS hierarchy, i.e. the multipoint-to-multipoint hierarchy. In this paper we present a simulation study that evaluates several fast rerouting scenarios, depending on the number of leaves of an MP2MP bypass TE-tunnel and on the number of primary TE-tunnels that can be encapsulated into one MP2MP bypass TE-tunnel. In particular, the scalability/bandwidth-efficiency tradeoff between these schemes is analyzed and valuable comparisons with existing approaches are presented.
Energy efficient clustering in heterogeneousIJCNCJournal
Cluster head election is a key technique used to reduce energy consumption and enhance the throughput of wireless sensor networks. In this paper, a new energy efficient clustering (E2C) protocol for heterogeneous wireless sensor networks is proposed. The cluster head is elected based on the predicted residual energy of the sensors, the optimal probability of a sensor becoming a cluster head, and its degree of connectivity. The probability threshold to compete for the role of cluster head is derived, and it has been extended for multi-level energy heterogeneity in the network. The proposed E2C protocol is simulated in MATLAB. Results obtained in the simulation show that the performance of the proposed E2C protocol is better than the stable election protocol (SEP) and the distributed energy efficient clustering (DEEC) protocol in terms of energy consumption, throughput, and network lifetime.
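The abstract does not give E2C's exact threshold formula, but the family of protocols it compares against (SEP, DEEC) builds on the LEACH-style round threshold, weighted here by residual energy as an illustrative stand-in for E2C's energy-aware election.

```python
# LEACH-style election threshold, weighted by residual energy
# (an illustrative sketch of the protocol family, not E2C itself).

import random

def ch_threshold(p, r):
    """Probability threshold for round r, with optimal CH fraction p
    (assumes 1/p is close to an integer, as in LEACH)."""
    return p / (1 - p * (r % round(1 / p)))

def elect_cluster_heads(nodes, p_opt, r, rng):
    """nodes: dict name -> residual-energy fraction in [0, 1].
    Each node self-elects with an energy-weighted probability."""
    heads = []
    for name, e_frac in nodes.items():
        t = ch_threshold(p_opt, r) * e_frac
        if rng.random() < t:
            heads.append(name)
    return heads

nodes = {"n1": 0.9, "n2": 0.1, "n3": 0.8, "n4": 0.5}
print(elect_cluster_heads(nodes, p_opt=0.25, r=0, rng=random.Random(1)))
```

Weighting the threshold by residual energy makes high-energy nodes more likely to take the costly cluster-head role, which is the intuition behind SEP, DEEC, and E2C alike.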
This document summarizes a research paper that proposes a new algorithm for calculating TCP timeout over wireless ad hoc networks. The current TCP timeout calculation does not adapt well to the unstable nature of wireless networks where factors like node mobility can cause estimated round trip times to vary greatly. The proposed algorithm aims to make timeout calculation more adaptive to network conditions in wireless ad hoc networks to improve TCP performance and quality of service for real-time multimedia applications. It describes challenges with existing TCP timeout approaches over wireless networks and reviews related literature before introducing the novel algorithm developed and tested through simulation.
Concepts and evolution of research in the field of wireless sensor networksIJCNCJournal
This document provides a comprehensive review of research in the field of wireless sensor networks (WSNs). It begins with an introduction to WSNs and discusses sensor nodes, network architectures, communication protocols, applications, and open research issues. It then reviews several related surveys that focus on specific aspects of WSNs such as applications, energy consumption techniques, security protocols, and testbeds. The document aims to provide an overview of all aspects of WSN research and identify promising directions like bio-inspired solutions. It discusses sensors and network types, architectures and services, and challenges like fault tolerance.
Distributed firewalls and ids interoperability checking based on a formal app...IJCNCJournal
To supervise and guarantee network security, the administrator uses different security components, such as firewalls, IDSs, and IPSs. For perfect interoperability between these components, they must be configured properly to avoid misconfiguration between them. Nevertheless, the existence of a set of anomalies between filtering rules and alerting rules, particularly in distributed multi-component architectures, is very likely to degrade network security. The main objective of this paper is to check whether a set of security components are interoperable. A case study using a firewall and an IDS as examples illustrates the usefulness of our approach.
AN OPTIMAL FUZZY LOGIC SYSTEM FOR A NONLINEAR DYNAMIC SYSTEM USING A FUZZY BA...IJCNCJournal
The impetus for this paper is the development of a Fuzzy Basis Function (FBF) that assigns, in an optimal fashion, a function approximation for a nonlinear dynamic system. A fuzzy basis function is applied to find the best location of the characteristic points by specifying the set of fuzzy rules. The advantage of this technique is that it may produce a simple and well-performing system, because it selects the most significant fuzzy basis functions to minimize an objective function on the output error for the fuzzy rules. A fuzzy basis function is a linguistic fuzzy IF-THEN rule; it provides a combination of numerical information, in the form of input-output pairs, and linguistic information, in the form of fuzzy rules. The proposed control scheme is applied to a magnetic ball suspension system.
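The fuzzy basis function expansion underlying this kind of approximator is usually written in the Wang-Mendel form; the symbols below are generic, since the paper's own notation is not given in this summary:

```latex
f(\mathbf{x}) = \sum_{j=1}^{M} \theta_j \, p_j(\mathbf{x}),
\qquad
p_j(\mathbf{x}) =
  \frac{\prod_{i=1}^{n} \mu_{A_i^j}(x_i)}
       {\sum_{k=1}^{M} \prod_{i=1}^{n} \mu_{A_i^k}(x_i)}
```

Each basis function $p_j$ is the normalized product of the membership functions $\mu_{A_i^j}$ of one IF-THEN rule, and the linear weights $\theta_j$ are chosen to minimize the output error, e.g. by retaining only the most significant basis functions.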
Adhoc mobile wireless network enhancement based on cisco devicesIJCNCJournal
This document discusses enhancing the performance of ad hoc wireless networks using Cisco devices. It proposes using Cisco routers and access points to create a three-layer ad hoc network with endpoints, intermediate coordinators, and a core router layer for improved processing, reliability, cost, power consumption, and accessibility. It then outlines various protocols and configurations that could be implemented using Cisco devices, including NAT, ACLs, DHCP, and wireless security settings. Diagrams and tables show an example network topology and device IP addresses and configurations.
A method of evaluating effect of qo s degradation on multidimensional qoe of ...IJCNCJournal
This paper studies a method of investigating the effect of IP performance (QoS) degradation on quality of experience (QoE) for a Web service; it considers usability based on ISO 9241-11 as the multidimensional QoE of a Web service (QoE-Web), together with the QoS parameters standardized by the IETF. Moreover, the paper tackles the clarification of the relationship between ISO-based QoE-Web and IETF-based QoS through multiple regression analysis. The experiment targets two actual Japanese online shopping services and involves 35 subjects. From the results, the paper quantitatively discusses how QoE-Web deteriorates owing to QoS degradation and shows that the proposed evaluation method is appropriate.
Regressive admission control enabled by real time qos measurementsIJCNCJournal
We propose a novel regressive principle for Admission Control (AC) assisted by real-time passive QoS monitoring. This measurement-based AC scheme accepts flows by default but, based on changes in the network QoS, makes regressive decisions on possible flow rejection, thus bringing cognition to the network path. The REgressive Admission Control (REAC) system consists of three modules performing the necessary tasks: QoS measurement, traffic identification, and the actual AC decision making and flow control. There are two major advantages to this new scheme: (i) significant optimization of the connection start-up phase, and (ii) continuous QoS knowledge of the accepted streams. In fact, the latter combined with the REAC decisions can enable guaranteed QoS without requiring any QoS support from the network. REAC was tested on a video streaming test bed and proved to have a timely and realistic match between the network's QoS and the video quality.
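The regressive principle (admit by default, reject later only if measured QoS degrades) can be sketched as a periodic check over passive per-flow measurements. The metric names and thresholds are illustrative assumptions, not REAC's actual module interfaces.

```python
# Sketch of a regressive rejection pass: every flow was already
# admitted; this check flags those whose measured QoS has degraded.

def reac_decision(flow_measurements, max_loss=0.02, max_delay_ms=150):
    """Return the admitted flows that should now be rejected, based on
    passive QoS measurements taken after admission."""
    return [f for f, m in flow_measurements.items()
            if m["loss"] > max_loss or m["delay_ms"] > max_delay_ms]

measurements = {
    "video-1": {"loss": 0.001, "delay_ms": 40},
    "video-2": {"loss": 0.050, "delay_ms": 60},   # too lossy: reject
    "voip-1":  {"loss": 0.010, "delay_ms": 200},  # too slow: reject
}
print(reac_decision(measurements))  # ['video-2', 'voip-1']
```

Because the check runs on flows already carrying traffic, the connection start-up phase needs no signalling, which is the first advantage the abstract claims.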
COST-EFFICIENT RESIDENTIAL ENERGY MANAGEMENT SCHEME FOR INFORMATION-CENTRIC N...IJCNCJournal
The home network (HOMENET) performs multiple important functions in the smart grid (SG), such as energy management, multimedia sharing, and lighting and climate control. In HOMENET there are numerous challenges, among which mobility and security are the basic requirements that need to be addressed with priority. Information-centric networking (ICN) is regarded as the future Internet, which subscribes to data in a content-centric manner irrespective of its location. Furthermore, it has special merit in mobility and security, since ICN supports in-network caching and self-contained security; these make ICN a potential solution for the home communication fabric. This paper aims to apply the ICN approach to the HOMENET system, which we call ICN-HOMENET. A proof-of-concept evaluation is then employed to evaluate the effectiveness of the proposed ICN-HOMENET approach in data security, device mobility, and efficient content distribution for developing the HOMENET system in SG. In addition, we propose a cost-efficient residential energy management (REM) scheme, called the ICN-REM scheme, for the ICN-HOMENET system, which encourages consumers to shift the start time of appliances from peak hours to off-peak hours to reduce their energy bills. To the best of our knowledge, this is the first attempt to propose an ICN-based REM scheme for a HOMENET system. In this proposal, we not only consider the conflicting requests from appliances and domestic power generation, but also argue that the energy management unit (EMU) should cooperate with measurement sensors to control specific appliances under specific conditions. Moreover, the corresponding performance evaluation validates its correctness and effectiveness.
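The bill-reduction arithmetic behind peak-to-off-peak shifting is simple to illustrate. The tariff rates and appliance loads below are made-up numbers, not values from the paper.

```python
# Illustrative cost of a deferrable-appliance schedule under a
# two-level tariff (hypothetical rates, not the paper's data).

PEAK_RATE, OFFPEAK_RATE = 0.30, 0.10   # $/kWh

def bill(schedule):
    """schedule: list of (kwh, 'peak' | 'offpeak') appliance runs."""
    return sum(kwh * (PEAK_RATE if slot == "peak" else OFFPEAK_RATE)
               for kwh, slot in schedule)

before = [(2.0, "peak"), (1.5, "peak")]        # washer + dryer at peak
after  = [(2.0, "offpeak"), (1.5, "offpeak")]  # start times shifted by the EMU
print(round(bill(before), 2), round(bill(after), 2))  # 1.05 0.35
```

Shifting both runs cuts this toy bill by two thirds; in the actual scheme the EMU must also respect the conflicting appliance requests and sensor conditions the abstract mentions.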
Priority scheduling for multipath video transmission in wmsnsIJCNCJournal
This document proposes a priority scheduling mechanism for multipath video transmission over wireless multimedia sensor networks (WMSNs). It assigns different priorities to different types of video frames (I, P, B frames) and schedules them over multiple paths based on path conditions. Higher priority I and P frames are scheduled along higher quality paths. It divides node buffers into queues and uses round-robin scheduling between the queues based on frame type to prioritize transmission of higher priority frames. Path scores are calculated based on metrics like bandwidth, delay and energy to identify highest quality paths for important video packets. Simulation results show this approach improves video quality at the receiver.
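The path-score and priority-mapping steps described above can be sketched as follows; the weights and path records are illustrative assumptions, and the queue-level round-robin scheduling is omitted for brevity.

```python
# Sketch: score paths from bandwidth, delay and energy, then map
# higher-priority frame types onto higher-scoring paths.

FRAME_PRIORITY = {"I": 0, "P": 1, "B": 2}   # 0 = highest priority

def path_score(p, w_bw=1.0, w_delay=0.5, w_energy=0.3):
    """Higher is better: reward bandwidth and energy, penalize delay."""
    return w_bw * p["bw"] - w_delay * p["delay"] + w_energy * p["energy"]

def assign(frames, paths):
    ranked = sorted(paths, key=path_score, reverse=True)
    out = {}
    for f in sorted(frames, key=FRAME_PRIORITY.get):
        # best path for I-frames, next-best for P, and so on
        out[f] = ranked[min(FRAME_PRIORITY[f], len(ranked) - 1)]["id"]
    return out

paths = [{"id": "p1", "bw": 10, "delay": 20, "energy": 5},
         {"id": "p2", "bw": 8,  "delay": 5,  "energy": 9}]
print(assign(["B", "I", "P"], paths))
```

With these weights the low-delay path p2 scores highest and carries the I-frames, while P- and B-frames share the lower-quality path p1.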
INFRASTRUCTURE CONSOLIDATION FOR INTERCONNECTED SERVICES IN A SMART CITY USIN...cscpconf
Sustainability, appropriate use of natural resources, and providing a better quality of life for citizens have become prerequisites that change the traditional concept of a smart city. A smart city needs to use the latest generation of Information Technologies (IT) and hardware to improve services and data, to create a balanced environment between the ecosystem and inhabitants. This paper analyses the advantages of using a private cloud architecture to share hardware and software resources when required. Our case study is Guadalajara, which has seven municipalities, each of which monitors air quality. Each municipality has a set of servers to process information independently, and consists of information systems for the transmission and storage of data with other municipalities. We analysed the behaviour of the carbon footprint during the years 1999-2013 and observed a pattern in each season. Thus our proposal requires municipalities to use a cloud-based solution that allows managing and consolidating infrastructure to minimize maintenance costs and electricity consumption, reducing the carbon footprint generated by the city.
A Comparison of Cloud Execution Mechanisms Fog, Edge, and Clone Cloud Computing IJECEIAES
Cloud computing is a technology that was developed a decade ago to provide uninterrupted, scalable services to users and organizations. Cloud computing has also become an attractive feature for mobile users due to the limited features of mobile devices. The combination of cloud technologies with mobile technologies resulted in a new area of computing called mobile cloud computing. This combined technology is used to augment the resources existing in Smart devices. In recent times, Fog computing, Edge computing, and Clone Cloud computing techniques have become the latest trends after mobile cloud computing, which have all been developed to address the limitations in cloud computing. This paper reviews these recent technologies in detail and provides a comparative study of them. It also addresses the differences in these technologies and how each of them is effective for organizations and developers.
This document discusses green cloud computing and the need to develop optimized algorithms and applications to improve energy efficiency. It notes that while cloud computing provides economic benefits through shared infrastructure, the growing demand has increased energy consumption and carbon emissions. The document examines various technologies that enable green computing in clouds, such as virtualization, and proposes a green cloud architecture framework to improve efficiency from both user and provider perspectives. It stresses the importance of developing optimized algorithms and applications to minimize resource usage and route data to lower-cost energy regions.
This document discusses cyber forensics in cloud computing. It begins by defining cloud computing and noting that cloud organizations have yet to establish well-defined forensic capabilities, making it difficult to investigate criminal activity. The document then provides an overview of cloud computing concepts like virtualization, server virtualization, and the three main cloud service models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). It proposes a cloud computing service architecture based on these three models and their relationships.
This document discusses cyber forensics in cloud computing. It begins with an introduction to cloud computing concepts like virtualization, infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). It then proposes steps for cloud forensics investigations, including collecting and storing data, performing signature-based and behavior-based analysis, and using network tools for forensics analysis and invasion detection. The goal is to define the new area of cloud forensics and analyze its challenges and opportunities.
The document provides an introduction to cloud computing. It discusses how computing is transitioning to a utility-based model where resources are accessed on-demand via the internet. Cloud computing allows users to access applications, storage, and other services from any device. Early computer scientists envisioned this type of computing utility. The document outlines characteristics of cloud computing like multi-tenancy, scalability, and the various deployment models. It also discusses some of the technical aspects that enabled cloud computing such as virtualization, distributed systems, and web technologies. Challenges of cloud computing are security, privacy, outsourcing, and heterogeneity between provider platforms.
Cloud computing can provide the ability to flexibly outsource software for supply chain collaboration and infrastructure needs in a better way. Instead of maintaining and paying for maximum use, this technology puts forward a method that provides the flexibility to add capacity along the way, depending on the overall business process and network model of the supply chain. Beyond the usual technology hype, the worth of cloud computing is that it can be the right technology for supporting and managing a constantly changing and dynamic network, and thus for supply chain management, because these are exactly today's visibility and supply chain collaboration needs. Efficient supply chains are a vital necessity for many companies. Supply chain management acts on operational processes, divergent and consolidated information flows, and interaction processes with a variety of business partners. Efforts of recent years have usually faced this diversity by creating and organizing central information system solutions. Taking into account all the well-known problems of these central information systems, the question arises whether cloud-based information systems represent a better alternative for establishing IT support for supply chain management.
Load Balancing In Cloud Computing:A ReviewIOSR Journals
Abstract: As the IT industry grows day by day, the need for computing and storage is increasing rapidly. The amount of data exchanged over the network is constantly increasing, and processing this increasing mass of data requires more computer equipment to meet the various needs of organizations. To better capitalize on their investment, over-equipped organizations open their infrastructures to others by exploiting the Internet and other important technologies, such as virtualization, creating a new computing model: cloud computing. Cloud computing is one of the significant milestones of recent times in the history of computing. Its basic concept is to provide a platform for sharing resources, including software and infrastructure, with the help of virtualization. This paper presents a brief review of cloud computing, with the main emphasis on load balancing techniques in cloud computing.
Keywords: Cloud Computing, Load Balancing, Dynamic Load Balancing, Virtualization, Data Center.
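The dynamic load-balancing idea emphasized in this abstract can be sketched as a scheduler that sends each incoming request to the server currently handling the fewest active requests. This is a minimal illustrative sketch; the server names and the least-loaded policy are assumptions, not the paper's specific algorithm.

```python
# Minimal sketch of dynamic load balancing: each request is assigned to
# the server with the fewest active requests. Server names are
# hypothetical placeholders.

class LoadBalancer:
    def __init__(self, servers):
        # Track the number of active requests per server.
        self.load = {server: 0 for server in servers}

    def assign(self):
        # Pick the least-loaded server; ties go to the first-listed server.
        server = min(self.load, key=self.load.get)
        self.load[server] += 1
        return server

    def release(self, server):
        # A request finished; free one slot on that server.
        self.load[server] -= 1

lb = LoadBalancer(["vm-a", "vm-b", "vm-c"])
first = lb.assign()   # all idle, tie broken by order -> "vm-a"
second = lb.assign()  # -> "vm-b"
lb.release(first)
third = lb.assign()   # "vm-a" is least loaded again -> "vm-a"
```

Real cloud balancers also weigh server capacity and health checks, but the core decision loop looks like this.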
This document provides an overview of cloud computing, including its key characteristics, service models, deployment models, examples, advantages and limitations. Specifically, it defines cloud computing as the delivery of computing resources such as servers, storage, databases and software over the internet. It describes the main service models of software as a service (SaaS), platform as a service (PaaS) and infrastructure as a service (IaaS). It also outlines the deployment models of public, private and hybrid clouds, and discusses advantages such as scalability and cost savings, and disadvantages such as security issues and dependence on internet connectivity.
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
A review on orchestration distributed systems for IoT smart services in fog c... (IJECEIAES)
This paper provides a review of orchestration of distributed systems for IoT smart services in fog computing. Cloud infrastructure alone cannot handle the flow of information given the abundance of data, devices and interactions, so fog computing has become a new paradigm to overcome the problem. One of the first challenges was to build orchestration systems that activate the clouds and execute tasks throughout the whole system, accounting for large geographical distances, heterogeneity and the low latency needed to compensate for the limitations of cloud computing. Open problems for distributed orchestration in fog computing include meeting the high-reliability and low-delay requirements of IoT application systems and forming a larger computer network, like a fog network, across different geographic sites. This paper reviewed approximately 68 articles on orchestration of distributed systems for fog computing. The result presents the orchestration systems and evaluation criteria for fog computing, compared in terms of Borg, Kubernetes, Swarm, Mesos, Aurora, heterogeneity, QoS management, scalability, mobility, federation, and interoperability. The significance of this study is to support researchers in developing orchestration of distributed systems for IoT smart services in fog computing, with a focus on the IR4.0 national agenda.
A review on serverless architectures - function as a service (FaaS) in cloud ... (TELKOMNIKA JOURNAL)
With the emergence of cloud computing as the inevitable IT computing paradigm, the perception of the compute reference model and the building of services have evolved into new dimensions. Serverless computing is an execution model in which the cloud service provider dynamically manages the allocation of the server's compute resources. The consumer is billed for the actual volume of resources consumed, instead of paying for pre-purchased units of compute capacity. This model evolved as a way to achieve optimal cost and minimal configuration overhead, and it increases an application's ability to scale in the cloud. The potential of the serverless compute model is well understood by the major cloud service providers and is reflected in their adoption of the serverless computing paradigm. This review paper presents a comprehensive study of serverless computing architecture and also reports an experiment on the working principle of the serverless computing reference model adopted by AWS Lambda. The various research avenues in serverless computing are identified and presented.
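The pay-per-use billing idea described above can be made concrete with a small arithmetic sketch: the bill depends only on resources actually consumed per invocation. The rate and workload figures below are made-up illustrations, not any real provider's prices.

```python
# Sketch of FaaS pay-per-use billing: cost is proportional to the
# GB-seconds actually consumed, not to pre-purchased capacity.
# The rate here is hypothetical, not a real provider's price.

def faas_cost(invocations, avg_duration_s, memory_gb, rate_per_gb_second):
    # Total resource consumption in GB-seconds, then priced linearly.
    gb_seconds = invocations * avg_duration_s * memory_gb
    return gb_seconds * rate_per_gb_second

# 1,000,000 invocations of 0.2 s each at 0.5 GB, at a hypothetical rate:
cost = faas_cost(1_000_000, 0.2, 0.5, 0.00002)
```

The key property is that an idle function costs nothing: with zero invocations the bill is zero, unlike a reserved virtual machine.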
A PROPOSED MODEL FOR IMPROVING PERFORMANCE AND REDUCING COSTS OF IT THROUGH C... (ijccsa)
This document discusses a proposed model for improving performance and reducing costs of IT through cloud computing for Egyptian business enterprises. It begins with an introduction and background on cloud computing. The proposed model has four layers: infrastructure as a service (IaaS), platform as a service (PaaS), software as a service (SaaS), and client layer. The IaaS layer provides virtual hardware resources using virtualization. The PaaS layer manages data processing and resource management. The SaaS layer provides internet-based application services. Finally, the client layer directly interfaces with users. The model aims to improve performance and reduce costs for Egyptian businesses through flexible, efficient cloud services and infrastructure.
A Proposed Model for Improving Performance and Reducing Costs of IT Through C... (neirew J)
Information technologies affect today's large business enterprises, from data processing and transactions to achieving goals efficiently and effectively; they create new business opportunities and new competitive advantages, so services must match recent IT trends such as cloud computing. Cloud computing technology can provide all IT services. It therefore offers an adaptable alternative to the current technology model, reducing costs (both fixed and ongoing) through the proliferation of high-speed Internet connections, renting rather than acquiring, cheaper yet powerful computing technology, and effective performance. Public and private clouds are characterized by flexibility and operational efficiency that reduce costs and improve performance. Cloud computing also generates business creativity and innovation from the collaborative ideas of users; presents cloud infrastructure and services; paves the way to new markets; offers security in public and private clouds; and provides environmental benefits by utilizing green energy technology. This paper concentrates mainly on cloud computing.
Grid computing, or network computing, was developed to make computing power available in the same way electric power is available from the grid: we simply plug in, and whoever needs power may use it. In grid computing, if a system needs more power than it has available, it can share the computation with other machines connected to the grid. In this way we can use the power of a supercomputer without a huge cost, and CPU cycles that were previously wasted can also be utilized. To perform grid computation on computers joined through the internet, software that supports grid computation must be installed on each computer inside the VO (virtual organization). The software handles information queries, storage management, processing scheduling, authentication and data encryption to ensure information security.
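The sharing idea above, where a job too big for one machine borrows spare capacity from others in the VO, can be sketched as a greedy scheduler. This is an illustrative toy; the machine names, capacity units, and greedy policy are assumptions, not any real grid middleware's scheduler.

```python
# Toy sketch of grid scheduling: a job that exceeds one machine's spare
# capacity is split across other machines in the virtual organization.
# Machine names and capacities are hypothetical.

def schedule(job_units, machines):
    """Greedily spread job_units across machines' spare CPU units.

    machines: dict of machine name -> spare units.
    Returns an allocation dict, or None if the grid lacks capacity.
    """
    if job_units > sum(machines.values()):
        return None  # even the whole grid cannot run this job
    allocation = {}
    for name, spare in machines.items():
        if job_units == 0:
            break
        take = min(spare, job_units)
        if take:
            allocation[name] = take
            job_units -= take
    return allocation

grid = {"node1": 4, "node2": 8, "node3": 2}
plan = schedule(10, grid)   # node1 fills up first, node2 takes the rest
```

Real grid middleware additionally handles the authentication, scheduling and encryption duties mentioned above; only the capacity-splitting step is shown here.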
CONTAINERIZED SERVICES ORCHESTRATION FOR EDGE COMPUTING IN SOFTWARE-DEFINED W... (IJCNCJournal)
As SD-WAN disrupts legacy WAN technologies and becomes the preferred WAN technology adopted by corporations, and Kubernetes becomes the de facto container orchestration tool, the opportunities for deploying edge-computing containerized applications running over SD-WAN are vast. Service orchestration in SD-WAN has not received enough attention, resulting in a lack of research focused on service discovery in these scenarios. In this article, an in-house service discovery solution that works alongside Kubernetes' master node is developed, allowing improved traffic handling and a better user experience when running micro-services. The service discovery solution was conceived following a design science research approach. Our research includes the implementation of a proof-of-concept SD-WAN topology alongside a Kubernetes cluster that allows us to deploy custom services and delimit the necessary characteristics of our in-house solution. The implementation's performance is also tested based on the time required to update the discovery solution in response to service updates. Finally, some conclusions and modifications are pointed out based on the results, and possible enhancements are discussed.
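The service-discovery concept this abstract centers on can be illustrated with a generic in-memory registry: services announce their endpoints, and clients resolve names to the current endpoint. This sketch is an assumption for illustration only, not the paper's in-house solution or the Kubernetes API; the service names and addresses are invented.

```python
# Generic sketch of service discovery: services (or an orchestrator
# observing them) register endpoints; clients look up the current one.
# Names and addresses below are hypothetical.

class ServiceRegistry:
    def __init__(self):
        self.endpoints = {}

    def register(self, service, endpoint):
        # Announce or update the endpoint for a named service.
        self.endpoints[service] = endpoint

    def deregister(self, service):
        # Remove a service that is no longer running.
        self.endpoints.pop(service, None)

    def lookup(self, service):
        # Resolve a service name to its current endpoint, or None.
        return self.endpoints.get(service)

registry = ServiceRegistry()
registry.register("web", "10.0.0.5:8080")
addr = registry.lookup("web")
registry.register("web", "10.0.0.6:8080")  # update after rescheduling
```

In a Kubernetes setting this bookkeeping is driven by watching the cluster for pod and service changes; the point here is only the register/lookup/update cycle that discovery must keep consistent.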
In this paper we study cloud computing, its types, and the need to use cloud computing. We also study the architecture of mobile cloud computing, and we include new techniques for backing up and restoring data from mobile devices to the cloud. Here we propose applying a compression technique while backing up and restoring data from the smartphone to the cloud and from the cloud to the smartphone.
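The compress-before-backup idea can be sketched in a few lines: data is compressed on the phone before upload and decompressed on restore, shrinking the transferred payload. This uses zlib purely as an illustration; the abstract does not specify which compression technique the authors apply.

```python
import zlib

# Sketch of compression-assisted backup/restore between phone and cloud:
# compress before upload, decompress after download. zlib stands in for
# whatever compression technique the backup system actually uses.

def backup(data: bytes) -> bytes:
    # Highest compression level: smallest upload at some CPU cost.
    return zlib.compress(data, level=9)

def restore(blob: bytes) -> bytes:
    return zlib.decompress(blob)

payload = b"sensor log entry " * 500   # repetitive data compresses well
blob = backup(payload)
assert restore(blob) == payload        # round trip is lossless
```

The trade-off is transfer size versus on-device CPU and battery, which is why a mobile backup scheme may pick a lighter compression level than the maximum shown here.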
Cloud computing is affecting the software development process. It provides resources over the internet rather than requiring direct physical access. This allows developers to access resources from anywhere and reduces costs since users only pay for what they use. Cloud computing introduces new concepts like mesh computing and pay-per-use services. Research is investigating how cloud computing reduces development costs and time by making services easily accessible. However, security and privacy concerns remain an issue with storing data on external provider networks rather than locally.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with devops.
Topics covered:
CI/CD with in UiPath
End-to-end overview of CI/CD pipeline with Azure devops
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AIVladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...
International Journal of Computer Networks & Communications (IJCNC) Vol.8, No.1, January 2016
DOI : 10.5121/ijcnc.2016.8108

INFRASTRUCTURE OF SERVICES FOR A SMART CITY
USING CLOUD ENVIRONMENT

Jorge F Hernandez1, Victor M Larios1, Manuel Avalos1 and Ignacio Silva-Lepe2
1Department of Information Systems, CUCEA, UDG, Guadalajara, Mexico
2Thomas J. Watson Research Center, Yorktown Heights, New York, USA
ABSTRACT
Sustainability, appropriate use of natural resources and providing a better quality of life for citizens
have become prerequisites for changing a traditional city into a smart city. A smart city needs to use
latest-generation Information Technologies (IT) and hardware to improve services and data, in order to create a
balanced environment between the ecosystem and inhabitants. This paper analyses the advantages of using
a private cloud architecture to share hardware and software resources when required. Our case study
is Guadalajara, which has seven municipalities, each of which monitors air quality. Each municipality has a
set of servers to process information independently, together with information systems for the transmission
and storage of data with other municipalities. We analysed the behaviour of the carbon footprint during the
years 1999-2013 and observed a pattern in each season. Thus, our proposal requires municipalities to
use a cloud-based solution that allows managing and consolidating infrastructure to minimize maintenance
costs and electricity consumption, reducing the carbon footprint generated by the city.
KEYWORDS
Smart Cities; Cloud Architectures; Cost Estimation; City Services
1. INTRODUCTION
Improving the services offered by a city and promoting a balance between environmental
sustainability and citizens' quality of life has become an important goal of what we define today
as Smart Cities [1]. IT offers a convenient way to connect processes in a city, optimize resources
for the benefit of communities and forecast the dynamics of the urban environment to better adapt
solutions towards the well-being of citizens. However, citizens in smart cities have to deal with
both the physical and the digital dimension.
During their daily activities in the urban fabric, inhabitants have a unique identity to access and
engage services such as energy, water, communication and transport, among others. In addition,
cities need to offer secure digital platforms for their inhabitants, and IT infrastructure becomes
vital in terms of communication, processing capabilities and availability. One solution to
adapt and scale to a city's service demand is to shift city IT departments to the Cloud
Computing paradigm [8]. Cloud computing has become one of the most used options in information
systems because it can optimize, organize and maintain software services and hardware across the
Internet [12].

The use of this technology has allowed companies mainly to reduce maintenance and support
costs, and to focus on the administration of their applications or hardware.
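As a back-of-the-envelope illustration of this cost argument (all prices and utilization figures below are hypothetical assumptions, not data from the paper), the trade-off between owning a dedicated server and paying only for cloud time can be sketched as:

```python
# Hypothetical cost comparison: owning a server vs. renting cloud capacity
# on a "pay as you go" basis. All figures are illustrative assumptions.

def ownership_cost(purchase_price, lifetime_years, yearly_maintenance, yearly_power):
    """Average yearly cost of running a dedicated server."""
    return purchase_price / lifetime_years + yearly_maintenance + yearly_power

def cloud_cost(hours_used_per_year, hourly_rate):
    """Yearly cost when paying only for the hours a VM actually runs."""
    return hours_used_per_year * hourly_rate

owned = ownership_cost(purchase_price=3000, lifetime_years=5,
                       yearly_maintenance=400, yearly_power=350)
# A municipal workload that only needs about 6 hours of processing per day:
rented = cloud_cost(hours_used_per_year=6 * 365, hourly_rate=0.20)

print(f"dedicated server: ${owned:.2f}/year")
print(f"pay-as-you-go VM: ${rented:.2f}/year")
```

With these assumed numbers, a workload that is idle most of the day is far cheaper to rent than to host on dedicated hardware; the balance shifts back toward ownership as utilization approaches 24/7.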
The Cloud allows grouping various types of hardware and merging them into a single entity for
better and more efficient management. Hence, Cloud Computing can work in three categories of
services. First, we have Infrastructure as a Service (IaaS), which provides virtualization for
using hardware resources; this category can offer sensors, storage or processing capacities on
demand. Second is Platform as a Service (PaaS), where users can run Web applications
without the complexity of maintaining and running the associated infrastructure; this is critical for
e-government service portals. Third, Software as a Service (SaaS), where licenses for critical
software in processes such as analytics can be used on demand. In Fig 1, we show some
companies that offer these types of cloud computing service.

Fig 1. Examples of services in Cloud Computing: IaaS (Amazon Web Services, Joyent) offers services
such as storage, monitoring and remote datacenter communication; PaaS (Heroku, Amazon EC2, Bluemix)
is a way to develop, test and execute services using different types of infrastructure (software and
hardware); SaaS (MS Office Online, Dropbox, Adobe) seems to run a process locally but runs remotely.

A key aspect of the cloud is the use of virtual machines to achieve its elasticity; a virtual machine
is a software application that emulates a real computer, with software and hardware features
limited to executing some task. Cloud computing provides a set of principles establishing the rules
among suppliers and customers. An important aspect of the Cloud is the use of
resources on a "pay as you go" basis, in which the customer must pay for the time that a
service, platform or software license is executed/used on a cloud provider.

A service as a process for a Smart City may need hardware, software or a combination of them.
Cloud computing offers benefits of elasticity, resilience, performance, productivity, scalability,
etc. Hence, this technology offers a better strategy for city governments to manage IT
services. This paper is based on the Guadalajara Smart City project, selected as an IEEE pilot project
to share the experience of best practices for smart cities worldwide. Moreover, Guadalajara is not
only a city but also a metropolitan area composed of seven interconnected municipalities, and we
observed and analysed that each municipality has a traditional IT infrastructure consisting of a
cluster of servers, routers and intranet access to communicate with other municipalities. In its
current state, the data centers in each municipality are isolated infrastructure because they are not
interconnected and do not share information and processing capabilities for the metropolitan area.

Our proposal is to consolidate municipal infrastructure by setting up a private cloud for the
metropolitan area with the existing infrastructure. The benefit of using a cloud computing
architecture is that it allows the acquisition of any hardware configuration (supported by virtualization) in
a few minutes. To better understand how Cloud Infrastructure can bring value for Smart Cities,
we introduce a Use Case that is based on historical data about pollution in the metropolitan area;
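The consolidation argument can be made concrete with a small sketch. Assuming, hypothetically, that each municipality's workload fits in a virtual machine of known size, a simple first-fit-decreasing packing estimates how few physical hosts the shared private cloud would need, compared with one dedicated server per municipality:

```python
# Illustrative sketch (not from the paper): packing municipal workloads,
# expressed as VM CPU demands, onto shared physical hosts using
# first-fit decreasing. Fewer powered-on hosts means lower electricity
# consumption and a smaller carbon footprint.

def first_fit_decreasing(vm_demands, host_capacity):
    """Return per-host allocations after first-fit-decreasing packing."""
    hosts = []  # each entry is a list of VM demands placed on that host
    for demand in sorted(vm_demands, reverse=True):
        for host in hosts:
            if sum(host) + demand <= host_capacity:
                host.append(demand)
                break
        else:
            hosts.append([demand])  # open a new physical host
    return hosts

# Seven municipalities, each today running its own server (7 machines),
# with hypothetical average CPU demands in cores:
demands = [4, 3, 3, 2, 2, 1, 1]
hosts = first_fit_decreasing(demands, host_capacity=8)
print(f"servers before consolidation: {len(demands)}")
print(f"hosts after consolidation:   {len(hosts)}")
```

Under these assumed demands, seven always-on municipal servers collapse onto two shared hosts, which is the kind of reduction in powered hardware the proposal targets.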
4. International Journal of Computer Networks & Communications (IJCNC) Vol.8, No.1, January 2016
Fig. 3 Layers
For managing virtual machines can use applications such as OpenStack, VMware vSphere,
CloudStack, Xen, among others. Government entities usually use open source solutions to
minimize licensing costs nonetheless; they coul
machines.
We can identify the Administrator as the process of monitoring the behaviour of each virtual
machine in the cloud and to perform operations such as: increase or decrease resources hardware
or software, delete, or create a new virtual machine. OpenStack Dashboard is an option for this
type of module to support a better management.
cloud service categories already explained fit into the Smart Cities cycles.
The city deploys sensors and actuators that can be connected to the cloud as an IaaS, offering a
global management, security and capacity to scale on demand. In particular, sensors produce data
to be curated and stored in the open data city repository.
curate and provide storage as well as processing capabilities for analytics. To deal with the
complexity of Open Data repositories, specialized software for analytics should be used requiring
a SaaS service to use licenses on demand.
One of the key elements of Smart Cities[4] is to break the silos of information among the
different government offices offering to share all in a common open data city platform.
action allows avoiding duplicated efforts and investments
provide solutions
Fig. 4
In order to provide more efficient services, it is possible to correlate information datasets from
different indicators. Moreover, a Smart City needs to have a strategy for metrics to understand its
performance and where to invest to reach its sustainabili
Metrics for smart cities need to have Key Performance Indicators (KPIs) as well as a subset of
indicators for example in Cohen’s Wheel there is a section called: Smart Gov, which contains
Clou
Anal
(Pa
International Journal of Computer Networks & Communications (IJCNC) Vol.8, No.1, January 2016
Layers in a cloud computing environment
For managing virtual machines can use applications such as OpenStack, VMware vSphere,
CloudStack, Xen, among others. Government entities usually use open source solutions to
minimize licensing costs nonetheless; they could use paid software to manage their virtual
We can identify the Administrator as the process of monitoring the behaviour of each virtual
machine in the cloud and to perform operations such as: increase or decrease resources hardware
delete, or create a new virtual machine. OpenStack Dashboard is an option for this
type of module to support a better management.Given the layers in Fig. 3, Fig. 4 shows how the
cloud service categories already explained fit into the Smart Cities cycles.
The city deploys sensors and actuators that can be connected to the cloud as an IaaS, offering global management, security and capacity to scale on demand. In particular, sensors produce data to be curated and stored in the open data city repository. Cloud PaaS is the most indicated to curate and provide storage as well as processing capabilities for analytics. To deal with the complexity of Open Data repositories, specialized software for analytics should be used, requiring a SaaS service to use licenses on demand.
One of the key elements of Smart Cities [4] is to break the silos of information among the different government offices, offering to share everything in a common open data city platform. This action allows avoiding duplicated efforts and investments to understand the city dynamics and provide solutions.
Fig. 4 Smart City Cycles related to the cloud
In order to provide more efficient services, it is possible to correlate information datasets from
different indicators. Moreover, a Smart City needs to have a strategy for metrics to understand its
performance and where to invest to reach its sustainability.
Metrics for smart cities need to have Key Performance Indicators (KPIs) as well as a subset of indicators; for example, in Cohen's Wheel there is a section called Smart Gov, which contains
infrastructure; this option could generate alternative metrics such as latency, multitenancy and others. This means that Smart Cities should work with a holistic vision integrating KPIs [5] and
indicators to understand city dynamics and decide how the services should adapt. This concept is
based on a systemic approach where a city is a System of Systems or can be modelled and
understood as a Complex System.
For this reason, the city should decide how to select KPIs and indicators. Given our work in the IEEE Smart Cities initiative, the model used is shown in Fig. 5, based on the Cohen Wheel. The model proposes six important KPIs related to Smart Economy, Smart Government, Smart People, Smart Living, Smart Mobility and Smart Environment [2][3]. Each KPI has a subset of actions and indicators in a secondary ring. Depending on the number of information sources available to feed the indicators provided by the city, there could be more outer rings, which themselves induce further external rings.
This means that the more the city deploys sensor/actuator networks, the more rings that will
appear, resulting in more accurate models to analyse the behaviours and dynamics of the city.
That is the reason to have a good cloud strategy in order to scale the management of KPIs, indicators and actions [6]. Hence, the Smart City project in Guadalajara, following the principles of metrics based on the Cohen Wheel KPIs, requires an architecture to migrate the metropolitan area of Guadalajara to the cloud. This is the main problem and challenge presented in this paper.
A further issue is that in the metropolitan area of Guadalajara, as in every city composed of interconnected municipalities, each municipality has autonomous infrastructure and budgets. Since all municipalities are interconnected, a challenge is to connect all data centers while respecting their autonomy. A proposed solution is to create a private cloud to support the three types of
cloud services. As a use case to create a methodology to estimate the performance and cost of the
private cloud integration among the interconnected municipalities, we identified sensors, open
data and processing requirements as an example that can be used as reference for all KPIs of the
Smart City in Guadalajara.
The sensors are real devices in the city creating datasets of air pollution in various zones of the
city. The created datasets are stored in an open web service, and we propose a system that
analyses the air pollution data flows to identify harmful pollution levels in zones of the
metropolitan area to provide actions for the benefit of citizens (alarms, transport re-routing, etc.).
The contribution of this paper is to process and analyse the information produced by an alert
system. The system will be fully supported by the Cloud resulting in a consolidation of
infrastructure across municipalities in the metropolitan area.
Fig. 5 Representative diagram of Cohen's Smart City Wheel
Finally, we propose a methodology to estimate the costs of cloud services, which is based on the current municipal data center infrastructure, modelled and extended with a plug-in created for the cloud simulation tool CloudSim [7].
3. METHODOLOGY
We use CloudSim, a Java framework, to simulate the behaviour of a cloud. CloudSim's goal is to provide a generalized and extensible simulation framework that enables modelling, simulation, and experimentation of emerging Cloud computing infrastructures and application services, allowing its users to focus on the specific system design issues that they want to investigate, without getting concerned with the low-level details of Cloud-based infrastructure and services.
CloudSim can simulate jobs to analyse the behaviour of a cloud computing system. A job is a process that executes during a certain time and uses hardware or software resources to complete its work.
Running a large number of jobs on a specific machine or cluster just to collect such data would be very difficult, so recorded workload traces are used instead. The Hebrew University collected such traces from a cluster of jobs. Each job represents an activity that was executed at a specific time with a specific hardware configuration, and it was saved in a file called MetaCentrum. The MetaCentrum file contains the record of workloads on parallel computers [1]. These data were fed as input to CloudSim to start the simulation process. First, it was necessary to understand the fields of the MetaCentrum file to know how we would use it. We found the specification of the file [2] and selected the following fields for our simulation:
• Submit time
• Wait time
• Run time
• Used memory
• Requested memory
• Status
We used the MetaCentrum trace with the CloudSim tools to understand the behaviour and the way these processes are distributed in the CloudSim environment. We then decided to create our own scheduling process: each data center must share resources when a process or an administrator requests an immediate response, so we added a module to CloudSim to perform this task, see Fig. 6.
Fig 6. Module added to CloudSim's architecture (Workflow Manager)
The hardware and software settings were established from actual use conditions for each municipality. We also created a process that consumed all the resources of a data center, in order to send a warning to the general manager and reallocate more resources to other data centers, checking the workflow and priority of the processes of each data center before taking more resources.
An additional goal of our simulation was to generate an equation to estimate the actual cost of implementing the service using the cloud; we considered aspects such as payment of electricity, preventive and corrective maintenance, and key and support staff. We decided to group the different environments that are necessary to obtain the real cost of a specific service. These groups were:
1. Physical configuration: the configuration required to execute the service correctly (i.e. hard disk, RAM memory, video card, bandwidth, kind of network).
2. Software configuration: the set of programs that the service needs (i.e. operating system, database, file system, simulation programs and parallelism).
3. Supplies: items that the provider needs to activate the services mentioned above (i.e. electrical power, cables, air conditioning, license fees, space, staff, payment to other providers).
This grouping allowed us to understand the elements to be evaluated in each process, and we made an equation defined over the following terms:
ω = a service/process
Ψ = the total amount of required resources
Ω = a specific used resource from the physical or software configuration
ϕ = fixed cost of a used resource, defined by the provider
λ = execution time of each resource
µ = execution time needed to complete the whole process
Υ = a sub-process of ω
Τ = the total number of sub-processes of ω
M(Ω) = maintenance cost of Ω
Using this formula, we can obtain the cost of each resource at a specific time. We decided to use a recursive function to recover the used resources of a certain process/service that required a distribution of its job (parallel tasks). The primary goal of the formula is to specify the economic cost of each process, in order to keep a log of all the physical resources (hardware) used in a service.
Each municipality has the autonomy to decide what kind of software and hardware is needed to accomplish its tasks. With this equation we could also determine the efficiency of one process versus another for each municipality, to identify the fastest execution and the best low-price option for the same service.
For example, suppose a provider "X" has the price listing shown in Table 1.
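The equation itself was lost in extraction from the source, so the following is only an illustrative sketch of the recursion the text describes, not the authors' exact formula: the cost of a service sums, over each used resource, a fixed unit price times the amount used times its execution time, plus an optional maintenance cost M(Ω), and recurses over sub-processes. All names are ours:

```python
# Illustrative sketch of the described cost recursion (not the paper's
# exact equation): cost(service) = sum over resources of
# price * amount * execution_time + optional maintenance, plus the cost
# of each sub-process (parallel tasks), computed recursively.
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Service:                        # omega: a service/process
    usage: Dict[str, float]           # Omega -> amount of resource used
    exec_time: Dict[str, float]       # lambda: execution time per resource
    subprocesses: List["Service"] = field(default_factory=list)  # Upsilon

def cost(svc: Service, price: Dict[str, float],
         maintenance: Optional[Dict[str, float]] = None) -> float:
    m = maintenance or {}
    total = 0.0
    for res, amount in svc.usage.items():
        # phi: fixed cost per resource unit, set by the provider
        total += price[res] * amount * svc.exec_time.get(res, 1.0)
        total += m.get(res, 0.0)          # M(Omega), optional
    for sub in svc.subprocesses:          # recurse over sub-processes
        total += cost(sub, price, maintenance)
    return total
```

For instance, a service using 120 Mb of disk at $0.05/Mb with a sub-process using 10 Mb of RAM at $0.10/Mb (unit execution times) costs 6.0 + 1.0 = 7.0 under this sketch.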
Resource          Symbol   Amount  Price
Hard disk (SATA)  λ[1,1]   1 Mb    $0.05
RAM memory        λ[1,2]   1 Mb    $0.10
Bandwidth         λ[1,3]   1 kb    $0.009
Hard disk (SSD)   λ[1,4]   1 Mb    $0.30
Table 1. Cost of use per resource established by provider "X"
Our equation computes the cost for a specific user at a specific time for a process ω. Consequently (α, β, θ) ∈ Ω, and
Cost = λ(Ω) + λ(ϕ), with Σ(α) + Σ(β) = Ψ and Σ(Ψ) = µ.
The formula determines the cost of each Ω at a specific time λ and, when the process has ended, returns µ.
We also decided to use a recursive function to recover the used resources of a specific process/service that required distributing its job. The M(Ω) factor represents a minimum cost of the maintenance services used; it may or may not be applied. This value is added by the provider.
For example, if a final user operates his service with provider "X" for one month, the computed cost can be obtained. Table 2 shows the days on which the final user required his application. ω is a process submitted by the user. Ψ has 4 values because there are 4 services that the application needs to execute.
Date         Used resources                                                     Price
Nov/01/2015  λ[1,1]=0,   λ[1,2]=0,   λ[1,3]=0,   λ[1,4]=0, µ=0,         Υ=0    $0.00
Nov/15/2015  λ[1,1]=120, λ[1,2]=10,  λ[1,3]=0,   λ[1,4]=0, µ=3 secs,    Υ=0    $89.90
Nov/23/2015  λ[1,1]=160, λ[1,2]=200, λ[1,3]=100, λ[1,4]=0, µ=1000 secs, Υ=0    $157.84
Nov/30/2015  λ[1,1]=130, λ[1,2]=10,  λ[1,3]=30,  λ[1,4]=0, µ=30 secs,   Υ=0    $69.89
Cost = Computed cost total                                                     $317.63
Table 2. Days used by the final user
*Prices are in dollars
Therefore, we can observe that the final user will have to pay $317.63 for the days of use of his application in this period. In this case M(Ω) = 0, because the provider does not consider this cost; there was also no recursion, which is represented by k = 0.
4. EXPERIMENTAL FRAMEWORK
Guadalajara is located in western Mexico; it had a population of about 4,299,000 according to the National Institute of Statistics and Geography (INEGI, for its initials in Spanish) [9] in 2008. It is one of the three main cities in Mexico in terms of economic growth and its technological and demographic importance.
Fig. 7 Metropolitan area of Guadalajara1
The city is formed by 9 municipalities: Guadalajara, Zapopan, Tonala, Tlaquepaque, El Salto, Juanacatlan, Ixtlahuacan de los Membrillos and Tlajomulco de Zuñiga. In Fig. 7, we show the structure of the metropolitan area of Guadalajara. Since 1995, the city of Guadalajara has been monitoring air quality 24 hours a day in order to make recommendations to protect the health of its citizens and animals. The Metropolitan Index of Air Quality [10] (IMECA in Spanish) is a Mexican official standard for gauging air quality, in force since 1988. It reports chemicals such as ozone (O3), sulfur dioxide (SO2), nitrogen dioxide (NO2), carbon monoxide (CO) and particulate matter less than 10 micrometers (PM10). IMECA is used as a reference by all the states of Mexico to measure environmental pollution. IMECA has a set of ranges for activating notifications, which are shown in Table 3.
1 Image taken from: https://commons.wikimedia.org/wiki/File:Mapa_ZMG.svg
Table 3. Classification of pollution levels
IMECA        Air quality level
0 - 50       Good
51 - 100     Regular
101 - 150    Bad
151 - 200    Very Bad
>201         Extremely Bad
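The Table 3 thresholds amount to a simple lookup. A minimal sketch of the classification step used by the alert logic described later (the function name is ours):

```python
# Map an IMECA reading to its air-quality level, per Table 3.
def imeca_level(value: float) -> str:
    if value <= 50:
        return "Good"
    if value <= 100:
        return "Regular"
    if value <= 150:
        return "Bad"
    if value <= 200:
        return "Very Bad"
    return "Extremely Bad"
```

For example, a reading of 120 IMECAS falls in the "Bad" band, which is also the point at which health recommendations start.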
When the pollution level is above 120 IMECAS, it may cause respiratory problems for some people, mainly children and the elderly. However, if the level is greater than 200 IMECAS, a contingency plan is activated: the PRECA (Emergency Response Plan for Atmospheric Contingencies of Jalisco), which contains various levels, see Table 4. Guadalajara has published, as an open dataset, the information collected about pollution levels from 1996 to 2013; it is in Excel format, and the files contain data for each metropolitan zone and its pollution levels sensed per day and hour (0-23 hours).
Table 4. Levels of activation and deactivation of PRECA
The registered contaminants are O3, NO, NO2, NOx, SO2, CO, PM10, wind direction and speed, and temperature. In our case study, we considered CO as the contaminant, because most Guadalajara inhabitants are in direct contact with it. CO is an odourless and invisible toxin, so constant attention is required to monitor its level and to detect when it is higher than the environmental health standards in Mexico, in order to activate contingency plans for the population. The extracted information on the contamination levels of the ZMG, which is in Excel format, was migrated to a MySQL database; we normalized the information and analysed the behaviour of the contamination by zone based on hours, days, months and seasons.
Next, we configured each municipality with its data center and interconnected them one by one, to have a communication link between each of them; we further connected them with the state government, which oversees each municipality. This entity can recover data it requires from a municipality and visualize areas of the city that require more resources, e.g. processes to detect and synchronize traffic signals and wait times when there is an accident or special event. Because municipalities are connected to each other, the decisions of each municipality are communicated immediately to all the others, as are the actions decided by the state government; this is shown in Fig. 8.
In Figs. 9 to 11, we can observe the behaviour of contamination in three major areas of Guadalajara, where we note an increase in pollution levels at the beginning of January, May and December.
PRECA
Level   Enabled                                    Disabled
Pre     >= 120 IMECAS for 2 consecutive hours      <= 110 IMECAS for 2 consecutive hours
I       >= 150 IMECAS for 2 consecutive hours      <= 140 IMECAS for 2 consecutive hours
II      >= 200 IMECAS for 2 consecutive hours      <= 190 IMECAS for 2 consecutive hours
III     >= 250 IMECAS for 2 consecutive hours      <= 240 IMECAS for 2 consecutive hours
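The activation rule in Table 4 is stateful: a level turns on after two consecutive hourly readings at or above its threshold and turns off after two consecutive readings at or below the threshold minus 10. One illustrative reading of that rule as code (the function and constant names are ours):

```python
# Sketch of the PRECA activation rule from Table 4: enable a level after
# two consecutive hourly readings >= threshold; disable it after two
# consecutive readings <= threshold - 10.
from typing import List

THRESHOLDS = {"Pre": 120, "I": 150, "II": 200, "III": 250}

def preca_level(readings: List[float]) -> str:
    """Return the highest PRECA level still active after the readings."""
    active = "None"
    for level, t in THRESHOLDS.items():       # iterate Pre .. III
        enabled = False
        for prev, cur in zip(readings, readings[1:]):
            if prev >= t and cur >= t:        # two hours at/above threshold
                enabled = True
            elif prev <= t - 10 and cur <= t - 10:  # two hours at/below t-10
                enabled = False
        if enabled:
            active = level
    return active
```

For example, the sequence 150, 160, 210, 220 enables Pre, I and II but not III, so the reported level is II; a later drop to 110 and 105 for two hours would deactivate everything.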
Fig 8. Municipalities connected with state government
There is a decrease in the months of August and September. These variations are likely due to the rainy season and the return of students and teachers to their schools. We noted that there is a relationship between work and school activities and the emission of pollutants; with this premise we determined 3 phases: 1 = low level, 2 = intermediate level and 3 = maximum level. We mapped these levels to an alert system we built to inform the citizens and to initiate a feedback process that provides recommendations, such as avoiding travel through the affected zone.
Fig. 9 CO levels in Miravalle 2013. Fig. 10 CO levels in Las Pintas 2013. Fig. 11 CO levels in Tlaquepaque 2013.
[Charts: CO level (y-axis) versus time, January to December 2013, for Miravalle, Las Pintas and Tlaquepaque.]
These levels allow us to identify when a municipal zone needs to share resources with others through the private cloud to manage services. When a level is in phase 1, the municipality can process its information by itself; in phase 2, it may need the assistance of one or more municipalities to improve response time; and in the last phase, the municipality requests the intervention of the others to process the high demand of information in the shortest possible time. Once the simulation had been set up, we created nine datacenters with the hardware features provided by each zone. The purpose of representing the nine municipalities in a cloud environment is to verify that, by sharing the resources of each infrastructure, the quality of service to the city improves, maintenance costs are optimized, and the purchase of computer equipment benefits all municipalities and not just one. For that reason, we generated a mechanism to share resources between the areas and built a coordinator agent, which monitors the workload of each area to grant access to other data centers when necessary.
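The phase-to-sharing mapping above can be sketched as a small coordinator rule. The function name and the exact peer counts for phase 2 are illustrative assumptions; the paper only states "one or more" peers for the intermediate phase:

```python
# Sketch of the coordinator rule: map a municipality's alert phase to
# how many peer data centers are asked to share resources.  The phase 2
# peer count (2) is an illustrative assumption.
def peers_to_request(phase: int, available_peers: int) -> int:
    if phase == 1:            # low level: process locally
        return 0
    if phase == 2:            # intermediate: ask a subset of neighbours
        return min(2, available_peers)
    if phase == 3:            # maximum: ask every other municipality
        return available_peers
    raise ValueError("phase must be 1, 2 or 3")
```

With nine municipalities, a phase 3 alert in one zone would thus request resources from all eight remaining data centers under this sketch.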
Afterwards, we configured each datacenter with the following characteristics: 16 GB of RAM, a 4 TB hard drive, two Xeon X3430 processors and multi-node optical fiber with a 100 Mbps transfer rate. This configuration represents the average resources that a municipality has. The application was executed in two phases: the first used the traditional scheme of each municipality, and the second used a cloud-based scheme. In Fig. 12 we show the workflow we followed for this extraction, its interpretation and the results obtained.
Fig. 12 Workflow process
5. RESULT ANALYSIS
In Fig. 13, we show on the left side the number of milliseconds that a service took to execute during a certain time under the traditional scheme, and on the right side the result of our proposal, where resources are shared through a private cloud. A considerable decrease is observed; thus electric power consumption was reduced and the cost of use is lower than with the other scheme.
The type of service used in this experiment was air quality monitoring, in which an IMECA value was recorded every 15 minutes for each municipality. Fig. 13 shows some peaks, caused when we sent the information to the government dependencies to alert about a contamination level that exceeded the allowed limit; we also noted a time range with more traffic, where the response time must be faster to take preventive measures.
This result shows that the traditional IT scheme of processing a service with only the resources a municipality has increases the response time of a service. However, if we use a cloud computing solution to connect these entities to share data or resources, the execution time decreases and the response of a service is faster than with traditional IT.
[Fig. 12 workflow: Pollution Dataset → Transform to MySQL → Normalize data → Infrastructure of each municipality → Analytics → Simulation & Results]
Fig. 13 Traditional IT vs Private Cloud
6. CONCLUDING REMARKS AND PERSPECTIVES
Historical data on pollution levels helped to determine the system of events that should be present in the city to inform and organize services for each zone. For example, when pollution levels are high, systems could inform citizens through their smartphones about the situation and suggest changes in their routine. This begins a feedback process in which the user can request new routes to reach his destination without passing through contaminated areas.
These kinds of services are useful for smart cities. Cloud computing makes it possible to enhance resources, adapt to new hardware equipment that may appear in a data center, and share its resources with other areas. As future work, we could use Docker containers to process in situ the information that is enclosed within a metropolitan area, instead of moving it to another location for processing.
Docker allows the creation, deployment and execution of applications by using containers. Containers provide a mechanism to package an application with all the components it needs, such as libraries and other dependencies, and group them into one package. Using Docker, the municipalities would employ only their own hardware resources, and the information of each municipality would be processed in place, without the need to move it elsewhere.
This article showed how the interconnection between municipalities in the metropolitan area of Guadalajara allows for integration between them to optimize resources and minimize the cost of buying and maintaining hardware and software. The goal is to use the advantages of cloud computing to consolidate infrastructure and services to improve the quality of life of citizens.
ACKNOWLEDGEMENTS
The work described in this paper was supported by CONACYT through University of
Guadalajara in collaboration with Smart Cities Innovation Center.
REFERENCES
[1] M. Batty, K. W. Axhausen, F. Giannotti, A. Pozdnoukhov, A. Bazzani, M. Wachowicz, G. Ouzounis,
and Y. Portugali, “Smart cities of the future,” The European Physical Journal Special Topics, vol.
214, no. 1, pp. 481–518, Nov. 2012.
[2] H. Schaffers, N. Komninos, M. Pallot, B. Trousse, M. Nilsson, and A. Oliveira, “Smart Cities and the
Future Internet: Towards Cooperation Frameworks for Open Innovation.,” Future Internet Assembly,
vol. 6656, no. 31, pp. 431–446, 2011.
[3] “Smart city Framework. Guide to establishing strategies for smart cities and communities,” BSI
British Standards, London.
[4] N. Komninos, H. Schaffers, and M. Pallot, “Developing a Policy road map for Smart Cities and the
future internet,” eChallenges e-2011 …, 2011.
[5] C. Harrison and I. A. Donnelly, “A Theory of Smart Cities,” Proceedings of the 55th Annual Meeting
of the ISSS - 2011, Hull, UK, vol. 55, no. 1, Sep. 2011.
[6] G. J. Peek and P. Troxler, “City in Transition: Urban Open Innovation Environments as a Radical
Innovation,” programm.corp.at
[7] Rodrigo N. Calheiros, Rajiv Ranjan, Anton Beloglazov, Cesar A. F. De Rose, and Rajkumar Buyya,
"CloudSim: A Toolkit for Modeling and Simulation of Cloud Computing Environments and
Evaluation of Resource Provisioning Algorithms, Software: Practice and Experience (SPE)", Volume
41, Number 1, Pages: 23-50, ISSN: 0038-0644, Wiley Press, New York, USA, January, 2011
[8] Rajkumar Buyya, James Broberg, Andrzej M. Goscinski , "Cloud Computing: Principles and
Paradigms", Wiley Editorial, March 2011
[9] Instituto Nacional de Estadística Geografía e Informática, http://www.inegi.org.mx/
[10] Secretaria de Medio Ambiente, http://siga.jalisco.gob.mx/
[11] Ley Federal de Transparencia y Acceso a la Información Pública,
http://www.diputados.gob.mx/LeyesBiblio/pdf/244_140714.pdf, Junio 2002
[12] George Reese, Cloud Application Architectures: Building Applications and Infrastructure in the
Cloud, O’Reilly Media, 2009
AUTHORS
Jorge F. Hernandez, he was born in Guadalajara, Jalisco, in 1975. He received the B.E.
degree in computation engineering from the University of Guadalajara, Mexico, in 1998,
and the Master in Computer Science in 2000 from Cinvestav. In 2004, he joined the
Department of Computer Science, University of Guadalajara, as a teacher. He has published 4 articles in different congresses and has been a thesis advisor at bachelor and master level. In 2012, he was admitted to CUCEA to study a postgraduate degree in IT, and he is working with cloud computing to estimate costs and plan processing time.
Víctor M. Larios, has received his PhD and a DEA (French version of a MS program)
in Computer Science at the Technological University of Compiègne, France and a BA
in Electronics Engineering at the ITESO University in Guadalajara, Mexico. He works
at the University of Guadalajara (UDG) as Professor Researcher and as consultant
directed the Guadalajara Ciudad Creativa Digital Science and Technology program
during 2013. His research interests are related to distributed systems, visual simulations
and smart cities. He is a Senior IEEE member and current chair of the Computer
Chapter at the IEEE Guadalajara Section at Region 9. His role in the IEEE-CCD Smart
Cities initiative is to lead the working groups.
Manuel Avalos graduated in Computer Science Engineering from the "Universidad Autonoma de Guadalajara (UAG)" back in 1991. In December 2006 Manuel finished his Master's degree in Information Technology at the ITESM institution. Manuel is currently a PhD Data Science student at the Universidad de Guadalajara, campus CUCEA. Manuel joined IBM in 1991 as a Testing Software Engineer, and since then he has held several technical, management and executive positions in different IBM divisions; currently he has global responsibility for the Systems-Storage brand.
Ignacio Silva-Lepe is a Research Staff Member at IBM. His areas of interest include
(1) Component Software, designing and building application server infrastructure for
distributed components, (2) Distributed Messaging Systems, (3) Advanced Enterprise
Middleware, and (4) PaaS Research, designing and building infrastructure for on-
boarding and instantiating platform as a service offerings onto a compute cloud. Before
joining IBM Research, Ignacio was a Member of Technical Staff at Texas Instruments'
Corporate Research and Development, subsequently acquired by Raytheon. Prior to that
he was a Research Assistant at Northeastern University, where he earned a PhD in
Computer Science.