Cloud computing has become one of the mainstream emerging technologies for information interchange and accessibility. With such systems, information can be accessed from any geographic location on the planet with a reasonably reliable internet connection. Applying machine learning together with artificial intelligence to the problem of energy reduction in cloud data centers is an innovative idea, and artificial intelligence already plays a significant role in cloud environments. Large providers such as Amazon have taken steps to ensure that they can continue to expand their fast-growing cloud services commensurate with rapidly growing demand. These companies have built large data centers in remote parts of the world to meet that demand. These centers consume significant amounts of electrical energy, and much of it is wasted: according to an IDC white paper, data centers waste billions of dollars' worth of energy. Additionally, researchers have argued that by the year 2020 the energy consumption rate would have doubled, and research in this area remains a hot topic. This paper seeks to address the energy efficiency issue at a cloud data center using machine learning methodologies, principles, and practices. It also aims to bring out possible future implementation methods for artificially intelligent agents that would help reduce energy wastage at a cloud data center and thus help ameliorate the larger energy problem at hand.
The concept of a genetic algorithm is particularly useful in load balancing for finding the best distribution of virtual machines across servers. In this paper, we focus on load balancing and on efficient use of resources to reduce energy consumption without degrading cloud performance. Cloud computing is an on-demand service in which shared resources, information, software, and other facilities are provided according to the client's requirements at a specific time. It is a term generally used in connection with the Internet; the whole Internet can be viewed as a cloud. Capital and operational costs can be cut using cloud computing. Cloud computing is defined as a large-scale distributed computing paradigm, driven by economies of scale, in which a pool of abstracted, virtualized, dynamically scalable, managed computing power, storage, platforms, and services is delivered on demand to external customers over the Internet. It is a recent field of computational intelligence that aims at surmounting computational complexity and providing services dynamically using very large, scalable, virtualized resources over the Internet. It can also be seen as a distributed system containing a collection of computing and communication resources located in distributed data centers and shared by several end users. Cloud computing has been widely adopted by industry, though many issues remain open, such as load balancing, virtual machine migration, server consolidation, and energy management.
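The genetic-algorithm idea mentioned above, evolving VM-to-server placements toward a balanced load, can be sketched as follows. This is a minimal illustration, not any specific paper's algorithm; the VM loads, fitness function, and GA parameters are all invented for demonstration:

```python
import random

# Hypothetical workload: 20 VMs with random loads, 5 physical servers.
random.seed(42)
NUM_VMS, NUM_SERVERS = 20, 5
vm_load = [random.randint(1, 10) for _ in range(NUM_VMS)]

def fitness(chromosome):
    """Lower is better: a chromosome maps each VM index to a server index,
    and we penalize the spread between the busiest and idlest server."""
    loads = [0] * NUM_SERVERS
    for vm, server in enumerate(chromosome):
        loads[server] += vm_load[vm]
    return max(loads) - min(loads)

def crossover(a, b):
    # Single-point crossover between two parent placements.
    point = random.randrange(1, NUM_VMS)
    return a[:point] + b[point:]

def mutate(c, rate=0.1):
    # Occasionally reassign a VM to a random server.
    return [random.randrange(NUM_SERVERS) if random.random() < rate else g
            for g in c]

def evolve(generations=200, pop_size=30):
    pop = [[random.randrange(NUM_SERVERS) for _ in range(NUM_VMS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]  # keep the best half
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return min(pop, key=fitness)

best = evolve()
print("best imbalance:", fitness(best))
```

Here fitness is simply the load spread across servers; a production scheduler would fold in energy cost, migration overhead, and server capacity constraints.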
Agent based Aggregation of Cloud Services - A Research Agenda (idescitation)
Cloud computing has come to the forefront as it overcomes some of the limitations of conventional computing, such as storage space and processing power. It enables ubiquitous access to and processing of information without the need for excessive local computing facilities. In this work, we plan to outline some of the issues in aggregating cloud services, discover futuristic cloud service requests, develop a repository of the same, and propose an agent-based Quality of Service (QoS) provisioning system for cloud clients.
What began as tiny baby steps to bridge gaps in human limitations such as memory and cognitive fluctuation has today turned into a virtual revolution, where an invisible world of 0's and 1's is home to the most important bytes and pieces of our lives. Replacing the need for local servers, cloud computing emerged as the matrix within which multiple remote servers coordinate to store, manage, and administer data. Two other attractions of the cloud computing movement are live, real-time information updating and high-speed transfer of data. Initially it was the preserve of large-scale corporations, but in the last few years small and medium-sized organizations have also joined the cloud computing era. Analyses of the cloud computing culture swing across a spectrum ranging from the sustainability of cloud computing to the increased carbon emissions caused by servers.
Read the full blog here:
http://suyati.com/culture-of-cloud-computing-a-green-move-or-eco-death/
Sukhpal Singh Gill and Rajkumar Buyya, "Cloud Data Centers and the Challenge of Sustainable Energy", Cutter Business Technology Journal, vol. 31, no. 4, pp. 1-2, Cutter, 2018.
Providing a multi-objective scheduling tasks by Using PSO algorithm for cost ... (Editor IJCATR)
This article applies a multi-objective PSO algorithm to task scheduling for cost management in cloud computing. Any migration cost due to supply failure is treated as one objective; each task is a particle, recognized through an appropriate fitness function (the particles' arrangement) that minimizes the total expense. In addition, a weight is assigned to each expenditure to reflect the importance of its cost. The data used to simulate the proposed method are a series of academic and research data sets prepared from the Internet, and MATLAB is used for the simulation. We simulate two cases: in the first, we consider four tasks on four vehicles and divide the tasks; in the second, we make the problem more complicated and consider six tasks on four vehicles. We record PSO's output for both cases over various iterations. Finally, the particle dispersion as well as the output of the cost function were computed for each particle.
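A rough, self-contained sketch of discrete particle swarm task scheduling in the spirit of this abstract follows. It is not the article's MATLAB implementation; the cost matrix, weights, and update probabilities are all invented for illustration:

```python
import random

random.seed(7)
# Hypothetical cost matrix: COST[t][m] = expense of running task t on machine m.
COST = [[4, 2, 8, 6],
        [3, 7, 1, 5],
        [9, 4, 6, 2],
        [5, 3, 2, 7]]
NUM_TASKS = len(COST)
NUM_MACHINES = len(COST[0])
WEIGHT = [1.0, 1.0, 1.0, 1.0]  # per-task weights reflecting cost importance

def total_cost(assignment):
    return sum(WEIGHT[t] * COST[t][m] for t, m in enumerate(assignment))

def pso(iterations=100, swarm_size=15):
    # Each particle is a task-to-machine assignment.
    swarm = [[random.randrange(NUM_MACHINES) for _ in range(NUM_TASKS)]
             for _ in range(swarm_size)]
    personal_best = [p[:] for p in swarm]
    global_best = min(swarm, key=total_cost)[:]
    for _ in range(iterations):
        for i, p in enumerate(swarm):
            for t in range(NUM_TASKS):
                r = random.random()
                if r < 0.4:        # pull toward this particle's personal best
                    p[t] = personal_best[i][t]
                elif r < 0.8:      # pull toward the swarm's global best
                    p[t] = global_best[t]
                else:              # random exploration
                    p[t] = random.randrange(NUM_MACHINES)
            if total_cost(p) < total_cost(personal_best[i]):
                personal_best[i] = p[:]
            if total_cost(p) < total_cost(global_best):
                global_best = p[:]
    return global_best

best = pso()
print("assignment:", best, "cost:", total_cost(best))
```

With no capacity constraints, the optimum here is each task on its cheapest machine (total 7); a weighted multi-objective version would sum several such cost terms in `total_cost`.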
Collaboration and fairness-aware big data management in distributed clouds (Nexgen Technology)
CLOUD COMPUTING IN EDUCATION: POTENTIALS AND CHALLENGES FOR BANGLADESH (IJCSEA Journal)
Cloud computing is an emerging, fast-growing technology that can change traditional IT systems, and it plays a major role in today's technology sector; people use it every day in one way or another. The education sector is no exception. Teaching methods are changing and students are becoming much more technology-oriented, so it is necessary to think about how to incorporate the most recent technologies into teaching and learning. By sharing information technology services in the cloud, educational institutions can better concentrate on offering students, teachers, faculty, and staff the essential instruments. Bangladesh is a developing country, so applying this technology to the education sector is a huge challenge. This paper discusses how Bangladesh can benefit from applying the cloud in education, and its challenges, followed by some case studies and success stories.
A Survey on Virtualization Data Centers for Green Cloud Computing (IJTET Journal)
Abstract — Owing to trends like cloud computing and green cloud computing, virtualization technologies are gaining increasing importance. The cloud is a model for computing resources that moves the computing framework to the network in order to cut the costs of software and hardware resources. Nowadays, power is one of the big issues for Internet Data Centers (IDCs) and has a huge impact on society, and researchers are seeking solutions that reduce IDC power consumption. IDCs consume large amounts of energy to process cloud services, incur high operational cost, and shorten the lifespan of hardware equipment. The field of green computing is also becoming more and more important in a world with finite energy resources and rising demand. The virtual machine (VM) mechanism has been broadly applied in data centers, offering flexibility, reliability, and manageability. This survey covers virtualized IDCs in the green cloud, including the key features of the green cloud, cloud computing, data centers, virtualization, data centers with virtualization, and power-aware, thermal-aware, network-aware, resource-aware, and migration techniques. The several methods that are utilized to achieve virtualization in IDCs for green cloud computing are discussed.
Cost Benefits of Cloud vs. In-house IT for Higher Education (CSCJournals)
Cloud computing is an excellent alternative for higher education in a resource-limited setting. Universities should take advantage of the cloud-based applications offered by service providers and enable their own users and students to perform business and academic tasks. In this paper, we compare the cost of on-premise options with cloud computing. Two cost estimates are created: the first for building and setting up IT infrastructure in-house at the Federal University of Technology (FUTO), Nigeria, and the second for setting up IT in the cloud for the same institution. This enables us to know the cost benefit the cloud has over on-premise IT in higher education.
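The two-estimate comparison described here can be sketched as a toy total-cost-of-ownership calculation; every figure below is an invented placeholder, not FUTO's actual costs:

```python
# Toy 5-year TCO comparison of on-premise IT vs. a cloud subscription.
YEARS = 5

def on_premise_tco(hardware, setup, annual_ops):
    # Up-front CapEx plus recurring power, cooling, and staff OpEx.
    return hardware + setup + annual_ops * YEARS

def cloud_tco(annual_subscription, migration):
    # No CapEx: a one-off migration cost plus recurring subscription fees.
    return migration + annual_subscription * YEARS

onprem = on_premise_tco(hardware=120_000, setup=30_000, annual_ops=25_000)
cloud = cloud_tco(annual_subscription=40_000, migration=10_000)
print(f"on-premise: {onprem}, cloud: {cloud}")  # on-premise: 275000, cloud: 210000
```

The real comparison would, of course, use the institution's actual quotes; the point is only that CapEx-heavy and subscription-style cost structures become comparable once spread over the same planning horizon.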
Enterprise-level Green ICT: Using virtualization to balance energy economics (IJARIDEA Journal)
Abstract — The computing industry has been a significant contributor to global warming ever since its inception. Performance maximization per unit cost has remained the prime focus of academic and industrial research alike, largely ignoring environmental impacts in the process. However, the global energy crisis has inevitably pushed power and energy management up the priority list of computing design and management activities, today for purely economic reasons. Green IT lays emphasis on including the dimensions of environmental sustainability, the offsets of energy efficiency, and the total cost of disposal and recycling. A green computing initiative must be adaptive and flexible enough to address problems that keep increasing in size and complexity over time. Cloud computing concepts can be applied to reduce e-waste generation. Service-oriented architecture lends itself to incorporating green computing as a process rather than a product. Reusability, extensibility, and flexibility are some of the key characteristics inherent to the cloud that directly help address the vertical-specific challenges of reducing energy consumption in the long run.
Keywords— Cloud computing, Electronic waste, Green Information Technology, Service oriented architecture.
Cloud middleware and services - a systematic mapping review (journal BEEI)
Cloud computing currently plays a crucial role in the delivery of vital information technology services. A unique aspect of cloud computing is the cloud middleware and the other related entities that support applications and networks. This is a specific field of research in its own right, particularly as regards cloud middleware and services at all levels, and thus needs analysis and paper surveys to elucidate possible study limitations. The purpose of this paper is to perform a systematic mapping of studies that capture cloud computing middleware, stacks, tools, and services. The methodology adopted for this study is a systematic mapping review. The results showed that most papers fell on the contribution facet, with tool, model, method, and process accounting for 18.10%, 13.79%, 6.03%, and 8.62% respectively. In addition, in terms of tools, evaluation and solution research had the largest numbers of articles, with 14.17% and 26.77% respectively. A striking feature of the systematic map is the high number of articles in solution research with respect to all the facets applied in the studies. This study shows clearly that there are gaps in cloud computing middleware and delivery services that would interest researchers and industry professionals wishing to work in this area.
AN OVERVIEW OF THE SECURITY CONCERNS IN ENTERPRISE CLOUD COMPUTING (IJNSA Journal)
Deploying cloud computing in an enterprise infrastructure brings significant security concerns. Successful implementation of cloud computing in an enterprise requires proper planning and an understanding of emerging risks, threats, vulnerabilities, and possible countermeasures. We believe an enterprise should analyze its security risks, threats, and available countermeasures before adopting this technology. In this paper, we discuss security risks and concerns in cloud computing and highlight steps that an enterprise can take to reduce security risks and protect its resources. We also explain cloud computing's strengths and benefits, weaknesses, and applicable areas in information risk management.
There are five disruptive forces shaping IT today, but none has a more wide-ranging impact on all enterprises than the emergence of the cloud as a preferred means of service delivery. This article discusses the cloud industry and how WGroup can help give clients a competitive advantage using a service delivery strategy and new IT operating models.
The growth of the Internet of Things and wireless technology has led to an enormous generation of data for various applications, such as healthcare, scientific, and data-intensive applications. Cloud-based Storage Area Networks (SANs) have been widely used in recent times for storing and processing these data. Providing fault-tolerant and continuous access to data with minimal latency and cost is challenging, so an efficient fault-tolerance mechanism is required. Data replication is an efficient fault-tolerance mechanism that has been considered by existing methodologies. However, data replica placement is challenging, and existing methods are not efficient with respect to the dynamic application requirements of cloud-based storage area networks; they thus incur latency, which in turn induces a higher cost of data transmission. This work presents an efficient replica placement and transmission technique, Bipartite Graph based Data Replica Placement (BGDRP), that aids in minimizing latency and computing cost. The performance of BGDRP is evaluated using a real-time scientific application workflow. The outcome shows that the BGDRP technique minimizes data access latency, computation time, and cost over state-of-the-art techniques.
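The abstract does not reveal BGDRP's internals, but the bipartite-graph framing of replica placement can be illustrated with a toy greedy sketch: data blocks on one side, storage nodes on the other, and edge weights as hypothetical access latencies. A real technique would also respect node capacity and transmission-cost constraints:

```python
# Toy bipartite model: LATENCY[block][node] is a made-up access latency
# for reading that block from that node (the edge weight in the graph).
LATENCY = {
    "b1": {"n1": 5, "n2": 2, "n3": 9},
    "b2": {"n1": 1, "n2": 6, "n3": 3},
    "b3": {"n1": 4, "n2": 8, "n3": 2},
}

def place_replicas(latency, k=2):
    """For each block, pick the k nodes with the smallest edge weight."""
    placement = {}
    for block, edges in latency.items():
        ranked = sorted(edges, key=edges.get)  # nodes by ascending latency
        placement[block] = ranked[:k]
    return placement

placement = place_replicas(LATENCY)
print(placement)  # b1 -> [n2, n1], b2 -> [n1, n3], b3 -> [n3, n1]
```

Greedy per-block selection ignores interactions between blocks (a node could be overloaded); matching or flow algorithms on the same bipartite graph are the usual way to add such global constraints.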
Resource Consideration in Internet of Things: A Perspective View (ijtsrd)
Ubiquitous computing and its applications at different levels of abstraction are made possible mainly by virtualization. Most of its applications are becoming pervasive with each passing day, with the growing trend of embedding computational and networking capabilities in everyday objects. Virtualization provides many opportunities for research in IoT, since most IoT applications are resource-constrained; there is therefore a need for an approach to manage the resources of the IoT ecosystem, and virtualization can play an important role in maximizing resource utilization and managing the resources of IoT applications. This paper presents a survey of virtualization and the Internet of Things, and also discusses the role of virtualization in IoT resource management. Rishikesh Sahani | Prof. Avinash Sharma, "Resource Consideration in Internet of Things: A Perspective View", International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume 3, Issue 4, June 2019. URL: https://www.ijtsrd.com/papers/ijtsrd23694.pdf
Paper URL: https://www.ijtsrd.com/computer-science/world-wide-web/23694/resource-consideration-in-internet-of-things-a-perspective-view/rishikesh-sahani
Energy Saving by Migrating Virtual Machines to Green Cloud Computing (ijtsrd)
Green computing is characterized as the study and practice of designing, manufacturing, using, and disposing of PCs, servers, and related subsystems, such as monitors, printers, storage devices, and networking and communications systems, efficiently and effectively with negligible or no impact on the environment. The goal of green computing is to diminish the use of hazardous materials, maximize energy efficiency during a product's lifetime, and advance the recyclability of obsolete products and factory waste. Green computing can be accomplished through product longevity, resource distribution, virtualization, or power management; power is the bottleneck of improving system performance. Among all industries, the information and communication technology (ICT) industry is seemingly responsible for a larger segment of the overall growth in energy consumption. The goal of green cloud computing is to advance the recyclability or biodegradability of outdated products and factory waste by diminishing the use of hazardous materials and maximizing energy productivity during a product's lifetime. Stephen Fernandes, "Energy Saving by Migrating Virtual Machine to Green Cloud Computing", International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume 4, Issue 3, April 2020. URL: https://www.ijtsrd.com/papers/ijtsrd30422.pdf Paper URL: https://www.ijtsrd.com/computer-science/distributed-computing/30422/energy-saving-by-migrating-virtual-machine-to-green-cloud-computing/stephen-fernandes
An Efficient Cloud Scheduling Algorithm for the Conservation of Energy throug... (IJECEIAES)
Broadcasting is a well-known operation used to support different computing protocols in cloud computing. Attaining energy efficiency is one of the prominent challenges in the scheduling process used in cloud computing, as there are fixed limits that have to be met by the system. In this research paper, we focus particularly on cloud server maintenance and scheduling, using an interactive broadcasting energy-efficient computing technique together with the cloud computing server. Additionally, the remote host machines used for cloud services dissipate more power and thereby consume more and more energy; power consumption is one of the main factors determining the cost of computing resources. The idea is to use avoidance technology to assign data center resources dynamically depending on application demands, supporting cloud computing with optimization of the servers in use.
A survey on energy efficiency with task consolidation in the virtualized cloud... (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of engineering and technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of engineering and technology. We bring together scientists, academicians, field engineers, scholars, and students of related fields of engineering and technology.
A survey on energy efficiency with task consolidation in the virtualized cloud... (eSAT Journals)
Abstract: Cloud computing is a new model of computing that is widely used in today's industry, organizations, and society for delivering information technology services as a utility. It enables organizations to reduce operational expenditure and capital expenditure. However, a cloud with underutilized resources still consumes unacceptably more energy than one with fully utilized resources. Many techniques for optimizing energy consumption in the virtualized cloud have been proposed. This paper surveys different energy-efficient models with task consolidation in the virtualized cloud computing environment. Keywords: cloud computing, virtualization, task consolidation, energy consumption, virtual machine.
What began as tiny baby steps to bridge gaps of human limitations like memory, cognitive fluctuations has today turned into a virtual revolution where an invisible world of 0’s and 1’s is home to the most important bytes and pieces of our lives. Replacing the need for local servers, cloud computing emerged as the matrix within which multiple remote servers coordinated together to store, manage and administer data. The two other baits in the cloud computing movement is the live/real-time information updating and light-speed transfer of data. Initially, it was the center for large-scale corporations. In the last few years, small-scale and medium-sized organizations have also joined the army of cloud computing era. The analytics of the cloud computing culture swing between the spectrums ranging from the sustainability of cloud computing to the increased carbon emissions due to servers.
Read the full blog here:
http://suyati.com/culture-of-cloud-computing-a-green-move-or-eco-death/
Or reach us at: jghosh@suyati.com
Sukhpal Singh Gill and Rajkumar Buyya, "Cloud Data Centers and the Challenge of Sustainable Energy", Cutter Business Technology Journal, Volume 31, Issue 4, Pages 1-2, Publisher Cutter, 2018.
Providing a multi-objective scheduling tasks by Using PSO algorithm for cost ...Editor IJCATR
This article is intended to use the multi-PSO algorithm for scheduling tasks for cost management in cloud computing. This means that
any migration costs due to supply failure consider as a one objective and each task is a little particle and recognize by use of the
appropriate fitness schedule function (how the particles arrangement) that cost at least amount of total expense. In addition to, the weight
is granted to the each expenditure that reflects the importance of cost. The data which is used to simulate proposed method are series of
academic and research data that are prepared from the Internet and MATLAB software is used for simulation. We simulate two issues,
in the first issue, consider four task by four vehicles and divide tasks. In the second issue, make the issue more complicated and consider
six tasks by four vehicles. We write PSO's output for each two issues of various iterations. Finally, the particles dispersion and as well
as the output of the cost function were computed for each pa
Collaboration and fairness-aware big data management in distributed cloudsNexgen Technology
GET IEEE BIG DATA,JAVA ,DOTNET,ANDROID ,NS2,MATLAB,EMBEDED AT LOW COST WITH BEST QUALITY PLEASE CONTACT BELOW NUMBER
FOR MORE INFORMATION PLEASE FIND THE BELOW DETAILS:
Nexgen Technology
No :66,4th cross,Venkata nagar,
Near SBI ATM,
Puducherry.
Email Id: praveen@nexgenproject.com
Mobile: 9791938249
Telephone: 0413-2211159
www.nexgenproject.com
CLOUD COMPUTING IN EDUCATION: POTENTIALS AND CHALLENGES FOR BANGLADESHIJCSEA Journal
Cloud Computing is an emerging technology. It is a growing technology which can change traditional IT systems. It plays a major role in today’s technology sector. People are using it every day through one way or another. Education sector is not out of this phenomenon. At the present time the teaching method is changing and students are becoming much technology based and therefore it is necessary that we think about the most recent technologies to incorporate in the teaching and learning methods. By sharing Information technology related services in the cloud, educational institutions can better concentrate on offering students, teachers, faculty and staff the essential instruments. Bangladesh is a developing country. So applying this technology on education sector is a huge challenge for Bangladesh. In this paper it is discussed that how Bangladesh can be benefited by applying cloud in education and its challenges followed by some case studies and success stories.
A Survey on Virtualization Data Centers For Green Cloud ComputingIJTET Journal
Abstract —Due to trends like Cloud Computing and Green cloud Computing, virtualization technologies are gaining increasing importance. Cloud is a atypical model for computing resources, which intent to computing framework to the network in order to cut down costs of software and hardware resources. Nowadays, power is one of big issue of IDC has huge impacts on society. Researchers are seeking to find solutions to make IDC reduce power consumption. These IDC (Internet Data Center) consume large amounts of energy to process the cloud services, high operational cost, and affecting the lifespan of hardware equipments. The field of Green computing is also becoming more and more important in a world with finite number of energy resources and rising demand. Virtual Machine (VM) mechanism has been broadly applied in data center, including flexibility, reliability, and manageability. The research survey presents about the virtualization IDC in green cloud it contains various key features of the Green cloud, cloud computing, data centers, virtualization, data center with virtualization, power – aware, thermal – aware, network-aware, resource-aware and migration techniques. In this paper the several methods that are utilze to achieve the virtualization in IDC in green cloud computing are discussed.
Cost Benefits of Cloud vs. In-house IT for Higher EducationCSCJournals
Cloud Computing is an excellent alternative for Higher Education in a resource limited setting. Universities should take advantage of available cloud-based application offered by service providers and enable their own user/student to perform business and academic tasks. In this paper, we will compare the cost between on-premise options and Cloud Computing. Two cost estimates will be created, the firstfor building and setting up IT infrastructure in-house in Federal University of Technology (FUTO), Nigeria while the second cost estimate will be for setting up IT in the cloud for the same Institution.This will enable us know the cost benefit cloud has over onpremise in setting up IT in Higher Educations.
Enterprise-level Green ICT Using virtualization to balance energy economicsIJARIDEA Journal
Abstract— The computing industry has been a significant contributor to global warming ever since its
inception. Performance maximization per unit has cost remained the prime focus of academic and industrial
research alike, ignoring environmental impacts in the process if any. However, the infamous global energy
crisis has inevitably pushed power and energy management up the priority list of computing design and
management activities for purely economic reasons today. Green IT lays emphasis on including the
dimensions of environmental sustainability, the offsets of energy efficiency, and the total cost of
disposal and recycling. A green computing initiative must be adaptive and flexible enough to be
able to address problems that keep on increasing in size and complexity with time. Cloud computing concepts
can invariably be applied to reduce e-waste generation. The service oriented architecture lends itself to
incorporating green computing as a process rather than a product. Re-usability, extensibility and flexibility
are some of the key characteristics which are inherent to the cloud and directly help address the vertical
specific challenges to reducing energy consumption in the long run.
Keywords— Cloud computing, Electronic waste, Green Information Technology, Service oriented architecture.
Cloud middleware and services-a systematic mapping reviewjournalBEEI
Cloud computing currently plays a crucial role in the delivery of vital information technology services. A unique aspect of cloud computing is the cloud middleware and other related entities that support applications and networks. A specific field of research may be considered, particularly as regards cloud middleware and services at all levels, and thus needs analysis and paper surveys to elucidate possible study limitations. The purpose of this paper is to perform a systematic mapping for studies that capture cloud computing middleware, stacks, tools and services. The methodology adopted for this study is a systematic mapping review. The results showed that more papers on the contribution facet were published with tool, model, method and process having 18.10%, 13.79%, 6.03% and 8.62% respectively. In addition, in terms of tool, evaluation and solution research had the largest number of articles with 14.17% and 26.77% respectively. A striking feature of the systemic map is the high number of articles in solution research with respect to all aspects of the features applied in the studies. This study showed clearly that there are gaps in cloud computing middleware and delivery services that would interest researchers and industry professionals desirous of research in this area.
AN OVERVIEW OF THE SECURITY CONCERNS IN ENTERPRISE CLOUD COMPUTINGIJNSA Journal
Deploying cloud computing in an enterprise infrastructure brings significant security concerns. Successful implementation of cloud computing in an enterprise requires proper planning and an understanding of emerging risks, threats, vulnerabilities, and possible countermeasures. We believe an enterprise should analyze its security risks, threats, and available countermeasures before adopting this technology. In this paper, we discuss security risks and concerns in cloud computing and highlight steps that an enterprise can take to reduce security risks and protect its resources. We also explain cloud computing's strengths and benefits, weaknesses, and applicable areas in information risk management.
There are five disruptive forces shaping IT today, but none has a more wide-ranging impact on all enterprises than the emergence of the cloud as a preferred means of service delivery. This article discusses the cloud industry and how WGroup can help give clients a competitive advantage using a service delivery strategy and new IT operating models.
The growth of the Internet of Things and wireless technology has led to enormous generation of data for various applications, such as healthcare, scientific and data-intensive applications. Cloud-based Storage Area Networks (SANs) have been widely used in recent times for storing and processing these data. Providing fault-tolerant, continuous access to data with minimal latency and cost is challenging, and requires an efficient fault-tolerance mechanism. Data replication is an efficient fault-tolerance mechanism that has been considered by existing methodologies. However, data replica placement is challenging, and existing methods are not efficient given the dynamic application requirements of cloud-based storage area networks; they thus incur latency, which in turn induces a higher cost of data transmission. This work presents an efficient replica placement and transmission technique using a Bipartite Graph based Data Replica Placement (BGDRP) technique that aids in minimizing latency and computing cost. The performance of BGDRP is evaluated using a real-time scientific application workflow. The outcome shows that the BGDRP technique minimizes data access latency, computation time and cost compared with state-of-the-art techniques.
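The abstract above does not specify the BGDRP algorithm itself, so the following is only a minimal sketch of the general bipartite idea it names: data blocks on one side, storage nodes on the other, with edge weights modelling access latency, and the placement chosen to minimize total latency. The brute-force search and the latency matrix are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch of bipartite replica placement (not the paper's BGDRP).
# latency[b][n] = access latency if block b is replicated on node n.
from itertools import permutations

def place_replicas(latency):
    """Return the block->node assignment minimising total latency.

    Brute force over all assignments; fine for the toy sizes used here."""
    blocks = len(latency)
    nodes = len(latency[0])
    best, best_cost = None, float("inf")
    for perm in permutations(range(nodes), blocks):
        cost = sum(latency[b][perm[b]] for b in range(blocks))
        if cost < best_cost:
            best, best_cost = perm, cost
    return dict(enumerate(best)), best_cost

latency = [
    [4, 1, 9],   # block 0: latency to nodes 0, 1, 2
    [2, 6, 3],   # block 1
]
placement, total = place_replicas(latency)
print(placement, total)   # {0: 1, 1: 0} 3
```

A real placement engine would use a polynomial-time assignment algorithm rather than brute force, but the objective (minimum-cost bipartite assignment) is the same.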
Resource Consideration in Internet of Things A Perspective View (ijtsrd)
Ubiquitous computing and its applications at different levels of abstraction are made possible mainly by virtualization. Most of its applications are becoming more pervasive with each passing day, with the growing trend of embedding computational and networking capabilities in everyday objects. Virtualization provides many opportunities for research in IoT, since most IoT applications are resource constrained. Therefore, there is a need for an approach to manage the resources of the IoT ecosystem. Virtualization is one such approach, and can play an important role in maximizing resource utilization and managing the resources of IoT applications. This paper presents a survey of virtualization and the Internet of Things, and also discusses the role of virtualization in IoT resource management. Rishikesh Sahani | Prof. Avinash Sharma, "Resource Consideration in Internet of Things: A Perspective View", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-4, June 2019, URL: https://www.ijtsrd.com/papers/ijtsrd23694.pdf
Paper URL: https://www.ijtsrd.com/computer-science/world-wide-web/23694/resource-consideration-in-internet-of-things-a-perspective-view/rishikesh-sahani
Energy Saving by Migrating Virtual Machine to Green Cloud Computing (ijtsrd)
Green computing is characterized as the study and practice of designing, manufacturing, using, and disposing of computers, servers, and related subsystems, such as monitors, printers, storage devices, and networking and communications systems, efficiently and effectively with minimal or no impact on the environment. The goal of green computing is to reduce the use of hazardous materials, maximize energy efficiency during a product's lifetime, and promote the recyclability of obsolete products and factory waste. Green computing can be achieved through product longevity, resource allocation, virtualization, or power management. Power is the bottleneck to improving system performance. Among all industries, the information and communication technology (ICT) industry is seemingly responsible for a larger segment of the overall growth in energy utilization. The goal of green cloud computing is to promote the recyclability or biodegradability of outdated products and factory waste by reducing the use of hazardous materials and maximizing energy efficiency during a product's lifetime. Stephen Fernandes, "Energy Saving by Migrating Virtual Machine to Green Cloud Computing", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4, Issue-3, April 2020, URL: https://www.ijtsrd.com/papers/ijtsrd30422.pdf Paper URL: https://www.ijtsrd.com/computer-science/distributed-computing/30422/energy-saving-by-migrating-virtual-machine-to-green-cloud-computing/stephen-fernandes
An Efficient Cloud Scheduling Algorithm for the Conservation of Energy throug... (IJECEIAES)
Broadcasting is a well-known operation used to support different computing protocols in cloud computing. Attaining energy efficiency is one of the prominent challenges in the scheduling process used in cloud computing, as there are fixed limits that have to be met by the system. In this research paper, we focus particularly on cloud server maintenance and the scheduling process, using an interactive broadcasting energy-efficient computing technique together with the cloud computing server. Additionally, the remote host machines used for cloud services dissipate more power and thereby consume more and more energy. Power consumption is one of the main factors determining the cost of computing resources. The idea is to use avoidance technology to assign data center resources dynamically, depending on application demands, and to support cloud computing by optimizing the servers in use.
A survey on energy efficient with task consolidation in the virtualized cloud... (eSAT Journals)
Abstract: Cloud computing is a new model of computing that is widely used in today's industry, organizations and society for delivering information technology services as a utility. It enables organizations to reduce operational expenditure and capital expenditure. However, cloud computing with underutilized resources still consumes an unacceptably larger amount of energy than fully utilized resources. Many techniques for optimizing energy consumption in the virtualized cloud have been proposed. This paper surveys different energy-efficient models with task consolidation in the virtualized cloud computing environment. Keywords: Cloud computing, Virtualization, Task consolidation, Energy consumption, Virtual machine
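Why task consolidation saves energy can be sketched with the linear server power model (idle power plus a utilisation-proportional part) that is common in this literature; the wattage figures below are illustrative assumptions, not values from any surveyed paper.

```python
# Toy illustration of task consolidation under a linear power model.
P_IDLE, P_MAX = 70.0, 150.0   # watts; assumed server characteristics

def power(util):
    """Linear server power model; util in [0, 1]. A powered-off server draws 0 W."""
    return P_IDLE + (P_MAX - P_IDLE) * util if util > 0 else 0.0

tasks = [0.2, 0.3, 0.25, 0.15]            # CPU demand of each task

# Spread: one task per server, so four servers pay the idle cost.
spread = sum(power(u) for u in tasks)

# Consolidated: pack all tasks onto one server (total utilisation 0.9).
consolidated = power(sum(tasks))

print(spread, consolidated)   # 352.0 vs 142.0 watts
```

The idle term dominates, which is exactly why consolidating tasks and switching idle servers off (or to sleep states) is the core lever these energy models exploit.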
Cloud computing offers users worldwide low-cost on-demand services according to their requirements. In recent years, the rapid growth and service quality of cloud computing have made it an attractive technology for many tech companies. However, with the growing number of data center resources, high levels of energy are being consumed, with more carbon emissions into the air. For instance, the estimated electric power consumption of a Google data center is equivalent to the energy requirement of a small-sized city. Also, even if the virtualization of resources in cloud computing data centers reduces the number of physical machines and the cost of hardware equipment, it is still constrained by the energy consumption issue. Energy efficiency has become a major concern for today's cloud data center researchers, who must reduce operating cost while simultaneously improving cloud service quality. This paper analyses and discusses the literature on works related to energy efficiency enhancement in cloud computing data centers. The main objective is the best possible management of the physical machines that host the virtual ones in the cloud data centers.
Opportunistic job sharing for mobile cloud computing (ijccsa)
Cloud computing is the evolution of a new business era encompassing many technologies. These technologies take advantage of economies of scale and multi-tenancy to decrease the cost of information technology resources. Many organizations are eager to reduce their computing costs through virtualization, and this demand for reduced computing cost and time has led to the innovation of cloud computing, which enhances computing through improved deployment, infrastructure costs and processing time. Mobile computing and its applications in smartphones enable a new, rich user experience. However, heavy use of smartphones' limited resources creates problems of battery life, memory space and CPU capacity. To solve this problem, we propose a dynamic mobile cloud computing architecture framework that uses global resources instead of local resources. In the proposed framework, sharing a job's workload at runtime over Wi-Fi connectivity reduces the load at the local client and the dynamic throughput time of the job.
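At the heart of any such job-sharing framework is the decision of when offloading actually pays off. The cost model and numbers below are illustrative assumptions, not the paper's framework: offload when remote execution plus the Wi-Fi transfer beats local execution time.

```python
# Hypothetical offloading decision: compare local execution time with
# remote execution time plus data-transfer time over Wi-Fi.
def should_offload(cycles, local_speed, cloud_speed, data_bits, bandwidth_bps):
    """True when offloading the job finishes sooner than running it locally."""
    local_time = cycles / local_speed
    remote_time = cycles / cloud_speed + data_bits / bandwidth_bps
    return remote_time < local_time

# Compute-heavy job with little data to ship: offloading wins (0.9 s vs 5 s).
print(should_offload(cycles=5e9, local_speed=1e9,
                     cloud_speed=10e9, data_bits=8e6, bandwidth_bps=20e6))
```

The same comparison flips for data-heavy, compute-light jobs, which is why such frameworks decide per job at runtime rather than statically.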
The increasing demands on data centers, especially for provisioning cloud applications (i.e. data-intensive applications), have drastically increased energy consumption, which is becoming a critical issue. Failing to handle this increase in energy consumption has a negative impact on the environment, and also negatively affects cloud providers' profits owing to rising costs. Various surveys have been carried out to address and classify energy-aware approaches and solutions. As this is an active research area with an increasing number of proposals, more surveys are needed to support researchers in the field. Thus, in this paper, we review the current state of existing related surveys to serve as a guideline for researchers, as well as for potential reviewers, to embark on new concerns and dimensions that complement the existing surveys. Our review highlights four main topics and concludes with some recommendations for future surveys.
An energy optimization with improved QoS approach for adaptive cloud resources (IJECEIAES)
In recent times, the utilization of cloud computing VMs has been greatly enhanced in our day-to-day life due to the ample utilization of digital applications, network appliances, portable gadgets, information devices, etc. On these cloud computing VMs, numerous different schemes can be implemented, such as multimedia signal processing methods. Thus, efficient performance of these cloud computing VMs becomes an obligatory constraint, precisely for these multimedia signal processing methods. However, large energy consumption and reduced efficiency of these cloud computing VMs are the key issues faced by different cloud computing organizations. Therefore, we introduce a dynamic voltage and frequency scaling (DVFS) based adaptive cloud resource re-configurability (ACRR) technique for cloud computing devices, which efficiently reduces energy consumption while performing operations in very little time. We demonstrate an efficient resource allocation and utilization technique that optimizes by reducing the model's various costs, and efficient energy optimization techniques that reduce task loads. Our experimental outcomes show the superiority of our proposed ACRR model in terms of average run time, power consumption and average power required over state-of-the-art techniques.
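The trade-off DVFS builds on can be shown with the standard dynamic-power relation P = C·V²·f: lowering the frequency stretches execution time, but lowering the accompanying supply voltage shrinks energy per cycle quadratically. The capacitance value and voltage/frequency pairs below are illustrative assumptions, not the ACRR technique itself.

```python
# Sketch of the DVFS energy trade-off: dynamic power is C*V^2*f,
# execution time is cycles/f, so energy per job is C * V^2 * cycles.
C = 1e-9  # effective switched capacitance (farads); assumed value

def energy(cycles, volts, freq_hz):
    power_w = C * volts**2 * freq_hz     # dynamic power draw
    time_s = cycles / freq_hz            # execution time at this frequency
    return power_w * time_s              # joules

high = energy(1e9, volts=1.2, freq_hz=2e9)   # fast, high-voltage setting
low = energy(1e9, volts=0.9, freq_hz=1e9)    # slower, voltage-scaled setting
print(high, low)   # 1.44 J vs 0.81 J: scaling V from 1.2 to 0.9 cuts energy ~44%
```

Note that energy per job depends only on V² (frequency cancels), which is why DVFS schedulers slow non-urgent tasks down to the lowest voltage that still meets their deadlines.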
ENERGY EFFICIENT COMPUTING FOR SMART PHONES IN CLOUD ASSISTED ENVIRONMENT (IJCNC Journal)
In recent years, the use of smart mobile phones has increased enormously, and they have become an integral part of human life. Smartphones can support an immense range of complicated and intensive applications, which results in reduced power capacity and lower performance. Mobile cloud computing is a newly emerging paradigm that integrates the features of cloud computing and mobile computing to overcome the constraints of mobile devices. Mobile cloud computing employs computational offloading, which migrates computations from mobile devices to remote servers. In this paper, a novel model is proposed for dynamic task offloading to attain energy optimization and better performance for mobile applications in the cloud environment. The paper proposes an optimal offloading algorithm introducing new criteria, such as benchmarking, for offloading decision making. It also supports the concept of partitioning to divide the computing problem into various sub-problems. These sub-problems can be executed in parallel on the mobile device and the cloud. Performance evaluation results proved that the proposed model can reduce energy consumption by around 20% to 53% for low-complexity problems and by up to 98% for high-complexity problems.
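The energy side of that offloading decision can be sketched as a comparison of battery cost: the CPU energy to compute locally versus the radio energy to transmit the job's data. The power figures are illustrative assumptions, not values from the proposed model.

```python
# Hypothetical energy-based offloading test: offload when transmitting the
# job costs less battery energy than computing it on the phone.
def offload_saves_energy(cycles, cpu_power_w, cpu_hz,
                         data_bits, radio_power_w, bandwidth_bps):
    e_local = cpu_power_w * cycles / cpu_hz           # energy to compute locally
    e_offload = radio_power_w * data_bits / bandwidth_bps  # energy to transmit
    return e_offload < e_local

# High-complexity job with a small input: offloading saves nearly all the
# energy (0.1 J vs 40 J), consistent with the large savings reported above
# for high-complexity problems.
print(offload_saves_energy(cycles=20e9, cpu_power_w=2.0, cpu_hz=1e9,
                           data_bits=1e6, radio_power_w=1.0, bandwidth_bps=10e6))
```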
Introduction to Cloud Computing and Cloud Infrastructure (SANTHOSHKUMARKL1)
Introduction, Cloud Infrastructure: Cloud computing, Cloud computing delivery models and services, Ethical issues, Cloud vulnerabilities, Cloud computing at Amazon, Cloud computing the Google perspective, Microsoft Windows Azure and online services, Open-source software platforms for private clouds.
Efficient architectural framework of cloud computing (Souvik Pal)
Cloud computing is a model that enables adaptive, convenient, on-demand network access to a shared pool of adjustable and configurable physical computing resources (networks, servers, bandwidth, storage) that can be swiftly provisioned and released with negligible supervision effort or service provider interaction. From a business perspective, the viable achievements of cloud computing and recent developments in grid computing have brought a platform that has introduced virtualization technology into the era of high-performance computing. However, clouds are an Internet-based concept and try to hide complexity from end users. Cloud service providers (CSPs) use many structural designs combined with self-service capabilities and ready-to-use facilities for computing resources, enabled through network infrastructure, especially the internet, which is an important consideration. This paper provides an efficient architectural framework for cloud computing that may lead to better performance and faster access.
An Efficient MDC based Set Partitioned Embedded Block Image Coding (Dr. Amarjeet Singh)
In this paper, fast, efficient, simple and widely used Set Partitioned Embedded bloCK (SPECK) based coding is performed on multiple descriptions of a transformed image. The maximum potential of this type of coding can be exploited with the discrete wavelet transform (DWT) of images. Two correlated descriptions are generated from a wavelet-transformed image to ensure meaningful transmission of the image over noise-prone wireless channels. These correlated descriptions are encoded by the set partitioning technique through SPECK coders and transmitted over wireless channels. The quality of the reconstructed image at the decoder side depends upon the number of descriptions received: the more descriptions received at the output side, the better the quality of the reconstructed image. However, if any of the multiple descriptions is lost, the receiver can estimate it by exploiting the correlation between the descriptions. Simulations performed on an image in MATLAB give decent performance and results even after half of the descriptions are lost in transmission.
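The multiple-description principle behind that robustness can be shown with a toy one-dimensional example: split samples into even- and odd-indexed descriptions, and if one description is lost, estimate it from its neighbours in the surviving one. The real scheme operates on DWT coefficients with SPECK coding; this sketch only illustrates the MDC idea of correlated, mutually recoverable streams.

```python
# Toy MDC: two descriptions from one signal, plus estimation of a lost one.
def make_descriptions(samples):
    """Split a signal into even-indexed and odd-indexed descriptions."""
    return samples[0::2], samples[1::2]

def recover_from_even(even):
    """Estimate the lost odd samples by averaging adjacent even samples."""
    out = []
    for i, e in enumerate(even):
        out.append(e)
        nxt = even[i + 1] if i + 1 < len(even) else e
        out.append((e + nxt) / 2)        # interpolated odd sample
    return out

signal = [10, 12, 14, 16, 18, 20]
even, odd = make_descriptions(signal)
print(recover_from_even(even))   # [10, 12.0, 14, 16.0, 18, 18.0]
```

For a smooth signal the interpolated samples land close to the lost ones, which is the correlation the decoder exploits when half of the descriptions are dropped.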
The swiftly increasing demand for computational calculations in business processes, file transfer under certain protocols, and data centers has forced the development of an emerging technology that caters to the need for computation and for highly manageable, secure storage. Cloud computing is the best answer to these technological desires, introducing various sorts of service platforms in a high-computation environment. Cloud computing is the most recent paradigm promising to turn the vision of "computing utilities" into reality. Since the term "cloud computing" is relatively new, there is no universal agreement on its definition. In this paper, we go through different areas of research expertise and novelty in the cloud computing domain and its usefulness in the genre of management. Even though cloud computing provides many distinguished features, it still has certain shortcomings, along with comparatively high cost for both private and public clouds. It works by congregating masses of information and resources stored in personal computers and other gadgets and putting them on the public cloud for serving users. Cloud computing is turning out to be one of the most explosively expanding technologies in the computing industry in this era. It authorizes users to transfer their data and computation to a remote location with minimal impact on system performance. With the evolution of virtualization technology, cloud computing has emerged to be distributed systematically and strategically on a full basis. The idea of cloud computing has not only revitalized the field of distributed systems but also fundamentally changed how business utilizes computing today. Resource management in cloud computing is in fact a hard problem, owing to the scale of modern data centers, the variety of resource types and their interdependencies, the unpredictability of load, and the range of objectives of the different actors in a cloud ecosystem.
Virtual Machine Allocation Policy in Cloud Computing Environment using CloudSim (IJECEIAES)
Cloud computing has been widely accepted by researchers for web applications. During the past years, distributed computing replaced centralized computing and finally evolved towards cloud computing. There are many applications of cloud computing, such as online sale and purchase, social networking web pages, country-wide virtual classes, digital libraries, sharing of pathological research labs, supercomputing and many more. Creating and allocating VMs to applications uses the virtualization concept. Resource allocation policies and load balancing policies play an important role in managing and allocating resources as per application requests in a cloud computing environment. Cloud Analyst is a GUI tool that simulates the cloud computing environment. In the present work, the cloud servers are arranged in a step network, and a UML model is developed for minimizing the energy consumed by the processor, dynamic random access memory, hard disk, electrical components and motherboard. The Unified Modeling Language is used to design a class diagram. Response time and internet characteristics have been demonstrated, and the computed results are depicted in the form of tables and graphs using the Cloud Analyst simulation tool.
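A VM allocation policy of the kind CloudSim lets you plug in can be sketched as a bin-packing heuristic: first-fit decreasing, which packs VMs onto as few hosts as possible so the rest can be powered down. The capacities and VM demands below are illustrative assumptions, not taken from the paper's simulation setup.

```python
# Minimal first-fit-decreasing VM placement sketch (energy-oriented:
# fewer active hosts means less idle power). Demands/capacity in MIPS.
def allocate(vms, host_capacity):
    """Pack VMs (CPU demands) onto as few hosts as possible."""
    hosts = []                        # remaining capacity of each active host
    placement = {}
    for vm_id, demand in sorted(enumerate(vms), key=lambda x: -x[1]):
        for h, free in enumerate(hosts):
            if free >= demand:        # first host with room wins
                hosts[h] -= demand
                placement[vm_id] = h
                break
        else:                         # no host fits: power on a new one
            hosts.append(host_capacity - demand)
            placement[vm_id] = len(hosts) - 1
    return placement, len(hosts)

placement, active = allocate([500, 300, 400, 200, 600], host_capacity=1000)
print(active)   # 2 active hosts instead of 5
```

CloudSim's built-in policies follow the same pattern (a `VmAllocationPolicy` deciding host choice per VM); sorting by decreasing demand is a simple heuristic that usually approaches the optimal packing.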
Similar to TOWARDS A MACHINE LEARNING BASED ARTIFICIALLY INTELLIGENT SYSTEM FOR ENERGY EFFICIENCY IN A CLOUD COMPUTING ENVIRONMENT (20)
The software development field is becoming more productive day by day with the well-known model named Agile. Agile is a main focus of research nowadays because of its ability to handle changes efficiently through iterative and incremental practices. Although it became famous because of these capabilities, there are still some issues with it, notably the neglect of usability engineering in the different phases of Agile, even though usability is an important aspect of understanding software. Usability has deep roots in software quality and is a core construct of HCI. To develop interactive and usable systems, there is a need for a model that can integrate HCI with Agile. To address this issue, we have proposed a model that works with both user-centered design (the main focus of HCI) and Agile by assembling different practices from both fields, resulting in usable products. It will enhance software life and user satisfaction by giving users running software with usability.
This article aims to outline different pedagogical strategies with applications (apps) in the classroom. Every year the use of mobile devices like tablets and smartphones increases, and at the same time applications are being developed to meet this demand. It is therefore essential that educators investigate their use as a motivational technological medium that can be used in the classroom. Apps can be used both as a source of information and as a tool for creating material. Thus, this article presents the results of a study applying teaching strategies in different contexts, highlighting the importance of mobile learning as a viable alternative in the classroom. To do so, a multiple case study was conducted in the undergraduate pedagogy program and in a digital inclusion course for seniors, both offered in the first semester of 2017 at the Federal University of Rio Grande do Sul (UFRGS). Educational applications and examples of teaching strategies using apps were created in these classes. Educational applications offer the possibility of bringing innovations to teaching practices, as well as new forms of communication, interaction and authorship, thus contributing to the process of teaching and learning.
Science Fiction (Sci-Fi) brings several examples of modifications made in the human body, each having different goals in mind — it may be either improving or compromising intellectual, physical, or psychological abilities. Lately, with consistent advancements in the Health field, mostly brought about by e-Health startups and the interdisciplinary combination of Biology, Medicine, Computer Science, and Engineering, many of the modifications seen in big screens became a reality, albeit from a weak signal point of view and not yet mainstream solutions to Health issues. Aiming to define the scope of this research, as Sci-Fi works are abundant and take the form of movies, animes, mangas, and books, filtering all of those would be a herculean job. Hence, for this paper, only movies and animes were assessed, according to precepts established in the Methodology section. Taking our society’s progress into consideration, the aims of this work are twofold: (i) knowing to what extent there has already been real scientific progress with regard to science fiction scenarios and predictions of human body transformations; (ii) understanding the repercussions of humans undergoing such modifications applied to several fields, such as Economics, Sociology and Ethics, pinpointing scenarios that should be discussed in preparation for future changes.
This study aims to evaluate the level of information that Albanian travellers have about malaria, known as one of the most impactful diseases on a worldwide scale. For this purpose, a cross-sectional survey was conducted with Albanian passengers departing from Tirana International Airport from July to September 2015. The travellers were given a questionnaire with fourteen closed questions about their information on infectious diseases and their vaccination status. The questionnaire included malaria-related questions, focused mainly on basic knowledge of malaria, such as how it is transmitted and how it can be prevented. Six hundred and five persons responded to the questionnaire. The majority of the participants in the survey were women in the age group of 20-40 years old. 56% of the participants replied negatively when asked whether they inform themselves about infectious diseases and other health-related topics while preparing to travel, whether the purpose of the travel is leisure or job-related. Regarding their knowledge specifically of malaria, more than half of the participants responded that they have information about it, mainly from the educational system. Almost 90% of the persons knew malaria as a mosquito-borne disease, but less than 28% of them took measures to prevent mosquito bites during their travel.
In modern social and philosophical studies, value orientations are understood as the orientation of the subject (personality, group, community) towards goals that he or she perceives as positively significant (good, right, high, etc.) in accordance with the models accepted in society (or community), available life experience, and individual preferences. This orientation is a set of stable motives underlying the subject's orientation in the social environment and his or her assessment of situations. It can be realized to varying degrees, expressed in facts of behavior, faith, and knowledge, and take the form of a stereotype, judgment, project (program), ideal, or worldview. At the same time, an orientation towards recognized positive life goals does not automatically mean that the subject takes active steps to achieve them in real life.
Paketo Buildpacks : la meilleure façon de construire des images OCI? DevopsDa...Anthony Dahanne
Les Buildpacks existent depuis plus de 10 ans ! D’abord, ils étaient utilisés pour détecter et construire une application avant de la déployer sur certains PaaS. Ensuite, nous avons pu créer des images Docker (OCI) avec leur dernière génération, les Cloud Native Buildpacks (CNCF en incubation). Sont-ils une bonne alternative au Dockerfile ? Que sont les buildpacks Paketo ? Quelles communautés les soutiennent et comment ?
Venez le découvrir lors de cette session ignite
Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv...Shahin Sheidaei
Games are powerful teaching tools, fostering hands-on engagement and fun. But they require careful consideration to succeed. Join me to explore factors in running and selecting games, ensuring they serve as effective teaching tools. Learn to maintain focus on learning objectives while playing, and how to measure the ROI of gaming in education. Discover strategies for pitching gaming to leadership. This session offers insights, tips, and examples for coaches, team leads, and enterprise leaders seeking to teach from simple to complex concepts.
In software engineering, the right architecture is essential for robust, scalable platforms. Wix has undergone a pivotal shift from event sourcing to a CRUD-based model for its microservices. This talk will chart the course of this pivotal journey.
Event sourcing, which records state changes as immutable events, provided robust auditing and "time travel" debugging for Wix Stores' microservices. Despite its benefits, the complexity it introduced in state management slowed development. Wix responded by adopting a simpler, unified CRUD model. This talk will explore the challenges of event sourcing and the advantages of Wix's new "CRUD on steroids" approach, which streamlines API integration and domain event management while preserving data integrity and system resilience.
Participants will gain valuable insights into Wix's strategies for ensuring atomicity in database updates and event production, as well as caching, materialization, and performance optimization techniques within a distributed system.
Join us to discover how Wix has mastered the art of balancing simplicity and extensibility, and learn how the re-adoption of the modest CRUD has turbocharged their development velocity, resilience, and scalability in a high-growth environment.
Why React Native as a Strategic Advantage for Startup Innovation.pdfayushiqss
Do you know that React Native is being increasingly adopted by startups as well as big companies in the mobile app development industry? Big names like Facebook, Instagram, and Pinterest have already integrated this robust open-source framework.
In fact, according to a report by Statista, the number of React Native developers has been steadily increasing over the years, reaching an estimated 1.9 million by the end of 2024. This means that the demand for this framework in the job market has been growing making it a valuable skill.
But what makes React Native so popular for mobile application development? It offers excellent cross-platform capabilities among other benefits. This way, with React Native, developers can write code once and run it on both iOS and Android devices thus saving time and resources leading to shorter development cycles hence faster time-to-market for your app.
Let’s take the example of a startup, which wanted to release their app on both iOS and Android at once. Through the use of React Native they managed to create an app and bring it into the market within a very short period. This helped them gain an advantage over their competitors because they had access to a large user base who were able to generate revenue quickly for them.
Software Engineering, Software Consulting, Tech Lead.
Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Security,
Spring Transaction, Spring MVC,
Log4j, REST/SOAP WEB-SERVICES.
TROUBLESHOOTING 9 TYPES OF OUTOFMEMORYERRORTier1 app
Even though at surface level ‘java.lang.OutOfMemoryError’ appears as one single error; underlyingly there are 9 types of OutOfMemoryError. Each type of OutOfMemoryError has different causes, diagnosis approaches and solutions. This session equips you with the knowledge, tools, and techniques needed to troubleshoot and conquer OutOfMemoryError in all its forms, ensuring smoother, more efficient Java applications.
SOCRadar Research Team: Latest Activities of IntelBrokerSOCRadar
The European Union Agency for Law Enforcement Cooperation (Europol) has suffered an alleged data breach after a notorious threat actor claimed to have exfiltrated data from its systems. Infamous data leaker IntelBroker posted on the even more infamous BreachForums hacking forum, saying that Europol suffered a data breach this month.
The alleged breach affected Europol agencies CCSE, EC3, Europol Platform for Experts, Law Enforcement Forum, and SIRIUS. Infiltration of these entities can disrupt ongoing investigations and compromise sensitive intelligence shared among international law enforcement agencies.
However, this is neither the first nor the last activity of IntekBroker. We have compiled for you what happened in the last few days. To track such hacker activities on dark web sources like hacker forums, private Telegram channels, and other hidden platforms where cyber threats often originate, you can check SOCRadar’s Dark Web News.
Stay Informed on Threat Actors’ Activity on the Dark Web with SOCRadar!
Large Language Models and the End of ProgrammingMatt Welsh
Talk by Matt Welsh at Craft Conference 2024 on the impact that Large Language Models will have on the future of software development. In this talk, I discuss the ways in which LLMs will impact the software industry, from replacing human software developers with AI, to replacing conventional software with models that perform reasoning, computation, and problem-solving.
Exploring Innovations in Data Repository Solutions - Insights from the U.S. G...Globus
The U.S. Geological Survey (USGS) has made substantial investments in meeting evolving scientific, technical, and policy driven demands on storing, managing, and delivering data. As these demands continue to grow in complexity and scale, the USGS must continue to explore innovative solutions to improve its management, curation, sharing, delivering, and preservation approaches for large-scale research data. Supporting these needs, the USGS has partnered with the University of Chicago-Globus to research and develop advanced repository components and workflows leveraging its current investment in Globus. The primary outcome of this partnership includes the development of a prototype enterprise repository, driven by USGS Data Release requirements, through exploration and implementation of the entire suite of the Globus platform offerings, including Globus Flow, Globus Auth, Globus Transfer, and Globus Search. This presentation will provide insights into this research partnership, introduce the unique requirements and challenges being addressed and provide relevant project progress.
Listen to the keynote address and hear about the latest developments from Rachana Ananthakrishnan and Ian Foster who review the updates to the Globus Platform and Service, and the relevance of Globus to the scientific community as an automation platform to accelerate scientific discovery.
Durreesamin Journal (ISSN: 2204-9827)
July Vol 4 Issue 2, Year 2018

TOWARDS A MACHINE LEARNING BASED ARTIFICIALLY INTELLIGENT SYSTEM FOR ENERGY EFFICIENCY IN A CLOUD COMPUTING ENVIRONMENT
Yao Francois Michael Kra, 1840192845@qq.com, School of Computer Science and Engineering, Southeast University, Nanjing, China
Noah Kwaku Baah, baah.noah@gmail.com, School of Computer Science and Engineering, Southeast University, Nanjing, China.
Imran Memon, imranmemon52@zju.edu.cn, College of Computer Science, Zhejiang University, Hangzhou 310027, China
William Gyasi-Mensah, kyfm349349@gmail.com, School of Finance and Economics, Jiangsu University
ABSTRACT
Cloud computing has become the mainstream of emerging technologies for information interchange and accessibility. With such systems, information can be accessed from any geographic location on the planet over a reasonably good internet connection. Applying machine learning together with artificial intelligence to the problem of energy reduction in cloud data centers is an innovative idea, and artificial intelligence already plays a significant role in cloud environments. For that matter, large providers like Amazon have taken steps to ensure that they can continue to expand their fast-growing cloud services to keep pace with rapid population growth. These companies have built large data centers in remote parts of the world to meet this demand. These centers consume significant amounts of electrical energy, and much of it is wasted. According to an IDC white paper, data centers have wasted billions of dollars' worth of energy. Additionally, researchers have argued that by the year 2020 the energy consumption rate will have doubled, and research in this area remains a hot topic. This paper addresses the energy efficiency issue at a cloud data center using machine learning methodologies, principles, and practices. It also aims to bring out possible future implementation methods for artificially intelligent agents that would help reduce energy wastage at a cloud data center and thus help ameliorate the great energy problem at hand.
Keywords: Cloud Computing; PUE; Energy Efficiency; Machine Learning; Artificial Intelligence; Cloud Service Provider (CSP); Virtualization
I. INTRODUCTION
In recent years, cloud computing has established itself as one of the brains and drivers of modern technology. As a processing paradigm that benefits from economies of scale, cloud computing, when organized and used effectively, presents significant advantages in computational power while reducing expenditure and saving energy. Massive data centers are the places where the concept of cloud computing comes to life. Through virtualization technology, data center resources and services can be shared by many users, who avoid having to set up their own infrastructure to do work that can be completed in the cloud.

Efficient use of energy in cloud computing has been receiving attention from researchers over the past decade. Some studies have suggested various optimization approaches to the challenge of minimizing the expenditure of energy in cloud computing settings [36],[37],[25],[20],[22]. Several scenarios also exist for applying machine learning strategies to resource supply and management in the cloud, with several goals. This study provides a survey towards a machine learning based artificially intelligent system for the efficient use of energy in a cloud computing setting. The aim of this study is to analyze and delve into energy efficiency, contribute to machine learning research, and support invention in innovative ways capable of producing the preferred outcomes. As computing has become a vast and sophisticated engine worldwide, cloud computing as a delivery model provides computing resources on a pay-as-you-use basis. Public IT corporations such as Microsoft, Google, Amazon, and IBM run expansive data centers worldwide to handle their ever-rising requests. Notably, the rising demand for cloud computing facilities has considerably multiplied the power usage of data centers, making it an important issue. Even a modest drop in energy charges for a large company like Google can amount to over a million dollars in savings [35]. High power consumption does not only translate into great cost but also leads to high carbon emissions that do not appear to
be environmentally sustainable. Power costs are rising hugely, data center equipment is overstretching power and cooling infrastructures, and so the primary point has not been the current amount of data center emissions, but rather that these emissions are rising faster than other carbon emissions [35]. Among the compelling primary rationales for energy underperformance in data centers is that energy is wasted when a server operates at a low load. Even at low utilization, such as 10 percent CPU usage, the power consumed is over 50 percent of the peak power [36]. Dynamic consolidation has proved to be a good, economical technique for cutting energy in data centers by switching off unused or under-utilized servers [36],[37]. However, maintaining the intended level of Quality of Service (QoS) between the user and a data center is vital; a periodic upgrade will save energy while keeping an appropriate Quality of Service. The Quality of Service requirement is formalized through a Service Level Agreement (SLA) that specifies features such as the minimum throughput or the maximum latency delivered by the installed system. Moreover, virtualization is the most prevalent power control and resource distribution technique used by data centers. It permits a physical server (host) to be shared among multiple Virtual Machines (VMs), whereby each VM can run numerous application tasks. CPU and memory resources are dynamically provided to a Virtual Machine according to its current resource requirements, which makes virtualization an enabler of energy efficiency in every data center [35].
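The low-load inefficiency described above can be illustrated with the commonly used linear server power model (a minimal sketch; the 100 W idle and 200 W peak figures are illustrative assumptions chosen so that idle power is 50 percent of peak, not measurements from this paper):

```python
def server_power(utilization, p_idle=100.0, p_peak=200.0):
    """Linear power model: an idle server still draws p_idle watts,
    and power grows linearly with CPU utilization up to p_peak."""
    if not 0.0 <= utilization <= 1.0:
        raise ValueError("utilization must be in [0, 1]")
    return p_idle + (p_peak - p_idle) * utilization

# At 10% utilization the server already draws 110 W, i.e. 55% of its
# 200 W peak -- which is why consolidating load onto fewer servers
# and switching the rest off saves energy.
print(server_power(0.10))  # 110.0
```

This is also why dynamic consolidation pays off: two servers at 10% load draw about 220 W under this model, while one server at 20% load draws only 120 W.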
Our contributions in this article are:
1. Illustrating some problems on the way towards a machine learning based artificial agent that cuts energy usage in large-scale data centers.
2. Reviewing and analyzing the various types of agent entities to which machine learning may be applied.
3. Identifying problems in existing systems, to give new researchers insight into innovative ways of implementing energy-saving measures and of handling the issue of massive energy wastage in green data centers.
4. Additionally, showing that exploiting AI can help a data center's manager with prediction and management, and prevent worst-case situations from occurring while reducing used capacity.
The paper is organized as follows: Section I is the introduction and Section II gives the background of the article. Section III states the problem, and Section IV presents mathematical models for PUE and the state of the art. Sections V and VI discuss privacy control and the literature review, respectively; Section VII presents the conclusion and possible future developments.
II. BACKGROUND
People around the world these days use computers, PC networks, and applications for most of their business processes [25], communication, and social networking [25]. As a result, the popularity of web-based applications is on the increase. Most of the companies rendering these internet-based applications use cloud computing services to host them. One can only imagine how much processing power is needed to tackle this common workload; these workloads are mostly distributed across data centers within a cloud computing setting.

The goal of cloud computing is to provide computing resources as utilities, much as electricity, clean water, and telephone services are rendered as utilities today. The services provided by cloud computing are based on software as a service (SaaS), infrastructure as a service (IaaS), and platform as a service (PaaS). A novel aspect of cloud computing is its acquisition model, based on subscribing to services, and its business model, based on pay-per-use. It has an excellent access model, reachable over the internet from any device, and a technical model that is scalable, elastic, dynamic, multi-tenant, and shareable. There are different types of cloud computing environments. Public cloud services are offered by a third-party service provider. A private cloud is very similar to a public cloud; the only real distinction is that private cloud services are managed within one organization. A community cloud is controlled by a group of agencies that share a common goal or concern, such as security. A hybrid cloud is a mix of any of the other cloud environments.
A) Machine Learning
Machine learning (ML) methods are considered for resource and power control in large-scale data centers, with corresponding procedures in grid energy and cloud technology. The task consolidation policies described in [35] operate every job with a small amount of data resources and take the scheduling aspect into account when cutting down energy utilization [35]. The study adopted a machine learning strategy as a method to explore existing system data, such as energy usage levels, processor loads, and task completion times, and to contribute to the quality of scheduling decisions. The primary goal of the policy in [38] was to maximize user satisfaction while keeping energy usage down. In [38] an online learning algorithm was designed to dynamically choose among diverse experts for making power management decisions at execution time, where each expert is a predesigned power management policy. Different experts outperform one another under different workloads and hardware characteristics.
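The expert-selection idea described above can be sketched with a simple multiplicative-weights scheme (an illustrative sketch, not the actual algorithm of [38]; the expert names and the loss values are hypothetical):

```python
def update_weights(weights, losses, eta=0.5):
    """Multiplicative-weights update: an expert whose policy incurred a
    high normalized loss (e.g. wasted energy, in [0, 1]) over the last
    control interval is down-weighted for future decisions."""
    return [w * (1.0 - eta * l) for w, l in zip(weights, losses)]

def current_best(weights, experts):
    """Follow the currently highest-weighted expert's policy."""
    return max(zip(weights, experts))[1]

# Three hypothetical power-management policies competing as "experts".
experts = ["aggressive-sleep", "conservative-sleep", "always-on"]
weights = [1.0, 1.0, 1.0]
# Observed losses over one interval: "always-on" wasted the most energy.
weights = update_weights(weights, [0.2, 0.4, 0.9])
print(current_best(weights, experts))  # aggressive-sleep
```

Over many intervals the weights concentrate on whichever expert suits the current workload and hardware, which is the behavior the online scheme in [38] is designed to achieve.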
B) Reinforcement Learning
In Reinforcement Learning (RL) [35], the intelligent agent arrives at the best resolution through trial-and-error interaction with its environment, with no prior information about the surroundings. A reinforcement learning framework comprises [1],[35]:
▪ State space S: the set of states with which the intelligent agent can represent any surrounding environment.
▪ Action space A: the set of actions that the intelligent agent can perform.
▪ Reinforcement signal r: a signal that the intelligent agent receives from the environment.
Actually, the signal reflects the success or failure status of the system after an associated action has taken place. Considering the signal, in [1] it serves as a penalty for the intelligent agent that has performed an action. Q-learning [1] is one of the best-known reinforcement learning algorithms and has been employed in numerous areas of analysis [1]. At each iteration of the Q-learning algorithm, the intelligent agent first detects the system state s and selects an action a. When the action is performed, the system moves to the following state s′ and the agent receives the reinforcement signal r. The Q value is then updated by the following equation at the start of the next iteration:

Q(s, a) ← (1 − α) Q(s, a) + α [ r + γ min_{a′} Q(s′, a′) ]    (1) [1]

where Q(s, a) [1] signifies the value of the intelligent agent taking action a in state s. The learning rate α determines the proportion by which the new information overwrites the old. It can take a value between zero and one; a value of zero means that no learning takes place, while a value of one means that only the most recent information is used. The discount factor γ is a value between zero and one that gives more weight to penalties in the near future than in the far future. Consequently, once the agent reaches state s again, it chooses the action with the least Q-value. The policy for selecting the best action in state s is:

a* = arg min_{a ∈ A} Q(s, a)    (2) [1]

Accordingly, the learning agent's objective is to find an optimal action-selection policy.
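Update rule (1) and selection policy (2) can be sketched in a few lines of tabular Q-learning (a minimal, cost-minimizing sketch; the states, actions, and penalty value are hypothetical illustrations, not taken from [1]):

```python
def q_learning_step(Q, s, a, r, s_next, actions, alpha=0.5, gamma=0.9):
    """Update rule (1): blend the old Q(s, a) with the observed penalty r
    plus the discounted minimum future Q-value (penalties are minimized)."""
    best_future = min(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * (r + gamma * best_future)

def choose_action(Q, s, actions):
    """Policy (2): in state s, pick the action with the least Q-value."""
    return min(actions, key=lambda a: Q[(s, a)])

# Two states and two actions (e.g. put a server to sleep vs. keep it on);
# all Q-values start at zero.
states, actions = ["low-load", "high-load"], ["sleep", "stay-on"]
Q = {(s, a): 0.0 for s in states for a in actions}

# A high penalty (wasted energy) for staying on at low load raises
# Q("low-load", "stay-on"), so policy (2) now prefers "sleep" there.
q_learning_step(Q, "low-load", "stay-on", r=1.0, s_next="low-load", actions=actions)
print(choose_action(Q, "low-load", actions))  # sleep
```

Because the signal is a penalty rather than a reward, both the update and the policy use min where reward-maximizing formulations would use max.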
Reinforcement Learning (RL) [1],[39] is a machine learning prototype that has been applied to power management in large-scale data center systems. In Reinforcement Learning, a decision-maker or agent observes the environment and chooses an action at every state [1]. After every action has been taken, the agent gets a response showing the value of the executed action. The ultimate objective of the agent is to learn a policy for choosing the most effective action in every possible state. Researchers have also shown the viability of RL methods in resource distribution [41][42], energy control [36][42], and self-optimizing memory controllers [1][43]. In [42], servers are shared among internet applications, which dynamically exploit online hybrid Reinforcement Learning to maximize the anticipated sum of SLA payments for every application. This hybrid method permits the RL controller to bootstrap from existing management policies, considerably cutting the cost of learning. The efficiency of the process was verified on a realistic data center image. Moreover, in [43], a system-level power management policy supported by Reinforcement Learning provided a real reduction in energy usage. It learns the most favorable policy in the absence of any previous knowledge of the workload. The researchers set the delay in production activity as the performance constraint while reducing energy usage. Looking at the prevalence of machine learning based power management methods, RL-based learning can investigate the trade-offs within the power-performance design space and converge to a far more efficient energy management policy.

The application of machine learning algorithms to existing monitoring data provides a chance to improve data center operating effectiveness significantly. A typical large-scale data center generates a large number of data points across thousands of sensors each day, yet this information is rarely used for applications other than monitoring. Advances in processing power and monitoring capabilities create an outsized opportunity for machine learning to guide best practice and improve data center efficiency.
C) Artificial Intelligence
Intelligence is commonly thought of as the capacity to gather expertise and reason with insight to resolve complex issues. In the near future, intelligent machines will replace human abilities in several ways. AI is the study and creation of intelligent machines and software systems capable of reasoning, learning, gathering data, communicating, manipulating, and understanding objects. John McCarthy coined the term in 1956 as an aspect of technology concerned with creating computers that act like humans. Efficient energy use, generally simply referred to as energy efficiency, is the objective of reducing the quantity of power needed to produce products and services. For instance, installing fluorescent lights, LED lights, or natural skylights reduces the amount of energy necessary to reach an equivalent degree of lighting compared with using old incandescent light bulbs. Compact fluorescent lights need a mere fraction of the power of incandescent lights and can last from six to ten times longer. The following agents contribute to energy efficiency:
Energy Monitoring Agent: This part is responsible for inspecting the usage of electricity. The energy monitoring agent compares current energy usage with historical data, records the results, and reports an emergency when abnormal information is revealed.

Energy Effectiveness Analysis Agent: This agent is accountable for analyzing information. The energy efficiency analysis agent can classify the characteristics of different users and eventually verify the principles of energy utilization, which are used to make effective selections.

Decision-making Agent: This agent thoroughly considers the results of the energy efficiency analysis agent and the present strategy, and makes correct choices when needed. At the same time, it can take the CMB output as an essential reference. Finally, the agent generates a new and reasonable electricity scheme to guide users.

Energy Diagnosing Agent: Analyzes energy-using equipment from the system side, estimates current power consumption, and offers additional references to the decision-making agent, thus serving to enhance energy efficiency.

User Feedback Agent: Uses the service condition to estimate the effectiveness of the system, measures the strength of the model from the user side, and outputs auxiliary suggestions so that periodic changes can be made, continuously improving the decision-making setup.

Information Intelligence Maintenance Agent: This agent is responsible for maintaining and classifying all the system's information at regular intervals, including user profiles, energy-using equipment information, energy utilization information, and information from measurement points.
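The energy monitoring agent's core behavior, comparing current usage against historical data and flagging anomalies, can be sketched as follows (a minimal illustration; the three-sigma threshold and the kWh readings are assumptions, not values from the paper):

```python
from statistics import mean, stdev

def check_usage(history, current, n_sigmas=3.0):
    """Energy-monitoring-agent style check: flag the current reading as
    abnormal when it deviates from the historical mean by more than
    n_sigmas standard deviations."""
    mu, sigma = mean(history), stdev(history)
    abnormal = abs(current - mu) > n_sigmas * sigma
    return abnormal, mu, sigma

# Hypothetical hourly kWh readings for one server rack.
history = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]
abnormal, mu, sigma = check_usage(history, current=14.5)
print(abnormal)  # True: 14.5 kWh is far above the ~10 kWh baseline
```

In a real deployment the baseline would be seasonal (per hour-of-day, per weekday), but the compare-record-report loop is the same.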
The constituents below show the effectiveness of energy savings:

Renewable Energy: Renewable energy has a positive impact on health because it produces energy for a significant reason and has no polluting effect, unlike coal and nuclear power.

Intelligent Distribution: It has an artificial agent whose knowledge is applied as a human's would be. Its experience enables it to predict and prevent unusual behavior in distribution systems (e.g. street lights, doors, elevators, wireless networks, and many more).

Operation Centre: Performs as a data center whereby devices can be checked and specific problems resolved within the area of employment.

Smart Home: Has the ability to control and check all connected gadgets within office buildings and houses.

Smart Connected Cities: Performs as public clouds which connect the cities with internet bandwidth, with the aim of accessing information anywhere within the city.

PEVs: Plug-in electric vehicles (PEVs) are very efficient electric cars with chargers offering rather short charging times.
D) Deep Learning
Deep learning methods are learning methods with multiple levels of representation, obtained by composing simple but nonlinear modules that each transform the representation at one level, starting from the raw data, into a representation at a slightly more abstract level. With the composition of enough such transformations, highly complex functions can be learned.

Deep learning is making significant advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years [54]. It has turned out to be very good at discovering intricate structure in high-dimensional data and is therefore relevant to many domains of science, business, and government.

Since 2006, deep structured learning, more commonly referred to as deep learning or hierarchical learning [60], has been known as a new area of machine learning research [56],[61]. Throughout the past years, many methods created by deep learning research have already been affecting a wide range of signal and information processing work, within both the traditional scope and the widened scope that includes fundamental aspects of machine learning and artificial intelligence [55],[56],[57],[58],[59].
E) Cloud Computing Load Prediction
Load prediction is one of the first important analytics applications for the Smart Grid (SG). Moreover, the availability of fine-grained time-interval data has made it possible to predict over the short term with greater accuracy.

Fig. 1. Intelligent smart grid architecture. Source: a smart grid [44]; Smart Buildings of the Future: Cyber-aware, Deep Learning Powered, and Human Interacting.

Figure 1 above illustrates the need for an intelligent system that helps reduce energy wastage and, at the same time, identifies sources of energy inefficiency.
Accurate predictions are necessary for short-term
operations as well as for mid-term planning. Additionally,
producers must understand the purchases they need to
make for longer-term planning [45]. Several applications of
load prediction have been presented in the literature, many
of which apply statistical functions and machine learning
techniques. For short- and medium-term prediction,
time-series analysis and neural networks are commonly
used [45], [47], [48]. A drawback of short-term prediction
models has been a lack of insight into the larger picture,
because they do not handle data about the different classes
of customers. In [49], a PCA-based approach was used to
establish the kind of demand exhibited by such customer
categories. In [50], [45], a hybrid system of SOMs and
SVMs was applied to predict mid-term electricity load: the
SOM was used to divide power-usage data into two groups,
which were then input into an SVM in a supervised manner
for load forecasting. In [46], Espinoza et al. reported on
short-term prediction with hourly load data from a Belgian
grid station, highlighting that prediction and customer
identification are interrelated, and proposed a merged
structure that includes both. The initial modeling relies on
seasonal time-series analysis using the periodic
auto-regression (PAR) model [51]; periodic autoregression
has also been used in modeling electricity prices [52]. The
stationary attributes obtained from these models are run
through a k-means clustering method to incorporate the
various customer descriptions.
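The k-means step described above can be sketched as follows. This is an illustrative, dependency-free implementation of our own; the load-profile values and the choice of k = 2 are invented for the example and are not taken from [46].

```python
import math
import random

def kmeans(profiles, k, iters=50, seed=0):
    """Naive k-means: cluster load profiles (lists of floats) into
    k groups of customers with similar usage shapes."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(profiles, k)]
    assign = [0] * len(profiles)
    for _ in range(iters):
        # Assignment step: each profile joins its nearest centroid.
        for i, p in enumerate(profiles):
            assign[i] = min(range(k),
                            key=lambda c: math.dist(p, centroids[c]))
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [profiles[i] for i in range(len(profiles))
                       if assign[i] == c]
            if members:
                centroids[c] = [sum(col) / len(members)
                                for col in zip(*members)]
    return assign, centroids

# Hypothetical coarse daily profiles: two "industrial" customers
# (daytime peak) and two "residential" customers (evening peak).
profiles = [
    [1, 1, 5, 9],   # industrial
    [2, 1, 6, 9],   # industrial
    [8, 9, 2, 1],   # residential
    [9, 8, 1, 2],   # residential
]
labels, centers = kmeans(profiles, k=2)
```

In practice the inputs would be the stationary attributes extracted by the PAR models rather than raw profiles, and a library implementation would be preferred.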
III. PROBLEMS
The problem lies in the fact that, in a cloud computing
environment, servers consume far more energy than they
need; much energy is wasted because systems are not
designed with energy efficiency in mind. Computer servers
in data centers account for about 2% of worldwide energy
demand, growing at roughly 12% a year, according to the
group. The servers, Greenpeace said, can consume as much
power as 50,000 average U.S. homes. However, most of
what supplies energy to the cloud comes from coal rather
than renewable sources like wind and solar, according to
Greenpeace. Clusters of data centers are rising in regions
where coal-powered electricity is cheap and plentiful. In its
report, the organization focused on ten major tech
companies, including Apple, Twitter, and Amazon.
Recently, the group has waged a feisty fight against
Facebook, which depends on coal for 53.2% of its
electricity, according to Greenpeace. Several companies,
the organization said, tightly guard information about the
environmental impact and power usage of their IT
operations. They also focus more on using energy
efficiently than on sourcing it cleanly, said Greenpeace.
Yahoo earned bonus points for siting facilities near
clean-energy hot spots, with coal-based power making up
only 18.3% of its portfolio. Google received commendation
for its extensive support of wind and solar initiatives and
for creating a subsidiary, Google Energy, which can buy
electricity directly from independent clean-energy
producers. In 2005, the U.S. had 10.3 million data centers
gobbling up enough power to supply all of England for two
months, according to the web marketing company
WordStream. Every month, the electricity used to power
searches on Google produces 260,000 kilograms of
greenhouse gases, enough to supply power to a freezer for
5,400 years, according to WordStream.
IV. Power Usage Effectiveness (PUE) vs. Data Center
Infrastructure Efficiency (DCiE)
Benchmarking a data center's power efficiency is a vital
first step toward minimizing energy usage and the
associated power expenditure. Benchmarking allows us to
understand the current level of efficiency of a data center,
and, as further optimization procedures are instituted, it
aids in measuring the effect of those efficiency efforts.
Power Usage Effectiveness (PUE) and its reciprocal, Data
Center infrastructure Efficiency (DCiE), are widely
accepted benchmarks proposed by The Green Grid to help
IT professionals determine how energy-efficient data
centers are and to monitor the impact of their efficiency
efforts. The Green Grid also recommends a general
benchmark named Corporate Average Data center
Efficiency (CADE). At its February 2009 Technical
Forum, The Green Grid introduced new parameters named
Data Center Productivity (DCP) and Data Center energy
Productivity (DCeP), which probe into the useful work
produced by a data center. All benchmarks have their
worth and, when used correctly, are helpful and essential
tools for data center energy efficiency.
Data centers all around the world have a responsibility to
become green and eco-friendly. It starts with cutting their
energy costs and consumption. Traditional methods of
managing the energy efficiency of data centers are
evidently inefficient and obsolete. The PUE is the ratio of
the total amount of energy used by the facility to the
amount of energy delivered to the IT equipment.
Fig: 2. Sketch of how the PUE and the DCiE are calculated
For a computer data center facility [29], the lower an
organization's PUE, the greener it is; an ideal PUE is
about 1.0. PUE was developed by a group called The Green
Grid, and it is a computation of how efficiently a computer
data center uses energy.
It is necessary to know the components of the loads behind
these metrics, which may be represented as follows:
1. IT Equipment Power. It comprises the load associated
with all of the IT equipment, such as compute, storage, and
network equipment, along with supplementary gadgets
such as KVM switches, monitors, and workstations/laptops
used to monitor or otherwise control the data center.
2. Total Facility Power. It includes everything that supports
the IT equipment load, such as:
❖ Power delivery components such as generators,
UPS units, PDUs, switchgear, backup batteries,
and distribution losses external to the IT
equipment.
❖ Cooling system components like chillers,
computer room air conditioning units (CRACs),
direct expansion (DX) air handler units, pumps,
and cooling towers.
❖ Computer network and storage nodes.
❖ Other miscellaneous component loads such as
data center lighting.
The PUE and DCiE provide the simplest approach to
showing:
❖ Opportunities to improve a data center's
operational efficiency.
❖ How a data center compares with competing data
centers.
❖ Whether the data center's designs and processes
are improving over time, as reflected by the PUE.
❖ Opportunities to repurpose energy for additional
IT equipment.
While the two metrics are equivalent, they can be used to
express the power allocation within the data center in
different ways. For example, if a PUE is determined to be
3.0, this means that the data center's total demand is three
times greater than the energy required to power the IT
equipment alone. Additionally, the ratio can be used as a
multiplier for calculating the real effect of the system's
power demands. For instance, if a server draws 500 watts
and the PUE for the data center is 3.0, then the power
drawn from the utility grid to deliver 500 watts to the
server is 1,500 watts.
In a data center, PUE is calculated by:

PUE = Total Facility Energy / IT Equipment Energy (3)

Moreover, its reciprocal, DCiE, is described as:

DCiE = 1 / PUE (4)

It must be well noted here that the values for
Total Facility Energy and IT Equipment Energy will vary
and are likely to change based on a data center's layout.

DCiE = (IT Equipment Energy / Total Facility Energy) × 100% (5)
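To make the two metrics concrete, the following sketch (illustrative values only, not measurements from any real facility) computes PUE and DCiE and reproduces the 500-watt server example above:

```python
def pue(total_facility_watts: float, it_equipment_watts: float) -> float:
    """PUE = Total Facility Energy / IT Equipment Energy (eq. 3)."""
    return total_facility_watts / it_equipment_watts

def dcie(total_facility_watts: float, it_equipment_watts: float) -> float:
    """DCiE = 1 / PUE, expressed as a percentage (eqs. 4-5)."""
    return 100.0 * it_equipment_watts / total_facility_watts

# Worked example from the text: a server that draws 500 W in a
# facility with PUE 3.0 requires 1500 W from the utility grid.
facility_pue = pue(1500.0, 500.0)   # 3.0
grid_draw = 500.0 * facility_pue    # 1500.0 W
efficiency = dcie(1500.0, 500.0)    # about 33.3 %
```

The PUE acts as the multiplier described in the text: IT load times PUE gives the power that must enter the facility.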
Companies like Google have pioneered many attempts to
cut energy costs at their data centers. One such attempt is
an artificially intelligent system developed by its
subsidiary DeepMind that led to a 15 percent improvement
in power efficiency. The following diagram illustrates the
layout of the Google data center and how the PUE is
determined:
Data source: in its measurements, Google counts servers,
storage, and networking equipment as IT equipment power
and recognizes everything else as overhead power [33].
Fig: 4. Flowchart of an existing data center
Total Facility Power is measured at or close to the utility
meter(s) to correctly reflect the power entering the data
center. It should amount to the total energy used within the
data center. For data-center-only portions of a building, the
calculation must exclude energy that is not used within the
data center; otherwise the result would be faulty PUE and
DCiE metrics. For instance, if a data center operates in an
office building, the gross energy supplied from the utility
is the sum of the Total Facility Power for the data center
and the total power used by the non-data-center offices. In
this case, the data center administrator would need to
measure or estimate the amount of energy used by the
non-data-center offices (an estimate may introduce some
error into the computations). IT equipment power is
measured after all power conversion, switching, and
conditioning is completed and before the IT equipment
itself. A likely measurement point would be the output of
the computer room power distribution units (PDUs). This
measurement should represent the total energy delivered to
the compute equipment racks within the data center.
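As a sketch of the shared-building case described above (all readings hypothetical), the estimated office load is subtracted from the building meter reading before the PUE is formed; the error in the office estimate propagates directly into the metric:

```python
def shared_building_pue(utility_meter_kw: float,
                        estimated_office_kw: float,
                        it_load_at_pdu_kw: float) -> float:
    """Estimate PUE when the utility meter also covers non-data-center
    offices: subtract the (estimated) office load to recover the data
    center's Total Facility Power, then divide by the IT load measured
    at the PDU outputs."""
    total_facility_kw = utility_meter_kw - estimated_office_kw
    return total_facility_kw / it_load_at_pdu_kw

# Hypothetical readings: 1200 kW at the building meter, offices
# estimated at 400 kW, 500 kW measured at the PDU outputs.
estimated_pue = shared_building_pue(1200.0, 400.0, 500.0)  # 1.6
```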
Fig: 3. Structure of a green data center architecture
The PUE can range from 1.0 upward. Ideally, a PUE value
approaching 1.0 would show 100% efficiency (i.e., all
power utilized by IT equipment only). Presently, there are
no exhaustive data sets that show the true spread of PUE
across data centers.
Some preliminary work indicates that many data centers
may have a PUE of 3.0 or greater; however, with the right
design, a PUE value of about 1.6 ought to be achievable.
Measurements of twenty-two data centers showed PUE
values in the 1.3 to 3.0 range. Some researchers have
indicated that PUE values of 2.0 are attainable with correct
design. However, there is presently no comprehensive
industry data set; the Green Grid feels it is vital to start
measuring data centers' effectiveness, even though the
present approach requires data manipulation.
Additionally, the Green Grid urges data center operators to
share DCiE outcomes, which can help every data center
owner better assess their measurement methodology as
well as how their performance compares with the rest of
the industry.
Fig: 5. Observation of where energy is utilized
Again, there is no universal consensus on what makes an
efficient or inefficient data center, nor are there accurate
PUE statistics for data centers. In the future, the Green
Grid will provide values that profile target PUE and DCiE
metrics for a range of typical data center configurations. In
the short term, the Green Grid suggests that data center
operators begin measurement with either the PUE or DCiE
metrics. While the measurement points may not yet be
fully defined, the Green Grid feels it is vital to start
measuring data center efficiency, even if the method
presently requires data manipulation.
V. REVIEWED LITERATURE
A. Cloud Computing Authorization
U.S. federal agencies have been directed by the Office of
Management and Budget to use a process called FedRAMP
(Federal Risk and Authorization Management Program) to
assess and authorize cloud products and services. U.S.
Federal CIO Steven VanRoekel issued a memorandum to
federal agency Chief Information Officers on December 8,
2011, defining how federal agencies should use FedRAMP
[63][64]. FedRAMP consists of a subset of NIST Special
Publication 800-53 security controls specifically selected
to provide protection in cloud environments. A subset has
been defined for the FIPS 199 low categorization and the
FIPS 199 moderate categorization [64]. The FedRAMP
program has also established a Joint Authorization Board
(JAB) consisting of Chief Information Officers from DoD,
DHS, and GSA [63][64]. The JAB is responsible for
establishing accreditation standards for third-party
organizations that perform assessments of cloud solutions.
The JAB also reviews authorization packages and may
grant provisional authorization to operate; the government
agencies consuming the services retain final responsibility
for the authority to operate.
B. Legal Issues in Cloud Computing
As with other changes in the landscape of computing,
certain legal issues arise with cloud computing, including
trademark infringement, security concerns, and the sharing
of proprietary data resources. The Electronic Frontier
Foundation has criticized the United States government, in
connection with the Megaupload seizure process, for
considering that people lose their property rights by storing
data on a cloud computing service [68]. One significant but
rarely mentioned problem with cloud computing is the
question of who is in "possession" of the data [69][64]. If a
cloud company is the custodian of the data, then a
particular set of rights would apply. A further disadvantage
within the legalities of cloud computing is the problem of
legal ownership of the data: many Terms of Service
agreements are silent on the question of ownership [64].
The legal issues are also not limited to the period during
which the cloud-based application is actively being used
[64][70]; what happens when a client ends the relationship
with a provider should be straightened out, and for
important applications this should be considered before the
application is deployed in the cloud [64]. Moreover, in the
event of provider failure or bankruptcy, the status of
proprietary data may become blurred.
C. Cloud Computing Vendor Lock-In
Cloud computing is still relatively new, and standards are
still being developed. Many cloud platforms and services
are proprietary, meaning that they are built on the specific
standards, tools, and protocols developed by a particular
vendor for its particular cloud offering. This can make
migrating off a proprietary cloud platform prohibitively
complicated and expensive.
Three forms of vendor lock-in can occur with cloud
computing [64][72]:
Platform lock-in: cloud services tend to be built on one of
several possible virtualization platforms, for example
VMware or Xen. Migrating from a cloud provider using
one platform to a cloud provider using a different platform
can be very complicated [64][72].
Data lock-in: since the cloud is still new, standards of
ownership, i.e., who owns the data once it lives on a cloud
platform, are not yet developed, which could make it
complicated if cloud computing users ever decide to move
data off a cloud vendor's platform [64][72].
Tools lock-in: if the tools built to manage a cloud
environment are not compatible with different kinds of
both virtual and physical infrastructure, those tools will
only be able to manage data or apps that live in the
vendor's particular cloud environment [64][72].
Heterogeneous cloud computing is described as a kind of
cloud environment that prevents vendor lock-in, and it
aligns with enterprise data centers that are operating hybrid
cloud models [64][73]. The absence of vendor lock-in lets
cloud administrators select their choice of hypervisors for
specific tasks, or deploy virtualized infrastructures to other
enterprises without the need to consider the flavor of
hypervisor in the other enterprise [64][73].
A heterogeneous cloud is considered one that includes
on-premises private clouds, public clouds, and
software-as-a-service clouds. Heterogeneous clouds can
work with environments that are not virtualized, such as
traditional data centers. Heterogeneous clouds also allow
for the use of piece parts, such as hypervisors, servers, and
storage, from multiple vendors.
Cloud piece parts, such as cloud storage systems, offer
APIs, but they are often incompatible with one another.
The result is complicated migration between backends, and
it makes it difficult to integrate data spread across various
locations. This has been described as a problem of vendor
lock-in [64][74]. The solution to this is for clouds to adopt
common standards [64].
Heterogeneous cloud computing differs from homogeneous
clouds, which have been described as those using
consistent building blocks supplied by a single vendor
[64][73][75]. Intel General Manager of high-density
computing Jason Waxman is quoted as saying that a
homogeneous system of fifteen thousand servers would
cost six million U.S. dollars more in capital expenditure
and consume one megawatt more of power [64][75].
D. Cloud Computing Open Standards
Most cloud providers expose APIs that are typically well
documented but also unique to their implementation and
thus not interoperable [64][76]. Some vendors have
adopted others' APIs [76], and there are a number of open
standards under development, with a view to delivering
interoperability and portability. By 2012, the open standard
with the broadest industry support was OpenStack [77].
OpenStack was founded in 2010 by NASA and Rackspace
and is now governed by the OpenStack Foundation
[64][78][79]. OpenStack supporters include companies that
provide cloud technologies, such as AMD, Intel, Dell, HP,
IBM, Yahoo, Huawei, and now VMware [64][79].
E. Cloud Computing Privacy Solutions
Solutions to privacy in cloud computing include policy and
legislation as well as end users' choices for how data is
stored [64][80][81]. The cloud service provider needs to
establish clear and relevant policies that describe how the
data of every cloud user will be accessed and used
[64][81]. Cloud service users can encrypt data that is
processed or stored within the cloud to prevent
unauthorized access [64]; cryptographic encryption
mechanisms are the best options. In addition,
authentication and integrity protection mechanisms ensure
that data only goes where the customer wants it to go and
that it is not modified in transit [82].
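One common integrity-protection mechanism of the kind mentioned above is a keyed message authentication code. The following minimal sketch (key and payload are hypothetical; a real deployment would combine this with encryption and proper key management) uses Python's standard-library HMAC support:

```python
import hashlib
import hmac

def tag(key: bytes, data: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over the payload."""
    return hmac.new(key, data, hashlib.sha256).digest()

def verify(key: bytes, data: bytes, received: bytes) -> bool:
    """Recompute the tag and compare in constant time, so any
    modification of the data in transit is detected."""
    return hmac.compare_digest(tag(key, data), received)

key = b"pre-shared-secret"          # hypothetical key shared out of band
payload = b"customer record #42"    # hypothetical payload
t = tag(key, payload)

accepted = verify(key, payload, t)          # untouched payload passes
rejected = verify(key, payload + b"!", t)   # tampered payload fails
```

The constant-time comparison (`hmac.compare_digest`) matters: a naive `==` on tags can leak timing information to an attacker.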
Strong authentication is a mandatory requirement for any
cloud deployment [81][82]. User authentication is the
primary basis for access control, and in the cloud
environment especially, authentication and access control
are more important than ever because the cloud and all of
its data are publicly accessible. CloudID [64][80] provides
a privacy-preserving, cloud-based, and cross-enterprise
biometric identification solution for this problem [81]. It
links the confidential information of the users to their
biometrics and stores it in an encrypted fashion [64][82].
Making use of a searchable encryption technique, biometric
identification is performed in the encrypted domain to
ensure that the cloud provider and potential attackers do
not gain access to any sensitive data or even the contents of
the individual queries [64][80].
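The idea of matching in the encrypted domain can be illustrated, in a drastically simplified form, with deterministic keyed tokens. This is a toy sketch of our own, not CloudID's actual scheme (which is considerably more elaborate); the key, keywords, and record name are all hypothetical:

```python
import hashlib
import hmac

def keyword_token(key: bytes, keyword: str) -> str:
    """Deterministic keyed token: equal keywords yield equal tokens,
    so the server can match queries without learning the keyword."""
    return hmac.new(key, keyword.encode("utf-8"),
                    hashlib.sha256).hexdigest()

client_key = b"client-side secret"  # hypothetical key, never uploaded

# The client tokenizes index terms before upload; the provider
# stores only opaque tokens, never the plaintext terms.
encrypted_index = {
    keyword_token(client_key, kw): "record-007"
    for kw in ("alice", "iris-template")
}

# At query time the client sends a token instead of the term itself.
hit = encrypted_index.get(keyword_token(client_key, "alice"))
miss = encrypted_index.get(keyword_token(client_key, "bob"))
```

Deterministic tokens leak repetition patterns, which is why production searchable-encryption schemes add randomized trapdoors on top of this basic idea.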
F. Cloud Computing Compliance
To comply with regulations such as FISMA, HIPAA, and
SOX in the United States, the Data Protection Directive in
the EU, and the credit card industry's PCI DSS, users may
have to adopt community or hybrid deployment models
that are more expensive and may offer restricted benefits
[30]. However, Google is able to "manage and meet
additional government policy requirements beyond
FISMA," and Rackspace Cloud or QubeSpace can claim
PCI compliance.
Many providers also obtain a SAS 70 Type II audit.
However, this has been criticized because the selected set
of goals and standards determined by the auditor and the
auditee are often not disclosed and can vary widely.
Providers typically make this information available on
request, under a non-disclosure agreement.
Customers in the EU contracting with cloud providers
outside the EU/EEA have to adhere to the EU rules on the
export of personal data.
A multitude of laws and regulations have imposed specific
compliance requirements on many companies that collect,
generate, or store data. These policies may dictate a wide
array of data-storage requirements, such as how long
information must be retained, the process used for deleting
data, and even certain recovery plans. The U.S. Health
Insurance Portability and Accountability Act
(HIPAA) requires a contingency plan that includes data
backups, data recovery, and data access during
emergencies.
The privacy laws of Switzerland demand that private data,
including emails, be physically stored in Switzerland.
In the UK, the Civil Contingencies Act of 2004 sets forth
guidance for a business contingency plan that includes
policies for data storage. In a virtualized cloud computing
environment, customers may never know exactly where
their data is being stored. In fact, data may be stored across
multiple data centers to improve reliability, increase
performance, and provide redundancies. This geographic
dispersion may make it more difficult to ascertain legal
jurisdiction if disputes arise.
VI. CONCLUSION AND FUTURE WORK
In this paper, we sought to show how artificially intelligent
agents could help reduce energy wastage at cloud data
centers and thus contribute to alleviating the enormous
energy problem that all big data centers face in today's
world. We examined the different ways in which machine
learning and artificial intelligence mechanisms and
methodologies can be applied to energy efficiency in a
cloud data center. In future work, we hope to investigate
deep learning methods for reducing power consumption in
data centers much further.
REFERENCES
[1] Farahnakian, F., Liljeberg, P., & Plosila, J. (2014,
February). Energy-efficient virtual machines
consolidation in cloud data centers using
reinforcement learning. In Parallel, Distributed
and Network-Based Processing (PDP), 2014
22nd Euromicro International Conference on
(pp. 500-507). IEEE.
[2] H. Allcott and M. Greenstone, “Is There an
Energy Efficiency Gap?,” in Energy Efficiency,
2013, pp. 133–161.
[3] S. Backlund, P. Thollander, J. Palm, and M.
Ottosson, “Extending the energy efficiency gap,”
Energy Policy, vol. 51, pp. 392–396, 2012.
[4] M. G. Patterson, “What is energy efficiency?
Concepts, indicators, and methodological issues,”
Energy Policy, vol. 24, no. 5, pp. 377–390, 1996.
[5] P. Linares and X. Labandeira, “Energy
efficiency: Economics and policy,” J. Econ.
Surv., vol. 24, no. 3, pp. 573–592, 2010.
[6] Iea, Worldwide Trends in Energy Use and
Efficiency. 2008.
[7] DECC, “The Energy Efficiency Strategy: The
Energy Efficiency Opportunity in the UK,” Dep.
Energy Clim. Chang., no. November, 30 pp.,
2012.
[10] A. A. B. Lovins, “Energy efficiency, taxonomic
overview,” Encycl. Energy, vol. 401, no.
September, pp. 383–401, 2004.
[11] L. Pérez-Lombard, J. Ortiz, and D. Velázquez,
“Revisiting energy efficiency fundamentals,”
Energy Efficiency, vol. 6, no. 2. pp. 239–254,
2013.
[12] V. Oikonomou, F. Becchis, L. Steg, and D.
Russolillo, “Energy saving and energy efficiency
concepts for policy making,” Energy Policy, vol.
37, no. 11, pp. 4787–4796, 2009.
[13] CIBSE GUIDE F, “CIBSE Guide F: Energy
efficiency in buildings,” Energy Effic. Build.
Chart. Inst. Build. Serv. Eng. London, 2nd Ed., p.
204, 2004.
[14] A. B. Jaffe and R. N. Stavins, “The energy-
efficiency gap What does it mean?,” Energy
Policy, vol. 22, no. 10, pp. 804–810, 1994.
[15] L. Pérez-Lombard, J. Ortiz, I. R. Maestre, and J.
F. Coronel, “Constructing HVAC energy
efficiency indicators,” Energy Build., vol. 47, pp.
619–629, 2012.
[16] M. Croucher, “Potential problems and limitations
of energy conservation and energy efficiency,”
Energy Policy, vol. 39, no. 10, pp. 5795–5799,
2011.
[17] M. Ryghaug and K. H. Sørensen, “How energy
efficiency fails in the building industry,” Energy
Policy, vol. 37, no. 3, pp. 984–991, 2009.
[18] A. B. Lovins, “Energy End-Use Efficiency,”
Most, no. September, pp. 1–25, 2005.
[19] H. Herring, “Energy efficiency - A critical view,”
Energy, vol. 31, no. 1 SPEC. ISS. pp. 10–20,
2006.
[20] J. M. Cullen and J. M. Allwood, “Theoretical
efficiency limits for energy conversion devices,”
Energy, vol. 35, no. 5, pp. 2059–2069, 2010.
[21] K. Gillingham, R. Newell, and K. Palmer,
“Energy Efficiency Policies: A Retrospective
Examination,” Annu. Rev. Environ. Resources.,
vol. 31, no. 1, pp. 161–192, 2006.
[22] A. B. Jaffe, R. G. Newell, and R. N. Stavins,
“Economics of energy efficiency,” Encycl.
Energy, vol. 2, pp. 79–90, 2004.
[23] M. G. Patterson, “What is energy efficiency?,”
Energy Policy, vol. 24, no. 5, pp. 377–390, 1996.
[24] B. W. Ang, “Monitoring changes in economy-
wide energy efficiency: From energy-GDP ratio
to composite efficiency index,” Energy Policy,
vol. 34, no. 5, pp. 574–582, 2006.
[25] H. C. Granade, J. Creyts, A. Derkach, P. Farese,
S. Nyquist, and K. Ostrowski, “Unlocking
Energy Efficiency in the U.S. Economy,”
McKinsey Glob. Energy Mater., pp. 1–165, 2009.
[26] I. G. Hamilton, P. J. Steadman, H. Bruhns, A. J.
Summerfield, and R. Lowe, “Energy efficiency in
the British housing stock: Energy demand and the
Homes Energy Efficiency Database,” Energy
Policy, vol. 60, pp. 462–480, 2013.
[27] J. Palm and P. Thollander, “An interdisciplinary
perspective on industrial energy efficiency,”
Appl. Energy, vol. 87, no. 10, pp. 3255–3261,
2010.
[28] Energy Efficiency Models Implemented in a
Cloud Computing http://worldcomp-
proceedings.com/proc/p2013/GCA3885.pdf
(accessed March 27, 2017).
[29] Google Warns That NSA Is Breaking Internet -
Hit & Run ..,
http://reason.com/blog/2014/10/10/google-warns-
that-nsa-is-breaking-intern (accessed March 27,
2017).
[30] Energy Efficiency Models Implemented in a
Cloud Computing .., http://worldcomp-
proceedings.com/proc/p2013/GCA3885.pdf
(accessed March 27, 2017).
[31] ADVANCED M/E PUE & DCIE Assessments,
http://www.amepservices.com/downloads/AMEP
PUE-DCIEAssessmentServices.pdf (accessed
March 27, 2017).
[32] Cloud computing issues - Wikipedia,
https://en.wikipedia.org/wiki/Cloud_computing_i
ssues (accessed March 27, 2017).
[33] https://www.coursehero.com/file/p55ajq1/98-
Intel-General-Manager-of-high-density (accessed
March 27, 2017).
[34] 98 Intel General Manager of high-density
computing Jason ..,
https://www.coursehero.com/file/p55ajq1/98-
Intel-General-Manager-of-high-density (accessed
March 27, 2017).
[35] Google Data Centers, “Efficiency: How we do
it,” retrieved from
https://www.google.com/about/datacenters/e
fficiency/internal/#content on 2017/03/09.
[36] “Computing's Energy Problem,”
www.futurearchs.org/sites/default/files/horowitz-
ComputingEnergyISSCC.pdf; supporting
evidence at http://cpudb.stanford.edu/.
[37] G. A. Qureshi, R. Weber, H. Balakrishnan, J.
Guttag, and B. Maggs,“ Cutting the electric bill
for Internet-scale systems,” Proceedings of the
ACM SIGCOMM 2009 conference on Data
communication, pp.123-134, 2009.
[38] J. Ll. Berral, I. Goiri1, R. Nou, F. Julia, J.
Guitart, R. Gavaldà and J.Torres; “Towards
energy-aware scheduling in data centers using
machine learning”, Proceedings of the 1st
International Conference on Energy-Efficient
Computing and Networking, pp. 215-224, 2010.
[39] A. Beloglazov, J.Abawajy, R.Buyya,” Energy-
aware resource allocation heuristics for efficient
management of data centers for cloud
computing,” Journal of Future Generation
Computer Systems, vol.28, pp.755-768, 2012.
[40] G. Dhiman and T. S. Rosing, “ System-level
power management using online learning”,
Proceedings of the Computer-Aided Design of
Integrated Circuits and Systems (CADICS), pp.
676–689, 2009.
[41] R. S. Sutton and A. G. Barto, “Reinforcement
Learning: An Introduction,” MIT Press, 1998.
[42] J. Rao, X. Bu, C.-Z. Xu, L. Wang, and G. Yin. “
Vconf: a reinforcement learning approach to
virtual machine auto configuration,” Proceedings
of the 6th International Conference on
Autonomic Computing ( ICAC), pp. 137-146,
2009.
[43] G. Tesauro, N. K. Jong, R. Das, and M. N.
Bennani, “ A hybrid reinforcement learning
approach to autonomic resource allocation,”
Proceedings of the IEEE International
Conference on Autonomic Computing ( ICAC),
pp. 65–73, 2006.
[44] Y. Tan, W. Liu, and Q. Qiu, “Adaptive power
management using reinforcement learning,”
Proceedings of the International Conference on
Computer-Aided Design (ICCAD ’09), pp 461–
467, 2009.
[45] E. Ipek, O. Mutlu, J. F. Martinez, and R.
Caruana. “Self-optimizing memory controllers: A
reinforcement learning approach,” Proceedings of
the 35th Annual International Symposium on
Computer Architecture ( ISCA), pp.39-50, 2008.
[46] Manic, Milos, Kasun Amarasinghe, Juan J.
Rodriguez-Andina, and Craig Rieger. "Intelligent
Buildings of the Future: Cyberwar, Deep
Learning Powered, and Human Interacting."
IEEE Industrial Electronics Magazine 10, no. 4
(2016): 32-49.
[47] Alahakoon, D., & Yu, X. (2016). Smart
electricity meter data intelligence for future
energy systems: A survey. IEEE Transactions on
Industrial Informatics, 12(1), 425-436.
[48] M. Espinoza, C. Joye, R. Belmans, and B.
DeMoor,“Short-term load forecasting, profile
identification, and customer segmentation: A
methodology based on periodic time series,”
IEEE Trans. Power Syst., vol. 20, no. 3, pp.
1622–1630, Aug. 2005.
[49] K.-H. Kim, H.-S. Youn, and Y.-C. Kang, “Short-
term load forecasting for special days in
anomalous load conditions using neural networks
and fuzzy inference method,” IEEE Trans. Power
Syst., vol. 15, no. 2, pp. 559–565, May 2000.
[50] H. Pao, “Forecasting electricity market pricing
using artificial neural networks,” Energy
Convers. Manage., vol. 48, no. 3, pp. 907–912,
2007
[51] H. Liao and D. Niebur, “Load profile estimation
in electric transmission networks using
independent component analysis,” IEEE Trans.
Power Syst., vol. 18, no. 2, pp. 707–715, May
2003
[52] J. Nagi, K. S. Yap, S. K. Tiong, and S. K.
Ahmed, “Electrical power load forecasting using
hybrid self-organizing maps and support vector
machines,” in Proc. 2nd Int. Power Eng. Optim.
Conf. (PEOCO’08), Jun. 2008, pp. 51–56.
[53] D. Osborn and J. Smith, “The performance of
periodic autoregressive models in forecasting
seasonal U.K. consumption,” J. Bus. Econ.
Statist., vol. 7, pp. 117–127, 1989.
[52] G. Guthrie and S. Videbeck, “High-frequency
electricity spot price dynamics: An intra-day
markets approach,” New Zealand Inst. Study of
Competition Regul., Tech. Rep. 367760, 2002
[Online]. Available:
http://ssrn.com/abstract=367760, accessed
Jun. 29, 2017.
[54] Jaiswal, S., Gupta, A., & Kanojiya, S. K. (2016).
Optimization of Energy Consumption via
Artificial Intelligence: A Study. SAMRIDDHI: A
Journal of Physical Sciences, Engineering, and
Technology, 8(1).
[55] LeCun, Y., Bengio, Y., & Hinton, G. (2015).
Deep learning. Nature, 521(7553), 436-444.
[56] I. Arel, C. Rose, and T. Karnowski. , (2010)
“Deep machine learning a new frontier in
artificial intelligence.” IEEE Computational
Intelligence Magazine, 5:13–18
[57] Y. Bengio.( 2009) Learning deep architectures
for AI. in Foundations and trends in Machine
Learning, 2(1):1–127.
[58] Y. Bengio, A. Courville, and P. Vincent. (2013)
Representation learning: A review and new
perspectives. IEEE Transactions on Pattern
Analysis and Machine Intelligence (PAMI),
38:1798–1828.
[59] L. Deng. (2011) An overview of deep-structured
learning for information processing. In
Proceedings of Asian-Pacific Signal &
Information Processing Annual Summit and
Conference (APSIPA-ASC).
[60] L. Deng, J. Li, K. Huang, Yao, D. Yu, F. Seide,
M. Seltzer, G. Zweig, X. He, J. Williams, Y.
Gong, and A. Acero.(2013a) Recent advances in
deep learning for speech research at Microsoft. In
Proceedings of International Conference on
Acoustics Speech and Signal Processing (ICASS).
[61] Deng, L., & Yu, D. (2014). Deep learning:
methods and applications. Foundations and
Trends® in Signal Processing, 7(3–4), 197-387.
[62] G. Hinton, S. Osindero, and Y. Teh. (2006) A
fast learning algorithm for deep belief nets.
Neural Computation, 18:1527–1554.
[63] "FedRAMP". U.S. General Services
Administration. 2012-06-13. Retrieved 2017-07-
03
[64] "Cloud computing issues". Wikipedia, the free
encyclopedia. https://en.wikipedia.org/wiki?curid
=43204134#cite_note-25. Retrieved 2017-07-03.
[65] "FISMA compliance for federal cloud computing
on the horizon in 2010". SearchCompliance.com.
Retrieved 2010-08-22.
[66] "Google Apps and Government". Official Google
Enterprise Blog. 2009-09-15. Retrieved 2010-08-
22.
[67] "Cloud Hosting is Secure for Take-off: Mosso
Enables The Spreadsheet Store, an Online
Merchant, to become PCI Compliant". Rackspace.
2009-03-14. Retrieved 2017-07-04.
[68] Cohn, Cindy; Samuels, Julie (31 October 2012).
"Megaupload and the Government's Attack on
Cloud Computing". Electronic Frontier
Foundation. Retrieved 2017-07-04.
[69] Maltais, Michelle (26 April 2012). "Who owns
your stuff in the cloud". Los Angeles Times.
Retrieved 2017-07-04.
[70] Chambers, Don (July 2010). "Windows Azure:
Using Windows Azure’s Service Bus to Solve
Data Security Issues" (PDF). Rebus Technologies.
Retrieved 2017-07-04.
[71] McKendrick, Joe. (2011-11-20) "Cloud
Computing's Vendor Lock-In Problem: Why the
Industry is Taking a Step Backward," Forbes.com
[72] Hinkle, Mark. (2010-06-09) "Three cloud lock-in
considerations". Zenoss Blog.
[73] Staten, James (2012-07-23). "Gelsinger brings
the 'H' word to VMware." ZDNet.
[74] Vada, Eirik T. (2012-06-11) "Creating Flexible
Heterogeneous Cloud Environments," page 5,
Network and System Administration, Oslo
University College.
[75] Gannes, Liz. GigaOm, "Structure 2010: Intel vs.
the Homogeneous Cloud," June 24, 2010
[76] "VMware Launches Open Source PaaS Cloud
Foundry". 2011-04-21. Retrieved 2017-07-04.
[77] Jon Brodkin (July 28, 2008). "Open source fuels
growth of cloud computing, software-as-a-
service". Network World. Retrieved 2017-07-04.
[78] "AGPL: Open Source Licensing in a Networked
Age". Redmonk.com. 2009-04-15. Retrieved
2017-07-04.
[79] "Did OpenStack Let VMware Into The
Henhouse?". InformationWeek. 2012-10-19.
Retrieved 2017-07-04.
[80] Haghighat, M., Zonouz, S., & Abdel-Mottaleb, M.
(2015). CloudID: Trustworthy Cloud-based and
Cross-Enterprise Biometric Identification. Expert
Systems with Applications, 42(21), 7905–7916.
[81] Ryan, Mark (January 2011). "Cloud Computing
Privacy Concerns on Our Doorstep". Communications
of the ACM.
[82] Sen, Jaydip (2013). "Security and Privacy Issues
in Cloud Computing". In Ruiz-Martinez,
Pereniguez-Garcia, and Marin-Lopez (Eds.),
Architectures and Protocols for Secure
Information Technology. USA: IGI-Global.
arXiv:1303.4814.