Abstract - Population growth has driven heavy use of electronic devices such as computers, laptops, and household equipment. Through green computing we can save power and protect the world from pollution and its unpleasant side effects. Green computing is the study and practice of designing, manufacturing, using, and disposing of computers, servers, and associated subsystems, such as printers and monitors. The goals of green computing are similar to those of green chemistry: reduce the use of hazardous materials, maximize energy efficiency during a product's lifetime, and promote the reusability of defunct products and factory waste. Green computing can also deliver solutions that offer benefits by "aligning all IT processes and practices with the core principles of sustainability, which are to reduce, reuse, and recycle, and finding innovative ways to use IT in business products to deliver sustainability benefits across the enterprise and beyond". In this thesis, the green maturity model for virtualization and its levels are explained. Green computing is especially important and timely: as computing becomes increasingly pervasive, the energy consumption attributable to computing keeps climbing, despite the clarion call to reduce consumption and reverse greenhouse effects. At the same time, the rising cost of energy, due to regulatory measures enforcing a "true cost" of energy coupled with scarcity as finite natural resources are rapidly diminished, is refocusing IT leaders on efficiency and total cost of ownership, particularly in the context of the worldwide financial crisis. Five steps for green computing for energy conservation are presented.
Energy Saving by Virtual Machine Migration in Green Cloud Computing (ijtsrd)
Nowadays innovation has become so rapid and advanced that almost all big enterprises have to move to the cloud. The cloud provides a wide range of services, from high-performance computing to storage. The datacenter, consisting of servers, networks, cabling, cooling systems, and so on, is a very important part of the cloud, as it carries business information on its servers. Cloud computing is widely used for large data centers, but it causes serious environmental issues such as heat emission, heavy energy consumption, and the release of greenhouse gases like methane, nitrous oxide, and carbon dioxide. High energy consumption leads to high operational cost as well as low profit. We therefore require green cloud computing, an environment-friendly and energy-efficient version of cloud computing. In this paper the major issues related to cloud computing are discussed, along with the various techniques used to minimize power consumption. Ruhi D. Viroja | Dharmendra H. Viroja, "Energy Saving by Virtual Machine Migration in Green Cloud Computing", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-1, Issue-1, December 2016. URL: http://www.ijtsrd.com/papers/ijtsrd104.pdf http://www.ijtsrd.com/engineering/computer-engineering/104/energy-saving-by-virtual-machine-migration-in-green-cloud-computing/ruhi-d-viroja
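To make the migration idea concrete, here is a minimal sketch (an illustration, not the authors' algorithm) of energy-aware consolidation: virtual machines are packed onto as few hosts as possible with first-fit decreasing, so the hosts left empty can be powered down. All VM names and capacity figures are hypothetical.

```python
# Hypothetical consolidation sketch: pack VMs onto few hosts so that
# idle hosts can be switched off. Demands are fractions of one host.

def consolidate(vms, host_capacity=1.0):
    """First-fit decreasing: place each VM on the first host with room."""
    hosts = []  # each host is a list of (name, demand) pairs
    for name, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        for host in hosts:
            if sum(d for _, d in host) + demand <= host_capacity:
                host.append((name, demand))
                break
        else:
            hosts.append([(name, demand)])   # no room anywhere: power on a host
    return hosts

vms = {"vm1": 0.30, "vm2": 0.45, "vm3": 0.20, "vm4": 0.25, "vm5": 0.15}
hosts = consolidate(vms)
print(f"{len(hosts)} active hosts instead of {len(vms)}")  # 2 instead of 5
```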
A Survey on Virtualization Data Centers for Green Cloud Computing (IJTET Journal)
Abstract — Due to trends like cloud computing and green cloud computing, virtualization technologies are gaining importance. The cloud is a novel model for computing resources, which moves the computing framework onto the network in order to cut the costs of software and hardware resources. Nowadays, power is one of the big issues for Internet Data Centers (IDCs) and has a huge impact on society. Researchers are seeking solutions that make IDCs reduce their power consumption. IDCs consume large amounts of energy to deliver cloud services, incur high operational costs, and shorten the lifespan of hardware equipment. The field of green computing is also becoming more and more important in a world with finite energy resources and rising demand. The virtual machine (VM) mechanism has been broadly applied in data centers because of its flexibility, reliability, and manageability. This survey covers virtualized IDCs in the green cloud, including the key features of the green cloud, cloud computing, data centers, virtualization, data centers with virtualization, and power-aware, thermal-aware, network-aware, resource-aware, and migration techniques. The several methods used to achieve virtualization in IDCs in green cloud computing are discussed.
An Efficient MDC-based Set Partitioned Embedded Block Image Coding (Dr. Amarjeet Singh)
In this paper, fast, efficient, simple, and widely used Set Partitioned Embedded bloCK (SPECK) coding is applied to multiple descriptions of a transformed image. The full potential of this type of coding can be exploited with the discrete wavelet transform (DWT) of images. Two correlated descriptions are generated from a wavelet-transformed image to ensure meaningful transmission of the image over noise-prone wireless channels. These correlated descriptions are encoded by the set-partitioning technique through SPECK coders and transmitted over wireless channels. The quality of the reconstructed image at the decoder depends on the number of descriptions received: the more descriptions received at the output side, the better the quality of the reconstructed image. However, if any of the multiple descriptions is lost, the receiver can estimate it by exploiting the correlation between the descriptions. Simulations performed on an image in MATLAB give decent performance and results even after half of the descriptions are lost in transmission.
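The SPECK coder itself is intricate, but the core multiple-description idea is simple to illustrate. The sketch below (an illustration, not the paper's coder) forms two correlated descriptions by splitting an image into even and odd rows; when one description is lost, the receiver estimates it from the other by exploiting inter-row correlation. The image is a synthetic ramp.

```python
# Hedged sketch of the multiple-description idea: split an image into
# two correlated descriptions (even/odd rows); if one is lost, estimate
# it from the other by averaging neighbouring rows.
import numpy as np

img = np.arange(64, dtype=float).reshape(8, 8)  # stand-in for a real image

d0, d1 = img[0::2, :], img[1::2, :]             # two descriptions

# Receiver got only d0: estimate the odd rows as the average of the
# even rows above and below (repeating the last row at the border).
est = np.empty_like(img)
est[0::2, :] = d0
up = d0
down = np.vstack([d0[1:, :], d0[-1:, :]])
est[1::2, :] = (up + down) / 2.0

print("max abs error with one description lost:", np.abs(est - img).max())
```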
The swiftly increasing demand for computation in business processes, file transfer under certain protocols, and data centers has forced the development of an emerging technology that caters to computational needs with highly manageable and secure storage. To fulfill these technological desires, cloud computing is the best answer, introducing various sorts of service platforms in a high-computation environment. Cloud computing is the most recent paradigm promising to turn the vision of "computing utilities" into reality. The term "cloud computing" is relatively new, and there is no universal agreement on its definition. In this paper, we go through different areas of research expertise and novelty in the cloud computing domain and its usefulness in the genre of management. Even though cloud computing provides many distinguished features, it still has certain shortcomings, along with comparatively high cost for both private and public clouds. It is a way of congregating masses of information and resources stored in personal computers and other gadgets and putting them on the public cloud to serve users. Cloud computing is turning out to be one of the most explosively expanding technologies in the computing industry in this era. It allows users to transfer their data and computation to a remote location with minimal impact on system performance. With the evolution of virtualization technology, cloud computing has emerged to be distributed systematically and strategically on a full basis. The idea of cloud computing has not only reshaped the field of distributed systems but also fundamentally changed how businesses utilize computing today. Resource management in cloud computing is a hard problem, due to the scale of modern data centers, the variety of resource types and their interdependencies, the unpredictability of load, and the range of objectives of the different actors in a cloud ecosystem.
Enterprise-level Green ICT: Using Virtualization to Balance Energy Economics (IJARIDEA Journal)
Abstract — The computing industry has been a significant contributor to global warming ever since its inception. Performance maximization per unit cost has remained the prime focus of academic and industrial research alike, ignoring environmental impacts in the process. However, the global energy crisis has inevitably pushed power and energy management up the priority list of computing design and management activities today, for purely economic reasons. Green IT lays emphasis on including the dimensions of environmental sustainability, the offsets of energy efficiency, and the total cost of disposal and recycling. A green computing initiative must be adaptive and flexible enough to address problems that keep increasing in size and complexity with time. Cloud computing concepts can invariably be applied to reduce e-waste generation. The service-oriented architecture lends itself to incorporating green computing as a process rather than a product. Reusability, extensibility, and flexibility are some of the key characteristics inherent to the cloud that directly help address the vertical-specific challenges of reducing energy consumption in the long run.
Keywords — Cloud computing, Electronic waste, Green Information Technology, Service-oriented architecture.
Cloud computing is Internet-based development and use of computer technology. It is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. Users need not have knowledge of, expertise in, or control over the technology infrastructure "in the cloud" that supports them. Cloud computing is a hot topic all over the world nowadays; through it, customers can access information and computing power via a web browser. As the adoption and deployment of cloud computing increase, it is critical to evaluate the performance of cloud environments. Modeling and simulation have therefore become useful and powerful tools in the cloud computing research community, and cloud simulators are required for cloud system testing to decrease complexity and separate quality concerns. Cloud computing means saving and accessing data over the Internet instead of on local storage. In this paper, we provide a short review of the types, models, and architecture of the cloud environment.
Energy Saving by Migrating Virtual Machine to Green Cloud Computing (ijtsrd)
Green computing is characterized as the study and practice of designing, manufacturing, using, and disposing of PCs, servers, and related subsystems, for example screens, printers, storage devices, and networking and communications equipment, efficiently and effectively with minimal or no impact on the environment. The objective of green computing is to reduce the use of hazardous materials, maximize energy efficiency during a product's lifetime, and promote the recyclability of obsolete products and factory waste. Green computing can be accomplished through product longevity, resource allocation, virtualization, or power management. Power is the bottleneck in improving system performance. Among all industries, the information and communication technology (ICT) industry is arguably responsible for a larger share of the overall growth in energy consumption. The objective of green cloud computing is to promote the recyclability or biodegradability of outdated products and factory waste by reducing the use of hazardous materials and maximizing energy efficiency during the product's lifetime. Stephen Fernandes, "Energy Saving by Migrating Virtual Machine to Green Cloud Computing", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4, Issue-3, April 2020. URL: https://www.ijtsrd.com/papers/ijtsrd30422.pdf Paper URL: https://www.ijtsrd.com/computer-science/distributed-computing/30422/energy-saving-by-migrating-virtual-machine-to-green-cloud-computing/stephen-fernandes
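The claim that virtualization and power management save energy can be made concrete with the widely used linear server power model P(u) = P_idle + (P_max - P_idle) * u, where u is CPU utilization. The wattages below are assumptions for illustration, not figures from the paper.

```python
# Hedged sketch: why consolidation saves energy under the common linear
# server power model. The wattages are illustrative assumptions.

P_IDLE, P_MAX = 100.0, 250.0   # watts (assumed)

def power(utilization):
    return P_IDLE + (P_MAX - P_IDLE) * utilization

# Four hosts at 20% load vs. one host at 80% load (same total work):
spread = 4 * power(0.20)
packed = 1 * power(0.80) + 3 * 0.0   # the three empty hosts are switched off
print(f"spread out: {spread:.0f} W, consolidated: {packed:.0f} W")
# spread out: 520 W, consolidated: 220 W -> idle power dominates
```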
International Journal of Engineering Research and Development (IJERD) (IJERD Editor)
Cloud computing is receiving an increasing level of attention, as evidenced by the rapidly growing number of qualitative surveys and analyses published over the past few years.
Cloud computing is a paradigm shift in how organizations use computing resources to conduct their business. It is a new general-purpose Internet-based technology through which information is stored in servers and provided as a service, on demand, to clients. Computing resources are accessed by mainstream businesses as a pooled or leased resource over networks. Hence, traditional IT investment decision models are not directly suitable for cost-benefit and investment decisions about cloud computing resources.
This paper presents research on return-on-investment and pricing models and seeks to build a model for the quantitative assessment of cloud computing.
The results of this analysis model are intended to facilitate more informed decision making about cloud computing resources.
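As a toy illustration of the kind of cost comparison such a model supports, the sketch below compares on-premises and pay-per-use costs over several years. Every figure is an assumption; a real model would add discounting, utilization, and elasticity.

```python
# Toy cost-benefit sketch in the spirit of the model described above;
# all figures are assumptions for illustration only.

def on_prem_cost(years, capex=50_000.0, opex_per_year=8_000.0):
    """Upfront hardware purchase plus yearly running cost."""
    return capex + opex_per_year * years

def cloud_cost(years, hours_per_year=8_760, rate_per_hour=0.90):
    """Pure pay-per-use: hourly rate for an always-on instance."""
    return hours_per_year * rate_per_hour * years

for years in (1, 3, 5):
    print(f"{years} yr: on-prem {on_prem_cost(years):>9,.0f}  "
          f"cloud {cloud_cost(years):>9,.0f}")
# The break-even point depends entirely on these inputs, which is
# exactly why a quantitative assessment model is needed.
```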
Cloud infrastructure addresses two critical elements of a green IT approach (angeldresses)
Cloud infrastructure addresses two critical elements of a green IT approach: energy efficiency and resource efficiency. Whether done in a private or public cloud configuration, as-a-service computing will be greener for (at least) the following four reasons.

1. Resource virtualization, enabling energy and resource efficiencies. Virtualization is a foundational technology for deploying cloud-based infrastructure that allows a single physical server to run multiple operating system images concurrently. As an enabler of consolidation, server virtualization reduces the total physical server footprint, which has inherent green benefits. From a resource-efficiency perspective, less equipment is needed to run workloads, which proactively reduces data center space and the eventual e-waste footprint. From an energy-efficiency perspective, with less physical equipment plugged in, a data center will consume less electricity. It's worth noting that server virtualization was the most widely adopted green IT project implemented or planned, at 90 percent of IT organizations globally into 2011.

2. Automation software, maximizing consolidation and utilization to drive efficiencies. The presence of virtualization alone doesn't maximize energy and resource efficiencies. To rapidly provision, move, and scale workloads, cloud-based infrastructure relies on automation software. Combined with the right skills and operational and architectural standards, automation allows IT professionals to make the most of their cloud-based infrastructure investment by pushing the limits of traditional consolidation and utilization ratios. The higher these ratios are, the less physical infrastructure is needed, which in turn maximizes the energy and resource efficiencies from server virtualization.

3. Pay-per-use and self-service, encouraging more efficient behavior and life-cycle management. The pay-as-you-go nature of cloud-based infrastructure encourages users to consume only what they need and nothing more. Combined with self-service, life-cycle management will improve, since users can consume infrastructure resources only when they need them and "turn off" these resources with set expiration times. In concert, the pay-per-use and self-service capabilities of cloud-based infrastructure drive energy and resource efficiencies simultaneously, since users consume computing resources only when they need them.

4. Multitenancy, delivering efficiencies of scale to benefit many organizations or business units. Multitenancy allows many different organizations (public cloud) or many different business units within the same organization (private cloud) to benefit from a common cloud-based infrastructure. By combining demand patterns across many organizations and business units, the peaks and troughs of compute requirements flatten out. Combined with automation, the ratio between peak and average loads becomes smaller, which in turn reduces the need for extra infrastructure.
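The multitenancy point can be checked with a small simulation: summing several independent tenants' synthetic demand curves yields a lower peak-to-average ratio than any single tenant sees alone, which is exactly the flattening described above. The workload distribution is an assumption.

```python
# Hedged simulation of statistical multiplexing across tenants:
# combined demand has a smaller peak-to-average ratio than individual
# tenants' demand. Workloads are synthetic gamma-distributed series.
import numpy as np

rng = np.random.default_rng(0)
tenants = rng.gamma(shape=2.0, scale=1.0, size=(10, 1_000))  # 10 tenants

par_single = (tenants.max(axis=1) / tenants.mean(axis=1)).mean()
combined = tenants.sum(axis=0)
par_combined = combined.max() / combined.mean()

print(f"average per-tenant peak/average: {par_single:.2f}")
print(f"combined peak/average:           {par_combined:.2f}")  # smaller
```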
Efficient Architectural Framework of Cloud Computing (Souvik Pal)
Cloud computing is a model that enables adaptive, convenient, on-demand network access to a shared pool of adjustable and configurable computing resources, such as networks, servers, bandwidth, and storage, that can be swiftly provisioned and released with negligible management effort or service-provider interaction. From a business perspective, the viable achievements of cloud computing and recent developments in grid computing have produced the platform that has carried virtualization technology into the era of high-performance computing. However, clouds are an Internet-based concept and try to hide complexity from end users. Cloud service providers (CSPs) use many structural designs combined with self-service capabilities and ready-to-use facilities for computing resources, enabled through network infrastructure, especially the Internet, which is an important consideration. This paper provides an efficient architectural framework for cloud computing that may lead to better performance and faster access.
Cloud computing has become the mainstream of the emerging technologies for information interchange and accessibility. With such systems, information can be accessed from any geographic location on the planet with a decent internet connection. Applying machine learning together with artificial intelligence to the problem of energy reduction in cloud data centers is an innovative idea, and artificial intelligence is playing a significant role in the cloud environment. Big providers like Amazon have taken steps to ensure that they can continue to expand their fast-growing cloud services in step with the fast-growing population of users, building large data centers in remote parts of the world. These centers consume significant amounts of electrical energy, and there is often a lot of wastage: according to an IDC white paper, data centers have wasted billions of dollars' worth of energy, and researchers have argued that by the year 2020 the energy consumption rate would have doubled. Research in this area is still a hot topic. This paper seeks to address the energy-efficiency issue in a cloud data center using machine learning methodologies, principles, and practices. It also aims to bring out possible future implementation methods for artificially intelligent agents that would help reduce energy wastage at a cloud data center and thus help ameliorate the great energy problem at hand.
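As one very simple instance of the machine learning idea (a sketch, not the paper's method), the snippet below fits a least-squares trend to recent cluster utilization, forecasts the next period, and decides how many hosts can sleep. The data, headroom factor, and host count are assumptions.

```python
# Minimal sketch of ML-driven energy saving: forecast next-period
# utilization with a least-squares trend, then size the awake host
# pool accordingly. All numbers are illustrative assumptions.
import numpy as np

history = np.array([0.62, 0.58, 0.55, 0.50, 0.47, 0.45])  # cluster load
t = np.arange(len(history))
slope, intercept = np.polyfit(t, history, 1)   # linear trend fit
forecast = slope * len(history) + intercept    # one step ahead

TOTAL_HOSTS, HEADROOM = 20, 1.25               # assumed fleet and margin
needed = int(np.ceil(forecast * TOTAL_HOSTS * HEADROOM))
print(f"forecast load {forecast:.2f} -> keep {needed} hosts awake, "
      f"sleep {TOTAL_HOSTS - needed}")
```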
Ant Colony Optimization: A Solution for Load Balancing in the Cloud (dannyijwest)
Cloud computing is a new style of computing over the Internet. It has many advantages, along with some crucial issues that must be resolved in order to improve the reliability of the cloud environment. These issues are related to load management, fault tolerance, and various security concerns. In this paper the main concern is load balancing in cloud computing. The load can be CPU load, memory capacity, delay, or network load. Load balancing is the process of distributing the load among various nodes of a distributed system to improve both resource utilization and job response time, while avoiding situations where some nodes are heavily loaded while others are idle or doing very little work. Load balancing ensures that every processor in the system, or every node in the network, does approximately the same amount of work at any instant of time. Many methods have come into existence to resolve this problem, such as Particle Swarm Optimization, hash methods, genetic algorithms, and several scheduling-based algorithms. In this paper we propose a method based on Ant Colony Optimization to resolve the problem of load balancing in the cloud environment.
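A compact sketch of the ant colony approach is given below. Ants assign tasks to nodes by sampling proportionally to pheromone divided by current load, pheromone evaporates each iteration, and more is deposited on assignments with a lower makespan. The parameters and the makespan objective are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged ACO sketch for load balancing: assign tasks to nodes so the
# busiest node (makespan) is minimized. Parameters are assumptions.
import random

random.seed(1)
TASKS = [4, 7, 2, 9, 3, 6, 5, 8]          # task lengths
NODES = 3                                  # identical nodes
ANTS, ITERS, RHO, Q = 10, 50, 0.1, 10.0    # colony size, rounds, evap, deposit

tau = [[1.0] * NODES for _ in TASKS]       # pheromone per (task, node)

def build_assignment():
    loads = [0.0] * NODES
    assign = []
    for t, length in enumerate(TASKS):
        # prefer nodes with high pheromone and low current load
        weights = [tau[t][n] / (1.0 + loads[n]) for n in range(NODES)]
        n = random.choices(range(NODES), weights=weights)[0]
        assign.append(n)
        loads[n] += length
    return assign, max(loads)              # makespan = busiest node

best, best_cost = None, float("inf")
for _ in range(ITERS):
    solutions = [build_assignment() for _ in range(ANTS)]
    for t in range(len(TASKS)):            # evaporation
        for n in range(NODES):
            tau[t][n] *= (1.0 - RHO)
    for assign, cost in solutions:         # deposit: better -> more
        for t, n in enumerate(assign):
            tau[t][n] += Q / cost
        if cost < best_cost:
            best, best_cost = assign, cost

print("best makespan:", best_cost, "assignment:", best)
```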
Methods to Maximize the Well-being and Vitality of Moribund Communities (praveena06)
Abstract - It has become a primary concern for governments to chart effective methods and policies to revitalize communities which are on the verge of extinction, most of which are indigenous. This has become more relevant and important in an era of liberalization, which often adversely affects the welfare of such communities. In this paper we make an effort to identify and qualify measures that would revitalize moribund communities and to quantify them using fuzzy analysis. We come out with concrete suggestions for governments and policy makers which can easily be put into action.
Identification of Differentially Expressed Genes by an Unsupervised Learning Method (praveena06)
Abstract - Microarrays are one of the latest breakthroughs in experimental molecular biology, allowing the monitoring of gene expression for tens of thousands of genes in parallel. Microarray analysis includes many stages: extracting samples from the cells, obtaining the gene expression matrix from the raw data, and data normalization, which are low-level analyses. Cluster analysis of genome-wide expression data from DNA microarrays is a high-level analysis that uses standard statistical algorithms to arrange genes according to similarity in patterns of expression levels. This paper presents a method for determining the number of clusters using divisive hierarchical clustering and k-means clustering of significant genes. The goal of this method is to identify genes that are strongly associated with disease among 12607 genes. Gene filtering is applied to identify the clusters. k-means shows that about four to seven genes, or less than one percent of the genes, account for the disease group (the outliers), while more than seventy percent fall into an undefined group. The hierarchical clustering dendrogram shows clusters at two levels, which again shows that less than one percent of the genes are differentially expressed.
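To illustrate the k-means step on expression data, the sketch below runs a small hand-rolled k-means on synthetic data in which a few "genes" are shifted strongly; the smallest resulting cluster tends to flag those outliers, mirroring how the disease-associated genes surface in the study. Sizes and dimensions are assumptions (the real matrix had 12607 genes).

```python
# Hedged k-means sketch on a synthetic expression matrix: a handful of
# strongly shifted genes should land in a small outlier cluster.
import numpy as np

rng = np.random.default_rng(42)
genes = rng.normal(0.0, 1.0, size=(500, 10))   # 500 genes x 10 samples
genes[:5] += 4.0                               # five planted outlier genes

def kmeans(X, k, iters=50):
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):            # keep empty centers as-is
                centers[j] = X[labels == j].mean(axis=0)
    return labels

labels = kmeans(genes, k=4)
sizes = np.bincount(labels, minlength=4)
print("cluster sizes:", sizes)
# The smallest cluster is the candidate outlier (disease-associated) group.
print("genes in smallest cluster:", np.where(labels == sizes.argmin())[0])
```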
Abstract - A feature-based "selection of feature region set for digital images using optimization algorithms" is proposed here. The work is based on a simulated-attacking and optimization-solving procedure. Image transformation techniques are used to extract local features. A simulated-attacking procedure is performed to evaluate the robustness of every candidate feature region. According to the evaluation results, a track-with-pruning procedure is adopted to search for a minimal primary feature set which can resist the most predefined attacks. In order to enhance its resistance against undefined attacks, the primary feature set is then extended by adding some auxiliary feature regions to it. This work is formulated as a multidimensional knapsack problem and solved by optimization algorithms such as Genetic Algorithms, Particle Swarm Optimization, and Simulated Annealing.
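Of the three solvers named, simulated annealing is the easiest to sketch. The toy below anneals a 0/1 knapsack, the one-dimensional special case of the paper's multidimensional formulation; the values, capacity, and cooling schedule are assumptions for illustration.

```python
# Hedged sketch: simulated annealing on a small 0/1 knapsack, the
# problem family the abstract names. Instance data is made up.
import math, random

random.seed(7)
values  = [10, 13, 7, 8, 12, 9]     # robustness score per feature region
weights = [ 5,  7, 3, 4,  6, 5]     # cost per region (e.g. capacity used)
CAPACITY = 15

def score(sol):
    w = sum(wi for wi, s in zip(weights, sol) if s)
    v = sum(vi for vi, s in zip(values, sol) if s)
    return v if w <= CAPACITY else -1          # penalize infeasibility

cur = [0] * len(values)
best, best_v = cur[:], 0
T = 10.0
for step in range(2000):
    cand = cur[:]
    cand[random.randrange(len(cand))] ^= 1     # flip one region in/out
    delta = score(cand) - score(cur)
    if delta >= 0 or random.random() < math.exp(delta / T):
        cur = cand                             # accept (maybe uphill)
        if score(cur) > best_v:
            best, best_v = cur[:], score(cur)
    T *= 0.999                                 # cool down

print("selected regions:", [i for i, s in enumerate(best) if s],
      "value:", best_v)
```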
Abstract - Data mining is a process of extracting knowledge from the huge amounts of data stored in databases, data warehouses, and data repositories. Crime is an interesting application where data mining plays an important role in prediction and analysis. Clustering is the process of combining data objects into groups such that the objects within a group are very similar to each other and very dissimilar to the objects of other groups. This paper presents a detailed study of clustering techniques and their role in crime applications. This study also helps crime branches with better prediction and classification of crimes.
Abstract - In wireless sensor networks, energy efficiency is a major concern, as the sensors have minimal energy capacity. Since sensor energy consumption plays a vital role in determining network lifetime, many strategies have been proposed for energy conservation. One of them is mobile data gathering (MDG). In mobile data gathering, either the sink goes on a tour to collect the data, or a mobile data collector (MDC) collects the data from the sensors and drops it back at the static sink. In this paper, a static sink with a mobile data collector is considered. The mobile data collector reaches the clusters of sensors at appropriate locations called anchor points. For the formation of clusters, many algorithms have been proposed; two of them, Square Grid based Clustering (SGC) and Sensor Transmission based Clustering (STC), are analysed here for the selection of anchors in the anchor-based mobile data gathering approach.
Abstract - A denial-of-service attack is a type of attack on a network that is designed to bring the network to its knees by flooding it with useless traffic. Many DoS attacks, such as the Ping of Death and Teardrop attacks, exploit limitations in the TCP/IP protocols. Like viruses, new DoS attacks are constantly being dreamed up by hackers, so users have to maintain their own protections, such as firewalls and up-to-date antivirus software. If a system or its links are affected by an attack, legitimate clients may not be able to connect to it. The detection system described here is the next level of security, protecting the server from major problems such as DoS attacks, IP flood attacks, and proxy surfers; such anonymous activities are barred using this concept.
Abstract - Software is ubiquitous in our daily life. It brings us great convenience, and a big headache about software reliability as well: software is never bug-free, and software bugs keep incurring monetary losses or even catastrophes. In the pursuit of better reliability, software engineering researchers found that huge amounts of data in various forms can be collected from software systems, and these data, when properly analyzed, can help improve software reliability. Unfortunately, the huge volume of complex data renders simple analysis techniques inadequate; consequently, studies have been resorting to data mining for more effective analysis. In the past few years, we have witnessed many studies on mining for software reliability reported in data mining as well as software engineering forums. These studies either develop new data mining techniques or apply existing ones to tackle reliability problems from different angles. In order to keep data mining researchers abreast of the latest developments in this growing research area, we propose this paper on data mining for software reliability. We present a comprehensive overview of the area, examine representative studies, and lay out challenges for data mining researchers.
Abstract - Let G = (V, E) be a simple, undirected, finite, nontrivial graph. A non-empty set D ⊆ V of vertices in a graph G is a dominating set if every vertex in V − D is adjacent to some vertex in D. The domination number γ(G) of G is the minimum cardinality of a dominating set. A dominating set D is called a non-split locating equitable dominating set if for any two vertices u, w ∈ V − D, N(u) ∩ D ≠ N(w) ∩ D, |N(u) ∩ D| = |N(w) ∩ D|, and the induced subgraph ⟨V − D⟩ is connected. The minimum cardinality of a non-split locating equitable dominating set is called the non-split locating equitable domination number of G, denoted by γ_nsle(G). In this paper, bounds for γ_nsle(G) and exact values for some particular classes of graphs are found.
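For small graphs the definition can be checked directly by brute force. The sketch below computes γ_nsle for a 5-cycle by testing every vertex subset against the dominating, locating, equitable, and non-split conditions; the example graph is our choice here, since the paper works analytically.

```python
# Hedged sketch: brute-force gamma_nsle on the cycle C5 (our example,
# not the paper's), checking each defining condition in turn.
from itertools import combinations

n = 5
adj = {u: {(u - 1) % n, (u + 1) % n} for u in range(n)}  # cycle C5

def connected(vertices):
    """Is the induced subgraph on `vertices` connected (and non-empty)?"""
    if not vertices:
        return False
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        stack.extend(adj[u] & vertices - seen)
    return seen == vertices

def is_nsle_dominating(D):
    rest = set(range(n)) - D
    if not all(adj[u] & D for u in rest):            # dominating
        return False
    codes = {u: frozenset(adj[u] & D) for u in rest}
    for u, w in combinations(rest, 2):
        if codes[u] == codes[w]:                     # locating
            return False
        if len(codes[u]) != len(codes[w]):           # equitable
            return False
    return connected(rest)                           # non-split

for k in range(1, n + 1):
    hits = [set(c) for c in combinations(range(n), k)
            if is_nsle_dominating(set(c))]
    if hits:
        print("gamma_nsle(C5) =", k, "witness:", hits[0])  # k = 3
        break
```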
Abstract - Machine learning, a branch of artificial intelligence, is a natural outgrowth of the intersection of computer science and statistics. Disease diagnosis is one of its well-known applications. This paper focuses on diagnosing diseases in human beings by analyzing the tongue using a decision tree classification algorithm. Data collection is performed by manual methods. The ultimate aim is to track some common diseases by analyzing basic tongue features.
Abstract - GasCan is a specialized and unique database of gastric cancer protein-encoding genes expressed in human and mouse. The features that make GasCan unique are the availability of gene information and of primers for each gene, with the features and conditions that are useful in PCR amplification, especially in cloning experiments. To make it more useful, a built-in programmed sequence-analysis facility is provided that analyzes gene sequences within the database itself; the resulting sequence-analysis information can be valuable for researchers in different experiments. Furthermore, a DNA sequence-analysis tool is provided that can be accessed freely. GasCan will expand in future to other species and genes and will cover more useful information on other species. A flexible database design, expandability, and easy access to information for all users are the main features of the database. The database is publicly available at http://www.gastric-cancer.site40.net.
Abstract - The current trend in the application space towards systems of loosely coupled and dynamically bound components that enable just-in-time integration jeopardizes the security of information shared between the broker, the requester, and the provider at runtime. In particular, new advances in data mining and knowledge discovery, which allow the extraction of hidden knowledge from enormous amounts of data, impose new threats on the seamless integration of information. We consider the problem of building privacy-preserving algorithms for one category of data mining techniques, association rule mining. Suppose Alice owns a k-anonymous database and needs to determine whether her database, when inserted with a tuple owned by Bob, is still k-anonymous. Also, suppose that access to the database is strictly controlled, because, for example, the data are used for experiments that must be kept confidential. Clearly, allowing Alice to directly read the contents of the tuple breaks the privacy of Bob (e.g., a patient's medical record); on the other hand, the confidentiality of the database managed by Alice is violated once Bob has access to the contents of the database. Thus, the problem is to check whether the database inserted with the tuple is still k-anonymous without letting Alice and Bob know the contents of the tuple and the database, respectively. In this paper, we propose two protocols solving this problem on suppression-based and generalization-based k-anonymous and confidential databases. The protocols rely on well-known cryptographic assumptions, and we provide theoretical analyses to prove their soundness and experimental results to illustrate their efficiency. Since the proposed protocols ensure that the updated database remains k-anonymous, the results returned from a user's (or a medical researcher's) query are also k-anonymous, and the patient's or data provider's privacy cannot be violated by any query. As long as the database is updated properly using the proposed protocols, user queries under our application domain are always privacy-preserving.
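For reference, the plain (non-private) predicate that the paper's protocols evaluate obliviously is easy to state in code: a table is k-anonymous when every combination of quasi-identifier values occurs at least k times. The table and quasi-identifiers below are made up; the actual protocols compute this without revealing the tuple or the database.

```python
# Plain k-anonymity check (the non-cryptographic predicate the paper's
# protocols preserve). Table and quasi-identifiers are made-up examples.
from collections import Counter

def is_k_anonymous(rows, quasi_ids, k):
    """True iff every quasi-identifier value combination occurs >= k times."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in rows)
    return all(count >= k for count in groups.values())

table = [
    {"zip": "130**", "age": "2*", "disease": "flu"},
    {"zip": "130**", "age": "2*", "disease": "cold"},
    {"zip": "148**", "age": "3*", "disease": "flu"},
    {"zip": "148**", "age": "3*", "disease": "asthma"},
]
qi = ("zip", "age")
print(is_k_anonymous(table, qi, k=2))               # True

new_row = {"zip": "902**", "age": "4*", "disease": "flu"}
print(is_k_anonymous(table + [new_row], qi, k=2))   # False: unique QI group
```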
Abstract - An intrusion detection system is used to discover attacks against computers and network infrastructures. Many techniques are used to build such systems, including outlier detection schemes for anomaly detection, k-means clustering of monitoring data, and classification-based detection. Data mining approaches help determine what qualifies as an intrusion versus normal traffic, whether a system uses anomaly detection, misuse detection, target monitoring, or stealth probes. This paper attempts to evaluate, categorize, compare, and summarize the performance of data mining techniques for detecting intrusions.
Abstract - Human learning system is highly sensitive to responsive system according the processing, mapping, motion, auditory and visualization system. Special education system is implemented to overcome the demanded sense of the human special care sensitive signals. This responsive system is balanced and effectively instrumented with modern technological learning pedagogy to bring the special need learners into the normal learning system. In the learning process, cognitive human sensors directly influence the learning effectiveness. This paper attempted to observe the cognitive load such as mental , physical , temporal ,performance , effort and frustration in the long term , short term, working , instant , responsive, process, recollect , reference , instruction and action memory and classify the observed values as per the generalized and specialized properties. The six working loads are observed in the ten types of learning system. The classification analysis aimed to predicate the pattern for learning system for specific learning challenges.
Abstract -Remote Sensing is the science and art of acquiring information (spectral, spatial, and temporal) about material objects, area, or phenomenon, without coming into physical contact with them ad plays a significant role in feature extraction. In the present paper, implementation of color mapping index method is analyzed to extract features from RSI in spectral domain. Color indexing is applied after fixing the index value to the pixels of selected ROI (Region of Interest) of RSI and there by clustering based on these index values. Color mapping, which is also called tone mapping can be used to apply color transformations on the final image colors of the ROI. The process of color map indexing is a color map approximation approach on RSI for feature extraction includes designing appropriate algorithm, its implementation and discussion on the results of such implementation on ROI.
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)MdTanvirMahtab2
This presentation is about the working procedure of Shahjalal Fertilizer Company Limited (SFCL). A Govt. owned Company of Bangladesh Chemical Industries Corporation under Ministry of Industries.
Courier management system project report.pdfKamal Acharya
It is now-a-days very important for the people to send or receive articles like imported furniture, electronic items, gifts, business goods and the like. People depend vastly on different transport systems which mostly use the manual way of receiving and delivering the articles. There is no way to track the articles till they are received and there is no way to let the customer know what happened in transit, once he booked some articles. In such a situation, we need a system which completely computerizes the cargo activities including time to time tracking of the articles sent. This need is fulfilled by Courier Management System software which is online software for the cargo management people that enables them to receive the goods from a source and send them to a required destination and track their status from time to time.
Welcome to WIPAC Monthly the magazine brought to you by the LinkedIn Group Water Industry Process Automation & Control.
In this month's edition, along with this month's industry news to celebrate the 13 years since the group was created we have articles including
A case study of the used of Advanced Process Control at the Wastewater Treatment works at Lleida in Spain
A look back on an article on smart wastewater networks in order to see how the industry has measured up in the interim around the adoption of Digital Transformation in the Water Industry.
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptxR&R Consult
CFD analysis is incredibly effective at solving mysteries and improving the performance of complex systems!
Here's a great example: At a large natural gas-fired power plant, where they use waste heat to generate steam and energy, they were puzzled that their boiler wasn't producing as much steam as expected.
R&R and Tetra Engineering Group Inc. were asked to solve the issue with reduced steam production.
An inspection had shown that a significant amount of hot flue gas was bypassing the boiler tubes, where the heat was supposed to be transferred.
R&R Consult conducted a CFD analysis, which revealed that 6.3% of the flue gas was bypassing the boiler tubes without transferring heat. The analysis also showed that the flue gas was instead being directed along the sides of the boiler and between the modules that were supposed to capture the heat. This was the cause of the reduced performance.
Based on our results, Tetra Engineering installed covering plates to reduce the bypass flow. This improved the boiler's performance and increased electricity production.
It is always satisfying when we can help solve complex challenges like this. Do your systems also need a check-up or optimization? Give us a call!
Work done in cooperation with James Malloy and David Moelling from Tetra Engineering.
More examples of our work https://www.r-r-consult.dk/en/cases-en/
Immunizing Image Classifiers Against Localized Adversary Attacksgerogepatton
This paper addresses the vulnerability of deep learning models, particularly convolutional neural networks
(CNN)s, to adversarial attacks and presents a proactive training technique designed to counter them. We
introduce a novel volumization algorithm, which transforms 2D images into 3D volumetric representations.
When combined with 3D convolution and deep curriculum learning optimization (CLO), itsignificantly improves
the immunity of models against localized universal attacks by up to 40%. We evaluate our proposed approach
using contemporary CNN architectures and the modified Canadian Institute for Advanced Research (CIFAR-10
and CIFAR-100) and ImageNet Large Scale Visual Recognition Challenge (ILSVRC12) datasets, showcasing
accuracy improvements over previous techniques. The results indicate that the combination of the volumetric
input and curriculum learning holds significant promise for mitigating adversarial attacks without necessitating
adversary training.
Quality defects in TMT Bars, Possible causes and Potential Solutions.PrashantGoswami42
Maintaining high-quality standards in the production of TMT bars is crucial for ensuring structural integrity in construction. Addressing common defects through careful monitoring, standardized processes, and advanced technology can significantly improve the quality of TMT bars. Continuous training and adherence to quality control measures will also play a pivotal role in minimizing these defects.
Forklift Classes Overview by Intella PartsIntella Parts
Discover the different forklift classes and their specific applications. Learn how to choose the right forklift for your needs to ensure safety, efficiency, and compliance in your operations.
For more technical information, visit our website https://intellaparts.com
Final project report on grocery store management system..pdfKamal Acharya
In today’s fast-changing business environment, it’s extremely important to be able to respond to client needs in the most effective and timely manner. If your customers wish to see your business online and have instant access to your products or services.
Online Grocery Store is an e-commerce website, which retails various grocery products. This project allows viewing various products available enables registered users to purchase desired products instantly using Paytm, UPI payment processor (Instant Pay) and also can place order by using Cash on Delivery (Pay Later) option. This project provides an easy access to Administrators and Managers to view orders placed using Pay Later and Instant Pay options.
In order to develop an e-commerce website, a number of Technologies must be studied and understood. These include multi-tiered architecture, server and client-side scripting techniques, implementation technologies, programming language (such as PHP, HTML, CSS, JavaScript) and MySQL relational databases. This is a project with the objective to develop a basic website where a consumer is provided with a shopping cart website and also to know about the technologies used to develop such a website.
This document will discuss each of the underlying technologies to create and implement an e- commerce website.
Final project report on grocery store management system..pdf
V1_I2_2012_Paper2.docx
Integrated Intelligent Research (IIR)
International Journal of Data Mining Techniques and Applications
Volume: 01, Issue: 02, December 2012, pp. 29-35
ISSN: 2278-2419

Green Computing - Maturity Model for Virtualization

M. Ranjani
Pavai Arts and Science College for Women, Anaipalayam
Keywords- Green Computing, Green IT, Customer Relationship Management (CRM), Storage Area Network (SAN), Virtual Private Network (VPN), Power Usage Effectiveness (PUE), Software as a Service (SaaS), VMM (Virtualization Maturity Model)
I. INTRODUCTION
Computing is an area of human activity in which there are real environmental savings to be made, some of which can be achieved through relatively straightforward practical methods. PC power use is ripe for environmental savings: shutting the computer down, switching it off, or putting it into a standby mode during idle periods saves a considerable amount of power and helps address climate change. Green computing, or green IT, refers to environmentally sustainable computing: the study and practice of designing, manufacturing, using, and disposing of computers efficiently and effectively, with minimal or no impact on the environment. Green IT also strives to achieve economic viability and improved system performance and use, while abiding by our social and ethical responsibilities. Thus, green IT includes the dimensions of environmental sustainability, the economics of energy efficiency, and the total cost of ownership, which includes the cost of disposal and recycling. It is important to understand the need for the study of green computing: it is a tool by which global warming can be controlled and reduced. The global surface temperature increased by 0.74 ± 0.18 °C (1.33 ± 0.32 °F) during the 100 years ending in 2005, and according to the latest IPCC report it is likely to rise a further 1.1 to 6.4 °C (2.0 to 11.5 °F) during the twenty-first century.
II. MATURITY MODEL
A. Reference Model for Virtualization
Virtualization is a term used to mean many things, but in its
broader sense, it refers to the idea of sharing. To understand
the different forms of virtualization and the architectural
implications for creating and deploying new applications, we
propose a reference model to describe the differing forms of
the concept. In this model we observe a number of different
layers of abstraction at which virtualization can be applied,
which we describe as increasing levels of maturity, shown in
Table 1. We assert that higher levels of virtualization maturity
correspond to lower energy consumption, and therefore
architectures based on higher levels of maturity are “greener”
than those at lower levels, which we discuss further on.
Table 1. Levels of virtualization maturity

Virtualization maturity   Name         Applications           Infrastructure  Location     Ownership
Level 0                   Local        Dedicated              Fixed           Distributed  Internal
Level 1                   Logical      Shared                 Fixed           Centralized  Internal
Level 2                   Data center  Shared                 Virtual         Centralized  Internal
Level 3                   Cloud        Software as a Service  Virtual         Virtual      Virtual
Level 0 (“Local”) means no virtualization at all. Applications
are all resident on individual PCs, with no sharing of data or
server resources.
Level 1 (“Logical Virtualization”) introduces the idea of sharing
applications. This might be, for example, through the use of
departmental servers running applications that are accessed by
many client PCs. This first appeared in the mainstream as
mainframe and then “client/server” technology, and later with
more sophisticated N-tier structures. Although not
conventionally considered virtualization, in fact, it is arguably
the most important step. Large organizations typically have a
large portfolio of applications, with considerable functional
overlaps between applications.
Level 2 (“Data Center Virtualization”) is concerned with
virtualization of hardware and software infrastructure. The
basic premise here is that individual server deployments do not
need to consume the hardware resources of dedicated
hardware, and these resources can therefore be shared across
multiple logical servers. This is the level most often associated
with the term virtualization. The difference from Level 1 is that
the hardware and software infrastructure upon which
applications/ servers are run is itself shared (virtualized). For
server infrastructure, this is accomplished with platforms such
as Microsoft Virtual Server and VMware among others, where
a single physical server can run many virtual servers. For
storage solutions, this level is accomplished with Storage Area
Network (SAN) related technologies, where physical storage
devices can be aggregated and partitioned into logical storage
that appears to servers as dedicated storage but can be managed
much more efficiently. The analogous concept in networking at
this level is the Virtual Private Network (VPN) where shared
networks are configured to present a logical private and secure
network much more efficiently than if a dedicated network
were to be set up.
Level 3 (“Cloud virtualization”) in the virtualization maturity
model extends Level 2 by virtualizing not just resources but also
the location and ownership of the infrastructure through the use
of cloud computing. This means the virtual infrastructure is no
longer tied to a physical location, and can potentially be moved
or reconfigured to any location, both within or outside the
consumer’s network or administrative domain. The implication
of cloud computing is that data center capabilities can be
aggregated at a scale not possible for a single organization, and
located at sites more advantageous (from an energy point of
view, for example) than may be available to a single
organization. This creates the potential for significantly
improved efficiency by leveraging the economies of scale
associated with large numbers of organizations sharing the
same infrastructure. Servers and storage virtualized to this
level are generally referred to as Cloud Platform and Cloud Storage. A summary of the virtualization layers, as they map to the server, storage, and network aspects, is shown in Table 2.
Table 2. Technology aspects of virtualization

Virtualization maturity   Name          Server                 Storage                 Network
Level 0                   Local         Standard PC            Local disks             None
Level 1                   Departmental  Client-server, n-tier  File server, DB server  LAN, shared services
Level 2                   Data center   Server virtualization  SAN                     WAN/VPN
Level 3                   Cloud         Cloud platform         Cloud storage           Internet
B. Starting at Home – Level 0
Level 0 (“Local”) in the virtualization maturity model means
no virtualization at all. Even with no virtualization, there is
plenty of scope for energy savings. Traditional design and
development approaches may lead to applications that are less
efficient than they could be. There are also a number of other
design issues that can be readily recognized in applications,
and therefore, a set of rules, or principles, can be recommended
to be implemented by architects and developers for all
applications.
Enable power saving mode
Principle: Always design for transparent sleep/wake mode (see the sketch after this list).
Minimize the amount of data stored
Principle: Minimize the amount of data stored by the
application and include data archiving in application design.
Design and code efficient applications
Principle: Design, develop, and test to maximize the efficiency
of code.
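To make the sleep/wake principle concrete, the following is a minimal sketch, not the paper's own method: an application that checkpoints its progress so the operating system can suspend or power the machine off at any time and the work resumes cleanly. The file name, checkpoint interval, and work loop are illustrative assumptions.

```python
# Sketch: design for transparent sleep/wake by checkpointing progress.
# STATE_FILE and the checkpoint interval are hypothetical choices.
import json
import os
import signal
import sys
import tempfile

STATE_FILE = "app_state.json"

def save_state(state: dict) -> None:
    # Write atomically so a suspend or power-off mid-write cannot
    # leave a corrupt checkpoint behind.
    fd, tmp = tempfile.mkstemp(dir=".")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp, STATE_FILE)

def load_state() -> dict:
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {"next_item": 0}

def main() -> None:
    state = load_state()
    # Also checkpoint on SIGTERM so an OS-initiated shutdown is transparent.
    signal.signal(signal.SIGTERM,
                  lambda signum, frame: (save_state(state), sys.exit(0)))
    for i in range(state["next_item"], 1000):
        # ... one resumable unit of work goes here ...
        state["next_item"] = i + 1
        if i % 50 == 0:  # periodic checkpoint
            save_state(state)
    save_state(state)

if __name__ == "__main__":
    main()
```

With this structure the machine can enter standby between runs without losing work, which is exactly what makes aggressive power management safe to enable.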
Sharing is Better – Level 1
Level 1 (“Logical Virtualization”) in the virtualization maturity
model introduces the idea of sharing applications. This might
be for examplethrough the use of departmental servers running
applications that are accessed by many client PCs. This first
appeared in the mainstream as“client/server” technology, and
later with more sophisticated N-tier structures. Although not
conventionally considered virtualization, in fact,it is arguably
the most important step. Large organizations typically have a
large portfolio of applications, with considerable functional
overlapsbetween applications. For large organizations, this will
produce much bigger payoffs than any subsequent hardware
virtualization. The best way to do this is to have a complete
Enterprise Architecture, encompassing an Application
Architecture identifying the functional footprints and overlaps
of the application portfolio, so that a plan can be established
for rationalizing unnecessary or duplicated functions. This may
be accomplished by simply identifying and decommissioning
unnecessarily duplicated functionality, or factoring out
common components into shared services. As well as solving
data integrity and process consistency issues, this will
generally mean there are fewer and smaller applications
overall, and therefore, less resources required to run them,
lowering the energy/emissions footprint and at the same time
reducing operational costs. The increased use of shared
application services ensures that common functions can be
centrally deployed and managed rather than unnecessarily
consuming resources within every application that uses them.
Principle: Develop a plan to rationalize your applications and
platform portfolio first.
A single computer running at 50 percent CPU usage will use considerably less power than two similar computers each running at 25 percent CPU usage. This means that single-application servers are not efficient and that servers should ideally be used
as shared resources so they operate at higher utilization levels.
When aiming for this end result, it is important to ensure that
sociability testing is executed to make sure that applications
can work together. Also, performance testing should be
executed to ensure that each application will not stop the others
on the server from running efficiently while under load. In
effect, it is important to ensure that the available CPU can be
successfully divided between the applications on the server
with sufficient capacity for growth. As a by-product of
executing this form of design, it is recommended that a set of
architectural standards be introduced to require that
applications install cleanly into an isolated space and not
impact other applications, and that testing takes place to ensure
that this is the case. However, we have all seen examples
where this is not the case regardless of how simple this task
would appear.
Principle: Consolidate applications together onto a minimum
number of servers.
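The consolidation arithmetic can be illustrated with a simple linear power model (idle draw plus a load-proportional term). The sketch below uses assumed wattages, not measured figures:

```python
# Sketch: why one server at 50% utilization beats two servers at 25%.
# The 100 W idle / 100 W peak-extra figures are illustrative assumptions.
def server_power(utilization: float,
                 idle_watts: float = 100.0,
                 peak_extra_watts: float = 100.0) -> float:
    # Linear model: fixed idle draw plus a term proportional to load.
    return idle_watts + peak_extra_watts * utilization

two_servers = 2 * server_power(0.25)  # 2 x 125 W = 250 W
one_server = server_power(0.50)       # 150 W for the same total work

print(f"Two servers at 25%: {two_servers:.0f} W")
print(f"One server at 50%:  {one_server:.0f} W")
```

Because the idle draw is paid once instead of twice, consolidation saves roughly 40 percent in this model, which is the intuition behind consolidating applications onto a minimum number of servers.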
Level 2: Infrastructure Sharing Maximized
As defined earlier, Level 2 (“Data Center Virtualization”) is
the level most often associated with the term “virtualization.”
Through platforms like Microsoft Virtual Server, VMware and
others, server and storage virtualization does provide more
efficient solutions for organizations that have the size and
capability to develop a virtualized infrastructure. The bar for
virtualization is low and is becoming lower all the time as
virtualization software becomes easier to manage and more
capable. The price/performance of servers has now reached the
point where even smaller organizations hosting only 3 or 4
departmental applications may reduce costs through
deployment into a virtualized environment. It would appear
straightforward, by extending the previous arguments about
avoiding single-task computers, that data center virtualization
would provide a simple answer and approach to share
resources. This is indeed a good starting point, but is not as
simple as it appears. The lowest-hanging fruit in the transition
to virtualization is in test, development, and other
infrequently used computers. Moving these machines into a
single virtual environment reduces the physical footprint, heat
produced and power consumed by the individual servers. Each
physical server that runs a virtualized infrastructure has finite
resources and consumes energy just like any other computer. It
is therefore apparent, from extending the same set of rules
outlined above, that the aim is to load as much as possible onto
a single physical server, and to make use of its resources in the
most efficient way possible. Creating virtual servers does not
come at zero energy or management cost. Computers should
not be running unless they are needed, even in a virtual
environment. This will extend the limit of the available
resources. It is particularly efficient to do this in a virtual
environment as virtual machines can be easily paused,
restarted, and even moved. This adds, as would be expected,
the requirement for applications running on the virtual
machines to be able to be paused, along with the base operating
system. It could be possible, for example, to pause the
operation of a file and print server at night while application
updates run on another virtual machine, making use of the now
available resources.
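As an illustration of pausing idle virtual machines, here is a minimal sketch using the libvirt Python bindings; it assumes a local QEMU/KVM host, and the domain name is hypothetical. It is offered as one possible approach, not as the paper's method.

```python
# Sketch: pause an idle VM overnight and resume it later, freeing its CPU
# for other guests. Assumes the libvirt Python bindings and a QEMU/KVM
# host; the domain name "file-print-server" is hypothetical.
import libvirt

def set_vm_paused(domain_name: str, paused: bool) -> None:
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByName(domain_name)
        state, _reason = dom.state()
        if paused and state == libvirt.VIR_DOMAIN_RUNNING:
            dom.suspend()   # guest stops consuming CPU; RAM stays resident
        elif not paused and state == libvirt.VIR_DOMAIN_PAUSED:
            dom.resume()
    finally:
        conn.close()

# Called from a nightly scheduler, for example:
# set_vm_paused("file-print-server", paused=True)   # evening
# set_vm_paused("file-print-server", paused=False)  # morning
```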
Figure-1
Principle: Eliminate dedicated hardware infrastructure, and
virtualize servers and storage to obtain energy economies of
scale and improve utilization.
A Brighter Shade of Green: Cloud Computing
Cloud computing provides the next big thing in computing —
some interesting architectural constructs, some great potential
from a monetary aspect, and a very real option to provide a
more environmentally friendly computing platform.
Fundamentally, cloud computing involves three different
models:
• Software as a Service (SaaS) refers to a browser or client application accessing an application that is running on servers hosted somewhere on the Internet.
• Attached Services refers to an application that is running
locally accessing services to do part of its processing from a
server hosted somewhere on the Internet.
• Cloud Platforms allow an application that is created by an organization’s developers to be hosted on a shared runtime platform somewhere on the Internet.

All of the above models have one thing in common — the same fundamental approach of running server components somewhere else, on someone else’s infrastructure, over the Internet. In the SaaS and attached services models the server components are shared and accessed
from multiple applications. Cloud Platforms provide a shared
infrastructure where multiple applications are hosted together.
Level 3: Cloud Virtualization
Cloud virtualization in the virtualization maturity model can
significantly improve efficiency by leveraging the economies
of scale associated with large numbers of organizations sharing
the same infrastructure. Obtaining energy efficiencies in data
centers is highly specialized and capital intensive. Standard
metrics are emerging, such as Power Usage Effectiveness (PUE), which can be used to benchmark how much energy is usefully deployed versus how much is wasted on overhead.
There is a large gap between typical data centers and best
practice for PUE. The other major advantage of Level 3 is the
ability to locate the infrastructure to best advantage from an
energy point of view.
Figure-2
Principle: Shift virtualized infrastructure to locations with
access to low emissions energy and infrastructure with low
PUE.
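PUE itself is simple to compute once facility and IT energy are metered separately. The sketch below uses illustrative figures only (a PUE near 2.0 has been typical of ordinary data centers, while best-practice facilities approach 1.1 to 1.2):

```python
# Sketch: computing Power Usage Effectiveness (PUE).
# PUE = total facility energy / energy delivered to IT equipment (>= 1.0).
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Illustrative figures only, not measurements:
print(pue(total_facility_kwh=2_000_000, it_equipment_kwh=1_000_000))  # 2.0
print(pue(total_facility_kwh=1_150_000, it_equipment_kwh=1_000_000))  # 1.15
```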
Note that applications need to be specifically designed for
Level 3 to take full advantage of the benefits associated with
that level. This is an
impediment to migrating existing functions and applications
that may limit the degree to which organizations can move to
this level.
Principle: Design applications with an isolation layer to enable
cloud based deployment later on, even if your organization is
not yet ready for this step.
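One way to realize such an isolation layer is to have the application code against a narrow storage interface so a cloud backend can be substituted later without touching application logic. The sketch below is a hypothetical illustration, with all names invented for the example:

```python
# Sketch of an isolation layer: application code depends only on BlobStore,
# so a cloud-backed implementation can replace the local one at Level 3.
from abc import ABC, abstractmethod
from pathlib import Path

class BlobStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class LocalBlobStore(BlobStore):
    # Level 0-2 deployment: plain files on local or SAN-backed storage.
    def __init__(self, root: str) -> None:
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def put(self, key: str, data: bytes) -> None:
        (self.root / key).write_bytes(data)

    def get(self, key: str) -> bytes:
        return (self.root / key).read_bytes()

# A cloud implementation (e.g. built on an object-storage SDK) would
# subclass BlobStore the same way; the calling code never changes.
store: BlobStore = LocalBlobStore("./data")
store.put("report.txt", b"quarterly figures")
print(store.get("report.txt"))
```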
Making Sure Your Cloud has a Green Lining
There are three aspects of efficiency that should be considered in cloud computing:
• The placement and design of the cloud data center
• The architecture of the cloud platform
• The architecture and development approach of the
applications that are hosted.
Different vendors are approaching the design of their data
centers in different ways. Different approaches that can be used
to reduce power consumption in data centers include:
• Buying energy-efficient servers
• Building energy-efficient data centers that use natural airflow,
water cooling (ideally using recycled water and cooling the
water in an efficient manner)
• Efficient operation: running lights-out, moving load to the cooler parts of the data center, and recycling anything that comes out of the data center, including equipment.

Some data center operators, such as Google, already publish statistics on their power usage. These operators use an industry standard for measuring the efficiency of a data center: the ratio of the total power used to run the data center to the power used to run the IT equipment itself (PUE). As this space grows, it is expected that other organizations will do the same, allowing comparisons to be made.
considered is the technical architecture of the cloud platform,
as different organizations provide different facilities and these
facilities can determine the efficiency of the application and
therefore impact the efficiency of the overall platform. Some
cloud vendors, for example, provide services that are
controlled by the vendor; such is the case with SaaS vendors.
In this case it is up to the architect of the calling application to
ensure the efficiency of the overall architecture. Other cloud
vendors host virtual machines that run on a scalable cloud
infrastructure. There is no doubt that this is more efficient than
running dedicated physical servers, and likely more efficient than running virtual machines in a local data center.
Principle: Take all steps possible to ensure that the most
efficient cloud vendor is used.
Embodied energy means there can be a big sacrifice in
deploying an application in a data center if it subsequently gets
migrated to the cloud (meaning the embodied energy of the
original deployment hardware is “thrown away”). A driving
factor in moving through the virtualization levels from an
energy point of view is to consider the embodied energy
implications of moving up the levels. If you have to purchase a
lot of new hardware to move from one level to the next, the
embodied energy in these additions may negate the benefits in
operational savings. This is why it is critical to consider the
embodied energy as part of the entire lifecycle energy footprint
during application design, and in selection of deployment
architecture.
Table 3. Getting a return on the embodied energy cost of buying a new virtualization server

We define the following:

N – the number of servers to be virtualized on a single new physical server
B – embodied energy ratio (embodied energy of the new server divided by the total energy consumption of that server over its life cycle)
E – efficiency factor (energy consumption of a single new server with capacity equivalent to the original N servers, divided by the energy consumption of the N original servers, assuming the same technology and utilization, over the projected life)
T – technology factor (energy consumption of the new servers per unit of CPU capacity divided by the energy consumption of the old servers per unit of CPU capacity)
U – utilization factor (utilization of the old servers divided by the utilization of the new server)

To pay back the cost of embodied energy and realize a net gain, you need:

E x U x T < (1 - B)

If a typical value of B is 25 percent, then the total improvement factor needs to be better than 0.75. This is easy to achieve: even if the technologies of the old and new servers are similar (T = 1) and there are no efficiency gains (E = 1), you would still expect U to be lower than 0.5 if N is greater than 2, since nearly all servers are grossly underutilized. Thus, as soon as you can virtualize more than two servers, you can probably justify the embodied energy of buying a new server over the life cycle of that server, from an energy point of view.
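As a worked illustration of the payback condition, the sketch below plugs assumed factors into the inequality; the values are examples, not data from the paper:

```python
# Sketch: evaluating the embodied-energy payback condition E x U x T < (1 - B).
def virtualization_pays_back(E: float, U: float, T: float, B: float) -> bool:
    return E * U * T < (1.0 - B)

# Assumptions for illustration: similar technology (T = 1), no efficiency
# gain (E = 1), four old servers at 15% utilization consolidated onto one
# new server at 60%, embodied energy ratio B = 0.25.
E, T, B = 1.0, 1.0, 0.25
U = 0.15 / 0.60  # utilization of old servers / utilization of new server

print(virtualization_pays_back(E, U, T, B))  # True: 0.25 < 0.75
```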
C. Moving to Increasing Levels of Virtualization
Referring to the model in Table 1, most IT organizations are
now at Level 1 with more advanced organizations moving in
whole or in part to Level 2. Only a small proportion of
organizations are at Level 3. Although we argue that these
levels of increasing maturity in virtualization correspond to
reduced energy footprint, we note that Level 3 is not
necessarily an endpoint for all organizations — in some cases
there may be good business reasons for not moving to Level 3
at all. We omit the transition from Level 0 to Level 1 as the
vast majority of medium to large organizations have already
taken this step. Moving from Level 1 to 2 involves replacing
individual dedicated servers with larger platforms running
virtual servers. If more than one physical server is required,
additional benefits can be achieved by grouping applications
on physical servers such that their peak load profiles are spread
in time rather than coincident. This enables statistical
averaging of load since normally the sizing of server capacity
is driven by peak load rather than average load. The trade-off
here is the embodied energy cost of the new server to host the
virtualized environments.
In general, with regard to server virtualization at least, the
gains in virtualizing multiple servers onto a physical server are
substantially greater than the embodied energy costs associated
with buying a new server to host the virtualized platform.
Level 2 virtualization can also be used to leverage the
embodied energy in existing infrastructure to avoid the need for procuring more infrastructure. This may seem counterintuitive, but it may be better to use an existing server as a virtualization host rather than buy a replacement for it. For example, if you have four servers that could potentially be virtualized on a single new large server, there may be advantages in simply retaining the best two servers, each virtualizing two server instances (thus avoiding the embodied energy in a new server and reducing operational consumption by about a half), rather than throwing all four out and buying a new server. This can be possible because the existing
servers are probably running at such low utilization that you
can double their load without impact on system performance.
Obviously, this will ultimately depend on the antiquity of the
existing servers and current and projected utilization levels.
Principle: Measure server utilization levels – low utilization is
a strong indicator for potential virtualization benefits.
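As one way to act on this principle, the sketch below samples CPU utilization with the third-party psutil package; the sampling window and the 20 percent threshold are illustrative assumptions:

```python
# Sketch: sample CPU utilization to flag consolidation candidates.
# Requires the third-party psutil package (pip install psutil).
import psutil

SAMPLES = 12               # a one-minute probe: 12 samples of 5 seconds each
LOW_UTIL_THRESHOLD = 20.0  # illustrative cut-off, not a standard

readings = [psutil.cpu_percent(interval=5) for _ in range(SAMPLES)]
average = sum(readings) / len(readings)

print(f"Average CPU utilization: {average:.1f}%")
if average < LOW_UTIL_THRESHOLD:
    print("Low utilization: a strong candidate for virtualization")
```

In practice such probes would run over days or weeks, since peak rather than average load drives server sizing.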
Note also that if you can skip Level 2 altogether, rather than
deploy to level 2 and then migrate to Level 3 later, you can
save on the embodied energy costs of the entire Level 2
infrastructure. Moving from Level 2 to 3 is fundamentally
about two things —sharing the data center infrastructure across
multiple organizations, and enabling the location of the data
center infrastructure to shift to where it is most appropriate.
Sharing the infrastructure across multiple organizations can
deliver big benefits because achieving best practice efficiencies
in data center energy usage requires complex, capital intensive
environments.
Another advantage of Level 3 is that the infrastructure can be
dynamically tuned to run at much higher levels of utilization
(and thus energy efficiency) than would be possible if
dedicated infrastructure was used, since the dedicated
infrastructure would need to be provisioned for future growth
rather than currently experienced load. In a cloud structure,
hardware can be dynamically provisioned so that even as load
for any individual application grows, the underlying hardware
platform can always be run at optimal levels of utilization.
The trade-offs here are:
• Increased load and dependence on external network
connectivity, although this is largely energy neutral
• (Perceived) loss of local control because the infrastructure is
being managed by an external organization (although they may
commit to service levels previously unachievable through the
internal organization)
• (Perceived) security or privacy concerns with having the data
hosted by an external party (such as managing hospital records,
for example).
Principle: Design and implement systems for the highest level
you can, subject to organizational, policy and technological
constraints.
Figure-3
III. STEPS OF GREEN COMPUTING
There are five steps to take towards green computing:
1. Develop a sustainable green computing plan.
2. Recycle.
3. Make environmentally sound purchase decisions.
4. Reduce paper consumption.
5. Conserve energy.
A. Develop a sustainable green computing plan
Green computing best practices and policies should cover power usage and the reduction of paper consumption, as well as recommendations for new equipment and the recycling of old machines.
B. Recycle
Computers contain toxic metals and pollutants that can release harmful emissions into the environment; never discard computers in a landfill. Recycle them instead through manufacturer programs such as HP’s Planet Partners, or through recycling services in your community.
C. Make environmentally sound purchase decisions
• Help institutional purchasers evaluate, compare, and select desktop computers, notebooks, and monitors based on environmental attributes.
• Provide a clear, consistent set of performance criteria for the design of products.
• Recognize manufacturer efforts to reduce the environmental impact of products by reducing or eliminating environmentally sensitive materials, designing for longevity, and reducing packaging materials.
Figure-4
D. Reduce paper consumption
Use e-mail and electronic archiving, and use the “Track Changes” feature in electronic documents rather than red-lining corrections on paper. When you do print out documents, make sure to use both sides of the paper, recycle regularly, use smaller fonts and margins, and selectively print only the required pages.
E. Conserve Energy
Turn off your computer when you know you won’t use it for
an extended period of time. Turn on power management
features during shorter periods of inactivity.
IV. CONCLUSIONS
There is a compelling need for applications to take
environmental factors into account in their design, driven by
the need to align with organizational environmental policies,
reduce power and infrastructure costs and to reduce current or
future carbon costs. The potential reduction in energy and
emissions footprint through good architectural design is
significant. The move to more environmentally sustainable
applications impacts software and infrastructure architecture.
The link between the two is strong, driving a need for joint
management of this area of concern from infrastructure and
software architects within organizations. These issues should
be considered at the outset and during a project, not left to the
end. An interesting observation is that our principles also align
well with the traditional architectural drivers. Does this mean
that energy reduction can be used as a proxy for all the other
drivers? An architecture designed solely to reduce energy over
the full lifecycle would seem to also result in a “good”
architecture from a broader perspective. Can we save a lot of
time and effort by just concentrating solely on energy
efficiency above all else? We are not passive spectators, but active contestants in the drama of our existence, and we need to take responsibility for the kind of life we lead. Make your entire organization green in every way possible. Understand the life cycle of IT products. Reduce paper use as much as possible and recycle it when you can. Recycle the water the organization uses by collecting rainwater and filtering it for sinks and drinking fountains, and take grey water from sinks and water fountains for flushing the toilets. Encourage your employees to carpool, ride bicycles, or use other mass transit options. A green roof can be a good location for a break area for employees. These are but a few small ideas you can use to make your business greener, which is good for both the environment and the stockholders. Let’s start working on it and embrace the future.