IOSR Journal of Electrical and Electronics Engineering (IOSR-JEEE)
e-ISSN: 2278-1676, p-ISSN: 2320-3331, Volume 11, Issue 5 Ver. II (Sep - Oct 2016), PP 187-191
www.iosrjournals.org
DOI: 10.9790/1676-110502187191 www.iosrjournals.org 187 | Page
Virtualization Approach: Theory and Application
Hamdani¹, Andysah Putera Utama Siahaan²
¹Faculty of Engineering, Universitas Pembangunan Panca Budi, Medan, Indonesia
²Faculty of Computer Science, Universitas Pembangunan Panca Budi, Medan, Indonesia
Abstract: Virtualization is a method for using computing resources efficiently: it maximizes energy efficiency, extends the life of hardware, and encourages reuse. Virtualization technology uses software to merge several real physical systems into a single virtual form, commonly known as a virtual system, without sacrificing the advantages of the individual systems. Such a system reduces the amount of hardware, the electrical energy consumed, and the time spent, thereby increasing efficiency and effectiveness. Virtualization also reduces the heat produced by installed hardware, and with it the contribution to rising global temperatures (global warming). Forms of virtualization include server virtualization, network virtualization, memory virtualization, grid computing, application virtualization, storage virtualization, platform virtualization, and thin clients.
Keywords: Virtualization, Virtual Machine
I. Introduction
Virtualization is a method that emulates a real system as a virtual system, separated from its physical dependencies. In other words, something that has been virtualized can no longer be seen or touched, yet it provides the same functions as the original system. The purpose of virtualization is to make savings in every respect. Virtualization is a concept first developed in the 1960s to manage mainframe hardware; computers based on the x86 architecture later faced the same problems, namely rigidity and underutilization. Virtualization was initially used simply to test an invention or a new system on top of an existing one, to determine whether the new system was compatible or needed further improvement. In later periods, virtualized systems came to stand alone within a parent (host) system. This paper describes the virtualization technique and the basic principles of virtuality.
II. Theories
2.1 Virtual Machine
A virtual machine (VM) is typically a piece of software or an operating system that cannot be seen physically but can be run inside another environment. The VM is referred to as the "guest" because it only rides on top, while the environment it runs in is called the "host." The basic concept of the virtual machine is that it emulates the hardware of a computer, consisting of a CPU, memory, and hard disk, as multiple execution environments, so that each virtualized instance appears to have its own physical devices. VMs first appeared in attempts to run multiple operating systems on a single computer.
Fig. 1 Virtual Machine Architecture
Figure 1 shows the virtual machine architecture. A server can host several virtual machines simultaneously. Virtual machine technology has many uses, such as hardware consolidation, easier system recovery, and running older software. One of the most important applications of virtual machine technology is cross-platform integration. Some other important applications are:
1. Server consolidation. If several servers run applications that take up very few resources, VMs can be used to consolidate those applications onto a single server, even if the applications require different operating systems.
2. Automation and consolidation. Each VM can act as a different environment, so developers do not need to provide each environment physically.
3. Easy system recovery. Virtualization can be used for system recovery that requires portability and flexibility across platforms.
4. Software demonstration. With VM technology, a clean operating system with the required configuration can be provided quickly.
A. Virtualization Types
The virtual machine provides a convenient way to distribute the hardware resources of the host among the guest systems, each running its own guest operating system. The software that provides the virtualization is called a virtual machine monitor or hypervisor. A hypervisor can run directly on the hardware or on top of an operating system.
The main advantages of a VM system are:
1. Multiple operating system environments can run on the same computer, in strong isolation from each other.
2. A VM can provide an instruction set architecture (ISA) that differs from that of the real hardware.
The guest operating systems need not all be the same. Using VMs to support a variety of different operating systems has become popular in embedded systems, where a real-time operating system is used alongside a high-level operating system such as Linux or Windows. Another use is sandboxing, which isolates changes in code that cannot yet be trusted because it is still in the development stage. VMs offer further benefits for operating system development, such as better debugging access and faster reboots than the original machine.
2.2 Virtualization Process
A process virtual machine, sometimes called an application virtual machine, runs as a normal application inside an operating system and supports a single process. The process VM is created when the process is initiated and destroyed when the process exits. Its aim is to provide a platform-independent programming environment that abstracts away the details of the underlying hardware or operating system and allows a program to execute the same way on any platform. The process VM provides a high-level abstraction and is typically implemented using an interpreter. This type of VM became popular with the Java programming language, which is implemented on the Java Virtual Machine. Another example is the .NET Framework, which runs on a VM called the Common Language Runtime. A special case of the process VM is a system that abstracts the communication mechanisms of a cluster of computers. Such a VM is not a single process but one process per physical machine in the cluster. It is designed to simplify writing parallel applications by letting the programmer focus on algorithms rather than on the communication mechanisms provided by the interconnect and the operating system. Communication is not fully hidden, however, and the cluster is not presented as a single machine.
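The interpreter-based process VM described above can be illustrated with a minimal sketch. The tiny stack-based instruction set here (PUSH, ADD, MUL) is invented for illustration; real process VMs such as the JVM define far richer instruction sets.

```python
# A toy process VM: a stack-based bytecode interpreter.
# The instruction set is invented for illustration only.

def run(bytecode):
    """Execute a list of (opcode, operand) pairs and return the top of stack."""
    stack = []
    for op, arg in bytecode:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode {op}")
    return stack[-1]

# The same bytecode runs unmodified on any host that has the interpreter,
# which is the platform independence a process VM provides.
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PUSH", 4), ("MUL", None)]
print(run(program))  # (2 + 3) * 4 = 20
```

The program is data, not native machine code, so porting it to a new platform means porting only the interpreter, exactly the trade-off the JVM and CLR make.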
2.3 Virtualization Sample
On GNU/Linux, one famous virtual machine product is VMware. VMware allows multiple operating systems to run on one PC simultaneously, without having to reconfigure storage media or reboot. Each supplied virtual machine executes the operating system of the user's choice. In this way, a user can boot a Linux system as the host operating system and then run other operating systems, such as Microsoft Windows or Solaris, on top of it. An operating system that runs on top of the host operating system is known as a guest operating system.
Fig. 2 VMware Virtual Machine Scheme
VMware is software often used for experimenting with new operating systems, games, and applications, and for installing two operating systems and running them (cross-platform) on the same drive without having to dual-boot. Technically, the user just presses Alt + Tab to switch between operating systems. VMware is not an emulator in the sense that a PS2 emulator is, because it does not emulate a CPU and hardware inside the virtual machine (VM); it only allows other operating systems to run in parallel with the operating system already running. Each virtual machine can have its own IP address, so connections over the attached network can be tested (e.g., with ping). An example can be seen in Figure 2, which shows a scheme of virtualization usage.
III. Implementation
Before virtualization runs, an operating system must first be in place. If the operating system on a physical computer has serious problems, the worst case is a full reinstallation. In the world of virtualization, this picture changes completely.
3.1 Full Virtualization
Full virtualization is a way of building a virtual machine environment in which the model provides a complete simulation of the hardware. The complete simulation allows any software that can execute directly on the host hardware to also execute in the guest environment, including entire operating systems. One illustration of full virtualization is the control program of IBM's CP/CMS operating system. Each CP/CMS user was given a stand-alone computer system in the form of a virtual machine. The virtual machine had all the capabilities of the underlying hardware, and to its user it was indistinguishable from a system of his own. The simulation is thorough and based on the hardware's principles of operation, including the instruction set, main memory, interrupts, exceptions, and device access. The result is a machine that can be shared among many users. Full virtualization is only possible with the right combination of hardware and software. For example, it was not possible on most IBM System/360 machines. x86 systems were also once thought incapable of full virtualization, but by using binary translation techniques, VMware was able to achieve it.
The main challenge in full virtualization is the simulation of operations that require special privileges, such as I/O instructions. The effect of every operation performed in a VM must be kept inside that VM: a virtualized operation is not allowed to change the state of another VM, of the control program, or of the hardware. Instructions whose effects remain under the control program's jurisdiction can be executed directly by the hardware, while instructions whose effects could reach outside the VM must be trapped and simulated. Full virtualization has so far proved successful for dividing a computer system among many users, for isolating users from one another, and for letting the control program provide reliability and security.
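The trap-and-emulate principle above can be sketched in a few lines. This is a deliberately simplified model, not any real hypervisor's design: the instruction names, the per-VM state fields, and the idea of treating "OUT" as the only privileged instruction are all invented for illustration.

```python
# A toy illustration of trap-and-emulate in full virtualization.
# Unprivileged instructions run "directly"; privileged ones (here, I/O)
# trap to a control program, which confines their effect to the VM's own state.

class ControlProgram:
    def __init__(self):
        self.vms = {}  # per-VM private state, isolated from every other VM

    def run(self, vm_id, instructions):
        state = self.vms.setdefault(vm_id, {"acc": 0, "io_log": []})
        for instr, arg in instructions:
            if instr == "ADD":        # unprivileged: execute directly
                state["acc"] += arg
            elif instr == "OUT":      # privileged: trap and emulate
                state["io_log"].append(arg)  # effect stays inside this VM
            else:
                raise ValueError(f"unknown instruction {instr}")
        return state

cp = ControlProgram()
cp.run("vm1", [("ADD", 5), ("OUT", "hello")])
cp.run("vm2", [("ADD", 7)])
# vm1's I/O never touched vm2's state:
print(cp.vms["vm1"]["io_log"], cp.vms["vm2"]["acc"])
```

The point of the sketch is the isolation invariant the text states: no operation in one VM may alter another VM, the control program, or the hardware.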
3.2 Half Virtualization
Half (partial) virtualization is a virtualization technique in which the virtual machine environment provides only a partial simulation of the hardware. Not all hardware features are simulated, so not all software can run without modification. The primary key of partial virtualization is address-space virtualization, meaning that each virtual machine has an independent address space. This requires hardware support for address relocation, which already exists in most practical implementations of partial virtualization. Partial virtualization was an early step toward full virtualization. It was used in the first-generation time-sharing system CTSS and in the experimental paging system on the IBM M44/44X. The term can also describe an operating system that provides separate address spaces for different users or processes. Partial virtualization is much easier to implement than full virtualization and can often provide useful, powerful VMs that support important applications. Its drawbacks are backward hardware compatibility and portability across systems: if a hardware feature is not simulated, software that uses that feature will fail to run.
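The address relocation that the text names as the key mechanism can be sketched with classic base-and-limit translation. This is a minimal model, assuming the simplest possible relocation hardware; the base and limit values are illustrative only.

```python
# A toy base-and-limit address translation: the relocation mechanism that
# gives each virtual machine an independent address space.

def translate(virtual_addr, base, limit):
    """Map a VM-private address to a physical address, or fail if out of range."""
    if not 0 <= virtual_addr < limit:
        raise MemoryError(f"address {hex(virtual_addr)} outside VM address space")
    return base + virtual_addr

# Two guests both use address 0x10 but land in different physical regions,
# so neither can touch the other's memory.
print(hex(translate(0x10, base=0x1000, limit=0x800)))  # 0x1010
print(hex(translate(0x10, base=0x9000, limit=0x800)))  # 0x9010
```

An access beyond the limit raises an error rather than reaching another guest's region, which is exactly the independence of address spaces the section describes.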
3.3 Native Virtualization
Native virtualization is a technique in which a VM is used to simulate a complete hardware environment, so that an unmodified operating system for the same CPU type can run in complete isolation inside the VM container. Native virtualization leverages the hardware support available in recent processors from Intel (Intel VT) and Advanced Micro Devices (AMD-V) to provide performance close to the original system. Native virtualization, also known as accelerated or hybrid virtualization, combines full virtualization with I/O acceleration techniques and is often used to greatly improve the performance of full virtualization. Typically, this method starts with a virtual machine monitor capable of full virtualization and then, based on performance analysis, applies selected acceleration techniques. I/O and network drivers are the parts most commonly accelerated in native virtualization.
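On Linux, the hardware support mentioned above is advertised through CPU feature flags: `vmx` for Intel VT-x and `svm` for AMD-V in `/proc/cpuinfo`. The sketch below takes the file's text as a parameter so it can be tried without real hardware; reading the actual file is left as a commented-out line.

```python
# Detect the hardware virtualization support (Intel VT-x or AMD-V) that
# native virtualization relies on, by parsing /proc/cpuinfo-style text.

def hw_virt_support(cpuinfo_text):
    """Return 'Intel VT-x', 'AMD-V', or None based on CPU feature flags."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "Intel VT-x"
            if "svm" in flags:
                return "AMD-V"
    return None

sample = "processor : 0\nflags\t\t: fpu vme de vmx sse2\n"
print(hw_virt_support(sample))  # Intel VT-x

# On a real Linux host one would read the file directly:
# print(hw_virt_support(open("/proc/cpuinfo").read()))
```

If neither flag is present (or virtualization is disabled in firmware), a hypervisor must fall back to software techniques such as binary translation, as discussed in Section 3.1.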
IV. Virtualization and Portability
In the past, the boundary between virtualization and portability was fairly clear. On hearing the word virtualization in computing, one first imagined virtual memory, virtual machines (like Java's), or virtual environments (an operating system running on top of another operating system), not something related to portability. Yet portability is more than a program that can be moved easily and made to run in another environment without much reconfiguration. In the Windows environment, the most familiar form is the portable application suite; in the Linux environment, LiveCDs, LiveDVDs, and LiveUSBs are already familiar. Currently there are many alternatives to VMware for virtualization. From the open-source world we know Xen, QEMU, VirtualBox, and coLinux; even Microsoft has issued Virtual PC. The most exciting development is that all of them are becoming increasingly compact, compact enough that the virtual disks they create can be moved easily. Virtualization today is no longer very complex: configuration is easy, and there are many alternatives to VMware. There are also matters that make us think further about green computing and about material and energy efficiency.
Virtualization in more serious computing, beyond the desktop, is needed mainly for efficiency. Some processes or software simply work better in their native environment, and some software is only available on a specific operating system platform; for example, the famous drafting environment AutoCAD has no substitute outside Windows. Virtualization also creates other efficiencies: one machine can run multiple operating systems, and an operating system can be run at any time, when required, on the same machine that is already running another operating system.
V. Virtualization Software
Before attempting virtualization, note whether the computer's hardware capabilities are adequate for the operating system to be virtualized. The host system needs substantial resources, since it shares them with the guest operating systems. Some of the virtualization software available includes:
4.1 coLinux
coLinux, or Cooperative Linux, is a virtualization solution specific to Windows for running various Linux distributions. With coLinux, we can run Ubuntu, Debian, ArchLinux, Fedora, and others. Interestingly, when the Linux image is placed on a portable USB drive attached to a notebook, performance is hardly disturbed: there is almost no noticeable degradation up to the point where the Linux boot process finishes. The coLinux daemon can be carried on a USB flash drive together with the image we need, without having to be installed.
4.2 QEMU
QEMU can run as an emulator, meaning it can run operating systems built for one type of machine on another kind of machine, or as a virtualization application: with the QEMU accelerator, guest code (the other operating system) executes directly on the host (the primary, parent operating system). This is another type of saving. QEMU is more advanced than coLinux because it is available on multiple operating systems. QEMU can be paired with the QEMU Manager GUI, so that the operating system images and the virtualization we prepare can be arranged with ease: allocating resources and disk space, and choosing which devices of the parent operating system will be mapped into the virtual operating system (guest OS). In coLinux, the same definitions must be made through text-mode configuration. In some respects coLinux performs better, but QEMU has more features; coLinux exists mainly for Windows users who want to learn Linux. Running QEMU as an emulator requires no complicated procedure. To try an OS, we do not even need to allocate space for a virtual hard disk image; we can run the ISO of the operating system directly. If we have already downloaded the ISO of Ubuntu, FreeDOS, or the latest Fedora, we can run it immediately with QEMU by booting straight from the ISO as a CD/DVD. QEMU thus serves as a virtualizer on many operating systems, for computing more broadly than mere experimentation.
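The diskless ISO-boot workflow just described can be sketched as follows. The `-m`, `-cdrom`, and `-boot d` options are standard QEMU flags; the ISO path and memory size are placeholders, and `qemu-system-x86_64` must be installed for the commented-out launch line to actually run.

```python
# Sketch of the QEMU workflow described above: boot an OS straight from a
# downloaded ISO image, with no virtual hard disk allocated at all.
import subprocess

iso_path = "ubuntu.iso"   # placeholder: any downloaded live ISO
cmd = [
    "qemu-system-x86_64",
    "-m", "2048",         # guest RAM in MiB
    "-cdrom", iso_path,   # attach the ISO as the CD/DVD drive
    "-boot", "d",         # boot from the CD/DVD drive
]
print(" ".join(cmd))
# subprocess.run(cmd)    # uncomment to actually launch the guest
```

Because nothing is written to a disk image, this is a throwaway environment: close the QEMU window and the host is exactly as it was.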
4.3 VirtualBox
VirtualBox became part of Sun. Sun acquired VirtualBox to add to its portfolio, much as Novell worked Xen virtualization into SUSE Linux Enterprise and Microsoft acquired VirtualPC from Connectix. Sun declared VirtualBox ready for more serious virtual computing, so that it can be used from the home to the enterprise scale. VirtualBox is available on many host operating systems, including Linux, Solaris, and OpenBSD. VirtualBox is developing rapidly, with release series coming out quickly and bringing new features. The full virtualization concept offered by VirtualBox is the same as in VMware, QEMU, and Win4Lin: the hardware specification of the guest follows that of the host, so the more sophisticated the host hardware, the more sophisticated the guest's hardware can be. However, some hardware is virtualized rather than mirrored from the host: for the CD/DVD drive, hard drive, RAM, and VGA memory, the benefit is that you can manage the capabilities of the virtualized hardware, though they can never exceed the capabilities of the host hardware. Like a real computer, a guest requires RAM and hard drive space. The amount of RAM given to a guest is cut from the host's RAM, so adequate RAM must be allocated to each guest. For example, if a guest requires 256 MB of RAM on a host with 1 GB, the host's available memory is reduced to 1 GB minus 256 MB. If there is another guest, more memory must be allocated again. As for storage, the guest's disk should not exceed the capacity of the host's storage. Virtual hard disk files can be stored on a flash drive, a regular hard drive, a network share, and so on, but the virtual disk's capacity cannot exceed that of the drive where it is saved.
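The host-memory accounting described above can be made concrete with a short sketch: each running guest's RAM is carved out of the host's, and an allocation that exceeds what is left must be refused. The function and its behavior on over-allocation are illustrative, not VirtualBox's actual policy; the numbers mirror the text's 1 GB / 256 MB example.

```python
# Host RAM accounting for guests, in megabytes.

def remaining_host_ram(host_mb, guest_allocs_mb):
    """Subtract each guest's RAM from the host's; refuse impossible allocations."""
    remaining = host_mb
    for alloc in guest_allocs_mb:
        if alloc > remaining:
            raise ValueError(f"cannot allocate {alloc} MB: only {remaining} MB left")
        remaining -= alloc
    return remaining

# The text's example: a 256 MB guest on a 1 GB (1024 MB) host leaves 768 MB.
print(remaining_host_ram(1024, [256]))  # 768
```

The same reasoning applies to virtual disk files, except that the constraint is the free capacity of the drive holding the disk image rather than the host's RAM.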
VI. Conclusion
Virtualization is a method of freeing something from physical dependence. For example, a virtual machine is a computer that exists only as a file on a hard disk. With virtualization, one physical computer can run many virtual computers at the same time. VMs divide into system VMs, in which a VM runs its own operating system, and process VMs, which exist only to run a single process. VMs are also divided by the level of virtualization: full virtualization simulates all features of the hardware, allowing software to run in a VM without modification; half (partial) virtualization simulates only some of the hardware features; and native virtualization is full virtualization combined with the help of hardware that supports virtualization.