Autonomic Computing: Vision or Reality

Ivo Neskovic
Department of Computer Science, City College - an International Faculty of the University of Sheffield
3, Leontos Sofou Street, Thessaloniki 54635, Greece
ineskovic@city.academic.gr

Abstract. Autonomic computing is a new computing paradigm which combines multiple disciplines of computer science with the sole aim of developing self-managing computer systems. Dating from early 2001, it is one of the most recent paradigm shifts, and as such it is still in a research-only phase, although it is attracting many business investors in the process. The following survey presents, in a clear and appropriately detailed manner, the problem of computer science which autonomic computing tries to solve, the details of the proposed solution, and some of the immediate and long-term benefits it will provide. Moreover, the survey outlines the basic principles which define a system as autonomic and presents a novel method of designing autonomic systems. Closing the survey are two sections which briefly outline the most prominent research projects on autonomic computing, together with a distilled summary of the major challenges which businesses will face in the process of adopting autonomic systems.

1 Introduction

At the beginning of the last decade, a prominent computing association predicted a problem similar to the US telephony problem of 1920 [1]. In 1920, the rapid adoption of the telephone and its frequent use in households generated an uncertainty that there would not be enough telephone operators to work the switchboards [2]. Similarly, IBM in 2001 pointed out the same problem regarding the widespread use of computer systems [2, 3]. Their prediction stated that by the end of the decade the IT sector would need 200 million workers to maintain a trillion devices [2]. To put the predicted numbers in perspective, the whole labor force of the United States is about 200 million workers.
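A quick back-of-the-envelope calculation shows why this prediction was alarming; the figures below simply restate the numbers quoted above, and the per-administrator ratio is derived from them for illustration:

```python
# Scale of IBM's 2001 prediction, as quoted in this survey:
# roughly one trillion managed devices versus 200 million IT workers.
devices = 1_000_000_000_000   # predicted number of devices to maintain
admins = 200_000_000          # predicted number of IT workers needed

# Even at a workforce the size of the entire US labor force,
# each administrator would be responsible for thousands of devices.
devices_per_admin = devices // admins
print(devices_per_admin)  # -> 5000
```

In other words, even the extreme staffing estimate still leaves each administrator with about 5,000 devices, which is the core of the argument for self-management.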
This prediction spawned the creation of the manifesto of autonomic computing, in which IBM first introduced and coined the term. According to [3], businesses, humans and devices require the constant services of the IT industry to maintain them. Furthermore, the complexity of the systems and their interdependencies creates a lack of professional IT personnel to manage said systems [4]. As human dependency on technology has grown exponentially, so too will the problem of managing computer systems grow.
2 The Problem

Small, fast and cheap computer systems have led to tremendous advances in information technology. The prime concern of computer scientists over the past few decades has been Moore's Law, while at the same time questioning how much longer the miniaturization of system components can continue. Although Moore's Law is a major issue, it is not currently the biggest issue threatening the IT industry. The major problem with computer systems is their ever-increasing complexity. They have become so complex that, if not for a crucial paradigm shift in designing and managing computer systems, the shortage of skilled IT administrators will cause Moore's Law to seem irrelevant [5].

This scenario is similar to the telephony crisis in the US in the 1920s [6]. It had been predicted that by 1980 every woman in the US would need to operate the manual switchboards; this would have been the case if not for the invention of the automatic branch exchanges. The estimates are that the demand for IT administrators will double every six years. It had been predicted that by the end of 2010, 200 million US citizens would have to be IT administrators [6]. The conclusion derived was that the problem will persist as long as systems do not develop self-managing capabilities which guarantee that their users are not exposed to the complexity of the systems.

The increasing complexity of computer systems, together with the lack of skilled professionals, indicates an inevitable demand for automating the plethora of functions associated with the maintenance of computer systems. This is the exact message that Paul Horn, Senior Vice President of Research at IBM, delivered to the world in 2001, and it has spawned a new generation of computing: autonomic computing.

3 The Solution

Naturally, the solution to the emerging problem, as proposed by P. Horn, approaches it from the most important perspective: that of the end user.
Ideally, the end user desires two primary properties from computer systems: 1) the interaction has to be intuitive, and 2) their involvement in the smooth running of the system has to be minimal to none [3].

As with many other scientific breakthroughs, computer science turned to nature in a quest for inspiration. The only truly autonomic system known to mankind is the human central nervous system. The autonomic functions of the central nervous system include sending control messages to the organs in the human body at a sub-conscious level, leaving the human being unaware of this process [3]. Temperature and heart rate control, breathing patterns and pupil dilation are some of the primary functionalities of the nervous system which are performed without any conscious thought. Equivalently, the same pattern can be applied to computing: a network of autonomic 'smart' computing components which provide the user with the desired functionality anytime, anywhere, without a conscious effort [3].
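The nervous-system analogy boils down to a closed feedback loop: measure, compare against a set-point, and act, with no "conscious" layer involved. The sketch below is an illustrative homeostasis-style loop; the function name, readings and thresholds are invented for the example and are not taken from the survey:

```python
# Illustrative sketch of the sub-conscious control pattern described above:
# a component keeps a measured value near a target without user involvement,
# much as the nervous system regulates body temperature.

def regulate(readings, target, tolerance):
    """Return the corrective action for each sensor reading."""
    actions = []
    for value in readings:
        if value > target + tolerance:
            actions.append("cool")   # counteract an excess
        elif value < target - tolerance:
            actions.append("heat")   # counteract a deficit
        else:
            actions.append("idle")   # within bounds: do nothing
    return actions

# Hypothetical temperature readings against a 37.0-degree set-point.
print(regulate([36.2, 37.1, 38.4], target=37.0, tolerance=0.5))
# -> ['heat', 'idle', 'cool']
```

The user of such a component only ever sees the maintained behaviour, never the loop itself, which is exactly the transparency the paradigm demands.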
The term autonomic computing, as coined by IBM, is also inspired by the autonomic nervous system. The purpose of this new computing paradigm is to transfer the definition of technology from computing to data [4]. The main idea is to allow users to access information from multiple distributed points, with great transparency as to how this is achieved [7, 8]. Moreover, this shift in paradigms, as presented by IBM, will need to change the focus of the IT industry from increasing processing speed and storage capacity to developing large distributed networks which are self-managing [9], self-diagnostic and transparent to use.

Adopting the autonomic computing paradigm directly implies that the design, implementation and support of computer systems must adopt three basic principles which are vital to the end user [10]:

Flexible. The system must be able to transfer data through a platform- and hardware-independent approach.
Accessible. The system must be always accessible, i.e. it has to be always 'on'.
Transparent. The system will function and will adapt to the user's needs without any involvement from the user's side [11].

4 The Benefits

Autonomic computing exists to diminish the spiraling need for skilled, IT-literate human resources and to steer computing into a new age which makes better use of its potential to assist higher-order thinking and decision making [12]. The benefits of the new paradigm will not only have an immediate effect, but also a long-term one.

Most importantly, reduced human involvement in maintaining complex computer systems will be one of the most immediate effects of autonomic computing on the IT industry, together with significantly decreased general costs [4]. The vision for long-term benefits is the ability for organizations and businesses to collaborate on complex problem solving with the aid of autonomic computing [6].
All the benefits of autonomic computing, as predicted by Paul Horn in the IBM manifesto of autonomic computing, can be summarized as:

Short-term IT related benefits
– Simplified user experience through a more responsive, real-time system.
– Cost savings - easily scalable.
– Scaled power, storage and costs that optimize usage across both hardware and software.
– Full use of idle processing power, including home PCs, through networked systems.
– Natural language queries allow deeper and more accurate returns.
– Seamless access to multiple file types. Open standards will allow users to pull data from all potential sources by re-formatting on the fly.
– Stability. High availability. High security system. Fewer system or network errors due to self-healing [13].
Long-term, Higher Order Benefits
– Realize the vision of enablement by shifting available resources to higher-order business.
– Embedding autonomic capabilities in client or access devices, servers, storage systems, middleware, and the network itself. Constructing autonomic federated systems.
– Achieving end-to-end service level management.
– Collaboration and global problem-solving. Distributed computing allows for more immediate sharing of information and processing power to use complex mathematics to solve problems.
– Massive simulation - weather, medical - complex calculations like protein folding, which require processors to run 24/7 for as long as a year at a time.

5 The Eight Elements

Being a fairly new paradigm, the current definition of autonomic computing is definitely not the final one. As the idea matures and research is done on the topic, the definition might shift, but the fundamental eight principles of autonomic computing must stay constant. Those are:

1. An autonomic computing system needs to 'know itself' - its components must also possess a system identity. Since a system can exist at many levels, an autonomic system will need detailed knowledge of its components, current status, ultimate capacity, and all connections to other systems to govern itself. It will need to know the extent of its owned resources, those it can borrow or lend, and those that can be shared or should be isolated.
2. An autonomic computing system must configure and reconfigure itself under varying (and in the future, even unpredictable) conditions [14]. System configuration or setup must occur automatically, as must dynamic adjustments to that configuration to best handle changing environments [15].
3. An autonomic computing system never settles for the status quo - it always looks for ways to optimize its workings [11]. It will monitor its constituent parts and fine-tune work-flow to achieve predetermined system goals.
4. An autonomic computing system must perform something akin to healing - it must be able to recover from routine and extraordinary events that might cause some of its parts to malfunction. It must be able to discover problems or potential problems, then find an alternate way of using resources or reconfiguring the system to keep functioning smoothly.
5. A virtual world is no less dangerous than the physical one, so an autonomic computing system must be an expert in self-protection. It must detect, identify and protect itself against various types of attacks to maintain overall system security and integrity.
6. An autonomic computing system must know its environment and the context surrounding its activity, and act accordingly [15]. It will find and generate rules for how best to interact with neighboring systems. It will tap available resources, even negotiate the use by other systems of its underutilized elements, changing both itself and its environment in the process - in a word, adapting [11].
7. An autonomic computing system cannot exist in a hermetic environment [15]. While independent in its ability to manage itself [16], it must function in a heterogeneous world and implement open standards; in other words, an autonomic computing system cannot, by definition, be a proprietary solution.
8. An autonomic computing system will anticipate the optimized resources needed while keeping its complexity hidden. It must marshal IT resources to shrink the gap between the business or personal goals of the user and the IT implementation necessary to achieve those goals, without involving the user in that implementation.

6 The Challenges

The effort needed for developing and implementing autonomic computing [10] is intimidating to say the least, requiring the convergence of experts from multiple technical and scientific disciplines, together with businesses and institutions, which will focus their knowledge and expertise on the challenge of autonomic computing [17].

The greatest challenge is the fact that the central focus behind autonomic computing is the holistic conceptualization of computing. The problem is not in the hardware which will support autonomic computing, but in creating the standards and technology which are a necessity for effective system interaction [18]. Furthermore, the systems should be able to execute business policies and to heal [13, 19] and protect themselves with no outside human intervention.

This results in redefining the way computer systems are designed as follows [20, 14]:
– The computing paradigm will change from one based on computational power to one driven by data.
– The way we measure computing performance will change from processor speed to the immediacy of the response.
– Individual computers will become less important than more granular and dispersed computing attributes.
– The economics of computing will evolve to better reflect actual usage - what IBM calls e-sourcing.

Moreover, the design of the distinctive components of an autonomic system will have to change and will include [20, 14]:
– Scalable storage and processing power to accommodate the shifting needs of individual and multiple autonomic systems.
– Transparency in routing and formatting data to variable devices.
– Evolving chip development to better leverage memory.
– Improving network monitoring functions to protect security, detect potential threats and achieve a level of decision-making that allows for the redirection of key activities or data [12].
– Smarter microprocessors that can detect errors and anticipate failures.

The above lists are just a part of the indications and emerging challenges of autonomic computing which are yet to be solved [5]. Few advances have been made on the above challenges in the last decade; however, the primary one remains, and that is to develop the open standards and interfaces of computing systems which will start realizing the vision of autonomic computing [6].

7 Academic Focus

Academia as a whole has dedicated a substantial amount of research to autonomic computing and related projects. Most of the work is still exploratory and experimental; however, there are a few projects which are well underway and have provided impressive results [17]. Following are short summaries of three of the most popular projects on autonomic computing, developed at some of the most prestigious universities, such as Carnegie Mellon.

7.1 Berkeley University of California: Recovery-Oriented Computing

The Recovery-Oriented Computing (ROC) project is a joint Berkeley/Stanford research project that is investigating novel techniques for building highly-dependable Internet services. ROC emphasizes recovery from failures rather than failure-avoidance. This philosophy is motivated by the observation that even the most robust systems still occasionally encounter failures due to human operator error, transient or permanent hardware failure, and software anomalies resulting from software aging.

7.2 Carnegie Mellon University: Self-securing Storage & Devices

Self-securing storage is an exciting new technology for enhancing intrusion survival by enabling the storage device to safeguard data even when the client operating system (OS) is compromised.
It capitalizes on the fact that storage servers (whether file servers, disk array controllers, or even IDE disks) run separate software on separate hardware. This opens the door to server-embedded security that cannot be disabled by any software (even the OS) running on client systems. Of course, such servers have a narrow view of system activity, so they cannot distinguish legitimate users from clever impostors. But, from behind the thin storage interface, a self-securing storage server can actively look for suspicious behavior, retain an audit log of all storage requests, and prevent both destruction and undetectable tampering of stored data. The latter goals are achieved by retaining all versions of all data: instead of over-writing old data when a write command is issued, the storage server simply creates a new version and keeps both. Together with the audit log, the server-retained versions represent a complete history of system activity from the storage system's point of view.

7.3 Georgia Institute of Technology: Qfabric

Distributed applications require end-to-end Quality of Service (QoS) management to ensure that 1) such applications achieve their goals in regard to functionality and performance, and 2) system resources (processors, networks, disks, memory, etc.) are shared in a manner that prevents applications from interfering with each other. QoS-awareness of applications is an approach to allow them to take part in resource management. This happens through interfaces that allow applications to specify their desired QoS or monitor the achieved QoS. The approach is to closely integrate applications and resource managers in the QoS management. This is achieved by tying applications and resource managers together through the same event-based control path. In other words, any control information exchanged between applications via the control path can be monitored by the underlying resource management. On the other hand, all resource management activities can be monitored by the application. Furthermore, applications and resource managers can interact freely to ensure optimal resource scheduling and adaptations.

8 Business Focus

To enable autonomic computing, businesses must be prepared to evolve almost every aspect of how they conduct business. Current approaches to managing internal operations, including defining computer and communications systems between employees and customers, will need to become more fluid, while still maintaining rigorous standards of privacy and security. These systems will also need to adapt in order to integrate with external systems outside of an individual business, and perhaps even with other systems around the world.
Additionally, the broad design and definition of technology systems will expand, changing interface design, standards, and the translation of business policies into IT policy. New business models will evolve to account for the changing economics of computer systems and IT services, making it easier for businesses to pay only for what they use.

Consider some major challenges faced by organizations embracing new e-business technologies:
– As a proliferating range of access devices becomes part of the corporate infrastructure, enterprises must transform both their IT systems and their business processes to connect with employees, customers and suppliers. No longer must they manage only desktops, workstations and PCs, but also PDAs, cell phones, pagers, and other network devices [21]. Annual compound growth of these devices is expected to exceed 38% over the next three years.
– Companies must also manage the very products they produce, such as network-enabled cars, washing machines, and entertainment systems, as part of this integrated system, extending the system concept well beyond traditional corporate boundaries. This demands a reliable infrastructure that can accommodate rapid growth and the ability to hide system complexity from its users - the company's customers, employees and suppliers.
– Emerging Web services standards promise to make delivery of valuable services over the Internet possible. In one recent Infoworld survey, close to 70% of respondents said they would be developing Web services strategies within the year, and roughly the same percentage felt Web services were likely to emerge as the next business model of the Internet. IT services, in particular, are a likely candidate for delivery in a utility-like fashion, a trend called e-sourcing. However, such services cannot become widespread unless IT systems become more automated and allow true economies of scale for e-sourcing providers.
– Customers must also gain enough confidence in this model to turn over critical business data and processes. It is unlikely they will develop this confidence if system reliability remains dependent on an inadequate supply of IT workers.
– The underlying technologies to enable greater automation of complex systems management [16] are ripe for innovation. The emergence of XML and a host of new standards provides just a glimpse of the glue needed to bind such self-governing systems, and advances in workload management and software agents promise possible incremental paths to autonomic computing.

9 Conclusion

Is it possible to meet the grand challenge of autonomic computing? It is, although it will take time and patience. However, we are likely to see less automated realizations of the autonomic computing vision long before the most challenging problems are solved.
Doing so will increase the value of these systems as autonomic computing technology improves and earns greater trust and acceptance. A vision this grandiose requires the involvement of experts from many areas of computer science, as well as disciplines which are beyond the traditional computing boundaries. More likely than not, for the vision to become a reality, the autonomic computing ecosystem must seek fresh knowledge from scientists studying nonlinear dynamics and complexity for new theories of emergent phenomena and robustness. Economists and e-commerce researchers must provide novel ideas and technologies about negotiation and supply webs. Psychologists and human factors researchers will provide insight into new goal-definition and visualization paradigms, together with ways to help humans develop their trust in autonomic systems.

As a final thought, it is crucial to bridge the language and cultural divides among the many disciplines needed for this endeavor and to harness the diversity with the goal of successful and universal approaches to autonomic computing. Autonomic computing is still a vision; however, reality is just around the corner.

References

1. Huebscher, M.C., McCann, J.A.: A survey of autonomic computing - degrees, models, and applications. ACM Computing Surveys (CSUR) 40(3) (2008)
2. Mainsah, E.: Autonomic computing: the next era of computing. Electronics and Communication Engineering (February) (2002) 8–9
3. Horn, P.: Autonomic Computing: IBM's Perspective on the State of Information Technology (2002)
4. Kephart, J., Chess, D.: The vision of autonomic computing. Computer (January) (2003) 41–50
5. Garcia, A., Batista, T., Rashid, A., Sant'Anna, C.: Autonomic computing: emerging trends and open problems. SIGSOFT Softw. Eng. Notes 30(4) (2005) 1–7
6. Dobson, S., Sterritt, R., Nixon, P., Hinchey, M.: Fulfilling the Vision of Autonomic Computing. IEEE Computer 43(1) (2010) 35–41
7. Gouin-Vallerand, C., Abdulrazak, B., Giroux, S., Mokhtari, M.: Toward autonomic pervasive computing. In: Proceedings of the 10th International Conference on Information Integration and Web-based Applications & Services, ACM (2008) 673–676
8. Salehie, M., Tahvildari, L.: Autonomic computing: emerging trends and open problems. In: Proceedings of the 2005 workshop on Design and evolution of autonomic application software, ACM (2005) 7
9. Levy, R., Nagarajarao, J., Pacifici, G., Spreitzer, A., Tantawi, A., Youssef, A.: Performance management for cluster based Web services. IFIP/IEEE Eighth International Symposium on Integrated Network Management (2003) 247–261
10. Trencansky, I., Cervenka, R., Greenwood, D.: Applying a UML-based agent modeling language to the autonomic computing domain. Companion to the 21st ACM SIGPLAN conference on Object-oriented programming systems, languages, and applications - OOPSLA '06 (2006) 521
11. Lee, K., Sakellariou, R., Paton, N., Fernandes, A.: Workflow adaptation as an autonomic computing problem. In: Proceedings of the 2nd workshop on Workflows in support of large-scale science, ACM (2007) 34
12. Ramirez, A., Knoester, D., Cheng, B., McKinley, P.: Applying genetic algorithms to decision making in autonomic computing systems. In: Proceedings of the 6th international conference on Autonomic computing, ACM (2009) 97–106
13. Ahmed, S., Ahamed, S.I., Sharmin, M., Haque, M.M.: Self-healing for autonomic pervasive computing. Proceedings of the 2007 ACM symposium on Applied computing - SAC '07 (2007) 110
14. Melcher, B., Mitchell, B.: Towards an autonomic framework: Self-configuring network services and developing autonomic applications. Intel Technology Journal 8(4) (August 2004) 279–290
15. Solomon, B., Ionescu, D., Litoiu, M., Mihaescu, M.: A real-time adaptive control of autonomic computing environments. Proceedings of the 2007 conference of the center for advanced studies on Collaborative research - CASCON '07 (2007) 124
16. Boutilier, C., Das, R., Kephart, J.O., Tesauro, G., Walsh, W.E.: Cooperative Negotiation in Autonomic Systems Using Incremental Utility Elicitation. In: Nineteenth Conference on Uncertainty in Artificial Intelligence (2003) 89–97
17. Kephart, J.: Research challenges of autonomic computing. In: Proceedings of the 27th International Conference on Software Engineering, ICSE 2005 (2005) 15–22
18. Smirnov, M.: Autonomic communication: research agenda for a new communication. Fraunhofer FOKUS White Paper (November 2004) 1–21
19. Cheng, J., Cheng, W., Nagpal, R.: Robust and self-repairing formation control for swarms of mobile agents. In: Proceedings of the National Conference on Artificial Intelligence. Volume 20., AAAI Press / MIT Press (2005) 59
20. IBM: White Paper: An architectural blueprint for autonomic computing (2005)
21. Conti, M., Kumar, M.: Opportunities in opportunistic computing. Computer (2009)
