ZERONE 2010 - Annual Technical Journal, IOE, Nepal
ZERONE 2010 is the sixth annual technical journal published by the students in the Department of Electronics and Computer Engineering, Institute of Engineering, Pulchowk, Nepal.

  • 1. Message from the Head of Department

It gives me great pleasure that the students of the Bachelor's degree in Electronics and Computer Engineering are once again bringing out an issue of "ZERONE", an annual technological journal. ZERONE contains academic and engineering articles and the latest developments in the field of digital engineering. I am very much impressed with the quality of the articles. I would like to thank the contributors and the members of the ZERONE committee for compiling and editing such a wonderful array of articles.

The effect of ZERONE may be small, but small differences do bring large changes. I am assured that readers will be inspired by the articles and will contribute further to the field of "Information Technology", in which Nepal desperately needs to excel.

Lastly, I would like to congratulate the ZERONE editorial board for their active and sincere effort on this technological journal. This type of academically enriching work is what the Institute of Engineering strives for, and it will always be appreciated and encouraged by our Department of Electronics and Computer Engineering. I hope readers will enjoy the articles and find them useful.

Shashidhar Ram Joshi, Ph.D.
Professor and Head of Department
  • 2. Message from the President

I am really glad to know about the continuity of ZERONE, an annual technical journal published by the students of the Department of Electronics and Computer Engineering, Pulchowk Campus. New technologies play a key role in the development of the country. As the field of science and technology is ever changing, it is indispensable for students to keep themselves updated with fast-changing technologies. A technical journal of this kind provides students with informative news about the new faces of the changing world and creates enthusiasm among students in their related fields.

Lastly, my hearty congratulations to the ZERONE team for their excellent work in bringing out this issue.

Prakash Sapkota
President, FSU
  • 3. ZERONE
An annual technical journal published by the students of the Department of Electronics and Computer Engineering, Pulchowk Campus, Institute of Engineering
Volume 6 • 2067/2010

Advisors: Dr. Shashidhar Ram Joshi, Dr. Subarna Shakya
Chief: Shristi Nhuchhe Pradhan
Co-ordinator: Nar Kaji Gurung
Editors: Bikram Adhikari, Presha Joshi, Ruchin Singh, Vandana Dhakal
Layout & Design: Saurab Rajkarnikar, Nar Kaji Gurung, Kailash Budhathoki
Printed at: Rajmati Press, Nakabahil, Lalitpur, G.P.O. Box: 2512, Tel: 5534527

few words...

The ZERONE team is delighted to bring the sixth issue of the journal into the hands of its keen readers. Even though the leadership has been handed down from one batch to another, we have persistently tried to maintain the quality and the standards of the issues.

This issue brings forward an array of information-rich articles. There are articles based on nascent technologies like chemical computing and 4G mobile technology. Contemporary technologies, such as cloud computing, photovoltaics and migration to IPv6, have also been included. There is an interesting article on building a humanoid robot with LEGO Mindstorms. Likewise, we have focused on articles giving useful insight into exciting projects which were undertaken successfully by the students.

These kinds of technical journals definitely help and encourage students to take part in ongoing research and to innovate new ideas. The team would like to thank all the organizations for providing us with the financial support needed to publish a freely distributed journal of this scale. We would like to extend our gratitude to our colleagues and our teachers who have supported ZERONE through articles and valuable suggestions. Without you all, ZERONE wouldn't exist at all. Finally, we would like to wish the very best to the new team. Keep the spirit alive!
  • 4. Table of Contents

Cutting Edge Technologies
  Chemical Computing: A New Era in Technology ... 1 (Prabhat Dahal, 2062 Electronics)
  4G Mobile Technology ... 4 (Sudha Lohani, 2063 Electronics)
  Be Ready! HTML 5 is coming ... 6 (Ganesh Tiwari, 2063 Computer)
  Quantum Teleportation: The Promises It Holds ... 11 (Barsha Paudel, 2063 Electronics)

Contemporary Technologies
  Cloud Computing ... 14 (Nar Kaji Gurung, 2063 Computer)
  VPN - Solution to Remotely Connected Intranet ... 20 (Ranjan Shrestha, 2062 Electronics)
  Connecting to Matlab ... 23 (Sugan Shakya, 2062 Electronics)
  Photovoltaic ... 25 (Dipendra Kumar Deo, 2062 Electronics)
  Electromagnetic Interference ... 28 (Rupendra Maharjan, 2062 Electronics)
  Migration to IPV6 ... 30 (Mithlesh Chaudhary, 2062 Electronics)
  Magnetic Stripe Cards ... 33 (Pushkar Shakya, 2063 Computer)

Robotics
  My First Humanoid Robot: An Experience worth Sharing with Freshman and Sophomore ... 36 (Bikram Adhikari, DOECE, Pulchowk Campus)

Project Ideas
  Spectrum Analysis and its Benefits ... 41 (Prajay Singh Silwal, 2062 Electronics)
  SIMULINK Model of an Inverted Pendulum System Using a RBF Neural Network Controller ... 44 (Bikram Adhikari, DOECE, Pulchowk Campus)
  IRIS Recognition and Identification System ... 50 (Ruchin Singh / Sanjana Bajracharya / Saurab Rajkarnikar, 2062 Computer)

  • 5. RFID ... 53 (Ashish Shrestha, 2062 Electronics)
  Symphony and MVC Architecture ... 56 (Suraj Maharjan / Ram Kasula / Prasanna Man Bajracharya, 2062 Computer)

Computer Operation & Programming
  How to create a Symbian Installation Source using Visual C++ 6.0 ... 61 (Kishoj Bajracharya, 2062 Computer)
  Implementing Virtual Hosting ... 64 (Ganesh Tiwari / Biraj Upadhyaya, 2063 Computer)
  • 6. Cutting-Edge Technologies

CHEMICAL COMPUTING: A New Era in Technology
Prabhat Dahal, 062 Electronics

All known life forms process information on a bio-molecular level. Examples are: signal processing in bacteria (e.g., chemotaxis), gene expression and morphogenesis, defense coordination and adaptation in the immune system, broadcasting of information by the endocrine system, or finding a short route to a food source by an ant colony. This kind of information processing is known to be robust, self-organizing, adaptive, decentralized, asynchronous, fault-tolerant, and evolvable. Computation emerges out of an orchestrated interplay of many decentralized, relatively simple components (molecules). We now expect to make available a technology that allows us to create computational systems with the properties of their biological counterparts. A couple of approaches already use the chemical metaphor (e.g., Gamma, MGS, amorphous computing, and reaction-diffusion processors).

A chemical computer, also called a reaction-diffusion computer, BZ (Belousov-Zhabotinsky) computer or gooware computer, is an unconventional computer based on a semi-solid chemical "soup" where data is represented by varying concentrations of chemicals. The computations are performed by naturally occurring chemical reactions. So far it is still at a very early experimental stage, but it may have great potential for the computer industry. The simplicity of this technology is one of the main reasons why it could in the future turn into a serious competitor to machines based on conventional hardware. A modern microprocessor is an incredibly complicated device that can be destroyed during production by no more than a single airborne microscopic particle. In contrast, a cup of chemicals is a simple and stable component that is cheap to produce.

In a conventional microprocessor, the bits behave much like cars in city traffic; they can only use certain roads, they have to slow down and wait for each other in crossing traffic, and only one driving lane at a time can be used. In a BZ solution, the waves move in all thinkable directions in all dimensions: across, away and against each other. These properties might make a chemical computer able to handle billions of times more data than a traditional computer. An analogy would be the brain; even if a microprocessor can transfer information much faster than a neuron, the brain is still much more effective for some tasks because it can work with a much higher amount of data at the same time.

Historical background

Originally, chemical reactions were seen as a simple move towards a stable equilibrium, which was not very promising for computation. This was changed by a discovery made by Boris Belousov, a Soviet scientist, in the 1950s. He created a chemical reaction between different salts and acids that swung back and forth between being yellow and clear, because the concentrations of the different components changed up and down in a cyclic way. He noted that in a mix of potassium bromate, cerium(IV) sulfate, propanedioic acid and citric acid in dilute sulfuric acid, the ratio of the concentrations of the cerium(IV) and cerium(III) ions oscillated, causing the colour of the solution to oscillate between yellow and colorless. This is due to the cerium(IV) ions being reduced by propanedioic acid to cerium(III) ions, which are then oxidized back to cerium(IV) ions by bromate(V) ions. At the time this was considered impossible because it seemed to go against the second law of thermodynamics, which states that in a closed system the entropy will only increase over time, causing the components in the mixture to distribute themselves until equilibrium is reached and making any changes in the concentration
  • 7. ZERONE 2010, Cutting-Edge Technologies

impossible. But modern theoretical analyses show that sufficiently complicated reactions can indeed exhibit wave phenomena without breaking the laws of nature. (A convincing, directly visible demonstration was achieved by Anatol Zhabotinsky with the Belousov-Zhabotinsky reaction, which shows spiraling colored waves.)

Basic principles

The wave properties of the BZ reaction mean it can move information in the same way as any other wave. This still leaves the need for computation, performed in conventional microchips using binary code by transmitting and changing ones and zeros through a complicated system of logic gates. To perform any conceivable computation, it is sufficient to have NAND gates. (A NAND gate has two input bits. Its output is 0 if both bits are 1; otherwise it is 1.) In the chemical computer version, logic gates are implemented by concentration waves blocking or amplifying each other in different ways.

Current research

Chemical computers can exploit several different kinds of reaction to carry out computation. For example, so-called conformation computers use polymer molecules that change shape in response to a particular input. Metabolic computing exploits the kinds of reactions typically found inside a living cell. In 1989, it was demonstrated how light-sensitive chemical reactions could perform image processing. This led to an upsurge in the field of chemical computing. Andrew Adamatzky at the University of the West of England demonstrated simple logic gates using reaction-diffusion processes. Furthermore, he had theoretically shown how a hypothetical "2+ medium" modeled as a cellular automaton can perform computation. The breakthrough came when he read a theoretical article by two scientists who illustrated how to make logic gates for a computer using the balls on a billiard table as an example. As in the case of the AND gate, two balls represent two different bits.
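The claim above that NAND gates suffice for any conceivable computation is easy to verify outside the chemistry: every other Boolean gate can be composed from NAND alone. A quick sketch in JavaScript (purely illustrative; it has nothing to do with any particular chemical implementation):

```javascript
// NAND is functionally complete: all other Boolean gates
// can be built by composing NAND with itself.
const nand = (a, b) => (a && b) ? 0 : 1;

const not = (a)    => nand(a, a);           // NOT x = x NAND x
const and = (a, b) => not(nand(a, b));      // AND = NOT(NAND)
const or  = (a, b) => nand(not(a), not(b)); // OR via De Morgan's law
const xor = (a, b) => {                     // XOR from four NANDs
  const n = nand(a, b);
  return nand(nand(a, n), nand(b, n));
};
```

In the same way, a soup of wave-implemented NAND gates is, in principle, enough to build any digital circuit.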
If a single ball shoots towards a common colliding point, the bit is 1. If not, it is 0. A collision will only occur if both balls are sent toward the point, which is then registered in the same way as when two electronic 1's give a new, single 1. In this way the balls work together like an AND gate. Adamatzky's great achievement was to transfer this principle to the BZ chemical reaction and replace the billiard balls with waves. If it occurs, the meeting of two waves in the solution creates a third wave, which is registered as a 1. He has tested the theory in practice and has already documented that it works. At the moment, he is cooperating with some other scientists to produce several thousand chemical versions of logic gates that are going to become a form of chemical pocket calculator.

One of the problems with the present version of this technology is the speed of the waves; they only spread at a rate of a few millimeters per minute. According to Adamatzky, this problem can be eliminated by placing the gates very close to each other, to make sure the signals are transferred quickly. Another possibility could be new chemical reactions in which waves propagate much faster. If these teething problems are overcome, a chemical computer will offer clear advantages over an electronic computer.

Latest advancements

1. Reaction-diffusion computing

This type of computation exploits waves travelling through a beaker of chemicals to carry out useful calculations. These waves are the information carriers in the computer. They are created by triggering chemical reactions in the soup at specific points. As waves propagate from different areas they collide and interact, effectively processing the information they hold. At the site of their interaction a point with a new chemical
  • 8. concentration is created, which is in effect an answer. With a beaker full of thousands of waves travelling and interacting with each other, complex computational problems can be solved. An increasing number of people in the computer industry are starting to realise the potential of this technology. IBM is at the moment testing out new ideas in the field of microprocessing with many similarities to the basic principles of a chemical computer.

2. Robot gel

Although the process sounds complicated and esoteric, it can be applied to almost all computational problems. According to Dr Adamatzky, reaction-diffusion processors are universal computers and can solve all types of problems. As a result, computer giant IBM is already interested in the technology. Although slower than silicon, its key advantage is that it is cheap to produce and incredibly robust. Working with chemist Ben De Lacy Costello, Dr Adamatzky has already produced logic gates using the technique that can be used to make chemical "circuitry".

Here is an excerpt from a BBC news report on chemical computing that made a sensation some time back, where Dr Adamatzky says: "Ultimately, we will produce a general purpose chemical chip. The chip would be capable of mathematical operations such as adding and multiplying numbers. I believe we can take the research even further to create intelligent, amorphous robots. In these, silicon circuitry would be of no use. Assume we have fabricated an artificial amoeba: a gel-based robot without any fixed shape, capable of splitting into several smaller robots. Conventional silicon circuits will not work because they have a rigid architecture. But as chemical computers are an amorphous blob, they could be cut in half and both halves would continue functioning independently. You cannot cut your laptop in half and expect both parts to function properly; you can do this with reaction-diffusion processors."

3.
Nano-chemical computation

Scientists have achieved the goal of creating a nano-scale "chemical brain" that can transmit instructions to multiple (at present as many as 16) molecular "machines" simultaneously. The new molecular processor means that nano-chemical computation may soon be possible, ushering in a new era of super-light, super-fast, more versatile computer processing capabilities and, by extension, robotics.

The BBC reports that the machine is made from 17 molecules of the chemical duroquinone. Each one is known as a "logic device". They each resemble a ring with four protruding spokes that can be independently rotated to represent four different states. One duroquinone molecule sits at the centre of a ring formed by the remaining 16. All are connected by chemical bonds, known as hydrogen bonds. The structure is just 2 nanometers in diameter, and can produce 4 billion different permutations of chemical transmission of "information". This allows for a far more efficient distribution of information than a traditional binary circuit.

The researchers say the structure of the "chemical brain" was inspired by the activity of glial cells in the human brain. Glial cells are non-neuronal "glue" or connective cells. In the brain, they are estimated to outnumber neurons by 10 to 1 and assist in the chemical transmission of neural signals. Their ability to transmit signals in parallel, or to multiple tangent cells at once, reportedly gave rise to the 17-molecule duroquinone design.

In recent years, the ability of research teams and engineers to keep pace with "Moore's law", which predicts that computing speed (by way of the reduction in size of processing units or the increasing density of circuits possible in a given space) will double roughly every 18 months, has been tested due to heat-diffusion constraints and the related energy bleed.
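The "4 billion different permutations" figure quoted above follows directly from the geometry described: 16 outer molecules, each independently settable to one of 4 rotational states, give 4^16 combinations. A quick arithmetic check:

```javascript
// 16 outer duroquinone molecules, each with 4 rotational states,
// give 4^16 distinct configurations of the "chemical brain".
const states = 4;
const molecules = 16;
const permutations = states ** molecules; // 4^16 = 2^32
console.log(permutations); // 4294967296, i.e. about 4.3 billion
```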
Nano-chemical processors would enable an entirely new structure for the smallest-scale computing circuits, and could lead to serious advances in the nature and capabilities of microprocessors, which are far larger in size and could therefore pack many times more such circuits than at present. The researchers have reportedly already moved beyond the initial 17-molecule design, capable of processing 16 instructions simultaneously, to devices capable of 256 simultaneous transmissions. They are also designing a molecular device that would be capable of up to 1024 simultaneous transmissions.
  • 9. 4G refers to the fourth generation of cellular wireless and is a successor to the 3G and 2G standards. Though different regions have diversified approaches towards the next-generation mobile communication technology (called 4th generation mobile, or 4G Mobile), the future trend is the same: convergence among fixed, mobile and wireless communications. A 4G system is expected to upgrade existing communication networks and to provide a comprehensive and secure IP-based solution where facilities such as voice, data and streamed multimedia will be provided to users on an "Anytime, Anywhere" basis, and at much higher data rates compared to previous generations.

Currently, the 3G mobile service is available in the world. In the next stage, from around 2010, Japanese mobile operators will upgrade to "Long Term Evolution (LTE)" services. LTE technology is sometimes also termed 3.9G or Super-3G. 4G technologies enable still higher data speeds, and are currently under development and testing. It is currently not possible to predict when exactly 4G services will be introduced to the markets; however, it could be around 2015 or later.

Objectives

• 4G is being developed to accommodate the QoS (Quality of Service) and rate requirements set by forthcoming applications like wireless broadband access, Multimedia Messaging Service (MMS), video chat, mobile TV, High Definition Television (HDTV) content, and Digital Video Broadcasting (DVB).
• A spectrally efficient system (in bits/s/Hz and bits/s/Hz/site).
• High network capacity: more simultaneous users per cell.
• Reduced blips in transmission when a device moves between areas covered by different networks.
• A data rate of at least 100 Mbit/s between any two points in the world.
• Smooth handoff (handover) across heterogeneous networks.
An instance of handover: when the phone is moving away from the area covered by one cell and entering the area covered by another cell, the call is transferred to the second cell in order to avoid call termination when the phone gets outside the range of the first cell.

• Compatible operation with existing wireless standards.

Key 4G technologies

• OFDMA: modulation can also be employed as a multiple access technology (Orthogonal Frequency Division Multiple Access). In this case, each OFDM symbol can transmit information to/from several users using a different set of sub-carriers (sub-channels).
• MIMO: Multiple Input Multiple Output, to attain ultra-high spectral efficiency. MIMO uses signal multiplexing between multiple transmitting antennas (space multiplexing) and time or frequency.
• Adaptive radio interface: there will be two radio processor modules, connected by a digital interconnection system, to conform to a predetermined radio communications channel patching arrangement.
• Modulation and spatial processing, including multi-antenna and multi-user MIMO.
• The cooperative relaying concept, which exploits the inherent spatial diversity of the relay channel by allowing mobile terminals to co-operate.

4G Mobile Technology
Sudha Lohani, 063 Electronics
  • 10. • Access schemes: schemes like Orthogonal FDMA (OFDMA), Single Carrier FDMA (SC-FDMA), Interleaved FDMA and Multi-Carrier Code Division Multiple Access (MC-CDMA) are gaining more importance for the next-generation systems. For the next-generation UMTS (Universal Mobile Telecommunication System), OFDMA is being considered for the downlink. By contrast, IFDMA is being considered for the uplink.
• Multimedia service delivery, service adaptation and robust transmission: audio and video coding are scalable. For instance, a video flow can be split into three flows which can be transported independently. The first flow provides availability; the other two provide quality and definition.

Advantages

• In the 4G mobile era, access to mobile services will evolve into an open Mobile Cloud and will be fully open to any developers and providers. Thus, non-wireless industries, such as Google, Microsoft and Oracle, can provide services for their mobile users.
• The mobile device system architecture will be open in order to converge multiple RTTs (radio transmission technologies) in one and the same device. Like a laptop computer, the future smartphone will be based on open wireless architecture (OWA) technology, which means that when you change wireless standards, you do not need to change your phone. This is totally different from current multi-standard phones, which have a closed system architecture and do not let users remove the unused RTT modules. In the OWA system, the RTT card can be changed to switch wireless standards, or multiple wireless standards can be integrated in one RTT SIM card. Based on this OWA platform, you can integrate your home phone, office phone and mobile phone into one common personal device: it is more than just a phone. In fact, this 4G mobile device is a system to bring the world into the hand, and can be called iHand: the World in Hand.
  • Any portable consumer electronics device can become a mobile phone by inserting the OWA-powered mobile RTT card(s). This approach truly converges mobile wireless technology with computer technology.

The first commercial launch of 3G was also by NTT DoCoMo, in Japan on October 1, 2001, and it slowly spread over the world, while the technology arrived in Nepal on May 17, 2007. 4G is expected to be in the market by 2015, but it seems we will have to wait for a while before we get to enjoy the service.
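The 4G objectives listed earlier pair a target data rate with spectral efficiency in bits/s/Hz, and the two are linked by channel bandwidth. As a rough illustration (the 20 MHz channel width is an assumed figure for this example, not taken from the article), hitting the 100 Mbit/s objective in a 20 MHz channel requires an average spectral efficiency of 5 bits/s/Hz:

```javascript
// Hypothetical figures for illustration only:
// required spectral efficiency = data rate / channel bandwidth.
const targetRate = 100e6; // 100 Mbit/s (a 4G objective)
const bandwidth  = 20e6;  // 20 MHz channel (assumed for the example)
const requiredEfficiency = targetRate / bandwidth; // bits/s/Hz
console.log(requiredEfficiency); // 5
```

This is why the objectives stress spectrally efficient schemes such as OFDMA and MIMO: higher bits/s/Hz means higher rates without needing more spectrum.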
  • 11. To give users more flexibility and interoperability, and to enable more interactive and exciting websites and applications, HTML 5 introduces and enhances a wide range of features including form controls, APIs, multimedia, structure, and semantics. HTML 5 is said to be becoming a game-changer in Web application development, one that might even make obsolete such plug-in-based rich Internet application (RIA) technologies as Adobe Flash, Microsoft Silverlight, and Sun JavaFX.

Work on HTML 5, originally referred to as Web Applications 1.0, was initiated in 2004 and is currently being carried out in a joint effort between the W3C HTML WG (Working Group) and the Web Hypertext Application Technology Working Group (WHATWG). Many key players are participating in the W3C effort, including representatives from the four major browser vendors: Apple, Mozilla, Opera, and Microsoft; and a range of other organizations and individuals. The specification is still a work in progress and has quite a long way to go before completion.

In addition to specifying markup, HTML 5 introduces a number of APIs that help in creating Web applications. These can be used together with the new elements introduced for applications:

• A 2D drawing API which can be used with the new canvas element.
• An API for playing video and audio which can be used with the new video and audio elements.
• An API that enables offline Web applications.
• An API that allows a Web application to register itself for certain protocols or media types.
• An editing API in combination with a new global contenteditable attribute.
• A drag & drop API in combination with a draggable attribute.
• An API that exposes the history and allows pages to add to it, to prevent breaking the back button.
• Cross-document messaging.

Existing Document Object Model (DOM) interfaces are extended and de facto features documented.
HTML 5 is defined in a way that is backwards compatible with the way user agents handle deployed content. To keep the authoring language relatively simple for authors, several elements and attributes are not included, as outlined in the other sections of this document, such as presentational elements that are better dealt with using CSS.

1. Structure

The HTML serialization refers to the syntax that is inspired by the SGML syntax from earlier versions of HTML, but defined to be more compatible with the way browsers actually handle HTML in practice. An example document that conforms to the HTML 5 syntax:

<!doctype html>
<html>
  <head>
    <meta charset="UTF-8">
    <title>Example document</title>
  </head>
  <body>
    <p>Example paragraph</p>
  </body>
</html>

Be Ready! HTML 5 is coming
Ganesh Tiwari, 063 Computer

The XML serialization refers to the syntax using XML 1.0 and namespaces, just like XHTML 1.0. An example document that conforms to the XML syntax of HTML 5:
  • 12. <?xml version="1.0" encoding="UTF-8"?>
<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <title>Example document</title>
  </head>
  <body>
    <p>Example paragraph</p>
  </body>
</html>

2. Replacement of <div> tag

The use of div elements is widespread largely because current versions of HTML 4 lack the necessary semantics for describing page parts more specifically. HTML 5 addresses this issue by introducing new elements for representing each of these different sections. The div elements can be replaced with the new elements: header, nav, section, article, aside, and footer. The markup for the above document could look like the following:

<body>
  <header>...</header>
  <nav>...</nav>
  <article>
    <section>
      ...
    </section>
  </article>
  <aside>...</aside>
  <footer>...</footer>
</body>

3. Embedded media

Video on the Web is booming, but it's almost all proprietary. YouTube uses Flash, Microsoft uses Windows Media®, and Apple uses QuickTime. HTML currently lacks the necessary means to successfully embed and control multimedia itself. Whether any one format and codec will be preferred is still under debate. Probably Ogg Theora support at least will be strongly recommended, if not required. Support for proprietary, patent-encumbered formats will be optional.

The simplest way to embed a video is to use a video element and allow the browser to provide a default user interface. The controls attribute is a boolean attribute that indicates whether or not the author wants this UI on or off by default. The optional poster attribute can be used to specify an image which will be displayed in place of the video before the video has begun playing.

<video src="video.ogg" id="video" controls="true" poster="poster.jpg">
</video>
<p>
  <button type="button" onclick="video.play();">Play</button>
  <button type="button" onclick="video.pause();">Pause</button>
  <button type="button" onclick="video.currentTime = 0;"><< Rewind</button>
</p>

[Figure: HTML 4 structure vs. HTML 5 structure]
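The inline onclick handlers above call the media API directly. The same logic can be factored into a small helper; this is a sketch that works on any HTMLMediaElement (such as the element with id "video" above), using only the standard play(), pause() and paused members:

```javascript
// Toggle playback of a media element via the HTML 5 media API.
// Returns the new state so callers can update a button label, etc.
function togglePlayback(video) {
  if (video.paused) {
    video.play();
    return "playing";
  }
  video.pause();
  return "paused";
}
```

In a page this could be wired up as, for example, onclick="togglePlayback(document.getElementById('video'))".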
  • 13. A complementary audio element is also proposed. Most of the attributes are common between the video and audio elements, although for obvious reasons the audio element lacks the width, height, and poster attributes.

<audio src="music.mp3" controls="true" autoplay="autoplay">
  <a href="music.mp3">Download song</a>
</audio>

figure can be used to associate a caption with some embedded content, such as a graphic or video.

<figure>
  <img src="pic.png">
  <legend>Example</legend>
</figure>

4. Canvas

Canvas is used for dynamic, scriptable rendering of bitmap graphics on the fly. It was initially introduced by Apple for use inside their own Mac OS X WebKit component, powering features like Dashboard widgets and the Safari browser. Some browsers already support the <canvas> tag, like Firefox and Opera. The <canvas> tag is only a container for graphics; you must use a script to actually paint the graphics. Canvas consists of a drawable region defined in HTML code with height and width attributes.

<canvas id="a_canvas" width="400" height="300">
</canvas>

JavaScript code may access the area through a full set of drawing functions similar to other common 2D APIs, thus allowing for dynamically generated graphics. Both 2D and 3D graphics will be possible with the help of the API, which is expected to be popular for online gaming, animations and image composition.

5. MathML and SVG

The HTML syntax of HTML 5 allows MathML and SVG elements to be used inside a document. E.g. a very simple document using some of the minimal syntax features could look like:

<!doctype html>
<title>SVG in text/html</title>
<p>
  A green circle:
  <svg>
    <circle r="50" cx="50" cy="50" fill="green"/>
  </svg>
</p>

6. Interactivity

HTML 5 also goes under the rubric of Web Applications 1.0.
Several new elements are focused on more interactive experiences for Web pages:
• details
• datagrid
• menu
• command
These elements all have the potential to change what is displayed based on user action and choice, without loading a new page from the server.

datagrid
The datagrid element serves the role of a grid control. It's intended for trees, lists, and tables that can be updated by both the user and scripts. By contrast, traditional tables are mostly intended for static data.

<datagrid> <table> <tr><td>Jones</td><td>Allison</td><td>A-</td><td>B+</td><td>A</td></tr> <tr><td>Smith</td><td>Johnny</td><td>A</td><td>C+</td><td>A</td></tr> ... </table> </datagrid>

What distinguishes this from a regular table is that the user can select rows, columns, and cells; collapse rows, columns, and cells; edit cells; delete rows, columns, and cells; sort the grid; and otherwise interact with the data directly in the browser, on the client. JavaScript code may monitor the updates.
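No browser implements datagrid at the time of writing, so any script-facing API is still speculative. The client-side behaviour the proposal promises, though, such as sorting or editing rows without a round trip to the server, is ordinary data manipulation. A minimal JavaScript sketch over the rows from the example table (the function names are ours, not part of any specification):

```javascript
// Grid rows as plain arrays, mirroring the example <table> above.
const rows = [
  ['Jones', 'Allison', 'A-', 'B+', 'A'],
  ['Smith', 'Johnny', 'A', 'C+', 'A'],
];

// Sort on one column entirely on the client, without touching the server --
// the kind of direct interaction the datagrid proposal describes.
function sortRows(rows, column) {
  return [...rows].sort((a, b) => String(a[column]).localeCompare(String(b[column])));
}

// Edit a single cell, returning a new grid (the original is left untouched,
// so a script monitoring updates can diff old against new).
function editCell(rows, row, column, value) {
  return rows.map((r, i) => (i === row ? r.map((c, j) => (j === column ? value : c)) : r));
}

console.log(sortRows(rows, 1)[0][1]); // "Allison" sorts before "Johnny"
```

In a real page these functions would re-render the grid's DOM after each call; the data handling itself needs nothing beyond what browsers already shipped.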
Menu and command
The menu element has actually been present in HTML since at least version 2. It was deprecated in HTML 4, but it comes roaring back with new significance in HTML 5. In HTML 5, a menu contains command elements, each of which causes an immediate action. The label attribute gives a title for the menu. For example,

<menu type="popup" label="Edit"> <command onclick="undo()" label="Undo"/> <command onclick="redo()" label="Redo"/> <command onclick="cut()" label="Cut"/> <command onclick="copy()" label="Copy"/> <command onclick="paste()" label="Paste"/> <command onclick="delete()" label="Clear"/> </menu>

Menus can be nested inside other menus to create hierarchical menus.

7. Web forms 2
The Web Forms 2 specification adds lots of features for authoring forms: basic client-side validation, new input types, and repetition blocks. Several JavaScript implementations are under development. Some examples of Web Forms 2 are:

<input type="email" value="a@b">
<input pattern="[0-9]{10}" value="1234567891">
<input type="number" min="7" max="25" step="2">
<input type="date" required>

Other elements
The following elements have been introduced for better structure:
• dialog can be used to mark up a conversation like this:
<dialog> <dt> hello, how r u <dd> fine and you? <dt> me too, good </dialog>
• embed is used for plugin content.
• mark represents a run of marked (highlighted) text. It is not the same as the <em> tag. You searched for <mark>marker</mark>
• meter represents a measurement, such as disk usage or user ratings. Rating: <meter min="0" max="5" value="3">
• progress represents the completion of a task, such as downloading, or a series of expensive operations. We can use the progress element to display the progress of a time-consuming function in JavaScript. <progress value="128" max="1024">12.5%</progress>
• time represents a date and/or time, which addresses an accessibility issue.
It can be used in Microformats like hCalendar: <time datetime="2007-08-02T23:30Z"> Fri, Aug 03 2007 at 09:30</time>
• details represents additional information or controls which the user can obtain on demand.
• datalist, together with a new list attribute for input, is used to make comboboxes:
<input list="browsers"> <datalist id="browsers"> <option value="Safari"> <option value="Internet Explorer"> <option value="Opera"> <option value="Firefox"> </datalist>
• keygen represents a control for key pair generation.
• bb represents a user agent command that the user can invoke.
• output represents some type of output, such as from a calculation done through scripting.
• ruby, rt and rp allow for marking up ruby annotations.

The input element's type attribute now has the following new values: datetime, datetime-local, date, month, week, time, number, range, email, url, search, color. The idea of these new types is that the user agent can provide the user interface, such as a calendar date picker or integration with the user's address book, and submit a defined format to the server. It gives the user a better experience, as the input is checked before it is sent to the server, meaning there is less time to wait for feedback.

At last
Work on HTML 5 is rapidly progressing, yet it is still expected to continue for several years. Due to the requirement to produce test cases and achieve interoperable implementations, current estimates have work finishing in around ten to fifteen years. During this process, feedback from a wide range of people, including web designers and developers, CMS and authoring tool vendors, and browser vendors, is vital to ensure its success. Everyone is not only welcome, but actively encouraged, to contribute feedback on HTML 5. There are numerous venues through which you may contribute. You may join the W3C's HTML WG and subscribe/contribute to the HTML WG mailing lists, WHATWG mailing lists, or wiki.
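Pages can adopt these new input types before every browser supports them, because a user agent that does not recognise a type value silently treats the control as type="text". That makes feature detection a simple round trip: set the type attribute and read the type property back. A sketch; the createInput parameter is our own injection point so the logic can also run outside a browser, where you would pass () => document.createElement('input'):

```javascript
// Returns true if the user agent understands the given input type.
// Unknown types are reflected back as "text", so a round trip suffices.
function supportsInputType(type, createInput) {
  const input = createInput();
  input.setAttribute('type', type);
  return input.type === type;
}

// A minimal stand-in for an older browser that only knows "text".
function legacyInput() {
  return {
    type: 'text',
    setAttribute(name, value) {
      if (name === 'type' && value === 'text') this.type = value;
    },
  };
}

console.log(supportsInputType('date', legacyInput)); // false: fall back to a plain text box
```

When detection returns false, a script can attach its own date picker or validation to the plain text box that the browser rendered instead.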
Teleportation is the name given by science fiction writers to the feat of making an object or person disintegrate in one place while a perfect replica appears somewhere else. Teleportation involves de-materializing an object at one point, and sending the details of that object's precise atomic configuration to another location, where it is reconstructed. What this means is that time and space could be eliminated from travel: we could be transported to any location instantly, without actually crossing a physical distance. Until now, this has only been something to read about in sci-fi novels and watch with thrill and excitement in sci-fi movies. Imagination leads to great innovations, and so scientists are working right now on such a method of travel, trying to convert this imagination into reality by combining properties of telecommunications and transportation to achieve a system called teleportation. But quantum teleportation is not the same as the teleportation most of us know from science fiction, where an object (or person) in one place is "beamed up" to another place where a perfect copy is replicated. In quantum teleportation, two photons or ions (for example) are entangled in such a way that when the quantum state of one is changed, the state of the other also changes, as if the two were still connected. This enables quantum information to be teleported if one of the photons/ions is sent some distance away. It works by entangling two objects, like photons or ions. The first teleportation experiments involved beams of light. Once the objects are entangled, they're connected by an invisible wave, like a thread or umbilical cord. That means when something is done to one object, it immediately happens to the other object, too. Einstein called this "spooky action at a distance."
Although the first proof-of-principle demonstration was reported in 1997 by the Innsbruck and Rome groups, until this recent experiment long-distance teleportation had only been realized in fibre, over lengths of hundreds of metres. And those distances were accomplished with fiber channels, which help preserve the photons' state. But ongoing research and experiments have continuously tried to bring this quantum teleportation concept to a whole new level. Recently, in what promises to be a milestone experiment led by Jian-Wei Pan and Cheng-Zhi Peng at the University of Science and Technology of China and Tsinghua University (Beijing, China), quantum information was 'transmitted' through the open air between two stations 16 kilometers (10 miles) apart. The previous record was a few hundred meters using fiber optic cable. At a distance of 10 or more kilometers, this almost mysterious form of communication, called "spooky action at a distance" by Einstein, becomes possible between Earth and orbiting satellites. What's so spooky is the nature of quantum entanglement: how separated particles can share quantum properties as if they were one particle. Entangled photon pairs were generated for this experiment at the teleportation site using a semiconductor, a blue laser beam, and a crystal of beta-barium borate (BBO). The pairs of photons were entangled in the spatial modes of photon 1 and polarization modes of photon 2. The research team designed two types of telescopes to serve as optical transmitting and receiving antennas.

Quantum Teleportation! The Promises it Holds
Barsha Paudel, 063 Electronics

It's one thing to imagine this kind of 2=1 condition
for distances no bigger than an atom, but over kilometers? The researchers set up two 'stations': "Alice", located in a suburb of Beijing, and, 16 kilometers away on the other side of a reservoir, "Bob". Alice and Bob each received one of a pair of entangled photons. Photons, the equivalent of electrons for light, are often used for entanglement experiments as they are good for transmission and can be manipulated by specialized lasers. At the Alice station, one entangled photon was measured in combination with an unknown qubit (a quantum unit of information); in a sense it was charged up by a maximally applied entangling force, using both spatial and polarization (laser) methods. The result, a more highly entangled particle, was sent via telescope to Bob. At the Bob station, that photon then also projected the status of the unknown qubit, as it did at Alice. The mumbo jumbo means that the state of one photon (Alice) is instantly reflected in the state of the other entangled photon (Bob). These researchers found that even at this distance the photon at the receiving end still responded to changes in state of the photon remaining behind. The qubit is the piece of quantum information that is passed, so this is a form of communication. This experiment required a great deal of groundbreaking work, including specialized telescopes designed for the open-air transfer, active feedback control for transmission stability, and synchronized real-time information transfer. The result was information fidelity approaching 89%, good enough for a lot of quantum jazz. That does not mean this is ready for real-world applications. It does mean that practical applications can be envisioned. Between now and the time when quantum teleportation is used for communication, there needs to be a lot of work done on the size, cost, and reliability of the equipment needed to generate and control the entanglement effect.
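The protocol the article describes (share an entangled pair between Alice and Bob, measure Alice's photon jointly with the unknown qubit, send the two classical measurement bits, and let Bob apply corrections) can be checked numerically. The following toy state-vector simulation models the textbook mathematics, not the experiment's optics; amplitudes are kept real for brevity, which suffices because every gate used here has a real matrix:

```javascript
// Toy simulation of standard quantum teleportation.
// Three qubits: q0 holds the unknown state a|0>+b|1>; q1 (Alice) and
// q2 (Bob) start in |0>. q0 is the most significant bit of the index.
const N = 3;

// Apply a 2x2 gate matrix m to qubit q of a 2^N-element state vector.
function apply1(state, q, m) {
  const out = state.slice();
  const bit = 1 << (N - 1 - q);
  for (let i = 0; i < state.length; i++) {
    if ((i & bit) === 0) {
      const j = i | bit;
      out[i] = m[0][0] * state[i] + m[0][1] * state[j];
      out[j] = m[1][0] * state[i] + m[1][1] * state[j];
    }
  }
  return out;
}

// Controlled-NOT: flip the target bit wherever the control bit is set.
function cnot(state, control, target) {
  const cb = 1 << (N - 1 - control), tb = 1 << (N - 1 - target);
  return state.map((_, i) => ((i & cb) ? state[i ^ tb] : state[i]));
}

const H = [[Math.SQRT1_2, Math.SQRT1_2], [Math.SQRT1_2, -Math.SQRT1_2]];

// Teleport a|0>+b|1> from q0 to q2, given Alice's measurement outcome (m0, m1).
function teleport(a, b, m0, m1) {
  let s = new Array(8).fill(0);
  s[0b000] = a; s[0b100] = b;           // q0 carries the unknown qubit
  s = cnot(apply1(s, 1, H), 1, 2);      // entangle q1 and q2 (the shared pair)
  s = apply1(cnot(s, 0, 1), 0, H);      // Alice's Bell-basis measurement circuit
  // Project onto outcome (m0, m1) for q0, q1 and normalize what Bob holds.
  let q2 = [0, 0], norm = 0;
  for (let i = 0; i < 8; i++) {
    if (((i >> 2) & 1) === m0 && ((i >> 1) & 1) === m1) {
      q2[i & 1] = s[i]; norm += s[i] * s[i];
    }
  }
  q2 = q2.map(x => x / Math.sqrt(norm));
  if (m1) q2 = [q2[1], q2[0]];          // Bob applies X if told to...
  if (m0) q2 = [q2[0], -q2[1]];         // ...then Z
  return q2;                            // Bob now holds a|0>+b|1>
}

console.log(teleport(0.6, 0.8, 1, 1)); // ~[0.6, 0.8], whichever outcome occurred
```

Whatever outcome Alice obtains, Bob ends up holding the original amplitudes, but only after receiving her two classical bits; that dependence is why teleportation cannot carry information faster than light.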
The entangled photons will need better control. Charged electrons (ions) are easier to manipulate, for example to create encryption patterns, but something will be needed to achieve a similar level of manipulation for photons. Nevertheless, this is a mind-opening achievement. Now, why is this a big deal? Well, in the past scientists have only been able to teleport information across a small span of a few meters, and even then they had to do so through some kind of conduit like a fiber optic cable. What happened recently was an open-air quantum teleportation across ten miles. An optical free-space link is highly desirable for extending the transfer distance, because of its low atmospheric absorption for certain ranges of wavelength. Scientists in China have succeeded in teleporting information between photons further than ever before. They transported quantum information over a free space distance
of 16 km (10 miles), much further than the few hundred meters previously achieved, which brings us closer to transmitting information over long distances without the need for a traditional signal. This has proved to be an unprecedented achievement. Quantum teleportation is central to the practical realization of quantum communication, and with a distance of 16 km, which is greater than the effective aerosphere thickness of 5-10 km, the group's success could pave the way for experiments between a ground station and a satellite, or between two ground stations with a satellite acting as a relay. The experiments confirm the feasibility of space-based quantum teleportation, and represent a giant leap forward in the development of quantum communication applications, which could be possible on a global scale in the near future. So the promises of quantum teleportation are huge and hard to miss. We never know; soon, walking could be so 2010.

An umbrella that lets you surf the Internet while walking in the rain takes mobile electronics to a new level. Called Pileus, the Internet umbrella sports a large screen, which drapes across the inside of the umbrella, and a camera, digital compass, GPS, and motion sensor, all located in the umbrella's handle. So far, the umbrella, which is only in prototype form, has two capabilities: photo-sharing through Flickr and 3-D map navigation. To operate this handheld electronic umbrella, you just rotate the grip of the handle.
The umbrella was created at Keio University by Takashi Matsumoto and Sho Hashimoto, who have now co-founded the company Pileus LLC. [Source:]

Ordinary Things Turned Hi-Tech! The Internet Umbrella
ZERONE 2010 19 Contemporary Technologies

Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model promotes availability and is composed of five essential characteristics, three delivery models, and four deployment models. Put another way, cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. A user can have as much or as little of a service as they want at any given time, and the services are fully managed by the provider; the consumer needs nothing but an IP-enabled device (PC, laptop, cell phone) and Internet access. Cloud computing supports Grid computing ("a form of distributed computing whereby a 'super and virtual computer' is composed of a cluster of networked, loosely-coupled computers, acting in concert to perform very large tasks") by quickly providing physical and virtual servers on which the grid applications can run. It also supports non-grid environments, such as a three-tier Web architecture running standard or Web 2.0 applications. Cloud computing can be confused with utility computing (the "packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility such as electricity") and autonomic computing ("computer systems capable of self-management").

Cloud Computing
Nar Kaji Gurung, 063 Computer

[Figure: Some vendors supplying cloud computing]
Introduction to cloud computing
Cloud computing infrastructures can allow enterprises to achieve more efficient use of their IT hardware and software investments. Cloud computing is an example of an ultimately virtualized system, and a natural evolution for data centers that employ automated systems management, workload balancing, and virtualization technologies. The Cloud makes it possible to launch Web 2.0 applications quickly and to scale up applications as much as needed, when needed. The platform supports traditional Java™ and Linux, Apache, MySQL, PHP (LAMP) stack-based applications, as well as new architectures such as Map Reduce and the Google File System, which provide a means to scale applications across thousands of servers instantly. Cloud computing users can avoid capital expenditure (CapEx) on hardware, software, and services when they pay a provider only for what they use. Consumption is billed on a utility basis (resources consumed, like electricity or telephone) or a subscription basis (time based, like a newspaper), with little or no upfront cost. Other benefits of this time-sharing style approach are low barriers to entry, shared infrastructure and costs, low management overhead, and immediate access to a broad range of applications. Users can generally terminate the contract at any time (thereby avoiding return-on-investment risk and uncertainty), and the services are often covered by service level agreements (SLAs) with financial penalties.

Essential characteristics
On-demand self-service: A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed, automatically, without requiring human interaction with each service's provider.
Ubiquitous network access: Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).
Location-independent resource pooling: The provider's computing resources are pooled to serve all consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. The customer generally has no control over, or knowledge of, the exact location of the provided resources, but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.
Rapid elasticity: Capabilities can be rapidly and elastically provisioned to quickly scale up, and rapidly released to quickly scale down. To the consumer, the capabilities available for provisioning often appear to be infinite and can be purchased in any quantity at any time.
Measured service: Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
Note: Cloud software takes full advantage of the cloud paradigm by being service oriented, with a focus on statelessness, low coupling, modularity, and semantic interoperability.

Delivery models of cloud computing
Cloud Software as a Service (SaaS): The capability provided to the consumer is to use the provider's applications running on a cloud infrastructure, accessible from various client devices through a thin client interface such as a Web browser (e.g., web-based email). The consumer does not manage or control the underlying cloud infrastructure, network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.
In the software-as-a-service cloud model, the vendor supplies the hardware infrastructure and the software product, and interacts with the user through a front-end portal.
Cloud Platform as a Service (PaaS): Platform-as-a-service in the cloud is defined as a set of software and product development tools hosted on the provider's infrastructure. Developers create applications on the provider's platform over the Internet. PaaS providers may use APIs, website portals or gateway software installed on the customer's computer. (an outgrowth
of and GoogleApps are examples of PaaS. Developers need to know that, currently, there are no standards for interoperability or data portability in the cloud. Some providers will not allow software created by their customers to be moved off the provider's platform.
Cloud Infrastructure as a Service (IaaS): The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources on which the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure, but has control over operating systems, storage, deployed applications, and possibly select networking components (e.g., firewalls, load balancers).

Deployment models (types of cloud)
Private cloud: The cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party, and may exist on premise or off premise. Private clouds are a good option for companies dealing with data protection and service-level issues.
Community cloud: The cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party, and may exist on premise or off premise.
Public cloud: Public clouds are run by third parties, and jobs from many different customers may be mixed together on the servers, storage systems, and other infrastructure within the cloud. End users don't know whose jobs may be running on the same server, network, or disk as their own. The cloud infrastructure is made available to the general public or a large industry group, and is owned by an organization selling cloud services.
Hybrid cloud: Hybrid clouds combine the public, private, and community cloud models.
Hybrid clouds offer the promise of on-demand, externally provisioned scale, but add the complexity of determining how to distribute applications across these different environments.

[Figure: Cloud computing types]

Architecture of cloud computing
It typically involves multiple cloud components communicating with each other over application programming interfaces, usually web services.
A cloud computing system can be divided into two sections: the front end and the back end. The front end is the side the computer user, or client, sees. The back end is the "cloud" section of the system. Most of the time, servers don't run at full capacity, which means there's unused processing power going to waste. It's possible to fool a physical server into thinking it's actually multiple servers, each running with its own independent operating system. The technique is called server virtualization. By maximizing the output of individual servers, server virtualization reduces the need for more physical machines. On the back end of the system are the various computers, servers and data storage systems that create the "cloud" of computing services. A central server monitors traffic and client demands to ensure everything runs smoothly. It follows a protocol and uses a special kind of software called middleware, which allows networked computers to communicate with each other. In a cloud computing system there's likely to be high demand for storage space. Cloud computing systems need to keep at least two copies of the data they store (redundancy), and they support RAID architectures.

Layers of cloud computing
1. Application: A cloud application leverages the Cloud in its software architecture, often eliminating the need to install and run the application on the customer's own computer, thus alleviating the burden of software maintenance, ongoing operation, and support. For example: peer-to-peer / volunteer computing (BitTorrent, BOINC projects, Skype), web applications (Facebook), software as a service (Google Apps, SAP and Salesforce), software plus services (Microsoft Online Services).
2.
Client: A cloud client consists of computer hardware and/or computer software which relies on cloud computing for application delivery, or which is specifically designed for delivery of cloud services and which, in either case, is essentially useless without it. For example: mobile (Android, iPhone, Windows Mobile), thin clients (CherryPal, Zonbu, gOS-based systems), thick clients / Web browsers (Microsoft Internet Explorer, Mozilla Firefox).
3. Infrastructure: Cloud infrastructure, typically a platform virtualization environment, delivered as a service. For example: full virtualization (GoGrid, Skytap), grid computing (Sun Grid), management (RightScale), compute (Amazon Elastic Compute Cloud), platform ( )
4. Platform: A cloud platform, such as Platform as a Service, the delivery of a computing platform and/or solution stack as a service, facilitates deployment of applications without the cost and complexity of buying and managing the underlying hardware and software layers. For example: web application frameworks such as Java Google Web Toolkit (Google App Engine), Python Django (Google App Engine), Ruby on Rails (Heroku), .NET (Azure Services Platform), web hosting (Mosso), proprietary ( )
5. Service: A cloud service includes "products, services and solutions that are delivered and consumed in real-time over the Internet". For example, Web Services may be accessed by other cloud computing components, by software (e.g., software plus services), or by end users directly. Specific examples include: identity (OAuth, OpenID), payments (Amazon Flexible Payments Service, Google Checkout, and PayPal), mapping (Google Maps, Yahoo! Maps), and search (Alexa, Google Custom Search, and Yahoo! BOSS).

[Figure: Cloud computing sample architecture]

6. Storage: Cloud storage involves the delivery of data storage as a service, including database-like services, often billed on a utility computing basis, e.g., per gigabyte per month. For example: databases ( Google App Engine's BigTable datastore), network attached storage (MobileMe
iDisk, Nirvanix CloudNAS), synchronization (Live Mesh Live Desktop component, MobileMe push functions), web services (Amazon Simple Storage Service, Nirvanix SDN), and backup (Backup Direct, Iron Mountain Inc. services). Cloud storage can be delivered as a service to cloud computing, or can be delivered to end points directly.

Cloud computing applications
The applications of cloud computing are practically limitless. Why would anyone want to rely on another computer system to run programs and store data? Here are just a few reasons:
• Clients would be able to access their applications and data from anywhere at any time.
• It could bring hardware costs down. Cloud computing systems would reduce the need for advanced hardware on the client side.
• Cloud computing systems give organizations company-wide access to computer applications. The companies don't have to buy a set of software or software licenses for every employee.
• Servers and digital storage devices take up space. Some companies rent physical space to store servers and databases because they don't have it available on site. Cloud computing gives these companies the option of storing data on someone else's hardware, removing the need for physical space on the front end.
• Corporations might save money on IT support. Streamlined hardware would, in theory, have fewer problems than a network of heterogeneous machines and operating systems.
• If the cloud computing system's back end is a grid computing system, then the client could take advantage of the entire network's processing power. The cloud system would tap into the processing power of all available computers on the back end, significantly speeding up the calculation.
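The utility-style billing described in the introduction (pay only for the resources consumed, with little or no upfront cost) comes down to simple metering arithmetic. A toy sketch; the rates and resource names here are invented for illustration, not any provider's actual pricing:

```javascript
// Hypothetical utility-style rates: pay only for what you use.
const RATES = {
  cpuHours: 0.10,    // $ per server-hour
  storageGb: 0.15,   // $ per GB-month
  bandwidthGb: 0.08, // $ per GB transferred
};

// Bill a month's metered usage; unmetered resources are rejected.
function monthlyBill(usage, rates) {
  return Object.entries(usage).reduce((total, [resource, amount]) => {
    if (!(resource in rates)) throw new Error('unmetered resource: ' + resource);
    return total + amount * rates[resource];
  }, 0);
}

// No usage, no cost -- the opposite of buying hardware up front.
console.log(monthlyBill({}, RATES));                               // 0
console.log(monthlyBill({ cpuHours: 720, storageGb: 50 }, RATES)); // 79.5
```

The point of the sketch is the shape of the model: cost scales with metered consumption rather than with provisioned capacity, which is what makes "terminate the contract at any time" economically meaningful.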
Criticism and disadvantages of cloud computing
• Since cloud computing does not allow users to physically possess the storage of their data (the exception being the possibility that data can be backed up to a user-owned storage device, such as a USB flash drive or hard disk), it leaves responsibility for data storage and control in the hands of the provider.
• Cloud computing has been criticized for limiting the freedom of users and making them dependent on the cloud computing provider, and some critics have alleged that it is only possible to use applications or services that the provider is willing to offer. Thus, The London Times compares cloud computing to the centralized systems of the 1950s and 60s, in which users connected through "dumb" terminals to mainframe computers. Typically, users had no freedom to install new applications and needed approval from administrators to achieve certain tasks. Overall, it limited both freedom and creativity. The Times argues that cloud computing is a regression to that time.
• Similarly, Richard Stallman, founder of the Free Software Foundation, believes that cloud computing endangers liberties because users sacrifice their privacy and personal data to a third party. He stated that cloud computing is "simply a trap aimed at forcing more people to buy into locked, proprietary systems that would cost them more and more over time."

Companies using cloud computing
Google: Google has opened its cloud to outside developers. Google's Application Engine is a free service that lets anyone build and run web applications on Google's very own distributed infrastructure.
"Google Application Engine gives you access to the same building blocks that Google uses for its own applications, making it easier to build an application that runs reliably, even under heavy load and with large amounts of data.” In particular, the platform offers: • Dynamic web serving, with full support of common web technologies “Cloud computing infrastructures are next generation platforms that can provide tremendous value to companies of any size.”
• Persistent storage (powered by Bigtable and GFS, with queries, sorting, and transactions)
• Automatic scaling and load balancing
• Google APIs for authenticating users and sending email
• A fully featured local development environment

Amazon: Amazon offers the Amazon Web Services, including its Elastic Compute Cloud (for processing power), Simple Storage Service (for storage), and SimpleDB (for database queries).

Microsoft: Microsoft has offered developers a quick peek at an unreleased Windows Mobile client for its fledgling "Live Mesh" service. Live Mesh has been described as a "software-plus-service platform." Intended to integrate desktop and mobile operating systems, it provides synchronization and remote access services similar to those offered by products.

Conclusion
In today's global competitive market, companies must innovate and get the most from their resources to succeed. This requires enabling their employees, business partners, and users with the platforms and collaboration tools that promote innovation. Cloud computing infrastructures are next generation platforms that can provide tremendous value to companies of any size. They can help companies achieve more efficient use of their IT hardware and software investments, and provide a means to accelerate the adoption of innovations. Cloud computing increases profitability by improving resource utilization. Costs are driven down by delivering appropriate resources only for the time those resources are needed. Cloud computing has enabled teams and organizations to streamline lengthy procurement processes.
Cloud computing enables innovation by alleviating the need for innovators to find resources to develop, test, and make their innovations available to the user community. Innovators are free to focus on the innovation rather than the logistics of finding and managing the resources that enable it.

Source:
Let's start with a familiar scenario. Suppose XYZ Bank has its central office in Kathmandu (for obvious reasons), with branch offices in major cities all over Nepal. Whenever you carry out a transaction at any office, every branch office is informed and the database is updated accordingly. But how? You may answer that there is connectivity between the branch offices, or that all branch offices communicate with the central office. Yes, that is obvious. The connectivity could be wired or wireless. Wired connectivity could be a dedicated leased line forming a private Wide Area Network (WAN). It provides better quality, reliability, and speed, but laying optical fiber or other cabling such as coaxial or twisted pair (for ISDN) would cost the bank a huge amount, which would not be a wise decision. The alternative is wireless connectivity: transmitters and receivers (with antennas), working by direct line of sight or indirectly, can create a private Wireless Wide Area Network (WWAN) connecting all the offices. But wireless communication is not very reliable. Radio and microwaves interfere with noise (unwanted signals) in the environment, distorting the original signals, and bad weather degrades quality and speed. Installing wireless systems is also costly. For a large company the capital may not be a problem, though other technical difficulties remain. But there is a much better approach, one we can access cheaply: the VPN, an acronym for Virtual Private Network. We all know the Internet, which is expanding rapidly. The Internet is more like an infrastructure: most parts of the country and the world have global reach through it.
VPN: Solution to a remotely connected intranet
by Ranjan Shrestha, 062 Electronics

Dedicated communication satellites give global reach to each
and every part of the world. Hence, instead of using dedicated leased lines (wired) or wireless links over a large geographic area, a VPN uses a cheap public network, such as the Internet, as a backbone to create a virtually circuited private network through which the company (here, the bank) stays connected with its branch offices. A well-designed VPN can also reduce operational cost, increase security, extend over a much larger geographic area, build a concept of global networking (imagine branch offices in other countries), and connect to an extranet (another company's private network).

Types of VPN

1. Remote access VPN
This is a user-to-LAN connection used by a company whose employees need to connect to the private network from various remote locations. Generally, an Enterprise Service Provider (ESP) sets up a Network Access Server (NAS) and provides remote users with client software. The remote users can then communicate with the corporate network through the NAS using the client software. A remote access VPN permits secure, encrypted connections between a company's private network and remote users through a third-party service provider.

2. Site-to-site VPN
With dedicated equipment and encryption algorithms, a company can connect multiple fixed sites over a public network such as the Internet. A site-to-site VPN can be of the following types:

Intranet based: If a company has one or more remote locations that it wishes to join in a single private network, it can create an intranet VPN to connect LAN to LAN.

Extranet based: When a company's network needs to communicate with another company's network (a partner, supplier, or customer), they can build an extranet VPN that connects LAN to LAN and allows the companies to work in a shared environment.

Tunneling concept
Most VPNs rely on tunneling to communicate with private networks that reach across the Internet.
A tunneling protocol provides a secure path through an untrusted network. Tunneling is the process of placing an entire packet within another packet and sending it over a network, so the actual packet (the information) is never disclosed on the public network. Tunneling uses three different protocols: the carrier protocol, used by the network the information travels over; the encapsulating protocol, such as Generic Routing Encapsulation (GRE) or IPSec, which is wrapped around the original information; and the passenger protocol, the original data (IPX, IP) being carried.

Security implementation in VPN
VPNs are designed using Internet resources, and we all know the public Internet is not very secure. Hence, a well-designed VPN uses several methods to keep the connection and the data secure.

1. Firewall
A firewall is a part of a computer system or network designed to block unauthorized access while permitting authorized communication. It can be implemented in hardware, software, or a combination of both. The gateway (which routes packets outside the local network) can be configured to permit or deny access on certain ports. For example, Cisco's 1700 series routers can be upgraded with an appropriate IOS (Internetwork Operating System) image to include firewall capabilities.

2. Encryption
Encryption is the process of transforming information using an algorithm to make it unreadable to anyone except those possessing special knowledge, usually referred to as a key. It is used to protect data in transit. Most encryption systems belong to one of two categories: symmetric-key encryption and public-key encryption. In symmetric-key encryption, each computer has a secret key that it can use to encrypt a packet of information before it is sent over the network to another computer. Both communicating partners must know the key used for encryption and decryption.
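As a toy illustration of the symmetric-key idea (not a real cipher — production VPNs use algorithms such as AES; the key and payload below are made up), both ends share one secret key, and the very same operation encrypts and decrypts:

```python
def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Toy symmetric 'cipher': XOR each byte with a repeating key.
    The same function both encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

shared_key = b"secret"                 # both VPN endpoints must know this
packet = b"transfer NPR 5000"          # plaintext payload
ciphertext = xor_crypt(packet, shared_key)          # sender encrypts
assert ciphertext != packet                          # unreadable in transit
assert xor_crypt(ciphertext, shared_key) == packet   # receiver decrypts
```

The design point the toy captures is symmetry: a single shared secret serves both directions, which is exactly why key distribution is the hard part of symmetric schemes.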
In public-key encryption, each party uses a combination of a private
key and a public key. The private key is kept secret, while the public key may be distributed widely. Messages are encrypted with the recipient's public key and can only be decrypted with the corresponding private key. The keys are related mathematically, but the private key cannot easily be derived from the public key.

3. IPSec
Internet Protocol Security (IPSec) is a protocol used to secure IP communications by authenticating and encrypting each IP packet of a data stream. Its two encryption modes are tunnel and transport. IPSec can be used to protect (encrypt) data flowing between a PC and a router, a PC and a server, between gateways, or between a firewall and a gateway. IPSec has a dual-mode, end-to-end security scheme operating at Layer 3 (the network layer) of the OSI (Open Systems Interconnection) model.

[Image by Cisco Systems, Inc.]

4. AAA server
AAA (Authentication, Authorization and Accounting) servers are used for more secure access in a VPN environment. Before a session is established, the request is proxied to the AAA server, which authenticates (establishes who is trying to access), authorizes (grants access according to predefined settings), and accounts (for security auditing, billing, or reporting).
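The mathematical relationship between the two keys can be sketched with a toy RSA example (tiny textbook primes, for illustration only; real systems use keys of 2048 bits or more):

```python
# Toy RSA: p and q are secret primes; only n = p*q is published.
p, q = 61, 53
n = p * q                      # public modulus (3233)
phi = (p - 1) * (q - 1)        # 3120, known only to the key owner
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent: modular inverse of e (Python 3.8+)

message = 65
ciphertext = pow(message, e, n)     # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)   # decrypt with the private key (d, n)
assert recovered == message
```

Recovering d from (e, n) alone would require factoring n, which is what makes deriving the private key from the public key impractical at real key sizes.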
CONNECTING TO MATLAB
by Sugan Shakya, 062 Electronics

MATLAB is a high-performance language for technical computation of complex algorithms. It provides an easy means to implement algorithms for digital signal processing, image processing, signal and communication models, and so on. Today most circuit designs are carried out on microcontrollers (µC) and FPGAs, and they usually involve digital computation; for these kinds of computation on a computer, MATLAB is a powerful tool. For digital input and output we usually prefer a microcontroller (cheap and easy), but when speed and parallel computing are the chief concerns, FPGAs are the best alternative (though expensive and complex). A computer interface still finds its worth when you are running a complex algorithm that would be tedious to implement on a µC or FPGA. Accessing hardware through MATLAB is justified for projects that perform complex algorithms on digital data.

How to connect to a hardware port of the computer using MATLAB:
1. Create a digital I/O (DIO) object.
2. Add lines to it (we may treat the device object as a container for lines).
3. Characterize each line and port (specify whether it is input, output, or bidirectional).

Parallel port
The parallel port is a 25-pin connector (also available as 36-pin) intended for 8-bit parallel data transmission at TTL logic levels:
• 8 output pins accessed via the DATA port
• 5 input pins (one inverted) accessed via the STATUS port
• 4 output pins (three inverted) accessed via the CONTROL port
• 8 ground pins

The PC supports up to three parallel ports, assigned the labels LPT1, LPT2, and LPT3, with base addresses (in hex) 378, 278, and 3BC, respectively. The register addresses of the ports are:

Printer   Data Port   Status   Control
LPT1      0x0378      0x0379   0x037a
LPT2      0x0278      0x0279   0x027a
LPT3      0x03bc      0x03bd   0x03be
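To see how a byte written to the DATA register maps onto the eight output pins, here is a small Python sketch (the D0-D7 pin naming is a common convention, assumed here; writing 12 sets D2 and D3 high):

```python
def data_pins(value: int) -> dict:
    """Map a byte written to the parallel-port DATA register to the
    logic level (0 or 1) of each data pin D0..D7."""
    assert 0 <= value <= 0xFF
    return {f"D{bit}": (value >> bit) & 1 for bit in range(8)}

pins = data_pins(12)                       # 12 = 0b00001100
assert pins["D2"] == 1 and pins["D3"] == 1
assert sum(pins.values()) == 2             # only two pins driven high
```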
Normally there is one parallel port, LPT1, on a PC, but you can check yours using Device Manager. We can access the parallel port through MATLAB with the following code:

parallelPort = digitalio('parallel','LPT1');
hwlines = addline(parallelPort,0:7,'out');
% or: hwlines = addline(parallelPort,0:7,'in');
% or: addline(parallelPort,0:7,{'in','in','in','in','out','out','out','out'});

Line direction is specified as per our requirement. We can write or read a value on the port as follows:

% Write:
val = 12;
putvalue(parallelPort,val)
% Read:
valbin = getvalue(parallelPort)   % a binary vector
val = binvec2dec(valbin)

MATLAB can also run a timer. Suppose we need to monitor the value at the port every 5 seconds for one hour:

% portTimer.m
parallelPort = digitalio('parallel','LPT1');
addline(parallelPort,0:7,'in');
set(parallelPort,'TimerFcn',@findValue);   % callback defined below
set(parallelPort,'TimerPeriod',5.0);       % fire every 5 seconds
start(parallelPort)
pause(3600)                                % keep monitoring for one hour
delete(parallelPort)
clear parallelPort

% findValue.m
function findValue(obj,event)
% read the current value at the port
val = getvalue(obj)

Serial port
Serial ports carry two signal types: data signals and control signals. To support these signal types, as well as the signal ground, the RS-232 standard defines a 25-pin connection; however, most PCs and UNIX platforms use a 9-pin connection. In fact, only three pins are required for serial port communication: one for receiving data, one for transmitting data, and one for the signal ground. The logic levels of the serial port are defined by the RS-232 standard and are not TTL compatible. The serial data format includes one start bit, between five and eight data bits, and one stop bit. Usually there is one serial port at the rear of the computer, labeled COM1, at address 03F8 hex. To display all properties and their current values:

s = serial('COM1');
get(s)

Before you can write or read data, both the serial port object and the device must have identical communication settings.
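The same poll-on-a-timer pattern can be sketched in Python (the `read_port` function below is a hypothetical stand-in for the hardware read; the intervals are shortened so the sketch finishes quickly):

```python
import threading

def read_port() -> int:
    """Stand-in for reading the parallel port (hypothetical)."""
    return 12

samples = []

def poll(stop: threading.Event, period: float) -> None:
    # re-arm a one-shot Timer each period, like MATLAB's TimerFcn
    if stop.is_set():
        return
    samples.append(read_port())
    threading.Timer(period, poll, args=(stop, period)).start()

stop = threading.Event()
poll(stop, period=0.05)        # sample every 50 ms (5 s in the article)
threading.Event().wait(0.2)    # let a few samples accumulate
stop.set()                     # the next pending callback sees this and exits
assert len(samples) >= 2
```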
Configuring serial port communications involves specifying values for properties such as the baud rate:

s = serial('COM1');       % create a serial port object
set(s,'BaudRate',19200)   % configure the port's baud rate
fopen(s)                  % connect to the device attached to the port
% ... read data and write data here ...
fclose(s)
delete(s)
clear s

We can write binary data using the fwrite function and read it using the fread function; text is written with fprintf and read with fscanf.
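With the frame format described above (one start bit, eight data bits, one stop bit, commonly written "8N1"), each byte costs ten bits on the wire, so the effective throughput at 19200 baud can be estimated with a back-of-the-envelope sketch (ignoring flow control):

```python
baud = 19200             # bits per second on the line
frame_bits = 1 + 8 + 1   # start bit + 8 data bits + stop bit ("8N1")
bytes_per_second = baud // frame_bits
assert bytes_per_second == 1920   # 10 line bits carry each data byte
```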
Photovoltaic
by Dipendra Kumar Deo, 062 Electronics
[Figure: the sun with a solar cell]

A photovoltaic (PV) system converts sunlight into electricity: sunlight energy generates free electrons in a semiconductor device to produce electricity. The sun supplies all the energy that drives natural global systems and cycles. Each wavelength in the solar spectrum corresponds to a frequency and an energy; the shorter the wavelength, the higher the frequency and the greater the energy. The solar spectrum spans roughly 0.2 µm to 4 µm, with the great majority of the energy in and around the visible region. An average of 1367 W of solar energy strikes each square meter of the Earth's outer atmosphere. Although the atmosphere absorbs and reflects this radiation, a vast amount still reaches the Earth's surface. The amount of sunlight striking the Earth varies by region, season, time of day, climate, and degree of air pollution. The amount of electricity produced by PV devices depends on the incident sunlight and the device efficiency.

Characteristics of PV systems:
• They rely on sunlight.
• They generate electricity with little impact on the environment.
• They have no moving parts to wear out.
• They are modular, meaning they can be matched to a need for power at any scale.
• They can be used as independent power sources or in combination with other sources.
• They are reliable and long-lived.
• They are a solid-state technology and are easily mass-produced and installed.

Knowing how the PV effect works in crystalline silicon helps us understand how it works in all
devices. All matter is composed of atoms. Positively charged protons and neutral neutrons comprise the nucleus of the atom; negatively charged electrons occupy orbits at distances that depend on their energy levels. The outermost, or valence, electrons determine the way solid structures are formed. Four of silicon's 14 electrons are valence electrons, and in a crystalline solid each silicon atom shares each of its four valence electrons with a valence electron of a neighboring silicon atom. Light of sufficient energy can dislodge an electron from its bond in the crystal, creating a hole. These negative and positive charges (free electrons and holes) are the constituents of electricity. PV cells contain an electric field that forces free negative and positive charges in opposite directions, driving an electric current. To form the electric field, the silicon crystal is doped to alter its electrical properties. Doping the crystal with phosphorus adds extra, unbonded electrons, producing n-type material. Doping the crystal with boron leaves holes in the crystal (bonds missing electrons, which act as positive charges), producing p-type material. In p-type material, holes outnumber free electrons and are the majority charge carriers; in n-type material, free electrons outnumber holes and are the majority carriers. The majority carriers respond physically to an electric field. When n-type and p-type material come into contact, an electric field forms at the junction (known as the p-n junction). Once the materials are in contact, the majority carriers diffuse across the junction, creating (in the immediate vicinity of the junction) excess electrons on the p-side and excess holes on the n-side. At equilibrium there is a net concentration of opposite charges on either side of the junction, which creates an electric field across the junction.
Photons absorbed by a cell create electron-hole pairs. The electric field attracts photogenerated minority carriers across the interface and repels photogenerated majority carriers. This sorting of the photogenerated electrons and holes by the electric field is what drives the charge in an electric circuit. Attaching an external circuit (e.g., a bulb) allows electrons to flow from the n-layer through the load and back to the p-layer. The band-gap energy is the amount of energy a photon must deliver to move an electron from the valence band to the conduction band. Band-gap energies of PV materials range from about 1 to 3.33 eV; crystalline silicon's band-gap energy is 1.1 eV. Photons with too little energy pass through the material or create heat; photons with too much energy create charge carriers but also heat up the cell. Materials with lower band-gap energies yield greater current; materials with higher band-gap energies yield higher voltages. The electric power produced by a PV cell is I*V, the product of its current and voltage. The PV cell is the basic unit of a PV system. An individual PV cell typically produces between 1 and 2 W, hardly enough power for the great majority of applications. But the power can be increased by connecting cells together into larger units called modules; modules, in turn, can be connected to form even larger units known as arrays, which can be interconnected for more

[Figure: cell, module, array]
power, and so on. In this way, a PV system can be built to meet almost any power need, however small or great. PV can also be implemented in the grid system, as a grid-connected photovoltaic power system is connected to the commercial electric grid. These are generally small: about 3 kW for a private residence, 20 kW for a multiple dwelling, and 100-200 kW for schools and factories. The operation of such a system is based on the principle of feeding power into the grid when solar generation exceeds the load demand (during the day) and taking power from the grid at night. These systems do not require energy storage, but they do require additional components to regulate voltage, frequency, and waveform to meet the stringent requirements of feeding power into the grid.

Applications of photovoltaic systems

Telecommunications: The power consumption of telecommunication equipment has been reduced considerably by solid-state devices; transmitters and relay stations now consume 50-100 W. These stations are often located in remote, hard-to-access areas such as mountain tops and deserts.

Cathodic protection: Metallic structures such as pipelines, well heads, and bridges are protected from corrosion by cathodic protection systems. In this technique a small direct current is impressed on the structure at regular intervals to prevent electrochemical corrosion. Small PV panels can supply this current very efficiently.

Navigational aids: Marine beacons and navigational lights on buoys around the world are nowadays powered reliably and cost-effectively by simple PV generators; they were earlier powered by kerosene or batteries, with numerous maintenance problems.

Remote aircraft beacons: Remote radio beacons near airports can be powered economically by solar PV. One of the earliest examples is the light beacons on seven mountain peaks near Medina airport in Saudi Arabia.
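The band-gap relation described earlier can be checked numerically: a photon contributes carriers only if its wavelength is shorter than the cutoff λ = hc/E_g. A quick sketch for crystalline silicon (E_g = 1.1 eV):

```python
h = 6.626e-34    # Planck's constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electron-volt

def cutoff_wavelength_nm(band_gap_eV: float) -> float:
    """Longest wavelength (nm) a photon can have and still bridge
    the given band gap."""
    return h * c / (band_gap_eV * eV) * 1e9

silicon = cutoff_wavelength_nm(1.1)   # about 1127 nm, in the near infrared
assert 1100 < silicon < 1160
# photons with longer wavelengths ("too little energy") pass through the cell
```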
Alarm systems: PV systems are being used to power railway signals, alarm systems, fog, fire and flood hazard warnings, traffic lights, and highway telephones.

Automatic meteorological stations: Precise weather forecasting requires collecting meteorological data at fixed intervals at several locations and transmitting them to a weather station for analysis. Solar-powered meteorological stations are reliable, economical, and relatively free of maintenance problems.

Defence equipment: Much defence equipment, such as mobile telephones, remote instrumentation, radar, and water purifiers, can be effectively powered by PV.

Emergency equipment: Battery charging on lifeboats and rafts, and providing essential services after earthquakes, floods, and other natural disasters, can be done efficiently by PV systems, as can providing electric power to remote villages and islands, especially in developing countries where large numbers of villages remain unconnected to the main grid.

[Figures: a set of arrays of solar cells; arrays of solar cells on the roof of each house]
Electromagnetic interference (EMI), also termed radio-frequency interference (RFI), is any undesirable electromagnetic emission, or any electrical or electronic disturbance, man-made or natural, that causes an undesirable response, malfunction, or degradation in the performance of electrical equipment. The disturbance may interrupt, obstruct, or otherwise degrade or limit the effective performance of a circuit. The source may be any object, artificial or natural, that carries rapidly changing electrical currents, such as an electrical circuit, the Sun, or the Northern Lights. Radiated RFI is most often found in the frequency range from 30 MHz to 10 GHz.

Types
EMI can broadly be divided into two types: narrowband and broadband.

Narrowband interference arises from intentional transmissions such as radio and TV stations, pager transmitters, cell phones, etc.

Broadband interference arises from incidental radio-frequency emitters, including electric power transmission lines, electric motors, thermostats, and microprocessors. Anywhere electrical power is being switched on and off rapidly is a potential source. The spectra of these sources generally resemble those of synchrotron sources, stronger at lower frequencies and diminishing at higher frequencies, though this noise is often modulated, or varied, by the generating device in some way. Such sources include computers and other digital equipment such as televisions and mobile phones. The rich harmonic content of these devices means that they can interfere over a very broad spectrum. A characteristic of broadband RFI is the inability to filter it effectively once it has entered the receiver chain.
EMI in ICs
ICs are often a source of EMI, but they must usually couple their energy to larger objects such as heatsinks, circuit board planes, and cables to radiate significantly. On ICs, EMI is usually reduced by bypass or decoupling capacitors on each active device, rise-time control of high-speed signals using series resistors, and Vcc power filtering; shielding is a last option, after all other techniques have failed. At lower frequencies, radiation is almost exclusively via input/output cables: RF noise gets onto the power planes and is coupled to the line drivers via the Vcc and ground pins, and the RF is then coupled to the cable through the line driver as common-mode noise. Common-mode noise is a noise signal found in phase on both the line and neutral conductors with respect to ground, typically with equal amplitude on both conductors. One way to reduce its effect is to use a choke or braid-breaker. At higher frequencies, traces become electrically longer and sit higher above the plane, so two techniques are used: wave shaping with series resistors and embedding the traces between two planes.

Electromagnetic Interference
by Rupendra Maharjan, 062 Electronics
[Figure: electromagnetic interference]

Even
if these measures cannot reduce EMI to the permissible level, shielding techniques such as RF gaskets and copper tape can be used.

Necessity of regulating EMI
Because these EMI emissions are unwanted, they are regulated so that today's sensitive equipment can function properly without suffering degradation in performance due to interference generated by other electronic devices. The EMI spectrum is a limited natural resource that must be maintained to allow reliable radio-frequency communications. Successful regulation of EMI will allow future electronic devices to operate as designed, in their intended environments, without suffering any degradation in performance due to interference and without disrupting the performance of other equipment.

EMI filter
An EMI filter is a passive electronic device used to suppress conducted interference present on any power or signal line. It may be used to suppress the interference generated by the device itself, as well as the interference generated by other equipment, improving the immunity of a device to the EMI signals present in its electromagnetic environment. Most EMI filters include components to suppress both common-mode and differential-mode interference. Filters can also be designed with added devices to provide transient voltage and surge protection as well as battery backup. An EMI filter has a high reactive component to its impedance, meaning the filter presents a much higher impedance to higher-frequency signals; this high impedance attenuates, or reduces the strength of, those signals so they have less effect on other devices.

Use The Best...
Linux for Servers
Mac for Graphics
Palm for Mobility
Windows for Solitaire
(T-shirt)

The Horror, The Heartbreak: Facebook is under major revision. The site will be online after a few weeks.
(Credit: Ruchin Singh, 2010)
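To make the filter's "high impedance at high frequency" behavior concrete, here is a sketch of the corner frequency of a first-order RC filter stage (the component values are illustrative, not taken from the article):

```python
import math

def corner_frequency_hz(r_ohm: float, c_farad: float) -> float:
    """-3 dB corner of a first-order RC low-pass: f_c = 1/(2*pi*R*C).
    Signals well above f_c are increasingly attenuated."""
    return 1.0 / (2.0 * math.pi * r_ohm * c_farad)

# e.g. a 100-ohm series resistor feeding a 100 nF bypass capacitor
fc = corner_frequency_hz(100.0, 100e-9)
assert 15_000 < fc < 17_000   # ~15.9 kHz; RF noise far above this is suppressed
```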
The world has changed tremendously over the last 10-20 years as a result of the growth and maturation of the Internet and of networking technologies in general. Twenty years ago, no global network existed to which the general population could easily connect. Ten years ago, the public Internet had grown to the point where people in most parts of the world could connect to it. Today, practically everyone seems to have access through PCs, handheld devices, and phones. The original design of the Internet required every connected host to have a unique IP address, and the people administering the program ensured that no IP address was reused. But the Internet grew so fast that a shortage of IP addresses arose. It is a reality that the number of people and devices connected to networks increases every day. That is not a bad thing at all: we keep finding new and exciting ways to communicate with more people, and that is a good thing; in fact, it is a basic human need. IPv4 has only about 4.3 billion addresses available in theory, and we know we do not even get to use all of those: there really are only about 250 million addresses that can be assigned to devices. China is barely online, and we know there is a huge population of people and corporations there that surely want to be. Moreover, it is estimated that just over 10% of the world's population is connected to the Internet. These statistics reveal the ugly truth of IPv4's capacity. So we have to do something before we run out of addresses and lose the ability to connect with each other. The main long-term solution was to increase the size of the IP address, and so IPv6 came. The problem is that many of the Cisco routers and switches deployed for IPv4 do not support IPv6; Windows XP does not support IPv6 by default; and most ISPs lack sufficient infrastructure to support it. IPv6 requires both hardware support and software support.
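The scale of the fix is easy to quantify: IPv4 addresses are 32 bits and IPv6 addresses are 128 bits, so the two address spaces compare as follows:

```python
ipv4_space = 2 ** 32    # about 4.3 billion addresses
ipv6_space = 2 ** 128   # about 3.4e38 addresses

assert ipv4_space == 4_294_967_296
# IPv6 multiplies the space by 2^96, roughly 7.9e28 times
assert ipv6_space // ipv4_space == 2 ** 96
```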
MIGRATION TO IPV6
by Mithlesh Chaudhary, 062 Electronics

Many short-term solutions to the addressing problem were suggested. Some of them are discussed here:

1. Dual stacking
The term dual stack means that a host or router uses both IPv6 and IPv4 at the same time. Hosts have both IPv4 and IPv6 addresses, so a host can send IPv4 packets to other IPv4 hosts and IPv6 packets to other IPv6 hosts. Configuration of dual stack:
R(config)# ipv6 unicast-routing
R(config)# interface FastEthernet0/0
R(config-if)# ipv6 address 2001:db8:3c4d:1::/64 eui-64
R(config-if)# ip address

2. Tunneling
The tunnel function generally takes an IPv6 packet sent by a host and encapsulates it inside an IPv4 packet. The IPv6 packet can then be forwarded over an existing IPv4 internetwork; the device at the far end removes the IPv4 header, revealing the original IPv6 packet. The figure shows an IPv6-to-IPv4 (6to4) tunnel (meaning IPv6 inside IPv4): R1 encapsulates, or tunnels, the IPv6 packet in a new IPv4 header whose IPv4 destination address is that of router R4. R2 and R3 simply forward the packet, while R4 de-encapsulates the original IPv6 packet and forwards it to IPv6 PC2.

Configuring 6to4 tunneling:

Router1(config)# int tunnel 0
Router1(config-if)# ipv6 address 2001:db8:1:1::1/64
Router1(config-if)# tunnel source
Router1(config-if)# tunnel destination
Router1(config-if)# tunnel mode ipv6ip

Router2(config)# int tunnel 0
Router2(config-if)# ipv6 address 2001:db6:2:2::1/64
Router2(config-if)# tunnel source
Router2(config-if)# tunnel destination
Router2(config-if)# tunnel mode ipv6ip

3. Network Address Translation (NAT)
NAT is a protocol used to reduce the demand for IPv4 addresses. The NAT function changes the private IP addresses inside each packet to publicly registered IP addresses: the router performing NAT changes a packet's source IP address when the packet leaves the private organization, and changes the destination address in each packet forwarded back into the private network. Cisco IOS software supports several variations of NAT. There are generally three types:

a. Static NAT
b. Dynamic NAT
c. Port address translation (PAT)

a. Static NAT
This is a one-to-one mapping of IP addresses, i.e.
the NAT router simply configures a one-to-one mapping between a private address and a registered address.

[Figure: IPv4 network and the Internet]
The design concern of NAT is to conserve IP addresses, so if we use a one-to-one mapping of inside local to inside global addresses, NAT's aim cannot be achieved.

Static NAT configuration:

Router(config)# ip nat inside source static
Router(config)# int f0
Router(config-if)# ip nat inside
Router(config)# int s0
Router(config-if)# ip nat outside

Here is the inside local address and is the inside global address; f0 is the inside interface and s0 is the outside interface. After creating the static NAT entries, the router needs to know which interfaces are "inside" and which are "outside": the 'ip nat inside' and 'ip nat outside' interface subcommands identify each interface appropriately.

b. Dynamic NAT
Like static NAT, dynamic NAT creates a one-to-one mapping between an inside local and an inside global address; however, the mapping happens dynamically. Dynamic NAT sets up a pool of possible inside global addresses and defines matching criteria to determine which inside local IP addresses should be translated. NAT can be configured with more IP addresses in the inside local address list than in the inside global address pool. If all the IP addresses of the NAT pool are in use when a new packet arrives at the router, the router simply discards the packet; the user must try again until an address is free in the NAT pool.

Dynamic NAT configuration:
Dynamic NAT configuration requires each interface to be identified as inside or outside, and uses an access control list (ACL) to identify which inside local IP addresses need their addresses translated.

Router(config)# int f0
Router(config-if)# ip nat inside
Router(config)# int s0
Router(config-if)# ip nat outside
Router(config)# access-list 1 permit
Router(config)# ip nat pool ioe netmask
Router(config)# ip nat inside source list 1 pool ioe

c.
Port address translation (PAT)
Dynamic NAT lessens the problem of static NAT to some degree, because rarely will every single host in an internetwork need to communicate with the Internet at the same time. However, a large percentage of the IP hosts in a network will need Internet access throughout the company's normal business hours, so dynamic NAT still requires a large number of registered IP addresses, again failing to reduce IPv4 address consumption. When PAT creates a dynamic mapping, it selects not only an inside global IP address but also a unique port number. The NAT router keeps a NAT table entry for every unique combination of inside global address and the unique port number associated with it. Because the port number field has 16 bits, NAT overload can use more than 65,000 port numbers, allowing it to scale well without needing many registered IP addresses.

Port address translation (PAT) configuration:

Router(config)# int f0
Router(config-if)# ip nat inside
Router(config)# int s0
Router(config-if)# ip nat outside
Router(config)# access-list 1 permit
Router(config)# ip nat pool ioe netmask
Router(config)# ip nat inside source list 1 pool ioe overload

For PAT, we generally need only one IP address in the pool. For this, we have the command:

Router(config)# ip nat pool ioe netmask

Note that the pool uses global addresses while the access list uses local addresses.
Many of us are already familiar with magnetic stripe cards. The most visible form of their use is bank (credit, debit, and ATM) cards, but they also find application in many other places, such as identity cards, library cards and transportation tickets. This article intends to introduce magnetic stripe cards and outline their physical and technical aspects in brief.

A magnetic stripe card contains a black or brown stripe made from tiny iron-based magnetic particles in a resin. Digital data is stored in the stripe by magnetizing the particles, as in digital tape storage. The magnetic stripe, sometimes called a magstripe, is read by physical contact, either by swiping it past a reading head or by inserting it into a reader. The cards can be manufactured by various techniques, depending on requirements such as durability and cost.

It is said that the first use of magnetic stripes on cards was in the early 1960s, when the London Transit Authority installed a magnetic stripe system in the London Underground (UK). By the late 1960s, BART (Bay Area Rapid Transit, USA) had introduced a paper-based ticket the same size as the credit cards we use today. This system stored a value on the magnetic stripe, which was read and rewritten every time the card was used.

Specification
The magnetic stripe is located ~6 mm from the edge of the card and is ~10 mm wide. It contains three tracks, each ~2.8 mm wide. Tracks 1 and 3 are typically recorded at 210 bits per inch (8.27 bits per mm), while track 2 typically has a recording density of 75 bits per inch (2.95 bits per mm).

How information is encoded
Each character encoded on the stripe is made up of a number of bits. The polarity of the magnetic particles in the stripe is changed to define each bit. Several schemes exist to determine whether each bit is a 1 or a 0; the most commonly used are F2F (or Aiken Bi-Phase) and MFM (Modified Frequency Modulation).
The ISO/IEC 7811 standards specify F2F encoding. In this encoding, each bit has the same physical length on the stripe, and the presence or absence of a polarity change in the middle of the bit dictates whether it is a 1 or a 0: the width of a single bit always remains the same, but bits with an extra polarity change in the middle are read as 1s.

Magnetic Stripe Cards
Pushkar Shakya
063 Computer

"Digital data is stored in the stripe by magnetizing the particles as in digital tape storage."

MFM encoding is more complicated. This type of encoding allows twice as much data to be encoded with the same number of flux reversals (edges). Once the encoding scheme is chosen, the format
of the data must be selected. ISO/IEC 7811 specifies two different schemes for use on interchange cards (such as bank cards): four bits plus parity, and six bits plus parity. Four bits allow only the encoding of numbers plus some control characters; six bits allow the full alphanumeric set to be encoded. The parity bit is used to help determine whether an error occurred in reading the data: the total number of 1s in a character is added up, and in odd parity this total must be an odd number. If the total is odd, the parity bit is set to 0; if the total is even, the parity bit is set to 1.

Magnetic stripe coercivity
Magnetic stripes come in two main varieties: high-coercivity (Hi-Co) at 4000 Oe (oersted) and low-coercivity (Lo-Co) at 300 Oe, though intermediate values such as 2750 Oe are not infrequent. High-coercivity magnetic stripes are harder to erase and are therefore appropriate for cards that are frequently used or that need a long life. Low-coercivity magnetic stripes require less magnetic energy to record, and hence the card writers are much cheaper than machines capable of recording high-coercivity stripes. A card reader can read either type of magnetic stripe; the same may not be true of card writers, however. High-coercivity stripes are resistant to damage from most magnets likely to be owned by consumers, so the risk of accidental erasure is diminished. Low-coercivity stripes are easily damaged by even brief contact with a magnetic purse strap or fastener. Because of this, virtually all bank cards today are encoded on high-coercivity stripes despite a slightly higher per-unit cost. Usually low-coercivity magnetic stripes are a light brown colour, and high-coercivity stripes are nearly black. The material used to make the particles defines the coercivity of the stripe.
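The odd-parity rule just described can be sketched in a few lines (a minimal illustration; the example bit patterns are arbitrary):

```python
def odd_parity_bit(data_bits):
    """Parity bit under the odd-parity rule described above:
    if the count of 1s in the data bits is already odd, the parity bit is 0;
    if it is even, the parity bit is 1, making the total count odd."""
    ones = sum(data_bits)
    return 0 if ones % 2 == 1 else 1

# A four-bit character 0101 has two 1s (even), so it gets parity bit 1.
print(odd_parity_bit([0, 1, 0, 1]))  # 1
# A character 0001 has one 1 (odd), so it gets parity bit 0.
print(odd_parity_bit([0, 0, 0, 1]))  # 0
```

A reader applies the same count on the five (or seven) received bits: if the total number of 1s is not odd, a read error has occurred.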
Standard low-coercivity stripes use iron oxide (gamma ferric oxide) as the particle material, and the particles are acicular (needle shaped), with an aspect ratio of approximately 6 to 1. The acicular particles have an easy axis of magnetization along the length of the particle, which makes alignment an easy process. High-coercivity stripes are made from other materials, such as barium ferrite. The particles used in most of these materials are not acicular; they are platelets. These platelets have an easy axis of magnetization through the plate, which means the alignment field has to stand the particles on edge, and they have to stay that way to get the best performance from the stripe. Obviously the particles want to fall over as soon as the field is removed from the stripe, so part of the skill in making a high-quality stripe lies in designing a process that can keep those particles on their side.

As explained above, the stripe is made from many small particles bound together in a resin. The density of the particles in the resin is one of the controlling factors for the signal amplitude: the more particles there are, the higher the signal amplitude, independently of the material's coercivity. Signal amplitude is important because it defines the design of the readers for the cards. Standards exist (ISO/IEC 7811) which define the signal amplitude for cards used in the interchange environment. By conforming to these standards, a user ensures that the magnetic stripe can be read in any financial terminal worldwide. It also makes the range of available readers much greater.

Magnetic stripe track formats: ISO/IEC 7813
There are three tracks on magnetic cards used for financial transactions.

Track 1
Track 1 is recorded at 210 bits per inch (bpi) and holds 79 six-bit-plus-parity read-only alphanumeric characters; it is therefore the only track that contains the cardholder's name.
The information on track 1 of financial cards is recorded in one of several formats: A, which is reserved for proprietary use of the card issuer; B, which is described below; C-M, which are reserved for use by ANSI Subcommittee X3B10; and N-Z, which are available for use by individual card issuers.

Format B:
• Start sentinel — one character (generally '%')
• Format code = "B" — one character (alphabetic only)
• Primary account number (PAN) — up to 19 characters; usually matches the credit card number printed on the front of the card
• Field separator — one character (generally '^')
• Country code — 3 characters
• Name — 2 to 26 characters
• Field separator — one character (generally '^')
• Expiration date — four characters in the form YYMM
• Service code — three characters
• Discretionary data — enough characters to fill out the maximum record length (79 characters total)
• End sentinel — one character (generally '?')
• Longitudinal redundancy check (LRC) — one character (a form of computed check character)

The LRC is a validity character calculated from the other data on the track. Card readers use it only to verify the input internally.

Track 2
Track 2 is recorded at 75 bpi and holds 40 four-bit-plus-parity characters. This format was developed by the banking industry (ABA). The track is written with a 5-bit scheme (4 data bits + 1 parity bit), which allows sixteen possible characters: the numbers 0-9 plus the six characters : ; < = > ?. The data format is as follows:
• Start sentinel — one character (generally ';')
• Primary account number (PAN) — up to 19 characters; usually matches the credit card number printed on the front of the card
• Separator — one character (generally '=')
• Country code — 3 characters
• Expiration date — four characters in the form YYMM
• Service code — three characters
• Discretionary data — as in track 1
• End sentinel — one character (generally '?')
• LRC — one character

Track 3
Track 3 is recorded at 210 bpi and holds 107 four-bit-plus-parity characters; its standards were created by the thrift-savings industry. Credit cards typically use only tracks 1 and 2. Track 3 is a read/write track (it can include an encrypted PIN, country code, currency units and amount authorized), but its usage is not standardized among banks and it is virtually unused.

Security
Magnetic stripes are not inherently secure.
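As a rough illustration of the track 2 layout listed above, a decoder might split the fields like this. This is a sketch under assumptions: the sample string and LRC character are invented, and it follows the field order given in the list (real cards vary, e.g. the country code is not always present):

```python
def parse_track2(raw):
    """Split a track 2 string following the field order listed above.
    Assumes: start sentinel ';', separator '=', end sentinel '?',
    and a trailing LRC character. Illustrative only."""
    assert raw[0] == ';' and raw[-2] == '?'   # sentinels; last char is the LRC
    body, lrc = raw[1:-2], raw[-1]
    pan, rest = body.split('=', 1)            # PAN ends at the '=' separator
    return {
        "pan": pan,                  # up to 19 digits
        "country": rest[0:3],        # 3 characters
        "expiration": rest[3:7],     # YYMM
        "service_code": rest[7:10],  # 3 characters
        "discretionary": rest[10:],
        "lrc": lrc,
    }

# Hypothetical sample data, not a real card.
sample = ";1234567890123456=5242512101987654321?X"
fields = parse_track2(sample)
print(fields["pan"], fields["country"], fields["expiration"], fields["service_code"])
```

Because track 2 carries only the sixteen-character numeric set, a parser like this needs no case handling; track 1, with its six-bit alphanumeric set, would need a larger alphabet and the name field.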
The problem with being easy to manufacture and encode is that it is equally easy for crooks to do the same. Several schemes are available for creating a secure encoding on a magnetic stripe; Watermark Magnetics, XSec, Holomagnetics, XiShield, Jitter Enhancement, ValuGard and MagnePrint are a few. Each of these technologies exploits some aspect of the magnetic stripe, the card and the data on the stripe, tying everything together to make counterfeiting the card in some fashion very difficult. They use different means: ValuGard uses the inherent signal-amplitude properties of the stripe, Watermark Magnetics uses a special magnetic stripe, and XSec uses the inherent jitter properties of the stripe. What all this means is that the actual piece of magnetic stripe can be tied to the encoding to prevent fraud.

Conclusion
Magnetic stripe technology is used in many fields, providing an ideal solution to many aspects of our life. It is very inexpensive and readily adaptable to many functions. This, coupled with the advent of the security techniques now available, means that many applications can expect to be using magnetic stripe technology for a long time.
Robotics

Introduction
The world around us is exploring new horizons of technology at an ever-accelerating pace, especially in embedded systems, computing and automation. While embedded platforms are an attractive option for learning and implementing new technologies, most freshman and sophomore engineering students have not acquired sufficient skills to understand and use the complex development tools needed to program these platforms. An increasing number of ill-prepared students enrol in engineering courses, students who neither have the required prerequisites from their intermediate-level or school courses nor know much about their field of interest [1]. To ensure that we produce engineers who are qualified practically as well as theoretically, an activity-based technology curriculum is needed at the school level and in early engineering courses, to give students an insight into engineering fields and attract them to technology studies [2]. As opposed to the traditional education approach, a technology curriculum should be more practice oriented and activity based. New approaches are therefore needed to design and implement high-quality technology programs at the beginning of an engineering career. Robotics, being a multifaceted representation of modern science and technology, fits into this planning perfectly. Robotics is defined as an intelligent connection between perception and action [3]. It is an engineering art combining electrical and mechanical technologies. It is widely used nowadays and has become part of our daily life.
"My First Humanoid Robot"
An Experience worth Sharing with Freshmen and Sophomores
Bikram Adhikari
DOECE, Pulchowk Campus

Abstract: As the world around us becomes overwhelmed with sophisticated embedded gadgets and robots, it is indispensable to create enthusiasm among students about embedded systems and robotics early in their careers. While embedded platforms are an attractive option for learning and implementing new technologies, most freshman and sophomore engineering students have not acquired sufficient skills to understand and use the complex development tools needed to program these platforms. Students need a simpler and more spontaneous environment in which to exercise their creativity and apply basic engineering concepts. Sharing the experience of building a humanoid robot with LEGO Mindstorms NXT®, this paper presents the LEGO Mindstorms kit as a useful tool for freshmen and sophomores learning robotics and embedded systems.

Index Terms: LEGO Mindstorms, Robotics, Embedded Systems

Figure 1. Integrated STEM areas: Mathematics, Science, Engineering, Technology

In the industrial arena, robots are widely
used to increase productivity and hence production capacity. Robotics is an excellent way to introduce students to the integrated areas of science, technology, engineering and mathematics (STEM) [1]. However, due to the complexity of robotics studies, it is hard to attract students and pass the knowledge on to them [2]. This paper presents a comparative analysis of LEGO Mindstorms NXT against the traditional learning approach, and its capability to serve as an interactive platform for school students as well as freshman and sophomore engineering students to learn embedded systems and robotics using simple mechanical parts and graphical software.

LEGO Mindstorms
The LEGO Mindstorms kit is a robotics set for education which lets you create and program robots, using simple mechanical parts, to perform simple and complex tasks. The kit consists of mounting blocks, motors, sensors, and a microprocessor that is the brain of the system [4].

LEGO NXT brick
Figure 3 shows the block diagram of the LEGO Mindstorms NXT "brick". Its central processor is a 32-bit Atmel ARM7 with 256 KB flash and 64 KB RAM, operating at 48 MHz. It has a 100 × 64 pixel LCD graphical display and an 8-bit resolution sound channel. The brick can be programmed over Bluetooth and can store multiple programs, which can be selected using buttons [4]. From a freshman or sophomore point of view, programming such a system by conventional techniques would require a challenging toolchain. Another fact to ponder is that this "toy", as LEGO calls it, is suitable for kids of age 8 and above. With these two contradictions, LEGO introduces a different approach to programming these systems, one that focuses more on concept implementation and less on bit-level operations.
LEGO Mindstorms NXT software
As mentioned in the previous section, programming the NXT brick by 8-year-olds, and subsequently by freshman and sophomore engineering students, needs a novel approach, primarily because of the students' limited skill set.

Figure 2. LEGO Mindstorms Education Base Set [4]
Figure 3. Block diagram of the LEGO Mindstorms NXT brick [5]

The LEGO Mindstorms NXT kit provides a wonderful software development platform that can be used to program the brick without any prior knowledge of
programming. Figure 4 shows an example of a program written in this Mindstorms NXT software [6]. Since the program is completely graphical, students can focus more on the design aspect rather than starting with the basic but complicated phase of learning new tools and their associated abstract syntaxes. Another important feature of graphical programming is parallel programming, which is an inherently hard concept to teach with the traditional approach. The parameters associated with a block can be easily configured at the bottom of the screen itself. The software also exposes some key embedded concepts, such as memory and resource management, in a fun environment.

LEGO sensors and motors
Figure 5 shows a range of sensors and motors for the LEGO brick. LEGO Mindstorms NXT 2.0 comes with two touch sensors, one ultrasonic sensor for measuring distances from 5 cm to 250 cm, one microphone and one colour sensor that can identify fifteen different colours. These sensors can easily be calibrated using a simple calibration program within the software [4]. There are three DC motors with built-in tachometer feedback to provide a robust position and velocity response.

LEGO building blocks
With the traditional learning approach, students find it ominously challenging and daunting to build all the basic parts of a robot from scratch, and freshmen and sophomores find it very difficult and costly. With LEGO Mindstorms, the physical structure of a robot or other mechanical framework can be built from LEGO building blocks without any need for a costly, sophisticated workshop or tedious labour. There are building instructions for assembling a range of robots on the official LEGO website [4]; Figure 6 shows the building instructions for part of the LEGO humanoid robot's leg.

My first LEGO humanoid robot
Making a robot is not an easy task.
The knowledge of robotics and automation is not acquired by turning the pages of lecture notes and submitting weekly assignments. Making a complete system fully work takes a lot of effort, knowledge and perseverance. In addition, the challenge is compounded by the unavailability of equipment, which necessitates development from scratch. For beginners, robotics is an incredibly daunting field to get into.

Figure 4. LEGO Mindstorms NXT Software [4]
Figure 5. LEGO Mindstorms NXT Sensors and Motors [4]
Figure 6. LEGO Mindstorms Building Instruction
Recently I built a humanoid robot, my childhood dream, using LEGO Mindstorms NXT. After my nerve-wracking prior experiences of building simple, yet complicated-to-build, robots, I was astounded to find myself building a humanoid robot effortlessly. Following the building instructions and using the drag-and-drop programming environment was just so simple. Finally the robot walked up to me and said, "Hello!!!" It was a wonderful experience. In the meantime, I had a flashback to my past robotics experiences. I don't remember making a robot without using a multimeter or soldering iron, going through piles of datasheets, or pondering in front of an oscilloscope wondering, "Why is there a 500 mV spike in the signal?" More than 80% of the time was spent on developing the fundamental parts of the robot. A major portion of the mechanical job was working for hours and hours at the lathe to make shafts, bushes and wheels. As a beginner in robotics, I spent hours going through datasheets and surfing the Internet to discover basic principles and techniques. While making this humanoid robot, I laughed at my silly mistakes of the past and realized how easy and effective it would have been had I had the opportunity to use this kit as a sophomore. I do not say that everything I went through was useless. It is important to have a strong foundation, to be able to accomplish every single piece of work oneself, and to acquire in-depth engineering knowledge, but LEGO Mindstorms can be the right point from which to take off.

LEGO Mindstorms can turn out to be an exceptional platform for a variety of research purposes. Especially designed for students of age 8 and above, this unique platform integrates the key traits of activity-based learning. Not only for robotics, this extremely powerful platform can be used as a base from which to start learning programming and embedded systems.
The possibility of understanding and implementing various mechanisms without having to go to a workshop may draw mechanical students to this kit as well.

Conclusion
In this paper, LEGO Mindstorms NXT has been presented as an indispensable tool for introducing embedded systems and robotics to school students and to freshman and sophomore engineering students. Many students find early engineering courses abstract and narrowly focused, with rigorous mathematics. Platforms like LEGO Mindstorms are compact, cost-effective and simple packages that provide flexibility in the design and development of robots and other embedded systems without the worry of complex development tools. Such a platform helps encourage creativity and enables students to absorb concepts effectively.

Figure 7. Mindstorms NXT Humanoid Robot
Figure 8. LEGO Humanoid Robot Experience
LEGO Mindstorms NXT can be used as an instructional tool to bring students into areas that are broad in scope, fun, and challenging as well. Students can begin with simple robots and successively proceed to the sophisticated NXT controller, motors and sensors. Beyond learning how to use these sensors and motors, students can also learn how they work, so that they can attempt to make their own sensors, interfaces, motor drivers and eventually their own robot. At school level, students are able to develop a deeper and broader understanding of the field of engineering; consequently, they are able to make a well-informed decision when choosing their field of study.

References
(1) Tanja Karp, Richard Gale, Laura A. Lowe, Vickie Medina and Eric Beutlich, "Generation NXT: Building Young Engineers with LEGOs", IEEE Transactions on Education, Vol. 53, No. 1, February 2010.
(2) Kin W. Lau, Heng Kiat Tan, Benjamin T. Erwin, Pavel Petrovic, "Creative Learning in School with LEGO® Programmable Robotics Products", 29th ASEE/IEEE Frontiers in Education Conference, 1999.
(3) Lady Daiana O. Maia, Vandermi J. da Silva, Ricardo E. V. de S. Rosa, Jose P. Queiroz-Neto, Vicente Lucena Jr., "An Experience to use Robotics to Improve Computer Science Learning", 39th ASEE/IEEE Frontiers in Education Conference, 2009.
(4) LEGO official website
(5) LEGO Mindstorms NXT Hardware Developers, NXTreme.aspx
(6) LEGO Mindstorms NXT Software, Mindstorms%20Software%20Announcement.aspx
A signal may be represented in various ways. On one hand, we can study how a signal varies in time, which is the case when using an oscilloscope. The shape presented on an oscilloscope is produced by the momentary values of the signal magnitude at different instants; this is studying the signal in the time domain. This kind of presentation is very useful when studying transient events characterized by rise time and fall time, as well as when analyzing digital signals to detect intermittent interference. It is, however, not always a useful way to represent information. For instance, if a large signal has weak interference superimposed on it, the interference may be hard to detect. In such a case a spectral-analysis method is more appropriate, where we represent the signal in the frequency domain (the instrument used for such frequency-domain measurement is known as a spectrum analyzer). This being my final-year project topic, I want to briefly discuss why spectrum analysis is important and how we can perform it.

What is a spectrum?
So what is a spectrum in the context of this discussion? A spectrum is a collection of sine waves that, when combined properly, produce the time-domain signal under examination. Figure 1.1 shows the waveform of a complex signal in the time domain. Suppose we were hoping to see a sine wave. Although the waveform certainly shows us that the signal is not a pure sinusoid, it does not give a definitive indication of why. Figure 1.2 shows our complex signal in the frequency domain. The frequency-domain display plots the amplitude versus the frequency of each sine wave in the spectrum. As shown, the spectrum in this case comprises just two sine waves. We now know why our original waveform was not a pure sine wave: it contained a second sine wave, the second harmonic in this case, which was difficult to identify in the time-domain measurement.
Project Ideas

Spectrum Analysis and its Benefits
Prajay Singh Silwal
062 Electronics

Figure 1.1. Waveform of a complex signal in the time domain
Figure 1.2. Complex signal in the frequency domain
How can we perform spectrum analysis?
Fourier analysis tells us that any periodic signal may be represented by a series of sine waves with varying amplitudes, frequencies and phases. If filtering of the signal is possible, the spectral components may be presented separated from one another. If the amplitude of each spectral component is displayed versus the corresponding frequency, a spectral analysis is obtained, and each spectral component can be studied independently of the others.

If we let our signal pass through a set of band-pass filters, i.e., a filter bank (as shown in Figure 2), and study the output signals from the different filters by means of an oscilloscope, one or more sinusoidal signals will appear (the first sinusoid obviously being the fundamental and the remainder being harmonics). If, on the other hand, we choose to present the signal in the frequency domain via a spectrum analyzer, the signals will be represented by vertical lines, since each of them corresponds to a single frequency. The height of a vertical line represents the amplitude of the corresponding signal, and its horizontal position represents the corresponding frequency. As with the oscilloscope, we can read the signal amplitude, but on a spectrum analyzer it is also possible to read the amplitudes of all interfering signals as well as their corresponding frequencies. This feature helps in deciding on the origins of the interference. However, when studying interference in a radio-frequency signal (a few GHz), where the interference amplitudes are very small, not even a spectrum analyzer on a linear scale will give a convenient result. A linear display, in the frequency domain as well as in the time domain, will only show signals that are comparable in size, i.e., of the same order of magnitude.
If a signal contains interference on the order of 1/10000 of the magnitude of the main signal, we can neither detect the disturbances nor analyze them or measure their magnitudes. For this reason, a logarithmic display is used, where amplitudes are presented in dBm, i.e., decibels relative to 1 mW:

P(dBm) = 10 log10(P / 1 mW)

With this kind of display, both harmonics and spurious signals are clearly shown. The information in the signal is not changed at all, but this way of presenting the data makes all the information more accessible. We can clearly see that the same signal displayed in different ways, viz. the time-domain and frequency-domain representations, gives access to different information. The phase information, however, is lost in the frequency domain. The most appropriate domain has to be chosen in each specific case; some systems are specifically frequency-domain oriented.

Figure 2. Working of a spectrum analyzer

Applications
Engineers and scientists have been looking for innovative uses of RF technology since the 1860s. The radio became the first practical application of RF signals. Over the next three decades, several research projects were launched to investigate methods of transmitting and receiving signals to
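The dBm relation above is easy to check numerically; a small illustrative sketch:

```python
import math

def dbm(power_watts):
    """Convert a power in watts to dBm: 10*log10(P / 1 mW)."""
    return 10 * math.log10(power_watts / 1e-3)

print(dbm(1e-3))   # 1 mW   ->   0.0 dBm
print(dbm(1.0))    # 1 W    ->  30.0 dBm
print(dbm(1e-7))   # 0.1 uW -> -40.0 dBm (1/10000 of 1 mW)
```

The last line shows why the logarithmic scale matters: interference four orders of magnitude below the carrier, invisible on a linear display, sits a comfortable 40 dB down on a log display.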
detect and locate objects at great distances. By the onset of World War II, radio detection and ranging (also known as radar) had become another prevalent RF application. Due in large part to sustained growth in the military and communications sectors, technological innovation in RF accelerated steadily throughout the remainder of the 20th century. Products such as mobile phones that operate in licensed spectrum must be designed not to transmit RF power into adjacent frequency channels and cause interference. This is especially challenging for complex multi-standard devices that switch between different modes of transmission and maintain simultaneous links to different network elements. To overcome these evolving challenges, it is crucial for today's engineers and scientists to be able to reliably detect and characterize RF signals that change over time, something not easily done with traditional measurement tools. To address these problems, there is a need for the real-time spectrum analyzer (RTSA), an instrument that can discover elusive effects in RF signals, trigger on those effects, seamlessly capture them into memory, and analyze them in the frequency, time, modulation, statistical and code domains.

Telecommunications systems often use FDMA (Frequency Division Multiple Access), in which different channels use different frequencies. This places high demands on the spectral purity of the signal. In order to decide whether or not a signal is able to interfere with adjacent channels, it has to be studied in the frequency domain. This fact makes the spectrum analyzer one of the most important instruments for RF (radio frequency) measurements.

Scientists have turned ordinary laptops into earthquake detectors. The portable seismic recorders rely on accelerometers built into the laptops — motion-detecting devices designed to turn off your computer if it is dropped.
The jostle of an earthquake can do the same trick. When an earthquake is detected, a special software program transmits the shaking intensity over the Internet to researchers at the University of California, Riverside, and Stanford University. To avoid false alarms, the software only signals a quake when several computers in one area transmit earthquake alerts. So far about 1,000 people from 61 countries have volunteered for the Quake-Catcher Network.

Ordinary Things Turned Hi-Tech! Laptop Earthquake Detectors
SIMULINK Model of an Inverted Pendulum System Using an RBF Neural Network Controller
Bikram Adhikari
DOECE, Pulchowk Campus

Abstract: This article presents a robust control scheme for an inverted pendulum system. The cart-pendulum system is nonholonomic and nonlinear, so simple linear controllers (such as the proportional-integral-derivative controller) may not perform well. The proposed system uses an error back-propagation algorithm developed for an RBF network. The RBF network acts as a compensation technique for a classical PID controller, providing robust control of the inverted pendulum system.

Index Terms: Digital incremental PID control, RBF network, back-propagation, inverted pendulum, adaptive control, SIMULINK modeling

Introduction
Inverted pendulum systems are considered a prototype example of nonlinear control applications by researchers and educators. The human body itself is an example of a mobile inverted pendulum system (MIPS), keeping itself from falling while walking. The inverted pendulum has long been a well-known prototype of nonlinear systems for testing control algorithms [1-4]; hence it has become an important subject in robotics and control systems. PID controllers can balance the pendulum by selecting suitable gains. However, simultaneous control of both angle and position by PID controllers is known to be very difficult, since the inverted pendulum system is nonlinear [5]. There has been active research in mobile inverted pendulum systems. 'Segway' has proven to be the future of transportation with its remarkable two-wheeled vehicle, and Segway as an astronaut robot in space is now being introduced [6].
The "Joe" robot uses a state-feedback control algorithm applied to the MIPS to control velocity and position [7]. To overcome nonlinear behaviour, nonlinear control methods, adaptive control methods and intelligent control approaches have been proposed. The inverted pendulum system is a single-input multiple-output (SIMO) system, in which a single input force controls both the position and the tilt. One of the merits of using a neural network as an auxiliary controller is that it works even when a complicated dynamic model of the system is not available.

System structure
The inverted pendulum model uses two PID controllers connected in parallel; the sum of the controllers' outputs is the input to the inverted pendulum system. Each PID controller fights the other to satisfy the requirements of the angle and the cart, but this is not enough to control both control actions. This leads to the introduction of the RBF network to improve the performance.

Figure 1. Cart-pendulum model

PID controller
The equations of the digital form of a PID controller for position and tilt control are given as
Δux[k] = kpx(ex[k] − ex[k−1]) + kix ex[k] + kdx(ex[k] − 2ex[k−1] + ex[k−2]) ... (1) Δuθ[k] = kpθ(eθ[k] − eθ[k−1]) + kiθ eθ[k] + kdθ(eθ[k] − 2eθ[k−1] + eθ[k−2]) ... (2) u[k] = u[k−1] + Δuθ[k] + Δux[k] ... (3) where eθ = θd − θ and ex = xd − x. The controller gains are adjusted by trial and error. The figure below shows the SIMULINK model for PID control of an inverted pendulum system. RBF network The RBF network is known for its simplicity and well-formulated structure, and its analysis is easier than that of other neural networks used in control applications. The RBF network consists of input, hidden, and output layers, of which the hidden layer is the only nonlinear layer, as shown in Fig. 3. The nonlinear Figure 2. RBFN inverted pendulum [5] Figure 3. RBF Network [5]
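Equations (1)-(3) translate almost directly into code. The following is an illustrative Python sketch; the gains and error values used in the example are placeholders, not the tuned gains of the article:

```python
def incremental_pid(e, e1, e2, kp, ki, kd):
    """Incremental (velocity-form) PID term from the last three errors.

    e, e1, e2 are e[k], e[k-1], e[k-2], as in equations (1) and (2)."""
    return kp * (e - e1) + ki * e + kd * (e - 2 * e1 + e2)

# Combined control input, equation (3): u[k] = u[k-1] + du_theta[k] + du_x[k]
u_prev = 0.0  # u[k-1]
du_x = incremental_pid(0.5, 0.4, 0.2, kp=2.0, ki=0.1, kd=0.5)
du_theta = incremental_pid(0.1, 0.05, 0.0, kp=20.0, ki=0.5, kd=1.0)
u = u_prev + du_theta + du_x
```

The velocity form is convenient here because summing the two increments and the previous input reproduces equation (3) without storing integral state explicitly.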
Figure 4. Simulink Model of RBFN inverted pendulum Figure 5. RBFN Feedforward path
function for the hidden layer of the RBF network is given by the Gaussian function φj = exp(−||X − µj||² / (2 sj²)) ... (4) where X is the input vector, µj is the centre value and sj is the width value. Then the output of the RBF network can be calculated as the sum yk = Σj wjk φj + bk ... (5) where the sum runs over j = 1 … NH, NH is the number of hidden units, wjk is the weight value and bk is the bias. The RBF network outputs are added to the PID controller inputs to form the new control inputs, as shown in Fig. 2. The RBF network compensates for uncertainties by adding signals ν1 … ν6 to the controller: Δux[k] = kpx(ex[k] − ex[k−1] + ν1) + kix(ex[k] + ν2) + kdx(ex[k] − 2ex[k−1] + ex[k−2] + ν3) ... (6) Δuθ[k] = kpθ(eθ[k] − eθ[k−1] + ν4) + kiθ(eθ[k] + ν5) + kdθ(eθ[k] − 2eθ[k−1] + eθ[k−2] + ν6) ... (7) Figure 6. RBFN Update bias and weight Figure 7. RBFN Update mean and variance
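A minimal Python sketch of the forward pass in equations (4)-(5); the array sizes and values here are illustrative, not taken from the article:

```python
import numpy as np

def rbf_forward(x, centers, widths, weights, bias):
    """RBF network output per equations (4)-(5).

    phi_j = exp(-||x - mu_j||^2 / (2 s_j^2)), y_k = sum_j w_jk phi_j + b_k."""
    phi = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2.0 * widths ** 2))
    return weights.T @ phi + bias, phi

# Toy example: 2 inputs, 3 hidden units, 1 output (all values illustrative)
x = np.array([0.2, -0.1])
centers = np.array([[0.0, 0.0], [0.5, 0.5], [-0.5, 0.0]])
widths = np.array([1.0, 1.0, 1.0])
weights = np.array([[0.3], [0.1], [-0.2]])
bias = np.array([0.05])
y, phi = rbf_forward(x, centers, widths, weights, bias)
```

Because each basis function peaks at its own centre, the hidden layer responds locally to the input, which is what makes the RBF network easy to analyse compared with sigmoidal networks.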
u[k] = u[k−1] + Δuθ[k] + Δux[k] ... (8) RBF Learning Algorithm When neural networks are used in control applications, on-line learning and control is preferred. To achieve on-line learning and control, real-time control hardware has to be implemented in advance [8]. In this paper, we design a SIMULINK model for the learning algorithm, the gradient-based back-propagation algorithm. In this section, the back-propagation algorithm for the RBF network is derived. Define the neural network output as ŷ = ŷθ + ŷx ... (9) where ŷx = kpx ν1 + kix ν2 + kdx ν3 and ŷθ = kpθ ν4 + kiθ ν5 + kdθ ν6. Using equations (6), (7), (8) and (9), Δuθ[k] + Δux[k] = ξ + ŷ ... (10) The back-propagation learning algorithm is derived to generate the neural network output signals νi so as to identify the inverse dynamics as given in (10). The training signal ξ is the error function to be minimized, defined as ξ = kpx(ex[k] − ex[k−1]) + kix ex[k] + kdx(ex[k] − 2ex[k−1] + ex[k−2]) + kpθ(eθ[k] − eθ[k−1]) + kiθ eθ[k] + kdθ(eθ[k] − 2eθ[k−1] + eθ[k−2]) ... (11) If ξ = 0, then Δuθ[k] + Δux[k] = ŷ in (10). Define the objective function to be minimized as E = (1/2) ξ² ... (12) Differentiating equation (12) with respect to the weights w (wjk, bk, sj, µj) we obtain the detailed update equations ...(13) ...(14) ...(15) ...(16) Experimental setup The experimental setup of an RBFN-based inverted pendulum system is shown in figures 4-7. Results Figure 8 shows how the controller responds better than a linear PID controller in dynamic situations. Figures 9 and 10 show the response of the angle and position of the RBFN inverted pendulum system. Figure 9. Inverted pendulum angle control using RBFN controller [8] Figure 10. Cart position control using RBFN controller [8] Conclusion This paper presents SIMULINK modelling of a low-cost intelligent neural network controller for nonlinear systems. The RCT has been used as an online learning algorithm for the neural network.
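Since (12) is minimized by gradient descent, the update for the output-layer weights takes the familiar delta-rule form w ← w + η·ξ·φ. The sketch below assumes that standard form with ∂ŷ/∂wjk = φj; the article's exact update equations (13)-(16), which also adapt the centres and widths, are not reproduced here:

```python
import numpy as np

def update_output_weights(weights, bias, phi, xi, lr=0.01):
    """One gradient step on E = 0.5 * xi**2 for the output weights and bias.

    phi: hidden-layer activations (NH,), xi: scalar training signal,
    weights: (NH, 1), bias: (1,). Assumes the common delta-rule form."""
    weights = weights + lr * xi * phi[:, None]
    bias = bias + lr * xi
    return weights, bias

# Toy step: starting from zero weights, the update moves in the direction of phi
w = np.zeros((2, 1))
b = np.zeros(1)
phi = np.array([1.0, 0.5])
w, b = update_output_weights(w, b, phi, xi=2.0, lr=0.1)
```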
A neural network controller and PID controllers have been designed in SIMULINK. The neural network controller functions as an auxiliary controller to compensate for uncertainties in systems such that the performance of a primary PID-controlled system is improved. Experimental studies show that the implemented neural network controller works quite well for the position control of a robot finger [8], as well as for the inverted pendulum system. Position tracking control of the cart while balancing the pendulum has been successfully performed. Future research is to implement a lower-cost intelligent controller using a DSP. Figure 8. Comparison of RBFN Controller and PID Controller Figure 9. Inverted pendulum angle control using RBFN [8] Figure 10. Cart position control using RBFN controller [8] References [1] M. W. Spong, P. Corke, and R. Lozano, "Nonlinear control of the inertia wheel pendulum", Automatica, 37, pp. 1845-1851, 2001 [2] M. W. Spong, "The swing up control problem for the acrobot", IEEE Control Systems Magazine, 15, pp. 72-79, 1995 [3] W. White and R. Fales, "Control of double inverted pendulum with hydraulic actuation: a case study", Proc. of the American Control Conference, pp. 495-499, 1999 [4] Seul Jung, H. T. Cho, T. C. Hsia, "Neural network control for position tracking of a two-axis inverted pendulum system: Experimental studies", IEEE Transactions on Neural Networks, vol. 18, no. 4, pp. 1042-1048, 2007 [5] Jin Seok Noh, Geun Hyeong Lee, Ho Jin Choi, and Seul Jung, "Robust Control of a Mobile Inverted Pendulum Robot Using a RBF Neural Network Controller", Proceedings of the 2008 IEEE [6] R. O. Ambrose, R. T. Savely, S. M. Goza, P. Strawser, M. A. Diftler, I. Spain, and N. Radford, "Mobile manipulation using NASA's robonaut", IEEE ICRA, pp. 2104-2109, 2004 [7] F. Grasser, A. D'Arrigo, S. Colombi, and A. Rufer, "JOE: A mobile inverted pendulum", IEEE Trans. on Industrial Electronics, Vol. 49, No. 1, pp. 107-114, 2002 [8] Seul Jung and S. S. Kim, "Hardware implementation of a real-time neural network controller with a DSP and an FPGA for nonlinear systems", IEEE Transactions on Industrial Electronics, vol. 54, no. 1, pp. 265-271, 2007 [9] Seul Jung and S. S. Kim, "Control Experiment of a Wheel-Driven Mobile Inverted Pendulum Using Neural Network", IEEE Transactions on Control Systems Technology
IRIS Recognition and Identification System is a recognition system based on the principles of biometric recognition. A biometric system uniquely identifies and authenticates humans based on their physical or behavioural features. Iris recognition, used by us in this project, is one of the most reliable methods of biometric authentication; it recognizes a person by the pattern of the iris. No two irises are alike - not between identical twins, or even between the left and right eye of the same person. The iris, which is located behind the transparent cornea and aqueous humour of the eye, is a membrane of the eye that is responsible for controlling the diameter and size of the central darker pupil and the amount of light reaching the retina. The iris has many features that can be used to distinguish one iris from another. One of the primary visible characteristics is the trabecular meshwork, a tissue which gives the appearance of dividing the iris in a radial fashion and is permanently formed by the eighth month of gestation. The development of the iris is not under genetic influence (a process known as chaotic morphogenesis, occurring in the seventh month of gestation), which means that even identical twins have different irises. The fact that the iris is protected behind the eyelid, cornea, and aqueous humour means that, unlike other biometrics such as fingerprints, the likelihood of damage or abrasion is minimal. The iris is also not subject to the effects of aging, which means it remains in a stable form from about the age of one until death. The use of glasses or contact lenses (coloured or clear) has little effect on the representation of the iris and hence does not interfere with the recognition technology. The IRIS Recognition and Identification System gathers iris information from the segmented iris and encodes the pattern into bit information.
This bit information, or biometric template, is used to compare and identify authenticated or impostor users. Iris recognition algorithms also need to isolate and exclude artifacts as well as locate the circular iris region in the acquired eye image. Artifacts in the iris include eyelids and eyelashes partially covering it. Then, the extracted iris region is normalized. The normalization process unwraps the doughnut-shaped extracted iris into a rectangle of constant dimensions. The significant features of the normalized iris must be encoded so that comparisons between templates can be made. Our system makes use of a 1D Log-Gabor Filter to create a bit-wise biometric template. Finally, templates are matched using the Hamming distance. The Hamming distance gives a clear measure of the number of bits that differ between two bit patterns. A decision can then be made on whether the two patterns were generated from different irises or from the same one. IRIS, the IRIS Recognition & Identification System, consists of five steps: IRIS Recognition & Identification System Ruchin Singh Sanjana Bajracharya Saurab Rajkarnikar 062 Computer
1. Eye images acquisition For our project, IRIS, we acquired images from CASIA (Chinese Academy of Sciences-Institute of Automation) and used 5 sets of the iris images. CASIA has a total of 22,051 eye images from more than 100 subjects. All iris images are 8-bit gray-level JPEG files, collected under near-infrared illumination. Almost all the subjects are Chinese, except a few. 2. Segmentation Segmentation is the process that isolates the circular iris region of the eye images. The isolation is performed by the following steps: i. Gaussian smoothing This is the process where noise present in the images (the result of errors in the image acquisition process that produce pixel values not reflecting the true intensities of the real scene) is removed by smoothing the pixels of the image. Smoothing filters are used for blurring and noise reduction. ii. Edge detection This is the approach for detecting meaningful discontinuities in an image. To be classified as a meaningful edge point, the transition in gray level associated with that point has to be significantly stronger than the background at that point. For our project we have used the Sobel operator to determine the edges of the eye image. iii. Non-max suppression This stage finds the local maxima in the direction of the gradient and suppresses all others, minimizing false edges. A local maximum is found by comparing the pixel with its neighbours along the direction of the gradient. This helps to maintain single-pixel-thin edges before the final thresholding stage. iv. Hysteresis thresholding This process alleviates problems associated with edge discontinuities by identifying strong edges and preserving the relevant weak edges, in addition to maintaining some level of noise suppression. In hysteresis thresholding, we use two threshold values: th as the high threshold value and tl as the lower threshold value, with th > tl.
Pixel values above the th value are immediately classified as edges, while those below tl are discarded; pixels with values in between are kept only if they are connected to a strong edge.
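Step ii can be illustrated with a small Python sketch of the Sobel gradient magnitude. This is illustrative only; the project's actual implementation is not shown in the article:

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels (zero-padded borders)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # horizontal-edge kernel is the transpose
    h, w = img.shape
    padded = np.pad(img.astype(float), 1)
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * kx)
            gy[i, j] = np.sum(win * ky)
    return np.hypot(gx, gy)

# A vertical step edge yields a strong response along the boundary column
img = np.zeros((5, 5))
img[:, 3:] = 1.0
mag = sobel_magnitude(img)
```

The magnitude image would then be thinned by non-max suppression and thresholded with hysteresis, as described in steps iii and iv.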
v. Circular Hough Transform The Circular Hough Transform is a technique that locates circular shapes in images and has been used to extract circles and ellipses (or conic sections). The iris and pupil are detected and differentiated by this process in our IRIS Recognition & Identification System. After these processes are completed, we remove noise (such as eyelids and eyelashes) by threshold processing and the Linear Hough Transform. 3. Normalisation Normalisation is the process of unwrapping the segmented iris into constant rectangular dimensions. The normalisation process employed in our system takes into account pupil dilation and the zoom factor of the camera. In our system, we have employed the Rubber-Sheet model to unwrap the iris. 4. Encoding Encoding is the process that converts the normalised iris template into a bit-wise template. The discriminating features of the iris are encoded here. In our system, we have used a Log-Gabor Filter to encode the normalised iris image. The Log-Gabor Filter encodes only the phase information of the iris image. 5. Matching Matching is the measure of similarity or dissimilarity between two iris templates. In our system, matching is performed by calculating the Hamming Distance between the two iris templates. Our IRIS (IRIS Recognition & Identification System) was fairly accurate, with an FAR of 11.28% and an FRR of 3.09%. FAR stands for False Acceptance Rate, which measures the probability of an individual being wrongly identified as another individual, and FRR stands for False Reject Rate, which measures the probability of an enrolled individual not being identified by the system. The system designed and implemented using the above steps could successfully accept or reject individuals by identifying their iris. References 1. John Daugman, PhD, OBE, University of Cambridge, "How Iris Recognition Works", The Computer Laboratory, Cambridge CB2 3QG, U.K. 2.
"Chinese Academy of Sciences Institute of Automation", Database of 756 Greyscale Eye Images.
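The matching step described above reduces to counting differing bits. A minimal Python sketch follows; the decision threshold is illustrative, and real systems also shift templates and apply noise masks, which this omits:

```python
def hamming_distance(a, b):
    """Fraction of differing bits between two equal-length bit templates."""
    if len(a) != len(b):
        raise ValueError("templates must be the same length")
    return sum(x != y for x, y in zip(a, b)) / len(a)

# Toy 8-bit templates; production templates are thousands of bits long
t1 = [1, 0, 1, 1, 0, 0, 1, 0]
t2 = [1, 0, 0, 1, 0, 1, 1, 0]
d = hamming_distance(t1, t2)  # 2 of 8 bits differ -> 0.25
same_iris = d < 0.32  # illustrative threshold: low distance => same iris
```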
RFID Ashish Shrestha 062 Electronics RFID (radio frequency identification) technology was developed in the 1920s. The Massachusetts Institute of Technology (MIT) developed the technology to allow robots to "talk" to each other. In 1939, the IFF (identification, friend or foe) transponder was developed. The IFF transponder was used in aircraft to identify themselves as friend or foe to other aircraft; the British used this technology during World War II to identify their aircraft. In the 1950s these systems were developed specifically for governmental and military use in the USA and USSR. Semiconductor technologies at that time were in their infancy, and devices were large, power-hungry and expensive, which ruled out their use in passive RFID systems. The real explosion of passive RFID technology came at the end of the 1980s and was made possible by the improved size, current consumption and price of semiconductor technologies. This enabled acceptable RFID performance (communication distance) for passive tags at an acceptable investment. The first generations of RFID tags were only used as identification devices, having only a fixed identification code stored in the tag's memory. There was mainly one-way communication, with the tag communicating back its memory content when triggered by reader activation. An example of an early RFID system patent from Mario Cardullo is shown in figure 1. Now RFID systems are widely used in applications whose primary task is to identify items, but there are also new applications where high security and computation as well as integrated sensors and actuators are required. Due to the current cost structure of RFID systems, new application fields can be justified based on return on investment (ROI). Figure 1. Example of an early RFID system patent from Mario Cardullo
Automatic identification (auto ID) technologies help machines or computers identify objects using automatic data capture. RFID is one type of auto ID technology that uses radio waves to identify, monitor, and manage individual objects as they move between physical locations. Although there are a variety of methods for identifying objects with RFID, the most common is storing a serial number that identifies a product and its related information. RFID devices and software must be supported by an advanced software architecture that enables the collection and distribution of location-based information in real time. A basic RFID system consists of three components: • A transponder (RF tag) • An antenna or coil • A transceiver (with decoder) RFID tags are small devices containing a chip and an antenna that store the information for object identification. Tags can be applied to containers, pallets, cases, or individual items. With no line-of-sight requirement, the tag transmits information to the reader, and the reader converts the incoming radio waves into a form that can be read by a computer system. An RFID tag can be active (with a battery) or passive (powered by the signal emitted by the reader). The functionality of both types is similar; the main difference is the increased communication distance and computation capability of active tags versus the lower cost of passive transponders. The integrated battery increases the cost of the transponder, limits the tag's lifetime, causes environmental issues over disposal, and limits the form factor and thickness of the tags. These disadvantages of active transponders limit the applications where such tags can be used. Due to the very high market share of the passive technology, only this technology will be presented in the following sections.
The tags mostly act as slaves and rely on the reader to activate them, using the "Reader Talks First" (RTF) concept. The reader supplies energy via the RF field and transmits requests/commands to instruct the tag about the action to be executed. The tag receives and decodes RF signals coming from the reader, executes the instructed action, and may respond with data or status information. The cost structure of the tag can be roughly split into costs for IC, antenna, assembly, and test. The electronics part (IC, integrated circuit) of the tag consists of some basic functional modules which enable certain functionality, as shown in figure 2. Figure 2. Basic functional modules of RFID tags
• The Rectifier rectifies the induced voltage to supply the IC with energy. • The Limiter limits the RF voltage at the input pins to avoid overvoltage, which would destroy the circuitry. • The Clock Regenerator extracts the frequency signal from the RF signal, which is used as an internal clock. • The Demodulator decodes the incoming data signal and generates a binary bit stream representing the command and data to be executed. These data are used by the IC to execute the requested activities. • The Modulator modulates the response data. • The Logic part represents the microcontroller or digital circuitry of the tag. • The Memory unit (mostly EEPROM) contains the tag-specific data as well as additional memory where application-specific data can be programmed. The Reader consists of a control unit and the radio frequency (RF) unit containing the transmitter and receiver modules. In the control unit, the firmware and hardware are implemented to control the reader activities, such as communication with a host computer and the tag, as well as data processing. The receiver receives the signal generated by the tag, demodulates and decodes the data, and sends the binary data to the control unit for further processing. The transmitter generates the RF signal (frequency and power level) which is connected to the antenna resonance circuit. Why RFID? In the past, the most used identification system has been the barcode. The main reasons for the wide usage of this system are the low cost of a barcode, obtained by simply printing it on the items, and the improved performance (detection rate and reliability) of the new generation of scanners. There are still some disadvantages of this technology though: • Data cannot be modified or added • Requires line of sight for operation (the label must be seen by the reader) • High maintenance effort for the complex scanner optics Modern application processes like item tracking require extended capabilities of the ID system which cannot be achieved by barcodes. In these applications, RFID systems can add value through extended functionality. This should not be misunderstood to imply the complete replacement of barcodes by RFID. RFID is an alternative to barcodes, and the two technologies will coexist based on the performance and capability requirements and the specific investment needed to use RFID for these applications. Most applications will require the use of both barcode and RFID in parallel. Summarizing the advantages of RFID systems in relation to other identification systems currently in use, and especially barcodes: • Battery-less: supply voltage derived from the RF field • No line of sight required for communication • Large operating and communication range • Read and write capability of the transponder memory • High communication speed • High data capacity (user memory) • High data security • Data encryption/authentication capability • Multiple-tag read capability with anti-collision (50-100 tags) • Durability and reliability • Resistance to environmental influence • Reusability of the transponder • Hands-free operation • Miniaturized (IC size < 1 mm²) • Very low power
Symfony is a web application framework written in PHP which follows the model-view-controller (MVC) paradigm. Symfony aims to speed up the creation and maintenance of web applications and to replace repetitive coding tasks with power, control and pleasure. Its few prerequisites for installation are a UNIX or Windows system with a web server and PHP 5 installed. It is currently compatible with the following object-relational mappers: Propel and Doctrine. Object-relational mapping is a programming technique for converting data between incompatible type systems in relational databases and object-oriented programming languages. ORM often reduces the amount of code that needs to be written, making the software more robust. Symfony uses Propel as the ORM, and Propel uses Creole for database abstraction. It allows you to access your database using a set of objects, providing a simple API for storing and retrieving data. Doctrine is an object-relational mapper (ORM) for PHP 5 that sits on top of a powerful database abstraction layer (DBAL). One of its key features is the option to write database queries in a proprietary object-oriented SQL dialect called Doctrine Query Language (DQL). This provides developers with a powerful alternative to SQL that maintains flexibility without requiring unnecessary code duplication. Symfony is aimed at building robust applications in an enterprise context, and aims to give developers full control over the configuration: from the directory structure to the foreign libraries, almost everything can be customized. To match enterprise development guidelines, Symfony is bundled with additional tools to help developers test, debug and document projects.
Symfony was built in order to fulfill the following requirements: • Easy to install and configure on most platforms (and guaranteed to work on standard *nix and Windows platforms) • Database engine-independent • Simple to use in most cases, but still flexible enough to adapt to complex cases • Based on the premise of convention over configuration - the developer needs to configure only the unconventional • Compliant with most web best practices and design patterns • Enterprise-ready - adaptable to existing information technology (IT) policies and architectures, and stable enough for long-term projects • Very readable code, with phpDocumentor comments, for easy maintenance • Easy to extend, allowing for integration with other vendor libraries Symfony provides a lot of features seamlessly integrated together, such as: • simple templating and helpers • cache management • smart URLs • scaffolding Symfony and MVC Architecture Suraj Maharjan Ram Kasula Prasanna Man Bajracharya 062 Computer "Symfony is aimed at building robust applications in an enterprise context, and aims to give developers full control over the configuration: from the directory structure to the foreign libraries, almost everything can be customized."
• multilingualism and I18N support • object model and MVC separation • Ajax support • Enterprise ready The MVC architecture Model The model is often related to the business logic of the application (the database belongs to this layer). It knows all the data that needs to be displayed. It encapsulates core data and logic, and it is always isolated from the user interface (UI) and the way data needs to be displayed. The model is used to manage information and notify observers when that information changes. It contains only data and functionality that are related by a common purpose. If we need to model two groups of unrelated data and functionality, we create two separate models. A model encapsulates more than just data and the functions that operate on it: it is meant to serve as a computational approximation or abstraction of some real-world process or system. It is the domain-specific representation of the information on which the application operates. Domain logic adds meaning to raw data (for example, calculating whether today is the user's birthday, or the totals, taxes, and shipping charges for shopping cart items). Many applications use a persistent storage mechanism (such as a database) to store data. MVC does not specifically mention the data access layer because it is understood to be underneath or encapsulated by the model. View The view is responsible for rendering the output correlated to a particular action. It is the UI part of the application. It uses read-only methods of the model and queries data to display to the end users. It may be a window GUI or an HTML page. The view encapsulates the presentation of the data; there can be many views of the common data. The view is responsible for mapping graphics onto a device. A view typically has a one-to-one correspondence with a display surface and knows how to render to it. A view attaches to a model and renders its contents to the display surface.
In addition, when the model changes, the view automatically redraws the affected part of the image to reflect those changes. A view may be a composite view containing several sub-views, which may themselves contain several sub-views. Controller The controller is a piece of code that calls the model to get some data, which it passes to the view for rendering to the client. It acts as the glue between models and views. It accepts input from the user and makes requests of the model for the data needed to produce a new view. A controller is the means by which the user interacts with the application. A controller accepts input from the user and instructs the model and view to perform actions based on that input. In effect, the controller is responsible for mapping end-user actions to application responses. For example, if the user clicks the mouse button or chooses a menu item, the controller is responsible for determining how the application should respond. So the controller layer contains the code linking the business logic and the presentation, and is split into several components that can be used for different purposes. MVC comes in different flavors; control flow is generally as follows: The user interacts with the user interface in some way (for example, presses a mouse button). The controller handles the input event from the user interface, often via a registered handler or callback. The controller notifies the model of the user action, possibly resulting in a change in the model's state (for example, the controller updates the user's shopping cart). A view uses the model indirectly to generate an appropriate user interface (for example, the view lists the shopping cart's contents). The view gets its own data from the model. The model and controller have no direct knowledge of the view. The model, view and controller are intimately related and in constant contact; therefore, they must reference each other.
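The control flow just described can be condensed into a toy sketch, written here in Python rather than Symfony's PHP, with hypothetical class and method names:

```python
class Model:
    """Holds the data and notifies observers when it changes."""
    def __init__(self):
        self.items = []
        self.observers = []

    def add_item(self, item):
        self.items.append(item)
        for obs in self.observers:  # push notification to attached views
            obs.refresh(self)

class View:
    """Renders the model's data; redrawn whenever it is notified."""
    def __init__(self):
        self.rendered = ""

    def refresh(self, model):
        self.rendered = "Cart: " + ", ".join(model.items)

class Controller:
    """Maps a user action onto a model update."""
    def __init__(self, model):
        self.model = model

    def on_add_to_cart(self, item):
        self.model.add_item(item)

# Wiring: the view observes the model; the controller drives it.
model, view = Model(), View()
model.observers.append(view)
Controller(model).on_add_to_cart("book")
# view.rendered is now "Cart: book"
```

Note that the controller never touches the view directly: the view is redrawn only because it observes the model, which is exactly the decoupling MVC is after.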
The picture above illustrates the basic Model-View-Controller relationship: Advantages of MVC architecture • Modularized development: Modularization is the process of dividing any complex problem into smaller sub-modules. In MVC, we divide our system into three parts in order to reduce complexity.
• More control over URLs: almost all MVC-based frameworks have a URL routing feature that gives us more control over the URL, so sites can be more secure. • Maintainability and code reuse: the modular design of MVC supports the design goal of reusable software. As MVC requires definite rules and a consistent coding style, the result can be much more maintainable and reusable software. • Test-driven development: by following MVC, one can easily test each and every part of the system. Moreover, most MVC frameworks have one or more built-in testing frameworks. • Separation of concerns: since MVC has three components, operations are quite isolated from each other. For example, people working on the view part can concentrate only on the UI and the part visible to the end users; people working on the model part can concentrate on the business logic and the functional requirements of the system, or the "what" part of the system; and finally, people working on the controller section may have knowledge of both the view and model sections so that interaction between the other two components can be made easily. There is a clear designation of roles for each stakeholder of a system. Disadvantages of MVC • Level of complexity: MVC can increase the level of complexity of a system since MVC requires in-depth planning, so any wrong decision taken early could impact the whole application life cycle. • Difficulty in managing files: this may be context dependent. Some people might feel odd when dealing with more files; an MVC-based system has comparatively more files than a non-MVC-based system.
Example: Backend control panel generation using Symfony
Install Symfony, then generate the project:
  php lib/vendor/symfony/data/bin/symfony generate:project project_name (e.g. eshopping)
Generate an application named backend:
  $ php symfony generate:app --escaping-strategy=on --csrf-secret=UniqueSecret1 backend
In project_name/config/schema.yml write down the schema (model schema), e.g.:
propel:
  member:
    _attributes: { idMethod: native }
    id:
    credential: { type: varchar(225), required: true, default: '' }
    first_name: { type: varchar(128) }
    last_name: { type: varchar(128) }
    email: { type: varchar(128), index: unique }
    secret_question: { type: longvarchar }
    secret_answer: { type: longvarchar }
    primary_phone: { type: varchar(32) }
    secondary_phone: { type: varchar(32) }
    password: { type: varchar(32), required: true }
    confirm_code: { type: varchar(32) }
    is_confirmed: { type: INTEGER, required: true }
    is_deleted: { type: INTEGER, required: true }
    is_active: { type: INTEGER, required: true }
    access_num: { type: INTEGER, required: true }
    created_at: ~
    updated_at: ~
    modified_at: { type: timestamp }

In project_name/config/databases.yml connect with the database, e.g.:

dev:
  propel:
    param:
      classname: DebugPDO
test:
  propel:
    param:
      classname: DebugPDO
all:
  propel:
    class: sfPropelDatabase
    param:
      classname: PropelPDO
      dsn: mysql:dbname=eshopping;host=localhost
      username: root
      password:
      encoding: utf8
      persistent: true
      pooling: true

Load test data in a fixture, e.g. in project_name/data/fixtures:

Member:
  Member_1:
    credential: 'member'
    email:
    password: test
    first_name: Suraj
    last_name: Maharjan
    primary_phone: 4215590
    created_at: 2007-12-17 10:17:39
    is_deleted: 0
    confirm_code: 14ad75adde376a57cda2069a6d5902d6
    is_confirmed: 1
  Member_2:
    credential: 'member'
    email:
    password: 1234
    first_name: Test
    last_name: Tester
    primary_phone: 32123123121
    created_at: 2007-12-17 10:17:39
    is_deleted: 0
    confirm_code: 14ad75adde376a57cda2069a6d5902d6
    is_confirmed: 1

Build the SQL and model from our schema:
  php symfony propel:build-all --no-confirmation
Generate the member module:
  php symfony propel:generate-admin backend Member --module=member
Load the test data:
  php symfony propel:data-load

"The glass is neither half-full nor half-empty: it's twice as big as it needs to be." "If it weren't for C, we'd be writing programs in BASI, PASAL, and OBOL."
Explained: The Discrete Fourier Transform

The theories of an early-19th-century French mathematician have emerged from obscurity to become part of the basic language of engineering.

In 1811, Joseph Fourier, the 43-year-old prefect of the French district of Isère, entered a competition in heat research sponsored by the French Academy of Sciences. The paper he submitted described a novel analytical technique that we today call the Fourier transform, and it won the competition; but the prize jury declined to publish it, criticizing the sloppiness of Fourier’s reasoning. Now, however, his name is everywhere. The Fourier transform is a way to decompose a signal into its constituent frequencies, and versions of it are used to generate and filter cell-phone and Wi-Fi transmissions, and to compress audio, image, and video files so that they take up less bandwidth. It’s so ubiquitous that “you don’t really study the Fourier transform for what it is,” says Laurent Demanet, an assistant professor of applied mathematics at MIT. “You take a class in signal processing, and there it is. You don’t have any choice.”

The Fourier transform comes in three varieties: the plain old Fourier transform, the Fourier series, and the discrete Fourier transform. But it’s the discrete Fourier transform, or DFT, that accounts for the Fourier revival. In 1965, the computer scientists James Cooley and John Tukey described an algorithm called the fast Fourier transform, which made it much easier to calculate DFTs on a computer. All of a sudden, the DFT became a practical way to process digital signals.

To get a sense of what the DFT does, consider an MP3 player plugged into a loudspeaker. The MP3 player sends the speaker audio information as fluctuations in the voltage of an electrical signal. Those fluctuations cause the speaker drum to vibrate, which in turn causes air particles to move, producing sound.
An audio signal’s fluctuations over time can be depicted as a graph: the x-axis is time, and the y-axis is the voltage of the electrical signal, or perhaps the movement of the speaker drum or air particles. Either way, the signal ends up looking like an erratic, wavelike squiggle. But when you listen to the sound produced from that squiggle, you can clearly distinguish all the instruments in a symphony orchestra, playing discrete notes at the same time.

That’s because the erratic squiggle is, effectively, the sum of a number of much more regular squiggles, which represent different frequencies of sound. “Frequency” just means the rate at which a voltage fluctuates, and it can be represented as the rate at which a regular squiggle goes up and down. When you add two frequencies together, the resulting squiggle goes up where both the component frequencies go up, goes down where they both go down, and does something in between where they’re going in different directions.

The DFT does mathematically what the human ear does physically: decompose a signal into its component frequencies. Unlike the analog signal from, say, a record player, the digital signal from an MP3 player is just a series of numbers: a CD-quality digital audio recording, for instance, collects 44,100 samples a second. If you extract some number of consecutive values from a digital signal — 8, or 128, or 1,000 — the DFT represents them as the weighted sum of an equivalent number of frequencies. (“Weighted” just means that some of the frequencies count more than others toward the total.)

Demanet points out that the DFT has plenty of applications, in areas like wireless technologies, spectroscopy, and magnetic resonance imaging. But ultimately, he says, “It’s hard to explain what sort of impact Fourier’s had,” because the Fourier transform is such a fundamental concept that by now, “it’s part of the language.”

[Source: MIT News Office, Massachusetts Institute of Technology]
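The decomposition the article describes is easy to see in code. Below is a minimal sketch of the DFT in plain Python — the naive O(n²) version, not Cooley and Tukey's fast algorithm: a signal built as the sum of two regular "squiggles", at 3 and 7 cycles per window, is taken apart again into exactly those two frequencies.

```python
import cmath
import math

def dft(samples):
    # Naive discrete Fourier transform: N samples in, N complex
    # frequency coefficients out (the "weights" of the weighted sum).
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

N = 64
# An "erratic squiggle" that is secretly the sum of two regular ones:
# 3 cycles and 7 cycles per window, the second at half the amplitude.
signal = [math.sin(2 * math.pi * 3 * t / N) + 0.5 * math.sin(2 * math.pi * 7 * t / N)
          for t in range(N)]

spectrum = [abs(c) for c in dft(signal)]
# The two largest magnitudes in the first half of the spectrum sit at
# exactly the component frequencies.
peaks = sorted(sorted(range(N // 2), key=lambda k: spectrum[k])[-2:])
print(peaks)  # → [3, 7]
```

The fast Fourier transform produces the same coefficients, just in O(n log n) time, which is what made the DFT practical on computers.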
SIS is an installation file format for Symbian OS. Generally, Symbian OS build tools are command-line based and work without any integrated development environment (IDE) if a suitable Software Development Kit (SDK) is used. Development is much more complicated from the command line, as many things have to be done manually; an IDE is used for the programmer's simplicity and flexibility. For building any project with Microsoft Visual C++ 6 as the IDE, S60 Version 2 Feature Pack 2 is used as the SDK. Perform the following operations:

• Install Microsoft Visual C++ 6 as the IDE.
• Along with the IDE, the following software tools are to be installed on the computer:
  ♦ Java JRE: Choose the latest version of Java, i.e. Java version 1.6.0.
  ♦ ActivePerl: It is needed by the Symbian OS tool chain to compile the project. Make sure that the installation program sets the path.
• Open My Computer -> Properties -> Advanced -> Environment Variables.
• Add C:\Program Files\Java\jdk1.6.0\bin to the PATH variable.
• Add .pl to the PATHEXT variable so that Perl scripts don't need the extension to be typed.
• Install S60 Version 2 Feature Pack 2.

To create a new project and run it on the emulator

Let us create a simple “HelloWorld” project.

• Select the Application Wizard of the SDK from the location C:\Documents and Settings\All Users\Start Menu\Programs\Series 60 Developer Tools\2nd Edition SDK Feature Pack 2\1.0\Tools. Or, click on Start -> All Programs -> Series 60 Developer Tools -> 2nd Edition SDK Feature Pack 2 -> 1.0 -> Tools -> Application Wizard.
• Fill in the project name as “HelloWorld”; the wizard plugin should be set to Series 60 Application Wizard. Click on the Create button. Note that the default project folder is C:\Work\HelloWorld.
• Select the type of application to create. The application title and application UID may or may not be changed; it depends on the user. Click on the Next button.
• Fill up the copyright message and the name of the author. Click on the Next button.
• Click on the Generate button and the Application Wizard will create a project named HelloWorld in the default location C:\Work\HelloWorld. The IDE (i.e. Visual C++ 6.0) is started whenever the project is created.
• Press “Ctrl + F7” to compile the project and press “Ctrl + F5” to execute the program in the emulator.
• The emulator, a virtual machine of an S60 mobile, is then displayed. Run HelloWorld in the emulator and observe the result.

Content of the HelloWorld folder

The HelloWorld folder contains directories with the following names:

• Aif: The application information resource file is located here. This contains bitmaps and captions associated with the application. The bitmap files associated with the aif file are also stored in this folder.

How to create a Symbian Installation Source (SIS) using Visual C++ 6.0
Kishoj Bajracharya, 062 Computer
Computer Operation & Programming
• Data: Resource files for the application are stored under this folder. It is also common to put resource files in the group directory.
• Group: This contains the bld.inf file (component definition file) and the .mmp (project definition) file. The generated abld.bat file will also be found here; therefore, command-line builds tend to be done from this folder.
• Inc: All the header files and the string localization file (.loc in Symbian pre-v9.0 and .rls in Symbian v9.0) are found here. For a Symbian OS UI application, at least four classes are required, and they are all created during the creation of the new project. The four classes are:
  CHelloWorldApp: Creates the document, defines the UID, and defines application properties.
  CHelloWorldAppUi: Creates the view and handles command processing and user interactions.
  CHelloWorldContainer: Displays data on the screen; command handling for the view is done here.
  CHelloWorldDocument: Creates the AppUi and takes care of the application's data model.
• Install: The installation .pkg file is located here.
• Src: All the source files will be found here. The classes of the source folder are as follows:
  ♦ An application: The application class serves to define the properties of the application, and also to manufacture a new blank document. In the simplest case the only property that we have to define is the application’s unique identifier, or UID.
  ♦ A document: A document represents the data model for the application. If the application is file-based, the document is responsible for storing and restoring the application’s data. Even if the application is not file-based, it must have a document class, even though that class doesn’t do much apart from creating the application user interface (app UI).
♦ An app UI: The app UI is entirely invisible. It creates an application view (app view) to handle drawing and screen-based interaction. In addition, it provides the means for processing commands that may be generated, for example, by menu items.
♦ An app view: This is, in fact, a concrete control whose purpose is to display the application data on screen and allow us to interact with it. In the simplest case, an app view provides only the means of drawing to the screen, but most application views will also provide functions for handling input events.

To run an existing project

• Place the folder of the existing project, say HelloWorld, in the drive where the SDK is installed.
• Open the folder HelloWorld and then open the sub-folder named ‘group’.
• Click on the file with .dsw as its extension. It will open the IDE containing the source code of the project.
• Press “Ctrl + F7” to compile the project and press “Ctrl + F5” to execute the program in the emulator.
• The emulator, a virtual machine of an S60 mobile, is then displayed. Run HelloWorld in the emulator and observe the result.

To get the HelloWorld application onto mobiles

• Run the command prompt.
• Go to the ‘group’ folder of the project by typing, say, cd C:\Work\HelloWorld\group.
• Type ‘bldmake bldfiles’; this creates abld.bat and some other information files.
• To compile for mobile phones, type ‘abld build thumb urel’ or ‘abld build thumb udeb’.
• Copy the file with extension .pkg from the sis sub-folder of the project to the location C:\Symbian\8.0a\S60_2nd_FP2\Examples\toolsandutilities\install.
• To create a package, go to that location from the command line and type makesis followed by the file with extension .pkg. This creates a file with extension .SIS. Send this file to either the phone memory or the memory card of the mobile phone. This is how a Symbian Installation Source (SIS) is developed.
To install HelloWorld on the cellular phone

Transfer the HelloWorld.sis file to the mobile phone through Bluetooth, Nokia PC Suite or a memory card. Install ‘HelloWorld.sis’ to either the phone memory or the memory card of the mobile phone.
If you design websites, you might want to install a web server on your computer in order to test your sites in an environment that matches the real thing as closely as possible. This article shows how to configure the open-source Apache HTTP (web) server and how to make it work with not one site but as many as you require, using a technique called name-based virtual hosting. It allows you to access your local copy of a site using an address such as http://site1.local/ instead of http://localhost/~myuser/myproject/.

Early Web servers were designed to handle the contents of a single site. The standard way of hosting several Web sites on the same machine was to install and configure different, separate Web server instances. As the Internet grew, so did the need for hosting multiple Web sites, and a more efficient solution was developed: virtual hosting. Virtual hosting allows a single instance of Apache to serve different Web sites, identified by their domain names. IP-based virtual hosting means that each of the domains is assigned a different IP address; name-based virtual hosting means that several domains share a single IP address. Name-based virtual hosting requires HTTP/1.1 support.

On Windows

Configuring Apache: The first file we'll need to edit is the Apache httpd.conf file. If you installed the Apache software using the download from the Apache web site, you should have a menu item that will open this file for editing: click Start -> Programs -> Apache HTTP Server -> Configure Apache Server -> Edit the Apache httpd.conf Configuration File. If you don't have that start menu item, start your text editor and open the file. It will be in a sub-folder named conf of your Apache folder, for example C:\Program Files\Apache Group\Apache\conf\httpd.conf.

Configuration

Note that Apache changed the preferred method for configuring the server with the release of Apache 2.2. For versions beginning with 2.2, the preferred configuration is more modular.

Implementing Virtual Hosting
Ganesh Tiwari, Biraj Upadhyaya, 063 Computer
Setting up a virtual host as described here will still work with the newer versions, but to follow the modular approach, the only edit to httpd.conf is to uncomment (remove the # from the beginning of) the following line:

#Include conf/extra/httpd-vhosts.conf

Everything else is entered in the file httpd-vhosts.conf, which will be located in the extra folder below the folder containing httpd.conf.

Security

Version 2.2 also changed some of the default security configuration parameters. To set things up the way you need them, add the following block either to your httpd.conf file, just above the virtual hosts, or to your httpd-vhosts.conf file:

<Directory "C:\My Sites">
Order Deny,Allow
Allow from all
</Directory>

The above assumes you're using the directory structure described below. Adjust it as necessary to reflect your actual directory.

Now, for this example, we'll assume that you have your web sites located in a folder on your C drive called My Sites. Each web site has a sub-folder of its own under that folder, like this:

C:\My Sites\Site1
C:\My Sites\Site2

Say also, for this example, that the domains for the two sites are and We're going to set up virtual hosts for those two sites using the domain names site1.local and site2.local. This way, you'll be able to tell at a glance whether you're looking at the live site or the testing site.
In reality, you can call the domains anything you want. You could just as easily name them microsoft.monkeybutt and ibm.greentambourine. I chose the convention of using the same domain name along with the .local TLD to simplify and minimize the typing needed to switch between the live site and the testing site. The only important point, and it's really important, is that you NEVER use an actual, real, live domain name. If you used, for example, for the local virtual host, you would never be able to actually reach the live site: all requests for the live site would be re-directed to your local virtual host.

Go to the very bottom of your httpd.conf file in your text editor. You should see an example of a virtual host there. Each line of that example will begin with an octothorpe (#). The octothorpe character marks the line as a comment, so the example is not executed. Add the following lines below that example:


DocumentRoot "C:\My Sites\Site1"
ServerName site1.local

DocumentRoot "C:\My Sites\Site2"
ServerName site2.local

That's all you need to do! Save and close the file. That will tell the Apache server everything it needs to know in order for it to serve the pages using the domain names site1.local and site2.local.

One note: in the above example, we have a space in the path. Because of that, we put quotation marks around the document root directory. If the path does not have any spaces in it, do not quote the path. If the directory used for your sites were, for example, MySites instead of My Sites, the document root line would look like this instead:

DocumentRoot C:\MySites\Site1

Resolving the DNS issue

Obviously, if you typed http://site1.local in your browser, it would not be found by your Internet provider's DNS server. We're next going to edit another file to work around that.
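What Apache does with those directives can be sketched in a few lines: name-based virtual hosting is essentially a lookup from the Host header of each HTTP/1.1 request to a DocumentRoot. This is only an illustration of the idea — the host names and paths mirror the example setup, and the fallback behavior is simplified.

```python
# Minimal sketch of name-based virtual host dispatch: one server address,
# several sites, selected by the Host header of the incoming request.
virtual_hosts = {
    "site1.local": r"C:\My Sites\Site1",
    "site2.local": r"C:\My Sites\Site2",
}
# Simplified default; real Apache falls back to the first virtual host listed.
default_root = r"C:\Apache\htdocs"

def document_root(host_header):
    # Return the document root to serve for a given Host header.
    # Host names are case-insensitive, hence the .lower().
    return virtual_hosts.get(host_header.lower(), default_root)

print(document_root("site1.local"))      # → C:\My Sites\Site1
print(document_root("unknown.example"))  # → C:\Apache\htdocs
```

The same single listening socket serves every site; only the Host header decides which directory the files come from, which is why this needs no extra IP addresses.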
The second file you need to edit is called hosts, with no file extension. It is a Windows system file, and it enables you to enter specific addresses for specific domains instead of using a DNS lookup. The normal location for this file is:

C:\WINNT\system32\drivers\etc\hosts
or
C:\Windows\system32\drivers\etc\hosts

If you don't find it there, do a search in your Windows directory for the word hosts in the file name. The file you want is called hosts, with no file extension. The correct file will begin with the following lines:

# Copyright (c) 1993-1999 Microsoft Corp.
# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.

Once again, in this file, the octothorpe character is a comment marker; lines beginning with it are comments. In all likelihood, there will be nothing there except for comments. If there are any other non-commented entries, leave them alone. Just go to the bottom of the file, below all the comments and any existing entries, and add the following two lines: site1.local site2.local

That's all you need to do there. Save and close the hosts file.

The only remaining thing you need to do is to restart the Apache server. You need to do this because Apache only reads the configuration file when it first starts up. Click Start -> Programs -> Apache HTTP Server -> Control Apache Server -> Restart. If you don't have that menu item, open a command prompt, change to the Apache directory, type the following command and press the Enter key:

apache -w -n "Apache" -k restart

You should see a message like this:

The Apache service is restarting.
The Apache service has restarted.

That's it! You're done! Close the command window and start your web browser. In the browser's address bar, type http://site1.local and hit the Enter key. You should now see your local copy of site1.

Okay, now I'll mention one very small, but possibly important, caveat. When you create the virtual hosts like this, the default http://localhost will no longer work.
In many cases, that is
unimportant. However, if you're using something like phpMyAdmin, you'll still need it. The solution is to create one additional virtual host called "localhost" that points to the original Apache htdocs folder. It might look something like this:

DocumentRoot C:\Apache\htdocs
ServerName localhost

Don't forget to include that additional virtual host when you edit the Windows hosts file.

For Debian, Ubuntu

Once the server is installed, it is time to get into Apache 2 configuration. Let's open Apache's main configuration file, namely /etc/apache2/apache2.conf. A search for the word virtual brings us to the following line:

Include /etc/apache2/sites-enabled/[^.#]*

This means that when starting, Apache will look for files in /etc/apache2/sites-enabled/. Let's go there and see what is in it.

$ cd /etc/apache2/sites-enabled/
$ ls -l
total 1
lrwxrwxrwx 1 root root 36 2005-12-27 01:42 000-default -> /etc/apache2/sites-available/default

Well, this only links to a file in the directory /etc/apache2/sites-available/. The point in doing so, mainly when you are using your box as a web server, is that it allows you to:

1. Have a simple main configuration file.
2. Be able to edit or create a new host by creating or editing a file in /etc/apache2/sites-available/.
3.
In case your web server doesn't restart because of misconfiguration, be able to simply remove, from /etc/apache2/sites-enabled/, the link pointing to the malformed file in /etc/apache2/sites-available/.

Now let's say you want to be able to map a domain name — we'll use www.mysite.local as a placeholder (never use a real, live domain) — to your local machine, using the code in /home/myuser/public_html/. While in /etc/apache2/sites-available, create a new file (let's say www.mysite.local):

$ sudo vi www.mysite.local

Now add the following lines:

<VirtualHost *>
    ServerAdmin webmaster@localhost
    ServerName www.mysite.local
    # We want to be able to access the web site using www.mysite.local or mysite.local
    ServerAlias mysite.local
    DocumentRoot /home/myuser/public_html/
    # if using awstats
    ScriptAlias /awstats/ /usr/lib/cgi-bin/
    # we want a specific log file for this server
    CustomLog /var/log/apache2/mysite.access.log combined
</VirtualHost>

Now we have specified a new host to Apache, but it is not yet linked from the directory where Apache actually looks for virtual hosts. Let's go to:

$ cd /etc/apache2/sites-enabled/

and create a link to the file we just created:

$ sudo ln -s /etc/apache2/sites-available/www.mysite.local

Now Apache is almost ready to restart, but before doing so, we must inform our Linux system that www.mysite.local and mysite.local are not to be looked for on the net, but on the local machine instead. To do so, simply edit /etc/hosts and add the new host names at the end of the line beginning with, which is localhost. In the end, your file should look like: localhost.localdomain localhost www.mysite.local mysite.local

And now we are done; simply reload Apache:

$ sudo /etc/init.d/apache2 reload

Open your web browser and enter the address http://www.mysite.local/. Magic! It runs the same as when you were using http://localhost/~myuser/, but it is far more useful when developing a web service: you can develop applications on your machine just as if they were on the real web site.

To enable a new virtual host, simply type:

sudo a2ensite mysiteavailable-site

To disable a virtual host:

sudo a2dissite mysiteavailable-site

where mysiteavailable-site is the name of the virtual host you want to enable or disable; so in our example:

sudo a2ensite www.mysite.local
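There is nothing magical about the a2ensite/a2dissite pair: enabling a site just creates a symlink in sites-enabled/ pointing at the file in sites-available/, and disabling removes the link while leaving the real file untouched. The sketch below replays that mechanism in a temporary directory; it is an illustration of the layout, not the real scripts, and the site name is a stand-in.

```python
import os
import tempfile

# Toy replica of Debian's layout: sites-available holds the real config
# files, sites-enabled holds only symlinks to the ones Apache should load.
root = tempfile.mkdtemp()
available = os.path.join(root, "sites-available")
enabled = os.path.join(root, "sites-enabled")
os.mkdir(available)
os.mkdir(enabled)

with open(os.path.join(available, "www.mysite.local"), "w") as f:
    f.write("<VirtualHost *>\nServerName www.mysite.local\n</VirtualHost>\n")

def a2ensite(name):
    # Enable: symlink sites-enabled/<name> -> sites-available/<name>.
    os.symlink(os.path.join(available, name), os.path.join(enabled, name))

def a2dissite(name):
    # Disable: just remove the symlink; the real config file survives.
    os.remove(os.path.join(enabled, name))

a2ensite("www.mysite.local")
active = sorted(os.listdir(enabled))       # what the Include line would pick up
a2dissite("www.mysite.local")
survivors = sorted(os.listdir(available))  # config file is still there
```

Because only the symlink is removed, a broken host can be taken out of service instantly and re-enabled later without re-creating its configuration.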