Volume 1 issue 5

International Journal of Advances in Engineering & Technology (IJAET)
Volume 1, Issue 5, November 2011
URL: http://www.ijaet.org
E-mail: editor@ijaet.org
International Journal of Advances in Engineering & Technology, Nov 2011. ©IJAET ISSN: 2231-1963

Table of Contents (Vol. 1, Issue 5, Nov 2011)

1. APPLICATION OF SMES UNIT TO IMPROVE THE VOLTAGE PROFILE OF THE SYSTEM WITH DFIG DURING GRID DIP AND SWELL. A. M. Shiddiq Yunus, A. Abu-Siada and M. A. S. Masoum. pp. 1-13
2. HYBRID MODEL FOR SECURING E-COMMERCE TRANSACTION. Abdul Monem S. Rahma, Rabah N. Farhan, Hussam J. Mohammad. pp. 14-20
3. DSSS DIGITAL TRANSCEIVER DESIGN FOR ULTRA WIDEBAND. Mohammad Shamim Imtiaz. pp. 21-29
4. INTRODUCTION TO METASEARCH ENGINES AND RESULT MERGING STRATEGIES: A SURVEY. Hossein Jadidoleslamy. pp. 30-40
5. STUDY OF HAND PREFERENCES ON SIGNATURE FOR RIGHT-HANDED AND LEFT-HANDED PEOPLES. Akram Gasmelseed and Nasrul Humaimi Mahmood. pp. 41-46
6. DESIGN AND SIMULATION OF AN INTELLIGENT TRAFFIC CONTROL SYSTEM. Osigwe Uchenna Chinyere, Oladipo Onaolapo Francisca, Onibere Emmanuel Amano. pp. 47-57
7. DESIGN OPTIMIZATION AND SIMULATION OF THE PHOTOVOLTAIC SYSTEMS ON BUILDINGS IN SOUTHEAST EUROPE. Florin Agai, Nebi Caka, Vjollca Komoni. pp. 58-68
8. FAULT LOCATION AND DISTANCE ESTIMATION ON POWER TRANSMISSION LINES USING DISCRETE WAVELET TRANSFORM. Sunusi Sani Adamu, Sada Iliya. pp. 69-76
9. AN INVESTIGATION OF THE PRODUCTION LINE FOR ENHANCED PRODUCTION USING HEURISTIC METHOD. M. A. Hannan, H. A. Munsur, M. Muhsin. pp. 77-88

Vol. 1, Issue 5, pp. i-iii
10. A NOVEL DESIGN FOR ADAPTIVE HARMONIC FILTER TO IMPROVE THE PERFORMANCE OF OVER CURRENT RELAYS. A. Abu-Siada. pp. 89-95
11. ANUPLACE: A SYNTHESIS AWARE VLSI PLACER TO MINIMIZE TIMING CLOSURE. Santeppa Kambham and Krishna Prasad K.S.R. pp. 96-108
12. FUNCTIONAL COVERAGE ANALYSIS OF OVM BASED VERIFICATION OF H.264 CAVLD SLICE HEADER DECODER. Akhilesh Kumar and Chandan Kumar. pp. 109-117
13. COMPARISON BETWEEN GRAPH BASED DOCUMENT SUMMARIZATION METHOD AND CLUSTERING METHOD. Prashant D. Joshi, S. G. Joshi, M. S. Bewoor, S. H. Patil. pp. 118-125
14. IMPROVED SEARCH ENGINE USING CLUSTER ONTOLOGY. Gauri Suresh Bhagat, Mrunal S. Bewoor, Suhas Patil. pp. 126-132
15. COMPARISON OF MAXIMUM POWER POINT TRACKING ALGORITHMS FOR PHOTOVOLTAIC SYSTEM. J. Surya Kumari, Ch. Sai Babu. pp. 133-148
16. POWER QUALITY DISTURBANCE ON PERFORMANCE OF VECTOR CONTROLLED VARIABLE FREQUENCY INDUCTION MOTOR. A. N. Malleswara Rao, K. Ramesh Reddy, B. V. Sanker Ram. pp. 149-157
17. INTELLIGENT INVERSE KINEMATIC CONTROL OF SCORBOT-ER V PLUS ROBOT MANIPULATOR. Himanshu Chaudhary and Rajendra Prasad. pp. 158-169
18. FAST AND EFFICIENT METHOD TO ASSESS AND ENHANCE TOTAL TRANSFER CAPABILITY IN PRESENCE OF FACTS DEVICE. K. Chandrasekar and N. V. Ramana. pp. 170-180
19. ISSUES IN CACHING TECHNIQUES TO IMPROVE SYSTEM PERFORMANCE IN CHIP MULTIPROCESSORS. H. R. Deshmukh, G. R. Bamnote. pp. 181-188
20. KANNADA TEXT EXTRACTION FROM IMAGES AND VIDEOS FOR VISION IMPAIRED PERSONS. Keshava Prasanna, Ramakhanth Kumar P, Thungamani M, Manohar Koli. pp. 189-196
21. COVERAGE ANALYSIS IN VERIFICATION OF TOTAL ZERO DECODER OF H.264 CAVLD. Akhilesh Kumar and Mahesh Kumar Jha. pp. 197-203
22. DESIGN AND CONTROL OF VOLTAGE REGULATORS FOR WIND DRIVEN SELF EXCITED INDUCTION GENERATOR. Swati Devabhaktuni and S. V. Jayaram Kumar. pp. 204-217
23. LITERATURE REVIEW OF FIBER REINFORCED POLYMER COMPOSITES. Shivakumar S, G. S. Guggari. pp. 218-226
24. IMPLEMENTATION RESULTS OF SEARCH PHOTO AND TOPOGRAPHIC INFORMATION RETRIEVAL AT A LOCATION. Sukhwant Kaur, Sandhya Pati, Trupti Lotlikar, Cheryl R, Jagdish T., Abhijeet D. pp. 227-235
25. QUALITY ASSURANCE EVALUATION FOR PROGRAMS USING MATHEMATICAL MODELS. Murtadha M. Hamad and Shumos T. Hammadi. pp. 236-247
26. NEAR SET AN APPROACH AHEAD TO ROUGH SET: AN OVERVIEW. Kavita R Singh, Shivanshu Singh. pp. 248-253
27. MEASUREMENT OF CARBONYL EMISSIONS FROM EXHAUST OF ENGINES FUELLED USING BIODIESEL-ETHANOL-DIESEL BLEND AND DEVELOPMENT OF A CATALYTIC CONVERTER FOR THEIR MITIGATION ALONG WITH CO, HC'S AND NOX. Abhishek B. Sahasrabudhe, Sahil S. Notani, Tejaswini M. Purohit, Tushar U. Patil and Satishchandra V. Joshi. pp. 254-266
28. IMPACT OF REFRIGERANT CHARGE OVER THE PERFORMANCE CHARACTERISTICS OF A SIMPLE VAPOUR COMPRESSION REFRIGERATION SYSTEM. J. K. Dabas, A. K. Dodeja, Sudhir Kumar, K. S. Kasana. pp. 267-277
29. AGC CONTROLLERS TO OPTIMIZE LFC REGULATION IN DEREGULATED POWER SYSTEM. S. Farook, P. Sangameswara Raju. pp. 278-289
30. AUTOMATIC DIFFERENTIATION BETWEEN RBC AND MALARIAL PARASITES BASED ON MORPHOLOGY WITH FIRST ORDER FEATURES USING IMAGE PROCESSING. Jigyasha Soni, Nipun Mishra, Chandrashekhar Kamargaonkar. pp. 290-297
31. REAL ESTATE APPLICATION USING SPATIAL DATABASE. M. Kiruthika, Smita Dange, Swati Kinhekar, Girish B, Trupti G, Sushant R. pp. 298-309
32. DESIGN AND VERIFICATION ANALYSIS OF APB3 PROTOCOL WITH COVERAGE. Akhilesh Kumar and Richa Sinha. pp. 310-317
33. IMPLEMENTATION OF GPS ENABLED CAR POOLING SYSTEM. Smita Rukhande, Prachi G, Archana S, Dipa D. pp. 318-328
34. APPLICATION OF MATHEMATICAL MORPHOLOGY FOR THE ENHANCEMENT OF MICROARRAY IMAGES. Nagaraja J, Manjunath S.S, Lalitha Rangarajan, Harish Kumar N. pp. 329-336
35. SECURING DATA IN AD HOC NETWORKS USING MULTIPATH ROUTING. R. Vidhya and G. P. Ramesh Kumar. pp. 337-341
36. COMPARATIVE STUDY OF DIFFERENT SENSE AMPLIFIERS IN SUBMICRON CMOS TECHNOLOGY. Sampath Kumar, Sanjay Kr Singh, Arti Noor, D. S. Chauhan & B. K. Kaushik. pp. 342-350
37. CHARACTER RECOGNITION AND TRANSMISSION OF CHARACTERS USING NETWORK SECURITY. Subhash Tatale and Akhil Khare. pp. 351-360
38. IMPACT ASSESSMENT OF SHG LOAN PATTERN USING CLUSTERING TECHNIQUE. Sajeev B. U, K. Thankavel. pp. 361-374
39. CASCADED HYBRID FIVE-LEVEL INVERTER WITH DUAL CARRIER PWM CONTROL SCHEME FOR PV SYSTEM. R. Seyezhai. pp. 375-386
40. A REVIEW ON: DYNAMIC LINK BASED RANKING. D. Nagamalleswary, A. Ramana Lakshmi. pp. 387-393
41. MODELING AND SIMULATION OF A SINGLE PHASE PHOTOVOLTAIC INVERTER AND INVESTIGATION OF SWITCHING STRATEGIES FOR HARMONIC MINIMIZATION. B. Nagaraju, K. Prakash. pp. 394-400
42. ENHANCEMENT OF POWER TRANSMISSION CAPABILITY OF HVDC SYSTEM USING FACTS CONTROLLERS. M. Ramesh, A. Jaya Laxmi. pp. 401-416
43. EIGEN VALUES OF SOME CLASS OF STRUCTURAL MATRICES THAT SHIFT ALONG THE GERSCHGORIN CIRCLE ON THE REAL AXIS. T. D. Roopamala and S. K. Katti. pp. 417-421
44. TYRE PRESSURE MONITORING AND COMMUNICATING ANTENNA IN THE VEHICULAR SYSTEMS. K. Balaji, B. T. P. Madhav, P. Syam Sundar, P. Rakesh Kumar, N. Nikhita, A. Prudhvi Raj, M. Mahidhar. pp. 422-428
45. DEEP SUB-MICRON SRAM DESIGN FOR DRV ANALYSIS AND LOW LEAKAGE. Sanjay Kr Singh, Sampath Kumar, Arti Noor, D. S. Chauhan & B. K. Kaushik. pp. 429-436
46. SAG/SWELL MIGRATION USING MULTI CONVERTER UNIFIED POWER QUALITY CONDITIONER. Sai Ram I, Amarnadh J, K. K. Vasishta Kumar. pp. 437-440
47. A NOVEL CLUSTERING APPROACH FOR EXTENDING THE LIFETIME FOR WIRELESS SENSOR NETWORKS. Puneet Azad, Brahmjit Singh, Vidushi Sharma. pp. 441-446
48. SOLAR HEATING IN FOOD PROCESSING. N. V. Vader and M. M. Dixit. pp. 447-453
49. EXPERIMENTAL STUDY ON THE EFFECT OF METHANOL-GASOLINE, ETHANOL-GASOLINE AND N-BUTANOL-GASOLINE BLENDS ON THE PERFORMANCE OF 2-STROKE PETROL ENGINE. Viral K Pandya, Shailesh N Chaudhary, Bakul T Patel, Parth D Patel. pp. 454-461
50. IMPLEMENTATION OF MOBILE BROADCASTING USING BLUETOOTH/3G. Dipa Dixit, Dimple Bajaj and Swati Patil. pp. 462-472
51. IMPROVED DIRECT TORQUE CONTROL OF INDUCTION MOTOR USING FUZZY LOGIC BASED DUTY RATIO CONTROLLER. Sudheer H., Kodad S.F. and Sarvesh B. pp. 473-479
52. INFLUENCE OF ALUMINUM AND TITANIUM ADDITION ON MECHANICAL PROPERTIES OF AISI 430 FERRITIC STAINLESS STEEL GTA WELDS. G. Mallaiah, A. Kumar and P. Ravinder Reddy. pp. 480-491
53. ANOMALY DETECTION ON USER BROWSING BEHAVIORS FOR PREVENTION APP_DDOS. Vidya Jadhav and Prakash Devale. pp. 492-499
54. DESIGN OF LOW POWER LOW NOISE BIQUAD GIC NOTCH FILTER IN 0.18 µm CMOS TECHNOLOGY. Akhilesh Kumar, Bhanu Pratap Singh Dohare and Jyoti Athiya. pp. 500-506

Members of the IJAET Fraternity

Best Reviewers for this Issue:
1. Dr. Sukumar Senthilkumar
2. Dr. Tang Aihong
3. Dr. Rajeev Singh
4. Dr. Om Prakash Singh
5. Dr. V. Sundarapandian
6. Dr. Ahmad Faridz Abdul Ghafar
7. Ms. G Loshma
8. Mr. Brijesh Kumar
APPLICATION OF SMES UNIT TO IMPROVE THE VOLTAGE PROFILE OF THE SYSTEM WITH DFIG DURING GRID DIP AND SWELL

A. M. Shiddiq Yunus 1,2, A. Abu-Siada 2 and M. A. S. Masoum 2
1 Department of Mechanical Engineering, Energy Conversion Study Program, State Polytechnic of Ujung Pandang, Makassar, Indonesia
2 Department of Electrical and Computer Engineering, Curtin University, Perth, Australia

ABSTRACT

One of the most important parameters of a system to which wind turbine generators (WTGs) are connected is the voltage profile at the point of common coupling (PCC). In the early stages of wind integration, WTGs could simply be disconnected from the system during faults to avoid damage to the WTGs. Following the rapid injection of WTGs into existing networks over the last decades, transmission system operators (TSOs) now require WTGs to stay connected under a certain level of fault and to continue supporting the grid. These new requirements have been compiled in new international grid codes. In this paper, superconducting magnetic energy storage (SMES) is applied to improve the voltage profile of the PCC bus, to which WTGs equipped with doubly fed induction generators (DFIGs) are connected, so as to meet the grid codes of Spain and Germany during grid dip and swell. The voltage dip at the grid side is examined for compliance with the low voltage ride-through (LVRT) requirement, while the voltage swell at the grid side is examined for compliance with the high voltage ride-through (HVRT) requirement of both the Spanish and German voltage ride-through (VRT) curves.

KEYWORDS: Voltage Ride Through (VRT), SMES, DFIG, Voltage Dip & Voltage Swell

I. INTRODUCTION

The effect of pollution from conventional energy sources on the environment, together with the implementation of carbon taxes, has triggered an increase in renewable energy utilization around the world.
In addition, conventional energy is very limited and would soon be exhausted if exploited on a large scale, because oil, gas and coal are materials created over millions of years. The limited supply of, and high demand for, energy resources will push oil prices up over time. Attention is therefore now directed to renewable energies, which are clean and abundantly available in nature [1]. The first wind turbines for electricity generation had already been developed at the beginning of the twentieth century, and the technology was improved step by step from the early 1970s. By the end of the 1990s, wind energy had re-emerged as one of the most important sustainable energy resources. During the last decade of the twentieth century, worldwide wind capacity doubled approximately every three years [2]. The global installed capacity increased from just under 2000 MW at the end of 1990 to 94,000 MW by the end of 2007. In 2008, wind power already provided a little over 1% of global electricity generation, and by about 2020 wind power is expected to provide about 10% of global electricity [3]. Moreover, the 121 GW of wind turbine capacity installed by 2008 produced 260 TWh of electricity and saved about 158 million tons of CO2; the total installed capacity of wind turbines is predicted to reach 573 GW in 2030 [4]. Power quality is a common consideration for the construction or connection of any power generation system, including the installation of WTGs and their connection to the
existing power system. In this paper, voltage dip (sag) and swell are considered as the test conditions for the fault ride-through capability of a WTG equipped with a DFIG. Voltage dip (sag) and swell are two common types of power quality issue. A voltage dip is a decrease to between 0.1 and 0.9 pu in rms voltage or current at the power frequency, for durations of 0.5 cycles to 1 minute. Voltage dips are usually associated with system faults but can also be caused by the switching of heavy loads or the starting of large motors. A swell is defined as an increase in rms voltage or current at the power frequency, for durations from 0.5 cycles to 1 minute; typical magnitudes are between 1.1 and 1.8 pu. As with dips, swells are usually associated with system fault conditions, but they are much less common than voltage dips. A swell can occur due to a single line-to-ground fault on the system, resulting in a temporary voltage rise on the unfaulted phases. Swells can also be caused by switching off a large load or switching on a large capacitor bank [5, 6]. Since voltage dip is a common power quality problem in power systems, most studies focus on the performance of WTGs during voltage dips [7-14]. Although it is a less common power quality problem, a voltage swell may also lead to the disconnection of WTGs from the grid. In this paper, voltage dip and swell are applied at the grid side to investigate their effects on the PCC, which could affect the continued connection of the WTGs when judged against the grid codes used in this paper, with and without the SMES connected.

II. SPAIN AND GERMAN GRID CODES

In the early stages of wind integration, WTGs could simply be disconnected from the system during faults to avoid damage to the WTGs.
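The dip and swell magnitude and duration ranges quoted in Section I can be expressed as a small classifier. This is an illustrative sketch only; the function name and the 50 Hz default frequency are assumptions, not part of the paper's model:

```python
def classify_event(v_pu: float, duration_s: float, freq_hz: float = 50.0) -> str:
    """Classify an rms-voltage event per the dip/swell definitions above."""
    min_dur = 0.5 / freq_hz  # 0.5 cycles
    max_dur = 60.0           # 1 minute
    if not (min_dur <= duration_s <= max_dur):
        return "outside dip/swell duration range"
    if 0.1 <= v_pu <= 0.9:
        return "voltage dip (sag)"
    if 1.1 <= v_pu <= 1.8:
        return "voltage swell"
    return "normal or other disturbance"

print(classify_event(0.35, 0.1))   # 5 cycles at 50 Hz -> voltage dip (sag)
print(classify_event(1.35, 0.1))   # -> voltage swell
```

The two sample calls mirror the 0.35 pu dip and 135% swell studied later in the paper.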
Following the rapid injection of WTGs into existing networks over the last decades, transmission system operators (TSOs) require WTGs to stay connected under a certain level of fault and to continue supporting the grid. These new requirements have been compiled in new grid codes. However, most grid codes only provide a low voltage ride-through (LVRT) curve, without any restriction regarding high voltage ride-through (HVRT), which can lead to instability at the PCC. The international grid codes of Spain and Germany are used in this study; Figures 1a and 1b show the voltage ride-through (VRT) curves of Spain and Germany, respectively. These grid codes were selected because of their strict LVRT requirements and because they provide a complete VRT specification, including HVRT.

Figure 1. (a) FRT of the Spain grid code and (b) FRT of the German grid code [15]

In Figure 1(a), the FRT curve of Spain is divided into three main blocks. Block "A" represents the high voltage ride-through (HVRT) of the Spain grid code: the maximum allowable high voltage in the vicinity of the PCC is 130%, lasting for 0.5 s; after that, the maximum high voltage is reduced to 120% for the next 0.5 s. Any high voltage profile above block "A" leads to the disconnection of the WTGs from the system. The normal condition of this grid code lies in block "B": all voltage profiles within this range (90% to 110%) are classified as normal. The low voltage ride-through
(LVRT) is bounded by block "C". The minimum voltage allowed by this grid code is 50%, lasting for 0.15 s, rising to 60% until 0.25 s. The low voltage limit then ramps to 80% at 1 s, reaching the normal condition 15 s after the fault occurs. The HVRT of the German grid code (shown in Figure 1(b)) is much stricter than Spain's: the maximum allowable voltage is 120% for 0.1 s (block "A"). The normal condition, shown in block "B", is the same as in the Spain grid code. However, the LVRT allows the voltage to reach 45% for 0.15 s and requires at least 70% until 0.7 s; after that, the voltage margin ramps to 85% at 1.5 s.

III. SYSTEM UNDER STUDY

There are two major classes of wind turbine generator: fixed-speed turbines and variable-speed turbines. One of the most popular variable-speed wind turbines is the doubly fed induction generator (DFIG); about 46.8% of the turbines installed in 2002 were of this type [2]. A DFIG uses a medium-scale power converter, with slip rings making the electrical connection to the rotor. If the generator runs super-synchronously, electrical power is delivered to the grid through both the rotor and the stator; if it runs sub-synchronously, electrical power is delivered into the rotor from the grid. A speed variation of ±30% around synchronous speed can be obtained by using a power converter rated at 30% of nominal power. The stator winding of the generator is coupled to the grid, and the rotor winding to a power electronic converter, nowadays usually a back-to-back voltage source converter with current control loops. In this way, the electrical and mechanical rotor frequencies are decoupled, because the power electronic converter compensates for the difference between the mechanical and electrical frequency by injecting a rotor current with variable frequency. Variable speed operation thus becomes possible.
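The rotor/stator power split described above can be sketched with the common first-order approximation that rotor power is roughly minus slip times stator power; the function below and its numbers are illustrative assumptions, not values from the paper:

```python
def dfig_power_split(p_stator_mw: float, slip: float) -> dict:
    """Approximate DFIG power flow: rotor power is about -slip * stator power.
    slip < 0 (super-synchronous): rotor power also flows to the grid.
    slip > 0 (sub-synchronous): rotor power is drawn from the grid."""
    p_rotor = -slip * p_stator_mw
    return {
        "rotor_mw": p_rotor,
        "total_to_grid_mw": p_stator_mw + p_rotor,
        "mode": "super-synchronous" if slip < 0 else "sub-synchronous",
    }

# A +/-30% slip range implies a converter rated near 30% of nominal power:
print(dfig_power_split(p_stator_mw=1.5, slip=-0.3))  # rotor carries about 0.45 MW
```

At the slip extremes the rotor circuit carries about 30% of the stator power, which is why a 30%-rated converter suffices.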
A typical generic model of the DFIG is shown in Figure 2.

Figure 2. Typical configuration of a WTG equipped with DFIG

The system under study, shown in Figure 3, consists of six 1.5 MW DFIGs connected to the AC grid at the PCC via a Y/∆ step-up transformer. The grid is represented by an ideal 3-phase voltage source of constant frequency, connected to the wind turbines via a 30 km transmission line. The reactive power produced by the wind turbines is regulated at 0 Mvar under normal operating conditions. For the average wind speed of 15 m/s used in this study, the turbine output power is 1 pu and the generator speed is 1 pu. The SMES unit is connected to the 25 kV (PCC) bus and is assumed to be fully charged at its maximum capacity of 2 MJ.

Figure 3. System under study
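As a rough illustration of how a PCC voltage trace is judged against a ride-through envelope, the German LVRT limits described in Section II can be encoded as a piecewise function. This is a simplified sketch under stated assumptions: the helper names are invented, and the behaviour after the 1.5 s ramp point is not specified in the text, so the envelope is simply held at 85% there:

```python
def german_lvrt_lower_bound(t: float) -> float:
    """Simplified lower voltage envelope (pu) of the German code,
    t seconds after fault inception, per the description in Section II."""
    if t < 0.15:
        return 0.45
    if t < 0.7:
        return 0.70
    if t < 1.5:
        # linear ramp from 0.70 at 0.7 s to 0.85 at 1.5 s
        return 0.70 + (0.85 - 0.70) * (t - 0.7) / (1.5 - 0.7)
    return 0.85  # recovery to the normal band is not modelled here

def must_stay_connected(v_pu: float, t: float) -> bool:
    """True if the WTG is required to ride through at this point."""
    return v_pu >= german_lvrt_lower_bound(t)

# A dip to 0.35 pu violates the envelope; 0.8 pu (with SMES) does not:
print(must_stay_connected(0.35, 0.05))  # False -> disconnection permitted
print(must_stay_connected(0.80, 0.05))  # True  -> must ride through
```

The same structure, with different breakpoints, would encode the Spain curve of Figure 1(a).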
IV. SMES CONFIGURATION AND CONTROL SYSTEM

The selection of an SMES unit in this paper is based on its advantages over other energy storage technologies. Compared to other energy storage options, the SMES unit ranks first in efficiency, at 90-99% [16-18]. The high efficiency of the SMES unit is due to its low power loss: electric currents in the coil encounter almost no resistance, and there are no moving parts, so there are no friction losses. SMES stores energy within the magnetic field created by the flow of direct current in a coil of superconducting material. Typically, the coil is maintained in its superconducting state through immersion in liquid helium at 4.2 K within a vacuum-insulated cryostat. A power electronic converter interfaces the SMES to the grid and controls the energy flow bidirectionally. With the recent development of materials that exhibit superconductivity closer to room temperature, this technology may become economically viable [1]. The stored energy in the SMES coil can be calculated as:

E = (1/2) L_SM I_SM^2    (1)

where E is the SMES energy, I_SM is the SMES current and L_SM is the inductance of the SMES coil.

The SMES unit configuration used in this paper consists of a voltage source converter (VSC) and a DC-DC chopper, connected through a DC shunt capacitor. The VSC is controlled by a hysteresis current controller (HCC), while the DC-DC chopper is controlled by a fuzzy logic controller (FLC), as shown in Figure 4.

Figure 4. SMES configuration

The DC-DC chopper, together with the FLC, controls the charging and discharging of the SMES coil energy. The DFIG active power and the SMES coil current are used as inputs to the fuzzy logic controller to determine the value of the DC chopper duty cycle.
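Equation (1) is straightforward to evaluate. The coil parameters below are hypothetical values chosen only to reproduce the 2 MJ rating used in the study; the paper does not give L_SM or I_SM:

```python
def smes_energy_joules(l_coil_h: float, i_coil_a: float) -> float:
    """Stored energy E = 0.5 * L * I^2, per Eq. (1)."""
    return 0.5 * l_coil_h * i_coil_a ** 2

# Hypothetical coil parameters (assumed, not from the paper):
L_SM = 4.0     # henries
I_SM = 1000.0  # amperes
print(smes_energy_joules(L_SM, I_SM))  # -> 2000000.0, i.e. the 2 MJ rating
```

Note the quadratic dependence on current: halving I_SM quarters the stored energy, which is why the coil current trace (Figure 11) and the energy trace (Figure 12) have similar but not identical shapes.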
The duty cycle (D) is compared with a 1000 Hz saw-tooth signal to produce the switching signal for the DC-DC chopper, as can be seen in Figure 5.

Figure 5. Control algorithm of the DC-DC chopper

Compared with the pulse width modulation (PWM) technique, hysteresis band current control has the advantages of easy implementation and fast response, and it does not depend on load parameters [19].
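The duty-cycle-to-sawtooth comparison described above can be sketched as follows; only the 1000 Hz carrier is taken from the text, while the function names and sampling instants are illustrative assumptions:

```python
def sawtooth(t: float, freq_hz: float = 1000.0) -> float:
    """Unit sawtooth carrier rising from 0 to 1 over each period."""
    period = 1.0 / freq_hz
    return (t % period) / period

def chopper_gate(duty: float, t: float, freq_hz: float = 1000.0) -> int:
    """Gate signal: 1 while the duty cycle exceeds the carrier, else 0."""
    return 1 if duty > sawtooth(t, freq_hz) else 0

# With D = 0.5 the switch conducts for the first half of each 1 ms period:
samples = [chopper_gate(0.5, t * 1e-4) for t in range(10)]  # one period, 0.1 ms steps
print(samples)  # -> [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
```

The gate's on-fraction over a period equals D, which is what lets the FLC steer average power flow by adjusting the duty cycle alone.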
Hysteresis current control (HCC) is used to control the power exchange between the grid and the SMES unit. The HCC compares the 3-phase line currents with the reference currents (Id* and Iq*). The values of Id* and Iq* are generated by conventional PI controllers from the deviations of the capacitor voltage Vdc and the system voltage Vs, respectively. To minimize the effect of interference between phases while maintaining the advantages of the hysteresis method, a phase-locked loop (PLL) technique is applied to limit the converter switching to a fixed predetermined frequency [20]. The control algorithm proposed in this paper is simpler and closer to realistic application than the controller used in [21], where four PI controllers were used, complicating the search for optimal PI parameters; moreover, only Pg was used there as the control parameter of the DC-DC chopper, ignoring the energy capacity of the SMES coil. The detailed VSC control scheme used in this paper is shown in Figure 6. The rules for the duty cycle D and the corresponding SMES actions are shown in Table 1. When D equals 0.5, the SMES unit is in the idle condition and there is no power exchange between the SMES unit and the system. When there is a voltage drop due to a fault, the controller generates a duty cycle in the range of 0 to 0.5, according to the value of the inputs, and power is transferred from the SMES coil to the system. The charging action (corresponding to a duty cycle higher than 0.5) takes place when the SMES coil capacity has dropped, and power is then transferred from the grid to the SMES unit.

Figure 6. Control algorithm of the VSC

Table 1.
Rules of duty cycle

  Duty cycle (D)   | SMES coil action
  -----------------|----------------------
  D = 0.5          | standby condition
  0 ≤ D < 0.5      | discharging condition
  0.5 < D ≤ 1      | charging condition

The variation ranges of the SMES current and the DFIG output power, and the corresponding duty cycle, are used to develop a set of fuzzy logic rules in the form of IF-AND-THEN statements relating the input variables to the output. The duty cycle for any set of input data (Pg and I_SM) can be evaluated from the surface graph shown in Figure 7.
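Table 1 maps directly onto a small dispatch function; this sketch (function name assumed) simply encodes the three duty-cycle regions:

```python
def smes_action(duty: float) -> str:
    """Map the chopper duty cycle D to the SMES coil action of Table 1."""
    if not 0.0 <= duty <= 1.0:
        raise ValueError("duty cycle must lie in [0, 1]")
    if duty == 0.5:
        return "standby"
    return "discharging" if duty < 0.5 else "charging"

print(smes_action(0.5))  # standby: no power exchange with the system
print(smes_action(0.2))  # discharging: SMES supports the grid during a dip
print(smes_action(0.8))  # charging: coil recovers energy after fault clearance
```

In the actual scheme the FLC, not a lookup, chooses D continuously; the table only fixes which side of 0.5 corresponds to which power direction.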
Figure 7. Surface graph of the duty cycle

V. SIMULATION RESULTS

In this paper, two grid disturbances are applied: the first is a voltage dip of 20% and the second is a voltage swell of 135%. Both disturbances are applied at 0.5 s and last for 5 cycles.

5.1. Voltage Dip

Figure 8. Voltage profile at the PCC compared with the Spain VRT during grid dip

Figure 9. Voltage profile at the PCC compared with the German VRT during grid dip

As can be seen in Figures 8 and 9, during the voltage dip at the grid side, the voltage profile at the PCC drops to about 0.35 pu without the SMES connected. This value is beyond the LVRT limits of both Spain and Germany; in this case, the DFIGs would therefore have to be disconnected from the system. However, when the SMES is connected, the voltage drop at the PCC is significantly corrected, to about 0.8 pu, far from the lowest LVRT limit of both Spain and Germany. When the fault is cleared, a spark naturally forces a voltage overshoot; however, the overshoot remains within the safety margins of both the Spain and German HVRT.

Figure 10. Shaft speed during grid dip

During the voltage dip, the shaft speed increases at the moment the grid dip occurs, to compensate for the power drop caused by the voltage drop at the PCC, as shown in Figure 10. In severe grid dip cases, extreme oscillation of the shaft speed can lead to instability of the system. With the SMES connected to the PCC, the oscillation, settling time and overshoot of the shaft speed are significantly reduced compared with the system without SMES.

Figure 11. Current behaviour of the SMES coil during grid dip

Figure 12. Stored energy behaviour of the SMES coil during grid dip
Figure 13. Voltage behaviour across the SMES coil during grid dip

Figure 14. Duty cycle of the DC-DC chopper during grid dip

The behaviour of the SMES coil during the fault can be investigated through Figures 11 to 13, which respectively show the SMES coil current, the SMES stored energy and the voltage across the coil. The SMES coil energy is 2 MJ during normal operating conditions; when the voltage dip occurs, the SMES coil instantly discharges its energy into the grid, as shown in Figure 11. The characteristic of the SMES current shown in Figure 12 is similar to that of the energy stored in the coil. The charging and discharging of the SMES coil can also be examined from the voltage across the SMES coil (V_SM), shown in Figure 13. During normal operating conditions V_SM is equal to zero; it goes to a negative value during the discharging process and returns to zero after the fault is cleared. As mentioned before, the duty cycle of the DC-DC chopper plays an important role in determining the charging and discharging of the SMES coil energy. As shown in Figure 14, when the voltage dip occurs, the power produced by the DFIG is also reduced; the FLC sees this reduction and acts according to the membership function rules shown in Figure 7. The duty cycle stays in the range of 0 to 0.5 at this stage, and once the fault is cleared, the control system acts to charge the SMES coil. In this stage the duty cycle is in the range of 0.5 to 1, and it returns to its idle value of 0.5 once the SMES coil energy reaches its rated capacity.
5.2. Voltage Swell

Figure 15. Voltage profile at the PCC compared with the Spain and German HVRT during grid swell

The grid swell starts at 0.5 s and lasts for 5 cycles. As can be observed in Figure 15, without the SMES unit connected, the voltage profile at the PCC rises above 130% during the grid swell; in this condition, the DFIGs connected at the PCC would have to be disconnected from the grid to comply with the HVRT of both Spain and Germany. When the fault clears, the voltage profile soon recovers and remains within the safety margin of the LVRT of both Spain and Germany. When the SMES unit is connected, the voltage at the PCC is corrected to within the safety margins of the HVRT of both grid codes, thus avoiding the disconnection of the DFIGs from the grid.

Figure 16. Shaft speed during grid swell

A voltage swell at the grid side forces the voltage at the PCC to increase accordingly, depending on the level of the swell. The power is hence forced above its predetermined rating, and the speed control in this condition limits the speed to avoid over-speeding of the shaft; however, at a certain level of swell the over-speed protection may operate and lead to the generator being shut down. As described in Figure 16, with the SMES connected to the PCC, the settling time and oscillation of the shaft speed are considerably reduced compared with the system without SMES.
Figure 17. Current behaviour of the SMES coil during grid swell

Figure 18. Stored energy behaviour of the SMES coil during grid swell

Figure 19. Voltage behaviour across the SMES coil during grid swell
Figure 20. Duty cycle of the DC-DC chopper during grid swell

The behaviour of the SMES unit can be seen in Figures 17 to 20. Because the voltage swell at the grid side causes a short overshoot in the power produced by the DFIGs, the current in the SMES coil rises slightly, and likewise the energy in the SMES coil, following the FLC's control action to damp the high voltage at the PCC. When the voltage swell is cleared, the voltage at the PCC drops slightly, causing the power produced by the DFIGs to drop as well. This small power drop is seen by the controller, which discharges a small amount of energy to improve the voltage at the PCC; this can be verified in Figure 15, where the voltage drop is smaller and the voltage recovery quicker with the SMES unit connected than without it.

VI. CONCLUSIONS

This paper investigates the use of an SMES unit to enhance the VRT capability of doubly fed induction generators to comply with the grid codes of Spain and Germany.
Results show that, without the SMES unit, the DFIGs must be disconnected from the grid: the voltage drop during the grid dip and the voltage rise during the grid swell at the PCC cross the safety margins of the LVRT and HVRT of both Spain and Germany, and in this condition wind turbines equipped with DFIGs must be disconnected from the power system to avoid damage to the turbines. However, with the proposed converter and chopper of the SMES unit, controlled by a hysteresis current controller (HCC) and a fuzzy logic controller (FLC) respectively, both the LVRT and HVRT capability of the DFIGs is significantly improved, and their connection to the grid can be maintained to support the grid during fault conditions and to ensure the continuity of power supply.

ACKNOWLEDGEMENT

The first author would like to thank the Higher Education Ministry of Indonesia (DIKTI) and the State Polytechnic of Ujung Pandang for providing him with a PhD scholarship at Curtin University, Australia.

REFERENCES

[1] L. Freris and D. Infield, Renewable Energy in Power Systems. Wiltshire: John Wiley & Sons, 2008.
[2] T. Ackermann, Wind Power in Power Systems. West Sussex: John Wiley and Sons Ltd, 2005.
[3] P. Musgrove, Wind Power. New York: Cambridge University Press, 2010.
[4] "Global Wind Energy Outlook 2010," Global Wind Energy Council, 2010.
[5] American National Standards Institute (ANSI), "IEEE Recommended Practice for Monitoring Electric Power Quality," 1995.
[6] E. F. Fuchs and M. A. S. Masoum, Power Quality in Power Systems and Electrical Machines. Elsevier, 2008.
[7] R. K. Behera and G. Wenzhong, "Low voltage ride-through and performance improvement of a grid connected DFIG system," in Power Systems, 2009 (ICPS '09), International Conference on, 2009, pp. 1-6.
    • International Journal of Advances in Engineering & Technology, Nov 2011.©IJAET ISSN: 2231-1963[8] S. Hu and H. Xu, "Experimental Research on LVRT Capability of DFIG WECS during Grid Voltage Sags," in Power and Energy Engineering Conference (APPEEC), 2010 Asia-Pacific, pp. 1-4.[9] K. Lima, A. Luna, E. H. Watanabe, and P. Rodriguez, "Control strategy for the rotor side converter of a DFIG-WT under balanced voltage sag," in Power Electronics Conference, 2009. COBEP 09. Brazilian, 2009, pp. 842-847.[10] L. Trilla, O. Gomis-Bellmunt, A. Junyent-Ferre, M. Mata, J. Sanchez, and A. Sudria-Andreu, "Modeling and validation of DFIG 3 MW wind turbine using field test data of balanced and unbalanced voltage sags," Sustainable Energy, IEEE Transactions on, vol. PP, pp. 1-1, 2011.[11] Y. Xiangwu, G. Venkataramanan, P. S. Flannery, and W. Yang, "Evaluation the effect of voltage sags due to grid balance and unbalance faults on DFIG wind turbines," in Sustainable Power Generation and Supply, 2009. SUPERGEN 09. International Conference on, 2009, pp. 1-10.[12] Y. Xiangwu, G. Venkataramanan, P. S. Flannery, W. Yang, D. Qing, and Z. Bo, "Voltage-Sag Tolerance of DFIG Wind Turbine With a Series Grid Side Passive-Impedance Network," Energy Conversion, IEEE Transactions on, vol. 25, pp. 1048-1056.[13] A. M. Shiddiq-Yunus, A. Abu-Siada, and M. A. S. Masoum, "Effects of SMES on Dynamic Behaviours of Type D-Wind Turbine Generator-Grid Connected during Short Circuit," in IEEE PES meeting Detroit, USA: IEEE, 2011.[14] A. M. Shiddiq-Yunus, A. Abu-Siada, and M. A. S. Masoum, "Effects of SMES Unit on the Perfromance of Type-4 Wind Turbine Generator during Voltage Sag," in Renewable Power Generation RPG 2011 Edinburgh, UK: IET, 2011.[15] Alt, x, M. n, Go, O. ksu, R. Teodorescu, P. Rodriguez, B. B. Jensen, and L. Helle, "Overview of recent grid codes for wind power integration," in Optimization of Electrical and Electronic Equipment (OPTIM), 2010 12th International Conference on, pp. 
1152-1160.
[16] R. Baxter, Energy Storage: A Nontechnical Guide. Oklahoma: PennWell Corporation, 2006.
[17] F. A. Farret and M. G. Simoes, Integration of Alternative Sources of Energy. New Jersey: John Wiley & Sons, 2006.
[18] E. Acha, V. G. Agelidis, O. Anaya-Lara, and T. J. E. Miller, Power Electronic Control in Electrical Systems. Oxford: Newnes, 2002.
[19] M. Milosevic. vol. 2011.
[20] L. Malesani and P. Tenti, "A novel hysteresis control method for current-controlled voltage-source PWM inverters with constant modulation frequency," Industry Applications, IEEE Transactions on, vol. 26, pp. 88-92, 1990.
[21] M. H. Ali, P. Minwon, Y. In-Keun, T. Murata, and J. Tamura, "Improvement of Wind-Generator Stability by Fuzzy-Logic-Controlled SMES," Industry Applications, IEEE Transactions on, vol. 45, pp. 1045-1051, 2009.
Authors
A. M. Shiddiq Yunus was born in Makassar, Indonesia. He received his B.Sc. from Hasanuddin University in 2000 and his M.Eng.Sc. from Queensland University of Technology (QUT), Australia, in 2006, both in Electrical Engineering. He is currently pursuing his PhD at Curtin University, WA, Australia. His employment experience includes a lectureship in the Department of Mechanical Engineering, Energy Conversion Study Program, State Polytechnic of Ujung Pandang, since 2001. His special fields of interest include superconducting magnetic energy storage (SMES) and renewable energy.
A. Abu-Siada received his B.Sc. and M.Sc. degrees from Ain Shams University, Egypt, and the PhD degree from Curtin University of Technology, Australia, all in Electrical Engineering. Currently, he is a lecturer in the Department of Electrical and Computer Engineering at Curtin University. His research interests include power system stability, condition monitoring, superconducting magnetic energy storage (SMES), power electronics, power quality, energy technology, and system simulation.
He is a regular reviewer for the IEEE Transactions on Power Electronics, the IEEE Transactions on Dielectrics and Electrical Insulation, and the Qatar National Research Fund (QNRF).
Mohammad A. S. Masoum received his B.S., M.S. and Ph.D. degrees in Electrical and Computer Engineering in 1983, 1985, and 1991, respectively, from the University of Colorado, USA. Dr. Masoum's research interests include optimization, power quality and stability of power systems/electric machines, and distributed generation. He is the co-author of Power Quality in Power Systems and Electrical Machines (New York: Academic Press, Elsevier, 2008). Currently, he is an Associate Professor and the discipline leader for electrical power engineering at the Electrical and Computer Engineering Department, Curtin University, Perth, Australia, and a Senior Member of the IEEE.
HYBRID MODEL FOR SECURING E-COMMERCE TRANSACTION
Abdul Monem S. Rahma1, Rabah N. Farhan2, Hussam J. Mohammad3
1 Computer Science Dept., University of Technology, Iraq
2,3 Computer Science Dept., College of Computer, Al-Anbar University, Iraq
ABSTRACT
The requirements for securing an e-commerce transaction are privacy, authentication, integrity maintenance and non-repudiation. These are the crucial and significant issues in recent times for trade transacted over the internet through e-commerce channels. This paper suggests a cipher method that improves the Diffie-Hellman key exchange by using truncated polynomials in the discrete logarithm problem (DLP) to increase the complexity of the method over an unsecured channel; it also combines the hashing algorithm MD5, the symmetric key algorithm AES and the asymmetric key algorithm of the Modification of Diffie-Hellman (MDH).
KEYWORDS: key exchange, securing e-commerce transactions, irreducible polynomial
I. INTRODUCTION
As electronic commerce grows exponentially, the number of transactions and participants who use e-commerce applications has rapidly increased. Since all the interactions among participants occur in an open network, there is a high risk of sensitive information being leaked to unauthorized users. Since such insecurity is mainly created by the anonymous nature of interactions in e-commerce, sensitive transactions should be secured.
However, cryptographic techniques used to secure e-commerce transactions usually demand significant computational time overheads, and complex interactions among participants require network bandwidth beyond the manageable limit [1]. Security problems on the Internet receive public attention, and the media carry stories of high-profile malicious attacks via the Internet against government, business, and academic sites [3]. Confidentiality, integrity, and authentication are needed. People need to be sure that their Internet communication is kept confidential. When customers shop online, they need to be sure that the vendors are authentic. When customers send transaction requests to their banks, they want to be certain that the integrity of the message is preserved [2]. From the above discussion, it is clear that we must pay careful attention to security in e-commerce. Commonly, the exchange of data and information between the customers, the vendors and the bank must rely on personal computers that are available worldwide, based on central processing units (CPUs) with 16-bit, 32-bit or 64-bit words and commonly used operating systems (such as Windows) running on the same computer. Communication security requires a period of time to exchange information and data between the customers, the vendors and the bank in such a way that no one can break this communication during this period. Irreducible truncated polynomial mathematics has been adopted since 2000 and was developed for use in modern encryption methods such as AES. We can use irreducible truncated polynomial mathematics to build the proposed system because it is highly efficient and compatible with personal computers. As a practical matter, secure e-commerce may come to mean the use of information security mechanisms to ensure the reliability of business transactions over insecure networks [4].
II. RELATED WORKS
In the following review, different methods that have been used to increase e-commerce security are described. Sung W. T., Yugyung L., et al. (2001) proposed an adaptive secure protocol to support secure e-commerce transactions. This Adaptive Secure Protocol dynamically adapts the security level based on the nature and sensitivity of the interactions among participants; the security class incorporates the security level of cryptographic techniques with a degree of information sensitivity. They implemented the Adaptive Secure Protocol and measured its performance, and the experimental results show that it provides e-commerce transactions with a high quality of security service [9]. Ganesan R. and Dr. K. Vivekanandan (2009) proposed a software implementation of a digital envelope for a secure e-commerce channel that combines the MD5 hashing algorithm for integrity, the symmetric key algorithm AES and the asymmetric key algorithm of Hyperelliptic Curve Cryptography (HECC). The algorithm was tested for various sizes of files. The digital envelope combining AES and HECC is the better alternative security mechanism for a secure e-commerce channel to achieve privacy, authentication, integrity maintenance and non-repudiation [5]. H. K. Pathak and Manju Sanghi (2010) proposed a new public key cryptosystem and a key exchange protocol based on the generalization of the discrete logarithm problem using a non-abelian group of block upper triangular matrices of higher order. The proposed cryptosystem is efficient in producing keys of large sizes without the need for large primes. The security of both systems relies on the difficulty of discrete logarithms over finite fields [6].
III. AES ALGORITHM
The Advanced Encryption Standard (AES) is a symmetric block cipher. It operates on 128-bit blocks of data.
The algorithm can encrypt and decrypt blocks using secret keys. The key size can be 128, 192 or 256 bits; the actual key size depends on the desired security level [7]. The algorithm consists of 10 rounds (when the key has 192 bits, 12 rounds are used, and when the key has 256 bits, 14 rounds are used). Each round has a round key derived from the original key; there is also a 0th round key, which is the original key itself. Each round starts with an input of 128 bits and produces an output of 128 bits. There are four basic steps, called layers, that are used to form the rounds [8]:
The ByteSub Transformation (SB): this non-linear layer provides resistance to differential and linear cryptanalysis attacks.
The ShiftRow Transformation (SR): this linear mixing step causes diffusion of the bits over multiple rounds.
The MixColumn Transformation (MC): this layer has a purpose similar to ShiftRow.
AddRoundKey (ARK): the round key is XORed with the result of the above layer.
IV. BASICS OF MD5
MD5 (Message-Digest algorithm 5) is an Internet standard and one of the most widely used cryptographic hash functions, producing a 128-bit message digest. It has been employed in a wide variety of security applications. The main MD5 algorithm operates on a 128-bit state, divided into four 32-bit words [5].
V. MODIFICATION OF DIFFIE-HELLMAN (MDH)
The idea is to improve the Diffie-Hellman key exchange by using truncated polynomials in the discrete logarithm problem (DLP), which increases the complexity of the method over an unsecured channel. The DLP of our cipher method is founded on polynomial arithmetic, whereby the elements of the finite field G are represented as polynomials. The original DLP implies a prime number for its modulo operation; the same technique is used in the proposed method, but considering an irreducible (prime) polynomial instead of an integer prime. Before presenting the method, we describe the Discrete Logarithm Problem (DLP) in polynomials.
i. Discrete Logarithm Problem (DLP) in polynomials
In our method the DLP involves raising a polynomial to a polynomial power, modulo an irreducible polynomial. The algorithm to compute F(a)^F(x) mod F(g) is given below, where F(a) is the base polynomial, F(x) is the exponent polynomial, and F(g) is the irreducible polynomial.
ii. The solution steps for this method
Algorithm 1: Modular Exponentiation Algorithm in Polynomials.
Input: F(a), F(x), F(g).
Output: F(z), a value in polynomial form.
Process:
Step 1: Convert F(x) to binary and store the bits in K as Kn, Kn-1, Kn-2, ..., K0.
Step 2: Initialize the polynomial variable F(z) to one: F(z) = 1.
Step 3: Apply the following:
For i = n down to 0
  F(z) = F(z) ⊗ F(z) mod F(g)
  If Ki = 1 then F(z) = F(z) ⊗ F(a) mod F(g)
Step 4: Return F(z).
Step 5: End.
We suppose there are two sides that want to exchange a key (client and server); the client side encrypts the message and the server side decrypts it, as follows:
1. Key Generation
There are two publicly known values: an irreducible polynomial F(p) and a polynomial value F(a) that is a primitive root of F(p).
Client Side
The client side selects a random polynomial value F(XC) < F(p) and computes:
F(YC) = F(a)^F(XC) mod F(p) …………..(1)
Server Side
The server side selects a random polynomial value F(XS) < F(p) and computes:
F(YS) = F(a)^F(XS) mod F(p) …………..(2)
Each side keeps its F(X) value private and makes its F(Y) value publicly available to the other side.
Client Side
The client side computes the shared key from the F(YS) returned by the server side:
Key = F(YS)^F(XC) mod F(p) …………..(3)
Server Side
The server side computes the shared key from the F(YC) returned by the client side:
Key = F(YC)^F(XS) mod F(p) …………..(4)
Now the two sides have the same secret key (Sk):
Sk = F(a)^(F(XC)·F(XS)) mod F(p) …………..(5)
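To make the exchange concrete, the sketch below implements Algorithm 1 and equations (1)-(5) in Python, representing GF(2) polynomials as integer bit masks (bit i is the coefficient of x^i). The modulus x^6 + x + 1 and the two private exponents are illustrative values of our own choosing, not parameters taken from the paper.

```python
def poly_mod(a, m):
    """Reduce polynomial a modulo polynomial m over GF(2)."""
    while a.bit_length() >= m.bit_length():
        a ^= m << (a.bit_length() - m.bit_length())
    return a

def poly_mul_mod(a, b, m):
    """Carry-less (XOR-based) multiplication of a and b, reduced modulo m."""
    a, r = poly_mod(a, m), 0
    while b:
        if b & 1:
            r ^= a                      # add (XOR) the current shift of a
        b >>= 1
        a <<= 1
        if a.bit_length() == m.bit_length():
            a ^= m                      # keep a reduced below deg(m)
    return r

def poly_exp_mod(a, x, m):
    """Algorithm 1: square-and-multiply, scanning the exponent bits MSB-first."""
    z = 1
    for i in range(x.bit_length() - 1, -1, -1):
        z = poly_mul_mod(z, z, m)       # F(z) = F(z) (x) F(z) mod F(g)
        if (x >> i) & 1:
            z = poly_mul_mod(z, a, m)   # F(z) = F(z) (x) F(a) mod F(g)
    return z

F_p = 0b1000011                  # x^6 + x + 1, irreducible (indeed primitive) over GF(2)
F_a = 0b10                       # the element x, a primitive root modulo F_p
F_Xc, F_Xs = 0b101101, 0b110110  # private values of client and server (arbitrary)

F_Yc = poly_exp_mod(F_a, F_Xc, F_p)          # eq. (1), sent to the server
F_Ys = poly_exp_mod(F_a, F_Xs, F_p)          # eq. (2), sent to the client
key_client = poly_exp_mod(F_Ys, F_Xc, F_p)   # eq. (3)
key_server = poly_exp_mod(F_Yc, F_Xs, F_p)   # eq. (4)
assert key_client == key_server              # eq. (5): both sides now hold Sk
```

Because F_p is primitive, every nonzero element of the 64-element field satisfies a^63 = 1, which is a convenient sanity check on the arithmetic.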
2. Encryption of the Message
To encrypt the message, first convert each letter of the message to a polynomial, then apply the following equation to find the ciphertext C:
Ci = (Mi ⊗ Sk) mod F(g) …………..(6)
3. Decryption of the Message
To decrypt the message, first compute the multiplicative inverse Sk⁻¹ of the secret key Sk, then apply the following equation to recover the message:
Mi = (Ci ⊗ Sk⁻¹) mod F(g) …………..(7)
Figure 1: Modification of Diffie-Hellman (MDH)
VI. IMPLEMENTATION DETAILS
The model presented here combines the best features of both symmetric and asymmetric encryption techniques. The data (plaintext) to be transmitted is encrypted using the AES algorithm. The plaintext is also used as input to MD5 to generate the AES key, and this key is encrypted using the Modification of Diffie-Hellman (MDH). Using MD5 is useful in two respects: first, it ensures the integrity of the transmitted data; second, it provides an easy way to generate the secret key used in the AES algorithm. Thus the client sends the ciphertext of the message together with the ciphertext of the AES key, which also represents the ciphertext of the message
digest. The server, upon receiving the ciphertext of the message and the ciphertext of the AES key, first decrypts the ciphertext of the AES key with MDH to obtain the AES key. This key is then used to decrypt the ciphertext of the message by AES decryption to obtain the plaintext. The plaintext is again subjected to the MD5 hash algorithm and compared with the decrypted message digest to ensure the integrity of the data.
Figure 2: Implementation details of the model (client and server sides: the plaintext is hashed by MD5 and encrypted by AES, the symmetric key is exchanged via MDH, and the server accepts if the digests match, else rejects)
VII. RESULTS
The hybrid algorithm was executed on a PC with an Intel Pentium 4 2.2 GHz Dual Core CPU. The programs were implemented using Microsoft Visual Studio 2008 (C#). They were tested with three messages of different lengths (1000, 3000 and 5000 characters); the key size used for AES is 128 bits. Table 1 provides details of the time taken for encryption and decryption with AES and MDH, and for calculation of the MD5 message digest.
Table 1: Time in (seconds:milliseconds) for AES and MDH encryption and decryption and calculation of the MD5 message digest
Message length | AES Enc | AES Dec | MDH Enc | MDH Dec | MD5
1000 char      | 0:30    | 0:17    | 0:700   | 0:500   | 0:20
3000 char      | 0:93    | 0:62    | 1:500   | 1:300   | 0:35
5000 char      | 0:187   | 0:109   | 2:800   | 2:400   | 0:52
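The client/server envelope flow of Section VI can be sketched end to end in a few lines. This is a structural illustration only: Python's hashlib supplies real MD5, but a toy repeating-key XOR stands in for AES, and the MDH encryption of the session key is elided, so no cipher strength of the actual model is implied.

```python
import hashlib

def md5_key(message: bytes) -> bytes:
    """The 128-bit MD5 digest of the plaintext doubles as the AES session key."""
    return hashlib.md5(message).digest()

def toy_cipher(data: bytes, key: bytes) -> bytes:
    """Stand-in for AES-128: a repeating-key XOR used only to show the flow.
    A real deployment would call an actual AES implementation here."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# --- client side ---
plaintext = b"order #1234: 3 units, ship to warehouse B"   # hypothetical message
session_key = md5_key(plaintext)            # also serves as the message digest
ciphertext = toy_cipher(plaintext, session_key)
# session_key would now be MDH-encrypted and sent alongside the ciphertext

# --- server side (after MDH-decrypting the key) ---
recovered = toy_cipher(ciphertext, session_key)
assert recovered == plaintext
assert hashlib.md5(recovered).digest() == session_key   # integrity check: ACCEPT
```

The final assertion is the server's accept/reject decision: a single flipped ciphertext bit changes the recovered plaintext, so its MD5 digest no longer matches the transmitted key/digest.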
VIII. ANALYSIS
With any cryptographic system dealing with a 128-bit key, the total number of key combinations is 2^128. The time required to check all possible combinations at a rate of 50 billion keys per second is approximately 2 × 10^20 years; thus AES is strong and efficient enough for use in e-commerce. The randomness of the Modification of Diffie-Hellman (MDH) is very high whatever the irreducible polynomial, because the result is always unpredictable; the complexity likewise remains high because it depends on irreducible truncated polynomials.
IX. CONCLUSION
Satisfying security requirements is one of the most important goals for e-commerce security designers. In this paper we give a protocol design for securing e-commerce transactions using a hybrid encryption technique. This hybrid encryption method will surely increase the performance of cryptographic algorithms. The protocol ensures confidentiality, integrity and authentication: the AES algorithm provides confidentiality, the MD5 hash function provides integrity, and the Modification of Diffie-Hellman ensures authentication. We have tested the algorithm for various sizes of messages. The experimental results showed that the model improves the interaction performance while providing a high quality of security service for the desired e-commerce transactions.
REFERENCES
[1] Sung W. T., Yugyung L., Eun K. P., and Jerry S., "Design and Evaluation of Adaptive Secure Protocol for E-Commerce," © 2001 IEEE.
[2] Abeer T. Al-Obaidy, "Security Techniques for E-Commerce Websites," Thesis, Department of Computer Science, University of Technology, 2010.
[3] Oppliger R., "Security Technologies for the World Wide Web, Second Edition," Artech House, Inc., USA, 2003.
[4] Wooseok Ham, "Design of Secure and Efficient E-commerce Protocols Using Cryptographic Primitives," MSc.
Thesis, School of Engineering, Information and Communications University, 2003.
[5] Ganesan R. and Vivekanandan K., "A Novel Hybrid Security Model for E-Commerce Channel," © 2009 IEEE.
[6] Pathak H. K. and Manju S., "Public key cryptosystem and a key exchange protocol using tools of non-abelian group," (IJCSE) International Journal on Computer Science and Engineering, Vol. 02, No. 04, 2010.
[7] Oswald E., "Encrypt: State of the Art in Hardware Architectures," Information Society Technologies, UK, 2005.
[8] Trappe W. and Washington L., "Introduction to Cryptography with Coding Theory, Second Edition," Pearson Education, Inc., Pearson Prentice Hall, USA, 2006.
[9] Sung W. T., Yugyung L., et al., "Design and Evaluation of Adaptive Secure Protocol for E-Commerce," © IEEE, 2005.
Authors
Abdul Monem Saleh Rahma was awarded his MSc from Brunel University and his PhD from Loughborough University of Technology, United Kingdom, in 1982 and 1985 respectively. He taught at Baghdad University, Department of Computer Science, and at the Military College of Engineering, Computer Engineering Department, from 1986 till 2003. He holds the position of Dean's Assistant for scientific affairs and works as a professor at the University of Technology, Computer Science Department. He has published 82 papers in the field of computer science and supervised 24 PhD and 57 MSc students. His research interests include cryptography, computer security, biometrics, image processing, and computer graphics. He has attended and contributed to many international scientific conferences in Iraq and many other countries.
Rabah Nory Farhan received a Bachelor degree in Computer Science, Al-Mustansiriya University, 1993; a Higher Diploma in Data Security/Computer Science, University of Technology, 1998; a Master degree in Computer Science, University of Technology, 2000; and a PhD in Computer Science, University of Technology, 2006. Undergraduate
Computer Science Lecturer, University of Technology, 2002 to 2006; undergraduate and postgraduate Computer Science Lecturer and Graduate Advisor, Computer College, University of Al-Anbar, 2006 till now.
Hussam Jasim Mohammed Al-Fahdawi received his B.Sc. in Computer Science, Al-Anbar University, Iraq (2005-2009). He has been an M.Sc. student (2010 till now) in the Computer Science Department, Al-Anbar University. His fields of interest are e-commerce security, cryptography and related fields. Al-Fahdawi has taught many subjects, such as operating systems, computer vision, and image processing.
DSSS DIGITAL TRANSCEIVER DESIGN FOR ULTRA WIDEBAND
Mohammad Shamim Imtiaz
Part-time Lecturer, Department of EEE, A.U.S.T, Dhaka, Bangladesh
ABSTRACT
Despite the fact that ultra-wideband technology has been around for over 30 years, there is a newfound excitement about its potential for communications. In this paper we focus specifically on a software radio transceiver design for impulse-based UWB with the ability to transmit a raw data rate of 100 Mbps while encompassing the adaptability of a reconfigurable digital receiver. Direct sequence spread spectrum has become the modulation method of choice for wireless local area networks because of its numerous advantages, such as jammer suppression, code division multiple access and ease of implementation. We also observe its characteristics and complete the modulation techniques with MATLAB Simulink. The latter includes bit error rate testing for a variety of modulation schemes and wireless channels using pilot-based matched filter estimation techniques. Ultimately, the transceiver design demonstrates the advantages and challenges of UWB technology while boasting high-data-rate communication capability and providing the flexibility of a research test bed.
KEYWORDS: Ultra-wideband (UWB), direct sequence spread spectrum (DSSS), wireless local area networks (WLANs), personal communication systems (PCS), code division multiple access (CDMA).
I. INTRODUCTION
Ultra wideband (also known as UWB or digital pulse wireless) is a wireless technology for transmitting large amounts of digital data over a wide spectrum of frequency bands with very low power over a short distance. Ultra wideband radio can carry a huge amount of data over a distance of up to 230 feet at very low power (less than 0.5 mW), and it has the ability to carry signals through doors and other obstacles that tend to reflect signals at more limited bandwidths and higher power [5]. The
Theconcept of UWB was formulated in the early 1960s through research in time-domain electromagneticand receiver design, both performed primarily by Gerald F. Ross [1]. Through his work, the firstUWB communications patent was awarded for the short-pulse receiver, which he developed whileworking for Sperry Rand Corporation. Throughout that time, UWB was referred in broad terms as“carrier less” or impulse technology. After that UWB was coined in the late 1980s to describe thedevelopment, transmission, and reception of ultra-short pulses of radio frequency (RF) energy. Forcommunication applications, high data rates are possible due to the large number of pulses that can becreated in short time duration [3][4]. Due to its low power spectral density, UWB can be used inmilitary applications that require low probability of detection [14]. UWB also has traditionalapplications in non cooperative radar imaging, target sensor data collection, precision locating andtracking applications [13]. A significant difference between traditional radio transmissions and UWBradio transmissions are that traditional systems transmit information by varying the power level,frequency, and/or phase of a sinusoidal wave. UWB transmissions transmit information by generatingradio energy at specific time instants and occupying large bandwidth thus enabling a pulse-position ortime-modulation [4].UWB communications transmit in a way that doesnt interfere largely with othermore traditional narrow band and continuous carrier wave uses in the same frequency band [5][6].However first studies show that the rise of noise level by a number of UWB transmitters puts a burdenon existing communications services [10]. This may be hard to bear for traditional systems designsand may affect the stability of such existing systems. The design of UWB is very different from thatof conventional narrow band. In the conventional narrow band, frequency domain should be 21 Vol. 1, Issue 5, pp. 21-29
considered when designing the filter or mixer, because the signals occupy a narrow frequency band. On the other hand, in UWB the time domain should also be considered in the design, especially for the mixer, because the carrier-less signals possess a wide frequency band and the use of short pulses means a discontinuous signal. The Federal Communications Commission has recently approved the use of ultra wideband technology, allowing deployment primarily in the frequency band not only from 3.1 GHz, but also below 960 MHz for imaging applications [2]. Hence, the pulse width should be about 2 ns in order to be usable in the band below 960 MHz. Recently there has been a burst of research about UWB, and more and more papers are being published. Many papers have described transceiver circuits for UWB using different technologies, but here we propose a system model of a UWB transceiver with direct sequence spread spectrum technology. In this paper we focus on a software-based radio transceiver design for impulse-based UWB with the ability to transmit a raw data rate of 100 Mbps while encompassing the adaptability of a reconfigurable digital receiver. We also introduce a transmitter and receiver for pulse-based ultra wideband modulation. Direct sequence spread spectrum (DSSS) has become the modulation method of choice for wireless local area networks (WLANs) and personal communication systems (PCS) because of its numerous advantages, such as jammer suppression, code division multiple access (CDMA), and ease of implementation. As with other spread spectrum technologies, the transmitted signal takes up more bandwidth than the information signal that is being modulated.
The name spread spectrum comes from the fact that the carrier signals occur over the full bandwidth (spectrum) of a device's transmitting frequency. This paper is structured as follows: Section 2 briefly introduces the system blocks used to design the DSSS digital transceiver. Sections 3 and 4 present the design of the DPSK transmitter and the DPSK receiver, respectively. Section 5 exhibits the results taken from the oscilloscopes and discusses these findings. Section 6 suggests future work and modifications of this paper. Section 7 concludes the paper.
II. SYSTEM MODEL
The designed model for the transceiver, shown in Fig. 1, consists of a hierarchical system in which blocks represent subsystems and oscilloscopes are placed along the path for display purposes. The main components or blocks of this design are the PN sequence generator, XOR, unit delay, switch, pulse generator, derivative, integer delay, digital filter, product, gain and oscilloscope blocks. The PN Sequence Generator block generates a sequence of pseudorandom binary numbers. A pseudo-noise sequence generator, which uses a shift register to generate sequences, can be used in a pseudorandom scrambler, a descrambler, and in a direct-sequence spread-spectrum system [12]. Here, the PN sequence generator is used both to generate the incoming message and to produce the high-speed pseudorandom sequence for the spreading process. The XOR block works as a mixer: it mixes its two inputs as a digital XOR does and gives the output. The Unit Delay block holds and delays its input by the sample period you specify; this block is equivalent to the discrete-time delay operator. The block accepts one input and generates one output, and each signal can be a scalar or a vector. If the input is a vector, the block holds and delays all elements of the vector by the same sample period.
The Pulse Generator block is capable of generating a variety of pulses with an assortment of options. The Switch block is used for switching between two different inputs and directing one to the output as required. The Derivative block differentiates the input data; the pulse generator followed by two derivatives in sequence is used to perform bi-phase modulation as required. The Integer Delay block is used to delay the 63-chip incoming data. The digital filter is used as the recovery filter, and the Gain blocks are used for amplification. Oscilloscopes are placed along the path for display purposes. Direct-sequence spread spectrum (DSSS) is a modulation technique. The DPSK DSSS modulation and despreading techniques are mainly used in designing the whole transceiver, with the exception of receiving the signal using bi-phase modulation. The design for pulse-based UWB is divided into three parts: the DSSS DPSK transmitter, where the transmitter is designed separately; the DPSK DSSS transceiver, where the received signal is despread with some propagation delay; and the DPSK DSSS transceiver with bi-phase modulator and matched filter, where the original signal is recovered.
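Before looking at the transmitter and receiver blocks in detail, the core spread/despread operation the transceiver performs can be illustrated numerically. The sketch below is a hypothetical ±1-chip Python illustration, independent of the Simulink model; a random code stands in for the paper's 63-chip PN sequence.

```python
import random

N = 63  # chips per data bit, matching the paper's PN length
random.seed(1)
pn = [random.choice((-1, 1)) for _ in range(N)]  # stand-in PN code

def spread(bits):
    """Multiply each +/-1 data bit by the full N-chip PN sequence."""
    return [b * c for b in bits for c in pn]

def despread(chips):
    """Correlate each N-chip block against a synchronized PN copy, take the sign."""
    bits = []
    for k in range(0, len(chips), N):
        corr = sum(chips[k + i] * pn[i] for i in range(N))
        bits.append(1 if corr > 0 else -1)
    return bits

data = [1, -1, -1, 1]
tx = spread(data)
# a mild channel impairment: flip one chip per bit; the 63-chip
# correlation (process gain) still recovers every data bit
for idx in (5, 70, 140, 200):
    tx[idx] = -tx[idx]
assert despread(tx) == data
```

Each correct bit contributes a correlation of ±63, so a single flipped chip only moves it to ±61; this margin is exactly the process gain discussed below.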
Figure 1: Simulink model of the DPSK DSSS transceiver
The data signal, rather than being transmitted on a narrow band as is done in microwave communications, is spread onto a much larger range of frequencies (RF bandwidth) using a specific encoding scheme. This encoding scheme is known as a pseudo-noise sequence, or PN sequence. Direct sequence spread spectrum has become the modulation method of choice for wireless local area networks and personal communication systems. Direct-sequence spread-spectrum transmissions multiply the data being transmitted by a "noise" signal. This noise signal is a pseudorandom sequence of 1 and −1 values at a frequency much higher than that of the original signal, thereby spreading the energy of the original signal into a much wider band. The resulting signal resembles white noise, like an audio recording of "static". However, this noise-like signal can be used to exactly reconstruct the original data at the receiving end by multiplying it by the same pseudorandom sequence [12]. This process, known as "despreading", mathematically constitutes a correlation of the transmitted PN sequence with the PN sequence that the receiver believes the transmitter is using. For despreading to work correctly, the transmit and receive sequences must be synchronized. This requires the receiver to synchronize its sequence with the transmitter's sequence via some sort of timing search process. However, this apparent drawback can be a significant benefit: if the sequences of multiple transmitters are synchronized with each other, the relative synchronizations the receiver must make between them can be used to determine relative timing, which, in turn, can be used to calculate the receiver's position if the transmitters' positions are known [12].
This is the basis for many satellite navigation systems. The resulting effect of enhancing the signal-to-noise ratio on the channel is called process gain. This gain can be made larger by employing a longer PN sequence and more chips per bit, but the physical devices used to generate the PN sequence impose practical limits on the attainable processing gain [12].
III. DPSK TRANSMITTER
The DPSK DSSS transmitter consists of a PN Sequence Generator, which generates a sequence of pseudorandom binary numbers using a linear-feedback shift register; an XOR block used for mixing data; a unit delay used for delaying data; and oscilloscopes placed along the path for display purposes. Here, the PN
Sequence Generator is used both for generating the message and for generating the sequence of pseudorandom binary numbers for the spreading process. Figure 2 is the Simulink model of the DPSK DSSS transmitter.
When differentially encoding an incoming message, each input data bit must be delayed until the next one arrives. The delayed data bit is then mixed with the next incoming data bit, and the output of the mixer gives the difference of the incoming data bit and the delayed data bit. The differentially encoded data is then spread by a high-speed pseudo-noise (PN) sequence. This spreading process assigns each data bit its own unique code, allowing only a receiver with the same spreading code to despread the encoded data.
The 63-chip pseudo-noise (PN) sequences used in this paper are generated by a 6th-order maximal-length sequence generator, shown in equation (1):
(1)
Figure 2: Simulink model of the DPSK DSSS transmitter
The maximal-length spreading sequence uses a much wider bandwidth than the encoded data bit stream, which causes the spread sequence to have a much lower power spectral density [11]. The transmitted signal is then given by
s(t) = d(t) · c(t) …………..(2)
where d(t) is the differentially encoded data and c(t) is the 63-chip PN spreading code. To recover the message sequence, we XOR the modulated signal with the same 63-chip pseudo-noise (PN) sequence; a unit delay is also used to recover the original signal. The signal recovery process completes successfully with some propagation delay, which is expected because of noise and losses.
IV. DPSK RECEIVER
Before despreading, the received signal is modulated by the bi-phase modulation technique; the signal is then split into two parallel paths and fed into two identical matched filters, with the input to one having a delay of 63 chips.
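The two transmitter stages, 63-chip maximal-length code generation and differential encoding followed by spreading, can be sketched as follows. The LFSR feedback taps below (giving the primitive polynomial x^6 + x^5 + 1) are an assumed choice, since the exact form of equation (1) is not reproduced here; any primitive 6th-order polynomial yields a 63-chip m-sequence.

```python
def lfsr63():
    """6-stage Fibonacci LFSR; with a primitive feedback polynomial the state
    cycles through all 63 nonzero values, yielding one 63-chip m-sequence."""
    state = 0b000001                      # any nonzero seed works
    for _ in range(63):
        yield state & 1
        fb = ((state >> 5) ^ state) & 1   # assumed taps: stages 6 and 1
        state = (state >> 1) | (fb << 5)

pn = list(lfsr63())
# balance property of an m-sequence of length 63: 32 ones, 31 zeros
assert len(pn) == 63 and sum(pn) == 32

def dpsk_spread(bits):
    """Differentially encode (XOR each bit with the previous encoded bit,
    i.e. the delayed-bit mixer), then spread each encoded bit over the
    63-chip PN code."""
    prev, out = 0, []
    for b in bits:
        prev ^= b                         # differential encoding
        out.extend(c ^ prev for c in pn)  # spreading by XOR with the PN code
    return out

tx = dpsk_spread([1, 0, 1, 1])
assert len(tx) == 4 * 63
```

The balance assertion is a quick check that the chosen taps really are maximal-length: a non-primitive polynomial would repeat early and fail it.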
Figure 3 is the Simulink model of the DPSK DSSS receiver. The BPSK modulation technique is mathematically described as:

(3)

where the modulating symbol is a data bit. One advantage of bi-phase modulation is its improvement over OOK and PPM in BER performance: the required energy per bit is 3 dB less than for OOK at the same probability of bit error.
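The 3 dB comparison can be checked numerically. The source omits the explicit formulas, so this sketch assumes the standard textbook expressions: coherent antipodal (bi-phase) signalling has Pb = Q(sqrt(2*Eb/N0)), while coherent OOK has Pb = Q(sqrt(Eb/N0)).

```python
import math

def qfunc(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_bpsk(ebn0):
    """BER of coherent antipodal (bi-phase) signalling: Q(sqrt(2*Eb/N0))."""
    return qfunc(math.sqrt(2.0 * ebn0))

def ber_ook(ebn0):
    """BER of coherent on-off keying: Q(sqrt(Eb/N0))."""
    return qfunc(math.sqrt(ebn0))

# OOK needs twice the Eb/N0 (i.e. 3 dB more) to match bi-phase signalling:
ebn0 = 10 ** (6.0 / 10.0)   # 6 dB expressed as a linear ratio
gap = abs(ber_bpsk(ebn0) - ber_ook(2.0 * ebn0))
```

Since `ber_bpsk(e)` and `ber_ook(2*e)` evaluate the same Q-function argument, `gap` is zero, which is exactly the "3 dB less for the same probability of bit error" claim.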
The probability of bit error for bi-phase modulation, assuming matched-filter reception, is:

(4)

Figure 3: Simulink model of DPSK Receiver

Another benefit of bi-phase modulation is its ability to eliminate spectral lines due to the change in pulse polarity. This aspect minimizes interference with conventional radio systems [16]. A decrease in the overall transmitted power can also be attained, making bi-phase modulation a popular technique in UWB systems when energy efficiency is a priority.

A special type of digital matched filter is used for recovering the transmitted message. Digital matched filtering is a data-processing routine that is optimal in terms of signal-to-noise ratio (SNR). Specifically, it can be shown for an additive white Gaussian noise (AWGN) channel with no interference that the matched filter maximizes the SNR for a pulse-modulated system. To perform this operation, the received waveform is oversampled to allow for multiple samples per pulse period. Oversampling gives a more accurate representation of the pulse shape, which then produces better results using a digital matched filter [11]. Correlation processing, another form of matched filtering, is often used in the digital domain when dealing with white noise channels. The correlation output is calculated as:

(5)

where the quantities involved are the resulting correlation value, the pulse period, the number of samples in one pulse width N, the received sampled waveform, and the known pulse waveform h.

One of the primary drawbacks of the matched-filter receiver topology is the lack of knowledge of the pulse shape at the receiver due to distortion in the channel. Imperfect correlations can occur by processing the data with an incorrect pulse shape, causing degradation in correlation energy.
There are numerous ways to correct this problem, including an adaptive digital equalizer or matching a template by storing multiple pulse shapes at the receiver. A more accurate approach is to estimate the pulse shape from the pilot pulses, which experience the same channel distortion as the data pulses [11]. This estimation technique is a promising solution to UWB pulse distortion. The outputs of the two matched filters are given by
(6)

(7)

where the first quantity is the data bit period and the second is the autocorrelation function of the 63-chip pseudorandom sequence. Since there are exactly 63 chips per data bit, the PN sequence is periodic, so

(8)

The two outputs of the matched filters are then mixed, low-pass filtered, and the original message is recovered.

V. RESULTS AND DISCUSSION

Following the analytical approach presented in Sections 3 and 4, we evaluate the simulation results of the UWB technology. The simulations are performed using MATLAB [15], and the proof of concept is valid: the BER curves are slightly worse than the theoretical values for a perfectly matched receiver, due to imperfections in the template caused by noise and aperture delay variation. Figure 4 shows the original input message sequence generated by a PN sequence generator. The incoming message is then differentially encoded using a mixer and a unit delay, where each input data bit is delayed until the next one arrives and the delayed data bit is mixed with the next incoming data bit. Figure 5 shows the differential output of the original message signal. The mixer gives the difference of the incoming data bit and the delayed data bit. The differentially encoded data is then spread by a high-speed 63-bit pseudo noise (PN) sequence generated by a 6th-order maximal-length sequence. This spreading process assigns each data bit its own unique code, as shown in Figure 6, allowing only a receiver with the same spreading code to despread the encoded data.

Figure 4: Original Input message signal
Figure 5: Differential output of message signal

Figure 6: Output waveforms of Simulink DPSK DSSS Transmitter

Figure 7: Received Signal into DPSK DSSS Receiver after Despreading
    • International Journal of Advances in Engineering & Technology, Nov 2011. ©IJAET ISSN: 2231-1963 Figure 8: Original recovered output signal For recovering of message sequence in the receiving part of DPSK DSSS transceiver, the modulated signal has been dispread using same type of 63-bit pseudo noise sequences and also use a unite delay to find the original signal. Before dispreading, the receiving signal is modulated by Bi-phase modulation technique then signal is split into two parallel paths and fed into two identical matched filters with the input to one having a delay of 63 chips. Among two split signal, one is spreading received message and another is Bi-phase modulated signal. The signal recovering process is successfully done with some propagation delay which was obvious because of some noise & losses. Figure 7 represented the received signal into DPSK DSSS receiver after dispreading and Figure 8 denoted original recovered messages.VI. FUTURE MODIFICATION AND WORK Designing of Transceiver was difficult and it took time to resolve the obstacles. The transmitter side was easy to build but it was hard to recover it in the receiver side due to spreading process. The recovered massage came with unwanted delays after dispreading it into DPSK DSSS receiver with the same 63-bit PN Sequence generator. To remove the delay a BPSK modulator and two special matched filters were used. This Matched filters are usually FIT filters which are designed in a special way to recover the original signal. Its have used for detecting the 6th order maximal length sequence and recovering the transmitted message. In the first matched filter the input signal was delayed due to correlating purpose. It was obtained by correlating the delayed signal with the received signal to detect the presence of the template in the received signal. This is equivalent to convolving the unknown signal with a conjugated time-reversed version of the template. 
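The correlation form of matched filtering in Equation (5) can be sketched as a simple sliding correlator. This is an illustrative sketch only: the 8-sample pulse template below is a hypothetical placeholder, not the paper's UWB monocycle, and the channel is noiseless.

```python
def sliding_correlate(received, template):
    """Digital matched filtering by correlation: for each lag tau,
    c[tau] = sum over n of received[tau + n] * template[n],
    where n runs over the N samples of one pulse width."""
    N = len(template)
    return [sum(received[tau + n] * template[n] for n in range(N))
            for tau in range(len(received) - N + 1)]

# A toy oversampled pulse (hypothetical shape, 8 samples per pulse):
template = [0.0, 0.5, 1.0, 0.5, -0.5, -1.0, -0.5, 0.0]

# Embed the pulse at a known offset in an otherwise empty waveform:
received = [0.0] * 30
for n, v in enumerate(template):
    received[7 + n] = v

corr = sliding_correlate(received, template)
peak = corr.index(max(corr))   # correlation peaks at the pulse offset
```

The peak lag recovers the pulse position (here, sample 7), which is exactly the template-detection step the receiver's matched filters perform; with an incorrect template, the peak energy degrades as the text describes.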
Since the matched filter is the optimal linear filter for maximizing the signal-to-noise ratio in the presence of additive stochastic noise, using more matched filters increases the possibility of recovering the original signal and maximizing the signal-to-noise ratio, depending on the signal being transmitted. In this work we have discussed UWB basics, modulation techniques and transmitter circuits, but all of these were limited to the design and system level. Although we have included some important present-day features and applications of UWB, implementation or circuit-level simulation has not been done here. Those interested in analyzing UWB technology can work on circuit-level simulation.

VII. CONCLUSIONS

We have analyzed the performance of UWB technology using the Time Hopping (TH) technique. The results from the system simulation were very encouraging for the UWB receiver design presented in this paper. It was also shown that by increasing the number of averaged pilot pulses in the pilot-based matched filter template, better performance can be obtained, although the data rate will suffer. Performance for multipath was also examined (albeit for perfect synchronization) and was close to the theoretical values. Finally, use of the template sliding matched filter synchronization routine led to worse BER performance when compared with perfect synchronization results. Although these
simulations were specific in terms of data bits and number of multipath components, other simulations were successfully run on a smaller scale varying these two parameters. The results of the system simulation give a solid foundation for the design as a whole, and will also assist in the future with issues such as the implementation of receiver algorithms within the PGA and determining timing limitations when the receiver is being constructed.

REFERENCES

[1]. G. F. Ross, “Transmission and reception system for generating and receiving base-band duration pulse signals without distortion for short base-band pulse communication system,” US Patent 3,728,632, April 17, 1973.
[2]. Authorization of Ultra wideband Technology, First Report and Order, Federal Communications Commission, February 14, 2002.
[3]. C. R. Anderson, “Ultra wideband Communication System Design Issues and Tradeoffs,” Ph.D. Qualifier Exam, Virginia Polytechnic Institute and State University, May 12, 2003.
[4]. J. R. Foerster, “The performance of a direct-sequence spread ultra-wideband system in the presence of multipath, narrowband interference, and multiuser interference,” IEEE Conference on Ultra Wideband Systems and Technologies, May 2002.
[5]. C. R. Anderson, A. M. Orndorff, R. M. Buehrer, and J. H. Reed, “An Introduction and Overview of an Impulse-Radio Ultra wideband Communication System Design,” tech. rep., MPRG, Virginia Polytechnic Institute and State University, June 2004.
[6]. J. Han and C. Nguyen, “A new ultra-wideband, ultra-short monocycle pulse generator with reduced ringing,” IEEE Microwave and Wireless Components Letters, Vol. 12, No. 6, pp. 206-208, June 2002.
[7]. S. Licul, J. A. N. Noronha, W. A. Davis, D. G. Sweeney, C. R. Anderson, T. M. Bielawa, “A parametric study of time-domain characteristics of possible UWB antenna architectures,” submitted to IEEE Vehicular Technology Conference, February 2003.
[8]. M. Z. Win and R. A. Scholtz, “Impulse radio: how it works,” IEEE Communications Letters, Vol. 2, No. 1, pp. 10-12, January 1998.
[9]. J. Ibrahim, “Notes on Ultra Wideband Receiver Design,” April 14, 2004.
[10]. Takahide Terada, Shingo Yoshizumi, Yukitoshi and Tadahiro Kuroda, “Transceiver Circuits for Pulse-Based Ultra Wideband,” Department of Electrical Engineering, Keio University, Japan, IEEE International Symposium on Circuits and Systems (ISCAS), 2004.
[11]. S. M. Nabritt, M. Qahwash, M. A. Belkerdid, “Simulink Simulation of a Direct Sequence Spread Spectrum Differential Phase Shift Keying SAW Correlator,” Electrical and Comp. Engr. Dept., University of Central Florida, Orlando FL 32816; Wireless Personal Communications, The Kluwer International Series in Engineering and Computer Science, 2000, Volume 536, VI, pp. 239-249.
[12]. Alonso Morgado, Rocio del Rio and Jose M. de la Rosa, “A Simulink Block Set for the High-Level Simulation of Multistandard Radio Receivers,” Instituto de Microelectronica de Sevilla-IMSE-CNM (CSIC), Edif. CICA-CNM, Avda. Reina Mercedes s/n, 41012-Sevilla, Spain.
[13]. M. I. Skolnik, Introduction to Radar Systems, 3rd Edition, New York: McGraw-Hill, 2001.
[14]. J. W. McCulloch and B. Walters, Military Applications of Ultra-Wideband Communications.
[15]. Matlab, Version 7 Release 13, The Mathworks, Inc., Natick, MA.
[16]. L. W. Couch II, Digital and Analog Communication Systems, 6th Edition, New Jersey: Prentice Hall, 2001.

Author

Mohammad Shamim Imtiaz was born in Dhaka, Bangladesh in 1987. He received his Bachelor degree in Electrical and Electronic Engineering from Ahsanullah University of Science and Technology, Dhaka, Bangladesh in 2009. He is working as a Part-Time Lecturer in the same university from which he completed his Bachelor degree. Currently he is focusing on getting into an MSc program.
His research interests include digital systems, digital signal processing, multimedia signal processing, digital communication, and signal processing for data transmission and storage. He is working on several other projects, including “Comparison of DSSS Transceiver and FHSS Transceiver on the basis of Bit Error Rate and Signal to Noise Ratio”, “Mobile Charging Device using Human Heart Pulse”, and “Analysis of CMOS Full Adder Circuit of Different Area and Models”.
INTRODUCTION TO METASEARCH ENGINES AND RESULT MERGING STRATEGIES: A SURVEY

Hossein Jadidoleslamy
Deptt. of Information Tech., Anzali International Branch, University of Guilan, Rasht, Iran

ABSTRACT

MetaSearch is utilizing multiple other search systems to perform simultaneous search. A MetaSearch Engine (MSE) is a search system that enables MetaSearch. To perform a MetaSearch, the user query is sent to multiple search engines; once the search results are returned, they are received by the MSE, merged into a single ranked list, and the ranked list is presented to the user. When a query is submitted to a MSE, decisions are made with respect to the underlying search engines to be used, what modifications will be made to the query, and how to score the results. These decisions are typically made by considering only the user’s keyword query, neglecting the larger information need. The cornerstone of their technology is their rank aggregation method; in other words, result merging is a key component in a MSE. The effectiveness of a MSE is closely related to the result merging algorithm it employs. In this paper, we investigate a variety of result merging methods based on a wide range of available information about the retrieved results, from their local ranks, their titles and snippets, to the full documents of these results.

KEYWORDS: Search, Web, MetaSearch, MetaSearch Engine, Merging, Ranking.

I. INTRODUCTION

MetaSearch Engines (MSEs) are tools that help the user identify such relevant information. Search engines retrieve web pages that contain information relevant to a specific subject described with a set of keywords given by the user. MSEs work at a higher level: they retrieve web pages relevant to a set of keywords by exploiting other, already existing search engines. The earliest MSE is the MetaCrawler system, which became operational in June 1995 [5,16].
Over the last years, many MSEs have been developed and deployed on the web. Most of them are built on top of a small number of popular general-purpose search engines, but there are also MSEs that are connected to more specialized search engines, and some are connected to over one thousand search engines [1,10]. In this paper, we investigate different result merging algorithms. The rest of the paper is organized as follows: Section 2 presents the motivation; Section 3 gives an overview of MSEs; Section 4 provides the scientific principles of MSEs; Section 5 discusses why we use MSEs; Section 6 discusses the architecture of MSEs; Section 7 describes ranking aggregation methods; Section 8 expresses key parameters for evaluating the ranking strategies; Section 9 gives conclusions; and Section 10 presents future work.

II. MOTIVATION

There are some primary factors behind developing a MSE:
• The World Wide Web (WWW) is a huge unstructured corpus of information; a MSE covers a larger portion of the WWW;
• By a MSE we can have the latest updated information;
• A MSE increases the web coverage;
• Improved convenience for users;

Vol. 1, Issue 5, pp. 30-40
• A MSE provides fast and easy access to the desired search [5] and better retrieval effectiveness [2];
• A MSE provides a broader overview of a topic [12];
• A MSE has the ability to search the invisible web, thus increasing the precision, recall and quality of results;
• A MSE makes the user’s task much easier by searching and ranking the results from multiple search engines;
• A MSE provides a quick way to determine which search engines are retrieving the best match for the user's information need [4].

III. OVERVIEW OF METASEARCH ENGINE

A MSE searches several engines at once; it does not crawl the web or maintain a database of web pages. Instead, it acts as a middle agent, passing the user’s query simultaneously to other search engines, web directories or the deep web, returning the results, collecting them, removing the duplicate links, merging and ranking them into a single list and displaying it to the user [5,8]. Some samples of MSEs are Vivisimo, MetaCrawler, Dogpile, Mamma, and Turbo10.

a. Differences Between Search and MetaSearch
• A MSE does not crawl the Web [2,4];
• A MSE does not have a database [4,10];
• A MSE sends search queries to several search engines at once [2,5];
• A MSE has increased search coverage (but is limited by the engines it uses with respect to the number and quality of results) and a consistent interface [6,12];
• A MSE is an effective mechanism to reach the deep web.

b.
MetaSearch Engine Definition
• Dictionary meaning of Meta: more comprehensive, transcending;
• Accept the user query; convert the query into the correct syntax for the underlying search engines; launch the multiple queries and wait for the results; analyze, eliminate duplicates and merge the results; deliver the post-processed results to the users;
• A MSE allows you to search multiple search engines at once, returning more comprehensive and relevant results, fast [5,9];
• A search engine which does not gather its own information directly from web sites but rather passes the queries that it receives on to other search engines; it then compiles, summarizes and displays the found information;
• A MSE is a hub of search engines/databases accessible by a common interface, providing the user with results which may or may not be ranked independently of the original search engine/source ranking [6,10].

c. The Types of MetaSearch Engine

Different types of MetaSearch Engines (MSEs) are:
• MSEs which present results without aggregating them;
• MSEs which search multiple search engines, aggregate the results obtained from them and return a single list of results [1,3], often with duplicates removed;
• MSEs for serious deep digging.

d. MSE Issues

Some of the most common issues in MSEs are as follows:
• Performing search engine/database selection [5,6];
• How to pass user queries to other search engines;
• How to identify correct search results returned from search engines; an optimal algorithm for implementing minimum-cost bipartite matching;
• How to extract search results, requiring a connection program and an extraction program (wrapper) for each component search engine [14];
• Wrappers are expensive and time-consuming to produce and maintain;
• Merging the results from different search sources;
• Different search engines produce result pages in different formats [6,8].

IV. SCIENTIFIC FUNDAMENTALS

a. Search Engine Selection

To enable search engine selection, some information that can represent the contents of the documents of each component search engine needs to be collected first. Such information for a search engine is called the representative of the search engine [5,17]. The representatives of all search engines used by the MSE are collected in advance and are stored with the MSE. During search engine selection for a given query, search engines are ranked based on how well their representatives match the query. Different search engine selection techniques often use different types of representatives. A simple representative of a search engine may contain only a few selected keywords or a short description. This type of representative is usually produced manually, but it can also be generated automatically [5]. As this type of representative provides only a general description of the contents of search engines, the accuracy of using such representatives for search engine selection is usually low. More elaborate representatives consist of detailed statistical information for each term in each search engine [5,9,17].

b. Automatic Search Engine Connection

In most cases, the HTML form tag of a search engine's interface contains all the information needed to make the connection to that search engine. The form tag of each search engine interface is usually pre-processed to extract the information needed for program connection, and the extracted information is saved at the MSE [5,17].
After the MSE receives a query and a particular search engine, among possibly other search engines, is selected to evaluate it, the query is assigned to the name of the query textbox of the search engine and sent to the server of the search engine using an HTTP request method. After the query is evaluated by the search engine, one or more result pages containing the search results are returned to the MSE for further processing.

c. Automatic Search Result Extraction

A result page returned by a search engine is a dynamically generated HTML page. In addition to the search result records (SRRs) for a query, a result page usually also contains some unwanted information/links [5]. It is important to correctly extract the SRRs on each result page. A typical SRR corresponds to a retrieved document and usually contains the URL, title and a snippet of the document. Since different search engines produce result pages in different formats, a separate wrapper program needs to be generated for each search engine [5,14]. Most wrappers analyze the source HTML files of the result pages as text strings or tag trees to find the repeating patterns of the SRRs.

d. Results Merging

Result merging is combining the search results returned from multiple search engines into a single ranked list. There are many methods for merging/ranking search results; some of them are:
• Normalizing the scores returned from different search engines into values within a common range, with the goal of making them more comparable [1,6,16]; this allows the results from more useful search engines to be ranked higher;
• Using voting-based techniques;
• Downloading all returned documents from their local servers and computing their matching scores using a common similarity function employed by the MSE [1,6,17];
• Using techniques that rely on features such as titles and snippets [1];
• Assuming that results retrieved by multiple search engines are more relevant to the query [1,5].

V. WHY ARE METASEARCH ENGINES USEFUL?

1.
Why MetaSearch?
• Individual search engines do not cover all the web;
• Individual search engines are prone to spamming [5];
• Difficulty in deciding and obtaining results with combined searches on different search engines [6];
• Data fusion (multiple formats supported) and less user effort.

2. Why MetaSearch Engines?
• General search engines differ in search syntax, frequency of updating and display of results/search interface, and have incomplete databases [5,16];
• A MSE improves the search quality: comprehensive, efficient, and one query queries all;
• A MSE is good for a quick search results overview with 1 or 2 keywords;
• A MSE is convenient for searching different content sources from one page.

3. Key Applications of MetaSearch Engines
• Effective mechanism to search the surface/deep web;
• A MSE provides a common search interface over multiple search engines [5,10];
• A MSE can support interesting special applications.

4. General Features of MetaSearch Engine
• Unifies the search interface and provides a consistent user interface; standardizes the query structure [5];
• May make use of an independent ranking method for the results [6]; may have an independent ranking system for each search engine/database;
• MetaSearch is not a search for metadata.

VI. METASEARCH ENGINE ARCHITECTURE

MSEs enable users to enter search criteria once and access several search engines simultaneously. This may also save the user a lot of time by initiating the search at a single point, instead of using multiple search engines separately. MSEs have virtual databases; they do not compile a physical database. Instead, they take a user's request, pass it to several heterogeneous databases and then compile the results in a homogeneous manner. No two MSEs are alike; they differ in component search engines, ranking/merging methods, search results presentation, etc.

a. Standard Architecture

Figure 1.
Block diagram and components

• User Interface: similar to search engine interfaces, with options for the types of search and the search engines to use;
• Dispatcher: generates the actual queries to the search engines from the user query; may involve choosing/expanding the search engines to use;
• Display: generates the results page from the replies received; may involve ranking, parsing and clustering of the search results, or just plain stitching;
• Personalization/Knowledge: may contain either or both; personalization may involve weighting of search results/query/engine for each user.

b. The Architecture of a MSE that Considers User Preferences

Current MSEs make several decisions on behalf of the user, but do not consider the user’s complete information need. A MSE must decide which sources to query, how to modify the submitted query to best utilize the underlying search engines, and how to order the results. Some MSEs allow users to influence one of these decisions, but not all three [4,5].

Figure 2: The architecture of a MSE with user needs

A user's information need is not sufficiently represented by a keyword query alone [4,10]. This architecture has an explicit notion of user preferences. These preferences, or a search strategy, are used to choose the appropriate search engines (source selection) and query modifications, and to influence the ordering of the results (result scoring). Allowing the user to control the search strategy can provide relevant results for several specific needs, with a single consistent interface [4]. The current user interface provides the user with a list of choices. The specification of preferences allows users with different needs, but the same query, not only to search different search engines (or the same search engines with different “modified” queries), but also to have results ordered differently [4]. Even though users have different information needs, they might type the same keyword query, and even search some of the same search engines. This architecture guarantees consistent scoring of results by downloading page contents and analyzing the pages on the server [1,4].

c. Helios Architecture

In this section we describe the architecture of Helios.
The Web Interface allows users to submit their queries and select the desired search engines among those supported by the system. This information is interpreted by the Local Query Parser & Emitter, which re-writes queries in the appropriate format for the chosen engines. The Engines Builder maintains all the settings necessary to communicate with the remote search engines. The HTTP Retrievers modules handle the network communications. Once search results are available, the Search Results Collector & Parser extracts the relevant information and returns it using XML. Users can adopt the standard Merger & Ranker module for search results or integrate their own customized one [12].
Figure 3: The architecture of the HELIOS MSE

d. Tadpole Architecture

In this architecture, when a user issues a search request, multiple threads are created in order to fetch the results from the various search engines. Each of these threads is given a time limit to return its results, failing which a timeout occurs and the thread is terminated [5,11].

Figure 4: Basic component architecture of a typical MSE

MSEs are web services that receive user queries and dispatch them to multiple crawl-based search engines; they then collect the returned results, reorder them and present the ranked result list to the user [11]. The ranking fusion algorithms that MSEs utilize are based on a variety of parameters, such as the ranking a result receives and the number of its appearances in the component engines’ result lists [15]. Better results classification can be achieved by employing ranking fusion methods that take into consideration additional information about a web page. Another core step is to implicitly/explicitly collect some data concerning the user who submits the query. This will assist the engine in deciding which results better suit his informational needs [4,11,15].

VII. RESULTS MERGING AND RANKING STRATEGIES

There are many techniques for ranking the search results retrieved from different search engines in MSEs; some important considerations are:
• Normalizing/unifying the scores of search results [1];
• The reliability of each search engine;
• The document collection used by a search engine;
• Some ranking algorithms completely ignore the scores assigned by the search engines to the retrieved web pages [1], such as Bayes-fuse and Borda-fuse [7];
• Merging based on SRR contents such as title, snippet, local rank and different similarity functions [6];
• Considering the frequencies of query terms in each SRR, and the order and closeness of these terms;
• Downloading and analyzing the full documents.

We want to investigate result merging algorithms for MSEs. Most search engines present informative search result records (SRRs) of retrieved results to the user; a typical SRR consists of the URL, title and snippet of the retrieved result [6,7].

1) Take the Best Rank

In this algorithm, we try to place a URL at the best rank it gets in any of the search engine rankings [13]. That is [17]:
• MetaRank(x) = Min(Rank1(x), Rank2(x), …, Rankn(x));
Clashes are resolved by search engine popularity.

2) Borda’s Positional Method

In this algorithm, the MetaRank of a URL is obtained by computing the Lp-norm of its ranks in the different search engines [8,17]:
• MetaRank(x) = (Rank1(x)^p + Rank2(x)^p + … + Rankn(x)^p)^(1/p);
Clashes are resolved by search engine popularity.

3) Weighted Borda-Fuse

In this algorithm, search engines are not treated equally: their votes are weighted depending on the reliability of each search engine. These weights are set by the users in their profiles. Thus, the votes that the i-th result of the j-th search engine receives are [9,17]:
• V(ri,j) = wj * (maxk(rk) - i + 1);
where wj is the weight of the j-th search engine and rk is the number of results rendered by search engine k. Retrieved pages that appear in more than one search engine receive the sum of their votes.

4) The Original KE Algorithm

The KE algorithm in its original form is a score-based method [1].
It exploits the ranking that a result receives from the component engines and the number of its appearances in the component engines’ lists. All component engines are treated equally, as all of them are considered to be reliable. Each returned ranked item is assigned a score based on the following formula [10]:
• Wke = ∑mi=1(r(i)) / ((n)m * (k/10 + 1)n);
where ∑mi=1(r(i)) is the sum of all rankings that the item has received, n is the number of search engine top-k lists the item is listed in, m is the total number of search engines exploited, and k is the total number of ranked items that the KE algorithm uses from each search engine. Therefore, it is clear that the lower the weight a result scores, the better the ranking it receives.

5) Fetch Retrieved Documents

A straightforward way to perform result merging is to fetch the retrieved documents to the MSE and compute their similarities with the query using a global similarity function. The main problem of this approach is that the user has to wait a long time before the results can be fully displayed. Therefore, most result merging techniques utilize the information associated with the search results as returned by the component search engines to perform merging. The difficulty lies in the heterogeneities among the component search engines.

6) Borda Count

Borda Count is a voting-based data fusion method [15]. The returned results are considered as the candidates and each component search engine is a voter. For each voter, the top-ranked candidate is assigned n points (for n candidates), the second top-ranked candidate is given n-1 points, and so on. For candidates that are not ranked by a voter (i.e., they are not retrieved by the corresponding search
engine), the remaining points of the voter will be divided evenly among them. The candidates are then ranked by their total received points in descending order [13,15,17].

7) D-WISE Method
In D-WISE, the local rank of a document (ri) returned from search engine j is converted to a ranking score (rsij); the formula is [6],
• rsij = 1 − (ri − 1) * Smin / (m * Sj);
where Sj is the usefulness score of search engine j, Smin is the smallest search engine score among all component search engines selected for this query and m is the number of documents desired across all search engines. This function generates a smaller difference between the ranking scores of two consecutively ranked results retrieved from a search engine with a higher search engine score. This has the effect of ranking more results from higher quality search engines higher. One problem of this method is that the highest ranked documents returned from all the local systems will have the same ranking score of 1.

8) Merging Based on Combination of Document Records (SRRs)
Among all the proposed merging methods, the most effective one is based on the combination of the evidences of a document, such as title, snippet, and search engine usefulness. In these methods [1,2]: for each document, compute the similarity between the query and its title and its snippet, and linearly aggregate the two as this document's estimated global similarity. For each query term, compute its weight in every component search engine based on the Okapi probabilistic model [6]. The search engine score is the sum of all the query term weights of this search engine. Finally, the estimated global similarity of each result is adjusted by multiplying the relative deviation of its source search engine's score to the mean of all the search engine scores.
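As an illustration of the Borda Count scheme (Section 6 above), the following is a minimal sketch in Python; the function name and data layout are our own, but the point scheme follows the description: each voter awards n, n−1, … points, and unranked candidates split the voter's remaining points evenly.

```python
def borda_count(rankings):
    """Fuse several ranked lists with Borda Count.

    rankings: list of ranked lists (one per voter/search engine).
    Returns all candidates ordered by total points, descending.
    """
    candidates = {c for ranked in rankings for c in ranked}
    n = len(candidates)
    scores = {c: 0.0 for c in sorted(candidates)}
    for ranked in rankings:
        for pos, c in enumerate(ranked):
            scores[c] += n - pos          # top candidate gets n points, next n-1, ...
        unranked = candidates - set(ranked)
        if unranked:
            # points n-len(ranked), ..., 1 were not handed out by this voter
            remaining = sum(range(1, n - len(ranked) + 1))
            share = remaining / len(unranked)
            for c in unranked:
                scores[c] += share
    return sorted(scores, key=scores.get, reverse=True)
```

For example, with voters [["a","b","c"], ["b","c","a"]] the fused order is ["b","a","c"], since b collects 2 + 3 points against a's 3 + 1.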
It is very possible that, for a given query, the same document is returned by multiple component search engines. In this case, their (normalized) ranking scores need to be combined [1]. A number of linear combination fusion functions have been proposed to solve this problem, including min, max, sum and average [15].

9) Use Top Document to Compute Search Engine Score (TopD)
Let Sj denote the score of search engine j with respect to q. This algorithm uses the similarity between q and the top ranked document returned from search engine j (denoted dij) [6,7]. Fetching the top ranked document from its local server introduces some delay, but this delay is tolerable, since only one document is fetched from each used search engine. The similarity is computed using the Cosine function and the Okapi function. The formula is [6],
• ∑_{T∈q} W * (((k1 + 1) * tf) / (K + tf)) * (((k3 + 1) * qtf) / (k3 + qtf));
• with W = log((N − n + 0.5) / (n + 0.5)) and K = k1 * ((1 − b) + b * (dl / avgdl));
where tf is the frequency of the query term T within the processed document, qtf is the frequency of T within the query, N is the number of documents in the collection, n is the number of documents containing T, dl is the length of the document, and avgdl is the average length of all the documents in the collection. k1, k3 and b are constants with values 1.2, 1,000 and 0.75, respectively [6]. Since N, n and avgdl are unknown, we can use some approximations to estimate them. The ranking scores of the top ranked results from all used search engines will be 1 [1,6]. We remedy this problem by computing an adjusted ranking score arsij, multiplying the ranking score computed by the above formula, namely rsij, by Sj [6]: arsij = rsij * Sj. If a document is retrieved from multiple search engines, we compute its final ranking score by summing up all the adjusted ranking scores.
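A minimal sketch of the Okapi-style term weighting used above (illustrative only; as the text notes, N, n and avgdl must be estimated in practice, and the function name and argument layout here are our own):

```python
import math

def okapi_score(query_tf, doc_tf, dl, avgdl, N, df, k1=1.2, k3=1000, b=0.75):
    """Okapi-style similarity between a query and one document.

    query_tf / doc_tf: dicts mapping term -> frequency in the query / document.
    df: dict mapping term -> number of documents containing the term (n).
    dl / avgdl: document length and average document length; N: collection size.
    """
    K = k1 * ((1 - b) + b * (dl / avgdl))   # length-normalized constant
    score = 0.0
    for t, qtf in query_tf.items():
        tf = doc_tf.get(t, 0)
        if tf == 0:
            continue                         # term absent from document
        W = math.log((N - df[t] + 0.5) / (df[t] + 0.5))
        score += W * ((k1 + 1) * tf / (K + tf)) * ((k3 + 1) * qtf / (k3 + qtf))
    return score
```

In the TopD scheme, such a score for the top-ranked document would serve as Sj, and each result's rsij is then multiplied by Sj to break the all-ones tie among top results.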
10) Use Top Search Result Records (SRRs) to Compute Search Engine Score (TopSRR)In this method, when a query q is submitted to a search engine j, the search engine returns the SRRsof a certain number of top ranked documents on a dynamically generated result page. In the TopSRRalgorithm, the SRRs of the top n returned results from each search engine, instead of the top rankeddocument, are used to estimate its search engine score [6]. Intuitively, this is reasonable as a moreuseful search engine for a given query is more likely to retrieve better results which are usuallyreflected in the SRRs of these results. Specifically, all the titles of the top n SRRs from search engine jare merged together to form a title vector TVj, and all the snippets are also merged into a snippetvector SVj. The similarities between query q and TVj, and between q and SVj are computedseparately and then aggregated into the score of search engine j [6], 37 Vol. 1, Issue 5, pp. 30-40
• Sj = C1 * Similarity(q, TVj) + (1 − C1) * Similarity(q, SVj);
Again, both the Cosine function and the Okapi function are used [6,7].

11) Compute Simple Similarities between SRRs and Query (SRRsim)
We can rank the SRRs returned from different search engines because each SRR can be considered as the representative of the corresponding full document. In the SRRsim algorithm, the similarity between an SRR (R) and a query q is defined as a weighted sum of the similarity between the title (T) of R and q and the similarity between the snippet (S) of R and q [6,7],
• Sim(R, q) = C2 * Similarity(q, T) + (1 − C2) * Similarity(q, S);
where C2 is a constant (C2 = 0.5). Again, both the Cosine function and the Okapi function are used. If a document is retrieved from multiple search engines with different SRRs (different search engines usually employ different ways to generate SRRs), then the similarity between the query and each such SRR will be computed and the largest one will be used as the final similarity for merging.

12) Rank SRRs Using More Features (SRRRank)
The similarity function used in the SRRsim algorithm may not be sufficiently powerful in reflecting the true matches of the SRRs with respect to a given query [6]. For example, these functions do not take proximity information, such as how close the query terms occur in the title and snippet of an SRR, into consideration, nor do they consider the order of appearance of the query terms in the title and snippet. Sometimes, the order and proximity information have a significant impact on the match of phrases.
This algorithm defines five features with respect to the query terms; that are [6,7],• NDT: The number of distinct query terms appearing in title and snippet;• TNT: total number occurrences of the query terms in the title and snippet;• TLoc: The locations of the occurred query terms;• ADJ: whether the occurred query terms appear in the same order as they are in the query and whether they occur adjacently;• WS: the window size containing distinct occurred query terms.For each SRR of the returned result, the above pieces of information are collected. The SRRRankalgorithm works as [6]:• All SRRs are grouped based on NDT. The groups having more distinct terms are ranked higher;• Within each group, the SRRs are further put into three subgroups based on TLoc. The subgroup with these terms in the title ranks highest, the subgroup with the distinct terms in the snippet and the subgroup with the terms scattered in both title and snippet;• Finally, within each subgroup, the SRRs that have more occurrences of query terms (TNT) appearing in the title and the snippet are ranked higher. If two SRRs have the same number of occurrences of query terms, first the one with distinct query terms appearing in the same order and adjacently (ADJ) as they are in the query is ranked higher, and then, the one with smaller window size is ranked higher.If there is any tie, it is broken by the local ranks. The result with the higher local rank will have ahigher global rank in the merged list. If a result is retrieved from multiple search engines, we onlykeep the one with the highest global rank [3,6]. 13) Compute Similarities between SRRs and Query Using More Features (SRRSimMF)This algorithm is similar to SRRRank except that it quantifies the matches based on each featureidentified in SRRRank so that the matching scores based on different features can be aggregated intoa numeric value [1,3]. 
Consider a given field of a SRR, say title (the same methods apply to snippet).For the number of distinct query terms (NDT), its matching score is the ratio of NDT over the totalnumber of distinct terms in the query (QLEN), denoted SNDT=NDT/QLEN. For the total number ofquery terms (TNT), its matching score is the ratio of TNT over the length of title, denotedSTNT=TDT/TITLEN. For the query terms order and adjacency information (ADJ), the matchingscore SADJ is set to 1 if the distinct query terms appear in the same order and adjacently in the title;otherwise the value is 0. The window size (WS) of the distinct query terms in the processed title isconverted into score SWS= (TITLEN–WS)/TITLEN. All the matching scores of these features are 38 Vol. 1, Issue 5, pp. 30-40
aggregated into a single value, which is the similarity between the processed title T and q, using this formula [6],
• Sim(T, q) = SNDT + (1/QLEN) * (W1 * SADJ + W2 * SWS + W3 * STNT);
For each SRR, the final similarity is,
• Similarity = (TNDT/QLEN) * (C3 * Sim(T, q) + (1 − C3) * Sim(S, q));
where TNDT is the total number of distinct query terms appearing in the title and snippet [6,7].

VIII. EVALUATION KEY PARAMETERS FOR RANKING STRATEGIES
Some parameters for evaluating ranking methods are algorithmic complexity (time complexity), rank aggregation time, overlap across search engines (relative search engine performance) and the performance of the various rank aggregation methods, including precision with respect to the number of results returned and precision vs. recall.

IX. CONCLUSION
In this paper, we have presented an overview of MSEs and some of their ranking strategies. An effective and efficient result merging strategy is essential for developing effective MetaSearch systems. We investigated merging algorithms that utilize a wide range of information available for merging, from local ranks by component search engines, search engine scores, titles and snippets of search result records, to the full documents. We discussed methods for improving answer relevance in MSEs and proposed several strategies for combining the ranked results returned from multiple search engines. Our study has several results; they are:
• A simple and efficient merging method can help a MSE significantly outperform the best single search engine in effectiveness [2];
• Merging based on the titles and snippets of returned search result records can be more effective than using the full documents.
This implies that a MSE can achieve better performance than a centralized retrieval system that contains all the documents from the component search engines; • The computational complexity of ranking algorithms used and performance of the MSE are conflicting parameters; • MSEs are useful, because, • Integration of search results provided by different engines; Comparison of rank positions; • Advanced search features on top of commodity engines; • A MSE can be used for retrieving, parsing, merging and reporting results provided by other search engines. X. FUTURE WORKS Component search engines employed by a MSE may change their connection parameters and result display format anytime. These changes can make the affected search engines unusable in the MSE. How to monitor the changes of search engines and make the corresponding changes in the MSE automatically. Most of today’s MSEs employ only a small number of general purpose search engines. Building large-scale MSEs that using numerous specialized search engines is another area problem. Challenges arising from building very large-scale MSEs include automatic generation and maintenance of high quality search engine representatives needed for efficient and effective search engine selection, and highly automated techniques to add search engines into MSEs and adapt to changes of search engines. REFERENCES [1] Renda M. E. and Straccia U.; Web metasearch: Rank vs. score based rank aggregation methods; 2003. [2] Meng W., Yu C. and Liu K.; Building efficient and effective metasearch engines; In ACM Computing Surveys; 2002. 39 Vol. 1, Issue 5, pp. 30-40
[3] Fagin R., Kumar R., Mahdian M., Sivakumar D. and Vee E.; Comparing and aggregating rankings with ties; In PODS; 2004.
[4] Glover J. E., Lawrence S., Birmingham P. W. and Giles C. L.; Architecture of a Metasearch Engine that Supports User Information Needs; NEC Research Institute, Artificial Intelligence Laboratory, University of Michigan; In ACM; 1999.
[5] Meng W.; Metasearch Engines; Department of Computer Science, State University of New York at Binghamton; Binghamton; 2008.
[6] Lu Y., Meng W., Shu L., Yu C. and Liu K.; Evaluation of result merging strategies for metasearch engines; 6th International Conference on Web Information Systems Engineering (WISE Conference); New York; 2005.
[7] Dwork C., Kumar R., Naor M. and Sivakumar D.; Rank aggregation methods for the Web; Proceedings of ACM Conference on World Wide Web (WWW); 2001.
[8] Fagin R., Kumar R., Mahdian M., Sivakumar D. and Vee E.; Comparing partial rankings; Proceedings of ACM Symposium on Principles of Database Systems (PODS); 2004.
[9] Fagin R., Kumar R. and Sivakumar D.; Comparing top k lists; SIAM Journal on Discrete Mathematics; 2003.
[10] Souldatos S., Dalamagas T. and Sellis T.; Captain Nemo: A Metasearch Engine with Personalized Hierarchical Search Space; School of Electrical and Computer Engineering; National Technical University of Athens; November, 2005.
[11] Mahabhashyam S. M. and Singitham P.; Tadpole: A Meta search engine; Evaluation of Meta Search ranking strategies; University of Stanford; 2004.
[12] Gulli A., University of Pisa, Informatica; Signorini A., University of Iowa, Computer Science; Building an Open Source Meta Search Engine; May, 2005.
[13] Aslam J. and Montague M.; Models for Metasearch; In Proceedings of the ACM SIGIR Conference; New Orleans; 2001.
[14] Zhao H., Meng W., Wu Z., Raghavan V.
and Yu C.; Fully automatic wrapper generation for search engines; World Wide Web Conference; Chiba, Japan; 2005.[15] Akritidis L., Katsaros D. and Bozanis P.; Effective Ranking Fusion Methods for Personalized Metasearch Engines; Department of Computer and Communication Engineering, University of Thessaly; Panhellenic Conference on Informatics (IEEE); 2008.[16] Manning C. D., Raghavan P. and Schutze H.; Introduction to Information Retrieval; Cambridge University Press; 2008.[17] Dorn J. and Naz T.; Structuring Meta-search Research by Design Patterns; Institute of Information Systems, Technical University Vienna, Austria; International Computer Science and Technology Conference; San Diego; April, 2008.Author BiographyH. Jadidoleslamy is a Master of Science student at the Guilan University in Iran. He receivedhis Engineering Degree in Information Technology (IT) engineering from the University ofSistan and Balouchestan (USB), Iran, in September 2009. He will receive his Master of Sciencedegree from the University of Guilan, Rasht, Iran, in March 2011. His research interests includeComputer Networks (especially Wireless Sensor Network), Information Security, and E-Commerce. He may be reached at tanha.hossein@gmail.com. 40 Vol. 1, Issue 5, pp. 30-40
STUDY OF HAND PREFERENCES ON SIGNATURE FOR RIGHT-HANDED AND LEFT-HANDED PEOPLES

Akram Gasmelseed and Nasrul Humaimi Mahmood
Faculty of Health Science and Biomedical Engineering, Universiti Teknologi Malaysia, Johor, Malaysia.

ABSTRACT
A signature is the easiest way to endorse a document. The problem of handwritten signature verification is a pattern recognition task used to differentiate two classes: original and fake signatures. The subject of interest in this study is signature recognition, which deals with the process of verifying the written signature patterns of human individuals, and specifically between right-handed and left-handed people. The method used in this project is on-line verification using an IntuosTM Graphics Tablet and Intuos pen as the data capturing device. On-line signature verification involves the capture of dynamic signature signals such as pen-tip pressure, time duration of the whole signature, altitude and azimuth. The ability to capture the signature and have it immediately available in digital form for verification has opened up a range of new application areas for this topic.

KEYWORDS: Signature verification, IntuosTM Graphics Tablet, Right-handed people, Left-handed people

I. INTRODUCTION
In recent years, handwritten signatures have been commonly used to identify the contents of a document or to confirm a financial transaction. Signature verification is usually made by visual check. A person compares the appearance of two signatures and accepts the given signature if it is sufficiently similar to the stored signature, for example, on a credit card. When using credit cards, suitable verification of the signature by a simple comparison using the human eye is difficult [1,2]. In order to prevent illegal use of credit cards, an electrical method for setting up an automatic identification device is desired.

41 Vol. 1, Issue 5, pp. 41-46
Biometrics, an identification technology that uses characteristics of the human body,characteristics of motion or characteristics of voice is often effective in identification [2]. However,identification technologies that use physical characteristics, especially fingerprints, often presentdifficulties as a result of psychological resistance. In contrast, automatic signature verificationprovides a great advantage in current social systems because the handwritten signature is often usedfor legal confirmation.Theoretically, the problem of handwritten signature verification is a pattern recognition task used todifferentiate two classes of original and fake signatures. A signature verification system must be ableto detect forgeries and to reduce rejection of real signatures simultaneously [3]. Automatic signatureverification can be divided into two main areas depending on the data gaining method. The methodsare off-line and on-line signature verification [2,4].In off-line signature verification, the signature is available on a document which is scanned to obtainits digital image representation. This method also identifies signatures using an image processingprocedure whereby the user is supposed to have written down completely the signature onto atemplate that is later captured by a CCD camera or scanner to be processed. Another method is on-line signature verification. It used special hardware, such as a digitizing tablet or a pressure sensitivepen, to record the pen movements during writing [5,6,7]. On-line signature verification also involvedthe capturing of dynamic signature signals such as pressure of pen tips, time duration of wholesignature and velocity along signature path.In the past few years, there have been a lot of researches [8,9] regarding signature verification andsignature recognition. Unfortunately, none of them specify the research and focusing on hand 41 Vol. 1, Issue 5, pp. 41-46
preferences. The subject of interest in this research is signature recognition, dealing with the process of verifying the written signature patterns of human individuals, specifically among right-handed and left-handed people.

II. METHODOLOGIES
The method used in this work is on-line verification using an IntuosTM 9 x 12 Graphics Tablet and Intuos pen as the data capturing device. The information was then processed using suitable software such as Capture 1.3, Microsoft Excel, MATLAB and MINITAB. The flowchart of the methodology is shown in Figure 1.

Figure 1: Flowchart of methodology

The first phase is about collecting the signatures or data of individuals. Figure 2 shows the process of taking the signature. Data were collected from a minimum of 30 right-handed and 30 left-handed people, taken from both of their hands (left and right), bringing the total number of data sets to 120. All the data were detected and digitized by the Capture 1.3 software and then saved in WordPad format.

Figure 2: Process of taking the signature

The data were arranged using Excel and simulated using MATLAB and MINITAB. All the data were analysed using correlation and regression methods. The last phase of this work is to obtain the results from the analysis phase. All the data were then analysed between left-handed and right-handed people's signatures. The results and all the problems encountered during this project are discussed clearly. Lastly, an overall conclusion and recommendations are summarized.

III. RESULT AND DISCUSSION
The linear correlation coefficient measures the strength of a linear relationship between two variables. This method measures the extent to which the points on a scatter diagram cluster about a straight line.
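The linear correlation coefficient described above can be sketched as follows (an illustrative Python version of the Pearson formula, not the authors' MATLAB/MINITAB workflow):

```python
import math

def pearson_r(xs, ys):
    """Pearson linear correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # covariance-like numerator and the two standard-deviation-like denominators
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Values near +1 or −1 indicate points clustering tightly about a straight line (in the same or opposite direction, respectively), which is exactly how the negative azimuth correlations in Table 1 should be read.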
Table 1 shows the correlation coefficients for pressure, altitude and azimuth of the samples from different right-handed and left-handed peoples. From Table 1, some analysis can be made accordingly.

Table 1: Correlation Measurement
Correlation   RR-RL (RH)   LL-LR (LH)   RR-LL (major)   RL-LR (minor)
Pressure        0.935        0.949         0.882           0.878
Altitude        0.487        0.893         0.779           0.920
Azimuth        -0.832       -0.623         0.925           0.500

Figure 3: Graph of Correlation

Firstly, the analysis of the right-handed people (RR-RL) and left-handed people (LL-LR) correlations was made. In this study, the right-handed people have a pressure correlation 0.014 less than the left-handed people. The same holds for the altitude correlation, but there the difference is larger, about 0.406 less than the left-handed people. For the azimuth correlation, the result is a negative value, but the right-handed people have a magnitude higher than the left-handed people by about 0.209. The negative value shows that the values of the data move in opposite directions. So it is recommended to apply the dominant value for each correlation while doing this study, to get maximum information for application. Secondly, between major usage (RR-LL) and minor usage hands (RL-LR), the higher value is dominant for major usage compared to minor usage in terms of the pressure and azimuth correlations, by 0.004 and 0.425 respectively. For the altitude correlation, minor usage has a value 0.141 greater than major usage. To get a measure of more general dependencies in the data, percentages were also computed. The pressure correlation of LH people (94.9%) is higher than the pressure correlation value for RH people (93.5%). The correlation value of altitude for LH people (89.3%) is also higher than the correlation value of altitude for RH people (48.7%). But, the
But, thecorrelation value of azimuth for LH people (62.3%) is lower than the correlation value of azimuth forRH people (83.2%).The left-handed people have higher values of correlation compared to right-handed people forpressure and altitude. But for azimuth, right-handed people have higher correlation than left-handedpeople. From this result, it is advisable to use the left-handed people information or setting if using forpen pressure and also altitude. The right-handed people information or setting can be advisable to usefor azimuth.Figure 3 shows that the pen pressures have the higher percentage of correlation rather than altitudeand azimuth for all types of hand usage. With this result, it is advisable to use the pen pressure toobtain the signature recognition.Regression generally models the relationship between one or more response variables and one ormore predictor variables. Linear regression models the relationship between two or more variables 43 Vol. 1, Issue 5, pp. 41-46
using a linear equation. Linear regression gives a formula for the line most closely matching those points. It also gives an R-squared (r2) value to say how well the resulting line matches the original data points. The closer a line is to the data points, overall, the stronger the relationship.

Table 2: Regression Analysis
Model                        | Equation                                        | R-Sq
PRES RR vs. ALT RR, AZM RR   | PRES RR = -3892 + 2.60 ALT RR + 2.44 AZM RR     | 69.4%
PRES LL vs. ALT LL, AZM LL   | PRES LL = -629 + 10.2 ALT LL - 2.10 AZM LL      | 82.3%
PRES RL vs. ALT RL, AZM RL   | PRES RL = -1265 + 9.30 ALT RL - 1.52 AZM RL     | 90.1%
PRES LR vs. ALT LR, AZM LR   | PRES LR = -25218 + 25.0 ALT LR + 11.8 AZM LR    | 79.4%
PRES RR vs. PRES RL          | PRES RR = 114 + 0.787 PRES RL                   | 87.5%
PRES LL vs. PRES LR          | PRES LL = 101 + 0.772 PRES LR                   | 90.1%
PRES RR vs. PRES LL          | PRES RR = 77.0 + 0.985 PRES LL                  | 77.9%
PRES RL vs. PRES LR          | PRES RL = 83.9 + 0.946 PRES LR                  | 77.0%
ALT RR vs. ALT RL            | ALT RR = 353 + 0.392 ALT RL                     | 23.7%
ALT LL vs. ALT LR            | ALT LL = -741 + 2.26 ALT LR                     | 79.7%
ALT RR vs. ALT LL            | ALT RR = 261 + 0.517 ALT LL                     | 60.6%
ALT RL vs. ALT LR            | ALT RL = -581 + 1.92 ALT LR                     | 84.6%
AZM RR vs. AZM RL            | AZM RR = 3552 - 1.02 AZM RL                     | 69.3%
AZM LL vs. AZM LR            | AZM LL = 7326 - 5.37 AZM LR                     | 38.8%
AZM RR vs. AZM LL            | AZM RR = -738 + 0.763 AZM LL                    | 85.5%
AZM RL vs. AZM LR            | AZM RL = -268 + 2.91 AZM LR                     | 25.0%

Table 2 shows that all variables have linear relationships, as shown by the linear equations. The right-handed people have the equation "PRES RR = 114 + 0.787 PRES RL" with r2 = 87.5%. The left-handed people have the equation "PRES LL = 101 + 0.772 PRES LR" with a higher r2 value of 90.1%. The high value of r2 shows that the pressure has a strong relationship for the right-handed and left-handed people.
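The r2 values in Table 2 come from least-squares fits; a minimal sketch of a simple one-predictor linear regression and its r2 is shown below (illustrative only; the paper's models, including the two-predictor ones, were fitted with MINITAB):

```python
def linreg(xs, ys):
    """Least-squares fit ys ~ a + b*xs; returns (a, b, r_squared)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sxy / sxx                 # slope
    a = my - b * mx               # intercept
    # r^2 = 1 - (residual sum of squares) / (total sum of squares)
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1 - ss_res / ss_tot
```

An r2 near 100% (as for PRES LL vs. PRES LR) means the fitted line explains almost all of the variation in the response, while a low r2 (as for ALT RR vs. ALT RL) indicates a weak linear relationship.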
For the altitude and azimuth, the value of r2 is less than 80%.This means there are weak relationships between them.For the linear relationship between pen pressure, altitude and azimuth, table 2 shows that left-handedpeople have a value of r2 that is 82.3% higher than right-handed people with 69.4%. But for the minorusage hand the r2 value is higher for right-handed people with 90.1% rather than left-handed peoplewith 79.4%. These results show that there are high linear relationship between pen pressure, altitudeand azimuth for both of the people and also their major and minor usage hand. Figure 4: Graph of Regressions 44 Vol. 1, Issue 5, pp. 41-46
Figure 4 shows that the pen pressures have a higher percentage of regression than altitude and azimuth for all types of hand usage. This also suggests using the pen pressure to obtain the signature recognition.

IV. CONCLUSION AND FUTURE WORKS
This work is about analyzing signature recognition, especially regarding people's hand preferences, using correlation and regression methods. The left-handed people have higher values of correlation compared to right-handed people for pressure and altitude. But for azimuth, right-handed people have a higher correlation than left-handed people. That means each hand preference group has its own parameters that can be considered when performing signature recognition between these two groups of people. From the regression method, the results show that there is a high linear relationship between pen pressure, altitude and azimuth for both groups of people and also for their major and minor usage hands. This means that all groups of data have a highly linear relationship between these three parameters. From the resulting analysis, pen pressure is advisable for signature recognition rather than altitude and azimuth. The pen pressure data analysis shows the highest values of correlation and regression compared to the data of altitude and azimuth. This result indicates that the data from left-handed and right-handed people's signatures are highly related in terms of pen pressure. This research work can be extended in order to apply it to the real world, due to the market demand, as an established method or technique to verify signatures. Some further recommendations can be made. Firstly, the analysis can be extended by developing new signature recognition software. The software will make the research more reliable and may be able to predict the outcome from the input signatures.
The method thats been used is only using correlation and regression analysis to analyze allthe data. By using several recognition algorithms, the research can be ended with more precise andtrusted results. The numbers of data also should be increased to greater than 30 for each of the datagroups. The physical poses and body position for person that give the signature also very important.They must have the same pose during the signature was taken. This will decrease the false of Intuospen position that will affect on the altitude and azimuth of the signatures.REFERENCES[1] Anil K. Jain, Friederike D. Griess and Scott D. Connell, On-line signature verification. Pattern Recognition 35 (2002) pp.2963 – 2972.[2] Hiroki Shimizu, Satoshi Kiyono, Takenori Motoki and Wei Gao. An electrical pen for signature verification using a two-dimensional optical angle sensor. Sensors and Actuators A 111 (2004) pp.216–221.[3] Inan Güler and Majid Meghdadi. A different approach to off-line handwritten signature verification using the optimal dynamic time warping algorithm. Digital Signal Processing 18 (2008) pp.940–950.[4] Musa Mailah and Lim Boon Han. Biometrics signature verification using pen position, time, velocity and pressure parameters. Jurnal Teknologi,UTM 48(A) Jun 2008: pp. 35 - 54.[5] Fernando Alonso-Fernandez, Julian Fierrez-Aguilar, Francisco del-Valle and Javier Ortega- Garcia. On-Line Signature Verification Using Tablet PC. Proceedings of the 4th International Symposium on Image and Signal Processing and Analysis (2005) pp 245-250.[6] Oscar Miguel-Hurtado, Luis Mengibar-Pozo, Michael G. Lorenz and Judith Liu-Jimenez. On- Line Signature Verification by Dynamic Time Warping and Gaussian Mixture Models. 41st Annual IEEE International Carnahan Conference on Security Technology (2007), pp. 
23-29.
[7] Seiichiro Hangai, Shinji Yamanaka and Takayuki Hamamoto. On-Line Signature Verification Based on Altitude and Direction of Pen Movement. IEEE International Conference on Multimedia and Expo (2000), pp. 489-492.
[8] Lim Boon Han. Biometric Signature Verification Using Neural Network. Universiti Teknologi Malaysia, Master of Engineering (Mechanical) Thesis, 2005.
[9] Reena Bajaj and Santanu Chaudhury. Signature Verification Using Multiple Neural Classifiers. Pattern Recognition, Vol. 30, No. 1, pp. 1-7, 1997.
Authors
A. GASMELSEED received his B.Sc. degree in Electrical Engineering and Informatics – major in Computer Engineering – and M.Sc. degree in Electrical Engineering and Informatics from Budapest, Hungary, in 1993 and 1999, respectively. He received the PhD degree in Electrical Engineering from Universiti Teknologi Malaysia (UTM), Malaysia, in 2009. His research is in the areas of electromagnetic biological effects, biophotonics, and computer signal/image-processing applications to biomedical engineering. Currently he is a Senior Lecturer at the Faculty of Health Science and Biomedical Engineering, UTM.

N. H. MAHMOOD received his B.Sc. and M.Sc. degrees in Electrical Engineering from Universiti Kebangsaan Malaysia (UKM) and Universiti Teknologi Malaysia (UTM) respectively. He obtained his Ph.D. degree from the University of Warwick, United Kingdom. His research areas are biomedical image processing, medical electronics and rehabilitation engineering. Currently he is a Senior Lecturer at the Faculty of Health Science and Biomedical Engineering, UTM.
DESIGN AND SIMULATION OF AN INTELLIGENT TRAFFIC CONTROL SYSTEM

1 Osigwe Uchenna Chinyere, 2 Oladipo Onaolapo Francisca, 3 Onibere Emmanuel Amano
1, 2 Computer Science Department, Nnamdi Azikiwe University, Awka, Nigeria
3 Computer Science Department, University of Benin, Benin City, Nigeria

ABSTRACT

This paper describes our research experience of building an intelligent system to monitor and control road traffic in a Nigerian city. A hybrid methodology, obtained by crossing the Structured Systems Analysis and Design Methodology (SSADM) with a fuzzy-logic-based design methodology, was deployed to develop and implement the system. Problems were identified with the current traffic control system at '+' junctions, and these necessitated the design and implementation of a new system to solve them. The resulting fuzzy-logic-based system for traffic control was simulated and tested on a popular intersection, notorious for severe traffic logjams, in a Nigerian city. The new system eliminated some of the problems identified in the current traffic monitoring and control systems.

KEYWORDS: Fuzzy logic, embedded systems, road traffic, simulation, hybrid methodologies

I. INTRODUCTION

One of the major problems encountered in large cities is traffic congestion. Data from the Chartered Institute of Traffic and Logistics in Nigeria reveal that about 75 per cent of mobility needs in the country are met by road, and that more than seven million vehicles operate on Nigerian roads on a daily basis [1]. This figure was also confirmed by the Federal Road Safety Commission of Nigeria, the institution responsible for maintaining safety on the roads [2].
The commission further affirmed that the high traffic density is caused by the influx of vehicles resulting from breakdowns in other transport sectors, and is most prevalent at '+' road junctions. Several measures have been deployed to address road traffic congestion in large Nigerian cities, among them the construction of flyovers and bypass roads, the creation of ring roads, the posting of traffic wardens to trouble spots and the construction of conventional, counter-based traffic lights. These measures, however, have failed to meet the target of freeing major '+' intersections, resulting in loss of human lives and waste of valuable man-hours during working days.

This paper describes a solution to road traffic problems in large cities through the design and implementation of an intelligent system, based on fuzzy logic technology, to monitor and control a traffic light system. The authors show how the new fuzzy logic traffic control system for '+' junctions eliminated the problems observed in the manual and conventional traffic control systems, through simulation software developed in the Java programming language. The paper is divided into five sections. The first section provides a brief introduction to traffic management in general and describes the situation in urban cities. Related research experiences and results on road traffic systems are reviewed in the second section, with particular attention to intelligent traffic control systems and an outline of several approaches. Section three describes the methodologies deployed in the development of the system, section four presents the research results and section five concludes the work.

II. REVIEW OF RELATED WORK

An intelligent traffic light monitoring system using an adaptive associative memory was designed by Abdul Kareem and Jantan (2011). The research was motivated by the need to reduce the unnecessary
long waiting times for vehicles at regular traffic lights in urban areas with a fixed cycle protocol. To improve the traffic light configuration, the paper proposed a monitoring system able to determine three street cases (empty, normal and crowded) using a small associative memory. The experiments presented promising results when the proposed approach was applied, using a program to monitor one intersection on Penang Island in Malaysia. The program could determine all street cases under different weather conditions from the stream of images extracted from the streets' video cameras [3].

A distributed, knowledge-based system for real-time, traffic-adaptive control of traffic signals was described by Findler et al. (1997). The system learned in two processes: the first optimized the control of steady-state traffic at a single intersection and over a network of streets, while the second stage of learning dealt with predictive/reactive control in response to sudden changes in traffic patterns [4]. GiYoung et al. (2001) believed that electro-sensitive traffic lights are more efficient than fixed, preset traffic signal cycles because they can extend or shorten the signal cycle when the number of vehicles suddenly increases or decreases. Their work was centred on creating an optimal traffic signal using fuzzy control. Fuzzy membership values between 0 and 1 were used to estimate the uncertain length of a vehicle, vehicle speed and road width, and various conditions such as car type, speed, delay in starting time and the volume of cars in traffic were stored [5]. A framework for a dynamic and automatic traffic light control expert system was proposed by [6].
The model adopted inter-arrival and inter-departure times to simulate the numbers of cars arriving at and leaving roads. A knowledge-base system and rules were used by the model, and RFID was deployed to collect road traffic data. This model was able to make the decisions required to control traffic at intersections, depending on the traffic data collected by the RFID reader. A paper by Tan et al. (1996) described the design and implementation of an intelligent traffic lights controller based on fuzzy logic technology. The researchers developed software to simulate an isolated traffic junction based on this technology. Their system was highly graphical in nature, used the Windows system and allowed simulation of different traffic conditions at the junction. The system compared the fuzzy logic controller with a conventional fixed-time controller, and the simulation results showed that the fuzzy logic controller had better performance and was more cost-effective [7].

Research efforts in traffic engineering studies yielded the queue traffic light model, in which vehicles arrive at an intersection controlled by a traffic light and form a queue. Several research efforts developed different techniques for evaluating the queue length in each lane from street width and the number of vehicles expected at a given time of day. The efficiency of the traffic light in the queue model, however, is affected by unexpected events such as the breakdown of a vehicle or a road traffic accident, which disrupt the flow of vehicles. Among the techniques based on the queue model was a queue detection algorithm proposed by [8]. The algorithm consisted of motion detection and vehicle detection operations, both based on extracting the edges of the scene to reduce the effects of variations in lighting conditions. A decentralized control model was described by Jin and Ozguner (1999).
This model combined multi-destination routing and real-time traffic light control, based on a concept of cost-to-go to different destinations [9]. Huang and Miller (2004) argued that electronic traffic signals are expected to augment the traditional traffic light system in future intelligent transportation environments, because they have the advantage of being easily visible to machines. Their work presented a basic electronic traffic signalling protocol framework and two of its derivatives: a reliable protocol for intersection traffic signals and one for stop-sign signals. These protocols enabled recipient vehicles to robustly differentiate the signal's designated directions despite potential threats (confusions) caused by reflections. The authors also demonstrated how to use one of the protocols to construct a sample application, a red-light alert system, raised the issue of potential inconsistency threats caused by the uncertainty of the location system being used, and discussed means of handling them [10]. Di Febbraro et al. (2004) showed that Petri net (PN) models can be applied to traffic control. The researchers provided a modular representation of urban traffic systems regulated by signalized intersections, considering such systems to be composed of elementary structural components, namely intersections and road stretches. The movement of vehicles in the traffic network was described with a microscopic representation realized via timed PNs. An interesting feature of the model was the possibility of representing the offsets among different traffic light cycles
as embedded in the structure of the model itself [11]. Nagel and Schreckenberg (1992) described a cellular automata model for traffic simulation. At each discrete time-step, vehicles increase their speed by a certain amount until they reach their maximum velocity; in the case of a slower-moving vehicle ahead, the speed is decreased to avoid collision. Some randomness is introduced by adding, for each vehicle, a small chance of slowing down [12].

The experience of building a traffic light controller using a simple predictor was described by Tavladakis (1999). Measurements taken during the current cycle were used to test several possible settings for the next cycle, and the setting resulting in the smallest number of queued vehicles was executed. The system was highly adaptive; however, as it only used data from one cycle, it could not handle strong fluctuations in traffic flow well [13]. Chattaraj et al. (2008) proposed a novel architecture for creating intelligent systems for controlling road traffic. Their system was based on Radio Frequency Identification (RFID) tracking of vehicles. The architecture can be used in places where RFID tagging of vehicles is compulsory, and the efficiency of the system lies in the fact that it operates traffic signals based on the current vehicular volume in the different directions of a road crossing, not on pre-assigned times [14].

III. METHODOLOGY

A novel methodology is described in this work for the design and implementation of the intelligent traffic lights control system. This methodology was obtained as a hybrid of two standard methodologies: the Structured System Analysis and Design Methodology (SSADM) and the Fuzzy-Based Design Methodology (Figure 1).
The systems study and preliminary design were carried out using the Structured System Analysis and Design Methodology, and they replaced the first step of the Fuzzy-Based Design Methodology, as shown by the broken arc in Figure 1. The fuzzy-logic-based methodology was chosen as the paradigm for an alternative design methodology, applied in developing both linear and non-linear systems for embedded control. The physical and logical design phases of the SSADM were therefore replaced by the two remaining steps of the fuzzy-logic-based methodology to complete the crossing of the two methodologies. A hybrid methodology was necessary because there was a need to examine the existing systems and classify the intersections as 'Y' and '+' junctions with a view to determining the major causes of traffic deadlock at road junctions. There was also a need to design the traffic control system using fuzzy rules, and to use simulation to implement an intelligent traffic control system that would eliminate logjams.

[Figure 1 crosses the SSADM steps (investigate current system; Business System Options (BSOs); Requirement Specification; Technical System Options (TSOs); Logical Design; Physical Design) with the fuzzy design steps: understand the physical system and control requirements; design the controller using fuzzy rules; simulate, debug and implement the system.]

Figure 1. Our hybrid design methodology
An analysis of the current traffic control system in the south-eastern Nigerian city showed that some of the junctions are controlled by traffic wardens while some are not manned at all. Some of these junctions also have strategically located traffic lights, but these are not intelligent. The problems are compounded by the nonchalant attitude of traffic wardens to controlling traffic effectively through hand signals: being human, they easily get tired, and they may leave their duty posts when the weather does not suit them. Cars in urban traffic can experience long travel times due to the inefficient fixed-time traffic light controllers used at some junctions in the cities. Moreover, there is no effective intelligent traffic system that works twenty-four hours a day to control signals at these busy junctions. In addition, aside from the manual control of traffic by traffic policemen, there are basically two types of conventional traffic light control in use. One type uses a preset cycle time to change the lights, while the other combines a preset cycle time with proximity sensors that can activate a change in the cycle time of the lights, for example for a less-travelled street that may not need a regular cycle of green light when no cars are present. This type of control depends on having prior knowledge of flow patterns at the intersection so that signal cycle times and the placement of proximity sensors may be customized for the intersection.

IV. RESULTS AND DISCUSSIONS

Based on our analysis of the present traffic control system, the following assumptions became necessary in order to develop a feasible system:
1. The system will only work for an isolated four-way junction with traffic coming from the four cardinal directions.
2. Traffic only moves from the North to the South and vice versa at the same time; at this time, the traffic from the East and West is stopped. In this case, the controller considers the combination of the waiting densities for the North and South as one side, and those of the East and West combined as another side.
3. Turns (right and left) are considered in the design.
4. The traffic from the west lane always has the right of way, and the west-east lane is considered the main traffic.

4.1 Results: Input/Output Specifications for the Design

Figure 2 shows the general structure of a fuzzy input/output traffic lights control system. The system was modelled after the intelligent traffic control system developed for the city of Kuala Lumpur at the Artificial Intelligence Centre, Universiti Teknologi Malaysia, by [7]. S represents the two electromagnetic sensors placed on the road in each lane. The first sensor is placed behind each traffic light, and the second sensor is located behind the first. A sensor network normally constitutes a wireless ad-hoc network [15], meaning that each sensor supports a multi-hop routing algorithm. While the first sensor is required to count the number of cars passing the traffic lights, the second counts the number of cars approaching the intersection at distance D from the lights. To determine the number of cars between the traffic lights, the difference between the readings of the two sensors is evaluated. This differs from a conventional traffic control system, where a proximity sensor placed at the front of each traffic light can only sense the presence of cars waiting at the junction, not the number of cars waiting. The sequence of states that the fuzzy traffic controller cycles through is controlled by the state machine. There is one state for each phase of the traffic light.
There is one default state, which takes place when no incoming traffic is detected. This default state corresponds to the green time for a specific approach, usually the main approach. In the sequence of states, a state can be skipped if there is no vehicle queue for the corresponding approach. The objectives of this design are to simulate an intelligent road traffic control system and to build platform-independent software that is simple, flexible and robust, and that will ease traffic congestion (deadlock) in an urban Nigerian city, especially at '+' junctions.
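As a rough illustration of the two-sensor arrangement and the state-skipping rule described above, the following sketch (class and method names are our own, not the paper's source code) keeps one count per sensor and derives the queue between them from the difference of the readings:

```java
// Illustrative sketch only: each approach has a counter behind the lights and
// a second counter at distance D upstream; the number of cars held between
// them is the difference of the two readings, as described in the text.
class ApproachSensors {
    private int countedUpstream;  // sensor at distance D (cars approaching)
    private int countedAtLights;  // sensor behind the lights (cars that passed)

    public void carAtUpstreamSensor() { countedUpstream++; }
    public void carPassedLights()     { countedAtLights++; }

    // Cars currently waiting between the two sensors.
    public int queueLength() {
        return countedUpstream - countedAtLights;
    }

    // The state machine may skip this approach's phase when nothing is queued.
    public boolean phaseCanBeSkipped() {
        return queueLength() == 0;
    }
}
```

It is this difference, rather than a single proximity reading, that lets the controller weigh how many cars are waiting, not merely whether any car is present.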
[Figure 2: the sensors S, at spacing D, feed car counters whose readings (Queue, Arrival) go to the fuzzy logic controller; the controller drives the state machine, which drives the traffic lights interface.]

Figure 2. General structure of a fuzzy input/output traffic lights control system

4.2 High-Level Model and Module Specifications for the System

Figure 3 shows the high-level model of the system. The main module is the traffic panel, and the main class, trafficSystem, is implemented in the Java programming language. Several methods implement the intelligent traffic light system: changeLight, Calflow, TrafficPanel, PaintMode, PaintLightPeriod, PaintLights, Traffic System, Waiting, Moving, Flow density, Run, ActionPerformed and ItemStateChanged. These methods are interwoven into a complete interface that implements a total intelligent traffic control system. The main class, trafficSystem, calls the methods listed above. The changeLight module is charged with toggling the lights (green to red and vice versa) depending on the signal passed to its executing thread. Calflow animates the objects (cars) on the interface using a flow sequence that depicts typical traffic and a time sequence automatically generated by the system timer (measured in milliseconds), taking into consideration the number of cars waiting and the time they have been in the queue. The traffic panel initializes the interface parameters such as frames, buttons, timer, objects and the other processes (threads) that run when the interface is invoked by the applet viewer command. The PaintMode, PaintLight, PaintRoad and PaintLights modules draw the objects (cars), lights, roads (paths) for traffic flow, and the graphs for traffic counts and the toggling of traffic lights. These modules implement the various functionalities of the graphic interface or class library.
[Figure 3 shows the module hierarchy: TrafficSystem at the root, with CalFlow, ChangeLight, Traffic panel, ItemStateChanged, PaintMode, PaintLight, PaintRoad, Waiting, Moving, FlowDensity, Run and ActionPerformed.]

Figure 3. High-level model of the traffic control system

It is worth mentioning here that the attributes of a typical car object are initialized by the class node defined at the beginning of the code. Attributes such as the X and Y co-ordinates of the car object, and its line, road and delay, are all encapsulated in class node. The class is inherited by other classes to implement the entire system. The traffic system class initializes the buttons that start and end the traffic light simulation. The start and end processes commence and terminate the traffic flow and light sequence, respectively. The modules for the commencement and termination of the traffic control process are bound to these controls at run time. This is achieved by implementing the ActionListener interface, which listens for a click event on a specific button. Each click event invokes an ActionEvent, from which the label on the button is retrieved to determine which button was pressed. This allows comprehensive control of operations on the interface without deadlock. The Waiting module enables the program to plot a graph of the waiting times of cars. The Moving class plots the graph of the moving times of cars in both the conventional traffic control system and the fuzzy logic traffic control system. The Flow density module checks the car density of every lane, that is, it checks which lane has more cars before giving access for movement. The Run class multithreads the traffic light and controls the Go and Stop buttons. The ActionPerformed class is responsible for loading the applet in the browser. The ItemStateChanged class ensures that car sensors are not deselected, keeping the program working efficiently. Finally, the traffic control system simulates the complete functionality of a real-time traffic light and provides a user-friendly interface for easy implementation. The overall internal context diagram for the system is shown in Figure 4.

[Figure 4 shows the internal context of the traffic control system: ChangeLight, CreateCarQueue, TrafficLightModule, CarModule, AdvanceQueueCheck, StopLight, StopQueue, CarDensityChecker and StopCarDensityChecker, with objects initialized and advanced for moving cars.]

Figure 4. Overall internal context diagram for the system

4.3 Simulation of the Traffic Control System

Java SE 6 Update 10 was the tool deployed for building the simulated version of the traffic control system.
This choice was based on Java being the researchers' language of choice for developing applications that require higher performance [15]. The Java Virtual Machine (JVM) provides support for multiple language platforms, and Java SE 6 Update 10 provides improved performance of Java2D graphics primitives on Windows, using Direct3D and hardware acceleration. Figure 5 shows the control centre for the simulation of the traffic control system.
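The run-time binding of the Go and Stop controls described in section 4.2 can be sketched as follows. This is a hedged reconstruction, not the paper's code: the class name and the exact labels are assumptions, and the sketch shows only how a listener can use the retrieved action command (by default the button's label) to tell the buttons apart:

```java
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

// Hedged reconstruction of the control wiring: one listener inspects the
// action command (the button label) to decide which operation was invoked.
class SimulationControls implements ActionListener {
    private boolean running;

    @Override
    public void actionPerformed(ActionEvent e) {
        handle(e.getActionCommand());
    }

    // Separated out so the dispatch logic is usable without a GUI event.
    public void handle(String label) {
        if ("Go".equals(label)) {
            running = true;   // commence traffic flow and light sequence
        } else if ("Stop".equals(label)) {
            running = false;  // terminate the simulation
        }
    }

    public boolean isRunning() { return running; }
}
```

Routing all buttons through one listener and branching on the command string is what lets a single object coordinate the interface operations without deadlock, as the paper notes.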
Figure 5. The simulated fuzzy logic traffic control system

The system is highly graphical in nature. A number of pop-up and pull-down menus were introduced in the implementation for ease of use (Figure 5). Command buttons to display graphs showing the waiting times of cars (Figure 6), the moving times of cars (Figure 7), car flow density (Figure 8) and current arrival/departure times were all embedded in the application's control centre. The views can be cascaded to show the control centre and any of the graphs at the same time (Figure 9). Two fuzzy input variables were chosen in the design to represent the quantity of traffic on the arrival side (Arrival) and the quantity of traffic on the queuing side (Queue). The green side represents the arrival side, while the red side is the queuing side. To vary the flow of traffic in the simulation according to real-life situations, the density of the flow of cars is set as required by clicking on the arrows on the sides of each lane.

Figure 6. Car waiting time in the simulation
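To make the role of the two fuzzy inputs (Arrival and Queue) concrete, here is a minimal zero-order Sugeno-style sketch. The membership shapes, the rule table and the output values are our assumptions for illustration, not parameters taken from the paper:

```java
// Illustrative fuzzy step: Arrival (green side) and Queue (red side) are
// fuzzified with simple ramp memberships on a 0..10 vehicle scale, and three
// assumed rules decide how many seconds to extend the current green phase.
class FuzzyGreenExtension {
    static double many(double x) { return Math.max(0.0, Math.min(1.0, x / 10.0)); }
    static double few(double x)  { return 1.0 - many(x); }

    // Rules (assumed for the sketch):
    //   IF Arrival is many AND Queue is few  THEN extend by 20 s
    //   IF Arrival is many AND Queue is many THEN extend by 10 s
    //   IF Arrival is few                    THEN extend by  0 s
    public static double extensionSeconds(double arrival, double queue) {
        double r1 = Math.min(many(arrival), few(queue));
        double r2 = Math.min(many(arrival), many(queue));
        double r3 = few(arrival);
        double den = r1 + r2 + r3;
        // Weighted average of the rule outputs (defuzzification).
        return den == 0.0 ? 0.0 : (r1 * 20.0 + r2 * 10.0) / den;
    }
}
```

With heavy arrivals and an empty opposing queue this rule base grants the full extension; with no arrivals it grants none, which mirrors how a fuzzy controller trades green time between the two sides.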
Figure 7. Car moving time in the simulation

Figure 8. Flow density of cars in the simulation
Figure 9. Cascading different views of the traffic control system

V. CONCLUSION

Information technology (IT) has transformed many industries, from education to health care to government, and is now in the early stages of transforming transportation systems. While many think improving a country's transportation system solely means building new roads or repairing ageing infrastructure, the future of transportation lies not only in concrete and steel but, increasingly, in using IT. IT enables elements within the transportation system (vehicles, roads, traffic lights, message signs, etc.) to become intelligent by embedding them with microchips and sensors and empowering them to communicate with each other through wireless technologies [16]. In this work, the researchers attempted to solve the problem of road traffic congestion in large cities through the design and implementation of an intelligent system, based on fuzzy logic technology, to monitor and control traffic lights. An analysis of the current traffic management system in Nigeria was carried out, and the results of the analysis necessitated the design of an intelligent traffic control system. Figures 5 through 9 show the outputs of a Java software simulation of the system, developed using a popular '+' junction, notorious for traffic congestion, in an eastern Nigerian city. The system eliminated the problems observed in the manual and conventional traffic control systems as the flow density was varied according to real-life traffic situations. It was observed that the fuzzy logic control system provided better performance in terms of both total waiting time and total moving time.
Since the efficiency of any service facility is measured in terms of how busy the facility is, we deem it imperative to say that the system in question is not only highly efficient but has also successfully curbed the menace of traffic deadlock, which has become a phenomenon on our roads; less waiting time will reduce not only fuel consumption but also air and noise pollution.

REFERENCES

[1]. Ugwu, C. (2009). Nigeria: Over 7 Million Vehicles Ply Nigerian Roads Daily - Filani. Champion Newspapers, Nigeria, 2nd October 2009. Posted by the AllAfrica.com project. Downloaded 15 September 2011 from http://allafrica.com/stories/200910020071.html
[2]. Mbawike, N. (2007). 7 Million Vehicles Operate On Nigerian Roads - FRSC. LEADERSHIP Newspaper, 16th November 2007. Posted by Nigerian Muse Projects. Downloaded 15 September 2011 from http://www.nigerianmuse.com/20071116004932zg/nm-projects/7-million-vehicles-operate-on-nigerian-roads-frsc/
[3]. Abdul Kareem, E.I. & Jantan, A. (2011). An Intelligent Traffic Light Monitor System using an Adaptive Associative Memory. International Journal of Information Processing and Management, 2(2): 23-39.
[4]. Findler, N. V., Sudeep, S., Ziya, M. & Serban, C. (1997). Distributed Intelligent Control of Street and Highway Ramp Traffic Signals. Engineering Applications of Artificial Intelligence, 10(3): 281-292.
[5]. GiYoung, L., Kang, J. and Hong, Y. (2001). The optimization of traffic signal light using artificial intelligence. Proceedings of the 10th IEEE International Conference on Fuzzy Systems.
[6]. Wen, W. (2008). A dynamic and automatic traffic light control expert system for solving the road congestion problem. Expert Systems with Applications, 34(4): 2370-2381.
[7]. Tan, K., Khalid, M. and Yusof, R. (1996). Intelligent traffic lights control by fuzzy logic. Malaysian Journal of Computer Science, 9(2): 29-35.
[8]. Fathy, M. and Siyal, M. Y. (1995). Real-time image processing approach to measure traffic queue parameters. Vision, Image and Signal Processing, IEEE Proceedings, 142(5): 297-303.
[9]. Lei, J. and Ozguner, U. (1999). Combined decentralized multi-destination dynamic routing and real-time traffic light control for congested traffic networks. In Proceedings of the 38th IEEE Conference on Decision and Control.
[10]. Huang, Q. and Miller, R. (2004). Reliable Wireless Traffic Signal Protocols for Smart Intersections. Downloaded August 2011 from http://www2.parc.com/spl/members/qhuang/papers/tlights_itsa.pdf
[11]. Di Febbraro, A., Giglio, D. and Sacco, N. (2004). Urban traffic control structure based on hybrid Petri nets. IEEE Transactions on Intelligent Transportation Systems, 5(4): 224-237.
[12]. Nagel, K.A. and Schreckenberg, M.B. (1992). A cellular automaton model for freeway traffic. Downloaded September 2011 from www.ptt.uni-duisburg.de/fileadmin/docs/paper/1992/origca.pdf
[13]. Tavladakis, A. K. (1999). Development of an Autonomous Adaptive Traffic Control System. European Symposium on Intelligent Techniques.
[14]. Chattaraj, A., Chakrabarti, S., Bansal, S., Halder, S. and Chandra, A. (2008). Intelligent Traffic Control System using RFID. In Proceedings of the National Conference on Device, Intelligent System and Communication & Networking, India.
[15]. Osigwe, U. C. (2011). An Intelligent Traffic Control System. Unpublished M.Sc. thesis, Computer Science Department, Nnamdi Azikiwe University, Awka, Nigeria.
[16]. Ezell, S. (2011). Explaining IT Application Leadership: Intelligent Transportation Systems. White paper of the Information Technology and Innovation Foundation (ITIF). Downloaded August 2011 from www.itif.org/files/2010-1-27-ITS_Leadership.pdf

AUTHORS' BIOGRAPHY

Osigwe, Uchenna Chinyere is completing her M.Sc. in Computer Science at Nnamdi Azikiwe University, Awka, Nigeria. She is a chartered practitioner of the computing profession in Nigeria, having been registered by the Computer Professionals Regulatory Council of Nigeria. She is currently a Systems Analyst with the Imo State University Teaching Hospital, Orlu, Nigeria.

Oladipo, Onaolapo Francisca holds a Ph.D. in Computer Science from Nnamdi Azikiwe University, Awka, Nigeria, where she is currently a faculty member. Her research interests span various areas of Computer Science and Applied Computing. She has published numerous papers detailing her research experiences in both local and international journals and has presented research papers at a number of international conferences. She is also a reviewer for many international journals and conferences.
She is a member of several professional and scientific associations both within Nigeria and beyond; they include the British Computer Society, the Nigerian Computer Society, the Computer Professionals (Regulatory Council) of Nigeria, the Global Internet Governance Academic Network (GigaNet), the International Association of Computer Science and Information Technology (IACSIT), the Internet Society (ISOC), the Diplo Internet Governance Community and the Africa ICT Network.

Emmanuel Onibere started his teaching career at the University of Ibadan in 1976 as an Assistant Lecturer. He moved to the University of Benin in 1977 as Lecturer II and rose to Associate Professor of Computer Science in 1990. In January 1999 he took up an appointment at the University of Botswana, Gaborone, to give academic leadership, while on leave of absence
from the University of Benin. In October 2000 he was appointed Commonwealth Visiting Professor of Computer Science at the University of Buea in Cameroon, again to give academic leadership. He returned to the University of Benin in December 2002, and in 2003 he was appointed full Professor of Computer Science there. Prof. Onibere has been an external examiner at B.Sc., M.Sc. and Ph.D. levels in many universities, and he has been a resource person in a number of workshops and conferences both inside and outside Nigeria. He holds a B.Sc. in Mathematics and an M.Sc. and Ph.D. in Computer Science. His special area of research is Software Engineering. He has been involved in a number of research projects both in Nigeria and abroad, and has chaired the organizing committees of a number of conferences and training programmes. Prof. E.A. Onibere has produced 5 Ph.D.s and over 42 Masters graduates, and has published 5 books and fifty articles. He is currently the Deputy Vice-Chancellor (Academic) of the University of Benin and Chairman of the Information Technology Research and Grants Committee of the National Information Technology Development Agency (NITDA) of the Ministry of Science and Technology.
DESIGN OPTIMIZATION AND SIMULATION OF THE PHOTOVOLTAIC SYSTEMS ON BUILDINGS IN SOUTHEAST EUROPE

Florin Agai, Nebi Caka, Vjollca Komoni
Faculty of Electrical and Computer Engineering, University of Prishtina, Prishtina, Republic of Kosova

ABSTRACT

The favourable climate conditions of Southeast Europe and the recent legislation for the utilization of renewable energy sources provide a substantial incentive for the installation of photovoltaic (PV) systems. In this paper, the simulation of a grid-connected photovoltaic system using the computer software package PVsyst is presented, and its performance is evaluated. The performance ratio and the various power losses (temperature, soiling, internal network, power electronics) are calculated, as are the positive effects on the environment of reducing the release of gases that cause the greenhouse effect.

KEYWORDS: Photovoltaic, PV system, renewable energy, simulation, optimization

I. INTRODUCTION

The aim of the paper is to present a design methodology for photovoltaic (PV) systems, from small appliances to commercial systems connected to the network. It also presents the potential of Southeast Europe (Kosova) to use solar energy, mentioning changes in regulations aimed at initiating economic development. The project of installing a grid-connected, roof-type PV system will have to answer the following questions:
1. What is the global radiation energy of the sun?
2. What is the maximum electrical power that the PV system generates?
3. What is the amount of electrical energy that the system produces in a year?
4. What is the specific production of electricity?
5. How large are the losses during conversion in the PV modules (thermal degradation, mismatch)?
6. What are the values of the loss factors and the normalized outputs?
7.
What is the value of the Performance Ratio (PR) 8. How much are the losses in the system (inverter, conductor, ...) 9. What is the value of energy produced per unit area throughout the year 10. What is the value of Rated Power Energy 11. What is the positive effect on the environmentWe want to know how much electricity could be obtained and how much will be the maximum powerproduced by photovoltaic systems connected to network, build on the Laboratory of Technical Facultyof Prishtina, Prishtina, Kosovo.Space has something over 5000 m2 area, and it has no objects that could cause shadows. We want toinstall panels that are in single-crystalline technology and we are able to choose from the programlibrary. Also the inverters are chosen from the library. 58 Vol. 1, Issue 5, pp. 58-68
Figure 1. Laboratory conceptual plan for the PV system on the roof. Photo taken from Google Map

In the next chapter, similar and related projects are mentioned, and their published results can be studied through the references. In Materials and Methods, the use of the software for simulating the design and operation of a PV system is explained. In the Results chapter, the detailed report explains all the parameters and results of the simulation. All the losses and mismatches along the system are quantified and visualised on the "Loss Diagram", specific for each configuration.

II. RELATED WORK

In the paper "Performance analysis of a grid connected photovoltaic park on the island of Crete" [2], the grid-connected photovoltaic park (PV park) of Crete has been evaluated and presented through long-term monitoring and investigation. Also, the main objective of the project "Technico-economical Optimization of Photovoltaic Pumping Systems: Pedagogic and Simulation Tool Implementation in the PVsyst Software" [9] is the elaboration of a general procedure for the simulation of photovoltaic pumping systems, and its implementation in the PVsyst software. This tool is mainly dedicated to engineers in charge of solar pumping projects in the southern countries.

III. MATERIALS AND METHODS

Within the project we will use the computer program simulator PVsyst, designed by the Energy Institute of Geneva, which contains all the subprograms for the design, optimization and simulation of PV systems connected to the grid, autonomous systems and solar water pumps. The program includes a database of about 7200 models of PV modules and 2000 models of inverters.

PVsyst is a PC software package for the study, sizing, simulation and data analysis of complete PV systems.
It is a tool that allows to analyze accurately different configurations and to evaluate itsresults in order to identify the best technical and economical solution and closely compare theperformances of different technological options for any specific photovoltaic project. Project design .part, performing detailed simulation in hourly values, including an easy-to-use expert system, which includ usehelps the user to define the PV-field and to choose the right components. Tools performs the database fieldmeteo and components management. It provides also a wide choice of general solar tools (solargeometry, meteo on tilted planes, etc), as well as a powerful mean of importing real data measured onexisting PV systems for close comparisons with simulated values. Besides the Meteo Databaseincluded in the software, PVsyst now gives access to many meteorological data sources availablefrom the web, and includes a tool for easily importing the most popular ones.The data for the parameters of location: Site and weather: Country: KOSOVO, Locality: Prishtina, PrishtinaGeographic coordinates: latitude: 42o40N, longitude: 21o10 E, altitude: 652m. Weather data: .Prishtina_sun.met:Prishtina, Synthetic Hourly data synthesized from the program Meteonorm97. Prishtina, MeteonormSolar path diagram is a very useful tool in the first phase of the design of photovoltaic systems fordetermining the potential shadows. Annual global radiation (radiant and diffuse) for Prishtina is 1193[kWh/m2.year]. The value of Albedo effect for urban sites is 0.14 to 0.22; we will take the average0.2. [1]
    • International Journal of Advances in Engineering & Technology, Nov 2011.©IJAET ISSN: 2231-1963 Figure 2. The diagram of sun path for Prishtina (42o40’ N, 21o10’ E)Transposition factor = 1.07 (Transposition factor shows the relationship between radiation panels andglobal radiation). For grid connected system, the user has just to enter the desired nominal power, tochoose the inverter and the PV module types in the database. The program proposes the number ofrequired inverters, and a possible array layout (number of modules in series and in parallel). Thischoice is performed taking the engineering system constraints into account: the number of modules inseries should produce a MPP voltage compatible with the inverter voltage levels window. The usercan of course modify the proposed layout: warnings are displayed if the configuration is not quitesatisfactory: either in red (serious conflict preventing the simulation), or in orange (not optimalsystem, but simulation possible). The warnings are related to the inverter sizing, the array voltage, thenumber of strings by respect to the inverters, etc.Photovoltaic (PV) module solution: From the database of PVmodules, we choose the model of thesolar panel and that is: CS6P – 230M, with maximum peak power output of WP = 230W – CanadianSolar Inc.Inverter solution: For our project we will choose inverter 100K3SG with nominal power Pn=100kWand output voltage of 450-880V, the manufacturer Hefei. For chosen modules here are somecharacteristics of working conditions: Figure 3. U-I characteristics for irradiation h = 1245 W/m2and working temperature 60oC.Output power P = f(U)
Figure 4. The characteristic of power for irradiation h = 1245 W/m² and working temperature 60 °C

Figure 5. Block diagram of the PV system

Figure 5 shows that the PV system comprises 2622 Canadian Solar CS6P-230M monocrystalline silicon PV modules (panels). The PV modules are arranged in 138 parallel strings (a string is a serial connection of modules), with 19 modules (panels) in each, and connected to six Hefei 100K3SG inverters installed on the supporting structure, plus connection boxes, irradiance and temperature measurement instrumentation, and a data logging system. The PV system is mounted on a stainless steel support structure facing south and tilted at 30°. Such a tilt angle was chosen to maximize yearly energy production.

IV. RESULTS

1. The global horizontal irradiation energy of the sun for a year in the territory of Southeast Europe (specifically for Prishtina), according to the results from the PVsyst program, is h = 1193 kWh/m²·year. At
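The array arrangement described above can be checked with a short script (a sketch; the string and module counts are taken from the text, and the resulting nominal array power matches the ~603 kWp quoted in the Results section):

```python
# Array sizing described above: 138 parallel strings of 19 modules each,
# using 230 Wp CS6P-230M modules (values from the text).
modules_per_string = 19
parallel_strings = 138
module_power_wp = 230.0

total_modules = parallel_strings * modules_per_string
array_power_kwp = total_modules * module_power_wp / 1000.0

print(total_modules, round(array_power_kwp, 2))  # 2622 modules, ~603 kWp
```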
the panel surface the level of radiation is 7.9% higher because the panels are tilted. This value is reduced by 3.3% because of the effect of the Incidence Angle Modifier (IAM), and the final value is h = 1245 kWh/m²·year. The reference incident energy falling on the panel surface (in a day) is Yr = 3.526 kWh/m²/day. The highest value of total radiation on the panel surface is in July, 167.5 kWh/m², whereas the lowest value is in December, 41.4 kWh/m². The annual irradiation is 1245 kWh/m², and the average temperature is 10.26 °C. The PV system generates 76.2 MWh of electricity in July and 20 MWh in December.

2. The maximum electric power that the PV system generates at the output of the inverter is Pnom = 603 kWp.

3. The annual electric energy produced at the output of the inverter is E = 610,512 kWh.

4. The specific production of electricity is 1012 kWh/kWp/year.

5. The losses of power during PV conversion in the modules are:
   PV losses due to the irradiance level = 4.7%
   PV losses due to temperature = –4.9%
   Losses due to the quality of the modules = 7976 kWh per year (1.2%)
   Losses due to the mismatch of the modules = 14,334 kWh per year (2.1%)
   Losses due to conduction (wiring) resistance = 5174 kWh per year (0.8%).

6. The loss factors and normalised production are:
   Lc – panel losses (losses in the PV array) = 982,006 kWh per year (13.1%)
   Ls – system losses (inverter, ...) = 40,904 kWh per year (6.7%)
   Yf – useful energy produced (at the output of the inverter) = 610,512 kWh per year.
   The loss factors and normalised production (per installed kWp) are:
   Lc – panel losses (losses in the PV array) = 0.55 kWh/kWp/day
   Ls – losses in the system (inverter, ...) = 0.20 kWh/kWp/day
   Yf – useful energy produced (at the output of the inverter) = 2.77 kWh/kWp/day

7. The performance ratio (PR) is the ratio between the actual yield (output of the inverter) and the target yield (output of the PV array) [2]:

   PR = Yf / Yr = 2.77 / 3.526 = 0.787 = 78.7%   (1)

8.
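The irradiation chain in item 1 and the performance ratio of Equation (1) can be reproduced numerically (a sketch using the figures quoted above; the 7.9% tilt gain and 3.3% IAM loss are the values reported by the simulation):

```python
# Plane-of-array irradiation: horizontal value corrected for tilt gain and IAM loss.
h_horizontal = 1193.0                            # kWh/m2/year, global horizontal
h_panels = h_horizontal * 1.079 * (1 - 0.033)    # ~1245 kWh/m2/year on the tilted plane

# Performance ratio from the normalized daily yields (Equation 1).
yf = 2.77     # useful yield at inverter output [kWh/kWp/day]
yr = 3.526    # reference incident energy [kWh/kWp/day]
pr = yf / yr  # ~0.786, reported as 78.7 %

print(round(h_panels, 1), round(pr, 3))
```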
System losses are losses in the inverter and conduction. They are Ls = – 6.7 %. System Efficiency (of inverters) is: 1– 0.067 = 0.933, or ηsys = 93.3 %. Overall losses in PV array (temp, module, quality, mismatch, resistant) are: Lc = – 13.1 %. PV array efficiency is: Lc = 1– 0.131 = 0.869, orηrel = 86.9 %.9. The energy produced per unit area throughout the year is: [3] = ℎη η η η = hη = 0.787 × 1245 × 0.143 = 140.4 annual (2)10. Energy forRated Poweris: = η η η = = PR = 0.787 × = 0.9798 97.98% (3)11. Economic Evaluation. With the data of retail prices from PV and inverter stock market we can make estimation for the return of investment [4]: Panels: 2622(mod) × 1.2 (Euro/Wp.mod) × 230 (WP) = 723672 Euro Inverters: 6 × 5200 (Euro) = 31200 Euro Cable: 2622(mod) × 3 (euro/mod) = 7866 Euro Construction: 2622 (mod) × 5 (Euro/mod) = 13110 Euro Handwork: 2622 (mod) × 5 (Euro /mod) = 13110 Euro Total: 788958 Euro If the price of one kWh of electricity is 0.10 Euro/kWh, then in one year will be earned [5]: 610500 (kWh/year) x 0.10 (Euro/kWh) × 1 (year) = 61050 (Euro/year) 62 Vol. 1, Issue 5, pp. 58-68
The payback time of the investment will be:

   T = 788,958 / 61,050 ≈ 12.9 years   (4)

The module lifetime is 25 years, and the inverter lifetime is 5 years.

12. Positive effect on the environment. During the generation of electricity from fossil fuels, greenhouse and polluting gases are produced, such as nitrogen oxides (NOx), sulphur dioxide (SO2) and carbon dioxide (CO2). A large amount of ash, which must be stored, is also produced [6].

   Table 1. Positive effects of the PV system for environmental protection: by-products of a coal-fired power plant with an electricity production equal to that of the PV system (E = 610.5 MWh per year)

   By-product   Per kWh   For annual energy production E = 610.5 MWh
   SO2          1.24 g    757 kg
   NOx          2.59 g    1581 kg
   CO2          970 g     592.2 t
   Ash          68 g      41.5 t

13. Diagrams

   Figure 6. Diagram of system losses

The simulation results include a great number of significant data and quantify the losses at every level of the system, allowing the weaknesses of the system design to be identified. This should lead to a deep comparison between several possible technological solutions, by comparing the available performances in realistic conditions over a whole year. The default loss management has been improved, especially the "module quality loss", which is determined from the PV module tolerance, and the mismatch on Pmpp, which depends on the module technology. Losses between the inverters and the grid injection have been implemented. These may be either ohmic wiring losses and/or transformer losses when the transformer is external.

The detailed loss diagram (Figure 6) gives a deep insight into the quality of the PV system design by quantifying all loss effects on one single graph. Losses in each subsystem may be either grouped or expanded into detailed contributions.
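The annual totals in Table 1 follow directly from the per-kWh emission factors; a quick consistency check (a sketch, using 610,500 kWh for the year — note that 970 g/kWh implies about 592 t of CO2):

```python
# Avoided combustion by-products, from the per-kWh factors in Table 1.
annual_kwh = 610_500
per_kwh_grams = {"SO2": 1.24, "NOx": 2.59, "CO2": 970.0, "ash": 68.0}

totals_kg = {gas: grams * annual_kwh / 1000.0 for gas, grams in per_kwh_grams.items()}
# SO2 ~757 kg, NOx ~1581 kg, CO2 ~592.2 t, ash ~41.5 t
print({gas: round(kg) for gas, kg in totals_kg.items()})
```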
    • International Journal of Advances in Engineering & Technology, Nov 2011.©IJAET ISSN: 2231-1963Results - and particularly the detailed loss diagram - show the overall performance and theweaknesses of a particular design. Figure 7. Reference incident Energy in collector plane Figure 8. Normalized productions (per installed kWp) Figure 9. Normalized production and Loss factors 64 Vol. 1, Issue 5, pp. 58-68
    • International Journal of Advances in Engineering & Technology, Nov 2011.©IJAET ISSN: 2231-1963 Figure 10. Performance ratio (PR) Figure 11. Daily input/output diagram Figure 12. Daily system output energy 65 Vol. 1, Issue 5, pp. 58-68
Figure 13. Incident irradiation distribution

Figure 14. Array power distribution

V. CONCLUSIONS

The design, optimization and simulation of PV systems for use in Southeast Europe have been analyzed and discussed, and the following conclusions are drawn: the average annual PV system energy output is 1012 kWh/kWp, and the average annual performance ratio of the PV system is 78.7%. The performance ratio (Figure 10) shows the quality of a PV system, and the value of 78.7% is indicative of good quality (Equation 1). Usually the value of the performance ratio ranges from 60% to 80% [7]. This shows that about 21.3% of the solar energy falling on the array in the analysed period is not converted into usable energy, due to factors such as conduction losses, contact losses, thermal losses, the module and inverter efficiency factors, defects in components, etc.

It is important to have matching between the voltage of the inverter and that of the PV array during all operating conditions. Some inverters have a higher efficiency at a certain voltage, so the PV array must adapt to this voltage of maximum efficiency. The use of several inverters costs more than the use of a single inverter with higher power.

Figure 9 presents the histogram of the expected power production of the array, compared to the inverters' nominal power, with an estimation of the overload losses (and a visualization of their effect on the histogram). This tool allows the ratio between the array and inverter Pnom to be determined precisely, and evaluates the associated losses.

Utility-interactive PV power systems mounted on residences and commercial buildings are likely to become a small but important source of electric generation in the next century.
As most of the electricpower supply in developed countries is via centralised electric grid, it is certain that widespread use ofphotovoltaic will be as distributed power generation inter-connected with these grids.This is a new concept in utility power production, a change from large-scale central examination ofmany existing standards and practices to enable the technology to develop and emerge into themarketplace. [8]. As prices drop, on-grid applications will become increasingly feasible. For the 66 Vol. 1, Issue 5, pp. 58-68
currently developed world, the future is grid-connected renewables. In the next 20 years we can expect only a slight improvement in the efficiency of first-generation (G-1) silicon technology. We may witness the dominant G-1 technology losing market share to second-generation (G-2) technology, based mainly on thin-film technology (with a 30% cost reduction) [9]. While these two branches will largely dominate the commercial sector of PV systems, within the next 20 years there will be increased use of third-generation (G-3) technology and other new technologies, which will improve the performance or reduce the cost of solar cells [10]. In this project, the overall simulation of a PV system connected to the grid was brought to the best possible conditions by using the software package PVsyst [16]. Overall, the project helps us understand the principle of operation, the factors with positive and negative influence, the losses incurred before conversion, during conversion, and in the cells after conversion. All this helps us to optimize PV systems under the conditions of Southeast Europe.

REFERENCES

[1] Ricardo Borges, Kurt Mueller, and Nelson Braga, (2010) "The Role of Simulation in Photovoltaics: From Solar Cells to Arrays", Synopsys, Inc.
[2] Kymakis, E., Kalykakis, S., Papazoglou, T. M., (2009) "Performance analysis of a grid connected photovoltaic park on the island of Crete", Energy Conversion and Management, Vol. 50, pp. 433-438.
[3] Faramarz Sarhaddi, Said Farahat, Hossein Ajam, and Amin Behzadmehr, (2009) "Energetic Optimization of a Solar Photovoltaic Array", Journal of Thermodynamics, Article ID 313561, 11 pages, doi:10.1155/2009/313561.
[4] Colin Bankier and Steve Gale, (2006) "Energy Payback of Roof Mounted Photovoltaic Cells", Energy Bulletin.
[5] Hammons, T. J., Sabnich, V.
(2005), “Europe Status of Integrating Renewable Electricity Production into the Grids”, Panel session paper 291-0, St. Petersburg.[6] E. Alsema (1999). “Energy Requirements and CO2 Mitigation Potential of PV Systems.” Photovoltaics and the environment. Keystone, CO, Workshop Proceedings.[7] Goetzberger, (2005), Photovoltaic Solar Energy Generation, Springer.[8] Chuck Whitaker, Jeff Newmiller, Michael Ropp, Benn Norris, (2008) “Distributed Photovoltaic Systems Design and Technology Requirements”. Sandia National Laboratories.[9] Mermoud, A. (2006), "Technico-economical Optimization of Photovoltaic Pumping Systems Pedagogic and Simulation Tool Implementation in the PVsyst Software", Research report of the Institut of the Environnemental Sciences, University of Geneva.[10] Gong, X. and Kulkarni, M., (2005), Design optimization of a large scale rooftop pv system, Solar Energy, 78, 362-374[11] S.S.Hegedus, A.Luque, (2003),“Handbook of Photovoltaic Science and Engineering" John Wiley & Sons,[12] Darul’a, Ivan; Stefan Marko. "Large scale integration of renewable electricity production into the grids". Journal of Electrical Engineering. VOL. 58, NO. 1, 2007, 58–60[13] A.R. Jha, (2010), “Solar cell technology and applications”, Auerbach Publications[14] Martin Green, (2005), “Third Generation Photovoltaics Advanced Solar Energy Conversion”, Springer,[15] M.J. de Wild-Scholten, (2006), A cost and environmental impact comparison of grid-connected rooftop and ground-based pv systems, 21th European Photovoltaic Solar Energy Conference, Dresden, Germany,[16] www.pvsyst.comAuthorsFlorin Agai received Dipl. Ing. degree from the Faculty of Electrical Engineering in Skopje,the “St. Kiril and Metodij” University, in 1998. Currently works as Professor at Electro-technical High School in Gostivar, Macedonia. Actually he finished his thesis to obtain Mr. Sc.degree from the Faculty of Electrical and Computer Engineering, the University of Prishtina,Prishtina, Kosovo. 67 Vol. 
1, Issue 5, pp. 58-68
    • International Journal of Advances in Engineering & Technology, Nov 2011.©IJAET ISSN: 2231-1963Nebi Caka received the Dipl. Ing. degree in electronics and telecommunications from theTechnical Faculty of Banja Luka, the University of Sarajevo, Bosnia and Herzegovina, in 1971;Mr. Sc degree in professional electronics and radio-communications from the Faculty ofElectrical Engineering and Computing, the University of Zagreb, Zagreb, Croatia, in 1988; andDr. Sc. degree in electronics from the Faculty of Electrical and Computer Engineering, theUniversity of Prishtina, Prishtina, Kosovo, in 2001. In 1976 he joined the Faculty of Electricaland Computer Engineering in Prishtina, where now is a Full Professor of Microelectronics,Optoelectronics, Optical communications, VLSI systems, and Laser processing.Vjollca Komoni received Dipl. Ing. degree in electrical engineering from the Faculty ofElectrical and Computer Engineering, the University of Prishtina, Prishtina, Kosovo, in 1976;Mr. Sc degree in electrical engineering from the Faculty of Electrical Engineering andComputing, the University of Zagreb, Zagreb, Croatia, in 1982; and Dr. Sc. degree in electricalengineering from the Faculty of Electrical and Computer Engineering, the University of Tirana,Tirana, Albania, in 2008. In 1976 she joined the Faculty of Electrical and ComputerEngineering in Prishtina, where now is an Assistant Professor of Renewable sources, Powercables, Electrical Installations and Power Systems. 68 Vol. 1, Issue 5, pp. 58-68
FAULT LOCATION AND DISTANCE ESTIMATION ON POWER TRANSMISSION LINES USING DISCRETE WAVELET TRANSFORM

Sunusi Sani Adamu1, Sada Iliya2
1 Department of Electrical Engineering, Faculty of Technology, Bayero University Kano, Nigeria
2 Department of Electrical Engineering, College of Engineering, Hassan Usman Katsina Polytechnic

ABSTRACT
Fault location is very important in power system engineering in order to clear faults quickly and restore the power supply as soon as possible with minimum interruption. In this study a 300 km, 330 kV, 50 Hz power transmission line model was developed and simulated using the Power System Blockset of MATLAB to obtain fault current waveforms. The waveforms were analysed using the Discrete Wavelet Transform (DWT) toolbox, selecting a suitable wavelet family to obtain the pre-fault and post-fault coefficients for estimating the fault distance. This was achieved by adding the non-negative values of the coefficients obtained after subtracting the pre-fault coefficients from the post-fault coefficients. It was found that the best results of the distance estimation were achieved using the Daubechies 'db5' wavelet, with an error of three percent (3%).

KEYWORDS: Transmission line, Fault location, Wavelet transforms, Signal processing

I. INTRODUCTION

Fault location and distance estimation is a very important issue in power system engineering in order to clear faults quickly and restore the power supply as soon as possible with minimum interruption. This is necessary for the reliable operation of power equipment and the satisfaction of customers. In the past, several techniques were applied for estimating the fault location, such as line-impedance-based numerical methods, travelling wave methods and Fourier analysis [1]. Nowadays, high-frequency components are used instead of the traditional methods [2].
Fourier transform wereused to abstract fundamental frequency components but it has been shown that Fourier Transformbased analysis sometimes do not perform time localisation of time varying signals with acceptableaccuracy. Recently wavelet transform has been used extensively for estimating fault locationaccurately. The most important characteristic of wavelet transform is to analyze the waveform on timescale rather than in frequency domain. Hence a Discrete Wavelet Transform (DWT) is used in thispaper because it is very effective in detecting fault- generated signals as time varies [8].This paper proposes a wavelet transform based fault locator algorithm. For this purpose,330KV,300km,50Hz transmission line is simulated using power system BLOCKSET of MATLAB[5].The current waveform which are obtained from receiving end of power system has been analysed.These signals are then used in DWT. Four types of mother wavelet, Daubechies (db5), Biorthogonal(bio5.5), Coiflet (coif5) and Symlet (sym5) are considered for signal processing.II. WAVELET TRANSFORMWavelet transform (WT) is a mathematical technique used for many application of signal processing[5].Wavelet is much more powerful than conventional method in processing the stochastic signal 69 Vol. 1, Issue 5, pp. 69-76
because it analyses the waveform in the time-scale region. In the wavelet transform, the band of analysis can be adjusted so that the low-frequency and high-frequency components can be windowed by different scale factors. Recently the WT has been widely used in signal processing applications such as denoising, filtering, and image compression [3]. Many pattern recognition algorithms have been developed based on the wavelet transform. According to the scale factors used, wavelets can be categorized into different sections. In this work, the discrete wavelet transform (DWT) was used. For any function f, the DWT is written as

   DWTf(m, k) = (1/√(a0^m)) Σn f(n) ψ((k − n·b0·a0^m)/a0^m)   (1)

where ψ is the mother wavelet [3], a0^m is the scale parameter, and n, b0 are the translation parameters.

III. TRANSMISSION LINE EQUATIONS

A transmission line is a system of conductors connecting one point to another and along which electromagnetic energy can be sent. Power transmission lines are a typical example of transmission lines. The equations that govern general two-conductor uniform transmission lines, including two- and three-wire lines and coaxial cables, are called the telegraph equations. They are named after Oliver Heaviside (1850-1925), who formulated them for the first time while employed by a telegraph company and used them to investigate disturbances on telephone wires [1].

Consider a line segment of length Δx with the parameters resistance R, conductance G, inductance L, and capacitance C, all per unit length (see Figure 1). The electric flux ψ and the magnetic flux φ created by the electromagnetic wave, which cause the instantaneous voltage v(x, t) and current i(x, t), are

   ψ = C·Δx·v(x, t)   (2)
   φ = L·Δx·i(x, t)   (3)

Calculating the voltage drop in the positive x-direction over the distance Δx, one obtains

   v(x + Δx, t) − v(x, t) = −Δx·[R·i(x, t) + L·∂i(x, t)/∂t]   (4)

If Δx is cancelled from both sides of equation (4), the voltage equation becomes

   ∂v(x, t)/∂x = −R·i(x, t) − L·∂i(x, t)/∂t   (5)

Similarly, for the current flowing through G and the current charging C, Kirchhoff's current law can be applied as

   i(x + Δx, t) − i(x, t) = −Δx·[G·v(x, t) + C·∂v(x, t)/∂t]   (6)

If Δx is cancelled from both sides of (6), the current equation becomes

   ∂i(x, t)/∂x = −G·v(x, t) − C·∂v(x, t)/∂t   (7)

The negative sign in these equations reflects the fact that when the current and voltage waves propagate in the positive x-direction, v(x, t) and i(x, t) decrease in amplitude with increasing x. In phasor form, the expressions for the line impedance Z and admittance Y, per unit length, are

   Z = R + jωL   (8)
   Y = G + jωC   (9)
Differentiating once more with respect to x gives the second-order ordinary differential equations for the phasors

   d²V(x)/dx² = Z·Y·V(x) = γ²·V(x)   (10)
   d²I(x)/dx² = Z·Y·I(x) = γ²·I(x)   (11)

Figure 1. Single-phase transmission line model

In these equations, γ is a complex quantity known as the propagation constant, given by

   γ = √(Z·Y) = α + jβ   (12)

where α is the attenuation constant, which influences the amplitude of the wave, and β is the phase constant, which influences the phase shift of the wave.

Equations (10) and (11) can be solved by transform or classical methods in the form of two arbitrary functions that satisfy the differential equations. Measuring x from the receiving end of the line, the solutions take the form [1]

   V(x) = A1·e^(γx) + A2·e^(−γx)   (13)
   I(x) = (1/Zc)·[A1·e^(γx) − A2·e^(−γx)]   (14)

where Zc is the characteristic impedance of the line, given by

   Zc = √(Z/Y)   (15)

and A1, A2 are arbitrary constants, independent of x. To find the constants A1 and A2, note that at the receiving end, where x = 0, V = VR and I = IR; from equations (13) and (14) these constants are found to be

   A1 = (VR + Zc·IR)/2   (16)
   A2 = (VR − Zc·IR)/2   (17)

Upon substitution in equations (13) and (14), the general expressions for the voltage and current along a long transmission line become

   V(x) = [(VR + Zc·IR)/2]·e^(γx) + [(VR − Zc·IR)/2]·e^(−γx)   (18)
   I(x) = [(VR/Zc + IR)/2]·e^(γx) − [(VR/Zc − IR)/2]·e^(−γx)   (19)
The equations for the voltage and current can be rearranged as follows:

   V(x) = VR·(e^(γx) + e^(−γx))/2 + Zc·IR·(e^(γx) − e^(−γx))/2   (20)
   I(x) = IR·(e^(γx) + e^(−γx))/2 + (VR/Zc)·(e^(γx) − e^(−γx))/2   (21)

Recognizing the hyperbolic functions sinh and cosh, equations (20) and (21) are written as follows:

   V(x) = VR·cosh(γx) + Zc·IR·sinh(γx)   (22)
   I(x) = IR·cosh(γx) + (VR/Zc)·sinh(γx)   (23)

The interest is in the relation between the sending end and the receiving end of the line. Setting x = l, V(l) = VS and I(l) = IS, the result is

   VS = VR·cosh(γl) + Zc·IR·sinh(γl)   (24)
   IS = IR·cosh(γl) + (VR/Zc)·sinh(γl)   (25)

Rewriting the above equations (24) and (25) in terms of the ABCD constants, we have

   [VS; IS] = [A B; C D]·[VR; IR]   (26)

where A = cosh(γl), B = Zc·sinh(γl), C = sinh(γl)/Zc, D = cosh(γl).

IV. TRANSMISSION LINE MODEL

In this paper, fault location was performed on the power system model shown in Figure 2. The line is a 300 km, 330 kV, 50 Hz overhead power transmission line. The simulation was performed using MATLAB SIMULINK. The Simulink model comprises an AC voltage source, a 400 MVA transformer, distributed-parameter line sections, a three-phase fault breaker, voltage and current measurement blocks (VT, CT) with scopes, and an R-L-C load.

Figure 2: Simulink transmission line model
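The ABCD relation in equation (26) can be verified numerically; the sketch below uses illustrative per-kilometre line constants (assumed values, not the paper's data) and checks the two-port reciprocity identity AD − BC = 1, which follows from cosh² − sinh² = 1:

```python
import cmath
import math

# Illustrative per-km line constants (assumed, not taken from the paper).
R, L, G, C = 0.012, 0.933e-3, 0.0, 12.74e-9   # ohm, H, S, F per km
f, length = 50.0, 300.0                        # Hz, km
w = 2 * math.pi * f

z = complex(R, w * L)        # series impedance per km, Z = R + jwL  (eq. 8)
y = complex(G, w * C)        # shunt admittance per km, Y = G + jwC  (eq. 9)
gamma = cmath.sqrt(z * y)    # propagation constant                  (eq. 12)
Zc = cmath.sqrt(z / y)       # characteristic impedance              (eq. 15)

A = cmath.cosh(gamma * length)
B = Zc * cmath.sinh(gamma * length)
Cp = cmath.sinh(gamma * length) / Zc
D = A

print(abs(A * D - B * Cp))   # should be 1 up to rounding error
```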
The fault is created after every 50 km of distance, with a simulation time of 0.25 s, sample time = 0, resistance per unit length = 0.012 ohms, inductance per unit length = 0.9 H and capacitance per unit length = 127 farad.

4.1 SIMULATION RESULTS

Figure 3 shows the normal load current flowing prior to the application of the fault, while the fault current, which is cleared in approximately one second, is shown in Figure 4.

   Fig. 3: Pre-fault current waveform at 300 km
   Fig. 4: Fault current waveform at 50 km

4.2 DISCRETE WAVELET COEFFICIENTS

Figures 5 and 6 show the pre-fault and post-fault wavelet coefficients (approximate, horizontal detail, diagonal detail and vertical detail) at 300 km, obtained using the db5 wavelet family.
Fig. 5: Pre-fault wavelet coefficients
Fig. 6: Post-fault wavelet coefficients at 50 km

4.2.1 TABLES OF THE COEFFICIENTS

The tables below present the minimum/maximum scales of the coefficients using db5.

Table 1: Pre-fault wavelet coefficients using db5

   Coefficient        Max. scale   Min. scale
   Approximate (A1)   693.54       0.00
   Horizontal (H1)    205.00       214.44
   Vertical (V1)      235.56       218.67
   Diagonal (D1)      157.56       165.78
Table 2: Post-fault wavelet coefficients at 50 km using db5

   Coefficient        Max. scale   Min. scale
   Approximate (A1)   693.54       34.89
   Horizontal (H1)    218.67       201.33
   Vertical (V1)      201.33       218.67
   Diagonal (D1)      157.56       148.89

Table 3: Differences between the maximum and minimum scales of the coefficients using db5

                            db5 max                           db5 min
   Coefficients             A1      H1      V1      D1        A1      H1      V1      D1
   Coefficients at 50 km    693.54  218.67  201.33  157.56    34.89   201.33  218.67  148.89
   Pre-fault coefficients   693.54  205.00  235.56  157.56    0.00    214.44  218.67  165.78
   Differences              0.00    13.67   -34.23  0.00      34.89   -13.11  0.00    -16.89

Estimated distance (km) = 13.67 + 34.89 ≈ 48.5

Table 4: Actual and estimated fault location (km)

   Actual location (km)   db5     bior5.5   coif5   sym5
   50                     48.5    39.33     47.32   26.23
   100                    97.44   173.78    04.37   43.56

4.3 DISCUSSION OF THE RESULTS

The results are presented in Figures 5 and 6 and Tables 1 to 4. Figure 3 is the simulation result of the pre-fault current waveform, which indicates that the normal current amplitude reaches 420 A. When a fault was created at 50 km from the sending-end point, Figure 4 shows that the fault current amplitude reaches up to 14 kA.

The waveforms obtained from Figures 3 and 4 were imported into the wavelet toolbox of MATLAB for proper analysis to generate the coefficients. Figures 5 and 6 present the discrete wavelet transform coefficients in the scale-time region. The scales of the coefficients are based on the minimum scale and the maximum scale. These scales for both the pre-fault and post-fault coefficients were recorded from the workspace environment of MATLAB and are presented in Tables 1 and 2.

The estimated distance was obtained by adding the non-negative values of the scales after subtracting the pre-fault coefficients from the post-fault coefficients; this is presented in Table 4.

V. CONCLUSIONS

The application of the wavelet transform to estimate the fault location on a transmission line has been investigated.
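The subtraction-and-sum rule described in Section 4.3 can be written out directly from the tabulated scales (a sketch; the max/min values are copied from Tables 1 and 2, and the resulting error matches the paper's ~3% figure):

```python
# Max/min coefficient scales (db5): pre-fault (Table 1) and at 50 km (Table 2).
pre  = {"A1": (693.54, 0.00),  "H1": (205.00, 214.44),
        "V1": (235.56, 218.67), "D1": (157.56, 165.78)}
post = {"A1": (693.54, 34.89), "H1": (218.67, 201.33),
        "V1": (201.33, 218.67), "D1": (157.56, 148.89)}

# Subtract pre-fault scales from post-fault scales, keep only positive differences.
diffs = [p - q for key in pre for p, q in zip(post[key], pre[key])]
estimate_km = sum(d for d in diffs if d > 0)   # 13.67 + 34.89 = 48.56, i.e. ~48.5 km

actual_km = 50.0
error = abs(actual_km - estimate_km) / actual_km   # ~2.9 %, close to the reported 3 %
print(round(estimate_km, 2), round(error * 100, 1))
```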
The most suitable wavelet family for estimating the fault location on a transmission line was identified: four different wavelets were examined as candidate mother wavelets for the study, and the best result was achieved using the Daubechies 'db5' wavelet, with an error of 3%. Simulation of a single line-to-ground (S-L-G) fault on a 330 kV, 300 km transmission line was performed in MATLAB/Simulink. The waveforms obtained from Simulink were exported as MATLAB files for feature extraction, and the DWT was used to analyse the signals and obtain the coefficients for estimating the fault location. Finally, it was shown that the proposed method is accurate enough to be used for locating transmission line faults.

75 Vol. 1, Issue 5, pp. 69-76
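The distance-estimation rule described in Section 4.3 — subtract the pre-fault coefficient scales from the post-fault scales and sum the positive differences — reduces to a few lines of arithmetic. The sketch below is an illustrative reconstruction, not code from the paper; the numbers are copied from Tables 1-3:

```python
# Max/min scales of the level-1 db5 coefficients in the order
# A1, H1, V1, D1 (max scales), then A1, H1, V1, D1 (min scales),
# taken from Tables 1 and 2 for a fault at 50 km.
pre_fault  = [693.54, 205.00, 235.56, 157.56,  0.00, 214.44, 218.67, 165.78]
post_fault = [693.54, 218.67, 201.33, 157.56, 34.89, 201.33, 218.67, 148.89]

# Subtract the pre-fault scales from the post-fault scales and keep
# only the positive differences; their sum is the estimated distance.
diffs = [post - pre for post, pre in zip(post_fault, pre_fault)]
estimated_km = sum(d for d in diffs if d > 0)

print(round(estimated_km, 2))  # 48.56
```

The result matches the db5 entry for the 50 km fault in Table 4 (reported there as 48.5 km).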
REFERENCES
[1] Abdelsalam, M. (2008) "Transmission Line Fault Location Based on Travelling Waves", Dissertation submitted to Helsinki University, Finland, pp. 108-114.
[2] Aguilera, A. (2006) "Fault Detection, Classification and Faulted Phase Selection Approach", IEE Proceedings on Generation, Transmission and Distribution, vol. 153, no. 4, USA, pp. 65-70.
[3] Benemar, S. (2003) "Fault Locator for Distribution Systems Using Decision Rule and DWT", Engineering Systems Conference, Toronto, pp. 63-68.
[4] Bickford, J. (1986) "Transient over Voltage", 3rd Edition, Finland, pp. 245-250.
[5] Chiradeja, M. (1997) "New Technique for Fault Classification Using DWT", Engineering Systems Conference, UK, pp. 63-68.
[6] Elhaffa, A. (2004) "Travelling Waves Based Earth Fault Location on Transmission Networks", Engineering Systems Conference, Turkey, pp. 53-56.
[7] Ekici, S. (2006) "Wavelet Transform Algorithm for Determining Fault on Transmission Line", IEE Proceedings on Transmission Line Protection, vol. 4, no. 5, Las Vegas, USA, pp. 2-5.
[8] Florkowski, M. (1999) "Wavelet Based Partial Discharge Image De-noising", 11th International Symposium on High Voltage Engineering, UK, pp. 22-24.
[9] Gupta, J. (2002) "Power System Analysis", 2nd Edition, New Delhi, pp. 302-315.
[10] Okan, G. (1995) "Wavelet Transform for Distinguishing Fault Current", John Wiley Inc., New York, pp. 39-42.
[11] Osman, A. (1998) "Transmission Line Distance Protection Based on Wavelet Transform", IEEE Transactions on Power Delivery, vol. 19, no. 2, Canada, pp. 515-523.
[12] Saadat, H. (1999) "Power System Analysis", Tata McGraw-Hill, New Delhi, pp. 198-206.
[13] Wavelet Toolbox for MATLAB, MathWorks (2005).
[14] Youssef, O. (2003) "A Wavelet Based Technique for Discriminating Fault", IEEE Transactions on Power Delivery, vol. 18, no.
1, USA, pp. 170-176.
[15] Yeldrim, C. (2006) "Fault Type and Fault Location on Three Phase Systems", IEEE Proceedings on Transmission Line Protection, vol. 4, no. 5, Las Vegas, USA, pp. 215-218.
[16] Robertson, D.C., Camps, O.I., Meyer, J.S. and Gish, W.B. (1996) "Wavelets and Electromagnetic Power System Transients", IEEE Transactions on Power Delivery, vol. 11, no. 2, pp. 1050-1058, April 1996.

Authors' Biography

Sunusi Sani Adamu received the B.Eng degree from Bayero University Kano, Nigeria, in 1985; the M.Sc degree in electrical power and machines from Ahmadu Bello University, Zaria, Nigeria, in 1996; and the PhD in Electrical Engineering from Bayero University, Kano, Nigeria, in 2008. He is currently a senior lecturer in the Department of Electrical Engineering, Bayero University, Kano. His main research areas include power system simulation and control, and the development of microcontroller-based industrial retrofits. Dr Sunusi is a member of the Nigerian Society of Engineers and a registered professional engineer in Nigeria.

Sada Iliya received the B.Eng degree in Electrical Engineering from Bayero University Kano, Nigeria, in 2001, and is about to complete the M.Eng degree in Electrical Engineering at the same university. He is presently a lecturer in the Department of Electrical Engineering, Hassan Usman Polytechnic, Katsina, Nigeria. His research interest is in power system operation and control.
AN INVESTIGATION OF THE PRODUCTION LINE FOR ENHANCED PRODUCTION USING HEURISTIC METHOD

M. A. Hannan, H. A. Munsur, M. Muhsin
Department of Mechanical Engineering, Dhaka University of Engineering & Technology, Gazipur, Bangladesh

ABSTRACT
Line balancing is the phase of assembly line study that nearly equally divides the work to be done among the workers so that the total number of employees required on the assembly line can be minimized. Since small improvements in the performance of the system can lead to significant monetary consequences, it is of utmost importance to develop practical solution procedures that yield a significant enhancement in production throughput. Bangladesh Machine Tool Factory (BMTF), which had been incurring losses for a long time at its current production rate, was undertaken as a research project. In the course of the analysis, a line balancing (LB) technique was employed for a detailed analysis of the line. This paper describes how an efficient heuristic approach was applied to solve the deterministic, single-model ALB problem. The aim of the work was to minimize the number of workstations with minimum cycle time so as to maximize the efficiency of the production line. The performance level was found to be so low that there was no way to improve productivity without reducing the idle time of the line by curtailing the avoidable delays as far as possible. All the required data were measured, and parameters such as elapsed times, efficiencies, number of workers and the time of each workstation were calculated from the existing line. The same production line was then redesigned by rehabilitating and reshuffling the workstations as well as the workers, using the newly estimated time study data and keeping the idle time at each station to a minimum.
A new heuristic approach, the Longest Operation Time (LOT) method, was used in designing the new production line. After the new production line was set up, its cost of production and effectiveness were computed and compared with those of the existing one. The costs that could be saved and the productivity gains of the newly designed line were estimated, and production was found to increase by a significant amount while reducing the overall production cost per unit.

KEYWORDS: Assembly Line Balancing (ALB), Workstation, Line Efficiency, Task Time, Cycle Time, Line Bottleneck.

I. INTRODUCTION
A production line is an arrangement of workers, machines and equipment in which the product being assembled passes consecutively from operation to operation until completed [1]. An assembly line [1] is a manufacturing process (sometimes called progressive assembly) in which parts (usually interchangeable parts) are added to a product in a sequential manner, using optimally planned logistics, to create a finished product much faster than with handcrafting-type methods. The division of labor was initially discussed by Adam Smith, regarding the manufacture of pins, in his book "The Wealth of Nations" (published in 1776). The assembly line developed by Ford Motor Company between 1908 and 1915 made assembly lines famous in the following decade through the social ramifications of mass production, such as the affordability of the Ford Model T and the introduction of high wages for Ford workers. Henry Ford was the first to master the assembly line and was able to improve other aspects of industry by doing so, such as reducing the labor hours required to produce a single vehicle while increasing production numbers and parts.
However, the various preconditions for the development at Ford stretched far back into the 19th century, from the gradual realization of the dream of interchangeability to the concept of reinventing workflow and job descriptions using analytical methods (the most famous example being "Scientific Management"). Ford was the first company to build large factories around the assembly line concept. Mass production via assembly lines is widely considered to be the catalyst which initiated the modern consumer culture by making possible a low unit cost for manufactured goods. It is often said that Ford's production system was ingenious because it turned Ford's own workers into new customers. Put another way,

77 Vol. 1, Issue 5, pp. 77-88
Ford innovated its way to a lower price point and, by doing so, turned a huge potential market into a reality. Not only did this mean that Ford enjoyed much larger demand, but the resulting larger demand also allowed further economies of scale to be exploited, further depressing the unit price, which tapped yet another portion of the demand curve. This bootstrapping quality of growth made Ford famous and set an example for other industries.

For a given set of manufacturing tasks and a specified cycle time, the classical line balancing problem consists of assigning each task to a workstation such that: (i) each workstation can complete its assigned set of tasks within the desired cycle time, (ii) the precedence constraints among the tasks are satisfied, and (iii) the number of workstations is minimized (Krajewski and Ritzman, 2002 [2]; Meredith and Schafer, 2003 [3]; Scholl, 1999 [6]).

The precedence relations among activities in a line balancing problem present a significant challenge for researchers in formulating and implementing an optimization model for the LB problem. While integer programming formulations are possible, they quickly become unwieldy and increasingly difficult to solve as problem size increases. As a result, many researchers recommend heuristic approaches to solving the line balancing problem (Meredith and Schafer, 2003 [3]; Sabuncuoglu, Erel et al., 2000 [5]; Suresh, Vivod and Sahu, 1996 [7]).

An assembly line (as shown in Figure 1) is a flow-oriented production system where the productive units performing the operations, referred to as stations, are aligned in a serial manner. The work pieces visit the stations successively as they are moved along the line, usually by some kind of transportation system, e.g. a conveyor belt. The current market is intensively competitive and consumer-centric.
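The three conditions of the classical problem lend themselves to a direct feasibility check. The sketch below uses hypothetical task data — the task names, times, precedence pairs and the candidate balance are invented for illustration, not taken from the paper:

```python
# Hypothetical task times (minutes) and precedence pairs (a, b): a before b.
task_time  = {"A": 4, "B": 3, "C": 5, "D": 2, "E": 4}
precedence = [("A", "C"), ("B", "C"), ("C", "D"), ("C", "E")]

# A candidate balance: tasks grouped into ordered stations.
stations = [["A", "B"], ["C"], ["D", "E"]]
cycle_time = 7

def is_feasible(stations, task_time, precedence, cycle_time):
    # (i) every station must finish its tasks within the cycle time
    if any(sum(task_time[t] for t in s) > cycle_time for s in stations):
        return False
    # (ii) a task's predecessor may never sit in a later station
    station_of = {t: i for i, s in enumerate(stations) for t in s}
    return all(station_of[a] <= station_of[b] for a, b in precedence)

print(is_feasible(stations, task_time, precedence, cycle_time))  # True
```

Condition (iii), minimizing the number of stations, is what the heuristics discussed below approximate.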
For example, in the automobile industry most models have a number of features, and the customer can choose a model based on their desires and financial capability. Different features mean that different, additional parts must be added to the basic model. Due to the high cost of building and maintaining an assembly line, manufacturers produce one model with different features, or several models, on a single assembly line. Due to the complex nature of the ALB problem, many heuristics have been used to solve real-life problems relating to the assembly line, with a view to increasing the efficiency and productivity of the production line at minimum cost.

Nowadays, in mass production, a huge number of units of the same product are produced. This is only possible with a high degree of division of labor. Since Adam Smith (1776) [8] it has been shown that division of labor trains the required skills of the workers and increases productivity to a maximum. The maximum degree of division of labor is obtained by organizing production as an assembly line system. Even in the early days of the industrial revolution, mass production was already organized in assembly line systems. According to Salveson [9], the first assembly line was introduced by Eli Whitney during the French Revolution [10] for the manufacture of muskets. The most popular example is the introduction of the assembly line on 1 April 1913, in the "John R. Street" of Henry Ford's Highland Park production plant [10], which is still up to date because the principle of increasing productivity by division of labor is timeless. The best-known example is final assembly in the automotive industry. But nearly all goods of daily life are made by mass production, which at its later stages is organized in assembly line production systems.
For example, the final assembly of consumer durables, like coffee machines, toasters, washing machines, refrigerators, or products of the electrical industry like radios and TVs or even personal computers, is organized in assembly line systems.

The characteristic problem in assembly line systems is how to split up the total work to be done by the whole system among the single stations of the line. This problem is called "assembly line balancing" because we have to find a "balance" of the work loads of the stations. First of all, we have to determine the set of single tasks which have to be performed in the whole production system and the technological precedence relations among them. The work load of each station (also: set of tasks, station load, operation) is restricted by the cycle time, which depends on the fixed speed of the conveyor and the length of the stations. The cycle time is defined as the time between the entering of two consecutive product units into a station [11].

In the literature the objective is usually to minimize the number of stations in a line for a given cycle time. This is called time-oriented assembly line balancing [12]. As industry has been facing sharp competition in recent years, production cost has become more relevant. Even in such successful production systems as the assembly line system, we have to look for possibilities to cut production cost. As final assembly is usually a labor-intensive kind of production, we may
analyze the existing wage compensation system. Almost all collective agreements between unions and employers in the most developed industrial nations work with a wage differential, e.g. in German industry, which has been analyzed in detail. The higher the difficulty of performing a task, the higher the point value of the task and the wage rate. As the tasks in final assembly are similar but not of uniform difficulty, different wage rates exist in assembly line production systems. Under this economic perspective, the objective in organizing work in assembly line production systems is not to minimize the number of stations, but to minimize the total production cost per unit. Therefore we have to allocate the tasks to stations in a way that considers both the cost rates and the number of stations. This is done in cost-oriented assembly line balancing [13]. A formal description of this objective and the restrictions of this problem are given in [14, 15]. As this paper is directly related to a previous work [16], the formal descriptions needed are reduced to a minimum. Compared to existing balances obtained by time-oriented methods that neglect wage rate differences, it is possible to realize savings in production cost of up to a two-digit percentage by a cost-oriented reallocation of tasks using cost-oriented methods.

Figure 1: A typical assembly line with a few work stations

II. APPROACHES TO DETERMINATION OF PERFORMANCE OF THE ASSEMBLY LINE BALANCING PROBLEM (ALBP)

According to M. Amen (2000) [17], there are two types of optimization problems for the line balancing problem (LBP), and assembly line balancing problems are classified into two corresponding categories. In Type-I problems the cycle time, number of tasks, task times and task precedence are given. The objective is to find the minimum number of workstations.
A line with fewer stations results in lower labor cost and reduced space requirements. Type-I problems occur when we have to develop a new assembly line. Type-II problems occur when the number of workstations or workers is fixed. Here the objective is to minimize the cycle time. This will maximize the production rate, because the cycle time is expressed in time units per part (time/part), and if we can find the minimum cycle time then we can get more production per shift. This kind of problem occurs when a factory already has a production line and the management wants to find the optimum production rate with a fixed number of workstations (workers). According to Nearchou (2007), the goal of line balancing is to develop an acceptable, though not necessarily optimum but near-optimum, solution to the assembly line balancing problem for higher production. With either type, it is always assumed that the station time, which is the sum of the times of all operations assigned to that station, must not exceed the cycle time. However, it is unnecessary or even impossible (e.g. when operation times are uncertain) to set a cycle time large enough to accommodate all the operations assigned to every station for each model. Whenever the operator cannot complete the pre-assigned operations on a work piece, work overload occurs. Since idle time at any station is an un-utilized resource, the objective of line balancing is to minimize this idle time.

Line balancing [12] is the phase of assembly line study that nearly equally divides the work to be done among the workers so that the total number of employees required on the assembly line can be minimized. The Type-II approach has been followed here, where line balancing involves selecting the appropriate combination of work tasks to be performed at each workstation so that the work is performed in a feasible sequence and approximately equal amounts of time are allocated at each of the workstations.
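For the Type-II setting, the arithmetic linking a fixed number of stations to the cycle time and the output rate is straightforward. A small sketch with illustrative figures (invented for the example, not the BMTF data):

```python
# Illustrative figures: total work content and a fixed number of stations.
total_work_content = 120.0   # minutes of work per unit
num_stations = 5
shift_minutes = 480          # one 8-hour shift

# The target cycle time of a perfectly balanced line is the total work
# content divided by the number of stations; the achievable cycle time
# can never fall below the longest single task.
target_cycle_time = total_work_content / num_stations

# Output follows directly: units per shift = shift length / cycle time.
units_per_shift = shift_minutes / target_cycle_time

print(target_cycle_time, units_per_shift)  # 24.0 20.0
```

Lowering the cycle time from 24 to, say, 20 minutes would raise the output of the same five stations from 20 to 24 units per shift, which is why Type-II balancing targets the cycle time.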
The aim of the present study is to minimize the required labor input and facility investment for a given output. The objective of the present work was to perform either: (i) minimizing the number of workstations (workers) required to achieve a given cycle time (i.e., a given production capacity), or (ii) minimizing the cycle time to maximize the output rate for a given number of workstations.

Assembly lines are designed for a sequential organization of workers, tools or machines, and parts. The motion of workers is minimized to the extent possible. All parts or assemblies are handled either
by conveyors or motorized vehicles such as forklifts, or by gravity, with no manual trucking. Heavy lifting is done by machines such as overhead cranes or forklifts. Each worker typically performs one simple operation. According to Henry Ford [19] the principles of assembly are:

(a) Place the tools and the men in the sequence of the operation so that each component part travels the least possible distance while in the process of finishing.
(b) Use work slides or some other form of carrier so that when a workman completes his operation, he drops the part always in the same place, which must always be the most convenient place to his hand, and if possible have gravity carry the part to the next workman for his operation.
(c) Use sliding assembly lines by which the parts to be assembled are delivered at convenient distances.

III. PROBLEM DESCRIPTION

First, let us make some assumptions complied with by most practical mixed-model assembly lines:
1. The line is connected by a conveyor belt which moves at a constant speed. Consecutive work pieces are equi-spaced on the line by launching each after a cycle time.
2. Every work piece is available at each station for a fixed time interval. During this interval, the work load (of the respective model) has to be performed by an operator while the work piece rides downstream on the conveyor belt. If the work load is not finished within the cycle time, the operator can drift to the next consecutive station for a certain distance. If the drifting distance is reached without finishing the operations, work overload occurs. In this case, a utility worker is additionally employed to perform the remaining work so that it can be completed as soon as possible.
3. The operators of different stations do not interfere with each other while simultaneously servicing a work piece (i.e. during drifting operations).
4.
The operator returns to the upstream boundary of the station or to the next work piece, whichever is reached first, in zero time after finishing the work load on the current unit, because the conveyor speed is much lower than the walking speed of the operators.
5. Precedence graphs can be accumulated into a single combined precedence graph; similar operations of different models may have different operation times, and a zero operation time indicates that an operation is not required for a model.
6. The cycle time, number of stations, drifting distance, conveyor speed and the sequence of models to be assembled within the decision horizon must be known.

IV. SURVEY OF THEORIES

4.1 A heuristic applied for solving the cost-oriented assembly line balancing problem
As applied in the LBP [15, 23], many heuristics exist in the literature for the LB problem. A heuristic provides a satisfactory solution but does not guarantee the optimal one (the best solution).

As line balancing problems can be solved in many ways, among these the Longest Operation Time (LOT) [23] approach has been used. It is the line-balancing heuristic that gives top assignment priority to the task that has the longest operation time. The steps of LOT are:

LOT 1: Assign first the task that takes the most time to the first station.
LOT 2: After assigning a task, determine how much time the station has left to contribute.
LOT 3: If the station can contribute more time, assign it a task requiring as much time as possible.

The operations in any line follow the same precedence relation. For example, the operation of super-finishing cannot start unless the earlier operations of turning, etc., are over. While designing the line balancing problem, one has to satisfy the precedence constraint. This is also referred to as a technological constraint, which is due to the sequencing requirement in the entire job.

V. TERMINOLOGY DEFINED IN ASSEMBLY LINE

5.1 Terminology of assembly line analysis [24, 25]
a.
Work Element (i): The job is divided into its component tasks so that the work may be spread along the line. A work element is a part of the total job content in the line. Let N be the maximum
number of work elements, which is obtained by dividing the total work into minimum rational work elements. A minimum rational work element is the smallest practical divisible task into which a work can be divided. The time of a work element i, say Ti, is assumed to be constant, and all Ti are additive in nature. This means that if work elements 4 and 5 are done at any one station, the station time would be (T4 + T5), where N is the total number of work elements.

b. Work Stations (w): A location on the assembly line where a combination of a few work elements is performed.

c. Total Work Content (Twc): The algebraic sum of the times of all the work elements on the line. Thus: Twc = Σ(i=1..N) Ti.

d. Station Time (Tsi): The sum of all the work elements (i) at work station (s).

e. Cycle Time (c): The time between two successive assemblies coming out of the line. The cycle time can be greater than or equal to the maximum of all station times. If c = max {Tsi}, then there will be idle time at all stations having a station time less than the cycle time.

f. Delay or Idle Time at Station (Tds): The difference between the cycle time of the line and the station time: Tds = c - Tsi.

g. Precedence Diagram: A diagram in which the work elements are shown as per their sequence relations. A job cannot be performed unless its predecessor is completed. It is a graphical representation containing arrows from each predecessor to its successor; every node in the diagram represents a work element.

h. Balance Delay or Balancing Loss (D): A measure of line inefficiency; the aim is therefore to minimize the balance delay. Due to imperfect allocation of work among the various stations, there is idle time at stations.
Therefore, the balance delay is: D = (nc - Twc) / nc = (nc - Σ(i=1..N) Ti) / nc, where c = cycle time, Twc = total work content and n = total number of stations.

i. Line Efficiency (LE): The ratio of the total station time to the cycle time multiplied by the number of work stations (n): LE = [Σ Tsi / (nc)] × 100%, where Tsi = station time at station i, n = total number of stations and c = cycle time.

j. Target Time: The target cycle time (which must be greater than or equal to the longest task time), or alternatively the target number of workstations. If Σti and n are known, then the target cycle time ct can be found from the formula: ct = Σti / n.

k. Total Idle Time (IT): The total idle time for the line is given by: IT = nc - Σ(i=1..k) ti.

A line is perfectly balanced if IT = 0 at the minimum cycle time. Sometimes the degree to which a line approaches this perfect balance is expressed as a percentage, or a decimal, called the balance delay. In percentage terms, the balance delay is given by

D = 100(IT) / (nc)

where IT = total idle time for the line, n = the number of workstations (assuming one worker per workstation), c = the cycle time for the line, ti = the time for the ith work task, and k = the total number of work tasks to be performed on the production line.
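The LOT rule of Section 4.1 and the measures defined above can be combined into a short sketch. The task set below is hypothetical, and precedence is ignored for brevity, although a full implementation must respect it; every task is assumed to fit within the cycle time:

```python
# Hypothetical task times (minutes); precedence omitted for brevity.
tasks = {"A": 9, "B": 7, "C": 6, "D": 5, "E": 4, "F": 3, "G": 2}
cycle_time = 12

# LOT: repeatedly give the open station the longest task that still
# fits in its remaining time; open a new station when nothing fits.
stations, unassigned = [], dict(tasks)
while unassigned:
    station, remaining = [], cycle_time
    for name in sorted(unassigned, key=unassigned.get, reverse=True):
        if unassigned[name] <= remaining:
            station.append(name)
            remaining -= unassigned[name]
    for name in station:
        del unassigned[name]
    stations.append(station)

# Measures from Section V: IT = nc - Σti, D = 100*IT/(nc),
# LE = 100 * Σti / (nc).
n, total_work = len(stations), sum(tasks.values())
idle_time = n * cycle_time - total_work
balance_delay = 100 * idle_time / (n * cycle_time)
line_efficiency = 100 * total_work / (n * cycle_time)

print(stations, idle_time, line_efficiency)
```

With these numbers LOT packs the 36 minutes of work into three 12-minute stations, so IT = 0, D = 0 and LE = 100% — the perfectly balanced case described above.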
The total amount of work to be performed on a line is broken into tasks, and the tasks are assigned to work stations so that the work is performed in a feasible sequence within an acceptable cycle time. The cycle time for a line (the time between completions of successive items on the line) is determined by the maximum amount of time required at any workstation. Work cannot flow through the line any faster than it can pass through the slowest stage (the bottleneck of the line) [28]. If one workstation has a great deal more work than the others, it is desirable to assign some of this work to stations with less work so that no bottlenecks exist in the line.

VI. DATA PRESENTATION FOR WORK STATIONS

The following table shows the time study data at each work station of the present production line [7]:

Table 1: Elapsed time at each work station

  Station  Task                        Workers  Time-1  Time-2  Time-3 (min)
           (a) Box opening             2        10      12      11
  01       (b) Check                   2        10      11      9
           Parts distribution          2        30      29      32
           Frame cleaning              2        30      32      34
           Axle with wheel             2        50      54      48
           Leaf spring setting         2        30      32      30
           Engine mounting             2        20      18      21
  02       Axle with frame             2        40      42      45
           Harnessing                  2        30      32      28
           Disc wheel setting          2        20      22      21
           Check                       1        30      30      28
           Bracket fittings            4        60      55      50
           Flexible piping             4        30      26      27
           Copper piping               4        30      28      26
  03       Nut tightening              4        30      25      28
           Booster + air tank          1        170     180     190
           Check                       1        30      26      25
           Engine assembly             2        30      28      32
           Alternator                  2        15      14      16
           Fan                         2        15      16      17
  04       Self starter                2        14      15      16
           Transmission sub-assembly   2        30      32      35
           Member assembly             2        60      60      65
           Radiator, silencer assembly
                                       3        60      65      62
           Check                       1        30      25      26
  05       Horn and hose pipe          2        20      25      25
           Air cleaner                 2        20      22      26
           Fuel tank                   2        30      32      35
           Battery carrier             2        30      31      33
           Transfer line               2        30      28      35
  06       Propeller shaft             2        50      60      55
           Fluid supply                2        20      25      22
           Check                       1        30      35      30
           Cabin sub-assembly          3        90      100     95
           Side and signal lamps       2        30      35      40
  07       Cabin on chassis            3        30      32      29
           Starting system             2        30      32      34
           Check                       2        25      26      30
           Wood pattern making         6        60      60      65
  08       Seat making                 5        45      55      48
           Wood painting               7        47      54      51
           Load body sub-assembly      8        60      58      62
           Load body on vehicle        12       55      58      60
           Electric wiring             4        25      30      30
  09       Pudding                     5        52      55      55
           Rubbing the cabin           6        64      58      60
           Primary painting            3        40      42      44
           Re-pudding                  4        25      28      24
  10       Final painting              3        50      48      55
           Touch-up                    3        32      30      34
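Each task in Table 1 was timed three times. One plausible reduction — averaging the readings per task and summing the averages into a station workload — is sketched below for the first four tasks; the averaging rule is an assumption for illustration, as the paper does not state how the three readings were combined:

```python
# Three stopwatch readings (minutes) per task, copied from Table 1.
readings = {
    "Box opening":        [10, 12, 11],
    "Check":              [10, 11, 9],
    "Parts distribution": [30, 29, 32],
    "Frame cleaning":     [30, 32, 34],
}

# A representative task time is the mean of its readings; the station
# workload is the sum of the representative task times.
task_time = {t: sum(r) / len(r) for t, r in readings.items()}
station_workload = sum(task_time.values())

print({t: round(v, 1) for t, v in task_time.items()},
      round(station_workload, 1))
```

Workloads computed this way feed directly into the station-time and idle-time measures of Section V.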
VII. COMPARISON BETWEEN EXISTING AND MODIFIED MODELS OF THE PRODUCTION LINE

Fig 2: Existing model of the AL (with ten stations). Fig 3: Proposed model of the AL (with eight stations). The station-by-station figures shown in the two models are:

  Station 1: working time (modified) = 50, working time = 60; workers (required) = 2, workers (working) = 2
  Station 2: working time (modified) = 60, working time = 45; workers (required) = 2, workers (working) = 2
  Station 3: working time (modified) = 192, working time = 210; workers (required) = 4, workers (working) = 6
  Station 4: working time (modified) = 156, working time = 230; workers (required) = 6, workers (working) = 8
  Station 5: working time (modified) = 229; workers (required) = 4
  Station 6: working time (modified) = 174; workers (required) = 4
  Station 7: working time (modified) = 199; workers (required) = 5
  Station 8: working time (modified) = 205; workers (required) = 27
  Station 9: working time (modified) = 234; workers (required) = 15
  Station 10: working time (modified) = 192; workers (required) = 19

VIII. ASSEMBLY LINE AND ANALYSIS

The present situation of the stations is shown in the table below.

Table 2: Observed time and workers at all workstations in the existing production line

  Station No.  No. of Workers  Elapsed Time (min)
  Station 1    2               50
  Station 2    2               60
  Station 3    6               210
  Station 4    8               230
  Station 5    6               160
  Station 6    5               198
  Station 7    6               185
  Station 8    33              235
  Station 9    19              202
  Station 10   22              210
  Actual number of workers, WA: 109
  Total elapsed time: 1740 min

IX. PERFORMANCE ANALYSIS OF THE ASSEMBLY LINE

Iterations for line balance efficiency at the stations. The first iteration serves as a sample calculation, using the existing production line time data from Table 2, where the total elapsed time was 210 minutes at workstation no. 3.

9.1 Sample Calculations
From the existing model we have [30]:

Cycle time, CT = (Available time per period) / (Output units required per period) = (8 hours × 60 min) / 2 = 480 min / 2 = 240 min.

Theoretical minimum number of workers, WT = ΣT / CT, where the total time ΣT = W1T1 + W2T2 + W3T3 + ... + WyTy = 22,593 minutes. The theoretical minimum number of workers and the balance efficiency computed from these figures for every iteration are furnished in Table 5.

9.2 Iterations for final balance efficiency
Similarly, the existing assembly line was rearranged several times; many iterations were carried out at all workstations with the aim of eliminating the idle time and reducing the number of work stations to eight, keeping the precedence of the work logical. The final station times are furnished in Table 3. Eliminating all idle time, the total elapsed time for the line was brought down to 1685 minutes.

Table 3: Total elapsed time at all workstations in the new production line (for Iteration #1)

  Station    Function                             Time consumed (min)
  Station 1  Materials handling & distribution    223
  Station 2  Spot welding section                 223
  Station 3  Metal section                        203
  Station 4  Painting section                     205
  Station 5  Chassis section                      205
  Station 6  Trimming section                     206
  Station 7  Final section                        208
  Station 8  Inspection section                   212
  Total working time (reduced to)                 1685

9.3 Sample analysis for reducing the idle time and the number of work stations to a minimum
Let us consider work station no. 3. This station has five workers. Applying the line balancing technique, the precedence diagram is shown in Figure 3.
Figure 3: Precedence diagram and elapsed times (minutes) of the task operations at Station #3 in the existing assembly line (A 22, B 23, C 33, D 23, E 37, F 13, G 9).

Table 4: Time elapsed in the modified line at workstation no. 3

  Task  Workers  Predecessor  Actual time needed to finish the activity (min)
  A     2        -            16
  B     1        -            19
  C     1        B, A         35
  D     1        C            26
  E     1        D            19
  F     1        E            32
  G     2        F            9
  H     3        G            6
                 Total        Σ = 162

Figure 4: Precedence diagram and elapsed times (minutes) of the task operations at Station #3 in the proposed assembly line (A 16, B 19, C 35, D 26, E 19, F 32, G 9, H 6).

Therefore, the time that can be saved at this station = (240 - 162) = 78 minutes. In this way all the idle time was computed. The saved time could be used at another station: even when all five workers work at a time, they are not fully occupied by the station's own work, so they can partly be utilized at other stations for maximum utilization of workers and machines and to minimize the cost of production.

Table 5: Balance efficiency after computation of all the iterations completed at all stations

  Iteration  Cycle time     Actual workers  Theoretical min.    Balance
  no.        (CT), min      (WA)            workers (WT)        efficiency (%)
  01         240            107             96                  86
  02         240            86              72                  84
  03         240            109             95                  86
  04         240            99              96                  97
  05         240            101             100                 99
  06         240            104             100                 96
  07         240            104             101                 98
  08         240            97              97                  100
  09         240            103             100                 97
  10         240            103             99                  96

In a similar way, the theoretical minimum number of workers and the balance efficiency were found for every iteration; these are furnished in Table 5.

X. COST ANALYSIS AND COMPARISONS [29]

Cost calculations and cost savings at the present rate of production (two vehicles per day):

Table 6: Worker reduction at different stations

  Station number  No. of workers that can be reduced
  01              00
  02              00
  03              02
  04              02
  05              02
  06              01
  07              01
  08              06
  09              04
  10              03
  Total no. of workers reduced = 21

The authority pays at least Tk. 200/- to every worker for each working day. Therefore, according to the previous design of the production line, cost can be saved through the worker reduction policy:

  Daily savings   = Tk. 200/- × 21 = Tk. 4,200/-
  Monthly savings = Tk. 4,200/- × 26 = Tk. 1,09,200/-
  (Considering one day in each week as a holiday, the number of working days in a month = 26.)

The labor cost of the existing line has been found as follows, for one vehicle:
  (a) Assembly                Tk. 6,000/-
  (b) Painting                Tk. 4,700/-
  (c) Load body fabrication   Tk. 7,500/-
  (d) Load body painting      Tk. 6,600/-
  Total labor cost            Tk. 24,800/-

  Daily labor cost (for production of two vehicles) = Tk. 24,800/- × 2 = Tk. 49,600/-
  Monthly labor cost = Tk. 49,600/- × 26 = Tk. 12,89,600/-

In the modified production line, Tk. 4,200/- can easily be saved from every pair of automobiles assembled each day. Therefore, monthly savings (for the modified model) = Tk. 4,200/- × 26 = Tk. 1,09,200/-.

Labor cost calculations if three vehicles were produced per day: to increase productivity in the 8-hour working period of a working day from two to three automobiles, the number of workers on the assembly line must grow by (0+2+3+0+2+1+2+1+1) = 12 workers over the existing model. For this enhanced number of workers the labor cost increases as follows:

  Daily increased cost   = Tk. 200/- × 12 = Tk. 2,400/-
  Monthly increased cost = Tk. 2,400/- × 26 = Tk. 62,400/-

The total number of vehicles assembled in a month will be 3 × 26 = 78. Total monthly labor cost for assembly of 78 vehicles = monthly labor cost at two vehicles per day + monthly cost increase = Tk. 12,89,600/- + Tk. 62,400/- = Tk. 13,52,000/-.

XI.
RESULTS AND DISCUSSIONS

Cost comparison if 2 or 3 automobiles are produced in each working day: if the top management wants to produce two automobiles each working day, the labor cost per vehicle = Tk. 12,89,600/- ÷ 52 = Tk. 24,800/-. But if the management wants to produce three vehicles each working day, the labor cost per vehicle = Tk. 13,52,000/- ÷ 78 = Tk. 17,333/-. It is therefore easy to see that producing three vehicles each working day is more profitable than producing two.

XII. CONCLUSIONS

The proposed line has been designed very carefully in order to keep the balance efficiency at the maximum level. Through the redesigning process of the production line all the idle time and avoidable delays have been eliminated and the production line has been made free of bottlenecks; as a result, the production rate can be increased with a considerable profit margin. The study of total labor costs showed that if the daily delivery rate could be kept constant, about Tk. 1,94,142.00 could be saved every month. The gains in productivity allowed BMTF to increase worker pay from Tk. 150.00 per day to Tk. 200.00 per day and to reduce the hours of the work week while continuously lowering the product price. These
    • International Journal of Advances in Engineering & Technology, Nov 2011.©IJAET ISSN: 2231-1963goals appear altruistic; however, it has been argued that they were implemented by BMTF in order toreduce high employee turnover.ACKNOWLEDGEMENTThe author would like to thank Mr. H.A. Munsur and Mr. M. Muhsin, two of his undergraduate students whocarried out a research work for redesigning, rehabilitation and balancing the production line of BangladeshMachine Tool Factory (BMTF) in 2010 for obtaining the B.Sc. Engineering Degree under his direct supervision.They successfully completed the research work showing that the proposed model of production line wouldincrease a significant number of products saving a considerable amount of money which has a positive impact inreducing the cost per unit.REFERENCES[1] www. Assembly line - Wikipedia, the free encyclopedia.mht.[2] Krajewski, L. and L. Ritzman (2002), Operations Management Strategy and Analysis, 6th Edition, Prectice-Hall, New Jersey.[3] Meredith, J. and S. Shafer (2003), Introducing Operations Management, Wiley, New York.[4] Ragsdale, C.T. (2003), "A New Approach to Implementing Project Networks in Spreadsheets," INFORMS Transactions on Education, Vol. 3, No. 3.[5] Sabuncuoglu, I., E. Erel, and M. Tanyer (2000), "Assembly Line Balancing Using Genetic Algorithms," Journal of Intelligent Manufacturing, Vol. 11, pp. 295-310.[6] Scholl, A. (1999), Balancing and Sequencing of Assembly Lines, Springer Verlag, Heidelberg.[7] Suresh, G., V. Vivod, and S. Sahu (1996), "A Genetic Algorithm for Assembly Line Balancing," Production Planning and Control, Vol. 7, No. 1, pp. 38-46.[8] A. Smith, An Inquiry into the Nature and Causes of the Wealth of Nations, 1st Edition, 1776, London , 2nd edn. London 1789.[9] M.E. Salveson, The assembly line balancing problem, Journal of Industrial Engineering 6 (1955) 18- 25.[10] K. 
Williams et al., The myth of the line: Fords production of the Model T at Highland Park, 1909-16, Business History 35 (1993) 66}87.[11] www.assembly line Definition2 from Answers_com.htm[12] Ajenblit D. A., "Applying Genetic Algorithms to the U-shaped Assembly Line Balancing Problem", Proceedings of the IEEE Conference on Evolutionary Computation , (1992), pp. 96-101.[13] M.D. Kildbridge and L. Wester, "A Heuristic Method of Assembly Line Balancing", Journal of Industrial Engineering, Vol. 12, No. 4, (1961), pp. 292-298.[14] Dar-El, E. M., “Solving Large Single-model Assembly Line Balancing Problem – A comparative Study”, AIIE Transactions, Vol. 7, No 3, (1975), pp. 302-306.[15] F.M. Tonge, "Summary of a Heuristic Line Balancing Procedure", Management Science, Vol. 7, No. 1, 1969, pp. 21-42.[16] H.A. Munsur and Mr. M. Muhsin, “Assembly line Balancing for Enhanced Production”, an unpublished thesis carried out under direct supervision of the author for obtaining B.Sc Engineering Degree, ME Department, DUET, Gazipur, 2010.[17] M. Amen, Heuristic methods for cost-oriented assembly line balancing: A survey, International Journal of Production Economics 68 (2000), pp 114.[18] Ajenblit, D.A., Wainwright, R.L. (1998), “ Applying genetic algorithms to the U-shaped assembly line balancing problem”, Management Science, Vol. 7, No. 4, pp. 21-42.[19] Leu Y., Matheson L.A., and Ress L.P., "Assembly Line Balancing Using Genetic Algorithms with Heuristic-Generated Initial Populations and Multiple Evaluation Criteria", Decision Sciences, Vol. 25 Num. 4 (1996), pp. 581-605.[20 ] Ignall, E. J., “Review of Assembly Line Balancing” Journal of Industrial Engineering, Vol. 15, No 4 (1965), pp. 244- 254.[21] Klein M., "On Assembly Line Balancing", Operations Research, Vol. 11, (1963), pp. 274-281.[22] A.A. Mastor, An experimental investigation and comparative evaluation of production line balancing techniques, Management Science 16 (1970) 728-746.[23] Held M., R.M. Karp, and R. 
Shareshian, "Assembly Line Balancing Dynamic Programing with Precedence Contraints", Operations research, Vol. 11, No. 3, (1963), pp. 442-460.[24] J.R. Jackson, A computing procedure for a line balancing problem, Management Science 2 (1956) 261- 271.[25] F.B. Talbot, J.H. Patterson, W.V. Gehrlein, A comparative evaluation of heuristic line balancing techniques, Management Science 32 (1986) 430-454. 87 Vol. 1, Issue 5, pp. 77-88
    • International Journal of Advances in Engineering & Technology, Nov 2011.©IJAET ISSN: 2231-1963[26] Bowman E. H. "Assembly Line Balancing by Linear Programming“ Operations Research, Vol. 8, (1960), pp. 385-389.[27] R. Wild, Mass-production Management - The Design and Operation of Production Flow-line Systems, Wiley, London, 1972.[28] F.W. Taylor, The Principles of Scientific Management, Harper & Brothers Publishers, New York/London, 1911.[29] M. Amen, An exact method for cost-oriented assembly line balancing, International Journal of Production Economics 64 (2000) 187}195. M. Amen / Int. J. Production Economics 69 (2001) 255}264 263.[30] Dar-El, E. M., "Solving Large Single-model Assembly Line Balancing Problem – A comparative Study", AIIE Transactions, Vol. 7, No 3, (1975), pp. 302-306.Author’s Biography:M. A. Hannan has been working as a Faculty member in the Department of MechanicalEngineering, Dhaka University of Engineering & Technology, Gazipur, Bangladesh. He has aspecialization in Industrial & Production Engineering, DUET. Bangladesh. His specialization is inPOM of Industrial Engineering. 88 Vol. 1, Issue 5, pp. 77-88
A NOVEL DESIGN FOR ADAPTIVE HARMONIC FILTER TO IMPROVE THE PERFORMANCE OF OVER CURRENT RELAYS

A. Abu-Siada
Department of Electrical and Computer Engineering, Curtin University, Perth, Australia

ABSTRACT
Due to the ever-increasing penetration of non-linear loads and the worldwide trend to establish smart grids, the harmonic level in electricity grids has increased significantly. In addition to their impact on power quality, harmonic currents can have a devastating effect on the operation of over current relays, which are designed to operate efficiently at the fundamental frequency. A distorted waveform will affect the operation of the over current relay and may cause it to trip under normal operating conditions. To solve this problem, passive and active power filters are employed to eliminate the harmonics and purify the relay operational signal. Passive filters are not a cost-effective choice for this issue; active filters, on the other hand, are more complex and need a proper and complicated controller. This paper introduces a new and simple approach to adaptive filter design. The approach is economic, compact and very effective in eliminating harmonics in the grid, and it can easily be attached to any protective relay to improve its performance. Application of this design to improve the performance of over current relays in the IEEE 30-bus system with heavy penetration of non-linear loads is investigated.

KEYWORDS: Over current relay, harmonic filters, IEEE 30-bus system

I. INTRODUCTION
Most of the literature reveals that the performance of relays in the presence of harmonic currents is not significantly affected for total harmonic distortion (THD) of less than 20% [1]. As there has been a tremendous increase in harmonic sources in the last few decades, harmonic levels of 20% and higher are expected.
Moreover, over current relays have to operate with current transformers which may saturate and distort the current waveform, causing a relay to trip under conditions which would normally allow smooth running of the system without interruption [1-5]. Current transformer saturation may occur due to the presence of harmonics, which can cause a current transformer to fail to deliver a true reproduction of the primary current to the relay during fault conditions and thus may cause undesirable operations [6-8]. Electromechanical relays are nowadays considered obsolete in most developing countries; however, they are still used in some places, and their time delay characteristics are altered in the presence of harmonics. Another type of relay affected by harmonics is the negative-sequence over current relay, which is designed specifically to function with the negative-sequence current component and cannot perform up to its standard when there is significant waveform distortion. Digital and numerical relays usually have built-in filters to filter out harmonics and are thus less prone to maloperation [9].

Active power filters, which are more flexible and viable than passive filters, have become popular nowadays [10]. However, the configuration of active power filters is more complex and requires appropriate control devices to operate [11]. This paper introduces a novel active filter design that is compact, simple and reliable. Application of this design to improve the performance of over current relays in the IEEE 30-bus system with heavy penetration of non-linear loads is investigated.

89 Vol. 1, Issue 5, pp. 89-95
The proposed filter design with the detailed circuit components is elaborated in section 2. To prove the reliability of the proposed filter, the simulation results of two case studies are illustrated in section 3. Application of the proposed filter to the IEEE 30-bus system is examined in section 4. Section 5 draws the overall conclusion of the paper.

II. PROPOSED FILTER DESIGN
To purify the current signal received by the current transformer (CT), the distorted current signal, which consists of a fundamental current component (I0) and harmonic current components (Ihs), in the secondary side of the step-down transformer is extracted, and the fundamental current component is filtered out using a narrow band-reject filter, while the remaining harmonic components are used to cancel the harmonic components in the other path by using a shifting transformer, as shown in Fig. 1. In this way the current signal fed to the relay will only contain the fundamental current component. The overall circuit is shown in Fig. 2.

Figure 1. Proposed harmonic filter design
Figure 2. Filter components

In the circuit shown in Fig. 2, the current transformer measures the distorted current from the step-down transformer secondary. The resistor R, with its value of 1 Ω, is used to convert the current signal
to a voltage signal, which is amplified 10 times using an operational amplifier. The key component of the active filter is the narrow band-reject 50 Hz filter which suppresses the 50 Hz fundamental component. The filter comprises low-pass and high-pass filter components with a summing amplifier (a twin-T notch filter). The filter transfer function and the values of its components are calculated based on the required specifications. The output signal of the filter is amplified using an operational amplifier and then converted to a current signal (comprising harmonic components only) using a voltage-controlled current source (VCCS). The harmonic components are then fed to one terminal of the cancellation transformer, while the original current (comprising fundamental and harmonic components) is fed to another terminal for harmonic cancellation. In this way, a pure fundamental current signal is guaranteed to be fed to the over current relay.

III. SIMULATION RESULTS
To examine the filter's capability in suppressing all undesired current harmonics while retaining the fundamental component, the circuit shown in Fig. 2 is simulated using PSIM software and two case studies are performed.

Case study 1: The primary side of the (1:1000) current transformer was fed by a distorted current signal comprising sub-harmonic frequencies of high amplitude at 10 Hz and 35 Hz, as shown in Table 1. The 4th column of Table 1 shows the ideal values of the output signal, where all sub-harmonic components are assumed to be eliminated and 100% (1 A) of the fundamental component is supplied to the relay. The 5th column of Table 1 shows the output current components of the proposed filter. The performance of the filter in eliminating harmonic components can be examined by comparing the filter output current components with the ideal output current. The waveforms of the input current, ideal output current and filter output current, along with their harmonic spectrums, are shown in Fig. 3.

Table 1. Filter performance with sub-harmonic components

  Harmonic Order  Frequency (Hz)  Input (A)  Ideal output (A)  Filter output (A)
  1               50              1000       1.0               0.95
  0.2             10               500       0                 0.0213
  0.7             35               500       0                 0.0816

Figure 3. Waveforms and spectrum analysis for case study 1
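The band-stop behaviour of the notch stage can be illustrated with the standard passive twin-T transfer function H(s) = (s² + ω0²) / (s² + 4ω0·s + ω0²) tuned to 50 Hz. (A textbook model for illustration only, not the paper's exact component values; note the twin-T's broad notch, which is why the sub-harmonics at 10 Hz and 35 Hz in Table 1 are also partially attenuated.)

```python
import math

F0 = 50.0                 # notch (fundamental) frequency in Hz
W0 = 2 * math.pi * F0     # notch angular frequency

def twin_t_gain(f_hz):
    """|H(j*2*pi*f)| of an ideal passive twin-T notch tuned to F0."""
    s = 1j * 2 * math.pi * f_hz
    return abs((s * s + W0 * W0) / (s * s + 4 * W0 * s + W0 * W0))

print(twin_t_gain(50.0))                               # ~0: fundamental rejected
print(round(twin_t_gain(10.0), 3),                     # sub-harmonic partly passes
      round(twin_t_gain(250.0), 3))                    # 5th harmonic partly passes
```

In the proposed circuit this stage sits in the harmonic-extraction path, so the components it passes are the ones later cancelled at the shifting transformer.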
The waveforms of the input current,ideal output current and filter output current along with their harmonic spectrums are shown in Fig. 3. Table 1. Filter performance with Sub-harmonic components Harmonic Frequency Input Ideal output Output the Order ( Hz ) (A) (A) filter ( A ) 1 50 1000 1.0 0. 95 0.2 10 500 0 0.0213 0.7 35 500 0 0.0816 Figure 3. Waveforms and spectrum analysis for case study 1
    • International Journal of Advances in Engineering & Technology, Nov 2011.©IJAET ISSN: 2231-1963 Table 2. Filter performance with sub-harmonic and harmonic components Harmonic Frequency Input Ideal output Output the Order ( Hz ) (A) (A) filter ( A ) 1 50 1000 1.0 0. 9863 0.2 10 500 0 0.0101 0.6 30 500 0 0.3293 2 100 500 0 0.0102 3 150 500 0 0.0023 5 250 300 0 0.0079 7 350 300 0 0.0131 9 450 300 0 0.0055 11 550 100 0 0.0067 13 650 100 0 0.0079Case study 2: The amount of harmonic contents in the input signal is significantly increased to includethe harmonic and sub harmonic orders shown in Table 2. It can be shown from table 2 that thedifference between the ideal output current and the actual filter output current is negligible. Thewaveforms of the input current, ideal output current and filter output currents along with theirharmonic spectrums for this case are shown in Fig. 4. Figure 4. Waveforms and spectrum analysis for case study 2IV. APPLICATION OF THE PROPOSED FILTER ON THE IEEE-30 BUS SYSTEMTo investigate the impact of the proposed filter on relay’s operation, the IEEE 30-bus system [12](shown in Fig. 5) is simulated using ETAP Software and the THD is measured as 3%. Relays
    • International Journal of Advances in Engineering & Technology, Nov 2011.©IJAET ISSN: 2231-1963coordination is performed as in [13, 14]. A 3-phase short circuit fault is applied at bus 10 and as aresult, relays 8, 9 and 10 will trip in the sequence shown in Fig. 5 to isolate the faulty bus.Non-linear loads were then connected to the system at different buses such that the THD is reaching20%. The same three phase short circuit fault is applied on bus 10. As can be seen from Fig. 6, undersuch significant THD, the relays will have undesired tripping sequence and they will not isolate thefaulty bus. The tripping sequence in this case starts with relay 9 on bus 10. However, relays 8 and 9will not trip and relays 19 and 20 on bus 25 will trip instead. As a consequence, under such heavyharmonic level, the relays will have a malfunction operation and they will not isolate the faulty zone.To promote a correct sequence of relays tripping operation in the existence of significant THD, theproposed filter design was connected at the locations shown in Fig. 7. As a result, the THD wasreduced to only 3.1%. Fig. 7 shows a right sequence of relays tripping operation which is similar toFig. 5. The relay pickup values become much sensible to the relay operation after the installation ofharmonic filters. It can be concluded that the proposed filter is very effective in rectifying relaysoperation in the existence of significant harmonic currents as it eliminate a significant amount ofharmonic currents. Fig. 5 Tripping Sequence during 3 Phase Fault on bus 10 (THD = 3%)
    • International Journal of Advances in Engineering & Technology, Nov 2011.©IJAET ISSN: 2231-1963 Figure 6. Tripping Sequence during 3 Phase Fault on bus 10 (THD = 20%) Figure 7. Tripping Sequence during 3 Phase Fault on bus 10 (THD = 3.1%)
V. CONCLUSION
Simulation results show that when the THD is more than 20%, over current relay performance is significantly affected and malfunction results: when a fault occurs in the system, the over current relays are not able to isolate the faulty location, as they trip in an undesired sequence. Reducing the THD to a level below 20% mitigates this problem and proper relay operation can be retained. Passive harmonic filters are not a cost-effective solution to this problem. The proposed filter design is very effective in reducing the THD in the system to an almost negligible level and rectifies relay operation in the presence of significant harmonic currents. The proposed filter is compact, cost-effective, technically sound and easy to implement.

REFERENCES
[1] Tumiran, T. Haryono and Zulkarnaini, "Effect of Harmonic Loads on Over Current Relay to Distribution System Protection", Proceedings of the International Conference on Electrical Engineering and Informatics, June 2007.
[2] N.X. Tung, G. Fujita, M.A.S. Masoum, S.M. Islam, "Impact of Harmonics on Tripping Time and Coordination of Overcurrent Relay", 7th WSEAS International Conference on Electric Power Systems, High Voltages, Electric Machines, Venice, Italy, November 2007.
[3] A. Wright and C. Christopoulos, Electrical Power System Protection, London: Chapman & Hall, 1993.
[4] S. Arrillaga, Watson and Wood, Power System Harmonics, England: John Wiley & Sons Ltd, 1997.
[5] A. Watson, Power System Harmonics, England: John Wiley & Sons Ltd, 2003.
[6] N.A. Abbas, "Saturation of Current Transformers and its Impact on Digital Overcurrent Relays", MSc Thesis, King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia, August 2005.
[7] E.F. Fuchs and M.A.S. Masoum, Power Quality in Power Systems and Electrical Machines, Amsterdam and Boston: Academic Press/Elsevier, 2008.
[8] Francisco C.
De La rosa.”Effect of harmonic distortion on power systems” in Harmonics and PowerSystems. Boca Raton, FL : CRC/Taylor & Francis, 2006.[9] A. A. Girgis, J. W. Nims, J. Jacamino, J. G. Dalton, and A. Bishop, "Effect of voltage harmonics on theoperation of solid state relays in industrial applications," in Industry Applications Society Annual Meeting,1990., Conference Record of the 1990 IEEE, 1990, pp. 1821-1828 vol.2.[10] C. Cheng-Che and H. Yuan-Yih, "A novel approach to the design of a shunt active filter for an unbalancedthree-phase four-wire system under nonsinusoidal conditions," Power Delivery, IEEE Transactions on, vol. 15,pp. 1258-1264, 2000.[11] G.J Wakileh, Power system harmonics fundamental analysis and filter design, Berlin ; New York: Springer, 2001[12] H. Saadat, Power System Analysis, New York: McGraw-Hills Inc., 2002.[13] M. Ezzeddine, R. Kaczmarek, and M. U. Iftikhar, "Coordination of directional overcurrent relays using anovel method to select their settings," Generation, Transmission & Distribution, IET, vol. 5, pp. 743-750.[14] D. Birla, R. P. Maheshwari, and H. O. Gupta, "A new nonlinear directional overcurrent relay coordinationtechnique, and banes and boons of near-end faults based approach," Power Delivery, IEEE Transactions on, vol.21, pp. 1176-1182, 2006.AuthorA. Abu-Siada (M’07) received his B.Sc. and M.Sc. degrees from Ain Shams University,Egypt and the PhD degree from Curtin University of Technology, Australia, All inElectrical Engineering. Currently, he is a lecturer in the Department of Electrical andComputer Engineering at Curtin University. His research interests include power systemstability, Condition monitoring, Superconducting Magnetic Energy Storage (SMES), PowerElectronics, Power Quality, Energy Technology, and System Simulation. He is a regularreviewer for the IEEE Transaction on Power Electronics, IEEE Transaction on Dielectricsand Electrical Insulations, and the Qatar National Research Fund (QNRF).
    • International Journal of Advances in Engineering & Technology, Nov 2011.©IJAET ISSN: 2231-1963 ANUPLACE: A SYNTHESIS AWARE VLSI PLACER TO MINIMIZE TIMING CLOSURE Santeppa Kambham1 and Krishna Prasad K.S.R2 1 ANURAG, DRDO, Kanchanbagh, Hyderabad-500058, India 2 ECE Dept, National Institute of Technology, Warangal-506004, IndiaABSTRACTIn Deep Sub Micron (DSM) technologies, circuits fail to meet the timings estimated during synthesis aftercompletion of the layout which is termed as ‘Timing Closure’ problem. This work focuses on the study ofreasons for failure of timing closure for a given synthesis solution. It was found that this failure is due to non-adherence of synthesizer’s assumptions during placement. A synthesis aware new placer called ANUPLACEwas developed which adheres to assumptions made during synthesis. The new algorithms developed areillustrated with an example. ANUPLACE was applied to a set of standard placement benchmark circuits. Therewas an average improvement of 53.7% in the Half-Perimeter-Wire-Lengths (HPWL) with an average areapenalty of 12.6% of the placed circuits when compared to the results obtained by the existing placementalgorithms reported in the literature.KEYWORDS: Placement, Signal flow, Synthesis, Timing I. INTRODUCTIONVLSI IC design process involves two important steps namely (i) synthesis of high level representationof the circuit producing technology mapped components and net-list and (ii) layout of the technologymapped circuit. During the layout process, the placement of circuit components to the exact locationsis carried out. The final layout should meet the timing and area requirements which are estimatedduring the synthesis process. Placement is the major step which decides the area and delay of thefinal layout. If the area and delay requirements are not met, the circuits are to be re-synthesized. Thistwo step process has to be iterated till the required area and delay are achieved. 
In Deep Sub Micron(DSM) technologies, circuits fail to meet the timing requirements estimated during the synthesis aftercompleting the layout. This is termed as “Timing Closure” problem. It has been found that even afterseveral iterations, this two step process does not converge [1,2,3]. One reason for this non-convergence is that the synthesis and layout are posed as two independent problems and each onesolved separately. There are other solutions which try to unify these two steps to achieve timingclosure which can be classified into two categories (i) synthesis centric[4,5,6] and (ii) layout centric[7,8]. In synthesis centric methods, layout related information is used during synthesis process. Inlayout centric methods, the sub modules of circuits which are not meeting the requirements are re-synthesised. All these methods have not investigated why a given synthesis solution is not able tomeet the timing requirements after placement. Our work focuses in finding the reasons for failure oftiming closure for a given synthesis solution. Based on these findings, we developed a placer namedas ANUPLACE which minimizes the timing closure problem by placing the circuits as per theassumptions made during the synthesis process. 96 Vol. 1, Issue 5, pp. 96-108
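The wire-length metric quoted in the abstract, Half-Perimeter Wire Length (HPWL), sums over each net the half-perimeter of the bounding box enclosing that net's pins. A minimal sketch (the coordinates and nets are invented for illustration):

```python
def hpwl(pin_coords):
    """Half-perimeter of the bounding box enclosing one net's (x, y) pins."""
    xs = [x for x, _ in pin_coords]
    ys = [y for _, y in pin_coords]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def total_hpwl(nets):
    """Sum of HPWL over all nets of a placement."""
    return sum(hpwl(net) for net in nets)

# Hypothetical 2-net placement: each net is a list of (x, y) pin locations.
nets = [[(0, 0), (3, 4), (1, 2)],   # bounding box 3 x 4 -> HPWL 7
        [(2, 2), (2, 5)]]           # bounding box 0 x 3 -> HPWL 3
print(total_hpwl(nets))             # 10
```

Placers minimize this total, which is how the reported 53.7% average improvement over the benchmark circuits is measured.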
    • International Journal of Advances in Engineering & Technology, Nov 2011.©IJAET ISSN: 2231-1963In Section 2, we briefly review the existing methods of placement and their limitations. Section 3tabulates and illustrates the reasons for failure of timing closure. Section 4 describes the implicationsin adhering to synthesis assumptions during placement. Based on this, the basis for new placementalgorithm was worked out in Section 5. With this background, a new placer called ANUPLACE wasdeveloped which is described in Section 6. The new placer ANUPLACE is illustrated with anexample in Section 7. Improvements to initial placement solution are given in Section 8. Theexperimental setup to evaluate ANUPLACE is described in Section 9. Results are tabulated andimprovements obtained are discussed. Conclusions of research work carried and future scope aregiven in Section 10. II. EXISTING PLACEMENT METHODS AND THEIR LIMITATIONSPlacement assigns exact locations to circuit components within chip area. The existing algorithms usecomponent cell dimensions and component interconnection information as input to the placer. Thusthe placer is not directly coupled to the synthesis. Lot of information available after synthesis is notused during placement [9,10,11,28,36,37]. The studies in [12] show that the results of leadingplacement tools from both industry and academia may be up to 50% to 150% away from optimal intotal wire length.Major classical approaches to placement are Constructive method and Iterative method [13]. InConstructive placement, once the components are placed, they will never be modified thereafter. Theconstructive methods are (i) Partitioning-based (ii) Quadratic assignment and (iii) Cluster growth. Aniterative method repeatedly modifies a feasible placement by changing the positions of one or morecore cells and evaluates the result. It produces better result at the expense of enormous amounts ofcomputation time. 
Main iterative methods are (i) Simulated annealing, (ii) Simulated evolution and(iii) Force-directed. During placement, we have to optimize a specific objective function. Typicalobjectives include wire length, cut, routing congestion and performance. These classical approachesare very effective and efficient on small to medium scale designs. In DSM SOC era, due to complexchips and interconnect delay dominance, these are not very effective [1,2,3,4]. Some new methods toovercome this problem reported in literature [13] are (a) Hierarchical placement, which utilizes thestructural properties [23] of the circuit during placement (b) Re-synthesis, which re-synthesizes asoft-macro, in case of timing violation. (3) Re-timing method relocates registers to reduce the cycletime while preserving the functionality. Existing timing-driven placement algorithms[14,15,16,17,18,19] are classified into two categories: path-based and net-based. Path-basedalgorithms try to directly minimize the longest path delay. Popular approaches in this categoryinclude mathematical programming and iterative critical path estimation. TimberWolf [18] usedsimulated annealing to minimize a set of pre-specified timing-critical paths. The drawback is thatthey usually require substantial computation resources. In the net-based algorithms, timingconstraints are transformed into net-length constraints. The use of signal direction to guide theplacement process found to give better results [28]. In Timing driven placement based on monotonecell ordering constraints [24], a new timing driven placement algorithm was presented, whichattempts to minimize zigzags and criss-crosses on the timing-critical paths of a circuit.Table 1 summarises how the existing algorithms are unable to solve the timing closure problem for agiven synthesis solution. Most of the existing placement algorithms consider only connectivityinformation during placement and ignore other information available from synthesis [28].III. 
REASONS FOR FAILURE OF TIMING CLOSUREOur study has indicated that the failure of achieving timing closure is due to non-adherence ofsynthesizer’s assumptions during placement. The assumptions made during synthesis [25,26,27,29]and the implications of these assumptions during placement are summarized in Table 2 and illustratedin Figures 1 to 8. Column 1 with heading “Fig” refers to the Figure number. 97 Vol. 1, Issue 5, pp. 96-108
    • International Journal of Advances in Engineering & Technology, Nov 2011.©IJAET ISSN: 2231-1963 Table 1 Drawbacks of existing placement methods Placement method Drawback Minimizes all wires whereas only critical path is to be Quadratic assignment [20] minimized Cluster growth [21] Loosing track of cells along signal flow Simulated annealing [18] Signal flow disturbed Force directed [22] Tries to minimize all wires which is not required Global signal flow not known. Additional burden of Hierarchical placement [23] partitioning into cones Re-synthesis of soft-macros [8] Iterative process Additional burden of finding zigzags and criss-crosses from Monotone cell ordering [24] net-list Figure 1 Gates placed as per levels Figure 2 Non-uniformity of row widths Figure 3 Cones versus Rectangle Figure 4 Primary inputs Figure 5 Sharing inputs Figure 6 Sharing common terms Figure 7 Non-uniformity of cell sizes Figure 8 Pin positions on cell Table 2 Implication of non-adherence of synthesis assumptions Fig Synthesis Assumption Placement Implication 1 Gates are placed as per levels. During placement gates are randomly placed. This increases the delay in an unpredictable manner. 98 Vol. 1, Issue 5, pp. 96-108
    • International Journal of Advances in Engineering & Technology, Nov 2011.©IJAET ISSN: 2231-1963 1 Delay is proportional to number Since gates are randomly placed, delay is no longer proportional of levels. to number of levels. 1 Delay from one level to the Since the original structure of levels is not maintained, the delay other level is a fixed constant from one level to the other level is unpredictable. (some “k”) 1 Upper bound of delay = No. of Since the original structure of levels is not maintained, upper levels * (Delay of max (level) + bound of delay is not predictable. delay from one level to the next level) 2 No restrictions on the aspect Synthesis assumes irregular structure as shown in figure. Placer ratio- number of rows and tries to achieve rectangular shape. Due to this, synthesis columns assumptions here can never be met, if the goal is a rectangle. 2 No restrictions on the aspect Synthesizer has no notion of shape of the placement. It does not ratio, no uniformity on size of bother about uniformity on the size of rows or columns. Thus rows or columns synthesizer may produce irregular shapes when it calculates delay. This is not the case with placer. 3 Synthesizer assumes a ‘cone’. Combinational circuits have a natural ‘cone’ shape as shown in figure. Placer requires ‘rectangle’ for effective use of silicon. Synthesizer expected delay can be achieved only if placer uses ‘cone’ for critical signals. 4 Geographical distance of input In the Figure, A & B assumed to be available in a constant ‘k’ source pins time. In reality, this can never be the case. This synthesis assumption can never be met. 5 Sharing of inputs Synthesizer assumes inputs to be available in constant time which is not the case during placement. This synthesis assumption can never be met. 6 Common terms Sharing output produces more wire during layout than what was assumed during synthesis. This synthesis assumption can never be met. 
7 | Non-uniformity of cell sizes. | Requires more wire during placement. Cell sizes (length and width) are assumed uniform and fixed during synthesis as far as the wire required for routing is concerned. This synthesis assumption can never be met.
8 | Pin position on a cell. | It is assumed that inputs are available at the same point on the cell. This is not the case during placement. This synthesis assumption can never be met.

IV. IMPLICATIONS IN ADHERING TO SYNTHESIS ASSUMPTIONS DURING PLACEMENT

We now analyze how we can adhere to synthesis assumptions during placement. The synthesizer assumes that cells are placed as per the levels assumed during synthesis, whereas during placement cells are placed randomly without any regard to levels. Cells can be placed as per the levels as a 'cone' [28], using the left-over area for non-critical cells to form a rectangle for better silicon utilization. The synthesizer assumes that delay is proportional to the number of levels, but this information is lost during placement due to random placement. By placing cells on critical paths as per the levels along the signal flow, we adhere to this synthesis assumption; non-critical cells can be placed in the left-over area. By placing cells as per the levels assumed during synthesis, the delay from one level to the next can be approximately maintained as a fixed constant, and the upper bound of delay can be predicted. The synthesizer assumes an irregular structure as shown in Figure 2; cells which are not on critical paths can be moved to other rows to achieve a rectangular shape. Based on the above analysis, the basis for the new method is evolved, which is explained in the next section.
V. BASIS FOR THE NEW ALGORITHMS

The new method is evolved based on the following.
• Use the natural signal flow available during synthesis [28].
• Use cone placement for signals along the critical path [28].
• Try to maintain the placement as close as possible to the levels assumed during synthesis.

Signal flow indicates the direction of a signal, from a Primary Input (PI) to a gate input or from the output of one gate to the input of another gate. The issues in placing cells along the signal flow are explained below with the help of Figure 9. The gate G has one output and 3 inputs. S1, S2, S3 show the direction of the signals to the inputs of G. Ideally, the outputs of the preceding gates g1, g2, g3 should lie on the straight lines S1, S2, S3 as shown in Figure 9, and g1, g2, g3 are to be placed as close as possible to G. The pin separations w1, w2 are much smaller than the gate widths f1, f2, f3 of gates g1, g2, g3. It is impossible to place all input gates g1, g2, g3 in a row in level i such that their outputs fall on the straight lines S1, S2, S3; at least two of the 3 gates have to be placed as shown in Figure 10. This results in two bends on signals S1 and S3, which cannot be avoided. Only one signal can be placed on the straight line; this can be used for placing critical paths, while other, less critical paths are placed above or below this straight line. The new placement algorithms used in ANUPLACE are explained in the next section.

Figure 9 Signal Flow as per Synthesizer    Figure 10 Placement along Signal Flow

VI. ALGORITHMS USED IN ANUPLACE

ANUPLACE reads the benchmark circuit, which is in the form of a net-list taken from the "SIS" synthesizer [35], builds trees with primary outputs as roots as shown in Figure 11, and places the circuit along the signal flow as cones.
The placement benchmark circuits in Bookshelf [30] format contain 'nodes' files giving the aspect ratios of gates and 'nets' files giving the interconnection details between gates and input/output terminals. These formats do not identify primary inputs or primary outputs. We took benchmark circuits from the SIS [35] synthesizer in "BLIF" format, which are then converted into Bookshelf format using the converters provided in [31,32,33,34]. This produces the ".nodes" and ".nets" files. The 'nodes' file identifies primary inputs and primary outputs by the "_input" and "_output" suffixes respectively. The "nodes" file consists of information about gates, primary inputs and outputs. The "nets" file consists of the interconnections between the nodes and the inputs/outputs. While parsing the files, primary input/output information is obtained using the "terminal" names which identify "input/output". The new placement algorithm is shown in Figure 12. Once the trees are created, delay information is read into the data structure from SIS, which is used during placement. This delay information is available at every node from the "SIS" synthesizer. A circuit example with 3 primary outputs, marked as PO-1, PO-2 and PO-3, is shown in Figure 11.
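The suffix convention described above can be sketched in a few lines. This is an illustrative Python sketch, not the authors' converter or parser: the file fragment and helper name are hypothetical, and real Bookshelf `.nodes` files carry additional fields.

```python
# Sketch (not the paper's code): classify nodes in a Bookshelf-style
# ".nodes" file using the "_input"/"_output" name suffixes described above.
# The exact file layout is assumed for illustration only.

def classify_nodes(nodes_lines):
    """Return (gates, primary_inputs, primary_outputs) from .nodes lines."""
    gates, pis, pos = [], [], []
    for line in nodes_lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue                        # skip blanks and comments
        name = line.split()[0]              # first token is the node name
        if name.endswith("_input"):
            pis.append(name)
        elif name.endswith("_output"):
            pos.append(name)
        else:
            gates.append(name)
    return gates, pis, pos

# Hypothetical fragment of a .nodes file:
sample = ["a5_input 1 1 terminal",
          "a1_output 1 1 terminal",
          "g143 1 1"]
print(classify_nodes(sample))   # -> (['g143'], ['a5_input'], ['a1_output'])
```

The same suffix test drives the tree construction: nodes in the primary-output list become the roots from which the placer recurses.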
Figure 11 Trees with Primary output as root    Figure 12 ANUPLACE algorithm

ANUPLACE works as follows.
• Read the benchmark circuit, which is in the form of a net-list with timing information.
• Build trees with primary outputs as roots.
• Sort the trees based on time criticality.
• Starting with the most time-critical tree, place one tree, pointed to by its root, on the layout surface, starting from the primary output, using the "place_cell" algorithm shown in Figure 13.
• Place the remaining trees one by one on the layout surface using "place_cell".

The "place_cell" algorithm shown in Figure 13 works as follows.
• Place the cell pointed to by the root using the "place_one_cell" algorithm shown in Figure 14.
• Sort the input trees based on time criticality.
• For each input: if it is a primary input, place it using "place_one_cell"; if not, call "place_cell" with this input recursively.

Figure 13 Algorithm Place-cell    Figure 14 Algorithm Place-one-cell

The "place_one_cell" algorithm shown in Figure 14 works as follows. The layout surface is divided into a number of rows equal to the number of levels in the tree, as shown in Figure 11. Each row corresponds to one level of the tree. The first root cell is placed in the middle of the top row. Subsequently, the children are placed below this row based on the availability of space. The roots of all trees (that is, all primary outputs) are placed in the top row. While placing a cell beneath a root, preference is given to the place along the signal flow. If space is not available on the signal-flow path, the cell is shifted to the right or left of the signal flow and placed as near as possible to it.

VII. ILLUSTRATION OF ANUPLACE WITH AN EXAMPLE

The ANUPLACE algorithms are illustrated with an example whose logic equations are shown in Figure 15.
The timing information from the SIS synthesizer [35] is given in Table 3. The tree built by ANUPLACE with the average slacks is shown in Figure 16. The sequence of placement based on time criticality is also shown in Figure 16, indicated by the numbers 1-31 at each node of the tree. The placement sequence number for each gate is also shown in the last column of Table 3. The initial placement is shown in Figure 17.

Figure 15 Example - Equations    Figure 16 Example - Tree built by ANUPLACE (levels 1-5, with the average slack at each node)

There are 6 primary inputs, marked a5, a6, a8, a9, a10 and a20, and one primary output, marked a1. There are 15 two-input gates, marked [127], [15], [14], [18], [19], [17], [143], [4], [82], [135], [119], [12], [30], [2] and [a1]. The interconnections are as shown in Figure 16. The slack delays computed by the synthesizer at each gate are shown in Figure 16. The placement algorithm given in Figure 12 places the primary output cell a1 first. Then it looks at its leaf cells [143] and [2]. From the time criticality given in Figure 16, it places cell [143] along the signal flow just below the cell [a1]. The algorithm is then recursively invoked to place the tree with root [143], which places the cells and the inputs in the sequence [17], [18], a10, a9, [19], a5, a10, [14], [15], a10, a20, [127], a5 and a20 along the signal flow as shown. Once the placer completes the placement of the tree with [143] as root, it starts placing the tree pointed to by cell [2]. Now the cells marked [2], [30], [119], a10, a6, [12], a5, a9, [135], [82], a9, a8, [4], a5 and a6 are placed. This completes the placement of the complete circuit. Primary inputs and primary outputs are re-adjusted after placing all the cells.
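The recursive order just traced, root first and then the most time-critical subtree, can be sketched in a few lines. This is an illustrative Python sketch of the place_cell recursion as described in the paper, not the actual ANUPLACE code; the tree fragment and slack values are taken from Figure 16.

```python
# Sketch of the recursive placement order (Figures 12-14): place the root,
# then recurse into subtrees in order of time criticality (more negative
# average slack = more critical). Data structure is assumed for illustration.

def place_cell(node, tree, slack, order):
    """Append nodes to `order` in ANUPLACE placement sequence."""
    order.append(node)                           # place_one_cell step
    children = tree.get(node, [])
    # most time-critical input subtree first
    for child in sorted(children, key=lambda c: slack[c]):
        place_cell(child, tree, slack, order)

# Fragment of the Figure 16 example: a1 drives [143] and [2];
# [143] drives [17] and [14]. Average slacks from Figure 16.
tree = {"a1": ["143", "2"], "143": ["17", "14"]}
slack = {"143": -4.46, "2": -4.435, "17": -4.46, "14": -4.25}
order = []
place_cell("a1", tree, slack, order)
print(order)   # -> ['a1', '143', '17', '14', '2']
```

As in the worked example, [143] is visited before [2] and [17] before [14], matching the placement sequence numbers in Table 3.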
Figure 17 Example - Initial placement    Figure 18 Find-best-place

Table 3 Timing information for the example circuit

Gate | Arrival time rise | Arrival time fall | Required time rise | Required time fall | Slack rise | Slack fall | Slack average | Placement sequence
a5 | 1.45 | 1.11 | -3.28 | -3.11 | -4.73 | -4.22 | -4.475 | 8,15,23,30
a6 | 0.69 | 0.53 | -2.89 | -2.93 | -3.58 | -3.46 | -3.52 | 21,31
a8 | 0.35 | 0.27 | -2.84 | -3.1 | -3.19 | -3.37 | -3.28 | 28
a9 | 1.16 | 0.89 | -3.47 | -3.27 | -4.63 | -4.16 | -4.395 | 6,24,27
a10 | 1.5 | 1.15 | -3.44 | -2.74 | -4.94 | -3.89 | -4.415 | 5,9,12,20
a20 | 0.8 | 0.61 | -3.51 | -2.52 | -4.31 | -3.13 | -3.72 | 13,16
[127] | 1.72 | 2.18 | -1.74 | -2.53 | -3.46 | -4.71 | -4.085 | 14
[15] | 2.08 | 2.09 | -1.71 | -2.35 | -3.79 | -4.43 | -4.11 | 11
[14] | 3.13 | 2.64 | -1.58 | -1.14 | -4.71 | -3.79 | -4.25 | 10
[18] | 1.93 | 2.07 | -1.96 | -2.87 | -3.89 | -4.94 | -4.415 | 4
[19] | 2.04 | 2.06 | -1.93 | -2.69 | -3.98 | -4.75 | -4.365 | 7
[17] | 3.11 | 2.66 | -1.83 | -1.31 | -4.94 | -3.98 | -4.46 | 3
[143] | 3.45 | 4.1 | -0.53 | -0.85 | -3.98 | -4.94 | -4.46 | 2
[4] | 2.05 | 2.04 | -2.17 | -1.87 | -4.22 | -3.91 | -4.065 | 29
[82] | 1.74 | 2.22 | -2.42 | -2.04 | -4.16 | -4.26 | -4.21 | 26
[135] | 3 | 2.79 | -1.25 | -1.44 | -4.26 | -4.22 | -4.24 | 25
[119] | 1.93 | 2.49 | -1.84 | -2.16 | -3.77 | -4.65 | -4.21 | 19
[12] | 2.04 | 2.04 | -1.81 | -1.98 | -3.85 | -4.02 | -3.935 | 22
[30] | 3.42 | 2.6 | -1.22 | -1.26 | -4.65 | -3.85 | -4.25 | 18
[2] | 3.72 | 3.98 | -0.5 | -0.67 | -4.22 | -4.65 | -4.435 | 17
[a1] | 4.94 | 4.22 | 0 | 0 | -4.94 | -4.22 | -4.58 | 1

VIII. CONTROLLING ASPECT RATIO

Due to the non-uniformity of the number of cells per level, the final layout is not a rectangle. For better silicon utilization, it is required to make the final aspect ratio rectangular. The aspect ratio can be controlled while placing the cells by suitably modifying the algorithm "place_one_cell" given in Figure 14, as discussed in the following paragraphs.

8.1 Algorithm: find-best-place

In the main algorithm, "place_circuit", the following steps are added.
• Max_row = number of levels as given by the synthesizer.
• Total_width = total of the widths of all cells in the circuit.
• Average_width_per_level = Round(Total_width / Max_row) + Tolerance, where "Tolerance" is an integer to make the placement possible, which can be varied based on need.

At the beginning, before placing any cells, a layout surface rectangle of size "Max_row x Average_width_per_level" is defined. As the placement progresses, the "used space" and "available space" are marked as shown in Figure 18.
The "find-best-place" algorithm works as follows.
• Current level of parent cell = c, as shown in Figure 18.
• Check availability on level c-1.
• If space is available on level c-1, place the leaf cell at level c-1.
• Else check availability on levels c, c-2, c+1 and c+2 in the "free" spaces as shown in Figure 18.
• Find the shortest free location from the current position, shown as C in Figure 18, and place the leaf cell there.

The example given in Figure 15 is placed as follows. The total number of levels, excluding primary inputs and the primary output, is 4. Assuming that each cell has a width of 1 unit, the total width of all cells in the circuit is 15. So Max_row = 4, Total_width = 15 and Average_width_per_level = Round(15/4) = 4, with Tolerance = 0. So a maximum of 4 cells can be placed per row. The final placement by ANUPLACE is shown in Figure 19.
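The row-capacity arithmetic and the row search of Section 8.1 can be sketched as follows. This is an illustrative Python sketch under stated assumptions: the `free` map of remaining slots per level is a hypothetical data structure, and the capacity computation reproduces the worked example, Round(15/4) = 4.

```python
# Sketch (not the paper's code) of the Section 8.1 computations.

def average_width_per_level(cell_widths, max_row, tolerance=0):
    """Row capacity: Round(Total_width / Max_row) + Tolerance."""
    return round(sum(cell_widths) / max_row) + tolerance

def find_best_place(free, c):
    """Row for a leaf cell whose parent is on level c: prefer c-1, then
    the nearest level with free space, in the order the algorithm checks."""
    for cand in (c - 1, c, c - 2, c + 1, c + 2):
        if free.get(cand, 0) > 0:
            return cand
    return None       # no space in the candidate rows

# Worked example: 15 unit-width cells, 4 levels, Tolerance = 0.
print(average_width_per_level([1] * 15, 4))      # -> 4
print(find_best_place({1: 0, 2: 1, 3: 4}, 2))    # level 1 full, falls back to 2
```

The `free` counters shrink as cells are placed, which is how the "used space" and "available space" marking of Figure 18 would be maintained.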
Figure 19 Final placement by ANUPLACE    Figure 20 Placement by existing placer

The resulting layout is nearly a rectangle. After placing the gates, primary inputs and primary outputs are placed near the gates from which these inputs/outputs are taken. The placement given by the public domain placer [31,32,33,34] is shown in Figure 20. The experimental set-up to evaluate ANUPLACE using benchmark circuits, and the results, are given in the next section.

IX. RESULTS AND DISCUSSION

In this section, the test set-up to evaluate the new algorithms with the benchmark circuits is described. The results are compared with those obtained from public domain placement algorithms. The test set-up is shown in Figure 21, and the set-up for comparing the results is shown in Figure 22. The benchmark circuit is taken in the form of a PLA. The normal placement benchmark circuits [36,37] are not useful, because they give only cell dimensions and interconnect information; timing and other circuit information from the synthesizer is not available in these placement benchmarks. The SIS synthesizer [35] is used for synthesizing the benchmark circuit. SIS [35] produces the net-list in BLIF format along with the timing information. The BLIF output is then converted into Bookshelf format using the public domain tools available at the web site [31,32,33,34], using the utility "blif2book-Linux.exe filename.blif filename". ANUPLACE is used to place the circuit, which gives the placement output in Bookshelf format. To check for overlaps and also to calculate the wirelength (HPWL), a public domain utility [31,32,33,34] taken from the same web site is used. The utility "PlaceUtil-Lnx32.exe -f filename.aux -plotWNames filename -checkLegal -printOverlaps" checks for out-of-core cells and overlaps. This utility also gives the Half Perimeter Wire Length (HPWL) of the placed circuit.
The same "BLIF" file is used with the public domain placer available at [31] using the utility "LayoutGen-Lnx32.exe -f filename.aux -saveas filename", and the HPWL is calculated using the utility "PlaceUtil-Lnx32.exe -f filename.aux -plotWNames filename -checkLegal". Table 4 shows the Half Perimeter Wire Lengths (HPWL) of the circuits placed using the existing public domain algorithms [31] and ANUPLACE. There is an average improvement of 53.7% in HPWL, with an average area penalty of 12.6%. Due to the aligning of cells to the signal flow, the layout produced by ANUPLACE is not a perfect rectangle; there are white spaces at the left and right sides as shown in Figure 19. Because of this, there is an increase in the area of the placed circuit. The cells which are logically dependent are placed together, as in [28]; other placement algorithms randomly scatter the cells. Because of this, there is a reduction in the HPWL of the entire placed circuit. Since the cells are placed along the signal flow, the wire length along the critical paths will be optimum, so zigzags and criss-crosses are not present as in [24]. The circuit is naturally partitioned when trees are built rooted at primary outputs (POs), so there is no additional burden of extracting cones as in [23,28]. ANUPLACE is a constructive method, and hence better than other iterative methods. Only critical paths are given priority during construction of the layout. Global signal flow is kept in mind all through the placement, unlike in other placement methods. Average slacks are used in these experiments; using the maximum of the rise and fall slacks would give the worst-case delay. The timing results are being communicated in a separate paper. The final placement is closer to the synthesis assumptions when compared to other placement methods. This approach may be useful towards evolving a Synergistic
Design Flow, which is to create iteration loops that are tightly coupled at the various levels of the design flow, as mentioned in [1].

Table 4 Comparison of HPWL

Circuit name | HPWL (ANUPLACE) | HPWL (Existing) | Core cell area | Area (Existing) | Area (ANUPLACE) | Improvement in HPWL % | Area penalty %
5xp1 | 1343.8 | 2021.9 | 317 | 361 | 378 | 50.46 | 4.7
9sym | 4321.3 | 5162.4 | 657 | 729 | 730 | 19.46 | 0.1
apex2 | 10788.7 | 16088.1 | 1225 | 1369 | 1372 | 49.12 | 0.2
b12 | 765.1 | 1180.1 | 200 | 225 | 210 | 54.25 | -6.7
b9 | 1591.1 | 2601.5 | 308 | 342 | 387 | 63.51 | 13.2
clip | 2433 | 3968.9 | 511 | 576 | 612 | 63.13 | 6.3
cm82a | 148.2 | 216 | 62 | 72 | 76 | 45.69 | 5.6
comp | 2093.4 | 3681.9 | 452 | 506 | 650 | 75.88 | 28.5
con1 | 125.2 | 188.6 | 48 | 56 | 85 | 50.6 | 51.8
cordic | 692.7 | 1569.8 | 230 | 256 | 360 | 126.63 | 40.6
count | 2777.4 | 3842.9 | 473 | 529 | 520 | 38.36 | -1.7
f51m | 1494.7 | 2174.4 | 309 | 342 | 360 | 45.47 | 5.3
fa | 62.9 | 83.9 | 30 | 36 | 44 | 33.27 | 22.2
ha | 21.4 | 33.9 | 11 | 12 | 12 | 58.37 | 0
misex2 | 2107.3 | 2626.6 | 308 | 342 | 330 | 24.65 | -3.5
mux1-8 | 111.3 | 148.3 | 32 | 36 | 42 | 33.33 | 16.7
mux8-1 | 130.6 | 211.9 | 59 | 64 | 88 | 62.29 | 37.5
o64 | 3008.3 | 3467.3 | 327 | 361 | 384 | 15.26 | 6.4
parity | 346.3 | 636 | 149 | 169 | 196 | 83.64 | 16
rd53 | 425.2 | 659.5 | 130 | 144 | 192 | 55.09 | 33.3
rd73 | 1619.6 | 2666.1 | 387 | 420 | 500 | 64.61 | 19
rd84 | 1730.5 | 2588.6 | 394 | 441 | 480 | 49.59 | 8.8
sao2 | 1957.8 | 2913 | 384 | 420 | 500 | 48.79 | 19
squar5 | 648.9 | 835.2 | 156 | 169 | 192 | 28.71 | 13.6
t481 | 166.1 | 386.9 | 76 | 81 | 91 | 132.88 | 12.3
table3 | 69834.1 | 87642.5 | 4388 | 4900 | 4580 | 25.5 | -6.5
Z5xp1 | 2521 | 3925.1 | 485 | 529 | 558 | 55.69 | 5.5
Z9sym | 1276 | 1892.7 | 302 | 342 | 360 | 48.33 | 5.3

Figure 21 Test set-up    Figure 22 Set-up to compare results
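HPWL, the metric compared in Table 4, sums for every net half the perimeter of the bounding box of the net's pins. The sketch below shows the standard computation; the pin coordinates are illustrative only and are not taken from the benchmarks.

```python
# Half-Perimeter Wire Length: for each net, (max_x - min_x) + (max_y - min_y)
# over that net's pin positions, summed over all nets.

def hpwl(nets):
    """nets: list of nets, each a list of (x, y) pin positions."""
    total = 0.0
    for pins in nets:
        xs = [x for x, _ in pins]
        ys = [y for _, y in pins]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Two illustrative nets: a 2-pin net and a 3-pin net.
print(hpwl([[(0, 0), (3, 2)], [(1, 1), (1, 4), (2, 1)]]))   # -> 9.0
```

Because aligned, logically dependent cells sit close together, their nets have small bounding boxes, which is exactly why signal-flow placement reduces this metric.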
X. CONCLUSIONS AND FUTURE SCOPE

The new algorithms place circuits along the signal flow as per the assumptions made during synthesis. The study conducted investigates the reasons for the failure of placement tools to achieve the timings given by the synthesizer. It showed that certain assumptions made by the synthesizer can be implemented, while some assumptions can never be implemented. Those which can be implemented are tried in our new placement algorithms. One problem encountered during implementation of the algorithms was that the new placer produced cones, which are area-inefficient. This problem is circumvented to some extent by controlling the aspect ratio, using non-critical cell placement to convert the cone into a rectangle. The new placer uses knowledge of the delay information during construction of the solution, which is useful for effectively controlling the aspect ratio of the placement solution. The improvements obtained in delay are being communicated in a separate paper.

ACKNOWLEDGEMENTS

We thank Dr. K.D. Nayak, who permitted and guided this work to be carried out in ANURAG. We also thank the members of ANURAG who reviewed the manuscript. Thanks are due to Mrs. D. Manikyamma and Mr. D. Madhusudhan Reddy for the preparation of the manuscript.

REFERENCES

[1] Kurt Keutzer, et al., (1997), "The future of logic synthesis and physical design in deep-submicron process geometries", ISPD '97 Proceedings of the International Symposium on Physical Design, ACM New York, NY, USA, pp 218-224.
[2] Randal E. Bryant, et al., (2001), "Limitations and Challenges of Computer-Aided Design Technology for CMOS VLSI", Proceedings of the IEEE, Vol. 89, No. 3, pp 341-65.
[3] Coudert, O, (2002), "Timing and design closure in physical design flows", Proceedings.
International Symposium on Quality Electronic Design (ISQED ’02), pp 511 – 516.[4] Gosti, W., et al., (2001), “Addressing the Timing Closure Problem by Integrating Logic Optimization and Placement”, ICCAD 2001 Proceedings of the 2001 IEEE/ACM International Conference on Computer-aided design, San Jose, California , pp 224-231.[5] Wilsin Gosti , et al., (1998), “Wireplanning in logic synthesis”, Proceedings of the IEEE/ACM international conference on Computer-aided design, San Jose, California, USA, pp 26-33[6] Yifang Liu, et al., (2011), “Simultaneous Technology Mapping and Placement for Delay Minimization”, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Vol. 30 No. 3, pp 416–426.[7] Pedram, M. & Bhat, N, (1991), “Layout driven technology mapping”, 28th ACM/IEEE Design Automation Conference, pp 99 – 105.[8] Salek, A.H., et al., (1999), “An Integrated Logical and Physical Design Flow for Deep Submicron Circuits”, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Vol. 18,No. 9, pp 1305–1315.[9] Naveed A. Sherwani, (1995), “Algorithms for VLSI Physical Design Automation”, Kluwer Academic Publishers, Norwell, MA, USA.[10] Sarrafzadeh, M., & Wong, C.K., (1996), “An introduction to VLSI Physical Design”, The McGraw-Hill Companies, New York.[11] Shahookar K & Mazumder P, (1991), “VLSI cell placement techniques” ACM Computing Surveys, Vol. 23, No. 2.[12] Jason Cong, et al., (2005), “Large scale Circuit Placement”, ACM Transactions on Design Automation of Electronic Systems, Vol. 10, No. 2, pp 389-430.[13] Yih-Chih Chou & Young-Long Lin, (2001), “Performance-Driven Placement of Multi-Million-Gate Circuits”, ASICON 2001 Proceedings of 4th International Conference on ASIC, Shanghai, China, pp 1- 11.[14] Andrew B. 
Kahng & Qinke Wang, (2004), “An analytic placer for mixed-size placement and timing- driven placement”, Proceedings of International Conference on Computer Aided Design, pp 565-572.[15] Jun Cheng Chi, et al., (2003), “A New Timing Driven Standard Cell Placement Algorithm”, Proceedings of International Symposium on VLSI Technology, Systems and Applications, pp 184-187.[16] Swartz, W., & Sechen, C., (1995), “Timing Driven Placement for Large Standard Cell Circuits”, Proc. ACM/IEEE Design Automation Conference, pp 211-215. 106 Vol. 1, Issue 5, pp. 96-108
    • International Journal of Advances in Engineering & Technology, Nov 2011.©IJAET ISSN: 2231-1963[17] Tao Luo, et al., (2006), “A New LP Based Incremental Timing Driven Placement for High Performance Designs”, DAC ‘06 Proceedings of the 43rd Annual Design Automation Conference, ACM New York, NY, USA, pp 1115-1120.[18] Carl Sechen & Alberto Sangiovanni-Vincentelli, (1985), “The TimberWolf Placement and Routing Package”, IEEE Journal of Solid-State Circuits, vol. SC-20, No. 2, pp 510-522.[19] Wern-Jieh Sun & Carl Sechen, (1995), “Efficient and effective placement for very large circuits”,. IEEE Transactions on CAD of Integrated Circuits and Systems, Vol. 14 No. 3, pp 349-359[20] C. J. Alpert, et al., (1997), "Quadratic Placement Revisited", 34th ACM/IEEE Design Automation Conference, Anaheim, pp 752-757[21] Rexford D. Newbould & Jo Dale Carothers , (2003), “Cluster growth revisited: fast, mixed-signal placement of blocks and gates”, Southwest Symposium on Mixed Signal Design, pp 243 - 248[22] Andrew Kennings & Kristofer P. Vorwerk, (2006), “Force-Directed Methods for Generic Placement”, IEEE Transactions on Computer Aided Design of Integrated Circuits and Systems, Vol. 25, N0. 10, pp 2076-2087.[23] Yu-Wen Tsay, et al., (1993), “A Cell Placement Procedure that Utilizes Circuit Structural Properties”, Proceedings of the European Conference on Design Automation, pp 189-193.[24] Chanseok Hwang & Massoud Pedram, (2006), “Timing-Driven Placement Based on Monotone Cell Ordering Constraints”, Proceedings of the 2006 Conference on Asia South Pacific Design Automation: ASP-DAC 2006, Yokohama, Japan, pp 201-206.[25] Brayton R K, et al., (1990), “Multilevel Logic Synthesis”, Proceedings of the IEEE, Vol. 78, No. 
2, pp- 264-300.[26] Brayton R K, et al.,(1987), “MIS: A Multiple-Level Logic Optimization System”, IEEE Transactions on Computer Aided Design, Vol.6, No.6, pp-1062-1081.[27] Rajeev Murgai, et al.,(1995), “Decomposition of logic functions for minimum transition activity”, EDTC 95 Proceedings of the European conference on Design and Test, pp 404-410.[28] Cong, J. & Xu, D, (1995), “ Exploiting signal flow and logic dependency in standard cell placement”, Proceedings of the Asian and South Pacific Design Automation Conference, pp 399 – 404.[29] Fujita, M. & Murgai, R, (1997), “Delay estimation and optimization of logic circuits: a survey”, Proceedings of Asia and South Pacific Design Automation Conference, Chiba,Japan, pp 25 – 30.[30] Andrew Caldwell, et al., (1999), “Generic Hypergraph Formats, rev. 1.1”, from http://vlsicad.ucsd.edu/GSRC/bookshelf/Slots/Fundamental/HGraph/HGraph1.1.html.[31] Saurabh Adya & Igor Markov, (2005), “Executable Placement Utilities” from http://vlsicad.eecs.umich.edu/BK/PlaceUtils/bin.[32] Saurabh N. Adya, et al., (2003), "Benchmarking For Large-scale Placement and Beyond", International Symposium on Physical Design (ISPD), Monterey, CA, pp. 95-103.[33] Saurabh Adya and Igor Markov, (2003), “On Whitespace and Stability in Mixed-size Placement and Physical Synthesis”, International Conference on Computer Aided Design (ICCAD), San Jose, pp 311- 318.[34] Saurabh Adya and Igor Markov, (2002), "Consistent Placement of Macro-Blocks using Floorplanning and Standard-Cell Placement", International Symposium of Physical Design (ISPD), San Diego, pp.12- 17.[35] Sentovich, E.M., et al., (1992), “SIS: A System for Sequential Circuit Synthesis”, Memorandum No. UCB/ERL M92/41, Electronics Research Laboratory, University of California, Berkeley, CA 94720.[36] Jason Cong, et al, (2007), “UCLA Optimality Study Project”, from http://cadlab.cs.ucla.edu/~pubbench/.[37] C. Chang, J. 
Cong, et al., (2004), "Optimality and Scalability Study of Existing Placement Algorithms", IEEE Transactions on Computer-Aided Design, Vol.23, No.4, pp.537 – 549.AUTHORSK. Santeppa obtained B.Tech. in Electronics and Communication engineering fromJ N T U and M Sc (Engg) in Computer Science and Automation (CSA) from IndianInstitute of Science, Bangalore. He worked in Vikram Sarabhai Space Centre, Trivandrumfrom 1982 to 1988 in the field of microprocessor based real-time computer design. From1988 onwards, he has been working in the field of VLSI design at ANURAG, Hyderabad.He received DRDO Technology Award in 1996, National Science Day Award in 2001 and“Scientist of the Year Award" in 2002. He is a Fellow of IETE and a Member of IMAPSand ASI. A patent has been granted to him for the invention of a floating point processor device for high speedfloating point arithmetic operations in April 2002. 107 Vol. 1, Issue 5, pp. 96-108
    • International Journal of Advances in Engineering & Technology, Nov 2011.©IJAET ISSN: 2231-1963K.S.R. Krishna Prasad received B.Sc degree from Andhra University, DMIT in electronicsfrom MIT, M.Tech. in Electronics and Instrumentation from Regional Engineering College,Warangal and PhD from Indian Institute of Technology, Bombay. He is currently working asProfessor at Electronics and Communication Engineering Department, National Institute ofTechnology, Warangal. His research interests include analog and mixed signal IC design,biomedical signal processing and image processing. 108 Vol. 1, Issue 5, pp. 96-108
International Journal of Advances in Engineering & Technology, Nov 2011. ©IJAET ISSN: 2231-1963

FUNCTIONAL COVERAGE ANALYSIS OF OVM BASED VERIFICATION OF H.264 CAVLD SLICE HEADER DECODER

Akhilesh Kumar and Chandan Kumar
Department of E&C Engineering, NIT Jamshedpur, Jharkhand, India

ABSTRACT

Commercial chip design verification is a complex activity involving many abstraction levels (such as architectural, register transfer, gate, switch, circuit, fabrication), many different aspects of design (such as timing, speed, functional, power, reliability and manufacturability) and many different design styles (such as ASIC, full custom, semi-custom, memory, cores, and asynchronous). In this paper, a functional coverage analysis of the verification of the RTL (Register Transfer Level) design of an H.264 CAVLD (context-based adaptive variable length decoding) slice header decoder, using the SystemVerilog implementation of OVM (open verification methodology), is presented. The methodology used for verification is OVM, which has gathered very positive press coverage, including awards from magazines and industry organizations. There is no doubt that the OVM is one of the biggest stories in recent EDA (electronic design automation) history. The SystemVerilog language is at the heart of the OVM; it inherited features from Verilog HDL, VHDL, C and C++, and was adopted by the IEEE as a hardware description and verification language in 2005. The verification environment developed in OVM provides multiple levels of reuse, both within projects and between projects. Emphasis is put on the actual usage of the verification components and functional coverage. The whole verification is done using the SystemVerilog hardware description and verification language.
We are using QuestaSim 6.6b for simulation.

KEYWORDS: Functional coverage analysis, RTL (Register Transfer Level) design, CAVLD (context-based adaptive variable length decoding), slice header decoder, OVM (open verification methodology), SystemVerilog, EDA (electronic design automation).

I. INTRODUCTION

Verification is a process which proceeds in parallel with the design creation process. The goal of verification is not only finding bugs, but proving or disproving the correctness of a system with respect to strict specifications regarding the system [2]. Design verification is an essential step in the development of any product. Today, designs can no longer be sufficiently verified by ad-hoc testing and monitoring methodologies. More and more designs incorporate complex functionalities, employ reusable design components, and fully utilize the multi-million gate counts offered by chip vendors. To test these complex systems, too much time is spent constructing tests as design deficiencies are discovered, requiring test benches to be rewritten or modified, as the previous test bench code did not address the newly discovered complexity. This process of working through the bugs causes defects in the test benches themselves. Such difficulties occur because there is no effective way of specifying what is to be exercised and verified against the intended functionality [11]. Verification of an RTL design using the SystemVerilog implementation of OVM dramatically improves the efficiency of verifying correct behavior, detecting bugs and fixing bugs throughout the design process. It raises the level of verification from the RTL and signal level to a level where users can develop tests and debug their designs closer to the design specifications. It encompasses and facilitates abstractions such as transactions and properties. Consequently, design functions are exercised efficiently (with the minimum required time) and monitored effectively by detecting hard-to-find bugs [7]. This technique addresses the current needs of reducing manpower and time, and the anticipated complications of designing and verifying complex systems in the future.

1.1. Importance of Verification

• When a designer verifies her/his own design, she/he is verifying her/his own interpretation of the design, not the specification.
• Verification consumes 50% to 70% of the effort of the design cycle.
• There are twice as many verification engineers as RTL designers.
• Finding a bug in the customer's environment can cost hundreds of millions.

1.2. Cost of the Bugs

Bugs found early in the design have little cost. Finding a bug at the chip/system level has moderate cost: a bug at this level requires more debug and isolation time, and could require a new algorithm, which could affect the schedule and cause board rework. Finding a bug in system test (on the test floor) requires a re-spin of the chip. Finding a bug after customer delivery costs millions.

Figure 1. Cost of bugs over time.

II. SLICE HEADER

2.1. THE H.264 SYNTAX

H.264 provides a clearly defined format or syntax for representing compressed video and related information [1]. Fig. 2 shows an overview of the structure of the H.264 syntax. At the top level, an H.264 sequence consists of a series of 'packets' or Network Adaptation Layer Units (NAL Units or NALUs). These can include parameter sets containing key parameters that are used by the decoder to correctly decode the video data, and slices, which are coded video frames or parts of video frames.
Figure 2. H.264 Syntax [1]

2.2. SLICE
A slice represents all or part of a coded video frame and consists of a number of coded macroblocks, each containing compressed data corresponding to a 16 × 16 block of displayed pixels in a video frame.

2.3. SLICE HEADER
The slice header is supplemental data placed at the beginning of a slice.

III. SLICE HEADER DECODER
An H.264 video decoder carries out the complementary processes of decoding, inverse transform and reconstruction to produce a decoded video sequence [1]. The slice header decoder is a part of the H.264 video decoder. The slice header decoder module takes its input bit stream from the bit stream parser module.
Figure 3. H.264 video coding and decoding process [1]

The slice header decoder module parses the slice header RBSP (raw byte sequence payload) bit stream to generate syntax elements such as first MB in slice, slice type, etc. The module sends the decoded syntax elements to the controller.

IV. CAVLD
Context-adaptive variable-length decoding (CAVLD) is a form of entropy decoding used in H.264/MPEG-4 AVC video decoding. Like almost all entropy decoders, it is an inherently lossless decompression technique.

V. SYSTEMVERILOG
SystemVerilog is a combined Hardware Description Language (HDL) and Hardware Verification Language (HVL) based on extensions to Verilog HDL. SystemVerilog became an official IEEE standard in 2005 and is an extension of IEEE Verilog 2001. It has features inherited from Verilog HDL, VHDL, C and C++. One of the most important features of SystemVerilog is that it is an object-oriented language [4]. SystemVerilog is rapidly being accepted as the next-generation HDL for system design, verification and synthesis. As a single unified design and verification language, SystemVerilog has garnered tremendous industry interest and support [9].

VI. OVM (OPEN VERIFICATION METHODOLOGY)
The Open Verification Methodology (OVM) is a documented methodology with a supporting building-block library for the verification of semiconductor chip designs [8]. The OVM was announced in 2007 by Cadence Design Systems and Mentor Graphics as a joint effort to provide a common methodology for SystemVerilog verification. After several months of extensive validation by early users and partners, the OVM is now available to everyone. The term "everyone" means just that: everyone, even EDA competitors, can go to the OVM World site and download the library, documentation, and usage examples for the methodology [7]. OVM provides the best framework to achieve coverage-driven verification (CDV).
CDV combines automatic test generation, self-checking testbenches, and coverage metrics to significantly reduce the time spent verifying a design [2]. The purpose of CDV is to:
 eliminate the effort and time spent creating hundreds of tests;
 ensure thorough verification using up-front goal setting;
 receive early error notifications and deploy run-time checking and error analysis to simplify debugging.

VII. OVM TESTBENCH
A testbench is a virtual environment used to verify the correctness of a design. The OVM testbench is composed of reusable verification environments called OVM verification components (OVCs). An OVC is an encapsulated, ready-to-use, configurable verification environment for an interface protocol, a design submodule, or a full system. Each OVC follows a consistent architecture and consists of a complete set of elements for stimulating, checking, and collecting coverage information for a specific protocol or design.

Fig 4. Testbench [2].

VIII. DEVELOPMENT OF OVM VERIFICATION COMPONENTS

SystemVerilog OVM Class Library:

Figure 5. OVM Class Library [3]

The SystemVerilog OVM Class Library provides all the building blocks needed to quickly develop well-constructed, reusable verification components and test environments [3]. The library consists of base classes, utilities, and macros; different verification components are developed by deriving from these base classes, utilities, and macros. The OVM class library supports the creation of sequential constrained-random stimulus, helps collect and analyze the resulting functional coverage information, and allows assertions to be included as members of the configurable testbench environments. The OVM verification components (OVCs) written in SystemVerilog code are structured as follows [4]:
— Interface to the design under test
— Design under test (or DUT)
— Verification environment (or testbench)
— Transaction (data item)
— Sequencer (stimulus generator)
— Driver
— Top level of verification environment
— Instantiation of sequencer
— Instantiation of driver
— Response checking
— Monitor
— Scoreboard
— Top-level module
— Instantiation of interface
— Instantiation of design under test
— Test, which instantiates the verification environment
— Process to run the test

Interface: An interface is a bundle of wires used for communication between the DUT (design under test) and the verification environment (testbench). The clock can be part of the interface or a separate port [2].

Figure 6. Interface [2]

Here, all the slice header decoder signals are declared with their correct data types, and a modport is defined showing the connections with respect to the verification environment.

Design under test (DUT): The DUT completely describes the working model of the slice header decoder, written in a hardware description language, which has to be tested and verified.

Transaction (data item): Data items represent the input to the DUT. The sequencer creates random transactions, which are retrieved by the driver and used to stimulate the pins of the DUT. Since a sequencer is used, the transaction class is derived from the ovm_sequence_item class, which is a subclass of ovm_transaction. By intelligently randomizing data item fields using SystemVerilog constraints, one can create a large number of meaningful tests and maximize coverage.

Sequencer: A sequencer is an advanced stimulus generator that controls the items provided to the driver for execution. By default, a sequencer behaves like a simple stimulus generator and returns a random data item upon request from the driver. It allows constraints to be added to the data item class in order to control the distribution of randomized values.

Driver: The DUT's inputs are driven by the driver, which runs single commands such as a bus read or write.
A typical driver repeatedly receives a data item and drives it to the DUT by sampling and driving the DUT signals.

Monitor: The DUT's outputs drive the monitor, which takes signal transitions and groups them together into commands. A monitor is a passive entity that samples DUT signals but does not drive them. Monitors collect coverage information and perform checking.

Agent: An agent encapsulates a driver, sequencer, and monitor. Agents can emulate and verify DUT devices. OVCs can contain more than one agent. Some agents (for example, master or transmit agents) initiate transactions to the DUT, while other agents (slave or receive agents) react to transaction requests.

Scoreboard: The scoreboard is a crucial element of a self-checking environment. Typically, a scoreboard verifies that the design has operated properly at a functional level.

Environment: The environment (env) is the top-level component of the OVC. The environment class (ovm_env) is architected to provide a flexible, reusable, and extendable verification component. The main function of the environment class is to model behaviour by generating constrained-random traffic, monitoring DUT responses, checking the validity of the protocol activity, and collecting coverage.
Test: The test configures the verification environment to apply a specific stimulus to the DUT. It creates an instance of the environment and invokes it.

Top-level module: A single top-level module connects the DUT with the verification environment through the interface instance. Global clock pulses are generated here. run_test is used to run the verification process; global_stop_request is used to stop the verification process after a specified period of time, a number of iterations, or a threshold value of coverage.

IX. FUNCTIONAL COVERAGE ANALYSIS

9.1. Coverage
As designs become more complex, the only effective way to verify them thoroughly is with constrained-random testing (CRT). This approach avoids the tedium of writing individual directed tests, one for each feature in the design. If the testbench is taking a random walk through the space of all design states, one can gauge the progress using coverage. Coverage is a generic term for measuring progress towards complete design verification. The coverage tools gather information during a simulation and then post-process it to produce a coverage report. One can use this report to look for coverage holes and then modify existing tests or create new ones to fill the holes. This iterative process continues until the desired coverage level is reached.

Figure 7. Coverage convergence [2]

9.2. Functional Coverage
Functional coverage is a measure of which design features have been exercised by the tests. Functional coverage is tied to the design intent and is sometimes called "specification coverage". One can run the same random testbench over and over, simply by changing the random seed, to generate new stimulus. Each individual simulation generates a database of functional coverage information.
By merging all this information together, overall progress can be measured using functional coverage. Functional coverage information is only valid for a successful simulation: when a simulation fails because of a design bug, the coverage information must be discarded. The coverage data measures how many items in the verification plan are complete, and this plan is based on the design specification; if the design does not match the specification, the coverage data is useless. Reaching for 100% functional coverage forces one to think more about what to observe and how to direct the design into those states.

9.3. Cover Points
A cover point records the observed values of a single variable or expression.

9.4. BINS
Bins are the basic units of measurement for functional coverage. When one specifies a variable or expression in a cover point, SystemVerilog creates a number of "bins" to record how many times each value has been seen. If a variable is 3 bits wide, the maximum number of bins created by SystemVerilog is eight.

9.5. Cover Group
Multiple cover points that are sampled at the same time (such as when a transaction completes) are placed together in a cover group.

X. SIMULATION RESULT OF OVM-BASED VERIFICATION OF THE H.264 CAVLD SLICE HEADER DECODER
We use QuestaSim 6.6b for simulation. The sequencer produces sequences of data (transactions) which are sent to the DUT through the driver, which converts the transactions into pin-level activity. The monitor keeps track of the exercising of the DUT and its response, and gives a record of the coverage of the DUT for the test performed. Figure 8 shows the simulation result of coverage with cover groups. The total number of cover groups in the verification of the slice header decoder is thirty. Inside a cover group there are a number of cover points, and inside a cover point a number of bins. We consider the cover group CV_CAVLD_SH_09.

Figure 8. Simulation result of coverage    Figure 9. Simulation result of coverage with cover points and bins

Figure 9 shows the cover point (FI_SH_09) and the bins inside the cover group CV_CAVLD_SH_09. The whole coverage report is very large and cannot be included in this paper.
We include only the part of the coverage report related to the cover group CV_CAVLD_SH_09.

Coverage report:

COVERGROUP COVERAGE:
------------------------------------------------------------------------------
Covergroup                              Metric    Goal/      Status
                                                  At Least
------------------------------------------------------------------------------
TYPE /CV_CAVLD_SH_09                    100.0%    100        Covered
  Coverpoint CV_CAVLD_SH_09::FI_SH_09   100.0%    100        Covered
    covered/total bins:                 3         3
    missing/total bins:                 0         3
    bin pic_order_cnt_lsb_min           263182    1          Covered
    bin pic_order_cnt_lsb_max           3811      1          Covered
    bin pic_order_cnt_lsb_between       36253     1          Covered

The number (Metric) in front of each bin represents the number of hits of that bin during simulation.
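As an aside, the bin-hit tallying behind a report like this can be modeled in a few lines of Python. This is a hedged, language-neutral sketch only: the bin names follow the report above, but the value range (MAX_LSB) and the sample values are invented for illustration, and the real counting is performed by the simulator's covergroup engine, not by a script like this.

```python
from collections import Counter

BINS = ("pic_order_cnt_lsb_min", "pic_order_cnt_lsb_max",
        "pic_order_cnt_lsb_between")
MAX_LSB = 255  # assumed maximum of pic_order_cnt_lsb for this sketch


def bin_of(value):
    # Map a sampled value to one of the three bins named in the report.
    if value == 0:
        return "pic_order_cnt_lsb_min"
    if value == MAX_LSB:
        return "pic_order_cnt_lsb_max"
    return "pic_order_cnt_lsb_between"


def coverage(samples, at_least=1):
    # A bin counts as "Covered" once its hit count reaches the "At Least" goal;
    # the coverage metric is the fraction of covered bins.
    hits = Counter(bin_of(v) for v in samples)
    covered = sum(1 for b in BINS if hits[b] >= at_least)
    return hits, 100.0 * covered / len(BINS)


hits, metric = coverage([0, 0, 255, 7, 42])
print(metric)  # every bin hit at least once -> 100.0
```

With these toy samples all three bins are hit at least once, so the metric is 100.0%, mirroring how the covergroup above reaches "Covered" status.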
XI. CONCLUSION
We presented the verification of an H.264 CAVLD slice header decoder using the SystemVerilog implementation of OVM. We analyzed the functional coverage with cover groups, cover points, and bins, and achieved 100 percent functional coverage for the slice header decoder module. Since coverage is 100%, the RTL design meets the desired specifications of the slice header decoder.

REFERENCES
[1] I. E. Richardson, The H.264 Advanced Video Compression Standard, Second Edition.
[2] C. Spear, SystemVerilog for Verification: A Guide to Learning the Testbench Language Features, Springer, 2006.
[3] OVM User Guide, Version 2.1.1, March 2010.
[4] http://www.doulos.com/knowhow.
[5] http://www.ovmworld.org.
[6] H.264: International Telecommunication Union, Recommendation ITU-T H.264: Advanced Video Coding for Generic Audiovisual Services, ITU-T, 2003.
[7] T. L. Anderson, Open Verification Methodology: Fulfilling the Promise of SystemVerilog, Product Marketing Director, Cadence Design Systems, Inc.
[8] O. Cadenas and E. Todorovich, "Experiences applying OVM 2.0 to an 8B/10B RTL design," IEEE 5th Southern Conference on Programmable Logic, 2009, pp. 1-8.
[9] P. D. Mulani, "SoC Level Verification Using SystemVerilog," IEEE 2nd International Conference on Emerging Trends in Engineering and Technology (ICETET), 2009, pp. 378-380.
[10] G. Gennari, D. Bagni, A. Borneo and L. Pezzoni, "Slice header reconstruction for H.264/AVC robust decoders," IEEE 7th Workshop on Multimedia Signal Processing, 2005, pp. 1-4.
[11] C. Pixley et al., "Commercial design verification: methodology and tools," IEEE International Conference on Test Proceedings, 1996, pp. 839-848.

Authors

Akhilesh Kumar received his B.Tech degree from Bhagalpur University, Bihar, India in 1986 and his M.Tech degree from Ranchi, Bihar, India in 1993. He has been working in the teaching and research profession since 1989.
He is now the H.O.D. of the Department of Electronics and Communication Engineering at N.I.T. Jamshedpur, Jharkhand, India. His field of research interest is digital circuit design.

Chandan Kumar received his B.E. degree from Visvesvaraya Technological University, Belgaum, Karnataka, India in 2009. He is currently pursuing his M.Tech project work under the guidance of Prof. Akhilesh Kumar in the Department of Electronics & Communication Engineering, N.I.T. Jamshedpur. His fields of interest are ASIC design and verification, and image processing.
COMPARISON BETWEEN GRAPH BASED DOCUMENT SUMMARIZATION METHOD AND CLUSTERING METHOD

Prashant D. Joshi1, S. G. Joshi2, M. S. Bewoor3, S. H. Patil4
1, 3, 4 Department of Computer Engineering, Bharati Vidyapeeth University, CoE, Pune, India
2 Department of Computer Engineering, A.I.S.S.M.S. CoE, Pune, India

ABSTRACT
Document summarization and clustering are two techniques that can be used to access text files on a computer within a short period of time. In the graph-based document summarization method, a document graph of each text file is generated. To create the document graph, each paragraph is treated as an individual node, and node scores and edge scores are calculated using mathematical formulas. An input query is applied to the document and, accordingly, a summary of the text file is generated. The ROCK clustering algorithm can also be used for summarization: here each paragraph is considered an individual cluster, the link score between two paragraphs is calculated, and on that basis two clusters are merged. The input query is applied to the merged clusters as well as the individual clusters, and a summary is generated accordingly. Taking various results into consideration, we conclude that the ROCK algorithm requires less time than the graph-based method for document summarization. The ROCK clustering algorithm can be used on a standalone machine, a LAN, or the Internet for retrieving text documents with a small retrieval time.

KEYWORDS: Input Query, Document summarization, Document Graph, Clustering, Link, Robust Hierarchical Clustering Algorithm

I. INTRODUCTION
Today every human with basic computer knowledge is connected with the world through the Internet. The WWW provides features like communication, chatting, and information retrieval. A huge amount of data is available on a large number of servers in the form of files such as text files and document files.
Text summarization is the process of identifying the most salient information in a document or text file. Previously, query summarization was done through the bag-of-words (BOW) approach, in which both the query and the sentences were represented as word vectors. This approach has the drawback that it merely considers lexical elements (words) in the documents and ignores semantic relations among sentences [6].

The graph method is very important in document summarization, as it provides an effective way to study local, system-level properties at a component level. The following examples show the importance of graphs. In biological networks, a protein interaction network is represented by a graph with the proteins as vertices; an edge exists between two vertices if the proteins are known to interact based on two-hybrid analysis and other biological experiments [3]. In a stock market graph, vertices represent stocks, and an edge between two vertices exists if they are positively correlated over some threshold value based on the calculations [3]. In Internet applications, an Internet graph has vertices representing IP addresses, while a web graph has vertices representing websites [3]. In this paper we compare the ROCK clustering algorithm with a graph-based document summarization algorithm for generating a summary from a text file.

Even though there is an increasing interest in the use of clustering methods in pattern recognition [Anderberg 1973], image processing [Jain and Flynn 1996] and information retrieval [Rasmussen 1992; Salton 1991], clustering has a rich history in other disciplines [Jain and Dubes 1988] such as biology, psychiatry, psychology, archaeology, geology, geography, and marketing [4].

118 Vol. 1, Issue 5, pp. 118-125
Currently, clustering algorithms can be categorized into partition-based, hierarchical, density-based, grid-based and model-based [7]. In clustering, related documents should contain the same or similar terms, so one can expect a good document cluster to contain a large number of matching terms. In reality, when a document cluster is large, there is no single term that occurs in all the documents of the cluster; in contrast, when a cluster is small, one can expect certain terms to occur in all of its documents [8]. Clustering and data summarization are two techniques present in data mining. Data mining is the notion of all methods and techniques which allow the analysis of very large data sets to extract and discover previously unknown structures and relations out of such huge heaps of detail. This information is filtered, prepared and classified so that it is a valuable aid for decisions and strategies [5].

II. RELATED WORK FOR THE DOCUMENT GRAPH METHOD

2.1 Document Summarization
Query-oriented summarization is primarily concerned with synthesizing an informative and well-organized summary from a collection of text documents by applying an input query. Today, most successful multi-document summarization systems follow the extractive summarization framework. These systems first rank all the sentences in the original document set and then select the most salient sentences to compose summaries for a good coverage of the concepts. For the purpose of creating more concise and fluent summaries, some intensive post-processing approaches are also applied to the extracted sentences. Denote the input query as q and the collection of documents as D. The goal of query summarization is to generate a summary which best meets the information needs expressed by q. To do this, a query summarization system generally takes two steps: first, the stop words are removed from the documents as well as from the input query;
second, sentences are selected until the length of the summary is reached.

To build the document graph, the node weights and edge weights must be known. Nodes are nothing but the paragraphs. Node weights are calculated after applying an input query; the following formula is used to calculate the node score [1]:

Score(v) = Σ_{w ∈ q ∩ v} [ ln((N − df + 0.5)/(df + 0.5)) · ((k1 + 1)·tf)/(k1·((1 − b) + b·dl/avdl) + tf) · ((k3 + 1)·qtf)/(k3 + qtf) ]   ...(1) [1]

where
 N is the total number of text files present on the system,
 df is the number of text files that contain the input term,
 tf is the total count of the input keyword in the text file,
 qtf is the number of times the keyword occurs in the input query,
 k1, b, k3 are constants; here k1 = 1, b = 0.5, k3 = 2,
 dl is the total text file length, and
 avdl is the average document length, assumed to be 120.

2.2 Problem Definition for Document Summarization using the Graph-Based Algorithm
Let there be n documents d1, d2, ..., dn. The size of a document, size(di), is its total number of words. The term frequency tf(d, w) is the number of occurrences of word w in document d. The inverse document frequency idf(w) is the inverse of the number of documents containing word w among all documents. A keyword query is a set of words, Q(w1, w2, ..., wn). The document graph G(V, E) of a document d is defined as follows:
• d is split into a set of non-overlapping text fragments t(v), each corresponding to a node v ∈ V.
• An edge e(u, v) ∈ E is added between nodes u, v if there is an association between t(u) and t(v) in d.

Two nodes are connected by an edge whose weight is calculated by the following formula, where t(u) is the first paragraph and t(v) is the second. In this way, the edge weights between all paragraphs are calculated and stored in the database. size(t(u)) is the number of keywords in the first paragraph and size(t(v)) the number in the second. Edge weights can be calculated before applying the input query because the text files are already present on the system [1].
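For concreteness, the node scoring of formula (1) can be sketched in Python. This is a best-effort reading of the Okapi-style weighting implied by the constants above (k1 = 1, b = 0.5, k3 = 2, avdl = 120); the exact form used in [1] may differ in detail, and the word counts in the usage line are invented toy numbers.

```python
import math

K1, B, K3, AVDL = 1.0, 0.5, 2.0, 120.0  # constants from the text


def node_score(query_tf, node_tf, df, n_docs, dl):
    """Okapi-style node score (a sketch of formula (1)).

    query_tf: word -> occurrences in the query (qtf)
    node_tf:  word -> occurrences in the text file/node (tf)
    df:       word -> number of files containing the word
    n_docs:   total number of text files (N); dl: file length in words
    """
    score = 0.0
    for w, qtf in query_tf.items():
        tf = node_tf.get(w, 0)
        if tf == 0:
            continue  # word from the query absent from this node
        idf = math.log((n_docs - df[w] + 0.5) / (df[w] + 0.5))
        k = K1 * ((1 - B) + B * dl / AVDL)  # length normalization
        score += (idf
                  * ((K1 + 1) * tf / (k + tf))
                  * ((K3 + 1) * qtf / (K3 + qtf)))
    return score


# Invented toy numbers: "chip" appears twice in a 100-word node,
# in 5 of 57 files on the system.
s = node_score({"chip": 1}, {"chip": 2}, {"chip": 5}, 57, 100)
```

The score grows with the term frequency in the node and shrinks for words that occur in many files, which matches the intent described above.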
EScore(e) = [ Σ_{w ∈ t(u) ∩ t(v)} tf(t(u), w) · tf(t(v), w) · idf(w) ] / (size(t(u)) + size(t(v)))   ...(2) [1]

Here w ∈ t(u) ∩ t(v) means that the word is present in both paragraphs, and the common keyword count is accumulated through w. In this fashion the edge scores of all text files are calculated and stored permanently in the database. When a new file is added, this module is run by the administrator to store the new edge weights in the database.

The summary module uses the concept of a spanning tree on the document graph, because multiple nodes may contain the input query, so which nodes should be selected? Different combinations from the graph are identified and a tree score is generated using the following formula:

Score(T) = a·Σ_{v ∈ T} NScore(v) + b·Σ_{e ∈ T} EScore(e)   ...(3) [1]

Equation (3) calculates the spanning tree score in the document graph [1]. From the spanning tree table, the spanning tree with the minimum score is chosen and that paragraph is displayed as the summary.

III. CLUSTERING
Clustering can be considered the most important unsupervised learning problem. Various techniques can be applied for forming the groups. A loose definition of clustering could be "the process of organizing objects into groups whose members are similar with respect to some property". A common similarity criterion is distance: two or more objects belong to the same cluster if they are "close" according to a given distance (in this case geometrical distance). This is called distance-based clustering [4]. Another kind of clustering is conceptual clustering: two or more objects belong to the same cluster if this cluster defines a concept common to all those objects. In other words, objects are grouped according to their fit to descriptive concepts, not according to simple similarity measures.

3.1 Example
The clustering concept is commonly used in a library, which holds books on different subjects. These books are arranged in a proper structure to reduce the access time: books on operating systems, for instance, are kept on the operating systems shelf.
Each shelf is also assigned a number so that books can be managed efficiently. Likewise, books on all subjects are arranged in cluster form. Clustering algorithms can be applied in many fields, for example:
• City planning: houses are grouped by considering house type, value and geographical location.
• Earthquake studies: clustering is applied when observing danger zones.
• World Wide Web: clustering is applied for document classification and document summary generation.
• Marketing: to obtain details of customers who purchase similar things, from huge amounts of data.
• Biology: classification of plants and animals given their features.
• Libraries: organizing books in an efficient order to reduce the access delay.
• Insurance: identifying groups of motor insurance policy holders with a high average claim cost; identifying frauds [4].

Problem definition: Assume n is the number of text documents, each of size p paragraphs. Generate the summary from the text files when the input query q is applied. This paper follows the system architecture below for implementing text file summarization using the clustering as well as the graph-based method; Fig. 1.1 shows the system architecture.

IV. SYSTEM ARCHITECTURE
This system is developed in a network environment. Its main goal is to get the relevant text file from the server without going through all text files; the user's time is saved by just reading the summary of the text file relevant to the input query. Here the user's input query is compared with all text files, and
the text file most relevant to the input query is generated as output on the user's machine. The user can use the graphical summarization method or the clustering algorithm for generating the summary.

Fig 1.1 System architecture for the document summarization and clustering methods

V. ROCK ALGORITHM FOR CLUSTERING

procedure cluster(S, k)
begin
    link := compute_links(S)
    for each s ∈ S do
        q[s] := build_local_heap(link, s)
    Q := build_global_heap(S, q)
    while size(Q) > k do {
        u := extract_max(Q)
        v := max(q[u])
        delete(Q, v)
        w := merge(u, v)
        for each x ∈ q[u] ∪ q[v] do {
            link[x, w] := link[x, u] + link[x, v]
            delete(q[x], u); delete(q[x], v)
            insert(q[x], w, g(x, w)); insert(q[w], x, g(x, w))
            update(Q, x, q[x])
        }
        insert(Q, w, q[w])
        deallocate(q[u]); deallocate(q[v])
    }
end

5.1 For calculating the link score, the following algorithm is used.

procedure compute_links(S)
begin
    compute nbrlist[i] for every point i in S
    set link[i, j] to zero for all i, j
    for i := 1 to n do {
        N := nbrlist[i]
        for j := 1 to |N| − 1 do
            for l := j + 1 to |N| do
                link[N[j], N[l]] := link[N[j], N[l]] + 1
    }
end   [2]

The following example illustrates the clustering concept and how it is applied to a text file. Assume we have a "brainchip" text file containing four paragraphs:

1. Brain chip offers hope for paralyzed.
2. A team of neuroscientists have successfully implanted a chip into the brain of a quadriplegic man, allowing him to control a computer.
3. Since the insertion of the tiny device in June, the 25-year-old has been able to check email and play computer games simply using thoughts. He can also turn lights on and off and control a television, all while talking and moving his head.
4. The chip, called BrainGate, is being developed by Massachusetts-based neurotechnology company Cyberkinetics, following research undertaken at Brown University, Rhode Island.

When the ROCK algorithm is applied to this file, the following is done and the result is generated: count the number of paragraphs in the file; remove the stop words; assume each paragraph is an individual cluster. The file above contains 4 paragraphs, i.e. P1, P2, P3, P4. Starting with P1, compare P1 with all remaining paragraphs and find the value of the link. The link score is calculated by comparing the keywords of each paragraph, and the results of the link scores are stored in an array.
Table 1.1 Keywords of each individual paragraph

Keywords of C1: Brain, Chip, Offers, Hope, Paralyzed
Keywords of C2: Team, Neuroscientists, Successfully, Implanted, Chip, Brain, Quadriplegic, Man, Allowing, Control, Computer
Keywords of C3: Insertion, Tiny, Device, June, 25-year, Old, Check, Email, Play, Computer, Games, Simply, Thoughts, Turn, Lights, Control, Television, Talking, Moving, Head
Keywords of C4: Chip, BrainGate, Developed, Massachusetts-based, Neurotechnology, Company, Cyberkinetics, Research, Undertaken, Brown, University, Rhode, Island
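Using the keyword sets of Table 1.1 (case-folded), the pairwise link counts reported in Tables 1.2 to 1.4 can be reproduced with a short Python sketch. This illustrates only the common-keyword link measure used in the example, not the full neighbour-list-based compute_links procedure of ROCK.

```python
# Keyword sets from Table 1.1, lower-cased.
clusters = {
    "P1": {"brain", "chip", "offers", "hope", "paralyzed"},
    "P2": {"team", "neuroscientists", "successfully", "implanted", "chip",
           "brain", "quadriplegic", "man", "allowing", "control", "computer"},
    "P3": {"insertion", "tiny", "device", "june", "25-year", "old", "check",
           "email", "play", "computer", "games", "simply", "thoughts", "turn",
           "lights", "control", "television", "talking", "moving", "head"},
    "P4": {"chip", "braingate", "developed", "massachusetts-based",
           "neurotechnology", "company", "cyberkinetics", "research",
           "undertaken", "brown", "university", "rhode", "island"},
}


def link(a, b):
    # Link score = number of keywords common to the two paragraphs.
    return len(clusters[a] & clusters[b])


print(link("P1", "P2"), link("P1", "P3"), link("P1", "P4"))  # -> 2 0 1
print(link("P2", "P3"), link("P2", "P4"), link("P3", "P4"))  # -> 2 1 0
```

The printed counts match Tables 1.2 to 1.4: P1-P2 and P2-P3 share two keywords each, so these pairs are the merge candidates.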
Table 1.2 Local heap, link result for P1-P4
Paragraphs   Link Result   Common words
P1, P2       02            chip, brain
P1, P3       00            nil
P1, P4       01            chip

Table 1.3 Local heap, link result for P2-P4
Paragraphs   Link Result   Common words
P2, P3       02            control, computer
P2, P4       01            chip

Table 1.4 Local heap, link result for P3-P4
Paragraphs   Link Result   Common words
P3, P4       00            nil

From Table 1.2 it is easy to see that the P1-P2 link score is maximal, so P1 and P2 can be merged into a new cluster. From Table 1.3, the P2-P3 link score is maximal (i.e. 2), so P2 and P3 can be merged into a new cluster. In Table 1.4 the link score of P3-P4 is zero, so there is no need to form a cluster.

Now we have four clusters C1, C2, C3, C4, where C1 holds the merged keywords of P1-P2, C2 holds the merged keywords of P2-P3, and C3 is the individual paragraph P3, which does not match any other paragraph. Likewise, C4 is paragraph P4, which has a single keyword in common with P1, but the link score of P1-P4 is less than that of P1-P2. P4 is considered an individual cluster because the input query may be present in this paragraph as well; even though two paragraphs do not match, we want to keep them as separate clusters. Now apply the query "Brain Chip Research" to the merged clusters as well as the individual clusters.

The "brain chip" part of the input query is present in both C1 and C2 (shown with bold letters). C3 contains no keyword of the input "Brain Chip Research". Cluster C4 contains the keywords 'Chip' and 'Research'. The keyword count of the input query on a cluster, as well as the size of the cluster, is considered while selecting the final cluster as output. We do not get the whole query "brain chip research" from any individual cluster, so the clustering algorithm is applied once again to C1, C2 and C4. The link scores between C1-C2, C1-C4 and C2-C4 are calculated and stored in the database; C1-C4 and C2-C4 together give all parts of the input query.
C1-C4 gives a keyword count of 18, whereas C2-C4 gives a keyword count of 24. Since C1-C4 gives the smaller count, the summary should be generated from the clusters C1 and C4.

VI. EXPERIMENTAL RESULT
We implemented the above system with the following hardware and software configuration. Hardware: Pentium IV processor, 160 GB hard disk, 1 GB RAM. Software: Windows XP, Visual Studio .NET 2008, SQL Server 2005. We stored 57 text files in the database; the memory required for these text files was 122 KB.

Table 1.5 Clustering and graph-based algorithm results
Sr. No.   File Name   Input Query                   ROCK Algo (ms)   Graph Algo (ms)
1         F1.txt      eukaryotic organisms          218              234
2         F2.txt      woody plant                   249              280
3         F4          Bollywood film music          296              439
4         F6          personal computers            327              592
5         F7          Taj Mahal monument            390              852
6         F8          computer programs software    468              1216
  7         F13         wireless local area network   390              758
  8         F15         Mobile WiMAX                  780              1060
  9         F16         system development            670              724
  10        F22         remote procedure calls        546              1482

The first query applied to the system is "eukaryotic organisms": the Rock algorithm requires 218 milliseconds, whereas graph-based summarization requires 234 milliseconds. The second query is "woody plant": here the Rock algorithm requires 249 milliseconds, whereas the document-graph algorithm requires 280 milliseconds. After observing the execution times of all input queries, we conclude that the Rock clustering algorithm performs better than graph-based document summarization. However, when the input query is not present in any of the text files, graph-based summarization gives its output faster than the Rock algorithm.

VII. CONCLUSION

In this paper we have compared the performance of the graph-based document summarization method with the clustering method, and the performance of the Rock algorithm is better than that of the graph-based document summarization algorithm. The system can be used on a stand-alone machine, a LAN, or a WAN for retrieving text files within a short period of time. Further, the system can be extended to work on DOC as well as PDF files, which contain huge amounts of textual data.

ACKNOWLEDGEMENT

I am thankful to Professor and H.O.D. Dr. S. H. Patil, Associate Professor M. S. Bewoor, and Prof. Shweta Joshi for their continuous guidance. I also thank all my friends who directly or indirectly supported me in completing this system.

REFERENCES

[1]. Ramakrishna Varadarajan, School of Computing and Information Sciences, Florida International University, "A System for Query-Specific Document Summarization".
[2].
Sudipto Guha (Stanford University, Stanford, CA 94305), Rajeev Rastogi (Bell Laboratories, Murray Hill, NJ 07974), and Kyuseok Shim (Bell Laboratories, Murray Hill, NJ 07974), "A Robust Clustering Algorithm for Categorical Attributes".
[3]. Balabhaskar Balasundaram, "A Cohesive Subgroup Model for Graph-Based Text Mining", 4th IEEE Conference on Automation Science and Engineering, Key Bridge Marriott, Washington DC, USA, August 23-26, 2008.
[4]. A. K. Jain (Michigan State University), M. N. Murty (Indian Institute of Science), and P. J. Flynn (The Ohio State University), "Data Clustering: A Review".
[5]. Johannes Grabmeier (University of Applied Sciences, Deggendorf, Edlmaierstr. 6+8, D-94469 Deggendorf, Germany) and Andreas Rudolph (Universitaet der Bundeswehr Muenchen, Werner-Heisenberg-Weg 39, D-85579 Neubiberg, Germany), "Techniques of Cluster Algorithms in Data Mining".
[6]. Prashant D. Joshi, M. S. Bewoor, and S. H. Patil, "System for Document Summarization Using Graphs in Text Mining", International Journal of Advances in Engineering & Technology (IJAET).
[7]. Bao-Zhi Qiu, Xiang-Li Li, and Jun-Yi Shen, "Grid-Based Clustering Algorithm Based on Intersecting Partition and Density Estimation".
[8]. Jacob Kogan (Department of Mathematics and Statistics) and Marc Teboulle, "The Entropic Geometric Means Algorithm: An Approach to Building Small Clusters for Large Text Datasets".

AUTHORS

Prashant D. Joshi is currently working as an Assistant Professor and pursuing an M.Tech degree at Bharati Vidyapeeth Deemed University College of Engineering, Pune. He has five and a half years of teaching experience and six months of software development experience. He completed his B.E. in Computer Science at Dr. Babasaheb Ambedkar University, Aurangabad (MH) in 2005 with distinction. He has published 2 papers in national conferences, 2 papers in international conferences, and 1 paper in an international journal. His areas of interest are Data Mining, Programming Languages, and Microprocessors.

S. G. Joshi is currently working as a Lecturer at A.I.S.S.M.S. College of Engineering, Pune. She has a total of 2 years of teaching experience in a polytechnic college. She completed her B.E. in Computer Science Engineering at Swami Ramanand Teerth Marathwada University, Nanded, with distinction. Her research interests are Data Mining, Operating Systems, and Data Structures.

M. S. Bewoor is currently working as an Assistant Professor at Bharati Vidyapeeth Deemed University College of Engineering, Pune. She has a total of 10 years of teaching experience in engineering colleges and 3 years of industry experience. She is involved in research activity, having presented 07 papers in national conferences and 08 in international conferences, and published 07 papers in international journals. Her areas of interest are Data Structures, Data Mining, and Artificial Intelligence.

S. H. Patil is working as a Professor and Head of the Computer Department at Bharati Vidyapeeth Deemed University College of Engineering, Pune. He has a total of 24 years of teaching experience. He has published more than 100 papers in national conferences, international conferences, national journals, and international journals. His areas of interest are Operating Systems, Computer Networks, and Database Management Systems.
IMPROVED SEARCH ENGINE USING CLUSTER ONTOLOGY

Gauri Suresh Bhagat, Mrunal S. Bewoor, Suhas Patil
Computer Department, Bharati Vidyapeeth Deemed University College of Engineering, Pune, Maharashtra, India

ABSTRACT

Search engines such as Google and Yahoo return a list of web pages that match the user query, and it is very difficult for the user to find the relevant pages. A cluster-based search engine can provide a significantly more powerful model for searching a user query. Clustering is the process of forming groups (clusters) of similar objects from a given set of inputs. When applied to web search results, clustering can be perceived as a way of organising the results into a number of easily browsable thematic groups. In this paper, we propose a new approach that applies background knowledge during pre-processing in order to improve clustering results and allow selection between results. We preprocess the input data by applying ontology-based heuristics for feature selection and feature aggregation. Inexperienced users, who may have difficulties formulating a precise query, can be helped in identifying the actual information of interest. Cluster labels are readable and unambiguous descriptions of the thematic groups; they provide the users with an overview of the topics covered in the results and help them identify the specific group of documents they were looking for.

KEYWORDS: Cluster, stemming, stop words, cluster label induction, frequent phrase extraction, cluster content discovery.

I. INTRODUCTION

With the enormous growth of the Internet it has become very difficult for users to find relevant documents. In response to the user's query, currently available search engines return a ranked list of documents along with their partial content. If the query is general, it is extremely difficult to identify the specific document the user is interested in.
The users are forced to sift through a long list of off-topic documents. For example, when the query "java Map" is submitted to a cluster-based search engine, the result set spans two categories, namely the Java map collection classes and maps of the Indonesian island Java. Generally speaking, a computer science student would most likely be interested in the Java map collection classes, whereas a geography student would be interested in locating maps of the Indonesian island Java. The solution is that, for each such web page, the search engine could determine which real entity the page refers to. This information can be used to provide a clustered search capability where, instead of a list of web pages of (possibly) multiple entities with the same name, the results are clustered by associating each cluster with a real entity. The clusters can be returned in a ranked order determined by aggregating the ranks of the web pages that constitute each cluster.

II. RELATED WORK

Kalashnikov et al. developed a disambiguation algorithm and then studied its impact on people search [1]. The authors proposed an algorithm that uses extraction techniques to extract entities such as names, organizations, and locations from each web page. The algorithm analyses several types of information, such as attributes and the interconnections that exist among entities in the entity-relationship graph. If web pages about multiple people with the same name are merged into the same cluster, it is difficult for the
user to find the relevant web pages. To disambiguate people that have the same name, a novel algorithm was developed.

Kalashnikov et al. also discuss a web people search approach based on collecting co-occurrence information from the web to make clustering decisions [2]. A skyline-based classification technique is used to classify the collected co-occurrence information.

Bekkerman and Zilberstein proposed a framework that makes heuristic search viable in the vast domain of the WWW and applicable to clustering of web search results and to web appearance disambiguation [3].

Chen and Kalashnikov presented a graphical approach to entity resolution. The overall idea is to use relationships and to look at the direct and indirect (long) relationships that exist between specific pairs of entity representations in order to make a disambiguation decision. In terms of the entity-relationship graph, that means analyzing the paths that exist between various pairs of nodes [4].

III. DESIGN OF PREPROCESSING OF WEB PAGES

The preprocessing of web pages includes two steps, namely stemming and stop-word removal. Stemming algorithms are used to transform the words in texts into their grammatical root form, and are mainly used to improve an information retrieval system's efficiency. To stem a word is to reduce it to a more general form, possibly its root. For example, stemming the term "interesting" may produce the term "interest". Though the stem of a word might not be its root, we want all words that have the same stem to have the same root. The effect of stemming on searches of English document collections has been tested extensively. Several algorithms exist, with different techniques. The most widely used is the Porter stemming algorithm.
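As an illustration of the two pre-processing steps just described, the sketch below applies a small stop-word list and a few Porter-like suffix rules. This is a simplified stand-in, not the full Porter algorithm, and both the stop-word list and the suffix rules are illustrative assumptions.

```python
# Simplified sketch of pre-processing: stop-word removal followed by
# suffix stripping. NOT the full Porter algorithm -- only a few
# illustrative suffix rules, and a small sample stop-word list.

STOP_WORDS = {"of", "and", "the", "a", "an", "in", "to", "is", "are", "for"}

# A few Porter-like suffix rules, tried in order.
SUFFIX_RULES = [("ational", "ate"), ("ing", ""), ("ed", ""), ("ies", "y"), ("s", "")]

def stem(word: str) -> str:
    """Reduce a word to a more general (root-like) form."""
    for suffix, replacement in SUFFIX_RULES:
        # Keep at least a 3-letter base so short words survive intact.
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: len(word) - len(suffix)] + replacement
    return word

def preprocess(text: str) -> list[str]:
    """Lower-case the text, drop stop words, and stem the remaining terms."""
    tokens = [t for t in text.lower().split() if t.isalpha()]
    return [stem(t) for t in tokens if t not in STOP_WORDS]

print(preprocess("The user is interested in interesting maps"))
# -> ['user', 'interest', 'interest', 'map']
```

Note how "interested" and "interesting" collapse to the same stem, which is exactly the property the text asks for: words with the same stem are treated as having the same root.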
In some contexts, stemmers such as the Porter stemmer improve precision/recall scores. After stemming it is necessary to remove unwanted words. There are 400 to 500 stop words, such as "of", "and", "the", etc., that provide no useful information about the document's topic. Stop-word removal is the process of removing these words. Stop words account for about 20% of all words in a typical document. These techniques greatly reduce the size of the search engine's index; stemming alone can reduce the size of an index by nearly 40%. To compare one web page with another, all unnecessary content must be removed and the text put into an array.

When designing cluster-based web search, special attention must be paid to ensuring that both the content and the description (labels) of the resulting groups are meaningful to humans. As stated, "a good cluster—or document grouping—is one, which possesses a good, readable description". There are various algorithms, such as k-means and k-medoid, but these algorithms require the number of clusters as input. A Correlation Clustering (CC) algorithm employing supervised learning can be used instead. The key feature of the CC algorithm is that it generates the number of clusters based on the labeling itself, so the number need not be given as input; however, it is best suited to queries that are person names [9]. For general queries, the available algorithms are Query Directed Web Page Clustering (QDC), Suffix Tree Clustering (STC), Lingo, and Semantic Online Hierarchical Clustering (SHOC) [5]. The focus here is on Lingo, because QDC considers only single words, while STC tends to remove longer high-quality phrases, leaving only shorter, less informative ones; so, if a document does not include any of the extracted phrases, it will not be included in the results although it may still be relevant. To overcome STC's low-quality-phrases problem, SHOC introduces two novel concepts: complete phrases and a continuous cluster definition.
The drawback of SHOC is that it provides only a vague threshold value for describing the resulting clusters, and in many cases it produces unintuitive continuous clusters. The majority of open text clustering algorithms follow a scheme where cluster content discovery is performed first and then, based on the content, the labels are determined. But very often the intricate measures of similarity among documents do not correspond well with a plain human understanding of what a cluster's "glue" element has been. To avoid such problems, Lingo reverses this process: it first attempts to ensure that a human-perceivable cluster label can be created, and only then assigns documents to it. Specifically, it extracts frequent phrases from the input documents, hoping they are the most informative source of human-readable topic descriptions. Next, by reducing the original term-document matrix using Singular Value Decomposition (SVD), it tries to discover any existing latent structure of diverse topics in the search result. Finally,
    • International Journal of Advances in Engineering & Technology, Nov 2011. ©IJAET ISSN: 2231-1963 match group descriptions with the extracted topics and assign relevant documents to them. The detail description of Lingo algorithm is in [4]. IV. FREQUENT PHRASE EXTRACTION The frequent phrases are defined as recurring ordered sequences of terms appearing in the input documents. Intuitively, when writing about something, we usually repeat the subject-related keywords to keep a reader’s attention. Obviously, in a good writing style it is common to use synonymy and pronouns and thus avoid annoying repetition. The Lingo can partially overcome the former by using the SVD-decomposed term document matrix to identify abstract concepts—single subjects or groups of related subjects that are cognitively different from other abstract concepts. A complete phrase is a complete substring of the collated text of the input documents, defined in the following way: Let T be a sequence of elements (t1, t2, t3 . . . tn). S is a complete substring of T when S occurs in k distinct positions p1, p2, p3 . . . pk in T and i, j 1 . . . k : tpi−1 ≠ tpj−1 (left completeness) and i, j 1 . . . k : tpi+|S| ≠ tpj+|S| (right-completeness). In other words, a complete phrase cannot be “extended” by adding preceding or trailing elements, because at least one of these elements is different from the rest. An efficient algorithm for discovering complete phrases was proposed in [11]. V. CLUSTER LABEL INDUCTION Once frequent phrases (and single frequent terms) that exceed term frequency thresholds are known, they are used for cluster label induction. There are three steps to this: term-document matrix building, abstract concept discovery, phrase matching and label pruning. The term-document matrix is constructed out of single terms that exceed a predefined term frequency threshold. 
The weight of each term is calculated using the standard term frequency-inverse document frequency (tf-idf) formula [12]; terms appearing in document titles are additionally scaled by a constant factor. In abstract concept discovery, the Singular Value Decomposition method is applied to the term-document matrix to find its orthogonal basis. As discussed earlier, the vectors of this basis (SVD's U matrix) supposedly represent the abstract concepts appearing in the input documents. It should be noted, however, that only the first k vectors of matrix U are used in the further phases of the algorithm. We estimate the value of k by comparing the Frobenius norms of the term-document matrix A and its k-rank approximation Ak. Let the threshold q be a percentage-expressed value that determines to what extent the k-rank approximation should retain the original information in matrix A.

VI. CLUSTER CONTENT DISCOVERY

In the cluster content discovery phase, the classic Vector Space Model (VSM) is used to assign the input documents to the cluster labels induced in the previous phase. In a way, we re-query the input document set with all induced cluster labels. The assignment process resembles document retrieval based on the VSM model. Let us define a matrix Q in which each cluster label is represented as a column vector, and let C = Q^T A, where A is the original term-document matrix for the input documents. This way, element cij of the C matrix indicates the strength of membership of the j-th document in the i-th cluster. A document is added to a cluster if cij exceeds the Snippet Assignment Threshold, yet another control parameter of the algorithm. Documents not assigned to any cluster end up in an artificial cluster called "others".

VII. FINAL CLUSTER FORMATION

Clusters are sorted for display based on their score, calculated using the following simple formula: score = label score × ||C||, where ||C|| is the number of documents assigned to cluster C.
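The three numerical steps above (choosing k from the Frobenius norms, computing membership strengths C = Q^T A, and scoring clusters) can be sketched with numpy. The matrix A, the candidate-label matrix Q, the thresholds, and the label scores below are all illustrative values, not data from the paper.

```python
# Sketch of abstract concept discovery, content discovery, and scoring.
# A is a small illustrative tf-idf-style term-document matrix; Q holds two
# candidate cluster labels as column vectors in the same term space.
import numpy as np

A = np.array([[0.9, 0.8, 0.0],     # rows: terms, columns: documents
              [0.1, 0.0, 0.9],
              [0.7, 0.9, 0.1]])

# Choose k so that the k-rank approximation retains at least a fraction q
# of the Frobenius norm of A (singular values enter in decreasing order).
U, s, Vt = np.linalg.svd(A)
q = 0.9
retained = np.sqrt(np.cumsum(s**2)) / np.linalg.norm(A, "fro")
k = int(np.searchsorted(retained, q) + 1)

# Content discovery: membership strengths C = Q^T A; a document joins a
# cluster when its strength exceeds the snippet assignment threshold.
Q = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
C = Q.T @ A
threshold = 0.5
assignments = C > threshold

# Final cluster score: label score times the number of assigned documents.
label_scores = np.array([0.8, 0.6])
scores = label_scores * assignments.sum(axis=1)
print(k, assignments.tolist(), scores.tolist())
```

With these toy values the first label captures documents 1 and 2 and the second captures document 3, and the larger, well-described group receives the higher score, which is the behaviour the scoring formula is designed to prefer.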
The scoring function, although simple, prefers well-described and relatively large groups over smaller, possibly noisy ones.

VIII. ONTOLOGY

Let tf(d, t) be the absolute frequency of term t ∈ T in document d ∈ D, where D is the set of documents and T = {t1, ..., tm} is the set of all different terms occurring in D. We denote the term vector of document d by td = (tf(d, t1), ..., tf(d, tm)). Later on, we will need the notion of the centroid of a set X of term vectors. It
is defined as the mean value (1/|X|) Σ(td ∈ X) td of the vectors in X. As an initial approach we have produced this standard representation of the texts by term vectors. The initial term vectors are further modified as follows.

Stop words are words that are considered non-descriptive within a bag-of-words approach. Following common practice, we removed stop words from T. We processed our text documents using the Porter stemmer and used the stemmed terms to construct a vector representation td for each text document. Then, we investigated how pruning rare terms affects the results. Depending on a pre-defined threshold δ, a term t is discarded from the representation (i.e., from the set T) if Σ(d ∈ D) tf(d, t) ≤ δ. We used the values 0, 5 and 30 for δ. The rationale behind pruning is that infrequent terms do not help in identifying appropriate clusters.

Tf-idf weighs the frequency of a term in a document with a factor that discounts its importance when the term appears in almost all documents [14]. The tfidf (term frequency-inverted document frequency) of term t in document d is defined by tfidf(d, t) = tf(d, t) · log(|D| / df(t)), where df(t) is the document frequency of term t, which counts in how many documents term t appears. If tfidf weighting is applied, then we replace the term vectors td = (tf(d, t1), ..., tf(d, tm)) by td = (tfidf(d, t1), ..., tfidf(d, tm)) [13]. A core ontology is a tuple O := (C, ≤C) consisting of a set C whose elements are called concept identifiers, and a partial order ≤C on C, called concept hierarchy or taxonomy. This definition allows for a very generic approach towards using ontologies for clustering.

IX. RESULTS AND DISCUSSION

The system was implemented using NetBeans 6.5.1 as the development tool and JDK 1.6 as the development platform. It was tested for a variety of queries under the following four categories, and the results obtained were satisfactory.

9.1 Web page retrieval for the query

This module provides the facilities for submitting queries to the middleware. Figure 1 shows the user interface through which the user enters the query to the middleware. Along with the query, the user can also select the number of results (50/100/150/200) to be fetched from the source. In Figure 1 the query entered is "mouse" and the result count selected is 100. The user issues a query to the system; the middleware sends the query to a search engine, such as Google, and retrieves the top-K returned web pages. This is a standard step performed by most current systems. Figure 1 shows that 200 results were fetched from the source (Google) for the query "mouse". Input: query "mouse" and k = 50/100/150/200 pages. Output: web pages for the query "mouse".

The system was assessed for a number of real-world queries; we also analyzed the results obtained from the system with respect to certain characteristics of the input data. The queries are mainly categorized into four types: ambiguous query, general query, compound query, and people name. The system was tested for all these query types and the results obtained were satisfactory.
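The retrieval step above can be sketched as a thin middleware function. The search-engine call is mocked here as a stand-in (a real system would call Google or another engine); the allowed result counts follow the 50/100/150/200 options of the interface, and all function names are assumptions of this sketch.

```python
# Sketch of the middleware retrieval step: validate the requested result
# count (50/100/150/200, as in the user interface) and return the top-K
# pages. The search-engine backend is mocked for illustration.

ALLOWED_K = (50, 100, 150, 200)

def mock_search_engine(query: str) -> list[str]:
    # Stand-in for the real search-engine call (an assumption of this sketch).
    return [f"{query}-page-{i}" for i in range(1, 301)]

def retrieve_top_k(query: str, k: int) -> list[str]:
    """Issue the query and keep only the top-K returned web pages."""
    if k not in ALLOWED_K:
        raise ValueError(f"k must be one of {ALLOWED_K}")
    return mock_search_engine(query)[:k]

pages = retrieve_top_k("mouse", 200)
print(len(pages))   # number of pages fetched for the query "mouse"
```

The fetched pages would then be handed to the pre-processing and clustering stages described in Sections III to VII.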
Figure 1. Clustering results for the ambiguous query "mouse" with k = 200 results

X. QUALITY OF GROUP IDENTIFICATION

Figure 1 demonstrates the overall disambiguation quality results on the WWW 2005 and WEPS data sets. We also compare the results with the top runners in the WEPS challenge [6]. The first runner in the challenge reports 0.78 for Fp and 0.70 for the B-cubed measure. The proposed algorithm outperforms all of the WEPS challenge algorithms. The improvement is achieved because the proposed disambiguation method is capable of analyzing more information, hidden in the data sets, which [8] and [7] do not analyze. The algorithm outperforms [7] by 11.8 percent of F-measure, as illustrated in Table 1 and Table 2. In this experiment, the F-measure is computed the same way as in [7]. The field "#W" in Table 1 is the number of to-be-found web pages related to the namesake of interest, the field "#C" is the number of web pages found correctly, and the field "#I" is the number of pages found incorrectly in the resulting groups. The baseline algorithm also outperforms the algorithm proposed in [7].

Table 1. F-measures using the WWW'05 algorithm

  Name              #W    #C    #I    F-measure
  Adam Cheyer       96    62    0     78.5
  William Cohen     6     6     4     75.0
  Steve Hardt       64    16    2     39.0
  David Israel      20    19    4     88.4
  Leslie Kaelbling  88    84    1     97.1
  Bill Mark         11    6     9     46.2
  Mouse             54    54    2     98.2
  Apple             15    14    5     82.4
  David Mulford     1     1     0     100.0
  Java              32    30    6     88.2
  Jobs              32    21    14    62.7
  Gauri             1     0     1     0.0
  Overall           455   313   47    80.3

F-measure: let Si be the set of correct web pages for cluster i and Ai be the set of web pages assigned to cluster i by the algorithm. Then Precision_i = |Ai ∩ Si| / |Ai|, Recall_i = |Ai ∩ Si| / |Si|, and F is their harmonic mean [10], while Fp is referred to as Fα=0.5 [8].

Table 2. F-measures using the baseline algorithm

  Name              #W    #C    #I    F-measure
  Adam Cheyer       96    75    1     87.2 (+8.7)
  William Cohen     6     5     0     90.9 (+15.9)
  Steve Hardt       64    40    7     72.1 (+33.1)
  David Israel      20    14    2     77.8 (-10.6)
  Leslie Kaelbling  88    66    0     85.7 (-11.4)
  Bill Mark         11    9     17    48.6 (+2.4)
  Mouse             54    52    0     98.1 (-0.1)
  Apple             15    15    2     93.8 (+11.4)
  David Mulford     1     0     1     0.0 (-100.0)
  Java              32    27    1     90.0 (+1.8)
  Jobs              32    23    17    63.9 (+1.2)
  Gauri             1     1     0     100.0 (+100.0)
  Overall           455   327   47    82.4 (+2.1)

Table 3. F-measures using the cluster-based algorithm

  Name              #W    #C    #I    F-measure
  Adam Cheyer       96    94    0     98.9 (+20.4)
  William Cohen     6     4     0     80.0 (+5.0)
  Steve Hardt       64    51    2     87.2 (+48.2)
  David Israel      20    17    2     87.8 (-1.2)
  Leslie Kaelbling  88    88    1     99.4 (+2.3)
  Bill Mark         11    8     1     80.0 (+33.8)
  Mouse             54    54    1     99.1 (+0.9)
  Apple             15    12    5     75.0 (-7.4)
  David Mulford     1     1     0     100.0 (+0.0)
  Java              32    25    1     86.2 (-2.0)
  Jobs              32    25    11    73.5 (+10.8)
  Gauri             1     0     0     0.0 (+0.0)
  Overall           455   379   24    92.1 (+11.8)

XI. CONCLUSION

The number of outputs processed for a single query is likely to affect two major aspects of the results: the quality of the groups' descriptions and the time spent on clustering. The focus here is on evaluating the usefulness of the generated clusters. The term usefulness involves very subjective judgments of the clustering results. For each created cluster, based on its label, we decided whether the cluster is useful or not. Useful groups would most likely have concise and meaningful labels, while useless ones would have been given either ambiguous or senseless labels. For each cluster individually, and for each snippet from that cluster, we judged the extent to which the result fits its group's description. A very well matching result would contain exactly the information suggested by the cluster label.

ACKNOWLEDGEMENTS

We would like to acknowledge and extend our heartfelt gratitude to the following persons who have made the completion of this paper possible: my guide Prof. M. S. Bewoor and our H.O.D., Dr. Suhas H. Patil, for their vital encouragement and support; and most especially our family and friends, and God, who made all things possible!

REFERENCES

[1] D.V.
Kalashnikov, S. Mehrotra, R. Nuray-Turan and Z. Chen, "Web People Search via Connection Analysis", IEEE Transactions on Knowledge and Data Engineering, Vol. 20, No. 11, November 2008.
[2] D.V. Kalashnikov, S. Mehrotra, Z. Chen, R. Nuray-Turan, and N. Ashish, "Disambiguation Algorithm for People Search on the Web," Proc. IEEE Int'l Conf. Data Eng. (ICDE '07), Apr. 2007.
[3] R. Bekkerman, S. Zilberstein, and J. Allan, "Web Page Clustering Using Heuristic Search in the Web Graph," Proc. Int'l Joint Conf. Artificial Intelligence (IJCAI), 2007.
[4] Z. Chen, D.V. Kalashnikov, and S. Mehrotra, "Adaptive Graphical Approach to Entity Resolution," Proc. ACM IEEE Joint Conf. Digital Libraries (JCDL), 2007.
[5] O.E. Zamir, "Clustering Web Documents: A Phrase-Based Method for Grouping Search Engine Results," PhD thesis, University of Washington, 1999.
[6] J. Artiles, J. Gonzalo, and S. Sekine, "The SemEval-2007 WePS Evaluation: Establishing a Benchmark for the Web People Search Task," Proc. Int'l Workshop on Semantic Evaluations (SemEval '07), June 2007.
[7] R. Bekkerman and A. McCallum, "Disambiguating Web Appearances of People in a Social Network," Proc. Int'l World Wide Web Conf. (WWW), 2005.
[8] J. Artiles, J. Gonzalo, and F. Verdejo, "A Testbed for People Searching Strategies in the WWW," Proc. SIGIR, 2005.
[9] N. Bansal, A. Blum, and S. Chawla, "Correlation Clustering," Foundations of Computer Science, pp. 238-247, 2002.
[10] D.V. Kalashnikov, S. Mehrotra, R. Nuray-Turan and Z. Chen, "Web People Search via Connection Analysis", IEEE Transactions on Knowledge and Data Engineering, Vol. 20, No. 11, November 2008.
[11] Zhang Dong, "Towards Web Information Clustering," PhD thesis, Southeast University, Nanjing, China, 2002.
[12] Gerard Salton, Automatic Text Processing — The Transformation, Analysis, and Retrieval of Information by Computer, Addison-Wesley, 1989.
[13] G. Amati, C. Carpineto, and G. Romano, "FUB at TREC-10 Web Track: A Probabilistic Framework for Topic Relevance Term Weighting," in The Tenth Text Retrieval Conference (TREC 2001), National Institute of Standards and Technology (NIST), online publication, 2001.
[14] A. Hotho, S. Staab and G. Stumme (2003), "WordNet Improves Text Document Clustering," Proc. of the SIGIR 2003 Semantic Web Workshop, pp. 541-544.

AUTHORS

Gauri S. Bhagat is a student of M.Tech in Computer Engineering, Bharati Vidyapeeth Deemed University College of Engineering, Pune-43.

M. S. Bewoor is working as an Associate Professor in Computer Engineering, Bharati Vidyapeeth Deemed University College of Engineering, Pune-43. She has a total of 10 years of teaching experience.

S. H. Patil is working as a Professor and Head of Department in Computer Engineering, Bharati Vidyapeeth Deemed University College of Engineering, Pune-43.
He has a total of 22 years of teaching experience and has been working as HOD for the last ten years.
COMPARISON OF MAXIMUM POWER POINT TRACKING ALGORITHMS FOR PHOTOVOLTAIC SYSTEM

J. Surya Kumari 1, Ch. Sai Babu 2
1 Asst. Professor, Dept. of Electrical and Electronics, RGMCET, Nandyal, India.
2 Professor, Dept. of Electrical and Electronics, J.N.T. University, Kakinada, India.

ABSTRACT

Photovoltaic systems normally use a maximum power point tracking (MPPT) technique to continuously deliver the highest possible power to the load when variations in insolation and temperature occur. Photovoltaic (PV) generation is becoming increasingly important as a renewable source since it offers many advantages, such as incurring no fuel costs, not being polluting, requiring little maintenance, and emitting no noise, among others. PV modules still have relatively low conversion efficiency; therefore, controlling maximum power point tracking for the solar array is essential in a PV system. MPPT is a technique used in power electronic circuits to extract maximum energy from PV systems. PV power generation has recently gained more importance due to its numerous advantages, such as being fuel free, requiring very little maintenance, and offering environmental benefits. To improve the energy efficiency, it is important to always operate the PV system at its maximum power point. Many MPPT techniques are available, and various methods have been proposed for obtaining the maximum power point; however, among the available techniques, a sufficient comparative study, particularly under variable environmental conditions, has not been done. This paper is an attempt to study and evaluate two main types of MPPT techniques, namely Open-circuit voltage and Short-circuit current.
A detailed comparison of each technique is reported. SIMULINK simulation results of the Open-circuit voltage and Short-circuit current methods with changing radiation and temperature are presented.

KEYWORDS: Photovoltaic system, modelling of PV arrays, Open-circuit voltage algorithm, Short-circuit current algorithm, Boost converter, Simulation results.

I. INTRODUCTION

Renewable sources of energy are acquiring growing importance due to the enormous consumption and exhaustion of fossil fuels. Solar energy is the most readily available source of energy, and it is free. Moreover, solar energy is the best among all the renewable energy sources, since it is non-polluting. The energy supplied by the sun in one hour is equal to the amount of energy required by humans in one year. Photovoltaic arrays are used in many applications, such as water pumping, street lighting in rural towns, battery charging, and grid-connected PV systems.

The maximum power point tracker is used with PV modules to extract maximum energy from the Sun [1]. Typical characteristics of the PV module, shown in Fig. 1, clearly indicate that the operating point of the module (the intersection of the load line and the I-V characteristic) is not the same as the maximum power point of the module. To remove this mismatch, a dc-to-dc power electronic converter is coupled with the PV system, as shown in Fig. 1, so that the module operates at the MPP. The electrical characteristics of a PV module depend on the intensity of solar radiation and the operating temperature; increased radiation with reduced temperature results in higher module output. The aim of the tracker is to always derive maximum power against variations in sunlight, atmosphere, local surface reflectivity, and temperature.
Figure 1: PV Module Characteristics

Since a PV array is an expensive system to build, and the cost of electricity from PV array systems is higher than the price of electricity from the utility grid, the user of such an expensive system naturally wants to use all of the available output power, with a near-sinusoidal current as well as voltage with minimum harmonic distortion under all operating conditions [2], [3]. Therefore, PV array systems should be designed to operate at their maximum output power level for any temperature and solar irradiation level at all times. The performance of a PV array system depends on the operating conditions as well as on the solar cell and array design quality. Multilevel converters are particularly interesting for high-power applications. The main tasks of the system control are to maximize the energy transferred from the PV arrays to the grid and to generate a near-sinusoidal current as well as voltage with minimum harmonic distortion under all operating conditions.

The paper is organized in the following way. Section II presents the entire system configuration. Section III discusses the mathematical modelling of the PV array and the maximum power point tracking methods, analyses the boost converter, and introduces the concept of the multilevel inverter with a five-level H-bridge cascade multilevel inverter. In Section IV, simulation results for the multilevel inverter system under consideration are discussed. Finally, conclusions are made in Section V.

II. SYSTEM CONFIGURATION

The system configuration is shown in Figure 2. Here the PV array is a combination of series and parallel solar cells. The array develops power directly from solar energy, and this power changes depending upon the temperature and solar irradiance [1], [2].

Fig. 2. System Configuration of PV System

To maintain maximum power at the output side, the voltage is boosted by controlling the array current with a PI controller. The AC voltage changes depending upon the boost converter output voltage, and the system is finally connected to the utility grid, which acts as a load for various applications. A five-level H-bridge cascade multilevel inverter is used to obtain the AC output voltage from the DC boost output voltage.

III. PROPOSED MPPT ALGORITHM FOR PHOTOVOLTAIC SYSTEM

3.1. Mathematical Modeling of PV Array
The PV cell receives energy from the sun and converts the sunlight into DC power. The simplified equivalent circuit model is shown in Figure 3.

Figure 3. Simplified equivalent circuit of a photovoltaic cell

The PV cell output voltage is a function of the photocurrent, which is mainly determined by the load current and the solar irradiation level during operation, as given in equation (1):

    V_C = (A k T_C / q) ln((I_ph + I_0 - I_C) / I_0) - R_s I_C        (1)

where the symbols are defined as follows:
    q: electron charge (1.602 × 10^-19 C)
    k: Boltzmann constant (1.38 × 10^-23 J/K)
    I_C: cell output current, A
    I_ph: photocurrent, a function of irradiation level and junction temperature (5 A)
    I_0: reverse saturation current of the diode (0.0002 A)
    R_s: series resistance of the cell (0.001 Ω)
    T_C: reference cell operating temperature (25 °C)
    V_C: cell output voltage, V

Both k and T_C should be expressed in the same temperature unit, either Kelvin or Celsius. A method to include the temperature and irradiation effects in the PV array model is given in [4]. These effects are represented in the model by the temperature coefficients C_TV and C_TI for cell output voltage and cell photocurrent, respectively, as in equations (2) and (3):

    C_TV = 1 + β_T (T_a - T_x)                 (2)
    C_TI = 1 + (γ_T / S_C)(T_x - T_a)          (3)

where β_T = 0.004 and γ_T = 0.06 for the cell used, and T_a = 20 °C is the ambient temperature during cell testing. If the solar irradiation level increases from S_X1 to S_X2, the cell operating temperature and the photocurrent also increase, from T_X1 to T_X2 and from I_ph1 to I_ph2, respectively. C_SV and C_SI are the correction factors for the changes in cell output voltage V_C and photocurrent I_ph, respectively, as in equations (4) and (5):

    C_SV = 1 + β_T α_S (S_X - S_C)             (4)
    C_SI = 1 + (1 / S_C)(S_X - S_C)            (5)

where S_C is the benchmark reference solar irradiation level during cell testing used to obtain the modified cell model. The temperature change ΔT_C occurs due to the change in solar irradiation level and is obtained from equation (6):

    ΔT_C = α_S (S_X - S_C)                     (6)

The constant α_S represents the slope of the change in cell operating temperature due to a change in solar irradiation level [1] and is equal to 0.2 for the solar cells used. Using the correction factors C_TV, C_TI, C_SV and C_SI, the new values of the cell output voltage V_CX and photocurrent I_phX are obtained for the new temperature T_X and solar irradiation S_X, as in equations (7) and (8):

    V_CX = C_TV C_SV V_C                       (7)
    I_phX = C_TI C_SI I_ph                     (8)
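Equations (1)-(8) can be collected into a short numerical sketch. The fragment below uses the parameter values quoted above; the diode ideality factor A and the benchmark irradiation level S_C are not given explicitly in the text, so the values used here are assumptions for illustration only.

```python
import math

# Cell constants quoted in the paper's model (eqs. 1-8)
Q = 1.602e-19      # electron charge, C
K = 1.38e-23       # Boltzmann constant, J/K
A = 1.0            # diode ideality factor (assumed; not stated in the text)
T_C = 25.0         # reference cell operating temperature, deg C
T_A = 20.0         # ambient temperature during cell testing, deg C
I_PH_REF = 5.0     # reference photocurrent, A
I_0 = 0.0002       # diode reverse saturation current, A
R_S = 0.001        # series resistance, ohm
S_C = 100.0        # benchmark irradiation level (assumed value and units)
BETA_T, GAMMA_T, ALPHA_S = 0.004, 0.06, 0.2

def corrected_operating_point(i_c, t_x, s_x):
    """Return (V_CX, I_phX) for cell current i_c at temperature t_x and irradiation s_x."""
    # Reference cell voltage from eq. (1); kelvin used inside the thermal-voltage term
    t_kelvin = T_C + 273.15
    v_c = (A * K * t_kelvin / Q) * math.log((I_PH_REF + I_0 - i_c) / I_0) - R_S * i_c
    # Temperature and irradiation correction factors, eqs. (2)-(5)
    c_tv = 1 + BETA_T * (T_A - t_x)
    c_ti = 1 + (GAMMA_T / S_C) * (t_x - T_A)
    c_sv = 1 + BETA_T * ALPHA_S * (s_x - S_C)
    c_si = 1 + (1.0 / S_C) * (s_x - S_C)
    # Corrected cell voltage and photocurrent, eqs. (7)-(8)
    return c_tv * c_sv * v_c, c_ti * c_si * I_PH_REF
```

At the reference conditions (t_x = 20 °C, s_x = S_C) all four correction factors equal one, so the routine reduces to the plain single-diode relation of equation (1).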
V_C and I_ph are the benchmark reference cell output voltage and reference cell photocurrent, respectively. The resulting I-V and P-V curves for various temperature and solar irradiation levels were discussed and shown in [3, 4, 5]; therefore they are not repeated here. The output power from the PV module is the product of the terminal voltage and the output current, as obtained from equations (9) and (10):

    P_C = V_C [ I_ph - I_0 (exp(q V_C / (A k T)) - 1) ]        (9)
    I_C = I_ph - I_0 (exp(q V_C / (A k T)) - 1)                (10)

3.2 MPPT Methods

The tracking algorithm works on the fact that the derivative of the output power P with respect to the panel voltage V is equal to zero at the maximum power point, as shown in Figure 3. The derivative is greater than zero to the left of the peak point and less than zero to the right:

    ∂P/∂V = 0 for V = V_mp        (11)
    ∂P/∂V > 0 for V < V_mp        (12)
    ∂P/∂V < 0 for V > V_mp        (13)

Figure 3: P-V Characteristics of a module

Various MPPT algorithms are available to improve the performance of a PV system by effectively tracking the MPP. The two most widely used algorithms are considered here:
a) Open-circuit voltage
b) Short-circuit current

A. Open-Circuit Voltage

The open-circuit voltage algorithm is the simplest MPPT control method; it is also known as the constant voltage method. V_OC is the open-circuit voltage of the PV panel and depends on the properties of the solar cells. A commonly used V_MPP/V_OC ratio is 76%. This relationship is described by equation (14):

    V_MPP = k1 × V_OC        (14)

The factor k1 is always less than unity. The method looks very simple, but determining the best value of k1 is difficult: k1 varies from 0.71 to 0.8, with 0.76 the most commonly used value; hence this algorithm is also called the 76% algorithm.
The operating point of the PV array is kept near the MPP by regulating the array voltage and matching it to a fixed reference voltage V_ref. The value of V_ref is set equal to the V_MPP of the characteristic PV module or to another calculated best open-circuit voltage. This method assumes that individual insolation and temperature variations on the array are insignificant and that the constant reference voltage is an adequate approximation of the true MPP.
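As a minimal sketch of this regulation loop, assuming a boost-type converter whose duty cycle controls how hard the panel is loaded (the helper names and the fixed duty step are illustrative, not from the paper):

```python
def fractional_voc_reference(v_oc, k1=0.76):
    """Eq. (14): estimate the MPP voltage as a fixed fraction of the open-circuit voltage."""
    return k1 * v_oc

def voc_mppt_step(v_panel, v_ref, duty, step=0.005):
    """One regulation step: nudge the converter duty cycle so that the measured
    panel voltage v_panel approaches the reference v_ref.  For a boost converter,
    raising the duty cycle draws more input current and pulls the panel voltage down."""
    if v_panel > v_ref:
        duty += step
    elif v_panel < v_ref:
        duty -= step
    return min(max(duty, 0.0), 0.95)   # clamp to a practical duty-cycle range
```

For a 21 V panel the reference would be 0.76 × 21 ≈ 15.96 V; the duty cycle is then stepped once per control period until the panel settles near that voltage.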
Figure 4. Flow chart of the open-circuit voltage method

The open-circuit voltage method does not require any additional input. It is important to observe that the technique is more effective when the PV panel operates in low-insolation conditions. A detailed flowchart of the open-circuit voltage algorithm is depicted in Figure 4.

B. Short-Circuit Current

The short-circuit current algorithm is likewise a very simple MPPT control method; it is also known as the constant current method. I_SC is the short-circuit current of the PV panel and depends on the properties of the solar cells, as shown in Figure 3. This relationship is described by equation (15):

    I_MPP = k2 × I_SC        (15)

The factor k2 is always less than unity. Determining the best value of k2 is difficult: k2 varies between 0.78 and 0.92. When the PV array output current is approximately 90% of the short-circuit current, the solar module operates at its MPP; in other words, the common value of k2 is 0.9. Measuring I_SC during operation is problematic, and an additional switch usually has to be added to the power converter. Here a boost converter is used, so the switch in the converter itself can be used to short the PV array. Power output is reduced not only while finding I_SC but also because the MPP is never perfectly matched. A way of compensating k2 has been proposed such that the MPP is better tracked while atmospheric conditions change. To guarantee proper MPPT in the presence of multiple local maxima, the PV array voltage is periodically swept from short circuit to update k2. A detailed flowchart of the short-circuit current algorithm is depicted in Figure 5.
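The periodic re-measurement loop of the flowchart can be sketched as follows; `measure_isc` is a hypothetical callback standing in for briefly shorting the array through the boost switch, and the routine returns the current reference I_MPP used at each control step:

```python
def periodic_isc_mppt(measure_isc, n_steps, period=100, k2=0.9):
    """Fractional short-circuit current MPPT (eq. 15): every `period` steps the
    array is shorted and I_SC re-measured; in between, the current reference is
    held at I_MPP = k2 * I_SC."""
    refs, i_ref = [], 0.0
    for step in range(n_steps):
        if step % period == 0:
            i_ref = k2 * measure_isc()   # brief short-circuit measurement
        refs.append(i_ref)
    return refs
```

The trade-off named in the text is visible here: each measurement interrupts power delivery, so `period` must balance tracking accuracy against the energy lost while the array is shorted.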
Figure 5. Flow chart of the short-circuit current MPPT

3.3 MPPT Methodology

Compared with a system without a control algorithm, MPPT control increases the PV system output by approximately 20 to 65%. The control algorithm drives the dc-to-dc converter and performs all control functions required for the MPP tracking process. The MPP of a module varies with radiation and temperature, and this variation of the MPP position under changing conditions demands an optimized algorithm, which in turn controls the dc-to-dc converter operation to increase the PV efficiency. Table 1 shows a detailed comparison of the two methods. Each MPPT algorithm has its own merits and barriers with respect to changing environmental conditions. The open-circuit voltage and short-circuit current methods are simple and easy to implement; however, it is very tedious to find the optimal value of the k factor for changing temperature and irradiance. The open-circuit voltage algorithm suffers from a lower efficiency of about 92%, as it is very tedious to identify the exact MPP; this method also fails to find the MPP when a partially shaded PV module or damaged cells are present. The short-circuit current algorithm has a higher efficiency of about 96%. Its advantage is a quick response, since I_SC is linearly proportional to I_MPP; hence this method also responds faster to changing conditions, and when rapidly changing site conditions are present the efficiency depends on how the method is optimized at the design stage. The implementation cost of this method is relatively low. The open-circuit voltage method is easy to implement, as few parameters need to be measured, and gives moderate efficiencies of about 92%.
Table 1: Comparison of MPPT methods

Specification | Open-circuit voltage | Short-circuit current
Efficiency | Low, about 90% | High, about 94%
Complexity | Very simple, but very difficult to obtain the optimal k1 | Very simple, but very difficult to obtain the optimal k2
Realization | Easy to implement with analog hardware | Easy to implement, as few parameters are measured
Cost | Relatively lower | Relatively lower
Reliability | Not accurate; may not operate exactly at the MPP (below it) | Accurate; operates exactly at the MPP
Rapidly changing atmospheric conditions | Slower response, as V_mp is proportional to V_OC; may not locate the correct MPP | Faster response, as I_mp is proportional to I_SC; locates the correct MPP
k factor | 0.73 < k1 < 0.8, k1 ≈ 0.76; varies with temperature and irradiance | 0.85 < k2 < 0.9, k2 ≈ 0.9; varies with temperature and irradiance

The implementation cost of the open-circuit voltage method is relatively low. The problems with this method are that it gives arbitrary performance, with oscillations around the MPP particularly under rapidly changing conditions, and that it provides a slow response. Sometimes this method is not reliable, as it is difficult to judge whether the algorithm has located the MPP or not. The short-circuit current method offers high efficiencies of about 96%. It has several advantages: it is more accurate, highly efficient, and operates at the maximum power point. This method operates very soundly under rapidly changing atmospheric conditions, as it automatically adjusts the module's operating voltage to track the exact MPP with almost no oscillations.

3.4 Boost Converter

The boost converter steps up the voltage to keep the maximum output voltage constant for all temperature and solar irradiance conditions. A simple boost converter is shown in Figure 6.

Figure 6. Boost topology

For steady-state operation, the average voltage across the inductor over a full switching period is zero, as expressed in equations (16), (17) and (18):

    V_in t_on - (V_o - V_in) t_off = 0         (16)

Therefore,

    V_in D T = (V_o - V_in)(1 - D) T           (17)

and

    V_o / V_in = 1 / (1 - D)                   (18)

By designing this circuit we can also investigate the performance of converters supplied from solar energy. A boost regulator can step up the voltage without a transformer, and because it uses a single switch it has a high efficiency.

3.5 Multilevel Inverter Topology

DC-AC converters have undergone great evolution in the last decade due to their wide use in uninterruptible power supplies and industrial applications. Conventional voltage source inverters produce an output voltage or current with levels of either 0 or ±V_dc and are therefore known as two-level inverters. The aim here is to obtain a quality output voltage (230.2 V rms) or current (4.2 A rms) waveform with a minimum amount of ripple content.

Figure 7. Five-level H-bridge cascade multilevel inverter circuit

IV. SIMULATION RESULTS

The converter circuit topology is designed to be compatible with a given load to achieve maximum power transfer from the solar arrays. The boost converter output is given as input to the five-level H-bridge multilevel inverter. We observed that the designed five-level H-bridge cascade multilevel inverter successfully followed the variations of solar irradiation and temperature. The power is maintained at its maximum value, and the boost converter boosts the voltage under the control of the MPPT. The PV array and boost converter output voltages are thus converted to AC voltages, which are supplied to the grid using the five-level H-bridge cascade multilevel inverter; its characteristics are also presented here. The photovoltaic array V-I and P-V characteristics obtained under varying temperature and varying irradiance conditions are shown in Figs. 8, 9, 10 and 11.

Fig. 8. Variations of V-I characteristics of the PV system with varying irradiance
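The steady-state boost relation of equation (18) is easy to check numerically; the sketch below assumes an ideal, lossless converter in continuous conduction:

```python
def boost_output(v_in, duty):
    """Ideal boost converter steady-state gain, eq. (18): Vo = Vin / (1 - D)."""
    if not 0.0 <= duty < 1.0:
        raise ValueError("duty cycle must lie in [0, 1)")
    return v_in / (1.0 - duty)

def duty_for_output(v_in, v_out):
    """Invert eq. (18): the duty cycle needed to boost v_in up to v_out (v_out >= v_in)."""
    return 1.0 - v_in / v_out
```

With D = 0.5 the output is twice the input, and stepping 100 V up to 400 V requires D = 0.75; in a real converter, conduction and switching losses limit the usable gain.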
Fig. 9. Variations of P-V characteristics of the PV system with varying irradiance

Fig. 10. V-I characteristics of the PV system at three different temperatures

Fig. 11. P-V characteristics of the PV system with varying temperature

Fig. 12. Voltage curve of the PV system with open-circuit voltage MPPT control

Fig. 13. Current curve of the PV system with open-circuit voltage MPPT control

Fig. 14. Power curve of the PV system with open-circuit voltage MPPT control

Fig. 15. Voltage curve of the PV system with short-circuit current MPPT control

Fig. 16. Current curve of the PV system with short-circuit current MPPT control

Fig. 17. Power curve of the PV system with short-circuit current MPPT control

The efficiency of the maximum power point tracker is defined as
    η_MPPT = ∫ P_actual(t) dt / ∫ P_max(t) dt        (19)

with both integrals taken over the observation interval.

Figs. 12, 13 and 14 show the simulated voltage, current and power of the open-circuit voltage method with a radiation of 1000 W/m² and a temperature of 25 °C, whereas Figs. 15, 16 and 17 show the simulated voltage, current and power of the short-circuit current method. The results clearly indicate that the short-circuit current method is comparatively better at tracking the peak power point under these conditions. At STC (1000 W/m², 25 °C), the efficiency of the open-circuit voltage method calculated using equation (19) is 91.95%, and that of the short-circuit current method is 96%. These values are relatively high and validate the two algorithms. The maximum power is 1 kW at these solar irradiation and temperature levels. Figs. 18, 19, 20 and 21 show the gate pulses of the boost converter generated by the short-circuit current MPPT algorithm, and the current, output voltage and power responses of the boost converter. Figs. 22 and 23 show the output voltage and its harmonic spectrum (THD = 11.59%) for the five-level H-bridge multilevel inverter.

Fig. 18. Gate pulse response

Fig. 19. Current response of the boost converter

Fig. 20. Voltage response of the boost converter

Fig. 21. Power response of the boost converter

Fig. 22. Five-level output voltage of the inverter
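The tracker efficiency of equation (19) can be approximated from equally spaced samples of the actual and maximum available power; since the time step cancels in the ratio, plain sums suffice (a sketch, not the authors' Simulink measurement):

```python
def mppt_efficiency(p_actual, p_max):
    """Eq. (19): eta_MPPT = integral of P_actual / integral of P_max over the run.
    Both arguments are equally spaced power samples in watts."""
    if len(p_actual) != len(p_max):
        raise ValueError("power sample series must have the same length")
    return sum(p_actual) / sum(p_max)
```

For example, a tracker that averages 940 W out of an available 1000 W scores 0.94, comparable to the 91.95% and 96% figures reported for the two methods.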
Figure 23. Output voltage with harmonic spectrum (THD = 11.59%)

Table 2: Comparative evaluation of MPPT methods

MPPT method | Open-circuit voltage | Short-circuit current
Voltage (V) | 136.4 | 117
Current (A) | 7.88 | 9.76
Power (W) | 1075 | 1132
Efficiency | 90.4% | 93.4%

Table 3: Comparative evaluation of photovoltaic system parameters with the MPPT methods

Irradiance (W/m²) | Open-circuit voltage (V) | Short-circuit current (A) | Maximum voltage (V) | Maximum current (A) | Maximum power (W)
1000 | 152.4 | 10 | 125 | 9.352 | 1169
800 | 150.1 | 8 | 122.7 | 7.436 | 912.39
600 | 147.2 | 6 | 122.5 | 5.445 | 667.01
400 | 143 | 4 | 116.4 | 3.694 | 429.98

V. CONCLUSIONS

The derivative of the output power P with respect to the panel voltage V is equal to zero at the maximum power point (∂P/∂V = 0). Employing control algorithms improves flexibility and gives a fast response. The methodologies of the two major methods, open-circuit voltage and short-circuit current, have been discussed. The open-circuit voltage method is easy to implement and offers relatively moderate efficiencies, but results in unpredictable performance under rapidly changing conditions. The short-circuit current method is complex and expensive compared to the open-circuit voltage method; however, it gives very high efficiencies of about 96% and performs well with changing radiation and temperature. It can be concluded that, if the economic aspect is not a constraint and rapidly changing site conditions must be handled, the short-circuit current method is the better choice of the two methods discussed. A comprehensive evaluation of these two methods with the simulation results has also been stated. The principle of operation of a five-level H-bridge cascade multilevel inverter topology suitable for photovoltaic applications has been presented in this paper. The cost savings are further enhanced with the proposed cascade multilevel inverter because it requires the least number of
components to achieve the same number of voltage levels. These configurations may also be applied in distributed power generation involving photovoltaic cells. The solar cells in a PV array work only in the part of the volt-ampere characteristic near the working point, where maximum voltage and maximum current can be obtained, so the photovoltaic system works most of the time at maximum efficiency with minimum ripple and harmonics. The perturb-and-observe and incremental conductance algorithms are easy to implement and offer relatively high efficiencies against rapidly changing conditions compared with the above algorithms. Employing microcontrollers and DSP processors improves flexibility and response speed.

ACKNOWLEDGEMENT

We express our sincere thanks to RGMCET for providing good laboratory facilities, and heartfelt gratitude to our beloved supervisor, Professor Ch. Sai Babu, for his tremendous motivation and moral support.

REFERENCES

[1] J. Surya Kumari, Ch. Sai Babu et al., "An Enhancement of Static Performance of Multilevel Inverter for Single Phase Grid Connected Photovoltaic Modules", International Journal of Recent Trends in Engineering, Academy Publishers, Finland, Vol. 3, No. 3, May 2010, pp. 20-24.
[2] J. Surya Kumari, Ch. Sai Babu et al., "Design and Investigation of Short Circuit Current Based Maximum Power Point Tracking for Photovoltaic System", International Journal of Research and Reviews in Electrical and Computer Engineering (IJRRECE), Vol. 1, No. 2, June 2011, ISSN: 2046-5149.
[3] J. Surya Kumari, Ch. Sai Babu et al., "Mathematical Model of Photovoltaic System with Maximum Power Point Tracking (MPPT)", International Conference on Advances in Engineering and Technology (ICAET-2011), May 27-28, 2011.
[4] Balakrishna S., Thansoe, Nabil A., Rajamohan G., Kenneth A. S., Ling C. J., "The Study and Evaluation of Maximum Power Point Tracking Systems", Proceedings of the International Conference on Energy and Environment 2006 (ICEE 2006), organized by University Tenaga Nasional, Bangi, Selangor, Malaysia, 28-30 August 2006, pp. 17-22.
[5] Jawad Ahmad, "A Fractional Open Circuit Voltage Based Maximum Power Point Tracker for Photovoltaic Arrays", Proceedings of the 2nd IEEE International Conference on Software Technology and Engineering (ICSTE 2010), pp. 287-250.
[6] R. Faranda, S. Leva, V. Maugeri, "MPPT Techniques for PV Systems: Energetic and Cost Comparison", Proceedings of the IEEE Power and Energy Society General Meeting - Conversion and Delivery of Electrical Energy in the 21st Century, 2008, pp. 1-6.
[7] I. H. Altas, A. M. Sharaf, "A Photovoltaic Array Simulation Model for Matlab-Simulink GUI Environment", Proceedings of IEEE, 2007.
[8] Abu Tariq, M. S. Jamil, "Development of Analog Maximum Power Point Tracker for Photovoltaic Panel", Proceedings of the IEEE International Conference on Power Electronics and Drive Systems (PEDS 2005), pp. 251-255.
[9] M. A. S. Masoum, H. Dehbonei, "Theoretical and Experimental Analysis of Photovoltaic Systems with Voltage and Current Based Maximum Power Point Trackers", IEEE Transactions on Energy Conversion, Vol. 17, No. 4, pp. 514-522, Dec. 2002.
[10] J. H. R. Enslin, M. S. Wolf, D. B. Snyman and W. Swiegers, "Integrated Photovoltaic Maximum Power Point Tracking Converter", IEEE Transactions on Industrial Electronics, Vol. 44, pp. 769-773, December 1997.
[11] D. Y. Lee, H. J. Noh, D. S. Hyun and I. Choy, "An Improved MPPT Converter Using Current Compensation Methods for Small Scaled PV Applications", Proceedings of APEC, 2003, pp. 540-545.
[12] A. K. Mukerjee, Nivedita Dasgupta, "DC Power Supply Used as Photovoltaic Simulator for Testing MPPT Algorithms", Renewable Energy, Vol. 32, No. 4, pp. 587-592, 2007.
[13] Katsuhiko Ogata, "Modern Control Engineering", Prentice Hall of India Private Limited.
[14] Chihchiang Hua and Chihming Shen, "Study of Maximum Power Tracking Techniques and Control of DC/DC Converters for Photovoltaic Power Systems", IEEE, 1998.
[15] Gui-Jia Su, "Multilevel DC-Link Inverter", IEEE Transactions on Energy Conversion, Vol. 41, No. 3, 2005.
[16] Martina Calais, Vassilios G., "A Transformerless Five Level Cascaded Inverter Based Single-Phase Photovoltaic System", IEEE, 2000.
[17] D. P. Hohm, M. E. Ropp, "Comparative Study of Maximum Power Point Tracking Algorithms", Progress in Photovoltaics: Research and Applications, Wiley Interscience, Vol. 11, No. 1, pp. 47-62, 2003.
[18] D. P. Hohm, M. E. Ropp, "Comparative Study of Maximum Power Point Tracking Algorithms Using an Experimental, Programmable, Maximum Power Point Tracking Test Bed", [Online], Available: IEEE Xplore Database [12th July 2006].
[19] V. Salas, E. Olias, A. Barrado, and A. Lazaro, "Review of Maximum Power Point Tracking Algorithms for Standalone Photovoltaic Systems", Solar Energy Materials and Solar Cells, Vol. 90, No. 11, pp. 1555-1578, July 2006.
[20] Mohammad A. S. Masoum, Hooman Dehbonei and Ewald F. Fuchs, "Theoretical and Experimental Analysis of Photovoltaic Systems with Voltage- and Current-Based Maximum Power Point Tracking", IEEE Transactions on Energy Conversion, Vol. 17, No. 4, December 2002.
[21] Yang Chen, Jack Brouwer, "A New Maximum Power Point Tracking Controller for Photovoltaic Power Generation", IEEE, 2003.
[22] Yeong-Chau Kuo, Tsorng-Juu Liang, Jiann-Fuh Chen, "Novel Maximum Power Point Tracking Controller for Photovoltaic Energy Conversion System", IEEE Transactions on Industrial Electronics, Vol. 48, No. 3, June 2001.
[23] K. H. Hussein, I. Muta, T. Hoshino, M. Osakada, "Maximum Photovoltaic Power Tracking: An Algorithm for Rapidly Changing Atmospheric Conditions", IEE Proceedings - Generation, Transmission and Distribution, Vol. 142, No. 1, January 1995.
[24] T. J. Liang, J. F. Chen, T. C. Mi, Y. C. Kuo and C. A. Cheng, "Study and Implementation of DSP-Based Photovoltaic Energy Conversion System", IEEE, 2001.
[25] Chihchiang Hua, Jongrong Lin and Chihming Shen, "Implementation of a DSP-Controlled Photovoltaic System with Peak Power Tracking", IEEE Transactions on Industrial Electronics, Vol. 45, No. 1, February 1998.

J. Surya Kumari was born in Kurnool, India, in 1981. She received the B.Tech (Electrical and Electronics Engineering) degree from S.K. University, India, in 2002 and the M.Tech (High Voltage Engineering) degree from J.N.T. University, Kakinada, in 2006. In 2005 she joined the Department of Electrical and Electronics Engineering, R.G.M. College of Engineering and Technology, Nandyal, as an Assistant Professor. She has published in several national and international journals and conferences. Her fields of interest include power electronics, photovoltaic systems, power systems and high-voltage engineering.

Ch. Sai Babu received the B.E. from Andhra University (Electrical & Electronics Engineering), the M.Tech in Electrical Machines and Industrial Drives from REC, Warangal, and the Ph.D in Reliability Studies of HVDC Converters from JNTU, Hyderabad. Currently he is working as a Professor in the Department of EEE at JNTUCEK, Kakinada. He has published in several national and international journals and conferences. His areas of interest are power electronics and drives, power system reliability, HVDC converter reliability, optimization of electrical systems and real-time energy management.
POWER QUALITY DISTURBANCE ON PERFORMANCE OF VECTOR CONTROLLED VARIABLE FREQUENCY INDUCTION MOTOR

A. N. Malleswara Rao¹, K. Ramesh Reddy², B. V. Sanker Ram³
¹Research Scholar, JNT University Hyderabad, Hyderabad, India
²G. Narayanamma Institute of Science and Technology, Hyderabad, India
³JNTU College of Engineering, JNTUH, Hyderabad, India

ABSTRACT

Sensitive equipment and non-linear loads are now more common in both the industrial/commercial sector and the domestic environment. Because of this, a heightened awareness of power quality is developing among electricity users, and power quality is becoming increasingly important to electricity consumers at all levels of usage. Continuous variation of single-phase loads on the power system network leads to voltage variation and unbalance; most importantly, the three-phase voltages tend to become asymmetrical. Application of asymmetrical voltages to induction-motor-driven systems severely affects their working performance. Simulation of an induction motor under various voltage sag conditions using Matlab/Simulink is presented in this paper. The variation of input current, speed and output torque for a vector-controlled variable-frequency induction motor drive is investigated. Simulation results show that the variation of speed and current in the motor-drive system depends essentially on the size of the dc-link capacitor, and that the largest reduction of the dc-link voltage happens during voltage sag. It is also observed that as the power quality becomes poor, the motor speed decreases, causing a significant rise in power input to meet the rated load demand.

KEYWORDS: Power quality disturbance, Sag, Vector Controlled Induction Drive

I. INTRODUCTION

Electric power quality (PQ) has captured much attention from utility companies as well as their customers. The major reasons for growing concern are the continued proliferation of sensitive equipment and the increasing application of power electronic devices, which result in power supply degradation [1]. PQ has recently acquired intensified interest due to the widespread use of microprocessor-based devices and controllers in a large number of complicated industrial processes [2]. The proper diagnosis of PQ problems requires a high level of engineering ability, and the increased requirements on supervision, control and performance in modern power systems make power quality monitoring a common practice for utilities [3].

In general, the main PQ issues can be identified as voltage variation, voltage imbalance, voltage fluctuations, low frequency, transients, interruptions, harmonic distortion, etc. The consequences of one or more of these non-ideal conditions may be thermal effects, reduction of life expectancy, loss of dielectric strength and mis-operation of different equipment. Furthermore, PQ can have a direct economic impact on technical as well as financial aspects by increasing power consumption and the electricity bill [4]. The PQ problems affecting induction motor performance are harmonics, voltage unbalance, voltage sags, interruptions, etc. Voltage sags are mainly caused by faults on transmission or distribution systems, and it is normally assumed that they have a rectangular shape [5]. This assumption is based on neglecting any change in the fault impedance during the fault. However, the assumption does not hold in the presence of induction motors and longer-duration faults, since the shape of the voltage sag in such cases is deformed by the motors' dynamic response [6]. When a voltage sag appears at the terminals of an induction motor, the torque and speed of the motor decrease to levels lower than their nominal values. When the voltage sag is over, the induction motor attempts to re-accelerate, drawing an excessive amount of current from the power supply.

In this paper, first, various types of voltage sag are simulated in the Matlab/Simulink environment. Thereafter, the performance of a vector-controlled variable-frequency induction motor (VCVF IM) drive system is simulated and the results are analyzed in order to identify the parameters affecting the drive-motor performance.

II. TYPES OF SAGS

Different kinds of faults in power systems produce different types of voltage sag, and the types of transformer connections in the power grid play a significant role in determining the sag type [7]. Voltage sags are divided into seven groups, types A through G, as shown in Table I, in which "h" indicates the sag magnitude. Type A is symmetrical and the other types are known as unsymmetrical voltage sags.

Several power quality problems can affect induction motor behaviour, such as voltage sags (affecting torque, power and speed), harmonics (causing losses and affecting torque), voltage unbalance (causing losses), short interruptions (causing mechanical shock), impulse surges (affecting insulation), overvoltage (reducing expected lifetime), and undervoltage (causing overheating and low speed). There are several power quality issues which until today were normally not included in motor protection studies.
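As a numerical preview of the phasor expressions tabulated in Table I, the symmetrical type A sag and the unsymmetrical type C sag can be built as follows (per-unit quantities assumed; the function names are illustrative, not from the paper):

```python
import cmath
import math

def type_a(h, v=1.0):
    """Type A (symmetrical) sag: all three phase phasors scaled by the sag magnitude h."""
    a = cmath.exp(-2j * math.pi / 3)   # 120-degree rotation operator
    return [h * v, h * v * a, h * v * a.conjugate()]

def type_c(h, v=1.0):
    """Type C sag: phase a retained, the imaginary parts of phases b and c scaled by h."""
    vb = -0.5 * v - 1j * (math.sqrt(3) / 2) * h * v
    return [v, vb, vb.conjugate()]
```

For h = 0.5, type A scales every phase to 0.5 p.u., while type C leaves phase a at 1 p.u. and only compresses the quadrature components of phases b and c, which is why unsymmetrical sags stress an induction motor differently from symmetrical ones.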
However, they should be taken into consideration due totheir increasing influence. Other actual power quality problems have been considered for many yearsnow, such as voltage imbalance, under voltages, and interruptions [8].This type of problems is intensified today because power requirements of sensitive equipment, andvoltage– frequency pollution have increased drastically during recent years. The actual trend isanticipated to be maintained in the near future. Principally, voltage amplitude variations cause thepresent power quality problems. Voltage sags are the origin of voltage amplitude reduction togetherwith phase-angle shift and waveform distortion and result in having different effects on sensitiveequipment. Voltage sags, voltage swells, overvoltages, and undervoltages are considered such asamplitude variations [8].New power quality requirements have a great effect on motor protection, due to the increasinglypopular fast reconnection to the same source or to an alternative source. The characteristics of boththe motor and supply system load at the reconnection time instant are critical for the motor behavior.Harmless voltage sags can be the origin of great load loss (load drop) due to the protection devicesensitivity TABLE-I : Types of Sags Type A Type B Va = hV Va = hV 1 1 1 1 Vb = − hV − jhV 3 Vb = − V − jV 3 2 2 2 2 1 1 1 1 Vb = − hV + jhV 3 Vb = − V + jV 3 2 2 2 2 150 Vol. 1, Issue 5, pp. 149-157
    • International Journal of Advances in Engineering & Technology, Nov 2011.©IJAET ISSN: 2231-1963 Type C Type D Va = V Va = hV 1 1 1 1 Vb = − V − jhV 3 Vb = − hV − jV 3 2 2 2 2 1 1 1 1 Vb = − V + jhV 3 Vb = − hV + jV 3 2 2 2 2 Type E Type F Va = V Va = hV 1 1 1 1 1 Vb = − hV − jhV 3 Vb = − jV 3 − hV − jhV 3 2 2 3 2 6 1 1 1 1 1 Vb = − hV + jhV 3 Vc = + jV 3 − hV + jhV 3 2 2 3 2 6 Type G 2 h Va = ( + )V 3 3 1 1 Where 0 ≤ h < 1 Vb = − (2 + h)V − hVj 3 6 2 (h= sag magnitude) 1 1 Vb = − ( 2 + h)V + hVj 3 6 2 2.1 Symmetrical FaultsThe voltage during the fault at the point-of-common coupling (pcc) between the load and the fault canbe calculated from the voltage-divider model shown in Figure 1. Figure 1. Voltage divider model for voltage sags due to faults.For three-phase faults, the following expression holds: Z F+ V= E ----(1) Z F + + Z s+where ZS+ and ZF+ are the positive-sequence impedance of source at the pcc and impedancebetween the pcc and faulty point including the fault impedance itself. Through this relation it can beconcluded that the current through the faulted feeder is the main cause for the voltage drop [8].2.2 Non-Symmetrical Faults 151 Vol. 1, Issue 5, pp. 149-157
For non-symmetrical faults the expressions are similar but slightly more complicated, and they lead to the characterization of unbalanced dips due to non-symmetrical faults. For two-phase-to-ground and phase-to-phase faults the characteristic voltage is found from (2); for single-phase faults the zero-sequence quantities also affect the result:

$$V = \frac{Z_{F1} + \frac{1}{2}(Z_{F0} + Z_{S0})}{Z_{F1} + Z_{S1} + \frac{1}{2}(Z_{F0} + Z_{S0})} E \quad (2)$$

where $Z_{S0}$ and $Z_{F0}$ are the zero-sequence source impedance at the pcc and the zero-sequence impedance between the fault and the pcc, respectively [9]. For two-phase-to-ground faults it can also be obtained from:

$$V = \frac{Z_{F1} + 2(Z_{F0} + Z_{S0})}{Z_{F1} + Z_{S1} + 2(Z_{F0} + Z_{S0})} E \quad (3)$$

The main assumptions behind these equations are that the positive-sequence and negative-sequence impedances are equal and that all impedances are constant and time independent. They lead to a "rectangular dip" with a sharp drop in rms voltage, a constant rms voltage during the fault, and a sharp recovery. Under the assumption of constant impedance, all load impedances can be included in the source voltage and impedance equivalent, and the voltages at the motor terminals are equal to the voltages at the pcc.

III. BEHAVIOUR OF AN INDUCTION MOTOR SUPPLIED WITH NON-SINUSOIDAL VOLTAGE

When induction motors are connected to a distorted supply voltage, their losses increase. These losses can be classified into four groups:

1) Losses in the stator and rotor conductors, known as copper losses or Joule effect losses.
2) Losses in the terminal sections, due to harmonic dispersion flows.
3) Losses in the iron core, including hysteresis and Foucault (eddy-current) effects; these increase with the order of the harmonic involved and can reach significant values when feeding motors having skewed rotors with waveforms which contain high-frequency harmonics [7,8,9].
4) Losses in the air gap.
The pulsating harmonic torques produced by the interaction of the air-gap flux with the rotor harmonic currents cause an increase in the energy consumed.

These increased losses reduce the motor's life. Further information on each of the groups is given below. The effect of the copper losses intensifies in the presence of high-frequency harmonics, which augment the skin effect, reducing the conductors' effective section and so increasing their physical resistance [10].

3.1 Induction Motor Behaviour

The study can be done experimentally or analytically, by using dynamic load models mainly designed for stability analysis, but these are rather complicated, requiring precise system data and high-level software [11-13]. Therefore, in this investigation, the study is adopted as a preliminary step. When a temporary interruption or voltage sag takes place, with a time duration between 3 seconds and 1 minute, the whole production process will be disrupted. Keeping the motor running is useless because most of the sensitive equipment will drop out. The induction motor should be disconnected, and the restart process should begin at the supply recovery, taking into account the reduction and control of the hot-load pickup phenomenon.

Keeping the motor connected to the supply during voltage sags and short interruptions, rather than disconnecting and restarting it, is advantageous from the system's stability point of view. It is necessary to avoid electromagnetic contactor drop-out during transients. This scheme improves the
system ride-through ability due to the reduction of the reacceleration inrush [14]. Such events result in an initial reduction of the motor speed, the motor maintaining for a while a higher voltage supplied by its internal, or back, electromotive force (emf). The voltage reduction is governed by the stored energy dissipation through the available closed circuits, which are the internal rotor circuit (including the magnetizing inductance) and the external circuit composed of the load (paralleled by the faulted path in the case of fault-originated voltage sags). The whole circuit time constant determines the trend which the decaying voltage will follow until the final voltage magnitude is reached or the event is ended. When the transient ends, the motor speed increases, demanding more energy from the supply until the steady-state speed is reached. The load torque in this case shows very different characteristics as compared to normal start-up conditions, due to several reasons such as the motor generated voltage that might be out of phase, heavily loaded machinery, and a rigorous hot-load pickup [15].

As mentioned above, the single line-to-ground fault is the most probable type of fault, and through a ∆Y transformer it is transferred as a two-phase voltage sag, in which case normal and extremely deep voltage sags should be considered as a case of transient unbalanced supply. The effect of voltage unbalance is a decrease of the developed torque and an increase of the copper loss due to the negative-sequence currents. The thermal effect over the short durations considered can be neglected. Besides, three-phase voltage events represent the worst stability condition. Therefore, only balanced phenomena were experimentally studied here, leaving the unbalanced behaviour for future investigation [16],[17].

IV.
CASE STUDY AND SIMULATION RESULTS

This paper also investigates the impact of power quality on sensitive devices. At this stage, the focus is on the operating characteristics of a vector controlled variable frequency induction motor drive (as shown in Fig. 2) in the presence of sag events. The motor under consideration is a 50 HP, 460 V, 60 Hz asynchronous machine. A DC voltage of 780 V average is obtained at the DC link from the diode bridge rectifier, which takes a nominal 3-phase (star connected) input of 580 V rms line-to-line. Voltage sags are normally described by magnitude variation and duration. In addition to these quantities, sags are also characterized by unbalance, non-sinusoidal wave shapes, and phase-angle shifts.

Fig 2. Vector controlled Variable Frequency Induction Motor Drive
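A rectangular sag characterized by its retained magnitude h, duration, and phase-angle shift, as described above, can be illustrated with a short numeric sketch. Every parameter value below is an illustrative default, not the drive's actual case data, and the function is not part of the simulation model used in the paper.

```python
import math

def sag_voltage(t, v_peak=1.0, f=60.0, h=0.5, t_start=0.05, t_end=0.15, phase_jump=0.0):
    """Instantaneous phase voltage with a rectangular sag between t_start and t_end.

    h is the retained (during-event) voltage in per unit, as in Table I;
    phase_jump is the phase-angle shift in radians. All defaults are
    illustrative only."""
    w = 2.0 * math.pi * f
    if t_start <= t < t_end:
        # during-fault: reduced magnitude plus phase-angle shift
        return h * v_peak * math.sin(w * t + phase_jump)
    # pre-fault / post-fault: nominal sinusoid
    return v_peak * math.sin(w * t)

# sample the waveform a quarter cycle into a sag of depth h = 0.5
print(sag_voltage(1 / 240, h=0.5, t_start=0.0, t_end=1.0))
```

The same function evaluated outside the [t_start, t_end) window returns the nominal waveform, which is what produces the sharp drop and sharp recovery of the "rectangular dip" discussed in Section 2.2.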
Fig 3: Waveforms of 3-phase currents and Vdc during LG fault
Fig 4: Waveforms of Vabc, Iabc, speed and torque during LG fault
Fig 5: Waveforms of 3-phase currents and Vdc during LLG fault
Fig 6: Waveforms of Vabc, Iabc, speed and torque during LLG fault
Fig 7: Waveforms of 3-phase currents and Vdc during 3-phase fault
Fig 8: Waveforms of Vabc, Iabc, speed and torque during 3-phase fault

Figs. 3-8 illustrate the disturbance inputs, the fall in DC link voltage, and the change in rotor speed for Case C, corresponding to the sag event that occurs at time t = 3 seconds when Phase A and Phase B experience
a line-to-ground fault. The fall in DC link voltage and the rotor speed are observed for the period of the event. When normal supply resumes, the DC link voltage stabilises at 780 V and the rotor speed at 120 radians per second. Different kinds of short-circuit faults on the network result in voltage sags, such as single phase-to-ground, phase-to-phase, 2-phase-to-ground and 3-phase-to-ground faults. Studying the speed variation waveform of the induction motor due to the different voltage sags caused by such faults at a specific place in the network, as shown in Figure 5, it is seen that a single phase-to-ground fault causes the least variation in the speed profile while a 3-phase-to-ground fault causes the highest variation. Also, the ability of the drive to ride through a voltage sag event depends upon the energy storage capacity of the DC link capacitor, the speed and inertia of the load, the power consumed by the load, and the trip point settings of the drive. The control system of the drive has a great impact on the behaviour of the drive during the sag and after recovery. The trip point settings can be adjusted to avoid many nuisance trips resulting from minor sags which may not affect the speed of the motor. Table II shows three cases of inputs "A" to "C" supplied as unbalanced sags to the above system, and the corresponding outputs observed.

TABLE II: Simulation results

INPUT                                  LG       LLG      3φ Fault
Sag magnitude (p.u.):   Phase A       0.1      1        0.1
                        Phase B       1        0.1      0.1
                        Phase C       1        0.1      0.1
Start time of sag (sec)               4        4        4
Duration of sag (sec)                 1        1        1
Phase angle shift (rad): Phase A      0        0        0
                        Phase B       -1.047   0        0
                        Phase C       1.047    0        0
Load torque (N-m)                     50       50       50
Start time of load (sec)              0        0        0
Duration of load (sec)                4        4        4
Reference rotor speed (rad/s)         120      120      120

OBSERVATIONS
Nominal DC link voltage (V)           780      780      780
DC link voltage during event (V)      450      370      250
Change in DC link voltage (%)         42.3     52.6     68
Rotor speed during event (rad/s)      120      93       25
Change in rotor speed (%)             0        22.5     79.7

V. CONCLUSIONS

Voltage sags and short-time interruptions are a main power quality problem for the induction motors utilized in industrial networks. Such problems can also lead to unbalanced voltages in the network. Their result is the effect on the torque, power and speed characteristics of the motor and an increase in the losses. In this paper, the effects of short interruptions and voltage sags on motor behaviour were studied; through simulations done with MATLAB, the different behaviours of induction motors due to voltage sags from different origins and other related problems were investigated. In addition, the amount of effect of the different sources of faults leading to voltage sags and unbalanced voltage sags was observed. The behaviour of a vector controlled variable frequency induction motor drive in the presence of sag events has been simulated as our initial investigation of the impact of power quality on sensitive equipment.

REFERENCES
[1] C. Sankaran, Power Quality, CRC Press, 2002.
[2] M. H. J. Bollen, "The influence of motor reacceleration on voltage sags," IEEE Trans. on Industry Applications, Vol. 31, pp. 667-674, July/Aug. 1995.
[3] J. W. Shaffer, "Air conditioner response to transmission faults," IEEE Trans. on Power Systems, Vol. 12, pp. 614-621, May 1997.
[4] E. W. Gunther and H. Mehta, "A survey of distribution system power quality—Preliminary results," IEEE Trans. on Power Delivery, Vol. 10, pp. 322-329, Jan. 1995.
[5] L. Tang, J.
Lamoree, M. McGranagham, and H. Mehta, "Distribution system voltage sags: Interaction with motor and drive loads," in Proc. IEEE Transmission and Distribution Conf., Chicago, IL, pp. 1-6, 1994.
[6] D. S. Dorr, M. B. Hughes, T. M. Gruzs, R. E. Jurewicz, and J. L. McClaine, "Interpreting recent power quality surveys to define the electrical environment," IEEE Trans. on Industry Applications, Vol. 33, pp. 1480-1487, Nov./Dec. 1997.
[7] C. Y. Lee, "Effects of unbalanced voltage on the operation performance of a three-phase induction motor," IEEE Trans. on Energy Conversion, Vol. 14, pp. 202-208, June 1999.
[8] M. H. J. Bollen, M. Hager, C. Roxenius, "Effect of induction motors and other loads on voltage dips: Theory and measurement," Proc. IEEE PowerTech Conf., Italy, June 2003.
[9] W. H. Kersting, "Causes and effects of unbalanced voltages serving an induction motor," IEEE Trans. on Industry Applications, Vol. 37, No. 1, pp. 165-170, Jan./Feb. 2001.
[10] G. Yalcinkaya, M. J. Bollen, P. A. Crossley, "Characterization of voltage sags in industrial distribution systems," IEEE Trans. on Industry Applications, Vol. 34, No. 4, pp. 682-688, July 1998.
[11] S. S. Mulukutla and E. M. Gualachenski, "A critical survey of considerations in maintaining process continuity during voltage dips while protecting motors with reclosing and bus-transfer practices," IEEE Trans. on Power Systems, Vol. 7, pp. 1299-1305, Aug. 1992.
[12] J. C. Das, "Effects of momentary voltage dips on the operation of induction and synchronous motors," IEEE Trans. on Industry Applications, Vol. 26, pp. 711-718, July/Aug. 1990.
[13] T. S. Key, "Predicting behavior of induction motors during service faults and interruptions," IEEE Industry Applications Magazine, Vol. 1, pp. 6-11, Jan. 1995.
[14] J. C. Gomez, M. M. Morcos, C. A. Reineri, G. N. Campetelli, "Behaviour of induction motor due to voltage sags and short interruptions," IEEE Trans. on Power Delivery, Vol. 17, No. 2, pp. 434-440, April 2002.
[15] J. C. Gomez, M. M. Morcos, C. Reineri, G.
Campetelli, "Induction motor behaviour under short interruptions and voltage sags: An experimental study," IEEE Power Engineering Review, Vol. 21, pp. 11-15, Feb. 2001.
[16] A. N. Malleswara Rao, Dr. K. Ramesh Reddy and Dr. B. V. Sanker Ram, "A new approach to diagnosis of power quality problems using expert system," International Journal of Advanced Engineering Sciences and Technologies, Vol. 7, Issue 2, pp. 290-297.
[17] A. N. Malleswara Rao, Dr. K. Ramesh Reddy and Dr. B. V. Sanker Ram, "Effects of harmonics in an electrical system," International Journal of Advances in Science and Technology (IJAET), Vol. 3, Issue 2, pp. 25-30.

AUTHORS

A. N. Malleswara Rao received a B.E. in Electrical and Electronics Engineering from Andhra University, Visakhapatnam, India in 1999, and an M.Tech in Electrical Engineering from JNT University, Hyderabad, India. He is a Ph.D student at the Department of Electrical Engineering, JNT University, Hyderabad, India. His research and study interests include power quality and power electronics.

K. Ramesh Reddy received a B.Tech. in Electrical and Electronics Engineering from Nagarjuna University, Nagarjuna Nagar, India in 1985, an M.Tech in Electrical Engineering from the National Institute of Technology (formerly Regional Engineering College), Warangal, India in 1989, and a Ph.D from SV University, Tirupathi, India in 2004. Presently he is Head of the Department and Dean of PG Studies in the Department of Electrical & Electronics Engineering, G. Narayanamma Institute of Technology & Science (For Women), Hyderabad, India. Prof. Ramesh Reddy is the author of 16 journal and conference papers, and the author of two textbooks. His research and study interests include power quality, harmonics in power systems and multi-phase systems.

B. V. Sanker Ram received a B.E. in Electrical Engineering from Osmania University, Hyderabad, India in 1982, an M.Tech in Power Systems from Osmania University, Hyderabad, India in 1984, and a Ph.D from JNT University, Hyderabad, India in 2003.
Presently he is a professor in Electrical & Electronics Engineering, JNT University, Hyderabad, India. Prof. Sanker Ram is the author of about 25 journal and conference papers. His research and study interests include power quality, control systems and FACTS.
INTELLIGENT INVERSE KINEMATIC CONTROL OF SCORBOT-ER V PLUS ROBOT MANIPULATOR

Himanshu Chaudhary and Rajendra Prasad
Department of Electrical Engineering, IIT Roorkee, India

ABSTRACT

In this paper, an Adaptive Neuro-Fuzzy Inference System (ANFIS) method based on the Artificial Neural Network (ANN) is applied to design a controller for the inverse kinematic control of the SCORBOT-ER V Plus. The proposed ANFIS controller combines the advantages of a fuzzy controller with the quick response and adaptability of an Artificial Neural Network (ANN). The ANFIS structures were trained using the database generated by the fuzzy controller of the SCORBOT-ER V Plus. The performance of the proposed system has been compared with the experimental setup prepared with the SCORBOT-ER V Plus robot manipulator. Computer simulation is conducted to demonstrate the accuracy of the proposed controller in generating the appropriate joint angles for reaching a desired Cartesian state, without any error. The entire system has been modeled using MATLAB 2011.

KEYWORDS: DOF, BPN, ANFIS, ANN, RBF, BP

I. INTRODUCTION

The inverse kinematic solution plays an important role in the modelling of a robotic arm. As the DOF (Degrees of Freedom) of a robot is increased, it becomes a difficult task to find the solution through inverse kinematics. Three traditional methods used for calculating the inverse kinematics of any robot manipulator are the geometric [1][2], algebraic [3][4][5] and iterative [6] methods. Algebraic methods cannot guarantee closed-form solutions, while geometric methods must have closed-form solutions for the first three joints of the manipulator geometrically.
The iterative methods converge only to a single solution, and this solution depends on the starting point.

The architecture and learning procedure underlying ANFIS, which is a fuzzy inference system implemented in the framework of adaptive networks, was presented in [7]. By using a hybrid learning procedure, the proposed ANFIS was able to construct an input-output mapping based on both human knowledge (in the form of fuzzy if-then rules) and stipulated input-output data pairs.

A neuro-genetic approach for the solution of the inverse kinematics problem of robotic manipulators was proposed in [8]. A multilayer feed-forward network was applied to the inverse kinematic problem of a 3-degrees-of-freedom (DOF) spatial manipulator robot in [9] to obtain an algorithmic solution. To solve the inverse kinematics problem for three different cases of a 3-degrees-of-freedom (DOF) manipulator in 3D space, a solution was proposed in [10] using feed-forward neural networks. This introduces the fault-tolerance and high-speed advantages of neural networks to the inverse kinematics problem.

A three-layer partially recurrent neural network was proposed by [11] for trajectory planning and for solving the inverse kinematics as well as the inverse dynamics problems in a single processing stage for the PUMA 560 manipulator.

A hierarchical control technique was proposed in [12] for controlling a robotic manipulator. It was based on the establishment of a non-linear mapping between Cartesian and joint coordinates using fuzzy logic in order to direct each individual joint. A commercial Microbot with three degrees of freedom was utilized to evaluate this methodology. A structured neural-network based solution that could be trained quickly was suggested in [13]. The proposed method yields multiple and precise solutions and is suitable for real-time applications.

Vol. 1, Issue 5, pp. 158-169
To overcome the discontinuity of the inverse kinematics function, a novel modular neural network system consisting of a number of expert neural networks was proposed in [14]. A neural-network based inverse kinematics solution of a robotic manipulator was suggested in [15]. In this study, three-joint robotic manipulator simulation software was developed and a designed neural network was then used to solve the inverse kinematics problem. An Artificial Neural Network (ANN) using the backpropagation algorithm was applied in [16] to solve inverse kinematics problems of an industrial robot manipulator. The inverse kinematic solution of the MOTOMAN manipulator using an Artificial Neural Network was implemented in [17]. Radial basis function (RBF) networks were used to capture the nonlinear mapping between the joint space and the operation space of the robot manipulator, which in turn showed better computation precision and faster convergence than back propagation (BP) networks. The Bees Algorithm was used to train multi-layer perceptron neural networks in [18] to model the inverse kinematics of an articulated robot manipulator arm.

This paper is organized as follows. In the next section, the kinematics analysis (forward as well as inverse kinematics) of the SCORBOT-ER V Plus is derived with the help of the DH algorithm as well as conventional techniques such as the geometric [1][2], algebraic [3][4][5] and iterative [6] methods. The basics of ANFIS are introduced in Section 3, which also explains the input selection for ANFIS modeling. Simulation results are discussed in Section 4. Section 5 gives concluding remarks.

II. KINEMATICS OF SCORBOT-ER V PLUS

The SCORBOT-ER V Plus [19] is a vertical articulated robot with five revolute joints. It has a stationary base, shoulder, elbow, tool pitch and tool roll. Figure 1.1 identifies the joints and links of the mechanical arm.

2.1.
SCORBOT–ER V PLUS STRUCTURE

All joints are revolute, and with an attached gripper the robot has six degrees of freedom. Each joint is restricted by its mechanical rotation limits, shown below.

Joint limits:
Axis 1: Base Rotation: 310°
Axis 2: Shoulder Rotation: +130° / −35°
Axis 3: Elbow Rotation: ±130°
Axis 4: Wrist Pitch: ±130°
Axis 5: Wrist Roll: unlimited (electrically 570°)
Maximum gripper opening: 75 mm (3") without rubber pads, 65 mm (2.6") with rubber pads

The lengths of the links and the degrees of rotation of the joints determine the robot's work envelope. Figures 1.2 and 1.3 show the dimensions and reach of the SCORBOT-ER V Plus. The base of the robot is normally fixed to a stationary work surface. It may, however, be attached to a slide base, resulting in an extended working range.
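The joint limits above can be turned into a small validation helper before commanding a pose. This is an illustrative sketch only, not part of the SCORBOT controller software; it assumes the 310° base travel is expressed symmetrically as ±155°, matching the operating-range column of the DH table in the next section.

```python
# Joint rotation limits of the SCORBOT-ER V Plus, in degrees, taken from the
# limits listed above (base assumed symmetric at +/-155 deg; roll limit is the
# electrical one).
JOINT_LIMITS = {
    "base":     (-155.0, 155.0),   # axis 1: 310 deg total travel
    "shoulder": (-35.0, 130.0),    # axis 2
    "elbow":    (-130.0, 130.0),   # axis 3
    "pitch":    (-130.0, 130.0),   # axis 4
    "roll":     (-570.0, 570.0),   # axis 5: electrically 570 deg
}

def within_limits(angles):
    """Return the joints whose commanded angle violates its limit.

    `angles` maps joint name -> angle in degrees. The helper and its joint
    names are hypothetical, introduced only for this sketch."""
    return [joint for joint, a in angles.items()
            if not (JOINT_LIMITS[joint][0] <= a <= JOINT_LIMITS[joint][1])]

print(within_limits({"base": 100.0, "shoulder": -40.0}))  # shoulder exceeds -35 deg
```

A pose that passes this check is still subject to the work-envelope constraints shown in Figures 1.2 and 1.3, which the limits alone do not capture.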
2.2. FRAME ASSIGNMENT TO SCORBOT–ER V PLUS

For the kinematic model of the SCORBOT we first have to assign a frame to each link, starting from the base (frame {0}) to the end-effector (frame {5}). The frame assignment is shown in Figure 1.4. In the model, frame {3} and frame {4} coincide at the same joint, and frame {5} is the end-effector position in space.

Joint i | α_i   | a_i (mm) | d_i (mm) | θ_i        | Operating range
1       | −π/2  | 16       | 349      | θ1         | −155° to +155°
2       | 0     | 221      | 0        | θ2         | −35° to +130°
3       | 0     | 221      | 0        | θ3         | −130° to +130°
4       | π/2   | 0        | 0        | π/2 + θ4   | −130° to +130°
5       | 0     | 0        | 145      | θ5         | −570° to +570°

2.3 FORWARD KINEMATICS OF SCORBOT–ER V PLUS

Once the DH coordinate system has been established for each link, a homogeneous transformation matrix can easily be developed relating frame {i−1} and frame {i}. This transformation consists of four basic transformations:

$$^0T_5 = \,^0T_1 \; ^1T_2 \; ^2T_3 \; ^3T_4 \; ^4T_5 \quad (1)$$

$$^0T_1 = \begin{bmatrix} C_1 & 0 & -S_1 & a_1 C_1 \\ S_1 & 0 & C_1 & a_1 S_1 \\ 0 & -1 & 0 & d_1 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (2)$$
$$^1T_2 = \begin{bmatrix} C_2 & -S_2 & 0 & a_2 C_2 \\ S_2 & C_2 & 0 & a_2 S_2 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (3)$$

$$^2T_3 = \begin{bmatrix} C_3 & -S_3 & 0 & a_3 C_3 \\ S_3 & C_3 & 0 & a_3 S_3 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (4)$$

$$^3T_4 = \begin{bmatrix} -S_4 & 0 & C_4 & 0 \\ C_4 & 0 & S_4 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (5)$$

$$^4T_5 = \begin{bmatrix} C_5 & -S_5 & 0 & 0 \\ S_5 & C_5 & 0 & 0 \\ 0 & 0 & 1 & d_5 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (6)$$

Finally, the overall transformation matrix is:

$$T = \,^0T_5 = \begin{bmatrix} -S_1 S_5 - C_1 C_5 S_{234} & -C_5 S_1 + C_1 S_5 S_{234} & C_1 C_{234} & C_1 (a_1 + a_2 C_2 + a_3 C_{23} + d_5 C_{234}) \\ C_1 S_5 - S_1 C_5 S_{234} & C_1 C_5 + S_1 S_5 S_{234} & S_1 C_{234} & S_1 (a_1 + a_2 C_2 + a_3 C_{23} + d_5 C_{234}) \\ -C_5 C_{234} & S_5 C_{234} & -S_{234} & d_1 - a_2 S_2 - a_3 S_{23} - d_5 S_{234} \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (7)$$

where $C_1 = \cos(\theta_1)$, $S_1 = \sin(\theta_1)$, $C_{234} = \cos(\theta_2+\theta_3+\theta_4)$ and $S_{234} = \sin(\theta_2+\theta_3+\theta_4)$.

T is the overall transformation matrix of the kinematic model of the SCORBOT-ER V Plus; the position and orientation of the end-effector with respect to the base are extracted from it in the following section.

2.4 OBTAINING POSITION IN CARTESIAN SPACE

The values of $X$, $Y$, $Z$ are found from the last column of the transformation matrix:

$$X = C_1 (a_1 + a_2 C_2 + a_3 C_{23} + d_5 C_{234}) \quad (8)$$
$$Y = S_1 (a_1 + a_2 C_2 + a_3 C_{23} + d_5 C_{234}) \quad (9)$$
$$Z = d_1 - a_2 S_2 - a_3 S_{23} - d_5 S_{234} \quad (10)$$

For the orientation of the end-effector, frame {5} and frame {1} should coincide with the same axes, but in our model they do not, so we have to take a rotation of −90° of frame {5} about the $y_5$ axis. The overall rotation matrix is therefore multiplied by:

$$R_y(-90^\circ) = \begin{bmatrix} \cos(-90^\circ) & 0 & \sin(-90^\circ) \\ 0 & 1 & 0 \\ -\sin(-90^\circ) & 0 & \cos(-90^\circ) \end{bmatrix} = \begin{bmatrix} 0 & 0 & -1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix} \quad (11)$$

The rotation matrix is:

$$R = \begin{bmatrix} 0 & 0 & -1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix} \times \begin{bmatrix} -S_1 S_5 - C_1 C_5 S_{234} & -C_5 S_1 + C_1 S_5 S_{234} & C_1 C_{234} \\ C_1 S_5 - S_1 C_5 S_{234} & C_1 C_5 + S_1 S_5 S_{234} & S_1 C_{234} \\ -C_5 C_{234} & S_5 C_{234} & -S_{234} \end{bmatrix}$$
$$R = \begin{bmatrix} C_5 C_{234} & -S_5 C_{234} & S_{234} \\ C_1 S_5 - S_1 C_5 S_{234} & C_1 C_5 + S_1 S_5 S_{234} & S_1 C_{234} \\ -S_1 S_5 - C_1 C_5 S_{234} & -C_5 S_1 + C_1 S_5 S_{234} & C_1 C_{234} \end{bmatrix} \quad (12)$$

Pitch: pitch is the angle of rotation about the $y_5$ axis of the end-effector:

$$\beta = \theta_2 + \theta_3 + \theta_4 = \theta_{234} \quad (13)$$
$$\theta_{234} = \operatorname{atan2}\left(r_{13}, \pm\sqrt{r_{23}^2 + r_{33}^2}\right) \quad (14)$$

Here we use atan2 because its range is $[-\pi, \pi]$, whereas the range of atan is $[-\pi/2, \pi/2]$.

Roll: the roll $\gamma = \theta_5$ is derived as follows:

$$\theta_5 = \operatorname{atan2}(r_{12}/C_{234},\; r_{11}/C_{234}) \quad (15)$$

Yaw: for the SCORBOT the yaw is not free; it is bound to $\theta_1$.

2.5 HOME POSITION IN MODELING

At the home position all angles are zero, so putting $\theta_1 = \theta_2 = \theta_3 = \theta_4 = \theta_5 = 0$ in equation (7), the transformation matrix reduces to:

$$T_{Home} = \begin{bmatrix} 0 & 0 & 1 & a_1 + a_2 + a_3 + d_5 \\ 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & d_1 \\ 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 & 603 \\ 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 349 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (16)$$

The home position transformation matrix gives the orientation and position of the end-effector frame. From the 3×3 submatrix the orientation is described as follows: frame {5} is rotated relative to frame {0} such that the $z_5$ axis is parallel to and in the same direction as the $x_0$ axis of the base frame, $y_5$ is parallel to and in the same direction as the $y_0$ axis, and $x_5$ is parallel to $z_0$ but in the opposite direction. The position is given by the 3×1 displacement vector $\left[\,a_1 + a_2 + a_3 + d_5 \;\; 0 \;\; d_1\,\right]^T$.

2.6 INVERSE KINEMATICS OF SCORBOT-ER V PLUS

For the SCORBOT we have five parameters in Cartesian space: $x$, $y$, $z$, roll ($\gamma$) and pitch ($\beta$). For joint parameter evaluation we have to construct the transformation matrix from these five parameters in Cartesian coordinate space. For that, a rotation matrix is generated which depends only on the roll, pitch and yaw of the robotic arm.
For the SCORBOT there is no free yaw; it is the rotation of the first joint, $\theta_1$. The calculation of yaw is as follows:

$$\alpha = \theta_1 = \operatorname{atan2}(x, y) \quad (17)$$

Now, for the rotation matrix, rotate frame {5} about its $x$ axis, then rotate the new frame {5′} about its own principal $y'$ axis, and finally rotate the new frame {5″} about its own principal $z''$ axis:

$$R = \begin{bmatrix} 1 & 0 & 0 \\ 0 & C_\beta & S_\beta \\ 0 & -S_\beta & C_\beta \end{bmatrix} \times \begin{bmatrix} C_\gamma & 0 & S_\gamma \\ 0 & 1 & 0 \\ -S_\gamma & 0 & C_\gamma \end{bmatrix} \times \begin{bmatrix} C_\alpha & -S_\alpha & 0 \\ S_\alpha & C_\alpha & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
$$= \begin{bmatrix} C_\alpha C_\gamma & -C_\gamma S_\alpha & S_\gamma \\ C_\beta S_\alpha - C_\alpha S_\beta S_\gamma & C_\beta C_\alpha + S_\beta S_\gamma S_\alpha & C_\gamma S_\beta \\ -S_\beta S_\alpha - C_\alpha C_\beta S_\gamma & -S_\beta C_\alpha + S_\alpha C_\beta S_\gamma & C_\beta C_\gamma \end{bmatrix} \quad (18)$$

Now rotate the matrix by −90° about the $y$ axis:

$$R_y(-90^\circ) = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ -1 & 0 & 0 \end{bmatrix} \quad (19)$$

After pre-multiplying equation (18) by equation (19), one gets the following rotation matrix:

$$R = \begin{bmatrix} -S_\beta S_\alpha - C_\alpha C_\beta S_\gamma & -S_\beta C_\alpha + S_\alpha C_\beta S_\gamma & C_\beta C_\gamma \\ C_\beta S_\alpha - C_\alpha S_\beta S_\gamma & C_\beta C_\alpha + S_\beta S_\gamma S_\alpha & C_\gamma S_\beta \\ -C_\alpha C_\gamma & C_\gamma S_\alpha & -S_\gamma \end{bmatrix} \quad (20)$$

So the total transformation matrix is:

$$T = \begin{bmatrix} -S_\beta S_\alpha - C_\alpha C_\beta S_\gamma & -S_\beta C_\alpha + S_\alpha C_\beta S_\gamma & C_\beta C_\gamma & X \\ C_\beta S_\alpha - C_\alpha S_\beta S_\gamma & C_\beta C_\alpha + S_\beta S_\gamma S_\alpha & C_\gamma S_\beta & Y \\ -C_\alpha C_\gamma & C_\gamma S_\alpha & -S_\gamma & Z \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad (21)$$

Comparing the transformation matrix in equation (7) with the matrix in equation (21), one can deduce:

$$\theta_1 = \alpha, \quad \theta_{234} = \beta, \quad \theta_5 = \gamma$$

Now we have $\theta_1$ and $\theta_5$ directly, but $\theta_2$, $\theta_3$ and $\theta_4$ are merged in $\theta_{234}$, so we have to separate them; to do so we use the geometric solution method shown in Figure 1.6.

For finding $\theta_2$, $\theta_3$ and $\theta_4$ we have $X$, $Y$, $Z$ in Cartesian coordinate space, from which we can take:

$$X_1 = \sqrt{X^2 + Y^2}, \quad Y_1 = Z \quad (22)$$

We have the pitch of the end-effector, $\theta_{234} = \beta$, from which point 2 can be calculated as follows:

$$X_2 = X_1 - d_5 \cos\theta_{234}, \quad Y_2 = Y_1 + d_5 \sin\theta_{234} \quad (23)$$
Now the distances $X_3$ and $Y_3$ can be found:

$$X_3 = X_2 - a_1, \quad Y_3 = Y_2$$

From the law of cosines applied to triangle ABC, we have:

$$\cos\theta_3 = \frac{X_3^2 + Y_3^2 - a_2^2 - a_3^2}{2\,a_2 a_3}$$
$$\theta_3 = \operatorname{atan2}\left(\pm\sqrt{1 - \cos^2\theta_3},\; \cos\theta_3\right) \quad (24)$$

From Figure 1.6, $\theta_2 = -\phi - \psi$, or:

$$\theta_2 = -\operatorname{atan2}(Y_3, X_3) - \operatorname{atan2}(a_3 \sin\theta_3,\; a_2 + a_3\cos\theta_3) \quad (25)$$

Finally we get:

$$\theta_4 = \theta_{234} - \theta_2 - \theta_3 \quad (26)$$

III. INVERSE KINEMATICS OF SCORBOT-ER V PLUS USING AN ADAPTIVE NEURO-FUZZY INFERENCE SYSTEM (ANFIS)

The proposed ANFIS [7][20][21] controller is based on a Sugeno-type Fuzzy Inference System (FIS) controller. The parameters of the FIS are governed by the neural-network back propagation method. The ANFIS controller is designed by taking the Cartesian coordinates plus the pitch as the inputs, and the joint angles of the manipulator needed to reach a particular coordinate in 3-dimensional space as the output. The output stabilizing signals, i.e., the joint angles, are computed using the fuzzy membership functions depending on the input variables. The effectiveness of the proposed modeling approach is implemented with the help of a program specially written for this purpose in MATLAB. The information related to the training data is given in Table 1.2.

Sr. No. | Manipulator Angle | No. of Nodes | No. of Linear Parameters | No. of Nonlinear Parameters | Total No. of Parameters | No. of Training Data Pairs | No. of Checking Data Pairs | No. of Fuzzy Rules
01      | Theta1            | 193          | 405                      | 36                          | 441                     | 4500                       | 4500                       | 81
02      | Theta2            | 193          | 405                      | 36                          | 441                     | 4500                       | 4500                       | 81
03      | Theta3            | 193          | 405                      | 36                          | 441                     | 4500                       | 4500                       | 81
04      | Theta4            | 193          | 405                      | 36                          | 441                     | 4500                       | 4500                       | 81

The procedure executed to train the ANFIS is as follows:

(1) Data generation: To design the ANFIS controller, the training data have been generated using an experimental setup with the SCORBOT-ER V Plus. A MATLAB program was written to govern the manipulator and obtain the input-output data set.
9000 samples were recorded through the execution of the program for the input variables, i.e., the Cartesian coordinates as well as the pitch. The Cartesian coordinate combinations for all thetas are given in Fig. 1.7.
(2) Rule extraction and membership functions: After generating the data, the next step is to estimate the initial rules. A hybrid learning algorithm is used for training to modify the above parameters after obtaining the fuzzy inference system from subtractive clustering. This algorithm iteratively learns the parameters of the premise membership functions and optimizes them with the help of back propagation and least-squares estimation. The training is continued until the error is minimized. The input as well as output membership functions used were triangular-shaped. The final fuzzy inference system chosen was the one associated with the minimum checking error, as shown in Figure 1.8, which shows the final membership functions for the thetas after training.

Figure 1.8: Final membership functions of the four inputs for θ1, θ2, θ3 and θ4 after training.
(3) Results: The ANFIS learning was tested on a variety of linear and nonlinear processes. The ANFIS was trained initially with 2 membership functions on 9000 data samples for each input as well as output. Later, this was increased to 3 membership functions for each input. To demonstrate the effectiveness of the proposed combination, the results are reported for a system with 81 rules and for a system with an optimized rule base. After reducing the rules the computation becomes faster and also consumes less memory. The ANFIS architecture for θ1 is shown in Fig. 1.9.
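The data-generation step (1) can also be mimicked offline with the forward-kinematics position equations (8)-(10) of Section 2.4. The link constants below come from the DH table (in mm), the sign of the Y expression follows equation (7), and the tiny angle grid is only a stand-in for the 9000 experimentally recorded samples; this sketch is not the authors' MATLAB program.

```python
import math

# Link parameters from the DH table of Section 2.2 (mm)
A1, A2, A3, D1, D5 = 16.0, 221.0, 221.0, 349.0, 145.0

def forward_position(t1, t2, t3, t4):
    """End-effector position (x, y, z) from equations (8)-(10); angles in radians.

    Used here only to generate (x, y, z, pitch) -> joint-angle training pairs."""
    reach = A1 + A2 * math.cos(t2) + A3 * math.cos(t2 + t3) \
            + D5 * math.cos(t2 + t3 + t4)
    x = math.cos(t1) * reach
    y = math.sin(t1) * reach
    z = D1 - A2 * math.sin(t2) - A3 * math.sin(t2 + t3) \
        - D5 * math.sin(t2 + t3 + t4)
    return x, y, z

# a coarse joint-angle grid standing in for the recorded training samples
samples = [(t1, t2, forward_position(t1, t2, 0.0, 0.0))
           for t1 in (0.0, 0.5) for t2 in (0.0, 0.5)]

print(forward_position(0.0, 0.0, 0.0, 0.0))  # home position: (603.0, 0.0, 349.0), matching equation (16)
```

At the all-zero home configuration the sketch reproduces the position column of the home transformation matrix (16), which is a quick sanity check on the link constants.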
Five angles have been considered for the representation of the robotic arm, but as θ5 is independent of the other angles, only the remaining four angles were considered to calculate the forward kinematics. For every combination of θ1, θ2, θ3 and θ4, the x, y and z coordinates are deduced using the forward kinematics formulas.

IV. SIMULATION RESULTS AND DISCUSSION

The plots displaying the root-mean-square error are shown in figure 1.10. The plot in blue represents error1, the error for the training data; the plot in green represents error2, the error for the checking data. From the figure one can easily see that there is almost no difference between the training error and the checking error after the completion of ANFIS training.

[Figure 1.10. RMSE error curves over 20 epochs for θ1, θ2, θ3 and θ4.]

In addition to the above error plots, plots showing the ANFIS-predicted thetas versus the actual thetas are given in figures 1.11, 1.12, 1.13 and 1.14 respectively. The difference between the original theta values and the values estimated using ANFIS is very small.

[Figures 1.11-1.12. Experimental vs. ANFIS-predicted θ1 and θ2 over time.]
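As a hedged illustration of the forward-kinematics step just described: the paper does not give the arm's link lengths or joint layout in this section, so the geometry below is an assumed articulated arrangement (θ1 rotating the base about z, and θ2-θ4 forming a planar chain), with made-up link lengths.

```python
import numpy as np

L1, L2, L3 = 0.22, 0.22, 0.15   # link lengths in metres (assumed values)

def forward_kinematics(t1, t2, t3, t4):
    """Return (x, y, z) of the wrist for joint angles in radians."""
    # radial reach and height contributed by the planar 3-link chain
    r = L1 * np.cos(t2) + L2 * np.cos(t2 + t3) + L3 * np.cos(t2 + t3 + t4)
    z = L1 * np.sin(t2) + L2 * np.sin(t2 + t3) + L3 * np.sin(t2 + t3 + t4)
    # base rotation distributes the reach between x and y
    return r * np.cos(t1), r * np.sin(t1), z

x, y, z = forward_kinematics(0.0, 0.0, 0.0, 0.0)
print(x, y, z)   # fully stretched along +x: (L1 + L2 + L3, 0, 0)
```

Sweeping θ1-θ4 over their ranges through such a routine produces the (θ, x, y, z) pairs used as ANFIS training data for the inverse mapping.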
[Figures 1.13-1.14. Experimental vs. ANFIS-predicted θ3 and θ4 over time.]

The prediction errors for all thetas appear in figures 1.15, 1.16, 1.17 and 1.18 respectively, on a much finer scale. The ANFIS was trained initially for only 10 epochs. After that, the number of epochs was increased to 20 to apply more extensive training and obtain better performance.

[Figures 1.15-1.17. Prediction errors for θ1, θ2 and θ3 over time.]
[Figure 1.18. Prediction error for θ4 over time.]

V. CONCLUSION

From the experimental work one can see that the accuracy of the output of the ANFIS-based inverse kinematic model is nearly equal to that of the actual mathematical model, hence this model can be used as an internal model for solving trajectory tracking problems of a higher degree of freedom (DOF) robot manipulator. A single camera has been used in the present work for the reverse mapping from camera coordinates to real-world coordinates; if two cameras were used, stereo vision could be achieved and providing the height of an object as an input parameter would not be required. The methodology presented here can be extended to trajectory planning and quite a few tracking applications with real-world disturbances. The present work did not make use of color image processing; making use of color image processing can help differentiate objects according to their colors along with their shapes.

ACKNOWLEDGEMENTS

As is the case in almost all parts of human endeavour, the development in the field of robotics has been carried on by engineers and scientists all over the world. It can be regarded as a duty to express appreciation for such relevant, interesting and outstanding work, to which ample reference is made in this paper.

Authors

Himanshu Chaudhary received his B.E. in Electronics and Telecommunication from Amravati University, Amravati, India in 1996, and his M.E. in Automatic Controls and Robotics from M.S. University, Baroda, Gujarat, India in 2000. Presently he is a research scholar in the Electrical Engineering Department, IIT Roorkee, India. His areas of interest include industrial robotics, computer networks and embedded systems.

Rajendra Prasad received the B.Sc. (Hons.) degree from Meerut University, India in 1973. He received B.E., M.E. and Ph.D. degrees in Electrical Engineering from the University of Roorkee, India in 1977, 1979 and 1990 respectively. He also served as an Assistant Engineer in the Madhya Pradesh Electricity Board (MPEB) from 1979-1983. Currently, he is a Professor in the Department of Electrical Engineering, Indian Institute of Technology Roorkee, Roorkee (India). He has more than 32 years of teaching as well as industry experience. He has published 176 papers in various journals and conferences and received eight awards for his publications in national/international journal and conference proceedings papers. He has guided seven PhDs, and presently six PhDs are in progress. His research interests include control, optimization, system engineering, model order reduction of large-scale systems and industrial robotics.
FAST AND EFFICIENT METHOD TO ASSESS AND ENHANCE TOTAL TRANSFER CAPABILITY IN PRESENCE OF FACTS DEVICE

K. Chandrasekar 1 and N. V. Ramana 2
1 Department of EEE, Tagore Engineering College, Chennai, TN, India
2 Department of EEE, JNTUHCEJ, Nachupally, Karimnagar Dist, AP, India

ABSTRACT

This paper presents the application of the Genetic Algorithm (GA) to assess and enhance Total Transfer Capability (TTC) using Flexible AC Transmission System (FACTS) devices during power system planning and operation. Conventionally, TTC is assessed using Repeated Power Flow (RPF), Continuation Power Flow (CPF) or Optimal Power Flow (OPF) based methods, which normally use the Newton-Raphson (NR) method, and TTC is enhanced by optimally locating FACTS devices using an optimization algorithm. This increases the CPU time and also limits the search space, resulting in a local optimal value of TTC. To eliminate this drawback, this paper proposes a novel procedure using the optimization algorithm (GA) that simultaneously assesses and enhances Total Transfer Capability (TTC) in the presence of FACTS. In addition, the power flow is performed using Broyden's method with the Sherman-Morrison formula instead of the NR method, which further reduces the CPU time without compromising accuracy. To validate the proposed method, simulation tests are carried out on the WSCC 9-bus and IEEE 118-bus test systems. Results indicate that the proposed method enhances TTC effectively with higher computational efficiency when compared to the conventional method.

KEYWORDS: FACTS Device, Genetic Algorithm, Power System Operation and Control, Total Transfer Capability

I. INTRODUCTION

According to the NERC report [1], Total Transfer Capability (TTC) is defined as the amount of electric power that can be transferred over the interconnected transmission network in a reliable manner while meeting all defined pre- and post-contingencies.
Available Transfer Capability (ATC) is a measure of the transfer capability remaining in the physical transmission network for further commercial activity over and above already committed uses. It is well known that FACTS devices are capable of controlling voltage magnitude, phase angle and circuit reactance. By controlling these, the load flow can be redistributed and bus voltages regulated, which provides a promising means to improve TTC [2-7].

Finding the optimal location and settings of FACTS devices for the enhancement of TTC is a combinatorial analysis. The best solutions to such problems can be obtained using heuristic methods. The basic approach is to combine a heuristic method with RPF [8], CPF (Continuation Power Flow) [9-10] or OPF (Optimal Power Flow) [11] to assess and enhance TTC. From the available literature it is understood that in all these approaches heuristic methods are used only for finding the optimal location and/or settings of the FACTS devices, while the value of TTC itself is computed using conventional methods such as CPF, RPF or OPF based methods [12-20], which takes much computational time. TTC should be computed accurately as well as with less computational time for the following reasons:

170 Vol. 1, Issue 5, pp. 170-180
First, from [21] it is evident that in the operation of a power system, the ATC or TTC calculation is done for a week, and each hour of the week has a new base-case power flow. A typical TTC calculation frequency according to the western interconnection report [22] is:
• Hourly TTC for the next 168 hours: once per day
• Daily TTC for the next 30 days: once per week
• Monthly TTC for months 2 through 13: once per month.

Second, due to uncertainty in the contingency listing, forecasted load demand, etc., even after a careful study in the planning of the power system and optimal location of the FACTS devices and their settings to enhance TTC, the results may not be optimal under different power system operating conditions. Once the FACTS devices are located, their locations cannot be changed, but their settings can be adjusted to obtain maximum TTC for different power system operating conditions. This is again a problem of combinatorial analysis, with a number of FACTS devices present in the system and a wide range of operating parameters.

Hence, for the above reasons, the known solution methods [12-20] to assess and enhance TTC in the presence of FACTS need very high computational time, which may not be a drawback during planning of a power system but has an adverse effect at the operation stage.

In [23-24] TTC is computed with an OPF-based Evolutionary Program (EP), in which the EP is used to find the location and settings of the FACTS devices and simultaneously searches the real power generation, generation voltages and real power load. This method can be used in both planning and operation of a power system, but its major drawback is the length of the chromosome, which increases with the power system size, thereby increasing the computational time for getting global optimal results.
Further, the load distribution factor and the power factor of the loads in the system have not been maintained constant.

In this paper, a Genetic Algorithm with power flow using Broyden's method [25-26] with the Sherman-Morrison formula (GABS) is proposed to assess and enhance TTC in the presence of FACTS, which effectively enhances TTC and reduces the computational time to a great extent during planning and operation of a power system. The results are compared with the conventional method, a Genetic Algorithm with Repeated Power Flow using the NR method (GARPFNR).

The remaining paper is organized as follows: Section 2 deals with FACTS device modelling and TTC problem formulation using GARPFNR. Section 3 gives the description of the proposed method. Section 4 deals with the results and discussion, and finally conclusions are drawn in Section 5.

II. FACTS DEVICES AND TTC FORMULATION USING GARPFNR

In this paper the mathematical formulation of TTC with and without FACTS devices using the RPFNR method from [2] is combined with GA, i.e. GARPFNR [18], to enhance TTC. Though there are many heuristic methods that can be combined with RPFNR to enhance TTC using FACTS, GA is used in this paper because it is well suited to optimization problems that do not possess qualities such as continuity, differentiability, etc. It works on the principle that the best population of a generation participates in reproduction and their children, called offspring, move on to the next generation, based on the concept of "survival of the fittest". Hence in this paper GARPFNR is compared with the proposed method GABS. The TTC level in the normal or contingency state is given by:

    TTC = Σ_{i ∈ sink} P_Di(λ_max)    (1)

and ATC, neglecting TRM and ETC, is given by

    ATC = Σ_{i ∈ sink} P_Di(λ_max) − Σ_{i ∈ sink} P⁰_Di    (2)

where
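A minimal sketch of how equations (1)-(2) are evaluated once λ_max is known, assuming the common RPF convention that every sink-area load is scaled uniformly as P_Di(λ) = P⁰_Di(1 + λ); the bus labels and numbers below are illustrative, not taken from the paper.

```python
# Sketch of eqs. (1)-(2): TTC is the total sink-area load at the maximum
# loading factor, and ATC (neglecting TRM and ETC) is TTC minus the base-case
# sink load. Uniform scaling preserves the load distribution factors.

base_sink_load = {"bus5": 90.0, "bus6": 100.0}   # MW at lambda = 0 (assumed)
lam_max = 0.25                                   # maximum loading factor found

ttc = sum(p * (1.0 + lam_max) for p in base_sink_load.values())   # eq. (1)
atc = ttc - sum(base_sink_load.values())                          # eq. (2)
print(ttc, atc)   # 237.5, 47.5
```

In RPF terms, λ is stepped upward (or, in GABS, proposed directly by the GA) until one of the limits in the constraint set binds; the last feasible λ is λ_max.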
Σ_{i ∈ sink} P_Di(λ_max) is the sum of the load in the sink area when λ = λ_max, and Σ_{i ∈ sink} P⁰_Di is the sum of the load in the sink area when λ = 0. Therefore the objective function is

    maximize TTC = Σ_{i ∈ sink} P_Di(λ_max)    (3)

subject to

    P_Gi − P_Di − Σ_{j=1}^{n} P_loss,ij = 0    (4)
    Q_Gi − Q_Di − Σ_{j=1}^{n} Q_loss,ij = 0    (5)
    V_i^min ≤ V_i ≤ V_i^max    (6)
    S_ij ≤ S_ij^max    (7)
    P_Gi ≤ P_Gi^max    (8)

2.1. Power Flow in GARPFNR

In the GARPFNR method the power flow equations are solved repeatedly using the NR method, increasing the complex load at every load bus in the sink area and the injected real power at the generator buses in the source area until limits are incurred, so the computational time is large. In general, the NR method finds x iteratively such that

    F(x) = 0    (9)

In the iterative process, say in the m-th iteration, x is updated as given below:

    x^{m+1} = x^m − Δx    (10)

with

    Δx = (J^m)^{−1} F(x^m)    (11)

where J^m is the Jacobian matrix. Since the power flow equations are solved repeatedly, for every step increment of λ_ttc there is more than one iteration, and for every iteration a Jacobian matrix of size n × n is computed and then inverted. For n nonlinear equations, computation of the Jacobian matrix elements includes n² partial derivatives and n component-function evaluations, so n² + n functional evaluations need to be done. In addition, inversion of an n × n Jacobian matrix using Gauss-Jordan elimination requires n³ arithmetic operations. The representation of the chromosome in GARPFNR, assuming one TCSC and one SVC at a time, is shown in Fig 1.

Fig 1. Representation of Chromosome
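The NR iteration of equations (9)-(11), with its per-iteration Jacobian build and solve, can be sketched on a toy 2-equation system standing in for the power-flow mismatch equations (the system below is an illustration, not a power-flow model).

```python
import numpy as np

def F(x):
    """Toy 2-equation stand-in for the power-flow mismatch equations."""
    return np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])

def J(x):
    """Analytic Jacobian, rebuilt at every iteration (the n^2 + n cost)."""
    return np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])

x = np.array([1.0, 0.5])
for _ in range(20):
    dx = np.linalg.solve(J(x), F(x))   # Newton step (cf. eqs. (10)-(11))
    x = x - dx
    if np.linalg.norm(F(x)) < 1e-10:
        break
print(x)   # converges to (sqrt(2), sqrt(2))
```

Each pass rebuilds and factorizes the Jacobian, which is exactly the repeated O(n³) cost the text attributes to RPF with NR.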
2.2. Computational Time in GARPFNR

For example, let us consider a case in which GARPFNR has a population size of 30 and the number of generations is 100. For each chromosome, let us say it takes 10 steps, in steps of 1 MW of load and generation increments, to compute the loading factor λ_max, and for each increment an NR power flow of 3 iterations takes 1.5 sec. Then for 30 chromosomes and 100 generations with 10 contingency conditions, the total time required to complete one transfer will be approximately 125 hrs. The accuracy of the results can be improved by decreasing the step size at the cost of increased computational time, i.e. if the step size is decreased by a factor of 10 (from 1 MW to 0.1 MW), then the computation time increases by the same factor of 10.

III. DESCRIPTION OF THE PROPOSED METHOD

In this method the power flow model of the FACTS devices and the mathematical formulation of TTC are the same as in the GARPFNR method, but the chromosome representation and the power flow procedure differ, as discussed below.

3.1. Power Flow in GABS

Here, Broyden's method with the Sherman-Morrison formula is used for solving the power flow. Broyden's method is a quasi-Newton method. Its starting point is the same as in the NR method, i.e. an initial approximation x⁰ is chosen to find F(x⁰), and x¹ is calculated using the Jacobian J⁰. From the second iteration this method departs from NR by replacing the Jacobian matrix with an equivalent matrix A, updated as

    A^m = A^{m−1} + [F(x^m) − F(x^{m−1}) − A^{m−1}(x^m − x^{m−1})] Δx^T / (Δx^T Δx)    (12)

where Δx = x^m − x^{m−1}, and

    x^{m+1} = x^m − (A^m)^{−1} F(x^m)    (13)

Hence the number of functional evaluations is reduced to n from n² + n.
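The same toy system solved with Broyden's rank-one update (cf. equation (12)) shows the key saving: the Jacobian is formed once, and afterwards only F is evaluated. This is a sketch, not the paper's power-flow code.

```python
import numpy as np

def F(x):
    return np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])

x = np.array([1.0, 0.5])
A = np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])   # exact Jacobian, built once
Fx = F(x)
for _ in range(50):
    dx = -np.linalg.solve(A, Fx)          # step of eq. (13)
    x_new = x + dx
    F_new = F(x_new)
    # Broyden rank-one update (cf. eq. (12)): only F evaluations are needed
    A = A + np.outer(F_new - Fx - A @ dx, dx) / (dx @ dx)
    x, Fx = x_new, F_new
    if np.linalg.norm(Fx) < 1e-10:
        break
print(x)   # also converges to (sqrt(2), sqrt(2))
```

The trade-off is the superlinear (rather than quadratic) convergence the text mentions: a few more iterations, each much cheaper.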
Further, the n³ arithmetic operations for computing the inverse of the A^m matrix can be reduced to n² operations using the Sherman-Morrison formula, as shown below:

    (A^m)^{−1} = [A^{m−1}]^{−1} + U / (Δx^T [A^{m−1}]^{−1} ΔF(x))    (14)

where

    U = {Δx − [A^{m−1}]^{−1} ΔF(x)} · {Δx^T [A^{m−1}]^{−1}}    (15)

3.2. Modified Chromosome Representation

As in the GARPFNR method, the population is initialized randomly and each chromosome in the population consists of the decision variables for the FACTS device locations and device settings and the objective function value; apart from that, it contains the λ_ttc value.

Fig. 2. Modified representation of Chromosome

The value of λ_ttc for each chromosome is fixed within a range between 0 and 1 (for an increase of up to 100% in loading factor) or 0 and 2 (for an increase of up to 200% in loading factor), which holds good for any complicated power system, since no power system, even in the worst case, is under-utilized by more than 200%, and the objective
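The Sherman-Morrison identity behind equations (14)-(15) can be checked numerically: the inverse of the Broyden-updated matrix equals a rank-one correction of the previous inverse, so no O(n³) re-inversion is needed. The matrices below are arbitrary well-conditioned examples, not power-flow Jacobians.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A_prev = rng.normal(size=(n, n)) + 10 * np.eye(n)   # well-conditioned example
A_prev_inv = np.linalg.inv(A_prev)

dx = rng.normal(size=n)           # secant pair; dF is chosen so that the
dF = A_prev @ dx + 0.2 * dx       # denominator in (14) stays away from zero

# Broyden update of A (cf. eq. (12))
A_new = A_prev + np.outer(dF - A_prev @ dx, dx) / (dx @ dx)

# Sherman-Morrison update of the inverse (eqs. (14)-(15))
U = np.outer(dx - A_prev_inv @ dF, dx @ A_prev_inv)
A_new_inv = A_prev_inv + U / (dx @ A_prev_inv @ dF)

print(np.allclose(A_new_inv, np.linalg.inv(A_new)))   # True
```

In GABS this replaces the per-iteration inversion: one true inverse at the first iteration, then n²-cost rank-one corrections.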
    • International Journal of Advances in Engineering & Technology, Nov 2011.©IJAET ISSN: 2231-1963function is designed such that GA maximizes the value of λttc subject to the satisfaction of equalityand inequality constraints. This eliminates the use of RPF or CPF methods to calculate the loadingfactor λttc .This is shown in Fig 2.3.3. Computational Time in GABSThe computational time for assessing and enhancing TTC using GABS in presence of FACTS is farless when compared to GARPFNR because of two main reasons.At first unlike GARPFNR method, GABS simultaneously finds the optimal location, settings for theFACTS devices and the loading factor λ max for TTC computation by representing all theseinformation in the chromosome.Secondly in power flow using Broyden’s method with Sherman Morrison formula the Jacobianinverse is computed only once during the first iteration for a given network topology and for theremaining iterations a rank one update is done to compute the inverse (an approximate Jacobianinverse). Due to the above fact the quadratic convergence of Newton Raphson method is replaced bysuper linear convergence which is faster than linear but slower than quadratic convergence. For alarge scale system, computing Jacobian inverse for ‘n’ number of iterations with many transferdirection in a single contingency case is a time consuming process when compared to super linearconvergence of Broyden’s method. Hence the total time required to compute TTC with Broyden’smethod is less when compared to NR method.For example let us consider the same case as that of GARPFNR which has a population size of 30 andnumber of generation are 100. 
For each chromosome, say the power flow in GABS using Broyden's method with the Sherman-Morrison formula takes 4 iterations for a total time of 2 sec; then for 30 chromosomes and 100 generations with 10 contingency conditions, the total time required to complete one transfer will be approximately 17 hrs, which is only 13.6% of the computational time of GARPFNR. This approach can also be applied during operation of a power system by removing the FACTS location information from the chromosome.

3.4. Algorithm for GABS

The algorithm for the proposed method GABS is given below.
Step 1: Set the population size and number of generations.
Step 2: Read bus data, line data, objectives, decision variables, and the minimum and maximum values of the decision variables.
Step 3: Initialize the population.
Step 4: Obtain the TCSC and SVC settings and/or locations with λ_max from the GA decision variables and make the corresponding changes in the power flow data.
Step 5: Run the power flow using Broyden's method with the Sherman-Morrison formula.
Step 6: Check for convergence of the power flow and any limit violations. IF there are violations, penalize the corresponding chromosome with a very low fitness value, say 1 × 10⁻⁵. ELSE evaluate the fitness of the chromosome as defined in (3). This process is repeated for all chromosomes.
Step 7: Apply the genetic operators to perform reproduction and replace the population.
Step 8: Check for the maximum generation. IF reached, go to Step 9; ELSE go to Step 4.
Step 9: From the final solution, identify the settings and/or locations of the TCSC and SVC and λ_max to calculate TTC.
The flow chart for GABS is shown in Fig 3.
Fig 3. Flow chart for GABS
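The steps above can be sketched as a GA loop. Everything in this sketch is illustrative: the gene bounds, the stand-in feasibility rule and the toy fitness are assumptions, and the real Step 5 runs a Broyden power flow rather than a closed-form check.

```python
import random

random.seed(0)
POP, GENS, PENALTY = 20, 60, 1e-5

def random_chromosome():
    # [tcsc_line, svc_bus, x_tcsc, q_svc, lam_ttc] with assumed bounds
    return [random.randint(1, 9), random.randint(1, 9),
            random.uniform(-0.5, 0.5), random.uniform(-100.0, 100.0),
            random.uniform(0.0, 2.0)]

def fitness(ch):
    lam = ch[4]
    feasible = lam <= 1.2 - 0.002 * abs(ch[3])   # stand-in for limit checks
    return lam if feasible else PENALTY          # Step 6: penalize violations

pop = [random_chromosome() for _ in range(POP)]
for _ in range(GENS):                            # Steps 4-8
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]
    children = []
    for _ in range(POP - len(parents)):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, 5)
        child = a[:cut] + b[cut:]                # one-point crossover
        if random.random() < 0.2:                # mutation on lambda_ttc
            child[4] = min(2.0, max(0.0, child[4] + random.gauss(0, 0.05)))
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)                     # Step 9
print(round(fitness(best), 2))   # best lambda_ttc found (capped by the limit)
```

The design point to note is that λ_ttc is an ordinary gene: the GA drives it upward directly, which is what removes the RPF step loop from the inner computation.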
IV. RESULTS AND DISCUSSION

GABS and GARPFNR are carried out in the MATLAB environment using the Genetic Algorithm and Direct Search toolbox and a modified MATPOWER [27] simulation package, on an Intel Core 2 Duo CPU T5500 @ 1.66 GHz processor under the Windows XP Professional operating system. The standard WSCC 9-bus test system and IEEE 118-bus test system [27-28] are considered to test the performance of the proposed method. Only the transaction from Area 1 to Area 2 is considered. The base values, voltage limits, and SVC and TCSC limits are taken from [20]. For GABS and GARPFNR, a population size of 20 and 200 generations are considered, with a stall generation of 20. For each test system two cases are considered. The first case represents a planning problem in which the optimal FACTS device settings and locations are found to enhance TTC, while the second case represents an operational problem (such as a change in load or unexpected contingencies) in which, assuming the FACTS devices are already located in the system, new optimal settings alone are found to enhance TTC.

4.1. WSCC 9 Bus Test System

The WSCC 9-bus test system is divided into two areas. Area 1 has buses 3, 6, 8 and 9; Area 2 has buses 1, 2, 4, 5 and 7. Only one FACTS device of each type (TCSC and SVC) is considered for placement.

4.1.1. Power System Planning (WSCC 9 Bus Test System)

The base case (without FACTS device) load in Area 2 is 190 MW, and using the RPFNR method the TTC value, limiting condition and CPU time for computing this value are shown in column 2 of Table 1. Similarly, with a FACTS device, the optimal location and settings, TTC value, limiting condition and computational time using GARPFNR and GABS are shown in columns 3 and 4 of Table 1, respectively. It is evident that for the proposed method GABS the computational time is 98.69% less and the TTC value is 0.653% higher when compared to the conventional method GARPFNR.
The results are tabulated in Table 1.

Table 1. WSCC 9 Bus for transfer of power from Area 1 to Area 2 (Planning)

Parameters                        | Without FACTS (RPFNR) | With FACTS (GARPFNR)                                          | With FACTS (GABS)
FACTS device setting and location | --                    | SVC at Bus 5, Qsvc = 85.45; TCSC in line 4-5, Xtcsc = -0.3658 | SVC at Bus 4, Qsvc = 96.27; TCSC in line 6-7, Xtcsc = 0.0845
TTC (MW)                          | 410.4                 | 486.4                                                         | 489.6
Limiting condition                | Vmin at Bus 5         | MVA limit, line 1-4                                           | MVA limit, line 1-4
CPU time (sec)                    | 1.182                 | 549.328                                                       | 7.167

4.1.2. Power System Operation (WSCC 9 Bus Test System)

In this case the FACTS device locations from the GABS results of Section 4.1.1 are considered as the base case. For the operational problem, the corresponding TTC values and CPU times, with and without changes in the FACTS device settings, are tabulated in Table 2. Using GABS, the TTC values for a 10% increase in load, a 10% decrease in load, outage of line 6-7 and a generator outage at bus 3 are 0.3%, 0.157%, 0.412% and 0.608% higher respectively, and the corresponding CPU time for computation is very low when compared to the GARPFNR method, as shown in Table 2.
Table 2. WSCC 9 Bus for transfer of power from Area 1 to Area 2 (Operation)

Change in MVA load (+10%) at all load buses:
  RPFNR (no change in FACTS settings): TTC 440.99 MW; limit: MVA limit, line 1-4; CPU time 1.196 sec
  GARPFNR (Qsvc = 95.39, Xtcsc = 0.3234): TTC 440.99 MW; limit: MVA limit, line 1-4; CPU time 416.911 sec
  GABS (Qsvc = 97.88, Xtcsc = 0.0934): TTC 442.34 MW; limit: MVA limit, line 1-4; CPU time 3.547 sec

Change in MVA load (-10%) at all load buses:
  RPFNR: TTC 490.77 MW; limit: Vmin at Bus 5; CPU time 1.801 sec
  GARPFNR (Qsvc = 99.62, Xtcsc = -0.0212): TTC 495.9 MW; limit: Vmin at Bus 5; CPU time 691.923 sec
  GABS (Qsvc = 99.07, Xtcsc = -0.0943): TTC 496.68 MW; limit: Vmin at Bus 5; CPU time 4.089 sec

Line 6-7 outage:
  RPFNR: TTC 288.8 MW; limit: Vmin at Bus 7; CPU time 0.66 sec
  GARPFNR (Qsvc = 83.66, Xtcsc = -0.4602): TTC 357.2 MW; limit: MVA limit, line 5-6; CPU time 283.886 sec
  GABS (Qsvc = 57.75, Xtcsc = -0.4830): TTC 358.68 MW; limit: MVA limit, line 1-4; CPU time 7.327 sec

Outage of generator at Bus 3:
  RPFNR: TTC 279.3 MW; limit: MVA limit, line 1-4; CPU time 0.634 sec
  GARPFNR (Qsvc = 100.00, Xtcsc = 0.2705): TTC 279.3 MW; limit: MVA limit, line 1-4; CPU time 173.769 sec
  GABS (Qsvc = 85.77, Xtcsc = 0.2704): TTC 281.01 MW; limit: MVA limit, line 1-4; CPU time 5.553 sec

4.2. IEEE 118 Bus Test System

The IEEE 118-bus test system is divided into two areas as shown in Table 3, and transfer of power from Area 1 to Area 2 with only one FACTS device of each type (TCSC and SVC) is considered for placement.

Table 3. Area classification of the IEEE 118 bus test system

Area 1: buses 1-23, 25-37, 39-64, 113-115, 117
Area 2: buses 24, 38, 65-112, 116, 118

4.2.1. Power System Planning (IEEE 118 bus test system)

The total load in Area 2 is 1937 MW. The TTC value without a FACTS device using the RPFNR method is 2111.3 MW. The TTC values with a FACTS device using GARPFNR and GABS are 2202.8 MW and 2224.3 MW respectively, and the corresponding times for calculation are shown in Table 4.
Hence, for the proposed method GABS the time required for computation is nearly 96.77% less and the TTC value is 0.966% higher when compared to the conventional method GARPFNR.
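The timing claims can be cross-checked arithmetically, using the worked-example figures of Sections 2.2 and 3.3 and the CPU times reported in Tables 1 and 4 (all inputs are the paper's reported numbers, not new measurements).

```python
pop, gens, contingencies = 30, 100, 10

# GARPFNR: 10 load-increment steps per chromosome, 1.5 s per NR power flow
garpfnr_h = pop * gens * 10 * 1.5 * contingencies / 3600
# GABS: lambda_ttc sits in the chromosome, so one 2 s Broyden flow suffices
gabs_h = pop * gens * 2.0 * contingencies / 3600

print(round(garpfnr_h), round(gabs_h))   # 125 17
print(round(100 * round(gabs_h) / round(garpfnr_h), 1))   # 13.6

# Measured CPU-time reductions from Tables 1 and 4 (GABS vs. GARPFNR)
wscc = (1 - 7.167 / 549.328) * 100
ieee118 = (1 - 9.937 / 308.0) * 100
print(f"{wscc:.1f} {ieee118:.2f}")       # 98.7 96.77
```

Both worked-example estimates (125 h vs. 17 h, a 13.6% ratio) and the table-level reductions (about 98.7% and 96.77%) reproduce the figures quoted in the text.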
    • International Journal of Advances in Engineering & Technology, Nov 2011.©IJAET ISSN: 2231-1963 Table 4 IEEE 118 Bus for Transfer of power from Area 1 to Area 2 (Planning) Without FACTS With FACTS Device Parameters RPFNR GARPFNR GABS SVC at Bus 44, SVC at Bus 86, FACTS Device Qsvc=57.65 Qsvc= - 61.58 TCSC in Setting and TCSC in line 89 - 92, line 89 - 92, Location Xtcsc = -0.4908 Xtcsc =0.1483 TTC (MW) 2111.3 2202.8 2224.3 MVA Limit MVA Limit MVA LimitLimiting Condition Line 89 - 92 Line 65 - 68 Line 65 - 68 CPU Time (Sec) 1.259 308 9.937 Table 5 IEEE 118 Bus for Transfer of power from Area 1 to Area 2 (Operation) Without change in FACTS device With change in FACTS device settings Parameters settings RPFNR GARPFNR GABS Qsvc=52.89 Qsvc=54.79 FACTS Device Setting ---- Xtcsc = 0.5 Xtcsc = 0.1421 Change in MVA TTC (MW) 2359.3 2359.3 2368.8Load (+ 5 %) at all Pg max at Bus MVA Limit Load Bus Limiting Condition Pg max at Bus 89 89 Line 89 - 92 CPU Time (Sec) 1.59 402.008 5.992 Qsvc= -100.00 Qsvc=52.04 FACTS Device Setting ----- Xtcsc = 0.2908 Xtcsc = 0.0876 Change in MVA TTC (MW) 1987.4 1987.4 2000.8Load (- 5 %) at all MVA Limit MVA Limit MVA Limit Load Bus Limiting Condition Line 65 - 68 Line 65 - 68 Line 65 - 68 CPU Time (Sec) 0.8 223.012 5.355 Qsvc=36.15 Qsvc= - 33.76 FACTS Device Setting ----- Xtcsc = - Xtcsc = -0.2061 0.2179 Line 23 - 24 TTC (MW) 1995.1 2150.1 2151.6 outage MVA Limit MVA Limit MVA Limit Limiting Condition Line 90 - 91 Line 65 - 68 Line 65 - 68 CPU Time (Sec) 0.541 270.317 5.977 Qsvc=27.28 Qsvc=99.23 FACTS Device Setting ----- Xtcsc = 0.2890 Xtcsc = 0.4955 Outage of TTC (MW) 2246.9 2246.9 2256 Generator at Bus Pg max at Bus Pg max at Bus Pg max at 61 Limiting Condition 89 89 Bus 89 CPU Time (Sec) 1.256 405.674 6.8124.2.2 Power System Operation (IEEE 118 Bus test system)In this case FACTS device location from the results of GABS method in 4.2.1 is considered as basecase. 
For the operational problem, a ±5% change in load and outages of line 23-24 and of the generator at bus 61 are considered, and the corresponding TTC values with and without a change in FACTS device settings are tabulated in Table 5, which shows that the proposed GABS method is more efficient in assessing and enhancing TTC.
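As a concrete illustration of the power-flow solver choice behind GABS, the following is a minimal sketch of Broyden's method [25] combined with the Sherman-Morrison rank-one update of the inverse Jacobian, which avoids the repeated Jacobian inversions of the NR method. The test function, starting point and initial Jacobian are made-up examples for illustration, not the paper's power-flow equations.

```python
import numpy as np

def broyden_sherman_morrison(f, x0, J0, tol=1e-8, max_iter=50):
    """Solve f(x) = 0 with Broyden's 'good' method, updating the INVERSE
    Jacobian H via the Sherman-Morrison formula so that no linear system
    is re-factorised after the first iteration."""
    x = np.asarray(x0, dtype=float)
    H = np.linalg.inv(J0)            # inverse-Jacobian approximation
    fx = f(x)
    for _ in range(max_iter):
        dx = -H @ fx                 # quasi-Newton step
        x_new = x + dx
        fx_new = f(x_new)
        if np.linalg.norm(fx_new) < tol:
            return x_new
        df = fx_new - fx
        # Sherman-Morrison rank-one update:
        # H+ = H + (dx - H df)(dx^T H) / (dx^T H df)
        Hdf = H @ df
        H += np.outer(dx - Hdf, dx @ H) / (dx @ Hdf)
        x, fx = x_new, fx_new
    return x
```

Each iteration costs only matrix-vector products, which is the source of the CPU-time savings over repeatedly inverting the NR Jacobian.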
V. CONCLUSION
A fast and efficient method, GABS, is presented to assess and enhance TTC in the presence of FACTS devices. Simulation tests are carried out on the WSCC 9-bus and IEEE 118-bus test systems, and the results are compared with the conventional GARPFNR method. From the results it is evident that the search space in the conventional method is limited by the step increment in the loading factor, which results in a locally optimal TTC value, and that the use of the NR method for power flow increases the CPU time due to the repeated Jacobian inversions. GABS, on the other hand, searches the loading factor instead of incrementing it, which yields a near-globally optimal TTC value, and performs the power flow using Broyden's method with the Sherman-Morrison formula, which reduces the CPU time compared to the NR method. The percentage reduction in CPU time increases further with GABS either when the system is larger or when the system is lightly loaded. Hence, the GABS method proves to be a promising alternative to the conventional method.

REFERENCES
[1] "Available Transfer Capability Definitions and Determination", NERC report, June 1996.
[2] Ou, Y. and Singh, C., "Improvement of total transfer capability using TCSC and SVC", Proceedings of the IEEE Power Engineering Society Summer Meeting, Vancouver, Canada, July 2001, pp. 15-19.
[3] Farahmand, H., Rashidi-Nejad, M., Fotuhi-Firoozabad, M., "Implementation of FACTS devices for ATC enhancement using RPF technique", IEEE Power Engineering Conference on Large Engineering Systems, July 2004, pp. 30-35.
[4] Ying Xiao, Y. H. Song, Chen-Ching Liu, Y. Z. Sun, "Available Transfer Capability Enhancement Using FACTS Devices", IEEE Trans. Power Syst., 2003, 18, (1), pp.
305-312.
[5] T. Masuta, A. Yokoyama, "ATC Enhancement considering transient stability based on OPF control by UPFC", IEEE International Conference on Power System Technology, 2006, pp. 1-6.
[6] K. S. Verma, S. N. Singh and H. O. Gupta, "FACTS device location for enhancement of Total Transfer Capacity", IEEE PES Winter Meeting, Columbus, OH, 2001, 2, pp. 522-527.
[7] Xingbin Yu, Sasa Jakovljevic and Garng Huang, "Total Transfer Capacity considering FACTS and security constraints", IEEE PES Transmission and Distribution Conference and Exposition, Sep 2003, 1, pp. 73-78.
[8] Gravener, M. H. and Nwankpa, C., "Available transfer capability and first order sensitivity", IEEE Trans. Power Syst., 1999, 14, (2), pp. 512-518.
[9] H. Chiang, A. J. Flueck, K. S. Shah, and N. Balu, "CPFLOW: A practical tool for tracing power system steady-state stationary behavior due to load and generation variations", IEEE Trans. Power Syst., 1995, 10, (2), pp. 623-634.
[10] G. C. Ejebe, J. Tong, J. G. Waight, J. G. Frame, X. Wang, and W. F. Tinney, "Available transfer capability calculations", IEEE Trans. Power Syst., 1998, 13, (4), pp. 1521-1527.
[11] Ou, Y. and Singh, C., "Assessment of available transfer capability and margins", IEEE Trans. Power Syst., 2002, 17, (2), pp. 463-468.
[12] Leung, H. C., Chung, T. S., "Optimal power flow with a versatile FACTS controller by genetic algorithm approach", IEEE PES Winter Meeting, Jan 2000, 4, pp. 2806-2811.
[13] S. Gerbex, R. Cherkaoui, A. J. Germond, "Optimal Location of Multitype FACTS Devices in a Power System by Means of Genetic Algorithms", IEEE Trans. Power Syst., 2001, 16, (3), pp. 537-544.
[14] S. Gerbex, R. Cherkaoui, and A. J. Germond, "Optimal Location of FACTS Devices to Enhance Power System Security", IEEE Bologna Power Tech Conference, Bologna, Italy, June 2003, 3, pp. 23-26.
[15] Wang Feng and G. B. Shrestha, "Allocation of TCSC devices to optimize Total Transfer Capacity in a Competitive Power Market", IEEE PES Winter Meeting, Feb 2001, 2, pp.
587-593.
[16] Sara Molazei, Malihe M. Farsangi, Hossein Nezamabadi-pour, "Enhancement of Total Transfer Capability Using SVC and TCSC", 6th WSEAS International Conference on Applications of Electrical Engineering, Istanbul, Turkey, May 27-29, 2007, pp. 149-154.
[17] Hossein Farahmand, Masoud Rashidinejad and Ali Akbar Gharaveisi, "A Combinatorial Approach of Real GA & Fuzzy to ATC Enhancement", Turkish Journal of Electrical Engineering, 2007, 1, (4), pp. 77-88.
[18] Fozdar, M., "GA based optimisation of thyristor controlled series capacitor", 42nd International Universities Power Engineering Conference, Brighton, Sept. 2007, pp. 392-396.
[19] X. Luo, A. D. Patton, and C. Singh, "Real power transfer capability calculations using multi-layer feed-forward neural networks", IEEE Trans. Power Syst., 2000, 15, (2), pp. 903-908.
[20] N. V. Ramana, K. Chandrasekar, "Multi Objective Genetic Algorithm to mitigate the composite problem of Total Transfer Capacity, Voltage Stability and Transmission Loss Minimization", IEEE 39th North American Power Symposium, New Mexico, 2007, pp. 670-675.
[21] Peter W. Sauer, "Technical challenges of Computing ATC in Electric Power System", 30th Hawaii International Conference on System Sciences, Wailea, HI, USA, Jan 1997, 5, pp. 589-593.
[22] "Determination of ATC within the Western Interconnection", WECC RRO Document MOD-003-0, June 2001.
[23] Ongsakul, W., Jirapong, P., "Optimal allocation of FACTS devices to enhance total transfer capability using evolutionary programming", IEEE International Symposium on Circuits and Systems (ISCAS), May 2005, 5, pp. 4175-4178.
[24] Peerapol Jirapong and Weerakorn Ongsakul, "Optimal Placement of Multi-Type FACTS Devices for Total Transfer Capability Enhancement Using Hybrid Evolutionary Algorithm", Journal of Electric Power Components and Systems, 2007, 35, (9), pp. 981-1005.
[25] C. G. Broyden, "A Class of Methods for Solving Nonlinear Simultaneous Equations", Mathematics of Computation, 1965, 19, (92), pp. 577-593.
[26] Asif Selim, "An Investigation of Broyden's Method in Load Flow Analysis", MS thesis, Ohio University, March 1994.
[27] R. D.
Zimmerman and Carlos E. Murillo-Sánchez, MATPOWER: A MATLAB® Power System Simulation Package, User's Manual, Version 3.2, 2007.
[28] http://www.ee.washington.edu/research/pstca/.

Authors

K. Chandrasekar received his B.E. (EEE) from the University of Madras, Madras, India in 1997 and his M.E. (Power Systems) from Madurai Kamarajar University, Madurai, India in 2001. He is currently an Associate Professor in the Dept. of EEE, Tagore Engineering College, Chennai, and is pursuing a PhD at J.N.T. University, Hyderabad, A.P., India. His research interests are power system optimization and the application of FACTS devices. He is a member of IEEE.

N. V. Ramana graduated in 1986 and post-graduated in 1991 from S.V. University, Tirupati, and obtained his Ph.D. in 2005 from J.N.T. University, Hyderabad, A.P., India. He is currently Professor and Head, EEE Dept., JNTUH College of Engineering, Nachupally, Karimnagar Dist., A.P., India. He has publications in international journals and conferences and has presented papers at IEEE conferences held in the USA, Canada and Singapore. His research interests are the design of intelligent systems for power systems using fuzzy logic control and genetic and cluster algorithms.
ISSUES IN CACHING TECHNIQUES TO IMPROVE SYSTEM PERFORMANCE IN CHIP MULTIPROCESSORS
H. R. Deshmukh1, G. R. Bamnote2
1 Associate Professor, B.N.C.O.E., Pusad, M.S., India
2 Associate Professor & Head, PRMIT&R, Badnera, M.S., India

ABSTRACT
Cache management in chip multiprocessors (CMPs) has become more critical because of diverse workloads, the increasing working sets of many emerging applications, increasing memory latency, and the decreasing cache size devoted to each core as the number of cores on a single chip grows. This paper identifies caching techniques, and the important issues in those techniques, for managing the last-level cache in chip multiprocessors so as to reduce off-chip accesses and improve system performance under critical conditions, and it suggests some future directions to address the identified issues.

KEYWORDS: Multiprocessors, Partitioning, Compression, Fairness, QoS.

I. INTRODUCTION
Over the past two decades, processor speed has increased at a much faster rate than DRAM speed. As a result, the number of processor cycles it takes to access main memory has also increased. Current high-performance processors have a memory access latency of well over a hundred cycles, and trends indicate that this number will only increase in the future. The growing disparity between processor speed and memory speed is popularly referred to in the architecture community as the Memory Wall [1]. Main memory accesses affect processor performance adversely; therefore, current processors use caches to reduce the number of memory accesses. A cache hit provides fast access to recently accessed data. However, if there is a cache miss at the last-level cache, a memory access is initiated and the processor is stalled for hundreds of cycles [1].
So, to sustain high performance, it is important to reduce cache misses.
The importance of cache management has become even more critical because of diverse workloads, the increasing working sets of many emerging applications, increasing memory latency, and the decreasing cache size devoted to each core due to the increased number of cores on a single chip.
Improvements in silicon process technology have facilitated the integration of multiple cores into modern processors, and it is anticipated that the number of cores on a single chip will continue to increase in future chip multiprocessors. Multiple-application workloads, which are attractive for utilising multi-core processors, put significant pressure on the memory system [2]. This motivates the need for more efficient use of the cache in order to minimize the expensive requests to off-chip memory. This paper discusses the existing approaches in caching techniques for chip multiprocessors available in the literature, along with their limitations, and investigates the important issues in this area.

181 Vol. 1, Issue 5, pp. 181-188

II. REPLACEMENT TECHNIQUE
Different workloads and program phases have diverse access patterns: sequential access patterns, in which all blocks are accessed one after another and never re-accessed, as in file scanning; looping access patterns, in which all blocks are accessed repeatedly at a regular interval; temporally-clustered access patterns, in which blocks accessed more recently are the ones more likely to be accessed in the near future; and probabilistic access patterns, in which each block has a
stationary reference probability and all blocks are accessed independently with their associated probabilities.
Previous researchers [3]-[11] have shown that a replacement policy that performs efficiently under a workload with one kind of access pattern may perform badly once the access pattern of the workload changes. For example, the MRU (Most Recently Used) replacement policy performs well on sequential and looping patterns, the LRU replacement policy performs well on temporally-clustered patterns, and the LFU replacement policy performs well on probabilistic patterns. From a study of existing replacement policies, it is found that no single cache replacement policy performs efficiently for mixed access patterns (sequential, looping, temporally-clustered and probabilistic references), which may occur simultaneously in one workload during execution. Some of the policies require additional data structures to hold information on non-resident pages. Some policies require a data update on every memory access, which necessarily increases memory and time overhead and, as a result, degrades performance.
Kaveh Samiee et al. (2009, 2008) [3][4] suggested the weighted replacement policy (WRP). The basic idea of this policy is to rank pages based on their recency, frequency and reference rate, so that pages that are more recent and have been used frequently are ranked higher; the probability of retaining pages with a small reference interval is higher than for those with a bigger one. This policy behaves like both LRU and LFU by replacing pages that were not recently used and pages that were used only once. WRP needs three elements to work and adds space overhead to the system: the algorithm needs space for a recency counter, a frequency counter, and a weight value for each object in the buffer.
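The weight-based ranking idea can be sketched as follows. The weight formula used here (frequency scaled by recency) is a simplified stand-in for illustration, not the exact WRP weighting function from [3][4]:

```python
class WeightedCache:
    """Toy weight-based replacement: each block tracks its last access time
    and access frequency, and the block with the lowest weight is evicted.
    The weight (frequency / age) is an illustrative simplification."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.clock = 0                 # logical time, advanced per access
        self.blocks = {}               # key -> [last_access, frequency]

    def access(self, key):
        """Touch `key`; return True on a hit, False on a miss."""
        self.clock += 1
        if key in self.blocks:
            self.blocks[key][0] = self.clock
            self.blocks[key][1] += 1
            return True
        if len(self.blocks) >= self.capacity:
            # evict the block whose frequency, discounted by how long ago
            # it was last touched, is smallest
            victim = min(
                self.blocks,
                key=lambda k: self.blocks[k][1]
                / (self.clock - self.blocks[k][0] + 1))
            del self.blocks[victim]
        self.blocks[key] = [self.clock, 1]
        return False
```

Even this toy version shows WRP's overheads: per-block counters (space) and a full scan to find the minimum weight on every eviction (time).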
Calculating the weighting function value for each object after every access to the cache causes a time overhead to the system. This policy fails for sequential and looping access patterns.
Dr Mansard Jargh et al. (2004) [5] describe the improved replacement policy (IRP), which makes some key modifications to the LRU algorithm, combines it with a significantly enhanced version of the LFU algorithm, and takes spatial locality into account in the replacement decision. IRP uses the concept of spatial locality and therefore efficiently expels only blocks that are not likely to be accessed again. The algorithm requires memory overhead to store a recency count 'rc', a frequency count 'fc' and a block address 'ba' for each block. It requires time and processor overhead to search for the smallest 'fc' value and the largest 'rc' value, as well as to update 'fc' and 'rc' on every access to a block. The algorithm does not perform well for looping and sequential access patterns.
Jiang et al. (2002) [6] presented the Low Inter-reference Recency Set policy (LIRS). Its objective is to minimize the deficiencies of LRU using an additional criterion named IRR (Inter-Reference Recency), which represents the number of different pages accessed between the last two consecutive accesses to the same page. The algorithm assumes some inertia in program behaviour and, according to the collected IRRs, replaces the page that will take the longest to be referenced again. This means that LIRS does not replace the page that has not been referenced for the longest time; rather, it uses the access recency information to predict which pages have the highest probability of being accessed in the near future. LIRS divides the cache into two sets: a high inter-reference recency (HIR) block set and a low inter-reference recency (LIR) block set. Each block with history information has a status of either LIR or HIR.
The cache is divided into a major part and a minor part in terms of size: the major part stores LIR blocks, and the minor part stores HIR blocks. An HIR block is replaced when the cache is full, and the LIRS stack may grow arbitrarily large, so it can require a large memory overhead. This policy does not perform well for sequential access patterns.
Zhan-Sheng et al. (2008) [7] proposed CRFP, a novel adaptive replacement policy that combines the LRU and LFU policies. CRFP is self-tuning and can switch between different cache replacement policies adaptively and dynamically in response to access-pattern changes. Memory overhead is required to store the cache directory, recency value, frequency value, hit value, miss value, switch count and switch ratio. The policy also requires time overhead to search the cache directory and computational time to switch between LRU and LFU. However, this policy fails in cases where accesses occur inside loops with a working-set size slightly larger than the available memory.
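A minimal sketch of switching between LRU and LFU is shown below. The switch criterion used here (a sliding window of recent misses) is an assumption for illustration only, not CRFP's actual hit/miss-ratio mechanism:

```python
from collections import OrderedDict

class AdaptiveCache:
    """Illustrative LRU/LFU switcher in the spirit of CRFP [7]; the real
    policy's switching logic and bookkeeping differ."""

    def __init__(self, capacity, window=8):
        self.capacity = capacity
        self.mode = 'LRU'
        self.window = window
        self.recent = []               # sliding window: 1 = miss, 0 = hit
        self.order = OrderedDict()     # key -> frequency; order = recency

    def _record(self, miss):
        self.recent.append(miss)
        if len(self.recent) > self.window:
            self.recent.pop(0)
            # crude adaptation: flip policy when most recent accesses missed
            if sum(self.recent) > self.window // 2:
                self.mode = 'LFU' if self.mode == 'LRU' else 'LRU'
                self.recent.clear()

    def access(self, key):
        """Touch `key`; return True on a hit, False on a miss."""
        if key in self.order:
            self.order[key] = self.order.pop(key) + 1   # move to MRU slot
            self._record(0)
            return True
        if len(self.order) >= self.capacity:
            if self.mode == 'LRU':
                self.order.popitem(last=False)          # evict oldest
            else:
                victim = min(self.order, key=self.order.get)  # least frequent
                del self.order[victim]
        self.order[key] = 1
        self._record(1)
        return False
```

The single `OrderedDict` doubles as both the recency list and the frequency table, which hints at the directory overhead CRFP pays for keeping both policies ready.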
E. J. O'Neil et al. (1993) [8] presented the LRU-K policy, which makes its replacement decision based on the time of the Kth-to-last reference to a block, i.e., the reference density observed during the past K references. When K is large, the policy discriminates well between frequently and infrequently referenced blocks. When K is small, it can remove cold blocks quickly, since such blocks show a wide span between the current time and the Kth-to-last reference time. The time complexity of the algorithm is O(log n); however, this policy does not perform well for looping and sequential access patterns.
Zhu Xu-Dong et al. (2009) [9] proposed the spatial-locality-based, block-correlations-directed cache replacement policy (BCD), which uses both history and runtime access information to predict spatial locality; the prediction results are used to improve cache utilization and to reduce the penalty incurred by incorrect predictions. For most real-system workloads, BCD can reduce the cache miss ratio by 11% to 38% compared with LRU.
Y. Smaragdakis et al. (1999) [10] described the early eviction LRU policy (EELRU), proposed as an attempt to mix LRU and MRU based only on the positions in the LRU queue that concentrate most of the memory references. This queue is only a representation of main memory under the LRU model, ordered by the recency of each page. EELRU detects potential sequential access patterns by analyzing the reuse of pages. One important feature of this policy is the detection of non-numerically-adjacent sequential memory access patterns. This policy does not perform well for looping access patterns.
Andhi Janapsatya et al. (2010) [11] proposed a new adaptive cache replacement policy called Dueling CLOCK (DC). The DC policy was developed to have low overhead cost, to capture recency information in memory accesses, to exploit the frequency pattern of memory accesses, and to be scan resistant.
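The CLOCK mechanism that DC builds on can be sketched as follows; this is a minimal second-chance implementation, not the paper's hardware version or its scan-resistant variant:

```python
class ClockCache:
    """Minimal CLOCK (second-chance) replacement: per-frame cost is a
    single reference bit, which is why the DC policy [11] can afford to
    run two variants of it side by side."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.frames = []               # list of [key, ref_bit]
        self.hand = 0                  # clock hand index

    def access(self, key):
        """Touch `key`; return True on a hit, False on a miss."""
        for frame in self.frames:
            if frame[0] == key:
                frame[1] = 1           # grant a second chance on hit
                return True
        if len(self.frames) < self.capacity:
            self.frames.append([key, 1])
            return False
        # sweep the hand, clearing reference bits, until a victim is found
        while self.frames[self.hand][1] == 1:
            self.frames[self.hand][1] = 0
            self.hand = (self.hand + 1) % self.capacity
        self.frames[self.hand] = [key, 1]
        self.hand = (self.hand + 1) % self.capacity
        return False
```

Compared with true LRU, no list reordering happens on a hit (only a bit is set), which is the "low maintenance cost" the DC authors exploit.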
Janapsatya et al. propose a hardware implementation of the CLOCK algorithm for use within an on-chip cache controller to ensure low overhead cost. DC is an adaptive replacement policy that alternates between the CLOCK algorithm and a scan-resistant version of the CLOCK algorithm, and it reduces the maintenance cost of the LRU policy. The research issue here is to explore how a replacement policy can perform efficiently under diverse workloads (mixed access patterns) and how the processor and memory overhead of a novel replacement policy can be reduced.

III. PARTITIONING TECHNIQUE
Chip multiprocessors (CMPs) have been widely adopted and are commercially available as the building blocks for future computer systems. A CMP contains multiple cores, which enables multiple applications (or threads) to execute concurrently on a single chip. As the number of cores on a chip increases, the pressure on the memory system to sustain the memory requirements of all the concurrently executing applications (or threads) increases. An important question in CMP design is how to use the limited area resources on chip to achieve the best possible system throughput for a wide range of applications. A key to obtaining high performance from multicore architectures is to provide fast data accesses (reduced latency) for on-chip computation resources and to manage the largest-level on-chip cache efficiently so that off-chip accesses are reduced; meanwhile, limited off-chip bandwidth, increasing latency, destructive inter-thread interference, uncontrolled contention and sharing, increasing pollution, a decreasing harmonic mean and diverse workload characteristics pose key design challenges. To address these challenges, many researchers [12]-[24] have proposed different cache partitioning schemes to share on-chip cache resources among different threads, but not all of these challenges are addressed properly.
Cho and Jin (2006) [12] proposed a software-based mechanism for L2 cache partitioning based on physical page allocation.
However, the major focus of their work is on how to distribute data in a Non-Uniform Cache Architecture (NUCA) to minimize overall data access latencies; they do not concentrate on the problem of uncontrolled contention on a shared L2 cache.
David Tam et al. (2007) [13] demonstrated a software-based cache partitioning mechanism, which allows flexible management of the shared L2 cache resource, and showed some of the potential gains in a multiprogrammed computing environment. This work neither supports the dynamic determination of optimal partitions nor dynamically adjusts the number of partitions.
Stone et al. (1992) [14] investigated optimal (static) partitioning of cache resources between multiple applications when information about the change in misses for varying cache size is available for each
of the competing applications. However, such information is non-trivial to obtain dynamically for all applications, as it depends on the input set of the application.
Suh et al. (2004) [15] described dynamic partitioning of a shared cache, measuring utility for each application by counting hits to the recency positions in the cache, and used way partitioning to enforce partitioning decisions. The problem with way partitioning is that it requires core-identifying bits with each cache entry, which requires changing the structure of the tag-store entry. Way partitioning also requires that the associativity of the cache be increased to partition the cache among a large number of applications.
Qureshi et al. (2006) [16] proposed placing the cache monitoring circuits outside the cache so that the information computed by one application is not polluted by other concurrently executing applications. They provide a set-sampling-based utility monitoring circuit that requires a storage overhead of 2 KB per core, and they used way partitioning to enforce partitioning decisions. TADIP-F is better able to respond to workloads with working sets greater than the cache size, while UCP is not.
Chang et al. (2007) [17] used time slicing as a means of cache partitioning, so that each application is guaranteed cache resources for a certain time quantum. Their scheme is still susceptible to thrashing when the working set of an application is greater than the cache size.
Suh et al. (2002) [18] described a way of partitioning a cache for multithreaded systems by estimating the best partition sizes. They counted the hits in the LRU position of the cache to predict the number of extra misses that would occur if the cache size were decreased.
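The marginal-utility idea shared by these hit-counting schemes ([15][16][18]) can be sketched as a greedy way-allocation loop. The miss curves below are made-up inputs, and UCP's actual lookahead algorithm differs in detail:

```python
def partition_ways(miss_curves, total_ways):
    """Greedy utility-based way allocation: miss_curves[c][w] gives the
    misses of core c when granted w ways (w = 0 .. total_ways). Each
    round, the next way goes to the core with the largest marginal
    reduction in misses. Illustrative only, not UCP's exact lookahead."""
    n = len(miss_curves)
    alloc = [0] * n
    for _ in range(total_ways):
        best, best_gain = 0, -1
        for c in range(n):
            # marginal utility of one more way for core c
            gain = miss_curves[c][alloc[c]] - miss_curves[c][alloc[c] + 1]
            if gain > best_gain:
                best, best_gain = c, gain
        alloc[best] += 1
    return alloc
```

In hardware, the miss curves come from the per-core monitoring circuits described above; here they are simply supplied as lists.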
A heuristic in [18] used this hit count, combined with the number of hits in the second LRU position, to estimate the number of cache misses avoided if the cache size is increased.
Dybdahl et al. (2006) [19] presented a method that adjusts the size of the cache partitions within a shared cache; the work did not consider a shared partition with variable size, nor did it look at combining private and shared caches.
Kim et al. (2004) [20] presented cache partitioning in a shared cache for a two-core CMP, where a trial-and-fail algorithm was applied. Trial-and-fail as a partitioning method does not scale well with an increasing number of cores, since the solution space grows fast.
Z. Chishti et al. (2005) [21] described spilling evicted cache blocks to a neighbouring cache. They did not consider putting constraints on the sharing, or methods for protection from pollution, and no mechanism was described for optimizing partition sizes.
Chiou et al. (2000) [22] suggested a mechanism for protecting cache blocks within a set. Their proposal was to control in software which blocks can be replaced in a set, in order to reduce conflicts and pollution. The scheme was intended for a multi-threaded core with a single cache.
Dybdahl et al. (2007) [23] presented an approach in which the amount of cache space that can be shared among the cores, and thus the uncontrolled sharing of resources, is controlled dynamically. The adaptive scheme continuously estimates the effect of increasing or decreasing the shared partition size on overall performance. The paper describes a NUCA organization in which blocks in a local partition can spill over to neighbouring core partitions. The approach suffers from pollution and harmonic-mean problems.
Dimitris Kaseridis et al. (2009) [24] proposed a dynamic partitioning strategy based on realistic last-level cache designs of CMP processors.
Their scheme provides on average a 70% reduction in misses compared to non-partitioned shared caches, and a 25% reduction in misses compared to statically equally partitioned (private) caches. This work highlights the problem of sharing the last level of cache in CMP systems and motivates the need for low-overhead, workload-feedback-based hardware/software mechanisms, able to scale with the number of cores, for monitoring and controlling L2 cache capacity partitioning.
The research issue here is to explore cost-effective solutions for future improvements in caching requirements, including thrashing avoidance, throughput improvement, fairness improvement and QoS guarantees, under the above key design challenges.

IV. COMPRESSION TECHNIQUE
Chip multiprocessors (CMPs) combine multiple processors on a single die; however, the increasing number of processor cores on a single chip increases the demand on two critical resources: the shared L2 cache capacity and the off-chip pin bandwidth. The demand on these critical resources is satisfied by the
technique of cache compression. From existing research work [25][26][27][28][29][30][31] it is well known that compression techniques can both reduce the cache miss ratio, by increasing the effective shared cache capacity, and improve off-chip bandwidth, by transferring data in compressed form.
Jang-Soo Lee et al. (1999) [25] proposed the selective compressed memory system based on the selective compression technique, a fixed space allocation method, and several techniques for reducing the decompression overhead. The proposed system provides on average a 35% decrease in the on-chip cache miss ratio as well as on average a 53% decrease in data traffic. However, the authors could not control the problem of long DRAM latency and limited bus bandwidth.
Charles Lefurgy et al. (2002) [26] presented a method of decompressing programs using software. It relies on a software-managed instruction cache under the control of the decompressor. This is achieved by employing a simple cache management instruction that allows explicit writing into a cache line. The work also considers selective compression (determining which procedures in a program should be compressed) and shows that selection based on cache-miss profiles can substantially outperform the usual execution-time-based profiles for some benchmarks. The technique achieves high performance in part through the addition of a simple cache management instruction that writes decompressed code directly into an instruction cache line, and the study focuses on designing a fast decompressor (rather than generating the smallest code size) in the interest of performance. The paper shows that a simple, highly optimized dictionary compression performs even better than CodePack, but at a cost of 5 to 25% in compression ratio.
Prateek Pujara et al.
(2005) [27] investigated restrictive compression techniques for the level-one data cache that avoid an increase in the cache access latency. The basic technique, All Words Narrow (AWN), compresses a cache block only if all the words in the block are of narrow size. The authors extend AWN by storing a few additional upper half-words (AHS) in a cache block to accommodate a small number of normal-sized words. Further, they make the AHS technique adaptive, allocating the additional half-word space adaptively across cache blocks, and propose techniques to reduce the increase in tag space that is inevitable with compression techniques. Overall, the techniques in the paper increase the average L1 data cache capacity (in terms of the average number of valid cache blocks per cycle) by about 50% compared to a conventional cache, with no or minimal impact on cache access time. In addition, the techniques have the potential to reduce the average L1 data cache miss rate by about 23%.
Martin et al. (2008) [28] showed that it is possible to use larger block sizes without increasing off-chip memory bandwidth by applying compression techniques to cache/memory block transfers. Since bandwidth is reduced by up to a factor of three, the work proposes to use larger blocks. Compression/decompression ends up on the critical memory access path, and the work evaluates its negative impact on memory access latency. The proposed scheme dynamically chooses a larger cache block when this is advantageous given the spatial locality, in combination with compression. The combined scheme consistently improves performance, on average by 19%.
Xi Chen et al. (2009) [29] presented a lossless compression algorithm designed for fast on-line data compression, and cache compression in particular.
The algorithm has a number ofnovel features tailored for this application, including combining pairs of compressed lines into onecache line and allowing parallel compression of multiple words while using a single dictionary andwithout degradation in compression ratio. The algorithm is based on pattern matching and partialdictionary coding. Its hardware implementation permits parallel compression of multiple wordswithout degradation of dictionary match probability. The proposed algorithm yields an effectivesystem-wide compression ratio of 61%, and permits a hardware implementation with a maximumdecompression latency of 6.67 ns.Martin et al. (2009) [30] presents and evaluates FPC, a lossless, single pass, linear-time compressionalgorithm. FPC targets streams of double-precision floating-point values. It uses two context-basedpredictors to sequentially predict each value in the stream. FPC delivers a good average compressionratio on hard-to-compress numeric data. Moreover, it employs a simple algorithm that is very fast andeasy to implement with integer operations. Author claimed that FPC to compress and decompress 2 to300 times faster than the special-purpose floating-point compressors. FPC delivers the highestgeometric-mean compression ratio and the highest throughput on hard-to compress scientific datasets. It achieves individual compression ratios between 1.02 and 15.05. 185 Vol. 1, Issue 5, pp. 181-188
David Chen et al. (2003) [31] propose a scheme that dynamically partitions the cache into sections of different compressibilities; compression is applied repeatedly on smaller, cache-line-sized blocks so as to preserve the random-access requirement of a cache. When a cache line is brought into the L2 cache, or a cache line is to be modified, the line is compressed using a dynamic LZW dictionary and, depending on the compression achieved, placed into the relevant partition. The partitioning is dynamic in that the ratio of space allocated to compressed and uncompressed data varies depending on the actual performance; a compressed L2 cache shows an 80% reduction in L2 miss rate compared to an uncompressed L2 cache of the same area.
A research issue here is that, when the processor requests a word within a compressed data block stored in the compressed cache, the whole block has to be decompressed on the fly before the requested word can be transferred to the processor. Compression ratio, compression time and decompression overhead have a critical effect on memory access time and can offset the compression benefits; these issues are interesting and challenging for future research. Another issue associated with a compressed memory system is that compressed blocks are generated with different sizes depending on the compression efficiency; in the worst case, a compressed block can be longer than its source block, which adversely affects system performance.

V. CONCLUSION
From the above discussion, the following directions can be identified to address the research issues in caching techniques for chip multiprocessors and to improve system performance:
• To develop a low-overhead novel replacement policy that performs efficiently under diverse workloads, different cache sizes and varying working sets.
• To develop efficient cache partitioning schemes in chip multiprocessors with different optimization objectives, including throughput, fairness, and guaranteed quality of service (QoS).
• To develop low-overhead cache compression/decompression schemes in chip multiprocessors to increase shared cache capacity and off-chip bandwidth.

REFERENCES
[1] John L. Hennessy and David A. Patterson, "Computer Architecture: A Quantitative Approach", Elsevier, 2003.
[2] Konstantinos Nikas, Matthew Horsnell, Jim Garside, "An Adaptive Bloom Filter Cache Partitioning Scheme for Multicore Architectures", International Conference on Embedded Computer Systems: Architectures, Modelling, and Simulation (SAMOS 2008), July 21-24, 2008, pp. 21-24.
[3] Kaveh Samiee, GholamAli Rezai Rad, "WRP: Weighting Replacement Policy to Improve Cache Performance", Proceedings of the International Symposium on Computer Science and its Applications, 2008, pp. 38-41.
[4] Kaveh Samiee, "A Replacement Algorithm Based on Weighting and Ranking Cache", International Journal of Hybrid Information Technology, Number 2, April 2009.
[5] Mansard Jargh, Ahmed Hasswa, "Implementation Analysis and Performance Evaluation of the IRP-Cache Replacement Policy", IEEE International Conference on Computer and Information Technology Workshops, 2004.
[6] S. Jiang and X. Zhang, "LIRS: An Efficient Low Inter-reference Recency Set Replacement Policy to Improve Buffer Cache Performance", Proceedings of the ACM SIGMETRICS Conference on Measurement and Modelling of Computer Systems, pp. 31–42, 2002.
[7] Zhan-sheng, Da-wei, Hui-juan, "CRFP: A Novel Adaptive Replacement Policy Combined the LRU and LFU Policies", IEEE 8th International Conference on Computer and Information Technology Workshops, 2008.
[8] E. J. O'Neil, P. E. O'Neil, and Gerhard Weikum, "The LRU-K Page Replacement Algorithm for Database Disk Buffering", Proceedings of the 1993 ACM SIGMOD Conference, pp.
297–306, 1993.
[9] Zhu Xu-Dong, Ke Jian, Xu Lu, "BCD: To Achieve the Theoretical Optimum of Spatial Locality Based Cache Replacement Algorithm", IEEE International Conference on Networking, Architecture, and
Storage, 2009.
[10] Y. Smaragdakis, S. Kaplan, and P. Wilson, "EELRU: Simple and Effective Adaptive Page Replacement", Proceedings of the ACM SIGMETRICS Conference on Measurement and Modelling of Computer Systems, 1999.
[11] Andhi Janapsatya, Aleksandar Ignjatovic, Jorgen Peddersen and Parameswaran, "Dueling CLOCK: Adaptive Cache Replacement Policy Based on the CLOCK Algorithm".
[12] S. Cho and L. Jin, "Managing Distributed, Shared L2 Caches through OS-level Page Allocation", Proceedings of the Workshop on Memory System Performance and Correctness, 2006.
[13] David Tam, Reza Azimi, Livio Soares, and Michael Stumm, "Managing Shared L2 Caches on Multicore Systems in Software", Workshop on the Interaction between Operating Systems and Computer Architecture, 2007.
[14] H. S. Stone, J. Turek, and J. L. Wolf, "Optimal Partitioning of Cache Memory", IEEE Transactions on Computers, 41(9):1054–1068, 1992.
[15] G. E. Suh, L. Rudolph, and S. Devadas, "Dynamic Partitioning of Shared Cache Memory", Journal of Supercomputing, 28(1):7–26, 2004.
[16] M. K. Qureshi and Y. Patt, "Utility-Based Cache Partitioning: A Low-Overhead, High-Performance Runtime Mechanism to Partition Shared Caches", Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), 2006.
[17] J. Chang and G. S. Sohi, "Cooperative Cache Partitioning for Chip Multiprocessors", Proceedings of the Annual International Conference on Supercomputing (ICS-21), 2007.
[18] G. Suh, S. Devadas, and L. Rudolph, "Dynamic Cache Partitioning for Simultaneous Multithreading Systems", International Conference on Parallel and Distributed Computing Systems, 2002.
[19] H. Dybdahl, P. Stenstrom, and L. Natvig, "A Cache Partitioning Aware Replacement Policy for Chip Multiprocessors", International Conference on High Performance Computing (HiPC), 2006.
[20] C. Kim, D. Burger, and S. W.
Keckler, "Nonuniform Cache Architectures for Wire-Delay Dominated On-Chip Caches", IEEE Micro, 23(6):99-107.
[21] Z. Chishti, M. D. Powell, and T. N. Vijaykumar, "Optimizing Replication, Communication and Capacity Allocation in CMPs", Annual International Symposium on Computer Architecture (ISCA), 2005, pp. 357-368.
[22] D. Chiou, P. Jain, S. Devadas, and L. Rudolph, "Dynamic Cache Partitioning via Columnisation", Proceedings of the Conference on Design Automation, Los Angeles, June 5-9, 2000, ACM, 2000.
[23] Haakon Dybdahl, Per Stenstrom, "An Adaptive Shared/Private NUCA Cache Partitioning Scheme for Chip Multiprocessors", IEEE International Symposium on High Performance Computer Architecture, 2007, pp. 2-12.
[24] Dimitris Kaseridis, Jeffrey Stuecheli and Lizy K. John, "Bank-aware Dynamic Cache Partitioning for Multicore Architectures", International Conference on Parallel Processing, 2009.
[25] Jang-Soo Lee, Won-Kee Hong, and Shin-Dug Kim, "Design and Evaluation of a Selective Compressed Memory System", International Conference on Computer Design (ICCD), 1999, pp. 184-191.
[26] Charles Lefurgy, Eva Piccininni, and Trevor Mudge, "Reducing Code Size with Run-time Decompression", Proceedings of the International Symposium on High Performance Computer Architecture (HPCA), 2002, pp. 218-228.
[27] Prateek Pujara, Aneesh Aggarwal, "Restrictive Compression Techniques to Increase Level 1 Cache Capacity", IEEE International Conference on Computer Design: VLSI in Computers and Processors (ICCD), 2005, pp. 327-333.
[28] Martin Thuresson and Per Stenstrom, "Accommodation of the Bandwidth of Large Cache Blocks using Cache/Memory Link Compression", International Conference on Parallel Processing (ICPP), 2008, pp. 478-486.
[29] Xi Chen, Lei Yang, Robert P. Dick, Li Shang, and Haris Lekatsas, "C-Pack: A High-Performance Microprocessor Cache Compression Algorithm", IEEE Transactions on Very Large Scale Integration Systems, 2009, pp. 1-11.
[30] Martin Burtscher and Paruj Ratanaworabhan, "FPC: A High-Speed Compressor for Double-Precision Floating-Point Data", IEEE Transactions on Computers, vol. 58(1), January 2009, pp. 18-31.
[31] David Chen, Enoch Peserico and Larry Rudolph, "A Dynamically Partitionable Compressed Cache", Proceedings of the Singapore-MIT Alliance Symposium, 2003.

Authors

H. R. Deshmukh received his M.E. (CSE) degree from SGB Amravati University, Amravati, in 2008, and has been a research scholar since 2009. He is working as an associate professor in the Department of CSE, B.N.C.O.E., Pusad (India), and is a life member of the Indian Society for Technical Education, New Delhi.

G. R. Bamnote is Professor & Head of the Department of Computer Science & Engineering at Prof. Ram Meghe Institute of Technology & Research, Badnera-Amravati. He did his B.E. (Computer Engg.) in 1990 from Walchand College of Engineering, Sangli, M.E. (Computer Science & Engg.) from PRMIT&R, Badnera-Amravati, in 1998, and Ph.D. in Computer Science & Engineering from SGB Amravati University, Amravati, in 2009. He is a life member of the Indian Society for Technical Education and the Computer Society of India, and a Fellow of The Institution of Electronics and Telecommunication Engineers and The Institution of Engineers (India).
KANNADA TEXT EXTRACTION FROM IMAGES AND VIDEOS FOR VISION IMPAIRED PERSONS

Keshava Prasanna1, Ramakanth Kumar P2, Thungamani M3, Manohar Koli4
1, 3 Research Assistant, Tumkur University, Tumkur, India
2 Professor and HOD, R.V. College of Engineering, Bangalore, India
4 Research Scholar, Tumkur University, Tumkur, India

ABSTRACT

We propose a system that reads Kannada text encountered in natural scenes, with the aim of providing assistance to the visually impaired persons of Karnataka state. This paper describes the system design and a standard deviation based Kannada text extraction method. The proposed system contains three main stages: text extraction, text recognition and speech synthesis. This paper concentrates on text extraction from images and videos: an efficient algorithm which can automatically detect, localize and extract Kannada text from images (and digital videos) with complex backgrounds is presented. The proposed approach is based on the application of a color reduction technique, a standard deviation based method for edge detection, and the localization of text regions using new connected component properties. The outputs of the algorithm are text boxes with a simple background, ready to be fed into an OCR engine for subsequent character recognition. Our proposal is robust with respect to different font sizes, font colors, orientations, alignments and background complexities. The performance of the approach is demonstrated by presenting promising experimental results for a set of images taken from different types of video sequences.

KEYWORDS: SVM, OCR, AMA, CCD camera, speech synthesis.

I. INTRODUCTION

Recent studies in the field of computer vision and pattern recognition show a great amount of interest in content retrieval from images and videos.
Text embedded in images contains large quantities of useful semantic information, which can be used to fully understand images. Most objects in the world can be analyzed and identified by reading the text information present on them. Automatic detection and extraction of text in images has been used in many applications such as document retrieval, address block location, content-based image/video indexing, mobile robot navigation to detect text-based landmarks, vehicle license detection/recognition, object identification, etc.

A document image analysis system is one that can handle text documents in Kannada, which is the official language of the south Indian state of Karnataka. The input to the system is the scanned image of a page of Kannada text. The output is an editable computer file containing the information in the page. The system is designed to be independent of the size of characters in the document and hence can be used with any kind of document in Kannada. The task of separating lines and words in the document is fairly independent of the script and hence can be achieved with standard techniques. However, due to the peculiarities of the Kannada script, we make use of a novel segmentation scheme whereby words are first segmented to a sub-character level; the individual pieces are recognized, and these are then put together to effect recognition of individual aksharas, or characters. The Kannada alphabet (50 letters) is classified into two main categories, 16 vowels and 34 consonants, as shown in Figure 1 and Figure 2; words in Kannada are composed of aksharas [13], which are analogous to characters in English words. We use a novel feature vector to characterize each segment and employ a classifier based on the recently developed concept of Support Vector Machines (SVM) [14]. Blind people are
almost entirely dependent on others; they cannot read and analyze objects on their own. Extraction of textual information plays a vital role in making written material accessible to blind people. It helps them in various ways, such as identifying objects and self-reading of textbooks, newspapers, electricity bills, sign boards, personal letters, etc.

OCR systems are available for handling English documents with reasonable levels of accuracy. (Such systems are also available for many European languages as well as some Asian languages such as Japanese, Chinese, etc.) However, there are not many reported efforts at developing OCR systems for Indian languages. The work reported in this project is motivated by the fact that there are no reported efforts at developing document analysis systems for the south Indian language Kannada. In most OCR [13] systems the final recognition accuracy is always higher than the raw character recognition accuracy. For obtaining higher recognition accuracy, language-specific information such as co-occurrence frequencies of letters, a word corpus [14], a rudimentary model of the grammar, etc., is used. This allows the system to automatically correct many of the errors made by the OCR subsystem. In our current implementation, we have not incorporated any such post-processing; the main reason is that at present we do not have a word corpus for Kannada. Even with a word corpus the task is still difficult because of the highly inflexional nature of Kannada grammar. The grammar also allows combinations of two or more words; even though these follow well-defined rules of grammar, the number of rules is large, and incorporating them into a good spell-checking application for Kannada is a challenging task.

Figure 1: Vowels in Kannada [13]
Figure 2: Consonants in Kannada [13]

II.
RELATED WORK

Due to the variety of font size, style, orientation and alignment, as well as the complexity of the background, designing a robust general algorithm which can effectively detect and extract text from
both types of images is full of challenges. Various methods have been proposed in the past for the detection and localization of text in images and videos. These approaches take into consideration different properties related to text in an image, such as color, intensity, connected components, edges, etc. These properties are used to distinguish text regions from their background and/or other regions within the image.

[1] Xiaoqing Liu et al. [1, 2]: The proposed algorithm is based on edge density, strength and orientation. The input image is first pre-processed to remove any noise, if present. Then horizontal, vertical and diagonal edges are identified with the help of Gaussian kernels, and based on edge density, strength and orientation, text regions are identified. This approach is based on the fact that edges are the most reliable features of text.

[2] Julinda Gllavata et al. [3]: The proposed algorithm is a connected-component-based method, building on the fact that text is a collection of characters that usually come in a group. The input image is first pre-processed to remove any noise, if present. The image is then converted from the RGB to the YUV model and the Y channel is processed; horizontal and vertical projections are calculated. Then, with the help of horizontal and vertical thresholds, text regions are identified.

[3] Wang and Kangas et al. [4]: The proposed algorithm is based on color clustering. The input image is first pre-processed to remove any noise, if present. The image is then grouped into different color layers and a gray component. This approach utilizes the fact that the color data in text characters usually differs from the color data in the background. The potential text regions are localized using connected-component-based heuristics from these layers.
Also, an aligning and merging analysis (AMA) method is used in which each row and column value is analyzed. The experiments conducted show that the algorithm is robust in locating mostly Chinese and English characters in images; sometimes false alarms occurred due to uneven lighting or reflection in the test images.

[4] K. C. Kim et al. [5]: This text detection algorithm is also based on color continuity. In addition, it uses multi-resolution wavelet transforms and combines low-level as well as high-level image features for text region extraction — a hierarchical feature combination method for text extraction in natural scenes. However, the authors admit that this method cannot handle large text very well, due to the use of local features that represent only local variations of image blocks.

[5] Victor Wu et al. [6]: The proposed text finder algorithm is based on the frequency, orientation and spacing of text within an image. Texture-based segmentation is used to distinguish text from its background. Further, a bottom-up 'chip generation' process is carried out which uses the spatial cohesion property of text strokes and edges. The results show that the algorithm is robust in most cases, except for very small text characters, which are not properly detected. Also, in cases of low contrast in the image, misclassifications occur in the texture segmentation.

[6] Qixiang Ye et al. [7, 8]: The approach used in [7, 8] utilizes a support vector machine (SVM) classifier to segment text from non-text in an image or video frame. Initially, text is detected in multi-scale images using non-edge-based techniques, morphological operations and projection profiles of the image. The detected text regions are then verified using wavelet features and SVM. The algorithm is robust with respect to variance in color and size of font as well as language.

[7] Sanjeev Kunte et al. [11]: This Kannada character detection algorithm is based on neural networks.
The input image is first pre-processed to remove any noise, if present. Neural classifiers are effectively used for the classification of characters based on moment features.

[8] Teófilo E. de Campos et al. [12]: This character detection algorithm is based on SVM. It evaluates six different shape- and edge-based features, such as Shape Context, Geometric Blur and SIFT, but also features used for representing texture, such as filter responses, patches and Spin Images.

III. PROPOSED WORK
In this proposed work, a robust system for automatically extracting Kannada text appearing in images and videos with complex backgrounds is presented. Standard deviation based edge detection is performed to detect edges present in all directions. Identification of the script used can help in improving the segmentation results and in increasing the accuracy of OCR by choosing the appropriate algorithms; thus, a novel technique for Kannada script recognition in complex images is presented. Figure 3 shows the general configuration of the proposed system. The building elements are the TIE, the CCD camera and the voice synthesizer.

Figure 3. System configuration (walk-around mode): 1. Textual information extraction; 2. Optical character recognition; 3. Speech synthesis

The proposed system contains three main steps after acquiring an image with the help of the CCD camera:
1. Textual information extraction.
2. Optical character recognition.
3. Speech synthesis.

As the first step in the development of this system, a simple standard deviation based method for Kannada text detection is proposed. The different steps of our approach are as follows:
1. Image preprocessing.
2. Calculation of the standard deviation of the image.
3. Detection of text regions.

Step 1: Image Preprocessing. If the image data is not represented in the HSV color space, it is converted to this color space by means of appropriate transformations. Our system uses only the intensity data (the V channel of HSV, Figure 5) during further processing. A median filtering operation is then applied on the V (intensity) band to reduce noise, before contrast-limited adaptive histogram equalization is applied for contrast enhancement.

Figure 4. Original image        Figure 5. V channel

Step 2: Edge Detection. This step focuses attention on areas where text may occur.
We employ a simple method for converting the gray-level image into an edge image. Our algorithm is based on the fact that characters possess a high standard deviation compared to their local neighbors:

    Std(x) = 1/(N-1) · Σ_{i ∈ W(x)} (V(i) - µ(x))²        (1)
where x is a set of all pixels in a sub-window W(x), N is the number of pixels in W(x), and µ(x) is the mean value of V(i) for i ∈ W(x). A window size of 3×7 pixels was used in this step.

Figure 6. Standard deviation image

Step 3: Detection of Text Regions. The steps used for Kannada text localization differ from those for English text localization because the features of the two scripts differ: the height-to-width ratio, centroid difference and orientation calculations used in English text extraction are not suitable for Kannada text extraction. Normally, text embedded in an image appears in clusters, i.e., it is arranged compactly, so clustering characteristics can be used to localize text regions. Since the intensity of the feature map represents the possibility of text, simple global thresholding can be employed to highlight regions with a high text possibility, resulting in a binary image. A morphological dilation operator can easily connect very close regions together while leaving regions whose positions are far from each other isolated. In our proposed method, we apply a morphological dilation operator with a 7×7 square structuring element to the previously obtained binary image to get joint areas referred to as text blobs. Two constraints are used to filter out blobs which do not contain text [1, 2]: the first filters out all very small isolated blobs, whereas the second filters out blobs whose widths are much smaller than their heights. The remaining blobs are enclosed in bounding boxes. The four pairs of coordinates of each bounding box are determined by the maximum and minimum coordinates of the top, bottom, left and right points of the corresponding blob.
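Steps 2 and 3 above — the Eq. (1) standard-deviation map over a 3×7 window, global thresholding, dilation with a 7×7 square structuring element, and the two blob constraints — can be sketched in NumPy as follows. This is an illustrative reconstruction, not the authors' code: the replicate edge padding, the `min_area` constant and the width/height ratio test are assumptions, since the paper gives no numeric values for the constraints.

```python
import numpy as np

def std_feature_map(v, win_h=3, win_w=7):
    """Per-pixel Std(x) of Eq. (1) over the 3x7 window W(x).
    (As printed, Eq. (1) is the sample variance of the window.)"""
    h, w = v.shape
    padded = np.pad(v.astype(float), ((win_h // 2,) * 2, (win_w // 2,) * 2), mode="edge")
    out = np.empty((h, w))
    n = win_h * win_w
    for y in range(h):
        for x in range(w):
            win = padded[y:y + win_h, x:x + win_w]
            out[y, x] = ((win - win.mean()) ** 2).sum() / (n - 1)
    return out

def _blobs(mask):
    """Yield (rows, cols) index arrays of 4-connected components."""
    seen = np.zeros(mask.shape, dtype=bool)
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                stack, ys, xs = [(sy, sx)], [], []
                seen[sy, sx] = True
                while stack:
                    y, x = stack.pop()
                    ys.append(y)
                    xs.append(x)
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                yield np.array(ys), np.array(xs)

def text_boxes(feature_map, thresh, min_area=20):
    """Step 3: global threshold, 7x7 square dilation into text blobs,
    then drop tiny blobs and blobs much narrower than they are tall."""
    binary = feature_map > thresh
    padded = np.pad(binary, 3)           # False border for the dilation shifts
    h, w = binary.shape
    dilated = np.zeros((h, w), dtype=bool)
    for dy in range(7):                  # 7x7 dilation = OR over all shifts
        for dx in range(7):
            dilated |= padded[dy:dy + h, dx:dx + w]
    boxes = []
    for ys, xs in _blobs(dilated):
        top, bot = int(ys.min()), int(ys.max())
        left, right = int(xs.min()), int(xs.max())
        height, width = bot - top + 1, right - left + 1
        if len(ys) >= min_area and width >= height / 2:
            boxes.append((top, bot, left, right))
    return boxes
```

In practice OpenCV's `cv2.dilate` and `cv2.connectedComponents` would replace the explicit loops; the sketch keeps everything in plain NumPy so each stage of the pipeline is visible.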
In order to avoid missing character pixels which lie near or outside of the initial boundary, the width and height of the bounding box are padded by small amounts, as in Figure 7.

Figure 7. Final results for the example given in Figure 5

IV. EXPERIMENTAL EVALUATION

The proposed approach has been evaluated using datasets containing different types of images (Figures 8, 9 and 10). The whole test data consists of 300 images, 100 of which were extracted from various MPEG videos.
Figure 8. Results on house boards
Figure 9. Results on wall boards
Figure 10. Results on banners

The precision and recall rates (Equations (2) and (3)) have been computed based on the number of correctly detected words in an image, in order to further evaluate the efficiency and robustness. The precision rate is defined as the ratio of correctly detected words to the sum of correctly detected words plus false positives. False positives are those regions in the image which are not actually characters of text but have been detected by the algorithm as text regions.

    Precision rate = correctly detected words / (correctly detected words + false positives) × 100%        (2)

The recall rate is defined as the ratio of correctly detected words to the sum of correctly detected words plus false negatives. False negatives are those regions in the image which are actually text characters but have not been detected by the algorithm.

    Recall rate = correctly detected words / (correctly detected words + false negatives) × 100%        (3)
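Equations (2) and (3) in executable form — a small sketch in which the per-image word counts are assumed to come from comparing detections against ground truth:

```python
def precision_rate(correct, false_positives):
    """Eq. (2): share of detected words that are real text, in percent."""
    return correct / (correct + false_positives) * 100.0

def recall_rate(correct, false_negatives):
    """Eq. (3): share of ground-truth words that were detected, in percent."""
    return correct / (correct + false_negatives) * 100.0
```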
Table 1. Analysis of precision rate and recall rate

    Test data     No. of images   Precision rate   Recall rate
    From images   200             92.2             88.6
    From videos   100             78.8             80.2
    Total         300             80.5             84.4

V. CONCLUSION

Text extraction is a critical step, as it determines the quality of the final recognition result. It aims at segmenting text from background, meaning isolating text pixels from those of the background. In this paper we presented the design of a Kannada scene-text detection module for visually impaired persons. As the first step in the development of this system, a simple standard deviation based method for Kannada text detection has been implemented and evaluated.

VI. FUTURE WORK

The main challenge is to design a system as versatile as possible, able to handle all the variability of daily life: variable targets with unknown layout, scene text, several character fonts and sizes, and variable imaging conditions with uneven lighting, shadowing and aliasing. Variation in font style, size, orientation and alignment, and the complexity of the background, make text segmentation a challenging task in text extraction. We plan to employ an OCR system to check the recognition performance for the text images produced by the proposed algorithm, and also to employ a speech synthesizer to spell the recognized text to vision-impaired persons.
Finally, work will focus on new methods for extracting Kannada text characters with higher accuracy.

REFERENCES
[1] Xiaoqing Liu and Jagath Samarabandu, "An Edge-based Text Region Extraction Algorithm for Indoor Mobile Robot Navigation", Proceedings of the IEEE, July 2005.
[2] Xiaoqing Liu and Jagath Samarabandu, "Multiscale Edge-based Text Extraction from Complex Images", IEEE, 2006.
[3] Julinda Gllavata, Ralph Ewerth and Bernd Freisleben, "A Robust Algorithm for Text Detection in Images", Proceedings of the 3rd International Symposium on Image and Signal Processing and Analysis, 2003.
[4] Kongqiao Wang and Jari A. Kangas, "Character Location in Scene Images from Digital Camera", Pattern Recognition, March 2003.
[5] K. C. Kim, H. R. Byun, Y. J. Song, Y. W. Choi, S. Y. Chi, K. K. Kim and Y. K. Chung, "Scene Text Extraction in Natural Scene Images using Hierarchical Feature Combining and Verification", Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), IEEE.
[6] Victor Wu, Raghavan Manmatha, and Edward M. Riseman, "TextFinder: An Automatic System to Detect and Recognize Text in Images", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 21, No. 11, November 1999.
[7] Qixiang Ye, Qingming Huang, Wen Gao and Debin Zhao, "Fast and Robust Text Detection in Images and Video Frames", Image and Vision Computing 23, 2005.
[8] Qixiang Ye, Wen Gao, Weiqiang Wang and Wei Zeng, "A Robust Text Detection Algorithm in Images and Video Frames", IEEE, 2003.
[9] Rainer Lienhart and Axel Wernicke, "Localizing and Segmenting Text in Images and Videos", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 12, No. 4, April 2002.
[10] Keechul Jung, Kwang In Kim and Anil K.
Jain, "Text Information Extraction in Images and Video: A Survey", Pattern Recognition, 2004.
[11] Sanjeev Kunte and R. D. Sudhaker Samuel, "A Simple and Efficient Optical Character Recognition System for Basic Symbols in Printed Kannada Text".
[12] Nobuo Ezaki, Marius Bulacu, Lambert Schomaker, "Text Detection from Natural Scene Images: Towards a System for Visually Impaired Persons", Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), IEEE Computer Society, 2004, pp. 683-686, Vol. II, 23-26 August, Cambridge, UK.
[13] T. V. Ashwin and P. S. Sastry, "A Font and Size-independent OCR System for Printed Kannada Documents using Support Vector Machines", Sādhanā, Vol. 27, Part 1, February 2002, pp. 35–58.
[14] Department of Computer Sciences, University of Texas at Austin, Support Vector Machines, www.cs.utexas.edu/~mooney/cs391L/svm.ppt

Authors

Keshava Prasanna received his B.E. from Bangalore University and M.Tech. in Information Technology in 2005. He has around 13 years of experience in academics. He is currently pursuing a Ph.D. and working as a Research Assistant at Tumkur University, Tumkur. He is a life member of the Indian Society for Technical Education (ISTE).

Ramakanth Kumar P completed his Ph.D. from Mangalore University in the area of pattern recognition. He has around 16 years of experience in academics and industry. His areas of interest are image processing, pattern recognition and natural language processing. He has to his credit 3 national journal papers, 15 international journal papers, and 20 conference papers. He is a member of the Computer Society of India (CSI) and a life member of the Indian Society for Technical Education (ISTE). He has completed a number of research and consultancy projects for DRDO.

Thungamani M received her B.E. from Visvesvaraya Technological University and M.Tech. in Computer Science and Engineering in 2007. She has around 8 years of experience in academics. She is currently pursuing a Ph.D. and working as a Research Assistant at Tumkur University, Tumkur. She is a life member of the Indian Society for Technical Education (MISTE) and The Institution of Electronics and Telecommunication Engineers (IETE).

Manohar Koli received his B.E. from Visvesvaraya Technological University and M.Tech. in Computer Science and Engineering. He has around 8 years of experience in academics. He is currently pursuing a Ph.D. as a Research Scholar at Tumkur University, Tumkur.
COVERAGE ANALYSIS IN VERIFICATION OF TOTAL ZERO DECODER OF H.264 CAVLD

Akhilesh Kumar and Mahesh Kumar Jha
Department of E&C Engineering, NIT Jamshedpur, Jharkhand, India

ABSTRACT

The H.264 video standard is used to achieve high-quality video and high data compression when compared to other existing video standards. H.264 uses context-based adaptive variable length coding (CAVLC) to code residual data in the Baseline profile. The H.264 bitstream consists of zeros and ones. At one of the decoding stages of the context-based adaptive variable length decoder (CAVLD), the Total Zeros decoder is used to calculate total zeros, which is the number of zeros before the last non-zero coefficient. H.264 specifies different lookup tables to decode total zeros, chosen depending on the number of non-zero coefficients. In this paper, the coverage analysis in verification of the Total Zeros decoder of the CAVLD ASIC using the open verification methodology (OVM) is proposed.

KEYWORDS: H.264, CAVLC/CAVLD, OVM

I. INTRODUCTION

Today, verification engineers outnumber design engineers for the most complex designs. Studies reveal that about 70% of all IC respins are due to functional errors. Verification has become the bottleneck in a project's time-to-profit goal [1].
According to the International Technology Roadmap for Semiconductors (ITRS), in many application domains the verification of the design has become the predominant component of a project's development in terms of time, cost, and the human resources dedicated to it [2].

H.264 was jointly developed by the ITU and ISO/IEC. It has better compression efficiency than previous coding standards, and it is also network-friendly, which makes it suitable for many kinds of networks [3]. This paper is about the verification of the VLSI design of the Total Zero Decoder of the H.264 CAVLD decoder. In this paper, the verification using OVM is built by developing verification components using SystemVerilog and the OVM class library, which provides suitable building blocks to design the test environment. OVM is an open-source verification methodology library intended to run on multiple platforms and be supported by multiple EDA vendors. OVM is used for functional verification using SystemVerilog, together with a library of SystemVerilog code [4]. Testbenches in OVM are composed of reusable verification components that are complete verification environments. The methodology does not depend on a vendor and can interoperate with several languages and simulators. It is completely open, and includes a strong class library and source code [4].

The work embodied in this paper presents the verification of the RTL Total Zero Decoder of CAVLD using OVM. Coverage analysis is a vital part of the verification process; it indicates to what degree the source code of the DUT (Design Under Test) has been exercised. The design and analysis are carried out in QuestaSim from Mentor Graphics, using QuestaSim 6.6b.

II. PROPOSED INTERFACE DIAGRAM OF TOTAL ZERO DECODER

2.1 Interface Diagram
The proposed interface diagram of the total zero decoder is shown in Figure 1.
Figure 1. Interface diagram of total zero decoder

Inputs to this process are the bit stream, the total coefficients, and the maximum number of coefficients. The process calculates the number of total zeros using the total coefficients, the maximum number of coefficients, and the bit stream. Total zeros is the number of zeros before the last quantized coefficient of the block. The process is basically a probability model in which total zeros is derived from the bit stream by VLC models, which are separated by the total coefficients and the maximum number of coefficients in the standard. The maximum number of coefficients and the total coefficients are used to select the model used to derive the coefficient token. After decoding the coefficient token, total zeros is derived from the lookup tables (H.264 standard Table 9-7, Table 9-8 and Table 9-9) [5] provided in the ROM. The output of this process is total zeros.

2.2 Port Description
The ports of the proposed interface diagram of the Total Zero Decoder are described in Table 1.

Table 1. Port Description

Signal Name | I/O | Bit Width | Description | Allowable Values
System I/F
clk1 | I | 1 | Operative clock (dedicated to CAVLC) | NA
nreset | I | 1 | Asynchronous reset | 0 – Reset, 1 – No reset
sreset | I | 1 | Synchronous reset | 1 – Reset, 0 – No reset
Decode sequence control I/F
dec_brk | I | 1 | Request IP to stop the decoding process | 0 – IP continues decoding, 1 – IP stops decoding
Bit stream parser I/F
bitstream_i | I | 9 | Input bit stream from Getbits | 0 – (2^9 - 1)
TCTO I/F
tcoeff_i | I | 5 | Tcoeff of 4x4 block | 0 – 16
tcoeff_vld_i | I | 1 | Valid signal for Tcoeff of 4x4 block | 0 – Not valid, 1 – Valid
Level Decoder I/F
start_tz_i | I | 1 | Start signal from controller | 0 – Wait, 1 – Start total zeros module
Slice Dec Controller I/F
cavld_ceb_i | I | 1 | Read clock enable to ROM | 0 – Don't enable clock, 1 – Enable clock
CAVLD Controller I/F
maxcoeff_i | I | 5 | Maximum coefficients of the block | 0 – 16
shift_length_tz_o | O | 4 | Number of bits to be skipped | 0 – 9
shift_en_tz_o | O | 1 | Valid signal for skip length | 0 – Disable, 1 – Enable
Run before decoder I/F
tz_valid_o | O | 1 | Valid signal for total zeros | 0 – Not valid, 1 – Valid
total_zeros_o | O | 4 | Total zeros of 4x4 block | 0 – 15

2.3 Micro-Architecture
The micro-architecture of the decoder is shown in Figure 2. The architecture of the total zero decoder is as follows:

1. Pipeline Stage 1:
The value of the maximum coefficients of a block is taken as input. Based on the maximum number of coefficients and the total coefficients, the ROM address from which the total zero value of that particular block is read is calculated. The ROM table is organized as follows:
• For chroma DC values, the address ranges from 0x00h to 0x17h
• For chroma 4:2:2 where tc = 1, the address ranges from 0x18h to 0x20h
• For chroma 4:2:2 where tc > 1, the address ranges from 0x21h to 0x58h
• For luma values where tc = 1, the address ranges from 0x59h to 0x68h
• For luma values where tc > 1, the address ranges from 0x69h to 0x427h

2. Pipeline Stage 2:
In this stage the value of total zeros is read from the TZ ROM, registered, and sent as output along with tz_end.

2.4 Timing Diagram
The timing diagram of the Total Zero Decoder is shown in Figure 3.
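The Stage 1 range selection listed above can be sketched in Python. This is an illustrative model of the address-range lookup only, not the RTL: the range table mirrors the bullet list, while the block-type labels and the function name are hypothetical naming choices for this sketch.

```python
# ROM address ranges taken from the pipeline description (inclusive bounds).
ROM_RANGES = {
    "chroma_dc":     (0x00, 0x17),
    "chroma422_tc1": (0x18, 0x20),
    "chroma422_tcN": (0x21, 0x58),
    "luma_tc1":      (0x59, 0x68),
    "luma_tcN":      (0x69, 0x427),
}

def rom_range(block_type, tc):
    """Select the ROM address range for a block, as in Pipeline Stage 1.
    block_type is 'chroma_dc', 'chroma422' or 'luma'; tc is the number
    of non-zero coefficients (total coefficients)."""
    if block_type == "chroma_dc":
        return ROM_RANGES["chroma_dc"]
    key = f"{block_type}_tc1" if tc == 1 else f"{block_type}_tcN"
    return ROM_RANGES[key]

lo, hi = rom_range("luma", 3)
print(hex(lo), hex(hi))  # -> 0x69 0x427
```

The exact offset of a given (tc, bit pattern) entry within a range is not spelled out in the text, so the sketch stops at range selection; Stage 2 then reads the addressed entry from the TZ ROM.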
Figure 2. Total zeros decoder architecture diagram
Figure 3. Timing diagram of Total Zero Decoder

2.5 Applying OVM to the Total Zero Decoder
A verification plan is developed to verify the Total Zero Decoder in the OVM environment. The proposed decoder is taken as the DUT and interfaced with the OVM environment. The DUT was written in Verilog. The open verification environment is created by joining different components written in SystemVerilog; these components are the Transaction, Sequence, Sequencer, Driver, Coverage, Assertion, Interface, Monitor, Scoreboard, Agent,
Environment, and finally the Top module. The clock signal for the DUT is generated in the top module. The top module contains the typical HDL constructs and SystemVerilog interfaces. In the top module the DUT is connected to the test environment through the interface. The compilation and verification analysis is carried out in QuestaSim 6.6b from Mentor Graphics.

III. SIMULATION RESULTS
To measure the coverage of the decoder, the code was compiled and then simulated to get the decoded output. The simulated output is shown in Figure 4 and Figure 5.

Figure 4. Simulation result when maxcoeff_i is 8
Figure 5. Simulation result when maxcoeff_i is 16

IV. COVERAGE ANALYSIS
The coverage summary and coverage report give the details of the functional coverage. When the complete analysis was done for the decoder and the coverage report shown in Figure 6 was generated, the coverage was found to be less than 100%.

Figure 6. Coverage results
Figure 7. Coverage results

V. CONCLUSION AND FUTURE SCOPE
H.264/AVC is a public and open standard. Every manufacturer can build encoders and decoders in a competitive market. This will bring prices down quickly, making this technology affordable to
everybody. There is no dependency on proprietary formats, as on the Internet today, which is of utmost importance for the broadcast community. OVM is clearly simulation-oriented. The test benches in OVM are composed of reusable verification components that are complete verification environments. The method does not depend on a vendor and can interoperate with several languages and simulators. The methodology is completely open, and includes a strong class library and source code. In this work an OVM-based Total Zero Decoder VIP (verification intellectual property) is developed. The decoder is subjected to various analyses. The decoder is verified for functional coverage using QuestaSim. It is observed after compilation and simulation that the verification environment responds accurately with no errors. The final coverage report of the Total Zero Decoder is 100%. This work can be extended to verify various IPs in the OVM environment and minimize the bugs generated, basically in the corner cases, thus reducing the verification time of a design.

ACKNOWLEDGEMENT
This work was supported by TATA ELXSI, Bangalore.

REFERENCES
[1] J. Bergeron, "What is verification?" in Writing Testbenches: Functional Verification of HDL Models, 2nd ed. New York: Springer Science, 2003, ch. 1, pp. 1-24.
[2] International Technology Roadmap for Semiconductors [Online]. Available: http://www.itrs.net/Links/2006Update
[3] R. Schafer, T. Wiegand and H. Schwarz, "EBU Technical Review of the emerging H.264/AVC standard", Heinrich Hertz Institute, Berlin, Germany, January 2003.
[4] http://www.doulos.com/knowhow/sysverilog/ovm/tutorial_0
[5] ITU-T Rec. H.264, ITU-T Study Group, March 2009. Available: http://www.itu.int/rec/T-REC-H.264-200903-S/en.
[6] http://www.testbench.co.in
[7] Chris Spear, SystemVerilog for Verification, New York: Springer, 2006.
[8] OVM User Guide, Vers.
2.1, OVM World, December 2009. Available: www.ovmworld.org.
[9] Iain E. Richardson, The H.264 Advanced Video Compression Standard, 2nd ed. UK: Wiley, 2010, pp. 81-85.
[10] "VLSI Design of H.264 CAVLC Decoder", China-Papers, February 16, 2010. [Online]. Available: http://mt.china-papers.com/4/?p=25415
[11] "The Algorithm Study on CAVLC Based on H.264/AVC and Its VLSI Implementation", China-Papers, May 31, 2010. [Online]. Available: http://mt.china-papers.com/4/?p=75976
[12] "Design of CAVLC Codec for H.264", China-Papers, March 24, 2010. [Online]. Available: http://mt.china-papers.com/4/?p=76424
[13] Wu Di, Gao Wen, Hu Mingzeng and Ji Zhenzhou, "A VLSI architecture design of CAVLC decoder", ASIC, 2003.
[14] Tien-Ying Kuo and Chen-Hung Chan, "Fast Macroblock Partition Prediction for H.264/AVC", in IEEE International Conference on Multimedia and Expo (ICME 2004), pp. 675–678, 2004.
[15] Y. L. Lee, K. H. Han, and G. J. Sullivan, "Improved lossless intra coding for H.264/MPEG-4 AVC", IEEE Trans. Image Processing, vol. 15, no. 9, pp. 2610–2615, Sept. 2006.
[16] http://www.ovmworld.org/white_papers.php
[17] OVM Golden Reference Guide, Vers. 2.0, Doulos, September 2008. Available: www.doulos.com
[18] Mythri Alle, J. Biswas and S. K. Nandy, "High performance VLSI architecture design for H.264 CAVLC decoder", in Proceedings of Application-specific Systems, Architectures and Processors, 2006.
[19] "An Introduction to SystemVerilog", ASIC. [Online]. Available: http://www.asic.co.in/Index_files/tutorials/SystemVerilog_veriflcation.ppt
[20] N. Keshaveni, S. Ramachandran and K. S. Gurumurthy, "Implementation of Context Adaptive Variable Length Coder for H.264 Video Encoder", International Journal of Recent Trends in Engineering, Vol. 2, No. 5, pp. 341-345, November 2009.
[21] Mihaela E. Radhu and Shannon M. Sexton, "Integrating Extensive Functional Verification into Digital Design Education", IEEE Trans. Educ., vol. 51, no. 3, pp. 385–393, Aug. 2008.
[22] Donghoon Yeo and Hyunchul Shin, "High Throughput Parallel Decoding Method for H.264/AVC CAVLC", ETRI Journal, Vol. 31, No. 5, pp. 510-517, October 2009.
Authors

Akhilesh Kumar received the B.Tech degree from Bhagalpur University, Bihar, India in 1986 and the M.Tech degree from Ranchi University, Bihar, India in 1993. He has been working in the teaching and research profession since 1989. He is now working as H.O.D. in the Department of Electronics and Communication Engineering at N.I.T. Jamshedpur, Jharkhand, India. His fields of research interest are analog circuits and VLSI design.

Mahesh Kumar Jha received the B.Tech degree from Biju Patnaik University of Technology, Orissa, India in 2007. He is now pursuing the M.Tech in the Department of Electronics and Communication Engineering at N.I.T. Jamshedpur, Jharkhand, India. His field of research interest is VLSI design.
DESIGN AND CONTROL OF VOLTAGE REGULATORS FOR WIND DRIVEN SELF EXCITED INDUCTION GENERATOR

Swati Devabhaktuni1 and S. V. Jayaram Kumar2
1 Assoc. Prof., Gokarajurangaraju Institute of Engg. and Tech., Hyderabad, India
2 Professor, J.N.T. University Hyderabad, India

ABSTRACT
This paper deals with the performance analysis of a static compensator (STATCOM) based voltage regulator for self-excited induction generators (SEIGs) supplying balanced/unbalanced and linear/non-linear loads. A three-phase insulated gate bipolar transistor (IGBT) based current controlled voltage source inverter (CC-VSI), known as a STATCOM, is used for harmonic elimination. It also provides the reactive power the SEIG needs to maintain a constant terminal voltage under varying loads. A set of voltage regulators is designed and their performance is simulated using SIMULINK to demonstrate their capabilities as a voltage regulator, a harmonic eliminator, a load balancer and a neutral current compensator. The paper also discusses the merits and demerits of each topology, so as to select a suitable voltage regulator topology for the self-excited induction generator. The simulated results show that by using a STATCOM based voltage regulator the SEIG terminal voltage can be maintained constant and free from harmonics under linear/non-linear and balanced/unbalanced loads.

KEYWORDS: Self-excited induction generator, static compensator, voltage regulation, load balancing.

I. INTRODUCTION
The rapid depletion and the increased cost of conventional fuels have given a thrust to research on the self-excited induction generator as an alternative power source driven by various prime movers based on non-conventional energy sources [5]. These energy conversion systems are based on constant speed prime movers, constant power prime movers and variable power prime movers [6][15].
In generating systems based on constant speed prime movers (biogas, biomass, biodiesel, etc.), the speed of the turbine is almost constant; therefore the frequency of the generated voltage remains constant. An externally driven induction machine operates as a self-excited induction generator (SEIG), with its excitation requirements being met by a capacitor bank connected across its terminals. The SEIG has advantages [1][12][16][25] such as simplicity, being maintenance free, absence of DC, and being brushless, compared to a conventional synchronous generator [8][11][13]. A major disadvantage of an SEIG is its poor voltage regulation [14][24][18]. It requires a variable capacitance bank to maintain constant terminal voltage under varying loads.

Attempts have been made to maintain constant terminal voltage using fixed capacitors and thyristor controlled reactors (TCRs), saturable-core reactors, and short-shunt connections [6][9][19][21]. The voltage regulation provided by these schemes is discrete, and they inject harmonics into the generating system. However, with the advent of solid-state commutating devices, it is possible to build a static, noiseless voltage regulator able to regulate continuously variable reactive power to keep the terminal voltage of an SEIG constant under varying loads. This system, called a STATCOM, has specific benefits compared to conventional SVCs [2][23][17].

The basic topology of a STATCOM consists of a three-phase current controlled voltage source converter (VSC) and an electrolytic capacitor at its DC bus. The DC bus capacitor self-supports the DC bus of the STATCOM and takes a very small active power from the SEIG for its internal losses, so as to provide sufficient reactive power as per requirements [3][10]. Here the STATCOM is a source of leading or lagging current and can be designed in such a way as to maintain constant voltage across the SEIG terminals with

204 Vol. 1, Issue 5, pp. 204-217
varying loads. In this paper, various STATCOM based VR topologies are presented, based on a two-leg VSC and a three-leg VSC for a three-phase three-wire SEIG system [4][7][20].

An SEIG is an isolated system, which is small in size, and injected harmonics pollute the generated voltage. The STATCOM eliminates the harmonics, provides load balancing, and supplies the required reactive power to the load and the generator. In this paper, the authors present a simple mathematical model for the transient analysis of the SEIG-STATCOM system under balanced/unbalanced conditions. Simulated results show that the SEIG-STATCOM system behaves as an ideal generating system under these conditions.

This paper is organized as follows. Section 2 discusses the various STATCOM controllers used in this paper, with diagrams. Section 3 covers the design of the various STATCOM techniques included in this paper, with the controlling strategies. Section 4 discusses the results obtained from the MATLAB/SIMULINK models for the various STATCOM techniques applied to a self-excited induction generator connected to a grid. Section 5 gives the conclusions of this paper.

The system we tested has the following components:
• a wind turbine
• a three-phase, 3-hp, slip ring induction generator driven by the wind turbine
• various sets of capacitors at the stator terminals to provide reactive power to the induction generator
• various three-phase STATCOM devices
• a three-phase balanced/unbalanced grid

II. SYSTEM STATCOM CONTROLLERS
The VRs are classified as three-phase three-wire VRs and three-phase four-wire VRs. These VRs are based on the two-leg VSC, three-leg VSC, four-leg VSC, three single-phase VSCs, three-leg VSC with midpoint capacitor, and transformer based VRs. In the following section, a detailed system description is presented for the different STATCOM based voltage regulators.

2.1
Three Phase 3-Wire Voltage Regulators
Mainly two types of VR topologies