Pseudocode is an artificial and informal language that helps programmers develop algorithms. In this paper, a software tool is described for translating pseudocode into a particular programming language. The tool takes pseudocode as input, compiles it, and translates it into a concrete programming language. The scope of the tool is broad: it can be extended into a universal programming tool that produces any specified programming language from a given pseudocode. Here we present a solution for translating pseudocode into a programming language by implementing the stages of a compiler.
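As a rough, hypothetical illustration of the idea (not the authors' actual implementation), the C sketch below maps a few assumed pseudocode keywords (READ, PRINT, SET) line by line onto C statements and wraps them in a main function; a real tool would run the full lexical, syntax and semantic analysis described later.

    /* Minimal pseudocode-to-C sketch (assumed pseudocode syntax: READ x, PRINT x, SET x = expr). */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[256], name[64], expr[128];

        printf("#include <stdio.h>\nint main(void) {\n");
        printf("    int x = 0, y = 0, z = 0;\n");          /* assume a small, fixed set of variables */
        while (fgets(line, sizeof line, stdin)) {
            if (sscanf(line, " READ %63s", name) == 1)
                printf("    scanf(\"%%d\", &%s);\n", name);
            else if (sscanf(line, " PRINT %63s", name) == 1)
                printf("    printf(\"%%d\\n\", %s);\n", name);
            else if (sscanf(line, " SET %63s = %127[^\n]", name, expr) == 2)
                printf("    %s = %s;\n", name, expr);
            /* anything else is ignored in this sketch; a real translator would report an error */
        }
        printf("    return 0;\n}\n");
        return 0;
    }

Feeding it the pseudocode lines READ x, SET y = x * 2 and PRINT y would emit a compilable C program containing the corresponding scanf, assignment and printf statements.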
The document discusses the functions and purposes of translators in computing. It describes:
1) Interpreters and compilers translate programs from high-level languages to machine code. Compilers translate the entire program at once, while interpreters translate instructions one at a time as the program runs.
2) Translation from high-level languages to machine code involves multiple stages including lexical analysis, syntax analysis, code generation, and optimization.
3) Linkers and loaders are used to combine separately compiled modules into a complete executable program by resolving addresses and linking the modules together.
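As a minimal sketch of the separate-compilation workflow described in point 3 (hypothetical file names util.c and main.c; the gcc commands in the comments are the standard compile-only and link steps):

    /* Separate compilation and linking: two translation units.
     * Build, for example, with:
     *   gcc -c util.c            -> util.o   (compile only)
     *   gcc -c main.c            -> main.o
     *   gcc -o app main.o util.o             (linker resolves the reference to add())
     */

    /* --- util.c --- */
    int add(int a, int b) { return a + b; }

    /* --- main.c --- */
    #include <stdio.h>
    int add(int a, int b);            /* external reference, resolved at link time */
    int main(void) { printf("%d\n", add(2, 3)); return 0; }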
File operations such as the following (a short C sketch of these appears below):
Reading the file content
Writing content to the file
Copying content from one file to another
Counting the number of characters, words and lines in the file
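A minimal C sketch of those four operations, assuming hypothetical file names in.txt and out.txt:

    #include <stdio.h>
    #include <ctype.h>

    int main(void)
    {
        FILE *in = fopen("in.txt", "r");      /* read the source file */
        FILE *out = fopen("out.txt", "w");    /* write/copy destination */
        if (!in || !out) { perror("fopen"); return 1; }

        long chars = 0, words = 0, lines = 0;
        int c, in_word = 0;
        while ((c = fgetc(in)) != EOF) {
            fputc(c, out);                    /* copy content to the other file */
            chars++;
            if (c == '\n') lines++;
            if (isspace(c)) in_word = 0;
            else if (!in_word) { in_word = 1; words++; }
        }
        printf("characters=%ld words=%ld lines=%ld\n", chars, words, lines);
        fclose(in); fclose(out);
        return 0;
    }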
The document discusses the C programming language. It states that C was created by Dennis Ritchie at Bell Labs in 1972 to develop the UNIX operating system. It became more widely used after Brian Kernighan and Dennis Ritchie published the first description of C in 1978. C is a general-purpose, high-level language that produces efficient, low-level code and can be compiled on many platforms. It is widely used to develop operating systems, compilers, databases and other systems programs.
This document discusses nested macros in system programming. It defines a macro as a unit of program specification that allows code expansion. A macro can contain a call to another macro, known as a nested macro call. Nested macro calls are expanded following the LIFO (last-in-first-out) rule, where the innermost macro call is expanded first. The document provides an example of a nested macro call in C code and how the macros would expand to generate the final code.
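The document's own example is not reproduced here; as an illustrative C-preprocessor analogue, the sketch below defines SUM_OF_SQUARES in terms of SQUARE, so a single call expands through the nested macro:

    /* Nested macro call: SQUARE is used inside SUM_OF_SQUARES.
     * In the C preprocessor, SUM_OF_SQUARES(2, 3) is first rewritten to
     * (SQUARE(2) + SQUARE(3)), and the result is then rescanned so the inner
     * SQUARE calls expand, yielding (((2) * (2)) + ((3) * (3))). */
    #include <stdio.h>

    #define SQUARE(x)            ((x) * (x))
    #define SUM_OF_SQUARES(a, b) (SQUARE(a) + SQUARE(b))

    int main(void)
    {
        printf("%d\n", SUM_OF_SQUARES(2, 3));   /* prints 13 */
        return 0;
    }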
The document discusses control flow statements in Java, which break up the sequential flow of execution and enable conditional execution of code blocks. It covers the following key control flow statements:
- If-then and if-then-else statements allow conditional execution based on boolean tests.
- Switch statements allow multiple possible execution paths based on a variable's value.
- Loops - while, do-while and for statements - repeatedly execute a block of code while/until a condition is met.
- Branching statements like break, continue and return alter the flow of loops or methods.
The document provides examples to illustrate the usage of each statement type.
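For reference, a brief sketch of the same statement families, written here in C syntax, which Java shares almost verbatim (an illustration, not taken from the document):

    #include <stdio.h>

    int main(void)
    {
        int n = 7;

        if (n % 2 == 0) printf("even\n");           /* if-then-else */
        else            printf("odd\n");

        switch (n % 3) {                             /* multi-way branch on a value */
        case 0:  printf("divisible by 3\n"); break;
        default: printf("not divisible by 3\n"); break;
        }

        for (int i = 0; i < n; i++) {                /* loop with branching statements */
            if (i == 2) continue;                    /* skip one iteration */
            if (i == 5) break;                       /* leave the loop early */
            printf("i=%d\n", i);
        }
        return 0;
    }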
The document discusses the different phases of a compiler and storage allocation strategies. It describes:
1. The phases of a compiler include lexical analysis, syntax analysis, semantic analysis, intermediate code generation, code optimization, and code generation.
2. Storage allocation strategies for activation records include static allocation, stack allocation, and heap allocation. Languages like FORTRAN use static allocation while Algol uses stack allocation.
3. Parameter passing mechanisms include call-by-value, call-by-reference, copy-restore, and call-by-name. Call-by-value passes the actual parameter values while call-by-reference passes their locations.
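A small C sketch of the difference between the first two mechanisms: C itself passes arguments by value, and call-by-reference is simulated by passing the variable's location as a pointer (illustrative only; copy-restore and call-by-name have no direct C equivalent):

    #include <stdio.h>

    void by_value(int x)      { x = 99; }   /* changes a local copy only */
    void by_reference(int *x) { *x = 99; }  /* changes the caller's variable via its location */

    int main(void)
    {
        int a = 1, b = 1;
        by_value(a);
        by_reference(&b);
        printf("a=%d b=%d\n", a, b);        /* prints a=1 b=99 */
        return 0;
    }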
This document discusses language processors and the gaps they help bridge. A language processor bridges the specification or execution gap between how a software designer describes an idea and how it is implemented. It does this by taking a source program and producing a target program. The source and target languages can be at different levels like the application domain and execution domain, bridging the semantic gap. Language processors help reduce issues like long development times. Interpreters are one type of language processor that bridges execution gaps by running programs without translation to machine language.
This document discusses the principles of compiler design. It describes the different phases of a compiler, including lexical analysis, syntax analysis, semantic analysis, intermediate code generation, code optimization, and code generation. It also discusses other language processing systems like preprocessors, assemblers, linkers, and loaders. The overall goal of a compiler is to translate a program written in one language into another language like assembly or machine code.
The document provides information about C programming language. It was prepared by Chetan Thapa Magar of Brightland Higher Secondary College. C was created by Dennis Ritchie in 1972 and draws concepts from earlier languages like BCPL and B. C is portable, efficient, structured, and a middle-level language well-suited for system and application programming. The document discusses C's history, elements, data types, operators, control structures like loops and decisions, and provides some example programs.
The document discusses the phases of a compiler including lexical analysis. It provides questions and answers related to compilers and lexical analysis. Specifically:
- It defines key terms related to compilers like translators, compilers, interpreters, and the phases of compilation.
- Questions cover topics like regular expressions, finite automata, lexical analysis issues, and the role of lexical analyzers.
- The role of the lexical analyzer is to read the source program and group it into tokens that are then passed to the parser.
- Regular expressions are used to specify patterns for tokens and can be represented by finite automata like NFAs and DFAs.
An important aspect of a program, apart from its ability to solve the problem, is its maintainability. A program has to undergo frequent changes in its lifetime because of the change in the problems to be solved. If a program is not written in a manner that allows incorporating changes easily, after a while, it may become useless altogether.
One way to bring some discipline into programming practices is structured programming. It is a way of creating programs that ensures high quality of maintainability, reusability, amenability to easy debugging and readability.
GOTO
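A minimal C illustration of the contrast: the same three iterations written once with goto and labels and once as a structured while loop (an illustrative sketch, not from the document):

    #include <stdio.h>

    int main(void)
    {
        int i;

        /* unstructured version: control flow built from goto and a label */
        i = 0;
    again:
        if (i < 3) { printf("goto  i=%d\n", i); i++; goto again; }

        /* structured equivalent: a single while loop, easier to read and modify */
        i = 0;
        while (i < 3) { printf("while i=%d\n", i); i++; }
        return 0;
    }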
This document provides a 2 mark question and answer review for the Principles of Compiler Design subject. It includes 40 questions and answers covering topics like the definitions and phases of a compiler, lexical analysis, syntax analysis, parsing, grammars, ambiguity, error handling and more. The questions are multiple choice or short answer designed to assess understanding of key compiler design concepts and techniques.
This document provides an introduction to the C programming language. It covers C program structure, variables, expressions, operators, input/output, loops, decision making statements, arrays, strings, functions, pointers, structures, unions, file input/output and dynamic memory allocation. The document uses examples and explanations to introduce basic C syntax and concepts.
This document discusses the role and implementation of a lexical analyzer. It begins by explaining that the lexical analyzer is the first phase of a compiler that reads source code characters and produces tokens for the parser. It describes how the lexical analyzer interacts with the parser by returning tokens when requested. The document then discusses several tasks of the lexical analyzer, including stripping comments and whitespace, tracking line numbers for errors, and preprocessing macros. It also covers concepts like tokens, patterns, lexemes, and attributes. Finally, it provides an example input and output of a lexical analyzer tokenizing a C program.
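A minimal scanner sketch in C, using a hypothetical token set (IDENT, NUMBER, OP): it skips whitespace and groups the remaining characters of a hard-coded input line into lexemes, printing one token per lexeme:

    #include <stdio.h>
    #include <ctype.h>

    int main(void)
    {
        const char *src = "rate = rate + 60;";
        const char *p = src;

        while (*p) {
            if (isspace((unsigned char)*p)) { p++; continue; }        /* strip whitespace */
            if (isalpha((unsigned char)*p)) {                          /* identifier lexeme */
                const char *start = p;
                while (isalnum((unsigned char)*p)) p++;
                printf("IDENT  \"%.*s\"\n", (int)(p - start), start);
            } else if (isdigit((unsigned char)*p)) {                   /* number lexeme */
                const char *start = p;
                while (isdigit((unsigned char)*p)) p++;
                printf("NUMBER \"%.*s\"\n", (int)(p - start), start);
            } else {
                printf("OP     '%c'\n", *p++);                         /* =, +, ; ... */
            }
        }
        return 0;
    }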
The document discusses the steps involved in executing a program written in language L. These steps include translation, linking, relocation, and loading. The translator outputs an object module, which contains program code, relocation tables, and linking tables. The linker processes multiple object modules, relocating addresses and resolving external references to produce an executable binary program. It builds a name table containing symbols and their loaded addresses. This allows the linker to modify address operands to point to the correct memory locations for program execution.
C was initially developed for writing system software. It has since become a popular language for various software programs. C is a high-level language that uses functions and supports structured programming. It facilitates low-level programming through pointers and access to hardware addresses. C is commonly used for systems programming due to its portability, efficiency and ability to access hardware. A C program contains functions, and execution begins with the main() function. Variables must be declared before use and are given a data type. Comments can be included to document a program. Standard headers and libraries provide common functions to C programs.
The document discusses system software and language processors. It defines system software as software designed to operate and control computer hardware and provide a platform for running application software. This includes operating systems, compilers, assemblers, and device drivers. It also discusses the roles of lexical analysis, syntax analysis, semantic analysis, and intermediate representations in language processors. Language processors bridge semantic gaps between how software is specified and how it is implemented through various translation and execution methods.
This document provides an introduction to the C programming language. It discusses that C is a general purpose, structured programming language that resembles algebraic expressions and contains keywords like if, else, for, do and while. C can be used for both systems and applications programming due to its flexibility. The document then discusses the structure of a C program, which consists of functions like main that contain statements grouped into blocks. It also covers C language components like data types, constants, variables and keywords. An example program that calculates the area of a circle is provided to demonstrate basic C syntax and components. Finally, conditional statements like if, if else, else if and switch that allow program flow control are introduced.
A compiler is a computer program that translates a program written in a source language into an equivalent program in a target language. The compilation process involves scanning the source code, parsing it, performing semantic analysis, generating intermediate code, optimizing the code, and finally generating the target code. Key data structures used in compilers include symbol tables to store information about identifiers, literal tables to store constants, and parse trees to represent the syntactic structure of the program.
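A tiny symbol-table sketch in C, kept as a fixed-size linear list with insert and lookup operations (a hypothetical structure; production compilers typically use hash tables or search trees and record far more attributes per identifier):

    #include <stdio.h>
    #include <string.h>

    struct symbol { char name[32]; char type[16]; };
    static struct symbol table[64];
    static int nsyms = 0;

    static int lookup(const char *name)            /* returns index or -1 if absent */
    {
        for (int i = 0; i < nsyms; i++)
            if (strcmp(table[i].name, name) == 0) return i;
        return -1;
    }

    static void insert(const char *name, const char *type)
    {
        if (lookup(name) < 0 && nsyms < 64) {
            strcpy(table[nsyms].name, name);
            strcpy(table[nsyms].type, type);
            nsyms++;
        }
    }

    int main(void)
    {
        insert("rate", "float");
        insert("count", "int");
        int i = lookup("rate");
        if (i >= 0) printf("%s : %s\n", table[i].name, table[i].type);
        return 0;
    }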
Java is a compiled and interpreted, platform-independent, secure, robust, and object-oriented programming language. It is compiled into bytecode that can run on any Java Virtual Machine (JVM), making programs portable across platforms. The JVM is available on many operating systems, so Java code can run on Windows, Linux, Solaris, or Mac OS. Java uses automatic memory management, exceptions, and avoids many common programming bugs found in other languages like C/C++.
Prof. Chethan Raj C, BE, M.Tech (Ph.D), Dept. of CSE. System Software & Operating System Lab Manual.
1) To make students familiar with Lexical Analysis and Syntax Analysis phases of Compiler Design and implement programs on these phases using LEX & YACC tools and/or C/C++/Java.
2) To enable students to learn the different types of CPU scheduling algorithms used in operating systems.
3) To make students able to implement memory management - page replacement and deadlock handling algorithms.
Compiler Construction
Phases of a compiler
Analysis and synthesis phases
-------------------
-> Compilation Issues
-> Phases of compilation
-> Structure of compiler
-> Code Analysis
This document provides an overview of compiler design and its various phases. It discusses the different phases of a compiler including lexical analysis, syntax analysis, semantic analysis, intermediate code generation, code optimization, and code generation. It also describes the structure of a compiler and how it translates a source program through various representations and analyses until generating target code. Key concepts covered include context-free grammars, parsing techniques, symbol tables, intermediate representations, and code optimization.
The document discusses different types of loaders and their functions. An absolute loader is the simplest type of loader scheme. It loads assembled machine code directly into memory locations prescribed by the assembler. While simple to implement, absolute loaders require the programmer to manually assign and reference memory locations for subroutines. A general loader scheme uses an object deck format to avoid disadvantages of previous schemes. The general loader accepts object code from assemblers, resolves symbolic references and relocates addresses before loading the executable program into memory.
The document describes a programming language called BAIK, which stands for Bahasa Anak Indonesia untuk Komputer (Indonesian Children's Language for Computers). BAIK is designed to have syntax modeled on the Indonesian language to support multitier web development. It implements an Indonesian parsing engine and uses a binary search tree for memory allocation. The goal is to build a real programming language using a simple structure design and Indonesian lexical words for web development.
SOFTWARE COST ESTIMATION USING FUZZY NUMBER AND PARTICLE SWARM OPTIMIZATION (IJCI JOURNAL)
Software cost estimation is a process to calculate the effort, time and cost of a project, and it assists in better decision making about the feasibility or viability of the project. Accurate cost prediction is required to effectively organize the project development tasks and to carry out economical and strategic planning and project management. Several known and unknown factors affect this process, so cost estimation is a very difficult task. Software size is a very important factor that impacts cost estimation; the accuracy of cost estimation is directly proportional to the accuracy of the size estimation. Failure of software projects has always been an important area of focus for the software industry. The implementation phase is not the only phase in which software projects fail; the planning and estimation steps are the most crucial ones leading to failure. More than 50% of projects fail because they go beyond the estimated time and cost, and the Standish Group's CHAOS report puts the failure rate for software projects at 70%. This paper presents the existing algorithms for software estimation and the relevant concepts of fuzzy theory and PSO, and explains the proposed algorithm with experimental results.
Genetic algorithm guided key generation in wireless communication (GAKG) (IJCI JOURNAL)
In this paper, the proposed technique uses a high-speed stream cipher approach, because this approach is useful where little memory and maximum speed are required for the encryption process. A Self Acclimatize Genetic Algorithm based approach is exploited to generate the key stream used to encrypt and decrypt the plaintext. A widely practiced approach to identifying a good set of parameters for a problem is experimentation. For these reasons, the proposed enhanced Self Acclimatize Genetic Algorithm (GAKG) offers the most appropriate exploration and exploitation behaviour. Parametric tests are done and the results are compared with some existing classical techniques, showing comparable results for the proposed system.
INTRUSION DETECTION IN MULTITIER WEB APPLICATIONS USING DOUBLEGUARD (IJCI JOURNAL)
Today, people's use of distinct internet services and applications has increased enormously, and this usage results in increased data complexity. Web services have therefore turned to a multi-tier design in which the web server acts as the front end and the database server acts as the back end. Attackers try to steal personal data by targeting the database server, so more security must be provided to both the web server and the database server. In this paper, the DoubleGuard system proposes an efficient intrusion detection and prevention system which detects and prevents various attacks in multi-tier web applications. The IDS keeps track of all user sessions across both the web server and the database server by allocating a dedicated web container to each user session. Each user is associated with a unique session ID, which enhances security. The system builds a well-correlated model for the website and detects and prevents various types of attacks. The system is implemented using an Apache web server with MySQL.
This document summarizes and compares several existing broadcasting protocols for vehicular ad hoc networks (VANETs). It discusses the need for reliable broadcasting in VANETs to support safety applications. Simple flooding is not effective due to collisions in dense areas. The document reviews probabilistic, counter-based, distance-based, and location-based broadcasting schemes. It also discusses neighbor knowledge methods. None of the existing schemes are optimal for all situations. The document aims to propose a new intelligent broadcasting technique for emergency messages in VANETs to improve message reception reliability and availability.
BIG DATA ANALYTICS AND E-LEARNING IN HIGHER EDUCATION (IJCI JOURNAL)
With the advent of internet and communication technology, the penetration of e-learning has increased, and the digital data created by higher educational institutions is also on the ascent. The need to use "Big Data" platforms to handle and analyse these large amounts of data is paramount. Many educational institutions are using analytics to improve their processes. Big Data analytics, when applied to the teaching and learning process, may help in improving existing paradigms as well as developing new ones. Big Data supported databases and parallel programming models like MapReduce may facilitate the analysis of the exploding educational data. This paper focuses on the possible application of Big Data techniques to educational data.
HEXAGONAL CIRCULARLY POLARIZED PATCH ANTENNA FOR RFID APPLICATIONS (IJCI JOURNAL)
A compact design of a hexagonal single-feed circularly polarized microstrip antenna for RFID applications is proposed. This structure, fabricated on an FR4 substrate, offers compactness, good axial-ratio bandwidth with a broadside radiation characteristic in the entire band, better gain, and good impedance bandwidth at a resonant frequency of 2.45 GHz. The structure is suitable for RFID reader antenna applications.
Suitable Wind Turbine Selection using Evaluation of Wind Energy Potential in ... (IJCI JOURNAL)
Nowadays, the low environmental impact of wind energy makes it attractive. This paper aims to investigate the wind-power production potential of sites in the north of Iran. An analysis of the wind speed of one city in the province of MAZANDARAN, located in the north of Iran, is performed. The site is a class one wind power site, and the annual average wind speed is 3.58 m/s. The power density of this site is 99 W/m2 at 50 m height. Wind speed data measured over a five-year period at a typical site on the north coast of Iran are presented. The annual wind speeds at different heights have been studied to make an optimum selection of wind turbine installation among three commercial turbines.
ENHANCING THE PERFORMANCE OF E-COMMERCE SOLUTIONS BY FRIENDS RECOMMENDATION S... (IJCI JOURNAL)
In the past, selling required a brick-and-mortar store and was limited to a local customer base. Today, e-commerce websites make it easy for customers around the world to shop in a virtual store. The performance of e-commerce websites depends not only on the quality and price of the product but also on customer-centric suggestions about the product and services and on the retrieval speed of the websites. A customer is more likely to buy a product brand suggested by friends or relatives than one found by viewing comments from unknown people. We can implement such a customer-centric approach using the Neo4j database, which provides easier and faster retrieval of information based on multiple customer attributes and relations than relational databases. Experiments show the enhancement of performance in terms of time and complexity.
Recognition of Persian handwritten characters has been a significant field of research in pattern analysis for the last few years. In this paper, a new approach for robust handwritten Persian numeral recognition using a strong feature set and a classifier fusion method is scrutinized to increase the recognition percentage. For implementing the classifier fusion technique, we have considered k-nearest neighbour (KNN), linear classifier (LC) and support vector machine (SVM) classifiers. The innovation of this approach is to attain better precision with few features using the classifier fusion method. For evaluation of the proposed method we considered a Persian numerals database with 20,000 handwritten samples. Using 15,000 samples for the training stage, we verified our technique on the other 5,000 samples, and the correct recognition ratio achieved was approximately 99.90%. Additionally, we obtained 99.97% accuracy using a four-fold cross-validation procedure on the 20,000-sample database.
DYNAMIC PRIVACY PROTECTING SHORT GROUP SIGNATURE SCHEME (IJCI JOURNAL)
A group signature, an extension of the digital signature, allows members of a group to sign messages on behalf of the group such that the resulting signature does not reveal the identity of the signer. The controllable linkability of group signatures enables an entity who has a linking key to find out whether or not two group signatures were generated by the same signer, while preserving anonymity. This functionality is very useful in many applications that require linkability but still need anonymity, such as sybil attack detection in a vehicular ad hoc network and privacy-preserving data mining. This paper presents a new signature scheme supporting controllable linkability. The major advantage of this scheme is that the signature length is very short, even shorter than that of the best-known group signature scheme without linkability support. A valid signer is able to create signatures that hide his or her identity as in normal group signatures but can be anonymously linked regardless of changes to the membership status of the signer and without exposing the history of joining and revocation. From signatures, only linkage information can be disclosed, with a special linking key. Using this controllable linkability and the controllable anonymity of a group signature, anonymity may be flexibly or elaborately controlled according to a desired level.
SEMANTIC WEB: INFORMATION RETRIEVAL FROM WORLD WIDE WEB (IJCI JOURNAL)
The large amount of information on the web has made accurate search and integration of information nearly impossible. One of the attractive approaches for dealing with this redundancy of information is the Semantic Web (SW). To structure the information, improve searches and present the meaning of the information, a technology is needed that creates relationships between the existing information on the World Wide Web (WWW) and finds the clear meaning among them. The SW has meaningful relationships among information and is able to revolutionize the information retrieval (IR) method in the web environment. The SW is a development of the existing web that equips it with semantic cognitive elements and content mining, so that a combination of continuous and accurate information can be produced. The SW creates a procedure by which information becomes understandable to machines. The SW can be thought of as an effective way of presenting information on the web or as a global and universal link to the information database. In the web environment, tools for integrating information and techniques for processing it are needed because information resources are heterogeneous and decentralized. Ontology is a suitable solution for fast and correct access to common information. The SW uses ontology to provide the conceptual and relational structure and makes it possible for information to be accessed by users and retrieved intelligently. In this paper we address the characteristics, advantages, architecture and problems of the SW and the need to implement it in the WWW.
EFFECT OF GEL PARAMETERS ON THE GROWTH AND NUCLEATION OF LEAD IODATE CRYSTALS (IJCI JOURNAL)
In the present investigation, lead iodate crystals were grown in silica gel at ambient temperature. The effect of various parameters such as gel concentration, pH of the gel, gel ageing and concentration of the reactants on the growth of these crystals was studied. Good quality crystals having different morphologies and habits were obtained. Some of these crystals were opaque and some were translucent.
Cancer recognition from DNA microarray gene expression data using averaged on... (IJCI JOURNAL)
Cancer is a major leading cause of death and is responsible for around 13% of all deaths worldwide. The cancer incidence rate is growing at an alarming rate in the world. Despite the fact that cancer is preventable and curable in its early stages, the vast majority of patients are diagnosed with cancer very late. Therefore, it is of paramount importance to prevent and detect cancer early. Nonetheless, conventional methods of detecting and diagnosing cancer rely solely on skilled physicians, with the help of medical imaging, to detect certain symptoms that usually appear in the late stages of cancer. Microarray gene expression technology is a promising technology that can detect cancerous cells in the early stages of cancer by analyzing the gene expression of tissue samples. Microarray technology allows researchers to examine the expression of thousands of genes simultaneously. This paper describes a state-of-the-art machine learning based approach called averaged one-dependence estimators with subsumption resolution to tackle the problem of recognizing cancer from DNA microarray gene expression data. To lower the computational complexity and to increase the generalization capability of the system, we employ an entropy-based gene selection approach to select relevant genes that are directly responsible for cancer discrimination. The proposed system has achieved an average accuracy of 98.94% in recognizing and classifying cancer over 11 benchmark cancer datasets. The experimental results demonstrate the efficacy of our framework.
Data Security via Public-Key Cryptography in Wireless Sensor Network (IJCI JOURNAL)
This document discusses using public-key cryptography for data security in wireless sensor networks. It begins with an abstract that introduces public-key infrastructure for sensor networks to allow services like digital signatures. It then provides background on wireless sensor networks and discusses their limitations, including limited resources and vulnerability of nodes. It reviews different techniques for distributing public keys, including public announcement, publicly available directories, using a public-key authority, and public-key certificates. It analyzes whether a public-key infrastructure is feasible for sensor networks given their constraints. The document concludes by discussing potential public-key schemes that could work for wireless sensor networks.
A FAULT DIAGNOSIS METHOD BASED ON SEMI-SUPERVISED FUZZY C-MEANS... (IJCI JOURNAL)
Machine learning approaches are generally adopted in many fields including data mining, image processing, intelligent fault diagnosis etc. As a classic unsupervised learning technology, fuzzy C-means cluster analysis plays a vital role in machine learning based intelligent fault diagnosis. With the rapid development of science and technology, the monitoring signal data is numerous and keeps growing fast. Only typical fault samples can be obtained and labeled. Thus, how to apply semi-supervised learning technology in fault diagnosis is significant for guaranteeing equipment safety. According to this, a novel fault diagnosis method based on semi-supervised fuzzy C-means (SFCM) cluster analysis is proposed. Experimental results on the Iris data set and the steel plates faults data set show that this method is superior to traditional fuzzy C-means clustering analysis.
OPTIMISATION BASED ON SIMULATION: A PATIENT ADMISSION SCHEDULING ... (IJCI JOURNAL)
This document summarizes a study that developed an optimization model to schedule patient admissions in a radiology department with the goal of reducing patient wait times. A mathematical model was created to minimize total completion time and total patient waiting time as a multi-objective problem. A multi-stage queuing system was used to represent the patient flow through registration, examination, and checkout. A case study was conducted of a hospital radiology department to collect data and test the optimization model using a multi-objective evolutionary algorithm. The results showed an average 7% reduction in total completion time and 34% reduction in total patient waiting time.
RF BASED TALKING SIGNAGE FOR BLIND NAVIGATION (IJCI JOURNAL)
The major challenges for a visually impaired person are mobility, object identification and identification of the space around him or her. The proposed RF Based Talking Signage for Blind Navigation aims to provide a universal electronic travel guide for visually challenged people. This system incorporates a user-friendly and versatile method called "Talking Signage" that is implemented using Android devices. The system uses an Android application on the mobile phone which can deliver voice messages about the user's environment via a heterogeneous network. It can be deployed in any dense environment so that blind persons can fulfill their needs. The primary advantages of the system compared to other systems in the area are low cost, ease of transport, low power consumption and light weight, and it can be utilized by people who are technically challenged. The architecture proposed in this paper clearly shows the communication between a mobile phone and a heterogeneous network enabled with RF devices. We have implemented the system in our university environment and the proposed system was found to be a great success.
International Journal on Cybernetics & Informatics (IJCI), Vol. 4, No. 2, Apr... (IJCI JOURNAL)
For the past few decades, prosthetics has been aiding in improving the lifestyle of amputees. The major aspect of concern in the existing prosthetic technology is controlling the speed of the prosthetic leg. Stance-phase control of the prosthetic can be achieved, but it is found to fail at varying speeds. This work concentrates on a pathway to control and vary the torques of the prosthetic leg using a Bluetooth interface. The programmed Arduino is connected to the Bluetooth of a smart device and can be operated at varying speeds, thus overcoming the identified drawback. An algorithm for the developed process is also given for effective understanding and application.
This document provides an overview of compilers and their various phases. It begins by introducing compilers and their importance for increasing programmer productivity and enabling reverse engineering. It then covers the classification of programming languages and the history of compilers. The rest of the document details each phase of the compiler process, including lexical analysis, syntax analysis, semantic analysis, intermediate code generation, code optimization, code generation, and the role of the symbol table. It provides definitions and examples for each phase to explain how a source program is translated from a high-level language into executable machine code.
This document provides an overview of compiler design and the different phases involved in compiling a program. It begins by defining a compiler as a program that translates code written in one programming language into another target language to be executed by a computer. The major phases of a compiler are then described as the analysis phase (front-end), which breaks down and analyzes the source code, and the synthesis phase (back-end), which generates the target code. Key phases in the front-end include lexical analysis, syntax analysis, and semantic analysis, while the back-end includes code optimization and code generation. Different types of compilers such as single-pass, two-pass, and multi-pass compilers are also introduced based on how many times they pass over the source code.
The document discusses the phases of a compiler:
- Lexical analysis breaks the source code into tokens by grouping characters into lexemes. Each lexeme is mapped to a token with an abstract symbol and attribute value.
- Syntax analysis (parsing) uses the tokens to build a syntax tree that represents the grammatical structure.
- Semantic analysis checks the syntax tree and symbol table for semantic consistency with the language definition and gathers type information.
- Later phases include intermediate code generation, optimization, and code generation to produce the target code.
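As a worked illustration (the classic textbook assignment statement, not necessarily this document's own example), the statement position = initial + rate * 60 would move through these phases roughly as follows:

    lexical analysis:    id1 = id2 + id3 * 60          (lexemes mapped to tokens; the names are entered in the symbol table)
    syntax analysis:     =(id1, +(id2, *(id3, 60)))    (syntax tree for the assignment)
    semantic analysis:   the integer 60 is converted to match the float operands: inttofloat(60)
    intermediate code:   t1 = inttofloat(60)
                         t2 = id3 * t1
                         t3 = id2 + t2
                         id1 = t3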
The phases of a compiler are:
1. Lexical analysis breaks the source code into tokens
2. Syntax analysis checks the token order and builds a parse tree
3. Semantic analysis checks for type errors and builds symbol tables
4. Code generation converts the parse tree into target code
The document discusses the different phases of a compiler:
1. Lexical analysis scans source code and converts it to tokens.
2. Syntax analysis checks token arrangements against the grammar to validate syntax.
3. Semantic analysis checks that rules like type compatibility are followed.
4. Intermediate code is generated for an abstract machine.
5. Code is optimized in the intermediate representation.
6. Code generation produces machine code from the optimized intermediate code.
A compiler is a program that translates a program written in a source language into a target language. It has two main parts: analysis and synthesis. The analysis part breaks down the source code using lexical analysis, syntax analysis, and semantic analysis. The synthesis part constructs the target program using intermediate code generation, code optimization, and code generation. A compiler translates the source code into assembly code, which is then assembled into machine code and linked with libraries to create an executable program.
The document discusses the different phases of a compiler:
1. The lexical analyzer converts source code into tokens.
2. The syntax analyzer verifies that strings of tokens are valid according to the grammar rules, builds the syntax tree, and reports errors.
3. The semantic analyzer checks for semantic errors like type mismatches and ensures types are used consistently.
4. Intermediate code generation converts code into postfix notation or three-address code.
5. Code optimization improves code efficiency.
6. Code generation produces the final target code.
Language translators convert programming source code into machine language understood by computer processors. The three major types are compilers, assemblers, and interpreters. Compilers translate high-level languages into machine code in one or more passes, assemblers assemble assembly language into machine code, and interpreters analyze and execute each line of source code as the program runs without pre-translation.
This document provides an introduction to compilers, including definitions of key terms like translator, compiler, interpreter, and assembler. It describes the main phases of compilation as lexical analysis, syntax analysis, semantic analysis, intermediate code generation, code optimization, and code generation. It also discusses related concepts like the front-end and back-end of a compiler, multi-pass compilation, and different types of compilers.
This document provides lecture notes on compiler design. It discusses the structure of a compiler including phases like lexical analysis, syntax analysis, intermediate code generation, code optimization, code generation, table management and error handling. It also discusses different types of translators like compilers, interpreters and preprocessors. Finally, it discusses the evolution of programming languages, classification of languages and applications of compiler technology.
The document discusses the basics of compiler construction. It begins by defining key terms like compilers, source and target languages. It then describes the main phases of compilation as lexical analysis, syntax analysis, semantic analysis, intermediate code generation, code optimization and machine code generation. It also discusses symbol tables, compiler tools and generations of programming languages.
The document discusses the different phases of a compiler:
1. Lexical analysis scans the source code as characters and converts them into tokens.
2. Syntax analysis takes the tokens and checks that they form a syntactically correct parse tree based on the language's grammar.
3. Semantic analysis checks that the parse tree follows the language's rules, such as compatible data types in assignments.
Compiler Design Using Context-Free Grammar (IRJET Journal)
This document discusses the use of context-free grammars (CFGs) in compiler design. It provides an overview of CFGs and their role in specifying the syntax of programming languages. The document also examines some of the challenges of using CFGs, such as ambiguity issues, and discusses solutions. It describes the process of designing a compiler that uses a CFG to parse a custom programming language, including implementing a lexer, parser, code generator, and testing the compiler.
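A hypothetical miniature of the approach (not the custom language or grammar from that document): an unambiguous expression grammar and a recursive-descent parser for it in C, which evaluates the input while parsing:

    /* Recursive-descent sketch for a tiny expression grammar (hypothetical):
     *   expr   -> term   { '+' term }
     *   term   -> factor { '*' factor }
     *   factor -> DIGIT | '(' expr ')'
     */
    #include <stdio.h>
    #include <ctype.h>

    static const char *p;                 /* current input position */
    static int expr(void);

    static int factor(void)
    {
        if (*p == '(') { p++; int v = expr(); if (*p == ')') p++; return v; }
        if (isdigit((unsigned char)*p)) return *p++ - '0';
        return 0;                         /* error handling omitted in this sketch */
    }

    static int term(void)
    {
        int v = factor();
        while (*p == '*') { p++; v *= factor(); }
        return v;
    }

    static int expr(void)
    {
        int v = term();
        while (*p == '+') { p++; v += term(); }
        return v;
    }

    int main(void)
    {
        p = "2+3*(4+1)";
        printf("%s = %d\n", "2+3*(4+1)", expr());   /* prints 17 */
        return 0;
    }

Writing repetition as loops instead of left-recursive rules is one standard way of sidestepping the ambiguity and parsing problems that a naive grammar can cause.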
A compiler works in several stages to translate a program from a source language to an output language. It performs lexical analysis to break the source code into tokens, syntactical analysis to check syntax is correct, semantic analysis to ensure code makes logical sense, and optionally generates intermediate code before final code generation and optimization to produce the target program.
The document provides an introduction to compilers, including definitions of key terms like compiler, interpreter, assembler, translator, and phases of compilation like lexical analysis, syntax analysis, semantic analysis, code generation, and optimization. It also discusses compiler types like native compilers, cross compilers, source-to-source compilers, and just-in-time compilers. The phases of a compiler include breaking down a program, generating intermediate code, optimizing, and creating target code.
The document discusses the phases of a compiler including lexical analysis, syntax analysis, semantic analysis, intermediate code generation, code optimization, and code generation. It describes the role of the lexical analyzer in translating source code into tokens. Key aspects covered include defining tokens and lexemes, using patterns and attributes to classify tokens, and strategies for error recovery in lexical analysis such as buffering input.
This document provides an overview of compiler construction principles and practices. It introduces the basic components of a compiler, including scanning, parsing, semantic analysis, code generation, and optimization phases. It also discusses related tools like assemblers, linkers, and debuggers. The document outlines the translation process from source code to target code and describes major data structures used in compilers like symbol tables and syntax trees.
This document provides an overview of the key components and phases of a compiler. It discusses that a compiler translates a program written in a source language into an equivalent program in a target language. The main phases of a compiler are lexical analysis, syntax analysis, semantic analysis, intermediate code generation, code optimization, code generation, and symbol table management. Each phase performs important processing that ultimately results in a program in the target language that is equivalent to the original source program.
Overlapping optimization with parsing through metagrammarsIAEME Publication
This document describes techniques for improving the performance of a meta framework developed by combining C++ and Java language segments. The meta framework identifies and parses source code containing C++ and Java statements using a metagrammar. Bytecodes are generated from the abstract syntax tree and optimized using techniques associated with the metagrammar, such as constant propagation. Constant propagation identifies constant values and replaces variables with those values to simplify expressions and reduce unnecessary computations. Other optimizations discussed include function inlining, exception handling, and eliminating unreachable code through constant folding. The goal is to develop an optimized meta framework that generates efficient bytecodes for hybrid C++ and Java source code.
The document discusses the phases of a compiler, which are typically divided into analysis and synthesis phases. The analysis phase includes lexical analysis, syntax analysis, and semantic analysis. The synthesis phase includes intermediate code generation, code optimization, and code generation. Other topics discussed include symbol tables, error handlers, examples of common compilers, and reasons for learning about compilers.
Similar to SOFTWARE TOOL FOR TRANSLATING PSEUDOCODE TO A PROGRAMMING LANGUAGE (20)
Discover the latest insights on Data Driven Maintenance with our comprehensive webinar presentation. Learn about traditional maintenance challenges, the right approach to utilizing data, and the benefits of adopting a Data Driven Maintenance strategy. Explore real-world examples, industry best practices, and innovative solutions like FMECA and the D3M model. This presentation, led by expert Jules Oudmans, is essential for asset owners looking to optimize their maintenance processes and leverage digital technologies for improved efficiency and performance. Download now to stay ahead in the evolving maintenance landscape.
The CBC machine is a common diagnostic tool used by doctors to measure a patient's red blood cell count, white blood cell count and platelet count. The machine uses a small sample of the patient's blood, which is then placed into special tubes and analyzed. The results of the analysis are then displayed on a screen for the doctor to review. The CBC machine is an important tool for diagnosing various conditions, such as anemia, infection and leukemia. It can also help to monitor a patient's response to treatment.
artificial intelligence and data science contents.pptxGauravCar
What is artificial intelligence? Artificial intelligence is the ability of a computer or computer-controlled robot to perform tasks that are commonly associated with the intellectual processes characteristic of humans, such as the ability to reason.
› ...
Artificial intelligence (AI) | Definitio
Software Engineering and Project Management - Introduction, Modeling Concepts...Prakhyath Rai
Introduction, Modeling Concepts and Class Modeling: What is Object orientation? What is OO development? OO Themes; Evidence for usefulness of OO development; OO modeling history. Modeling
as Design technique: Modeling, abstraction, The Three models. Class Modeling: Object and Class Concept, Link and associations concepts, Generalization and Inheritance, A sample class model, Navigation of class models, and UML diagrams
Building the Analysis Models: Requirement Analysis, Analysis Model Approaches, Data modeling Concepts, Object Oriented Analysis, Scenario-Based Modeling, Flow-Oriented Modeling, class Based Modeling, Creating a Behavioral Model.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
Null Bangalore | Pentesters Approach to AWS IAMDivyanshu
#Abstract:
- Learn more about the real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. So let us proceed with a brief discussion of IAM as well as some typical misconfigurations and their potential exploits in order to reinforce the understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using hands on approach.
#Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
-Allows a user to pass a specific IAM role to an AWS service (ec2), typically used for service access delegation. Then exploit PassRole Misconfiguration granting unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation by creating a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
Batteries -Introduction – Types of Batteries – discharging and charging of battery - characteristics of battery –battery rating- various tests on battery- – Primary battery: silver button cell- Secondary battery :Ni-Cd battery-modern battery: lithium ion battery-maintenance of batteries-choices of batteries for electric vehicle applications.
Fuel Cells: Introduction- importance and classification of fuel cells - description, principle, components, applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
Use PyCharm for remote debugging of WSL on a Windo cf5c162d672e4e58b4dde5d797...shadow0702a
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes on the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
International Journal on Cybernetics & Informatics (IJCI) Vol. 5, No. 2, April 2016
DOI: 10.5121/ijci.2016.5209
SOFTWARE TOOL FOR TRANSLATING PSEUDOCODE
TO A PROGRAMMING LANGUAGE
Amal M R , Jamsheedh C V and Linda Sara Mathew
Department of Computer Science and Engineering, M.A College of Engineering,
Kothamangalam, Kerala, India
ABSTRACT
Pseudocode is an artificial and informal language that helps programmers to develop algorithms. In this
paper a software tool is described for translating pseudocode into a particular programming language.
This tool takes the pseudocode as input, compiles it and translates it to a concrete programming language.
The scope of the tool is broad, as it can be extended to a universal programming tool that produces any
specified programming language from a given pseudocode. Here we present the solution for translating
the pseudocode to a programming language by implementing the stages of a compiler.
KEYWORDS
Compiler, Pseudocode to Source Code, Pseudocode Compiler, C, C++
1. INTRODUCTION
Generally a compiler is treated as a single unit that maps source code into a semantically equivalent
target program [1]. On closer analysis, this mapping has two main phases: analysis and synthesis. The
analysis phase breaks up the source code into
constituent parts and imposes a grammatical structure on them. It then uses this structure to create
an intermediate representation of the source code. If the analysis phase detects that the source code is
either syntactically ill-formed or semantically unsound, then it must provide informative
messages. The analysis phase also collects information about the source code and stores it in a
data structure called a symbol table, which is passed along with the intermediate representation to
the synthesis phase. The synthesis phase constructs the target program from the intermediate
representation and the information in the symbol table [2], [3]. The analysis phase is often called
the front end of the compiler; the synthesis phase is the back end.
The compilation process operates as a sequence of phases, each of which transforms one representation
of the source program into another. Many compilers have a machine-independent
optimization phase between the front end and the back end. The purpose of this optimization
phase is to perform transformations on the intermediate representation; so that the backend can
produce a better target program than it would have otherwise produced from an un-optimized
intermediate representation.
In this paper, we aim to produce code in a user-specified programming language from pseudocode. The
tool takes a single pseudocode as input and produces code in the language specified by the user. Its
significance is that it can be extended to a universal programming tool that can produce code in any
specified programming language from pseudocode.
2. COMPILING PSEUDOCODE
The process of compiling the pseudocode consists of a number of analyses and operations that have to be
performed on it.
2.1. LEXICAL ANALYSER
The Lexical Analyser module analyses the pseudocode submitted by the user using transition analysis.
Keywords, identifiers and tokens are identified from the given input.
2.2. SYNTAX ANALYSER
The Syntax Analyser module creates the context-free grammar from the submitted pseudocode. The
resultant grammar is then used to build the parse tree, and a preorder traversal of the tree is done to
obtain the meaning of the syntax.
2.3. SEMANTIC ANALYSER
The Semantic Analyser uses the syntax tree and the information in the symbol table to check the
source program for semantic consistency with the language definition. It also gathers type
information.
2.4. INTERMEDIATE CODE GENERATOR
In this module, intermediate code is generated from the input pseudocode. This intermediate code is the
representation used for conversion to any of the other languages.
2.5. INTERMEDIATE CODE OPTIMIZER
In this module, code optimization is applied to the generated intermediate code. The optimized
intermediate code is the representation that is mapped to the concrete languages.
2.6. CODE GENERATOR
The optimized intermediate code is converted into the required programming language in this module.
The result is produced in the language selected by the user.
2.7. LIBRARY FILE MANAGER
In this module the administrator manages the library files of the target language and also
manipulates files in the library package.
3. PROBLEM STATEMENT
3.1. INTERPRET THE PSEUDOCODE
The main task is to identify and interpret the pseudocode given by the user. Each user has his own
style of presentation, variation in using keywords or terms (e.g., Sum, Add, etc. to find the sum of
two numbers), sentence structure, and so on. So the initial task is to make the tool capable of
interpreting and identifying the correct meaning of each line of the pseudocode.
3.2. TRANSLATE THE PSEUDOCODE INTO PROGRAMMING LANGUAGE
The second step involves the translation of the interpreted pseudocode into a programming language.
The user can specify the output in any of the available programming languages, so the tool must support
all the features of those languages (in this paper we concentrate on C and C++ only); that is, it must
support object-oriented concepts, forms and so on.
4. METHODOLOGY
4.1. LEXICAL ANALYSIS
The first phase of the software tool is called lexical analysis or scanning. The lexical analyser
reads the stream of characters making up the pseudocode and groups the characters into
meaningful sequences called lexemes. For each lexeme, the lexical analyser produces as output a
token of the form {token-name, attribute-value} that it passes on to the subsequent phase, syntax
analysis. In the token, the first component token-name is an abstract symbol that is used during syntax
analysis, and the second component attribute-value points to an entry in the symbol table for this token.
Information from the symbol-table entry is needed for semantic analysis and code generation. For
example, suppose a source program contains the declare statement [1], [2].
Declare an integer variable called sum# (1.1)
The characters in this statement could be grouped into the following lexemes and mapped into
the following tokens passed on to the syntax analyser:
1. Declare an is a lexeme that would be mapped into a token (Declare, 59), where Declare is a
keyword and 59 points to its symbol-table entry.
2. Integer is a lexeme that would be mapped into a token (Integer, 112), where Integer is a
keyword and 112 points to its symbol-table entry.
3. Variable is a lexeme that would be mapped into a token (Variable, 179), where Variable is a
keyword and 179 points to its symbol-table entry.
4. Called is a lexeme that would be mapped into a token (Called, 340), where Called is a
keyword and 340 points to its symbol-table entry.
5. Sum is a lexeme that would be mapped into a token (sum, 740), where sum is an identifier
and 740 points to its symbol-table entry.
(Blanks separating the lexemes would be discarded by the lexical analyser.)
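To make the scanning step concrete, the following is a minimal sketch in C of how statement 1.1 could be
broken into (token-name, attribute-value) pairs. Only the token IDs (59, 112, 179, 340, 740) come from
the example above; the table layout, function names and the folding of the article into the "Declare an"
lexeme are illustrative assumptions, not the tool's actual implementation.

#include <stdio.h>
#include <string.h>
#include <ctype.h>

struct entry { const char *lexeme; int id; };

/* Keyword table using the token IDs from the example above. */
static const struct entry keywords[] = {
    { "declare", 59 }, { "integer", 112 }, { "variable", 179 }, { "called", 340 }
};

/* Return the token ID of a keyword, or -1 if the word is an identifier. */
static int keyword_id(const char *w) {
    for (size_t i = 0; i < sizeof keywords / sizeof keywords[0]; i++)
        if (strcmp(w, keywords[i].lexeme) == 0)
            return keywords[i].id;
    return -1;
}

int main(void) {
    char line[] = "Declare an integer variable called sum#";
    int next_symtab_slot = 740;                 /* identifier entries, as in the example */

    for (char *w = strtok(line, " #"); w != NULL; w = strtok(NULL, " #")) {
        for (char *p = w; *p; p++)              /* normalise to lower case */
            *p = (char)tolower((unsigned char)*p);
        if (strcmp(w, "an") == 0 || strcmp(w, "a") == 0)
            continue;                           /* article folded into the "Declare an" lexeme */
        int id = keyword_id(w);
        if (id >= 0)
            printf("(%s, %d)\n", w, id);        /* keyword token */
        else
            printf("(%s, %d)\n", w, next_symtab_slot++);   /* identifier token */
    }
    return 0;
}

Running the sketch prints the five tokens listed above, with the blanks and the terminating '#' discarded.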
4.2. SYNTAX ANALYSIS
The second phase of the compiler is syntax analysis or parsing. The parser uses the first
components of the tokens produced by the lexical analyser to create a tree-like intermediate
representation that depicts the grammatical structure of the token stream. A typical representation
is a syntax tree in which each interior node represents an operation and the children of the node
represent the arguments of the operation [12], [13]. The syntax of programming language
constructs can be specified by context-free grammars or BNF (Backus-Naur Form) notation;
Grammars offer significant benefits for both language designers and compiler writers. A grammar
gives a precise, yet easy-to-understand, syntactic specification of a programming language. From
4. International Journal on Cybernetics & Informatics (IJCI) Vol. 5, No. 2, April 2016
82
certain classes of grammars, we can construct automatically an efficient parser that determines
the syntactic structure of a source program. As a side benefit, the parser-construction process can
reveal syntactic ambiguity and trouble spots that might have slipped through the initial design
phase of a language. The structure imparted to a language by a properly designed grammar is
useful for translating source programs into correct object code and for detecting errors. A
grammar allows a language to be evolved or developed iteratively, by adding new constructs to
perform new tasks. These new constructs can be integrated more easily into an implementation
that follows the grammatical structure of the language.
4.3. CONTEXT-FREE GRAMMARS
Grammars were introduced to systematically describe the syntax of programming language
constructs like expressions and statements, using a syntactic variable 'Stmt' to denote statements and a
variable 'expr' to denote expressions. In particular, the notion of derivations is very helpful for
discussing the order in which productions are applied during parsing. The formal definition
of a Context-Free Grammar (grammar for short) consists of terminals, non-terminals, a start
symbol, and productions.
1. Terminals are the basic symbols from which strings are formed. The term "token name"
is a synonym for "terminal" and frequently we will use the word "token" for terminal
when it is clear that we are talking about just the token name. We assume that the
terminals are the first components of the tokens output by the lexical analyser.
2. Non-terminals are syntactic variables that denote sets of strings. The sets of strings denoted
by the non-terminals help to define the language generated by the grammar. Non-terminals
impose a hierarchical structure on the language that is the key to syntax analysis and
translation.
3. In a grammar, one nonterminal is distinguished as the start symbol, and the set of strings
it denotes is the language generated by the grammar. Conventionally, the productions for
the start symbol are listed first.
4. The productions of a grammar specify the manner in which the terminals and non-terminals
can be combined to form strings. Each production consists of:
a) A nonterminal called the head or left side of the production; this production defines some
of the strings denoted by the head.
b) The symbol ->. Sometimes ::= has been used in place of the arrow.
c) A body or right side consisting of zero or more terminals and non-terminals.
The Context Free Grammar generated from 1.1 by the Software tool is
Stmt -> declare_an <DataType> variable Called <Identifier> (1.2)
DataType->integer
Identifier->sum
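As an illustration of how grammar 1.2 can drive parsing, the following is a small recursive-descent
recogniser in C, with one function per nonterminal. The token stream and helper names are assumptions
made for this sketch; the paper does not state how its parser is actually implemented.

#include <stdio.h>
#include <string.h>

/* Token stream produced by the lexical analyser for statement 1.1. */
static const char *tokens[] = { "declare_an", "integer", "variable", "called", "sum" };
static const int NTOKENS = 5;
static int pos = 0;

/* Consume the next token if it matches the expected terminal. */
static int expect(const char *t) {
    if (pos < NTOKENS && strcmp(tokens[pos], t) == 0) { pos++; return 1; }
    printf("syntax error: expected %s\n", t);
    return 0;
}

/* DataType -> integer        (the only alternative needed for 1.2) */
static int data_type(void) { return expect("integer"); }

/* Identifier -> <any identifier token> */
static int identifier(void) {
    if (pos < NTOKENS) { printf("identifier: %s\n", tokens[pos]); pos++; return 1; }
    printf("syntax error: identifier expected\n");
    return 0;
}

/* Stmt -> declare_an DataType variable called Identifier */
static int stmt(void) {
    return expect("declare_an") && data_type() && expect("variable")
        && expect("called") && identifier();
}

int main(void) {
    if (stmt())
        printf("statement 1.1 parsed successfully\n");
    return 0;
}

Each function consumes the terminals of its own production, which mirrors how the parse tree for 1.2 is
built top-down.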
4.4. SEMANTIC ANALYSIS
The semantic analyser uses the syntax tree and the information in the symbol table to check the
source program for semantic consistency with the language definition. It also gathers type
information and saves it in either the syntax tree or the symbol table, for subsequent use during
intermediate-code generation. An important part of semantic analysis is type checking, where the
compiler checks that each operator has matching operands. For example, many programming
language definitions require an array index to be an integer; the compiler must report an error if a
floating-point number is used to index an array. The language specification may permit some type
conversions called coercions. For example, a binary arithmetic operator may be applied to either a
pair of integers or to a pair of floating-point numbers. If the operator is applied to a floating-point
number and an integer, the compiler may convert or coerce the integer into a floating-point
number. The parse tree generated from 1.2 by the software tool is stored using an array representation;
its preorder traversal is:
Stmt-> declare_an DataType integer variable Called Identifier sum (1.3)
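The coercion rule described above can be illustrated with a short C sketch of the type check for a binary
arithmetic operator. The enum and function names are ours; the sketch only demonstrates the rule that a
mixed int/float pair is resolved by coercing the integer operand to floating point.

#include <stdio.h>

enum type { T_INT, T_FLOAT };

/* Type of a binary arithmetic expression: equal operand types keep their
   type; a mixed int/float pair coerces the int operand to float. */
static enum type check_binary(enum type lhs, enum type rhs) {
    if (lhs == rhs)
        return lhs;
    printf("coercion inserted: int operand converted to float\n");
    return T_FLOAT;
}

int main(void) {
    enum type result = check_binary(T_INT, T_FLOAT);   /* e.g. 2 + 3.14 */
    printf("result type: %s\n", result == T_FLOAT ? "float" : "int");
    return 0;
}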
4.5. INTERMEDIATE CODE GENERATION
In the process of translating a source program into target code, a compiler may construct one or
more intermediate representations, which can have a variety of forms. Syntax trees are a form of
intermediate representation; they are commonly used during syntax and semantic analysis. After
syntax and semantic analysis of the source program, many compilers generate an explicit low-level or
machine-like intermediate representation, which we can think of as a program for an abstract
machine [10], [11]. This intermediate representation should have two important properties:
it should be easy to produce and it should be easy to translate into the target machine. In our
software tool, intermediate code is generated to convert the code to various languages from a single
pseudocode.
The intermediate code for 1.3 is as follows: 149 i780 300o (1.4)
4.6. CODE GENERATION
The code generator takes as input an intermediate representation of the source program and maps
it into the target language. If the target language is machine code, registers or memory locations
are selected for each of the variables used by the program. Then, the intermediate instructions are
translated into sequences of machine instructions that perform the same task.
The resultant program code for 1.4 is as follows: int sum; (1.5)
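A minimal sketch of this last mapping step is shown below, assuming the declaration has already been
recovered from the intermediate representation; the structure and function names are illustrative, since
the paper does not document the internal form of 1.4.

#include <stdio.h>

/* A declaration recovered from the optimized intermediate representation. */
struct decl { const char *type_name; const char *identifier; };

/* Emit the declaration in C syntax. */
static void emit_c(const struct decl *d, FILE *out) {
    fprintf(out, "%s %s;\n", d->type_name, d->identifier);
}

int main(void) {
    struct decl d = { "int", "sum" };   /* corresponds to intermediate code 1.4 */
    emit_c(&d, stdout);                 /* prints: int sum;   i.e. result 1.5 */
    return 0;
}

A C++ back end would emit the same text for this declaration, while other target languages would rely on
their own library files.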
5. SCHEMA DESCRIPTION
Fig. 1: System environment of the proposed software tool
The proposed software tool consists of several modules which are used to process the input.
5.1. INPUT DESIGN
This is a process of converting user inputs into computer-based formats. The data is fed into the system
using simple interactive forms. The forms have been supplied with messages so that the user can enter
data without facing any difficulty. The data is validated wherever required in the project. This ensures
that only correct data is incorporated into the system. It also includes determining the recording media,
methods of input, speed of capture and entry into the
system. The input design or user interface design is very important for any application. The interface
design defines how the software communicates within itself, with the systems that interoperate with it,
and with the humans who use it.
The main objectives that guide input design are as follows:
User-friendly code editor - line numbers and shaded graphical rows are provided to easily identify each
line of code.
Avoid bottlenecks that lead to processing delays - the input is designed so that it does not create
bottlenecks, thereby avoiding processing delays.
Dynamic check for errors in data - errors in data can lead to delays, so the input design ensures that the
data being entered is free from error to the maximum possible extent.
Avoid extra steps - the more steps there are, the greater the chance of an error, so the number of steps is
kept to the minimum possible.
Keep the process simple - the process is kept as simple as possible to avoid errors.
5.2. OUTPUT DESIGN
A quality output is one which meets the requirements of the end user and presents the
information clearly. In the output design, it is determined how the information is to be displayed
for immediate need and also the hard copy output. The output design should be understandable to
the user and it must offer great convenience. The output of the proposed software tool is presented by
opening a text file containing the translated code.
The main objectives that guide the output design are as follows:
a. User can copy the code and run it in any of the IDE available.
b. Since the output is written in a standard text file, the user can directly call the file
in the host program.
c. User can add some more code to the existing output and edit it easily.
5.3. DATA STRUCTURES USED
5.3.1. DATA BANK, TOKEN AND TOKEN ID
Here a table with all the tokens and their ID codes used in lexical analysis is given. These tokens and
their IDs are stored in a hash table in the implementation.
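A tiny fixed-size hash table with linear probing, sketched below in C, is one possible realisation of this
token/ID data bank; the bucket count, hash function and token IDs are illustrative assumptions, as the
paper only states that a hash table is used.

#include <stdio.h>
#include <string.h>

#define BUCKETS 64

struct slot { const char *token; int id; };
static struct slot table[BUCKETS];

/* Simple string hash (djb2); any reasonable hash would do. */
static unsigned hash(const char *s) {
    unsigned h = 5381;
    while (*s) h = h * 33 + (unsigned char)*s++;
    return h % BUCKETS;
}

/* Insert a token and its ID, resolving collisions by linear probing. */
static void put(const char *token, int id) {
    unsigned i = hash(token);
    while (table[i].token) i = (i + 1) % BUCKETS;
    table[i].token = token;
    table[i].id = id;
}

/* Look a token up; returns its ID or -1 if it is not in the data bank. */
static int get(const char *token) {
    unsigned i = hash(token);
    while (table[i].token) {
        if (strcmp(table[i].token, token) == 0) return table[i].id;
        i = (i + 1) % BUCKETS;
    }
    return -1;
}

int main(void) {
    put("declare", 59); put("integer", 112);
    put("variable", 179); put("called", 340);
    printf("integer -> %d\n", get("integer"));
    return 0;
}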
5.3.2. LIBRARY FILE FOR A PARTICULAR PROGRAMMING LANGUAGE
This is the data in the library file stored in the library of the software tool. The file consists of all the
keywords and header files of the language.
For example - Library File for 'C':
It includes the data stored in the library of the software tool for all the keywords and header files of the
language 'C'.
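As a purely illustrative example (the paper does not give the library-file format), the 'C' library data
might be held in memory as two string tables, one for keywords and one for header files:

#include <stdio.h>

/* Keywords and header files for the 'C' target, as a library file might list them. */
static const char *c_keywords[] = { "int", "float", "char", "if", "else",
                                    "for", "while", "return" };
static const char *c_headers[]  = { "stdio.h", "stdlib.h", "string.h", "math.h" };

int main(void) {
    printf("keywords:");
    for (size_t i = 0; i < sizeof c_keywords / sizeof c_keywords[0]; i++)
        printf(" %s", c_keywords[i]);
    printf("\nheaders:");
    for (size_t i = 0; i < sizeof c_headers / sizeof c_headers[0]; i++)
        printf(" %s", c_headers[i]);
    printf("\n");
    return 0;
}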
6. EXPERIMENT ANALYSIS
Figure 1. Comparison of Final Lines of Code (FLOC) and Lines of Code (LOC)
Figure 2. Comparison of Lines of Code (LOC) and Optimized Intermediate Lines of Code (OILOC)
Analysis: The graphs are plotted with the Lines of Code (LOC) against the number of
experiments. From the plots, it is clear that the initial LOC of the pseudocode given by the user is
reduced proportionally in the optimized intermediate generated code (OILOC). Then the final
LOC of the generated code is comparatively larger in proportion to the LOC of the pseudocode (see Fig.
1). This measure indicates the efficiency of the tool in generating code in the specified programming
language, and it depends on the efficiency and compatibility of the newly developed tool.
7. CONCLUSIONS
This paper focuses on providing a user-friendly environment for beginners in programming. They can
easily build code in a specified language from a pseudocode without having to know the syntax of that
particular language. A beginner-level programmer familiar with writing pseudocode can implement his
logic in any particular language simply by using this tool. The main advantage of this tool is that the user
can build program code in any language from a single pseudocode. For a beginner in programming, it is
difficult to learn the syntax of a particular language at the very beginning. The user can implement his
logic in pseudocode, and the pseudocode required by this software tool has a simple syntax, so it is easy
for a beginner to write. The pseudocode is simply submitted to the text area in our tool, and the language
of the required output is specified. After processing, the user gets the resultant programming code in a
file, through a simple and user-friendly interface, and the resultant code can then be executed on its
programming platform. The library files in the software tool can be manipulated to add more syntaxes
into the database. Future versions can be built with support for more languages. We can develop this
software tool into a universal programming tool, which can be used to build programming code in any
programming language from a simple, single pseudocode written by the user. It reduces the user's
overhead of knowing the syntax of the various languages.
REFERENCES
[1] Alfred V. Aho, Monica S. Lam, Ravi Sethi, Jeffrey D. Ullman, Compilers: Principles, Techniques and Tools, Second Edition, 2007.
[2] Allen Holub, "Compiler Design in C", Prentice Hall of India, 1993.
[3] Kenneth C. Louden, "Compiler Construction: Principles and Practice", Cengage Learning, Indian Edition.
[4] V. Raghavan, "Principles of Compiler Design", Tata McGraw Hill, India, 2010.
[5] Arthur B. Pyster, "Compiler Design and Construction: Tools and Techniques with C and Pascal", 2nd Edition, Van Nostrand Reinhold Co., New York, NY, USA.
[6] D. M. Dhamdhere, System Programming and Operating Systems, Tata McGraw Hill & Company.
[7] Tremblay and Sorenson, The Theory and Practice of Compiler Writing, Tata McGraw Hill & Company.
[8] Steven S. Muchnick, "Advanced Compiler Design & Implementation", Morgan Kaufmann Publishers, 2000.
[9] Dhamdhere, "System Programming & Operating Systems", 2nd Edition, Tata McGraw Hill, India.
[10] John Hopcroft, Rajeev Motwani & Jeffrey Ullman, Introduction to Automata Theory, Languages & Computation, Pearson Education.
[11] Raymond Greenlaw, H. James Hoover, Fundamentals of the Theory of Computation, Elsevier, Gurgaon, Haryana, 2009.
[12] John C. Martin, Introduction to Languages and the Theory of Computation, 3rd Edition, Tata McGraw Hill, New Delhi, 2010.
[13] Kamala Krithivasan, Rama R., Introduction to Formal Languages, Automata Theory and Computation, Pearson Education Asia, 2009.
[14] Rajesh K. Shukla, Theory of Computation, Cengage Learning, New Delhi, 2009.
[15] K. V. N. Sunitha, N. Kalyani, Formal Languages and Automata Theory, Tata McGraw Hill, New Delhi, 2010.
[16] S. P. Eugene Xavier, Theory of Automata, Formal Languages & Computation, New Age International, New Delhi, 2004.
[17] K. L. P. Mishra, N. Chandrashekharan, Theory of Computer Science, Prentice Hall of India.
[18] Michael Sipser, Introduction to the Theory of Computation, Cengage Learning, New Delhi, 2007.
[19] Harry R. Lewis, Christos H. Papadimitriou, Elements of the Theory of Computation, Pearson Education Asia.
[20] Bernard M. Moret, The Theory of Computation, Pearson Education.
[21] Rajendra Kumar, Theory of Automata, Language & Computation, Tata McGraw Hill, New Delhi, 2010.
[22] Wayne Goddard, Introducing the Theory of Computation, Jones & Bartlett India, New Delhi, 2010.
AUTHORS
Amal M R is currently pursuing M.Tech in Computer Science and Engineering in Mar Athanasius College
of Engineering, Kothamangalam. He completed his B.Tech from Lourdes Matha College of Science and
Technology, Thiruvananthapuram. His areas of research are Compilers and Cloud Computing.
Jamsheedh C V is currently pursuing M.Tech in Computer Science and
Engineering in Mar Athanasius College of Engineering, Kothamangalam. He
completed his B.Tech from Govt. Engineering College, Idukki. His areas of research are Networking and
Cloud Computing.
Linda Sara Mathew received her B.Tech degree in Computer Science and Engineering from Mar
Athanasius College of Engineering, Kothamangalam, Kerala in 2002 and her ME degree in Computer
Science and Engineering from Coimbatore in 2011. She is currently working as an Assistant Professor
with the Department of Computer Science and Engineering in Mar Athanasius College of Engineering,
Kothamangalam, and has a teaching experience of 8 years. Her areas of interest include Digital Signal
Processing, Image Processing and Soft Computing.