This document describes the structure and components of a simple one-pass compiler that generates code for the Java Virtual Machine (JVM). It discusses lexical analysis, syntax-directed translation, predictive parsing, and code generation. The compiler consists of a lexical analyzer, a syntax-directed translator based on a context-free grammar, and a parser/code generator built for the translator. It provides examples of attribute grammars, translation schemes, and techniques for handling ambiguity, precedence, and left recursion in parsing.
This document provides an overview of building a simple one-pass compiler to generate bytecode for the Java Virtual Machine (JVM). It discusses defining a programming language syntax, developing a parser, implementing syntax-directed translation to generate intermediate code targeting the JVM, and generating Java bytecode. The structure of the compiler includes a lexical analyzer, syntax-directed translator, and code generator to produce JVM bytecode from a grammar and language definition.
Syntax analysis converts a stream of tokens into a parse tree using grammar production rules, recognizing the structure of a program. There are three main types of parsers: top-down, bottom-up, and universal. Top-down parsers build parse trees from the root down, while bottom-up parsers work from the leaves up. Bottom-up shift-reduce parsing uses a stack and an input buffer, making shift and reduce decisions based on parser states to replace substrings matching production bodies. The largest class of grammars bottom-up parsers can handle is the LR grammars.
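The shift-reduce loop described above can be sketched in a few lines. This is an illustrative sketch for an assumed toy grammar (E -> E + n | n), not code from the summarized document; note the grammar is left-recursive, which bottom-up parsing handles without trouble:

```python
# Minimal shift-reduce recognizer for the toy grammar:
#   E -> E + n | n
def shift_reduce(tokens):
    stack = []
    buf = list(tokens)
    while True:
        # Reduce whenever the top of the stack matches a production body.
        if stack[-3:] == ["E", "+", "n"]:
            del stack[-3:]
            stack.append("E")          # replace the handle with the head
        elif stack[-1:] == ["n"]:
            del stack[-1:]
            stack.append("E")
        elif buf:
            stack.append(buf.pop(0))   # shift the next input token
        else:
            break
    return stack == ["E"]              # success: stack reduced to start symbol
```

A real shift-reduce parser consults a table of parser states to decide between shifting and reducing; here the decision is hard-coded for the two productions.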
This document discusses syntax analysis and parsing. It introduces context-free grammars and how they are used to specify the syntax of programming languages. There are two main types of parsing: top-down parsing which builds the parse tree from the root going down, and bottom-up parsing which builds up the tree from the leaves to the root. Specific parsing algorithms like recursive descent and LR parsing are also introduced.
The document discusses the role of the parser in compiler design. It explains that the parser takes a stream of tokens from the lexical analyzer and checks if the source program satisfies the rules of the context-free grammar. If so, it creates a parse tree representing the syntactic structure. Parsers are categorized as top-down or bottom-up based on the direction they build the parse tree. The document also covers context-free grammars, derivations, parse trees, ambiguity, and techniques for eliminating left-recursion from grammars.
The document discusses the Number class in Java. It provides methods for converting between primitive numeric types like int and their corresponding wrapper classes like Integer. The Number class is an abstract class that all wrapper classes like Integer extend. It contains methods for common operations on numbers like comparison, conversion to primitive types, parsing strings, and mathematical operations. The document lists and describes over 20 methods in the Number class.
A parser breaks down input into smaller elements for translation into another language. It takes a sequence of tokens as input and builds a parse tree or abstract syntax tree. In the compiler model, the parser verifies that the token string can be generated by the grammar and returns any syntax errors. There are two main types of parsers: top-down parsers start at the root and fill in the tree, while bottom-up parsers start at the leaves and work upwards. Syntax directed definitions associate attributes with grammar symbols and specify attribute values with semantic rules for each production.
1) The document discusses parsing methods for context-free grammars including top-down and bottom-up approaches. Top-down parsing starts with the start symbol and works towards the leaves, while bottom-up parsing begins at the leaves and works towards the root.
2) Key aspects of parsing covered include left recursion elimination, left factoring, shift-reduce parsing which uses a stack and parsing table, and constructing parse trees from the parsing process.
3) The output of parsers can include parse sequences, parse trees, and abstract syntax trees which abstract away implementation details.
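The left-recursion elimination mentioned in point 2 rewrites A -> Aα | β as A -> βA', A' -> αA' | ε. A small sketch of that transformation for immediate left recursion (the grammar encoding and naming convention are my own, not from the summarized slides):

```python
def eliminate_left_recursion(head, productions):
    """Remove immediate left recursion from A -> A alpha | beta.

    `productions` is a list of bodies (lists of symbols); returns a dict
    mapping each nonterminal to its new productions. The fresh
    nonterminal A' is spelled head + "'", and [] stands for epsilon.
    """
    rec = [p[1:] for p in productions if p and p[0] == head]     # the alphas
    nonrec = [p for p in productions if not p or p[0] != head]   # the betas
    if not rec:
        return {head: productions}   # no left recursion: nothing to do
    new = head + "'"
    return {
        head: [beta + [new] for beta in nonrec],   # A  -> beta A'
        new: [alpha + [new] for alpha in rec] + [[]],  # A' -> alpha A' | eps
    }
```

For example, E -> E + T | T becomes E -> T E' and E' -> + T E' | ε.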
The document discusses algorithms and their use for solving problems expressed as a sequence of steps. It provides examples of common algorithms like sorting and searching arrays, and analyzing their time and space complexity. Specific sorting algorithms like bubble sort, insertion sort, and quick sort are explained with pseudocode examples. Permutations, combinations and variations as examples of combinatorial algorithms are also covered briefly.
CBSE XII COMPUTER SCIENCE STUDY MATERIAL BY KVS (Gautham Rajesh)
This document provides an overview of key concepts in C++ programming. It discusses the basic building blocks of a C++ program like data types, variables, constants, operators, control structures, functions and arrays. It explains fundamental concepts like tokens, literals, data type conversion and standard libraries. It also introduces object oriented programming concepts like classes and objects. The document is intended as a study material for class 12 students to help understand C++ programming.
This document discusses derivation, ambiguity, and parsing in compiler design. It contains the following key points:
1) A derivation is a sequence of production-rule applications that derives an input string from the start symbol. A left-most (right-most) derivation replaces the left-most (right-most) nonterminal at each step.
2) A grammar is ambiguous if it has more than one parse tree for at least one input string. Top-down parsing builds parse trees from the root node down.
3) Examples show left-most and right-most derivations and how they can result in ambiguous or unambiguous parses for different input strings based on the grammar rules.
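A leftmost derivation can be mechanized as repeatedly expanding the leftmost nonterminal. This sketch uses an assumed toy ambiguous grammar (E -> E + E | id), not the examples from the summarized document:

```python
# A leftmost derivation replaces the leftmost nonterminal at every step.
# Toy ambiguous grammar: E -> E + E | id
def leftmost_step(sentential, body, nonterminals):
    """Expand the leftmost nonterminal with the given production body."""
    for i, sym in enumerate(sentential):
        if sym in nonterminals:
            return sentential[:i] + body + sentential[i + 1:]
    raise ValueError("no nonterminal left to expand")

NT = {"E"}
s = ["E"]
s = leftmost_step(s, ["E", "+", "E"], NT)   # E + E
s = leftmost_step(s, ["id"], NT)            # id + E
s = leftmost_step(s, ["id"], NT)            # id + id
```

The grammar is ambiguous because "id + id + id" admits two distinct leftmost derivations (grouping left or right), hence two parse trees.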
The WiMAX (IEEE 802.16e) standard offers peak data rates of 128 Mbps downlink and 56 Mbps uplink over 20 MHz wide channels, while the new standard in development, 4G WiMAN-Advanced (802.16m), targets full 4G requirements using 64-QAM, BPSK, and MIMO technologies to reach the 1 Gbps rate. It is predicted that in an actual deployment, using 4x2 MIMO in an urban microcell application over a 20 MHz TDD channel, the 4G WiMAN-Advanced system will be able to support 120 Mbps downlink and 60 Mbps uplink per site concurrently. WiMAX applications are already in use in many countries globally, but research in 2010 showed that only just over 350 deployments were actually in use. Many former WiMAX operators were found to have moved to LTE, including Yota, which was the largest WiMAX operator in the world.
This document discusses syntax analysis in compilers. It covers the role of parsers, different types of parsers like top-down and bottom-up parsers, context free grammars, derivations, parse trees, ambiguity, left recursion elimination, left factoring, recursive descent parsing, LL(1) grammars, construction of parsing tables, and error recovery techniques for predictive parsing.
The document discusses strings in Python. It describes that strings are immutable sequences of characters that can contain letters, numbers and special characters. It covers built-in string functions like len(), max(), min() for getting the length, maximum and minimum character. It also discusses string slicing, concatenation, formatting, comparison and various string methods for operations like conversion, formatting, searching and stripping whitespace.
This document provides an overview of parsing in compiler construction. It discusses context-free grammars and how they are used to generate sentences and parse trees through derivations. It also covers ambiguity that can arise from grammars and various grammar transformations used to eliminate ambiguity, including defining associativity and priority. The dangling else problem is presented as an example of an ambiguous grammar.
5-Introduction to Parsing and Context Free Grammar-09-05-2023.pptx (venkatapranaykumarGa)
The document provides information about parsing and context-free grammars. It defines key concepts such as nonterminals, terminals, productions, derivations, ambiguity, left recursion, left factoring, LL(1) parsing, and computing first sets. It also lists different types of parsing including top-down parsing, bottom-up parsing, backtracking, predictive parsing, LR parsing, operator precedence parsing, and recursive descent parsing.
This document discusses top-down parsing and different types of top-down parsers, including recursive descent parsers, predictive parsers, and LL(1) grammars. It explains how to build predictive parsers without recursion by using a parsing table constructed from the FIRST and FOLLOW sets of grammar symbols. The key steps are: 1) computing FIRST and FOLLOW, 2) filling the predictive parsing table based on FIRST/FOLLOW, 3) using the table to parse inputs in a non-recursive manner by maintaining the parser's own stack. An example is provided to illustrate constructing the FIRST/FOLLOW sets and parsing table for a sample grammar.
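Step 1 above, computing FIRST, is a fixed-point iteration over the productions. A minimal sketch under an assumed sample grammar (E -> T E', E' -> + T E' | ε, T -> id); the encoding with "" for epsilon is my own convention:

```python
def first_sets(grammar, terminals):
    """Fixed-point computation of FIRST sets; "" stands for epsilon.

    `grammar` maps each nonterminal to a list of production bodies
    (lists of symbols); an empty body is an epsilon production."""
    first = {A: set() for A in grammar}
    changed = True
    while changed:
        changed = False
        for A, bodies in grammar.items():
            for body in bodies:
                before = len(first[A])
                nullable = True
                for sym in body:
                    if sym in terminals:
                        first[A].add(sym)
                        nullable = False
                        break
                    first[A] |= first[sym] - {""}
                    if "" not in first[sym]:
                        nullable = False
                        break
                if nullable:
                    first[A].add("")          # whole body can derive epsilon
                if len(first[A]) != before:
                    changed = True
    return first

# Sample grammar:  E -> T E'    E' -> + T E' | epsilon    T -> id
GRAMMAR = {"E": [["T", "E'"]], "E'": [["+", "T", "E'"], []], "T": [["id"]]}
FIRST = first_sets(GRAMMAR, {"+", "id"})
```

FOLLOW is computed by a similar fixed-point pass that propagates FIRST of what comes after each nonterminal occurrence.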
This document discusses text mining techniques in R, including importing text, working with strings, regular expressions, for loops, and preprocessing text. It provides examples of functions like readLines(), grep(), gsub(), strsplit(), and for loops. It also covers topics like tokenization, stemming, stopword removal, and bag-of-words representations for text analysis.
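The summarized slides use R (readLines(), grep(), gsub(), strsplit()); purely as an illustration, the same preprocessing pipeline can be sketched with Python's standard library. The sample text and toy stopword list are my own:

```python
import re

# Tokenize, remove stopwords, and build a bag-of-words count.
text = "Parsing is fun; parsing is useful."
tokens = re.split(r"\W+", text.lower())       # split on non-word chars
tokens = [t for t in tokens if t]             # drop empty strings
stopwords = {"is"}                            # toy stopword list
tokens = [t for t in tokens if t not in stopwords]

bag = {}
for t in tokens:                              # bag-of-words representation
    bag[t] = bag.get(t, 0) + 1
```

Stemming would be a further pass (e.g. mapping "parsing" to "pars"); it is omitted here since it needs either a stemmer implementation or a third-party library.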
The document discusses syntax analysis and parsing. It covers context-free grammars, Backus-Naur Form (BNF), Extended BNF, and different parsing techniques like recursive descent parsing and LL parsing. It also discusses Scala's combinator parser, which uses parser combinators to parse input based on a grammar.
The document outlines a presentation on mobile game programming and stacks. It discusses abstract data types (ADTs), data structures, and specifically focuses on stacks. It provides examples of stack implementations in C++ using classes and templates. Finally, it discusses algorithms that use stacks, including converting number systems, evaluating postfix notation, and converting infix to postfix notation.
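The infix-to-postfix conversion mentioned above is the classic stack algorithm (shunting-yard). The summarized slides use C++; this is a hedged Python sketch with a Python list as the stack and an assumed four-operator precedence table:

```python
# Shunting-yard sketch: convert infix tokens to postfix using a stack.
PREC = {"+": 1, "-": 1, "*": 2, "/": 2}

def to_postfix(tokens):
    out, ops = [], []
    for tok in tokens:
        if tok in PREC:
            # Pop operators of greater or equal precedence first
            # (left-associative operators).
            while ops and ops[-1] != "(" and PREC[ops[-1]] >= PREC[tok]:
                out.append(ops.pop())
            ops.append(tok)
        elif tok == "(":
            ops.append(tok)
        elif tok == ")":
            while ops[-1] != "(":
                out.append(ops.pop())
            ops.pop()                 # discard the matching "("
        else:
            out.append(tok)           # operand goes straight to output
    while ops:
        out.append(ops.pop())         # drain remaining operators
    return out
```

Evaluating the resulting postfix sequence is the mirror image: push operands, and on each operator pop two values, apply it, and push the result.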
Compiler Components and their Generators - Traditional Parsing Algorithms (Guido Wachsmuth)
This document discusses parsing algorithms for compilers. It begins with an overview of topics to be covered, including lexical analysis, parsing algorithms like predictive and LR parsing, grammar classes, and an assignment on implementing a MiniJava compiler. It then covers predictive parsing in more detail, including how to generate parsing tables from grammars and how to use these tables in a predictive parsing automaton. Finally, it discusses LR parsing and how it can handle issues like left recursion that predictive parsing cannot. It provides an example of an LR parsing step involving expression evaluation.
Parsers Combinators in Scala, Ilya @lambdamix Kliuchnikov (Vasil Remeniuk)
The document describes parser combinators in Scala. Parser combinators allow building parsers from simple parsing functions combined using operators like ~ (sequence) and | (choice). This approach implements recursive descent parsing with backtracking. Examples show building parsers for arithmetic expressions and lambda calculus terms using parser combinators. The key advantages of the parser combinator approach are that it is functional in nature, parsers can be composed from small reusable pieces, and backtracking allows robust handling of parsing errors.
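The combinator idea is language-agnostic. Purely as an illustration (the slides use Scala's library; these function names and the parser representation are my own), a parser can be modeled as a function from (text, position) to (result, new position) or None on failure:

```python
# A tiny parser-combinator sketch mirroring Scala's ~ (sequence) and
# | (choice). A parser is: (text, pos) -> (result, new_pos) or None.
def lit(s):
    def p(text, i):
        return (s, i + len(s)) if text.startswith(s, i) else None
    return p

def seq(p, q):                        # like Scala's ~ combinator
    def r(text, i):
        a = p(text, i)
        if a is None:
            return None
        b = q(text, a[1])
        return ((a[0], b[0]), b[1]) if b is not None else None
    return r

def alt(p, q):                        # like Scala's |, with backtracking
    def r(text, i):
        return p(text, i) or q(text, i)
    return r

# Compose small parsers into bigger ones.
digit = alt(lit("0"), lit("1"))
expr = seq(digit, seq(lit("+"), digit))   # matches e.g. "1+0"
```

Because `alt` simply tries the second parser from the same position when the first fails, backtracking falls out of the representation for free.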
The document discusses different types of parsing including:
1) Top-down parsing which starts at the root node and builds the parse tree recursively, requiring backtracking for ambiguous grammars.
2) Bottom-up parsing which starts at the leaf nodes and applies grammar rules in reverse to reach the start symbol using shift-reduce parsing.
3) LL(1) and LR parsing which are predictive parsing techniques using parsing tables constructed from FIRST and FOLLOW sets to avoid backtracking.
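The table-driven LL(1) technique in point 3 can be sketched with an explicit stack and a hand-filled table (in practice the table entries come from FIRST and FOLLOW). The grammar below is an assumed example, not one from the summarized document:

```python
# Table-driven LL(1) recognizer sketch for:
#   E  -> id E'        E' -> + id E' | epsilon
# The parsing table maps (nonterminal, lookahead) -> production body.
TABLE = {
    ("E", "id"): ["id", "E'"],
    ("E'", "+"): ["+", "id", "E'"],
    ("E'", "$"): [],                 # epsilon on end-of-input
}

def ll1_parse(tokens):
    stack = ["$", "E"]               # start symbol on top
    tokens = tokens + ["$"]          # append end-of-input marker
    i = 0
    while stack:
        top = stack.pop()
        if top == tokens[i]:         # terminal (or $): match and advance
            i += 1
        elif (top, tokens[i]) in TABLE:
            # Push the chosen body reversed so its first symbol is on top.
            stack.extend(reversed(TABLE[(top, tokens[i])]))
        else:
            return False             # no table entry: syntax error
    return i == len(tokens)
```

No backtracking is needed: at each step the single lookahead token selects exactly one table entry, which is what makes the grammar LL(1).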
This document discusses strings in Python. It defines strings as sequences of characters that can contain letters, numbers, and special characters. Strings are immutable and can be manipulated using built-in functions like len(), min(), max() as well as string methods. Common string operations include concatenation, slicing, formatting, comparison, searching, and conversion methods. The document provides examples of using these string functions and methods.
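The built-in functions and methods mentioned above can be demonstrated directly; the sample string is my own:

```python
s = "Compiler Design"

# Built-in functions: length, max/min character (by code point).
assert len(s) == 15
assert max(s) == "s" and min(s) == " "   # space has the lowest code point

# Slicing and concatenation produce new strings (strings are immutable).
assert s[:8] == "Compiler"
assert s[9:] + "!" == "Design!"

# Common methods: case conversion, searching, splitting, stripping.
assert s.upper() == "COMPILER DESIGN"
assert s.find("Design") == 9
assert "  pad  ".strip() == "pad"
assert s.split() == ["Compiler", "Design"]

# Immutability: item assignment raises TypeError.
try:
    s[0] = "c"
except TypeError:
    pass                                 # expected: strings are immutable
```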
The document discusses algorithms and their analysis. It defines an algorithm as a step-by-step procedure for solving a problem and analyzes the time complexity of various algorithms. Key points made include:
1) Algorithms are analyzed based on how many times their basic operation is performed as a function of input size n.
2) Common time complexities include O(n) for sequential search, O(n^3) for matrix multiplication, and O(log n) for binary search.
3) Recursive algorithms like fibonacci are inefficient and iterative versions improve performance by storing previously computed values.
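The Fibonacci point above is easy to make concrete. The naive recursion recomputes the same subproblems exponentially often; the iterative version keeps only the last two values and runs in linear time (a standard illustration, not code from the summarized document):

```python
def fib_recursive(n):
    # Naive recursion: recomputes subproblems, exponential time.
    return n if n < 2 else fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    # Store the two previously computed values: O(n) time, O(1) space.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```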
The document discusses syntax analysis and parsing. It defines a syntax analyzer as creating the syntactic structure of a source program in the form of a parse tree. A syntax analyzer, also called a parser, checks if a program satisfies the rules of a context-free grammar and produces the parse tree if it does, or error messages otherwise. It describes top-down and bottom-up parsing methods and how parsers use grammars to analyze syntax.
The document discusses algorithms and their analysis. It defines an algorithm as a well-defined computational procedure that takes inputs and produces outputs. It discusses analyzing algorithms based on their time complexity, space complexity, and correctness. It provides examples of analyzing simple algorithms and calculating their complexity based on the number of elementary operations.
Introducción al Análisis y diseño de algoritmos (Introduction to the Analysis and Design of Algorithms) (luzenith_g)
The document discusses algorithms and their analysis. It defines an algorithm as a well-defined computational procedure that takes inputs and produces outputs. It discusses analyzing algorithms to determine their time and space complexity, and how this involves determining how the resources required grow with the size of the problem. It provides examples of analyzing simple algorithms and determining whether they have linear, quadratic, or other complexity.
Main Java [All of the Base Concepts].docx (adhitya5119)
This is part 1 of my Java Learning Journey. This Contains Custom methods, classes, constructors, packages, multithreading , try- catch block, finally block and more.
Walmart Business+ and Spark Good for Nonprofits.pdf (TechSoup)
Learn about all the ways Walmart supports nonprofit organizations. You will hear from Liz Willett, the Head of Nonprofits, about what Walmart is doing to help nonprofits, including Walmart Business and Spark Good. Walmart Business+ is a new offer for nonprofits that provides discounts and streamlines nonprofit order and expense tracking, saving time and money. The webinar may also give some examples of how nonprofits can best leverage Walmart Business+.
The event will cover the following:
Walmart Business+ (https://business.walmart.com/plus) is a new shopping experience for nonprofits, schools, and local business customers that connects an exclusive online shopping experience to stores. Benefits include free delivery and shipping, a 'Spend Analytics' feature, special discounts, deals, and tax-exempt shopping.
A special TechSoup offer for a free 180-day membership, and up to $150 in discounts on eligible orders.
Spark Good (walmart.com/sparkgood) is a charitable platform that enables nonprofits to receive donations directly from customers and associates.
Answers about how you can do more with Walmart!
This slide deck is intended for master's students (MIBS & MIFB) at UUM. It is also useful for readers interested in the topic of contemporary Islamic banking.
This presentation was provided by Steph Pollock of the American Psychological Association's Journals Program and Damita Snow of the American Society of Civil Engineers (ASCE) for the initial session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session One, 'Setting Expectations: a DEIA Primer,' was held June 6, 2024.
LAND USE LAND COVER AND NDVI OF MIRZAPUR DISTRICT, UP (RAHUL)
This dissertation explores the particular circumstances of Mirzapur, a region located in the core of India. Mirzapur, with its varied terrains and abundant biodiversity, offers an optimal environment for investigating changes in vegetation cover dynamics. The study utilizes advanced technologies such as GIS (Geographic Information Systems) and remote sensing to analyze the transformations that have taken place over the course of a decade.

The complex relationship between human activities and the environment has been the focus of extensive research and concern. As the global community grapples with swift urbanization, population expansion, and economic progress, the effects on natural ecosystems are becoming more evident. A crucial element of this impact is the alteration of vegetation cover, which plays a significant role in maintaining the ecological equilibrium of our planet.

Land serves as the foundation for all human activities and provides the necessary materials for them. As the most crucial natural resource, its utilization by humans results in different 'land uses,' which are determined by both human activities and the physical characteristics of the land.

The utilization of land is affected by human needs and environmental factors. In countries like India, rapid population growth and the emphasis on extensive resource exploitation can lead to significant land degradation, adversely affecting the region's land cover.

Human intervention has therefore significantly influenced land use patterns over many centuries, evolving their structure over time and space. In the present era, these changes have accelerated due to factors such as agriculture and urbanization. Information regarding land use and cover is essential for various planning and management tasks related to the Earth's surface, providing crucial environmental data for scientific, resource management, and policy purposes, and for diverse human activities.

An accurate understanding of land use and cover is imperative for the development planning of any area. Consequently, a wide range of professionals, including earth system scientists, land and water managers, and urban planners, are interested in obtaining data on land use and cover changes, conversion trends, and other related patterns. The spatial dimensions of land use and cover support policymakers and scientists in making well-informed decisions, as alterations in these patterns indicate shifts in economic and social conditions. Monitoring such changes with the help of advanced technologies like remote sensing and Geographic Information Systems is crucial for coordinated efforts across different administrative levels.

Changes in vegetation cover refer to variations in the distribution, composition, and overall structure of plant communities across different temporal and spatial scales. These changes can occur naturally.
The simplified electron and muon model, Oscillating Spacetime: The Foundation...RitikBhardwaj56
Discover the Simplified Electron and Muon Model: A New Wave-Based Approach to Understanding Particles delves into a groundbreaking theory that presents electrons and muons as rotating soliton waves within oscillating spacetime. Geared towards students, researchers, and science buffs, this book breaks down complex ideas into simple explanations. It covers topics such as electron waves, temporal dynamics, and the implications of this model on particle physics. With clear illustrations and easy-to-follow explanations, readers will gain a new outlook on the universe's fundamental nature.
it describes the bony anatomy including the femoral head , acetabulum, labrum . also discusses the capsule , ligaments . muscle that act on the hip joint and the range of motion are outlined. factors affecting hip joint stability and weight transmission through the joint are summarized.
Pride Month Slides 2024 David Douglas School District
1. 1
A Simple One-Pass Compiler
(to Generate Code for the JVM)
Chapter 2
COP5621 Compiler Construction
Copyright Robert van Engelen, Florida State University, 2005
2. 2
Overview
• This chapter contains introductory material
to Chapters 3 to 8
• Building a simple compiler
– Syntax definition
– Syntax directed translation
– Predictive parsing
– The JVM abstract stack machine
– Generating Java bytecode for the JVM
3. 3
The Structure of our Compiler
Lexical analyzer
Syntax-directed
translator
Character
stream
Token
stream
Java
bytecode
Syntax definition
(BNF grammar)
Develop
parser and code
generator for translator
JVM specification
4. 4
Syntax Definition
• Context-free grammar is a 4-tuple with
– A set of tokens (terminal symbols)
– A set of nonterminals
– A set of productions
– A designated start symbol
5. 5
Example Grammar
Context-free grammar for simple expressions:
G = <{list, digit}, {+,-,0,1,2,3,4,5,6,7,8,9}, P, list>
with productions P =
list → list + digit
list → list - digit
list → digit
digit → 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
6. 6
Derivation
• Given a CF grammar we can determine the
set of all strings (sequences of tokens)
generated by the grammar using derivation
– We begin with the start symbol
– In each step, we replace one nonterminal in the
current sentential form with one of the right-
hand sides of a production for that nonterminal
7. 7
Derivation for the Example
Grammar
list
⇒ list + digit
⇒ list - digit + digit
⇒ digit - digit + digit
⇒ 9 - digit + digit
⇒ 9 - 5 + digit
⇒ 9 - 5 + 2
This is a leftmost derivation, because in each step we
replaced the leftmost nonterminal
8. 8
Parse Trees
• The root of the tree is labeled by the start symbol
• Each leaf of the tree is labeled by a terminal
(= token) or ε
• Each interior node is labeled by a nonterminal
• If A → X1 X2 … Xn is a production, then node A has
children X1, X2, …, Xn where each Xi is a (non)terminal
or ε (ε denotes the empty string)
9. 9
Parse Tree for the Example
Grammar
Parse tree of the string 9-5+2 using grammar G
list
  list
    list
      digit (9)
    -
    digit (5)
  +
  digit (2)
The sequence of leaves is called the yield of the parse tree
10. 10
Ambiguity
Consider the following context-free grammar:
G = <{string}, {+,-,0,1,2,3,4,5,6,7,8,9}, P, string>
with productions P =
string → string + string | string - string | 0 | 1 | … | 9
This grammar is ambiguous, because more than one parse tree
generates the string 9-5+2
12. 12
Associativity of Operators
right → term = right | term
left → left + term | term
Left-associative operators have left-recursive productions
Right-associative operators have right-recursive productions
String a=b=c has the same meaning as a=(b=c)
String a+b+c has the same meaning as (a+b)+c
13. 13
Precedence of Operators
expr → expr + term | term
term → term * factor | factor
factor → number | ( expr )
Operators with higher precedence “bind more tightly”
String 2+3*5 has the same meaning as 2+(3*5)
Parse tree of 2+3*5:
expr
  expr
    term
      factor
        number (2)
  +
  term
    term
      factor
        number (3)
    *
    factor
      number (5)
14. 14
Syntax of Statements
stmt → id := expr
| if expr then stmt
| if expr then stmt else stmt
| while expr do stmt
| begin opt_stmts end
opt_stmts → stmt ; opt_stmts
| ε
15. 15
Syntax-Directed Translation
• Uses a CF grammar to specify the syntactic
structure of the language
• AND associates a set of attributes with
(non)terminals
• AND associates with each production a set of
semantic rules for computing values for the
attributes
• The attributes contain the translated form of the
input after the computations are completed
16. 16
Synthesized and Inherited
Attributes
• An attribute is said to be …
– synthesized if its value at a parse-tree node is
determined from the attribute values at the
children of the node
– inherited if its value at a parse-tree node is
determined by the parent (by enforcing the
parent’s semantic rules)
17. 17
Example Attribute Grammar
Production              Semantic Rule
expr → expr1 + term     expr.t := expr1.t // term.t // “+”
expr → expr1 - term     expr.t := expr1.t // term.t // “-”
expr → term             expr.t := term.t
term → 0                term.t := “0”
term → 1                term.t := “1”
…
term → 9                term.t := “9”
(here // denotes string concatenation)
24. 24
Parsing
• Parsing = process of determining if a string of
tokens can be generated by a grammar
• For any CF grammar there is a parser that takes at
most O(n³) time to parse a string of n tokens
• Linear algorithms suffice for parsing
programming languages
• Top-down parsing “constructs” parse tree from
root to leaves
• Bottom-up parsing “constructs” parse tree from
leaves to root
25. 25
Predictive Parsing
• Recursive descent parsing is a top-down parsing
method
– Every nonterminal has one (recursive) procedure
responsible for parsing the nonterminal’s syntactic
category of input tokens
– When a nonterminal has multiple productions, each
production is implemented in a branch of a selection
statement based on input look-ahead information
• Predictive parsing is a special form of recursive
descent parsing where we use one lookahead
token to unambiguously determine the parse
operations
27. 27
Example Predictive Parser
(Program Code)
procedure match(t : token);
begin
  if lookahead = t then
    lookahead := nexttoken()
  else error()
end;

procedure type();
begin
  if lookahead in { 'integer', 'char', 'num' } then
    simple()
  else if lookahead = '^' then
  begin
    match('^'); match(id)
  end
  else if lookahead = 'array' then
  begin
    match('array'); match('['); simple();
    match(']'); match('of'); type()
  end
  else error()
end;

procedure simple();
begin
  if lookahead = 'integer' then
    match('integer')
  else if lookahead = 'char' then
    match('char')
  else if lookahead = 'num' then
  begin
    match('num'); match('dotdot'); match('num')
  end
  else error()
end;
31. 31
Example Predictive Parser
(Execution Step 4)
Input: array [ num dotdot num ] of integer
Calls so far:
type() → match('array'), match('['), simple()
simple() → match('num'), match('dotdot')
32. 32
Example Predictive Parser
(Execution Step 5)
Input: array [ num dotdot num ] of integer
Calls so far:
type() → match('array'), match('['), simple()
simple() → match('num'), match('dotdot'), match('num')
33. 33
Example Predictive Parser
(Execution Step 6)
Input: array [ num dotdot num ] of integer
Calls so far:
type() → match('array'), match('['), simple(), match(']')
simple() → match('num'), match('dotdot'), match('num')
34. 34
Example Predictive Parser
(Execution Step 7)
Input: array [ num dotdot num ] of integer
Calls so far:
type() → match('array'), match('['), simple(), match(']'), match('of')
simple() → match('num'), match('dotdot'), match('num')
35. 35
Example Predictive Parser
(Execution Step 8)
Input: array [ num dotdot num ] of integer
Calls so far:
type() → match('array'), match('['), simple(), match(']'),
match('of'), type()
simple() → match('num'), match('dotdot'), match('num')
type() → simple() → match('integer')
36. 36
FIRST
FIRST(α) is the set of terminals that appear as the
first symbols of one or more strings generated from α
type → simple
| ^ id
| array [ simple ] of type
simple → integer
| char
| num dotdot num
FIRST(simple) = { integer, char, num }
FIRST(^ id) = { ^ }
FIRST(type) = { integer, char, num, ^, array }
37. 37
Using FIRST
When a nonterminal A has two (or more) productions as in
A → α
| β
then FIRST(α) and FIRST(β) must be disjoint for
predictive parsing to work

We use FIRST to write a predictive parser as follows:

expr → term rest
rest → + term rest
| - term rest
| ε

procedure rest();
begin
  if lookahead in FIRST(+ term rest) then
  begin
    match('+'); term(); rest()
  end
  else if lookahead in FIRST(- term rest) then
  begin
    match('-'); term(); rest()
  end
  else return
end;
38. 38
Left Factoring
When more than one production for nonterminal A starts
with the same symbols, the FIRST sets are not disjoint
We can use left factoring to fix the problem
stmt → if expr then stmt
| if expr then stmt else stmt

stmt → if expr then stmt opt_else
opt_else → else stmt
| ε
39. 39
Left Recursion
When a production for nonterminal A starts with a
self reference then a predictive parser loops forever
A → A α
| β
We can eliminate left-recursive productions by systematically
rewriting the grammar using right-recursive productions
A → β R
R → α R
| ε
40. 40
A Translator for Simple
Expressions
expr → expr + term { print(“+”) }
expr → expr - term { print(“-”) }
expr → term
term → 0 { print(“0”) }
term → 1 { print(“1”) }
…
term → 9 { print(“9”) }

After left recursion elimination:

expr → term rest
rest → + term { print(“+”) } rest
| - term { print(“-”) } rest
| ε
term → 0 { print(“0”) }
term → 1 { print(“1”) }
…
term → 9 { print(“9”) }
41. 41
main()
{ lookahead = getchar();
expr();
}
expr()
{ term();
while (1) /* optimized by inlining rest()
and removing recursive calls */
{ if (lookahead == '+')
{ match('+'); term(); putchar('+');
}
else if (lookahead == '-')
{ match('-'); term(); putchar('-');
}
else break;
}
}
term()
{ if (isdigit(lookahead))
{ putchar(lookahead); match(lookahead);
}
else error();
}
match(int t)
{ if (lookahead == t)
lookahead = getchar();
else error();
}
error()
{ printf("Syntax error\n");
exit(1);
}
expr → term rest
rest → + term { print(“+”) } rest
| - term { print(“-”) } rest
| ε
term → 0 { print(“0”) }
term → 1 { print(“1”) }
…
term → 9 { print(“9”) }
42. 42
Adding a Lexical Analyzer
• Typical tasks of the lexical analyzer:
– Remove white space and comments
– Encode constants as tokens
– Recognize keywords
– Recognize identifiers
44. 44
Token Attributes
factor → ( expr )
| num { print(num.value) }

#define NUM 256 /* token returned by lexan */
factor()
{ if (lookahead == '(')
{ match('('); expr(); match(')');
}
else if (lookahead == NUM)
{ printf(" %d ", tokenval); match(NUM);
}
else error();
}
45. 45
Symbol Table
insert(s, t): returns the array index of a new entry for string s with token t
lookup(s): returns the array index of the entry for string s, or 0 if absent
The symbol table is globally accessible (to all phases of the compiler)
Each entry in the symbol table contains a string and a token value:
struct entry
{ char *lexptr; /* lexeme (string) */
int token;
};
struct entry symtable[];
Possible implementations:
- simple C code (see textbook)
- hashtables
46. 46
Identifiers
factor → ( expr )
| id { print(id.string) }

#define ID 259 /* token returned by lexan() */
factor()
{ if (lookahead == '(')
{ match('('); expr(); match(')');
}
else if (lookahead == ID)
{ printf(" %s ", symtable[tokenval].lexptr);
match(ID);
}
else error();
}
47. 47
Handling Reserved Keywords
/* global.h */
#define DIV 257 /* token */
#define MOD 258 /* token */
#define ID 259 /* token */
/* init.c */
insert(“div”, DIV);
insert(“mod”, MOD);
/* lexer.c */
int lexan()
{ …
tokenval = lookup(lexbuf);
if (tokenval == 0)
tokenval = insert(lexbuf, ID);
return symtable[tokenval].token;
}
We simply initialize
the global symbol
table with the set of
keywords
50. 50
Generic Instructions for Stack
Manipulation
push v push constant value v onto the stack
rvalue l push contents of data location l
lvalue l push address of data location l
pop discard value on top of the stack
:= the r-value on top is placed in the l-value below it
and both are popped
copy push a copy of the top value on the stack
+ add value on top with value below it
pop both and push result
- subtract value on top from value below it
pop both and push result
*, /, … ditto for other arithmetic operations
<, &, … ditto for relational and logical operations
51. 51
Generic Control Flow
Instructions
label l label instruction with l
goto l jump to instruction labeled l
gofalse l pop the top value, if zero then jump to l
gotrue l pop the top value, if nonzero then jump to l
halt stop execution
jsr l jump to subroutine labeled l, push return address
return pop return address and return to caller
52. 52
Syntax-Directed Translation of
Expressions
expr → term rest { expr.t := term.t // rest.t }
rest → + term rest1 { rest.t := term.t // ‘+’ // rest1.t }
rest → - term rest1 { rest.t := term.t // ‘-’ // rest1.t }
rest → ε { rest.t := ‘’ }
term → num { term.t := ‘push ’ // num.value }
term → id { term.t := ‘rvalue ’ // id.lexeme }
54. 54
Translation Scheme to Generate
Abstract Machine Code
expr → term moreterms
moreterms → + term { print(‘+’) } moreterms
moreterms → - term { print(‘-’) } moreterms
moreterms → ε
term → factor morefactors
morefactors → * factor { print(‘*’) } morefactors
morefactors → div factor { print(‘DIV’) } morefactors
morefactors → mod factor { print(‘MOD’) } morefactors
morefactors → ε
factor → ( expr )
factor → num { print(‘push ’ // num.value) }
factor → id { print(‘rvalue ’ // id.lexeme) }
55. 55
Translation Scheme to Generate
Abstract Machine Code (cont’d)
stmt → id := { print(‘lvalue ’ // id.lexeme) } expr { print(‘:=’) }

Generated code:
lvalue id.lexeme
code for expr
:=
56. 56
Translation Scheme to Generate
Abstract Machine Code (cont’d)
stmt → if expr { out := newlabel(); print(‘gofalse ’ // out) }
then stmt { print(‘label ’ // out) }

Generated code:
code for expr
gofalse out
code for stmt
label out
57. 57
Translation Scheme to Generate
Abstract Machine Code (cont’d)
stmt → while { test := newlabel(); print(‘label ’ // test) }
expr { out := newlabel(); print(‘gofalse ’ // out) }
do stmt { print(‘goto ’ // test // ‘label ’ // out) }

Generated code:
label test
code for expr
gofalse out
code for stmt
goto test
label out
58. 58
Translation Scheme to Generate
Abstract Machine Code (cont’d)
start → stmt { print(‘halt’) }
stmt → begin opt_stmts end
opt_stmts → stmt ; opt_stmts | ε
59. 59
The JVM
• Abstract stack machine architecture
– Emulated in software with JVM interpreter
– Just-In-Time (JIT) compilers
– Hardware implementations available
• Java bytecode
– Platform independent
– Small
– Safe
• The JavaTM Virtual Machine Specification, 2nd ed.
http://java.sun.com/docs/books/vmspec
60. 60
Runtime Data Areas (§3.5)
The JVM runtime data areas include:
– the pc register
– method code
– the constant pool
– the heap
– a frame per method invocation, holding the local vars,
method args, and an operand stack
61. 61
Constant Pool (§3.5.5)
• Serves a function similar to that of a symbol table
• Contains several kinds of constants
• Method and field references, strings, float
constants, and integer constants larger than 16 bits
cannot be used as operands of bytecode
instructions and must be loaded onto the operand
stack from the constant pool
• Java bytecode verification is a pre-execution
process that checks the consistency of the
bytecode instructions and constant pool
62. 62
Frames (§3.6)
• A new frame (also known as activation record) is
created each time a method is invoked
• A frame is destroyed when its method invocation
completes
• Each frame contains an array of variables known
as its local variables indexed from 0
– Local variable 0 is “this” (unless the method is static)
– Followed by method parameters
– Followed by the local variables of blocks
• Each frame contains an operand stack
63. 63
Data Types (§3.2, §3.3, §3.4)
byte an 8-bit signed two’s complement integer
short a 16-bit signed two’s complement integer
int a 32-bit signed two’s complement integer
long a 64-bit signed two’s complement integer
char a 16-bit Unicode character
float a 32-bit IEEE 754 single-precision float value
double a 64-bit IEEE 754 double-precision float value
boolean a virtual type only; int is used to represent true (1) and false (0)
returnAddress the location of the pc after method invocation
reference a 32-bit address reference to an object of class type,
array type, or interface type (value can be NULL)
Operand stack has 32-bit slots, thus long and double occupy two slots
65. 65
The Class File Format (§4)
• A class file consists of a stream of 8-bit bytes
• 16-, 32-, and 64-bit quantities are stored in 2, 4,
and 8 consecutive bytes in big-endian order
• Contains several components, including:
– Magic number 0xCAFEBABE
– Version info
– Constant pool
– This and super class references (index into pool)
– Class fields
– Class methods
70. 70
Generating Code for the JVM
(cont’d)
stmt → id := expr { emit2(istore, id.index) }

Generated code:
code for expr
istore id.index

stmt → if expr { emit(iconst_0); loc := pc;
emit3(if_icmpeq, 0) }
then stmt { backpatch(loc, pc-loc) }

Generated code:
code for expr
iconst_0
loc: if_icmpeq off1 off2
code for stmt
pc:

backpatch() sets the offsets of the relative branch
when the target pc value is known