Match the following attributes to the parts of a compiler:
strips out the comments and whitespace
converts text into lexemes
generates an Abstract Syntax Tree
recursive descent or table driven
uses BNF or EBNF definitions
a. lexer
b. parser
Solution
The first two attributes (stripping out comments and whitespace, converting text into lexemes)
describe the lexer; the remaining three (generating an Abstract Syntax Tree, being recursive
descent or table driven, using BNF or EBNF definitions) describe the parser.
The Program and Portfolio Management Maturity Model is an effective tool for organizations to
decide quickly what PPM improvements they should make to enhance their organization's
ability to optimize investments, execute big changes and deliver value.
Overview
Key Findings
Real project, program and portfolio management, when operating at the level of well-integrated
practices, is the key enabler that allows organizations to identify and execute strategic change.
Any meaningful undertakings to enhance or evolve the program and portfolio management
(PPM) function must pay more than lip service to organizational structure, business model and
culture to have any chance of success.
Competitive pressures and changing market conditions are forcing organizations toward Level 3,
and most organizations still aren't ready to make the leap.
Recommendations
At any level of PPM maturity, focus first on helping the organization make well-informed
investment choices. This will improve the odds of project success more than any other factor.
For organizations that require enterprisewide change and capabilities, it is worth the effort to
pursue Level 4, where enterprise PPM practices are built.
Identify objectives for your enterprise, and use them to identify the most salient improvement
opportunities for your enterprise.
Meticulously manage the change involved in maturing/improving your PPM capabilities.
Analysis
Project/program/portfolio management office (PMO) leaders, program managers and portfolio
managers are frequently challenged by ineffective processes, lack of stakeholder engagement and
difficult-to-quantify value. [1] Part of the problem is failure to match their processes, people and
technology approaches to the maturity level of their organizations. [2]
The Gartner Program and Portfolio Management (PPM) Maturity Model assessment is designed
to help PPM leaders understand best practices around large project management, as well as PPM
to handle delivery of strategic value. This model assumes that organizations progress through a
maturity curve and that each level of organizational maturity directly affects the level of
investment and types of PPM approaches organizations choose to adopt.
Parsing
A parser is an algorithm that determines whether a given input string is in a language and, as a
side-effect, usually produces a parse tree for the input. There is a procedure for generating a
parser from a given context-free grammar.
Recursive-Descent Parsing
Recursive-descent parsing is one of the simplest parsing techniques that is used in practice.
Recursive-descent parsers are also called top-down parsers, since they construct the parse tree
top down (rather than bottom up).
The basic idea of recursive-descent parsing is to associate each non-terminal with a procedure.
The goal of each such procedure is to read a sequence of input characters that can be generated
by the corresponding non-terminal, and return a pointer to the root of the parse tree for the
non-terminal. The structure of the procedure is dictated by the productions for the
corresponding non-terminal.
The procedure attempts to "match" the right hand side of some production for a non-terminal.
To match a terminal symbol, the procedure compares the terminal symbol to the input; if they
agree, then the procedure is successful, and it consumes the terminal symbol in the input (that is,
moves the input cursor over one symbol).
To match a non-terminal symbol, the procedure simply calls the corresponding procedure for
that non-terminal symbol (which may be a recursive call, hence the name of the technique).
Recursive-Descent Parser for Expressions
Consider the following grammar for expressions (we'll look at the reasons for the peculiar
structure of this grammar later):
E --> T Estar
Estar --> + T Estar | - T Estar | epsilon
T --> F Tstar
Tstar --> * F Tstar | / F Tstar | epsilon
F --> ( E ) | number
We create procedures for each of the non-terminals. According to production 1, the procedure to
match expressions (E) must match a term (by calling the procedure T), and then more
expressions (by calling the procedure Estar).
procedure E;
  T; Estar;
Some procedures, such as Estar, must examine the input to determine which production to choose.
procedure Estar;
  if NextInputChar = "+" or "-" then
    read(NextInputChar);
    T; Estar;
We will append a special marker symbol (ENDM) to the input string; this marker symbol
notifies the parser that the entire input has been seen. We should also modify the procedure for
the start symbol, E, to recognize the end marker after seeing an expression.
Top-Down Parser for Expressions
procedure E;
  T; Estar;
  if NextInputChar = ENDM then /* done */
  else print("syntax error")

procedure Estar;
  if NextInputChar = "+" or "-" then
    read(NextInputChar);
    T; Estar;

procedure T;
  F; Tstar;

procedure Tstar;
  if NextInputChar = "*" or "/" then
    read(NextInputChar);
    F; Tstar;

procedure F;
  if NextInputChar = "(" then
    read(NextInputChar);
    E;
    if NextInputChar = ")" then
      read(NextInputChar)
    else print("syntax error");
  else if NextInputChar = number then
    read(NextInputChar)
  else print("syntax error");
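As a rough sketch, the pseudocode above can be transcribed into Python. Two assumptions not in
the original: numbers are single digits and no whitespace is allowed. Also, the ENDM check is
moved out of E into a separate parse() entry point, so that E can be reused for parenthesized
sub-expressions (with the check inside E, an input like (1+2)*3 would be wrongly rejected when
F calls E).

```python
class ParseError(Exception):
    pass

class Parser:
    ENDM = "$"  # end-of-input marker, appended to the string before parsing

    def __init__(self, text):
        self.text = text + self.ENDM
        self.pos = 0

    def next_char(self):
        return self.text[self.pos]

    def read(self):
        # Consume one input symbol (move the input cursor over one symbol).
        self.pos += 1

    def parse(self):
        # Entry point: parse an expression, then require the end marker.
        self.E()
        if self.next_char() != self.ENDM:
            raise ParseError("syntax error: unexpected %r" % self.next_char())

    def E(self):          # E --> T Estar
        self.T()
        self.Estar()

    def Estar(self):      # Estar --> + T Estar | - T Estar | epsilon
        if self.next_char() in "+-":
            self.read()
            self.T()
            self.Estar()

    def T(self):          # T --> F Tstar
        self.F()
        self.Tstar()

    def Tstar(self):      # Tstar --> * F Tstar | / F Tstar | epsilon
        if self.next_char() in "*/":
            self.read()
            self.F()
            self.Tstar()

    def F(self):          # F --> ( E ) | number
        if self.next_char() == "(":
            self.read()
            self.E()
            if self.next_char() == ")":
                self.read()
            else:
                raise ParseError("syntax error: expected ')'")
        elif self.next_char().isdigit():
            self.read()
        else:
            raise ParseError("syntax error: unexpected %r" % self.next_char())
```

For example, Parser("(1+2)*3").parse() returns normally, while Parser("1+").parse() raises
ParseError, since no production for F can match the end marker.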
BNF is sort of like a mathematical game: you start with a symbol (called the start symbol and by
convention usually named S in examples) and are then given rules for what you can replace this
symbol with. The language defined by the BNF grammar is just the set of all strings you can
produce by following these rules.
The rules are called production rules, and look like this:
symbol := alternative1 | alternative2 ...
A production rule simply states that the symbol on the left-hand side of the := must be replaced
by one of the alternatives on the right-hand side. The alternatives are separated by |s. (One
variation on this is to use ::= instead of :=, but the meaning is the same.) Alternatives usually
consist of both symbols and something called terminals. Terminals are simply pieces of the final
string that are not symbols. They are called terminals because there are no production rules for
them: they terminate the production process. (Symbols are often called non-terminals.)
Another variation on BNF grammars is to enclose terminals in quotes to distinguish them from
symbols. Some BNF grammars explicitly show where whitespace is allowed by having a symbol
for it, while other grammars leave this for the reader to infer.
There is one special symbol in BNF: @, which simply means that the symbol can be removed. If
you replace a symbol by @, you do it by just removing the symbol. This is useful because in
some cases it is difficult to end the replacement process without using this trick.
So, the language described by a grammar is the set of all strings you can produce with the
production rules. If a string cannot in any way be produced by using the rules, the string is not
part of the language.
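To make "the set of all strings you can produce" concrete, here is a sketch in Python for the
classic grammar S := "a" S "b" | @ (this grammar is an illustrative assumption, not one from the
text). Every derivation either stops immediately via @ or wraps a smaller sentence in one "a" and
one "b", so the language is exactly the strings of n a's followed by n b's:

```python
def strings(n):
    """All sentences of S := "a" S "b" | @ derivable with at most n
    uses of the first (recursive) production."""
    return {"a" * k + "b" * k for k in range(n + 1)}

# strings(3) -> {"", "ab", "aabb", "aaabbb"}
```

A string like "abab" is not in this set, so it is not in the language: no sequence of rule
applications can produce it.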
OK. Now you know what BNF and EBNF are, what they are used for, but perhaps not why they
are useful or how you can take advantage of them.
The most obvious way of using a formal grammar has already been mentioned in passing: once
you've given a formal grammar for your language you have completely defined it. There can be
no further disagreement on what is allowed in the language and what is not. This is extremely
useful because a syntax description in ordinary prose is much more verbose and open to different
interpretations.
Another benefit is this: formal grammars are mathematical creatures and can be "understood"
by computers. There are actually lots of programs that can be given (E)BNF grammars as input
and automatically produce code for parsers for the given grammar. In fact, this is the most
common way to produce a compiler: by using a so-called compiler-compiler that takes a
grammar as input and produces parser code in some programming language.
Of course, compilers do much more checking than just grammar checking (such as type
checking) and they also produce code. None of these things are described in an (E)BNF
grammar, so compiler-compilers usually have a special syntax for associating code snippets
(called actions) with the different productions in the grammar.
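The idea of actions can be sketched without a real compiler-compiler. Below, a hand-written
recognizer for a toy grammar (an illustrative assumption: E := number ("+" number)*) runs a
small code snippet, the action, each time a production is matched; here the actions compute the
value of the expression:

```python
def evaluate(text):
    """Recognize E := number ("+" number)* and run an action per production."""
    toks = text.replace("+", " + ").split()
    value = int(toks[0])            # action for the initial number
    for i in range(1, len(toks), 2):
        assert toks[i] == "+"       # grammar check: only "+" is allowed here
        value += int(toks[i + 1])   # action attached to the "+ number" production
    return value

# evaluate("1+2+3") -> 6
```

A compiler-compiler generalizes exactly this pattern: you write the grammar, attach a snippet to
each production, and the tool generates the recognizer that invokes your snippets.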
The best-known compiler-compiler is YACC (Yet Another Compiler Compiler), which produces
C code, but others exist for C++, Java and Python, as well as many other languages.
Parsing
Both lexers and parsers read symbols of some alphabet from their input.
Hint: the alphabet doesn't necessarily have to consist of letters, but it has to consist of
symbols which are atomic for the language the parser/lexer understands.
Symbols for the lexer: ASCII characters.
Symbols for the parser: the particular tokens, which are terminal symbols of their grammar.
They analyse these symbols and try to match them with the grammar of the language they
understand.
And here's where the real difference usually lies. See below for more.
Grammar understood by lexers: regular grammar (Chomsky Type 3).
Grammar understood by parsers: context-free grammar (Chomsky Type 2).
They attach semantics (meaning) to the language pieces they find.
Lexers attach meaning by classifying lexemes (strings of symbols from the input) as particular
tokens. E.g. the lexemes *, ==, <=, ^ will all be classified as "operator" tokens by a C/C++
lexer.
Parsers attach meaning by classifying strings of tokens from the input (sentences) as particular
nonterminals and building the parse tree. E.g. the token strings
[number][operator][number], [id][operator][id], [id][operator][number][operator][number] will all
be classified as the "expression" nonterminal by a C/C++ parser.
They can attach some additional meaning (data) to the recognized elements. E.g. when a lexer
recognizes a character sequence constituting a proper number, it can convert it to its binary
value and store it with the "number" token. Similarly, when a parser recognizes an expression, it
can compute its value and store it with the "expression" node of the syntax tree.
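The lexer half of this comparison can be sketched in a few lines of Python. The token classes and
regular expressions below are illustrative assumptions, not a real C/C++ token set; note how the
"number" token carries its converted value, as described above:

```python
import re

# Each token class is paired with a regular expression (a regular, i.e.
# Chomsky Type 3, description of its lexemes).
TOKEN_SPEC = [
    ("number",   r"\d+"),
    ("id",       r"[A-Za-z_]\w*"),
    ("operator", r"==|<=|\*|\^"),
    ("skip",     r"\s+"),          # whitespace: matched but not emitted
]

def lex(text):
    """Classify lexemes as (token, value) pairs; numbers carry their
    converted numeric value."""
    pattern = "|".join("(?P<%s>%s)" % (name, rx) for name, rx in TOKEN_SPEC)
    for m in re.finditer(pattern, text):
        kind, lexeme = m.lastgroup, m.group()
        if kind == "skip":
            continue
        yield (kind, int(lexeme) if kind == "number" else lexeme)

# list(lex("x == 42")) -> [("id", "x"), ("operator", "=="), ("number", 42)]
```

A parser would then consume this token stream, rather than raw characters, as its input alphabet.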
Both produce as output proper sentences of the language they recognize.
Lexers produce tokens, which are sentences of the regular language they recognize. Each token
can have an inner syntax (though of level 3, not level 2), but that doesn't matter for the output
data and for whatever reads them.
Parsers produce syntax trees, which are representations of sentences of the context-free language
they recognize. Usually it's only one big tree for the whole document/source file, because the
whole document/source file is a proper sentence for them. But there isn't any reason why a
parser couldn't produce a series of syntax trees on its output. E.g. it could be a parser which
recognizes SGML tags embedded in plain text. It would then tokenize the SGML document into a
series of tokens: [TXT][TAG][TAG][TXT][TAG][TXT]....