This document describes methods for formally defining the syntax and semantics of programming languages. It introduces context-free grammars and Backus-Naur Form (BNF) for specifying syntax and gives examples of grammars. Attribute grammars extend BNF with semantic attributes to describe static semantics. Operational semantics is discussed as a way to define the runtime meanings of language constructs by simulating program execution.
This document covers how the syntax and semantics of programming languages are described. It begins by introducing syntax as the form or structure of a language and semantics as the meaning. It then discusses formal methods for describing syntax, including context-free grammars (CFGs), Backus-Naur Form (BNF), derivations, parse trees, and ambiguity. Attribute grammars are introduced as a way to incorporate semantic information into CFGs. Operational and denotational semantics are discussed as approaches to formally describing the meaning of programs. Examples are provided for BNF syntax rules, derivations, attribute grammars, and a denotational semantics for expressions.
This document discusses formal methods for describing the syntax and semantics of programming languages, including attribute grammars. It begins by introducing syntax and semantics, and the problem of formally defining a language. It then covers context-free grammars and Backus-Naur form (BNF) as a way to describe syntax. Attribute grammars are introduced as a method to incorporate semantic information into syntax rules through attributes and functions. An example attribute grammar checks for type consistency in an expression grammar.
CS-4337_03_Chapter3 - Syntax and Semantics.pdf (uploaded by FutureKids1)
This document discusses formal methods for describing the syntax and semantics of programming languages, including context-free grammars, Backus-Naur Form (BNF), attribute grammars, and approaches for describing semantics. It provides examples of BNF rules and derivations. It also describes how attribute grammars can be used to specify static semantics by adding semantic attributes and rules to a context-free grammar.
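For illustration, an attribute grammar rule for type consistency in an assignment might look like the following sketch (attribute names such as actual_type and expected_type follow a common textbook formulation and are not necessarily the ones used in these slides):

    Syntax rule:    <assign> -> <var> = <expr>
    Semantic rule:  <expr>.expected_type <- <var>.actual_type

    Syntax rule:    <expr> -> <var>[2] + <var>[3]
    Semantic rule:  <expr>.actual_type <- if <var>[2].actual_type = int and
                                             <var>[3].actual_type = int
                                          then int else real
    Predicate:      <expr>.actual_type == <expr>.expected_type

The semantic rules compute attribute values as the parse tree is decorated, and the predicate must hold for the sentence to be statically well typed.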
The document discusses language design issues, specifically describing syntax and formal methods for describing syntax. It covers context-free grammars, Backus-Naur Form (BNF), and how BNF can be used to formally describe the syntax of programming languages. Key concepts explained include non-terminals, terminals, production rules, derivations, parse trees, and how precedence can be handled to avoid ambiguity.
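As an illustration of how precedence can be built into a grammar (an example grammar, not necessarily the one used in that document), operators that should bind more tightly are generated lower in the rules:

    <expr>   -> <expr> + <term>   | <term>
    <term>   -> <term> * <factor> | <factor>
    <factor> -> ( <expr> )        | id

Because * can only be introduced below + in any derivation, a sentence such as id + id * id has a single parse tree in which the multiplication appears deeper, and therefore binds tighter, than the addition.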
This document discusses syntax analysis in language processing. It covers lexical analysis, which breaks programs into tokens, and parsing, which analyzes syntax based on context-free grammars. Lexical analysis uses state machines or tables to recognize patterns in code. Parsing can be done top-down with recursive descent or bottom-up with LR parsers that shift and reduce based on lookahead. Well-formed grammars and separation of lexical and syntax analysis enable efficient parsing.
This document describes syntax and semantics for programming languages. It begins by defining syntax as the form or structure of a language and semantics as the meaning. Formal methods for describing syntax include context-free grammars, Backus-Naur Form (BNF), and attribute grammars. Semantics can be described through operational semantics, which defines meaning through the state changes a construct causes during execution, or denotational semantics, which provides an abstract definition using recursive functions. The document provides examples of BNF rules, derivations, and attribute grammars.
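As a rough sketch of the denotational idea (a Python illustration written for this summary, not code from the document), the meaning of an expression can be given by a function that maps the expression and a program state to a value, defined by recursion on the expression's structure:

    # Illustrative sketch: a denotational-style meaning function for simple
    # expressions. An expression is a nested tuple such as
    # ("+", ("var", "x"), ("num", 2)); a state maps variable names to values.
    def meaning(expr, state):
        kind = expr[0]
        if kind == "num":            # M[[n]](s) = n
            return expr[1]
        if kind == "var":            # M[[x]](s) = s(x)
            return state[expr[1]]
        if kind == "+":              # M[[e1 + e2]](s) = M[[e1]](s) + M[[e2]](s)
            return meaning(expr[1], state) + meaning(expr[2], state)
        if kind == "*":
            return meaning(expr[1], state) * meaning(expr[2], state)
        raise ValueError(f"unknown expression form: {kind}")

    print(meaning(("+", ("var", "x"), ("num", 2)), {"x": 5}))   # prints 7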
This chapter discusses lexical and syntax analysis in language processing. It covers lexical analysis, which breaks input into tokens, and parsing, which analyzes syntax based on a context-free grammar. The chapter describes recursive descent and bottom-up parsing techniques. Recursive descent parsing implements grammar rules as subroutines while bottom-up parsing uses shift-reduce operations on a parse stack. LR parsers can handle the broadest class of grammars for programming languages.
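A minimal sketch of recursive descent in Python (illustrative only; the grammar is the layered expression grammar shown earlier, and the class and token names are assumptions). Each nonterminal becomes one routine; the left-recursive BNF rules are rewritten as loops, since a recursive-descent routine cannot call itself on its own leftmost symbol without recursing forever:

    # Illustrative sketch: recursive-descent parsing with one routine per nonterminal.
    class Parser:
        def __init__(self, tokens):
            self.tokens = tokens   # e.g. ["a", "+", "b", "*", "c"]
            self.pos = 0

        def peek(self):
            return self.tokens[self.pos] if self.pos < len(self.tokens) else None

        def eat(self, expected):
            if self.peek() != expected:
                raise SyntaxError(f"expected {expected!r}, found {self.peek()!r}")
            self.pos += 1

        def expr(self):            # <expr> -> <term> { + <term> }
            node = self.term()
            while self.peek() == "+":
                self.eat("+")
                node = ("+", node, self.term())
            return node

        def term(self):            # <term> -> <factor> { * <factor> }
            node = self.factor()
            while self.peek() == "*":
                self.eat("*")
                node = ("*", node, self.factor())
            return node

        def factor(self):          # <factor> -> ( <expr> ) | id
            if self.peek() == "(":
                self.eat("(")
                node = self.expr()
                self.eat(")")
                return node
            tok = self.peek()
            if tok is None or tok in ("+", "*", ")"):
                raise SyntaxError(f"unexpected token {tok!r}")
            self.pos += 1
            return ("id", tok)

    print(Parser(["a", "+", "b", "*", "c"]).expr())
    # ('+', ('id', 'a'), ('*', ('id', 'b'), ('id', 'c')))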
The document discusses syntax analysis and parsing. It covers context-free grammars, Backus-Naur Form (BNF), Extended BNF, and different parsing techniques like recursive descent parsing and LL parsing. It also discusses Scala's combinator parser, which uses parser combinators to parse input based on a grammar.
The document provides an introduction to the study of automata theory. It discusses five major topics covered in automata theory: finite state automata, context-free languages, Turing machines, undecidability and complexity. Finite state automata are defined as abstract computing devices composed of a finite number of states. They can be used to model hardware, software, algorithms and other processes. The document provides examples of finite state automata, including an on-off switch and a gas furnace.
This document describes syntax and semantics in programming languages. It defines syntax as the form of expressions, statements, and program units, while semantics refers to their meaning. It provides examples of syntax rules for Java statements in Backus-Naur Form (BNF) and describes how BNF is used to formally define a programming language's syntax through rules and derivations. It also discusses parse trees for representing the syntactic structure of statements generated from a grammar.
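For illustration, a BNF rule for a Java-style if statement (a generic textbook form, not necessarily the exact rule in those slides) could read:

    <if_stmt> -> if ( <logic_expr> ) <stmt>
               | if ( <logic_expr> ) <stmt> else <stmt>

A parse tree for a particular if statement would then have <if_stmt> at its root, with one child per symbol on the right-hand side of whichever rule was used in the derivation.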
Syntax defines the grammatical rules of a programming language. There are three levels of syntax: lexical, concrete, and abstract. Lexical syntax defines tokens like literals and identifiers. Concrete syntax defines the actual representation using tokens. Abstract syntax describes a program's information without implementation details. Backus-Naur Form (BNF) uses rewriting rules to specify a grammar. BNF grammars can be ambiguous. Extended BNF simplifies recursive rules. Syntax analysis transforms a program into an abstract syntax tree used for semantic analysis and code generation.
This document discusses various topics related to programming languages including:
- It defines different types of programming languages like procedural, functional, scripting, logic, and object-oriented.
- It explains why studying programming languages is important for expressing ideas, choosing appropriate languages, learning new languages, and better understanding implementation.
- It describes programming paradigms like imperative, object-oriented, functional, and logic programming.
- It covers concepts like syntax, semantics, pragmatics, grammars, and translation models in programming languages.
The document discusses lexical analysis, which is the first stage of syntax analysis for programming languages. It covers terminology, using finite automata and regular expressions to describe tokens, and how lexical analyzers work. Lexical analyzers extract lexemes from source code and return tokens to the parser. They are often implemented using finite state machines generated from regular grammar descriptions of the lexical patterns in a language.
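A minimal sketch of such a lexer in Python (token names and patterns are illustrative, not taken from the document), with one regular expression per token class combined into a single master pattern:

    # Illustrative sketch: a lexer driven by regular expressions for each token class.
    import re

    TOKEN_SPEC = [
        ("NUMBER", r"\d+"),
        ("IDENT",  r"[A-Za-z_]\w*"),
        ("OP",     r"[+\-*/=]"),
        ("LPAREN", r"\("),
        ("RPAREN", r"\)"),
        ("SKIP",   r"\s+"),
    ]
    MASTER = re.compile("|".join(f"(?P<{name}>{pattern})" for name, pattern in TOKEN_SPEC))

    def tokenize(source):
        """Yield (token_name, lexeme) pairs for the given source string."""
        for match in MASTER.finditer(source):
            if match.lastgroup != "SKIP":          # drop whitespace lexemes
                yield match.lastgroup, match.group()

    print(list(tokenize("sum = x1 + 47")))
    # [('IDENT', 'sum'), ('OP', '='), ('IDENT', 'x1'), ('OP', '+'), ('NUMBER', '47')]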
This document discusses automata theory and focuses on grammars, languages, and finite state machines. It defines key terminology like alphabets, strings, languages, and regular expressions. It explains Chomsky's hierarchy of formal languages from type-3 regular languages to type-0 recursively enumerable languages. The document also discusses finite state automata (FSA), deterministic finite automata (DFA), non-deterministic finite automata (NFA), context-free grammars, pushdown automata, and Turing machines. Examples of grammars, languages, and finite state machines are provided to illustrate these concepts.
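As a small illustration of the finite-automaton idea (a sketch, not code from the document), a deterministic finite automaton that accepts identifiers of the form letter (letter | digit)* can be simulated directly:

    # Illustrative sketch: simulating a two-state DFA for identifiers.
    def accepts_identifier(s):
        state = "start"
        for ch in s:
            if state == "start":
                state = "in_id" if ch.isalpha() else "reject"
            elif state == "in_id":
                state = "in_id" if (ch.isalpha() or ch.isdigit()) else "reject"
            else:                # dead (reject) state: the string can no longer be accepted
                return False
        return state == "in_id"

    print(accepts_identifier("rate2"), accepts_identifier("2rate"))   # True False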
This document discusses parsing techniques for programming languages. It begins by defining regular expressions and context-free grammars. It then describes LL(1) parsing which is a top-down parsing technique that uses prediction sets to parse in linear time. The document provides an example of an LL(1) grammar and parsing table. It also discusses problems that can arise with making grammars LL(1) and techniques for resolving them. The document concludes by introducing LR parsing as a bottom-up technique that uses states and shift-reduce actions to parse in linear time.
The document discusses lexical analysis of computer programming languages. It introduces lexical analysis as the process of reading a string of characters and categorizing them into tokens based on their roles. This involves constructing regular expressions to define the patterns for different token classes like keywords, identifiers, and numbers. The document then explains how to specify the lexical structure of a language by defining regular expressions for each token class and using them to build a lexical analyzer that takes a string as input and outputs the sequence of tokens.
This document provides an overview of regular expressions and the grep command in Unix/Linux. It defines what regular expressions are, describes common regex patterns like characters, character classes, anchors, repetition, and groups. It also explains the differences between the grep, egrep, and fgrep commands and provides examples of using grep with regular expressions to search files.
Regular languages can be described using regular expressions, which use operations like union, concatenation, and Kleene star. Regular expressions allow specifying languages that can be recognized by finite automata. Practical uses of regular expressions include text search tools like grep and generating lexical analyzers for compilers.
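A small Python sketch of the text-search use (the pattern and sample lines are illustrative):

    # Illustrative sketch: grep-like filtering of lines with a regular expression.
    import re

    lines = [
        "int count = 0;",
        "float rate;",
        "count = count + 1;",
    ]
    pattern = re.compile(r"\bcount\b")             # match the whole word "count"
    matching = [line for line in lines if pattern.search(line)]
    print(matching)   # ['int count = 0;', 'count = count + 1;']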
This document provides an overview of different data types in programming languages. It discusses primitive data types like integers, floating point numbers, Booleans, characters and strings. It also covers user-defined types like enumerated types, subrange types, arrays, associative arrays and records. For each type, it describes common design considerations, examples from different languages, and how they are typically implemented.
Using Language Oriented Programming to Execute Computations on the GPU (Skills Matter)
F# has a number of features that support language-oriented programming (LOP) – the ability to create an abstract description of a problem and then have this description executed in another environment. In this talk we'll look at the design of an F# library that uses LOP techniques to let a user execute matrix calculations on either the CPU or the GPU. We'll examine the features that F# provides to support this technique. We'll start by taking a look at union types and active patterns, and then we'll see how these are used by F#'s quotation system to give access to an abstract description of functions. Finally, we'll see how these descriptions of functions can be translated into computations the GPU understands and executed.
Lisp and Scheme are dialects of the Lisp programming language. Scheme is favored for teaching due to its simplicity. Key features of Lisp include S-expressions as the universal data type, functional programming style with first-class functions, and uniform representation of code and data. Functions are defined using DEFINE and special forms like IF and COND control program flow. Common list operations include functions like CAR, CDR, CONS, LIST, and REVERSE.
A compiler is a program that translates a program written in a source language into an equivalent program in a target language. It has two major parts - analysis and synthesis. In analysis, an intermediate representation is created by analyzing the lexical, syntactic and semantic properties of the source program. In synthesis, the target program is created from the intermediate representation through code generation and optimization. The major techniques used in compiler design like parsing and code generation can also be applied to other domains like natural language processing.
This document discusses different data types and type systems in programming languages. It covers:
1) Four approaches to defining data types - collection of values, internal structure, equivalence classes, and collections of operations. Most languages combine a structural and abstraction approach.
2) The purposes of types, including providing context and type checking to prevent meaningless operations. Strong and static typing are discussed.
3) Examples of type systems in different languages like Pascal, Java, C, and Python. Common type categories like discrete, scalar, composite, and pointers are also outlined.
4) Concepts related to type checking, including equivalence, compatibility, inference, casting, coercion, records, variants, and arrays, as well as the memory layout of these types.
This document discusses lexical and syntax analysis in programming language implementation. It covers the topics of lexical analysis, parsing problems, and different parsing approaches like recursive descent and bottom-up parsing. Lexical analysis breaks source code into tokens by identifying patterns in character strings. It is usually implemented with a finite state machine and table-driven approach. Syntax analysis uses a parser based on a context-free grammar to analyze the syntactic structure of a program.
The document discusses the key aspects of programming language grammar and compilers. It defines lexical and syntactic features, formal languages, grammars, terminals, non-terminals, productions, derivation, syntax trees, ambiguity in grammars, compilers, cross-compilers, p-code compilers, phases of compilation including analysis of source text and synthesis of target text, and code optimization techniques. The overall goal of a compiler is to translate a high-level language program into an equivalent machine language program.
Designing A Syntax Based Retrieval System 03 (Avelin Huo)
The document proposes a syntax-based text retrieval system to support grammatical querying of tagged corpora for language learners and teachers. It describes building an index of part-of-speech tagged n-grams from a corpus, with a filter to select discriminative index terms. A regular expression query is rewritten and its positions in the index are used to find candidate matching text units efficiently. An evaluation compares the proposed index to a complete index in terms of size and query performance.