This document provides an outline and overview of topics covered in a lecture on Python and Perl, including data abstraction using lists and tuples to build Huffman trees, and list comprehension. It discusses data encoding and decoding methods like Huffman coding, which uses variable-length bit sequences to represent characters based on their frequency. Examples are given of constructing and traversing Huffman trees to encode and decode messages. List comprehension is also introduced as a syntactic construct for building lists from specifications using predicates and output functions.
This document provides an outline and overview of topics covered in a lecture on Python and Perl including Huffman trees, list comprehension, and an introduction to object-oriented programming. Key points covered include encoding and decoding messages with Huffman trees, using list comprehension for building lists more concisely than for loops, and the basic concepts of classes and objects in OOP.
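The list-comprehension idea mentioned above (a predicate plus an output function) can be sketched in Python; the example values here are illustrative, not from the lecture:

```python
# Build the squares of the even numbers in 0..9.
# The comprehension combines an output function (n * n)
# with a predicate (n % 2 == 0) in one expression.
evens_squared = [n * n for n in range(10) if n % 2 == 0]
print(evens_squared)  # [0, 4, 16, 36, 64]

# The equivalent for loop is noticeably more verbose:
result = []
for n in range(10):
    if n % 2 == 0:
        result.append(n * n)
assert result == evens_squared
```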
The document discusses the basic language of functions. It defines a function as a procedure that assigns each input exactly one output. Functions can be represented by formulas using typical variables like f(x) = x^2 - 2x + 3, where x is the input and f(x) is the output. Functions have a domain, which is the set of all possible inputs, and a range, which is the set of all possible outputs. Functions can be depicted graphically or via tables listing inputs and outputs.
The document discusses different ways to define functions. It states that a function assigns each input exactly one output. It provides examples of defining functions verbally and using tables. For a procedure to be a function, it must produce a unique output for each input. The document also introduces the concepts of domain and range, explaining that the domain is the set of all valid inputs and the range is the set of all outputs. Functions can also be defined graphically by plotting the relationship between inputs and outputs.
The document discusses the basic language of functions. A function assigns each input exactly one output. Functions can be defined through written instructions, tables, or mathematical formulas. The domain is the set of all inputs, and the range is the set of all outputs. Functions are widely used in mathematics to model real-world relationships.
The document defines and explains functions. A function assigns each input exactly one output. Functions can be represented by formulas, tables, or graphs. The domain is the set of all possible inputs, and the range is the set of all possible outputs. Examples demonstrate evaluating functions by substituting inputs into formulas.
The document defines and explains key concepts about functions including:
- A function assigns each input exactly one output. Functions are often represented by formulas like f(x) and named variables like f, g, h.
- The domain is the set of all possible inputs, and the range is the set of all possible outputs.
- Functions can be represented graphically, through tables listing inputs and outputs, or through mathematical formulas.
- Examples demonstrate how to evaluate specific functions by inputting values according to the function definition.
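The evaluation step in the last bullet can be sketched with the sample formula f(x) = x^2 - 2x + 3 quoted earlier; evaluating a function is just substituting an input for x:

```python
# f(x) = x^2 - 2x + 3, the sample formula from the summaries above.
def f(x):
    return x ** 2 - 2 * x + 3

# Evaluating means substituting an input for x:
print(f(0))   # 3
print(f(2))   # 3
print(f(-1))  # 6
```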
From my November 3, 2011 talk at MNPHP. Regular expressions are a powerful tool available in nearly every programming language or platform, including PHP. I go over the history of POSIX vs. PCRE, examples in PHP, and optimizations on how to write faster expressions.
This document discusses strings and regular expressions in C#. It covers building and formatting strings, using the StringBuilder class, and examining regular expression patterns and examples. Regular expressions can be used to perform sophisticated string operations like identifying repeated words or extracting parts of a URI.
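The repeated-word operation mentioned above can be sketched with a backreference pattern. The document's examples are in C#; this sketch uses Python's re module, whose syntax matches for this simple case:

```python
import re

# Find an immediately repeated word ("the the") using a backreference:
# (\w+) captures a word, \1 requires the same word to follow.
text = "Paris in the the spring"
match = re.search(r"\b(\w+)\s+\1\b", text)
print(match.group(1))  # the
```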
Compiler Construction | Lecture 4 | Parsing - Eelco Visser
This lecture covers parsing and turning syntax definitions into parsers. It discusses context-free grammars and derivations. Grammars can be ambiguous, allowing multiple parse trees for a sentence. Grammar transformations like disambiguation, eliminating left recursion, and left factoring can address issues while preserving the language. Associativity and priority can be defined through transformations. The reading material covers parsing schemata, classical compiler textbooks, and papers on disambiguation filters and parsing algorithms.
Compiler Construction | Lecture 8 | Type Constraints - Eelco Visser
This lecture covers type checking with constraints. It introduces the NaBL2 meta-language for writing type specifications as constraint generators that map a program to constraints. The constraints are then solved to determine if a program is well-typed. NaBL2 supports defining name binding and type structures through scope graphs and constraints over names, types, and scopes. Examples show type checking patterns in NaBL2 including variables, functions, records, and name spaces.
The document discusses syntax-directed translation and intermediate code generation in compilers. It covers syntax-directed definitions and translation schemes for associating semantic rules with context-free grammar productions. Attribute grammars are defined where semantic rules only evaluate attributes without side effects. Syntax-directed translation builds an abstract syntax tree where each node represents a language construct. Intermediate representations like postfix notation, three-address code, and quadruples are discussed for implementing syntax trees and facilitating code optimization during compilation.
This document introduces regular expressions (regex) in PHP. It discusses the basic syntax of regex including patterns, indicators, quantifiers and logical operators. It provides examples of using the ereg and preg functions in PHP to perform matches and replacements using both POSIX and Perl-compatible regex. Common regex rules are also explained including matching characters, character classes, beginning/end of string, alternation and quantifiers.
The document discusses string manipulation and regular expressions. It provides explanations of regular expression syntax including brackets, quantifiers, predefined character ranges, and flags. It also summarizes PHP functions for regular expressions like ereg(), eregi(), ereg_replace(), split(), and sql_regcase(). Practical examples of using these functions are shown.
Declare Your Language: Syntax Definition - Eelco Visser
This document provides information about syntax definition and the lab organization for a compiler construction course. It includes links to papers on declarative syntax definition and the SDF3 syntax definition formalism. It also provides details about submitting lab assignments through GitHub, including instructions to fork a repository template and submit solutions as pull requests. Grades will be published on WebLab, and early feedback may be provided on pull requests and pushed changes.
The document discusses regular expressions (regex) in PHP. It begins with a brief introduction to regex, then provides examples of common PHP functions for using regex like preg_match(), preg_replace(), and preg_quote(). The document also shares a very long regex pattern that is intended to match all valid email addresses. It notes that accurately matching email addresses with a regex is challenging.
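The preg functions summarized above are PHP; as a rough analog in Python's re module (the simple pattern syntax is shared with PCRE, though the two engines differ in detail), with illustrative input values:

```python
import re

# Rough analogs of PHP's preg_match() and preg_replace().
s = "Order #1234 shipped on 2011-11-03"

# preg_match('/#(\d+)/', ...) analog: capture the order number.
m = re.search(r"#(\d+)", s)
print(m.group(1))  # 1234

# preg_replace('/\d{4}-\d{2}-\d{2}/', ...) analog: mask the date.
print(re.sub(r"\d{4}-\d{2}-\d{2}", "<date>", s))
# Order #1234 shipped on <date>
```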
The document discusses scanning (lexical analysis) in compiler construction. It covers the scanning process, regular expressions, and finite automata. The scanning process identifies tokens from source code by categorizing characters as reserved words, special symbols, or other tokens. Regular expressions are used to represent patterns of character strings and define the language of tokens. Finite automata are mathematical models for describing scanning algorithms using states, transitions, and acceptance.
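The finite-automaton idea above can be sketched as a hand-coded DFA for the classic token pattern letter (letter | digit)*, i.e. identifiers; the state names and the restriction to letters and digits are illustrative assumptions:

```python
# A tiny DFA for the token pattern letter (letter|digit)*:
# states "start" -> "ident" on a letter; "ident" loops on letters/digits;
# anything else moves to a rejecting state.
def is_identifier(s):
    state = "start"
    for ch in s:
        if state == "start":
            state = "ident" if ch.isalpha() else "reject"
        elif state == "ident":
            if not ch.isalnum():
                state = "reject"
        else:
            return False
    return state == "ident"

print(is_identifier("x1"))  # True
print(is_identifier("1x"))  # False
```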
The document summarizes key concepts about context-free grammars and parsing from the book "Compiler Construction: Principles and Practice" by Kenneth C. Louden. It covers notations like EBNF and syntax diagrams for representing grammars, properties of context-free languages, and provides grammar rules and diagrams for a sample TINY language as an example.
This document introduces some new concepts available in Fortran 90-2003 compared to Fortran 77, including:
1) Free format source code which removes the need for fixed column formatting.
2) New data types like assumed shape arrays which provide meta information about array dimensions, allowing whole array operations.
3) Modules which group functionality and avoid name clashes by hiding implementation details.
4) Interface blocks which enable operator and function overloading.
5) C-binding features which allow mixing Fortran and C code by controlling argument passing and data translation between the languages.
This document provides an overview of Fortran 77 programming concepts including input/output statements, format specifiers, and the OPEN statement. Key points covered include:
- READ and WRITE statements are used for formatted and list-directed input/output. FORMAT defines the format for READ/WRITE.
- Common format specifiers include I, F, E for integers, reals, and scientific notation. A is for characters.
- The OPEN statement makes a file available for input/output using READ and WRITE and assigns a unit number for file access.
This document contains information about trees and binary trees. It begins with definitions of trees, tree terminology like root, child, parent and traversals like preorder, inorder and postorder. It then discusses properties of binary trees like complete and full binary trees. Various representations of trees like linked and sequential representations are described. Finally, it provides examples of using trees to represent expressions and evaluating them using traversals.
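The three traversal orders named above can be sketched in a few lines; the tuple encoding of nodes and the (a + b) * c expression tree are illustrative choices, not from the document:

```python
# Nodes are (value, left, right) tuples; None is the empty tree.
def preorder(t):
    return [] if t is None else [t[0]] + preorder(t[1]) + preorder(t[2])

def inorder(t):
    return [] if t is None else inorder(t[1]) + [t[0]] + inorder(t[2])

def postorder(t):
    return [] if t is None else postorder(t[1]) + postorder(t[2]) + [t[0]]

# The expression tree for (a + b) * c:
tree = ("*", ("+", ("a", None, None), ("b", None, None)), ("c", None, None))
print(preorder(tree))   # ['*', '+', 'a', 'b', 'c']
print(inorder(tree))    # ['a', '+', 'b', '*', 'c']
print(postorder(tree))  # ['a', 'b', '+', 'c', '*']  (postfix of the expression)
```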
The document discusses various topics related to code generation for assignment statements including:
1) Names in the symbol table, reusing temporary names, addressing array elements, and accessing fields in records.
2) The translation scheme for generating three address code from assignment statements and expressions involving operators like addition and multiplication.
3) How to address elements in multi-dimensional arrays by computing the address based on bounds, dimensions, and offsets from the base address.
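Item 3 above corresponds to the standard row-major address formula for a 2-D array element A[i][j]; the base address, bounds, and element width below are hypothetical values for illustration:

```python
# Row-major address of A[i][j]:
#   base + ((i - low1) * n2 + (j - low2)) * width
# where low1/low2 are the lower bounds of each dimension,
# n2 is the number of columns, and width is the element size in bytes.
def element_address(base, i, j, low1, low2, n2, width):
    return base + ((i - low1) * n2 + (j - low2)) * width

# A hypothetical 10x20 array of 4-byte ints at base address 1000,
# both indices starting at 0:
print(element_address(1000, 2, 5, 0, 0, 20, 4))  # 1000 + (2*20 + 5)*4 = 1180
```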
1. A parse tree shows how the start symbol of a grammar derives a string in the language by using interior nodes labeled with nonterminals and children labeled with terminals or nonterminals according to the grammar's productions.
2. An ambiguous grammar is one where a single string can have more than one parse tree, leftmost derivation, rightmost derivation, or other derivation according to the grammar.
3. Operator precedence and associativity determine the order of operations when multiple operators are present in an expression. For example, multiplication generally has higher precedence than addition, and most operators associate left-to-right.
This document discusses data structures and arrays. It covers one-dimensional arrays, how they are initialized and indexed, and their relationship to pointers. Pointers can be used to reference array elements, and arrays can be passed as arguments to functions by passing their address. The document also provides examples of using arrays, including an example of bubble sort to sort an array and examples of dynamically allocating memory for arrays using calloc() and malloc().
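The bubble sort mentioned above can be sketched briefly. The document's examples are in C; this is a Python sketch of the same algorithm, with an illustrative input:

```python
# Bubble sort: repeatedly swap adjacent out-of-order elements.
# After pass i, the largest i+1 elements are in their final places,
# so the inner scan can shrink each pass.
def bubble_sort(a):
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```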
ANTLR v3 is an improved version of ANTLR that provides more robust grammars, error recovery, attributes, tree construction and code generation capabilities compared to version 2. Some key features include single element EBNF grammar syntax, support for parameters and return values in rules, dynamic scoping of attributes, automatic and rewrite-based tree construction, tree grammars, and internationalization through string templates. The runtime is also better organized and separated into modules for parsing, trees, and debugging.
The document discusses parsing and context-free grammars. It defines parsing as constructing a parse tree from a stream of tokens using the rules of a context-free grammar. It provides examples of parse trees being built from both top-down and bottom-up parsing approaches. Key aspects of context-free grammars like non-terminals, terminals, production rules, and the start symbol are also summarized.
The document discusses the structure and process of a compiler. It has two major phases - the front-end and back-end. The front-end performs analysis of the source code by recognizing legal/illegal programs, understanding semantics, and producing an intermediate representation. The back-end translates the intermediate representation into target code. The general structure includes lexical analysis, syntax analysis, semantic analysis, code generation and optimization phases.
Huffman coding is a lossless data compression algorithm that represents characters with variable-length binary codes based on their frequency of occurrence, giving more common characters shorter bit sequences. It constructs a Huffman tree that assigns codes so the encoded output has minimum expected length, allowing more efficient data storage and transmission. The document provides an overview of Huffman coding and trees, including an example showing how character frequencies are used to assign bit codes and compress a sample string from 56 bits to 13 bits.
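The tree construction described above can be sketched in Python with the standard heapq module: repeatedly merge the two lowest-frequency nodes, then read codes off root-to-leaf paths. The frequency table below is illustrative, not the document's example:

```python
import heapq

def huffman_codes(freq):
    # Each heap entry: (frequency, tiebreaker, tree), where a tree is
    # either a character (leaf) or a (left, right) pair (internal node).
    # The integer tiebreaker keeps comparisons away from the trees.
    heap = [(f, i, ch) for i, (ch, f) in enumerate(sorted(freq.items()))]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, tie, (t1, t2)))
        tie += 1
    codes = {}
    def walk(tree, path):
        if isinstance(tree, tuple):        # internal node: recurse
            walk(tree[0], path + "0")
            walk(tree[1], path + "1")
        else:                              # leaf: record its code
            codes[tree] = path or "0"
    walk(heap[0][2], "")
    return codes

codes = huffman_codes({"a": 5, "b": 2, "c": 1, "d": 1})
# The most frequent character gets the shortest code:
print(codes["a"], codes["c"])  # 1 010
```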
The document discusses Huffman coding, a data compression technique that encodes symbols with variable-length codes based on their frequency of occurrence, so more common symbols get shorter codes. It details how a Huffman tree is constructed and how codes are assigned, with the most frequent characters receiving the shortest binary codes to achieve compression. Examples demonstrate how characters are encoded using a Huffman tree and how the storage size is calculated from the path lengths and frequencies of the characters.
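The storage calculation described above (code length times frequency, summed over characters) can be sketched directly; the code table and frequencies below are hypothetical values chosen for illustration:

```python
# Hypothetical prefix-free code table and character frequencies:
codes = {"a": "1", "b": "00", "c": "010", "d": "011"}
freq = {"a": 5, "b": 2, "c": 1, "d": 1}

# Encoded size = sum of (code length x frequency) over characters.
bits = sum(len(codes[ch]) * f for ch, f in freq.items())
print(bits)  # 5*1 + 2*2 + 1*3 + 1*3 = 15

# A fixed-length encoding of 4 distinct symbols needs 2 bits each:
fixed = 2 * sum(freq.values())
print(fixed)  # 18
```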
This document provides a summary of key elements of the C programming language including program structure, data types, operators, flow control statements, standard libraries, and common functions. It covers topics such as functions, variables, comments, preprocessor directives, constants, pointers, arrays, structures, I/O, math functions, and limits of integer and floating point types. The summary is presented in a reference card format organized by sections.
CS6660 Compiler Design May/June 2016 Answer Key - appasami
The document describes the various phases of a compiler:
1. Lexical analysis breaks the source code into tokens.
2. Syntax analysis generates a parse tree from the tokens.
3. Semantic analysis checks for semantic correctness using the parse tree and symbol table.
4. Intermediate code generation produces machine-independent code.
5. Code optimization improves the intermediate code.
6. Code generation translates the optimized code into target machine code.
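The first phase above (lexical analysis) can be sketched with a toy tokenizer; the token set here is an illustrative assumption, not from the document:

```python
import re

# A toy lexical analyzer: break source text into (kind, text) tokens.
# Recognizes integers, identifiers, and a few single-character operators.
TOKEN_RE = re.compile(r"\s*(?:(\d+)|([A-Za-z_]\w*)|([+\-*/=()]))")

def tokenize(src):
    tokens, pos = [], 0
    src = src.rstrip()
    while pos < len(src):
        m = TOKEN_RE.match(src, pos)
        if not m:
            raise SyntaxError("bad character at position %d" % pos)
        num, ident, op = m.groups()
        if num:
            tokens.append(("NUM", num))
        elif ident:
            tokens.append(("ID", ident))
        else:
            tokens.append(("OP", op))
        pos = m.end()
    return tokens

print(tokenize("x = y + 42"))
# [('ID', 'x'), ('OP', '='), ('ID', 'y'), ('OP', '+'), ('NUM', '42')]
```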
Intermediate representations span the gap between source and target languages by being closer to the target language while remaining mostly machine independent. This allows optimizations to be performed in a machine-independent way.
There are two main types of intermediate languages: high-level representations like syntax trees that are close to the source language, and low-level representations like three-address code that are closer to the target machine. Syntax trees separate parsing from subsequent processing while three-address code translates expressions into single assignment form using temporaries.
Semantic analysis enforces static semantic rules by constructing a syntax tree, evaluating attributes, and checking properties that can be analyzed at compile time. Attribute grammars annotate syntax rules with semantic attributes to define meaning, while dynamic semantic rules must be checked at run time.
The document discusses intermediate code in compilers. It defines intermediate code as the interface between a compiler's front end and back end. Using an intermediate representation facilitates retargeting a compiler to different machines and applying machine-independent optimizations. The document then describes different types of intermediate code like triples, quadruples and SSA form. It provides details on three-address code including quadruples, triples and indirect triples. It also discusses addressing of array elements and provides an example of translating a C program to intermediate code.
This document provides an overview of the C programming language, covering topics such as basic syntax, data types, operators, expressions, statements, functions, pointers, strings, input/output operations, and some example programs. It discusses the structure of a basic C program and shows how to include header files. Various C programming concepts are defined, such as variables, arrays, comments, and control flow. Example code is provided to demonstrate printing patterns and multiple trees.
Compiler chapter six .ppt course material - gadisaAdamu
The document discusses intermediate code generation in compilers. It explains that intermediate code serves as a bridge between the high-level source code and final machine code. It presents different types of intermediate representations like syntax trees and three-address code. Syntax trees abstract away details from parse trees while three-address code translates expressions into a linear representation using temporary variables. The document also provides examples and explanations of different data structures used to represent three-address code like quadruples and triples.
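The three-address-code translation described above, i.e. flattening an expression into single assignments through temporary variables, can be sketched in a few lines; the nested-tuple AST encoding and the t1, t2, ... naming are illustrative assumptions:

```python
import itertools

def gen_tac(node, code, temps):
    """Emit three-address instructions for a nested-tuple AST node
    (op, left, right); return the name holding the node's value."""
    if isinstance(node, str):            # variable or constant leaf
        return node
    op, left, right = node
    l = gen_tac(left, code, temps)
    r = gen_tac(right, code, temps)
    t = "t%d" % next(temps)              # fresh temporary
    code.append("%s = %s %s %s" % (t, l, op, r))
    return t

code = []
result = gen_tac(("+", "a", ("*", "b", "c")), code, itertools.count(1))
print(code)    # ['t1 = b * c', 't2 = a + t1']
print(result)  # t2
```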
Is it easier to add functional programming features to a query language, or to add query capabilities to a functional language? In Morel, we have done the latter.
Functional and query languages have much in common, and yet much to learn from each other. Functional languages have a rich type system that includes polymorphism and functions-as-values and Turing-complete expressiveness; query languages have optimization techniques that can make programs several orders of magnitude faster, and runtimes that can use thousands of nodes to execute queries over terabytes of data.
Morel is an implementation of Standard ML on the JVM, with language extensions to allow relational expressions. Its compiler can translate programs to relational algebra and, via Apache Calcite’s query optimizer, run those programs on relational backends.
In this talk, we describe the principles that drove Morel’s design, the problems that we had to solve in order to implement a hybrid functional/relational language, and how Morel can be applied to implement data-intensive systems.
(A talk given by Julian Hyde at Strange Loop 2021, St. Louis, MO, on October 1st, 2021.)
The document discusses different types of header files, character functions, string functions, math functions, and random functions available in C/C++ libraries. Header files provide function prototypes, library definitions, and standard input/output streams. Common header files include stdio.h, string.h, math.h, and stdlib.h. Character functions check properties like whether a character is a letter, digit, lowercase, or uppercase. String functions allow concatenation, comparison, copying of strings. Math functions perform tasks like absolute value and remainder calculations. Random number generation functions generate random numbers within a range.
Morel, a data-parallel programming languageJulian Hyde
This document discusses Morel, a data-parallel programming language that is an extension of Standard ML with relational operators. Morel aims to provide the expressiveness of a functional programming language, the power and conciseness of SQL, and efficient execution on different hardware. It is implemented on top of Apache Calcite's relational algebra framework. The talk describes Morel's evolution and how it is pushing Calcite's capabilities with graph and recursive queries. Standard ML concepts like functions, recursion, and higher-order functions are extended in Morel with relational operators like "from" to enable data-parallel programming over immutable datasets. Functions can also be treated as values in Morel.
This document discusses input and output functions in C, specifically printf(). It provides examples of using printf() to format output with integers, floating point numbers, strings, characters, field widths, precisions, and flags. Key functions covered include printf(), scanf(), gets(), puts(), getchar(), putchar(). Conversion specifiers for integers (%d, %i, %u, %o, %x, %X) floating point numbers (%f, %e, %g, %G), characters (%c) and strings (%s) are explained.
For this assignment, download the A6 code pack. This zip fil.docxalfred4lewis58146
For this assignment, download the
A6 code pack
. This zip file contains several files:
main.cpp
- the predetermined main.cpp. This file shows the usage and functionality that is expected of your program. You are not allowed to edit this file. You will not be submitting this file with your assignment.
CMakeLists.txt
- the preset CMake file to build with your functions files.
input/greeneggsandham.txt
- the contents of Green Eggs and Ham in text format.
input/aliceChapter1.txt
- the first chapter of Alice in Wonderland in text format.
output/greeneggsandham.out
- the expected output when running your program against the
greeneggsandham.txt
file
output/aliceChapter1.out
- the expected output when running your program against the
aliceChapter1.txt
file
Your task is to provide the implementations for all of the referenced functions. You will need to create two files:
functions.h
and
functions.cpp
to make the program work as intended.
You will want to make your program as general as possible by not having any assumptions about the data hardcoded in. Two public input files have been supplied with the starter pack. We will run your program against a third private input file.
Function Requirements
The requirements of each function are given below. The input, output, and task of each function is described. The functions are:
promptUserForFilename()
openFile()
readWordsFromFile()
removePunctuation()
capitalizeWords()
filterUniqueWords()
alphabetizeWords()
countUniqueWords()
printWordsAndCounts()
countLetters()
printLetterCounts()
printMaxMinWord()
printMaxMinLetter()
promptUserForFilename()
Input
: None
Output
: A string
Task
: Prompt the user to enter a filename.
openFile()
Input
: (1) The input file stream (2) The string filename to open
Output
: True if the file successfully opened, False if the file could not be opened
Task
: Open the input file stream for the corresponding filename. Check that the file opened correctly. The string filename will remain unchanged.
readWordsFromFile()
Input
: The input file stream
Output
: A vector of strings
Task
: Read all of the words that are in the filestream and return a list of all the words in the order present in the file.
removePunctuation()
Input
: (1) A vector of strings (2) A string of all the punctuation characters to remove
Output
: None
Task
: For each word in the vector, remove all occurrences of all the punctuation characters denoted by the punctuation string. When complete, the input vector will now hold all the words with punctuation removed. The punctuation string will remain unchanged.
capitalizeWords()
Input
: A vector of strings
Output
: None
Task
: For each word in the vector, convert each character to its upper case equivalent. When complete, the input vector will now hold all the words capitalized.
filterUniqueWords()
Input
: A vector of strings
Output
: A vector of strings
Task
: The function will return only th.
The document describes a simple "Hello World" C program that prints three strings to the screen. It contains the main() function which uses printf statements to output the text. Comments provide explanations of key elements like header files, escape characters, and functions.
This document discusses data structures and binary trees. It provides examples of inserting nodes into a binary tree, discusses the cost of searching a binary tree, and describes different ways to traverse binary trees including preorder, inorder, and postorder traversal. It also explains how function calls and the call stack work during recursive calls when traversing a binary tree.
The document discusses intermediate code generation in compiler construction. It covers several intermediate representations including postfix notation, three-address code, and quadruples. It also discusses generating three-address code through syntax-directed translation and the use of symbol tables to handle name resolution and scoping.
This document provides an overview of the C programming language. It discusses that C was developed in 1972 at Bell Labs and is a popular systems and applications programming language. The document then covers various C language concepts like data types, variables, operators, input/output functions, and provides examples of basic C programs and code snippets.
The document discusses string handling and manipulation functions in C. It covers character handling functions that test properties of characters, string conversion functions that convert strings to numeric values, standard input/output functions for character and string I/O, string manipulation functions for copying and appending strings, comparison functions for comparing strings, and search functions for locating characters and substrings within strings. Examples are provided to demonstrate the use of several functions.
This document provides an overview of arrays, references, multi-dimensional arrays, hashes, and sorting in Perl. It discusses array references, constructing and iterating over multi-dimensional arrays, default and customized sorting of arrays, and how to construct, manipulate, and reference hashes in Perl code. Examples are provided to demonstrate these key concepts.
This document provides an overview of a lecture on introduction to Perl programming. It discusses installing and running Perl programs, basic data types like numbers and strings, control structures, and operators. Perl can be used for tasks like web scripting, database programming, and rapid prototyping. It has advantages like being free, portable, and object-oriented, but also drawbacks such as sometimes being difficult to read. Resources for learning more about Perl are provided.
This document discusses using Python for Android application development through the Scripting Layer for Android (SL4A) project. It describes how SL4A allows the use of several scripting languages, including Python, for Android development instead of just Java. It then provides steps for installing SL4A on an Android device and downloading and installing the Python interpreter for Android development.
This document discusses image processing techniques in Python including converting RGB images to grayscale and binary images. It describes calculating luminosity from RGB pixels to grayscale values. It also discusses edge detection using gradients by computing the directional change in pixel intensity between neighboring pixels to find edges, which are defined as having a gradient magnitude above a threshold and direction within a specified range. The document provides code examples for calculating gradient magnitude and direction using pixel differences in the x and y directions.
The document discusses scopes and arrays in Perl. It defines three scopes for identifiers in Perl - global, lexical, and dynamic. Global identifiers can be accessed from anywhere, lexical identifiers exist only in the block they are defined, and dynamic identifiers also exist in the called subroutines. The document also discusses array creation using lists, non-existing indices, the qw operator, and range operator. It demonstrates array manipulation functions like push, pop, shift, and unshift.
This document provides an overview of iterators, generators, and the Python Imaging Library (PIL) basics in Python. It discusses how iterators allow iteration over objects using the iterator protocol with __iter__ and next() methods. Generators are lazy functions that yield values and can be created via generator factories or comprehensions. The PIL allows loading, saving, and modifying image files in Python through functions like Image.new(), getpixel(), putpixel(), and ImageDraw for drawing primitives.
This document provides an outline and overview of topics related to Pygame, a Python library for game development. It discusses collision detection using Rect objects, loading images and sounds, and provides an example of a "Chimp game". The chimp game demonstrates creating a graphics window, loading assets, rendering text, handling input events, updating and drawing sprites. It includes code snippets for creating sprite classes like Fist and Chimp and initializing the game.
This document discusses building a simple game in Python using Pygame where circular sprites called "creeps" move randomly around the screen. It outlines goals of having configurable creeps that bounce off walls and exhibit semi-random behavior. Key aspects covered include using the Pygame sprite and vector capabilities, implementing a creep class with movement logic in its update method, rotating and drawing the creep sprites, computing displacement of creeps over time, and detecting wall collisions to bounce creeps off boundaries. The main game loop calls the creep update method each frame to animate the creep movements.
This document outlines key concepts in object-oriented programming including constructors, polymorphism, encapsulation, inheritance, and an introduction to Pygame. It discusses how constructors are called to initialize objects, how polymorphism allows the same method to work on different types, and how encapsulation hides unnecessary details from users. Inheritance and multiple inheritance are explained with examples of subclasses inheriting and overriding methods. Finally, installing and initializing Pygame is briefly covered along with an overview of the game loop structure.
This document provides an overview of dictionaries in Python. It discusses how dictionaries are defined using curly braces {}, how keys can be any immutable object like strings or numbers while values can be any object, and how to access values using keys. It also covers adding and deleting key-value pairs, checking for keys, shallow and deep copying, and creating dictionaries from keys or sequences.
This document provides an overview of key Python concepts including types, sequences, lists, functions, and parameters. It discusses how Python is both strongly and dynamically typed. The main built-in sequences - lists, tuples, strings, and ranges - are described. Lists are covered in detail including construction, operations like indexing, slicing, and built-in methods. Finally, the document outlines the different types of function parameters - positional, keyword, and combining the two - and how to handle parameter collections using the * operator.
This document provides a summary of a lecture on Python and Perl. It recaps previous topics, outlines goals for the next few weeks including creating simple games with PyGame. It then outlines the topics to be covered, including Python built-in objects like numbers, strings, lists, tuples, dictionaries and files. It discusses numeric operations and formats. It also covers modules, user input, and string formatting.
This document provides an overview and outline of a lecture on Python & Perl. It discusses editing Python code, running Python programs, sample programs, Python code execution, functional abstraction using Newton's square root approximation, and tuples. Key points covered include how Python uses indentation instead of curly braces, running Python code from the command line on Windows, Linux and Mac, and how tuples are immutable sequences defined with parentheses.
This document provides an overview and outline of a course on Python and Perl programming. The course will cover Python for 10 weeks and Perl for 5 weeks, with exams on each language. Students will complete weekly coding assignments and a final project. The document discusses installing Python on various operating systems, key Python concepts like variables, lists, strings and tuples, and recommends texts for further reading.
This document provides an overview and outline of lecture 11 of CS 3430: Introduction to Python and Perl at Utah State University. It covers basic network programming concepts like sockets, clients, servers, TCP, UDP and includes code examples for minimal servers and clients. It also discusses handling multiple connections, accessing URLs, opening remote files, getting HTML source, and installing and checking availability of the Python Imaging Library (PIL).
4. Background
● In information theory, coding refers to methods that represent data in terms of bit sequences (sequences of 0's and 1's)
● Encoding is a method of taking data structures and mapping them to bit sequences
● Decoding is a method of taking bit sequences and outputting the corresponding data structure
5. Example: Standard ASCII & Unicode
● Standard ASCII encodes each character as a 7-bit sequence
● Using 7 bits allows us to encode 2^7 = 128 possible characters
● Unicode has three standards: UTF-8 (uses 8-bit sequences), UTF-16 (uses 16-bit sequences), and UTF-32 (uses 32-bit sequences)
● UTF stands for Unicode Transformation Format
● Python 2.X's Unicode support: “Python represents Unicode strings as either 16- or 32-bit integers, depending on how the Python interpreter was compiled.”
6. Two Types of Codes
● There are two types of codes: fixed-length and variable-length
● Fixed-length codes (e.g., ASCII, Unicode) encode every character in terms of the same number of bits
● Variable-length codes (e.g., Morse, Huffman) encode characters in terms of variable numbers of bits: more frequent symbols are encoded with fewer bits
7. Example: Fixed-Length Code
● A – 000    B – 001
● C – 010    D – 011
● E – 100    F – 101
● G – 110    H – 111
● AADF = 000000011101
● The encoding of AADF is 12 bits
8. Example: Variable-Length Code
● A – 0      B – 100
● C – 1010   D – 1011
● E – 1100   F – 1101
● G – 1110   H – 1111
● AADF = 0010111101
● The encoding of AADF is 10 bits
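The fixed- and variable-length tables above can be exercised with a small dictionary-based encoder (a minimal sketch in Python 3 syntax; the helper name encode is my own, the code tables are from the slides):

```python
# Code tables from the two slides above
fixed = {'A': '000', 'B': '001', 'C': '010', 'D': '011',
         'E': '100', 'F': '101', 'G': '110', 'H': '111'}
variable = {'A': '0', 'B': '100', 'C': '1010', 'D': '1011',
            'E': '1100', 'F': '1101', 'G': '1110', 'H': '1111'}

def encode(message, codes):
    """Concatenate the code of each character in the message."""
    return ''.join(codes[ch] for ch in message)

print(encode('AADF', fixed))     # '000000011101' (12 bits)
print(encode('AADF', variable))  # '0010111101'   (10 bits)
```

The variable-length table saves 2 bits on AADF because the frequent symbol A gets the shortest code.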
9. End of Character in Variable-Length Code
● One of the challenges in variable-length codes is knowing where one character ends and the next begins
● Morse code uses a special character (a separator code)
● Prefix coding is another solution: no character's code is a prefix of another character's code
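The prefix property described above can be checked mechanically, since it amounts to: no codeword is a prefix of another. A minimal sketch (the function name is my own):

```python
def is_prefix_free(codes):
    """Return True if no code is a proper prefix of another code."""
    words = list(codes)
    for i, a in enumerate(words):
        for b in words[i + 1:]:
            # startswith in either direction catches both orderings
            if a.startswith(b) or b.startswith(a):
                return False
    return True

print(is_prefix_free(['0', '100', '1010', '1011']))  # True
print(is_prefix_free(['0', '01']))                   # False: '0' begins '01'
```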
10. Huffman Code
● Huffman code is a variable-length code that takes advantage of the relative frequencies of characters
● Huffman code is named after David Huffman, the researcher who discovered it
● A Huffman code is represented as a binary tree whose leaves are individual characters with their frequencies
● Each non-leaf node holds the set of characters in all of its subnodes and the sum of their relative frequencies
12. Using Huffman Tree to Encode/Decode Characters
● For the tree on the previous slide, these are the encodings:
A is encoded as 0
B is encoded as 100
C is encoded as 1010
D is encoded as 1011
E is encoded as 1100
F is encoded as 1101
G is encoded as 1110
H is encoded as 1111
15. Constructing Leaves
### a leaf is a tuple whose first element is a symbol
### represented as a string and whose second element is
### the symbol's frequency
def make_leaf(symbol, freq):
    return (symbol, freq)

def is_leaf(x):
    return (isinstance(x, tuple) and
            len(x) == 2 and
            isinstance(x[0], str) and
            isinstance(x[1], int))
16. Constructing Leaves
### return the character (symbol) of the leaf
def get_leaf_symbol(leaf):
    return leaf[0]

### return the frequency of the leaf's character
def get_leaf_freq(leaf):
    return leaf[1]
17. Constructing Huffman Trees
### A Non-Leaf node (internal node) is represented as
### a list of four elements:
### 1. left branch
### 2. right branch
### 3. list of symbols
### 4. combined frequency of symbols
[left_branch, right_branch, symbols, frequency]
19. Accessing Huffman Trees
def get_symbols(huff_tree):
    if is_leaf(huff_tree):
        return [get_leaf_symbol(huff_tree)]
    else:
        return huff_tree[2]

def get_freq(huff_tree):
    if is_leaf(huff_tree):
        return get_leaf_freq(huff_tree)
    else:
        return huff_tree[3]
20. Constructing Huffman Trees
### A Huffman tree is constructed from its left branch, which can
### be a Huffman tree or a leaf, and its right branch, another
### Huffman tree or a leaf. The new tree has the combined symbols
### of the left and right branches and the sum of their frequencies
def make_huffman_tree(left_branch, right_branch):
    return [left_branch,
            right_branch,
            get_symbols(left_branch) + get_symbols(right_branch),
            get_freq(left_branch) + get_freq(right_branch)]
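The deck jumps from the constructors to encoding, leaving out the greedy algorithm that actually builds a Huffman tree: repeatedly merge the two lowest-frequency nodes until one tree remains. A self-contained sketch with the slides' representation (build_huffman_tree and the example frequencies for C and D are my own, not from the lecture):

```python
# Leaf: (symbol, freq); internal node: [left, right, symbols, freq]
def make_leaf(symbol, freq):
    return (symbol, freq)

def is_leaf(x):
    return isinstance(x, tuple)

def get_symbols(t):
    return [t[0]] if is_leaf(t) else t[2]

def get_freq(t):
    return t[1] if is_leaf(t) else t[3]

def make_huffman_tree(left, right):
    return [left, right,
            get_symbols(left) + get_symbols(right),
            get_freq(left) + get_freq(right)]

def build_huffman_tree(freqs):
    """Greedy construction: repeatedly merge the two
    lowest-frequency nodes until a single tree remains."""
    nodes = [make_leaf(s, f) for s, f in freqs]
    while len(nodes) > 1:
        nodes.sort(key=get_freq)
        left, right = nodes[0], nodes[1]
        nodes = nodes[2:] + [make_huffman_tree(left, right)]
    return nodes[0]

tree = build_huffman_tree([('A', 8), ('B', 3), ('C', 1), ('D', 1)])
print(get_freq(tree))             # 13
print(sorted(get_symbols(tree)))  # ['A', 'B', 'C', 'D']
```

Sorting the whole list each round is O(n² log n) overall; a heap would be faster, but the sort keeps the sketch short.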
27. Symbol Encoding
1. Given a symbol s and a Huffman tree ht, set current_node to the root node and encoding to an empty list (you can also check that s is in the root node's symbol list and, if not, signal an error)
2. If current_node is a leaf, return encoding
3. Check if s is in current_node's left branch or right branch
4. If in the left, add 0 to encoding, set current_node to the root of the left branch, and go to step 2
5. If in the right, add 1 to encoding, set current_node to the root of the right branch, and go to step 2
6. If in neither branch, signal an error
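The six steps above can be sketched as an iterative walk (a sketch, not the lecture's code; the tiny example tree and the function names are my own):

```python
# Leaf: (symbol, freq); internal node: [left, right, symbols, freq]
def is_leaf(node):
    return isinstance(node, tuple)

def symbols_of(node):
    return [node[0]] if is_leaf(node) else node[2]

def encode_symbol(s, huff_tree):
    node, encoding = huff_tree, []
    while not is_leaf(node):
        if s in symbols_of(node[0]):    # left branch: emit 0
            encoding.append(0)
            node = node[0]
        elif s in symbols_of(node[1]):  # right branch: emit 1
            encoding.append(1)
            node = node[1]
        else:
            raise ValueError('symbol not in tree')
    return encoding

# Tiny tree: A vs {B, C}
tree = [('A', 8), [('B', 3), ('C', 1), ['B', 'C'], 4], ['A', 'B', 'C'], 12]
print(encode_symbol('A', tree))  # [0]
print(encode_symbol('B', tree))  # [1, 0]
print(encode_symbol('C', tree))  # [1, 1]
```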
28. Example
● Encode B with the sample Huffman tree
● Set current_node to the root node
● B is in current_node's right branch, so add 1 to encoding and recurse into the right branch (current_node is set to the root of the right branch, {B, C, D, E, F, G, H}: 9)
● B is in current_node's left branch, so add 0 to encoding and recurse into the left branch (current_node is {B, C, D}: 5)
● B is in current_node's left branch, so add 0 to encoding and recurse into the left branch (current_node is B: 3)
● current_node is a leaf, so return 100 (the value of encoding)
29. Message Encoding
● Given a sequence of symbols message and a Huffman tree ht
● Concatenate the encoding of each symbol in message from left to right
● Return the concatenation of encodings
30. Example
● Encode ABBA with the sample Huffman tree
● Encoding for A is 0
● Encoding for B is 100
● Encoding for B is 100
● Encoding for A is 0
● Concatenation of encodings is 01001000
31. Message Decoding
1. Given a sequence of bits message and a Huffman tree ht, set current_node to the root and decoding to an empty list
2. If current_node is a leaf, add its symbol to decoding and set current_node to ht's root
3. If current_node is ht's root and message has no more bits, return decoding
4. If no more bits remain in message and current_node is not a leaf, signal an error
5. If message's current bit is 0, set current_node to its left child, consume the bit, and go to step 2
6. If message's current bit is 1, set current_node to its right child, consume the bit, and go to step 2
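The decoding steps above can be sketched the same way (a sketch; the tiny example tree and the function name are my own):

```python
# Leaf: (symbol, freq); internal node: [left, right, symbols, freq]
def is_leaf(node):
    return isinstance(node, tuple)

def decode_message(bits, huff_tree):
    node, decoding = huff_tree, []
    for bit in bits:
        node = node[0] if bit == 0 else node[1]  # 0: left, 1: right
        if is_leaf(node):
            decoding.append(node[0])  # emit symbol, restart at root
            node = huff_tree
    if node is not huff_tree:
        raise ValueError('message ended in the middle of a symbol')
    return decoding

# Tiny tree: A vs {B, C}
tree = [('A', 8), [('B', 3), ('C', 1), ['B', 'C'], 4], ['A', 'B', 'C'], 12]
print(decode_message([0, 1, 0], tree))  # ['A', 'B']
print(decode_message([1, 1], tree))    # ['C']
```

The prefix property is what makes this work: a leaf can be emitted as soon as it is reached, with no separator bits.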
32. Example
● Decode 0100 with the sample Huffman tree
● Read 0, go left to A: 8, add A to decoding, and reset current_node to the root
● Read 1, go right to {B, C, D, E, F, G, H}: 9
● Read 0, go left to {B, C, D}: 5
● Read 0, go left to B: 3
● Add B to decoding and reset current_node to the root
● No more bits and current_node is the root, so return AB
34. List Comprehension
● List comprehension is a syntactic construct in some programming languages for building lists from list specifications
● List comprehension derives its conceptual roots from the set-former (set-builder) notation in mathematics
[Y for X in LIST]
● List comprehension is available in other programming languages such as Common Lisp, Haskell, and OCaml
35. Set-Former Notation Example
{4x | x ∈ ℕ, x² < 100}
● 4x is the output function
● x is the variable
● ℕ is the input set
● x² < 100 is the predicate
36. Set-Former Notation Examples
● {x ∈ {a, b}* | |x| ≤ 3} is the set of all strings over {a, b} whose length is 0, 1, 2, or 3.
● {aⁿbⁿ | n ≥ 1} is the set of non-empty strings over {a, b} such that a's precede b's and the number of a's is equal to the number of b's.
● {xy | x ∈ {a, b}, y ∈ {aa, cc}} is the set of strings where a or b is followed by aa or cc.
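For illustration, the last two set-formers translate almost verbatim into list comprehensions (Python 3 syntax; the cutoff n ≤ 3 is my own, added because the second set is infinite):

```python
# {a^n b^n | n >= 1}, truncated to n <= 3 for illustration
anbn = ['a' * n + 'b' * n for n in range(1, 4)]
print(anbn)   # ['ab', 'aabb', 'aaabbb']

# {xy | x in {a, b}, y in {aa, cc}}
pairs = [x + y for x in ('a', 'b') for y in ('aa', 'cc')]
print(pairs)  # ['aaa', 'acc', 'baa', 'bcc']
```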
37. For-Loop Implementation
### building the list of the set-former example with a for-loop
>>> rslt = []
>>> for x in xrange(201):
        if x ** 2 < 100:
            rslt.append(4 * x)
>>> rslt
[0, 4, 8, 12, 16, 20, 24, 28, 32, 36]
38. List Comprehension Equivalent
### building the same list with list comprehension
>>> s = [ 4 * x for x in xrange(201) if x ** 2 < 100]
>>> s
[0, 4, 8, 12, 16, 20, 24, 28, 32, 36]
39. For-Loop
### building list of squares of even numbers in [0, 10]
### with for-loop
>>> rslt = []
>>> for x in xrange(11):
        if x % 2 == 0:
            rslt.append(x**2)
>>> rslt
[0, 4, 16, 36, 64, 100]
40. List Comprehension Equivalent
### building the same list with list comprehension
>>> [x ** 2 for x in xrange(11) if x % 2 == 0]
[0, 4, 16, 36, 64, 100]
41. For-Loop
## building list of squares of odd numbers in [0, 10]
## with for-loop
>>> rslt = []
>>> for x in xrange(11):
        if x % 2 != 0:
            rslt.append(x**2)
>>> rslt
[1, 9, 25, 49, 81]
42. List Comprehension Equivalent
## building list of squares of odd numbers [0, 10]
## with list comprehension
>>> [x ** 2 for x in xrange(11) if x % 2 != 0]
[1, 9, 25, 49, 81]
44. For-Loop
>>> rslt = []
>>> for x in xrange(6):
        if x % 2 == 0:
            for y in xrange(6):
                if y % 2 != 0:
                    rslt.append((x, y))
>>> rslt
[(0, 1), (0, 3), (0, 5), (2, 1), (2, 3), (2, 5), (4, 1), (4, 3), (4, 5)]
45. List Comprehension Equivalent
>>> [(x, y) for x in xrange(6) if x % 2 == 0
            for y in xrange(6) if y % 2 != 0]
[(0, 1), (0, 3), (0, 5), (2, 1), (2, 3), (2, 5), (4, 1), (4, 3), (4, 5)]
47. List Comprehension with Matrices
● List comprehension can be used to scan rows and columns in matrices
>>> matrix = [
[10, 20, 30],
[40, 50, 60],
[70, 80, 90]
]
### extract all rows
>>> [r for r in matrix]
[[10, 20, 30], [40, 50, 60], [70, 80, 90]]
48. List Comprehension with Matrices
>>> matrix = [
[10, 20, 30],
[40, 50, 60],
[70, 80, 90]
]
### extract column 0
>>> [r[0] for r in matrix]
[10, 40, 70]
49. List Comprehension with Matrices
>>> matrix = [
[10, 20, 30],
[40, 50, 60],
[70, 80, 90]
]
### extract column 1
>>> [r[1] for r in matrix]
[20, 50, 80]
50. List Comprehension with Matrices
>>> matrix = [
[10, 20, 30],
[40, 50, 60],
[70, 80, 90]
]
### extract column 2
>>> [r[2] for r in matrix]
[30, 60, 90]
51. List Comprehension with Matrices
### turn matrix columns into rows
>>> rslt = []
>>> for c in xrange(len(matrix)):
        rslt.append([matrix[r][c] for r in xrange(len(matrix))])
>>> rslt
[[10, 40, 70], [20, 50, 80], [30, 60, 90]]
52. List Comprehension with Matrices
● List comprehension can work with iterables (e.g., dictionaries)
>>> dict = {'a' : 'A', 'bb' : 'BB', 'ccc' : 'CCC'}
>>> [(item[0], item[1], len(item[0]+item[1]))
for item in dict.items()]
[('a', 'A', 2), ('ccc', 'CCC', 6), ('bb', 'BB', 4)]
53. List Comprehension
● If the expression inside [ ] is a tuple, parentheses are a must
>>> cubes = [(x, x**3) for x in xrange(5)]
>>> cubes
[(0, 0), (1, 1), (2, 8), (3, 27), (4, 64)]
● Sequences can be unpacked in list comprehension
>>> sums = [x + y for x, y in cubes]
>>> sums
[0, 2, 10, 30, 68]
54. List Comprehension
● for-clauses in list comprehensions can iterate over any sequence:
>>> rslt = [c * n for c in 'math' for n in (1, 2, 3)]
>>> rslt
['m', 'mm', 'mmm', 'a', 'aa', 'aaa', 't', 'tt', 'ttt', 'h', 'hh', 'hhh']
55. List Comprehension & Loop Variables
● The loop variables used in list-comprehension for-loops (and in regular for-loops) remain bound after the loop finishes (this is Python 2 behavior; in Python 3, comprehension variables do not leak)
>>> for i in [1, 2, 3]: print i
1
2
3
>>> i + 4
7
>>> [j for j in xrange(10) if j % 2 == 0]
[0, 2, 4, 6, 8]
>>> j * 2
18
56. When To Use List Comprehension
● For-loops are easier to understand and debug
● List comprehensions may be harder to understand
● List comprehensions are often faster than for-loops in the interpreter
● List comprehensions are worth using to speed up simpler tasks
● For-loops are worth using when the logic gets complex
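The speed claim above can be checked with the standard timeit module (a sketch in Python 3 syntax, so range replaces xrange; absolute numbers vary by machine, only the comparison matters):

```python
import timeit

def with_loop():
    rslt = []
    for x in range(1000):
        if x % 2 == 0:
            rslt.append(x ** 2)
    return rslt

def with_comprehension():
    return [x ** 2 for x in range(1000) if x % 2 == 0]

# Both build the same list; the comprehension avoids the repeated
# attribute lookup and call overhead of rslt.append on each iteration.
print('for-loop:      %.4fs' % timeit.timeit(with_loop, number=2000))
print('comprehension: %.4fs' % timeit.timeit(with_comprehension, number=2000))
```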