(download for best quality slides) Gain a deeper understanding of why right folds over very large and infinite lists are sometimes possible in Haskell.
See how lazy evaluation and function strictness affect left and right folds in Haskell.
Learn when an ordinary left fold results in a space leak and how to avoid it using a strict left fold.
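The space-leak scenario is easy to sketch. A minimal illustration (not taken from the slides), contrasting the Prelude's lazy foldl with the strict foldl' from Data.List:

```haskell
import Data.List (foldl')

-- foldl builds a chain of unevaluated thunks, (((0 + 1) + 2) + ...),
-- which for a long enough list exhausts memory before a single addition runs.
lazySum :: [Int] -> Int
lazySum = foldl (+) 0

-- foldl' forces the accumulator at each step, so it folds in constant space.
strictSum :: [Int] -> Int
strictSum = foldl' (+) 0
```

Both compute the same result; only their space behaviour differs.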
Errata:
slide 15: "as sharing is required" should be "as sharing is not required"
slide 43: a corrected symbol in the fold expression involving (⊕)
Folding Unfolded - Polyglot FP for Fun and Profit - Haskell and Scala - Part ... (Philip Schwarz)
(download for best quality slides) Gain a deeper understanding of why right folds over very large and infinite lists are sometimes possible in Haskell.
See how lazy evaluation and function strictness affect left and right folds in Haskell.
Learn when an ordinary left fold results in a space leak and how to avoid it using a strict left fold.
This version eliminates some minor imperfections and corrects the following two errors:
slide 15: "as sharing is required" should be "as sharing is not required"
slide 43: a corrected symbol in the fold expression involving (⊕)
Folding Unfolded - Polyglot FP for Fun and Profit - Haskell and Scala - Part 4 (Philip Schwarz)
(download for flawless image quality) Can a left fold ever work over an infinite list? What about a right fold? Find out.
Learn about the other two functions used by functional programmers to implement mathematical induction: iterating and scanning.
Learn about the limitations of the accumulator technique and about tupling, a technique that is the dual of the accumulator trick.
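Tupling can be sketched with the usual Fibonacci example (the function names here are illustrative, not from the deck): instead of passing an accumulator down, each call returns a pair of neighbouring results, so one recursive call does the work of two.

```haskell
-- Tupling: compute fib n and fib (n + 1) together, so each step needs
-- only the previous pair instead of two overlapping recursive calls.
fibPair :: Int -> (Integer, Integer)
fibPair 0 = (0, 1)
fibPair n = let (a, b) = fibPair (n - 1) in (b, a + b)

fib :: Int -> Integer
fib = fst . fibPair
```

This turns the exponential-time naive definition into a linear-time one.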
Folding Unfolded - Polyglot FP for Fun and Profit - Haskell and Scala - Part ... (Philip Schwarz)
(download for flawless image quality) Can a left fold ever work over an infinite list? What about a right fold? Find out.
Learn about the other two functions used by functional programmers to implement mathematical induction: iterating and scanning.
Learn about the limitations of the accumulator technique and about tupling, a technique that is the dual of the accumulator trick.
N-Queens Combinatorial Problem - Polyglot FP for Fun and Profit - Haskell and... (Philip Schwarz)
See how the guard function has migrated from MonadPlus to Alternative and learn something about the latter.
Learn how to write a Scala program that draws an N-Queens solution board using the Doodle graphics library.
See how to write the equivalent Haskell program using the Gloss graphics library.
Learn how to use Monoid and Foldable to compose images both in Haskell and in Scala.
Link to part 1: https://www.slideshare.net/pjschwarz/nqueens-combinatorial-problem-polyglot-fp-for-fun-and-profit-haskell-and-scala-part-1
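A small sketch (not from the slides) of guard in its Alternative-based form, used in the list monad the way an N-Queens search uses it to prune candidates:

```haskell
import Control.Monad (guard)

-- guard still lives in Control.Monad, but its type is now
-- Alternative f => Bool -> f (), so it works in any Alternative,
-- not only in MonadPlus. In the list monad it prunes candidates.
pythagorean :: [(Int, Int, Int)]
pythagorean = do
  c <- [1 .. 20]
  b <- [1 .. c]
  a <- [1 .. b]
  guard (a * a + b * b == c * c)
  pure (a, b, c)
```

An N-Queens generate-and-test search uses guard in exactly this role, discarding placements where queens attack each other.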
Errata:
On slide 22, the last line of the showQueens function should of course be show(solution).draw(frame) rather than show(solution).draw
On slide 43, it would be better if the definitions of the beside, above and on Monoids were also shown.
Folding Unfolded - Polyglot FP for Fun and Profit - Haskell and Scala (Philip Schwarz)
(download for perfect quality) - See how recursive functions and structural induction relate to recursive datatypes.
Follow along as the fold abstraction is introduced and explained.
Watch as folding is used to simplify the definition of recursive functions over recursive datatypes
Part 1 - through the work of Richard Bird and Graham Hutton.
Errata:
slides 7 and 11: fib(0) is 0, rather than 1
slide 23: should have been followed by 2-3 slides recapitulating the definitions of factorial and fibonacci with and without foldr, plus a translation to Scala
slide 36: concat is not invoked in the concat example
slides 48 and 49: an unwanted 'm' in the definition of sum
throughout: a couple of typographical errors
throughout: several aesthetic imperfections (wrong font, wrong font colour)
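The gist of the missing recap slides can be sketched as follows (an illustrative reconstruction, incorporating the corrected base case fib(0) = 0):

```haskell
-- factorial via foldr, and fibonacci via foldr with a pair accumulator,
-- using the corrected base case fib 0 = 0.
factorial :: Integer -> Integer
factorial n = foldr (*) 1 [1 .. n]

fib :: Int -> Integer
fib n = fst (foldr step (0, 1) [1 .. n])
  where step _ (a, b) = (b, a + b)
```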
Folding Unfolded - Polyglot FP for Fun and Profit - Haskell and Scala - with ... (Philip Schwarz)
(download for perfect quality) - See how recursive functions and structural induction relate to recursive datatypes.
Follow along as the fold abstraction is introduced and explained.
Watch as folding is used to simplify the definition of recursive functions over recursive datatypes
Part 1 - through the work of Richard Bird and Graham Hutton.
This version corrects the following issues:
slides 7 and 11: fib(0) is 0, rather than 1
slide 23: should have been followed by 2-3 slides recapitulating the definitions of factorial and fibonacci with and without foldr, plus a translation to Scala
slide 36: concat is not invoked in the concat example
slides 48 and 49: an unwanted 'm' in the definition of sum
throughout: a couple of typographical errors
throughout: several aesthetic imperfections (wrong font, wrong font colour)
Download for flawless quality (slides viewed online look a bit grainy and out of focus). A monad is an implementation of one of the minimal sets of monadic combinators, satisfying the laws of associativity and identity - see how compositional responsibilities are distributed in each combinator set.
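As a sketch of what "minimal sets of monadic combinators" means (illustrative, shown for the list monad only): {unit, flatMap} and {unit, map, join} are interdefinable, so either set suffices to implement the other.

```haskell
-- Two of the minimal combinator sets, sketched for the list monad:
-- {unit, flatMap} and {unit, map, join}, each definable from the other.
unit :: a -> [a]
unit x = [x]

join :: [[a]] -> [a]
join = foldr (++) []

flatMap :: [a] -> (a -> [b]) -> [b]
flatMap xs f = join (map f xs)
```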
Folding Unfolded - Polyglot FP for Fun and Profit - Haskell and Scala - Part 2 (Philip Schwarz)
(download for perfect quality) See aggregation functions defined inductively and implemented using recursion.
Learn how, in many cases, tail recursion and the accumulator trick can be used to avoid stack-overflow errors.
Watch as general aggregation is implemented and see duality theorems capturing the relationship between left folds and right folds.
Through the work of Sergei Winitzki and Richard Bird.
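Two of the duality theorems can be spot-checked in a few lines (an illustrative sketch, not code from the decks):

```haskell
-- First duality theorem: for an associative operator with identity e,
-- foldr op e xs == foldl op e xs on any finite list.
-- Third duality theorem: foldr f e xs == foldl (flip f) e (reverse xs).
dualityHolds :: Bool
dualityHolds =
  foldr (+) 0 xs == foldl (+) 0 xs
    && foldr (-) 0 xs == foldl (flip (-)) 0 (reverse xs)
  where
    xs = [1 .. 10] :: [Int]
```

The first check relies on (+) being associative with identity 0; the second uses the non-associative (-) precisely because the theorem holds for any f.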
Left and Right Folds - Comparison of a mathematical definition and a programm... (Philip Schwarz)
We compare typical definitions of the left and right fold functions with their mathematical definitions in Sergei Winitzki's upcoming book: The Science of Functional Programming.
Errata:
Slide 13: "The way foldl does it is by associating to the right" - should, of course, end in "to the left".
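The association order is easiest to see with a non-associative operator such as subtraction (an illustrative aside, not from the deck):

```haskell
-- foldl associates to the left, foldr to the right; with subtraction
-- the two groupings give different results.
leftAssociated :: Int
leftAssociated = foldl (-) 0 [1, 2, 3]   -- ((0 - 1) - 2) - 3 = -6

rightAssociated :: Int
rightAssociated = foldr (-) 0 [1, 2, 3]  -- 1 - (2 - (3 - 0)) = 2
```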
The Functional Programming Triad of Folding, Scanning and Iteration - a first... (Philip Schwarz)
This slide deck can work both as an aide mémoire (memory jogger) and as a first (not completely trivial) example of using left folds, left scans and iteration to implement mathematical induction.
Errata: almost half of the slides have some minor typo or imperfection, in some cases a minor error, and in one case an omission. See a later version for corrections and some improvements.
(for best quality images, either download or view here: https://philipschwarz.dev/fpilluminated/?page_id=455)
Scala code for latest version: https://github.com/philipschwarz/fp-fold-scan-iterate-triad-a-first-example-scala
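As a sketch of the triad (illustrative, not the deck's own Scala code): folding collapses a sequence, scanning also keeps the running values, and iterating generates the sequence in the first place:

```haskell
-- Iterating generates the inductive sequence...
naturals :: [Integer]
naturals = iterate (+ 1) 0        -- 0, 1, 2, 3, ...

-- ...scanning keeps all the running values of a fold...
factorials :: [Integer]
factorials = scanl (*) 1 [1 ..]   -- 1, 1, 2, 6, 24, ...

-- ...and folding collapses the sequence to a single result.
factorial :: Integer -> Integer
factorial n = foldl (*) 1 [1 .. n]
```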
Here I link up some very useful material by Robert Norris (@tpolecat) and Martin Odersky (@odersky) to introduce Monad laws and reinforce the importance of checking the laws. E.g. while Option satisfies the laws, Try is not a lawful Monad: it trades the left identity law for the bullet-proof principle.
* ERRATA *
1) on the first slide the Kleisli composition signature appears three times, but one of the occurrences is incorrect: it reads (>=>) :: (a -> m b) -> (b -> m b) -> (a -> m c), whereas it should be (>=>) :: (a -> m b) -> (b -> m c) -> (a -> m c) - thank you Jules Ivanic.
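The corrected signature is easy to check against a concrete use (the helper functions here are illustrative, not from the slides):

```haskell
import Control.Monad ((>=>))

-- (>=>) :: Monad m => (a -> m b) -> (b -> m c) -> a -> m c
safeRecip :: Double -> Maybe Double
safeRecip 0 = Nothing
safeRecip x = Just (1 / x)

safeSqrt :: Double -> Maybe Double
safeSqrt x
  | x < 0     = Nothing
  | otherwise = Just (sqrt x)

-- Kleisli composition threads the Maybe through both steps.
recipThenSqrt :: Double -> Maybe Double
recipThenSqrt = safeRecip >=> safeSqrt
```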
Sierpinski Triangle - Polyglot FP for Fun and Profit - Haskell and Scala (Philip Schwarz)
Take the very first baby steps on the path to doing graphics in Haskell and Scala.
Learn about a simple yet educational recursive algorithm producing images that are pleasing to the eye.
Learn how functional programs deal with the side effects required to draw images.
See how libraries like Gloss and Doodle make drawing Sierpinski's triangle a doddle.
Code for this slide deck:
https://github.com/philipschwarz/sierpinski-triangle-haskell-gloss
https://github.com/philipschwarz/sierpinski-triangle-scala-cats-io
https://github.com/philipschwarz/sierpinski-triangle-scala-awt-and-doodle
Errata:
1. the title 'Sierpinski Triangle' on the front slide could be improved by replacing it with 'Sierpinski's Triangle'.
2. a couple of typos on two slides
3. the triangles drawn using Doodle are not equilateral, as intended, but isosceles.
(UPDATE 2021-06-15 I opened PR https://github.com/creativescala/doodle/pull/99 and as a result, an equilateral triangle has now been added to Doodle: https://github.com/creativescala/doodle/commit/30d20efebcc2016942e9cdbae85fefca5b95fa3c).
Here is a corrected version of the deck: https://www.slideshare.net/pjschwarz/sierpinski-triangle-polyglot-fp-for-fun-and-profit-haskell-and-scala-with-minor-corrections
Quicksort - a whistle-stop tour of the algorithm in five languages and four p... (Philip Schwarz)
Quicksort - a whistle-stop tour of the algorithm in five languages and four paradigms.
Programming Paradigms: Functional, Logic, Imperative, Imperative Functional
Languages: Haskell, Scala, Java, Clojure, Prolog
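The functional rendition is the famous short one; here is the Haskell version (a standard sketch, not necessarily the exact code in the deck):

```haskell
-- The classic functional quicksort: pick a pivot, partition the rest
-- with list comprehensions, and recurse. Clear, though not in-place.
quicksort :: Ord a => [a] -> [a]
quicksort []       = []
quicksort (p : xs) = quicksort smaller ++ [p] ++ quicksort larger
  where
    smaller = [x | x <- xs, x < p]
    larger  = [x | x <- xs, x >= p]
```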
Functional Core and Imperative Shell - Game of Life Example - Haskell and Scala (Philip Schwarz)
See a program structure flowchart used to highlight how an FP program breaks down into a functional core and an imperative shell.
View a program structure flowchart for the Game of Life.
See the code for Game of Life's functional core and imperative shell, both in Haskell and in Scala.
Code:
https://github.com/philipschwarz/functional-core-imperative-shell-scala
https://github.com/philipschwarz/functional-core-imperative-shell-haskell
Big picture of category theory in scala with deep dive into contravariant and... (Piotr Paradziński)
A big picture of category theory in Scala - starting from regular functors with additional structure (Apply, Applicative, Monad) and moving on to Comonads. Usually we think about structures like Monoids in a monoidal category with a particular tensor; here I analyze just the signatures of the different abstractions.
Exploration of Contravariant functors as a way to model computation "backwards", or to abstract over input with the ability to prepend an operation. Examples for predicates, sorting, show, and function input (or any other function parameter except the last one).
Profunctors as an abstraction unifying Functors and Contravariant functors to model both input and output. Example of a Profunctor: the one-argument function.
Relation to Bifunctors, Kan extensions, Adjunctions, and Free constructions.
The Functional Programming Triad of Folding, Scanning and Iteration - a first... (Philip Schwarz)
This slide deck can work both as an aide mémoire (memory jogger) and as a first (not completely trivial) example of using left folds, left scans and iteration to implement mathematical induction.
This is just an updated version of the original slide deck which makes minor improvements and minor corrections in almost half of the slides.
(for best quality images, either download or view here: https://philipschwarz.dev/fpilluminated/?page_id=455)
You can see the Scala code here: https://github.com/philipschwarz/fp-fold-scan-iterate-triad-a-first-example-scala
Download for better quality.
Monads do not Compose - not in a generic way: there is no general way of composing monads.
A comment from Rúnar Bjarnason, coauthor of FP in Scala: "They do compose in a different generic way. For any two monads F and G we can take the coproduct which is roughly Free of Either F or G (up to normalization)".
Another comment from Sergei Winitzki (which caused me to upload https://www.slideshare.net/pjschwarz/addendum-to-monads-do-not-compose): "It is a mistake to think that a traversable monad can be composed with another monad. It is true that, given `Traversable`, you can implement the monad's methods (pure and flatMap) for the composition with another monad (as in your slides 21 to 26), but this is a deceptive appearance. The laws of the `Traversable` typeclass are far insufficient to guarantee the laws of the resulting composed monad. The only traversable monads that work correctly are Option, Either, and Writer. It is true that you can implement the type signature of the `swap` function for any `Traversable` monad. However, the `swap` function for monads needs to satisfy very different and stronger laws than the `sequence` function from the `Traversable` type class. I'll have to look at the "Book of Monads"; but, if my memory serves, the FPiS book does not derive any of these laws." See https://www.linkedin.com/feed/update/urn:li:groupPost:41001-6523141414614814720?commentUrn=urn%3Ali%3Acomment%3A%28groupPost%3A41001-6523141414614814720%2C6532108273053761536%29
download for better quality - Learn about the sequence and traverse functions
through the work of Runar Bjarnason and Paul Chiusano, authors of Functional Programming in Scala https://www.manning.com/books/functional-programming-in-scala, and others (Martin Odersky, Derek Wyatt, Adelbert Chang)
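A minimal sketch of the two functions in the Maybe applicative (illustrative Haskell, not the book's Scala code):

```haskell
-- sequence turns a list of effects into an effect producing a list;
-- traverse maps and sequences in one pass (sequence = traverse id).
allJust :: Maybe [Int]
allJust = sequence [Just 1, Just 2, Just 3]      -- Just [1,2,3]

oneNothing :: Maybe [Int]
oneNothing = sequence [Just 1, Nothing, Just 3]  -- Nothing

halved :: Maybe [Int]
halved = traverse half [2, 4, 6]                 -- Just [1,2,3]
  where half n = if even n then Just (n `div` 2) else Nothing
```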
Introducing Assignment invalidates the Substitution Model of Evaluation and v... (Philip Schwarz)
(download for better quality)
Introducing Assignment invalidates the Substitution Model of Evaluation and violates Referential Transparency
- as explained in SICP (the Wizard Book)
This ppt includes -
History
Introduction
R Basics
Download and Install R
GUI /IDE
Datatypes and Operators
Conditional Statements and Loops
Functions
Plotting
Features
Comparison with other programming languages
Merits / Advantages
Demerits / Disadvantages
Conclusion
Based on the readings and content for this course.docx (ikirkton)
Based on the readings and content for this course, which topic did you find most useful or interesting? How will you use it later in life? What makes it valuable?
Ans:
Well, there are many small but significant things in real life that, I understand, are related to maths.
For example, the screen size of a TV or a laptop: when we say a 14-inch laptop, we mean that the diagonal of the screen is approximately 14 inches.
This is a direct application of the Pythagorean Theorem.
And the shape of the room determines the formula for area that I use. For example, if our room were a perfect square (which none of them are) I would use the formula a = s^2; since our rooms are rectangles, the formula we more commonly use is a = lw.
Again, I understand that the amount of money we spend on gas can be modeled by a mathematical function.
When we throw a ball up, the time taken for it to come down can be modeled by a quadratic equation.
There are so many things.
I won't pick any particular thing.
But after this course, I am able to look at many things in a more analytical way.
I can understand the mathematical logic behind them.
So all these are valuable to me.
Do you always use the property of distribution when multiplying monomials and polynomials.docx
Do you always use the property of distribution when multiplying monomials and polynomials? Explain why or why not. In what situations would distribution become important? Provide an example for the class to practice with.
Ans:
The property of distribution is one important tool in solving the polynomial multiplications.
For example:
3x*(x+5) = 3x*x + 3x*5 = 3x^2 +15x
But this property is used only when one of the bracketed terms contains two or more terms of different order.
For example in the above case, the bracket consists of two different order terms, x and 5.
Let's take another case:
3x*(x+2x)
This can be solved in two ways.
3x(x+2x) = 3x*x + 3x*2x = 3x^2 +6x^2 = 9x^2
Or we can say:
3x(x+2x) = 3x*3x = 9x^2
So in the 1st method we used the distributive property.
But in the 2nd case we did not.
So it depends on the particular problem and looking at the different terms we can decide whether or not to use distributive property.
Explain how to factor the following trinomials forms.docx
Explain how to factor the following trinomial forms: x^2 + bx + c and ax^2 + bx + c. Is there more than one way to factor these? Show your answer using both words and mathematical notation.
Ans:
Actually I am not very sure how to answer this.
For me, x^2 + bx + c and ax^2 + bx + c are not different from each other.
The 1st one is a special case of the second expression, where a = 1.
To factor the expression ax^2 + bx + c we first need to split the middle term bx cleverly.
Now it's to be understood that not all trinomials can be factored. But some of them can be.
Basically, we have to write b in the form b = p + q so that p*q = a*c. This is the only trick.
Then ...
* Evaluate a polynomial using the Remainder Theorem.
* Use the Factor Theorem to solve a polynomial equation.
* Use the Rational Zero Theorem to find rational zeros.
* Find zeros of a polynomial function.
* Use the Linear Factorization Theorem to find polynomials with given zeros.
* Use Descartesโ Rule of Signs.
Here is the first set of notes for the first class in Analysis of Algorithms. I added a dedication for my dear Fabi... she has shown me what real idealism is...
Direct Style Effect Systems - The Print[A] Example - A Comprehension Aid (Philip Schwarz)
The subject of this deck is the small Print[A] program in the following blog post by Noel Welsh: https://www.inner-product.com/posts/direct-style-effects/.
Keywords: "direct-style", "context function", "context functions", "algebraic effect", "algebraic effects", "scala", "effect system", "effect systems", "effect", "side effect", "composition", "fp", "functional programming"
Folding Cheat Sheet #4 - fourth in a series (Philip Schwarz)
For functions that can be defined both as an instance of a right fold and as an instance of a left fold, one may be more efficient than the other.
Let's look at the example of a function 'decimal' that converts a list of digits into the corresponding decimal number.
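The point can be sketched in Haskell (an illustrative version; decimalR's use of the running length is one way to express the right fold, not necessarily the deck's):

```haskell
-- As a left fold, decimal runs in a single pass, multiplying the
-- accumulator by 10 at each step.
decimalL :: [Int] -> Int
decimalL = foldl (\n d -> 10 * n + d) 0

-- As a right fold, each digit needs to know how many digits follow it,
-- so the accumulator must carry that count along: the left fold is the
-- more natural and efficient of the two here.
decimalR :: [Int] -> Int
decimalR ds = fst (foldr step (0, 0) ds)
  where step d (n, k) = (d * 10 ^ k + n, k + 1)
```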
Erratum: it has been pointed out that it is possible to define the zip function using a right fold (see slide 5).
Tagless Final Encoding - Algebras and Interpreters and also Programs (Philip Schwarz)
Tagless Final Encoding - Algebras and Interpreters and also Programs - An introduction, through the work of Gabriel Volpe.
Slide deck home: http://fpilluminated.com/assets/tagless-final-encoding-algebras-interpreters-and-programs.html
A sighting of traverseFilter and foldMap in Practical FP in Scala (Philip Schwarz)
Slide deck home: http://fpilluminated.com/assets/sighting-of-scala-cats-traverseFilter-and-foldMap-in-practical-fp-in-scala.html.
Download PDF for perfect image quality.
A sighting of sequence function in Practical FP in Scala (Philip Schwarz)
Slide deck home: http://fpilluminated.com/assets/sighting-of-scala-cats-sequence-function-in-practical-fp-in-scala.html.
Download PDF for perfect image quality.
This talk was presented on Aug 3rd 2023 during the Scala in the City event at ITV in London: https://www.meetup.com/scala-in-the-city/events/292844968/
Visit the following for a description, slideshow, all slides with transcript, pdf, github repo, and eventually a video recording: http://fpilluminated.com/assets/n-queens-combinatorial-puzzle-meets-cats.html
At the centre of this talk is the N-Queens combinatorial puzzle. The reason why this puzzle features in the Scala book and functional programming course by Martin Odersky (the language's creator) is that such puzzles are a particularly suitable application area for 'for comprehensions'.
We'll start by (re)acquainting ourselves with the puzzle, and seeing the role played in it by permutations. Next, we'll see how, when wanting to visualise candidate puzzle solutions, Cats' monoidal functions fold and foldMap are a great fit for combining images.
While we are all very familiar with the triad providing the bread, butter and jam of functional programming, i.e. map, filter and fold, not everyone knows about the corresponding functions in Cats' monadic variant of the triad, i.e. mapM, filterM and foldM, which we are going to learn about next.
As is often the case in functional programming, the traverse function makes an appearance, and we shall grab the opportunity to point out the symmetry that exists in the interrelation of flatMap / foldMap / traverse and flatten / fold / sequence.
Armed with an understanding of foldM, we then look at how such a function can be used to implement an iterative algorithm for the N-Queens puzzle.
The talk ends by pointing out that the iterative algorithm is smarter than the recursive one, because it โremembersโ where it has already placed previous queens.
Kleisli composition, flatMap, join, map, unit - implementation and interrelat...Philip Schwarz
ย
Kleisli composition, flatMap, join, map, unit. A study/memory aid, to help learn/recall their implementation/interrelation.
Version 2, updated for Scala 3
Nat, List and Option Monoids -from scratch -Combining and Folding -an examplePhilip Schwarz
ย
Nat, List and Option Monoids, from scratch. Combining and Folding: an example.
This is a new version of the original which has some cosmetic changes and a new 7th slide which only differs from slide 6 in that it defines the fold function in terms of the foldRight function.
Code: https://github.com/philipschwarz/nat-list-and-option-monoids-from-scratch-combining-and-folding-an-example
Nat, List and Option Monoids -from scratch -Combining and Folding -an examplePhilip Schwarz
ย
Nat, List and Option Monoids, from scratch. Combining and Folding: an example.
Code: https://github.com/philipschwarz/nat-list-and-option-monoids-from-scratch-combining-and-folding-an-example
The Sieve of Eratosthenes - Part II - Genuine versus Unfaithful Sieve - Haske...Philip Schwarz
ย
When I posted the deck for Part 1 to the Scala users forum, Odd Mรถller linked to a paper titled "The Genuine Sieve of Eratosthenes", which speaks of the Unfaithful Sieve.
Part 2 is based on that paper and on Richard Bird's faithful Haskell implementation of the Sieve, which we translate into Scala.
Scala code for Richard Bird's infinite primes Haskell program: https://github.com/philipschwarz/sieve-of-eratosthenes-part-2-scala
Sum and Product Types -The Fruit Salad & Fruit Snack Example - From F# to Ha...Philip Schwarz
ย
Sum and Product Types -The Fruit Salad & Fruit Snack Example - From F# to Haskell, Scala and Java.
Inspired by the example in Scott Wlaschinโs F# book: Domain Modeling Made Functional.
Download for better results.
Java 19 Code: https://github.com/philipschwarz/fruit-salad-and-fruit-snack-ADT-example-java
May Marketo Masterclass, London MUG May 22 2024.pdfAdele Miller
ย
Can't make Adobe Summit in Vegas? No sweat because the EMEA Marketo Engage Champions are coming to London to share their Summit sessions, insights and more!
This is a MUG with a twist you don't want to miss.
Enterprise Resource Planning System includes various modules that reduce any business's workload. Additionally, it organizes the workflows, which drives towards enhancing productivity. Here are a detailed explanation of the ERP modules. Going through the points will help you understand how the software is changing the work dynamics.
To know more details here: https://blogs.nyggs.com/nyggs/enterprise-resource-planning-erp-system-modules/
Navigating the Metaverse: A Journey into Virtual Evolution"Donna Lenk
ย
Join us for an exploration of the Metaverse's evolution, where innovation meets imagination. Discover new dimensions of virtual events, engage with thought-provoking discussions, and witness the transformative power of digital realms."
SOCRadar Research Team: Latest Activities of IntelBrokerSOCRadar
ย
The European Union Agency for Law Enforcement Cooperation (Europol) has suffered an alleged data breach after a notorious threat actor claimed to have exfiltrated data from its systems. Infamous data leaker IntelBroker posted on the even more infamous BreachForums hacking forum, saying that Europol suffered a data breach this month.
The alleged breach affected Europol agencies CCSE, EC3, Europol Platform for Experts, Law Enforcement Forum, and SIRIUS. Infiltration of these entities can disrupt ongoing investigations and compromise sensitive intelligence shared among international law enforcement agencies.
However, this is neither the first nor the last activity of IntekBroker. We have compiled for you what happened in the last few days. To track such hacker activities on dark web sources like hacker forums, private Telegram channels, and other hidden platforms where cyber threats often originate, you can check SOCRadarโs Dark Web News.
Stay Informed on Threat Actorsโ Activity on the Dark Web with SOCRadar!
Globus Compute wth IRI Workflows - GlobusWorld 2024Globus
ย
As part of the DOE Integrated Research Infrastructure (IRI) program, NERSC at Lawrence Berkeley National Lab and ALCF at Argonne National Lab are working closely with General Atomics on accelerating the computing requirements of the DIII-D experiment. As part of the work the team is investigating ways to speedup the time to solution for many different parts of the DIII-D workflow including how they run jobs on HPC systems. One of these routes is looking at Globus Compute as a way to replace the current method for managing tasks and we describe a brief proof of concept showing how Globus Compute could help to schedule jobs and be a tool to connect compute at different facilities.
We describe the deployment and use of Globus Compute for remote computation. This content is aimed at researchers who wish to compute on remote resources using a unified programming interface, as well as system administrators who will deploy and operate Globus Compute services on their research computing infrastructure.
Software Engineering, Software Consulting, Tech Lead.
Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Security,
Spring Transaction, Spring MVC,
Log4j, REST/SOAP WEB-SERVICES.
Prosigns: Transforming Business with Tailored Technology SolutionsProsigns
ย
Unlocking Business Potential: Tailored Technology Solutions by Prosigns
Discover how Prosigns, a leading technology solutions provider, partners with businesses to drive innovation and success. Our presentation showcases our comprehensive range of services, including custom software development, web and mobile app development, AI & ML solutions, blockchain integration, DevOps services, and Microsoft Dynamics 365 support.
Custom Software Development: Prosigns specializes in creating bespoke software solutions that cater to your unique business needs. Our team of experts works closely with you to understand your requirements and deliver tailor-made software that enhances efficiency and drives growth.
Web and Mobile App Development: From responsive websites to intuitive mobile applications, Prosigns develops cutting-edge solutions that engage users and deliver seamless experiences across devices.
AI & ML Solutions: Harnessing the power of Artificial Intelligence and Machine Learning, Prosigns provides smart solutions that automate processes, provide valuable insights, and drive informed decision-making.
Blockchain Integration: Prosigns offers comprehensive blockchain solutions, including development, integration, and consulting services, enabling businesses to leverage blockchain technology for enhanced security, transparency, and efficiency.
DevOps Services: Prosigns' DevOps services streamline development and operations processes, ensuring faster and more reliable software delivery through automation and continuous integration.
Microsoft Dynamics 365 Support: Prosigns provides comprehensive support and maintenance services for Microsoft Dynamics 365, ensuring your system is always up-to-date, secure, and running smoothly.
Learn how our collaborative approach and dedication to excellence help businesses achieve their goals and stay ahead in today's digital landscape. From concept to deployment, Prosigns is your trusted partner for transforming ideas into reality and unlocking the full potential of your business.
Join us on a journey of innovation and growth. Let's partner for success with Prosigns.
Field Employee Tracking System| MiTrack App| Best Employee Tracking Solution|...informapgpstrackings
ย
Keep tabs on your field staff effortlessly with Informap Technology Centre LLC. Real-time tracking, task assignment, and smart features for efficient management. Request a live demo today!
For more details, visit us : https://informapuae.com/field-staff-tracking/
Accelerate Enterprise Software Engineering with PlatformlessWSO2
ย
Key takeaways:
Challenges of building platforms and the benefits of platformless.
Key principles of platformless, including API-first, cloud-native middleware, platform engineering, and developer experience.
How Choreo enables the platformless experience.
How key concepts like application architecture, domain-driven design, zero trust, and cell-based architecture are inherently a part of Choreo.
Demo of an end-to-end app built and deployed on Choreo.
Developing Distributed High-performance Computing Capabilities of an Open Sci...Globus
ย
COVID-19 had an unprecedented impact on scientific collaboration. The pandemic and its broad response from the scientific community has forged new relationships among public health practitioners, mathematical modelers, and scientific computing specialists, while revealing critical gaps in exploiting advanced computing systems to support urgent decision making. Informed by our teamโs work in applying high-performance computing in support of public health decision makers during the COVID-19 pandemic, we present how Globus technologies are enabling the development of an open science platform for robust epidemic analysis, with the goal of collaborative, secure, distributed, on-demand, and fast time-to-solution analyses to support public health.
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus...Globus
ย
Large Language Models (LLMs) are currently the center of attention in the tech world, particularly for their potential to advance research. In this presentation, we'll explore a straightforward and effective method for quickly initiating inference runs on supercomputers using the vLLM tool with Globus Compute, specifically on the Polaris system at ALCF. We'll begin by briefly discussing the popularity and applications of LLMs in various fields. Following this, we will introduce the vLLM tool, and explain how it integrates with Globus Compute to efficiently manage LLM operations on Polaris. Attendees will learn the practical aspects of setting up and remotely triggering LLMs from local machines, focusing on ease of use and efficiency. This talk is ideal for researchers and practitioners looking to leverage the power of LLMs in their work, offering a clear guide to harnessing supercomputing resources for quick and effective LLM inference.
Enhancing Project Management Efficiency_ Leveraging AI Tools like ChatGPT.pdfJay Das
ย
With the advent of artificial intelligence or AI tools, project management processes are undergoing a transformative shift. By using tools like ChatGPT, and Bard organizations can empower their leaders and managers to plan, execute, and monitor projects more effectively.
TROUBLESHOOTING 9 TYPES OF OUTOFMEMORYERRORTier1 app
ย
Even though at surface level โjava.lang.OutOfMemoryErrorโ appears as one single error; underlyingly there are 9 types of OutOfMemoryError. Each type of OutOfMemoryError has different causes, diagnosis approaches and solutions. This session equips you with the knowledge, tools, and techniques needed to troubleshoot and conquer OutOfMemoryError in all its forms, ensuring smoother, more efficient Java applications.
AI Pilot Review: The Worldโs First Virtual Assistant Marketing SuiteGoogle
ย
AI Pilot Review: The Worldโs First Virtual Assistant Marketing Suite
๐๐ Click Here To Get More Info ๐๐
https://sumonreview.com/ai-pilot-review/
AI Pilot Review: Key Features
โ Deploy AI expert bots in Any Niche With Just A Click
โ With one keyword, generate complete funnels, websites, landing pages, and more.
โ More than 85 AI features are included in the AI pilot.
โ No setup or configuration; use your voice (like Siri) to do whatever you want.
โ You Can Use AI Pilot To Create your version of AI Pilot And Charge People For Itโฆ
โ ZERO Manual Work With AI Pilot. Never write, Design, Or Code Again.
โ ZERO Limits On Features Or Usages
โ Use Our AI-powered Traffic To Get Hundreds Of Customers
โ No Complicated Setup: Get Up And Running In 2 Minutes
โ 99.99% Up-Time Guaranteed
โ 30 Days Money-Back Guarantee
โ ZERO Upfront Cost
See My Other Reviews Article:
(1) TubeTrivia AI Review: https://sumonreview.com/tubetrivia-ai-review
(2) SocioWave Review: https://sumonreview.com/sociowave-review
(3) AI Partner & Profit Review: https://sumonreview.com/ai-partner-profit-review
(4) AI Ebook Suite Review: https://sumonreview.com/ai-ebook-suite-review
Code reviews are vital for ensuring good code quality. They serve as one of our last lines of defense against bugs and subpar code reaching production.
Yet, they often turn into annoying tasks riddled with frustration, hostility, unclear feedback and lack of standards. How can we improve this crucial process?
In this session we will cover:
- The Art of Effective Code Reviews
- Streamlining the Review Process
- Elevating Reviews with Automated Tools
By the end of this presentation, you'll have the knowledge on how to organize and improve your code review proces
Folding Unfolded - Polyglot FP for Fun and Profit - Haskell and Scala - Part 5
1. gain a deeper understanding of why right folds over very large and infinite lists are sometimes possible in Haskell
see how lazy evaluation and function strictness affect left and right folds in Haskell
learn when an ordinary left fold results in a space leak and how to avoid it using a strict left fold
Part 5 - through the work of
Folding Unfolded
Polyglot FP for Fun and Profit
Haskell and Scala
Richard Bird
http://www.cs.ox.ac.uk/people/richard.bird/
Graham Hutton
@haskellhutt
Bryan OโSullivan
John Goerzen
Donald Bruce Stewart
@philip_schwarz slides by https://www.slideshare.net/pjschwarz
2. In Part 4 we said that in this slide deck we were going to cover, in a Scala context, the following subjects:
โข how to do right folds over large lists and infinite lists
โข how to get around limitations in the applicability of the accumulator trick
But I now think that before we do that we ought to get a better understanding of why it is that right folds
over large lists and infinite lists are sometimes possible in Haskell. In the process weโll also get a deeper
understanding of how left folds work: we are in for quite a surprise.
As a result, the original objectives for this slide deck become the objectives for the next deck, i.e. Part 6.
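Since the deck is about to dig into how left folds behave, here is a minimal preview sketch (my own illustration, not from the slides) contrasting the ordinary left fold with the strict foldl' from Data.List, which is the remedy for the space leak mentioned in the objectives:

```haskell
import Data.List (foldl')

-- foldl defers each (+) application, building a chain of thunks
-- proportional to the list length; foldl' forces the accumulator
-- at every step, so it runs in constant space.
lazySum, strictSum :: [Int] -> Int
lazySum   = foldl  (+) 0
strictSum = foldl' (+) 0

main :: IO ()
main = print (strictSum [1 .. 1000000])   -- 500000500000
```

Both produce the same result; the difference shows up only in how much memory the intermediate, unevaluated accumulator occupies.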
3. Remember in Part 4 when Tony Morris explained the following?
"whether or not fold right will work on an infinite list depends on the strictness of the function that we are replacing Cons with"
e.g. if we have an infinite list of 1s and we have a heador function which, when applied to a default value and a list, returns the value at
the head of the list, unless the list is empty, in which case it returns the default value, then if we do a right fold over the infinite list of ones with
function heador and value 99, we get back 1.
i.e. even though the list is infinite, rather than foldr taking an infinite amount of time to evaluate the list, it is able to return a result
pretty much instantaneously.
As Tony explained, the reason why foldr is able to do this is that the const function used by heador is lazy:
"const, the function I just used, is lazy, it ignores the second argument, and therefore it works on an infinite list."
Tony Morris
@dibblego
OK, so, heador 99 infinity.
This will not go all the way to the right of that list, because when it
gets there, there is just a Cons all the way. So I should get 1. And I do:
$ heador a = foldr const a
$ heador 99 infinity
1
$
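Tony's GHCi session can be reproduced as a small standalone program; infinity here names the infinite list of ones, as in the session above:

```haskell
-- foldr const never inspects its second argument, so the fold
-- stops at the first Cons cell of the (infinite) list.
heador :: a -> [a] -> a
heador a = foldr const a

infinity :: [Int]
infinity = 1 : infinity   -- an infinite list of ones

main :: IO ()
main = print (heador 99 infinity)   -- prints 1
```

On an empty list the same fold returns the default: heador 99 [] is 99.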
4. Also, remember in Part 3, when Tony gave this other example of successfully doing a right fold over an infinite list?
5. Although Tony didnโt say it explicitly, the reason why foldr works in the example on the previous
slide is that && is non-strict in its second parameter whenever its first parameter is False.
Before we look at what it means for a function to be strict, we need to understand what lazy
evaluation is, so in the next 10 slides we are going to see how Richard Bird and Graham Hutton
explain this concept. If you are already familiar with lazy evaluation then feel free to skip the slides.
@philip_schwarz
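The && example can be sketched the same way (my own reconstruction, not Tony's exact code from Part 3): a right fold with (&&) short-circuits as soon as it meets a False, even on an infinite list.

```haskell
-- (&&) ignores its second argument whenever the first is False,
-- so foldr (&&) True can return before reaching the end of the list.
conjoin :: [Bool] -> Bool
conjoin = foldr (&&) True

bools :: [Bool]
bools = True : False : repeat True   -- infinite, with a False near the front

main :: IO ()
main = print (conjoin bools)   -- prints False
```

Had every element been True, the fold would have run forever; termination here depends entirely on && being non-strict in its second argument once it sees False.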
6. Richard Bird
1.2 Evaluation
The computer evaluates an expression by reducing it to its simplest equivalent form and displaying the result. The terms
evaluation, simplification and reduction will be used interchangeably to describe this process. To give a brief flavour, consider the
expression square (3 + 4); one possible sequence is
square (3 + 4)
= { definition of + }
square 7
= { definition of square }
7 × 7
= { definition of × }
49
The first and third step refer to the use of the built-in rules for addition and multiplication, while the second step refers to the use
of the rule defining square supplied by the programmer. That is to say, the definition of square x = x × x is interpreted by the
computer simply as a left-to-right rewrite rule for reducing expressions involving square. The expression "49" cannot be further
reduced, so that is the result displayed by the computer. An expression is said to be canonical, or in normal form, if it cannot be
further reduced. Hence "49" is in normal form.
Another reduction sequence for square (3 + 4) is
square (3 + 4)
= { definition of square }
7. Richard Bird
(3 + 4) × (3 + 4)
= { definition of + }
7 × (3 + 4)
= { definition of + }
7 × 7
= { definition of × }
49
In this reduction sequence the rule for square is applied first, but the final result is the same. A characteristic feature of functional
programming is that if two different reduction sequences both terminate then they lead to the same result. In other words, the
meaning of an expression is its value and the task of the computer is simply to obtain it. Let us give another example. Consider
the script
three :: Integer → Integer
three x = 3
infinity :: Integer
infinity = infinity + 1
It is not clear what integer, if any, is defined by the second equation, but the computer can nevertheless use the equation as a
rewrite rule. Now consider simplification of three infinity . If we try to simplify infinity first, then we get the reduction sequence
three infinity
= { definition of infinity }
8. three (infinity + 1)
= { definition of infinity }
three ((infinity + 1) + 1)
= { and so on โฆ }
โฆ
This reduction sequence does not terminate. If on the other hand we try to simplify three first, then we get the sequence
three infinity
= { definition of three }
3
This sequence terminates in one step. So some ways of simplifying an expression may terminate while others do not. In Chapter 7
we will describe a reduction strategy, called lazy evaluation, that guarantees termination whenever termination is possible, and
is also reasonably efficient. Haskell is a lazy functional language, and we will explore what consequences such a strategy has in
the rest of the book. However, whichever strategy is in force, the essential point is that expressions are evaluated by a
conceptually simple process of substitution and simplification, using both primitive rules and rules supplied by the programmer in
the form of definitions.
Richard Bird
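Bird's script can be run directly in GHC; under lazy evaluation the non-terminating argument is simply never demanded:

```haskell
three :: Integer -> Integer
three x = 3               -- ignores its argument entirely

infinity :: Integer
infinity = infinity + 1   -- reducing this never terminates

main :: IO ()
main = print (three infinity)   -- prints 3
```

Evaluating infinity on its own would loop forever, but three never forces its argument, so the program terminates immediately.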
9. Richard Bird
7.1 Lazy Evaluation
Let us start by revisiting the evaluation of square (3 + 4) considered in Chapter 1.
Recall that one reduction sequence is and another reduction sequence is
square (3 + 4)                square (3 + 4)
= { definition of + }         = { definition of square }
square 7                      (3 + 4) × (3 + 4)
= { definition of square }    = { definition of + }
7 × 7                         7 × (3 + 4)
= { definition of × }         = { definition of + }
49                            7 × 7
= { definition of × }
49
These two reduction sequences illustrate two reduction policies, called innermost and outermost reduction, respectively. In the first
sequence, each step reduces an innermost redex. The word "redex" is short for "reducible expression", and an innermost redex is one that
contains no other redex. In the second sequence, each step reduces an outermost redex. An outermost redex is one that is contained in no
other redex.
Here is another example. First, innermost reduction: The outermost reduction policy for the same expression yields
fst (square 4, square 2)      fst (square 4, square 2)
= { definition of square }    = { definition of fst }
fst (4 × 4, square 2)         square 4
= { definition of × }         = { definition of square }
fst (16, square 2)            4 × 4
= { definition of square }    = { definition of × }
fst (16, 2 × 2)               16
= { definition of × }
fst (16, 4)
= { definition of fst }
16
The innermost reduction
takes five steps. In the
first two steps there was
a choice of innermost
redexes and the leftmost
redex is chosen.
The outermost reduction
sequence takes three
steps. By using outermost
reduction, evaluation of
square 2 was avoided.
10. The two reduction policies have different characteristics. Sometimes outermost reduction will give an answer when innermost reduction fails
to terminate (consider replacing square 2 by undefined in the expression above). However, if both methods terminate, then they give the
same result.
Outermost reduction has the important property that if an expression has a normal form, then outermost reduction will compute it.
Outermost reduction is also called normal-order on account of this property. It would seem therefore, that outermost reduction is a better
choice than innermost reduction, but there is a catch. As the first example shows, outermost reduction can sometimes require more steps
than innermost reduction. The problem arises with any function whose definition contains repeated occurrences of an argument. By binding
such an argument to a suitably large expression, the difference between innermost and outermost reduction can be made arbitrarily large.
This problem can be solved by representing expressions as graphs rather than trees. Unlike trees, graphs can share subexpressions. For
example, the graph
represents the expression (3 + 4) × (3 + 4). Each occurrence of 3 + 4 is represented by an arrow, called a pointer, to a single instance of
3 + 4 . Now using outermost graph reduction we have
square (3 + 4)
= { definition of square }
= { definition of + }
= { definition of × }
49
The reduction has only three steps. The representation of expressions as graphs means that duplicated subexpressions can be shared and
reduced at most once. With graph reduction, outermost reduction never takes more steps than innermost reduction. Henceforth, we will
refer to outermost graph reduction by its common name, lazy evaluation, and to innermost graph reduction as eager evaluation.
Richard Bird
[graph diagrams: a node ( )× whose two arguments both point to the shared subexpression (3 + 4); after one + step they point to the shared value 7]
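Sharing can be observed directly with Debug.Trace (my own illustration, not part of Bird's text): the traced argument is reduced only once, even though square uses it twice.

```haskell
import Debug.Trace (trace)

square :: Int -> Int
square x = x * x   -- x occurs twice, but the thunk for it is shared

main :: IO ()
main = print (square (trace "reducing 3 + 4" (3 + 4)))
-- "reducing 3 + 4" appears once on stderr, then 49 is printed
```

Under call-by-name without sharing the message would appear twice; with lazy evaluation (outermost graph reduction) the shared thunk is reduced at most once.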
11. 15.2 Evaluation Strategies
…
When evaluating an expression, in what order should the reductions be performed? One common strategy, known as innermost
evaluation, is to always choose a redex that is innermost, in the sense that it contains no other redex. If there is more than one
innermost redex, by convention we choose the one that begins at the leftmost position in the expression.
…
Innermost evaluation can also be characterized in terms of how arguments are passed to functions. In particular, using this strategy
ensures that arguments are always fully evaluated before functions are applied. That is, arguments are passed by value. For
example, as shown above, evaluating mult (1+2,2+3) using innermost evaluation proceeds by first evaluating the arguments 1+2
and 2+3, and then applying mult. The fact that we always choose the leftmost innermost redex ensures that the first argument is
evaluated before the second.
…
In terms of how arguments are passed to functions, using outermost evaluation allows functions to be applied before their
arguments are evaluated. For this reason, we say that arguments are passed by name. For example, as shown above, evaluating
mult(1+2,2+3) using outermost evaluation proceeds by first applying the function mult to the two unevaluated arguments 1+2
and 2+3, and then evaluating these two expressions in turn.
Lambda expressions
…
Note that in Haskell, the selection of redexes within the bodies of lambda expressions is prohibited. The rationale for not "reducing
under lambdas" is that functions are viewed as black boxes that we are not permitted to look inside. More formally, the only
operation that can be performed on a function is that of applying it to an argument. As such, reduction within the body of a
function is only permitted once the function has been applied. For example, the function λx → 1 + 2 is deemed to already be fully
evaluated, even though its body contains the redex 1 + 2, but once this function has been applied to an argument, evaluation of this
Graham Hutton
@haskellhutt
12. redex can then proceed:
(λx → 1 + 2) 0
= { applying the lambda }
1 + 2
= { applying + }
3
Using innermost and outermost evaluation, but not within lambda expressions, is normally referred to as call-by-value and call-
by-name evaluation, respectively. In the next two sections we explore how these two evaluation strategies compare in terms of
two important properties, namely their termination behaviour and the number of reduction steps that they require.
15.3 Termination
…
call-by-name evaluation may produce a result when call-by-value evaluation fails to terminate. More generally, we have the
following important property: if there exists any evaluation sequence that terminates for a given expression, then call-by-name
evaluation will also terminate for this expression, and produce the same final result. In summary, call-by-name evaluation is
preferable to call-by-value for the purpose of ensuring that evaluation terminates as often as possible.
…
15.4 Number of reductions
…
call-by-name evaluation may require more reduction steps than call-by-value evaluation, in particular when an argument is used
more than once in the body of a function. More generally, we have the following property: arguments are evaluated precisely once
using call-by-value evaluation, but may be evaluated many times using call-by-name. Fortunately, the above efficiency problem
with call-by-name evaluation can easily be solved, by using pointers to indicate sharing of expressions during evaluation. That is,
rather than physically copying an argument if it is used many times in the body of a function, we simply keep one copy of the
argument and make many pointers to it. In this manner, any reductions that are performed on the argument are automatically
shared between each of the pointers to that argument. For example, using this strategy we have:
Graham Hutton
@haskellhutt
13. square (1 + 2)
= { applying square }
= { applying + }
= { applying ∗ }
3
That is, when applying the definition square n = n ∗ n in the first step, we keep a single copy of the argument expression 1+2, and
make two pointers to it. In this manner, when the expression 1+2 is reduced in the second step, both pointers in the expression
share the result. The use of call-by-name evaluation in conjunction with sharing is known as lazy evaluation. This is the
evaluation strategy that is used in Haskell, as a result of which Haskell is known as a lazy programming language. Being based
upon call-by-name evaluation, lazy evaluation has the property that it ensures that evaluation terminates as often as possible.
Moreover, using sharing ensures that lazy evaluation never requires more steps than call-by-value evaluation. The use of the
term "lazy" will be explained in the next section.
[graph diagrams: a node ( )∗ whose two arguments both point to the shared subexpression (1 + 2), and then to the shared value 3]
Graham Hutton
@haskellhutt
14. 15.5 Infinite structures
An additional property of call-by-name evaluation, and hence lazy evaluation, is that it allows what at first may seem impossible:
programming with infinite structures. We have already seen a simple example of this idea earlier in this chapter, in the form of the
evaluation of fst (0,inf) avoiding the production of the infinite structure 1 + (1 + (1 + ...)) defined by inf. More
interesting forms of behaviour occur when we consider infinite lists. For example, consider the following recursive definition:
ones :: [Int ]
ones = 1 : ones
That is, the list ones is defined as a single one followed by itself. As with inf, evaluating ones does not terminate, regardless of the
strategy used:
ones
= { applying ones }
1 : ones
= { applying ones }
1 : (1 : ones)
= { applying ones }
1 : (1 : (1 : ones))
= { applying ones }
…
In practice, evaluating ones using GHCi will produce a never-ending list of ones,
> ones
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, โฆ
Graham Hutton
@haskellhutt
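Although printing ones in full never terminates, any context that demands only a finite prefix does terminate; take is the standard way to observe this:

```haskell
ones :: [Int]
ones = 1 : ones   -- a single one followed by itself

main :: IO ()
main = print (take 5 ones)   -- prints [1,1,1,1,1]
```

The fold-like consumer (take here) decides how much of the potentially infinite structure actually gets built.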
15. Now consider the expression head ones, where head is the library function that selects the first element of a list, defined by head (x :
_ ) = x. Using call-by-value evaluation in this case also results in non-termination.
head ones
= { applying ones }
head (1 : ones)
= { applying ones }
head (1 : (1 : ones))
= { applying ones }
head (1 : (1 : (1 : ones)))
= { applying ones }
…
In contrast, using lazy evaluation (or call-by-name evaluation, as sharing is not required in this example), results in termination in two
steps:
head ones
= { applying ones }
head (1 : ones)
= { applying head }
1
This behaviour arises because lazy evaluation proceeds in a lazy manner as its name suggests, only evaluating arguments as and
when this is strictly necessary in order to produce results. For example, when selecting the first element of a list, the remainder of
the list is not required, and hence in head (1 : ones) the further evaluation of the infinite list ones is avoided. More generally, we
have the following property: using lazy evaluation, expressions are only evaluated as much as required by the context in which they
are used. Using this idea, we now see that under lazy evaluation ones is not an infinite list as such, but rather a potentially infinite
list, which is only evaluated as much as required by the context. This idea is not restricted to lists, but applies equally to any form of
data structure in Haskell.
Graham Hutton
@haskellhutt
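The same "only as much as required" property applies to other list functions over infinite lists; a couple of quick sketches (mine, not Hutton's):

```haskell
-- Each expression demands only a finite part of the infinite list [1 ..].
main :: IO ()
main = do
  print (head (filter even [1 ..]))   -- 2: stops at the first even number
  print (takeWhile (< 5) [1 ..])      -- [1,2,3,4]: stops at the first failure
```

In both cases evaluation stops as soon as the context has what it needs, exactly as with head ones above.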
16. After that introduction to lazy evaluation, we can now look at what
it means for a function to be strict.
In the next two slides we see how Richard Bird explains the concept.
17. Richard Bird
1.3 Values
The evaluator for a functional language prints a value by printing its canonical representation; this representation is dependent
both on the syntax given for forming expressions, and the precise definition of the reduction rules.
Some values have no canonical representations, for example function values. …
…
Other values may have reasonable representations, but no finite ones. For example, the number π …
…
For some expressions the process of reduction never stops and never produces any result. For example, the expression infinity
defined in the previous section leads to an infinite reduction sequence. Recall that the definition was
infinity :: Integer
infinity = infinity + 1
Such expressions do not denote well-defined values in the normal mathematical sense. As another example, assuming the operator
/ denotes numerical division, returning a number of type Float, the expression 1/0 does not denote a well-defined floating point
number. A request to evaluate 1/0 may cause the evaluator to respond with an error message, such as "attempt to divide by zero",
or go into an infinitely long sequence of calculations without producing any result.
In order that we can say that, without exception, every syntactically well-formed expression denotes a value, it is convenient to
introduce a special symbol ⊥, pronounced "bottom", to stand for the undefined value of a particular type. In particular, the value
of infinity is the undefined value ⊥ of type Integer, and 1/0 is the undefined value ⊥ of type Float. Hence we can assert that
1/0 = ⊥.
The computer is not expected to be able to produce the value ⊥. Confronted with an expression whose value is ⊥, the computer
may give an error message or it may remain perpetually silent. The former situation is detectable, but the second one is not (after
all, evaluation might have terminated normally the moment the programmer decided to abort it). Thus ⊥ is a special kind of value,
rather like the special value ∞ in mathematical calculus. Like special values in other branches of mathematics, ⊥ can be admitted to
18. Richard Bird
the universe of values only if we state precisely the properties it is required to have and its relationship with other values.
It is possible, conceptually at least, to apply functions to ⊥. For example, with the definitions three x = 3 and square x = x × x,
we have
? three infinity
3
? square infinity
{ Interrupted! }
In the first evaluation the value of infinity was not needed to complete the calculation, so it was never calculated. This is a
consequence of the lazy evaluation reduction strategy mentioned earlier. On the other hand, in the second evaluation the value of
infinity is needed to complete the computation: one cannot compute x × x without knowing the value of x. Consequently, the
evaluator goes into an infinite reduction sequence in an attempt to simplify infinity to normal form. Bored by waiting for an
answer that we know will never come, we hit the interrupt key.
If f ⊥ = ⊥, then f is said to be a strict function; otherwise it is nonstrict. Thus, square is a strict function, while three is
nonstrict. Lazy evaluation allows nonstrict functions to be defined; some other evaluation strategies do not.
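Bird's three/square example can be tried directly in GHCi, using undefined to play the role of ⊥ (a sketch; the function names are the ones used in the quoted text):

```haskell
-- three ignores its argument, so it is nonstrict:
-- three ⊥ = 3, and the argument is never evaluated.
three :: Integer -> Integer
three _ = 3

-- square needs its argument's value, so it is strict:
-- square ⊥ = ⊥ (here, evaluating square undefined raises an error).
square :: Integer -> Integer
square x = x * x

main :: IO ()
main = print (three undefined)  -- prints 3; square undefined would not return
```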
19. And here is how Richard Bird describes the
fact that &&, which he calls ∧, is strict.
Richard Bird
Two basic functions on Booleans are the operations of conjunction, denoted by the binary operator ∧, and disjunction, denoted by
∨. These operations can be defined by
(∧), (∨) :: Bool → Bool → Bool
False ∧ x = False
True ∧ x = x
False ∨ x = x
True ∨ x = True
The definitions use pattern matching on the left-hand argument. For example, in order to simplify expressions of the form e1 ∧ e2,
the computer first reduces e1 to normal form. If the result is False then the first equation for ∧ is used, so the computer
immediately returns False. If e1 reduces to True, then the second equation is used, so e2 is evaluated. It follows from this
description of how pattern matching works that
⊥ ∧ False = ⊥
False ∧ ⊥ = False
True ∧ ⊥ = ⊥
Thus ∧ is strict in its left-hand argument, but not strict in its right-hand argument. Analogous remarks apply to ∨.
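The three equations Bird lists for ∧ can be checked against the Prelude's (&&), again using undefined for ⊥ (a sketch):

```haskell
import Control.Exception (SomeException, evaluate, try)

-- (&&) pattern matches on its left argument, so it is strict on the
-- left and nonstrict on the right, exactly as Bird describes for ∧.
main :: IO ()
main = do
  print (False && undefined)  -- False: the right argument is never forced
  -- undefined && False is ⊥: forcing it raises an exception.
  r <- try (evaluate (undefined && False)) :: IO (Either SomeException Bool)
  putStrLn (case r of
    Left _  -> "undefined && False is bottom"
    Right b -> show b)
```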
20. Now that we have a good understanding of the concepts of lazy evaluation and
strictness, we can revisit the two examples in which Tony Morris showed that if
a binary function is nonstrict in its second argument then it is sometimes
possible to successfully do a right fold of the function over an infinite list.
21. heador 99 infinity
= { definition of heador }
foldr const 99 infinity
= { definition of infinity }
foldr const 99 (1 : infinity)
= { definition of foldr }
const 1 (foldr const 99 infinity)
= { definition of const }
1
const :: α → β → α
const x y = x
foldr :: (α → β → β) → β → [α] → β
foldr f e [] = e
foldr f e (x:xs) = f x (foldr f e xs)
heador :: α → [α] → α
heador e = foldr const e
infinity :: [Integer]
infinity = 1 : infinity
λ> infinity = 1 : infinity
λ> const x y = x
λ> heador a = foldr const a
λ> heador 99 (1 : undefined)
1
λ> heador 99 infinity
1
λ> const 99 undefined
99
λ> infinity
[1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1 etc, etc
Lazy evaluation causes just enough of infinity
to be evaluated to allow foldr to be invoked.
Neither const nor heador is strict
in its second argument.
@philip_schwarz
22. conjunct :: [Bool] → Bool
conjunct = foldr (&&) True
conjunct infinity
= { definition of conjunct }
foldr (&&) True infinity
= { definition of infinity }
foldr (&&) True (False : infinity)
= { definition of foldr }
False && (foldr (&&) True infinity)
= { definition of && }
False
(&&) :: Bool → Bool → Bool
False && x = False
True && x = x
λ> False && undefined
False
λ> :{
(&&) False x = False
(&&) True x = x
:}
λ> infinity = False : infinity
λ> conjunct = foldr (&&) True
foldr :: (α → β → β) → β → [α] → β
foldr f e [] = e
foldr f e (x:xs) = f x (foldr f e xs)
infinity :: [Bool]
infinity = False : infinity
λ> conjunct infinity
False
λ> conjunct (False:undefined)
False
Lazy evaluation causes just enough of infinity
to be evaluated to allow foldr to be invoked.
λ> infinity
[False,False,False,False,False,False,False,False
,False,False,False,False,False,False, etc, etc
(&&) is strict in its second argument
only when its first argument is True.
λ> infinity = True : infinity
λ> conjunct infinity
{ Interrupted }
23. Right Fold Scenario: Code / Result / Approximate Duration
• foldr (&&) True (replicate 1,000,000,000 True) (list size: huge; (&&) nonstrict in 2nd argument when 1st is False)
Result: True in about 38 seconds. GHC memory: initial 75.3 MB, final 75.3 MB.
• foldr (&&) True (replicate 1,000,000,000 False) (list size: huge; (&&) nonstrict in 2nd argument when 1st is False)
Result: False in about 0 seconds. GHC memory: initial 75.3 MB, final 75.3 MB.
• trues = True : trues; foldr (&&) True trues (list size: infinite; (&&) nonstrict in 2nd argument when 1st is False)
Result: does not terminate; keeps going; I stopped it after 3 min. GHC memory: initial 75.3 MB, final 75.3 MB.
• falses = False : falses; foldr (&&) True falses (list size: infinite; (&&) nonstrict in 2nd argument when 1st is False)
Result: False in about 0 seconds.
If we right fold (&&) over a list then the folding ends as soon as a False is encountered, e.g. if the first
element of the list is False then the folding ends immediately. If the list we are folding over is infinite,
then if no False is encountered the folding never ends. Note that because of how (&&) works, there is
no need to keep building a growing intermediate expression during the fold: memory usage is constant.
(&&) :: Bool → Bool → Bool
False && x = False
True && x = x
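The constant-memory, early-termination behaviour described above can be sketched with a few small experiments (list sizes scaled down here so the program finishes quickly):

```haskell
-- A right fold with (&&) never builds a growing intermediate
-- expression: each step either returns False immediately or hands
-- control to the fold of the rest, so the fold stops at the first
-- False, even inside an infinite list.
falses :: [Bool]
falses = False : falses  -- an infinite list of False

main :: IO ()
main = do
  print (foldr (&&) True (replicate 1000000 True))  -- True
  print (foldr (&&) True (False : undefined))       -- False; tail untouched
  print (foldr (&&) True falses)                    -- False, despite infinity
```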
24. Right Fold Scenario (continued): Code / Result / Approximate Duration
• foldr (+) 0 [1..10,000,000] (list size: large; (+) strict in both arguments)
Result: 50000005000000 in about 2 seconds.
• foldr (+) 0 [1..100,000,000] (list size: huge; (+) strict in both arguments)
Result: *** Exception: stack overflow after about 3 seconds.
• foldr (+) 0 [1..] (list size: infinite; (+) strict in both arguments)
Result: *** Exception: stack overflow after about 3 seconds.
Let's contrast that with what happens when the function with which we are doing a right fold is strict in both of its arguments.
E.g. we are able to successfully right fold (+) over a large list, but if the list is huge or outright infinite, then folding fails with a stack overflow
error, because the growing intermediate expression that gets built, and which represents the sum of all the list's elements, eventually exhausts
the available stack memory.
25. Left Fold Scenario: Code / Result / Approximate Duration
• foldl (+) 0 [1..10,000,000] (list size: large; (+) strict in both arguments)
Result: 50000005000000 in about 4 seconds. GHC memory: initial 27.3 MB, final 1.10 GB.
• foldl (+) 0 [1..100,000,000] (list size: huge; (+) strict in both arguments)
Result: *** Exception: stack overflow after about 3 seconds. GHC memory: initial 27.3 MB, final 10 GB.
• foldl (+) 0 [1..] (list size: infinite; (+) strict in both arguments)
Result: does not terminate; keeps going; I stopped it after 3 min. GHC memory: initial 27.3 MB, final 22 GB.
As we know, the reason why on the previous slide we saw foldr encounter a stack overflow exception when processing an infinite list, or a
sufficiently long list, is that foldr is not tail recursive.
So just for completeness, let's go through the same scenarios as on the previous slide, but this time using foldl rather than foldr. Since foldl
behaves like a loop, it should not encounter any stack overflow exception due to processing an infinite list or a sufficiently long list.
@philip_schwarz
26. That was a bit of a surprise!
When the list is infinite, foldl does not terminate, which is what we expect, given that a left fold is like a loop.
But surprisingly, when the list is finite yet sufficiently large, foldl encounters a stack overflow exception!
How can that be? Shouldn't the fact that foldl is tail recursive guarantee that it is stack safe? If you need a refresher on tail recursion
then see the next three slides, otherwise you can just skip them.
Also, note how the larger the list, the more heap space foldl uses. Why is foldl using so much heap space? Isn't it supposed to only
require space for an accumulator that holds an intermediate result? Again, if you need a refresher on accumulators and intermediate
results then see the next three slides.
27. 2.2.3 Tail recursion
The code of lengthS will fail for large enough sequences. To see why, consider an inductive definition of the .length method as a
function lengthS:
def lengthS(s: Seq[Int]): Int =
if (s.isEmpty) 0
else 1 + lengthS(s.tail)
scala> lengthS((1 to 1000).toList)
res0: Int = 1000
scala> val s = (1 to 100_000).toList
s : List[Int] = List(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22,
23, 24, 25, 26, 27, 28, 29, 30, 31, 32, ...
scala> lengthS(s)
java.lang.StackOverflowError
at .lengthS(<console>:12)
at .lengthS(<console>:12)
at .lengthS(<console>:12)
at .lengthS(<console>:12)
...
The problem is not due to insufficient main memory: we are able to compute and hold in memory the entire sequence s. The
problem is with the code of the function lengthS. This function calls itself inside the expression 1 + lengthS(...). So we can
visualize how the computer evaluates this code:
lengthS(Seq(1, 2, ..., 100000))
= 1 + lengthS(Seq(2, ..., 100000))
= 1 + (1 + lengthS(Seq(3, ..., 100000)))
= ...
Sergei Winitzki
sergei-winitzki-11a6431
28. The function body of lengthS will evaluate the inductive step, that is, the "else" part of the "if/else", about 100_000 times. Each
time, the sub-expression with nested computations 1+(1+(...)) will get larger.
This intermediate sub-expression needs to be held somewhere in memory, until at some point the function body goes into the
base case and returns a value. When that happens, the entire intermediate sub-expression will contain about 100_000 nested
function calls still waiting to be evaluated.
This sub-expression is held in a special area of memory called stack memory, where the not-yet-evaluated nested function calls
are held in the order of their calls, as if on a "stack". Due to the way computer memory is managed, the stack memory has a fixed
size and cannot grow automatically. So, when the intermediate expression becomes large enough, it causes an overflow of the
stack memory and crashes the program.
A way to avoid stack overflows is to use a trick called tail recursion. Using tail recursion means rewriting the code so that all
recursive calls occur at the end positions (at the "tails") of the function body. In other words, each recursive call must be itself the
last computation in the function body, rather than placed inside other computations. Here is an example of tail-recursive code:
def lengthT(s: Seq[Int], res: Int): Int =
if (s.isEmpty)
res
else
lengthT(s.tail, 1 + res)
In this code, one of the branches of the if/else returns a fixed value without doing any recursive calls, while the other branch
returns the result of a recursive call to lengthT(...). In the code of lengthT, recursive calls never occur within any sub-
expressions.
def lengthS(s: Seq[Int]): Int =
if (s.isEmpty) 0
else 1 + lengthS(s.tail)
lengthS(Seq(1, 2, ..., 100000))
= 1 + lengthS(Seq(2, ..., 100000))
= 1 + (1 + lengthS(Seq(3, ..., 100000)))
= ...
Sergei Winitzki
sergei-winitzki-11a6431
29. It is not a problem that the recursive call to lengthT has some sub-expressions such as 1 + res as its arguments, because all these
sub-expressions will be computed before lengthT is recursively called.
The recursive call to lengthT is the last computation performed by this branch of the if/else. A tail-recursive function can have
many if/else or match/case branches, with or without recursive calls; but all recursive calls must be always the last expressions
returned.
The Scala compiler has a feature for checking automatically that a function's code is tail-recursive: the @tailrec annotation. If a
function with a @tailrec annotation is not tail-recursive, or is not recursive at all, the program will not compile.
@tailrec def lengthT(s: Seq[Int], res: Int): Int =
if (s.isEmpty) res
else lengthT(s.tail, 1 + res)
Let us trace the evaluation of this function on an example:
lengthT(Seq(1,2,3), 0)
= lengthT(Seq(2,3), 1 + 0) // = lengthT(Seq(2,3), 1)
= lengthT(Seq(3), 1 + 1) // = lengthT(Seq(3), 2)
= lengthT(Seq(), 1 + 2) // = lengthT(Seq(), 3)
= 3
All sub-expressions such as 1 + 1 and 1 + 2 are computed before recursive calls to lengthT. Because of that, sub-expressions
do not grow within the stack memory. This is the main benefit of tail recursion.
How did we rewrite the code of lengthS to obtain the tail-recursive code of lengthT? An important difference between lengthS
and lengthT is the additional argument, res, called the accumulator argument. This argument is equal to an intermediate result of
the computation. The next intermediate result (1 + res) is computed and passed on to the next recursive call via the
accumulator argument. In the base case of the recursion, the function now returns the accumulated result, res, rather than 0,
because at that time the computation is finished. Rewriting code by adding an accumulator argument to achieve tail recursion is
called the accumulator technique or the "accumulator trick".
def lengthS(s: Seq[Int]): Int =
if (s.isEmpty) 0
else 1 + lengthS(s.tail)
Sergei Winitzki
sergei-winitzki-11a6431
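Winitzki's Scala example translates directly to Haskell; here is a sketch (the names lengthS and lengthT mirror the Scala ones). Note that in Haskell the accumulator in lengthT is itself built up lazily unless forced, which is precisely the foldl problem examined next:

```haskell
-- Naive inductive length, like Winitzki's lengthS: builds the
-- nested expression 1 + (1 + (...)) before any addition happens.
lengthS :: [a] -> Int
lengthS []     = 0
lengthS (_:xs) = 1 + lengthS xs

-- Accumulator version, like lengthT: the recursive call is the last
-- computation, with the intermediate result passed along in res.
lengthT :: [a] -> Int -> Int
lengthT []     res = res
lengthT (_:xs) res = lengthT xs (1 + res)

main :: IO ()
main = do
  print (lengthS [1..1000 :: Int])    -- 1000
  print (lengthT [1..1000 :: Int] 0)  -- 1000
```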
30. It turns out that in Haskell, a left fold done using foldl does not use constant space,
but rather an amount of space that is proportional to the length of the list!
See the next slide for how Richard Bird describes the problem.
31. Richard Bird
7.5 Controlling Space
Consider reduction of the term sum [1 .. 1000], where sum = foldl (+) 0:
sum [1 .. 1000]
= foldl (+) 0 [1 .. 1000]
= foldl (+) (0+1) [2 .. 1000]
= foldl (+) ((0+1)+2) [3 .. 1000]
⋮
= foldl (+) (⋯((0+1)+2)+ ⋯ +1000) [ ]
= (⋯((0+1)+2)+ ⋯ +1000)
= 500500
The point to notice is that in computing sum [1 .. n] by the outermost reduction the expressions grow in size proportional to n. On
the other hand, if we use a judicious mixture of outermost and innermost reduction steps, then we obtain the following reduction
sequence:
sum [1 .. 1000]
= foldl (+) 0 [1 .. 1000]
= foldl (+) (0+1) [2 .. 1000]
= foldl (+) 1 [2 .. 1000]
= foldl (+) (1+2) [3 .. 1000]
= foldl (+) 3 [3 .. 1000]
⋮
= foldl (+) 500500 [ ]
= 500500
The maximum size of any expression in this sequence is bounded by a constant. In short, reducing to normal form by purely
outermost reduction requires Ω(n) space, while a combination of innermost and outermost reduction requires only O(1) space.
32. In the case of a function that is strict in its first argument, however, it is possible
to do a left fold that uses constant space by using a strict variant of foldl.
See the next slide for how Richard Bird describes the strict version of foldl.
33. Richard Bird
7.5.1 Head-normal form and the function strict
Reduction order may be controlled by use of a special function strict. A term of the form strict f e is reduced first by reducing e
to head-normal form, and then applying f. An expression e is in head-normal form if e is a function or if e takes the form of a
datatype constructor applied to zero or more arguments. Every expression in normal form is in head-normal form, but not vice-
versa. For example, e1 : e2 is in head-normal form but is in normal form only when e1 and e2 are both in normal form. Similarly,
Fork e1 e2 and (e1, e2) are in head-normal form but are not in normal form unless e1 and e2 are in normal form. In the expression
strict f e, the term e will itself be reduced by outermost reduction, except, of course, if further calls of strict appear while
reducing e.
As a simple example, let succ x = x + 1. Then
succ (succ (8 × 5))
= succ ((8 × 5) + 1)
= ((8 × 5) + 1) + 1
= (40 + 1) + 1
= 41 + 1
= 42
On the other hand,
strict succ (strict succ (8 × 5))
= strict succ (strict succ 40)
= strict succ (succ 40)
= strict succ (40 + 1)
= strict succ 41
= succ 41
= 41 + 1
= 42
Both cases perform the same reduction steps, but in a different order. Currying applies to strict as to anything else. From this it
34. follows that if f is a function of three arguments, writing strict (f e1) e2 e3 causes the second argument e2 to be reduced early, but
not the first or third.
Given this, we can define a function sfoldl, a strict version of foldl, as follows:
sfoldl (⊕) a [ ] = a
sfoldl (⊕) a (x:xs) = strict (sfoldl (⊕)) (a ⊕ x) xs
With sum = sfoldl (+) 0 we now have
sum [1 .. 1000]
= sfoldl (+) 0 [1 .. 1000]
= strict (sfoldl (+)) (0+1) [2 .. 1000]
= sfoldl (+) 1 [2 .. 1000]
= strict (sfoldl (+)) (1+2) [3 .. 1000]
= sfoldl (+) 3 [3 .. 1000]
⋮
= sfoldl (+) 500500 [ ]
= 500500
This reduction sequence evaluates sum in constant space.
…
The operational definition of strict can be re-expressed in the following way:
strict f x = if x = ⊥ then ⊥ else f x
Recall that a function f is said to be strict if f ⊥ = ⊥. It follows from the above equation that f = strict f if and only if f is a strict
function. To see this, just consider the values of f x and strict f x in the two cases x = ⊥ and x ≠ ⊥. This explains the name strict.
Richard Bird
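Bird's strict and sfoldl are not standard Haskell, but they can be sketched with the primitive seq, which evaluates its first argument to head-normal form before returning its second (an assumption of this sketch: seq adequately plays the role of Bird's strict here):

```haskell
-- A sketch of Bird's strict: force x before applying f.
strict :: (a -> b) -> a -> b
strict f x = x `seq` f x

-- sfoldl in the style of Bird's definition: the accumulator f a x
-- is forced at every step, so no chain of thunks builds up.
sfoldl :: (b -> a -> b) -> b -> [a] -> b
sfoldl _ a []     = a
sfoldl f a (x:xs) = strict (sfoldl f) (f a x) xs

main :: IO ()
main = print (sfoldl (+) 0 [1..1000 :: Integer])  -- 500500
```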
35. Furthermore, if f is strict, but not everywhere ⊥, and e ≠ ⊥, then reduction of f e eventually entails
reduction of e. Thus, if f is a strict function, evaluation of f e and strict f e perform the same reduction
steps, though possibly in a different order. In other words, when f is strict, replacing it by strict f does
not change the meaning or the asymptotic time required to apply it, although it may change the space
required by the computation.
It is easy to show that if ⊥ ⊕ x = ⊥ for every x, then foldl (⊕) ⊥ xs = ⊥ for every finite list xs. In other
words, if ⊕ is strict in its left argument, then foldl (⊕) is strict, and so is equivalent to
strict (foldl (⊕)), and hence also equivalent to sfoldl (⊕). It follows that replacing foldl by sfoldl in
the definition of sum is valid, and the same replacement is valid whenever foldl is applied to a binary
operation that is strict in its first argument.
Richard Bird
It turns out, as we'll see later, that if ⊕ is strict in both arguments, and can be computed in O(1) time and O(1) space,
then instead of computing foldl (⊕) e xs, which requires O(n) time and O(n) space to compute (where n is the length
of xs), we can compute sfoldl (⊕) e xs which, while still requiring O(n) time, only requires O(1) space.
sum [1 .. 1000]
= foldl (+) 0 [1 .. 1000]
= foldl (+) (0+1) [2 .. 1000]
= foldl (+) ((0+1)+2) [3 .. 1000]
⋮
= foldl (+) (⋯((0+1)+2)+ ⋯ +1000) [ ]
= (⋯((0+1)+2)+ ⋯ +1000)
= 500500
We saw earlier that the reason why foldl requires O(n) space is that it builds a growing
intermediate expression that only gets reduced once the whole list has been traversed. For
this reason, foldl can't possibly be using an accumulator for the customary purpose of
maintaining a running intermediate result so that only constant space is required.
Also, where is that intermediate expression stored? While it makes sense to store it in heap memory, why is it that earlier, when we
computed foldl (+) 0 [1..100,000,000], it resulted in a stack overflow exception? It looks like foldl can't possibly be using tail
recursion for the customary purpose of avoiding stack overflows. The next two slides begin to answer these questions.
@philip_schwarz
36. Left Folds, Laziness, and Space Leaks
To keep our initial discussion simple, we use foldl throughout most of this section. This is convenient for testing, but we will
never use foldl in practice. The reason has to do with Haskell's nonstrict evaluation. If we apply foldl (+) 0 [1,2,3], it
evaluates to the expression (((0 + 1) + 2) + 3). We can see this occur if we revisit the way in which the function gets
expanded:
foldl (+) 0 (1:2:3:[])
== foldl (+) (0 + 1) (2:3:[])
== foldl (+) ((0 + 1) + 2) (3:[])
== foldl (+) (((0 + 1) + 2) + 3) []
== (((0 + 1) + 2) + 3)
The final expression will not be evaluated to 6 until its value is demanded. Before it is evaluated, it must be stored as a
thunk. Not surprisingly, a thunk is more expensive to store than a single number, and the more complex the thunked
expression, the more space it needs. For something cheap such as arithmetic, thunking an expression is more
computationally expensive than evaluating it immediately. We thus end up paying both in space and in time.
When GHC is evaluating a thunked expression, it uses an internal stack to do so. Because a thunked expression could
potentially be infinitely large, GHC places a fixed limit on the maximum size of this stack. Thanks to this limit, we can try
a large thunked expression in ghci without needing to worry that it might consume all the memory:
ghci> foldl (+) 0 [1..1000]
500500
From looking at this expansion, we can surmise that this creates a thunk that consists of 1,000 integers and 999 applications
of (+). That's a lot of memory and effort to represent a single number! With a larger expression, although the size is still
modest, the results are more dramatic:
ghci> foldl (+) 0 [1..1000000]
*** Exception: stack overflow
On small expressions, foldl will work correctly but slowly, due to the thunking overhead that it incurs.
Bryan OโSullivan
John Goerzen
Donald Bruce Stewart
37. We refer to this invisible thunking as a space leak, because our code is operating normally, but it is using far more memory
than it should.
On larger expressions, code with a space leak will simply fail, as above. A space leak with foldl is a classic roadblock for new
Haskell programmers. Fortunately, this is easy to avoid.
The Data.List module defines a function named foldl' that is similar to foldl, but does not build up thunks. The difference in
behavior between the two is immediately obvious:
ghci> foldl (+) 0 [1..1000000]
*** Exception: stack overflow
ghci> :module +Data.List
ghci> foldl' (+) 0 [1..1000000]
500000500000
Due to foldl's thunking behavior, it is wise to avoid this function in real programs. Even if it doesn't fail outright, it will be
unnecessarily inefficient. Instead, import Data.List and use foldl'.
Bryan OโSullivan
John Goerzen
Donald Bruce Stewart
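The difference is easy to see for yourself; a minimal sketch using the same sum that overflowed with plain foldl:

```haskell
import Data.List (foldl')

-- foldl' forces the accumulator at each step, so no chain of
-- thunks (((0+1)+2)+...) is built up: the fold runs in constant space.
main :: IO ()
main = print (foldl' (+) 0 [1..1000000 :: Integer])  -- 500000500000
```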
That explanation clears up things a lot. The next
slide reinforces it and complements it nicely.
So the sfoldl function described
by Richard Bird is called foldl'.
38. Performance/Strictness https://wiki.haskell.org/Performance/Strictness
Haskell is a non-strict language, and most implementations use a strategy called laziness to run your program. Basically laziness == non-
strictness + sharing.
Laziness can be a useful tool for improving performance, but more often than not it reduces performance by adding a constant overhead to
everything. Because of laziness, the compiler can't evaluate a function argument and pass the value to the function, it has to record the
expression in the heap in a suspension (or thunk) in case it is evaluated later. Storing and evaluating suspensions is costly, and unnecessary if
the expression was going to be evaluated anyway.
Strictness analysis
Optimising compilers like GHC try to reduce the cost of laziness using strictness analysis, which attempts to determine which function arguments
are always evaluated by the function, and hence can be evaluated by the caller instead…
The common case of misunderstanding of strictness analysis is when folding (reducing) lists. If this program
main = print (foldl (+) 0 [1..1000000])
is compiled in GHC without "-O" flag, it uses a lot of heap and stack… Look at the definition from the standard library:
foldl :: (a -> b -> a) -> a -> [b] -> a
foldl f z0 xs0 = lgo z0 xs0
where lgo z [] = z
lgo z (x:xs) = lgo (f z x) xs
lgo, instead of adding elements of the long list, creates a thunk for (f z x). z is stored within that thunk, and z is a thunk also, created
during the previous call to lgo. The program creates the long chain of thunks. Stack is bloated when evaluating that chain.
With "-O" flag GHC performs strictness analysis, then it knows that lgo is strict in z argument, therefore thunks are not needed and are
not created.
39. So the reason why we see all that memory consumption in the first two scenarios below is that foldl creates a huge chain of thunks that is
the intermediate expression representing the sum of all the list's elements.
The reason why the stack overflows in the second scenario is that the stack is not large enough to permit evaluation of the final expression.
The reason why the fold does not terminate in the third scenario is that since the list is infinite, foldl never finishes building the intermediate
expression. The reason it does not overflow the stack is that it doesn't even use the stack to evaluate the final expression, since it never
finishes building the expression.
Left Fold Scenario: Code / Result / Approximate Duration
• foldl (+) 0 [1..10,000,000] (list size: large; (+) strict in both arguments)
Result: 50000005000000 in about 4 seconds. GHC memory: initial 27.3 MB, final 1.10 GB.
• foldl (+) 0 [1..100,000,000] (list size: huge; (+) strict in both arguments)
Result: *** Exception: stack overflow after about 3 seconds. GHC memory: initial 27.3 MB, final 10 GB.
• foldl (+) 0 [1..] (list size: infinite; (+) strict in both arguments)
Result: does not terminate; keeps going; I stopped it after 3 min. GHC memory: initial 27.3 MB, final 22 GB.
40. In the next slide we see Haskell expert Michael Snoyman make the
point that foldl is broken and that foldl' is the one true left fold.
@philip_schwarz
41. foldl
Duncan Coutts already did this one. foldl is broken. It's a bad function. Left folds are supposed to be strict, not lazy. End of story. Goodbye. Too many
space leaks have been caused by this function. We should gut it out entirely.
But wait! A lazy left fold makes perfect sense for a Vector! Yeah, no one ever meant that. And the problem isn't the fact that this function exists. It's
the name. It has taken the hallowed spot of the One True Left Fold. I'm sorry, the One True Left Fold is strict.
Also, side note: we can't raise linked lists to a position of supreme power within our ecosystem and then pretend like we actually care about vectors. We
don't, we just pay lip service to them. Until we fix the wart which is overuse of lists, foldl is only ever used on lists.
OK, back to this bad left fold. This is all made worse by the fact that the true left fold, foldl', is not even exported by the Prelude. We Haskellers are a lazy
bunch. And if you make me type in import Data.List (foldl'), I just won't. I'd rather have a space leak than waste precious time typing in those characters.
Alright, so what should you do? Use an alternative prelude that doesn't export a bad function, and does export a good function. If you really, really want
a lazy left fold: add a comment, or use a function named foldlButLazyIReallyMeanIt. Otherwise I'm going to fix your code during my code review.
42. We said earlier that if ⊕ is strict in both arguments, and can be computed in O(1) time and O(1) space, then instead of computing
foldl (⊕) e xs, which requires O(n) time and O(n) space to compute (where n is the length of xs), we can compute sfoldl (⊕) e xs which,
while still requiring O(n) time, only requires O(1) space.
This is explained by Richard Bird in the next slide, in which he makes some other very useful observations as he revisits fold.
Remember earlier, when we looked in more detail at Tony Morris' example in which he folds an infinite list of booleans using (&&)? That is also
covered in the next slide.
43. 7.5.2 Fold revisited
The first duality theorem states that if ⊕ is associative with identity e, then
foldr (⊕) e xs = foldl (⊕) e xs
for all finite lists xs. On the other hand, the two expressions may have different time and space complexities. Which one to use
depends on the properties of ⊕.
First, suppose that ⊕ is strict in both arguments, and can be computed in O(1) time and O(1) space. Examples that fall into this
category are (+) and (×). In this case it is not hard to verify that foldr (⊕) e and foldl (⊕) e both require O(n) time and
O(n) space to compute on a list of length n. However, the same argument used above for sum generalises to show that, in this case,
foldl may safely be replaced by sfoldl. While sfoldl (⊕) e still requires O(n) time to evaluate on a list of length n, it only
requires O(1) space. So in this case, sfoldl is the clear winner.
If ⊕ does not satisfy the above properties, then choosing a winner may not be so easy. A good rule of thumb, though, is that if ⊕ is
nonstrict in either argument, then foldr is usually more efficient than foldl. We saw one example in section 7.2: the function
concat is more efficiently computed using foldr than using foldl. Observe that while ⧺ is strict in its first argument, it is not strict
in its second.
Another example is provided by the function and = foldr (∧) True. Like ⧺, the operator ∧ is strict in its first argument, but
nonstrict in its second. In particular, False ∧ x returns without evaluating x. Assume we are given a list xs of n boolean values and
k is the first value for which xs !! k = False. Then evaluation of foldr (∧) True xs takes O(k) steps, whereas sfoldl (∧) True xs
requires Ω(n) steps. Again, foldr is a better choice.
To summarise: for functions such as + or ×, that are strict in both arguments and can be computed in constant time and space,
sfoldl is more efficient. But for functions such as ∧ and ⧺, that are nonstrict in some argument, foldr is often more efficient.
Richard Bird
44. Here is how Richard Bird defined a strict left fold:
sfoldl (⊕) a [ ] = a
sfoldl (⊕) a (x:xs) = strict (sfoldl (⊕)) (a ⊕ x) xs
As we saw earlier, these days the strict left fold function is called foldl'. How is it defined?
To answer that, we conclude this slide deck by going through sections of Graham Hutton's explanation of strict application.
@philip_schwarz
45. 15.7 Strict Application
Haskell uses lazy evaluation by default, but also provides a special strict version of function application, written as $!, which can
sometimes be useful. Informally, an expression of the form f $! x behaves in the same way as the normal functional application f
x, except that the top-level of evaluation of the argument expression x is forced before the function f is applied.
…
In Haskell, strict application is mainly used to improve the space performance of programs. For example, consider a function sumwith
that calculates the sum of a list of integers using an accumulator value:
sumwith :: Int -> [Int] -> Int
sumwith v [] = v
sumwith v (x:xs) = sumwith (v+x) xs
Then, using lazy evaluation, we have:
sumwith 0 [1,2,3]
= { applying sumwith }
sumwith (0+1) [2,3]
= { applying sumwith }
sumwith ((0+1)+2) [3]
= { applying sumwith }
sumwith (((0+1)+2)+3) []
= { applying sumwith }
((0+1)+2)+3
= { applying the first + }
(1+2)+3
= { applying the first + }
3+3
= { applying + }
6
Graham Hutton
@haskellhutt
46. Note that the entire summation ((0+1)+2)+3 is constructed before any of the component additions are actually performed.
More generally, sumwith will construct a summation whose size is proportional to the number of integers in the original list, which
for a long list may require a significant amount of space. In practice, it would be preferable to perform each addition as soon as it is
introduced, to improve the space performance of the function.
This behaviour can be achieved by redefining sumwith using strict application, to force evaluation of its accumulator value:
sumwith v [] = v
sumwith v (x:xs) = (sumwith $! (v+x)) xs
For example, we now have:
sumwith 0 [1,2,3]
= { applying sumwith }
(sumwith $! (0+1)) [2,3]
= { applying + }
(sumwith $! 1) [2,3]
= { applying $! }
sumwith 1 [2,3]
= { applying sumwith }
(sumwith $! (1+2)) [3]
= { applying + }
(sumwith $! 3) [3]
= { applying $! }
sumwith 3 [3]
= { applying sumwith }
(sumwith $! (3+3)) []
= { applying + }
(sumwith $! 6) []
Graham Hutton
@haskellhutt
47. = { applying $! }
sumwith 6 []
= { applying sumwith }
6
This evaluation requires more steps than previously, due to the additional overhead of using strict application, but now performs
each addition as soon as it is introduced, rather than constructing a large summation. Generalising from the above example, the
library Data.Foldable provides a strict version of the higher-order library function foldl that forces evaluation of its accumulator
prior to processing the tail of the list:
foldl' :: (a -> b -> a) -> a -> [b] -> a
foldl' f v [] = v
foldl' f v (x:xs) = ((foldl' f) $! (f v x)) xs
For example, using this function we can define sumwith = foldl' (+). It is important to note, however, that strict application is not a
silver bullet that automatically improves the space behaviour of Haskell programs. Even for relatively simple examples, the use of
strict application is a specialist topic that requires careful consideration of the behaviour of lazy evaluation.
Graham Hutton
@haskellhutt
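Hutton's closing definition can be tried out directly with the library foldl' (a minimal sketch; the import from Data.List is my choice, and Data.Foldable exports foldl' as well):

```haskell
-- sumwith defined via the library's strict left fold, as in the text.
import Data.List (foldl')

sumwith :: Int -> [Int] -> Int
sumwith = foldl' (+)

main :: IO ()
main = print (sumwith 0 [1, 2, 3])  -- prints 6
```

Because foldl' forces the accumulator at each step, this version runs in constant space even over a long input list.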
48. That's all for Part 5. I hope you found that useful.
See you in Part 6.