This document discusses recursion versus iteration in Lisp and Common Lisp. It notes that the original Lisp language was purely functional and relied on recursion, since it had no local variables and no iteration. While recursion is more elegant and in some cases leads to less code, it is also harder to understand and debug, and less efficient due to function-call overhead. Recursion is nevertheless necessary for some problems, such as tree and graph traversals. Common Lisp makes recursion easier to work with through its run-time stack and debugger. Several examples of recursive functions are provided, including ones for lists, trees, searching, and factorials. Tail recursion is discussed as a way to make recursion more efficient.
recursion.ppt
1. Recursion vs. Iteration
• The original Lisp language was truly a functional
language:
– Everything was expressed as functions
– No local variables
– No iteration
• You had to use recursion to get around these problems
– Although they weren’t considered problems, functions are
usually described recursively, and since the vision of Lisp was
to model mathematical functions in a language, this was
appropriate
• But recursion is hard, so why should we use it in CL?
• Can’t we just use iteration?
2. Should we avoid recursion?
• Recursion is hard:
– It is hard to conceptualize how a problem can be solved
recursively
– Once implemented, it is often very difficult to debug a recursive
program
– When reading recursive code, it is sometimes hard to really see
how it solves the problem
• Recursion is inefficient:
– Every time we recurse, we make another function call; this
results in manipulating the run-time stack in memory, passing
parameters, and transferring control
• So recursion costs us both time and memory
– Consider the example on the next slide which compares
iterative and recursive factorial solutions
3. A Simple Comparison
(defun ifact (n)
  (let ((product 1))
    (do ((j 0 (+ 1 j))) ((= j n))
      (setf product (* product (+ j 1))))
    product))
(defun rfact (n)
  (if (< n 1) 1 (* n (rfact (- n 1)))))
For ifact: the function is called once and there are two local variables
(product and j). Each loop iteration does a comparison and, if the
terminating condition is not yet true, branches back up to the top.
Total instructions: n * 5 + 3
For rfact: we have less code, no local variables (only a parameter), and fewer
total instructions in the code; each call does a comparison and then either
returns 1 or performs 3 function calls (-, rfact, and * in that order)
But we aren’t seeing the stack manipulations, which require pushing a new n and
space for the function’s return value, updating the stack pointer register,
and popping off the return value and n when done
4. Why Recursion?
• If recursion is harder to understand and less efficient,
why use it?
– It leads to elegant solutions – less code, less need for local
variables, etc
– If we can define a function mathematically, the solution is easy
to codify
– Some problems require recursion (see the tree-traversal sketch after this slide)
• Tree traversals
• Graph traversals
• Search problems
• Some sorting algorithms (quicksort, mergesort)
– Note: strictly speaking this is not true; we can accomplish a solution
without recursion by using iteration and an explicit stack, but in effect we
would be simulating recursion, so why not use it?
• In some cases, a recursive algorithm has a lower
computational complexity than one written without recursion
– Compare Insertion Sort to Merge Sort, for example
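To make the tree-traversal case concrete, here is a minimal added sketch
(not from the slides), assuming the hypothetical representation that a
binary tree is either nil or a list of the form (value left right):
(defun tree-sum (tree)
  (if (null tree)
      0                              ; base case: an empty tree sums to 0
      (+ (first tree)                ; the value stored at this node
         (tree-sum (second tree))    ; recurse into the left subtree
         (tree-sum (third tree)))))  ; recurse into the right subtree
;; (tree-sum '(1 (2 nil nil) (3 (4 nil nil) nil))) => 10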
5. Lisp is Set Up For Recursion
• As stated earlier, the original intention of Lisp
was to model mathematical functions so the
language calls for using recursion
– Basic form:
– The components here are to test for a base case and,
if true, return the base case’s value; otherwise
recurse, passing the function the parameter(s)
manipulated for the next level
(defun name (params)
  (if (terminating condition)
      return-base-case-value
      (name (manipulate params))))
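As one concrete instance of this skeleton (an added example in the same
form), summing the numbers in a list:
(defun sum-list (lis)
  (if (null lis)                           ; terminating condition
      0                                    ; base-case value
      (+ (car lis) (sum-list (cdr lis))))) ; recurse on a smaller list
;; (sum-list '(1 2 3 4)) => 10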
6. What Happens During Recursion
• You should have studied this in 262,
but as a refresher:
– We use the run-time stack to coordinate
recursion
– The stack gives us a LIFO access
• Imagine that function1 calls function2 which
calls function3
• When function3 ends, where do we return to?
– the run-time stack stores the location in
function2 to return to
• When function2 ends, where do we return to?
– the run-time stack stores the location in
function1 to return to
– etc
– Using a stack makes it easy to
“backtrack” to the proper location when a
method ends
• Notice that we want this behavior whether we
are doing normal function calls or recursion
Run-time stack:
main calls m1
m1 calls m2
m2 calls m3
m3 calls m4
We are currently in m4
[Diagram: the stack holds frames for Main, m1, m2, m3, m4 (top), with the stack pointer at m4]
7. More On the Run-time Stack
• For each active function, the run-time stack stores an “activation
record instance”
– This is a description of the function’s execution and stores
• Local variables, Parameters, Return value
• Return pointer (where to return to in the calling function upon function termination)
• Every time a function (or method in Java) is called
– the run-time stack is manipulated by pushing a new activation record instance
onto it
– proper memory space is allocated on the stack for all local variables and
parameters
– the return pointer is set up
– the stack pointer register is adjusted
• Every time a function terminates
– the run-time stack has the top activation record instance popped off of it,
returning the value that the function returns
– the PC (program counter register) is adjusted to the proper location in the calling
function
– the stack pointer register is adjusted
8. An Example
(defun fact (n)
  (if (<= n 1) 1
      (* n (fact (- n 1)))))
The activation record instance (AR) for fact stores three things: n, the
return value, and a pointer to where to return in the calling AR when fact
terminates
We start with (fact 3); the stack grows as follows (top of stack last):
AR for factorial: n = 3, return value: ___, return to: interpreter
Then (fact 3) calls (fact 2):
AR for factorial: n = 3, return value: ___, return to: interpreter
AR for factorial: n = 2, return value: ___, return to: (fact 3) *
Then (fact 2) calls (fact 1):
AR for factorial: n = 3, return value: ___, return to: interpreter
AR for factorial: n = 2, return value: ___, return to: (fact 3) *
AR for factorial: n = 1, return value: ___, return to: (fact 2) *
9. Example Continued
As the recursion unwinds, each AR receives its return value and is popped off:
(fact 1) returns 1:
AR for factorial: n = 3, return value: ___, return to: interpreter
AR for factorial: n = 2, return value: ___, return to: (fact 3) *
AR for factorial: n = 1, return value: 1, return to: (fact 2) *
(fact 2) returns 2 * 1 = 2:
AR for factorial: n = 3, return value: ___, return to: interpreter
AR for factorial: n = 2, return value: 2, return to: (fact 3) *
(fact 3) returns 3 * 2 = 6:
AR for factorial: n = 3, return value: 6, return to: interpreter
6 is returned and printed in the interpreter
10. Lisp Makes Recursion Easy
• Well, strictly speaking, recursion in Lisp is similar to recursion in
any language
• What Lisp can do for us is give us easy access to the debugger
– You can insert a (break) instruction which forces the evaluation step of the
REPL cycle to stop executing, leaving us in the debugger
– Or, if you have a run-time error, you are automatically placed into the
debugger
– From the debugger you can
• inspect the run-time stack to see what values are there
• return to a previous level of recursive call
• provide a value to be returned
– Thus, you can either determine
• why you got an error by inspecting the stack
• see what is going on in the program by inspecting the stack
• return from an error by inserting a partial or complete solution
• CL can also make a recursive program more efficient (to be
explained later)
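For instance, a small added sketch of how (break) might be dropped into a
recursive function to stop in the debugger at the base case (rfact-debug is
a hypothetical name, not one of the slides' functions):
(defun rfact-debug (n)
  (when (= n 1)
    (break "reached the base case, n = ~a" n)) ; stop here, in the debugger
  (if (< n 1) 1 (* n (rfact-debug (- n 1)))))
;; From the debugger we can inspect the pending recursive calls on the
;; stack, supply a return value, or continue execution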
11. Examples of Recursive Code
• Every list function in CL can be implemented recursively
– whether they are or not I’m not sure, but probably they are
• Here we start with 3 versions of last
(defun last (lis)
(cond ((null (cdr lis)) (car lis))
(t (last (cdr lis)))))
(defun last2 (lis)
(cond
((and (listp lis) (= (length lis) 1)) lis)
(t (last2 (cdr lis)))))
(defun last3 (lis)
(cond ((atom lis) (list lis))
((and (listp lis) (= (length lis) 1)) lis)
(t (last3 (cdr lis)))))
The top definition returns an atom; the bottom one (last3) can handle
both atoms and lists. CL’s built-in last behaves most like last2
12. Butlast With and Without Recursion
(defun butlast1 (lis)
  ;; reverse, drop the (originally last) element, then reverse back
  (reverse (cdr (reverse lis))))
(defun mybutlast (lis)
(cond ((null (cdr lis)) nil)
(t (cons (car lis) (mybutlast (cdr lis))))))
(defun reverse1 (lis)
  (let (temp (size (length lis)))
    (dotimes (a size)
      (setf temp (append temp
                         (list (nth (- (- size a) 1) lis)))))
    temp))
(defun reverse2 (lis)
(if (null lis) nil
(append (reverse2 (cdr lis)) (list (car lis)))))
The iterative version of reverse builds a list iteratively using a local variable.
The recursive version, while being harder to understand, contains far less code.
13. Member
(defun member1 (a lis)
(dotimes (i (length lis))
(if (equal a (car lis))
(return lis)
(setf lis (cdr lis)))))
(defun member2 (a lis)
(cond ((null lis) nil)
((equal a (car lis)) lis)
(t (member2 a (cdr lis)))))
• In actuality, CL’s built-in member does not behave like
member2 here, because member compares top-level items
using eql instead of equal
– So
• (member2 '(1 2) '(1 (1 2) 2))
returns ((1 2) 2)
– While
• (member '(1 2) '(1 (1 2) 2))
returns nil
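The mismatch above can also be removed without writing member2 at all:
the built-in member accepts a :test keyword argument, so we can ask it to
compare with equal instead of the default eql:
;; (member '(1 2) '(1 (1 2) 2))               => NIL
;; (member '(1 2) '(1 (1 2) 2) :test #'equal) => ((1 2) 2)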
14. Nth and Nthcdr
(defun nth (n lis)
(dotimes (a n)
(setf lis (cdr lis)))
(car lis))
(defun nth2 (a lis)
(cond ((= a 0) (car lis))
(t (nth2 (- a 1) (cdr lis)))))
(defun nth3 (a lis)
(cond ((< a 0) nil)
((= a 0) (car lis))
(t (nth3 (- a 1) (cdr lis)))))
(defun nthcdr1 (n lis)
(dotimes (a n)
(setf lis (cdr lis)))
lis)
(defun nthcdr2 (n lis)
(cond ((< n 0) nil)
((= n 0) lis)
(t (nthcdr2 (- n 1)
(cdr lis)))))
15. Remove from a List
(defun remove1 (a lis)
(let ((temp nil))
(dolist (i lis)
(if (not (equal a i))
(setf temp
(append temp (list i)))))
temp))
(defun remove2 (a lis)
(cond ((null lis) nil)
((equal a (car lis)) (remove2 a (cdr lis)))
(t (cons (car lis) (remove2 a (cdr lis))))))
(defun remove-first (a lis)
(cond ((null lis) nil)
((equal a (car lis)) (cdr lis))
(t (cons (car lis) (remove-first a (cdr lis))))))
What would remove-last look like? (One possible answer is sketched below.)
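One possible answer, added here as a sketch rather than taken from the
slides: remove an occurrence only when a does not appear anywhere later
in the list.
(defun remove-last (a lis)
  (cond ((null lis) nil)
        ;; this is the last occurrence only if a is absent from the rest
        ((and (equal a (car lis))
              (not (member a (cdr lis) :test #'equal)))
         (cdr lis))
        (t (cons (car lis) (remove-last a (cdr lis))))))
;; (remove-last 'a '(a b a c)) => (A B C)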
16. Substitute Item In List
(defun subs1 (a b lis)
(let ((temp nil))
(dolist (i lis)
(if (equal a i)
(setf temp (append temp (list b)))
(setf temp (append temp (list i)))))
temp))
(defun subs2 (a b lis)
(cond ((null lis) nil)
((equal a (car lis)) (cons b (subs2 a b (cdr lis))))
(t (cons (car lis) (subs2 a b (cdr lis))))))
(defun sub-first (a b lis)
(cond ((null lis) nil)
((equal a (car lis)) (cons b (cdr lis)))
(t (cons (car lis) (sub-first a b (cdr lis))))))
17. Flattening a List
• Now consider the problem of delistifying a list
– That is, taking all of the items in sublists and moving them into
the top-level list
– The recursive version is fairly straightforward
• If the parameter is nil, return the empty list
• If the parameter is an atom, return the atom as a list
• Otherwise, append what we get back by recursively calling this function
with the car of the parameter (null, an atom, or a list) and the cdr of the
parameter (null or a list)
– Since sublists may contain subsublists, etc, an iterative version
would be extremely complicated!
(defun flatten (lis)
(cond ((null lis) nil)
((atom lis) (list lis))
(t (append
(flatten (car lis))
(flatten (cdr lis))))))
(flatten '(a (b c (d e) f (g)) ((h) i))) => (A B C D E F G H I)
18. Counting List Items
• We can count the top level items using length
– (length '(1 2 3 4)) => 4, but (length '(1 (2 3) 4)) => 3
• We can implement a counting function easily
enough as:
• Counting the total number of atoms in a list that
might contain sublists requires flattening, so we
instead would do this:
(defun countitems (lis)
(if (null lis) 0 (+ 1 (countitems (cdr lis)))))
(defun countallitems (lis)
(cond ((null lis) 0)
((atom (car lis)) (+ 1 (countallitems (cdr lis))))
(t (+ (countallitems (car lis)) (countallitems (cdr lis))))))
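For example (added usage notes, assuming the two definitions above):
;; (countitems '(1 (2 3) 4))    => 3   ; the sublist counts as one item
;; (countallitems '(1 (2 3) 4)) => 4   ; every atom is counted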
19. Remove All
• We might similarly want to remove all of an
atom from the lists and sublists of a given list
so again we turn to flattening:
(defun removeall (a lis)
(cond ((null lis) nil)
((equal a (car lis)) (removeall a (cdr lis)))
((listp (car lis))
(cons (removeall a (car lis)) (removeall a (cdr lis))))
(t (cons (car lis) (removeall a (cdr lis))))))
(removeall 'a '(a b (a c) (d ((a) b) a) c a)) returns (B (C) (D (NIL B)) C)
Notice the NIL inserted into the list because we replace (a) with nil. Can
we fix this? If so, how? (One possible fix is sketched below.)
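One possible fix, added here as a sketch (removeall2 is a hypothetical
name): recurse into a sublist first, and drop the sublist entirely if
nothing is left of it.
(defun removeall2 (a lis)
  (cond ((null lis) nil)
        ((equal a (car lis)) (removeall2 a (cdr lis)))
        ((listp (car lis))
         (let ((sub (removeall2 a (car lis))))
           (if (null sub)
               (removeall2 a (cdr lis))         ; drop emptied sublists
               (cons sub (removeall2 a (cdr lis))))))
        (t (cons (car lis) (removeall2 a (cdr lis))))))
;; (removeall2 'a '(a b (a c) (d ((a) b) a) c a)) => (B (C) (D (B)) C)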
20. Towers of Hanoi
(defun hanoi (n a b c)
  (cond ((= n 1)
         (print (list 'move n 'from a 'to c))
         'done) ;; used so that the last message is not
                ;; repeated as the return value of the function
        (t (hanoi (- n 1) a c b)
           (print (list 'move n 'from a 'to c))
           (hanoi (- n 1) b a c))))
Partial solution
[Diagram: Towers of Hanoi with 4 disks, showing start, intermediate, and final configurations]
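As a usage sketch (assuming the definition above), moving 2 disks from peg
a to peg c with b as the spare:
;; (hanoi 2 'a 'b 'c) prints:
;; (MOVE 1 FROM A TO B)   ; move the small disk out of the way
;; (MOVE 2 FROM A TO C)   ; move the large disk to the target peg
;; (MOVE 1 FROM B TO C)   ; move the small disk back on top of it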
21. Tail Recursion
• When writing recursive code, we typically write the
recursive function call in a cond statement as:
– (t (name (manipulate params)))
• If the last thing that this function does is call itself, then
this is known as tail recursion
– Tail recursion is important because it can be implemented
more efficiently
– Consider the following implementation of factorial, why isn’t
it tail recursive?
(defun fact (n)
(cond ((<= n 1) 1)
(t (* n (fact (- n 1))))))
The last thing fact does is *, not fact
so this is not tail recursive!
22. Writing Factorial with Tail Recursion
• If you look carefully, you can see that * is done after
we return from calling fact
• This seems like a necessity because we define
factorial as f(n) = n * f(n - 1)
– So we must subtract 1, then call f, and then do
multiplication
– Can we somehow rearrange the code so that * is not
performed last? Yes, by also passing a partial product as
follows
(defun fact-with-tr (n prod)
(cond ((<= n 1) prod)
(t (fact-with-tr (- n 1) (* n prod)))))
The last thing this function does is call fact-with-tr, not - or *
We call this function as
(fact-with-tr n 1)
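A small added sketch of how such a function is typically wrapped, and how
the accumulator evolves (fact2 is a hypothetical wrapper name):
(defun fact2 (n)
  (fact-with-tr n 1)) ; hide the extra accumulator parameter from callers
;; Each step folds the multiply into the accumulator before recursing:
;; (fact-with-tr 5 1) -> (fact-with-tr 4 5) -> (fact-with-tr 3 20)
;;   -> (fact-with-tr 2 60) -> (fact-with-tr 1 120) => 120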
23. Why Bother With TR?
• Optimizing compilers available in Common Lisp can benefit from
tail recursion if they detect it
– Rather than placing multiple Activation Record Instances on the stack, a
single activation record instance is pushed onto the stack when the function
is called the first time
– For each recursive call, the same activation record instance is manipulated
• in this case, n would be decremented and prod would be updated
– Since we are guaranteed in any single recursive call that we will never need
to use the parameter again in this call, we can change it
• why are we guaranteed that a parameter’s value won’t change in this call?
– And the return location is always to the same location in this function
– Once the function terminates, it is popped off the stack and we return to the
calling function’s location, or the interpreter
– Note: we have a significant problem if an error arises and we are dropped
into the debugger – what is that problem?
• Aside from saving on memory usage and a bit of run-time
memory allocation, this optimization doesn’t do anything else for
us, so we don’t really have to worry about tail recursion
24. Search Problems
• Lisp was the primary language for AI research
– Many AI problems revolve around searching for an answer
• Consider chess – you have to make a move, what move do you make?
• A computer program must search from all the possibilities to decide
what move to make
– but you don’t want to search by just looking 1 move ahead
– if you look 2 moves ahead, you don’t have twice as many possible moves,
but the number of possible moves squared
– if you look 3 moves ahead, the number of possible moves cubed
– this can quickly get out of hand
• So we limit our search by evaluating a top-level move using a heuristic
function
• If the function says “don’t make this move”, we don’t consider it and
don’t search any further along that path
• If the function says “possibly a good move”, then we recursively search
– by using recursion, we can “back up” and try another route if needed; this is
known as backtracking (a minimal sketch follows)
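A minimal backtracking sketch along these lines, assuming hypothetical
helpers goal-p (tests for a win), successors (generates candidate next
states), and promising-p (the heuristic filter); none of these helpers are
defined in the slides:
(defun search-moves (state depth)
  (cond ((goal-p state) state)  ; success: return the winning state
        ((= depth 0) nil)       ; depth limit reached: give up on this path
        (t (dolist (next (successors state) nil)
             (when (promising-p next)   ; heuristic says this may be good
               (let ((result (search-moves next (- depth 1))))
                 (when result
                   (return result)))))))) ; no result: loop on, i.e. backtrack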