This document is the cover and copyright page for the second edition of the book "Fluent Python" by Luciano Ramalho. It details the book's title, author, publisher information, copyright statement, and table of contents.
Report on I-LEACH: An Energy Efficient Routing Protocol in Wireless Sensor Ne... (divya_prabha)
This document presents a synopsis for a seminar on an improved energy efficient routing protocol for wireless sensor networks. It begins with an introduction to wireless sensor networks and routing techniques, focusing on hierarchical routing protocols. It then describes the Low-Energy Adaptive Clustering Hierarchy (LEACH) protocol and some of its variants. The main focus is on an Improved LEACH (I-LEACH) protocol, which aims to maximize network lifetime and minimize average energy consumption per node by selecting cluster heads based on residual energy, number of neighbors, and distance to the base station. The synopsis concludes with an outline of the seminar presentation and includes diagrams of the LEACH and I-LEACH protocols.
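The synopsis names three cluster-head selection criteria (residual energy, number of neighbors, distance to the base station) but not how they are combined. A minimal sketch of one plausible scoring rule, with made-up equal weights and normalization, is:

```python
# Illustrative I-LEACH-style cluster-head scoring. The three criteria come
# from the synopsis; the equal-weight combination below is an assumption.
def cluster_head_score(residual_energy, initial_energy, neighbors,
                       max_neighbors, dist_to_bs, max_dist):
    """Higher score -> better cluster-head candidate."""
    energy_term = residual_energy / initial_energy   # favor high residual energy
    density_term = neighbors / max_neighbors         # favor well-connected nodes
    distance_term = 1 - dist_to_bs / max_dist        # favor nodes near the BS
    return (energy_term + density_term + distance_term) / 3

# Hypothetical nodes: A has more energy, more neighbors, and is closer to the BS.
nodes = {
    "A": cluster_head_score(0.9, 1.0, 8, 10, 30, 100),
    "B": cluster_head_score(0.4, 1.0, 3, 10, 80, 100),
}
best = max(nodes, key=nodes.get)
```

With these numbers node A scores 0.8 against B's 0.3 and would be elected cluster head; a real protocol would also rotate the role to balance energy drain.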
This document proposes a system for recognizing Indian sign language using a region of interest convolutional neural network. It begins with an introduction describing the need for such a system to help deaf people communicate. It then reviews related work on sign language recognition systems. The document outlines the objectives of developing an easy to use and low cost system. It describes the proposed use of a convolutional neural network with region of interest to recognize signs, rather than conventional feature extraction and classification approaches. Details of the system design, architecture and results are provided. The goal is to help deaf individuals communicate more easily with others.
Cellular Automata as a Test Pattern Generator and Output Response Compactor f... (Shiva Narayan Reddy)
This presentation shows how test pattern generators and output response compactors are developed using cellular automata techniques for built-in self-test (BIST).
The document is a presentation about TensorFlow. It begins with an introduction that defines machine learning and deep learning. It then discusses what TensorFlow is, including that it is an open-source library for deep learning and ML, was developed by Google Brain, and uses data flow graphs to represent computations. The presentation explains benefits of TensorFlow like parallelism, distributed execution, and portability. It provides examples of companies using TensorFlow and demonstrates cool projects that can be built with it, like image classification, object detection, and speech recognition. Finally, it concludes that TensorFlow is helping achieve amazing advancements in machine learning.
Underwater sensor networks have the potential to enable new applications and enhance ocean observation. They consist of sensors, autonomous underwater vehicles, and communication architecture. Challenges include limited bandwidth, multipath effects, and power constraints. The network topology and protocol stack must be designed to address issues like delays and bandwidth restrictions. Underwater sensor networks differ from terrestrial networks in deployment, power, memory and other factors due to the underwater environment. They can be used for applications like environmental monitoring, exploration, and disaster prevention.
The document discusses security challenges in wireless ad-hoc networks and potential solutions. It outlines that ad-hoc networks have no infrastructure, are self-organizing, and use multi-hop wireless communication. This introduces security vulnerabilities like eavesdropping, fake messages, and dynamic topology changes. The document then covers common attacks, why security is needed, challenges, and proposes solutions like using a trusted third party, secure routing protocols, and packet leashes to address issues like black holes and wormholes.
The main Java collections are lists, sets, and maps. Lists store elements in sequential order and allow duplicates. Sets store unique elements. Maps store key-value pairs to retrieve values later. Common implementations include ArrayList, LinkedList, HashSet, TreeSet, HashMap, and TreeMap. The Collections class provides utility methods like sort to organize collections.
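The Java collection types described above have close Python analogues (`list`, `set`, `dict`), so the same behaviors can be sketched in a few lines of Python — this illustrates the concepts, not the Java API itself:

```python
# Python analogues of the Java collections described above:
# list ~ List (ordered, duplicates allowed), set ~ Set (unique elements),
# dict ~ Map (key-value pairs).
items = ["a", "b", "a"]          # ordered, duplicates allowed
unique = set(items)              # duplicates collapsed
ages = {"alice": 30, "bob": 25}  # values retrieved later by key
items_sorted = sorted(items)     # like Collections.sort on a List
```

As in Java, choosing between the hash-based and tree-based implementations (HashMap vs. TreeMap, HashSet vs. TreeSet) trades constant-time lookup against keeping elements in sorted order.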
YouTube Link: https://youtu.be/giJimUEkI7U
**Java, J2EE & SOA Certification Training - https://www.edureka.co/java-j2ee-training-course**
This Edureka PPT provides detailed knowledge about linked lists in Java, along with examples that give you a deep understanding of how they work. It covers the following topics:
What is a Linked List?
Types of Linked Lists
Features of Linked Lists
Methods in Linked Lists
Array vs. Linked List
Complete Java Playlist: http://bit.ly/2XcYNH5
Complete Blog Series: http://bit.ly/2YoabkT
Follow us to never miss an update in the future.
YouTube: https://www.youtube.com/user/edurekaIN
Instagram: https://www.instagram.com/edureka_learning/
Facebook: https://www.facebook.com/edurekaIN/
Twitter: https://twitter.com/edurekain
LinkedIn: https://www.linkedin.com/company/edureka
Castbox: https://castbox.fm/networks/505?country=in
Minor Project Report for "Quiz Application" (Harsh Verma)
This document summarizes a project report for a quiz application submitted to Rajiv Gandhi Proudyogiki Vishwavidyalaya, Bhopal. The report describes a web-based quiz application developed by students Bhumika Kanojia, Durgesh Mishra, and Harsh Verma to fulfill their Bachelor of Engineering degree requirements. The application allows users to play various quizzes on technical, general knowledge, and other topics. Users can view answers, ask questions, and submit their own questions and answers after admin approval. The report covers requirements analysis, design, implementation, and testing of the quiz application.
Arrays allow storing multiple values in a single variable. An array is a collection of consecutive memory locations with the same name and type. Elements in an array are accessed using an index. There are different types of arrays including one-dimensional, two-dimensional, and multi-dimensional arrays. Common operations on arrays include accessing elements, searching, and sorting. Sorting techniques for arrays include selection sort and bubble sort. Two-dimensional arrays can represent data in a table with rows and columns.
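The two sorting techniques named above can be sketched briefly; the implementations below are generic illustrations, not code from the document:

```python
# Minimal sketches of the two array sorting techniques mentioned above.

def bubble_sort(a):
    a = list(a)
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:              # swap adjacent out-of-order pairs
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def selection_sort(a):
    a = list(a)
    for i in range(len(a)):
        m = min(range(i, len(a)), key=a.__getitem__)  # smallest of the rest
        a[i], a[m] = a[m], a[i]              # move it into position i
    return a

# A two-dimensional array representing a table with rows and columns:
table = [[1, 2, 3],
         [4, 5, 6]]
```

Both sorts run in O(n²) time; elements of the 2-D array are accessed with a row index then a column index, e.g. `table[1][2]`.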
This document discusses state-centric programming and collaboration groups in wireless sensor networks. It defines a collaboration group as a set of entities that contribute to a state update, with a scope defining its membership and a structure defining roles and data flow. Examples of groups include geographically constrained groups based on region, N-hop neighborhood groups within a number of hops from an anchor, publish/subscribe groups of consumers and producers, and acquaintance groups where members invite others. Mixing and matching different group types can make algorithms more scalable and efficient.
Submit a query on a topic of interest and see the sentiment for the day as a pie chart, or for the week as a line chart, based on tweets gathered from twitter.com.
This document provides an overview of security challenges and solutions in ad hoc networks. It discusses topics like authentication, key management, secure routing, cooperation between nodes, wireless sensor networks, and intrusion detection systems. The main challenges outlined are the lack of centralized control, dynamic topology, resource constraints of nodes, and vulnerability of wireless links. Lightweight, distributed, reactive, and fault-tolerant approaches are needed to address these challenges.
The document summarizes an Eckovation machine learning project that classified images of cats and dogs using deep learning techniques. The team used a training set of 25,000 images and test set of 12,500 images to build a deep neural network model for image classification. Random forest was also explored as an algorithm, but deep learning was found to be better suited due to its ability to handle more complex problems like image classification. The model achieved a 60% accuracy on the test set.
This document describes a minor project on developing a face mask detector using computer vision and deep learning techniques. The project aims to create a model that can detect faces with and without masks using OpenCV, Keras/TensorFlow. A two-class classifier was trained on a mask/no mask dataset to obtain 99% accuracy. The fine-tuned MobileNetV2 model can accurately detect faces and identify whether masks are being worn, making it suitable for deployment on embedded systems.
TensorFlow Tutorial | Deep Learning With TensorFlow | TensorFlow Tutorial For... (Simplilearn)
The document provides an overview of using TensorFlow to build deep learning models. It discusses how TensorFlow uses computational graphs to process data and perform computations. Tensors represent multi-dimensional data and are core to TensorFlow's operations. The document also demonstrates how to build simple models like linear regression and recurrent neural networks (RNNs) using TensorFlow. An example RNN model predicts monthly milk production using time series data.
This presentation on recurrent neural networks will help you understand what a neural network is, which neural networks are popular, why we need recurrent neural networks, what a recurrent neural network is, how an RNN works, what the vanishing and exploding gradient problems are, and what LSTM (long short-term memory) is; it also includes a use-case implementation of LSTM. Neural networks used in deep learning consist of layers connected to each other, modeled on the structure and function of the human brain; they learn from huge volumes of data and use complex algorithms for training. A recurrent neural network works on the principle of saving the output of a layer and feeding it back to the input in order to predict the output at the next step. Now let's dive into this presentation and understand what an RNN is and how it actually works.
Below topics are explained in this recurrent neural networks tutorial:
1. What is a neural network?
2. Popular neural networks?
3. Why recurrent neural network?
4. What is a recurrent neural network?
5. How does an RNN work?
6. Vanishing and exploding gradient problem
7. Long short term memory (LSTM)
8. Use case implementation of LSTM
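The recurrence described above — feeding a layer's output back in with the next input — can be sketched in scalar form, with illustrative (untrained) weights; a real RNN uses weight matrices and vector states:

```python
import math

# Scalar sketch of the RNN recurrence: the hidden state h is fed back
# at every step. wx and wh are made-up weights, not trained values.
def rnn_steps(xs, wx=0.5, wh=0.8, h0=0.0):
    h = h0
    states = []
    for x in xs:
        h = math.tanh(wx * x + wh * h)   # new state mixes input and old state
        states.append(h)
    return states

states = rnn_steps([1.0, 0.0, 0.0])
```

Even with zero inputs after the first step, the state stays nonzero because the previous state is fed back — this is the memory the presentation refers to; the repeated multiplication by `wh` is also where vanishing and exploding gradients come from, which LSTM cells are designed to mitigate.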
Simplilearn’s Deep Learning course will transform you into an expert in deep learning techniques using TensorFlow, the open-source software library designed to conduct machine learning & deep neural network research. With our deep learning course, you'll master deep learning and TensorFlow concepts, learn to implement algorithms, build artificial neural networks and traverse layers of data abstraction to understand the power of data and prepare you for your new role as deep learning scientist.
Why Deep Learning?
TensorFlow is one of the most popular software platforms used for deep learning and contains powerful tools to help you build and implement artificial neural networks.
Advancements in deep learning are being seen in smartphone applications, creating efficiencies in the power grid, driving advancements in healthcare, improving agricultural yields, and helping us find solutions to climate change. With this Tensorflow course, you’ll build expertise in deep learning models, learn to operate TensorFlow to manage neural networks and interpret the results.
And according to payscale.com, the median salary for engineers with deep learning skills tops $120,000 per year.
You can gain in-depth knowledge of Deep Learning by taking our Deep Learning certification training course. With Simplilearn's Deep Learning course, you will prepare for a career as a Deep Learning engineer as you master concepts and techniques including supervised and unsupervised learning, mathematical and heuristic aspects, and hands-on modeling to develop algorithms.
Learn more at: https://www.simplilearn.com/
The document presents a graduate project on efficient data aggregation from polling points in wireless sensor networks. The proposed system called Mobi-Cluster aims to minimize overall network overhead and energy expenditure associated with multi-hop data retrieval while ensuring balanced energy consumption and prolonged network lifetime. This is achieved through building cluster structures consisting of member nodes that route data to assigned cluster heads, and selecting appropriate polling points to act as intermediaries between clusters and a mobile collector. The key stages of the Mobi-Cluster protocol are described as cluster head selection, polling point selection, cluster head attachment to polling points, data aggregation and forwarding to polling points, and communication between polling points and the mobile collector.
Sentiment Analysis of Twitter Data Using Python (Hetu Bhavsar)
Twitter is a popular social networking website where users post and interact with messages known as "tweets". To automate the analysis of such data, the area of sentiment analysis has emerged. It aims at identifying opinionated data on the Web and classifying it according to its polarity, i.e., whether it carries a positive or negative connotation. We will attempt to conduct sentiment analysis on tweets using various machine learning algorithms.
In this deck, Huihuo Zheng from Argonne National Laboratory presents: Data Parallel Deep Learning.
"The Argonne Training Program on Extreme-Scale Computing (ATPESC) provides intensive, two weeks of training on the key skills, approaches, and tools to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future."
Watch the video: https://wp.me/p3RLHQ-lsl
Learn more: https://extremecomputingtraining.anl.gov/archive/atpesc-2019/agenda-2019/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Sentiment analysis is context-based mining of text that extracts and identifies subjective information from a given text or sentence. The main idea here is extracting the sentiment of text using machine learning techniques such as LSTM (long short-term memory). This text classification method analyses incoming text and determines whether the underlying emotion is positive or negative, along with a probability for that judgment. The probability depicts the strength of the statement: if it is close to 0, the sentiment is strongly negative, and if it is close to 1, the statement is strongly positive. A web application is created to deploy this model using Flask, a Python-based micro framework. Other methods, such as plain RNNs and CNNs, proved less effective here than LSTM. Dirash A R | Dr. S K Manju Bargavi, "LSTM Based Sentiment Analysis", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5, Issue-4, June 2021. URL: https://www.ijtsrd.com/papers/ijtsrd42345.pdf Paper URL: https://www.ijtsrd.com/computer-science/data-processing/42345/lstm-based-sentiment-analysis/dirash-a-r
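The probability-to-sentiment mapping described in the abstract (near 0 = strongly negative, near 1 = strongly positive) can be sketched on its own; the LSTM that would produce the probability is omitted, and the 0.1/0.9 strength thresholds are assumptions, not values from the paper:

```python
# Sketch of the probability-to-label mapping described above. The model
# output p is assumed to be in [0, 1]; thresholds are illustrative.
def label_sentiment(p):
    polarity = "positive" if p >= 0.5 else "negative"
    strength = "strong" if p <= 0.1 or p >= 0.9 else "mild"
    return f"{strength} {polarity}"
```

So a model output of 0.95 would be reported as a strong positive, 0.05 as a strong negative, and values near 0.5 as only mildly one way or the other.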
This document contains the requirement specification and design for a chat application. It includes use case diagrams and tables describing the authentication system, contacts form, chat form, maintenance, and monitoring features. It also includes activity diagrams, class diagrams, entity relationship diagrams, and sequence diagrams modeling the application's functionality and architecture. The data flow diagrams show the high-level data flows and data transformation processes within the chat application.
The document advertises free computer science notes, programs, projects and books that can be downloaded for various university students from the website https://www.tutorialsduniya.com. It lists topics covered in the materials such as PHP, Python, algorithms, data structures, machine learning, operating systems and more. It encourages sharing the notes with friends.
This document discusses internet congestion and congestion control techniques. It defines internet congestion as reduced quality of service that occurs when a network is carrying more data than it can handle, causing effects like queueing delays and packet loss. Congestion control techniques aim to prevent or remove congestion. Open loop techniques prevent congestion proactively using policies around retransmissions, windows, acknowledgements, and routing. Closed loop techniques detect and remove congestion reactively using methods like backpressure, choke packets, and implicit or explicit signaling between nodes and sources. In conclusion, as internet usage increases, efficient congestion control is important, though no single algorithm can fully solve the problem.
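A closed-loop reaction of the kind described above — a source shrinking its sending window when it receives a congestion signal such as a choke packet, and growing it otherwise — can be sketched with an AIMD-style policy; the specific constants are illustrative, not taken from the document:

```python
# Toy closed-loop congestion reaction (AIMD-style window policy).
# A congestion signal (e.g. a choke packet or detected loss) triggers
# multiplicative decrease; otherwise the window grows additively.
def adjust_window(cwnd, congestion_detected, increment=1, factor=0.5, floor=1):
    if congestion_detected:
        return max(floor, int(cwnd * factor))  # back off sharply
    return cwnd + increment                    # probe for more bandwidth

w = 10
w = adjust_window(w, False)   # no congestion: 10 -> 11
w = adjust_window(w, True)    # congestion signal: 11 -> 5
```

The asymmetry (slow growth, sharp back-off) is what lets many sources share a link without repeatedly driving it into sustained overload.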
PyTorch Python Tutorial | Deep Learning Using PyTorch | Image Classifier Usin... (Edureka!)
( ** Deep Learning Training: https://www.edureka.co/ai-deep-learning-with-tensorflow ** )
This Edureka PyTorch Tutorial (Blog: https://goo.gl/4zxMfU) will help you in understanding various important basics of PyTorch. It also includes a use-case in which we will create an image classifier that will predict the accuracy of an image data-set using PyTorch.
Below are the topics covered in this tutorial:
1. What is Deep Learning?
2. What are Neural Networks?
3. Libraries available in Python
4. What is PyTorch?
5. Use-Case of PyTorch
6. Summary
This document provides an overview and introduction to TensorFlow. It describes that TensorFlow is an open source software library for numerical computation using data flow graphs. The graphs are composed of nodes, which are operations on data, and edges, which are multidimensional data arrays (tensors) passing between operations. It also provides pros and cons of TensorFlow and describes higher level APIs, requirements and installation, program structure, tensors, variables, operations, and other key concepts.
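The data-flow-graph idea described above — nodes are operations, edges carry tensors between them — can be illustrated with a toy evaluator in plain Python; this is a conceptual sketch, not the TensorFlow API:

```python
import operator

# Toy data flow graph: each node is an operation; "edges" are the node
# names its inputs come from. Values flow along edges when evaluated.
graph = {
    "a":   ("const", 2),
    "b":   ("const", 3),
    "sum": ("add", "a", "b"),      # depends on a and b
    "out": ("mul", "sum", "b"),    # depends on sum and b
}

def evaluate(node, graph, cache=None):
    cache = {} if cache is None else cache
    if node in cache:                      # each node computed once
        return cache[node]
    op, *args = graph[node]
    if op == "const":
        val = args[0]
    else:
        fn = {"add": operator.add, "mul": operator.mul}[op]
        val = fn(*(evaluate(a, graph, cache) for a in args))
    cache[node] = val
    return val

result = evaluate("out", graph)   # (2 + 3) * 3
```

Because the graph makes dependencies explicit, independent nodes can in principle be evaluated in parallel or placed on different devices — the property TensorFlow exploits for distributed execution.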
Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython (PD... (R.K. College of Engg & Tech)
This document provides an overview and table of contents for the book "Python for Data Analysis" by Wes McKinney. The book covers using Python and essential Python data analysis libraries like NumPy, pandas, matplotlib, and others for data wrangling, exploration, and modeling. It includes chapters on data structures and manipulation, data loading and storage, data cleaning, aggregation, visualization, and more. The second edition was published in 2017 and builds on the first edition.
Python isn't hard to learn for most people. In fact, Python is considered one of the easiest programming languages to learn. While anyone can learn Python programming — even if you've never written a line of Python code before — you should expect that it will take time, and you should expect moments of frustration.
Minor project Report for "Quiz Application"Harsh Verma
This document summarizes a project report for a quiz application submitted to Rajiv Gandhi Proudyogiki Vishwavidyalaya, Bhopal. The report describes a web-based quiz application developed by students Bhumika Kanojia, Durgesh Mishra, and Harsh Verma to fulfill their Bachelor of Engineering degree requirements. The application allows users to play various quizzes on technical, general knowledge, and other topics. Users can view answers, ask questions, and submit their own questions and answers after admin approval. The report covers requirements analysis, design, implementation, and testing of the quiz application.
Arrays allow storing multiple values in a single variable. An array is a collection of consecutive memory locations with the same name and type. Elements in an array are accessed using an index. There are different types of arrays including one-dimensional, two-dimensional, and multi-dimensional arrays. Common operations on arrays include accessing elements, searching, and sorting. Sorting techniques for arrays include selection sort and bubble sort. Two-dimensional arrays can represent data in a table with rows and columns.
This document discusses state-centric programming and collaboration groups in wireless sensor networks. It defines a collaboration group as a set of entities that contribute to a state update, with a scope defining its membership and a structure defining roles and data flow. Examples of groups include geographically constrained groups based on region, N-hop neighborhood groups within a number of hops from an anchor, publish/subscribe groups of consumers and producers, and acquaintance groups where members invite others. Mixing and matching different group types can make algorithms more scalable and efficient.
Make a query regarding a topic of interest and come to know the sentiment for the day in pie-chart or for the week in form of line-chart for the tweets gathered from twitter.com
This document provides an overview of security challenges and solutions in ad hoc networks. It discusses topics like authentication, key management, secure routing, cooperation between nodes, wireless sensor networks, and intrusion detection systems. The main challenges outlined are the lack of centralized control, dynamic topology, resource constraints of nodes, and vulnerability of wireless links. Lightweight, distributed, reactive, and fault-tolerant approaches are needed to address these challenges.
The document summarizes an Eckovation machine learning project that classified images of cats and dogs using deep learning techniques. The team used a training set of 25,000 images and test set of 12,500 images to build a deep neural network model for image classification. Random forest was also explored as an algorithm, but deep learning was found to be better suited due to its ability to handle more complex problems like image classification. The model achieved a 60% accuracy on the test set.
This document describes a minor project on developing a face mask detector using computer vision and deep learning techniques. The project aims to create a model that can detect faces with and without masks using OpenCV, Keras/TensorFlow. A two-class classifier was trained on a mask/no mask dataset to obtain 99% accuracy. The fine-tuned MobileNetV2 model can accurately detect faces and identify whether masks are being worn, making it suitable for deployment on embedded systems.
TensorFlow Tutorial | Deep Learning With TensorFlow | TensorFlow Tutorial For...Simplilearn
The document provides an overview of using TensorFlow to build deep learning models. It discusses how TensorFlow uses computational graphs to process data and perform computations. Tensors represent multi-dimensional data and are core to TensorFlow's operations. The document also demonstrates how to build simple models like linear regression and recurrent neural networks (RNNs) using TensorFlow. An example RNN model predicts monthly milk production using time series data.
This presentation on Recurrent Neural Network will help you understand what is a neural network, what are the popular neural networks, why we need recurrent neural network, what is a recurrent neural network, how does a RNN work, what is vanishing and exploding gradient problem, what is LSTM and you will also see a use case implementation of LSTM (Long short term memory). Neural networks used in Deep Learning consists of different layers connected to each other and work on the structure and functions of the human brain. It learns from huge volumes of data and used complex algorithms to train a neural net. The recurrent neural network works on the principle of saving the output of a layer and feeding this back to the input in order to predict the output of the layer. Now lets deep dive into this presentation and understand what is RNN and how does it actually work.
Below topics are explained in this recurrent neural networks tutorial:
1. What is a neural network?
2. Popular neural networks?
3. Why recurrent neural network?
4. What is a recurrent neural network?
5. How does an RNN work?
6. Vanishing and exploding gradient problem
7. Long short term memory (LSTM)
8. Use case implementation of LSTM
Simplilearn’s Deep Learning course will transform you into an expert in deep learning techniques using TensorFlow, the open-source software library designed to conduct machine learning & deep neural network research. With our deep learning course, you'll master deep learning and TensorFlow concepts, learn to implement algorithms, build artificial neural networks and traverse layers of data abstraction to understand the power of data and prepare you for your new role as deep learning scientist.
Why Deep Learning?
It is one of the most popular software platforms used for deep learning and contains powerful tools to help you build and implement artificial neural networks.
Advancements in deep learning are being seen in smartphone applications, creating efficiencies in the power grid, driving advancements in healthcare, improving agricultural yields, and helping us find solutions to climate change. With this Tensorflow course, you’ll build expertise in deep learning models, learn to operate TensorFlow to manage neural networks and interpret the results.
And according to payscale.com, the median salary for engineers with deep learning skills tops $120,000 per year.
You can gain in-depth knowledge of Deep Learning by taking our Deep Learning certification training course. With Simplilearn’s Deep Learning course, you will prepare for a career as a Deep Learning engineer as you master concepts and techniques including supervised and unsupervised learning, mathematical and heuristic aspects, and hands-on modeling to develop algorithms. Those who complete the course will be able to:
Learn more at: https://www.simplilearn.com/
The document presents a graduate project on efficient data aggregation from polling points in wireless sensor networks. The proposed system called Mobi-Cluster aims to minimize overall network overhead and energy expenditure associated with multi-hop data retrieval while ensuring balanced energy consumption and prolonged network lifetime. This is achieved through building cluster structures consisting of member nodes that route data to assigned cluster heads, and selecting appropriate polling points to act as intermediaries between clusters and a mobile collector. The key stages of the Mobi-Cluster protocol are described as cluster head selection, polling point selection, cluster head attachment to polling points, data aggregation and forwarding to polling points, and communication between polling points and the mobile collector.
Sentiment analysis of Twitter data using Python - Hetu Bhavsar
Twitter is a popular social networking website where users post and interact with messages known as “tweets”. To automate the analysis of such data, the field of sentiment analysis has emerged. It aims at identifying opinionated content on the web and classifying it according to its polarity, i.e., whether it carries a positive or negative connotation. We will attempt to conduct sentiment analysis on tweets using several different machine learning algorithms.
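Before reaching for heavier machine learning models, the polarity-classification task described above can be illustrated with a tiny lexicon-based baseline. This is a sketch only: the word lists below are illustrative assumptions, not a published sentiment lexicon.

```python
# Minimal lexicon-based polarity baseline for tweets.
# The POSITIVE/NEGATIVE word sets are illustrative assumptions.
POSITIVE = {"good", "great", "love", "happy", "excellent"}
NEGATIVE = {"bad", "awful", "hate", "sad", "terrible"}

def polarity(tweet: str) -> str:
    """Classify a tweet as positive, negative, or neutral by word counts."""
    words = tweet.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(polarity("I love this great phone"))    # positive
print(polarity("awful service, really bad"))  # negative
```

Real systems replace the hand-made lexicon with a trained classifier, but the input/output contract — raw text in, polarity label out — stays the same.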
In this deck, Huihuo Zheng from Argonne National Laboratory presents: Data Parallel Deep Learning.
"The Argonne Training Program on Extreme-Scale Computing (ATPESC) provides intensive, two weeks of training on the key skills, approaches, and tools to design, implement, and execute computational science and engineering applications on current high-end computing systems and the leadership-class computing systems of the future."
Watch the video: https://wp.me/p3RLHQ-lsl
Learn more: https://extremecomputingtraining.anl.gov/archive/atpesc-2019/agenda-2019/
Sentiment analysis is context-based mining of text, which identifies and extracts subjective information from a given text or sentence. Here the main concept is extracting the sentiment of the text using machine learning techniques such as LSTM (Long Short-Term Memory). This text classification method analyses the incoming text and determines whether the underlying emotion is positive or negative, along with a probability associated with that judgment. The probability depicts the strength of the statement: if the probability is close to 0, the sentiment is strongly negative, and if the probability is close to 1, the statement is strongly positive. A web application is created to deploy this model using Flask, a Python-based micro framework. Many other methods, such as RNN and CNN, are inefficient when compared to LSTM. Dirash A R | Dr. S K Manju Bargavi, "LSTM Based Sentiment Analysis", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5, Issue-4, June 2021. URL: https://www.ijtsrd.com/papers/ijtsrd42345.pdf Paper URL: https://www.ijtsrd.com/computer-science/data-processing/42345/lstm-based-sentiment-analysis/dirash-a-r
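The probability-to-label mapping the paper describes (near 0 means strongly negative, near 1 strongly positive) can be sketched in a few lines. The threshold values below are illustrative assumptions, not taken from the paper.

```python
# Sketch of mapping a model's output probability to a sentiment label
# with strength. The 0.25/0.5/0.75 thresholds are assumptions.
def interpret(prob: float) -> str:
    """Map a probability in [0, 1] to a sentiment label."""
    if prob >= 0.75:
        return "strongly positive"
    if prob >= 0.5:
        return "positive"
    if prob >= 0.25:
        return "negative"
    return "strongly negative"

print(interpret(0.95))  # strongly positive
print(interpret(0.60))  # positive
print(interpret(0.05))  # strongly negative
```

In a deployed Flask application, a function like this would sit between the LSTM model's output layer and the response rendered to the user.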
This document contains the requirement specification and design for a chat application. It includes use case diagrams and tables describing the authentication system, contacts form, chat form, maintenance, and monitoring features. It also includes activity diagrams, class diagrams, entity relationship diagrams, and sequence diagrams modeling the application's functionality and architecture. The data flow diagrams show the high-level data flows and data transformation processes within the chat application.
The document advertises free computer science notes, programs, projects and books that can be downloaded for various university students from the website https://www.tutorialsduniya.com. It lists topics covered in the materials such as PHP, Python, algorithms, data structures, machine learning, operating systems and more. It encourages sharing the notes with friends.
This document discusses internet congestion and congestion control techniques. It defines internet congestion as reduced quality of service that occurs when a network is carrying more data than it can handle, causing effects like queueing delays and packet loss. Congestion control techniques aim to prevent or remove congestion. Open loop techniques prevent congestion proactively using policies around retransmissions, windows, acknowledgements, and routing. Closed loop techniques detect and remove congestion reactively using methods like backpressure, choke packets, and implicit or explicit signaling between nodes and sources. In conclusion, as internet usage increases, efficient congestion control is important, though no single algorithm can fully solve the problem.
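One open-loop shaping idea related to the preventive techniques above is the leaky bucket: bursts are queued and drained at a fixed rate, and packets beyond the bucket's capacity are dropped. The sketch below is illustrative; the capacity and rate parameters are assumptions.

```python
from collections import deque

# Illustrative leaky-bucket traffic shaper: arriving bursts are queued
# up to `capacity`, excess packets are dropped, and each tick releases
# at most `rate` packets, smoothing the outflow.
class LeakyBucket:
    def __init__(self, capacity: int, rate: int):
        self.capacity = capacity  # max queued packets
        self.rate = rate          # packets released per tick
        self.queue = deque()
        self.dropped = 0

    def arrive(self, n: int) -> None:
        for _ in range(n):
            if len(self.queue) < self.capacity:
                self.queue.append(object())
            else:
                self.dropped += 1  # bucket overflow: packet lost

    def tick(self) -> int:
        released = min(self.rate, len(self.queue))
        for _ in range(released):
            self.queue.popleft()
        return released

bucket = LeakyBucket(capacity=5, rate=2)
bucket.arrive(8)       # burst of 8 packets: 5 queued, 3 dropped
print(bucket.dropped)  # 3
print(bucket.tick())   # 2 (steady outflow despite the burst)
```

Closed-loop techniques such as choke packets would instead feed the drop count back to the sender so it slows down.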
PyTorch Python Tutorial | Deep Learning Using PyTorch | Image Classifier Usin... - Edureka!
( ** Deep Learning Training: https://www.edureka.co/ai-deep-learning-with-tensorflow ** )
This Edureka PyTorch Tutorial (Blog: https://goo.gl/4zxMfU) will help you understand various important basics of PyTorch. It also includes a use case in which we will create an image classifier and evaluate its accuracy on an image dataset using PyTorch.
Below are the topics covered in this tutorial:
1. What is Deep Learning?
2. What are Neural Networks?
3. Libraries available in Python
4. What is PyTorch?
5. Use-Case of PyTorch
6. Summary
This document provides an overview and introduction to TensorFlow. It describes that TensorFlow is an open source software library for numerical computation using data flow graphs. The graphs are composed of nodes, which are operations on data, and edges, which are multidimensional data arrays (tensors) passing between operations. It also provides pros and cons of TensorFlow and describes higher level APIs, requirements and installation, program structure, tensors, variables, operations, and other key concepts.
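The node/edge model described above can be illustrated without TensorFlow itself: a toy evaluator where nodes are operations and edges carry values between them. This is a sketch of the dataflow-graph concept only, not TensorFlow's actual API.

```python
# Toy dataflow graph: each Node holds an operation and references to its
# upstream nodes (the graph's edges). Evaluation walks the graph lazily.
class Node:
    def __init__(self, op, *inputs):
        self.op = op          # function applied to the input values
        self.inputs = inputs  # upstream nodes feeding this operation

    def run(self):
        return self.op(*(n.run() for n in self.inputs))

def const(value):
    """A source node producing a constant value."""
    return Node(lambda: value)

# Build the graph (a + b) * c; nothing computes until run() is called.
a, b, c = const(2), const(3), const(4)
add = Node(lambda x, y: x + y, a, b)
mul = Node(lambda x, y: x * y, add, c)
print(mul.run())  # 20
```

TensorFlow's graphs add multidimensional tensors, device placement, and automatic differentiation on top of this basic build-then-run structure.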
Python for Data Analysis_ Data Wrangling with Pandas, Numpy, and Ipython ( PD... - R.K.College of engg & Tech
This document provides an overview and table of contents for the book "Python for Data Analysis" by Wes McKinney. The book covers using Python and essential Python data analysis libraries like NumPy, pandas, matplotlib, and others for data wrangling, exploration, and modeling. It includes chapters on data structures and manipulation, data loading and storage, data cleaning, aggregation, visualization, and more. The second edition was published in 2017 and builds on the first edition.
Python isn't hard to learn for most people. In fact, Python is considered one of the easiest programming languages to learn. While anyone can learn Python programming — even if you've never written a line of Python code before — you should expect that it will take time, and you should expect moments of frustration.
This document is the table of contents for the 4th edition of the book "Learning Python" by Mark Lutz. It contains over 400 pages divided into 16 chapters covering topics like Python basics, types, statements, functions and object-oriented programming. The book is published by O'Reilly Media and is intended to help readers learn the Python programming language.
This document provides an overview of the fifth edition of the book "Learning Python" by Mark Lutz. It discusses the book's contents, contributors, copyright information, and revision history. The book is published by O'Reilly Media and is intended to teach readers how to program in the Python language. It covers topics such as Python basics, built-in types, statements and syntax, and other core programming concepts.
pythondatasciencehandbook with oops concepts.pdf - RMani7
Technically defined, Python is an object-oriented, interpreted, high-level programming language, developed by Guido van Rossum and originally released in 1991. 'Python' has an easy-going ring to it, and so is its operation. Python has a reputation for being a beginner-friendly language.
Wes McKinney - Python for Data Analysis - O'Reilly Media (2012).pdf - Blue Sea
This book provides an overview of Python libraries for data analysis and scientific computing. It covers core libraries like NumPy, pandas, matplotlib and IPython, which are essential for tasks like loading, cleaning, transforming and visualizing data. The book also teaches best practices for data analysis workflows in Python and key programming techniques.
This document provides an overview and table of contents for the book "Hands-On Machine Learning with Scikit-Learn and TensorFlow" by Aurélien Géron. The book covers concepts, tools, and techniques for building intelligent systems using machine learning frameworks Scikit-Learn and TensorFlow. It is divided into two parts, with Part I covering fundamental machine learning concepts and algorithms, and Part II focusing on neural networks and deep learning. The table of contents lists 12 chapters that progress from basic machine learning topics to more advanced deep learning techniques.
IBM DB2 10.5 for Linux, UNIX, and Windows XQuery Reference - bupbechanhgmail
This document provides an overview and reference for IBM DB2 10.5 XQuery functionality. It covers XQuery concepts, data types, the type system, the prolog, expressions, and built-in functions. The type system chapter describes the built-in XQuery data types like string, date, and QName. The prolog chapter covers declaration keywords that can be used to configure XQuery processing. The expressions chapter details the syntax and semantics of XQuery expressions, including path expressions, FLWOR expressions, and updating expressions. The built-in functions chapter categorizes the standard XQuery functions supported in DB2 like concat, substring, and current-dateTime.
Kotlin provides several ways to write a simple "Hello World" program, including using top-level functions, object declarations, and companion objects. It can be compiled from the command line using kotlinc and run using kotlin. Main functions support varargs and reading input from the command line. Kotlin is compatible with Java and can be used to target the JVM, Android, and JavaScript.
Introduction to Machine Learning with Python ( PDFDrive.com ).pdf - bisan3
This document is the preface to the book "Introduction to Machine Learning with Python" by Andreas C. Müller and Sarah Guido. The preface provides an overview of the book, including who it is intended for, why the authors wrote it, how it is organized, and conventions used. It is an introductory book on machine learning using Python and scikit-learn that requires no previous experience, focusing on practical applications rather than advanced mathematics.
This document provides an overview of Active Directory, including:
- Its basic building blocks such as domains, forests, organizational units, and the global catalog.
- Tools for managing Active Directory such as Active Directory Users and Computers and LDP.
- The structure and components of the Active Directory schema.
- How Active Directory replication works to synchronize data between domain controllers.
- Group policies and how they are used to centrally manage configurations and settings.
- Authentication protocols like Kerberos that Active Directory relies on.
- Best practices for designing an Active Directory structure, including naming conventions and organizational unit layouts.
Top Python Online Training Institutes in Bangalore - SaagTechnologies
Python is a very powerful general-purpose, interpreted, interactive, object-oriented, open-source, high-level scripting and programming language that emphasizes code readability and enables developers to use fewer lines of code than many other programming languages. Python supports modules and packages, which encourages program modularity and code reuse. Python supports multiple programming paradigms and has a large standard library. Python is useful for data science, machine learning, DevOps, IoT, and AI development.
This document provides an overview of the SAP system architecture and environment for ABAP programs. It describes the typical 3-tier client/server architecture with presentation, application, and database layers. It also discusses the common landscape architecture with separate development, testing, and production systems. The environment for ABAP programs includes work processes that run programs on the application server independently of the operating system and database. The document introduces reports and dynpro programs that make up the main types of ABAP programs.
This document discusses the advantages of puzzle-based learning for improving coding skills. It argues that puzzles help learners by opening a knowledge gap, providing eureka moments when the gap is closed, dividing complex learning into smaller achievable steps, and giving immediate feedback to support learning. The overall goal of the book and its puzzles is to help readers efficiently learn Python programming concepts and skills through an active problem-solving approach.
This document contains lecture notes for a course on principles of programming languages. It covers topics related to functional programming including expressions, procedures, recursion, types, and evaluators. It also briefly introduces logic programming and imperative programming concepts. The notes were prepared by Mira Balaban for a class at Ben-Gurion University with help from several teaching assistants.
Python functions allow breaking down code into reusable blocks to perform tasks. There are several types of functions including built-in, user-defined, and anonymous functions. User-defined functions are defined using the def keyword and can take in parameters. Functions can return values using the return statement. Functions are called by their name along with any arguments. Arguments are passed into parameters and can be positional, keyword, or have default values. Functions increase code reuse and readability.
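The features listed above — definition with def, parameters with default values, positional and keyword arguments, and a return value — fit in one small example. The function name and strings here are invented for illustration.

```python
# A user-defined function showing parameters, defaults, and return values.
def greet(name, greeting="Hello", punctuation="!"):
    """Build a greeting; greeting and punctuation have default values."""
    return f"{greeting}, {name}{punctuation}"

print(greet("Ada"))                          # positional arg, defaults used
print(greet("Ada", greeting="Hi"))           # keyword argument
print(greet("Ada", "Hey", punctuation="?"))  # positional + keyword mixed
```

Calling greet("Ada") yields "Hello, Ada!", while the keyword forms override individual defaults without restating the others.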
The document is a tutorial on function pointers in C and C++. It introduces function pointers and provides examples of defining, assigning, comparing, and calling functions using pointers. It also covers callbacks and functors, which allow functions to be passed as arguments or returned.
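The tutorial's examples are in C and C++; since the rest of this listing is Python-centric, here is the same callback idea expressed in Python, where functions are first-class objects. This is a cross-language sketch, not the tutorial's own code.

```python
# Callbacks without function pointers: in Python, functions are values
# that can be passed as arguments and called through a variable.
def apply_twice(func, value):
    """Call the passed-in function twice, like invoking a function pointer."""
    return func(func(value))

def increment(x):
    return x + 1

print(apply_twice(increment, 3))             # 5
print(apply_twice(lambda s: s + "!", "hi"))  # hi!!
```

The lambda plays the role of an inline functor: a callable passed to other code, the same pattern C achieves with function pointers.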
Similar to Luciano Ramalho - Fluent Python: Clear, Concise, and Effective Programming - O'Reilly Media (2022).pdf
Nashik's top web development company, Upturn India Technologies, crafts innovative digital solutions for your success. Partner with us to achieve your goals.
Odoo releases a new update every year. The latest version, Odoo 17, came out in October 2023. It brought many improvements to the user interface and user experience, along with new features in modules like accounting, marketing, manufacturing, websites, and more.
The Odoo 17 update has been a hot topic among startups, mid-sized businesses, large enterprises, and Odoo developers aiming to grow their businesses. Since it is now already the first quarter of 2024, you must have a clear idea of what Odoo 17 entails and what it can offer your business if you are still not aware of it.
This blog covers the features and functionalities. Explore the entire blog and get in touch with expert Odoo ERP consultants to leverage Odoo 17 and its features for your business too.
An Overview of Odoo ERP
Odoo ERP was first released as OpenERP software in February 2005. It is a suite of business applications used for ERP, CRM, eCommerce, websites, and project management. Ten years ago, the Odoo Enterprise edition was launched to help fund the Odoo Community version.
When you compare Odoo Community and Enterprise, the Enterprise edition offers exclusive features like mobile app access, Odoo Studio customisation, Odoo hosting, and unlimited functional support.
Today, Odoo is a well-known name used by companies of all sizes across various industries, including manufacturing, retail, accounting, marketing, healthcare, IT consulting, and R&D.
The latest version, Odoo 17, has been available since October 2023. Key highlights of this update include:
Enhanced user experience with improvements to the command bar, faster backend page loading, and multiple dashboard views.
Instant report generation, credit limit alerts for sales and invoices, separate OCR settings for invoice creation, and an auto-complete feature for forms in the accounting module.
Improved image handling and global attribute changes for mailing lists in email marketing.
A default auto-signature option and a refuse-to-sign option in HR modules.
Options to divide and merge manufacturing orders, track the status of manufacturing orders, and more in the MRP module.
Dark mode in Odoo 17.
Now that the Odoo 17 announcement is official, let’s look at what’s new in Odoo 17!
What is Odoo ERP 17?
Odoo 17 is the latest version of one of the world's leading open-source enterprise ERPs. This version brings significant improvements, explained in this blog. The new version also aims to introduce features that enhance time-saving, efficiency, and productivity for users across various organisations.
Odoo 17, released at the Odoo Experience 2023, brought notable improvements to the user interface and added new functionalities with enhancements in performance, accessibility, data analysis, and management, further expanding its reach in the market.
Manyata Tech Park Bangalore_ Infrastructure, Facilities and More - narinav14
Located in the bustling city of Bangalore, Manyata Tech Park stands as one of India’s largest and most prominent tech parks, playing a pivotal role in shaping the city’s reputation as the Silicon Valley of India. Established to cater to the burgeoning IT and technology sectors
14th Edition of International Conference on Computer Vision - ShulagnaSarkar2
About the event
14th Edition of International conference on computer vision
Computer conferences are organized by the ScienceFather group. ScienceFather takes the privilege to invite speakers, participants, students, delegates, and exhibitors from across the globe to its International Conference on Computer Vision, to be held in various beautiful cities of the world. Computer conferences discuss common invention-related issues and additionally trade information and share evidence, thoughts, and insight into advanced developments in the science invention service system. New technology may create many materials and devices with a vast range of applications, such as in science, medicine, electronics, biomaterials, energy production, and consumer products.
Nominations are open! Don't miss it.
Visit: computer.scifat.com
Award Nomination: https://x-i.me/ishnom
Conference Submission: https://x-i.me/anicon
For Enquiry: Computer@scifat.com
Baha Majid WCA4Z IBM Z Customer Council Boston June 2024.pdf - Baha Majid
IBM watsonx Code Assistant for Z is our latest generative AI-assisted mainframe application modernization solution. Mainframe (IBM Z) application modernization is a topic that every mainframe client is addressing to various degrees today, driven largely by digital transformation. With generative AI comes the opportunity to reimagine the mainframe application modernization experience. Infusing generative AI will enable speed and trust, help de-risk, and lower total costs associated with heavy-lifting application modernization initiatives. This document provides an overview of IBM watsonx Code Assistant for Z, which uses the power of generative AI to make it easier for developers to selectively modernize COBOL business services while maintaining mainframe qualities of service.
DECODING JAVA THREAD DUMPS: MASTER THE ART OF ANALYSIS - Tier1 app
Are you ready to unlock the secrets hidden within Java thread dumps? Join us for a hands-on session where we'll delve into effective troubleshooting patterns to swiftly identify the root causes of production problems. Discover the right tools, techniques, and best practices while exploring *real-world case studies of major outages* in Fortune 500 enterprises. Engage in interactive lab exercises where you'll have the opportunity to troubleshoot thread dumps and uncover performance issues firsthand. Join us and become a master of Java thread dump analysis!
WMF 2024 - Unlocking the Future of Data: Powering Next-Gen AI with Vector Data... - Luigi Fugaro
Vector databases are transforming how we handle data, allowing us to search through text, images, and audio by converting them into vectors. Today, we'll dive into the basics of this exciting technology and discuss its potential to revolutionize our next-generation AI applications. We'll examine typical uses for these databases and the essential tools developers need. Plus, we'll zoom in on the advanced capabilities of vector search and semantic caching in Java, showcasing these through a live demo with Redis libraries. Get ready to see how these powerful tools can change the game!
Unlock the Secrets to Effortless Video Creation with Invideo: Your Ultimate G... - The Third Creative Media
"Navigating Invideo: A Comprehensive Guide" is an essential resource for anyone looking to master Invideo, an AI-powered video creation tool. This guide provides step-by-step instructions, helpful tips, and comparisons with other AI video creators. Whether you're a beginner or an experienced video editor, you'll find valuable insights to enhance your video projects and bring your creative ideas to life.
Malibou Pitch Deck For Its €3M Seed Round - sjcobrien
French start-up Malibou raised a €3 million Seed Round to develop its payroll and human resources management platform for VSEs and SMEs. The financing round was led by investors Breega, Y Combinator, and FCVC.
Photoshop Tutorial for Beginners (2024 Edition) - alowpalsadig
Explore the evolution of programming and software development and design in 2024. Discover emerging trends shaping the future of coding in our insightful analysis.
Here's an overview:
Introduction: The Evolution of Programming and Software Development
The Rise of Artificial Intelligence and Machine Learning in Coding
Adopting Low-Code and No-Code Platforms
Quantum Computing: Entering the Software Development Mainstream
Integration of DevOps with Machine Learning: MLOps
Advancements in Cybersecurity Practices
The Growth of Edge Computing
Emerging Programming Languages and Frameworks
Software Development Ethics and AI Regulation
Sustainability in Software Engineering
The Future Workforce: Remote and Distributed Teams
Conclusion: Adapting to the Changing Software Development Landscape
The importance of developing and designing programming in 2024
Programming design and development represents a vital step in keeping pace with technological advancements and meeting ever-changing market needs. This course is intended for anyone who wants to understand the fundamental importance of software development and design, whether you are a beginner or a professional seeking to update your knowledge.
Course objectives:
1. Learn about the basics of software development:
- Understanding software development processes and tools.
- Identify the role of programmers and designers in software projects.
2. Understanding the software design process:
- Learn about the principles of good software design.
- Discussing common design patterns such as Object-Oriented Design.
3. The importance of user experience (UX) in modern software:
- Explore how user experience can improve software acceptance and usability.
- Tools and techniques to analyze and improve user experience.
4. Increase efficiency and productivity through modern development tools:
- Access to the latest programming tools and languages used in the industry.
- Study live examples of applications
The Comprehensive Guide to Validating Audio-Visual Performances.pdfkalichargn70th171
Ensuring the optimal performance of your audio-visual (AV) equipment is crucial for delivering exceptional experiences. AV performance validation is a critical process that verifies the quality and functionality of your AV setup. Whether you're a content creator, a business conducting webinars, or a homeowner creating a home theater, validating your AV performance is essential.
8 Best Automated Android App Testing Tool and Framework in 2024.pdfkalichargn70th171
Regarding mobile operating systems, two major players dominate our thoughts: Android and iPhone. With Android leading the market, software development companies are focused on delivering apps compatible with this OS. Ensuring an app's functionality across various Android devices, OS versions, and hardware specifications is critical, making Android app testing essential.
Boost Your Savings with These Money Management AppsJhone kinadey
A money management app can transform your financial life by tracking expenses, creating budgets, and setting financial goals. These apps offer features like real-time expense tracking, bill reminders, and personalized insights to help you save and manage money effectively. With a user-friendly interface, they simplify financial planning, making it easier to stay on top of your finances and achieve long-term financial stability.
Superpower Your Apache Kafka Applications Development with Complementary Open...Paul Brebner
Kafka Summit talk (Bangalore, India, May 2, 2024, https://events.bizzabo.com/573863/agenda/session/1300469 )
Many Apache Kafka use cases take advantage of Kafka’s ability to integrate multiple heterogeneous systems for stream processing and real-time machine learning scenarios. But Kafka also exists in a rich ecosystem of related but complementary stream processing technologies and tools, particularly from the open-source community. In this talk, we’ll take you on a tour of a selection of complementary tools that can make Kafka even more powerful. We’ll focus on tools for stream processing and querying, streaming machine learning, stream visibility and observation, stream meta-data, stream visualisation, stream development including testing and the use of Generative AI and LLMs, and stream performance and scalability. By the end you will have a good idea of the types of Kafka “superhero” tools that exist, which are my favourites (and what superpowers they have), and how they combine to save your Kafka applications development universe from swamploads of data stagnation monsters!
Luciano Ramalho
Fluent Python
Clear, Concise, and
Effective Programming
SECOND EDITION
Beijing Boston Farnham Sebastopol Tokyo
Table of Contents
Preface. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Part I. Data Structures
1. The Python Data Model. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
What’s New in This Chapter 4
A Pythonic Card Deck 5
How Special Methods Are Used 8
Emulating Numeric Types 9
String Representation 12
Boolean Value of a Custom Type 13
Collection API 14
Overview of Special Methods 15
Why len Is Not a Method 17
Chapter Summary 18
Further Reading 18
2. An Array of Sequences. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
What’s New in This Chapter 22
Overview of Built-In Sequences 22
List Comprehensions and Generator Expressions 25
List Comprehensions and Readability 25
Listcomps Versus map and filter 27
Cartesian Products 27
Generator Expressions 29
Tuples Are Not Just Immutable Lists 30
Tuples as Records 30
Tuples as Immutable Lists 32
Comparing Tuple and List Methods 34
Unpacking Sequences and Iterables 35
Using * to Grab Excess Items 36
Unpacking with * in Function Calls and Sequence Literals 37
Nested Unpacking 37
Pattern Matching with Sequences 38
Pattern Matching Sequences in an Interpreter 43
Slicing 47
Why Slices and Ranges Exclude the Last Item 47
Slice Objects 48
Multidimensional Slicing and Ellipsis 49
Assigning to Slices 50
Using + and * with Sequences 50
Building Lists of Lists 51
Augmented Assignment with Sequences 53
A += Assignment Puzzler 54
list.sort Versus the sorted Built-In 56
When a List Is Not the Answer 59
Arrays 59
Memory Views 62
NumPy 64
Deques and Other Queues 67
Chapter Summary 70
Further Reading 71
3. Dictionaries and Sets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
What’s New in This Chapter 78
Modern dict Syntax 78
dict Comprehensions 79
Unpacking Mappings 80
Merging Mappings with | 80
Pattern Matching with Mappings 81
Standard API of Mapping Types 83
What Is Hashable 84
Overview of Common Mapping Methods 85
Inserting or Updating Mutable Values 87
Automatic Handling of Missing Keys 90
defaultdict: Another Take on Missing Keys 90
The __missing__ Method 91
Inconsistent Usage of __missing__ in the Standard Library 94
Variations of dict 95
vi | Table of Contents
collections.OrderedDict 95
collections.ChainMap 95
collections.Counter 96
shelve.Shelf 97
Subclassing UserDict Instead of dict 97
Immutable Mappings 99
Dictionary Views 101
Practical Consequences of How dict Works 102
Set Theory 103
Set Literals 105
Set Comprehensions 106
Practical Consequences of How Sets Work 107
Set Operations 107
Set Operations on dict Views 110
Chapter Summary 112
Further Reading 113
4. Unicode Text Versus Bytes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
What’s New in This Chapter 118
Character Issues 118
Byte Essentials 120
Basic Encoders/Decoders 123
Understanding Encode/Decode Problems 125
Coping with UnicodeEncodeError 125
Coping with UnicodeDecodeError 126
SyntaxError When Loading Modules with Unexpected Encoding 128
How to Discover the Encoding of a Byte Sequence 128
BOM: A Useful Gremlin 129
Handling Text Files 131
Beware of Encoding Defaults 134
Normalizing Unicode for Reliable Comparisons 140
Case Folding 142
Utility Functions for Normalized Text Matching 143
Extreme “Normalization”: Taking Out Diacritics 144
Sorting Unicode Text 148
Sorting with the Unicode Collation Algorithm 150
The Unicode Database 150
Finding Characters by Name 151
Numeric Meaning of Characters 153
Dual-Mode str and bytes APIs 155
str Versus bytes in Regular Expressions 155
str Versus bytes in os Functions 156
Chapter Summary 157
Further Reading 158
5. Data Class Builders. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
What’s New in This Chapter 164
Overview of Data Class Builders 164
Main Features 167
Classic Named Tuples 169
Typed Named Tuples 172
Type Hints 101 173
No Runtime Effect 173
Variable Annotation Syntax 174
The Meaning of Variable Annotations 175
More About @dataclass 179
Field Options 180
Post-init Processing 183
Typed Class Attributes 185
Initialization Variables That Are Not Fields 186
@dataclass Example: Dublin Core Resource Record 187
Data Class as a Code Smell 190
Data Class as Scaffolding 191
Data Class as Intermediate Representation 191
Pattern Matching Class Instances 192
Simple Class Patterns 192
Keyword Class Patterns 193
Positional Class Patterns 194
Chapter Summary 195
Further Reading 196
6. Object References, Mutability, and Recycling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
What’s New in This Chapter 202
Variables Are Not Boxes 202
Identity, Equality, and Aliases 204
Choosing Between == and is 206
The Relative Immutability of Tuples 207
Copies Are Shallow by Default 208
Deep and Shallow Copies of Arbitrary Objects 211
Function Parameters as References 213
Mutable Types as Parameter Defaults: Bad Idea 214
Defensive Programming with Mutable Parameters 216
del and Garbage Collection 219
Tricks Python Plays with Immutables 221
Chapter Summary 223
Further Reading 224
Part II. Functions as Objects
7. Functions as First-Class Objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
What’s New in This Chapter 232
Treating a Function Like an Object 232
Higher-Order Functions 234
Modern Replacements for map, filter, and reduce 235
Anonymous Functions 236
The Nine Flavors of Callable Objects 237
User-Defined Callable Types 239
From Positional to Keyword-Only Parameters 240
Positional-Only Parameters 242
Packages for Functional Programming 243
The operator Module 243
Freezing Arguments with functools.partial 247
Chapter Summary 249
Further Reading 250
8. Type Hints in Functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
What’s New in This Chapter 254
About Gradual Typing 254
Gradual Typing in Practice 255
Starting with Mypy 256
Making Mypy More Strict 257
A Default Parameter Value 258
Using None as a Default 260
Types Are Defined by Supported Operations 260
Types Usable in Annotations 266
The Any Type 266
Simple Types and Classes 269
Optional and Union Types 270
Generic Collections 271
Tuple Types 274
Generic Mappings 276
Abstract Base Classes 278
Iterable 280
Parameterized Generics and TypeVar 282
Static Protocols 286
Callable 291
NoReturn 294
Annotating Positional Only and Variadic Parameters 295
Imperfect Typing and Strong Testing 296
Chapter Summary 297
Further Reading 298
9. Decorators and Closures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
What’s New in This Chapter 304
Decorators 101 304
When Python Executes Decorators 306
Registration Decorators 308
Variable Scope Rules 308
Closures 311
The nonlocal Declaration 315
Variable Lookup Logic 316
Implementing a Simple Decorator 317
How It Works 318
Decorators in the Standard Library 320
Memoization with functools.cache 320
Using lru_cache 323
Single Dispatch Generic Functions 324
Parameterized Decorators 329
A Parameterized Registration Decorator 329
The Parameterized Clock Decorator 332
A Class-Based Clock Decorator 335
Chapter Summary 336
Further Reading 336
10. Design Patterns with First-Class Functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
What’s New in This Chapter 342
Case Study: Refactoring Strategy 342
Classic Strategy 342
Function-Oriented Strategy 347
Choosing the Best Strategy: Simple Approach 350
Finding Strategies in a Module 351
Decorator-Enhanced Strategy Pattern 353
The Command Pattern 355
Chapter Summary 357
Further Reading 358
Part III. Classes and Protocols
11. A Pythonic Object. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
What’s New in This Chapter 364
Object Representations 364
Vector Class Redux 365
An Alternative Constructor 368
classmethod Versus staticmethod 369
Formatted Displays 370
A Hashable Vector2d 374
Supporting Positional Pattern Matching 377
Complete Listing of Vector2d, Version 3 378
Private and “Protected” Attributes in Python 382
Saving Memory with __slots__ 384
Simple Measure of __slots__ Savings 387
Summarizing the Issues with __slots__ 388
Overriding Class Attributes 389
Chapter Summary 391
Further Reading 392
12. Special Methods for Sequences. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
What’s New in This Chapter 398
Vector: A User-Defined Sequence Type 398
Vector Take #1: Vector2d Compatible 399
Protocols and Duck Typing 402
Vector Take #2: A Sliceable Sequence 403
How Slicing Works 404
A Slice-Aware __getitem__ 406
Vector Take #3: Dynamic Attribute Access 407
Vector Take #4: Hashing and a Faster == 411
Vector Take #5: Formatting 418
Chapter Summary 425
Further Reading 426
13. Interfaces, Protocols, and ABCs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
The Typing Map 432
What’s New in This Chapter 433
Two Kinds of Protocols 434
Programming Ducks 435
Python Digs Sequences 436
Monkey Patching: Implementing a Protocol at Runtime 438
Defensive Programming and “Fail Fast” 440
Goose Typing 442
Subclassing an ABC 447
ABCs in the Standard Library 449
Defining and Using an ABC 451
ABC Syntax Details 457
Subclassing an ABC 458
A Virtual Subclass of an ABC 460
Usage of register in Practice 463
Structural Typing with ABCs 464
Static Protocols 466
The Typed double Function 466
Runtime Checkable Static Protocols 468
Limitations of Runtime Protocol Checks 471
Supporting a Static Protocol 472
Designing a Static Protocol 474
Best Practices for Protocol Design 476
Extending a Protocol 477
The numbers ABCs and Numeric Protocols 478
Chapter Summary 481
Further Reading 482
14. Inheritance: For Better or for Worse. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
What’s New in This Chapter 488
The super() Function 488
Subclassing Built-In Types Is Tricky 490
Multiple Inheritance and Method Resolution Order 494
Mixin Classes 500
Case-Insensitive Mappings 500
Multiple Inheritance in the Real World 502
ABCs Are Mixins Too 502
ThreadingMixIn and ForkingMixIn 503
Django Generic Views Mixins 504
Multiple Inheritance in Tkinter 507
Coping with Inheritance 510
Favor Object Composition over Class Inheritance 510
Understand Why Inheritance Is Used in Each Case 510
Make Interfaces Explicit with ABCs 511
Use Explicit Mixins for Code Reuse 511
Provide Aggregate Classes to Users 511
Subclass Only Classes Designed for Subclassing 512
Avoid Subclassing from Concrete Classes 513
Tkinter: The Good, the Bad, and the Ugly 513
Chapter Summary 514
Further Reading 515
15. More About Type Hints. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
What’s New in This Chapter 519
Overloaded Signatures 520
Max Overload 521
Takeaways from Overloading max 525
TypedDict 526
Type Casting 534
Reading Type Hints at Runtime 537
Problems with Annotations at Runtime 538
Dealing with the Problem 540
Implementing a Generic Class 541
Basic Jargon for Generic Types 544
Variance 544
An Invariant Dispenser 545
A Covariant Dispenser 546
A Contravariant Trash Can 547
Variance Review 549
Implementing a Generic Static Protocol 552
Chapter Summary 554
Further Reading 555
16. Operator Overloading. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561
What’s New in This Chapter 562
Operator Overloading 101 562
Unary Operators 563
Overloading + for Vector Addition 566
Overloading * for Scalar Multiplication 572
Using @ as an Infix Operator 574
Wrapping-Up Arithmetic Operators 576
Rich Comparison Operators 577
Augmented Assignment Operators 580
Chapter Summary 585
Further Reading 587
Part IV. Control Flow
17. Iterators, Generators, and Classic Coroutines. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593
What’s New in This Chapter 594
A Sequence of Words 594
Why Sequences Are Iterable: The iter Function 596
Using iter with a Callable 598
Iterables Versus Iterators 599
Sentence Classes with __iter__ 603
Sentence Take #2: A Classic Iterator 603
Don’t Make the Iterable an Iterator for Itself 605
Sentence Take #3: A Generator Function 606
How a Generator Works 607
Lazy Sentences 610
Sentence Take #4: Lazy Generator 610
Sentence Take #5: Lazy Generator Expression 611
When to Use Generator Expressions 613
An Arithmetic Progression Generator 615
Arithmetic Progression with itertools 618
Generator Functions in the Standard Library 619
Iterable Reducing Functions 630
Subgenerators with yield from 632
Reinventing chain 633
Traversing a Tree 634
Generic Iterable Types 639
Classic Coroutines 641
Example: Coroutine to Compute a Running Average 643
Returning a Value from a Coroutine 646
Generic Type Hints for Classic Coroutines 650
Chapter Summary 652
Further Reading 652
18. with, match, and else Blocks. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 657
What’s New in This Chapter 658
Context Managers and with Blocks 658
The contextlib Utilities 663
Using @contextmanager 664
Pattern Matching in lis.py: A Case Study 669
Scheme Syntax 669
Imports and Types 671
The Parser 671
The Environment 673
The REPL 675
The Evaluator 676
Procedure: A Class Implementing a Closure 685
Using OR-patterns 686
Do This, Then That: else Blocks Beyond if 687
Chapter Summary 689
Further Reading 690
19. Concurrency Models in Python. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 695
What’s New in This Chapter 696
The Big Picture 696
A Bit of Jargon 697
Processes, Threads, and Python’s Infamous GIL 699
A Concurrent Hello World 701
Spinner with Threads 701
Spinner with Processes 704
Spinner with Coroutines 706
Supervisors Side-by-Side 711
The Real Impact of the GIL 713
Quick Quiz 713
A Homegrown Process Pool 716
Process-Based Solution 718
Understanding the Elapsed Times 718
Code for the Multicore Prime Checker 719
Experimenting with More or Fewer Processes 723
Thread-Based Nonsolution 724
Python in the Multicore World 725
System Administration 726
Data Science 727
Server-Side Web/Mobile Development 728
WSGI Application Servers 730
Distributed Task Queues 732
Chapter Summary 733
Further Reading 734
Concurrency with Threads and Processes 734
The GIL 736
Concurrency Beyond the Standard Library 736
Concurrency and Scalability Beyond Python 738
20. Concurrent Executors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 743
What’s New in This Chapter 743
Concurrent Web Downloads 744
A Sequential Download Script 746
Downloading with concurrent.futures 749
Where Are the Futures? 751
Launching Processes with concurrent.futures 754
Multicore Prime Checker Redux 755
Experimenting with Executor.map 758
Downloads with Progress Display and Error Handling 762
Error Handling in the flags2 Examples 766
Using futures.as_completed 769
Chapter Summary 772
Further Reading 772
21. Asynchronous Programming. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 775
What’s New in This Chapter 776
A Few Definitions 777
An asyncio Example: Probing Domains 778
Guido’s Trick to Read Asynchronous Code 780
New Concept: Awaitable 781
Downloading with asyncio and HTTPX 782
The Secret of Native Coroutines: Humble Generators 784
The All-or-Nothing Problem 785
Asynchronous Context Managers 786
Enhancing the asyncio Downloader 787
Using asyncio.as_completed and a Thread 788
Throttling Requests with a Semaphore 790
Making Multiple Requests for Each Download 794
Delegating Tasks to Executors 797
Writing asyncio Servers 799
A FastAPI Web Service 800
An asyncio TCP Server 804
Asynchronous Iteration and Asynchronous Iterables 811
Asynchronous Generator Functions 812
Async Comprehensions and Async Generator Expressions 818
async Beyond asyncio: Curio 821
Type Hinting Asynchronous Objects 824
How Async Works and How It Doesn’t 825
Running Circles Around Blocking Calls 825
The Myth of I/O-Bound Systems 826
Avoiding CPU-Bound Traps 826
Chapter Summary 827
Further Reading 828
Part V. Metaprogramming
22. Dynamic Attributes and Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 835
What’s New in This Chapter 836
Data Wrangling with Dynamic Attributes 836
Exploring JSON-Like Data with Dynamic Attributes 838
The Invalid Attribute Name Problem 842
Flexible Object Creation with __new__ 843
Computed Properties 845
Step 1: Data-Driven Attribute Creation 846
Step 2: Property to Retrieve a Linked Record 848
Step 3: Property Overriding an Existing Attribute 852
Step 4: Bespoke Property Cache 853
Step 5: Caching Properties with functools 855
Using a Property for Attribute Validation 857
LineItem Take #1: Class for an Item in an Order 857
LineItem Take #2: A Validating Property 858
A Proper Look at Properties 860
Properties Override Instance Attributes 861
Property Documentation 864
Coding a Property Factory 865
Handling Attribute Deletion 868
Essential Attributes and Functions for Attribute Handling 869
Special Attributes that Affect Attribute Handling 870
Built-In Functions for Attribute Handling 870
Special Methods for Attribute Handling 871
Chapter Summary 873
Further Reading 873
23. Attribute Descriptors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 879
What’s New in This Chapter 880
Descriptor Example: Attribute Validation 880
LineItem Take #3: A Simple Descriptor 880
LineItem Take #4: Automatic Naming of Storage Attributes 887
LineItem Take #5: A New Descriptor Type 889
Overriding Versus Nonoverriding Descriptors 892
Overriding Descriptors 894
Overriding Descriptor Without __get__ 895
Nonoverriding Descriptor 896
Overwriting a Descriptor in the Class 897
Methods Are Descriptors 898
Descriptor Usage Tips 900
Descriptor Docstring and Overriding Deletion 902
Chapter Summary 903
Further Reading 904
24. Class Metaprogramming. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 907
What’s New in This Chapter 908
Classes as Objects 908
type: The Built-In Class Factory 909
A Class Factory Function 911
Introducing __init_subclass__ 914
Why __init_subclass__ Cannot Configure __slots__ 921
Enhancing Classes with a Class Decorator 922
What Happens When: Import Time Versus Runtime 925
Evaluation Time Experiments 926
Metaclasses 101 931
How a Metaclass Customizes a Class 933
A Nice Metaclass Example 934
Metaclass Evaluation Time Experiment 937
A Metaclass Solution for Checked 942
Metaclasses in the Real World 947
Modern Features Simplify or Replace Metaclasses 947
Metaclasses Are Stable Language Features 948
A Class Can Only Have One Metaclass 948
Metaclasses Should Be Implementation Details 949
A Metaclass Hack with __prepare__ 950
Wrapping Up 952
Chapter Summary 953
Further Reading 954
Afterword. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 959
Index. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 963
1. Message to the comp.lang.python Usenet group, Dec. 23, 2002: “Acrimony in c.l.p”.
Preface
Here’s the plan: when someone uses a feature you don’t understand, simply shoot
them. This is easier than learning something new, and before too long the only living
coders will be writing in an easily understood, tiny subset of Python 0.9.6 <wink>.1
—Tim Peters, legendary core developer and author of The Zen of Python
“Python is an easy to learn, powerful programming language.” Those are the first
words of the official Python 3.10 tutorial. That is true, but there is a catch: because
the language is easy to learn and put to use, many practicing Python programmers
leverage only a fraction of its powerful features.
An experienced programmer may start writing useful Python code in a matter of
hours. As the first productive hours become weeks and months, a lot of developers go
on writing Python code with a very strong accent carried from languages learned
before. Even if Python is your first language, often in academia and in introductory
books it is presented while carefully avoiding language-specific features.
As a teacher introducing Python to programmers experienced in other languages, I
see another problem that this book tries to address: we only miss stuff we know
about. Coming from another language, anyone may guess that Python supports
regular expressions, and look that up in the docs. But if you’ve never seen tuple unpacking
or descriptors before, you will probably not search for them, and you may end up not
using those features just because they are specific to Python.
This book is not an A-to-Z exhaustive reference of Python. Its emphasis is on the
language features that are either unique to Python or not found in many other popular
languages. This is also mostly a book about the core language and some of its
libraries. I will rarely talk about packages that are not in the standard library, even though
the Python package index now lists more than 60,000 libraries, and many of them are
incredibly useful.
Who This Book Is For
This book was written for practicing Python programmers who want to become
proficient in Python 3. I tested the examples in Python 3.10—most of them also in
Python 3.9 and 3.8. When an example requires Python 3.10, it should be clearly
marked.
If you are not sure whether you know enough Python to follow along, review the
topics of the official Python tutorial. Topics covered in the tutorial will not be explained
here, except for some features that are new.
Who This Book Is Not For
If you are just learning Python, this book is going to be hard to follow. Not only that,
if you read it too early in your Python journey, it may give you the impression that
every Python script should leverage special methods and metaprogramming tricks.
Premature abstraction is as bad as premature optimization.
Five Books in One
I recommend that everyone read Chapter 1, “The Python Data Model”. The core
audience for this book should not have trouble jumping directly to any part in this
book after Chapter 1, but often I assume you’ve read preceding chapters in each
specific part. Think of Parts I through V as books within the book.
I tried to emphasize using what is available before discussing how to build your own.
For example, in Part I, Chapter 2 covers sequence types that are ready to use,
including some that don’t get a lot of attention, like collections.deque. Building
user-defined sequences is only addressed in Part III, where we also see how to leverage the
abstract base classes (ABCs) from collections.abc. Creating your own ABCs is
discussed even later in Part III, because I believe it’s important to be comfortable using
an ABC before writing your own.
This approach has a few advantages. First, knowing what is ready to use can save you
from reinventing the wheel. We use existing collection classes more often than we
implement our own, and we can give more attention to the advanced usage of
available tools by deferring the discussion on how to create new ones. We are also more
likely to inherit from existing ABCs than to create a new ABC from scratch. And
finally, I believe it is easier to understand the abstractions after you’ve seen them in
action.
The downside of this strategy is the forward references scattered throughout the
chapters. I hope these will be easier to tolerate now that you know why I chose this
path.
xx | Preface
How the Book Is Organized
Here are the main topics in each part of the book:
Part I, “Data Structures”
Chapter 1 introduces the Python Data Model and explains why the special
methods (e.g., __repr__) are the key to the consistent behavior of objects of all types.
Special methods are covered in more detail throughout the book. The remaining
chapters in this part cover the use of collection types: sequences, mappings, and
sets, as well as the str versus bytes split—the cause of much celebration among
Python 3 users and much pain for Python 2 users migrating their codebases. Also
covered are the high-level class builders in the standard library: named tuple
factories and the @dataclass decorator. Pattern matching—new in Python 3.10—is
covered in sections in Chapters 2, 3, and 5, which discuss sequence patterns,
mapping patterns, and class patterns. The last chapter in Part I is about the life
cycle of objects: references, mutability, and garbage collection.
Part II, “Functions as Objects”
Here we talk about functions as first-class objects in the language: what that
means, how it affects some popular design patterns, and how to implement
function decorators by leveraging closures. Also covered here is the general concept
of callables in Python, function attributes, introspection, parameter annotations,
and the new nonlocal declaration in Python 3. Chapter 8 introduces the major
new topic of type hints in function signatures.
Part III, “Classes and Protocols”
Now the focus is on building classes “by hand”—as opposed to using the class
builders covered in Chapter 5. Like any Object-Oriented (OO) language, Python
has its particular set of features that may or may not be present in the language in
which you and I learned class-based programming. The chapters explain how
to build your own collections, abstract base classes (ABCs), and protocols, as well
as how to cope with multiple inheritance, and how to implement operator
overloading—when that makes sense. Chapter 15 continues the coverage of
type hints.
Part IV, “Control Flow”
Covered in this part are the language constructs and libraries that go beyond
traditional control flow with conditionals, loops, and subroutines. We start with
generators, then visit context managers and coroutines, including the challenging
but powerful new yield from syntax. Chapter 18 includes a significant example
using pattern matching in a simple but functional language interpreter.
Chapter 19, “Concurrency Models in Python” is a new chapter presenting an overview
of alternatives for concurrent and parallel processing in Python, their limitations,
and how software architecture allows Python to operate at web scale. I rewrote
the chapter about asynchronous programming to emphasize core language
features—e.g., await, async def, async for, and async with—and show how they
are used with asyncio and other frameworks.
Part V, “Metaprogramming”
This part starts with a review of techniques for building classes with attributes
created dynamically to handle semi-structured data, such as JSON datasets. Next,
we cover the familiar properties mechanism, before diving into how object
attribute access works at a lower level in Python using descriptors. The
relationship among functions, methods, and descriptors is explained. Throughout
Part V, the step-by-step implementation of a field validation library uncovers
subtle issues that lead to the advanced tools of the final chapter: class decorators
and metaclasses.
Hands-On Approach
Often we’ll use the interactive Python console to explore the language and libraries. I
feel it is important to emphasize the power of this learning tool, particularly for those
readers who’ve had more experience with static, compiled languages that don’t
provide a read-eval-print loop (REPL).
One of the standard Python testing packages, doctest, works by simulating console
sessions and verifying that the expressions evaluate to the responses shown. I used
doctest to check most of the code in this book, including the console listings. You
don’t need to use or even know about doctest to follow along: the key feature of
doctests is that they look like transcripts of interactive Python console sessions, so
you can easily try out the demonstrations yourself.
Sometimes I will explain what we want to accomplish by showing a doctest before the
code that makes it pass. Firmly establishing what is to be done before thinking about
how to do it helps focus our coding effort. Writing tests first is the basis of test-driven
development (TDD), and I’ve also found it helpful when teaching. If you are
unfamiliar with doctest, take a look at its documentation and this book’s example code
repository.
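If doctest is new to you, here is a minimal, self-contained sketch of the idea. The average function and its docstring are invented for this illustration; they are not taken from the book’s repository:

```python
import doctest

def average(values):
    """Return the arithmetic mean of a sequence of numbers.

    The lines below look like a transcript of an interactive console
    session; doctest finds them and checks that each expression really
    produces the output shown.

    >>> average([1, 2, 3, 4])
    2.5
    >>> average([10])
    10.0
    """
    return sum(values) / len(values)

# Collect and run every doctest found in this module's docstrings.
results = doctest.testmod()
print(results)  # TestResults(failed=0, attempted=2) when all examples pass
```

Change 2.5 to 2.4 in the docstring and run the script again to see how doctest reports the expected versus the actual output.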
I also wrote unit tests for some of the larger examples using pytest—which I find
easier to use and more powerful than the unittest module in the standard library. You’ll
find that you can verify the correctness of most of the code in the book by typing
python3 -m doctest example_script.py or pytest in the command shell of your
OS. The pytest.ini configuration at the root of the example code repository ensures
that doctests are collected and executed by the pytest command.
Soapbox: My Personal Perspective
I have been using, teaching, and debating Python since 1998, and I enjoy studying
and comparing programming languages, their design, and the theory behind them.
At the end of some chapters, I have added “Soapbox” sidebars with my own
perspective about Python and other languages. Feel free to skip these if you are not into such
discussions. Their content is completely optional.
Companion Website: fluentpython.com
Covering new features—like type hints, data classes, and pattern matching—made
this second edition almost 30% larger than the first. To keep the book luggable,
I moved some content to fluentpython.com. You will find links to articles I published
there in several chapters. Some sample chapters are also in the companion website.
The full text is available online at the O’Reilly Learning subscription service. The
example code repository is on GitHub.
Conventions Used in This Book
The following typographical conventions are used in this book:
Italic
Indicates new terms, URLs, email addresses, filenames, and file extensions.
Constant width
Used for program listings, as well as within paragraphs to refer to program
elements such as variable or function names, databases, data types, environment
variables, statements, and keywords.
Note that when a line break falls within a constant_width term, a hyphen is not
added—it could be misunderstood as part of the term.
Constant width bold
Shows commands or other text that should be typed literally by the user.
Constant width italic
Shows text that should be replaced with user-supplied values or by values
determined by context.
This element signifies a tip or suggestion.
This element signifies a general note.
This element indicates a warning or caution.
Using Code Examples
Every script and most code snippets that appear in the book are available in the
Fluent Python code repository on GitHub at https://fpy.li/code.
If you have a technical question or a problem using the code examples, please send
email to bookquestions@oreilly.com.
This book is here to help you get your job done. In general, if example code is offered
with this book, you may use it in your programs and documentation. You do not
need to contact us for permission unless you’re reproducing a significant portion
of the code. For example, writing a program that uses several chunks of code from
this book does not require permission. Selling or distributing examples from O’Reilly
books does require permission. Answering a question by citing this book and quoting
example code does not require permission. Incorporating a significant amount of
example code from this book into your product’s documentation does require
permission.
We appreciate, but generally do not require, attribution. An attribution usually
includes the title, author, publisher, and ISBN, e.g., “Fluent Python, 2nd ed., by
Luciano Ramalho (O’Reilly). Copyright 2022 Luciano Ramalho, 978-1-492-05635-5.”
If you feel your use of code examples falls outside fair use or the permission given
above, feel free to contact us at permissions@oreilly.com.
O’Reilly Online Learning
For more than 40 years, O’Reilly Media has provided
technology and business training, knowledge, and insight to help
companies succeed.
Our unique network of experts and innovators share their knowledge and expertise
through books, articles, and our online learning platform. O’Reilly’s online learning
platform gives you on-demand access to live training courses, in-depth learning
paths, interactive coding environments, and a vast collection of text and video from
O’Reilly and 200+ other publishers. For more information, visit http://oreilly.com.
How to Contact Us
Please address comments and questions concerning this book to the publisher:
O’Reilly Media, Inc.
1005 Gravenstein Highway North
Sebastopol, CA 95472
800-998-9938 (in the United States or Canada)
707-829-0515 (international or local)
707-829-0104 (fax)
We have a web page for this book, where we list errata, examples, and any additional
information. You can access this page at https://fpy.li/p-4.
Email bookquestions@oreilly.com to comment or ask technical questions about this
book.
For news and information about our books and courses, visit http://oreilly.com.
Find us on Facebook: http://facebook.com/oreilly.
Follow us on Twitter: https://twitter.com/oreillymedia.
Watch us on YouTube: http://www.youtube.com/oreillymedia.
Acknowledgments
I did not expect updating a Python book five years later to be such a major
undertaking, but it was. Marta Mello, my beloved wife, was always there when I needed her.
My dear friend Leonardo Rochael helped me from the earliest writing to the final
technical review, including consolidating and double-checking the feedback from the
other tech reviewers, readers, and editors. I honestly don’t know if I’d have made it
without your support, Marta and Leo. Thank you so much!
Jürgen Gmach, Caleb Hattingh, Jess Males, Leonardo Rochael, and Miroslav Šedivý
were the outstanding technical review team for the second edition. They reviewed the
whole book. Bill Behrman, Bruce Eckel, Renato Oliveira, and Rodrigo Bernardo
Pimentel reviewed specific chapters. Their many suggestions from different perspec‐
tives made the book much better.
Many readers sent corrections or made other contributions during the early release
phase, including: Guilherme Alves, Christiano Anderson, Konstantin Baikov, K. Alex
Birch, Michael Boesl, Lucas Brunialti, Sergio Cortez, Gino Crecco, Chukwuerika
Dike, Juan Esteras, Federico Fissore, Will Frey, Tim Gates, Alexander Hagerman,
Chen Hanxiao, Sam Hyeong, Simon Ilincev, Parag Kalra, Tim King, David Kwast,
Tina Lapine, Wanpeng Li, Guto Maia, Scott Martindale, Mark Meyer, Andy McFar‐
land, Chad McIntire, Diego Rabatone Oliveira, Francesco Piccoli, Meredith Rawls,
Michael Robinson, Federico Tula Rovaletti, Tushar Sadhwani, Arthur Constantino
Scardua, Randal L. Schwartz, Avichai Sefati, Guannan Shen, William Simpson, Vivek
Vashist, Jerry Zhang, Paul Zuradzki—and others who did not want to be named, sent
corrections after I delivered the draft, or are omitted because I failed to record their
names—sorry.
During my research, I learned about typing, concurrency, pattern matching, and
metaprogramming while interacting with Michael Albert, Pablo Aguilar, Kaleb Bar‐
rett, David Beazley, J. S. O. Bueno, Bruce Eckel, Martin Fowler, Ivan Levkivskyi, Alex
Martelli, Peter Norvig, Sebastian Rittau, Guido van Rossum, Carol Willing, and Jelle
Zijlstra.
O’Reilly editors Jeff Bleiel, Jill Leonard, and Amelia Blevins made suggestions that
improved the flow of the book in many places. Jeff Bleiel and production editor
Danny Elfanbaum supported me throughout this long marathon.
The insights and suggestions of every one of them made the book better and more
accurate. Inevitably, there will still be bugs of my own creation in the final product. I
apologize in advance.
Finally, I want to extend my heartfelt thanks to my colleagues at Thoughtworks Brazil
—and especially to my sponsor, Alexey Bôas—who supported this project in many
ways, all the way.
Of course, everyone who helped me understand Python and write the first edition
now deserves double thanks. There would be no second edition without a successful
first.
Acknowledgments for the First Edition
The Bauhaus chess set by Josef Hartwig is an example of excellent design: beautiful,
simple, and clear. Guido van Rossum, son of an architect and brother of a master font
designer, created a masterpiece of language design. I love teaching Python because it
is beautiful, simple, and clear.
Alex Martelli and Anna Ravenscroft were the first people to see the outline of this
book and encouraged me to submit it to O’Reilly for publication. Their books taught
me idiomatic Python and are models of clarity, accuracy, and depth in technical writ‐
ing. Alex’s 6,200+ Stack Overflow posts are a fountain of insights about the language
and its proper use.
Martelli and Ravenscroft were also technical reviewers of this book, along with Len‐
nart Regebro and Leonardo Rochael. Everyone in this outstanding technical review
team has at least 15 years of Python experience, with many contributions to high-
impact Python projects in close contact with other developers in the community.
Together they sent me hundreds of corrections, suggestions, questions, and opinions,
adding tremendous value to the book. Victor Stinner kindly reviewed Chapter 21,
bringing his expertise as an asyncio maintainer to the technical review team. It was a
great privilege and a pleasure to collaborate with them over these past several
months.
Editor Meghan Blanchette was an outstanding mentor, helping me improve the orga‐
nization and flow of the book, letting me know when it was boring, and keeping me
from delaying even more. Brian MacDonald edited chapters in Part II while Meghan
was away. I enjoyed working with them, and with everyone I’ve contacted at O’Reilly,
including the Atlas development and support team (Atlas is the O’Reilly book pub‐
lishing platform, which I was fortunate to use to write this book).
Mario Domenech Goulart provided numerous, detailed suggestions starting with the
first early release. I also received valuable feedback from Dave Pawson, Elias Dor‐
neles, Leonardo Alexandre Ferreira Leite, Bruce Eckel, J. S. Bueno, Rafael Gonçalves,
Alex Chiaranda, Guto Maia, Lucas Vido, and Lucas Brunialti.
Over the years, a number of people urged me to become an author, but the most per‐
suasive were Rubens Prates, Aurelio Jargas, Rudá Moura, and Rubens Altimari.
Mauricio Bussab opened many doors for me, including my first real shot at writing a
book. Renzo Nuccitelli supported this writing project all the way, even if that meant a
slow start for our partnership at python.pro.br.
The wonderful Brazilian Python community is knowledgeable, generous, and fun.
The Python Brasil group has thousands of people, and our national conferences bring
together hundreds, but the most influential in my journey as a Pythonista were Leo‐
nardo Rochael, Adriano Petrich, Daniel Vainsencher, Rodrigo RBP Pimentel, Bruno
Gola, Leonardo Santagada, Jean Ferri, Rodrigo Senra, J. S. Bueno, David Kwast, Luiz
Irber, Osvaldo Santana, Fernando Masanori, Henrique Bastos, Gustavo Niemayer,
Pedro Werneck, Gustavo Barbieri, Lalo Martins, Danilo Bellini, and Pedro Kroger.
Dorneles Tremea was a great friend (incredibly generous with his time and knowl‐
edge), an amazing hacker, and the most inspiring leader of the Brazilian Python
Association. He left us too early.
My students over the years taught me a lot through their questions, insights, feed‐
back, and creative solutions to problems. Érico Andrei and Simples Consultoria
made it possible for me to focus on being a Python teacher for the first time.
Martijn Faassen was my Grok mentor and shared invaluable insights with me about
Python and Neanderthals. His work and that of Paul Everitt, Chris McDonough, Tres
Seaver, Jim Fulton, Shane Hathaway, Lennart Regebro, Alan Runyan, Alexander
Limi, Martijn Pieters, Godefroid Chapelle, and others from the Zope, Plone, and Pyr‐
amid planets have been decisive in my career. Thanks to Zope and surfing the first
web wave, I was able to start making a living with Python in 1998. José Octavio Cas‐
tro Neves was my partner in the first Python-centric software house in Brazil.
I have too many gurus in the wider Python community to list them all, but besides
those already mentioned, I am indebted to Steve Holden, Raymond Hettinger, A.M.
Kuchling, David Beazley, Fredrik Lundh, Doug Hellmann, Nick Coghlan, Mark Pil‐
grim, Martijn Pieters, Bruce Eckel, Michele Simionato, Wesley Chun, Brandon Craig
Rhodes, Philip Guo, Daniel Greenfeld, Audrey Roy, and Brett Slatkin for teaching me
new and better ways to teach Python.
Most of these pages were written in my home office and in two labs: CoffeeLab and
Garoa Hacker Clube. CoffeeLab is the caffeine-geek headquarters in Vila Madalena,
São Paulo, Brazil. Garoa Hacker Clube is a hackerspace open to all: a community lab
where anyone can freely try out new ideas.
The Garoa community provided inspiration, infrastructure, and slack. I think Aleph
would enjoy this book.
My mother, Maria Lucia, and my father, Jairo, always supported me in every way. I
wish he were here to see the book; I am glad I can share it with her.
My wife, Marta Mello, endured 15 months of a husband who was always working,
but remained supportive and coached me through some critical moments in the
project when I feared I might drop out of the marathon.
Thank you all, for everything.
1 “Story of Jython”, written as a foreword to Jython Essentials by Samuele Pedroni and Noel Rappin (O’Reilly).
CHAPTER 1
The Python Data Model
Guido’s sense of the aesthetics of language design is amazing. I’ve met many fine lan‐
guage designers who could build theoretically beautiful languages that no one would
ever use, but Guido is one of those rare people who can build a language that is just
slightly less theoretically beautiful but thereby is a joy to write programs in.
—Jim Hugunin, creator of Jython, cocreator of AspectJ, and architect of
the .Net DLR1
One of the best qualities of Python is its consistency. After working with Python for a
while, you are able to start making informed, correct guesses about features that are
new to you.
However, if you learned another object-oriented language before Python, you may
find it strange to use len(collection) instead of collection.len(). This apparent
oddity is the tip of an iceberg that, when properly understood, is the key to every‐
thing we call Pythonic. The iceberg is called the Python Data Model, and it is the API
that we use to make our own objects play well with the most idiomatic language
features.
You can think of the data model as a description of Python as a framework. It formal‐
izes the interfaces of the building blocks of the language itself, such as sequences,
functions, iterators, coroutines, classes, context managers, and so on.
When using a framework, we spend a lot of time coding methods that are called by
the framework. The same happens when we leverage the Python Data Model to build
new classes. The Python interpreter invokes special methods to perform basic object
operations, often triggered by special syntax. The special method names are always
written with leading and trailing double underscores. For example, the syntax
obj[key] is supported by the __getitem__ special method. In order to evaluate
my_collection[key], the interpreter calls my_collection.__getitem__(key).
We implement special methods when we want our objects to support and interact
with fundamental language constructs such as:
• Collections
• Attribute access
• Iteration (including asynchronous iteration using async for)
• Operator overloading
• Function and method invocation
• String representation and formatting
• Asynchronous programming using await
• Object creation and destruction
• Managed contexts using the with or async with statements
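As a quick illustration of that mapping, here is a small hypothetical class (not from the book) in which each piece of special syntax routes to a dunder method:

```python
class Squares:
    """Hypothetical example: special syntax triggers dunder methods."""

    def __init__(self, n):
        self._data = [i * i for i in range(n)]

    def __len__(self):  # called by len(s)
        return len(self._data)

    def __getitem__(self, index):  # called by s[index]
        return self._data[index]

    def __contains__(self, value):  # called by value in s
        return value in self._data


s = Squares(5)
print(len(s))   # 5
print(s[2])     # 4
print(9 in s)   # True
```

Each expression on the last three lines is ordinary Python syntax; the interpreter translates it into a call to the corresponding special method.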
Magic and Dunder
The term magic method is slang for special method, but how do we
talk about a specific method like __getitem__? I learned to say
“dunder-getitem” from author and teacher Steve Holden. “Dun‐
der” is a shortcut for “double underscore before and after.” That’s
why the special methods are also known as dunder methods. The
“Lexical Analysis” chapter of The Python Language Reference warns
that “Any use of __*__ names, in any context, that does not follow
explicitly documented use, is subject to breakage without warning.”
What’s New in This Chapter
This chapter had few changes from the first edition because it is an introduction to
the Python Data Model, which is quite stable. The most significant changes are:
• Special methods supporting asynchronous programming and other new features,
added to the tables in “Overview of Special Methods” on page 15.
• Figure 1-2 showing the use of special methods in “Collection API” on page 14,
including the collections.abc.Collection abstract base class introduced in
Python 3.6.
Also, here and throughout this second edition I adopted the f-string syntax introduced in Python 3.6, which is more readable and often more convenient than the older string formatting notations: the str.format() method and the % operator.
One reason to still use my_fmt.format() is when the definition of
my_fmt must be in a different place in the code than where the for‐
matting operation needs to happen. For instance, when my_fmt has
multiple lines and is better defined in a constant, or when it must
come from a configuration file, or from the database. Those are
real needs, but don’t happen very often.
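A sketch of that situation (the template and function names here are made up for illustration): the format string lives in a module-level constant, far from the call site, which an f-string cannot do.

```python
# Hypothetical report template: defined apart from where it is filled in,
# as it might be if it were loaded from a configuration file.
REPORT_FMT = (
    'Sales report for {month}\n'
    'Units sold: {units}\n'
    'Revenue: ${revenue:,.2f}\n'
)


def render_report(month, units, revenue):
    # str.format() binds the values at the call site, long after the
    # template was defined.
    return REPORT_FMT.format(month=month, units=units, revenue=revenue)


print(render_report('June', 120, 4500))
```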
A Pythonic Card Deck
Example 1-1 is simple, but it demonstrates the power of implementing just two spe‐
cial methods, __getitem__ and __len__.
Example 1-1. A deck as a sequence of playing cards
import collections

Card = collections.namedtuple('Card', ['rank', 'suit'])

class FrenchDeck:
    ranks = [str(n) for n in range(2, 11)] + list('JQKA')
    suits = 'spades diamonds clubs hearts'.split()

    def __init__(self):
        self._cards = [Card(rank, suit) for suit in self.suits
                                        for rank in self.ranks]

    def __len__(self):
        return len(self._cards)

    def __getitem__(self, position):
        return self._cards[position]
The first thing to note is the use of collections.namedtuple to construct a simple
class to represent individual cards. We use namedtuple to build classes of objects that
are just bundles of attributes with no custom methods, like a database record. In the
example, we use it to provide a nice representation for the cards in the deck, as shown
in the console session:
>>> beer_card = Card('7', 'diamonds')
>>> beer_card
Card(rank='7', suit='diamonds')
But the point of this example is the FrenchDeck class. It’s short, but it packs a punch.
First, like any standard Python collection, a deck responds to the len() function by
returning the number of cards in it:
>>> deck = FrenchDeck()
>>> len(deck)
52
Reading specific cards from the deck—say, the first or the last—is easy, thanks to the
__getitem__ method:
>>> deck[0]
Card(rank='2', suit='spades')
>>> deck[-1]
Card(rank='A', suit='hearts')
Should we create a method to pick a random card? No need. Python already has a
function to get a random item from a sequence: random.choice. We can use it on a
deck instance:
>>> from random import choice
>>> choice(deck)
Card(rank='3', suit='hearts')
>>> choice(deck)
Card(rank='K', suit='spades')
>>> choice(deck)
Card(rank='2', suit='clubs')
We’ve just seen two advantages of using special methods to leverage the Python Data
Model:
• Users of your classes don’t have to memorize arbitrary method names for stan‐
dard operations. (“How to get the number of items? Is it .size(), .length(), or
what?”)
• It’s easier to benefit from the rich Python standard library and avoid reinventing
the wheel, like the random.choice function.
But it gets better.
Because our __getitem__ delegates to the [] operator of self._cards, our deck
automatically supports slicing. Here’s how we look at the top three cards from a
brand-new deck, and then pick just the aces by starting at index 12 and skipping 13
cards at a time:
>>> deck[:3]
[Card(rank='2', suit='spades'), Card(rank='3', suit='spades'),
Card(rank='4', suit='spades')]
>>> deck[12::13]
[Card(rank='A', suit='spades'), Card(rank='A', suit='diamonds'),
Card(rank='A', suit='clubs'), Card(rank='A', suit='hearts')]
Just by implementing the __getitem__ special method, our deck is also iterable:
>>> for card in deck:  # doctest: +ELLIPSIS
...     print(card)
Card(rank='2', suit='spades')
Card(rank='3', suit='spades')
Card(rank='4', suit='spades')
...
We can also iterate over the deck in reverse:
>>> for card in reversed(deck):  # doctest: +ELLIPSIS
...     print(card)
Card(rank='A', suit='hearts')
Card(rank='K', suit='hearts')
Card(rank='Q', suit='hearts')
...
Ellipsis in doctests
Whenever possible, I extracted the Python console listings in this
book from doctest to ensure accuracy. When the output was too
long, the elided part is marked by an ellipsis (...), like in the last
line in the preceding code. In such cases, I used the # doctest:
+ELLIPSIS directive to make the doctest pass. If you are trying
these examples in the interactive console, you may omit the doctest
comments altogether.
Iteration is often implicit. If a collection has no __contains__ method, the in opera‐
tor does a sequential scan. Case in point: in works with our FrenchDeck class because
it is iterable. Check it out:
>>> Card('Q', 'hearts') in deck
True
>>> Card('7', 'beasts') in deck
False
How about sorting? A common system of ranking cards is by rank (with aces being
highest), then by suit in the order of spades (highest), hearts, diamonds, and clubs
(lowest). Here is a function that ranks cards by that rule, returning 0 for the 2 of clubs
and 51 for the ace of spades:
suit_values = dict(spades=3, hearts=2, diamonds=1, clubs=0)

def spades_high(card):
    rank_value = FrenchDeck.ranks.index(card.rank)
    return rank_value * len(suit_values) + suit_values[card.suit]
Given spades_high, we can now list our deck in order of increasing rank:
2 A C struct is a record type with named fields.
>>> for card in sorted(deck, key=spades_high):  # doctest: +ELLIPSIS
...     print(card)
Card(rank='2', suit='clubs')
Card(rank='2', suit='diamonds')
Card(rank='2', suit='hearts')
... (46 cards omitted)
Card(rank='A', suit='diamonds')
Card(rank='A', suit='hearts')
Card(rank='A', suit='spades')
Although FrenchDeck implicitly inherits from the object class, most of its function‐
ality is not inherited, but comes from leveraging the data model and composition. By
implementing the special methods __len__ and __getitem__, our FrenchDeck
behaves like a standard Python sequence, allowing it to benefit from core language
features (e.g., iteration and slicing) and from the standard library, as shown by the
examples using random.choice, reversed, and sorted. Thanks to composition, the
__len__ and __getitem__ implementations can delegate all the work to a list
object, self._cards.
How About Shuffling?
As implemented so far, a FrenchDeck cannot be shuffled because it
is immutable: the cards and their positions cannot be changed,
except by violating encapsulation and handling the _cards
attribute directly. In Chapter 13, we will fix that by adding a one-
line __setitem__ method.
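Peeking ahead, here is a sketch of that kind of fix (the class name and details here are illustrative, not the Chapter 13 listing verbatim): with __setitem__ delegating to the inner list, random.shuffle can rearrange the deck in place.

```python
import collections
import random

Card = collections.namedtuple('Card', ['rank', 'suit'])


class FrenchDeck2:
    """Sketch: FrenchDeck plus __setitem__, so random.shuffle works."""

    ranks = [str(n) for n in range(2, 11)] + list('JQKA')
    suits = 'spades diamonds clubs hearts'.split()

    def __init__(self):
        self._cards = [Card(rank, suit) for suit in self.suits
                                        for rank in self.ranks]

    def __len__(self):
        return len(self._cards)

    def __getitem__(self, position):
        return self._cards[position]

    def __setitem__(self, position, card):  # the one-line addition
        self._cards[position] = card


deck = FrenchDeck2()
random.shuffle(deck)  # shuffles in place via __len__, __getitem__, __setitem__
```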
How Special Methods Are Used
The first thing to know about special methods is that they are meant to be called by
the Python interpreter, and not by you. You don’t write my_object.__len__(). You
write len(my_object) and, if my_object is an instance of a user-defined class, then
Python calls the __len__ method you implemented.
But the interpreter takes a shortcut when dealing with built-in types like list, str, bytearray, or extensions like NumPy arrays. Python variable-sized collections written in C include a struct² called PyVarObject, which has an ob_size field holding
the number of items in the collection. So, if my_object is an instance of one of those
built-ins, then len(my_object) retrieves the value of the ob_size field, and this is
much faster than calling a method.
More often than not, the special method call is implicit. For example, the statement
for i in x: actually causes the invocation of iter(x), which in turn may call
x.__iter__() if that is available, or use x.__getitem__(), as in the FrenchDeck
example.
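A minimal sketch of that fallback (the class here is hypothetical): a class with __getitem__ but no __iter__ still works with iter() and for, because Python calls __getitem__ with indexes 0, 1, 2… until it raises IndexError.

```python
class Letters:
    """Hypothetical class with __getitem__ but no __iter__."""

    def __init__(self, text):
        self._text = text

    def __getitem__(self, index):
        return self._text[index]  # raises IndexError past the end


# iter() builds an iterator from __getitem__ using the legacy protocol:
# call with 0, 1, 2... until IndexError signals the end.
it = iter(Letters('abc'))
print(list(it))  # ['a', 'b', 'c']
```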
Normally, your code should not have many direct calls to special methods. Unless
you are doing a lot of metaprogramming, you should be implementing special meth‐
ods more often than invoking them explicitly. The only special method that is fre‐
quently called by user code directly is __init__ to invoke the initializer of the
superclass in your own __init__ implementation.
If you need to invoke a special method, it is usually better to call the related built-in
function (e.g., len, iter, str, etc.). These built-ins call the corresponding special
method, but often provide other services and—for built-in types—are faster than
method calls. See, for example, “Using iter with a Callable” on page 598 in Chapter 17.
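A quick illustration of that advice (a contrived example, not from the book): the built-in and the dunder agree, but the built-in is the idiomatic call and may add services of its own.

```python
nums = [10, 20, 30]

# Same result, but len(nums) is the idiomatic spelling:
assert len(nums) == nums.__len__() == 3


# One extra service a built-in provides: str() falls back to __repr__
# when a class defines no __str__ of its own.
class Point:
    def __repr__(self):
        return 'Point()'


assert str(Point()) == repr(Point()) == 'Point()'
```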
In the next sections, we’ll see some of the most important uses of special methods:
• Emulating numeric types
• String representation of objects
• Boolean value of an object
• Implementing collections
Emulating Numeric Types
Several special methods allow user objects to respond to operators such as +. We will
cover that in more detail in Chapter 16, but here our goal is to further illustrate the
use of special methods through another simple example.
We will implement a class to represent two-dimensional vectors—that is, Euclidean
vectors like those used in math and physics (see Figure 1-1).
The built-in complex type can be used to represent two-
dimensional vectors, but our class can be extended to represent n-
dimensional vectors. We will do that in Chapter 17.
Figure 1-1. Example of two-dimensional vector addition; Vector(2, 4) + Vector(2, 1)
results in Vector(4, 5).
We will start designing the API for such a class by writing a simulated console session
that we can use later as a doctest. The following snippet tests the vector addition pic‐
tured in Figure 1-1:
>>> v1 = Vector(2, 4)
>>> v2 = Vector(2, 1)
>>> v1 + v2
Vector(4, 5)
Note how the + operator results in a new Vector, displayed in a friendly format at the
console.
The abs built-in function returns the absolute value of integers and floats, and the
magnitude of complex numbers, so to be consistent, our API also uses abs to calcu‐
late the magnitude of a vector:
>>> v = Vector(3, 4)
>>> abs(v)
5.0
We can also implement the * operator to perform scalar multiplication (i.e., multi‐
plying a vector by a number to make a new vector with the same direction and a
multiplied magnitude):
>>> v * 3
Vector(9, 12)
>>> abs(v * 3)
15.0
Example 1-2 is a Vector class implementing the operations just described, through
the use of the special methods __repr__, __abs__, __add__, and __mul__.
Example 1-2. A simple two-dimensional vector class
"""
vector2d.py: a simplistic class demonstrating some special methods

It is simplistic for didactic reasons. It lacks proper error handling,
especially in the ``__add__`` and ``__mul__`` methods.

This example is greatly expanded later in the book.

Addition::

    >>> v1 = Vector(2, 4)
    >>> v2 = Vector(2, 1)
    >>> v1 + v2
    Vector(4, 5)

Absolute value::

    >>> v = Vector(3, 4)
    >>> abs(v)
    5.0

Scalar multiplication::

    >>> v * 3
    Vector(9, 12)
    >>> abs(v * 3)
    15.0
"""

import math

class Vector:

    def __init__(self, x=0, y=0):
        self.x = x
        self.y = y

    def __repr__(self):
        return f'Vector({self.x!r}, {self.y!r})'

    def __abs__(self):
        return math.hypot(self.x, self.y)

    def __bool__(self):
        return bool(abs(self))

    def __add__(self, other):
        x = self.x + other.x
        y = self.y + other.y
        return Vector(x, y)

    def __mul__(self, scalar):
        return Vector(self.x * scalar, self.y * scalar)
We implemented five special methods in addition to the familiar __init__. Note that
none of them is directly called within the class or in the typical usage of the class
illustrated by the doctests. As mentioned before, the Python interpreter is the only
frequent caller of most special methods.
Example 1-2 implements two operators: + and *, to show basic usage of __add__ and
__mul__. In both cases, the methods create and return a new instance of Vector, and
do not modify either operand—self or other are merely read. This is the expected
behavior of infix operators: to create new objects and not touch their operands. I will
have a lot more to say about that in Chapter 16.
As implemented, Example 1-2 allows multiplying a Vector by a
number, but not a number by a Vector, which violates the commu‐
tative property of scalar multiplication. We will fix that with the
special method __rmul__ in Chapter 16.
In the following sections, we discuss the other special methods in Vector.
String Representation
The __repr__ special method is called by the repr built-in to get the string represen‐
tation of the object for inspection. Without a custom __repr__, Python’s console
would display a Vector instance as <Vector object at 0x10e100070>.
The interactive console and debugger call repr on the results of the expressions eval‐
uated, as does the %r placeholder in classic formatting with the % operator, and the !r
conversion field in the new format string syntax used in f-strings and the str.format
method.
Note that the f-string in our __repr__ uses !r to get the standard representation of
the attributes to be displayed. This is good practice, because it shows the crucial dif‐
ference between Vector(1, 2) and Vector('1', '2')—the latter would not work in
the context of this example, because the constructor’s arguments should be numbers,
not str.
The string returned by __repr__ should be unambiguous and, if possible, match the
source code necessary to re-create the represented object. That is why our Vector
representation looks like calling the constructor of the class (e.g., Vector(3, 4)).
In contrast, __str__ is called by the str() built-in and implicitly used by the print
function. It should return a string suitable for display to end users.
Sometimes the same string returned by __repr__ is user-friendly, and you don’t need to
code __str__ because the implementation inherited from the object class calls
__repr__ as a fallback. Example 5-2 is one of several examples in this book with a
custom __str__.
Programmers with prior experience in languages with a toString
method tend to implement __str__ and not __repr__. If you only
implement one of these special methods in Python, choose
__repr__.
“What is the difference between __str__ and __repr__ in
Python?” is a Stack Overflow question with excellent contributions
from Pythonistas Alex Martelli and Martijn Pieters.
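A short sketch of the distinction (a hypothetical class, not one from the book): __repr__ aims at developers and mirrors the constructor call; __str__ aims at end users.

```python
class PlayingCard:
    """Hypothetical class illustrating __repr__ vs. __str__."""

    def __init__(self, rank, suit):
        self.rank, self.suit = rank, suit

    def __repr__(self):
        # Unambiguous; looks like the source code to re-create the object.
        return f'PlayingCard({self.rank!r}, {self.suit!r})'

    def __str__(self):
        # Friendly display for end users; print() uses this one.
        return f'{self.rank} of {self.suit}'


c = PlayingCard('Q', 'hearts')
print(repr(c))  # PlayingCard('Q', 'hearts')
print(str(c))   # Q of hearts
```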
Boolean Value of a Custom Type
Although Python has a bool type, it accepts any object in a Boolean context, such as
the expression controlling an if or while statement, or as operands to and, or, and
not. To determine whether a value x is truthy or falsy, Python applies bool(x), which
returns either True or False.
By default, instances of user-defined classes are considered truthy, unless either
__bool__ or __len__ is implemented. Basically, bool(x) calls x.__bool__() and uses
the result. If __bool__ is not implemented, Python tries to invoke x.__len__(), and
if that returns zero, bool returns False. Otherwise bool returns True.
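The rules above can be demonstrated with two tiny hypothetical classes: one that defines only __len__, and one that defines neither hook.

```python
class Box:
    """Hypothetical: defines only __len__; bool() falls back to it."""

    def __init__(self, items):
        self._items = list(items)

    def __len__(self):
        return len(self._items)


class Plain:
    """No __bool__ and no __len__: instances are always truthy."""


assert bool(Box([])) is False   # __len__ returns 0, so falsy
assert bool(Box([1])) is True   # nonzero length, so truthy
assert bool(Plain()) is True    # default for user-defined classes
```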
Our implementation of __bool__ is conceptually simple: it returns False if the mag‐
nitude of the vector is zero, True otherwise. We convert the magnitude to a Boolean
using bool(abs(self)) because __bool__ is expected to return a Boolean. Outside of
__bool__ methods, it is rarely necessary to call bool() explicitly, because any object
can be used in a Boolean context.
Note how the special method __bool__ allows your objects to follow the truth value
testing rules defined in the “Built-in Types” chapter of The Python Standard Library
documentation.
A faster implementation of Vector.__bool__ is this:
    def __bool__(self):
        return bool(self.x or self.y)
This is harder to read, but avoids the trip through abs, __abs__, the
squares, and square root. The explicit conversion to bool is needed
because __bool__ must return a Boolean, and or returns either
operand as is: x or y evaluates to x if that is truthy, otherwise the
result is y, whatever that is.
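That behavior of or is easy to verify at the console:

```python
# 'or' evaluates to one of its operands, not necessarily a Boolean,
# which is why __bool__ must wrap the expression in bool().
assert (3 or 0) == 3         # first operand truthy: result is 3
assert (0 or '') == ''       # both falsy: result is the second operand
assert ('' or 0) == 0        # ...whatever that second operand is
assert bool(3 or 0) is True  # bool() normalizes the result
```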
Collection API
Figure 1-2 documents the interfaces of the essential collection types in the language.
All the classes in the diagram are ABCs—abstract base classes. ABCs and the
collections.abc module are covered in Chapter 13. The goal of this brief section is to give
a panoramic view of Python’s most important collection interfaces, showing how
they are built from special methods.
Figure 1-2. UML class diagram with fundamental collection types. Method names in
italic are abstract, so they must be implemented by concrete subclasses such as list
and dict. The remaining methods have concrete implementations, therefore subclasses
can inherit them.
Each of the top ABCs has a single special method. The Collection ABC (new in
Python 3.6) unifies the three essential interfaces that every collection should
implement:
• Iterable to support for, unpacking, and other forms of iteration
• Sized to support the len built-in function
• Container to support the in operator
Python does not require concrete classes to actually inherit from any of these ABCs.
Any class that implements __len__ satisfies the Sized interface.
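A sketch of that structural check (the Bag class is hypothetical): thanks to the ABCs’ __subclasshook__, isinstance tests pass for any class that implements the right dunders, with no inheritance or registration.

```python
from collections.abc import Container, Iterable, Sized


class Bag:
    """Hypothetical class: no ABC inheritance, just the dunder methods."""

    def __init__(self, items):
        self._items = list(items)

    def __len__(self):
        return len(self._items)

    def __iter__(self):
        return iter(self._items)

    def __contains__(self, item):
        return item in self._items


bag = Bag(['a', 'b'])
# Virtual subclass checks pass via the ABCs' __subclasshook__:
assert isinstance(bag, Sized)
assert isinstance(bag, Iterable)
assert isinstance(bag, Container)
```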
Three very important specializations of Collection are:
• Sequence, formalizing the interface of built-ins like list and str
• Mapping, implemented by dict, collections.defaultdict, etc.
• Set, the interface of the set and frozenset built-in types
Only Sequence is Reversible, because sequences support arbitrary ordering of their
contents, while mappings and sets do not.
Since Python 3.7, the dict type is officially “ordered,” but that only
means that the key insertion order is preserved. You cannot
rearrange the keys in a dict however you like.
All the special methods in the Set ABC implement infix operators. For example,
a & b computes the intersection of sets a and b, and is implemented in the __and__
special method.
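The built-in set shows these operators at work; each infix expression is sugar for a dunder call:

```python
a = {1, 2, 3}
b = {2, 3, 4}

# a & b delegates to __and__:
assert a & b == {2, 3}
assert a.__and__(b) == {2, 3}

# The other Set operators work the same way:
assert a | b == {1, 2, 3, 4}   # __or__: union
assert a - b == {1}            # __sub__: difference
assert a ^ b == {1, 4}         # __xor__: symmetric difference
```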
The next two chapters will cover standard library sequences, mappings, and sets in
detail.
Now let’s consider the major categories of special methods defined in the Python
Data Model.
Overview of Special Methods
The “Data Model” chapter of The Python Language Reference lists more than 80 spe‐
cial method names. More than half of them implement arithmetic, bitwise, and com‐
parison operators. As an overview of what is available, see the following tables.
Table 1-1 shows special method names, excluding those used to implement infix
operators or core math functions like abs. Most of these methods will be covered
throughout the book, including the most recent additions: asynchronous special
methods such as __anext__ (added in Python 3.5), and the class customization hook,
__init_subclass__ (from Python 3.6).
Table 1-1. Special method names (operators excluded)

String/bytes representation:
    __repr__ __str__ __format__ __bytes__ __fspath__
Conversion to number:
    __bool__ __complex__ __int__ __float__ __hash__ __index__
Emulating collections:
    __len__ __getitem__ __setitem__ __delitem__ __contains__
Iteration:
    __iter__ __aiter__ __next__ __anext__ __reversed__
Callable or coroutine execution:
    __call__ __await__
Context management:
    __enter__ __exit__ __aenter__ __aexit__
Instance creation and destruction:
    __new__ __init__ __del__
Attribute management:
    __getattr__ __getattribute__ __setattr__ __delattr__ __dir__
Attribute descriptors:
    __get__ __set__ __delete__ __set_name__
Abstract base classes:
    __instancecheck__ __subclasscheck__
Class metaprogramming:
    __prepare__ __init_subclass__ __class_getitem__ __mro_entries__
Infix and numerical operators are supported by the special methods listed in
Table 1-2. Here the most recent names are __matmul__, __rmatmul__, and __imat
mul__, added in Python 3.5 to support the use of @ as an infix operator for matrix
multiplication, as we’ll see in Chapter 16.
Table 1-2. Special method names and symbols for operators

Unary numeric:
    Symbols: - + abs()
    Methods: __neg__ __pos__ __abs__
Rich comparison:
    Symbols: < <= == != > >=
    Methods: __lt__ __le__ __eq__ __ne__ __gt__ __ge__
Arithmetic:
    Symbols: + - * / // % @ divmod() round() ** pow()
    Methods: __add__ __sub__ __mul__ __truediv__ __floordiv__ __mod__
             __matmul__ __divmod__ __round__ __pow__
Reversed arithmetic (arithmetic operators with swapped operands):
    Methods: __radd__ __rsub__ __rmul__ __rtruediv__ __rfloordiv__
             __rmod__ __rmatmul__ __rdivmod__ __rpow__
Augmented assignment arithmetic:
    Symbols: += -= *= /= //= %= @= **=
    Methods: __iadd__ __isub__ __imul__ __itruediv__ __ifloordiv__
             __imod__ __imatmul__ __ipow__
Bitwise:
    Symbols: & | ^ << >> ~
    Methods: __and__ __or__ __xor__ __lshift__ __rshift__ __invert__
Reversed bitwise (bitwise operators with swapped operands):
    Methods: __rand__ __ror__ __rxor__ __rlshift__ __rrshift__
Augmented assignment bitwise:
    Symbols: &= |= ^= <<= >>=
    Methods: __iand__ __ior__ __ixor__ __ilshift__ __irshift__
Python calls a reversed operator special method on the second
operand when the corresponding special method on the first
operand cannot be used. Augmented assignments are shortcuts
combining an infix operator with variable assignment, e.g., a += b.
Chapter 16 explains reversed operators and augmented assignment
in detail.
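A minimal sketch of that dispatch rule, using a hypothetical Money class (not from the book’s examples): returning NotImplemented from __add__ is what makes Python try the other operand’s reversed method.

```python
class Money:
    """Hypothetical class to illustrate reversed-operator dispatch."""
    def __init__(self, amount):
        self.amount = amount

    def __add__(self, other):
        if isinstance(other, (int, float)):
            return Money(self.amount + other)
        return NotImplemented  # tells Python to try the other operand

    def __radd__(self, other):
        # called for `other + self` when type(other).__add__ is missing
        # or returns NotImplemented
        return self.__add__(other)


m = Money(10)
print((m + 5).amount)   # 15 -- uses Money.__add__
print((5 + m).amount)   # 15 -- int.__add__ fails, so Python calls Money.__radd__
```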
Why len Is Not a Method
I asked this question to core developer Raymond Hettinger in 2013, and the key to
his answer was a quote from “The Zen of Python”: “practicality beats purity.” In
“How Special Methods Are Used” on page 8, I described how len(x) runs very fast
when x is an instance of a built-in type. No method is called for the built-in objects of
CPython: the length is simply read from a field in a C struct. Getting the number of
items in a collection is a common operation and must work efficiently for such basic
and diverse types as str, list, memoryview, and so on.
In other words, len is not called as a method because it gets special treatment as part
of the Python Data Model, just like abs. But thanks to the special method __len__,
you can also make len work with your own custom objects. This is a fair compromise
between the need for efficient built-in objects and the consistency of the language.
Also from “The Zen of Python”: “Special cases aren’t special enough to break the
rules.”
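The compromise in practice, sketched with a hypothetical Playlist class: len is fast for built-ins because no method call happens, yet __len__ makes it work for your own types too.

```python
class Playlist:
    """Hypothetical class: len() works on it because it defines __len__."""
    def __init__(self, tracks):
        self._tracks = list(tracks)

    def __len__(self):
        return len(self._tracks)


pl = Playlist(['intro', 'verse', 'chorus'])
print(len(pl))       # 3 -- Python calls pl.__len__()
print(len('abcde'))  # 5 -- no method call; read from a C struct field
```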
If you think of abs and len as unary operators, you may be more
inclined to forgive their functional look and feel, as opposed to the
method call syntax one might expect in an object-oriented lan‐
guage. In fact, the ABC language—a direct ancestor of Python that
pioneered many of its features—had an # operator that was the
equivalent of len (you’d write #s). When used as an infix operator,
written x#s, it counted the occurrences of x in s, which in Python
you get as s.count(x), for any sequence s.
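The Python equivalents of ABC’s two uses of #, shown side by side:

```python
s = 'banana'
print(len(s))         # 6 -- ABC wrote this as #s
print(s.count('a'))   # 3 -- ABC wrote occurrences of 'a' in s as 'a'#s
```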
Chapter Summary
By implementing special methods, your objects can behave like the built-in types,
enabling the expressive coding style the community considers Pythonic.
A basic requirement for a Python object is to provide usable string representations of
itself, one used for debugging and logging, another for presentation to end users.
That is why the special methods __repr__ and __str__ exist in the data model.
Emulating sequences, as shown with the FrenchDeck example, is one of the most
common uses of the special methods. For example, database libraries often return
query results wrapped in sequence-like collections. Making the most of existing
sequence types is the subject of Chapter 2. Implementing your own sequences will be
covered in Chapter 12, when we create a multidimensional extension of the Vector
class.
Thanks to operator overloading, Python offers a rich selection of numeric types, from
the built-ins to decimal.Decimal and fractions.Fraction, all supporting infix
arithmetic operators. The NumPy data science libraries support infix operators
with matrices and tensors. Implementing operators—including reversed operators
and augmented assignment—will be shown in Chapter 16 via enhancements of the
Vector example.
The use and implementation of the majority of the remaining special methods of the
Python Data Model are covered throughout this book.
Further Reading
The “Data Model” chapter of The Python Language Reference is the canonical source
for the subject of this chapter and much of this book.
Python in a Nutshell, 3rd ed. by Alex Martelli, Anna Ravenscroft, and Steve Holden
(O’Reilly) has excellent coverage of the data model. Their description of the mechan‐
ics of attribute access is the most authoritative I’ve seen apart from the actual
C source code of CPython. Martelli is also a prolific contributor to Stack Overflow,
with more than 6,200 answers posted. See his user profile at Stack Overflow.
David Beazley has two books covering the data model in detail in the context of
Python 3: Python Essential Reference, 4th ed. (Addison-Wesley), and Python Cook‐
book, 3rd ed. (O’Reilly), coauthored with Brian K. Jones.
The Art of the Metaobject Protocol (MIT Press) by Gregor Kiczales, Jim des Rivieres,
and Daniel G. Bobrow explains the concept of a metaobject protocol, of which the
Python Data Model is one example.
Soapbox
Data Model or Object Model?
What the Python documentation calls the “Python Data Model,” most authors would
say is the “Python object model.” Martelli, Ravenscroft, and Holden’s Python in a
Nutshell, 3rd ed., and David Beazley’s Python Essential Reference, 4th ed. are the best
books covering the Python Data Model, but they refer to it as the “object model.” On
Wikipedia, the first definition of “object model” is: “The properties of objects in gen‐
eral in a specific computer programming language.” This is what the Python Data
Model is about. In this book, I will use “data model” because the documentation
favors that term when referring to the Python object model, and because it is the title
of the chapter of The Python Language Reference most relevant to our discussions.
Muggle Methods
The Original Hacker’s Dictionary defines magic as “yet unexplained, or too compli‐
cated to explain” or “a feature not generally publicized which allows something other‐
wise impossible.”
The Ruby community calls their equivalent of the special methods magic methods.
Many in the Python community adopt that term as well. I believe the special methods
are the opposite of magic. Python and Ruby empower their users with a rich metaob‐
ject protocol that is fully documented, enabling muggles like you and me to emulate
many of the features available to core developers who write the interpreters for those
languages.
In contrast, consider Go. Some objects in that language have features that are magic,
in the sense that we cannot emulate them in our own user-defined types. For example, Go arrays, strings, and maps support the use of brackets for item access, as in a[i].
But there’s no way to make the [] notation work with a new collection type that you
define. Even worse, Go has no user-level concept of an iterable interface or an iterator
object, therefore its for/range syntax is limited to supporting five “magic” built-in
types, including arrays, strings, and maps.
Maybe in the future, the designers of Go will enhance its metaobject protocol. But
currently, it is much more limited than what we have in Python or Ruby.
Metaobjects
The Art of the Metaobject Protocol (AMOP) is my favorite computer book title. But I
mention it because the term metaobject protocol is useful to think about the Python
Data Model and similar features in other languages. The metaobject part refers to the
objects that are the building blocks of the language itself. In this context, protocol is a
synonym of interface. So a metaobject protocol is a fancy synonym for object model:
an API for core language constructs.
A rich metaobject protocol enables extending a language to support new programming paradigms. Gregor Kiczales, the first author of the AMOP book, later became a
pioneer in aspect-oriented programming and the initial author of AspectJ, an exten‐
sion of Java implementing that paradigm. Aspect-oriented programming is much
easier to implement in a dynamic language like Python, and some frameworks do it.
The most important example is zope.interface, part of the framework on which the
Plone content management system is built.
1. Leo Geurts, Lambert Meertens, and Steven Pemberton, ABC Programmer’s Handbook, p. 8 (Bosko Books).
CHAPTER 2
An Array of Sequences
As you may have noticed, several of the operations mentioned work equally for texts,
lists and tables. Texts, lists and tables together are called ‘trains’. [...] The FOR com‐
mand also works generically on trains.
—Leo Geurts, Lambert Meertens, and Steven Pemberton, ABC Programmer’s Handbook1
Before creating Python, Guido was a contributor to the ABC language—a 10-year
research project to design a programming environment for beginners. ABC intro‐
duced many ideas we now consider “Pythonic”: generic operations on different types
of sequences, built-in tuple and mapping types, structure by indentation, strong
typing without variable declarations, and more. It’s no accident that Python is so
user-friendly.
Python inherited from ABC the uniform handling of sequences. Strings, lists, byte
sequences, arrays, XML elements, and database results share a rich set of common
operations, including iteration, slicing, sorting, and concatenation.
Understanding the variety of sequences available in Python saves us from reinventing
the wheel, and their common interface inspires us to create APIs that properly sup‐
port and leverage existing and future sequence types.
Most of the discussion in this chapter applies to sequences in general, from the famil‐
iar list to the str and bytes types added in Python 3. Specific topics on lists, tuples,
arrays, and queues are also covered here, but the specifics of Unicode strings and byte
sequences appear in Chapter 4. Also, the idea here is to cover sequence types that are
ready to use. Creating your own sequence types is the subject of Chapter 12.
These are the main topics this chapter will cover:
• List comprehensions and the basics of generator expressions
• Using tuples as records versus using tuples as immutable lists
• Sequence unpacking and sequence patterns
• Reading from slices and writing to slices
• Specialized sequence types, like arrays and queues
What’s New in This Chapter
The most important update in this chapter is “Pattern Matching with Sequences” on
page 38. That’s the first time the new pattern matching feature of Python 3.10 appears
in this second edition.
Other changes are not updates but improvements over the first edition:
• New diagram and description of the internals of sequences, contrasting contain‐
ers and flat sequences
• Brief comparison of the performance and storage characteristics of list versus
tuple
• Caveats of tuples with mutable elements, and how to detect them if needed
I moved coverage of named tuples to “Classic Named Tuples” on page 169 in Chapter 5,
where they are compared to typing.NamedTuple and @dataclass.
To make room for new content and keep the page count within
reason, the section “Managing Ordered Sequences with Bisect”
from the first edition is now a post in the fluentpython.com com‐
panion website.
Overview of Built-In Sequences
The standard library offers a rich selection of sequence types implemented in C:
Container sequences
Can hold items of different types, including nested containers. Some examples:
list, tuple, and collections.deque.
Flat sequences
Hold items of one simple type. Some examples: str, bytes, and array.array.
A container sequence holds references to the objects it contains, which may be of any
type, while a flat sequence stores the value of its contents in its own memory space,
not as distinct Python objects. See Figure 2-1.
Figure 2-1. Simplified memory diagrams for a tuple and an array, each with three
items. Gray cells represent the in-memory header of each Python object—not drawn to
proportion. The tuple has an array of references to its items. Each item is a separate
Python object, possibly holding references to other Python objects, like that two-item
list. In contrast, the Python array is a single object, holding a C language array of three
doubles.
Thus, flat sequences are more compact, but they are limited to holding primitive
machine values like bytes, integers, and floats.
Every Python object in memory has a header with metadata. The
simplest Python object, a float, has a value field and two metadata
fields:
• ob_refcnt: the object’s reference count
• ob_type: a pointer to the object’s type
• ob_fval: a C double holding the value of the float
On a 64-bit Python build, each of those fields takes 8 bytes. That’s
why an array of floats is much more compact than a tuple of floats:
the array is a single object holding the raw values of the floats,
while the tuple consists of several objects—the tuple itself and each
float object contained in it.
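A quick way to check this claim on your own build (a sketch; exact byte counts vary across CPython versions): sys.getsizeof reports the container’s own footprint, so for the tuple we must also count each float object it references.

```python
from array import array
import sys

floats = [i / 7 for i in range(1000)]
t = tuple(floats)
a = array('d', floats)

# Container plus the 1,000 separate float objects it points to.
tuple_total = sys.getsizeof(t) + sum(sys.getsizeof(f) for f in t)
# A single object holding raw C doubles.
array_total = sys.getsizeof(a)

print(tuple_total)
print(array_total)
```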
Another way of grouping sequence types is by mutability:
Mutable sequences
For example, list, bytearray, array.array, and collections.deque.
Immutable sequences
For example, tuple, str, and bytes.
Figure 2-2 helps visualize how mutable sequences inherit all methods from immuta‐
ble sequences, and implement several additional methods. The built-in concrete
sequence types do not actually subclass the Sequence and MutableSequence abstract
base classes (ABCs), but they are virtual subclasses registered with those ABCs—as
we’ll see in Chapter 13. Being virtual subclasses, tuple and list pass these tests:
>>> from collections import abc
>>> issubclass(tuple, abc.Sequence)
True
>>> issubclass(list, abc.MutableSequence)
True
Figure 2-2. Simplified UML class diagram for some classes from collections.abc (super‐
classes are on the left; inheritance arrows point from subclasses to superclasses; names
in italic are abstract classes and abstract methods).
Keep in mind these common traits: mutable versus immutable; container versus flat.
They are helpful to extrapolate what you know about one sequence type to others.
The most fundamental sequence type is the list: a mutable container. I expect you
are very familiar with lists, so we’ll jump right into list comprehensions, a powerful
way of building lists that is sometimes underused because the syntax may look
unusual at first. Mastering list comprehensions opens the door to generator expres‐
sions, which—among other uses—can produce elements to fill up sequences of any
type. Both are the subject of the next section.
List Comprehensions and Generator Expressions
A quick way to build a sequence is using a list comprehension (if the target is a list)
or a generator expression (for other kinds of sequences). If you are not using these
syntactic forms on a daily basis, I bet you are missing opportunities to write code that
is more readable and often faster at the same time.
If you doubt my claim that these constructs are “more readable,” read on. I’ll try to
convince you.
For brevity, many Python programmers refer to list comprehen‐
sions as listcomps, and generator expressions as genexps. I will use
these words as well.
List Comprehensions and Readability
Here is a test: which do you find easier to read, Example 2-1 or Example 2-2?
Example 2-1. Build a list of Unicode code points from a string
>>> symbols = '$¢£¥€¤'
>>> codes = []
>>> for symbol in symbols:
... codes.append(ord(symbol))
...
>>> codes
[36, 162, 163, 165, 8364, 164]
Example 2-2. Build a list of Unicode code points from a string, using a listcomp
>>> symbols = '$¢£¥€¤'
>>> codes = [ord(symbol) for symbol in symbols]
>>> codes
[36, 162, 163, 165, 8364, 164]
Anybody who knows a little bit of Python can read Example 2-1. However, after
learning about listcomps, I find Example 2-2 more readable because its intent is
explicit.
A for loop may be used to do lots of different things: scanning a sequence to count or
pick items, computing aggregates (sums, averages), or any number of other tasks.
The code in Example 2-1 is building up a list. In contrast, a listcomp is more explicit.
Its goal is always to build a new list.
2. Thanks to reader Tina Lapine for pointing this out.
Of course, it is possible to abuse list comprehensions to write truly incomprehensible
code. I’ve seen Python code with listcomps used just to repeat a block of code for its
side effects. If you are not doing something with the produced list, you should not use
that syntax. Also, try to keep it short. If the list comprehension spans more than two
lines, it is probably best to break it apart or rewrite it as a plain old for loop. Use your
best judgment: for Python, as for English, there are no hard-and-fast rules for clear
writing.
Syntax Tip
In Python code, line breaks are ignored inside pairs of [], {}, or ().
So you can build multiline lists, listcomps, tuples, dictionaries, etc.,
without using the line continuation escape, which doesn’t work if
you accidentally type a space after it. Also, when those delimiter
pairs are used to define a literal with a comma-separated series of
items, a trailing comma will be ignored. So, for example, when cod‐
ing a multiline list literal, it is thoughtful to put a comma after the
last item, making it a little easier for the next coder to add one
more item to that list, and reducing noise when reading diffs.
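The tip above in code: a multiline list literal with no continuation escapes, and a trailing comma after the last item.

```python
# Line breaks inside [] are ignored; no backslash continuation needed.
# The trailing comma after 'cherry' means adding a fourth item later
# touches only one line in a diff.
fruits = [
    'apple',
    'banana',
    'cherry',
]
print(fruits)  # ['apple', 'banana', 'cherry']
```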
Local Scope Within Comprehensions and Generator Expressions
In Python 3, list comprehensions, generator expressions, and their siblings set and
dict comprehensions, have a local scope to hold the variables assigned in the for
clause.
However, variables assigned with the “Walrus operator” := remain accessible after
those comprehensions or expressions return—unlike local variables in a function.
PEP 572—Assignment Expressions defines the scope of the target of := as the enclos‐
ing function, unless there is a global or nonlocal declaration for that target.2
>>> x = 'ABC'
>>> codes = [ord(x) for x in x]
>>> x
'ABC'
>>> codes
[65, 66, 67]
>>> codes = [last := ord(c) for c in x]
>>> last
67
>>> c
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'c' is not defined
x was not clobbered: it’s still bound to 'ABC'.
last remains.
c is gone; it existed only inside the listcomp.
List comprehensions build lists from sequences or any other iterable type by filtering
and transforming items. The filter and map built-ins can be composed to do the
same, but readability suffers, as we will see next.
Listcomps Versus map and filter
Listcomps do everything the map and filter functions do, without the contortions of
the functionally challenged Python lambda. Consider Example 2-3.
Example 2-3. The same list built by a listcomp and a map/filter composition
>>> symbols = '$¢£¥€¤'
>>> beyond_ascii = [ord(s) for s in symbols if ord(s) > 127]
>>> beyond_ascii
[162, 163, 165, 8364, 164]
>>> beyond_ascii = list(filter(lambda c: c > 127, map(ord, symbols)))
>>> beyond_ascii
[162, 163, 165, 8364, 164]
I used to believe that map and filter were faster than the equivalent listcomps, but
Alex Martelli pointed out that’s not the case—at least not in the preceding examples.
The 02-array-seq/listcomp_speed.py script in the Fluent Python code repository is a
simple speed test comparing listcomp with filter/map.
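A rough way to test the claim yourself, in the spirit of that script (this is a sketch, not the repository code; timings vary by machine and Python version):

```python
import timeit

SETUP = "symbols = '$¢£¥€¤' * 1000"

# Time the listcomp and the map/filter composition on the same input.
listcomp = timeit.timeit(
    "[ord(s) for s in symbols if ord(s) > 127]",
    setup=SETUP, number=200)
mapfilter = timeit.timeit(
    "list(filter(lambda c: c > 127, map(ord, symbols)))",
    setup=SETUP, number=200)

print(f'listcomp:   {listcomp:.4f}s')
print(f'map/filter: {mapfilter:.4f}s')
```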
I’ll have more to say about map and filter in Chapter 7. Now we turn to the use of
listcomps to compute Cartesian products: a list containing tuples built from all items
from two or more lists.
Cartesian Products
Listcomps can build lists from the Cartesian product of two or more iterables. The
items that make up the Cartesian product are tuples made from items from every
input iterable. The resulting list has a length equal to the lengths of the input iterables
multiplied. See Figure 2-3.
Figure 2-3. The Cartesian product of 3 card ranks and 4 suits is a sequence of 12
pairings.
For example, imagine you need to produce a list of T-shirts available in two colors
and three sizes. Example 2-4 shows how to produce that list using a listcomp. The
result has six items.
Example 2-4. Cartesian product using a list comprehension
>>> colors = ['black', 'white']
>>> sizes = ['S', 'M', 'L']
>>> tshirts = [(color, size) for color in colors for size in sizes]
>>> tshirts
[('black', 'S'), ('black', 'M'), ('black', 'L'), ('white', 'S'),
('white', 'M'), ('white', 'L')]
>>> for color in colors:
... for size in sizes:
... print((color, size))
...
('black', 'S')
('black', 'M')
('black', 'L')
('white', 'S')
('white', 'M')
('white', 'L')
>>> tshirts = [(color, size) for size in sizes
... for color in colors]
>>> tshirts
[('black', 'S'), ('white', 'S'), ('black', 'M'), ('white', 'M'),
('black', 'L'), ('white', 'L')]
This generates a list of tuples arranged by color, then size.
Note how the resulting list is arranged as if the for loops were nested in the same
order as they appear in the listcomp.
To get items arranged by size, then color, just rearrange the for clauses; adding a
line break to the listcomp makes it easier to see how the result will be ordered.
In Example 1-1 (Chapter 1), I used the following expression to initialize a card deck
with a list made of 52 cards from all 13 ranks of each of the 4 suits, sorted by suit,
then rank:
self._cards = [Card(rank, suit) for suit in self.suits
for rank in self.ranks]
Listcomps are a one-trick pony: they build lists. To generate data for other sequence
types, a genexp is the way to go. The next section is a brief look at genexps in the
context of building sequences that are not lists.
Generator Expressions
To initialize tuples, arrays, and other types of sequences, you could also start from a
listcomp, but a genexp (generator expression) saves memory because it yields items
one by one using the iterator protocol instead of building a whole list just to feed
another constructor.
Genexps use the same syntax as listcomps, but are enclosed in parentheses rather
than brackets.
Example 2-5 shows basic usage of genexps to build a tuple and an array.
Example 2-5. Initializing a tuple and an array from a generator expression
>>> symbols = '$¢£¥€¤'
>>> tuple(ord(symbol) for symbol in symbols)
(36, 162, 163, 165, 8364, 164)
>>> import array
>>> array.array('I', (ord(symbol) for symbol in symbols))
array('I', [36, 162, 163, 165, 8364, 164])
If the generator expression is the single argument in a function call, there is no
need to duplicate the enclosing parentheses.
The array constructor takes two arguments, so the parentheses around the gen‐
erator expression are mandatory. The first argument of the array constructor
defines the storage type used for the numbers in the array, as we’ll see in “Arrays”
on page 59.
Example 2-6 uses a genexp with a Cartesian product to print out a roster of T-shirts
of two colors in three sizes. In contrast with Example 2-4, here the six-item list of T-
shirts is never built in memory: the generator expression feeds the for loop produc‐
ing one item at a time. If the two lists used in the Cartesian product had a thousand
items each, using a generator expression would save the cost of building a list with a
million items just to feed the for loop.
Example 2-6. Cartesian product in a generator expression
>>> colors = ['black', 'white']
>>> sizes = ['S', 'M', 'L']
>>> for tshirt in (f'{c} {s}' for c in colors for s in sizes):
... print(tshirt)
...
black S
black M
black L
white S
white M
white L
The generator expression yields items one by one; a list with all six T-shirt varia‐
tions is never produced in this example.
Chapter 17 explains how generators work in detail. Here the idea
was just to show the use of generator expressions to initialize
sequences other than lists, or to produce output that you don’t
need to keep in memory.
Now we move on to the other fundamental sequence type in Python: the tuple.
Tuples Are Not Just Immutable Lists
Some introductory texts about Python present tuples as “immutable lists,” but that is
short selling them. Tuples do double duty: they can be used as immutable lists and
also as records with no field names. This use is sometimes overlooked, so we will start
with that.
Tuples as Records
Tuples hold records: each item in the tuple holds the data for one field, and the posi‐
tion of the item gives its meaning.
If you think of a tuple just as an immutable list, the quantity and the order of the
items may or may not be important, depending on the context. But when using a
tuple as a collection of fields, the number of items is usually fixed and their order is
always important.
Example 2-7 shows tuples used as records. Note that in every expression, sorting the
tuple would destroy the information because the meaning of each field is given by its
position in the tuple.
Example 2-7. Tuples used as records
>>> lax_coordinates = (33.9425, -118.408056)
>>> city, year, pop, chg, area = ('Tokyo', 2003, 32_450, 0.66, 8014)
>>> traveler_ids = [('USA', '31195855'), ('BRA', 'CE342567'),
... ('ESP', 'XDA205856')]
>>> for passport in sorted(traveler_ids):
... print('%s/%s' % passport)
...
BRA/CE342567
ESP/XDA205856
USA/31195855
>>> for country, _ in traveler_ids:
... print(country)
...
USA
BRA
ESP
Latitude and longitude of the Los Angeles International Airport.
Data about Tokyo: name, year, population (thousands), population change (%),
and area (km²).
A list of tuples of the form (country_code, passport_number).
As we iterate over the list, passport is bound to each tuple.
The % formatting operator understands tuples and treats each item as a separate
field.
The for loop knows how to retrieve the items of a tuple separately—this is called
“unpacking.” Here we are not interested in the second item, so we assign it to _, a
dummy variable.
In general, using _ as a dummy variable is just a convention. It’s
just a strange but valid variable name. However, in a match/case
statement, _ is a wildcard that matches any value but is not bound
to a value. See “Pattern Matching with Sequences” on page 38. And
in the Python console, the result of the preceding command is
assigned to _—unless the result is None.
We often think of records as data structures with named fields. Chapter 5 presents
two ways of creating tuples with named fields.
But often, there’s no need to go through the trouble of creating a class just to name
the fields, especially if you leverage unpacking and avoid using indexes to access the
fields. In Example 2-7, we assigned ('Tokyo', 2003, 32_450, 0.66, 8014) to
city, year, pop, chg, area in a single statement. Then, the % operator assigned
each item in the passport tuple to the corresponding slot in the format string in the
print argument. Those are two examples of tuple unpacking.
The term tuple unpacking is widely used by Pythonistas, but itera‐
ble unpacking is gaining traction, as in the title of PEP 3132 —
Extended Iterable Unpacking.
“Unpacking Sequences and Iterables” on page 35 presents a lot more
about unpacking not only tuples, but sequences and iterables in
general.
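A brief preview of what that section covers, using the * target syntax from PEP 3132 (the variable names here are illustrative):

```python
lax = (33.9425, -118.408056)
latitude, longitude = lax  # plain tuple unpacking

# Extended iterable unpacking: * collects the excess items into a list.
first, *rest = [1, 2, 3, 4]

print(latitude)   # 33.9425
print(rest)       # [2, 3, 4]
```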
Now let’s consider the tuple class as an immutable variant of the list class.
Tuples as Immutable Lists
The Python interpreter and standard library make extensive use of tuples as immuta‐
ble lists, and so should you. This brings two key benefits:
Clarity
When you see a tuple in code, you know its length will never change.
Performance
A tuple uses less memory than a list of the same length, and it allows Python
to do some optimizations.
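One measurable difference (a sketch assuming CPython; exact byte counts vary by version): a tuple allocates exactly what it holds, while a list built by appending carries a larger header and spare capacity for growth.

```python
import sys

t = tuple(range(10))
lst = []
for i in range(10):
    lst.append(i)  # appends trigger overallocation

print(sys.getsizeof(t))    # tuple: header + exactly 10 references
print(sys.getsizeof(lst))  # list: bigger header + spare capacity
```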
However, be aware that the immutability of a tuple only applies to the references
contained in it. References in a tuple cannot be deleted or replaced. But if one of
those references points to a mutable object, and that object is changed, then the value
of the tuple changes. The next snippet illustrates this point by creating two tuples—a
and b—which are initially equal. Figure 2-4 represents the initial layout of the b tuple
in memory.
Figure 2-4. The content of the tuple itself is immutable, but that only means the refer‐
ences held by the tuple will always point to the same objects. However, if one of the ref‐
erenced objects is mutable—like a list—its content may change.
When the last item in b is changed, b and a become different:
>>> a = (10, 'alpha', [1, 2])
>>> b = (10, 'alpha', [1, 2])
>>> a == b
True
>>> b[-1].append(99)
>>> a == b
False
>>> b
(10, 'alpha', [1, 2, 99])
Tuples with mutable items can be a source of bugs. As we’ll see in “What Is Hasha‐
ble” on page 84, an object is only hashable if its value cannot ever change. An unhasha‐
ble tuple cannot be inserted as a dict key, or a set element.
If you want to determine explicitly if a tuple (or any object) has a fixed value, you can
use the hash built-in to create a fixed function like this:
>>> def fixed(o):
... try:
... hash(o)
... except TypeError:
... return False
... return True
...
>>> tf = (10, 'alpha', (1, 2))
>>> tm = (10, 'alpha', [1, 2])
>>> fixed(tf)
True