1. Python for Data Analysis
Research Computing Services
Katia Oleinik (koleinik@bu.edu)
2. Tutorial Content
Overview of Python Libraries for Data Scientists
Reading Data; Selecting and Filtering the Data; Data manipulation, sorting, grouping, rearranging
Plotting the data
Descriptive statistics
Inferential statistics
3. Python Libraries for Data Science
Many popular Python toolboxes/libraries:
• NumPy
• SciPy
• Pandas
• SciKit-Learn
Visualization libraries:
• matplotlib
• Seaborn
and many more …
All these libraries are installed on the SCC.
4. Python Libraries for Data Science
NumPy:
▪ introduces objects for multidimensional arrays and matrices, as well as functions for easily performing advanced mathematical and statistical operations on those objects
▪ provides vectorization of mathematical operations on arrays and matrices, which significantly improves performance
▪ many other Python libraries are built on NumPy
Link: http://www.numpy.org/
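A minimal sketch of the vectorization idea (illustrative, not part of the original tutorial): the same element-wise computation written as an explicit Python loop and as a single vectorized NumPy expression.
In [ ]: import numpy as np
a = np.arange(1_000_000, dtype=np.float64)
# Loop version: element-by-element Python loop (slow)
squares_loop = np.empty_like(a)
for i in range(len(a)):
    squares_loop[i] = a[i] ** 2
# Vectorized version: one array expression, executed in optimized compiled code
squares_vec = a ** 2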
5. Python Libraries for Data Science
SciPy:
▪ collection of algorithms for linear algebra, differential equations, numerical integration, optimization, statistics and more
▪ part of the SciPy Stack
▪ built on NumPy
Link: https://www.scipy.org/scipylib/
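A small illustrative sketch (not from the original slides) of the statistics part of SciPy: a two-sample t-test on synthetic data.
In [ ]: from scipy import stats
import numpy as np
x = np.random.normal(0.0, 1.0, size=100)   # sample from N(0, 1)
y = np.random.normal(0.5, 1.0, size=100)   # sample from N(0.5, 1)
t_stat, p_value = stats.ttest_ind(x, y)    # test whether the two means differ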
6. Python Libraries for Data Science
Pandas:
▪ adds data structures and tools designed to work with table-like data (similar to Series and Data Frames in R)
▪ provides tools for data manipulation: reshaping, merging, sorting, slicing, aggregation etc.
▪ allows handling missing data
Link: http://pandas.pydata.org/
7. Python Libraries for Data Science
SciKit-Learn:
▪ provides machine learning algorithms: classification, regression, clustering, model validation etc.
▪ built on NumPy, SciPy and matplotlib
Link: http://scikit-learn.org/
8. Python Libraries for Data Science
matplotlib:
▪ Python 2D plotting library which produces publication-quality figures in a variety of hardcopy formats
▪ a set of functionalities similar to those of MATLAB
▪ line plots, scatter plots, bar charts, histograms, pie charts etc.
▪ relatively low-level; some effort is needed to create advanced visualizations
Link: https://matplotlib.org/
9. Python Libraries for Data Science
Seaborn:
▪ based on matplotlib
▪ provides a high-level interface for drawing attractive statistical graphics
▪ similar (in style) to the popular ggplot2 library in R
Link: https://seaborn.pydata.org/
10. Login to the Shared Computing Cluster
• Use your SCC login information if you have an SCC account
• If you are using a tutorial account, see the info on the blackboard
Note: Your password will not be displayed while you enter it.
11. Selecting the Python Version on the SCC
# View available Python versions on the SCC
[scc1 ~] module avail python
# Load a Python 3 version
[scc1 ~] module load python/3.6.2
12. Download the tutorial notebook
# On the Shared Computing Cluster
[scc1 ~] cp /project/scv/examples/python/data_analysis/dataScience.ipynb .
# On a local computer, save the link:
http://rcs.bu.edu/examples/python/data_analysis/dataScience.ipynb
14. Loading Python Libraries
In [ ]: #Import Python libraries
import numpy as np
import scipy as sp
import pandas as pd
import matplotlib as mpl
import seaborn as sns
Press Shift+Enter to execute the Jupyter cell.
15. Reading data using pandas
In [ ]: #Read a csv file
df = pd.read_csv("http://rcs.bu.edu/examples/python/data_analysis/Salaries.csv")
There are a number of pandas commands to read other data formats:
pd.read_excel('myfile.xlsx', sheet_name='Sheet1', index_col=None, na_values=['NA'])
pd.read_stata('myfile.dta')
pd.read_sas('myfile.sas7bdat')
pd.read_hdf('myfile.h5', 'df')
Note: pd.read_csv() has many optional arguments to fine-tune the data import process.
17. Hands-on exercises
✓ Try to read the first 10, 20, 50 records;
✓ Can you guess how to view the last few records? Hint: use the tail() method (see the sketch below).
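One possible solution sketch, assuming df was read as on slide 15:
In [ ]: df.head(10)   # first 10 records (similarly head(20), head(50))
df.tail(5)    # last 5 records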
18. Data Frame data types
Pandas Type                Native Python Type                                            Description
object                     string                                                        The most general dtype. Will be assigned to your column if the column has mixed types (numbers and strings).
int64                      int                                                           Numeric characters. 64 refers to the memory allocated to hold this character.
float64                    float                                                         Numeric characters with decimals. If a column contains numbers and NaNs (see below), pandas will default to float64, in case your missing value has a decimal.
datetime64, timedelta[ns]  N/A (but see the datetime module in Python's standard library)  Values meant to hold time data. Look into these for time series experiments.
19. Data Frame data types
In [4]: #Check a particular column type
df['salary'].dtype
Out[4]: dtype('int64')
In [5]: #Check the types of all the columns
df.dtypes
Out[5]: rank          object
        discipline    object
        phd            int64
        service        int64
        sex           object
        salary         int64
        dtype: object
20. Data Frames attributes
Python objects have attributes and methods.
df.attribute   description
dtypes         list the types of the columns
columns        list the column names
axes           list the row labels and column names
ndim           number of dimensions
size           number of elements
shape          return a tuple representing the dimensionality
values         numpy representation of the data
21. Hands-on exercises
✓ Find how many records this data frame has;
✓ How many elements are there?
✓ What are the column names?
✓ What types of columns do we have in this data frame? (see the sketch below)
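One possible solution sketch using the attributes above (assuming df from slide 15):
In [ ]: df.shape     # (number of records, number of columns)
df.size      # total number of elements
df.columns   # column names
df.dtypes    # type of each column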
22. Data Frames methods
df.method()            description
head([n]), tail([n])   first/last n rows
describe()             generate descriptive statistics (for numeric columns only)
max(), min()           return max/min values for all numeric columns
mean(), median()       return mean/median values for all numeric columns
std()                  standard deviation
sample([n])            returns a random sample of the data frame
dropna()               drop all the records with missing values
Unlike attributes, Python methods have parentheses.
All attributes and methods can be listed with the dir() function: dir(df)
23. Hands-on exercises
✓ Give the summary for the numeric columns in the dataset;
✓ Calculate the standard deviation for all numeric columns;
✓ What are the mean values of the first 50 records in the dataset? Hint: use the head() method to subset the first 50 records and then calculate the mean (see the sketch below).
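One possible solution sketch (assuming df from slide 15; with recent versions of pandas you may need to pass numeric_only=True to std() and mean()):
In [ ]: df.describe()        # summary of the numeric columns
df.std()             # standard deviation of all numeric columns
df.head(50).mean()   # mean values of the first 50 records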
24. Selecting a column in a Data Frame
Method 1: Subset the data frame using the column name:
df['sex']
Method 2: Use the column name as an attribute:
df.sex
Note: there is an attribute rank for pandas data frames, so to select a column named "rank" we should use method 1.
25. Hands-on exercises
✓ Calculate the basic statistics for the salary column;
✓ Find how many values there are in the salary column (use the count() method);
✓ Calculate the average salary (see the sketch below).
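One possible solution sketch (assuming df from slide 15):
In [ ]: df['salary'].describe()   # basic statistics for the salary column
df['salary'].count()      # number of values in the column
df['salary'].mean()       # average salary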
26. Data Frames groupby method
Using the groupby() method we can:
• Split the data into groups based on some criteria
• Calculate statistics (or apply a function) to each group
• Similar to the dplyr package in R
In [ ]: #Group data using rank
df_rank = df.groupby(['rank'])
In [ ]: #Calculate the mean value of each numeric column for each group
df_rank.mean()
27. Data Frames groupby method
Once the groupby object is created, we can calculate various statistics for each group:
In [ ]: #Calculate the mean salary for each professor rank:
df.groupby('rank')[['salary']].mean()
Note: If single brackets are used to specify the column (e.g. salary), then the output is a Pandas Series object. When double brackets are used, the output is a Data Frame.
28. Data Frames groupby method
groupby performance notes:
- no grouping/splitting occurs until it is needed: creating the groupby object only verifies that you have passed a valid mapping
- by default the group keys are sorted during the groupby operation; you may want to pass sort=False for a potential speedup:
In [ ]: #Calculate the mean salary for each professor rank:
df.groupby(['rank'], sort=False)[['salary']].mean()
29. Data Frame: filtering
To subset the data we can apply Boolean indexing. This indexing is commonly known as a filter. For example, to subset the rows in which the salary value is greater than $120K:
In [ ]: #Select the rows with salary above $120K:
df_sub = df[ df['salary'] > 120000 ]
In [ ]: #Select only those rows that contain female professors:
df_f = df[ df['sex'] == 'Female' ]
Any Boolean operator can be used to subset the data:
> greater; >= greater or equal;
< less; <= less or equal;
== equal; != not equal;
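Conditions can also be combined with & (and), | (or) and ~ (not); each condition must be wrapped in parentheses. A sketch, assuming the same Salaries data:
In [ ]: #Female professors with salary above $120K
df_fh = df[ (df['sex'] == 'Female') & (df['salary'] > 120000) ]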
30. Data Frames: Slicing
There are a number of ways to subset the Data Frame:
• one or more columns
• one or more rows
• a subset of rows and columns
Rows and columns can be selected by their position or label.
31. Data Frames: Slicing
When selecting one column, it is possible to use a single set of brackets, but the resulting object will be a Series (not a DataFrame):
In [ ]: #Select the column salary:
df['salary']
When we need to select more than one column and/or make the output a DataFrame, we should use double brackets:
In [ ]: #Select the columns rank and salary:
df[['rank','salary']]
32. Data Frames: Selecting rows
If we need to select a range of rows, we can specify the range using ":"
In [ ]: #Select rows by their position:
df[10:20]
Notice that the first row has position 0, and the last value in the range is omitted: for the range 0:10, the first 10 rows are returned, with positions starting at 0 and ending at 9.
33. Data Frames: method loc
If we need to select a range of rows using their labels, we can use the method loc:
In [ ]: #Select rows by their labels:
df_sub.loc[10:20, ['rank','sex','salary']]
34. Data Frames: method iloc
If we need to select a range of rows and/or columns by their positions, we can use the method iloc:
In [ ]: #Select rows and columns by their positions:
df_sub.iloc[10:20, [0, 3, 4, 5]]
35. Data Frames: method iloc (summary)
df.iloc[0]             # First row of a data frame
df.iloc[i]             # (i+1)th row
df.iloc[-1]            # Last row
df.iloc[:, 0]          # First column
df.iloc[:, -1]         # Last column
df.iloc[0:7]           # First 7 rows
df.iloc[:, 0:2]        # First 2 columns
df.iloc[1:3, 0:2]      # Second through third rows and first 2 columns
df.iloc[[0,5], [1,3]]  # 1st and 6th rows and 2nd and 4th columns
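A subtlety worth a quick sketch (not spelled out on the slides): loc selects by label and includes the end of the range, while iloc selects by position and excludes it.
In [ ]: df.loc[10:20]    # rows with labels 10 through 20 inclusive (11 rows for a default integer index)
df.iloc[10:20]   # rows at positions 10 through 19 (10 rows)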
36. Data Frames: Sorting
We can sort the data by the values in a column. By default the sorting occurs in ascending order and a new data frame is returned.
In [ ]: #Create a new data frame from the original, sorted by the column service
df_sorted = df.sort_values(by='service')
df_sorted.head()
37. Data Frames: Sorting
We can sort the data using 2 or more columns:
In [ ]: df_sorted = df.sort_values(by=['service', 'salary'], ascending=[True, False])
df_sorted.head(10)
38. Missing Values
Missing values are marked as NaN.
In [ ]: #Read a dataset with missing values
flights = pd.read_csv("http://rcs.bu.edu/examples/python/data_analysis/flights.csv")
In [ ]: #Select the rows that have at least one missing value
flights[flights.isnull().any(axis=1)].head()
39. Missing Values
There are a number of methods to deal with missing values in a data frame:
df.method()                 description
dropna()                    Drop missing observations
dropna(how='all')           Drop observations where all cells are NA
dropna(axis=1, how='all')   Drop a column if all of its values are missing
dropna(thresh=5)            Drop rows that contain less than 5 non-missing values
fillna(0)                   Replace missing values with zeros
isnull()                    Returns True if the value is missing
notnull()                   Returns True for non-missing values
40. Missing Values
• When summing the data, missing values are treated as zero
• If all values are missing, the sum will be equal to NaN
• The cumsum() and cumprod() methods ignore missing values but preserve them in the resulting arrays
• Missing values in the GroupBy method are excluded (just like in R)
• Many descriptive statistics methods have a skipna option to control whether missing data should be excluded. It is set to True by default (unlike in R)
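A small sketch (not from the slides) of the default skipna behavior, using pd and np as imported on slide 14:
In [ ]: s = pd.Series([1.0, np.nan, 3.0])
s.sum()               # 4.0 -- the missing value is skipped
s.mean()              # 2.0 -- NaN is excluded by default (skipna=True)
s.mean(skipna=False)  # nan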
41. Aggregation Functions in Pandas
Aggregation - computing a summary statistic about each group, e.g.
• compute group sums or means
• compute group sizes/counts
Common aggregation functions:
min, max
count, sum, prod
mean, median, mode, mad
std, var
42. Aggregation Functions in Pandas
The agg() method is useful when multiple statistics are computed per column:
In [ ]: flights[['dep_delay','arr_delay']].agg(['min','mean','max'])
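agg() also accepts a dictionary mapping columns to statistics, so different summaries can be computed per column (a sketch, assuming the flights data from slide 38):
In [ ]: flights.agg({'dep_delay': ['min', 'mean'], 'arr_delay': 'max'})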
43. Basic Descriptive Statistics
df.method()          description
describe             Basic statistics (count, mean, std, min, quantiles, max)
min, max             Minimum and maximum values
mean, median, mode   Arithmetic average, median and mode
var, std             Variance and standard deviation
sem                  Standard error of the mean
skew                 Sample skewness
kurt                 Kurtosis
44. Graphics to explore the data
To show graphs within a Python notebook, include the inline directive:
In [ ]: %matplotlib inline
The Seaborn package is built on matplotlib but provides a high-level interface for drawing attractive statistical graphics, similar to the ggplot2 library in R. It specifically targets statistical data visualization.
45. Graphics
seaborn function   description
distplot           histogram
barplot            estimate of central tendency for a numeric variable
violinplot         similar to boxplot, also shows the probability density of the data
jointplot          scatterplot
regplot            regression plot
pairplot           pairplot
boxplot            boxplot
swarmplot          categorical scatterplot
factorplot         general categorical plot
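A minimal sketch of two of these plot types (illustrative, assuming the Salaries data frame df and the sns import from slide 14):
In [ ]: sns.boxplot(x='rank', y='salary', data=df)     # salary distribution per rank
In [ ]: sns.regplot(x='service', y='salary', data=df)  # salary vs. years of service, with a fitted line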
46. Basic statistical Analysis
statsmodels and scikit-learn - both have a number of functions for statistical analysis.
The first one is mostly used for regular analysis using R-style formulas, while scikit-learn is more tailored for Machine Learning.
statsmodels:
• linear regressions
• ANOVA tests
• hypothesis testing
• many more ...
scikit-learn:
• kmeans
• support vector machines
• random forests
• many more ...
See examples in the Tutorial Notebook (a minimal sketch of each follows).
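A minimal sketch of each style, assuming the Salaries data frame df (illustrative; the full examples are in the tutorial notebook):
In [ ]: #statsmodels: linear regression with an R-style formula
import statsmodels.formula.api as smf
model = smf.ols('salary ~ service', data=df).fit()
model.summary()
In [ ]: #scikit-learn: k-means clustering on two numeric columns
from sklearn.cluster import KMeans
km = KMeans(n_clusters=3, random_state=0).fit(df[['service', 'salary']])
km.labels_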
47. Conclusion
Thank you for attending the tutorial.
Please fill out the evaluation form:
http://scv.bu.edu/survey/tutorial_evaluation.html
Questions:
email: koleinik@bu.edu (Katia Oleinik)