Cheat Sheet for Machine Learning in Python: Scikit-learn, by Karlijn Willems
Get started with machine learning in Python thanks to this scikit-learn cheat sheet, a handy one-page reference that guides you through the steps of building your own machine learning models. Thanks to the code examples, you won't get lost!
Array
Introduction
One-dimensional array
Multidimensional array
Advantages of Arrays
Write a C program using arrays that computes the product of two matrices.
C Programming: Arrays - One-Dimensional Arrays, Two-Dimensional Arrays, Three-Dimensional Arrays; Operations on Arrays like Insertion, Deletion, Searching, Sorting, Merging, Traversing; Matrix Manipulation like Addition, Multiplication, etc. Visit us at: www.rozyph.com
This is a basic introduction to the pandas library; you can use it when teaching the library in a machine learning introduction. These slides will help students with no coding background understand the basics of pandas.
C programming, arrays (2020, weeks 5 and 6). Students should know how to define/declare and initialize arrays, including multidimensional array types, so they can apply this knowledge during the implementation of software applications.
Abstract: This PDSG workshop introduces the basics of the Python libraries used in machine learning. Libraries covered are NumPy, Pandas, and Matplotlib.
Level: Fundamental
Requirements: One should have some knowledge of programming and some statistics.
Kickstart your data science journey with this Python cheat sheet that contains code examples for strings, lists, importing libraries and NumPy arrays.
Find more cheat sheets and learn data science with Python at www.datacamp.com.
Learn more about customer journey maps on https://blog.morethanmetrics.com/your-step-by-step-guide-customer-journey-maps/
___
Slide deck about the six aspects to consider when analyzing a customer journey map.
- What's the difference between a current-state vs. a future-state journey map?
- Assumption-based vs. research-based?
- High-level perspective vs. detailed scope?
- Main actors?
- Product-focused vs. experience-focused?
- Different kinds of visualization?
Our accounting professor permitted us to use one 8x11 sheet of paper during our comprehensive final exam. Within a short amount of time I laid out all the major concepts we covered along with my own notes/examples. I also recruited Pac Man to help out with making room for our final chapter topics.
Edit: Full res version here --> http://www.joejan6.com/scratch/GuideSheet13.pdf
Covid19py by Konstantinos Kamaropoulos
A tiny Python package for easy access to up-to-date Coronavirus (COVID-19, SARS-CoV-2) cases data.
ref: https://github.com/Kamaropoulos/COVID19Py
https://pypi.org/project/COVID19Py/
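A minimal usage sketch based on the package's README (install with pip install COVID19Py); by default the data comes from the Johns Hopkins University source:

import COVID19Py

covid19 = COVID19Py.COVID19()       # defaults to the JHU data source
latest = covid19.getLatest()        # worldwide confirmed/deaths/recovered totals
print(latest)

locations = covid19.getLocations()  # per-location case data
print(len(locations))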
"optrees" package in R and examples.(optrees:finds optimal trees in weighted ...Dr. Volkan OBAN
Finds optimal trees in weighted graphs. In
particular, this package provides solving tools for minimum cost spanning
tree problems, minimum cost arborescence problems, shortest path tree
problems and minimum cut tree problem.
by Volkan OBAN
k-means Clustering in Python
scikit-learn--Machine Learning in Python
from sklearn.cluster import KMeans
k-means clustering is a method of vector quantization, originally from signal processing, that is popular for cluster analysis in data mining. k-means clustering aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean, serving as a prototype of the cluster. This results in a partitioning of the data space into Voronoi cells.
The problem is computationally difficult (NP-hard); however, there are efficient heuristic algorithms that are commonly employed and converge quickly to a local optimum. These are usually similar to the expectation-maximization algorithm for mixtures of Gaussian distributions via an iterative refinement approach employed by both algorithms. Additionally, they both use cluster centers to model the data; however, k-means clustering tends to find clusters of comparable spatial extent, while the expectation-maximization mechanism allows clusters to have different shapes.[wikipedia]
ref: http://scikit-learn.org/stable/auto_examples/cluster/plot_cluster_iris.html
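A minimal sketch in the spirit of the linked example, clustering the iris measurements into k = 3 groups with scikit-learn (the parameter values here are illustrative, not taken from the reference):

from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X = load_iris().data                              # 150 samples, 4 features
model = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = model.fit_predict(X)                     # cluster index per observation

print(labels[:10])
print(model.cluster_centers_)                     # one centroid (mean) per cluster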
Forecasting through ARIMA Modeling using R
ref:http://ucanalytics.com/blogs/step-by-step-graphic-guide-to-forecasting-through-arima-modeling-in-r-manufacturing-case-study-example/
k-means Clustering and Clustergram with R.
k-means clustering is an unsupervised learning algorithm that tries to cluster data based on their similarity. Unsupervised learning means that there is no outcome to be predicted, and the algorithm just tries to find patterns in the data. In k-means clustering, we have to specify the number of clusters we want the data to be grouped into. The algorithm randomly assigns each observation to a cluster, and finds the centroid of each cluster.
ref:https://www.r-bloggers.com/k-means-clustering-in-r/
ref:https://rpubs.com/FelipeRego/K-Means-Clustering
ref:https://www.r-bloggers.com/clustergram-visualization-and-diagnostics-for-cluster-analysis-r-code/
Data Science and its Relationship to Big Data and Data-Driven Decision Making, shared by Dr. Volkan OBAN
To cite this article:
Foster Provost and Tom Fawcett. Big Data. February 2013, 1(1): 51-59. doi:10.1089/big.2013.1508.
Foster Provost and Tom Fawcett
Published in Volume: 1 Issue 1: February 13, 2013
ref:http://online.liebertpub.com/doi/full/10.1089/big.2013.1508
https://www.researchgate.net/publication/256439081_Data_Science_and_Its_Relationship_to_Big_Data_and_Data-Driven_Decision_Making
R Machine Learning packages (generally used)
prepared by Volkan OBAN
reference:
https://github.com/josephmisiti/awesome-machine-learning#r-general-purpose
Data visualization with R.
Mosaic plot.
Ref: https://www.stat.auckland.ac.nz/~ihaka/120/Lectures/lecture17.pdf
http://www.statmethods.net/advgraphs/mosaic.html
https://stat.ethz.ch/R-manual/R-devel/library/graphics/html/mosaicplot.html
imager package in R and example
References:
http://dahtah.github.io/imager/
http://dahtah.github.io/imager/imager.html
https://cran.r-project.org/web/packages/imager/imager.pdf
As Europe's leading economic powerhouse and the fourth-largest #economy globally, Germany stands at the forefront of innovation and industrial might. Renowned for its precision engineering and high-tech sectors, Germany's economic structure is heavily supported by a robust service industry, accounting for approximately 68% of its GDP. This economic clout and strategic geopolitical stance position Germany as a focal point in the global cyber threat landscape.
In the face of escalating global tensions, particularly those emanating from geopolitical disputes with nations like #Russia and #China, #Germany has witnessed a significant uptick in targeted cyber operations. Our analysis indicates a marked increase in #cyberattack sophistication aimed at critical infrastructure and key industrial sectors. These attacks range from ransomware campaigns to #AdvancedPersistentThreats (#APTs), threatening national security and business integrity.
🔑 Key findings include:
🔍 Increased frequency and complexity of cyber threats.
🔍 Escalation of state-sponsored and criminally motivated cyber operations.
🔍 Active dark web exchanges of malicious tools and tactics.
Our comprehensive report delves into these challenges, using a blend of open-source and proprietary data collection techniques. By monitoring activity on critical networks and analyzing attack patterns, our team provides a detailed overview of the threats facing German entities.
This report aims to equip stakeholders across public and private sectors with the knowledge to enhance their defensive strategies, reduce exposure to cyber risks, and reinforce Germany's resilience against cyber threats.
Adjusting primitives for graph: SHORT REPORT / NOTES, by Subhajit Sahu
Graph algorithms like PageRank typically operate on Compressed Sparse Row (CSR), an adjacency-list-based graph representation.
Multiply with different modes (map)
1. Performance of sequential vs. OpenMP-based vector multiply.
2. Comparing various launch configs for CUDA-based vector multiply.
Sum with different storage types (reduce)
1. Performance of vector element sum using float vs. bfloat16 as the storage type.
Sum with different modes (reduce)
1. Performance of sequential vs. OpenMP-based vector element sum.
2. Performance of memcpy vs. in-place CUDA-based vector element sum.
3. Comparing various launch configs for CUDA-based vector element sum (memcpy).
4. Comparing various launch configs for CUDA-based vector element sum (in-place).
Sum with in-place strategies of CUDA mode (reduce)
1. Comparing various launch configs for CUDA-based vector element sum (in-place).
Empowering the Data Analytics Ecosystem: A Laser Focus on Value
The data analytics ecosystem thrives when every component functions at its peak, unlocking the true potential of data. Here's a laser focus on key areas for an empowered ecosystem:
1. Democratize Access, Not Data:
Granular Access Controls: Provide users with self-service tools tailored to their specific needs, preventing data overload and misuse.
Data Catalogs: Implement robust data catalogs for easy discovery and understanding of available data sources.
2. Foster Collaboration with Clear Roles:
Data Mesh Architecture: Break down data silos by creating a distributed data ownership model with clear ownership and responsibilities.
Collaborative Workspaces: Utilize interactive platforms where data scientists, analysts, and domain experts can work seamlessly together.
3. Leverage Advanced Analytics Strategically:
AI-powered Automation: Automate repetitive tasks like data cleaning and feature engineering, freeing up data talent for higher-level analysis.
Right-Tool Selection: Strategically choose the most effective advanced analytics techniques (e.g., AI, ML) based on specific business problems.
4. Prioritize Data Quality with Automation:
Automated Data Validation: Implement automated data quality checks to identify and rectify errors at the source, minimizing downstream issues.
Data Lineage Tracking: Track the flow of data throughout the ecosystem, ensuring transparency and facilitating root cause analysis for errors.
5. Cultivate a Data-Driven Mindset:
Metrics-Driven Performance Management: Align KPIs and performance metrics with data-driven insights to ensure actionable decision making.
Data Storytelling Workshops: Equip stakeholders with the skills to translate complex data findings into compelling narratives that drive action.
Benefits of a Precise Ecosystem:
Sharpened Focus: Precise access and clear roles ensure everyone works with the most relevant data, maximizing efficiency.
Actionable Insights: Strategic analytics and automated quality checks lead to more reliable and actionable data insights.
Continuous Improvement: Data-driven performance management fosters a culture of learning and continuous improvement.
Sustainable Growth: Empowered by data, organizations can make informed decisions to drive sustainable growth and innovation.
By focusing on these precise actions, organizations can create an empowered data analytics ecosystem that delivers real value by driving data-driven decisions and maximizing the return on their data investment.
Data Centers - Striving Within A Narrow Range - Research Report - MCG - May 2... (pchutichetpong)
M Capital Group ("MCG") expects demand to grow and supply to keep evolving, facilitated through institutional investment rotating out of offices and into work from home ("WFH"), while the need for data storage keeps expanding along with global internet usage, with experts predicting 5.3 billion users by 2023. These market factors will be underpinned by technological changes, such as progressing cloud services and edge sites, allowing the industry to see strong expected annual growth of 13% over the next 4 years.
Whilst competitive headwinds remain, represented through the recent second bankruptcy filing of Sungard, which blames “COVID-19 and other macroeconomic trends including delayed customer spending decisions, insourcing and reductions in IT spending, energy inflation and reduction in demand for certain services”, the industry has seen key adjustments, where MCG believes that engineering cost management and technological innovation will be paramount to success.
MCG reports that the more favorable market conditions expected over the next few years, helped by the winding down of pandemic restrictions and a hybrid working environment will be driving market momentum forward. The continuous injection of capital by alternative investment firms, as well as the growing infrastructural investment from cloud service providers and social media companies, whose revenues are expected to grow over 3.6x larger by value in 2026, will likely help propel center provision and innovation. These factors paint a promising picture for the industry players that offset rising input costs and adapt to new technologies.
According to M Capital Group: “Specifically, the long-term cost-saving opportunities available from the rise of remote managing will likely aid value growth for the industry. Through margin optimization and further availability of capital for reinvestment, strong players will maintain their competitive foothold, while weaker players exit the market to balance supply and demand.”
Techniques to optimize the PageRank algorithm usually fall into two categories. One is to try reducing the work per iteration, and the other is to try reducing the number of iterations. These goals are often at odds with one another. Skipping computation on vertices which have already converged has the potential to save iteration time. Skipping in-identical vertices, which share the same in-links, helps avoid duplicate computations and thus could reduce iteration time. Road networks often have chains which can be short-circuited before PageRank computation to improve performance; the final ranks of chain nodes can be easily calculated afterwards. This could reduce both the iteration time and the number of iterations. If a graph has no dangling nodes, the PageRank of each strongly connected component can be computed in topological order. This could help reduce the iteration time and the number of iterations, and also enable multi-iteration concurrency in the PageRank computation. The combination of all of the above methods is the STICD algorithm. [sticd] For dynamic graphs, unchanged components whose ranks are unaffected can be skipped altogether.
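As a rough illustration of just one of these ideas (not the STICD implementation itself), here is a hypothetical pure-Python PageRank that skips computation on vertices whose ranks have already converged; it assumes a graph with no dangling vertices, given as in-neighbour lists plus out-degrees:

def pagerank_skip_converged(in_nbrs, out_deg, d=0.85, eps=1e-10, max_iter=100):
    n = len(in_nbrs)
    rank = [1.0 / n] * n
    converged = [False] * n
    for _ in range(max_iter):
        new_rank = rank[:]
        for v in range(n):
            if converged[v]:
                continue                      # skip settled vertices entirely
            s = sum(rank[u] / out_deg[u] for u in in_nbrs[v])
            new_rank[v] = (1 - d) / n + d * s
            if abs(new_rank[v] - rank[v]) < eps:
                converged[v] = True           # freeze this vertex from now on
        rank = new_rank
        if all(converged):
            break
    return rank

# Tiny 3-cycle: every vertex should end up with rank 1/3.
print(pagerank_skip_converged([[2], [0], [1]], [1, 1, 1]))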
Levelwise PageRank with Loop-Based Dead End Handling Strategy: SHORT REPORT ... (Subhajit Sahu)
Abstract — Levelwise PageRank is an alternative method of PageRank computation which decomposes the input graph into a directed acyclic block-graph of strongly connected components, and processes them in topological order, one level at a time. This enables calculation of ranks in a distributed fashion without per-iteration communication, unlike the standard method where all vertices are processed in each iteration. It however comes with a precondition: the absence of dead ends in the input graph. Here, the native non-distributed performance of Levelwise PageRank was compared against Monolithic PageRank on a CPU as well as a GPU. To ensure a fair comparison, Monolithic PageRank was also performed on a graph where vertices were split by components. Results indicate that Levelwise PageRank is about as fast as Monolithic PageRank on the CPU, but quite a bit slower on the GPU. The slowdown on the GPU is likely caused by a large submission of small workloads, and is expected to be a non-issue when the computation is performed on massive graphs.
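The decomposition step the abstract relies on can be sketched with networkx (an illustrative stand-in, not the report's code): condense the graph into its strongly connected components and process the resulting block-graph in topological order, one level at a time.

import networkx as nx

# Toy digraph with two strongly connected components: {0,1,2} and {3,4}.
G = nx.DiGraph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 3)])
C = nx.condensation(G)                  # DAG of strongly connected components
for comp in nx.topological_sort(C):     # levelwise processing order
    members = C.nodes[comp]['members']  # vertices belonging to this component
    print(comp, sorted(members))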
Opendatabay - Open Data Marketplace.pptx (Opendatabay)
Opendatabay.com unlocks the power of data for everyone. Open Data Marketplace fosters a collaborative hub for data enthusiasts to explore, share, and contribute to a vast collection of datasets.
First ever open hub for data enthusiasts to collaborate and innovate. A platform to explore, share, and contribute to a vast collection of datasets. Through robust quality control and innovative technologies like blockchain verification, opendatabay ensures the authenticity and reliability of datasets, empowering users to make data-driven decisions with confidence. Leverage cutting-edge AI technologies to enhance the data exploration, analysis, and discovery experience.
From intelligent search and recommendations to automated data productisation and quotation, Opendatabay's AI-driven features streamline the data workflow. Finding the data you need shouldn't be complex. Opendatabay simplifies the data acquisition process with an intuitive interface and robust search tools. Effortlessly explore, discover, and access the data you need, allowing you to focus on extracting valuable insights. Opendatabay breaks new ground with dedicated, AI-generated synthetic datasets.
Leverage these privacy-preserving datasets for training and testing AI models without compromising sensitive information. Opendatabay prioritizes transparency by providing detailed metadata, provenance information, and usage guidelines for each dataset, ensuring users have a comprehensive understanding of the data they're working with. By leveraging a powerful combination of distributed ledger technology and rigorous third-party audits, Opendatabay ensures the authenticity and reliability of every dataset. Security is at the core of Opendatabay. The marketplace implements stringent security measures, including encryption, access controls, and regular vulnerability assessments, to safeguard your data and protect your privacy.
NumPy / SciPy / Pandas Cheat Sheet

NumPy / SciPy
arr = array([])                Create a NumPy array.
arr.shape                      Shape of an array.
convolve(a,b)                  Linear convolution of two sequences.
arr.reshape()                  Reshape an array.
sum(arr)                       Sum all elements of an array.
mean(arr)                      Compute the mean of an array.
std(arr)                       Compute the standard deviation of an array.
dot(arr1,arr2)                 Compute the inner product of two arrays.
vectorize()                    Turn a scalar function into one which accepts and returns vectors.

Pandas
Create Structures
s = Series(data, index)                          Create a Series.
df = DataFrame(data, index, columns)             Create a DataFrame.
p = Panel(data, items, major_axis, minor_axis)   Create a Panel.

DataFrame commands
df[col]                        Select a column.
df.loc[label]                  Select a row by label.
df.index                       Return the DataFrame index.
df.drop()                      Delete a given row or column. Pass axis=1 for columns.
df1 = df1.reindex_like(df2)    Reindex df1 with the index of df2.
df.reset_index()               Reset the index, putting the old index in a column named index.
df.reindex()                   Change the DataFrame index; new indices are set to NaN.
df.head(n)                     Show the first n rows.
df.tail(n)                     Show the last n rows.
df.sort_index()                Sort by index.
df.sort_index(axis=1)          Sort columns.
df.pivot(index,column,values)  Pivot the DataFrame, using new conditions.
df.T                           Transpose the DataFrame.
df.stack()                     Change the lowest level of column labels into the innermost row index.
df.unstack()                   Change the innermost row index into the lowest level of column labels.
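A short sketch (made-up data) exercising a few of the commands above:

import numpy as np
import pandas as pd

arr = np.array([[1.0, 2.0], [3.0, 4.0]])
print(arr.shape, arr.sum(), arr.mean(), arr.std())

df = pd.DataFrame(arr, index=['r1', 'r2'], columns=['a', 'b'])
print(df['a'])         # select a column
print(df.loc['r1'])    # select a row by label
print(df.head(1))      # first n rows
print(df.T)            # transpose
print(df.stack())      # column labels -> innermost row index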
Pandas Time Series
date_range(start, end, freq)   Create a time series index. ts: any structure with a datetime index.
Freq has many options, including:
B   Business day
D   Calendar day
W   Weekly
M   Monthly
Q   Quarterly
A   Annual
H   Hourly
ts.resample()                  Resample data with a new frequency.

Groupby
groupby()       Split a DataFrame by columns. Creates a GroupBy object (gb).
gb.agg()        Apply a function (single or list) to a GroupBy object.
gb.transform()  Applies a function and returns an object with the same index as the one being grouped.
gb.filter()     Filter a GroupBy object by a given function.
gb.groups       Return a dict whose keys are the unique groups, and whose values are the axis labels belonging to each group.

I/O
df.to_csv('foo.csv')                    Save to CSV.
read_csv('foo.csv')                     Read a CSV into a DataFrame.
df.to_excel('foo.xlsx', sheet_name)     Save to Excel.
read_excel('foo.xlsx', 'sheet1', index_col=None, na_values=['NA'])   Read Excel into a DataFrame.

df.dropna()     Drop rows where any data is missing.
df.count()      Return a Series of row counts for every column.
df.min()        Return the minimum of every column.
df.max()        Return the maximum of every column.
df.describe()   Generate various summary statistics for every column.
concat()        Merge DataFrame or Series objects.
df.applymap()   Apply a function to every element in the DataFrame.
df.apply()      Apply a function along a given axis.
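A short sketch (made-up data) combining the time series, groupby, and I/O commands above:

import pandas as pd

idx = pd.date_range('2013-01-01', periods=6, freq='D')   # daily index
df = pd.DataFrame({'key': ['x', 'y'] * 3, 'val': range(6)}, index=idx)

gb = df.groupby('key')                    # split DataFrame -> GroupBy object
print(gb.agg('sum'))                      # aggregate per group
print(gb.groups)                          # group label -> axis labels

print(df['val'].resample('2D').sum())     # resample to a 2-day frequency

df.to_csv('foo.csv')                      # save to CSV
df2 = pd.read_csv('foo.csv', index_col=0) # read it back
print(df2.describe())                     # summary statistics per column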
Plotting
Matplotlib is an extremely powerful module. See www.matplotlib.org for complete documentation.

plot()                       Plot data, or plot a function against a range.
subplot(n,x,y)               Create multiple plots; n: number of plots, x: number displayed horizontally, y: number displayed vertically.
xlabel()                     Label the x-axis.
ylabel()                     Label the y-axis.
title()                      Title the graph.
xticks([],[])                Set tick values for the x-axis. First array for values, second for labels.
yticks([],[])                Set tick values for the y-axis. First array for values, second for labels.
ax = gca()                   Select the current axis.
ax.spines[].set_color()      Change axis color; none to remove.
ax.spines[].set_position()   Change axis position. Can change coordinate space.
legend(loc='')               Create a legend. Set loc to 'best' for automatic placement.
savefig('foo.png')           Save the plot.

Pandas Time Series (selection)
ts.loc[start:end]   Return data for the nearest time interval.
ts[]                Return data for a specific time.
ts.between_time()   Return data between a specific interval.
to_pydatetime()     Convert a pandas DatetimeIndex to datetime.datetime objects.
to_datetime()       Convert a list of date-like objects (strings, epochs, etc.) to a DatetimeIndex.

Quandl
Quandl is a search engine for numerical data, allowing easy access to financial, social, and demographic data from hundreds of sources. The Quandl package enables Quandl API access from within Python, which makes acquiring and manipulating numerical data as quick and easy as possible. In your first Quandl function call you should specify your authtoken (found on Quandl's website after signing up) to avoid certain API call limits. See www.quandl.com/help/packages/python for more.
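A short sketch (made-up hourly data) for the time series selection commands above:

import pandas as pd

idx = pd.date_range('2013-01-01', periods=96, freq='H')   # 4 days, hourly
ts = pd.Series(range(96), index=idx)

print(ts.loc['2013-01-02':'2013-01-03'])                  # between an interval
print(ts.loc['2013-01-02'])                               # a specific day
print(ts.between_time('09:00', '12:00').head())           # between times of day
print(ts.index.to_pydatetime()[:2])                       # -> datetime objects
print(pd.to_datetime(['2013-01-01', '2013-01-02']))       # -> DatetimeIndex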
Plotting Example
import Quandl as q
import matplotlib.pyplot as plt

# Fetch two World Bank series (rural and urban population shares for the USA)
rural = q.get('WORLDBANK/USA_SP_RUR_TOTL_ZS')
urban = q.get('WORLDBANK/USA_SP_URB_TOTL_IN_ZS')

# Upper panel: rural share of the population
plt.subplot(2, 1, 1)
plt.plot(rural.index, rural)
plt.xticks(rural.index[0::3], [])  # keep every third tick, hide the labels
plt.title('American Population')
plt.ylabel('% Rural')

# Lower panel: urban share of the population
plt.subplot(2, 1, 2)
plt.plot(urban.index, urban)
plt.xlabel('year')
plt.ylabel('% Urban')
plt.show()
authtoken = 'YOURTOKENHERE'   Add the following to any function call.
get('QUANDL/CODE')            Download Quandl data for a certain Quandl code as a DataFrame.
search('searchterm')          Search Quandl. Outputs the first 4 results.
push(data, code, name)        Upload a pandas DataFrame (with a time series index) to Quandl. The code must be all-capital alphanumeric.