How can we scale forecasting to many people and problems within an organization? Here I argue for a strategy that makes model building equivalent to feature construction, making building a forecaster similar to building a classifier.
Interacting with Accumulo and Graphulo using Julia/Python D4M (Accumulo Summit)
Julia and Python have become dominant languages in scientific computing, prompting the implementation of D4M in those languages. Graphulo provides a convenient mechanism for analyzing data within Accumulo using in-database graph analytics. The MATLAB®/Octave interface for Graphulo, accessible through the D4M library, provides both Accumulo and Graphulo capabilities to analysts comfortable with the matrix-based syntax of MATLAB and D4M. By introducing the Julia and Python implementations of D4M, including the Graphulo connectors, we have made D4M, Graphulo, and Accumulo more accessible to the Julia and Python communities; one application already being explored uses Julia D4M with Accumulo to store human brain-scale sparse deep neural networks.
In this presentation, we will introduce to the Accumulo community the Julia and Python implementations of D4M and demonstrate how users can leverage those to interface with Graphulo and Accumulo.
These are the slides from the workshop Introduction to Machine Learning with R, which I gave at the University of Heidelberg, Germany, on June 28th, 2018.
The accompanying code to generate all plots in these slides (plus additional code) can be found on my blog: https://shirinsplayground.netlify.com/2018/06/intro_to_ml_workshop_heidelberg/
The workshop covered the basics of machine learning. With an example dataset I went through a standard machine learning workflow in R with the packages caret and h2o:
- reading in data
- exploratory data analysis
- missingness
- feature engineering
- training and test split
- model training with Random Forests, Gradient Boosting, Neural Nets, etc.
- hyperparameter tuning
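The train/test split step in the workflow above can be illustrated with a short sketch (shown here in Python for brevity, although the workshop itself used R with caret and h2o; the 80/20 ratio and fixed seed are illustrative choices, not the workshop's code):

```python
import random

def train_test_split(rows, test_fraction=0.2, seed=42):
    """Shuffle the rows with a fixed seed, then cut them into train/test."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)          # reproducible shuffle
    cut = int(len(rows) * (1 - test_fraction)) # index separating the two parts
    return rows[:cut], rows[cut:]

train, test = train_test_split(range(100))     # 80 training rows, 20 test rows
```

The fixed seed matters: without it, re-running the workflow would place different rows in the test set and make results hard to reproduce.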
How to Build a Neural Network and Make Predictions (Developer Helps)
Neural networks have attracted intense interest lately. They are computing systems loosely inspired by the brain, built from interconnected nodes. These networks excel at sifting through large volumes of data and discovering patterns, which lets them tackle hard problems and make predictions; and they can continue learning as new data arrives.
Creating and deploying neural networks can be a challenging process, which largely depends on the specific task and dataset you’re dealing with. To succeed in this endeavor, it’s crucial to possess a solid grasp of machine learning concepts, along with strong programming skills. Additionally, a deep understanding of the chosen deep learning framework is essential. Moreover, it’s imperative to prioritize responsible and ethical usage of AI models, especially when integrating them into real-world applications.
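To make the "nodes connected together" idea concrete, here is a minimal sketch of a single artificial neuron in plain Python; the input values, weights, and bias are made-up illustrative numbers, not from any trained model:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of its inputs, squashed by a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid keeps the output in (0, 1)

# Illustrative values only: two inputs feeding one node
output = neuron([0.5, 0.8], [0.4, -0.2], bias=0.1)
```

Stacking many such nodes into layers, and adjusting the weights from data, is what turns this into a trainable network.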
Learn more: https://www.developerhelps.com/how-to-build-a-neural-network-and-make-predictions/
Learning Predictive Modeling with TSA and Kaggle (Yvonne K. Matos)
Ever wanted to do a challenging data science project but feel like you don’t have enough experience? Just go for it! Diving into a 3 TB, 3D image dataset has been my best learning experience. I want to share this deep learning project with you, and tips for overcoming challenges along the way.
Yufeng Guo | Coding the 7 steps of machine learning | Codemotion Madrid 2018 (Codemotion)
Machine learning has gained a lot of attention as the next big thing. But what is it, really, and how can we use it? In this talk, you'll learn the meaning behind buzzwords like hyperparameter tuning, and see the code behind each step of machine learning. This talk will help demystify the "magic" behind machine learning. You'll come away with a foundation that you can build on, and an understanding of the tools to build with!
DN 2017 | Multi-Paradigm Data Science - On the many dimensions of Knowledge D... (Dataconomy Media)
Gaining insight from data is not as straightforward as we often wish it would be: as diverse as the questions we're asking are the quality and the quantity of the data we may have at hand. Any attempt to turn data into knowledge thus depends strongly on whether we are dealing with big or not-so-big data, high- or low-dimensional data, exact or fuzzy data, exact or fuzzy questions, and on whether the goal is accurate prediction or understanding. This presentation emphasizes the need for a multi-paradigm data science to tackle all the challenges we are facing today and may be facing in the future. Luckily, solutions are starting to emerge...
First project in DNN
The steps for a first project are as follows:
- Load data.
- Define the Keras model.
- Compile the Keras model.
- Fit the Keras model.
- Evaluate the Keras model.
- Make predictions.
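The steps above assume Keras; as a framework-free illustration of the same flow (load, define, compile, fit, evaluate, predict), here is a NumPy sketch that trains a single logistic unit on a synthetic toy dataset. Everything here, the data, the model size, and the learning rate, is an illustrative assumption, not the tutorial's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Load data: 200 points; the label is 1 when the two features sum past 1.
X = rng.random((200, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

# 2-3. Define and "compile" the model: one logistic unit, fixed learning rate.
w, b, lr = np.zeros(2), 0.0, 0.5
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# 4. Fit: plain gradient descent on binary cross-entropy.
for _ in range(500):
    p = sigmoid(X @ w + b)
    grad = p - y
    w -= lr * (X.T @ grad) / len(X)
    b -= lr * grad.mean()

# 5. Evaluate: training accuracy.
accuracy = ((sigmoid(X @ w + b) > 0.5) == y).mean()

# 6. Predict on two new inputs: one clearly above the boundary, one below.
preds = sigmoid(np.array([[0.9, 0.9], [0.1, 0.1]]) @ w + b) > 0.5
```

Keras wraps each of these steps (model definition, compile, fit, evaluate, predict) in one method call, but the underlying mechanics are the same.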
Aspect-based sentiment analysis is a text analysis technique that breaks text down into aspects (attributes or components of a product or service) and then scores the sentiment (positive, negative, or neutral) of each aspect. In this talk we'll walk through a production pipeline for training a large aspect-based sentiment analysis model in Python with the Intel NLP Architect package, based on the following open-sourced code: https://github.com/microsoft/nlp-recipes/tree/master/examples/sentiment_analysis/absa
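As a toy illustration of the idea itself (this is not the NLP Architect API, and the aspect list and sentiment lexicon below are made up), an aspect-based scorer can be sketched as: find aspect mentions, then aggregate the sentiment of nearby opinion words:

```python
# Made-up mini-lexicons for illustration only.
ASPECTS = {"battery", "screen", "price"}
LEXICON = {"great": 1, "sharp": 1, "poor": -1, "high": -1}

def absa(sentence):
    """Score each mentioned aspect from opinion words in a small window around it."""
    tokens = sentence.lower().strip(".").split()
    scores = {}
    for i, tok in enumerate(tokens):
        if tok in ASPECTS:
            window = tokens[max(0, i - 2):i + 3]           # 2 tokens each side
            score = sum(LEXICON.get(w, 0) for w in window)  # sum nearby polarities
            scores[tok] = ("positive" if score > 0
                           else "negative" if score < 0 else "neutral")
    return scores

result = absa("The battery is great but the price is high.")
```

A trained model replaces the hand-written lexicon and window heuristic with learned representations, but the output shape, a sentiment per aspect, is the same.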
Towards a Unified Data Analytics Optimizer with Yanlei Diao (Databricks)
Today's big data analytics systems are best-effort only: despite their wide adoption, they still lack the ability to take user monetary constraints and performance goals into account and automatically configure an analytic job to achieve those goals. Our work takes a step towards a new data analytics optimizer that works for arbitrary dataflow programs and determines the job configuration automatically, based on user objectives regarding latency, throughput, monetary cost, etc.
At the core of the optimizer are a principled multi-objective optimization framework that enables one to explore the tradeoffs between different objectives, and a deep learning-based modeling approach that can learn a model for each user objective as complex as necessary for the user computing environment. Using both SQL-like and machine learning jobs in Spark, we show that our techniques can learn a model of each objective with high accuracy, and the multi-objective optimizer can automatically recommend new configurations that significantly improve performance from the configurations manually set by engineers.
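The multi-objective side of the optimizer can be illustrated with a small sketch: given candidate job configurations scored on two objectives, keep only the Pareto-optimal ones, i.e. those no other configuration beats on every objective. The configuration names and latency/cost numbers below are made up for illustration:

```python
# Candidate configurations: (latency in seconds, dollar cost). Made-up numbers.
configs = {
    "A": (10.0, 5.0),
    "B": (8.0, 6.0),
    "C": (12.0, 7.0),   # dominated by A: slower AND more expensive
    "D": (6.0, 9.0),
}

def dominates(p, q):
    """p dominates q if p is no worse on both objectives and differs from q."""
    return p[0] <= q[0] and p[1] <= q[1] and p != q

# The Pareto set: configurations not dominated by any other candidate.
pareto = {name for name, p in configs.items()
          if not any(dominates(q, p) for q in configs.values())}
```

The optimizer's job is then to pick, from this tradeoff frontier, the configuration that best matches the user's stated objectives.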
In this workshop, we introduce you to the open source deep learning framework Apache MXNet, and show you how to install it on a Raspberry Pi. Then, using a camera and a pre-trained object detection model, we show real-life objects to the Pi and listen to what it thinks the objects are, thanks to the text-to-speech capabilities of Amazon Polly. To participate in this workshop, attendees will need to bring a laptop and headphones.
The ABC of Implementing Supervised Machine Learning with Python.pptx (Ruby Shrestha)
Machine learning has risen to significant heights. However, knowing and understanding how small problems can be solved from a machine learning perspective is necessary to form a good base, appreciate the implementation process, and get started in this domain. Therefore, in this post I talk about the ABC of implementing supervised machine learning with Python by walking through a simple example: adding two numbers. To put it simply, I would like to make a machine learn to add; in other words, to develop a predictive model that can add. Sounds simple, right? View the presentation for more details.
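The adding-two-numbers idea can be sketched in a few lines of NumPy: because addition is a linear function of its inputs, a linear model fit by least squares recovers it exactly. This is a generic sketch under that framing, not the code from the presentation:

```python
import numpy as np

# Training data: random pairs of numbers; the label is their sum.
rng = np.random.default_rng(1)
X = rng.uniform(0, 100, size=(50, 2))
y = X[:, 0] + X[:, 1]

# Fit weights by least squares; since y is exactly linear in X,
# the solution converges to weights [1, 1].
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Predict" 3 + 4 with the learned model.
prediction = np.array([3.0, 4.0]) @ w
```

The model never sees the pair (3, 4) during training; it generalizes because it has learned the underlying linear rule rather than memorized examples.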
How to create a neural network that detects people wearing masks: a complete, A-to-Z workflow for creating a neural network that recognizes images.
A short intro to the paper: https://blog.fulcrum.rocks/neural-network-image-recognition
2. Topics in this video
1. Define a purpose
2. Apply the strategy to jump-start
3. Hands-on: create your first deep learning solution
Jump-start your AI journey!
3. The inspiration
Francois Chollet, Artificial Intelligence Researcher, Google
Deep Learning with Python (ISBN-13: 978-1617294433)
https://www.manning.com/books/deep-learning-with-python
4. Design of Deep Learning experiment
1. Define problem: what is the purpose?
2. Prepare the data: collect data, split data, vectorize, reshape, normalize, OHE
3. Define the network architecture
4. Define loss, optimizers: define the metrics, optimizers, and loss function, and compile the model
5. Train the network
6. Improve power to generalize: evaluate the network, hypertune
7. Test: test, predict
8. Deploy
AIforEveryone.org
5. Define the idea
- App idea: recognize your hand gesture to unlock your smartphone (secure unlock)
- Problem: recognize hand-drawn gestures to unlock your phone
Define your larger purpose (step 1)
8. Method: 7 steps to develop your Deep Learning model
1: Define problem: collect data representing the purpose
2: Format the data: split data, vectorize, reshape, normalize, OHE
3: Design the network: design the neural network
4: Define loss: think of what to optimize for
5: Train the network: allow the network to learn patterns in the data
6: Validate & improve: power of generalization? Develop a 1st model, develop a model that overfits, then adjust the model's capacity to learn "just the patterns"
7: Predict
12. Simplify your goal
Problem: the user draws a hand gesture; your app identifies it.
Simplified problem: the user can draw any digit between 0 and 9.
Define your problem (step 1)
15. 1. Define problem: collect the data representing the world you want to predict. Load the data, visualize the data.
- Labeled dataset
17. 1. Define problem: load the data and visualize it.
# 1: Define problem, collect data, visualize it
# Recognize handwritten digits
from keras.datasets import mnist
(trainX, trainY), (testX, testY) = mnist.load_data()
len(trainX)
API used: keras.datasets.mnist.load_data(), type(myObject)
18. 1. Define problem: load the data and visualize it.
# Visualize one example image (trainX holds the images; trainY holds the labels)
from matplotlib import pyplot
exampleImage = trainX[2]
pyplot.imshow(exampleImage)
pyplot.show()
API used: matplotlib.pyplot.imshow(myArrayImage)
20. 2. Prepare the data
Split the data into training & testing, vectorize as tensors, reshape, normalize, one-hot encode.
21. 2. Prepare the data: split into training & testing.
Number of training records = 60000
Number of test records = 10000
Shape of the training images = (60000, 28, 28)
22. 2. Prepare the data: reshape.
trainXready = trainX.reshape(800, 28*28)
trainX.shape
# Shape of trainX: (800, 28, 28)
# Shape of trainXready: (800, 784)
API used: newArray = myArray.reshape(28 * 28)
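The reshape step can be tried on a small stand-in array (three all-zero "images" instead of MNIST): each 28x28 image is flattened into a 784-long vector so it can feed a dense layer.

```python
import numpy as np

# Three fake 28x28 "images" standing in for MNIST data.
images = np.zeros((3, 28, 28))

# Flatten each image into a row vector of 28*28 = 784 values.
flat = images.reshape(3, 28 * 28)
```

The total number of values is unchanged; reshape only reinterprets the layout, which is why the product of the new dimensions must equal the old one.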
25. 2. Prepare the data: normalize.
trainXready = temptrainX.astype('float32') / 255
API used:
newArray = myArray / 255
afloat = aInteger.astype('float32')
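The normalization step on a tiny stand-in array of pixel values: 0-255 integers become floats in [0, 1], which keeps the network's inputs on a small, uniform scale.

```python
import numpy as np

# Three stand-in pixel values: black, mid-grey, white.
pixels = np.array([0, 51, 255], dtype='uint8')

# Convert to float32 first, then scale into [0, 1].
scaled = pixels.astype('float32') / 255
```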
27. 2. Prepare the data: one-hot encode.
myArray = keras.utils.to_categorical(aInteger)
Data: [2]
Data after OHE: [[ 0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]]
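If Keras is not at hand, the same one-hot encoding can be reproduced in a few lines of NumPy (a stand-in for what keras.utils.to_categorical returns, not its actual implementation):

```python
import numpy as np

def one_hot(labels, num_classes):
    """One-hot encode integer labels: one row per label, a single 1 per row."""
    out = np.zeros((len(labels), num_classes))
    out[np.arange(len(labels)), labels] = 1.0  # place the 1 at each label's index
    return out

# The slide's example: the label 2 over 10 digit classes.
encoded = one_hot([2], 10)
```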
28. 3. Design the network
Develop a 1st model; develop a model that overfits; adjust the model's capacity to learn "just the patterns".
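What "design the network" produces can be made concrete with a forward pass through a tiny dense network in NumPy. The layer sizes (784 inputs, 16 hidden units, 10 outputs) and the random, untrained weights are illustrative assumptions, not the deck's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Weights for a 784 -> 16 -> 10 dense network, randomly initialized.
W1, b1 = rng.normal(size=(784, 16)) * 0.01, np.zeros(16)
W2, b2 = rng.normal(size=(16, 10)) * 0.01, np.zeros(10)

def forward(x):
    """One forward pass: dense + ReLU, then dense + softmax over 10 classes."""
    h = np.maximum(0, x @ W1 + b1)        # hidden layer with ReLU
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max())     # numerically stable softmax
    return e / e.sum()

probs = forward(np.zeros(784))            # a blank "image" as input
```

Training (steps 4-5) then adjusts W1, b1, W2, b2 so these output probabilities match the labels; the design step only fixes the shapes and connections.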
42. 7. Predict
Problem: the user draws a hand gesture; your app predicts it.
predictedResults = network.predict(testXready[1:3])
Predicted results:
[ 1.35398892e-08 3.91871922e-08 9.99876499e-01 1.22501457e-04
  1.77109036e-10 1.30937892e-07 2.99046292e-07 2.58880729e-11
  4.58700157e-07 2.09212196e-11]
Ground truth OHE: [[ 0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]]
Ground truth: [2]
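Turning a predicted probability vector into a digit is a single argmax; using the probabilities shown on the slide:

```python
import numpy as np

# The probability vector the network produced for one test image (from the slide).
probs = np.array([1.35398892e-08, 3.91871922e-08, 9.99876499e-01, 1.22501457e-04,
                  1.77109036e-10, 1.30937892e-07, 2.99046292e-07, 2.58880729e-11,
                  4.58700157e-07, 2.09212196e-11])

# The predicted digit is the index of the largest probability.
predicted_digit = int(np.argmax(probs))
```

Here index 2 carries essentially all the probability mass (about 0.9999), matching the ground truth label [2].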
43. Let's put it together
- App idea: recognize your hand gesture to unlock your smartphone (secure unlock)
- Business outcome: recognize hand-drawn gestures to unlock your phone
# Step 6: Evaluate the network
metrics_test_loss, metrics_test_accuracy = network.evaluate(testXready, testYready)
Test accuracy: 0.9157
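What the evaluate step's accuracy metric computes can be shown directly on toy label arrays (made-up predictions, not the model's real output):

```python
import numpy as np

# Toy predicted digits vs. ground-truth digits for ten test images.
predicted = np.array([2, 1, 0, 4, 1, 4, 9, 5, 9, 0])
truth     = np.array([2, 1, 0, 4, 1, 4, 9, 6, 9, 0])

# Accuracy is the fraction of predictions that match the truth (here 9 of 10).
accuracy = (predicted == truth).mean()
```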
45. Design of Deep Learning experiment
1. Define problem: what is the purpose?
2. Prepare the data: collect data, split data, vectorize, reshape, normalize, OHE
3. Define the network architecture
4. Define loss, optimizers: define the metrics, optimizers, and loss function, and compile the model
5. Train the network
6. Improve power to generalize: evaluate the network, hypertune
7. Test: test, predict
8. Deploy