Affine www.affinenalytics.com
Affine Blog
Deep Learning Demystified
What is Deep Learning?
Traditional machine learning relied on handcrafted features and modality-specific models to classify images, classify text, or recognize speech. Deep learning / neural networks identify features and discover patterns automatically. Advances in deep learning have drastically reduced the time needed to build such systems and sharply increased their accuracy. Neural networks were partly inspired by how the roughly 86 billion neurons in a human brain work, but have since become more of a mathematical and computational problem. By the end of this blog, we will see how neural networks can be intuitively understood and implemented as a set of matrix multiplications, a cost function, and an optimization algorithm.
History of Deep Learning
Though DL has been widely adopted across enterprises only in the last few years, its theory and techniques date back to the 1940s. Over the decades, DL has gone by various names:
Cybernetics (1940s–1960s)
Connectionism (1980s–1990s)
Deep learning (2006–present)
Why now?
GPUs from the gaming industry, computation power on demand, and advances in optimization algorithms have made this the right time to build applications using deep learning.
Biological analogy of Neural Network
Note: A neuron here is also called a unit or a function, since a real human neuron is far more complex than the one we use in our neural networks.
Mathematical/Functional analogy of a Neural Network
The growth of Deep Learning
A recent talk by Jeff Dean highlighted the massive increase in deep learning at Google: DL is being used across various departments, and its usage has grown exponentially in the last few years.
1. At a high level, DL can be seen as a mathematical function, and for every problem there exists a neural network that approximates it. This property is often called universality, though there is no guarantee that we can find that network. (More on this: an intuitive explanation from Michael Nielsen.)
2. An algorithm fine-tunes the parameters of that function. Backpropagation and gradient descent, along with their many variants, do precisely that.
3. The above two steps boil down to a set of matrix multiplications and derivative calculations, which can be executed much faster on a GPU. Thanks to the gaming industry for popularizing and giving us faster GPUs, previously used to perform the matrix operations that manipulate pixels on game screens.
4. Thanks to cloud providers like Amazon and Microsoft, customers can spin up GPU-based instances to build deep learning models on demand.
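The first three points above can be made concrete with a tiny sketch: a model whose prediction is a single matrix multiplication, with its parameters fine-tuned by gradient descent. NumPy stands in for the GPU here, and all names and data are illustrative:

```python
import numpy as np

# Toy data: 100 samples, 3 features; the true weights we hope to recover.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w

w = np.zeros(3)   # the parameters the algorithm fine-tunes
lr = 0.1          # learning rate

for _ in range(200):
    pred = X @ w                            # the function: one matrix multiplication
    grad = 2 * X.T @ (pred - y) / len(y)    # derivative of the mean squared error
    w -= lr * grad                          # gradient descent update

print(np.round(w, 2))   # close to [2. -1. 0.5]
```

Two hundred small steps of "multiply, differentiate, update" recover the weights; deep networks repeat exactly this loop, just with more matrices.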
From Andrew Ng's point of view: "AI with deep learning is the new electricity. Just as 100 years ago electricity transformed industry after industry, AI will now do the same."
DL is also being applied in impactful applications by different teams across the globe, implying its ease of use. A few of them:
• Classification of cucumbers: It is astonishing how a farmer from Japan used deep learning and TensorFlow to build a machine that uses image recognition to segregate cucumbers, work that previously took his mother 8 hours a day.
• Classification of skin lesions: Using a pre-trained convolutional neural network, a group created a model that could accurately classify skin lesions 60% of the time, a four-fold increase over the previous benchmark of 15.6%. That's a massive improvement!
• Heart condition diagnosis: Two hedge fund analysts built a model to diagnose heart disease with accuracy matching that of doctors.
• Clothes classification: This group successfully built a model that could recognize articles of clothing, as well as their style.
The applications being built have the ability to transform the health sector, the fashion industry, and much more.
Popular tools that ease the use of Deep Learning
There are many tools available today that help companies adopt deep learning faster. A few of them:
1. Keras: We love Keras; it lets us build and execute our DL models very quickly. It provides a clean Python API and runs on either TensorFlow or Theano. It was built with the aim of making DL models easier and faster to build, and it supports convolutional networks, recurrent networks, and combinations of both.
2. TensorFlow: TensorFlow is open-source software from Google for numerical computing using data-flow graphs. It helps researchers experiment with and build new machine learning algorithms and specialized neural network models. It also provides a higher-level tool called TF-Learn for building deep neural networks more quickly. In 2016, Google open-sourced the distributed version of TensorFlow, making it scalable. TensorFlow lets you ship models to Android/iOS devices, reducing the time to deploy, and it comes with a serving layer that exposes DL models to other systems.
3. Theano: Theano has capabilities similar to TensorFlow's; we have observed a close tie between the two. Time will tell which one comes out ahead.
Types of Neural Networks
Let us start with understanding how a single neural network works.
In the image we have a neuron which takes an input X along with weights W and passes it to a function f which acts as a classifier. It is very similar to logistic regression or an SVM. We then have three important parts to building a neural network.
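That neuron, inputs X weighted by W and squashed by a sigmoid f, can be sketched in a few lines of NumPy (all values here are illustrative):

```python
import numpy as np

def sigmoid(z):
    # Squashes any real number into (0, 1), so the output reads as a probability.
    return 1.0 / (1.0 + np.exp(-z))

# One neuron: inputs X, weights W, and a function f = sigmoid(X . W),
# which behaves much like logistic regression.
X = np.array([0.5, -1.2, 3.0])   # one sample with three features
W = np.array([0.8, 0.1, -0.4])   # one weight per input

score = sigmoid(X @ W)
print(score)   # a probability between 0 and 1
```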
1. Score function (f): The score function applies the activation function to the inputs along with the weights. In this example the activation function is a sigmoid, which outputs values between 0 and 1 that act as probabilities for our binary classifier.
2. Cost function: To evaluate how the neural network performed against the ground truth, we need an evaluation metric. We can use binary cross-entropy, which tells us how well our network has done.
3. Optimization algorithm: The input variables of our function remain constant; the goal is to learn the weights that reduce the cost, thus improving accuracy. There are different approaches, from searching the weights randomly to algorithms like SGD (stochastic gradient descent) that find good weights for the particular problem.
All the above steps are common to most machine learning algorithms. Let's cover a few more terms typically used in the deep learning domain before looking at different types of networks.
1. Hidden layer: The name sounded crazy when I first heard it, but it is quite simple: any layer between the input and the output is called a hidden layer.
2. Deep networks: Any network with more than one hidden layer is called a deep network, and as the depth increases, so does the computational complexity.
3. Activation function: Several activation functions are available: sigmoid, tanh, the rectified linear unit (ReLU), which applies max(0, x), leaky ReLU, which fixes some of ReLU's drawbacks, and maxout. ReLU is the most widely used, and maxout usage has also picked up in recent months.
Deep Neural Network (Stacking of ML models)
Let us take the example of predicting whether a customer is planning a trip. We build multiple ML models, each capturing a different pattern in the data, then feed their outputs into another ML model to discover patterns the previous models missed. The system grows more complex as we keep adding models. Here is what stacking multiple models could look like.
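Putting the score function, cost function, and optimization algorithm described above together, a minimal binary classifier might look like the following NumPy sketch (the data and all names are illustrative, not the blog's actual example):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))              # 200 customers, 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # ground truth: planning a trip or not

W = np.zeros(2)

def sigmoid(z):
    z = np.clip(z, -30, 30)   # avoid overflow in exp for extreme scores
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    p = sigmoid(X @ W)                     # 1. score function: probabilities in (0, 1)
    cost = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))  # 2. binary cross-entropy
    grad = X.T @ (p - y) / len(y)          # gradient of the cost w.r.t. W
    W -= 0.5 * grad                        # 3. gradient descent update

accuracy = np.mean((sigmoid(X @ W) > 0.5) == y)
print(round(accuracy, 2))
```

The loop never touches the inputs; only the weights change, which is exactly what "learning" means here.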
Feed-forward neural networks do exactly this: they stack multiple models together and discover hidden patterns. Observed closely, the network above is nested matrix multiplication and can be written as:
1. Input layer X and the weights W of hidden layer 1, where W and X are represented as matrices.
2. Output of hidden layer 1: the activation function f1(WX).
3. The 2nd hidden layer can then be represented as f2(f1(WX)).
When the network becomes deep, that is, has more than one layer, a new challenge arises: how do we attribute the error from the cost function to multiple layers? The backpropagation algorithm propagates the error back to all the weights in the previous layers. The best part is that most of today's deep learning tools take care of backpropagation automatically. All we need to specify is the topology of the network: the number of layers, the number of nodes per layer, which activation function to use, and how many epochs/iterations to run.
Convolutional Neural Network
Feed-forward networks come with limitations when applied to problems like image recognition. Computers understand an image as a matrix with dimensions height x width x channels (RGB). To use images in a FFN we need to flatten the matrix into a vector of pixels. On a simple image recognition task like the MNIST data set this gives around 95% accuracy, but for complex image recognition the accuracy drops drastically: when images are flattened into vectors, the spatial information in the data is lost. Convolutional networks help capture that spatial information.
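How a convolution preserves spatial structure can be sketched with a minimal 2-D convolution in pure NumPy (no padding or stride; the image and kernel are illustrative):

```python
import numpy as np

def conv2d(image, kernel):
    # Slide the kernel over the image; each output pixel summarizes a local
    # patch, so neighboring pixels stay related instead of being flattened away.
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)   # a tiny 4x4 "image"
edge_kernel = np.array([[1.0, -1.0]])              # detects horizontal changes

# Every entry is -1.0: each pixel differs from its right neighbour by exactly 1,
# a spatial fact a flattened vector of pixels could not express so directly.
print(conv2d(image, edge_kernel))
```

Real convolutional layers learn many such kernels instead of hand-picking them, but the sliding-window idea is the same.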