Group 4
INTRODUCTION TO DEEP LEARNING AND
REASONS TO USE DEEP LEARNING
Introduction to Deep Learning
Deep learning is a branch of machine learning based on artificial neural
networks. It is capable of learning complex patterns and relationships within data,
and it does not require us to explicitly program every rule. It has become
increasingly popular in recent years due to advances in processing power and the
availability of large datasets. It is built on artificial neural networks (ANNs),
which, when they contain many hidden layers, are also known as deep neural networks (DNNs).
These neural networks are inspired by the structure and function of the human brain’s
biological neurons, and they are designed to learn from large amounts of data.
Key Concepts in Deep Learning
1. Neural Networks:
o Artificial Neurons: Basic units of a neural network, inspired by biological neurons. They
take inputs, apply a weighted sum and an activation function, and produce an output.
o Layers: Neural networks consist of input layers, hidden layers, and output layers. Deep
networks have multiple hidden layers.
2. Activation Functions:
o Functions that introduce non-linearity into the network, allowing it to learn complex
patterns.
o Common functions: Sigmoid, Tanh, ReLU (Rectified Linear Unit), Leaky ReLU, etc.
3. Loss Function:
o Measures how well the neural network's predictions match the actual target values.
o Common functions: Mean Squared Error (MSE), Cross-Entropy Loss, Hinge Loss.
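To make these three concepts concrete, here is a minimal sketch in Python with NumPy (assumed available) of a single artificial neuron, a ReLU activation, and an MSE loss; the inputs, weights, and target are made-up illustrative values:

```python
# Minimal sketch: one artificial neuron, an activation, and a loss.
import numpy as np

def relu(z):
    # ReLU activation: introduces non-linearity by zeroing negative values.
    return np.maximum(0.0, z)

def mse(prediction, target):
    # Mean Squared Error: how far the prediction is from the target.
    return np.mean((prediction - target) ** 2)

x = np.array([0.5, -1.2, 3.0])   # inputs to the neuron
w = np.array([0.4, 0.1, -0.6])   # weights (learned during training)
b = 0.2                          # bias (also learned)

z = np.dot(w, x) + b             # weighted sum of inputs
output = relu(z)                 # activation produces the neuron's output
print("output:", output, "loss:", mse(output, 1.0))
```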
Deep Learning Architecture
• A Generative Adversarial Network (GAN) is a deep learning architecture
introduced by Ian Goodfellow and his colleagues in 2014.
• GANs consist of two neural networks, the generator and the discriminator,
which are trained simultaneously through a process of competition.
• This network is used to generate new, synthetic data that resembles a given
training dataset.
• The components of a GAN are:
1. Generator - The generator takes a random noise vector as input and
transforms it into synthetic data samples.
2. Discriminator - The discriminator takes both real data (from the training
set) and fake data (generated by the generator) as input and tries to classify
them correctly as real or fake.
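To illustrate the adversarial training loop, here is a minimal sketch in PyTorch (assumed installed); the toy 1-D Gaussian "real" data, layer sizes, and learning rates are illustrative assumptions, not details from the slides:

```python
# Minimal GAN sketch: generator vs. discriminator on a toy 1-D distribution.
import torch
import torch.nn as nn

latent_dim = 8  # size of the random noise vector fed to the generator

generator = nn.Sequential(
    nn.Linear(latent_dim, 16), nn.ReLU(),
    nn.Linear(16, 1),                      # outputs a fake 1-D sample
)
discriminator = nn.Sequential(
    nn.Linear(1, 16), nn.ReLU(),
    nn.Linear(16, 1), nn.Sigmoid(),        # probability the input is real
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 2.0      # "real" data: N(2, 0.5^2)
    fake = generator(torch.randn(64, latent_dim))

    # Discriminator step: classify real samples as 1 and fakes as 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator output 1 on fakes.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The two networks improve through competition: as the discriminator gets better at spotting fakes, the generator is forced to produce samples ever closer to the training distribution.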
[Figure: Generative Adversarial Network (GAN) architecture]
Popular Deep Learning Frameworks
• A deep learning framework is a software package that helps researchers and
data scientists design and train deep learning models.
• The goal of these frameworks is to let users train their models
without needing to understand the underlying algorithms of deep learning,
neural networks, and machine learning. They provide a clear and concise
way of defining models using a collection of pre-built and optimized
components.
• Deep learning works by using artificial neural networks to learn from data.
Neural networks are made up of layers of interconnected nodes, and each
node is responsible for learning a specific feature of the data.
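As an illustration of these pre-built components, here is a minimal sketch using Keras (assuming TensorFlow is installed); the layer sizes and input shape are arbitrary placeholder choices:

```python
# Defining and compiling a small model from pre-built Keras components.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),                      # 10 input features
    tf.keras.layers.Dense(64, activation="relu"),     # hidden layer
    tf.keras.layers.Dense(1, activation="sigmoid"),   # output layer
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()  # prints the layer structure and parameter counts
```

Every piece here (layers, optimizer, loss) is an optimized building block supplied by the framework; the user only composes them.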
Applications of Deep Learning
Virtual Assistants:-
Virtual Assistants are cloud-based applications that understand natural language voice commands
and complete tasks for the user. Amazon Alexa, Cortana, Siri, and Google Assistant are typical
examples of virtual assistants. They need internet-connected devices to work with their full
capabilities. Each time a command is fed to the assistant, it uses Deep Learning algorithms
to learn from past interactions and provide a better user experience.
Natural Language Processing:- Another important field where Deep Learning is showing
promising results is NLP, or Natural Language Processing: the process of enabling
machines to read and comprehend human language. The challenges of comprehending human
language are being addressed by Deep Learning-based NLP, which teaches computers (using
Autoencoders and Distributed Representations) to provide suitable responses to linguistic inputs.
Robotics:- Deep Learning is heavily used for building robots that perform human-like
tasks. Robots powered by Deep Learning use real-time updates to sense obstacles in
their path and plan their journey instantly. Such robots can carry goods in hospitals,
factories, and warehouses, and assist with inventory management, manufacturing, etc.
Entertainment:-Companies such as Netflix, Amazon, YouTube, and Spotify give relevant
movies, songs, and video recommendations to enhance their customer experience. This is all
thanks to Deep Learning. Based on a person’s browsing history, interest, and behavior, online
streaming companies give suggestions to help them make product and service choices. Deep
learning techniques are also used to add sound to silent movies and generate subtitles
automatically.
Healthcare:-Deep Learning has found its application in the Healthcare sector. Computer-aided
disease detection and computer-aided diagnosis have been possible using Deep Learning. It is
widely used for medical research, drug discovery, and diagnosis of life-threatening diseases such as
cancer and diabetic retinopathy through the process of medical imaging.
Image Recognition:- Deep Learning is used to identify objects and features in images, such as people, animals, and places.
Reasons to Use Deep Learning
• Ability to Process Large Volumes of Data: Deep learning models thrive on large
datasets; their performance typically keeps improving as more data becomes available,
where many traditional methods plateau.
• Improved Accuracy and Performance: Deep learning has achieved state-of-the-art
accuracy on tasks such as image classification, speech recognition, and machine
translation, often surpassing earlier approaches.
• Automation of Feature Extraction: Deep learning learns features directly from raw
data, making the model development process more efficient and scalable. It reduces
manual effort, accelerates experimentation, and often leads to better performance by
uncovering complex patterns in the data.
• Advancements in Hardware (GPUs, TPUs): Advancements in hardware, particularly GPUs
(Graphics Processing Units) and TPUs (Tensor Processing Units), have revolutionized
machine learning and deep learning by providing the computational power necessary for
handling large datasets and complex models. A quick way to check which accelerators
are available is shown below.
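A minimal check of the accelerators visible to a framework (assuming TensorFlow is installed; on a machine without a GPU or TPU these lists are simply empty):

```python
# Listing the hardware accelerators TensorFlow can see on this machine.
import tensorflow as tf

print(tf.config.list_physical_devices("GPU"))  # available GPUs, if any
print(tf.config.list_physical_devices("TPU"))  # available TPUs (e.g., on Colab)
```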
Applications
1. Healthcare
* Medical Diagnosis: ML models assist in diagnosing diseases from
medical images (e.g., X-rays, MRIs) and patient data. For instance,
algorithms can detect cancerous cells or predict diseases based on
symptoms and medical history.
2. Entertainment
* Content Recommendations: Streaming services like Netflix and
Spotify use ML to recommend movies, shows, or music based on users'
preferences and viewing/listening history.
3. Autonomous Systems
* Self-Driving Cars: Enabling vehicles to navigate and make decisions
autonomously (e.g., Tesla’s Autopilot).
* Robotics: Enhancing robots' abilities to perform complex tasks and
interact with humans (e.g., robotic assistants in manufacturing).
Challenges of Deep Learning
Data Requirements: Deep learning models need vast amounts of data to perform well. Gathering,
cleaning, and labeling this data is time-consuming and expensive.
Computational Power: Training deep learning models requires powerful hardware, like GPUs and
TPUs, which can be costly and consume a lot of energy.
Overfitting: Models can perform exceptionally well on training data but fail to generalize to new,
unseen data. This means they can be very good at memorizing rather than understanding.
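A widely used mitigation for overfitting is dropout regularization; here is a minimal sketch (assuming Keras is installed; the architecture and the 0.5 dropout rate are illustrative choices):

```python
# Dropout randomly deactivates units during training, so the network
# cannot rely on memorizing individual training examples.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),   # drop 50% of activations each training step
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```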
Interpretability: Deep learning models, especially deep neural networks, are often seen as "black
boxes." It’s hard to understand how they make decisions, which can be problematic in critical
applications like healthcare or finance.
Continuous Learning: Adapting to new information without forgetting what has been previously
learned (catastrophic forgetting) is a challenge for deep learning models.
Future of Deep Learning
Better Algorithms: Researchers are developing new algorithms that require less data and
computational power, making deep learning more efficient.
Explainability: Efforts are underway to make models more interpretable, so we will get
better at understanding how deep learning models make decisions, which will build trust
and make them more useful.
Ethical AI: AI will become fairer and more ethical, reducing bias and protecting privacy.
Using Pre-trained Models: Pre-trained models will be more commonly used and adapted for
specific tasks, saving time and resources.
On-Device AI: Deep learning models will run on devices like phones and smart home gadgets,
making applications faster and more efficient.
Integration with Other Fields: Combining deep learning with other areas like neuroscience,
cognitive science, and quantum computing could lead to new breakthroughs and applications.
Ethical and Social Implications of Deep Learning
Ethical Implications of Deep Learning
Bias and Fairness:
Data Bias: Training data may reinforce existing prejudices.
Algorithmic Bias: Design and training processes can introduce new biases.
Privacy:
Data Collection: Large datasets can include sensitive personal information.
Data Security: Risks of breaches and unauthorized access.
Transparency and Accountability:
Black-Box Nature: Hard to interpret how decisions are made.
Responsibility: Unclear accountability for AI-driven decisions.
Autonomy and Control:
Job Displacement: Automation may render certain jobs obsolete.
Decision-Making Power: Risk of over-reliance on AI for critical decisions.
Social Implications of Deep Learning
Economic Impact:
Inequality: Uneven distribution of AI benefits can widen economic gaps.
Market Disruption: Innovations may disrupt existing industries and job markets.
Social Dynamics:
Surveillance: Potential for invasive monitoring by governments or corporations.
Manipulation: AI systems can influence public opinion and behavior.
Educational and Skill Development:
Skill Requirements: Increased need for AI-related education.
Digital Divide: Unequal access to technology and education.
Ethical AI Development:
Regulation and Standards: Need for frameworks to guide ethical AI practices.
Resources and Tools for Deep Learning
->Leading Platforms for Training Deep Learning Models
1.Google Colab: Colab is a hosted Jupyter Notebook service that requires no setup to use and
provides free access to computing resources, including GPUs and TPUs. Colab is especially well
suited to machine learning, data science, and education.
2.Amazon Web Services (AWS): A comprehensive cloud computing platform provided by
Amazon. AWS offers a wide range of services with a pay-as-you-go pricing model over the
Internet, such as storage, computing power, databases, and machine learning services.
3.Microsoft Azure: Azure provides a wide variety of services such as cloud storage, compute
services, network services, cognitive services, databases, analytics, and IoT. It makes building,
deploying, and managing applications very easy.
->Essential Datasets for Deep Learning Projects
1.ImageNet: The ImageNet project is a large visual database designed for use in visual object
recognition research. ImageNet contains more than 20,000 categories, each consisting of several
hundred images.
2.MNIST: The MNIST database (Modified National Institute of Standards and Technology
database) is a large database of handwritten digits that is commonly used
for training various image processing systems.
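As an illustration, MNIST ships with Keras and can be loaded in one line (assuming TensorFlow is installed):

```python
# Loading the MNIST handwritten-digit dataset through Keras.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
print(x_train.shape)  # (60000, 28, 28): 60,000 training images of 28x28 pixels
print(y_train[:5])    # the first five digit labels
```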
->Tools for Visualizing Deep Learning Models and Data
3.TensorBoard: TensorBoard is used for analyzing data-flow graphs and for understanding
machine-learning models. Its visualization is highly interactive: a user can pan, zoom,
and expand the nodes (data with weights) to display the details.
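Training logs for TensorBoard are typically produced with a Keras callback; a minimal sketch (assuming TensorFlow is installed; the tiny model and random data are placeholders):

```python
# Writing training logs that TensorBoard can visualize.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

tb = tf.keras.callbacks.TensorBoard(log_dir="./logs")  # where logs are written
x, y = np.random.rand(100, 4), np.random.rand(100, 1)
model.fit(x, y, epochs=3, callbacks=[tb], verbose=0)
# Then, in a terminal:  tensorboard --logdir ./logs
```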
4.Matplotlib: Matplotlib is a powerful plotting library in Python used for creating static, animated,
and interactive visualizations. Matplotlib’s primary purpose is to provide users with the tools and
functionality to represent data graphically, making it easier to analyze and understand.
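For example, a training-loss curve can be plotted in a few lines (assuming Matplotlib is installed; the loss values are made-up placeholders):

```python
# Plotting a training-loss curve with Matplotlib.
import matplotlib.pyplot as plt

losses = [0.9, 0.6, 0.45, 0.35, 0.3, 0.28]  # placeholder per-epoch loss values
plt.plot(range(1, len(losses) + 1), losses, marker="o")
plt.xlabel("Epoch")
plt.ylabel("Training loss")
plt.title("Loss curve")
plt.show()
```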
Conclusion
• What Is Deep Learning?
Deep learning is a subset of machine learning, which is a subset
of artificial intelligence. Artificial intelligence is a general term that
refers to techniques that enable computers to mimic human behavior.
Machine learning represents a set of algorithms trained on data that
make all of this possible. Deep learning is just a type of machine
learning, inspired by the structure of the human brain.
• How Does Deep Learning Work?
Deep learning algorithms attempt to draw similar conclusions as
humans would by constantly analyzing data with a given logical
structure. To achieve this, deep learning uses a multi-layered structure of
algorithms called neural networks.
Conclusion (continued)
• Why Is Deep Learning Popular?
No Feature Extraction:
Before deep learning, we relied on traditional machine learning
methods including decision trees, SVMs, naïve Bayes classifiers,
and logistic regression. These algorithms are also called flat
algorithms. "Flat" here refers to the fact that they cannot
normally be applied directly to raw data (such as .csv files,
images, or text); we first need a preprocessing step called
feature extraction. Deep learning removes this step by learning
features directly from the raw data, as the sketch below contrasts.
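A minimal sketch of the contrast (assuming scikit-learn and TensorFlow are installed; the random stand-in images and hand-crafted features are illustrative assumptions):

```python
# Flat algorithm vs. deep learning on the same raw image data.
import numpy as np
from sklearn.linear_model import LogisticRegression
import tensorflow as tf

images = np.random.rand(100, 28, 28)          # stand-in for raw image data
labels = np.random.randint(0, 2, size=100)

# Flat algorithm: hand-crafted features must be extracted first
# (here, just the mean and variance of each image).
features = np.stack([images.mean(axis=(1, 2)), images.var(axis=(1, 2))], axis=1)
LogisticRegression().fit(features, labels)

# Deep learning: the network consumes raw pixels and learns features itself.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(images, labels, epochs=1, verbose=0)
```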
