Deep learning uses neural networks with many hidden layers to learn representations of data with multiple levels of abstraction. It has been shown to outperform simpler models with fewer layers on complex tasks like image and speech recognition. Deep learning works by defining a set of candidate functions (neural networks) and using gradient descent to optimize the network parameters to minimize loss on training data. Deeper networks with more parameters generally perform better but require large datasets and computational resources to train effectively.
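The optimize-by-gradient-descent loop described above can be sketched in a few lines of NumPy. This is a toy illustration I have added (not from the original text): a one-parameter candidate function fit by gradient descent on a squared-error loss.

```python
import numpy as np

# Toy sketch of the training loop: candidate function f(x) = w * x,
# squared-error loss, parameter updated by gradient descent.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x + rng.normal(scale=0.1, size=100)  # data generated with w = 3

w = 0.0    # initial parameter
lr = 0.1   # learning rate
for _ in range(200):
    grad = np.mean(2 * (w * x - y) * x)  # d/dw of the mean squared error
    w -= lr * grad                       # gradient descent step

print(round(w, 2))  # converges near 3.0
```

Real deep networks repeat exactly this loop, only with millions of parameters and gradients computed by backpropagation.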
Applying your Convolutional Neural Networks (Databricks)
Part 3 of the Deep Learning Fundamentals Series, this session starts with a quick primer on activation functions, learning rates, optimizers, and backpropagation. Then it dives deeper into convolutional neural networks, discussing convolutions (including kernels, local connectivity, strides, padding, and activation functions), pooling (or subsampling to reduce the image size), and fully connected layers. The session also provides a high-level overview of some CNN architectures. The demos included in these slides are running on Keras with TensorFlow backend on Databricks.
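The convolution and pooling operations listed above can be sketched in plain NumPy. This is an illustrative single-channel version I have added, not the Keras demo code from the slides:

```python
import numpy as np

def conv2d(image, kernel, stride=1, padding=0):
    """Single-channel 2-D convolution with stride and zero padding,
    as in a CNN convolutional layer, followed by a ReLU activation."""
    if padding:
        image = np.pad(image, padding)
    kh, kw = kernel.shape
    oh = (image.shape[0] - kh) // stride + 1
    ow = (image.shape[1] - kw) // stride + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Local connectivity: each output pixel sees only one patch.
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * kernel)
    return np.maximum(out, 0)  # ReLU

def max_pool(fmap, size=2):
    """2x2 max pooling: subsample the feature map to reduce its size."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)
fmap = conv2d(img, np.ones((3, 3)), stride=1, padding=1)  # 'same' padding -> 6x6
pooled = max_pool(fmap)                                   # -> 3x3
```

A Keras `Conv2D`/`MaxPooling2D` pair performs the same computation across many kernels and channels at once.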
Learn to Build an App to Find Similar Images using Deep Learning - Piotr Teterwak (PyData)
This document discusses using deep learning and deep features to build an app that finds similar images. It begins with an overview of deep learning and how neural networks can learn complex patterns in data. The document then discusses how pre-trained neural networks can be used as feature extractors for other domains through transfer learning. This reduces data and tuning requirements compared to training new deep learning models. The rest of the document focuses on building an image similarity service using these techniques, including training a model with GraphLab Create and deploying it as a web service with Dato Predictive Services.
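The core of such an image-similarity service reduces to nearest-neighbor search over deep-feature vectors. Below is a minimal sketch of that step, assuming the features have already been extracted by a pre-trained network; the 4096-dimensional size and the random data here are placeholders, not details from the document:

```python
import numpy as np

def most_similar(query, features):
    """Return the index of the stored image whose (pre-extracted)
    deep-feature vector has the highest cosine similarity to the query."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    return int(np.argmax(f @ q))

# Hypothetical deep features, as if taken from a pre-trained network's
# penultimate layer via transfer learning.
rng = np.random.default_rng(1)
db = rng.normal(size=(5, 4096))
query = db[3] + 0.01 * rng.normal(size=4096)  # near-duplicate of image 3
print(most_similar(query, db))  # index 3, the near-duplicate
```

A production service would replace the brute-force `argmax` with an approximate nearest-neighbor index once the database grows large.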
We present basic concepts of machine learning: supervised and unsupervised learning, types of tasks, how some algorithms work, neural networks, deep learning concepts, and how to apply them in your work.
Neural network basics and an introduction to deep learning (Tapas Majumdar)
Deep learning tools and techniques can be used to build convolutional neural networks (CNNs). Neural networks learn from observational training data by automatically inferring rules to solve problems. Neural networks use multiple hidden layers of artificial neurons to process input data and produce output. Techniques like backpropagation, cross-entropy cost functions, softmax activations, and regularization help neural networks learn more effectively and avoid issues like overfitting.
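Two of the techniques named above, softmax activations and the cross-entropy cost function, can be written out directly. This is a minimal NumPy sketch added for illustration, not code from the deck:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax: outputs are positive and sum to 1,
    so they can be read as class probabilities."""
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(probs, target):
    """Cross-entropy cost for a target given as a class index:
    penalizes low probability assigned to the correct class."""
    return -np.log(probs[target])

p = softmax(np.array([2.0, 1.0, 0.1]))  # highest logit gets highest probability
loss = cross_entropy(p, 0)              # small when p[0] is near 1
```

Compared with a quadratic cost, cross-entropy keeps gradients large when the network is confidently wrong, which is one reason it helps networks learn more effectively.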
Part 1 of the Deep Learning Fundamentals Series, this session discusses the use cases and scenarios surrounding Deep Learning and AI; reviews the fundamentals of artificial neural networks (ANNs) and perceptrons; discusses the basics of optimization, beginning with the cost function, gradient descent, and backpropagation; and covers activation functions (including Sigmoid, TanH, and ReLU). The demos included in these slides are running on Keras with TensorFlow backend on Databricks.
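The three activation functions covered in the session look like this in plain NumPy (an illustrative sketch added here, not the session's Keras demo):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))  # squashes to (0, 1)

def tanh(x):
    return np.tanh(x)                # squashes to (-1, 1), zero-centered

def relu(x):
    return np.maximum(x, 0.0)        # passes positives, zeroes negatives

print(relu(np.array([-2.0, 0.0, 2.0])))  # [0. 0. 2.]
```

In modern networks ReLU is usually the default for hidden layers because, unlike sigmoid and tanh, it does not saturate for positive inputs, so gradients flow more easily during backpropagation.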
Summary:
There are three parts in this presentation.
A. Why do we need Convolutional Neural Network
- Problems we face today
- Solutions for problems
B. LeNet Overview
- The origin of LeNet
- The result after using LeNet model
C. LeNet Techniques
- LeNet structure
- Function of every layer
The GitHub link below points to a repository where I rebuilt LeNet without any deep learning package. I hope it helps you better understand the basics of convolutional neural networks.
Github Link : https://github.com/HiCraigChen/LeNet
LinkedIn : https://www.linkedin.com/in/YungKueiChen
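For reference, the layer-by-layer sizes of the classic LeNet-5 structure follow from the standard output-size formula. This is a small sketch I have added; the repository above implements the full model:

```python
def out_size(n, k, stride=1, pad=0):
    """Spatial output size of a conv/pool layer: (n - k + 2*pad) // stride + 1."""
    return (n - k + 2 * pad) // stride + 1

# Classic LeNet-5 on a 32x32 input.
n = 32
n = out_size(n, 5)            # C1: 5x5 conv      -> 28
n = out_size(n, 2, stride=2)  # S2: 2x2 subsample -> 14
n = out_size(n, 5)            # C3: 5x5 conv      -> 10
n = out_size(n, 2, stride=2)  # S4: 2x2 subsample -> 5
n = out_size(n, 5)            # C5: 5x5 conv      -> 1
print(n)  # 1: the C5 maps are 1x1, feeding the fully connected layers
```

Tracing the sizes like this is a quick sanity check before wiring up any CNN, since a mismatch between a layer's output and the next layer's expected input is one of the most common construction errors.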
1. Neural networks are a type of machine learning model that can learn highly non-linear functions to map inputs to outputs. They consist of interconnected layers of nodes that mimic biological neurons.
2. Backpropagation is an algorithm that allows neural networks to be trained using gradient descent by efficiently computing the gradient of the loss function with respect to the network parameters. It works by propagating gradients from the output layer back through the network using the chain rule.
3. There are many design decisions that go into building a neural network architecture, such as the number of hidden layers and nodes, choice of activation functions, objective function, and training algorithm like stochastic gradient descent. Common activation functions are the sigmoid, tanh, and rectified linear unit (ReLU).
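The backpropagation procedure in point 2 can be verified numerically: compute the chain-rule gradient by hand for a tiny one-hidden-layer network and compare it with a finite-difference estimate. This is an illustrative sketch I have added, not code from the original list:

```python
import numpy as np

rng = np.random.default_rng(0)
x, t = rng.normal(size=3), rng.normal(size=2)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))

def loss(W1):
    h = np.tanh(W1 @ x)   # hidden layer
    y = W2 @ h            # output layer
    return 0.5 * np.sum((y - t) ** 2)

# Backpropagation: push dL/dy back through each layer with the chain rule.
h = np.tanh(W1 @ x)
dy = (W2 @ h) - t                    # dL/dy
dh = W2.T @ dy                       # chain rule through W2
dW1 = np.outer(dh * (1 - h**2), x)   # chain rule through tanh, then W1

# Finite-difference check of one entry of dW1.
eps = 1e-6
Wp = W1.copy(); Wp[0, 0] += eps
numeric = (loss(Wp) - loss(W1)) / eps
```

Gradient checks like this are the standard way to validate a hand-written backpropagation implementation before trusting it for training.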
Le Song, Assistant Professor, College of Computing, Georgia Institute of Technology (MLconf)
Understanding Deep Learning for Big Data: The complexity and scale of big data impose tremendous challenges for analysis. Yet big data also offer us great opportunities. Some nonlinear phenomena, features, or relations, which are not clear or cannot be inferred reliably from small and medium data, become clear and can be learned robustly from big data. Typically the form of the nonlinearity is unknown to us and needs to be learned from data as well. Being able to harness the nonlinear structures in big data could allow us to tackle problems that were impossible before, or to obtain results far better than the previous state of the art.
Nowadays, deep neural networks are the methods of choice for large-scale nonlinear learning problems. What makes deep neural networks work? Is there a general principle for tackling high-dimensional nonlinear problems that we can learn from deep neural networks? Can we design competitive or better alternatives based on such knowledge? To make progress on these questions, my machine learning group performed both theoretical and experimental analysis on existing and new deep learning architectures, investigating three crucial aspects: the usefulness of the fully connected layers, the advantage of the feature learning process, and the importance of compositional structures. Our results point to some promising directions for future research and provide guidelines for building new deep learning models.
Separating Hype from Reality in Deep Learning with Sameer Farooqui (Databricks)
Deep Learning is all the rage these days, but where does the reality of what Deep Learning can do end and the media hype begin? In this talk, I will dispel common myths about Deep Learning that are not necessarily true and help you decide whether you should practically use Deep Learning in your software stack.
I’ll begin with a technical overview of common neural network architectures like CNNs, RNNs, GANs and their common use cases like computer vision, language understanding or unsupervised machine learning. Then I’ll separate the hype from reality around questions like:
• When should you prefer traditional ML systems like scikit-learn or Spark.ML instead of Deep Learning?
• Do you no longer need to do careful feature extraction and standardization if using Deep Learning?
• Do you really need terabytes of data when training neural networks or can you ‘steal’ pre-trained lower layers from public models by using transfer learning?
• How do you decide which activation function (like ReLU, leaky ReLU, ELU, etc) or optimizer (like Momentum, AdaGrad, RMSProp, Adam, etc) to use in your neural network?
• Should you randomly initialize the weights in your network or use more advanced strategies like Xavier or He initialization?
• How easy is it to overfit/overtrain a neural network, and what are the common techniques to avoid overfitting (like L1/L2 regularization, dropout and early stopping)?
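On the initialization question above, the two named strategies amount to scaling the variance of the random weights by the layer's fan-in. A minimal sketch of my own, not code from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier_init(fan_in, fan_out):
    """Glorot/Xavier: variance 1 / fan_in, suited to tanh/sigmoid layers."""
    return rng.normal(0.0, np.sqrt(1.0 / fan_in), size=(fan_out, fan_in))

def he_init(fan_in, fan_out):
    """He: variance 2 / fan_in, suited to ReLU layers, which zero out
    roughly half their inputs and so need the larger scale."""
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_out, fan_in))

W = he_init(1024, 256)  # weights for a 1024 -> 256 ReLU layer
```

Either scheme keeps activation magnitudes roughly constant from layer to layer, which is what naive random initialization fails to do in deep stacks.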
Deep Learning Made Easy with Deep Features (Turi, Inc.)
Deep learning models can learn hierarchical feature representations from raw input data. These learned features can then be used to build simple classifiers that achieve high accuracy, even when training data is limited. Transfer learning involves using features extracted from a model pre-trained on a large dataset to build classifiers for other related problems. This approach has been shown to outperform traditional feature engineering with hand-designed features. Deep features extracted from neural networks trained on large image or text datasets have proven to work well as general purpose features for other visual and language problems.
This document provides an overview of deep learning and its applications. It discusses how deep learning can be used for image classification and how neural networks learn hierarchical representations from data. The document highlights some of the challenges of deep learning, such as the large amounts of data and computation required. It also covers how deep learning models can be deployed in production using services like Amazon Web Services to ensure low latency, high availability, and continuous learning.
https://telecombcn-dl.github.io/2018-dlai/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks or Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles of deep learning from both algorithmic and computational perspectives.
Artificial Intelligence, Machine Learning and Deep Learning (Sujit Pal)
Slides for a talk Abhishek Sharma and I gave at the Gennovation tech talks (https://gennovationtalks.com/) at Genesis. The talk was part of outreach for the Deep Learning Enthusiasts meetup group in San Francisco. My part of the talk is covered in slides 19-34.
In this talk we detail the steps to create a visual search engine for 1M Amazon products using MXNet Gluon and the k-nearest-neighbor search library HNSW.
For implementation details, check this repository: https://github.com/ThomasDelteil/VisualSearch_MXNet
Video available here:
https://www.youtube.com/watch?v=9a8MAtfFVwI
Demo website available here:
https://thomasdelteil.github.io/VisualSearch_MXNet/
SeRanet is super resolution software that uses deep learning to enhance low-resolution images. It introduces concepts of "split" and "splice" where the input image is divided into four branches representing different pixel regions, and these branches are fused to form the output image. This approach provides flexibility in model design compared to processing the entire image at once. SeRanet also uses a technique called "fusion" where it combines two different CNNs, one for the main task and one for an auxiliary task, to leverage their complementary representations and improve performance. Experimental results show SeRanet produces higher quality super resolution than conventional methods such as bicubic resizing, as well as other deep-learning-based methods like waifu2x.
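One reading of the "split"/"splice" idea, as a toy NumPy sketch (this illustrates the described concept only and is not SeRanet's actual code): separate the image into four branches by pixel parity, so each branch covers a different pixel region, then splice the branches back into the full grid.

```python
import numpy as np

def split(img):
    """Divide an image into four branches: even/odd row and column pixels."""
    return [img[0::2, 0::2], img[0::2, 1::2],
            img[1::2, 0::2], img[1::2, 1::2]]

def splice(branches):
    """Fuse the four branches back into a single full-resolution image."""
    h, w = branches[0].shape
    out = np.empty((2 * h, 2 * w), dtype=branches[0].dtype)
    out[0::2, 0::2], out[0::2, 1::2] = branches[0], branches[1]
    out[1::2, 0::2], out[1::2, 1::2] = branches[2], branches[3]
    return out

img = np.arange(16).reshape(4, 4)
assert np.array_equal(splice(split(img)), img)  # round-trip preserves the image
```

In a super-resolution pipeline, each branch would pass through its own network before the splice, which is the design flexibility the summary refers to.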
Hardware Acceleration for Machine Learning (CastLab, KAIST)
This document provides an overview of a lecture on hardware acceleration for machine learning. The lecture will cover deep neural network models like convolutional neural networks and recurrent neural networks. It will also discuss various hardware accelerators developed for machine learning, including those designed for mobile/edge and cloud computing environments. The instructor's background and the agenda topics are also outlined.
Deep learning techniques like convolutional neural networks (CNNs) and deep neural networks have achieved human-level performance on certain tasks. Pioneers in the field include Geoffrey Hinton, who co-invented backpropagation, Yann LeCun who developed CNNs for image recognition, and Andrew Ng who helped apply these techniques at companies like Baidu and Coursera. Deep learning is now widely used for applications such as image recognition, speech recognition, and distinguishing objects like dogs from cats, often outperforming previous machine learning methods.
This document summarizes a presentation on deep image processing and computer vision. It introduces common deep learning techniques like CNNs, autoencoders, variational autoencoders and generative adversarial networks. It then discusses applications including image classification using models like LeNet, AlexNet and VGG. It also covers face detection, segmentation, object detection algorithms like R-CNN, Fast R-CNN and Faster R-CNN. Additional topics include document automation using character recognition and graphical element analysis, as well as identity recognition using face detection. Real-world examples are provided for document processing, handwritten letter recognition and event pass verification.
Machine Learning, Deep Learning and Data Analysis Introduction (Te-Yen Liu)
The document provides an introduction and overview of machine learning, deep learning, and data analysis. It discusses key concepts like supervised and unsupervised learning. It also summarizes the speaker's experience taking online courses and studying resources to learn machine learning techniques. Examples of commonly used machine learning algorithms and neural network architectures are briefly outlined.
Machine Learning from a Software Engineer's Perspective (Marijn van Zelst)
The document provides an overview of machine learning from a software engineer's perspective. It discusses how machine learning can solve various problems, introduces TensorFlow as an open-source machine learning library, and provides an example of using TensorFlow to build a neural network model for handwritten digit classification that achieves 97.5% accuracy on MNIST data. The document emphasizes that machine learning requires examples to learn from and recommends starting simply before gradually increasing complexity when experimenting with neural networks.
Machine learning from a software engineer's perspective - Marijn van Zelst - ... (Codemotion)
Lots of software engineers seem to avoid the field of machine learning because it seems hard. In this talk I want to give developers an intuition of what machine learning is, using visual examples and no mathematical formulas. I want to show that machine learning makes things possible that cannot be achieved with traditional procedural programming. I will identify the high-level components of a supervised machine learning algorithm: vectors, feature spaces, neural networks and labels.
Digit recognizer by convolutional neural network (Ding Li)
A convolutional neural network is used to recognize handwritten digits from images. The CNN uses convolutional and max pooling layers to extract local features from the images. These local features are then fed into fully connected layers to combine them into global features used to predict the digit (0-9) in each image with a softmax output layer. The model is trained on 60,000 images and achieves 99.67% accuracy on the test set after 30 training epochs. While powerful, it is unclear if humans can fully understand the "mind" and logic of artificial neural networks.
In this webinar, participants learned how to utilize Generative AI to streamline operations and elevate member engagement. Amazon Web Service experts provided a customer specific use cases and dived into low/no-code tools that are quick and easy to deploy through Amazon Web Service (AWS.)
हिंदी वर्णमाला पीपीटी, hindi alphabet PPT presentation, hindi varnamala PPT, Hindi Varnamala pdf, हिंदी स्वर, हिंदी व्यंजन, sikhiye hindi varnmala, dr. mulla adam ali, hindi language and literature, hindi alphabet with drawing, hindi alphabet pdf, hindi varnamala for childrens, hindi language, hindi varnamala practice for kids, https://www.drmullaadamali.com
Gender and Mental Health - Counselling and Family Therapy Applications and In...PsychoTech Services
A proprietary approach developed by bringing together the best of learning theories from Psychology, design principles from the world of visualization, and pedagogical methods from over a decade of training experience, that enables you to: Learn better, faster!
2. Deep learning attracts lots of attention.
• I believe you have seen lots of exciting results before.
Deep learning trends at Google. Source: SIGMOD/Jeff Dean
3. Ups and downs of Deep Learning
• 1958: Perceptron (linear model)
• 1969: Perceptron shown to have limitations
• 1980s: Multi-layer perceptron
• Not significantly different from the DNNs of today
• 1986: Backpropagation
• Usually more than 3 hidden layers did not help
• 1989: 1 hidden layer is “good enough”, so why go deep?
• 2006: RBM initialization (breakthrough)
• 2009: GPU training
• 2011: Starts to become popular in speech recognition
• 2012: Wins the ILSVRC image competition
4. Three Steps for Deep Learning
Step 1: define a set of functions (a neural network)
Step 2: goodness of function
Step 3: pick the best function
Deep Learning is so simple ……
5. Neural Network
[Diagram: each “neuron” computes a value z from its weighted inputs and applies an activation.]
Different connections lead to different network structures.
Network parameter 𝜃: all the weights and biases in the “neurons”.
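A single “neuron” computes a weighted sum z of its inputs plus a bias, then applies an activation function. A minimal NumPy sketch, assuming a sigmoid activation and made-up weights (the lecture does not fix these choices):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, b):
    """One neuron: weighted sum z = w.x + b, then the activation."""
    z = np.dot(w, x) + b
    return sigmoid(z)

x = np.array([1.0, -2.0])   # inputs
w = np.array([0.5, 0.25])   # weights (part of the parameter theta)
b = 0.1                     # bias (also part of theta)
a = neuron(x, w, b)         # the neuron's output activation
```

The weights and biases of all neurons together form the network parameter 𝜃.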
16. Example Application
Input: a 16 x 16 = 256-pixel image, flattened into x1, x2, ……, x256 (ink → 1, no ink → 0).
Output: y1, y2, ……, y10, where each dimension represents the confidence of a digit (y1 for “is 1”, y2 for “is 2”, ……, y10 for “is 0”).
Example: y1 = 0.1, y2 = 0.7, ……, y10 = 0.2, so the image is “2”.
17. Example Application
• Handwriting Digit Recognition: the machine maps an image to “2”.
What is needed is a function with:
Input: a 256-dim vector (x1, x2, ……, x256)
Output: a 10-dim vector (y1 “is 1”, y2 “is 2”, ……, y10 “is 0”)
The neural network is that function.
18. Example Application
Input Layer: x1, x2, ……, xN
Hidden Layers: Layer 1, Layer 2, ……, Layer L
Output Layer: y1, y2, ……, y10 (“is 1”, “is 2”, ……, “is 0”), e.g. “2”
A function set containing the candidates for Handwriting Digit Recognition.
You need to decide the network structure so that a good function is in your function set.
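The 256-input, 10-output structure above can be written as a forward pass in NumPy. The single 30-unit hidden layer, sigmoid activations, and random initial weights below are illustrative assumptions, not the lecture's actual choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())   # subtract max for numerical stability
    return e / e.sum()

# theta = all the weights and biases; shapes here: 256 -> 30 -> 10.
W1, b1 = 0.1 * rng.standard_normal((30, 256)), np.zeros(30)
W2, b2 = 0.1 * rng.standard_normal((10, 30)), np.zeros(10)

def network(x):
    """Forward pass: each layer multiplies by a weight matrix, adds a
    bias, and applies an activation; the output layer uses softmax."""
    a1 = sigmoid(W1 @ x + b1)
    return softmax(W2 @ a1 + b2)

x = rng.random(256)   # a flattened 16x16 image (ink -> 1, no ink -> 0)
y = network(x)        # 10-dim vector: confidence for each digit
```

Choosing different shapes for W1 and W2 (or adding more layers) picks a different function set; training then searches that set for a good function.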
19. FAQ
• Q: How many layers? How many neurons for each layer?
• A: Trial and error + intuition.
• Q: Can the structure be automatically determined?
• E.g. Evolutionary Artificial Neural Networks
• Q: Can we design the network structure?
• E.g. Convolutional Neural Network (CNN)
20. Three Steps for Deep Learning (recap)
Step 1: define a set of functions (a neural network)
Step 2: goodness of function
Step 3: pick the best function
Deep Learning is so simple ……
21. Loss for an Example
Given a set of parameters, feed an image of “1” through the network: x1, x2, ……, x256 pass through the hidden layers and a softmax layer to produce the output y1, y2, ……, y10.
The target ŷ is the one-hot vector (1, 0, 0, ……) for the digit “1”.
The loss for the example is the cross entropy between the output y and the target ŷ:
C(y, ŷ) = − Σ_{i=1}^{10} ŷᵢ ln yᵢ
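The softmax output and the cross-entropy loss C(y, ŷ) = −Σᵢ ŷᵢ ln yᵢ can be computed directly. The three-class logits below are made-up numbers for illustration:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())        # subtract max for numerical stability
    return e / e.sum()

def cross_entropy(y, y_hat):
    """C(y, y_hat) = -sum_i y_hat_i * ln(y_i); y_hat is the one-hot target."""
    return -np.sum(y_hat * np.log(y))

z = np.array([2.0, 1.0, 0.1])      # example logits for 3 classes
y = softmax(z)                     # outputs are positive and sum to one
y_hat = np.array([1.0, 0.0, 0.0])  # one-hot target: first class
loss = cross_entropy(y, y_hat)     # reduces to -ln(y[0]) for a one-hot target
```

Because the target is one-hot, only the output for the correct class contributes to the loss; training pushes that output toward 1.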
23. Three Steps for Deep Learning (recap)
Step 1: define a set of functions (a neural network)
Step 2: goodness of function
Step 3: pick the best function
Deep Learning is so simple ……
26. Gradient Descent
This is the “learning” of machines in deep learning ……
Even AlphaGo uses this approach.
I hope you are not too disappointed :p
What people imagine …… vs. what it actually is …..
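The update rule behind this “learning” is simply w ← w − η · ∂L/∂w, repeated until the loss stops decreasing. A minimal one-parameter sketch; the quadratic loss and learning rate are made-up for illustration:

```python
# Minimize L(w) = (w - 3)^2 with gradient descent: w <- w - eta * dL/dw.
def grad(w):
    return 2.0 * (w - 3.0)   # dL/dw for the quadratic loss

w = 0.0        # initial parameter (in practice: random initialization)
eta = 0.1      # learning rate
for _ in range(100):
    w -= eta * grad(w)       # step opposite the gradient
# w converges toward the minimizer w = 3
```

Real networks apply the same rule to every weight and bias in 𝜃 at once, with the gradients supplied by backpropagation.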
27. Backpropagation
• Backpropagation: an efficient way to compute ∂L/∂w in a neural network
libdnn: developed by Po-Wei Chou (周伯威), a student at National Taiwan University
Ref: http://speech.ee.ntu.edu.tw/~tlkagk/courses/MLDS_2015_2/Lecture/DNN%20backprop.ecm.mp4/index.html
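A standard way to verify a backpropagation computation of ∂L/∂w is to compare it with a finite-difference estimate. A minimal sketch for one sigmoid neuron with squared loss; all values are illustrative assumptions:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(w, x=1.5, target=1.0):
    """Squared loss of one sigmoid neuron: (sigmoid(w*x) - target)^2."""
    return (sigmoid(w * x) - target) ** 2

def backprop_grad(w, x=1.5, target=1.0):
    """Chain rule: dL/dw = 2*(a - target) * a*(1 - a) * x, with a = sigmoid(w*x)."""
    a = sigmoid(w * x)
    return 2.0 * (a - target) * a * (1.0 - a) * x

w = 0.3
analytic = backprop_grad(w)                              # backprop-style gradient
eps = 1e-6
numeric = (loss(w + eps) - loss(w - eps)) / (2 * eps)    # finite difference
# the two estimates should agree to many decimal places
```

Backpropagation gives the same answer as the finite-difference check but in one backward pass per example, instead of two forward passes per weight.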
28. Concluding Remarks
The three steps for deep learning:
Step 1: define a set of functions (a neural network)
Step 2: goodness of function
Step 3: pick the best function
What are the benefits of deep architecture?
29. Deeper is Better?

Layer X Size   Word Error Rate (%)   Layer X Size   Word Error Rate (%)
1 X 2k         24.2
2 X 2k         20.4
3 X 2k         18.4
4 X 2k         17.8
5 X 2k         17.2                  1 X 3772       22.5
7 X 2k         17.1                  1 X 4634       22.6
                                     1 X 16k        22.1

Seide, Frank, Gang Li, and Dong Yu. "Conversational Speech Transcription Using Context-Dependent Deep Neural Networks." Interspeech. 2011.
Not surprising: more parameters, better performance.
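One way to make the “more parameters” comparison concrete is to count the weights and biases of a fully-connected network. The helper below is a generic sketch; the 256-dim input and 10-dim output are placeholder assumptions, not the paper's actual acoustic feature dimensions:

```python
def num_params(layer_sizes):
    """Total weights + biases for a fully-connected net.

    layer_sizes = [input_dim, hidden_1, ..., hidden_k, output_dim].
    Each layer contributes fan_in * fan_out weights plus fan_out biases.
    """
    return sum(a * b + b for a, b in zip(layer_sizes, layer_sizes[1:]))

deep = num_params([256] + [2000] * 7 + [10])   # 7 X 2k hidden layers
fat  = num_params([256, 16000, 10])            # 1 X 16k hidden layer
```

With these placeholder dimensions the deep net has far more parameters than the single fat layer, which is why the interesting comparison in the table is between networks at similar budgets, not just bigger ones.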
30. Universality Theorem
Any continuous function f : Rᴺ → Rᴹ can be realized by a network with one hidden layer (given enough hidden neurons).
Reference for the reason: http://neuralnetworksanddeeplearning.com/chap4.html
Why a “deep” neural network and not a “fat” neural network? (next lecture)
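As a tiny concrete instance of the theorem, a single hidden layer of two ReLU neurons exactly realizes f(x) = |x|, since |x| = relu(x) + relu(−x). This is a hand-constructed sketch; the activation choice is an assumption for convenience (the theorem is usually stated for sigmoid-type activations):

```python
def relu(z):
    return max(z, 0.0)

def f(x):
    """One hidden layer, two neurons: |x| = relu(x) + relu(-x)."""
    h1 = relu(1.0 * x)           # hidden neuron 1: input weight +1
    h2 = relu(-1.0 * x)          # hidden neuron 2: input weight -1
    return 1.0 * h1 + 1.0 * h2   # output weights both +1, bias 0
```

For general continuous functions the construction is approximate rather than exact, and the required hidden-layer width can grow very quickly, which is one motivation for going deep instead of fat.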
31. “深度學習深度學習” (“Learn Deep Learning Deeply”)
• My course: Machine Learning and Having It Deep and Structured
• http://speech.ee.ntu.edu.tw/~tlkagk/courses_MLSD15_2.html
• 6-hour version: http://www.slideshare.net/tw_dsconf/ss-62245351
• “Neural Networks and Deep Learning”
• written by Michael Nielsen
• http://neuralnetworksanddeeplearning.com/
• “Deep Learning”
• written by Ian Goodfellow, Yoshua Bengio and Aaron Courville
• http://www.deeplearningbook.org
Editor's Notes
The Google DeepMind team will use machine learning techniques to analyze anonymized retinal scan image data.
Ming-Kai Tsai: "If Taiwan wants to pursue AI, add two more zeros."
They were popularised by Frank Rosenblatt in the early 1960’s.
During the 1950s and ’60s, neural networks were in vogue among computer scientists. In 1958, Cornell research psychologist Frank Rosenblatt, in a Navy-backed project, built a prototype neural net, which he called the Perceptron, at a lab in Buffalo. It used a punch-card computer that filled an entire room. After 50 trials it learned to distinguish between cards marked on the left and cards marked on the right. Reporting on the event, the New York Times wrote, “The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”
In 1969, MIT's Marvin Minsky and Seymour Papert published a book called “Perceptrons” that analysed what perceptrons could do and showed their limitations.
The backpropagation algorithm was originally introduced in the 1970s, but its importance wasn't fully appreciated until a famous 1986 paper by David Rumelhart, Geoffrey Hinton, and Ronald Williams.
1989: http://deeplearning.cs.cmu.edu/notes/Sonia_Hornik.pdf
Speech: begin 2009
2012: Times
For example, if we modify “1” to “2”, then we have another function
You can connect the neurons by other ways you like
How many layer is deep?
CNN is just another way to connect the neurons.
You can always connect the neurons in your own way.
“+” is ignored
Each dimension corresponds to a digit (10 dimensions are needed)
169 layers
Make sure you know how to do it
author: Adam Coates, Baidu, Inc.
Deep Learning (hopefully faster)
http://videolectures.net/deeplearning2015_coates_deep_learning/
Draw it?
The same for even more complex tasks.
The same approach for other cases
https://www.youtube.com/watch?v=XWTfgehRxzU
You can never find this in the textbook!
With softmax, the summation of all the outputs would be one.
Can be considered as probability if you want ……
Randomly picked one
The two approaches update the parameters in the same direction, but stochastic gradient descent is faster!
Better!