This is the slide deck from a presentation that my colleague Uwe Friedrichsen (https://www.slideshare.net/ufried/) and I gave together. Since we each created our parts of the presentation on our own, it is quite easy to figure out who did which part: the two halves look quite different ... :)
For the sake of simplicity and completeness, Uwe merged the two slide decks. As he did the "surrounding" part, he inserted my slides at the point where I took over and then added the concluding slides at the end. I'm sure you will figure it out easily ... ;)
The presentation is intended as an introduction to deep learning (DL) for people who are new to the topic. It starts with some DL success stories as motivation. Then a quick classification and a bit of history follow before the "how" part starts.
The first part of the "how" covers some DL theory, to demystify the topic and explain and connect some of the most important terms on the one hand, but also to give an idea of the breadth of the field on the other hand.
The second part then dives deeper into the question of how to actually implement DL networks. It starts with coding everything on your own and then moves step by step toward approaches that require less coding, depending on where you want to start.
The presentation ends with some pitfalls and challenges you should keep in mind if you want to dive deeper into DL, plus the invitation to become part of it.
As always, the voice track of the presentation is missing. I hope the slides are of some use to you nevertheless.
18. Traditional AI
Focus on problems that are ...
• ... hard for humans
• ... straightforward for computers
• ... can be formally described
Deep Learning
Focus on problems that are ...
• ... intuitive for humans
• ... difficult for computers
(hard to describe formally)
• ... best learnt from experience
20. General evolution
• Two opposing forces
• Recreation of biological neural processing
• Abstract mathematical models (mostly linear algebra)
• Results in different models and algorithms
• No clear winner yet
21. Cybernetics (ca. 1940 - 1960)
• ADALINE, Perceptron
• Linear models, typically no hidden layers
• Stochastic Gradient Descent (SGD)
• Limited applicability
• E.g., ADALINE could not learn XOR
• Resulted in “First winter of ANN” (Artificial Neural Networks)
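The slides don't include code, so here is a small sketch (my own, not from the deck) of the limitation mentioned above: a Perceptron, like ADALINE a linear model, can never classify all four XOR cases correctly, because no straight line separates the two classes. However long we train, accuracy stays at 3/4 or below.

```python
import numpy as np

# The four XOR cases: output is 1 iff exactly one input is 1.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

w = np.zeros(2)
b = 0.0
for _ in range(1000):                      # Perceptron learning rule
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        w += (yi - pred) * xi              # update only on mistakes
        b += (yi - pred)

# No linear decision boundary gets more than 3 of the 4 points right.
acc = np.mean([(1 if xi @ w + b > 0 else 0) == yi for xi, yi in zip(X, y)])
```

The training loop, epoch count, and update rule are the standard Perceptron algorithm; the failure is a property of the data, not of the hyperparameters.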
22. Connectionism (ca. 1980 - 1990)
• Neocognitron
• Non-linear models, distributed feature representation
• Backpropagation
• Typically 1, rarely more hidden layers
• First approaches to sequence modeling
• LSTM (Long short-term memory) in 1997
• Unrealistic expectations nurtured by ventures
• Resulted in “Second winter of ANN”
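As an illustration (again my own, not from the deck): a single non-linear hidden layer trained with backpropagation is enough to learn XOR, the very task the linear models of the cybernetics era could not solve. The layer width, learning rate, and iteration count below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # one hidden layer, 8 units
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 0.5
losses = []
for _ in range(5000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((p - y) ** 2)))
    # backward pass: gradients of the mean-squared-error loss
    grad_z2 = 2 * (p - y) / len(X) * p * (1 - p)
    dW2 = h.T @ grad_z2; db2 = grad_z2.sum(0)
    grad_z1 = grad_z2 @ W2.T * (1 - h ** 2)       # tanh derivative
    dW1 = X.T @ grad_z1; db1 = grad_z1.sum(0)
    # gradient descent step
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

With this setup the loss typically shrinks toward zero, i.e. the hidden layer carves out the non-linear decision boundary that a single linear layer cannot represent.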
23. Deep Learning (ca. 2006 -)
• Improved algorithms, advanced computing power
• Enabled training much larger and deeper networks
• Enabled training on much larger data sets
• Typically several to many hidden layers
• Overcame the “feature extraction dilemma”
25. Deep Learning application areas
• Classification (incl. missing inputs)
• Regression (value prediction)
• Function prediction
• Density estimation
• Structured output (e.g., translation)
• Anomaly detection
• Synthesis and sampling
• Denoising
• Compression (dimension reduction)
• ...
26. How does Deep Learning work?
A first (scientifically inspired) approach
“A computer program is said to learn
• from experience E
• with respect to some class of tasks T
• and performance measure P
if its performance at tasks in T,
as measured by P,
improves with experience E.”
-- T. Mitchell, Machine Learning, p. 2, McGraw Hill (1997)
• Experience E: supervised learning, unsupervised learning, reinforcement learning, ...
• Tasks T: too difficult to solve with fixed programs designed by humans
• Performance measure P: accuracy vs. error rate, training vs. test set, ...
36. Neuron
• Design inspired by biological neurons
• One or more inputs
• Processing (and state storage) unit
• One or more outputs
• In practice often implemented as tensor transformations
• Relevance of internal state depends on network type
• Usually negligible for feed-forward networks
• Usually relevant for recurrent networks
[Diagram: neuron with input(s), a processing (+ state) unit, and output(s)]
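The bullets above can be sketched as a tiny tensor transformation, e.g. with numpy. All names and numbers here are illustrative, not part of the slides:

```python
import numpy as np

def neuron(inputs, weights, bias, activation=np.tanh):
    """A single neuron: weighted sum of its inputs plus a bias,
    passed through an activation function."""
    return activation(np.dot(weights, inputs) + bias)

# Three inputs, one output value
out = neuron(np.array([0.5, -1.0, 2.0]),
             np.array([0.1, 0.2, 0.3]),
             bias=0.0)
```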
38. Layer
• Neurons typically organized in layers
• Input and output layer as default
• Optionally one or more hidden layer
• Layer layout can have 1-n dimensions
• Neurons in different layers can have different properties
• Different layers responsible for different (sub-)tasks
[Diagram: input layer, hidden layer(s) 1, 2, ..., N, output layer]
40. Connection
• Usually connect input and output tensors in a 1:1 manner
• Connect neurons between layers (output of layer N-1 → input of layer N)
• Layers can be fully or partially (sparsely) connected
• RNNs also have backward and/or self connections
• Some networks have connections between neurons
of the same layer (e.g., Hopfield nets, Boltzmann machines)
42. Weight
• (Logically) augments a connection
• Used to amplify or dampen a signal sent over a connection
• The actual “memory” of the network
• The “right” values of the weights are learned during training
• Can also be used to introduce a bias for a neuron
• By connecting it to an extra neuron that constantly emits 1
44. Forward pass
[Diagram: input tensor(s) flow through the network to output tensor(s)]
Step 1
• For each neuron of input layer
• Copy resp. input tensor’s value to neuron’s input
• Calculate state/output using activation function
(typically linear function, passing value through)
Step 2-N
• For each hidden layer and output layer in their order
• For each neuron of the layer
• Calculate weighted sum on inputs
• Calculate state/output using activation function
(see examples later)
Final step
• For each neuron of output layer
• Copy neuron’s output to resp. output tensor’s value
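The steps above can be sketched in a few lines of numpy; the network layout and weights below are made up for illustration:

```python
import numpy as np

def forward(x, layers, activation=np.tanh):
    """Forward pass: step 1 copies the input tensor, steps 2-N compute
    weighted sums and activations layer by layer, the final step yields
    the output tensor. `layers` is a list of (weights, bias) pairs."""
    a = x  # Step 1: input layer passes the values through
    for W, b in layers:
        a = activation(W @ a + b)  # Steps 2-N: weighted sum + activation
    return a  # Final step: the output tensor

# Tiny net: 3 inputs -> 4 hidden units -> 2 outputs (random weights)
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 3)), np.zeros(4)),
          (rng.normal(size=(2, 4)), np.zeros(2))]
y = forward(np.array([1.0, 0.5, -0.5]), layers)
```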
45. Forward pass (continued)
[Diagram: steps 1, 2-N, and final step as on the previous slide]
• Default update procedure (most widespread)
• All neurons of a layer updated in parallel
• Different update procedures exist
• E.g., some Hopfield net implementations
randomly pick neurons for update
53. Hyperparameter
• Influence network and algorithm behavior
• Often influence model capacity
• Not learned, but usually manually optimized
• Considerable current research interest in
automatic hyperparameter optimization
Examples
• Number of hidden layers
• Number of hidden units
• Learning rate
• Number of clusters
• Weight decay coefficient
• Convolution kernel width
• ...
57. Cost function (a.k.a. loss function)
• Determines distance from optimal performance
• Mean squared error as simple (and widespread) example
58. Cost function (a.k.a. loss function)
• Determines distance from optimal performance
• Mean squared error as simple (and widespread) example
• Often augmented with regularization term
for better generalization (see challenges)
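The mean squared error the slides refer to can be written as follows; the regularized cost J in the second line reflects the "augmented with regularization term" bullet (λ is the weight decay coefficient from the hyperparameter examples):

```latex
\mathrm{MSE} \;=\; \frac{1}{m}\sum_{i=1}^{m}\bigl\lVert \hat{y}^{(i)} - y^{(i)} \bigr\rVert_2^2
\qquad
J(\theta) \;=\; \mathrm{MSE} + \lambda\,\Omega(\theta),
\quad \text{e.g. } \Omega(\theta) = \lVert w \rVert_2^2
```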
61. Stochastic gradient descent
• Direct calculation of minimum often not feasible
• Instead stepwise “descent” using the gradient
→ Gradient descent
62. Stochastic gradient descent
• Direct calculation of minimum often not feasible
• Instead stepwise “descent” using the gradient
→ Gradient descent
• Not feasible for large training sets
• Use (small) random sample of training set per iteration
→ Stochastic gradient descent (SGD)
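A minimal SGD sketch on a linear regression problem, assuming mean squared error as the cost function; data, learning rate, and batch size are illustrative:

```python
import numpy as np

# Each step uses a small random minibatch instead of the full
# training set -- that is the "stochastic" in SGD.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=1000)

w = np.zeros(3)
lr, batch_size = 0.1, 32                         # hyperparameters
for step in range(500):
    idx = rng.integers(0, len(X), batch_size)    # random minibatch
    Xb, yb = X[idx], y[idx]
    grad = 2 * Xb.T @ (Xb @ w - yb) / batch_size # gradient of MSE
    w -= lr * grad                               # descent step
```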
66. Backpropagation
• Procedure to calculate new weights based on loss function
[Formula: weight update via the chain rule; its factors depend on the cost function, the activation function, and the input calculation]
67. Backpropagation
• Procedure to calculate new weights based on loss function
• Usually “back-propagated” layer-wise
• Most widespread optimization procedure
[Formula: weight update via the chain rule; its factors depend on the cost function, the activation function, and the input calculation]
70. Data set
• Consists of examples (a.k.a. data points)
• Example always contains input tensor
• Sometimes also contains expected output tensor
(depending on training type)
• Data set usually split into several parts
• Training set – optimize accuracy (always used)
• Test set – test generalization (often used)
• Validation set – tune hyperparameters (sometimes used)
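A common way to produce such a split is to shuffle the examples first; the 80/10/10 ratio below is an arbitrary but typical choice, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(3)
examples = np.arange(100)          # stand-in for 100 data points
rng.shuffle(examples)              # shuffle before splitting
train, val, test = examples[:80], examples[80:90], examples[90:]
```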
73. Supervised learning
• Typically learns from a large, yet finite set of examples
• Examples consist of input and output tensor
• Output tensor describes desired output
• Output tensor also called label or target
• Typical application areas
• Classification
• Regression and function prediction
• Structured output problems
75. Unsupervised learning
• Typically learns from a large, yet finite set of examples
• Examples consist of input tensor only
• Learning algorithm tries to learn useful properties of the data
• Requires different types of cost functions
• Typical application areas
• Clustering, density estimations
• Denoising, compression (dimension reduction)
• Synthesis and sampling
77. Reinforcement learning
• Continuously optimizes interaction with an environment
based on rewards
[Diagram: agent ↔ environment loop: the agent sends action t; the environment returns state t+1 and reward t+1]
78. Reinforcement learning
• Continuously optimizes interaction with an environment
based on rewards
• Goal is selection of action with highest expected reward
• Takes (discounted) expected future rewards into account
• Labeling of examples replaced by reward function
• Can continuously learn → data set can be infinite
• Typically used to solve complex tasks in (increasingly)
complex environments with (very) limited feedback
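A minimal sketch of reward-based learning is tabular Q-learning; the toy corridor environment and all parameters below are invented for illustration and stand in for the much more complex environments mentioned above:

```python
import numpy as np

# An agent walks a 1-D corridor of 5 states; only the rightmost state
# yields reward 1. Q[s, a] estimates the (discounted) expected reward.
rng = np.random.default_rng(4)
n_states, n_actions = 5, 2         # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != n_states - 1:
        # pick the action with highest expected reward (or explore)
        a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s]))
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # update toward reward plus discounted expected future reward
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
```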
81. Underfitting and Overfitting
• Training error describes how well the training data is learnt
• Test error is an indicator for generalization capability
• Core challenge for all machine learning type algorithms
1. Make training error small
2. Make gap between training and test error small
• Underfitting is the violation of #1
• Overfitting is the violation of #2
83. Underfitting and Overfitting
• Under- and overfitting influenced by model capacity
• Too low capacity usually leads to underfitting
• Too high capacity usually leads to overfitting
• Finding the right capacity is a challenge
85. Regularization
• Regularization is a modification applied to the learning algorithm
• to reduce the generalization error
• but not the training error
• Weight decay is a typical regularization measure
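Weight decay adds an L2 penalty to the cost, which shows up as an extra shrinkage term in every gradient step; the numbers below are toy values for illustration:

```python
import numpy as np

w = np.array([1.0, -2.0, 3.0])
grad_loss = np.zeros(3)            # pretend the data gradient is zero
lr, weight_decay = 0.1, 0.01       # weight decay coefficient (hyperparameter)

for _ in range(10):
    # gradient of cost + weight_decay * ||w||^2 adds 2*weight_decay*w,
    # so the weights shrink a little at every step
    w -= lr * (grad_loss + 2 * weight_decay * w)
```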
87. Transfer learning
• How to transfer insights between related tasks
• E.g., is it possible to transfer knowledge gained while training
to recognize cars to the problem of recognizing trucks?
• General machine learning problem
• Subject of many research activities
92. Convolutional neural network (CNN)
• Special type of MLP for image processing
• Connects each convolutional neuron only to its receptive field
• Advantages
• Less computing
power required
• Often even better
recognition rates
• Inspired by organization of visual cortex
Image source: https://deeplearning4j.org
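The receptive-field idea can be sketched as a plain 2-D convolution; the tiny image and edge kernel are invented for illustration:

```python
import numpy as np

def conv2d(image, kernel):
    """Each output value depends only on a small receptive field of the
    input, and the same kernel weights are reused at every position
    (weight sharing) -- this is why CNNs need less computing power."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

edge_kernel = np.array([[1.0, -1.0]])          # responds to vertical edges
image = np.array([[0.0, 0.0, 1.0, 1.0]] * 3)   # dark left, bright right
features = conv2d(image, edge_kernel)
```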
94. Recurrent neural network (RNN)
• Implements internal feedback loops
• Provides a temporal memory
• Typically used for
• Speech recognition
• Text recognition
• Time series processing
Image source: https://deeplearning4j.org
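The internal feedback loop can be sketched as a single recurrent step applied over a sequence; the sizes, seed, and scaling are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
Wx = rng.normal(size=(4, 3)) * 0.5    # input -> hidden
Wh = rng.normal(size=(4, 4)) * 0.5    # hidden -> hidden (feedback loop)

def run(sequence):
    """Process a sequence step by step; the hidden state h is fed back
    at every step and acts as a temporal memory."""
    h = np.zeros(4)                   # initial state: empty memory
    for x in sequence:
        h = np.tanh(Wx @ x + Wh @ h)  # new state mixes input and memory
    return h

seq = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
h_final = run(seq)
```

Because the state carries history, feeding the same inputs in a different order yields a different final state.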
96. Long short-term memory (LSTM)
• Special type of RNN
• Uses special LSTM units
• Can implement very long-term memory
while avoiding the vanishing/exploding
gradient problem
• Same application areas as RNN
Image source: https://deeplearning4j.org
98. Autoencoder
• Special type of MLP
• Reproduces input at output layer
• Consists of encoder and decoder
• Usually configured undercomplete
• Learns efficient feature codings
• Dimension reduction (incl. compression)
• Denoising
• Usually needs pre-training to avoid merely
reconstructing the average of the training set
Image source: https://deeplearning4j.org
100. Generative adversarial networks (GAN)
• Consists of two (adversarial) networks
• Generator creating fake images
• Discriminator trying to identify
fake images
• Typically used for
• Synthesis and sampling
(e.g., textures in games)
• Structured output with variance (e.g., variations of a design or voice generation)
• Probably best known for creating fake celebrity images
Image source: https://deeplearning4j.org
133. Issues you might face
• Very fast moving research domain
• You need the math. Really!
• How much data do you have?
• GDPR: Can you explain the decision of your network?
• Meta-Learning as the next step
• Monopolization of research and knowledge
135. Wrap-up
• Broad, diverse topic
• Very good library support and more
• Very active research topic
• No free lunch
• You need the math!
→ Exciting and important topic – become a part of it!
136. References
• I. Goodfellow, Y. Bengio, A. Courville, "Deep Learning",
MIT press, 2016, also available via https://www.deeplearningbook.org
• C. Perez, “The Deep Learning AI Playbook”,
Intuition Machine Inc., 2017
• F. Chollet, "Deep Learning with Python",
Manning Publications, 2017
• OpenAI, https://openai.com
• Keras, https://keras.io
• Deep Learning for Java, https://deeplearning4j.org/index.html
• Deep Learning (Resource site), http://deeplearning.net