Presentation by Tommy Löfstedt, Associate Professor at Umeå University (Sweden), at the FogGuru Workshop on linking with other disciplines in October 2019.
Introduction to Machine Learning
1. Presentation Introduction Machine Learning Deep Learning
Introduction to Machine Learning
Machine Learning and Deep Learning
Tommy Löfstedt
Umeå University, Umeå, Sweden
tommy.lofstedt@umu.se
October 22, 2019
Tommy Löfstedt — Introduction to Machine Learning 1/62
Introduction
Decomposition of the AI field
Deep learning ⊂ Representation learning ⊂ Machine learning ⊂ AI
Introduction
Artificial Intelligence
Fundamental idea:
An intelligent agent (the brain)
Perceives inputs from the environment
Maps inputs to actions (the body)
Artificial intelligence (AI) is a huge field
Logic
Probability
Reasoning
Planning
Action
Philosophy
Perception
Learning
Introduction
Machine learning
“Machine learning is the subfield of computer science that gives computers the ability to learn without being explicitly programmed.”
— Arthur Samuel, 1959
Introduction
Machine learning
Definition
A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.
— Mitchell (1997). “Machine Learning”.
Introduction
Machine learning
Traditional computer programming:
Inputs + Program → Computer → Outputs
Machine learning:
Inputs + Outputs → Computer → Program
Introduction
Machine learning
Take-home idea: Learn from data
Use case: When manually programming something is not feasible
Autonomous vehicles
Speech recognition
Natural language processing
Computer vision
etc.
How then? Use some data to achieve some maximal score on some task!
Introduction
Machine learning
Example:
T: Classify a tumour as malignant or benign
P: Fraction of correctly classified tumours (accuracy)
E: Set of training data and ground truth outputs
Machine Learning
Two main types of learning
Supervised learning
Labelled/annotated data
Example input-output pairs, {(x_i, y_i)}, i = 1, …, n
Learn to predict output from input, H ∋ h ≈ f : X → Y
Unsupervised learning
No labels
Only “input” data, {x_i}, i = 1, …, n
Find underlying latent structure in the input data
Machine Learning
Two (four) main types of learning
Supervised
Regression
Classification
Unsupervised
Clustering
Dimensionality reduction
(And many more . . . )
Machine Learning
What is required?
Define a hypothesis space, H
Formulate a loss function (cost, error, objective, risk, etc.) to evaluate choices:
ℓ : P → ℝ
Collect data
Input: independent variable, feature, covariate, predictor, factor, etc.
Output: dependent variable, outcome, target, etc.
Minimise the loss function over the hypothesis space using the data
Machine Learning
Example: Regression
Relationship between input and output: f : X → Y
In most cases Y = R
Linear regression has
H ∋ h(x; β) = Σ_{j=1}^{p} x_j β_j + β_0
Linear regression uses the mean squared error loss:
ℓ(β) = (1/n) Σ_{i=1}^{n} (y_i − h(x_i; β))²
We thus attempt to
minimise_{β ∈ ℝ^p, β_0 ∈ ℝ} ℓ(β)
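This minimisation has a closed-form least-squares solution. A minimal NumPy sketch, not from the slides, with invented toy data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data generated from y = 2*x1 - 3*x2 + 1 + small noise
X = rng.normal(size=(100, 2))
y = 2 * X[:, 0] - 3 * X[:, 1] + 1 + 0.01 * rng.normal(size=100)

# Append a column of ones so the intercept beta_0 is estimated jointly
Xa = np.hstack([X, np.ones((100, 1))])

# Least squares minimises (1/n) * sum_i (y_i - h(x_i; beta))^2
beta, *_ = np.linalg.lstsq(Xa, y, rcond=None)
print(beta)  # approximately [2, -3, 1]
```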
(Figure: a linear regression model fitted to data)
Machine Learning
Example: Classification
Relationship between input and output: f : X → C
In most cases C is a finite set of categories
Example: Logistic regression
Two classes: C = {0, 1}
Denote: p_i = P(y_i = 1)
Assume: log(p_i / (1 − p_i)) = Σ_{j=1}^{p} x_{ij} β_j + β_0
Machine Learning
Example: Classification
Straight-forward algebra gives:
P(y_i = 1) = p_i = 1 / (1 + exp(−(Σ_{j=1}^{p} x_{ij} β_j + β_0))) =: σ(Σ_{j=1}^{p} x_{ij} β_j + β_0)
(Figure: the logistic sigmoid function, σ(logits) plotted against the logits)
Machine Learning
Example: Classification
Family of hypotheses:
H ∋ h(x_i; β) = σ(Σ_{j=1}^{p} x_{ij} β_j + β_0)
Logistic regression uses the binary cross-entropy loss:
ℓ(β) = −(1/n) Σ_{i=1}^{n} [y_i log(h(x_i; β)) + (1 − y_i) log(1 − h(x_i; β))]
We again seek to
minimise_{β ∈ ℝ^p, β_0 ∈ ℝ} ℓ(β)
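A minimal sketch, not from the slides, of minimising this binary cross-entropy loss by plain gradient descent on invented toy data (the gradient Xᵀ(σ(Xβ) − y)/n follows from the loss above):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-class data: the label is the sign of x1 + x2
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Intercept column appended; beta holds [beta_1, beta_2, beta_0]
Xa = np.hstack([X, np.ones((200, 1))])
beta = np.zeros(3)

# Gradient descent on the binary cross-entropy loss
for _ in range(2000):
    p = sigmoid(Xa @ beta)          # h(x_i; beta)
    grad = Xa.T @ (p - y) / len(y)  # gradient of the loss
    beta -= 1.0 * grad

preds = (sigmoid(Xa @ beta) > 0.5).astype(float)
acc = np.mean(preds == y)  # training accuracy
print(acc)
```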
(Figure: a classification example)
Machine Learning
Example: Clustering
Relationships amongst the inputs: f : X → C
In most cases C is a finite set of cluster indices
Input: objects with associated distance function
Output: a cluster, or likelihood to belong to each cluster
Example: K-means clustering (Lloyd’s algorithm)
Assume K clusters: C = {1, …, K}
The clusters have means µ_k for k = 1, …, K
Let h(x; µ_1, …, µ_K) = arg min_{k ∈ C} ‖x − µ_k‖₂
Machine Learning
Example: Clustering
Family of hypotheses:
H ∋ h(x; µ_1, …, µ_K) = arg min_{k ∈ C} ‖x − µ_k‖₂
K-means clustering minimises the within-cluster sum of squares:
ℓ(µ_1, …, µ_K, C_1, …, C_K) = Σ_{k=1}^{K} Σ_{x ∈ C_k} ‖x − µ_k‖₂²
We seek to
minimise_{µ_1, …, µ_K ∈ X} ℓ(µ_1, …, µ_K, C_1, …, C_K)
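Lloyd's algorithm alternates the two steps implied by this objective: assign each point to its nearest mean, then recompute each mean. A small NumPy sketch on invented two-blob data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two well-separated blobs, centred at 0 and 5
X = np.vstack([rng.normal(0.0, 0.5, size=(50, 2)),
               rng.normal(5.0, 0.5, size=(50, 2))])

K = 2
mu = X[rng.choice(len(X), K, replace=False)]  # initial means: random data points

for _ in range(20):
    # Assignment step: h(x) = argmin_k ||x - mu_k||_2
    d = np.linalg.norm(X[:, None, :] - mu[None, :, :], axis=2)
    c = d.argmin(axis=1)
    # Update step: recompute each cluster mean (keep the old mean if a cluster empties)
    mu = np.array([X[c == k].mean(axis=0) if np.any(c == k) else mu[k]
                   for k in range(K)])

print(mu)  # one mean near (0, 0), one near (5, 5)
```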
(Figure: a clustering example)
Machine Learning
Example: Dimensionality Reduction
Reducing the input space: f : X → X̂, with X̂ ⊂ X
Feature selection
Feature extraction
Example: Principal Component Analysis
Assumption: The data are well-approximated by a d-dimensional linear subspace, d ≪ p
Machine Learning
Example: Dimensionality Reduction
Family of hypotheses: H ∋ h(x; P) = Pᵀ x
Principal component analysis maximises variance:
ℓ(P) = −Var(h(x; P))
We seek to
minimise_P ℓ(P)
subject to p_iᵀ p_j = 0, i ≠ j
‖p_i‖₂² = 1, ∀ i = 1, …, d
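Under these constraints, the optimal P holds the leading eigenvectors of the data covariance matrix. A small sketch with invented toy data and d = 1:

```python
import numpy as np

rng = np.random.default_rng(3)

# 3-D data that mostly varies along the direction (3, 1, 0.5)
t = rng.normal(size=(200, 1))
X = t @ np.array([[3.0, 1.0, 0.5]]) + 0.1 * rng.normal(size=(200, 3))

# Centre the data, then take eigenvectors of the covariance matrix
Xc = X - X.mean(axis=0)
C = Xc.T @ Xc / (len(X) - 1)
w, V = np.linalg.eigh(C)   # eigenvalues in ascending order
P = V[:, ::-1][:, :1]      # d = 1 leading direction

# h(x; P) = P^T x: project onto the learned subspace
Z = Xc @ P
print(P.ravel())
```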
(Figure: a dimensionality-reduction example)
Machine Learning
What have we learned?
We have some problem that we cannot solve manually
We have some data ({(x_i, y_i)})
Select a machine learning method (H)
Select a loss function (ℓ)
Minimise the loss over the hypothesis space
Ok, easy! Are we done?
Machine Learning
The Variance-Bias Trade-Off and Model Selection
Example: Polynomial regression
Machine Learning
The Variance-Bias Trade-Off and Model Selection
Problem: We have
P(|ℓ(h) − E_X[ℓ(h)]| ≥ ε) ≤ 2D e^{−2ε²n}, for D ∝ p
What can we do?
Collect more data (increase n)
Ask less of your model (increase ε)
Reduce the input space (dim. reduction, decrease p)
Constrain/regularise the hypothesis space (decrease D)
Machine Learning
The Variance-Bias Trade-Off and Model Selection
Constrain/regularise the hypothesis space
minimise_{β ∈ P} ℓ(β) subject to ϕ(β) ≤ C
⟺
minimise_{β ∈ P} ℓ(β) + λ ϕ(β)
Machine Learning
The Variance-Bias Trade-Off and Model Selection
Example: Polynomial ridge regression
minimise_{β ∈ ℝ^p} (1/n) ‖y − Xβ‖₂² + λ ‖β‖₂²
where X_i = [1, x_i, x_i², …, x_i^{p−1}].
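Setting the gradient of this objective to zero gives the closed-form ridge solution (XᵀX + nλI)β = Xᵀy. A sketch, not from the slides, on an invented toy signal:

```python
import numpy as np

rng = np.random.default_rng(4)

# Noisy samples of a smooth function
x = rng.uniform(-1, 1, size=40)
y = np.sin(np.pi * x) + 0.1 * rng.normal(size=40)

# Polynomial design matrix: X_i = [1, x_i, x_i^2, ..., x_i^{p-1}]
p = 10
X = np.vander(x, p, increasing=True)

lam = 1e-3
# Ridge solution of (1/n)||y - X beta||^2 + lam ||beta||^2:
# (X^T X + n*lam*I) beta = X^T y
beta = np.linalg.solve(X.T @ X + len(x) * lam * np.eye(p), X.T @ y)

mse = np.mean((y - X @ beta) ** 2)
print(mse)  # close to the noise level
```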
(Figure: a polynomial ridge regression example)
Short break?
Next Up: Deep Learning
Deep Learning
History
Wang and Raj (2017)
Deep Learning
History
1940–1970: Cybernetics
Biological learning
Mimicking the brain
The perceptron (1 neuron)
1980–1995: Connectionism
Back-propagation algorithm
Up to a couple of hidden layers
2006–Now: Deep learning
Pre-training using restricted Boltzmann machines
Deep belief networks
Tens or hundreds (thousands!) of hidden layers
Deep Learning
Introduction
Traditionally, a human provided the features (≠ data)
Machine learning method maximised the score on the task
Now, “the machine” discovers the mapping from data to features, and the mapping from features to output
End-to-end system:
Traditional: Data → Feature Extractor → Machine Learning → Outputs
Deep learning: Data → Feature Learning → Machine Learning → Outputs
Deep Learning
What is it?
Basic idea:
Features are learned from raw data
Features are learned from features
A hierarchy of features
Lower layers in the hierarchy contain “simpler” features
Higher layers in the hierarchy contain “complex” features
Lines and blobs in images combine to form objects
Deep Learning
Neural Networks: Inspiration from Neuroscience
Original idea: Recreate the building blocks of the brain
Your brain:
10¹¹ neurons
10¹⁴ synapses
10¹⁷ ops./sec.
Dendrites receive information
Soma “processes” the information
Action potential if enough signal from other neurons
Sends processed information through axon
Deep Learning
An Artificial Neural Network
Fundamental idea:
“Artificial neurons” receive information
Sum the inputs
Send the inputs through an “activation function”, φ
Output the processed information
y = φ(β_0 + Σ_{j=1}^{p} x_j β_j)
Deep Learning
An Artificial Neural Network
y_1 = φ(X W_0 + b_0)
y_{i+1} = φ(y_i W_i + b_i), i = 1, 2, …, L − 1
ŷ = ϕ(y_L W_L + b_L)
ŷ = ϕ(φ(⋯(φ(φ(X W_0 + b_0) W_1 + b_1)⋯) W_{L−1} + b_{L−1}) W_L + b_L)
Deep Learning
Example: A Shallow Artificial Neural Network for Regression
One hidden layer: y_1 = φ(x W_1 + b_1)
ŷ = ϕ(y_1 W_2 + b_2)
Rectified linear unit: Let φ(x) = max(0, x)
“Linear” output activation: Let ϕ(x) = x
Family of hypotheses:
H ∋ h(x; W_1, b_1, W_2, b_2) = ϕ(φ(x W_1 + b_1) W_2 + b_2)
We then minimise the mean squared error loss:
ℓ(W_1, b_1, W_2, b_2) = (1/n) Σ_{i=1}^{n} (y_i − h(x_i; W_1, b_1, W_2, b_2))²
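This hypothesis and loss translate almost line by line into code. A forward-pass sketch with randomly initialised, untrained weights; all names and sizes are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

def phi(t):                  # hidden activation: rectified linear unit
    return np.maximum(0.0, t)

def h(x, W1, b1, W2, b2):    # one hidden layer, "linear" output activation
    return phi(x @ W1 + b1) @ W2 + b2

# Tiny network: 2 inputs, 8 hidden units, 1 output
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

X = rng.normal(size=(5, 2))
y = X.sum(axis=1, keepdims=True)

# Mean squared error of the (untrained) network on this mini-sample
loss = np.mean((y - h(X, W1, b1, W2, b2)) ** 2)
print(loss)
```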
Deep Learning
Example: Artificial Neural Networks for Classification
What’s really going on here?
X → X W_1 → φ(X W_1) → φ(X W_1) W_2 → φ(φ(X W_1) W_2)
Deep Learning
Stochastic Gradient Descent—The power-house of deep learning
Intuition:
Compute the network output
Determine how each parameter affected the error
Update each parameter such that the error is reduced
Repeat
Deep Learning
Stochastic Gradient Descent—The power-house of deep learning
β^{(k+1)} = β^{(k)} − α^{(k)} ∇f(β^{(k)})
Gradient, ∇f, computed by back-propagation
Expensive to compute ∇f for large data sets
∇f(β | X) ≈ (1/n) Σ_{i=1}^{n} ∇f(β | x_i), n ≫ 1
→ E[∇f(β)], when n → ∞
Insight: E[∇f(β)] = E[∇f(β | x_i)]
Mini-batches to reduce noise
Results indicate it is part of the success of deep learning!
Provides a regulariser for the solution
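A minimal mini-batch SGD loop for the mean-squared-error case, on invented toy data (the gradient 2Xᵦᵀ(Xᵦβ − yᵦ)/b is the standard MSE gradient; this is a sketch, not code from the slides):

```python
import numpy as np

rng = np.random.default_rng(6)

# Linear model y = X beta* + noise; f(beta) is the mean squared error
X = rng.normal(size=(1000, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.01 * rng.normal(size=1000)

beta = np.zeros(3)
alpha = 0.1     # step size alpha^(k), held constant here
batch = 32

for k in range(300):
    # Mini-batch gradient estimate of grad f(beta | X)
    idx = rng.choice(len(X), batch, replace=False)
    Xb, yb = X[idx], y[idx]
    grad = 2 * Xb.T @ (Xb @ beta - yb) / batch
    beta = beta - alpha * grad      # beta^(k+1) = beta^(k) - alpha^(k) grad

print(beta)  # approximately beta_true
```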
Deep Learning
The success
Amount of available training data
Computer hardware (GPUs)
Computer software (tensor/graph libraries)
Improved regularisation techniques
Dropout
Batch normalisation
Data augmentation
Small perturbations of the data
Scaling, translations, rotations, reflections, etc.
Cropping, noise, linear combinations of samples, etc.
Deep Learning
The success
Deeper models
Transfer learning
Better initialisation
Better training algorithms
Better models
Residual Networks
Densely Connected Networks
U-Net
Skip connections!
Li et al. (2017)
Deep Learning
Convolutional Neural Networks
Introduced by LeCun et al. (1989)
ILSVRC2012: Krizhevsky et al. (2012)
Revolutionised image analysis
Also in speech recognition, synthesis, etc.
Very similar to fully connected layers
y_{1j} = φ(b_{0j} + Σ_i X_i ∗ W_{0ij})
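The layer formula is just a sum of 2-D convolutions over the input channels, plus a bias and φ. A from-scratch sketch (using cross-correlation, as CNN libraries do; names and the example filter are invented):

```python
import numpy as np

def conv2d_valid(X, W):
    """Plain 2-D cross-correlation (the CNN 'convolution'), 'valid' mode."""
    h, w = X.shape
    kh, kw = W.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(X[r:r + kh, c:c + kw] * W)
    return out

def conv_layer(Xs, Ws, b):
    """One output feature map: y = phi(b + sum_i X_i * W_i) over channels i."""
    z = b + sum(conv2d_valid(X, W) for X, W in zip(Xs, Ws))
    return np.maximum(0.0, z)   # phi = ReLU

X = np.arange(16.0).reshape(4, 4)            # single input channel
W = np.array([[0.0, -1.0], [1.0, 0.0]])      # computes X[r+1, c] - X[r, c+1]
y = conv_layer([X], [W], b=0.0)
print(y)  # constant 3.0 on this ramp image, shape (3, 3)
```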
The Convolution Operator
Deep Learning
Convolutional Neural Networks
Gabor-like filters found in mammals (e.g. in cats)
Gabor-like filters appear in the first layers of CNNs
Combined to detect complex shapes
Deep Learning
Convolutional Neural Networks
Zeiler and Fergus (2013)
Deep Learning
Convolutional Neural Networks
Deep Learning
Convolutional Neural Networks
Ren et al. (2016):
Choi et al. (2017):
Deep Learning
Convolutional Neural Networks
Nvidia (2015):
Kontschieder et al. (2017):
Deep Learning
Convolutional Neural Networks
Karpathy and Fei-Fei (2015):
Gatys et al. (2015):
Deep Learning
Convolutional Neural Networks in Medical Imaging
Roughly a 2.2× increase in papers per year since 2011
Deep Learning
Generative adversarial networks (GANs)
Two networks play a min-max game
A generator network outputs samples
A discriminator network distinguishes generated samples
from real samples
[Diagram: noise z ∼ pθ(z) is mapped by the generator to x_fake = G(z); the discriminator D(x) receives either x_fake or x_real ∼ p_data(x) and predicts whether x is real]
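The min-max game can be sketched for one step with a scalar toy generator and a logistic discriminator (all distributions and parameters here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Generator G(z) = a * z + b maps noise z ~ p_theta(z) to fake samples.
a, b = 0.5, 0.0
# Discriminator D(x) = sigmoid(w * x + c) estimates P(x is real).
w, c = 0.1, 0.0

z = rng.normal(size=64)                 # noise batch
x_fake = a * z + b                      # G(z)
x_real = rng.normal(loc=3.0, size=64)   # samples from p_data(x)

# Discriminator maximises  log D(x_real) + log(1 - D(x_fake)):
d_loss = -(np.mean(np.log(sigmoid(w * x_real + c))) +
           np.mean(np.log(1.0 - sigmoid(w * x_fake + c))))
# Generator minimises  log(1 - D(G(z))):
g_loss = np.mean(np.log(1.0 - sigmoid(w * x_fake + c)))
```

Training alternates gradient steps on the two losses; at the (ideal) equilibrium the generated distribution matches p_data and D outputs 1/2 everywhere.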
Deep Learning
Progressive GANs
A recent development of GANs (Karras et al., 2018)
The generator and discriminator are grown progressively
Provides fast and stable training of GANs
Deep Learning
StyleGAN
State of the art for human face synthesis
Examples: http://thispersondoesnotexist.com
Deep Learning
Image Synthesis using the StyleGAN
Random example images generated using the StyleGAN model
Deep Learning
Image Synthesis using the StyleGAN
Example images generated from the MR and CT parts of the
disentangled space (W)
Deep Learning
Reinforcement Learning
Multi-agent hide and seek by OpenAI (2019)
Deep Learning
Reinforcement Learning
Robot hand solving Rubik’s cube by OpenAI (2019)
The end
Thank you for your attention!
Questions?
This training material is part of the FogGuru project that has received funding
from the European Union’s Horizon 2020 research and innovation programme
under the Marie Skłodowska-Curie grant agreement No 765452. The information
and views set out in this material are those of the author(s) and do not
necessarily reflect the official opinion of the European Union. Neither the
European Union institutions and bodies nor any person acting on their behalf
may be held responsible for the use which may be made of the information
contained therein.