Transfer Learning for Improving Model Predictions in Robotic Systems
Pooyan Jamshidi, Miguel Velez, Christian Kästner,
Norbert Siegmund, Prasad Kawthekar
Overview of our approach
[Diagram: measure data from the source (simulator: Gazebo) and the target (robot: TurtleBot); learn an ML model from the data; use the model to predict performance; reason about adaptation based on the predictions.]
2
Overview of our approach
[Diagram: same as the previous slide, with the measured data from both source and target flowing into model learning.]
3
Overview of our approach
We get many cheap samples from the simulator and only a few expensive ones from the real robot, and use them to learn an accurate model that predicts the performance of the robot. Here, performance may denote "battery usage", "localization error", or "mission time".
[Diagram: measure data from the source (simulator: Gazebo) and the target (robot: TurtleBot); learn an ML model; predict performance; reason about adaptation.]
4
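As a rough illustration of this workflow, the sketch below learns a Gaussian-process model from many cheap source samples and corrects it with a few expensive target samples. This is not the authors' exact implementation; the response functions, sample sizes, and the residual-correction scheme are all assumptions made for the sketch.

```python
# Minimal sketch (not the authors' implementation): learn a response model
# from many cheap simulator samples, then correct it with a few expensive
# robot samples. Functions and sample sizes are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def measure_source(x):          # cheap: simulator (e.g., Gazebo)
    return np.sin(x) + 0.05 * rng.standard_normal(x.shape)

def measure_target(x):          # expensive: real robot (e.g., TurtleBot)
    return 1.2 * np.sin(x) + 0.3 + 0.05 * rng.standard_normal(x.shape)

X_src = np.linspace(0, 10, 100)[:, None]    # many source samples
y_src = measure_source(X_src).ravel()
X_tgt = rng.uniform(0, 10, 5)[:, None]      # only a few target samples
y_tgt = measure_target(X_tgt).ravel()

# 1) Model of the source response function.
gp_src = GaussianProcessRegressor(RBF() + WhiteKernel()).fit(X_src, y_src)

# 2) Model of the source-to-target difference, learned from few samples.
resid = y_tgt - gp_src.predict(X_tgt)
gp_delta = GaussianProcessRegressor(RBF() + WhiteKernel()).fit(X_tgt, resid)

def predict_target(x):
    """Transfer prediction: source model plus learned correction."""
    return gp_src.predict(x) + gp_delta.predict(x)
```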
Traditional machine learning vs. transfer learning
[Diagram: in traditional machine learning, a separate learning algorithm is trained on data from each of the different domains; in transfer learning, knowledge extracted from the source domain (cheap) is passed to the learning algorithm in the target domain (expensive).]
5
Traditional machine learning vs. transfer learning
[Diagram repeated from the previous slide.]
The goal of transfer learning is to improve learning in the target domain by leveraging knowledge from the source domain.
6
Performance prediction for CoBot
[Figure: CPU usage [%] heatmaps over the configuration space, panels (a)-(d): source response function, target response function, prediction without transfer learning, prediction with transfer learning.]
(a) Source response function (cheap)
(b) Target response function (expensive)
7
Performance prediction for CoBot
[Figure: CPU usage [%] heatmaps, panels (a)-(d).]
(a) Source response function (cheap)
(b) Target response function (expensive)
(c) Prediction without transfer learning
(d) Prediction with transfer learning
8
Prediction accuracy and model reliability
[Figure: contour plots, panels (a)-(d), over the number of samples taken from the target and the source.]
(a) Prediction error
• Trade-off between using more source or more target samples
(b) Model reliability
• Transfer learning helps lower the prediction uncertainty
9
Prediction accuracy and model reliability
[Figure: contour plots, panels (a)-(d).]
(a) Prediction error
• Trade-off between using more source or more target samples
(b) Model reliability
• Transfer learning helps lower the prediction uncertainty
(c) Training overhead
(d) Evaluation overhead
• Appropriate for runtime usage
10
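In spirit, the error and reliability surfaces can be reproduced by sweeping the number of source and target samples and recording the prediction error (here, mean absolute percentage error) and the model's mean predictive uncertainty. A minimal sketch with made-up response functions and the same assumed residual-correction scheme as earlier:

```python
# Illustrative sweep (not the paper's exact experiment): vary the number of
# source/target samples and record prediction error and model uncertainty.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
f_src = lambda x: np.sin(x)                  # assumed source response
f_tgt = lambda x: 1.2 * np.sin(x) + 2.5      # assumed target response (positive)
X_test = np.linspace(0, 10, 200)[:, None]
y_true = f_tgt(X_test).ravel()

for n_src in (10, 50, 100):
    for n_tgt in (2, 5, 10):
        Xs = rng.uniform(0, 10, n_src)[:, None]
        Xt = rng.uniform(0, 10, n_tgt)[:, None]
        gp_s = GaussianProcessRegressor(RBF() + WhiteKernel()).fit(
            Xs, f_src(Xs).ravel())
        resid = f_tgt(Xt).ravel() - gp_s.predict(Xt)
        gp_d = GaussianProcessRegressor(RBF() + WhiteKernel()).fit(Xt, resid)
        mu_d, sd = gp_d.predict(X_test, return_std=True)
        pred = gp_s.predict(X_test) + mu_d
        mape = 100 * np.mean(np.abs((pred - y_true) / y_true))  # error
        print(n_src, n_tgt, round(mape, 1), round(sd.mean(), 3))  # reliability
```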
Prediction accuracy
• The model provides more accurate predictions as we exploit more data
• Transfer learning may (i) boost initial performance, (ii) increase the learning speed, and (iii) lead to a more accurate model in the end
[Figure: learning curves of performance vs. training with and without transfer, showing a higher start, a higher slope, and a higher asymptote. "Three ways in which transfer might improve learning." (Lisa Torrey and Jude Shavlik)]
11
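The three effects can be observed in a small experiment: compare a model trained on target samples alone against one that starts from a transferred source model, as the target budget grows. The sketch below uses invented response functions and is only meant to show the mechanics, not to reproduce the figure:

```python
# Sketch of higher start / slope / asymptote: target-only learning vs.
# learning with a transferred source model as target samples accumulate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
f_src = lambda x: np.sin(x)                  # illustrative source response
f_tgt = lambda x: 1.1 * np.sin(x) + 0.2      # illustrative target response
X_test = np.linspace(0, 10, 200)[:, None]
y_true = f_tgt(X_test).ravel()

Xs = rng.uniform(0, 10, 100)[:, None]        # plentiful source data
gp_src = GaussianProcessRegressor(RBF() + WhiteKernel()).fit(
    Xs, f_src(Xs).ravel())

for n in (2, 4, 8, 16):                      # growing target budget
    Xt = rng.uniform(0, 10, n)[:, None]
    yt = f_tgt(Xt).ravel()
    no_tl = GaussianProcessRegressor(RBF() + WhiteKernel()).fit(Xt, yt)
    gp_d = GaussianProcessRegressor(RBF() + WhiteKernel()).fit(
        Xt, yt - gp_src.predict(Xt))         # learn source-target residual
    err = lambda pred: np.mean(np.abs(pred - y_true))
    print(n, round(err(no_tl.predict(X_test)), 3),
          round(err(gp_src.predict(X_test) + gp_d.predict(X_test)), 3))
```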
Current priority
We have looked into transfer learning from the simulator to the real robot; now we consider the following scenarios:
• Workload change (new tasks or missions, new environmental conditions)
• Infrastructure change (new Intel NUC, new camera, new sensors)
• Code change (new versions of ROS, new localization algorithm)
12
Backup slides
13
Problem-solution overview
• The robotic software is considered a highly configurable system.
• The configuration of the robot influences its performance (as well as its energy usage).
• Problem: there are many different parameters, making the configuration space high-dimensional and the influence of configuration on system performance difficult to understand.
• Solution: learn a black-box performance model using measurements from the robot: performance_value = f(configuration)
• Challenge: measurements on the real robot are expensive (time-consuming, require human resources, risky when they fail).
• Our contribution: performing most measurements on the simulator (Gazebo) and taking only a few samples from the real robot to learn a reliable and accurate performance model within a certain experimental budget.
14
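The black-box view performance_value = f(configuration) can be made concrete as follows. The parameter names and the measurement function are hypothetical stand-ins for real robot options and benchmark runs:

```python
# Sketch of the black-box view performance_value = f(configuration).
# Parameter names are hypothetical examples of robot configuration options.
import itertools
import random

config_space = {
    "localization": ["local", "global"],   # categorical option
    "num_refinements": [1, 2, 4, 8],       # integer option
    "particles": [500, 1000, 2000],
}

def measure_performance(config):
    """Stand-in for an expensive measurement on the robot or simulator,
    returning e.g. mission time; replace with a real benchmark run."""
    base = 10.0 + 0.002 * config["particles"] + 0.5 * config["num_refinements"]
    if config["localization"] == "global":
        base += 2.0
    return base + random.gauss(0, 0.1)     # measurement noise

# Enumerate the Cartesian configuration space and measure a random subset.
configs = [dict(zip(config_space, values))
           for values in itertools.product(*config_space.values())]
sample = random.sample(configs, 5)
data = [(c, measure_performance(c)) for c in sample]
```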
Synthetic example
(a) Source and target samples
(b) Learning without transfer
(c) Learning with transfer
(d) Negative transfer
[Figure: four panels (a)-(d) showing source samples, target samples, and the learned functions.]
15
Integrating cost model with transfer learning
[Diagram: configuration parameters and samples feed (1) model learning, which yields (2) a predictive model; (3) a cost model and (4) transfer learning are integrated to produce an improved predictive model.]
[Figure: average write latency (µs) vs. throughput (ops/sec) for TL4CO and BO4CO.]
We have only a limited experimental budget, and we need to spend it wisely.
16
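One way to read the integration of the cost model is as a budgeted sampling policy: each new measurement is taken from the domain whose expected benefit per unit cost is highest. The sketch below is only a schematic of that idea; the costs and the gain estimate are invented, and TL4CO's actual acquisition function differs:

```python
# Schematic cost-aware sampling decision (illustrative, not TL4CO itself):
# spend a limited budget on source vs. target measurements by weighing an
# estimated uncertainty reduction against the per-sample cost.
from collections import Counter

cost = {"source": 1.0, "target": 20.0}   # assumed relative sampling costs
budget = 100.0
taken = Counter()

def expected_gain(domain, n_taken):
    # Hypothetical diminishing-returns estimate; a real implementation
    # would derive this from the predictive model (e.g., GP variance).
    base = {"source": 0.5, "target": 4.0}[domain]
    return base / (1 + n_taken)

plan = []
while budget >= min(cost.values()):
    affordable = [d for d in cost if cost[d] <= budget]
    # Pick the affordable domain with the best gain-per-cost ratio.
    choice = max(affordable, key=lambda d: expected_gain(d, taken[d]) / cost[d])
    plan.append(choice)
    taken[choice] += 1
    budget -= cost[choice]

print(Counter(plan))   # how the budget was split across domains
```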
Assumption: Source and target are correlated
[Figure: four panels (a)-(d) plotting example source and target response functions over the configuration space; y-axis scale ×10^4.]
[Excerpt from the accompanying paper, deduplicated:]

…dimensional spaces. Relevant to the software engineering community, several approaches tried different experimental designs for highly configurable software [14], and some even consider cost as an explicit factor to determine optimal sampling [26]. Recently, researchers have tried novel ways of sampling with feedback embedded inside the process, where new samples are derived based on the information gain from the previous set of samples. Recursive Random Sampling (RRS) [33] integrates a restarting mechanism into random sampling to achieve high search efficiency. Smart Hill Climbing (SHC) [32] integrates importance sampling with Latin Hypercube Design (LHD). SHC estimates the local regression at each potential region, then searches toward the steepest descent direction. An approach based on direct search [35] forms a simplex in the parameter space from a number of samples, and iteratively updates the simplex through a number of well-defined operations, including reflection, expansion, and contraction, to guide sample generation. Quick Optimization via Guessing (QOG) [23] speeds up the optimization process by exploiting heuristics to filter out sub-optimal configurations. Some recent work [34] exploited a characteristic of the response surface of configurable software to learn Fourier-sparse functions from only a small sample size. Another approach also exploited this fact, but iteratively constructs a regression model representing performance influences in an active learning process. In some time-constrained environments (e.g., runtime decision making in a feedback loop for robots), it is important to select the sources purposefully.

C. Problem formulation: Model learning

In order to introduce the concepts in our approach precisely and concisely, we define the model learning problem using mathematical notation. Let X_i indicate the i-th configuration parameter, which ranges in a finite domain Dom(X_i). In general, X_i may either indicate (i) an integer variable, such as the number of iterative refinements in a localization algorithm, or (ii) a categorical variable, such as sensor names or binary options (e.g., local vs. global localization method). Therefore, the configuration space is mathematically a Cartesian product of the domains of the parameters of interest, X = Dom(X_1) × ··· × Dom(X_d). A configuration x resides in the design parameter space, x ∈ X. A black-box response function f : X → ℝ is used to build a performance model given some observations of the system performance under different settings, D ⊆ X. In practice, though, such measurements may contain noise, i.e., y_i = f(x_i) + ε_i where ε_i ∼ N(0, σ_i). In other words, a performance model is simply a function (mapping) from the configuration space to a measurable performance metric that produces interval-scaled data (here we assume it produces real numbers).
a) The source may be a shifted version of the target
b) The source may be a noisy version of the target
c) The source may differ from the target in some extreme positions of the configuration space
d) The source may be irrelevant to the target -> negative transfer will happen!
17
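To make the panels concrete, the following sketch constructs the four source types with made-up response functions and reports each source's correlation with the target; a correlation near zero signals the risk of negative transfer:

```python
# Illustrative construction of the four source types in panels (a)-(d),
# and their correlation with the target response (functions are made up).
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 200)
target = np.sin(x) * 2e4 + 2.2e4                 # target response f(x)

sources = {
    "(a) shifted":    target - 5e3,              # shift of the target
    "(b) noisy":      target + rng.normal(0, 2e3, x.size),
    "(c) extremes":   np.where(x > 9, target * 0.5, target),  # differs at extremes
    "(d) irrelevant": rng.normal(2.2e4, 5e3, x.size),         # unrelated
}
for name, src in sources.items():
    print(name, round(np.corrcoef(src, target)[0, 1], 2))
```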
Level of correlation between source and target is important
[Figure: box plots of absolute percentage error [%] for sources s, s1-s6.]

Source        s      s1     s2     s3     s4     s5     s6
noise level   0      5      10     15     20     25     30
corr. coeff.  0.98   0.95   0.89   0.75   0.54   0.34   0.19
µ(pe)         15.34  14.14  17.09  18.71  33.06  40.93  46.75

• The model becomes more accurate when the source is more related to the target
• Even learning from a source with a small correlation is better than no transfer
18
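The experiment behind the table can be approximated as follows; the response functions, the transfer model, and the resulting numbers are illustrative and will not match the paper's:

```python
# Sketch of the noise-level experiment: progressively noisier sources yield
# lower source-target correlation and (beyond a point) higher error.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
x = np.linspace(0, 10, 100)[:, None]
f_tgt = lambda x: np.sin(x).ravel() * 10 + 20    # positive target response
Xt = rng.uniform(0, 10, 5)[:, None]              # few target samples

for noise in (0, 5, 10, 15, 20, 25, 30):
    y_src = f_tgt(x) + rng.normal(0, noise, x.shape[0])   # noisier source
    corr = np.corrcoef(y_src, f_tgt(x))[0, 1]
    gp_s = GaussianProcessRegressor(RBF() + WhiteKernel()).fit(x, y_src)
    gp_d = GaussianProcessRegressor(RBF() + WhiteKernel()).fit(
        Xt, f_tgt(Xt) - gp_s.predict(Xt))        # residual correction
    pred = gp_s.predict(x) + gp_d.predict(x)
    mape = 100 * np.mean(np.abs((pred - f_tgt(x)) / f_tgt(x)))
    print(f"noise={noise:2d}  corr={corr:.2f}  error={mape:.1f}%")
```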
Active learning + transfer learning
Goal: find the best sample points iteratively by gaining knowledge from the source and target domains
• Learned function with bad samples -> overfitting
• Learned function with good samples
19
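A minimal active-learning loop in that spirit: repeatedly fit a model, pick the candidate configuration with the highest predictive uncertainty, and measure it there. For brevity the sketch fits a plain GP on target samples only; the full approach would instead use the transfer model's uncertainty, informed by the source:

```python
# Sketch of an active-learning loop: iteratively pick the configuration
# where the model is most uncertain, then measure it on the target.
# Names and the response function are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

f_tgt = lambda x: 1.2 * np.sin(x).ravel() + 2.5  # expensive target response
X_pool = np.linspace(0, 10, 200)[:, None]        # candidate configurations

X, y = [[0.0]], [f_tgt(np.array([[0.0]]))[0]]    # seed with one sample
for _ in range(9):                               # small target budget
    gp = GaussianProcessRegressor(RBF() + WhiteKernel()).fit(
        np.array(X), np.array(y))
    _, sd = gp.predict(X_pool, return_std=True)
    x_next = X_pool[np.argmax(sd)]               # most uncertain candidate
    X.append(list(x_next))
    y.append(f_tgt(x_next[None, :])[0])          # "measure" the robot there
```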
Conclusion: Our cost-aware transfer learning
• Improves model accuracy by up to several orders of magnitude
• Can trade off between different numbers of samples from the source and the target, enabling a cost-aware model
• Imposes an acceptable model-building and evaluation cost, making it appropriate for application in the robotics domain
20

More Related Content

What's hot

MetaPerturb: Transferable Regularizer for Heterogeneous Tasks and Architectures
MetaPerturb: Transferable Regularizer for Heterogeneous Tasks and ArchitecturesMetaPerturb: Transferable Regularizer for Heterogeneous Tasks and Architectures
MetaPerturb: Transferable Regularizer for Heterogeneous Tasks and Architectures
MLAI2
 
Detection focal loss 딥러닝 논문읽기 모임 발표자료
Detection focal loss 딥러닝 논문읽기 모임 발표자료Detection focal loss 딥러닝 논문읽기 모임 발표자료
Detection focal loss 딥러닝 논문읽기 모임 발표자료
taeseon ryu
 
Dear - 딥러닝 논문읽기 모임 김창연님
Dear - 딥러닝 논문읽기 모임 김창연님Dear - 딥러닝 논문읽기 모임 김창연님
Dear - 딥러닝 논문읽기 모임 김창연님
taeseon ryu
 
safe and efficient off policy reinforcement learning
safe and efficient off policy reinforcement learningsafe and efficient off policy reinforcement learning
safe and efficient off policy reinforcement learning
Ryo Iwaki
 
Incremental collaborative filtering via evolutionary co clustering
Incremental collaborative filtering via evolutionary co clusteringIncremental collaborative filtering via evolutionary co clustering
Incremental collaborative filtering via evolutionary co clustering
Allen Wu
 
A scalable collaborative filtering framework based on co clustering
A scalable collaborative filtering framework based on co clusteringA scalable collaborative filtering framework based on co clustering
A scalable collaborative filtering framework based on co clustering
AllenWu
 
PR-330: How To Train Your ViT? Data, Augmentation, and Regularization in Visi...
PR-330: How To Train Your ViT? Data, Augmentation, and Regularization in Visi...PR-330: How To Train Your ViT? Data, Augmentation, and Regularization in Visi...
PR-330: How To Train Your ViT? Data, Augmentation, and Regularization in Visi...
Jinwon Lee
 
AI optimizing HPC simulations (presentation from 6th EULAG Workshop)
AI optimizing HPC simulations (presentation from  6th EULAG Workshop)AI optimizing HPC simulations (presentation from  6th EULAG Workshop)
AI optimizing HPC simulations (presentation from 6th EULAG Workshop)
byteLAKE
 
HyperLogLog in Practice: Algorithmic Engineering of a State of The Art Cardin...
HyperLogLog in Practice: Algorithmic Engineering of a State of The Art Cardin...HyperLogLog in Practice: Algorithmic Engineering of a State of The Art Cardin...
HyperLogLog in Practice: Algorithmic Engineering of a State of The Art Cardin...
Sunny Kr
 
Genetic Algorithms
Genetic AlgorithmsGenetic Algorithms
Genetic Algorithms
Oğuzhan TAŞ Akademi
 
[unofficial] Pyramid Scene Parsing Network (CVPR 2017)
[unofficial] Pyramid Scene Parsing Network (CVPR 2017)[unofficial] Pyramid Scene Parsing Network (CVPR 2017)
[unofficial] Pyramid Scene Parsing Network (CVPR 2017)
Shunta Saito
 
Towards a Unified Data Analytics Optimizer with Yanlei Diao
Towards a Unified Data Analytics Optimizer with Yanlei DiaoTowards a Unified Data Analytics Optimizer with Yanlei Diao
Towards a Unified Data Analytics Optimizer with Yanlei Diao
Databricks
 
ゆるふわ強化学習入門
ゆるふわ強化学習入門ゆるふわ強化学習入門
ゆるふわ強化学習入門
Ryo Iwaki
 
自然方策勾配法の基礎と応用
自然方策勾配法の基礎と応用自然方策勾配法の基礎と応用
自然方策勾配法の基礎と応用
Ryo Iwaki
 
Co-clustering of multi-view datasets: a parallelizable approach
Co-clustering of multi-view datasets: a parallelizable approachCo-clustering of multi-view datasets: a parallelizable approach
Co-clustering of multi-view datasets: a parallelizable approach
Allen Wu
 
201907 AutoML and Neural Architecture Search
201907 AutoML and Neural Architecture Search201907 AutoML and Neural Architecture Search
201907 AutoML and Neural Architecture Search
DaeJin Kim
 
In datacenter performance analysis of a tensor processing unit
In datacenter performance analysis of a tensor processing unitIn datacenter performance analysis of a tensor processing unit
In datacenter performance analysis of a tensor processing unit
Jinwon Lee
 
Lecture 6: Convolutional Neural Networks
Lecture 6: Convolutional Neural NetworksLecture 6: Convolutional Neural Networks
Lecture 6: Convolutional Neural Networks
Sang Jun Lee
 
Producer consumer-problems
Producer consumer-problemsProducer consumer-problems
Producer consumer-problemsRichard Ashworth
 
Energy-aware VM Allocation on An Opportunistic Cloud Infrastructure
Energy-aware VM Allocation on An Opportunistic Cloud InfrastructureEnergy-aware VM Allocation on An Opportunistic Cloud Infrastructure
Energy-aware VM Allocation on An Opportunistic Cloud Infrastructure
Mario Jose Villamizar Cano
 

What's hot (20)

MetaPerturb: Transferable Regularizer for Heterogeneous Tasks and Architectures
MetaPerturb: Transferable Regularizer for Heterogeneous Tasks and ArchitecturesMetaPerturb: Transferable Regularizer for Heterogeneous Tasks and Architectures
MetaPerturb: Transferable Regularizer for Heterogeneous Tasks and Architectures
 
Detection focal loss 딥러닝 논문읽기 모임 발표자료
Detection focal loss 딥러닝 논문읽기 모임 발표자료Detection focal loss 딥러닝 논문읽기 모임 발표자료
Detection focal loss 딥러닝 논문읽기 모임 발표자료
 
Dear - 딥러닝 논문읽기 모임 김창연님
Dear - 딥러닝 논문읽기 모임 김창연님Dear - 딥러닝 논문읽기 모임 김창연님
Dear - 딥러닝 논문읽기 모임 김창연님
 
safe and efficient off policy reinforcement learning
safe and efficient off policy reinforcement learningsafe and efficient off policy reinforcement learning
safe and efficient off policy reinforcement learning
 
Incremental collaborative filtering via evolutionary co clustering
Incremental collaborative filtering via evolutionary co clusteringIncremental collaborative filtering via evolutionary co clustering
Incremental collaborative filtering via evolutionary co clustering
 
A scalable collaborative filtering framework based on co clustering
A scalable collaborative filtering framework based on co clusteringA scalable collaborative filtering framework based on co clustering
A scalable collaborative filtering framework based on co clustering
 
PR-330: How To Train Your ViT? Data, Augmentation, and Regularization in Visi...
PR-330: How To Train Your ViT? Data, Augmentation, and Regularization in Visi...PR-330: How To Train Your ViT? Data, Augmentation, and Regularization in Visi...
PR-330: How To Train Your ViT? Data, Augmentation, and Regularization in Visi...
 
AI optimizing HPC simulations (presentation from 6th EULAG Workshop)
AI optimizing HPC simulations (presentation from  6th EULAG Workshop)AI optimizing HPC simulations (presentation from  6th EULAG Workshop)
AI optimizing HPC simulations (presentation from 6th EULAG Workshop)
 
HyperLogLog in Practice: Algorithmic Engineering of a State of The Art Cardin...
HyperLogLog in Practice: Algorithmic Engineering of a State of The Art Cardin...HyperLogLog in Practice: Algorithmic Engineering of a State of The Art Cardin...
HyperLogLog in Practice: Algorithmic Engineering of a State of The Art Cardin...
 
Genetic Algorithms
Genetic AlgorithmsGenetic Algorithms
Genetic Algorithms
 
[unofficial] Pyramid Scene Parsing Network (CVPR 2017)
[unofficial] Pyramid Scene Parsing Network (CVPR 2017)[unofficial] Pyramid Scene Parsing Network (CVPR 2017)
[unofficial] Pyramid Scene Parsing Network (CVPR 2017)
 
Towards a Unified Data Analytics Optimizer with Yanlei Diao
Towards a Unified Data Analytics Optimizer with Yanlei DiaoTowards a Unified Data Analytics Optimizer with Yanlei Diao
Towards a Unified Data Analytics Optimizer with Yanlei Diao
 
ゆるふわ強化学習入門
ゆるふわ強化学習入門ゆるふわ強化学習入門
ゆるふわ強化学習入門
 
自然方策勾配法の基礎と応用
自然方策勾配法の基礎と応用自然方策勾配法の基礎と応用
自然方策勾配法の基礎と応用
 
Co-clustering of multi-view datasets: a parallelizable approach
Co-clustering of multi-view datasets: a parallelizable approachCo-clustering of multi-view datasets: a parallelizable approach
Co-clustering of multi-view datasets: a parallelizable approach
 
201907 AutoML and Neural Architecture Search
201907 AutoML and Neural Architecture Search201907 AutoML and Neural Architecture Search
201907 AutoML and Neural Architecture Search
 
In datacenter performance analysis of a tensor processing unit
In datacenter performance analysis of a tensor processing unitIn datacenter performance analysis of a tensor processing unit
In datacenter performance analysis of a tensor processing unit
 
Lecture 6: Convolutional Neural Networks
Lecture 6: Convolutional Neural NetworksLecture 6: Convolutional Neural Networks
Lecture 6: Convolutional Neural Networks
 
Producer consumer-problems
Producer consumer-problemsProducer consumer-problems
Producer consumer-problems
 
Energy-aware VM Allocation on An Opportunistic Cloud Infrastructure
Energy-aware VM Allocation on An Opportunistic Cloud InfrastructureEnergy-aware VM Allocation on An Opportunistic Cloud Infrastructure
Energy-aware VM Allocation on An Opportunistic Cloud Infrastructure
 

Viewers also liked

Microservices Architecture Enables DevOps: Migration to a Cloud-Native Archit...
Microservices Architecture Enables DevOps: Migration to a Cloud-Native Archit...Microservices Architecture Enables DevOps: Migration to a Cloud-Native Archit...
Microservices Architecture Enables DevOps: Migration to a Cloud-Native Archit...
Pooyan Jamshidi
 
Cloud Migration Patterns: A Multi-Cloud Architectural Perspective
Cloud Migration Patterns: A Multi-Cloud Architectural PerspectiveCloud Migration Patterns: A Multi-Cloud Architectural Perspective
Cloud Migration Patterns: A Multi-Cloud Architectural Perspective
Pooyan Jamshidi
 
Scalable machine learning
Scalable machine learningScalable machine learning
Scalable machine learning
Arnaud Rachez
 
Autonomic Resource Provisioning for Cloud-Based Software
Autonomic Resource Provisioning for Cloud-Based SoftwareAutonomic Resource Provisioning for Cloud-Based Software
Autonomic Resource Provisioning for Cloud-Based Software
Pooyan Jamshidi
 
Towards Quality-Aware Development of Big Data Applications with DICE
Towards Quality-Aware Development of Big Data Applications with DICETowards Quality-Aware Development of Big Data Applications with DICE
Towards Quality-Aware Development of Big Data Applications with DICE
Pooyan Jamshidi
 
Configuration Optimization Tool
Configuration Optimization ToolConfiguration Optimization Tool
Configuration Optimization Tool
Pooyan Jamshidi
 
Sensitivity Analysis for Building Adaptive Robotic Software
Sensitivity Analysis for Building Adaptive Robotic SoftwareSensitivity Analysis for Building Adaptive Robotic Software
Sensitivity Analysis for Building Adaptive Robotic Software
Pooyan Jamshidi
 
Self learning cloud controllers
Self learning cloud controllersSelf learning cloud controllers
Self learning cloud controllersPooyan Jamshidi
 
ESM Machine learning 5주차 Review by Mario Cho
ESM Machine learning 5주차 Review by Mario ChoESM Machine learning 5주차 Review by Mario Cho
ESM Machine learning 5주차 Review by Mario Cho
Mario Cho
 
Introduction to Machine Learning on Apache Spark MLlib by Juliet Hougland, Se...
Introduction to Machine Learning on Apache Spark MLlib by Juliet Hougland, Se...Introduction to Machine Learning on Apache Spark MLlib by Juliet Hougland, Se...
Introduction to Machine Learning on Apache Spark MLlib by Juliet Hougland, Se...
Cloudera, Inc.
 
How to Get My Paper Accepted at Top Software Engineering Conferences
How to Get My Paper Accepted at Top Software Engineering ConferencesHow to Get My Paper Accepted at Top Software Engineering Conferences
How to Get My Paper Accepted at Top Software Engineering Conferences
Alex Orso
 
Airline flights delay prediction- 2014 Spring Data Mining Project
Airline flights delay prediction- 2014 Spring Data Mining ProjectAirline flights delay prediction- 2014 Spring Data Mining Project
Airline flights delay prediction- 2014 Spring Data Mining Project
Haozhe Wang
 
Deep Learning for Computer Vision: Transfer Learning and Domain Adaptation (U...
Deep Learning for Computer Vision: Transfer Learning and Domain Adaptation (U...Deep Learning for Computer Vision: Transfer Learning and Domain Adaptation (U...
Deep Learning for Computer Vision: Transfer Learning and Domain Adaptation (U...
Universitat Politècnica de Catalunya
 
Large Scale Machine Learning with Apache Spark
Large Scale Machine Learning with Apache SparkLarge Scale Machine Learning with Apache Spark
Large Scale Machine Learning with Apache Spark
Cloudera, Inc.
 
Transfer Learning and Fine-tuning Deep Neural Networks
 Transfer Learning and Fine-tuning Deep Neural Networks Transfer Learning and Fine-tuning Deep Neural Networks
Transfer Learning and Fine-tuning Deep Neural Networks
PyData
 
Machine Learning with Apache Spark
Machine Learning with Apache SparkMachine Learning with Apache Spark
Machine Learning with Apache Spark
IBM Cloud Data Services
 
Nano robot / nano technology
Nano robot / nano technology Nano robot / nano technology
Nano robot / nano technology
Vijay Patil
 
Transfer of Learning
Transfer of LearningTransfer of Learning
Transfer of LearningAbby Rondilla
 
Wireless robot ppt
Wireless robot pptWireless robot ppt
Wireless robot pptVarun B P
 
Fundamental of robotic manipulator
Fundamental of robotic manipulatorFundamental of robotic manipulator
Fundamental of robotic manipulatorsnkalepvpit
 

Viewers also liked (20)

Microservices Architecture Enables DevOps: Migration to a Cloud-Native Archit...
Microservices Architecture Enables DevOps: Migration to a Cloud-Native Archit...Microservices Architecture Enables DevOps: Migration to a Cloud-Native Archit...
Microservices Architecture Enables DevOps: Migration to a Cloud-Native Archit...
 
Cloud Migration Patterns: A Multi-Cloud Architectural Perspective
Cloud Migration Patterns: A Multi-Cloud Architectural PerspectiveCloud Migration Patterns: A Multi-Cloud Architectural Perspective
Cloud Migration Patterns: A Multi-Cloud Architectural Perspective
 
Scalable machine learning
Scalable machine learningScalable machine learning
Scalable machine learning
 
Autonomic Resource Provisioning for Cloud-Based Software
Autonomic Resource Provisioning for Cloud-Based SoftwareAutonomic Resource Provisioning for Cloud-Based Software
Autonomic Resource Provisioning for Cloud-Based Software
 
Towards Quality-Aware Development of Big Data Applications with DICE
Towards Quality-Aware Development of Big Data Applications with DICETowards Quality-Aware Development of Big Data Applications with DICE
Towards Quality-Aware Development of Big Data Applications with DICE
 
Configuration Optimization Tool
Configuration Optimization ToolConfiguration Optimization Tool
Configuration Optimization Tool
 
Sensitivity Analysis for Building Adaptive Robotic Software
Sensitivity Analysis for Building Adaptive Robotic SoftwareSensitivity Analysis for Building Adaptive Robotic Software
Sensitivity Analysis for Building Adaptive Robotic Software
 
Self learning cloud controllers
Self learning cloud controllersSelf learning cloud controllers
Self learning cloud controllers
 
ESM Machine learning 5주차 Review by Mario Cho
ESM Machine learning 5주차 Review by Mario ChoESM Machine learning 5주차 Review by Mario Cho
ESM Machine learning 5주차 Review by Mario Cho
 
Introduction to Machine Learning on Apache Spark MLlib by Juliet Hougland, Se...
Introduction to Machine Learning on Apache Spark MLlib by Juliet Hougland, Se...Introduction to Machine Learning on Apache Spark MLlib by Juliet Hougland, Se...
Introduction to Machine Learning on Apache Spark MLlib by Juliet Hougland, Se...
 
How to Get My Paper Accepted at Top Software Engineering Conferences
How to Get My Paper Accepted at Top Software Engineering ConferencesHow to Get My Paper Accepted at Top Software Engineering Conferences
How to Get My Paper Accepted at Top Software Engineering Conferences
 
Airline flights delay prediction- 2014 Spring Data Mining Project
Airline flights delay prediction- 2014 Spring Data Mining ProjectAirline flights delay prediction- 2014 Spring Data Mining Project
Airline flights delay prediction- 2014 Spring Data Mining Project
 
Deep Learning for Computer Vision: Transfer Learning and Domain Adaptation (U...
Deep Learning for Computer Vision: Transfer Learning and Domain Adaptation (U...Deep Learning for Computer Vision: Transfer Learning and Domain Adaptation (U...
Deep Learning for Computer Vision: Transfer Learning and Domain Adaptation (U...
 
Large Scale Machine Learning with Apache Spark
Large Scale Machine Learning with Apache SparkLarge Scale Machine Learning with Apache Spark
Large Scale Machine Learning with Apache Spark
 
Transfer Learning and Fine-tuning Deep Neural Networks
 Transfer Learning and Fine-tuning Deep Neural Networks Transfer Learning and Fine-tuning Deep Neural Networks
Transfer Learning and Fine-tuning Deep Neural Networks
 
Machine Learning with Apache Spark
Machine Learning with Apache SparkMachine Learning with Apache Spark
Machine Learning with Apache Spark
 
Nano robot / nano technology
Nano robot / nano technology Nano robot / nano technology
Nano robot / nano technology
 
Transfer of Learning
Transfer of LearningTransfer of Learning
Transfer of Learning
 
Wireless robot ppt
Wireless robot pptWireless robot ppt
Wireless robot ppt
 
Fundamental of robotic manipulator
Fundamental of robotic manipulatorFundamental of robotic manipulator
Fundamental of robotic manipulator
 

Similar to Transfer Learning for Improving Model Predictions in Robotic Systems

Integrated Model Discovery and Self-Adaptation of Robots
Integrated Model Discovery and Self-Adaptation of RobotsIntegrated Model Discovery and Self-Adaptation of Robots
Integrated Model Discovery and Self-Adaptation of Robots
Pooyan Jamshidi
 
Using Simulation to Investigate Requirements Prioritization Strategies
Using Simulation to Investigate Requirements Prioritization StrategiesUsing Simulation to Investigate Requirements Prioritization Strategies
Using Simulation to Investigate Requirements Prioritization Strategies
CS, NcState
 
Keynote at IWLS 2017
Keynote at IWLS 2017Keynote at IWLS 2017
Keynote at IWLS 2017
Manish Pandey
 
Polymorphism in java
Polymorphism in javaPolymorphism in java
Polymorphism in java
sureshraj43
 
Optimizing Mobile Robot Path Planning and Navigation by Use of Differential E...
Optimizing Mobile Robot Path Planning and Navigation by Use of Differential E...Optimizing Mobile Robot Path Planning and Navigation by Use of Differential E...
Optimizing Mobile Robot Path Planning and Navigation by Use of Differential E...
IOSR Journals
 
Model-based Regression Testing of Autonomous Robots
Model-based Regression Testing of Autonomous RobotsModel-based Regression Testing of Autonomous Robots
Model-based Regression Testing of Autonomous Robots
Zoltan Micskei
 
ML_in_QM_JC_02-10-18
ML_in_QM_JC_02-10-18ML_in_QM_JC_02-10-18
ML_in_QM_JC_02-10-18
Suzanne Wallace
 
Polymorphism in java
Polymorphism in java Polymorphism in java
Polymorphism in java
Janu Jahnavi
 
MEME – An Integrated Tool For Advanced Computational Experiments
MEME – An Integrated Tool For Advanced Computational ExperimentsMEME – An Integrated Tool For Advanced Computational Experiments
MEME – An Integrated Tool For Advanced Computational ExperimentsGIScRG
 
AutoML lectures (ACDL 2019)
AutoML lectures (ACDL 2019)AutoML lectures (ACDL 2019)
AutoML lectures (ACDL 2019)
Joaquin Vanschoren
 
Design patterns
Design patternsDesign patterns
Design patterns
Anas Alpure
 
Transfer Learning for Performance Analysis of Highly-Configurable Software
Transfer Learning for Performance Analysis of Highly-Configurable SoftwareTransfer Learning for Performance Analysis of Highly-Configurable Software
Transfer Learning for Performance Analysis of Highly-Configurable Software
Pooyan Jamshidi
 
Sim-to-Real Transfer in Deep Reinforcement Learning
Sim-to-Real Transfer in Deep Reinforcement LearningSim-to-Real Transfer in Deep Reinforcement Learning
Sim-to-Real Transfer in Deep Reinforcement Learning
atulshah16
 
A Survey of Machine Learning Methods Applied to Computer ...
A Survey of Machine Learning Methods Applied to Computer ...A Survey of Machine Learning Methods Applied to Computer ...
A Survey of Machine Learning Methods Applied to Computer ...butest
 
Making Robots Learn
Making Robots LearnMaking Robots Learn
Making Robots Learn
inside-BigData.com
 
Automated Testing of Autonomous Driving Assistance Systems
Automated Testing of Autonomous Driving Assistance SystemsAutomated Testing of Autonomous Driving Assistance Systems
Automated Testing of Autonomous Driving Assistance Systems
Lionel Briand
 
Surrogate modeling for industrial design
Surrogate modeling for industrial designSurrogate modeling for industrial design
Surrogate modeling for industrial design
Shinwoo Jang
 
Cs854 lecturenotes01
Cs854 lecturenotes01Cs854 lecturenotes01
Cs854 lecturenotes01
Mehmet Çelik
 
Presentation
PresentationPresentation
Presentationbutest
 
Algorithmic Analysis to Video Object Tracking and Background Segmentation and...
Algorithmic Analysis to Video Object Tracking and Background Segmentation and...Algorithmic Analysis to Video Object Tracking and Background Segmentation and...
Algorithmic Analysis to Video Object Tracking and Background Segmentation and...
Editor IJCATR
 

Similar to Transfer Learning for Improving Model Predictions in Robotic Systems (20)

Integrated Model Discovery and Self-Adaptation of Robots
Integrated Model Discovery and Self-Adaptation of RobotsIntegrated Model Discovery and Self-Adaptation of Robots
Integrated Model Discovery and Self-Adaptation of Robots
 
Using Simulation to Investigate Requirements Prioritization Strategies
Using Simulation to Investigate Requirements Prioritization StrategiesUsing Simulation to Investigate Requirements Prioritization Strategies
Using Simulation to Investigate Requirements Prioritization Strategies
 
Keynote at IWLS 2017
Keynote at IWLS 2017Keynote at IWLS 2017
Keynote at IWLS 2017
 
Polymorphism in java
Polymorphism in javaPolymorphism in java
Polymorphism in java
 
Optimizing Mobile Robot Path Planning and Navigation by Use of Differential E...
Optimizing Mobile Robot Path Planning and Navigation by Use of Differential E...Optimizing Mobile Robot Path Planning and Navigation by Use of Differential E...
Optimizing Mobile Robot Path Planning and Navigation by Use of Differential E...
 
Model-based Regression Testing of Autonomous Robots
Model-based Regression Testing of Autonomous RobotsModel-based Regression Testing of Autonomous Robots
Model-based Regression Testing of Autonomous Robots
 
ML_in_QM_JC_02-10-18
ML_in_QM_JC_02-10-18ML_in_QM_JC_02-10-18
ML_in_QM_JC_02-10-18
 
Polymorphism in java
Polymorphism in java Polymorphism in java
Polymorphism in java
 
MEME – An Integrated Tool For Advanced Computational Experiments
MEME – An Integrated Tool For Advanced Computational ExperimentsMEME – An Integrated Tool For Advanced Computational Experiments
MEME – An Integrated Tool For Advanced Computational Experiments
 
AutoML lectures (ACDL 2019)
AutoML lectures (ACDL 2019)AutoML lectures (ACDL 2019)
AutoML lectures (ACDL 2019)
 
Design patterns
Design patternsDesign patterns
Design patterns
 
Transfer Learning for Performance Analysis of Highly-Configurable Software
Transfer Learning for Performance Analysis of Highly-Configurable SoftwareTransfer Learning for Performance Analysis of Highly-Configurable Software
Transfer Learning for Performance Analysis of Highly-Configurable Software
 
Sim-to-Real Transfer in Deep Reinforcement Learning
Sim-to-Real Transfer in Deep Reinforcement LearningSim-to-Real Transfer in Deep Reinforcement Learning
Sim-to-Real Transfer in Deep Reinforcement Learning
 
A Survey of Machine Learning Methods Applied to Computer ...
A Survey of Machine Learning Methods Applied to Computer ...A Survey of Machine Learning Methods Applied to Computer ...
A Survey of Machine Learning Methods Applied to Computer ...
 
Making Robots Learn
Making Robots LearnMaking Robots Learn
Making Robots Learn
 
Automated Testing of Autonomous Driving Assistance Systems
Automated Testing of Autonomous Driving Assistance SystemsAutomated Testing of Autonomous Driving Assistance Systems
Automated Testing of Autonomous Driving Assistance Systems
 
Surrogate modeling for industrial design
Surrogate modeling for industrial designSurrogate modeling for industrial design
Surrogate modeling for industrial design
 
Cs854 lecturenotes01
Cs854 lecturenotes01Cs854 lecturenotes01
Cs854 lecturenotes01
 
Presentation
PresentationPresentation
Presentation
 
Algorithmic Analysis to Video Object Tracking and Background Segmentation and...
Algorithmic Analysis to Video Object Tracking and Background Segmentation and...Algorithmic Analysis to Video Object Tracking and Background Segmentation and...
Algorithmic Analysis to Video Object Tracking and Background Segmentation and...
 

More from Pooyan Jamshidi

Learning LWF Chain Graphs: A Markov Blanket Discovery Approach
Learning LWF Chain Graphs: A Markov Blanket Discovery ApproachLearning LWF Chain Graphs: A Markov Blanket Discovery Approach
Learning LWF Chain Graphs: A Markov Blanket Discovery Approach
Pooyan Jamshidi
 
A Framework for Robust Control of Uncertainty in Self-Adaptive Software Conn...
 A Framework for Robust Control of Uncertainty in Self-Adaptive Software Conn... A Framework for Robust Control of Uncertainty in Self-Adaptive Software Conn...
A Framework for Robust Control of Uncertainty in Self-Adaptive Software Conn...
Pooyan Jamshidi
 
Machine Learning Meets Quantitative Planning: Enabling Self-Adaptation in Aut...
Machine Learning Meets Quantitative Planning: Enabling Self-Adaptation in Aut...Machine Learning Meets Quantitative Planning: Enabling Self-Adaptation in Aut...
Machine Learning Meets Quantitative Planning: Enabling Self-Adaptation in Aut...
Pooyan Jamshidi
 
Ensembles of Many Diverse Weak Defenses can be Strong: Defending Deep Neural ...
Ensembles of Many Diverse Weak Defenses can be Strong: Defending Deep Neural ...Ensembles of Many Diverse Weak Defenses can be Strong: Defending Deep Neural ...
Ensembles of Many Diverse Weak Defenses can be Strong: Defending Deep Neural ...
Pooyan Jamshidi
 
Transfer Learning for Performance Analysis of Machine Learning Systems
Transfer Learning for Performance Analysis of Machine Learning SystemsTransfer Learning for Performance Analysis of Machine Learning Systems
Transfer Learning for Performance Analysis of Machine Learning Systems
Pooyan Jamshidi
 
Transfer Learning for Performance Analysis of Configurable Systems: A Causal ...
Transfer Learning for Performance Analysis of Configurable Systems:A Causal ...Transfer Learning for Performance Analysis of Configurable Systems:A Causal ...
Transfer Learning for Performance Analysis of Configurable Systems: A Causal ...
Pooyan Jamshidi
 
Machine Learning meets DevOps
Machine Learning meets DevOpsMachine Learning meets DevOps
Machine Learning meets DevOps
Pooyan Jamshidi
 
Learning to Sample
Learning to SampleLearning to Sample
Learning to Sample
Pooyan Jamshidi
 
Architectural Tradeoff in Learning-Based Software
Architectural Tradeoff in Learning-Based SoftwareArchitectural Tradeoff in Learning-Based Software
Architectural Tradeoff in Learning-Based Software
Pooyan Jamshidi
 
Production-Ready Machine Learning for the Software Architect
Production-Ready Machine Learning for the Software ArchitectProduction-Ready Machine Learning for the Software Architect
Production-Ready Machine Learning for the Software Architect
Pooyan Jamshidi
 
Transfer Learning for Software Performance Analysis: An Exploratory Analysis
Transfer Learning for Software Performance Analysis: An Exploratory AnalysisTransfer Learning for Software Performance Analysis: An Exploratory Analysis
Transfer Learning for Software Performance Analysis: An Exploratory Analysis
Pooyan Jamshidi
 
Architecting for Scale
Architecting for ScaleArchitecting for Scale
Architecting for Scale
Pooyan Jamshidi
 
Workload Patterns for Quality-driven Dynamic Cloud Service Configuration and...
Workload Patterns for Quality-driven Dynamic Cloud Service Configuration and...Workload Patterns for Quality-driven Dynamic Cloud Service Configuration and...
Workload Patterns for Quality-driven Dynamic Cloud Service Configuration and...
Pooyan Jamshidi
 
Fuzzy Control meets Software Engineering
Fuzzy Control meets Software EngineeringFuzzy Control meets Software Engineering
Fuzzy Control meets Software Engineering
Pooyan Jamshidi
 
Autonomic Resource Provisioning for Cloud-Based Software
Autonomic Resource Provisioning for Cloud-Based SoftwareAutonomic Resource Provisioning for Cloud-Based Software
Autonomic Resource Provisioning for Cloud-Based Software
Pooyan Jamshidi
 

More from Pooyan Jamshidi (15)

Learning LWF Chain Graphs: A Markov Blanket Discovery Approach
Learning LWF Chain Graphs: A Markov Blanket Discovery ApproachLearning LWF Chain Graphs: A Markov Blanket Discovery Approach
Learning LWF Chain Graphs: A Markov Blanket Discovery Approach
 
A Framework for Robust Control of Uncertainty in Self-Adaptive Software Conn...
 A Framework for Robust Control of Uncertainty in Self-Adaptive Software Conn... A Framework for Robust Control of Uncertainty in Self-Adaptive Software Conn...
A Framework for Robust Control of Uncertainty in Self-Adaptive Software Conn...
 
Machine Learning Meets Quantitative Planning: Enabling Self-Adaptation in Aut...
Machine Learning Meets Quantitative Planning: Enabling Self-Adaptation in Aut...Machine Learning Meets Quantitative Planning: Enabling Self-Adaptation in Aut...
Machine Learning Meets Quantitative Planning: Enabling Self-Adaptation in Aut...
 
Ensembles of Many Diverse Weak Defenses can be Strong: Defending Deep Neural ...
Ensembles of Many Diverse Weak Defenses can be Strong: Defending Deep Neural ...Ensembles of Many Diverse Weak Defenses can be Strong: Defending Deep Neural ...
Ensembles of Many Diverse Weak Defenses can be Strong: Defending Deep Neural ...
 
Transfer Learning for Performance Analysis of Machine Learning Systems
Transfer Learning for Performance Analysis of Machine Learning SystemsTransfer Learning for Performance Analysis of Machine Learning Systems
Transfer Learning for Performance Analysis of Machine Learning Systems
 
Transfer Learning for Performance Analysis of Configurable Systems: A Causal ...
Transfer Learning for Performance Analysis of Configurable Systems:A Causal ...Transfer Learning for Performance Analysis of Configurable Systems:A Causal ...
Transfer Learning for Performance Analysis of Configurable Systems: A Causal ...
 
Machine Learning meets DevOps
Machine Learning meets DevOpsMachine Learning meets DevOps
Machine Learning meets DevOps
 
Learning to Sample
Learning to SampleLearning to Sample
Learning to Sample
 
Architectural Tradeoff in Learning-Based Software
Architectural Tradeoff in Learning-Based SoftwareArchitectural Tradeoff in Learning-Based Software
Architectural Tradeoff in Learning-Based Software
 
Production-Ready Machine Learning for the Software Architect
Production-Ready Machine Learning for the Software ArchitectProduction-Ready Machine Learning for the Software Architect
Production-Ready Machine Learning for the Software Architect
 
Transfer Learning for Software Performance Analysis: An Exploratory Analysis
Transfer Learning for Software Performance Analysis: An Exploratory AnalysisTransfer Learning for Software Performance Analysis: An Exploratory Analysis
Transfer Learning for Software Performance Analysis: An Exploratory Analysis
 
Architecting for Scale
Architecting for ScaleArchitecting for Scale
Architecting for Scale
 
Workload Patterns for Quality-driven Dynamic Cloud Service Configuration and...
Workload Patterns for Quality-driven Dynamic Cloud Service Configuration and...Workload Patterns for Quality-driven Dynamic Cloud Service Configuration and...
Workload Patterns for Quality-driven Dynamic Cloud Service Configuration and...
 
Fuzzy Control meets Software Engineering
Fuzzy Control meets Software EngineeringFuzzy Control meets Software Engineering
Fuzzy Control meets Software Engineering
 
Autonomic Resource Provisioning for Cloud-Based Software
Autonomic Resource Provisioning for Cloud-Based SoftwareAutonomic Resource Provisioning for Cloud-Based Software
Autonomic Resource Provisioning for Cloud-Based Software
 

Recently uploaded

Opendatabay - Open Data Marketplace.pptx
Opendatabay - Open Data Marketplace.pptxOpendatabay - Open Data Marketplace.pptx
Opendatabay - Open Data Marketplace.pptx
Opendatabay
 
The affect of service quality and online reviews on customer loyalty in the E...
The affect of service quality and online reviews on customer loyalty in the E...The affect of service quality and online reviews on customer loyalty in the E...
The affect of service quality and online reviews on customer loyalty in the E...
jerlynmaetalle
 
一比一原版(UniSA毕业证书)南澳大学毕业证如何办理
一比一原版(UniSA毕业证书)南澳大学毕业证如何办理一比一原版(UniSA毕业证书)南澳大学毕业证如何办理
一比一原版(UniSA毕业证书)南澳大学毕业证如何办理
slg6lamcq
 
哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样
哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样
哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样
axoqas
 
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...
Subhajit Sahu
 
一比一原版(UofS毕业证书)萨省大学毕业证如何办理
一比一原版(UofS毕业证书)萨省大学毕业证如何办理一比一原版(UofS毕业证书)萨省大学毕业证如何办理
一比一原版(UofS毕业证书)萨省大学毕业证如何办理
v3tuleee
 
一比一原版(Deakin毕业证书)迪肯大学毕业证如何办理
一比一原版(Deakin毕业证书)迪肯大学毕业证如何办理一比一原版(Deakin毕业证书)迪肯大学毕业证如何办理
一比一原版(Deakin毕业证书)迪肯大学毕业证如何办理
oz8q3jxlp
 
1.Seydhcuxhxyxhccuuxuxyxyxmisolids 2019.pptx
1.Seydhcuxhxyxhccuuxuxyxyxmisolids 2019.pptx1.Seydhcuxhxyxhccuuxuxyxyxmisolids 2019.pptx
1.Seydhcuxhxyxhccuuxuxyxyxmisolids 2019.pptx
Tiktokethiodaily
 
一比一原版(TWU毕业证)西三一大学毕业证成绩单
一比一原版(TWU毕业证)西三一大学毕业证成绩单一比一原版(TWU毕业证)西三一大学毕业证成绩单
一比一原版(TWU毕业证)西三一大学毕业证成绩单
ocavb
 
原版制作(Deakin毕业证书)迪肯大学毕业证学位证一模一样
原版制作(Deakin毕业证书)迪肯大学毕业证学位证一模一样原版制作(Deakin毕业证书)迪肯大学毕业证学位证一模一样
原版制作(Deakin毕业证书)迪肯大学毕业证学位证一模一样
u86oixdj
 
SOCRadar Germany 2024 Threat Landscape Report
SOCRadar Germany 2024 Threat Landscape ReportSOCRadar Germany 2024 Threat Landscape Report
SOCRadar Germany 2024 Threat Landscape Report
SOCRadar
 
Criminal IP - Threat Hunting Webinar.pdf
Criminal IP - Threat Hunting Webinar.pdfCriminal IP - Threat Hunting Webinar.pdf
Criminal IP - Threat Hunting Webinar.pdf
Criminal IP
 
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23...
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23...Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23...
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23...
John Andrews
 
一比一原版(UIUC毕业证)伊利诺伊大学|厄巴纳-香槟分校毕业证如何办理
一比一原版(UIUC毕业证)伊利诺伊大学|厄巴纳-香槟分校毕业证如何办理一比一原版(UIUC毕业证)伊利诺伊大学|厄巴纳-香槟分校毕业证如何办理
一比一原版(UIUC毕业证)伊利诺伊大学|厄巴纳-香槟分校毕业证如何办理
ahzuo
 
一比一原版(BU毕业证)波士顿大学毕业证成绩单
一比一原版(BU毕业证)波士顿大学毕业证成绩单一比一原版(BU毕业证)波士顿大学毕业证成绩单
一比一原版(BU毕业证)波士顿大学毕业证成绩单
ewymefz
 
一比一原版(CBU毕业证)卡普顿大学毕业证成绩单
一比一原版(CBU毕业证)卡普顿大学毕业证成绩单一比一原版(CBU毕业证)卡普顿大学毕业证成绩单
一比一原版(CBU毕业证)卡普顿大学毕业证成绩单
nscud
 
Q1’2024 Update: MYCI’s Leap Year Rebound
Q1’2024 Update: MYCI’s Leap Year ReboundQ1’2024 Update: MYCI’s Leap Year Rebound
Q1’2024 Update: MYCI’s Leap Year Rebound
Oppotus
 
一比一原版(CU毕业证)卡尔顿大学毕业证成绩单
一比一原版(CU毕业证)卡尔顿大学毕业证成绩单一比一原版(CU毕业证)卡尔顿大学毕业证成绩单
一比一原版(CU毕业证)卡尔顿大学毕业证成绩单
yhkoc
 
Algorithmic optimizations for Dynamic Levelwise PageRank (from STICD) : SHORT...
Algorithmic optimizations for Dynamic Levelwise PageRank (from STICD) : SHORT...Algorithmic optimizations for Dynamic Levelwise PageRank (from STICD) : SHORT...
Algorithmic optimizations for Dynamic Levelwise PageRank (from STICD) : SHORT...
Subhajit Sahu
 
【社内勉強会資料_Octo: An Open-Source Generalist Robot Policy】
【社内勉強会資料_Octo: An Open-Source Generalist Robot Policy】【社内勉強会資料_Octo: An Open-Source Generalist Robot Policy】
【社内勉強会資料_Octo: An Open-Source Generalist Robot Policy】
NABLAS株式会社
 

Recently uploaded (20)

Opendatabay - Open Data Marketplace.pptx
Opendatabay - Open Data Marketplace.pptxOpendatabay - Open Data Marketplace.pptx
Opendatabay - Open Data Marketplace.pptx
 
The affect of service quality and online reviews on customer loyalty in the E...
The affect of service quality and online reviews on customer loyalty in the E...The affect of service quality and online reviews on customer loyalty in the E...
The affect of service quality and online reviews on customer loyalty in the E...
 
一比一原版(UniSA毕业证书)南澳大学毕业证如何办理
一比一原版(UniSA毕业证书)南澳大学毕业证如何办理一比一原版(UniSA毕业证书)南澳大学毕业证如何办理
一比一原版(UniSA毕业证书)南澳大学毕业证如何办理
 
哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样
哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样
哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样
 
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...
 
一比一原版(UofS毕业证书)萨省大学毕业证如何办理
一比一原版(UofS毕业证书)萨省大学毕业证如何办理一比一原版(UofS毕业证书)萨省大学毕业证如何办理
一比一原版(UofS毕业证书)萨省大学毕业证如何办理
 
一比一原版(Deakin毕业证书)迪肯大学毕业证如何办理
一比一原版(Deakin毕业证书)迪肯大学毕业证如何办理一比一原版(Deakin毕业证书)迪肯大学毕业证如何办理
一比一原版(Deakin毕业证书)迪肯大学毕业证如何办理
 
1.Seydhcuxhxyxhccuuxuxyxyxmisolids 2019.pptx
1.Seydhcuxhxyxhccuuxuxyxyxmisolids 2019.pptx1.Seydhcuxhxyxhccuuxuxyxyxmisolids 2019.pptx
1.Seydhcuxhxyxhccuuxuxyxyxmisolids 2019.pptx
 
一比一原版(TWU毕业证)西三一大学毕业证成绩单
一比一原版(TWU毕业证)西三一大学毕业证成绩单一比一原版(TWU毕业证)西三一大学毕业证成绩单
一比一原版(TWU毕业证)西三一大学毕业证成绩单
 
原版制作(Deakin毕业证书)迪肯大学毕业证学位证一模一样
原版制作(Deakin毕业证书)迪肯大学毕业证学位证一模一样原版制作(Deakin毕业证书)迪肯大学毕业证学位证一模一样
原版制作(Deakin毕业证书)迪肯大学毕业证学位证一模一样
 
SOCRadar Germany 2024 Threat Landscape Report
SOCRadar Germany 2024 Threat Landscape ReportSOCRadar Germany 2024 Threat Landscape Report
SOCRadar Germany 2024 Threat Landscape Report
 
Criminal IP - Threat Hunting Webinar.pdf
Criminal IP - Threat Hunting Webinar.pdfCriminal IP - Threat Hunting Webinar.pdf
Criminal IP - Threat Hunting Webinar.pdf
 
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23...
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23...Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23...
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23...
 
一比一原版(UIUC毕业证)伊利诺伊大学|厄巴纳-香槟分校毕业证如何办理
一比一原版(UIUC毕业证)伊利诺伊大学|厄巴纳-香槟分校毕业证如何办理一比一原版(UIUC毕业证)伊利诺伊大学|厄巴纳-香槟分校毕业证如何办理
一比一原版(UIUC毕业证)伊利诺伊大学|厄巴纳-香槟分校毕业证如何办理
 
一比一原版(BU毕业证)波士顿大学毕业证成绩单
一比一原版(BU毕业证)波士顿大学毕业证成绩单一比一原版(BU毕业证)波士顿大学毕业证成绩单
一比一原版(BU毕业证)波士顿大学毕业证成绩单
 
一比一原版(CBU毕业证)卡普顿大学毕业证成绩单
一比一原版(CBU毕业证)卡普顿大学毕业证成绩单一比一原版(CBU毕业证)卡普顿大学毕业证成绩单
一比一原版(CBU毕业证)卡普顿大学毕业证成绩单
 
Q1’2024 Update: MYCI’s Leap Year Rebound
Q1’2024 Update: MYCI’s Leap Year ReboundQ1’2024 Update: MYCI’s Leap Year Rebound
Q1’2024 Update: MYCI’s Leap Year Rebound
 
一比一原版(CU毕业证)卡尔顿大学毕业证成绩单
一比一原版(CU毕业证)卡尔顿大学毕业证成绩单一比一原版(CU毕业证)卡尔顿大学毕业证成绩单
一比一原版(CU毕业证)卡尔顿大学毕业证成绩单
 
Algorithmic optimizations for Dynamic Levelwise PageRank (from STICD) : SHORT...
Algorithmic optimizations for Dynamic Levelwise PageRank (from STICD) : SHORT...Algorithmic optimizations for Dynamic Levelwise PageRank (from STICD) : SHORT...
Algorithmic optimizations for Dynamic Levelwise PageRank (from STICD) : SHORT...
 
【社内勉強会資料_Octo: An Open-Source Generalist Robot Policy】
【社内勉強会資料_Octo: An Open-Source Generalist Robot Policy】【社内勉強会資料_Octo: An Open-Source Generalist Robot Policy】
【社内勉強会資料_Octo: An Open-Source Generalist Robot Policy】
 

Transfer Learning for Improving Model Predictions in Robotic Systems

  • 10. Prediction accuracy and model reliability [contour plots omitted, panels (a)-(d)] (a) Prediction error • Trade-off between using more source or target samples (b) Model reliability • Transfer learning helps lower the prediction uncertainty (c) Training overhead (d) Evaluation overhead • Appropriate for runtime usage 10
  • 11. Prediction accuracy • The model provides more accurate predictions as we exploit more data • Transfer learning may (i) boost initial performance, (ii) increase the learning speed, and (iii) lead to a more accurate final performance [figure omitted: "Three ways in which transfer might improve learning" (higher start, higher slope, higher asymptote), from Lisa Torrey and Jude Shavlik] 11
  • 14. Problem-solution overview • The robotic software is considered a highly configurable system. • The configuration of the robot influences its performance (as well as its energy usage). • Problem: there are many different parameters, making the configuration space high-dimensional and the influence of configuration on system performance difficult to understand. • Solution: learn a black-box performance model from measurements of the robot: performance_value = f(configuration) (a minimal sketch of fitting such a model follows below). • Challenge: measurements on the real robot are expensive (time consuming, requiring human resources, and risky on failure). • Our contribution: perform most measurements on the simulator (Gazebo) and take only a few samples from the real robot to learn a reliable and accurate performance model within a given experimental budget. 14
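As a rough illustration of what fitting such a black-box model can look like, here is a minimal sketch using Gaussian process regression, a natural choice here because it also reports predictive uncertainty, which is what the earlier model-reliability slides discuss. The configuration ranges, the sample count, and the synthetic measure() function are illustrative assumptions, not the actual CoBot setup.

```python
# Minimal sketch of fitting performance_value = f(configuration) from
# measured samples. All names, ranges, and the synthetic measure()
# function are illustrative assumptions, not the actual CoBot setup.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical 2-D configuration space, e.g., two localization parameters.
X_train = rng.uniform(low=[1.0, 1.0], high=[25.0, 25.0], size=(30, 2))

def measure(X):
    """Stand-in for an expensive robot measurement, e.g., CPU usage [%]."""
    return 10 + 0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.5, len(X))

y_train = measure(X_train)

# A Gaussian process provides both a prediction and its uncertainty.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_train, y_train)

mean, std = gp.predict(np.array([[12.0, 18.0]]), return_std=True)
print(f"predicted performance: {mean[0]:.1f} +/- {std[0]:.1f}")
```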
  • 15. Synthetic example [plots omitted; legend: source samples, target samples] (a) Source and target samples (b) Learning without transfer (c) Learning with transfer (d) Negative transfer (a code sketch of this example follows below) 15
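The synthetic example can be reproduced in spirit with a few lines. The sketch below combines many cheap source samples with a handful of target samples using a simple "source model plus learned correction" scheme; this is one standard way to realize such a transfer and is meant as an illustration under invented response functions, not as the paper's exact transfer model.

```python
# Sketch of the synthetic example: dense, cheap source samples plus a
# few expensive target samples. The response functions are invented.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
f_target = lambda x: 50.0 * np.sin(x / 2.0) + 100.0   # true (unknown) target
f_source = lambda x: f_target(x) - 20.0               # correlated, shifted source

X_src = rng.uniform(0, 14, (100, 1))                    # many simulator samples
X_tgt = np.array([[1.0], [3.0], [5.0], [9.0], [13.0]])  # few robot samples

kernel = RBF() + WhiteKernel()
gp_src = GaussianProcessRegressor(kernel).fit(X_src, f_source(X_src).ravel())

# (b) Without transfer: fit the few target samples directly.
gp_plain = GaussianProcessRegressor(kernel).fit(X_tgt, f_target(X_tgt).ravel())

# (c) With transfer: learn only the residual between the target
# measurements and the source model, then add it back when predicting.
# (d) If the source is unrelated, the residual is as hard to learn as
# the target itself, which is how negative transfer shows up here.
residual = f_target(X_tgt).ravel() - gp_src.predict(X_tgt)
gp_res = GaussianProcessRegressor(kernel).fit(X_tgt, residual)

X_q = np.linspace(0, 14, 50).reshape(-1, 1)
pred_with_transfer = gp_src.predict(X_q) + gp_res.predict(X_q)
pred_without_transfer = gp_plain.predict(X_q)
```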
  • 16. Integrating a cost model with transfer learning [diagram and plot omitted: pipeline of 1. Model Learning, 2. Predictive Model, 3. Cost Model, 4. Transfer Learning, producing an Improved Predictive Model from configuration parameters and samples; plot of throughput (ops/sec) and average write latency (µs) over samples, comparing TL4CO and BO4CO] We have only a limited experimental budget, and we need to spend it wisely (a toy allocation sketch follows below). 16
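To make the budget concern concrete, here is a toy cost model that splits a fixed measurement budget between cheap simulator runs and expensive robot runs. The costs, the fixed allocation rule, and the function name are assumptions for the sketch only; in the pipeline above, the actual cost model feeds the transfer learner rather than fixing the split up front.

```python
# Toy cost model: split a fixed experimental budget between cheap
# simulator samples and expensive robot samples. All numbers are
# illustrative assumptions.
def allocate_budget(budget, cost_source=1.0, cost_target=50.0,
                    target_fraction=0.3):
    """Spend about `target_fraction` of the budget on robot samples
    and the remainder on simulator samples."""
    n_target = int((budget * target_fraction) // cost_target)
    leftover = budget - n_target * cost_target
    n_source = int(leftover // cost_source)
    return n_source, n_target

n_src, n_tgt = allocate_budget(budget=500.0)
print(f"{n_src} simulator samples, {n_tgt} robot samples")  # 350 and 3
```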
  • 17. Assumption: source and target are correlated [plots of four synthetic source/target response functions omitted]: a) the source may be a shifted version of the target; b) the source may be a noisy version of the target; c) the source may differ from the target at some extreme positions in the configuration space; d) the source may be irrelevant to the target, in which case negative transfer will happen! (A sketch of these four cases follows below.) The slide also shows excerpts from the paper; deduplicated, they read:

    "Relevant to the software engineering community, several approaches tried different experimental designs for highly configurable software [14], and some even consider cost as an explicit factor to determine optimal sampling [26]. Recently, researchers have tried novel ways of sampling, with feedback embedded inside the process, where new samples are derived based on the information gain from the previous set of samples. Recursive Random Sampling (RRS) [33] integrates a restarting mechanism into random sampling to achieve high search efficiency. Smart Hill Climbing (SHC) [32] integrates importance sampling with Latin Hypercube Design (lhd). SHC estimates the local regression at each potential region, then searches toward the steepest descent direction. An approach based on direct search [35] forms a simplex in the parameter space from a number of samples and iteratively updates the simplex through a number of well-defined operations, including reflection, expansion, and contraction, to guide sample generation. Quick Optimization via Guessing (QOG) [23] speeds up the optimization process by exploiting heuristics to filter out sub-optimal configurations. Some recent work [34] exploited a characteristic of the response surface of the configurable software to learn Fourier-sparse functions from only a small sample size. Another approach also exploited this fact, but iteratively constructs a regression model representing performance influences in an active learning process. In some time-constrained environments (e.g., runtime decision making in a feedback loop for robots), it is important to select the sources purposefully.

    C. Problem formulation: Model learning. In order to introduce the concepts in our approach precisely and concisely, we define the model learning problem using mathematical notation. Let $X_i$ indicate the $i$-th configuration parameter, which ranges in a finite domain $Dom(X_i)$. In general, $X_i$ may either indicate (i) an integer variable, such as the number of iterative refinements in a localization algorithm, or (ii) a categorical variable, such as sensor names or binary options (e.g., local vs. global localization method). Therefore, the configuration space is mathematically a Cartesian product of the domains of the parameters of interest, $\mathbb{X} = Dom(X_1) \times \cdots \times Dom(X_d)$. A configuration $x$ resides in the design parameter space, $x \in \mathbb{X}$. A black-box response function $f : \mathbb{X} \to \mathbb{R}$ is used to build a performance model given some observations of the system performance under different settings, $D \subseteq \mathbb{X}$. In practice, though, such measurements may contain noise, i.e., $y_i = f(x_i) + \epsilon_i$ where $\epsilon_i \sim \mathcal{N}(0, \sigma_i)$. In other words, a performance model is simply a function (mapping) from the configuration space to a measurable performance metric that produces interval-scaled data (here we assume it produces real numbers)." 17
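The four source/target relationships in (a)-(d) are easy to mimic with synthetic response functions, which is useful for exercising a transfer learner before pointing it at a real robot. The shapes, noise levels, and function names below are illustrative assumptions.

```python
# Synthetic generators for the four source/target relationships (a)-(d).
import numpy as np

rng = np.random.default_rng(2)
f = lambda x: 1e4 * (np.sin(x) + 1.5)      # hypothetical target response

def source_shifted(x):      # (a) shifted version of the target
    return f(x) - 5e3

def source_noisy(x):        # (b) noisy version of the target
    return f(x) + rng.normal(0.0, 1e3, np.shape(x))

def source_extremes(x):     # (c) agrees except in extreme regions
    return np.where(x > 8.0, 0.5 * f(x), f(x))

def source_irrelevant(x):   # (d) unrelated: expect negative transfer
    return rng.uniform(0.0, 3e4, np.shape(x))
```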
  • 18. Level of correlation between source and target is important [boxplots of absolute percentage error [%] per source omitted]

        Source        s      s1     s2     s3     s4     s5     s6
        noise level   0      5      10     15     20     25     30
        corr. coeff.  0.98   0.95   0.89   0.75   0.54   0.34   0.19
        µ(pe)         15.34  14.14  17.09  18.71  33.06  40.93  46.75

    • Model becomes more accurate when the source is more related to the target
    • Even learning from a source with a small correlation is better than no transfer (a correlation-check sketch follows below) 18
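A cheap sanity check before committing to transfer is to measure source and target on a few shared configurations and compute the correlation coefficient, mirroring the corr. coeff. row in the table above. The paired measurements, threshold, and function name below are illustrative assumptions.

```python
# Correlation check between source and target measurements taken at
# the same configurations. Values and threshold are illustrative.
import numpy as np

def transfer_looks_useful(y_source, y_target, min_corr=0.2):
    """Pearson correlation between paired measurements; per the slide,
    even a weakly correlated source tends to beat no transfer at all."""
    corr = float(np.corrcoef(y_source, y_target)[0, 1])
    return corr, corr >= min_corr

# Hypothetical paired measurements on 7 shared configurations.
y_sim   = np.array([14.0, 16.2, 18.1, 20.4, 22.0, 23.5, 25.1])
y_robot = np.array([15.1, 17.0, 19.4, 21.2, 23.8, 24.9, 26.7])

corr, useful = transfer_looks_useful(y_sim, y_robot)
print(f"corr = {corr:.2f}, transfer recommended: {useful}")
```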