Transfer Learning
Bushra Jbawi, Noor Alhuda Espil
Machine Learning
A psychological point of view
Transfer Learning is the dependency
of human conduct, learning, or
performance on prior experience
Machine Learning community
point of view
Transfer learning attempts to develop
methods to transfer knowledge
learned in one or more source tasks
and use it to improve learning in a
related target task
[Diagram: source-task knowledge (given) flows into learning from target-task data (learned)]
Learn a new model:
1. Collect new labeled data ($$)
2. Build a new model
...or reuse and adapt an already learned model!
Example: Image Classification
[Diagram: features extracted for Task One feed Model One]
Example: Image Classification (cont.)
[Diagram: features extracted for Task Two feed Model Two]
Example: Cars vs. Motorcycles
[Diagram: the features learned for Task One are reused for the new task]
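The reuse step in this example can be sketched in code. This is a minimal toy (the features, data, and labels below are hypothetical, not the slide's actual models): a feature extractor "learned" on Task One is kept frozen, and only a small logistic-regression head is trained for the new task.

```python
import math

def source_features(x):
    # Pretend feature extractor learned on Task One (hypothetical):
    # maps a raw input (a pair of numbers) to a 2-dim feature vector.
    return [x[0] + x[1], x[0] - x[1]]

def train_head(data, lr=0.1, epochs=200):
    # Train a logistic-regression head on top of the frozen features.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            f = source_features(x)           # frozen: never updated
            z = w[0]*f[0] + w[1]*f[1] + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                        # gradient of the log-loss
            w[0] -= lr * g * f[0]
            w[1] -= lr * g * f[1]
            b -= lr * g
    return w, b

# Hypothetical target task, standing in for cars (1) vs. motorcycles (0).
data = [((1.0, 2.0), 1), ((2.0, 1.0), 1), ((-1.0, -2.0), 0), ((-2.0, -1.0), 0)]
w, b = train_head(data)

def predict(x):
    f = source_features(x)
    return 1 if w[0]*f[0] + w[1]*f[1] + b > 0 else 0

print([predict(x) for x, _ in data])  # [1, 1, 0, 0]
```

Only the head's weights are updated; the feature extractor is shared across tasks, which is the essence of the reuse arrow in the diagram.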
Transfer Learning Goal
Improve learning in the target task by leveraging knowledge from the source task, as judged by three common measures:
1. Initial performance: a higher start
2. Amount of time: a higher slope (faster learning)
3. Final performance: a higher asymptote
[Plot: learning curves with transfer vs. without transfer]
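The three measures can be read off a pair of learning curves. The numbers below are illustrative, not experimental results:

```python
# Accuracy after each training step, with and without transfer (made-up curves).
with_transfer    = [0.60, 0.75, 0.85, 0.90, 0.92, 0.93]
without_transfer = [0.20, 0.40, 0.60, 0.75, 0.85, 0.90]

# 1. Initial performance: accuracy before much target-task training.
higher_start = with_transfer[0] > without_transfer[0]

# 2. Amount of time: training steps needed to reach a fixed accuracy.
def steps_to(curve, threshold):
    return next(i for i, acc in enumerate(curve) if acc >= threshold)

faster = steps_to(with_transfer, 0.85) < steps_to(without_transfer, 0.85)

# 3. Final performance: the level the curve flattens out at.
higher_asymptote = with_transfer[-1] > without_transfer[-1]

print(higher_start, faster, higher_asymptote)  # True True True
```

A transfer method can succeed on any subset of these: for instance, a higher start with the same asymptote still counts as useful transfer.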
Transfer in Inductive Learning
Works by allowing source-task knowledge to affect the target task's inductive bias (a set of assumptions about the true distribution of the training data).
◦ Concerned with improving the speed with which a model is learned.
◦ Concerned with improving its generalization capability.
Transfer in Inductive Learning
Inductive Transfer:
◦ The target-task inductive bias is chosen or adjusted based on the source-task knowledge.
◦ How this is done depends on which inductive learning algorithm is used to learn the source and target tasks.
[Diagram: plain inductive learning searches all allowed hypotheses; inductive transfer narrows the search to a smaller set of allowed hypotheses]
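This narrowing of the hypothesis search can be sketched with a toy hypothesis space of integer thresholds (everything below is hypothetical, chosen only to make the picture concrete):

```python
def errors(threshold, data):
    # Count mistakes of the rule "label 1 iff x >= threshold".
    return sum((x >= threshold) != y for x, y in data)

# Tiny target-task dataset: small inputs are class 0, large inputs class 1.
target_data = [(1, 0), (3, 0), (6, 1), (8, 1)]

# Plain inductive learning: search every allowed hypothesis.
all_hypotheses = range(0, 11)
best_plain = min(all_hypotheses, key=lambda t: errors(t, target_data))

# Inductive transfer: the source task suggested the threshold is near 5,
# so the search is biased toward a small neighborhood of that value.
biased_hypotheses = range(4, 7)
best_transfer = min(biased_hypotheses, key=lambda t: errors(t, target_data))

print(best_plain, best_transfer)  # both reach zero error; transfer searched fewer
```

Both searches find a perfect threshold, but the transferred bias examined 3 candidates instead of 11, which is exactly the speed benefit the slide describes.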
Transfer in Inductive Learning
Bayesian Transfer:
◦ Bayesian learning uses a prior
distribution to smooth the estimates
from training data.
◦ Bayesian transfer may provide a more
informative prior from source-task
knowledge.
[Diagram: prior distribution + data = posterior distribution, shown for plain Bayesian learning vs. Bayesian transfer]
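The "prior + data = posterior" picture can be worked through with a Beta-Bernoulli model, estimating a coin's heads probability. The counts below are made up; the point is that an informative prior from a related source task helps when target-task data is scarce:

```python
def posterior_mean(heads, tails, prior_heads, prior_tails):
    # Posterior mean of a Beta-Bernoulli model (standard conjugate update):
    # Beta(a, b) prior plus observed counts gives Beta(a+heads, b+tails).
    return (heads + prior_heads) / (heads + tails + prior_heads + prior_tails)

# Only two target-task observations: 1 head, 1 tail.
flat     = posterior_mean(1, 1, 1, 1)   # uninformative Beta(1, 1) prior
informed = posterior_mean(1, 1, 8, 2)   # prior counts taken from a source coin

print(flat, informed)  # 0.5 0.75
```

With two data points the flat prior can only say 0.5, while the source-informed prior already pulls the estimate toward the source coin's bias; as target data accumulates, the data term dominates and the two estimates converge.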
Transfer in Inductive Learning
Hierarchical Transfer:
◦ Solutions to simple tasks are combined
or provided as tools to produce a
solution to a more complex task.
◦ Can involve many tasks.
◦ The target task might use entire source-
task solutions as parts of its own.
[Diagram: Line and Curve combine into Circle and Surface, which in turn compose a Pipe]
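The composition idea in the diagram can be sketched with stand-in geometric "solutions"; every function below is a hypothetical toy, not a real modeling task:

```python
import math

def line(p0, p1, n=2):
    # Simple source-task solution: n points along a segment.
    return [(p0[0] + (p1[0] - p0[0]) * t / (n - 1),
             p0[1] + (p1[1] - p0[1]) * t / (n - 1)) for t in range(n)]

def circle(center, r, n=8):
    # Intermediate task: a closed curve, built from the same point-list idea.
    return [(center[0] + r * math.cos(2 * math.pi * k / n),
             center[1] + r * math.sin(2 * math.pi * k / n)) for k in range(n)]

def pipe(r, heights):
    # Complex task reusing the circle solution whole: one ring per height.
    return {z: circle((0.0, 0.0), r) for z in heights}

rings = pipe(1.0, [0.0, 1.0, 2.0])
print(len(rings), len(rings[0.0]))  # 3 rings of 8 points each
```

The key property mirrored here is that `pipe` never re-solves the circle subtask; it calls the existing solution as a component, just as hierarchical transfer reuses entire source-task solutions.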
Transfer in Reinforcement Learning
◦ Starting-Point Methods
◦ Imitation Methods
◦ Hierarchical Methods
◦ Alteration Methods
◦ New RL Algorithms
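As one concrete illustration, a starting-point method initializes the target task's solution (e.g. a Q-table) from the source task instead of from scratch, so the agent's first greedy policy is already informed. A toy sketch with made-up Q-values:

```python
STATES, ACTIONS = 3, 2

# Q-values assumed to have been learned on a related source task,
# where action 0 was consistently better (hypothetical numbers).
source_Q = {(s, a): (1.0 if a == 0 else 0.2)
            for s in range(STATES) for a in range(ACTIONS)}

def make_target_Q(transfer):
    # Starting-point method: copy the source values; otherwise start at zero.
    if transfer:
        return dict(source_Q)
    return {(s, a): 0.0 for s in range(STATES) for a in range(ACTIONS)}

def greedy_action(Q, s):
    return max(range(ACTIONS), key=lambda a: Q[(s, a)])

warm = make_target_Q(transfer=True)
print([greedy_action(warm, s) for s in range(STATES)])  # [0, 0, 0]
```

From this starting point the usual RL updates (e.g. Q-learning) proceed unchanged; only the initialization differs, which is what distinguishes starting-point methods from the other categories above.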
AVOIDING NEGATIVE TRANSFER
If a transfer method actually decreases performance, then negative transfer has occurred.
AVOIDING NEGATIVE TRANSFER

Rejecting Bad Information: reject harmful source-task knowledge while learning the target task. The goal is to minimize the impact of bad information, so that transfer performance is at least no worse than learning the target task without transfer.

Choosing a Source Task: the problem becomes choosing the best source task. Transfer methods without much protection may still be effective, as long as the best source task is at least a decent match.

Modeling Task Similarity: explicitly model relationships between tasks and include this information in the transfer method. This can lead to better use of source-task knowledge and decrease the risk of negative transfer.
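These safeguards can be combined in a simple selection loop: score each candidate source task on held-out target data, and reject transfer entirely if none beats learning from scratch. The task names and scores below are illustrative, not real experiments:

```python
def choose_source(candidate_scores, scratch_score):
    # Pick the best-scoring source task, or None if transfer would hurt.
    best_task = max(candidate_scores, key=candidate_scores.get)
    if candidate_scores[best_task] <= scratch_score:
        return None  # negative transfer: fall back to learning without transfer
    return best_task

# Hypothetical validation scores of transferring from each candidate source.
scores = {"task_A": 0.78, "task_B": 0.91, "task_C": 0.65}

print(choose_source(scores, scratch_score=0.80))  # task_B
print(choose_source(scores, scratch_score=0.95))  # None
```

The `None` branch enforces the "no worse than learning without transfer" guarantee described under Rejecting Bad Information, at the cost of evaluating each candidate.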
AUTOMATICALLY MAPPING TASKS
When an agent applies knowledge from one task in another, it is often necessary to map the characteristics of one task onto those of the other to specify correspondences.
[Diagram: each source-task property is mapped to a corresponding target-task property]
AUTOMATICALLY MAPPING TASKS

Equalizing Task Representations: it may be possible to avoid the mapping problem altogether by ensuring that the source and target tasks have the same representation.

Trying Multiple Mappings: one straightforward way of solving the mapping problem is to generate several possible mappings and allow the target-task agent to try them all.

Mapping by Analogy: some methods construct a mapping by analogy; they examine the characteristics of the source and target tasks and find elements that correspond.
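The "trying multiple mappings" strategy can be sketched as a search over candidate property mappings. The property names and the scoring function below are made up; in practice the score would come from actually running the transfer method under each mapping:

```python
from itertools import permutations

source_props = ["speed", "size"]
target_props = ["velocity", "mass", "color"]

def score(mapping):
    # Hypothetical stand-in for target-task performance under a mapping.
    good = {("speed", "velocity"): 0.5, ("size", "mass"): 0.4}
    return sum(good.get(pair, 0.0) for pair in mapping.items())

# Candidate mappings: each source property to a distinct target property.
candidates = [dict(zip(source_props, perm))
              for perm in permutations(target_props, len(source_props))]

best = max(candidates, key=score)
print(best)  # {'speed': 'velocity', 'size': 'mass'}
```

Enumerating all injective mappings is only feasible for small property sets, which is why the analogy-based and representation-equalizing approaches above exist.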
Conclusion
Transfer learning
 has become a sizeable subfield in machine learning.
 is seen as an important aspect of human learning.
 can make machine learning more efficient.
 faces some challenges that must be addressed.
Thanks!
