FDG-PET combined with LVQ and relevance learning:
• diagnosis of neurodegenerative diseases
• harmonization of multi-source data
Rick van Veen Sofie Lövdal
Michael Biehl
Bernoulli Institute for Mathematics,
Computer Science and Artificial
Intelligence
University of Groningen / NL
APPIS 2023
overview

brief recap:
Generalized Matrix Relevance Learning Vector Quantization (GMLVQ)

machine learning analysis:
FDG-PET brain scan images → subject scores derived from the 3D images

diagnosis of neurodegenerative diseases:
Alzheimer’s disease (AD), Parkinson’s disease (PD), and other disorders

reliable classification across centers?
suggested correction scheme for GMLVQ

outlook / open problems
prototype-based classification

• represent the data by one or several prototypes per class
• classify a query according to the label of the nearest prototype
• local decision boundaries acc. to (e.g.) Euclidean distances

[figure: query point “?” in an N-dim. feature space]

Learning Vector Quantization [Kohonen]
can be modified to
- differentiable distance / dissimilarity measures
- adaptive distances in Relevance Learning
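As an illustration of the nearest-prototype rule above, a minimal NumPy sketch (illustrative names, not code from the presentation):

import numpy as np

def nearest_prototype_predict(X, prototypes, proto_labels):
    # squared Euclidean distances between all queries and all prototypes
    d = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)
    # each query gets the label of its closest prototype
    return proto_labels[np.argmin(d, axis=1)]

# toy usage: one prototype per class, two 2D queries
prototypes = np.array([[0.0, 0.0], [3.0, 3.0]])
proto_labels = np.array([0, 1])
queries = np.array([[0.5, -0.2], [2.8, 3.1]])
print(nearest_prototype_predict(queries, prototypes, proto_labels))  # [0 1]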
Generalized Matrix Relevance LVQ (GMLVQ)

relevance learning: parameterized adaptive distance measure
instead of the naïve Euclidean one

generalized quadratic distance in LVQ [Schneider, Biehl, Hammer, 2009]:

    d(w, x) = (w − x)ᵀ Λ (w − x) = [ Ω (w − x) ]²   with   Λ = Ωᵀ Ω

training: adaptation of prototypes and distance measure,
guided by a suitable cost function w.r.t. Ω, w

interpretation: Λ summarizes
• the contribution of a single dimension
• the relevance of original features in the classifier
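A minimal sketch of the adaptive distance, as a generic illustration of the formula above (not the authors' implementation):

import numpy as np

def gmlvq_distance(w, x, Omega):
    # d(w, x) = [ Omega (w - x) ]^2 = (w - x)^T Lambda (w - x), Lambda = Omega^T Omega
    proj = Omega @ (w - x)
    return float(proj @ proj)

rng = np.random.default_rng(0)
Omega = rng.normal(size=(4, 4))      # adaptive matrix learned during training
Lambda = Omega.T @ Omega             # relevance matrix
print(np.diag(Lambda))               # per-feature relevance terms

w = rng.normal(size=4)               # a prototype
x = rng.normal(size=4)               # a data point
print(gmlvq_distance(w, x, Omega))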
Relevance Matrix

empirical observation / theory:
the relevance matrix becomes singular, dominated by very few eigenvectors

→ identifies a discriminative (low-dimensional) subspace
→ facilitates discriminative visualization / low-dimensional representation of datasets

[example figure: 3-class data set (iris flowers)]
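A short sketch of the resulting discriminative visualization: project the data onto the leading eigenvectors of Λ (illustrative only; scaling conventions may differ from the figures):

import numpy as np

def discriminative_projection(X, Lambda, n_dims=2):
    # eigen-decomposition of the (symmetric, PSD) relevance matrix
    eigvals, eigvecs = np.linalg.eigh(Lambda)        # ascending eigenvalues
    leading = eigvecs[:, ::-1][:, :n_dims]           # dominant eigenvectors
    scale = np.sqrt(np.clip(eigvals[::-1][:n_dims], 0.0, None))
    # coordinates in the low-dimensional discriminative subspace
    return (X @ leading) * scale

# usage: X = (n_samples, n_features) data, Lambda = Omega.T @ Omega from a trained GMLVQ system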
Analysis of FDG-PET image data for the
diagnosis of neurodegenerative disorders
R. van Veen, S.K. Meles, R.J. Renken, F.E. Reesink, W.H. Oertel,
A. Janzen, G.-J. de Vries, K.L. Leenders, M. Biehl
FDG-PET combined with learning vector quantization allows
classification of neurodegenerative diseases and reveals the
trajectory of idiopathic REM sleep behavior disorder
Computer Methods and Programs in Biomedicine 225: 107042 (2022)
data

FDG-PET 3D images: 18F-Fluorodeoxyglucose positron emission tomography,
imaging glucose uptake (http://glimpsproject.com)

subjects: Healthy Controls (HC), Parkinson’s Disease (PD), Alzheimer’s Disease (AD)

FDG-PET brain scans from 3 centers:
• CUN: Clínica Universidad de Navarra
• UGOSM: Univ. Genoa / IRCCS San Martino
• UMCG: Univ. Medical Center Groningen

Subjects per source:

Source   HC   PD   AD
CUN      19   49    -
UGOSM    44   58   55
UMCG     19   20   21
work flow

subjects × ~200,000 voxels (3D FDG-PET images, subject-specific anatomy)

preprocessing: keep high-intensity, low-noise voxels (masking), log-transform,
low-dimensional projections by SSM/PCA
(Scaled Subprofile Model / PCA based on a separate reference group of subjects)

→ subjects × ~50 subject scores (PCA)

classification (GMLVQ, SVM (lin.)) of subject scores with labels (condition);
novel test subjects → predicted label (?)
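A rough sketch of the score-extraction idea (log-transform, masking, PCA-style projection); the double-centering and all names here are simplifying assumptions, not the published SSM/PCA pipeline:

import numpy as np

def subject_scores(voxels, mask, n_components=50):
    # voxels: (n_subjects, ~200000) intensities; mask: boolean array that
    # keeps the high-intensity / low-noise voxels
    X = np.log(voxels[:, mask])
    X = X - X.mean(axis=1, keepdims=True)   # remove per-subject offset
    X = X - X.mean(axis=0, keepdims=True)   # remove group mean profile
    U, S, Vt = np.linalg.svd(X, full_matrices=False)   # PCA via SVD
    scores = U[:, :n_components] * S[:n_components]    # subject scores
    return scores, Vt[:n_components]                   # scores and patterns

# the scores (subjects x ~50) are then fed into GMLVQ / SVM classification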
results

performance evaluation:
averages over 10 randomized runs of 10-fold cross-validation;
accuracies, sensitivity / specificity,
Receiver Operating Characteristics (ROC) for binary classification

subjects from one center only (training and testing), e.g. UGOSM

[figure: ROC curves, GMLVQ and SVM (lin.), AUC ± 0.008]

also: good multi-class accuracies (within centers)
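A sketch of the evaluation protocol with scikit-learn, using a linear SVM as a stand-in classifier (GMLVQ is not part of scikit-learn and would have to be wrapped as a compatible estimator):

import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.svm import SVC

def repeated_cv_auc(X, y, n_splits=10, n_repeats=10, seed=0):
    # 10 randomized runs of stratified 10-fold cross-validation, ROC AUC
    clf = SVC(kernel="linear")
    cv = RepeatedStratifiedKFold(n_splits=n_splits, n_repeats=n_repeats,
                                 random_state=seed)
    scores = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
    return scores.mean(), scores.std()

# usage: X = subject scores of one center, y = binary labels (e.g. HC vs. PD)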
GMLVQ multi-class (UMCG)

HC: Healthy Control
PD: Parkinson’s Disease
DLB: Dementia with Lewy Bodies
AD: Alzheimer’s Disease

identify outliers (*) and clusters:
PD (1a): young patients, onset of PD and/or mild cognitive impairment
PD (1b): elderly PD patients, progressed to PD dementia later
AD (2a): AD subtype (n=3?)
AD (2b): AD patients with mild cognitive impairment
disease progression (iRBD)

idiopathic Rapid Eye Movement sleep behavior disorder (iRBD)

trajectory of patients in the {HC, PD, DLB, AD}-discriminative space,
post-hoc projections; iRBD scans were not used in the training process

RBD1 (baseline, first scan), RBD2 (follow-up, ca. 4 yrs.), RBD3 (in progress)

frequent trend: HC → PD/DLB
Center / source harmonization in GMLVQ training

R. van Veen, N.R. Bari Tamboli, S. Lövdal, S.K. Meles, R.J. Renken,
G.-J. de Vries, D. Arnaldi, S. Morbelli, M.C. Rodriguez Oroz,
P. Claver, K.L. Leenders, T. Villmann, M. Biehl
Subspace Corrected Relevance Learning with Application in Neuroimaging
in preparation (2023)

general problem in many application domains: different sources,
e.g. technical platforms, preprocessing pipelines, batch effects …
here: data from different medical centers
unbiased classifiers (ROC)
within center
(example: UGOSM)
across-center performance
(lin.)
e.g. PD vs. HC
unbiased classifiers (ROC)
within center
(example: UGOSM)
across centers: poor performance
across-center performance
(lin.)
(lin.)
(lin.)
(lin.)
identification of centers

classification experiment: can we infer the medical center?

HC only           Classifier   Sens. (%)   Spec. (%)   AUC (ROC)
CUN vs. UGOSM     SVM (lin.)       99.75       93.00        1.00
                  GMLVQ            97.30       91.00        0.99

possible explanations:
- center-specific (pre-)processing despite supposedly identical equipment and work flows
- significantly different patient cohorts (not the case in HC)
HC vs. PD, 3 centers

[figure: HC vs. PD data per center; labels: Genoa, Groningen, San Martino]
subspace corrected GMLVQ

Basic idea:

1) identify K discriminative directions spanning the subspace

       U = span( {u_k}, k = 1 … K )

   from a GMLVQ system trained for the discrimination of centers,
   ideally w.r.t. a separate cohort (e.g. matching HC)

2) train a second GMLVQ system for the actual target classification
   (discrimination of diseases), restricting the relevance matrix
   to the space orthogonal to the subspace U by the correction scheme

       Ω → Ω [ I − Σ_{k=1…K} u_k u_kᵀ ]

   applied, e.g., after each individual GMLVQ update

a clear-cut example problem (subset of data, 2 centers only):

UGOSM (Italy): 49 HC, 38 PD [early-stage PD]
CUN  (Spain):  20 HC, 68 PD [late-stage PD]

target classification: early PD vs. late PD
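A minimal sketch of the correction step, assuming the u_k (columns of U) are orthonormal center-discriminative directions from step 1; in practice it would be applied to Ω after each GMLVQ update (illustrative only, not the authors' code):

import numpy as np

def correct_omega(Omega, U):
    # Omega <- Omega [ I - sum_k u_k u_k^T ]: remove the center-specific
    # subspace spanned by the (orthonormal) columns of U
    P = np.eye(Omega.shape[1]) - U @ U.T
    return Omega @ P

# K = 1 special case of the example: u is the leading eigenvector of the
# relevance matrix of a GMLVQ system trained to separate the centers (HC only)
# Omega = correct_omega(Omega, u.reshape(-1, 1))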
early PD vs. late PD

First step: classify according to data source (CUN, UGOSM)
on the basis of Healthy Controls only; identify the
discriminative eigenvector u of the relevance matrix (1)

Second step: train the classifier “early PD” vs. “late PD” and
correct the relevance matrix,

       Ω → Ω [ I − u uᵀ ],

after each individual GMLVQ update step in (2)
early PD vs. late PD

uncorrected: acc. 98%        corrected: acc. 86%

uncorrected: seemingly favorable performance, mainly due to center-specific bias
(well-separated early/late stages ?)

corrected: purely center-specific variation removed;
consistent with continuous progression
outlook

- simplified realizations, variations of the idea ?
- availability of separate control groups ?
- suitable dimension of the center-specific subspace ?
- appropriate quality measure for evaluation ?
- iterative removal of center-discriminative directions

single-step training, combining target classification and
source separation with an orthogonality constraint:
A Learning Vector Quantization Architecture for Transfer Learning Based
Classification in Case of Multiple Sources by Means of Null-Space Evaluation
T. Villmann, D. Staps, J. Ravichandran, S. Saralajew, M. Biehl, M. Kaden
Proc. IDA 2022, Adv. in Intelligent Data Analysis XX, 354-363, Springer LNCS 13205

training of the target classification with a penalty term
w.r.t. the variance of the HC data [with Umberto Petruzzello]
Thanks!

[image: wolves vs. Huskies, generated with OpenAI DALL-E 2]