Information	in	the	Weights	
Mark	Chang
2020/06/10
Outline
• Traditional Machine Learning vs. Deep Learning
• Basic	Concepts	in	Information	Theory
• Information	in	the	Weights
Traditional Machine Learning vs. Deep Learning
• VC	Bound
• Generalization	in	Deep	Learning
• PAC-Bayesian	Bound	for	Deep	Learning
VC	Bound
• What causes over-fitting?
• Too many parameters -> over-fitting?
• Too many parameters -> high VC Dimension -> over-fitting?
• ... ?
[figure: three fits of a hypothesis h to training and testing data: too few parameters (under-fitting), adequate parameters (appropriate fitting), too many parameters (over-fitting)]
VC	Bound
• Over-fitting is caused by high VC Dimension
• For	a	given	dataset	(n	is	constant),	search	for	the	best	VC	Dimension
[figure: error vs. d (VC Dimension): training error falls as d grows, testing error rises past the best VC Dimension; at d = n the model can shatter the data, giving over-fitting]

ε(h) ≤ ε̂(h) + √( (8/n) · log( 4(2n)^d / δ ) )

n : number of training instances
d : VC Dimension (model complexity)
ε̂(h) : training error, ε(h) : testing error
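To get a feel for the numbers, the gap term of the bound can be computed directly. A minimal sketch; the function name and the choice δ = 0.05 are illustrative, not from the slides:

```python
import math

def vc_bound_gap(n, d, delta=0.05):
    """Gap term of the VC bound: sqrt((8/n) * log(4 * (2n)^d / delta)).

    Computed in log-space so that (2n)^d does not overflow for large d.
    """
    log_term = math.log(4.0) + d * math.log(2 * n) - math.log(delta)
    return math.sqrt(8.0 / n * log_term)

# n = 50,000 with d = 26M (the toy network later in the deck): the bound is vacuous
print(vc_bound_gap(50_000, 26_000_000))
# a modest VC Dimension gives a non-trivial gap
print(vc_bound_gap(50_000, 100))
```

With d far larger than n the gap term is far above 1, so the bound says nothing about the testing error.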
VC	Dimension
• VC	Dimension	of	linear	model:	
• O(W)
• W	=	number	of	parameters	
• VC	Dimension	of	fully-connected	
neural	networks:	
• O(LW	log	W)
• L	=	number	of	layers
• W	=	number	of	parameters
• VC Dimension is independent of the data distribution; it depends only on the model
Generalization	in	Deep	Learning
• Considering	a	toy	example:
[figure: a neural network (input: 780, hidden: 600, d ≈ 26M) trained on a dataset with n = 50,000]

d >> n, but testing error < 0.1
Generalization	in	Deep	Learning
• However, when you are solving your own problem ...

[figure: the same network (input: 780, hidden: 600) on your dataset (n = 50,000, 10 classes) gives testing error = 0.6: over-fitting! So you reduce the VC Dimension (hidden: 600 -> 200 -> ...)]
Generalization	in	Deep	Learning
[figure: error vs. d (VC Dimension): over-fitting grows toward d = n; over-parameterized models have extremely high VC Dimension]

ε(h) ≤ ε̂(h) + √( (8/n) · log( 4(2n)^d / δ ) )
Generalization	in	Deep	Learning
ICLR 2017
Generalization	in	Deep	Learning
[figure: a deep neural network (Inception) fits the original dataset (CIFAR), the same images with random labels, and random noise features; in all three cases the training error ε̂(h) ≈ 0: the model shatters the data]
Generalization	in	Deep	Learning
[figure: the same three settings with testing error: original CIFAR ε(h) ≈ 0.14, random labels ε(h) ≈ 0.9, random noise features ε(h) ≈ 0.9, while training error ε̂(h) ≈ 0 in every case]
Generalization	in	Deep	Learning
• Testing error depends on the data distribution
• However, the VC bound does not depend on the data distribution

[figure: original dataset ε(h) ≈ 0.14 vs. random labels / random noise features ε(h) ≈ 0.9]
Generalization	in	Deep	Learning
Generalization	in	Deep	Learning
• high	sharpness	->	high	testing	error
PAC-Bayesian	Bound	for	Deep	Learning
UAI	2017
PAC-Bayesian	Bound	for	Deep	Learning
• Deterministic model: a single hypothesis h, with error ε(h)
• Stochastic model (Gibbs classifier): sample a hypothesis h from a distribution Q; the Gibbs error is ε(Q) = E_{h∼Q}[ ε(h) ]
PAC-Bayesian	Bound	for	Deep	Learning
• Considering the sharpness of local minima:

single hypothesis h -> distribution of hypotheses Q

flat minimum: low ε̂(h), low ε̂(Q)
sharp minimum: low ε̂(h), high ε̂(Q)

KL( ε̂(Q) ‖ ε(Q) ) ≤ ( KL(Q‖P) + log(n/δ) ) / (n − 1)
PAC-Bayesian	Bound	for	Deep	Learning
• With probability at least 1 − δ, the following inequality (the PAC-Bayesian bound) holds:

KL( ε̂(Q) ‖ ε(Q) ) ≤ ( KL(Q‖P) + log(n/δ) ) / (n − 1)

n : number of training instances
KL(Q‖P) : KL divergence between the posterior Q (distribution of models after training) and the prior P (distribution of models before training)
KL( ε̂(Q) ‖ ε(Q) ) : KL divergence between training error and testing error
PAC-Bayesian	Bound	for	Deep	Learning
ε(Q) ≤ ε̂(Q) + √( ( KL(Q‖P) + log(n/δ) + 2 ) / (2n − 1) )

under-fitting: high ε̂(Q), low KL(Q‖P) -> low KL( ε̂(Q) ‖ ε(Q) )
appropriate fitting: moderate ε̂(Q), moderate KL(Q‖P) -> moderate KL( ε̂(Q) ‖ ε(Q) )
over-fitting: low ε̂(Q), high KL(Q‖P) -> high KL( ε̂(Q) ‖ ε(Q) )

[figure: for each regime, the fit to training / testing data and the distance between the prior P and the posterior Q]

KL( ε̂(Q) ‖ ε(Q) ) ≤ ( KL(Q‖P) + log(n/δ) ) / (n − 1)
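The right-hand side of the bound can be evaluated for a concrete prior and posterior. A minimal sketch, assuming 1-D Gaussian P and Q and δ = 0.05 (all illustrative choices, not from the slides):

```python
import math

def gaussian_kl(mu_q, sigma_q, mu_p, sigma_p):
    """KL(Q || P) between two 1-D Gaussians (in nats)."""
    return (math.log(sigma_p / sigma_q)
            + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2 * sigma_p ** 2)
            - 0.5)

def pac_bayes_rhs(kl_qp, n, delta=0.05):
    """(KL(Q||P) + log(n/delta)) / (n - 1): bound on KL(eps_hat(Q) || eps(Q))."""
    return (kl_qp + math.log(n / delta)) / (n - 1)

# a posterior that stays near the prior yields a tighter bound than one far away
near = pac_bayes_rhs(gaussian_kl(0.1, 1.0, 0.0, 1.0), n=50_000)
far = pac_bayes_rhs(gaussian_kl(5.0, 0.1, 0.0, 1.0), n=50_000)
print(near, far)
```

This mirrors the picture above: the further the posterior Q moves from the prior P, the looser the guarantee.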
PAC-Bayesian	Bound	for	Deep	Learning
• PAC-Bayesian	Bound	is	data-dependent
KL( ε̂(Q) ‖ ε(Q) ) ≤ ( KL(Q‖P) + log(n/δ) ) / (n − 1)

• High VC Dimension, but clean data -> low KL(Q‖P)
• High VC Dimension, and noisy data -> high KL(Q‖P)

[figure: with clean data the posterior Q stays close to the prior P; with noisy data Q ends far from P]
PAC-Bayesian	Bound	for	Deep	Learning
• Data: MNIST (binary classification; class 0: digits 0-4, class 1: digits 5-9)
• Model: 2- or 3-layer NN

Varying the width of the hidden layer (2-layer NN):
width                600     1200
Training             0.028   0.027
Testing              0.034   0.035
VC Bound             26m     56m
PAC-Bayesian Bound   0.161   0.179

Varying the number of hidden layers:
layers               2       3       4
Training             0.028   0.028   0.027
Testing              0.034   0.033   0.032
VC Bound             26m     66m     121m
PAC-Bayesian Bound   0.161   0.186   0.201

Original MNIST vs. random labels:
labels               original  random
Training             0.028     0.112
Testing              0.034     0.503
VC Bound             26m       26m
PAC-Bayesian Bound   0.161     1.352
PAC-Bayesian	Bound	for	Deep	Learning
• The PAC-Bayesian bound is data-dependent

clean data (original MNIST) -> small KL(Q‖P) -> small ε(Q)
noisy data (random labels) -> large KL(Q‖P) -> large ε(Q)

[figure: MNIST digits with their original labels vs. random labels]

KL( ε̂(Q) ‖ ε(Q) ) ≤ ( KL(Q‖P) + log(n/δ) ) / (n − 1)
Basic	Concepts	in	Information	Theory
• Entropy
• Joint	Entropy	
• Conditional	Entropy
• Mutual	Information
• Cross	Entropy
Entropy
• The	uncertainty	of	a	random	variable	X	
H(X) = −Σ_{x∈X} p(x) log p(x)

x        1    2
P(X=x)   0.5  0.5
H(X) = −2 × 0.5 log₂(0.5) = 1

x        1    2
P(X=x)   0.9  0.1
H(X) = −0.9 log₂(0.9) − 0.1 log₂(0.1) ≈ 0.469

x        1    2    3    4
P(X=x)   0.25 0.25 0.25 0.25
H(X) = −4 × 0.25 log₂(0.25) = 2
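The three examples can be checked in a few lines of Python. A minimal sketch; `entropy` is a hypothetical helper, using log base 2 as the slides do:

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H(X) = -sum p(x) * log2 p(x)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))                # 1.0
print(entropy([0.9, 0.1]))                # ≈ 0.469
print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
```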
Joint	Entropy	
• The	uncertainty	of	a	joint	distribution	involving	two	random	variables	
X,	Y	
H(X, Y) = −Σ_{x∈X, y∈Y} p(x, y) log p(x, y)

P(X,Y)  Y=1   Y=2
X=1     0.25  0.25
X=2     0.25  0.25

H(X, Y) = −4 × 0.25 log₂(0.25) = 2
Conditional	Entropy
• The	uncertainty	of	a	random	variable	Y	given	the	value	of		another	
random	variable	X
H(Y|X) = −Σ_{x∈X} p(x) Σ_{y∈Y} p(y|x) log p(y|x)

P(X,Y)  Y=1  Y=2
X=1     0.4  0.4
X=2     0.1  0.1
X & Y are independent:
H(X, Y) = 1.722, H(Y|X) = 1

P(X,Y)  Y=1  Y=2
X=1     0.4  0.1
X=2     0.1  0.4
Y is a stochastic function of X:
H(X, Y) = 1.722, H(Y|X) = 0.722

P(X,Y)  Y=1  Y=2
X=1     0.5  0
X=2     0    0.5
Y is a deterministic function of X:
H(X, Y) = 1, H(Y|X) = 0
Conditional	Entropy
Information	Diagram
[information diagram: H(X, Y) is the union of H(X) and H(Y); H(Y|X) is the part of H(Y) outside H(X)]

H(Y|X) = −Σ_{x∈X} p(x) Σ_{y∈Y} p(y|x) log p(y|x)
       = −Σ_{x∈X} p(x) Σ_{y∈Y} p(y|x) ( log p(x, y) − log p(x) )
       = −Σ_{x∈X, y∈Y} p(x, y) log p(x, y) + Σ_{x∈X} p(x) log p(x)
       = H(X, Y) − H(X)
Conditional	Entropy
P(X,Y)  Y=1  Y=2
X=1     0.4  0.4
X=2     0.1  0.1
X & Y are independent:
H(X, Y) = 1.722, H(X) = 0.722, H(Y) = 1
H(Y|X) = H(X, Y) − H(X) = 1 = H(Y)

P(X,Y)  Y=1  Y=2
X=1     0.4  0.1
X=2     0.1  0.4
Y is a stochastic function of X:
H(X, Y) = 1.722, H(X) = 1, H(Y) = 1
H(Y|X) = H(X, Y) − H(X) = 0.722

P(X,Y)  Y=1  Y=2
X=1     0.5  0
X=2     0    0.5
Y is a deterministic function of X:
H(X, Y) = 1, H(X) = 1, H(Y) = 1
H(Y|X) = H(X, Y) − H(X) = 0

[information diagrams: for independent X & Y, H(Y|X) = H(Y); for a deterministic function, H(Y|X) = 0]
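The identity H(Y|X) = H(X, Y) − H(X) makes the three tables easy to verify. A minimal sketch; the joint-table representation and helper names are illustrative:

```python
import math

def H(probs):
    """Entropy in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def cond_entropy(joint):
    """H(Y|X) = H(X,Y) - H(X), from a joint table {(x, y): p}."""
    px = {}
    for (x, _), p in joint.items():
        px[x] = px.get(x, 0.0) + p
    return H(joint.values()) - H(px.values())

independent = {(1, 1): 0.4, (1, 2): 0.4, (2, 1): 0.1, (2, 2): 0.1}
stochastic = {(1, 1): 0.4, (1, 2): 0.1, (2, 1): 0.1, (2, 2): 0.4}
deterministic = {(1, 1): 0.5, (2, 2): 0.5}
print(cond_entropy(independent))    # ≈ 1.0
print(cond_entropy(stochastic))     # ≈ 0.722
print(cond_entropy(deterministic))  # ≈ 0.0
```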
Mutual	Information
• The	mutual	dependence	between	two	variables	X,	Y
I(X; Y) = Σ_{x∈X, y∈Y} p(x, y) log( p(x, y) / ( p(x) p(y) ) )

P(X,Y)  Y=1  Y=2
X=1     0.4  0.4
X=2     0.1  0.1
X & Y are independent: I(X; Y) = 0

P(X,Y)  Y=1  Y=2
X=1     0.4  0.1
X=2     0.1  0.4
Y is a stochastic function of X: I(X; Y) = 0.278

P(X,Y)  Y=1  Y=2
X=1     0.5  0
X=2     0    0.5
Y is a deterministic function of X: I(X; Y) = 1
Mutual	Information
[information diagram: I(X; Y) is the overlap of H(X) and H(Y) inside H(X, Y)]

I(X; Y) = Σ_{x∈X, y∈Y} p(x, y) log( p(x, y) / ( p(x) p(y) ) )
        = Σ_{x∈X, y∈Y} p(x, y) ( −log p(x) − log p(y) + log p(x, y) )
        = −Σ_{x∈X} p(x) log p(x) − Σ_{y∈Y} p(y) log p(y) + Σ_{x∈X, y∈Y} p(x, y) log p(x, y)
        = H(X) + H(Y) − H(X, Y)
Mutual	Information
P(X,Y)  Y=1  Y=2
X=1     0.4  0.4
X=2     0.1  0.1
X & Y are independent:
H(X, Y) = 1.722, H(X) = 0.722, H(Y) = 1
I(X; Y) = H(X) + H(Y) − H(X, Y) = 0

P(X,Y)  Y=1  Y=2
X=1     0.4  0.1
X=2     0.1  0.4
Y is a stochastic function of X:
H(X, Y) = 1.722, H(X) = 1, H(Y) = 1
I(X; Y) = H(X) + H(Y) − H(X, Y) = 0.278

P(X,Y)  Y=1  Y=2
X=1     0.5  0
X=2     0    0.5
Y is a deterministic function of X:
H(X, Y) = 1, H(X) = 1, H(Y) = 1
I(X; Y) = H(X) + H(Y) − H(X, Y) = 1

[information diagrams: the overlap I(X; Y) grows from zero (independent) to all of H(Y) (deterministic)]
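The same joint tables confirm I(X; Y) = H(X) + H(Y) − H(X, Y). A minimal sketch with illustrative helper names:

```python
import math

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def mutual_info(joint):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), from a joint table {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return H(px.values()) + H(py.values()) - H(joint.values())

independent = {(1, 1): 0.4, (1, 2): 0.4, (2, 1): 0.1, (2, 2): 0.1}
stochastic = {(1, 1): 0.4, (1, 2): 0.1, (2, 1): 0.1, (2, 2): 0.4}
deterministic = {(1, 1): 0.5, (2, 2): 0.5}
print(mutual_info(independent))    # ≈ 0
print(mutual_info(stochastic))     # ≈ 0.278
print(mutual_info(deterministic))  # ≈ 1.0
```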
Entropy,	Joint	Entropy	,	Conditional	Entropy	&	
Mutual	Information
H(X)
H(Y ) H(Z)
H(X|Y, Z)
H(Y |X, Z) H(Z|X, Y )
I(X; Z|Y )I(X; Y |Z)
I(Y ; Z|X)
I(X; Y ; Z)
Cross	Entropy
Hp,q(X) = −Σ_{x∈X} p(x) log q(x)
        = −Σ_{x∈X} p(x) log p(x) + Σ_{x∈X} p(x) log( p(x) / q(x) )
        = Hp(X) + KL( p(X) ‖ q(X) )
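The decomposition above can be checked numerically. A minimal sketch, with illustrative helper names and an arbitrary pair of distributions p, q:

```python
import math

def entropy(p):
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def cross_entropy(p, q):
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

def kl(p, q):
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.9, 0.1]
q = [0.5, 0.5]
# the identity H_{p,q}(X) = H_p(X) + KL(p || q)
print(cross_entropy(p, q))    # 1.0
print(entropy(p) + kl(p, q))  # ≈ 1.0
```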
Information	in	the	Weights
• Cause	of	Over-fitting	
• Information	in	the	Weights	as a Regularizer
• Bounding	the	Information	in	the	Weights	
• Connection	with	Flat	Minimum
• Connection	with	PAC-Bayesian	Bound
• Experiments
Information	in	the	Weights
Cause	of	Over-fitting	
• Training	loss	(Cross-Entropy):
Hp,q(y|x, w) = −Ex,y[ log q(y|x, w) ]

p : probability density function of the data
q : probability density function predicted by the model
x : input features of the training data
y : labels of the training data
w : weights of the model
θ : latent parameters of the data distribution

Hp,q(y|x, w) = Hp(y|x, w) + Ex,w KL( p(y|x, w) ‖ q(y|x, w) )
Cause	of	Over-fitting	
[information diagram over Hp(x), Hp(y), Hp(w): Hp(y|x, w) is the uncertainty of y given w and x]
Cause	of	Over-fitting	
• lower Hp(y|x, w)
-> lower uncertainty of y given w and x
-> lower training error
• e.g., given an input x and a fixed w:

lower Hp(y|x, w):          higher Hp(y|x, w):
p(y=1|x, w) = 0.9          p(y=1|x, w) = 0.3
p(y=2|x, w) = 0.1          p(y=2|x, w) = 0.3
p(y=3|x, w) = 0.0          p(y=3|x, w) = 0.2
p(y=4|x, w) = 0.0          p(y=4|x, w) = 0.2

Hp,q(y|x, w) = Hp(y|x, w) + Ex,w KL( p(y|x, w) ‖ q(y|x, w) )
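The two predictive distributions above differ exactly in their entropy, which can be computed directly (a sketch; log base 2, helper names are illustrative):

```python
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

confident = [0.9, 0.1, 0.0, 0.0]  # lower H_p(y|x, w)
spread = [0.3, 0.3, 0.2, 0.2]     # higher H_p(y|x, w)
print(entropy(confident))  # ≈ 0.469
print(entropy(spread))     # ≈ 1.971
```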
Cause	of	Over-fitting	
Hp,q(y|x, w) = Hp(y|x, w) + Ex,w KL( p(y|x, w) ‖ q(y|x, w) )

Hp(y|x, w) = Hp(y|x, θ) + I(y; θ|x, w) − I(y; w|x, θ)

θ : latent parameters of the (training & testing) data distribution
Cause	of	Over-fitting	
[information diagram over Hp(y|x), Hp(ytest|xtest), Hp(θ):
Hp(y|x, θ) : noise and outliers in the training data
Ip(y; θ|x) : useful information in the training data
other regions: noise and outliers in the testing data, and normal samples not in the training data]

θ : latent parameters of the (training & testing) data distribution
Cause	of	Over-fitting	
Hp(y|x, w) = Hp(y|x, θ) + I(y; θ|x, w) − I(y; w|x, θ)

Hp(y|x, w) : the uncertainty of y given w and x
Hp(y|x, θ) : noisy information and outliers in the training data
I(y; θ|x, w) : useful information not learned by the weights
I(y; w|x, θ) : noisy information and outliers learned by the weights
Cause	of	Over-fitting	
Hp(y|x, θ) : noisy information and outliers in the training data

Hp(y|x, w) = Hp(y|x, θ) + I(y; θ|x, w) − I(y; w|x, θ)
Cause	of	Over-fitting	
• lower Hp(y|x, θ) -> less noise and fewer outliers in the training data

[figure: a clean mapping x -> y (lower Hp(y|x, θ)) vs. a noisy mapping (higher Hp(y|x, θ))]

Hp(y|x, w) = Hp(y|x, θ) + I(y; θ|x, w) − I(y; w|x, θ)
Cause	of	Over-fitting	
Hp(y|x, w) = Hp(y|x, θ) + I(y; θ|x, w) − I(y; w|x, θ)

I(y; θ|x, w) : useful information not learned by the weights
Cause	of	Over-fitting	
• lower I(y; θ|x, w)
-> more useful information learned by the weights
-> lower Hp(ytest|xtest, w)
-> lower testing error

I(y; θ|x, w₂) < I(y; θ|x, w₁)  ⇒  Hp(ytest|xtest, w₂) < Hp(ytest|xtest, w₁)

Hp(y|x, w) = Hp(y|x, θ) + I(y; θ|x, w) − I(y; w|x, θ)
Cause	of	Over-fitting	
Hp(y|x, w) = Hp(y|x, ✓) + I(y; ✓|x, w) I(y; w|x, ✓)
Hp(x)
Hp(y)
Hp(✓)
Hp(w)
noisy information
and	outlier learned	by	
weights
I(y; w|x, ✓)
w
• Cause	of	over	fitting:	weights	memorize	the noisy informationin	training	data.
Cause	of	Over-fitting	
• High VC Dimension, but clean data -> little noise to memorize
• High VC Dimension, and noisy data -> much noise to memorize

[figure: fits to training and testing data for the two cases]

Hp(y|x, w) = Hp(y|x, θ) + I(y; θ|x, w) − I(y; w|x, θ)
Information in the Weights as a Regularizer
• θ is unknown, so I(y; w|x, θ) cannot be computed
• I(D; w), the information in the weights, is an upper bound of I(y; w|x, θ):

I(y; w|x, θ) ≤ I(y, x; w|θ) = I(D; w|θ) ≤ I(D; w)

[information diagram over Hp(x), Hp(y), Hp(θ), Hp(w), with Hp(x, y) = Hp(D)]
Information	in	the	Weights	as a Regularizer
• The actual data distribution p is unknown
• Estimate Ip(D; w) by Iq(D; w): Ip(D; w) ≈ Iq(D; w)
• New loss function, with Iq(D; w) as a regularizer:

L(q(w|D)) = Hp,q(y|x, w) + Iq(D; w)
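A minimal sketch of such a loss, assuming a diagonal-Gaussian posterior q(w|D) and a standard-normal prior, so that Iq(D; w) is replaced by its upper bound KL(q(w|D) ‖ p(w)); the helper names and the trade-off coefficient `beta` are illustrative, not from the slides:

```python
import math

def gaussian_kl_to_std_normal(mu, sigma):
    """KL( N(mu_i, sigma_i^2) || N(0, 1) ), summed over independent weights."""
    return sum(-math.log(s) + (s ** 2 + m ** 2) / 2 - 0.5
               for m, s in zip(mu, sigma))

def regularized_loss(cross_entropy, mu, sigma, beta=1.0):
    """L(q(w|D)) = H_{p,q}(y|x, w) + beta * KL(q(w|D) || p(w)),
    an upper-bound surrogate for the information-in-the-weights objective."""
    return cross_entropy + beta * gaussian_kl_to_std_normal(mu, sigma)

# weights that stay close to the prior are penalized less
low_info = regularized_loss(0.3, mu=[0.1, -0.1], sigma=[1.0, 1.0])
high_info = regularized_loss(0.3, mu=[3.0, -4.0], sigma=[0.1, 0.1])
print(low_info, high_info)
```

The regularizer penalizes posteriors that encode many bits about the dataset, which is exactly the memorization term I(y; w|x, θ) the previous slides identify as the cause of over-fitting.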
Connection	with	Flat	Minimum
• A flat minimum has low information in the weights:

Iq(w; D) ≤ (1/2) K [ log ‖ŵ‖₂² + log ‖H‖∗ − K log(K²/2) ]

‖H‖∗ : nuclear norm of the Hessian at the local minimum

flat minimum -> low nuclear norm of the Hessian -> low information
Connection	with	PAC-Bayesian	Bound
• Given a prior distribution p(w), we have:

Iq(w; D) = ED KL( q(w|D) ‖ q(w) )
         = ED KL( q(w|D) ‖ p(w) ) − KL( q(w) ‖ p(w) )
         ≤ ED KL( q(w|D) ‖ p(w) )

q(w|D) : distribution of weights after training on dataset D (posterior)
p(w) : distribution of weights before training (prior)
Connection	with	PAC-Bayesian	Bound
• Loss function with the regularizer (using Ip(D; w) ≈ Iq(D; w)):

L(q(w|D)) = Hp,q(y|x, w) + Iq(D; w)
          ≤ Hp,q(y|x, w) + ED KL( q(w|D) ‖ p(w) )

• PAC-Bayesian bound:

ED[ Ltest(q(w|D)) ] ≤ ( Hp,q(y|x, w) + Lmax · ED[ KL( q(w|D) ‖ p(w) ) ] ) / ( n(1 − 1/2) )

Ltest : test error of the network with weights q(w|D)
Lmax : maximum per-sample loss
Experiments
• Random	Labels
[figure: information complexity Iq(D; w) vs. dataset size, with random labels]

L(q(w|D)) = Hp,q(y|x, w) + Iq(D; w)
Experiments
• Real labels

[figure: information complexity Iq(D; w) vs. dataset size]

L(q(w|D)) = Hp,q(y|x, w) + Iq(D; w)
Experiments
• Information in the weights vs. percentage of corrupted labels
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
Abida Shariff
 
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Thierry Lestable
 
The Future of Platform Engineering
The Future of Platform EngineeringThe Future of Platform Engineering
The Future of Platform Engineering
Jemma Hussein Allen
 
DevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA ConnectDevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA Connect
Kari Kakkonen
 
Leading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdfLeading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdf
OnBoard
 
Accelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish CachingAccelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish Caching
Thijs Feryn
 
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdfFIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance
 
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
To Graph or Not to Graph Knowledge Graph Architectures and LLMsTo Graph or Not to Graph Knowledge Graph Architectures and LLMs
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
Paul Groth
 
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdfFIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance
 
Neuro-symbolic is not enough, we need neuro-*semantic*
Neuro-symbolic is not enough, we need neuro-*semantic*Neuro-symbolic is not enough, we need neuro-*semantic*
Neuro-symbolic is not enough, we need neuro-*semantic*
Frank van Harmelen
 
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
James Anderson
 
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Ramesh Iyer
 
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
Product School
 
GraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge GraphGraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge Graph
Guy Korland
 
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdfSmart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
91mobiles
 

Recently uploaded (20)

FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdfFIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
 
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
 
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
 
Designing Great Products: The Power of Design and Leadership by Chief Designe...
Designing Great Products: The Power of Design and Leadership by Chief Designe...Designing Great Products: The Power of Design and Leadership by Chief Designe...
Designing Great Products: The Power of Design and Leadership by Chief Designe...
 
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptxIOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
 
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
 
The Future of Platform Engineering
The Future of Platform EngineeringThe Future of Platform Engineering
The Future of Platform Engineering
 
DevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA ConnectDevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA Connect
 
Leading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdfLeading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdf
 
Accelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish CachingAccelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish Caching
 
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdfFIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdf
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
 
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
To Graph or Not to Graph Knowledge Graph Architectures and LLMsTo Graph or Not to Graph Knowledge Graph Architectures and LLMs
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
 
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdfFIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
FIDO Alliance Osaka Seminar: Passkeys at Amazon.pdf
 
Neuro-symbolic is not enough, we need neuro-*semantic*
Neuro-symbolic is not enough, we need neuro-*semantic*Neuro-symbolic is not enough, we need neuro-*semantic*
Neuro-symbolic is not enough, we need neuro-*semantic*
 
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
 
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
 
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
 
GraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge GraphGraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge Graph
 
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdfSmart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
 

Information in the Weights