Information	in	the	Weights	
Mark	Chang
2020/06/19
Outline
• Traditional Machine Learning vs. Deep Learning
• Basic	Concepts	in	Information	Theory
• Information	in	the	Weights
Traditional Machine Learning vs. Deep Learning
• VC	Bound
• Generalization	in	Deep	Learning
• PAC-Bayesian	Bound	for	Deep	Learning
VC	Bound
• What causes over-fitting?
• Too many parameters -> over-fitting?
• Too many parameters -> high VC Dimension -> over-fitting?
• … ?
[Figure: three fits of a hypothesis h — too few parameters (under-fitting), adequate parameters (appropriate fitting), and too many parameters (over-fitting), shown against the training data and testing data]
VC	Bound
• Over-fitting is caused by high VC Dimension
• For	a	given	dataset	(n	is	constant),	search	for	the	best	VC	Dimension
[Figure: error vs. VC Dimension d — the training error decreases with d while the testing error is U-shaped; the best VC Dimension lies before the over-fitting region, and at d = n the model can shatter the training set]

VC Bound (with probability at least 1-δ):

$\epsilon(h) \;\le\; \hat\epsilon(h) + \sqrt{\frac{8}{n}\log\frac{4(2n)^d}{\delta}}$

ε(h) : testing error, ε̂(h) : training error, n : number of training instances, d : VC Dimension (model complexity)
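As a quick illustration (not from the slides) of how this bound behaves, the sketch below evaluates the complexity term for a few VC Dimensions at n = 50,000; the confidence level δ = 0.05 and the function name are assumptions.

```python
import math

def vc_bound_term(n, d, delta=0.05):
    """Complexity term of the VC bound: sqrt((8/n) * log(4 * (2n)^d / delta)).

    Computed in log-space as log(4) + d*log(2n) - log(delta)
    to avoid overflowing (2n)^d for large d.
    """
    log_term = math.log(4.0) + d * math.log(2 * n) - math.log(delta)
    return math.sqrt(8.0 / n * log_term)

n = 50_000                      # training instances, as in the toy example below
for d in (100, 1_000, 50_000, 26_000_000):
    print(f"d = {d:>11,d}: bound term = {vc_bound_term(n, d):8.3f}")
# For d >> n the term is far above 1, so the bound says nothing about the testing error.
```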
VC	Dimension
• VC	Dimension	of	linear	model:	
• O(W)
• W	=	number	of	parameters	
• VC	Dimension	of	fully-connected	
neural	networks:	
• O(LW	log	W)
• L	=	number	of	layers
• W	=	number	of	parameters
• The VC Dimension is independent of the data distribution; it depends only on the model
Generalization	in	Deep	Learning
• Consider a toy example: a fully-connected neural network (input: 780, hidden: 600) with VC Dimension d ≈ 26M, trained on a dataset with n = 50,000
• d >> n, but the testing error is < 0.1
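As a rough, order-of-magnitude check of that d ≈ 26M figure (not from the slides): the O(LW log W) scaling has an unspecified constant, and the 10-unit output layer is an assumption of this sketch, so it only shows the scale.

```python
import math

# Fully-connected 780-600-10 network (output size 10 is an assumption of this sketch).
W = 780 * 600 + 600 + 600 * 10 + 10   # weights + biases, about 4.7e5 parameters
L = 2                                  # number of weight layers
vc_scale = L * W * math.log2(W)        # O(L * W * log W), constant factor omitted
print(f"W = {W:,}, L*W*log2(W) ≈ {vc_scale/1e6:.1f} million")  # tens of millions >> n = 50,000
```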
Generalization	in	Deep	Learning
• However, when you are solving your own problem …
[Figure: the same 780-600 network trained on your dataset (n = 50,000, 10 classes) gives testing error = 0.6 — over-fitting! So you reduce the VC Dimension, e.g. shrink the hidden layer from 600 to 200 units, and still get testing error = 0.6 — over-fitting again, so you reduce the VC Dimension further …]
Generalization	in	Deep	Learning
[Figure: error vs. VC Dimension d, annotated with the VC bound $\epsilon(h) \le \hat\epsilon(h) + \sqrt{\frac{8}{n}\log\frac{4(2n)^d}{\delta}}$ — beyond d = n the bound predicts over-fitting, yet over-parameterized models with extremely high VC Dimension can still generalize]
Generalization	in	Deep	Learning
ICLR 2017: Zhang et al., "Understanding Deep Learning Requires Rethinking Generalization"
Generalization	in	Deep	Learning
[Figure: a deep neural network (Inception) fits three training sets to near-zero training error ε̂(h) ≈ 0 — the original dataset (CIFAR), the same images with random labels, and random-noise features; the network can shatter all of them]
Generalization	in	Deep	Learning
[Figure: the same three settings, now with testing error — original dataset (CIFAR): ε̂(h) ≈ 0, ε(h) ≈ 0.14; random labels: ε̂(h) ≈ 0, ε(h) ≈ 0.9; random-noise features: ε̂(h) ≈ 0, ε(h) ≈ 0.9]
Generalization	in	Deep	Learning
• The testing error depends on the data distribution
• However, the VC Bound does not depend on the data distribution
[Figure: original dataset vs. random-label / random-noise features — ε(h) ≈ 0.14 vs. ε(h) ≈ 0.9, even though the VC Bound is the same]
Generalization in Deep Learning
• High sharpness of the minimum -> high testing error
PAC-Bayesian	Bound	for	Deep	Learning
UAI 2017: Dziugaite & Roy, "Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data"
PAC-Bayesian	Bound	for	Deep	Learning
• Deterministic model: a single hypothesis h maps the data to an error ε(h).
• Stochastic model (Gibbs classifier): a hypothesis h is sampled from a distribution Q over hypotheses; the Gibbs error is the expected error $\epsilon(Q) = \mathbb{E}_{h\sim Q}[\epsilon(h)]$.
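A minimal sketch of the Gibbs-classifier idea under toy assumptions (the linear model, data, perturbation scale σ, and sample count are all made up for illustration): sample weights from Q and average their errors instead of evaluating a single hypothesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def error(w, X, y):
    """0-1 error of a linear classifier sign(X @ w)."""
    return np.mean(np.sign(X @ w) != y)

# Toy data and a "trained" weight vector w_hat (illustrative only).
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = np.sign(X @ w_true)
w_hat = w_true + 0.1 * rng.normal(size=5)

# Deterministic model: a single hypothesis h = w_hat.
print("deterministic error eps(h):", error(w_hat, X, y))

# Stochastic model (Gibbs classifier): Q = N(w_hat, sigma^2 I), error averaged over samples.
sigma = 0.3
gibbs = np.mean([error(w_hat + sigma * rng.normal(size=5), X, y) for _ in range(1000)])
print("Gibbs error eps(Q):", gibbs)
```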
PAC-Bayesian	Bound	for	Deep	Learning
• Consider the sharpness of local minima: replace a single hypothesis h with a distribution of hypotheses Q centered on it.
  - Flat minimum: low ε̂(h), low ε̂(Q)
  - Sharp minimum: low ε̂(h), high ε̂(Q)

$KL\big(\hat\epsilon(Q)\,\|\,\epsilon(Q)\big) \;\le\; \frac{KL(Q\,\|\,P) + \log\frac{n}{\delta}}{n-1}$
PAC-Bayesian	Bound	for	Deep	Learning
• With probability at least 1-δ, the inequality above (the PAC-Bayesian Bound) is satisfied, where:
  - n : number of training instances
  - P : distribution of models before training (prior)
  - Q : distribution of models after training (posterior)
  - KL(Q||P) : KL divergence between the posterior Q and the prior P
  - KL(ε̂(Q)||ε(Q)) : KL divergence between the training error and the testing error
PAC-Bayesian	Bound	for	Deep	Learning
A relaxed form of the bound:

$\epsilon(Q) \;\le\; \hat\epsilon(Q) + \sqrt{\frac{KL(Q\,\|\,P) + \log\frac{n}{\delta} + 2}{2n-1}}$

[Figure: three regimes, each showing the training data, testing data, the prior P, and the posterior Q]
- Under-fitting: high ε̂(Q), low KL(Q||P), low KL(ε̂(Q)||ε(Q)) -> the bound on ε(Q) is still high, because the training error is high.
- Appropriate fitting: moderate ε̂(Q), moderate KL(Q||P), moderate KL(ε̂(Q)||ε(Q)) -> moderate bound on ε(Q).
- Over-fitting: low ε̂(Q), high KL(Q||P), high KL(ε̂(Q)||ε(Q)) -> the bound on ε(Q) is high, because Q has moved far from the prior P.

$KL\big(\hat\epsilon(Q)\,\|\,\epsilon(Q)\big) \;\le\; \frac{KL(Q\,\|\,P) + \log\frac{n}{\delta}}{n-1}$
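A small numeric sketch (illustrative values only, δ = 0.05 assumed) of how the relaxed bound above behaves: it tightens as n grows and loosens as KL(Q||P) grows.

```python
import math

def pac_bayes_bound(train_err, kl_qp, n, delta=0.05):
    """Relaxed PAC-Bayesian bound: eps(Q) <= eps_hat(Q) + sqrt((KL(Q||P) + log(n/delta) + 2) / (2n - 1))."""
    return train_err + math.sqrt((kl_qp + math.log(n / delta) + 2) / (2 * n - 1))

n = 50_000
for kl in (1e2, 1e4, 1e6):        # low, moderate, high KL(Q||P)
    print(f"KL(Q||P) = {kl:>9,.0f}: bound on eps(Q) = {pac_bayes_bound(0.03, kl, n):.3f}")
```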
PAC-Bayesian	Bound	for	Deep	Learning
• The PAC-Bayesian Bound is data-dependent:

$KL\big(\hat\epsilon(Q)\,\|\,\epsilon(Q)\big) \;\le\; \frac{KL(Q\,\|\,P) + \log\frac{n}{\delta}}{n-1}$

• High VC Dimension, but clean data -> low KL(Q||P)
• High VC Dimension, and noisy data -> high KL(Q||P)
[Figure: for clean data the posterior Q stays close to the prior P; for noisy data Q moves far from P]
PAC-Bayesian	Bound	for	Deep	Learning
• Data: MNIST (binary classification, class 0: digits 0~4, class 1: digits 5~9)
• Model: 2-layer or 3-layer NN
Varying the width of the hidden layer (2-layer NN):

                     600      1200
Training error       0.028    0.027
Testing error        0.034    0.035
VC Bound             26m      56m
PAC-Bayesian Bound   0.161    0.179

Varying the number of hidden layers:

                     2        3        4
Training error       0.028    0.028    0.027
Testing error        0.034    0.033    0.032
VC Bound             26m      66m      121m
PAC-Bayesian Bound   0.161    0.186    0.201

Original MNIST vs. random labels:

                     original  random
Training error       0.028     0.112
Testing error        0.034     0.503
VC Bound             26m       26m
PAC-Bayesian Bound   0.161     1.352
PAC-Bayesian	Bound	for	Deep	Learning
• The PAC-Bayesian Bound is data-dependent:
  - Clean data (original MNIST labels): -> small KL(Q||P) -> small ε(Q)
  - Noisy data (random labels): -> large KL(Q||P) -> large ε(Q)

$KL\big(\hat\epsilon(Q)\,\|\,\epsilon(Q)\big) \;\le\; \frac{KL(Q\,\|\,P) + \log\frac{n}{\delta}}{n-1}$
Basic	Concepts	in	Information	Theory
• Entropy
• Joint	Entropy	
• Conditional	Entropy
• Mutual	Information
• Cross	Entropy
Entropy
• The	uncertainty	of	a	random	variable	X	
$H(X) = -\sum_{x\in\mathcal{X}} p(x)\log p(x)$

P(X=1) = 0.5, P(X=2) = 0.5:
H(X) = -2 × 0.5 log(0.5) = 1

P(X=1) = 0.9, P(X=2) = 0.1:
H(X) = -0.9 log(0.9) - 0.1 log(0.1) = 0.469

P(X=x) = 0.25 for x = 1, 2, 3, 4:
H(X) = -4 × 0.25 log(0.25) = 2

(all logarithms are base 2)
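A quick sketch that reproduces the three values above (logarithms base 2):

```python
import math

def entropy(probs):
    """H(X) = -sum p(x) log2 p(x), ignoring zero-probability outcomes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))                # 1.0
print(entropy([0.9, 0.1]))                # ~0.469
print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0
```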
Joint	Entropy	
• The	uncertainty	of	a	joint	distribution	involving	two	random	variables	
X,	Y	
$H(X,Y) = -\sum_{x\in\mathcal{X},\,y\in\mathcal{Y}} p(x,y)\log p(x,y)$

P(X,Y) Y=1  Y=2
X=1    0.25 0.25
X=2    0.25 0.25

H(X,Y) = -4 × 0.25 log(0.25) = 2
Conditional	Entropy
• The	uncertainty	of	a	random	variable	Y	given	the	value	of		another	
random	variable	X
$H(Y|X) = -\sum_{x\in\mathcal{X}} p(x)\sum_{y\in\mathcal{Y}} p(y|x)\log p(y|x)$

X & Y are independent:
P(X,Y) Y=1 Y=2
X=1    0.4 0.4
X=2    0.1 0.1
H(X,Y) = 1.722, H(Y|X) = 1

Y is a stochastic function of X:
P(X,Y) Y=1 Y=2
X=1    0.4 0.1
X=2    0.1 0.4
H(X,Y) = 1.722, H(Y|X) = 0.722

Y is a deterministic function of X:
P(X,Y) Y=1 Y=2
X=1    0.5 0
X=2    0   0.5
H(X,Y) = 1, H(Y|X) = 0
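The same numbers can be reproduced directly from the joint tables; a small sketch (logarithms base 2):

```python
import math

def joint_entropy(p_xy):
    """H(X,Y) from a joint probability table given as a list of rows."""
    return -sum(p * math.log2(p) for row in p_xy for p in row if p > 0)

def conditional_entropy(p_xy):
    """H(Y|X) = H(X,Y) - H(X), with H(X) computed from the row marginals."""
    h_x = -sum(sum(row) * math.log2(sum(row)) for row in p_xy if sum(row) > 0)
    return joint_entropy(p_xy) - h_x

independent   = [[0.4, 0.4], [0.1, 0.1]]
stochastic    = [[0.4, 0.1], [0.1, 0.4]]
deterministic = [[0.5, 0.0], [0.0, 0.5]]

for name, p in [("independent", independent), ("stochastic", stochastic), ("deterministic", deterministic)]:
    print(f"{name:>13}: H(X,Y) = {joint_entropy(p):.3f}, H(Y|X) = {conditional_entropy(p):.3f}")
```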
Conditional	Entropy
Information	Diagram
[Information diagram: H(X) and H(Y) as overlapping circles inside H(X,Y); H(Y|X) is the part of H(Y) outside H(X)]

$\begin{aligned}
H(Y|X) &= -\sum_{x\in\mathcal{X}} p(x)\sum_{y\in\mathcal{Y}} p(y|x)\log p(y|x)\\
&= -\sum_{x\in\mathcal{X}} p(x)\sum_{y\in\mathcal{Y}} p(y|x)\big(\log p(x,y) - \log p(x)\big)\\
&= -\sum_{x\in\mathcal{X},\,y\in\mathcal{Y}} p(x,y)\log p(x,y) + \sum_{x\in\mathcal{X}} p(x)\log p(x)\\
&= H(X,Y) - H(X)
\end{aligned}$
Conditional	Entropy
X & Y are independent:
P(X,Y) Y=1 Y=2
X=1    0.4 0.4
X=2    0.1 0.1
H(X,Y) = 1.722, H(X) = 0.722, H(Y) = 1
H(Y|X) = H(X,Y) - H(X) = 1 = H(Y)

Y is a stochastic function of X:
P(X,Y) Y=1 Y=2
X=1    0.4 0.1
X=2    0.1 0.4
H(X,Y) = 1.722, H(X) = 1, H(Y) = 1
H(Y|X) = H(X,Y) - H(X) = 0.722

Y is a deterministic function of X:
P(X,Y) Y=1 Y=2
X=1    0.5 0
X=2    0   0.5
H(X,Y) = 1, H(X) = 1, H(Y) = 1
H(Y|X) = H(X,Y) - H(X) = 0

[Information diagrams: when X and Y are independent, H(Y|X) = H(Y); when Y is a deterministic function of X, H(Y|X) = 0]
Mutual	Information
• The	mutual	dependence	between	two	variables	X,	Y
$I(X;Y) = \sum_{x\in\mathcal{X},\,y\in\mathcal{Y}} p(x,y)\log\frac{p(x,y)}{p(x)p(y)}$

X & Y are independent:
P(X,Y) Y=1 Y=2
X=1    0.4 0.4
X=2    0.1 0.1
I(X;Y) = 0

Y is a stochastic function of X:
P(X,Y) Y=1 Y=2
X=1    0.4 0.1
X=2    0.1 0.4
I(X;Y) = 0.278

Y is a deterministic function of X:
P(X,Y) Y=1 Y=2
X=1    0.5 0
X=2    0   0.5
I(X;Y) = 1
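A sketch that reproduces I(X;Y) for the three tables above via the identity I(X;Y) = H(X) + H(Y) - H(X,Y) (derived on the next slide):

```python
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def mutual_information(p_xy):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), with marginals taken from the joint table."""
    p_x = [sum(row) for row in p_xy]
    p_y = [sum(col) for col in zip(*p_xy)]
    h_xy = entropy([p for row in p_xy for p in row])
    return entropy(p_x) + entropy(p_y) - h_xy

for p in ([[0.4, 0.4], [0.1, 0.1]], [[0.4, 0.1], [0.1, 0.4]], [[0.5, 0.0], [0.0, 0.5]]):
    print(round(mutual_information(p), 3))   # 0.0, 0.278, 1.0
```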
Mutual	Information
[Information diagram: I(X;Y) is the overlap of H(X) and H(Y) inside H(X,Y)]

$\begin{aligned}
I(X;Y) &= \sum_{x\in\mathcal{X},\,y\in\mathcal{Y}} p(x,y)\log\frac{p(x,y)}{p(x)p(y)}\\
&= \sum_{x\in\mathcal{X},\,y\in\mathcal{Y}} p(x,y)\big(-\log p(x) - \log p(y) + \log p(x,y)\big)\\
&= -\sum_{x\in\mathcal{X}} p(x)\log p(x) - \sum_{y\in\mathcal{Y}} p(y)\log p(y) + \sum_{x\in\mathcal{X},\,y\in\mathcal{Y}} p(x,y)\log p(x,y)\\
&= H(X) + H(Y) - H(X,Y)
\end{aligned}$
Mutual	Information
X & Y are independent:
P(X,Y) Y=1 Y=2
X=1    0.4 0.4
X=2    0.1 0.1
H(X,Y) = 1.722, H(X) = 0.722, H(Y) = 1
I(X;Y) = H(X) + H(Y) - H(X,Y) = 0

Y is a stochastic function of X:
P(X,Y) Y=1 Y=2
X=1    0.4 0.1
X=2    0.1 0.4
H(X,Y) = 1.722, H(X) = 1, H(Y) = 1
I(X;Y) = H(X) + H(Y) - H(X,Y) = 0.278

Y is a deterministic function of X:
P(X,Y) Y=1 Y=2
X=1    0.5 0
X=2    0   0.5
H(X,Y) = 1, H(X) = 1, H(Y) = 1
I(X;Y) = H(X) + H(Y) - H(X,Y) = 1

[Information diagrams: I(X;Y) = 0 when X and Y are independent; the overlap grows as Y becomes more determined by X]
Entropy,	Joint	Entropy	,	Conditional	Entropy	&	
Mutual	Information
[Information diagram for three variables: H(X), H(Y), H(Z) as three overlapping circles, partitioned into H(X|Y,Z), H(Y|X,Z), H(Z|X,Y), I(X;Y|Z), I(X;Z|Y), I(Y;Z|X), and the central region I(X;Y;Z)]
Cross	Entropy
$\begin{aligned}
H_{p,q}(X) &= -\sum_{x\in\mathcal{X}} p(x)\log q(x)\\
&= -\sum_{x\in\mathcal{X}} p(x)\log p(x) - \sum_{x\in\mathcal{X}} p(x)\big(\log q(x) - \log p(x)\big)\\
&= H_p(X) + KL\big(p(X)\,\|\,q(X)\big)
\end{aligned}$
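A small sketch verifying the identity H_{p,q}(X) = H_p(X) + KL(p||q) numerically, with two made-up distributions p and q:

```python
import math

def cross_entropy(p, q):
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

def entropy(p):
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def kl(p, q):
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.9, 0.1]          # data distribution (illustrative)
q = [0.6, 0.4]          # model distribution (illustrative)
print(cross_entropy(p, q))          # H_{p,q}(X)
print(entropy(p) + kl(p, q))        # H_p(X) + KL(p||q) -- same value
```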
Information	in	the	Weights
• Cause	of	Over-fitting	
• Information	in	the	Weights	as a Regularizer
• Bounding	the	Information	in	the	Weights	
• Connection	with	Flat	Minimum
• Connection	with	PAC-Bayesian	Bound
• Experiments
Information	in	the	Weights
Cause	of	Over-fitting	
• Training	loss	(Cross-Entropy):
$H_{p,q}(y|x,w) = \mathbb{E}_{x,y\sim p}\big[-\log q(y|x,w)\big]$
p : probability density function of data
q : probability density function predicted by model
x : input feature of training data
y : label of training data
w : weights of model
✓ : latent parameters of data distribution
$H_{p,q}(y|x,w) = H_p(y|x,w) + \mathbb{E}_{x,w}\,KL\big(p(y|x,w)\,\|\,q(y|x,w)\big)$
Cause	of	Over-fitting	
[Information diagram: H_p(y|x,w), the uncertainty of y given w and x, is the part of H_p(y) outside H_p(x) and H_p(w)]
Cause	of	Over-fitting	
• Lower H_p(y|x,w) -> lower uncertainty of y given w and x -> lower training error
• Example: given a fixed input x and fixed weights w, compare two predictive distributions:

  p(y=1|x,w)=0.9, p(y=2|x,w)=0.1, p(y=3|x,w)=0.0, p(y=4|x,w)=0.0  ->  lower H_p(y|x,w)
  p(y=1|x,w)=0.3, p(y=2|x,w)=0.3, p(y=3|x,w)=0.2, p(y=4|x,w)=0.2  ->  higher H_p(y|x,w)

$H_{p,q}(y|x,w) = H_p(y|x,w) + \mathbb{E}_{x,w}\,KL\big(p(y|x,w)\,\|\,q(y|x,w)\big)$
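A tiny sketch comparing the two predictive distributions above (logarithms base 2): the peaked distribution has the lower H_p(y|x,w).

```python
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.9, 0.1, 0.0, 0.0]))   # ~0.47 : lower H_p(y|x,w)
print(entropy([0.3, 0.3, 0.2, 0.2]))   # ~1.97 : higher H_p(y|x,w)
```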
Cause	of	Over-fitting	
$H_{p,q}(y|x,w) = H_p(y|x,w) + \mathbb{E}_{x,w}\,KL\big(p(y|x,w)\,\|\,q(y|x,w)\big)$

$H_p(y|x,w) = H_p(y|x,\theta) + I(y;\theta|x,w) - I(y;w|x,\theta)$

θ : latent parameters of the (training & testing) data distribution
[Information diagram: H_p(y|x), H_p(y_test|x_test), and H_p(θ)]
Cause	of	Over-fitting	
[Information diagram: H_p(y|x) and H_p(y_test|x_test) overlap with H_p(θ), where θ denotes the latent parameters of the (training & testing) data distribution. Within the training data, I_p(y;θ|x) is the useful information and H_p(y|x,θ) is the noisy information and outliers; the testing data likewise contains noise and outliers, plus normal samples not seen in the training data]
Cause	of	Over-fitting	
$H_p(y|x,w) = H_p(y|x,\theta) + I(y;\theta|x,w) - I(y;w|x,\theta)$

[Information diagram annotating each term:
- H_p(y|x,w) : the uncertainty of y given w and x
- H_p(y|x,θ) : noisy information and outliers in the training data
- I(y;θ|x,w) : useful information not learned by the weights
- I(y;w|x,θ) : noisy information and outliers learned by the weights]
Cause	of	Over-fitting	
[Information diagram highlighting H_p(y|x,θ) : noisy information and outliers in the training data]

$H_p(y|x,w) = H_p(y|x,\theta) + I(y;\theta|x,w) - I(y;w|x,\theta)$
Cause	of	Over-fitting	
• Lower H_p(y|x,θ) -> less noise and fewer outliers in the training data
[Figure: two training sets mapping x to y, one with lower H_p(y|x,θ) and one with higher H_p(y|x,θ)]

$H_p(y|x,w) = H_p(y|x,\theta) + I(y;\theta|x,w) - I(y;w|x,\theta)$
Cause	of	Over-fitting	
$H_p(y|x,w) = H_p(y|x,\theta) + I(y;\theta|x,w) - I(y;w|x,\theta)$
[Information diagram highlighting I(y;θ|x,w) : useful information not learned by the weights]
Cause	of	Over-fitting	
• Lower I(y;θ|x,w) -> more useful information learned by the weights -> lower H_p(y_test|x_test,w) -> lower testing error
• For two weight settings: I(y;θ|x,w2) < I(y;θ|x,w1) ⇒ H_p(y_test|x_test,w2) < H_p(y_test|x_test,w1)
[Information diagram comparing H_p(w1) and H_p(w2)]

$H_p(y|x,w) = H_p(y|x,\theta) + I(y;\theta|x,w) - I(y;w|x,\theta)$
Cause	of	Over-fitting	
$H_p(y|x,w) = H_p(y|x,\theta) + I(y;\theta|x,w) - I(y;w|x,\theta)$
[Information diagram highlighting I(y;w|x,θ) : noisy information and outliers learned by the weights]
• Cause of over-fitting: the weights memorize the noisy information in the training data.
Cause	of	Over-fitting	
• High VC Dimension, but clean data -> little noise to memorize
• High VC Dimension, and noisy data -> much noise to memorize
[Figure: the same high-capacity model fit to clean vs. noisy training data, shown against training data and testing data]

$H_p(y|x,w) = H_p(y|x,\theta) + I(y;\theta|x,w) - I(y;w|x,\theta)$
Information in the Weights as a Regularizer
• θ is unknown, so I(y;w|x,θ) cannot be computed.
• I(D;w), the information in the weights, is an upper bound of I(y;w|x,θ):

$I(y;w|x,\theta) \;\le\; I(y,x;w|\theta) = I(D;w|\theta) \;\le\; I(D;w)$

where D denotes the training data (x, y), so H_p(x,y) = H_p(D).
[Information diagram relating I(y;w|x,θ), I(D;w|θ), and I(D;w)]
Information	in	the	Weights	as a Regularizer
• The actual data distribution p is unknown
• Estimate I_p(D;w) by I_q(D;w): $I_p(D;w) \approx I_q(D;w)$
• New loss function, with I_q(D;w) as a regularizer:

$L\big(q(w|D)\big) = H_{p,q}(y|x,w) + I_q(D;w)$
Connection	with	Flat	Minimum
• A flat minimum has low information in the weights:
$\|\mathcal{H}\|_*$ : nuclear norm of the Hessian at the local minimum
Flat minimum -> low nuclear norm of the Hessian -> low information in the weights
$I_q(w;D) \;\le\; \frac{1}{2}K\Big[\log\|\hat w\|_2^2 + \log\|\mathcal{H}\|_* - K\log\frac{K^2}{2}\Big]$
Connection	with	PAC-Bayesian	Bound
• Given a prior distribution p(w), we have:

$I_q(w;D) = \mathbb{E}_D\,KL\big(q(w|D)\,\|\,q(w)\big) \;\le\; \mathbb{E}_D\,KL\big(q(w|D)\,\|\,q(w)\big) + KL\big(q(w)\,\|\,p(w)\big) = \mathbb{E}_D\,KL\big(q(w|D)\,\|\,p(w)\big)$

p(w) : distribution of weights before training (prior)
q(w|D) : distribution of weights after training on dataset D (posterior)
Connection	with	PAC-Bayesian	Bound
• Loss function with the regularizer (using $I_p(D;w) \approx I_q(D;w)$):

$L\big(q(w|D)\big) = H_{p,q}(y|x,w) + I_q(D;w) \;\le\; H_{p,q}(y|x,w) + \mathbb{E}_D\,KL\big(q(w|D)\,\|\,p(w)\big)$
• PAC-Bayesian Bound:

$\mathbb{E}_D\big[L_{test}\big(q(w|D)\big)\big] \;\le\; \frac{H_{p,q}(y|x,w) + L_{max}\,\mathbb{E}_D\big[KL\big(q(w|D)\,\|\,p(w)\big)\big]}{n\big(1-\frac{1}{2\beta}\big)}$

L_test : test error of the network with weights drawn from q(w|D)
L_max : maximum per-sample loss
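A minimal sketch of this kind of objective, not the authors' implementation: q(w|D) is taken to be a diagonal Gaussian, p(w) a standard Gaussian, the closed-form KL(q(w|D)||p(w)) stands in for I_q(D;w), and the logistic model, toy data, and the reg_weight coefficient are all assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kl(mu_q, logvar_q):
    """KL( N(mu_q, diag(exp(logvar_q))) || N(0, I) ), closed form, summed over weights."""
    return 0.5 * np.sum(np.exp(logvar_q) + mu_q**2 - 1.0 - logvar_q)

def loss(mu_q, logvar_q, X, y, reg_weight=1.0, n_samples=8):
    """Cross-entropy under weights sampled from q(w|D), plus the KL surrogate for I_q(D; w)."""
    ce = 0.0
    for _ in range(n_samples):
        w = mu_q + np.exp(0.5 * logvar_q) * rng.normal(size=mu_q.shape)   # w ~ q(w|D)
        p1 = 1.0 / (1.0 + np.exp(-(X @ w)))                               # binary classifier
        ce += -np.mean(y * np.log(p1 + 1e-12) + (1 - y) * np.log(1 - p1 + 1e-12))
    return ce / n_samples + reg_weight * gaussian_kl(mu_q, logvar_q) / len(y)

# Toy data (illustrative only).
X = rng.normal(size=(256, 10))
y = (X @ rng.normal(size=10) > 0).astype(float)
mu_q, logvar_q = np.zeros(10), np.full(10, -2.0)
print("regularized loss:", loss(mu_q, logvar_q, X, y))
```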
Experiments
• Random labels: [Plot: information complexity I_q(D;w) vs. dataset size, under the loss $L(q(w|D)) = H_{p,q}(y|x,w) + I_q(D;w)$]
Experiments
• Real labels: [Plot: information complexity I_q(D;w) vs. dataset size, under the same loss]
Experiments
• Information in the weights vs. percentage of corrupted labels [Plot]