Deep	Learning
Filling	the	gap	between
practice	and	theory
Preferred Networks
Daisuke Okanohara
hillbig@preferred.jp
Aug. 3rd 2017
Summer School of Correspondence and Fusion of AI and Brain Science
Background:
Unreasonable	success	of	deep	learning
l DL succeeds in solving many complex tasks
̶ Image recognition, speech recognition, natural language processing, robot
control, computational chemistry, etc.
l But we don’t understand why DL works so well
̶ Its success far exceeds our understanding
Background
The DL research process has become close to the process of natural science
l Try first, examine next
̶ First, we obtain an unexpectedly good result experimentally
̶ We then look for a theory that explains why it works so well
l This process differs from previous ML research
̶ Carefully designed new algorithms sometimes (or often) don’t work
̶ Many results contradict our intuition
Outline
Three main unsolved problems in deep learning
l Why can DL learn?
l Why can DL recognize and generate real-world data?
l Why can DL keep and manipulate complex information?
Why can DL learn ?
Optimization	in	training	DL
l Learn an NN model f(x; Ξ) by minimizing a training error L(Ξ)
L(Ξ) = ÎŁi l(f(xi; Ξ), yi)
where l(f(xi; Ξ), yi) is a loss function and Ξ is the set of parameters
l E.g. a two-layer feed-forward NN
f(x; Ξ) = a(W2 a(W1 x))
where a is an element-wise activation function such as
a(z) = max(0, z)
l(f(xi; Ξ), yi) = ||f(xi; Ξ) - yi||2 (L2 loss)
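The model and loss above can be sketched in a few lines of NumPy; the layer sizes and random weights here are illustrative only:

```python
import numpy as np

def relu(z):
    # element-wise activation a(z) = max(0, z)
    return np.maximum(0.0, z)

def f(x, W1, W2):
    # two-layer feed-forward network f(x; theta) = a(W2 a(W1 x))
    return relu(W2 @ relu(W1 @ x))

def l2_loss(pred, y):
    # squared L2 loss l(f(x; theta), y) = ||f(x; theta) - y||^2
    return float(np.sum((pred - y) ** 2))

rng = np.random.default_rng(0)
W1 = rng.standard_normal((5, 3))  # hidden_dim x input_dim
W2 = rng.standard_normal((2, 5))  # output_dim x hidden_dim
x = rng.standard_normal(3)
y = np.zeros(2)
loss = l2_loss(f(x, W1, W2), y)
```

Here Ξ is simply the pair (W1, W2); training searches over these entries.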
Gradient	descent
Stochastic	Gradient	Descent
l Gradient descent
̶ Compute the gradient of L(Ξ) with respect to Ξ, g(Ξ), then update Ξ
using g(Ξ) as
Ξt+1 := Ξt - αt g(Ξt)
where αt > 0 is a learning rate
l Stochastic gradient descent
̶ Since the exact computation of the gradient is expensive, we instead use an
approximate gradient computed on a sampled subset of the data (mini-batch)
g’(Ξt) = (1/|B|) ÎŁi∈B ∇l(f(xi; Ξt), yi)
[Figure: contour plot of L(Ξ) in the (Ξ1, Ξ2) plane; each step moves Ξ by -αg, against the gradient]
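A minimal SGD sketch of the update Ξt+1 := Ξt - αt g’(Ξt), run on a toy one-parameter squared loss whose minimizer is the sample mean (the toy problem and step-size choices are illustrative, not from the slides):

```python
import numpy as np

def sgd(theta0, grad_fn, data, lr, batch_size, steps, seed=0):
    # Stochastic gradient descent: each step estimates the gradient on a
    # sampled mini-batch B and updates theta := theta - lr * g'(theta)
    rng = np.random.default_rng(seed)
    theta = float(theta0)
    for _ in range(steps):
        batch = rng.choice(len(data), size=batch_size, replace=False)
        g = np.mean([grad_fn(theta, data[i]) for i in batch])
        theta -= lr * g
    return theta

# Toy problem: L(theta) = sum_i (theta - x_i)^2; the minimizer is mean(x_i)
xs = np.array([1.0, 2.0, 3.0, 4.0])
grad = lambda theta, x: 2.0 * (theta - x)  # d/dtheta (theta - x)^2
theta_star = sgd(0.0, grad, xs, lr=0.05, batch_size=2, steps=500)
```

The iterate fluctuates around the minimizer 2.5 because the mini-batch gradient is only an unbiased estimate of the full gradient.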
Optimization	in	Deep	learning
l L(Ξ) is highly non-convex and includes many local optima,
plateaus, and saddle points
̶ In plateau regions the gradient becomes almost zero and
convergence becomes significantly slow
̶ At saddle points only a few directions decrease L(Ξ), and it is hard to
escape from such points
[Figure: loss surface showing a plateau, saddle points, and a local optimum]
Miracle	of	deep	learning	training
l It was believed that we cannot train large NNs using SGD
̶ Impossible to optimize a non-convex problem with over a million dimensions
l However, SGD can find a solution with low training error
̶ When using a large model, it often finds a solution with zero training error
̶ Moreover, the initialization does not matter much
(cf. K-means, which requires a good initializer)
l More surprisingly, SGD can find a solution with low test error
̶ Although the model is over-parameterized, it does not over-fit and
achieves generalization
l Practically this is fine, but we want to know why
Why can DL learn?
l Why does DL succeed in finding a solution with a low training error?
̶ Although training is a highly non-convex optimization problem
l Why does DL succeed in finding a solution with a low test
error?
̶ Although the NN is over-parameterized and has no explicit regularization
Loss	surface	analysis	using	spherical	spin	glass	model	(1/5)
[Choromanska+	2015]
l Consider a DNN with ReLU σ(x) = max(0, x)
where q is the normalization factor
l We can re-express the network output as a sum over paths,
where Ai,j = 1 if the path (i, j) is active
and Ai,j = 0 if the path is inactive
̶ A ReLU can be considered a switch: a path is active if
all ReLUs along it are active, and inactive otherwise
[Figure: paths from input xi to output Y; a path is active iff all ReLUs along it are active]
Loss	surface	analysis	using	spherical	spin	glass	model	(2/5)	
l After several assumptions, this function can be
re-expressed as an H-spin spherical spin-glass model
l Now we can use the analysis of the spherical spin-glass model
̶ We now know the distribution of critical points
̶ k: index (the number of negative eigenvalues of the Hessian)
k = 0: local minimum, k > 0: saddle point
Loss	surface	analysis	using	spherical	spin	glass	model	(3/5)
Distribution of critical points
̶ Almost no critical points with large k above LEinf
-> few local minima there
̶ In the band [LE0, LEinf], many critical points with
small k are found near LE0
-> local minima are close to the global minimum
Loss	surface	analysis	using	spherical	spin	glass	model	(4/5)
Distribution	of	test	losses
l This analysis relies on several unrealistic assumptions
̶ Such as
“Each activation is independent of the inputs”
“Each path’s input is independent”
l Can we remove these assumptions, or show that they
hold in almost all training cases?
Loss	surface	analysis	using	spherical	spin	glass	model	(5/5)
Remaining	problem
Depth creates no bad local minima [Lu+ 2017]
l Non-convexity comes from depth and nonlinearity
l Depth alone already creates non-convexity
̶ Weight-space symmetry means there are many distinct
configurations with the same loss value, which results in a non-convex
epigraph
l Consider the following feed-forward linear NN:
minW L(W) = ||WH WH-1 ... W1 X - Y||2
Then, if X and Y have full row rank, all local minima of L(W)
are global minima [Theorem 2.3, Lu & Kawaguchi 2017]
Deep	and	Wide	NN	also	create	no	bad	local	minima	
[Nguyen+	2017]
l If the following conditions hold:
̶ (1) the activation function σ is analytic on R and strictly monotonically
increasing,
̶ (2) σ is bounded,
̶ (3) the loss function l(a) is twice differentiable and
l’(a) = 0 iff a is a global minimum,
̶ (4) the training samples are linearly independent,
then every critical point for which the weight matrices have full
column rank is a global minimum
̶ These conditions are satisfied if we use sigmoid, tanh, or softplus for
σ and the squared loss for l
̶ -> Solved for non-linear NNs under some conditions
Why can DL learn?
l Why does DL succeed in finding a solution with a low training error?
̶ Although training is a highly non-convex optimization problem
l Why does DL succeed in finding a solution with a low test error?
̶ Although the NN is over-parameterized and has no explicit regularization
NN	is	over	parametrized	but	achieves	generalization
l Although the number of parameters of a DNN is much larger
than the number of samples, DNNs do not overfit and
achieve generalization
l Large models tend to achieve low test error
[Figure: test error (lower is better) vs. number of parameters]
̶ Conventional ML models: when the number of parameters exceeds the
number of training samples, overfitting is observed
̶ DNN: no overfitting is observed; moreover, the test error decreases
as the number of parameters increases
Random	Labeling	experiment	[Zhang+	17]
l Model capacity should be restricted to achieve generalization
̶ Cf. Rademacher complexity, VC dimension, uniform stability
l Conduct an experiment on a copy of the data where the
true labels are replaced by random labels
-> NN models easily fit even random labels
l Compare the result with that obtained using regularization techniques
-> No significant difference
l Therefore a NN model has enough capacity to fit
random labels, yet it generalizes well without regularization
̶ For random labels the NN memorizes the samples, but for true labels it
learns patterns that generalize [Arpit+ 17]
l WHY?
SGD	plays	a	significant	role	for	generalization
l SGD performs approximate Bayesian inference [Mandt+ 17]
̶ Bayesian inference provides samples following Ξ ~ P(Ξ|D)
l SGD’s noise removes information about the input that is unnecessary for
estimating the output [Shwartz-Ziv+ 17]
̶ During training, the mutual information between the input and the network
decreases, while that between the network and the output is kept
l Sharpness and weight norms also relate to generalization
̶ Flat minima achieve generalization, but flatness
depends on the scale of the weights
̶ If we find a flat minimum with a small weight norm, then it achieves
generalization [Neyshabur+ 17]
[Figure: sharp vs. flat minima]
Training	always	converge	to	the	solution	with	low-test	error
[Wu+	17]
l Even when we optimize the model from different initializations,
it always converges to a solution with low test error
l Flat minima have large basins of attraction, while sharp minima have small ones
̶ Almost all initializations converge to flat minima
l Flat minima correspond to low model complexity
= low test error
l Question: Why does NN training induce flat minima?
[Figure: flat minima have large basins; sharp minima have small basins]
Why can DL recognize and generate
real world data ?
Why	does	deep	learning	work	?
Lin’s	hypothesis	[Lin+	16]
l Real-world phenomena have the following characteristics:
1. Low-order polynomials
̶ Known physical interactions have at most 4th-order polynomials
2. Local interaction
̶ The number of interactions between objects increases only linearly
3. Symmetry
̶ Small number of degrees of freedom
4. Markovian
̶ Most generative processes depend only on the previous state
l -> DNNs can exploit these characteristics
Generation	and	recognition	(1/2)
l Data	x	is	generated	from	unknown	factors	z
l Generation	and	recognition	are	inverse	operations
[Figure: generation maps latent factors z to data x; recognition (inference) maps x back to z]
E.g. image generation and recognition:
z = (object, camera position, lighting condition), e.g. (Dragon, [10, 2, -4], white)
x = image
Inference: infer the posterior P(z|x)
Generation and recognition (2/2)
l Data is often generated from multiple factors
̶ Uninteresting factors are sometimes called covariates or
nuisance (disturbance) variables
l The generation process can be very complex
̶ Each step can be non-linear
̶ Gaussian and non-Gaussian noise is added at several steps
̶ E.g. image rendering requires dozens of steps
l In general, the generation process is unknown
̶ Any modeled generation process is an approximation of the actual one
[Figure: graphical model with multiple factors z1, z2, covariates c, and intermediate hidden variables h]
Why do we consider generative models?
l For more accurate recognition and inference
̶ If we know the generation process, we can improve recognition and inference
u “What I cannot create, I do not understand” (Richard Feynman)
u “Computer vision is inverse computer graphics” (Geoffrey Hinton)
̶ By inverting the generation process, we obtain the recognition process
l For	transfer	learning
̶ By	changing	covariates,	we	can	transfer	the	learned	model	to	other	
environments
l For	sampling	examples	to	compute	statistics	and	validation
E.g. mapping hand-written digits into 2D using a VAE
The original hand-written data is high-dimensional (784-dim)
If we map the data into a 2-dim space, digit types and shapes change smoothly
If we want to classify “1”,
we only need to find a simple
boundary
Representation learning is more powerful than
the nearest neighbor method and manifold learning
l Actually	we	can	significantly	reduce	the	required	training	samples	when	
using	representation	learning	 [Arora+	2017]
l The distance metric defined on the original space, or the original-space
notion of neighborhood, may not work
[Figure: ideally, nearby samples would help determine a label (e.g. “man with glasses”), but in reality samples with the same label lie in very different places in the original space; their region may not even be connected there]
Real-world	data	is	distributed	in	low-dimensional	manifold
̶ Each point corresponds to one possible datum; the data are distributed
in a low-dimensional subspace (cf. the distribution of galaxies in the universe)
̶ Why does a low-dimensional manifold appear?
Low-dimensional factors are converted to high-dimensional data
without increasing the complexity [Lin+ 16]
Original	space	and	latent	space
[Figure: generation maps the latent space to the original space; recognition maps back]
l In the latent space, the meaning of the data changes smoothly
Learning	is	easy	in	the	latent	space
[Figure: generation and recognition between the original and latent spaces]
l Since many tasks are related to the underlying factors, the classification
boundary becomes simple in the latent space
Require	many	training	examples	
in	the	original	space
Require	few	training	examples
in	the	latent	space
How	to	learn	a	generative	and	inference	model	?
l The generation process and its counterpart recognition process
are highly non-linear and complex
l -> Use deep neural networks to approximate them
[Figure: generation x = f(z) and recognition z = g(x), each approximated by a deep network]
Deep	generative	models
Model | Fast sampling of x | Likelihood P(x) | Sharp images | Stable training
VAE [Kingma+ 14] | √ | △ lower bound (IW-VAE [Burda+ 15]) | X | √
GAN [Goodfellow+ 14,16] (IPM) | √ | X | √ | X-△
AutoRegressive [Oord+ 16ab] | △-√ (parallel multi-scale [Reed+ 17]) | √ | √ | √
Energy model [Zhao+ 16][Dai+ 17] | △-√ | △ up to a constant | √ | △
VAE:	Variational AutoEncoder [Kingma+	14]
[Figure: z -> Dec -> (ÎŒ, σ) -> x ~ N(ÎŒ, σ)]
A neural network outputs the mean and covariance:
(ÎŒ, σ) = Dec(z; φ)
Generate x in the following steps:
(1) Sample z ~ N(0, I)
(2) Compute (ÎŒ, σ) = Dec(z; φ)
(3) Sample x ~ N(ÎŒ, σI)
Defined distribution:
p(x) = ∫ p(x|z) p(z) dz
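The three generation steps can be sketched with a toy linear decoder standing in for Dec(z; φ); the decoder form and the dimensions are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def decoder(z, W_mu, W_logsig):
    # toy linear stand-in for the NN decoder Dec(z; phi);
    # outputs the mean and (positive) std of p(x|z)
    return W_mu @ z, np.exp(W_logsig @ z)

def sample_x(W_mu, W_logsig, latent_dim, data_dim):
    z = rng.standard_normal(latent_dim)                # (1) z ~ N(0, I)
    mu, sigma = decoder(z, W_mu, W_logsig)             # (2) (mu, sigma) = Dec(z; phi)
    return mu + sigma * rng.standard_normal(data_dim)  # (3) x ~ N(mu, sigma I)

W_mu = rng.standard_normal((4, 2))
W_logsig = 0.1 * rng.standard_normal((4, 2))
x = sample_x(W_mu, W_logsig, latent_dim=2, data_dim=4)
```

Replacing the linear maps with deep networks gives the complex p(x) described next.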
VAE:	Variational Autoencoder
Induced distribution
l p(x|z) is a Gaussian, so p(x) corresponds to an (infinite)
mixture of Gaussians
p(x) = ∫ p(x|z) p(z) dz
̶ The neural network can model a complex relation between z and x
VAE:	Variational AutoEncoder
Use maximum likelihood estimation to learn the parameters Ξ
Since the exact likelihood is intractable, we instead maximize a
lower bound of the likelihood known as the ELBO (evidence lower bound):
log p(x) ≄ Eq(z|x)[log p(x|z)] - KL(q(z|x) || p(z))
The proposal distribution q(z|x)
should be close to the true
posterior p(z|x)
Maximizing wrt. q(z|x) corresponds
to minimizing
KL(q(z|x) || p(z|x))
= we learn the encoder as a side effect
Reparametrization trick
Since we take an expectation with regard to q(z|x), it is difficult to compute
the gradient of the ELBO wrt. q(z|x)
-> We can use the reparametrization trick!
[Figure: computation graph before and after reparametrization; z = ÎŒ + Δσ with Δ ~ N(0, I)]
The converted computation graph
can be regarded as an
auto-encoder where the noise Δσ
is added to the latent variable ÎŒ
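A small sketch of the trick: writing z = ÎŒ + σΔ with Δ ~ N(0, I) turns the expectation over q(z|x) into an expectation over Δ, so gradients with respect to ÎŒ and σ pass through the sample. Here we check it on E[z2], whose gradient with respect to ÎŒ is analytically 2ÎŒ (the objective is a toy stand-in, not the ELBO):

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_mu_via_reparam(mu, sigma, n=200_000):
    # Reparametrize z ~ N(mu, sigma^2) as z = mu + sigma * eps, eps ~ N(0, 1).
    # Then d/dmu E[z^2] = E[d/dmu (mu + sigma*eps)^2] = E[2 z],
    # a plain Monte Carlo average through which the gradient flows.
    z = mu + sigma * rng.standard_normal(n)
    return float(np.mean(2.0 * z))

g = grad_mu_via_reparam(mu=1.5, sigma=0.5)
# analytic: E[z^2] = mu^2 + sigma^2, so d/dmu E[z^2] = 2*mu = 3.0
```

Without the reparametrization, the sampling step blocks the gradient and one must fall back to higher-variance estimators such as REINFORCE.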
The	problem	of	maximum	likelihood	estimation	against
low-dimensional manifold data (1/2) [Arjovsky+ 17ab]
l Maximum likelihood estimation (MLE) estimates a distribution
P(x) using a model Q(x):
LMLE(P, Q) = ÎŁx P(x) log Q(x)
̶ Usually this is replaced with the empirical distribution: (1/N) ÎŁi log Q(xi)
l For data on a low-dimensional manifold, P(x) = 0 for most x
l To model such a P, Q(x) should also satisfy Q(x) = 0 for most x
l But for such a Q(x), log Q(xi) is undefined (-∞) when Q(xi) = 0,
so we cannot optimize Q(x) using MLE
l To solve this -> use a Q(x) s.t. Q(xi) > 0 for all {xi}
̶ E.g. Q(x) = N(ÎŒ, σ); this means a sample is ÎŒ with added noise σ
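A tiny numeric illustration (the zero-density "manifold" model here is a contrived point-mass support, an assumption for the demo): a model with zero density off its support yields a log-likelihood of -∞, while smoothing with Gaussian noise makes every sample's density positive:

```python
import numpy as np

def gaussian_loglik(x, mu, sigma):
    # log N(x; mu, sigma^2)
    return -0.5 * np.log(2 * np.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

data = [0.0, 1.0, 2.0]
# A model that puts zero density off its support: log Q(x) = -inf there,
# so the MLE objective is undefined for any sample off the support
support = {0.0, 2.0}
loglik_manifold = sum(0.0 if x in support else -np.inf for x in data)
# Smoothing Q with Gaussian noise gives every sample positive density
loglik_smoothed = sum(gaussian_loglik(x, mu=1.0, sigma=1.0) for x in data)
```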
The	problem	of	maximum	likelihood	estimation	against
low-dimensional	manifold	data	(2/2)
l MLE requires Q(xi) > 0 for all {xi}
l To solve this -> use a Q(x) s.t. Q(xi) > 0 for all {xi}
l Q(x) = N(ÎŒ, σ): a sample is ÎŒ with added noise σ
̶ This produces blurry images
l Another difficulty: MLE has no notion of
closeness wrt. the geometry of the space
When the areas of the intersections are the
same, MLE gives the same score;
although the left distribution is closer to the
true distribution, the MLE scores are identical
GANGenerative	Adversarial	Net
[Goodfellow+	14,	17]
l Compete	two	neural	networks	to	learn	a	distribution
l Generator	(counterfeiters)
̶ Goal:	deceive	the	generator
̶ Learn	to	generate	a	realistic	sample	that	can	deceive	the	generator
l Discriminator	(Police)
̶ Goal:	detect	a	sample	generated	by	the	generator
̶ Learn	to	detect	the	difference	between	real	and	generated	ones
[Figure: the discriminator receives either a real sample or a generated one, chosen randomly, and outputs real/fake]
GAN: Generative Adversarial Net
Sample x in the following steps:
(1) Sample z ~ U(0, I)
(2) Compute x = G(z)
Unlike the VAE, there is no
noise-adding step at the end
Training	of	GAN
l Use a discriminator D(x)
̶ Outputs 1 if x is estimated to be real and 0 otherwise
l Train D to maximize V and G to minimize V, where
V(D, G) = Ex~P[log D(x)] + Ez~p(z)[log(1 - D(G(z)))]
̶ If learning succeeds, it reaches
the following Nash equilibrium:
∫p(z)G(z)dz = P(x), D(x) = 1/2
̶ Since D provides dD(x)/dx to update G,
the two actually cooperate to learn P(x)
[Figure: z -> G -> x = G(z); a real x or generated x’ -> D -> y = D(x) ∈ {1 (real), 0 (fake)}]
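The objective can be written down on a discrete toy support; when the generator distribution Q matches P exactly, the optimal discriminator D*(x) = P(x)/(P(x)+Q(x)) is 1/2 everywhere, matching the Nash equilibrium above (the three-point distributions are illustrative):

```python
import numpy as np

def V(D, p_real, p_fake):
    # GAN objective on a discrete support:
    # V(D, G) = E_{x~P}[log D(x)] + E_{x~Q}[log(1 - D(x))]
    return float(np.sum(p_real * np.log(D)) + np.sum(p_fake * np.log(1.0 - D)))

p_real = np.array([0.2, 0.5, 0.3])
p_fake = np.array([0.2, 0.5, 0.3])   # generator distribution Q matches P
D_star = p_real / (p_real + p_fake)  # optimal discriminator P/(P+Q)
```

At this equilibrium V equals log(1/4), the well-known value of the minimax objective when Q = P.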
Modeling	low	dimensional	manifold	
l When z is low-dimensional, the deterministic function
x = F(z) outputs a low-dimensional manifold in the space of x
l Using CNNs for G(z) and D(x) is also important
̶ D(x) gives similar scores when x and x’ are similar
l A recent study showed that training without a discriminator
can also generate realistic data [Bojanowski+ 17]
l These two factors are important for producing realistic data
[Figure: x = F(z) maps z ∈ R1 onto a one-dimensional manifold in x ∈ R2]
Demonstration	of	GAN	training
http://www.inference.vc/an-alternative-update-rule-for-generative-adversarial-networks/
Each generated
sample follows
dD(x)/dx
Training	GAN
https://github.com/mattya/chainer-DCGAN
After	30	minutes
After	2	hours
After	1	day
LSGAN [Mao+	16]
Stacked	GAN	
http://mtyka.github.io/machine/learning/2017/06/06/highres-gan-faces.html
New	GAN	papers	are	coming	out	every	week
GAN	Zoo	https://github.com/hindupuravinash/the-gan-zoo	
l Since GAN provides a new way to train probabilistic models,
many GAN papers are coming out (~20 papers/month as of Jul. 2017)
l Interpretation	of	GAN	framework
̶ Wasserstein	Distance,	Integral	Probability	Measure,	Inverse	RL
l New	stable	training	method
̶ Lipschitzness of	D,	Ensemble	of	Ds,	etc.	
l New	Applications
̶ Speech,	Text,	Inference	model	(q(z|x))
l Conditional	GAN
̶ Multi-class	Super-resolution,
Super	Resolution	+	Regression	loss	for	perception	network
[Chen+	17]
l Generate	photo-realistic	image	from	segmentation	result
̶ High	resolution,	globally	consistent,	stable	training
Input: segmentation -> Output: photo-realistic image
ICA:	Independent	component	analysis
Reference:	[HyvÀrinen 01]
l Find components z that generate the data x:
x = f(z)
where f is an unknown function called the mixing function and the
components are mutually independent: p(z) = ∏i p(zi)
l When f is linear and the p(zi) are non-Gaussian, we can identify f and
z correctly
l However, when f is nonlinear, we cannot identify f and z
̶ There are infinitely many possible f and z
l -> When the data is a time series x(1), x(2), ..., x(n)
generated from independent sources z that are (1) non-stationary or
(2) stationary but temporally dependent, we can identify the non-linear f and z
Non-linear	ICA	for	non-stationary	time	series	data
[HyvÀrinen+ 16]
l When	sources	are	independent	and	non-stationary,	we	can	
identify	a	non-linear	mixture	function	f	and	z
l Assumption: sources change slowly
̶ sources can be considered stationary within a
short time segment
̶ Many interesting data have this property
1. Divide	time	series	data	into	segments
2. Train	multi-class	classifier	to	classify	
each	data	point	into	each	segment
3. The	last	layer’s	feature	corresponds	to
(linear	mixture	of)	independent	sources
Non-linear	ICA	for	stationary	time	series	data
[HyvÀrinen+	17]
l When	sources	are	independent	and	stationary,	we	can	also	
identify	a	non-linear	mixture	function	f and	z
l Sources must be uniformly dependent
̶ a condition on the joint density of x = s(t) and y = s(t-1)
1. Train	a	binary	classifier	to	classify	whether	given	data	pairs	are	
taken	from	adjacent	(x(t),	x(t+1)) or	random	(x(t),	x(u))	
2. The	last	layer’s	features	correspond	to
(linear	mixture	of)	independent	sources
Conjectures	[Okanohara]
l Train a multi-class classifier with a very large number of classes
(e.g. ImageNet). Then the features of the last layer correspond to
(a linear mixture of) independent components
̶ To show this, we need a reasonable model relating the set of labels
to the independent components
̶ Dark knowledge [Hinton 14] is effective for transferring models because
it reveals the independent components
l Similarly, GAN discriminators (or energy functions) also
extract independent components
Why can DL keep and manipulate
complex information ?
Information	Abstract	Level
l Abstract	knowledge
̶ Text,	relation
l Model
̶ Simulator	/	generative	model
l Raw	Experience
̶ Sensory	stream	
Abstract: small volume, independent of problem/task/context
Detailed: large volume, dependent on problem/task/context
Local	representation	vs	distributed	representation
l Local representation
̶ each concept is represented by one symbol
̶ e.g. Giraffe = 1, Panda = 2, Lion = 3, Tiger = 4
̶ no interference, noise immunity, precise
l Distributed representation
̶ each concept is represented by a set of symbols, and each symbol
participates in representing many concepts
̶ generalizable
̶ less accurate
̶ suffers from interference
          Giraffe  Panda  Lion  Tiger
Long neck    ◯
Four legs    ◯      ◯     ◯     ◯
Body hair    ◯      ◯     ◯
Paw pad             ◯     ◯
High-dimensional vectors vs. low-dimensional vectors
l High-dimensional vectors
̶ Two random vectors are almost always nearly orthogonal
̶ many concepts can be stored within one vector
u e.g. w = x + y + z
̶ Same characteristics as a local representation
l Low-dimensional vectors
̶ Components interfere with each other
̶ Cannot keep precise memories
̶ Beneficial for generalization
l Interference and generalization are strongly related
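The near-orthogonality claim is easy to check numerically: the average |cosine similarity| of random vector pairs shrinks roughly like 1/√dim (the dimensions and pair counts below are arbitrary choices for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_abs_cosine(dim, n_pairs=200):
    # average |cosine similarity| of random vector pairs;
    # shrinks roughly like 1/sqrt(dim)
    sims = []
    for _ in range(n_pairs):
        u = rng.standard_normal(dim)
        v = rng.standard_normal(dim)
        sims.append(abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    return float(np.mean(sims))

low_dim = mean_abs_cosine(3)
high_dim = mean_abs_cosine(3000)
```

This is why a sum like w = x + y + z remains recoverable in high dimensions: the cross terms are nearly zero, while in low dimensions they interfere.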
Two	layer	feedforward	network	=		memory	augmented	network	
[Vaswani+	17]
l Memory-augmented network
a = V Softmax(K q)
̶ K is a key matrix (the i-th row is the key of the i-th memory)
̶ V is a value matrix (the i-th column is the value of the i-th memory)
̶ We may use winner-take-all instead of Softmax
l Two-layer feed-forward network
a = W2 ReLU(W1 x)
̶ the i-th row of W1 corresponds to the key of the i-th memory
̶ the i-th column of W2 corresponds to the value of the i-th memory
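The correspondence can be made concrete by running both reads with the same matrices: K doubles as W1 and V as W2, with ReLU standing in for the softmax addressing (the shapes are illustrative):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def memory_read(q, K, V):
    # memory-augmented read: a = V softmax(K q)
    # row i of K is the key, column i of V the value, of memory cell i
    return V @ softmax(K @ q)

def ffn(x, W1, W2):
    # two-layer feed-forward network: a = W2 ReLU(W1 x)
    # row i of W1 acts as a key, column i of W2 as a value;
    # ReLU plays the role of hard, winner-take-all-like addressing
    return W2 @ np.maximum(0.0, W1 @ x)

rng = np.random.default_rng(0)
K = rng.standard_normal((8, 4))  # 8 memory cells, 4-dim queries
V = rng.standard_normal((6, 8))  # 6-dim values
q = rng.standard_normal(4)
a_mem = memory_read(q, K, V)
a_ffn = ffn(q, K, V)             # same matrices reused as W1, W2
```

Both reads are key-lookups followed by a weighted sum of values; they differ only in how the addressing weights are normalized.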
A three-layer feed-forward network is also a memory-augmented
network [Okanohara unpublished]
l A three-layer feed-forward network can be seen as using the first
layer to compute a query, while the second and third store keys and values:
a = W3 ReLU(W2 ReLU(W1 x))
l query: ReLU(W1 x)
l The i-th row of W2 corresponds to the key of the i-th memory cell
l The i-th column of W3 corresponds to the value of the i-th
memory cell
Two-layer NN update rule interpretation
[Okanohara unpublished]
l The update rules of a two-layer feed-forward network with
h = ReLU(W1 x)
a = W2 h
are
dh = W2^T da
dW2 = da h^T
dW1 = diag(ReLU’(W1 x)) dh x^T
    = diag(ReLU’(W1 x)) W2^T da x^T
l These update rules correspond to storing the error (da) as a
value and the input (x) as a key in a memory network
̶ Updates apply only to active memories (ReLU’(W1 x))
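The update rules above can be checked against finite differences on a toy network with a linear readout loss L = c·a, so that da = c (the sizes and the readout loss are assumptions introduced for the check):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((5, 3))
W2 = rng.standard_normal((2, 5))
x = rng.standard_normal(3)
c = rng.standard_normal(2)  # readout loss L = c . a, so da = c

def loss(W1_, W2_):
    h = np.maximum(0.0, W1_ @ x)   # h = ReLU(W1 x)
    return float(c @ (W2_ @ h))    # a = W2 h, L = c . a

# gradients from the update rules in the text
h = np.maximum(0.0, W1 @ x)
da = c
dh = W2.T @ da
dW2 = np.outer(da, h)
dW1 = np.outer((W1 @ x > 0) * dh, x)  # diag(ReLU'(W1 x)) dh x^T

# finite-difference checks on single entries
eps = 1e-5
W1p = W1.copy(); W1p[0, 0] += eps
num1 = (loss(W1p, W2) - loss(W1, W2)) / eps
W2p = W2.copy(); W2p[0, 0] += eps
num2 = (loss(W1, W2p) - loss(W1, W2)) / eps
```

The outer products make the memory reading explicit: each row of dW1 stores a gated copy of the input x, and each column of dW2 stores the error scaled by the activation h.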
ResNet is a memory-augmented network
[Okanohara unpublished]
l Since ResNet has the form
h = h + Resnet(h)
where Resnet(h), the residual block, consists of two layers, we can
interpret it as recalling a memory and adding it to the current vector
̶ The squeeze operation corresponds to limiting the number of memory cells
l ResNet looks up memory iteratively
̶ A large number of steps = a large number of memory lookups
l This interpretation differs from the shortcut view [He+ 15] and the
unrolled iterative estimation view [Greff+ 16]
Infinite	memory	network
l What happens if we add hidden units
incrementally for each training sample?
̶ This is similar to Memory Networks, where we store previous hidden
activations in explicit memory, and to Progressive Networks [Rusu+ 16],
where we incrementally add a new network (and freeze the old one) for
each new task
l We expect that this can prevent catastrophic forgetting and
achieve one-shot learning
̶ How do we ensure generalization?
Conclusion
l There	are	still	many	unsolved	problems	in	DNN
̶ Why	can	DNN	learn	in	general	setting	?
̶ How	to	represent	real	world	information	?
l There	are	still	many	unsolved	problems	in	AI	
̶ Disentanglement	of	information
̶ One-shot	learning	using	attention	and	memory	mechanism
u Avoid	catastrophic	forgetting,	interference	
̶ Stable,	data-efficient	reinforcement	learning
̶ How	to	abstract	information
u grounding	(language),	strong	noise	(e.g.	dropout),	extract	hidden	
factors	by	using	(non-)stationary	or	commonality	among	task
References
l [Choromanska+ 2015] “The Loss Surfaces of Multilayer Networks”, A. Choromanska et al., AISTATS 2015
l [Lu+ 2017] “Depth Creates No Bad Local Minima”, H. Lu et al., arXiv:1702.08580
l [Nguyen+ 2017] “The loss surface of deep and wide neural networks”, Q. Nguyen et al., arXiv:1704.08045
l [Zhang+ 2017] “Understanding deep learning requires rethinking generalization”, C. Zhang et al., ICLR 2017
l [Arpit+ 2017] “A Closer Look at Memorization in Deep Networks”, D. Arpit et al., ICML 2017
l [Mandt+ 2017] “Stochastic Gradient Descent as Approximate Bayesian Inference”, S. Mandt et al., arXiv:1704.04289
l [Shwartz-Ziv+ 2017] “Opening the Black Box of Deep Neural Networks via Information”, R. Shwartz-Ziv et al., arXiv:1703.00810
l [Neyshabur+ 17] “Exploring Generalization in Deep Learning”, B. Neyshabur et al., arXiv:1706.08947
l [Wu+ 17] “Towards Understanding Generalization of Deep Learning: Perspective of Loss Landscapes”, L. Wu et al., arXiv:1706.10239
l [Lin+ 16] “Why does deep and cheap learning work so well?”, H. W. Lin et al., arXiv:1608.08225
l [Arora+ 17] “Provable benefits of representation learning”, S. Arora et al., arXiv:1706.04601
l [Kingma+ 14] “Auto-Encoding Variational Bayes”, D. P. Kingma et al., ICLR 2014
l [Burda+ 15] “Importance Weighted Autoencoders”, Y. Burda et al., arXiv:1509.00519
l [Goodfellow+ 14] “Generative Adversarial Nets”, I. Goodfellow et al., NIPS 2014
l [Goodfellow 16] “NIPS 2016 Tutorial: Generative Adversarial Networks”, I. Goodfellow, arXiv:1701.00160
l [Oord+ 16a] “Conditional Image Generation with PixelCNN Decoders”, A. van den Oord et al., NIPS 2016
l [Oord+ 16b] “WaveNet: A Generative Model for Raw Audio”, A. van den Oord et al., arXiv:1609.03499
l [Reed+ 17] “Parallel Multiscale Autoregressive Density Estimation”, S. Reed et al., arXiv:1703.03664
l [Zhao+ 16] “Energy-based Generative Adversarial Network”, J. Zhao et al., arXiv:1609.03126
l [Dai+ 17] “Calibrating Energy-based Generative Adversarial Networks”, Z. Dai et al., ICLR 2017
l [Arjovsky+ 17a] “Towards Principled Methods for Training Generative Adversarial Networks”, M. Arjovsky et al., arXiv:1701.04862
l [Arjovsky+ 17b] “Wasserstein Generative Adversarial Networks”, M. Arjovsky et al., ICML 2017
l [Bojanowski+ 17] “Optimizing the Latent Space of Generative Networks”, P. Bojanowski et al., arXiv:1707.05776
l [Chen+ 17] “Photographic Image Synthesis with Cascaded Refinement Networks”, Q. Chen et al., arXiv:1707.09405
l [Mao+ 16] “Least Squares Generative Adversarial Networks”, X. Mao et al., arXiv:1611.04076
l [HyvÀrinen+ 01] “Independent Component Analysis”, A. HyvÀrinen et al., John Wiley & Sons, 2001
l [HyvÀrinen+ 16] “Unsupervised Feature Extraction by Time-Contrastive Learning and Nonlinear ICA”, A. HyvÀrinen et al., NIPS 2016
l [HyvÀrinen+ 17] “Nonlinear ICA of Temporally Dependent Stationary Sources”, A. HyvÀrinen et al., AISTATS 2017
l [Vaswani+ 17] “Attention Is All You Need”, A. Vaswani et al., arXiv:1706.03762 (the memory interpretation appears only in version 3: https://arxiv.org/abs/1706.03762v3)
l [He+ 15] “Deep Residual Learning for Image Recognition”, K. He et al., arXiv:1512.03385
l [Greff+ 16] “Highway and Residual Networks learn Unrolled Iterative Estimation”, K. Greff et al., arXiv:1612.07771
l [Rusu+ 16] “Progressive Neural Networks”, A. Rusu et al., arXiv:1606.04671

 
【DLèŒȘèȘ­äŒšă€‘Mastering Diverse Domains through World Models
【DLèŒȘèȘ­äŒšă€‘Mastering Diverse Domains through World Models【DLèŒȘèȘ­äŒšă€‘Mastering Diverse Domains through World Models
【DLèŒȘèȘ­äŒšă€‘Mastering Diverse Domains through World Models
 
Swin Transformer (ICCV'21 Best Paper) ă‚’ćźŒç’§ă«ç†è§Łă™ă‚‹èł‡æ–™
Swin Transformer (ICCV'21 Best Paper) ă‚’ćźŒç’§ă«ç†è§Łă™ă‚‹èł‡æ–™Swin Transformer (ICCV'21 Best Paper) ă‚’ćźŒç’§ă«ç†è§Łă™ă‚‹èł‡æ–™
Swin Transformer (ICCV'21 Best Paper) ă‚’ćźŒç’§ă«ç†è§Łă™ă‚‹èł‡æ–™
 
Automatic Mixed Precision たçŽč介
Automatic Mixed Precision たçŽč介Automatic Mixed Precision たçŽč介
Automatic Mixed Precision たçŽč介
 
Tokyo.R 41 ă‚”ăƒăƒŒăƒˆăƒ™ă‚Żă‚żăƒŒăƒžă‚·ăƒłă§çœŒéĄăŁćš˜ćˆ†éĄžă‚·ă‚čăƒ†ăƒ æ§‹çŻ‰
Tokyo.R 41 ă‚”ăƒăƒŒăƒˆăƒ™ă‚Żă‚żăƒŒăƒžă‚·ăƒłă§çœŒéĄăŁćš˜ćˆ†éĄžă‚·ă‚čăƒ†ăƒ æ§‹çŻ‰Tokyo.R 41 ă‚”ăƒăƒŒăƒˆăƒ™ă‚Żă‚żăƒŒăƒžă‚·ăƒłă§çœŒéĄăŁćš˜ćˆ†éĄžă‚·ă‚čăƒ†ăƒ æ§‹çŻ‰
Tokyo.R 41 ă‚”ăƒăƒŒăƒˆăƒ™ă‚Żă‚żăƒŒăƒžă‚·ăƒłă§çœŒéĄăŁćš˜ćˆ†éĄžă‚·ă‚čăƒ†ăƒ æ§‹çŻ‰
 
ć€±æ•—ă‹ă‚‰ć­Šă¶æ©Ÿæą°ć­Šçż’ćżœç”š
ć€±æ•—ă‹ă‚‰ć­Šă¶æ©Ÿæą°ć­Šçż’ćżœç”šć€±æ•—ă‹ă‚‰ć­Šă¶æ©Ÿæą°ć­Šçż’ćżœç”š
ć€±æ•—ă‹ă‚‰ć­Šă¶æ©Ÿæą°ć­Šçż’ćżœç”š
 
論文çŽč介: "MolGAN: An implicit generative model for small molecular graphs"
論文çŽč介: "MolGAN: An implicit generative model for small molecular graphs"論文çŽč介: "MolGAN: An implicit generative model for small molecular graphs"
論文çŽč介: "MolGAN: An implicit generative model for small molecular graphs"
 
Depth Estimation論文çŽč介
Depth Estimation論文çŽč介Depth Estimation論文çŽč介
Depth Estimation論文çŽč介
 

Viewers also liked

20171024 DLLab#04_PFN_Hiroshi Maruyama
20171024 DLLab#04_PFN_Hiroshi Maruyama20171024 DLLab#04_PFN_Hiroshi Maruyama
20171024 DLLab#04_PFN_Hiroshi Maruyama
Preferred Networks
 
An introduction to property based testing
An introduction to property based testingAn introduction to property based testing
An introduction to property based testing
Scott Wlaschin
 
Differences of Deep Learning Frameworks
Differences of Deep Learning FrameworksDifferences of Deep Learning Frameworks
Differences of Deep Learning Frameworks
Seiya Tokui
 
【DLLïŒ“ă€‘20170904_AIă‚Źă‚€ăƒ‰ăƒ©ă‚€ăƒł_PFNäžžć±±ćź
【DLLïŒ“ă€‘20170904_AIă‚Źă‚€ăƒ‰ăƒ©ă‚€ăƒł_PFNäžžć±±ćźă€DLLïŒ“ă€‘20170904_AIă‚Źă‚€ăƒ‰ăƒ©ă‚€ăƒł_PFNäžžć±±ćź
【DLLïŒ“ă€‘20170904_AIă‚Źă‚€ăƒ‰ăƒ©ă‚€ăƒł_PFNäžžć±±ćź
Preferred Networks
 
Introduction to Chainer
Introduction to ChainerIntroduction to Chainer
Introduction to Chainer
Preferred Networks
 
Lecture univ.tokyo 2017_okanohara
Lecture univ.tokyo 2017_okanoharaLecture univ.tokyo 2017_okanohara
Lecture univ.tokyo 2017_okanohara
Preferred Networks
 
Deep parking
Deep parkingDeep parking
Deep parking
Shintaro Shiba
 

Viewers also liked (7)

20171024 DLLab#04_PFN_Hiroshi Maruyama
20171024 DLLab#04_PFN_Hiroshi Maruyama20171024 DLLab#04_PFN_Hiroshi Maruyama
20171024 DLLab#04_PFN_Hiroshi Maruyama
 
An introduction to property based testing
An introduction to property based testingAn introduction to property based testing
An introduction to property based testing
 
Differences of Deep Learning Frameworks
Differences of Deep Learning FrameworksDifferences of Deep Learning Frameworks
Differences of Deep Learning Frameworks
 
【DLLïŒ“ă€‘20170904_AIă‚Źă‚€ăƒ‰ăƒ©ă‚€ăƒł_PFNäžžć±±ćź
【DLLïŒ“ă€‘20170904_AIă‚Źă‚€ăƒ‰ăƒ©ă‚€ăƒł_PFNäžžć±±ćźă€DLLïŒ“ă€‘20170904_AIă‚Źă‚€ăƒ‰ăƒ©ă‚€ăƒł_PFNäžžć±±ćź
【DLLïŒ“ă€‘20170904_AIă‚Źă‚€ăƒ‰ăƒ©ă‚€ăƒł_PFNäžžć±±ćź
 
Introduction to Chainer
Introduction to ChainerIntroduction to Chainer
Introduction to Chainer
 
Lecture univ.tokyo 2017_okanohara
Lecture univ.tokyo 2017_okanoharaLecture univ.tokyo 2017_okanohara
Lecture univ.tokyo 2017_okanohara
 
Deep parking
Deep parkingDeep parking
Deep parking
 

Similar to Deep Learning Practice and Theory

CS Education for All. A new wave of opportunity
CS Education for All. A new wave of opportunityCS Education for All. A new wave of opportunity
CS Education for All. A new wave of opportunity
Peter Donaldson
 
RING panel discussion, Coling 2010 ( E. Hovy + M. Zock)
RING panel discussion, Coling 2010 ( E. Hovy + M. Zock)RING panel discussion, Coling 2010 ( E. Hovy + M. Zock)
RING panel discussion, Coling 2010 ( E. Hovy + M. Zock)
Michael Zock
 
Deep learning short introduction
Deep learning short introductionDeep learning short introduction
Deep learning short introduction
Adwait Bhave
 
Clare Corthell: Learning Data Science Online
Clare Corthell: Learning Data Science OnlineClare Corthell: Learning Data Science Online
Clare Corthell: Learning Data Science Online
sfdatascience
 
Data Science Salon: Introduction to Machine Learning - Marketing Use Case
Data Science Salon: Introduction to Machine Learning - Marketing Use CaseData Science Salon: Introduction to Machine Learning - Marketing Use Case
Data Science Salon: Introduction to Machine Learning - Marketing Use Case
Formulatedby
 
Data Science Salon Miami Presentation
Data Science Salon Miami PresentationData Science Salon Miami Presentation
Data Science Salon Miami Presentation
Greg Werner
 
What every teacher should know about cognitive research
What every teacher should know about cognitive researchWhat every teacher should know about cognitive research
What every teacher should know about cognitive research
Stephanie Chasteen
 
Lessons learned from building practical deep learning systems
Lessons learned from building practical deep learning systemsLessons learned from building practical deep learning systems
Lessons learned from building practical deep learning systems
Xavier Amatriain
 
Deep learning with tensorflow
Deep learning with tensorflowDeep learning with tensorflow
Deep learning with tensorflow
Charmi Chokshi
 
Welcome is431 s11
Welcome is431 s11Welcome is431 s11
Welcome is431 s11Julian Scher
 
What every teacher should know about cognitive science
What every teacher should know about cognitive scienceWhat every teacher should know about cognitive science
What every teacher should know about cognitive science
Stephanie Chasteen
 
Otago maths association pd 2014
Otago maths association pd 2014Otago maths association pd 2014
Otago maths association pd 2014
mshasanbegovic
 
How to Start Doing Data Science
How to Start Doing Data ScienceHow to Start Doing Data Science
How to Start Doing Data Science
Ayodele Odubela
 
Presentation for IAA - Oxford Careers Service 24 November 2015
Presentation for IAA - Oxford Careers Service 24 November 2015Presentation for IAA - Oxford Careers Service 24 November 2015
Presentation for IAA - Oxford Careers Service 24 November 2015
Gill Clough
 
James Langley presentation about Computer science & ICT curriculum
James Langley presentation about Computer science & ICT curriculumJames Langley presentation about Computer science & ICT curriculum
James Langley presentation about Computer science & ICT curriculum
petzanet.HR Kurikulum
 
Deep Learning Online Course It's Not as Difficult as You Think.pdf
Deep Learning Online Course It's Not as Difficult as You Think.pdfDeep Learning Online Course It's Not as Difficult as You Think.pdf
Deep Learning Online Course It's Not as Difficult as You Think.pdf
Microsoft azure
 
Solo Locales Intro
Solo Locales IntroSolo Locales Intro
Solo Locales Intro
Sololocales1
 
Solo Locales Intro
Solo Locales IntroSolo Locales Intro
Solo Locales Intro
Sololocales1
 
Solo Locales Intro
Solo Locales IntroSolo Locales Intro
Solo Locales IntroSololocales1
 
Solo Locales Introduction v2
Solo Locales Introduction v2Solo Locales Introduction v2
Solo Locales Introduction v2
Sololocales1
 

Similar to Deep Learning Practice and Theory (20)

CS Education for All. A new wave of opportunity
CS Education for All. A new wave of opportunityCS Education for All. A new wave of opportunity
CS Education for All. A new wave of opportunity
 
RING panel discussion, Coling 2010 ( E. Hovy + M. Zock)
RING panel discussion, Coling 2010 ( E. Hovy + M. Zock)RING panel discussion, Coling 2010 ( E. Hovy + M. Zock)
RING panel discussion, Coling 2010 ( E. Hovy + M. Zock)
 
Deep learning short introduction
Deep learning short introductionDeep learning short introduction
Deep learning short introduction
 
Clare Corthell: Learning Data Science Online
Clare Corthell: Learning Data Science OnlineClare Corthell: Learning Data Science Online
Clare Corthell: Learning Data Science Online
 
Data Science Salon: Introduction to Machine Learning - Marketing Use Case
Data Science Salon: Introduction to Machine Learning - Marketing Use CaseData Science Salon: Introduction to Machine Learning - Marketing Use Case
Data Science Salon: Introduction to Machine Learning - Marketing Use Case
 
Data Science Salon Miami Presentation
Data Science Salon Miami PresentationData Science Salon Miami Presentation
Data Science Salon Miami Presentation
 
What every teacher should know about cognitive research
What every teacher should know about cognitive researchWhat every teacher should know about cognitive research
What every teacher should know about cognitive research
 
Lessons learned from building practical deep learning systems
Lessons learned from building practical deep learning systemsLessons learned from building practical deep learning systems
Lessons learned from building practical deep learning systems
 
Deep learning with tensorflow
Deep learning with tensorflowDeep learning with tensorflow
Deep learning with tensorflow
 
Welcome is431 s11
Welcome is431 s11Welcome is431 s11
Welcome is431 s11
 
What every teacher should know about cognitive science
What every teacher should know about cognitive scienceWhat every teacher should know about cognitive science
What every teacher should know about cognitive science
 
Otago maths association pd 2014
Otago maths association pd 2014Otago maths association pd 2014
Otago maths association pd 2014
 
How to Start Doing Data Science
How to Start Doing Data ScienceHow to Start Doing Data Science
How to Start Doing Data Science
 
Presentation for IAA - Oxford Careers Service 24 November 2015
Presentation for IAA - Oxford Careers Service 24 November 2015Presentation for IAA - Oxford Careers Service 24 November 2015
Presentation for IAA - Oxford Careers Service 24 November 2015
 
James Langley presentation about Computer science & ICT curriculum
James Langley presentation about Computer science & ICT curriculumJames Langley presentation about Computer science & ICT curriculum
James Langley presentation about Computer science & ICT curriculum
 
Deep Learning Online Course It's Not as Difficult as You Think.pdf
Deep Learning Online Course It's Not as Difficult as You Think.pdfDeep Learning Online Course It's Not as Difficult as You Think.pdf
Deep Learning Online Course It's Not as Difficult as You Think.pdf
 
Solo Locales Intro
Solo Locales IntroSolo Locales Intro
Solo Locales Intro
 
Solo Locales Intro
Solo Locales IntroSolo Locales Intro
Solo Locales Intro
 
Solo Locales Intro
Solo Locales IntroSolo Locales Intro
Solo Locales Intro
 
Solo Locales Introduction v2
Solo Locales Introduction v2Solo Locales Introduction v2
Solo Locales Introduction v2
 

More from Preferred Networks

PodSecurityPolicy からGatekeeper ă«ç§»èĄŒă—ăŸă—ăŸ / Kubernetes Meetup Tokyo #57
PodSecurityPolicy からGatekeeper ă«ç§»èĄŒă—ăŸă—ăŸ / Kubernetes Meetup Tokyo #57PodSecurityPolicy からGatekeeper ă«ç§»èĄŒă—ăŸă—ăŸ / Kubernetes Meetup Tokyo #57
PodSecurityPolicy からGatekeeper ă«ç§»èĄŒă—ăŸă—ăŸ / Kubernetes Meetup Tokyo #57
Preferred Networks
 
Optunaă‚’äœżăŁăŸHuman-in-the-loopæœ€é©ćŒ–ăźçŽč介 - 2023/04/27 W&B 東äșŹăƒŸăƒŒăƒˆă‚ąăƒƒăƒ— #3
Optunaă‚’äœżăŁăŸHuman-in-the-loopæœ€é©ćŒ–ăźçŽč介 - 2023/04/27 W&B 東äșŹăƒŸăƒŒăƒˆă‚ąăƒƒăƒ— #3Optunaă‚’äœżăŁăŸHuman-in-the-loopæœ€é©ćŒ–ăźçŽč介 - 2023/04/27 W&B 東äșŹăƒŸăƒŒăƒˆă‚ąăƒƒăƒ— #3
Optunaă‚’äœżăŁăŸHuman-in-the-loopæœ€é©ćŒ–ăźçŽč介 - 2023/04/27 W&B 東äșŹăƒŸăƒŒăƒˆă‚ąăƒƒăƒ— #3
Preferred Networks
 
Kubernetes + containerd で cgroup v2 ă«ç§»èĄŒă—ăŸă‚‰ "failed to create fsnotify watcher...
Kubernetes + containerd で cgroup v2 ă«ç§»èĄŒă—ăŸă‚‰ "failed to create fsnotify watcher...Kubernetes + containerd で cgroup v2 ă«ç§»èĄŒă—ăŸă‚‰ "failed to create fsnotify watcher...
Kubernetes + containerd で cgroup v2 ă«ç§»èĄŒă—ăŸă‚‰ "failed to create fsnotify watcher...
Preferred Networks
 
æ·±ć±€ć­Šçż’ăźæ–°ă—ă„ćżœç”šăšă€ ăă‚Œă‚’æ”Żăˆă‚‹èšˆçź—æ©Ÿăźé€Č挖 - Preferred Networks CEO è„żć·ćŸč (SEMICON Japan 2022 Ke...
æ·±ć±€ć­Šçż’ăźæ–°ă—ă„ćżœç”šăšă€ ăă‚Œă‚’æ”Żăˆă‚‹èšˆçź—æ©Ÿăźé€Č挖 - Preferred Networks CEO è„żć·ćŸč (SEMICON Japan 2022 Ke...æ·±ć±€ć­Šçż’ăźæ–°ă—ă„ćżœç”šăšă€ ăă‚Œă‚’æ”Żăˆă‚‹èšˆçź—æ©Ÿăźé€Č挖 - Preferred Networks CEO è„żć·ćŸč (SEMICON Japan 2022 Ke...
æ·±ć±€ć­Šçż’ăźæ–°ă—ă„ćżœç”šăšă€ ăă‚Œă‚’æ”Żăˆă‚‹èšˆçź—æ©Ÿăźé€Č挖 - Preferred Networks CEO è„żć·ćŸč (SEMICON Japan 2022 Ke...
Preferred Networks
 
Kubernetes ControllerをScale-Outさせるæ–čæł• / Kubernetes Meetup Tokyo #55
Kubernetes ControllerをScale-Outさせるæ–čæł• / Kubernetes Meetup Tokyo #55Kubernetes ControllerをScale-Outさせるæ–čæł• / Kubernetes Meetup Tokyo #55
Kubernetes ControllerをScale-Outさせるæ–čæł• / Kubernetes Meetup Tokyo #55
Preferred Networks
 
Kaggle Happywhaleコンペć„Șć‹è§Łæł•ă§ăźOptunaäœżç”šäș‹äŸ‹ - 2022/12/10 Optuna Meetup #2
Kaggle Happywhaleコンペć„Șć‹è§Łæł•ă§ăźOptunaäœżç”šäș‹äŸ‹ - 2022/12/10 Optuna Meetup #2Kaggle Happywhaleコンペć„Șć‹è§Łæł•ă§ăźOptunaäœżç”šäș‹äŸ‹ - 2022/12/10 Optuna Meetup #2
Kaggle Happywhaleコンペć„Șć‹è§Łæł•ă§ăźOptunaäœżç”šäș‹äŸ‹ - 2022/12/10 Optuna Meetup #2
Preferred Networks
 
最新ăƒȘăƒȘăƒŒă‚čOptuna V3ぼ慹ど - 2022/12/10 Optuna Meetup #2
最新ăƒȘăƒȘăƒŒă‚čOptuna V3ぼ慹ど - 2022/12/10 Optuna Meetup #2最新ăƒȘăƒȘăƒŒă‚čOptuna V3ぼ慹ど - 2022/12/10 Optuna Meetup #2
最新ăƒȘăƒȘăƒŒă‚čOptuna V3ぼ慹ど - 2022/12/10 Optuna Meetup #2
Preferred Networks
 
Optuna DashboardたçŽčä»‹ăšèš­èšˆè§ŁèȘŹ - 2022/12/10 Optuna Meetup #2
Optuna DashboardたçŽčä»‹ăšèš­èšˆè§ŁèȘŹ - 2022/12/10 Optuna Meetup #2Optuna DashboardたçŽčä»‹ăšèš­èšˆè§ŁèȘŹ - 2022/12/10 Optuna Meetup #2
Optuna DashboardたçŽčä»‹ăšèš­èšˆè§ŁèȘŹ - 2022/12/10 Optuna Meetup #2
Preferred Networks
 
ă‚čă‚żăƒŒăƒˆă‚ąăƒƒăƒ—ăŒææĄˆă™ă‚‹2030ćčŽăźææ–™é–‹ç™ș - 2022/11/11 QPARCèŹ›æŒ”
ă‚čă‚żăƒŒăƒˆă‚ąăƒƒăƒ—ăŒææĄˆă™ă‚‹2030ćčŽăźææ–™é–‹ç™ș - 2022/11/11 QPARCèŹ›æŒ”ă‚čă‚żăƒŒăƒˆă‚ąăƒƒăƒ—ăŒææĄˆă™ă‚‹2030ćčŽăźææ–™é–‹ç™ș - 2022/11/11 QPARCèŹ›æŒ”
ă‚čă‚żăƒŒăƒˆă‚ąăƒƒăƒ—ăŒææĄˆă™ă‚‹2030ćčŽăźææ–™é–‹ç™ș - 2022/11/11 QPARCèŹ›æŒ”
Preferred Networks
 
Deep LearningăźăŸă‚ăźć°‚ç”šăƒ—ăƒ­ă‚»ăƒƒă‚”ă€ŒMN-Core」ぼ開ç™șăšæŽ»ç”šïŒˆ2022/10/19æ±ć€§ć€§ć­Šé™ąă€Œ èžćˆæƒ…ć ±ć­Šç‰čćˆ„èŹ›çŸ©â…ąă€ïŒ‰
Deep LearningăźăŸă‚ăźć°‚ç”šăƒ—ăƒ­ă‚»ăƒƒă‚”ă€ŒMN-Core」ぼ開ç™șăšæŽ»ç”šïŒˆ2022/10/19æ±ć€§ć€§ć­Šé™ąă€Œ èžćˆæƒ…ć ±ć­Šç‰čćˆ„èŹ›çŸ©â…ąă€ïŒ‰Deep LearningăźăŸă‚ăźć°‚ç”šăƒ—ăƒ­ă‚»ăƒƒă‚”ă€ŒMN-Core」ぼ開ç™șăšæŽ»ç”šïŒˆ2022/10/19æ±ć€§ć€§ć­Šé™ąă€Œ èžćˆæƒ…ć ±ć­Šç‰čćˆ„èŹ›çŸ©â…ąă€ïŒ‰
Deep LearningăźăŸă‚ăźć°‚ç”šăƒ—ăƒ­ă‚»ăƒƒă‚”ă€ŒMN-Core」ぼ開ç™șăšæŽ»ç”šïŒˆ2022/10/19æ±ć€§ć€§ć­Šé™ąă€Œ èžćˆæƒ…ć ±ć­Šç‰čćˆ„èŹ›çŸ©â…ąă€ïŒ‰
Preferred Networks
 
PFNにおける研究開ç™ș2022/10/19 æ±ć€§ć€§ć­Šé™ąă€Œèžćˆæƒ…ć ±ć­Šç‰čćˆ„èŹ›çŸ©â…ąă€ïŒ‰
PFNにおける研究開ç™ș2022/10/19 æ±ć€§ć€§ć­Šé™ąă€Œèžćˆæƒ…ć ±ć­Šç‰čćˆ„èŹ›çŸ©â…ąă€ïŒ‰PFNにおける研究開ç™ș2022/10/19 æ±ć€§ć€§ć­Šé™ąă€Œèžćˆæƒ…ć ±ć­Šç‰čćˆ„èŹ›çŸ©â…ąă€ïŒ‰
PFNにおける研究開ç™ș2022/10/19 æ±ć€§ć€§ć­Šé™ąă€Œèžćˆæƒ…ć ±ć­Šç‰čćˆ„èŹ›çŸ©â…ąă€ïŒ‰
Preferred Networks
 
è‡Ș然蚀èȘžć‡Šç†ă‚’ ćœč立おるたはăȘăœé›Łă—ă„ăźă‹ïŒˆ2022/10/25æ±ć€§ć€§ć­Šé™ąă€Œè‡Ș然蚀èȘžć‡Šç†ćżœç”šă€ïŒ‰
è‡Ș然蚀èȘžć‡Šç†ă‚’ ćœč立おるたはăȘăœé›Łă—ă„ăźă‹ïŒˆ2022/10/25æ±ć€§ć€§ć­Šé™ąă€Œè‡Ș然蚀èȘžć‡Šç†ćżœç”šă€ïŒ‰è‡Ș然蚀èȘžć‡Šç†ă‚’ ćœč立おるたはăȘăœé›Łă—ă„ăźă‹ïŒˆ2022/10/25æ±ć€§ć€§ć­Šé™ąă€Œè‡Ș然蚀èȘžć‡Šç†ćżœç”šă€ïŒ‰
è‡Ș然蚀èȘžć‡Šç†ă‚’ ćœč立おるたはăȘăœé›Łă—ă„ăźă‹ïŒˆ2022/10/25æ±ć€§ć€§ć­Šé™ąă€Œè‡Ș然蚀èȘžć‡Šç†ćżœç”šă€ïŒ‰
Preferred Networks
 
Kubernetes ă«ă“ă‚Œă‹ă‚‰ć…„ă‚‹ă‹ă‚‚ă—ă‚ŒăȘă„æłšç›źæ©ŸèƒœïŒïŒˆ2022ćčŽ11月版 / TechFeed Experts Night #7 〜 ă‚łăƒłăƒ†ăƒŠæŠ€èĄ“ă‚’èȘžă‚‹
Kubernetes ă«ă“ă‚Œă‹ă‚‰ć…„ă‚‹ă‹ă‚‚ă—ă‚ŒăȘă„æłšç›źæ©ŸèƒœïŒïŒˆ2022ćčŽ11月版 / TechFeed Experts Night #7 〜 ă‚łăƒłăƒ†ăƒŠæŠ€èĄ“ă‚’èȘžă‚‹Kubernetes ă«ă“ă‚Œă‹ă‚‰ć…„ă‚‹ă‹ă‚‚ă—ă‚ŒăȘă„æłšç›źæ©ŸèƒœïŒïŒˆ2022ćčŽ11月版 / TechFeed Experts Night #7 〜 ă‚łăƒłăƒ†ăƒŠæŠ€èĄ“ă‚’èȘžă‚‹
Kubernetes ă«ă“ă‚Œă‹ă‚‰ć…„ă‚‹ă‹ă‚‚ă—ă‚ŒăȘă„æłšç›źæ©ŸèƒœïŒïŒˆ2022ćčŽ11月版 / TechFeed Experts Night #7 〜 ă‚łăƒłăƒ†ăƒŠæŠ€èĄ“ă‚’èȘžă‚‹
Preferred Networks
 
Matlantisâ„ąăźăƒ‹ăƒ„ăƒŒăƒ©ăƒ«ăƒăƒƒăƒˆăƒŻăƒŒă‚Żăƒăƒ†ăƒłă‚·ăƒŁăƒ«PFPた適甚範ć›Čæ‹ĄćŒ”
Matlantisâ„ąăźăƒ‹ăƒ„ăƒŒăƒ©ăƒ«ăƒăƒƒăƒˆăƒŻăƒŒă‚Żăƒăƒ†ăƒłă‚·ăƒŁăƒ«PFPた適甚範ć›Čæ‹ĄćŒ”Matlantisâ„ąăźăƒ‹ăƒ„ăƒŒăƒ©ăƒ«ăƒăƒƒăƒˆăƒŻăƒŒă‚Żăƒăƒ†ăƒłă‚·ăƒŁăƒ«PFPた適甚範ć›Čæ‹ĄćŒ”
Matlantisâ„ąăźăƒ‹ăƒ„ăƒŒăƒ©ăƒ«ăƒăƒƒăƒˆăƒŻăƒŒă‚Żăƒăƒ†ăƒłă‚·ăƒŁăƒ«PFPた適甚範ć›Čæ‹ĄćŒ”
Preferred Networks
 
PFNたă‚Șăƒłăƒ—ăƒŹèšˆçź—æ©Ÿă‚Żăƒ©ă‚čă‚żăźć–ă‚Šç”„ăż_珏55ć›žæƒ…ć ±ç§‘ć­Šè‹„æ‰‹ăźäŒš
PFNたă‚Șăƒłăƒ—ăƒŹèšˆçź—æ©Ÿă‚Żăƒ©ă‚čă‚żăźć–ă‚Šç”„ăż_珏55ć›žæƒ…ć ±ç§‘ć­Šè‹„æ‰‹ăźäŒšPFNたă‚Șăƒłăƒ—ăƒŹèšˆçź—æ©Ÿă‚Żăƒ©ă‚čă‚żăźć–ă‚Šç”„ăż_珏55ć›žæƒ…ć ±ç§‘ć­Šè‹„æ‰‹ăźäŒš
PFNたă‚Șăƒłăƒ—ăƒŹèšˆçź—æ©Ÿă‚Żăƒ©ă‚čă‚żăźć–ă‚Šç”„ăż_珏55ć›žæƒ…ć ±ç§‘ć­Šè‹„æ‰‹ăźäŒš
Preferred Networks
 
ç¶šăƒ»PFN たă‚ȘンプレMLćŸșç›€ăźć–ă‚Šç”„ăż / ă‚ȘンプレMLćŸș盀 on Kubernetes 〜PFNă€ăƒ€ăƒ•ăƒŒă€œ #2
ç¶šăƒ»PFN たă‚ȘンプレMLćŸșç›€ăźć–ă‚Šç”„ăż / ă‚ȘンプレMLćŸș盀 on Kubernetes 〜PFNă€ăƒ€ăƒ•ăƒŒă€œ #2ç¶šăƒ»PFN たă‚ȘンプレMLćŸșç›€ăźć–ă‚Šç”„ăż / ă‚ȘンプレMLćŸș盀 on Kubernetes 〜PFNă€ăƒ€ăƒ•ăƒŒă€œ #2
ç¶šăƒ»PFN たă‚ȘンプレMLćŸșç›€ăźć–ă‚Šç”„ăż / ă‚ȘンプレMLćŸș盀 on Kubernetes 〜PFNă€ăƒ€ăƒ•ăƒŒă€œ #2
Preferred Networks
 
Kubernetes Service Account As Multi-Cloud Identity / Cloud Native Security Co...
Kubernetes Service Account As Multi-Cloud Identity / Cloud Native Security Co...Kubernetes Service Account As Multi-Cloud Identity / Cloud Native Security Co...
Kubernetes Service Account As Multi-Cloud Identity / Cloud Native Security Co...
Preferred Networks
 
KubeCon + CloudNativeCon Europe 2022 Recap / Kubernetes Meetup Tokyo #51 / #k...
KubeCon + CloudNativeCon Europe 2022 Recap / Kubernetes Meetup Tokyo #51 / #k...KubeCon + CloudNativeCon Europe 2022 Recap / Kubernetes Meetup Tokyo #51 / #k...
KubeCon + CloudNativeCon Europe 2022 Recap / Kubernetes Meetup Tokyo #51 / #k...
Preferred Networks
 
KubeCon + CloudNativeCon Europe 2022 Recap - Batch/HPCăźæœźæ”ăšScheduleræ‹ĄćŒ”äș‹äŸ‹ / Kub...
KubeCon + CloudNativeCon Europe 2022 Recap - Batch/HPCăźæœźæ”ăšScheduleræ‹ĄćŒ”äș‹äŸ‹ / Kub...KubeCon + CloudNativeCon Europe 2022 Recap - Batch/HPCăźæœźæ”ăšScheduleræ‹ĄćŒ”äș‹äŸ‹ / Kub...
KubeCon + CloudNativeCon Europe 2022 Recap - Batch/HPCăźæœźæ”ăšScheduleræ‹ĄćŒ”äș‹äŸ‹ / Kub...
Preferred Networks
 
ç‹Źæ–­ăšćèŠ‹ă§éžă‚“ă  Kubernetes 1.24 ăźæłšç›źæ©Ÿèƒœăšä»ŠćŸŒ! / Kubernetes Meetup Tokyo 50
ç‹Źæ–­ăšćèŠ‹ă§éžă‚“ă  Kubernetes 1.24 ăźæłšç›źæ©Ÿèƒœăšä»ŠćŸŒ! / Kubernetes Meetup Tokyo 50ç‹Źæ–­ăšćèŠ‹ă§éžă‚“ă  Kubernetes 1.24 ăźæłšç›źæ©Ÿèƒœăšä»ŠćŸŒ! / Kubernetes Meetup Tokyo 50
ç‹Źæ–­ăšćèŠ‹ă§éžă‚“ă  Kubernetes 1.24 ăźæłšç›źæ©Ÿèƒœăšä»ŠćŸŒ! / Kubernetes Meetup Tokyo 50
Preferred Networks
 

More from Preferred Networks (20)

PodSecurityPolicy からGatekeeper ă«ç§»èĄŒă—ăŸă—ăŸ / Kubernetes Meetup Tokyo #57
PodSecurityPolicy からGatekeeper ă«ç§»èĄŒă—ăŸă—ăŸ / Kubernetes Meetup Tokyo #57PodSecurityPolicy からGatekeeper ă«ç§»èĄŒă—ăŸă—ăŸ / Kubernetes Meetup Tokyo #57
PodSecurityPolicy からGatekeeper ă«ç§»èĄŒă—ăŸă—ăŸ / Kubernetes Meetup Tokyo #57
 
Optunaă‚’äœżăŁăŸHuman-in-the-loopæœ€é©ćŒ–ăźçŽč介 - 2023/04/27 W&B 東äșŹăƒŸăƒŒăƒˆă‚ąăƒƒăƒ— #3
Optunaă‚’äœżăŁăŸHuman-in-the-loopæœ€é©ćŒ–ăźçŽč介 - 2023/04/27 W&B 東äșŹăƒŸăƒŒăƒˆă‚ąăƒƒăƒ— #3Optunaă‚’äœżăŁăŸHuman-in-the-loopæœ€é©ćŒ–ăźçŽč介 - 2023/04/27 W&B 東äșŹăƒŸăƒŒăƒˆă‚ąăƒƒăƒ— #3
Optunaă‚’äœżăŁăŸHuman-in-the-loopæœ€é©ćŒ–ăźçŽč介 - 2023/04/27 W&B 東äșŹăƒŸăƒŒăƒˆă‚ąăƒƒăƒ— #3
 
Kubernetes + containerd で cgroup v2 ă«ç§»èĄŒă—ăŸă‚‰ "failed to create fsnotify watcher...
Kubernetes + containerd で cgroup v2 ă«ç§»èĄŒă—ăŸă‚‰ "failed to create fsnotify watcher...Kubernetes + containerd で cgroup v2 ă«ç§»èĄŒă—ăŸă‚‰ "failed to create fsnotify watcher...
Kubernetes + containerd で cgroup v2 ă«ç§»èĄŒă—ăŸă‚‰ "failed to create fsnotify watcher...
 
æ·±ć±€ć­Šçż’ăźæ–°ă—ă„ćżœç”šăšă€ ăă‚Œă‚’æ”Żăˆă‚‹èšˆçź—æ©Ÿăźé€Č挖 - Preferred Networks CEO è„żć·ćŸč (SEMICON Japan 2022 Ke...
æ·±ć±€ć­Šçż’ăźæ–°ă—ă„ćżœç”šăšă€ ăă‚Œă‚’æ”Żăˆă‚‹èšˆçź—æ©Ÿăźé€Č挖 - Preferred Networks CEO è„żć·ćŸč (SEMICON Japan 2022 Ke...æ·±ć±€ć­Šçż’ăźæ–°ă—ă„ćżœç”šăšă€ ăă‚Œă‚’æ”Żăˆă‚‹èšˆçź—æ©Ÿăźé€Č挖 - Preferred Networks CEO è„żć·ćŸč (SEMICON Japan 2022 Ke...
æ·±ć±€ć­Šçż’ăźæ–°ă—ă„ćżœç”šăšă€ ăă‚Œă‚’æ”Żăˆă‚‹èšˆçź—æ©Ÿăźé€Č挖 - Preferred Networks CEO è„żć·ćŸč (SEMICON Japan 2022 Ke...
 
Kubernetes ControllerをScale-Outさせるæ–čæł• / Kubernetes Meetup Tokyo #55
Kubernetes ControllerをScale-Outさせるæ–čæł• / Kubernetes Meetup Tokyo #55Kubernetes ControllerをScale-Outさせるæ–čæł• / Kubernetes Meetup Tokyo #55
Kubernetes ControllerをScale-Outさせるæ–čæł• / Kubernetes Meetup Tokyo #55
 
Kaggle Happywhaleコンペć„Șć‹è§Łæł•ă§ăźOptunaäœżç”šäș‹äŸ‹ - 2022/12/10 Optuna Meetup #2
Kaggle Happywhaleコンペć„Șć‹è§Łæł•ă§ăźOptunaäœżç”šäș‹äŸ‹ - 2022/12/10 Optuna Meetup #2Kaggle Happywhaleコンペć„Șć‹è§Łæł•ă§ăźOptunaäœżç”šäș‹äŸ‹ - 2022/12/10 Optuna Meetup #2
Kaggle Happywhaleコンペć„Șć‹è§Łæł•ă§ăźOptunaäœżç”šäș‹äŸ‹ - 2022/12/10 Optuna Meetup #2
 
最新ăƒȘăƒȘăƒŒă‚čOptuna V3ぼ慹ど - 2022/12/10 Optuna Meetup #2
最新ăƒȘăƒȘăƒŒă‚čOptuna V3ぼ慹ど - 2022/12/10 Optuna Meetup #2最新ăƒȘăƒȘăƒŒă‚čOptuna V3ぼ慹ど - 2022/12/10 Optuna Meetup #2
最新ăƒȘăƒȘăƒŒă‚čOptuna V3ぼ慹ど - 2022/12/10 Optuna Meetup #2
 
Optuna DashboardたçŽčä»‹ăšèš­èšˆè§ŁèȘŹ - 2022/12/10 Optuna Meetup #2
Optuna DashboardたçŽčä»‹ăšèš­èšˆè§ŁèȘŹ - 2022/12/10 Optuna Meetup #2Optuna DashboardたçŽčä»‹ăšèš­èšˆè§ŁèȘŹ - 2022/12/10 Optuna Meetup #2
Optuna DashboardたçŽčä»‹ăšèš­èšˆè§ŁèȘŹ - 2022/12/10 Optuna Meetup #2
 
ă‚čă‚żăƒŒăƒˆă‚ąăƒƒăƒ—ăŒææĄˆă™ă‚‹2030ćčŽăźææ–™é–‹ç™ș - 2022/11/11 QPARCèŹ›æŒ”
ă‚čă‚żăƒŒăƒˆă‚ąăƒƒăƒ—ăŒææĄˆă™ă‚‹2030ćčŽăźææ–™é–‹ç™ș - 2022/11/11 QPARCèŹ›æŒ”ă‚čă‚żăƒŒăƒˆă‚ąăƒƒăƒ—ăŒææĄˆă™ă‚‹2030ćčŽăźææ–™é–‹ç™ș - 2022/11/11 QPARCèŹ›æŒ”
ă‚čă‚żăƒŒăƒˆă‚ąăƒƒăƒ—ăŒææĄˆă™ă‚‹2030ćčŽăźææ–™é–‹ç™ș - 2022/11/11 QPARCèŹ›æŒ”
 
Deep LearningăźăŸă‚ăźć°‚ç”šăƒ—ăƒ­ă‚»ăƒƒă‚”ă€ŒMN-Core」ぼ開ç™șăšæŽ»ç”šïŒˆ2022/10/19æ±ć€§ć€§ć­Šé™ąă€Œ èžćˆæƒ…ć ±ć­Šç‰čćˆ„èŹ›çŸ©â…ąă€ïŒ‰
Deep LearningăźăŸă‚ăźć°‚ç”šăƒ—ăƒ­ă‚»ăƒƒă‚”ă€ŒMN-Core」ぼ開ç™șăšæŽ»ç”šïŒˆ2022/10/19æ±ć€§ć€§ć­Šé™ąă€Œ èžćˆæƒ…ć ±ć­Šç‰čćˆ„èŹ›çŸ©â…ąă€ïŒ‰Deep LearningăźăŸă‚ăźć°‚ç”šăƒ—ăƒ­ă‚»ăƒƒă‚”ă€ŒMN-Core」ぼ開ç™șăšæŽ»ç”šïŒˆ2022/10/19æ±ć€§ć€§ć­Šé™ąă€Œ èžćˆæƒ…ć ±ć­Šç‰čćˆ„èŹ›çŸ©â…ąă€ïŒ‰
Deep LearningăźăŸă‚ăźć°‚ç”šăƒ—ăƒ­ă‚»ăƒƒă‚”ă€ŒMN-Core」ぼ開ç™șăšæŽ»ç”šïŒˆ2022/10/19æ±ć€§ć€§ć­Šé™ąă€Œ èžćˆæƒ…ć ±ć­Šç‰čćˆ„èŹ›çŸ©â…ąă€ïŒ‰
 
PFNにおける研究開ç™ș2022/10/19 æ±ć€§ć€§ć­Šé™ąă€Œèžćˆæƒ…ć ±ć­Šç‰čćˆ„èŹ›çŸ©â…ąă€ïŒ‰
PFNにおける研究開ç™ș2022/10/19 æ±ć€§ć€§ć­Šé™ąă€Œèžćˆæƒ…ć ±ć­Šç‰čćˆ„èŹ›çŸ©â…ąă€ïŒ‰PFNにおける研究開ç™ș2022/10/19 æ±ć€§ć€§ć­Šé™ąă€Œèžćˆæƒ…ć ±ć­Šç‰čćˆ„èŹ›çŸ©â…ąă€ïŒ‰
PFNにおける研究開ç™ș2022/10/19 æ±ć€§ć€§ć­Šé™ąă€Œèžćˆæƒ…ć ±ć­Šç‰čćˆ„èŹ›çŸ©â…ąă€ïŒ‰
 
è‡Ș然蚀èȘžć‡Šç†ă‚’ ćœč立おるたはăȘăœé›Łă—ă„ăźă‹ïŒˆ2022/10/25æ±ć€§ć€§ć­Šé™ąă€Œè‡Ș然蚀èȘžć‡Šç†ćżœç”šă€ïŒ‰
è‡Ș然蚀èȘžć‡Šç†ă‚’ ćœč立おるたはăȘăœé›Łă—ă„ăźă‹ïŒˆ2022/10/25æ±ć€§ć€§ć­Šé™ąă€Œè‡Ș然蚀èȘžć‡Šç†ćżœç”šă€ïŒ‰è‡Ș然蚀èȘžć‡Šç†ă‚’ ćœč立おるたはăȘăœé›Łă—ă„ăźă‹ïŒˆ2022/10/25æ±ć€§ć€§ć­Šé™ąă€Œè‡Ș然蚀èȘžć‡Šç†ćżœç”šă€ïŒ‰
è‡Ș然蚀èȘžć‡Šç†ă‚’ ćœč立おるたはăȘăœé›Łă—ă„ăźă‹ïŒˆ2022/10/25æ±ć€§ć€§ć­Šé™ąă€Œè‡Ș然蚀èȘžć‡Šç†ćżœç”šă€ïŒ‰
 
Kubernetes ă«ă“ă‚Œă‹ă‚‰ć…„ă‚‹ă‹ă‚‚ă—ă‚ŒăȘă„æłšç›źæ©ŸèƒœïŒïŒˆ2022ćčŽ11月版 / TechFeed Experts Night #7 〜 ă‚łăƒłăƒ†ăƒŠæŠ€èĄ“ă‚’èȘžă‚‹
Kubernetes ă«ă“ă‚Œă‹ă‚‰ć…„ă‚‹ă‹ă‚‚ă—ă‚ŒăȘă„æłšç›źæ©ŸèƒœïŒïŒˆ2022ćčŽ11月版 / TechFeed Experts Night #7 〜 ă‚łăƒłăƒ†ăƒŠæŠ€èĄ“ă‚’èȘžă‚‹Kubernetes ă«ă“ă‚Œă‹ă‚‰ć…„ă‚‹ă‹ă‚‚ă—ă‚ŒăȘă„æłšç›źæ©ŸèƒœïŒïŒˆ2022ćčŽ11月版 / TechFeed Experts Night #7 〜 ă‚łăƒłăƒ†ăƒŠæŠ€èĄ“ă‚’èȘžă‚‹
Kubernetes ă«ă“ă‚Œă‹ă‚‰ć…„ă‚‹ă‹ă‚‚ă—ă‚ŒăȘă„æłšç›źæ©ŸèƒœïŒïŒˆ2022ćčŽ11月版 / TechFeed Experts Night #7 〜 ă‚łăƒłăƒ†ăƒŠæŠ€èĄ“ă‚’èȘžă‚‹
 
Matlantisâ„ąăźăƒ‹ăƒ„ăƒŒăƒ©ăƒ«ăƒăƒƒăƒˆăƒŻăƒŒă‚Żăƒăƒ†ăƒłă‚·ăƒŁăƒ«PFPた適甚範ć›Čæ‹ĄćŒ”
Matlantisâ„ąăźăƒ‹ăƒ„ăƒŒăƒ©ăƒ«ăƒăƒƒăƒˆăƒŻăƒŒă‚Żăƒăƒ†ăƒłă‚·ăƒŁăƒ«PFPた適甚範ć›Čæ‹ĄćŒ”Matlantisâ„ąăźăƒ‹ăƒ„ăƒŒăƒ©ăƒ«ăƒăƒƒăƒˆăƒŻăƒŒă‚Żăƒăƒ†ăƒłă‚·ăƒŁăƒ«PFPた適甚範ć›Čæ‹ĄćŒ”
Matlantisâ„ąăźăƒ‹ăƒ„ăƒŒăƒ©ăƒ«ăƒăƒƒăƒˆăƒŻăƒŒă‚Żăƒăƒ†ăƒłă‚·ăƒŁăƒ«PFPた適甚範ć›Čæ‹ĄćŒ”
 
PFNたă‚Șăƒłăƒ—ăƒŹèšˆçź—æ©Ÿă‚Żăƒ©ă‚čă‚żăźć–ă‚Šç”„ăż_珏55ć›žæƒ…ć ±ç§‘ć­Šè‹„æ‰‹ăźäŒš
PFNたă‚Șăƒłăƒ—ăƒŹèšˆçź—æ©Ÿă‚Żăƒ©ă‚čă‚żăźć–ă‚Šç”„ăż_珏55ć›žæƒ…ć ±ç§‘ć­Šè‹„æ‰‹ăźäŒšPFNたă‚Șăƒłăƒ—ăƒŹèšˆçź—æ©Ÿă‚Żăƒ©ă‚čă‚żăźć–ă‚Šç”„ăż_珏55ć›žæƒ…ć ±ç§‘ć­Šè‹„æ‰‹ăźäŒš
PFNたă‚Șăƒłăƒ—ăƒŹèšˆçź—æ©Ÿă‚Żăƒ©ă‚čă‚żăźć–ă‚Šç”„ăż_珏55ć›žæƒ…ć ±ç§‘ć­Šè‹„æ‰‹ăźäŒš
 
ç¶šăƒ»PFN たă‚ȘンプレMLćŸșç›€ăźć–ă‚Šç”„ăż / ă‚ȘンプレMLćŸș盀 on Kubernetes 〜PFNă€ăƒ€ăƒ•ăƒŒă€œ #2
ç¶šăƒ»PFN たă‚ȘンプレMLćŸșç›€ăźć–ă‚Šç”„ăż / ă‚ȘンプレMLćŸș盀 on Kubernetes 〜PFNă€ăƒ€ăƒ•ăƒŒă€œ #2ç¶šăƒ»PFN たă‚ȘンプレMLćŸșç›€ăźć–ă‚Šç”„ăż / ă‚ȘンプレMLćŸș盀 on Kubernetes 〜PFNă€ăƒ€ăƒ•ăƒŒă€œ #2
ç¶šăƒ»PFN たă‚ȘンプレMLćŸșç›€ăźć–ă‚Šç”„ăż / ă‚ȘンプレMLćŸș盀 on Kubernetes 〜PFNă€ăƒ€ăƒ•ăƒŒă€œ #2
 
Kubernetes Service Account As Multi-Cloud Identity / Cloud Native Security Co...
Kubernetes Service Account As Multi-Cloud Identity / Cloud Native Security Co...Kubernetes Service Account As Multi-Cloud Identity / Cloud Native Security Co...
Kubernetes Service Account As Multi-Cloud Identity / Cloud Native Security Co...
 
KubeCon + CloudNativeCon Europe 2022 Recap / Kubernetes Meetup Tokyo #51 / #k...
KubeCon + CloudNativeCon Europe 2022 Recap / Kubernetes Meetup Tokyo #51 / #k...KubeCon + CloudNativeCon Europe 2022 Recap / Kubernetes Meetup Tokyo #51 / #k...
KubeCon + CloudNativeCon Europe 2022 Recap / Kubernetes Meetup Tokyo #51 / #k...
 
KubeCon + CloudNativeCon Europe 2022 Recap - Batch/HPCăźæœźæ”ăšScheduleræ‹ĄćŒ”äș‹äŸ‹ / Kub...
KubeCon + CloudNativeCon Europe 2022 Recap - Batch/HPCăźæœźæ”ăšScheduleræ‹ĄćŒ”äș‹äŸ‹ / Kub...KubeCon + CloudNativeCon Europe 2022 Recap - Batch/HPCăźæœźæ”ăšScheduleræ‹ĄćŒ”äș‹äŸ‹ / Kub...
KubeCon + CloudNativeCon Europe 2022 Recap - Batch/HPCăźæœźæ”ăšScheduleræ‹ĄćŒ”äș‹äŸ‹ / Kub...
 
ç‹Źæ–­ăšćèŠ‹ă§éžă‚“ă  Kubernetes 1.24 ăźæłšç›źæ©Ÿèƒœăšä»ŠćŸŒ! / Kubernetes Meetup Tokyo 50
ç‹Źæ–­ăšćèŠ‹ă§éžă‚“ă  Kubernetes 1.24 ăźæłšç›źæ©Ÿèƒœăšä»ŠćŸŒ! / Kubernetes Meetup Tokyo 50ç‹Źæ–­ăšćèŠ‹ă§éžă‚“ă  Kubernetes 1.24 ăźæłšç›źæ©Ÿèƒœăšä»ŠćŸŒ! / Kubernetes Meetup Tokyo 50
ç‹Źæ–­ăšćèŠ‹ă§éžă‚“ă  Kubernetes 1.24 ăźæłšç›źæ©Ÿèƒœăšä»ŠćŸŒ! / Kubernetes Meetup Tokyo 50
 

Recently uploaded

Key Trends Shaping the Future of Infrastructure.pdf
Key Trends Shaping the Future of Infrastructure.pdfKey Trends Shaping the Future of Infrastructure.pdf
Key Trends Shaping the Future of Infrastructure.pdf
Cheryl Hung
 
PHP Frameworks: I want to break free (IPC Berlin 2024)
PHP Frameworks: I want to break free (IPC Berlin 2024)PHP Frameworks: I want to break free (IPC Berlin 2024)
PHP Frameworks: I want to break free (IPC Berlin 2024)
Ralf Eggert
 
UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4
DianaGray10
 
Essentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersEssentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with Parameters
Safe Software
 
The Future of Platform Engineering
The Future of Platform EngineeringThe Future of Platform Engineering
The Future of Platform Engineering
Jemma Hussein Allen
 
When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...
Elena Simperl
 
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...