Distribute Cloud Environment with Docker

Table of Contents

1. Introduction
2. Install Docker on Ubuntu 14.04
3. Set Hadoop Environment in a Docker Container
4. Set HBase Environment in a Docker Container
5. Export and Import a Docker Image between Nodes in Cluster
6. The Problem I Haven't Solved
7. Possible Problems

Introduction

This is a basic tutorial to help developers and system administrators build a basic cloud environment with Docker.

In this book, I will not use a Dockerfile to create containers, because I don't know how to use one yet.

At the end of this book, I will summarize the problems I haven't solved yet.

If there is any mistake, please let me know.
Install Docker on Ubuntu 14.04 LTS

Ubuntu-maintained Package Installation

$ sudo apt-get update
$ sudo apt-get install docker.io

Enable tab-completion of Docker commands in bash:

$ source /etc/bash_completion.d/docker.io

Docker-maintained Package Installation

Note: If you want to install the most recent version of Docker, you do not need to install docker.io from Ubuntu.

First, check that your system can deal with https URLs: the file /usr/lib/apt/methods/https should exist. If it doesn't, you need to install the package apt-transport-https:

$ sudo apt-get update
$ sudo apt-get install apt-transport-https

Add the Docker repository key to your system keychain:

$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D857

Add the Docker repository to your apt sources list, then install:

$ sudo sh -c "echo deb http://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"
$ sudo apt-get update
$ sudo apt-get install lxc-docker

To verify that everything has worked as expected:

$ sudo docker run -i -t ubuntu /bin/bash

This will download the latest Ubuntu image and then start bash in a new container.
Basic Docker Command-line

Show running containers:
$	sudo	docker	ps
Show	all	images	in	your	local	repository:
$	sudo	docker	images
Run a container from a specific image:

$ sudo docker run -i -t <image_id || repository:tag> /bin/bash

Start an existing container:

$ sudo docker start -i <container_id>
Attach to a running container:
$	sudo	docker	attach	<container_id>
Exit	without	shutting	down	a	container:
[Ctrl-p]	+	[Ctrl-q]
Reference

https://docs.docker.com/installation/ubuntulinux/#ubuntu-trusty-1404-lts-64-bit
Set Hadoop Environment in a Docker Container

In a new Docker container, you need to set up the basic environment before setting up Hadoop.

Update Apt List

$ sudo apt-get update

Install Java JDK

$ sudo apt-get install default-jdk

The default JDK will be installed at /usr/lib/jvm/<java-version>.

Install some needed packages

$ sudo apt-get install git wget vim ssh

Create a user to manage the hadoop cluster

$ adduser hduser

Grant the user privileges:

$ visudo

Append the hduser you just created below the root entry, giving it the same privilege specification as root, as shown below.
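For example, the privilege specification lines in /etc/sudoers would look like this; the hduser line simply mirrors Ubuntu's default root entry:

## User privilege specification
root    ALL=(ALL:ALL) ALL
hduser  ALL=(ALL:ALL) ALL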
Generate an SSH authorized key to allow connections between nodes without a password

$ ssh-keygen -t rsa
$ cat .ssh/id_rsa.pub >> .ssh/authorized_keys
Set	the	port	of	SSH	and	SSHD
Because I use Docker as my distribution tool, the default SSH port 22 is already taken by the host machine, so I need another port for communication between Docker containers on different host machines. In my example, I will use port 2122 for listening and sending requests.
ssh
$	sudo	vi	/etc/ssh/ssh_config
->	Port	2122
:wq!
sshd
$	sudo	vi	/etc/ssh/sshd_config
->	Port	2122
->	UsePAM	no
:wq!
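As a quick check that sshd now listens on the new port, restart ssh and connect as the hduser account created earlier:

$ sudo service ssh restart
$ ssh -p 2122 hduser@localhost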
Set Hadoop Environment

Suppose my cloud environment is as follows:

VM: 5 nodes (master, master2, slave1, slave2, slave3)
OS: Ubuntu 14.04 LTS
Docker version: 1.3.1
Hadoop version: 2.6.0
Download	hadoop-2.6.0
$	wget	http://ftp.twaren.net/Unix/Web/apache/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
$	tar	zxvf	hadoop-2.6.0.tar.gz
Set	environment	path
$	sudo	vi	/etc/profile
->	export	JAVA_HOME=/usr/lib/jvm/<java-version>
->	export	HADOOP_HOME=<YOUR_HADOOP_PACKAGE_PATH>
->	export	HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
->	export	PATH=$HADOOP_HOME/bin:$PATH
->	export	CLASSPATH=$HADOOP_HOME/lib:$CLASSPATH
:wq!
$	source	/etc/profile
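To confirm the environment took effect, you can ask Hadoop for its version; the expected first line of output is shown as a comment:

$ hadoop version
## Hadoop 2.6.0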
Modify Hadoop configuration

All the needed configuration files are stored in <HADOOP_HOME>/etc/hadoop.
core-site.xml
<configuration>
				<property>
								<name>fs.defaultFS</name>
								<value>hdfs://master:9000</value>
								<description>The	master	endpoint	in	cluster.</description>
				</property>
				<property>
								<name>io.file.buffer.size</name>
								<value>131072</value>
				</property>
				<property>
								<name>hadoop.tmp.dir</name>
								<value>file:/<HADOOP_HOME>/temp</value>
								<description>A	base	for	other	temporary	directories.</description>
				</property>
</configuration>
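As an illustration of fs.defaultFS (usable once the cluster is up later in this tutorial): bare HDFS paths resolve against it, so these two commands are equivalent:

$ hadoop fs -ls /
$ hadoop fs -ls hdfs://master:9000/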
hdfs-site.xml
<configuration>
				<property>
								<name>dfs.namenode.secondary.http-address</name>
								<value>master2:9001</value>
								<description>Run a secondary namenode on master2 in case the master node crashes.</description>
				</property>
				<property>
								<name>dfs.namenode.name.dir</name>
								<value>file:/<HADOOP_HOME>/dfs/name</value>
								<description>Where the namenode stores its metadata.</description>
				</property>
				<property>
								<name>dfs.datanode.data.dir</name>
								<value>file:/<HADOOP_HOME>/dfs/data</value>
								<description>Where the datanodes store their blocks.</description>
				</property>
				<property>
								<name>dfs.replication</name>
								<value>3</value>
								<description>The number of replicas kept for each block in the cluster.</description>
				</property>
				<property>
								<name>dfs.webhdfs.enabled</name>
								<value>true</value>
				</property>
				<property>
								<name>dfs.datanode.max.xcievers</name>
								<value>4096</value>
				</property>
				<property>
								<name>dfs.permissions</name>
								<value>false</value>
				</property>
				<property>
								<name>dfs.support.append</name>
								<value>true</value>
				</property>
</configuration>
mapred-site.xml
<configuration>
				<property>
								<name>mapreduce.framework.name</name>
								<value>yarn</value>
				</property>
				<property>
								<name>mapreduce.jobhistory.address</name>
								<value>master:10020</value>
				</property>
				<property>
								<name>mapreduce.jobhistory.webapp.address</name>
								<value>master:19888</value>
				</property>
</configuration>
yarn-site.xml
<configuration>
				<property>
								<name>yarn.nodemanager.aux-services</name>
								<value>mapreduce_shuffle</value>
				</property>
				<property>
								<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
								<value>org.apache.hadoop.mapred.ShuffleHandler</value>
				</property>
				<property>
								<name>yarn.resourcemanager.address</name>
								<value>master:8032</value>
				</property>
				<property>
								<name>yarn.resourcemanager.scheduler.address</name>
								<value>master:8030</value>
				</property>
				<property>
								<name>yarn.resourcemanager.resource-tracker.address</name>
								<value>master:8031</value>
				</property>
				<property>
								<name>yarn.resourcemanager.admin.address</name>
								<value>master:8033</value>
				</property>
				<property>
								<name>yarn.resourcemanager.webapp.address</name>
								<value>master:8088</value>
				</property>
</configuration>
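Once YARN is running (later in this tutorial), one quick sanity check of the ResourceManager is fetching the web UI configured in yarn.resourcemanager.webapp.address; this uses the wget installed earlier:

$ wget -qO- http://master:8088/cluster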
hadoop-env.sh
$	vi	<HADOOP_CONF_DIR>/hadoop-env.sh
->	export	JAVA_HOME=/usr/lib/jvm/<java-version>
:wq!
yarn-env.sh
$	vi	<HADOOP_CONF_DIR>/yarn-env.sh
->	export	JAVA_HOME=/usr/lib/jvm/<java-version>
:wq!
slaves
$	vi	<HADOOP_CONF_DIR>/slaves
->	master2
->	slave1
->	slave2
->	slave3
Set HBase Environment in a Docker Container

After setting up the Hadoop environment in the previous section, you can set up the HBase environment now.

In the following example, I will use a custom Zookeeper to manage the resources of my cluster.

HBase version: 0.99.2
Zookeeper version: 3.3.6

Download HBase and Zookeeper

$ wget http://ftp.twaren.net/Unix/Web/apache/hbase/hbase-0.99.2/hbase-0.99.2-bin.tar.gz
$ wget http://ftp.twaren.net/Unix/Web/apache/zookeeper/zookeeper-3.3.6/zookeeper-3.3.6.tar.gz
$ tar -zxvf hbase-0.99.2-bin.tar.gz
$ tar -zxvf zookeeper-3.3.6.tar.gz

Set the configuration of HBase

hbase-site.xml
<configuration>
				<property>
								<name>hbase.rootdir</name>
								<value>hdfs://master:9000/hbase</value>
				</property>
				<property>
								<name>hbase.cluster.distributed</name>
								<value>true</value>
				</property>
				<property>
								<name>hbase.master</name>
								<value>master:60000</value>
				</property>
				<property>
								<name>hbase.zookeeper.property.clientPort</name>
								<value>2181</value>
				</property>
				<property>
								<name>hbase.zookeeper.property.dataDir</name>
								<value><ZOOKEEPER_HOME>/data</value>
				</property>
				<property>
								<name>hbase.zookeeper.quorum</name>
								<value>master</value>
				</property>
				<property>
								<name>dfs.support.append</name>
								<value>true</value>
				</property>
</configuration>
hbase-env.sh
$ vi <HBASE_HOME>/conf/hbase-env.sh
-> export HBASE_HOME=<HBASE_HOME>
-> export HADOOP_HOME=<HADOOP_HOME>
-> export HBASE_CLASSPATH=$HADOOP_CONF_DIR
-> export HBASE_MANAGES_ZK=false
regionservers

$ vi <HBASE_HOME>/conf/regionservers
->	master2
->	slave1
->	slave2
->	slave3
As in the hbase-env.sh settings above, I set HBASE_MANAGES_ZK=false so that my custom Zookeeper manages and monitors the cluster's resources.
Set the configuration of Zookeeper

zoo.cfg
$	vi	<ZOOKEEPER_HOME>/conf/zoo.cfg
->	dataDir=<ZOOKEEPER_HOME>/data
->	clientPort=2181
->	server.1=master:2888:3888
Then add a myid file under <ZOOKEEPER_HOME>/data to tell Zookeeper which node it is running on.

For example, since my zoo.cfg sets server.1=master:2888:3888, this Zookeeper server runs on the master node, binding ports 2888 and 3888. So I need to tell Zookeeper that this machine is server 1:
$	vi	myid
->	1
:wq!
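As a hypothetical extension (my setup runs a single Zookeeper server): a three-node quorum would list one server.N line per node in zoo.cfg, and each node's myid would contain its own N:

## in zoo.cfg, identical on every node:
server.1=master:2888:3888
server.2=master2:2888:3888
server.3=slave1:2888:3888

## then on master2, for example:
$ echo 2 > <ZOOKEEPER_HOME>/data/myid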
Set System Environment

$ sudo vi /etc/profile
-> export HBASE_HOME=<HBASE_HOME>
-> export ZOOKEEPER_HOME=<ZOOKEEPER_HOME>
-> export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin:$ZOOKEEPER_HOME/bin:$PATH
:wq!
Brief Summary

After following the previous sections to set the Hadoop and HBase configurations, we can commit this Docker image and use it to distribute the cloud cluster.
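A minimal sketch of the commit, where <container_id> is the container you just configured and <image_repository:tag> is whatever name you choose:

$ sudo docker commit <container_id> <image_repository:tag>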
Export and Import a Docker Image between Nodes in Cluster

NOTE: My experiment environment is on Windows Azure Virtual Machines.

The following figure shows how I plan to distribute my cloud cluster.

If you followed the previous sections, you now have a Docker image with a Hadoop environment. To use it, you need to duplicate the image to every other endpoint across which you want to distribute the Hadoop cluster.

Export a Docker image to a tar file

Save the image as a tar file:

$ sudo docker save <image_repository:tag> > XXXX.tar

After finishing the export, I use scp to transfer the image to the other nodes in the cluster:

$ scp -P [port] XXXX.tar [account]@[domain]:<where you want to store it>

Now, switch to master2 to show how to use the image we just transferred from master:

$ sudo docker load < XXXX.tar

After loading the tar file, we can check that the image was imported into the local repository:

$ sudo docker images
Distribute with Docker Container

Now we can start using this image to distribute the cluster. In this example, I write a simple shell script to run the Docker image instead of a Dockerfile, because I haven't learned how to use a Dockerfile to build a Docker container yet.

Because the --link option does not fit my situation, I use basic port mapping to let the containers connect with each other over the network.

Shell script

$ vi bootstrap.sh
-> sudo docker run -i -t -p 2122:2122 -p 50020:50020 -p 50090:50090 -p 50070:50070 -p 50010:50010 -p 
:wq!

$ sh bootstrap.sh
In this example, I use master2 as the hostname and map all needed ports from the container to the endpoint machine.
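Since the run line above is cut off, here is a sketch of what the full command might look like; the -h flag sets the container's hostname, and the exact port list and image name are my assumptions, not the original line:

$ sudo docker run -i -t -h master2 \
    -p 2122:2122 -p 50010:50010 -p 50020:50020 \
    -p 50070:50070 -p 50090:50090 \
    <image_repository:tag> /bin/bash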
Now we can boot the hadoop cluster. There are some steps to do before start-dfs.sh.

Every container in the cluster needs to run the following:

$ source /etc/profile

$ sudo vi /etc/hosts
-> put all IPs and hostnames

## for example
127.0.0.5	master
10.0.0.2	master2
10.0.0.3	slave1
10.0.0.4	slave2
10.0.0.5	slave3
:wq!
##	restart	ssh	twice
$	sudo	service	ssh	restart
$	sudo	service	ssh	restart
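A quick connectivity check between containers at this point, assuming the hosts entries and port 2122 setup above:

$ ssh -p 2122 hduser@master2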
The master container needs to do the following:
##	format	hdfs	namenode
$	hdfs	namenode	-format
$ <HADOOP_HOME>/sbin/start-dfs.sh
$ <HADOOP_HOME>/sbin/start-yarn.sh
##	make	the	root	directory	of	hbase
$	hadoop	fs	-mkdir	/hbase
##	start	zookeeper
$	zkServer.sh	start
##	start	hbase
$	start-hbase.sh
##	TEST
$	jps
If successful, you will see the following processes running on master.
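Roughly, it should look like the following; the PIDs are made up, and the exact process set is my assumption based on the services started above:

$ jps
1201 NameNode
1423 ResourceManager
1645 QuorumPeerMain
1867 HMaster
2089 Jps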
The Problem I Haven't Solved

Now we can start using hadoop and hbase to record and analyze data by following this tutorial. Nevertheless, there are still problems I've met but not solved.

In section 4, after I docker run each container, I have to modify /etc/hosts in every container so the containers can connect to each other. This causes two problems.

First, it is inconvenient to modify every hosts file when there is a large number of endpoint machines.

Second, if a container reboots, its IP will be automatically reset by Docker. Suppose you use HBase as your NoSQL DB: Zookeeper will store the old IP, and the regionservers will no longer be able to trace back to the master node.

I've surveyed some methods to solve this but have not implemented them yet. Look at the following link:

http://jpetazzo.github.io/2013/10/16/configure-docker-bridge-network/

If I solve these problems, I will update the book.

Update

2015/01/20

I've referenced the link above and tried to solve the second problem I met. Some new problems came up, so I haven't solved it yet.

I modified the interface IP and route of the container successfully, but I can't ssh into the container after doing that. The following code is what I used; maybe everybody can discuss this issue on stackoverflow:

http://stackoverflow.com/questions/27937185/assign-static-ip-to-docker-container
## find the PID of the container's main process
pid=$(sudo docker inspect -f '{{.State.Pid}}' <container_name> 2>/dev/null)

## expose the container's network namespace to `ip netns`
sudo rm -rf /var/run/netns/*
sudo ln -s /proc/$pid/ns/net /var/run/netns/$pid

## create a veth pair and attach the host end to the docker0 bridge
sudo ip link add A type veth peer name B
sudo brctl addif docker0 A
sudo ip link set A up

## move the other end into the container's namespace
sudo ip link set B netns $pid

## replace the container's eth0 with the new interface
sudo ip netns exec $pid ip link set eth0 down
sudo ip netns exec $pid ip link delete eth0
sudo ip netns exec $pid ip link set dev B name eth0
sudo ip netns exec $pid ip link set eth0 address 12:34:56:78:9a:bc
sudo ip netns exec $pid ip link set eth0 down
sudo ip netns exec $pid ip link set eth0 up

## assign the static IP and default route
sudo ip netns exec $pid ip addr add 172.17.0.1/16 dev eth0
sudo ip netns exec $pid ip route add default via 172.17.42.1
Possible Problems

ClockOutOfSyncException

This happens because the dates/times of the hadoop cluster hosts are not in sync. You can use ntpdate asia.pool.ntp.org to sync the date/time of each host.

ConnectionRefused

Please confirm that the IPs and hostnames in /etc/hosts are correct.

To be continued...
