48. Confidential
Global VM migration is also possible because the VM host machines share the same "storage space".
Real-time availability makes this possible; the actual data copy follows afterwards.
(The VM operator needs a virtually common Ethernet segment and a fat pipe for the memory copy.)
[Figure: live migration of a VM between distributed areas — TOKYO site (before migration), TOYAMA site (after migration), and an OSAKA site; each site copies data to DR sites.]
The real-time and active-active features make the system appear to be just a simple "shared storage".
Live migration is also possible between DR sites
(it requires a common subnet and a fat pipe for the memory copy, of course).
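The "memory copy over a fat pipe" above is the pre-copy phase of live migration. A minimal sketch of that loop, purely illustrative (real hypervisors such as KVM/QEMU implement this in their migration threads; the function and its parameters are made up for this example):

```python
# Illustrative pre-copy loop: copy all memory while the VM runs, then
# re-copy whatever the guest dirtied, until the remainder is small
# enough for a short stop-and-copy phase.

def pre_copy_migrate(pages, dirty_rounds, max_rounds=30, stop_threshold=8):
    """Return (rounds_used, pages_left_for_stop_and_copy).

    pages        -- total number of guest memory pages
    dirty_rounds -- per-round sets of pages the guest dirtied meanwhile
    """
    to_copy = set(range(pages))          # first round: copy everything
    rounds = 0
    dirty_iter = iter(dirty_rounds)
    while rounds < max_rounds and len(to_copy) > stop_threshold:
        # copy 'to_copy' over the network while the VM keeps running...
        rounds += 1
        # ...meanwhile the guest dirties some pages, which must be re-sent
        to_copy = set(next(dirty_iter, set()))
    return rounds, len(to_copy)

rounds, remaining = pre_copy_migrate(
    pages=1024,
    dirty_rounds=[set(range(10)), {1, 2, 3}],
)
print(rounds, remaining)  # 2 3
```

The loop converges quickly only when the pipe outruns the guest's dirtying rate, which is why the slide insists on a fat pipe.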
49.
Front-end servers aggregate client requests (READ / WRITE) so that
many back-end servers can handle user data in a parallel and distributed manner.
Both performance and storage space scale with the number of servers.
[Figure: clients send READ and WRITE requests to front-end (access) servers through an Access Gateway (via NFS, CIFS or similar); back-end (core) servers read and write the blocks. Scalable performance and scalable storage size come from parallel and distributed processing technology.]
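As a rough sketch of how a front-end might spread a file's blocks across back-end servers: hash-based placement is an assumption here (the slides do not state the actual placement algorithm), and the server names and block size are invented.

```python
# Hypothetical front-end: split a file into blocks and map each block
# to a back-end "core server" by hashing its ID.

import hashlib

BLOCK_SIZE = 4  # tiny for the example; real systems use KB/MB blocks

def place(block_id: str, servers: list) -> str:
    """Map a block ID to a back-end server by hashing (assumed scheme)."""
    h = int(hashlib.sha256(block_id.encode()).hexdigest(), 16)
    return servers[h % len(servers)]

def write_file(data: bytes, servers: list) -> dict:
    """Split data into blocks and record which server stores each one."""
    layout = {}
    for i in range(0, len(data), BLOCK_SIZE):
        block_id = f"blk-{i // BLOCK_SIZE}"
        layout[block_id] = (place(block_id, servers), data[i:i + BLOCK_SIZE])
    return layout

servers = ["core-1", "core-2", "core-3"]
layout = write_file(b"hello world!", servers)
# READ path: the front-end would fetch these blocks from their servers
# in parallel, then reassemble them in order.
restored = b"".join(blk for _srv, blk in layout.values())
print(restored == b"hello world!")  # True
```

Because independent blocks land on independent servers, both throughput and capacity grow with the number of back-end servers, which is the scaling claim the slide makes.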
51.
1. Assign a new unique ID to any updated block (to ensure consistency).
2. Make a replica in the local site (for a quick ACK) and update the metadata.
3. Make replicas in the globally distributed environment (the actual data copies).
[Figure: the key to "distributed replication" across the back-end (multi-site) — most important! A file consists of many blocks. (1) Assign a new unique ID to any updated block, so that the ID ensures consistency. (2) Create 2 copies in the local site for each piece of user data, write the META data, and return an ACK. (3-a) Make a copy in a different location right after the ACK. (3-b) Remove one of the 2 local blocks in the future. Multiplicity across multiple locations makes each piece of user data redundant locally at first, ending with 3 distributed copies at last.]
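The three steps above can be sketched in code. This is a toy model under stated assumptions: the ID generator, site names, and in-memory dictionaries are invented, and "3 distributed copies at last" is modeled as one surviving local copy plus two remote copies.

```python
# Toy model of the replication steps: (1) unique ID, (2) two local
# copies + metadata then ACK, (3-a) remote copies after the ACK,
# (3-b) drop one local copy later.

import itertools

_id_counter = itertools.count(1)   # assumed ID source; real systems differ

def write_block(data: bytes, local_sites: list, remote_sites: list):
    block_id = next(_id_counter)                             # (1) unique ID
    replicas = {local_sites[0]: data, local_sites[1]: data}  # (2) 2 local copies
    metadata = {"id": block_id, "sites": sorted(replicas)}   # write META data
    ack = True                                               # ...return quick ACK
    for site in remote_sites:                                # (3-a) copies in
        replicas[site] = data                                # other locations
    del replicas[local_sites[1]]                             # (3-b) drop one
    metadata["sites"] = sorted(replicas)                     # local copy later
    return ack, metadata, replicas

ack, meta, replicas = write_block(
    b"user-data", ["tokyo-a", "tokyo-b"], ["osaka", "toyama"])
print(len(replicas))  # 3 distributed copies, as in step (3)
```

The local pair exists only to make the ACK fast and safe; once remote copies land, the redundancy migrates from "local" to "distributed" without the client ever waiting on a WAN round trip.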
60. iozone -aceI
-a: full automatic mode
-c: include close() in the timing calculations
-e: include flush (fsync, fflush) in the timing calculations
-I: use DIRECT IO if possible for all file operations
72. We have been developing a widely distributed cluster storage system and
evaluating it along with various applications. The main advantage of
our storage is its very fast random I/O performance, even though it provides
a POSIX-compatible file system interface.
Long Distance Live Migration
for Disaster Recovery
• Long Distance Live Migration with the
distributed cluster storage
• Transparent accessibility during or
after live migration
Applications
on Distributed
Cluster Storage
• Sharing a globally unique file system on
the distributed cluster storage
• Accessing the nearest site based on
the file replication algorithm
Content Delivery Platform
over inter-cloud environment
• Delivers large-volume data based on
the distributed cluster storage
• Replicates to many sites automatically
• Works as a cache service as well
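"Accessing the nearest site" might look like the sketch below. The latency table, site names, and selection rule are all assumptions for illustration; the slides do not describe the actual replication or site-selection algorithm.

```python
# Hypothetical nearest-replica selection: pick the replica site with
# the lowest measured round-trip time to the client.

LATENCY_MS = {          # client -> site RTTs (made-up numbers)
    "TOKYO": 2.0,
    "OSAKA": 8.5,
    "TOYAMA": 6.1,
}

def nearest_replica(replica_sites, latency=LATENCY_MS):
    """Return the replica site with the smallest latency to the client."""
    candidates = [s for s in replica_sites if s in latency]
    if not candidates:
        raise LookupError("no reachable replica")
    return min(candidates, key=lambda s: latency[s])

print(nearest_replica(["OSAKA", "TOYAMA"]))  # TOYAMA
```

Since every site holds (or can fetch) a replica, the same globally unique file system can serve each client from whichever copy is closest, which is also why the platform doubles as a cache service.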
[Figure: three use cases over the Internet — VM live migration on the widely distributed cluster storage, file sharing between inter-cloud environments on the cluster storage, and a content delivery platform.]
81. [Figure: real vs. imaginary demand. Users run IT services on Virtualized Machines (VMs) — e.g. 4 cores, 8 GB memory, 40 GB storage per VM. The Cloud Service Provider's Virtualization Servers hold the available supplies. Plots of frequency × cores over time contrast the users' imaginary (requested) resources with the real demand. The larger the users' estimation errors, the larger the provider's profit.]
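The profit argument above is simple arithmetic: the gap between requested ("imaginary") and actually used ("real") resources is what the provider can overcommit. A tiny illustration, with all numbers invented:

```python
# Illustrative overcommit arithmetic: cores users reserved minus the
# peak cores they actually used is headroom the provider can resell.

def overcommit_headroom(requested_cores, real_usage):
    """Cores the provider can resell while still meeting real demand.

    requested_cores -- sum of cores users reserved for their VMs
    real_usage      -- samples of actual concurrent core usage
    """
    peak_real = max(real_usage)
    return requested_cores - peak_real

# 10 VMs x 4 requested cores each, but actual usage peaks at 12 cores:
headroom = overcommit_headroom(10 * 4, [6, 9, 12, 8])
print(headroom)  # 28
```

The worse the users' estimates, the wider that gap, and every core in the gap can be sold again to another tenant.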
84. [Figure: Users run IT services on Virtualized Machines (VMs). A user acquires multiple VMs, and multiple services run on each VM; a service requests resources from its VM, and the VM provides resources to the service. Plots of frequency × cores over time (seconds, days, weeks) show per-day, per-week, and per-year periodicity in demand.]
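The per-day and per-week periodicity the plots describe can be mimicked with a synthetic workload. The shapes and amplitudes below are invented; only the idea of layered daily and weekly cycles comes from the slide:

```python
# Synthetic core demand with a daily sinusoidal cycle and a weekday
# bump, mimicking the per-day / per-week periodicity in the plots.

import math

def demand(hour):
    """Cores demanded at a given hour of the week (hour 0 = Monday 0:00)."""
    daily = 1 + math.sin(2 * math.pi * (hour % 24) / 24)  # per-day cycle
    weekday = 0 if (hour // 24) % 7 >= 5 else 1           # per-week cycle
    return 4 + 3 * daily + 2 * weekday

week = [demand(h) for h in range(24 * 7)]
print(max(week[:24 * 5]) > max(week[24 * 5:]))  # True: weekdays peak higher
```

Stacking such predictable cycles from many tenants is what lets a provider pack VMs tightly: peaks that repeat on known periods can be planned for rather than reserved against.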
90. Array
[Figure: Users require user experience from IT services running on Virtualized Machines (VMs); the VMs run on Virtualization Servers in a Datacenter, which supply the resources. Plots of frequency × cores over time (msec, day, week) again show per-day, per-week, and per-year periodicity at each layer.]