I have 10+ years of experience in the Telco IT sector, working with large enterprise solutions as well as building specialized solutions from scratch.
I founded a company called Innologica in 2013 with the mission of developing next-gen OSS and BSS solutions. A side project born back then, called Inoreader, quickly turned into a leading platform for content consumption and is now a core product of the company.
Yordan Yordanov
CEO Innologica
Introduction
Agenda
Presenter and company intro
Who are we and what do we do?
Inoreader
What is Inoreader?
Infrastructure issues
We were facing numerous scalability issues while at the same time we had an array of servers doing mostly nothing because of filled storage. At a certain point we hit a brick wall.
Migration to OpenNebula and StorPool
To fix our scalability problems we pinpointed the need for a virtualization layer and distributed storage. After thorough research we ended up with OpenNebula and StorPool.
Tips
Some useful takeaways for you.
QA
If you have any questions, I will gladly answer them.
Who Are We?
Product company
We are not a sweatshop. We
make successful products.
International market
Our customers are all over the
globe.
Relaxed environment
We do not push the devs, but
we cherish top performers.
Smart team
The team is small, but each
member brings great value.
Inoreader
RSS aggregation platform and information hub
200,000 MAU
We have 200k monthly active users (MAU) and more than
30k simultaneous sessions in peak times. Recently passed 1M
registrations. 10k+ premium subscribers.
17,000,000,000 articles in MySQL and ES
We keep the full archive in enormous MySQL Databases and
a separate Elasticsearch cluster just for searching. Around
20TB of data without the replicas. 10M+ new articles per day.
1,300,000 feed updates per hour
We need to update our 15+ Million feeds in a timely manner.
A lot of machines are dedicated for this task only.
60 VMs and 14 physical hosts
The platform currently runs on 60 virtual machines, mostly in our main DC. A few physical hosts remain that were not good candidates for virtualization, mainly for Elasticsearch.
INFRASTRUCTURE ISSUES
Our main drivers to migrate to a fully virtualized environment
Hardware capacity
We needed to constantly buy new servers just to keep up with the growing databases, because local storage was being quickly exhausted.
We were using expensive RAID cards and RAID-10 setups for all databases. Those servers never used more than 10% of their CPUs, so it was a complete waste of resources.
Our problem (resource utilization):
• CPU: 10%
• Memory: 50%
• Storage: 90%
• Rack space: 100%
Hardware failures
Not so common but always hair-pulling
All components are bound to fail. Whenever we lost a server, there was always at least some service disruption, if not a whole outage. All databases needed replicas, which skyrocketed server costs and still didn't provide automatic HA. If a hard drive fails in a RAID-10 setup you need to replace it ASAP, and bigger drives are more prone to cause errors while rebuilding.
Large databases on RAID-10 are slow to recover from crashes, so replicas should be carefully set up and should run on identical (expensive) hardware in case one needs to be promoted to master.
Nobody likes to go to a DC on a Saturday to replace a failed drive, reinstall the OS and rotate replicas. We much prefer to ride bikes!
Problem description
CHOSEN SOLUTION
We chose to virtualize everything using
OpenNebula + StorPool
Project Timeline
2017: PROJECT START
We knew for quite a while that we needed a solution to the growth problem.
Nov 2017: CHOOSING A SOLUTION
We held meetings with vendors and researched different solutions.
Dec 2017 – Jan 2018: PLANNING AND FIRST TESTS
While the hardware was in transit we took our time to learn OpenNebula and test it as much as possible. We also started our first VMs.
Feb 2018: EXECUTION
We migrated all servers through several iterations, described in more detail here.
Mar 2018: SUCCESS
We finally migrated our last server, and all VMs were happily running on OpenNebula and StorPool.
Hardware
StorPool nodes
We chose three standard Supermicro SC836 3U servers.
Switches
As recommended by StorPool we chose Quanta LB8 for the
10G network and Quanta LB4-M for the Gigabit network.
Hosts
We have reused our old servers, but modified their CPUs and
memory.
Others
10G LAN cards and cables
StorPool Nodes
StorPool recommends using commodity hardware. Supermicro offers a good platform without vendor-specific requirements for RAID cards, etc., and is very budget-friendly.
Our setup:
• Supermicro CSE-836B chassis
• Supermicro X10SRL-F motherboard
• 1x Intel Xeon E5-1620 v4 CPU (8 threads @ 3.5GHz)
• 64GB DDR4-2666 RAM
• Avago 3108L RAID controller with 2G cache
• Intel X520-DA2 10G Ethernet card
• 8x 4TB HDD LFF SATA3 7200 RPM
• 8x 2TB HDD LFF SATA3 7200 RPM (reused from older servers)
Gigabit Network – Quanta LB4M
We were struggling with some old TP-Link SG2424 switches that we
wanted to upgrade, so we used the opportunity to upgrade the regular 1G
network too. We chose the Quanta LB4M.
Key aspects
• 48x Gigabit RJ45 ports
• 2x 10G SFP+ ports
• Redundant power supplies
• Very cheap!
• EOL – You might want to stack up some spare switches!
• Stable (4 months without a single flop for now)
10G Network – Quanta LB8
Again on StorPool's recommendation, we procured three Quanta LB8 switches. They seem to be performing great so far.
Key aspects
• 48x 10G SFP+ ports
• Redundant power supplies
• Very cheap for what they offer!
• EOL – You might want to stack up some spare switches!
• Stable (4 months without a single flop for now)
Hosts
We have reused our old servers, but with some significant upgrades. We
currently have 14 hosts, all with the following configuration:
• Supermicro 1U chassis with X9DRW motherboards
• 2x Intel Xeon E5-2650 v2 CPU (32 total threads)
• Dual power supply
• 128G DDR3 12800R Memory
• Intel X520-DA2 10G card
• 2xHDD in mdraid for OS only
EXECUTION
Story with pictures
Preparation and OpenNebula learning
While waiting for our hardware to arrive, we installed OpenNebula on two hosts with a shared NFS datastore and tried everything we could think of to battle-test it.
After we were happy with how things looked and worked, we started moving some small things like name servers, SMTP servers, ticketing systems, etc. to dedicated VMs to decouple services from servers, which made our lives easier later.
New Rack
We rented a new rack in our colocation center since we didn't have any more space available in the old rack.
The idea was simple: deploy StorPool in the new rack only and gradually migrate hosts.
StorPool Nodes
The servers landed in our office in late January.
It was Friday afternoon, but we quickly installed them in the lab and let the
StorPool guys do their magic over the weekend.
Installation Day
The next Monday StorPool finished all tests and the equipment was ready
to be installed in our DC.
Installation Day
Fast forward several hours and we had our first StorPool cluster up and running. Still no hosts. StorPool needed to perform a full cluster check in the real environment to make sure everything worked well.
First hosts
The very next day we installed our first hosts – the temporary ones that
were holding VMs installed during our test period. Those VMs were still
running on local storage and NFS.
The next step was to migrate them to StorPool.
VM Migration to StorPool
StorPool helps their customers with this step, but here's a summary of what we did:
1. Shut down the VM. Use Sunstone or the CLI to shut down the VM.
2. Create StorPool volumes. On the host, use the StorPool CLI to create volume(s) for the VM with the exact size of the original images.
3. Copy the volumes. Use dd for raw images or qemu-img convert for qcow2 images to copy them to the StorPool volumes.
4. Reattach images. Detach the local images and attach the StorPool ones. Mind the order. There's a catch with large images*.
5. Power up the VM. Check that the VM boots properly. We're not done yet…
6. Finalize the migration. To fully migrate persistent VMs, use the Recover -> delete-recreate function to redeploy all files to StorPool.
*Large images (100G+) take forever to detach on slow local storage, so we had to kill the cp process and use the onevm recover success option to tell OpenNebula that the detach actually completed. This is risky but saves a LOT of downtime.
After all VMs are migrated, you can delete the old system and image datastores and leave only the StorPool DSs.
At this point we are completely on StorPool!
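The per-VM steps above can be sketched as a command sequence. This is a dry-run sketch only: the VM ID, image path, volume name, size, and the exact storpool/onevm invocations are illustrative assumptions, not the verbatim procedure, so confirm the real syntax with StorPool and the OpenNebula docs first.

```shell
VM_ID=42                                       # hypothetical VM ID
VOL="one-img-${VM_ID}"                         # hypothetical StorPool volume name
SRC="/var/lib/one/datastores/0/${VM_ID}/disk.0" # hypothetical local image path

# run() only prints the command, so this sketch is safe to execute as-is.
run() { echo "+ $*"; }

run onevm poweroff "$VM_ID"                     # 1. shut down the VM
run storpool volume "$VOL" create size 100G     # 2. volume with the exact image size
run dd if="$SRC" of="/dev/storpool/$VOL" bs=1M  # 3. raw image; qemu-img convert for qcow2
run onevm disk-detach "$VM_ID" 0                # 4. detach local image, then attach the StorPool one
run onevm resume "$VM_ID"                       # 5. power up and check that it boots
run onevm recover "$VM_ID"                      # 6. then delete-recreate to finalize
```

Dropping the `run` wrapper turns the sketch into the real commands, which is exactly why it is left in here.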
Next hosts
From here on we had several iterations that consisted of roughly the
following:
• Create a list of servers for migration. The more hosts we already have, the more servers we can move in a single iteration
• Create VMs and migrate the services there
• Use the opportunity to untangle microservices running on the same
machine
• Make sure servers are completely drained from any services.
• Shut down the servers and plan a visit to the DC the next day
• Continue on the next slide…
Remove servers from the old rack
Remove HDDs and RAID controllers
Upgrade CPUs and RAM
Install 10G card and smaller HDDs and reinstall OS
Install the servers in the new rack and hand over to StorPool
RINSE AND REPEAT
At each iteration we move more servers at once
because we have more capacity for VMs
Current capacity
At the end we achieved a 3x capacity boost in terms of processing power and memory with just a fraction of our previous servers, because with virtualization we can distribute the resources however we'd like. In terms of storage we are on a completely different level: we are no longer restricted to a single machine's capacity, we have 3x redundancy, and all the performance we need.
We did it!
• Allocated CPU: 37%
• Allocated Memory: 32%
• Storage: 67%
• Rack space: 70%
Extreme Makeover
The old and the new setup
100% Virtualized
No more services running
directly on bare-metal.
Lighter power footprint
300% more capacity with 60% of the previous servers, with room for expansion.
Performance gains
Huge compute and storage
performance gains.
Maintainability is a breeze
too.
Our Dashboard
A glimpse at our OpenNebula dashboard.
400 CPU cores and 1.5TB of RAM in just 14 hosts.
Hosts view
All hosts are nicely balanced using the default scheduler.
There’s always enough room to move VMs around in case a host
crashes or if we need to reboot a host.
SOME TIPS
Optimize CPU for homogeneous clusters
Available as a template setting since OpenNebula 5.4.6. Set it to host-passthrough.
This option presents the real CPU model to the VMs instead of the default QEMU CPU. It can substantially increase performance, especially if instructions like AES are needed.
Do not use it if you have different CPU models across the cluster, since it will cause VMs to crash after live migration.
For older OpenNebula setups set this as RAW DATA in the template:
<cpu mode="host-passthrough"/>
Beware of mkfs.xfs on large StorPool volumes inside VMs
We noticed that when running mkfs.xfs on large StorPool volumes (e.g. 4TB) there was a big delay before the command completed. What's worse, during this time all VMs on the host starve for IO, because the storpool_block.bin process uses 100% CPU time.
The image shown on the left is for a 1TB volume.
The reason is that mkfs uses TRIM by default and the StorPool driver supports it.
To remedy this, use the -K option for mkfs.xfs or -E nodiscard for mkfs.ext4, e.g.:
• mkfs.xfs -K /dev/sdb1
• mkfs.ext4 -E nodiscard /dev/sdb1
Use the 10G network for OpenNebula too
This is probably an obvious one, but it deserves to be mentioned. By default your hosts will probably resolve each other via the regular Gigabit network. Forcing them to talk over the 10G storage network will drastically improve live VM migration. The migration is not IO bound, so it will completely saturate the network.
Usually this is a simple /etc/hosts modification.
Consult with StorPool for your specific use case before doing this.
Live migrating a VM with 8G of RAM takes 7 seconds on 10G. The same VM takes about 1.5 minutes on a Gigabit network and will probably disturb VM communications if the network is saturated.
Live migration of highly loaded VMs can take significantly longer and should be monitored. In some cases it's enough to stop busy services for just a second for the migration to complete.
Other tips
Those are the more obvious ones that probably everyone uses in
production, but still worth mentioning.
• Use cache=none, io=native when attaching volumes
• Use virtio networking instead of the default rtl8139 NIC. The latter has
performance issues and drops packets when host IO is high
• Measure IO latency instead of IO load to judge saturation. We have
several machines with constant 99% IO load which are doing perfectly
fine.
/etc/one/vmm_exec/vmm_exec_kvm.conf:
…
DISK = [ driver = "raw" , cache = "none", io = "native",
discard = "unmap", bus = "scsi" ]
NIC = [ filter = "clean-traffic", model="virtio" ]
….
MONITORING
Dashboards
Grafana Dashboards
We have adapted the OpenNebula Dashboards with Graphite
and Grafana scripts by Sebastian Mangelkramer and used them
to create our own Grafana dashboards so we can see at a glance
which hosts are most loaded and how much overall capacity we
have.
Grafana TV Dashboard
Why not have a master dashboard on the TV at the office? This gives our team a very quick and easy way to tell if everything is working smoothly. If all you see is green, we're good :)
This dashboard shows our main DC on the first row, our backup DC on the second, and then some other critical aspects of our system. It's still a WIP, hence the empty space.
At the top is our Geckoboard that we use for more business KPIs.
At the top is our Geckoboard that we use for more business KPIs.
Server Power Usage in Grafana
Part of our virtualization project was to optimize the electricity bill by using fewer servers. We were able to easily measure our power usage using Graphite and Grafana.
If you are interested, the script for getting the data into Graphite is here:
https://gist.github.com/Jacketbg/6973efdb41a2ecfcf2a83ea84c086887
The Grafana dashboard can be found here:
https://gist.github.com/Jacketbg/7255b4f81ebb2de0e8a5708b4335c9d7
Obviously you will need to tweak it, especially the formula for
the power bill.
StorPool’s Grafana
StorPool were nice enough to give us access to their own Grafana instance, where they collect a lot of internal data about the system and KPIs. It gives us great insights that we couldn't get otherwise, so we can plan and estimate the system load very well.
What’s Left?
SSD Pool
We are currently using only an HDD pool, but we could benefit from a smaller SSD pool for picky MySQL databases.
Add more hosts
As the service grows our needs will too. We will probably
have rack space for the near years to come.
Add more StorPool nodes
We have maxed out the HDD bays on our current nodes, so we'll probably need to add more nodes in the future.
THANK YOU!
READ MORE ON BLOG.INOREADER.COM
GET THIS PRESENTATION FROM ino.to/one-amsterdam
 
OpenNebulaConf2019 - Performant and Resilient Storage the Open Source & Linux...
OpenNebulaConf2019 - Performant and Resilient Storage the Open Source & Linux...OpenNebulaConf2019 - Performant and Resilient Storage the Open Source & Linux...
OpenNebulaConf2019 - Performant and Resilient Storage the Open Source & Linux...OpenNebula Project
 
OpenNebulaConf2019 - Image Backups in OpenNebula - Momčilo Medić - ITAF
OpenNebulaConf2019 - Image Backups in OpenNebula - Momčilo Medić - ITAFOpenNebulaConf2019 - Image Backups in OpenNebula - Momčilo Medić - ITAF
OpenNebulaConf2019 - Image Backups in OpenNebula - Momčilo Medić - ITAFOpenNebula Project
 
OpenNebulaConf2019 - How We Use GOCA to Manage our OpenNebula Cloud - Jean-Ph...
OpenNebulaConf2019 - How We Use GOCA to Manage our OpenNebula Cloud - Jean-Ph...OpenNebulaConf2019 - How We Use GOCA to Manage our OpenNebula Cloud - Jean-Ph...
OpenNebulaConf2019 - How We Use GOCA to Manage our OpenNebula Cloud - Jean-Ph...OpenNebula Project
 
OpenNebulaConf2019 - Crytek: A Video gaming Edge Implementation "on the shoul...
OpenNebulaConf2019 - Crytek: A Video gaming Edge Implementation "on the shoul...OpenNebulaConf2019 - Crytek: A Video gaming Edge Implementation "on the shoul...
OpenNebulaConf2019 - Crytek: A Video gaming Edge Implementation "on the shoul...OpenNebula Project
 
Replacing vCloud with OpenNebula
Replacing vCloud with OpenNebulaReplacing vCloud with OpenNebula
Replacing vCloud with OpenNebulaOpenNebula Project
 
NTS: What We Do With OpenNebula - and Why We Do It
NTS: What We Do With OpenNebula - and Why We Do ItNTS: What We Do With OpenNebula - and Why We Do It
NTS: What We Do With OpenNebula - and Why We Do ItOpenNebula Project
 
OpenNebula from the Perspective of an ISP
OpenNebula from the Perspective of an ISPOpenNebula from the Perspective of an ISP
OpenNebula from the Perspective of an ISPOpenNebula Project
 
NTS CAPTAIN / OpenNebula at Julius Blum GmbH
NTS CAPTAIN / OpenNebula at Julius Blum GmbHNTS CAPTAIN / OpenNebula at Julius Blum GmbH
NTS CAPTAIN / OpenNebula at Julius Blum GmbHOpenNebula Project
 
Performant and Resilient Storage: The Open Source & Linux Way
Performant and Resilient Storage: The Open Source & Linux WayPerformant and Resilient Storage: The Open Source & Linux Way
Performant and Resilient Storage: The Open Source & Linux WayOpenNebula Project
 
NetApp Hybrid Cloud with OpenNebula
NetApp Hybrid Cloud with OpenNebulaNetApp Hybrid Cloud with OpenNebula
NetApp Hybrid Cloud with OpenNebulaOpenNebula Project
 
NSX with OpenNebula - upcoming 5.10
NSX with OpenNebula - upcoming 5.10NSX with OpenNebula - upcoming 5.10
NSX with OpenNebula - upcoming 5.10OpenNebula Project
 
Security for Private Cloud Environments
Security for Private Cloud EnvironmentsSecurity for Private Cloud Environments
Security for Private Cloud EnvironmentsOpenNebula Project
 
CheckPoint R80.30 Installation on OpenNebula
CheckPoint R80.30 Installation on OpenNebulaCheckPoint R80.30 Installation on OpenNebula
CheckPoint R80.30 Installation on OpenNebulaOpenNebula Project
 
Cloud Disaggregation with OpenNebula
Cloud Disaggregation with OpenNebulaCloud Disaggregation with OpenNebula
Cloud Disaggregation with OpenNebulaOpenNebula Project
 

More from OpenNebula Project (20)

OpenNebulaConf2019 - Welcome and Project Update - Ignacio M. Llorente, Rubén ...
OpenNebulaConf2019 - Welcome and Project Update - Ignacio M. Llorente, Rubén ...OpenNebulaConf2019 - Welcome and Project Update - Ignacio M. Llorente, Rubén ...
OpenNebulaConf2019 - Welcome and Project Update - Ignacio M. Llorente, Rubén ...
 
OpenNebulaConf2019 - Building Virtual Environments for Security Analyses of C...
OpenNebulaConf2019 - Building Virtual Environments for Security Analyses of C...OpenNebulaConf2019 - Building Virtual Environments for Security Analyses of C...
OpenNebulaConf2019 - Building Virtual Environments for Security Analyses of C...
 
OpenNebulaConf2019 - CORD and Edge computing with OpenNebula - Alfonso Aureli...
OpenNebulaConf2019 - CORD and Edge computing with OpenNebula - Alfonso Aureli...OpenNebulaConf2019 - CORD and Edge computing with OpenNebula - Alfonso Aureli...
OpenNebulaConf2019 - CORD and Edge computing with OpenNebula - Alfonso Aureli...
 
OpenNebulaConf2019 - 6 years (+) OpenNebula - Lessons learned - Sebastian Man...
OpenNebulaConf2019 - 6 years (+) OpenNebula - Lessons learned - Sebastian Man...OpenNebulaConf2019 - 6 years (+) OpenNebula - Lessons learned - Sebastian Man...
OpenNebulaConf2019 - 6 years (+) OpenNebula - Lessons learned - Sebastian Man...
 
OpenNebulaConf2019 - Performant and Resilient Storage the Open Source & Linux...
OpenNebulaConf2019 - Performant and Resilient Storage the Open Source & Linux...OpenNebulaConf2019 - Performant and Resilient Storage the Open Source & Linux...
OpenNebulaConf2019 - Performant and Resilient Storage the Open Source & Linux...
 
OpenNebulaConf2019 - Image Backups in OpenNebula - Momčilo Medić - ITAF
OpenNebulaConf2019 - Image Backups in OpenNebula - Momčilo Medić - ITAFOpenNebulaConf2019 - Image Backups in OpenNebula - Momčilo Medić - ITAF
OpenNebulaConf2019 - Image Backups in OpenNebula - Momčilo Medić - ITAF
 
OpenNebulaConf2019 - How We Use GOCA to Manage our OpenNebula Cloud - Jean-Ph...
OpenNebulaConf2019 - How We Use GOCA to Manage our OpenNebula Cloud - Jean-Ph...OpenNebulaConf2019 - How We Use GOCA to Manage our OpenNebula Cloud - Jean-Ph...
OpenNebulaConf2019 - How We Use GOCA to Manage our OpenNebula Cloud - Jean-Ph...
 
OpenNebulaConf2019 - Crytek: A Video gaming Edge Implementation "on the shoul...
OpenNebulaConf2019 - Crytek: A Video gaming Edge Implementation "on the shoul...OpenNebulaConf2019 - Crytek: A Video gaming Edge Implementation "on the shoul...
OpenNebulaConf2019 - Crytek: A Video gaming Edge Implementation "on the shoul...
 
Replacing vCloud with OpenNebula
Replacing vCloud with OpenNebulaReplacing vCloud with OpenNebula
Replacing vCloud with OpenNebula
 
NTS: What We Do With OpenNebula - and Why We Do It
NTS: What We Do With OpenNebula - and Why We Do ItNTS: What We Do With OpenNebula - and Why We Do It
NTS: What We Do With OpenNebula - and Why We Do It
 
OpenNebula from the Perspective of an ISP
OpenNebula from the Perspective of an ISPOpenNebula from the Perspective of an ISP
OpenNebula from the Perspective of an ISP
 
NTS CAPTAIN / OpenNebula at Julius Blum GmbH
NTS CAPTAIN / OpenNebula at Julius Blum GmbHNTS CAPTAIN / OpenNebula at Julius Blum GmbH
NTS CAPTAIN / OpenNebula at Julius Blum GmbH
 
Performant and Resilient Storage: The Open Source & Linux Way
Performant and Resilient Storage: The Open Source & Linux WayPerformant and Resilient Storage: The Open Source & Linux Way
Performant and Resilient Storage: The Open Source & Linux Way
 
NetApp Hybrid Cloud with OpenNebula
NetApp Hybrid Cloud with OpenNebulaNetApp Hybrid Cloud with OpenNebula
NetApp Hybrid Cloud with OpenNebula
 
NSX with OpenNebula - upcoming 5.10
NSX with OpenNebula - upcoming 5.10NSX with OpenNebula - upcoming 5.10
NSX with OpenNebula - upcoming 5.10
 
Security for Private Cloud Environments
Security for Private Cloud EnvironmentsSecurity for Private Cloud Environments
Security for Private Cloud Environments
 
CheckPoint R80.30 Installation on OpenNebula
CheckPoint R80.30 Installation on OpenNebulaCheckPoint R80.30 Installation on OpenNebula
CheckPoint R80.30 Installation on OpenNebula
 
DE-CIX: CloudConnectivity
DE-CIX: CloudConnectivityDE-CIX: CloudConnectivity
DE-CIX: CloudConnectivity
 
DDC Demo
DDC DemoDDC Demo
DDC Demo
 
Cloud Disaggregation with OpenNebula
Cloud Disaggregation with OpenNebulaCloud Disaggregation with OpenNebula
Cloud Disaggregation with OpenNebula
 

Recently uploaded

Optimizing AI for immediate response in Smart CCTV
Optimizing AI for immediate response in Smart CCTVOptimizing AI for immediate response in Smart CCTV
Optimizing AI for immediate response in Smart CCTVshikhaohhpro
 
The Top App Development Trends Shaping the Industry in 2024-25 .pdf
The Top App Development Trends Shaping the Industry in 2024-25 .pdfThe Top App Development Trends Shaping the Industry in 2024-25 .pdf
The Top App Development Trends Shaping the Industry in 2024-25 .pdfayushiqss
 
Sector 18, Noida Call girls :8448380779 Model Escorts | 100% verified
Sector 18, Noida Call girls :8448380779 Model Escorts | 100% verifiedSector 18, Noida Call girls :8448380779 Model Escorts | 100% verified
Sector 18, Noida Call girls :8448380779 Model Escorts | 100% verifiedDelhi Call girls
 
%in Bahrain+277-882-255-28 abortion pills for sale in Bahrain
%in Bahrain+277-882-255-28 abortion pills for sale in Bahrain%in Bahrain+277-882-255-28 abortion pills for sale in Bahrain
%in Bahrain+277-882-255-28 abortion pills for sale in Bahrainmasabamasaba
 
Azure_Native_Qumulo_High_Performance_Compute_Benchmarks.pdf
Azure_Native_Qumulo_High_Performance_Compute_Benchmarks.pdfAzure_Native_Qumulo_High_Performance_Compute_Benchmarks.pdf
Azure_Native_Qumulo_High_Performance_Compute_Benchmarks.pdfryanfarris8
 
The Guide to Integrating Generative AI into Unified Continuous Testing Platfo...
The Guide to Integrating Generative AI into Unified Continuous Testing Platfo...The Guide to Integrating Generative AI into Unified Continuous Testing Platfo...
The Guide to Integrating Generative AI into Unified Continuous Testing Platfo...kalichargn70th171
 
Crypto Cloud Review - How To Earn Up To $500 Per DAY Of Bitcoin 100% On AutoP...
Crypto Cloud Review - How To Earn Up To $500 Per DAY Of Bitcoin 100% On AutoP...Crypto Cloud Review - How To Earn Up To $500 Per DAY Of Bitcoin 100% On AutoP...
Crypto Cloud Review - How To Earn Up To $500 Per DAY Of Bitcoin 100% On AutoP...SelfMade bd
 
%in kempton park+277-882-255-28 abortion pills for sale in kempton park
%in kempton park+277-882-255-28 abortion pills for sale in kempton park %in kempton park+277-882-255-28 abortion pills for sale in kempton park
%in kempton park+277-882-255-28 abortion pills for sale in kempton park masabamasaba
 
%in kaalfontein+277-882-255-28 abortion pills for sale in kaalfontein
%in kaalfontein+277-882-255-28 abortion pills for sale in kaalfontein%in kaalfontein+277-882-255-28 abortion pills for sale in kaalfontein
%in kaalfontein+277-882-255-28 abortion pills for sale in kaalfonteinmasabamasaba
 
%in tembisa+277-882-255-28 abortion pills for sale in tembisa
%in tembisa+277-882-255-28 abortion pills for sale in tembisa%in tembisa+277-882-255-28 abortion pills for sale in tembisa
%in tembisa+277-882-255-28 abortion pills for sale in tembisamasabamasaba
 
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdf
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdfLearn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdf
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdfkalichargn70th171
 
How To Troubleshoot Collaboration Apps for the Modern Connected Worker
How To Troubleshoot Collaboration Apps for the Modern Connected WorkerHow To Troubleshoot Collaboration Apps for the Modern Connected Worker
How To Troubleshoot Collaboration Apps for the Modern Connected WorkerThousandEyes
 
OpenChain - The Ramifications of ISO/IEC 5230 and ISO/IEC 18974 for Legal Pro...
OpenChain - The Ramifications of ISO/IEC 5230 and ISO/IEC 18974 for Legal Pro...OpenChain - The Ramifications of ISO/IEC 5230 and ISO/IEC 18974 for Legal Pro...
OpenChain - The Ramifications of ISO/IEC 5230 and ISO/IEC 18974 for Legal Pro...Shane Coughlan
 
10 Trends Likely to Shape Enterprise Technology in 2024
10 Trends Likely to Shape Enterprise Technology in 202410 Trends Likely to Shape Enterprise Technology in 2024
10 Trends Likely to Shape Enterprise Technology in 2024Mind IT Systems
 
The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...
The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...
The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...ICS
 
Exploring the Best Video Editing App.pdf
Exploring the Best Video Editing App.pdfExploring the Best Video Editing App.pdf
Exploring the Best Video Editing App.pdfproinshot.com
 
The title is not connected to what is inside
The title is not connected to what is insideThe title is not connected to what is inside
The title is not connected to what is insideshinachiaurasa2
 
Pharm-D Biostatistics and Research methodology
Pharm-D Biostatistics and Research methodologyPharm-D Biostatistics and Research methodology
Pharm-D Biostatistics and Research methodologyAnusha Are
 
Right Money Management App For Your Financial Goals
Right Money Management App For Your Financial GoalsRight Money Management App For Your Financial Goals
Right Money Management App For Your Financial GoalsJhone kinadey
 

Recently uploaded (20)

Optimizing AI for immediate response in Smart CCTV
Optimizing AI for immediate response in Smart CCTVOptimizing AI for immediate response in Smart CCTV
Optimizing AI for immediate response in Smart CCTV
 
The Top App Development Trends Shaping the Industry in 2024-25 .pdf
The Top App Development Trends Shaping the Industry in 2024-25 .pdfThe Top App Development Trends Shaping the Industry in 2024-25 .pdf
The Top App Development Trends Shaping the Industry in 2024-25 .pdf
 
Sector 18, Noida Call girls :8448380779 Model Escorts | 100% verified
Sector 18, Noida Call girls :8448380779 Model Escorts | 100% verifiedSector 18, Noida Call girls :8448380779 Model Escorts | 100% verified
Sector 18, Noida Call girls :8448380779 Model Escorts | 100% verified
 
%in Bahrain+277-882-255-28 abortion pills for sale in Bahrain
%in Bahrain+277-882-255-28 abortion pills for sale in Bahrain%in Bahrain+277-882-255-28 abortion pills for sale in Bahrain
%in Bahrain+277-882-255-28 abortion pills for sale in Bahrain
 
CHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICECHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
CHEAP Call Girls in Pushp Vihar (-DELHI )🔝 9953056974🔝(=)/CALL GIRLS SERVICE
 
Azure_Native_Qumulo_High_Performance_Compute_Benchmarks.pdf
Azure_Native_Qumulo_High_Performance_Compute_Benchmarks.pdfAzure_Native_Qumulo_High_Performance_Compute_Benchmarks.pdf
Azure_Native_Qumulo_High_Performance_Compute_Benchmarks.pdf
 
The Guide to Integrating Generative AI into Unified Continuous Testing Platfo...
The Guide to Integrating Generative AI into Unified Continuous Testing Platfo...The Guide to Integrating Generative AI into Unified Continuous Testing Platfo...
The Guide to Integrating Generative AI into Unified Continuous Testing Platfo...
 
Crypto Cloud Review - How To Earn Up To $500 Per DAY Of Bitcoin 100% On AutoP...
Crypto Cloud Review - How To Earn Up To $500 Per DAY Of Bitcoin 100% On AutoP...Crypto Cloud Review - How To Earn Up To $500 Per DAY Of Bitcoin 100% On AutoP...
Crypto Cloud Review - How To Earn Up To $500 Per DAY Of Bitcoin 100% On AutoP...
 
%in kempton park+277-882-255-28 abortion pills for sale in kempton park
%in kempton park+277-882-255-28 abortion pills for sale in kempton park %in kempton park+277-882-255-28 abortion pills for sale in kempton park
%in kempton park+277-882-255-28 abortion pills for sale in kempton park
 
%in kaalfontein+277-882-255-28 abortion pills for sale in kaalfontein
%in kaalfontein+277-882-255-28 abortion pills for sale in kaalfontein%in kaalfontein+277-882-255-28 abortion pills for sale in kaalfontein
%in kaalfontein+277-882-255-28 abortion pills for sale in kaalfontein
 
%in tembisa+277-882-255-28 abortion pills for sale in tembisa
%in tembisa+277-882-255-28 abortion pills for sale in tembisa%in tembisa+277-882-255-28 abortion pills for sale in tembisa
%in tembisa+277-882-255-28 abortion pills for sale in tembisa
 
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdf
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdfLearn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdf
Learn the Fundamentals of XCUITest Framework_ A Beginner's Guide.pdf
 
How To Troubleshoot Collaboration Apps for the Modern Connected Worker
How To Troubleshoot Collaboration Apps for the Modern Connected WorkerHow To Troubleshoot Collaboration Apps for the Modern Connected Worker
How To Troubleshoot Collaboration Apps for the Modern Connected Worker
 
OpenChain - The Ramifications of ISO/IEC 5230 and ISO/IEC 18974 for Legal Pro...
OpenChain - The Ramifications of ISO/IEC 5230 and ISO/IEC 18974 for Legal Pro...OpenChain - The Ramifications of ISO/IEC 5230 and ISO/IEC 18974 for Legal Pro...
OpenChain - The Ramifications of ISO/IEC 5230 and ISO/IEC 18974 for Legal Pro...
 
10 Trends Likely to Shape Enterprise Technology in 2024
10 Trends Likely to Shape Enterprise Technology in 202410 Trends Likely to Shape Enterprise Technology in 2024
10 Trends Likely to Shape Enterprise Technology in 2024
 
The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...
The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...
The Real-World Challenges of Medical Device Cybersecurity- Mitigating Vulnera...
 
Exploring the Best Video Editing App.pdf
Exploring the Best Video Editing App.pdfExploring the Best Video Editing App.pdf
Exploring the Best Video Editing App.pdf
 
The title is not connected to what is inside
The title is not connected to what is insideThe title is not connected to what is inside
The title is not connected to what is inside
 
Pharm-D Biostatistics and Research methodology
Pharm-D Biostatistics and Research methodologyPharm-D Biostatistics and Research methodology
Pharm-D Biostatistics and Research methodology
 
Right Money Management App For Your Financial Goals
Right Money Management App For Your Financial GoalsRight Money Management App For Your Financial Goals
Right Money Management App For Your Financial Goals
 

OpenNebulaConf2018 - How Inoreader Migrated from Bare-Metal Containers to OpenNebula and StorPool - Yordan Yordanov - Innologica

  • 1.
  • 2. I have 10+ years of experience in the Telco IT sector, working with large enterprise solutions as well as building specialized solutions from scratch. I founded Innologica in 2013 with the mission of developing next-gen OSS and BSS solutions. A side project called Inoreader was born back then, which quickly turned into a leading platform for content consumption and is now a core product of the company.
Yordan Yordanov, CEO, Innologica
  • 3. Agenda
Introduction: presenter and company intro. Who are we and what do we do?
Inoreader: what is Inoreader?
Infrastructure issues: we were facing numerous scalability issues while at the same time we had an array of servers doing nothing, mostly because of filled storage. At a certain point we hit a brick wall.
Migration to OpenNebula and StorPool: to fix our scalability problems we pinpointed the need for a virtualization layer and distributed storage. After thorough research we ended up with OpenNebula and StorPool.
Tips: some useful takeaways for you.
Q&A: if you have any questions I will gladly answer them.
  • 4.
  • 5. Who Are We?
Product company: we are not a sweatshop; we make successful products.
International market: our customers are all over the globe.
Relaxed environment: we do not push the devs, but we cherish top performers.
Smart team: the team is small, but each member brings great value.
  • 6. Inoreader: RSS aggregation platform and information hub
200,000 MAU: we have 200k monthly active users and more than 30k simultaneous sessions at peak times. Recently passed 1M registrations. 10k+ premium subscribers.
17,000,000,000 articles in MySQL and ES: we keep the full archive in enormous MySQL databases and a separate Elasticsearch cluster just for searching. Around 20TB of data without the replicas. 10M+ new articles per day.
1,300,000 feed updates per hour: we need to update our 15+ million feeds in a timely manner. A lot of machines are dedicated to this task only.
60 VMs and 14 physical hosts: the platform currently runs on 60 virtual machines, mainly in our main DC. A few physical hosts remain that were not good candidates for virtualization, mainly for Elasticsearch.
  • 7. INFRASTRUCTURE ISSUES: our main drivers to migrate to a fully virtualized environment
  • 8. Hardware capacity
Our problem: we needed to constantly buy new servers just to keep up with the growing databases, because local storage was quickly exhausted. We were using expensive RAID cards and RAID-10 setups for all databases. Those servers never used more than 10% of their CPUs, so it was a complete waste of resources.
Utilization: CPU 10%, Memory 50%, Storage 90%, Rack space 100%.
  • 9. Hardware failures: not so common, but always hair-pulling
All components are bound to fail. Whenever we lost a server, there was always at least some service disruption, if not a whole outage. All databases needed replicas, which skyrocketed server costs and still didn't provide automatic HA. If a hard drive fails in a RAID-10 setup you need to replace it ASAP, and bigger drives are more prone to errors while rebuilding. Large databases on RAID-10 are slow to recover from crashes, so replication has to be set up carefully and run on identical (expensive) hardware in case a replica has to be promoted to master. Nobody likes going to a DC on Saturday to replace a failed drive, reinstall the OS and rotate replicas. We much prefer to ride bikes!
  • 10. CHOSEN SOLUTION We chose to virtualize everything using OpenNebula + StorPool
  • 11. Project Timeline
2017: PROJECT START. We knew for quite a while that we needed a solution to the growth problem.
Nov 2017: CHOOSING A SOLUTION. We held meetings with vendors and researched different solutions.
Dec 2017 – Jan 2018: PLANNING AND FIRST TESTS. While the hardware was in transit we took our time to learn OpenNebula and test it as much as possible. We also started our first VMs.
Feb 2018: EXECUTION. We migrated all servers through several iterations, described in more detail here.
Mar 2018: SUCCESS. We finally migrated our last server, and all VMs were happily running on OpenNebula and StorPool.
  • 12. Hardware
StorPool nodes: three standard SuperMicro SC836 3U servers.
Switches: as recommended by StorPool, Quanta LB8 for the 10G network and Quanta LB4-M for the Gigabit network.
Hosts: we reused our old servers, but upgraded their CPUs and memory.
Others: 10G LAN cards and cables.
  • 13. StorPool Nodes
StorPool recommends using commodity hardware. Supermicro offers a good platform without vendor-specific requirements for RAID cards, etc., and is very budget friendly. Our setup:
- Supermicro CSE-836B chassis
- Supermicro X10SRL-F motherboard
- 1x Intel Xeon E5-1620 v4 CPU (8 threads @ 3.5GHz)
- 64GB DDR4-2666 RAM
- Avago 3108L RAID controller with 2G cache
- Intel X520-DA2 10G Ethernet card
- 8x 4TB HDD LFF SATA3 7200 RPM
- 8x 2TB HDD LFF SATA3 7200 RPM (reused from older servers)
  • 14. Gigabit Network – Quanta LB4M
We were struggling with some old TP-Link SG2424 switches that we wanted to upgrade, so we used the opportunity to upgrade the regular 1G network too. We chose the Quanta LB4M. Key aspects:
- 48x Gigabit RJ45 ports
- 2x 10G SFP+ ports
- Redundant power supplies
- Very cheap!
- EOL: you might want to stock up on spare switches!
- Stable (4 months without a single flop so far)
  • 15. 10G Network – Quanta LB8
Again on StorPool's recommendation, we procured three Quanta LB8 switches. They seem to be performing great so far. Key aspects:
- 48x 10G SFP+ ports
- Redundant power supplies
- Very cheap for what they offer!
- EOL: you might want to stock up on spare switches!
- Stable (4 months without a single flop so far)
  • 16. Hosts
We reused our old servers, but with some significant upgrades. We currently have 14 hosts, all with the following configuration:
- Supermicro 1U chassis with X9DRW motherboards
- 2x Intel Xeon E5-2650 v2 CPU (32 total threads)
- Dual power supply
- 128G DDR3 12800R memory
- Intel X520-DA2 10G card
- 2x HDD in mdraid for the OS only
  • 18. Preparation and OpenNebula learning
While waiting for our hardware to arrive, we installed OpenNebula on two hosts with a shared NFS datastore and tried everything we could think of to battle-test it. Once we were happy with how things looked and worked, we started moving small things like name servers, SMTP servers, ticketing systems, etc. to dedicated VMs to decouple servers from services, which made our lives easier later.
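A two-host test setup like this needs a shared system datastore registered in OpenNebula. Assuming the NFS export is already mounted under /var/lib/one/datastores on both hosts (names below are illustrative, not from the talk), the datastore definition is roughly:

```
# nfs_system.tpl -- shared system datastore backed by the NFS mount
NAME   = nfs_system
TYPE   = SYSTEM_DS
TM_MAD = shared
```

Registered with `onedatastore create nfs_system.tpl`; the shared TM driver then lets hosts deploy and live-migrate VMs from the common mount.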
  • 19. New Rack
We rented a new rack in our colocation center, since we didn't have any space left in the old one. The idea was simple: deploy StorPool in the new rack only and gradually migrate hosts.
  • 20. StorPool Nodes
The servers landed in our office in late January. It was a Friday afternoon, but we quickly installed them in the lab and let the StorPool guys do their magic over the weekend.
  • 21. Installation Day
The next Monday, StorPool finished all tests and the equipment was ready to be installed in our DC.
  • 22. Installation Day
Fast-forward several hours and we had our first StorPool cluster up and running. Still no hosts. StorPool needed to perform a full cluster check in the real environment to see that everything worked well.
  • 23. First hosts
The very next day we installed our first hosts: the temporary ones holding the VMs created during our test period. Those VMs were still running on local storage and NFS. The next step was to migrate them to StorPool.
  • 24. VM Migration to StorPool
StorPool helps their customers with this step, but here's a summary of what we did:
01 Shut down the VM. Use Sunstone or the CLI to shut down the VM.
02 Create StorPool volumes. On the host, use the storpool CLI to create volume(s) for the VM with the exact size of the original images.
03 Copy the volumes. Use dd (for raw images) or qemu-img convert (for qcow2 images) to copy the images to the StorPool volumes.
04 Reattach images. Detach the local images and attach the StorPool ones. Mind the order. There's a catch with large images.*
05 Power up the VM. Check that the VM boots properly. We're not done yet...
06 Finalize the migration. To fully migrate persistent VMs, use the Recover -> delete-recreate function to redeploy all files to StorPool.
* Large images (100G+) take forever to detach on slow local storage, so we had to kill the cp process and use the onevm recover --success option to tell OpenNebula that the detach had completed. This is risky but saves a LOT of downtime.
After all VMs are migrated you can delete the old system and image datastores and leave only the StorPool DSs. At this point we are completely on StorPool!
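StorPool assisted us here, but a dry-run sketch of the copy phase (steps 02 and 03) might look like the following. All names and paths are examples, and the storpool CLI invocations are written from memory; verify them against the StorPool documentation before swapping the echo wrapper for real execution.

```shell
#!/bin/sh
# Dry-run sketch: print the commands that would migrate one raw image
# to a StorPool volume. Names, paths, and CLI syntax are illustrative.
IMG=/var/lib/one/datastores/0/42/disk.0   # local raw image (example path)
VOL=one-img-42                            # StorPool volume name (example)

run() { echo "+ $*"; }                    # dry-run wrapper: print, don't execute

run storpool volume "$VOL" create size 100G      # size must match the source image
run storpool attach volume "$VOL" here           # exposes /dev/storpool/$VOL on this host
run dd if="$IMG" of="/dev/storpool/$VOL" bs=1M conv=fsync
# For qcow2 sources, use: qemu-img convert -O raw "$IMG" /dev/storpool/$VOL
```

Only after the copy is verified would the local image be detached and the StorPool volume attached in its place (step 04).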
  • 25. Next hosts
From here on we ran several iterations, each consisting of roughly the following:
- Create a list of servers for migration. The more hosts we have, the more servers we can move in a single iteration.
- Create VMs and migrate the services there.
- Use the opportunity to untangle microservices running on the same machine.
- Make sure the servers are completely drained of any services.
- Shut down the servers and plan a visit to the DC the next day.
- Continue on the next slide...
  • 26. Remove servers from the old rack
  • 27. Remove HDDs and RAID controllers
  • 29. Install 10G cards and smaller HDDs, and reinstall the OS
  • 30. Install the servers in the new rack and hand them over to StorPool
  • 31. RINSE AND REPEAT: at each iteration we moved more servers at once, because we had more capacity for VMs.
  • 32. Current capacity
We did it! In the end we achieved a 3x capacity boost in terms of processing power and memory with just a fraction of our previous servers, because with virtualization we can distribute resources however we'd like. In terms of storage we are on a completely different level: we are no longer restricted to a single machine's capacity, we have 3x redundancy, and all the performance we need.
Utilization: Allocated CPU 37%, Allocated Memory 32%, Storage 67%, Rack space 70%.
  • 33. Extreme Makeover: the old and the new setup
100% virtualized: no more services running directly on bare metal.
Lighter power footprint: 300% more capacity with 60% of the previous servers, with room for expansion.
Performance gains: huge compute and storage performance gains. Maintainability is a breeze too.
  • 34. Our Dashboard
A glimpse of our OpenNebula dashboard: 400 CPU cores and 1.5TB of RAM in just 14 hosts.
  • 35. Hosts view
All hosts are nicely balanced using the default scheduler. There's always enough room to move VMs around in case a host crashes or we need to reboot one.
  • 37. Optimize CPU for homogeneous clusters
Available as a template setting since OpenNebula 5.4.6. Set it to host-passthrough. This option presents the real CPU model to the VMs instead of the default QEMU CPU. It can substantially increase performance, especially if instructions like AES are needed. Do not use it if you have different CPU models across the cluster, since it will cause VMs to crash after live migration. For older OpenNebula setups, set this as RAW DATA in the template: <cpu mode="host-passthrough"/>
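In OpenNebula template syntax, the RAW DATA workaround mentioned above would look roughly like this (a sketch; check it against your OpenNebula version's template reference):

```
RAW = [
  TYPE = "kvm",
  DATA = "<cpu mode='host-passthrough'/>"
]
```

The DATA string is passed through to the libvirt domain XML, which is why it uses libvirt's <cpu> element rather than OpenNebula attributes.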
  • 38. Beware of mkfs.xfs on large StorPool volumes inside VMs
We noticed that running mkfs.xfs on large StorPool volumes (e.g. 4TB) took a long time to complete. What's worse, during this time all VMs on the host starve for IO, because the storpool_block.bin process uses 100% CPU time. The image on the left is for a 1TB volume. The reason is that mkfs issues TRIM/discard by default and the StorPool driver supports it. To remedy this, use the -K option for mkfs.xfs or -E nodiscard for mkfs.ext4, e.g.:
- mkfs.xfs -K /dev/sdb1
- mkfs.ext4 -E nodiscard /dev/sdb1
  • 39. Use the 10G network for OpenNebula too 39 This is probably an obvious one, but it deserves to be mentioned. By default your hosts will probably resolve each other via the regular Gigabit network. Forcing them to talk over the 10G storage network will drastically speed up live VM migration. The migration is not IO bound, so it will completely saturate the network. Usually a simple /etc/hosts modification is enough. Consult with StorPool for your specific use case before doing this. Live migrating a VM with 8G of RAM takes 7 seconds on 10G. The same VM takes about 1.5 minutes on a Gigabit network and will probably disturb VM communications if the network is saturated. Live migration of highly loaded VMs can take significantly longer and should be monitored. In some cases it's enough to stop busy services for just a second for the migration to complete.
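The /etc/hosts change can be sketched like this, assuming hypothetical hostnames and a 10.10.10.0/24 storage subnet (your names and addresses will differ):

```
# /etc/hosts on each hypervisor: resolve peer hosts via the 10G storage
# network so live migrations go over it (illustrative names/addresses)
10.10.10.11  kvm-host1
10.10.10.12  kvm-host2
10.10.10.13  kvm-host3
```

With this in place, OpenNebula's migration traffic to e.g. kvm-host2 goes over the 10G interface without any other configuration change.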
  • 40. Other tips 40 These are the more obvious ones that probably everyone uses in production, but still worth mentioning. • Use cache=none, io=native when attaching volumes • Use virtio networking instead of the default rtl8139 NIC. The latter has performance issues and drops packets when host IO is high • Measure IO latency instead of IO load to judge saturation. We have several machines with constant 99% IO load which are doing perfectly fine. /etc/one/vmm_exec/vmm_exec_kvm.conf: … DISK = [ driver = "raw", cache = "none", io = "native", discard = "unmap", bus = "scsi" ] NIC = [ filter = "clean-traffic", model = "virtio" ] …
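One quick way to look at latency rather than load is the per-device counters in /proc/diskstats; a minimal sketch, Linux only (field 3 is the device name, field 4 reads completed, field 7 milliseconds spent reading):

```shell
# Average read latency per device since boot, derived from /proc/diskstats.
# $3 = device, $4 = reads completed, $7 = total time spent reading (ms)
awk '$4 > 0 { printf "%s: %.2f ms/read\n", $3, $7 / $4 }' /proc/diskstats
```

For a live view, `iostat -x` from the sysstat package shows the same idea in its r_await / w_await columns, which are a much better saturation signal than %util.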
  • 42. Grafana Dashboards 42 We have adapted the OpenNebula Dashboards with Graphite and Grafana scripts by Sebastian Mangelkramer and used them to create our own Grafana dashboards so we can see at a glance which hosts are most loaded and how much overall capacity we have.
  • 43. Grafana TV Dashboard 43 Why not have a master dashboard on the TV at the office? This gives our team a very quick and easy way to tell if everything is working smoothly. If all you see is green, we're good. This dashboard shows our main DC on the first row, our backup DC on the second, and then some other critical aspects of our system. It's still a WIP, hence the empty space. At the top is our Geckoboard that we use for more business KPIs.
  • 44. Server Power Usage in Grafana 44 Part of our virtualization project was to optimize the electricity bill by using fewer servers. We were able to easily measure our power usage with Graphite and Grafana. If you are interested, the script for getting the data into Graphite is here: https://gist.github.com/Jacketbg/6973efdb41a2ecfcf2a83ea84c086887 The Grafana dashboard can be found here: https://gist.github.com/Jacketbg/7255b4f81ebb2de0e8a5708b4335c9d7 Obviously you will need to tweak it, especially the formula for the power bill.
  • 45. StorPool’s Grafana 45 StorPool were kind enough to give us access to their own Grafana instance, where they collect a lot of internal data about the system and its KPIs. It gives us great insights that we couldn’t get otherwise, so we can plan and estimate system load very well.
  • 46. What’s Left? 46 SSD Pool We are currently only using an HDD pool, but we could benefit from a smaller SSD pool for picky MySQL databases. Add more hosts As the service grows, our needs will too. We will probably have rack space for the years to come. Add more StorPool nodes We have maxed out the HDD bays on our current nodes, so we’ll probably need to add more nodes in the future.
  • 47. THANK YOU! READ MORE ON BLOG.INOREADER.COM GET THIS PRESENTATION FROM ino.to/one-amsterdam