This presentation describes how Hong Leong Bank set up a private cloud for its database services, the rules of engagement for using that private cloud, and how funding for its expansion is secured.
1. Rules of Engagement on Cloud
5 & 6 March 2015
Marina Bay Sands, Singapore
2. Traditional Silo Setup
2 x P740 (PRD)
DB2:
• CV-RIB MY
2 x P720 (DR)
DB2:
• CV-RIB MY
1 x P740 (UAT/SIT)
DB2:
• CV-RIB MY
1 x Wintel (PRD)
MSSQL:
• Siebel 6
1 x Wintel (UAT/SIT)
MSSQL:
• Siebel 6
1 x P5 (PRD)
DB2:
• FPX/ePay
1 x P5 (UAT/SIT)
DB2:
• FPX/ePay
1 x RS6000 (PRD)
DB2:
• BScore
1 x RS6000 (UAT/SIT)
DB2:
• BScore
4 x T3-2 (PRD)
Oracle 11g SE:
• LOAD$ MG, CC & PL
2 x T3-2 (UAT/SIT)
Oracle 11g SE:
• LOAD$ MG, CC & PL
2 x P720 (PRD)
Oracle 11g SE:
• AML MY & VN
1 x P720 (UAT/SIT)
Oracle 11g SE:
• AML MY & VN
1 x P5 (PRD)
Oracle 10g:
• HL Wealth Planner
1 x P5 (UAT/SIT)
Oracle 10g:
• HL Wealth Planner
1 x Intel (PRD)
Oracle 9i on Linux:
• UTNS
1 x Intel (SIT/UAT)
Oracle 9i on Linux:
• UTNS
1 x Intel (PRD)
MySQL on CentOS:
• 8i Token MY
1 x Intel (UAT/SIT)
MySQL on CentOS:
• 8i Token MY
2 x M4000 (PRD)
DB2:
• CIB MY
1 x M4000 (UAT/SIT)
DB2:
• CIB MY
Legend:
• Tech refresh completed
• Tech refresh WIP
• Tech refresh in planning
• New initiative completed
• New initiative WIP
• New initiative in planning
Funding model for each silo cluster:
1. Projects CER includes Capex/Opex for hardware, software & professional services for 3 or 5 years
2. Charge-out based on agreed percentage to the cluster's business units (units 1–3, 4–6, 7–9 and 10–12 respectively)
Challenges:
1. High Capex/Opex for silo hardware
2. Long time-to-market (e-bidding, procurement, etc.) for each project
3. Data center exhausted by heterogeneous hardware, increasing facility cost
4. Decreased operational efficiency
3. 1) No single point of failure: system infrastructure with a high-bandwidth, low-latency Infiniband interconnect fabric.
2) Database infrastructure that delivers near-zero downtime and scales out of the box.
3) Hardware and software integrated into a single point of monitoring, management, metering & chargeback.
4) DB Cloud tuning from the database down to storage through the Infiniband fabric.
5) Start migrating and consolidating databases into the DB Cloud.
(Architecture diagram: 2 x Oracle Infiniband switches, 40 Gbps Infiniband fabric, Oracle SPARC T5-2 servers, Oracle ZFS clustered storage; consolidated databases: RIB MY, CRE, LOAD$, Siebel, RIB VN, WLL, G.Earth, IB Fraud)
What did we do? We built a Private Cloud
4. Before: a few databases in silo-ed environments regularly led to performance issues, especially CPU bottlenecks.
After consolidating 24 databases into the DB Cloud, CPU utilization stays below 10% most of the time, and peaks remain below 20%.
The private cloud facilitates better utilisation
5. Scale according to the need for more computing power or more storage capacity.
• Every rack comes with 2 Infiniband switches to join the cloud's Infiniband fabric.
• 12 x T5-2 compute nodes for compute-node expansion.
• 2 x T5-2 compute nodes, 2 x backup nodes and 1 x ZFS storage for storage-node expansion.
The private cloud facilitates elasticity and scalability
6. ■ Resources can be shared with others while idle.
■ Increased system efficiency.
■ Only 10 CPUs of processing power required.
■ Savings of up to 60%.
The private cloud facilitated shared resources
8. ■ Different generations of SPARC servers coexist in the DB Private Cloud architecture.
• Private Database Cloud startup.
The private cloud facilitated shared resources
9. • Long project timelines, slow to market.
• Project timeline cut from 6 months to 1 month with the DB Private Cloud.
• 5x faster to market.
(Timeline comparison: the traditional flow — Design & Planning, Purchasing & Delivery, Implementation, Integration, User Acceptance Test, Go Live — spans 6 months; with the private cloud, Integration is replaced by Provisioning, and a database service can be provisioned within hours.)
The private cloud facilitated business agility
15. Initial Projects (Jun’13)
1. Fuzion RIB/ESS MY
2. Fuzion RIB VN
3. Siebel 8
4. LOAD$-HP/MG/CC/PL
5. CRE
Additional Projects (Jan’14)
6. WLL
7. Green Earth
8. IB Fraud
Cloud Expansion Projects (Sep’14)
9. FPX/ePay
10. EAI
11. Falcon
12. Basel 2
13. Fuzion RIB KH
Additional Projects (Nov’14)
14. Fuzion RIB SG
15. 8i Token SG
16. HLIB WinOps & CMS
New Projects (Jan’15)
17. 8i Token MY
18. MVI Payment Gateway SG
1. Bulk purchase of cloud hardware, software & professional services for 3 or 5 years
2. Total cost distributed to the respective projects' CERs
3. Charge-out based on agreed percentage to business units 1, 2, 3, etc.
1. Each project's CER includes only additional storage, software & professional services for 3 or 5 years
2. Charge-out based on agreed percentage to business units 4, 5, 6, etc.
1. Bulk purchase of cloud expansion hardware, software & professional services for 3 or 5 years
2. Total cost distributed to the respective projects' CERs
3. Charge-out based on agreed percentage to business units 7, 8, 9, etc.
1. Each project's CER includes only additional storage, software & professional services for 3 or 5 years
2. Charge-out based on agreed percentage to business units 1, 2, 3, etc.
Challenges and resolutions:
1. High Capex/Opex for silo hardware → Lower Capex/Opex overall
2. Long time-to-market (e-bidding, procurement, etc.) for each project → Shortened time-to-market across projects
3. Data center exhausted by heterogeneous hardware, increasing facility cost → Standardized infrastructure
4. Decreased operational efficiency → Improved operational efficiency
5. Difficult to align bulk-purchase timelines for multiple projects
6. Unfair business model, as only the initial projects fully bear the Capex/Opex of the base infrastructure
7. Might not be able to consolidate enough projects to fund future expansion
Challenges – Chargeback
16. Proposed To-Be Cloud Chargeback Methodology
Initial Projects (Jun’13)
1. Fuzion RIB/ESS MY
2. Fuzion RIB VN
3. Siebel 8
4. LOAD$-HP/MG/CC/PL
5. CRE
Additional Projects (Jan’14)
6. WLL
7. Green Earth
8. IB Fraud
Cloud Expansion Projects (Sep’14)
9. FPX/ePay
10. EAI
11. Falcon
12. Basel 2
13. Fuzion RIB KH
Additional Projects (Nov’14)
14. Fuzion RIB SG
15. 8i Token SG
16. HLIB WinOps & CMS
New Projects (Jan’15)
17. 8i Token MY
18. MVI Payment Gateway SG
Challenges and resolutions:
1. High Capex/Opex for silo hardware → Lower Capex/Opex overall
2. Long time-to-market (e-bidding, procurement, etc.) for each project → Shortened time-to-market across projects
3. Data center exhausted by heterogeneous hardware, increasing facility cost → Standardized infrastructure
4. Decreased operational efficiency → Improved operational efficiency
5. Difficult to align bulk-purchase timelines for multiple projects → Reserved capacity
6. Unfair business model, as only the initial projects fully bear the Capex/Opex of the base infrastructure → Chargeback by allocation
7. Might not be able to consolidate enough projects to fund future expansion → Cloud Cost Center
Reserved capacities
under Cloud Cost Center
for upcoming projects or
organic business growth
Proposed chargeback process:
1. ITF to summarize the existing cloud total
investment for hardware & software
2. ITF/GITA to calculate the chargeback
block for hardware & software into a cloud
catalogue
3. GITA/GITI to provide the capacity
allocation & derive the chargeback cost
by project
4. ITF/Finance to manage the chargeback to
respective business units into Cloud Cost
Center
Chargeback for new projects:
1. Monthly chargeback of Capex/Opex for
cloud infrastructure to business unit
2. Chargeback the professional services to
project
3. IT to purchase new Cloud expansion in
advance and keep the capacities for next
business project using Cloud Cost Center
Chargeback for existing projects:
1. Monthly chargeback of Opex only for
cloud infrastructure to business unit for
next 2 years and chargeback Capex/Opex
thereafter
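The chargeback rules above can be sketched as a small calculation. This is a minimal sketch, not the bank's actual billing logic; the cost figures are hypothetical placeholders, since the slide gives no amounts — only the rule that new projects pay Capex+Opex from the start, while existing projects pay Opex only for the first 2 years and Capex+Opex thereafter.

```python
# Sketch of the monthly chargeback rule described above.
# All monetary amounts are hypothetical placeholders.

def monthly_chargeback(is_new_project: bool, months_live: int,
                       capex_month: float, opex_month: float) -> float:
    """New projects pay Capex + Opex from day one; existing projects pay
    Opex only for the first 24 months, then Capex + Opex thereafter."""
    if is_new_project or months_live > 24:
        return capex_month + opex_month
    return opex_month

# hypothetical figures for illustration only
print(monthly_chargeback(True, 1, 1000.0, 400.0))    # 1400.0 (new project)
print(monthly_chargeback(False, 12, 1000.0, 400.0))  # 400.0  (existing, Opex only)
print(monthly_chargeback(False, 30, 1000.0, 400.0))  # 1400.0 (existing, after 2 years)
```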
Chargeback (Now)
17.
Proposed chargeback process:
1. ITF to summarize the existing cloud total
investment for hardware & software
2. ITF/GITA to calculate the chargeback
block for hardware & software into a cloud
catalogue
3. GITA/GITI to provide the capacity
utilization & derive the chargeback cost
by project
4. ITF/Finance to manage the chargeback to
respective business units into Cloud Cost
Center
Chargeback for new projects:
1. Monthly chargeback of Capex/Opex for
cloud infrastructure to business unit
2. Chargeback the professional services to
project
3. IT to purchase new Cloud expansion in
advance and keep the capacities for next
business project using Cloud Cost Center
Chargeback for existing projects:
1. Monthly chargeback of Opex only for
cloud infrastructure to business unit for
another 2 years and chargeback
Capex/Opex thereafter
Initial Projects (Jun’13)
1. Fuzion RIB/ESS MY
2. Fuzion RIB VN
3. Siebel 8
4. LOAD$-HP/MG/CC/PL
5. CRE
Additional Projects (Jan’14)
6. WLL
7. Green Earth
8. IB Fraud
Cloud Expansion Projects (Sep’14)
9. FPX/ePay
10. EAI
11. Falcon
12. Basel 2
13. Fuzion RIB KH
Additional Projects (Nov’14)
14. Fuzion RIB SG
15. 8i Token SG
16. HLIB WinOps & CMS
New Projects (Jan’15)
17. 8i Token MY
18. MVI Payment Gateway SG
Reserved capacities
under Cloud Cost Center
for upcoming projects or
organic business growth
Upon maturity of
cloud monitoring
& metering
Challenges and resolutions:
1. High Capex/Opex for silo hardware → Lower Capex/Opex overall
2. Long time-to-market (e-bidding, procurement, etc.) for each project → Shortened time-to-market across projects
3. Data center exhausted by heterogeneous hardware, increasing facility cost → Standardized infrastructure
4. Decreased operational efficiency → Improved operational efficiency
5. Difficult to align bulk-purchase timelines for multiple projects → Reserved capacity
6. Unfair business model, as only the initial projects fully bear the Capex/Opex of the base infrastructure → Chargeback by utilization
7. Might not be able to consolidate enough projects to fund future expansion → Cloud Cost Center
Chargeback (Future)
18.
No | Component | Oracle DB Cloud | PureApp WAS & DB2 Cloud | Wintel Cloud
1 | Server CPU/Core | ✓ | ✓ | ✓
2 | Server Memory | ✓ | ✓ | ✓
3 | Storage Controller | ✓ | – | ✓
4 | Storage Drive/Disk | ✓ | ✓ (include OS only) | ✓ (app data from 3PAR)
5 | Software License | ✓ | – (free for WAS & DB2) | ✓ (include OS only)
6 | Professional Services | ✓ (by project) | – (deploy from pattern) | ✓ (by project)
7 | Operation Maintenance Cost | ✓ | ✓ | ✓
Chargeback – working out the components
19. 1. For PJC & WHL, each Oracle DB Cloud is architected with 2 T5-2 servers, 2 Infiniband switches & 2 storage controllers.
2. The T5-2 servers provide CPU & memory for database processing, and the storage controllers host the storage drives for OS & data. The T5-2 servers & storage controllers are connected by the Infiniband switches as the network backbone.
3. The Oracle DB is set up as RAC (active-active clustering) on both servers to achieve high availability. Each Oracle DB software license allows activation of 2 cores.
4. Each project contributes its Oracle DB software license to a pool that can be shared across multiple applications with different peak periods, to achieve maximum license savings.
(Diagram: 2 x Oracle Infiniband switches, 40 Gbps Infiniband fabric, Oracle SPARC T5-2 servers, Oracle ZFS clustered storage)
Chargeback – components of a database private cloud
20.
1 unit Server Rack
2 units Infiniband Switches
12 units T5-2 Servers
For each T5-2 server
• 512 GB memory per server
• 64GB memory reserved for Control Domain
• Each DB VM block is 8GB memory
• Total 56 blocks available [448/8=56]
For each Server Rack
• Total 12 servers
• Total blocks: 12*56=672
Standard allocation is 0.5 core per 8GB memory
1 unit Storage Rack
2 units Infiniband Switches
4 units ZFS Storage Controllers
(36 drives each)
For each Storage Drive
• 1TB reserved for workarea
• 8TB usable for DB
• Each storage block is 100GB
• Total 80 blocks available [8000/100=80]
For each Storage Rack
• Total 4 storage controller (36 drives each)
• Total blocks: 4*36*80=11,520
Proposed hardware chargeback block:
1. The Oracle DB Cloud consists of 2 key components: servers & storage drives
2. Servers are charged back by core & memory allocation
3. Storage drives are charged back by storage allocation
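The block arithmetic above can be written out as a small calculation (a minimal sketch; all figures are taken directly from the slide):

```python
# Chargeback block calculation for one Oracle DB Cloud rack,
# using the figures from the slide.

# Server rack: 12 x T5-2 servers
MEM_PER_SERVER_GB = 512
CONTROL_DOMAIN_GB = 64     # memory reserved per server for the Control Domain
DB_VM_BLOCK_GB = 8         # one chargeback block of memory
SERVERS_PER_RACK = 12
CORE_PER_BLOCK = 0.5       # standard allocation: 0.5 core per 8 GB block

blocks_per_server = (MEM_PER_SERVER_GB - CONTROL_DOMAIN_GB) // DB_VM_BLOCK_GB
blocks_per_server_rack = blocks_per_server * SERVERS_PER_RACK

# Storage rack: 4 x ZFS controllers, 36 drives each
USABLE_PER_DRIVE_GB = 8000  # 8 TB usable for DB (1 TB work area reserved)
STORAGE_BLOCK_GB = 100
DRIVES_PER_CONTROLLER = 36
CONTROLLERS_PER_RACK = 4

blocks_per_drive = USABLE_PER_DRIVE_GB // STORAGE_BLOCK_GB
blocks_per_storage_rack = blocks_per_drive * DRIVES_PER_CONTROLLER * CONTROLLERS_PER_RACK

print(blocks_per_server)        # 56  [448/8]
print(blocks_per_server_rack)   # 672 [12*56]
print(blocks_per_drive)         # 80  [8000/100]
print(blocks_per_storage_rack)  # 11520 [4*36*80]
```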
Chargeback – components of a database private cloud
21. Core & license allocation by project (Core/License pairs per environment: PRD, DR, UAT, SIT; the original column alignment was lost in extraction):

No | Projects | Core/License allocations
1 | Fuzion ESS | 2
2 | Fuzion MY | 2
3 | Fuzion VN | 2
4 | LOAD$ HP | 8
5 | LOAD$ MG | 4
6 | LOAD$ CC/PL/RCB/SME | 4
7 | Siebel | 4
8 | CRE | 4
9 | WLL | 4, 2
10 | Green Earth | 4, 2, 2, 1
11 | IB Fraud | 4, 2, 2, 1
12 | FPX | 2, 2
13 | EAI (WMB & DataPower) | 2, 2
14 | Falcon | 2, –
15 | Basel II | 4, 4
16 | Fuzion KH | 2, 2
17 | Fuzion SG | 2, 2
18 | 8i Token SG | 2, 2
19 | WinOps & CMS | 4, 2, 4, 2, 1, 1
20 | 8i Token MY | –
21 | MVI Payment Gateway SG | –
22 | LOAD$ VN | –
23 | Basel II VN | –
24 | UTNS MY | –
25 | CMS | –
Totals | | 62, 26 | 26, 10 | 10, 5 | 10, 5
Proposed software chargeback block:
1. Charge back 1 OPL for every 4 cores allocated (50% savings versus Oracle's standard licensing model)
2. GITA/GITI to monitor actual core utilization before the next OPL purchase for further core activation
For the Oracle DB EE License (OPL)
• Each OPL allows activation of 2 cores
• The initial projects started with 16 OPLs
• Additional projects contribute some OPLs to the Cloud
• Cloud expansion projects contributed an additional 16 OPLs
Server Core Allocation & Shared Pool
• Each project contributes a minimum software license to enable more cores in the shared pool
• Each application is able to utilize unused cores within the shared pool
• Designed to maximize core sharing across multiple applications by leveraging differences in idle/peak hours
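The software chargeback rule can be sketched as follows. This is a minimal illustration of the slide's arithmetic only: the standard model needs 1 OPL per 2 cores, while the pooled model charges back 1 OPL per 4 cores allocated, giving the stated 50% savings. The 8-core example figure is illustrative.

```python
# Sketch of the software chargeback rule from the slide:
# standard licensing requires 1 OPL per 2 cores activated, but the
# shared pool lets the cloud charge back only 1 OPL per 4 cores allocated.

def opl_standard(cores: int) -> float:
    """OPLs required under the standard model (1 OPL activates 2 cores)."""
    return cores / 2

def opl_chargeback(cores: int) -> float:
    """OPLs charged back under the pooled model (1 OPL per 4 cores)."""
    return cores / 4

cores_allocated = 8                       # illustrative allocation
standard = opl_standard(cores_allocated)  # 4.0 OPL
pooled = opl_chargeback(cores_allocated)  # 2.0 OPL
savings = 1 - pooled / standard           # 0.5 → the slide's 50% savings
print(standard, pooled, savings)
```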
Chargeback – actual in action