Ericsson MSC Server Blade Cluster
The MSC-S Blade Cluster, the future-proof server part of Ericsson's Mobile Softswitch solution, provides very high capacity, effortless scalability, and outstanding system availability. It also means lower OPEX per subscriber, and sets the stage for business-efficient network solutions.
Petri Maekiniemi
Jan Scheurich
The Mobile Switching Center Server (MSC-S) is a key part of Ericsson's Mobile Softswitch solution, controlling all circuit-switched call services, the user plane and media gateways. And now, with the MSC-S Blade Cluster, Ericsson has taken its Mobile Softswitch solution one step further, substantially increasing node availability and server capacity. The MSC-S Blade Cluster also dramatically simplifies the network, creating an infrastructure which is always available and easy to manage, and which can be adjusted to handle increases in traffic and changing needs.
Continued strong growth in circuit-switched voice traffic necessitates fast and smooth increases in network capacity. The typical industry approach to increasing network capacity is to introduce high-capacity servers – preferably scalable ones. And the historical solution to increasing the capacity of MSC servers has been to introduce a more powerful central processor. Ericsson's MSC-S Blade Cluster concept, by contrast, is an evolution of the MSC server architecture, where server functionality is implemented on several generic processor blades that work together in a group or cluster.

The very-high-capacity nodes that these clusters form require exceptional resilience both at the node and network level, a consideration that Ericsson has addressed in its new blade cluster concept.
Key benefits
The unique scalability of the MSC-S Blade Cluster gives network operators great flexibility when building their mobile softswitch networks and expanding network capacity as traffic grows, all without complicating network topology by adding more and more MSC servers.

The MSC-S Blade Cluster fulfills the extreme in-service performance requirements put on large nodes. First, it is designed to ensure zero downtime – planned or unplanned. Second, it can be integrated into established and proven network-resilience concepts such as MSC in Pool.

Because MSC-S Blade Cluster operation and maintenance (O&M) does not depend on the number of blades, the scalability feature significantly reduces operating expenses per subscriber as node capacity expands.

Compared with traditional, non-scalable MSC servers, the MSC-S Blade Cluster has the potential to reduce power consumption by up to 60 percent and the physical footprint by up to 90 percent, thanks especially to its optimized redundancy concepts and advanced components. Many of its generic components are used in other applications, such as IMS.

The MSC-S Blade Cluster also supports advanced, business-efficient network solutions, such as MSC-S nodes that cover multiple countries or are part of a shared mobile-softswitch core network (Table 1).
Key components
The key components of the MSC-S Blade Cluster (Figure 1) are the MSC-S blades, a signaling proxy (SPX), an IP load balancer, an I/O system and a site infrastructure support system (SIS).

The MSC-S blades are advanced generic processor boards, grouped into clusters and jointly running the MSC server application that controls circuit-switched calls and the mobile media gateways.
BOX A: Terms and abbreviations

APG43: Adjunct Processor Group version 43
ATM: Asynchronous transfer mode
E-GEM: Enhanced GEM
GEM: Generic equipment magazine
GEP: Generic processor board
IMS: IP Multimedia Subsystem
I/O: Input/output
IP: Internet Protocol
M-MGw: Media gateway for mobile networks
MSC: Mobile switching center
MSC-S: MSC server
O&M: Operation and maintenance
OPEX: Operating expense
OSS: Operations support system
SIS: Site infrastructure support system
SPX: Signaling proxy
SS7: Signaling System No. 7
TDM: Time-division multiplexing
The signaling proxy (SPX) serves as the network interface for SS7 signaling traffic over TDM, ATM and IP. It distributes external SS7 signaling traffic to the MSC-S blades. Two SPXs give 1+1 redundancy. Each SPX resides on a double-sided APZ processor.
The IP load balancer serves as the network interface for non-SS7-based IP-signaling traffic, such as the Session Initiation Protocol (SIP). It distributes external IP signaling to the MSC-S blades. Two IP load-balancer boards give 1+1 redundancy.
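The article later notes that the SPX and the IP load balancer base their forwarding decisions on stateless algorithms. As a rough, hypothetical illustration (the actual Ericsson distribution algorithms are not described here), a stateless distributor can map each signaling association to a blade by hashing a stable key against the shared list of active blades; the key choice and blade names below are assumptions:

```python
import hashlib

def select_blade(session_key: str, active_blades: list[str]) -> str:
    """Pick a blade for a signaling message without per-session state.

    Any distributor that shares the same view of active_blades computes
    the same answer, so the 1+1-redundant SPX or load-balancer boards
    need no state synchronization. Illustrative sketch only.
    """
    digest = hashlib.sha256(session_key.encode()).digest()
    index = int.from_bytes(digest[:8], "big") % len(active_blades)
    return active_blades[index]

# Hypothetical example: a SIP Call-ID used as the stable session key.
blades = ["blade-1", "blade-2", "blade-3", "blade-4"]
print(select_blade("a84b4c76e66710@example.com", blades))
```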
The I/O system handles the transfer of data – for example, charging data, hot billing data, input via the man-machine interface, and statistics – to and from MSC-S blades and SPXs. The MSC-S Blade Cluster I/O system uses the APG43 (Adjunct Processor Group version 43) and is 1+1 redundant. It is connected to the operations support system (OSS). Each I/O system resides on a double-sided processor.
The site infrastructure support system, which is also connected to the operations support system, provides the I/O system for Ericsson's infrastructure components.
Key characteristics
Compared with traditional MSC-S nodes, the MSC-S Blade Cluster offers breakthrough advances in terms of redundancy and scalability.
Redundancy
The MSC-S Blade Cluster features tailored redundancy schemes in different domains. The network signaling interfaces (C7 and non-C7) and the I/O systems work in a 1+1 redundancy configuration. If one of the components is unavailable, the other takes over, ensuring that service availability is not affected, regardless of whether component downtime is planned (for instance, for an upgrade or maintenance actions) or unplanned (hardware failure).
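To make the 1+1 scheme concrete, here is a minimal, hypothetical sketch of an active/standby pair in which the standby takes over when the active unit stops sending heartbeats; the class, timeout value and unit names are assumptions for illustration, not Ericsson's implementation:

```python
import time

class OnePlusOnePair:
    """Minimal active/standby (1+1) failover sketch. Illustrative only."""

    HEARTBEAT_TIMEOUT = 3.0  # assumed seconds of silence before takeover

    def __init__(self, units=("unit-A", "unit-B")):
        self.active, self.standby = units
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        # Called whenever the active unit reports that it is alive.
        self.last_heartbeat = time.monotonic()

    def supervise(self):
        # Promote the standby if the active unit has gone silent, so the
        # pair keeps serving traffic through planned or unplanned downtime.
        if time.monotonic() - self.last_heartbeat > self.HEARTBEAT_TIMEOUT:
            self.active, self.standby = self.standby, self.active
            self.last_heartbeat = time.monotonic()
        return self.active

pair = OnePlusOnePair()
print(pair.supervise())  # "unit-A" while heartbeats keep arriving
```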
The redundancy scheme in the domain of the MSC-S blades is n+1. Although the individual blades do not feature hardware redundancy, the cluster of blades is fully redundant. All blades are equal, meaning that every blade can assume every role in the system. Furthermore, no subscriber record, mobile media gateway, or neighboring node is bound to any given blade. Therefore, the MSC-S Blade Cluster continues to offer full service availability when any of its blades is unavailable. Subscriber records are always maintained on two blades to ensure that subscriber data cannot be lost.
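The rule that every subscriber record lives on two blades can be pictured as a primary/backup assignment derived from the shared cluster view. The sketch below is an assumption for illustration (IMSI as the lookup key, "next blade in the view" as the backup), not the published algorithm:

```python
import hashlib

def record_locations(imsi: str, cluster_view: list[str]) -> tuple[str, str]:
    """Return assumed (primary, backup) blades for a subscriber record.

    The backup is the next blade in the shared cluster view, so the loss
    of any single blade never loses a record. Needs at least two blades.
    """
    h = int.from_bytes(hashlib.sha256(imsi.encode()).digest()[:8], "big")
    i = h % len(cluster_view)
    return cluster_view[i], cluster_view[(i + 1) % len(cluster_view)]

view = ["blade-1", "blade-2", "blade-3", "blade-4", "blade-5"]
print(record_locations("262011234567890", view))  # hypothetical IMSI
```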
In the unlikely event of simultaneous failure of multiple blades, the cluster will remain fully operational, experiencing only a loss of capacity (roughly proportional to the number of failed blades) and minor loss of subscriber records.
With the exception of very short interruptions that have no effect on in-service performance, blade failures have no effect on connectivity to other nodes. Nor do blade failures affect availability for traffic of the user-plane resources controlled by the MSC-S Blade Cluster.
Figure 1: Main functional components of the MSC-S Blade Cluster – the cluster of MSC-S blades, with the 1+1-redundant signaling proxy, IP load balancer, SIS and I/O system, and interfaces toward the RAN, CN and OSS.
Table 1: Key benefits of the MSC-S Blade Cluster.

Very high capacity: ten-fold increase over traditional MSC servers.
Easy scalability: expansion in steps of 500,000 subscribers through the addition of new blades; no network impact when adding blades.
Outstanding system availability: zero downtime at the node level; upgrades without impact on traffic; network-level resilience through MSC in Pool.
Reduced OPEX: fewer nodes or sites needed in the network; up to 90 percent smaller footprint; as much as 60 percent lower power consumption.
Business-efficient network solutions: multi-country operators; shared networks.
Future-proof hardware: blades can be used in other applications, such as IMS.
To achieve n+1 redundancy, the cluster of MSC-S blades employs a set of advanced distribution algorithms. Fault-tolerant middleware ensures that the blades share a consistent view of the cluster configuration at all times. The MSC-S application uses stateless distribution algorithms that rely on this cluster view. The middleware also provides a safe group-communication mechanism for the blades.

Static configuration data is replicated on every blade. This way, each blade that is to execute a requested service has access to the requisite data. Dynamic data, such as subscriber records or the state of external traffic devices, is replicated on two or more blades.

Interworking between the two redundancy domains in the MSC-S Blade Cluster is handled in an innovative manner. Components of the 1+1 redundancy domain do not require detailed information about the distribution of tasks and roles among the MSC-S blades. The SPX and IP load balancer, for example, can base their forwarding decisions on stateless algorithms. The blades, on the other hand, use enhanced, industry-standard redundancy mechanisms when they interact with the 1+1 redundancy domain for, say, selecting the outgoing path.

The combination of cluster middleware, data replication and stateless distribution algorithms provides a distributed system architecture that is highly redundant and robust.

One particular benefit of n+1 redundancy is the potential to isolate an MSC-S blade from traffic to allow maintenance activities – for example, to update or upgrade software without disturbing cluster operation. This means zero planned cluster downtime.
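One way to picture this maintenance isolation: because distribution is stateless over the shared cluster view, publishing a view without the drained blade is enough to route traffic around it, while its replicated data is served from the surviving replicas. A hypothetical sketch under those assumptions:

```python
def isolate_blade(active_view: list[str], blade: str) -> list[str]:
    """Return a new cluster view with one blade drained for maintenance.

    New traffic simply stops selecting the isolated blade; no per-call
    state has to be migrated by hand. Illustrative sketch only.
    """
    if blade not in active_view:
        raise ValueError(f"{blade} is not in the active view")
    if len(active_view) < 2:
        raise ValueError("cannot isolate the last remaining blade")
    return [b for b in active_view if b != blade]

view = ["blade-1", "blade-2", "blade-3", "blade-4"]
print(isolate_blade(view, "blade-3"))  # blade-3 can now be upgraded
```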
Scalability
The MSC-S Blade Cluster was designed with scalability in mind: to increase system capacity one need only add MSC-S blades to the cluster. The shared cluster components, such as the I/O system, the SPX and the IP load balancer, have been designed and dimensioned to support a wide range of cluster capacities, from very small to very large.

The individual MSC-S blades are not visible to neighboring network nodes, such as the BSC, RNC, M-MGw, HLR, SCP, P-CSCF and so on. This first enabler is essential for smooth scalability: blades can be added immediately without affecting the configuration of cooperating nodes.

Other parts of the network might also have to be expanded to make full use of increased blade-cluster capacity. When this is the case, these steps can be decoupled and taken independently.

The second enabler is the ability of the MSC-S Blade Cluster to dynamically adapt its internal distribution to a new blade configuration without manual intervention. As a consequence, the capacity-expansion procedure is almost fully automatic – only a few manual steps are needed to add a blade to the running system.

When a generic processor board is inserted and registered with the cluster middleware, the new blade is loaded with a one-to-one copy of the application software and a configuration of the active blades. The blade then joins the cluster and is prepared for manual test traffic. For the time being it remains isolated from regular traffic.

The blades automatically update their internal distribution tables to the new cluster configuration and replicate all necessary dynamic data, such as subscriber records, on the added blade. These activities run in the background and have no effect on cluster capacity or availability.

After a few minutes, when the internal preparations are complete and test results are satisfactory, the blade can be activated for traffic. From this point on, it handles its share of the cluster load and becomes an integral part of the cluster redundancy scheme.
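The expansion procedure above can be summarized in three steps: the new board is loaded and kept isolated, dynamic data is replicated in the background, and only then is the blade activated for traffic. A hypothetical sketch of that sequence, with names invented for illustration:

```python
class ClusterView:
    """Sketch of the blade-addition steps described above. Illustrative only."""

    def __init__(self, blades):
        self.active = list(blades)   # blades carrying regular traffic
        self.joining = []            # blades loaded but still isolated

    def insert_blade(self, blade):
        # The new board registers with the middleware and receives a copy
        # of the application software, but stays isolated from traffic.
        self.joining.append(blade)

    def background_sync(self, blade):
        # Stand-in for the background replication of dynamic data
        # (for example, subscriber records) onto the added blade.
        print(f"replicating dynamic data to {blade} ...")

    def activate(self, blade):
        # After test traffic succeeds, the blade takes its share of load
        # and joins the n+1 redundancy scheme.
        self.joining.remove(blade)
        self.active.append(blade)

cluster = ClusterView(["blade-1", "blade-2", "blade-3"])
cluster.insert_blade("blade-4")
cluster.background_sync("blade-4")
cluster.activate("blade-4")
print(cluster.active)  # blade-4 now shares the cluster load
```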
MSC-S Blade Cluster hardware

Building practice
The MSC-S Blade Cluster is housed in an Enhanced Generic Equipment Magazine (E-GEM). Compared with the GEM, the E-GEM provides even more power per subrack and better cooling capabilities, which translates into a smaller footprint.
Figure 2: MSC-S Blade Cluster cabinets. Cabinet 1 houses the MSC-S blade and IS infrastructure subracks, the I/O system and the SPX; the optional Cabinet 2 houses an additional MSC-S blade subrack and subracks for TDM and ATM devices.
Generic processor board
The Generic Processor Board (GEP) used for the MSC-S blades is equipped with an x86 64-bit architecture processor. There are several variants of the equipped GEP board, all manufactured from the same printed circuit board. In addition, the GEP is used in a variety of configurations for several other components in the MSC-S Blade Cluster, namely the APG43, the SPX and SIS, and other application systems.
Infrastructure components
The infrastructure components provide layer-2 and layer-3 infrastructure for the blades, incorporating routers, SIS and switching components. They also provide the main on-site layer-2 protocol infrastructure for the MSC-S Blade Cluster. Ethernet is used on the backplane for signaling traffic.
Hardware layout
The MSC-S Blade Cluster consists of one or two cabinets (Figure 2). One cabinet houses mandatory subracks for the MSC-S blades with infrastructure components, the APG43 and SPX. The other cabinet can be used to house a subrack for expanding the MSC-S blades and two subracks for TDM or ATM signaling interfaces.
Power consumption
Low power consumption is achieved by using advanced low-power processors in E-GEMs and GEP boards. The high subscriber capacity of the blade-cluster node means very low power consumption per subscriber.
Conclusion
The MSC-S Blade Cluster makes Ericsson's Mobile Softswitch solution even better – one that is easy to scale both in terms of capacity and functionality. It offers downtime-free MSC-S upgrades and updates, and outstanding node and network availability. What is more, the hardware can be reused in future node and network migrations.

The architecture, which is based on a cluster of blades, is aligned with Ericsson's Integrated Site concept and other components, such as the APG43 and SPX. O&M is supported by the OSS.

The MSC-S Blade Cluster distributes subscriber traffic between available blades. One can add, isolate or remove blades without disturbing traffic. The system redistributes the subscribers and replicates subscriber data when the number of MSC-S blades changes. Cluster reconfiguration is an automatic procedure; moreover, the procedure is invisible to entities outside the node.
Petri Maekiniemi
is master strategic product manager for the MSC-S Blade Cluster. He joined Ericsson in Finland in 1985 but has worked at Ericsson Eurolab Aachen in Germany since 1991. Over the years he has served as system manager and, later, as product manager in several areas connected to the MSC, including subscriber services, charging and MSC pool. Petri holds a degree in telecommunications engineering from the Helsinki Institute of Technology, Finland.
Jan Scheurich
is a principal systems designer. He joined Ericsson Eurolab Aachen in 1993 and has since been employed as software designer, system manager, system tester and troubleshooter for various Ericsson products, ranging from B-ISDN and the classical MSC to SGSN, MMS solutions and MSS. He currently serves as system architect for the MSC-S Blade Cluster. Jan holds a degree in physics from Kiel University, Germany.
Acknowledgements
The authors thank Joe Wilke, whose comments and contributions have helped shape this article. Joe is a manager in R&D and has driven several technology-shift endeavors, including the MSC-S Blade Cluster.