Palringo: A startup's journey from a
datacenter to the cloud.
Who are we:
• Sam Parsons:
• Head Of Architecture
• 6 Years at Palringo
• AWS SA - Associate Level
• Phil Basford: @philipbasford
• Principal Engineer and Architecture Evangelist
• 3 Years at Palringo
• AWS SA and Developer - Associate Level
Palringo
• A mobile chat, group instant messaging and entertainment
company.
• 50 million signed-up users.
• A start-up: 80 Employees in 5 Offices
• (Newcastle, Ipswich, Gothenburg, Helsinki and London)
• VC Backed.
• An agile, product-driven company, constantly innovating.
Only 4 months ago
• Datacentre
• Running our own racks in a Global Switch datacentre in London.
• Major outages from DDoS attacks, power cuts and hardware failures.
• C Core
• A massive monolithic system gluing chat messaging, the database and business logic
together, accessed via a custom binary protocol.
• Deploying it meant a total system outage: all 8 nodes had to be updated
simultaneously.
• Known performance issues and limited scalability.
• Ops Team
• 60 developers funnelled through 3 gatekeepers, slowing the delivery of product increments from
the agile teams.
• The 3 gatekeepers were kept up all night by constant alarms and spent all day firefighting.
Where to start
• We decided our custom binary chat protocol was bad; it was not adaptable!
• We decided our TCP socket solution was out of date!
• We decided that our monolith was unmaintainable!
• We decided that our architecture stopped us being agile enough!
We needed to change it!
Where to start
• We decided that having one massive LAN in our DC hosting the web, app and DB
servers was bad!
• We decided that relying on our existing firewalls, DDoS protection and a single DC was bad!
• We decided that having pet servers with vast amounts of undocumented and out-of-date
dependencies was bad!
• We decided that not allowing agile teams to see the logs and performance of their
live deliverables was bad!
We needed to change it!
Where to start
• We decided that having many out-of-date, unmaintained test environments was bad!
• We decided that having a different deployment process for each component was
bad!
• We decided that having one person on the out-of-hours rota babysitting the servers was
bad!
We needed to change it!
Change everything!
Focus on our Product
● Amazon manage the Hardware
● Amazon manage (some of) the software
● Scalability
● Reliability (Multi-AZ)
We changed the network!
• VPN connection
• AWS CloudFormation
• Amazon EFS
• Amazon VPC
• AWS Direct Connect
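To give a flavour of the infrastructure-as-code approach behind this slide, here is a minimal sketch (our illustration, not from the talk) that creates a CloudFormation stack containing a single VPC via the AWS SDK for Java; the stack name and CIDR block are hypothetical.

```java
import com.amazonaws.services.cloudformation.AmazonCloudFormation;
import com.amazonaws.services.cloudformation.AmazonCloudFormationClientBuilder;
import com.amazonaws.services.cloudformation.model.CreateStackRequest;
import com.amazonaws.services.cloudformation.model.CreateStackResult;

public class NetworkStack {
    public static void main(String[] args) {
        // Hypothetical minimal template: a single VPC resource.
        String template =
            "{\n" +
            "  \"Resources\": {\n" +
            "    \"AppVpc\": {\n" +
            "      \"Type\": \"AWS::EC2::VPC\",\n" +
            "      \"Properties\": { \"CidrBlock\": \"10.0.0.0/16\" }\n" +
            "    }\n" +
            "  }\n" +
            "}";

        // Credentials and region come from the default provider chain.
        AmazonCloudFormation cfn = AmazonCloudFormationClientBuilder.defaultClient();

        CreateStackResult result = cfn.createStack(new CreateStackRequest()
                .withStackName("network-sketch")   // hypothetical stack name
                .withTemplateBody(template));

        System.out.println("Stack creation started: " + result.getStackId());
    }
}
```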
We changed the software!
• Amazon DynamoDB
• Amazon ElastiCache
• Amazon RDS
• Socket.IO
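As a rough sketch of the managed-database direction (ours, not code from the deck), writing and reading a chat message with the DynamoDB document API in the AWS SDK for Java might look like this; the table, key and attribute names are hypothetical.

```java
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.Table;

public class MessageStore {
    public static void main(String[] args) {
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();
        // Hypothetical table keyed by groupId (hash) and sentAt (range).
        Table messages = new DynamoDB(client).getTable("Messages");

        // Store one chat message.
        messages.putItem(new Item()
                .withPrimaryKey("groupId", "group-42", "sentAt", 1490000000000L)
                .withString("sender", "sam")
                .withString("body", "Hello from the cloud!"));

        // Read it back by its full key.
        Item item = messages.getItem("groupId", "group-42", "sentAt", 1490000000000L);
        System.out.println(item.getString("body"));
    }
}
```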
We changed the way it was deployed!
• AWS CodeDeploy
• Amazon ECR
We changed how we run it!
• Amazon CloudWatch Logs
• Auto Scaling
• Amazon SNS
• Elastic Load Balancing (inc. ALB)
• Amazon ECS
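Alarms no longer need a person watching a console overnight: notifications can fan out through SNS to email, SMS or other subscribers. A minimal sketch (our example, with a placeholder topic ARN and message) of publishing an operational alert from Java:

```java
import com.amazonaws.services.sns.AmazonSNS;
import com.amazonaws.services.sns.AmazonSNSClientBuilder;
import com.amazonaws.services.sns.model.PublishResult;

public class OpsAlert {
    public static void main(String[] args) {
        AmazonSNS sns = AmazonSNSClientBuilder.defaultClient();

        // Placeholder topic ARN; all subscribers to the topic receive the alert.
        String topicArn = "arn:aws:sns:eu-west-1:123456789012:ops-alerts";

        PublishResult result = sns.publish(topicArn,
                "Chat service error rate above threshold");

        System.out.println("Published message id: " + result.getMessageId());
    }
}
```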
We changed the team!
• In Life Team
• Deployers
• API Developers
• Core Developers
• Alumni
We played a lot!
• AWS Lambda
• Amazon Machine Learning
• Amazon Cognito
• Amazon API Gateway
• Amazon Elasticsearch Service
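For instance, a Java Lambda handler that could sit behind API Gateway is only a few lines; this is an illustrative sketch (the class name and payload shape are our assumptions, not from the slides).

```java
import java.util.Map;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// Requires the aws-lambda-java-core dependency.
public class HelloHandler implements RequestHandler<Map<String, Object>, String> {

    @Override
    public String handleRequest(Map<String, Object> input, Context context) {
        // Echo back the caller's name, defaulting when it is absent.
        Object name = input.getOrDefault("name", "world");
        context.getLogger().log("Handling request for " + name);
        return "Hello, " + name + "!";
    }
}
```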
Where to next
• We decided that we need to break up our Java services!
• We decided that we wish to turn off the old monolith!
• We decided that we need to have APIs separating services!
• We decided to break up the database!
We will change it!
