Session presented at Big Data Spain 2012 Conference
16th Nov 2012
ETSI Telecomunicacion UPM Madrid
www.bigdataspain.org
More info: http://www.bigdataspain.org/es-2012/conference/architecture-to-scale/donn-rochette
3. Who am I?
• CTO and Co-Founder of AppFirst
• Application Virtualization
o UNIX server applications
o Solaris 2.6 applications on Solaris 10
• Real-time Operating Systems
o Hubble Space Telescope
o Under Wing Armaments
o Medical Instruments
• Launch Processing System
o NASA Kennedy Space Center
o Hardware and Software Design of Ground-based Launch Control Systems
4. AppFirst Collects, Aggregates and Correlates
Information from Production Applications
• NYC based software start-up
• Application
o Aggregate & summarize data from tens of thousands of remote servers
o Provide information for web apps and APIs
• A Few Metrics
o 45k to 50k summaries per minute
o GBs per remote server per day
o TBs of new data daily
o Query & retrieve information in < 100 ms
o Data store for up to 1 year
7. Micro Scale:
Data Processing
Requirements:
• Process a constant stream of data
o 3 snapshots per minute, per remote server
• Create summaries in real-time
o up to 1 minute behind wall-clock time
• Provide query results in < 100 ms
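The requirements above (continuous snapshots rolled into summaries that stay within a minute of wall-clock time) can be sketched roughly as follows. This is a minimal illustration, not AppFirst's actual code; the class name, the snapshot fields, and the count/min/max/avg summary shape are all assumptions for the example.

```python
from collections import defaultdict

class MinuteSummarizer:
    """Roll a stream of per-server snapshots into one summary per minute.

    Hypothetical sketch: buckets snapshots by (server, minute) and emits a
    summary as soon as that minute has fully elapsed, so results lag
    wall-clock time by at most one minute.
    """

    def __init__(self):
        self.buckets = defaultdict(list)   # (server_id, minute) -> values

    def ingest(self, server_id, timestamp, value):
        minute = int(timestamp) // 60
        self.buckets[(server_id, minute)].append(value)

    def flush(self, now):
        """Emit summaries for every minute that has fully elapsed."""
        current_minute = int(now) // 60
        done = [k for k in self.buckets if k[1] < current_minute]
        summaries = {}
        for key in done:
            values = self.buckets.pop(key)
            summaries[key] = {
                "count": len(values),
                "min": min(values),
                "max": max(values),
                "avg": sum(values) / len(values),
            }
        return summaries
```

Three snapshots per minute per server arrive via `ingest`; a periodic `flush` closes out finished minutes, which is what keeps the summaries "up to 1 minute behind" rather than recomputed at query time.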
8. Micro Scale:
Efficiency
We found that:
• Summaries of the data were needed in order to keep queries < 100 ms
o Server
o Process
o Process sets
o Topology
• Time series needed for each summary type
o Minute
o Hour
o Day
We tried:
• Flat files
• Network file systems
• Distributed file systems
• Relational databases
• NoSQL key-value stores
• Memory-based SQL databases
• Distributed shared memory
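The minute / hour / day time series mentioned above are a classic rollup hierarchy: 60 minute-summaries combine into one hour-summary, 24 hour-summaries into one day-summary. A hedged sketch, assuming the same count/min/max/avg summary shape used throughout this example (the deck does not specify the actual fields):

```python
def rollup(summaries):
    """Combine finer-grained summaries into one coarser bucket.

    Hypothetical sketch: min/max compose directly, counts add, and the
    average must be re-weighted by count (averaging the averages would
    be wrong when buckets hold different numbers of snapshots).
    """
    count = sum(s["count"] for s in summaries)
    return {
        "count": count,
        "min": min(s["min"] for s in summaries),
        "max": max(s["max"] for s in summaries),
        "avg": sum(s["avg"] * s["count"] for s in summaries) / count,
    }
```

The same function serves both levels: feed it 60 minute-summaries to get an hour, 24 hour-summaries to get a day. Pre-computing these rollups is what keeps queries under 100 ms regardless of the time range requested.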
9. Micro Scale:
We learned the hard way
Tape is Dead
Disk is Tape
Flash is Disk
RAM Locality is King
Jim Gray
Microsoft
December 2006
10. Micro Scale:
Solution
Aggregation:
• HPC pipeline processing model
• RAM-based data model
• Queues as message bus
• Stateless processing
• Adaptive control
• Queries are fully abstracted
Horizontal scale may require that you revisit your design
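The combination above (pipeline stages, queues as the message bus, stateless processing) can be sketched with stdlib queues. The deck's real bus would be something like RabbitMQ or Redis; this toy version only shows the shape of the pattern, and the stage functions are made up for illustration:

```python
import queue
import threading

def worker(in_q, out_q, transform):
    """Stateless pipeline stage: all state travels inside the messages,
    so any number of copies of this worker could drain the same queue."""
    while True:
        item = in_q.get()
        if item is None:          # poison pill shuts the stage down
            out_q.put(None)
            break
        out_q.put(transform(item))

# Wire two stages into a pipeline via queues acting as the message bus.
q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=worker, args=(q1, q2, lambda x: x * 2), daemon=True).start()
threading.Thread(target=worker, args=(q2, q3, lambda x: x + 1), daemon=True).start()

for n in [1, 2, 3]:
    q1.put(n)
q1.put(None)

results = []
while (item := q3.get()) is not None:
    results.append(item)
# results == [3, 5, 7]
```

Because the workers hold no state between messages, adding a second worker on `q1` (possibly on another server) scales the stage horizontally without any coordination beyond the queue itself.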
11. Micro Scale
We all know we need to scale horizontally
Cluster:
• Any data processing with any time constraint
• Use components that cluster
• Don't do backups, use replication
• Redis, memcached, RabbitMQ, HBase can be clustered
• PostgreSQL & MySQL don't really cluster
Stateless:
• Processes can be run on any server
• Processes can be migrated
• Multiple processes can be added as load varies
• All data stored in distributed shared memory
• Message passing between components
• Send keys and not data
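"Send keys and not data" means the heavy payload is written once to distributed shared memory while the message bus carries only a small key. A minimal sketch, using a plain dict to stand in for the shared store (memcached/Redis in the deck) and a list for the bus; the key format is invented for the example:

```python
# A plain dict stands in for the distributed shared memory tier;
# a list stands in for the message bus. Both are assumptions.
store = {}

def produce(key, payload, bus):
    store[key] = payload      # write the heavy payload once to shared memory
    bus.append(key)           # the message carries only the key

def consume(bus):
    key = bus.pop(0)          # any worker on any server can take the message
    return store[key]         # ...then fetch the payload by key

bus = []
produce("snapshot:srv1:min42", {"cpu": 0.93, "mem": 0.61}, bus)
snap = consume(bus)           # workers exchange keys, never the data itself
```

This keeps messages tiny and lets a message be re-queued, migrated, or retried without copying the payload again.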
12. Macro Scale:
Application Capacity
Load:
•Most significant load impact from remote servers
•User interaction, APIs, and queries do not load the system as much as remote servers
•Support 100, 1,000, 10,000, 100,000 remote servers
Will a design that supports 10,000 remote servers scale to support
100,000 remote servers?
13. Infinite Scale
• Paralyzes the design team
• Fosters bad behavior
• Unrealistic expectations
• Developers forced to take unrealistic action
But... you don't want to say no to the business:
• The whole purpose is to add users
• When the business brings a customer with 10,000 servers you want to say: bring it on
14. Macro Scale:
Capacity
We started with a snapshot:
• Supported 1,000 remote servers
• Micro-scale results made it possible to scale out
o fairly flexible application component design
• Scale out to 10,000 remote servers
o this is a financial calculation
• Scaled out in linear fashion
o data processing
o storage
o started in linear fashion, then determined actual requirements
15. Macro Scale Solution:
The Pod
[Diagram: Pod 0 and Pod 1]
Pod Architecture:
• Segmented infrastructure along the lines of load sources
• Create infrastructure to support a specific load
• Instantiate additional infrastructure with additional load
• When a pod gets to 85-90% capacity, spin out a new pod
• Capacity of a pod is a financial calculation
• Scale within a pod in 1,000-server increments
• Need to automate the deployment of a pod
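The assignment logic above (fill a pod, spin out a new one near 85% of capacity) can be sketched as a small capacity manager. This is an illustrative model only; the class, the default capacity, and the spill fraction are assumptions standing in for the "financial calculation" the deck describes:

```python
class PodManager:
    """Assign remote servers to pods; spin out a new pod near capacity.

    Hypothetical sketch of the pod pattern: each pod is a self-contained
    slice of infrastructure, and growth happens by adding whole pods
    rather than endlessly growing one deployment.
    """

    def __init__(self, capacity=10_000, spill_fraction=0.85):
        self.capacity = capacity
        self.threshold = int(capacity * spill_fraction)  # 85% trigger
        self.pods = [0]               # server count per pod

    def assign(self, n_servers=1):
        pod = len(self.pods) - 1
        if self.pods[pod] + n_servers > self.threshold:
            self.pods.append(0)       # near capacity: deploy a fresh pod
            pod += 1
        self.pods[pod] += n_servers
        return pod                    # index of the pod the servers landed in
```

In practice "append a fresh pod" is the expensive step, which is why the deck insists pod deployment must be automated.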
16. Adaptive Control
• You can't react fast enough
• Scale out
• Scale back
• Migrate
The Pod Rocks:
• Isolated
• Distributed
• Located where needed
• Behind the firewall
Metrics are king:
• Business metrics
• Application metrics
Time Series Data:
• Issues relate to a specific time
• Complete state information for any given minute
• Don't know what info is needed before a problem occurs; all data, every minute
Don't trust the data:
• Clocks are skewed
• Encodings fail
• Save all bad data & replay
• Think defensive
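The "don't trust the data" points (skewed clocks, failed encodings, save bad data and replay) amount to a defensive ingestion gate. A hedged sketch; the snapshot format, field names, and skew tolerance are assumptions for illustration:

```python
import json

MAX_SKEW = 300  # seconds; assumed tolerance for remote clock skew

def ingest(raw, now, bad_queue):
    """Validate one raw snapshot; quarantine anything suspect for replay.

    Defensive sketch: nothing is ever silently dropped -- malformed or
    skewed records go to a bad-data queue so they can be replayed after
    the bug (or the clock) is fixed.
    """
    try:
        snap = json.loads(raw)
        ts = float(snap["ts"])
    except (ValueError, KeyError, TypeError):
        bad_queue.append(raw)         # encodings fail: save it, don't drop it
        return None
    if abs(ts - now) > MAX_SKEW:
        bad_queue.append(raw)         # clocks are skewed: quarantine & replay
        return None
    return snap
```

Keeping the rejects replayable is the key point: since you don't know what information you'll need before a problem occurs, discarding "bad" data throws away exactly the evidence you'll want later.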
17. Conclusions
• Stateless Data
o Key to horizontal scale
• Disk is tape
o RAM-based design is critical, not optional
• Cluster
o Use components that cluster, not just master/slave
• Design for infinite scale does not work
• Pod approach is an answer for infinite scale