Architecting cloud

Architecting your application for the cloud
Speaker Notes
  • A traditional solution would require buying servers and storage and signing an expensive CDN contract to deliver content globally, then launching the website or web application, and then managing its scaling and provisioning.
  • In the Cloud, however, it's much simpler: you don't need to buy any IT infrastructure. You can get it on demand from Amazon Web Services and deploy it worldwide from day one, at no extra cost. You can also scale capacity up and down as needed, using the autoscaling and load balancing features. The time saved lets you focus more on your business.
  • For this web startup, we would likely use the following services (briefly explain these services)
  • 15:00 Before we start, let me show you this architectural diagram. This is what we'll end up with: at the bottom, a managed Database, serving content to a cluster of auto-scaling web servers, which serve web pages to internet users through an automated Load Balancer and a Content Delivery service. Sounds complicated? You will see it's not!
  • 0:17 In step one, we launched a server, installed software, and made our website available online. Pretty simple. Interesting development: our web traffic goes up...
  • 0:35 And we suddenly have a problem: how can we reach our growing number of fans worldwide? We need a CDN, or Content Delivery Network, a service that delivers content from multiple locations worldwide, improving the user experience. This way, every time someone visits our page, the content, in this case the pictures, is served from the closest location, automatically.
  • 0:12 From an architectural perspective, we have a single web server, and requests for content will go through CloudFront, reducing the load on our server.
  • 0:09 In your html code, change the URL of your pictures to the CloudFront distribution, like this.
  • 0:12 In step two, we used pictures from Amazon S3 to create a CloudFront distribution, and changed our website to take advantage of it. Very simple.
  • 0:08 In short, our IT Architecture needs an update. Let's see how it can be done.
  • 0:15 A traditional way to grow our architecture would be to simply add servers, each one with its own database. However, this could easily turn into a mess. This is 2011; there should be a better way. Well, there is.
  • 0:15 Look at the "old way" on the left. On the right, you can see that we want to use an automated Load Balancer to manage traffic to our machines, and auto-scaling to adjust the number of servers running. Just to start.
  • 0:09 In step three we saw more servers being launched with the Autoscaling feature, and then being added to the Elastic Load Balancer.
  • 0:15 One option is to use Amazon RDS, the Relational Database Service, our Database in the cloud. RDS and its failover replicas are automatically managed by Amazon.
  • 0:15 In step four, we launched a new Database instance on RDS and pointed the web servers to it. Then we created a read replica, and our first backup snapshot. That's it!
  • 0:15 There are many difficult things related to Databases: Administration, Backups, Clustering, Replication, and so on. Difficult, time consuming, error prone. How can we use automation to optimize this?
  • There is a strong focus on security; it is our top priority, together with operational excellence.

Architecting cloud: Presentation Transcript

  • Architecting your application for the cloud
  • Traditional solution
    • Buy servers
    • Buy storage
    • Sign a CDN contract (Content Delivery Network)
    • Launch website/application
    • Manage scaling and provisioning
  • Cloud solution
    • Benefits from Cloud Computing:
    • No need to buy IT Infrastructure
    • Deploy worldwide
    • Scale up/down when needed
    • Save time
    • Focus on your business
  • Stage 1 – The Beginning
    • Simple architecture
    • Low complexity and overhead means quick development and lots of features, fast.
    • No redundancy, low operational costs – great for startups.
  • Stage 2 - More of the same, just bigger
    • Business is becoming successful – risk tolerance low.
    • Add redundant firewalls, load balancers.
    • Add more web servers for high performance.
    • Scale up the database.
    • Add database redundancy.
    • Still simple.
  • Stage 3 – The pain begins.
    • Publicity hits.
    • Squid or Varnish reverse proxies, or high-end load balancers.
    • Add even more web servers. Managing content becomes painful.
    • A single database can't cut it anymore. Split reads and writes: all writes go to a single master server, with read-only slaves serving the reads (see the sketch after this list).
    • May require some re-coding of the apps.
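    • A minimal illustration (not in the original deck; the photos table and values are hypothetical) of how the application routes statements once reads and writes are split:
    • -- All writes go to the connection pointing at the master:
    • INSERT INTO photos (user_id, url) VALUES (42, 'stirling1.jpg');
    • -- Reads can go to a connection pointing at any read-only slave:
    • SELECT url FROM photos WHERE user_id = 42;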
  • Stage 4 – The pain intensifies
    • Replication doesn't work for everything: with a single write database, there are too many writes and replication takes too long.
    • Database partitioning starts to make sense. Certain features get their own database.
    • Shared storage makes sense for contents.
    • Requires significant re-architecting of the app and DB.
  • Stage 5 – This Really Hurts !!
    • Panic sets in. Re-thinking the entire application. Now we want to go for scale?
    • Can't just partition on features. What else can we use? Geography, last name, user ID, etc. Create user clusters (see the lookup sketch after this list).
    • All features available on each cluster.
    • Use a hashing scheme or master DB for locating which user belongs to which cluster.
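    • As a sketch of the "master DB" option (table and values are hypothetical, not from the original deck), a small directory database maps each user to a cluster:
    • -- Directory table on the locator database:
    • CREATE TABLE user_cluster (
    •     user_id INT PRIMARY KEY,
    •     cluster_id TINYINT NOT NULL
    • );
    • -- Routing a request: find the user's cluster, then query that cluster, where every feature is available.
    • SELECT cluster_id FROM user_cluster WHERE user_id = 42;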
  • Stage 6 – Getting a little less painful
    • Scalable application and database architecture.
    • Acceptable performance.
    • Starting to add new features again.
    • Optimizing some of the code.
    • Still growing, but manageable.
  • Stage 7 – Entering the unknown...
    • Where are the remaining bottlenecks?
      • Power, Space
      • Bandwidth, CDN, Hosting provider big enough?
      • Firewall, load balancer bottlenecks?
      • Storage
      • Database technology limits – key/value store anyone?
  • Amazon Services used
    • Servers: Amazon EC2
    • Storage: Amazon S3
    • Database: Amazon RDS
    • Content Delivery: Amazon CloudFront
    • Extra: Autoscaling, Elastic Load Balancing
  • What is in step 1
    • Launched a Linux server (EC2)
    • Installed a web server
    • Downloaded the website
    • Opened the website
    • Now, our traffic goes up...
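    • As a hedged sketch (the talk used the AWS console; the AMI ID, key name, and instance type below are placeholders):
    • # Launch a Linux server on EC2:
    • aws ec2 run-instances --image-id ami-12345678 --instance-type t2.micro --key-name my-key
    • # On the new instance, install a web server (Ubuntu, as elsewhere in this deck):
    • sudo apt-get install apache2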
  • To reach fans worldwide, we need a CDN.
  • Changes in HTML code: images/stirling1.jpg becomes d135c2250.cloudfront.net/stirling1.jpg
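    • In the HTML this is just a change of image URL; a minimal sketch using the deck's own example file and distribution domain:
    • <!-- Before: the picture is served by our own web server -->
    • <img src="images/stirling1.jpg">
    • <!-- After: the picture is served from the CloudFront edge closest to the visitor -->
    • <img src="http://d135c2250.cloudfront.net/stirling1.jpg">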
  • What is in step 2
    • Uploaded files to Amazon S3
    • Enabled a CloudFront Distribution
    • Updated our picture location
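    • A hedged CLI equivalent (the bucket name is hypothetical, and today's AWS CLI postdates this talk):
    • # Upload a picture to Amazon S3 and make it publicly readable:
    • aws s3 cp stirling1.jpg s3://my-fanpage-pictures/stirling1.jpg --acl public-read
    • # Create a CloudFront distribution with the bucket as its origin:
    • aws cloudfront create-distribution --origin-domain-name my-fanpage-pictures.s3.amazonaws.com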
  • Our IT Architecture needs an update
  • What is in step 3
    • We added Autoscaling, and watched it grow the number of servers
    • We added Elastic Load Balancer
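    • A hedged CLI equivalent (all names, the AMI, and the availability zone are placeholders; the talk predates today's AWS CLI):
    • # Create a (classic) Elastic Load Balancer listening on HTTP port 80:
    • aws elb create-load-balancer --load-balancer-name web-elb --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" --availability-zones us-east-1a
    • # Tell Auto Scaling how to launch a web server, and keep between 2 and 10 of them registered with the ELB:
    • aws autoscaling create-launch-configuration --launch-configuration-name web-lc --image-id ami-12345678 --instance-type t2.micro
    • aws autoscaling create-auto-scaling-group --auto-scaling-group-name web-asg --launch-configuration-name web-lc --min-size 2 --max-size 10 --availability-zones us-east-1a --load-balancer-names web-elb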
  • What is in step 4
    • Launched a Database Instance
    • Pointed the web servers to RDS
    • Created a Read Replica
    • Created a Snapshot
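    • A hedged CLI equivalent (identifiers, instance class, and password are placeholders):
    • # Launch a managed MySQL instance on RDS:
    • aws rds create-db-instance --db-instance-identifier mydb --engine mysql --db-instance-class db.t3.micro --allocated-storage 20 --master-username admin --master-user-password '<some_password>'
    • # Create a read replica, then a first backup snapshot:
    • aws rds create-db-instance-read-replica --db-instance-identifier mydb-replica --source-db-instance-identifier mydb
    • aws rds create-db-snapshot --db-instance-identifier mydb --db-snapshot-identifier mydb-snapshot-1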
  • What is difficult about Databases?
  • Availability Patterns
    • Fail-over IP
    • Replication
      • Master-slave
      • Master-master
      • Tree replication
      • Buddy replication
  • Master-Slave Replication
  • Master-Slave Replication
    • Assume both Master and Slave are running Ubuntu Natty (11.04) with MySQL installed.
    • Configure the Master: we must configure MySQL to listen on all IP addresses. In the following file, make sure these two lines are commented out:
    • /etc/mysql/my.cnf
    • #skip-networking
    • #bind-address = 127.0.0.1
    • Set the binary log file and the database to replicate, and mark this server as the master:
    • log-bin = /var/log/mysql/mysql-bin.log
    • binlog-do-db = exampledb
    • server-id = 1
    • Then we restart MySQL:
    • /etc/init.d/mysql restart
  • Master – Slave Replication
    • Now we enter MySQL on the master server:
    • mysql -u root -p
    • Enter password:
    • We grant the replication privilege to a dedicated slave user:
    • GRANT REPLICATION SLAVE ON *.* TO 'slave_user'@'%' IDENTIFIED BY '<some_password>';
    • FLUSH PRIVILEGES;
    • Then we run the following commands:
    • USE exampledb;
    • FLUSH TABLES WITH READ LOCK;
    • This will show the master log file name and the read position:
    • SHOW MASTER STATUS;
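    • For illustration, the output might look like this; the file name and position shown are the values reused later in the CHANGE MASTER TO step:
    • +---------------+----------+--------------+------------------+
    • | File          | Position | Binlog_Do_DB | Binlog_Ignore_DB |
    • +---------------+----------+--------------+------------------+
    • | mysql-bin.006 |      183 | exampledb    |                  |
    • +---------------+----------+--------------+------------------+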
  • Master-Slave Replication
    • We make a dump of the database of the master server:
    • mysqldump -u root -p<password> --opt exampledb > exampledb.sql
    • Or we can run this command on the slave to fetch the data from master:
    • LOAD DATA FROM MASTER;
    • Now we will unlock the tables:
    • mysql -u root -p
    • Enter password:
    • UNLOCK TABLES;
    • quit;
  • Master-Slave Replication : Configure the Slave
    • First we enter the slave mysql and create the database:
    • mysql -u root -p
    • Enter password:
    • CREATE DATABASE exampledb;
    • quit;
    • We import the database using the mysql dump:
    • mysql -u root -p<password> exampledb < /path/to/exampledb.sql
    • Now we will configure the slave server:
    • /etc/mysql/my.cnf
    • We add the following lines:
    • server-id=2
    • master-host=192.168.0.100
    • master-user=slave_user
    • master-password=secret
    • master-connect-retry=60
    • replicate-do-db=exampledb
    • Then we restart mysql:
    • /etc/init.d/mysql restart
  • Master-Slave Replication: Configure the Slave
    • We can also load the database using the below command:
    • mysql -u root -p
    • Enter password:
    • LOAD DATA FROM MASTER;
    • quit;
    • Then we stop the slave:
    • mysql -u root -p
    • Enter password:
    • STOP SLAVE;
    • And we run the command below to set the master information:
    • CHANGE MASTER TO
    •     MASTER_HOST='192.168.0.100',
    •     MASTER_USER='slave_user',
    •     MASTER_PASSWORD='<some_password>',
    •     MASTER_LOG_FILE='mysql-bin.006',
    •     MASTER_LOG_POS=183;
    • And then we start the slave server:
    • START SLAVE;
    • quit;
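    • To verify that replication is running (a standard MySQL check, not part of the original slides at this point), look at the slave status; both replication threads should report Yes:
    • SHOW SLAVE STATUS\G
    •     Slave_IO_Running: Yes
    •     Slave_SQL_Running: Yes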
  • Master-Master Replication:
  • Master-Master Replication: master1 configuration
    • We will refer to system 1 as "master1" and "slave2", and to system 2 as "master2" and "slave1" (each machine acts as both a master and a slave).
    • We go to the master mysql configuration file:
    • /etc/mysql/my.cnf.
    • Then we add the code block below. It sets the data directory and socket path, enables the binary log, and names the database to replicate.
    • [mysqld]
    • datadir=/var/lib/mysql
    • socket=/var/lib/mysql/mysql.sock
    • old_passwords=1
    • log-bin
    • binlog-do-db=<database name>
    • binlog-ignore-db=mysql
    • binlog-ignore-db=test
    • server-id=1
    • [mysql.server]
    • user=mysql
    • basedir=/var/lib
    • [mysqld_safe]
    • err-log=/var/log/mysqld.log
    • pid-file=/var/run/mysqld/mysqld.pid
    • mysql> grant replication slave on *.* to 'replication'@'192.168.16.5' identified by 'slave';
  • Master-Master Replication: slave2 configuration
    • Now we edit the slave2 MySQL configuration file:
    • [mysqld]
    • datadir=/var/lib/mysql
    • socket=/var/lib/mysql/mysql.sock
    • old_passwords=1
    • server-id=2
    • master-host = 192.168.16.4
    • master-user = replication
    • master-password = slave
    • master-port = 3306
    • [mysql.server]
    • user=mysql
    • basedir=/var/lib
    • [mysqld_safe]
    • err-log=/var/log/mysqld.log
    • pid-file=/var/run/mysqld/mysqld.pid
  • Master-Master Replication: start master1/slave1 server
    • We start the slave:
    • mysql> start slave;
    • mysql> show slave status\G
    • Slave_IO_State: Waiting for master to send event
    • Master_Host: 192.168.16.4
    • Master_User: replica
    • Master_Port: 3306
    • Connect_Retry: 60
    • Master_Log_File: MASTERMYSQL01-bin.000009
    • Read_Master_Log_Pos: 4
    • Relay_Log_File: MASTERMYSQL02-relay-bin.000015
    • Relay_Log_Pos: 3630
    • Relay_Master_Log_File: MASTERMYSQL01-bin.000009
    • Slave_IO_Running: Yes
    • Slave_SQL_Running: Yes
    • Replicate_Do_DB:
    • Replicate_Ignore_DB:
    • Replicate_Do_Table:
    • Replicate_Ignore_Table:
    • Replicate_Wild_Do_Table:
    • Replicate_Wild_Ignore_Table:
    • Last_Errno: 0
    • Last_Error:
    • Skip_Counter: 0
    • Exec_Master_Log_Pos: 4
    • Relay_Log_Space: 3630
    • Until_Condition: None
    • Until_Log_File:
    • Until_Log_Pos: 0
    • Master_SSL_Allowed: No
    • Master_SSL_CA_File:
    • Master_SSL_CA_Path:
    • Master_SSL_Cert:
    • Master_SSL_Cipher:
    • Master_SSL_Key:
    • Seconds_Behind_Master: 1519187
  • Master-Master Replication: Creating the master2/slave2
    • On Master2/Slave1, edit my.cnf and add the master entries into it:
    • [mysqld]
    • datadir=/var/lib/mysql
    • socket=/var/lib/mysql/mysql.sock
    • old_passwords=1
    • server-id=2
    • master-host = 192.168.16.4
    • master-user = replication
    • master-password = slave
    • master-port = 3306
    • log-bin
    • binlog-do-db=adam
    • [mysql.server]
    • user=mysql
    • basedir=/var/lib
    • [mysqld_safe]
    • err-log=/var/log/mysqld.log
    • pid-file=/var/run/mysqld/mysqld.pid
    • Create a replication slave account on master2 for master1:
    • mysql> grant replication slave on *.* to 'replication'@'192.168.16.4' identified by 'slave2';
  • Master-Master Replication: Creating the master2/slave2
    • Edit my.cnf on master1 to add the information of its master:
    • [mysqld]
    • datadir=/var/lib/mysql
    • socket=/var/lib/mysql/mysql.sock
    • old_passwords=1
    • log-bin
    • binlog-do-db=adam
    • binlog-ignore-db=mysql
    • binlog-ignore-db=test
    • server-id=1
    • # information for becoming slave.
    • master-host = 192.168.16.5
    • master-user = replication
    • master-password = slave2
    • master-port = 3306
    • [mysql.server]
    • user=mysql
    • basedir=/var/lib
  • Master-Master Replication:
    • Restart MySQL on both master1 and master2.
    • On mysql master1:
    • mysql> start slave;
    • On mysql master2: 
    • mysql> show master status;
    • On mysql master 1:
    • mysql> show slave status\G
  • Managing overload
  • Load Balancing Algorithm
    • Random allocation
    • Round robin allocation
    • Weighted allocation
    • Dynamic load balancing
    • Least connections
    • Least server CPU
  • Load Balancer in Rackspace
    • 1. Add a cloud load balancer. If you already have a Rackspace Cloud account, use the "Create Load Balancer" API operation.
    • 2. Configure the cloud load balancer: select a name, protocol, port, algorithm, and which servers need load balancing.
    • 3. Enjoy the cloud load balancer, which will be online in just a few minutes. Each cloud load balancer can be customized or removed as our needs change.
  • Security
  • Security
    • Firewalls: iptables.
    • The iptables program lets slice admins configure the Linux kernel firewall.
    • Log rotation (logrotate).
    • "Log rotation" refers to the practice of archiving an application's current log, starting a fresh log, and deleting older logs.
  • Iptables
  • Configuring the IPtable
    • sudo /sbin/iptables -F                                                        # flush all existing rules
    • sudo /sbin/iptables -A INPUT -i eth0 -p tcp -m tcp --dport 30000 -j ACCEPT    # allow TCP port 30000 (e.g. a custom SSH port)
    • sudo /sbin/iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT   # allow replies to connections we initiated
    • sudo /sbin/iptables -A INPUT -j REJECT                                        # reject all other inbound traffic
    • sudo /sbin/iptables -A FORWARD -j REJECT                                      # this box does not forward packets
    • sudo /sbin/iptables -A OUTPUT -j ACCEPT                                       # allow all outbound traffic
    • sudo /sbin/iptables -I INPUT -i lo -j ACCEPT                                  # always accept loopback traffic
    • sudo /sbin/iptables -I INPUT 5 -p tcp --dport 80 -j ACCEPT                    # allow HTTP
    • sudo /sbin/iptables -I INPUT 5 -p tcp --dport 443 -j ACCEPT                   # allow HTTPS
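    • To verify the resulting rule set (a standard iptables invocation, not shown in the deck), list the rules with packet counters:
    • sudo /sbin/iptables -L -n -v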
  • Secure??
    • DDoS attack: Distributed Denial of Service attack.
    • Wikileaks.com: is it alive?
  • Log Rotate
    • /etc/logrotate.conf
    • ls /etc/logrotate.d
    • /var/log/apache2/*.log {
    • weekly
    • missingok
    • rotate 52
    • compress
    • delaycompress
    • notifempty
    • create 640 root adm
    • sharedscripts
    • postrotate
    •     if [ -f "`. /etc/apache2/envvars ; echo ${APACHE_PID_FILE:-/var/run/apache2.pid}`" ]; then
    •         /etc/init.d/apache2 reload > /dev/null
    •     fi
    • endscript
    • }
  • Failover IP
    • You can actually 'share' an IP between two servers so when one server is not available the other takes over the IP address.
    • For this you need two servers. Let's keep it simple and call one the 'Master' and one the 'Slave'.
    • What this comes down to is creating a High Availability network with your Slices. Your site won't go down.
  • Heartbeat
    • The failover system is not automatic. You need to install an application to allow the failover to occur.
    • Heartbeat runs on both the Master and Slave servers. They chat away and keep an eye on each other. If the Master goes down, the Slave notices this and brings up the same IP address that the Master was using.
  • How to Configure Heartbeat
    • sudo aptitude update
    • Once you have done that, check whether anything needs upgrading on the server:
    • sudo aptitude safe-upgrade
    • sudo aptitude install heartbeat
    • /etc/heartbeat/
  • Configuring Heartbeat
    • sudo nano /etc/heartbeat/authkeys
    • The contents are as simple as this:
    • auth 1
    • 1 sha1 YourSecretPassPhrase
    • sudo chmod 600 /etc/heartbeat/authkeys
  • Configuring Heartbeat
    • sudo nano /etc/heartbeat/haresources
    • master 123.45.67.890/24
    • The name 'master' is the hostname of the MASTER server and the IP address (123.45.67.890) is the IP address of the MASTER server.
    • To drive this home, this file needs to be the same on BOTH servers.
  • Master ha.cf file
    • sudo nano /etc/heartbeat/ha.cf
    • The contents would be as follows:
    • logfacility daemon
    • keepalive 2
    • deadtime 15
    • warntime 5
    • initdead 120
    • udpport 694
    • ucast eth1 172.0.0.0 # The Private IP address of your SLAVE server.
    • auto_failback on
    • node master # The hostname of your MASTER server.
    • node slave # The hostname of your SLAVE server.
    • respawn hacluster /usr/lib/heartbeat/ipfail
    • use_logd yes
  • Creating Slave ha.cf
    • Let's open the file on the Slave server:
    • sudo nano /etc/heartbeat/ha.cf
    • The contents will need to be:
    • logfacility daemon
    • keepalive 2
    • deadtime 15
    • warntime 5
    • initdead 120
    • udpport 694
    • ucast eth1 172.0.0.1 # The Private IP address of your MASTER server.
    • auto_failback on
    • node master
    • node slave
    • respawn hacluster /usr/lib/heartbeat/ipfail
    • use_logd yes
    • Once done, save the file and restart Heartbeat on the Slave Slice:
    • sudo /etc/init.d/heartbeat restart
  • Testing the failover IP
    • Start off with both servers running and ping the main IP (the IP we have set to be the failover) on the Master server:
    • ping -c2 123.45.67.890
    • The '-c2' option simply tells ping to 'ping' twice.
    • Now shut down the Master Slice:
    • sudo shutdown -h now
    • Without the failover IP, there would be no response from the ping request, as the server is down.
    • Instead, we notice that the IP is still responding to pings: the Slave has taken over the address.
  • Who Am I?
    • Tahsin Hasan
    • Senior Software Engineer
    • Tasawr Interactive.
    • Author of two books, 'Joomla Mobile Web Development Beginner's Guide' and 'OpenCart 1.4 Template Design Cookbook', with Packt Publishing, UK.
    • [email_address]
    • http://newdailyblog.blogspot.com (tahSin’s gaRage).
  • Questions?