Wim Godden
Cu.be Solutions
@wimgtr
Beyond PHP :
It's not (just) about the code
Who am I ?
Wim Godden (@wimgtr)
Where I'm from
My town
Belgium – the traffic
Who am I ?
Wim Godden (@wimgtr)
Founder of Cu.be Solutions (http://cu.be)
Open Source developer since 1997
Developer of OpenX, PHPCompatibility, Nginx SLIC, ...
Speaker at PHP and Open Source conferences
Cu.be Solutions ?
Open source consultancy
PHP-centered
Training courses
High-speed redundant network (BGP, OSPF, VRRP)
High scalability development
Nginx + extensions
MySQL Cluster
Projects :
mostly IT & Telecom companies
lots of public-facing apps/sites
Who are you ?
Developers ?
Anyone set up a MySQL master-slave ?
Anyone set up a site/app on separate web and database servers ?
→ How much traffic between them ?
The topic
Things we take for granted
Famous last words : "It should work just fine"
Works fine today
→ might fail tomorrow
Most common mistakes
PHP code ↔ PHP ecosystem
It starts with...
… code !
First up : database
Database queries – complexity
SELECT DISTINCT n.nid, n.uid, n.title, n.type, e.event_start, e.event_start AS
event_start_orig, e.event_end, e.event_end AS event_end_orig, e.timezone,
e.has_time, e.has_end_date, tz.offset AS offset, tz.offset_dst AS offset_dst,
tz.dst_region, tz.is_dst, e.event_start - INTERVAL IF(tz.is_dst, tz.offset_dst,
tz.offset) HOUR_SECOND AS event_start_utc, e.event_end - INTERVAL
IF(tz.is_dst, tz.offset_dst, tz.offset) HOUR_SECOND AS event_end_utc,
e.event_start - INTERVAL IF(tz.is_dst, tz.offset_dst, tz.offset) HOUR_SECOND +
INTERVAL 0 SECOND AS event_start_user, e.event_end - INTERVAL IF(tz.is_dst,
tz.offset_dst, tz.offset) HOUR_SECOND + INTERVAL 0 SECOND AS
event_end_user, e.event_start - INTERVAL IF(tz.is_dst, tz.offset_dst, tz.offset)
HOUR_SECOND + INTERVAL 0 SECOND AS event_start_site, e.event_end -
INTERVAL IF(tz.is_dst, tz.offset_dst, tz.offset) HOUR_SECOND + INTERVAL 0
SECOND AS event_end_site, tz.name as timezone_name FROM node n INNER
JOIN event e ON n.nid = e.nid INNER JOIN event_timezones tz ON tz.timezone =
e.timezone INNER JOIN node_access na ON na.nid = n.nid LEFT JOIN
domain_access da ON n.nid = da.nid LEFT JOIN node i18n ON n.tnid > 0 AND
n.tnid = i18n.tnid AND i18n.language = 'en' WHERE (na.grant_view >= 1 AND
((na.gid = 0 AND na.realm = 'all'))) AND ((da.realm = "domain_id" AND da.gid = 4)
OR (da.realm = "domain_site" AND da.gid = 0)) AND (n.language ='en' OR
n.language ='' OR n.language IS NULL OR n.language = 'is' AND i18n.nid IS NULL)
AND ( n.status = 1 AND ((e.event_start >= '2010-01-31 00:00:00' AND
e.event_start <= '2010-03-01 23:59:59') OR (e.event_end >= '2010-01-31 00:00:00'
AND e.event_end <= '2010-03-01 23:59:59') OR (e.event_start <= '2010-01-31
00:00:00' AND e.event_end >= '2010-03-01 23:59:59')) ) GROUP BY n.nid HAVING
(event_start >= '2010-02-01 00:00:00' AND event_start <= '2010-02-28 23:59:59')
OR (event_end >= '2010-02-01 00:00:00' AND event_end <= '2010-02-28 23:59:59')
OR (event_start <= '2010-02-01 00:00:00' AND event_end >= '2010-02-28
23:59:59') ORDER BY event_start ASC;
Database - indexing
'select id from stock where status = 2 order by qty'
→ composite index on (status, qty)
'select id from stock where status > 2 order by qty'
→ composite index on (status, qty) ?
→ Depends :
- Btree : yes
- Hash : range selection stops use of the composite index
→ or separate indexes on status and qty (usable since recent MySQL versions)
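A minimal sketch of the matching index definition and a quick check of what the optimizer actually picks (assuming $db is a connected mysqli handle, as in the code samples later in this talk; the index name is illustrative) :

$db->query("ALTER TABLE stock ADD INDEX idx_status_qty (status, qty)");
// For the range query, check which index is chosen and whether a filesort remains :
$result = $db->query("EXPLAIN SELECT id FROM stock WHERE status > 2 ORDER BY qty");
foreach ($result as $row) {
    echo $row['key'] . ' | ' . $row['Extra'] . "\n";
}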
Database - indexing
Indexes make database faster
→ Let's index everything !
→ DON'T :
Insert/update/delete → Index modification
Each query → evaluation of all indexes
"Relational schema design is based on data
but index design is based on queries"
(Bill Karwin, Percona)
Databases – detecting problematic queries
Slow query log
→ SET GLOBAL slow_query_log = ON;
Queries not using indexes
→ In my.cnf/my.ini : 'log_queries_not_using_indexes'
General query log
→ SET GLOBAL general_log = ON;
→ Turn it off quickly !
Percona Toolkit (Maatkit)
pt-query-digest
Databases - pt-query-digest
# Profile
# Rank Query ID Response time Calls R/Call Apdx V/M Item
# ==== ================== ================ ===== ======= ==== ===== ==========
# 1 0x543FB322AE4330FF 16526.2542 98.2% 1208 13.6806 1.00 0.00 SELECT output_option
# 2 0xE78FEA32E3AA3221 0.8312 0.0% 6412 0.0001 1.00 0.00 SELECT poller_output poller_item
# 3 0x211901BF2E1C351E 0.6811 0.0% 6416 0.0001 1.00 0.00 SELECT poller_time
# 4 0xA766EE8F7AB39063 0.2805 0.0% 149 0.0019 1.00 0.00 SELECT wp_terms wp_term_taxonomy wp_term_relationships
# 5 0xA3EEB63EFBA42E9B 0.1999 0.0% 51 0.0039 1.00 0.00 SELECT UNION wp_pp_daily_summary wp_pp_hourly_summary
# 6 0x94350EA2AB8AAC34 0.1956 0.0% 89 0.0022 1.00 0.01 UPDATE wp_options
# MISC 0xMISC 302.8137 1.8% 3853 0.0002 NS 0.0 <147 ITEMS>
Databases - pt-query-digest
# Query 2: 0.26 QPS, 0.00x concurrency, ID 0x92F3B1B361FB0E5B at byte 14081299
# This item is included in the report because it matches --limit.
# Scores: Apdex = 1.00 [1.0], V/M = 0.00
# Query_time sparkline: | _^ |
# Time range: 2011-12-28 18:42:47 to 19:03:10
# Attribute pct total min max avg 95% stddev median
# ============ === ======= ======= ======= ======= ======= ======= =======
# Count 1 312
# Exec time 50 4s 5ms 25ms 13ms 20ms 4ms 12ms
# Lock time 3 32ms 43us 163us 103us 131us 19us 98us
# Rows sent 59 62.41k 203 231 204.82 202.40 3.99 202.40
# Rows examine 13 73.63k 238 296 241.67 246.02 10.15 234.30
# Rows affecte 0 0 0 0 0 0 0 0
# Rows read 59 62.41k 203 231 204.82 202.40 3.99 202.40
# Bytes sent 53 24.85M 46.52k 84.36k 81.56k 83.83k 7.31k 79.83k
# Merge passes 0 0 0 0 0 0 0 0
# Tmp tables 0 0 0 0 0 0 0 0
# Tmp disk tbl 0 0 0 0 0 0 0 0
# Tmp tbl size 0 0 0 0 0 0 0 0
# Query size 0 21.63k 71 71 71 71 0 71
# InnoDB:
# IO r bytes 0 0 0 0 0 0 0 0
# IO r ops 0 0 0 0 0 0 0 0
# IO r wait 0 0 0 0 0 0 0 0
# pages distin 40 11.77k 34 44 38.62 38.53 1.87 38.53
# queue wait 0 0 0 0 0 0 0 0
# rec lock wai 0 0 0 0 0 0 0 0
# Boolean:
# Full scan 100% yes, 0% no
# String:
# Databases wp_blog_one (264/84%), wp_blog_tw… (36/11%)... 1 more
# Hosts
# InnoDB trxID 86B40B (1/0%), 86B430 (1/0%), 86B44A (1/0%)... 309 more
# Last errno 0
# Users wp_blog_one (264/84%), wp_blog_two (36/11%)... 1 more
# Query_time distribution
# 1us
# 10us
# 100us
# 1ms
# 10ms ################################################################
# 100ms
# 1s
# 10s+
# Tables
# SHOW TABLE STATUS FROM `wp_blog_one ` LIKE 'wp_options'\G
# SHOW CREATE TABLE `wp_blog_one `.`wp_options`\G
# EXPLAIN /*!50100 PARTITIONS*/
SELECT option_name, option_value FROM wp_options WHERE autoload = 'yes'\G
Databases – next step : explain
explain <query>
"How will MySQL execute the query"
Databases – next step : explain
+----+-------------+-----------+------+---------------+------+---------+------+--------+-------------+
| id | select_type | TABLE | TYPE | possible_keys | KEY | key_len | REF | ROWS | Extra |
+----+-------------+-----------+------+---------------+------+---------+------+--------+-------------+
| 1 | SIMPLE | employees | ALL | NULL | NULL | NULL | NULL | 299809 | USING WHERE |
+----+-------------+-----------+------+---------------+------+---------+------+--------+-------------+
+----+-------------+------------+-------+-------------------------------+---------+---------+-------+------+-------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+------------+-------+-------------------------------+---------+---------+-------+------+-------+
| 1 | SIMPLE | itdevice | const | PRIMARY,fk_device_devicetype1 | PRIMARY | 4 | const | 1 | |
| 1 | SIMPLE | devicetype | const | PRIMARY | PRIMARY | 4 | const | 1 | |
+----+-------------+------------+-------+-------------------------------+---------+---------+-------+------+-------+
Databases – next step : explain
Type of lookup
'system', 'const' and 'ref' = good
'ALL' = bad
Extra info
Using index = good
Using filesort = usually bad
For / foreach
$customers = CustomerQuery::create()
->filterByState('MN')
->find();
foreach ($customers as $customer) {
$contacts = ContactsQuery::create()
->filterByCustomerid($customer->getId())
->find();
foreach ($contacts as $contact) {
doSomestuffWith($contact);
}
}
Joins
$contacts = mysql_query("
select
contact.*
from
customer
join contact
on contact.customerid = customer.id
where
state = 'MN'
");
while ($contact = mysql_fetch_array($contacts)) {
doSomeStuffWith($contact);
}
or the ORM equivalent
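The same join, sketched with a PDO prepared statement instead of the removed mysql_* extension ($pdo is an assumed, already-connected PDO handle) :

$stmt = $pdo->prepare("
    select contact.*
    from customer
    join contact on contact.customerid = customer.id
    where customer.state = :state
");
$stmt->execute(array('state' => 'MN'));
while ($contact = $stmt->fetch(PDO::FETCH_ASSOC)) {
    doSomeStuffWith($contact);
}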
Better...
10001 queries → 1 query
Sadly : people still produce code with query loops
Usually :
Growth not anticipated
Internal app → Public app
The origins of this talk
Customers :
Projects we built
Projects we didn't build, but got pulled into
Fixes
Changes
Infrastructure migration
15 years of 'how to cause mayhem with a few lines of code'
Client X
Jobs search site
Monitor job views :
Daily hits
Weekly hits
Monthly hits
Which user saw which job
Client X
Originally : when user viewed job details
Now : when job is in search result
Search for 'php' → 50 jobs = 50 jobs to be updated
→ 50 updates for shown_today
→ 50 updates for shown_week
→ 50 updates for shown_month
→ 50 inserts for shown_user
= 200 queries for 1 search !
Client X : the code
foreach ($jobs as $job) {
$db->query("
insert into shown_today(
jobId,
number
) values(
" . $job['id'] . ",
1
)
on duplicate key
update
number = number + 1
");
$db->query("
insert into shown_week(
jobId,
number
) values(
" . $job['id'] . ",
1
)
on duplicate key
update
number = number + 1
");
$db->query("
insert into shown_month(
jobId,
number
) values(
" . $job['id'] . ",
1
)
on duplicate key
update
number = number + 1
");
$db->query("
insert into shown_user(
jobId,
userId,
when
) values (
" . $job['id'] . ",
" . $user['id'] . ",
now()
)
");
}
Client X : the graph
Client X : the numbers
600-1000 inserts/sec (peaks up to 1600)
400-1000 updates/sec (peaks up to 2600)
16 core machine
Client X : panic !
Mail : "MySQL slave is more than 5 minutes behind master"
We set it up → who did they blame ?
Wait a second !
Client X : what's causing those peaks ?
Client X : possible cause ?
Code changes ?
→ According to developers : none
Action : turn on general log, analyze with pt-query-digest
→ 50+-fold increase in 4 queries
→ Developers : 'Oops we did make a change'
After 3 days : 2.5 days behind
Every hour : 50 min extra lag
Client X : But why is the slave lagging ?
Master → Slave (diagram) :
Master : binlog file (master-bin-xxxx.log), read by the binlog dump thread
Slave : slave I/O thread copies the binlog to a local file, slave SQL thread applies it
Client X : Master
Client X : Slave
Client X : fix ?
foreach ($jobs as $job) {
$db->query("
insert into shown_today(
jobId,
number
) values(
" . $job['id'] . ",
1
)
on duplicate key
update
number = number + 1
");
$db->query("
insert into shown_week(
jobId,
number
) values(
" . $job['id'] . ",
1
)
on duplicate key
update
number = number + 1
");
$db->query("
insert into shown_month(
jobId,
number
) values(
" . $job['id'] . ",
1
)
on duplicate key
update
number = number + 1
");
$db->query("
insert into shown_user(
jobId,
userId,
when
) values (
" . $job['id'] . ",
" . $user['id'] . ",
now()
)
");
}
Client X : the code change
insert into shown_today values (5, 1), (8, 1), (12, 1), (18, 1), … on duplicate key … ;
insert into shown_week values (5, 1), (8, 1), (12, 1), (18, 1), … on duplicate key … ;
insert into shown_month values (5, 1), (8, 1), (12, 1), (18, 1), … on duplicate key … ;
insert into shown_user values (5, 23, "2013-11-12 12:01:00"), (8, 23, "2013-11-12
12:01:00"), … ;
Client X : the code change
$todayQuery = "
insert into shown_today(
jobId,
number
) values ";
foreach ($jobs as $job) {
$todayQuery .= "(" . $job['id'] . ", 1),";
}
$todayQuery = rtrim($todayQuery, ',');
$todayQuery .= "
on duplicate key
update
number = number + 1
";
$db->query($todayQuery);
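A slightly tidier way to build the same statement (a sketch; it assumes $jobs is non-empty and the ids are integers). PHP puts no practical limit on the string length, but MySQL's max_allowed_packet caps the statement size, so very large batches may need to be split into chunks :

$values = array();
foreach ($jobs as $job) {
    $values[] = '(' . (int) $job['id'] . ', 1)';
}
$db->query("
    insert into shown_today(jobId, number)
    values " . implode(', ', $values) . "
    on duplicate key update number = number + 1
");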
Client X : the chosen solution
$db->autocommit(false);
foreach ($jobs as $job) {
$db->query("
insert into shown_today(
jobId,
number
) values(
" . $job['id'] . ",
1
)
on duplicate key
update
number = number + 1
");
$db->query("
insert into shown_week(
jobId,
number
) values(
" . $job['id'] . ",
1
)
on duplicate key
update
number = number + 1
");
$db->query("
insert into shown_month(
jobId,
number
) values(
" . $job['id'] . ",
1
)
on duplicate key
update
number = number + 1
");
$db->query("
insert into shown_user(
jobId,
userId,
when
) values (
" . $job['id'] . ",
" . $user['id'] . ",
now()
)
");
}
$db->commit();
Client X : conclusion
For loops are bad (we already knew that)
Add master/slave and it gets much worse
Use transactions : they can provide a huge performance increase (but keep the maximum transaction size in mind)
Result : slave caught up 5 days later
Database → Network
Customer Y
Top 10 site in Belgium
Growing rapidly
At peak traffic :
Inexplicable latency on the database
Load on webservers : minimal
Load on database servers : acceptable
Client Y : the network
Client Y : the network
60GB 700GB 700GB
Client Y : network overload
Cause : Drupal hooks → retrieving data that was not needed
Only load data you actually need
Don't know at the start ? → Use lazy loading (sketch below)
Caching :
Same story
Memcached/Redis are fast
But : data still needs to cross the network
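A minimal lazy-loading sketch (the class, property and query are illustrative, not taken from the client's code) : the contacts are only fetched the first time they are actually needed :

class Customer
{
    private $pdo;
    private $id;
    private $contacts = null;

    public function __construct(PDO $pdo, $id)
    {
        $this->pdo = $pdo;
        $this->id = $id;
    }

    public function getContacts()
    {
        // The query runs only on first access; later calls reuse the cached result
        if ($this->contacts === null) {
            $stmt = $this->pdo->prepare('select * from contact where customerid = ?');
            $stmt->execute(array($this->id));
            $this->contacts = $stmt->fetchAll(PDO::FETCH_ASSOC);
        }
        return $this->contacts;
    }
}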
Network trouble : more than just traffic
Customer Z
150,000 visits/day
News ticker :
XML feed from other site (owned by same customer)
Cached for 15 min
Customer Z – fetching the feed
if (filectime(APP_DIR . '/tmp/cacheFile.xml') < time() - 900) {
unlink(APP_DIR . '/tmp/cacheFile.xml');
file_put_contents(
APP_DIR . '/tmp/cacheFile.xml',
file_get_contents('http://www.scrambledsitename.be/xml/feed.xml')
);
}
$xmlfeed = ParseXmlFeed(APP_DIR . '/tmp/cacheFile.xml');
What's wrong with this code ?
Customer Z – no feed without the source
Feed source
Customer Z – no feed without the source
Feed source
Customer Z : timeout
default_socket_timeout : 60 sec by default
Each visitor : 60 sec wait time
People keep hitting refresh → more load
More active connections → more load
Apache hits maximum connections → entire site down
Customer Z : timeout fix
$context = stream_context_create(
array(
'http' => array(
'timeout' => 5
)
)
);
if (filectime(APP_DIR . '/tmp/cacheFile.xml') < time() - 900) {
unlink(APP_DIR . '/tmp/cacheFile.xml');
file_put_contents(
APP_DIR . '/tmp/cacheFile.xml',
file_get_contents('http://www.scrambledsitename.be/xml/feed.xml', false, $context)
);
}
$xmlfeed = ParseXmlFeed(APP_DIR . '/tmp/cacheFile.xml');
Customer Z : don't delete from cache
$context = stream_context_create(
array(
'http' => array(
'timeout' => 5
)
)
);
if (filectime(APP_DIR . '/tmp/cacheFile.xml') < time() - 900) {
unlink(APP_DIR . '/tmp/cacheFile.xml');
file_put_contents(
APP_DIR . '/tmp/cacheFile.xml',
file_get_contents('http://www.scrambledsitename.be/xml/feed.xml', false, $context)
);
}
$xmlfeed = ParseXmlFeed(APP_DIR . '/tmp/cacheFile.xml');
Customer Z : don't delete from cache
$context = stream_context_create(
array(
'http' => array(
'timeout' => 5
)
)
);
if (filectime(APP_DIR . '/tmp/cacheFile.xml') < time() - 900) {
file_put_contents(
APP_DIR . '/tmp/cacheFile.xml',
file_get_contents('http://www.scrambledsitename.be/xml/feed.xml', false, $context)
);
}
$xmlfeed = ParseXmlFeed(APP_DIR . '/tmp/cacheFile.xml');
Customer Z : don't delete from cache
$context = stream_context_create(
array(
'http' => array(
'timeout' => 5
)
)
);
if (filectime(APP_DIR . '/tmp/cacheFile.xml') < time() - 900) {
$feed = file_get_contents('http://www.scrambledsitename.be/xml/feed.xml', false, $context);
if ($feed !== false) {
file_put_contents(
APP_DIR . '/tmp/cacheFile.xml',
$feed
);
}
}
$xmlfeed = ParseXmlFeed(APP_DIR . '/tmp/cacheFile.xml');
Customer Z : process early
$context = stream_context_create(
array(
'http' => array(
'timeout' => 5
)
)
);
if (filectime(APP_DIR . '/tmp/cacheFile.xml') < time() - 900) {
$feed = file_get_contents('http://www.scrambledsitename.be/xml/feed.xml', false, $context);
if ($feed !== false) {
file_put_contents(
APP_DIR . '/tmp/cacheFile.xml',
ParseXmlFeed($feed)
);
}
}
Customer Z : file_[get|put]_contents atomicity
$context = stream_context_create(
array(
'http' => array(
'timeout' => 5
)
)
);
if (filectime(APP_DIR . '/tmp/cacheFile.xml') < time() - 900) {
$feed = file_get_contents('http://www.scrambledsitename.be/xml/feed.xml', false, $context);
if ($feed !== false) {
file_put_contents(
APP_DIR . '/tmp/cacheFile.xml',
ParseXmlFeed($feed)
);
}
}
Relying on user requests → concurrent requests → possibly corrupt data
Better : run it every 15 min from a cronjob
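A minimal sketch of that cron-driven, atomic variant (the .tmp suffix is illustrative; $context is the stream context from the slides above). rename() within the same filesystem replaces the file atomically, so a web request never sees a half-written cache file :

$feed = file_get_contents('http://www.scrambledsitename.be/xml/feed.xml', false, $context);
if ($feed !== false) {
    $tmpFile = APP_DIR . '/tmp/cacheFile.xml.tmp';
    file_put_contents($tmpFile, ParseXmlFeed($feed));
    rename($tmpFile, APP_DIR . '/tmp/cacheFile.xml');
}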
Network resources
Use timeouts for all (examples below) :
fopen
curl
SOAP
…
Data source trusted ?
→ set up a webservice
→ let them push updates when their feed changes
→ less load on data source
→ no timeout issues
Add logging → early detection
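Minimal timeout sketches for the other clients listed above (URLs and values are illustrative) :

// cURL
$ch = curl_init('http://www.example.com/feed.xml');
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 2);   // seconds allowed to establish the connection
curl_setopt($ch, CURLOPT_TIMEOUT, 5);          // seconds allowed for the entire request
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$feed = curl_exec($ch);
curl_close($ch);

// SOAP : 'connection_timeout' only covers connecting;
// reading the response still follows default_socket_timeout
ini_set('default_socket_timeout', 5);
$client = new SoapClient('http://www.example.com/service?wsdl', array('connection_timeout' => 5));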
Logging
Logging = good
Logging in PHP using fopen
→ bad idea : locking issues
→ Use Monolog (example below) : file, syslog, mail, Pushover, HipChat, Graylog,
Rollbar, Elasticsearch (and 50 more)
For Firefox : FirePHP (add-on for Firebug)
Debug logging = bad in production
Watch your logs !
Don't log on slow disks → I/O bottlenecks
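A minimal Monolog sketch (channel name, handler choice and message are illustrative; assumes monolog/monolog is installed via Composer) :

use Monolog\Logger;
use Monolog\Handler\SyslogHandler;

require 'vendor/autoload.php';

$log = new Logger('feed');
// Syslog avoids the file-locking issues of fopen-based logging
$log->pushHandler(new SyslogHandler('myapp', LOG_USER, Logger::WARNING));
$log->warning('Feed fetch failed', array('url' => 'http://www.scrambledsitename.be/xml/feed.xml'));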
File system : I/O bottlenecks
Causes :
Excessive writes (database updates, logfiles, swapping, …)
Excessive reads (non-indexed database queries, swapping, small file
system cache, …)
How to detect ?
top
iostat
See iowait ? Stop worrying about PHP, fix the I/O problem !
Cpu(s): 0.2%us, 3.0%sy, 0.0%ni, 61.4%id, 35.5%wa, 0.0%hi, 0.0%si, 0.0%st
avg-cpu: %user %nice %system %iowait %steal %idle
0.10 0.00 0.96 53.70 0.00 45.24
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 120.40 0.00 123289.60 0 616448
sdb 2.10 0.00 4378.10 0 18215
dm-0 4.20 0.00 36.80 0 184
dm-1 0.00 0.00 0.00 0 0
Much more than code
DB server
Webserver
User
Network
XML feed
Look beyond PHP (or Perl, Ruby, Python, ...) !
Questions ?
Contact
Twitter @wimgtr
Web http://techblog.wimgodden.be
Slides http://www.slideshare.net/wimg
E-mail wim.godden@cu.be
Please...
Rate my talk : http://joind.in/10408
Step-by-step : most common issues
iowait on database server
I/O reads (use iostat) ? → missing/wrong indexes
I/O writes ?
→ no transactions ?
→ too many queries ?
→ bad DB engine settings
iowait on webserver (logs ? static files ?)
CPU on database server (missing/wrong/too many indexes)
CPU on webserver (PHP)


Editor's Notes

  1. 5kbit/sec or 100Mbit/sec ?
  2. Let's talk about code Without : we don't exist What are most common mistakes in ecosystem Let's start with the database
  3. time spent per query pattern how many queries of that query pattern
  4. Get back to what I said Lots of people use ORM - easier - don&amp;apos;t need to write queries - object-oriented but people start doing this Imagine 10000 customers → 10001 queries
  5. Not best code Uses deprecated mysql extension no error handling
  6. Master : 16 CPU cores 12 cores for SQL 1 core for binlog dump rest for system Slave : 16 CPU cores 1 core for slave I/O 1 core for slave SQL
  7. Grouping Works fine, but : maximum size of string ? PHP = no limit MySQL = max_allowed_packet
  8. Grouping Works fine, but : maximum size of string ? PHP = no limit MySQL = max_allowed_packet
  9. All in a single commit Note : transaction has max. size Possible : combination with previous solution
  10. took few moments to figure out No network monitoring → iptraf → 100Mbit/sec limit → packets dropped → connections dropped Customer : upgrade switch Us : why 100Mbit/sec ?
  11. Databases → network What other network related issues ?
  12. Server on which feed located : crashed Fine for few minutes (cache) 15 minutes : file_get_contents uses default_socket_timeout
  13. Better, not perfect. What else is wrong ? Multiple visitors hit expiring cache → file delete → xml feed hit a lot
  14. How do you treat your data : - where do you get it - how long did you have to wait to get it - how is it transported - how is it processed minimize the amount of data : retrieved transported processed, sent to db and users