profiling for grown-ups
Posted on 18-May-2015
Profiling for Grownups
(107 slides to go ... )
Hi!
Hi everybody! Nice to be here in Berlin!
Let's wait until everybody is seated :-)
Let's wait until everybody is here. Actually I don't have enough slides, so I'll try to fill some time with useless slides.
Hi, great to see you here!
Ah, so that's why we waited. Great you came, too.
Hey, good lookin'! I'm only in the software business for all the pretty people around.
Good morning!
Who is tired?
Yeah, I am tired, too.
Yep, me too. That's the bad thing about Berlin. In Munich - where I am from - you get enough sleep, because there is no ...
Ok, let‘s start
But let‘s start anyway
With a short story.
I am old. And you know what old people do. They tell you war stories.
Actually two short stories.
Two war stories, actually.
War Story 1
The first one, already some years ago.
Big On-Demand Video Streaming Platform
It was one of the first video on demand platforms around. Large scale, with some unique licensed live events that were only streamed using this platform.
Based on an enterprise ecommerce solution
To build a solid solution they decided to use an enterprise ecommerce solution as the core. So basic performance problems shouldn‘t happen, since it was already enterprise.
We are professionals!
The other reason it should scale was us: they wanted to have Mayflower in the team to care about performance and scaling issues.
Server Setup
done right
So we cared about the server setup. We implemented a small testing environment, benchmarked the resource usage and calculated that we‘d need 12 4-core machines for a start.
Performance Testing
done right, too
When the cluster was ready, we did a classic check to see if it scales.
root@local:/# ab2 -n 50000 -c 500 http://lb.ours.tld/start
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
...
Requests per second: A_LOT [#/sec] (mean)
Time per request: REALLY_FAST [ms] (mean)
Time per request: EVEN_FASTER [ms] (mean, across all concurrent requests)
We first ran ApacheBench. Everything looked pretty good.
Then we simulated realistic user traffic using JMeter.
Profiling included!
So we thought: hey, how professional we are!
Everything was fine.
Until 10 minutes before the live event.
Then everything was stuck.
Oops.
That wasn't that brilliant.
War Story 2
The second war story, not that long ago.
Cloud Project
100% Hipster
It was a classic hipster cloud project. Yes, there is memcache, RabbitMQ, MogileFS, ejabberd, Varnish, nginx and cloud! Oh, and a deployment pipeline, of course, all managed by Puppet.
Internal Cloud
We started on our internal cloud - based on Eucalyptus 1.6. Anybody else worked with Eucalyptus? Hate it? Anyway, we developed it there. It was fast and snappy. We were happy, and so was our customer.
One day it went live, and it worked great. Just like we expected it to work. Just a bit less snappy. Responsiveness wasn't as good.
Fscking Slow
Actually, it was very slow: more than 2 seconds for the start page, more than a second for every logged-in page.
Performance Optimization FTW!
MemCache
APC
ORM-Level Caching
Template-Caching
So we did a lot of work. There was a caching-layer. And another one to support it. And one in the ORM. And one for the database data. And so on.
Still sloooooow.
But it still was slow. We tried it again on our local cloud, where it was fast.
Test: Put it on 1 laptop
So we took a really old Lenovo R61 laptop and installed everything on it. MySQL, MogileFS, RabbitMQ, ejabberd, and the whole set of production data.
Blazingly fast again.
And it was blazingly fast again. On a laptop with 4 GB RAM, and not even an SSD installed.
And we were like "What the fuck?" This was a solid medium-sized Amazon EC2 installation, mostly based on large VMs. And it was slower than an old laptop?
So, that‘s what this talk is about.
Stuff about profiling we never wanted to know.
Those were the stories I wanted to tell. This talk is about the things I never wanted to know about profiling.
Johann-Peter Hartmann: Hacker-turned-Manager
Hacker at heart.
Company: Mayflower GmbH
(founded SektionEins GmbH, too)
I am actually an old-school PHP hacker, I even did a talk at the first PHP conference ever in 2000. Now I am the CTO of Mayflower GmbH, we do PHP development. A number of PHP release managers have been working for our company. We love PHP. I founded SektionEins together with Stefan Esser, too. Ask me about security.
And this is our logo done with bacon.
And this is the Mayflower logo, done with bacon! We do agile, DevOps, slacktime with NodeCopter projects.
Every new important project is done with symfony 2.
We do every new, important project with symfony 2.
Symfony 2: I‘ve got only basic knowledge
Symfony CMF sandbox was the official lab rat for this talk
But I don't know a lot about it. My colleagues do, like Paul, who is going to do a short talk about fancy reactive programming Symfony stuff tomorrow.
Zend Framework 1: a lot of knowledge
I did a lot of stuff with Zend Framework, though, maybe that's the reason I ended up with a profiling talk here :-) For sure it's one of the reasons we do Symfony now :-)
Profiling
Who in here uses the Zend profiler? Who uses Xdebug? Who xhprof? The Symfony web debug toolbar? The web debug toolbar with xhprof? Server-side profiling like Valgrind? Sysprof? OProfile?
Why profiling anyway?
Why are you doing it? IMHO there are two major reasons:
To figure out how to survive the launch / update / new popularity
Reason No 1
First, you really want to survive the launch. Or a new version update. Or a TV spot for your website tonight.
Reason No 2
To figure out why you did not survive the launch / update / new popularity
Second reason: you want to know why it did not work out.
The first moment you are certain about your application's performance is
one day after the launch
The ugly truth about profiling:
The bad thing about profiling is that you really, really want to know whether your website stays online, all of your customers are happy and everything is up and running. Even with a television campaign. But you won't get a guarantee anyway, and in fact everybody knows this.
That's why your boss thinks it's the responsibility of the software guys.
And that's why your boss thinks you are the one to blame. Maybe some very stupid feature just needs a lot of computation work, but to him the reason for bad server responsiveness is always slow code, never a slow feature.
That's why you think it's the responsibility of the operations guys.
And, of course, there is the operations team and the operations infrastructure. If the system slows down, it‘s their job to make it fast again. So if their hardware is not fast enough, they should get some faster iron.
On the other hand, the operations guys think the developers don't care about performance anyway.
The ops guys think that software developers simply write slow code. Which, in a lot of cases, they actually do.
Strategy No 1
Make it hard to blame you.
Create an impressive presentation!
So we use a common, well established strategy: CYA.
We do 300 Req/s!
And we do a little benchmark based on ApacheBench. And we prove that we are able to serve 300 requests in one second. How great we are!
No, you don‘t
Actually, you did not. You measured something that does not exist.
Unless somebody ...
... hacked your network
... started 50 w3m in parallel and
... pressed „reload“ 6 times a second
What everybody really wants to know
1. What are the odds that everything is up and running after the launch?
2. Is there anything we should fix before we launch?
3. Do we know how to fix any performance issue fast?
1. Real Test Scenario
Because 1 request != 1 out of 1000 requests
First you need a test scenario that is close to your current or expected reality.
JMeter
Virtual Users (Personas)
First: use a proper load generation tool like JMeter, WebLOAD, Silk Performer or similar. Create different thread groups. Personas: in marketing and user-centered design, personas are fictional characters created to represent the different user types within a targeted demographic, attitude and/or behavior set that might use a site, brand or product in a similar way.
That's a simple JMeter setup. Who already uses JMeter?
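Once a test plan with its persona thread groups is saved, it can be run headless - a minimal sketch, where the plan name and file paths are placeholders:

```shell
# Run the JMeter test plan in non-GUI mode (much cheaper than the GUI):
#   -n  non-GUI mode
#   -t  path to the test plan
#   -l  write raw sample results for later analysis
jmeter -n -t persona-testplan.jmx -l results.jtl
```

The results file is also what the perfmon graphs and listener reports are fed from afterwards.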
Are my tests realistic?
If you're already online
If you already launched it‘s easy to answer. Simply take a look at your existing web traffic.
WebAnalytics for Validation
Google Analytics
Webalizer
Analog
AWStats
If you already know what is happening on your website: great. Just do a Webalizer or Analog run on the access_log of a normal hour, and another one on the log of your benchmark run. If the results look similar, you are generating the right kind of traffic.
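You can do a rough version of that comparison without any log analyzer at all. A sketch using awk over a tiny inlined sample log; `url_histogram` is a hypothetical helper name, and in practice you would point it at the real access_log and at the one from the benchmark run, then eyeball the two distributions:

```shell
# Print the relative frequency of each requested URL in an access_log.
url_histogram() {
  # in combined log format, field 7 is the request path
  awk '{ count[$7]++ } END { for (u in count) printf "%s %.2f\n", u, count[u]/NR }' "$1" | sort
}

# tiny inlined sample standing in for a real production log
cat > real.log <<'EOF'
1.2.3.4 - - [18/May/2015:10:00:00 +0200] "GET /start HTTP/1.1" 200 512
1.2.3.4 - - [18/May/2015:10:00:01 +0200] "GET /start HTTP/1.1" 200 512
1.2.3.4 - - [18/May/2015:10:00:02 +0200] "GET /login HTTP/1.1" 200 256
1.2.3.4 - - [18/May/2015:10:00:03 +0200] "GET /video/42 HTTP/1.1" 200 1024
EOF

url_histogram real.log
```

Run the same function on the benchmark log; if the shares per URL differ wildly, your load test is hammering the wrong pages.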
Correlation: 99.5%
In a few cases we did some validation using proper statistics, where we analyzed the correlation.
If you haven't launched yet
If you haven't got any data - and we are not talking about an intranet application for exactly 50 people who start working at 9 am - then you have to guess what is going to happen.
Create Scenarios based on Personas:
1. Worst Case: users doing all the expensive stuff
2. Expected Average: "normal" user behavior
Since you don‘t know where you are going to end up, simply create 3 scenarios that try to show the range of your applications performance.
Monitoring
Cacti
Munin
JMeter-Perfmon
On the server side, you need to figure out what happens during the test. Maybe you already have a monitoring solution installed, like Cacti or Munin. Cacti anybody? Planning to change? JMeter provides an agent-based monitoring solution; it's part of the JMeter plugins package.
Munin
Most of the time we use Munin, since it's simply already there.
source: http://www.methodsandtools.com/tools/jmeterplugins.php
You can configure PerfMon to show graphs like "CPU load based on number of users" and the like.
Interesting Stuff
CPU: User, System, IOWait, Interrupts, Context switches, Forks
Memory: PageFaults, Swap, Free Memory
IO: Network, Connections, Harddisk, SSD
A lot of consoles.
And of course a lot of consoles. To monitor everything. I am not talking about game consoles, btw :-)
top, iotop
vmstat, sysprof
Tools to do some analysis:
top - everybody should know this one. iotop: a small Python script, already part of all major distributions. vmstat was originally about virtual memory statistics (hence the name), but displays processes, interrupts, paging, block IO and a lot of device statistics as well. Sysprof is a statistical, system-wide profiler I'll show later.
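As a quick cheat sheet for the tools just mentioned, here are one-shot, non-interactive invocations (a sketch; exact flags can vary slightly between distributions, and iotop needs root):

```shell
# snapshot of load, memory and the busiest processes (batch mode, one iteration)
top -b -n 1 | head -15

# five one-second samples: procs, memory, swap, block IO, interrupts, context switches
vmstat 1 5

# one batch sample of processes currently doing disk IO (root required)
iotop -b -o -n 1
```

Running these while the JMeter test hammers the box is usually enough to tell whether you are CPU-, memory- or IO-bound.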
Ramp up? app/console does the cache warming!
Now test all your scenarios. You don't need any ramp-up since your favourite framework already does the cache warmup :-)
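For a Symfony 2 application, that warmup is a single console call - a sketch, run from the project root with the stock console options:

```shell
# Pre-build the prod cache so the first benchmarked request
# doesn't pay the container/routing compilation cost.
php app/console cache:warmup --env=prod --no-debug
```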
Application still works
System is stable
Good service quality
Positive outcome
Maybe there is some high load at the beginning, but the system gets stable again. If everything is alright, we probably bought too much hardware.
Now we have a setup for realistic profiling
But that‘s cool, that‘s the perfect starting point for realistic profiling.
Application still works
System is stable
Good service quality
Negative outcome
But what happens if it goes wrong? If we are not able to provide the scalability or responsiveness needed?
Now we have a need for realistic profiling
Now we have to do some profiling anyway :-)
Hey, it‘s already in my IDE!
And on my Server!
Default
By default you do the profiling during development in your IDE. Who has a profiler enabled? On your local or on a close-to-production system? Let's have a closer look.
Single Request in jmeter: 223ms
Single Request in Xdebug: 1408ms
Everything is up to 12 times slower
But hey, it‘s relative performance!
At least that‘s what our benchmarks tell.
We did some benchmarking in different environments, and the slowdown caused by Xdebug was up to 12 times. That's quite a big difference. Now you could say: yeah, but that's just relative.
20 ms database query in a 100 ms request vs
20 ms database query in a 1.200 ms request
20% vs 1.7%
The problem is that this makes it close to impossible to find race conditions, or to measure the influence of external input. See the example: a slow query may account for 20% of the request time in production, or for below 2% in your profiling environment.
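The arithmetic behind the slide as a tiny sketch (`share` is a made-up helper; 20/1200 rounds to about 1.7%):

```shell
# Print a query's share of the total request time as a percentage.
share() { awk -v q="$1" -v total="$2" 'BEGIN { printf "%.1f%%\n", q / total * 100 }'; }

share 20 100    # the 20 ms query in a production-like 100 ms request
share 20 1200   # the same query in a request slowed ~12x by the profiler
```

The query didn't get faster - the profiler just drowned it out, which is why profiler-relative percentages mislead.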
Your PHP profiler lies.
Hey, it‘s the Autoloader!
What does really happen in your application?
So, what does really happen in your application? You can see it here - Xdebug's callgrind-format export. Who knows this one? Obviously most of the time is spent inside the autoloader. That's well known, I know. So, do we know everything that is happening there?
Wall Clock Time
We know that time was spent, but we don't know how the time was spent.
Xdebug measures wall clock time, and that is fine. But that means you only know that time was spent, not how it was spent.
One level deeper
Install Valgrind and all needed debugging symbols
start the jmeter test
run one apache child separately:
valgrind --tool=callgrind /usr/sbin/apache2 -f /etc/apache2/apache2_single.conf -X
So let's have a look at valgrind / callgrind / cachegrind. It's nice to have all debugging symbols installed to see what's happening. Start a single Apache child in Valgrind while the JMeter test is running. Attention: this single request is a lot slower, just like with Xdebug.
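Once Valgrind has written its profile, it can also be summarized on the console with `callgrind_annotate`, which ships with Valgrind (the PID suffix here is a placeholder for whatever file Valgrind actually produced):

```shell
# Text summary of the callgrind profile, hottest functions first;
# KCachegrind can open the same file graphically.
callgrind_annotate callgrind.out.12345 | head -30
```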
Hey, it‘s the Parser!
What does really happen in your application?
Again we see what really happens - now most of the (self) time is spent inside the parser.
APC FTW!
Luckily both issues can be fixed with APC and the ApcUniversalClassLoader
One level deeper
But anyway, there is a longer way to go.
Sysprof
Profiling the entire Linux system, user- and kernelspace
Fast profiler with low footprint. Sysprof tells you about everything that is happening on your CPU, and the performance impact is okay-ish.
root@local:/# sysprof-cli myoutfile
... wait some seconds to gather enough data ...
... stop with ctrl-c when you are done ...
root@local:/# sysprof myoutfile
And that's how it works: while the JMeter test is running, sysprof-cli is started for some time. It's stopped with Ctrl-C and writes the data to the filesystem. This file can be viewed with sysprof itself, which is an X11 application.
18% of the time spent to look at the watch?!
Test run with xdebug loaded
Most of the time is spent within PHP, and that's expected to happen. But half of that time is spent within gettimeofday, a system call that is rather slow, especially when used in 32bit environments. So 18% of the whole CPU time is used to look at the wall clock.
perf
Sampling current calls or events, thousands of samples per second
fast profiler with low footprint
root@local:/# perf record -a -F 10000 sleep 60
... creates 600,000 samples ...
... saving result in perf.data ...
root@local:/# perf report
Because the memory bus is slow ...
War Story 2
Now we have got the toolset to look at our war stories.
Lenovo R61: fast
Enterprise Cloud Cluster: slow
Xdebug profiling
xhprof profiling
iotop, vmstat, ...
Default profiling
Everything is fine, but still slow?
page_fault is the reason?!
sysprof
Reason: XEN memory sharing on Amazon EC2
We saw a lot of page faults, but there was no swapping. So this should not happen?! The cause: _very_ expensive page faults due to memory sharing in XEN. With memory sharing, pages are marked as read_only, every write triggers a page_fault, and this very page_fault was slow in 32bit guests on 64bit hosts.
War Story 1
The first one, already some years ago.
Everything was fine.
Until 10 minutes before the live event.
Then everything was stuck.
You remember? Where we went not only live, but dead.
Xdebug profiling
xhprof profiling
iotop, vmstat, ...
cacti monitoring ...
Default profiling
Everything is fine, but still stuck when it matters?
And we had the same problem - everything looked fine in normal profiling.
50 terminals running top, atop, htop, iotop, mytop etc
Wait for trouble
And we waited, and when the next wave hit us we could see it in mytop.
source: http://www.mysqlfanboy.com/2010/06/mytop/
Suddenly a lot of MySQL queries waiting for a lock.
smart mysql-proxy logging
See https://github.com/patrickallaert/MySQL-Proxy-scripts-for-devs
root@local:/# mysql-proxy -P 192.168.178.32:4400 \
    --admin-username=root --admin-password=mypw \
    --admin-lua-script=/usr/share/mysql-proxy/admin.lua \
    -s /usr/share/mysql-proxy/debug-blind.lua
root@local:/# ab2 -n 50000 -c 500 http://lb.ours.tld/start
12,345678 ms select * from mytable ...
2,123456 ms insert into mytable ...
This creates a mysql-proxy listener on port 4400 that forwards all requests to the local server, logging all queries with their execution time and removing all unneeded whitespace. For the _whole_ system, not for a single process.
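Since the proxy log prefixes every query with its execution time (with a German decimal comma, as in the output above), the slowest queries can be fished out with a short pipe. A sketch over an inlined sample in that log format; `slowest_queries` is a made-up helper:

```shell
# Print logged queries sorted by execution time, slowest first.
slowest_queries() {
  # turn the decimal comma into a dot so sort -n understands the number
  sed 's/,/./' "$1" | sort -rn
}

# sample standing in for the real proxy log
cat > proxy.log <<'EOF'
12,345678 ms select * from mytable where id = 1
2,123456 ms insert into mytable values (2)
104,000001 ms select * from mytable
EOF

slowest_queries proxy.log | head -1
```

In the war story this is exactly how the lock waits surfaced: a handful of queries suddenly took orders of magnitude longer than the rest.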
>500 registering and >1000 logging-in users in the same minute
== Table lockup
(Yep, solution was simple: MyISAM -> InnoDB)
Other war stories:
IBM kernel module screwing IO in concurrency, but hiding any iowait
An in-memory caching layer doing more harm than good
We‘ve got more war stories of this kind, mostly fixed with tools like oprofile, sysprof, vmstat and the like.
Why should I care?
Because:
Today's system architecture is created by developers
DevOps: You are responsible for production, too
You can't fix an operational bug caused by application code otherwise
Now you might ask: why should I care? I am a developer, not a system administrator. Because:
Setup: Symfony CMF sandbox
1400 requests with concurrency 20
What‘s in for you?
A recommended setup (but not very surprising)
[Bar chart, scale 0-5000, with low / average / high bars per setup: pure PHP; with APC; with Xdebug & APC; with APC and New Relic; with APC and XHProf. One visible value: 2698.]
New Relic is too slow for production right now.
Thanks! Take the red pill.
Profiling?
Thanks!
Sysprof, perf, Valgrind/Callgrind, mytop, mysql-proxy, Xdebug