Measuring Front-End Performance - What, When and How?


Post on 17-Jan-2017


Measuring Front-end Performance: What, When and How?

Gareth Hughes, @brassic_lint

What?

Start with the "what": what shall we measure? That raises more questions: what is meaningful? How are our pages performing? What is the user experience?

Akamai study, 2009

http://uk.akamai.com/html/about/press/releases/2009/press_091409.html

47% of consumers expect a web page to load in 2 seconds or less

But what do users *mean* by "load"?

The page is usable?

All objects are loaded?


When the browser wheel stops spinning?

We can't answer that directly.

But we can help to find out: know what you can measure, to ensure you are meeting your users' expectations.

Anatomy of an HTTP request

TTFB(Time to First Byte)

Let's start with the basics: requesting an object over HTTP. These are the basic steps to deliver an object over HTTP, and we can measure all of them. They give an indication of page delivery performance; we can bundle the back-end metrics into TTFB.

Time to glass

HTML → DOM; CSS → CSSOM; DOM + CSSOM → Render Tree → Layout → Paint. JavaScript interacts with both the DOM and the CSSOM. Key events along the way: DOMContentLoaded (DCL) and Render Start.

The HTML page has been downloaded; this is how the rest of the page gets built and displayed.

DOM = document object model

very simplistic model

The browser can work with a partial render tree, so render start may happen before DCL.

The elephant in the room: JavaScript! It blocks DOM construction, and CSSOM construction blocks JavaScript execution! So maybe DCL is a useful metric?

In the browser

http://www.w3.org/TR/navigation-timing/

Once in the browser, there are APIs we can use to collect these, and other metrics

The NavTiming API

Lots of metrics covering navigation, page load + browser events like DCL

http://www.w3.org/TR/navigation-timing/
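As a rough sketch (not from the talk) of turning a Navigation Timing entry into headline numbers: in a browser the entry would come from performance.getEntriesByType("navigation")[0]; the sample object below is hand-made so the arithmetic is visible.

```javascript
// Derive a few headline metrics from a Navigation Timing-style entry.
// All values are in milliseconds, relative to the start of navigation.
function headlineMetrics(nav) {
  return {
    ttfb: nav.responseStart - nav.requestStart,      // Time to First Byte
    domContentLoaded: nav.domContentLoadedEventEnd,  // proxy for "page is usable"
    totalLoad: nav.loadEventEnd                      // everything, incl. deferred content
  };
}

// Hand-made sample entry standing in for the browser-provided one.
const sample = {
  requestStart: 40,
  responseStart: 160,
  domContentLoadedEventEnd: 900,
  loadEventEnd: 2400
};

console.log(headlineMetrics(sample)); // { ttfb: 120, domContentLoaded: 900, totalLoad: 2400 }
```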

In the browser

http://www.w3.org/TR/resource-timing/

The ResourceTiming API: performance metrics for page objects/resources. NB: subject to CORS; cross-origin responses must carry a Timing-Allow-Origin header for full timings to be exposed.
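A hedged sketch of what you might do with those entries: in a browser they would come from performance.getEntriesByType("resource"), but the sample entries below are invented for illustration.

```javascript
// Find the slowest resource per initiator type (script, img, css, ...)
// from an array of ResourceTiming-style entries.
function slowestByType(entries) {
  const worst = {};
  for (const e of entries) {
    if (!worst[e.initiatorType] || e.duration > worst[e.initiatorType].duration) {
      worst[e.initiatorType] = e;
    }
  }
  return worst;
}

// Invented sample entries; durations are in milliseconds.
const entries = [
  { name: "/app.js", initiatorType: "script", duration: 310 },
  { name: "/vendor.js", initiatorType: "script", duration: 480 },
  { name: "/hero.jpg", initiatorType: "img", duration: 650 }
];

console.log(slowestByType(entries).script.name); // "/vendor.js"
```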

In the browser

function myTimings() {
  performance.mark("startTask1");
  doTask1(); // Some developer code
  performance.mark("endTask1");

  performance.mark("startTask2");
  doTask2(); // Some developer code
  performance.mark("endTask2");
}

http://www.w3.org/TR/user-timing/

For ultimate flexibility, the UserTimings API lets you set your own timing marks in JS. The Guardian's first-party JS app is instrumented this way.

Speed Index

Speed Index is a measure of how quickly the visible portions of a page are drawn: visual completeness during page load, an index of how long the page spent incomplete.

Example: two pages, A and B, start and end at the same time. Graphing completeness over time shows that A is more complete more quickly, while B is incomplete for longer, which is a worse UX. The index is calculated from the area above the completeness curve: a larger area means a larger index and a worse UX. There is more detail on the formula online.

Speed Index is used in synthetic testing. It can also be calculated from browser paint events, but that is unreliable and not used commercially.
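The area-above-the-curve calculation described above can be sketched as follows; the sample points are invented, and real tools derive visual completeness from video frames or paint events.

```javascript
// Approximate Speed Index: integrate the area above the visual-completeness
// curve with the trapezoidal rule. A page that stays incomplete for longer
// accumulates more area, i.e. a larger (worse) index.
// Samples are { t: time in ms, vc: visual completeness 0..1 } pairs.
function speedIndex(samples) {
  let area = 0;
  for (let i = 1; i < samples.length; i++) {
    const dt = samples[i].t - samples[i - 1].t;
    const incomplete = 1 - (samples[i].vc + samples[i - 1].vc) / 2;
    area += dt * incomplete;
  }
  return area;
}

// Page A paints most content early; page B stays blank longer.
const pageA = [{ t: 0, vc: 0 }, { t: 500, vc: 0.8 }, { t: 2000, vc: 1 }];
const pageB = [{ t: 0, vc: 0 }, { t: 1500, vc: 0.2 }, { t: 2000, vc: 1 }];

console.log(speedIndex(pageA) < speedIndex(pageB)); // true: A is the better UX
```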

What?

So let's return to the "What?"

There are a huge number of metrics. What can we use to represent UX?

Depends

Here's a starting point: what I use.

Response End / TTFB: how quickly has my server served the base page?
DOM Content Loaded: a good analogy for "page is usable".
Render Start / First Paint: gives us an indication of when the user actually sees something.
Total Page Load: although this includes all third-party and deferred content, it can help get a feel for how well everything is working.
User Timings: a little more work, but lets you instrument the areas important to you.
Speed Index: a great single metric to give a pretty good idea of overall user experience.


When?

Let's look at the "When?": when should we test? Develop, then test and hope?

Requirements → Design → Development → Test → Release / Maintenance

This is an example of a waterfall methodology. When should we measure performance?

It goes without saying: do it in test, and probably in development.

What about requirements? Performance should be an NFR (non-functional requirement).

And monitor performance after release: editors add content, marketing add tags; ensure that users are still getting the optimal experience. But what about during design?

Brad Frost

http://bradfrost.com/blog/post/performance-as-design/

"Good performance is good design"

Brad Frost tells us that good performance is good design.

Many articles on designing for performance

There's a whole book on it by Lara Hogan.

Designing to be fast from the beginning, rather than trying to optimise, will always give a better experience for end-users, saves time in development & test, and makes the life of a developer a heck of a lot easier!

Performance Budgets:
- Define tangible numbers or metrics
- May be defined by an aspiration or by industry standards
- Enforce the performance standards
- Instil a culture of performance in the project team
- Give a mark to measure by
- You probably already have one!
- Start vague, but define early

Performance is everyone's problem

A key way to achieve this is to set performance budgets, with dev and design collaborating on designing a fast site.

PERFORMANCE IS EVERYONE'S PROBLEM

When? At every stage of the lifecycle!

So we come back to the question of When?

At every stage of the lifecycle

But how do we do that?

How?

We know what to measure and when to measure.

But how?

I'll walk you through some of the options, with examples of tools along the way.

Synthetic

Domo arigato, Mr. Roboto

Synthetic testing is often referred to as "robots".

It comes in many forms.

Simple curl-type requests measure the HTTP request; these are also commonly used for availability monitoring.

That doesn't tell us a lot about UX,

but it's easy, and often free or very cheap.

It's better to test using real browsers:
- An emulated browser loads the page at regular intervals, (generally) from an external location. Methodologies vary.
- Test under (relatively) consistent conditions; emulated browsers give control.
- Graph-based portals and waterfall charts.
- Some tools also test from real VMs running desktop (or, in some cases, mobile) browsers, for regular testing as well as single tests.
- Also screenshots, filmstrips and videos.

The key is consistency: stable bandwidth and latency. You can't compare results otherwise.

http://www.webpagetest.org
Free! Real browsers. Global. Scripting. API. Mobile devices. OSS.

WebPagetest is a fantastic resource: it's free, and it tests from real browsers all over the world. You can build scripts to do things like authentication and click-paths. There's an API to run tests and get results; plenty of tools use this to automate measurement in build pipelines and other reporting suites (sitespeed.io). Real mobile devices (Android and iOS), US-based. It's open source and available on GitHub, so you can run your own private instance on your own network, or be up in a few minutes on AWS (pre-built AMIs). A great tool for testing before production, as you can put it anywhere you need!
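As a small illustration of driving that API, the sketch below builds a run-test request. The runtest.php endpoint and the url / k (API key) / f=json parameters are as documented in the public WebPagetest API; the key here is a placeholder, and you would fetch() the resulting URL and poll the returned JSON for results.

```javascript
// Build a WebPagetest "run test" URL. URLSearchParams handles the
// percent-encoding of the page URL for us.
function wptRunUrl(pageUrl, apiKey) {
  const params = new URLSearchParams({ url: pageUrl, k: apiKey, f: "json" });
  return "https://www.webpagetest.org/runtest.php?" + params.toString();
}

console.log(wptRunUrl("https://example.com/", "API_KEY_HERE"));
```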

Real User Monitoring

To know how our site is really performing for end users, we need to get metrics from them. We know browsers provide a mechanism for getting the data; but how do we collect the millions(?) of data points and make sense of them?

Let's start on the RUM. No, not that kind: Real User Monitoring.

Typically, a small JS tag collects the metrics and beacons them to a portal. Some analytics tools (like GA) will also collect basic performance data, but it's usually very basic and heavily sampled.
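A minimal sketch of such a tag, assuming a hypothetical /rum collection endpoint; the payload builder is a pure function, and the browser wiring (navigator.sendBeacon survives page unload) is shown only as a comment.

```javascript
// Turn a Navigation Timing-style entry into a small JSON payload to beacon home.
function rumPayload(nav, path) {
  return JSON.stringify({
    url: path,
    ttfb: nav.responseStart - nav.requestStart,
    dcl: nav.domContentLoadedEventEnd,
    load: nav.loadEventEnd
  });
}

// In a browser, the wiring would be roughly:
//   window.addEventListener("load", () => setTimeout(() => {
//     const nav = performance.getEntriesByType("navigation")[0];
//     navigator.sendBeacon("/rum", rumPayload(nav, location.pathname));
//   }, 0));

// Hand-made sample entry, milliseconds from navigation start.
const sampleNav = {
  requestStart: 40,
  responseStart: 160,
  domContentLoadedEventEnd: 900,
  loadEventEnd: 2400
};
console.log(rumPayload(sampleNav, "/home")); // {"url":"/home","ttfb":120,"dcl":900,"load":2400}
```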

But how do we make sense of all this data? That's the eternal question for RUM. Portals allow you to analyse the avalanche of data, often as averages, percentiles or aggregations. It's valuable, but it takes work. It gives visibility into UX, but further investigation is needed before conclusions can be drawn.

Poor performance in Mexico could be: poor CDN performance in the region, a local connectivity issue, or even a single data point from a user on dial-up ;)

HISTOGRAMS
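A sketch of the aggregations mentioned above (nearest-rank percentile and fixed-width histogram buckets); the load times below are invented, including one dial-up-style outlier, to show why percentiles resist outliers better than averages.

```javascript
// Nearest-rank percentile: sort, then pick the value at the p-th rank.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.floor((p / 100) * sorted.length));
  return sorted[idx];
}

// Count samples into fixed-width buckets keyed by the bucket's lower bound.
function histogram(values, bucketSizeMs) {
  const buckets = {};
  for (const v of values) {
    const b = Math.floor(v / bucketSizeMs) * bucketSizeMs;
    buckets[b] = (buckets[b] || 0) + 1;
  }
  return buckets;
}

// Load times in ms, with one 42-second dial-up outlier.
const loads = [900, 1100, 1200, 1300, 1500, 42000];
console.log(percentile(loads, 50)); // 1300 (the outlier barely moves it)
console.log(histogram(loads, 500)); // { "500": 1, "1000": 3, "1500": 1, "42000": 1 }
```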

sitespeed.io uses WPT and PhantomJS to run performance audits on a site. It can be used internally (it's a CLI tool).

PerfBar (http://wpotools.github.io/perfBar/) surfaces NavTiming data in the browser. Useful on UAT-type environments.

CI plugins: test for performance as part of the CI process.

Other Tools: Sitespeed (CLI, PhantomJS, WPT runner, internal).

Perfbar in UAT or for internal users

CI plugins - fail the build on broken budgets
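A minimal sketch of the kind of check such a plugin might run: compare measured metrics against the team's budget and report breaches, so a build can fail before a regression ships. The metric names and numbers are illustrative, not from the talk.

```javascript
// Compare measured metrics against budget limits; return a list of breaches.
// An empty array means the build can pass.
function checkBudget(budget, measured) {
  const breaches = [];
  for (const [metric, limit] of Object.entries(budget)) {
    if (measured[metric] !== undefined && measured[metric] > limit) {
      breaches.push(`${metric}: ${measured[metric]} > budget ${limit}`);
    }
  }
  return breaches;
}

const budget = { ttfbMs: 500, speedIndex: 3000, pageWeightKb: 1500 };
const measured = { ttfbMs: 420, speedIndex: 3600, pageWeightKb: 1400 };

console.log(checkBudget(budget, measured)); // [ 'speedIndex: 3600 > budget 3000' ]
```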

Reporting Data

So what are we going to do with all this data we're collecting?

Use it to optimise the site: find areas to improve. RUM data might show situational optimisations.

But what else can we do with it?

SpeedCurve

https://speedcurve.com

Speedcurve offers a number of high-level visualisations.

The example here shows the number of images on the homepage, the marked performance budget, and markers showing deployments.

It can be used to publicise site performance. I know teams that display it around their offices to make sure everyone knows what's going on.

Performance is everyone's problem.

Custom Dashboards

Graphite / Splunk

With API access to the data, you can build custom dashboards: Graphite (with Grafana as a front-end) on the left, and Splunk on the right.

This gives the flexibility to integrate performance data with business needs and other data sources like analytics, combining synthetic and RUM data.

Build your own story. Display data in a way that's meaningful to everyone.

How?
Synthetic: external, controlled testing.
Real User Monitoring: browser-based reporting of real users' experience.
Don't choose! Both synthetic and RUM provide valuable insight into performance and should be seen as complementary; either alone gives a narrow view.
Report: display data on dashboards; make it visible and relevant.

So, how?

Summary
What: decide which metrics are relevant to user experience.
When: at every stage of the lifecycle.
How: use tools and reports to make the data relevant and actionable.

Ian Malpass, Etsy,

https://codeascraft.com/2011/02/15/measure-anything-measure-everything/

"If it moves, we track it. Sometimes we'll draw a graph of something that isn't moving yet, just in case it decides to make a run for it."

I'll leave you with this quote from a 2011 blog entry by Ian Malpass at Etsy; this is their philosophy.

However, it's important to remember to focus on what's important to you while collecting all the data you possibly can. You never know when it may be useful!


Thank you!

http://www.slideshare.net/GarethHughes3

Gareth Hughes, @brassic_lint