
OPEN INTERNET TRANSPARENCY
Informal technical review notes of FCC GN Docket No. 14-28
4th June 2016

MARTIN GEDDES
FOUNDER & PRINCIPAL
MARTIN GEDDES CONSULTING LTD

About Martin Geddes


I am a computer scientist, telecoms expert, writer and consultant.

I collaborate with other leading practitioners in the communications industry. Together we create game-changing new ideas, technologies and businesses.

Introduction


This is an informal technical assessment of the US Federal Communications Commission's planned broadband measurement regime. It has not examined all the relevant documents; there may be (significant!) errors and omissions.

The purpose is to indicate where there are potential issues with the approach taken. It's more right than wrong, so take it as a general barometer of progress (or lack thereof).

Context


It would be easy to interpret this document as an attack on the FCC, but that is not its intention, and it would also be an unwise interpretation.

Broadband is a new technology that has not yet matured. The underlying science is still being uncovered. As such, all regulators are struggling with similar issues. What exactly is the service that ISPs deliver? How should that be described? In what ways can the service be objectively measured?

The FCC has taken a lead in attempting to answer these questions. That the answers are less than wholly satisfactory needs to be understood in the context of a new and developing industry.

Context


This document was written shortly before BEREC, the association of European regulators, issued its guidelines on implementing the new European law on broadband transparency.

It is premature and unfair to evaluate the FCC's effort without understanding how well others have answered the same core questions. The problems are systemic; any failure or blame is industry-wide.

The evaluation criteria in this document are largely drawn from and inspired by the Ofcom technical report “A Study of Traffic Management Detection Methods & Tools”, June 2015.

Key terms


• ‘Intentional semantics’ is the desired outcome of the measurement policy

• ‘Denotational semantics’ is how the service is described on its ‘label’ to users

• ‘Operational semantics’ is what it actually does, how that is measured, and how that compares to what is on the label

Contents


Part 1: Technical notes
• What the FCC says: intentional, denotational and operational semantics
• Comments on each
• General technical observations

Part 2: Assessment of measurement system
• Properties of a good regulatory measurement system
• Scoring of each
• The bottom line: how well is the FCC doing?

Part 1: Technical notes

FCC intentional semantics (‘policy’)


1. Aims to support “open Internet principles”, “address open Internet violations”, ensure “harmful practices will not occur”, limit “harmful conduct”.

2. Serves to “enable…consumers to make informed choices”.
3. Support “content, application, service, and device providers to develop, market, and maintain Internet offerings”.
4. Capture service variation related to “operational areas” with “distinctive set of network performance metrics”.

Notes on intentional semantics


• Primary goal is political: to support “neutrality” dogma; end user experience is secondary.

• Does not discuss or capture the choices that the user might wish to exercise; their (diverse) intentional semantics are ignored! Hence cannot be a measure of fitness-for-purpose and the regulation is unfit-for-purpose.

• Wider and long-term goals of commerce and society (e.g. IoT) are not considered.

• Creates the wrong kind of user entitlement, which is opposed to the social remit of the FCC for affordability and sustainability.

Notes on intentional semantics


• Content providers are enjoying a best effort service; they have no entitlement to anything when not paying for delivery. Creates a false implication of contract for a quality or capacity floor.

• The service variation requirement makes sense, but ignores the vast variability that exists in the system that may subsume this data.

FCC denotational semantics (‘label’)


1. Why? Seeks “accurate information” on “network management practices, performance”.

2. What? “…disclose expected and actual download and upload speeds, latency, and packet loss” such that “expected network performance disclosed for a geographic area should not exceed actual network performance in that geographic area”

3. How? Speed as “median speeds and percentiles of speed [as a range]”
4. How? Latency as “median latency or a range of actual latencies”
5. How? Loss as “average packet loss”
6. How? “provide actual and expected performance in comparable formats”
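To make items 2–5 concrete, below is a minimal sketch (Python, with made-up sample values; nothing here is prescribed by the Order) of how those ‘label’ statistics could be computed from per-test samples, together with the “expected should not exceed actual” check.

from statistics import median

# Hypothetical per-test samples for one geographic area (illustrative only).
download_mbps = [48.1, 50.3, 22.7, 49.8, 51.0, 47.5, 30.2, 50.6]
latency_ms = [18, 21, 19, 95, 20, 22, 18, 24]
loss_pct = [0.0, 0.1, 0.0, 2.5, 0.0, 0.1, 0.0, 0.2]

def percentile(samples, p):
    # Nearest-rank percentile; adequate for an illustration.
    ordered = sorted(samples)
    return ordered[round(p / 100 * (len(ordered) - 1))]

label = {
    "median_speed_mbps": median(download_mbps),
    "speed_range_mbps": (percentile(download_mbps, 10), percentile(download_mbps, 90)),
    "median_latency_ms": median(latency_ms),
    "average_loss_pct": sum(loss_pct) / len(loss_pct),
}

advertised_speed_mbps = 50.0  # the 'expected' figure disclosed for the area
print(label)
# The Order's rule: expected performance should not exceed actual performance.
print("expected within actual:", advertised_speed_mbps <= label["median_speed_mbps"])

Even in this toy data the median (roughly 49 Mbps) hides the two slow tests and the 95 ms latency spike, which is the theme of the notes that follow.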

Notes on denotational semantics


• Doesn’t capture that network idleness (quantity) is being used to manage quality; a “fast” network may need to stay idle to work! (So consumer choice is distorted by a false impression.)

• Fails to see the coupling and trade between loss and delay, hence places itself at odds with the two degrees of freedom (load, loss, delay): optimise delay and you pessimise loss, and vice versa. (Both this and the idleness point above are illustrated in the sketch at the end of this slide.)

• No consideration of what “speed” is, nor any definition of a “speed test”.
• Doesn’t separate out what is under ISP control (architecture; scheduling) from other factors (e.g. how rural an area is, hence longer DSL lines and lower speeds).
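The two points above about idleness and the loss/delay trade can be illustrated with a toy discrete-time queue; this is a sketch under assumed parameters, not a model of any real access network.

import random

def run_queue(load, buffer_size, slots=200_000, seed=1):
    # Toy queue: at most one Bernoulli arrival and one departure per time slot.
    random.seed(seed)
    queue = arrived = dropped = 0
    backlog_sum = 0  # queue length seen by each accepted arrival (a rough delay proxy)
    for _ in range(slots):
        if random.random() < load:
            arrived += 1
            if queue < buffer_size:
                backlog_sum += queue
                queue += 1
            else:
                dropped += 1
        if queue:  # serve one packet per slot
            queue -= 1
    accepted = arrived - dropped
    return backlog_sum / max(accepted, 1), dropped / max(arrived, 1)

for load in (0.5, 0.8, 0.95, 0.99):   # how much headroom (idleness) is left
    for buf in (10, 100):              # buffer size trades loss against delay
        wait, loss = run_queue(load, buf)
        print(f"load={load:.2f} buffer={buf:3d}  wait={wait:6.1f} slots  loss={loss:.3%}")

At 50% load both delay and loss are negligible; near 99% load the small buffer drops packets while the large buffer queues them instead. The headline 'speed' of the link is identical in every case, which is why a quantity-only label misleads.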

Notes on denotational semantics


• What is ‘success’ in these metrics? Is less always better? What about other factors like variability or distance? Tail of loss/delay?

• No consideration of relationship between subjective customer experience, objective user experience, service quality and network performance. In particular, fails to capture need to make bad experiences rare, not merely good ones possible.

• There is a plethora of competing measurement approaches. Ambiguous as to where the measure should be made (e.g. L2, L3, or L7?).

• False assumption that comparing upload/download speed and network management practices will deliver a meaningful comparison.

Notes on denotational semantics


• Burstiness is far more important than averages. Fails to capture this data.

• Is the loss induced by TCP’s own behaviour included in the loss metric or not? If it is, then you've created an impossible situation; a fight between the protocols and measurement system.

• The use of measures of “central tendency” is intrinsically wrong when measuring systems where small variations in operational properties have large impacts. Creates perverse incentives. (See the sketch at the end of this slide.)

• Median speed focus sets up a big conflict between the stochastics (and their emergent statistics) and the “lawgeneers”.
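A minimal sketch of the central-tendency objection, using two made-up latency traces that share the same median but differ completely in their tails:

from statistics import median, quantiles

steady = [20 + (i % 5) for i in range(1000)]                         # 20-24 ms, no spikes
spiky = [400 if i % 50 == 0 else 20 + (i % 5) for i in range(1000)]  # ~2% of samples at 400 ms

for name, trace in (("steady", steady), ("spiky", spiky)):
    p99 = quantiles(trace, n=100)[98]  # 99th percentile
    print(f"{name}: median={median(trace)} ms  p99={p99:.0f} ms")

Both traces report a median of 22 ms, so a median-based label cannot tell a service that works from one that stalls regularly; only a tail statistic (or the distribution itself) can.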

Notes on denotational semantics


• Ability to make informed choices using this data is never tested and validated with actual consumers.

• Focus on bulk delivery of large data sets skews measurement to vocal subset of users and content providers (e.g. video on demand).

• Omits to define the end points of measurements, which is a major factor in a country the size of the USA. (Think: Hawaii.)

FCC operational semantics (‘measured’)


1. “disclose actual performance” … “based on internal testing; consumer speed test data; or other data regarding network performance, including reliable, relevant data from third-party sources.”

2. “be reasonably related to the performance the consumer would likely experience”. To this end “The [measured] routes…should…accurately represent the actual network performance…”

3. Capture performance during “times of peak usage” over a “representative sampling of routes” in the “geographic area in which the consumer is purchasing service”

4. Using “measurement clients in broadband modems” or equivalent in network

Notes on operational semantics


• “actual” never defined (which points, how often, to where, for what, to what fidelity, etc.)

• So many standard methods to choose from! No comparability.
• “reasonably related” -- what is that supposed to mean?
• Routes are dynamic, and packets can take multiple routes. When those routes change from connection to connection between the same endpoints, how should the measures be weighted by the traffic pattern? (One plausible answer is sketched at the end of this slide.)
• “Peak usage” – how long? Falsely presumes busy equals QoE risk. Plus the causality is backwards; frequent QoE risk implies the network is busy.
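As an illustration of the weighting question above, here is a sketch of one plausible answer the Order could have given, namely weighting each route's result by the share of traffic it actually carried; the route names and figures are hypothetical.

routes = [
    {"name": "route-A", "traffic_share": 0.70, "median_latency_ms": 25},
    {"name": "route-B", "traffic_share": 0.25, "median_latency_ms": 60},
    {"name": "route-C", "traffic_share": 0.05, "median_latency_ms": 180},
]

weighted = sum(r["traffic_share"] * r["median_latency_ms"] for r in routes)
unweighted = sum(r["median_latency_ms"] for r in routes) / len(routes)
print(f"traffic-weighted latency = {weighted:.1f} ms, simple average = {unweighted:.1f} ms")

The two figures differ by roughly a factor of two (and averaging per-route medians is itself questionable), which is exactly why the Order needs to say which aggregation it intends.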

Notes on operational semantics


• Reliable, consistent and affordable peak time performance measurement is not achievable in this framework.

• To be useful, the metric has to express the likely experience of me as an end user, not of a mythical average. The data being gathered doesn’t have that property.

• Sets up a self-deployed mass denial of service attack. The costs of this measurement approach (on the network infrastructure) are enormous, and there is no analysis of how the measurement system would work or scale.
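A back-of-envelope sketch of the scaling concern; every figure below is an assumption for illustration, none comes from the Order.

# If every subscriber's modem runs one full-rate speed test in the peak hour,
# the test traffic alone is comparable to normal peak demand.
subscribers = 1_000_000   # subscribers on one hypothetical ISP's network
test_rate_mbps = 50       # each test saturates the access line
test_duration_s = 30      # one test per modem during the peak hour
peak_window_s = 3600

total_test_bits = test_rate_mbps * 1e6 * test_duration_s * subscribers
avg_added_load_gbps = total_test_bits / peak_window_s / 1e9
print(f"extra load from testing alone = {avg_added_load_gbps:.0f} Gbit/s over the peak hour")

Under these assumptions the testing adds roughly 400 Gbit/s of load, on top of whatever the peak hour already has to carry.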

Notes on operational semantics


• DSL is likely to outperform cable for “stability over time”, but that is not being reported, so there is a basic consumer choice being suppressed.

• Does not capture the fidelity of geographical reporting. This has particular bearing when comparing DSL with Cable, since they have different geographic variability.

General technical observations


1. No concept of stationarity as a demand requirement or supply property.
2. No apparent awareness of emergence; assumes performance is intentional. Calls into question the FCC’s technical competence.
3. No concept of multiple explicit or implicit classes of service. Hence forces a sub-optimal delivery and business model on the providers; increases input costs and hence the overall cost to consumers.
4. No separation of responsibilities, or consideration of how resolution occurs. Might even create opportunities for undesirable behaviour, e.g. key content providers (such as VoD providers) engaging in predatory manipulation of results to punish network providers.

General technical observations


5. Creates new performance arbitrage that can be used to ‘game’ the measurement system.

6. Ignores CDNs and other computational elements.
7. Conflates network access and peering arrangements into one object.
8. No concept of load limit on performance contract.
9. Uses averages (and there is no quality in averages).
10. Ignores CPE variation (hardware and software) as confounding factor.
11. Never defines “performance”.
12. No separation of in-building and network issues.
13. Never defines service boundaries (e.g. wholesale), either horizontal or vertical.

General technical observations


14. Doesn’t really capture what QoE intention is (exceptional experiences possible? bad experiences rare?)

15. “few variations in actual BIAS performance across a BIAS provider’s service area” – not necessarily true; this QoE data is not generally available.

Part 2: Assessment of system

Properties of a good regulatory measurement system


1. Technically relatable to fitness-for-purpose for end user use
2. Easy to understand by consumers
3. Able to isolate problems and allocate cause/blame
4. Auditable evidence chain of (non)compliance
5. Non-intrusive to collect
6. Comparable across all providers and bearer technologies
7. Clear technical definition of service description and operation
8. Cost-effective to run
9. Non-proprietary
10. Sound scientific basis

Technically relatable to fitness-for-purpose for end user use


FCC Score: 5/10

Why?
Positive:
• Packet loss and delay included; weak proxy for capacity and QoE (up to c. 10 Mbps)
Negative:
• Too focused on describing the best case, not the worst case. Doesn’t capture the tail or the variation (size of “peak period”, non-stationarity)
• Falls well short of being a strong QoE proxy. Likely that reported results and actual user experience will not tally; a legitimacy issue for the FCC


Easy to understand by consumers


FCC Score: 3/10

Why?
Positive:
• Speed is simple to understand
Negative:
• Can’t relate data to key applications you want to use
• Comparability is limited, especially with respect to quality


Able to isolate problems and allocate cause/blame


FCC Score: 0/10

Why?
• Not even considered; implicitly assumed that somehow this will be obvious.


Auditable evidence chain of (non)compliance


FCC Score: 0/10

Why?
• Not even considered as a requirement, hence unenforceable.


Non-intrusive to collect


FCC Score: 3/10

Why?
Positive:
• Can reuse existing network metrics
Negative:
• Requirement for peak period speed tests is a self-induced denial of service attack.


Comparable across all providers and bearer technologies


FCC Score: 3/10

Why?
Positive:
• General framework for comparability
Negative:
• So many woolly definitions, options and sources of variability that the reality is going to be a mess. The approach is biased towards the latter when comparing a "physical environment rate limited but uncontended last mile" (DSL) with a "higher peak rate but variably contended last mile" (DOCSIS).
• Does not capture how often you get your peak speed; out of scope


Clear technical definition of service description and operation


FCC Score: 3/10

Why?
Positive:
• Captures many of the essential issues in managing service definition and variability.
Negative:
• Fails to grasp the undefined nature of “best effort” broadband; no real idea of a quality floor or how to go about defining and enforcing one.


Cost-effective to run


FCC Score: 1/10

Why?
Negative:
• New measurement systems required for many ISPs.
• Speed tests will absorb all network resources. Will force certain ISPs (e.g. WISPs) out of business, as the measurement approach is fatal to their ability to deliver a consistent service. This will reduce consumer choice.
• The incentive is for unsustainably idle networks. Favours certain geographies and incumbents, reducing consumer choice.


Non-proprietary


FCC Score: 6/10

Why?
Positive:
• Uses well-known concepts.
Negative:
• Loads of speed tests which are dynamically updated; comparability means adoption of a single proprietary vendor standard.
• Fails to say what to measure, as there are so many feasible measurements. ISPs will want the ones closest to the technology, which may be the opposite of what users want.


Sound scientific basis


FCC Score: 3/10

Why?
Positive:
• Considers many relevant technical issues.
Negative:
• No framework for relating UX to service quality or network performance.
• No framework to consider service semantics.
• Many technical holes; no resource model defined.



Criterion | FCC | BEREC
Technically relatable to fitness-for-purpose for end user use | 5 | TBA
Easy to understand by consumers | 3 | TBA
Able to isolate problems and allocate cause/blame | 0 | TBA
Auditable evidence chain of (non)compliance | 0 | TBA
Non-intrusive to collect | 3 | TBA
Comparable across all providers and bearer technologies | 3 | TBA
Clear technical definition of service description and operation | 3 | TBA
Cost-effective to run | 1 | TBA
Non-proprietary | 6 | TBA
Sound scientific basis | 3 | TBA
OVERALL SCORE | 27/100 | ?
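For clarity, the overall figure is simply the sum of the ten criteria above, each scored out of 10:

scores = [5, 3, 0, 0, 3, 3, 3, 1, 6, 3]  # FCC column, in the order listed above
print(f"FCC overall: {sum(scores)}/{10 * len(scores)}")  # -> FCC overall: 27/100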

Bottom line


1. The FCC has its heart in the right place: a transparent marketplace.
2. Sadly, the FCC’s head is not screwed on: it fundamentally fails to grasp the statistically multiplexed nature of broadband and the existence of a trading space. Misclassifies broadband as “jittery lossy circuits”: a mono-service view, ‘peak hour’, no idea of the degrees of freedom.

3. The data is virtually unusable by the public. Traffic management disclosure is irrelevant. Performance metrics are hard to interpret except by experts and offer limited data. Opens up the risk of misinterpretation and conflicting subjective views of what is meant to be objective data.

4. Misses need for predictability entirely. As a result the data is biased towards some bearers (e.g. DOCSIS) and away from others (e.g. DSL).

Bottom line


5. The scaling costs of speed tests are prohibitive. May put WISPs out of business.

6. Enforcement has not been considered. No regime is offered for isolation of issues or proof of cause.

7. Sets up inappropriate market incentives. Damages competition, adversely affects low-density (especially rural) ISPs. Encourages overload of network by content providers.


Thank you

The secret of success is to know something nobody else knows.
― Aristotle Onassis

Martin [email protected]