1.258J Lecture 26
Service Reliability Measurement using Oyster Data
- A Framework for the London Underground
David L. Uniman
MIT – TfL
January 2009
Introduction
• Research Objective: To develop a framework for quantifying reliability from the perspective of passengers, using Oyster data, that is useful for improving service quality on the Underground.
• How reliable is the Underground? - how do we think about reliability?
- how do we quantify it?
- how do we understand its causes?
- how do we improve it?
Background – What is Service Reliability?
• Reliability is the degree of predictability of service attributes, including comfort, safety, and especially travel times.
- Passengers are concerned not only with average travel times, but also with the certainty of on-time arrival
[Figure: distribution of arrival times (% of trips vs. time of day), marking T departure, T' departure, T arrival, T desired arrival, the average travel time, the "buffer", and the probability of late arrival. Adapted from Abkowitz (1978)]
Framework - Reliability Buffer Time Metric
• Criteria for a reliability measure:
- Representative of the passenger experience
- Straightforward to estimate and interpret
- Useful and applicable – compatible with the JTM
• Propose the following measure: Reliability Buffer Time (RBT) Metric “The amount of time above the typical duration of a journey required to arrive on-time at one’s destination with 95% certainty”
RBT = (95th percentile – 50th percentile) travel time, computed by O-D pair, AM Peak, LUL Period; sample size ≥ 20
[Figure: actual travel time distribution (no. of observations vs. travel time), with the 50th and 95th percentiles marked; the RBT is the gap between them]
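As a minimal sketch of the metric (the helper names and sample travel times below are illustrative, not TfL data), the RBT for one O-D pair can be computed directly from a sample of Oyster gate-to-gate journey times:

```python
def percentile(values, p):
    """Linear-interpolation percentile (NumPy's default convention)."""
    xs = sorted(values)
    k = (len(xs) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

def reliability_buffer_time(travel_times_min, min_sample=20):
    """RBT: 95th minus 50th percentile of journey travel times.

    Returns None below the framework's minimum sample size
    (>= 20 journeys per O-D pair, AM Peak, LUL Period).
    """
    if len(travel_times_min) < min_sample:
        return None
    return percentile(travel_times_min, 95) - percentile(travel_times_min, 50)

# Illustrative AM Peak journey times (minutes) for one O-D pair
times = [12, 13, 13, 14, 14, 14, 15, 15, 15, 15,
         16, 16, 16, 17, 17, 18, 18, 19, 22, 30]
rbt = reliability_buffer_time(times)  # 95th perc. 22.4 - median 15.5 = 6.9 min
```

Because the median is robust but the 95th percentile is driven by the slowest journeys, a handful of badly delayed trips can inflate the RBT even when typical journeys are unchanged.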
Framework – Separating Causes of Unreliability
• Two types of factors that influence reliability and affect the applicability & usefulness of the measure:
1. Chan (2007) found evidence that service characteristics affect travel time variability – with implications for aggregation (e.g. a line-level measure)
2. In this study, reliability was found to be sensitive to the performance of a few (3-4) days each Period that showed large, non-recurring delays (believed to be incident-related)
[Figure: Waterloo to Piccadilly Circus (Bakerloo NB), February, AM Peak. Left: 95th-percentile travel time [min] by weekday, 4-Feb to 2-Mar. Right: comparison of normalized travel time distributions (% of journeys vs. travel time, 5-25 min), contrasting February 5th with February 14th]
Framework – Classification of Performance
• Propose to classify performance into two categories along two dimensions – degree of recurrence and magnitude of delays – and relate these to reliability factors and to strategies for addressing them

Recurrent Reliability
- Day-to-day (systematic) performance
- Includes the effects of service characteristics and other repeatable factors (e.g. demand)
- Can be considered the Underground's potential reliability under "typical" conditions

Incident-Related Reliability
- Unpredictable or random (unsystematic) delays
- Unreliability caused by severe disruptions, additional to inherent levels of travel time variation
- Together with performance under "typical" conditions, makes up the actual passenger experience
• Methodology – use a classification approach based on a Stepwise Regression to automate process
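The study automates this with a stepwise-regression-based classification; as a simplified stand-in for the idea, one can iteratively flag days whose daily 95th-percentile travel time is an outlier relative to the remaining days (the z-score rule, thresholds, and sample data below are illustrative assumptions, not the actual method):

```python
from statistics import mean, stdev

def classify_days(daily_95th, z_thresh=3.0):
    """Split days into 'recurrent' and 'incident' groups.

    Simplified stand-in for the stepwise-regression classification:
    iteratively flag the worst remaining day while its daily
    95th-percentile travel time sits more than z_thresh standard
    deviations above the mean of the other remaining days.
    """
    recurrent = dict(daily_95th)   # day -> daily 95th-percentile time [min]
    incident = {}
    while len(recurrent) > 3:
        worst = max(recurrent, key=recurrent.get)
        rest = [v for d, v in recurrent.items() if d != worst]
        spread = stdev(rest)
        if spread == 0 or (recurrent[worst] - mean(rest)) / spread <= z_thresh:
            break
        incident[worst] = recurrent.pop(worst)
    return recurrent, incident

# Illustrative daily 95th-percentile travel times [min] for one O-D pair
daily = {'4-Feb': 12, '5-Feb': 13, '6-Feb': 12, '7-Feb': 25,
         '8-Feb': 13, '11-Feb': 12, '12-Feb': 14, '13-Feb': 13,
         '14-Feb': 30, '15-Feb': 12}
recurrent, incident = classify_days(daily)  # flags 7-Feb and 14-Feb
```

With these numbers the two severely delayed days are flagged as incident-related while an ordinary bad day (12-Feb at 14 min) stays in the recurrent set, mirroring the 3-4 disrupted days per Period observed in the study.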
Framework – Classification of Performance
− Bakerloo Line Example: Waterloo to Piccadilly Circus – AM Peak, Feb. 2007
[Figure: 95th-percentile travel time [min] by weekday, 4-Feb to 2-Mar]
T.T._Actual = P[No Disruption] × T.T._Recurrent + P[Disruption] × T.T._Incident-related
[Figure: travel time distributions (% of journeys vs. travel time, 0-30 min) for the actual sample, the recurrent days (P[No Disruption] = 17/20 days = 85%), and the incident-related days (P[Disruption] = 3/20 days = 15%)]
RBT_Actual = 4 min; RBT_Recurrent = 3 min; RBT_Incident-related = 10 min
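The decomposition above can be sketched by pooling journeys from days already classified as recurrent or incident-related and computing the RBT for each pool (the day grouping and travel times below are illustrative, not the Bakerloo data):

```python
def pct(values, p):
    """Linear-interpolation percentile of a sample."""
    xs = sorted(values)
    k = (len(xs) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

def rbt(times):
    """Reliability Buffer Time: 95th minus 50th percentile."""
    return pct(times, 95) - pct(times, 50)

# Illustrative journey times [min] pooled by classified day type
by_day = {
    'recurrent': [12, 12, 13, 13, 13, 14, 14, 14, 15, 15, 15, 16, 16, 17],
    'incident':  [14, 16, 19, 23, 26, 30],
}
all_times = by_day['recurrent'] + by_day['incident']

p_disruption = len(by_day['incident']) / len(all_times)  # 0.3
rbt_actual = rbt(all_times)                # pooled experience
rbt_recurrent = rbt(by_day['recurrent'])   # "typical"-conditions baseline
rbt_incident = rbt(by_day['incident'])     # disruption-day distribution
```

As in the Bakerloo example, the recurrent pool yields the smallest buffer and the incident pool the largest; the actual RBT reflects the weighted mix of the two.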
Framework – Validation with Incident Log data
• Validation of non-recurrent performance with Incident Log data (from NACHs system) confirmed Incident-related disruptions as the primary cause
[Figure: Brixton to Oxford Circus (Victoria NB), AM Peak, February 7 (left) and February 14 (right). Top: travel time distributions (% of journeys vs. travel time, 10-40 min). Bottom: individual journey travel times [min] by departure time, 7:00-10:00 AM]
Date         Cause Code      Result                   Indicative NAX's  Incident Start  Incident End
February 7   Fleet           Train Withdrawal         2.6124            7:02am          8:20am
February 7   Customer - PEA  Train Delay              3.5756            9:16am          9:19am
February 14  Customer - PEA  Train Delay              9.8803            8:03am          8:06am
February 14  Signals         Train Delay              47.8043           9:05am          9:16am
February 14  Signals         Partial Line Suspension  45.4703           9:57am          11:19am
Excess Reliability Buffer Time Metric
• Using these two performance categories, we can extend the reliability measure by comparing actual performance with a baseline value
• Propose the following: Excess Reliability Buffer Time (ERBT) Metric “The amount of buffer time that passengers need to allow for to arrive on-time with 95% certainty, in excess of what it would have been under disruption-free conditions.”
ERBT = (RBT_Actual – RBT_Recurrent), computed by O-D pair, AM Peak, LUL Period; sample size ≥ 20, cumulative baseline
[Figure: actual (I) and recurrent (II) travel time distributions (no. of observations vs. travel time), with the 50th and 95th percentiles marked for each; the gap between the two 95th percentiles is the ERBT, and the journeys falling between them are the excess unreliable journeys]
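As a one-line sketch, using the RBT values from the Bakerloo example on the previous slide (4 min actual, 3 min recurrent):

```python
def excess_rbt(rbt_actual_min, rbt_recurrent_min):
    """ERBT: buffer time passengers need beyond disruption-free conditions."""
    return rbt_actual_min - rbt_recurrent_min

# Bakerloo example: RBT_Actual = 4 min, RBT_Recurrent = 3 min
erbt = excess_rbt(4.0, 3.0)  # 1 minute of excess buffer time
```

Because the recurrent baseline is itself estimated from the same Oyster data, the ERBT isolates the part of the buffer attributable to incidents rather than to inherent day-to-day variation.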
Application # 1 – JTM Reliability Addition
• Use the measure for routine monitoring and evaluation of service quality – propose to include it within the JTM as an additional component.
• The RBT form is compatible with the JTM – units (min, pax-min), aggregation (Period, AM Peak, O-D), estimation (Actual, Scheduled, Excess & Weighted)
TPT | AEI | PWT | IVTT | C&I | RBT
• Apply the RBT measure to the Victoria Line – AM Peak, Feb. & Nov. 2007
[Figure: stacked bars of baseline RBT (B_RBT) and excess RBT (E_RBT) [min] against the median travel time of 16.71 min. February: 7.01 min total (4.93 B_RBT + 2.08 E_RBT); November: 8.55 min total (4.93 B_RBT + 3.62 E_RBT)]
Application # 1 – JTM Reliability Addition
• Actual and Weighted RBT value estimation
• Contribution to service quality (i.e. perceived performance)
[Figure: comparison of actual Reliability Buffer Time and median journey time, Victoria Line, AM Peak, Feb/Nov 2007 – median 16.71 min; RBT 7.01 min (February) and 8.55 min (November)]
• Compare the contribution of RBT to other JTM components through VOT – Feb. 2007
[Figure: three pie charts of JTM component shares (AEI, PWT, OTT, CLRS, RBT): unweighted with VOT_RBT = 1.0; unweighted JTM proportions with VOT_RBT = 1.0; and weighted JTM proportions with VOT_RBT = 0.6. Totals may sum to 101% due to rounding]
Application # 2 – Journey Planner Reliability Addition
• Better information reduces uncertainty by closing the gap between expectations and reality – improving the reliability of the service as experienced
− Propose providing more COMPLETE information through the Journey Planner
SIMPLE EXAMPLE: David’s morning commute – Bow Road to St. James’ Park
− Probability of on-time arrival using the Journey Planner time = 0.02
− Access/egress and wait times are not given … passengers must "guess" 17% of the trip
− The Journey Planner time must be increased by a FACTOR of 1.64 to be 95% reliable
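The two quantities above – the probability of on-time arrival and the inflation factor – can be sketched from the observed Oyster distribution for the O-D pair (the journey times and the 25-minute planner estimate below are illustrative, so the outputs differ from the slide's 0.02 and 1.64):

```python
def on_time_probability(oyster_times_min, planner_time_min):
    """Fraction of observed Oyster journeys completed within the planner's estimate."""
    return sum(1 for t in oyster_times_min if t <= planner_time_min) / len(oyster_times_min)

def reliability_factor(oyster_times_min, planner_time_min, p=95):
    """Factor the planner time must be multiplied by to be p% reliable."""
    xs = sorted(oyster_times_min)
    k = (len(xs) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(xs) - 1)
    p_perc = xs[lo] + (xs[hi] - xs[lo]) * (k - lo)  # pth-percentile time
    return p_perc / planner_time_min

# Illustrative Oyster gate-to-gate times [min] vs. a 25-minute planner estimate
times = [26, 28, 29, 30, 31, 31, 32, 33, 34, 35,
         36, 37, 38, 39, 40, 41, 42, 44, 47, 55]
prob = on_time_probability(times, 25)   # 0.0 - the planner underpredicts
factor = reliability_factor(times, 25)  # ~1.9 with these sample times
```

The on-time probability is simply the empirical CDF of Oyster times evaluated at the planner's estimate, which is what the cumulative-probability figure on this slide reads off graphically.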
[Figure: cumulative probability vs. Oyster travel time [min], 10-60 min, marking the Journey Planner time, the Oyster median, and the Oyster 95th percentile]
Courtesy of Transport for London. Used with permission.
• Assessment: Journey Planner information is INCOMPLETE in two ways:
1. The Journey Planner consistently underpredicts Oyster journey times – possibly AET & PWT – leaving around 30-50% of the journey to chance
2. Expected journey times are not helpful for passengers concerned with on-time arrival (e.g. commuters)
[Figure: difference in travel times, Oyster data vs. Journey Planner, 50 largest O-Ds, AM Peak, Nov. 2007. Left: histogram of (Oyster median travel time – Journey Planner travel time) [min]. Right: histogram of the probability of on-time arrival using the Journey Planner time, 0-50%]
[Figure: difference in travel times, Oyster data vs. Journey Planner, 50 largest O-Ds, AM Peak, Nov. 2007: histogram of the ratio (95th percentile travel time / Journey Planner time), 0-7]
Application # 2 – Journey Planner Reliability Addition
• Possible representation of new journey information:
SIMPLE EXAMPLE: David’s morning commute – Bow Road to St. James’ Park
Choose to: Depart at… / Arrive by…

Route  Depart  Expected Arrival  Latest Arrival  Duration (up to)  Interchanges
1      08:27   08:57             09:00           00:30 (00:41)     View
2      08:19   08:49             09:11           00:30 (00:41)     View
Courtesy of Transport for London. Used with permission.
Conclusions & Recommendations
1. Reliability is an important part of service quality, alongside average performance, and should be accounted for explicitly.
✓ Monitor and evaluate reliability through the JTM extension
2. Incidents have a large impact on service quality through unreliability, which may be underestimated by a focus on average performance.
✓ Use Oyster data and the Reliability Framework to improve monitoring and understanding through NACHs (measurement vs. estimate)
3. The impacts of unreliability on passengers can be mitigated through better information.
✓ Update travel information and include a reliability alternative in the Journey Planner
Conclusions & Recommendations
4. In order to manage performance, we must first be able to measure it.
✓ The framework contributes to making this possible, and sheds light on some of the broader questions…
• How reliable is the Underground? - how do we think about reliability?
- how do we quantify it?
- how do we understand its causes?
- how do we improve it?
Thank You
• Special thanks to the people at TfL and LUL who made this research possible and a memorable experience
MIT OpenCourseWare
http://ocw.mit.edu
1.258J / 11.541J / ESD.226J Public Transportation Systems
Spring 2010
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.