
Running Head: EVALUATING EMERGENCY RESPONSE PERFORMANCE

Determining Baseline Response Performance

David L. DeMarco

Everett Fire Department, Everett, Washington


Certification Statement

I hereby certify that this paper constitutes my own product, that where the language of others is

set forth, quotation marks so indicate, and that appropriate credit is given where I have used the

language, ideas, expressions, or writings of another.

Signed: _________________________________________


Abstract

Like many municipal corporations, the City of Everett has experienced an economic downturn

resulting in reduced revenue. This fiscal crunch has led to reduced spending in all departments,

including the Fire Department. In 2010 the Everett Fire Department began a policy of

intermittent reduced daily staffing to decrease firefighter overtime expense. This policy created

a labor dispute in which both sides made cases for and against the impact of reduced staffing on

department performance. The problem is that significant effort is invested in response data collection

but no comprehensive performance understanding results from that effort. Internal and external

customers are allowed to speculate about current performance, various deployment models and

workload distribution. The descriptive method of research was used for this ARP. A literature

review was conducted to determine what response performance standards exist for the fire

service. Procedures included a data dump from existing computer aided dispatch records,

extensive data cleaning, filtering, and summarization. Procedures also included the application

of derived data to a geographic information system to provide performance mapping. Results

determined Everett did not meet NFPA or CFAI response performance standards. Turnout times

were particularly slow. Everett’s response data collection practices were inadequate. This ARP

recommends the department adopt performance standards and report on performance in an

annual report. It also recommends the data entry practices at dispatch be validated, and that

performance data reporting be automated so it can be reviewed regularly. Policy surrounding

collection principles and practices should be more clearly defined. This ARP also recommends

the department conduct a community-wide risk assessment allowing hazard-specific effective

response force assignments which may positively impact performance.


Table of Contents

Certification Statement ................................................................................................................... 2

Abstract ........................................................................................................................................... 3

Introduction ..................................................................................................................................... 6

Background and Significance ......................................................................................................... 7

Literature Review.......................................................................................................................... 11

Table 1. ......................................................................................................................................... 14

Table 2. ......................................................................................................................................... 17

Procedures ..................................................................................................................................... 19

Table 3. ......................................................................................................................................... 20

Results ........................................................................................................................................... 24

Table 4. ......................................................................................................................................... 24

Table 5. ......................................................................................................................................... 35

Table 6. ......................................................................................................................................... 36

Discussion ..................................................................................................................................... 37

Recommendations ......................................................................................................................... 42

Bibliography ................................................................................................................................. 46

Appendix A ................................................................................................................................... 50

Appendix B ................................................................................................................................... 51


Appendix C ................................................................................................................................... 55


Introduction

The socio-economic conditions prevalent in the United States today have brought close

scrutiny to all aspects of public expense, including emergency services. The American fire

service is being evaluated by politicians and citizens in cost/benefit analyses unlike any that have

occurred in the past. A historic recession beginning in 2008 (Rampell, 2009) accompanied by

the bursting of a significant “bubble” (Kuntz, 2009) in property values during that same time has

left municipalities faced with serious budget shortfalls. Leaders at the state and local level have

been forced to include cuts to public safety services in the form of service and staffing

reductions. This has been true nationally, and the trend continues locally. (Coleman, 2011)

(King, 2011) Due to the competitive financial environment, the fire service is being forced to

justify its expense in quantifiable, demonstrable ways.

The problem is the Everett Fire Department is ill-prepared to meet this challenge;

significant effort is invested in data collection but no comprehensive performance reports result

from that effort. Thus, internal and external customers are allowed to speculate about current

performance, various deployment models, and workload distribution. This problem has left the

department scrambling to respond when city administrators imposed intermittent daily staffing

reductions, or “brownouts” (Roberts, 2010) to reduce labor costs in 2010. The staffing

reductions model was chosen based not on empirical data and risk assessment, but rather simply

on the availability of multiple-company fire stations. The city administration is currently

defending the argument that brownouts have no impact on service delivery and the labor unit has

challenged that position. Unfortunately both sides are making arguments based on finance, with

only anecdotal reference to actual system performance or service delivery. (City of Everett,

2011) (International Association of Firefighters Local #46, 2011) The purpose of this applied


research project (ARP) is to measure the Everett Fire Department’s 2010 emergency response

performance, which will provide a common base from which discussions about changing

deployment models and the possible impacts on level of service can occur. The descriptive

method of research was used to prepare this ARP. The research questions addressed were: What

is the average response time for all alarm types, by unit? What is the 90th percentile response

time for all call types, by unit? Which areas of the city are or are not receiving service delivery

in times meeting national standards? What are weaknesses in emergency response data

collection practices? And finally, how do Everett’s results compare to nationally recommended

standards?

Background and Significance

The City of Everett is populated by approximately 110,000 residents over forty-two

square miles of land within Snohomish County, Washington. Everett is located approximately

thirty miles north of Seattle on the Interstate 5 corridor and comprises the northern end of the

Puget Sound metropolitan region containing the cities of Seattle, Tacoma, Bellevue, and Everett

among others. The region is home to 1.5 million residents. The City of Everett is old by west

coast standards, incorporated in 1892 and is the seat of Snohomish County government. As an

early western terminus of the Pacific Railroad, Everett exported lumber and lumber products to

the United States and the world for the bulk of its first sixty years. Fantastic mill fires routinely

occurred on Everett’s waterfront and contributed to an early, rich firefighting tradition. Modern

Everett is home to several multinational corporations including the Boeing Company’s primary

assembly factory for the 747, 767, 777, and 787 commercial aircraft. In addition to these and

other private sector employers the City of Everett is also home to Naval Station Everett,


homeport to the nuclear-powered aircraft carrier U.S.S. Abraham Lincoln and several other ships

from her battle group.

The Everett Fire Department is a career fire department comprised of 177 uniformed

personnel providing all-hazard emergency services including fire response, emergency medical

services and transport including basic and advanced life support, technical rescue, hazardous

materials technician-level response, fire code plan review, fire code inspection and enforcement,

and a variety of supporting services. Historically the Everett Fire Department has staffed seven

fire engines, one ladder company, three paramedic units, two basic-life support units and a

battalion unit daily. Every apparatus in the fleet is equipped with a mobile data computer

(MDC) and 900MHz wireless modem for communication with computer-aided dispatching

(CAD) software. Each year the department answers an upward trending number of calls for

service; in 2010 there were over 18,000 alarms. As is the case with most modern jurisdictions,

over eighty percent of those were requests for medical assistance.

The Everett Fire Department is dispatched to alarms by the Snohomish County Police

Staff and Auxiliary Service Center (SNOPAC), a publicly funded communications center

currently serving twenty-four fire jurisdictions and twelve police jurisdictions. (SNOPAC)

SNOPAC utilizes PRC Public Sector Inc. CAD software, which was originally engineered in the

1980’s and has been updated as needed, and when possible. PRC was purchased by Litton in

1996, which was a subsidiary of the Northrop Grumman Corporation. Northrop Grumman

continues to provide support for this CAD suite but it is widely regarded as obsolete. (Dowd,

2011) SNOPAC is currently involved with the installation of a new CAD software suite by New

World Systems (NWS) which is expected to begin service in late 2012. Until that time, the


antiquated Grumman CAD system will be maintained, but it cannot provide any data beyond what is currently collected today.

Like all fire service organizations, Everett takes pride in its service to the community.

Unfortunately the City of Everett, like most municipal corporations, has experienced a significant

reduction in revenue during the economic recession of 2008-2010. The City of Everett’s 2011

tax assessed valuation has declined 9.1% from 2010, (Snohomish County Assessor, 2011) and

the State of Washington revenue forecast for the 2011-2013 biennium is $698M lower than the

2009-2011 biennium due to decreased sales tax collection and reduced business and occupation

taxes. (Raha, 2011) These three revenue streams comprise the bulk of revenue for the City of

Everett. The City of Everett Chief Financial Officer projects expenses to exceed revenues by

nearly ten million dollars in 2012. (Herald, 2011)

The City of Everett identified the looming revenue shortfalls early, and began to react

with cost-saving measures before expenses exceeded revenues. A city-wide hiring freeze was

implemented in 2008 and remains in effect. (Robinson, 2011) Retirements within the fire

department have left vacancies that remain unfilled, allowing attrition to absorb some of the

budget reductions. Unfortunately the City administration required additional savings and in the

Fall of 2010 the Fire Chief implemented policy directed at reducing the City’s fire department

overtime expense, which exceeded $600,000 in 2009. (Robinson, 2011)

Reduction in overtime expense has been achieved by reducing the number of available

response apparatus when the daily number of employees reporting for work is not sufficient to

staff those vehicles. This process has come to be known nationally as a “brownout,” a term borrowed from the electrical utility industry describing a reduction in available voltage during supply

shortages necessary to avert a blackout. (Wikipedia) Brownouts in Everett were begun in


October, 2010; the units chosen for brownouts were the only BLS transport unit (Aid 2) and

Engine 3. Aid 2 is staffed by two firefighter-EMT’s and responds from Station 2, which is also

home to a three-firefighter staffed engine company (Engine 2). Engine 3 is housed at Station 1,

which is the department’s largest and busiest fire station. Station 1 is home to Battalion 1,

Engine 1, Ladder 1, Medic 1 and Engine 3. Engine 3 was moved to station 1 in 2007 when its

station was demolished for expansion plans at the Port of Everett; there are currently no plans to

replace Station 3.

For the remainder of 2010 one or both of those units was browned out on 42 days

between October 1 and December 31, 2010; (International Association of Firefighters Local #46,

2011) and brownouts continue in 2011. The union representing the firefighters, International

Association of Firefighters Local Affiliate #46 (I.A.F.F. 46) immediately demanded to negotiate

the impacts of this, charging a change in working conditions under the collective bargaining laws

of the State of Washington (Washington State, RCW 41.56). The City of Everett has uniformly

denied the claims of the union and the matter is scheduled to be heard by a hearing examiner in

June, 2011. Needless to say, there is a great deal of emotion surrounding the issue and it has

served to further polarize the work unit and isolate the administration. Additionally, a drawn-out

legal proceeding during a time of revenue shortfalls is undesirable, as limited public funds are

further depleted by legal expenses that benefit only the attorneys representing either side of

the argument.

I.A.F.F. 46 contends brownouts “will result in the delayed arrival to single and multiple

unit responses” (International Association of Firefighters Local #46, 2011, p. 9) among other

effects, and the City counters that brownouts are a “reallocation of resources” (City of Everett,

2011, p. 4) which still allow for a “balanced, satisfactory service for the citizens”. (City of


Everett, 2011, p. 4) Both parties to the argument will vigorously pursue their position with an

outcome as yet to be determined. However, neither party is well prepared to address the

underlying questions created by this disagreement: What is the current baseline performance of

the Everett Fire Department? What is considered acceptable risk by the citizens and elected

officials of the community? And finally, what methods will be employed to gauge changing risk

levels as changes are made to deployment models and staffing?

As described by the research questions, this ARP will describe the baseline

performance of the department, as that must be understood before risk analysis of deployment

model changes can occur. While the union and the City may never agree on daily staffing or the

best approach for cost savings during economic recession, they should be able to agree on the

facts surrounding past performance and use those facts to adopt local performance objectives

based on defined methods and criteria. By undergoing that process the department will be better

able to plan for the future by making decisions based on performance data, which will improve

local planning and emergency preparedness allowing Everett to make advances towards the

United States Fire Administration’s stated Goal #2. (USFA)

Literature Review

Fire Department performance measures are largely defined by time. With the ultimate

goal being the preservation of life, numerous sources have identified suggested emergency

service performance standards by pitting survivability against time. The American Heart

Association (AHA) has repeatedly shown a correlation between survival of cardiac arrest and the

length of time that passes between the arrest and the arrival of definitive intervention. The AHA

has also shown that the probability of survival after cardiac arrest decreases ten percent with

each minute that passes without definitive care; thus the likelihood of survival after ten minutes


without medical care is virtually zero. (American Heart Association, 2010) This fact has played

a major role in the development of emergency medical services deployment models nationwide.

In the 1960s the nationwide availability of the fire service, and its poised state of immediate response readiness, made it the logical choice for early delivery of emergency medical services and shaped the deployment of the modern fire service as the nation’s frontline emergency medical response force.

The National Institute of Standards and Technology (NIST) has also identified time as a

valuable variable in the progression of severity in a structural fire incident. Based on standards

set by the American Society for Testing and Materials (ASTM, 2011), NIST has defined a

time-temperature curve, finding that under controlled conditions a typically loaded residential

room will experience fire flashover at the four-minute mark. The output of heat energy begins to

grow exponentially once flashover has occurred, releasing enough thermal energy to break

through residential fire barriers, extending the fire from the room contents to the structure,

making it a much more difficult fire to fight. Brannigan (2004) has argued the 1980’s era test

fires used to define the time temperature curve are not representative of modern building

materials or construction techniques and that currently the time to flashover can be significantly

shorter than the NIST prediction, further emphasizing the need for rapid response.

No longer is the early threat from fire confined to the room of origin. NIST has recently

identified the four-minute mark in residential fire progression as the time at which lethal levels of carbon monoxide become present in the distal ends of homes experiencing a

structural fire. Carbon monoxide, known as the “silent killer” (Masimo) escapes the room of

origin and has been shown to reach deadly concentrations in all areas of a residence within four

minutes. (National Institute of Standards and Technology, 2010)


The current deployment model of the American fire service and the response standards

that have been identified for it are all built on the foundation of these important observations

about human survival and fire behavior. In the interest of better understanding the relationship

between the number and placement of fire companies and the severity of fire loss in New York

City, Ignall, Rider and Urbach were the first to methodically study this relationship using New

York City 1968 to 1970 fire incidents. (Ignall, Rider & Urbach, 1978). While travel distance

was the variable being studied, it is reasonable to consider distance as another expression of

time. Plots of severity versus distance found that “severity appears to increase with distance”,

(Ignall, Rider, & Urbach, 1978, p. 10) and subsequent statistical analysis of these plots supported

that finding. This document is the first American statistical study supporting what is generally

intuitive: The smaller the distance (time) between the onset of an emergency and arriving

mitigation resources, the smaller the resulting severity and loss.

The State of Washington has recognized time as a critical measure of department

performance by passing Revised Code of Washington (RCW) Chapter 35.103: Fire

Departments – Performance Measures, citing the need for medical responders to arrive before

the onset of brain death in cardiac arrest or firefighters to begin fire extinguishment before

flashover occurs. (Washington State) Within the law are provisions requiring “substantially

career” (Washington State, RCW 35.103.010) fire departments to set their own response

standards and then report their performance, as measured against those standards, to the State of

Washington. Specifically, the law requires each city to set specific response time objectives for

calls of all types, and that those objectives are met 90 percent of the time. Additionally, each

city was required to evaluate its performance annually, and issue a written report defining areas

of success and failure against defined response standards. To date, the Everett Fire Department


has neither set performance standards nor evaluated its performance as has been required by

Washington State law since 2007.

The National Fire Protection Association (NFPA) Standard 1710: Standard for the

Organization and Deployment of Fire Suppression Operations, Emergency Medical Operations,

and Special Operations to the Public by Career Fire Departments, 2001 Edition was the first

national effort to define performance standards for career fire departments. (NFPA, 2010) NFPA

1710 defines several key time markers for evaluation of emergency response performance.

Alarm answering, handling, processing, and transfer time all occur at the public safety answering

point and emergency call center prior to fire department notification. For the City of Everett this

portion of response occurs at SNOPAC. Evaluation of SNOPAC’s service against NFPA 1710 is

outside the scope of this applied research project.

NFPA 1710, 2010 Edition also defines several key performance markers that are the

responsibility of the fire department: Turnout time, travel time, initiating action time, and total

response time. Turnout time is defined as the time interval between receipt of the original alarm

signal and the beginning of travel to the emergency incident. Travel time is the interval between

beginning of travel and arrival at the location of the emergency incident. Initiating action time is

the interval between arrival at an emergency incident and the initiation of activities intended to

mitigate the emergency. Total response time is the interval between alarm receipt and the

initiation of mitigating actions. NFPA 1710 performance standards for each time interval are

depicted in Table 1.

Table 1.

NFPA 1710 Time Interval Standards (NFPA 1710:4.1.2.1)

EMS Turnout Time 60 seconds 90% of the time

Fire/Special Operations Turnout Time 80 seconds 90% of the time

EMS incident travel time: BLS unit 240 seconds 90% of the time

EMS incident travel time: ALS unit 480 seconds 90% of the time

Fire incident travel time: First engine 240 seconds 90% of the time

Fire incident travel time: Full first alarm 480 seconds 90% of the time

Another portion of NFPA 1710 relevant to this ARP is section 4.1.2.2 which states the

“department shall document the initiating action/intervention time”. (NFPA, 2010) The Everett

Fire Department records an arrival time at emergency incidents but does not identify a separate

initiation of action time, instead choosing to define the two as being the same, which is often not

the case.

The Fire Suppression Rating Schedule is used by the Insurance Services Office (ISO) to

assign communities a graded Public Protection Classification (PPC). (ISO) The ISO PPC is part

of the formula used in setting commercial and residential insurance rates for communities; thus

the better the ISO PPC, the lower the collective fire protection insurance rates for the

community. Ten percent of the ISO PPC is based on the community’s dispatching efficiency;

forty percent is based on the community water supply system; and fifty percent is based on fire

department performance and capability. (ISO) ISO does not set response time standards, instead

assigning value to the percentage of the community serviced by a fire department pumper that is

within 1.5 miles and a ladder truck within 2.5 miles. Distribution of pumpers and ladder trucks

comprises only four percent of the total ISO PPC score, with the remaining fire department

evaluations focused on equipment, training, and personnel.


The Commission on Fire Accreditation International (CFAI) is perhaps the leading

national authority on evaluating fire department performance. CFAI has developed a self-

assessment manual for fire department use in evaluating all aspects of operations, including

emergency response, training, prevention, and physical and human resources. (Commission on

Fire Accreditation International, 2009) CFAI introduces performance measurement terminology

for the fire service useful to this ARP:

Effective Response Force (ERF): “The establishment of an ERF is an exercise that

closely follows on-scene performance expectations…congruent with the agency’s service

level objectives. For example, if the service level objectives are to conduct offensive fire

attack operations… the ERF needs to arrive with enough weight in a short enough time

to safely establish an initial attack force…” (Commission on Fire Accreditation

International, 2009, p. 47)

Concentration of Resources: “The spacing of multiple resources arranged so an effective

response force can arrive within the time frames outlined in the on-scene performance

expectations.” (Commission on Fire Accreditation International, 2009, p. 47)

Distribution of Resources: “Resource distribution is associated with travel time.

Distribution is measured by the percent of the jurisdiction covered by the first-due units.

The service level objectives will drive response time performance, and response times

will subsequently drive resource distribution”. (Commission on Fire Accreditation

International, 2009, p. 47)

Response Reliability: “The probability the required amount of staffing and apparatus

will be available when an emergency call is received”. (Commission on Fire

Accreditation International, 2009, p. 48)


Fractal time measurements: “Response time performance should be measured on a

percentage (or fractal) basis, which follows industry standard and is far more accurate in

allowing the AHJ to determine performance.” (Commission on Fire Accreditation

International, 2009, p. 47)
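As an illustration of the fractal approach CFAI describes, the short Python sketch below derives a 90th percentile figure from a sorted list of observations in the same manner later described in Table 3 (count multiplied by 90 percent, applied to the sorted list). The turnout times used here are invented for illustration and are not Everett data.

# Minimal sketch of fractal (percentile) performance measurement.
# The turnout times below are invented for illustration only.

def percentile(values, pct):
    """Return the observation at or below which pct percent of the sorted values fall."""
    ordered = sorted(values)
    # Index follows the method described in Table 3: count * percentage,
    # applied to the sorted list (rounded up to a whole observation).
    index = -(-len(ordered) * pct // 100) - 1
    return ordered[max(0, int(index))]

turnout_seconds = [48, 52, 55, 61, 63, 70, 74, 82, 95, 140]  # hypothetical values

average = sum(turnout_seconds) / len(turnout_seconds)
ninetieth = percentile(turnout_seconds, 90)

print(f"average turnout: {average:.0f} seconds")   # 74 seconds, appears close to standard
print(f"90th percentile: {ninetieth} seconds")     # 95 seconds, what nine of ten crews actually achieve

The contrast between the two figures is the reason CFAI favors fractal reporting: a handful of long turnout times barely move the average but are fully visible at the 90th percentile.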

CFAI response performance standards vary slightly from NFPA 1710 and are depicted in Table

2.

Table 2.

Commission on Fire Accreditation International – Response Time Performance Benchmarks

Alarm Handling (Call processing) 60 seconds 90% of the time

EMS Turnout Time 60 seconds 90% of the time

Fire & Special Ops Turnout Time 80 seconds 90% of the time

Travel Times Vary based on risk assessment and/or population density

CFAI requires the local fire authority to perform a comprehensive community-wide risk

assessment prior to the formation of service level objectives and response performance goals.

Additionally CFAI recognizes that an effective response force is not necessarily the same size,

even in cases where the incident type might be identical. (Commission on Fire Accreditation

International, 2009) For example, an effective response force for an un-sprinklered Type III

commercial structure fire might be entirely different than the effective response force for a fully

sprinklered Type I commercial structure fire, even though both incidents are identified as

commercial fire incidents. NFPA 1710 agrees with this concept stating the number of on-duty

personnel assigned to an incident will be “determined through (prior) task analyses” which

include multiple risk assessment factors.


CFAI recommends a community-wide risk assessment based on historical performance

measurement and nationally recognized risk factors, including life safety, potential economic

impact, potential environmental impact, and community profile. Once the risk assessment is

complete the jurisdiction can proceed with identifying service level objectives for the expected

incident types based on available resources and then set performance standards for each of those

incidents. This critical assessment and objective-setting process is encompassed in a

jurisdiction-specific document known as a Standard of Cover, the development of which is

described comprehensively by CFAI. (Commission on Fire Accreditation International, 2008)

Of the elements contained within response time, fire departments may not have much

impact on call processing time, particularly when the communications center is not contained

within the department; nor on travel time, which can only be affected at the time of new fire

station placement (distribution of resources). Other factors affecting travel times include traffic,

road network quality and capacity, time of day, and weather; none of which can be meaningfully

impacted by the fire department. (Commission on Fire Accreditation International, 2009) Thus

much research focus has been placed on turnout times, which are directly under the authority of

the fire department.

Many authors have found NFPA and CFAI turnout time standards impossible to achieve,

and have set standards closer to their actual performance. (Dell’Orfano, 2009) (Kitterman, 2006)

(Soptich, 2005) (Mueller, 2010) Factors studied as possible contributors to lagging turnout times

include station design, time of day, preparation (turnout training), and activities being performed

at the time of alarm receipt. (Dell’Orfano, 2009) (Soptich, 2005) (Weninger, 2004)

Station design and size has been repeatedly shown to have negligible effect on turnout

time, despite the seeming logic of the relationship. (Dell'Orfano, 2009) (Soptich, 2005) Time of


day consistently affects turnout times, with nighttime alarm turnout time exceeding daytime

alarms by 30 to 45 seconds. (Soptich, 2005) (Weninger, 2004) (Dell'Orfano, 2009) Training for

expedient turnout times appears to be beneficial, as crews more familiar with streamlining

turnout procedures report improved response times. (Dell'Orfano, 2009)

Attitude and understanding of turnout time performance are an essential part of maintaining excellent turnout times. (Soptich, 2005) (Weninger, 2004) Employees may report their turnout performance as being within established criteria when it is not, making regular performance reports essential for improvement. (Dell'Orfano, 2010)

Procedures

The Everett Fire Department is reliant on a third-service communications center which

records and maintains response time data. Input from subscribing fire agencies on dispatching

procedures and data collected occurs within a fire-users committee, which reports to the

SNOPAC board of directors. CAD data is stored by SNOPAC, and in 2006 a software link was

established between SNOPAC servers and newly acquired FDM records management software

(RMS) purchased by the Everett Fire Department. Since that time incident data points created by

SNOPAC in CAD are transferred, upon the closure of the incident, to FDM. Due to the limited

data collection capability of CAD, the data transfer allows the Everett Fire Department to add

supplemental information for each incident. The data transferred from CAD and then

supplemented in FDM by Everett Fire users was the source data for this ARP.

A public records request was submitted to the City of Everett Information Technology

Department, asking for all 2010 response data stored in FDM to be exported to an Excel

spreadsheet. Data points collected in FDM and provided in Excel are contained in Table 3. The

time values transferred from CAD to FDM, and subsequently from FDM to Excel are


consistently stored in the following format: MM/DD/YYYY; HH:mm:ss. Those time values of

interest to this study but not specifically stored by CAD are calculations based on time stamps

from CAD; for example, “turnout time” is calculated using the following

formula:

[Enroute Time (CAD)] – [Dispatch Time (CAD)] = [Turnout Time (Calculated)]

All calculated data, and the underlying formula used to derive the value is depicted in Table 3. A

complete translation of SNOPAC incident abbreviations, including the required level of personal

protective equipment and response mode for each incident type is found in Appendix A.

Table 3.

Study Source Data and Calculations

Data Point | CAD or Calculated | Formula and Method
Incident Number | CAD |
Incident Date & Time | CAD |
Incident Type | CAD |
Address | CAD |
Fire Demand Zone | CAD |
Apparatus Assigned | CAD |
Dispatched Time | CAD |
Enroute Time | CAD |
Turnout Time | Calculated | [Enroute Time] – [Dispatched Time]
Arrival Time | CAD |
Response Time | Calculated | [Arrival Time] – [Dispatched Time]
Available (In-Service) Time | CAD |
Cancelled Enroute | CAD |
Average Turnout Time | Calculated | [Sum of all turnout times where an enroute time is reported] / [Count of all turnout times where an enroute time is reported]
Average Response Time | Calculated | [Sum of all response times where an arrival time is reported] / [Count of all response times where an arrival time is reported]
90th Percentile Response Time | Calculated | [Count of all response times] * 90%; applied to sorted list of all response times
Average Response Time by Demand Zone | Calculated | [Sum of all response times within each demand zone] / [Count of all response times within each demand zone]
Total Emergency Work Time | Calculated | [Available time] – [Dispatched time]

Before calculations could commence, some cleaning of the data was required.

Understanding of Everett Fire Department operating practices was necessary to interpret the

value of some data; for example, an incident for which a unit was dispatched and went enroute

but was later cancelled before arrival would generate a turnout time that was of use to the study,

but not a response time. A series of IF/THEN statements were used to progressively evaluate the

progress of a unit during an alarm. When an IF/THEN statement returned a null or negative

value no further calculations were conducted on that incident. For example, the Excel formula

for response time:

=IF([Turnout Time]="","",[Arrival Time]-[Dispatched Time])

would read “If turnout time equals no value, then no calculation, otherwise calculate response

time to be arrival time less dispatched time.” This method allows alarms that generated a turnout

time to be studied, even if the same alarm did not generate an arrival time (cancelled en-route)

and therefore no travel time or overall response time.
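A rough pandas equivalent of this conditional cleaning is sketched below. The column names follow Table 3; the two sample incidents (one of them cancelled en route) are invented for illustration.

# Sketch of the conditional interval calculations described above, using pandas.
# Column names follow Table 3; the sample rows are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "Dispatched Time": pd.to_datetime(["2010-03-01 08:00:05", "2010-03-01 09:15:00"]),
    "Enroute Time":    pd.to_datetime(["2010-03-01 08:02:10", "2010-03-01 09:16:45"]),
    # The second incident was cancelled en route, so no arrival time exists.
    "Arrival Time":    pd.to_datetime(["2010-03-01 08:06:40", None]),
})

# Turnout time is calculated whenever an enroute time was recorded.
df["Turnout Time"] = df["Enroute Time"] - df["Dispatched Time"]

# Response time is only calculated when an arrival time exists, mirroring the
# IF/THEN guard in the Excel workbook; cancelled units are left blank (NaT).
df["Response Time"] = (df["Arrival Time"] - df["Dispatched Time"]).where(df["Arrival Time"].notna())

print(df[["Turnout Time", "Response Time"]])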

Further cleaning is required when calculating total work time. Due to idiosyncrasies in

SNOPAC CAD software, it does not record a constant value when a unit becomes available or

departs one alarm for another. Therefore the automated data stream from CAD to FDM does not


include the time a unit is cleared from an alarm, and so of the various time stamps contained in

FDM the “available” time must be inputted manually by fire officers upon return to the station.

This manual entry results in numerous typographical errors across the dataset. The information

manually entered by the fire officer is a complete date and time field: MM/DD/YYYY;

HH:mm:ss. In most cases the errors lay with the entry of the date, including errors in month, day

and year. These errors resulted in total work time values that were enormous when considered

on a minutes and seconds scale. Occasionally they would result in negative work time values.

These manual entry errors were found by comparing the date and time of call origin as stamped by

CAD to the date and time of unit availability. When a discrepancy was found, but all other

stamped values appeared to be within normal ranges, the date or time was corrected. These

manual input error corrections were the only alterations made to the original dataset.
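The screen used to surface these manual-entry errors can be sketched as follows; it mirrors the comparison of the CAD-stamped dispatch time against the manually entered availability time. The 24-hour plausibility threshold and the sample records are illustrative assumptions rather than department practice.

# Sketch of a screen for manual-entry errors in the "Available" time field.
# The threshold and sample records are illustrative assumptions only.
import pandas as pd

def flag_available_time_errors(df, max_plausible_hours=24):
    """Return rows whose implied work time is negative or implausibly long."""
    work_time = df["Available Time"] - df["Dispatched Time"]
    bad = (work_time < pd.Timedelta(0)) | (work_time > pd.Timedelta(hours=max_plausible_hours))
    return df[bad]

records = pd.DataFrame({
    "Dispatched Time": pd.to_datetime(["2010-05-02 14:10:00", "2010-05-02 16:40:00"]),
    # The second row contains a mistyped year, producing an enormous work time.
    "Available Time":  pd.to_datetime(["2010-05-02 14:55:00", "2011-05-02 17:20:00"]),
})

print(flag_available_time_errors(records))  # flags the row with the mistyped year for manual correction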

Fire Demand Zones (FDZs) are small geographic grids overlaying the City of Everett.

The FDZ an alarm is located within is determined by address or location, and is assigned

automatically by CAD software. The origins of the FDZ lie with ISO during their initial

appraisal of the City of Everett in the 1970’s. (Robinson, 2011) FDZs are similar, but not

identical to individual census blocks, and allow for geographic analysis of incidents at the

neighborhood level.

Calculating service to demand zones required narrowing the dataset. Pertinent to this

ARP was the time it takes to arrive at alarms within the demand zones in an emergent (lights and

sirens) response. When calculating service to fire demand zones, only responses which were

deemed emergent by the initial dispatch were considered, with non-emergent alarms excluded

from the calculations. Additionally, first due basic life support units were the only units

considered in demand zone studies, as the standards for advanced life support arrival allow


additional time. Arrival times for advanced life support are outside the scope of this ARP and

will require additional study.

The data points requiring calculations (Table 3) were performed in Excel and exported to

ArcMap 10 geographic information system (GIS) software as data tables for geographic analysis

by demand zone. Shape files of the City of Everett including fire station locations, fire demand

zones, parcels, and transportation networks were provided by the City of Everett Utilities

Department. Maps displaying various comparisons of actual response time performance versus

national standards were created using ArcMap 10 software in conjunction with derived data

tables and existing City of Everett shape files.
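The demand-zone summary that was joined to the FDZ shape file can be sketched in pandas as shown below. Column names, sample rows, and the output file name are illustrative assumptions; the 300-second comparison reflects the NFPA 1710 first-unit benchmark (60 seconds turnout plus 240 seconds travel) discussed in the Results section.

# Sketch of the per-demand-zone summary exported for joining to the FDZ shape file.
# Column names, sample rows, and the file name are illustrative assumptions.
import pandas as pd

responses = pd.DataFrame({
    "Fire Demand Zone":  ["101", "101", "102", "103"],
    "Response Time Sec": [255, 310, 290, 420],   # first-due BLS, emergent dispatches only
})

by_zone = (responses
           .groupby("Fire Demand Zone")["Response Time Sec"]
           .agg(call_count="count", avg_response_sec="mean")
           .reset_index())

# NFPA 1710 first-unit benchmark: 60 s turnout + 240 s travel = 300 s total.
by_zone["meets_300s_benchmark"] = by_zone["avg_response_sec"] <= 300

# Written to CSV so it can be joined to the demand-zone polygons in ArcMap 10.
by_zone.to_csv("fdz_response_summary.csv", index=False)
print(by_zone)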

To determine the department’s performance in assembling an effective response force,

first an ERF had to be defined. While NFPA 1710 and CFAI both offer slightly varying

interpretations of an ERF, Everett Fire Department SOP 1.04.04 and 1.04.05 define the minimum

resources to be assigned to residential and commercial fires, respectively. (Everett Fire

Department, 2011) Everett policy assigns a battalion chief, three engine companies, a ladder

company, a single BLS aid unit and a single ALS medic unit to all residential fires. For

commercial fires a fourth engine and a second ladder company are added.

Using the department’s staffing policy (Everett Fire Department, 2011) it can be

determined that the department’s definition of an effective response force is seventeen

firefighters for residential fires and twenty-three for commercial fires. Due to the real-world

complexity of the movement of multiple units, it was difficult to measure ERF assembly time

and exposed some limitations of the data. For the purposes of calculating ERF assembly, this

ARP considers commercial and residential fire incidents whose data met the following

requirements: All response units were dispatched simultaneously, no units were cancelled en-


route, and all units that were initially dispatched to the alarm recorded an arrival time, with the

arrival of a battalion chief and one ladder company required. Rather than require the arrival of

specific units as described by SOP, the total number of firefighting personnel on-scene was the

final determining factor for qualification: When the above criteria were met and seventeen

firefighters arrived at a residential fire, or twenty-three firefighters arrived at a commercial fire,

the time was recorded. Many 2010 multi-unit fire incidents were excluded due to their failure to

meet these criteria.
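Given per-unit arrival times and crew sizes, the assembly-time calculation reduces to finding the moment at which cumulative on-scene staffing first reaches the ERF threshold of seventeen or twenty-three firefighters. A minimal sketch follows; the crew sizes and the sample incident are invented for illustration and are not drawn from the 2010 dataset.

# Sketch of the ERF assembly-time calculation for a qualifying incident.
# Crew sizes and the sample incident are illustrative assumptions only.
from datetime import datetime

ERF_REQUIRED = {"residential": 17, "commercial": 23}

def erf_assembly_seconds(dispatch, arrivals, crew_sizes, incident_class):
    """Seconds from dispatch until enough firefighters are on scene to form an ERF.

    arrivals maps unit name to arrival datetime; returns None if the ERF never assembles.
    """
    needed = ERF_REQUIRED[incident_class]
    on_scene = 0
    for unit, arrived in sorted(arrivals.items(), key=lambda kv: kv[1]):
        on_scene += crew_sizes[unit]
        if on_scene >= needed:
            return (arrived - dispatch).total_seconds()
    return None

dispatch = datetime(2010, 7, 14, 2, 15, 0)
arrivals = {
    "E1": datetime(2010, 7, 14, 2, 20, 30), "B1": datetime(2010, 7, 14, 2, 21, 50),
    "E2": datetime(2010, 7, 14, 2, 22, 10), "E4": datetime(2010, 7, 14, 2, 23, 40),
    "L1": datetime(2010, 7, 14, 2, 24, 5),  "A2": datetime(2010, 7, 14, 2, 25, 0),
    "M1": datetime(2010, 7, 14, 2, 26, 30),
}
crews = {"E1": 3, "E2": 3, "E4": 3, "L1": 3, "B1": 1, "A2": 2, "M1": 2}

print(erf_assembly_seconds(dispatch, arrivals, crews, "residential"))  # 690.0 seconds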

Results

What are the average and 90th percentile response times for all call types, by unit? The

2010 alarms sorted by incident type (Appendix A) and unit, with performance times for turnout, travel time, response time, and total emergency work time, are depicted in Table 4. Definitions of

the incident abbreviations are found in Appendix A. Alarm types are grouped according to their

similarity of required levels of personal protective equipment and emergency response mode.

Table 4.

Results of Research Question #1: 2010 Emergency Responses by Unit

Engine 1
2010 | Count | Avg Turnout Time | Avg Travel Time | Avg Response Time | 90th Percentile Response Time | Total Emergency Work Time (Hrs)
BLSN & SC | 235 | 0:02:41 | 0:02:17 | 0:04:58 | 0:07:48 | 49.91
MVCN | 163 | 0:02:18 | 0:05:18 | 0:07:36 | 0:11:50 | 39.44
BLS | 1205 | 0:02:09 | 0:02:33 | 0:04:42 | 0:06:46 | 392.86
FAC FAS FAR | 198 | 0:02:20 | 0:01:56 | 0:04:16 | 0:07:04 | 41.09
FC | 71 | 0:02:38 | 0:03:22 | 0:06:00 | 0:09:55 | 23.21
FR | 77 | 0:02:26 | 0:03:27 | 0:05:53 | 0:08:31 | 38.08
FB FS FTU | 98 | 0:02:28 | 0:03:52 | 0:06:20 | 0:09:46 | 26.19
MED MVCP | 630 | 0:02:11 | 0:02:10 | 0:04:21 | 0:06:19 | 223.78
MEDX | 63 | 0:01:59 | 0:02:14 | 0:04:13 | 0:06:29 | 20.84
MVC MVCE MV | 128 | 0:02:27 | 0:03:31 | 0:05:58 | 0:08:42 | 47.21
AIR, GLI, GLO, H | 31 | 0:02:37 | 0:03:24 | 0:06:01 | 0:08:31 | 6.08
Overall | 2899 | | | | | 908.69

Engine 2
2010 | Count | Avg Turnout Time | Avg Travel Time | Avg Response Time | 90th Percentile Response Time | Total Emergency Work Time (Hrs)
BLSN & SC | 809 | 0:01:57 | 0:03:36 | 0:05:33 | 0:10:28 | 233.69
MVCN | 137 | 0:02:13 | 0:05:43 | 0:07:55 | 0:14:33 | 41.65
BLS | 421 | 0:01:45 | 0:02:33 | 0:04:18 | 0:06:26 | 133.47
FAC FAS FAR | 175 | 0:01:59 | 0:02:11 | 0:04:10 | 0:06:57 | 36.77
FC | 51 | 0:02:18 | 0:03:49 | 0:06:07 | 0:10:54 | 14.75
FR | 51 | 0:02:00 | 0:02:33 | 0:04:33 | 0:06:22 | 35.83
FB FS FTU | 94 | 0:02:04 | 0:04:02 | 0:06:06 | 0:10:33 | 23.01
MED MVCP | 228 | 0:01:51 | 0:02:08 | 0:03:59 | 0:06:03 | 93.95
MEDX | 43 | 0:01:39 | 0:02:14 | 0:03:53 | 0:06:20 | 17.67
MVC MVCE MV | 125 | 0:02:11 | 0:03:54 | 0:06:05 | 0:10:10 | 44.53
AIR, GLI, GLO, H | 22 | 0:02:05 | 0:04:30 | 0:06:35 | 0:08:57 | 5.79
Overall | 2156 | | | | | 681.12

Engine 3
2010 | Count | Avg Turnout Time | Avg Travel Time | Avg Response Time | 90th Percentile Response Time | Total Emergency Work Time (Hrs)
BLSN & SC | 387 | 0:02:17 | 0:03:40 | 0:05:57 | 0:08:31 | 113.27
MVCN | 82 | 0:02:19 | 0:04:01 | 0:06:20 | 0:10:08 | 16.82
BLS | 747 | 0:02:09 | 0:02:17 | 0:04:26 | 0:06:27 | 257.57
FAC FAS FAR | 231 | 0:02:26 | 0:01:46 | 0:04:12 | 0:06:52 | 45.11
FC | 50 | 0:02:30 | 0:02:48 | 0:05:18 | 0:07:58 | 17.86
FR | 60 | 0:02:22 | 0:02:52 | 0:05:14 | 0:07:34 | 40.56
FB FS FTU | 73 | 0:02:31 | 0:03:30 | 0:06:01 | 0:08:41 | 27.65
MED MVCP | 448 | 0:02:03 | 0:02:07 | 0:04:10 | 0:05:55 | 161.59
MEDX | 45 | 0:02:05 | 0:02:04 | 0:04:09 | 0:06:12 | 20.11
MVC MVCE MV | 65 | 0:02:10 | 0:02:56 | 0:05:06 | 0:07:01 | 28.22
AIR, GLI, GLO, H | 23 | 0:02:23 | 0:03:28 | 0:05:51 | 0:07:58 | 7.37
Overall | 2211 | | | | | 736.12

Engine 4
2010 | Count | Avg Turnout Time | Avg Travel Time | Avg Response Time | 90th Percentile Response Time | Total Emergency Work Time (Hrs)
BLSN & SC | 392 | 0:02:13 | 0:05:30 | 0:07:43 | 0:10:51 | 157.39
MVCN | 45 | 0:02:23 | 0:03:58 | 0:06:21 | 0:09:43 | 10.14
BLS | 460 | 0:01:56 | 0:03:56 | 0:05:51 | 0:08:31 | 201.28
FAC FAS FAR | 277 | 0:02:04 | 0:02:57 | 0:05:01 | 0:08:46 | 50.32
FC | 70 | 0:02:21 | 0:04:40 | 0:07:01 | 0:11:01 | 23.54
FR | 43 | 0:02:34 | 0:04:13 | 0:06:47 | 0:09:20 | 20.49
FB FS FTU | 46 | 0:02:20 | 0:04:21 | 0:06:41 | 0:09:17 | 17.36
MED MVCP | 334 | 0:01:54 | 0:03:42 | 0:05:36 | 0:07:52 | 195.50
MEDX | 35 | 0:01:45 | 0:03:16 | 0:05:01 | 0:07:08 | 14.20
MVC MVCE MV | 53 | 0:02:17 | 0:03:52 | 0:06:09 | 0:09:16 | 16.95
AIR, GLI, GLO, H | 38 | 0:03:38 | 0:06:39 | 0:10:17 | 0:13:18 | 13.22
Overall | 1793 | | | | | 720.40

Engine 5
2010 | Count | Avg Turnout Time | Avg Travel Time | Avg Response Time | 90th Percentile Response Time | Total Emergency Work Time (Hrs)
BLSN & SC | 392 | 0:02:15 | 0:03:52 | 0:06:08 | 0:08:38 | 129.00
MVCN | 89 | 0:02:19 | 0:04:16 | 0:06:35 | 0:11:08 | 21.01
BLS | 803 | 0:02:03 | 0:02:50 | 0:04:53 | 0:07:04 | 321.54
FAC FAS FAR | 159 | 0:02:19 | 0:01:46 | 0:04:06 | 0:07:09 | 24.02
FC | 39 | 0:02:40 | 0:01:53 | 0:04:33 | 0:07:38 | 9.01
FR | 52 | 0:02:25 | 0:02:35 | 0:05:00 | 0:08:38 | 15.26
FB FS FTU | 69 | 0:02:31 | 0:03:32 | 0:06:02 | 0:09:07 | 22.63
MED MVCP | 493 | 0:02:00 | 0:02:36 | 0:04:37 | 0:06:27 | 200.73
MEDX | 48 | 0:02:01 | 0:02:02 | 0:04:03 | 0:05:44 | 20.25
MVC MVCE MV | 91 | 0:02:11 | 0:03:10 | 0:05:21 | 0:07:54 | 26.74
AIR, GLI, GLO, H | 19 | 0:02:56 | 0:02:33 | 0:05:29 | 0:10:26 | 5.78
Overall | 2254 | | | | | 790.19

Engine 6
2010 | Count | Avg Turnout Time | Avg Travel Time | Avg Response Time | 90th Percentile Response Time | Total Emergency Work Time (Hrs)
BLSN & SC | 423 | 0:02:24 | 0:04:31 | 0:06:55 | 0:10:40 | 154.11
MVCN | 128 | 0:02:26 | 0:04:26 | 0:06:52 | 0:11:37 | 27.56
BLS | 797 | 0:02:18 | 0:03:26 | 0:05:45 | 0:08:37 | 309.98
FAC FAS FAR | 343 | 0:02:22 | 0:02:48 | 0:05:10 | 0:08:15 | 69.97
FC | 58 | 0:02:33 | 0:03:34 | 0:06:07 | 0:07:55 | 23.57
FR | 15 | 0:02:23 | 0:04:11 | 0:06:34 | 0:09:30 | 5.05
FB FS FTU | 110 | 0:02:21 | 0:03:32 | 0:05:52 | 0:08:09 | 31.30
MED MVCP | 496 | 0:02:17 | 0:02:54 | 0:05:11 | 0:07:19 | 210.52
MEDX | 71 | 0:02:18 | 0:03:04 | 0:05:23 | 0:08:12 | 34.21
MVC MVCE MV | 152 | 0:02:25 | 0:03:10 | 0:05:35 | 0:08:20 | 46.17
AIR, GLI, GLO, H | 27 | 0:02:50 | 0:03:58 | 0:06:48 | 0:11:38 | 17.22
Overall | 2620 | | | | | 929.68

Engine 7
2010 | Count | Avg Turnout Time | Avg Travel Time | Avg Response Time | 90th Percentile Response Time | Total Emergency Work Time (Hrs)
BLSN & SC | 361 | 0:02:19 | 0:04:20 | 0:06:39 | 0:09:16 | 140.23
MVCN | 121 | 0:02:33 | 0:04:54 | 0:07:27 | 0:12:21 | 28.71
BLS | 787 | 0:01:59 | 0:03:13 | 0:05:13 | 0:07:17 | 326.92
FAC FAS FAR | 189 | 0:02:16 | 0:02:38 | 0:04:55 | 0:07:51 | 32.25
FC | 50 | 0:02:34 | 0:04:33 | 0:07:08 | 0:10:45 | 15.33
FR | 12 | 0:02:31 | 0:03:58 | 0:06:29 | 0:07:50 | 4.35
FB FS FTU | 74 | 0:02:16 | 0:03:23 | 0:05:39 | 0:07:23 | 21.11
MED MVCP | 461 | 0:01:58 | 0:03:03 | 0:05:02 | 0:06:55 | 213.13
MEDX | 40 | 0:02:01 | 0:03:43 | 0:05:45 | 0:07:30 | 18.00
MVC MVCE MV | 100 | 0:02:21 | 0:03:58 | 0:06:20 | 0:09:47 | 34.41
AIR, GLI, GLO, H | 14 | 0:02:37 | 0:03:53 | 0:06:30 | 0:08:44 | 8.09
Overall | 2209 | | | | | 842.54

Ladder 1
2010 | Count | Avg Turnout Time | Avg Travel Time | Avg Response Time | 90th Percentile Response Time | Total Emergency Work Time (Hrs)
BLSN & SC | 691 | 0:02:19 | 0:03:42 | 0:06:01 | 0:08:29 | 199.54
MVCN | 32 | 0:02:58 | 0:02:28 | 0:05:26 | 0:11:25 | 9.42
BLS | 313 | 0:01:58 | 0:01:47 | 0:03:45 | 0:06:01 | 91.10
FAC FAS FAR | 350 | 0:02:24 | 0:02:07 | 0:04:32 | 0:07:27 | 61.63
FC | 91 | 0:02:42 | 0:04:40 | 0:07:22 | 0:10:21 | 31.37
FR | 86 | 0:02:28 | 0:03:15 | 0:05:43 | 0:08:36 | 45.10
FB FS FTU | 19 | 0:02:43 | 0:02:41 | 0:05:24 | 0:08:05 | 6.39
MED MVCP | 186 | 0:02:07 | 0:01:24 | 0:03:31 | 0:06:31 | 73.68
MEDX | 53 | 0:01:54 | 0:01:55 | 0:03:49 | 0:05:36 | 16.21
MVC MVCE MV | 48 | 0:02:10 | 0:02:37 | 0:04:47 | 0:07:17 | 21.67
AIR, GLI, GLO, H | 41 | 0:02:37 | 0:03:59 | 0:06:36 | 0:09:33 | 9.15
Overall | 1910 | | | | | 565.26

Ladder 5 (Cross staffed with Engine 5)
2010 | Count | Avg Turnout Time | Avg Travel Time | Avg Response Time | 90th Percentile Response Time | Total Emergency Work Time (Hrs)
BLSN & SC | 34 | 0:02:56 | 0:05:06 | 0:08:02 | 0:15:18 | 9.77
MVCN | 3 | | | | | 0.37
BLS | 31 | 0:01:46 | 0:02:06 | 0:03:52 | 0:07:08 | 6.17
FAC FAS FAR | 84 | 0:02:45 | 0:02:33 | 0:05:18 | 0:09:02 | 15.00
FC | 62 | 0:02:33 | 0:02:48 | 0:05:21 | 0:09:26 | 17.85
FR | 17 | 0:02:42 | 0:03:04 | 0:05:45 | 0:09:11 | 6.58
FB FS FTU | 3 | | | | | 0.26
MED MVCP | 30 | 0:02:07 | 0:00:56 | 0:03:03 | 0:07:17 | 5.36
MEDX | 3 | | | | | 0.25
MVC MVCE MV | 15 | 0:02:16 | 0:02:19 | 0:04:35 | 0:07:35 | 4.82
AIR, GLI, GLO, H | 18 | 0:02:49 | 0:04:26 | 0:07:16 | 0:10:43 | 6.44
Overall | 300 | | | | | 72.85

Medic 1
2010 | Count | Avg Turnout Time | Avg Travel Time | Avg Response Time | 90th Percentile Response Time | Total Emergency Work Time (Hrs)
BLSN & SC | 37 | 0:01:41 | 0:03:31 | 0:05:12 | 0:07:19 | 14.04
MVCN | 4 | | | | | 1.34
BLS | 197 | 0:01:41 | 0:02:44 | 0:04:25 | 0:07:45 | 102.22
FAC FAS FAR | 1 | | | | | 0.86
FC | 58 | 0:02:32 | 0:03:05 | 0:05:38 | 0:09:01 | 18.83
FR | 68 | 0:02:28 | 0:03:05 | 0:05:33 | 0:08:12 | 25.73
FB FS FTU | 4 | | | | | 0.46
MED MVCP | 1959 | 0:01:52 | 0:03:12 | 0:05:04 | 0:07:58 | 1090.97
MEDX | 148 | 0:01:36 | 0:03:08 | 0:04:44 | 0:06:42 | 110.07
MVC MVCE MV | 71 | 0:01:49 | 0:03:37 | 0:05:26 | 0:08:37 | 43.73
AIR, GLI, GLO, H | 33 | 0:02:08 | 0:03:35 | 0:05:43 | 0:07:49 | 20.88
Overall | 2580 | | | | | 1429.12

Medic 5
2010 | Count | Avg Turnout Time | Avg Travel Time | Avg Response Time | 90th Percentile Response Time | Total Emergency Work Time (Hrs)
BLSN & SC | 69 | 0:01:56 | 0:04:33 | 0:06:29 | 0:08:56 | 25.99
MVCN | 5 | | | | | 1.31
BLS | 185 | 0:01:54 | 0:03:23 | 0:05:18 | 0:08:42 | 94.83
FAC FAS FAR | 1 | | | | | 1.56
FC | 53 | 0:02:35 | 0:03:40 | 0:06:15 | 0:09:03 | 16.38
FR | 49 | 0:02:39 | 0:04:13 | 0:06:52 | 0:10:26 | 20.40
FB FS FTU | 3 | | | | | 0.26
MED MVCP | 1338 | 0:01:52 | 0:04:11 | 0:06:03 | 0:09:23 | 788.57
MEDX | 87 | 0:01:43 | 0:03:49 | 0:05:32 | 0:08:57 | 66.58
MVC MVCE MV | 47 | 0:02:00 | 0:03:54 | 0:05:53 | 0:08:41 | 22.42
AIR, GLI, GLO, H | 25 | 0:02:39 | 0:07:07 | 0:09:46 | 0:10:05 | 10.80
Overall | 1862 | | | | | 1049.10

Medic 6
2010 | Count | Avg Turnout Time | Avg Travel Time | Avg Response Time | 90th Percentile Response Time | Total Emergency Work Time (Hrs)
BLSN & SC | 52 | 0:02:06 | 0:04:38 | 0:06:44 | 0:09:21 | 27.89
MVCN | 3 | | | | | 0.59
BLS | 205 | 0:01:57 | 0:03:30 | 0:05:27 | 0:08:35 | 115.58
FAC FAS FAR | 0 | | | | | 0.00
FC | 66 | 0:02:54 | 0:03:22 | 0:06:16 | 0:08:23 | 19.77
FR | 21 | 0:02:32 | 0:03:31 | 0:06:03 | 0:08:32 | 5.29
FB FS FTU | 4 | | | | | 0.57
MED MVCP | 1638 | 0:01:55 | 0:04:23 | 0:06:19 | 0:09:20 | 1076.11
MEDX | 105 | 0:01:58 | 0:03:57 | 0:05:55 | 0:08:37 | 90.12
MVC MVCE MV | 33 | 0:01:53 | 0:04:05 | 0:05:58 | 0:09:06 | 14.48
AIR, GLI, GLO, H | 24 | 0:02:28 | 0:05:47 | 0:08:15 | 0:10:16 | 9.15
Overall | 2151 | | | | | 1359.54

Aid 2
2010 | Count | Avg Turnout Time | Avg Travel Time | Avg Response Time | 90th Percentile Response Time | Total Emergency Work Time (Hrs)
BLSN & SC | 172 | 0:01:49 | 0:03:38 | 0:05:27 | 0:10:23 | 76.04
MVCN | 18 | 0:01:29 | 0:04:57 | 0:06:26 | 0:14:13 | 9.60
BLS | 1196 | 0:01:52 | 0:02:54 | 0:04:46 | 0:07:03 | 643.49
FAC FAS FAR | 0 | | | | | 0.00
FC | 42 | 0:02:33 | 0:03:52 | 0:06:25 | 0:10:26 | 17.85
FR | 42 | 0:02:21 | 0:02:17 | 0:04:38 | 0:06:55 | 23.51
FB FS FTU | 0 | | | | | 0.00
MED MVCP | 520 | 0:01:47 | 0:02:26 | 0:04:13 | 0:05:58 | 284.60
MEDX | 40 | 0:01:33 | 0:02:29 | 0:04:02 | 0:07:10 | 19.42
MVC MVCE MV | 19 | 0:01:25 | 0:04:33 | 0:05:58 | 0:09:55 | 17.41
AIR, GLI, GLO, H | 29 | 0:02:15 | 0:03:23 | 0:05:38 | 0:08:05 | 15.75
Overall | 2078 | | | | | 1107.65

Aid 6
2010 | Count | Avg Turnout Time | Avg Travel Time | Avg Response Time | 90th Percentile Response Time | Total Emergency Work Time (Hrs)
BLSN & SC | 239 | 0:02:06 | 0:04:31 | 0:06:38 | 0:09:59 | 124.22
MVCN | 9 | | | | | 4.79
BLS | 599 | 0:02:03 | 0:03:08 | 0:05:11 | 0:08:00 | 351.42
FAC FAS FAR | 1 | | | | | 0.29
FC | 28 | 0:03:06 | 0:03:29 | 0:06:35 | 0:09:49 | 6.63
FR | 10 | 0:02:58 | 0:03:56 | 0:06:54 | 0:08:48 | 3.52
FB FS FTU | 0 | | | | | 0.00
MED MVCP | 362 | 0:02:02 | 0:02:56 | 0:04:58 | 0:07:14 | 194.42
MEDX | 38 | 0:02:00 | 0:03:26 | 0:05:26 | | 20.96
MVC MVCE MV | 8 | | | | | 3.47
AIR, GLI, GLO, H | 8 | | | | | 2.47
Overall | 1302 | | | | | 712.18

Battalion 1
2010 | Count | Avg Turnout Time | Avg Travel Time | Avg Response Time | 90th Percentile Response Time | Total Emergency Work Time (Hrs)
BLSN & SC | 1 | | | | | 0.40
MVCN | 1 | | | | | 0.82
BLS | 2 | | | | | 0.39
FAC FAS FAR | 2 | | | | | 0.10
FC | 112 | 0:02:16 | 0:04:21 | 0:06:37 | 0:10:21 | 38.61
FR | 90 | 0:01:54 | 0:04:00 | 0:05:54 | 0:08:45 | 39.13
FB FS FTU | 8 | | | | | 1.25
MED MVCP | 2 | | | | | 0.38
MEDX | 1 | | | | | 0.27
MVC MVCE MV | 25 | 0:02:10 | 0:04:20 | 0:06:30 | 0:11:11 | 8.89
AIR, GLI, GLO, H | 39 | 0:02:44 | 0:04:28 | 0:07:12 | 0:10:25 | 14.86
Overall | 283 | | | | | 105.10


Which, if any, fire demand zones receive service below national standards? There are

439 unique fire demand zones in the City of Everett. The unique demand zone number, total

2010 requests for lights and sirens EMS service, and the average response time to that demand

zone by the first arriving BLS unit are listed in Appendix B. The summary of that data is

depicted in Figure 1. Using the NFPA 1710 and CFAI turnout time standard of 60 seconds, and

the NFPA 1710 travel time standard for first arriving unit to EMS and fire responses of 240

additional seconds, those fire demand zones whose average response time was greater than 300

total seconds are depicted in Figure 2.

Figure 1. 2010 Average response times to fire demand zones by BLS units for emergent

EMS alarms.


Figure 2. 2010 fire demand zones with average first arrival times over and under

the five minute NFPA 1710 response time standard.

What are weaknesses in emergency response data collection practices? The first

weakness found while assembling Everett response data was the manual input errors made by users

entering the date and time they cleared an alarm, requiring inferred correction when cleaning the

data.


CAD dispatches to residential (FR) and commercial (FC) fires are often false alarms, or

are small fires handled with only one or two first arriving fire suppression companies. Post-

incident FDM reporting does not require the fire officer to report whether an incident that was

dispatched as FR or FC was in fact a working fire. So while a structural fire may have been

coded FR or FC with good intent by the dispatcher and the 911 caller, the actual service provided

may not have required assembly of an entire effective response force. It proved very difficult to

determine which incidents were actual working fires based on FDM data without reading the

narrative description.

EFD data collection practices also do not address CFAI response reliability. While unit

emergency work time is recorded, no effort is made to measure the ability of the

department to handle subsequent alarms. There is no record of residual capability during busy

periods; this information would have to be re-created from unit response histories.

Disparate definitions surrounding emergency response data are also a significant weakness. No clear set of standards exists at the SNOPAC dispatch center defining the events

that lead up to the time stamped as “dispatched”. While SNOPAC advised they simultaneously

make electronic notification to the responders and record that notification time as the

“dispatched” data point, (Dowd, 2011) on numerous occasions the recorded elapsed time

between the time stamped as “dispatched” and the time a unit reports it is responding via MDC

may be significantly greater than the actual elapsed time as observed by the responding crews.

(Keller, 2011)

Additionally, department policy does not clearly define the critical time markers during

emergency response, nor how to record them consistently. Nowhere in Everett Fire Department

Standard Operating Procedures (SOP) (Everett Fire Department, 2011) chapters 1.04 Alarms and


Response Levels, 1.05 Dispatching, 1.08 Radio Communication, or 1.08B Mobile Data

Computers is the definition of the status change from “dispatched” to “en-route” defined. SOP

1.08 B 1.08.04B.1.c. does require responders to use their MDC to change their status but it is still

commonplace to rely on the dispatcher to change unit status by reporting over the radio,

subjecting time data entry to delays from dispatcher task saturation or human error.

How do Everett results compare to nationally recognized standards? Results for first due companies using lights and sirens were addressed in research question #3 and shown in Table 4. Table 5 is derived from the same dataset but shows department-wide 90th percentile performance.

Table 5.

Everett Fire Department – 2010 Department-Wide Response Performance

Alarm Type                          90th Percentile        90th Percentile       90th Percentile
                                    Turnout Time (Secs)    Travel Time (Secs)    Response Time (Secs)
BLSN & SC                           196                    437                   563
MVCN                                198                    585                   711
BLS                                 184                    321                   445
FAC FAS FAR                         193                    366                   468
FC                                  203                    480                   604
FR                                  191                    391                   510
FB FS FTU                           185                    397                   549
MED MVCP                            182                    282                   410
MEDX                                181                    313                   432
MVC MVCE MVCM                       190                    455                   571
AIR, GLI, GLO, HZ, TR, All Other    232                    571                   701
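The following minimal sketch shows how a Table 5-style summary could be produced from the same cleaned dataset; the file and column names are assumptions for illustration, not the department's actual schema.

```python
# Minimal sketch: 90th percentile turnout, travel, and response time by alarm type,
# assuming one row per unit response with hypothetical column names.
import pandas as pd

responses = pd.read_csv("2010_first_due_responses.csv")   # hypothetical cleaned CAD export

summary = (responses
           .groupby("alarm_type")[["turnout_seconds", "travel_seconds", "response_seconds"]]
           .quantile(0.90)                                 # 90th percentile per alarm type
           .round(0))

print(summary)
```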

For assembly of an ERF, NFPA 1710 and CFAI recommend an 80-second turnout time and a 480-second travel time (560 seconds total) 90% of the time. In 2010 there were thirty-eight residential fires and nineteen commercial fires that assembled a complete ERF meeting the procedural criteria. The summary of performance in assembling an effective response force is contained in Table 6. Detailed ERF assembly data can be found in Appendix C. The Everett fire demand zones where an ERF assembled in 560 seconds or less in 2010 are depicted in Figure 3.

Table 6.

2010 Assembly of an Effective Response Force

                      2010 Qualifying Incidents    Average Assembly Time    90th Percentile Assembly Time
Commercial Fires      19                           00:11:21                 00:13:47
Residential Fires     38                           00:08:49                 00:10:55

Figure 3. 2010 fire demand zones where an ERF assembled in 560 seconds or less.
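The ERF assembly times summarized in Table 6 represent the span from the first dispatch to the arrival of the last unit in the initial assignment. The sketch below illustrates that calculation under assumed file and column names; it is not the Excel procedure actually used for this ARP.

```python
# Minimal sketch: estimate ERF assembly time per incident as the elapsed time from the
# first dispatch to the arrival of the last unit in the initial full alarm assignment.
# Column names are assumptions for illustration.
import pandas as pd

units = pd.read_csv("2010_structure_fire_unit_times.csv",
                    parse_dates=["dispatched", "arrived"])   # hypothetical export

def assembly_time(group: pd.DataFrame) -> pd.Timedelta:
    """Elapsed time from first dispatch to last qualifying arrival for one incident."""
    return group["arrived"].max() - group["dispatched"].min()

erf = (units.groupby("incident_id")
            .apply(assembly_time)
            .rename("erf_assembly"))

print(erf.describe())        # count, average, min, max ...
print(erf.quantile(0.90))    # 90th percentile assembly time
```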

Discussion

The Everett Fire Department has difficulty meeting NFPA or CFAI response time standards in large portions of the city. Perhaps the finding of greatest interest in the data is turnout time. Everett’s turnout times are significantly slower than the standards. Numerous authors have found similar department-wide difficulty meeting NFPA 1710 turnout times (Dell'Orfano, 2010; Weninger, 2004; Kitterman, 2006; Mueller, 2010; Soptich, 2005).

Numerous possible causes for noncompliant turnout times have been identified and studied. Dell'Orfano (2010) and Soptich (2005) both found little correlation between station size and turnout time: large and small stations were equally capable of producing compliant and noncompliant times. Dell'Orfano (2010) found that responding firefighters typically perceived their turnout times as compliant, but their perceptions did not match reality.

While various authors all reported difficulty meeting the NFPA 1710 standard for turnout time, none were as far from the mark as Everett. Kitterman (2006) reported an average turnout time for all alarm types of 75 seconds. Soptich (2005) reported an average daytime turnout of 109 seconds and an average nighttime turnout of 146 seconds. Dell'Orfano (2010) found a department-wide average for all alarms of 81 seconds and a 90th percentile turnout performance of 129 seconds for suburban areas. Everett data show average turnout times ranging from 90 seconds to more than 150 seconds (Table 4) and 90th percentile turnout times by alarm type ranging from 181 seconds to 232 seconds (Table 5).

Since travel time cannot be reduced by the fire department without adopting riskier driving behavior (Dell'Orfano, 2010), and increasing fire department resource concentration is currently economically infeasible, improving response time performance in Everett must focus on the two remaining components of emergency response time: call processing and turnout time. Before assigning responsibility for poor turnout times to crew performance, the issue of inconsistent data provided by the dispatch center must be considered.

Several Everett fire officers have reported inconsistencies in turnout times as recorded by SNOPAC (Keller, 2011; Hausman, 2011). In several cases fire officers report that, after receiving an alarm while already driving the apparatus, they used the MDC to report they were en-route immediately after the alarm arrived at the MDC. The MDC visually acknowledges CAD receipt of all successful outbound transmissions, so in these cases the turnout time should be a matter of seconds. In many of these cases, however, the recorded turnout time was still in excess of 30 seconds, leading officers to believe the time stamped by SNOPAC as “dispatched” may not be the time the alarm actually departs the communications center for electronic notification of crews.

Dell’Orfano (2010) found that crew attitude was the biggest predictor of turnout

performance. Crews who had received clear direction on department expectations of turnout

time and had regular performance reports were most likely to maintain the best possible turnout

performance. Everett has not adopted turnout or response time performance objectives, despite

the recommendations of NFPA 1710 and the lawful reporting requirements of RCW 35.103. The

department does not measure crew response performance and therefore cannot provide crews

with feedback on their performance. Without this feedback it is impossible to provide crews

direction on improving turnout performance.

The potential for improvement to turnout times must be evaluated against the possible

benefits to the community. Based on the turnout times reported by Soptich (2005), Mueller (2010), and Kitterman (2006), a universal 30-second overall improvement in turnout time in Everett is

plausible. Those areas which were not NFPA 1710 compliant for first arriving companies in

2010 but would become compliant as a result of a 30-second reduction in turnout time are

depicted in Figure 4.

Figure 4. The 2010 demand zones which were potentially NFPA 1710 compliant for first

due units with a 30-second reduction in turnout time.

The department should not be concerned about setting standards that it is unable to achieve. None of the departments reviewed by this ARP were meeting NFPA or CFAI standards. Dell'Orfano (2009) recommended setting standards that were achievable based on historical performance, with emphasis on regular performance measurement and reporting to the crews. However, the Seattle Fire Department has set its performance standards to NFPA 1710 and was comfortable reporting 31% turnout time compliance and 84% travel time compliance in its 2009 annual report (City of Seattle, 2010). Eastside Fire and Rescue and Tacoma Fire have also adopted standards they have yet to achieve (Mueller, 2010; Soptich, 2005).

The ability of the department to produce an effective response force within a reasonable

period of time is a central part of the argument surrounding brownouts. The data presented in

Table 6 clearly shows the department is not meeting NFPA standards for delivery of an “initial

full alarm assignment” (NFPA, 2010, p. 9) within the 560 total seconds allowed for turnout and

travel time. However, NFPA 1710 does not specify the delivery of seventeen firefighters to a

typical residential fire. In fact, 1710 lists a series of roles that must be filled which total fifteen

firefighters for an initial effective response force. (NFPA, 2010) Department policy currently

exceeds the requirements described by 1710 for residential fires. 1710 does not define additional

personnel or roles to be filled for commercial fires, stating instead that in commercial fires

departments “shall deploy additional resources on the initial alarm”. (NFPA, 2010, p. 9) Everett

SOP requiring an additional six firefighters for commercial fires clearly meets that intent.

The Everett Fire Department has not undertaken the task of community-wide risk assessment as recommended by CFAI. CFAI allows varying descriptions of an effective response force based on the pre-planned risk associated with a given structure or the predetermined fire flows required in various circumstances (CFAI, 2009). When discussing the size of an effective response force, CFAI references the accredited fire department in the City of Fresno, CA, which delivers sixteen to twenty-two firefighters in an initial effective response force based on previously completed risk assessments (Commission on Fire Accreditation International, 2009).

When comparing Everett initial response forces of seventeen to twenty-three firefighters

to NFPA 1710 or CFAI cited examples, Everett appears to be slightly beyond recommendations.

However, with minimum staffing of two firefighters on aid units and three firefighters on

suppression units, the removal of even one unit from the initial response force would

immediately take the department to the low side of recommendations. Based on Table 6 results,

it seems Everett’s concentration of resources (Commission on Fire Accreditation International,

2009) makes compliant assembly of an effective response force possible, on average, for

residential fires but not for commercial fires. The department approaches the 90th percentile

performance standard for residential fires, but not for commercial fires. Figure 3 shows the most

likely locations for 1710 compliant ERF assemblies are central locations, where units are

responding inward toward the city core.

Response times determine the effective distribution of resources. Evaluation of the distribution of resources (Commission on Fire Accreditation International, 2009) can be done by considering Table 5. Since travel time is a major component of overall response time, and “response time will subsequently drive resource distribution” (Commission on Fire Accreditation International, 2009, p. 47), the 90th percentile travel times described in Table 5 suggest the department's resources are stretched across too large an area.

Response reliability (Commission on Fire Accreditation International, 2009) is not measured by the department. The ability of the department to answer additional alarms during busy periods is perhaps the best measure of adequate daily staffing and would provide the best measure of the effects of brownouts when presenting to political leadership. The reserve capacity of the department could be calculated from the annual response information, but doing so is beyond the scope of simple Excel calculations. Calculating response reliability would require advanced programming skills, which could be provided by the Everett IT department (Tinney, 2011). Understanding how many minutes of each day the department is unable to produce an effective response force due to typical alarm volume would be the truest measure of response reliability and would allow for better planning of daily staffing or potential staffing changes.
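As a rough illustration of the reliability calculation described above, the following sketch counts, minute by minute, how many units are committed and how many minutes per day fall below an assumed reserve threshold. The unit counts, thresholds, file name, and column names are all assumptions for illustration, not Everett data or the department's method.

```python
# Minimal sketch: estimate how many minutes per day the system is too committed to
# assemble an effective response force. Assumes a unit activity log with hypothetical
# columns "unit", "dispatched", "cleared".
import pandas as pd

ERF_UNITS_REQUIRED = 6        # assumed number of uncommitted units needed for a full first alarm
TOTAL_UNITS = 10              # assumed daily staffed unit count

log = pd.read_csv("2010_unit_activity.csv", parse_dates=["dispatched", "cleared"])

# +1 committed unit at each dispatch, -1 at each clear, cumulated over a minute-level timeline
events = pd.concat([
    pd.Series(1, index=log["dispatched"]),
    pd.Series(-1, index=log["cleared"]),
]).sort_index()

committed = events.cumsum().resample("1min").max().ffill()
available = TOTAL_UNITS - committed

# Minutes per calendar day with insufficient reserve capacity for another full alarm
minutes_short = (available < ERF_UNITS_REQUIRED).groupby(available.index.date).sum()
print(minutes_short.describe())
```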

The lack of a community-wide risk assessment (Commission on Fire Accreditation

International, 2009) for all hazards hinders the department’s ability to plan. CFAI’s varying size

of an effective response force based on assessed risk could benefit the Everett Fire Department by allowing a reduction in assigned manpower when reasonable and by exposing the need for additional initial effective response resources at high-risk facilities.

Recommendations

The Everett Fire Department should adhere to Washington State law and adopt NFPA 1710-compliant performance standards for the following nationally recognized events: turnout time, travel time, and assembly of an effective response force. State law requires reporting only, with no penalties assessed should a department fail to meet its own standards. The intent of the law is to compel substantially career departments to build the processes that allow for performance evaluation and reporting (Washington State, RCW 35.103). Meaningful discussions about how and when to make changes that affect performance cannot occur until a department has a comprehensive understanding of its historical performance and where performance has been trending.

The Everett Fire Department should initiate a study of the time recording practices at the communications center. It would be unfair to ask for better performance from fire service members if the recording methods are inaccurate. No meaningful improvement to response time can occur while the methods behind data collection are in question. With the imminent arrival of a new suite of dispatching software at SNOPAC, there could be no better time to create clear definitions of these recordable events and to define the practices that assure they are recorded accurately and consistently. An ongoing quality assurance program, including random, blind, system-wide proofing tests, must be included.

The department should undertake an effort to improve turnout times and should regularly report turnout performance to the members. Reports could be formatted to foster a sense of competition between stations and platoons, improving turnout times without a heavy hand. The value of bragging rights in the fire service should not be underestimated.

The department should publish an annual report of its performance findings. An annual report would include the data required by the State of Washington as well as local information on services provided to the community. Annual reports provided to city administrators and political leadership would allow them to draw some of their own conclusions about fire department performance and perhaps bring questions to the Fire Chief’s office that might otherwise go unnoticed. Transparency about performance is essential in the information age.

The Everett Fire Department should automate the process of regular performance

reporting. The City employs an extremely effective Information Technology department, which

could be tasked with the creation of scripts that routinely query CAD data and provide weekly,

monthly or quarterly updates on performance.
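A scheduled report of this kind could be as simple as the following sketch, which queries a read-only mirror of CAD data and writes a weekly 90th percentile summary. The database, table, and column names are hypothetical placeholders for whatever schema the IT department exposes, not the actual CAD system.

```python
# Minimal sketch of a scheduled performance report (assumed table and column names;
# the real CAD schema would differ).
import sqlite3
import pandas as pd

QUERY = """
SELECT alarm_type,
       turnout_seconds,
       travel_seconds,
       turnout_seconds + travel_seconds AS response_seconds
FROM cad_responses
WHERE dispatched >= date('now', '-7 days')
"""

with sqlite3.connect("cad_mirror.db") as conn:              # hypothetical read-only CAD mirror
    weekly = pd.read_sql_query(QUERY, conn)

report = (weekly.groupby("alarm_type")
                .quantile(0.90)                              # 90th percentile of each measure
                .rename(columns=lambda c: f"p90_{c}"))
report.to_csv("weekly_performance_report.csv")               # distribute to crews and command staff
```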

Everett Fire Department policies should be amended to clearly define the meaning behind

each recordable event. As with SNOPAC, the department needs clear definitions of each

performance event and the processes by which those events are recorded in CAD. While there

may be some tactical value to using voice transmissions to announce responding and arrival

times, the potential for human error or task saturation delays at the dispatch center makes MDC

the preferred method for marking these events accurately. Training should be centered on

assuring all officers are marking events in the same way, using the same standards and

definitions. For example, policy should define the “enroute” time to be the moment all members

are seated and belted, and the wheels begin turning toward the incident. Currently some officers

press the “enroute” button on the MDC when they arrive in the cab, but before actually

departing. Officers should also be trained to understand the two-way nature of MDC

communications so they know their MDC transmission has been received.
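Once each event is defined and recorded consistently, every performance interval reduces to simple timestamp arithmetic, as the brief illustration below shows; the timestamps are invented examples.

```python
# Illustrative only: with clearly defined CAD event timestamps, each performance
# interval falls out of simple subtraction.
from datetime import datetime

dispatched = datetime(2010, 6, 1, 14, 2, 5)    # alarm leaves the communications center
enroute    = datetime(2010, 6, 1, 14, 3, 20)   # wheels turning, marked via MDC
arrived    = datetime(2010, 6, 1, 14, 7, 45)   # on scene, marked via MDC

turnout_time  = enroute - dispatched            # 75 seconds
travel_time   = arrived - enroute               # 265 seconds
response_time = arrived - dispatched            # 340 seconds
print(turnout_time, travel_time, response_time)
```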

The Everett Fire Department should perform a community-wide all-hazard risk assessment based on occupancy, construction, and special hazards. This assessment should occur parcel by parcel and be recorded in the fire service layers of the City GIS database. Varying risk assignments will allow for tailor-made effective response forces that may improve department

performance in assembly of an ERF. Certainly it would allow for more accurate dispatching of

needed units at higher risk parcels as well as inform responding officers of the predetermined

risk while responding. It would also move the department closer to compliance with

accreditation standards set by CFAI.
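Conceptually, a completed risk assessment lets the dispatch recommendation become a simple lookup from parcel risk to a pre-planned ERF size, as sketched below; the categories and firefighter counts shown are illustrative examples, not adopted Everett policy.

```python
# Illustrative sketch only: map an assumed parcel risk category (as it might be stored
# in a GIS fire service layer) to a pre-planned effective response force size.
from dataclasses import dataclass

ERF_BY_RISK = {          # firefighters assigned on the initial alarm (example values only)
    "low": 15,
    "moderate": 17,
    "high": 23,
}

@dataclass
class Parcel:
    parcel_id: str
    occupancy: str
    risk_category: str    # set during the community-wide risk assessment

def initial_erf(parcel: Parcel) -> int:
    """Return the pre-planned initial ERF size for a parcel, defaulting to 'moderate'."""
    return ERF_BY_RISK.get(parcel.risk_category, ERF_BY_RISK["moderate"])

print(initial_erf(Parcel("00123", "warehouse", "high")))    # -> 23
```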

Finally, the department should undertake an effort to measure response reliability. Cooperation with the IT department should allow the creation of automated monitoring of reliability based on reserve resource capacity. The best measure of system-wide capability is its capacity to handle another alarm. By knowing its current reliability, the department can make informed recommendations to City leadership about staffing as reliability changes over time. Resource decisions may be inherently political, but without factual data about performance it will be impossible to impress the needs of the fire department upon city leadership.

Bibliography

American Heart Association. (2010, December 15). Heart and Stroke Statistics - 2011 Update.

Retrieved April 3, 2011 from American Heart Association:

http://circ.ahajournals.org/content/123/4/e18.full.pdf

ASTM. (2011). Standard Test Methods for Fire Tests of Building Construction and Materials.

Boston, MA: ASTM.

Brannigan, F. (2004, April 1). Monster House. Fire Engineering. Retrieved May 30, 2011

from http://www.fireengineering.com/index/articles/display/205713/articles/fire-

engineering/volume-157/issue-4/departments/the-ol-professor/monster-houses-

update.html

City of Everett. (2011, March 10). PERC Filing No. 23744-U-11-6055. Everett, WA.

City of Seattle. (2010). 2009 Emergency Response Report. Retrieved June 22, 2011 from

http://www.seattle.gov/fire/deptInfo/annualreport/SFD%20Emergency%

20Response%20Report%202009.pdf

Coleman, R. J. (2011, May 1). In Budget Negotiations There is No Right or Wrong. Fire Chief.

Retrieved June 30, 2011 from http://firechief.com/management-

administration/fire-department-budget-negotiations-201105/index.html

Commission on Fire Accreditation International. (2009). Fire & Emergency Service Self-

Assessment Manual, 8th Edition. Chantilly, VA: Center for Public Safety Excellence.

Commission on Fire Accreditation International. (2008). Standards of Cover, 5th Edition.

Chantilly, VA: Center for Public Safety Excellence.

Dell'Orfano, M. (2009). Turnout Time Analysis for South Metro Fire Rescue Authority.

Emmitsburg, MD: National Fire Academy.

Dowd, K. (2011, June 1). SNOPAC Supervisor. (D. DeMarco, Interviewer)

Everett Fire Department. (2011). Standard Operating Procedures. Everett, WA: City of Everett.

Hausman, M. (2011, June 2). Everett Fire Captain. (D. DeMarco, Interviewer)

Ignall, E., Rider, K., & Urbach, R. (1978). Fire Severity and Response Distance: Initial

Findings. New York, NY: The New York City Rand Institute.

International Association of Firefighters Local #46. (2011, February 26). Complaint Charging

Unfair Labor Practices. Everett, WA.

ISO. (n.d.). ISO. Retrieved June 3, 2011 from http://www.isomitigation.com/

ppc/2000/ppc2001.html

Keller, M. (2011, March 16). Everett Fire Captain. (D. DeMarco, Interviewer)

King, R. (2011, May 31). Marysville Fire Department forced to cut back. The Herald.

Retrieved May 30, 2011 from http://heraldnet.com/article/20110531

/NEWS01/705319934

Kitterman, D. (2006). The importance of efficient turnout times. Emmitsburg, MD:

National Fire Academy.

Kuntz, T. (2009, October 20). Why the Housing Bubble Was Local, Not National. The New York

Times. New York, NY. Retrieved July 1, 2011 from http://ideas.blogs.nytimes.com

/2009/10/20/a-local-not-national-housing-bubble/.

Masimo. (n.d.). The Silent Killer. Retrieved May 16, 2011 from

http://www.thesilentkiller.net/index.html

Mueller, F. (2010). Tacoma Fire Department turnout time analysis. Emmitsburg, MD: National

Fire Academy.

National Fire Protection Association. (2010). NFPA 1710: Standard for the organization and

deployment of fire suppression operations, emergency medical operations, and special

operations to the public by career fire departments (2010 ed.). Quincy, MA.

National Institute of Standards and Technology. (2010, April). Report on Residential Fireground

Field Experiments. Retrieved May 16, 2011, from http://www.nist.gov/customcf

/get_pdf.cfm?pub_id=904607

Raha, A. (2011, April 14). Economic Outlook. Presentation to the Downtown Tacoma Rotary

Club. Tacoma, WA: Washington State Economic and Revenue Forecast Council.

Rampell, C. (2009, January 7). Crisis Comparisons: How Bad Might It Get? The New York

Times. New York, NY. Retrieved June 12, 2011 from http://economix.blogs.nytimes.com

/2009/01/07/crisis-comparisons-how-low-can-we-go/

Roberts, M. R. (2010, September 1). Financial SOS. Fire Chief. Retrieved May 16, 2011, from

Fire Chief: http://firechief.com/leadership/management-administration/survive-budget-

crisis-201009/index1.html

Robinson, R. (2011, February 16). Everett Fire Marshal. (D. DeMarco, Interviewer)

Smith, D. (2011, May 11). Everett projects a $10 million gap in next year's budget. The Herald.

Retrieved May 22, 2011, from http://heraldnet.com/

article/20110511/BLOG40/110519957/-1/news01

SNOPAC. (n.d.). About Us. Retrieved May 28, 2011, from www.snopac.snohomish.

wa.us/about.htm

Snohomish County. (2011). Assessor's Office. Retrieved July 10, 2011 from

http://assessor.snoco.org/divisions/levy.aspx

Soptich, L. (2005). A qualitative look at turnout times in emergency responses. Emmitsburg,

MD: National Fire Academy.

Tinney, J. (2011, May 2). Everett IT Technician. (D. DeMarco, Interviewer)

United States Fire Administration. (n.d.). America's Fire and Emergency Services Leader;

Strategic Plan Fiscal years 2010-2014. Retrieved July 1, 2011 from

http://www.usfa.dhs.gov/downloads/pdf/strategic_plan.pdf

Washington State. (n.d.). Chapter 35.103 RCW Fire Departments - Performance Measures.

Retrieved February 22, 2011 from http://apps.leg.wa.gov/rcw/default.aspx?cite=35.103

Washington State. (n.d.). Chapter 41.56 RCW Public employees' collective bargaining.

Retrieved February 22, 2011 from http://apps.leg.wa.gov/RCW/default.aspx?cite=41.56

Weninger, S. (2004). An evaluation of response turnout times. Emmitsburg, MD: National Fire

Academy.

Wikipedia. (n.d.). Brownout. Retrieved from Wikipedia: http://en.wikipedia.org

/wiki/Brownout_(electricity)

Appendix A

SNOPAC Alarm Abbreviations Defined, Response Mode, and PPE Required

Abbreviation   Definition                       Response Mode          Bunker Gear Required?
BLSN           Basic Life Support – Noncode     No Lights and Sirens   No
BLS            Basic Life Support               Lights and Sirens      No
MED            Advanced Life Support            Lights and Sirens      No
MEDX           Upgraded Advanced Life Support   Lights and Sirens      No
MVCN           Vehicle Crash – Noncode          No Lights and Sirens   Yes
MVC            Vehicle Crash – BLS              Lights and Sirens      Yes
MVCM           Vehicle Crash – Medic            Lights and Sirens      Yes
MVCE           Vehicle Crash – Entrapment       Lights and Sirens      Yes
MVCP           Vehicle Crash – Pedestrian       Lights and Sirens      No
SC             Service Call                     No Lights and Sirens   No
FAC            Fire Alarm – Commercial          Lights and Sirens      Yes
FAR            Fire Alarm – Residential         Lights and Sirens      Yes
FAS            Fire Alarm – Sprinkler           Lights and Sirens      Yes
FR             Fire Residential                 Lights and Sirens      Yes
FC             Fire Commercial                  Lights and Sirens      Yes
AIR            Aircraft Crash                   Lights and Sirens      Yes
GLI            Gas Leak – Indoors               Lights and Sirens      Yes
GLO            Gas Leak – Outdoors              Lights and Sirens      Yes
HZM            HazMat Incident                  Lights and Sirens      Yes

Appendix B

2010 Average response time to Fire Demand Zones – BLS Units to EMS Alarms

Fire Demand Zone   Total Calls   BLS Unit Avg Response Time
(the data below repeats these three columns in three groups per printed row)

0 3 0:06:56 168 17 0:04:41 339 31 0:04:273 1 0:04:19 169 66 0:04:11 340 69 0:04:195 4 0:05:41 170 14 0:04:15 341 31 0:04:476 4 0:04:23 171 1 0:04:48 342 18 0:05:097 0 172 3 0:04:47 343 13 0:04:528 1 0:06:41 173 12 0:06:01 344 37 0:06:32

14 5 0:08:36 174 44 0:04:57 345 12 0:05:3015 15 0:07:15 175 7 0:04:45 346 31 0:05:4816 6 0:08:02 176 8 0:05:07 347 36 0:05:2717 1 0:06:41 177 11 0:04:26 348 5 0:05:3919 3 0:07:22 178 25 0:04:30 349 59 0:05:5021 1 0:08:41 179 21 0:05:10 351 48 0:06:5524 6 0:04:58 180 17 0:04:51 352 16 0:06:2625 2 0:08:17 181 36 0:04:50 353 52 0:06:0326 5 0:09:37 182 13 0:05:11 354 98 0:05:0335 1 0:04:01 183 32 0:06:19 355 18 0:05:1636 10 0:05:23 184 9 0:07:46 356 10 0:03:0137 1 0:11:42 185 29 0:05:48 357 22 0:04:4638 11 0:05:34 188 45 0:06:10 358 54 0:04:3539 24 0:05:07 189 12 0:06:58 359 10 0:03:3940 1 0:10:56 190 12 0:06:56 360 32 0:04:4741 24 0:05:52 191 2 0:06:18 361 3 0:04:4642 3 0:05:10 194 4 0:07:55 362 51 0:05:0743 29 0:05:16 195 1 0:06:10 363 18 0:05:2044 15 0:04:43 196 11 0:06:37 364 6 0:07:4145 55 0:04:21 197 15 0:06:14 365 18 0:08:3346 69 0:04:08 198 8 0:06:35 366 18 0:07:2847 93 0:03:57 199 10 0:06:49 367 28 0:08:4748 52 0:04:03 200 3 0:07:23 369 20 0:06:4149 38 0:05:15 201 10 0:06:50 370 32 0:06:2950 21 0:05:14 202 15 0:06:49 371 8 0:06:2351 19 0:05:48 203 16 0:07:45 372 6 0:07:3352 23 0:06:05 204 6 0:03:41 374 1 0:09:4953 40 0:05:50 205 28 0:06:43 376 3 0:07:1154 144 0:04:57 206 14 0:05:45 377 2 0:06:0455 24 0:04:58 207 12 0:05:18 379 3 0:05:26

56 53 0:04:24 208 1 0:04:53 380 17 0:06:2657 10 0:04:13 209 4 0:06:08 381 33 0:06:2458 27 0:04:08 210 12 0:06:49 382 11 0:06:4459 76 0:03:42 212 10 0:06:25 383 30 0:06:0760 18 0:02:38 213 4 0:06:57 384 10 0:06:2161 39 0:03:56 214 12 0:06:16 385 70 0:05:1362 161 0:03:59 215 6 0:05:39 386 3 0:07:4063 9 0:05:03 216 14 0:07:05 388 1 0:06:2964 23 0:04:21 217 3 0:06:05 390 36 0:04:2765 23 0:03:51 218 7 0:07:11 391 5 0:03:5666 11 0:04:05 219 9 0:07:31 392 24 0:04:1967 13 0:03:11 220 2 0:06:54 393 52 0:04:2268 9 0:03:45 221 3 0:07:39 394 19 0:05:2069 8 0:03:58 222 4 0:08:16 395 19 0:06:1270 113 0:03:53 223 8 0:08:00 396 69 0:06:5071 76 0:03:28 224 10 0:10:02 397 24 0:05:5472 9 0:03:39 225 7 0:09:18 398 58 0:06:5273 21 0:03:55 226 1 0:04:15 399 29 0:06:1274 18 0:04:10 227 1 0:07:24 400 10 0:07:2275 24 0:04:22 229 11 0:04:18 401 31 0:10:5077 22 0:04:20 230 15 0:03:33 402 2 0:10:2978 14 0:04:33 231 9 0:05:54 403 2 0:07:3079 13 0:03:53 232 13 0:06:50 404 8 0:08:5380 20 0:03:43 233 7 0:06:15 405 8 0:08:3581 12 0:04:07 235 19 0:04:36 407 3 0:07:3682 11 0:04:26 236 17 0:07:26 413 1 0:06:1283 29 0:04:49 237 19 0:04:55 415 1 0:07:2284 36 0:05:13 238 18 0:04:24 416 5 0:05:1185 33 0:05:11 239 28 0:04:43 418 1 0:06:1286 19 0:04:40 240 10 0:05:19 419 4 0:05:1387 14 0:04:07 241 17 0:05:41 420 1 0:04:0688 13 0:04:17 242 10 0:05:47 421 4 0:06:0189 13 0:04:26 243 3 0:05:15 423 1 0:05:4190 19 0:05:18 244 8 0:05:22 424 3 0:05:0591 11 0:04:58 245 38 0:04:54 425 65 0:05:0692 16 0:05:22 246 56 0:05:45 427 6 0:06:1493 6 0:06:03 247 68 0:04:21 432 1 0:06:2294 7 0:05:53 248 34 0:03:54 433 5 0:05:2495 3 0:04:55 249 39 0:04:29 434 17 0:06:0596 9 0:05:25 250 28 0:04:17 436 2 0:04:2397 11 0:04:14 251 20 0:04:24 437 9 0:05:0898 9 0:04:30 252 9 0:06:36 438 2 0:06:08

99 40 0:04:25 253 6 0:03:43 439 1 0:04:32100 12 0:04:46 254 42 0:05:01 440 1 0:05:00101 17 0:04:59 255 7 0:04:11 441 7 0:03:55102 40 0:04:25 256 2 0:04:36 442 7 0:04:14103 20 0:04:56 257 13 0:04:49 445 4 0:06:39104 76 0:04:40 258 5 0:04:41 446 3 0:03:26105 38 0:04:15 259 6 0:04:18 447 1 0:07:12106 56 0:04:35 260 18 0:05:21 452 1 0:06:46107 24 0:04:54 261 17 0:04:33 454 12 0:06:48108 24 0:04:43 262 10 0:04:34 455 93 0:05:53109 40 0:05:05 263 10 0:04:01 457 19 0:05:08110 23 0:05:33 264 0 458 25 0:04:42111 26 0:05:31 265 37 0:04:27 459 51 0:04:03112 9 0:05:36 266 87 0:04:01 460 23 0:04:07113 22 0:06:02 267 16 0:05:10 461 50 0:05:11114 11 0:06:04 268 8 0:03:38 462 12 0:05:38115 42 0:05:32 269 7 0:04:24 463 9 0:06:22116 13 0:05:17 270 10 0:03:40 464 21 0:05:33117 67 0:04:49 271 4 0:03:44 465 9 0:06:17118 43 0:04:11 272 17 0:02:56 466 25 0:06:33119 37 0:04:53 273 13 0:03:18 476 1 0:05:11120 38 0:04:13 274 7 0:03:28 477 6 0:05:52121 27 0:03:44 275 46 0:04:54 478 149 0:05:36122 9 0:04:00 276 12 0:03:31 479 73 0:05:28123 14 0:04:22 277 27 0:04:55 494 13 0:05:51124 99 0:04:05 278 4 0:05:07 509 31 0:04:50125 18 0:04:02 279 57 0:05:33 510 2 0:05:22126 51 0:04:24 280 28 0:05:33 513 94 0:04:45127 66 0:04:01 281 61 0:04:58 514 16 0:04:34128 162 0:04:30 282 20 0:04:50 526 24 0:05:12129 174 0:04:59 283 28 0:04:25 537 13 0:04:44130 9 0:04:38 284 33 0:05:00 538 27 0:05:54131 43 0:05:38 285 16 0:04:43 539 198 0:06:26132 6 0:06:26 286 3 0:05:42 540 1 0:06:07133 127 0:04:55 287 14 0:05:00 544 17 0:06:22134 63 0:04:48 288 20 0:04:23 545 20 0:06:15135 27 0:03:57 289 11 0:05:22 546 191 0:05:42136 20 0:04:00 290 51 0:04:51 551 18 0:04:35137 22 0:04:15 291 19 0:05:32 552 35 0:05:28138 156 0:04:06 292 10 0:05:43 555 6 0:07:14139 6 0:07:16 293 5 0:06:11 556 26 0:06:29140 5 0:05:49 294 15 0:05:46 557 15 0:06:30

142 9 0:04:16 296 8 0:06:45 559 16 0:05:32143 33 0:03:43 297 32 0:06:48 560 9 0:05:53144 21 0:03:55 298 22 0:06:38 566 9 0:05:24145 20 0:05:01 299 9 0:05:55 568 1 0:06:03146 24 0:04:29 300 6 0:04:40 570 1 0:10:33147 11 0:04:28 301 5 0:05:54 573 1 0:16:15148 4 0:05:50 302 2 0:06:32 574 4 0:07:32149 5 0:05:03 304 7 0:06:42 576 1 0:07:03150 14 0:03:52 305 2 0:08:28 578 1 0:05:46151 17 0:03:30 323 14 0:08:35 579 1 0:07:36152 51 0:02:53 324 4 0:07:21 580 4 0:07:56153 39 0:04:07 325 38 0:06:51 584 24 0:08:17154 19 0:03:29 326 78 0:06:04 585 2 0:07:56155 34 0:04:28 327 44 0:05:15 586 13 0:09:16156 4 0:03:40 328 75 0:05:50 590 4 0:07:33157 60 0:04:22 329 100 0:05:57 592 1 0:06:10158 73 0:03:47 330 46 0:05:33 593 9 0:07:59159 31 0:03:56 331 39 0:04:56 594 1 0:07:24160 7 0:04:25 332 36 0:05:28 596 4 0:10:19161 9 0:06:23 333 78 0:05:58 600 7 0:08:06163 14 0:05:13 334 39 0:05:31 101a 1 0:03:03164 5 0:04:06 335 20 0:05:18 106B 1 0:05:54165 11 0:06:04 336 15 0:05:42 114a 1 0:06:39166 5 0:05:38 337 20 0:05:08 179B 1 0:06:04167 19 0:03:53 338 91 0:03:38 446a 1 0:03:38

447a 1 0:05:52

Appendix C

2010 Effective Response Force Assembly Times

Residential Fires                              Commercial Fires
Incident     ERF Assy. Time   FDZ              Incident     ERF Assy. Time   FDZ
10003828     0:05:16          202              10003578     0:05:55          104
10008575     0:05:26          133              10004993     0:06:24          347
10002538     0:06:02          86               10010670     0:08:24          254
10008703     0:06:28          159              10005581     0:10:21          436
10011647     0:06:32          383              10013235     0:10:25          461
10016107     0:06:53          86               10001839     0:10:28          510
10003120     0:06:56          104              10009896     0:10:45          425
10014030     0:06:57          157              10009377     0:10:51          425
10013807     0:07:07          173              10014196     0:10:59          325
10017235     0:07:29          371              10016641     0:11:37          447a
10015313     0:07:32          151              10003297     0:11:43          6
10002488     0:07:36          184              10016142     0:12:17          53
10013286     0:07:40          247              10013768     0:12:37          385
10007763     0:07:43          270              10004363     0:12:40          341
10008850     0:08:10          245              10002200     0:13:06          22
10003894     0:08:19          190              10003669     0:13:47          281
10000578     0:08:21          15               10013796     0:13:47          155
10013206     0:08:30          94               10004973     0:14:42          106
10005735     0:08:34          266              10016473     0:14:55          285
10008618     0:08:38          83
10001408     0:08:44          45
10003578     0:08:45          104
10003764     0:08:48          336
10017414     0:08:49          65
10000960     0:08:57          274
10010032     0:09:01          41
10006535     0:09:08          144
10002840     0:09:52          65
10001492     0:10:07          219
10014407     0:10:16          556
10015809     0:10:24          55
10015897     0:10:28          137
10015826     0:10:38          83
10014054     0:10:55          53
10003557     0:11:19          173
10003133     0:12:47          97
10014768     0:13:25          538
10001045     0:16:35          63