NAVAL ENGINEERS JOURNAL December 2016 | No. 128-4 | 59
TECHNICAL PAPER
Evaluation of Current and Future Crew Sizes and Compositions: Two RCN Case Studies
Dr. Renee Chow1, Matthew Lamb1, CPO1 Ghislain Charest2, CPO1 Daniel Labbé2
INTRODUCTION
The Royal Canadian Navy (RCN) has three new ship procurement projects under way. There is a requirement to evaluate crew sizes and compositions to ensure an optimal balance of operational capabilities and life cycle costs. Task network simulation is a well-established approach to assess crew requirements for naval platforms. For example, Woodward (2011) modeled mission pack crews for the UK Mine-countermeasures, Hydrographic, and Patrol Capability (MHPC) project using a tool called Micro Saint Sharp (Alion, 2015), and Hollins & Leszczynski (2014) modeled the US Littoral Combat Ship crew using a tool called IMPRINT Pro (US Army Research Laboratory, 2010). In this approach, the analyst models a network of tasks that need to be performed by crew members, interactions between crew members and tasks, and interactions between tasks. Task attributes may include probabilistic task times and likelihoods of success. An example of a crew-task interaction is that assigning more crew to the same task causes a decrease in the task completion time. An example of a task-task interaction is that a higher priority task causes a lower priority task to be aborted.
Crew members are allocated to tasks in the network, and multiple simulation runs are conducted to predict how well the crew option performs, using measures such as the number of failed tasks. This approach is powerful in mapping out possible interactions between crew and systems in a large-scale system-of-systems like a naval platform, and in predicting the resulting performance. However, task network simulation requires extensive and in-depth knowledge of the ship's crew, as well as the ship's systems, in order to develop comprehensive and valid models that thoroughly and accurately represent the reliability and interdependencies of the human and technological components. This modeling approach seems especially suited for existing platforms, where the
1DRDC Toronto Research Centre; 2Directorate of Naval Personnel and Training, Royal Canadian Navy
crew and system data are readily available, to investigate alternative crewing options. It may also be suited for new platforms, to support decision making during design, but the modeling may need to be restricted to one or more subsystem(s) and a small number of options for the subsystem(s) of interest. If many subsystems are included and each subsystem still has many unknowns, the resources required to model all the permutations may exceed what is feasible within the decision making timeline.
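To make the task network approach concrete, the following is a minimal, hypothetical sketch in Python (it does not represent the Micro Saint Sharp or IMPRINT Pro implementations): tasks with probabilistic durations draw on a finite pool of crew-hours, and repeated Monte Carlo runs estimate how many tasks a given crew size fails to complete. All task names and parameters are invented for illustration.

```python
import random

def simulate_run(tasks, crew_size, shift_hours, rng):
    """One simulation run: greedily assign tasks to available crew-hours.
    Returns the number of tasks that could not be completed."""
    capacity = crew_size * shift_hours  # total person-hours available
    failed = 0
    for name, (mean, spread, crew_needed) in tasks.items():
        duration = max(0.1, rng.gauss(mean, spread))  # stochastic task time
        demand = duration * crew_needed
        if demand <= capacity:
            capacity -= demand
        else:
            failed += 1  # not enough person-hours left: task fails
    return failed

def evaluate_crew_option(crew_size, runs=1000, seed=42):
    """Monte Carlo estimate of mean failed tasks for a crew size."""
    rng = random.Random(seed)
    # Hypothetical tasks: name -> (mean hours, std dev, crew members needed)
    tasks = {
        "watchkeeping": (8.0, 0.5, 4),
        "maintenance": (6.0, 2.0, 3),
        "replenishment": (3.0, 1.0, 6),
        "boarding_party": (2.0, 1.0, 8),
    }
    total_failed = sum(simulate_run(tasks, crew_size, 12, rng) for _ in range(runs))
    return total_failed / runs

# A larger crew should fail no more tasks, on average, than a smaller one.
small_crew_failures = evaluate_crew_option(6)
large_crew_failures = evaluate_crew_option(10)
```

Even this toy version shows why real task network models demand detailed knowledge: every task time distribution and crew demand above is a parameter that, for a real ship, must be elicited from crew and system data.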
For existing platforms that have large crews, or new platforms that are at early stages of design, an alternate approach to crewing analysis is required, because there are so many variables that task network simulation would be infeasible or too onerous. The alternate approach should be applicable to whole-ship modeling without depending on detailed system data. It should also support options analysis involving many options for many subsystems. To address these requirements, Defence R&D Canada (DRDC) developed the Simulation for Crew Optimization and Risk Evaluation (SCORE) tool to support the RCN in evaluating crew sizes and compositions for existing and future naval platforms (Chow, 2014; Chow, Burke, & Witzke, 2016; Chow, Perlin, McKay, Coates, Lamb, & Wang, 2015). This paper describes two case studies where SCORE was applied to evaluate crew options for the in-service Canadian Patrol Frigate (CPF) and the future Arctic and Offshore Patrol Ship (AOPS), and discusses its contributions and limitations.
SCORE
SCORE includes a crew validation module and a crew generation module. This paper focuses on crew validation. SCORE was designed for use by navy domain experts rather than defence scientists or engineers. Most current RCN users have not required formal tool training, and have been able to perform some analyses with SCORE within three days of observing experienced users or participating in group discussions of SCORE inputs or outputs. To perform crew validation, a user specifies a crew manifest, configurations, assignments, and one or more scenarios. In SCORE, a crew manifest is a list
FIGURE 1: Relationships between SCORE Crew Manifests, Configurations, and Assignments
of crew members. A configuration defines the roles required for specific activities. An assignment is a mapping of roles in activities to crew members in a crew manifest. Figure 1 shows the relationships between crew manifests, configurations, and assignments in SCORE. A scenario is a schedule of activities that may span multiple days. To conduct crew validation, the user evaluates a scenario against a crew assignment to generate usage and conflict reports. The usage report shows the percentage of time each crew member is assigned to a role during the scenario, and the conflict report shows whether any crew members are assigned to multiple concurrent roles in the scenario. The tool can also generate reports for multiple scenarios evaluated against the same crew assignment.
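The usage and conflict reports just described can be illustrated with a small, hypothetical sketch (the actual SCORE data model is not public): a scenario is a list of timed activities with required roles, an assignment maps each activity role to a crew member, and the two reports are computed by walking the schedule. All names and numbers below are invented.

```python
from collections import defaultdict

# Hypothetical scenario: (activity, start_hour, end_hour, required roles)
scenario = [
    ("watch_1", 0, 8, ["officer_of_watch", "helmsman"]),
    ("maintenance", 6, 10, ["technician"]),
    ("watch_2", 8, 16, ["officer_of_watch", "helmsman"]),
]

# Hypothetical assignment: (activity, role) -> crew member
assignment = {
    ("watch_1", "officer_of_watch"): "Smith",
    ("watch_1", "helmsman"): "Jones",
    ("maintenance", "technician"): "Jones",  # overlaps watch_1 for Jones
    ("watch_2", "officer_of_watch"): "Lee",
    ("watch_2", "helmsman"): "Jones",
}

def usage_report(scenario, assignment, total_hours):
    """Percentage of scenario time each crew member spends in a role."""
    hours = defaultdict(float)
    for activity, start, end, roles in scenario:
        for role in roles:
            hours[assignment[(activity, role)]] += end - start
    return {member: 100.0 * h / total_hours for member, h in hours.items()}

def conflict_report(scenario, assignment):
    """Crew members assigned to overlapping roles at the same time."""
    spans = [(assignment[(activity, role)], start, end, activity)
             for activity, start, end, roles in scenario for role in roles]
    conflicts = []
    for i, (m1, s1, e1, a1) in enumerate(spans):
        for m2, s2, e2, a2 in spans[i + 1:]:
            if m1 == m2 and a1 != a2 and s1 < e2 and s2 < e1:
                conflicts.append((m1, a1, a2))
    return conflicts

usage = usage_report(scenario, assignment, total_hours=16)
conflicts = conflict_report(scenario, assignment)
```

In this toy example the conflict report flags "Jones", who is assigned to the maintenance role while also standing watch; the usage report would likewise show Jones assigned well above the other crew members.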
CPF Crew Validation
When the CPF entered service in the 1990s, the crew was established at 200. That crew was frequently augmented for missions and for force generation, and these augmentees performed various duties together with the established crew. An increased establishment of 217 was subsequently proposed for the CPF, to enable the crew to sustain operations at high readiness. The RCN used SCORE Version 1.6 to validate this proposal. A 10-day (228-hour) scenario, representative of high-tempo operations, was used to evaluate the crew of 217 and to compare it with the crew of 200. For both crew options, average crew usage was computed and compared between ship departments. Perhaps surprisingly, while the proposed crew of 217 would increase the crew size by 8.5%, the average crew usage differed by less than 2% between the two crew options. In both crew options, the Combat, Combat Systems Engineering (CSE), and Deck departments had the highest usage, followed by the Marine Systems Engineering (MSE) department, and then the Logistics department, whose average daily usage was about 2.5 hours less than the most highly used departments. The average usage for each department was similar between the two crew options.
These results seemed counter-intuitive, as the increased crew size was not found to reduce the workload of the overall crew.
However, further examination of the SCORE results revealed differences in role usage between the two crew options. Role usage referred to the amount of time each role was performed during the scenario.
FIGURE 2: SCORE Usage Reports based on Scenarios and Assignments
There were roles performed by the crew of 217 that were not performed at all by the crew of 200. For other roles, including junior roles in the Combat and Deck departments as well as trainee and maintenance roles in the engineering departments, the crew of 200 performed these roles for a total of 445 person-hours less than the crew of 217. In summary, compared to a crew of 200, the crew of 217 gained about 137 hours of force generation or maintenance capacity across the four departments (Combat, Deck, MSE, and CSE). These SCORE results were a critical factor in the substantiation for additional personnel submitted in 2013. The crew of 217 was approved in 2014.
AOPS Crew Options Analysis
The AOPS is a new class of ship for the RCN that is expected to be delivered in 2018. In 2014, the RCN applied SCORE Version 2.0 to conduct a crew options analysis. At least eight crew options were evaluated, with crew sizes ranging from 45 to 65. A 7-day (168-hour) high-tempo scenario was used to evaluate the options, and the ship's crew was modelled with a combined technical department that performed both MSE and CSE duties. Some crew options were modelled to stand 1-in-3 watches in all departments, while other crew options were modelled to stand 1-in-4 watches in all departments.
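The arithmetic behind the 1-in-3 versus 1-in-4 watch rotations can be sketched as follows (a generic illustration of watchkeeping arithmetic, not SCORE output): in a 1-in-N rotation each watchstander covers 1/N of the day, so a deeper rotation buys more rest per person at the cost of more people per continuously manned position.

```python
def watch_hours_per_day(rotation_n, day_hours=24):
    """In a 1-in-N watch rotation, each crew member stands 1/N of the day."""
    return day_hours / rotation_n

def watchstanders_needed(positions, rotation_n):
    """Crew required to keep `positions` continuously manned, 1-in-N rotation."""
    return positions * rotation_n

# 1-in-3: 8 hours on watch per day; 1-in-4: 6 hours on watch per day.
hours_1_in_3 = watch_hours_per_day(3)
hours_1_in_4 = watch_hours_per_day(4)

# Keeping 5 hypothetical watch positions continuously manned:
crew_1_in_3 = watchstanders_needed(5, 3)  # 15 people
crew_1_in_4 = watchstanders_needed(5, 4)  # 20 people
```

This trade-off is why rotation depth matters so much for small crews: moving the same set of positions from 1-in-3 to 1-in-4 watches consumes a third more watchstanders.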
In contrast to the CPF analysis, where two crew options were defined in advance, the set of options examined for AOPS emerged during the analysis process. Analysis began with a large crew option and a small crew option that were evaluated through the generation of usage and conflict reports. Where certain crew members had very high or very low usage, or were assigned to multiple roles at the same time in the scenario, roles were re-assigned among the crew. When re-assigning roles was not sufficient, new crew options were generated for further analysis. Each new option was subjected to evaluation in the same manner, and issues (if any) were addressed through further re-assignment of roles and/or further changes to the crew, until each option was feasible, at least in principle, though some options raised concerns, as described below.
In current RCN practice, roles for as-required activities other than regular watchkeeping or maintenance are distributed between watches, such that the same crew members are not on call at all times. The SCORE analysis revealed that for the AOPS, with the smaller crew options, roles for these as-required activities were assigned to the same crew members regardless of when the activity occurred, thereby reducing the opportunity for crew rest. The same applied to the medium crew options, except that only senior crew members were affected; only the large crew option could follow the current practice to protect the opportunity for crew rest. In addition, to fill critical roles, the small and medium crew options required senior personnel to be double-hatted. For example, the person in charge of a section base team also had to do the plotting and the communications, even though these roles are normally performed by different crew members. The small and medium crews also had to assign roles to crew members from both watches to fill special parties, and the smaller options were not able to meet the Statement of Requirements in terms of performing concurrent activities. In contrast, the large crew had increased capability (e.g., naval boarding party) and increased capacity during emergencies.
There were also differences in the crews' capacities for maintenance, force generation, and other departmental work. For the small crew, maintenance roles had reduced coverage for two other technical areas.
Platform type: CPF, multi-role patrol frigate (1990s to present); AOPS, ice-capable patrol ship (expected in 2018)
Number of crew options: CPF, 2; AOPS, 8+
Crew size: CPF, 200 vs. 217; AOPS, 45 to 65
Largest crew as % of smallest crew: CPF, 109%; AOPS, 144%
Key findings from SCORE analysis: CPF, larger crew showed no decrease in crew usage, no unfilled roles, and increased capacity for force generation and maintenance; AOPS, smaller crews had unfilled roles for special parties unless crew from both watches were assigned, and unfilled roles for concurrent activities
Estimated time spent by primary analysts: CPF, 4 person-months; AOPS, 10 person-months
TABLE 1. Comparison of the CPF and AOPS analyses.
46 person-hours of maintenance per day, before considering as-required activities that could further decrease the crew's availability to conduct maintenance. The small crew had lower capacity for departmental work, and also had lower capacity for force generation, which required roles such as one Second-In-Charge role and one Director role.
Overall, SCORE modeling answered the critical question of what the ship would not be able to do with fewer crew members, which was integral to the RCN decision to endorse a complement of 65.
Discussion
The CPF and AOPS case studies applied SCORE to evaluate crewing for an existing platform and for a new platform respectively. In both cases, the analyses encompassed the whole ship, and included multiple ship departments and watches. The analyses revealed implications for operational capabilities, such as whether the ship's crew included a boarding party. They also revealed implications for operational capacity, such as whether the crew could conduct damage control in multiple sections of the ship at the same time, or whether as-required activities could be sustained over time depending on the requirement for crew to come from multiple watches. While the focus for both analyses was operations (i.e., watchkeeping and as-required activities), it was still possible to glean insights into how crewing options would affect the crew's availability for maintenance and force generation.
One important limitation of SCORE is that it is not
currently implemented to enable stochastic modeling. In a stochastic task network simulation, the developer could specify probability distributions for task completion times, and logical or temporal dependencies between tasks, for example: 80% of the time, task A is followed by task B; 20% of the time, task A is followed by task C. In contrast,
because activities are scheduled deterministically in SCORE (e.g., on Day 1, activity E occurs for three hours, followed by activity F for one hour), the same results can be expected every time the same scenario is used to evaluate the same crew. If the SCORE user is interested in examining the effects of variability, the same activity can be inserted at multiple days and times during the scenario, such that each instance of the activity is preceded or followed by various other activities. The SCORE user may also choose to construct multiple scenarios that bound the conditions of interest, for example, a best case scenario where activity times are short and there are no emergencies, and a worst case scenario where activity times are long and multiple emergencies occur.
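The deterministic behaviour described above can be shown with a toy evaluation function (a hypothetical sketch, not SCORE internals): re-running the same fixed schedule always returns identical usage, so variability must be introduced by editing the scenario itself, for example by building best case and worst case schedules and comparing their results.

```python
def evaluate(scenario, crew):
    """Deterministic evaluation: total assigned hours per crew member.
    scenario: list of (activity, duration_hours, assigned_member)."""
    usage = {member: 0.0 for member in crew}
    for _activity, duration, member in scenario:
        usage[member] += duration
    return usage

crew = ["A", "B"]
# Hypothetical bounding scenarios: the worst case adds emergencies.
best_case = [("patrol", 3.0, "A"), ("boarding", 1.0, "B")]
worst_case = best_case + [("fire", 2.0, "A"), ("flood", 2.0, "B")]

# Deterministic: identical inputs always give identical outputs.
run1 = evaluate(best_case, crew)
run2 = evaluate(best_case, crew)

# Variability is explored by comparing hand-built bounding scenarios.
low = evaluate(best_case, crew)
high = evaluate(worst_case, crew)
```

The two bounding runs bracket the crew's usage without any stochastic machinery, which is exactly the manual-manipulation strategy the paper describes.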
In other words, SCORE can be manually manipulated to consider meaningful variances in the operational demand, but the feasibility and ease of these manipulations depend primarily on the user's naval subject matter expertise. As long as the naval subject matter expert (SME) can conceive of a meaningful scenario, that schedule of activities can be entered directly by the SME into SCORE. In contrast, the development of a task network simulation depends more heavily on the user's modeling and simulation expertise. A task network simulation is built by a modeling expert, and naval SMEs can provide ranges of values for many of these parameters in various parts of the simulation, as long as the SMEs have visibility into the possible parameters. The task network simulation can then be run over many combinations of parameters, and naval SMEs can work with the modeling expert to interpret the range of results over many simulation runs. This contrasts with the interpretation of SCORE results, which is relatively straightforward: one scenario would produce one set of results; a second scenario that embodies different conditions would produce a second set of results; and the two scenarios and sets of results can be directly compared.
A second limitation of SCORE is that it does not simulate human or system performance. In contrast, a task network simulation can model system states that change with external conditions and actions taken by crew or automation. For example, Chow, Hiltz, Coates & Wang (2010) developed a task network simulation to investigate optimized crewing for damage control that included environmental and design parameters, three crew-and-automation options, and two scenarios. The simulation produced outputs on system performance for each combination of factors. All combinations of factors were simulated, and each combination was run 25 times to produce numerous results. The results required a multi-variate analysis of variance (MANOVA) over factors such as crewing level, automation reliability, and scenario complexity, followed by eight uni-variate analyses of variance (ANOVAs) to
examine two complementary subsets of the 26 combinations for the four variables (Chow, 2010). The time and type of expertise required to interpret these simulation results were considerable, in addition to the time and resources required to develop these types of models to support crewing analysis. For example, the implementation cost of SCORE's crew validation module was about 20% less than the implementation cost of one task network simulation of damage control (Chow et al., 2010). SCORE's crew validation module can be reused for various analyses (e.g., CPF, AOPS), but the task network simulation was specific to the three crew-and-automation options and the two scenarios that were developed. The results of the task network simulation seem more applicable to decision making about systems, such as what damage control automation to acquire. The results from SCORE seem more applicable to decision making about overall crew size.
In the future, new forms of support for stochastic modeling could be developed in SCORE to investigate the
impact of crew members becoming unavailable. However, this increased modeling capability must not come at the cost of a very large increase in the expertise or time required to develop or modify crews or scenarios. As long as SCORE users remain primarily interested in whole ship crewing, it is likely unnecessary to model detailed probabilistic durations and outcomes for each activity.
SCORE could also be extended to include some support for the consideration of human performance. This would, for example, allow the investigation of crew fatigue. DRDC has already developed and demonstrated a proof-of-concept for the integration of a crew performance model into SCORE. This crew performance model uses predicted crew work schedules from SCORE to predict crew sleep schedules, and uses the predicted crew sleep schedules to predict crew fatigue. A fully integrated crew performance model is expected to be available in SCORE in 2016 (Chow et al., 2016).
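As a rough illustration of how such a model might chain together (the actual DRDC crew performance model is not described here, and the parameters below are invented), a predicted work schedule bounds the available sleep each day, and accumulated sleep debt can serve as a crude fatigue proxy:

```python
def sleep_hours(work_schedule, day_hours=24, overhead=4.0):
    """Estimate available sleep per day from a work schedule.
    work_schedule: hours worked each day.
    overhead: assumed daily hours for meals, hygiene, etc. (hypothetical)."""
    return [max(0.0, day_hours - worked - overhead) for worked in work_schedule]

def fatigue_score(sleep, required=8.0):
    """Crude fatigue proxy: cumulative sleep debt in hours, floored at zero."""
    debt, scores = 0.0, []
    for s in sleep:
        debt = max(0.0, debt + (required - s))
        scores.append(debt)
    return scores

# Hypothetical 5-day work schedules for heavily vs. lightly tasked crew.
heavy = [14, 14, 14, 14, 14]
light = [10, 10, 10, 10, 10]

heavy_fatigue = fatigue_score(sleep_hours(heavy))
light_fatigue = fatigue_score(sleep_hours(light))
```

The heavily tasked schedule accumulates two hours of sleep debt per day while the lightly tasked one accumulates none, which is the kind of schedule-driven insight a fatigue model layered on SCORE usage reports could provide.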
Conclusion
SCORE was applied to evaluate crew options for existing and new RCN platforms. The analysis results provided meaningful and useful evidence related to operational capability and capacity to support high level decision making on crew sizes. The results also provided a starting point for considering the complex and important relationships between mission requirements and maintenance or force generation requirements. In a case where equipment decisions have been made and there are detailed data on how the equipment needs to be operated and/or maintained (i.e., the CPF case study), SCORE was used successfully to compare two fully specified crew options. In a case where equipment decisions have not yet been made and detailed data are not fully available (i.e., the AOPS case study), SCORE was used successfully to assess and to compare, in an iterative manner, multiple evolving crew options.
As a tool for crew validation, SCORE provided a feasible, meaningful, and usable alternative to detailed task network simulations.
Acknowledgements
This paper is referred to within DRDC as “DRDC E15-0910-1336”. The authors acknowledge the co-inventors of SCORE including Dr. Wenbi Wang, Curtis Coates, Michael Perlin, and Paul McKay. The authors also acknowledge Cdr Dennis Witzke and Cdr Ramona Burke from the Directorate of Naval Personnel and Training for their guidance and support throughout the CPF and AOPS analyses.
REFERENCES
Alion Science and Technology. Human performance analysis: Micro Saint Sharp, 2015. Retrieved on 20 August 2015 from: http://www.alionscience.com/Technologies/Human-Performance-Analysis/Micro-Saint-Sharp
Chow, R. Analysis of a simulation experiment on optimized crewing for damage control. DRDC Toronto Technical Report TR2010-128, August 2010.
Chow, R. Decision support for Royal Canadian Navy crewing analysis. ‘Human capital’ in the National Shipbuilding Procurement Strategy Workshop, Centre for Foreign Policy Studies, Dalhousie University, 14 November 2014.
Chow, R., Burke, R., & Witzke, D., 2016. A systems approach to naval crewing analysis: Coping with complexity. Canadian Naval Review, 11(3).
Chow, R., Hiltz, J., Coates, C., & Wang, W. Optimized crewing for damage control: A simulation study. Maritime Systems and Technology (MAST) Americas 2010 Conference, 2010.
Chow, R., Perlin, M., McKay, P., Coates, C., Lamb, M., & Wang, W. SCORE 2.0 User’s Guide: Crew generation and validation. DRDC-RDDC-2015-R052, 2015.
Hollins, R., & Leszczynski, K. USN manpower determination decision making: A case study using IMPRINT Pro to validate the LCS core crew manning solution. Monterey, California: Naval Postgraduate School, December 2014. Retrieved on 14 July 2015 from http://hdl.handle.net/10945/44582
U.S. Army Research Laboratory. Improved Performance Research Integration Tool, September 2010. Retrieved on 14 July 2015 from http://www.arl.army.mil/www/default.cfm?page=445
Woodward, M. Complement Modelling for the MHPC Naval Project. In Ergoship 2011—1st Conference on Maritime Human Factors, Göteborg, Sweden, September 2011.