
Page 1

INSTITUTE FOR DEFENSE ANALYSES

A New Military Retention Prediction Model: Machine Learning for High-Fidelity Forecasting

Julie Pechacek, Alan Gelder, Cullen Roberts, Joe King, James Bishop, Michael Guggisberg, Yev Kirpichevsky

June 2019

Approved for public release; distribution is unlimited.

IDA Document NS D-10712

Log: H 19-000305

INSTITUTE FOR DEFENSE ANALYSES
4850 Mark Center Drive

Alexandria, Virginia 22311-1882

Page 2

Copyright Notice © 2019 Institute for Defense Analyses 4850 Mark Center Drive, Alexandria, Virginia 22311-1882 • (703) 845-2000 This material may be reproduced by or for the U.S. Government pursuant to the copyright license under the clause at DFARS 252.227-7013 (a)(16) [June 2013].

The Institute for Defense Analyses is a non-profit corporation that operates three federally funded research and development centers to provide objective analyses of national security issues, particularly those requiring scientific and technical expertise, and conduct related research on other national challenges.

About This Publication: This work was conducted by the Institute for Defense Analyses (IDA) under contract HQ0034-14-D-0001, project BE-6-4311, “Next Generation Personnel Management Models and Modeling Environment,” for the Office of the Under Secretary of Defense for Personnel and Readiness (OUSD (P&R)). The views, opinions, and findings should not be construed as representing the official position of the Department of Defense, the sponsoring organization, or the U.S. Government.

For More Information:
Dr. Julie A. Pechacek, Project Leader, SFRD, [email protected], 703-578-2858
ADM John C. Harvey, Jr., USN (Ret), Director, SFRD, [email protected], 703-575-4530

Page 3

INSTITUTE FOR DEFENSE ANALYSES

IDA Document NS D-10712

A New Military Retention Prediction Model: Machine Learning for High-Fidelity Forecasting

Julie Pechacek, Alan Gelder, Cullen Roberts, Joe King, James Bishop, Michael Guggisberg, Yev Kirpichevsky


Page 5

Executive Summary

Improved forecasts of military personnel retention can assist Department of Defense (DOD)

leaders and force managers on multiple levels. Developing a force that is lethal, efficient, and

ready requires that leaders anticipate upcoming changes in the size and shape of the military

workforce at a detailed level. To support the Office of the Under Secretary of Defense for

Personnel and Readiness, IDA has developed the Retention Prediction Model (RPM). The RPM

uses machine learning algorithms and extensive personnel records to capture rich interactions in

service characteristics and predict when individual servicemembers will separate from the military.

The RPM’s person-level predictions can be aggregated by any desired population subset, including

career field, cohort, unit, or demographics.

The RPM currently incorporates monthly records for active duty personnel between 2000 and

2018. This population encompasses approximately 4.5 million unique individuals and more than

600 administrative fields, covering career history, family, pay, and deployments. To facilitate and

speed model training, a 5% sample of all individuals serving between 2000 and 2018 was split into

two sub-samples. The first sub-sample, comprising 75% of the sample (approximately 169,000

individuals), was used to train the RPM. The remaining 25% of the sample was used for testing.

Based on information about a service member’s career and characteristics observable up to a

given point, the RPM estimates the probability that a person will continue to serve for any number

of future periods. The RPM uses a survival loss function developed specifically for analytic

applications where the end state in a chain of events is not observable or has not yet occurred.

Categorical variables were encoded using an embedding layer to determine a mapping structure

that is most useful to the predictive model.

The RPM produces individual-level predictions that closely mirror actual attrition patterns.

In out-of-sample testing, given two randomly selected servicemembers, one of whom

separates from the military within one year, the RPM identifies the correct individual 88% of the

time. Extending the time horizon to four years, the model is correct 80% of the time; for any

number of years up to 18, the model is correct more than 78% of the time.

Applying machine learning techniques to identify patterns in personnel data enables new

insights into issues affecting military personnel retention and force planning. The DOD can

leverage the RPM to anticipate shortfalls in specific occupational fields, plan for expected career

lengths among a heterogeneous population, and tailor policies to retain highly sought-after

personnel.


Page 7

A New Military Retention Prediction Model: Machine Learning for High-Fidelity Forecasting

JSM 2019

Julie Pechacek, Principal Investigator

Alan Gelder, Cullen Roberts, Joe King, James Bishop, Michael Guggisberg, Yev Kirpichevsky

The views expressed here are my own and do not represent the official position of IDA or DOD.

Page 8

The Institute for Defense Analyses (IDA) conducts independent research in the public interest


Non-profit and non-partisan

Operates three Federally Funded Research and Development Centers (FFRDCs)

Primary sponsor is the Department of Defense (DOD)

Bridge between academia and DOD

The DOD seeks to better forecast and understand factors that predict the retention of military personnel


Anticipate shortfalls in career fields

Identify changes in expected career length among a heterogeneous population

Provide appropriate policies to encourage retention

Page 9

How and why do people leave?


Many leave at contract endpoints: fixed-duration contracts often last 4 years

Others leave mid-contract: poor health, discipline problems, or other reasons

We observe entry and exit by date of first and last record:
- Right-censoring flagged by being present at the last date in the data
- Entry date inferred for those present at the first date in the data

Observed survival function for active duty personnel

[Figure: share of service members remaining X years after observation, plotted for 1–18 years after the current observation]

Page 10

Out-of-sample prediction with a feed-forward neural network

[Figure: observed survival function for active duty personnel; actual vs. predicted share of service members remaining 1–18 years after the current observation]

This aggregate prediction results from millions of person-level predictions across the force


How well does the model perform for a given person?

Pick two service members at random, one of whom will leave within 1 year

Our model identifies the person who leaves 88% of the time

What if the time horizon is 4 years instead of 1 year? 80%
For any number of years up to 18? 78+%

Page 11

Data Overview

We harness extensive DOD administrative records on active duty personnel

Data include monthly records for all active duty personnel:
- 315 million person-month records, 2000–2018
- 4.5 million unique persons
- 600 fields on career history, family, pay, deployments

To reduce training time, we:
- Aggregate to yearly records
- Estimate the model on a 5% subset of persons
- Train on 75% and test on 25% of this subset
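As an illustration (not the RPM's actual code), the person-level sampling and split described above can be sketched as follows; the ID list and fractions are hypothetical stand-ins:

```python
import random

def sample_and_split(person_ids, sample_frac=0.05, train_frac=0.75, seed=0):
    """Draw a person-level subsample, then split it into train and test sets.

    Sampling whole persons (not person-year rows) keeps every record for an
    individual on the same side of the split, avoiding leakage across a career.
    """
    rng = random.Random(seed)
    sample = rng.sample(person_ids, int(len(person_ids) * sample_frac))
    rng.shuffle(sample)
    cut = int(len(sample) * train_frac)
    return sample[:cut], sample[cut:]

# Hypothetical IDs (the real data holds roughly 4.5 million persons)
ids = list(range(100_000))
train, test = sample_and_split(ids)
print(len(train), len(test))  # 3750 1250
```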

Page 12

DOD data from multiple administrative feeds form an unbalanced, censored panel


Censoring:
- 31% of persons in service at first data month
- 30% of persons in service at last data month
- 42% of persons uncensored
- 3% in service in both first and last data months

We construct additional fields to highlight potentially salient career and life experiences, such as …


Assigned unit:
- Characteristics (e.g., size, mean Armed Forces Qualification Test (AFQT) score)
- Similarity to subject person
- Proximity to subject person’s home of record

Features of service:
- Days until end of current contract
- Total days deployed
- Deployment features

Family composition:
- Oldest and youngest dependents
- Total dependents in various age ranges

Page 13

Modeling

We seek to estimate the probability that a person continues to serve for any number of future periods


Input: information on a person’s career at a given point

Output: survival probability that a person will remain in service for each successive period

Page 14

Survival loss function accounts for right-censoring

Discrete survival framework

Hazard function: h(t) = Prob(exit at time t | survival to time t − 1)

Likelihood by censoring status:
- Exit at time j (uncensored): h(j) ∏_{t=1}^{j−1} (1 − h(t))
- Survival until at least j (censored after j − 1): ∏_{t=1}^{j−1} (1 − h(t))

Loss function implementation based on Gensheimer and Narasimhan (2019)

Output is a vector of survival probabilities (1 − h(t)) extending out to the maximal observed survival time

Machine learning models require real-valued predictors and other data transformations


Categorical fields: we have hundreds, many with hundreds of unique values

Transformation (for a field with k unique values) → number of resulting fields:
- One-hot encoding: k (Boolean)
- Binary encoding: ⌈log2 k⌉ (Boolean)
- Target mean encoding: 1 (group mean outcome)
- Neural network embedding: 1 or more
- Other (e.g., PCA, autoencoders): 1 or more

Numeric fields: fields with a low number of unique values are treated as categorical; others are rescaled to the interval [−0.5, 0.5]
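The field counts in the table above can be checked with a small sketch (the 600-value field is a hypothetical example, not a specific RPM field):

```python
import math

def encoded_width(k, method):
    """Number of fields produced when encoding a categorical with k unique values."""
    if method == "one_hot":
        return k                        # one Boolean column per value
    if method == "binary":
        return math.ceil(math.log2(k))  # Boolean columns of a binary code over the value index
    if method == "target_mean":
        return 1                        # single column holding the group mean outcome
    raise ValueError(method)

k = 600  # hypothetical categorical field with 600 unique values
print(encoded_width(k, "one_hot"),
      encoded_width(k, "binary"),
      encoded_width(k, "target_mean"))  # 600 10 1
```

The gap between 600 one-hot columns and 10 binary columns is why compact encodings (or learned embeddings) matter when hundreds of such fields exist.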

Page 15

Transform each categorical field via embedding layers; store for use in subsequent models

[Diagram: categorical fields pass through embedding layers (Embed 1 … Embed N); their outputs join the non-categorical fields as inputs to the main neural net layers, which produce the overall output (survival function)]

(survival function)

Model rerun after embeddings are calculated (“freezing the embeddings”)
L2 regularization applied to the output of each embedding layer
Adam optimizer (AMSGrad variant; see Reddi et al. 2018)
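A toy sketch of the two embedding ideas just mentioned, the lookup itself and the L2 penalty on its output; the table size, dimension, and values are all hypothetical:

```python
import random

random.seed(0)

k, d = 5, 3  # hypothetical: a 5-value categorical embedded in 3 dimensions
# Trainable embedding table: one d-dimensional vector per category value
table = [[random.uniform(-0.1, 0.1) for _ in range(d)] for _ in range(k)]

def embed(value_index):
    """Look up the dense vector for a categorical value (the embedding layer)."""
    return table[value_index]

def l2_penalty(vec, lam=0.01):
    """L2 regularization term applied to an embedding layer's output."""
    return lam * sum(x * x for x in vec)

v = embed(2)
print(len(v))  # 3 — a few dense fields replace the raw categorical code
print(l2_penalty(v) >= 0.0)
```

"Freezing" then amounts to fixing `table` and rerunning training on the remaining weights.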

Main neural network layers consider all possible interactions while reducing overfitting


Layer stack (input to output): Maxout dense → Dropout → Batch normalization → Maxout dense → Dropout → Batch normalization → Final dense layer (sigmoid activation)

Batch normalization normalizes inputs from the previous layer, allowing for larger learning rates (Ioffe and Szegedy 2015)

Maxout dense layers offer flexibility for convex activation functions and a form of model averaging (Goodfellow et al. 2013)
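A maxout unit takes the maximum over several affine maps of its input (Goodfellow et al. 2013); a minimal sketch with hypothetical weights:

```python
def maxout_unit(x, weight_sets, biases):
    """Maxout activation: the max over k affine transformations of the input.

    Each (w, b) pair defines one linear piece; taking the max over pieces lets
    the layer learn convex, piecewise-linear activation functions.
    """
    return max(
        sum(wi * xi for wi, xi in zip(w, x)) + b
        for w, b in zip(weight_sets, biases)
    )

# Hypothetical unit with two linear pieces over a 2-dimensional input
x = [1.0, -2.0]
pieces = [[0.5, 0.5], [-1.0, 0.25]]
biases = [0.0, 0.1]
print(maxout_unit(x, pieces, biases))  # -0.5
```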

Page 16

Results

Area Under the Receiver Operating Characteristic curve (AUROC), measured on out-of-sample observations:
- 1-year forecast: 0.88
- 3-year forecast: 0.82
- 7-year forecast: 0.79

AUROC is the probability that a random draw from the set of actual positives will rank higher than a random draw from the set of actual negatives.
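That probabilistic definition can be computed directly by brute force over all positive-negative pairs; the predicted exit probabilities below are hypothetical:

```python
def auroc(scores_pos, scores_neg):
    """AUROC via its probabilistic definition: the chance that a random actual
    positive scores higher than a random actual negative (ties count half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical predicted exit probabilities for leavers (positives) and stayers
leavers = [0.9, 0.8, 0.4]
stayers = [0.7, 0.3, 0.2]
print(round(auroc(leavers, stayers), 3))  # 0.889
```

An AUROC of 0.88 therefore means exactly the "pick two at random" statement made earlier in the deck.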

Page 17

Precision-recall graph: for a fixed recall coverage, how precise are the predictions?

[Figure: precision-recall curves comparing the model against a baseline at increasing recall levels; plotted precision values include 0.96, 0.83, 0.67, and 0.58 for the model and 0.88, 0.66, 0.50, and 0.40 for the baseline]

Page 18

Person-level predictions can be aggregated by career field, cohort, unit, demographics, and so forth

Estimates for Air Force fighter pilots (11F)

A graphical interface enables DOD leaders to visualize these predictions for custom groups


For a given set of drill-down characteristics, plot the current and predicted force curve by years of experience

Notional example

Page 19

This capability can predict events for personnel, units, equipment, and operations

Examples: accurately predicting …
- whether a potential recruit will succeed in basic training
- which applicants will likely pass special forces evaluation
- when an equipment item will fail

The inputs are flexible, requiring only panel data with:
- time and person (or object) identifiers
- regularly spaced time intervals

Questions or Comments?


Feel free to contact Julie Pechacek: [email protected]

Thank You

Page 20

Backup

We examined four classes of models


- Feed-forward neural network: two 256-node Maxout layers, each with 25% dropout
- Gradient boosted trees: 64 trees, each at most 8 branches deep
- Random forest: 128 trees, each leaf with at least 256 observations
- Logistic regression: L2 regularization of 0.125

Page 21

Neural network predictions best match actual aggregate attrition patterns

Measured on out-of-sample observations

AUROC: Comparing model classes


Measured on out-of-sample observations

Page 22

The Persistence Prediction Capability and supporting data have been designed for efficient reuse

Personnel data examples:
- Personnel files
- Family files
- Deployments and casualties

[Diagram: the PPC algorithm draws data (population, fields, outcome) from a personnel data environment (e.g., EDDIE*), fits neural nets, tree methods, or linear models, and produces person-level outputs such as expected career durations and survival functions. Applications include the Retention Prediction Model, AC recruiting, RC recruiting, AC/RC transfers, and RC attrition.]

*Enterprise Data to Decisions Information Environment

Metrics primer: precision, recall, false positive rate

[Diagram: a classification threshold drawn as a circle over the sets of actual positives and actual negatives; elements inside the circle are classified as “positive,” yielding true positives, false positives, true negatives, and false negatives]

Recall (true positive rate): How complete is the positive classification coverage?
Precision: How correct are the positive classifications?
False positive rate: What share of actual negatives are incorrectly classified as positive?
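The three metrics follow directly from confusion-matrix counts; a short sketch with hypothetical counts:

```python
def classification_metrics(tp, fp, tn, fn):
    """Precision, recall (TPR), and false positive rate from confusion counts."""
    precision = tp / (tp + fp)  # correctness of positive calls
    recall = tp / (tp + fn)     # coverage of actual positives
    fpr = fp / (fp + tn)        # share of actual negatives called positive
    return precision, recall, fpr

# Hypothetical confusion counts
p, r, f = classification_metrics(tp=80, fp=20, tn=70, fn=30)
print(p, r, f)  # 0.8, then roughly 0.727 and 0.222
```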

Page 23

Metrics primer: changing classification thresholds

Our output is a set of survival probabilities:
- Binary stay-exit predictions require a classification threshold
- Example: stay if survival probability > 0.5; exit otherwise

How does the model perform as we vary the threshold?

[Figure: receiver operating characteristic (ROC) curve plotting recall (true positive rate) against false positive rate, and precision-recall curve plotting precision against recall; each point corresponds to a classification threshold]
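The threshold sweep behind an ROC curve can be sketched directly; the scores, labels, and thresholds below are hypothetical:

```python
def roc_points(probs_exit, labels, thresholds):
    """Trace the ROC curve: each threshold yields one (FPR, recall) point.

    probs_exit : predicted probability of exit for each person
    labels     : 1 if the person actually exits, 0 if they stay
    """
    points = []
    pos = sum(labels)
    neg = len(labels) - pos
    for thr in thresholds:
        preds = [1 if p > thr else 0 for p in probs_exit]
        tp = sum(1 for y, yhat in zip(labels, preds) if y == 1 and yhat == 1)
        fp = sum(1 for y, yhat in zip(labels, preds) if y == 0 and yhat == 1)
        points.append((fp / neg, tp / pos))
    return points

# Hypothetical exit scores and actual outcomes
probs = [0.9, 0.7, 0.6, 0.4, 0.2]
labels = [1, 1, 0, 1, 0]
print(roc_points(probs, labels, thresholds=[0.8, 0.5, 0.1]))
```

Lowering the threshold moves the point up and to the right: more positives are captured, but more negatives are swept in too.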

Summary of missing data

Missing data values:
- We remove the 30% of fields that are more than 99.9% missing
- 54% of the remaining fields are at least 50% missing
- Missing values are filled with the person’s previous non-missing value
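The within-person fill rule amounts to a forward fill over each person's time series; a minimal sketch (the pay-grade values are hypothetical):

```python
def forward_fill(values):
    """Fill each missing (None) entry with the previous non-missing value.

    Leading missing values stay None: a person's first observed value cannot
    be inferred from earlier records.
    """
    filled, last = [], None
    for v in values:
        if v is not None:
            last = v
        filled.append(last)
    return filled

# Hypothetical monthly pay-grade field for one person, with gaps
print(forward_fill([None, "E3", None, None, "E4", None]))
# [None, 'E3', 'E3', 'E3', 'E4', 'E4']
```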

Page 24

Future work will allow for more accurate predictions, measures of uncertainty, and counter-factual studies

- Bayesian optimization of hyper-parameters
- Bootstrapping to estimate sampling uncertainty
- Address prediction interactions with legally protected fields
- Move toward counter-factual studies

[Diagram: spectrum spanning econometrics and machine learning]

Page 25

REPORT DOCUMENTATION PAGE (Standard Form 298, Rev. 8/98; Form Approved OMB No. 0704-0188)

1. Report date: XX-06-2019
2. Report type: Final
4. Title: A New Military Retention Prediction Model: Machine Learning for High-Fidelity Forecasting
5a. Contract no.: HQ0034-14-D-0001
5d. Project no.: BE-6-4311
6. Authors: Julie Pechacek, Alan Gelder, Cullen Roberts, Joe King, James Bishop, Michael Guggisberg, Yev Kirpichevsky
7. Performing organization: Institute for Defense Analyses, Strategy, Forces and Resources Division, 4850 Mark Center Drive, Alexandria, VA 22311-1882
8. Performing organization report no.: IDA Document NS D-10712, Log: H 19-000305
9. Sponsoring agency: OUSD (P&R), Pentagon
12. Distribution/availability statement: Approved for public release; distribution is unlimited.
14. Abstract: Using machine learning algorithms and 18 years of data, we predict individual-level attrition among active duty personnel in all military Services, with hold-out sample prediction accuracies typically exceeding 70%. Importantly, our methodology accommodates both right- and left-censoring of observed career paths, and significantly outperforms traditional survival analysis. Using these individual-level predictions, we generate aggregate predicted force profiles which closely align with historical actuals. This and other features offer a rich slate of observations for further empirical analysis, and suggest new policy levers for managing attrition.
15. Subject terms: Retention, attrition, machine learning, forecast, military personnel
16. Security classification: U (report, abstract, this page); 17. Limitation of abstract: UU; 18. No. of pages: 26
19a. Responsible person: David M. Percich; 19b. Telephone: (703) 693-2238
