
American Journal of Obstetrics and Gynecology (2005) 192, 868–74

www.ajog.org

Birth simulator: Reliability of transvaginal assessment of fetal head station as defined by the American College of Obstetricians and Gynecologists classification

Olivier Dupuis, MD,a,b,* Ruimark Silveira, MS,b Adrien Zentner, MS,b André Dittmar, PhD,b Pascal Gaucherand, MD,c Michel Cucherat, MD,d Tanneguy Redarce, PhD,b René-Charles Rudigoz, MDa

Unité de Gynécologie Obstétrique, Hôpital de la Croix Rousse, Lyon, France,a INSA, Villeurbanne, France,b Unité de Gynécologie Obstétrique, Hôpital E. Herriot, Lyon, France,c and Service de Biostatistique, Hospices Civils de Lyon, Lyon, Franced

Received for publication June 14, 2004; revised September 11, 2004; accepted September 11, 2004

KEY WORDS: Birth simulation; Competency; Engagement

Objective: This study was undertaken to investigate the reliability of transvaginal assessment of fetal head station by using a newly designed birth simulator.

Study design: This prospective study involved 32 residents and 25 attending physicians. Each operator was given all 11 possible fetal stations in random order. A fetal head mannequin was placed in 1 of the 11 American College of Obstetricians and Gynecologists (ACOG) stations (−5 to +5) in a birth simulator equipped with a real-time miniaturized sensor. The operator then determined head position clinically using the ACOG classification. Head position was described as: (1) "engaged" or "nonengaged" (engagement code); (2) "high," "mid," "low," or "outlet" (group code); and (3) according to the 11 ACOG ischial spine stations (numerical code). Errors were defined as differences between the stations given by the sensor and by the operator. We determined the error rates for the 3 codes.

Results: "Numerical" errors occurred in 50% to 88% of cases for residents and in 36% to 80% of cases for attending physicians, depending on the position. The mean "group" error rate was 30% (95% CI 25%-35%) for residents and 34% (95% CI 27%-41%) for attending physicians. In most cases of misdiagnosis of a "high" station (87.5% for residents and 66.8% for attending physicians), the "mid" station was retained. For both residents and attending physicians, the average "engagement" error rate was 12%, equally distributed between false diagnoses of engagement and nonengagement.

Conclusion: Our results show that transvaginal assessment of fetal head station is poorly reliable, meaning that clinical training should be promoted. Choosing not to perform vaginal delivery when the fetus is in the "mid" position strongly decreases the risk of applying instruments to an undiagnosed "high" station. Conversely, obstetricians who perform only "low" operative vaginal deliveries also deliver unrecognized "mid" station fetuses. Therefore, residency programs should offer training in "mid" pelvic operative vaginal deliveries. Birth simulators could be used in training programs.
© 2005 Elsevier Inc. All rights reserved.

* Reprint requests: Olivier Dupuis, MD, Unité de Gynécologie Obstétrique, Hôpital de la Croix Rousse, 103 Grande-Rue de la Croix Rousse, 69317 Lyon cedex 04, France. E-mail: [email protected]

0002-9378/$ - see front matter © 2005 Elsevier Inc. All rights reserved.
doi:10.1016/j.ajog.2004.09.028

In 1988, the American College of Obstetricians and Gynecologists (ACOG) implemented a new classification system that divided the birth canal into 11 stations according to the position of the fetal head relative to the ischial spines (−5 to +5).1 For clinical purposes, the 11 stations have been divided into 4 groups: "high" (−5, −4, −3, −2, −1), "mid" (0, +1), "low" (+2, +3), and "outlet" (+4, +5).2 There is some debate about the risk associated with "mid" pelvic operative vaginal deliveries, and only 64% of North American residency programs offer training in such deliveries.3 Only 41% of ACOG fellows claim to perform "mid" pelvic deliveries,4 whereas 86% claim to perform "low" and "outlet" forceps deliveries.4 These 4 groups can be pooled according to a third classification system that differentiates between the nonengaged fetal head ("high" group) and the engaged fetal head ("mid," "low," and "outlet" groups).

Little is known about the accuracy of clinical transvaginal assessment of head position. One study compared abdominal and transvaginal clinical assessment5 and another compared clinical and ultrasound assessment,6 but neither of these studies used a gold standard, meaning that they are subject to significant bias.

We designed a birth simulator with a fetal head equipped with a location sensor. This miniaturized, real-time tracking sensor served as a gold standard and allowed us to assess the reliability of the clinical diagnosis. We used this birth simulator to assess the value of clinical transvaginal examination by a resident or an attending physician.

Material and methods

This prospective, randomized study was performed between July 2003 and January 2004. Residents and attending physicians were recruited from 6 university maternity hospitals. We used a newly designed mechanical birth simulator that consisted of 4 parts: a fetal mannequin representing a term newborn head, a maternal mannequin, an interface pressure system, and a location system (Figure). The head was modeled on the skull of a dissected term fetus. Cranial computed tomographic images of this skull were recorded with Amira software (TGS, Inc, San Diego, Calif). Data were exported to a fast prototyping machine, a Stratasys FDM 1600 device (PADT, Inc, Tempe, Ariz), which was used to produce an acrylonitrile butadiene styrene (ABS) head. This head was then covered with a 5-mm thick layer of natural latex rubber. The head was equipped with a commercially available, real-time, miniaturized tracking sensor with 6 degrees of freedom. This sensor can record position to within 1.8 mm and orientation to within 0.5 degrees. A computer program was developed to allow us to visualize the head station and location on a PC screen in real time. The fetal head was attached to a pneumatic actuator via a spherical link. The maternal mannequin consisted of an anatomically correct pelvic model (Simulaids, Inc, Woodstock, NY). An interface pressure system, made of an elastomer balloon inflated to 400 millibar, mimicked the pelvic muscles. The location system consisted of a pneumatic actuator (Festo AG, Inc, Esslingen, Germany) supported on a mechanical system called the "movable mechanical stair" (MMS). The actuator made it possible to move the fetal head along the x-axis from −5 to +5 cm relative to the ischial spines. The MMS was designed to locate the fetal head along the y- and z-axes. The actuator can be set in 5 positions along the z-axis (vertical axis): OA, (ROA, LOA), (ROT, LOT), (ROP, LOP), and OP. This device could also be moved into 1 of 5 positions along the y-axis (horizontal axis): (OA, OP), (LOA, LOP), (ROA, ROP), ROT, and LOT. Combining the 2 systems made it possible to place the fetal head in the OA, ROA, LOA, ROT, LOT, ROP, LOP, and OP locations.

The following definitions were used:

Station was defined as the position of the fetal head relative to the ischial spines according to the ACOG classification.1

High, mid, low, and outlet groups were defined according to the ACOG classification.2,7

Head engagement was defined as the descent of the biparietal plane of the fetal head to a level below that of the pelvic inlet.8

Test "reliability" was defined as the degree to which the test results were consistent.9

Figure  The birth simulator. It includes 4 parts: a fetal mannequin representing a term newborn head, a maternal mannequin, an interface pressure system, and a location system.

Three codes were used to describe the fetal head station. The "numerical" code referred to the 11 ischial spine stations according to the ACOG classification (−5 to +5). The "group" code referred to the high, mid, low, or outlet groups. Finally, the "engagement" code referred either to an "engaged" or a "nonengaged" station. The group and engagement codes were derived from the numerical code using the ACOG classification. Errors were defined as any difference between the station (numerical, group, or engagement) given by the operator and the station given by the sensor (eg, −4 wrongly diagnosed as −2, high diagnosed as mid, or nonengaged diagnosed as engaged). The rate of errors was analyzed using the "group" and the "engagement" codes. Operators were either residents or attending physicians. Location refers to the fetal head location: OA, ROA, LOA, ROT, LOT, ROP, LOP, and OP.
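For illustration, the derivation of the group and engagement codes from the numerical code is deterministic and follows directly from the ACOG definitions above. The short R sketch below (R being the statistical package used for the analysis) makes the mapping explicit; it is our illustration, not code from the original study, and the function names are ours.

```r
# Derivation of the "group" and "engagement" codes from the numerical
# ACOG station (illustrative sketch; function names are ours).
station_to_group <- function(station) {
  stopifnot(station %in% -5:5)
  if (station <= -1) "high"        # -5 to -1
  else if (station <= 1) "mid"     # 0 and +1
  else if (station <= 3) "low"     # +2 and +3
  else "outlet"                    # +4 and +5
}

station_to_engagement <- function(station) {
  # The "high" group corresponds to a nonengaged head; the "mid", "low",
  # and "outlet" groups correspond to an engaged head.
  if (station_to_group(station) == "high") "nonengaged" else "engaged"
}

station_to_group(-2)       # "high"
station_to_engagement(-2)  # "nonengaged"
```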

The following experimental protocol was used. Immediately before the experiment, each operator was allowed to examine the maternal mannequin to palpate the ischial spines, sacrum, coccyx, and fetal head. Each operator was given all 11 possible fetal stations in random order and therefore performed 11 clinical transvaginal examinations. To avoid any bias linked to the fetal head location, a second randomization was performed, giving the location to be used for every station. The fetal head station and location were changed between each examination according to the randomization table. For this, 1 of the authors (O.D. or R.S.) moved the fetal head along the x-axis using the actuator and along the y- and z-axes using the MMS. The position was maintained by using a mechanical screw. The real-time sensor allowed the authors to verify on the PC screen that the fetal head had reached the exact station and location. The operator was asked to give the fetal head station using the numerical code. Operators were blinded to the results; the computer screen was turned away from the operator and no results were given during the study period.
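The double randomization described above (a random order for the 11 stations, plus a location drawn for each station) can be sketched as follows. This is our reconstruction in R of how such a randomization table could be generated, not the authors' actual procedure or code.

```r
# Sketch of the double randomization: station order and, for each station,
# a fetal head location (our reconstruction, not the authors' code).
stations  <- -5:5
locations <- c("OA", "ROA", "LOA", "ROT", "LOT", "ROP", "LOP", "OP")

set.seed(2004)  # arbitrary seed so the example is reproducible
randomization_table <- data.frame(
  examination = 1:11,
  station     = sample(stations),                      # all 11 stations in random order
  location    = sample(locations, 11, replace = TRUE)  # one location drawn per station
)
randomization_table
```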

The primary outcome analyzed was the rate of error between the real fetal head station given by the real-time miniaturized tracking sensor (gold standard) and the clinical fetal head station given by the operator using the numerical code. Secondary outcomes were the rates of error expressed using the group and engagement codes. Statistical analysis was stratified by operator to take into account the fact that 11 measurements were performed by each operator. The intraoperator error rates were calculated for each operator and then averaged across all operators. The group error rate and the engagement error rate were calculated for each operator over the 11 error occasions given to that operator. Power analysis, with a prerequisite of a 95% CI around an estimated fraction of error of no more than ±20%, indicated that at least 25 subjects were required for each subgroup. Normal CIs around all observed rates were calculated and compared with t tests. P values of less than .05 were considered statistically significant. Statistical packages used included Excel 2001 (Microsoft Office 2001, Microsoft Corporation, Redmond, Wash) and the R statistical package (version 1.8.1; R Development Core Team, 2003. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. http://www.R-project.org).
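To make the stratified analysis concrete, the R sketch below reproduces the type of calculation described (per-operator error rates over 11 examinations, a normal 95% CI around the mean rate, and a t test between subgroups). The error counts are hypothetical placeholders, not the study data.

```r
# Illustration of the operator-stratified analysis (hypothetical counts,
# not the study data): each operator contributes one error rate
# computed over his or her 11 examinations.
n_exams <- 11
resident_errors  <- c(3, 4, 2, 5, 3, 4, 1, 3)   # hypothetical group errors per resident
attending_errors <- c(4, 2, 5, 3, 4, 6, 3, 2)   # hypothetical group errors per attending

resident_rates  <- resident_errors / n_exams     # intraoperator error rates
attending_rates <- attending_errors / n_exams

# Normal 95% CI around the mean error rate of one subgroup
mean_rate <- mean(resident_rates)
se        <- sd(resident_rates) / sqrt(length(resident_rates))
round(mean_rate + c(-1.96, 1.96) * se, 3)

# Comparison of the 2 subgroups with a t test, as described above
t.test(resident_rates, attending_rates)
```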

Results

Thirty-two residents and 25 attending physicians from 6 university-affiliated maternity hospitals participated in this study. Residents had an average of 2.2 years of experience (range 0.5-5). Attending physicians had an average of 9.4 years of experience (range 4-21).

Numerical errors occurred in 50% to 88% of cases in the resident group and in 36% to 80% of cases in the attending physician group, depending on the head station (Table I). Residents made an average of 3.3 group errors (95% CI 2.8-3.9), corresponding to an error rate of 30% (95% CI 25%-35%). None of the residents made zero group errors (Table II). Attending physicians made an average of 3.8 group errors (95% CI 3.0-4.5), corresponding to an error rate of 34% (95% CI 27%-41%).

Undiagnosed "high" stations accounted for 22.4% of the errors made by residents and 15.95% of the errors made by attending physicians (Table III).

The "mid" when really "high" type of error accounted for 87.5% and 66.8% of the misdiagnosed "high" stations for residents and attending physicians, respectively. Undiagnosed "mid" stations accounted for 30.8% and 27.7% of the resident and attending physician errors, respectively (Table III). Operator seniority did not significantly modify the proportion of correct diagnoses.

Table I  Rate of numerical errors according to station

  Actual     Resident (n = 32)            Attending physician (n = 25)
  position   Error rate (%)   95% CI      Error rate (%)   95% CI
  −5         50               34-66       36               19-57
  −4         72               53-86       52               32-72
  −3         63               44-78       80               59-92
  −2         88               70-96       68               46-84
  −1         66               47-81       76               54-90
   0         72               53-86       72               50-87
  +1         81               63-92       76               54-90
  +2         69               50-83       68               46-84
  +3         63               44-78       76               54-90
  +4         53               35-70       72               50-87
  +5         56               38-73       68               46-84

Residents made an average of 1.3 engagement errors (95% CI 1.0-1.74), corresponding to an error rate of 12% (95% CI 9%-16%) (Table IV). Attending physicians made an average of 1.3 engagement errors (95% CI 0.89-1.7), corresponding to an error rate of 12% (95% CI 8%-15%) (Table IV).

False diagnosis of engagement and false diagnosis of nonengagement each accounted for approximately half of the errors for both residents and attending physicians. No significant difference was found between residents and attending physicians.

Comment

Over the past decade, medical simulators have been developed in many specialties, including abdominal surgery,10 anesthesiology,11 pediatrics,12 and urology.13,14

In the United States, gynecology teaching associates (GTAs) provide hands-on training.15 GTAs have been trained to recognize adequate technical skills and can provide students with feedback when their cervix, fundus, or ovaries are being examined. Nevertheless, such training cannot mimic pathologic situations and is not available worldwide.15 Pugh and Youngblood16 recently reported the use of a female pelvis equipped with electronic sensors. They found that this simulator allowed an objective and reliable assessment of physical examination skills. Gonik et al17 were the first to report the use of a physical obstetric model, including a rotational maternal pelvis, an aluminum fetal "skeleton," a tactile sensing glove, and a microcomputer-based data acquisition system. This simulator included fingertip sensors as well as brachial plexus and neck extension sensors. It objectively demonstrated that McRoberts positioning reduces shoulder extraction forces, as well as the incidence of brachial plexus stretching and clavicle fracture.17 The birthing simulators that are currently commercially available do not include an interface system between the maternal bony pelvis and the fetal mannequin. Furthermore, they do not make it possible to assess fetal head station or location.

Table II  Results using the group code: number of errors (and error rate) per operator

  No. of errors per operator        Residents             Attending physicians
  (% of the 11 error occasions)     No. of operators (%)  No. of operators (%)
  0 (0.0)                           0 (0.0)               2 (8.0)
  1 (9.1)                           5 (15.6)              0 (0.0)
  2 (18.2)                          3 (9.4)               4 (16.0)
  3 (27.3)                          10 (31.3)             7 (28.0)
  4 (36.4)                          7 (21.9)              2 (8.0)
  5 (45.5)                          5 (15.6)              4 (16.0)
  6 (54.5)                          1 (3.1)               5 (20.0)
  7 (63.6)                          1 (3.1)               1 (4.0)
  8-11 (72.7-100)                   0 (0.0)               0 (0.0)
  Total                             32 (100.0)            25 (100.0)

Table III  Type of group error for residents and attending physicians: number of errors (% of all group errors), by real position (sensor value) and diagnosed group

  Real position                Diagnosed group
  (sensor value)               High          Mid           Low           Outlet
  Resident results
    High                       –             21 (19.6%)    3 (2.8%)      0 (0.0%)
    Mid                        17 (15.9%)    –             15 (14.0%)    1 (0.9%)
    Low                        2 (1.9%)      19 (17.8%)    –             7 (6.5%)
    Outlet                     0 (0.0%)      1 (0.9%)      21 (19.7%)    –
  Attending physician results
    High                       –             10 (10.65%)   5 (5.3%)      0 (0.0%)
    Mid                        15 (16.0%)    –             11 (11.7%)    0 (0.0%)
    Low                        2 (2.1%)      20 (21.3%)    –             5 (5.3%)
    Outlet                     0 (0.0%)      2 (2.1%)      24 (25.55%)   –

There is currently a training problem in the field of obstetrics. In the United States, the use of forceps decreased by 22% between 1985 and 1992.18 With a training program that has 3000 deliveries annually, 12 residents per year, and a transverse arrest incidence of 0.79% to 2.93%,19 each resident would perform only between 2 and 7 operative vaginal deliveries in 1 year. As the frequency of operative vaginal delivery decreases, it is becoming increasingly difficult to provide adequate training. This might explain why 36% of North American residency programs no longer provide training in midpelvic delivery procedures.3 A highly reliable birth simulator, such as the one described here, could be used to train obstetric residents and might help compensate for this declining clinical exposure.
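As a back-of-the-envelope check of this estimate (our illustration, using only the figures quoted in the paragraph above):

```r
# Rough check of the per-resident caseload estimate quoted above
# (our illustration; the figures are those given in the text).
deliveries_per_year <- 3000
residents_per_year  <- 12
incidence <- c(low = 0.0079, high = 0.0293)  # transverse arrest incidence

round(deliveries_per_year * incidence / residents_per_year, 1)
# low: 2.0, high: 7.3 -> roughly 2 to 7 cases per resident per year
```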

Several classification systems have been proposed to classify the difficulties encountered during operative vaginal deliveries. In 1952, Dennen20 described a 4-level classification system for forceps delivery. A "low midforceps" category was created to subdivide the broad range of midforceps delivery.20 In 1965, the ACOG adopted a 3-level classification system (ie, high, mid, and low forceps). In 1988 a new system was created, based on the hypothesis that the difficulty of the operation and its inherent risk depend on the station from which delivery is initiated and the need to perform rotations. Maternal risk factors, such as the risk of third- and fourth-degree perineal tears, as well as fetal injuries, have been reported to be correlated with the station from which delivery is initiated.2,21 These studies led to the traditional belief that "high" operative vaginal deliveries should not be performed, whereas "mid" ones are potentially traumatic to the fetus and "low" and "outlet" ones safe.

Table IV  Results using the engagement code: number of errors (and error rate) per operator

  No. of errors per operator        Residents             Attending physicians
  (% of the 11 error occasions)     No. of operators (%)  No. of operators (%)
  0 (0.0)                           7 (21.9)              3 (12.0)
  1 (9.1)                           14 (43.8)             16 (64.0)
  2 (18.2)                          5 (15.6)              3 (12.0)
  3 (27.3)                          5 (15.6)              2 (8.0)
  4 (36.4)                          1 (3.1)               1 (4.0)
  5-11 (45.4-100)                   0 (0.0)               0 (0.0)
  Total                             32 (100.0)            25 (100.0)

The reliability of clinical transvaginal diagnosis of fetal head station has not been studied in detail. Sherer and Abulafia6 recently compared transvaginal digital examination and transabdominal ultrasound determinations for the diagnosis of engagement. The ultrasound method described should allow an "objective" diagnosis of engagement. However, to position the ultrasound transducer correctly, the sacral promontory must first be located by clinical examination. The reliability of this clinical step has not been studied, so the ultrasound method cannot be considered a gold standard.

Our study shows that the accuracy of clinical transvaginal examination is poor. With the exception of the −5 level for attending physicians (error rate 36%), numerical error rates were always above 50%. For residents and attending physicians, the error rates were lowest at the two ends of the scale. This can be explained by the fact that errors can occur only on one side at the ends of the scale (−5 and +5), whereas errors occurring at any other station are two sided. It can be argued that not every "numerical" error is clinically relevant. Indeed, numerical errors that do not change the group (eg, −3 falsely diagnosed as −5) are not clinically significant, whereas numerical errors that alter the group classification are clinically relevant (eg, −2 falsely diagnosed as +1). This is why we also studied the results using the "group" code. The "group" code helps the clinician to decide whether or not to perform an operative vaginal delivery. Use of the group code showed that group errors are not rare, occurring in an average of 30% of cases for residents and 34% of cases for attending physicians. Two of the 25 attending physicians made no group errors, whereas no residents made zero group errors. This might be explained by a learning curve in clinical skills.

In the case of instrumental deliveries, group errors are potentially dangerous when the operator states that the fetal head is below its real station, especially when the operator fails to identify a "high" station. These dangerous group errors accounted for 22.4% of the errors made by residents and for 15.95% of the errors made by the attending physicians. Interestingly, false diagnosis of "mid" station ("mid" when really "high") accounted for 87.5% of these errors in the resident group (19.6% of 22.4%) and for 66.8% (10.65% of 15.95%) of these errors in the attending physician group. In other words, choosing not to perform "mid" instrumental deliveries would considerably decrease the number of potentially dangerous situations for most operators. Our data therefore support the statement of Knight et al5 that "clinical surveys of the outcome of midpelvic deliveries which have been performed by clinicians…must be considered to possibly include cases of unrecognized high forceps." Such errors could explain the difference between the so-called "easy" mid operative vaginal deliveries, which are real mid operative vaginal deliveries, and the "difficult" mid operative vaginal deliveries, which are in reality high operative vaginal deliveries.

Obstetricians reading studies reporting neonatal and maternal morbidity after "mid" operative vaginal delivery should remember that some of those deliveries are actually "high" operative vaginal deliveries. Some group errors were of the "low" when really "mid" type, meaning that an operator who decides to perform only "low" instrumental deliveries will also perform unrecognized "mid" deliveries. For residents, such errors represent 14% of all errors. This explains why we believe that all residency programs should still provide training in mid operative vaginal deliveries.

Engagement is 1 of the prerequisites for an operative vaginal delivery. The traditional cutoff is the zero station. Expression of our results with the use of the engagement code showed that errors occurred in 12% of cases. In other words, should delivery be indicated, an unnecessary cesarean section (a diagnosis of "not engaged" when the head is actually engaged) or a dangerous operative vaginal delivery (a diagnosis of "engaged" when the head is not engaged) would be performed in 12% of cases. For both residents and attending physicians, half of all errors were of the former type and half of the latter type. Knight et al5 compared the value of abdominal and vaginal examinations for the diagnosis of engagement. In that study, 104 consecutive women were assigned to 1 of 3 groups according to the results of abdominal and vaginal examinations. Engagement by abdominal examination was defined as the presence of no more than one fifth of the fetal head palpable above the pelvic brim. In 15.3% of cases, the head was found to be engaged on vaginal examination and not engaged on abdominal palpation. Unfortunately, this study compared 2 subjective types of examination (abdominal vs vaginal) and did not use a gold standard. It was therefore impossible to know whether the errors occurred in the "abdominal" or the "vaginal" examination.

We are aware that our birth simulator does not mimic every clinical situation. The fetal head mannequin used does not mimic molding or caput succedaneum. These are both classical clinical pitfalls; therefore, even more errors may occur in a real situation than in this "simulation" setting.

Our data suggest that birth simulators equipped with a spatial location sensor might help to assess the reliability of traditional clinical parameters. Clinical transvaginal assessment of fetal head station is not fully reliable. Clinical training on a birth simulator should be promoted, and residency programs should still provide training in mid operative vaginal deliveries.

Acknowledgments

We thank S. Dupuis-Girod for reviewing the manuscript and M. Berland, D. Raudrant, D. Cabrol, and R. Frydman for providing residents and attending physicians from their maternity units in Lyon and Paris.

References

1. Cunningham F, MacDonald PC, Gant NF, et al. Conduct of normal labor and delivery. In: Lange A, editor. Williams obstetrics. Vol 1. Stamford (CT): Appleton & Lange; 1997.
2. Robertson PA, Laros RK, Zhao RL. Neonatal and maternal outcome in low-pelvic and midpelvic operative deliveries. Am J Obstet Gynecol 1990;162:1436-44.
3. Bofill JA, Rust OA, Perry KG, Roberts WE, Martin RW, Morrison JC. Forceps and vacuum delivery: a survey of North American residency programs. Obstet Gynecol 1996;88:622-5.
4. Bofill JA, Rust OA, Perry KG, Roberts WE, Martin RW, Morrison JC. Operative vaginal delivery: a survey of fellows of ACOG. Obstet Gynecol 1996;88:1007-10.
5. Knight D, Newnham JP, McKenna M, Evans S. A comparison of abdominal and vaginal examinations for the diagnosis of engagement of the fetal head. Aust NZ J Obstet Gynaecol 1993;33:154-8.
6. Sherer D, Abulafia O. Intrapartum assessment of fetal head engagement: comparison between transvaginal digital and transabdominal ultrasound determinations. Ultrasound Obstet Gynecol 2003;21:430-6.
7. Cunningham F, MacDonald PC, Gant NF, Leveno KJ, Gilstrap LC 3rd, Hankins GDV, et al. Operative vaginal delivery. In: Lange A, editor. Williams obstetrics. Vol 1. Stamford (CT): Appleton & Lange; 1997.
8. Cunningham F, MacDonald PC, Gant NF, Leveno KJ, Gilstrap LC 3rd, Hankins GDV, et al. Anatomy of the reproductive tract. In: Licht J, editor. Williams obstetrics. Stamford (CT): Appleton & Lange; 1997.
9. Gay LR. Educational evaluation and measurement: competencies for analysis and application. Columbus (OH): Charles E. Merrill; 1985.
10. Adrales GL, Park AE, Chu UB, Witzke DB, Donnelly MB, Hoskins JD, et al. A valid method of laparoscopic simulation training and competence assessment. J Surg Res 2003;114:156-62.
11. Seropian MA. General concepts in full scale simulation: getting started. Anesth Analg 2003;97:1695-705.
12. Tsai TC, Harasym PH, Nijssen-Jordan C, Powell G. The quality of a simulation examination using a high-fidelity child manikin. Med Educ 2003;37:72-8.
13. Jacomides L, Ogan K, Cadeddu JA, Pearle MS. Use of a virtual reality simulator for ureteroscopy training. J Urol 2004;171:320-3.
14. Maran NJ, Glavin RJ. Low- to high-fidelity simulation: a continuum of medical education? Med Educ 2003;37:22-8.
15. Beckmann CR, Sharf BF, Barzansky BM, Spellacy WN. Student response to gynecologic teaching associates. Am J Obstet Gynecol 1986;155:301-6.
16. Pugh CM, Youngblood P. Development and validation of assessment measures for a newly developed physical examination simulator. J Am Med Inform Assoc 2002;9:448-60.
17. Gonik B, Allen R, Sorab J. Objective evaluation of the shoulder dystocia phenomenon: effect of maternal pelvic orientation on force reduction. Obstet Gynecol 1989;74:44-7.
18. Ventura SJ, Martin JA, Curtin SC, Mathews TJ. Report of final natality statistics, 1996. Mon Vital Stat Rep 1998;46:1-99.
19. Jain V, Guleria K, Gopalan S. Mode of delivery in deep transverse arrest. Int J Gynecol Obstet 1993;43:129-35.
20. Dennen EH. A classification of forceps operations according to station of head in pelvis. Am J Obstet Gynecol 1951;63:272-83.
21. Hagadorn-Freathy AS, Yeomans ER, Hankins GDV. Validation of the 1988 ACOG forceps classification system. Obstet Gynecol 1991;77:356-60.