DISPLAY QUALITY OF DIFFERENT MONITORS IN FELINE DIGITAL
RADIOGRAPHY
EBERHARD LUDEWIG, CHRISTIAN BOELTZIG, KATRIN GÄBLER, ANJA WERRMANN, GERHARD OECHTERING
In human medical imaging, the performance of the monitor used for image reporting has a substantial impact on
the diagnostic performance of the entire digital system. Our purpose was to compare the display quality of
different monitors used in veterinary practice. Two medical-grade gray scale monitors (one cathode-ray
tube [CRT], one liquid crystal display [LCD]) and two standard consumer-grade color monitors (one CRT,
one LCD) were compared in their ability to display anatomic structures in cats. Radiographs of the stifle joint and
the thorax of 30 normal domestic shorthair cats were acquired by use of a storage phosphor system.
Two anatomic features of the stifle joint and five anatomic structures of the thorax were evaluated. The
two medical-grade monitors had superior display quality compared with standard PC monitors. No differences
were seen between the monochrome monitors. In comparison with the color CRT, the ratings of the color
LCD were significantly worse. The ranking order was uniform for both the region and the criteria investigated.
Differences in monitor luminance, bit depth, and screen size were presumed to be the reasons for the observed
varying performance. The observed differences between monitors place an emphasis on the need for guide-
lines defining minimum requirements for the acceptance of monitors and for quality control in veterinary
radiography. r 2010 Veterinary Radiology & Ultrasound, Vol. 52, No. 1, 2011, pp 1–9.
Key words: cat, CRT, feline, LCD, monitor, stifle joint, thorax, visual grading analysis study.
Introduction
IN DIGITAL RADIOGRAPHY the imaging chain comprises
four separate technical steps: signal acquisition, signal
processing, image archiving, and image presentation. The
performance of a digital radiography system depends on
the interplay of those interdependent parts.1,2 There has
been a transition of image presentation from reviewing
images on film, so-called hard copy viewing, to reviewing
images on computer monitors, or soft copy viewing. Soft
copy viewing offers advantages over hard copy reading
since the image can be adjusted on-line. The option to use
the entire spectrum of attenuation differences recorded by
the detector means that more information is available.3–5
Furthermore, with soft copy viewing zooming or measure-
ment tools are available and the cost of film, film process-
ing, and hard-copy image storage and retrieval are
eliminated.3,4,6 In the human medical profession, the tran-
sition from hard copy to soft copy viewing was not
instantaneous and was based on substantial prior work in
the field of medical monitor displays and workstation
technology. Historically, soft copy viewing in the human
medical profession was affected by limited monitor perfor-
mance. New generations of monitors offered better display
properties. Gray scale cathode-ray tube (CRT) monitors
and, later on, gray scale liquid crystal displays (LCD)
became the display media of choice for medical images.
Differences in monitor performance can influence the dis-
play quality and consequently the overall final diagnosis.7,8
To ensure a high and consistent level of image display
quality in human medical practice, guidelines exist that
define the minimum technical prerequisites for monitors
and methods of quality assurance.1,9 To our knowledge,
comparable regulations for veterinary radiology do not
exist. Because the price of monitors specifically designed
for medical imaging can exceed the price of a standard
computer monitor by a factor of ten, consumer-grade
monitors often are used in veterinary practice. Considering
the corresponding radiation safety aspects, poor monitor
selection or inadequate calibration violates the ALARA
principle. At worst, veterinary personnel receive occupa-
tional exposure to create an image that cannot be evaluated
adequately due to an unacceptable monitor.
This study was motivated by the uncertainty of whether
specific information dealing with monitor evaluation in
the human medical profession is applicable to veterinary
radiology. Our aim was to compare the display quality of
selected monitors on the basis of subjective assessment of the
appearance of anatomic structures in feline radiographs.
We hypothesized that monitors recommended for primary
image interpretation in human radiology offer superior dis-
play properties in feline radiographs as well. Furthermore,
due to the variability in object contrast and size, differences
in the ratings between the selected criteria could be expected.
From the Department of Small Animal Medicine, University of Leipzig, An den Tierkliniken 23, D-04103 Leipzig, Germany.
Address correspondence and reprint requests to Eberhard Ludewig, at the above address. E-mail: [email protected]
Received April 15, 2010; accepted for publication June 16, 2010.
doi: 10.1111/j.1740-8261.2010.01733.x
Material and Methods
Four types of monitors were evaluated (Table 1). The
two monochrome displays represented medical-grade de-
vices consistent with national standards.10–12 The color
displays were standard consumer grade monitors. At the
beginning of each reading session the settings of the gray
scale monitors were rechecked. Brightness and contrast of
the color monitors were adjusted to the achievable opti-
mum with the help of a SMPTE RP-133 test pattern.13 The
monitors were controlled by the graphic card of the indi-
vidual computer.
Under identical exposure conditions 30 lateral stifle joint
radiographs and 30 right lateral thoracic radiographs of 30
anesthetized normal domestic shorthair cats older than 1
year were acquired. General anesthesia was required for
reasons other than radiography, e.g. castration, removal of an orthopedic implant, or treatment of dental disease.
The radiographs were made using a storage-phosphor system* on a Bucky table† (Table 2). Uniform processing
was employed for both the stifle and the thoracic radiographs. A dynamic range reconstruction algorithm and unsharp mask filtering were employed for the images of the stifle joint and the thorax, respectively. In preliminary studies, the parameters of these processing algorithms were optimized with regard to detail visibility (Table 3).
The investigation was designed as an observer perfor-
mance study. In an absolute visual grading analysis (VGA) study, the observers state their opinion on the visibility of a certain feature on the basis of an absolute scale, without reference pictures.14 The images were evaluated indepen-
dently on the various monitors by four observers with a
minimum of 3 years of experience with digital radiography
(one board-certified radiologist, three residents of a na-
tional specialization program). Two features of stifle
radiographs and the appearance of five anatomic struc-
tures of the thorax were scored on the basis of a four-point
scale (4, excellent; 3, average; 2, borderline sufficient; 1,
insufficient) (Fig. 1). The observers were trained for their
task using a separate set of images. Consistent with the
practical routine of image reading, the radiologists were
encouraged to apply the entire workstation functionality‡ to record as much information as possible. Evaluation time
per image was unlimited. To ensure uniform ambient
conditions all workstations were placed in the same reading
room. The ambient light and other conditions of the
viewing environment fulfilled the requirements for medical
image interpretation.11,15 Lighting was indirect, and
illuminance at the monitor surface was <100 lx. Observers were unaware of the animal identification.
Table 1. Technical Specification of the Monitors

                              Gray Scale CRT         Gray Scale LCD        Color CRT              Color LCD
Manufacturer labeling         Philips 21 CY9*        Totoku ME 181L†       ADI Microscan PD959‡   Fujitsu Siemens Amilo A§ (laptop)
Physical size (in.)           21                     18.1                  19                     15.1
Matrix                        1280 × 1024 (1.3 MP)   1280 × 1024 (1.3 MP)  1600 × 1200 (1.9 MP)   1024 × 768 (0.8 MP)
Dot pitch (mm)                0.35                   0.28                  0.24                   0.30
Bit depth                     10                     10                    8                      8
Maximum luminance (cd/m²)     650                    700                   120                    200
Operating luminance (cd/m²)   250                    360                   100                    200
Contrast ratio                450:1                  400:1                 400:1                  400:1
Graphic card (type)           SUN 81-76              Matrox Millennium P650 PCIe 128M  NVIDIA GeForce 7300LE  ATI IGP 320M
Calibration DICOM GSDF        Yes                    Yes                   No                     No

*Philips Healthcare. †Totoku Electric Co., Tokyo, Japan. ‡ADI Corp., Taipei, Taiwan. §Fujitsu-Siemens, Sömmerda, Germany.
CRT, cathode-ray tube; LCD, liquid crystal display; DICOM, Digital Imaging and Communications in Medicine; GSDF, Grayscale Standard Display Function.
Table 2. Exposure Technique

X-ray system
  Type                         Philips Bucky Diagnost†
  Filtration                   2.5 mm Al
  Focus size                   0.6 × 0.6 mm²
Storage phosphor system*
  Screen                       Fuji HR-V
  Reader                       Philips AC 500
  Spatial frequency            5 lp/mm
  Detective quantum efficiency (70 kVp; 1 lp/mm)   21%
Exposure conditions            Stifle joint    Thorax
  Grid                         No              No
  Focus-to-detector distance   110 cm          110 cm
  Field size                   10 × 8 cm²      15 × 21 cm²
  Tube potential               44 kVp          52 kVp
  Tube current–time product    8 mAs           6.3 mAs
  Exposure time                36.0 ms         21.6 ms
  Dose–area product            0.9 cGy·cm²     3.5 cGy·cm²

*Fuji HR-V, Fujifilm Medical Systems, Tokyo, Japan; PCR AC 500, Philips Healthcare, Hamburg, Germany.
†Bucky Diagnost TH, Philips Healthcare, Hamburg, Germany.
‡eFilm workstation, version 1.8.3, Merge Healthcare, Milwaukee, WI.
Because the image quality and visibility of the anatomic
structures in this study were rated subjectively, an assess-
ment of consistency was desirable to get an impression of
the objectivity and reliability of the image evaluation.
Kappa statistics were not applicable because of the multi-
variate character of the data, caused by the number of
observers and rating categories. Instead, Spearman’s rank
correlation was performed for all criteria of all observer
combinations. The level of significance was calculated for
each correlation. A significant positive correlation indicates
a high level of consistency between observer ratings.
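The pairwise consistency check described above can be sketched in a few lines of Python. The observer ratings below are hypothetical, Spearman's rho is computed directly as the Pearson correlation of average ranks, and the significance testing performed in the study is omitted:

```python
from itertools import combinations

def average_ranks(values):
    """1-based ranks; tied values share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1  # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the two rank vectors."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical VGA scores (1-4) of four observers for the same six images
ratings = {
    "obs1": [4, 3, 4, 2, 3, 4],
    "obs2": [4, 3, 3, 2, 3, 4],
    "obs3": [3, 3, 4, 2, 2, 4],
    "obs4": [4, 2, 4, 1, 3, 3],
}
for a, b in combinations(ratings, 2):
    print(a, b, round(spearman_rho(ratings[a], ratings[b]), 2))
```

In practice one would use a statistics library for this; the point is only that each observer pair yields one coefficient, giving the per-criterion means and ranges reported in Table 4.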
Median and average absolute deviation of the ratings
were calculated for each anatomic structure. Visual grading
characteristic (VGC) analysis was applied to analyze the
data of the VGA study. In principle, VGC analysis treats
the scale steps as ordinal with no assumptions of the dis-
tribution of the data being made. It handles visual grading
data in a fashion similar to ROC data. The area under the
curve (AUCVGC) is a measure of the difference in the image
quality between two modalities. A curve on or close to the diagonal, equivalent to an AUCVGC of about 0.5, indicates equality between the monitors compared (Fig. 2).16
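As a rough sketch of how such an analysis proceeds (the ratings shown are hypothetical, and the axis convention is chosen here so that an AUCVGC above 0.5 favors the first modality):

```python
def vgc_points(ratings, scale=(1, 2, 3, 4)):
    """Cumulative fraction of ratings at or above each score, from the
    strictest threshold down; the curve starts at 0 and, because every
    rating is >= min(scale), ends at 1."""
    n = len(ratings)
    pts = [0.0]
    for s in sorted(scale, reverse=True):
        pts.append(sum(r >= s for r in ratings) / n)
    return pts

def vgc_auc(ratings_a, ratings_b, scale=(1, 2, 3, 4)):
    """Area under the VGC curve via the trapezoidal rule.
    Modality A is placed on the ordinate, so an AUC above 0.5 means
    A received systematically higher ratings than B."""
    ya = vgc_points(ratings_a, scale)
    xb = vgc_points(ratings_b, scale)
    return sum((xb[i + 1] - xb[i]) * (ya[i + 1] + ya[i]) / 2.0
               for i in range(len(xb) - 1))

# Hypothetical pooled ratings of the same images on two displays
auc = vgc_auc([4, 4, 3, 4, 3], [3, 2, 2, 3, 1])  # > 0.5: first display better
```

Because only the ranks of the cumulative proportions matter, the measure is non-parametric and rank-invariant, which is why it suits ordinal VGA scores.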
Results
The analysis was based on a total of 3360 individual
observer decisions. For the individual readers (E.L., A.W.,
C.B., K.G.) the mean values of the correlation coefficients
averaged over the two criteria of the stifle joint images were
Fig. 1. Definition of criteria for the depiction of diagnostically relevant anatomic structures in the evaluation of image quality.

(A) Stifle joint
  Bone: Identification of the subchondral borders (black arrows), discrimination between trabecular and compact bone (black circles), delineation of the patella, fabella(e), and popliteal sesamoid (black asterisks).
  Soft tissue: Demarcation of the infrapatellar fat pad (open white triangles) and extraarticular soft tissue structures (closed white triangles: muscle contours; white arrows: patellar ligament).
(B) Thorax
  Trachea: Discrimination of trachea and principal bronchi from the adjacent mediastinum.
  Cranial lung field: Visibility of small vessels (white arrows) in the cranial lung field.
  Sternum: Visibility of the border and the architecture of the sternebrae.
  Cardiac silhouette: Identification of the caudal border of the cardiac silhouette.
  Caudodorsal thoracic field: Rendition of the aorta (open triangles), the caudal vena cava (closed triangles), pulmonary vessels (arrows), and contour of the diaphragm.
Table 3. Image Processing Parameters

Stifle joint
  Processing algorithm: dynamic range reconstruction
    Contrast equation            0.8 (kernel size: 135)
    Contour sharpness            1.00 (kernel size: 3, curve type: F)
Thorax
  Processing algorithm: unsharp mask filter
    Gradient amount (GA)         1.17
    Gradient type (GT)           E
    Gradient center (GC)         1.80
    Gradient shift (GS)          -0.28
    Frequency rank (RN)          9
    Frequency type (RT)          U
    Frequency enhancement (RE)   1.4
    Kernel size                  5

Processing workstation: Easy Vision Rad Release 4.2 L5 (Philips Healthcare, Hamburg, Germany).
0.73, 0.74, 0.66, and 0.64, respectively. The mean values
over the five structures of the thoracic images were 0.75,
0.67, 0.71, and 0.65, respectively. Because of the sustained
high level of significance of the underlying correlations
(P ≤ 0.001) the subsequent VGC analysis was based upon
the pooled data of the four observers (Table 4).
The results of the evaluation of the image quality are summarized in Table 5 and in Fig. 3. The data revealed a completely uniform ranking order of the four monitors over all features evaluated. The two medical-grade monitors offered clearly superior display quality. There were no significant differences in the ratings between those two monitors. In comparison, the display quality of the color
CRT was inferior. This was characterized by both lower
median values and significant differences in the corre-
sponding AUCVGC values based on the direct compari-
sons. The median of four out of the seven features of the
color LCD ratings was lower than those for the color CRT
ratings. Concerning the individual score allocations, an insufficient quality (grade 1) was recorded for the color LCD in five criteria. Using the color CRT, this was seen only for
one out of the seven features. In the gray scale monitors
there was no grade 1 allocation and even grade 2 ratings
were documented only for a single feature.
Discussion
The quality of a digital radiographic image is limited by
the weakest part of the imaging chain. We found obvious
differences between various monitors with regard to the
rendering of anatomic detail in feline radiographs. Image
quality was better when using a monitor that met the criteria defined for primary image evaluation in human medical practice.
A comparison of our findings with results from other
studies performed with either clinical images from humans
Table 4. Spearman's Rank Correlation Coefficients of the Individual Observers

Observer                        1            2            3            4
Stifle joint
  Bone                          0.76         0.76         0.72         0.72
                                0.71–0.84    0.71–0.84    0.71–0.73    0.71–0.73
  Soft tissue                   0.70         0.73         0.60         0.57
                                0.59–0.88    0.63–0.89    0.48–0.66    0.48–0.63
  All structures                0.73         0.74         0.66         0.64
                                0.59–0.88    0.63–0.89    0.48–0.73    0.48–0.73
Thorax
  Trachea                       0.73         0.65         0.68         0.69
                                0.69–0.76    0.62–0.69    0.62–0.76    0.65–0.74
  Cranial lung field            0.83         0.78         0.77         0.57
                                0.82–0.85    0.73–0.82    0.73–0.83    0.75–0.85
  Sternum                       0.69         0.56         0.68         0.68
                                0.61–0.79    0.47–0.61    0.60–0.80    0.47–0.64
  Cardiac silhouette            0.71         0.67         0.66         0.55
                                0.63–0.76    0.53–0.75    0.50–0.76    0.50–0.63
  Caudodorsal thoracic field    0.76         0.67         0.74         0.73
                                0.70–0.80    0.66–0.70    0.66–0.78    0.64–0.80
  All structures                0.75         0.67         0.71         0.65
                                0.69–0.83    0.56–0.78    0.66–0.77    0.55–0.73

Top value of each cell: mean of the correlation coefficients (in relation to the other three observers). Bottom value: minimum and maximum of the correlation coefficients (in relation to the other three observers).
Fig. 2. The visual grading characteristic (VGC) curve (right) from the data of the ratings for the criterion "cranial lung field" for the color cathode-ray tube (CRT) and the color liquid crystal display (LCD); the left panels show the number of ratings per score (1–4) for (A) the color CRT and (B) the color LCD. The boxes represent the operating points corresponding to the observers' interpretation. The area under the curve (AUCVGC = 0.66 ± 0.07, mean ± 95% confidence interval) differs significantly from 0.5, indicating a superior display quality of the color CRT (A) in comparison with the color LCD (B).
or phantoms was hampered for two major reasons. One was
that the monitors investigated diverged substantially in
their technical properties. The second related to the vastly
different target structures. Thus, it is not surprising that a
number of human studies described differences in monitor performance8,9,17–19 while others found equal display quality.20–23
The quality of a monitor is determined by the interplay
of several factors, such as screen size, pixel size, luminance,
contrast ratio, and bit depth. Further characteristics, such as phosphor type, gray scale vs. color design, glare and reflection characteristics, and display calibration, are important as well.1,9,24,25 The importance of brightness and spatial resolution has been emphasized.9,22,23,25,26 It is
likely that the superior performance of medical-grade
monitors in our study was related primarily to the ability of
the monitors to display more shades of gray. Luminance
was three to five times higher and therefore different look-
up tables were applied. The main advantage of high lumi-
nance is that it is easier to see the entire gray scale from
white to black in an image. Brighter monitors always yield
better perceived contrast.1,25 Furthermore, the gray scale
monitors were able to display 1024 shades of gray com-
pared with 256 shades for the color monitors, which also
improved gray scale rendition. Additional benefits of the
medical-grade gray scale monitors were that they were
calibrated to the DICOM part 14 Grayscale Standard
Display Function (GSDF).27 The aim of the calibration
was to obtain consistent presentation on all displays by
distributing the total contrast of the display across the en-
tire gray scale. As a result, objects were presented with the
same contrast regardless of whether they were located in
dark or bright parts of the image.28
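The principle can be illustrated with a deliberately simplified sketch. The true DICOM PS 3.14 GSDF is a rational polynomial in the logarithm of the JND index; the equal-luminance-ratio model below merely stands in for it to show how a calibration lookup table spreads perceptually similar steps across the whole gray scale (all values and names are illustrative):

```python
# Simplified illustration of GSDF-style calibration: gray levels are
# spaced so that each step in driving level produces a roughly equal
# perceptual step. Equal luminance RATIOS (Weber-law behavior) stand in
# here for the actual DICOM PS 3.14 just-noticeable-difference model.
def calibration_lut(l_min, l_max, levels=256):
    """Target luminance (cd/m2) for each of `levels` driving levels."""
    ratio = (l_max / l_min) ** (1.0 / (levels - 1))
    return [l_min * ratio ** i for i in range(levels)]

# Example: an 8-bit display driven between 1 and 250 cd/m2
lut = calibration_lut(1.0, 250.0, levels=256)
```

Under this model the relative step lut[i+1]/lut[i] is constant across the gray scale, which is the sense in which the total contrast is "distributed across the entire gray scale"; a 10-bit panel simply divides the same luminance range into 1024 finer steps.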
In general, the pixel size of a monitor is an important
quality factor. However, it is unlikely that the differences in results found in this study were related to the pixel size.
Basically, to ensure adequate resolution, the matrix of the
monitor should be as close as possible to the matrix of the
preprocessed image data. Alternatively, high resolution is
attainable with magnification function.1,22,29 The spatial
frequency of the applied storage phosphor system was
5 line-pairs/mm. According to the Nyquist theorem, this
corresponded to a detector pixel size of 0.1 mm (100 µm).
The pixel size of the monitors included in the study ranged
from 0.24 to 0.35mm. Consequently none of the monitors
was able to display the exposed field of the thoracic
Table 5. Tabulated Results of the Ratings

                               Stifle Joint         Thorax
                               Bone   Soft Tissue   Trachea   Cranial Lung Field   Sternum   Cardiac Silhouette   Caudodorsal Thoracic Field
Gray scale CRT
  Number of ratings
    Score 1                    0      0             0         0                    0         0                    0
    Score 2                    0      0             0         0                    0         0                    0
    Score 3                    26     22            21        23                   8         12                   24
    Score 4                    94     98            99        97                   112       108                  96
  Median                       4      4             4         4                    4         4                    4
  Average absolute deviation   0.22   0.18          0.17      0.19                 0.07      0.10                 0.20
Gray scale LCD
  Number of ratings
    Score 1                    0      0             0         0                    0         0                    0
    Score 2                    0      0             0         4                    0         0                    0
    Score 3                    33     24            36        30                   17        22                   37
    Score 4                    87     96            84        86                   103       98                   83
  Median                       4      4             4         4                    4         4                    4
  Average absolute deviation   0.27   0.20          0.30      0.32                 0.14      0.18                 0.31
Color CRT
  Number of ratings
    Score 1                    0      0             0         2                    0         0                    0
    Score 2                    32     10            20        32                   8         12                   27
    Score 3                    75     84            81        70                   64        64                   82
    Score 4                    13     26            19        16                   48        44                   11
  Median                       3      3             3         3                    3         3                    3
  Average absolute deviation   0.37   0.30          0.32      0.43                 0.47      0.47                 0.32
Color LCD
  Number of ratings
    Score 1                    9      0             9         11                   2         0                    4
    Score 2                    57     36            53        53                   17        37                   67
    Score 3                    51     74            57        54                   92        60                   48
    Score 4                    3      10            1         2                    9         23                   1
  Median                       2      3             2         2                    3         3                    2
  Average absolute deviation   0.55   0.38          0.57      0.58                 0.25      0.50                 0.45
radiographs of 15 × 21 cm² in the original resolution without the use of the magnification function. Because the observers used the zooming function, it is unlikely that the
differences of the monitor pixel size had a significant in-
fluence on the results. However, generally such an influence
may not be ignored completely because the larger monitors
allowed for a higher magnification. Beyond that, other
factors, such as monitor technology (CRT vs. LCD; gray
scale vs. color) or the graphic card could also have contributed to the differences.18,24,30
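The arithmetic behind this argument can be written out explicitly. The field sizes and the 5 lp/mm sampling frequency are taken from Table 2; the helper functions themselves are ours, added only for illustration:

```python
def detector_pixel_mm(spatial_freq_lp_per_mm):
    """Nyquist: resolving one line pair requires two pixels."""
    return 1.0 / (2.0 * spatial_freq_lp_per_mm)

def native_matrix(field_mm, pixel_mm):
    """Monitor matrix needed to show the field at one image pixel
    per monitor pixel (no zooming or panning)."""
    return tuple(round(dim / pixel_mm) for dim in field_mm)

px = detector_pixel_mm(5.0)                 # 0.1 mm detector pixel
thorax = native_matrix((150.0, 210.0), px)  # 15 x 21 cm thoracic field
stifle = native_matrix((100.0, 80.0), px)   # 10 x 8 cm stifle field
# thorax -> (1500, 2100): larger than every monitor matrix in Table 1,
# so a 1:1 display of the thoracic field requires zooming or panning
```

The same calculation for the stifle field gives 1000 × 800 pixels, which the three larger monitors could accommodate natively.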
For the study, monitors that are commonly used in vet-
erinary practice were chosen. They were selected on the
basis of different technologies (CRT vs. LCD; gray scale
vs. color) and physical properties (e.g. luminance, contrast
ratio, spatial resolution, bit depth). Because some vendors
advertise the use of notebook computers for primary in-
terpretation, such as in mobile practice, a standard laptop
display was included. There are significant price differences
between medical-grade and consumer-grade monitors,
making the cheaper standard PC monitors appear attractive.

Fig. 3. AUCVGC values (mean ± 95% confidence interval) of the monitor comparisons. (A) Stifle joint. (B) Thorax.

For example, the current price of a 21 in. medical-grade gray scale LCD monitor is approximately €4000, whereas a consumer-grade color LCD of the same size ranges between €120 and €400. Because of those price
differences, monitor recommendations for human medical
practice are based on the diagnostic purpose for which the
workstation should be used. Basically, expensive high
quality monitors were recommended for image interpreta-
tion, whereas less expensive monitors with a lower perfor-
mance can be used for image viewing without the need for
an immediate final diagnosis. Therefore, two categories of monitors are currently distinguished: monitors for
interpretation of medical images for rendering a clinical
diagnosis, termed primary or diagnostic monitors, and
monitors for viewing medical images without the require-
ment for clinical interpretation, e.g. for viewing images by
medical staff or specialists other than radiologists after an
interpretive report has been rendered, termed secondary or
nondiagnostic monitors.11,12,31–34 Within both of these
categories, the minimum specification differs depending on
the application, e.g. thorax, skeleton, and mammography.
Beyond that, it was proposed to expand this limited
classification range to match the full range of applications
more precisely.34 When a single workstation is to be used for multiple applications, the monitor specification
has to match the highest level needed.11 Accordingly, in
our study, the monitor would have to fulfill the require-
ments for reporting thoracic images. The two gray scale
monitors met the national requirements for primary image
interpretation for both thoracic and skeletal image inter-
pretation in the human medical field11 while the two con-
sumer-grade monitors did not. The quality of the color
CRT was acceptable for secondary viewing. The color
LCD was inadequate even for secondary image review.
Thoracic and joint structures of the cat were selected
because they are small objects with a wide spectrum of
attenuation differences. In the thorax, motion caused by
respiration has to be considered to avoid loss of evaluability. In the human medical profession, comparably challenging conditions are restricted to neonatal radiography.35
As in pediatric radiology it was assumed that the chosen
regions placed high demands on all parts of the imaging
chain to display structures of interest with high diagnostic
quality.36 The rating differences seen did not depend on either the region or the target structures evaluated.
This was somewhat unexpected, because the display quality
of low-contrast structures such as the cranial lung field and
caudodorsal thoracic field in the thoracic radiographs, and
soft tissue in the stifle images, were theoretically more
dependent on monitor properties related to the display
of luminance and contrast than target structures with
higher attenuation differences.37 However, other results
from phantom20,23 and clinical human studies19,23 agree
with ours.
The major drawback of this study was the small number
of monitors investigated. We recognize that the entire
spectrum of monitor quality could not be addressed.
Furthermore, more sophisticated monitors are becoming
available continually. There has been a trend away from
CRT monitors to LCD flat panel displays.18 We did not
evaluate large screen color LCD monitors. Large screen
color LCD monitors with high brightness and resolution (display diameter ≥ 20 in., maximum luminance ≥ 200 cd/m², matrix ≥ 2 MP) can perform similarly to gray scale monitors in human medical practice.21–23 The ability
of large screen LCD monitors to display subtle struc-
tures in small animals has not been characterized. Also,
monitors cannot be evaluated without considering the
associated graphics card. Our study design was in-
adequate to assess the effect of the graphics card in-
dependently.
Another limitation might be that anatomic structures
were evaluated instead of pathologic lesions. It was
assumed that the ability to detect pathologic changes is
related to accurate anatomic presentation.14,38 In contrast
to pathologic structures, anatomic landmarks have a more
uniform appearance. Consequently it is easier to evaluate
the quality of their radiographic presentation for a mean-
ingful interpretation. In human medical practice, the
quality of reproduction of anatomic structures is the ba-
sis of established standards of quality assurance.39,40 In
observer performance studies dealing with comparative
evaluation of the quality of clinical images, these criteria
are reliable measurement instruments.41–43 Despite the lack
of comparable standards in veterinary radiology, quality
criteria can be deduced from generally accepted paradigms
of image interpretation.44–47 Once the requisite level of radiographic rendition of a diagnostically relevant criterion has been defined, the description can be applied to observer
performance studies in general and for VGA studies in
particular.36 However, some have argued that it is more
difficult to identify existing pathologic changes.19,48 Accordingly, the results of our study could be considered too optimistic. Such an interpretation underscores the need for high-quality monitors even more strongly. Another limitation
was that it was not possible to hide the monitor type from
the observers. Consequently, preferences of individual ob-
servers could not be excluded. However the consistent pos-
itive correlation of ratings between observers weakens this
argument.
In summary, we have shown that the performance of the
monitor used for soft-copy interpretation influences image
interpretation significantly. Monitor quality is a critical el-
ement within the imaging chain in small animal radiology.
Deviation from high quality monitors is accompanied by a
loss of information. From the viewpoint of radiation safety, such a loss should not be tolerated, as it represents a violation of fundamental radiation safety principles. In consequence, guidelines are needed that define
minimum requirements for devices used for soft-copy in-
terpretation in veterinary radiology. Because of the similarity of many target structures and the quality needed for their radiographic presentation, guidelines for acceptance and
quality testing of display devices in human medical imaging
could be followed in veterinary medicine.
ACKNOWLEDGMENTS
The authors would like to thank Prof. Dr. Joe Morgan for assistance with the manuscript.
REFERENCES
1. Krupinski EA, Williams MB, Andriole K, et al. Digital radiographyimage quality: image processing and display. J Am Coll Roentgenol2007;4:389–400.
2. Puchalski SM. Image display. Vet Radiol Ultrasound 2008;49(Suppl 1):S9–S13.
3. Pisano ED, Cole EB, Kistner EO, et al. Interpretation of digitalmammograms: comparison of speed and accuracy of soft-copy versusprinted-film display. Radiology 2002;223:483–488.
4. Mattoon JS. Digital radiography. Vet Comp Orthop Traumatol2006;19:123–132.
5. Don S, Whiting BR, Ellinwood JS, et al. Neonatal chest computedradiography: image processing and optimal image display. Am J Roentgenol2007;88:1138–1144.
6. Shtern F, Winfield D. Report of the Working Group on DigitalMammography: digital displays and workstation design. Acad Radiol1999;6:S197–S218.
7. Otto D, Bernhardt TM, Rapp-Bernhardt U, et al. Subtle pulmonaryabnormalities: detection on monitors with varying spatial resolutions andmaximum luminance levels compared with detection on storage phosphorradiographic hard copies. Radiology 1998;207:237–242.
8. Peer S, Giacomuzzi SM, Peer R, et al. Resolution requirements formonitor viewing of digital flat-panel detector radiographs: a contrast detailanalysis. Eur Radiol 2003;13:413–417.
9. Krupinski E, Kallergi M. Choosing a radiology workstation: techni-cal and clinical considerations. Radiology 2007;242:671–682.
10. Bundesministerium fur Umwelt, Naturschutz und Reaktorsicherheit.Richtlinie zur Durchfuhrung der Qualitatssicherung bei Rontgeneinrichtun-gen zur Untersuchung oder Behandlung von Menschen nach den yy 16 und17 der Rontgenverordnung (Qualitatssicherungs-Richtlinie) vom 20.11.2003,(geandert durch Rundschreiben vom 14.12.2009). Available at http://www.bmu.de/files/pdfs/allgemein/application/pdf/qs_richtlinie.pdf (accessedDecember 15, 2010).
11. Bundesarztekammer Leitlinien der Bundesarztekammer zurQualitatssicherung in der Rontgendiagnostik–Qualitatskriterien rontgen-diagnostischer Untersuchungen, 23rd November 2007. Dt Arzteblatt 2008;105: A 536. Available at http://www.bundesaerztekammer.de/downloads/LeitRoentgen2008Korr2.pdf (accessed February 19, 2008).
12. Deutsches Institut fur Normung. Sicherung der Bildqualitat inrontgendiagnostischen Betrieben, Teil 57: Abnahmeprufung an Bildwieder-gabegeraten, DIN V 6868-57. Berlin: Beuth Verlag, 2007.
13. Gray JE. Use of the SMPTE test pattern in picture archiving andcommunication systems. J Digit Imaging 1992;5:54–58.
14. Mansson LG. Methods for the evaluation of image quality: Areview. Radiat Prot Dosim 2000;90:89–99.
15. Deutsches Institut fur Normung. Bildschirmarbeitsplatze; Ergo-nomische Gestaltung des Arbeitsraumes; Beleuchtung und Anordnung, DIN66234-7. Berlin: Beuth Verlag; 1985.
16. Bath M, Mansson LG. Visual grading characteristics (VGC) anal-ysis: a non-parametric rank-invariant statistical method for image qualityanalysis. Br J Radiol 2007;80:169–176.
17. Krupinski EA, Johnson J, Roehrig H, et al. Use of a human visualsystem model to predict observer performance with CRT vs. LCD display ofimages. J Digit Imaging 2004;17:258–263.
18. Lehmkuhl L, Mulzer J, Teichgraeber U, et al. Evaluation derAbbildungsqualitat unterschiedlicher Befundungsmodalitaten in der digi-talen Radiologie. Fortschr Rontgenstr 2004;176:1031–1038.
19. Balassy C, Prokop M, Weber M, et al. Flat-panel display(LCD) versus high-resolution gray-scale display (CRT) for chest
radiography: an observer preference study. Am J Roentgenol 2005;184:752–756.
20. Kotter E, Bley TA, Saueressig U, et al. Comparison of the detect-ability of high- and low-contrast details on a TFT screen and a CRT screendesigned for radiologic diagnosis. Invest Radiol 2003;38:719–724.
21. Doyle AJ, Le Fevre J, Anderson GD. Personal computer versusworkstation display: observer performance in detection of wrist fractures ondigital radiographs. Radiology 2005;237:872–877.
22. Langer S, Fetterly K, Mandrekar J, et al. ROC study of four LCDdisplays under typical medical center lighting conditions. J Digit Imaging2005;19:30–40.
23. Geijer H, Geijer M, Forsberg L, et al. Comparison of color LCDand medical-grade monochrome LCD displays in diagnostic radiology.J Digit Imaging 2007;20:114–121.
24. Badano A. AAPM/RSNA tutorial on equipment selection:PACS equipment overview—display systems. Radio Graphics 2004;24:879–889.
25. Weiser J, Romlein J.Monitor minefield. Imaging Economics: Imag-ing Informatics, April 2006. Available at http://www.imagingeconomics.com/issues/articles/2006-04_09.asp (accessed June 22, 2009).
26. Ikeda M, Ishigaki T, Shimamoto K, et al. Influence of monitor luminance change on observer performance for detection of abnormalities depicted on chest radiographs. Invest Radiol 2003;38:57–63.
27. Digital Imaging and Communications in Medicine (DICOM). Part 14: Grayscale standard display function, March 2003. http://www.medical.nema.org/dicom/2003/03_14PU.pdf (accessed June 22, 2009).
28. Wright MA, Balance D, Robertson IA, et al. Introduction to DICOM for the practicing veterinarian. Vet Radiol Ultrasound 2008;49(Suppl 1):S14–S18.
29. Fachverband Elektromedizinische Technik. Systemaspekte bei Bildwiedergabegeräten und Bildarbeitsplätzen. In: ZVEI (ed): Qualitätssicherung an Bildwiedergabegeräten—Ein Kompendium zur Information über Grundlagen, Technik und Durchführung, 2nd edn. Frankfurt: ZVEI, 2004;30–38.
30. Ricke J, Hanninen EL, Zielinski C, et al. Shortcomings of low-cost imaging systems for viewing computed radiographs. Comput Med Imaging Graph 2000;24:25–32.
31. Samei E, Badano A, Chakraborty D, et al. Assessment of display performance for medical imaging systems: executive summary of the AAPM TG18 report. Med Phys 2005;32:1205–1225.
32. Institute of Physics and Engineering in Medicine. Recommended standards for the routine performance testing of diagnostic X-ray imaging systems, Report 91. York: IPEM, 2005.
33. Japan Industries Association of Radiological Systems Standards. Quality Assurance (QA) Guideline for Medical Imaging Display Systems, JESRA X-0093-2005, August 2005. Available at http://www.jira-net.or.jp/commission/system/04_information/files/JESRAX-0093_e.pdf (accessed October 30, 2009).
34. Brettle DS. Display considerations for hospital-wide viewing of softcopy images. Br J Radiol 2007;80:503–507.
35. Dougeni ED, Delis HB, Karatza AA, et al. Dose and image quality optimization in neonatal radiography. Br J Radiol 2007;80:807–815.
36. Ludewig E, Hirsch W, Bosch B, et al. Untersuchungen zur Qualität von Thoraxaufnahmen bei Katzen mit einem auf einer Nadelstruktur basierenden Speicherfoliensystem—Modelluntersuchungen zur Bewertung der Bildqualität bei Neugeborenen. Fortschr Röntgenstr 2010;182:122–131.
37. Fachverband Elektromedizinische Technik. Physiologie des menschlichen Sehens. In: ZVEI (ed): Qualitätssicherung an Bildwiedergabegeräten—Ein Kompendium zur Information über Grundlagen, Technik und Durchführung, 2nd edn. Frankfurt: ZVEI, 2004;6–11.
38. Martin CJ, Sharp PF, Sutton DG. Measurement of image quality in diagnostic radiology. Appl Radiat Isotop 1999;50:21–38.
39. European Commission. European guidelines on quality criteria for diagnostic radiographic images, Report EUR 16260 EN. Luxembourg: Office for Official Publications of the European Communities, 1996.
40. European Commission. European guidelines on quality criteria for diagnostic radiographic images in paediatrics, Report EUR 16261 EN. Luxembourg: Office for Official Publications of the European Communities, 1996.
41. Bacher K, Smeets P, Vereecken L, et al. Image quality and radiation dose on digital chest imaging: comparison of amorphous silicon and amorphous selenium flat-panel systems. Am J Roentgenol 2006;187:630–637.
42. Lanhede B, Bath M, Kheddache S, et al. The influence of different technique factors on image quality of chest radiographs as evaluated by modified CEC image quality criteria. Br J Radiol 2002;75:38–49.
43. Rainford LA, Al-Qattan E, McFadden S, et al. CEC analysis of radiological images produced in Europe and Asia. Radiography 2007;3:202–209.
44. Berry CR, Graham JP, Thrall DE. Interpretation paradigms for the small animal thorax. In: Thrall DE (ed): Textbook of veterinary diagnostic radiology. Philadelphia: W. B. Saunders, 2007;462–485.
45. Comerford EJ. The stifle joint. In: Barr FJ, Kirberger RM (eds): BSAVA manual of canine and feline musculoskeletal imaging. Gloucester: BSAVA, 2006;135–149.
46. Rudorf H, Taeymans O, Johnson V. Basics of thoracic radiography and radiology. In: Schwarz T, Johnson V (eds): BSAVA manual of canine and feline thoracic imaging. Gloucester: BSAVA, 2008;1–19.
47. Suter PF, Lord P (eds): Thoracic radiography—a text atlas of thoracic diseases of the dog and cat. Wettswil: PF Suter Publisher, 1984.
48. Tapiovaara MJ. Review of relationships between physical measurements and user evaluation of image quality. Radiat Prot Dosim 2008;129:244–248.