IMAGING PHYSICS

Medical Imaging Displays and Their Use in Image Interpretation

George C. Kagadis, PhD • Alisa Walz-Flannigan, PhD • Elizabeth A. Krupinski, PhD • Paul G. Nagy, PhD • Konstantinos Katsanos, MD, PhD • Athanasios Diamantopoulos, MD • Steve G. Langer, PhD

The adequate and repeatable performance of the image display system is a key element of information technology platforms in a modern radiology department. However, despite the wide availability of high-end computing platforms and advanced color and gray-scale monitors, the quality and properties of the final displayed medical image may often be inadequate for diagnostic purposes if the displays are not configured and maintained properly. In this article—an expanded version of the Radiological Society of North America educational module “Image Display”—the authors discuss fundamentals of image display hardware, quality control and quality assurance processes for optimal image interpretation settings, and parameters of the viewing environment that influence reader performance. Radiologists, medical physicists, and other allied professionals should strive to understand the role of display technology and proper usage for a quality radiology practice. The display settings and display quality control and quality assurance processes described in this article can help ensure high standards of perceived image quality and image interpretation accuracy.

Abbreviations: AAPM = American Association of Physicists in Medicine, ACR = American College of Radiology, CCFL = cold-cathode fluorescent lamp, COTS = commercial off-the-shelf, DDL = digital driving level, DICOM = Digital Imaging and Communications in Medicine, GSDF = gray-scale standard display function, HVS = human visual system, JND = just noticeable difference, LCD = liquid crystal display, LED = light-emitting diode, PACS = picture archiving and communication system, SIIM = Society for Imaging Informatics in Medicine, SOS = satisfaction of search, TG18 = Task Group 18

RadioGraphics 2013; 33:275–290 • Published online 10.1148/rg.331125096

From the Departments of Medical Physics (G.C.K.) and Radiology (A.D.), School of Medicine, University of Patras, PO Box 13273, 265 04 Rion, Greece; Department of Radiology, Mayo Clinic, Rochester, Minn (A.W.F., S.G.L.); Department of Radiology, University of Arizona, Tucson, Ariz (E.A.K.); Department of Radiology, Johns Hopkins University, Baltimore, Md (P.G.N.); and Department of Interventional Radiology, Guy’s and St Thomas’ Hospitals, NHS Foundation Trust, King’s Health Partners, London, England (K.K.). Received May 4, 2012; revision requested May 31 and received July 18; accepted July 31. All authors have no financial relationships to disclose. Address correspondence to G.C.K. (e-mail: [email protected]).

© RSNA, 2013 • radiographics.rsna.org

Introduction

Medical imaging professionals such as radiologists, information technology specialists, and medical physicists routinely use a picture archiving and communication system (PACS) to transfer, process, store, and display medical images. If the radiologist is the “eyes” and “brain” of the radiology department, the electronic information technologies are the “lifeblood” that allows the circulation of acquired medical images so that they can be presented on a display and interpreted by the radiologist.


Figure 1. Drawing illustrates the technology behind the color LCD. A monochrome display lacks color filters, allowing more of the backlight to reach the front panel. TFT = thin-film transistor.

Despite the widespread availability of high-end computing platforms and advanced color and gray-scale monitors, the quality and properties of the final displayed image are often inadequate for diagnostic purposes if the displays are not configured and maintained properly. Therefore, image quality assurance should include the entire image manipulation chain, starting with acquisition and ending with display and interpretation. A display’s operational characteristics can affect the quality of the displayed image, thereby influencing reader performance. The clinical setting and ambient lighting conditions can also influence reader performance.

In this article—an expanded version of the Radiological Society of North America educational module “Image Display” (http://physics.rsna.org/section/default.asp?id=PHYS0910) (1)—we discuss technical issues and operational standards of modern medical imaging displays and the reading environment, including (a) fundamentals of image display hardware, (b) quality control and quality assurance processes for optimal image interpretation, and (c) parameters of the viewing environment that influence the performance of the interpreting radiologist. We also provide a detailed analysis of key display characteristics—luminance, matrix size, contrast resolution, and noise—that determine perceived image quality, as well as viewing strategies to improve diagnostic performance.

Displays for Medical Imaging

Display Hardware

Liquid Crystal Display Technology.—The majority of displays used in radiology today are liquid crystal displays (LCDs) backlit with cold-cathode fluorescent lamps (CCFLs). Newer display options include light-emitting diode (LED)–backlit displays. LED backlights and CCFL backlights are typically used in similar ways; thus, the diagram in Figure 1 applies to both types of displays. Compared with CCFL-backlit systems, LED-backlit systems have lower power consumption, a thinner design, and different backlight aging.

To create an image, the backlight is modulated by the liquid crystal panel (Fig 1), which acts like millions of tiny autonomous “shutters” (Figs 2, 3). The shutters are controlled individually by a thin-film transistor array (Figs 1, 2).

Physical Properties and Characteristics of Displays.—The most obvious features of a display are screen size and spatial resolution. The aim should be to have a display with an inherent resolution that matches the matrix of the images to be displayed, and a screen size that is appropriate for the viewing distance. For example, display of an image on a screen whose matrix is larger than that of the image can result in interpolation artifacts (Fig 4). If there are fewer display pixels than image pixels, zoom and pan functions are needed to see all the image information.
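As a simple illustration, the hypothetical helper below (the image and display matrices are assumed values, not recommendations) reports whether an image fits a display’s native matrix at 1:1 pixel mapping, or how much of the image is visible before zoom and pan are needed.

```python
# Hedged sketch: how an image matrix relates to a display's native matrix.
# Resolutions below are illustrative assumptions.
def fit_report(image_wh, display_wh):
    iw, ih = image_wh
    dw, dh = display_wh
    if iw <= dw and ih <= dh:
        return "fits at 1:1; scaling it up to fill the screen would require interpolation"
    visible = (min(iw, dw) * min(ih, dh)) / (iw * ih)
    return f"only {visible:.0%} of the pixels visible at 1:1; zoom/pan (or downsampling) needed"

print(fit_report((2048, 2500), (1536, 2048)))  # a large radiograph on a 3-MP portrait display (assumed sizes)
print(fit_report((512, 512), (1536, 2048)))    # a CT slice on the same display
```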

The amount of backlight allowed through the panel is controlled by the LCD elements. The maximum brightness of the backlight determines the maximum luminance of a given LCD. The minimum luminance is determined by the amount of light the LCD panel is capable of blocking. It is important to remember that the amount of light generated by a CCFL- or LED-backlit display decreases over time, resulting in gradual dimming and, in effect, a finite lifetime at a given calibrated luminance.

Medical-grade displays typically involve technology that can monitor and adjust the backlight levels as they change over time, making these displays more stable than commercial off-the-shelf (COTS) displays. On the other hand, COTS displays cost less than medical-grade displays, and a COTS device with a backlight and other hardware identical to that of a medical-grade device could last as long as its counterpart, although it will require regular recalibration to maintain performance.

Spatial noise and temporal noise affect the displayed image, contributing to the degradation of low-contrast images. More specifically, spatial noise arises from stationary pixel-to-pixel variation in output from spatial variation in the panel, whereas temporal noise consists primarily of Poisson noise from the emission of photons by the backlight, as well as from instabilities in the backlight and control electronics (3).

In addition, backlights may not provide a perfectly homogeneous light source, leading to poorer luminance uniformity.

Figure 3. Images show picture elements (pixels) from a monochrome LCD (a) and a color LCD (b). Each pixel is made up of separately addressable subpixels, and for the color LCD, the subpixels carry red, green, and blue filters. To show a bright red pixel, the liquid crystal of the red subpixel is oriented so as to twist light sufficiently to allow the maximum amount of light to pass through the second polarizer. The green and blue subpixels remain off. (Fig 3 reprinted, with permission, from reference 2.)

Figure 4. Schematic illustrates how display interpolation algorithms can result in undesirable artifacts, including aliasing, blur, and edge halo.

Figure 2. Drawing illustrates how the liquid crystal (LC) panel modulates light. Light from the backlight is polarized with a polarizing filter. The LC panel rotates the polarized light. The amount of twist determines how much light is allowed to pass through the second polarizer; if the polarized light is not rotated, it will not pass through the second polarizer. The thin-film transistor array (see Fig 1) provides the voltage to each pixel that determines the amount of twist.


Figure 6. Graph illustrates the contrast threshold of the HVS versus luminance at a given spatial frequency. (Reprinted, with permission, from reference 7.) L = luminance.

Medical-grade displays generally provide better luminance uniformity (deviations of ~15%) than average consumer-grade displays, whose nonuniformity typically exceeds 20%–30%. Medical-grade displays often incorporate technology that compensates for pixel luminance variation in a way that COTS displays do not, thereby improving spatial uniformity.
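Luminance uniformity can be quantified from a handful of photometer readings. The sketch below follows the common “maximum luminance deviation” definition used in TG18-style testing (readings at the center and four corners of a uniform test pattern); the readings themselves are hypothetical.

```python
# Hedged sketch: quantifying luminance uniformity from a five-point measurement
# (center plus four corners of a uniform test pattern). The readings are
# illustrative assumptions, not data from any real display.
def luminance_nonuniformity(readings_cd_m2):
    """Maximum luminance deviation, in percent: 200 * (Lmax - Lmin) / (Lmax + Lmin)."""
    l_max, l_min = max(readings_cd_m2), min(readings_cd_m2)
    return 200.0 * (l_max - l_min) / (l_max + l_min)

medical_grade = [171, 168, 175, 164, 170]   # hypothetical readings (cd/m^2)
consumer      = [180, 158, 195, 150, 172]
print(f"medical-grade display: {luminance_nonuniformity(medical_grade):.0f}% deviation")  # ~6%
print(f"consumer display:      {luminance_nonuniformity(consumer):.0f}% deviation")       # ~26%
```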

The backlight of a display has its own particular tint, or chromaticity, even with gray-scale displays. To our knowledge, there is no literature supporting the notion that this modest tint has any influence on diagnostic performance. Nevertheless, color variation on one display or between displays at a reading workstation may be disruptive to the interpretation process (Fig 5). Color management for both gray-scale and color displays needs to account for these variations so that the displays are perceived to be the same.

The luminance ratio of a display is the ratio of maximum luminance (“white level”) to minimum luminance (“black level”). Contrast ratio as perceived by the human eye can vary (see “Display Calibration: The Barten Model and the DICOM Gray-Scale Standard Display Function”). The eye adjusts to the average brightness to which it is exposed and has diminishing ability to perceive subtle contrast changes as brightness deviates from the point of adaptation (4).

Contrast ratios quoted by manufacturers can be misleading. The realized (observed) contrast ratio includes the ambient light reflected off the display screen. For example, a television with an advertised contrast ratio of 30,000:1 and a maximum luminance of 480 cd/m2 has a black level of only 0.016 cd/m2; under typical daytime viewing, reflected ambient light raises the effective black level so much that the realized contrast ratio is roughly 1/100th of the advertised value.
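The arithmetic behind that example can be written out directly. In the sketch below, the panel luminances come from the example above, while the amount of reflected daylight is an assumed value chosen for illustration only.

```python
# Hedged sketch: realized contrast ratio once reflected ambient light is included.
# The reflected-daylight value is an assumption chosen for illustration.
def realized_contrast_ratio(l_max, l_min, l_reflected):
    """Contrast ratio seen by the viewer when reflected ambient light (cd/m^2)
    adds to both the white level and the black level."""
    return (l_max + l_reflected) / (l_min + l_reflected)

l_max, l_min = 480.0, 0.016                   # the 30,000:1 television from the text
print(f"dark room: {l_max / l_min:,.0f}:1")   # ~30,000:1
print(f"daytime:   {realized_contrast_ratio(l_max, l_min, 1.5):,.0f}:1")  # ~320:1 with ~1.5 cd/m^2 of reflected light (assumed)
```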

If a display is viewed at angles far from orthogonal, there may be a significant drop in perceived contrast, resulting in a reduced ability to detect low-contrast lesions. Typical contrast drop-off for two types of LCD panels—in-plane switching and patterned vertical alignment—is shown in Figure 5. Manufacturers typically quote the viewing angle for a display in terms of the point at which the contrast ratio is 1/10th that of on-axis viewing.

An 8-bit display has 256 different digital driving levels (DDLs) that generate 256 unique luminance values. The relationship between DDLs and displayed luminance is determined by the calibration of the display. As described in the next section, how a display is calibrated affects the contrast resolution, or the perceptibility of different gray-scale values (DDLs), in an image.

Display Calibration: The Barten Model and the DICOM Gray-Scale Standard Display Function.—To maximize the contrast resolution in an image, it is necessary to understand the nature of the observer.

Figure 5. Graph illustrates the change in contrast ratio with off-axis viewing on an in-plane switching (IPS)–type display versus a patterned vertical alignment (PVA)–type display.


In the current context, the observer is human and the capabilities in question are those of the human visual system (HVS). One way to quantify HVS contrast sensitivity is to use the concept of smallest perceivable difference, or just noticeable difference (JND). A JND represents a perceivable change in luminance (∆L), or light output, at a given luminance. The measured contrast change (∆L/L) for a JND varies with spatial frequency and with the background luminance that the eye perceives.

A model of the HVS was developed by Peter Barten and is based on a series of key experiments that characterized the contrast sensitivity of the human eye (5,6). Subjects were shown sinusoidal contrast patterns on a fixed background, allowing the subject to adapt to each luminance setting. The subject had to determine at each luminance level whether he or she could detect or recognize the individual pattern elements. By synthesizing data from many subjects, Barten calculated an average HVS response, describing a JND for each background luminance. The model spans a luminance range of five orders of magnitude—from pitch black to bright sunshine—and shows that the HVS is nonlinear (Fig 6); that is, the percentage of contrast change required for a JND (detection threshold) at low background luminance is higher than that for a JND at high background luminance (although the absolute contrast change at low luminance is much smaller).

Barten’s model does have its limitations. For one thing, medical images are not simple sinusoids on flat backgrounds. Furthermore, the eyes adapt to an average background luminance made up of many different luminance values, a phenomenon described as fixed adaptation (Fig 7), whereas the Barten model is based on a variable adaptation scenario. With fixed adaptation, the HVS has a more limited range of contrast sensitivity. This means that even if a display were to have a very wide contrast range, the observer may not be able to perceive detail that is contained outside the luminance range to which the eye has adapted.

In creating strategies for displaying medical images electronically, we seek to maximize the perceptibility of information and promote the consistency of image presentation across different displays (2). The latter is accomplished with use of consistent display calibration. Maximizing the perceptibility of information can be accomplished by creating perceptual linearity across gray-scale values, meaning that changes in image pixel values throughout a gray-scale range appear to have similar contrast. This ensures that information at one luminance level is not lost at the expense of visibility at other levels. Perceptual linearity can be achieved by setting display luminance values such that each change in pixel value corresponds to the same number of JNDs.

Although Barten’s model does have shortcomings, it still provides a straightforward basis for display calibration (2). A calibration standard based on Barten’s model is described in Part 14 of the Digital Imaging and Communications in Medicine (DICOM) standard (9). The DICOM gray-scale standard display function (GSDF) specifies the precise display luminance that should be produced for a given input value. The practical result of using the GSDF is that different displays can be set to have the same gray-scale response (for consistent presentation of images), which may help improve the perceptual linearity of a display over other calibration settings and thus better match the capabilities of the HVS, resulting in better diagnostic performance.
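The GSDF can be explored numerically. The sketch below is a simplified illustration, not a validated calibration tool: it implements the GSDF equation from DICOM PS 3.14 (coefficients transcribed from the standard; verify against the published text before relying on them) and uses it to build a perceptually linearized lookup table for an 8-bit display with assumed minimum and maximum luminances.

```python
import math

# DICOM PS 3.14 GSDF: log10(L) is a rational polynomial in ln(j) for JND index
# j = 1..1023. Coefficients transcribed from the standard; verify before use.
a, b, c, d, e = -1.3011877, -2.5840191e-2, 8.0242636e-2, -1.0320229e-1, 1.3646699e-1
f, g, h, k, m = 2.8745620e-2, -2.5468404e-2, -3.1978977e-3, 1.2992634e-4, 1.3635334e-3

def gsdf_luminance(j):
    """Luminance (cd/m^2) at JND index j (1 <= j <= 1023)."""
    x = math.log(j)
    num = a + c*x + e*x**2 + g*x**3 + m*x**4
    den = 1 + b*x + d*x**2 + f*x**3 + h*x**4 + k*x**5
    return 10 ** (num / den)

def jnd_index(luminance):
    """Invert the GSDF numerically (bisection) to get the JND index for a luminance."""
    lo, hi = 1.0, 1023.0
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if gsdf_luminance(mid) < luminance else (lo, mid)
    return (lo + hi) / 2

# Perceptually linearized LUT for an 8-bit display: the calibrated Lmin and Lmax
# below are assumptions; each DDL step then spans an equal number of JNDs.
l_min, l_max = 1.0, 400.0
j_min, j_max = jnd_index(l_min), jnd_index(l_max)
lut = [gsdf_luminance(j_min + (j_max - j_min) * ddl / 255) for ddl in range(256)]
print(f"{j_max - j_min:.0f} JNDs available between {l_min} and {l_max} cd/m^2")
print(f"DDL 0 -> {lut[0]:.2f} cd/m^2, DDL 128 -> {lut[128]:.1f} cd/m^2, DDL 255 -> {lut[255]:.1f} cd/m^2")
```

Displays calibrated this way can then be set to the same gray-scale response, which is what makes consistent presentation across different displays possible.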

Figure 7. Graph illustrates the JND for variable (A) and fixed (B) visual adaptation. L = luminance. (Reprinted, with permission, from reference 8.)


Criteria, Guidelines, and Technical Standards.—In 2005, the American Association of Physicists in Medicine (AAPM) published the Task Group (TG) 18 report (8), with guidelines and criteria for acceptance testing and quality control of medical display devices. The highlights of their recommendations are provided in the executive summary (10).

In 2012, the American College of Radiology (ACR), in collaboration with the AAPM and the Society for Imaging Informatics in Medicine (SIIM), published a revised Technical Standard for the Electronic Practice of Medical Imaging (11). This standard provides guidelines for the purchase, use, and quality control of displays. It is expected that ACR-accredited modalities will address these guidelines. Displays used for mammography are subject to additional requirements for U.S. Food and Drug Administration 510K clearance, and a separate ACR guideline exists for the electronic display of mammographic images.

The latest version of the ACR-AAPM-SIIM technical standard emphasizes the importance of consistent presentation of images throughout the imaging chain, making display and quality control recommendations for all displays (primary and secondary) in the imaging chain with respect to overall appearance. This document should be consulted when one is selecting display settings or formulating a quality control program for displays used in radiology.

Other Display Features and Considerations.—Although specific display settings and quality control processes are referenced in both the AAPM TG18 report (8) and the ACR-AAPM-SIIM technical standard (11), there are many display features, as well as viewer and display management software tools, that can be advantageous.

Desirable display-related features include luminance stabilization, luminance uniformity correction, an integrated front panel luminance sensor, power management, end-user validation software, and remote management and quality control software.

Luminance stabilization typically relies on sensors that monitor backlight output. Without this feature, displays can vary in luminance by more than 20% within the first several hours after being turned on. Displays without luminance stabilization require more frequent quality control checks or calibration to keep luminance on target. Luminance uniformity correction is a pixel-by-pixel or regional correction to the luminance output of an LCD to make it more uniform. An integrated front panel luminance sensor enables remote quality control checks and calibration. It is a more robust means of validating luminance stability than is a backlight sensor because aging of display components may shift the relationship between the backlight and the front panel output. Power management is important because a display will last longer if it is used less and therefore should be turned off when not in use. End-user validation software consists of quality control checks that appear automatically and periodically for the viewer. This software is especially useful if displays are not stabilized or cannot be conveniently accessed by quality control staff (as in teleradiology). The ACR technical standard recommends that quality checks for diagnostic displays be performed at least once a month. Remote management and quality control software is helpful for larger fleet management and is provided by a number of display manufacturers and distributors. It can help maintain quality control records, report on the last calibration, detect and correct changes by means of remote calibration or stabilization features, and provide an estimate for remaining display life.

However, all the “bells and whistles” and tight calibration are not worth much if (a) the image viewing software puts bright borders and text around an image, which shifts the luminance range of optimal contrast sensitivity and confounds contrast discrimination in darker regions of an image (other bright light contamination of the viewing field will have the same effect); (b) the images are viewed in a brightly lit (or completely dark) environment; (c) the viewer sits too far off-axis to the display; (d) the monitor display is covered with fingerprints; or (e) the viewer alters the brightness and contrast of the display after it has been calibrated.

Tools and Tips for Alternative Viewing of Medical Images

Thus far, our recommendations for medical image viewing have been made with the typical radiology reading room and the observer in mind. There are many factors that influence diagnostic performance, including the display hardware, viewing software, and reading environment. However, we recognize that a lot of image viewing takes place outside the radiology reading room, and perceived image quality is still of great importance. In this section, we look at various important use cases for viewing images outside a radiology reading room for both primary-diagnostic (fluoroscopic-interventional imaging) and secondary-nondiagnostic (surgical, modality, clinical review) image viewing, with recommendations and considerations for each.

Secondary Displays (PACS Quality Control or Modality Viewing).—The AAPM and ACR both classify displays into two main categories: primary and secondary. Primary displays are intended to be used by medical practitioners to interpret images for an official billable report that will be used by others to make healthcare decisions. Secondary displays are intended for multiple uses, not all of which are related to medicine (eg, office work, patient education in the examination room, review by clinicians and surgeons, image acquisition, and PACS quality assurance). The last-named use is of particular interest, since it is closely tied to the process that leads to radiologist interpretation. The properties for some commonly used primary and secondary displays are shown in Table 1 (8). In general, the differences in these properties are related to the requirements for the different intended uses.

The lion’s share of attention is directed toward primary displays, and rightly so. However, the efficiency and quality of the imaging chain are closely tied to (a) the efficiency and quality of image acquisition, (b) quality assurance in the interpretation pipeline, and (c) accurate reproducibility of the image at each stage in the pipeline. Certain concerns arise at each step in the process.

1. During acquisition, the technologist views images at the modality display to ensure that the required anatomy is imaged without cut-off, motion, blur, or other artifacts. Additionally, with ultrasonography (US), it is common to have the sonographer perform clip selection separate from diagnostic viewing. Deciding which clips are relevant also has a large impact on diagnosis and is influenced by image quality.

2. At the PACS quality assurance workstation, the technologist makes sure that the images are oriented and ordered correctly, and that the initial gray scale is optimized to show as much detail across the image as possible.

3. At the PACS diagnostic station, the radiologist sees the same presentation that was seen by the technologist at the prior step, but typically on a higher-quality display. The radiologist may annotate the images or apply specific image enhancement techniques or processing to clarify a diagnostic finding.

4. At the point-of-care workstation, the clinician or surgeon sees the images in the manner that the radiologist intended at the moment of interpretation, but they may be only selected relevant images or even highlighted regions of interest that have already been interpreted by the radiologist.

Table 1. Display Properties of Two Mobile Viewing Platforms versus a Typical Reading Room Display

Display Property | 3-MP PACS Display | iPhone 4* | iPad 2*
No. of MPs | 3 | 0.4 | 0.6
Pixel size (mm) | 0.211 | 0.092 | 0.192
Native resolution (width × height [pixels]) | 1536 × 2048 | 640 × 960 | 768 × 1024
Viewing distance (cm) required to achieve equal pixel size projected onto the retina | 76† | 33 | 69
Typical viewing distance (cm) | 70–80 | 30–40 | 30–45
Size of a lesion (mm) when the entire image is viewed from a typical viewing distance‡ | 3† (14) | 0.47 (5) | 1.4 (7)§
Spatial frequency of a 3-mm lesion at a typical viewing distance (log[cycles/degree]) | 2.1 | 2.5 | 2.2
Maximum luminance (cd/m2) | 400ǁ | 500 | 387
Minimum luminance (cd/m2) | 1.0 | 0.5 | 0.48
Realized contrast ratio in 25-lux reading room lighting | 358 | 223 | 175

Source.—Reference 12. Note.—MP = megapixel. *Apple Computer, Cupertino, Calif. †Baseline. ‡Numbers in parentheses indicate number of pixels. §Portrait mode, full-screen width. ǁGray-scale calibrated (DICOM).

Note that in three of these four steps (steps 1, 2, and 4), a secondary display is used. The key to achieving a consistent appearance of images across all these displays is that the displays all be calibrated to the same gray-scale response function and have similar luminance ratios. The agreed-upon standard for this response is the GSDF defined in Part 14 of the DICOM standard (9), and recommended maximum and minimum luminance values can be found in the ACR-AAPM-SIIM technical standard (11).

Mobile Viewing Platforms (Smartphones, Tablets).—In February 2011, the U.S. Food and Drug Administration approved the first mobile application for limited diagnostic viewing (13). This approval was given with the proviso that the application should be used only if no diagnostic reading room facility is available. It also included a pre–image interpretation quality control tool that measures the ability of the user to detect defined low-contrast targets given the existing ambient light conditions. If the user cannot detect all of the targets, he or she should not interpret images until a more suitable environment is found. However, although this product and a myriad of other mobile image-viewing products are increasingly being used in clinical practice, more thought needs to be given to how and why these products are used. Rigorous research will be required to verify that even the newest “retinal” resolution displays (application for registered trademark made by Apple Computer, Cupertino, Calif) provide sufficient quality for radiologists and other clinicians to render appropriate and accurate diagnostic decisions. In this section, we discuss only nondiagnostic image viewing on mobile platforms. For diagnostic mobile applications, users should pay close attention to the strict usage guidelines and accept the limitations of the devices before integrating them into a radiology practice (14).

Mobile platforms can provide ease of access in distributing images and making reports available to referring clinicians. They may be especially useful for patient rounds, communicating with patients about findings, and nondiagnostic consultations. There is no doubt that they have enjoyed widespread use in the clinical setting and can be of real benefit in patient care.

Optimized software on mobile operating systems can provide very fast and easy access to images (often faster than logging onto a nearby workstation). However, viewers must be aware that touch screens get covered with fingerprints, poorly selected display settings can compromise viewing, how one holds the device and the environment in which one is reading can significantly limit what is visible, and the images may not look like what the reporting radiologist saw.

Table 1 summarizes some of the properties of mobile image viewing platforms compared with a typical 3-megapixel diagnostic display found in a reading room.

Ultimately, clinicians need to recognize when higher image quality is important and, therefore, to know how to create the appropriate viewing conditions for a given task. They must also have the self-discipline to take the required steps to improve viewing or to switch viewing platforms. It is not always obvious what information is lost when good viewing conditions are not met. The best protection that radiologists can provide for their patients is to establish and maintain good habits for mobile viewing. There is also much that healthcare institutions can do by way of training and explaining the options provided by mobile viewing platforms to optimize image viewing and protect patients.

Optimal Use of Mobile Devices for Clinical Image Viewing.—The following suggestions are made to improve the use of mobile devices for viewing clinical images. Some steps are more time consuming (eg, gray-scale calibration) but may be critical when information described in the radiology report should be visible but is not. Other steps are simpler and can be integrated into regular good viewing habits (eg, moving to a dimmer part of the room to view images).

1. Use viewing software that provides easy access to the radiology report, reporting physician contact information, and examination status that accompany the image.

2. Set autobrightness to “OFF.” Use maximum allowed brightness.

3. Move to a dimmer part of the room when viewing images; ambient lighting should be 20–40 lux. Avoid direct reflection from overhead lighting or windows.

4. Use viewing software that is gray-scale calibrated.

5. Use software that interactively checks ambient lighting conditions prior to viewing (a minimal contrast visibility test).

6. Hold the device perpendicular to the line of sight (viewing distances are shown in Table 1 and should be ergonomically reasonable).


7. Do not use a protective film, which can reduce contrast and increase reflections and glare.

8. Clean the display of fingerprints and other markings.

9. Because native resolution is limited, use the zoom and pan functions to see small imaging findings; this may be necessary even for small-matrix (computed tomographic, magnetic resonance, nuclear medicine–positron emission tomographic, US) images, and it holds true even for the new retinal displays.

Point-of-Care Viewing (Critical Care, Surgical, and Interventional Viewing).—There is a substantial amount of modality-based (point-of-care) image viewing done both within and outside of radiology. Within radiology, the most obvious application is fluoroscopy (although other interventional viewing systems also fit this scenario). Outside of radiology, there is also point-of-care viewing (eg, portable digital radiography, US) for critical or interventional patient care, allowing on-the-spot treatment decisions without a radiology report. The display needs for point-of-care viewing may not be as demanding as those for more critical diagnostic tasks or modalities, and the imaging task may be perceived as modality limited; however, this does not negate the adverse effect that the viewing environment or viewing arrangements may have on the perceptibility of image information.

The contrast resolution requirements for viewing fluoroscopic and US images may not be as strict as those for viewing computed or digital radiographs, but a bright viewing environment can reduce what is visible to far below 256 shades of gray, especially for off-angle viewing. Additionally, the work flow in point-of-care viewing may be less amenable to the use of window and level settings that help regain the contrast detail lost with a default setting. Viewing distances can also adversely affect perceptibility, since the perceived spatial frequency of image details can move away from peak contrast sensitivity (see “Display Calibration: The Barten Model and the DICOM Gray-Scale Standard Display Function”). In addition, digital radiography is becoming a point-of-care viewing modality, and the contrast subtleties available to the viewer will be display limited rather than modality limited, even in an optimal setting.

Displays are typically provided as part of a package with the modality or PACS and are often closed systems that cannot be calibrated, properly tested, or easily replaced by the end user. This should be of great concern because the intended life span of most equipment is longer than the typical life span of displays. Most of the amenities we take for granted in the reading room may not be available elsewhere. Listed below are some recommendations for the purchase (points 1–9) and use (points 10–13) of equipment for diagnostic or critical-care viewing in an interventional or point-of-care environment. These recommendations can be helpful when negotiating with a vendor or optimizing existing display technologies.

1. Consult the ACR-AAPM-SIIM technical standard for display settings and maintenance (11). Consider the typical use environment when setting luminance.

2. Make sure a digital test pattern is available on the unit for display quality control. At the very least, a Society of Motion Picture and Television Engineers (SMPTE) pattern (15) or the patterns available from the AAPM TG18 report (8) should be available.

3. Request display conformance standards from the vendor as part of the equipment warranty, especially if the display is integrated with the modality and cannot easily be replaced by the end user. Make sure that the display can be either recalibrated or replaced by the vendor when it does not meet these standards, since it will likely be used longer than typical diagnostic displays.

4. Request end-user calibration tools and calibrate the display (or request that it be calibrated) so that the minimum black level is greater than the level of diffuse reflection of ambient lighting from the display surface.

5. Buy or request displays that have good viewing angles. Currently, in-plane switching–type LCD panels provide better viewing angles than other types.

6. Avoid using reflective protective panels or displays with highly reflective screens. Use antireflective coatings on protective panels only when needed.

7. If possible, request systems with displays whose height and angle can be adjusted for comfortable viewing.

8. If the integrated display options are poor, look into other options, including digital wired or networked output from the modality to higher-quality fixed displays.


9. If the viewing distance must be greater than usual, consider using larger-screen displays (Table 2).

10. Position the display perpendicular to the viewer’s line of sight. When using a bank of monitors, angle the side monitors slightly toward the viewer so that orthogonal viewing can be achieved with minimal head turning or body movement.

11. Create an easy way for room lighting to be dimmed for image viewing (eg, ~25 lux).

12. Follow the ACR guidelines for quality control (11).

13. Keep the display clean.

Purchasing and Support Contracts

Displays sold for radiologic image viewing can be significantly more expensive than the average COTS flat panel display. One often pays for features that are designed to improve image quality and reduce eye strain, but there are other features that can help with display stability and maintenance that should be considered. Understanding what one wants and is willing to pay for, and how often one is willing to pay, makes for a better purchase decision. It is important to keep in mind that selection of the proper display is just as important as selection of the proper PACS or acquisition device, even though displays are clearly less expensive and are often viewed as disposables rather than long-term investments. To help with selection, creating a request for proposals form or a vendor questionnaire is a useful way to outline display requirements and get information from vendors in an easily comparable format.

The purchase of displays, like that of any other equipment, should be viewed in terms of total cost of ownership. A longer display lifetime can translate into an overall lower cost; therefore, it is good to understand not only the cost of the desired display features, but also how long the display will last and how hard it will be to maintain. Creating a good purchase contract can help ensure that one gets what he or she expects, and can also help protect the investment for the expected lifetime of the display.

Displays should be subjected to acceptance testing upon receipt to make sure they meet the promised specifications. This testing should measure tolerances for black dots or bright dots, luminance uniformity, deviation from the DICOM GSDF, maximum and minimum luminance, and color tracking and uniformity.

To be able to reject a unit for failing an acceptance test, it is necessary to understand what the vendor criteria for failure are for the test (as stated in the warranty) and exactly how these criteria are measured. For example, one might record deviations of over 10% from the DICOM GSDF in a 256-step measurement, but the vendor may measure at only 18 points and claim compliance. If this is consistent with the vendor’s warrantied criteria, there may not be any recourse.
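The effect of measurement density is easy to demonstrate. The sketch below uses synthetic numbers (a made-up power-law target standing in for the real calibration target, and a fabricated localized luminance error) purely to show how an 18-point check can miss a defect that a 256-step check flags.

```python
# Hedged sketch: why the number of measurement points matters when judging
# conformance. The "measured" values are synthetic placeholders, and the
# power-law target is only a stand-in for a real target such as the DICOM GSDF.
target = [1.0 + 399.0 * (ddl / 255.0) ** 2.2 for ddl in range(256)]   # cd/m^2
measured = list(target)
for ddl in range(31, 44):            # inject a localized 15% luminance error
    measured[ddl] *= 1.15

def worst_deviation(sample_ddls):
    """Worst-case percentage deviation from the target at the sampled DDLs."""
    return max(100 * abs(measured[i] - target[i]) / target[i] for i in sample_ddls)

print(f"256-step check: {worst_deviation(range(256)):.0f}% worst deviation")         # flags the 15% defect
print(f"18-point check: {worst_deviation(range(0, 256, 15)):.0f}% worst deviation")  # samples every 15th DDL and misses it
```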

In addition to acceptance test criteria, there are other purchasing and support contract considerations and criteria that can help one get the most out of one’s displays.

Table 2. Display Properties of a Large-Screen Surgical Display versus a Typical Reading Room Display

Display Property | 3-MP PACS Display | 8-MP Large-Screen Display
No. of MPs | 3 | 8
Pixel size (mm) | 0.211 | 0.324
Native resolution (width × height [pixels]) | 1536 × 2048 | 3840 × 2160
Viewing distance (cm) required to achieve equal pixel size projected onto the retina | 76* | 116
Typical viewing distance (cm) | 70–80 | 120–160
Size of a lesion (mm) when the entire image is viewed from a typical viewing distance† | 3* (14) | 5.7 (17.8)
Spatial frequency of a 3-mm lesion at a typical viewing distance (log[cycles/degree]) | 2.1 | 2.1
Maximum luminance (cd/m2)‡ | 400 | 300
Minimum luminance (cd/m2) | 1.0 | 0.3
Realized contrast ratio in 25-lux reading room lighting | 358 | 147

Note.—MP = megapixel. *Baseline. †Numbers in parentheses indicate number of pixels. ‡Gray-scale calibrated (DICOM).


Connecting to the Computer.—Most display vendors have video cards that have been selected specifically for use with their displays. Whenever possible, it is advisable to use vendor-recommended models. If the intention is to use an existing video card, it is important to ensure that compatibility issues have been addressed. Is the display supported by the video card that is intended for use? Does the display use a DisplayPort digital display interface or a digital visual interface? (A dual-link digital visual interface is required for displays with more than 2 megapixels.) Analog connections should not be used with displays for viewing medical images.

Display Calibration.—The radiology standard for image display (8,11) emphasizes the use of the DICOM GSDF for gray-scale calibration. Some displays may come precalibrated to the DICOM GSDF with a set maximum and minimum luminance. The end user should make sure that the preset calibration is suitable for the desired luminance and ambient light settings for the application. Regardless of whether a display comes precalibrated, the end user will want to be able to recalibrate the display for him- or herself. This will be one of the first things the end user will want to try if there is an issue with a luminance-related quality control failure. It also gives the end user flexibility in choosing display settings (some may be more appropriate than others for different areas).

To make display calibration possible, the vendor must offer a calibration software package (although such a package can also be purchased from an independent vendor). The software should be tested prior to purchase to make sure it is user friendly and provides the desired controls and reporting features. Some displays have built-in photometers that can help monitor luminance stabilization frequently. However, the end user should consider an alternative means of assessing and testing the display to validate the accuracy of the integrated photometer and to check for luminance uniformity. He or she should make sure that the calibration–quality control software is also compatible with an external photometer and should make plans for how the photometer might be calibrated or exchanged in the future.

Maximum Luminance.—Maximum luminance can be a limiting factor in the lifetime of a display. As a display ages, optical components may become more opaque and backlights will gradually dim. The rate of luminance decline is much lower for an LED-backlit display than for a CCFL-backlit display. Most manufacturers actually recommend running the display at a level lower than the maximum so that as the display ages the luminance can be adjusted upward for luminance stabilization. The end user should make sure that the warranty explicitly covers performance at a calibrated maximum luminance that meets his or her minimum required value, and that this warranty period lasts for a given minimum number of years or bulb-on-time hours (eg, 5 y or 20,000 h). Some displays provide a bulb-on-time clock that can be used to track hours of operation. If the end user is likely to have spare displays onsite or displays that will be sitting on the shelf for a while, he or she must make sure that the warranty period either does not begin until deployment or utilizes bulb-on time.
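As a rough planning aid, a bulb-on-time warranty can be converted into calendar time; the duty cycles below are assumptions, not vendor figures.

```python
# Hedged sketch: converting a 20,000 h bulb-on-time warranty (the example above)
# into calendar years under a few assumed duty cycles.
warranty_hours = 20_000
for hours_per_day, label in [(24, "always on"), (11, "about 11 h/day"), (8, "8 h/day, powered down otherwise")]:
    print(f"{label}: roughly {warranty_hours / (hours_per_day * 365):.1f} years")
```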

The warranty should refer to a calibrated maximum luminance that is much less than the advertised maximum output of the display. The calibrated maximum luminance will be less than the noncalibrated maximum luminance (initially, it will be much less). The extra luminance “headroom” is needed for calibration and to maintain luminance stabilization over time.

Tools for Quality Control.—To ensure that the displays are functioning as intended, a quality control program is needed. Most vendors offer quality control and reporting software that can help in this regard, albeit at additional cost. The end user should try out this software and understand clearly what features are important and what is offered by the product. Some features to consider include remote management, reporting fields and filters, quality control scheduling, failure notification, and appropriate or flexible data fields.

Service Agreement.—A reading room without displays is useless; therefore, planning for display failure and replacement is necessary. The purchaser should contract for a specified turnaround time and shipping agreement for failed displays. If any substantial downtime is unacceptable, having hot-swappable spare units onsite is recommended.

An important consideration in dealing with replacement of failed gray-scale displays is the need for color matching to the remaining displays. The color temperature of the remaining displays should be known so that it can be matched, or provision should be made to replace the bank of displays with ones that are color matched. Color displays can be color calibrated to match each other, although it should be noted that there is no widely accepted color calibration method or standard for any medical imaging application.


Radiologist Reading Room Environment

Nowadays, radiologists are reading more cases with more images per case. Physician shortages—in particular, of specialists in rural and medically underserved areas—aggravate the problem. Fatigue is a common problem that affects all healthcare professionals (16–21), but radiologists are particularly susceptible (22–29), and concerns have been raised that workloads are so demanding that fatigue may impact diagnostic accuracy (30–38). One factor that may contribute to fatigue is a poor or inappropriate reading environment. However, there are a number of relatively easy steps that can be taken to optimize reading in a digital environment (39).

Ambient Lighting Conditions.—The ambient lighting in the reading room should be neither completely off nor fully on; it should be set between 20 and 40 lux. This will help avoid reflections and glare on the screen, while providing sufficient illumination for the HVS to adapt to the surrounding environment. It is recommended that indirect or backlit incandescent lighting with dimmer switches be used instead of fluorescent lighting. Room lighting should be as uniform as possible. Overhead lights add to reflections and glare and therefore are not recommended. If supplemental task lighting is used, it should be focused; portable, mountable fixtures are very useful for this application. To optimize the HVS, radiologists should “dark adapt” for about 15 minutes before starting to read clinical images.

Reader Comfort.—It is important to consider the physical and visual comfort of the radiologist to avoid injury, fatigue, and general discomfort. There are three key “contact” points to consider (40): (a) where the eyes “hit” the monitor; (b) where and how the hands come into contact with input devices; and (c) where the body, back, and arms are in contact with the chair. The eyes should be level with or slightly below the top of the display monitor so that the entire scene can be viewed without having to bend the head or neck. With portrait-mode displays, it is useful to keep the eyes level somewhere between the top and center of the display. To avoid injuries to the shoulder, elbow, wrist, and hand, manual input devices (eg, keyboard, mouse) should be chosen by the individual to suit his or her own needs, style, and comfort level—one size does not fit all. These components should be placed for ease and comfort during use, which is likely to vary from person to person. Dictation tools, the Internet and other reference tools, and various devices and displays should be readily accessible and easy to use.

Many workstations have two major high-resolution displays for viewing images and possibly a lower-resolution monitor on the side for accessing patient records and other nonimaging data. The main monitors should be set side by side and angled toward each other slightly to avoid the end user’s having to turn the head and neck too much (thereby avoiding neck, shoulder, and back problems). Although the displays are not orthogonal to the viewer, it is recommended that the radiologist maintain a viewing angle as close to orthogonal as possible to avoid angle-dependent differences in contrast, which is still a problem with some monitors. The hands and wrists should not rest on sharp or hard edges and should be kept straight and in line with the keyboard and other input devices. Pads may be useful for some people. The chair is critical, and it is recommended that the end user find a comfortable chair, optimize its position, and not switch to someone else’s chair for convenience. Chairs should have a good backrest for lumbar support, be big enough for the individual in terms of width and depth, have a front edge that does not push against the knees or lower legs, and have adjustable armrests and good cushioning.

Reading rooms are generally not quiet places (41). Noise from computers, dictation, phones, and in-person consultations can be a significant distraction to radiologists. Moveable walls or partitions, sound-absorbing tiles and walls, and carpeting should be considered when laying out a reading room. It is necessary to create a balance between spatial separation and spatial openness to allow radiologists to readily consult and collaborate with each other. Background white noise or noise-canceling headphones are used by some radiologists to reduce the impact of external noise. All of the technology in the reading room—computers and monitors—generally emits a considerable amount of noise and heat. Therefore, it is important to maintain adequate air flow with optimal temperature and humidity controls.

Radiologists and Fatigue.—It is becoming increasingly evident that radiologists are experiencing growing fatigue as caseloads and the number of images per case increase (35–37). When film and view boxes were the norm, radiologists had to take breaks while images were removed and new images were hung on the view box. In the digital reading environment, these breaks no longer exist. Radiologists often sit for long hours reading images at short distances. This is visually fatiguing and can reduce the ability of the radiologist to focus, especially at short distances. There is evidence that a radiologist’s performance declines significantly after a long day of reading clinical images.

A very easy way to help reduce and even eliminate visual fatigue is the so-called 20-20-20 rule: every 20 minutes, look 20 feet away for 20 seconds. This technique relaxes the eye muscles that were being used to focus on the image display at short distances and is surprisingly effective at reducing eyestrain and improving focus. Another helpful technique is to avoid abrupt and frequent changes in light levels, which require the eyes to adapt to light and dark too often. Optimizing the ambient lighting can solve this problem. In addition, it is useful to avoid moving closer to the display to see fine details; instead, the use of zoom and pan functions and other image manipulation or processing tools is highly recommended.

Visual Search and Errors.—The Institute of Medicine reports that medical errors kill more Americans than do automobile accidents, breast cancer, or acquired immunodeficiency syndrome (42). Inadequate knowledge about the frequency, cause, and impact of errors, as well as about effective methods for error prevention, is a major obstacle to improving healthcare quality (43). Missed radiologic diagnoses from interpretation errors account for 30% of all malpractice suits in the United States (44–47). Avoiding false-negative errors is critical because treatment is not forthcoming when such errors occur. To avoid diagnostic errors, it is essential to know the underlying causes of error, and thus, how radiologists search and interpret images. This knowledge also provides the underlying rationale for the adoption of many of the quality assessment–quality control measures that are recommended, as well as an understanding of why it is so important to consider the display features described earlier when purchasing and using displays for medical image interpretation.

The retina is the main visual receptor and is located at the back of the eye. It contains about 115 million rods and 6.5 million cones. The rods are responsible for sensing contrast, brightness, and motion and are located in the retina periphery. The cones are responsible for spatial resolution and color vision and are located in the central fovea and the parafoveal regions of the retina. Fine details are best seen with the fovea, but spatial resolution drops off quickly toward the periphery (48). Thus, there is a “useful visual field” (49) only about 5° in diameter that the radiologist must move over an image to view and detect subtle, fine lesion details with high-resolution foveal vision (Figs 8, 9) (50–53).

Figures 8, 9. (8) Radiograph shows the useful visual field (~5° in diameter) that can be processed with high-resolution foveal vision as the observer moves his or her eyes around an image to gather information. (9) Chest radiograph shows the pattern typically used by a radiologist in searching for pulmonary nodules. Each circle represents a fixation point where the eye lands with foveal vision. The size of the circle reflects dwell time (ie, how long the observer’s eye remains at that location), with larger circles reflecting longer dwell time. The lines between fixation points represent saccades, or jumps between locations.

Although advances in technology and imaging techniques have greatly improved the quality of images, radiologists still do not always detect lesions that are present, even though in retrospect such lesions are often visible to others and even to the same radiologist. Developing ways to ameliorate these errors of omission will require a better understanding of the underlying causes.

Since the 1960s, the visual search patterns of radiologists have been studied to characterize the types of omission errors that are made. Eye-position recording techniques (Fig 10) have been used in many of these studies, since the technology can record where and for how long a radiologist looks at (ie, fixates on) a given image location (52). False-negative omission errors have been classified into three categories on the basis of visual dwell times. Approximately one-third of these errors are search errors, wherein the radiologist never fixates on the lesion with foveal vision and thus does not process the relevant information with high-resolution vision, so that it is unlikely that the lesion is detected or recognized (unless it is obvious enough to be detected with peripheral vision, which is not how most lesions are found). With recognition errors, the radiologist fixates on the lesion with foveal vision, but not for very long, and the resulting inadequate processing time reduces the likelihood that the lesion will be detected or recognized. Decision errors occur when the radiologist fixates on the lesion for long periods of time but either does not consciously recognize the features of the lesion or actively dismisses them.
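
The dwell-time criteria above amount to a simple decision rule. The sketch below illustrates the idea with hypothetical eye-tracking data; the one-second threshold, the data format, and the names (classify_omission_error, on_lesion) are illustrative assumptions, not parameters or code from the studies cited.

```python
# Minimal sketch of dwell-time-based classification of false-negative (omission) errors.
# Threshold, data format, and names are assumptions for illustration only.

DWELL_THRESHOLD_S = 1.0  # assumed cutoff between "brief" and "prolonged" fixation


def classify_omission_error(fixations, on_lesion, threshold=DWELL_THRESHOLD_S):
    """Classify a missed lesion as a search, recognition, or decision error.

    fixations : iterable of (x, y, dwell_seconds) tuples from an eye tracker
    on_lesion : callable returning True if the point (x, y) falls on the lesion
    """
    # Total foveal dwell time accumulated on the lesion location
    lesion_dwell = sum(d for x, y, d in fixations if on_lesion(x, y))

    if lesion_dwell == 0:
        return "search error"       # lesion never fixated with foveal vision
    if lesion_dwell < threshold:
        return "recognition error"  # fixated briefly; processing time inadequate
    return "decision error"         # fixated at length; features dismissed or not recognized


# Example with synthetic data: one long fixation lands on the (missed) lesion
fixations = [(120, 240, 0.3), (410, 305, 1.6), (500, 90, 0.4)]
on_lesion = lambda x, y: 380 <= x <= 440 and 280 <= y <= 330
print(classify_omission_error(fixations, on_lesion))  # prints "decision error"
```

In the perception literature, such dwell-time cutoffs are typically chosen empirically from observed dwell-time distributions rather than fixed in advance.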

Analyzing eye position to classify omission errors is based solely on search behaviors, but there are at least three steps in the interpretation process. The radiologist must (a) detect and (b) classify lesions and then (c) decide on an action, and errors can occur at any point in this process. Thus, there are misclassification errors (eg, a benign mass is characterized as malignant or vice versa, or pneumonia is mistaken for atelectasis). There are also action errors, such as when a lesion is recognized at mammography and reported but then put on a 6-month watch, when instead the lesion should have been biopsied.

Satisfaction of search (SOS) can also contribute to errors (54). In SOS, once an abnormality is detected and recognized, additional diligence is required to look for other possible abnormalities in the image. Sometimes this extra effort is not made, and other lesions in the same image or case are missed. Estimates of the prevalence of SOS errors vary, but they range from one-fifth to one-third of misses in radiology and may be as high as 91% of misses in emergency medicine. Premature termination of search is generally not the root cause of SOS; rather, faulty pattern recognition and faulty decision making seem to be the more likely culprits.

Reader Experience.—All radiologists make mistakes; it is unavoidable. However, the preponderance of evidence, as well as practical observation, shows that as radiologists become more experienced—as they spend more time reviewing more images—they tend to improve their performance. As they gain experience, radiologists tend to have higher sensitivity and specificity rates than when they started. In terms of visual search, more experienced readers tend to find lesions faster and to require less time fixating on a lesion before deciding whether a finding really is a lesion. In short, with experience, radiologists become more efficient and effective decision makers (55).

Figure 10. Photograph shows a typical observer setup for an eye-position recording session using a head-mounted system with head-tracking capability. The optics module sits on the headband on the observer's forehead, directing infrared light at the visor (alternatively, a monocle is used). The infrared light reflects off the visor into the eye; the reflections from the pupil and cornea then return to a camera in the optics module for processing.

Conclusions
Allied medical imaging professionals must be familiar with a wide variety of image display conformance parameters and usage factors to ensure high standards of perceived image quality and to improve image readability and interpretation by the end user. Adequate and repeatable performance of image display systems is a key element of information technology platforms in modern radiology departments. Those persons responsible for displays should follow recommended ACR technical standards to implement a comprehensive program of image display quality control and quality assurance.

References
1. Kagadis GC, Walz-Flannigan A, Krupinski EA, et al. Image Display, from the AAPM/RSNA Physics Online Modules. http://physics.rsna.org/section/default.asp?id=PHYS0910. Accessed March 1, 2012.

2. Fetterly KA, Blume HR, Flynn MJ, Samei E. Introduction to grayscale calibration and related aspects of medical imaging grade liquid crystal displays. J Digit Imaging 2008;21(2):193–207.

3. Roehrig H, Krupinski EA, Chawla AS, et al. Noise of LCD display systems. International Congress Series 2003; 168.

4. Samei E. AAPM/RSNA physics tutorial for residents: technological and psychophysical considerations for digital mammographic displays. RadioGraphics 2005;25(2):491–501.

5. Barten P. Physical model for the contrast sensitivity of the human eye. SPIE Human Vision, Visual Processing and Digital Display III 1992; 57–72.

6. Barten P. Spatio-temporal model for the contrast sensitivity of the human eye and its temporal aspects. SPIE Human Vision, Visual Processing and Digital Display IV 1992; 2–14.

7. Flynn MJ, Kanicki J, Badano A, Eyler WR. High-fidelity electronic display of digital radiographs. RadioGraphics 1999;19(6):1653–1669.

8. Samei E, Badano A, Chakraborty D, et al. Assessment of display performance for medical imaging systems: report of the American Association of Physicists in Medicine (AAPM) Task Group 18. College Park, Md: American Association of Physicists in Medicine, 2005.

9. Digital Imaging and Communications in Medicine (DICOM). Part 14: Grayscale Standard Display Function. Rosslyn, Va: National Electrical Manufacturers Association, 2004.

10. Samei E, Badano A, Chakraborty D, et al. Assessment of display performance for medical imaging systems: executive summary of AAPM TG18 report. Med Phys 2005;32(4):1205–1225.

11. ACR-AAPM-SIIM Technical Standard for Electronic Practice of Medical Imaging. http://www.acr.org/~/media/ACR/Documents/PGTS/standards/ElectronicPracticeMedImg.pdf. Accessed October 15, 2012.

12. Apple iPhone 4 LCD Display Shoot-Out. 2012. http://www.displaymate.com/iPhone_4_ShootOut.htm. Accessed March 1, 2012.

13. FDA. FDA clears first diagnostic radiology application for mobile devices. U.S. Food and Drug Administration; 2011. http://www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/ucm242295.htm. Accessed March 1, 2012.

14. MIM. Mobile MIM. http://www.mimsoftware.com/markets/mobile/. Accessed March 1, 2012.

15. SMPTE RP133. Specifications for medical diagnostic imaging test pattern for television monitors and hard-copy recording cameras. White Plains, NY: Society of Motion Picture & Television Engineers, 1991.

16. Committee on Patient Safety and Quality Improvement. ACOG committee opinion number 398, February 2008: fatigue and patient safety. Obstet Gynecol 2008;111(2 Pt 1):471–474.

17. Higginson JD. Perspective: limiting resident work hours is a moral concern. Acad Med 2009;84(3):310–314.

18. LeBlanc VR. The effects of acute stress on performance: implications for health professions education. Acad Med 2009;84(10 suppl):S25–S33.

19. Lindfors PM, Heponiemi T, Meretoja OA, Leino TJ, Elovainio MJ. Mitigating on-call symptoms through organizational justice and job control: a cross-sectional study among Finnish anesthesiologists. Acta Anaesthesiol Scand 2009;53(9):1138–1144.

20. Riad W, Mansour A, Moussa A. Anesthesiologists work-related exhaustion: a comparison study with other hospital employees. Saudi J Anaesth 2011;5(3):244–247.

21. Scott LD, Hofmeister N, Rogness N, Rogers AE. Implementing a fatigue countermeasures program for nurses: a focus group analysis. J Nurs Adm 2010; 40(5):233–240.

22. Bhargavan M, Sunshine JH. Utilization of radiology services in the United States: levels and trends in modalities, regions, and populations. Radiology 2005;234(3):824–832.

23. DiPiro PJ, vanSonnenberg E, Tumeh SS, Ros PR. Volume and impact of second-opinion consultations by radiologists at a tertiary care cancer center: data. Acad Radiol 2002;9(12):1430–1433.

24. Ebbert TL, Meghea C, Iturbe S, Forman HP, Bhargavan M, Sunshine JH. The state of teleradiology in 2003 and changes since 1999. AJR Am J Roentgenol 2007;188(2):W103–W112.

25. Lu Y, Zhao S, Chu PW, Arenson RL. An update survey of academic radiologists’ clinical productivity. J Am Coll Radiol 2008;5(7):817–826.

26. Meghea C, Sunshine JH. Determinants of radiologists’ desired workloads. J Am Coll Radiol 2007;4(3):166–170.

27. Mukerji N, Wallace D, Mitra D. Audit of the change in the on-call practices in neuroradiology and factors affecting it. BMC Med Imaging 2006;6:13.

28. Nakajima Y, Yamada K, Imamura K, Kobayashi K. Radiologist supply and workload: international comparison—Working Group of Japanese College of Radiology. Radiat Med 2008;26(8):455–465.

29. Sunshine JH, Maynard CD. Update on the diagnostic radiology employment market: findings through 2007-2008. J Am Coll Radiol 2008;5(7):827–833.

30. McCall I. Workload and manpower in clinical radiology. London, England: Royal College of Radiologists, 1999; 5.

31. Risk management in radiology in Europe IV. Vienna, Austria: European Society of Radiology, 2004.

32. Bechtold RE, Chen MY, Ott DJ, et al. Interpretation of abdominal CT: analysis of errors and their causes. J Comput Assist Tomogr 1997;21(5):681–685.

33. Berlin L. Liability of interpreting too many radiographs. AJR Am J Roentgenol 2000;175(1):17–22.

34. Fitzgerald R. Error in radiology. Clin Radiol 2001;56(12):938–946.

35. Krupinski EA, Berbaum KS. Measurement of visual strain in radiologists. Acad Radiol 2009;16(8):947–950.

36. Krupinski EA, Berbaum KS, Caldwell RT, Schartz KM, Kim J. Long radiology workdays reduce detection and accommodation accuracy. J Am Coll Radiol 2010;7(9):698–704.

37. Krupinski EA, Berbaum KS, Caldwell RT, Schartz KM, Madsen MT, Kramer DJ. Do long radiology workdays affect nodule detection in dynamic CT interpretation? J Am Coll Radiol 2012;9(3):191–198.

38. Oestmann JW, Greene R, Kushner DC, Bourgouin PM, Linetsky L, Llewellyn HJ. Lung lesions: correlation between viewing time and detection. Radiology 1988;166(2):451–453.

39. Krupinski EA, Kallergi M. Choosing a radiology workstation: technical and clinical considerations. Radiology 2007;242(3):671–682.

40. Occupational Safety & Health Administration. Computer workstations. http://www.osha.gov/SLTC/etools/computerworkstations/. Accessed February 28, 2012.

41. Brennan PC, Ryan J, Evanoff M, et al. The impact of acoustic noise found within clinical departments on radiology performance. Acad Radiol 2008;15(4):472–476.

42. Kohn LT, Corrigan JM, Donaldson MS, eds. To err is human: building a safer health system. Washington, DC: Institute of Medicine, 1999.

43. Report to the President. Doing what counts for patient safety: federal actions to reduce medical errors and their impact. Washington, DC: Quality Interagency Coordination Task Force, 2000.

44. Berlin L. Reporting the “missed” radiologic diagnosis: medicolegal and ethical considerations. Radiology 1994;192(1):183–187.

45. Berlin L. Malpractice issues in radiology: perceptual errors. AJR Am J Roentgenol 1996;167(3):587–590.

46. Berlin L, Berlin JW. Malpractice and radiologists in Cook County, IL: trends in 20 years of litigation. AJR Am J Roentgenol 1995;165(4):781–788.

47. Berlin L, Hendrix RW. Perceptual errors and negligence. AJR Am J Roentgenol 1998;170(4):863–867.

48. Forrester JV, Dick AD, McMenamin PG, Lee WR. The eye: basic sciences in practice. Philadelphia, Pa: Saunders, 1996.

49. Carmody DP, Nodine CF, Kundel HL. An analysis of perceptual and cognitive factors in radiographic interpretation. Perception 1980;9(3):339–344.

50. Kundel HL. Perception errors in chest radiography. Semin Respir Med 1989;10(3):203–210.

51. Kundel HL. Peripheral vision, structured noise and film reader error. Radiology 1975;114(2):269–273.

52. Kundel HL, Nodine CF, Carmody D. Visual scanning, pattern recognition and decision-making in pulmonary nodule detection. Invest Radiol 1978;13(3):175–181.

53. Kundel HL, Nodine CF, Krupinski EA. Searching for lung nodules: visual dwell indicates locations of false-positive and false-negative decisions. Invest Radiol 1989;24(6):472–478.

54. Berbaum K, Franken E Jr, Caldwell R, Schartz K. Satisfaction of search in traditional radiographic imaging. In: Samei E, Krupinski EA, eds. The handbook of medical image perception and techniques. New York, NY: Cambridge University Press, 2010; 107–138.

55. Nodine C, Mello-Thoms C. The role of expertise in radiologic image interpretation. In: Samei E, Krupinski EA, eds. The handbook of medical image perception and techniques. New York, NY: Cambridge University Press, 2010; 139–156.

Teaching Points January-February Issue 2013

Medical Imaging Displays and Their Use in Image Interpretation
George C. Kagadis, PhD • Alisa Walz-Flannigan, PhD • Elizabeth A. Krupinski, PhD • Paul G. Nagy, PhD • Konstantinos Katsanos, MD, PhD • Athanasios Diamantopoulos, MD • Steve G. Langer, PhD

RadioGraphics 2013; 33:275–290 • Published online 10.1148/rg.331125096 • Content Codes:

Page 276
Despite the widespread availability of high-end computing platforms and advanced color and gray-scale monitors, the quality and properties of the final displayed image are often inadequate for diagnostic purposes if the displays are not configured and maintained properly. Therefore, image quality assurance should include the entire image manipulation chain, starting with acquisition and ending with display and interpretation.

Page 279
In creating strategies for displaying medical images electronically, we seek to maximize the perceptibility of information and promote the consistency of image presentation across different displays (2). The latter is accomplished with use of consistent display calibration.

Page 282
Optimized software on mobile operating systems can provide very fast and easy access to images (often faster than logging onto a nearby workstation). However, viewers must be aware that touch screens get covered with fingerprints, poorly selected display settings can compromise viewing, how one holds the device and the environment in which one is reading can significantly limit what is visible, and the images may not look like what the reporting radiologist saw.

Page 287
To avoid diagnostic errors, it is essential to know the underlying causes of error and thus how radiologists search and interpret images.

Page 289
Adequate and repeatable performance of image display systems is a key element of information technology platforms in modern radiology departments. Those persons responsible for displays should follow recommended ACR technical standards to implement a comprehensive program of image display quality control and quality assurance.