Radiologists have been looking at film images ever since Roentgen first discovered x-rays and obtained an image of his wife's hand. Since the 1980s, however, radiologists have leapt into the digital world and view images on computer monitors with increasing frequency. Picture archiving and communication systems (PACS), Cross-Enterprise Document Sharing for Imaging (XDS-I), and teleradiology networks are becoming commonplace, and many radiology residents are now trained with digital rather than film displays. [1, 2, 3, 4, 5, 6, 7]
For many radiologists, the technical details of network architectures, bandwidths, digital archives, and digital imaging and communications in medicine (DICOM) interface compatibility are of little concern. What is most important from the clinical perspective is that the required patient images are available when needed, are available quickly, and are of diagnostic quality. Of these, diagnostic quality may be the primary factor from the clinician's standpoint (see the images below). 
Quality can be affected at a number of points in the digital chain before the final image is presented to the radiologist's eye-brain system. Of most concern to the radiologist is the final step in this chain, the presentation of the clinical image on a display device, because this is what the radiologist examines and uses to make a diagnostic decision. Thus, radiologists must be aware of the issues that affect the display of digital radiographic images.
Recent research has focused on issues of privacy protection and security in the transmission of these images, including the use of chaotic maps and watermarks. Other researchers are focusing on establishing standardized and consistent reporting procedures. [11, 12]
Transition to monitors
As the transition from film to monitors began to take place, it became evident that, as with many digital images, the radiographic image on the computer monitor did not appear the same as the image on film. Initially, radiologists were skeptical and did not trust digital displays for routine clinical use. Many perceptual and ergonomic issues arose as the use of film decreased. Compared with the traditional method of viewing film on a light box, monitors typically are less bright, have lower spatial resolution, offer less contrast (dynamic range), and provide a smaller viewing area.
These factors must be addressed, and researchers in medical image perception have begun to investigate them. Eventually, radiologists must ensure that switching to a different viewing medium neither negatively affects diagnostic accuracy nor significantly affects workflow. If adapting to and using a new type of workstation or viewing system takes too much time, radiologists are not likely to make the transition easily or quickly. [13, 14, 15, 16]
Optimizing the Digital Display for Clinical Use
Comparing monitor images to film
A number of studies have correlated monitor luminance, tone scale, and interface design with perceptual factors that affect the clinical reading environment and diagnostic accuracy. In terms of diagnostic accuracy, performance usually is approximately the same with monitor viewing as with film. However, other aspects of performance may be affected by a change in display modality. [17, 18, 19, 20, 21]
In one study, 3 bone radiologists and 3 orthopedic surgeons read images from 27 patients with bone trauma, once on film and once on a monitor. They searched for fractures, dislocations or subluxations, ligamentous injuries, and soft-tissue swellings or effusions. The readers were required to indicate the presence or absence of each feature for each patient. Eye position was recorded as they searched the images (see the image below). [23, 24, 25, 26, 27, 28]
Diagnostic performance was statistically equivalent for film and monitor viewing (film = 86% true positives, 13% false positives; monitor = 80% true positives, 21% false positives), although film reading was slightly better. Viewing time and other measures of visual search performance differed significantly, as determined from the eye-position recordings. Average viewing time was 46.45 seconds for film, compared with 91.15 seconds for the monitor, approximately twice as long.
The primary difference between film and monitor readings was in the visual dwell times associated with lesion-free areas of images. Average dwell time on true-negative areas was significantly longer with the monitor than with film, and because most areas on an image are lesion free, the extended dwell times incrementally produced significantly longer viewing times. [29, 30]
Additionally, readers took approximately twice as long (Student t [t] = 4.84, degrees of freedom [df] = 107, statistical probability [p] = 0.0001) to first fixate (have the eye land on) the lesion of interest with a monitor (4.67 seconds into the search) as with film (2.35 seconds into the search).
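Comparisons like the one above rest on two-sample t tests of reading-time measures. As an illustration of the computation only, here is a minimal sketch using the pooled-variance (Student) form with hypothetical time-to-first-fixation samples, not the study's actual data:

```python
import math
from statistics import mean, variance

def two_sample_t(a, b):
    """Student two-sample t statistic with pooled variance.

    Returns (t, degrees of freedom)."""
    na, nb = len(a), len(b)
    # Pooled sample variance across both groups.
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    t = (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

# Hypothetical time-to-first-fixation samples (seconds); illustrative only.
monitor = [4.1, 5.2, 4.9, 4.5, 4.6]
film = [2.2, 2.6, 2.1, 2.5, 2.4]
t, df = two_sample_t(monitor, film)
```

The resulting t value would then be compared against the t distribution with the computed degrees of freedom to obtain a p value.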
Viewing times with the monitor were further extended because 20% of the fixation clusters generated during the search fell on the image-processing menu and toolbar rather than on the diagnostic image; the computer interface itself, therefore, may have been a distraction. Increasingly, the best interfaces are found to be simple and uncluttered and to require little training to use.
This study demonstrates how factors other than diagnostic accuracy can be important. Extended viewing times per patient can yield decreased workflow, increased fatigue, and, possibly, decreased performance over time. Developing an easy-to-use, nondistracting interface is also crucial to promote the use of PACS and teleradiology systems by clinical radiologists. [8, 31, 32]
Physical features of the monitor
Certain physical characteristics of the display monitor also can affect diagnostic performance; therefore, examine these features when considering a monitor purchase for teleradiology and/or PACS applications in the clinical setting. Note that all of these display parameters apply both to the traditional cathode ray tube (CRT) display and to the multitude of liquid crystal displays (LCDs) that have become available for use in radiology. LCDs, of course, may come with their own set of limitations (eg, degradation of image quality when viewed from off-axis angles).
For example, monitor luminance reportedly affects diagnostic performance. The best monitors currently available are approximately one quarter as bright as a typical radiographic view box (250 vs 1000 foot-lambert). In one study, diagnostic performance was better with a high-luminance monitor (140 foot-lambert) than with a relatively low-luminance monitor (80 foot-lambert). Eye position was recorded as 50 pairs of mammograms were viewed on each monitor (see the image below).
As in previous studies, no significant difference in diagnostic performance was found (alternative free response receiver operating characteristic [AFROC] A1 for 80 foot-lambert = 0.9594, for 140 foot-lambert = 0.9695; t = 1.685, df = 5, p = 0.1528). However, once again, the eye-position recording revealed significantly different viewing times: 52.71 seconds versus 48.99 seconds for the 80 versus 140 foot-lambert monitor, respectively (t = 1.99, df = 299, p = 0.047). Dwell times associated with true-negative decisions were again affected most and were significantly longer with the 80 foot-lambert monitor than with the 140 foot-lambert monitor.
These results suggest that changes in digital display luminance may affect the radiologist's ability to easily determine that lesion-free (ie, normal) image locations are normal. Luminance changes may increase the time required to search an image thoroughly and determine whether it is lesion free.
Perceptual linearization of monitors
Performance is better with a perceptually linearized display curve (eg, the DICOM curve) than with a nonlinearized curve (eg, the Society of Motion Picture and Television Engineers [SMPTE] pattern used to calibrate monitors). The DICOM standard curve was developed to match monitor output (relative to gray levels) to the perceptual capabilities of the human visual system.
The idea behind perceptual linearization derives from the fact that displaying an image on a monitor involves 2 nonlinear mappings.
The first map takes recorded image data (actual numeric values from the digital image) and transforms them into luminance values on the monitor screen, which represent the monitor's display function or characteristic curve.
The second map transforms the display luminance according to the brightness response of the human visual system.
The optimal perceived dynamic range of the display (which affects contrast and, therefore, perception, especially of low-contrast lesions) depends crucially on the optimal combination of these 2 mappings. Standardization of display curves is important in PACS and teleradiology because the systems allow radiologists to send images from one location to another and to use different monitors for viewing. Images on one monitor must look the same on all monitors. The DICOM display standard tries to realize this ideal by setting up a standard display curve and certain other quality-control measures.
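In software, perceptual linearization amounts to composing the 2 mappings: the display's characteristic curve is inverted so that the combined response follows a perceptually uniform target. The following is a minimal sketch, using a gamma-curve display model and a logarithmic (Weber-law) target as a simplified stand-in for the full DICOM curve; all parameter values are illustrative, not calibration constants:

```python
def build_linearizing_lut(levels=256, gamma=2.2, l_min=1.0, l_max=400.0):
    """Build a lookup table mapping pixel values to driving levels so
    that successive displayed luminance steps are perceptually equal
    (log-spaced, a Weber-law simplification of the DICOM model),
    given a display with a gamma-curve characteristic."""
    lut = []
    for p in range(levels):
        # Target: log-spaced luminance between l_min and l_max.
        target = l_min * (l_max / l_min) ** (p / (levels - 1))
        # Invert the display curve L = l_min + (l_max - l_min) * d**gamma
        # to find the driving level d in [0, 1] that yields the target.
        d = ((target - l_min) / (l_max - l_min)) ** (1 / gamma)
        lut.append(round(d * (levels - 1)))
    return lut

lut = build_linearizing_lut()
```

In practice the display curve is measured with a photometer rather than assumed, and the target curve is the DICOM standard function, but the composition of the 2 mappings is the same.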
In a related study, similar to the study that compared monitors of different luminances, a series of 50 mammograms were used to compare the performances of perceptually linearized displays with those of nonperceptually linearized displays.  AFROC analysis indicated that diagnostic performance was significantly higher with the perceptually linearized display (A1 = 0.972) than with the nonlinearized display (A1 = 0.951, t = 5.42, df = 5, p = 0.003).
Eye-position data also revealed significant differences concerning dwell time and visual search. With the nonlinearized display, total viewing time was longer, dwell times associated with all types of decisions (true and false, positive and negative) were longer (especially true negatives), and significantly more fixation clusters were generated during searches than with the linearized display. The choice of monitor display curve may significantly affect the radiologist's ability to detect lesions and to decide that truly negative images are negative; with a nonperceptually linearized display, this extends overall viewing time.
Display resolution considerations
Display resolution currently is an important topic in digital radiology and PACS; radiologists prefer as much resolution as possible. However, the higher the resolution of the monitor, the higher the cost. Black and white (B&W) monitors that maximize dynamic range (required for most gray-scale images) are also typically more expensive than color monitors. The most common resolutions for a typical desktop computer monitor are 1024 x 1280 pixels or 1200 x 1600 pixels. Desktop monitors are typically color displays, which degrades the dynamic range (blacks are not as black, and whites are not as white, as on a B&W monitor).
Nuclear medicine (single photon emission computed tomography [SPECT], positron emission tomography [PET]), computed tomography (CT) scanning, and magnetic resonance imaging (MRI) all produce images that are either 256 x 256 pixels or 512 x 512 pixels (for each slice). If one were to view these images on a slice-by-slice basis or by scrolling through them, a 1200 x 1600–pixel monitor potentially would suffice, because these slices easily can be displayed at full resolution on such a monitor. With the advent of techniques that merge images from different modalities and use color overlays to highlight certain types of information, color monitors are desirable.
The concern with low-resolution color monitors is that they cannot provide the maximum contrast resolution and spatial resolution required by modalities that need high-resolution B&W displays to fully exploit the dynamic range. A typical computed radiography image is approximately 2300 x 1700 pixels, making a medium-to-high-resolution monitor a requirement if the entire image is to be viewed at full resolution. The new digital mammography systems can produce images as large as 4800 x 6400 pixels.
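The arithmetic behind these size comparisons is simple; the sketch below computes the integer minification (downsampling) factor needed to fit an entire image on a display at once. The image sizes come from the text; the display size is one typical example, and integer-factor minification is an illustrative simplification:

```python
def minification_factor(img_wh, disp_wh):
    """Smallest integer downsampling factor that lets the whole image
    fit on the display at once (1 means full resolution fits)."""
    iw, ih = img_wh
    dw, dh = disp_wh
    f = 1
    while iw / f > dw or ih / f > dh:
        f += 1
    return f

# Image sizes from the text; display size is a typical diagnostic monitor.
cr_image = (1700, 2300)            # computed radiography
mammo = (4800, 6400)               # full-field digital mammography
diagnostic_display = (2048, 2560)

cr_factor = minification_factor(cr_image, diagnostic_display)
mammo_factor = minification_factor(mammo, diagnostic_display)
```

On a 2048 x 2560 display, the computed radiography image fits at full resolution, whereas the mammogram must be minified (or panned/zoomed) to be seen in its entirety.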
There is some debate in radiology about whether it is necessary to display an image at full resolution all at once or whether it is acceptable to compress the image to view it all at one time, followed by zoom or magnification to access the original data and to view specific portions of the image at full resolution. A significant amount of work is being done to resolve this debate.
In general, monitors of at least 2048 x 2560 pixels should be used for primary diagnostic interpretation in radiology. This resolution will suit most modalities, especially given adequate image-processing support. Currently, 5000 x 5000–pixel monitors are available, but cost and lifetime issues are still of great concern with these displays. As noted, other characteristics need to be considered when purchasing a display device for radiology (eg, luminance, price, dynamic range); thus, spatial resolution alone should not be the deciding factor.
Although the CRT display monitor is currently the most common and reliable display device, other technologies are making an impact in digital radiology. Flat panel devices hold a lot of promise, especially in terms of display luminance. Once the angle-of-regard problem (viewing the display off-center) has been fully solved, these new devices could represent a viable alternative to high-resolution CRT monitors. Because flat panel technology is also undergoing research and development by many companies for the commercial market, radiology may benefit significantly in terms of cost if the technology can be easily adapted for clinical use.
Luminance and color
Radiographic images today are displayed on both medical-grade (MG) and commercial off-the-shelf (COTS) color LCDs. COTS displays are readily available and usually less expensive than MG grayscale displays, making them very attractive for both large and small practices. Most LCD panels are backlit with cold-cathode fluorescent lamps (CCFLs), but newer ones use light-emitting diodes (LEDs) and have thinner profiles, lower power consumption, and reportedly longer lifetimes.
In both cases, the liquid crystal elements regulate how much of the backlight passes through the panel: the backlight brightness determines the maximum display luminance, and the amount of light the panel can block sets the minimum luminance. Both CCFL and LED backlights degrade or dim with time and amount of use, so they need to be replaced once the luminance is no longer in compliance with established standards (minimum 1.0 cd/m2; maximum at least 350 cd/m2, or 420 cd/m2 for mammography; luminance ratio >250). MG displays often have embedded tools that monitor and adjust backlight levels, so they are more stable than COTS displays, which require more regular manual recalibration. MG displays also typically have better luminance uniformity (approximately 15% variation) than COTS displays (>20%), owing to embedded technology that compensates for pixel-to-pixel luminance variation.
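A routine compliance check against the limits quoted above can be expressed in a few lines. This is an illustrative sketch of the logic only, not a quality-control tool; the thresholds are the ones stated in the text:

```python
def check_luminance(l_min, l_max, mammography=False):
    """Check measured display luminance (cd/m2) against the limits in
    the text: minimum >= 1.0, maximum >= 350 (420 for mammography),
    and luminance ratio > 250. Returns a list of failures (empty = pass)."""
    required_max = 420.0 if mammography else 350.0
    failures = []
    if l_min < 1.0:
        failures.append("minimum luminance below 1.0 cd/m2")
    if l_max < required_max:
        failures.append(f"maximum luminance below {required_max} cd/m2")
    if l_min > 0 and l_max / l_min <= 250:
        failures.append("luminance ratio not greater than 250")
    return failures

# Example: a general-radiology display measured at 1.2 and 400 cd/m2.
result = check_luminance(1.2, 400.0)
```

The same measurements that pass for general radiology (empty failure list) would fail the stricter mammography maximum.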
One main reason luminance is so important is contrast perception, which can vary considerably because (1) the human eye adapts to the average brightness to which it is exposed and (2) as brightness diverges from the point of adaptation, subtle contrast changes (ie, lesions in radiographic images) become more and more difficult to perceive. Very briefly, contrast sensitivity of the human visual system (HVS) can be quantified using just-noticeable differences (JNDs), or detection thresholds, which represent perceivable changes in luminance at a given displayed luminance. Most of the seminal work on modeling contrast sensitivity was done by Barten and is still used today. He determined an average HVS response based on data (detection of sinusoidal contrast patterns on backgrounds of different luminance) collected from a large sample of subjects and showed that the HVS is nonlinear: the percent contrast change required for a JND at high background luminance is lower than that required at low background luminance.
This means that in order to optimize the perceptibility of diagnostic image information, calibration methods must account for the capabilities and limitations (ie, contrast sensitivity nonlinearity) of the HVS. Barten’s model provides a means to accomplish this by producing perceptual linearity across grayscale values (by setting luminance values so changes in pixel value correspond to equal JNDs), so changes in pixel values across a grayscale range are perceived to have similar contrast. Basically, information at low luminance levels is not lost at the expense of being able to perceive information at high levels and vice versa.
The Digital Imaging and Communications in Medicine (DICOM) Part 14 grayscale standard display function (GSDF) accomplishes this, and studies have shown that diagnostic accuracy is better with a DICOM-calibrated display than with an uncalibrated one. Although the DICOM GSDF is not perfect, it is the most widely used calibration method in radiology today.
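The GSDF itself is a published analytic function: DICOM PS3.14 gives luminance as a rational polynomial in the natural logarithm of the JND index j (1 to 1023), with fixed coefficients. A minimal sketch of the published formula:

```python
import math

# Coefficients a-m of the DICOM PS3.14 Grayscale Standard Display Function.
A = -1.3011877;    B = -2.5840191e-2; C = 8.0242636e-2; D = -1.0320229e-1
E = 1.3646699e-1;  F = 2.8745620e-2;  G = -2.5468404e-2
H = -3.1978977e-3; K = 1.2992634e-4;  M = 1.3635334e-3

def gsdf(j):
    """Luminance in cd/m2 for JND index j (1 <= j <= 1023)."""
    x = math.log(j)
    num = A + C * x + E * x**2 + G * x**3 + M * x**4
    den = 1 + B * x + D * x**2 + F * x**3 + H * x**4 + K * x**5
    return 10 ** (num / den)
```

The function spans roughly 0.05 to about 4000 cd/m2; calibration software tabulates it, restricts it to the luminance range the display can actually produce, and builds the lookup table that assigns equal numbers of JNDs to equal pixel-value steps.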
The use and impact of the DICOM GSDF on MG grayscale displays is rather well known, but as noted previously, color displays are being used more widely in radiology for diagnostic interpretation than ever before. To date, the DICOM GSDF includes no specific recommendations for color displays in radiology, and the de facto standard is to use the GSDF to calibrate the display of grayscale images on color monitors. Although older studies showed clear differences between color and monochrome displays in terms of achievable diagnostic accuracy (monochrome superior to color), most recent studies show that even high-quality COTS color displays can yield equivalent levels of diagnostic accuracy if properly calibrated and maintained. The American College of Radiology (ACR) standard recommends that all monitors be set to a white point corresponding to the CIE daylight standard D65, ie, a color temperature of approximately 6500 K.
With the advent of electronic health records and the Integrating the Healthcare Enterprise (IHE) initiative, proper calibration of color displays is becoming increasingly important. Radiologists and other clinicians are not only viewing grayscale radiographs during patient care but are also viewing color medical images such as pathology whole slide images (WSIs), digital ophthalmology images, dermatology images, and a host of other visible-light images. In current practice, there is very little guidance on the calibration or characterization of medical color displays. One possibility is to use color-device profiles conforming to the International Color Consortium (ICC) specification for color management of digital imaging systems, as it provides a standardized architecture, profile format, and data structure for color management and for color data interchange between different color imaging devices.
Methods are being proposed for display of both monochrome and color medical images, but few have been validated with respect to impact on diagnostic performance. It may be that separate calibration schemes will be required for displaying monochrome versus color images on color displays until an all-in-one method suitable for simultaneous display of both types of images has been devised. For example, one could start by taking 2 standard pathology slides that are scanned and displayed. One slide is embedded with 9 filters having colors purposely selected for hematoxylin and eosin (H&E)–stained WSIs, and the other slide is an H&E-stained mouse embryo. The displayed images are compared with a standard to identify inaccurate display of color and its causes.
Other methods address display characterization and the tools used for calibration. One recent study characterized 3 probes for measuring display color: a modification of a small-spot luminance probe and 2 conic probes based on black frusta. The investigators found significant differences between the probes that affect the measurements used to quantify display color and have therefore proposed a method for evaluating the performance of color calibration kits for LCD monitors, along with a universal platform (Virtual Display) for emulating tone reproduction curves.
A more recent method developed by Silverstein et al included implementing a black-level correction and encoding it so that it was compatible with the ICC color profile structure. They found that color reproduction accuracy improved dramatically using their proposed methodology for color display characterization and profiling using a series of COTS displays with varying preset calibrations.
In one of the only studies to examine the impact of color management and calibration on diagnostic accuracy, this method was used to compare a calibrated versus uncalibrated (out of the box) COTS NEC 2690 display for diagnosing a set of WSI breast biopsy images. Although diagnostic performance with the color-calibrated display was higher than with the uncalibrated display, no statistically significant differences in diagnostic accuracy were observed. However, viewing time was significantly shorter with the calibrated display, suggesting a slight advantage diagnostically for a properly calibrated and color-managed display and a significant potential advantage in terms of improved workflow.
Digital Displays and Computer Aids for the Radiologist
Digital displays of radiographs also make possible the true clinical use of computer-aided diagnosis (CAD) schemes. The goal of CAD is similar to the goal of perceptual feedback discussed earlier (ie, to provide the radiologist with an additional look at an image, with the potential lesion locations indicated). However, instead of using eye-position information, CAD uses a variety of image-processing algorithms to detect and occasionally classify probable lesion sites. Methods of using CAD information by radiologists in the clinic and CAD's effect on diagnostic performance are becoming topics of interest. 
Although CAD systems perform well, computers still miss lesions that the radiologist is able to find. Radiologists and a CAD system independently examined a series of 80 mammograms for microcalcification clusters, with the following findings:
The CAD system had a true-positive rate of 83%, with 0.5 false positives per image.
The radiologists had true-positive rates of 78-90% and false-positive rates of 0.03-0.20 per image.
When the locations of the CADs' and the radiologists' true and false positives were examined, all but 5% of the true microcalcification clusters were identified by the CAD system, the radiologists, or both.
Of the detected clusters, 10% were detected by CAD but missed by the radiologist, and 11% were missed by CAD and detected by a minimum of 1 radiologist.
Examination of the lesion features revealed that CAD detected microcalcifications that radiologists judged to have few or no visible features but occasionally missed those with obvious but nontypical features.
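The figures above (true-positive rate and false positives per image) come from matching detection marks against ground truth. As an illustration of how such figures are computed, here is a minimal sketch with toy, hypothetical data (not the study's data):

```python
def cad_performance(marks, truths, n_images):
    """Per-lesion true-positive rate and false positives per image.

    `truths` is a set of (image_id, lesion_id) pairs for true lesions;
    `marks` is a list of (image_id, lesion_id) pairs for detections,
    where a mark that matches no true lesion is a false positive."""
    hits = {m for m in marks if m in truths}
    false_pos = [m for m in marks if m not in truths]
    tp_rate = len(hits) / len(truths)
    fp_per_image = len(false_pos) / n_images
    return tp_rate, fp_per_image

# Toy example: 4 true clusters across 4 images; the detector marks
# 3 of them plus 2 spurious locations.
truths = {(1, "a"), (2, "a"), (3, "a"), (4, "a")}
marks = [(1, "a"), (2, "a"), (3, "a"), (1, "x"), (4, "z")]
tp_rate, fp_per_image = cad_performance(marks, truths, n_images=4)
```

Real evaluations additionally need a spatial matching rule (eg, a mark counts as a hit if it falls within some distance of a true lesion), which is abstracted away here by the shared lesion identifiers.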
As radiologists, be aware that CAD is not perfect, and learn how to use CAD as a supplement to perceptual search strategies. Do not eliminate perceptual search of images, and do not rely on CAD to detect 100% of lesions. Continue to search the entire image to better decide if a suggestive region indicated by CAD is a true lesion or a false positive.
In general, CAD will probably help most radiologists to an extent for certain types of images. Those radiologists with more experience are less likely to benefit from CAD than radiologists with less experience in terms of diagnostic performance, but CAD prompts (when accuracy is high, without excessive false-positive results) may help in other ways, such as improving workflow. This may be especially true in such areas as CAD for lung CT imaging, in which a significant number of images must be viewed, potentially leading to increased distraction or inattention on the part of the radiologist. CAD in this situation may help find potential lesions and may also help focus the radiologist's search.
One study also used eye-position recordings to study perceptual strategies of experienced mammographers versus residents reading mammograms with and without CAD information.  Significant differences based on the level of expertise were found. Experienced mammographers spent more time (104 seconds) doing a more thorough search of the images before they accessed CAD prompts than did residents (86 seconds). During the search without CAD, mammographers also fixated more of the lesions than residents; therefore, when the experienced mammographers accessed the CAD pointers, they used them more to confirm suspicions about potential lesions. This hypothesis was supported in interviews with the readers after the study.
The experienced mammographers in the study noted that in at least 95% of patients in whom they had detected the lesion prior to CAD, they merely glanced at the lesion with CAD to make sure the CAD was pointing to the same lesion and location. The rest of the time, they looked a little longer because the CAD prompt was not always pointing at the center of the lesion; thus, the location had to be verified more carefully. Residents appeared to use CAD to guide them to an initial inspection of potential lesions.
With this strategy, less experienced readers may not be as likely as more experienced readers to discover lesions that the CAD system did not detect. This also was confirmed after the study was completed. The residents tended to state that they could not detect many lesions at first glance or were very unsure of the lesions they did detect. They also said that they tended to wait for the CAD information because searching without it took too long and because there were too many confusing structures to deal with without CAD's help. However, this may not have been a good strategy.
The study also showed that reading strategy affected how much CAD helped radiologists determine whether lesions were present. For the mammographers, 50% of lesions missed without CAD were detected and reported with CAD. The original number of false negatives before CAD was higher for residents than for the experienced mammographers, and only 33% of the missed lesions were correctly detected and reported with CAD. Thus, although the residents decided to wait for the CAD information, it helped them with only approximately one third of the missed lesions.
CAD can help identify a lesion, but without the experience needed to interpret what is seen, residents do not benefit as greatly from CAD as experienced mammographers, who are better prepared to interpret CAD findings. These and similar results may have significant implications for CAD's use in the clinical environment. Residents or radiologists who are not expert in mammography may require explicit instruction to conduct thorough searches of images before using CAD.
Other viewing aids
In addition to CAD, digital displays provide the radiologist with viewing aids that are not available with film. For example, general image processing (eg, window-level operations, high-pass filters, low-pass filters) typically is standard on most digital-display workstations.
Some types of image processing do not appear to improve diagnostic accuracy, while others do. For example, one study found that radiologists' decisions were equally likely to change from false negative to true positive as from true positive to false negative when image processing was used. The conclusion is that, at least for the types of images and image-processing functions used in the study, image processing did not significantly affect diagnostic performance.
Other types of viewing aids with digital displays that may be more helpful to the radiologist include 3-dimensional (3-D) displays (especially with CT scanning, MRI, ultrasound [US]) and color. Traditionally, film-based radiographic images have been displayed only in gray scale, with a dye (typically blue) as the single color added to the film base to reduce eye strain. With digital images, color has been used occasionally and may become accepted more widely in the future.
Currently, the most successful application of color to radiographic images may be in Doppler US for tracking flow information. In a single image, the radiologist can view both anatomy and function. A similar technique has been used in MRI, CT scanning, and nuclear medicine imaging, especially with image registration that compares images taken at different times or in 2 modalities and with 3-D rendering of image data. [43, 44, 45, 46]
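The color-overlay idea described above can be sketched as an alpha blend of a grayscale anatomic image with a pseudocolored functional map. The colormap, alpha, and threshold below are illustrative choices, not a published clinical method:

```python
def overlay(gray, func, alpha=0.4, threshold=0.2):
    """Alpha-blend a color-coded functional map onto a grayscale
    anatomic image. Both inputs are 2-D lists of values in [0, 1];
    functional values below `threshold` leave the anatomy untouched.
    Returns a 2-D list of (r, g, b) tuples."""
    out = []
    for g_row, f_row in zip(gray, func):
        row = []
        for g, f in zip(g_row, f_row):
            if f < threshold:
                row.append((g, g, g))  # pure anatomy, shown as gray
            else:
                # Simple hot-style pseudocolor for the functional value.
                r_c, g_c = f, f * 0.25
                row.append((
                    (1 - alpha) * g + alpha * r_c,
                    (1 - alpha) * g + alpha * g_c,
                    (1 - alpha) * g + alpha * 0.0,
                ))
        out.append(row)
    return out

# Tiny example: one row, anatomy only on the left, strong signal on the right.
fused = overlay([[0.5, 0.5]], [[0.0, 1.0]])
```

Thresholding keeps the anatomy legible where function is absent, which is the same design principle used in color Doppler and fused PET/CT displays.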
Whether color applications will be used on a regular basis, especially with 2-D computer radiography images, remains to be seen. However, keep in mind that if color displays are used, a new set of standards will be required for quality control and calibration of color monitors to maintain image fidelity among monitors and over time.
As we move further into the 21st century and radiology completes the transition to a filmless practice, image perception and observer performance concerns will not decrease. In fact, the importance of medical image perception research may increase. Technology will present better and different images to the radiologist. How will radiologists deal perceptually with advances such as compressed images, color addition, and computer-aided prompts superimposed over images?
Currently, these new methods of presenting images and image information are under investigation. Soon, radiologists will be asked to view these new types of images. Image perception research will help guide basic research, show how to present this new information to radiologists, and show how to improve diagnostic performance by improving perception of images. Perhaps most importantly, continuous feedback and opinion from radiologists concerning the PACS and teleradiology systems used in clinical practice will be required.