
Screening and Diagnostic Tests

  • Author: Elizabeth Abram, MD; Chief Editor: Robert E O'Connor, MD, MPH
 
Updated: Dec 01, 2015
 

Overview

Diagnostic tests help physicians revise disease probability for their patients. All tests should be ordered by the physician to answer a specific question. The 5 main reasons for a diagnostic test are as follows:

  • Establish a diagnosis in symptomatic patients. For example, an ECG to diagnose ST-elevation myocardial infarction (STEMI) in patients with chest pain.
  • Screen for disease in asymptomatic patients. For example, a prostate-specific antigen (PSA) test in men older than 50 years.
  • Provide prognostic information in patients with established disease. For example, a CD4 count in patients with HIV.
  • Monitor therapy by either benefits or side effects. For example, measuring the international normalized ratio (INR) in patients taking warfarin.
  • Confirm that a patient is free of a disease. For example, a pregnancy test to exclude the diagnosis of ectopic pregnancy.

The criterion (reference) standard test definitively decides either presence or absence of a disease. Examples of criterion standard tests include pathological specimens for malignancies and pulmonary angiography for pulmonary embolism. However, criterion standard tests routinely come with drawbacks; they are usually expensive, less widely available, more invasive, and riskier. These issues usually compel most physicians to choose other diagnostic tests as surrogates for their criterion standard test.

For example, venography, the criterion standard for vein thrombosis, is an invasive procedure with significant complications including renal failure, allergic reaction, and clot formation. These risks make venography less desirable than the alternative diagnostic test—venous duplex ultrasonography. The price most diagnostic tests pay for their ease of use compared with their criterion standard is a decrease in accuracy. How to account for this trade-off between diagnostic accuracy and patient acceptability is the subject of this article.

Patient Diagnostic and Screening Decisions

When more than one diagnostic or screening option exists, no option has a clear advantage, and patients may value the risk-benefit profiles differently, decision aids such as pamphlets, videos, or Web-based tools can help patients make informed choices.

A Cochrane review of decision aids for patients facing treatment or screening decisions found that the use of these aids improved knowledge of the options and helped patients have more accurate expectations of possible benefits and harms, reach choices that are more consistent with informed values, and participate in decision making with health practitioners.[1] Smaller improvements were seen with the use of more detailed decision aids compared to simpler decision aids. The use of decision aids had no apparent adverse effects on health outcomes or satisfaction.[2]


Pretest and Posttest Probability

Every clinical encounter begins with an initial clinical impression, a subjective pretest probability of disease. The ultimate goal of all diagnostic testing is to refine this pretest probability to the point where the physician can confidently make a treat or no-treat decision. Each diagnostic test, whether it is a symptom, sign, laboratory value, or radiologic examination, results in a change in the physician's probability of disease, the posttest probability. The degree to which a diagnostic test increases or decreases the probability of disease from pretest to posttest represents the clinical utility of the test, as measured by its operating characteristics.

A prospective observational multicenter study sought to determine whether the threshold of the quantitative D-dimer can be varied according to pretest probability of pulmonary embolism to increase specificity while maintaining a negative predictive value greater than 99%.[3] Eligible patients were those for whom a diagnostic study was ordered to evaluate possible pulmonary embolism. Pretest probability was determined by the clinician's unstructured estimate and by the Wells score. Threshold measurements were obtained from 5 different D-dimer assays. A quantitative D-dimer test was ordered for 4357 of the 7940 eligible patients evaluated for pulmonary embolism.

Across all pretest probability strata, at standard cutoffs, quantitative D-dimer testing had a sensitivity of 94% (95% confidence interval [CI], 91-97%), specificity of 58% (95% CI, 56-60%), and negative predictive value of 99.5% (95% CI, 99.1-99.7%). With variable cutoffs, the overall sensitivity would have been 88% (95% CI, 83-92%), specificity 75% (95% CI, 74-76%), and negative predictive value 99.1% (95% CI, 98.7-99.4%). The authors concluded that emergency medicine clinicians currently order a D-dimer for most patients tested for pulmonary embolism, including a large proportion with intermediate or high pretest probability. Varying the D-dimer cutoff according to pretest probability can increase specificity with no measurable decrease in negative predictive value.[4, 5]


Definitions and Calculations

Clinical studies of diagnostic tests measure the accuracy of the test against its criterion standard.

Table 1. Criterion Standard

Criterion Standard Test
  Disease (+) Disease (-)
Test (+) True positive (TP) False positive (FP)
Test (-) False negative (FN) True negative (TN)

Table 2. Definition of Terms

Term Calculation Plain English
True positive (TP) Counts in 2 X 2 table # Patients with the disease who have a positive test result
True negative (TN) Counts in 2 X 2 table # Patients without the disease who have a negative test result
False positive (FP) Counts in 2 X 2 table # Patients without the disease who have a positive test result
False negative (FN) Counts in 2 X 2 table # Patients with the disease who have a negative test result
Sensitivity = True positive rate (TPR) TP / (TP + FN) The probability that a patient with the disease will have a positive test result
1 - Sensitivity = False-negative rate (FNR) FN / (TP + FN) The probability that a patient with the disease will have a negative test result
Specificity = True negative rate (TNR) TN / (TN + FP) The probability that a patient without the disease will have a negative test result
1 - Specificity = False-positive rate (FPR) FP / (TN + FP) The probability that a patient without the disease will have a positive test result
Positive predictive value TP / (TP + FP) The probability that a patient with a positive test result will have the disease
Negative predictive value TN / (TN + FN) The probability that a patient with a negative test result will not have the disease.
Accuracy (TP + TN) / (TP + TN + FP + FN) The probability that the results of a test will accurately predict presence or absence of disease
Bayes’ theorem Posttest Odds = Pretest Odds X Likelihood Ratio The odds of having or not having the disease after testing
Likelihood ratio of a positive test result (LR+) Sensitivity / (1 - Specificity) The increase in the odds of having the disease after a positive test result
Likelihood ratio of a negative test result (LR-) (1 - Sensitivity) / Specificity The decrease in the odds of having the disease after a negative test result
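The formulas in Table 2 can be sketched directly in code. The following Python function is a minimal illustration; the 2 x 2 counts are hypothetical, chosen so that the resulting sensitivity (90%) and specificity (95%) match the CTA VN example used later in this article.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Operating characteristics of a diagnostic test from its 2 x 2 table."""
    sensitivity = tp / (tp + fn)                  # true-positive rate
    specificity = tn / (tn + fp)                  # true-negative rate
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "ppv": tp / (tp + fp),                    # positive predictive value
        "npv": tn / (tn + fn),                    # negative predictive value
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "lr_pos": sensitivity / (1 - specificity),  # likelihood ratio, positive result
        "lr_neg": (1 - sensitivity) / specificity,  # likelihood ratio, negative result
    }

# Hypothetical counts: 90 TP, 10 FN, 50 FP, 950 TN
m = diagnostic_metrics(tp=90, fp=50, fn=10, tn=950)
print(f"sensitivity={m['sensitivity']:.2f}, specificity={m['specificity']:.2f}, LR+={m['lr_pos']:.1f}")
```

Note that sensitivity and PPV are computed from different margins of the same table (columns vs rows), which is why they can differ so sharply, as discussed in the sections that follow.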

 


Sensitivity and Specificity

Different diagnostic tests for the same disease often trade sensitivity for specificity or vice versa. In general, the more sensitive a test is for a disease, the higher its false-positive rate, lowering its specificity. A test with higher specificity will usually sacrifice sensitivity through an increased false-negative rate. This makes a highly sensitive test ideal for screening, whereas highly specific tests are best in a confirmatory role.

Sensitivity and specificity are calculated vertically in a 2 X 2 table. Sensitivity is measured in patients definitively diagnosed with the disease, whereas specificity is only a function of those free of disease. Sensitivity contains no information about false-positive results, and specificity does not account for false-negative results. This limits the applicability of sensitivity and specificity in predicting disease when the physician is uncertain about the diagnosis. For example, a positive test result with 90% sensitivity does not predict a 90% probability of disease in a patient.

The mnemonics SnOut and SpIn provide guidelines on how to interpret sensitivity and specificity for an individual patient. SnOut helps physicians remember that a highly Sensitive test with a Negative result is good at ruling Out the disease. SpIn reminds physicians that a highly Specific test with a Positive result is good at ruling In the disease.


Predictive Values

To estimate the posttest probability for an individual patient, another statistic is needed. Predictive values are horizontally calculated operating characteristics that incorporate both false-positive and false-negative results into disease probability. The positive predictive value (PPV) is the probability that a patient actually has the disease if the test result is positive. The probability that the patient is free of the disease after a negative test result is given by the negative predictive value (NPV).

Unfortunately, predictive values are not stable characteristics of diagnostic tests. The predictive values are dependent on the prevalence of disease among the population of interest. The same diagnostic test will have varying predictive values in different populations. Without knowing the disease prevalence in the population of interest, predictive values cannot be accurately estimated.

For example, chest CT angiography with venous runoff (CTA VN) has a sensitivity of 90% and specificity of 95%. In a patient with a high probability (78.4%) of a pulmonary embolism according to the Wells’ criteria, the CTA VN would produce a PPV of 99% and NPV of 72%. The same test given to a patient with a much lower pretest probability of pulmonary embolism (3.4%) would result in a PPV of 39% and NPV of 99%.
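The prevalence dependence of predictive values can be verified directly from the sensitivity and specificity. A minimal Python sketch using the CTA VN figures above (the article rounds the results to whole percentages, 99%/72% and 39%/99%):

```python
def predictive_values(sensitivity, specificity, prevalence):
    """PPV and NPV of a test at a given disease prevalence (pretest probability)."""
    p, q = prevalence, 1 - prevalence
    ppv = sensitivity * p / (sensitivity * p + (1 - specificity) * q)
    npv = specificity * q / (specificity * q + (1 - sensitivity) * p)
    return ppv, npv

# Same test (sensitivity 90%, specificity 95%) at two Wells-based pretest probabilities
for prev in (0.784, 0.034):
    ppv, npv = predictive_values(0.90, 0.95, prev)
    print(f"pretest probability {prev:.1%}: PPV={ppv:.1%}, NPV={npv:.1%}")
```

The test itself is unchanged between the two runs; only the prevalence differs, which is exactly why predictive values cannot be quoted as fixed properties of a test.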


Bayes' Theorem and Likelihood Ratios

Bayes' Theorem

Adapting a theory of conditional probability from the 18th-century statistician Thomas Bayes solves the problem of calculating posttest disease probability. This theory allows pretest probability to be separated from a term that describes the strength of the diagnostic test: the likelihood ratio.[6, 7]

Bayes' theorem: Posttest Odds = Pretest Odds X Likelihood Ratio

Likelihood Ratio

Likelihood ratios are ratios of probabilities.[8] The likelihood ratio for a positive test result (LR+) is the true-positive rate (sensitivity) divided by the false-positive rate (1 - specificity). LR+ can thus be thought of as how much more likely the patient is to actually have the disease after a positive test result.

Dividing the false-negative rate (1 - sensitivity) by the true-negative rate (specificity) gives the likelihood ratio for a negative test result (LR-), which quantifies the strength of a negative result in convincing the physician that the patient is free of disease.

Because likelihood ratios are calculated from sensitivity and specificity, they are stable operating characteristics, unaffected by the prevalence of disease.

An LR of 1.0 indicates a useless test, because the result fails to change the probability of disease from pretest to posttest. LR+ values are greater than 1.0; the larger the number, the more likely the patient is to have the disease after a positive test result. LR- values are less than 1.0, with smaller numbers signifying a lower probability of disease than the pretest estimate.

Table 3. Strength of the Test by Likelihood Ratio

Qualitative Strength LR(+) LR(-)
Excellent 10 0.1
Very good 6 0.2
Fair 2 0.5
Useless 1 1
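Table 3 lists anchor values rather than explicit ranges. Treating those anchors as thresholds (an interpretive assumption on our part, not something the table states) gives a small classifier; note that the LR- anchors are not exact reciprocals of the LR+ anchors, so the two scales are handled separately:

```python
def lr_strength(lr):
    """Qualitative strength of a likelihood ratio, using Table 3's anchor
    values as band thresholds (an assumption; the table gives point values)."""
    if lr >= 1:  # likelihood ratio for a positive test result
        if lr >= 10:
            return "excellent"
        if lr >= 6:
            return "very good"
        if lr >= 2:
            return "fair"
        return "useless"
    # likelihood ratio for a negative test result
    if lr <= 0.1:
        return "excellent"
    if lr <= 0.2:
        return "very good"
    if lr <= 0.5:
        return "fair"
    return "useless"

print(lr_strength(18.0))   # LR+ of the CTA VN example -> "excellent"
print(lr_strength(0.105))  # its LR- -> "very good"
```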

 


Using Bayes' Theorem

Bayes' theorem: Posttest Odds = Pretest Odds X Likelihood Ratio

This form of Bayes' theorem requires converting the pretest probability to odds, multiplying by the appropriate LR, and then converting the resulting posttest odds back into a posttest probability.

Example: What is the probability of a pulmonary embolism in a patient after a positive CTA VN (sensitivity 90%, specificity 95%) if the patient has a pretest probability of 28%?

Table 4. Bayes' theorem: Posttest Odds = Pretest Odds X Likelihood Ratio

Steps Calculations
1. Convert pretest probability to odds. Odds = Probability / (1 - Probability); Pretest odds = 0.28 / (1 - 0.28) = 0.389
2. Calculate LR+. LR+ = Sensitivity / (1 - Specificity); LR+ = 0.90 / (1 - 0.95) = 18.0
3. Apply Bayes' theorem. Posttest Odds = Pretest Odds X LR; Posttest odds = 0.389 X 18 = 7.0
4. Convert posttest odds to probability. Probability = Odds / (1 + Odds); Posttest probability = 7 / (1 + 7) = 87.5%
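The four steps in Table 4 translate directly into code. A minimal Python sketch, using the same CTA VN example (sensitivity 90%, specificity 95%, pretest probability 28%):

```python
def posttest_probability(pretest_prob, sensitivity, specificity, positive=True):
    """Posttest probability of disease via Bayes' theorem in odds form."""
    pretest_odds = pretest_prob / (1 - pretest_prob)   # step 1: probability -> odds
    if positive:
        lr = sensitivity / (1 - specificity)           # step 2: LR+ for a positive result
    else:
        lr = (1 - sensitivity) / specificity           # LR- for a negative result
    posttest_odds = pretest_odds * lr                  # step 3: Bayes' theorem
    return posttest_odds / (1 + posttest_odds)         # step 4: odds -> probability

p = posttest_probability(0.28, sensitivity=0.90, specificity=0.95)
print(f"{p:.1%}")  # 87.5%, matching Table 4
```

Passing positive=False runs the same steps with LR- instead, showing how a negative CTA VN result would drive the probability well below the 28% pretest estimate.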

 

This method requires multiple steps and is inconvenient for bedside use. In 1975, Fagan published a nomogram for the graphical calculation of Bayes' theorem.[9] The nomogram (see image below) requires only drawing a straight line from the patient's pretest probability through the appropriate LR to the posttest probability.

Fagan nomogram.
 
Contributor Information and Disclosures
Author

Elizabeth Abram, MD Resident Physician, Department of Emergency Medicine, Kings County Hospital, State University of New York Downstate Medical Center

Elizabeth Abram, MD is a member of the following medical societies: American Medical Association, American Medical Student Association/Foundation, Medical Society of the State of New York, Women in Medicine

Disclosure: Nothing to disclose.

Coauthor(s)

Walter W Valesky, Jr, MD Clinical Assistant Instructor, Department of Emergency Medicine, Kings County Hospital, State University of New York Downstate Medical Center

Disclosure: Nothing to disclose.

Specialty Editor Board

Francisco Talavera, PharmD, PhD Adjunct Assistant Professor, University of Nebraska Medical Center College of Pharmacy; Editor-in-Chief, Medscape Drug Reference

Disclosure: Received salary from Medscape for employment.

Jon Mark Hirshon, MD, MPH, PhD Professor, Department of Emergency Medicine, University of Maryland School of Medicine

Jon Mark Hirshon, MD, MPH, PhD is a member of the following medical societies: Alpha Omega Alpha, American Academy of Emergency Medicine, American College of Emergency Physicians, American Public Health Association, Society for Academic Emergency Medicine

Disclosure: Nothing to disclose.

Chief Editor

Robert E O'Connor, MD, MPH Professor and Chair, Department of Emergency Medicine, University of Virginia Health System

Robert E O'Connor, MD, MPH is a member of the following medical societies: American Academy of Emergency Medicine, American College of Emergency Physicians, American Association for Physician Leadership, American Heart Association, Medical Society of Delaware, Society for Academic Emergency Medicine, Wilderness Medical Society, American Medical Association, National Association of EMS Physicians

Disclosure: Nothing to disclose.

Acknowledgements

Jyoti Elavunkal, MD Staff Physician, Department of Emergency Medicine, State University of New York Downstate Medical Center, Brooklyn, Kings County Hospital Center

Jyoti Elavunkal, MD is a member of the following medical societies: American Medical Association, American Medical Student Association/Foundation, and Society for Academic Emergency Medicine

Disclosure: Nothing to disclose.

Theodore J Gaeta, DO, MPH, FACEP Clinical Associate Professor, Department of Emergency Medicine, Weill Cornell Medical College; Vice Chairman and Program Director of Emergency Medicine Residency Program, Department of Emergency Medicine, New York Methodist Hospital; Academic Chair, Adjunct Professor, Department of Emergency Medicine, St George's University School of Medicine

Theodore J Gaeta, DO, MPH, FACEP is a member of the following medical societies: Alliance for Clinical Education, American College of Emergency Physicians, Clerkship Directors in Emergency Medicine, Council of Emergency Medicine Residency Directors, New York Academy of Medicine, and Society for Academic Emergency Medicine

Disclosure: Nothing to disclose.

Richard H Sinert, DO Professor of Emergency Medicine, Clinical Assistant Professor of Medicine, Research Director, State University of New York College of Medicine; Consulting Staff, Department of Emergency Medicine, Kings County Hospital Center

Richard H Sinert, DO is a member of the following medical societies: American College of Physicians and Society for Academic Emergency Medicine

Disclosure: Nothing to disclose.

References
  1. Stacey D, Bennett CL, Barry MJ, Col NF, Eden KB, Holmes-Rovner M, et al. Decision aids for people facing health treatment or screening decisions. Cochrane Database Syst Rev. 2011 Oct 5. CD001431. [Medline].

  2. Vollert J, Mainka T, Baron R, Enax-Krumova EK, Hüllemann P, Maier C, et al. Quality assurance for Quantitative Sensory Testing laboratories: development and validation of an automated evaluation tool for the analysis of declared healthy samples. Pain. 2015 Dec. 156 (12):2423-30. [Medline].

  3. Kabrhel C, Mark Courtney D, Camargo CA Jr, Moore CL, Richman PB, Plewa MC, et al. Potential impact of adjusting the threshold of the quantitative D-dimer based on pretest probability of acute pulmonary embolism. Acad Emerg Med. 2009 Apr. 16(4):325-32. [Medline].

  4. Szturmowicz M, Kacprzak A, Wyrostkiewicz D, Lewandowska K, Jędrych M, Bartoszuk I, et al. Non-high risk PE in the patients with acute or exacerbated respiratory disease: the value of the algorithm based on D-dimer evaluation and Revised Geneva Score. Pneumonol Alergol Pol. 2015. 83 (6):445-52. [Medline].

  5. Marquardt U, Apau D. Point-of-care D-dimer testing in emergency departments. Emerg Nurse. 2015 Sep. 23 (5):29-35. [Medline].

  6. Fernando RL, Garrick D. Bayesian Methods Applied to GWAS. Methods Mol Biol. 2013. 1019:237-74. [Medline].

  7. Bois FY. Bayesian inference. Methods Mol Biol. 2013. 930:597-636. [Medline].

  8. Akobeng AK. Understanding diagnostic tests 2: likelihood ratios, pre- and post-test probabilities and their use in clinical practice. Acta Paediatr. 2007 Apr. 96(4):487-91. [Medline].

  9. Fagan TJ. Letter: Nomogram for Bayes theorem. N Engl J Med. 1975 Jul 31. 293(5):257. [Medline].

  10. Iannaccone R, Laghi A, Catalano C, Mangiapane F, Lamazza A, Schillaci A, et al. Computed tomographic colonography without cathartic preparation for the detection of colorectal polyps. Gastroenterology. 2004 Nov. 127(5):1300-11. [Medline].

  11. Mayer D. Essential Evidence-Based Medicine. United Kingdom: Cambridge University Press; 2004. Chap 20-26.

  12. Stein PD, Fowler SE, Goodman LR, Gottschalk A, Hales CA, Hull RD, et al. Multidetector computed tomography for acute pulmonary embolism. N Engl J Med. 2006 Jun 1. 354(22):2317-27. [Medline].

  13. Wells PS, Anderson DR, Rodger M, Ginsberg JS, Kearon C, Gent M, et al. Derivation of a simple clinical model to categorize patients probability of pulmonary embolism: increasing the models utility with the SimpliRED D-dimer. Thromb Haemost. 2000 Mar. 83(3):416-20. [Medline].

 
All material on this website is protected by copyright, Copyright © 1994-2016 by WebMD LLC. This website also contains material copyrighted by 3rd parties.