This work was presented at the November 1994 STFM Patient Education conference, Orlando, FL.
Robert M. Hamm, PhD, Associate Professor
Director, Clinical Decision Making Program
Dept. of Family and Preventive Medicine,
U of Oklahoma Health Sciences Center
900 NE 10th St., Oklahoma City OK 73104
405/271-8000 x 32302 Fax 405/271-2784
Reprint requests to Hamm.
Methods. To measure patients' understanding of the uncertainty of diagnostic test results, questionnaires describing diseases were given to patients in clinic waiting rooms. For each of six diseases, a two-page questionnaire (1) presented a case history of a disease and a diagnostic test; (2) asked the respondent to estimate (a) the probability that the case patient has the suspected disease, (b) the sensitivity of the test, (c) the specificity of the test, and (d) the probability of disease given a positive test; and (3) asked whether the patient or a close friend or family member had ever been thought to have this disease.
Results. In the clinic waiting rooms, 184 patients responded for at least one disease. Although patients judged the disease probabilities to be higher after a positive diagnostic test, each of their four judgments was essentially the same for all diseases, including those with high and low prior probabilities, and with accurate and inaccurate tests. Past experience with the disease was associated with only a minimal increase in the accuracy of patient knowledge.
Conclusions. Patient ignorance of the uncertainties of diseases
they might encounter demonstrates the need for patient education when a
disease is suspected. Lack of relation between accuracy and experience
suggests this need is not being effectively met.
Key words: Probability, Physician-Patient Relations, Patient
Education, Diagnosis, Sensitivity and Specificity
Diagnostic uncertainty creates uncertainty about treatment and prognosis. Physicians have adopted a variety of approaches for discussing these uncertainties with patients, ranging from providing explicit numerical or verbal probabilities to denying the uncertainty.1 Patients have various preferences for how their physicians should discuss these uncertainties.2-6
Physicians generally endorse the need for informed consent and truth telling with regard to diagnostic uncertainty.7 Many of them would communicate with probabilities -- prevalences of disease, probability the patient has a disease given the clinical presentation, sensitivity and specificity of a test, probability the patient has a disease after a positive or a negative test result -- if they believed the probabilities were well-founded and the patients would understand. It is not known, however, whether patients understand information about diagnostic uncertainties, presented in terms of probabilities. Presentation format may confuse patients.8 For these reasons, it is not clear how to talk with patients about diagnostic uncertainty.
Patient interpretations of diagnostic uncertainty have rarely been studied. For example, the concept was not mentioned in a recent review of physician-patient communication.9 Therefore this study seeks to describe what patients know, without talking with their physicians, about the test characteristics of diagnostic procedures that may be applied when a disease is suspected.
What do patients understand of the diagnostic reasoning about diseases
they might experience? This understanding is what the physician has to
work with when starting to explain diagnostic uncertainty. If the patient's
initial understanding is accurate, the physician can build on it; if inaccurate,
it must be corrected before the physician can effectively communicate about
diagnostic strategies for the disease. Does patients' experience suspecting
a disease affect their knowledge of diagnostic uncertainties for that disease?
How do patients combine their own estimates of disease probability and
diagnostic test characteristics to estimate the probability of the disease
after learning a test result?
Patients waiting in primary care clinics during a 2-week period early in 1994 were asked to fill out 2-page questionnaires containing vignettes (see Table 1) describing a patient "such as yourself" who presented with a problem that suggested a disease (e.g., small bowel obstruction), and received a test (e.g., abdominal x-ray). Parallel questionnaires were prepared for six different diseases, selected for their varying test characteristics. The study was done in two clinics. In a Colorado clinic serving both active duty and retired Army personnel and their families, patients were approached at the convenience of clinic staff and each patient was asked about all six diseases. In an Oklahoma clinic serving primarily a low-income urban population, each patient was asked about only one disease, and the questionnaire was handed out along with a patient satisfaction questionnaire. The staff tried to get all patients to respond, but the patient satisfaction form had priority. Assignment of questionnaire to patient was random because the stack of vignettes was shuffled. No information is available about the proportion of patients in either clinic who refused to participate or who were unable to complete the questionnaire.
Table 1. Small Bowel Obstruction Vignette (one of six used in study)
A person such as yourself might experience abdominal pain, a distended abdomen, constipation and vomiting. It would be natural to go to the doctor's office, worried about these symptoms. The doctor might suspect the patient has an obstruction of the small bowel, a condition that needs a surgical operation if it doesn't get better on its own soon. To check whether a patient has a small bowel obstruction, the doctor might have the patient undergo an X-ray of the abdominal region, a procedure that takes some time and costs a moderate amount, and involves exposing the patient to a small amount of radiation.
The following questions ask you how common small bowel obstructions are and how accurate the abdominal X-ray is in determining whether or not a person has a small bowel obstruction. Answer to the best of your knowledge. Read the question over if you are not sure.
1. Let's think about 100 people with abdominal pain, a distended abdomen, constipation and vomiting, where the doctor suspects they might have a small bowel obstruction. How many of these people would actually have a small bowel obstruction?
0% 2% 5% 10% 20% 35% 50% 65% 80% 90% 95% 98% 100%
2. Now let's think about 100 people who actually have a small bowel obstruction, and whose doctor gets them an abdominal X-ray. For how many of these people would the X-ray say that they have a small bowel obstruction?
0% 2% 5% 10% 20% 35% 50% 65% 80% 90% 95% 98% 100%
3. Next consider 100 people who have the symptoms described, but NOT due to an obstruction of the small bowel. For how many of these people would the abdominal X-ray say, correctly, that they DO NOT have a small bowel obstruction?
0% 2% 5% 10% 20% 35% 50% 65% 80% 90% 95% 98% 100%
4. Finally, consider 100 people who have the symptoms described and
their abdominal X-ray says that they have a small bowel obstruction. How
many of these people would really have a small bowel obstruction?
0% 2% 5% 10% 20% 35% 50% 65% 80% 90% 95% 98% 100%
Using non-technical language, the patient was asked to judge the probability of disease prior to the test, test sensitivity and specificity, and disease probability if the test result is positive. Responses were on a 13-level 0% to 100% scale, with the intervals more closely spaced near the ends. The questions asked how many of 100 people would have the specified outcome, because we expected patients might have difficulty understanding a question about "probability."10
Six vignettes were prepared: small bowel obstruction, pulmonary embolus, strep throat, HIV, herniated lumbar disk, and acute MI. The set includes both common and uncommon diseases. The sensitivity and specificity of their standard diagnostic tests11 vary from about .50 to .99 (Table 2). Three physicians reviewed the vignettes for realism.
Table 2. Diagnostic probabilities (test characteristics and pre-test disease probabilities) for the six disease vignettes, compared with mean patient estimates.

A. Baseline: Probability Prior to Test*
B. Patients' Mean Estimated Probability Prior to Test**

Disease                        Test                  A      B
Small Bowel Obstruction        Abdominal X-ray14     --     --
Pulmonary Embolus              Arterial PO215, 16    --     --
Strep Throat                   Throat culture17      .11    .37
Human Immunodeficiency Virus   ELISA blood test18    .085   .39
Herniated Lumbar Disk          CT scan19             .05    .41
Acute Myocardial Infarction    EKG20                 .21    .39
Patients were asked their age, sex, and experience with the disease: whether they had ever had a test for the disease, and if yes whether they had been treated for the disease; whether they had ever been to the doctor with a member of their family or with a friend who had a test for the disease, and if yes whether that person was treated for the disease. Respecting privacy, we dropped questions about personal experience with HIV tests.
Accuracy of patients' judgments was evaluated with reference to objective
criteria. Sensitivity and specificity were derived from pre-1994 research
(Table 2). The probability of disease upon presentation, before the test,
was the mean of three primary care physicians' judgments based on the vignette's
case description. The probability of disease given a positive test (Column
D, Table 3) was derived by applying Bayes' theorem to the physicians' judged
prior probability and the sensitivity and specificity found in the medical
literature.12 In the formula below, "P(Disease)" means the probability
the patient has the disease, prior to a test, and "P(Disease|Positive_Test)"
means the disease probability after a positive test result:

P(Disease|Positive_Test) = [Sensitivity × P(Disease)] / [Sensitivity × P(Disease) + (1 − Specificity) × (1 − P(Disease))]
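This Bayesian calculation can be sketched in a few lines of code. The sketch below is a minimal illustration; the probabilities plugged in are invented for the example, not taken from the study's data:

```python
def post_test_probability(prior, sensitivity, specificity):
    """P(Disease | Positive Test) by Bayes' theorem."""
    true_positive = sensitivity * prior               # P(T+ and Disease)
    false_positive = (1 - specificity) * (1 - prior)  # P(T+ and no Disease)
    return true_positive / (true_positive + false_positive)

# Illustrative numbers only (not the study's test characteristics):
# a low-prevalence condition (prior .05) with a fairly accurate test.
print(round(post_test_probability(0.05, 0.90, 0.94), 3))  # 0.441
```

Note how even a test with high sensitivity and specificity leaves the post-test probability below 50% when the prior probability is low.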
Table 3. Estimates of disease probabilities given positive test, for the six disease vignettes. Patient post-test probability judgment (A) compared with Bayes' Theorem applied to patient judgments (B), to a combination of patient judgments and objective data (C), and to objective data (D).

A. Mean Judged Probability of Disease after Positive Test
B. Mean calculated P(D|T+)* using patient's prior, sensitivity, and specificity
C. Mean calculated P(D|T+) using patient's prior, true sensitivity, and true specificity
D. P(Disease|Positive Test) using MDs' judged prior, true sensitivity, and true specificity

Disease                        Test                A     B    C    D
Small Bowel Obstruction        Abdominal X-ray     .80   --   --   --
Pulmonary Embolus              Arterial PO2        .78   --   --   --
Strep Throat                   Throat culture      .79   --   --   --
Human Immunodeficiency Virus   ELISA blood test    .73   --   --   --
Herniated Lumbar Disk          CT scan             .80   --   --   .12
Acute Myocardial Infarction    EKG                 .79   --   --   --
To measure patient accuracy, we used the absolute value of the difference between the patient's judgment and the criterion probability. Patient experience suspecting each disease was measured by summing the number of "yes" answers to the questions about whether the patient or a friend or family member had been tested or treated for the disease.
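Both measures are simple to compute. A minimal sketch, with variable names and example values of our own invention:

```python
def judgment_error(judged, criterion):
    # accuracy measured as the absolute deviation of the patient's
    # judged probability from the criterion probability
    return abs(judged - criterion)

def experience_score(answers):
    # number of "yes" answers to the tested/treated questions
    # (self tested, self treated, friend/family tested, friend/family treated)
    return sum(1 for a in answers if a == "yes")

# Hypothetical respondent: judged .80 where the criterion was .12,
# and answered "yes" to two of the four experience questions.
print(round(judgment_error(0.80, 0.12), 2))          # 0.68
print(experience_score(["yes", "no", "yes", "no"]))  # 2
```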
Patients' probability estimates for each vignette were compared with
the criterion answer using t-tests. Comparisons between vignettes, and
comparisons between accuracy and other variables (such as patient gender
and experience), were complicated by the fact that patients at the Army
retiree clinic did all the vignettes, while patients at the other clinic
did only one vignette. Although we did separate analyses for each clinic,
we report statistics only from one-way and multi-way ANOVAs that assume
all responses were from different people. The simplifying assumption of
this model might be expected to overestimate the statistical significance
of comparisons between vignettes. Alternative statistical tests that do
not inaccurately assume those responses independent were used for every
comparison, but for simplicity these were not reported when the results were the same.
A total of 184 patients responded, 145 in the Oklahoma clinic who did 1 vignette each, and 39 in the Colorado clinic who did 1 to 6 vignettes each (mean and median 4.9). It is not known how many patients declined to participate, or failed to complete and return the questionnaires. There were between 55 and 60 responses to each of the 6 vignettes. Of the 336 vignettes, 175 were completed by males, 160 by females, and one did not indicate gender. Ages ranged from 6 to 89 (mean 48.7). The three children under 15 years were assumed to be assisted by parents, whose age is not known.
Accuracy of patient judgments of disease probability and test characteristics.
Patients' mean estimates are given in Tables 2 and 3. Their judged probabilities after the positive diagnostic test (Column A, Table 3) are substantially higher than before the test (Column B, Table 2), for all diseases (all t-tests significant at p < .001), which is appropriate. However, comparison with the objective data (Column D of Table 3 and Column A of Table 2, respectively) shows that the patients' judgments are inaccurate.
More importantly, patients made the same judgments for all diseases.
This is true for pretest (one way ANOVA: F(5,265) = 0.84, p = .53) and
post-positive-test (ANOVA: F(5,265) = 0.33, p = .90) judgments of the probability
of the disease. A vivid illustration of these results is given in Figure
1, which shows an arrow for each of the disease vignettes. The arrow goes
from a point indicating the objective pretest and posttest probabilities,
to a point representing the patient's judgments of those probabilities.
For example, the tail of the "CT for back pain" arrow indicates that the
MD experts' judgment of the pretest probability is .05 and the calculated
probability after a positive test is .12, while the head of the arrow indicates
the patients on average thought the prior probability to be .41, and the
probability after a positive CT scan to be .80. While the objective probabilities
(the tails of the six arrows) vary widely, the patients' judgments (the
heads of the arrows) are clustered together. This indicates that patients
make no distinctions among the diseases when estimating their likelihoods,
both before and after learning a positive test result.
Figure 1. Comparison of patients' estimates of disease probability with objective estimates, before and after a positive diagnostic test result. Arrows connect the objective probability estimates to the mean patient estimates, for each disease. The point at the tail of each arrow represents the objective probability of the disease before (X-axis) and after (Y-axis) a positive test. The point at the head of the arrow represents the mean of the patients' judgments of those probabilities.
Although the diagnostic tests for the six disease vignettes have very different characteristics, there were no differences between vignettes in patients' sensitivity judgments (ANOVA: F(5,265) = 1.67, p = .15) or specificity judgments (ANOVA: F(5,265) = 0.81, p = .54). In addition, the patients judged test specificity to be no different from test sensitivity (each of the six within-vignette t-tests was nonsignificant).
Influence of patient experience with disease upon accuracy.
Patients with more experience made judgments that were more accurate (or less inaccurate), as indicated by a test for linear trend (on number of "yes" answers) within a one-way analysis of variance, for the judgments of specificity (F(1,297) = 7.07, p = .008) and of disease probability given a positive test result (F(1,299) = 5.33, p = .02). However, those with experience were still quite inaccurate. Experience had no effect on the accuracy of the sensitivity judgments (F(1,300) = 0.03, p = .87) or the pretest probabilities (F(1,301) = 0.47, p = .49). The above analysis treats all questionnaires as independent, though patients at one clinic did multiple questionnaires. Analyses of the data from the two clinics separately, using appropriate models for each clinic, showed the same results at the Oklahoma clinic but no relation at the Colorado clinic. Thus, there is only weak and inconsistent evidence that patients who have experience with a disease have better knowledge of the uncertainty of that disease and its diagnostic test.
Influence of patient demographic factors upon accuracy.
The accuracy of the patients' judgments was related (in multiple regression analyses) to some demographic factors. We mention only the statistically significant relations, which were found both in analyses of all patients together and in analyses which looked at each clinic separately. (Details of these comparisons are available from the first author.) Older respondents had more accurate (less inaccurate) sensitivity and specificity judgments. This also held when the respondents under age 16 were excluded from the analysis. The relation with age was independent of the relation with disease experience. Exploration of alternative models excluding and including age as a predictor eliminated the possibility that age might have somehow masked, or accounted for, the impact of experience with the disease. Males had less inaccurate sensitivity and specificity judgments. Those at the Colorado clinic (active duty or retired Army personnel or their families) tended to overestimate the probability of disease upon presentation more than the family medicine patients at the Oklahoma clinic, yet overestimated the sensitivity of the diagnostic test less than the family medicine patients.
Patient inference controlling for patient knowledge.
The post-test probability of disease is distinguished from the other three probabilities the patient estimated by the fact that it can be calculated from the others. Thus, a patient who has no basis for judging the pretest probability of disease and the test sensitivity and specificity could deduce the probability of disease after a positive test, once he or she has estimated the first three -- if he or she knew how to apply the Bayesian calculation. The patient's judgment can be evaluated in terms of consistency with his or her earlier judgments. To seek the source of error in the patients' judgments of post-test disease probabilities (Column A, Table 3), we have applied Bayes' Theorem to the pretest disease probability, sensitivity, and specificity in three ways (Columns B, C, and D of Table 3).13 Column B adjusts the patient's pretest probability using the patient's estimates of sensitivity and specificity. Column C adjusts the patient's pretest probability using the objective sensitivity and specificity. Column D adjusts our standard (the MDs' pretest probability) using the objective sensitivity and specificity.
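The three recombinations amount to running the same Bayesian update with different inputs. A minimal sketch, using made-up inputs for illustration (the study's actual values appear in Table 3):

```python
def bayes_post(prior, sens, spec):
    # P(Disease | Positive Test) by Bayes' theorem
    tp = sens * prior
    fp = (1 - spec) * (1 - prior)
    return tp / (tp + fp)

# Hypothetical inputs, for illustration only:
patient = {"prior": 0.40, "sens": 0.75, "spec": 0.75}    # one patient's estimates
objective = {"prior": 0.10, "sens": 0.90, "spec": 0.95}  # literature / MD values

col_b = bayes_post(patient["prior"], patient["sens"], patient["spec"])
col_c = bayes_post(patient["prior"], objective["sens"], objective["spec"])
col_d = bayes_post(objective["prior"], objective["sens"], objective["spec"])
print(round(col_b, 2), round(col_c, 2), round(col_d, 2))  # 0.67 0.92 0.67
```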
In each of the six vignettes the patient judged the posttest probability
to be higher than the probability calculated by applying Bayes' Theorem
to the patient's own judgments of pretest probability, sensitivity, and
specificity (Columns A and B). This judgment was significantly higher (t-test,
p < .05) for all but the small bowel obstruction vignette (t = 1.8,
df = 46, p = .08). There are 30 additional paired comparisons (5 more pairs,
for each of the six vignettes). Twenty-five of them are significant at
p = .05 or less, and two more have a p between .05 and .10.
This suggests that no one element of the patient's judgment is primarily
responsible for the inaccurate post-test probability estimates. Rather,
the inaccuracy of the patient's post-test probability judgments is produced
jointly by errors in the patient's judgments of sensitivity, specificity,
and the pretest probability, as well as by the way the patient combines them.
This study showed that patients have very inaccurate knowledge about the pre-test and post-test probabilities of six common diseases, and about the characteristics of typical tests used for those diseases. A recent assessment of patients' understanding of the effects of an intervention (breast cancer screening) reported a similar finding.21 This inaccuracy is not surprising, and may be completely appropriate -- it is the doctor's job, not the patient's, to appreciate disease differences in diagnostic uncertainty.
Patients do have reasonable generic expectations about disease probability and test accuracy. Their pretest probability for any suspected disease is less than half (mean 41.3%, median 35.0%) and after a positive test they think the probability is much higher (mean 78.2%, median 95.0%). Thus, they recognize that when a disease is suspected this does not mean one has it, and they think that after a positive test the probability increases greatly. They acknowledge the possibility that a diagnostic error might occur (sensitivity: mean 73.1%, median 90.0%; specificity: mean 72.6%, median 90.0%), but they don't know which diseases have accurate tests, and they think a false negative is as likely as a false positive.
Among the participants in the study were patients who had had experience suspecting each of the diseases. At only one of the two clinics, subjects with disease experience had significantly more accurate knowledge about test specificity and about disease probability after a positive test. However, this effect is very small, and patients with disease experience still made very inaccurate judgments. This implies either that their physicians had not explained the diagnostic uncertainty, that the patients had not understood the explanation, or that the patients had forgotten it.
The patients' intuitive judgment of the probability of the disease following a positive test result was larger than the post-test probability calculated by Bayes' Theorem using the patient's own estimates of prior disease probability, sensitivity, and specificity. This result was statistically significant for 5 of the six disease vignettes, and nearly significant (p < .10) for the sixth vignette. This means that the average patient overadjusts to the evidence, compared with the adjustment prescribed by Bayes' Theorem. This is consistent with some22-25 but not all26 observations of untrained subjects. However, physicians may have a better sense of the impact of test results. In a recent study, physicians' average estimate of the post-test disease probability was only slightly lower than the Bayesian extrapolation from their own pre-test probabilities and their own estimates of the test characteristics.13, 27
Limitations of the study.
Study weaknesses may limit the generalizability of the finding that patients have inaccurate knowledge of disease probabilities and test characteristics. We do not know the response rate or the demographic characteristics of the respondents in comparison with the typical patient at each clinic. If only patients who understood the questions completed the questionnaire, then the average patient may have even less knowledge of diagnostic probabilities than shown in the results.
Could patients know disease probabilities and test characteristics accurately, but be confused by the response scale? This is unlikely because the average answers are consistent with a belief that a positive test increases disease probability. This suggests the patients could use the response scale.
The mean of three clinicians' judgments was the standard for evaluating patients' judgments of pretest disease probability. The accuracy of this standard is not known. However, the patients on average judged all six diseases to have about the same pretest probability, so patient judgment would be inaccurate unless the true pretest probability for all 6 diseases were truly 38% to 50%, which is not likely.
Patients were asked their experience with the disease but not with the particular test. Further, the patient may have confused the named test (e.g., throat culture for strep throat) with another test for the same disease (e.g., rapid strep test), which might have test characteristics closer to the patient's answer. The fact that the patient judgments for both sensitivity and specificity for all six diseases were about the same suggests that their inaccuracy is not due to particular ways the questionnaire misled them.
Implications for physicians.
That patients have inaccurate knowledge of probabilities and test characteristics for diseases they do not have is, in itself, no more important than citizen ignorance of engineering or geography facts. That patients who have experience with the disease still know little about these disease probabilities and test characteristics is more concerning. It suggests physician failure to conform to the legal and ethical requirements that patients should be informed about their diagnosis and treatment,28, 29 including any inherent risks and uncertainties.1 It is also inconsistent with a view of the doctor-patient relationship popular with about half of patients,30-37 that calls for full patient participation in decisions made about their health.
We suspect that patients who had experienced a disease judged its probabilities inaccurately not only because of their "innumeracy"21 but also because during that earlier episode their physician had not used probabilities to explain the disease.38 In a workshop at a recent meeting of the Society of Teachers of Family Medicine,39 participants role-played conversations between doctor and patient concerning screening for diseases such as breast cancer and prostate cancer. The participants were amused to discover, as they discussed the role-playing exercises, that those playing the doctor role would often express variations in the degree of certainty conveyed by screening test results by using the same words, varying only the tone of voice. Thus, for a positive screen with high accuracy, they might say "It is not certain" with a more ominous tone than for a positive screen with low accuracy. The physicians explained that they do not know the actual probabilities and they are not confident that the patients can understand data presented as probabilities. The challenge presented to participants by the workshop faculty, and more generally by those advocating that all aspects of medicine be based on evidence, is to communicate this evidence in the best available terms. Surely it is appropriate to discuss uncertainties using summaries of expectations based on actual data -- whether they be presented in relative frequencies (probabilities) or absolute frequencies. A statement that "1 out of 10 people like you with a positive screen would actually have cancer" is more informative than "It is not certain that a positive screen would mean you have cancer", even if the latter statement is delivered with a mildly ominous tone.
A recent study observing patient-physician interaction concluded that physicians cover with patients only a fraction of the ideas necessary for a full understanding of a decision.38 We would venture to hypothesize that if doctors would explain uncertainties using explicit probabilities patients would have a more realistic appreciation of their situations and their options.1, 40 The benefits of an explicit probabilistic explanation of diagnostic uncertainties are analogous to the benefits of explicit probabilities in weather reports: the patients can use the information, in accord with their abilities, to assess their situation and make their decisions about it. Patients without this understanding risk making mistakes, such as worrying excessively about a very low probability disease, or assuming they are free of a disease that has not yet been ruled out.
How should doctors communicate with patients about diagnostic uncertainty?
Our study shows that patients lack both the facts about the uncertainty of particular diseases (pretest disease probability, test sensitivity and specificity) and also the skill of revising their own estimates of those facts to produce an updated estimate of disease probability. In cases where it matters, the physician could provide the facts and help the patient interpret those facts. To convey the rates or probabilities, and to help the patients understand what they are based on, it is desirable to speak in terms that patients can easily understand. For example, it has been argued that people understand the basis for post-test probabilities better if expressed in terms of absolute counts rather than conditional probabilities.10, 41 The paragraph below illustrates a conversation with a woman in her 40's about breast cancer screening, where the pertinent probabilities are that about .0004 of women in this age group have an undetected cancer at any given time, that the sensitivity of mammography (probability any abnormality will be detected, if the woman has a cancer) is about .90 and the specificity .94.42 The conversation uses absolute frequencies:
"Imagine 100,000 women your age being tested for breast cancer. Of them, 40 have breast cancer and 36 of the 40 have an abnormal mammogram. Of the remaining 99,960 women who don't have breast cancer, 5998 will also have an abnormal mammogram, and 93,962 will not. Thus 36 of the 6034 women who have an abnormal mammogram will actually have breast cancer."
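The counts in this example follow mechanically from the stated probabilities (prevalence .0004, sensitivity .90, specificity .94). A minimal check:

```python
n = 100_000
with_cancer = round(n * 0.0004)                 # 40 women have breast cancer
true_pos = round(with_cancer * 0.90)            # 36 of them have an abnormal mammogram
without_cancer = n - with_cancer                # 99,960 do not have cancer
false_pos = round(without_cancer * (1 - 0.94))  # 5,998 abnormal mammograms anyway
true_neg = without_cancer - false_pos           # 93,962 normal mammograms
abnormal = true_pos + false_pos                 # 6,034 abnormal mammograms in all
print(abnormal, round(true_pos / abnormal, 3))  # 6034 0.006
```

So fewer than 1 in 100 of the women with an abnormal mammogram would actually have cancer, despite the test's high sensitivity and specificity.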
Most people find this explanation easier to understand than explanations in terms of probabilities, and so this form has been recommended as a way to communicate about uncertainty with patients.41 In addition to adopting a vocabulary that makes the probabilities understandable, using visual aids, such as the 2 by 2 table relating diseases to tests43 and the tree representation of the same concept,39, 44 may help patients grasp the logic of diagnosis in the face of uncertainty.
The authors appreciate research assistance from Debra Bemben, PhD, who researched the vignettes and administered the questionnaires in Oklahoma, and secretarial and data entry help from Gwen Arnold, as well as helpful readings by Ursula Moore, Laine McCarthy, Eric Stader, and three anonymous reviewers.
1. Bursztajn HJ, Feinbloom RI, Hamm RM, Brodsky A. Medical Choices, Medical Chances: How Patients, Families, and Physicians Can Cope with Uncertainty. New York: Routledge; 1990.
2. Deber RB, Kraetschmer N, Irvine J. What role do patients wish to play in treatment decision making? Archives of Internal Medicine. 1996;156:1414-1420.
3. Manfredi C, Czaja R, Buis M, Derk D. Patient use of treatment-related information received from the Cancer Information Service. Cancer. 1993;71:1326-1337.
4. Greene MG, Adelman RD, Friedmann E, Charon R. Older patient satisfaction with communication during an initial medical encounter. Social Science and Medicine. 1994;38:1279-1288.
5. Peters RM. Matching physician practice style to patient informational issues and decision-making preferences: An approach to patient autonomy and medical paternalism issues in clinical practice. Archives of Family Medicine. 1994;3:760-763.
6. Hack TF, Degner LF, Dyck DG. Relationship between preferences for decisional control and illness information among women with breast cancer: A quantitative and qualitative analysis. Social Science and Medicine. 1994;39:279-289.
7. Hebert PC, Hoffmaster B, Glass KC, Singer PA. Bioethics for clinicians: 7. Truth telling. Canadian Medical Association Journal. 1997;156:225-228.
8. Mazur DJ, Hickam DH. Five-year survival curves: How much data are enough for patient-physician decision making in general surgery? European Journal of Surgery. 1996;162:101-104.
9. Roter DL, Hall JA. Doctors Talking with Patients/Patients Talking with Doctors: Improving Communication in Medical Visits. Westport, CT: Auburn House; 1992.
10. Gigerenzer G, Hoffrage U. How to improve Bayesian reasoning without instruction: frequency formats. Psychological Review. 1995;102:684-704.
11. Sox HC, Jr., Blatt MA, Higgins MC, Marton KI. Medical Decision Making. Boston, MA: Butterworths; 1988.
12. Hagen MD. Test characteristics: How good is that test? Primary Care. 1995;22:213-233.
13. Bergus GR, Chapman GB, Gjerde C, Elstein AS. Clinical reasoning about new symptoms despite preexisting disease: Sources of error and order effects. Family Medicine. 1995;27:314-320.
14. Lee PWR. The plain x-ray in the acute abdomen: A surgeon's evaluation. British Journal of Surgery. 1976;63:763-766.
15. Hull RD, Hirsh J, Carter CJ, et al. Pulmonary angiography, ventilation lung scanning, and venography for clinically suspected pulmonary embolism with abnormal perfusion lung scan. Annals of Internal Medicine. 1983;98:891-899.
16. Heckerling PS, Tape TG, Wigton RS, et al. Clinical prediction rule for pulmonary infiltrates. Annals of Internal Medicine. 1990;113:664-670.
17. Centor RM, Meier FA, Dalton HP. Throat cultures and rapid tests for diagnosis of group A streptococcal pharyngitis. Annals of Internal Medicine. 1986;105:892-899.
18. Carlson JR, Bryant ML, Hinrichs SH, et al. AIDS serology testing in low- and high-risk groups. JAMA. 1985;253:3405-3408.
19. Post MJ, Green BA, Quencer RM, Stokes NA, Callahan RA, Eismont FJ. The value of computed tomography in spinal trauma. Spine. 1982;7:417-431.
20. Behar S, Schor S, Kariv I, Barell V, Modan B. Evaluation of electrocardiogram in emergency room as a decision-making tool. Chest. 1977;71:486-491.
21. Schwartz LM, Woloshin S, Black WC, Welch HG. The role of numeracy in understanding the benefit of screening mammography. Annals of Internal Medicine. 1997;127:966-972.
22. Goodie AS, Fantino E. An experientially derived base-rate error in humans. Psychological Science. 1995;6:101-106.
23. Bar-Hillel M. The base-rate fallacy in probability judgments. Acta Psychologica. 1980;44:211-233.
24. Hamm RM, Miller MA. Interpretation of conditional probabilities in probabilistic inference word problems. Boulder, CO: Institute of Cognitive Science, University of Colorado; 1988.
25. Hamm RM. Explanations for common responses to the blue/green cab probabilistic inference word problem. Psychological Reports. 1993;72:219-242.
26. Edwards W. Conservatism in human information processing. In: Kahneman D, Slovic P, Tversky A, eds. Judgment under Uncertainty: Heuristics and Biases. New York, NY: Cambridge University Press; 1982:359-369.
27. Chapman GB, Bergus GR, Elstein AS. Order of information affects clinical judgment. Journal of Behavioral Decision Making. 1996;9:201-211.
28. Meisel A, Kabnick LD. Informed consent to medical treatment: An analysis of recent legislation. University of Pittsburgh Law Review. 1980;41:407-564.
29. Lidz CW, Meisel A, Osterweis M, Holden JL, Marx JH, Munetz MR. Barriers to informed consent. Annals of Internal Medicine. 1983;99:539-543.
30. Strull WM, Lo B, Charles G. Do patients want to participate in medical decision making? JAMA. 1984;252:2990-2994.
31. Blanchard CG, Labrecque MS, Ruckdeschel JC, Blanchard EB. Information and decision-making preferences of hospitalized adult cancer patients. Social Science and Medicine. 1988;27:1139-1145.
32. Ende J, Kazis L, Ash A, Moskowitz MA. Measuring patients' desire for autonomy: Decision making and information-seeking preferences among medical patients. Journal of General Internal Medicine. 1989;4:23-30.
33. Davison BJ, Degner LF, Morgan TR. Information and decision-making preferences of men with prostate cancer. Oncology Nursing Forum. 1995;22:1401-1408.
34. Deber RB, Kraetschmer N, Irvine J. What role do patients wish to play in treatment decision making? Archives of Internal Medicine. 1996;156:1414-1420.
35. Bradley JG, Zia MJ, Hamilton N. Patient preferences for control in medical decision making: A scenario-based approach. Family Medicine. 1996;28:496-501.
36. Stiggelbout AM, Kiebert GM. A role for the sick role: Patient preferences regarding information and participation in clinical decision-making. Canadian Medical Association Journal. 1997;157:383-389.
37. Mazur DJ, Hickam DH. Patients' preferences for risk disclosure and role in decision making for invasive medical procedures. Journal of General Internal Medicine. 1997;12:114-117.
38. Braddock CH, III, Fihn SD, Levinson W, Jonsen AR, Pearlman RA. How doctors and patients discuss routine clinical decisions: Informed decision making in the outpatient setting. Journal of General Internal Medicine. 1997;12:339-345.
39. Mengel M, Downs T. Teaching evidence-based patient-centered prevention. In: Society of Teachers of Family Medicine, ed. Annual Spring Conference. Boston, MA: STFM; 1997:24.
40. Dudley TW, Nagle J. How to communicate health risks to patients. In: STFM, ed. 16th Annual Conference on Patient Education. Orlando, FL: Society of Teachers of Family Medicine; 1994:14.
41. Gigerenzer G, Hoffrage U, Ebert A. AIDS Counselling for Low-Risk Clients. Munich, Germany: Center for Adaptive Behavior and Cognition, Max Planck Institute for Psychological Research; 1997.
42. Kerlikowske K, Grady D, Barclay J, Sickles EA, Ernster V. Likelihood ratios for modern screening mammography: Risk of breast cancer based on age and mammographic interpretation. JAMA. 1996;276:39-43.
43. Glasziou PP. Probability revision. Primary Care. 1995;22:235-245.
44. Gigerenzer G. The psychology of good judgment: Frequency formats and simple algorithms. Medical Decision Making. 1996;16:273-280.