The Logic of Laboratory Medicine, page 56

become available. As each new study result is
received, the clinician is able to reassess the
probability of the competing diagnoses using Bayes'
formula. The posterior probability calculated from
the preceding study result serves as the prior
probability for the computation of the probability of
a diagnosis based upon the current study results.
This approach is correct as long as there is no
result correlation among the studies, that is, as long
as the segregation of patients into subgroups according to study results does not affect the result
frequency distributions and, hence, likelihood ratios,
for any of the studies. When there is appreciable
result correlation—and there usually is—this
approach will generate probability estimates that are
exaggerated; low probability estimates will be too
low and high probability estimates will be too high.
Indeed, as the number of study results becomes large, the probability estimate will approach either one or zero even though the true probability has an intermediate value (Russek et al. 1983).
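This serial updating can be sketched in code. The following Python fragment uses the odds form of Bayes' formula, which is algebraically equivalent to the probability form used in the text; the pretest probability and likelihood ratios are hypothetical illustrations, not values from the text:

```python
def bayes_update(prior, likelihood_ratio):
    """One application of Bayes' formula in odds form:
    posterior odds = prior odds * likelihood ratio."""
    posterior_odds = prior / (1 - prior) * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Each posterior serves as the prior for the next study result.
p = 0.10  # hypothetical pretest probability of the diagnosis
for lr in [4.0, 3.0, 5.0]:  # hypothetical likelihood ratios of successive studies
    p = bayes_update(p, lr)
print(round(p, 3))  # prints 0.87
```

Note how three only moderately informative results already carry a 10% prior to 87%; with correlated results, this is exactly the mechanism that exaggerates the estimate.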
In the presence of result correlation, conditional
likelihood ratios must be used in Bayes' formula. A
conditional likelihood ratio is the likelihood ratio for
a study result calculated from reference populations
who have identical results for the preceding studies.
This ratio may be greater than, less than, or equal to
the ratio that would be calculated from reference
populations assembled without this restriction.
When multiple study results are analyzed in combination rather than serially, the following form of Bayes' formula can be used, but only if there is no result correlation among the studies,
\[
P[\text{post}] = \frac{P[\text{pre}]\,\prod_i \text{likelihood ratio}_i}{P[\text{pre}]\,\prod_i \text{likelihood ratio}_i + \left(1 - P[\text{pre}]\right)}
\]
This formula indicates that, for a combination of i study results, the overall likelihood ratio used in calculating the posterior probability is the product of the likelihood ratios of each of the individual studies.
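A minimal Python sketch of this combined form follows; when the results are uncorrelated it gives the same answer as updating serially, one study at a time. The pretest probability and likelihood ratios are hypothetical:

```python
import math

def combined_posterior(prior, likelihood_ratios):
    """Combined form of Bayes' formula: the overall likelihood ratio is the
    product of the individual likelihood ratios (valid only when there is
    no result correlation among the studies)."""
    overall_lr = math.prod(likelihood_ratios)
    return prior * overall_lr / (prior * overall_lr + (1 - prior))

# Hypothetical pretest probability and per-study likelihood ratios.
print(round(combined_posterior(0.10, [4.0, 3.0, 5.0]), 3))  # prints 0.87
```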
When result correlation is present, the joint likelihood ratio should be used to calculate the posterior probability,
\[
P[\text{post}] = \frac{P[\text{pre}] \times \text{joint likelihood ratio}}{P[\text{pre}] \times \text{joint likelihood ratio} + \left(1 - P[\text{pre}]\right)}
\]
The joint likelihood ratio is the ratio of the
frequency of the combination of study results in the
presence of the disorder to that in the absence of the
disorder. Although the calculation of joint likelihood ratios is simple in the case of two diagnostic
studies, as the number of studies increases the
computational burden becomes significant. More
importantly, tabulation of the ratios for their ready
use clinically becomes nearly impossible, although
the growing availability of computer databases may
someday make it achievable (Krieg 1988). If the
result frequency distributions behave according to a
parametric statistical model, an enormous simplification can be realized because only the model parameter values need to be recorded. Specific result
combination frequencies and joint likelihood ratios
can then be calculated as needed.
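For the simple two-study case, a joint likelihood ratio can be read directly from tabulated result-combination frequencies. In this Python sketch, the frequencies of each result combination in the diseased and disease-free reference populations are hypothetical:

```python
# Hypothetical frequencies of each (study 1, study 2) result combination,
# tabulated separately in diseased and disease-free reference populations.
freq_diseased = {("pos", "pos"): 0.60, ("pos", "neg"): 0.20,
                 ("neg", "pos"): 0.15, ("neg", "neg"): 0.05}
freq_disease_free = {("pos", "pos"): 0.05, ("pos", "neg"): 0.15,
                     ("neg", "pos"): 0.20, ("neg", "neg"): 0.60}

def joint_likelihood_ratio(combination):
    """Frequency of the result combination with the disorder divided by
    its frequency without the disorder."""
    return freq_diseased[combination] / freq_disease_free[combination]

def posterior(prior, jlr):
    """Bayes' formula using the joint likelihood ratio."""
    return prior * jlr / (prior * jlr + (1 - prior))

jlr = joint_likelihood_ratio(("pos", "pos"))  # 0.60 / 0.05 = 12.0
print(round(posterior(0.10, jlr), 3))  # prints 0.571
```

The tabulation burden is visible even here: two dichotomous studies already require four frequencies per diagnostic class, and the table grows exponentially with the number of studies.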
Discriminant and logistic functions.
If the
result frequency distributions for a test combination
satisfy the statistical conditions required for linear
discriminant regression and are multivariate normal,
the likelihood ratios for result combinations can be
calculated directly using the discriminant function
(Strike 1996),
\[
\text{likelihood ratio} = e^{\,Z - \left(\bar{Z}_D + \bar{Z}_{DF}\right)/2}
\]
where Z is the discriminant score for the result combination,
\[
\text{discriminant score} = \sum_i b_i \, \text{result}_i
\]
with i indicating the ith study, \(\bar{Z}_D\) is the mean discriminant score among individuals with the disease, and \(\bar{Z}_{DF}\) is the mean discriminant score among individuals who are disease-free. Dividing
through by (1-P[pre]) and re-expressing the fraction
P[pre]/(1-P[pre]) as an exponential allows Bayes'
formula to be written,
\[
P[\text{post}] = \frac{e^{\,\log\frac{P[\text{pre}]}{1 - P[\text{pre}]} - \left(\bar{Z}_D + \bar{Z}_{DF}\right)/2 + \sum_i b_i \, \text{result}_i}}{e^{\,\log\frac{P[\text{pre}]}{1 - P[\text{pre}]} - \left(\bar{Z}_D + \bar{Z}_{DF}\right)/2 + \sum_i b_i \, \text{result}_i} + 1}
\]
In this form, which is that of a logistic function, the
parameter values can be estimated using logistic
regression techniques (Strike 1996). Logistic regression has two advantages over linear discriminant
regression. First, the method is much more robust
regarding deviations from statistical constraints; in
particular, it can be used, cautiously, when the
combination result frequency distributions are not
multivariate normal and when the variance/covariance structures of the distributions in the two diagnostic classes are not identical. Second, logistic
regression allows for the inclusion of qualitative and
semiquantitative study results as test combination
terms (Liao 1994). This capability is not often
needed in the realm of diagnostic discrimination but
it is indispensable in prognostic discrimination.
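The logistic form of the posterior probability can be sketched directly in Python. The coefficients, mean discriminant scores, pretest probability, and study results below are all hypothetical illustrations; in practice the coefficients would be estimated by logistic regression from reference data:

```python
import math

def logistic_posterior(prior, b, results, z_bar_d, z_bar_df):
    """Posterior probability via the logistic form: the exponent is the
    prior log-odds, minus (Z_bar_D + Z_bar_DF)/2, plus the discriminant
    score sum(b_i * result_i)."""
    z = sum(bi * ri for bi, ri in zip(b, results))
    exponent = math.log(prior / (1 - prior)) - (z_bar_d + z_bar_df) / 2 + z
    return math.exp(exponent) / (math.exp(exponent) + 1)

# Hypothetical two-study example with illustrative coefficients and
# illustrative mean discriminant scores for the two diagnostic classes.
p = logistic_posterior(prior=0.10, b=[0.8, 1.2], results=[2.0, 1.5],
                       z_bar_d=4.0, z_bar_df=2.0)
print(round(p, 3))
```

When the exponent is zero the function returns 0.5, the familiar midpoint of the logistic curve; larger discriminant scores push the posterior toward one.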
Diagnostic and Prognostic Classification