dependent phenomena such as drift in the perform-
ance characteristics of the measurement instrument.
Within-laboratory imprecision is the total impre-
cision in the measurement of an analyte in a single
laboratory. It reflects the variability arising within
and between runs. The variances of these two
sources add together to give the total variance,
$$\mathrm{var}_{\text{within-laboratory}} = \mathrm{var}_{\text{within-run}} + \mathrm{var}_{\text{between-run}}$$

where var is variance. In terms of the usual measure
of imprecision, standard deviations,

$$SD_{\text{within-laboratory}} = \sqrt{SD_{\text{within-run}}^{2} + SD_{\text{between-run}}^{2}}$$

where SD is standard deviation.
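To make the combination concrete, here is a minimal Python sketch (the function name and the numeric values are illustrative, not from the source) that combines the two components of imprecision:

```python
import math

def within_laboratory_sd(sd_within_run: float, sd_between_run: float) -> float:
    """Total within-laboratory SD: variances add, so the combined SD
    is the square root of the sum of the squared components."""
    return math.sqrt(sd_within_run ** 2 + sd_between_run ** 2)

# Hypothetical example: a within-run SD of 1.0 and a between-run SD
# of 0.8 (in the analyte's concentration units) combine to about 1.28.
print(within_laboratory_sd(1.0, 0.8))  # 1.2806...
```

Note that the standard deviations themselves do not add; only the variances do.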
The imprecision that arises when several labora-
tories contribute to the production of results is called
between-laboratory imprecision. It is caused by
inter-laboratory variation in calibrators, calibration
spacing scheme, choice of calibration function, and
technique for estimation of calibration curve parame-
ters. Other causes include differences in operating
conditions, differences in operator skill, and differ-
ences in the measurement system.
Resolving power.
The precision of a method
determines how well the method can distinguish
differences in analyte concentration. This property,
referred to as the resolving power of a method, is a
useful alternative measure of method precision,
especially when small changes in analyte concentra-
tion must be discerned and when trace concentra-
tions of analyte must be detected (Sadler
et al.
1992,
Gautschi
et al.
1993). The resolving power of a
method is what is often referred to as the analytical
sensitivity of the method (Ekins and Edwards 1997).
Resolving power is a less confusing term, however,
because analytical sensitivity is also taken to mean the
slope of the calibration curve (Pardue 1997).
The usual way in which resolving power is
expressed is as the minimum distinguishable differ-
ence in concentration, $D_{\min}$. This parameter can be
defined for within-run differences or for between-run
differences. For between-run differences (Sadler et al. 1992),

$$D_{\min} = z_c\,\sqrt{2}\; SD_{\text{within-laboratory}}$$

where $z_c$ is the confidence coefficient as found with
the standard normal distribution; $z_c$ equals 1.645 for
a 95% confidence level. This formula assumes that
method precision is essentially constant over intervals
of analyte concentration equal in length to $D_{\min}$.
The detection limit of a method, which is the
smallest analyte concentration that can reliably be
distinguished from zero, is a special case of $D_{\min}$.
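As a worked illustration of the formula above (the numeric values are hypothetical), the following sketch computes $D_{\min}$; following the text's description of the detection limit as a special case of $D_{\min}$, the same computation is applied with the within-laboratory SD observed near zero concentration:

```python
import math

def d_min(sd_within_laboratory: float, z_c: float = 1.645) -> float:
    """Minimum distinguishable difference between two between-run
    results: D_min = z_c * sqrt(2) * SD_within-laboratory.
    z_c = 1.645 corresponds to a 95% confidence level."""
    return z_c * math.sqrt(2) * sd_within_laboratory

# Hypothetical within-laboratory SD of 0.5 concentration units
print(d_min(0.5))  # about 1.16

# Detection limit: same formula, using the (hypothetical) SD of 0.2
# measured near zero analyte concentration
print(d_min(0.2))  # about 0.47
```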
Hierarchy of method quality
The ideal of laboratory practice is to implement
methods of the highest quality. Unfortunately, of
the methods available for the measurement of a
particular analyte, those of the very highest quality
are too expensive and impractical for
most clinical laboratories. These methods, which
are called definitive methods, are used to validate
the accuracy of the methods at the next level of
quality, called reference methods. Reference
methods, which have only negligible inaccuracy
compared to definitive methods, are generally less
costly than definitive methods but they are still
impractical for routine use. They are used to
validate the accuracy of the affordable and practical
methods of lower quality that are actually imple-
mented in the clinical laboratory. These methods
are called field methods. This hierarchic chain of
validation of the accuracy of laboratory methods
represents one of the two elements of the system of
accuracy transfer that is used to assure the quality of
field methods. The other element of the system is a
hierarchy of calibrators. In this hierarchy, field
methods are calibrated with secondary reference
materials, these being calibrators whose values have
been established using a reference method. Refer-
ence methods, in turn, are calibrated with primary
reference materials, which are calibrators whose
values have been certified by a competent authority
through the use of a definitive method.
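The two-chain structure of the accuracy-transfer system can be summarized schematically. The sketch below is purely illustrative (the dictionary layout is this example's, not a standard); it simply encodes the relationships described above:

```python
# Illustrative encoding of the accuracy-transfer hierarchy: each method
# tier is validated by the tier above and calibrated with materials
# whose values come from that higher tier.
accuracy_transfer = {
    "definitive method": {
        "validates": "reference method",
        "assigns values to": "primary reference material",
    },
    "reference method": {
        "calibrated with": "primary reference material",
        "validates": "field method",
        "assigns values to": "secondary reference material",
    },
    "field method": {
        "calibrated with": "secondary reference material",
    },
}
```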
Analytical quality goals
It is recognized that field methods cannot
provide reference method-level analytical quality
given the constraints of affordability and practicality
within which the methods must operate. However,
it is necessary that the methods achieve a minimum
level of quality—one that allows them to be of use
clinically. It is therefore useful to define a desirable
level of quality that can be used by both method
developers and laboratorians as a benchmark for
field method performance.
A number of different approaches can be used to
define desirable analytical quality goals (Stöckl
et al.
1995). These approaches include defining goals in
keeping with the current “state of the art” in high-
quality laboratories, having experts define the goals,
and basing goals on the quality expectations of