shelf-lives of the solutions are indicated. The instru-
ments and auxiliary equipment called for by the
method are identified and their operation and mainte-
nance are covered, often by reference to the
manufacturer’s manual. The procedure for readying
the equipment and the samples (natural samples,
blanks, calibrators, and controls) is indicated. The
steps involved in the performance of the method are
presented in a sequential fashion; included are the
steps leading to the standby condition, if there is
one, and the steps for closing down the method. If
the operating steps are different in certain circum-
stances, the modifications of the method and the
circumstances for their application are described.
The mathematical methods for deriving the calibra-
tion function and the measurement function are
described, and the algorithm for the calculation of
sample results is given. The computer software to
be used to perform these calculations is stated.
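As an illustration of these calculations (a minimal sketch only; the calibrator values, the linear model, and the function names are assumptions, not the software specified by any particular method), a calibration function can be fitted to the batch calibrators by least squares and then inverted to convert sample signals into reported results:

    import numpy as np

    # Hypothetical calibrator data for one batch: assigned analyte
    # concentrations and the signals measured for them.
    calibrator_conc = np.array([0.0, 2.5, 5.0, 10.0, 20.0])       # e.g. mmol/L
    calibrator_signal = np.array([0.02, 0.26, 0.51, 1.01, 1.98])  # e.g. absorbance

    # Calibration function: signal modeled as a linear function of
    # concentration, fitted by ordinary least squares.
    slope, intercept = np.polyfit(calibrator_conc, calibrator_signal, deg=1)

    def measurement_function(signal):
        # Invert the calibration function to convert a sample's
        # signal into a reported analyte concentration.
        return (signal - intercept) / slope

    # Calculation of results for the test samples in the batch.
    sample_signals = np.array([0.40, 1.23, 0.77])
    print(np.round(measurement_function(sample_signals), 2))

In practice the form of the fitted model, any weighting, and the acceptance criteria for the fit are dictated by the method documentation.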
Quality assurance program
In the broadest possible sense, quality assurance
is concerned with the reliability of the patient data
generated by the laboratory. It therefore encom-
passes the procedures used to recognize, quantify,
and control the sources of measurement variability
that arise within the laboratory between the receipt
of a specimen and the posting of the study results
(Büttner et al. 1980a). In its more common usage,
quality assurance refers to the control of the preci-
sion and trueness of laboratory methods. It is in this
narrower sense of quality control that quality
assurance will be discussed here.
Internal quality control
Internal quality control refers to the procedures
for quality monitoring, intervention, and remediation
undertaken in a single laboratory (Büttner et al. 1983a, Nix et al. 1987, Petersen et al. 1996). The
unit of control is typically the set of samples that
constitute one batch of the method. As mentioned
previously, each batch includes a set of calibrators
for the purpose of constructing a calibration curve
for that batch. This is done to reduce the variability
that results from between-run calibration variation in
the method. As this is a quality maintenance goal,
batchwise calibration is properly considered one of
the elements of internal quality control.
Also included in each batch of samples is a set
of control samples for which the measurement result
frequency distributions are known. Using statistical
tests called control rules to compare the current
results for the control samples with their known
frequency distributions, the trueness and precision of
the method can be monitored.
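As a sketch of such a comparison (the 2-SD limit and all numbers here are illustrative assumptions, not a recommended rule), a simple control rule flags the batch when a control result falls too far from the established mean:

    def control_rule_violated(result, established_mean, sd_within_lab, k=2.0):
        # Flag a control result lying more than k standard deviations
        # from the established mean for the control material.
        return abs(result - established_mean) > k * sd_within_lab

    # Hypothetical established distribution and current batch controls.
    established_mean = 5.0    # established mean of the control material
    sd_within_lab = 0.2       # established within-laboratory SD
    batch_controls = [5.1, 4.9, 5.5]

    flags = [control_rule_violated(r, established_mean, sd_within_lab)
             for r in batch_controls]
    print(flags)              # [False, False, True]: the batch needs review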
Control samples are derived from control
material rather than individual patient specimens but
are otherwise handled in a fashion identical to test
samples. Indeed, valid internal quality control
depends upon the identical treatment of the control
samples and the test samples. Control material must
come from a large, homogeneous, and stable pool of
material (Büttner et al. 1980c). The composition of
control material should be as similar as possible to
the composition of the test material used in the
method being controlled. This requirement is best
satisfied by control material made from human
products, but such material is generally biohazardous.
Instead, the most widely used type of control
material is commercially manufactured artificial
material which is contrived to simulate the corre-
sponding test material.
Using the techniques for the assessment of
method quality discussed later in this chapter, the
mean analyte concentration of the control material is
established, as is the within-laboratory imprecision in
the measurement of the concentration. For quality
control purposes, method trueness is then evaluated
in terms of this mean and method precision is evalu-
ated in terms of this SD_within-laboratory. This is done
each time a new lot of control material is introduced
into use in the laboratory.
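A minimal sketch of this lot-qualification step follows (assuming the replicate results are simply pooled; a full treatment would separate the within-run and between-run components of variance):

    import statistics

    # Hypothetical replicate measurements of the new lot of control
    # material, accumulated over many runs in this laboratory.
    new_lot_results = [4.98, 5.03, 5.10, 4.95, 5.07, 5.01, 4.99, 5.06]

    # Mean and within-laboratory SD to be used in the control rules
    # for this lot.
    lot_mean = statistics.mean(new_lot_results)
    sd_within_lab = statistics.stdev(new_lot_results)   # sample SD

    print(f"mean = {lot_mean:.2f}  SD(within-laboratory) = {sd_within_lab:.3f}")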
Control rules.
The logic of control rules is best
appreciated by considering the effects that a decline
in method quality has upon the frequency distribu-
tion of control sample results.
Degradation in the quality of a method may be
characterized by an increase in method bias, by an
increase in method imprecision, or by both. If there
is an increase in method bias, the control sample
results will tend to be displaced from the established
mean value for the control material. This means that
there will be an increased probability that the control
sample results will fall in one of the tails
of the established distribution. For instance, as
shown in the graph on the left in Figure 2.4, if the
current bias is equal to 1 SD_within-laboratory, 15.9 percent
of the control sample results will be larger than the
established mean plus 2 SD_within-laboratory. For the estab-
lished distribution, only 2.3 percent of the control
sample results would be that far above the mean.
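These tail probabilities follow directly from the normal distribution and can be verified with a short calculation (assuming normally distributed control results):

    from statistics import NormalDist

    z = NormalDist()                   # standard normal distribution

    # Established distribution: chance of exceeding mean + 2 SD.
    p_established = 1 - z.cdf(2.0)     # about 0.023 (2.3 percent)

    # With a bias of 1 SD, the same limit lies only 1 SD above the
    # shifted mean, so the exceedance probability rises.
    p_biased = 1 - z.cdf(2.0 - 1.0)    # about 0.159 (15.9 percent)

    print(round(p_established, 3), round(p_biased, 3))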
On the other tail of the distribution, the bias will