Detecting Bioterror Attacks by Screening Blood Donors: A Best-Case Analysis

Edward H. Kaplan,* Christopher A. Patton,† William P. FitzGerald,† and Lawrence M. Wein‡

*Yale School of Management and Yale Medical School, New Haven, Connecticut, USA; †American Red Cross, Arlington, Virginia, USA; ‡Stanford University, Stanford, California, USA

To assess whether screening blood donors could provide early warning of a bioterror attack, we combined stochastic models of blood donation and the workings of blood tests with an epidemic model to derive the probability distribution of the time to detect an attack under assumptions favorable to blood donor screening. Comparing the attack detection delay to the incubation times of the most feared bioterror agents shows that even under such optimistic conditions, victims of a bioterror attack would likely exhibit symptoms before the attack was detected through blood donor screening. For example, an attack infecting 100 persons with a noncontagious agent such as Bacillus anthracis would have only a 26% chance of being detected within 25 days; yet, at an assumed additional charge of $10 per test, donor screening would cost $139 million per year. Furthermore, even if screening tests were 99.99% specific, 1,390 false-positive results would occur each year. Therefore, screening blood donors for bioterror agents should not be used to detect a bioterror attack.

The health and economic consequences of an extensive bioterror attack could be severe (1–5); thus, early detection of an otherwise silent bioterror attack is of obvious importance (6). Ongoing developments in rapid testing for potential bioterror agents (7–10) led us to consider whether screening blood donors for the most feared bioterror agents (11) could prove useful in detecting a bioterror attack. The rationale for screening blood donors is twofold. First, blood donors are numerous, and donations are uniformly spread over time and throughout the population. In the United States, approximately 13.9 million blood donations are made each year (12); thus, the annual number of donations roughly equals 5% of the 286 million population. Second, in the absence of specific information regarding how such an attack might target the population, we can assume that blood donors are as likely to be infected in a bioterror attack as nondonors. In a sizeable attack, infected donors might donate blood before their infections have been detected medically. Screening donated blood for bioterror agents could therefore serve to detect an attack sooner than would otherwise be possible.

However, the cost of screening donations is proportional to the number of donations tested, in addition to the resources expended investigating false alarms. To investigate these issues, we developed a model for bioterror attack detection under assumptions favorable to donor screening, for if such best-case assumptions fail to justify screening donors, more realistic assumptions will fail as well. In particular, we initially assume that the screening test used is perfectly specific, which removes the possibility of false alarms, and we compare the time required to detect an attack through donor screening to the incubation periods for various bioterror agents to see whether donor screening leads to more rapid detection than simply observing symptomatic cases. We then consider tests with imperfect specificity, examine the false-alarm rate that would result from donor screening, and compare this rate to the true-positive rate for blood donations.

Methods

Though blood tests with the ability to detect agents such as smallpox virus or Francisella tularensis within days after infection do not exist at present, research to develop such sensitive tests is under way (7–10). To analyze whether screening donors might meaningfully shorten the time required to detect an attack were such tests available, we developed a probabilistic model that joins the workings of a screening test, blood donation, and epidemic spread under assumptions that deliberately favor attack detection through donor screening (see Appendix). In the model, the sensitivity of a screening test is determined by a (random) window period W with mean ω days that must transpire before a person infected at time 0 can be detected as infected. Test sensitivity thus depends on the time from infection until testing. Though the model can accommodate any probability distribution desired, we take W to follow an exponential distribution in our examples, an assumption that favors early detection (since the exponential likelihood is maximized at W = 0, that is, no detection delay, and declines as W increases). We assume initially that the screening test is perfectly specific, though we will relax this assumption later.
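To make the window-period mechanism concrete, a test applied t days after infection detects it with probability F_W(t) = 1 − e^(−t/ω). A minimal sketch (ours, not the authors') tabulating this sensitivity for ω = 3:

```python
import math

# Screening-test sensitivity as a function of days since infection,
# assuming an exponentially distributed window period with mean 3 days.
omega = 3.0
for t in (1, 3, 7, 14):
    print(t, round(1 - math.exp(-t / omega), 3))
# -> 0.283, 0.632, 0.903, 0.991: sensitivity rises with time since infection
```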

A bioterror attack at time 0 infects I(0) = Np persons in a population of size N (where p is the fraction of the population initially infected). We assume that everyone in the population has the same probability of infection due to the attack, that is, the attack does not target the population in a manner that would make blood donors more or less likely to be infected than nondonors. Given that the total number of blood donations over time results from the independent actions of individual blood donors, the aggregate number of blood donations over time was modeled as a Poisson process (13) with rate λ = kN, where k is the mean number of blood donations per person per unit of time. If the agent used in the attack is contagious, secondary infections spread according to an epidemic model, governed by a reproductive number R_0 (the number of secondary infections per initial index case) and an exponentially distributed duration of infectiousness with mean r^(-1). To favor donor screening, we deliberately exclude an explicit latent period (during which an infected person is not infectious). These assumptions imply that infections in the population will grow exponentially with rate (R_0 − 1)r postattack (14), an assumption that further favors donor screening, as the number of blood donors who are infected (and, by ignoring latent periods, infectious) will grow exponentially at the same rate, leading to earlier detection via donor screening than would occur otherwise.
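Under these assumptions, the number of infected persons grows as I(t) = I(0)exp((R_0 − 1)rt) (Appendix, equation 8). As a worked instance, for the smallpox-like parameters used later (R_0 = 3, r^(-1) = 14 days):

```latex
% Exponential growth rate and doubling time implied by R_0 = 3, r = 1/14 per day:
(R_0 - 1)\,r = \frac{3 - 1}{14\ \text{days}} = \frac{1}{7}\ \text{per day},
\qquad
t_{\text{double}} = \frac{\ln 2}{(R_0 - 1)\,r} = 7 \ln 2 \approx 4.9\ \text{days}.
```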

We assume that the attack is detected once a single infected donation tests positive for infection with a bioterror agent, another assumption favorable to donor screening, which enables us to derive the probability distribution of the time required to detect a bioterror attack of a given magnitude. However, to demonstrate the extent to which we have “stacked the deck” in favor of blood donor screening, we relax the assumption of perfect test specificity for noncontagious agents. We assume fixed attack rates and disaster response and recovery periods, which together determine the fraction of time during which infected donations can occur. This assumption allows us to model the rate of false alarms per unit of time and compare it to the rate of true-positive alarms.
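Because detection occurs at the first positive donation, the detection delay for a noncontagious agent can be simulated in a few lines. The following Monte Carlo sketch is our illustration of the model just described (the function name and the 365-day year are our assumptions):

```python
import math
import random

def detection_delay(n_infected, k_per_year=0.05, omega=3.0):
    """One draw of the attack detection delay in days (noncontagious agent).

    The n_infected victims donate as a Poisson process with aggregate rate
    k * n_infected per year; a donation made t days after infection tests
    positive with probability 1 - exp(-t/omega), and the attack is detected
    at the first positive donation.
    """
    rate_per_day = k_per_year * n_infected / 365.0
    t = 0.0
    while True:
        t += random.expovariate(rate_per_day)   # next donation by a victim
        if random.random() < 1.0 - math.exp(-t / omega):
            return t

# Empirical distribution for a 100-person attack (compare with Figure 1)
delays = sorted(detection_delay(100) for _ in range(10_000))
print(delays[len(delays) // 2])                 # sample median delay, days
```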

Results

For initial attacks ranging from 100 to 1,000 infections, Figure 1 shows the probability distribution of the attack detection delay for a noncontagious agent that would result from using a blood-screening test able to detect infections an average of ω = 3 days after infection (an optimistic assumption, given that such tests do not exist at present), assuming that blood donations arrive at rate k = 0.05 per person per year, the average rate for blood donation in the United States (12). The results are not encouraging: for an attack that infects 100 persons, the chance of detecting the attack through blood donor screening within 25 days is 26%; even for a large attack that infects 1,000 persons, the median time to detect the attack is 8 days. Figure 2 (solid curve) shows the mean delay in attack detection as a function of the initial attack size for a noncontagious agent. For an initial attack that affects 1,000 persons, the mean time to detection is 10 days, while for an attack that affects 100 persons, the mean time to detection is 76 days. In most infected persons, symptoms would develop during this period, leading to earlier detection of an attack than blood donor screening would allow, even when potential delay from misdiagnosis or failure to recognize symptoms is accounted for (Table 1; compare to incubation times from infection through symptoms for Bacillus anthracis and Clostridium botulinum, two noncontagious agents). That we have deliberately made assumptions favorable to blood donor screening strengthens this finding, for the actual time required to detect an attack by means of donor screening would be longer than reported above.

Figure 1. Probability distribution of attack detection delay for a noncontagious agent. Blood donations occur at rate k = 0.05 per person per year, the screening test has a mean window period of ω = 3 days, and initial attack sizes range from 100 through 1,000 infections.

Figure 2. Mean attack detection delays for noncontagious (solid) and contagious (dashed) agents as a function of the initial attack size. Other parameters are set as in Figures 1 and 3.
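The curves in Figures 1 and 2 follow from equations (3)–(5) of the Appendix, which for a noncontagious agent reduce to a closed form for ρ(τ). A quick numerical check (our code and conventions, including the 365-day year):

```python
import math

K_PER_DAY = 0.05 / 365      # per-person donation rate (12)
OMEGA = 3.0                 # mean window period, days

def rho(tau, n_infected):
    # Expected positive donations by day tau: equations (1) and (3) with
    # iota = 0 give rho(tau) = k*I0*[tau - omega*(1 - exp(-tau/omega))].
    return K_PER_DAY * n_infected * (tau - OMEGA * (1 - math.exp(-tau / OMEGA)))

def p_detect_by(tau, n_infected):
    return 1 - math.exp(-rho(tau, n_infected))      # equation (4)

def mean_delay(n_infected, horizon=2000.0, steps=200_000):
    # Equation (5): integrate the survivor function exp(-rho(tau))
    h = horizon / steps
    return h * sum(math.exp(-rho(i * h, n_infected)) for i in range(steps))

print(p_detect_by(25, 100))   # ~0.26: 26% chance within 25 days, 100 infected
print(mean_delay(100))        # ~76 days
print(mean_delay(1000))       # ~10 days
```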

If we also assume that ω = 3 days and k = 0.05 per person per year, Figure 3 shows the distribution of delays in attack detection that would result from a contagious agent characterized by R_0 = 3 and r^(-1) = 14 days (parameters suggestive of smallpox [3,11,15] and perhaps Ebola virus [11]). Because additional infections are transmitted to susceptible persons, the probability of detecting an attack within any given period is greater than for a noncontagious agent. Consequently, for a given initial attack size, the attack detection delay distribution is shorter for a contagious agent, as is clear from Figure 3. However, symptoms would develop in many infected persons, and such infections would be recognized before blood donor screening would uncover an attack. Under our best-case assumptions, an attack that initially infects 100 persons would still require 15 days on average before donor screening would detect the attack, while an initial attack infecting 1,000 persons would require 6 days until detection on average (Figure 2).

Figure 3. Probability distribution of attack detection delay for a contagious agent. Blood donations occur at rate k = 0.05 per person per year, the screening test has a mean window period of ω = 3 days, the reproductive number R_0 = 3, the mean duration of infectiousness r^(-1) = 14 days, and initial attack sizes range from 100 through 1,000 infections.
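The contagious-agent results additionally fold in the exponentially growing transmission rate ι(u) of Appendix equation (2). A small quadrature sketch (ours; the grid sizes are arbitrary) reproduces the means quoted above:

```python
import math

K, OMEGA, R0, R_RATE = 0.05 / 365, 3.0, 3.0, 1.0 / 14

def mean_delay_contagious(I0, horizon=150.0, n=3000):
    # E[delay] = int_0^inf exp(-rho(tau)) dtau, where rho(tau) integrates
    # the positive-donation rate k*I0*[F(t) + int_0^t iota(u)/p F(t-u) du];
    # see Appendix equations (1)-(5). Simple Riemann/trapezoid sums.
    h = horizon / n
    F = lambda t: 1.0 - math.exp(-t / OMEGA)                     # window CDF
    g = lambda u: R0 * R_RATE * math.exp((R0 - 1) * R_RATE * u)  # iota(u)/p
    rate = []
    for i in range(n + 1):
        conv = h * sum(g(j * h) * F((i - j) * h) for j in range(i))
        rate.append(K * I0 * (F(i * h) + conv))
    total, rho = 0.0, 0.0
    for i in range(1, n + 1):
        new_rho = rho + 0.5 * h * (rate[i - 1] + rate[i])
        total += 0.5 * h * (math.exp(-rho) + math.exp(-new_rho))
        rho = new_rho
    return total

print(mean_delay_contagious(100))    # ~15 days
print(mean_delay_contagious(1000))   # ~6 days
```

Setting R0 = 0 in this sketch recovers the noncontagious case.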

Treating the range of incubation times from infection through symptoms (Table 1) as 99% probability intervals from agent-specific lognormal distributions, in the case of smallpox one would expect to see five symptomatic cases after 7 days, while more than half of those initially infected with Ebola virus would progress to symptoms within 1 week. The incubation times for plague and tularemia are much shorter (Table 1), but even after increasing the disease progression rate r to compensate for this in our model, many of those infected would exhibit symptoms before the bioterror event was detected through tests of the blood supply (results not shown). Again, considering that we have made assumptions that favor donor screening—that the test has an exponentially distributed window period that detects infection after 3 days on average, that donor screening detects the attack after the first donor tests positive, that there is no latent period from infection through infectiousness, and that a postattack epidemic grows exponentially—donor screening as a method of attack detection does not seem competitive with simple observation of symptomatic case-patients.

Table 1. Incubation periods from infection through symptoms for Centers for Disease Control category A agents

Agent                         Incubation time (days)
Bacillus anthracis            <7
Clostridium botulinum         0.5–1.5
Yersinia pestis               1–6
Smallpox virus                7–17
Francisella tularensis        3–5
Hemorrhagic fever viruses     2–21 (Ebola); 5–10 (Marburg)

Source: (11).

Until now, we have assumed that screening occurs with perfect specificity, which eliminates false-positive results as a consequence. However, if false-positive test results can occur, they will occur frequently. Table 2 reports the false-alarm rates that would occur for tests of different specificities for a noncontagious agent, if one assumes that all 13.9 million annual blood donations are tested, that on average one bioterror attack takes place per year (a rate all would agree is unrealistically high), that on average 1 month is required to respond to and recover from an attack (so infected donations can occur for up to 1 month after an attack), and that each attack infects 1,000 persons. Even with 99.99% specificity, an average of 1,390 false-positive results would occur per year; at 99% specificity, the average would be 139,000 false-positive results per year.

Table 2. False-alarm rates with test specificities as shown*

Specificity (s)    Annual false-alarm rate (FAR)
0.9                1,390,000
0.99               139,000
0.999              13,900
0.9999             1,390

*If one assumes 13.9 million annual blood donations tested, an average of one bioterror attack per year that infects 1,000 persons with a noncontagious agent, and a 1-month response and recovery period during which infected donations continue to arrive.
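Table 2's arithmetic, and the 3.7 true-positives figure discussed below, can be checked directly from Appendix equations (9), (10), and (12). A sketch under the stated assumptions (our code; the 30.44-day month and 365.25-day year are our conventions):

```python
import math

# Check Table 2 and the ~3.7 true-positives/year figure. Assumptions from
# the text: 13.9 million donations/year, one attack per year infecting
# 1,000 of N = 286 million, 1-month recovery, 3-day mean window period.
DONATIONS_PER_YEAR = 13.9e6            # lambda = kN
P_INFECTED = 1000 / 286e6              # p, fraction infected per attack
ALPHA, DELTA_YEARS = 1.0, 1.0 / 12     # attack rate, recovery duration
f = ALPHA * DELTA_YEARS / (1 + ALPHA * DELTA_YEARS)            # equation (9)

for s in (0.9, 0.99, 0.999, 0.9999):
    far = DONATIONS_PER_YEAR * (1 - s) * (1 - f * P_INFECTED)  # equation (10)
    print(f"s = {s}: {far:,.0f} false alarms/year")

# Expected true-positive donations per attack, rho(delta): equation (3)
# with the noncontagious pi(t) = p(1 - exp(-t/omega)).
omega, delta_days = 3.0, 30.44
lam_p = DONATIONS_PER_YEAR / 365.25 * P_INFECTED               # per day
rho = lam_p * (delta_days - omega * (1 - math.exp(-delta_days / omega)))
alpha_prime = 1.0                      # appendix sets alpha' = 1 attack/year
print(f"true positives/year ~ {alpha_prime * rho:.1f}")        # equation (12)
```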

In addition to the resources wasted in investigating so many false alarms, a “crying wolf” mindset could diminish the attention paid to all screening test results, increasing the chance of missing a true-positive test result. That this latter possibility could well occur seems clear because, with the attack rate and duration of response and recovery assumed above, one would expect only 3.7 donations with true-positive results each year (again presuming an exponentially distributed window period with mean ω = 3 days). Also, though lowering the attack rate below one per year to more realistic levels would have no effect on the false-positive rate, the number of donations with true-positive results would fall. Similarly, reducing the duration of the postattack response and recovery during which infected donations can still occur would have essentially no impact on the false-positive rate, while again lowering the number of donations with true-positive results.

Conclusion

We have argued that even under assumptions deliberately favorable to blood donor screening, an attack would be unlikely to be detected earlier through donor screening than from observing symptomatic case-patients. We have also shown that imperfect test specificity could overwhelm the blood collection system with false-positive results. In addition, the costs of screening apply to all blood donations tested: even if the cost of screening were as low as an incremental $10 per test, screening all blood donations in the United States to detect a bioterror attack would cost an additional $139 million per year at current donation rates. Total costs would be even higher when the resources that would be expended investigating false-positive results are considered. For all of these reasons, blood donors should not be screened for bioterror agents for the purpose of detecting a bioterror attack.

E.H.K. was supported in part by Yale University's Center for Interdisciplinary Research on AIDS via Grant MH/DA56826 from the U.S. National Institutes of Mental Health and Drug Abuse.

Dr. Kaplan is the William N. and Marie A. Beach Professor of Management Sciences at the Yale School of Management and professor of public health at the Yale School of Medicine, where he directs the Methodology and Biostatistics Core of Yale's Center for Interdisciplinary Research on AIDS. His interests include the application of operations research, statistics, and mathematical modeling to public health policy problems such as HIV prevention and, more recently, bioterror preparedness and response.



Appendix

We consider a single bioterror attack that infects a proportion p of the population at time 0. To model test sensitivity, we presume that a blood test administered to a person t days after becoming infected will test positive for infection with probability F_W(t), where W refers to the (random) window period of the test. In our examples we assume that W follows the exponential distribution with mean ω days, that is, F_W(t) = 1 − e^(−t/ω), though the model allows assessment for any window period distribution. We set ω = 3 days in our examples.

The probability that a randomly selected member of the population would test positive t days after the attack is then given by

(1)  π(t) = pF_W(t) + ∫_0^t ι(u)F_W(t − u) du,

where ι(u), the per-capita rate of infection due to transmission after the attack (but before detection), grows exponentially as

(2)  ι(u) = pR_0 r exp((R_0 − 1)ru),

as explained following equation 8 below. In equation (2), R_0 is the reproductive number specifying the number of secondary infections transmitted by an initially infected individual early in the outbreak, while r^(-1) is the mean duration of infectiousness (14). We set R_0 = 3 and r^(-1) = 14 days in our contagious examples, parameters suggestive of smallpox (3,15), while results for attacks with noncontagious agents are obtained by setting R_0 = 0. Note that π(t) is proportional to p, the fraction of the population initially infected in the attack.

Due to the superposition of many individually arriving donors (13), we assume that in the aggregate, blood donations occur in accord with a Poisson process with rate λ per unit of time. We set λ = kN for some constant k, that is, the blood donation rate is proportional to the size of the population (k = 0.05 to represent the average U.S. donation rate [12] in our examples). We further assume that donors are no more or less likely to have been infected than nondonors. The number of blood donations that would test positive within time τ of the attack then follows a Poisson distribution with mean

(3)  ρ(τ) = λ ∫_0^τ π(t) dt.

Note that since π(t) is proportional to p while λ is proportional to N, ρ(τ) is proportional to I(0) = Np, the initial attack size. Thus the ability to detect a bioterror attack by means of blood donor screening, when blood donation occurs at a rate proportional to the population, is directly related to the initial number of persons infected in the attack, independently of the size of the population.

The probability that at least one blood donation would test positive and detect the attack within τ days is given by

(4)  D(τ) = 1 − exp(−ρ(τ)),

while the expected time required to detect such an attack equals

(5)  E[Attack Detection Delay] = ∫_0^∞ exp(−ρ(τ)) dτ,

because the expected value of a nonnegative random variable equals the integral of its survivor function, as is well known. Since ρ(τ) is proportional to the initial attack size, the probability of detecting an attack within any fixed time interval increases with the initial attack size, while the expected time required to detect an attack decreases with the size of the attack.
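For a noncontagious agent (R_0 = 0, hence ι ≡ 0), these expressions admit a closed form and elementary bounds. The reduction below is ours, though it follows directly from equations (1), (3), and (5):

```latex
% Noncontagious case: \pi(t) = p\,(1 - e^{-t/\omega}), so equation (3) gives
\rho(\tau) = \lambda p \left[\tau - \omega\left(1 - e^{-\tau/\omega}\right)\right],
\qquad
D(\tau) = 1 - e^{-\rho(\tau)} .
% Since \tau - \omega \le \tau - \omega(1 - e^{-\tau/\omega}) \le \tau and
% e^{-\rho(\tau)} \le 1, equation (5) is bracketed by
\frac{1}{\lambda p} \;\le\; E[\text{Attack Detection Delay}] \;\le\; \omega + \frac{1}{\lambda p} .
```

With λp = kI(0), k = 0.05 per year, and I(0) = 100, the bounds give 73 to 76 days, consistent with the 76-day mean reported in the Results.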

In the event of an attack at time 0 with a contagious agent, we approximate the progress of the resulting epidemic with the standard model

(6)  dI(t)/dt = βI(t)[N − I(t)]/N − rI(t),

where N is the population size, and

(7)  β = R_0 r

is the disease transmission rate (14). Persons infected in this model immediately become infectious and remain so for r^(-1) time units on average; thus, no latent period occurs during which a person is infected but not infectious. Early in the epidemic we have N − I(t) ≈ N, which as usual leads to exponential growth in the number of infections as

(8)  I(t) = I(0)exp((β − r)t) = I(0)exp((R_0 − 1)rt).

Note that the per-capita rate of transmission of infection before the detection of the attack in this model is given by βI(t)/N = pR_0 r exp((R_0 − 1)rt), as in equation 2.

The sensitivity of the attack detection delay to the parameters of this model can be determined directly from the mathematics above. To summarize, the time to detect an attack via blood donor screening will decrease if, ceteris paribus, any of the following parameters increase: the initial number of infections, I(0); the per capita blood donation rate (k); the reproductive number (R_0); and the disease progression rate (r). Increasing the mean window period of the screening test (ω) would lengthen the time required to detect an attack.

The screening test employed is perfectly specific in the analysis above, which obviates the problem of false alarms by assumption. We now relax the assumption of perfect specificity and instead assume that an uninfected donation will test negative with probability s, where s is the specificity of the test. With this new assumption, uninfected donations will test positive with probability 1 − s, which leads to false-positive results.

To compare false-positive and true-positive rates for noncontagious agents, we adopt an alternating renewal process model (13) of bioterror attack and recovery (a similar analysis could be conducted for contagious agents, but little insight can be gained from doing so). Under normal circumstances, we assume that attacks occur at a mean rate of α per unit time. Once an attack occurs, we assume that δ time units are required for response and recovery (clearly δ would depend upon the time required to detect an attack, which in turn could be influenced by donor screening, but this effect is minor and not essential for the main results reported). Infected donations can occur only during the response and recovery period, while to simplify the analysis, we presume that no further attacks ensue during the recovery period (indeed, multiple attacks could simply be modeled within this framework as one larger attack). Again for simplicity, we further assume that blood donations occur at the constant rate λ = kN over time, and that any attack infects a fraction p of the population.

With these assumptions, it follows immediately that the fraction of time occupied by response and recovery, which coincides with the fraction of time during which infected donations can occur, is given by

(9)  f = αδ/(1 + αδ).

It follows that the false-alarm rate, FAR (i.e., the mean number of noninfected donations that falsely test positive per unit time), is equal to

(10)  FAR = kN(1 − s)[(1 − f) + f(1 − p)] = kN(1 − s)(1 − fp),

for all donations that test positive do so falsely under normal circumstances, while during the response and recovery period, a fraction 1 − p of donations will be noninfected, and of these a fraction 1 − s will falsely test positive.

To obtain a simple formula for the true-positive donation rate, note first that the overall attack rate per unit time is given by

(11)  α′ = α(1 − f),

because, by assumption, attacks do not occur during the response and recovery period. Since ρ(δ) infected and detected donations will occur on average during the response and recovery period (where ρ(δ) is given by equation [3]), the overall true-positive donation rate (TPDR) is given by

(12)  TPDR = α′ρ(δ).

In the text, we report results for δ = 1 month and α′ = 1 attack per year, but again the sensitivity of the results to the model parameters is clear from the mathematics: reducing either the attack rate or the duration of response and recovery serves to reduce the true-positive donation rate while marginally increasing the false-positive rate; increasing test specificity obviously reduces the false-alarm rate.



References

1. O'Toole T, Mair M, Inglesby TV. Shining light on “Dark Winter.” Clin Infect Dis 2002;34:972–83.
2. Webb GF, Blaser MJ. Mailborne transmission of anthrax: modeling and implications. Proc Natl Acad Sci U S A 2002;99:7027–32.
3. Kaplan EH, Craft DL, Wein LM. Emergency response to a smallpox attack: the case for mass vaccination. Proc Natl Acad Sci U S A 2002;99:10935–40.
4. Wein LM, Craft DL, Kaplan EH. Emergency response to an anthrax attack. Proc Natl Acad Sci U S A 2003;100:4346–51.
5. Broad WJ. White House debate on smallpox slows plan for wide vaccination. New York Times 2002 Oct 13;20.
6. Obstacles to biodefence. Nature 2002;419:1.
7. De BK, Bragg SL, Sanden GN, Wilson KE, Diem LA, Marston CK, et al. A two-component direct fluorescent-antibody assay for rapid identification of Bacillus anthracis. Emerg Infect Dis 2002;8:1060–5.
8. Espy MJ, Cockerill FR III, Meyer RF, Bowen MD, Poland GA, Hadfield TL, Smith TF. Detection of smallpox virus DNA by LightCycler PCR. J Clin Microbiol 2002;40:1985–8.
9. Berdal BP, Mehl R, Haaheim H, Loksa M, Grunow R, Burans J, et al. Field detection of Francisella tularensis. Scand J Infect Dis 2000;32:287–91.
10. Neergard L. Scientists work on smallpox medicines. Associated Press Online, June 2, 2002. Available from: URL: http://www.lexis-nexis.com
11. Centers for Disease Control and Prevention. Biological diseases/agents: category A. Atlanta (GA): The Centers; 2003. Available from: URL: http://www.bt.cdc.gov/agent/agentlist.asp#categoryadiseases
12. Report on blood collection and transfusion in the United States in 1999. Bethesda (MD): National Blood Data Resource Center; 2000.
13. Cox DR. Renewal theory. London: Methuen & Co. Ltd.; 1967.

14. Anderson RM, May RM. Infectious diseases of humans: dynamics and control. New York: Oxford University Press; 1991.
15. Gani R, Leach S. Transmission potential of smallpox in contemporary populations. Nature 2001;414:748–51.

Address for correspondence: Edward H. Kaplan, Yale School of Management, 135 Prospect Street, New Haven, CT 06511-3729, USA; fax: 203-432-9995; email: edward.kaplan@yale.edu


The opinions expressed by authors contributing to this journal do not necessarily reflect the opinions of the Centers for Disease Control and Prevention or the institutions with which the authors are affiliated.


