During the first year of the coronavirus pandemic, the belated, disorganized and fragmented response of the United States repeatedly prompted the question: why had the country been caught so unprepared? In this period, the death toll from Covid-19 in the United States was among the highest in the world, leading observers to compare the nation’s pandemic response unfavorably to that of other advanced industrial countries, and to seek explanations for its apparent inability to develop a coherent response. “The United States is among the hardest-hit nations in the world, with more than 327,000 deaths, 18 million infected, the fourth-highest per capita mortality rate among nations and more suffering to come,” editorialized the Washington Post (2020), “What went wrong?” A comparative study of national responses to the pandemic similarly pointed to the puzzle of “why some nations have contained the virus completely while others have struggled to prevent multiple waves of community transmission,” noting that “despite the impressive US achievements in biomedicine, and despite extensive planning for pandemic preparedness, the US record in addressing the public health crisis of Covid-19 is among the worst in the world” (Jasanoff et al., 2021).1
The question of why the US response to Covid had failed so spectacularly was all the more perplexing in considering the amount of attention and resources that the federal government had devoted to preparing for an outbreak of a novel infectious disease over the prior two decades (Lakoff, 2017). An initial drive for biological preparedness began in the late 1990s, as biodefense officials became concerned about the whereabouts of Cold War era bioweapons and the prospect of a future biological attack, leading to the creation of a Strategic National Stockpile of biomedical countermeasures. The 2001 anthrax letters led to further biosecurity initiatives, such as the passage of Project Bioshield, designed to enable the government to develop, acquire, and stockpile biomedical countermeasures against bioweapons threats. In the wake of the 2002–2003 SARS outbreak, and as the specter of avian flu came to the attention of US health and security officials, the problematic of preparedness extended beyond biological weapons to address the threat posed by “emerging infectious diseases” such as pandemic influenza. Among other measures, antiviral medications and equipment for managing a respiratory disease outbreak were added to the national stockpile. During this period, the US Homeland Security Council released the National Strategy for Pandemic Influenza (White House, 2005), the Centers for Disease Control made federal funds available to incentivize local public health agencies to improve their pandemic readiness, and the National Institutes of Health greatly increased its support for basic research in influenza virology. Meanwhile, pandemic preparedness efforts extended internationally. In 2005, with support from the US Centers for Disease Control, the World Health Organization revised its International Health Regulations, a set of legally binding measures for managing infectious disease outbreaks, in order to make it easier for health authorities to detect an emerging disease outbreak and to coordinate international response at an early stage (World Health Organization, 2005). Multiple governmental and non-governmental organizations ran test exercises that simulated catastrophic disease outbreaks, exposing vulnerabilities in public health systems and pointing to policy solutions. These various efforts continued over the following decade, in relation to events such as the 2009 H1N1 pandemic, the 2014 Ebola epidemic in West Africa, and anxieties around birth defects linked to the spread of Zika in Latin America in 2016.
As the nation that had initiated and provided support for many of these efforts, the United States was arguably the world’s capital of pandemic preparedness. A 2019 assessment conducted by two Washington, DC-based think tanks, the Nuclear Threat Initiative and the Johns Hopkins Center for Health Security, confirmed US leadership in the field, ranking the United States first among 195 countries “in their readiness to deal with the threat of an epidemic or pandemic” (Center for Health Security, 2019). The analysis, entitled the Global Health Security Index (GHSI), was the “first comprehensive assessment and benchmarking of health security and related capabilities” among the state parties to the revised International Health Regulations. In the GHSI rankings, the US was rated well ahead of countries that, observers later agreed, were far more successful in responding to the early stages of the coronavirus pandemic, including South Korea (rated #9), Germany (#14), Singapore (#24), and Vietnam (#50). In late February 2020, President Donald Trump cited the GHSI rankings in assuring the American public of the nation’s readiness for the arrival of the novel coronavirus, boasting that after comparing the “countries best and worst prepared for an epidemic,” the index had concluded that “the United States, we’re rated No. 1” (Alltucker & Hauck, 2020). Eight months later, a New York Times columnist pointed to the GHSI in describing the US response as a “colossal failure of leadership,” writing that “the paradox is that a year ago, the United States seemed particularly well positioned to handle this kind of crisis” (Kristof, 2020).2 More generally, as the pandemic unfolded, the index became a source of interest and curiosity for a range of commentators: how was it possible, they asked, that the top-rated country in preparedness for a future pandemic could have fared so comparatively poorly in its response to Covid-19?
In this essay, I suggest that the question of the significance of this index of national preparedness should be posed somewhat differently. Rather than asking why the United States, despite being ranked so highly in the GHSI, proved to be so ill-prepared for the coronavirus pandemic, we should ask: how did this index formulate “health security” as a measurable condition? What was the purpose of this comparative project of assessment, and how did this purpose direct the attention of the index toward measuring certain capabilities and not others as keys to calculating and comparing levels of national readiness?3
1 The Project of Global Health Security
The Global Health Security Index’s effort to measure and compare national levels of pandemic preparedness resembles other comparative efforts to quantify national well-being that are associated with fields such as international development. One can point, for instance, to the World Bank’s World Development Indicators, the United Nations’ Human Development Index, or the “global indicator framework” of the UN Sustainable Development Goals.4 These various analytic instruments are tools for the enactment of a form of global biopolitics (Collier & Ong, 2005). Such comparative measurement tools generate quantitative data on domains of social and economic life, providing targets for policy interventions and enabling technocratic assessment of the efficacy of such interventions. These comparative indices are particularly useful to multilateral agencies and philanthropic foundations that seek to measure the effectiveness of donor-funded programs to improve the well-being of populations at the global scale. For its Human Development Index, the United Nations Development Program calculates and compares average life expectancy at birth, years of schooling, and per capita income for all countries in the world, and then ranks each country according to its overall score.5 Similarly, the World Bank has introduced “World Development Indicators” to enable cross-national comparison of poverty rates, population growth, agricultural yield, military expenditures, and other elements of national life.6 The United Nations’ Sustainable Development Goals framework includes 231 unique indicators for monitoring a country’s path toward sustainable development, such as undernourishment, maternal mortality, and rates of infectious disease.7
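To make the aggregation logic of such composite indices concrete, the following sketch shows, in schematic form, how an HDI-style score might be assembled: each dimension is normalized against fixed “goalposts,” the normalized values are combined into a single number, and countries are ranked by the result. The goalposts, the use of a geometric mean, and the country figures below are illustrative assumptions rather than a reproduction of the UNDP’s published methodology.

```python
from math import log, prod

def normalize(value, lo, hi):
    """Scale a raw dimension value to the [0, 1] range between goalposts lo and hi."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def composite_index(life_expectancy, years_of_schooling, income_per_capita):
    # Hypothetical goalposts, loosely modeled on HDI-style normalization.
    health = normalize(life_expectancy, 20, 85)
    education = normalize(years_of_schooling, 0, 15)
    income = normalize(log(income_per_capita), log(100), log(75_000))
    # A geometric mean penalizes very uneven achievement across dimensions.
    return prod([health, education, income]) ** (1 / 3)

# Hypothetical figures, for illustration only.
countries = {
    "Country A": (82.0, 13.0, 45_000),
    "Country B": (68.0, 8.5, 6_000),
}

ranking = sorted(countries, key=lambda c: composite_index(*countries[c]), reverse=True)
for rank, name in enumerate(ranking, start=1):
    print(rank, name, round(composite_index(*countries[name]), 3))
```

Whatever the particular formula, the output is the same kind of object: a single score per country that permits ranking, comparison over time, and the setting of targets.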
While similar in its aspiration to enable comparison across a diverse range of national contexts, the Global Health Security Index is distinct from these other biopolitical devices in that it does not rely on measurements of the actual welfare of national populations — whether income levels, longevity, infant mortality, or rates of malnutrition. Instead, it seeks to measure a virtual capacity: whether a nation will be able to respond adequately to a potential future event — the emergence of a novel infectious disease. It does not assess the health and wellbeing of a given population in the present, but rather the capabilities of a national public health system in the event of a future outbreak. The broad categories of capability included in the GHSI measuring system are “prevention of the emergence or release of pathogens,” “early detection and reporting for outbreaks of potential international concern,” and “rapid response to and mitigation of the spread of an epidemic” (Center for Health Security, 2019, p. 8). By generating such anticipatory knowledge about a country’s ability to detect and manage a future outbreak, the index can point to sites of present intervention that would improve its condition of “health security.”
Here it is useful to step back for a moment to ask: why was it considered important, for those who formulated the GHSI and their patrons, to measure the health security of every nation in the world? As we will see, this initiative was one element in a larger effort by a group of international health specialists to improve the world’s preparedness for future outbreaks of emerging infectious diseases, a project termed “global health security.” As part of the project of global health security, international health and security experts sought to understand the current level of pandemic preparedness of each country. This required, first of all, the development of standard metrics that could identify the elements of national preparedness and make disparate settings comparable (Alder, 1998; Rottenberg et al., 2015). The “global health security indicator” served as such a measuring device, making it possible to quantify and compare a condition of preparedness across a variegated landscape of national public health systems. The GHSI, then, sought to assess how far each country was along the path toward health security, and to provide targets for improvement.8 The index thus served as the technical basis of a normative framework: as its authors put it, “over time, the GHS index will spur measurable changes in national health security and improve international capability to address” the risk of “infectious disease outbreaks that can lead to international epidemics and pandemics” (Center for Health Security, 2019, p. 31).
The project of establishing a global form of health security had been launched over a decade before the publication of GHSI, with the 2007 release of a report from the World Health Organization entitled A Safer Future: Global Public Health Security in the 21st Century (World Health Organization, 2007). The WHO report — and the technical and organizational initiatives that accompanied it — focused on a distinctive type of event, what it called a “public health emergency of international concern.” According to the WHO framework, such an emergency could be declared in response to a “naturally-occurring” outbreak of an emerging pathogen, an intentional biological attack, or some other health-related disaster. More generally, the framework enjoined WHO member states to prepare for a disease event that would be novel, unpredictable, and potentially catastrophic. The goal of the framework was to ensure a collaborative and coherent international response to future public health emergencies. Toward this end, one of its critical elements was the requirement that national governments be able — and willing — to detect and report outbreaks of novel pathogens to international health authorities.
A Safer Future articulated the basic technical capabilities that would be required at the national level in order for the project of global health security to succeed: first, an ability to detect and report the initial onset of an event with the potential to become a global health emergency, and second, the capacity to rapidly respond to contain the event and minimize its damage. As the report stated, a condition of health security could be achieved only “if there is immediate alert and response to disease outbreaks and other incidents that could spark epidemics or spread globally and if there are national systems in place for detection and response should such events occur across international borders” (World Health Organization, 2007, p. 11). These basic functions, in turn, point to two key challenges facing the project of global health security — one of veridiction and the other of jurisdiction. The first concerns the production of knowledge about a possible future event: global health security must operate in the present on an object — a future disease outbreak — that cannot yet be grasped. It strives to put in place systems that can detect the onset of an as-yet unknown infectious disease at its early stages before it has spread to become catastrophic. The second challenge concerns jurisdiction over a vast terrain of potential disease emergence. To become “global,” health security requires the active collaboration of local and national health agencies with international officials. This leads to an ongoing disjuncture between the site of responsibility for knowing about and acting on potential global health emergencies, on the one hand, and the locus of sovereignty in which such action may be authorized, on the other. Even before the coronavirus pandemic, a series of controversies over the prior two decades concerning potential and actual health emergencies — from avian influenza to H1N1 to Ebola — were characterized by difficulties in addressing these two challenges (see Lakoff, 2017).
2 The Preparedness Kit
Given the field’s orientation to the future, advocates for health security must continually ask themselves the question: “are we prepared for the next emergency?” The answer, whether drawn from lessons learned after test exercises, or from post-hoc assessments of actual events, is inevitably “no.” It is always possible to identify gaps in capability; one can always strive to become more prepared. The demand for measurement arises with the question: how to know whether this striving is leading anywhere? In other words, how to gauge improvement (or the lack thereof) in a condition of national preparedness, in the absence of the anticipated event? Three basic elements make up what we can think of as a “kit” of reflexive self-transformation that makes it possible to assess and — in principle — to improve a nation’s preparedness for a future disease emergency: first, a list of required governmental actions; second, a practice of imaginative enactment; and third, a process of self-assessment in relation to such enactment. I will briefly discuss the historical emergence of this preparedness kit, before turning to its current application in the field of global health security. As we will see, the preparedness kit initially arose in an altogether different context, Cold War mobilization for a nuclear attack, but has, over the last several decades, extended beyond this setting to address a range of potential emergencies, including a catastrophic disease outbreak.
Detailed lists of governmental actions to take in a future emergency were initially compiled in the 1950s within the US Office of Defense Mobilization (ODM), a little-known but influential office located in the Executive Branch, charged with resource planning for a future war.9 During this period, the goal of mobilization policy shifted from military-industrial planning for a total war along the model of World War II, to ensuring the survival of the national population and the capacity for economic recovery in the aftermath of a future nuclear attack (Collier & Lakoff, 2021). ODM officials faced the challenge of envisioning the details of an unprecedented event: historical experience could not be used as a basis in planning for resource needs in a future nuclear war. To develop a mobilization plan for such a war, officials asked: what capacities would have to be in place in order to enable national survival and recovery in the aftermath of nuclear attack? And how could government agencies and the public be convinced of the need to invest in these capacities in advance of the event?
ODM’s classified plan for a future nuclear war, Mobilization Plan D-Minus, was developed over several years and circulated to other federal agencies in 1957. The plan included a detailed scenario of an imagined future attack: where bombs would be dropped, the amount of damage that would be inflicted on industrial and government facilities, the number of civilian casualties that would be suffered (Office of Defense Mobilization, 1957; Collier & Lakoff, 2021). The plan also included a schema for the post-attack organization of emergency government. Upon the order of the President, a series of new government agencies would come into being whose task would be to manage the nation’s resources toward the aim of population survival and economic recovery.10 In the imagined post-attack future, a new “Office of War Resources” would coordinate the provision of resources with newly formed emergency agencies such as the War Communications Administration, the War Food Administration, and the War Transport Administration. To avoid governmental chaos, each new agency would have to be aware of its required emergency functions and be capable of performing them. ODM used two planning techniques, the list of emergency action steps and the scenario-based exercise, to generate awareness of this schema among officials and to test the government’s capability to address a future wartime emergency.
The list of emergency action steps served as the basic scaffolding of Plan D-Minus. The completed plan was composed of dozens of pages of tables listing specific action steps, when they were to be performed, and which government agency would be responsible for performing them. These tables of action steps were organized according to a series of resource categories, including telecommunications, food, housing, raw materials, transportation, and fuels. According to a table of action steps compiled under the category of “food,” for instance, the Agriculture Department was charged with developing food rationing systems and allocating limited food supplies. The list of emergency housing actions assigned to the Federal Civil Defense Administration included such tasks as determining post-attack shelter needs and creating new programs to meet these needs.
In tandem with these lists of future actions, Cold War mobilization planners developed a method for testing the capability of government agencies to perform their emergency functions: the scenario-based exercise. These simulated events made it possible to generate knowledge in the present about capabilities that would be needed in the future. Scenario-based exercises tested the adequacy of mobilization plans, and enjoined government agencies to learn about and practice their assigned tasks, identifying gaps in preparedness that could then be targets of rectification. A government memorandum explained the goals of one such exercise, Operation Alert 1957: “To improve the national readiness” to meet the demands of a future war, to “maintain the functioning of government” under emergency conditions, and — most tellingly, in terms of this recursive planning process — “to determine what aspects of our preparedness program need greatest emphasis during the next 12 months” (White House, 1957). The objective of the Cold War program of test exercises was to turn nuclear preparedness into a measurable, and thereby improvable, condition.11 Equipped with the scenario of a future attack and the list of emergency actions, mobilization officials could assess the effectiveness of federal agencies’ response to the exercise.
This kit for critical self-rectification in the service of achieving a condition of improved national preparedness gradually migrated from the specific context of planning for a nuclear attack to address the more general problem of emergency planning. An initial step was the federal government’s 1964 National Plan for Emergency Preparedness, which applied the framework developed in mobilization planning to “any threat to the national security” (Office of Emergency Planning, 1964).12 The 1964 Plan was organized according to sixteen resource areas — including food, energy, fuel, health, and water — in which federal agencies would have to take emergency actions. As they evolved over the next several decades, US government plans for dealing with a range of potential future emergencies — whether caused by a natural disaster, a terrorist attack, or an epidemic — typically contained detailed lists of agency responsibilities for the management of resources and the provision of relief.13 And, in turn, agencies have used scenario-based exercises to test their capacity to meet their assigned responsibilities.
While the combination of elements found in Plan D-Minus was a contingent response to the challenges of mobilization for nuclear attack, this diagram of planning for a future emergency has extended into a range of new areas. Initially formed “as a specific response to a historical problem,” as Paul Rabinow describes the consolidation of a governmental apparatus, it has since been “turned into a general technology of power applicable to other situations” (2003, p. 54). The preparedness kit — the list of emergency actions, the scenario-based exercise, and the practice of assessment — has proven to be a dynamic engine of critical self-rectification.
3 The Emerging Disease Threat
With the genealogy of this schema of governmental preparedness for emergency in mind, we can now return to the domain of global health security. In the late 1980s and early 1990s, a group of infectious disease specialists introduced the category of “emerging infectious diseases” to describe an apparent increase in the appearance of novel pathogens. AIDS, Ebola, and West Nile virus, as well as drug-resistant forms of malaria and tuberculosis, were prominent examples (King, 2002). Emerging diseases had three salient characteristics in common, according to these specialists. First, their appearance and global spread were bound up with modernization processes: urban crowding, environmental destruction, and increasing global circulation (of people and things) were the ecological conditions of possibility for novel disease emergence. Second, the appearance of such novel and deadly infectious diseases could not be prevented but could only be anticipated through the implementation of epidemiological monitoring networks at a global scale. And third, from the perspective of health authorities in advanced industrial countries, while these diseases typically emerged in poorer parts of the world, global interdependence rendered wealthy countries vulnerable to them, and only a global form of detection and response could provide security against this novel threat. But there was as yet no institutional mechanism to put in place such a system.
International health authorities conceptualized the 2002–2003 SARS outbreak in these terms: human populations had been rendered vulnerable to such an outbreak by virtue of new forms of human-animal interaction, rapid international circulation, and the absence of a global network for detection and response to novel pathogens. A group of infectious disease specialists — many of whom had served in the Epidemic Intelligence Service (EIS) of the US Centers for Disease Control — had both a diagnosis of the problem and a prescription for addressing it. They argued that SARS had demonstrated a worrying incapacity to detect and collectively respond to emerging diseases in time to contain them, an incapacity that could lead to catastrophic consequences in the future. A major problem for outbreak containment — demonstrated by China’s initial response to SARS — was that national governments were often hesitant to report outbreaks of novel infectious disease to international health officials, or to allow experts into the country to monitor and seek to manage such outbreaks. In a 2004 interview, epidemiologist David Heymann, an EIS veteran, articulated this problem of compliance from the perspective of countries that were concerned about the threat posed by emerging pathogens: “Inadequate surveillance and response capacity in a single country can endanger the public health security of national populations and in the rest of the world” (Heymann & Rodier, 2004). This was the rationale for building an apparatus of global health security that could “govern” public health response at the national scale.
As a means of implementing the envisioned global surveillance and response capacity, this group of specialists pushed for a revision of the venerable International Health Regulations (IHR). Originally enacted in the nineteenth century, in the context of colonial-era efforts to control the spread of infectious disease, the IHR system is designed to ensure national sovereignty over public health response to an epidemic while at the same time regulating state action to minimize global economic disruption and ensure that international authorities can monitor and minimize circulation of the disease (Fidler, 2005). The IHR system envisions a role of organizational coordination and technical support for the World Health Organization, one that is dependent upon actions at the national level. Thus, it provides administrative and technical protocols for managing the global circulation of pathogens — as a collaboration among multilateral agencies and national authorities (Opitz, 2015).
The 2005 revision of IHR included three major changes to address the novel threat of emerging pathogens. First, it vastly expanded the set of diseases that could constitute an international health emergency from the limited nineteenth century list of yellow fever, cholera, and plague, inventing the generic category of “public health emergency of international concern.” Second, it defined the actions WHO would take in order to coordinate a global response to such an emergency, as well as the responsibilities of national partner organizations. And third, it obliged all WHO member states to develop “core capacities for outbreak detection and response” within a circumscribed time frame, though without providing either a legal enforcement mechanism or an outlay of resources to achieve this. As we will see, it was this latter element of the revised IHR system that the “global health security indicator” would seek to measure and improve.
The IHR revision laid out the spatial dynamics through which “core capacities” at local and national levels would, in theory, contain the spread of an emerging disease outbreak (see Figure 1). According to this schema, a novel pathogen appears in a given country through a vehicle of global circulation such as an airplane or a ship, arriving at a “point of entry.” Each country where the pathogen arrives is able to use its “national core capacities” to detect and respond to the event, and to coordinate its response with international health officials. WHO in turn provides technical expertise in disease surveillance, risk assessment and the coordination of response. For this envisioned system of coordinated global response to function — now returning to the problem of metrics of evaluation — a method was needed to ensure that each country had adequately implemented its required core capacities for detection, alert and response.
The revised IHR (2005) included a list of the core technical and administrative capacities that would be necessary for each WHO member state to be able to fulfill its responsibilities. This list of core capacities included outbreak notification systems, epidemic control measures, and sites of response coordination. To implement these capacities would require that member states invest in health security at the community, intermediate, and national scales. And these capacities would have to be in place, in principle, for all 193 member states. Each WHO member state was initially given until the year 2012 to fulfill its core capacity requirements under the revised IHR. But by that year, only twenty percent of member states had actually implemented these requirements, even according to their own assessments, and WHO extended the deadline for compliance to 2016. A later WHO report argued that “weak political will,” “limited awareness” of the regulations, and a lack of sufficient resources had made implementation of the IHR core capacity requirements an “insurmountable challenge” in much of the world (World Health Organization, 2016a). Meanwhile, advocates of health security began to investigate whether there were ways to galvanize resources and put pressure on national governments to build these capacities, seen as crucial to effective disease detection and response at a global scale. As part of these efforts, in 2013 WHO released a “core capacity monitoring framework,” including a checklist of indicators, to measure the extent to which its member states were fulfilling their obligations under the IHR. The monitoring framework defined eight core capacities “needed for detecting and responding to the specified human hazards and events” at the point of entry, including “surveillance,” “response,” “preparedness,” and “risk communication” (World Health Organization, 2013, p. 14).14
The catastrophic 2014 Ebola epidemic in West Africa led to a push for WHO to move more aggressively on the implementation of the core capacity requirement. In the aftermath of what was widely seen as a massively failed response to the early stages of the epidemic, observers blamed the international community for allowing a “preventable tragedy” to unfold (United Nations, 2016). Thousands of people had died from a disease that in prior outbreaks had never caused more than a few hundred deaths. WHO came under particular criticism for its perceived failures of response. Some critics pointed to a lack of leadership within the organization, and others to the absence of adequate resources. But more specifically, a number of post-hoc assessments focused on flaws in the implementation of the 2005 revised IHR framework as a key source of the poor WHO response.
Analysts scrutinized two elements of the IHR framework in particular. First, they looked at the role of the decision instrument whose task was to rapidly galvanize international attention and resources to address an unfolding health emergency: WHO had not declared an official “public health emergency of international concern” until the epidemic was already out of control, several months after the initial identification of the outbreak. And second, analysts pointed to the long-running failure of most WHO member states to implement the IHR core capacities requirement — a failure that was now implicated in the poor responses by national public health agencies in the region affected by the Ebola epidemic. As an editorial in Nature put it, while “aspirations” of “strengthening health systems everywhere” as “the best defence against outbreaks of potential international concern” were correct, “the reality is that few poor countries have anything that resembles a working outbreak-response system” (2014, p. 459).
4 Health Security Indicators
In post-hoc discussions of reforms to the IHR system, international health authorities understood the Ebola epidemic as a kind of “test” of the global health security framework, one that should — like an exercise — lead to a process of critical assessment, and, presumably, self-rectification. As an internal report on WHO’s response to the epidemic concluded, the epidemic had been a “major test of the revised IHR.” The “severity and duration” of the event had “challenged the IHR in unprecedented ways,” and thus “shone a bright light on just how ill-prepared and vulnerable the global community remains” (World Health Organization, 2016a). For other observers, however, it was not clear that WHO was up to the task of improving the preparedness of the global health community, given its failures in the Ebola response. At this point a different organizational actor entered the picture, the “Global Health Security Agenda,” which had initially been launched by the US Centers for Disease Control (CDC) just before the Ebola epidemic. In announcing its launch in early 2014, CDC Director Thomas Frieden explained, to a domestic American audience, why it made sense for the US to lead such a “global” initiative, emphasizing interdependence and shared vulnerability: “US national health security depends on global health security, because a threat anywhere is a threat everywhere” (Frieden, 2014). The initial US$ 40 million investment was geared to help countries around the world “establish minimum capabilities” as outlined in the 2005 International Health Regulations. Two years later, after the Ebola epidemic — in response to the perception of a failed global response — the US announced a massive infusion of new resources into the Global Health Security Agenda, pledging one billion dollars to assist in implementing the IHR core capacities in poor countries. In his November 2016 executive order on “Advancing the Global Health Security Agenda to Achieve a World Safe and Secure from Infectious Disease Threats,” President Barack Obama articulated the rationale for the US investment in global health security, an explanation that echoed epidemiologist David Heymann’s argument from a decade before: “No single nation can be prepared if other nations remain unprepared to counter biological threats,” said the President (White House, 2016). In other words, insofar as the core capacities for outbreak detection and response had not been implemented in countries at risk of emerging disease outbreaks, the US remained vulnerable to the spread of a novel and deadly pathogen via global circulatory networks. “Health security” could not be limited to a national project.
While it was cast as a “multi-country initiative,” the funding and organizational impetus for the Global Health Security Agenda came from the US government — specifically, the Centers for Disease Control and the US Agency for International Development — which considered WHO, given its limited resources and constrained jurisdiction, to be ineffectual in enforcing the compliance of “at-risk” countries with their obligations under the revised International Health Regulations. In seeking to implement the key goals of the “global health security” project — specifically, the global extension of core capabilities for detecting and rapidly responding to emerging diseases, which WHO had failed to achieve in the decade since the passage of the new regulations — the Global Health Security Agenda can be seen as an attempt to bypass the bureaucratically hidebound and chronically underfunded WHO. By 2016, the initiative had received commitments from the US and other G7 nations to support core capacity development in over 60 countries.
The goal of the Global Health Security Agenda (GHSA), according to President Obama’s executive order, was “to accelerate partner countries’ measurable capabilities to achieve specific targets to prevent, detect, and respond to infectious disease threats […] whether naturally occurring, deliberate, or accidental” (White House, 2016). We can see, in this language of measurement and targets, the centrality of practices of technical assessment to GHSA’s vision for advancing a global condition of health security. A report on the program explained its collaborative process for strengthening a given country’s “capability for health security” (Global Health Security Agenda, 2015). If the country participated in the assessment process and developed a plan for capacity building, it would be eligible for funding and training support from “partners” — typically the US Centers for Disease Control. The resulting “gap analysis” — that is, the assessment of the gap between the country’s current and its needed health security capabilities — led to the formulation of “action package targets,” which would then guide the country’s work of self-rectification. In 2016, GHSA and WHO developed a “Joint External Evaluation” tool to be used in the evaluation of a country’s “capacity to prevent, detect, and rapidly respond to public health threats” (World Health Organization, 2016b, p. 2). The evaluation process would take place in two stages: first, a self-assessment by the national government, and then an external assessment conducted by a Joint External Evaluation team, which consisted of experts from WHO, the World Organization for Animal Health, INTERPOL, and other organizations. After a five-day presentation covering nineteen different technical areas, the Joint External Evaluation Team would assign a score for the country’s capacity in each of these areas. Countries under evaluation were also encouraged to hold simulation exercises as a means of critical assessment.
Within the Joint External Evaluation process, the key measuring device for generating knowledge about a given country’s condition of health security was the “indicator.” An indicator, as historian of science Ted Porter notes, is a device used to point to an abstract entity — such as the national economy — that cannot be easily grasped through direct measurement. In place of the thing of interest itself, an indicator measures “something whose movements show a consistent relation to that thing.” As the entity whose condition is to be assessed by GHSA and WHO, a country’s “health security” is an elusive object, not least because it is supposed to operate on an event that has not yet occurred. But in building an index to assess this entity one need not inquire too deeply into the thing being measured, Porter suggests: “Since its purpose is merely to indicate as a guide to action, ease of measurement is preferred to meaning or depth.” (2015, p. 38)
In their analysis of the central role of indicators in multiple domains of contemporary global governance, anthropologists Richard Rottenberg and Sally Engle Merry observe that indicators serve as “a globally circulating knowledge technology that can be used to quantify, compare and rank virtually any complex field of human affairs.” (2015, p. 3)15 Such quantitative knowledge is of use to donors, multilateral agencies, and others who are invested in comparing and managing the conditions of collective life across countries. What distinguishes comparative evaluation in the field of “global health security” from areas such as international development is that it seeks to assess the condition of a system for responding to a potential future event. To approach this “present future”, the assessment tool draws on elements of the preparedness kit described earlier, breaking down the problem of health security into lengthy tables of specific areas of government action.16 The Joint External Evaluation (JEE) instrument is a 92-page document composed almost entirely of tables of indicators — and is the precursor to the Global Health Security Index. The nineteen technical areas covered by the JEE instrument are arranged according to the three broad rubrics of “prevent,” “detect,” and “respond.” Within a given indicator table, the horizontal axis provides a checklist of the capabilities that will be required in the event of the onset of a novel and dangerous pathogen. The vertical axis, meanwhile, consists of a color-coded scheme that enables the evaluator to grade a country according to its capacity level, along a spectrum ranging from “no capacity” to “sustainable capacity.” One set of JEE indicators concerns the country’s capacity to conduct real-time disease surveillance. Another area covers the prevention of zoonotic disease emergence: here a country requires the right surveillance systems, an adequate workforce, and so on. A third example — from the “respond” rubric — includes two indicators of a country’s condition of health security: does it have in place an emergency response plan? Have risks and resources been mapped?
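The tabular structure of the JEE instrument lends itself to a simple data model. The sketch below is a hypothetical illustration, not the JEE’s own format or data: it shows how rubrics, technical areas, and indicator grades along the spectrum from “no capacity” to “sustainable capacity” might be represented and summarized. The intermediate level labels, the area names, and the grades are all assumptions made for the sake of the example.

```python
# An assumed representation of a JEE-style indicator table: technical areas are
# grouped under the three rubrics, and each indicator is graded on an ordinal
# scale. Labels, areas, and grades are illustrative, not from any real evaluation.
CAPACITY_LEVELS = {
    1: "no capacity",
    2: "limited capacity",
    3: "developed capacity",
    4: "demonstrated capacity",
    5: "sustainable capacity",
}

evaluation = {
    "respond": {
        "emergency preparedness": {
            "emergency response plan in place": 3,
            "risks and resources mapped": 2,
        },
    },
    "detect": {
        "real-time surveillance": {
            "indicator- and event-based surveillance systems": 4,
        },
    },
}

# Summarize each rubric as the mean of its indicator grades.
for rubric, areas in evaluation.items():
    grades = [grade for indicators in areas.values() for grade in indicators.values()]
    mean_grade = sum(grades) / len(grades)
    print(f"{rubric}: mean grade {mean_grade:.1f} ({CAPACITY_LEVELS[round(mean_grade)]})")
```

The point of such a representation is precisely the one Porter identifies: the grades are easy to assign and to aggregate, whether or not they capture the depth of the thing being measured.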
The conceit of the Joint External Evaluation process, as developed by the Global Health Security Agenda, was that the practice of collaborative assessment would lead to the formulation of a national plan to implement the “core capacity” requirements laid out in the 2005 IHR revision, and — with financial and technical assistance provided via GHSA and its partners — would induce countries to voluntarily comply with their IHR obligations. In many ways, GHSA resembles contemporary development-oriented approaches — the use of an index to measure progress, the role of cosmopolitan technical advisors, the lure of foreign aid tethered to the production of evidence of improvement — but the kinds of health capacities being supported by GHSA are distinct from those that a development-oriented approach would seek to measure and improve.
Here it is useful to contrast the aims and techniques of global health security with those of typical international development projects. In discussions of why West African countries had proved so vulnerable to the 2014 Ebola epidemic, a recurring theme was the lack of “basic public health infrastructure” in these countries.17 One might, then, imagine that the “core capacities” requirement in IHR would seek to directly address this deficiency in basic health infrastructure. Within the framework of international development efforts, one might think of policies such as training more nurses and doctors, building community health clinics, improving access to preventive care, or ensuring the availability of essential medicines. But in actuality, the “core capacities” measured by the Joint External Evaluation tool focus on a different set of functions from those of classical public health. For IHR, the key objects and techniques of public health infrastructure are redefined: the concept of “core capacities” refers not to the prevention and treatment of common maladies that are prevalent in a given national population — such as infant diarrheal disease, malaria, heart disease, or alcoholism — but rather to the detection and rapid containment of possible future outbreaks of novel pathogens that threaten to spread globally, such as a mutant form of H5N1, Ebola, or a novel coronavirus. Thus, the IHR core capacities embody a distinctive form of public health, oriented to an event that might or might not happen.
In this sense, the basic function of the 2005 International Health Regulations — and the design of initiatives, such as GHSA, that seek to build the core capacities that IHR compliance requires of WHO member states — is not to care for the health of national populations per se, but rather to prevent the spread of novel disease entities across international borders while at the same time ensuring the ongoing circulation of goods through global networks. This was the objective of the health technocrats in Atlanta and Geneva who developed the vision — and technical practices — underlying global health security.
5 Conclusion
We can now return to the discussion, introduced at the outset, of the significance of the findings of the 2019 Global Health Security Index (GHSI) in comparing national responses to the coronavirus pandemic. Recall that the index was generated by two Washington, DC-based think tanks, the Johns Hopkins Center for Health Security and the Nuclear Threat Initiative (NTI). The index project was spearheaded by Beth Cameron, NTI’s vice president for global biological policy, who had been senior director for global health security and biodefense within the Obama administration’s National Security Council, where she was “instrumental in developing and launching the Global Health Security Agenda.”18 In other words, GHSI was the post-2016 continuation, now based outside of the US government in the world of Washington, DC think tanks, of the Obama administration’s global health security project.
The GHSI thus grew out of the effort, described above, to develop and implement a system of indicators that would make it possible to assess and target interventions into pandemic preparedness at the level of the individual nation-state; and in turn, to generate a global space of health security by ensuring national compliance with the “core capacity” requirements of the International Health Regulations. The GHSI categories were similar to those of the Joint External Evaluation (JEE) tool, now expanded to six categories of measurement: in addition to the JEE categories of “prevention,” “detection and reporting,” and “rapid response,” GHSI added “health system,” “compliance with international norms,” and “risk environment.” In comparison to its predecessor, GHSI increased the total number of technical areas to be measured (from 19 to 34) and claimed to provide a more objective method of evaluation, relying less on individual countries’ self-assessments and more on a body of external experts. But its object and its method were the same.
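As with its predecessor, the index’s headline result is a single national score and rank derived from many component measurements. The sketch below shows one way such an aggregation could work: a weighted average of six category scores, each on a 0–100 scale, is collapsed into an overall score that drives the ranking. The category weights and country scores are placeholder assumptions, not the GHSI’s published values.

```python
# Hypothetical category weights summing to 1; the actual GHSI weighting is not
# reproduced here.
CATEGORY_WEIGHTS = {
    "prevention": 0.15,
    "detection and reporting": 0.20,
    "rapid response": 0.20,
    "health system": 0.15,
    "compliance with international norms": 0.15,
    "risk environment": 0.15,
}
assert abs(sum(CATEGORY_WEIGHTS.values()) - 1.0) < 1e-9

def overall_score(category_scores):
    """Weighted average of category scores, each on a 0-100 scale."""
    return sum(CATEGORY_WEIGHTS[cat] * score for cat, score in category_scores.items())

# Hypothetical category scores for two countries, for illustration only.
countries = {
    "Country A": {cat: 80.0 for cat in CATEGORY_WEIGHTS},
    "Country B": {cat: 55.0 for cat in CATEGORY_WEIGHTS},
}

ranked = sorted(countries, key=lambda c: overall_score(countries[c]), reverse=True)
for rank, name in enumerate(ranked, start=1):
    print(rank, name, round(overall_score(countries[name]), 1))
```

Whatever the precise weighting, the effect is the same: a heterogeneous set of judgments about capabilities is condensed into one number per country, which is what allowed the index to declare the United States “rated No. 1.”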
The finding of the 2019 GHSI assessment, that the United States ranked highest in the world in national health security, was in a way unsurprising. As we have seen, US biosecurity and global health initiatives of the early 2000s were the initial source of the imperative to consider the future of disease incidence in terms of a condition of “national preparedness,” as well as the source of the tools that were invented to measure this condition. What came as a surprise to many, however, was how poorly the US in fact responded when an actual pandemic occurred, given its high ranking by the GHSI. As Manjari Mahajan has noted, comparing US mortality rates in the first year of the coronavirus pandemic to those of other countries, “It is striking how little correlation there is between countries’ preparedness rankings on the GHS Index and the actual experiences with Covid-19” (2021, p. 204). She points out that the key factors in a country’s success in responding to the pandemic were very different from those emphasized by the index. Such characteristics as state capacity, quality of leadership, coordination among different levels of government, and public health infrastructure at the community level proved more critical than the specific technical capacities measured by GHSI. Moreover, the very assumption that it is possible to come up with a standard way of measuring “health security” was belied by variation in the basis for successful response across different countries, from Germany to South Korea.
What, then, are we to make of the juxtaposition between a given country’s ranking in the GHSI and its performance in responding to an actual pandemic? We can see that “health security,” as measured by GHSI, involved a narrowly circumscribed set of capacities designed with a particular scenario in mind: a future situation — perhaps like SARS (2002) — in which the technical ability to detect and contain the emergence of a novel pathogen at its early stages would make it possible to manage a future outbreak. International health experts assumed that if such capacities — already present in the United States — could be implemented worldwide, a future catastrophe could be avoided. Covid-19, however, did not follow the experts’ script. Once the disease had spread rapidly and could not be contained, the set of “core capacities” initially elaborated in the revised International Health Regulations proved insufficient to deal with the complex social, economic, and biomedical dimensions of an actual pandemic. Perhaps, then, GHSI did accurately measure the relative “health security” of each country in relation to its scenario of a future disease emergency — but its definition of health security failed to account for the realities of what eventually occurred.
References
Alder, K. (1998). Making Things the Same: Representation, Tolerance, and the End of the Ancien Régime in France. Social Studies of Science, 28(4), 499–545. https://doi.org/10.1177/030631298028004001
Alltucker, K. & Hauck, G. (2020). Trump Addressed the Nation on Coronavirus. We Checked the Facts. USA Today, 26 February. https://eu.usatoday.com/story/news/nation/2020/02/26/coronavirus-trump-addresses-nation-amid-first-case-community-spread/4883728002/
Center for Health Security and Nuclear Threat Initiative. (2019). Global Health Security Index. https://www.nti.org/about/programs-projects/project/global-health-security-index/
Collier, S.J. & Lakoff, A. (2021). The Government of Emergency: Vital Systems, Expertise, and the Politics of Security. Princeton: Princeton University Press. https://doi.org/10.1515/9780691228884
Collier, S.J. & Ong, A. (2005). Global Assemblages, Anthropological Problems. In A. Ong & S.J. Collier (Eds.), Global Assemblages: Technology, Politics, and Ethics as Anthropological Problems. Malden: Blackwell.
De Goede, M. & Sullivan, G. (2016). The Politics of Security Lists. Environment and Planning D – Society and Space, 34(1), 67–88. https://doi.org/10.1177/0263775815599309
Department of Homeland Security. (2005). Interim National Preparedness Goal. https://www.hsdl.org/?view&did=455391
Ferguson, J. (1994). The Anti-Politics Machine: Development, Depoliticization, and Bureaucratic Power in Lesotho. Minneapolis: University of Minnesota Press.
Fidler, D. (2005). From International Sanitary Regulations to Global Health Security: The New International Health Regulations. Chinese Journal of International Law, 4(2), 325–392. https://doi.org/10.1093/chinesejil/jmi029
Frieden, T. (2014). Why Global Health Security is Imperative. The Atlantic, 13 February. https://www.theatlantic.com/health/archive/2014/02/why-global-health-security-is-imperative/283765/
Global Health Security Agenda. (2015). General Presentation. https://www.slideshare.net/stmslide/ghsa-july2015-final
Gostin, L.O. (2014). Ebola: Towards an International Health Systems Fund. The Lancet, 384(9951), e49–e51. https://doi.org/10.1016/S0140-6736(14)61345-3
Harris, I. (1958). Lessons Learned from Operations Alert 1955–1957. Lecture to the Industrial College of the Armed Forces, Washington DC, 30 April, 1958.
Heymann, D.L. & Rodier, G. (2004). Global Surveillance, National Surveillance, and SARS. Emerging Infectious Diseases, 10(2), 173–175. https://doi.org/10.3201/eid1002.031038
Jasanoff, S., Hilgartner, S., Hurlbut, J.B., Özgöde, O., & Rayzberg, M. (2021). Comparative Covid Response: Crisis, Knowledge, Politics: Interim Report, January. Harvard Kennedy School. https://www.ingsa.org/covidtag/covid-19-commentary/jasanoff-schmidt/
King, N. (2002). Security, Disease, Commerce: Ideologies of Postcolonial Global Health. Social Studies of Science, 32(5-6), 763–789. https://doi.org/10.1177/030631270203200507
Kristof, N. (2020). America and the Virus: A Colossal Failure of Leadership. New York Times, 22 October. https://www.nytimes.com/2020/10/22/opinion/sunday/coronavirus-united-states.html
Lakoff, A. (2017). Unprepared: Global Health in a Time of Emergency. Berkeley: University of California Press. https://doi.org/10.1525/9780520968417
Li, T.M. (2007). The Will to Improve: Governmentality, Development, and the Practice of Politics. Durham: Duke University Press. https://doi.org/10.1515/9780822389781
Luhmann, N. (1998). Observations on Modernity (W. Whobrey, Trans.). Palo Alto: Stanford University Press. (Original work published 1992)
Mahajan, M. (2021). Casualties of Preparedness: The Global Health Security Index and Covid-19. International Journal of Law in Context, 17(2), 204–214. https://doi.org/10.1017/S1744552321000288
Nature. (2014). Editorial: First Response, Revisited. 513, 459. https://doi.org/10.1038/513459a
Opitz, S. (2015). Regulating Epidemic Space: The Nomos of Global Circulation. Journal of International Relations and Development, 19(2), 263–284. https://doi.org/10.1057/jird.2014.30
Porter, T.M. (2015). The Flight of the Indicator. In R. Rottenberg, S.E. Merry, S.-J. Park, & J. Mugler (Eds.), The World of Indicators: The Making of Governmental Knowledge through Quantification (pp. 34–55). Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9781316091265.002
Rabinow, P. (2003). Anthropos Today: Reflections on Modern Equipment. Princeton: Princeton University Press.
Rottenberg, R. & Merry, S.E. (2015). A World of Indicators. In R. Rottenberg, S.E. Merry, S.-J. Park, & J. Mugler (Eds.), The World of Indicators: The Making of Governmental Knowledge through Quantification (pp. 1–33). Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9781316091265
Samimian-Darash, L. & Rabinow, P. (Eds.). (2015). Modes of Uncertainty: Anthropological Cases. Chicago: University of Chicago Press. https://doi.org/10.7208/chicago/9780226257242.001.0001
United Nations. (2016). Protecting Humanity from Future Health Crises: Report of the High-level Panel on the Global Response to Health Crises. https://digitallibrary.un.org/record/822489
US Office of Defense Mobilization. (1957). Mobilization Plan D-Minus.
US Office of Emergency Planning. (1964). The National Plan for Emergency Preparedness.
Washington Post. (2020). Opinion: The U.S. Was Supposed to Be Equipped to Handle a Pandemic. So What Went Wrong? The Washington Post, 26 December. https://www.washingtonpost.com/opinions/the-us-was-supposed-to-be-equipped-to-handle-a-pandemic-so-what-went-wrong/2020/12/24/021ec42c-453f-11eb-975c-d17b8815a66d_story.html
White House. (1957). Cabinet Paper: Operation Alert 1957. The White House, 20 May. https://www.eisenhowerlibrary.gov/sites/default/files/finding-aids/pdf/who-oss/cabinet-minutes-series.pdf
White House. (2016). Executive Order: Advancing the Global Health Security Agenda to Achieve a World Safe and Secure from Infectious Disease Threats. The White House, 4 November. https://www.govinfo.gov/app/details/DCPD-201600752
White House Homeland Security Council. (2005). National Strategy for Pandemic Influenza. https://www.cdc.gov/flu/pandemic-resources/pdf/pandemic-influenza-strategy-2005.pdf
World Health Organization. (2005). International Health Regulations. https://www.who.int/publications/i/item/9789241580496
World Health Organization. (2007). World Health Report 2007: A Safer Future: Global Public Health Security in the 21st Century. https://apps.who.int/iris/handle/10665/43713
World Health Organization. (2013). IHR Core Capacity Monitoring Framework: Checklist and Indicators for Monitoring Progress in the Development of IHR Core Capacities. https://extranet.who.int/sph/ihr-core-capacity-monitoring-framework-checklist-and-indicators-monitoring-progress-development-ihr
World Health Organization. (2016a). Implementation of the International Health Regulations (2005): Report of the Review Committee on the Role of the International Health Regulations (2005) in the Ebola Outbreak and Response: Report by the Director-General 13 May. https://apps.who.int/iris/handle/10665/252676
World Health Organization. (2016b). Joint External Evaluation Tool: International Health Regulations (2005). https://apps.who.int/iris/bitstream/handle/10665/204368/9789241510172_eng.pdf;sequence=1
Yong, E. (2020). How the Pandemic Defeated America. The Atlantic, 4 August. https://www.theatlantic.com/magazine/archive/2020/09/coronavirus-american-failure/614191/
Moreover, as a prominent journalist put it, summarizing the position of a range of public health experts, “almost everything that went wrong with America’s response to the pandemic was predictable and preventable” (Yong, 2020).↩︎
The Washington Post similarly noted that “[w]hen a group of experts examined 195 countries last year on how well prepared they were for an outbreak of infectious disease, the United States ranked best in the world” (2020).↩︎
As Manjari Mahajan has argued in a perceptive critique of the assumptions underlying this system of indicators, “we need to interrogate the prevailing paradigm of global health security that informs instruments such as the GHS Index.” (2021, p. 205)↩︎
The World Bank reports that its “World Development Indicators is a compilation of relevant, high-quality, and internationally comparable statistics about global development and the fight against poverty. The database contains 1,400 time series indicators for 217 economies and more than 40 country groups, with data for many indicators going back more than 50 years.” See: https://datatopics.worldbank.org/world-development-indicators/↩︎
See http://hdr.undp.org/en/content/human-development-index-hdi.↩︎
See https://datatopics.worldbank.org/world-development-indicators/.↩︎
See https://unstats.un.org/sdgs/indicators/indicators-list/.↩︎
For critical analyses of this impulse toward measurement and targeted improvement in the context of development, see Ferguson (1994) and Li (2007).↩︎
While seemingly banal, lists occupy a privileged place in a number of contemporary security practices. As De Goede & Sullivan (2016) argue, such lists materialize the categories they purport to describe, and enact novel forms of knowledge and jurisdiction. As part of a preparedness kit, the list of emergency action steps performs this work toward a particular end: to produce knowledge about future requirements in relation to an event that may or may not occur.↩︎
As the plan put it: “The creation of emergency agencies and of a special organizational structure for the Executive Branch of the Federal Government in time of national emergency is required […] to provide [the] governmental machinery best suited to meet the unusual demands of such [a] situation.” (Office of Defense Mobilization, 1957)↩︎
As Innis Harris of ODM’s Office of Plans and Readiness put it: “The lessons learned from these exercises are in substance the sum total of our experience in mobilization planning to cope with any emergency involving war and general war — but principal emphasis has been on situations involving a nuclear attack on the continental United States” (1958).↩︎
The 1964 plan was assembled by a successor organization to ODM, the Office of Emergency Planning.↩︎
See, for example, Department of Homeland Security (2005).↩︎
“The eight core capacities,” explained the monitoring framework, “are the result of an interpretation, by a technical group of experts, of the IHR 2005 capacity requirements” (World Health Organization, 2013, p. 14).↩︎
See Rottenberg & Merry (2015).↩︎
The distinction between the “present future” and “future presents” is made in Luhmann (1998); also see the cases presented in Samimian-Darash & Rabinow (2015).↩︎
Writing in The Lancet, for instance, Lawrence O. Gostin argued: “The countries most affected by Ebola […] rank lowest in global development, lacking essential public health infrastructure.” (2014, p. e49)↩︎
The quotation comes from Cameron’s online biography, available here: https://www.nti.org/about/leadership-and-staff/beth-cameron/. Note that Cameron was appointed Senior Director for Global Health Security and Biodefense in the Biden National Security Council in January 2021.↩︎