The US public health system’s preparedness in the early 1990s

I’ve been reading about how the US public health system has been slower to diagnose coronavirus cases than its counterparts in other countries. This reminded me of the results of an early-1990s audit of the US public health system (described in Laurie Garrett’s _The Coming Plague_), which found much room for improvement. Perhaps it is a guide to what could be done better in the future, although I’m not sure whether similar room for improvement exists today, whether infectious disease writer Laurie Garrett’s diagnosis of the barriers to improvement at the time was correct (“two decades of government belt tightening, coupled with decreased local and state revenues due to both taxation reductions and severe recessions, had rendered most local and regional disease reporting systems horribly deficient, often completely unreliable”), or whether that diagnosis still holds today. In any case, it is an interesting historical example:

“In response to [the early 1990s] Institute of Medicine’s report on emerging diseases, the CDC gave Dr. Ruth Berkelman the task of formulating plans for surveillance and rapid response to emerging diseases. For a year and a half Berkelman coordinated an exhaustive effort, identifying weaknesses in CDC systems and outlining a new, improved system of disease surveillance and response.
Berkelman and her collaborators discovered a long list of serious weaknesses and flaws in the CDC’s domestic surveillance system and determined that international monitoring was so haphazard as to be nonexistent. For example, the CDC for the first time in 1990 attempted to keep track of domestic disease outbreaks using a computerized reporting system linking the federal agency to four state health departments. Over a six-month period 233 communicable disease outbreaks were reported. The project revealed two disturbing findings: no federal or state agency routinely kept track of disease outbreaks of any kind, and once the pilot project was underway the ability of the target states to survey such events varied radically. Vermont, for example, reported outbreaks at a rate of 14.1 per one million residents versus Mississippi’s rate of 0.8 per million.[27]
Minnesota state epidemiologist Dr. Michael Osterholm assisted the CDC’s efforts by surveying the policies and scientific capabilities of all fifty state health departments. He discovered that the tremendous variations in outbreak and disease reports reflected not differences in the actual incidence of such occurrences in the respective states, but enormous discrepancies in the policies and capabilities of the health departments.[28] In the United States all disease surveillance began at the local level, working its way upward through state capitals and, eventually, to CDC headquarters in Atlanta. If any link in the municipal-to-federal chain was weak, the entire system was compromised. At the least, local weaknesses could lead to a skewed misperception of where problems lay: states with strong reporting networks would appear to be more disease-ridden than those that simply didn’t monitor or report any outbreaks. At the extreme, however, the situation could be dangerous, as genuine outbreaks, even deaths, were overlooked.
What Osterholm and Berkelman discovered was that nearly two decades of government belt tightening, coupled with decreased local and state revenues due to both taxation reductions and severe recessions, had rendered most local and regional disease reporting systems horribly deficient, often completely unreliable. Deaths were going unnoticed. Contagious outbreaks were ignored. Few states really knew what was transpiring in their respective microbial worlds.
“A survey of public health agencies conducted in all states in 1993 documented that only skeletal staff exists in many state and local health departments to conduct surveillance for most infectious diseases,” the research team concluded. The situation was so bad that even diseases which physicians and hospitals were required by law to report to their state agencies, and the states were, in turn, legally obligated to report to CDC, were going unrecorded. AIDS surveillance, which by 1990 was the best-funded and most assiduously followed of all CDC disease programs, was at any given time underreported by a minimum of 20 percent. That being the case, officials could only guess about the real incidences in the fifty states of such ailments as penicillin-resistant gonorrhea, vancomycin-resistant enterococcus, E. coli O157 food poisoning, multiply drug-resistant tuberculosis, or Lyme disease. As more disease crises cropped up, such as various antibiotic-resistant bacterial diseases, or new types of epidemic hepatitis, the beleaguered state and local health agencies loudly protested CDC proposals to expand the mandatory disease reporting list—they just couldn’t keep up.
Osterholm closely surveyed twenty-three state health department laboratories and found that all but one had had a hiring freeze in place since 1992 or earlier. Nearly half of the state labs had begun contracting their work out to private companies, and lacked government personnel to monitor the quality of the work.[29] In a dozen states there was no qualified scientist on staff to monitor food safety, despite the enormous surge in E. coli and Salmonella outbreaks that occurred nationwide during the 1980s and early 1990s.
At the international level the situation was even worse. The CDC’s Jim LeDuc, working out of WHO headquarters in Geneva, in 1993 surveyed the thirty-four disease detection laboratories worldwide that were supposed to alert the global medical community to outbreaks of dangerous viral diseases. (There was no similar laboratory network set up to follow bacterial outbreaks or parasitic disease trends.) He discovered shocking insufficiencies in the laboratories’ skills, equipment, and general capabilities. Only half the labs could reliably diagnose yellow fever; the 1993 Kenya epidemic undoubtedly got out of control because of that regional laboratory’s failure to diagnose the cause of the outbreak. For other microbes the labs were even less prepared: 53 percent were unable to diagnose Japanese encephalitis; 56 percent couldn’t properly identify hantaviruses; 59 percent failed to diagnose Rift Valley fever virus; 82 percent missed California encephalitis. For the less common hemorrhagic disease-producing microbes, such as Ebola, Marburg, Lassa, and Machupo, virtually no labs had the necessary biological reagents to even try to conduct diagnostic tests.”