My favorite projects 3/29/20 (after envisioning covid19 implications for pandemic preparedness and climate change)

I just wrapped up envisioning the implications of covid19 for my two areas of interest (pandemic preparedness and climate change), and read others’ ideas on the subject as well. I collected my favorite ideas in a visioning doc, which has helped me prioritize project ideas. Two project ideas from the doc:

  1. Monitor what happens to the airline and oil industries even if covid19 doesn’t last long, and what this means for climate.
  2. Monitor whether public health infrastructures receive investment during/post-covid19.

If you’re curious to see what’s in either doc, contact me!

Life in the US when infectious disease was more prevalent, pre-antibiotics (early 1900s)

This depiction is from Laurie Garrett, _The Coming Plague_. It reminds me that there are many possibilities for how life ends up looking in the US during and after covid19, which are between the extremes of total lockdown and a complete return to the 2010s.

“By the time my Uncle Bernard started his medical studies at the University of Chicago in 1932 he had already witnessed the great influenza pandemic of 1918–19. He was seven years old when he counted the funeral hearses that made their way down the streets of Baltimore. Three years earlier Bernard’s father had nearly died of typhoid fever, acquired in downtown Baltimore. And shortly after, his grandfather died of tuberculosis.
In his twelfth year Bernard got what was called “summer sickness,” spending the long, hot Maryland days lying about the house, “acting lazy,” as his mother put it. It wasn’t until 1938, when he volunteered as an X-ray guinea pig during his internship at the University of California’s medical school in San Francisco, that Uncle Bernard discovered that the “summer sickness” was actually tuberculosis. He had no doubt acquired consumption from his grandfather, survived the disease, but for the rest of his life had telltale scars in his lungs that were revealed by chest X rays.
It seemed that everybody had TB in those days. When young Bernard Silber was struggling his way through medical studies in Chicago, incoming nursing students were routinely tested for antibodies against TB. The women who came from rural areas always tested negative for TB when they started their studies. With equal certainty, they all tested TB-positive after a year on the urban hospital wards. Any ailment in those days could light up a latent TB infection, and tuberculosis sanitariums were overflowing. Treatment was pretty much limited to bed rest and a variety of hotly debated diets, exercise regimens, fresh air, and extraordinary pneumothorax surgical procedures.
In 1939 Uncle Bernard started a two-year residency in medicine at Los Angeles County Hospital, where he met my Aunt Bernice, a medical social worker. Bernice limped and was deaf in one ear, the results of a childhood bacterial infection. When she was nine, the bacteria grew in her ear, eventually infecting the mastoid bone. A complication of that was osteomyelitis, which left her right leg about an inch shorter than her left, forcing Bernice to walk knock-kneed to keep her balance. Shortly after they met, Bernard got a nasty pneumococcal infection and, because he was a physician, received state-of-the-art treatment: tender loving care and oxygen. For a month he languished as a patient in Los Angeles County Hospital hoping he would be among the 60 percent of Americans who, in the days before antibiotics, survived bacterial pneumonia.
Bacterial infections were both common and very serious before 1944, when the first antibiotic drugs became available. My Uncle Bernard could diagnose scarlet fever, pneumococcal pneumonia, rheumatic fever, whooping cough, diphtheria, or tuberculosis in a matter of minutes with little or no laboratory support. Doctors had to know how to work quickly because these infections could escalate rapidly. Besides, there wasn’t much the lab could tell a physician in 1940 that a well-trained, observant doctor couldn’t determine independently.”

Note to self: consult experts, Dworkin case studies and low-COVID19 countries’ playbooks to control a respiratory epidemic

tl;dr: To control a respiratory epidemic, especially if I’m ever in a public health agency, or if a new epidemic is emerging and I have the urge to write [a widely read Medium post](https://medium.com/@tomaspueyo/coronavirus-the-hammer-and-the-dance-be9337092b56), in addition to consulting experienced experts, I would consult Chapters 18, 20 and 8 of Mark Dworkin’s _Outbreak Investigations Around the World: Case Studies in Infectious Disease Field Epidemiology_, plus the playbooks of countries that have kept COVID-19 case counts low.

There are many options for controlling a respiratory epidemic. Treatments or vaccines are options when they exist for the disease, and antimicrobial treatments may also enable antimicrobial prophylaxis. Then there are the many, _many_ versions of “separating people” and other “non-pharmaceutical” interventions (NPIs), including variations of social distancing, with which many of us have become familiar during COVID19[1]; if controlling a pandemic, I would brainstorm and source many NPIs and review the footnoted ones.

Knowing the various intervention options by name, however, would not be enough, as is clearly illustrated by Mark Dworkin’s _Outbreak Investigations Around the World: Case Studies in Infectious Disease Field Epidemiology_. The book is twenty chapters; each chapter is the story of a real outbreak, written by a public health professional who responded to that outbreak at the time. The chapters I read on respiratory epidemics—18 (“Whipping Whooping Cough in Rock Island County, Illinois”), 20 (“A Mumps Epidemic, Iowa, 2006”) and 8 (“Measles Among Religiously Exempt Persons”)—show that planning in advance of the epidemic (e.g. having relationships with various media and political entities; having flexible stockpiles, manufacturing capacity and personnel for different interventions) can be critical. Moreover, they illustrate that each intervention, along with decisions of whether and how to enforce it, and among whom to intervene, has considerations around complexity, cost, timing and payoff. These factors should affect decisions about the scheduling of interventions, including which interventions to implement first and to what extent to implement them (my guess is that they should all be implemented as far as they can!). This is why I would refer to these chapters as part of my effort to control a respiratory epidemic. Although I have not had time to find them yet, I would also refer to the playbooks of the countries with low COVID19 incidence.

If you want a sneak peek of the chapters, here are some of the tidbits on complexity, cost and payoff of an intervention that I found most interesting, with direct quotes from Dworkin:

  1. By complexity, I refer to the number of moving parts that must be accounted for.
    1. Anyone implementing an intervention that relies upon diagnostic tests must additionally learn about the sensitivity/specificity of the test, the supply chain of appropriate materials for the test, the time to develop the test, etc., and account for these in implementing that intervention (as illustrated by COVID19). They must think about advance communication to diagnostic labs that their workload for a particular test is about to increase 10x or 100x.
    2. Patricia Quinlisk writes about factors in the vaccine intervention in Chapter 20’s Iowan mumps epidemic: “there were issues of vaccine supply, payment of vaccine purchased, distribution of the vaccine to each county, determination of the amount needed by each county, getting the word out about the program, and efforts to have clinics in the midst of the highest risk groups (such as on college campuses). A special committee meeting of legislators with the Director of the Iowa Department of Public Health was needed to release the emergency funds required to buy the vaccine. (For the best uptake of vaccine, it had to be offered free of charge; thus, public funding for the vaccine had to be obtained.) This whole process took about 10 days from the decision to do a statewide mass vaccination program, purchase the vaccine, have it shipped to Iowa, distribute it to the local health departments, and have vaccination clinics. In the end, a total of 37,500 doses of MMR vaccine were purchased and made available. This was no small task… We found it difficult to get college students to go and get vaccinated, and in the end only about 10,000 doses of vaccine were used in Iowa’s 99 counties.” Another factor is that some people may have already been vaccinated; one would have to consider whether it’s worth it to review people’s vaccination records to determine who does not need to be vaccinated again.
  2. By cost, I refer to the amount of work that must be done, as well as monetary cost.
    1. A stay-at-home order is simple to describe and is not costly if not enforced, but can be costly if enforced (i.e. consider the number of police required to keep everyone in their homes!).
    2. Educating the public and particular communities (e.g. healthcare workers) on preventive action seems to be good bang-per-buck to me. There are many channels through which to educate, as in a PR campaign: keeping the public health agency website up to date, getting institutions like colleges to educate their constituents, being in the media in all its channels, etc.
    3. The workload for contact tracing, which is instrumental to many interventions, can increase exponentially, exhausting the personnel of a public health department.
    4. Case finding and confirmation, also instrumental to many interventions and often hand-in-hand with contact tracing, can require diagnosing individuals over the phone one-by-one, which scales linearly with the number of cases. This and contact tracing are instrumental to “respiratory precautions to prevent droplet spread from cases, treatment of cases, and prophylactic treatment of close contacts of the cases.”
    5. Any vaccination, treatment or prophylaxis campaign can require convincing a subject to undergo the medicine and then actually administering it. This can scale linearly with the number of people targeted.
  3. By payoff, I refer to the amount of benefit—the amount of transmission reduced—minus any costs to people’s livelihoods or basic needs.
    1. An unenforced ban/closure has lower cost but potentially lower payoff if people do not comply.
    2. Patricia Quinlisk writes about the decision to not broadly quarantine in Chapter 20’s Iowan mumps epidemic: “Quarantine for the public was not used for several reasons, unlike during a measles outbreak in Iowa 2 years earlier. Those reasons included (1) 20% to 30% of cases are asymptomatic, (2) mumps virus can be spread up to 3 days prior to symptoms, (3) some mumps cases are so mildly ill that they do not seek medical attention thus are not reported to public health, and (4) mumps is generally a mild illness. Isolation and work quarantine (i.e., restrictions on patient contact during the incubation period), however, were used for health care workers because of the risk of spread to vulnerable patients.” Asymptomatic spread makes quarantine’s payoff lower relative to the cost it inflicts on people’s freedom.
    3. It may also be useful to read [a National Academies Press review of social distancing’s effectiveness, including the effectiveness of different methods](https://www.dropbox.com/s/47erqziz3ivhr55/Social%20distancing%20effectiveness%20NAP.pdf?dl=0).

Some additional thoughts:

  1. Microbial ecology, e.g. understanding where microbes are and how they’re moving around, may be useful to coming up with ideas for new, more granular NPIs, even in respiratory epidemics.
  2. I have assumed that all the interventions above are worth doing because they follow the logic of separation (even though their payoffs may vary). Perhaps a review of these interventions would show that some are not. Evidence for this would likely come from historical case studies and/or modeling studies.

Footnotes:

  1. Options are included below. Many were sourced from [Hatchett, Mecher and Lipsitch 2007](https://www.dropbox.com/s/49p0oxqplhyxsoo/NPIs_spanish_flu_Hatchett_Lipsitch.pdf?dl=0) and [Markel et al. 2007](https://www.dropbox.com/s/g24uaggpvf2pih3/NPIs_Spanish_flu_Markel_Cetron.pdf?dl=0):
    1. isolation (separate/restrict the movement of sick people);
    2. quarantine (separate/restrict the movement of exposed people);
    3. public risk communications and emergency declarations;
    4. closures of schools and businesses;
    5. transportation and public transit restrictions;
    6. public gathering bans, or “outdoor gatherings only”;
    7. staggered business hours;
    8. shelter-in-place (which includes work from home);
    9. notifiability (legally requiring hospitals/physicians to report instances of a disease to a public health agency);
    10. no-crowding rules;
    11. releasing students from dorms, soldiers from barracks and prisoners from prisons, or preventing crowding within them;
    12. placards on the doors of the sick;
    13. face mask ordinances;
    14. delivery of goods instead of in-person shopping;
    15. hospitals set up to draw out the potentially sick and isolate them;
    16. voluntary quarantine of infected households;
    17. and more? (Brainstorm; there are many variations!)

Is it true that, worldwide, people took 4 billion flights in 2018?

**Tl;dr:** Yes, 4 billion (or that order of magnitude) seems correct to me, after checking its source against other independent sources. The size of this number shocked me, but it doesn’t change what has to be done about flying and long-distance travel with respect to climate change: reduce flying and long-distance travel, first by taking this step in our own lives and then by promoting the message to others. This is especially true of leisure/vacation trips.

Beware: this post is about to get into the nitty-gritty of data sources.

The International Air Transport Association (IATA) estimated that [nearly 4 billion “passenger journeys”](https://www.iata.org/contentassets/c81222d96c9a4e0bb4ff6ced0126f0bb/iata-annual-review-2019.pdf) were flown in 2018, where one passenger journey refers to one person taking a one-way trip (multiple flights with layovers count as one passenger journey). It seems that this estimate is based on data given by IATA’s member airlines, which apparently include [“290 airlines or 82% of total air traffic”](https://www.iata.org/en/about/).

Isn’t this a remarkably high number? If we convert that to 2 billion round trips, that’s like saying there was 1 round trip per 3-4 people on Earth in 2018. How does that square with my intuition that the majority of people have probably never stepped on a plane? Probably because certain people fly so often? I was suspicious of the 4 billion number, so I checked the IATA report against independent sources.
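The per-capita conversion is quick to check. A sketch, where the ~7.6 billion world population figure is my assumption (it is not in the IATA report):

```python
# Sanity check: how many people per round trip does 4 billion journeys imply?
journeys = 4e9                # one-way passenger journeys in 2018, per IATA
round_trips = journeys / 2    # two one-way journeys per round trip
world_pop = 7.6e9             # approximate 2018 world population (assumed)
people_per_round_trip = world_pop / round_trips
print(people_per_round_trip)  # ~3.8, i.e. roughly 1 round trip per 3-4 people
```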

The main IATA report claim I had the time to check was this: [“the US domestic market [is] where almost 590 million passenger journeys were undertaken in 2018”](https://www.iata.org/contentassets/c81222d96c9a4e0bb4ff6ced0126f0bb/iata-annual-review-2019.pdf). “Domestic market” means that 590 million is counting only journeys that started and ended in the US.

First, I checked this against the major US airlines’ [10-K’s](https://en.wikipedia.org/wiki/Form_10-K). These are the annual reports that public companies must file with the Securities and Exchange Commission, and they contain important data on those companies’ revenues. For the airlines, they contain numbers of passengers and flights. [There are 18 airlines in the US with revenue greater than $1 billion annually](https://www.bts.gov/sites/bts.dot.gov/files/docs/explore-topics-and-geography/topics/airlines-and-airports/227546/number-332-air-carrier-groupings-2020.pdf): Southwest, United, American, Delta, JetBlue, etc. A mostly domestic airline, Southwest reports in [its 10-K](http://otp.investis.com/clients/us/southwest/SEC/sec-show.aspx?Type=html&FilingId=13882031&CIK=0000092380&Index=10000#LUV-12312019X10K_HTM_S33F2C3F2F70F5BFBB619FCBA5C426869) about 150 million passenger flights in 2019 (I took a midpoint between the reported 134,056,000 revenue passengers carried and 162,681,000 enplaned passengers). American Airlines reports 241,252 million revenue passenger miles in [its 10-K](https://americanairlines.gcs-web.com/static-files/d46a00e3-db05-4a91-af7a-fbe0fc2a7f08); by my estimate, this is equivalent to 276 million passenger flights (I scaled Southwest’s “150 million passenger flights” figure by the ratio of AA’s to Southwest’s revenue passenger miles). And so on for the rest of the airlines. One might start thinking the 590 million number is low, because 150 million + 276 million + … for several more airlines quickly adds up to more than 590 million, but (1) note that 590 million counts passenger _journeys_, which can contain multiple passenger flights as legs of a journey with layovers, while 150 million, 276 million, etc. count passenger flights; (2) I started with the largest airlines, so I expect the remaining terms in that sum to decrease quickly; (3) 590 million is only meant to count journeys starting and ending in the United States, whereas these airlines’ flights will also include those to and from international destinations; and (4) scaling AA’s and other airlines’ revenue passenger miles by the Southwest ratio might overestimate the non-Southwest passenger flight numbers if those airlines tend to be international carriers, because those flights will have more miles per flight.
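The Southwest and AA estimates above can be reproduced in a few lines. The Southwest revenue-passenger-miles figure below is an assumption (back-solved so the scaling lands near the ~276 million figure); the other inputs are the 10-K numbers quoted above:

```python
# Southwest passenger flights: midpoint of two 10-K figures.
sw_revenue_pax = 134_056_000   # revenue passengers carried (Southwest 10-K)
sw_enplaned = 162_681_000      # enplaned passengers (Southwest 10-K)
sw_flights = (sw_revenue_pax + sw_enplaned) / 2   # ~148 million ("~150M")

# AA passenger flights: scale Southwest's flights by the ratio of revenue
# passenger miles (RPM). The Southwest RPM here is ASSUMED, illustrative only.
aa_rpm = 241_252e6             # AA revenue passenger miles (AA 10-K)
sw_rpm = 131_000e6             # assumed Southwest RPM
aa_flights = sw_flights * (aa_rpm / sw_rpm)       # ~273 million ("~276M")
```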

However, it is possible that the 590 million number is too low: if we extrapolate from [an Ipsos/Gallup survey of 5,048 American adults from their online panel](http://airlines.org/wp-content/uploads/2018/02/A4A-AirTravelSurvey-20Feb2018-FINAL.pdf), in which the average adult flew 2.5 round trips in 2017, we get [253,768,092 American adults](https://www.reference.com/world-view/many-adults-live-usa-b830ecdfb6047660) * 2.5 round trips per adult * 2 passenger journeys per round trip, or approximately 1.3 billion passenger journeys by American adults. The survey found that 2/3 of these trips were domestic; extrapolating that, we get about 850 million domestic passenger journeys, which is the same order of magnitude as 590 million. It may also be useful to note that the 590 million number [is probably a difficult estimate](http://www.intervistas.com/downloads/CAIR/articles/06_jun2008_b.pdf) which guesses at travelers’ true origins and destinations given the data on the legs of their flights; the algorithm reducing passenger flights into passenger journeys might have reduced by too much.
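The extrapolation from the survey is just multiplication, using the figures quoted above:

```python
adults = 253_768_092          # US adult population (source linked above)
round_trips_per_adult = 2.5   # Ipsos/Gallup survey average for 2017
journeys = adults * round_trips_per_adult * 2   # one-way journeys, ~1.27e9
domestic_journeys = journeys * 2 / 3            # survey: ~2/3 were domestic
print(round(domestic_journeys / 1e6))           # ~846 million
```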

The Ipsos/Gallup data are consistent with my own experience. The 5,048-adult panel was asked how many round trips they took in 2017, and answered as follows, which matches what I know of my friends’ and family’s flying behaviors:

[Screenshot: Ipsos/Gallup survey results on the number of round trips flown in 2017]

How does 590 million compare with data from the Federal Aviation Administration (FAA)? The [“Annexes to the 2020 Inventory of US Greenhouse Gas Emissions and Sinks”](https://www.epa.gov/sites/production/files/2020-02/documents/us-ghg-inventory-2020-annexes.pdf) [1], in Table A-125, reports that the FAA reported 6.3-6.4 billion nautical miles of domestic commercial aviation in 2018, presumably because they have access to all flight data in the US[2]; this number is consistent with US Energy Information Administration (EIA) data on jet fuel consumption[3]. This implies 7.3 billion domestic miles (1 nautical mile = 1.15 miles). If we take the Southwest 10-K data on average “passenger haul” (which I am interpreting as passenger journey), average seats on a flight and load factor (fraction of those seats that are filled), the FAA number suggests 7.3 billion miles / 980 miles per journey * 150.9 seats on a flight * 0.835 load factor = 940 million domestic passenger journeys. That is not so far off from 850 million passenger journeys extrapolated from the Ipsos/Gallup survey, and again is the same order of magnitude as the IATA’s 590 million. The 940 million might also be inflated because it includes cargo and chartered flights.
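The FAA-based estimate, spelled out. The 6.35 billion figure is the midpoint of the reported 6.3-6.4 billion nautical miles; the Southwest-derived parameters are as described above:

```python
nautical_miles = 6.35e9       # FAA domestic commercial aircraft-miles, 2018
statute_miles = nautical_miles * 1.15   # ~7.3 billion miles
avg_haul = 980                # avg miles per passenger journey (Southwest 10-K)
seats_per_flight = 150.9      # avg seats per flight (Southwest 10-K)
load_factor = 0.835           # fraction of seats filled (Southwest 10-K)
# aircraft-miles * seats * load factor = passenger-miles; divide by avg haul:
journeys = statute_miles * seats_per_flight * load_factor / avg_haul
print(round(journeys / 1e6))  # ~939 million domestic passenger journeys
```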

Given the consistent order of magnitude from the IATA, airline 10-K, Ipsos/Gallup and FAA numbers, which themselves seem consistent with personal experience and EIA numbers, it would seem that the number of domestic one-way passenger journeys taken by Americans is in the high hundreds of millions, or that the number of domestic round trips is in the mid-hundreds of millions. Although I won’t spend the time to do such a check for other large domestic markets or international flights, the fact that the IATA 590 million number is the right order of magnitude (potentially a slight underestimate) makes me trust the IATA 4 billion number more, although I recognize that I have not evaluated the data quality on the remaining 3.4 billion passenger journeys, which could be quite different. For me, this large number underscores the urgency of reducing flying now.

Outside the US, China and perhaps India are likely increasing their flying volume substantially: [“China’s domestic market added the most passenger journeys in 2018… China[‘s domestic market] comes second [to the US’s], with 515 million, followed by India some distance back, at 116 million.”](https://www.iata.org/contentassets/c81222d96c9a4e0bb4ff6ced0126f0bb/iata-annual-review-2019.pdf) “Flying/traveling less” campaigns should start and accelerate there as well.

Footnotes:

  1. [Annex 3.3 of the Greenhouse Gas Inventory](https://www.epa.gov/sites/production/files/2020-02/documents/us-ghg-inventory-2020-annexes.pdf) also includes jet fuel burn and carbon dioxide (CO2) emissions, as well as international aviation and military aviation. This data may be of interest later for tracking greenhouse gas emissions.
  2. I infer that the FAA has access to all flight data in the US based on their ability to use it to infer jet fuel burn and CO2 emissions: [“Commercial aircraft jet fuel burn and carbon dioxide (CO2) emissions estimates were developed by the U.S. Federal Aviation Administration (FAA) using radar-informed data from the FAA Enhanced Traffic Management System (ETMS) for 2000 through 2018 as modeled with the Aviation Environmental Design Tool (AEDT). This bottom-up approach is built from modeling dynamic aircraft performance for each flight occurring within an individual calendar year. The analysis incorporates data on the aircraft type, date, flight identifier, departure time, arrival time, departure airport, arrival airport, ground delay at each airport, and real-world flight trajectories.”](https://www.epa.gov/sites/production/files/2020-02/documents/us-ghg-inventory-2020-annexes.pdf)
  3. The FAA data informing this estimate also seem consistent with EIA and military data on jet fuel burned. Table A-96 of [“Annexes to the 2020 Inventory of US Greenhouse Gas Emissions and Sinks”](https://www.epa.gov/sites/production/files/2020-02/documents/us-ghg-inventory-2020-annexes.pdf) reports that the FAA reported 13,650 million gallons of jet fuel were consumed by commercial aircraft domestically in 2018. The EIA reported 17,674 million gallons of jet fuel consumed domestically in the same year, with the remaining ~4,000 million gallons accounted for by military and other aviation jet fuel burn.

Why has nuclear power not scaled as a clean energy source for climate change?

My friend Max recently asked why nuclear power, which seemed to him to be the obvious scalable answer as a clean energy source to fight climate change, had not taken off. Here’s the email I wrote him:

“Hey Max,

It was great catching up today! Just wanted to share what I know about what’s blocking nuclear energy as a climate solution…

My main source is Jim Hansen in http://www.columbia.edu/~jeh1/mailings/2019/20191211_Fire.pdf, with _Oil, Power and War_ as an interesting history of nuclear in the late 20th century. I found Hansen’s writing very insightful and recommend reading it in full but I recognize it is quite long, so I’ve summarized the major points below with quoted passages if you’re interested. Below that is a passage of interesting history of the expansion of nuclear in the late 20th century.

**Hansen:**

1. Many environmentalists oppose nuclear power: “[W]hen I was hosted by other environmental groups, I heard comments that put coal and nuclear in the same category, both highly undesirable. Some environmentalists advocated lawsuits against nuclear power plants, with the aim to drag out construction times, to make nuclear power more expensive… Amory Lovins advises governments worldwide (Fig. 3) and is the most effective energy adviser in the world. Indeed, Amory was energy adviser to Bill Clinton, when Clinton was Governor of Arkansas. After Clinton was elected President, in his first State of the Union address in early 1993, he announced ‘We are eliminating programs that are no longer needed, such as nuclear power research and development.’ This was consistent with Lovins’ opinion… When Gore’s next book came out, the section on nuclear power looked as though it could have been written by Amory Lovins.” Another quote: “Antipathy of philanthropy toward nuclear power is understandable. Many of the principals [of philanthropic organizations] came of age in the 1970s, a time of activism against nuclear power. A view of 200,000 people cheering at an anti-nuclear rally in New York City captures the spirit of the activism that succeeded earlier protests against the Viet Nam war. Nukes were government’s latest misdeed… Environmental organizations such as National Resources Defense Council, Environmental Defense Fund, World Wildlife Fund, Friends of the Earth, and Greenpeace are almost uniformly and deeply anti-nuclear, so it is no wonder that philanthropy follows their lead… [Steve Kirsch] reported that he was given three reasons why NRDC could not change its position on nuclear power. Numbers 1 and 2 were such-and-such. Number 3: NRDC would lose a significant fraction of their major donors…  It seemed to me that [Obama] listened to Al Gore and Democrats in Congress, including John Kerry. 
They were advocates of cap-and-trade, heavy subsidies of renewables, RPSs, and neglect of nuclear power, if not outright hostility to it. But where did they get advice? I already mentioned the guru Amory Lovins. However, I believe that Big Green, the large environmental organizations, were even more influential. Al Gore went to Kyoto in 1997 carrying the cap-and-trade policy advocated by Environmental Defense. Although well-designed for national sulfur emission trading among U.S. utilities, cap-and-trade is cumbersome and ineffectual for trading among 200 nations. NRDC had a major role in constructing the President’s Clean Power Plan. The New York Times had a photo of NRDC lawyers sitting around a table making the plans… The problem is at the top of these [environmental] organizations, in my opinion. Their leadership seems unable to take a global view.”

2. Maybe because of this, there has been loss of government and other support for nuclear power: “Rapid replacement of old technology imposes costs. Strong government support was needed to drive down renewable costs rapidly. In contrast, cost of nuclear power rose, as government support of nuclear RD&D dropped, nuclear power was excluded from RPSs and excluded as a Clean Development Mechanism in the Kyoto Protocol, and nuclear power was successfully targeted by the Big Green environmental groups that emerged in the ‘boomer’ generation…” **Hansen advocates for “high priority RD&D20 for advanced [4th] generation nuclear power.”**

3. Maybe because of environmental opposition, there is regulatory opposition to nukes, which I believe is key because my impression is that the Nuclear Regulatory Commission (NRC) gives the go-ahead for nuclear power plants: “the Head of [the NRC] was a political appointee and Democratic presidents appointed NRC Heads who were more-or-less anti-nuclear. Anti-nukes heading NRC? What a strange situation, if true!… According to George Stanford, NRC Chairs appointed by Democrats tended to be subtly antinuclear. Over time, approval of a new nuclear power plant got longer and more expensive. This was consistent with the aim of many environmentalists, expressed to me, that it was best to stop construction of new nuclear power plants and phase out existing ones. However, note that the NRC deserves credit for safe operation of all U.S. nuclear power plants for over 50 years. There was nothing subtle about Jaczko. He delayed the startup of a new nuclear plant by sitting on the paperwork for several months, thus increasing costs and time-to-build record. The NRC Inspector General accused Jaczko of “strategically” withholding information from his colleagues in an effort to keep plans for the Yucca Mountain nuclear waste repository from advancing. When the Fukushima accident occurred, Jaczko went into overdrive, advising the Japanese government to evacuate a huge area. This increased the accident’s cost and more than 1000 people died, essentially because of heartbreak and stress from abandoning their homes. No people died from radiation released at Fukushima, but many will die because of reactions to the accident. Worldwide shuttering of plans to use nuclear power increases fossil fuel pollution.”

4. On cost reduction and further technological developments for nuclear to succeed: “The primary challenge for future nuclear power is to drive down the cost, and there are good reasons to believe that it can be competitive with all other energy sources. In its early days the cost of nuclear power did decline with learning as expected (Fig. 16). The costs moved in the opposite direction as the power plants became larger, took longer to build, and encountered strong opposition from environmentalists. Examination suggests that the industry and government share in the blame for rising costs, but there is limited value in that debate. Nuclear fuel is inexpensive, about $6 per MWe-hr including all costs (natural uranium, enrichment, fabrication, Nuclear Waste Fund fee), which translates to $0.56 per MMBtu. Today we consider gas at $6 per MMBtu to be very cheap. Gas could not compete with nuclear, if we built and operated nuclear plants at the same cost as fossil plants. Construction and operating costs of nuclear plants are several times greater than equivalently complex non-nuclear facilities. There are no physics that require nuclear plant construction and operation to be more expensive than fossil plants, which is the reason to support innovation. Standardized modular reactors produced in a factory have potential for great cost reduction… Our workshop paper39 published in Science shows that nuclear power has been the fastest way to build carbon-free power. With modular mass-manufacturing, future construction could be even faster, especially if there is technical cooperation between our nations. In the four years since our workshop, the rate of addition of renewable energies has increased markedly in many countries, but five of the six fastest cases of power addition are still those for nuclear power. Facts matter. The argument of renewable advocates that nuclear construction is slow compared to renewables is false. Renewables and nuclear power are both needed. 
The most recent UN deep decarbonization scenarios all include major contributions from both renewables and nuclear power. They see no prospect of rapidly phasing down emissions without both energy sources. Modern nuclear power did not obtain R&D support equivalent to the RPSs and subsidies that renewables enjoyed, yet much progress has been made. Large reduction of cost and construction time likely requires mass manufacture, analogous to ship and aircraft construction, which lends itself to product-type licensing. Passive safety features allow reactor shutdown and cooling without external power or operator intervention. Innovative designs use fuel more efficiently and produce less nuclear waste, directly supply heat for industrial processes, can reduce or eliminate cooling-water requirements, and can be ordered in a range of scales. Deep decarbonization needed in China and India by midcentury can be accelerated by these innovative developments. Recent progress in the U.S. has been entrepreneurially driven, including small modular light-water, molten salt, gas-cooled and liquid-metal-cooled reactors. China has made major investments in several nuclear innovation projects.”

5. Max, your questions on training people in nuclear engineering (what were the others?) also seem very salient.

_Oil, Power and War_:

“Oil, by far the main overall energy source in 1973, was only the second source of electricity: On a global level, oil-fired electrical plants provided a little less current than coal did, but more than hydroelectric dams.17 The oil crisis facilitated the gradual emergence of natural gas and gave a boost to coal. But it also marked the rapid emergence of civilian nuclear power. Starting in the late 1940s, the development of the first uranium nuclear reactors remained an ancillary phenomenon, in large part entwined with efforts to develop atomic weapons… With Herculean industrial efforts, the energy generated in the world thanks to the atom quadrupled between 1973 and 1980, to reach the equivalent of 160 million tons of oil; this represented a modest 5 percent of crude oil consumption in the same year.18 In January 1975, in a State of the Union speech, President Gerald Ford put forth a grandiose plan to construct two hundred nuclear power plants over the following ten years. A little more than sixty were eventually built, making the United States by far the world’s leading producer of nuclear energy. Among the countries most dependent on Arab oil, France and Japan made the most radical choice in favor of the atom. In 1973, in each of these countries, fuel oil was the main energy source feeding the electric power plants’ turbines.19… In spite of the emergence of civilian nuclear power, the expansion of natural gas and the resurgence of coal, oil was not ousted.”

Andrew”

Cheap, portable, easy-to-use ventilators for respiratory support in pandemics

I think cheap, portable, easy-to-use ventilators are worth exploring as inventions to address coronavirus, or to prepare for the next respiratory pandemic. OneBreath, a company out of Stanford, apparently [has already invented this](http://onebreathventilators.com/), although I’m not sure whether the company is still active (I just emailed them to check). Although I have not fact-checked much of the below, my hunch is that this avenue is promising.

From https://www.popsci.com/diy/article/2010-05/invention-awards-breathing-easy/: “[In 2006], when Matthew Callaghan was a surgery intern at the University of California at San Francisco, the medical world was buzzing over the prospect of a global flu pandemic. One of the biggest potential problems was logistical: Because 95 percent of the ventilators in the U.S.—which keep critically ill patients breathing when their respiratory system is unable to function—are already in use, thousands of patients would die for lack of available life support. Ventilators cost hospitals from $3,000 up to $40,000 for state-of-the-art models, making it impractical for most hospitals and clinics to stockpile them for emergencies.

Callaghan, [in 2010] a postdoctoral fellow in Stanford University’s biodesign program, knew that in a pandemic situation, hospitals would have to come up with triage procedures that would leave some to die. If he could develop a reliable, no-frills ventilator, it would eliminate many such heart-wrenching decisions. “I thought, these ventilators cost 40 grand, and they just push air around. It isn’t complicated engineering. You don’t need all the bells and whistles.” That thought was the impetus behind the OneBreath, the ventilator Callaghan invented with a small team of fellow Stanford biodesign students. The device is just a fraction of the cost of a low-end conventional ventilator, runs on a 12-volt battery for six to 12 hours at a time, and is smaller than a toolbox so it can be easily deployed wherever needed.”

Ten years after that article, I’ve [read](https://medium.com/@joschabach/flattening-the-curve-is-a-deadly-delusion-eea324fe9727) and confirmed with friends in Boston hospitals that ventilator shortage remains a concern in today’s coronavirus pandemic. “If we take the number of ventilators [existing in hospitals and a CDC stockpile] as a proximate limit on the medical resources, it means we can take care of up to 170,000 critically ill patients at the same time.” Compare that to a loose estimate of the number of ventilators needed (numbers I have *not* checked, and which rest on an overstatement of Marc Lipsitch’s actual estimate of COVID prevalence, which I last read as 20-60% of American adults in the absence of mitigation): “assume that 55% of Americans catch COVID-19 until the end of 2020, and 6% (10.8 million) of them will need ventilators at some point [with each intense case needing a ventilator for 4 weeks]” (https://medium.com/@joschabach/flattening-the-curve-is-a-deadly-delusion-eea324fe9727). Even taking that estimate with a fistful of salt, it still seems quite plausible that there will be a ventilator shortage. So what happened to OneBreath?
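
The shortage arithmetic above is easy to reproduce. The sketch below uses the article’s loose assumptions plus one of my own (that cases spread evenly over roughly 40 weeks, which understates the sharp peak of a real epidemic), so treat the numbers as illustrative only:

```python
# Rough reproduction of the ventilator-shortage arithmetic quoted above.
# All inputs are the cited article's loose assumptions (plus my assumed
# epidemic duration), not verified data.
US_POPULATION = 330e6
ATTACK_RATE = 0.55          # fraction infected through end of 2020 (assumed)
VENTILATOR_FRACTION = 0.06  # fraction of cases needing a ventilator (assumed)
WEEKS_ON_VENT = 4           # weeks each critical case occupies a ventilator
EPIDEMIC_WEEKS = 40         # my assumption: cases spread over ~March-December
VENTILATORS_AVAILABLE = 170_000

cases_needing_vent = US_POPULATION * ATTACK_RATE * VENTILATOR_FRACTION
# If cases were spread perfectly evenly, average ventilators in use at once:
avg_concurrent_demand = cases_needing_vent * WEEKS_ON_VENT / EPIDEMIC_WEEKS

print(f"Cases needing ventilation: {cases_needing_vent / 1e6:.1f} million")
print(f"Average concurrent demand: {avg_concurrent_demand / 1e6:.2f} million")
print(f"Demand vs. supply: {avg_concurrent_demand / VENTILATORS_AVAILABLE:.0f}x")
```

Even under the charitable even-spread assumption, average concurrent demand comes out several times the 170,000-ventilator ceiling, and peak demand would be higher still.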

“A round of successful tests on pigs wrapped up [in December 2009], and the FDA is expected to review the device for humans [in fall 2010]. The OneBreath should not need to undergo clinical trials, Callaghan says, since it performs the same air-moving function as existing ventilators. He anticipates that the U.S. government will want to stockpile the device for use during pandemics, but clinicians who have been privy to the OneBreath’s development are excited about its prospects elsewhere as well.”

What happened to it after that? I would be curious to find out.

For more on how OneBreath works (in case you were interested in designing your own), see the full article: https://www.popsci.com/diy/article/2010-05/invention-awards-breathing-easy/.

The US public health system’s preparedness in the early 1990s

I’ve been reading about how the US public health system’s ability to diagnose coronavirus is slower than other countries’. This reminded me of an early-1990s audit of the US public health system (from Laurie Garrett’s _The Coming Plague_), which found much room for improvement. Perhaps this is a guide for what could be done better in the future, though I’m not sure whether similar room for improvement exists today, whether infectious disease writer Laurie Garrett’s diagnosis of the barriers at the time was correct (“two decades of government belt tightening, coupled with decreased local and state revenues due to both taxation reductions and severe recessions, had rendered most local and regional disease reporting systems horribly deficient, often completely unreliable”), or whether that diagnosis also holds today. Still, it is an interesting historical example:

“In response to [an early 1990s] Institute of Medicine’s report on emerging diseases, the CDC gave Dr. Ruth Berkelman the task of formulating plans for surveillance and rapid response to emerging diseases. For a year and a half Berkelman coordinated an exhaustive effort, identifying weaknesses in CDC systems and outlining a new, improved system of disease surveillance and response.
Berkelman and her collaborators discovered a long list of serious weaknesses and flaws in the CDC’s domestic surveillance system and determined that international monitoring was so haphazard as to be nonexistent. For example, the CDC for the first time in 1990 attempted to keep track of domestic disease outbreaks using a computerized reporting system linking the federal agency to four state health departments. Over a six-month period 233 communicable disease outbreaks were reported. The project revealed two disturbing findings: no federal or state agency routinely kept track of disease outbreaks of any kind, and once the pilot project was underway the ability of the target states to survey such events varied radically. Vermont, for example, reported outbreaks at a rate of 14.1 per one million residents versus Mississippi’s rate of 0.8 per million.27
Minnesota state epidemiologist Dr. Michael Osterholm assisted the CDC’s efforts by surveying the policies and scientific capabilities of all fifty state health departments. He discovered that the tremendous variations in outbreak and disease reports reflected not differences in the actual incidence of such occurrences in the respective states, but enormous discrepancies in the policies and capabilities of the health departments.28 In the United States all disease surveillance began at the local level, working its way upward through state capitals and, eventually, to CDC headquarters in Atlanta. If any link in the municipal-to-federal chain was weak, the entire system was compromised. At the least, local weaknesses could lead to a skewed misperception of where problems lay: states with strong reporting networks would appear to be more disease-ridden than those that simply didn’t monitor or report any outbreaks. At the extreme, however, the situation could be dangerous, as genuine outbreaks, even deaths, were overlooked.
What Osterholm and Berkelman discovered was that nearly two decades of government belt tightening, coupled with decreased local and state revenues due to both taxation reductions and severe recessions, had rendered most local and regional disease reporting systems horribly deficient, often completely unreliable. Deaths were going unnoticed. Contagious outbreaks were ignored. Few states really knew what was transpiring in their respective microbial worlds.
“A survey of public health agencies conducted in all states in 1993 documented that only skeletal staff exists in many state and local health departments to conduct surveillance for most infectious diseases,” the research team concluded. The situation was so bad that even diseases which physicians and hospitals were required by law to report to their state agencies, and the states were, in turn, legally obligated to report to CDC, were going unrecorded. AIDS surveillance, which by 1990 was the best-funded and most assiduously followed of all CDC disease programs, was at any given time underreported by a minimum of 20 percent. That being the case, officials could only guess about the real incidences in the fifty states of such ailments as penicillin-resistant gonorrhea, vancomycin-resistant enterococcus, E. coli 0157 food poisoning, multiply drug-resistant tuberculosis, or Lyme disease. As more disease crises cropped up, such as various antibiotic-resistant bacterial diseases, or new types of epidemic hepatitis, the beleaguered state and local health agencies loudly protested CDC proposals to expand the mandatory disease reporting list—they just couldn’t keep up.
Osterholm closely surveyed twenty-three state health department laboratories and found that all but one had had a hiring freeze in place since 1992 or earlier. Nearly half of the state labs had begun contracting their work out to private companies, and lacked government personnel to monitor the quality of the work.29 In a dozen states there was no qualified scientist on staff to monitor food safety, despite the enormous surge in E. coli and Salmonella outbreaks that occurred nationwide during the 1980s and early 1990s.
At the international level the situation was even worse. The CDC’s Jim LeDuc, working out of WHO headquarters in Geneva, in 1993 surveyed the thirty-four disease detection laboratories worldwide that were supposed to alert the global medical community to outbreaks of dangerous viral diseases. (There was no similar laboratory network set up to follow bacterial outbreaks or parasitic disease trends.) He discovered shocking insufficiencies in the laboratories’ skills, equipment, and general capabilities. Only half the labs could reliably diagnose yellow fever; the 1993 Kenya epidemic undoubtedly got out of control because of that regional laboratory’s failure to diagnose the cause of the outbreak. For other microbes the labs were even less prepared: 53 percent were unable to diagnose Japanese encephalitis; 56 percent couldn’t properly identify hantaviruses; 59 percent failed to diagnose Rift Valley fever virus; 82 percent missed California encephalitis. For the less common hemorrhagic disease-producing microbes, such as Ebola, Marburg, Lassa, and Machupo, virtually no labs had the necessary biological reagents to even try to conduct diagnostic tests.”

Arguments for carbon fee-and-dividend over cap-and-trade

From climate scientist Jim Hansen’s _Storms of our Grandchildren_:

“Let’s discuss cap-and-trade explicitly first. Then I will provide a bottom-line proof that it cannot work. Because I have already made up my mind about the uselessness of cap-and-trade, my commentary may be slanted, but you have been warned, so you should be able to make up your own mind.
In cap-and-trade, the amount of a fossil fuel for sale is supposedly “capped.” A nominal cap is defined by selling a limited number of certificates that allow a business or speculator to buy the fuel. So the fuel costs more because you must pay for the certificate and the fuel. Congress thinks this will reduce the amount of fuel you buy—which may be true, because it will cost you more. Congress likes cap-and-trade because it thinks the public will not figure out that a cap is a tax.
How does the “trade” part factor in? Well, you don’t have to use the certificate; you can trade it or sell it to somebody else. There will be markets for these certificates on Wall Street and such places. And markets for derivatives. The biggest player is expected to be Goldman Sachs. Thousands of people will be employed in this trading business—the big boys, not guys working for five dollars an hour. Are you wondering who will provide their income? Three guesses and the first two don’t count. Yes, it’s you—sorry about that. Their profits are also added to the fuel price.
What is the advantage of cap-and-trade over fee-and-dividend, with the fee distributed to the public in equal shares? There is an advantage to cap-and-trade only for energy companies with strong lobbyists and for Congress, which would get to dole out the money collected in certificate selling, or just give away some certificates to special interests. Don’t hurry to write a letter to your congressional representative asking for a certificate to pollute—that’s not how things work in Washington. Your paragraph requesting a certificate is not likely to be included in the Waxman-Markey bill, even though at last count 1,400 pages had been added. Again, think lobbyists. Think revolving doors. People in alligator shoes write the paragraphs that actually get added. If you think I am kidding, ask yourself this: Do you believe that your representatives in Congress can write 1,400 pages themselves? It is still a free country, so you can hire your own lobbyist, but the price is kind of high. A coal company can afford someone like Dick Gephardt—can you?
Okay, I will try to be more specific about why cap-and-trade will be necessarily ineffectual. Most of these arguments are relevant to other nations as well as the United States.
First, Congress is pretending that the cap is not a tax, so it must try to keep the cap’s impact on fuel costs small. Therefore, the impact of cap-and-trade on people’s spending decisions will be small, so necessarily it will have little effect on carbon emissions. Of course that defeats the whole purpose, which is to drive out fossil fuels by raising their price, replacing them with efficiency and carbon-free energy.
The impact of cap-and-trade is made even smaller by the fact that the cap is usually not across the board at the mine. In the fee-and-dividend system, a single number, dollars per ton of carbon dioxide, is applied at the mine or port of entry. No exceptions, no freebies for anyone, all fossil fuels covered for everybody. In cap-and-trade, things are usually done in a more complicated way, which allows lobbyists and special interests to get their fingers in the pie. If the cap is not applied across the board, covering everything equally, any sector not covered will benefit from reduced fuel demand, and thus reduced fuel price. Sectors not covered then increase their fuel use.
In contrast, the fee-and-dividend approach puts a rising and substantial price on carbon. I believe that the public, if honestly informed, will accept a rise in the carbon fee rate because their monthly dividend will increase correspondingly.
Second, the cap-and-trade target level for emissions (defined by the number of permits) sets a floor on emissions. Emissions cannot go lower than this floor, because the price of permits on the market would crash, bringing down fossil fuel prices and again making it more economical for profit-maximizing businesses to burn fossil fuels than to employ energy-efficiency measures and renewable-energy technology. It would be akin to a drug dealer luring back former customers by offering free cash along with a free fix.
With fee-and-dividend, in contrast, we will reach a series of points at which various carbon-free energies and carbon-saving technologies are cheaper than fossil fuels plus their fee. As time goes on, fossil fuel use will collapse, remaining coal supplies will be left in the ground, and we will have arrived at a clean energy future. And that is our objective.
A perverse effect of the cap-and-trade floor is that altruistic actions become meaningless. Say that you are concerned about your grandchildren, so you decide to buy a high-efficiency little car. That will reduce your emissions but not the country’s or the world’s; instead it will just allow somebody else to drive a bigger SUV. Emissions will be set by the cap, not by your actions.
In contrast, the fee-and-dividend approach has no floor, so every action you take to reduce emissions helps. Indeed, your actions may also spur your neighbor to do the same. That snowballing (amplifying feedback) effect is possible with fee-and-dividend, but not with cap-and-trade.
Third, offsets cause actual emission reductions to be less than targets, because emissions covered by an offset do not count as emissions. They don’t count as emissions to the politicians, but they sure count to the planet! For example, actual reductions under the Waxman-Markey bill have been estimated to be less than half of the target, because of offsets.
Fourth, Wall Street trading of emission permits and their derivatives in the anticipated multitrillion-dollar carbon market, along with the demonstrated volatility of carbon markets, creates the danger of Wall Street failures and taxpayer-funded bailouts. In the best case, if market failures are avoided, there is the added cost of the Wall Street trading operation and the profits of insider trading. To believe that there will be no insider profits is to believe that government overseers are more clever than all the people on Wall Street and that there is no revolving door between Wall Street and Washington. Where will Wall Street profits come from? They too will come from John Q. Public via higher energy prices.
In contrast, a simple flat fee at the mine or well, with simple long division to determine the size of the monthly dividend to all legal residents, provides no role for Wall Street. Could that be the main reason that Washington so adamantly prefers cap-and-trade?
Fee-and-dividend is revenue neutral to the public, on average. Cap-and-trade is not, because we, the public, provide the profits to Wall Street and any special interests that have managed to get written into the legislation. Of course Congress will say, “We will keep the cost very low, so you will hardly notice it.” The problem is, if it’s too small for you to notice, then it is not having an effect. But maybe Congress doesn’t really care about your grandchildren.
Hold on! Or so you must be thinking. If cap-and-trade is so bad, why do environmental organizations such as the Environmental Defense Fund and the Natural Resources Defense Council support it? And what about Waxman and Markey, two of the strongest supporters of the environment among all members of the House of Representatives?
I don’t doubt the motives of these people and organizations, but they have been around Washington a long time. They think they can handle this problem the way they always have, by wheeling and dealing. Environmental organizations “help” Congress in the legislative process, just as the coal and oil lobbyists do. So there are lots of “good” items in the 1,400 pages of the Waxman-Markey bill, such as support for specific renewable energies. There may be more good items than bad ones—but unfortunately the net result is ineffectual change. Indeed, the bill throws money to the polluters, propping up the coal industry with tens of billions of taxpayer dollars and locking in coal emissions for decades at great expense.
Yet these organizations say, “It is a start. We will get better legislation in the future.” It would surely require continued efforts for many decades, but we do not have many decades to straighten out the mess.
The beauty of the fee-and-dividend approach is that the carbon fee helps any carbon-free energy source, but it does not specify these sources; it lets the consumer choose. It does not cost the government anything. Whether it costs citizens, and how much, depends on how well they reduce their carbon footprint.
A quantitative comparison of fee-and-dividend and cap-and-trade has been made by economist Charles Komanoff (www.komanoff.net/fossil/CTC_Carbon_Tax_Model.xls). If the carbon fee increases by $12.50 per ton per year, Komanoff estimates that U.S. carbon emissions in 2020 would be 28 percent lower than today. And that is without the snowballing (amplifying feedback) effect I mentioned above. By that time the fee would add just over a dollar to the price of a gallon of gasoline, but the reduction in fossil fuel use would tend to reduce the price of raw crude. The 28 percent emissions reduction compares with the Waxman-Markey bill goal of 17 percent—which is, however, fictitious because of offsets. This approach, small annual increases of the carbon fee (ten to fifteen dollars per ton per year), is essentially the bill proposed by Congressman John B. Larson, a Democrat in the U.S. House of Representatives. Except Larson proposes using the money from the fee to reduce payroll taxes, rather than to pay a dividend to legal residents. The Democratic leadership and President Obama, so far, have chosen to ignore Congressman Larson.
A final comment on cap-and-trade versus fee-and-dividend. Say an exogenous development occurs, for example, someone invents an inexpensive solar cell or an algae biofuel that works wonders. Any such invention will add to the 28 percent emissions reduction in the fee-and-dividend approach. But the 17 percent reduction under cap-and-trade will be unaffected, because the cap is a floor. Permit prices would fall, so energy prices would fall, but emission reductions would not go below the floor. Cap-and-trade is not a smart approach.
But, you may ask, was it not proven with the acid rain problem that cap-and-trade did a wonderful job of reducing emissions at low cost? No, sorry, that is a myth—and worse. In fact, examination of the story about acid rain and power plant emissions shows the dangers in both horse-trading with polluters and the cap-and-trade floor.
Here is essentially how the acid rain “solution” worked. Acid rain was caused mainly by sulfur in coal burned at power plants. A cap was placed on sulfur emissions, and power plants had to buy permits to emit sulfur. Initially the permit price was high, so many utilities decided to stop burning high-sulfur coal and to replace it with low-sulfur coal from Wyoming. From 1990 to today, sulfur emissions have been cut in half. A smaller part of the reduction was from the addition of sulfur scrubbers to some power plants that could install them for less than the price of the sulfur permits, but the main solution was use of low-sulfur coal. Now what the dickens does that prove?
It proves that in a case where there are a finite number of point sources, and there are simple ways to reduce the emissions, and you are satisfied to just reduce the emissions by some specified fraction, then emission permits make sense. The utilities that were closest to the Wyoming coal or that needed to install scrubbers for other reasons could reduce their emissions, and so overall the cost of achieving the specified reduction of sulfur emissions was minimized. But the floor of this cap-and-trade approach prevented further reductions. Analyses have shown that the economic benefits of further reductions would have exceeded costs by a factor of twenty-five. So, in some sense, the acid rain cap-and-trade solution was an abject failure.
It is worse than that. The horse-trading that made coal companies and utilities willing to allow this cap-and-trade solution did enormous long-term damage. (What do I mean by “coal companies and utilities willing to allow”? That is the way it works in Washington. Special interests have so much power, or Congress chooses to give them so much sway, that their assent is needed.) The horse-trading was done in 1970. Senator Edmund Muskie, one of the best friends that the environment has ever had, felt it was necessary to compromise with the coal companies and utilities when the 1970 Clean Air Act was defined. So he allowed old coal-fired power plants to be “grandfathered”: they would be allowed to continue to pollute, because they would soon be retired anyhow, or so the utilities said. Like fun they would. Those old plants became cash cows once they were off the pollution hook—the business community will never let them die. Thousands of environmentalists have been fighting those plants and trying to adjust clean air regulations ever since. Yet today, in 2009, there are still 145 operating coal-fired power plants in the United States that were constructed before 1950. Two thirds of the coal fleet was constructed before the Clean Air Act of 1970 was passed.
Those people, including the leaders of our nation, who tell you that the acid rain experience shows that cap-and-trade will work for the climate problem do not know what they are talking about. The experience with coal-fired power plants does contain important lessons, though.
First, it shows that the path we start on is all-important. People who say that cap-and-trade is a good start and we will move on from there are not looking at reality. Four decades later we are still paying for an early misstep with coal-fired plants.
Second, it shows that we need a simple, across-the-board solution that covers all emissions. A fee or tax must be applied at the source. If Congress insists that it must help somebody who will be hurt by the carbon fee, such as coal miners, fine—Congress can provide for job retraining or some other compensation. But the fee on fossil fuel carbon must be uniform at the source, with no exceptions.
Finally, let me address the ultimate defense that is used for cap-and-trade: “The train has left the station. It is too late to change. President Obama has decided. The world has decided. It must be cap-and-trade, because an approach such as you are talking about would delay things too much.” That latter claim turns truth on its head, calling black “white” and white “black.” The truth is shown by empirical evidence. In February 2008, British Columbia decided to adopt a carbon tax with an equal reduction of payroll taxes. Five months later it was in place and working. This year there was an election in British Columbia in which the opposition party campaigned hard against the carbon tax. They lost. The public liked the carbon tax with a payroll tax reduction. Now both parties support it. In contrast, it took a decade to negotiate the cap-and-trade Kyoto Protocol, and many countries had to be individually bribed with concessions. The result: slow implementation and an ineffectual reduction of emissions. The Waxman-Markey bill is following a similar path…
Okay, at long last, we can address the fundamental problem. What is the backbone and framework for a solution to human-caused climate change?
The backbone must be a rising fee (tax) on carbon-based fuels, uniform across the board. No exceptions. The money must be returned to the public in a way that is direct, so they realize and trust that (averaged over the public) the money is being returned in full. Otherwise the rate will never be high enough to do the job. Returning the money to the public is the hard part in the United States. Congress prefers to keep the money for itself and divvy it out to special interests.
The framework concerns how to make an across-the-board fee on fossil fuel carbon work on a global basis, in a way that is fair, because unless there is a universal carbon fee, it will be ineffective. The backbone, I will argue, makes it relatively simple to define international arrangements—I will explain what I mean by “relatively simple” in a moment. The backbone also makes it practical to have a framework that deals with the problem of fairness between those who have caused the problem, those who are causing the problem, and those who are primarily the victims of others. The framework can also help deal with the fundamental problems of population and poverty.
Contrary to the assertion by proponents of a Kyoto-style cap-and-trade agreement, cap-and-trade is not the fastest way to an international agreement. That assertion is another case of calling black “white,” apparently under the assumption that the listener will accept it without thinking. A cap-and-trade agreement will be just as hard to achieve as was the Kyoto Protocol. Indeed, why should China, India, and the rest of the developing world accept a cap when their per-capita emissions are an order of magnitude less than America’s or Europe’s? Leaders of developing countries are making that argument more and more vocally. Even if differences are papered over to achieve a cap-and-trade agreement at upcoming international talks, the agreement is guaranteed to be ineffectual. So eventually (quickly, I hope!) it must be replaced with a more meaningful approach. Let’s define one.
The key requirement is that the United States and China agree to apply across-the-board fees to carbon-based fuels. Why would China do that? Lots of reasons. China is developing rapidly and it does not want to be saddled with the fossil fuel addiction that plagues the United States. Besides, China would be hit at least as hard as the United States by climate change. The most economically efficient way for China to limit its fossil fuel dependence, to encourage energy efficiency and carbon-free energies, is via a uniform carbon fee. The same is true for the United States. Indeed, if the United States does not take such an approach, but rather continues to throw lifelines to special interests, its economic power and standard of living will deteriorate, because such actions make the United States economy less and less efficient relative to the rest of the world.
Agreement between the United States and China comes down to negotiating the ratio of their respective carbon tax rates. In this negotiation the question of fairness will come up—the United States being more responsible for the excess carbon dioxide in the air today despite its smaller population. That negotiation will not be easy, but once both countries realize they are in the same boat and will sink or survive together, an agreement should be possible.
Europe, Japan, and most developed countries would likely agree to a status similar to that of the United States. It would not be difficult to deal with any country that refuses to levy a comparable across-the-board carbon fee. An import duty could be collected by countries importing products from any nation that does not levy such a carbon fee. The World Trade Organization already has rules permitting such duties. The duty would be based on standard estimates of the amount of fossil fuels that go into producing the imported product, with the exporting company allowed the option of demonstrating that its product is made without fossil fuels, or with a lesser amount of them. In fact, exporting countries would have a strong incentive to impose their own carbon fee, so that they could keep the revenue themselves.
As for developing nations, and the poorest nations in the world, how can they be treated fairly? They also must have a fee on their fossil fuel use or a duty applied to the products that they export. That is the only way that fossil fuels can be phased out. If these countries do not have a tax on fossil fuels, then industry will move there, as it has moved already from the West to China and India, with carbon pollution moving along with it. Fairness can be achieved by using the funds from export duties, which are likely to greatly exceed foreign aid, to improve the economic and social well-being of the developing nations.”

My questions about coronavirus (possible PhD research questions)

  1. Why do infectious pathogens like coronavirus not end up infecting 100% of people? Marc Lipsitch claims this is because enough of the population becomes immune at a certain point, creating “walls” through which the pathogen cannot transmit. Where is the study showing this is the explanation? The other possibility is that estimates like “20% of the world was infected with 2009 H1N1” were taken too early (while the pathogen was still spreading) or had some other limitation. Edit 3/12: Per https://medium.com/@tomaspueyo/coronavirus-act-today-or-people-will-die-f4d3d9cd99ca, this has practical implications: if we lift social distancing measures after x time, will cases simply reappear? The Spanish flu suggests they will, and China, which has limited cases for now, will test this when it lifts its social distancing measures. :End edit.
  2. My friend told me hospitals will lack enough ventilators to provide breathing support; these ventilators may also be bulky and expensive. What happened to the portable, cheap ventilators called for in the [Johns Hopkins Center for Health Security report on technologies for global catastrophic biorisks](https://www.dropbox.com/s/5h3r7e9m0zrujgl/181009-gcbr-tech-report.pdf?dl=0)? For example, what happened to the [OneBreath ventilator](https://www.popsci.com/diy/article/2010-05/invention-awards-breathing-easy/)? Maybe that could be deployed now.
  3. According to Marc Lipsitch, on average it takes “three weeks to die from infection” from coronavirus. I assume this number is calculated only from those who have died? (This makes it less relevant to someone who does not know, in advance, whether they will die.) Also, how was the infection date measured, given that coronavirus is often asymptomatic? This is relevant for knowing how urgently one should fly home to see one’s elderly relatives if they start showing symptoms. Edit 3/12: https://github.com/midas-network/COVID-19/tree/master/parameter_estimates/2019_novel_coronavirus#time-from-symptom-onset-to-death. :End edit.
  4. What is the status of the coronavirus diagnostic from the CDC? How about those from other developers, e.g. SHERLOCK out of the Broad? Is SHERLOCK field-ready and better for this coronavirus situation?
  5. Let’s say two friends are considering whether to hang out. If they check that neither of them has symptoms, is it safe to meet up? (This is not to say preemptive social distancing measures like closing down public gatherings are not useful; clearly they are, because even symptomatic people might come to those.)
  6. Elderly people are dying of coronavirus in higher numbers. This also seems true of flu and of potentially unrelated problems like air pollution. I never see this myself because I’m not in the hospital and I don’t know many elderly people. What is the pathology in these diseases, i.e. how does death actually happen?
  7. Less important: I heard there are studies showing a bad health outcome after exposure to a high dose of a virus, but a good outcome after a low dose. I don’t remember which virus, which organism (animal?), etc. Where are these studies? What is the implication, if any, for coronavirus? (For example, should we think of the increased risk from crowded places as being explained by lots of virus particles from lots of people, i.e. a high dose? This is different from how I usually picture it: a crowded place simply means more sick people.)
  8. How many tests did South Korea actually do when people say “they did a lot of tests”? Check https://ourworldindata.org/coronavirus.
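Question 1’s “walls” intuition can be made quantitative with the standard herd-immunity threshold from the simple SIR model: an epidemic stops growing once a fraction 1 − 1/R0 of the population is immune, because each case then infects fewer than one new person on average. A minimal sketch (the R0 values below are illustrative assumptions, not measured estimates for this coronavirus):

```python
# Herd-immunity threshold from the basic SIR model (illustrative only).
# The epidemic can no longer grow once a fraction 1 - 1/R0 of the
# population is immune, since each case then causes < 1 new case.

def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune for R_effective < 1."""
    if r0 <= 1:
        return 0.0  # an outbreak with R0 <= 1 dies out on its own
    return 1 - 1 / r0

# R0 values here are made-up examples, not measured estimates.
for r0 in (1.5, 2.5, 3.0):
    print(f"R0 = {r0}: ~{herd_immunity_threshold(r0):.0%} immune needed")
```

Note this also bears on the “20% infected with 2009 H1N1” puzzle: for a flu-like R0 of 1.5 the threshold would be about 33%, so 20% alone would not obviously build the “walls,” unless the estimate itself is off as question 1 suggests.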

My actions in response to coronavirus

My friend asked me, “Friends!! Are you worried about coronavirus at all?” Here’s my current response:

**~~Although I will be adopting “common-sense” protective habits~~, I will practice social distancing and other “common-sense” habits when I don’t lose too much by adopting the habit; this includes remote work. I’m mainly not worried about my own safety because fatality rates seem low for my age range. I’ll especially avoid contact with the elderly and immunocompromised to reduce the risk of transmitting to them.**

The [Chinese CDC-reported fatality rate for my age range is 0.2%, i.e. a 1 in 500 chance of dying if I get it](https://www.businessinsider.com/coronavirus-death-age-older-people-higher-risk-2020-2) (maybe one could consider that too high), and this may be an overestimate if early case fatality rates are inflated because deaths from coronavirus are recorded more completely than coronavirus cases themselves.[1] Edit 3/7: I agree with [Dr. Jeremy Faust](https://www.cnn.com/videos/tv/2020/03/07/how-vulnerable-is-the-average-person-to-the-coronavirus.cnn) that more of the focus should be on helping those most vulnerable, i.e. the elderly and immunosuppressed. For a non-elderly, non-immunosuppressed person, this may mean not hoarding supplies and not becoming a source of transmission (e.g. by following self-protective habits that don’t hoard and by self-quarantining if infected); both are cases of worrying about the much higher risk to someone else rather than the much lower risk to oneself. :End edit.

By “common-sense” protective habits, I mean things like the following, where I or things I care about (e.g. environmental values) don’t seem to lose much by adopting the habit, and the main thing keeping me from adoption, especially _consistent_ adoption, is inertia:

  1. Biking to places instead of taking crowded subways (which I mostly already do) (Edit 3/9: however, if my only mode of transport were a subway and I wanted to make that trip, I’d do it without worrying, because biking is mainly for exercise and speed of travel in Boston :End edit),
  2. Avoiding non-essential flight or travel (which I mostly already do, because I dislike being on planes anyway and for environmental reasons),
  3. Doing my PhD work around 1-2 people in a relatively uncrowded office space instead of around 10-20 people in a more crowded one (this gives me the best of both worlds, keeping each other productive and non-lonely, while reducing transmission risk; I mostly already do this), and
  4. Continuing to eat healthy, sleep well and stay low-stress to keep my immune system healthy (which I mostly already do).
  5. [“And self-protective activities include all the hygiene measures that were mentioned earlier [washing hands, not touching one’s face, avoiding public places], includes getting up-to-date on vaccinations to prevent needs for contact with the health care system. It includes, if you smoke, quitting smoking, because you need your lung function… It includes staying home from work to protect others if you’re sick. It includes making that possible if you’re a business owner.”](https://theforum.sph.harvard.edu/events/the-coronavirus-outbreak/).

[“These are all individually self-improving acts that will reduce one’s own risk. And this is a happy case where every one of those things also has benefits to the community. Slowing the epidemic is what we have to do if we can’t stop it. And all of those measures, small though some of them may be, help to slow the epidemic.”](https://theforum.sph.harvard.edu/events/the-coronavirus-outbreak/)

Footnotes:

  1. I understand that fatality rates are highly uncertain given that they’re calculated as “num deaths attributed to coronavirus” / “num cases of coronavirus,” [both of which are currently underestimated due to lack of testing](https://theforum.sph.harvard.edu/events/the-coronavirus-outbreak/). However, the fatality rate for my age group is low enough, even with error bars, that I don’t worry about my own safety.
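To make the footnote’s point concrete, here is a toy calculation (all numbers are made up for illustration) showing how undercounting the denominator inflates the crude case fatality rate:

```python
# Toy illustration of CFR sensitivity to undercounted cases.
# All numbers below are invented for illustration, not real data.

def cfr(deaths: float, cases: float) -> float:
    """Crude case fatality rate: deaths divided by known cases."""
    return deaths / cases

confirmed_cases = 1000
reported_deaths = 20
print(f"crude CFR: {cfr(reported_deaths, confirmed_cases):.1%}")  # 2.0%

# If testing misses most infections, true infections exceed confirmed
# cases. Suppose (an assumption) only 1 in 5 infections is confirmed:
true_infections = confirmed_cases * 5
print(f"adjusted CFR: {cfr(reported_deaths, true_infections):.1%}")  # 0.4%
```

The same logic runs the other way for the numerator: if deaths are also undercounted, the two errors partially offset, which is why early CFR estimates carry wide error bars.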