A plan for cutting US greenhouse gas emissions: can I help accelerate this?

I spent most of the last week or two reading about climate change and came across a path for cutting US emissions that I really liked: https://medium.com/otherlab-news/how-do-we-decarbonize-7fc2fa84e887.

“The beginning and the first half of decarbonization will most likely look the same: a commitment to solar and wind [zero-carbon electricity], batteries, electrification of homes, heat pumps, electric vehicles [electrification]…” The author, energy inventor Saul Griffith, then shows this needs to be done on a massive scale in the US: just focusing on 3 key steps, we need 200m+ electric vehicles, electric heating systems for 120m+ homes and 5m+ commercial buildings, and 2 TW of zero-carbon electricity production (mostly solar/wind because they’re cheapest, with variation based on locale).

I don’t think this is new, but seeing that a large fraction of emissions could be reduced by just a handful of steps was extremely clarifying. I think parts of this approach are already happening in the US (growth in number of solar rooftops to about 1m), although maybe not at a fast pace. I also haven’t looked into the negative consequences of this path (not just carbon but other environmental impacts of e.g. producing so many electric vehicles).

If this is a good path, it seems likely that policy, operations, engineering and financing elements could help, including potentially large workforces to manufacture the cars, heating systems, batteries, solar panels and wind turbines; and definitely large workforces to install, maintain and connect these elements. Operations, engineering and financing might be grouped into centralized businesses like SolarCity (which “markets, manufactures, and installs residential and commercial solar panels”), but they don’t have to be (e.g. many local contractors around the country). Owners might just pay for these electric vehicles and heat pumps like they pay for their current vehicles and furnaces, but financing (or straight up donation?) might be important for big solar/wind projects and for people unable or unwilling to switch to electric appliances. Policy seems like it could help by providing such financing (e.g. a Fannie Mae for these clean techs), by requiring zero-carbon electricity (e.g. renewable portfolio standards) and/or electrification, and by making sure local regulations are compatible with solving climate change (e.g. building codes).

Now, what can individuals or small groups additionally do to accelerate this? I know I’m focused on pandemics with most of my time now, but if this is a good path, I want to spend some time on this.

Reactions to cancer immunotherapy book _The Breakthrough_ by Charles Graeber

1. Cancer immunology or immunology more generally may be relatively crowded fields for me to enter as a biology researcher; however, there might be other reasons to enter anyway.

“As of June 2018, there were reported to be some 940 new immuno-oncological drugs being tested for breakthrough designation and FDA approval. Another 1,064 new immunotherapy drugs are in the labs in preclinical phase. That’s 2,004 new cancer drugs in just a few short years. This speed of change is highly unusual in medicine, and totally unprecedented in cancer. And by the time you read this, those numbers and the science behind them will have advanced again.”

“There are now reportedly 164 PD-1 / PD-L1 drugs in the pipeline between preclinical testing and consumer marketing, and industry insiders suspect there may be many more being developed in China.”

“The result is billions of dollars and scores of talented specialists now devoted to cancer immunotherapy. The funding torchbearers of the field such as the Cancer Research Institute, started more than seventy years ago by William Coley’s daughter, have been joined by new organizational infrastructures to support that work, among them the Biden “moon shot” Cancer Initiative, to rethink medicine as a whole, and cancer most specifically; the Parker Institute for Cancer Immunotherapy to fund and coordinate researchers and clinical trials as never before; public appeal drives such as Stand Up to Cancer (SU2C), which directs hundreds of millions of donated dollars directly into research and clinical trials; and a gold rush for commercial pharmaceutical companies and startups and the dozens of biotech venture capitalists that fund them. Several researchers have quipped that there are now two types of drug companies: those that are deep into cancer immunotherapy, and those that want to be.”

There are many caveats to my claim. Perhaps there is a ton of work left to be done to fulfill immunotherapy’s promise, and immunotherapy really is that good of a tool to have against cancer or pandemics. Maybe a bunch of money and industry interest are incoming, but there is still a bottleneck in innovative research, or in computational research specifically. Is there a bottleneck in research talent in either (e.g. maybe there aren’t many people who have been trained in certain skills or types of research in cancer immunotherapy or immunology)? And, as always, beware statistics. However, I do tend to think that, with so much funding and hype around the field, there’s bound to be some incoming research talent, including computational talent. I would be willing to bet on myself as being able to advance immunology or cancer immunotherapy, at least marginally, but I’d also bet that I could advance other fields by more. For my PhD research choice, though, this may be outweighed by other factors, like aiming to be in a top lab or being in a field that is (currently) prestigious and reliably funded.

I am now more interested in immunology that engineers the immune system. See the Preface note below for my mixed feelings about the promise of this approach vs., say, broad-spectrum antimicrobials. I guess that means understanding how it works, but in a focused way. My prior is that it’s hard to predict where the next finding in immunology will come from, and it won’t necessarily follow the past; but if the past is any indication, I do think that empirical work, including the flavor of work of Allison and hx, is more robust than “epistemic base” flavor stuff, specifically molecular-level speculative biology. Also, note that you might want to engineer the immune system to respond more strongly or more weakly (again, how do the pathogeneses work?).

2. Cancer immunotherapy is currently very expensive. (This might be worth trying to solve.)

As of 2018, “[p]ricing for Yervoy—the trade name for the anti-CTLA-4 drug ipilimumab—is typical, costing more than $120,000 for a four-course treatment. Merck’s anti-PD-1 drug Keytruda, for advanced melanoma, costs $150,000 for a yearlong treatment.”

3. Cancer is potentially not as much a death sentence as I imagined.

“Until very recently we’ve had three main methods for treating cancer… These ‘cut, burn, and poison [surgery, radiation and chemotherapy]’ techniques are currently estimated to be able to cure cancer in about half of the people who develop the disease.” However, Graeber doesn’t provide much backing for this estimate. Is it true? I feel some doubt. The word “cure” seems strong; a cross-check with https://slatestarcodex.com/2018/08/01/cancer-progress-much-more-than-you-wanted-to-know/ suggests ~50-70% as a 5-year survival rate across cancers (the percentage of people who survive five years after being diagnosed with cancer), which is less than a cure but still pretty good. Maybe a majority of those 5-year survivors were actually cured and never experienced a recurrence. It’s unclear.

4. Maybe consider or ask about cancer immunotherapy, on its own or in combination with other therapies, if you or someone you know gets cancer (I’m not a doctor).

“[Ipi] was an immediate game changer, reducing deaths from late-stage melanoma by 28 to 38 percent. The first phase 1 clinical trials started in 2001, long ago enough to qualify 20 to 25 percent of those patients with “long-term survival” benefit. That’s still less than half of the patients, but a great deal better than the low single-digit survivor percentages only the year before.”

“There are now at least half a dozen approved anti-PD-1 / PD-L1 drugs… The anti-PD-1 / PD-L1 drugs seem to work best if a patient’s tumor is expressing PD-L1. For that subset of patients, the drug has worked well, providing durable and sometimes complete responses.”

This section will become out of date as new trials come out. I can only speculate on whether immunotherapy will offer improvements for other cancers in the coming years.

5. One might imagine cancer as analogous to viral infection; both involve potentially hijacked cells creating many other hijacked cells (cancer by outgrowing normal cells, virally infected cells by spreading viral particles). I should read about how both of these cause diseases and their pathogenesis. Without knowing anything, I can imagine that they cause disease by (a) creating big masses in your body through uncontrollable replication and/or (b) decreasing your normally functioning cells via infection and/or outcompetition.

6. I have a bias against engineering the immune system instead of letting it perform naturally unless it’s an emergency and the alternative is clearly worse; this bias is confirmed by this book. Two notable emergencies:

  • You’re clearly going to die of cancer without treatment (the book’s subject).
  • It’s a deadly pandemic (e.g. case fatality rate > 25%) where traditional vaccine approaches don’t work, so you should try some of the less tested approaches to immunotherapy that have been tried in cancer. Approaches mentioned in the book include
    • vaccines or live infection
    • passive immunization via transfusion of serum/blood, antibodies or T cells
      • adoptive T cell therapy (a cellular therapy, i.e. drug is a cell)
    • checkpoint inhibitors (T cell-specific)
    • broad stimulants of the immune system or parts of it (e.g. interferon, IL-2 or TCGF for T cells)
    • CAR-T cells
    • bispecific antibodies to chain T cells with cancer or infected cells
    • ways to make cancer or infected cells express unique antigen to be visible to the immune system
    • 50+ targets in the tumor microenvironment
    • oncolytic virus therapy
    • combinations of such with existing therapies and each other
    • more?
    • You could consider these in combination with the suite of antimicrobials as the tools in our arsenal in a pandemic:
      • antibiotics
      • antivirals (e.g. ART)

7. I learned various things related to

  • the standards needed in experiments to prove various findings in cancer immunotherapy
  • the strategies for discovery of such findings
  • the stories behind key concepts in immunology
  • various wet lab techniques
  • pictures of certain phenomena

These may be expanded and may be of interest to fellow scientists.

8. Below, I outlined my favorite parts of the book as they fit my background and interests. This may be helpful to those only interested in specific questions or sections, as well as those who have read the book and wish to have a mental map of its contents. I think those interested in the bioscience and clinical aspects (e.g. point 7) will enjoy reading the book in full. I think others will benefit from just a summary, which can be found in Appendix B of the book.

  • Preface. Argues that, to treat cancer, it is better to use the immune system than a typical drug, because the immune system adapts against cancer’s mutations, unlike a typical drug. Even though this is true, isn’t it possible that the immune system is already doing what it can to fight a disease, so there may be little to gain by engineering/optimizing it further? I agree that there’s a clear optimization for immunocompromised people (get them off immunosuppressants or fight their HIV/AIDS), but otherwise it’s not obvious to me that, just because the immune system is strong and adaptive, we should be looking to engineer it to do better.
  • Introduction
  • Chapter 1 Patient 101006 JDS
  • Chapter 2 A Simple Idea
  • Chapter 3 Glimmers in the Darkness. How we know how the immune system works (“In retrospect… horrible, glaring exception”). People seeing the immune system not working against cancer as proof that the two are unrelated. Some Rosenberg-era cancer immunology from the 1970s-1990s: a Coley repeat but with serum/blood transfusion instead of infection; then a more specific transfusion of T cells raised in a pig against the tumor; then IL-2-fertilized T cells plus IL-2 (IL-2 was accidentally discovered in 1976 as fertilizing healthy T cells in an attempt to grow leukemia; more properly T-cell growth factor, for crazy T-cell growth numbers; produced beyond scarcity with rDNA, spurred by interferon, the 1957-discovered “interfering” hormone); a <1/2-helpful study followed by news and FDA approval, but difficulty reproducing, and more scientists avoiding immuno for cancer (“In 1968… the most successful would be the ones who weren’t even trying.”). So Rosenberg and the previous era can be thought of as a. IL-2, b. maybe cancer vaccines (mash up tumor), c. serum and/or T cell transfusion [b. and c. having analogs for infectious disease; a. is a new idea that didn’t pan out so well, vs. these checkpoint inhibitors, which may pan out reasonably well].
  • Chapter 4 Eureka, Texas. Jim Allison. How do T cells recognize and get activated at all? Find TCR, then CD28, then co-inhibitor CTLA-4.
  • Chapter 5 The Three E’s. This is basically immuno again but maybe post-Allison and the author trying to go back and forth and/or put things chronologically, I’m not sure what the logic of the order is (1974 Stutman nude (non-immune) vs. normal mice getting carcinogen and same tumor rate; 1988 Schreiber and Old TNF and IFNy knockdowns (genetic mutation, antibody) stop immune response, incl. vs Meth A tumor model and carcinogen-induced tumor mice; fight between cancer and immune system (elimination, equilibrium, escape) and survival of the fittest tumors (name “immunoediting”) portrayed by taking out immune system and tumors suddenly overwhelming previously healthy (mutagenized) mice and by transplanting tumors from immunosuppressed/competent mice to the other, respectively, with checkpoints being possible tumor tactic and other ways to engineer immune system). Story of ipilimumab (Allison’s anti-CTLA4 antibody for desperate metastatic melanoma, w/ BMS unsure about continuing, S vs. PFS vs. feeling better, side effects, approval in 2011, etc.).
  • Chapter 6 Tempting Fate. Mostly the anti-PDL1 approval story (“On December 10… [end of chapter]”). It’s also Brad (“[beginning]… help Brad save his”) and discovery of PDL1 (“Like most big discoveries… side of the handshake followed quickly behind”).
  • Chapter 7 The Chimera. CAR-T.
  • Chapter 8 After the Gold Rush
  • Chapter 9 It’s Time

Universal basic income’s implications for climate change and other catastrophic risks

[Highly speculative post]

Would a universal basic income (a plan for the government to pay everyone, say, $1k/month) decrease greenhouse gas emissions? The thought occurred to me because I’ve been trying to cap my emissions for a while and quickly realized that I was only able to keep my energy usage below the average global citizen’s level because I didn’t have to drive 20 minutes to work each day, which otherwise would explode my energy usage. This is possible in Boston, especially when you have housing options within 20 minutes’ walking distance from your place of work. But I had a harder time imagining this working in the sprawling South Bay in the San Francisco Bay Area (although you can get surprisingly far with a bike). Reading about basic income made me think: if basic income resulted in some people quitting their jobs, and if quitting your job means reducing your emissions spent on driving to work, does basic income win on this environmental count as well?

Yes, I realize this is speculative. Basic income might have to be pretty high, or your cost of living pretty low, for a basic income to convince you to leave your job. (Andrew Yang’s $1000/mo. Freedom Dividend, while a start, feels insufficient.) And while I do *feel* that quitting your job reduces your emissions, maybe you spend your leisure time or your low-cost lifestyle driving an equal or even greater amount to other places instead. I doubt it, but it’s possible.

Two speculative arguments make me feel that having no job is less energy-intensive than having one:

  1. I saw inventor Saul Griffith show a graph of how energy usage is correlated with GDP. The 2008 recession saw a significant dip in energy usage. I would guess that that happens because lower GDP means fewer transactions, sales and jobs, and (a) people are just doing and making less stuff, (b) fewer people are coming to work and (c) people are spending less on carbon-expensive vacation flights. The first two of these factors pertain to a basic income scenario, and suggest a link between fewer jobs and fewer emissions.
  2. I read about this terrifyingly energy-intensive job in David Graeber’s _Bullshit Jobs_:

    Kurt works for a subcontractor for the German military. Or… actually, he is employed by a subcontractor of a subcontractor of a subcontractor for the German military. Here is how he describes his work:

    The German military has a subcontractor that does their IT work. The IT firm has a subcontractor that does their logistics.
    The logistics firm has a subcontractor that does their personnel management, and I work for that company.
    Let’s say soldier A moves to an office two rooms farther down the hall. Instead of just carrying his computer over there, he has to fill out a form.
    The IT subcontractor will get the form, people will read it and approve it, and forward it to the logistics firm.
    The logistics firm will then have to approve the moving down the hall and will request personnel from us.
    The office people in my company will then do whatever they do, and now I come in.
    I get an email: “Be at barracks B at time C.” Usually these barracks are one hundred to five hundred kilometers [62–310 miles] away from my home, so I will get a rental car. I take the rental car, drive to the barracks, let dispatch know that I arrived, fill out a form, unhook the computer, load the computer into a box, seal the box, have a guy from the logistics firm carry the box to the next room, where I unseal the box, fill out another form, hook up the computer, call dispatch to tell them how long I took, get a couple of signatures, take my rental car back home, send dispatch a letter with all of the paperwork and then get paid.
    So instead of the soldier carrying his computer for five meters, two people drive for a combined six to ten hours, fill out around fifteen pages of paperwork, and waste a good four hundred euros of taxpayers’ money.

    What the…?

    Obviously not all jobs are so energy-inefficient, but even a job in which you drive 20 miles to the office, sit in an air-conditioned office for 8 hours and then drive back is pretty energy-intensive. This makes me feel that not having a job is often less energy-intensive than having one.
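The "driving 20 miles each way" intuition can be made concrete with a back-of-envelope sketch. All of the figures here are round numbers I am assuming for illustration (a 25 mpg car, 250 workdays/year, ~33.4 kWh of chemical energy per gallon of gasoline), not numbers from Graeber:

```python
# Rough average power draw of a car commute, spread over the whole year.
# Assumed round figures: 20-mile each-way commute, 25 mpg car,
# ~33.4 kWh of energy per gallon of gasoline, 250 workdays/year.

MILES_PER_DAY = 40          # 20 miles each way
MPG = 25
KWH_PER_GALLON = 33.4       # approximate energy content of gasoline
WORKDAYS_PER_YEAR = 250
HOURS_PER_YEAR = 365 * 24

gallons_per_year = MILES_PER_DAY / MPG * WORKDAYS_PER_YEAR   # 400 gallons
kwh_per_year = gallons_per_year * KWH_PER_GALLON             # ~13,360 kWh
avg_watts = kwh_per_year * 1000 / HOURS_PER_YEAR             # ~1,500 W
```

Averaged over the year, the commute alone draws on the order of 1,500 watts, a large fraction of an average global citizen's total energy budget, which supports the feeling that the job itself is a major energy line item.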

This leaves me most stuck on the first objection: current proposals for universal basic income would not move many people to quit their jobs.

Besides reducing my energy usage, I’ve been thinking a lot about pandemics for my PhD research. Is it crazy that basic income might also be helpful in that scenario? Specifically, imagine the next Spanish flu emerges. It threatens to spread to every continent within weeks, and, if the world responds to it as it responded to the 1918 version, 3-5% of the world population will die. So the CDC and WHO and a bunch of scientists and companies around the world scurry to make a vaccine, but that’s going to take hundreds of days. The main way for the rest of us to stay safe and stop the spread in the meantime is to stay away from each other and from our offices and schools. Except many need to go to work to pay the rent and eat…

Universal basic income to the rescue! You may now consider not going to work if you’re really concerned about the pandemic. You may lose your job, but you hopefully can still pay your rent (depending on where you live in the United States), and maybe you get your job back when the pandemic is over. Again, however, as with the emissions case, the basic income amount may not be enough for this to make sense.

Thinking about this made me wonder about a pandemic basic income, which is an income administered to everyone in pandemic situations for this very reason. I don’t think most pandemics warrant this kind of panicked response, but if one really did, maybe you’d want a pandemic basic income (plus other measures in place to ensure food, clean water and a bunch of other services still get to people despite the social disruption).

My Feelings on the Seriousness of Pandemic Risk

Note: When I try to imagine the rest of my lifetime, I feel pretty optimistic. It’s not that I believe that poverty will magically go away or that people aren’t already feeling and won’t continue to feel some of the impacts of climate change. But I do generally speculate that we won’t see a huge pandemic, nuclear war or similar humanity-devastating outcome.

Even though such risks seem unlikely, I have generally felt that they are still worth mitigating given their seriousness if they do happen. I think this explains my interest in the area, which has driven me to work on a couple projects in biosecurity and pandemic preparedness in my PhD. This post lays out lines of reasoning that have influenced my changing feelings about the seriousness of pandemics.

I first learned about pandemic risk from the Open Philanthropy Project (OPP). According to OPP, “[t]he worst flu pandemic in the past century was the ‘Spanish’ flu epidemic of 1918, which is believed to have been responsible for about 50 million deaths,” or 3-5 percent of the world population at the time [0]. The academic biosecurity community [1] in general fears that modern pandemics have the potential to kill even larger percentages: increased travel and other factors potentially augment the risk, while modern medicine and public health knowledge potentially decrease the risk.

How serious is pandemic risk, versus the 30-40 million people who die each year of non-communicable diseases [2], or climate change, or poverty and improving economic well-being (e.g. housing prices)?

tl;dr: At least 2 historical pandemics killed > 3% of the global population within a few years; extrapolating this gives a death toll of > 210 million people today, which in rate terms is an order of magnitude less than that of non-communicable diseases. When considering modern factors that increase pandemic risk (see Red Note), along with the suddenness of pandemics and several scenarios that are worse than historical ones, pandemic risk seems as or more serious to me than suffering from non-communicable diseases.

(Note 1: Comparing pandemic risk and non-communicable diseases does not make me feel, “Oh, non-communicable diseases aren’t that bad.” Like many, I have had family members die and/or suffer from such diseases. This comparison makes me feel, “Wow, surprisingly this problem feels as or more serious than one I already feel is incredibly serious; I should seriously consider working on it.”)

(Note 2: I recommend the 106-minute movie _Contagion_ and, if you have more time, Laurie Garrett’s book _The Coming Plague_ for vivid (and realistic) depictions of pandemics that go beyond the mainly quantitative arguments below. For more background on the specific pandemic threats that could lead to different numbers of deaths including the ones below, as well as actions that can be taken to mitigate such threats, I recommend the 80000 Hours’ podcasts and OPP cause report [3].)

My feelings about seriousness of pandemic risk

1. I first imagined a scenario in which Spanish flu today kills the same percentage of the world population as it did in 1918. That’s 210-350 million deaths (3-5% of the world population) [4]. The Institute of Disease Modeling performed simulations of a modern Spanish flu (presumably accounting for modern factors like increased air travel and more advanced medicine); this 15-second simulation predicted 33 million deaths in the first 6 months.

This IDM number seems significantly lower than the 3-5% extrapolation. Using the 3-5% extrapolation or the IDM results, one modern Spanish flu would kill as many people as non-communicable diseases do in about 5-8 years or 1 year, respectively. If such a Spanish flu happens once or even twice in my lifetime, then in relative terms, the impact is an order of magnitude smaller than that of non-communicable diseases. I dislike engaging in such speculative estimates, but I see no other way to get an intuitive grasp of how serious the problem is. These estimates made me feel the problem is still extremely serious, but less relatively serious than I had initially thought.
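The comparison above is simple arithmetic; here is a minimal sketch of it, using this post's round figures (a world population of ~7 billion, a 3-5% death rate, and non-communicable disease (NCD) deaths at the upper end of 30-40 million per year):

```python
# Back-of-envelope: a modern Spanish flu vs. annual NCD deaths.
# Round figures from this post: world population ~7e9, Spanish flu
# death rate 3-5%, NCD deaths taken at ~40 million per year.

WORLD_POP = 7e9
NCD_PER_YEAR = 40e6

flu_low = 0.03 * WORLD_POP     # 2.1e8: 210 million deaths
flu_high = 0.05 * WORLD_POP    # 3.5e8: 350 million deaths

# One such pandemic equals this many years of NCD deaths:
years_low = flu_low / NCD_PER_YEAR     # ~5.3
years_high = flu_high / NCD_PER_YEAR   # ~8.8

# The IDM-style figure (33 million deaths in 6 months) is closer to
# a single year's worth of NCD deaths:
idm_years = 33e6 / NCD_PER_YEAR        # ~0.8
```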

2. At this point, I was surprised and asked why I had previously felt so sure that pandemic risk was as or more serious than non-communicable diseases. I then realized that my comparison of total numbers between the two problems missed part of the picture:

  • Many factors increase pandemic risk and magnitudes in the future [5].
  • The worst past pandemic is not a worst case bound on future pandemics, even if such factors don’t end up increasing pandemic risk. I can’t just hope that the worst or average case death toll for a future pandemic is the Spanish flu number (3-5% of world population) or even the Black Death number (until I know more about how these numbers were estimated, I’m not going to put much faith in them, but Wikipedia [6] estimates 16-23% of the world population). Pandemics seem like forest fires in their potential to scale unpredictably and uncontrollably beyond what one might expect [7]: infectious agents self-replicate just like fires, with billions of proximate humans constituting the firewood.
  • Pandemics unpredictably kill a bunch of people at once; this seems worse than something that kills the same number at a slow, steady rate. I mean “worse” in the sense of denting humanity’s trajectory, by unpredictably killing many people of child-bearing age in a short timeframe [8]. In the case of Black Death, “[i]t took 200 years for the world population to recover to its previous level.” [9] In contrast, many of the non-communicable diseases are diseases of old age; they seem to kill no more than a fixed percentage of the world population in predictable ways. This is not captured in comparison of raw numbers.
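The forest-fire analogy is really a claim about exponential growth. A toy sketch of unchecked doubling (the 3-day doubling time is a hypothetical number chosen for illustration, not taken from any real pathogen):

```python
# Toy model of early-phase epidemic growth: unchecked doubling.
# Real outbreaks eventually saturate (an SIR model bends the curve),
# but the early exponential phase is what makes pandemics fire-like.

def cases(initial, doubling_time_days, days):
    return initial * 2 ** (days / doubling_time_days)

# With a hypothetical 3-day doubling time, 100 cases become
# over 1.6 million in six weeks:
six_weeks = cases(100, 3, 42)   # 100 * 2**14 = 1,638,400
```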

3. These arguments make me feel again that pandemic risk is extremely serious, at least on par with the suffering caused by non-communicable diseases.

My attitude towards personally working to mitigate pandemic risk

Guided by my interest, I try to pick what to work on based on the seriousness of the problem, how many people are already working on the problem, whether I can actually contribute to the problem and other hard-to-name factors called “personal fit.” The above post only addresses the first factor. At this point, I remain reasonably interested in working to mitigate pandemic risk, as well as other biosecurity risks.


0. Bill Gates paints the picture of a pandemic here:

1. From my current understanding, the academic biosecurity community includes people at the Johns Hopkins Center for Health Security, the Nuclear Threat Initiative, the Open Philanthropy Project and quite a few other institutions. See https://80000hours.org/podcast/episodes/beth-cameron-pandemic-preparedness/ for a full list.

2. This was generated from http://ghdx.healthdata.org/gbd-results-tool.
3. See https://80000hours.org/podcast/episodes/we-are-not-worried-enough-about-the-next-pandemic/ (“When we talk about biosecurity and pandemic preparedness… something even scarier”), https://80000hours.org/podcast/episodes/beth-cameron-pandemic-preparedness/, https://80000hours.org/podcast/episodes/tom-inglesby-health-security/ and https://www.openphilanthropy.org/research/cause-reports/biosecurity.

4. I don’t know about how the Spanish flu or Black Death death toll estimates are made, so I take them with a grain of salt. However, to take pandemic risk seriously, I don’t require the numbers to be that exact, given that such large numbers are plausible to me based on the exponential nature of pandemics and the huge variance in modern pandemic death tolls, and especially given multiple instances of global pandemics in history.

5. I think there are many factors; ask me if you’re interested. As mentioned before, there are also factors that potentially decrease modern pandemic risk, such as advanced medicine and public health knowledge. I won’t delve into it here, but my feeling is that factors increasing risk outweigh those decreasing it.

6. “In total, the plague may have reduced the world population from an estimated 450 million to 350–375 million in the 14th century.” 75-100 million / 450 million = 16.67-22.22%. https://en.wikipedia.org/wiki/Black_Death.

7. I first heard the pandemic-forest fire analogy from Marc Lipsitch.

8. It’s true that flu usually kills the extremely young or old, not those of child-bearing age. I would guess this is due to the stronger immunity of young adults and middle-aged people. However, the Spanish flu was a notable exception.

9. https://en.wikipedia.org/wiki/Black_Death

Can I reduce my energy usage to the global average?

I’m trying to reduce my energy usage to the level of the average global citizen’s. [**EDIT**: I am now tracking how well I’m doing that here.]

I was inspired to do so by renewable energy inventor Saul Griffith. In 2009, he gave a talk about climate change (http://longnow.org/seminars/02009/jan/16/climate-change-recalculated/):

“[Saul Griffith]’s been analyzing his own life in extreme detail to figure out exactly how much energy he uses and what changes might reduce the load. In 2007, when he started, he was consuming about 18,000 watts, like most Americans. The energy budget of the average person in the world is about 2,200 watts… [T]o stay at the world’s energy budget at 16 terawatts, while many of the poorest in the world might raise their standard of living to 2,200 watts, everyone now above that level would have to drop down to it.”

Well, how much can I reduce my energy usage?

I estimate that I am at **1,689 (3,385) watts**; the latter number tries to also very roughly account for my portion of the energy used by my country’s government and by the education, finance and healthcare services I use. (An earlier version of this estimate was 3,000+ watts, plus my portion of heating for the dorm I lived in and the places I work; I’m no longer in a dorm and don’t use house heat.)

Concretely, 1,689 (3,385) watts is:

1. *Flights*: Jet fuel for 2 roundtrips from SF to Boston each year (771 watts)
2. *Land transport*: Gas for one 8-mile roundtrip on a bus from Boston to Cambridge each week (225 watts)
3. *Stuff*: Energy for production and transportation\* of my laptop and phone, toiletries, clothes, miscellaneous items and the trash these produce (270 watts)
4. *Food*: Energy for production and transportation\* of 1 serving of milk per day, 1 serving of fish per month and 10 servings of vegetables per day (425 watts)
5. *Heat and electric*: Electricity and gas for heating of my dorm and workplace, refrigerator, cooking, laptop and phone (34 watts plus my portion of heating)
6. *Services*: Energy for production, transportation and retail of the healthcare, education and finance services I use (1080 watts)
7. *Society*: My portion of the energy spent by the US government, notably the construction of roads and the purchases and operations of the US military (464 watts)
8. *Buffer*: Things I forgot to count, plus miscellaneous things here and there that aren’t recurring or significant uses of energy (e.g. a random drink of wine over the holidays) (151 watts)
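Each line item above is an annual energy expenditure converted into an average, continuous power draw in watts. A minimal sketch of that conversion (the per-flight energy figure below is an illustrative order-of-magnitude assumption, not a number from my spreadsheet):

```python
# Convert an activity's annual energy use into an average power draw.
# Average watts = joules consumed per year / seconds per year.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def average_watts(megajoules_per_year: float) -> float:
    """Average continuous power (W) implied by an annual energy use."""
    return megajoules_per_year * 1e6 / SECONDS_PER_YEAR

# Illustrative assumption: one cross-country roundtrip flight costs very
# roughly 12,000 MJ of jet fuel per passenger.
per_flight_mj = 12_000
flight_watts = average_watts(2 * per_flight_mj)  # roughly 760 W for 2 flights/year
```

At that rough per-flight figure, two roundtrips a year land near the 771 watts listed above, which is why flights dominate my budget.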

What would I have to change in my life to get to 1,689 (3,385) watts? **I’d start by cutting flights, cars/buses and bought stuff, the highest bang-per-buck areas** for me and Saul (and, I suspect, many of my friends). I actually don’t have that much to cut to get to the lifestyle described above, minus a few things:

1. For 2019, I’m planning on traveling once to Peru for vacation and once home over the holidays. Re Peru: in general, I’d need to do such vacation travel very infrequently, or else replace the holiday trip home with a trip to a vacation destination.
2. I’d have to cut three trips per week to Cambridge (which I did in Fall 2018) down to one. I could accomplish this by coordinating all my meetings and activities to happen on one day per week, participating in others remotely instead of in person, prioritizing which activities in Cambridge are truly important to me and potentially staying over in Cambridge.
3. I’d have to reduce buying from Amazon (I count 19 orders over 17 weeks in September to December 2018) with the wish list method\*\* or by buying used things, which seems like a great and fair-accounting way to reduce energy usage.
4. Less important things: I could take cold showers exclusively, which I occasionally do for the thrill and am curious to turn into a habit.
5. Relatively easy things: I’d have to continue my habits of not buying recurring new things except food and toiletries (e.g. no recurring clothes!), not buying non-essential things (e.g. no physical books), not buying new big things (e.g. furniture), not using the heat in my room (wear warm clothes or insulate the room instead), etc.

I’ve posted the details of my accounting here, including the numbers I used for energy usage of each lifestyle item or action: https://docs.google.com/spreadsheets/d/1c5741VO0wBtWL5ph4E6APThuwR7lF-LH6arTxPXlv3g/edit?usp=sharing.

How accurate are the numbers I’m using for energy usage per lifestyle action or item? I haven’t checked them myself, but I somewhat don’t care about the exact numbers, because the direction is correct (i.e. using/doing less means reducing your energy usage), and the general action item remains the same: **travel less and buy fewer things and services.** I *do* care about not forgetting my sources of energy usage (e.g. I forgot about electricity for lighting the first time around), and I especially care about not forgetting the biggest contributors. I also care that the numbers’ orders of magnitude are correct. I am inclined to trust Saul’s numbers because he seems to be very rigorous and detailed. You can judge for yourself by watching Saul’s talk (http://longnow.org/seminars/02009/jan/16/climate-change-recalculated/, especially 47:46-51:00) or his other talk (https://www.youtube.com/watch?v=1ewEaTlGz4s, especially 31:00-35:00), which he apparently prepared by combing through 60,000 pages of footnotes of energy data.

If you’re interested in making these calculations for your life, I’d recommend watching 47:46-51:00 in Saul’s talk to get an overview of the process: http://longnow.org/seminars/02009/jan/16/climate-change-recalculated/ \*\*\*. Then you could use a spreadsheet similar to mine. My spreadsheet has some common line items (e.g. watts for air or car travel) but excludes others that aren’t part of my life (e.g. my room came furnished, so I haven’t bought furniture). Your lifestyle likely has different stuff and actions than mine; to find the numbers of watts for these things or actions, I’d recommend referring to Saul’s slides, which are in high definition here: https://www.dropbox.com/s/86tfvc6mm5gbbv9/longnow16jan09-090905230147-phpapp01.pdf?dl=0. Specifically, pages 75, 79 and 81 are useful for watts on overall goods and services, food and physical goods, respectively. I tried searching for an online energy usage calculator but couldn’t find one\*\*\*\*. If there’s a number you’re looking for that isn’t in Saul’s slides, you might find it in this MIT climate course lecture, although I haven’t looked at the lecture myself and didn’t use it for my calculations: https://ocw.mit.edu/courses/chemistry/5-92-energy-environment-and-society-spring-2007/lecture-notes/energy_calc_guid.pdf.

Am I uncomfortable making these cuts in my life? For me, I’ve always been a pretty cheap homebody. I spend a lot of time using the computer (for work, reading or chatting with friends) or taking walks with nearby friends. I hate owning too many things because I dislike spending mental effort tracking them. So the ideas of living minimally have been easy for me to adopt, and the environmental argument for living this way is a cherry on top. For me, the most difficult sacrifice of these cuts is not flying home more often to see my family and close friends, but I see this as solvable by making my trips home longer and getting better at having a deep relationship remotely, with gifts, cards and quality time online. According to Long Now, after Saul’s cuts, “[h]e’s healthier, eats better, has more time with his family, and the stuff he has he cherishes.” I think that many people would similarly find that living in a lower-footprint way would change their lives in other ways that matter to them, sometimes positive and sometimes negative. If those changes are positive, then these “cuts” will look more like improvements and tend to stick.

I don’t want to say that there’s no environmental motivation for what I’m doing. While I’m not under the illusion that the cuts in my life will, by themselves, put a dent into climate change, I hold out some hope that I will figure out a way to have a larger impact on climate change. Living this way is potentially educational in that endeavor and could make me more credible to have that kind of impact. Moreover, it could be that developed-world citizens *must* make this magnitude of cuts (and/or make up their carbon footprint in some other way\*\*\*\*\*) for it to even be feasible for renewable energy to scale to cover the world’s energy usage. Something just resonates emotionally with me when I hear Saul make this argument, even though he says it in such a nonchalant way (to see him say it with graphics, start at 46:46 in http://longnow.org/seminars/02009/jan/16/climate-change-recalculated/):

“So you now realize that the 18,000 watts or 17,000 watts in Saul Griffith’s life looks a little like extravagant, because if everyone in the world went from 2,500 watts to 18,000 watts suddenly, we are not going to need to have just Renewistan [Saul’s imaginary planet that produces the world’s 2009 energy consumption, 16 TW, from renewable sources]; we are going to need 6 or 7 Renewistans. That’s not going to scale. So it’s inevitable that China and India bring their power consumption per capita up, and probably we shouldn’t begrudge anyone in the less developed nations to do so. And that sort of means that we [developed-world citizens] have to go down.”

If everyone has to make these cuts, then in particular, I have to make these cuts.


\* This figure possibly includes energy for retail as well, i.e. the energy to keep open the store in which I buy food and stuff. I’m not sure.
\*\* The wish list method: whenever I have the desire to buy something, I put it on a wish list and only buy it if, 30 days later, I still feel I need it.
\*\*\* If you have time, I found the whole talk to be very informative, entertaining and inspiring!
\*\*\*\* Saul created such a calculator called WattzOn at some point before 2009, but I wasn’t able to find the calculator today (in 2019). It used to exist, because I found it on Wayback Machine: https://web.archive.org/web/20120212172149/http://wattzon.com/track-and-monitor; however, the calculator doesn’t seem to be functional on Wayback Machine.
\*\*\*\*\* For example, by planting trees.

Talking to Himself, November 3 to November 4

Andrew November 3: So, explain this to me. What’s your, I mean our, vision for the near-term future of intelligence? I mean, in the next 5–10 years.

Andrew November 4: Whoa… that’s a big question, and maybe worth a 30-minute brainstorm and 2 cups of tea. Unfortunately I, I mean we, don’t have the answer to that one quite yet. But I can tell you something else I learned today. I think I’m getting a clearer understanding of how human and machine intelligence might complement each other in solving problems. And in particular, 2 types of human intelligence.

AN3: Darn, you got my hopes up with all that talk about the singularity… but I’ll hear you out. 2 types of human intelligence… what do you mean?

AN4: First of all, by intelligence I mean the ability to achieve one’s goals in a wide variety of environments. I won’t define it further; I’ll leave you with people’s usual connotations of intelligence. Now the 2 types I’m referring to are expert intelligence and layman intelligence. Layman intelligence is the intelligence the average human gets as part of the current “standard package” of being born as a human. That includes incredible association-forming mechanisms and visual and spatial recognition. Expert intelligence, as I’m defining it here, is any intelligence beyond this standard package. Chess grandmasters, professional basketball players, and scientists trained to do physics all demonstrate expert intelligence.

AN3: OK, I see. And what’s the purpose of distinguishing between these 2 types of human intelligence?

AN4: Perhaps an example will help here. Recall our initial goal is to understand how these 3 types of intelligence (2 human, 1 machine) might complement each other in solving problems. I’m going to steal a couple examples from Michael Nielsen’s Reinventing Discovery: The New Era of Networked Science. Suppose we were trying to solve the problem of mapping out the entities in the universe, where by “mapping out entities” I mean knowing what entities are useful to talk about, some characteristics of those entities (e.g. the electromagnetic spectra and thus composition of those entities, or the shapes of galaxies – spiral, elliptical, etc.), including their locations, and the important relations between those entities (e.g. one type of relation is “belonging”: planets belong to stars, which belong to galaxies; another type is gravitation). Given the state of today’s world and technology, what are the subproblems we’d have to solve in order to accomplish this?

AN3: Well, I imagine we’d first have to image as much of the galaxy, i.e. as much of the sky, as we could – I’m envisioning panning a super high-resolution telescope across the night sky from an observatory somewhere in Chile. Then we could in theory lay out all the printed images on the ground in the same positions that we imaged them, and look at each one in turn. That might be a lot of images though… we might want to understand both the macro-structure of larger universal entities like superclusters as well as micro-structures like individual stars and black holes. So we might have to zoom in and out to figure out which entities are useful to talk about. And then we could go and study their characteristics like their elemental composition via spectroscopy, we could classify them, and we could…

AN4: Yes, I think that will be enough detail to illustrate the point. You’re always planning about how to solve problems, aren’t you?

AN3: I thought we liked that about ourself.

AN4: I guess we do… Anyway, let’s take the first subproblem you mentioned – imaging the night sky. As you mentioned, this is clearly a job for a precise machine like a high-resolution telescope. Plus, I’m guessing that we’ve built telescopes to record electromagnetic radiation outside the range of visible light as well (not sure about this though). You couldn’t imagine just looking out at the night sky and recording the positions of all the stars in a notebook, could you? Well, I guess Tycho Brahe did that, but today we can get much more resolution than we could just by looking… Tycho never would have been able to differentiate a nearby, less bright star from a brighter star further away, for instance.

AN3: Yes, that’s all reasonable… What’s your point?

AN4: Here we have one conclusion: machines can measure the things that humans have designed them to measure more precisely and reliably than humans can. We’re going to be making generalizations like this to answer our original question of how human and machine intelligences might complement each other.

AN3: Ignoring my distaste for generalizations based on single examples, I like the direction in which this is going – we’re trying to characterize the comparative advantages that machine and the 2 human types of intelligence have with respect to each other. Then we can plot out how we might solve the “mapping out the universe” problem!

AN4: Exactly!

AN3: Let me try making a characterization now! Let’s take the second subproblem, that of taking the images output by the telescope and figuring out, I guess, what’s in the images.

AN4: I would suggest that you break that down into more subproblems.

AN3: What do you mean?

AN4: Here, I’ll suggest one strategy for “figuring out what’s in the images.” Step one: identify all the universal entities currently known to astronomy (e.g. galaxies, stars) and what they’re documented to look like. Step two: look at a sample of the images and figure out probably multiple resolutions at which to display those images so that we can actually visualize the entities and their relationships, as well as catch unforeseen entities and relationships. Then, to map all the known entities, follow step three: for each level of resolution, search through all the images for things that look like what the various known entities would look like at that resolution. To discover unforeseen entities and relationships…

AN3: OK, I get your point. So for step one, I would leverage the second type of human intelligence you mentioned, expert intelligence (in this case, the astronomer’s intelligence), to know the latest entities known to astronomy, as well as their various types and what they look like. For step two, perhaps this is the job of the astronomer as well, because the astronomer is likely to know what kind of visualization characteristically depicts an entity and depicts an entity relationship. For step three… obviously the astronomer can’t look at all the images herself! Could the astronomer possibly get the aid of a computer vision expert and write computer vision algorithms to detect the entities?

AN4: Such algorithms as they are today could get you part of the way there; for example, they could screen out obvious non-entities. But without a mass of labeled training data, they probably couldn’t tell you whether a galaxy was spiral or elliptical.

AN3: Dang… but it seems impractical to have the astronomer, or even her whole research lab, do this labeling of spiral vs. elliptical galaxies.

AN4: You’re forgetting about the third type of intelligence we mentioned.

AN3: Layman intelligence? Is classifying a galaxy as spiral or elliptical easily teachable to a layman?

AN4: It’s surprisingly accessible; you can go to http://www.galaxyzoo.org/ and check out how accessible it is yourself. Classifying galaxy shapes is one place where our human “standard package,” which includes spatial recognition, gets a lot of leverage.

AN3: Oh, wow – I’m getting the hang of this! It’s even kind of fun =) … Sorry, back to serious talk.

AN4: In fact, the solutions we’ve proposed here for “mapping out the universe” are a simplified version of what the Sloan Digital Sky Survey and scientists Kevin Schawinski and Chris Lintott (who built Galaxy Zoo) actually did!

AN3: Wow, that wasn’t as hard as I thought it’d be =)

AN4: Well, let’s think about the characterizations we can make about the three types of intelligence. So far we have that machines can measure precisely and reliably. What else?

AN3: It seems like two expert intelligences were important here: the astronomer’s, for knowing the universal entities and their relationships, as well as what they looked like, and the computer vision programmer’s, for knowing what’s possible with computer vision and for designing the algorithm to filter out obvious non-entities. I’m not sure how to generalize the expert intelligence here except to say that the expert intelligences were good for having the relevant “domain knowledge.” And then the layman intelligences were good at the spatial recognition to classify spiral and elliptical galaxies, as well as at simply having a lot more time, spread across thousands of citizens, than the two astronomers.

AN4: I think I like those general characterizations… And, as you can see, of the three types of intelligence, the expert intelligence currently has the most case-by-case comparative advantage, in the sense that it’s hard to make general statements about when expert intelligence beats out the other two types except to say that, perhaps, expert intelligence is good at having the “domain knowledge and ways of thinking” in areas where there exist human experts.

AN3: Oh, I see, as opposed to layman intelligence, where you can point to very specific comparative advantages like spatial recognition, natural language processing, association-forming mechanisms, etc. And the same thing for machines: at least right now, you can point to specific comparative advantages like speed of computation, accuracy of measurement and computation, reliability of measurement and computation, freedom from boredom, etc.

AN4: Precisely. Also, we did miss one comparative advantage above that also belongs to the astronomer’s expert intelligence. I would say that “knowing the right questions to ask” about the map – in this case, focusing attention on the distinction between spiral and elliptical galaxies among all possible questions to be asked about the map – is the expert’s comparative advantage.

AN3: Very interesting! I do have one question here though. As you know, I’ve been reading Ray Kurzweil’s The Singularity is Near and the book makes compelling arguments that machine intelligence is growing at an accelerating pace and human biological intelligence, at least the way it exists now, will become only a small part of our human-machine joint intelligences in the future. Will that make the analysis we’ve done here obsolete?

AN4: Although I would take Ray Kurzweil’s predictions with a grain of salt, I would agree that machine intelligence will continue to beat human intelligence in more and more areas that we currently consider the comparative advantage of expert or layman intelligences. As an example, Schawinski and Lintott have actually trained a classification algorithm based on the Zooites’ classifications of spiral vs. elliptical galaxies to automatically do the classification without layman input, and the algorithm has achieved a 90 percent AUC! It does seem to be the tendency that machine intelligence will replace human intelligence in areas like spatial recognition and NLP, as well as other areas; the only highly uncertain question is when this will happen, but even Kurzweil’s estimates suggest that such advances are still at least five years out. In the meantime, it may be worthwhile to think about how we might leverage these three types of intelligence as they exist today.
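The Galaxy Zoo follow-up AN4 mentions — training a classifier on the Zooites’ labels so a machine can take over routine classification — can be sketched with a toy model. Everything here (the two features, the data points, the nearest-centroid method) is an illustrative assumption; the actual Galaxy Zoo work used real image-derived features and a different model:

```python
# A toy stand-in for "learn from the crowd's labels": a nearest-centroid
# classifier trained on citizen-science votes.

def train_centroids(samples, labels):
    """Average the feature vectors belonging to each class label."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def classify(centroids, x):
    """Assign x to the class with the nearest centroid (squared distance)."""
    return min(centroids,
               key=lambda y: sum((a - b) ** 2 for a, b in zip(centroids[y], x)))

# Invented features: (bulge dominance, spiral-arm strength), labeled by "Zooites".
crowd_data = [(0.9, 0.1), (0.8, 0.2), (0.2, 0.9), (0.3, 0.8)]
crowd_labels = ["elliptical", "elliptical", "spiral", "spiral"]

model = train_centroids(crowd_data, crowd_labels)
prediction = classify(model, (0.85, 0.15))  # -> "elliptical"
```

Once trained on enough crowd labels, the model classifies new galaxies without further layman input — which is exactly the handoff from layman to machine intelligence that AN4 describes.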

Reconciliation of Intellectual Inquiry and Religious Education in American Universities

As my junior spring semester at Harvard ends, I am beginning to reflect and wrap up learnings for the year. I came across this essay I wrote a few months ago for my “Democracy and Education in Modern America” class, and I thought it might be of interest to anyone curious about history of American universities, history of religious education, and/or my personal development of ideas about “value systems.” Since I wrote this essay, my Christian friend Brian Zhang has added to my thoughts on religious belief (especially to William James’s interpretation of it), yet I think it is interesting to see my thoughts on religious belief before I considered his perspective.

Reconciliation of Intellectual Inquiry and Religious Education in American Universities

When asked about the value of their higher education experience, many American university students will talk about their intellectual education, which Charles Eliot describes as “a mental training inferior to none in breadth and vigor [and] a thirst for knowledge” (Eliot). Yet fewer students will talk about their “religious education,” by which I mean their process of learning the core beliefs and values according to which they should live. But a religious education is, to varying degrees, an important part of many university students’ experiences as well. Separated from their parents and hometown morals and placed into an environment with incredible freedom and many choices—including the freedoms to choose how they use their time and which peers to befriend—many students are initially overwhelmed by their newfound freedom and the accompanying realization that “freedom is responsibility” (Eliot). When these students start asking the questions “How should I choose?” or “What should I do now?” they necessarily start asking harder value questions such as “What do and should I care about?” They might look to their peers, families, and role model professors for answers. The difficulty of answering such questions is the beginning of their lifelong religious education to uncover their core beliefs and values; such religious education is one of the great purposes of higher education, alongside intellectual education.

Before 1900, American university educators sought to meet this need for difficult answers by simply telling students the answers given by Christianity. Such “telling” was a very direct form of religious education—it consisted of chapel services for students to instill good morals and a sense of community (Reuben 119) and courses in “Natural Theology and Evidences of Christianity” (Reuben 89). However, as the late 1800s saw the spirit of open intellectual and scientific inquiry begin to sweep the academic fields, including philosophy and religion, the dogmatism of “telling” in religious education no longer sufficed, and universities struggled for 30 years to reconcile intellectual inquiry and religious education in their students, ultimately giving up by outsourcing religious education to competing student groups and outside religious groups (Reuben 132). Although university leaders ostensibly failed to reconcile intellectual inquiry and religious education after losing control of religious education during 1890-1920, the resulting decentralization of religious life into a diverse set of student communities has actually reconciled intellectual inquiry and religious education by giving students the control to both design more emotional and personal religious education for their peers and conduct their own investigations of the religious systems by which they want to live.

From 1890-1920, American university leaders failed to reconcile intellectual inquiry and religious education in their students because they failed to understand that religious education is mostly initially relevant to students because of the emotional strength—not intellectual wisdom—it provides. It was a natural mistake for universities to make, since one of their main goals is to provide their students with an intellectual education, and since the institutions most easily under university control—the teaching of courses and research—were intellectual. This is evident from the universities’ first attempts at this reconciliation—university presidents started by searching for professors who combined intellectual inquiry and good religious character in their teaching, “who were independent thinkers… who were sympathetic to Christian beliefs and values” (Reuben 90); then sought to create new fields of research under the umbrella of “science of religion,” which included the psychology and sociology of religion as well as literary criticism of the Bible (Reuben 102); and then even tried to secretly insert religion into courses that were deceptively named scientifically, such as “The Things That Shape a Nation’s Character” (Reuben 116). That all efforts at reconciliation focused on improving courses and research indicates the intellectual focus of these universities’ first attempts at religious education. Faculty’s assumption that students would take courses on religion if “taught scientifically” (Reuben 95) is also telling of their misunderstanding of the value of religious education to students.

In fact, as William James emphasizes in his “Varieties of Religious Experience,” it is emotional strength—not intellectual challenge—that is religion’s most relevant benefit to religious people. He discounts the value of intellectual pursuits in religion when he says that “personal religion will prove itself more fundamental than either theology or ecclesiasticism” (James 744), the intellectual and organizational outgrowths of personal religion. James later demonstrates that this “fundamental” characteristic of personal religion is the emotional strength it provides to the participant: “There is a state of mind, known to religious men, but to no others… The time for tension in our soul is over, and that of happy relaxation, of calm deep breathing, of an eternal present, with no discordant future to be anxious about, has arrived… No other emotion than religious emotion can bring a man to this peculiar pass” (James 755). James’s physical images of “deep breathing,” “relaxation,” and “anxiety” suggest that religion’s great appeal is the happy and peaceful “religious emotion” it brings, not intricate theological arguments or doctrinal differences. We can easily imagine, then, why universities’ first attempts to add intellectual rigor to religious education failed to appeal to students. Evidence of failure includes “student rowdiness at [chapel] services” (Reuben 119) and declining attendance of classes concerning religion.

After these failures, universities gave up control of religious education to student groups and religious organizations leading up to 1920 (Reuben 132). This move would appear to symbolize the ultimate failure to reconcile intellectual inquiry and religious education; after all, William James’s claim about the fundamentality of religious emotion seems to make such reconciliation impossible. But, after broadening “religion” to encompass its general aforementioned definition as a set of values and beliefs according to which one should live, we will see that the end of the university “monopoly” on religious education and the rise of diverse student communities each attempting to serve different students actually fostered each student’s process of inquiry into the religions of each of these communities. This is the reconciliation of open intellectual inquiry and religious education.

There are two main reasons that a diversity of student communities, rather than the single university, has succeeded in reconciling open inquiry and religious education. The first is that students now have the control to design more emotional and personal “religious education” for their peers. For example, taking Christian groups as an example, the 1910s saw the rise in influence of student YMCA and YWCA groups as well as the University of Chicago Christian Union (Reuben 129), and today at Harvard there exists a variety of Christian organizations, including Harvard College Faith and Action (HCFA) and Asian American Christian Fellowship (AACF). These groups attract hundreds of students in modern times because the students running them understand the strong emotional and personal needs of their peers that go unmet elsewhere in a university; these include emotional strength during such existential crises as the “overwhelming freedom” of college I identify above, as well as a close community to which one feels a special connection. The diversity of these communities is particularly important since today’s student bodies are much more diverse; thus the Harvard Christian groups have specific ethnic appeal (e.g. AACF), gender appeal (e.g. YMCA and YWCA), and denominational appeal (e.g. Orthodox Christian Fellowship). It is difficult to imagine a single university coordinating the religious education and close community of such a diverse set of students as today’s, or even of the diversifying set that entered university in the 1920s; this set included women in coeducational schools (Angell) and students of different denominations (Reuben 122).

The other reason for the success of diverse student communities is that they enabled a student to conduct his or her own inquiry of the various religions offered by the various communities on campus. Here I will broaden my definition of “religion” as above to encompass any set of values and beliefs according to which one should live, so that we can interpret traditional theistic religions as examples of this broadened definition but also include secular belief systems. This implies that we can interpret student communities to have meanings broader than organized student groups; they also include the students from one’s concentration, one’s house, and one’s sports teams. From the point of view of a student at Harvard beginning to grapple with the questions I pose above regarding what one should do or what one values, the diverse set of communities at Harvard is a perfect field for open inquiry into the religions by which one would prefer to live. Each community has its own set of beliefs and values, and by analyzing and choosing communities, a student is doing exactly the intellectual inquiry into which type of religious education he or she desires.

For example, I recently became a member of the effective altruism community at Harvard, which is dedicated to doing good for the world in the most effective way. The community has as its religion—or core beliefs and values—that one should do the most good one can in the world, and that it is possible to figure out which methods of doing good are better than other methods. The reason I chose to surround myself with people in effective altruism is because I have felt the strongest emotional and intellectual connection to it. It is filled with people who think mathematically and rationally, which appeals to my desire for rigorous evidence and proof in making decisions, and who are searching for ways to improve the world. Moreover, I have evaluated how strongly I connect with the effective altruist community compared to other communities around Harvard—including the Christian communities, math academic community, or the community that embraces the philosophies of classes like Justice or Chinese Philosophy—and concluded my search with effective altruism. This is how I have reconciled open inquiry with religious education of my own beliefs and values, making the choice of religion ultimately my own.

In sum, the decentralization of religious life into a diverse set of student communities has actually reconciled intellectual inquiry and religious education by giving students the control to both design more emotional and personal religious education for their peers and conduct their own investigations of the religious systems by which they want to live. Because of this decentralization, both open inquiry and religious education have complemented each other in constituting a large part of the value of higher education for me.


  1. Angell, James. “Presidency of the University of Michigan.” In The Reminiscences of James Burrill Angell. Longmans, Green, and Co., 1911.
  2. Eliot, Charles. “The New Education.” The Atlantic, February 27, 1869.
  3. James, William. The Writings of William James: A Comprehensive Edition, Including an Annotated Bibliography Updated Through 1977. Chicago, IL: University of Chicago Press, 1977.
  4. Reuben, Julie. The Making of the Modern University: Intellectual Transformation and the Marginalization of Morality. Chicago, IL: University of Chicago Press, 1996.

Max Tegmark Talk Notes

As a Philanthropy Fellow for Harvard Effective Altruism, I had the opportunity to get dinner with MIT physicist Max Tegmark and the other Fellows on Tuesday evening. We had a fascinating discussion about existential risks and why much of the public today does not even think about the destructive potential of risks such as unfriendly artificial intelligence (AI), while just 50 years ago, during the Cold War, people all over the world actually felt and believed in the real possibility of human extinction. Max noted that the Cuban Missile Crisis and realistic depictions of nuclear war (such as the TV film Threads) played a large role in making the public aware of those threats. This suggests that films about unfriendly AI, one of the first of which seems to be the upcoming Transcendence, could spread public awareness about current existential risks.

I also learned a lot about cosmology from Max’s fascinating talk related to his book Our Mathematical Universe; I’ve posted notes for the talk here. If you find the notes interesting, definitely check out his book, which Ben recommends as an excellent read.

Quick reflections on Tanzania: Part 2, on the Difficulty of Doing Development

For most of January, I worked in Tanzania as a Tech in the World Fellow. Many people have asked me for my reflections on the experience and how it changes my life going forward, so I’ve written them up in Part 1 and in this post.

Tech in the World has a stated mission: “to expose top computer science students to underserved needs in developing communities and the various ways technology can be applied to address these global issues.” Surely something deeper underlies that mission: why expose computer science students to developing nations? So that those students will work there and help make an impact on an area of the world that is impoverished and could greatly improve in quality of life.

So if you want a final assessment of Tech in the World and whether it is achieving this ultimate goal, you will ask, “Andrew, how do your future plans change after doing development work in Tanzania?” For some reason, I found this question quite hard to answer the first few times I was asked, but I then simplified it by envisioning two (among many possible) post-graduate futures for myself. The first has me working as a technologist and problem solver in Silicon Valley, surrounded by people I admire and learn from, solving a problem that is interesting both technically and in terms of the “business” questions surrounding the value my company can provide, my long-term strategy for achieving my mission, and so on. Ideally, I am riding an innovation “wave” in a slow but important industry that is just beginning to accelerate, such as government, education, or energy. Let’s call this future “Comfortable Future.”

The second future has me in Tanzania doing (and rising in) software, global health, investment, or really any type of work that improves the state of human and economic development in the country (see Part 1 for concrete examples of development problems to be solved). I may be working within an institution like Ifakara Health Institute or starting my own, and of course I’ll be living in Tanzania with both my favorite and least favorite aspects of its culture, climate, and daily facts of life (such as electricity outages). Let’s call this future “Uncertain Future.”

Which future looks better as I close my eyes to imagine each? If “Uncertain Future” means graduating and immediately pursuing work akin to my Tech in the World experience, prolonged for several years with everything else constant, then I would prefer “Comfortable Future.” This is not to say that I didn’t enjoy my Tech in the World experience; given the same choice back in the fall with the hindsight I have now, I would certainly still have gone. Rather, I initially feel uncomfortable with the idea of being one of the very few Harvard (math and CS) graduates, technologists, people from the United States, and people from my friend group to dedicate several years of my life to struggling to solve problems in Tanzania’s pole pole culture, almost alone in my decision to go there in the first place. After trying to break down this discomfort in terms of my values of personal growth and world welfare (note how the latter has evolved from what I previously called “memorable achievement”), I can imagine changes to the “Uncertain Future” scenario that would make me prefer it over “Comfortable Future.” I think these changes actually illustrate some of the reasons that many peers and I hesitate to do development work despite knowing about the significant problems to which we could contribute.

I would prefer “Uncertain Future” over “Comfortable Future”…

1. If I were no longer personally growing in “Comfortable Future.”

For example, suppose that in 20 years I had learned all I cared to learn about Silicon Valley: developing my software and hardware engineering expertise, gaining extensive experience leading a company or two in different industries, seeing a wide variety of problems, learning to work with all types of people (within Silicon Valley, that is), and building relationships and community with movers and shakers. Then I would prefer the new challenge and growth opportunity offered by “Uncertain Future.” My desire for personal growth is like the American obsession with expansion of the frontier during manifest destiny: always pushing boundaries into the unknown and untested parts of me and improving (aka colonizing) those parts. This scenario is pretty conceivable.

2. If a bunch of people I admired and wanted to learn from decided to start working in “Uncertain Future.”

Even if this happened right after graduation, I think I would go for “Uncertain Future” in a heartbeat. Unfortunately (based on the anecdotal evidence of my friends and network at Harvard and MIT), I see a much higher concentration of people I admire and can learn from following “Comfortable Future” instead of “Uncertain Future.” These “people I admire” include several highly visionary, charismatic or empathic, and/or brilliant friends I have met in college or while working, as well as leaders who inspire the entire communities I come from, Silicon Valley and Harvard. (From Silicon Valley, such leaders include the heads of recent visionary enterprises like Google, Microsoft, Khan Academy, Udacity, Palantir, Dropbox, Asana, Cloudera, and OpenGov, as well as Silicon Valley legends like Xerox PARC, investors like Peter Thiel, and innovators like Elon Musk. From Harvard, such leaders mainly include academics like Amartya Sen, Niall Ferguson, Steven Pinker, Doug Melton, Joe Blitzstein, Ed Glaeser, and Paul Farmer.) This higher concentration of people I admire doing work mainly in entrepreneurship, technology, academia, and (to a much lesser extent) finance and consulting doesn’t seem to be spilling over into international development. I sense a chicken-and-egg problem here: the people I think I could learn from are not in development because they themselves want to be surrounded by people they admire, and of course not many of them are willing to go work in Tanzania without their circle of mentors and high-achieving peers around them.
There are many exceptions to this generalization: many of the leaders I mentioned, and my inspirational friends and co-workers, do impact world welfare via philanthropy and charity, whether my roommate Ben Kuhn (who runs Harvard Effective Altruism), tech giants like Dustin Moskovitz (who started Good Ventures with his wife Cari Tuna), former Bridgewater analysts like Holden Karnofsky and Elie Hassenfeld (who founded GiveWell), or of course Bill and Melinda Gates through their grant-making foundation. But I can point to fewer people I want to learn from who have actually done development work themselves beyond making or optimizing donations (not to trivialize donations, which are incredibly important), and even fewer who will be doing it at the time I graduate. (A few exceptions I know of include Dimagi and some of the leaders at MIT’s D-Lab.) If more of these greats were to start doing development work, I would happily join them, learning from people better than I am and personally growing while achieving world welfare. (If you are an effective altruist pointing out that you might have more comparative advantage making lots of money and donating it instead of doing the development work yourself, please see my thoughts on that below [1].)

3. If there were more social and economic support for “Uncertain Future.”

By social and economic support, I mean having close friends (and perhaps a significant other) nearby who are positive, curious, and compassionate people, and some source of income that supports a modest standard of living while letting me freely pursue interests and projects without feeling my agency restricted. I think both of these are quite possible (i.e. I can make new friends, try to convince old friends to join me, and earn a reasonable income). But when I first pictured working by myself post-graduation, I briefly (and irrationally) pictured lacking both kinds of support (i.e. not having friends and not living with enough money), even though the lack of friends would be solved by Tanzanians’ friendliness and the lower income offset by the lower cost of living. I believe many people who have not been to Tanzania will seriously picture a lack of social and economic support when you ask them to imagine doing work there, and this may be what causes the gut discomfort with “Uncertain Future.”

4. If my comparative advantage were strongly in favor of “Uncertain Future.”

This is where my world welfare value comes in (notice that the first three conditions concerned personal growth). You might think that, on the world welfare criterion, the “Uncertain Future” of improving the health of an impoverished nation clearly wins over the “Comfortable Future” of solving a problem in the wealthy United States. Indeed, seeing the developing world completely without sickness matters more to me than seeing everyone in the United States with a proper education. But the other question I must ask myself for the world welfare criterion is about my comparative advantage: on which problem does my choice to work on it (versus not working on it) make the biggest difference? With a problem-solving, getting-things-done, people-oriented, and narrowly technical skillset (in data analysis and software engineering), I can see that I still have some comparative advantage in “Comfortable Future” (although I think I would be replaceable in that setting). My comparative advantage in “Uncertain Future” depends heavily on the problem I am working on. If I am trying to solve one of Tanzania’s bigger problems, in electricity infrastructure or drinkable running water, I lack any technical comparative advantage but could still contribute as a generalist in enterprise strategy, attracting technical talent, or executing on projects. If I were building applications for mobile phones, then I would have technical as well as other comparative advantage. The reason “Uncertain Future” does not win significantly on this criterion is that the problems I would have lots of comparative advantage on in Tanzania (e.g. problems involving data and software) do not impact world welfare much more than similar projects in Silicon Valley (e.g. I could work on online education here in the United States, with implications for the rest of the world), and the problems that have high welfare impact in Tanzania (such as electricity or water infrastructure) are not ones I have comparative advantage in.

Back to the Question

So how does this answer the original question of how my future plans change after Tech in the World? I think conditions 1-4 will happen at some point in my lifetime, perhaps within the next 25 years, and at that point I will prefer “Uncertain Future” to “Comfortable Future.” Tech in the World has helped me consider the possibility of “Uncertain Future” at all, and characterize what is holding me (and, I believe, many of my peers) back from doing impactful work on development problems ranging from providing drinkable running water to teaching more effectively in schools.

Because of Tech in the World, I am significantly more likely to do more impactful work in the developing world in the future.


[1] One note on the effective altruist argument that, depending on who you are, your comparative advantage in maximizing world welfare might be to make a lot of money and donate it instead of doing the development work yourself. I used to buy this argument strongly for myself, but being in Tanzania has made me reconsider it (although I can’t generalize to other nations). The claim that I should spend my time making a lot of money and donating it, instead of doing development work myself (whether medical work, broader health research, technology development, or education), assumes that my donations cause multiple people to go in my stead who, combined, are more effective than I alone would have been. But depending on which kind of people are needed, I would guess that spending my time increasing the incentives to do development work and breaking down the barriers mentioned in this blog post (e.g. by starting a scalable version of a program like Tech in the World) is a more effective way to cause people with medical, health research, development economics, technological, and pedagogical backgrounds to do development work. (This assumes you want to solve problems that need people with these kinds of technical expertise and motivation.) Effective altruists: what do you think about the problem of getting more people into development work?

Quick reflections on Tanzania: Part 1, on Development

Now we’re back in school. What a change of scenery to be submerged in the Boston snow after four weeks of 90-degree Tanzanian weather! It’s helpful to take a step back from the rush of school (yes, including getting used to being surrounded by hundreds of peers) and think about my experience in Tanzania. In this series of blog posts, I’ll talk about the big things I learned, and then tackle the harder and more interesting question of what changes in my life and my plans now.

The order of certain technological developments in Tanzania (at least in Dar es Salaam) differs from the order of those same developments in the United States. Cell phones are very popular now: the sequence in Tanzania has been widespread 3G and cell phones first (even though only 20 percent of the country has electricity), then accessible personal computers, then widespread electricity and Ethernet/WiFi. Compare this to the almost opposite order in the United States. It’s interesting to think that many Tanzanians quite likely will never use personal computers as their main devices for communication and other needs like transferring cash (see M-Pesa), instead defaulting to their phones (as Dave Morin has emphasized before). There are already many entrepreneurs and problem solvers, many of them local Tanzanians at incubators like TANZICT, who are taking this to heart and developing applications for old Nokia mobile phones (not smartphones). Here is a likely opportunity to influence Tanzania’s technological development in the next 10 years.

Beyond the technological view on development, there is a lot of room to improve the general quality of life. In the next 20 years, it seems certain that Tanzania will need drinkable running water, cheap and well-distributed anti-malarial treatment (especially in rural areas), and a public transit system (the traffic congestion is bad enough that it is possible to waste 3 hours driving 20 km to the airport). I am less certain about the future of other possible improvements to standards that we have in the United States, such as a “modern” education system focused on teaching students how to think, rather than the current pattern of taking tools and skills (e.g. Java) from the West and trying to adapt them to the students. Compared with the needs to stay hydrated, stay healthy, and get places, the need for education is less well-defined. The purposes of the first three are clearly critical to life, but the purpose of education, whether vocational training or the cultivation of good citizens, is still not settled even in the United States, and thus could lead to a completely different form of “modern” education than the system in the United States today. Even just a hundred years ago, the United States and the major European powers all had different purposes for education, which manifested in university systems that looked completely different:

[Speaking about 1890-1940:] Universities had long existed in Europe, where they took several forms: the classical studies of British universities, the scientific training of French grand ecoles, and the graduate and research institutes of Germany. The modern university of the New World, however, was a different creature than its European counterpart, for it served a far broader clientele of students and the state, yet increasingly strove to be a research center. [1]

I am very interested to see how the education system in Tanzania develops, just as I am learning that different countries have the potential to develop in completely different ways (which relates to cultural differences such as the lack of private space and ownership). Just as Tanzania is skipping personal computers and going straight to mobile phones, and just as Estonia skipped from no internet infrastructure after the Soviet collapse to using the internet to vote, do tax returns, and issue prescriptions, I expect the Tanzanian education system to skip to some of the cutting-edge work in education, including the use of online resources like Udacity, by virtue of not having an inertial university and secondary education system. And I’m especially excited about the creative solutions to be devised in Tanzania, because it’s pretty clear that the rest of the world hasn’t exactly solved education yet. As my friend Jacob Cole pointed out, creative businesses like Habari Mazao (a website that Tanzanian consumers and farmers can visit to get fair prices for crops), which emerged from the first Tanzania-MIT Tele-Hackathon, would never have been thought of in the United States.

It seems that studying development comparatively, in both the economic and the social sense, could shed light on how to predict the trajectory of a country like Tanzania. We can’t simply say Tanzania is where the United States was at some point in the past, partly because Tanzania is starting from a different place in time and culture, and partly because she is surrounded by modernized countries that have already developed (but not finished) their own solutions to problems like education, energy and the environment, and effective governance. Studying comparative development might help one think about this problem and provide useful case studies, but I am afraid that the scarcity of data points on the development of different nations would lead to unhelpful generalizations. Who knows? I’ll have to take a look.

Action Items from Part 1, on Development

  • Look into research and classes surrounding economic development at Harvard.


  1. Goldin, Claudia, and Lawrence F. Katz. “The Shaping of Higher Education: The Formative Years in the United States, 1890 to 1940.” The Journal of Economic Perspectives 13, no. 1 (Winter 1999): 37–62. http://www.jstor.org/stable/2647136.