Thanks to quite a few of you who have reached out to me regarding my last post. Your comments not only extended my thinking about memorability and how it relates to my value framework, but also helped me realize how many of us are thinking through similar questions. I want to share how I’ve been thinking about these questions and ask about your philosophical processes – how have you arrived at the beliefs you hold today, and (more importantly) what are the questions you wonder about today?
For me, I suppose that each person lives according to some framework or value system (which may change), and within each framework each person has some goal or objective function. My goal is to optimize my self-chosen values; for others, the goal could be to discover some fundamental truth about the workings of the world or to serve as a mirror image of God. Throughout life, each person is trying to best achieve this goal.
I personally view achieving my goal as a reinforcement learning problem. I best achieve my goal by pursuing exploration (i.e. gathering information to figure out which activities would best achieve my goal) and exploitation (i.e. gathering immediate reward from activities that I already know achieve my goal pretty well, even if they’re not globally optimal). Although this tradeoff is an entire fascinating discussion in and of itself, I am more interested in discussing exploration here because I feel that exploitation is something that is personally specific and straightforward (i.e. everyone knows of activities and experiences that accord with his or her own framework, and it’s pretty intuitive to figure out how to keep exploiting those activities) while exploration is a less straightforward pursuit filled with common challenges.
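To make the tradeoff concrete, here’s a toy epsilon-greedy sketch of it (the three activities, their reward probabilities, and the parameters are all made up for illustration, not a claim about my actual life): with small probability I explore a random activity; otherwise I exploit the one my current estimates say is best.

```python
import random

def epsilon_greedy(estimates, epsilon=0.1):
    """Pick an activity: explore a random one with probability epsilon,
    otherwise exploit the one currently believed best."""
    if random.random() < epsilon:
        return random.randrange(len(estimates))                       # explore
    return max(range(len(estimates)), key=estimates.__getitem__)      # exploit

# Hypothetical running example: three activities with unknown reward rates.
estimates = [0.0, 0.0, 0.0]     # my current value estimate for each activity
counts = [0, 0, 0]              # how many times I've tried each
true_rewards = [0.2, 0.5, 0.8]  # ground truth, unknown to the agent

random.seed(0)
for _ in range(1000):
    a = epsilon_greedy(estimates, epsilon=0.1)
    r = 1.0 if random.random() < true_rewards[a] else 0.0
    counts[a] += 1
    estimates[a] += (r - estimates[a]) / counts[a]  # incremental mean update
```

The epsilon parameter is exactly the exploration–exploitation dial: at 0 I only ever repeat what already seems best, at 1 I never cash in on what I’ve learned.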
In exploration, I am Bayesian updating, i.e. I have some prior belief on which activities achieve my goal and update my belief with each new incoming piece of information or experience, hopefully homing in (accurately) on “best” activities as time goes on. Now each of my updates has two steps: 1) retrieving the new piece of information and experience, and 2) incorporating that into a new belief on best activities. To borrow words from Confucius, doing 1) and 2) is akin to “learning” and “thinking.” I first learn things: I observe how my friends’ reactions differ when I walk up to them smiling instead of frowning; I swim in a pool and capture sensory details from my environment; I go to topology lecture and finally have some (vague) understanding of manifolds; I notice that I feel less sad after my second house move than my first; etc. Then I think about these things, subconsciously or not; I somehow incorporate these learnings into my beliefs about which activities best achieve my goals of self-enriching and achieving.
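As a toy version of this Bayesian updating (the activity and the stream of outcomes are invented for illustration), I can keep a Beta-distributed belief about whether an activity achieves my goal and nudge it with every new experience:

```python
# Toy Bayesian updating: my belief about whether an activity "achieves my
# goal" is a Beta(alpha, beta) distribution over its success probability.

def update(alpha, beta, succeeded):
    """One conjugate update after observing whether the activity
    achieved my goal (success) or didn't (failure)."""
    return (alpha + 1, beta) if succeeded else (alpha, beta + 1)

# Uniform prior: no idea yet whether, say, swimming achieves my goal.
alpha, beta = 1, 1

# A hypothetical stream of experiences: True = it achieved my goal.
for outcome in [True, True, False, True, True]:
    alpha, beta = update(alpha, beta, outcome)

posterior_mean = alpha / (alpha + beta)  # (1 + 4) / (2 + 5) = 5/7
```

Each experience is step one (retrieving new information); each call to `update` is step two (folding it into a new belief).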
It’s intuitive why and how I should do step one of learning. To again draw on the machine learning analogy, the more data I have, the more informed I am to (generally) make better decisions and have a closer approximation of the optimum. If learning is good, how do I learn more? I try new things; I have different conversations every day, explore different cuisines every time I move to another city, go to college to expose myself to diverse people and pursuits, etc. Especially as babies, we exhibit these learning tendencies by putting everything in our mouths or touching anything in sight; I think these learning inclinations are innate to all of us.
I also think we innately know why and how we should do step two of thinking. However, it’s much harder for me to articulate how I think in the same way that I just articulated how I learn. I could again offer the Bayesian analogy and say that when I think, I calculate the probability that what I just learned is actually true given my prior belief on optimal activities, and use that to update my beliefs (the improbability of each of my “learnings” correlates with how drastically I should shift my beliefs). But that calculation step is still a black box, both in actuality and in my attempt to explain it intuitively.
From personal experience, I’m going to offer intuition on how this black box works. In some ways, I think of the input-output process of this updating black box – which turns learning input into new belief output – as I would think about solving a math problem – which turns an input set of assumptions or conditions into an output set of answers or implications. More often than not, I start with techniques I already know that might help me get part of the way to the answer (e.g. draw the figure out on paper); then in the likely case that I still need to do more, I start experimenting with simple and intuitive or related approaches, often with many unfruitful trials, until finally I get that “aha” intuition, however fuzzy or hand-wavy it may be. Then I spend the rest of my effort trying to precisely explain that intuition and mold it exactly into an answer. In my mind, updating proceeds similarly. If I observe a learning input that is already very consistent with my prior beliefs, I can just leave my existing prior untouched (akin to using my existing “techniques”). If I see a novel learning input, subconsciously I try to connect it with previous related thought processes or learning experiences, with many of these attempted connections striking no personally resonant chord until I get some “aha” connection that for some reason “feels right” to me. Once I get that “aha”, I spend time consciously thinking or writing about this connection until I can articulate it precisely and make it consistent with the rest of my newly updated beliefs.
Concretely, I think that the “aha” intuitions happen mostly subconsciously and are brought out by events mostly beyond my control – during discussions with friends, a certain question or comment may spark revelation; a new pursuit like photography could help me notice something new and groundbreaking about the subject; or during free-writing I might let my consciousness stream and end up on a topic I never would’ve chosen to write about. Once I get these “ahas”, I try to talk through them with my friends or think and write about them explicitly in order to articulate them into my new belief set.
Some might then conclude that these “ahas,” not articulation ability, are the limiting factor to this updating because we can’t control them and therefore they must be some rare, magical occurrences. But “ahas” are only a limiting factor if few of them occur, and while we cannot directly control our “ahas,” we can affect the number of belief updates that we have to do by increasing the amount of learning we do, so that we update more frequently and more easily (with each new learning, we just have so many more things we can use to make those “aha” connections!).
This formulation of exploration has led me to balance my amount of learning (i.e. undergoing new experiences without necessarily thinking about how they help me achieve my values) with thinking (i.e. converting my learning into an updated set of beliefs and prediction machine). I used to think it incredibly important to think about things before learning or experimenting with them. After all, isn’t it much more efficient and powerful if I can predict something by thinking about it rather than having to actually conduct an experiment? It turns out that thinking, both in process and input, thrives on existing data (i.e. learning), and that thinking without learning can lead to fruitless mind-racking and “dangerously” wrong conclusions, to quote Confucius.
To accelerate my exploration process, I also ask myself how to increase “ahas.” We unintentionally do so in our daily (often bilateral) conversations and active pursuit of novelty (see post on memorability). But can we specifically design situations that would bring out lots of “ahas”? I think one way to do so is to have multilateral conversations through which we can broadcast and collect our learnings and beliefs in a many-to-many model rather than one-to-one discussions or self-contained thought processes. That many-to-many conversation is what I hope to spark with my thoughts and questions here on this blog. I encourage you to help initiate and participate in the discussion as well!
Many of my friends know I consciously live my life according to a value system – I choose pursuits that optimize two values, self-enrichment and achievement. I’ve picked these values based on what I’ve (perhaps subconsciously) noted about myself and my priorities over the past decade or so. Then I’ve built rituals into my daily life that help me fully pursue the activities that optimize my values, including physical enrichment (working out in the morning, with lifting MWF and swimming TSa), intellectual enrichment (allocating half an hour to read for leisure every night), and even a set of times and places during the day when I work exclusively on startup or school. I adhere to rituals because once I get used to them, it takes little energy to “be disciplined” and follow them, and I spend less time wondering what I’m going to do next or how my rituals contribute to my grander value system.
I’ve been confident that this is the best way to live my life. When I say “best,” I mean according to this framework of optimizing for self-enrichment and achievement. But yesterday, while reading Joshua Foer’s Moonwalking with Einstein, I came across a passage that led me to reexamine my framework. The book documents Foer’s experience of training for the US Memory Championship, and the specific passage that provoked me describes Grand Master of Memory Ed Cooke seeking to make his life maximally memorable by packing his life with memories. Foer suggests that because we remember events relative in time to other events in our lives (e.g. I had my first kiss after that Flight Deck ride at Great America, after getting soaked on the Logger ride, etc.), we can make our lives more memorable just by increasing the number and novelty of experiences (e.g. the number and novelty of “afters” in the above sequence).
After I read this, the idea of maximum memorability began to resonate with me. One of Foer’s statements in particular articulates this seemingly strange resonance:
Like the proverbial tree that falls without anyone hearing it, can an experience that isn’t remembered be meaningfully said to have happened at all? Socrates thought the unexamined life was not worth living. How much more so the unremembered life?
Another explanation for my resonance with maximum memorability is its natural interpretation as maximizing psychological lifetime, or subjective experience of time, if we merely measure this “time” by number and novelty of experiences. I find subjective time a natural personal value to optimize. For one, I think this desire to maximize subjective lifetime could be the reason that I (and many humans in general) seek novelty and change in pursuits. This idea that humans measure subjective experience of time by novelty of life rather than by physical, objective time comes up everywhere. In Duane Michals’s Now Becoming Then, Michals tells stories of twisted relationships, mystical and religious occurrences, and even entirely different worlds (“Empty New York”) by capturing snapshots of “points of novelty” in each story’s trajectory – the points at which the story changes most significantly – rather than by taking snapshots at constant time intervals. Why are these points of novelty so much more interesting to us as chronological markers of subjective time than time itself? In finance, one problem that traders commonly encounter is how to index “time” in the market, given that incredible volatility and trade volume can be concentrated into such short times of day while the remainder of the day trudges slowly along. One approach to indexing time is to count specific changes or events in the market, which suggests that change or novelty gauges the subjective time we’re interested in. In computer vision, a common approach to identifying objects in an image is to scan across the image and detect significant changes in pixel values, which often correspond to the boundary where one object ends and another begins, suggesting that novelty is an index of objects’ very existence. All of these modes of thinking imply that we seek novelty because we seek to lengthen our psychological experiences of time, i.e. make our lives more memorable.
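All three analogies reduce to the same toy computation: mark “time” by significant changes rather than by uniform steps. Here’s a minimal one-dimensional sketch (the signal values and threshold are made up; a real market clock or edge detector would of course be more involved):

```python
# Toy "novelty clock": index a signal by its points of significant change
# rather than by uniform time steps -- the 1-D analogue of event-based
# market time or of edge detection across a row of pixels.

def novelty_points(values, threshold):
    """Return the indices where the signal jumps by more than `threshold` --
    the 'points of novelty' that mark subjective time."""
    return [i for i in range(1, len(values))
            if abs(values[i] - values[i - 1]) > threshold]

signal = [0, 0, 0, 5, 5, 5, 5, 1, 1, 9, 9]  # flat stretches + three jumps
print(novelty_points(signal, threshold=2))   # -> [3, 7, 9]
```

Eleven uniform ticks collapse to three points of novelty: by this clock, nothing “happened” during the flat stretches at all, which is exactly the intuition behind measuring subjective time by change.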
So I think it’s natural to value memorability; I certainly place some value on it. (I should be clear that I value memorability in the sense that I value the mere number and novelty of memories that I possess and thus am continually influenced by, however subconsciously, rather than some efficient system for fetching these memories by rearranging my neural connections or any other type of conscious recall.) And if I value memorability, I should incorporate it into my value framework, but how? I could add “memorability” as another value, but that seems unnatural because I don’t view it as a competing priority that I should optimize. Rather, I should use memorability as a metric and choose to measure how greatly an experience or activity achieves my two values of self-enrichment and achievement based on its memorability, i.e. its subjective impact on me, rather than based on any other criterion. For example, in my self-enrichment value, memorability is already naturally encoded, because by definition self-enrichment emphasizes pursuits that have self-impact. But as for my achievement value, until now I have had in the back of my mind some external metric for achievement (e.g. number of people impacted) that felt less genuine to me. What I really value in “achievement” is that subjectively experienced (“memorable”) magnitude of achievement. I can’t truthfully say that my 14-minute TEDx talk to 100 Gunn students was a more memorable, impactful achievement for me than 14 minutes of fixing certain bugs in my pathway identification algorithm in a cubicle, even though in many standard definitions of “achievement” the former would be greater than the latter. And because memorable achievement is genuinely what I value, that’s how I should evaluate how each of my actions optimizes achievement.