Treatise on Startup, Part 1

Why “do startups”? It’s hard to imagine a phenomenon in recent memory that has had a stronger stranglehold on the imagination of young, ambitious America. Popularized and made sexy by The Social Network and $1 billion Instagram exits, startups have come to combine nearly every American principle and definition of success – rags-to-riches, fame, be-your-own-boss individualism, exploration of unknown frontiers, powerful impact on society, and (for the younger generation) a cool combination of technology and business. The potential to achieve that success and become the 21st-century American technological cowboy has led many to “do startups,” while the very same risk has scared others into safer pursuits.

So why do startups? One certainty is that it doesn’t make sense to “do startups” without thinking about what startups are designed to accomplish or why starting a company might be important to the founder. I’m not going to support or criticize any of the above “success” criteria as reasons for pursuing startups, because the validity of each reason depends completely on the individual. A better question to ask, then, is, “Why should you do startups?”

Below, I outline the framework that I use to answer this question. I hope this framework confirms the instincts of, or adds perspective for, the seasoned entrepreneur, the incipient dabbler, and the uninformed outsider. If you’ve read other posts from this blog, you won’t be surprised to learn that this framework for deciding whether to pursue a startup, and which startup to work on, builds first and foremost on the personal value system (just like everything else).

Start from your value system. What do you care about most, and what are the principles you live by? Those are the values you aspire to in your everyday activities; they constitute your value system. If you’re interested in entrepreneurship, your value system likely includes something like “making an impact beyond self,” in which case you probably (perhaps subconsciously) see specific things in the world worth impacting. Then you have some vision for the world that is different from the status quo. You’ve developed this vision after years of life experience, collecting data from your environment and (subconsciously) synthesizing that data into intuition and a worldview.

Articulate this worldview. Then you can articulate the differences between your vision and the status quo. What are the biggest differences as measured by your values? For example, one of your values might be learning, which might mean that you care about everyone learning together in a collaborative, college-like style because in your experience this style has helped you learn the most. So you might envision a world in which learning from kindergarten onward emphasizes social interactions with others in addition to (or even instead of) individualistic classroom-style learning or rote memorization of knowledge; this refocusing of the education system is your “difference.”

“Differences” could be solutions to present problems that frustrate you beyond belief (e.g. ridding our healthcare system of its inefficiency) or improvements you could not imagine the future living without (e.g. cheap access to your personal genome). Choose one such “difference” – let’s call it an “innovation.” Do you personally wish to invest the effort to create this innovation? Consider how creating it would optimize your personal values (e.g. how much would you learn? How much would you enjoy the struggle and unknown of solving this problem? How much money would you make?). If one of the values you’re optimizing is world impact, be especially careful not to assume anything when measuring the impact of this innovation. You have to find out whether enough other people value your innovation, an undertaking I call “problem discovery” (in business talk, “market research”). Go further than cold-calling one hundred potential customers (i.e. people you plan to impact) and asking each whether he/she would value this innovation. In many cases, you’ll have to envision or even prototype a minimum viable product that your customers actually use, to help them understand how much they value (e.g. how much they’d pay for) your innovation; you’ll probably even want to fairly sample your customer base beforehand and construct an entire demand curve, so you know who places what value on your innovation. Be sure to ask your customers about related innovations too, and to observe them beyond their answers, because you might then choose a slightly different innovation for a slightly different customer group that achieves much more impact while similarly satisfying your other values.
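
As a toy illustration of the demand-curve step, here is a minimal sketch; the willingness-to-pay numbers and the helper function are hypothetical, invented purely for illustration, not data from any real customer discovery:

```python
# A minimal sketch of turning "problem discovery" interviews into a crude demand curve.
# Assumes you've recorded each sampled customer's willingness to pay; all numbers are made up.

willingness_to_pay = [0, 5, 5, 10, 15, 20, 20, 25, 40, 60]  # dollars, one entry per customer

def demand_at(price, wtp):
    """Count how many sampled customers would buy at a given price."""
    return sum(1 for w in wtp if w >= price)

for price in [5, 10, 20, 40]:
    buyers = demand_at(price, willingness_to_pay)
    print(f"price ${price:>2}: {buyers} of {len(willingness_to_pay)} sampled customers would buy")
```

Plotting price against that count gives the demand curve for the sample; the point is simply that the curve comes from fairly sampled customers rather than from your own enthusiasm.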

Just as you should be meticulous in measuring your innovation’s impact if you value impact, you should be equally deliberate in measuring how well the innovation achieves your other values. Evaluating your potential innovation against your values should tell you whether you actually want to make it happen. Compare the world in which you’re creating this innovation to worlds in which you’re pursuing other fulfilling activities, and evaluate your opportunity cost based on your values. Given that you could invest the same energy and time elsewhere, do you still want to do this?

If you do, the next question is how. Notice that I haven’t mentioned the word “startup” once in this framework up to this point. That’s because many of the innovations you’d want to create are not best facilitated by a startup, or even by a business. For instance, you may wish to solve one of Hilbert’s unsolved problems in order to contribute to mathematical thinking around the world; this is probably best done in academia. You may want to change the way the United States conducts foreign policy; this is probably best done in government. But a surprisingly large number of innovations are best facilitated by companies, and by startups specifically. Startups in particular have the mission-oriented character that makes them great facilitators of innovation. By their smaller size alone, they have lower bilateral communication costs: the number of pairwise communication channels, and hence the cost of keeping everyone informed, grows quadratically with headcount (see the quick calculation after the quote below). Their smaller size also gives startups a higher concentration of people strongly aligned around a similar version of the personal philosophy above. Peter Thiel speaks to why it’s difficult to innovate on your personal mission in other environments:

The easiest answer to “why startups?” is negative: because you can’t develop new technology in existing entities. There’s something wrong with big companies, governments, and non-profits. Perhaps they can’t recognize financial needs; the federal government, hamstrung by its own bureaucracy, obviously overcompensates some while grossly undercompensating others in its employ. Or maybe these entities can’t handle personal needs; you can’t always get recognition, respect, or fame from a huge bureaucracy. Anyone on a mission tends to want to go from 0 to 1. You can only do that if you’re surrounded by others who want to go from 0 to 1. That happens in startups, not huge companies or government.
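
For concreteness, here is the pairwise-channel arithmetic referenced before the quote (a standard combinatorial identity, not something from the original argument):

\[
\binom{n}{2} = \frac{n(n-1)}{2}, \qquad \binom{10}{2} = 45, \qquad \binom{100}{2} = 4950.
\]

A ten-person startup has 45 possible bilateral channels to keep aligned; a hundred-person company has 4,950.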

This is why good entrepreneurs build startups (and, in my opinion, it is the only valid reason for you to build them) – because their values drive them to achieve some innovation in their world vision that doesn’t currently exist, and because startups happen to best facilitate that innovation and mission. Why do I think this is the right philosophy for thinking about startups? Simply because it is consistent with the entrepreneur’s (and the human’s) fundamental individual decision-making framework, building on his/her values. The fact that the choice to pursue a specific startup springs from personal values keeps the entrepreneur motivated, visionary, and certain that that specific startup is where he/she wants to invest energy. Imagine asking such entrepreneurs why they are working on their startups. In essence, you’d be asking, “Why are you doing something that you’ve determined is in highest accordance with your values… Wait, never mind; silly question.” The entrepreneurs’ responses would just be a matter of articulating values and vision.

This being said, I certainly don’t advocate consciously thinking at such a high level 24/7 while working on a startup. Too much pondering and optimizing at this level makes it impossible to focus and actually experiment without the fear of being wrong. But values and vision in the above philosophy are things I believe all good entrepreneurs keep in the back of their minds, and they are things that good entrepreneurs revisit every once in a while to paint the big picture for themselves and their teammates. You can tell when you’re talking with a good entrepreneur; ask any question, and the answer you get is some piece of a bigger world vision that forms a clear picture in the entrepreneur’s mind and drives the excitement in the entrepreneur’s voice. As you ask more questions, you get more puzzle pieces that you can fit together into a world vision. How do you know if you’d like working with, investing in, or even buying from this entrepreneur? Ask yourself how beautiful his/her vision is to you.

Thinking

Thanks to quite a few of you who have reached out to me regarding my last post. Your comments not only extended my thinking about memorability and how it relates to my value framework, but also helped me realize how many of us are thinking through similar questions. I want to share how I’ve been thinking about these questions and to ask about your philosophical processes – how have you arrived at the beliefs you hold today, and (more importantly) at the questions you wonder about today?

I suppose that each person lives according to some framework or value system (which may change), and that within each framework each person has some goal or objective function. My goal is to optimize my self-chosen values; for others, the goal could be to discover some fundamental truth about the workings of the world or to serve as a mirror image of God. Throughout life, each person is trying to best achieve this goal.

I personally view achieving my goal as a reinforcement learning problem. I best achieve my goal by pursuing exploration (i.e. gathering information to figure out which activities would best achieve my goal) and exploitation (i.e. gathering immediate reward from activities that I already know achieve my goal pretty well, even if they’re not globally optimal). Although this tradeoff is a fascinating discussion in and of itself, I am more interested in discussing exploration here, because I feel that exploitation is personally specific and straightforward (i.e. everyone knows of activities and experiences that accord with his or her own framework, and it’s pretty intuitive to figure out how to keep exploiting those activities) while exploration is a less straightforward pursuit filled with common challenges.
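
As a minimal sketch of this exploration/exploitation tradeoff, here is an epsilon-greedy multi-armed bandit; the activity names, reward numbers, and epsilon value are all hypothetical placeholders, not anything from the post:

```python
import random

# Epsilon-greedy bandit: explore with probability epsilon, otherwise exploit the
# activity with the best current estimate. All activities and rewards are invented.

activities = {"reading": 0.6, "startup work": 0.8, "new hobby": 0.4}  # true mean rewards (unknown to the chooser)
estimates = {a: 0.0 for a in activities}   # running estimates of each activity's reward
counts = {a: 0 for a in activities}
epsilon = 0.1                              # fraction of choices spent exploring

for _ in range(10_000):
    if random.random() < epsilon:
        choice = random.choice(list(activities))       # explore: try something at random
    else:
        choice = max(estimates, key=estimates.get)     # exploit: pick the best-looking activity
    reward = random.gauss(activities[choice], 0.1)     # noisy reward from the chosen activity
    counts[choice] += 1
    estimates[choice] += (reward - estimates[choice]) / counts[choice]  # running-mean update

print(estimates)  # estimates drift toward the true means; the best arm gets chosen most often
```

Too little exploration and you never discover the better activity; too much and you rarely enjoy the reward of the best one you already know.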

In exploration, I am Bayesian updating: I have some prior belief about which activities achieve my goal and update that belief with each new incoming piece of information or experience, hopefully homing in (accurately) on the “best” activities as time goes on. Each of my updates has two steps: 1) retrieving the new piece of information or experience, and 2) incorporating it into a new belief about the best activities. To borrow words from Confucius, doing 1) and 2) is akin to “learning” and “thinking.” I first learn things: I observe how my friends’ reactions differ when I walk up to them smiling instead of frowning; I swim in a pool and capture sensory details from my environment; I go to a topology lecture and finally have some (vague) understanding of manifolds; I notice that I feel less sad after my second house move than my first; etc. Then I think about these things, subconsciously or not; I somehow incorporate these learnings into my beliefs about which activities best achieve my goals of self-enrichment and achievement.

It’s intuitive why and how I should do step one, learning. To again draw on the machine learning analogy: the more data I have, the better informed I am to (generally) make better decisions and more closely approximate the optimum. If learning is good, how do I learn more? I try new things; I have different conversations every day, explore different cuisines every time I move to another city, go to college to expose myself to diverse people and pursuits, etc. Especially as babies, we humans exhibit these learning tendencies by putting everything in our mouths and touching anything in sight; I think these learning inclinations are innate to all of us.

I also think we innately know why and how to do step two, thinking. However, it’s much harder for me to articulate how I think in the way I just articulated how I learn. I could again offer the Bayesian analogy and say that when I think, I calculate the probability that what I just learned is actually true given my prior belief about optimal activities, and use that to update my beliefs (the improbability of each of my “learnings” correlates with how drastically I should shift my beliefs). But that calculation step is still a black box, both in actuality and in my attempt to explain it intuitively.
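
As a hedged illustration of that calculation, here is the Bayes update in miniature; the hypotheses and all probabilities below are invented for the example:

```python
# Discrete Bayes update: surprising observations (improbable under the prior) shift beliefs more.

priors = {"collaboration helps me learn": 0.7, "solo study helps me learn": 0.3}

# How likely the new observation ("I learned a lot in a group project") is under each hypothesis.
likelihoods = {"collaboration helps me learn": 0.9, "solo study helps me learn": 0.2}

evidence = sum(priors[h] * likelihoods[h] for h in priors)              # P(observation)
posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}

print(posteriors)  # {'collaboration helps me learn': ~0.91, 'solo study helps me learn': ~0.09}
```

The smaller the `evidence` term – that is, the more improbable the observation was under the old beliefs – the larger the jump from prior to posterior.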

From personal experience, I’m going to offer intuition on how this black box works. In some ways, I think of the input-output process of this updating black box – which turns learning input into new belief output – as I would think about solving a math problem – which turns an input set of assumptions or conditions into an output set of answers or implications. More often than not, I start with techniques I already know that might help me get part of the way to the answer (e.g. draw the figure out on paper); then in the likely case that I still need to do more, I start experimenting with simple and intuitive or related approaches, often with many unfruitful trials, until finally I get that “aha” intuition, however fuzzy or hand-wavy it may be. Then I spend the rest of my effort trying to precisely explain that intuition and mold it exactly into an answer. In my mind, updating proceeds similarly. If I observe a learning input that is already very consistent with my prior beliefs, I can just leave my existing prior untouched (akin to using my existing “techniques”). If I see a novel learning input, subconsciously I try to connect it with previous related thought processes or learning experiences, with many of these attempted connections striking no personally resonant chord until I get some “aha” connection that for some reason “feels right” to me. Once I get that “aha”, I spend time consciously thinking or writing about this connection until I can articulate it precisely and make it consistent with the rest of my newly updated beliefs.

Concretely, I think that the “aha” intuitions happen mostly subconsciously and are brought out by events mostly beyond my control – during discussions with friends, a certain question or comment may spark revelation; a new pursuit like photography could help me notice something new and groundbreaking about the subject; or during free-writing I might let my consciousness stream and end up on a topic I never would’ve chosen to write about. Once I get these “ahas”, I try to talk through them with my friends or think and write about them explicitly in order to articulate them into my new belief set.

Some might then conclude that these “ahas,” not articulation ability, are the limiting factor in this updating, because we can’t control them and so they must be rare, magical occurrences. But “ahas” only limit us if few of them occur. While we cannot directly control our “ahas,” we can increase the number of belief updates we get to make by increasing the amount of learning we do, so that we update more frequently and more easily (with each new learning, we have that many more things to use in making those “aha” connections!).

This formulation of exploration has led me to balance my amount of learning (i.e. undergoing new experiences without necessarily thinking about how they help me achieve my values) with thinking (i.e. converting my learning into an updated set of beliefs and a better prediction machine). I used to think it incredibly important to think about things before learning or experimenting with them. After all, isn’t it much more efficient and powerful to predict something by thinking about it rather than having to actually conduct an experiment? It turns out that thinking, in both process and input, thrives on existing data (i.e. learning), and that thinking without learning can lead to fruitless mind-racking and “dangerously” wrong conclusions, to quote Confucius.

To accelerate my exploration process, I also ask myself how to increase “ahas.” We do so unintentionally in our daily (often bilateral) conversations and in our active pursuit of novelty (see the post on memorability). But can we specifically design situations that bring out lots of “ahas”? I think one way is to have multilateral conversations through which we broadcast and collect our learnings and beliefs in a many-to-many model, rather than in one-to-one discussions or self-contained thought processes. That many-to-many conversation is what I hope to spark with my thoughts and questions here on this blog. I encourage you to help initiate and participate in the discussion as well!

Memorability

Many of my friends know I consciously live my life according to a value system – I choose pursuits that optimize two values, self-enrichment and achievement. I’ve picked these values based on what I’ve (perhaps subconsciously) noted about myself and my priorities over the past decade or so. I’ve then built rituals into my daily life that help me fully pursue the activities that optimize these values, including physical enrichment (working out in the morning, lifting MWF and swimming TSa), intellectual enrichment (allocating half an hour to read for leisure every night), and even a set of times and places during the day when I work exclusively on my startup or on school. I adhere to rituals because once I get used to them, it takes little energy to “be disciplined” and follow them, and I spend less time wondering what I’m going to do next or how my rituals contribute to my grander value system.

I’ve been confident that this is the best way to live my life. When I say “best,” I mean according to this framework of optimizing for self-enrichment and achievement. But yesterday, while reading Joshua Foer’s Moonwalking with Einstein, I came across a passage that led me to reexamine my framework. The book documents Foer’s experience of training for the USA Memory Championship, and the specific passage that provoked me describes Grand Master of Memory Ed Cooke seeking to make his life maximally memorable by packing it with memories. Foer suggests that because we remember events relative in time to other events in our lives (e.g. I had my first kiss after that Flight Deck ride at Great America, after getting soaked on the Logger ride, etc.), we can make our lives more memorable just by increasing the number and novelty of our experiences (e.g. the number and novelty of “afters” in the above sequence).

After I read this, the idea of maximum memorability began to resonate with me. One of Foer’s statements in particular articulates this seemingly strange resonance:

Like the proverbial tree that falls without anyone hearing it, can an experience that isn’t remembered be meaningfully said to have happened at all? Socrates thought the unexamined life was not worth living. How much more so the unremembered life?

Another explanation for my resonance with maximum memorability is its natural interpretation as maximizing psychological lifetime, or the subjective experience of time, if we measure this “time” by the number and novelty of experiences. I find subjective time a natural personal value to optimize. For one, I think this desire to maximize subjective lifetime could be the reason that I (and many people in general) seek novelty and change in our pursuits. The idea that humans measure the subjective experience of time by the novelty of life rather than by physical, objective time comes up everywhere. In Duane Michals’s Now Becoming Then, Michals tells stories of twisted relationships, mystical and religious occurrences, and even entirely different worlds (“Empty New York”) by capturing snapshots of “points of novelty” in each story’s trajectory – the points at which the story changes most significantly – rather than by taking snapshots at constant time intervals. Why are these points of novelty so much more interesting to us as chronological markers of subjective time than time itself? In finance, one problem traders commonly encounter is how to index “time” in the market, given that incredible volatility and trade volume can be concentrated into short stretches of the day while the remainder trudges slowly along. One approach is to index time by counting specific changes or events in the market, which suggests that change, or novelty, gauges the subjective time we’re interested in. In computer vision, a common approach to identifying objects in an image is to scan across the image and detect significant changes in pixel values, which mark where one object ends and another begins, suggesting that novelty indexes objects’ very existence. All of these modes of thinking imply that we seek novelty because we seek to lengthen our psychological experience of time, i.e. to make our lives more memorable.
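
As a minimal sketch of the change-detection idea behind that computer-vision analogy (a toy one-dimensional “image”; real edge detectors are considerably more involved):

```python
import numpy as np

# Detect "novelty" as significant change: a 1-D row of pixel values with two flat
# regions; a large jump between neighboring pixels marks the boundary between them.

row = np.array([10, 10, 11, 10, 200, 201, 199, 200], dtype=float)
diffs = np.abs(np.diff(row))        # change between neighboring pixels
edges = np.where(diffs > 50)[0]     # indices where the change is "significant"

print(diffs)   # [  0.   1.   1. 190.   1.   2.   1.]
print(edges)   # [3] -> the single boundary between the two flat regions
```

The flat stretches carry almost no information; the one large difference is where something “happens” – the same intuition that novelty, not elapsed time, is what gets indexed.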

So I think it’s natural to value memorability; I certainly place some value on it. (I should be clear that I value memorability in the sense that I value the mere number and novelty of memories that I possess and thus am continually influenced by, however subconsciously, rather than some efficient system for fetching these memories by rearranging my neural connections or any other type of conscious recall.) And if I value memorability, I should incorporate it into my value framework, but how? I could add “memorability” as another value, but that seems unnatural because I don’t view it as a competing priority that I should optimize. Rather, I should use memorability as a metric and choose to measure how greatly an experience or activity achieves my two values of self-enrichment and achievement based on its memorability, i.e. its subjective impact on me, rather than based on any other criterion. For example, in my self-enrichment value, memorability is already naturally encoded, because by definition self-enrichment emphasizes pursuits that have self-impact. But as for my achievement value, until now I have had in the back of my mind some external metric for achievement (e.g. number of people impacted) that felt less genuine to me. What I really value in “achievement” is that subjectively experienced (“memorable”) magnitude of achievement. I can’t truthfully say that my 14-minute TEDx talk to 100 Gunn students was a more memorable, impactful achievement for me than 14 minutes of fixing certain bugs in my pathway identification algorithm in a cubicle, even though in many standard definitions of “achievement” the former would be greater than the latter. And because memorable achievement is genuinely what I value, that’s how I should evaluate how each of my actions optimizes achievement.

Thanks to Kanjun Qiu and Carl Gao for their thoughts on this.