Thinking

Thanks to quite a few of you who have reached out to me regarding my last post. Your comments not only furthered my thinking about memorability and how it relates to my value framework, but also helped me realize how many of us are thinking through similar questions. I want to share how I’ve been thinking about these questions and ask about your philosophical processes – how have you arrived at the beliefs you hold today, and (more importantly) the questions you wonder about today?

For me, I suppose that each person lives according to some framework or value system (which may change), and within each framework each person has some goal or objective function. My goal is to optimize my self-chosen values; for others, the goal could be to discover some fundamental truth about the workings of the world or to serve as a mirror image of God. Throughout life, each person is trying to best achieve this goal.

I personally view achieving my goal as a reinforcement learning problem. I best achieve my goal by balancing exploration (i.e. gathering information to figure out which activities would best achieve my goal) with exploitation (i.e. gathering immediate reward from activities that I already know achieve my goal pretty well, even if they’re not globally optimal). Although this tradeoff is an entire fascinating discussion in and of itself, I am more interested in discussing exploration here, because I feel that exploitation is personally specific and straightforward (i.e. everyone knows of activities and experiences that accord with his or her own framework, and it’s pretty intuitive to figure out how to keep exploiting those activities), while exploration is a less straightforward pursuit filled with common challenges.
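
Purely as a toy sketch of this tradeoff, here is the kind of thing I have in mind, in the flavor of a multi-armed bandit. The activity names, reward numbers, and the epsilon-greedy rule are invented stand-ins for illustration, not a claim about how any of this actually works in a life.

```python
import random

# Toy epsilon-greedy sketch of the exploration/exploitation tradeoff.
# The activities and reward numbers are made up purely for illustration.
activities = {"reading": 0.0, "swimming": 0.0, "photography": 0.0}  # my current reward estimates
counts = {a: 0 for a in activities}
epsilon = 0.2  # fraction of the time spent exploring

def true_reward(activity):
    # Stand-in for how well an activity actually serves my values (unknown to me).
    base = {"reading": 0.7, "swimming": 0.5, "photography": 0.9}[activity]
    return base + random.gauss(0, 0.1)

for day in range(1000):
    if random.random() < epsilon:
        choice = random.choice(list(activities))      # explore: try something at random
    else:
        choice = max(activities, key=activities.get)  # exploit: pick my current best estimate
    reward = true_reward(choice)
    counts[choice] += 1
    # Incremental average: update my estimate of how rewarding this activity is.
    activities[choice] += (reward - activities[choice]) / counts[choice]

print(activities)  # estimates drift toward the activities that best serve my values
```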

In exploration, I am Bayesian updating, i.e. I have some prior belief on which activities achieve my goal and update my belief with each new incoming piece of information or experience, hopefully homing in (accurately) on the “best” activities as time goes on. Now each of my updates has two steps: 1) retrieving the new piece of information or experience, and 2) incorporating that into a new belief on best activities. To borrow words from Confucius, doing 1) and 2) is akin to “learning” and “thinking.” I first learn things: I observe how my friends’ reactions differ when I walk up to them smiling instead of frowning; I swim in a pool and capture sensory details from my environment; I go to topology lecture and finally have some (vague) understanding of manifolds; I notice that I feel less sad after my second house move than my first; etc. Then I think about these things, subconsciously or not; I somehow incorporate these learnings into my beliefs about which activities best achieve my goals of self-enrichment and achievement.
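
For those who like seeing the mechanics, here is a minimal sketch of this kind of updating under a deliberately crude assumption: each try of an activity either feels worthwhile or it doesn’t, and my belief about the activity’s “success rate” is a Beta distribution. The activity, the experiences, and the numbers are all invented for illustration.

```python
# Toy Bayesian update: my belief that a given activity serves my values,
# tracked as a Beta(alpha, beta) distribution over its "success rate".
# All numbers here are invented for illustration.
alpha, beta = 1.0, 1.0  # uniform prior: I know nothing about this activity yet

# Did each try of the activity feel worthwhile?
experiences = [True, True, False, True, True]

for worthwhile in experiences:
    if worthwhile:
        alpha += 1  # evidence that the activity serves my values
    else:
        beta += 1   # evidence that it doesn't

posterior_mean = alpha / (alpha + beta)
print(f"Estimated chance this activity serves my values: {posterior_mean:.2f}")
```

With more experiences, the Beta distribution narrows, which is the homing in I describe above.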

It’s intuitive why and how I should do step one, learning. To again draw on the machine learning analogy, the more data I have, the more informed I am to (generally) make better decisions and have a closer approximation of the optimum. If learning is good, how do I learn more? I try new things; I have different conversations every day, explore different cuisines every time I move to another city, go to college to expose myself to diverse people and pursuits, etc. Especially as babies, we humans exhibit these learning tendencies by putting everything in our mouths or touching anything in sight; I think these learning inclinations are innate to all of us.

I also think we innately know why and how to do step two, thinking. However, it’s much harder for me to articulate how I think in the same way that I just articulated how I learn. I could again offer the Bayesian analogy and say that when I think, I calculate the probability that what I just learned is actually true given my prior belief on optimal activities, and use that to update my beliefs (the improbability of each of my “learnings” correlates with how drastically I should shift my beliefs). But that calculation step is still a black box, both in actuality and in my attempt to explain it intuitively.
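
Still, that parenthetical claim (improbable learnings should shift my beliefs more drastically) can at least be illustrated with a toy two-hypothesis Bayes update; the probabilities below are made up purely to show the effect.

```python
def bayes_update(prior, p_obs_if_true, p_obs_if_false):
    """Posterior probability that a belief is true after one observation."""
    numerator = prior * p_obs_if_true
    return numerator / (numerator + (1 - prior) * p_obs_if_false)

prior = 0.8  # I'm fairly confident that some activity serves my values

# An unsurprising experience (likely under my belief) barely moves me:
print(bayes_update(prior, p_obs_if_true=0.9, p_obs_if_false=0.7))  # ~0.84

# A surprising experience (unlikely under my belief) shifts me drastically:
print(bayes_update(prior, p_obs_if_true=0.1, p_obs_if_false=0.7))  # ~0.36
```

The unsurprising observation nudges my confidence from 0.8 to roughly 0.84, while the surprising one drags it down to roughly 0.36. The black box, of course, is how the mind assigns those likelihoods in the first place.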

From personal experience, I’m going to offer intuition on how this black box works. In some ways, I think of the input-output process of this updating black box – which turns learning input into new belief output – as I would think about solving a math problem – which turns an input set of assumptions or conditions into an output set of answers or implications. More often than not, I start with techniques I already know that might help me get part of the way to the answer (e.g. draw the figure out on paper); then in the likely case that I still need to do more, I start experimenting with simple and intuitive or related approaches, often with many unfruitful trials, until finally I get that “aha” intuition, however fuzzy or hand-wavy it may be. Then I spend the rest of my effort trying to precisely explain that intuition and mold it exactly into an answer. In my mind, updating proceeds similarly. If I observe a learning input that is already very consistent with my prior beliefs, I can just leave my existing prior untouched (akin to using my existing “techniques”). If I see a novel learning input, subconsciously I try to connect it with previous related thought processes or learning experiences, with many of these attempted connections striking no personally resonant chord until I get some “aha” connection that for some reason “feels right” to me. Once I get that “aha”, I spend time consciously thinking or writing about this connection until I can articulate it precisely and make it consistent with the rest of my newly updated beliefs.

Concretely, I think that the “aha” intuitions happen mostly subconsciously and are brought out by events mostly beyond my control – during discussions with friends, a certain question or comment may spark revelation; a new pursuit like photography could help me notice something new and groundbreaking about the subject; or during free-writing I might let my consciousness stream and end up on a topic I never would’ve chosen to write about. Once I get these “ahas”, I try to talk through them with my friends or think and write about them explicitly in order to articulate them into my new belief set.

Some might then conclude that these “ahas,” not articulation ability, are the limiting factor in this updating, because we can’t control them and therefore they must be some rare, magical occurrences. But “ahas” are only a limiting factor if few of them occur, and while we cannot directly control our “ahas,” we can affect the number of belief updates we get to make by increasing the amount of learning we do, so that we update more frequently and more easily (with each new learning, we just have so many more things we can use to make those “aha” connections!).

This formulation of exploration has led me to balance my amount of learning (i.e. undergoing new experiences without necessarily thinking about how they help me achieve my values) with thinking (i.e. converting my learning into an updated set of beliefs and prediction machine). I used to think it incredibly important to think about things before learning or experimenting with them. After all, isn’t it much more efficient and powerful if I can predict something by thinking about it rather than having to actually conduct an experiment? It turns out that thinking, both in process and input, thrives on existing data (i.e. learning), and that thinking without learning can lead to fruitless mind-racking and “dangerously” wrong conclusions, to quote Confucius.

To accelerate my exploration process, I also ask myself how to increase “ahas.” We unintentionally do so in our daily (often bilateral) conversations and active pursuit of novelty (see post on memorability). But can we specifically design situations that would bring out lots of “ahas”? I think one way to do so is to have multilateral conversations through which we can broadcast and collect our learnings and beliefs in a many-to-many model, rather than one-to-one discussions or self-contained thought processes. That many-to-many conversation is what I hope to spark with my thoughts and questions here on this blog. I encourage you to help initiate and participate in the discussion as well!


4 thoughts on “Thinking”

  1. “For me, I suppose that each person lives according to some framework or value system (which may change), and within each framework each person has some goal or objective function.”

    I actually don’t really agree with this idea (framework/value system), because saying that seems to imply that there is some significant level of awareness of said system. My personal philosophy is that trying to analyze and optimize a value system is hopeless, because it is in some vague sense like incompleteness and undecidability in logic.

    In particular, because said framework/value system can change, there’s no way to talk about optimizing anything. This may not be totally on topic, but I get the feeling that this is what you’re trying to talk about. I think the only metric that can be imposed on life is perhaps that of “happiness” or “satisfaction” or whatever, but even that is perhaps impossible to define precisely. If we want to get a little mathy, as you seem to not mind doing, it seems like we could talk about maximizing this, assuming it is the result of the satisfaction of the values deemed to be important in the first place. I think what happens is that there are so many different factors influencing “happiness” in different ways that it’s hard to actually end up with a vastly different “result” (I guess this would be…amount of happiness?). An analog would be the efficiency of engines in the real world: while there is an “ideal” operating condition, a large range of operating conditions results in something reasonably close to peak efficiency.

    Of course, throwing out all sense of direction in life is not a great idea, but I think that the strategy of following instinct and doing what “feels” right perhaps deserves a bit more credit than it usually gets. Then again, I’m writing this at 5am, so you can tell how much organization I have in my life 😛

    • I would say that each person lives by a set of values, whether consciously or not. Imagine that you could observe each person from the beginning of his or her life and begin to recognize patterns in his or her actions and thoughts. Some of these patterns would signal to you that each person is guided by some values, shaped by nature and nurture.

      That means that it’s not necessary for people to be consciously aware of their value systems or to be actively thinking about optimizing within some framework for their decisions, because they’re already empirically living by their values to some extent and (perhaps subconsciously) trying to live out their values even more fully. Perhaps view a person’s set of values as some vector in n-space and his actions as another vector; then each person is trying to fully align (i.e. make parallel) the actions vector with the values one (I sketch this picture in a bit of toy code after this reply). Sure, the values vector can change, but that’s not to say we can’t talk about optimization, because in the present we derive utility from living in accord with our values. The angle between the actions and values vectors is what I claim people are (mostly subconsciously) seeking to minimize. So in fact, many people’s value systems may not even include consciously thinking about optimizing their value systems.

      Then you talk about the difficulty of optimizing a value system. Hopefully the above clarification frames this “optimization” process as one that is more natural and partly empirical. Of course, there are conscious elements – when I talk about thinking above, there is the conscious step of articulating subconscious intuition and this certainly is a part of the optimization. But this thinking is just the natural analysis people perform on their daily behaviors.

      So living by instinct is actually exactly in line with value system optimization, if “instinct” defaults to actions that achieve things you value. In some ways, this is similar to the Taoist interpretation that we should strive to live in accord with the Way – i.e. our natural state – but I’m saying that not everyone’s natural values are the same, thus everyone has an individual value system.
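
      To make the vector picture a little more concrete, here is a toy sketch. The three “dimensions” and all the numbers are invented purely for illustration, and I’m of course not claiming anyone’s values literally live in R^3.

      ```python
      import math

      def angle_between(values, actions):
          """Angle (in degrees) between a values vector and an actions vector in n-space."""
          dot = sum(v * a for v, a in zip(values, actions))
          norm_v = math.sqrt(sum(v * v for v in values))
          norm_a = math.sqrt(sum(a * a for a in actions))
          return math.degrees(math.acos(dot / (norm_v * norm_a)))

      # Invented "values" along three dimensions (say: learning, relationships, health).
      values = [0.7, 0.5, 0.5]

      # Two hypothetical weeks of behavior, weighted along the same dimensions.
      week_a = [0.9, 0.1, 0.1]  # mostly studying
      week_b = [0.6, 0.5, 0.6]  # more balanced

      print(angle_between(values, week_a))  # larger angle: actions less aligned with values
      print(angle_between(values, week_b))  # smaller angle: actions more aligned
      ```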

  2. I understand what you mean. What I’m trying to get at is that minimizing the “angle” to the “value vector” is not the same as deriving utility (satisfaction, happiness, fulfillment, whatever). There is some utility that can be derived from *not* following said values, for example, not doing a pset because “I just don’t feel like it”, sleeping in instead of going to class, getting totally wasted, etc. Maybe not the best examples, but the idea is more that everyone encounters moments of disappointment, moments where they’ve “given up”, and yet life goes on; we convince ourselves that things aren’t so bad; we “look at the bright side”.

    So the question is: what does this mean for the “value system” n-space? If you’re trying to maximize, say, the dot product of the “life vector” and the “value vector”, and “values” are the only “coordinates” present, then such a mishap would seriously affect the dot product. Yet I think that the situation is rarely as simple as that; bad experiences often have positive effects of some form.

    I argue that living by instinct is to maximize apparent utility (as “defined” before, wow I am using a lot of quote marks); the mind has a way of subconsciously taking into account all aspects, including the value system. Yet utility is directly affected by aspects beyond the value system, hence it cannot be derived from the value system alone. I think that the value system makes up a significant part of decision making, but to say that people live “according to” it is a bit of a stretch. Certainly some people do make more of a conscious effort at following the value system, but I don’t believe in this, because the value system is more or less a conscious construct of sorts, and attempting to carefully think it out is hopeless; only the subconscious mind’s “gut feeling” can see the big picture.

    On a side note, it’s funny I’m saying this as a[n aspiring] mathematician since I deliberate very carefully over many small decisions, and I think this is a somewhat common trait amongst math people.

    Hope that wasn’t too unclear.

    • In the spirit of math, I thought I might add that if you want to call everything influencing decisions “values”, then what I’m saying is that there seem to exist non-orthogonal values, which have independent impacts on utility. Hence we can’t think of it as optimization of, say, a dot product in R^n; rather, it’s the sum of many (real-valued, I suppose) functions on R^n with fewer assumed restrictions, but likely containing at least part of the dot product you’re thinking of in some way (let’s call this the rational component of utility). In particular, there may be many equally desirable solutions (I don’t like this word but I can’t think of anything better ATM).

      In decision making, people aim for some solution they believe to be desirable, but what is aimed for may not be what is achieved. Additionally, the value system can change, which is a fact we are all aware of. Thus I think that people don’t live by a value system; rather, they have the rational component of utility that they think they are sure of (but can’t really be, because the value system can change!) and the other, more complicated bits, which are even harder to optimize (an optimal solution may not even exist) even assuming the value system is fixed. So ultimately decisions can end up being somewhat arbitrary, as it is impossible to pin down the best path, even locally (in time). How much of this one decides to make conscious affects how much you can call it “living by a value system”, IMO.
