
Passing the Recursive Buck

Post author: Eliezer_Yudkowsky 16 June 2008 04:50AM

Followup to: Artificial Addition, The Ultimate Source, Gödel, Escher, Bach: An Eternal Golden Braid

Yesterday, I talked about what happens when you look at your own mind, reflecting upon yourself, and search for the source of your own decisions.

Let's say you decided to run into a burning orphanage and save a young child.  You look back on the decision and wonder: was your empathy with children, your ability to imagine what it would be like to be on fire, the decisive factor?  Did it compel you to run into the orphanage?

No, you reason, because if you'd needed to prevent a nuclear weapon from going off in the building next door, you would have run to disarm the nuke, and let the orphanage burn.  So a burning orphanage is not something that controls you directly.  Your fear certainly didn't control you.  And as for your duties, it seems like you could have ignored them (if you wanted to).

So if none of these parts of yourself that you focus upon, are of themselves decisive... then there must be some extra and additional thing that is decisive!  And that, of course, would be this "you" thing that is looking over your thoughts from outside.

Imagine if human beings had a tiny bit more introspective ability than they have today, so that they could see a single neuron firing - but only one neuron at a time.  We might even have the ability to modify the firing of this neuron.  It would seem, then, like no individual neuron was in control of us, and that indeed, we had the power to control the neuron.  It would seem we were in control of our neurons, not controlled by them.  Whenever you look at a single neuron, it seems not to control you, that-which-is-looking...

So it might look like you were moved to run into the orphanage by your built-in empathy or your inculcated morals, and that this overcame your fear of fire.  But really there was an additional you, beyond these emotions, which chose to give in to the good emotions rather than the bad ones.  That's moral responsibility, innit?

But wait - how does this additional you decide to flow along with your empathy and not your fear?  Is it programmed to always be good?  Does it roll a die and do whatever the die says?

Ordinarily, this question is not asked.  Once you say that you choose of your own "free will", you've explained the choice - drawn a causal arrow coming from a black box, which feels like an explanation.  At this point, you're supposed to stop asking questions, not look inside the black box to figure out how it works.

But what if the one does ask the question, "How did I choose to go along with my empathy and duty, and not my fear and selfishness?"

In real life, this question probably doesn't have an answer.  We are the sum of our parts, as a hand is its fingers, palm, and thumb.  Empathy and duty overpowered fear and selfishness - that was the choice.  It may be that no single factor was decisive, but all of them together are you, just as much as your brain is.  You did not choose for heroic factors to overpower antiheroic ones; that overpowering was your choice.  Or else, where did the meta-choice to favor heroic factors come from?  I don't think there would, in fact, have been a deliberation on the meta-choice, in which you actually pondered the consequences of accepting first-order emotions and duties.  There probably would not have been a detailed philosophical exploration, as you stood in front of that burning orphanage.

But memory is malleable.  So if you look back and ask "How did I choose that?" and try to actually answer with something beyond the "free will!" stopsign, your mind is liable to start generating a philosophical discussion of morality that never happened.

And then it will appear that no particular argument in the philosophical discussion is absolutely decisive, since you could (primitive reachable) have decided to ignore it.

Clearly, there's an extra additional you that decides which philosophical arguments deserve attention.

You see where this is going.  If you don't see where this is going, then you haven't read Douglas Hofstadter's Gödel, Escher, Bach: An Eternal Golden Braid, which makes you incomplete as a human being.

The general antipattern at work might be called "Passing the Recursive Buck".  It is closely related to Artificial Addition (your mind generates infinite lists of surface phenomena, using a compact process you can't see into) and Mysterious Answer.  This antipattern happens when you try to look into a black box, fail, and explain the black box using another black box.
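In code, the antipattern and its counter-pattern might be sketched like so.  (A toy Python illustration; the Homunculus class and the motivation weights are invented for the example, not a model of anything.)

# Passing the Recursive Buck: the "explainer" defers to an inner
# explainer of exactly the same kind, so the regress never grounds out.
class Homunculus:
    def who_chose(self):
        return Homunculus().who_chose()  # infinite regress; raises RecursionError if called

# The Recursive Buck Stops Here: the "choice" just is the mechanical
# comparison of competing motivations, with no extra chooser inside.
def choose(motivations):
    return max(motivations, key=motivations.get)

print(choose({"empathy": 0.9, "duty": 0.7, "fear": 0.6, "selfishness": 0.2}))
# -> empathy: the overpowering itself is the decision

The first version feels like an answer, but calling it answers nothing; the second is crude, but it bottoms out in actual structure.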

Passing the Recursive Buck is rarer than Mysterious Answer, because most people just stop on the first black box.  (When was the last time you heard postulated an infinite hierarchy of Gods, none of which create themselves, as the answer to the First Cause?)

How do you stop a recursive buck from passing?

You use the counter-pattern:  The Recursive Buck Stops Here.

But how do you apply this counter-pattern?

You use the recursive buck-stopping trick.

And what does it take to execute this trick?

Recursive buck stopping talent.

And how do you develop this talent?

Get a lot of practice stopping recursive bucks.

Ahem.

So, the first trick is learning to notice when you pass the buck.

"The Recursive Buck Stops Here" tells you that you shouldn't be trying to solve the puzzle of your black box, by looking for another black box inside it.  To appeal to meta-free-will, or to say "Free will ordinal hierarchy!" is just another way of running away from the scary real problem, which is to look inside the damn box.

This pattern was on display in Causality and Moral Responsibility:

Even if the system is - gasp! - deterministic, you will see a system that, lo and behold, deterministically adds numbers.  Even if someone - gasp! - designed the system, you will see that it was designed to add numbers.  Even if the system was - gasp! - caused, you will see that it was caused to add numbers.

To stop passing the recursive buck, you must find the non-mysterious structure that simply is the buck.

Take the Cartesian homunculus.  Light passes into your eyes, but how can you find the shape of an apple in the visual information?  Is there a little person inside your head, watching the light on a screen, and pointing out the apples?  But then does the little person have a metahomunculus inside their head?  If you have the notion of a "visual cortex", and you know even a little about how specifically the visual cortex processes and reconstructs the transcoded retinal information, then you can see that there is no need for a meta-visual-cortex that looks at the first visual cortex.  The information is being transformed into cognitively usable form right there in the neurons.
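As a minimal sketch of that last point (toy Python, with an invented one-dimensional "retina"; real cortical processing is vastly more involved):

def edge_strength(row):
    # Sum of absolute brightness differences between neighboring
    # "receptors" - a crude stand-in for early edge detection.
    return sum(abs(a - b) for a, b in zip(row, row[1:]))

retina = [0, 0, 0, 9, 9, 9, 0, 0]   # a bright bar on a dark field
print(edge_strength(retina))         # 18: both edges found by arithmetic alone

Nothing watches the output on an inner screen; the arithmetic is the detection.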

I've already given a good deal of advice on how to notice black boxes.

And I've even given some advice on how to start looking inside.

But ultimately, each black box is its own scientific problem.  There is no easy, comforting, safe procedure you follow to "look inside".  They aren't all as straightforward as free will.  My main meta-advice has to do with subtasks like recognizing the black box, not failing in standard ways, not running away screaming into the night, and not stopping on a fake explanation.

Comments (17)

Comment author: B._Riley 16 June 2008 06:04:30AM 9 points

"When was the last time you heard postulated an infinite hierarchy of Gods, none of which create themselves, as the answer to the First Cause?"

Fairly recently, as a matter of fact. Although the LDS church does not officially emphasize this doctrine, I know multiple Mormons that hold that exact view.

It's an easy conclusion to reach if you take seriously either Joseph Smith's statement that "God himself was once as we are now, and is an exalted man, and sits enthroned in yonder heavens!" or Lorenzo Snow's that "As man is, God once was; as God is, man may become". Likewise, according to my Mormon acquaintances, there will be lots of additional gods in the future.

Anyhow, wonderful post. Sometimes I wonder where you are going with a topic, but then posts like this one make everything come into focus.

Comment author: Arandur 03 August 2011 10:36:27PM 2 points

As a member of the LDS church myself, I can say that yes, I do hold this view until I find one more likely to explain the given doctrine. :3 It's a curious puzzle, and one that I, for one, will not stop investigating.

Comment author: Siamak 16 June 2008 06:49:52AM 0 points

If you agree that objective attribution is just a strong hypothesis to explain subjective experience in the first place, then trying to explain consciousness as a subjective experience based on that hypothesis is ridiculous; it's like trying to explain the cause of a cause by its effect. Now we can call it looking into the bucket of subjective experience and avoiding it, but in the end it's the only existential cause for an objective explanation of the bucket in the first place. I agree that loopiness seems to be a characteristic of such a phenomenon if it's looked upon from outside, but does it imply the conclusion that you are making?

Comment author: Hopefully_Anonymous 16 June 2008 08:19:32AM 0 points

"But ultimately, each black box is its own scientific problem. There is no easy, comforting, safe procedure you follow to "look inside". They aren't all as straightforward as free will. My main meta-advice has to do with subtasks like recognizing the black box, not failing in standard ways, not running away screaming into the night, and not stopping on a fake explanation."

Interesting post, and good meta-advice, in my opinion.

Comment author: Ian_C. 16 June 2008 08:19:48AM -1 points

Objects are more than just their attributes; they are their actions also. Both are aspects of *the whole that is the object.* OO programmers recognize this with the concept of a "method", in which a function is part of an object instead of something somewhere else in the program.

Therefore a hand is more than just fingers, palm and thumb. Attribute-wise, that's all it is, but action-wise, the hand has a new ability ("grasping") that the component objects don't have. So reductionism is wrong - a thing can be more than the sum of its parts (since "thing" includes action). And a man made of predictable little atoms does not *necessarily* have no free will.

So there's need to go recursively back in to the atoms. You go back far enough until you see something that exists. Until you witness yourself making a choice or you don't.

Comment author: Ian_C. 16 June 2008 08:24:53AM 0 points

Sorry, that should be "no need." No need, because if you keep going back you won't find it; it only exists at the higher level, like the ability to grasp only exists at a higher level.

Comment author: Lukas_P. 16 June 2008 11:45:19AM 1 point

"So reductionism is wrong - a thing can be more than the sum of it's parts (since "thing" includes action)."

The problem with this statement is that you don't define what you mean by sum. I for one cannot imagine what the term 'fingers + palm + thumb' is supposed to mean. Apparently by sum you don't mean arithmetic sum, but something different. Perhaps by 'sum' you mean something like 'put those ingredients into a beaker, shake it a little and then see what you get'.

And of course, if you defined 'sum', you'll need to define 'more' (and 'less') in this context. Perhaps you'll see that it's about the language and how we use it. We overload many words to mean different things and too often we use the special meaning of a word in a context where it doesn't belong.

Comment author: Ian_C. 16 June 2008 12:12:26PM 0 points

"The problem with this statement is that you don't define what you mean by sum."

I mean if you list all the actions that its parts can do alone, the combined thing can have actions that aren't in that list.

Comment author: Virge2 16 June 2008 12:38:16PM 5 points

Ian, there's nothing wrong with reductionism.

Overly simplistic reductionism is wrong, e.g., if you divide a computer into individual bits, each of which can be in one of two states, then you can't explain the operation of the computer in just the states of its bits. However, that reduction omitted an important part, the interconnections of the bits--how each affects the others. When you reduce a computer to individual bits and their immediate relationships with other bits, you can indeed explain the whole computer's operation, completely. (It just becomes unwieldy to do so.)

"I mean if you list all the actions that it's parts can do alone, the combined thing can have actions that aren't in that list."

What are these "actions that aren't in that list"? They are still aggregations of interactions that take place at a lower level, but we assign meaning to them. The extra "actions" are in our interpretations of the whole, not in the parts or the whole itself.
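To make the computer example concrete, here is a toy sketch (Python; the half-adder below is one standard NAND construction, nothing special about any particular computer): reduce to bits plus their NAND interconnections, and the higher-level behavior - addition - is fully determined.

def nand(a, b):
    return 1 - (a & b)

def half_adder(a, b):
    # XOR and AND built entirely from NAND interconnections
    n1 = nand(a, b)
    total = nand(nand(a, n1), nand(b, n1))  # a XOR b
    carry = nand(n1, n1)                    # a AND b
    return total, carry

print(half_adder(1, 1))  # (0, 1): the "addition" resides in the wiring

Nothing over and above the wiring needs adding; "adds numbers" is our name for what the interconnections already do.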

Comment author: Dojan 25 December 2011 05:33:29PM 2 points

A car without its engine isn't very good for driving, and neither is the engine all by itself. But that doesn't mean anything magical happens when you put them together; nor does it mean you can put them together any which way.

Comment author: CLEric 16 June 2008 12:40:38PM 0 points

"Therefore a hand is more than just fingers, palm and thumb. Attribute-wise, that's all it is, but action wise, the hand has a new ability ("grasping") that the component objects don't have."

But "grasping" is itself composed of actions and abilities already contained in the parts of the hand. Sorting into objects and actions, there's an analogy: the hand is the sum of fingers, palm, and thumb, and grasping is the sum of particular muscles contracting to pull tendons in (each of) the fingers, palm, and thumb. Saying that a new ability ("grasping") arose from the addition is just like saying that a new attribute ("hand-ness") arose as well.

Comment author: ME3 16 June 2008 04:01:50PM 0 points

In other words, the algorithm is,

def explain_box(box):
    if len(box.boxes) > 1: print(box.boxes)
    else: explain_box(box.boxes[0])

which works for most real-world concepts, but gets into an infinite loop if the concept is irreducible.

Comment author: billswift 19 June 2008 06:19:07AM 0 points

Rarely is any one thing decisive. Most decisions we make are over-determined, meaning that there are multiple reasons to do, or not do, most of the actions we choose. What motivates a particular choice is the sum of the weights of the reasons in our mind at the time we choose. Availability bias, confirmation bias, framing, and many other biases affect our choices, which is a good reason for learning about them. Lately, this blog has put more emphasis on agreement and disagreement and the biases affecting them, but the biases that affect your thinking without any interaction with others are at least as important, and can be more easily rooted out, given the time, knowledge, and effort.

Comment author: shaun 19 June 2008 11:57:47AM 0 points

I think that decisions like this are different from individual to individual. Different aspects of our personality, our upbringing, and a myriad of other factors make up our decisions. Free will ultimately decides the outcome, because we always have a choice to go against our nature. But not everyone can overcome their own nature.

Comment author: DilGreen 30 September 2010 09:22:17PM 0 points

Excellent post - and excellent advice.

I'm fairly new here, and very definitely not an AI 'insider'. But it seems to me that the approach to rationality promulgated here is strongly influenced by experience in thinking about what and how AI might be.

As someone who aspires to rationality, I find much of interest, and much to chew on, as I look around.

This post has crystallised a feeling I was getting that the approach taken here is perhaps fixated on a certain sort of mechanistic rationalism - of the type assumed in many game-theoretic and economic approaches.

The example that launches the post is fatally undermined by the philosophically and experientially obvious point (and luckily for me, a point which is increasingly based in the science that comes from using fMRI) that the decision taken was NOT a rational decision. It was largely taken by the unconscious (I prefer pre-conscious, for the same reasons that you dislike the word 'emergent' - unconscious has come to be a mystical term - and that is not what I intend).

Rational behaviour is a mode of behaviour - one of many. The reason that increased rationality among humans is desirable is that it is a mode that is almost never practiced. We are - like it or not - creatures with an only lately evolved capacity for rational thought of any kind. Almost everything that we can be or do, can be achieved without rational thought (if I ever get the nerve to write a post in this forbiddingly precise atmosphere, I may be able to make this seem less tendentious).

Thus the impact of rational thinking has been out of all proportion to its prevalence. Rational behaviour is like the one-eyed man who is king in the nation of the blind. But for the one eyed man to declare that anything not encompassed by sight is irrelevant or dangerous would not be optimal for his subjects.

So I end up thinking, with regard to progress in artificial intelligence (without the slightest expectation of originality), that if research is focussed on 'artificial rationality', then any recognisable 'intelligence' is unlikely to result.

Comment author: Dojan 06 February 2012 06:23:20PM 1 point

"Of course it is happening inside your head, Harry, but why on earth should that mean that it is not real?" -Albus Dumbledore

Comment author: Laoch 26 November 2013 05:25:55PM 1 point

Surely there is a more concise, more up-to-date book than GEB?