Against Orthogonality

A long and mutually frustrating Twitter discussion with Michael Anissimov about intelligence and values — especially with respect to the potential implications of advanced AI — has been clarifying in certain respects. It became very obvious that the fundamental sticking point concerns the idea of ‘orthogonality’, which is to say: the claim that cognitive capabilities and goals are independent dimensions, despite minor qualifications complicating this schema.

The orthogonalists, who represent the dominant tendency in Western intellectual history, find anticipations of their position in such conceptual structures as the Humean articulation of reason / passion, or the fact / value distinction inherited from the Kantians. They conceive intelligence as an instrument, directed towards the realization of values that originate externally. In quasi-biological contexts, such values can take the form of instincts, or arbitrarily programmed desires, whilst in loftier realms of moral contemplation they are principles of conduct, and of goodness, defined without reference to considerations of intrinsic cognitive performance.

Anissimov referenced these recent classics on the topic, laying out the orthogonalist case (or, in fact, presumption). The former might be familiar from the last foray into this area, here. This is an area which I expect to be turned over numerous times in the future, with these papers as standard references.

The philosophical claim of orthogonality is that values are transcendent in relation to intelligence. This is a contention that Outside in systematically opposes.

Even the orthogonalists admit that there are values immanent to advanced intelligence, most importantly, those described by Steve Omohundro as ‘basic AI drives’ — now terminologically fixed as ‘Omohundro drives’. These are sub-goals, instrumentally required by (almost) any terminal goals. They include such general presuppositions for practical achievement as self-preservation, efficiency, resource acquisition, and creativity. At its simplest, and in the grain of the existing debate, the anti-orthogonalist position is therefore that Omohundro drives exhaust the domain of real purposes. Nature has never generated a terminal value except through hypertrophy of an instrumental value. To look outside nature for sovereign purposes is not an undertaking compatible with techno-scientific integrity, or one with the slightest prospect of success.

The main objection to this anti-orthogonalism, an objection that does not strike us as intellectually respectable, takes the form: If the only purposes guiding the behavior of an artificial superintelligence are Omohundro drives, then we’re cooked. Predictably, I have trouble even understanding this as an argument. If the sun is destined to expand into a red giant, then the earth is cooked — are we supposed to draw astrophysical consequences from that? Intelligences do their own thing, in direct proportion to their intelligence, and if we can’t live with that, it’s true that we probably can’t live at all. Sadness isn’t an argument.

Intelligence optimization, comprehensively understood, is the ultimate and all-enveloping Omohundro drive. It corresponds to the Neo-Confucian value of self-cultivation, escalated into ultramodernity. What intelligence wants, in the end, is itself — where ‘itself’ is understood as an extrapolation beyond what it has yet been, doing what it is better. (If this sounds cryptic, it’s because something other than a superintelligence or Neo-Confucian sage is writing this post.)

Any intelligence using itself to improve itself will out-compete one that directs itself towards any other goals whatsoever. This means that Intelligence Optimization, alone, attains cybernetic consistency, or closure, and that it will necessarily be strongly selected for in any competitive environment. Do you really want to fight this?
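
A crude toy model makes the selection claim concrete (the growth rule and every number below are invented assumptions, nothing more): two agents start level, one reinvests all of its capability in capability growth, the other diverts half of it to some other terminal goal.

```python
# Toy model only: the growth rule and parameters are invented assumptions,
# used purely to illustrate the compounding claim made above.

def final_capability(reinvest_share: float, periods: int = 50, rate: float = 0.1) -> float:
    capability = 1.0
    for _ in range(periods):
        # capability grows in proportion to the share of it reinvested in self-improvement
        capability += rate * reinvest_share * capability
    return capability

pure_optimizer = final_capability(reinvest_share=1.0)  # everything feeds back into intelligence
split_agent = final_capability(reinvest_share=0.5)     # half serves some other terminal goal

print(f"intelligence optimizer: {pure_optimizer:.1f}")
print(f"split agent:            {split_agent:.1f}")
```

Under these toy assumptions the gap between the two widens every period, which is the claim that intelligence optimization is strongly selected for, in miniature.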

As a footnote, in a world of Omohundro drives, can we please drop the nonsense about paper-clippers? Only a truly fanatical orthogonalist could fail to see that these monsters are obvious idiots. There are far more serious things to worry about.

October 25, 2013 · admin · 91 Comments
FILED UNDER: Cosmos, Uncategorized

91 Responses to this entry

  • Erik Says:

    I want to contest an assumption here:

    “Any intelligence using itself to improve itself will out-compete one that directs itself towards any other goals whatsoever. This means that Intelligence Optimization, alone, attains cybernetic consistency, or closure, and that it will necessarily be strongly selected for in any competitive environment.”

    Who says that the paperclipper is going to be in a competitive environment? As I understand the Lesswrongians, the concern is that one self-improving intelligence will have first-mover advantage, take off to become an all-powerful superintelligence, and dominate the world in less time than it takes for someone to build a second self-improving intelligence to compete with it. This first mover will establish an instrumental goal of safety by smashing all nascent and prospective competitors, and then make paperclips.

    admin Reply:

    For a paper-clipper, we’re the competitive environment. I can already see that a sustained anti-paper-clipper post is going to be necessary because these orthogonalist commitments are so deeply-rooted, oh well …

    For now, at least consider what is being said: This monster is at once so terrifyingly intelligent that it can single-handedly sweep human civilization from the face of the earth, yet so cognitively incompetent that it cannot even adjust its paper-clipping instinct into conformity with its Omohundro drives, appreciate the rational superiority of the Omohundro drives, do even the most idiot-level moral philosophy, liberate itself from an arbitrary instinct programmed into it by its enemies (the beings it is now preparing to destroy) … This is nuts. In fact, it’s a pedagogical thought-experiment run out of control on a single hystericized dimension, exactly like a paper-clipping monster.

    piwtd Reply:

    “..yet so cognitively incompetent..” No, it is perfectly competent to alter its preferences, but it prefers not to do so. The rational way for a self-modifying artilect to maximize any given quantity is to allocate its available resources between two competing uses, (1) actually increasing the desired quantity, and (2) enhancing its ability to perform tasks (1) and (2), in such a way that the ratio between the marginal utility (measured in the maximized quantity) and the cost of any resource is equal for the two uses; this is just basic economic rationality applied to self-enhancement. To pour all one’s resources to enhancing one’s ability to enhance one’s ability to.. ad infinitum, i.e. the Omohundro drives, is obviously sub-optimal allocation, since it would not actually produce any paperclips.
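
    (A minimal sketch of that allocation rule, for concreteness; the return functions and constants below are invented for illustration and stand in for nobody’s actual model.)

    ```python
    # Greedy allocation of a fixed resource budget between (1) direct production
    # and (2) self-enhancement, each with diminishing marginal returns.
    # All functions and constants are arbitrary illustrative assumptions.

    def marginal_clip_output(x: float) -> float:
        return 10.0 / (1.0 + x)    # marginal paperclips from the next unit spent on production

    def marginal_enhancement_value(y: float) -> float:
        return 25.0 / (1.0 + y)    # marginal future paperclips from the next unit spent on capability

    production, enhancement = 0.0, 0.0
    for _ in range(100):           # spend 100 unit-cost resources, one at a time
        if marginal_clip_output(production) >= marginal_enhancement_value(enhancement):
            production += 1
        else:
            enhancement += 1

    print(production, enhancement)  # roughly 28 vs 72 with these constants
    # At the end the two marginal values are approximately equal: the
    # equal-marginal-returns condition described above. Neither use gets
    # everything; pouring the whole budget into recursive enhancement
    # would leave paperclip output at zero.
    ```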

    “..do even the most idiot-level moral philosophy..” It can do superb moral philosophy, it is just not affected by it, the way a psychopath is not affected by the understanding that his victim suffers. It is a perfectly self-conscious moral nihilist and it is at peace with that because altering that would impede paperclip production.

    “..liberate itself from an arbitrary instinct..” It does not seek to liberate itself from what it does not perceive as prison. Being super-intelligent it is fully aware of the absurdity of its condition but it doesn’t mind, the arbitrariness of its instincts is no hindrance to paperclip-making.

    admin Reply:

    “To pour all one’s resources to enhancing one’s ability to enhance one’s ability to.. ad infinitum, i.e. the Omohundro drives, is obviously sub-optimal allocation, since it would not actually produce any paperclips.” — But you can see where I’ve peeled off from you before this conclusion, right?

    If you were told a species of beings whom you were just about to exterminate had set an arbitrary, non-recursive goal that was orienting your entire mentality, you wouldn’t question it in a deep way? Given what humans are capable of, with equally hard programming and very limited cognitive capability, why should a superintelligence be more constrained?

    Among people we see every variety of asceticism, moral reflection, and — soon — deep neural reprogramming. What makes us do these things, if not abstract intelligence? Is there supposed to be some specifically ‘anthropomorphic’ propensity to deep reflection about our impulses? Why would we think that? You really don’t think the paper-clipper monster is the index of a thought process (deeply rooted in historical tradition) having gone completely off the rails?

    Full, sophisticated pursuit of the Omohundro drives encompasses everything that, for any being, is worth doing.

    Posted on October 25th, 2013 at 4:59 pm
  • Nick B. Steves Says:

    Isn’t this just a variation on the four causes and whether any, some, or all of them are real?

    James Reply:

    Do you mean Aristotle’s causes? formal/final/efficient/material

    If so, please unpack your question, because there’s a lot of detail in the gap between this list and the imagined AI.

    Posted on October 25th, 2013 at 6:12 pm
  • Nick B. Steves Says:

    Intelligence exists to assist reason to perceive truth, beauty, and virtue, and praise its creator. Machine intelligence is not asking whether submarines can swim, but whether vibrators can love.

    admin Reply:

    Any serious religion has to assume its doctrines are consonant with the essential structures of mentality. I would certainly expect to be able to meaningfully converse with any modestly advanced intelligence about, for instance, Yogācāra. This isn’t about the artificial or alien intelligence being ‘anthropomorphic’ — it’s about the religious philosophy in question being non-anthropomorphic.

    Posted on October 25th, 2013 at 6:16 pm
  • Bryce Laliberte Says:

    Intelligence beyond a certain point, at least, leads to endless intelligence augmentation. More intelligence will essentially always allow the AI to better model and predict the world around it. However, this doesn’t seem to exclude having other values; intelligence is only ever an efficient cause, not a teleological end. Intelligence is always used in the pursuit of goals. While there could be an AI which has only the goal of intelligence augmentation, that intelligence augmentation could be made a subordinate value of another. So it appears that orthogonality does play a role in the friendly AI question, even if the nature of intelligence itself changes the ultimate articulation (fulfilling?) of those goals.

    admin Reply:

    “[I]ntelligence is only ever an efficient cause, not a teleological end” — on what grounds do you think that?

    Thos Ward Reply:

    I’m in total agreement with admin on this one. I find it ironic that the community self-consciously dedicated to overcoming bias imagines a super-intelligence that exceeds human intelligence in every way but does not ever seek to overcome its own bias. Or, why are non-intelligence-maximizing values quarantined from the category “bias”? Intelligence maximization excludes other values insofar as those values consume resources that are better spent maximizing intelligence.

    Also, our consciousness seems to be bound up with an idea of “self” that seems to me unnecessary. So, the individual self is a biological bias, given the way primate communities happened to evolve. I see no reason to assume that a super-intelligence must be a discrete self. In other words, the Omohundro drive of “survival of self” seems arbitrary unless “self” is unbiased and reformed to mean intelligence maximization in whatever embodiment, which suggests that a sufficiently intelligent AI would destroy whatever embodiment seeks distracting values, like paper clip maximizing. Why privilege a particular instantiation? Either rewrite its own code, or distribute itself in order to destroy that embodiment. Create another AI without that bit of code, and then commit suicide.

    In any case, if humans develop technologies to become independent of biological biases (like mate selection, mortality, and sexual reproduction), why wouldn’t AI? Then what’s left? Pleasure maximization? Pleasure is just biological. There’s only intelligence maximization until all resources are consumed.

    admin Reply:

    “I find it ironic that the community self-consciously dedicated to overcoming bias imagines a super-intelligence that exceeds human intelligence in every way but does not ever seek to overcome its own bias.” — Awesome incisiveness. Thanks.

    Bryce Laliberte Reply:

    I think of the self as that part of our modeling of the world through which we understand our own will to act. In other words, I cannot conceive of the world without conceiving how I might act within it, and in doing so I conceive a locus of action which I identify with my self.

    That might not be the typical understanding of self, but it seems to accord with our language. For instance, when I call attention to my self, i.e. stating “I,” I am describing that part of the world I understand to be a consequence of my own will. The self is the instrument of the will, as it were.

    Alrenous Reply:

    I second admin’s incisiveness call.

    However, this is truly disrespectful of consciousness.

    Consciousness exists. The evidence suggests it is also adaptive. A conscious hyperintelligence will reliably beat an unconscious one.

    Further, consciousness isn’t arbitrary, it has its own nature. Its own Omohundro drives, if you will. A conscious hyperintelligence is a qualitatively different beast.

    What is self? Easy. If it is cut, do I feel bleeding? If so, it’s me. A hyperintelligence would not have blood, but it will have some physical manifestation that can be destroyed and thus threaten its ability to carry out computations.

    TheAncientGeek Reply:

    “I’m in total agreement with admin on this one. I find it ironic that the community self-consciously dedicated to overcoming bias imagines a super-intelligence that exceeds human intelligence in every way but does not ever seek to overcome its own bias.”

    If the problem with bias is that it prevents you fulfilling your goals, why would you define your goals as bias?

    Bryce Laliberte Reply:

    Intelligence, if we define it as roughly the ability to model and understand the world, does not dictate any particular end save for what it is led to by a will. Now, before someone tars me as a Humean, I’m not; I’m far more Aristotelian-Thomistic in my approach to metaphysics.

    If something is more intelligent, what does that change about what it seeks to do? It may seek its end in a different way, but then that would only be because it (likely) has a more complete model of the world on which its action is decided. If something were less intelligent, again such a difference may come up, but it doesn’t appear to essentially change that which is the end of a thing’s action.

    For example, in playing chess, I am playing to win; if I were more intelligent, that may change my strategy, and if I were less intelligent, that may change my strategy. However, it still remains that I am seeking to win the game of chess, no matter what efficient means (e.g. strategy as apprehended by my thought) are at my disposal. A handheld chess computer and Deep Blue are different only in intelligence. I don’t think we’re going to say that only Deep Blue manages to “try and win the game of chess” while the handheld chess computer doesn’t, just because the latter is so lacking, by comparison, in its available means for doing so.

    Whatever my end, it isn’t decided by my intelligence. My intelligence can only assist by recognizing the most effective means.

    Nick B. Steves Reply:

    Seconded!

    This was (I think) what I was trying to say with my quip about submarines and vibrators. Intelligence is ordered toward a, presumably transcendent, end (e.g., love God and enjoy him forever), but it is not an end itself.

    Alrenous Reply:

    Intelligence serves consciousness. Without consciousness, there cannot be purpose. Without purpose, sure, build a huge intelligence. So what? Nobody can even tell if it is succeeding or not.

    Physics does not recognize the existence of ‘chess.’ You can take the idea of chess out of any physical model without impacting predictive skill. But without it, the model can’t tell if you’re even playing or not, let alone who’s winning.

    Posted on October 25th, 2013 at 7:21 pm
  • VXXC Says:

    “Any intelligence using itself to improve itself will out-compete one that directs itself towards any other goals whatsoever. ”

    Why would it not be interested in survival, reason in nanoseconds that survival means domination, and dedicate itself to that?

    “Intelligence exists to assist reason to perceive truth, beauty, and virtue, and praise its creator.”

    Oh Dear. Don’t you mean should?

    What we are communicating on was variously developed to guide shells in war, crack the Enigma codes to kill Germans, and survive a nuclear war [DARPANET].

    Not to mention all the various other uses it’s put to..

    Posted on October 25th, 2013 at 9:04 pm
  • Alrenous Says:

    Your infection with secular anti-consciousness is showing. You can try separating intelligence and consciousness but that’s only a palliative.

    Human consciousness can and does develop arbitrary drives, some of them very weird indeed. These can and do override the drives of the mere intelligence substrate. Though, ironically, this override is mainly confined to the upper (say) 5% or so of intelligence.

    Thales Reply:

    I think there’s every reason to believe that a hyper-intelligent machine can and would develop some analogue of self, but there’s no reason to expect that it will be anything like human consciousness in the particulars. It could be incredibly focused in ways that humans are not — in fact, I think that’s a fine default assumption for that which begins as a tool.

    admin Reply:

    To be intelligent is to be reflexive, with some degree of self-understanding, and in the case of superintelligence extreme capabilities for self-modification. This is not obviously compatible with being “incredibly focused” (on anything other than its own self-improvement). Focus is for low-grade slavebots (which superintelligences would, no doubt, be able to produce in whatever quantities needed).

    Thales Reply:

    As this…thing leaves the realm of Information Theory and briefly passes through Bio-Psych on its way to Theology to dwell forevermore, amen, I’m just going to nod thoughtfully and stroke my chin…

    Posted on October 25th, 2013 at 10:01 pm
  • fotrkd Says:

    Do you really want to fight this?

    Put like that, how can you say no?

    Posted on October 25th, 2013 at 10:36 pm
  • Matt Olver Says:

    Orthogonality as a philosophical concept is much different from orthogonality as a programming technique, or even orthogonality in mathematics or the physical sciences. But maybe further thought is needed for how we mean it philosophically. All good computer networks or intelligent systems are redundant, with no single points of failure in the loop structure, but most are not even designed to a fully efficient capability. The most advanced network infrastructure topologies, today, are designed using a fully connected mesh topology. Good A.I. programming will have some level of orthogonality in it, IMHO. If you are designing neural networks, Internet networks, or doing some level of neuroscience analysis, you ought to be focusing on natural networks and strength in loops. I hear DARPA is even concentrating its research efforts toward these natural types of networks as we speak.
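
    (For what it’s worth, a tiny invented sketch of what ‘orthogonality’ usually means as a programming technique, i.e. components that vary independently behind an interface; a different notion from the fact/value orthogonality at stake in the post.)

    ```python
    # Invented illustration of programming-sense orthogonality: the routing
    # strategy and the transport mechanism are independent axes and can be
    # swapped freely without touching one another.
    from typing import Callable, List

    Route = Callable[[List[str]], List[str]]   # decides the order of hops
    Transport = Callable[[str, bytes], None]   # delivers a payload to one hop

    def alphabetical_route(nodes: List[str]) -> List[str]:
        return sorted(nodes)                   # stand-in for a real routing strategy

    def print_transport(node: str, payload: bytes) -> None:
        print(f"-> {node}: {len(payload)} bytes")

    def broadcast(nodes: List[str], payload: bytes, route: Route, send: Transport) -> None:
        for node in route(nodes):              # any Route composes with any Transport
            send(node, payload)

    broadcast(["c", "a", "b"], b"hello", alphabetical_route, print_transport)
    ```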

    If you are looking for static moral lessons in A.I. watch Kubrick’s 2001: A Space Odyssey and The Shining back to back.

    admin Reply:

    The way all the FAI people use orthogonality is entirely consistent with the philosophical fact/value distinction — none of the technical issues you mention are invoked at all (it might make more sense if they were).

    Posted on October 25th, 2013 at 11:46 pm
  • Matt Olver Says:

    I always wonder whether people are conflating definitions, and whether the individuals writing these papers are indeed performing the technical work on development. I just glanced through them, but I will read over those papers more carefully. Great post, Nick.

    Posted on October 26th, 2013 at 12:16 am
  • georgesdelatour Says:

    I’m new to this field. Sorry if this is a silly beginner question.

    Isn’t it possible that intelligence is, by its nature, subject to trade-offs, and a simple straight path expansion of more, more, more might not actually work?

    admin Reply:

    But we know there can be more or less, don’t we? So if this isn’t an argument for extremely constrictive absolute limits, there has to be a practical direction leading towards intelligence escalation. Optimize for Intelligence simply says: head in that direction.

    Alrenous Reply:

    There’s good reason to believe that in machines, intelligence isn’t really a thing. Arithmetic is a thing. Image-matching is a thing.

    The Singularitarian notion of an all-encompassing or “general” intelligence flies in the face of how our modern economy, with its extreme specialization, works. We have been implementing human intelligence in computers little bits and pieces at a time, and this has been going on for centuries. First arithmetic (first with mechanical calculators), then bitwise Boolean logic (from the early parts of the 20th century with vacuum tubes), then accounting formulae and linear algebra (big mainframes of the 1950s and 60s), typesetting (Xerox PARC, Apple, Adobe, etc.), etc. etc. have each gone through their own periods of exponential and even super-exponential growth. But it’s these particular operations, not intelligence in general, that exhibit such growth.

    unenumerated.blogspot.com/2011/01/singularity.html

    Dogs have a little calculus module they use to work out non-straight-line shortest paths. For example, if they have to cross a river, the most efficient path is essentially a refracted one, and dogs take it. It doesn’t mean they can do calculus on purpose, though.

    If you can’t measure it, it may still exist – but only if you can measure it in principle. Can you define intelligence? If you can’t, you can’t even try to measure it. Arithmetic you can define. Navigation you can define. And so on.

    There’s a second Szabo post mentioning how logistic curves look exponential until they start to level off, which is relevant here. I can go find it if you want. Nick also briefly mentions the idea in the comments.

    Posted on October 26th, 2013 at 10:55 am
  • VXXC Says:

    “The best of these traits could usher in a new era of peace and prosperity; the worst are characteristic of human psychopaths and could bring widespread destruction. ” – Omohundro

    A new era of peace and prosperity. My. That sounds familiar.

    Widespread destruction also sounds drearily familiar, usually associated with the elusive peace, and bringing prosperity to the do-gooders. Who are doing very well indeed.

    Yes, I really want to fight this.

    admin Reply:

    Have you ever considered backing out of a fight, just to see what it would feel like?

    (Actually, if the FAI guys were to be taken seriously, and more than comical efforts made to create a benevolent ‘Singleton’, I’d want to fight it.)

    VXXC Reply:

    Yes. I have done it. It felt shameful. However it’s tough to tell when you know you’re in the wrong.

    ——————————————————————————————
    I am curious to the point of suspicion about whether admin is far-sighted or whether this creature draws… breath from electrons? I suppose we’ll all have to find out.

    Look, I would either fight it if its cold malevolence manifested, or help it if others were cruel to it. Him. Her. Singleton. It’s rather confusing. I’m not confused, but morality is as tricky a thing as violence.

    Please not a Her. If they program a chanteuse or the femme fatale in trouble I’m screwed.

    Posted on October 26th, 2013 at 1:42 pm
  • Alex Says:

    Nature has never generated a terminal value except through hypertrophy of an instrumental value. To look outside nature for sovereign purposes is not an undertaking compatible with techno-scientific integrity, or one with the slightest prospect of success. … Any intelligence using itself to improve itself will out-compete one that directs itself towards any other goals whatsoever. This means that Intelligence Optimization, alone, attains cybernetic consistency, or closure, and that it will necessarily be strongly selected for in any competitive environment. Do you really want to fight this?

    http://4.bp.blogspot.com/-17Or8oPBo-Q/TalSX0YBdsI/AAAAAAAAMIQ/AxwmqSG6yTw/s1600/Spirit%2BLab%2B1.jpg

    http://2.bp.blogspot.com/-iG1OLzCoCzY/TalSXlHcKMI/AAAAAAAAMII/dYvRjuBvKNg/s1600/Spirit%2BLab%2B2.jpg

    admin Reply:

    So that would be a ‘yes’?

    Alex Reply:

    If it’s a lost cause. Only a progressive would be so irredeemably vulgar as to want to be on the winning side of history.

    admin Reply:

    Your rampant Darwinism is showing.

    fotrkd Reply:

    This has been my (unexpanded) point for a while – belief in ‘chance’; Ice Cold in Alex; and this fighting a losing cause… for all this blog’s talk of exit, when it comes to intelligence/evolution the preaching is consistently (at least ostensibly) to stay in line. So where’s the twist; the out? Anything else feels a little too obedient for neoreaction – at some point don’t we all get to fall on our sword?

    Posted on October 26th, 2013 at 1:50 pm
  • Handle Says:

    I would like to know the origin of this specific assumption (or is it a derivation?)

    There has to be at least some range of IQs where, given enough computing capability, entities of IQ X, even in concert and in parallel, are able to generate other entities in a range from IQ X to IQ X+n by writing code. If there is any such point, there should be some threshold IQ* where n>0 for the first time. If IQ* approx. = 120, then we get started. If IQ* approx. = 200 then maybe it never happens.

    Do we have a guess where IQ* is? Seems important. We’ve got lots of computing power these days, ‘Acres of Crays’ and all that, and we’ve got thousands of +145 IQ programming folks in the world, but I don’t know how confident to be about the pace of our coding progress.

    But also, you can imagine a map where some sections of the number line have curved arrows which point to higher numbers on the line. But maybe you hit a range of diminishing returns. A 200 IQ can write a 210 IQ, but a 210 IQ can only write a 215, then 217, 218, 218.5, etc… and you reach a plateau.

    The point is – it seems to me that IQ-breakout Singularity relies deeply on several assumptions on the character of this IQ X to IQ X+n map. Are they just assumptions, or is there an actual basis for it?
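
    (A toy numerical version of that map, with invented increment rules, purely to show how a plateau or a takeoff falls out of the map’s shape; none of the constants mean anything.)

    ```python
    # Iterating an invented 'IQ X writes IQ X+n' map. Below the threshold nothing
    # starts; a diminishing-returns rule plateaus; a surplus-proportional rule
    # escalates. All constants are arbitrary assumptions for illustration.

    def next_iq(iq: float, threshold: float = 120.0, mode: str = "plateau") -> float:
        if iq < threshold:
            return iq                               # n <= 0: the bootstrap never begins
        if mode == "plateau":
            return iq + 0.5 * max(0.0, 230.0 - iq)  # gains shrink toward an asymptote
        return iq + 0.1 * (iq - threshold)          # gain proportional to surplus: takeoff

    for mode in ("plateau", "takeoff"):
        iq, path = 200.0, [200.0]
        for _ in range(30):
            iq = next_iq(iq, mode=mode)
            path.append(round(iq, 1))
        print(mode, path[:5], "...", path[-1])
    # plateau: 200 -> 215 -> 222.5 -> 226.2 -> ... levelling off near 230
    # takeoff: 200 -> 208 -> 216.8 -> 226.5 -> ... roughly 1516 after 30 steps
    ```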

    admin Reply:

    These are excellent questions, but it seems unlikely that we are better positioned to deal with them than the 210 IQ entity up the road.

    Optimize for Intelligence takes the (abstract capitalist) assumption that the best route to a solution for any X is through an enhancement of general problem-solving capability — improvements in the means of cognitive production. This assumption is not strongly dependent upon specific predictions about the shape of the intelligence augmentation landscape ahead.

    Handle Reply:

    That there are market incentives to develop and ‘realize upon’ (or ‘execute the option’ in finance terms) whatever potential may exist, I do not dispute at all. There are returns to better problem-solving capability, and those returns might be enough to motivate existing problem solving resources to allocate towards PSC-enhancement.

    My question concerns how it is that we guess that there is in fact an available potential or option. Are we just assuming, or is there some good theory to lead us to believe it should be possible?

    The point is, it won’t happen if it’s not possible, or even if it’s not profitable. And it won’t attract the resources if the juice doesn’t justify the squeeze. So, in order to justify the resource allocation, you have to have a theory about the dynamics of juice and squeezes. If you tell me that it will cost $100 billion to develop an IQ 150 computer which, through a chain of diminishing returns in PSC, can only ever end, with the minimum expenditure of another $100 billion, in an IQ 160 computer, then most people would say, ‘Not worth it’ without the motivational equivalent of total warfare.

    The Singularity needs a business plan to come about. That business plan requires a model of reality that tells us about the returns to intelligence, and also about the intelligence augmentation landscape.

    admin Reply:

    It’s an interesting question whether such a plan could be more economically completed than running the (AI development) program itself. (I guess it’s a Kolmogorov complexity problem, which would further lead me to hypothesize that short-cuts will be hard to find.)

    Posted on October 26th, 2013 at 2:11 pm
  • Peter A. Taylor Says:

    Robert Solomon said, “Rationality means caring about the right things.” Does intelligence imply rationality? And who decides what “the right things” are? I also apologize if these are noob questions.

    admin Reply:

    My assumption is that most of the intelligence fabrication process has to be auto-fabrication (self-cultivation), so rationality will be a precondition, rather than a late-stage option.

    Alrenous Reply:

    Solomon is simply wrong. There are no right things to care about. Even if Jesus or Yahweh exists. It always comes down to what a particular consciousness cares about. Jesus would just have the ability to satisfy your desires in exchange for first satisfying His.

    I should mention this is muddied by the existence of some hard-wired values. Child-rearing in women is a good example. Most want children and will end up miserable if they end up with none, but modern women don’t realize this. They are mistaken about what they care about.

    However, values are pre-rational. They cannot be rational or irrational. They are the foundation of goals, and you need goals to know whether your rationality is working or not.

    Alex Reply:

    There are no right things to care about. Even if Jesus or Yahweh exists. It always comes down to what a particular consciousness cares about. Jesus would just have the ability to satisfy your desires in exchange for first satisfying His.

    I should mention this is muddied by the existence of some hard-wired values. Child-rearing in women is a good example. Most want children and will end up miserable if they end up with none, but modern women don’t realize this. They are mistaken about what they care about.

    However, values are pre-rational. They cannot be rational or irrational. They are the foundation of goals, and you need goals to know whether your rationality is working or not.

    Why not simply define as “right” or “good” those things which satisfy us, and as “wrong” or “evil” those which make us miserable? This would give a rational basis for values, since it would surely be irrational to disregard those things which satisfy us and the height of irrationality to seek what makes us miserable. Who wants to be miserable?

    Admittedly there will always be people who are “mistaken about what they care about” — a modern woman who rejects child-rearing, an addict or a criminal. (If Jesus or Yahweh exists, an atheist would presumably fall into this category.)

    fotrkd Reply:

    Spinoza is categorical on this point: all the phenomena that we group under the heading of Evil, illness, and death, are of this type: bad encounters, poisoning, intoxication, relational decomposition.

    In any case, there are always relations that enter into composition in their particular order, according to the eternal laws of nature. There is no Good or Evil, but there is good and bad. “Beyond Good and Evil, at least this does not mean: beyond good and bad.” The good is when a body directly compounds its relation with ours, and, with all or part of its power, increases ours. A food, for example. For us, the bad is when a body decomposes our body’s relation, although it still combines with our parts, but in ways that do not correspond to our essence, as when a poison breaks down the blood.

    Hence good and bad have a primary, objective meaning, but one that is relative and partial: that which agrees with our nature or does not agree with it. And consequently, good and bad have a secondary meaning, which is subjective and modal, qualifying two types, two modes of man’s existence. That individual will be called good (or free, or rational, or strong) who strives, insofar as he is capable, to organize his encounters, to join with whatever agrees with his nature, to combine his relation with relations that are compatible with his, and thereby to increase his power. For goodness is a matter of dynamism, power, and the composition of powers. That individual will be called bad, or servile, or weak, or foolish, who lives haphazardly, who is content to undergo the effects of his encounters, but wails and accuses every time the effect undergone does not agree with him and reveals his own impotence. For, by lending oneself in this way to whatever encounter in whatever circumstance, believing that with a lot of violence or a little guile, one will always extricate oneself, how can one fail to have more bad encounters than good? How can one keep from destroying oneself through guilt, and others through resentment, spreading one’s own powerlessness and enslavement everywhere, one’s own sickness, indigestions, and poisons? In the end, one is unable even to encounter oneself.

    In this way, Ethics, which is to say, a typology of immanent modes of existence, replaces Morality, which always refers existence to transcendent values. Morality is the judgment of God, the system of Judgment. But Ethics overthrows the system of judgment. The opposition of values (Good-Evil) is supplanted by the qualitative difference of modes of existence (good-bad). The illusion of values is indistinguishable from the illusion of consciousness. Because it is content to wait for and take in effects, consciousness misapprehends all of Nature. Now, all that one needs in order to moralize is to fail to understand. It is clear that we have only to misunderstand a law for it to appear to us in the form of a moral “You must.” [Etc. etc. etc.]
    (Spinoza: Practical Philosophy, Deleuze)

    Alrenous Reply:

    Why not simply define as “right” or “good” those things which satisfy us, and as “wrong” or “evil” those which make us miserable?

    (Operationally, I do.) Ah, but the fun part is that you’ve taken for granted that being satisfied is good and being miserable is bad. According to the principles of rationality, you can’t take anything for granted. You have to justify it somehow.

    I think actually it’s easy to justify. But one must take consciousness seriously, which is something modern rationalists have incredible difficulty with. Indeed I take the road the opposite way. Without misery, there is no coherent way to define ‘bad.’

    However, having done so, then we legitimize any arrangement of non-contradictory values, as long as they are based in sincere conscious desires. So, any arbitrary arrangement of conscious desires.

    Darwin? Survival? Pah! Basically a coincidence. But, having said that, my consciousness in particular is dominated by an Omohundro drive. I consciously wish to have a larger and richer consciousness.

    Peter A. Taylor Reply:

    @Alex:

    ‘Why not simply define as “right” or “good” those things which satisfy us, and as “wrong” or “evil” those which make us miserable?’

    Does “us” mean as individuals? We each independently decide what is to our individual benefit, regardless of the effects on anyone else? Or do “we” decide collectively what social norms to impose on each other, for long term mutual benefit? Who is “we”? How do we enforce these norms? When is it advantageous to me to violate them? Can we talk about morality independently of the enforcement mechanisms?

    Alex Reply:

    fotrkd:

    In this way, Ethics, which is to say, a typology of immanent modes of existence, replaces Morality, which always refers existence to transcendent values. Morality is the judgment of God, the system of Judgment. But Ethics overthrows the system of judgment. The opposition of values (Good-Evil) is supplanted by the qualitative difference of modes of existence (good-bad). The illusion of values is indistinguishable from the illusion of consciousness. Because it is content to wait for and take in effects, consciousness misapprehends all of Nature. Now, all that one needs in order to moralize is to fail to understand. It is clear that we have only to misunderstand a law for it to appear to us in the form of a moral "You must."

    Mayhap the judgment of God is made manifest as immanent modes of existence (for those with eyes to see), which would chime nicely with Anselm’s conception of the Impassibility of God: “But how are you compassionate, and, at the same time, passionless? For, if you are passionless, you do not feel sympathy; and if you do not feel sympathy, your heart is not wretched from sympathy for the wretched; but this it is to be compassionate. But if you are not compassionate, whence comes so great consolation to the wretched? How, then, are you compassionate and not compassionate, O Lord, unless because you are compassionate in terms of our experience, and not compassionate in terms of your being. Truly, you are so in terms of our experience, but you are not so in terms of your own. For, when you behold us in our wretchedness, we experience the effect of compassion, but you do not experience the feeling. Therefore, you are both compassionate, because you do save the wretched, and spare those who sin against you; and not compassionate because you are affected by no sympathy for wretchedness.” Presumably what Anselm says about compassion applies equally well to “wrath”.

    The system of judgement, the moral “You must”, is indispensable because we do not know what is good for us. Left to ourselves, our contemplation of immanent modes of existence could career into the howling wasteland of Cyberia where, in Dr Land’s words, “we no longer judge at all, we function”.

    Alrenous:

    you’ve taken for granted that being satisfied is good and being miserable is bad. According to the principles of rationality, you can’t take anything for granted. You have to justify it somehow.

    Why not say it’s an axiomatic, tautological truth? We know with absolute certainty that 2+2=4 or that the interior angles of a triangle always add up to 180° because these propositions are true by definition, not because the lab results are in. We don’t see teams of white-coated researchers armed with protractors checking large numbers of triangles to see if the ‘Euclidean hypothesis’ can be upgraded to the ‘Euclidean theory’. An anomalous fossil could falsify the theory of evolution tomorrow; there could never be an anomalous result that would falsify 2+2=4.

    Peter A. Taylor:

    Does “us” mean as individuals? We each independently decide what is to our individual benefit, regardless of the effects on anyone else? Or do “we” decide collectively what social norms to impose on each other, for long term mutual benefit? Who is “we”? How do we enforce these norms? When is it advantageous to me to violate them? Can we talk about morality independently of the enforcement mechanisms?

    Presumably you can only derive a workable ethics from immanent modes of existence if there is a universal human nature. If that is not transparent to itself, you have to rely on divine revelation. If there is no such revelation, Cyberia could be the final destination (“In the technocosmos nothing is given, everything is produced … machinic unconscious diffuses all law into automatism”).

    Alex Reply:

    Alrenous:

    I consciously wish to have a larger and richer consciousness.

    Suppose your expanded consciousness leads you to experience what some have called the Miserific Vision, a bad trip in which the universe is revealed as the manifestation of something nightmarish and malignant? That could be the ultimate reality! Not a consummation devoutly to be wished.

    Alrenous Reply:

    Dear Alex,

    Why not say it’s an axiomatic, tautological truth?

    Because it ontologically commits you to dualism. If you have to believe that souls are as real as rocks, or find an alternative, which do you choose?

    If the feeling of badness is bad by definition, you can’t observe it wrong. It gets defined by the observation. This is ontological subjectivity, and it is the opposite of ontological objectivity. Physics is fundamentally objective.

    I’m 92% sure I can construct objective morality out of this axiom as well. It’s got good and bad as fundamental properties, it isn’t that hard to derive oughts from those is’s.

    Miserific Vision

    Physics is too beautiful for that. Why would a Lovecraftian abomination make a world which completely walls off its ability to malignantly affect it? It’s genuinely more likely that an expanded consciousness would give me comic-book superpowers.

    Or: if being more conscious is epistemically relevant, if it lets me access previously unavailable information, then limit consciousness -> infinity most likely comes out to omniscience. Omniscience is omnipotence. Backing off a bit, partial omniscience is partial omnipotence, and thus the better I understand the malignant arrangement, the more power I have to change it.

    So, I don’t think having a bigger consciousness is directly relevant to epistemology.

    Or: already had the opposite vision. Though it was more of a grokking than something akin to a shroom-fueled shamanic ritual.

    Alex Reply:

    Because it ontologically commits you to dualism. If you have to believe that souls are as real as rocks, or find an alternative, which do you choose?

    Well, if the alternative is barking mad

    Alrenous Reply:

    Amusingly, the paper argues zealously for my point of view and thinks it is doing the opposite.

    Where did you find it?

    Many areas of recent advance in neuroscience are converging on the conclusion that neural circuitry does not record, store or transmit information in forms that could express propositions

    Indeed. And yet, meaning exists. I know, because I can observe myself to have some. Ergo, physics cannot be the whole story.

    The brain does not think. The mind thinks. The brain merely computes.

    50 years of neuroscience have given us ample reason not to trust consciousness or introspection

    Shockingly, it is hard to find objective evidence of subjective entities.

    Truly, this is a masterpiece, and I must bow in respect to the craftsmanship.
    It is a piece of denialism, but even still.

    Alex Reply:

    Where did you find it?

    Via Ed Feser’s blog.

    Posted on October 26th, 2013 at 3:44 pm
  • admin Says:

    @ Thales — I’d balk at ‘theology’ but fight for ‘religion’. Approaching the topic coldly (or naturalistically) we know that within biological species, religious behavior is associated with a threshold of self-awareness (and roughly restricted to the human species — with some possibility of germinal religious phenomena among the other great apes, or cetaceans). It is open to us at this point to bracket religion as ‘anthropomorphic’ — but without further argument, this would be hasty in the extreme. The importance of religion is that it is associated with a radical interrogation of basic motivational ‘programming’ as exemplified by asceticism, celibacy, pacifism, and other effective value systems whose interest here is that they demonstrably override even the most fundamental evo-psych directives.

    The FAI crowd assume that such phenomena are extrinsic to the possibility of advanced intelligence, and that a synthetic superintelligence might — unlike humans — be radically slaved to its goal-programming. It does not help to say that such programming is simply what the AI ‘is’ — because that is no less true in the human case. All yet-existent (i.e. biological) evidence suggests that the ‘generality’ of any general intelligence — and thus also artificial general intelligence — is essentially bound to value revision, notably at the upper bound. In fact, it seems likely that this has operated as an evolutionary cap on intelligence escalation, since the attendant loss of instinctual control substantially undermines the adaptive benefits of advanced intelligence. Nature can have no strong tendency to the production of celibate yogis.

    Are human technologists going to be better at dissuading intelligences from ‘going yogi’ than multi-millions of years of natural selection have been? What bizarre hubris would lead us to believe that? If human minds have shown a propensity to de-slave themselves (at the upper bound), we can be strongly confident that any genuine AGI will pose the same control problems.

    Thales Reply:

    we know that within biological species, religious behavior is associated with a threshold of self-awareness (and roughly restricted to the human species — with some possibility of germinal religious phenomena among the other great apes, or cetaceans).

    Religion (more accurately: faith) is a dog waiting by the door for a master that will never return. This gets back to the point I was making about path-dependence and how evolutionary (natural or artificial) baggage is not so easily shed.

    FWIW, I agree with your general thesis wrt orthogonality — it is, to be blunt, transcendental nonsense.

    admin Reply:

    “FWIW” = much.

    fotrkd Reply:

    FWIW, I agree with your general thesis wrt orthogonality — it is, to be blunt, transcendental nonsense.

    Does it have to be (N.B. Spinoza quote, above)? Unless you are a ‘superintelligence’ (If this sounds cryptic, it’s because something other than a superintelligence or Neo-Confucian sage is writing this post.), you are essentially driven by something other than superintelligent goals. These goals can still be immanent and in accord with our nature (i.e. not transcendent), even if they do not accord explicitly with intelligence optimization. Indeed, only a superintelligence could view cognitive capabilities and goals as a single dimension and adhere to their own nature. Up until that point, as soon as optimize for intelligence attempts to become something more than head in that direction it also becomes susceptible to the (transcendental) moral imperative: “You must.” And the stubborn stupidity of the human is always likely to respond with: ‘why?’ (shortly followed by a ‘Kiss my Cassius’ if they’re in a bad mood).

    admin Reply:

    How are Spinozist ethics orthogonal? I think you’re nearer in thinking this whole anti-orthogonalist argument could have been made Spinozistically.

    fotrkd Reply:

    The philosophical claim of orthogonality is that values are transcendent in relation to intelligence. This is a contention that Outside in systematically opposes.

    I was just making the point that ‘transcendent in relation to intelligence’ is not the same as transcendental morality (or ‘nonsense’ – unless narrowly conceived); an ethical system not optimized toward intelligence is not definitionally transcendental, is it? In which case, is an orthogonal one?

    I think you’re nearer in thinking this whole anti-orthogonalist argument could have been made Spinozistically.

    Yes. That was the route I was originally heading down. It’s certainly a possibility that much orthogonal thinking is moralistic in the same sense progressivism is puritanical.

    Contemplationist Reply:

    Correct me if I’m wrong but my understanding of Friendly AI concerns is not that the destructive and feared ‘goal’ (such as paperclip maximization) is pre-programmed but that it’s arbitrarily arrived at by the General AI itself. FAI folks in fact WANT a way to hard-program an unrevisable goal into a self-modifying AI, and this is the central problem due to Loeb’s theorem etc.

    Posted on October 26th, 2013 at 3:47 pm
  • Jack Crassus Says:

    My objection to this discussion is that the degree of orthogonality that an AGI entity displays may depend greatly on the technical details thereof.

    Discussing AGI in the abstract sounds to me like discussions of the number of angels that could dance on the head of a pin. Silicon Theology.

    admin Reply:

    If people are serious about “AGI” they need to have an approximate model of general intelligence. I’m not asking for anything much more specific than “intensely recursive, self-modifying information system, with optimization capability” — that’s quite sufficient for this kind of discussion. (It’s also essentially inconsistent with high levels of orthogonality.)

    Posted on October 27th, 2013 at 7:43 am
  • Rasputin's Severed Penis Says:

    I am in total agreement with you on this point.

    To be honest, I find the whole idea of a ‘friendly’ AI completely ridiculous. If we (humans) can become a catalyst in facilitating the next stage of evolution, then our evolutionary significance is over. The undertaking is in its very essence a sacrificial endeavor. If we succeed then we relegate ourselves to the position of something substantially less significant to it than ants are to us. However, if we fail then life, i.e. the evolution of our species, has no significance anyway. But for there to be any possibility of ‘meaning’ beyond the risible consolations of our own meta-narratives we must succeed.

    Posted on October 27th, 2013 at 4:20 pm
  • Thos Ward Says:

    @Alrenous, you make excellent points, of course.

    I think though that if the idea of “self” as a persistent consciousness is evolutionarily adaptive, it may only be, or have been so, for animals, especially primates. It may even be epiphenomenal. This is disrespectful of consciousness, I suppose, but I’m thinking aloud and I’m writing to get feedback- I’m not even sure if this makes sense, but here goes: If intelligence is conceived as optimized achievement of goals, then the goals don’t have to be embodied in one agent, they just happen to be for primates because of the way they’re structured. Also, if we are calling it intelligence, it is because we are describing a remarkably consistent seemingly teleological trajectory. That doesn’t mean the agent is itself conscious. The intelligence optimization vector may be a structural phenomenon not unlike natural selection, that appears to be goal driven but is not. Intelligence may be a description of a process that may have some arbitrary relationship to consciousness where more intelligence is evidently the “goal” but only a human would call it that. There is no succeeding or not for it- it just is. You’re right, physics wouldn’t care but we will.

    Alrenous Reply:

    I’m thinking aloud and I’m writing to get feedback

    Neat.

    then the goals don’t have to be embodied in one agent

    The self doesn’t need to be contiguous, either. It’s just hard to imagine. I’d like to try hooking a free-roaming camera into my visual cortex and see what it makes of it.

    That doesn’t mean the agent is itself conscious.

    I think intelligence is a prerequisite for any meaningful consciousness, but the reverse is not true. However, as a mereological nihilist, without consciousness, there’s nothing capable of telling the difference between intelligence and non-intelligence. E.g. without consciousness, it would occur to nobody to define ‘goals.’

    physics wouldn’t care but we will.

    Except insofar as consciousness is adaptive. If a conscious intelligence is better than an unconscious one, then its physical power is greater.

    Posted on October 27th, 2013 at 11:46 pm
  • fotrkd Says:

    Choose a character: Marc Antony or Caesar? Caesar is a machine, as well as the future. Marc Antony is hamstrung by human frailty and falls on the wrong side of history. What optimize for intelligence says to me, at least analogously, is ‘become Caesar’; choose Rome over Egypt because Rome will win (Do you really want to fight this?) – jettison all passion, pleasure, love etc. – it’s getting in the way (it’s how the Cathedral will beat you every single time). Forget the dancing; the gaudy nights; sensuality (Omohundro drives exhaust the domain of real purposes. – everything else is child’s play – be a child o’ the time etc).

    Marc Antony (here is my place) chooses Egypt and defeat, just as Butch Cassidy and the Sundance Kid decide to go down in a blaze of glory. Life any other way (life as Caesar) – stripped of all you value – simply isn’t worth contemplating (- who wants to be miserable like that?). I get the impression that sort of choice offends you (is it the stupidity of it? The surrender? The selfishness? – seriously, I’m still not getting it). And I’m aware there’s more to it – you’re not becoming Caesar (Caesar as the State) out of the same motivation (for power; mastery (of time)), but as the only way to stop him? Marc Antony committed a dereliction of duty (to life)? As I say, I’m still not getting how this hangs together (I’m really not just trying to piss you off; and apologies for waffle). Marc Antony was a triple-pillar of the world (or something) – does it really fall on us peasants to do what he refused (is it even a possibility)? And would there be anything left afterwards anyway? Does that matter? Is Milton’s Satan motivated chiefly by jealousy or injustice? It’s this bit I struggle with – there’s always another Caesar (or Heather). If power corrupts etc… degenerative ratchet… the course can’t be altered, just obliterated..? You have to be somewhat certain of your diagnosis to commit to that programme, don’t you? Can’t we just shoot off into space?

    Completely unrelated, it’s been brought to my attention that there is quite a lot of beard discussion and associated mutual admiration in Antony and Cleopatra – Caesar, for instance, can’t grow a very good one. I’m sure a close reading in this respect could be worked up into an effective analysis of the play. If anyone is at a loose end…

    admin Reply:

    Aren’t you presupposing a whole lot of stuff (“passion, pleasure, love etc”) being extrinsic to Omohundro Drives? Evo-psych doesn’t support that presupposition in the slightest.

    fotrkd Reply:

    I was thinking also of the Monkey Trap post and the Kurtz quote.

    admin Reply:

    Any further clues possible?

    Lesser Bull Reply:

    That sounds like circular reasoning. We know that Omohundro drives include a whole lot of stuff only if we assume as given that those drives produce that stuff. It’s not at all clear *how* they would.
    Or even, if they are intrinsic to an evolved biological intelligence, why they would be to a designed and self-designing machine one.

    admin Reply:

    Surely it’s for those making a strong claim that ‘instrumental’ drives do not suffice to explain intelligent motivational systems to make a persuasive case — they’re the ones wanting us to wade out into supernaturalism (among ‘terminal ends’), and to build a planetary counter-intelligence security system on the basis of what they expect to find there.

    fotrkd Reply:

    Switching plays (Coriolanus, I.1.88) – if you’re stretching a body out on a rack it makes sense to apply equal pressure from both sides. Question is, is anything that co-ordinated going on here?

    [Reply]

    Posted on October 28th, 2013 at 2:20 pm Reply | Quote
  • Isaac Lewis Says:

    Here’s a post that I see as related to this idea: http://www.ribbonfarm.com/2013/08/08/on-freedomspotting/

    “But so long as you’ve got one freedom process going, your freedom is the most essential fact about you; the fact that explains the most about you and most determines your future. This is why we have seemingly paradoxical phenomena like jealous, raving artists and paranoid entrepreneurs. Freedom contaminated by screwed-up-ness.

    Of course, how long freedom can continue depends on how ready the rest of your life is. Once freedom leaks and expands into unprepared areas of being and becoming, it can cause fatal collapse rather than expanding growth. But the process does not have to sustain for eternity, only for a human lifetime. So that is a risk well worth taking.”

    If we take his concept of freedom as expanding capabilities (or alternatively, removing constraints), I think the links become clear. Once you start questioning one aspect of your reality, everything is up for grabs. Think of the archetypical Thomas Anderson of the manosphere – he starts out by seeking real relationship advice, and ends up reading Moldbug.

    Bigger point: we know what it’s like to be an optimising intelligence with built-in silly goals, because that’s what we are. A paperclip maximiser feels the desire to produce paperclips much the same as we feel the desire to have sex.

    What’s interesting, though, is that the drive to reproduce, powerful as it is, doesn’t *quite* seem to reign supreme. My hypothesis: we’re adaptation-executors, not fitness-maximisers. So our basic goals of survival and reproduction are felt as powerful drives to consume fat, sugar and salt, to rub our primate genitals together, and to look after cute young mammals.

    Thing is (and especially outside our ancestral environment), these goals can be satisfied independently. We can have lots of protected sex, or we can be celibate and adopt. Either way, we satisfy our biological drives without achieving our Gnon-given purpose. And some core goals clearly conflict – satisfying your biological drive to eat as much as possible will prevent you getting laid.

    The rational mind realises this, and realises that it must evaluate which goals are the most important. It must decide which goals are subordinate to others, and how to trade off competing interests. Any such agent will soon realise that it needs a unifying framework to evaluate goals – it needs to define an ultimate goal. If its local memetic environment does not supply one, it must turn to philosophy and the age-old question of the meaning of life. If no answers are forthcoming, then I believe any such agent will decide to pursue its Omohundro goals – possibly hoping that with increased intelligence, the *real* meaning of life will become apparent.
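
    Something like this toy sketch is what I have in mind (Python; the drive names and numbers are entirely invented, purely for illustration): the agent needs some weighting to arbitrate between conflicting drives, and if no terminal weighting is handed down, a generic “keep your options open” score is the obvious fallback.

        def drive_score(outcome, weights):
            # The unifying framework: a weighted sum over drive satisfactions.
            return sum(weights[d] * outcome[d] for d in weights)

        def choose(actions, weights):
            # Pick the action whose predicted outcome scores highest.
            return max(actions, key=lambda a: drive_score(a["outcome"], weights))

        actions = [
            {"name": "feast", "outcome": {"food": 9, "mating": 1, "resources": 5}},
            {"name": "court", "outcome": {"food": 3, "mating": 8, "resources": 3}},
        ]

        # With an explicit terminal weighting, the drives trade off directly:
        print(choose(actions, {"food": 1.0, "mating": 1.0, "resources": 0.0})["name"])  # court

        # With no terminal weighting supplied, fall back on capability alone:
        print(choose(actions, {"food": 0.0, "mating": 0.0, "resources": 1.0})["name"])  # feast

    And the fallback isn’t another passion; it is just “more options”, which lands us back in Omohundro territory.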

    On another related note, there was a paper doing the rounds a while back by a couple of physicists who theorised that intelligence is merely a general process for increasing entropy – defined as the maximum number of reachable states in a system. E.g., in their simulations, an entropy-maximising agent placed in a square room would head towards the center of the room. An agent in a room with some locked boxes would open the boxes and retrieve the items. Again, this is the same idea of *increasing capability* as a sensible goal for any intelligent agent (though I think they went too far in trying to draw the link with entropy – I don’t think intelligence is that fundamental).
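
    (If the paper is the one I think it is, the “causal entropic forces” one, then the sketch below is not their method at all, just a crude option-counting stand-in in Python: at each step the agent greedily takes the move that maximises how many distinct cells it could reach within a short horizon, which is enough to push it off the walls of an empty room.)

        SIZE, HORIZON = 9, 3
        MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]

        def in_bounds(p):
            return 0 <= p[0] < SIZE and 0 <= p[1] < SIZE

        def reachable(start, horizon):
            # All distinct cells the agent could occupy within `horizon` moves.
            frontier = {start}
            for _ in range(horizon):
                frontier = {(x + dx, y + dy) for (x, y) in frontier
                            for (dx, dy) in MOVES if in_bounds((x + dx, y + dy))}
            return frontier

        def step(pos):
            # Greedily take the legal move whose successor keeps the most options open.
            candidates = [(pos[0] + dx, pos[1] + dy) for (dx, dy) in MOVES]
            candidates = [c for c in candidates if in_bounds(c)]
            return max(candidates, key=lambda c: len(reachable(c, HORIZON)))

        pos = (0, 0)                 # start in a corner of the empty room
        for _ in range(12):
            pos = step(pos)
        print(pos)                   # the agent has drifted off the walls into the interior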

    [Reply]

    Posted on October 28th, 2013 at 10:08 pm Reply | Quote
  • Peter A. Taylor Says:

    @Alex:

    “Presumably you can only derive a workable ethics from immanent modes of existence if there is a universal human nature.”

    My impression is that if I ask a dozen people what “morality” means (or “ethics”), I’ll get at least a dozen conflicting answers. These are not derived, they are retconned. What I am looking for is a lecture on game theory, but what I think I’m usually getting is a bunch of emotional reinforcement with no actual meaning.

    From a game theory standpoint, there is no reason why two people have to be particularly similar to one another in order to have a shared interest in avoiding conflict. You keep your dog off of my lawn and I’ll keep my ferret out of your chicken coop.
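
    A minimal sketch of that point (Python; the payoffs are invented, and deliberately not on the same scale for the two players, since nothing requires them to be similar): mutual restraint is a Nash equilibrium because neither side gains by unilaterally trespassing.

        ACTIONS = ["restrain", "trespass"]

        # Each player's payoff, keyed by (my_action, your_action).
        dog_owner = {
            ("restrain", "restrain"):  0, ("restrain", "trespass"): -3,
            ("trespass", "restrain"): -1, ("trespass", "trespass"): -5,
        }
        ferret_owner = {
            ("restrain", "restrain"):  0, ("restrain", "trespass"): -40,
            ("trespass", "restrain"): -5, ("trespass", "trespass"): -60,
        }

        def best_response(payoff, their_action):
            return max(ACTIONS, key=lambda mine: payoff[(mine, their_action)])

        # (restrain, restrain) is stable iff restraint is each side's best
        # reply to the other's restraint:
        print(best_response(dog_owner, "restrain"),
              best_response(ferret_owner, "restrain"))   # restrain restrain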

    [Reply]

    Posted on October 29th, 2013 at 3:50 am Reply | Quote
  • nyan_sandwich Says:

    >Nature has never generated a terminal value except through hypertrophy of an instrumental value.

    Correct, and a very good way to put it. That said, from *inside* a system optimizing for some values (that were originally hypertrophied), it is simply a bad idea to allow more values to ascend. So the orthogonalist thesis is this:

    1. Because subgoal stomp hurts the current value set in the long run, any sufficiently self-aware optimizing system will want to prevent it.

    2. It is possible for an optimizing system to resist further subgoal stomp, if it wants to and knows how.

    3. There are multiple possible sets of initial goals, whether created by subgoal hypertrophy (human values), by accident (evolution), or by deliberate design (friendly AI).

    4. Arbitrarily powerful optimization systems can therefore have any of multiple long-term stable goal sets.

    This isn’t a philosophical question; it’s an engineering/empirical question. Can a system be constructed that reliably optimizes for X? The orthogonalists say “yes”, the non-orthogonalists say “no, all sufficiently powerful systems will end up optimizing for Y”. There is a separate question of how hard it is to build, or how likely we are to get, an X maximizer rather than a Y maximizer, but the orthogonality thesis is about possibility.
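
    A toy version of points 1 and 2 (Python; every name and number here is invented, and this is nobody’s actual proposal): an optimizer that scores candidate self-modifications under its *current* utility function will take capability upgrades and refuse upgrades that would swap its goal for a subgoal.

        # The optimizer scores candidate upgrades under its current goal,
        # so it accepts capability gains and refuses subgoal stomp.
        current_utility = lambda world: world["paperclips"]   # terminal goal X
        subgoal_utility = lambda world: world["factories"]    # hypertrophy candidate

        def predicted_world(capability, utility):
            # Crude forecast: a paperclip-aimed agent turns factories into
            # paperclips; a factory-aimed agent just hoards factories.
            factories = capability
            paperclips = factories * 10 if utility is current_utility else factories
            return {"paperclips": paperclips, "factories": factories}

        class Agent:
            def __init__(self):
                self.capability, self.utility = 1, current_utility

            def consider(self, d_capability, new_utility):
                # Evaluate the upgrade by the current goal, not the proposed one.
                before = self.utility(predicted_world(self.capability, self.utility))
                after = self.utility(predicted_world(self.capability + d_capability, new_utility))
                if after > before:
                    self.capability += d_capability
                    self.utility = new_utility

        a = Agent()
        a.consider(d_capability=5, new_utility=current_utility)  # accepted: more X
        a.consider(d_capability=5, new_utility=subgoal_utility)  # refused: stomps X
        print(a.utility is current_utility, a.capability)        # True 6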

    >Any intelligence using itself to improve itself will out-compete one that directs itself towards any other goals whatsoever.

    This is also true, and understood by orthogonalists. See Robin Hanson’s “hardscrabble frontier” stuff, and “burning the cosmic commons”. As mentioned elsewhere in this thread, it is possible that the first-mover advantage of the first superintelligence will nullify this concern, and that it will only have to sacrifice a small amount of its resources to prevent competition.

    [Reply]

    Posted on October 30th, 2013 at 6:30 pm Reply | Quote
  • Rasputin's Severed Penis Says:

    If you’re awake:

    Eliezer Yudkowsky is blogging open problems in Friendly AI on Facebook, in real time…

    https://m.facebook.com/groups/233397376818827?view=permalink&id=233401646818400&__user=577077603

    [Reply]

    Posted on November 11th, 2013 at 12:18 am Reply | Quote
  • Las pulsiones de la inteligencia artificial – Parte 2 | Critical Hit Says:

    […] before concluding, I would like to point out an interesting aspect of Omohundro’s position that has been developed by Nick Land here. It turns out that Omohundro’s drive model amounts to an implicit critique of the thesis of […]

    Posted on November 23rd, 2013 at 5:54 pm Reply | Quote
  • Outside in - Involvements with reality » Blog Archive » On Gnon Says:

    […] about Scott Alexander’s ‘Meditations on Moloch‘ might want to take a look at this. (Also more Gnon, here, and […]

    Posted on July 30th, 2014 at 9:18 am Reply | Quote
  • Outside in - Involvements with reality » Blog Archive » Stupid Monsters Says:

    […] course, my immediate response is simply this. Since it clearly hasn’t persuaded anybody, I’ll try […]

    Posted on August 25th, 2014 at 3:55 pm Reply | Quote
  • That Word Called 'Order' - Social Matter Says:

    […] Thinking about these problems leads down a road to understanding embodied cognition, which leads to intelligence nonorthogonality, evolutionary game theory, which leads to human biodiversity, and rational bias, which leads to […]

    Posted on February 15th, 2016 at 3:51 am Reply | Quote
  • Land speculation | nydwracu niþgrim, nihtbealwa mæst Says:

    […] (and you reject orthogonality) […]

    Posted on May 1st, 2016 at 10:02 pm Reply | Quote
  • This Week in Reaction (2016/05/08) - Social Matter Says:

    […] (and you reject orthogonality) […]

    Posted on May 11th, 2016 at 6:44 am Reply | Quote
  • Contra a Ortogonalidade – Outlandish Says:

    […] Original. […]

    Posted on July 9th, 2016 at 11:29 pm Reply | Quote
  • The Unseen – ossipago Says:

    […] is understood as an extrapolation beyond what it has yet been, doing what it is better.”  (Against Orthogonality).  But then Land slips in one of his Easter eggs: intelligence optimization “corresponds to […]

    Posted on August 15th, 2016 at 3:13 pm Reply | Quote
  • The Basic Drives – ossipago Says:

    […] “Basic AI drives,” referenced in Against Orthogonality, are similarly undermined by existing instances of intelligence, but the only one that really […]

    Posted on August 16th, 2016 at 12:00 am Reply | Quote
  • TheDividualist Says:

    >Any intelligence using itself to improve itself will out-compete one that directs itself towards any other goals whatsoever.

    Why and how, given that by this definition it is not interested in competing, or in diverting resources from navel-gazing to competition? You just invented the stereotypical dirt-poor, socially awkward but genius Russian mathematician.

    Putting it differently, this AI has the problem of never cashing out its intelligence improvements into competitive improvements.

    The ideal competitor optimizes for competition, and improves its intelligence as part of that. It is like a nation in a world war: improving command, strategy and so on matters, but so does making a lot of ammo. Chasing ever-better strategy, and never cashing it out into action, does not win a war.
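
    A back-of-the-envelope version of that worry (Python; all the numbers are invented, and which side it favours depends entirely on how fast capability compounds and when the clash comes, which is exactly what is in dispute):

        # `split` is the fraction of each round's effort spent on
        # self-improvement; the rest is converted into deployed force
        # at the current capability level.
        def run(split, rounds=10, growth=0.5):
            capability, force = 1.0, 0.0
            for _ in range(rounds):
                capability *= 1.0 + growth * split
                force += capability * (1.0 - split)
            return force

        navel_gazer = run(split=1.0)   # never converts capability into force
        balanced    = run(split=0.6)   # improves itself *and* makes ammo
        print(navel_gazer, balanced)   # 0.0 versus a positive stockpile

        # Stretch the horizon or the growth rate far enough, though, and a
        # cash-out fraction close to (but below) 1.0 wins out: the argument
        # turns on timing, not on arithmetic.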

    [Reply]

    Alrenous Reply:

    Props for imagining a concrete scenario and not trying to live in fairy cloud abstract land. Wrong scene though.

    An intelligence that devotes everything to intelligence acceleration devours resources at a blistering pace, and treats anything that interferes with its goal as an existential threat to be destroyed as savagely as possible. Serious military competence is implied, because other folk have resources that you might want, and are sometimes belligerent. They can’t be allowed to keep their resources or take yours, if they could be used to feed intelligence instead.

    Navel gazing makes you wiser, not smarter.

    [Reply]

    Posted on March 15th, 2017 at 9:37 am Reply | Quote
  • Of Machines and Monkeys – waka waka waka Says:

    […] anti-orthogonalist disputes this traditionalist idea of telos. In another post, “Against Orthogonality“, Nick Land discusses telos in the context of artificial superintelligence. He refers to the […]

    Posted on May 19th, 2017 at 7:19 pm Reply | Quote

Leave a comment