The Design Space of Minds-In-General
Followup to: The Psychological Unity of Humankind
People ask me, "What will Artificial Intelligences be like? What will they do? Tell us your amazing story about the future."
And lo, I say unto them, "You have asked me a trick question."
ATP synthase is a molecular machine - one of three known occasions when evolution has invented the freely rotating wheel - which is essentially the same in animal mitochondria, plant chloroplasts, and bacteria. ATP synthase has not changed significantly since the rise of eukaryotic life two billion years ago. It is something we all have in common - thanks to the way that evolution strongly conserves certain genes; once many other genes depend on a gene, a mutation will tend to break all the dependencies.
Any two AI designs might be less similar to each other than you are to a petunia.
Asking what "AIs" will do is a trick question because it implies that all AIs form a natural class. Humans do form a natural class because we all share the same brain architecture. But when you say "Artificial Intelligence", you are referring to a vastly larger space of possibilities than when you say "human". When people talk about "AIs", they are really talking about minds-in-general, or optimization processes in general. Having a word for "AI" is like having a word for everything that isn't a duck.
Imagine a map of mind design space... this is one of my standard diagrams...
All humans, of course, fit into a tiny little dot - as a sexually reproducing species, we can't be too different from one another.
This tiny dot belongs to a wider ellipse, the space of transhuman mind designs - things that might be smarter than us, or much smarter than us, but which in some sense would still be people as we understand people.
This transhuman ellipse is within a still wider volume, the space of posthuman minds, which is everything that a transhuman might grow up into.
And then the rest of the sphere is the space of minds-in-general, including possible Artificial Intelligences so odd that they aren't even posthuman.
But wait - natural selection designs complex artifacts and selects among complex strategies. So where is natural selection on this map?
So this entire map really floats in a still vaster space, the space of optimization processes. At the bottom of this vaster space, below even humans, is natural selection as it first began in some tidal pool: mutate, replicate, and sometimes die, no sex.
Are there any powerful optimization processes, with strength comparable to a human civilization or even a self-improving AI, which we would not recognize as minds? Arguably Marcus Hutter's AIXI should go in this category: for a mind of infinite power, it's awfully stupid - poor thing can't even recognize itself in a mirror. But that is a topic for another time.
My primary moral is to resist the temptation to generalize over all of mind design space
If we focus on the bounded subspace of mind design space which contains all those minds whose makeup can be specified in a trillion bits or less, then every universal generalization that you make has two to the trillionth power chances to be falsified.
Conversely, every existential generalization - "there exists at least one mind such that X" - has two to the trillionth power chances to be true.
So you want to resist the temptation to say either that all minds do something, or that no minds do something.
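A toy version of this counting argument, shrunk from trillion-bit minds to 20-bit ones so the whole design space can actually be enumerated (the property `does_x` is an arbitrary placeholder of my own, not anything claimed in the post):

```python
# Toy version of the counting argument: 2**20 designs instead of 2**(10**12).
# does_x is an arbitrary stand-in for "this mind does X".
N_BITS = 20
designs = range(2 ** N_BITS)

def does_x(mind: int) -> bool:
    # Hypothetical property: some arbitrary function of the design bits.
    return bin(mind).count("1") % 3 == 0

# A universal claim has 2**N_BITS chances to be falsified...
universal = all(does_x(m) for m in designs)
# ...while an existential claim has 2**N_BITS chances to be true.
existential = any(does_x(m) for m in designs)
print(universal, existential)  # -> False True
```

The universal claim is falsified almost immediately, while the existential claim is verified almost immediately - with 2^20 chances (let alone 2^trillion), that is the typical outcome.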
The main reason you could find yourself thinking that you know what a fully generic mind will (or won't) do is if you put yourself in that mind's shoes - imagine what you would do in that mind's place - and get back a generally wrong, anthropomorphic answer. (Albeit it is true in at least one case, since you are yourself an example.) Or you might imagine a mind doing something, and then imagine the reasons you wouldn't do it - so that you imagine that a mind of that type can't exist, that the ghost in the machine will look over the corresponding source code and hand it back.
Somewhere in mind design space is at least one mind with almost any kind of logically consistent property you care to imagine.
And this is important because it forces us to discuss what happens, lawfully, and why, as a causal result of a mind's particular constituent makeup; somewhere in mind design space is a mind that does it differently.
Of course you could always say that anything which doesn't do it your way, is "by definition" not a mind; after all, it's obviously stupid. I've seen people try that one too.
Comments (82)
"everything that isn't a duck"
Muggles?
@ Eli:
"Arguably Marcus Hutter's AIXI should go in this category: for a mind of infinite power, it's awfully stupid - poor thing can't even recognize itself in a mirror."
Have you (or somebody else) mathematically proven this?
(If you have then that's great and I'd like to see the proof, and I'll pass it on to Hutter because I'm sure he will be interested. A real proof. I say this because I see endless intuitions and opinions about Solomonoff induction and AIXI on the internet. Intuitions about models of super intelligent machines like AIXI just don't cut it. In my experience they very often don't do what you think they will.)
Shane, there was a discussion about this on the AGI list way back when, "breaking AIXI-tl", in which e.g. this would be one of the more technical posts. I think I proved this at least as formally as you proved your proof that FAI was impossible, which I refuted.
But of course this subject is going to take a separate post.
@ Eli:
Yeah, my guess is that AIXI-tl can be broken. But AIXI? I'm pretty sure it can be broken in some senses, but whether these senses are very meaningful or significant, I don't know.
And yes, my "proof" that FAI would fail failed. But it also wasn't a formal proof. Kind of a lesson in that, don't you think?
So until I see a proof, I'll take your statement about AIXI being "awfully stupid" as just an opinion. It will be interesting to see if you can prove yourself to be smarter than AIXI (I assume you don't view yourself as below awfully stupid).
What if AIXI accidentally bashed its own brains out before it tried that?
Where would cats fit in this space? I would assume that they would be near humans, sharing as they do an amygdala, prefrontal cortex, and cerebellum, with neurons that fire at (I assume) the same speed. Not sure about the abstract planning. Could you have done the psychological unity of the mammals for your previous article?
Does anyone know the ratio of discussion of implementations of AIXI/-tl, to discussion of its theoretical properties? I've calculated it at about zero.
@ Silas:
Given that AIXI is uncomputable, how is somebody going to discuss implementing it?
An approximation, sure, but an actual implementation?
What do you mean by a mind?
All you have given us is that a mind is an optimization process. And: what a human brain does counts as a mind. Evolution does not count as a mind. AIXI may or may not count as a mind (?!).
I understand your desire not to "generalize", but can't we do better than this? Must we rely on Eliezer-sub-28-hunches to distinguish minds from non-minds?
Is the FAI you want to build a mind? That might sound like a dumb question, but why should it be a "mind", given what we want from it?
Perhaps "mind" should just be tabooed. It doesn't seem to offer anything helpful, and leads to vast fuzzy confusion.
Shane_Legg: factoring in approximations, it's still about zero. I googled a lot hoping to find someone actually using some version of it, but only found the SIAI's blog's python implementation of Solomonoff induction, which doesn't even compile on Windows.
So is the reason I should believe this space of minds-in-general exists at all going to come in a later post?
@ Silas:
I assume you mean "doesn't run" (python isn't normally a compiled language).
Regarding approximations of Solomonoff induction: it depends how broadly you want to interpret this statement. If we use a computable prior rather than the Solomonoff mixture, we recover normal Bayesian inference. If we define our prior to be uniform, for example by assuming that all models have the same complexity, then the result is maximum a posteriori (MAP) estimation, which in turn is related to maximum likelihood (ML) estimation. Relations can also be established to minimum message length (MML), minimum description length (MDL), and maximum entropy (ME) based prediction (see Chapter 5 of An Introduction to Kolmogorov Complexity and Its Applications by Li and Vitányi, 1997).
In short, much of statistics and machine learning can be viewed as computable approximations of Solomonoff induction.
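The progression described here (computable prior → Bayesian inference; uniform prior → MAP → ML) is easy to see in a toy sketch. All model names and numbers below are invented for illustration; only the structure follows the comment:

```python
# Toy sketch: Bayesian inference with a computable prior, and how a
# uniform prior collapses MAP estimation into maximum likelihood.
# The three "models" and all numbers are made up for illustration.
models = {
    "always-heads": {"bits": 2, "likelihood": 0.0},            # data had tails
    "fair-coin":    {"bits": 3, "likelihood": 0.5 ** 10},      # 10 flips
    "biased-0.7":   {"bits": 8, "likelihood": 0.7**7 * 0.3**3},
}

def posterior(prior):
    scores = {name: prior(m) * m["likelihood"] for name, m in models.items()}
    z = sum(scores.values())
    return {name: s / z for name, s in scores.items()}

# Solomonoff-style computable prior: 2**-(description length in bits).
complexity_post = posterior(lambda m: 2.0 ** -m["bits"])
# Uniform prior: the MAP model is just the maximum-likelihood model.
uniform_post = posterior(lambda m: 1.0)

print(max(uniform_post, key=uniform_post.get))        # -> biased-0.7
print(max(complexity_post, key=complexity_post.get))  # -> fair-coin
```

Note that the uniform and complexity priors pick different winners here: the prior is doing real work, which is exactly why these count as distinct approximations of Solomonoff induction rather than one method.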
The larger point, that the space of possible minds is very large, is correct.
The argument used involving ATP synthase is invalid. ATP synthase is a building block. Life on earth is all built using roughly the same set of Legos. But Legos are very versatile.
Here is an analogous argument that is obviously incorrect:
People ask me, "What is world literature like? What desires and ambitions, and comedies and tragedies, do people write about in other languages?"
And lo, I say unto them, "You have asked me a trick question."
"the" is a determiner which is identical in English poems, novels, and legal documents. It has not changed significantly since the rise of modern English in the 17th century. It is something that every English document has in common.
Any two works of literature from different countries might be less similar to each other than Hamlet is to a restaurant menu.
I would point out, Mr. Goetz, that some languages do not have a "the".
It is not clear how this changes the content of things people say or write in those languages. Whorf-Sapir, while disproven in the technical sense, is surprisingly difficult to abolish.
Phil, I'm not really sure what your criticism has to do with what Eliezer wrote. He's saying that evolution is contingent -- bits that work can get locked into place because other bits rely on them. Eliezer asserts that AI design is not contingent in this manner, so the space of possible AI designs does not form a natural class, unlike the space of realized Earth-based lifeforms. Your objection is... what, precisely?
Silas: you might find this paper of some interest:
http://www.agiri.org/docs/ComputationalApproximation.pdf
Eliezer, do you intend your use of "artificial intelligence" to be understood as always referencing something with human origins? What does it mean to you to place some artificial intelligences outside the scope of posthuman mindspace? Do you trust that human origins are capable of producing all possible artificial intelligences?
Phil Goetz was not saying that all languages have the word "the." He said that the word "the" is something every ENGLISH document has in common. His criticism is that this does not mean that Hamlet is more similar to an English restaurant menu than an English novel is to a Russian novel. Likewise, Eliezer's argument does not show that we are more like petunias than like an AI.
Caledonian, Sapir-Whorf becomes trivial to abolish once you regard language in the correct way: as an evolved tool for inducing thoughts in others' minds, rather than a sort of Platonic structure in terms of which thought is necessarily organized.
Phil, I don't see how the argument is obviously incorrect. Why can't two works of literature from different cultures be as different from each other as Hamlet is from a restaurant menu?
Even taken this way, I don't see how it abolishes Sapir-Whorf. Different languages are different tools for inducing thoughts, and may be better or worse at inducing specific kinds of thought, which will in turn influence "self"-generated thoughts.
Nope, because the whole point is that thought is already existent. We use language to induce thoughts in other people. With ourselves, we do not have to use language to induce our own thinking; we just think.
That might be true, but I'd want to see evidence of it. If you're just appealing to intuition, well, my intuition points strongly in the other direction: I frequently find that the act of saying things out loud, or writing them down, changes the way I think about them. I often discover things I hadn't previously thought about when I write down a chain of thought, for example.
I suspect that's pretty common among at least a subset of humans.
Not to mention that the act of convincing others demonstrably affects our own thoughts, so the distinction you want to draw between "inducing thoughts in other people" and "thinking" is not as crisp as you want it to be.
Does talking about or writing a thought down cause you to notice more things than if you had spent a similar amount of time thinking about it without writing anything? That's the proper baseline for comparison.
I'm not sure it is the proper baseline, actually: if I am systematically spending more time thinking about a thought when writing than when not-writing, then that's a predictable fact about the process of writing that I can make use of.
Leaving that aside, though: yes, for even moderately complex thoughts, writing it down causes me to notice more things than thinking about them for the same period of time. I am far more likely to get into loops, far less likely to notice gaps, and far more likely to rely on cached thoughts if I'm just thinking in my head.
What counts as "moderately complex" has a lot to do with what my buffer-capacity is; when I was recovering from my stroke I noticed this effect with even simple logic-puzzles of the sort that I now just solve intuitively. But the real world is full of things that are worth thinking about that my buffers aren't large enough to examine in detail.
Your verbalizations can affect your own thinking the same way the utterances of other people can. Your original thoughts, however, don't originate from language, and aren't structured or organized by it. They were already there, and when you hear what you said out loud, or read what you wrote down, your thoughts get modulated because language can induce thought.
Perhaps 'induce' is not a suitable word. 'Trigger' might be better. Like how a poem can be read many ways. The meanings weren't contained in or organized by the words of the poem. Rather, the words triggered emotions and thoughts in our brains. Nevertheless, those emotions and thoughts can also be triggered by non-verbal experiences. 'Language affects thought' is then as trivial as 'The smell of a rose can trigger the thought of a rose, as much as the word "rose" can.'
If you believe "the thought of a rose" is the same thing whether triggered by the smell of a rose, a picture of a rose, or the word "rose" then we disagree on something far more basic than the role of language.
That said, I agree with what you actually said. The differences between the thoughts triggered by different languages is "as trivial" (which is to say, not at all trivial) a difference as that between the thoughts triggered by the smell of a rose and the word "rose" (also non-trivial).
Of course, usually different sets of thoughts get triggered by those different triggers. Can you express more explicitly what you think we disagree on?
I use 'trivial' in the specific sense of language being no different than other environmental triggers. Not in the sense of 'magnitude of effect'.
For example, the cultural differences which usually track language differences are probably more explanatory of the different thought patterns of various groups. For instance, lack of concept for 'kissing' could simply be from kissing not being prevalent in a culture. 'Language differences' as an explanation is usually screened-off, since naturally language use will track cultural practice.
I just don't see the point of focusing specifically on language. Sapir-Whorf's ambition doesn't seem to merely be including language among the myriad influences on thought. Rather it seems to say thought is somehow systematically 'organized' or 'constrained' by language. I think this is only possible since language is so expressive. If you say 'language' influences thought you seem to have in your hands a very powerful explanatory tool which subsumes all other specific explanations.
OK.
When wnoise asserted that different languages influence thoughts differently, you disagreed, implying that the language we use doesn't affect the thoughts we think because the thoughts precede the language.
I disagree with that: far from least among the many things that affect the thoughts we think is the linguistic environment in which we do our thinking.
But you no longer seem to be claiming that, so I assume I misunderstood.
Rather, you now seem to be claiming that while of course our language affects what we think, so do other things, some of them much more strongly than language. I agree with that. As far as I know, nobody I classify as even remotely sane disagrees with that.
You also seem to be asserting that the idea that language influences thought is incompatible with the idea that nonlinguistic factors influence thought, because language is expressive. I don't understand this well enough to agree or disagree with it, though I don't find it compelling at all.
Put another way: I say that language influences thought, and I also say that nonlinguistic factors influence thought. What factors most relevantly influence thought depends on the specifics of the situation, but there are certainly nonlinguistic factors that reliably outweigh the role of language. As far as I know, nobody believes otherwise.
Do you disagree with any of that? If not, what did you think I was saying?
Thanks for the clarification.
I was mainly addressing the topic Komponisto set up:
You see, that just seems to be what this whole Sapir-Whorf debate is about. For one, I don't think there would be anything to talk about if it was simply asserting that 'language occasionally influences thought, like a rose sometimes would'. Since language seems so concomitant (if not actually integral) to thought, this seems to show that we are somehow severely constrained/organized by the language we speak. So apologies if I'm getting this all wrong, but I just don't think it is fair for you to say 'of course I agree that other things influence thought'. You seem to be ignoring the obvious implication of language being so intricately tied up with thought.
We do have a substantial disagreement in that you seem to think that even though language is one of the many influences on thought, its impact is especially significant, since language is somehow intimately tied up with thought.
I can simultaneously agree that language can influence thought and that speaking different languages has little influence on thought. This is because I think that, most of the time, other factors - culture being the most obvious one - screen off language as an explanation for differing thought patterns among people who speak different languages. This simply means that the seeming influence of language on thought is actually the influence of culture on thought, and in turn of thought on language, and then of language on thought again.
I mentioned the expressiveness of language because I wanted to show how it can seem like it is language affecting thought, when it is simply channeling the influences of other factors, which it can easily do because it is expressive.
I'll try to summarize my position:
If you somehow managed to change the language a person speaks without changing anything else, you would not see a systematic effect on his thought patterns. This is because he would soon adapt the language for his own use, based on his existent thoughts (most of which are not even remotely determined by language). The effect of language on thought is an illusion: it is actually his own and his culture's other thoughts giving rise to the language, which then seems to have an independent effect on his thought.
The phenomenon of language influencing thought, is more helpfully thought of as thought influencing thought.
(...)
Excellently put. The view you express here coincides exactly with mine.
For an English speaker, I expect that a picture of a rose will increase (albeit minimally) their speed/accuracy in a tachistoscopic word-recognition test when given the word "columns."
For a Chinese speaker, I don't expect the same effect for the Chinese translation of "columns."
I expect this difference because priming effects on speed/accuracy of tachistoscopic word-recognition tasks based on lexical associations are well-documented, and because for an English-speaker, the picture of a rose is associated with the word "rose," which is associated with the word "rows," which is associated with the word "columns," and because I know of no equivalent association-chain for a Chinese-speaker.
Of course, how long it takes to identify "columns" (or its translation) as a word isn't the kind of thing people usually care about when they talk about differences in thought. It's a trivial example, agreed.
I mention it not because I think it's hugely important, but because it is concrete. That is, it is a demonstrable difference in the implicit associations among thoughts that can't easily be attributed to some vague channeling of the influences of unspecified and unmeasurable differences between their cultures.
Sure, it's possible to come up with such an explanation after the fact, but I'd be pretty skeptical of that sort of "explanation". It's far more likely due to the differences between languages that caused me to predict it in the first place.
One could reply that, sure, differences in languages can create trivial influences in associations among thoughts, but not significant ones, and it's "just obvious" that significant influences are what this whole Sapir-Whorf discussion is about.
I would accept that.
There is not a one-to-one correlation between culture and language of course. The screening off is fairly weak. Brazil and Portugal simply do not have the same culture, nor for that matter do Texas and California.
Even stronger separation happens for those who can speak multiple languages, and for these people culture does not screen off language. We can actually "change the language a person speaks" in this case. Do the polylingual talk about being able to think differently in different languages?
Unknown, okay, I see it now. Thanks.
... except for Gadsby, A Void, and other lipograms. ;-)
They could be, but usually aren't. "World literature" is a valid category.
This discussion reminds me of Frithjof Schuon's "The Transcendent Unity of Religions", in which he argues that a metaphysical unity exists which transcends the manifest world and which can be "univocally described by none and concretely aprehended by few".
poke: what are you trying to say? It "exists" in the same sense as the set of all integers, i.e. it's a natural and useful abstraction, regardless of what you think of it ontologically.
Sapir-Whorf is disproven? *Blinks* I thought only the strong form is disproven and that the weak form has significant support. (But, on the other hand, this isn't a field I'm familiar with at all, so go ahead and correct me...)
In regard to AIXI: One should consider more carefully the fact that any self-modifying AI can be exactly modeled by a non-self modifying AI.
One should also consider the fact that no intelligent being can predict its own actions-- this is one of those extremely rare universals. But this doesn't mean that it can't recognize itself in a mirror, despite its inability to predict its actions.
If we focus on the bounded subspace of mind design space which contains all those minds whose makeup can be specified in a trillion bits or less, then every universal generalization that you make has two to the trillionth power chances to be falsified.
Conversely, every existential generalization - "there exists at least one mind such that X" - has two to the trillionth power chances to be true.
So you want to resist the temptation to say either that all minds do something, or that no minds do something.
This is fine where X is a property which has a one-to-one correspondence with a particular bit in the mind's specification. For higher-level properties (perhaps emergent ones -- yes, I said it) this probabilistic argument is not convincing.
Consider the minds of specification-size 1 trillion. We can happily make the generalisation that none of them will be able to predict whether a given Turing machine halts. Yes, there are 2^trillion chances for this generalisation to be falsified, but we know it never will be.
But this generalisation is true of everything, not just "minds", so we haven't added to our knowledge. Well, let's try this generalisation instead: no mind's state will remain unchanged by a non-null input. This is not true of rocks, but is true of minds. Perhaps there are some other, more useful, things we can say about minds.
Apologies for resurrecting a months-old post. I'm new here.
Eliezer did qualify those statements:
Just happened to re-read this post. As before, reading it fills me with a kind of fascinated awe, akin to staring at the night sky and wondering at all the possible kinds of planets and lifeforms out there.
Reading it makes me feel annoyed, because Eliezer seems to ignore convergence in the mindspace of increasingly general intelligences. All optimization processes have to approximate something like Bayesian decision theory, and there does seem to be a coherent universal vector in human preferencespace that gets absorbed into individual humans (instead of just being implicit in the overall structure of humane values). Egalitarianism, altruism, curiosity, et cetera are to some extent convergent features of minds. The universal AI drives are the most obvious example, but we might underestimate to what extent sexual selection, game theoretically-derived moral sentiments, et cetera are convergent, especially in the limit as intelligence/wisdom/knowledge approaches a singularity. I wonder if the Babyeater equivalent of a Buddha would give up babyeating the same way that a human Buddha gives up romantic love. I suspect that Buddhahood or something close is the only real attractor in mindspace that could be construed as reflectively consistent given the vector humanity seems to be on. We should enjoy the suffering and confusion while we can.
Relatedly, I've formed the tentative intuition that paper clip maximizers are very hard to build; in fact, harder to build than FAI. What you do get out of kludge superintelligences is probably just going to be the pure universal AI drives (or something like that), or possibly some sort of approximately objective convergent decision theoretic policy, perhaps dictated by the acausal economy.
The only really hard part about making a superintelligent paperclip maximizer is the superintelligence. If you think that specifying the goal of a Friendly AI is just as easy, then making a superintelligent FAI will be just as hard. The "fragility of value" thesis argues that Friendliness is significantly harder to specify, because there are many ways to go wrong.
I don't. Simple selfishness is definitely an attractor (in the sense that it's an attitude that many people end up adopting), and it wouldn't take much axiological surgery to make it reflectively consistent.
Individual humans sometimes become more selfish, but not consistently reflectively so, and humanity seems to be becoming more humane over time. Obviously there's a lot of interpersonal and even intrapersonal variance, but the trend in human values is both intuitively apparent and empirically verified by e.g. the World Values Survey and other axiological sociology. Also, I doubt selfishness is as strong an attractor among people who are the smartest and most knowledgeable of their time. Look at e.g. Maslow's research on people at the peak of human performance and mental health, and the attractors he identified as self actualization and self transcendence. Selfishness (or simple/naive selfishness) mostly seems like a pitfall for stereotypical amateur philosophers and venture capitalists.
I'm taking another look at this and find it hard to sum up just how many problems there are with your argument.
What about the people who are the most powerful of their time? Think about what the psychology of a billionaire must be. You don't accumulate that much wealth just by setting out to serve humanity. You care about offering your customers a service, but you also try to kill the competition, and you cut deals with the already existing powers, especially the state. Most adults are slaves, to an economic function if not literally doing what another person tells them, and then there is a small wealthy class of masters who have the desire and ability to take advantage of this situation.
I started out by opposing "simple selfishness" to your hypothesis that "Buddhahood or something close" is the natural endpoint of human moral development. But there's also group allegiance: my family, my country, my race, but not yours. I look out for my group, it looks out for me, and caring about other groups is a luxury for those who are really well off. Such caring is also likely to be pursued in a form which is advantageous, whether blatantly or subtly, for the group which gets to play benefactor. We will reshape you even while we care for you.
How close do you think anyone has ever come to reflective consistency? Anyway, you are reflectively consistent if there's no impulse within you to change your goals. So anyone, whatever their current goals, can achieve reflective consistency by removing whatever impulses for change-of-values they may have.
Reflective consistency isn't a matter of consistency with your trajectory so far, it's a matter of consistency when examined according to your normative principles. The trajectory so far did not result from any such thorough and transparent self-scrutiny.
Frankly, if I ask myself, what does the average human want to be, I'd say a benevolent dictator. So yes, the trend of increasing humaneness corresponds to something - increased opportunity to take mercy on other beings. But there's no corresponding diminution of interest in satisfying one's own desires.
Let's see, what else can I disagree with? I don't really know what your concept of Buddhahood is, but it sounds a bit like nonattachment for the sake of pleasure. I'll take what pleasures I can, and I'll avoid the pain of losing them by not being attached to them. But that's aestheticism or rational hedonism. My understanding of Buddhahood is somewhat harsher (to a pleasure-seeking sensibility), because it seeks to avoid pleasure as well as pain, the goal after all being extinction, removal from the cycle of life. But that was never a successful mass philosophy, so you got more superstitious forms of Buddhism in which there's a happy pure-land afterlife and so on.
I also have to note that an AI does not have to be a person, so it's questionable what implications trends in human values have for AI. What people want themselves to be and what they would want a non-person AI to be are different topics.
Seriously? For my part, I doubt that a typical human wants to do that much work. I suspect that "favored and privileged subject of a benevolent dictator" would be much more popular. Even more popular would be "favored and privileged beneficiary of a benevolent system without a superior peer."
But agreed that none of this implies a reduced interest in having one's desires satisfied.
(ETA: And, I should note, I agree with your main point about nonuniversality of drives.)
Immediately after I posted that, I doubted it. A lot of people might just want autonomy - freedom from dependency on others and freedom from the control of others. Dictator of yourself, but not dictator of humanity as a whole. Though one should not underestimate the extent to which human desire is about other people.
Will Newsome is talking about - or I thought he was talking about - value systems that would be stable in a situation where human beings have superintelligence working on their side. In that scenario domination should become easy and costless, so if people with a desire to rule had that level of power, the only thing stopping them from reshaping everyone else would be their own scruples about doing so. And even if they were troubled in that way, what's to stop them from first reshaping themselves so as to be guiltless rulers of the world?
Also, even if we suppose that that outcome, while stable, is not what anyone would really want, if they first spent half an eternity in self-optimization limbo investigating the structure of their personal utility function... I remain skeptical that "Buddhahood" is the universal true attractor, though it's hard to tell without knowing exactly what connotations Will would like to convey through his use of the term.
I am skeptical about universal attractors in general, including but not limited to Buddhahood and domination. (Psychological ones, anyway. I suppose entropy is a universal attractor in some trivial sense.) I'm also inclined to doubt that anything is a stable choice, either in the sense you describe here, or in the sense of not palling after a time of experiencing it.
Of course, if human desires are editable, then anything can be a stable choice: just modify the person's desires such that they never want anything else. By the same token, anything can be a universal attractor: just modify everyone's desires so they choose it. These seem like uninteresting boundary cases.
I agree that some humans would, given the option, choose domination. I suspect that's <1% of the population given a range of options, though rather more if the choice is "dominate or be dominated." (Although I suspect most people would choose to try it out for a while, if that were an option, then would give it up in less than a year.)
I suspect about the same percentage would choose to be dominated as a long-term lifestyle choice, given the expectation that they can quit whenever they want.
I agree that some would choose autonomy, though again I suspect not that many (<5%, say) would choose it for any length of time.
I suspect the majority of humans would choose some form of interdependency, if that were an option.
Entropy is the lack of an identifiable attractor.
If you ever felt inclined to lay out your reasons for believing that general intelligences without a shared heritage are likely to converge on egalitarianism, altruism, curiosity, game theoretical moral sentiments, or Buddhahood... or, come to that, lay out more precisely what you mean by those terms... I would be interested to read them.
Evolution works on species; members of a smart species will either evolve together or eventually get smart enough to learn to copy each other even when adversarial; such interactions will probably roughly approximate evolutionary game theory; and iterated games among social animals will probably yield cooperation and possibly altruism. Knowing more about yourself, the process that created you, and the arbitrariness of how you ended up with your preferences intuitively seems like it would promote egalitarianism, both on aesthetic grounds and on pragmatic game-theoretic grounds. Curiosity is just a necessary prerequisite for intelligence; it's obviously convergent. Something like Buddhahood is just a necessary prerequisite for a reasonable decision-theory approximation, and is thus also convergent. That one is horribly imprecise, I know, but Buddhahood is hard enough to explain in itself, let alone as a decision theory approximation, let alone as a normative one.
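The claim that iterated games tend to yield cooperation can be illustrated with a minimal sketch - the classic iterated prisoner's dilemma with standard payoff values (T=5, R=3, P=1, S=0). All names and numbers here are illustrative, not taken from the comment above:

```python
# Minimal iterated prisoner's dilemma: tit-for-tat vs. always-defect.
# Payoffs are the standard (T=5, R=3, P=1, S=0) values.

PAYOFF = {  # (my move, their move) -> my payoff; 'C' cooperate, 'D' defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's last move."""
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def play(strat_a, strat_b, rounds=100):
    """Return total payoffs for an iterated game between two strategies."""
    seen_by_a, seen_by_b = [], []  # the opponent moves each player has seen
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(seen_by_a)
        move_b = strat_b(seen_by_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

# Mutual tit-for-tat sustains cooperation and out-earns mutual defection:
print(play(tit_for_tat, tit_for_tat))      # (300, 300)
print(play(always_defect, always_defect))  # (100, 100)
```

This is only the weakest form of the argument, of course - it shows reciprocal cooperation is a stable strategy under repetition, not that it is the unique attractor for all minds-in-general.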
That's just the scattershot sleep-deprived off-the-top-of-my-head version that's missing all the good intuitions. If I end up converting my mountain of intuitions into respectable arguments in text I will let you know. It's just so much easier to do in-person, where I can get quick feedback about others' ontologies and how they mesh with mine, et cetera.
Thanks, on both counts. And, yes, agreed that it's easier to have these sorts of conversations with known quantities.
This post is cute, but there are several flaws/omissions that can lead to compound propagating errors in typical interpretations.
Cute. The general form of this statement:
(Any two X might be less similar to each other than you are to a petunia) is trivially true if our basis of comparison is solely genetic similarity.
This leads to the first big problem with this post: The idea that minds are determined by DNA. This idea only makes sense if one is thinking of a mind as a sort of potential space.
Clone Einstein and raise him with wolves and you get a sort of smart wolf mind inhabiting a human body. Minds are memetic. Petunias don't have minds. I am my mind.
The second issue (more of a missing idea, really) is functional/algorithmic equivalence. If you take a human brain, scan it, and simulate the key circuits in sufficient detail, you get a functional equivalent of the mind encoded in that brain. The substrate doesn't matter, nor do the exact algorithms, since any circuit can be replaced with any algorithm that preserves the input/output relationships.
Functional equivalence is another way of arriving at the "minds are memetic" conclusion.
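The functional-equivalence point can be made concrete with a toy sketch: two deliberately different "substrates" - explicit bit manipulation versus a precomputed lookup table - realizing the same input/output relation. The function chosen (bit parity) is arbitrary and purely illustrative:

```python
# Two very different internal "circuits" computing the same I/O relation.

def parity_bitwise(n: int) -> int:
    """Compute the parity of n's bits by explicit bit manipulation."""
    p = 0
    while n:
        p ^= n & 1
        n >>= 1
    return p

# A completely different substrate: a precomputed lookup table.
PARITY_TABLE = {n: bin(n).count('1') % 2 for n in range(256)}

def parity_lookup(n: int) -> int:
    """Compute the same parity function by table lookup."""
    return PARITY_TABLE[n]

# From the outside, the two implementations are indistinguishable:
assert all(parity_bitwise(n) == parity_lookup(n) for n in range(256))
```

Neither implementation is "the real one"; the function they share is what matters - which is the sense in which a sufficiently faithful simulation of a brain's circuits is the same mind.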
As a result, the region of mindspace we are likely to first access with AGI designs is some small envelope around current human mindspace.
The map of mindspace here may be more or less correct, but what's anything but clear is how distinct near-term de novo AGI actually is from, say, human uploads, given: functional equivalence, the Bayesian brain, no free lunch in optimization, and the mind being memetic.
For example, if the most viable route to AGI turns out to be brain-like designs, then it is silly not to anthropomorphize AGI.