Tag Archives: Ems

Fuller on Age of Em

I’d heard that an academic review of Age of Em was forthcoming from the new Journal of Posthuman Studies. And after hearing about Baum’s review, Steve Fuller, the author of this second academic review (which won’t be published for a few months), gave me permission to quote from it here. First, some praise:

Baum on Age of Em

In the journal Futures, Seth Baum gives the first academic review of Age of Em. First, some words of praise:

Future Gender Is Far

What’s the worst systematic bias in thinking on the future? My guess: too much abstraction. The far vs. near mode distinction was first noticed in future thinking, because the effect is so big there.

I posted a few weeks ago that the problem with the word “posthuman” is that it assumes our descendants will differ somehow in a way that makes them “other,” without specifying any particular change to do that. It abstracts from particular changes to just embody the abstract idea of othering-change. And I’ve previously noted that there are taboos against assuming that something we see as a problem won’t be solved, and even against presenting such a problem without proposing a solution.

In this post let me point out that a related problem plagues thoughts on future gender relations. While many hope that future gender relations will be “better”, most aren’t at all clear on what specifically that entails. For some, all differing behaviors and expectations about genders should disappear, while for others only “legitimate” differences should remain, with little agreement on which are legitimate. This makes it hard to describe any concrete future of gender relations without violating our taboo against failing to solve problems.

For example, at The Good Men Project, Joseph Gelfer discusses the Age of Em. He seems to like or respect the book overall:

Fascinating exploration of what the world may look like once large numbers of computer-based brain emulations are a reality.

But he less likes what he reads on gender:

Hanson sees a future where an em workforce mirrors the most useful and productive forms of workforce that we experience today. .. likely choose [to scan] workaholic competitive types. Because such types tend to be male, Hanson imagines an em workforce that is disproportionately male (these workers also tend to rise early, work alone and use stimulants).

This disproportionately male workforce has implications for how sexuality manifests in em society. First, because the reproductive impetus of sex is erased in the world of ems, sexual desire will be seen as less compelling. In turn, this could lead to “mind tweaks” that have the effect of castration, .. [or] greater cultural acceptance of non-hetero forms of sexual orientation, or software that make ems of the same sex appear as the opposite sex. .. [or] paying professional em sex workers.

It is important to note that Hanson does not argue that this is the way em society should look, rather how he imagines it will look by extrapolating what he identifies in society both today and through the arc of human history. So, if we can identify certain male traits that stretch back to the beginning of the agricultural era, we should also be able to locate those same traits in the em era. What might be missing in this methodology is a full application of exponential change. In other words, Hanson rightly notes how population, technology and so forth have evolved with increasing speed throughout history, yet does not apply that same speed of evolution to attitudes towards gender. Given how much perceptions around gender have changed in the past 50 years, if we accept a pattern of exponential development in such perceptions, the minds that are scanned for first generation ems will likely have a very different attitude toward gender than today, let alone thousands of years past. (more)

Obviously Gelfer doesn’t like something about the scenario I describe, but he doesn’t identify anything in particular that he disagrees with, nor offer any specific arguments against it. His only contrary argument is a maximally abstract “exponential” trend, whereby everything gets better. Therefore gender relations must get better, and therefore any feature of future gender relations that he or anyone dislikes is doubtful.

For the record, I didn’t say the em world selects for “competitive types”, that people would work alone, or that there’d be more men. Instead I have a whole section on a likely “Gender Imbalance”:

Although it is hard to predict which gender will be more in demand in the em world, one gender might end up supplying proportionally more workers than the other.

But I doubt Gelfer would be any happier with a future with many more women than men; any big imbalance probably sounds worse to most people, and thus can’t happen according to the better-future-gender-relations principle.

I suspect Gelfer’s errors about my book are consistently in the direction of incorrectly attributing features to the scenario that he likes less. People usually paint the future as a heaven or a hell, and so if my scenario isn’t Gelfer’s heaven, it must be his hell.

Imagine A Mars Boom

Most who think they like the future really just like where their favorite stories took place. As a result, much future talk focuses on space, even though prospects for much activity beyond Earth anytime foreseeable seem dim. Even so, consider the following hypothetical, with three key assumptions:

Mars boom: An extremely valuable material (anti-matter? glueballs? negative mass?) is found on Mars, justifying huge economic efforts to extract it, process it, and return it to Earth. Many orgs compete strongly against one another in all of these stages to profit from the Martian boom.

A few top workers: As robots just aren’t yet up to the task, a thousand humans must be sent to and housed on Mars. The cost of this is so great that all trips are one-way, at least for a while, and it is worth paying extra to get the very highest quality workers possible. So Martians are very impressive workers, and Mars is “where the action is” in terms of influencing the future. As slavery is rare on Earth, most all Mars workers must volunteer for the move.

Martians as aliens: Many, perhaps even most, people on Earth see those who live on Mars as aliens, for whom the usual moral rules do not apply – morality is to protect Earthlings only. Such Earth folks are less reluctant to enslave Martians. Martians undergo some changes to their body, and perhaps also to their brain, but when seen in films or tv, or when talked to via (20+min delayed) Skype, Martians act very human.

Okay, now my question for you is: Are most Martians slaves? Are they selected for and trained into being extremely docile and servile?

Slavery might let Martian orgs make Martians work harder, and thereby extract more profit from each worker. But an expectation of being enslaved should make it much harder to attract the very best human workers to volunteer. Many Earth governments may even not allow free Earthlings to volunteer to become enslaved Martians. So my best guess is that in this hypothetical, Martians are free workers, rich and high status celebrities followed and admired by most Earthlings.

I’ve created this Mars scenario as an allegory of my em scenario, because someone I respect recently told me they were persuaded by Bryan Caplan’s claim that ems would be very docile slaves. As with these hypothesized Martians, the em economy would produce enormous wealth and be where the action is, and it would result from competing orgs enticing a thousand or fewer of the most productive humans to volunteer for an expensive one-way trip to become ems. When viewed in virtual reality, or in android bodies, these ems would act very human. While some like Bryan see ems as worth little moral consideration, others disagree.

Brains Simpler Than Brain Cells?

Consider two possible routes to generating human level artificial intelligence (AI): brain emulation (ems) versus ordinary AI (wherein I lump together all the other usual approaches to making smart code). Both approaches require that we understand something well enough to create a functional replacement for it. Ordinary AI requires this for entire brains, while ems require this only for brain cells.

That is, to make ordinary AI we need to find algorithms that can substitute for most everything useful that a human brain does. But to make brain emulations, we need only find models that can substitute for what brain cells do for brains: take input signals, change internal states, and then send output signals. (Such brain cell models need not model most of the vast complexity of cells, complexity that lets cells reproduce, defend against predators, etc.)

To make an em, we will also require brain scans at a sufficient spatial and chemical resolution, and enough cheap fast parallel computers. But the difficulty of achieving these other requirements scales with the difficulty of modeling brain cells. The simpler brain cells are, the less detail we’ll need to scan, and the smaller computers we’ll need to emulate them. So the relative difficulty of ems vs ordinary AI mainly comes down to the relative model complexity of brain cells versus brains.
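To make this scaling concrete, here is a toy sketch; every number in it (the neuron count, the ops-per-cell-model figures) is an illustrative assumption chosen for the example, not an estimate from the book:

```python
# Toy sketch: emulation hardware cost scales with brain-cell model complexity.
# All parameter values here are illustrative assumptions, not real estimates.

NEURONS = 8.6e10  # commonly cited rough human neuron count

def em_compute_ops_per_sec(ops_per_cell_model_per_sec):
    """Compute needed for a real-time emulation, assuming total cost is
    dominated by running one cell model per neuron."""
    return NEURONS * ops_per_cell_model_per_sec

simple_cells = em_compute_ops_per_sec(1e3)   # assumed simple cell model
complex_cells = em_compute_ops_per_sec(1e4)  # assumed 10x richer cell model

# A 10x simpler cell model means a roughly 10x cheaper emulation.
assert abs(complex_cells / simple_cells - 10.0) < 1e-9
```

The same proportionality applies to the other requirements: the less internal cell state a model needs, the less detail a scan must capture, and the smaller the computers needed to run it.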

Today we are seeing a burst of excitement about rapid progress in ordinary AI. While we’ve seen such bursts every decade or two for a long time, many people say “this time is different.” Just as they’ve said before; for a long time the median published forecast has said human level AI will appear in thirty years, and the median AI researcher surveyed has said forty years. (Even though such researchers estimate 5-10x slower progress in their own subfield over the past twenty years.)

In contrast, we see far less excitement now about rapid progress in brain cell modeling. Few neuroscientists publicly predict brain emulations soon, and no one has even bothered to survey them. Many take these different levels of hype and excitement as showing that in fact brains are simpler than brain cells – that we will more quickly find models and algorithms that substitute for brains than ones that can substitute for brain cells.

Now while it just isn’t possible for brains to be simpler than brain cells, it is possible for our best models that substitute for brains to be simpler than our best models that substitute for brain cells. This requires only that brains be far more complex than our best models that substitute for them, and that our best models that substitute for brain cells are not far less complex than such cells. That is, humans will soon discover a solution to the basic problem of how to construct a human-level intelligence that is far simpler than the solution evolution found, but evolution’s solution is strongly tied to its choice of very complex brain cells, cells whose complexity cannot be substantially reduced via clever modeling. While evolution searched hard for simpler cheaper variations on the first design it found that could do the job, all of its attempts to simplify brains and brain cells destroyed the overall intelligence that it sought to maintain.

So maybe what the median AI researcher and his or her fans have in mind is that the intelligence of the human brain is essentially simple, while brain cells are essentially complex. This essential simplicity of intelligence view is what I’ve attributed to my ex-co-blogger Eliezer Yudkowsky in our foom debates. And it seems consistent with a view common among fast AI fans that once AI displaces humans, AIs would drop most of the distinctive features of human minds and behavior, such as language, laughter, love, art, etc., and also most features of human societies, such as families, friendship, teams, law, markets, firms, nations, conversation, etc. Such people tend to see such human things as useless wastes.

In contrast, I see the term “intelligence” as mostly used to mean “mental betterness.” And I don’t see a good reason to think that intelligence is intrinsically much simpler than betterness. Human brains sure look complex, and even if big chunks of them by volume may be modeled simply, the other chunks can contain vast complexity. Humans really do a very wide range of tasks, and successful artificial systems have only done a small range of those tasks. So even if each task can be done by a relatively simple system, it may take a complex system to do them all. And most of the distinctive features of human minds and societies seem to me functional – something like them seems useful in most large advanced societies.

In contrast, for the parts of the brain that we’ve been able to emulate, such as parts that process the first inputs of sight and sound, what brain cells there do for the brain really does seem pretty simple. And in most brain organs what most cells do for the body is pretty simple. So the chances look pretty good that what most brain cells do for the brain is pretty simple.

So my bet is that brain cells can be modeled more simply than can entire brains. But some seem to disagree.

Ems Give Longer Human Legacy

Imagine that you were an older software engineer at Microsoft in 1990. If your goal was to have the most influence on software used in 2016, you should have hoped that Microsoft would continue to dominate computer operating systems and related software frameworks. Or at least do so for longer and stronger. Your software contributions were more compatible with Microsoft frameworks than with frameworks introduced by firms like Apple and Google. In scenarios where those other frameworks became more popular faster, more systems would be redesigned more from scratch, and your design choices would be more often replaced by others.

In contrast, if you were a young software engineer with the same goal, then you should instead have hoped that new frameworks would replace Microsoft frameworks faster. You could more easily jump to those new frameworks, and build new systems matched to them. Then it would be your design choices that would last longer into the future of software. If you were not a software engineer in 1990, but just cared about the overall quality of software in 2016, your preference is less clear. You’d just want efficient effective software, and so want frameworks to be replaced at the optimal rate, neither too fast nor too slow.

This seems a general pattern. When the goal is distant future influence, those more tied to old frameworks want them to continue, while those who can more influence new frameworks prefer old ones be replaced. Those who just want useful frameworks want something in between.

Consider now two overall frameworks for future intelligence: ordinary software versus humans minds. At the moment human minds, and other systems adapted to them, make up by far the more powerful overall framework. The human mind framework contains the most powerful known toolkit by far for dealing with a wide variety of important computing tasks, both technical and social. But for many decades the world has been slowly accumulating content in a rather different software framework, one that is run on computers that we make in factories. This new framework has been improving more rapidly; while sometimes software has replaced humans on job tasks, the reverse almost never happens.

One possible scenario for the future is that this new software framework continues to improve until it eventually replaces pretty much all humans on jobs. (Ordinary software of course contains many kinds of parts, and the relative emphasis of different kinds of parts could change.) Along the way software engineers will have tried to include as many as possible of the innovations they understand from human brains and attached systems. But that process will be limited by their limited understanding of the brain. And when better understanding finally arrives, perhaps so much will have been invested in very different approaches that it won’t be worth trying to transfer approaches from brains.

A second scenario for the future, as I outline in my book, is that brain emulations (ems) become feasible well before ordinary software displaces most humans on jobs. Humans are then immediately replaced by ems on almost all jobs. Because ems are more cost-effective than humans, for any given level of the quality of software, efficiency-oriented system designers will rely more on ems instead of ordinary software, compared to what they would have done in the first scenario. Because of this, the evolution of wider systems, such as for communication, work, trade, war, or politics, will be more matched to humans for longer than they would have under the first scenario.

In addition, ems would seek ways to usefully take apart and modify brain emulations, in addition to seeking ways to write better ordinary software. They would be more successful at this than humans would have been had ems not arrived. This would allow human-mind-like computational features, design elements, and standards to have more influence on ordinary software design, and on future software that combines elements of both approaches. Software in the long run would inherit more from human minds. And so would the larger social systems matched to future software.

If you are a typical human today who wants things like you to persist, this second scenario seems better for you, as the future looks more like you for “longer”, i.e., through more doublings of the world economy, and more degrees of change of various technologies. However, I note that many young software engineers and their friends today seem quite enthusiastic about scenarios where artificial software quickly displaces all human workers very soon. They seem to presume that this will give them a larger percentage influence on the future, and prefer that outcome.

Of course I’ve only been talking about one channel by which we today might influence the distant future. You might also hope to influence the distant future by saving resources to be spent later by yourself or by an organization to which you bequeath instructions. Or you might hope to strengthen institutions of global governance, and somehow push them into an equilibrium where they are able to and want to continue to strongly regulate software and the world in order to preserve the things that you value.

However, historically related savings and governance processes have had rather small influences on distant futures. For billions of years, the main source of long distance influence has been attempts by biological creatures to ensure that the immediate future had more creatures very much like themselves. And for many thousands of years of human cultural evolution, there has also been a strong process whereby local cultural practices worked to ensure that the immediate future had more similar cultural practices. In contrast, individual creatures and organizations have been short-lived, and global governance has mostly been nonexistent.

Thus it seems to me that if you want the distant future to have more things like typical humans for longer, you should prefer a scenario where ems appear before ordinary software displaces most all humans on jobs.

Added 15Dec: In this book chapter I expand a bit on this post.

Seduced by Tech

We think about tech differently when we imagine it beforehand, versus when we’ve personally seen it deployed. Obviously we have more data afterward, but this isn’t the only or even main difference.

Having more data puts us into more of a near, relative to far, mental mode. In far mode we think abstractly, allowing fewer exceptions to our moral and value principles, and we less allow messy details to reduce our confidence in our theories. Most imagined techs will fail, leaving little chance that we’ll be embarrassed by having opposed them. We also know that they have fewer allies who might retaliate against us for opposing them. And we are more easily seen as non-conformist for opposing a widely adopted tech, compared to opposing a possible future tech.

The net effect is that we are much more easily persuaded by weak arguments that a future tech may have intolerable social or moral consequences. If we thought more about the actual tech in the world around us, we’d realize that much of it also has serious moral and social downsides. But we don’t usually think about that.

A lot of tech fits this pattern. Initially it faces widespread opposition or skepticism, or would if a wider public were asked. Sometimes such opposition prevents a tech from even being tried. But when a few people can try it, others nearby can see if it offers personal concrete practical benefits, relative to costs. Then, even though more abstract criticisms haven’t been much addressed, the tech may be increasingly adopted. Sometimes it takes decades to see wider social or moral consequences, and sometimes those are in fact bad. Even so, the tech usually stays, though new versions might be prevented. And for some consequences, no one ever really knows.

This is actually a general pattern of seduction. Often we have abstract concerns about possible romantic partners, jobs, products to buy, etc. Usually such abstract concerns are not addressed very well. Even so, we are often seduced via vivid exposure to attractive details to eventually set aside these abstract concerns. As most good salespeople know very well.

For example, if our political systems had been asked directly to approve Uber or AirBnB, they’d have said no. But once enough people used them without legal permission, politicians became reluctant to stop them. Opponents of in vitro fertilization (IVF), first done in 1978, initially suggested that it would deform babies and degrade human dignity, but after decades of use this tech faces little opposition, even though it still isn’t clear if it degrades dignity.

Opponents of the first steam trains argued that train smoke, noise, and speeds would extract passenger organs, prevent passenger breathing, disturb and discolor nearby animals, blight nearby crops, weaken moral standards, weaken community ties, and confuse class distinctions. But opposition quickly faded with passenger experience. Even though those last three more abstract concerns seem to have been confirmed.

Many indigenous peoples have strongly opposed cameras upon first exposure, fearing not only cameras “stealing souls”, but also extracting vital fluids like blood and fat. But by now such people mostly accept cameras, even though we still have little evidence on that soul thing. Some have feared that ghosts can travel through telephone lines, and while there’s little evidence to disprove this, few now seem concerned.

Consider the imagined future tech of the Star Trek type transporter. While most people might have heard some vague description of how it might work, such as info being read and transmitted to construct a new body, what they mainly know is that you would walk in at one place and the next thing you know you walk out apparently unchanged at another place far away. While it is possible to describe internal details such that most people would dislike such transport, without such details most people tend to assume it is okay.

When hundreds of ordinary people are asked if they’d prefer to commute via transporter, about 2/3 to 4/5 say they’d do it. Their main concern seems to be not wanting to get to work too fast. In a survey of 258 of my twitter contacts, 2/3 agreed. But if one asks 932 philosophers, who are taught abstract concerns about if transporters preserve identity, only 36.2% think they’d survive, 31.1% think they’d die and be replaced by someone else, and 32.7% think something else.

Philosopher Mark Walker says that he’s discussed such identity issues with about a thousand students so far. If they imagine they are about to enter a transporter, only half of them see their identity as preserved. But if they imagine that they have just exited a transporter, almost all see their identity as preserved. Exiting evokes a nearer mental mode than entering, just as history evokes a nearer mode than the future.

Given our observed tech history, I’m pretty sure that few would express much concern if real transporters had actually been reliably used by millions of people to achieve great travel convenience without apparent problems. Even though that would actually offer little evidence regarding key identity concerns.

Yes, some might become reluctant if they focused attention on abstract concerns about human dignity, community ties, or preservation of identity. Just as some today can similarly become abstractly concerned that IVF hurts human dignity, fast transport hurts morals and communities, or even that cameras steal souls (where no contrary evidence has ever been presented).

In my debate with Bryan Caplan last Monday in New York City, I said he’s the sort of person who is reluctant to get into a transporter, and he agrees. He is also confident that ems lack consciousness, and thinks almost everyone would agree with him so strongly that humans would enslave ems and treat any deviation from extreme em docility very harshly, preventing ems from ever escaping slavery.

I admit that today, long before ems exist, it isn’t that hard to get many people into an abstract frame of mind where they doubt ems would be conscious, or doubt an em of them would be them. In that mental state, they are reluctant to move via destructive scanning from being a human to an em. Just as today many can get into a frame of mind where they fear a transporter. But even from an abstract view many others are attracted to the idea of becoming an em.

Once ems actually became possible, however, humans could interact directly and concretely with them, and see their beautiful worlds, beautiful bodies, lack of pain, hunger, disease, or grime, and articulate defense of their value and consciousness. These details would move most people to see ems in a far more concrete mental mode.

Once ems were cheap and began to become the main workers in the economy, a significant number of humans would accept destructive scanning to become ems. Those humans would ask for and mostly get ways to become non-slave ems. And once some of those new ems started to have high influence and status, other humans would envy them and want to follow, to achieve such concrete status ends. Abstract concerns would greatly fade, just as they would if we had real Star Trek transporters.

The debate proposition that I defended was “Robots will eventually dominate the world and eliminate human abilities to earn wages.” Initially the pro/con percentage was 22.73/60.23; finally it was 27.27/64.77. Each side gained the same added percentage. Since my side started out 3x smaller I gained a 3x larger fractional increase, but as I said when I debated Bryan before, the underdog side actually usually gains more in absolute terms.
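The arithmetic behind those claims is easy to check; the only inputs are the debate percentages quoted above:

```python
# Pro/con debate percentages quoted above, before and after the debate.
pro_before, pro_after = 22.73, 27.27
con_before, con_after = 60.23, 64.77

pro_gain = pro_after - pro_before  # absolute percentage-point gain
con_gain = con_after - con_before

# Both sides gained the same absolute percentage...
assert abs(pro_gain - con_gain) < 1e-9

# ...but the side that started ~3x smaller got a much larger fractional gain.
pro_frac = pro_gain / pro_before  # about 0.20, i.e. a ~20% relative increase
con_frac = con_gain / con_before  # about 0.075, i.e. a ~7.5% relative increase
assert pro_frac > 2.5 * con_frac
```

(The remaining percentage in each poll is the undecided share, which is what shrank.)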

So yes, attitudes today are not on net that favorable to ems. But neither were related attitudes before cameras, steam trains, or IVF. Such attitudes mostly reflect an abstract view that could be displaced by concrete details once the tech was actually available and offered apparently large concrete personal gains. Yes, sometimes we can be hurt by our human tendency to neglect abstract concerns when concrete gains seduce us. But thankfully, not, I think, usually.

Play Will Persist

We live in the third human era, industry, which followed the farming and foraging eras. Each era introduced innovations that we expect will persist into future eras. Yet some are skeptical. They foresee “post-apocalyptic” scenarios wherein civilization collapses, industrial machines are lost, and we revert to using animals like mules and horses for motive power. Where we lose cities and instead spread across the land. We might even lose organized law, and revert to each small band enforcing its own local law.

On the surface, the future scenario I describe in my book The Age of Em looks nothing like a civilization collapse. It has more, better, bigger tech, machines, cities, and organizations. Yet many worry that in it we would lose an even more ancient innovation: play. As in laughter, music, teasing, banter, stories, sports, hobbies, etc. Because the em era is a more competitive world where wages return to near subsistence levels, many fear the loss of play and related activities. All of life becomes nose-to-the-grindstone work, where souls grind into dust.

Yet the farming and foraging eras were full of play, even though they were also competitive eras with subsistence wages. Moreover, play is quite common among animals, pretty much all of whom have lived in competitive worlds near subsistence levels:

Play is .. found in a wide range of animals, including marsupials, birds, turtles, lizards, fish, and invertebrates. .. [It] is a diverse phenomenon that evolved independently and was even secondarily reduced or lost in many groups of animals. (more)

Here is where we’ve found play in the evolutionary tree:

[Figure: where play appears across the evolutionary tree]

We know roughly what kind of animals play:

Animals that play often share common traits, including active life styles, moderate to high metabolic rates, generalist ecological needs requiring behavioral flexibility or plasticity, and adequate to abundant food resources. Object play is most often found in species with carnivorous, omnivorous, or scavenging foraging modes. Locomotor play is prominent in species that navigate in three-dimensional (e.g., trees, water) or complex environments and rely on escape to avoid predation. Social play is not easily summarized, but play fighting, chasing, and wrestling are the major types recorded and occur in almost every major group of animals in which play is found. (more)

Not only are humans generalists with an active lifestyle, we have neoteny, which extends youthful features and behaviors, including play, throughout our lives. So humans have always played, a lot. Given this long robust history of play in humans and animals, why would anyone expect play to suddenly disappear with ems?

Part of the problem is that from the inside play feels like an activity without a “useful” purpose:

Playful activities can be characterized as being (1) incompletely functional in the context expressed; (2) voluntary, pleasurable, or self rewarding; (3) different structurally or temporally from related serious behavior systems; (4) expressed repeatedly during at least some part of an animal’s life span; and (5) initiated in relatively benign situations. (more)

While during serious behavior we are usually aware of some important functions our behaviors serve, in play we enter a “magic circle” wherein we feel safe, focus on pleasure, and act out a wider variety of apparently-safe behaviors. We stop play temporarily when something serious needs doing, and also for longer periods when we are very stressed, such as when depressed or starving. These help give us the impression that play is “extra”, serving no other purpose than “fun.”

But of course such a robust animal behavior must serve important functions. Many specific adaptive functions have been proposed, and while there isn’t strong agreement on their relative importance, we are pretty confident that since play has big costs, it must also give big gains:

Juveniles spend an estimated 2 to 15 percent of their daily calorie budget on play, using up calories the young animal could more profitably use for growing. Frisky playing can also be dangerous, making animals conspicuous and inattentive, more vulnerable to predators and more likely to hurt themselves as they romp and cavort. .. Harcourt witnessed 102 seal pups attacked by southern sea lions; 26 of them were killed. ‘‘Of these observed kills,’’ Harcourt reported in the British journal Animal Behaviour, ‘‘22 of the pups were playing in the shallow tidal pools immediately before the attack and appeared to be oblivious to the other animals fleeing nearby.’’ In other words, nearly 85 percent of the pups that were killed had been playing. (more)
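The “nearly 85 percent” figure follows directly from the counts in that quote:

```python
# Counts from the quoted passage: 26 pups killed, 22 of them playing.
killed = 26
playing_when_killed = 22

share = playing_when_killed / killed
assert round(share * 100) == 85  # "nearly 85 percent"
```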

Play can help to explore possibilities, both to learn and practice the usual ways of doing things, and also to discover new ways. In addition, play can be used to signal loyalty, develop trust and coordination, and establish relative status. And via play one can indirectly say things one doesn’t like to say directly. All of these functions should continue to be relevant for ems.

Given all this, I can’t see much doubt that ems would play, at least during the early em era, and play nearly as much as typical humans have throughout history. Sure, it is hard to offer much assurance that play will continue into the indefinite future. But this is mainly because it is hard to offer much assurance of anything in the indefinite future, not because we have good specific reasons to expect play to go away.


Social Science Critics

Many critics of Age of Em are critics of social science; they suggest that even though we might be able to use today’s physics or computer science to guess at futures, social science is far less useful.

For example, at Crooked Timber, Henry Farrell was “a lot more skeptical that social science can help you make predictions”, though he was more skeptical about thinking in terms of markets than in terms of “vast and distributed hierarchies of exploitation”, as these “generate complexities” instead of “breaking them down.”

At Science Fact & Science Fiction Concatenation, Jonathan Cowie suggests social science only applies to biological creatures:

While Hanson’s treatise is engaging and interesting, I confess that personally I simply do not buy into it. Not only have I read too much SF to think that em life will be as prescriptive as Hanson portrays, but coming from the biological sciences, I am acutely aware of the frailties of the human brain hence mind (on a psychobiological basis). Furthermore, I am uncomfortable in the way that the social science works Hanson draws upon to support his em conclusions: it is an apples and oranges thing, I do not think that they can readily translate from one to the other; from real life sociobiological constructs to, in effect, machine code. There is much we simply do not know about this, as yet, untrodden land glimpsed from afar.

At Ricochet, John Walker suggests we can’t do social science if we don’t know detailed stories of specific lives:

The book is simultaneously breathtaking and tedious. The author tries to work out every aspect of em society: the structure of cities, economics, law, social structure, love, trust, governance, religion, customs, and more. Much of this strikes me as highly speculative, especially since we don’t know anything about the actual experience of living as an em or how we will make the transition from our present society to one dominated by ems.

At his blog, Lance Fortnow suggests my social science assumes too much rationality:

I don’t agree with all of Hanson’s conclusions, in particular he expects a certain rationality from ems that we don’t often see in humans, and if ems are just human emulations, they may not want a short life and long retirement. Perhaps this book isn’t about ems and robots at all, but about Hanson’s vision of human-like creatures as true economic beings as he espouses in his blog. Not sure it is a world I’d like to be a part of, but it’s a fascinating world nevertheless.

At Entropy Chat List, Rafal Smigrodzki suggests social science doesn’t apply if ems adjust their brain design:

My second major objection: Your pervasive assumption that em will remain largely static in their overall structure and function. I think this assumption is at least as unlikely as the em-before-AI assumption. Imagine .. you have the detailed knowledge of your own mind, the tools to modify it, and the ability to generate millions of copies to try out various modifications. .. you do analyze this possibility, you consider some options but in the end you still assume ems will be just like us. Of course, if ems are not like us, then a lot of the detailed sociological research produced on humans would not be very applicable to their world and the book would have to be shorter, but then it might be a better one. In one chapter you mention that lesbian women make more money and therefore lesbian ems might make money as well. This comes at the end of many levels of suspension of disbelief, making the sociology/gender/psychology chapters quite exhausting.

At his blog, J Storrs Hall said something similar:

Robin’s scenario precludes some of these concerns by being very specific to a single possibility: that we have the technology to copy off any single particular human brain, we don’t understand them well enough to modify them arbitrarily. Thus they have to operated in a virtual reality that is reasonably close to a simulated physical world. There is a good reason for doing it this way, of course: that’s the only uploading scenario in which all the social science studies and papers and results and so forth can be assumed to still apply.

Most social scientists, and especially most economists, don’t see what they have learned as being quite so fragile. Yes, it is nice to check abstract theories against concrete anecdotes, but in fact most who publish papers do little such checking, and their results suffer only modestly from the lack. Yes, being non-biological, or messing a bit with brain design, may make some modest differences. But most social science theory just isn’t that sensitive to such details. As I say in the book:

Our economic theories apply reasonably well not only to other classes and regions within rich nations today, but also to other very different nations today and to people and places thousands of years ago. Furthermore, formal economic models apply widely even though quite alien creatures usually populate them, that is, selfish rational strategic agents who never forget or make mistakes. If economic theory built using such agents can apply to us today, it can plausibly apply to future ems.

The human brain is a very large complex legacy system whose designer did not put a priority on making it easy to understand, modify, or redesign. That should greatly limit the rate at which big useful redesign is possible.


How Culturally Plastic?

Typical farming behaviors violated forager values. Farmers added marriage, property, war, and inequality, and had much less art, leisure, and travel. If, 100K years ago, someone had suggested that foragers would be replaced by farmers, critics could easily have doubted that foragers would act like that. But tens of thousands of years was enough time for cultural variation and selection to produce new farming cultures more compatible with the new farming ways.

A typical subsistence farmer from a thousand years ago might have been similarly skeptical about a future industrial world wherein most people (not just elites) pick leaders by voting, have little religion, spend fifteen years of their youth in school, are promiscuous, work few hours, live in skyscrapers, ride in fast trains, cars, and planes, and work in factories and in large organizations with many explicit rules, rankings, and dominance relations. Many of these acts would have scared or offended typical farmers. Even those who knew that tens of millennia was enough to create cultures that embraced farming values might have doubted that a few centuries was enough for industry values. But it was.

In my book The Age of Em I describe a world after it has adapted to brain emulation tech. While I tend to assume that culture has changed to support habits productive in the competitive em world, a common criticism of my book is that the behaviors I posit for the em world conflict with values commonly held today. For example, from Steven Poole’s Guardian review:

Hanson assumes there is no big problem about the continuity of identity among such copies. .. But there is plausibly a show-stopping problem here. If someone announces they will upload my consciousness into a robot and then destroy my existing body, I will take this as a threat of murder. The robot running an exact copy of my consciousness won’t actually be “me”. (Such issues are richly analysed in the philosophical literature stemming from Derek Parfit’s thought experiments about teleportation and the like in the 1980s.) So ems – the first of whom are, by definition, going to have minds identical to those of humans – may very well exhibit the same kind of reaction, in which case a lot of Hanson’s more thrillingly bizarre social developments will not happen. (more)

Peter McCluskey has similar reservations about my saying at least dozens of human children would be scanned to supply an em economy with flexible young minds:

Robin predicts few regulatory obstacles to uploading children, because he expects the world to be dominated by ems. I’m skeptical of that. Ems will be dominant in the sense of having most of the population, but that doesn’t tell us much about em influence on human society – farmers became a large fraction of the world population without meddling much in hunter-gatherer political systems. And it’s unclear whether em political systems would want to alter the relevant regulations – em societies will have much the same conflicting interest groups pushing for and against immigration that human societies have. (more)

Farmers may not have meddled much in internal forager cultures, nor industry in internal farmer culture. But when prior era cultural values have conflicted with key activities of the new era, new eras have consistently won such conflicts. And since the em era should encompass thousands of years of subjective experience for typical ems, there seems plenty of time for em culture to adapt to new conditions. But as humans may only experience a few years during the em era and its preceding transition, it seems more of an open question how far human behaviors would adapt.

We are talking about the em world needing a small number of humans scanned, especially children. Such scans are probably destructive, at least initially. As individual human inclinations vary quite a lot, if the choice is up to individuals, enough humans would volunteer. So the question is whether humans coordinate enough in each area to prevent this, such as via law. If they coordinate well in most areas, but not in a few other areas, then if there are huge productivity advantages from being able to scan people or kids, the few places that allow it will quickly dominate the rest. And in anticipation of that loss, other places would cave as well. So without global coordination to prevent this, it happens.

Peter talks about the possibility of directly emulating the growth of baby brains all the way from the beginning. And yes if this was easy enough, the em world wouldn’t bother to fight organized human opposition. However, since emulation from conception seems a substantial new capacity, I didn’t feel comfortable assuming it in my book. So I focused on the case where it isn’t possible early on, in which case the above analysis applies.

This whole topic is mostly about: how culturally plastic are we? I’ve been assuming a lot of plasticity, and my critics have been saying less. The academics who most specialize in cultural plasticity, such as anthropologists, tend to say we are quite plastic. So as with my recent post on physicists being confident that there is no extra non-physical feeling stuff, this seems another case where most people have strong intuitions that conflict with expert claims, and they won’t defer to experts.
