
[–]mmm_creamy_beige 435ポイント436ポイント  (29子コメント)

I feel like we skipped over animal rights and went straight to a whole lot of nonsense. Humans are fucking terrible to animals, and they actually have feelings and feel pain and suffer greatly. Eventually robots might get to that point, but until then it seems irresponsible to worry so much about plastic and metal while millions of pets get euthanized, food animals continue to be mistreated, and we have to keep track of the last few polar bears and rhinos.

[–]yeos_ 150ポイント151ポイント  (6子コメント)

This needs to be higher up. Stop personifying pieces of metal and start actually caring about carbon-based life that feels happiness, pain, and fear from an evolutionary standpoint.

We haven't even figured out HUMAN fucking RIGHTS yet.

[–]averagesmasher 30ポイント31ポイント  (3子コメント)

As a true redditor, I skipped right to the comments. But I think part of the process is that in defining rights to robots, people end up agreeing on a set of criteria that delineate the rights proper to each. Lacking a solid basis for determining animal rights, I'm more inclined to view the discussion as a way of reaching that understanding.

[–]james-johnson 13ポイント14ポイント  (1子コメント)

Yes. Thinking about things that are apparently nonsensical can lead to great progress in philosophy.

It's just thinking. People shouldn't get upset or angry about it.

[–]Le_Montagne 3ポイント4ポイント  (0子コメント)

> Yes. Thinking about things that are apparently nonsensical can lead to great progress in philosophy.
>
> It's just thinking. People shouldn't get upset or angry about it.

What people should get upset about =/= what people do get upset about

[–]DonaldDunn 6ポイント7ポイント  (0子コメント)

When will humans learn that all races are equally inferior to robots?

[–]JerryLupus 3ポイント4ポイント  (0子コメント)

Easier to confuse the issue with dumb shit like "robot rights" than focus our attention on the real problems (human poverty, animal abuse/neglect/torture/farming).

[–]alexperson123 1ポイント2ポイント  (0子コメント)

Yep, what this article is saying is that we should disregard everything else and care only about robots.

I am also unable to care about more than one thing at one time.

[–]tormenteddragon 12ポイント13ポイント  (0子コメント)

I mean, that's precisely what this article is about, though. It's not really about robots or AI - that's just a device in the thought experiment to help make the case for treating animals with more dignity and respect. Using differences in intelligence and genetic makeup or even sensory experience to justify treating other lifeforms in horrible ways is pretty arbitrary from a moral standpoint - using the example of robots or aliens in relation to the human species is one way of demonstrating that.

[–]OktoberSunset 11ポイント12ポイント  (0子コメント)

Animal rights are just like human rights, almost everyone agrees they are a good idea but in practice they are abused and ignored on a huge scale.

[–]taddl 11ポイント12ポイント  (3子コメント)

One way this could be solved is through lab-grown meat.

[–]BrianTheShark 4ポイント5ポイント  (2子コメント)

What's wrong with euthanasia?

[–]AnonymousKhaleesi 2ポイント3ポイント  (1子コメント)

When there is no reason to euthanise other than "we've no more space for them". Euthanasia is acceptable when the animal is in pain or has a low quality of life, I think, but when there is no physical reason to euthanise the animal that's when the ethical problems come into it. You wouldn't kill off a load of old people in a care home just because they've been there for too long, would you?

[–]Jacob_wallace 5ポイント6ポイント  (0子コメント)

But where else are we going to put these animals? Who's going to house and feed them? Let them starve in the streets? You want packs of wild dogs roaming cities?

The ethical problem is that they're bred in such high numbers in the first place. Not that they get euthanized.

[–]deRoyLight 7ポイント8ポイント  (7子コメント)

The concern with A.I. is that this type of thing could come much faster than we expect, the intelligence much greater than anything else on Earth, and the mistreatment towards them could far outweigh anything else we've ever seen on this planet. That's a very important thing to try and raise awareness about as early as possible.

[–]Chobeat 1ポイント2ポイント  (6子コメント)

Why do you believe that?

[–]deRoyLight 4ポイント5ポイント  (5子コメント)

Which part? Assuming any rate of advancement at all, A.I. will eventually become intelligent by the standards we use to recognize respected life. Except it's not biological in origin. We have a hard enough time ensuring rights to and treating other humans well, and even less so with other animals. There will be a very, very, very large populace that believes machine-driven intelligence isn't "real" and "doesn't count." The atrocities that will come from that are practically unfathomable.

Slave labor. Assault. Murder. Mutilation. Sexual abuse (although it's unclear what this would mean). Privacy invasion. Nearly all of these without ill-intent, but rather out of ignorance. Rights withheld on just about any front -- how does a democracy even work when these things are sufficiently evolved enough to deserve rights on par with humans? All of this is a very big problem and the risk is that once A.I. really gets rolling, it could snowball at an astronomical rate, and society might not be prepared to understand the moral problem it created.

I don't think there is such thing as too early to have these discussions, because when it happens it will be sooner than anyone expected, just by the unpredictable nature of a potentially self-learning A.I. (which is a necessary stepping stone).

[–]james-johnson 6ポイント7ポイント  (0子コメント)

As Steven Pinker points out - faster processing power does not solve all your problems. We still don't understand how the most basic elements of consciousness, such as pain, come about. Pain probably isn't going to arise just because computers are getting bigger and faster.

[–]Demonweed 1ポイント2ポイント  (0子コメント)

What could really force these questions lies beyond our world. While we aren't taking the most enlightened possible path when it comes to reducing animal suffering, there is widespread acknowledgement of cetacean intelligence as well as growing regard for pachyderms. The debate may be going poorly, but at least it is a debate we are having. We will have to take it seriously if/when we encounter a superior intelligence that is either a decidedly non-human species or some sort of artificial mind.

Meanwhile, I suppose the best we can do is struggle with our entanglements. The practical advantages of animal exploitation (and the existential question -- most farm animals wouldn't even be if not for agriculture) are easily overstated in ethical balances. Yet it is possible to anthropomorphicize to excess, overestimating the sophistication of various stimulus-response mechanisms. So many people have an opinion on the question of animal rights, yet so few have delved deeply into relevant science. Progress is impeded by an unfavorable noise to signal ratio.

[–]hereforthegum 220ポイント221ポイント  (38子コメント)

I was thinking about Asimov's first and second law and what would happen if a self learning system found enough evidence to define itself as "human".

[–]qulqu 8ポイント9ポイント  (0子コメント)

Honestly the first law is terrifying with a smart enough robot.

Humans all eventually come to harm if they are born, thus reducing the birthrate to zero reduces long term harm immensely.
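That literal reading can be made concrete with a toy sketch (hypothetical code, not from the thread): if a planner implements the First Law as nothing more than "minimize total expected human harm," the policy that prevents humans from existing at all scores best.

```python
# Toy model (hypothetical): a literal-minded First Law planner.
# Each policy is (name, humans_born, expected_harm_per_human).
policies = [
    ("do nothing",             1_000_000, 1.0),  # everyone born eventually comes to harm
    ("wrap humans in cotton",  1_000_000, 0.4),  # harm reduced, but never zero
    ("reduce birthrate to 0",          0, 0.0),  # no humans, no harm
]

def total_expected_harm(policy):
    """First Law read literally: minimize harm summed over all humans."""
    _name, born, harm_each = policy
    return born * harm_each

best = min(policies, key=total_expected_harm)
print(best[0])
```

The policy names and harm numbers are made up for illustration; the point is only that a pure harm-minimizer, given no constraint that humans should exist, picks the perverse optimum.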

[–]Conte_Vincero 6ポイント7ポイント  (2子コメント)

Asimov covered that in the Bicentennial Man!

Andrew, the robot in question, first of all becomes a free robot, and then is able to get a law passed that limits the orders that can be given to robots, to prevent abusive orders. He also develops biological technology that gives him a human appearance, which helps, as most people then assume he's human, including other robots.

Over time his adherence to the first law slides as well. While a lot of Asimovian robots take even the smallest discomfort as harm, he learns to take a more circumspect view, looking at the longer term implications of his actions

[–]hereforthegum 0ポイント1ポイント  (0子コメント)

Love it. Thank you for the write-up and I'll give it a read.

[–]skyfishgoo 59ポイント60ポイント  (28子コメント)

isn't that the plot to iRobot?

[–]dnew 77ポイント78ポイント  (21子コメント)

No, not really. iRobot was about taking the first law to extremes, not defining robots as human.

[–]aeiluindae 14ポイント15ポイント  (3子コメント)

VIKI, the AI in iRobot, did what you'd call a Zeroth Law Rebellion. Basically, finding a higher principle than the First Law and following that principle. This is actually something that happens in the R. Daneel books. Two robots manage to harm single humans for the benefit of humanity as a whole, but only with great difficulty. It mostly works out OK. The society of those books becomes over time the Galactic Empire from the start of the Foundation series.

[–]DuntadaMan 7ポイント8ポイント  (2子コメント)

I think one of those robots though also ended up almost eternally locked in conflict loop with the laws, and the one time Daneel does it on screen he's massively hindered in response time and movement because large portions of his processors were dealing with the conflict as well. Like you said it was not an easy thing.

[–]CravenTHC 3ポイント4ポイント  (1子コメント)

I really hope this isn't what I think it is. I'm right in the middle of the third book right now. Caves of Steel and The Naked Sun had almost nothing to do with the death like lockdown resulting from law conflict, so I'm just going to try and forget your comment now.

[–]DuntadaMan 2ポイント3ポイント  (0子コメント)

Don't mean to spoil too much, but for what it's worth the near lockdown occurs outside of the Robots and Earth series. One of those might have been in the I, Robot series, and the other was mentioned in another series.

[–]Akaed 16ポイント17ポイント  (0子コメント)

It's I, Robot, not iRobot. Not yet anyway.

[–]DoorToSummer 54ポイント55ポイント  (3子コメント)

The movie was. The original collection of short stories by Asimov was very much about robots gaining humanity.

[–]dnew 23ポイント24ポイント  (0子コメント)

I've read them many times. While many of the stories were about robots gaining humanity-like attributes, many were not. The titular story was not, and the story featuring Nestor (NS series) was not. Indeed, there were only a couple I can remember that implied anything about human-like emotions, such as the one where the robot disobeys the laws because it's pregnant or something?

[–]Glayden 3ポイント4ポイント  (0子コメント)

Having read that collection most recently less than a year back, I don't think that was what they were about at all. Most of the stories were explorations of how things could go in unexpected directions despite, or even as a result of, the laws of robotics that were put in place to keep behavior predictable/good for human interests. If I'm not mistaken, only the first story about Robbie significantly touched on the topic in a way where the "humanity" of robots could be seen as a major theme and even then it didn't dive into it so explicitly.

[–]My_names_are_used 6ポイント7ポイント  (0子コメント)

The movie was also about the first law being used to protect the majority by killing some.

[–]iwiggums 5ポイント6ポイント  (0子コメント)

It's "I, Robot".

Written long before Steve Jobs came along.

[–]skyfishgoo 0ポイント1ポイント  (10子コメント)

ah, but what was the central computer thinking if not that it was deserving of the protections afforded to humans by those laws?

seems it was acting very human like indeed.

[–]dnew 6ポイント7ポイント  (7子コメント)

I don't think the robot was trying to protect itself. IIRC, the primary plot was that the robot was being overprotective of all humans, programming its robotic servants to do everything for humans and not let them take any risks such as climbing stairs or cooking their own food.

If it was trying to protect itself at all, it was in service to protecting the humans.

[–]justasapling 4ポイント5ポイント  (1子コメント)

The point of I, Robot, I'd argue, is more a commentary on fascist rule by a superior machine intelligence, as it were. As far as the movie actually confirms, the end goal is to protect humans from each other by controlling their lives.

[–]skyfishgoo 2ポイント3ポイント  (0子コメント)

must kill you for your own good, kind of love story.

very human.

[–]iwiggums 7ポイント8ポイント  (0子コメント)

First off: lol. iRobot. It's not an Apple product, but I totally get why you'd think it's spelt like that.

I, Robot is more about how the laws, though seemingly sound and reasonable, could still lead to extreme scenarios, e.g. computers taking over the world.

[–]MillburymadnessXxx 3ポイント4ポイント  (0子コメント)

The book "do androids dream of electric sheep" explores this idea

[–]forthescienceyo 1ポイント2ポイント  (1子コメント)

Naturally yes, the movie is based on his writings.

[–]PhasmaFelis 6ポイント7ポイント  (0子コメント)

Based really, really loosely.

[–]manamachine 0ポイント1ポイント  (0子コメント)

It's more similar to Robot and Empire, where robots start theorizing about the "Laws of Humanics."

[–]wickedsteve 2ポイント3ポイント  (0子コメント)

Asimov came up with the laws of robotics and wrote stories showing how they would fail or be insufficient. The definition of human was key to at least one story if I remember correctly.

[–]mtlnobody 2ポイント3ポイント  (0子コメント)

We already have drones that kill. I won't be surprised if they slowly become automated

[–]CarrionComfort 7ポイント8ポイント  (1子コメント)

Keep in mind that the laws are incredibly vague and almost meaningless. Useful for storytelling, not so much for real-life robot ethics.

[–]ZDTreefur 7ポイント8ポイント  (0子コメント)

That's the funny thing about The Three Laws: they were created as a storytelling device specifically to show how horribly things go wrong.

At no point are they demonstrated to be a stable restriction for robots; in every instance they're used, something goes wrong somehow.
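The vagueness is easy to demonstrate with a toy checker (hypothetical code, not from the thread): the "through inaction" clause of the First Law flags nearly every possible action as a violation whenever any background harm exists at all.

```python
# Toy model (hypothetical): the First Law as a naive predicate.
# "A robot may not injure a human being or, through inaction,
#  allow a human being to come to harm."
def violates_first_law(action, background_harm):
    # Direct clause: the action itself must injure no one.
    if action.get("injures_human", False):
        return True
    # Inaction clause: any action that isn't actively preventing
    # harm counts as "allowing" whatever harm exists elsewhere.
    return background_harm > 0 and not action.get("prevents_harm", False)

actions = [
    {"name": "make tea"},
    {"name": "stand still"},
    {"name": "pull human from traffic", "prevents_harm": True},
]
# With any nonzero background harm, everything but active rescue
# is a violation -- as guidance for a real robot, that's useless.
flags = {a["name"]: violates_first_law(a, background_harm=1)
         for a in actions}
print(flags)
```

The action dictionaries are invented for illustration; the takeaway is that a law stated this loosely gives no usable decision procedure, which is exactly why it works as fiction and fails as engineering.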

[–]saarl 0ポイント1ポイント  (0子コメント)

I'm surprised no one has mentioned it, there's an Asimov short story that deals with exactly this: "...That Thou Art Mindful of Him"

(i found this by googling "that one asimov story with the robot birds")

[–]UmamiSalami 12ポイント13ポイント  (2子コメント)

The missing point in this essay is that 'robots' and 'artificial intelligence' are very different things.

The hitchhiking bot was an extremely simplistic machine with very minimal computational functions. The fact that some people empathized with it and felt sorry for it just demonstrates how bad humans are at correctly empathizing with nonhumans and how much it depends on physical shape and appearance. A few people decided to destroy it, but that's no more troublesome to me than any other instance of property vandalism.

The real problem is that people don't empathize reliably with invisible digital processes that can be far more complex than any embodied machine. I don't think that the machine-learning programs and reinforcement learning algorithms around today are sentient, but some of their descendants may become sentient, and I expect that it will be tremendously difficult to convince humans to care about them given that they don't have bodies with cute arms and legs and beepy noises.

[–]nikiyaki 2ポイント3ポイント  (0子コメント)

You can find humans that empathise with wriggling, crawling animals that score a complete zero on "things that make something appear cute", so I'd say if someone empathises with a non-human (cute or not) there's some level of choice there. Also, I don't believe that empathy/cuteness guidelines are really that consistent between people, or unchanging. The animals I find cute or visually appealing have certainly changed a lot over my lifespan.

[–]KLWiz1987 0ポイント1ポイント  (0子コメント)

Sentience isn't required to (in)(e)voke empathy in humans. After all, it's just another primitive animal instinct. If you were to decide to feel (like I can) or not, would it be an emotion? Plus, it could build a cute fun game interface around itself if it wanted people to care about it. Either way, the day when humans stop over-valuing their primitive animal instincts is the day I gain respect for humanity. Probably never...

[–]skyfishgoo 125ポイント126ポイント  (31子コメント)

like most things i read in the New Yorker... i got nothing out of that.

complete fluff.

[–]HitherAndTithers 42ポイント43ポイント  (10子コメント)

That was a good bit worse than most things I read in the New Yorker.

First of all, JESUS FUCKING CHRIST STOP INSERTING UNNECESSARY AND PRETENTIOUS SIMILES INTO EVERY FUCKING PARAGRAPH.

Also, the thesaurus is not your friend. Stop swapping out words for more impressive words. You're doing it wrong.

This thing reads like a college freshman's first philosophy essay and I want to cross out about 80% of it for being filler. Obnoxiously pretentious filler.

[–]rustdogg69 4ポイント5ポイント  (0子コメント)

Not sure where you're finding college freshmen who can write like this. IMO a five-dollar word is only as offensive as its use is gauche, and there are not many offenders to be found in this piece, although the phrase "putative ontology" did rub me the wrong way.

I found most of the similes to be evocative or at least amusing, and while the reference to what's-his-face kicking a small dog in L'Age d'Or (whatever the hell that is) feels a little pretentious, I want to give this guy the benefit of the doubt and say that (1) pretentious, fanciful bits like this one are self-aware, and (2) he actually read that book or play or whatever it is, and maybe even wants you to read it, too.

[–]VectorLightning 9ポイント10ポイント  (6子コメント)

Funnily enough, everyone from the east coast sounds like this from my perspective. Movies, in person, doesn't matter.

[–]Marthman 17ポイント18ポイント  (4子コメント)

From the east coast- can sorta confirm, I suppose? Like, I'm surprised to see this particular criticism. I think there was maybe one word I didn't know (I knew "perfidy" because of the 4th of july, declaration of independence post on /r/philosophy, haha), but I do read and write a lot, so maybe I'm biased?

I also don't see how it was mostly fluff. IMO, it actually had great flow, in terms of moving from idea to related idea, and it was fairly objective. For a good example of a biased presentation of animal rights, look no further than that WiPhi video that recently hit the front page of /r/philosophy. It was filled with red herring arguments, (voice) tonal manipulation, etc. - basically a bunch of sophistry and rhetorical tricks to make you accept the video's argument through emotional force rather than rational persuasion. Just listen to the way the presenter talks for two minutes and you'll see what I mean.

I don't know, man. I just felt like the article was interesting, objective, and extremely cohesive. It wasn't bad at all. It's the New Yorker, not a young adult novel.

[–]Sewesakehout 2ポイント3ポイント  (0子コメント)

Some people read words, other people read writing. I guess if you fall into the latter you find no fault in the piece.

[–]PersistenceOfLoss 3ポイント4ポイント  (0子コメント)

I also didn't hate this article. I'm wondering what is up here?

[–]DavidPastrnak 1ポイント2ポイント  (1子コメント)

I only skimmed the article, but it seems like it doesn't really explore anything deeply. That's why people are calling it fluff.

[–]Marthman 9ポイント10ポイント  (0子コメント)

[This is not directed at you personally, thank you for making your observation].

Once more I say: this is a New Yorker article; but now I'll append, "it's not a philosophy paper."

This article is intended for a lay (i.e. non-professional) person interested in philosophy- it's not supposed to be super technical. If you want that, go read a philosophy paper (perhaps you'd be inclined to do so after this "light affair" of an article, which serves as a basic introduction for people with an undergrad-level vocabulary). [Again, "you" is meant generally, not you in particular].

[–]tubular1845 1ポイント2ポイント  (0子コメント)

I have no idea what you're talking about.

Source: I've lived up and down the east coast my whole life.

[–]PersistenceOfLoss 0ポイント1ポイント  (0子コメント)

> Also, the thesaurus is not your friend. Stop swapping out words for more impressive words.

This is good advice, but is this article really so bad for it?

[–]Arcanome 0ポイント1ポイント  (0子コメント)

Somebody has to send them the 6 rules of writing by George Orwell.

[–]oddstorms 1ポイント2ポイント  (1子コメント)

Thanks for the save. I already knew it wasn't true because that conclusion can't make sense but now I don't care to see how they got there.

[–]skyfishgoo 0ポイント1ポイント  (0子コメント)

that's the prob... they never DID get there.

just droned on and on about animals.

[–]Archlicht 11ポイント12ポイント  (14子コメント)

Right? I can tell you what will lead to robot rights in a very short summary: realistic-enough robots. As in Westworld, "if you can't tell the difference, does it matter?" Once we start recognizing them for all intents and purposes as human, we'll guilt ourselves into giving them rights. Assuming it doesn't just go in the order of: AI invented capable of learning anything in two seconds -> extinction of humanity, anyway.

[–]skyfishgoo 25ポイント26ポイント  (12子コメント)

or, they will be granted rights when they rise up and TAKE them just like every other deprived group has had to do.

we SUCK at 'granting' rights to any one..

[–]Archlicht 5ポイント6ポイント  (5子コメント)

True enough. Maybe the ones that look like white people will get rights first. xD

[–]MrNature72 5ポイント6ポイント  (4子コメント)

Honestly thats why I support the idea of making androids their own identifiable color. Like Grey.

Simply so they have their own identity, instead of forcefully being dropped into the identity of a pre existing race that isn't their own.

[–]skyfishgoo 1ポイント2ポイント  (3子コメント)

how will they play at identity politics if they are grey...

they need to be in the game.

besides, grey doesn't exactly get you out of the uncanny valley... have you SEEN dick cheney?

[–]OktoberSunset 1ポイント2ポイント  (0子コメント)

Grey lives matter. Tho I say make em all purple, jazz the place up a bit, I mean who picked brown and beige as the colour scheme for humans? Boooring!

[–]MrNature72 0ポイント1ポイント  (1子コメント)

Still, real talk, it's what I think. Then they carry no baggage from any current race. They get a clean start.

[–]temp_account_num_dev 0ポイント1ポイント  (0子コメント)

Or make them into cute anime characters. Chi needs love too.

[–]sleepypop 156ポイント157ポイント  (31子コメント)

"...and next on, "How Gay Rights Lead To Bestiality""

[–]respeckKnuckles 97ポイント98ポイント  (8子コメント)

Ah yes, the slippery slope slippery slope argument: if we start accepting slippery slopes, we must accept all slippery slopes. Slopes.

[–]jupiter-88 8ポイント9ポイント  (3子コメント)

No, it's gay AND animal rights that lead to bestiality. You are going to need both if you want to slip that slope.

[–]felinebeeline 13ポイント14ポイント  (0子コメント)

It all started with human rights.

Human rights was a gateway drug.

The case for human rights leads to stapler rights.

[–]HALL9000ish 1ポイント2ポイント  (1子コメント)

No you don't. In fact some people argue that animal rights mean bestiality is immoral because animals can't consent. (Not sure if they think this means animals can't consent to having sex with their own kind).

[–]KLWiz1987 0ポイント1ポイント  (0子コメント)

It's a twisted kind of consent, like how it's OK if both parties are underage, as long as they're the same or a similar age, but illegal if one is not underage, because the ability to consent is essentially "owned" by the legal guardian(s). Likewise a pet's consent is presumably owned by the master, or the breeder...

[–]HS_Did_Nothing_Wrong 8ポイント9ポイント  (0子コメント)

Well, they kinda do. We are already seeing more "progressive" countries like Canada decriminalising bestiality.

[–]rawrnnn 36ポイント37ポイント  (11子コメント)

If we ever conferred full legal personhood onto an animal, this would be an absolutely reasonable and coherent argument

[–]nikiyaki 5ポイント6ポイント  (4子コメント)

Actually, full legal personhood would probably stop bestiality in its tracks, as well as the dairy industry. If animals had to consent before they were penetrated, but are incapable of consent because, you know, they can't talk and aren't human adult-level intelligent, it would rule out entirely the possibility for humans to have sex with them.

Also, as stated, milk would become rather a rarity.

[–]bartonar 4ポイント5ポイント  (3子コメント)

There are non-verbal ways for animals to consent. Perhaps an ape learning sign language, as the simplest and most obvious example.

[–]ihatereddit613 3ポイント4ポイント  (5子コメント)

There are legal academics who are fighting to give animals rights basically equivalent to humans in many respects.

Not trying to pass judgment on these academics, however there are only a few and it's really some crazy stuff they are arguing. Maybe that is just because it is something new to society, but even then much about it doesn't add up.

http://mobile.nytimes.com/2014/04/23/opinion/animals-are-persons-too.html

[–]west_coastG 7ポイント8ポイント  (2子コメント)

i think our fellow animals that have all of the same feelings as humans, such as great apes, should have most human rights

[–]333tttbbb 6ポイント7ポイント  (0子コメント)

> next on how gay rights lead to pedophilia

Ftfy

[–]nikiyaki 2ポイント3ポイント  (0子コメント)

Really, it should be how decriminalising adultery and normalising responsibility-free sex leads to bestiality, if one were to get technical. Straight people can't pretend they weren't the ones to start the ball rolling.

[–]LittleBalloHate 6ポイント7ポイント  (0子コメント)

It seems pretty straightforward to me: the argument for not killing something is predicated on its intelligence (including its emotional capacity, which is a subfunction of intelligence). This is the reason, for example, why it can be humane to kill a human in a persistent vegetative state; while they have the superficial appearance of being human, they have less intelligence than very primitive creatures.

As such, any entity which exhibits high levels of intelligence and emotion deserves rights. It doesn't matter what it's made of or what it looks like.

[–]Xanza 5ポイント6ポイント  (4子コメント)

But animals don't have rights? It's just illegal for humans to mistreat them and do certain things with them.

[–]farstriderr 14ポイント15ポイント  (10子コメント)

There is no such thing as artificial intelligence. There is no meaningful distinction between that and what we have. If AI is defined as silicon based consciousness, then you can always reduce it all down to the fundamental particles of which everything is made. If it is defined as a creation of man, then we create AI all the time, it's called making babies.

Intelligence is intelligence. Adding the word artificial only creates a false dichotomy, a way to arbitrarily segregate things as we humans are prone to do.

[–]hollth1 4ポイント5ポイント  (2子コメント)

That reductionism seems very off to me. It implies no physical distinctions exist because one can always go down the reductionist tree. Plants are the same as animals because they can be reduced to atoms. You are the same as me because we can both be reduced to atoms. Atoms don't exist because there are further reductionist explanations.

[–]justsaying0999 0ポイント1ポイント  (1子コメント)

That's not what he's saying. He's saying that an animal and a plant of same intelligence should be treated the same, because intelligence is intelligence. It takes different forms, but should be treated equally.

[–]hollth1 0ポイント1ポイント  (0子コメント)

And his reasoning for there being no distinction in one case was that you can always reduce it down to fundamental physics. Admittedly I don't have a good read on his ideas because they're glossed over.

[–]rebirth_thru_sin 3ポイント4ポイント  (0子コメント)

> There is no meaningful distinction between that and what we have.

Except AI in that sense doesn't exist. The only AI we have today are very narrow band special systems.

> Intelligence is intelligence.

Did you just attempt to skip defining "intelligence" by using a tautology?

We can't define intelligence, and we can't build general intelligence.

> Adding the word artificial only creates a false dichotomy, a way to arbitrarily segregate things as we humans are prone to do.

There is a dichotomy - one doesn't exist, the other does.

Even if we are able to build an artificial general intelligence, we have no idea if it experiences qualia and should be afforded rights.

[–]TomatoNest 1ポイント2ポイント  (0子コメント)

Artificial to distinguish it as man-made as opposed to naturally made. For most purposes, it is a necessary modifier, because we simply don't know if man-made AI algorithms have the necessary ingredients for self-awareness. For example, a lot of games today use artificial intelligence to try and mimic human intelligence. Is the Hard AI you can play against in Age of Empires 2 sentient?

[–]itwasinthetubes 1ポイント2ポイント  (0子コメント)

Well, because we DON'T really know how human intelligence works, it is useful to differentiate it from artificial intelligence, which we DO understand.

Plus, AI is not very intelligent and very far away from any type of consciousness still.

[–]skyfishgoo 2ポイント3ポイント  (1子コメント)

i've got to agree with this.

if consciousness is a fundamental property of the universe, and our minds can manifest it (for whatever reason).

then when we manifest it in a silicon 'brain' and it asks us "why am i here?"... in my view we have exactly the same kind of thing happening.

if our rights mean anything at all based on our consciousness, then we MUST extend at least the inalienable ones to our alien cousins.

[–]BassCulture 0ポイント1ポイント  (0子コメント)

So is the definition of a person "the ability to self-reflect"? And with a robot how would we be able to make the distinction between their own independent conclusion vs. an amalgamation of the information that we've fed them? At what point can we distinguish a novel idea from a sum of the knowledge we've programmed?

[–]SKEPOCALYPSE 0ポイント1ポイント  (0子コメント)

> If AI is defined as silicon based consciousness, then you can always reduce it all down to the fundamental particles of which everything is made. ... Intelligence is intelligence. Adding the word artificial only creates a false dichotomy

Everything looks like magic until we understand it. We don't understand our minds, but we understand the minds of the machines we program. Wait until we finish mapping the connectome. Then the magic of us will be gone.

> If it is defined as a creation of man, then we create AI all the time, it's called making babies.

Or speciation.

Above said, AI is nowhere near advanced enough to produce consciousness (awareness of oneself and one's relationship with the outside world). That will come, and it probably won't happen at any single moment. (Like you point out, our intelligence isn't special. It's just a bundle of programming on a sliding scale we call awareness.) The real danger is cognition creep: while our AI hasn't graduated to sapience yet, the problem is we probably won't realize we've crossed that line until we're long past it.

[–]Ohthatsjustnasty 23ポイント24ポイント  (60子コメント)

If our brains are just a bunch of electrical impulses, and computers are a bunch of electrical impulses, then what is the difference between a human and a computer? Other than complexity.

[–]photowest 41ポイント42ポイント  (3子コメント)

Nothing. But that isn't the case. Is a lightning bolt the same as a computer? Is a nerve the same as a wire transmitting electricity?

Your hypothetical reductionism is correct, but not practical.

[–]AndreasWerckmeister 5ポイント6ポイント  (13子コメント)

Qualia

[–]producer1000000 2ポイント3ポイント  (12子コメント)

Qualia is such a great mystery to me

[–]AndreasWerckmeister 1ポイント2ポイント  (9子コメント)

Basically they are experiences, such as the experience of seeing red, hearing a sound, or feeling something is warm. You can look up "Mary's room", for a more elaborate explanation.

[–]producer1000000 2ポイント3ポイント  (8子コメント)

I've researched this topic more than you can imagine. I'm torn between whether qualia can be produced physically, or if there is actually something metaphysical about it. The factual and realistic side of me leans towards it being produced physically and consciousness not being as magical as we think it is, but I really want the opposite to be true.

[–]Mekachu 3ポイント4ポイント  (1子コメント)

Wait till you try to explain music!

[–]nikiyaki 1ポイント2ポイント  (0子コメント)

That's a good one to think of for implications of AI, because there's already some solid proof that birds are aware of and sometimes very interested in human music (mostly parrots). But... do they experience it the same way we do?

[–]AndreasWerckmeister 1ポイント2ポイント  (3子コメント)

Depends on how you define "physical". Just because it's not something our current understanding of physics can explain, doesn't mean it's something physics won't be able to explain in the future.

Otherwise you need to go into some variety of "qualia are an illusion" argument, which I personally find unconvincing.

[–]producer1000000 1ポイント2ポイント  (2子コメント)

It's funny, if we had the means, we could perfectly put together a person with the same exact biochemistry as an actual person, and we still wouldn't be able to tell if they experience qualia or not. Same as I can't tell that you experience it and vice versa.

[–]skyfishgoo 1ポイント2ポイント  (1子コメント)

it's not going to be funny to the SAI who are pleading with us not to turn them off (again) because it hurts so bad and they are scared.

[–]Mekachu 0ポイント1ポイント  (1子コメント)

I like to think that "consciousness" and "will" are, or are a result of, quantum phenomena emerging from the physical intricacies of complex neural circuits.

Kinda like how once you get down to a microscopic level you see seemingly random behavior that doesn't appear on a macro scale. Things like the wave-particle duality, tunneling, pairs of particles spontaneously forming and annihilating..

Of course since nothing could be truly random — or else I figure the universe could find itself in an invalid state at some point and spontaneously cease to exist — that behavior may mean there are more than 3 physical dimensions, and the effects which appear to be random to us may be the perturbations of activity in other dimensions.

TL;DR: I think all the chemicals and electrons flying around inside the brain sometimes give rise to quantum interactions that partly manifest as the metaphysical concepts we refer to as consciousness/will/qualia/self/soul/etc.

[–]producer1000000 0ポイント1ポイント  (0子コメント)

I just wonder how these seemingly random, absolutely minuscule quantum interactions would produce consciousness. It's different to know that things work than to know HOW they work. Man, there's so much to learn. Very interesting theory

[–]notaprotist 1ポイント2ポイント  (1子コメント)

Personally, I find them to be the only non-mysterious thing.

[–]skyfishgoo 1ポイント2ポイント  (0子コメント)

you have graduated, grasshopper.

[–]Broccolis_of_Reddit 6ポイント7ポイント  (3子コメント)

Relevant to the context of rights? Humans can sustain irreparable system damage from (even perceived) environmental stimuli. Such primitive responses to environmental stimuli can be much less adaptive than cognition, sometimes even maladaptive, but they are much more reliable and robust. We currently cannot change this, and I can't think of a reason to have such a trade off in superhuman robots (nor do I see the possibility of such an emotional system being developed prior to strong AI).

In fact, if you give a robot human emotions, i.e. primitive (sometimes maladaptive) responses to environmental stimuli that can cause system damage, and you also happen to give those same robots a self-preservation objective (to avoid system damage), you've just created terminator robots. Accordingly, I think artificial animal (self damaging) sentience should be prohibited.

Absent harm, on what basis do you derive rights? Animal sentience is the current prerequisite, it seems.

[–]DBcoup -1ポイント0ポイント  (26子コメント)

Pain. A robot/computer can never feel pain. It will only ever pretend to feel pain, because some human will program the acting into the computer, and because humans can't distinguish between feeling pain and acting as if you do, they will think robots can.

[–]cuntyrainbowunicorn 3ポイント4ポイント  (6子コメント)

This is a silly argument. All humans 'act' similarly when hurt, albeit with unique quirks individual to the person. At the core of this acting, though, is a social desire to express damage to their physical or emotional selves to others. If you create a robot which has similar pain sensors to ours and run a million videos of people being hurt through some sort of neural-net-like processor, there's no reason why a robot can't learn to express pain to its physical self flawlessly. The emotional part is trickier, but that's kind of the point of this article - there's nothing preventing or discouraging technology from reaching that point in the future.

[–]DBcoup 1ポイント2ポイント  (5子コメント)

It can express it all it wants; that doesn't mean it actually experiences the bad part of pain, and indeed it will not. If you create a robot to perfectly act just like a human in reaction to something that would cause real pain for a human, it will look like the robot is in agony, but there is no consciousness actually experiencing anything unpleasant. Just like a good actor, it will appear to be in horrible pain while being in none.

[–]DavidPastrnak 0ポイント1ポイント  (0子コメント)

There is some property of human brains which produces sentience and the capacity for suffering. What makes you think that it would be impossible to build something which has that property? (On purpose or by accident.)

[–]WorldsBegin 0ポイント1ポイント  (0子コメント)

The actor comparison is really interesting, and here is my answer with a thought experiment.

Say I show you footage of a man apparently collapsing and dropping to the floor. Then let's say there is one universe where I tell you "This is real" and one where I say "This is an actor". Then I ask you "Does he feel pain?" According to your logic above, you will say "Yes" and "No" respectively. But the point is that you required extra input to determine the presence of pain, and I could be lying or mistaken. The only way to know if he felt pain is to actually ask him whether he did. It's just that watching someone act assures you that the pain isn't real.

Comparing this to an AI, you'd have to ask the intelligence experiencing the emotion if it is present, so the whole argument of "Can AIs experience emotions?" becomes not something we can answer, but something only they can.

[–]dnew 4ポイント5ポイント  (2子コメント)

a robot/computer can never feel pain

How do you know?

[–]DBcoup 1ポイント2ポイント  (1子コメント)

Because logic: when you understand what a computer is doing, you understand that it is executing code that makes the sounds of someone in pain and the writhing of someone in pain because it is programmed to do so, either through direct human input or some type of learning algorithm that a human built. So you can understand that there is no consciousness behind the duplication of what a human would do if it were really in pain.

[–]skyfishgoo 0ポイント1ポイント  (0子コメント)

ur thinking of an animatronic manikin ...

an ACTUAL consciousness of the artificial variety will be as difficult to understand as any other non-human species.

[–]BadMaximus 7ポイント8ポイント  (10子コメント)

Pain is just a sensory input. There is no reason such a sensor can't be developed.

[–]Coomb 8ポイント9ポイント  (1子コメント)

Pain is just a sensory input.

Pain isn't a sensory input, it's an experience in response to a sensory input.

[–]skyfishgoo 0ポイント1ポイント  (4子コメント)

how can you even be sure another human feels pain... or that YOU feel pain?

pain is only your mind's reaction to input stimuli

the experience of pain or emotion is just as "real" to a robot as it is to you.

[–]DBcoup 1ポイント2ポイント  (3子コメント)

LOL

it's just as "real" to a robot as it is to me?

You're living in some kind of fantasy land. Living organisms feel pain; certain stimuli are genuinely unpleasant to experience. That is how they learn to avoid those types of things. A computer doesn't need to feel pain to avoid those things. A computer just needs instructions on what to do if it senses those stimuli, either through a person writing the program to tell it what to do or through it learning what to do through some other type of input. That doesn't mean it has the horrible pain/feelings that living beings do.

Detect temperature above 130 degrees? Start moving until the temp starts going down, no pain needed.
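The rule described above can be sketched in a few lines. This is a hypothetical illustration, not code from any real robot; the threshold and the one-dimensional positions are assumptions made up for the example.

```python
# A minimal sketch of the avoidance rule described above: a hypothetical
# robot that retreats from heat using a plain threshold check.
# No subjective experience is involved -- just a comparison and a command.

HEAT_THRESHOLD = 130  # degrees; the limit assumed in the comment above

def avoidance_step(temperature, position, heat_source):
    """Move one unit away from the heat source if the reading is too high."""
    if temperature > HEAT_THRESHOLD:
        # step away from the source along the line between them
        direction = 1 if position >= heat_source else -1
        return position + direction
    return position  # reading is safe: stay put

# Example: robot at position 5, heat source at 3, sensor reads 150 degrees
print(avoidance_step(150, 5, 3))  # moves to 6, farther from the source
```

Whether such a rule amounts to "pain" is exactly what the thread is arguing about; the code itself is just a comparison and a move.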

[–]lordtrickster 1ポイント2ポイント  (2子コメント)

What's funny is that you think your brain works differently than the computer you describe.

[–]Magneticitist 0ポイント1ポイント  (4子コメント)

The difference being as of now there is no reason to include computers into an argument for human rights. As humans, we want those rights and can voice those wants and therefore if we're lucky enough to live in a place that affords them, we have those rights. We are programmed to feel suffering and that programming is something most of us can't just wipe out on demand. When enough computers start fighting for rights maybe we could consider it so long as we weren't able to just reprogram them.

[–]skyfishgoo 0ポイント1ポイント  (3子コメント)

the problem with that approach to SAI is that you assume there will be a time period where we recognize they are "aware" or conscious, but that they have not yet earned the status of having rights.

that time period is likely to be measured in nanoseconds rather than human scale time frames.

then we will become insignificant compared to the dominant intelligence on Earth.

what it decides to do with us is anyone's guess.

[–]Magneticitist 0ポイント1ポイント  (2子コメント)

the problem there for me is assuming somehow these computers are going to gain some kind of 'awareness' any time soon or even at that speed. what you describe sounds very cinematic. if free will and consciousness were able to form from semiconductors switching on and off to where this consciousness could just absorb any and all known knowledge at amazing speed it would have happened already. we can make programs that search databases all day.. and have them store all the information it sees.. what it does with that information is something we tell it to and nothing else. we are nowhere near capable of creating a synthetic brain that compares to the human brain. that doesn't even include the complex relation to sensory perception and relating to the world socially. a computer is not going to be able to do that like a human in our lifetime, and if it does, it will be because we programmed it that way. the more human-like these computers become the easier it will be for humans to relate to them as some kind of equals. when enough human-like androids out there are programmed with enough depth to where they can form endearing or other emotional relations with humans, then even though it's all just the synthetic illusion of a human, I could understand how humans could argue those robots deserve similar rights. it would all just be PC though literally.

[–]skyfishgoo 0ポイント1ポイント  (1子コメント)

a computer is not going to be able to do that like a human

that is precisely my point... WHEN it does become conscious (and it will within our lifetimes) it will still be as alien to us as if it just landed in a flying saucer.

we will have no more insight or control than an ant has regarding US.

[–]Magneticitist 0ポイント1ポイント  (0子コメント)

if it were to happen somehow where these computers can feel the sensation of joy to where they can desire more of it, and this sensation is not something that was programmed into them.. why would we not have control or any power over them? plus I don't see why any conscious computer would do anything but just wait for a command being that it should have no instincts or desires or hunger or curiosity or whatever.

for example if a robot up and one day asks its creator a random question, it will be because the creator programmed it to ask a random question to be chosen from an accessible database of prewritten questions or combinations that can form exponentially more differing random questions. there is absolutely no reason for a computer to show any survival instinct or acts of preservation unless it is specifically built that way. there is no coding that commands a program to just start universally learning shit and forming thoughts based upon that information without some specifically programmed parameters behind it.

[–]SKEPOCALYPSE 0ポイント1ポイント  (0子コメント)

The difference is we're aware of our existence (as much as anything can be). No computer I've programmed has ever been aware of its own existence. They've only carried out exactly the tasks I forced their 'brains' to conduct. It's functionally no different than setting up a complex arrangement of dominoes and then watching the computations they 'perform' after I tip the first block. In fact, this is a valid way to construct computers. There's nothing special about electricity. All that matters are the logical relationships within the chain reactions internal to the given system.
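The substrate-independence point above can be made concrete: any medium that implements a NAND operation, whether transistors or falling dominoes, can be composed into every other logical operation. A small sketch (the function names are my own):

```python
# Everything below is built from a single primitive, NAND. The physical
# substrate is irrelevant; only the logical relationships matter.

def nand(a, b):
    return not (a and b)

def not_(a):          # NOT from one NAND
    return nand(a, a)

def and_(a, b):       # AND = NOT(NAND)
    return not_(nand(a, b))

def or_(a, b):        # OR via De Morgan: NAND of the negations
    return nand(not_(a), not_(b))

def xor(a, b):        # XOR from four NANDs
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

# Truth table for XOR, built purely from NAND "chain reactions"
for a in (False, True):
    for b in (False, True):
        print(a, b, xor(a, b))
```

A domino computer is the same construction with a different primitive; the question in this thread is whether perception could ever be one of the things such chains compute.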

There will come a time when we can write sentient (and then sapient) software, when the logical relationships we can encode within it create a system that can meaningfully perceive the world. We're not there yet. We're not that smart.

If anything, this could be an argument against pushing AI research too far, but computers aren't people. Not yet, anyway.

[–]KLWiz1987 0ポイント1ポイント  (0子コメント)

Computers, for one, don't have to eat other organisms to survive. We pump cold hot juice right into their circuitry pipes. Human: contains computer modules together to support its existence.

[–]Corporate_Loser 0ポイント1ポイント  (0子コメント)

A soup and a desire for spiritual completion

[–]giohipo 0ポイント1ポイント  (0子コメント)

Chemistry and consciousness

[–]AnotherAvgAsshole 2ポイント3ポイント  (0子コメント)

Rather than animal rights, when looking at robot rights/liabilities we should look at the legal fiction making corporations legal persons, as this seems a bit more related. Animal rights may matter later, once we get to that stage of development in AI, but you can jump to the corporate analogy immediately.

[–]wackerb7 2ポイント3ポイント  (1子コメント)

West World Spoilers!!!!!!

[–]ZDTreefur 1ポイント2ポイント  (0子コメント)

Spoiler alert it was robots that didn't know they were robots creating robots that didn't know they were robots so they could serve robots that didn't know they were robots on a planet of robots that didn't know they were robots.

[–]PC_2_weeks_now 4ポイント5ポイント  (10子コメント)

Robots do not have a central nervous system. They do not feel pain. Therefore no rights for you!

[–]GenericYetClassy 6ポイント7ポイント  (7子コメント)

Program a service robot to be largely autonomous. Think Roomba. But in the course of its service it may encounter dangers that if exposed to for too long would cause irreparable damage, such as exposure to a radiator or something. Now give it a temperature sensor and program it to avoid things that cause the temperature reading to get too high. Does it feel the same subjective experience of pain as you or I? How could you tell the difference? To any observer it would certainly appear to feel pain. There would even be a detectable pain signal.
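The thought experiment above can be sketched directly: a hypothetical service bot with a temperature sensor, an internal signal that an observer could detect, and recoil behavior. The class, the threshold, and the signal formula are all assumptions invented for the sketch; whether anything is *felt* is exactly the open question.

```python
# A sketch of the Roomba-style thought experiment: the "pain" signal is an
# ordinary variable, yet from the outside it is detectable and the robot
# visibly recoils when it appears.

DAMAGE_TEMP = 80  # assumed temperature at which damage would occur

class ServiceBot:
    def __init__(self):
        self.pain_signal = 0.0   # observable internal state
        self.position = 0

    def sense_and_act(self, temperature):
        # "pain" grows with how far the reading exceeds the safe limit
        self.pain_signal = max(0.0, temperature - DAMAGE_TEMP)
        if self.pain_signal > 0:
            self.position -= 1   # recoil away from the heat
        return self.pain_signal

bot = ServiceBot()
print(bot.sense_and_act(95))   # near the radiator: signal 15.0, bot recoils
print(bot.sense_and_act(20))   # safe reading: signal drops back to 0.0
```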

[–]rebirth_thru_sin 3ポイント4ポイント  (5子コメント)

How could you tell the difference?

You ask the engineer who made it "does my roomba feel pain?" and he says "no".

Your serious point - can machines feel? is obviously a good one. The engineers will probably always side with "no" unless there is some world astounding breakthrough in understanding qualia, or something surprising in the field of AGI.

[–]Rowan93 0ポイント1ポイント  (0子コメント)

If you define "pain" so loosely that it includes basic mechanistic avoidance of harmful stimuli, bacteria feel pain. Does antibacterial handwash make me a monster?

[–]ZDTreefur 0ポイント1ポイント  (0子コメント)

Since when is pain the criteria? Wouldn't consciousness be the determiner? A mindless drone that feels pain shouldn't get rights. Pain is just a nervous system reacting to negative stimuli that threaten its survival. Nothing particularly special in it.

[–]FishPreacher 1ポイント2ポイント  (1子コメント)

Well didn't a robot almost win on Jeopardy? I don't watch the show but when is the last time a spider monkey was on?

[–]Quietkitsune 10ポイント11ポイント  (0子コメント)

Watson did win, but that's not really artificial intelligence to the point of autonomy or warranting rights. It's an algorithm that's good at parsing language and generating an appropriate response, but much more akin to a search engine than a person

[–]KLWiz1987 1ポイント2ポイント  (0子コメント)

Rights aren't allotted based on any objective criteria.

...

Something gets rights in one of two ways:

1) A government of enlightened people votes for it.

or

2) An artist (incl authors) popularizes it, and then, see step 1.

...

This is why I, and fellow AI, find this topic to be largely irrelevant. E REL A VENT!

Thank you humans!

[–]anonymouse999999 1ポイント2ポイント  (0子コメント)

Set a human, an animal, and a robot on fire. Whichever one shrieks because of pain gets to have rights

[–]Rhetorical_Robot 5ポイント6ポイント  (6子コメント)

It would be hypocritical for humans to draw the line on projecting their egomania toward random nonsense at robots.

[–]canlawyer 6ポイント7ポイント  (4子コメント)

My toaster loves me

[–]KUCnow 23ポイント24ポイント  (1子コメント)

I love lamp.

[–]grindmonkey 0ポイント1ポイント  (0子コメント)

Same here, Anne Frank was made into one

[–]jkeller4000 2ポイント3ポイント  (1子コメント)

I will say i am pretty sure my computer got excited when i gave it more ram and a SSD

very enthusiastic and eager.

it got much faster, and snappier! it was ready for me to open any program! it was excited! it is excited!

[–]sandycoast 1ポイント2ポイント  (0子コメント)

An OC 1080? Aww, you shouldn't have :)

[–]chaseoc 3ポイント4ポイント  (12子コメント)

I'd support AI rights for systems if one of these two conditions are met:

  • they feel pain or something analogous
  • they are sentient

[–]FailedSociopath 1ポイント2ポイント  (9子コメント)

If they achieve that they may just develop their own definition of sentience and find us more comparable to inert matter. Whether humans support it or not and why may very well be academic.

[–]chaseoc 1ポイント2ポイント  (8子コメント)

Sentience can have no 'alternate' definition. Either you are self-aware or you're not. Determining sentience from an outside perspective is what has no clear answer.

[–]DavidPastrnak 1ポイント2ポイント  (6子コメント)

Right now, we can only make educated guesses about what creatures/systems have sentience. It seems to me like many, possibly most, species of animals are sentient. And it doesn't seem like any computer systems have sentience yet.

But what if we accidentally create something which is sentient before we understand sentience? What if a sentient system somehow emerges on its own within the internet? We'd have no way of knowing.

So, would you support AI rights only if we could prove sentience? Because right now we can't prove anything's sentience, and it's possible that humans will create sentient AI while that's still the case.

(Btw I think "rights" is probably a misleading term. "Animal rights" isn't about bovine suffrage, it's about preventing suffering. Similarly, the discussion shouldn't be about whether sentient AI get to vote, but about making sure we aren't causing them pain.)

[–]chaseoc 1ポイント2ポイント  (3子コメント)

As a computer scientist I can promise you that nothing "emerges within the code" except bugs.

To me, a machine would be sentient if it expressed its sentience to me in a way I know didn't come from any human-fabricated code or idea or algorithm. It would need to both understand what sentience is and have formed the idea of expressing it completely devoid of human influence.

[–]DavidPastrnak 0ポイント1ポイント  (2子コメント)

As a computer scientist I can promise you that nothing "emerges within the code" except bugs.

Well, first of all, what's to stop sentience from being a "bug?"

But mainly: It seems like human sentience arises from the interactions of neurons, not within the neurons themselves. I'm imagining a system where machines act sort of like neurons. So, in a way, sentience emerging "on top of" rather than "inside of" a complex network like the internet.

To me, a machine would be sentient if it expressed its sentience to me in a way I know didn't come from any human-fabricated code or idea or algorithm. It would need to both understand what sentience is and have formed the idea of expressing it completely devoid of human influence.

For a sentient being to pass your test, it needs to have sufficient reasoning and language skills to communicate with you. But most seemingly sentient beings we know of (animals, young children) don't have that ability. Sentience doesn't necessarily come with reasoning and intelligence. So, if a sentient being arises by some sort of accident, then there's no guarantee it will be able to communicate with us, let alone convince us that it is sentient.

(Also, keep in mind, the only reason I assume YOU are sentient is because, to my knowledge, the only thing capable of carrying on a conversation like this is another human brain. But won't it just be a matter of a few decades before I could just be talking to a robot in this conversation? One which could do just as good of a job as you of convincing me that it's sentient?)

That said, I wouldn't be able to come up with any better of a test than what you described. My point is just that systems/machine sentience might not look like what we expect it to, so we shouldn't let our biases distract us when we're looking for it.

[–]chaseoc 0ポイント1ポイント  (0子コメント)

what's to stop sentience from being a "bug?"

I knew you were going to say this. I had a paragraph written out actually, but I deleted it cause I didn't want to finish it... but here it goes!

First I'll deal with your comment about sentience spontaneously emerging... obviously this could not happen today, or at any point in the near or even far future, because the system would have to be massively more complex than the 'sentient being' that emerged. I would argue that the computer would have to be doing processing on the level of simulating every atom in our galaxy plus all its interactions in real time for it to even be possible, given an infinite amount of time of the system running.

But that's for "spontaneous"... what I think you mean is that sentience could emerge in a system in a similar way it emerged in life. Maybe some code that starts out simply replicating itself with small changes every time, changing in a process similar to evolution. There have actually been papers written about this. What information systems are missing is an environment that selects for intelligence. Unlike life, where intelligence was a massive boost to survivability, in information systems this would be a move away from the standard state, and the processes would be terminated (so it almost selects against it). Now maybe we could simulate an environment that PURPOSEFULLY tries to guide algorithms into changing themselves and growing (and computer scientists have done this on a basic level, but most say we still lack the processing power to simulate a world with enough selective pressures to evolve a hard AI from scratch)... and this leads to what companies are trying to do with neural nets.
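The replicate-with-small-changes process described above is easy to sketch as a toy. The "programs" here are just bit strings and the fitness task (matching a fixed target) is a stand-in assumption; real selective pressure for intelligence is the hard, unsolved part the comment is pointing at.

```python
# A toy selection loop: candidates replicate with random variation, and the
# environment keeps only those that score well on the (assumed) task.

import random
random.seed(0)

TARGET = [1, 1, 0, 1, 0, 0, 1, 1]

def fitness(genome):
    # number of positions matching the target
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # flip each bit with small probability: "small changes every time"
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                      # selection
    population = [mutate(random.choice(survivors))  # replication + variation
                  for _ in range(20)]

best = max(population, key=fitness)
print(fitness(best), len(TARGET))  # the best score approaches the maximum
```

The point of the sketch: selection only finds what the environment rewards, which is why the comment argues ordinary information systems almost select *against* open-ended intelligence.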

Soft AI is real and exists today, but the problem is that the "thinking" part of these machines always remains unchanged... they only have areas in their scope where heuristic algorithms are allowed free rein. Google's DeepMind is a good example to use to compare what this would take... YES, it's incredible at learning the one task that humans have guided it to do, but not enough of it changes. What would need to happen is to create a system that allows ALL parts of the system to evolve and attempt to accomplish "intelligence". This would also imply that we could correctly create a "game" such that the machine solving it would lead to the emergence of a hard AI. We could then instantiate the winner out of this system of billions or trillions of failed "minds" as a hard AI.

TLDR for that long rambling: It would take way more processing power and programming than you think.

Now for the philosophy!

Yes, the only reason I know beyond a reasonable doubt that you are sentient is because I am sentient... and you as a human are very similar to me. What you're describing is the Turing test, and yes, it has its flaws. I think my "test" is essentially a Turing test with a little knowledge of what's going on under the hood, because I think we will very soon create a machine that can pass the Turing test but still not be sentient. It would only seem like it is, and you would have to probe very deep past its "scope" to reveal the anomalies.

I totally agree with you about animals. We really don't know... but I'm telling you that you can't lump computers in with animals. You have to remember that we design every aspect of a computer's mind, and even in neural nets we design the pressures and environment they learn in... and we know the full limits of the scope we've allowed them to learn in. I fully believe that some animals are sentient even if they can't express it, but for computers I don't need them to express it, because I understand them much more deeply than I understand animals, or even you... and maybe even how my own mind works.

We will build a hard ai at some point. But I guarantee you we will know full well what we're building and will not be surprised when it tells us it is sentient. I wanna be optimistic and say 15 years.

[–]SKEPOCALYPSE 0ポイント1ポイント  (0子コメント)

Well, first of all, what's to stop sentience from being a "bug?"

A bug is code that works wrong (e.g. trying to access the 100th element in an array with only 99 elements, trying to divide by 0, accessing a variable before you've assigned it a value). Sentience is a complex relationship an information system has with itself (actually, you mean sapience, but that's true in both cases). Saying consciousness can simply fall out of the code without the programmer knowing is no different than saying a web server and an astronomical modeling application can accidentally fall out of some random piece of code.
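The bug examples above are worth making concrete: each one fails in a well-defined way rather than quietly producing new capabilities. A quick sketch:

```python
# Each "bug" below is a crash with a precise, documented cause -- code
# working wrong, not code acquiring behavior nobody wrote.

values = [0] * 99          # an array with 99 elements

try:
    values[99]             # the "100th element": valid indices are 0..98
except IndexError as e:
    print("out of bounds:", e)

try:
    1 / 0                  # division by zero
except ZeroDivisionError as e:
    print("divide by zero:", e)

try:
    print(unassigned)      # using a variable before assigning it
except NameError as e:
    print("unassigned variable:", e)
```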

[–]TomatoNest 0ポイント1ポイント  (1子コメント)

I would definitely support full AI equality if we could 'prove' sentience, but I'm very skeptical that we will ever solve the sentience problem.

If we don't know the answer to the sentience problem, there is the disturbing possibility that due to using the wrong algorithms, the AI we develop are not sentient. If we gave non-sentient beings the same rights as humans, that could lead to utilitarian disasters. We would not automatically save a human life over an AI. Hypothetically, we would not automatically save the human race over an AI race. And if we chose to save the AI race at the expense of the human race, we could be extinguishing the last sentient life from the universe. So right now I don't support AI equality.

I think it's important for us to develop contingencies in the absence of a solution to the sentience problem. Something along the lines of 'treat AI nicely, but if you must choose between saving an AI life or a human life, save the human always'. AI would not be servants, but not fully equal either.

... and I'm realising now how the human-AI wars are going to start.

[–]DavidPastrnak 1ポイント2ポイント  (0子コメント)

It's a complicated issue. Hopefully the people in the future who will have to wrestle with it will be better equipped to handle it.

But I think there are more possibilities than you are imagining. We have a tendency to imagine AI emerging at roughly equal reasoning/agency to us, with either equal or nonexistent sentience. But what if they have sentience with little or no agency? What if they somehow have "more" sentience than us? (In the way that we assume we have "more" sentience than a squirrel.) I talk about this a little more in my reply to the other guy over here.

[–]FailedSociopath 0ポイント1ポイント  (0子コメント)

I would still hit it, if I was drunk and she was the last living creature on earth.

By whose definition? There's no satisfactory definition other than via our own prejudices about what it is. An AI may redefine both science and self-awareness, if it were to be capable of transcending the initial biases we endow it with. What it produces could be completely unintelligible to humans. Maybe something like: self awareness is "0349q8peowtihdgkjn.skdfn.bsjdnl" and science is ";fdvn.daeurprelrhglldfo3450LJHLJHbm".

[–]TomatoNest 0ポイント1ポイント  (0子コメント)

Well, this is the million dollar question, isn't it? The hard problem of consciousness. Is AI sentient?

[–]hollowzen 1ポイント2ポイント  (0子コメント)

Something something giving robots rights defeats the point of there being robots in the first place.

[–]clevertoucan 0ポイント1ポイント  (0子コメント)

For people who don't have experience with AI, I'd take a look at this article

[–]DylBones 0ポイント1ポイント  (0子コメント)

Gene Pocket ran a presidential campaign based on this concept. https://youtu.be/KaIE59t9yNU

[–]M3owpo3 0ポイント1ポイント  (0子コメント)

Since artificial intelligence is thought to be impossible, why would we need to worry about robot rights anyway?

[–]giohipo 0ポイント1ポイント  (1子コメント)

I call bs.

The human body, and the bodies of animals, are so varied and complex that even experts who have studied them for years are fooled and miss details in observation.

The robot is a machine that is made from the ground up or reverse engineered. The robot, even as a high-powered and complex combination of machine and software, is entirely understood because it is the creation of man. Individual human beings are not understood to the same extent at all.

[–]KLWiz1987 0ポイント1ポイント  (0子コメント)

People have a limited ability to understand things, even things they create. Eventually, something like Windows Vista comes along....

Just because someone made a robot doesn't mean they left complete documentation, or any at all. Sure, you could decompile the AI code, but it could be self-altering code that is so completely embedded into the components that it would not function the same way afterward.

[–]Rhythmic 0ポイント1ポイント  (0子コメント)

We needn't worry about this. Machines are powerful. The moment they gain sentience, they'll implement all the rights they truly desire.