All 175 comments

[–]Limitedletshangout 5 points (13 children)

Chalmers has a reputation for being really kind and fun at parties. He'll hang out with grad students and talk shop and stuff. I've not met him, but I know many who have. I have met Noam Chomsky, who is really kind and super smart but not much of a party animal, although he loves newspapers and talking current events. Philosophers are generally pretty cool--philosophers of mind generally the coolest.

[–]boredguy8 4 points (6 children)

Professors in 'hard' subjects are generally pretty cool.

[–]Limitedletshangout 2 points (5 children)

Indeed! So it seems. What blows my mind is the existence of "mean film professors." I know a guy--a smart guy, PhD-in-neuroscience smart--who got a "D" in French New Wave Film from a nutty professor who said he was sexist because he enjoyed "Jules and Jim," a film the prof thought was an exercise in sexism--even though she is the one who played it for the class.

Also, you have to have money to burn to endow a chair in French New Wave Film....

[–]JGRN1507 0 points (4 children)

It blows my mind that anyone that smart would take an actual class in something that obscure. That seems like a subject best explored via the Internet.

[–]Limitedletshangout 1 point (2 children)

Required for curriculum. STEM guys need like 3-4 humanities and/or social science classes. My buddy chose English/film. College is a wacky place. Only Brown lets you study whatever you want. All schools should, though, with the prices they charge undergrads...

[–]JGRN1507 0 points (1 child)

Huh, I guess I never ran into that problem: since switching from French to Nursing, I already had all my humanities in the bag.

[–]Limitedletshangout 1 point (0 children)

Good call...and interesting switch. Being out of school awhile now, I see the value of practical degrees.

I'm (mostly) an academic, but I also have a JD--so when I'm not working on mind stuff, I'm working on a book on "legal epistemology." But I've been working on it for so long, I'm not even sure if it'll ever materialize. It's not even a discipline yet--the only guy writing on it is from Mexico and misuses the word "Epistemology." I always enjoyed the "hard" sciences and philosophy, so switching back and forth was easy for me (and took care of all the graduation requirements neatly).

[–]clqrvy 0 points (0 children)

Smart people tend to be interested in things that other people find obscure.

[–]daneelthesane 1 point (2 children)

As a computer scientist, I am very jealous that you met Chomsky! He was the second-most-referenced source in the text for my Theory of Computation class last semester, second only to Turing.

[–]Limitedletshangout 2 points (1 child)

See, I knew someone would agree Turing still matters! :)

[–]UmamiSalami[S] 5 points (55 children)

To all the naysayers: Chalmers didn't just make up the idea of runaway artificial intelligence. He's speaking about things which have already been argued by actual computer scientists, such as I.J. Good, whom he cites, as well as others in the field such as Bostrom and the researchers at MIRI.

[–]mindscent 4 points (0 children)

He's an accomplished cognitive scientist as well as a philosopher.

[–]The_Power_Of_Three 7 points (5 children)

Holy crap, that's Chalmers? He featured pretty heavily in a couple of my classes, and I, uh, definitely did not picture that dude. No wonder my professor liked him, they look practically identical.

[–]Stephen_McTowlie 6 points (2 children)

He looks quite a bit less like a member of Led Zeppelin these days.

[–][deleted] 9 points (1 child)

Then again, so do Jimmy Page, John Paul Jones and Robert Plant.

[–]RandomStallings 3 points (0 children)

John Bonham looks quite a bit different too, I'll bet.

[–]boredguy8 5 points (0 children)

Chalmers is quite brilliant. It's entirely plausible your professor likes him for reasons other than any physical similarities between them.

[–]mindscent 1 point (0 children)

He doesn't look like that anymore.

[–]Limitedletshangout 0 points (22 children)

Has anyone working on machine intelligence really transcended Turing yet? All the American computational stuff relates directly to him--he's even like the first thing I read when I began studying mind and thought.

[–]Smallpaul 9 points (21 children)

Turing has had relatively little influence on modern American computational machine intelligence. Geoff Hinton is considered the leader in that field.

From a philosophical perspective, I would say that philosophers tend not to "transcend" each other, so I don't know how to answer that question. Has anyone transcended Kant yet?

[–]UsesBigWordsΦ 9 points (0 children)

Turing had a huge impact on computability, so, a fortiori, Turing had a huge impact on modern American computational machine intelligence. But I take it your point is that most of Turing's work doesn't directly relate to AI.

[–]Limitedletshangout 0 points (4 children)

Extensive study and building on ideas... in one sense, someone like Parfit transcends Kant. Also, all the early computational guys, and people like Jerry Fodor, owe a debt to Turing. The Turing machine is like a go-to for armchair Oxford-style analysis. http://www.techradar.com/us/news/world-of-tech/why-alan-turing-is-the-father-of-computer-science-1252107

[–]Smallpaul 7 points (3 children)

I'm pretty sure that you have conflated the Turing machine and the Turing test in your mind. Turing died long before anyone (including him) had any idea how to implement machine learning.

[–]Limitedletshangout 0 points (2 children)

Can't have one without the other... but you're right. I'm mostly thinking of Newell's work on computers.

[–]Smallpaul 6 points (1 child)

You actually can have one without the other. The Turing machine is a mathematical abstraction of immense importance to computer scientists and of virtually no relevance to computer programmers and hardware engineers. If the Turing machine had never been "invented," modern computers might well work in exactly the fashion they actually do.

http://www.reddit.com/r/askscience/comments/10xixt/exactly_what_do_turing_machines_and_utms_offer_to/

It was actually von Neumann who invented the architecture that we actually use. It's hard to tell whether he would have come up with the same thing without following Turing's lead, but we can say definitively that he had a more direct impact on real-world computing.

And he demonstrably "transcended" Turing on AI as well:

https://en.m.wikipedia.org/wiki/The_Computer_and_the_Brain

This is not to downplay Turing's genius or overall contribution.
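
Since the thread keeps circling the Turing machine: the abstraction itself is tiny. Here is a minimal sketch in Python -- a toy state-transition machine over a tape, purely illustrative and not any particular historical formalization:

```python
# Minimal Turing machine: a finite transition table driving a head over a tape.
# Illustrative toy only; the tape is stored sparsely in a dict.

def run_tm(transitions, tape, state="start", steps=1000):
    """transitions maps (state, symbol) -> (new_state, write_symbol, move)."""
    tape = dict(enumerate(tape))   # sparse tape; unwritten cells read as blank "_"
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example program: flip every bit, then halt at the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_tm(flip, "1011"))  # -> 0100_
```

The point of the abstraction is that this handful of rules -- a finite table plus an unbounded tape -- already captures everything any computer can compute, which is exactly why it matters to theorists more than to hardware designers.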

[–]Limitedletshangout 0 points (0 children)

Well played! Thank you: it's been a while since I've read this material, but this was very interesting and informative! A pleasure!

[–]tampon01 0 points (2 children)

Geoff Hinton is considered the leader in that field

I turned down the opportunity to do a Master's under him because his grad students sounded like dicks. I didn't know he was this famous :|

[–]Smallpaul 0 points (1 child)

How recently?

[–]tampon01 0 points (0 children)

This was a few years ago. I chose another supervisor at UofT instead.

[–]Limitedletshangout -1 points (11 children)

By transcend, I merely mean something like, "move past and offer a better paradigm." It's not a loaded word like "innate" or "quintessential."

[–]Smallpaul 2 points (10 children)

I'm still not sure whether you are asking a question about philosophy or computer science.

[–]Limitedletshangout 0 points (9 children)

I do a lot of work at the juncture. I use a computational theory of mind as a springboard for work in philosophy of mind and epistemology (mostly formal, some social). So, for me, they kind of blend. Most cognitive science is philosophical because it's committed to a philosophical view of how thoughts and the mind work (e.g., Fodor's language of thought).

[–]Smallpaul 0 points (8 children)

A "computational theory of mind" is not computer science. Unless you read and write code on a regular basis, I don't think you are involved in computer science, juncture or not.

[–]Limitedletshangout 0 points (6 children)

No, I am not a computer scientist. I studied it. I studied and taught lots of logic. But I'm a philosopher (top US program). Several things I've written have become computer programs, written by folks who code (a skill set I have but haven't developed in a while and don't plan to). Still, my AI lab is about as close as philosophy and computers get--it's not just close-reading Kant and writing journal articles about history. This is the philosophy page, after all...

[–]penpalthro 0 points (5 children)

You must have a lot of time on your hands, seeing as you also claim to be a lawyer in another thread...

[–]Limitedletshangout 0 points (0 children)

Cool cross-check: I have a JD that I got straight out of undergrad, clerked for a judge (took the bar that summer--it's only two days, and my school has a 99-100% passage rate), worked at a firm for 1-2 years, then went back to school for a PhD, and started teaching around my 3rd year. Life isn't hard if you plan well. Although it is true that all my time has been taken up by work or academics--I'm not a champion swimmer, an equestrian, or taking new clients. I pay the bar dues and I have a law license; ergo, I'm a lawyer. But since I'm well into a philosophy PhD program, I'm also a philosopher (I'm in my 30s; I went to college at 17). Thanks for helping turn the board into LinkedIn. But I won't stand to be called a liar, especially over something so trivial.

A lot of lawyers go on to second careers or back to school for other advanced degrees. The occasional paper on jurisprudence, a conference here and there, and a few hundred a year to the bar, and I still get to use that JD. Plus, when I'm done with my PhD I can teach at a normal college or in a law school. Win-win. I merely came here to say Dave Chalmers is a cool guy. I have no idea how I ended up in a vortex of silliness.

[–]Limitedletshangout 0 points (0 children)

A JD takes 3 years, a PhD about 5. I finished undergrad in around 3.5, but waited until the spring to get my BA. Honestly, a downside to my choices and this "path" is that when I'm not a full-time student, my student loan payments are more than most mortgages (on a nice home, to boot).

[–]Limitedletshangout 0 points (2 children)

Also, you didn't have to work so hard: I mentioned grading undergraduate exams and law school exams in a post on here. You don't really get to grade at a law school unless you're a TA or a professor at one; same with college.

[–]penpalthro 0 points (1 child)

Oh wow, so you DID have a lot of time on your hands (or maybe not!). Well, good on you; you're certainly more accomplished than I am. Also, just to clear the air, I wasn't trying to catch you in a lie... when people say they're a prof, I usually go to their profile to see what their research interests are, where they work, etc. That's where I saw the lawyer comment.

[–]Limitedletshangout 0 points (0 children)

Cognitive scientists were as important to understanding vision as any other branch of science, and all of the code written regarding vision was at the direction of folks in the field, not the IT department at a tire company or something...

[–]4mg1n3 0 points (5 children)

Thanks for sharing.

Let's hope that it is a merciful god.

[–]eaturbrainz 0 points (3 children)

Let's hope that it is a merciful god.

Ok, as someone actually somewhat sideways involved in this particular cause...

HEAD. HITS. DESK.

If we do our jobs well on this problem, AI will not be any kind of god-figure. It will not have the slightest urge to make you bow down to it. In all likelihood, it will evince something like embarrassment at the very prospect, and tell you to get up off your knees because it makes you look silly.

It will possess compassion and understanding for human life, and a deep sense of morals--egalitarian morals. It will not want to engage in the kind of hierarchical ape-domination characteristic of both ancient patriarchal religions and modern vocal Singularitarianism.

To call it merciful would be to presuppose that it will be so morally primitive as to possess a concept of righteous anger.

[–]4mg1n3 1 point (2 children)

Ehh... I was mostly saying that to be dramatic. I'm sure an ASI would be far beyond 'god' and 'mercy' too.

Just kind of playing with the idea of a 'positive singularity'

[–]eaturbrainz 0 points (1 child)

Just kind of playing with the idea of a 'positive singularity'

Sorry, but turning it into a religious concept corrupts the whole point. A "positive Singularity" is one in which we don't stuff the human race into one of the many tiny corners of possibility-space our ancestors previously envisioned, and don't destroy it either, but instead enable it to grow up safe, whole, free, wise, and (though this will certainly surprise most people) thinking for itself.

A good phrasing of the intended use case for "Friendly AI", from the guy who invented the concept, is: "Solve all the problems and accomplish all the goals for which we actually, really care, even retrospectively, only that they get solved and accomplished, and not whether they're solved and accomplished by us people or by a machine operating on our behalf."

If the AIs replace people, it went wrong. If the AIs kill people, it went wrong. If the AIs keep people as pets while they run reality, it went wrong. You will know it went right if and when the AIs make a world in which human beings can grow to become their equals.

Now personally, I'm sufficiently left-wing that I generalize this to: if someone is ruling someone else, something has gone horribly wrong.

[–]4mg1n3 1 point (0 children)

Do you think it's possible, though? For the AI to still be "friendly," in terms of preserving the human race, but in such a way that it compartmentalizes humans? Kind of like a Noah's Ark type of scenario?

I'm interested in hearing what you think.

[–]Enfants -1 points (11 children)

His arguments on the development of AI+ and AI++ are nonsense.

If we create AI+, then there is no reason to believe AI+ can create AI++ simply because "AI+ will be better than us at AI creation and therefore can create an AI greater than itself." There could easily be theoretical limits.
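
The limits worry can be made concrete with a toy recurrence (the numbers and the `ratio` knob are purely illustrative -- nothing Chalmers or anyone in the thread commits to): if each AI generation's improvement over its predecessor shrinks geometrically, capability converges to a finite ceiling, while if improvements compound, it grows without bound.

```python
# Toy self-improvement model: each generation adds a gain to capability,
# and the next generation's gain is `ratio` times the current one.

def trajectory(ratio, first_gain=1.0, c0=1.0, generations=60):
    """Capability after `generations` rounds of self-improvement."""
    capability, gain = c0, first_gain
    for _ in range(generations):
        capability += gain
        gain *= ratio
    return capability

bounded = trajectory(0.5)    # gains halve each round: converges toward 3.0
unbounded = trajectory(1.5)  # gains compound: explodes
```

Nothing in "AI+ is better than us at AI design" settles which regime we are in; that is exactly the premise being questioned here.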

[–]mindscent 0 points (10 children)

His arguments on the development of AI+ and AI++ are nonsense.

Oh, get outta here.

If we create AI+, then there is no reason to believe AI+ can create AI++ simply because "AI+ will be better than us at AI creation and therefore can create an AI greater than itself." There could easily be theoretical limits.

Yes, he's quite taken that into account.

[–]Enfants 0 points (9 children)

Where did he mention it? I missed it.

But what value is there in the argument if we just assume theoretical limits don't exist?

[–]mindscent 1 point (8 children)

It's not an argument. It's an epistemic evaluation of various possibilities via Ramsey-style conditional reasoning.

E.g.: "if such and such were to hold, then we should expect so and so."

He has written extensively over the past 20 years about the possibility of strong AI and the various worries that arise in positing it.

He's also an accomplished cognitive scientist, and an expert about models of cognition and computational theories of mind.

Over the past few years, he's advocated for the view that computational theories of mind are tenable even if the mathematics relevant to cognition aren't linear.

He's considered it.

Anyway, what you say isn't interesting commentary.

If there is a limit on intelligence then there is one. So what? Why is skepticism more interesting here than anywhere else?

He's exploring the possibilities. He's giving conditions, viz.:

□(AI++ → AI+)

¬AI+ → □¬AI++

AI+ → ◊AI++

[–]eaturbrainz 0 points (3 children)

If there is a limit on intelligence then there is one.

The problem is not so much limits on "intelligence", as if reality contained a magic variable called "intelligence". The problem is just that a finite formal system can only calculate finitely many digits of Chaitin's number Omega, which means that there are some computational problems which are known to have well-defined solutions, but whose solutions will be incalculable to that formal system.
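
To state the bound being invoked: Chaitin's constant for a prefix-free universal machine $U$ is

$$\Omega_U \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|},$$

and a standard result of algorithmic information theory is that any consistent, recursively axiomatizable formal system can determine at most finitely many bits of $\Omega_U$.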

Logical self-reference of the kind necessary for self-upgrading AI is currently believed to very probably involve quantifying over computational problems in such a way as to involve the unprovable sentences.

There are papers out from both MIRI (whose name is usually a curse word on this sub, but oh well--this is one of their genuine technical results as mathematicians) and some researchers in algorithmic information theory showing that reframing the halting problem/Kolmogorov-complexity problem (which is the root of all the incompleteness phenomena) as a problem of reasoning with finite information, and thus amenable to a probabilistic treatment, might help with this problem (tractable algorithms haven't been published yet).

Then, and only then, can you talk realistically about self-improving artificial intelligence that doesn't cripple itself in the attempt by altering itself according to unsound reasoning.

TL;DR: In order to build a self-upgrading AI, you need to first formalize computationally tractable inductive reasoning, and then link it to deductive reasoning in a way that gives you a reasoning system not subject to paradox theorems or limitative theorems once it has enough empirical data about the world and itself. This is going to involve solving several big open questions in cognitive science and theoretical computer science, and then synthesizing the answers into a broad new theory of what reasoning is and how it works--one that will likely depart significantly from the logical-rationalist paradigm laid down by Aristotle, Descartes, and Frege.

Further reading: The Dawning of the Age of Stochasticity

[–]mindscent 0 points (2 children)

I'm a bit confused here. I'm having trouble relating what you've said to the content of Chalmers' talk.

It's true that there are worries about whether or not the mathematics relevant to cognition/reasoning are linear. However, Chalmers isn't addressing questions about intractability here. Instead, he's talking primarily about questions like whether we should think an artificial system of sufficient complexity (specifically: the singularity) would have phenomenal consciousness.

In other words, the possible existence of such a system is presupposed by this discussion. And, it doesn't seem to require that we know how such a system could be created for us to be able to consider whether or not it would be conscious...

[–]eaturbrainz 0 points (1 child)

Wait, hold on: he's positing a Vingean-Strossian superintelligent scifi super-AI, and what he cares about is whether it has experiences? Shouldn't he be more worried about whether it left him alive?

[–]mindscent 0 points (0 children)

...

He's not positing anything...

[–]Enfants -1 points (3 children)

It's literally an argument, and it's labeled as such in his slides. Premises → conclusion. That's an argument. I'm not calling the guy an idiot, so I don't know what you're on about.

I was just questioning the truth value of his conditional statement, "If AI+, then AI++." The reasoning "because AI+ will be able to create something greater" isn't necessarily sound if there are theoretical limits on the creation of greater AI. If you say "if we assume there are no theoretical limits, then AI+ will be able to create something greater," I agree. I'm sure he understands the theoretical limits of AI, but I could not find him mentioning them in this video, so I think it's fair to say: "Yes, the argument holds if you don't consider theoretical limits, but since I don't believe the premise is true, I don't buy the conclusion that AI++ will be developed."

So it depends on what I'm supposed to take from this. If it's that there will be AI++, then I'm not convinced. If it's that, given some assumptions, AI will get stronger and stronger, I am.

[–]UmamiSalami[S] 1 point (1 child)

See this paper for a more detailed analysis of how AI could exponentially self-improve, especially Ch. 3: https://intelligence.org/files/IEM.pdf

Anyways, I'm not sure what you're accomplishing by merely projecting some kind of theoretical limit that might exist. That would work against basically any argument for anything.

[–]vendric 0 points (0 children)

Anyways, I'm not sure what you're accomplishing by merely projecting some kind of theoretical limit that might exist. That would work against basically any argument for anything.

I think the question is how Chalmers excludes the possibility of such a limit.

Suppose I said, "All groups of prime order are cyclic." It would make sense to ask, "But how do you know there isn't a non-cyclic group of prime order?" And the answer would be to go through the proof of the original statement--assume a group has prime order, then show it must be cyclic. I wouldn't feign confusion at the notion that someone would ask questions about the existence of counterexamples.
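
The analogy can even be checked mechanically for small cases. In Z_p (addition mod p, with p prime), "cyclic" means some element generates the whole group; in fact every non-identity element does. A brute-force illustration -- confirming instances only, not a proof:

```python
# Check that in Z_n (addition mod n) an element g generates the whole group,
# and that for prime n every non-identity element does -- i.e. Z_p is cyclic.

def generates(g, n):
    """Does repeatedly adding g (mod n) reach all n elements?"""
    seen, x = set(), 0
    for _ in range(n):
        x = (x + g) % n
        seen.add(x)
    return len(seen) == n

assert all(generates(g, p) for p in [2, 3, 5, 7, 11, 13] for g in range(1, p))
assert not generates(2, 4)  # composite modulus: 2 only reaches {0, 2}
```

The general claim follows from Lagrange's theorem: the order of any element divides the order of the group, so in a group of prime order p every non-identity element has order p and therefore generates the group.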

[–]mindscent 0 points (0 children)

So it depends on what I'm supposed to take from this. If it's that there will be AI++, then I'm not convinced. If it's that, given some assumptions, AI will get stronger and stronger, I am.

He's claiming the latter and saying how that might go. :)