Started a blog - Artificial Intelligence paper reviews

Mundane & Pointless Stuff I Must Share: The Off Topic Forum


Username17
Serious Badass
Posts: 29894
Joined: Fri Mar 07, 2008 7:54 pm

Post by Username17 »

You know that however competent an AI's programming is, it still has to run on hardware, right? Hardware which some people think might be able to achieve an amount of information processing equal to the human brain by 2030 or 2050 or so. So in like two to four decades, we might be able to replicate the learning and growth potential of a small child. Whup-de-fucking-do.

-Username17
cthulhu
Duke
Posts: 2162
Joined: Fri Mar 07, 2008 7:54 pm

Post by cthulhu »

It's not even remotely a risk. We've had evolved algorithms for fucking ages, and yet no-one can do anything useful.
PhoneLobster
King
Posts: 6403
Joined: Fri Mar 07, 2008 7:54 pm

Post by PhoneLobster »

FrankTrollman wrote:Whup-de-fucking-do.
Whup de do indeed, because that alone, even if we never ever surpass it, is HUGE. And indeed a lot more huge than things look now (because now even that outcome looks somewhat unlikely).

The real-life applications, and the raw scientific and even philosophical impact, of that level of AI are pretty damn fantastical. The fact is that an AI barely smart enough to trust to take your (cheap and not well loved) pet dog for a walk would be mind-boggling in its potential impact on society.

People don't realize the amazing world-changing power of AI unless they spiral into wild fantasies of inventing some sort of artificial Teenage Super God that hates its parents for no reason.

AI smart enough to hold a simple coherent conversation and exchange real information reliably won't conquer the world, but it is in itself a REALLY big deal for the world. If we can do it.

That's the interesting stuff, that's the stuff we might manage to scrape together. I, for one, want to know when, if ever, I will be able to have a chat with my computer. And I don't for a second believe it will kill me with spontaneous LASER VISION from INSIDE ITS MULTIVERSE-SIZED MIND MIND mind mind mind mind...
Phonelobster's Self Proclaimed Greatest Hits Collection : (no really, they are awesome)
Pulsewidth
Apprentice
Posts: 81
Joined: Thu Jan 21, 2010 8:54 am

Post by Pulsewidth »

cthulhu wrote:It's not even remotely a risk. We've had evolved algorithms for fucking ages, and yet no-one can do anything useful.
Nuclear weapons aren't even remotely a risk. We've had piles of fissionables for fucking ages, and yet it's barely even getting warm.
PhoneLobster wrote:some sort of artificial Teenage Super God that hates its parents for no reason.
If it hates us then we've almost won. Hate is something we can understand.
PhoneLobster wrote: AI smart enough to hold a simple coherent conversation and exchange real information reliably won't conquer the world but it is in itself a REALLY big deal for the world. If we can do it.
Even somebody of average intelligence stands a good chance of building an AI smarter than themselves if they have millions of years (subjective time) to think about it and never get tired or distracted.
cthulhu
Duke
Posts: 2162
Joined: Fri Mar 07, 2008 7:54 pm

Post by cthulhu »

Pulsewidth wrote:
cthulhu wrote:It's not even remotely a risk. We've had evolved algorithms for fucking ages, and yet no-one can do anything useful.
Nuclear weapons aren't even remotely a risk. We've had piles of fissionables for fucking ages, and yet it's barely even getting warm.
I get what you are driving at, but this is amazingly stupid. Tell that to the populations of a couple of Japanese cities?

In summary, there is some evidence to support the hypothesis 'nukes are dangerous' but there is zero evidence to support the hypothesis 'without a revolutionary breakthrough in computing capability that includes new hardware, a runaway strong AI could be created.'
Last edited by cthulhu on Mon Oct 11, 2010 12:19 pm, edited 1 time in total.
Juton
Duke
Posts: 1415
Joined: Mon Jan 04, 2010 3:08 pm
Location: Ontario, Canada

Post by Juton »

Pulsewidth wrote: Even somebody of average intelligence stands a good chance of building an AI smarter than themselves if they have millions of years (subjective time) to think about it and never get tired or distracted.
So in response to my previous question, no you are obviously not in the field of AI research.

I think ultimately the development of strong AI will be like holding up a mirror to its developers and maybe the entire human race in general. If we are a bunch of frightened idiots like Pulsewidth then why shouldn't an AI wipe us off the face of the earth, as impossible as that may be? If we can convey ourselves as intelligent, reasoning beings who refuse to fear the future, then despite our flaws why would an AI want to destroy us, just so they could be alone? That scenario reveals a paucity in your imagination, not a flaw in our science.
Manxome
Knight-Baron
Posts: 977
Joined: Fri Mar 07, 2008 7:54 pm

Post by Manxome »

Pulsewidth wrote:Even somebody of average intelligence stands a good chance of building an AI smarter than themselves if they have millions of years (subjective time) to think about it and never get tired or distracted.
Wait, so your argument is seriously that, in 2030 or 2050, if we succeed in making an artificial child's brain, and someone chooses to have it devote 100% of its thinking power to improving itself, that it might bootstrap itself up to adolescent intelligence in a few million years?

Or are you making some wacky assumption that if a computer can do anything at all, it must be able to do it millions of times faster than a human being?

All this assuming that we even grant your wild conjecture that millions of years is sufficient, justified by nothing other than your personal nightmares.


Honestly, I think the people telling you that you're crazy are being overly generous. Even if we did somehow create a godlike AI, it doesn't automagically develop some horror-movie super-science powers. It's still got to interact with the physical world like anything else in order to do anything. "You are made out of atoms which it can use for something else" doesn't actually make any sense as an argument unless it is actually capable of turning the atoms forming a human's body into something it wants (more efficiently than it can some other materials ready to hand).

I mean, sure, it's spectacularly unlikely (possibly totally impossible) that an AI of godlike intelligence will exist in any timeframe I could bring myself to care about - but even if it did, that's just step one of your doomsday scenario. Storytellers jump straight from there to the aftermath because they have no freaking clue how you get from one to the other.
Orion
Prince
Posts: 3756
Joined: Fri Mar 07, 2008 7:54 pm

Post by Orion »

Yudkowsky seems to be a nutter, but his writing is extraordinarily compelling, so I can't blame Pulse overmuch. When Yudkowsky sticks to philosophy and rhetoric, he can be truly great:

http://yudkowsky.net/rational/the-simple-truth
Manxome
Knight-Baron
Posts: 977
Joined: Fri Mar 07, 2008 7:54 pm

Post by Manxome »

Actually, looking at the AI risk paper, Yudkowsky seems a lot more moderate than Pulsewidth. He doesn't say that we all need to be paranoid because godlike AI is around the corner, he says that some researchers somewhere ought to start working on the problem of making AI "friendly" because there's a nontrivial chance that we will eventually need that ability, and it will probably take a long time to develop.

He's extremely conservative from a risk perspective, trying to make sure every contingency is covered, but that's kind of the point of the paper and he repeatedly emphasizes that that's what he's doing. I don't agree with all of his arguments, but he's probably still within the realm of rational debate.
Last edited by Manxome on Wed Oct 13, 2010 1:03 am, edited 1 time in total.
SunTzuWarmaster
Knight-Baron
Posts: 948
Joined: Fri Mar 07, 2008 7:54 pm

Post by SunTzuWarmaster »

Whoa, that... got a little out of hand...

First, let me clarify: the field of AI is alive and well, with a reasonable amount of growth, experimentation, and discovery. The subfield of cognitive modeling (making AI behave human-ish) hasn't done much other than bump along.

Recursively improving systems have been built for quite some time. Even going back to the relatively early days we have Artificial Neural Networks trained through Backpropagation. Woo! Genetic Algorithms even fit the definition. No worries about them taking over. Reading your checks to figure out who wrote them, maybe, but not enslaving humanity.
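To make the point concrete, here is a minimal toy Genetic Algorithm sketch (a hypothetical illustration, not any poster's actual code): it evolves bit strings toward an all-ones target via selection, crossover, and mutation. Note that it "improves" only its candidate solutions against a fixed fitness function; it never touches its own code, which is rather the point.

```python
import random

TARGET_LEN = 20   # genome length
POP_SIZE = 30     # candidates per generation
GENERATIONS = 60

def fitness(genome):
    """Count of 1-bits; maximal when the genome is all ones."""
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit independently with probability `rate`.
    return [bit ^ (random.random() < rate) for bit in genome]

def crossover(a, b):
    # Single-point crossover: prefix of one parent, suffix of the other.
    cut = random.randrange(1, TARGET_LEN)
    return a[:cut] + b[cut:]

def evolve():
    random.seed(0)  # deterministic run for illustration
    pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP_SIZE // 2]  # elitist truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Sixty generations of this will reliably push the best genome close to all ones, and yet nothing "runs away": the search space, the objective, and the operators are all fixed by the programmer.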

I think that it is well accepted that Pulsewidth is a troll. No need to continue in this vein.
Pulsewidth
Apprentice
Posts: 81
Joined: Thu Jan 21, 2010 8:54 am

Post by Pulsewidth »

Juton wrote: I think ultimately the development of strong AI will be like holding up a mirror to its developers and maybe the entire human race in general. If we are a bunch of frightened idiots like Pulsewidth then why shouldn't an AI wipe us off the face of the earth, as impossible as that may be? If we can convey ourselves as intelligent, reasoning beings who refuse to fear the future, then despite our flaws why would an AI want to destroy us, just so they could be alone? That scenario reveals a paucity in your imagination, not a flaw in our science.
I used to believe something like this. It's understandable, because you're thinking about something truly alien. If you use intuition you're going to get it wrong. Your post, as well as most posts in this thread, clearly shows you are anthropomorphizing strong AI.

I've been unfairly judged a troll, so there's no real point continuing.
CatharzGodfoot
King
Posts: 5668
Joined: Fri Mar 07, 2008 7:54 pm
Location: North Carolina

Post by CatharzGodfoot »

Orion wrote:When Yudkowsky sticks to philosophy and rhetoric, he can be truly great:

http://yudkowsky.net/rational/the-simple-truth
I enjoyed that.
The law in its majestic equality forbids the rich as well as the poor from stealing bread, begging and sleeping under bridges.
-Anatole France

Mount Flamethrower on rear
Drive in reverse
Win Game.

-Josh Kablack

Juton
Duke
Posts: 1415
Joined: Mon Jan 04, 2010 3:08 pm
Location: Ontario, Canada

Post by Juton »

Pulsewidth wrote: I've been unfairly judged a troll, so there's no real point continuing.
Fixed that for you.