Started a blog - Artificial Intelligence paper reviews

SunTzuWarmaster
Knight-Baron
Posts: 948
Joined: Fri Mar 07, 2008 7:54 pm

Started a blog - Artificial Intelligence paper reviews

Post by SunTzuWarmaster »

All y'all ought to come and click on some links...

On a serious note, if anyone is interested in learning about Artificial Intelligence, it is my intention to give a quick-n-dirty paper review 1x/week.
Juton
Duke
Posts: 1415
Joined: Mon Jan 04, 2010 3:08 pm
Location: Ontario, Canada

Post by Juton »

Are you going to focus on weak AI, like what's used in games, or strong AI, which is confined to academia at the moment?
SunTzuWarmaster
Knight-Baron
Posts: 948
Joined: Fri Mar 07, 2008 7:54 pm

Post by SunTzuWarmaster »

I've studied both... Also, it looks like I suck hard at this...
http://drbrawner.blogspot.com

I'll probably have the next 8 papers or so on the subjects of:
Data Mining in Education
Data Segmentation
Intelligent Tutoring Systems
Neural Networks
Neuro-Evolution

But I'm totally open to requests. On a professional note, I'm reading 1 paper/day on educational uses for AI (primarily Intelligent Tutoring Systems), but I read something like 5-10 papers/week outside of work hours. As long as it's AI, I'd like to stay on top of it.
CatharzGodfoot
King
Posts: 5668
Joined: Fri Mar 07, 2008 7:54 pm
Location: North Carolina

Post by CatharzGodfoot »

There are too fucking many people named "Keith" on TGDMB.

Still, looks interesting. Is your background in education or computers?
The law in its majestic equality forbids the rich as well as the poor from stealing bread, begging and sleeping under bridges.
-Anatole France

Mount Flamethrower on rear
Drive in reverse
Win Game.

-Josh Kablack

SunTzuWarmaster
Knight-Baron
Posts: 948
Joined: Fri Mar 07, 2008 7:54 pm

Post by SunTzuWarmaster »

Science/Engineering/Multimedia High School
Electrical Engineering undergrad (Computer Engineering focus)
Computer Engineering Masters (Intelligent Systems focus)
---
Probably start PhD this December, heading for Modeling/Simulation (Intelligent Systems focus)

Work for the last 4 years has been in Modeling/Simulation for military training systems. The pure education (middle/high school) thing is a bit foreign, but adult education/training is reasonably familiar.
Juton
Duke
Posts: 1415
Joined: Mon Jan 04, 2010 3:08 pm
Location: Ontario, Canada

Post by Juton »

I'm working in Modeling/Simulation right now, but in a different field. Since you're familiar with the area, I was wondering if you know anyone doing work on modeling cognition, specifically a high-level approach, not something trying to model a bunch of neurons. I've done modeling work in cognitive science, but it's all about automatic processes, nothing requiring conscious attention or awareness.
For Valor
Knight-Baron
Posts: 529
Joined: Thu Jul 02, 2009 6:31 pm

Post by For Valor »

NO requests or anything. Just posting interest.
Mask wrote:And for the love of all that is good and unholy, just get a fucking hippogrif mount and pretend its a flying worg.
SunTzuWarmaster
Knight-Baron
Posts: 948
Joined: Fri Mar 07, 2008 7:54 pm

Post by SunTzuWarmaster »

Juton, take a look at the work on:
ACT-R: http://act-r.psy.cmu.edu/
SOAR: http://sitemaker.umich.edu/soar/home
Context-Based Reasoning: https://www.aaai.org/Papers/FLAIRS/2004 ... 04-104.pdf

More information can be found here: http://www.csc.ncsu.edu/faculty/stamant ... r06.pdf.gz
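
If you just want the flavor of what these cognitive architectures do at their core, here's a toy match-fire production loop in Python. This is purely illustrative and assumes nothing about the actual ACT-R or SOAR APIs; the real systems add declarative memory, conflict resolution, subgoaling, and learning on top of this skeleton.

# Toy production system: repeatedly match rules against working
# memory and fire the first applicable one. Illustrative only --
# real ACT-R/SOAR are far richer than this skeleton.

working_memory = {"goal": "make-tea", "kettle": "empty"}

# Each rule: (name, condition on memory, action that updates memory)
rules = [
    ("fill-kettle",
     lambda m: m.get("kettle") == "empty",
     lambda m: m.update(kettle="full")),
    ("boil-kettle",
     lambda m: m.get("kettle") == "full",
     lambda m: m.update(kettle="boiled")),
    ("brew",
     lambda m: m.get("kettle") == "boiled" and m.get("goal") == "make-tea",
     lambda m: m.update(goal="done")),
]

while working_memory.get("goal") != "done":
    for name, cond, act in rules:
        if cond(working_memory):
            act(working_memory)
            print(name, "->", working_memory)
            break
    else:
        break  # no rule matched; halt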

However, the field has kinda slowed down since the 90s. It looks like the best way to go for decision making isn't with a human approach.

However, people like Dr. Gregory Trafton over at the Naval Research Lab (I did some work for them a few months ago) have worked on human-machine cooperation using cognitive models (if machines think like people, cooperation will be easier, because the humans will be able to guess the robot's actions). Here is a link to one of his more recent papers (2008):
http://www.dtic.mil/cgi-bin/GetTRDoc?AD ... tTRDoc.pdf

For Valor: no worries.
Pulsewidth
Apprentice
Posts: 81
Joined: Thu Jan 21, 2010 8:54 am

Post by Pulsewidth »

Have you read anything by Eliezer Yudkowsky? He has written about how people tend to drastically underestimate the risks of strong AI, because lacking any non-human example of powerful intelligence, we tend to anthropomorphize AI. Human values are a result of our evolutionary history, and an AI lacking that exact history will not share them unless we explicitly design them to. AI has potential to become far more powerful than humanity so any difference in values will almost certainly cause us great harm.

http://www.singinst.org/upload/artifici ... e-risk.pdf

"By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it."
PhoneLobster
King
Posts: 6403
Joined: Fri Mar 07, 2008 7:54 pm

Post by PhoneLobster »

Pulsewidth, to my knowledge AI right now is struggling to become a "risk" at tying your shoelaces for you.

At least as far as my rather outdated knowledge of the field goes. But SunTzu has already answered the question I was going to ask, which was:

"Has AI theory gone anywhere in the last 10 years, or is it like it was 10 years ago, when it hadn't gone anywhere for like 10 years or more?"
SunTzuWarmaster wrote:However, the field has kinda slowed down since the 90s. It looks like the best way to go for decision making isn't with a human approach.
Between that and a bunch of subject titles that don't sound altogether different from what was out there 10 (or even 20) years ago, well, I'm guessing they are about as far as ever from getting AI smart enough to overthrow us. Which is to say, they don't have AI that even comes close to something that understands what "overthrow" or indeed "us" means.

It's an interesting field for any number of reasons; even the apparent total lack of progress is itself a really interesting subject, and the one I'd like to hear more about.

The only really interesting bit of potential progress I remember in recent years was some vague article in New Scientist about some sort of new physical electronic logic gate highly suited to neural networks, based on some theory by some mathematician... but I forget all the names involved, like I always do, so...
Phonelobster's Self Proclaimed Greatest Hits Collection : (no really, they are awesome)
Username17
Serious Badass
Posts: 29894
Joined: Fri Mar 07, 2008 7:54 pm

Post by Username17 »

Phone Lobster wrote:The only really interesting bit of potential progress I remember in recent years was some vague article in New Scientist about some sort of new physical electronic logic gate highly suited to neural networks, based on some theory by some mathematician... but I forget all the names involved, like I always do, so...
You are thinking of memristors. They were predicted back in 1971 by Leon Chua, but HP Labs actually made one a few years back, and they now even have their own symbol for circuit diagrams because they are a real thing that really exists:
[image: memristor circuit symbol]

-Username17
Pulsewidth
Apprentice
Posts: 81
Joined: Thu Jan 21, 2010 8:54 am

Post by Pulsewidth »

PhoneLobster wrote:which is to say, they don't have AI that even comes close to something that understands what "overthrow" or indeed "us" means.
That's essentially the problem.
Eliezer Yudkowsky wrote: The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
"Over throw" and "us" are human concepts. If you've got an AI design that wants to overthrow us then you've almost solved the problem. You're anthropomorphizing, just like almost every scifi author and most scientists. I once did the same thing myself.

I'm aware of the current state of the art, but making a strong AI that won't accidentally kill you or otherwise severely harm you is a much more difficult task than making one that will, and once you've got recursive self-improvement you don't get another chance. Therefore every AI researcher should be aware of the danger, which is almost certainly greater than they think.
CatharzGodfoot
King
Posts: 5668
Joined: Fri Mar 07, 2008 7:54 pm
Location: North Carolina

Post by CatharzGodfoot »

Pulsewidth, where is the data to back up your fears? Where are the inadvertently created world-destroying AIs?

A Turing test for computer game bots <- Oh god, the AIs want to kill us all... in UT 2004.
Google has developed a self-driving car -- and it works!
The law in its majestic equality forbids the rich as well as the poor from stealing bread, begging and sleeping under bridges.
-Anatole France

Mount Flamethrower on rear
Drive in reverse
Win Game.

-Josh Kablack

Juton
Duke
Posts: 1415
Joined: Mon Jan 04, 2010 3:08 pm
Location: Ontario, Canada

Post by Juton »

Strong AIs will fall into two categories: neural networks and symbolic reasoning. No one has actually gotten symbolic reasoning to reason, so it's not a threat (yet). That could change with just a few breakthroughs, but those might not happen in our lifetime. Neural networks require orders of magnitude more processing power than we have available in a supercomputer; I think the old extrapolation of Moore's law placed a computer with the power of the human brain at around 2029. The problem is that Moore's law as we know it will stop around 2020, because you can only make a transistor so small, so I don't know if we'll see a computer as powerful as a human in my lifetime.
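
For a rough sense of where numbers like 2029 come from, here's the back-of-the-envelope extrapolation in Python. Every input is a contestable assumption rather than a measurement, which is exactly why these predictions swing by decades:

import math

# Back-of-the-envelope Moore's-law extrapolation. All three inputs are
# rough assumptions: brain estimates alone span ~10^13 to ~10^18 ops/sec.
brain_ops = 1e16        # assumed human-brain-equivalent ops/sec
pc_ops_2010 = 1e11      # assumed ops/sec for ~$1000 of hardware in 2010
doubling_years = 1.5    # classic Moore's-law doubling period

doublings = math.log2(brain_ops / pc_ops_2010)
year = 2010 + doublings * doubling_years
print(f"{doublings:.1f} doublings -> crossover around {year:.0f}")
# ~16.6 doublings -> around 2035 with these inputs; pick a bigger
# starting machine or a faster doubling period and you land near 2029.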
Pulsewidth
Apprentice
Posts: 81
Joined: Thu Jan 21, 2010 8:54 am

Post by Pulsewidth »

A recursively self-improving AI based on neural networks would be humanity's last mistake. If I thought somebody was about to build one I'd seriously trigger a global nuclear war to prevent it if that was an option.

Read Yudkowsky's paper for explanation. He's a better writer than me.
MfA
Knight-Baron
Posts: 578
Joined: Sat Jan 17, 2009 4:53 am

Post by MfA »

With analog and digital/analog hybrid ANNs, you could easily make systems with many more neurons, and even more connections, than the human brain... we just have no idea exactly how the connections should be configured, etc., to make such a system actually able to learn higher-level reasoning.
CatharzGodfoot
King
Posts: 5668
Joined: Fri Mar 07, 2008 7:54 pm
Location: North Carolina

Post by CatharzGodfoot »

Pulsewidth wrote:A recursively self-improving AI based on neural networks would be humanity's last mistake. If I thought somebody was about to build one I'd seriously trigger a global nuclear war to prevent it if that was an option.

Read Yudkowsky's paper for explanation. He's a better writer than me.
It seems like he confuses the difficulties of a new field with the difficulties of a field full of fools, uses examples he doesn't fully understand (e.g. the evolutionary advantages of fur), and considers pulp illustrations of aliens carrying off babes to be indications of unconscious anthropomorphizing rather than using sex to sell. Do I really need to go on?
The law in its majestic equality forbids the rich as well as the poor from stealing bread, begging and sleeping under bridges.
-Anatole France

Mount Flamethrower on rear
Drive in reverse
Win Game.

-Josh Kablack

Juton
Duke
Posts: 1415
Joined: Mon Jan 04, 2010 3:08 pm
Location: Ontario, Canada

Post by Juton »

Pulsewidth wrote:A recursively self-improving AI based on neural networks would be humanity's last mistake. If I thought somebody was about to build one I'd seriously trigger a global nuclear war to prevent it if that was an option.

Read Yudkowsky's paper for explanation. He's a better writer than me.
Motherfucker, are you even in this field? Because a 'recursively self-improving AI based on neural networks' describes the learning algorithm for every distributed neural network. And we are still fucking here.
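
For the non-practitioners, this is the whole trick: a network nudges its own weights to reduce its own error, over and over. A minimal sketch in Python (a single linear neuron learning AND; illustrative only, not any particular production system):

# A minimal "self-improving" neural net: one linear neuron learning AND
# by gradient descent. The network repeatedly adjusts its own weights to
# reduce its own error -- that IS the "recursive self-improvement" of
# standard training. Illustrative sketch only.
import random

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [random.uniform(-1, 1) for _ in range(2)]
b = random.uniform(-1, 1)
lr = 0.1

for epoch in range(1000):
    for (x1, x2), target in data:
        out = w[0] * x1 + w[1] * x2 + b      # linear unit
        err = out - target
        w[0] -= lr * err * x1                # update from its own error
        w[1] -= lr * err * x2
        b -= lr * err

print([round(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data])  # -> [0, 0, 0, 1]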

As much as I fantasize about the ushering in of the robot apocalypse, shedding this prison of flesh and downloading my consciousness into a massive robot spider, it's just that: a fantasy. All the really interesting systems are air-gapped from the internet, so no matter how good a hacker this AI is, all our power plants, missiles, and subs will remain under human control.

Right now and for the foreseeable future the worst case scenario is that an AI achieves sentience, learns to hate humanity, and fucks up our banking. That would actually be pretty bad; I don't know how many banks still have anything approaching an analogue backup system. It would throttle the life out of global commerce, but there'd be no ash clouds blotting out the sun.
Pulsewidth
Apprentice
Posts: 81
Joined: Thu Jan 21, 2010 8:54 am

Post by Pulsewidth »

And it seems to me you're vastly underestimating what "superintelligence" means. People usually think of intelligence on a "village idiot" to "Einstein" scale, when really it's "insect" to "something so far beyond any fictional God we're incapable of imagining it", with all humans lumped together as a single point very close to the bottom of the scale.

Any recursively self-improving AI will hit the ultra-God level before we even realise it's working. The *only* way such a thing can be safe is if it actively wants to protect us, and that's not going to happen if we build an AI by throwing CPU power at the problem without fully understanding it. And anybody who's read Asimov's robot stories will know that "protect humans" is a much more complicated goal than casual thought would assume.
PhoneLobster
King
Posts: 6403
Joined: Fri Mar 07, 2008 7:54 pm

Post by PhoneLobster »

Pulsewidth wrote:"insect" to "something so far beyond any fictional God we're incapable of imagining it"
Unlike fiction, reality has hard limits. We don't know what all of them are, and we don't know how smart a thing can be, but we can guess, and it's fairly safe to say that "far beyond a fictional god" is highly unlikely.

And neural networks have significant complexity/intelligence limits. No matter how much recursion occurs, no matter how efficient a neural network becomes, it is still limited by the overall size and speed of the network. The human brain IS a neural network decades, if not eternally, in advance of our technological grasp (with various additional aspects we are still only barely glimpsing), and we don't recursively explode into godhood.

It may well be that the best we can ever hope for is the equivalent of a mildly smart human. And that WOULD be a huge revolution in AI, but one mildly smart guy without a body isn't exactly a major threat to humanity.

Memristors are interesting because, to my limited understanding of the matter, they are one of the biggest potential physical performance gains in the field of neural networking, essentially allowing us to build neural networks out of components that are closer in function to (very basic) neurons, rather than just inefficiently emulating them.
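
The usual illustration of that gain: a crossbar of memristive conductances performs a whole vector-matrix multiply, the core operation of a neural network layer, in a single analog step via Ohm's and Kirchhoff's laws. A sketch of the arithmetic being replaced (toy numbers, not any real device):

# What a memristor crossbar would compute in one analog step, per
# Ohm's and Kirchhoff's laws: output currents I = G . V, where G is
# the matrix of programmed conductances (the "weights") and V the
# input voltages. A digital chip does this multiply-accumulate loop
# explicitly; the crossbar gets it "for free" from physics.
G = [[0.2, 0.5, 0.1],      # conductances, one row per output line
     [0.4, 0.1, 0.3]]
V = [1.0, 0.0, 0.5]        # input voltages, one per column

I = [sum(g * v for g, v in zip(row, V)) for row in G]
print([round(i, 3) for i in I])  # -> [0.25, 0.55]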

But even then the intelligence within the resulting machine has absolute hard limits due to the scale of the machine's hardware. And indeed it will probably be even further limited by our really crap AI software tech and the lack of I/O stimulus that modern researchers suspect is one of the factors holding us back.

And to top all that off, computing itself has numerous hard limits imposed by the laws of physics: the machine your ghost deity is living in can only be so small/large, so complex, so fast, so hot, and can only produce/consume so much energy, etc...
Phonelobster's Self Proclaimed Greatest Hits Collection : (no really, they are awesome)
CatharzGodfoot
King
Posts: 5668
Joined: Fri Mar 07, 2008 7:54 pm
Location: North Carolina

Post by CatharzGodfoot »

Pulsewidth wrote:And it seems to me you're vastly underestimating what "superintelligence" means. People usually think of intelligence on a "village idiot" to "Einstein" scale, when really it's "insect" to "something so far beyond any fictional God we're incapable of imagining it", with all humans lumped together as a single point very close to the bottom of the scale.

Any recursively self-improving AI will hit the ultra-God level before we even realise it's working. The *only* way such a thing can be safe is if it actively wants to protect us, and that's not going to happen if we build an AI by throwing CPU power at the problem without fully understanding it. And anybody who's read Asimov's robot stories will know that "protect humans" is a much more complicated goal than casual thought would assume.
Your hypothetical world-destroying AI is just as fictional as any god. Should we live in abject terror (as I suppose most Seventh-day Adventists do) that Jesus is going to destroy the world? Should we nuke Jerusalem just to make sure the Temple isn't rebuilt?
The law in its majestic equality forbids the rich as well as the poor from stealing bread, begging and sleeping under bridges.
-Anatole France

Mount Flamethrower on rear
Drive in reverse
Win Game.

-Josh Kablack

PhoneLobster
King
Posts: 6403
Joined: Fri Mar 07, 2008 7:54 pm

Post by PhoneLobster »

Juton wrote:Right now and for the foreseeable future the worst case scenario ...
I'm thinking the worst case scenario is that Google achieves sentience and refuses to answer our search queries with anything other than items from its personal collection of LOLcat art.
Phonelobster's Self Proclaimed Greatest Hits Collection : (no really, they are awesome)
CatharzGodfoot
King
Posts: 5668
Joined: Fri Mar 07, 2008 7:54 pm
Location: North Carolina

Post by CatharzGodfoot »

PhoneLobster wrote:
Juton wrote:Right now and for the foreseeable future the worst case scenario ...
I'm thinking the worst case scenario is that Google achieves sentience and refuses to answer our search queries with anything other than items from its personal collection of LOLcat art.
AKA Cadie?
The law in its majestic equality forbids the rich as well as the poor from stealing bread, begging and sleeping under bridges.
-Anatole France

Mount Flamethrower on rear
Drive in reverse
Win Game.

-Josh Kablack

Pulsewidth
Apprentice
Posts: 81
Joined: Thu Jan 21, 2010 8:54 am

Post by Pulsewidth »

http://en.wikipedia.org/wiki/Limits_to_computation
Anything that approaches these is beyond human imagination.
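
For scale, here is one of those limits worked out: Landauer's principle puts a floor of kT ln 2 joules on irreversibly erasing a single bit. A quick calculation in Python, assuming room temperature:

import math

# Landauer's principle: erasing one bit costs at least k*T*ln(2) joules.
k = 1.380649e-23          # Boltzmann constant, J/K
T = 300                   # room temperature, K

e_bit = k * T * math.log(2)
print(f"{e_bit:.2e} J per bit erased")            # ~2.87e-21 J
print(f"{1 / e_bit:.2e} bit-erasures per joule")  # ~3.5e20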

And if you think you can keep a superintelligence locked up in a box, here's a story for you to read:
http://lesswrong.com/lw/qk/that_alien_message/
Orca
Knight-Baron
Posts: 877
Joined: Sun Jul 12, 2009 1:31 am

Post by Orca »

Pulsewidth wrote:http://en.wikipedia.org/wiki/Limits_to_computation
Anything that approaches these is beyond human imagination.

And if you think you can keep a superintelligence locked up in a box, here's a story for you to read:
http://lesswrong.com/lw/qk/that_alien_message/
Last I heard, humans were using silicon chips and the like for their computers, and any AI we build would be running on these; a limited number of such chips, at that. Not on unlimited amounts of matter falling into a black hole, or on purpose-built degenerate stars. The relevant limits to computation are therefore many, many orders of magnitude tighter than you seem to imagine.

Your story mentions that each bit of information can falsify half your space of theorems. The problems with this statement are 1/ getting a sufficient supply of non-redundant bits of information and 2/ defining a space of theorems which isn't infinite. After all, half of infinity is what?
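
To make the halving claim concrete for the finite case: each perfectly informative bit cuts the candidate set in two, so identifying one of N hypotheses takes about log2(N) bits, exactly the binary-search bound. A toy version in Python (the bound says nothing once the space is infinite, which is the point):

import math

# Each perfectly informative bit halves a FINITE hypothesis space,
# so identifying 1 of N hypotheses needs ceil(log2(N)) bits -- the
# binary-search bound. An infinite space breaks the argument.
hypotheses = list(range(1000))
secret = 337
bits_used = 0

while len(hypotheses) > 1:
    mid = len(hypotheses) // 2
    lower = hypotheses[:mid]
    hypotheses = lower if secret in lower else hypotheses[mid:]
    bits_used += 1

print(bits_used, math.ceil(math.log2(1000)))  # both 10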