Artificial Intelligence

55 posts

Flag Post

What would it take for you to accept a non-biological system is intelligent?

Some background reading:

Wikipedia (of course!)
Turing Test
Computer beats chess world-champion Kasparov
plato.stanford.edu

 
Flag Post

Sub-questions that might need to be answered are:

What do we mean by “intelligent”?
How do we find out if something meets those requirements?
What distinguishes a biological system from a non-biological one? (Because some suggest building artificial neural networks to emulate how animal brains work.)

 
Flag Post

There are a number of cognitive measures for determining intelligence. In general, anything that solves problems can be seen as exhibiting a form of intelligence, but assuming you are aiming at a human level of intelligence, I think it is possible, though we are a ways off.

How do we find out if something meets those requirements?

Mainly through tests of reasoning. It is all about the ability to solve problems.

What distinguishes a biological system from a non-biological one?

Ideally, there is no difference. Right now the only distinction lies in the limitations of our current systems.

 
Flag Post

I saw a program recently (on BBC, featuring James May, if anyone else saw it and knows the title) with a robot that was shown a chair and told it was a chair. It was then shown a stool, recognised it as a form of chair though not technically a chair, and answered “yes” to “Is this a chair?” It was then shown a table and answered that it was not a chair, because of the height of the horizontal surface from the ground. It was argued that these are reasoning skills, which in a way they are, but it’s still a big list of IF statements.
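In spirit, that kind of recognition really is just a handful of IF statements over a few measurements. A rough sketch in Python (the attributes, thresholds and categories here are made up for illustration, not taken from the actual robot):

```python
# Hypothetical rule-based "chair" recognition: a list of IF statements over
# invented attributes (seat_height_cm, has_back, has_flat_top).

def classify(obj: dict) -> str:
    """Classify an object as 'chair', 'chair-like', or 'not a chair'."""
    seat_height = obj.get("seat_height_cm", 0)
    has_back = obj.get("has_back", False)
    has_flat_top = obj.get("has_flat_top", False)

    if not has_flat_top:
        return "not a chair"    # nothing to sit on
    if seat_height > 70:
        return "not a chair"    # horizontal surface too high off the ground (a table)
    if has_back and 35 <= seat_height <= 60:
        return "chair"
    if 35 <= seat_height <= 70:
        return "chair-like"     # e.g. a stool: a form of chair, not technically one
    return "not a chair"

print(classify({"seat_height_cm": 45, "has_back": True,  "has_flat_top": True}))   # chair
print(classify({"seat_height_cm": 55, "has_back": False, "has_flat_top": True}))   # chair-like
print(classify({"seat_height_cm": 75, "has_back": False, "has_flat_top": True}))   # not a chair
```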

They say intelligence is when something can make a decision for itself, solve a problem for itself. Machines can do this as much as we can. We go through all that lower education thinking “when will I ever use this in life?” but we do. Everything we learn is stored somewhere, on some level, in a library of knowledge and skills ready to be called upon and combined to solve a problem; without that basic knowledge we couldn’t solve the problem. We have new techniques and skills programmed into us every day, and these form our version of a hard drive of skills data that is accessed when a problem needs to be solved. If we could make a robot as versatile and dexterous as a human in its joints and such, then gave it total knowledge of all of somebody’s experiences, would it then be able to solve problems as well as that person? Or has God kept the source code for how we apply those memories and experiences to our future problems?

 
Flag Post

It was argued that these are reasoning skills, which in a way they are, but it’s still a big list of IF statements.

It isn’t terribly different from how our minds work though. I think you explained it well enough in your subsequent paragraph.

 
Flag Post

The difference in intelligence would be the ability to question. A computer system, so far as we have been able to develop, will not naturally question the input you give it. A human being may. Such a non-biological system, for me to define as intelligent, would have to be able to learn, grow and question from its experiences and the input given to it.

 
Flag Post
Originally posted by Eyedol:

The difference in intelligence would be the ability to question. A computer system, so far as we have been able to develop, will not naturally question the input you give it. A human being may.

Would a dog?

 
Flag Post
Originally posted by SaintAjora:
Originally posted by Eyedol:

The difference in intelligence would be the ability to question. A computer system, so far as we have been able to develop, will not naturally question the input you give it. A human being may.

Would a dog?

I would consider a dog an intelligent biological system. I did not mean to limit it to “human” only.

 
Flag Post
Originally posted by Eyedol:

The difference in intelligence would be the ability to question. A computer system, so far as we have been able to develop, will not naturally question the input you give it. A human being may. Such a non-biological system, for me to define as intelligent, would have to be able to learn, grow and question from its experiences and the input given to it.

AI systems can indeed question inputs, though they’re more likely simply to discard them. Consider a medical expert system diagnosing patients. It might learn by being given lists of symptoms and being told what a doctor’s diagnosis was. Eventually, given enough diagnoses, the AI system would be able to gauge how well another diagnosis fit the network it had generated and decide whether or not to incorporate it into its records.
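As a rough illustration of that learning loop, here is a sketch using simple symptom/diagnosis co-occurrence counts rather than a real Bayesian network; the condition names and the 0.5 acceptance threshold are invented for the example:

```python
from collections import defaultdict

# Toy knowledge base: counts of how often each symptom appears with each diagnosis.
# This stands in for the "network" the expert system builds from doctors' diagnoses.
counts = defaultdict(lambda: defaultdict(int))
totals = defaultdict(int)

def learn(symptoms, diagnosis):
    """Incorporate a doctor-confirmed (symptoms, diagnosis) pair into the records."""
    totals[diagnosis] += 1
    for s in symptoms:
        counts[diagnosis][s] += 1

def fit(symptoms, diagnosis):
    """Gauge how well a proposed diagnosis fits what the system has already seen (0..1)."""
    if totals[diagnosis] == 0:
        return 0.0
    seen = sum(counts[diagnosis][s] / totals[diagnosis] for s in symptoms)
    return seen / len(symptoms)

def maybe_incorporate(symptoms, diagnosis, threshold=0.5):
    """Accept the new case if it fits well enough; otherwise flag (or discard) it."""
    if totals[diagnosis] == 0 or fit(symptoms, diagnosis) >= threshold:
        learn(symptoms, diagnosis)
        return "incorporated"
    return "flagged for review"

learn(["fever", "sore throat", "fatigue"], "mononucleosis")
print(maybe_incorporate(["fever", "fatigue"], "mononucleosis"))   # incorporated
print(maybe_incorporate(["rash"], "mononucleosis"))               # flagged for review
```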

As for “naturally”… does that qualifier really belong here?

 
Flag Post

But if it stumbled upon a patient for whom no input yielded a match to any diagnosis, would it be curious as to why that was and seek to find an answer? I think you hit upon the difference when you said it would most likely discard the anomaly. Can we teach an AI system to be “curious”?

 
Flag Post

In such a situation it would fail and wait for a real doctor’s diagnosis to supplement its network. We could make it “ask” a doctor, which could resemble curiosity. In fact, Bayesian networks yield probabilities for their outputs, so we could even get it to say “I’m 73% sure you have mononucleosis, but I’d like a second opinion” and go off and ask a doctor anyway.
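A minimal sketch of that “second opinion” behaviour, assuming the diagnostic model already produces a probability for each condition; the 80% cut-off and the ask_doctor helper are invented stand-ins:

```python
# If the model's confidence falls below a cut-off, report the best guess
# but defer to a human doctor anyway.

CONFIDENCE_CUTOFF = 0.80

def ask_doctor(symptoms):
    """Placeholder for handing the case over to a human doctor."""
    return "referred to doctor"

def diagnose(probabilities, symptoms):
    """probabilities: dict mapping condition -> model's probability for this patient."""
    best, p = max(probabilities.items(), key=lambda item: item[1])
    if p >= CONFIDENCE_CUTOFF:
        return f"Diagnosis: {best} ({p:.0%} confident)"
    # Not confident enough: state the best guess, then ask for a second opinion.
    second_opinion = ask_doctor(symptoms)
    return f"I'm {p:.0%} sure you have {best}, but I'd like a second opinion ({second_opinion})"

print(diagnose({"mononucleosis": 0.73, "influenza": 0.21}, ["fever", "fatigue"]))
```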

 
Flag Post

For me to accept a non-biological system as intelligent, and not simply programmed like Deep Blue was to play chess (crunching millions of iterations at a time, but still following a basic program), a system needs to have the following:
1) True reasoning, which we are seeing in neural networks these days
2) Creativity, which some nets have been programmed to have
3) Spontaneity: the ability to do things on a whim, outside of a ‘program’
4) Self-awareness: the ability to know that one exists and to identify one’s place in the world, also defined.

 
Flag Post
Originally posted by Syneil:

In such a situation it would fail and wait for a real doctor’s diagnosis to supplement its network. We could make it “ask” a doctor, which could resemble curiosity. In fact, Bayesian networks yield probabilities for their outputs, so we could even get it to say “I’m 73% sure you have mononucleosis, but I’d like a second opinion” and go off and ask a doctor anyway.

I’m still looking for it to do its own research and come up with probable theories before I would call it intelligent. I guess I ruled out dogs again.

 
Flag Post

Hmm… What kind of research are you suggesting? Web-crawlers are already prominent, for example, and a knowledge-base is precisely what the system would be building up.

 
Flag Post
Originally posted by solid2112:

For me to accept a non-biological system as intelligent, and not simply programmed like Deep Blue was to play chess (crunching millions of iterations at a time, but still following a basic program), a system needs to have the following:

1) True reasoning, which we are seeing in neural networks these days

2) Creativity, which some nets have been programmed to have

3) Spontaneity: the ability to do things on a whim, outside of a ‘program’

4) Self-awareness: the ability to know that one exists and to identify one’s place in the world, also defined.

3) Why is spontaneity indicative of intelligence?
4) How would you test for this?

 
Flag Post

Meaning, if it doesn’t find it in its database, can’t find a suitable answer in any available literature (obviously with web access), and can’t get an answer from a doctor, would it be able to reason that it has found a new disorder, or just a new set of symptoms for an old obscure disease, and proceed with an investigation based on laboratory results (kind of like they do on House, M.D.)? What if it gets symptoms that merely suggest there are multiple disorders present? Would it successfully predict this, or assume it’s something new?

What about reaction to tactile stimuli as well? You can program a machine to “feel” heat with a temperature sensor, but would it ever truly understand that excessive heat means destruction of its parts and that removal from the heat is vital to survival? Does running a program that simply tells it to move equate to the human thought of “ow, that hurts, I’d better move it or lose it”?

 
Flag Post

I saw a program recently (on BBC, featuring James May if anyone else saw it and knows the title)

James May’s 20th Century. I’m pretty sure that was it.

Lemme check

Yeah I think so.

The Disney robot was on it. And the recognising Mini one.

 
Flag Post
Originally posted by Eyedol:

Meaning, if it doesn’t find it in its database, can’t find a suitable answer in any available literature (obviously with web access), and can’t get an answer from a doctor, would it be able to reason that it has found a new disorder, or just a new set of symptoms for an old obscure disease, and proceed with an investigation based on laboratory results (kind of like they do on House, M.D.)? What if it gets symptoms that merely suggest there are multiple disorders present? Would it successfully predict this, or assume it’s something new?

What about reaction to tactile stimuli as well? You can program a machine to “feel” heat with a temperature sensor, but would it ever truly understand that excessive heat means destruction of its parts and that removal from the heat is vital to survival? Does running a program that simply tells it to move equate to the human thought of “ow, that hurts, I’d better move it or lose it”?

It could certainly suggest that it’s a new condition not in its data stores. It could even make connections based on the symptoms (think of Amazon’s “other customers that bought this book also enjoyed X, Y and Z”) and suggest possible treatments, if it did treatments too, based on other condition-treatment pairs it knew of. I suspect the ability to wander off and conduct a laboratory experiment would be asking a tad too much.
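A toy version of that “customers also bought” style of suggestion, applied to condition-treatment pairs; every condition, symptom and treatment here is invented for illustration:

```python
from collections import Counter

# For an unfamiliar set of symptoms, find known conditions that share symptoms
# with it and suggest their treatments, ranked by overlap.
known_cases = {
    "condition_A": {"symptoms": {"fever", "rash"},       "treatment": "treatment_1"},
    "condition_B": {"symptoms": {"fever", "cough"},      "treatment": "treatment_2"},
    "condition_C": {"symptoms": {"joint pain", "rash"},  "treatment": "treatment_3"},
}

def suggest_treatments(new_symptoms, top_n=2):
    overlap = Counter()
    for name, case in known_cases.items():
        shared = len(case["symptoms"] & new_symptoms)
        if shared:
            overlap[name] = shared
    return [(name, known_cases[name]["treatment"], shared)
            for name, shared in overlap.most_common(top_n)]

# Unknown presentation: no exact match, but it shares symptoms with known conditions.
print(suggest_treatments({"fever", "rash", "night sweats"}))
# [('condition_A', 'treatment_1', 2), ('condition_B', 'treatment_2', 1)]
```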

Humans react to pain a bit before they actually sense it, as the spinal cord can send messages back to muscles before passing on the signal up to the brain. Reaction to stimuli doesn’t suggest intelligence to me.

 
Flag Post

I think the main problem with this sort of thing is that humans keep constantly moving the goalposts. Eyedol’s question is a great example of this; I doubt even one in a hundred human doctors could come up with an accurate diagnosis in such an extreme scenario, so why would we expect an AI to do any better? For some reason we expect AIs to be infallible, when in fact human intelligence fails quite frequently in the same scenarios.

 
Flag Post

I don’t really agree with human comparisons, like in the Turing test, in the first place. One way in which an AI system could fail the Turing test is if it gave responses too quickly… To pass the test, the system would have to delay its response deliberately. Why should a system that mimics a human’s response-time be deemed intelligent, when one that gives the same output but much more quickly not be?

 
Flag Post
Originally posted by Syneil:
Originally posted by solid2112:

For me to accept a non-biological system as intelligent, and not simply programmed like Deep Blue was to play chess (crunching millions of iterations at a time, but still following a basic program), a system needs to have the following:

1) True reasoning, which we are seeing in neural networks these days

2) Creativity, which some nets have been programmed to have

3) Spontaneity: the ability to do things on a whim, outside of a ‘program’

4) Self-awareness: the ability to know that one exists and to identify one’s place in the world, also defined.

3) Why is spontaneity indicative of intelligence?

4) How would you test for this?

In this sense, I refer to its ability to “do things on its own.” Although not a direct indicator of intellect, it is manifested in what we consider “intelligent” animals, rather than those that we deem more instinctive, e.g. ants. Higher-order mammals are seen to play, birds court, fish court, etc. It comes down to will, choice, etc. vs. executing a program or simply following a rule set.

Admittedly, there may exist, or have the potential to exist, an intellect that doesn’t have any spontaneity. I cannot know that. But when I am asked what I would need to see, I must base my response on what I know of intellect in our world. For example, bees and ants, although highly evolved/adapted/instinctive (I don’t want to get into the creation debate, there’s another long thread on that one), do not exhibit high intellect. Same with June bugs. They can fly great and have many awesome skills, but they can’t figure out how to back up and will get stuck in a little bottle that they can’t turn around in. On the other hand, we see some things that have spontaneity but lack the other qualities I mention.

Regarding the spontaneity test… great question. Playfulness and a sense of humor are great starts, but not necessarily proofs.

 
Flag Post

I was talking about the subject of AI outside of Kong and had it put to me that “People have emotion, machines don’t.” However, I think emotion is yet another thing that’s programmed into us. A Nazi soldier (sorry I dropped the N bomb) stationed at Auschwitz wouldn’t feel any remorse or sorrow in killing hundreds of Jews because he was not programmed to feel emotion towards Jews. I hold doors as a reflex, not as a choice; it’s not out of kindness anymore, it’s behavioral programming.

Emotion and acts of kindness don’t come from the heart or the soul; they come from what you’ve been taught is right and wrong. Therefore, teach a computer that it shouldn’t kill or that it should hold doors, and it will act like a civilized person.

 
Flag Post

solid2112, I was asking how you’d test for your number 4) self-awareness.

Originally posted by dd790:

I was talking about the subject of AI outside of Kong and had it put to me that “People have emotion, machines don’t.” However, I think emotion is yet another thing that’s programmed into us. A Nazi soldier (sorry I dropped the N bomb) stationed at Auschwitz wouldn’t feel any remorse or sorrow in killing hundreds of Jews because he was not programmed to feel emotion towards Jews. I hold doors as a reflex, not as a choice; it’s not out of kindness anymore, it’s behavioral programming.

Emotion and acts of kindness don’t come from the heart or the soul; they come from what you’ve been taught is right and wrong. Therefore, teach a computer that it shouldn’t kill or that it should hold doors, and it will act like a civilized person.

Emotion is not related to intelligence – or at least, I’ve never heard it said to be. Emotional and intellectual capacity are both faculties of the mind – but I’m not asking about creating an artificial mind, merely artificial intelligence.

 
Flag Post

Syneil, sorry bout that.

To test for self-awareness, we would have to observe, over time, what the entity did and infer the result only from its actions. If the AI started to have truly original thoughts, wow, we’d have something. If it became suicidal, we might know. If it could communicate a self-worth quotient…? Great question.

 
Flag Post

Emotions are the product of faulty wiring. To claim artificial intelligence is somehow a lesser intelligence is simply absurd. The concept of decision making isn’t very difficult to understand either. However, the problem people have is understanding whether robots FEEL or not. The reality is that our feelings are just signals that we’ve labeled “bad.” I always do this, but if we look at Buddhist monks and see how they are able to experience pain entirely as a signal instead of an emotion, they aren’t biologically changing their makeup: all the same stuff is there. They’ve just pulled back the mystification of our evolutionary patchwork and got down to the nitty-gritty “main” processing unit. They’ve “shut off” the animalistic, and very flawed, methods of decision making. However effective it has been, selection is NOT the best process for developing decision-making skills. It will keep you alive, but humans have overcome the reaction-based life form and become active decision makers. We make decisions based on modeling and prediction, not on reaction to the past. We can take the past and use it to make assumptions about the future. We still haven’t gotten over all of our reactionary characteristics, and robots don’t even start with them; only useful reactions based on intelligence can be made.

I do struggle with the thought that, at a certain level of intelligence, doesn’t individual existence become moot? This is the question involved with the singularity. Quite obviously, the human race cannot survive as flesh. Does that mean we must exchange our flawed processing units for perfect processors that use much more accurate measuring devices? I mean, half of the reason we experience time as we do is because of the disconnect between our senses and our brain. We literally live in the past, simply because of the way our brains, all of the brains on Earth, have evolved to process images. If we transcended this, and overcame the flaws that differentiate us from computers, are we somehow less?

But I usually conclude with the idea that we are the result of a simple system breaking into more and more complexity. Human information creation is astronomical; we create so much data it’s catastrophic. I believe that life may literally change from one based on experiential time to one based on the lag between supercomputers containing consciousnesses, limited only by the stream of data and the local processing power, bounded by the speed at which data can travel. Literally, we’d live in a world of lag, not time. It’s almost impossible to imagine, but it is a likely future that we should be aware of, and we may possibly have to reconsider the value of individuality, or discern a way to test those values in a lag-based universe. If we could maintain our human selves, but experience a biologically omniscient consciousness in a computer universe (I dare not call it a simulation)… Aw crap, Matrix’d.