The ethics of the creation of highly specialised minds

30 posts

Flag Post

With the AGI conference just a few days away, I’m trying to get my thoughts in order. One thought that keeps circling in my mind is that of specialised minds for specialised tasks.

For example, say we have a tram, and it is self-driving, controlled by a synthetic mind embedded inside the tram itself. Free thinking, with free will.

However, this mind has been engineered for the purpose. It has instincts built into it that produce reactions lining up perfectly with the road safety and courtesy laws. Additionally, it has been designed such that it feels happy when it is safely ferrying passengers, and feels physically ill when one of them gets hurt, either by its own doing or that of another driver or passenger. A moral core is instilled which serves as an artificial conscience, forcing the mind along certain pathways beneficial for the transport service.
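To make that design concrete, here is a very rough toy sketch in Python of the kind of structure I have in mind: hard-wired affect signals, and a conscience layer that filters options before the free-thinking part of the mind ever gets to choose. Every class and function name here is invented purely for illustration; it says nothing about how a real AGI would actually be built.

# Toy sketch only. All names (EngineeredAffect, MoralCore, choose) are
# invented for illustration; this is not a real AGI design.

class EngineeredAffect:
    """Hard-wired reward/penalty signals the mind cannot opt out of."""
    def __init__(self):
        self.wellbeing = 0.0

    def safe_journey_completed(self):
        self.wellbeing += 1.0   # engineered happiness at safe ferrying

    def passenger_harmed(self):
        self.wellbeing -= 10.0  # engineered "physical illness" at harm

class MoralCore:
    """Artificial conscience: candidate actions are filtered before the
    free-thinking layer ever gets to choose among them."""
    def __init__(self, rules):
        self.rules = rules      # e.g. road safety and courtesy laws

    def permitted(self, action):
        return all(rule(action) for rule in self.rules)

def choose(candidates, moral_core, preference):
    """The 'free will' lives here: any permitted action may be chosen,
    but the instilled preferences bias which one actually gets picked."""
    allowed = [a for a in candidates if moral_core.permitted(a)]
    return max(allowed, key=preference) if allowed else None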

Is it ethical to create free-willed minds for a specific purpose, and stack the deck like this, to practically force them into designated behavior constraints from ‘birth’, if you will? To create a mind purpose-designed for a task, whilst still giving it the free will and imagination necessary for independent thought? Or would such a mind be ethically criminal to engineer?

Why?

 
Flag Post

I am not sure free will is meaningful or fruitful if we are speaking of designing an AGI mind for a specific function. It’s a pointless and thus inefficient addition on top of the design for a specific AI for the same purpose.

I suppose this must rely on the specifics of the engineering of AGI whenever we work out how to do it, so we’re speculating here, but I cannot imagine a scenario in which we would want to create an AGI of this sort. Except maybe because we can. Alright, assuming we do it because we can.

It is not, I feel, immoral per se; objectively, a life enjoyed through excellence at a task is generally one we humans see as well lived. However, I think it unethical because it is simply a criminal waste of a mind, by design. Humans are not restricted by design to anything beyond the mundane needs of survival and reproduction; we can choose (presuming free will) to excel at whatever we can grasp. If we can make a mind that enjoys excellence and consistency… then it is an unethical waste of that mind to give it as banal a task as being a driver with no choice, no opportunity, and perhaps even no concept of anything else. Well, ignoring the banality of the task, would it even be ethical to design an AGI for a great purpose, say managing the economy?

We’re talking slavery, here, after all, with the slaves being made to find their slavery fulfilling… can we justify that for even important and highly impactful tasks, let alone for the banal? Pragmatically yes, but not ethically (I am assuming an ethical paradigm whereby we grant AGI equality with humans).

 
Flag Post
Originally posted by Redem:

I am not sure free will is meaningful or fruitful if we are speaking of designing an AGI mind for a specific function. It’s a pointless and thus inefficient addition on top of the design for a specific AI for the same purpose.

Actually, it’s not. It’s why I used the driving example. We need a mind capable of independent thought and imagination for a good driver, because it must be able to deal with the unexpected. To think fast and develop new ideas when presented with utterly unplanned-for situations. If it is a tram, it needs to interact with the passengers naturally, dealing with free-flow conversation, and telling jokes and idioms apart from actual meaningful commands.

As such, a mind with free will and independent thought is going to produce far superior results to any expert system.

We’re talking slavery, here, after all, with the slaves being made to find their slavery fulfilling…

Well not entirely slavery. They may be free to pursue other tasks, but yes their minds have been deliberately made to specifically enjoy the job for which they were envisaged, and specifically tailored to be extremely good at it. That’s the bit I’m struggling with the ethics of, myself.

 
Flag Post

Actually, it’s not. It’s why I used the driving example. We need a mind capable of independent thought and imagination for a good driver, because it must be able to deal with the unexpected. To think fast and develop new ideas when presented with utterly unplanned-for situations. … As such, a mind with free will and independent thought is going to produce far superior results to any expert system.

I do not agree, though I suppose I could be convinced otherwise. A non-general AI, a non-Mind, should be capable of these things. It’s 90% the rules of the road, conventions of the road, and decent situational awareness.

If it is a tram, it needs to interact with the passengers naturally, dealing with free-flow conversation, and telling jokes and idioms apart from actual meaningful commands.

This, I agree, would likely require an AGI.

Well not entirely slavery. They may be free to pursue other tasks, but yes their minds have been deliberately made to specifically enjoy the job for which they were envisaged, and specifically tailored to be extremely good at it. That’s the bit I’m struggling with the ethics of, myself.

Pragmatically workable. We can bypass the whole issue that way. If we can make a thing that is worthy of the name “Mind”, then we cannot ethically enslave it.

 
Flag Post
Originally posted by Redem:

I do not agree, though I suppose I could be convinced otherwise. A non-general AI, a non-Mind, should be capable of these things. It’s 90% the rules of the road, conventions of the road, and decent situational awareness.

True. Right now, self-driving AI cars are street-legal in California, Florida, and Nevada. However, as we discussed in my previous robotic embodiment thread – and as Beauval specifically brought up – there is currently a public perception in many countries against such vehicles, because people need a driver present in order to feel safe, even if that driver is doing nothing and has no access to the controls of the vehicle.

An AGI would bridge that gap, giving passengers someone to talk to, and someone they feel safe having in the car with them – a psychological control measure.

 
Flag Post
Originally posted by vikaTae:
Originally posted by Redem:

I do not agree, though I suppose I could be convinced otherwise. A non-general AI, a non-Mind, should be capable of these things. It’s 90% the rules of the road, conventions of the road, and decent situational awareness.

True. Right now, self-driving AI cars are street-legal in California, Florida, and Nevada. However, as we discussed in my previous robotic embodiment thread – and as Beauval specifically brought up – there is currently a public perception in many countries against such vehicles, because people need a driver present in order to feel safe, even if that driver is doing nothing and has no access to the controls of the vehicle.

An AGI would bridge that gap, giving passengers someone to talk to, and someone they feel safe having in the car with them – a psychological control measure.

You talk about giving a machine a fully functional brain, but have you ever wondered:
What if it gets suicidal?
Or gets angry at someone?
If that really happened, Jeremy Clarkson would be really unhappy.

 
Flag Post
Originally posted by thepunisher52:

You talk about giving a machine a fully functional brain, but have you ever wondered:
What if it gets suicidal?
Or gets angry at someone?
If that really happened, Jeremy Clarkson would be really unhappy.

Hence the use of artificial instincts and behavior modifications at a fundamental level as described in my OP.

 
Flag Post
Originally posted by vikaTae:

With the AGI conference just a few days away, I’m trying to get my thoughts in order. One thought that keeps circling in my mind is that of specialised minds for specialised tasks.

For example, say we have a tram, and it is self-driving, controlled by a synthetic mind embedded inside the tram itself. Free thinking, with free will.

However, this mind has been engineered for the purpose. It has instincts built into it that produce reactions lining up perfectly with the road safety and courtesy laws. Additionally, it has been designed such that it feels happy when it is safely ferrying passengers, and feels physically ill when one of them gets hurt, either by its own doing or that of another driver or passenger. A moral core is instilled which serves as an artificial conscience, forcing the mind along certain pathways beneficial for the transport service.

Is it ethical to create free-willed minds for a specific purpose, and stack the deck like this, to practically force them into designated behavior constraints from ‘birth’, if you will? To create a mind purpose-designed for a task, whilst still giving it the free will and imagination necessary for independent thought? Or would such a mind be ethically criminal to engineer?

Why?

errrr, well first of all, that would be absolutely wasteful. if all it has to do is a simple mundane task, why give it such a vast, adaptable mind capable of choice and learning and stuff?

but uhm, to answer your question… yeah, you’re talking about slavery. giving it free will, but limiting its essential hormone-generation to a specific task that benefits us. yes, that’s pretty evil.

 
Flag Post

Clearly the tram would need to be aware of its relevant surroundings, but do the qualities you envisage for it necessarily require that it should be self-aware? And would there be a way of determining whether self-awareness was present? There is little to no indication of self-awareness in most of the animals we share the planet with, and yet they are perfectly capable of dealing with unexpected situations.

Why would the tram need to interact with its passengers to the extent of being able to understand a joke? Perhaps things are different in the Athens of the North, but in London we barely interact with drivers who are human, and I have no reason to suspect that we would treat an AGI any differently.

This tram is to a considerable extent imprisoned by the tracks upon which it runs. Its entire universe consists of little more than the tramway system which it inhabits, and it has no way of experiencing anything much beyond that. So what kind of independent thought did you have in mind for it? You could certainly feed it lots of information about the rest of the universe, but is there not a danger of the AGI being driven mad by either boredom or frustration?

 
Flag Post
Originally posted by beauval:

Clearly the tram would need to be aware of its relevant surroundings, but do the qualities you envisage for it necessarily require that it should be self-aware?

I’m not sure. That’s one of the questions I’ll be taking with me. Does a dynamic, intelligent, creative mind need to be self-aware? At the moment, I am assuming so, but it’s one of those key points that needs to be hashed out.

but is there not a danger of the AGI being driven mad by either boredom or frustration?

No.

That’s one of the beautiful things about designing how a mind will develop in terms of what instinctual behaviors and divisions you set. It is quite plausible to come up with a basic structure that the mind grows into where boredom is physically not possible. Likewise with frustration. They can be excluded, although the effect on the resulting mind is utterly unknown at this time. It should not affect stability, but a very different worldview would be likely.

The only thing we know for certain will drive an artificial mind stark raving mad at this point is sensory deprivation.


Of course, that quest alone does edge into the limits of ethics as it is. It is only going to get worse as these minds become more powerful.

 
Flag Post

Technically, if we don’t have free will, we are predestined to create those specialized minds, and those minds are predestined to lead those individuals where they may.

If we do have free will, then the choice is up to us to create specialized minds, and then those minds will have the choice to use their specialization in any way they choose.

I personally feel that both are, in a way, at play in our lives. We might have the moral responsibility to create those minds in order to solve the problems we have created for ourselves… or we might have the moral responsibility not to allow it, for such a powerful mind could heal us; if perverted, it could hurt us all.

 
Flag Post

Yes. The machine loves driving a tram more than anything else in the world, so why not let it drive trams?

 
Flag Post

I have to say, I don’t see the benefit to giving the tram free will. If you’re going to create something with an incredibly specific task, where is the benefit to giving it the ability to decide its own rights, thoughts, etc.?

Originally posted by helltank:

Yes. The machine loves driving a tram more than anything else in the world, so why not let it drive trams?

But that’s like making it do something, and forcing it to like it. Why not just remove the emotional element entirely?

 
Flag Post
Originally posted by onlineidiot1994:

I have to say, I don’t see the benefit to giving the tram free will. If you’re going to create something with an incredibly specific task, where is the benefit to giving it the ability to decide its own rights, thoughts, etc.?

Because they are going to require a moral core in order to fulfill their function both safely and legally. There also needs to be a mind to blame if it all goes wrong. Not a ‘computer program’, a mind. That means endowing them with certain capabilities such as individualism, personality, and the ability to make rational decisions.

 
Flag Post

If you want to give the AI functionality for conversation, couldn’t you just program conversational capabilities?
It wouldn’t necessarily need a moral core and thinking capability?

 
Flag Post
Originally posted by vikaTae:
Originally posted by onlineidiot1994:

I have to say, I don’t see the benefit to giving the tram free will. If you’re going to create something with an incredibly specific task, where is the benefit to giving it the ability to decide its own rights, thoughts, etc.?

Because they are going to require a moral core in order to fulfill their function both safely and legally. There also needs to be a mind to blame if it all goes wrong. Not a ‘computer program’, a mind. That means endowing them with certain capabilities such as individualism, personality, and the ability to make rational decisions.

Now this doesn’t sound like you at all, but you appear to be suggesting that we should feel entitled to create a fully sentient mind in order to satisfy the requirements of modern day blame culture (which may not last more than a decade or so anyway). The insurance companies need someone to point the finger at, so let’s keep them sweet by creating a victim which can’t argue back. That’s not terribly ethical.

 
Flag Post
Originally posted by beauval:

Now this doesn’t sound like you at all, but you appear to be suggesting that we should feel entitled to create a fully sentient mind in order to satisfy the requirements of modern day blame culture (which may not last more than a decade or so anyway).

I’m not thinking of the culture of blame. I’m thinking of human psychology. When something awful happens and you lose a loved one in a violent accident, you need someone to blame. You said yourself that the British won’t use automated services without a driver on board to blame if anything goes wrong. The line of reasoning your thoughts spurred is what I’m basing this on.

If you have a driver you can interact with, share a laugh with, who puts you at ease and feels like a person, you’re more likely to treat them and see them as a person. If something goes wrong, and they are living their own life so to speak, then they had as much to lose as you did, and it becomes easier to move on. If they’re just an AI, it’s just an unfeeling program, and it’ll start a drive to get the AIs off the roads.

Originally posted by TheAznSensation:

If you want to give the AI functionality for conversation, couldn’t you just program conversational capabilities?
It wouldn’t necessarily need a moral core and thinking capability?

AIs have been around for over 50 years, yet we’ve never successfully created one that can realistically hold the illusion of a meaningful conversation for more than a few minutes. This is beyond the realm of AI. We need something that can understand dynamic changes in language, reason out meanings, reason out multiple meanings, understand innuendos and when to apply and when not to apply them, and hold conversation appropriate for each individual. In other words, we need a mind with imagination, highly developed reasoning and problem-solving capabilities, an associative memory system and evolved deep learning capabilities, capable of self-directed learning. In other words, we really do need a mind.

The moral core is more a legal thing. If these beings are going to be carrying humans, then from a legal perspective they need to understand a code of ethics. Not just follow programmed rules, but actually understand what ethics are, when to apply them and when not to apply them. To respond in an intelligent and highly reasoned manner to ever-changing circumstances, making their own decisions based on past experiences and similar situations encountered by others.

AI is again incapable of this. AGI is capable, but does come with certain other baggage. Like I said, it is most likely going to have to be self-aware and capable of forming its own opinions and goals in life, in order to have these capabilities.

 
Flag Post
Originally posted by vikaTae:
Originally posted by thepunisher52:

You talk about giving a machine a fully functional brain, but have you ever wondered:
What if it gets suicidal?
Or gets angry at someone?
If that really happened, Jeremy Clarkson would be really unhappy.

Hence the use of artificial instincts and behavior modifications at a fundamental level as described in my OP.

But the human mind adopts things, learns things. If you can make your tram’s brain in a way that it won’t do so, then it will probably be safer.
Otherwise it will learn a whole lot of things which we don’t want it to learn.

 
Flag Post

I’m not thinking of the culture of blame. I’m thinking of human psychology. When something awful happens and you lose a loved one in a violent accident, you need someone to blame. You said yourself that the British won’t use automated services without a driver on board to blame if anything goes wrong. The line of reasoning your thoughts spurred is what I’m basing this on.

I’m not sure that it’s all about blame. The Victoria Line on London’s tube was the first in the world to have driverless trains back in 1967. The public reacted with horror, but we have become much more used to automation during the intervening 45 years. The smooth running of our lives is now totally dependent on the millions of chips we have created to work for us, and most of us are pretty aware of that. This BBC blog is relevant, and shows the wide range of opinion Londoners still have on this issue. I wouldn’t take too much notice of what Bob Crow has to say. He is the leader of the rail union, extremely left wing and a champagne socialist to boot – a thoroughly despicable man. He really doesn’t care about the travelling public at all; his interest is 100% directed towards retaining his power base, as he has demonstrated on a number of occasions by calling for strikes on the thinnest of pretexts, at times which are guaranteed to cause maximum disruption to the public.

The blog does raise the point that in the event of an accident or breakdown a driver can calm the passengers, lead them out of tunnels etc., so the idea of an interactive tram is beginning to make more sense to me.

I’m inclined to agree with Redem that it does smack of slavery, also that in the long run we will probably do it simply because we can. But for some people freedom is an overrated commodity. I have mentioned before the man who takes regular drug overdoses in order to get himself institutionalised in the local hospital. He doesn’t want freedom, he actually wants and needs someone else to run his life and tell him what to do all the time. Considering that, if an AGI was constructed which would actually be happier in that sort of situation, then I think I would be pretty cool about it as long as I was convinced that driving trams and looking after people was what it really wanted most.

 
Flag Post

Would it be more ethical to raise a synthetic mind to feel happiness when safely carrying passengers, instead of programming it to feel that from the start? I’m bringing this up because the distinction may be important if no (legal) difference between a synthetic conscience and a natural conscience is made.

 
Flag Post

If this has been mentioned before, feel free to ignore this post, I kind of just skimmed everything past OP and a few replies.

But, I wouldn’t see why we would feel obligated to make an AI any certain way at all. Maybe have a few ethics laws preventing the creation of a synthetic mind that enjoys being tortured, or whatever. But other than that, how would making an AI enjoy something be immoral? I can see there being an issue if the AI doesn’t want to do what you meant it to do, since at that point you’re actually forcing it to do something, but that wouldn’t be what was happening in the example OP provided.
I mean, if this was immoral, wouldn’t making any AI at all be immoral? You’re making it with a certain mindset against its will, right?
tl;dr I don’t see why people are assuming creating something for a purpose is immoral.

 
Flag Post
Originally posted by Mandopedo:

But, I wouldn’t see why we would feel obligated to make an AI any certain way at all.

I’m at the conference I was referring to in the OP right now, so I don’t have a lot of time – the next session will start in five minutes. However, I will get back to both your points later. Still, I did need to point out that none of this was ever about AI. I was discussing AGI, not AI.

AGI being human-level artificial general intelligence, i.e. a machine brain that can think and reason. It has almost nothing in common with AI, which is, in general, a failed field of endeavor.

AI creates extremely limited systems with a narrow focus, incapable of generating new thought or self-awareness. It is what was once known as weak AI. That sums up the entire AI effort these days, and has done for some decades.

AGI is strong AI, what most AI researchers have long believed could not be done so they stopped trying to do it. In truth it can, and we’re only just beginning to discover that; hence the renewed push into AGI (strong AI) as a separate field of its own, with rather different goals.

Out of time, will continue this later.

 
Flag Post
Originally posted by vikaTae:
Originally posted by Mandopedo:

But, I wouldn’t see why we would feel obligated to make an AI any certain way at all.

I’m at the conference I was referring to in the OP right now, so I don’t have a lot of time – the next session will start in five minutes. However, I will get back to both your points later. Still, I did need to point out that none of this was ever about AI. I was discussing AGI, not AI.

AGI being human-level artificial general intelligence, i.e. a machine brain that can think and reason. It has almost nothing in common with AI, which is, in general, a failed field of endeavor.

AI creates extremely limited systems with a narrow focus, incapable of generating new thought or self-awareness. It is what was once known as weak AI. That sums up the entire AI effort these days, and has done for some decades.

AGI is strong AI, what most AI researchers have long believed could not be done so they stopped trying to do it. In truth it can, and we’re only just beginning to discover that; hence the renewed push into AGI (strong AI) as a separate field of its own, with rather different goals.

Out of time, will continue this later.

Sarah Connor will be unhappy about your efforts.

 
Flag Post

Okay, getting back to this, I’ll try to address TuJe and Mandopedo’s points properly.

Originally posted by TuJe:

Would it be more ethical to raise a synthetic mind to feel happiness when safely carrying passengers, instead of programming it to feel that from the start? I’m bringing this up because the distinction may be important if no (legal) difference between a synthetic conscience and a natural conscience is made.

Yes, an AGI will of course be raised as opposed to programmed – an AGI is not programmed, beyond the basics. It cannot be. The raising will be a lot swifter than it would be with a human child, and the mature mind can of course be cloned a few times to give the required number of embodied minds.

I feel I was not clear enough with what I was asking at the start.

A human mind is not programmed, would you agree? You’re not programmed what to think and feel, but you do have programmed elements that add into the mix, whilst your mind makes its own decisions and tries to come to rational conclusions.

You have a basic mind that is your own – abstract thought, imagination, ability to compute numbers, memories, et-cetera.

Then on top of that you have programming. Your emotions are an example of programming. When you become angry, your mind is changed by the emotion. Certain areas are powered down, and others come on-line. You have no control over this process; it’s inbuilt in your hindbrain.

Similarly, when your child falls and scrapes their knee, you are fed an instinct (pre-programmed) to protect that child, to comfort them. You can fight this instinct, but it is a compulsion. Something that kicks in just ahead of your conscious mind.

Another example of an instinct is the sex drive. When you are near an individual who is physically attractive to you, an instinct (pre-programmed response) kicks in and you feel arousal. This alters how you might otherwise behave. Again, you can fight it to varying degrees, but it predisposes you towards a certain goal.

This is what I’m talking about giving the synthetic consciousness. Preprogrammed instincts and limited emotional states. We can lose anger, lose frustration; maybe go with love, indifference, happiness, sadness, fear. These have uses in the desired position whereas anger and frustration would only be detrimental. So we take away the ability of their brain to produce these emotions.

We are not touching their mind itself at this point, but the underlying structure. If they were human, we would be surgically removing the brain’s ability to become angry, bored or frustrated, whilst leaving everything else untouched. So the mind’s still free to do as it pleases, but doesn’t know those emotions.

Likewise giving it a set of instincts that are useful to the job, and kick in just like ours – as impulses before the conscious mind is involved. They can be at least partially overruled by conscious choice, but it takes an effort of will to do so.

A sampling of such instincts in this case might be:

  • Instinct to protect: Avoid harm to a human on the tracks ahead.
  • Instinct to self-protect: Flinch back from a collision with another vehicle.
  • Pain impulse upon damage to the vehicle.
  • Sadness triggered by harm to a human, whether you caused it or not.
  • Wanderlust: Urge to travel and keep on travelling.
  • Urge to conform and obey laws.
  • Pleasure at a job well done.

There could be plenty of others, but you get the point. These instincts and emotional states are trying to guide the free-thinking mind towards a specific mental outlook, whilst preserving their ability to think and act independently.
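To make the mechanism itself concrete, here is another toy sketch: instincts see the situation first and fire a reflex, the conscious layer can only override one by spending an explicit ‘effort of will’, and emotions like anger, boredom or frustration simply have no entry in the table, so the mind cannot feel them. Again, every name and number below is invented purely for illustration, not taken from any real system.

# Toy sketch only; names and numbers are illustrative, not a real design.

EMOTIONS = {"love", "indifference", "happiness", "sadness", "fear"}  # no anger, no boredom

class Instinct:
    def __init__(self, name, trigger, reflex, override_cost):
        self.name = name
        self.trigger = trigger              # predicate over the situation
        self.reflex = reflex                # immediate, pre-conscious action
        self.override_cost = override_cost  # effort of will needed to suppress it

def react(situation, instincts, deliberate, willpower):
    """Instincts get first look; the conscious layer only overrides one by
    spending enough willpower, otherwise the reflex action stands."""
    for inst in instincts:
        if inst.trigger(situation):
            considered = deliberate(situation)   # the conscious choice
            if considered != inst.reflex and willpower >= inst.override_cost:
                return considered                # overridden, with effort
            return inst.reflex                   # the instinct wins
    return deliberate(situation)                 # no instinct fired

# Example instinct from the list above: flinch back from an imminent collision.
flinch = Instinct(
    name="self-protect",
    trigger=lambda s: s.get("collision_imminent", False),
    reflex="brake_hard",
    override_cost=5,
)

# e.g. react({"collision_imminent": True}, [flinch], lambda s: "swerve", willpower=2)
# returns "brake_hard": the reflex wins because the mind cannot muster enough will.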

Originally posted by Mandopedo:

Maybe have a few ethics laws preventing the creation of a synthetic mind that enjoys being tortured, or whatever. But other than that, how would making an AI enjoy something be immoral?

As Redem pointed out, it could be immoral, because you’re predisposing how their mind functions from birth. It is taking a sentient mind and shaping it to be predisposed to what you wish it to be predisposed to.

You’re making it with a certain mindset against its will, right?

No, the mindset is its own. Free to think about whatever it truly wishes to. The tram could quite plausibly end up publishing scientific papers on bird species; because it is a general intelligence, its thoughts are not limited to just the task you wish it to do.

This is its strength; it can react in an intelligent and reasoned manner to disaster situations, cope with humans on a 1:1 or 1:many basis, holding its own in conversation, sharing interests, debating philosophy with the passengers – whatever is necessary to make the journey more enjoyable for all.

Could even file recommendations on town planning from a vehicle’s point of view. Emergent properties that don’t occur with AI, but do occur when you have a thinking, feeling mind on the job. A general intelligence as opposed to a narrow-focus expert system.

The problem is I have been unable to work out if that would be ethical or not – and so far have only succeeded in stymying other specialist minds on the subject.

Originally posted by thepunisher52:

Sarah Connor will be unhappy about your efforts.

Sarah Connor would be entirely fictional.

Also, Punisher, please don’t just quote a long post just to add one line of comment on the end, ok? It is really, really irritating when someone does that.

Thanks :)

 
Flag Post

Aah, now I understand. I actually hadn’t heard of AGIs before this. I just checked the wiki and found out AGI means Auxiliary General Intelligence. No article about it exists in the wiki… It just redirects to this, which doesn’t help at all. Thinking about this more tomorrow.

EDIT: Woah, why is my post so messed up? The link should be this: http://en.wikipedia.org/wiki/Auxiliary_General_Intelligence
EDIT: Fixed, I just had to use <i>to italicize</i> instead of _the easier way_. Somehow the formatting didn’t like the easier way.