The ethics of the creation of highly specialised minds page 2

30 posts


Heh, TuJe, AGI means Artificial General Intelligence, and here are a few links covering the basics. I’ll start with Wikipedia and move on from there. I know I listed some definitions in my previous thread on AGIs, but I probably should have added some here too.

Wikipedia definition of an Artificial General Intelligence

AGIRI’s website is down right now, typical. Looks like someone screwed up and posted experimental code to the main site. When it is actually up, this is the address of a dedicated wiki for AGI research.

The AGI Society is a good starting point for more scholarly research on the subject. The fifth annual conference on AGI is taking place right now, but the proceedings won’t be available until after it finishes. This mad person (me) is in attendance, probably unsurprisingly.

 

I somehow missed your page from this list. I didn’t check what “auxiliary” meant either. Oh well. I think it’s time to finally go to sleep now.

 
Originally posted by TuJe:

Didn’t check what “auxiliary” meant either. Oh well. I think it’s time to finally go to sleep now.

It’s the name given to a covert operations military ship.

 

The AGI in your example is created specifically for one purpose – driving a tram. So in constructing an AGI, how specialised does it need to be? Would it be possible to have a more versatile AGI, which might have a range of “careers” available to it?

As humans, we are born with certain abilities, certain strengths and weaknesses, which inevitably have a part in steering our course through life, disqualifying us from serious involvement in certain activities and making others more likely. We are all influenced by those around us, especially parents and teachers, who help to direct us on to what they see as the most suitable paths. So while we all have choices, they are never unlimited. We all adopt interests and hobbies in what may seem to be the most unlikely areas of intellectual and physical endeavour, not necessarily because we are good at them, but just because we enjoy them.

So rather than create an AGI for a specific purpose, would it be possible to create, say, a mechanically minded AGI, or one with an architectural bent, and then allow it to make its own choices about how to apply its abilities? I wondered if that might eliminate the slave labour aspect which Redem brought up, and perhaps make it more productive in the long term.

 
Originally posted by beauval:

The AGI in your example is created specifically for one purpose – driving a tram. So in constructing an AGI, how specialised does it need to be? Would it be possible to have a more versatile AGI, which might have a range of “careers” available to it?

Yes. The first ones will doubtless be like this, and all current efforts are focussed on creating them. Later today is the seminar on the CHREST synthetic mind architecture, which aims to build a general-purpose mind like the one you describe. I’m awake now, meh, because it’s Monday and getting up at 5 has become a habit.

We all adopt interests and hobbies in what may seem to be the most unlikely areas of intellectual and physical endeavour, not necessarily because we are good at them, but just because we enjoy them.

Yes, synthetics will be no different. What I was describing was a method of stacking the deck, so that the interests they naturally enjoy line up with the job they are expected to do. An AGI mental health assistant, for example, might have a brain which is incapable of laziness and which does away with sleep (by doing a file sort and defrag – basically what our sleep is – whilst conscious). It might also be given an innate caring attitude and a tendency to be firm.

What the entity does with these tendencies is still up to it to decide, but the deck is stacked in favour of the intended purpose.

So rather than create an AGI for a specific purpose, would it be possible to create, say, a mechanically minded AGI, or one with an architectural bent, and then allow it to make its own choices about how to apply its abilities?

Absolutely. In theory we could construct a mind’s growth framework however we pleased, if we can get the essentials right.

I can see a lot of potential for them inside synthetic environments, living out lives there, with their purpose being to bring the environment to life. But that’s going down the road of my other AGI thread, and I probably should steer clear of that direction in this one.

I wondered if that might eliminate the slave labour aspect which Redem brought up, and perhaps make it more productive in the long term.

Perhaps. There’s no real way of knowing, I suspect. Would you be any more or less productive than you have been throughout your life if you discovered someone had designed your brain, specifically, to be the way it is? There would probably be resentment – assuming you are capable of feeling resentment, of course; we should be able to remove that emotion.

Of course this heads onto dangerous ground, because if we keep removing and accentuating mind states, we very much create a slave: one whose collar is its own mind, trapped within the bounds we have defined, fully aware of it, but able to do nothing about it.