My apologies for not getting back to this sooner. I did take note of the thread when it was published, but I have been extremely busy of late, and have had neither time nor inclination for a long answer.
So, getting back to the point of this thread, I’ll address the OP’s last question first.
Controlling a prosthetic where there never was an original limb or body part
And would it be possible for a person to control a mechanical implant that does not replace a limb, as if it was a fifth limb?
The answer is an unequivocal yes. I can say this because we’ve done it experimentally. Not with humans, but with monkeys. We know it works, and we know why it works. When we get around to repeating the experiments with humans – which we will do sooner or later, for a plethora of reasons I’m not going to get into here – we know it will work the same way with us.
The first experiment in giving a living primate a fifth limb was carried out way back in the mists of time, in late 2003. It would prove to be the first of several such experiments, each with a different agenda, spanning most of a decade. There will doubtless be many more. From my perspective, however, this one was the most important, as it pioneered a technique I am intimately familiar with: TMR, or Targeted Muscle Reinnervation. It is basically where the modern smart prosthetic’s interface with the body was born.
The experiment in question was carried out by researchers from Duke University, a North Carolina institution. The interface was attached directly to the peripheral nervous system of a chimpanzee, using a technique that had never before been carried out to such a degree of complexity. Electrodes were wired up to a large muscle in the trunk known as the obliquus externus abdominis, which, as you might gather from the name, runs around the front of the torso, interleaving with the lower ribs of the ribcage.
Hair was shaved from the front of the chimpanzee, and an array of electrodes was pressed into the skin there, pushing deep into the muscle. We know now not to do things like that: when the skin flexes, it alters the electrical conductivity at the electrodes. Nowadays we place them under the skin to avoid that problem.
The electrodes pressed into the muscle read the neural codes transferred into the musculature by the lower thoracoabdominal nerves. They were placed such that each group of electrodes would pick up the signals from a different thoracoabdominal branch. This is why such a large muscle was chosen in the first place – so many branches feed into it, so many different signals to read.
The electrodes were attached to a computer system that was entirely external to the monkey. We are talking about a cluster server linked to oscilloscopes here, as nobody really knew what they were doing back then. We did not have any of the neural codes mapped, so we were interested in detecting distinct high-frequency signal patterns, not in which motion each one controlled. As such, brute-force computing power was required. The system was responsible for detecting these signals, and for moving the actuators in the mechanical arm in response to each distinct pattern.
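To make that detection stage concrete, here is a minimal sketch in Python. Everything in it – the channel count, the noise threshold, and the command table – is invented for illustration; the real system was doing this with far more data and far cruder tools:

```python
import math

def rms(window):
    """Root-mean-square energy of one electrode group over a short window."""
    return math.sqrt(sum(s * s for s in window) / len(window))

# Hypothetical mapping: each electrode group drives one actuator command.
# Nothing here is tied to a *motion* -- we only care that activity on a
# given group is distinct and above the noise floor.
COMMANDS = {0: "shoulder", 1: "elbow", 2: "wrist", 3: "grip"}
THRESHOLD = 0.5  # arbitrary noise floor for this sketch

def detect(channels):
    """Return the commands whose electrode groups show strong activity."""
    return [COMMANDS[i] for i, w in enumerate(channels)
            if rms(w) > THRESHOLD]

# Four electrode groups: only group 1 is firing strongly.
quiet = [0.01, -0.02, 0.01, 0.0]
active = [0.9, -0.8, 1.0, -0.7]
print(detect([quiet, active, quiet, quiet]))  # ['elbow']
```

In essence: threshold each channel independently, then treat "channel above threshold" as a command, without knowing or caring what the monkey intended by it.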
The monkeys in question were deliberately underfed, and during the experiments each was strapped into a chair that was itself bolted to the floor. In the room was a robot arm, controlled via the TMR electrodes on the monkey’s chest, and a table with some bananas on it. The bananas were well within the reach of the robot arm, and likewise the monkey was well within the robot arm’s reach, but the table was beyond the monkey’s reach even if its arms had not been strapped down.
The monkey had to learn how to operate its new appendage from scratch. By consciously moving its abdominal muscles, forcing the obliquus externus abdominis to expand and contract, it sent neural signals down its nerve pathways (the six lower thoracoabdominal branches being the ones of interest) and into the nearby electrodes. These signals then drove the arm. However, neither the monkey nor the researchers knew which muscle movement would trigger which command in the robot arm. The monkey had to work out, entirely by trial and error, exactly how to do it.
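That trial-and-error process can be sketched in a few lines of Python. The action names and the six-branch wiring are made up for illustration; the point is only that the mapping is fixed but unknown, and gets discovered by observing what each flex does:

```python
import random

# Hypothetical: a fixed but *unknown* wiring from nerve branch to arm action.
ACTIONS = ["extend", "retract", "rotate", "open", "close", "lift"]
hidden_wiring = dict(zip(range(6), random.sample(ACTIONS, len(ACTIONS))))

def robot_arm(branch):
    """What the arm actually does when a given nerve branch fires."""
    return hidden_wiring[branch]

# Trial and error: flex each branch once, observe the result, build a map.
# This single pass stands in for months of motor learning.
learned = {robot_arm(branch): branch for branch in range(6)}

# Once the map is learned, the desired action can be produced on demand.
assert robot_arm(learned["open"]) == "open"
```

A real brain does this statistically, over thousands of attempts, but the logical structure is the same: observe the consequence of each motor pattern, then invert the mapping.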
EEGs of the primates’ brains showed that, over time, as the monkeys grew more proficient at using the new arm to grab the food their underfed bodies needed, the brain allocated additional neurons to the task. Certain areas of the motor cortex became devoted to the new arm as the brain integrated this appendage into its body map. The brain was more than plastic enough to eventually accept the new appendage as natural, and to work the robot arm with a level of dexterity similar to what the monkey showed with its own arms. However, this took a good 3–6 months to achieve.
Subsequent experiments on various primates have taken place at Carnegie Mellon (in 2008), and the University of Pittsburgh (2010). Each independent study turned up the same basic results, with more powerful robotic arms, and an increased understanding of the relevant neural codes.
However, the long and short of it is that a brain very similar to ours, with the same degree of plasticity the human brain has been shown to possess time and time again, can learn in a relatively short time frame to integrate additional body parts into its internal map and treat them as part of its body. We use the same basic concept in TMR with humans with smart prosthetics, tapping different nerve bundles to replace the functions of ones lost through amputation.
There is no question whatsoever of the brain failing to accept new body parts and adapt to them. Even in humans, it works without question.
2003 Experiment (Duke University)
2008 Experiment (Carnegie Mellon)
2010 Experiment (University of Pittsburgh)
Replacing a lost limb with something else
Now, as to the second question. This bit:
But would it be possible to replace a lost limb with something that does not function at all like the aforementioned limb and have the person be able to use it?
From the above, I hope you can see how this question has mostly been answered. However, there are a few additional things to cover here. By chance I was briefly discussing the same with Tenco on another thread on here (one of my tech-threads), after another poster saw fit to use images to show just how disgusting he found the iLimb models. In particular he was showing how the iLimb digits hand piece slots into a socket on the iLimb Ultra’s carapace. In other words, the hand is separable from the rest of the arm; it unplugs just below the wrist.
Now, the poster we were arguing against was trying to explain how unacceptably creepy he found the prospect of a fully functional hand that you could just unplug and set aside. However, what I was bringing into the conversation at the time – and what Tenco picked up on – was that this ability to unplug it is a good thing. If the hand is damaged, for example, you can just unplug it and plug in a replacement without having to replace the whole prosthetic.
But it goes deeper than that. Again, as I touched on at the time, we have a prosthetic designed for a specific purpose: to replace a lost arm. It connects as faithfully as possible to the right nerve endings, and so uses the exact same neural codes the previous, natural arm used. This is to reduce the learning curve as much as possible. Because so many of the commands are exactly the same for the prosthetic as they were for your old arm, you are not in the position of having to start over from scratch, and you can start to make use of it immediately – even before the swelling subsides.
However, you are also now in possession of a replacement arm that links your peripheral nervous system to a socket into which any compatible device can be plugged. Unlike with the chimps, the entire computer system that controls the arm is built into the arm itself, so there is nothing extra to lug around with you. This is why, if you plug a new hand in, its control circuitry will line up precisely with the existing codes being sent from your arm, and will be able to send the expected ones back – it will work exactly the same as the damaged hand that was removed.
But you don’t have to plug a new hand in. Any device that has been designed to recognise the basic neural signals of the human arm, and has been engineered to fit the same socket, will work – just as any USB peripheral will plug into any USB socket, regardless of who made it. So long as everything lines up, you’re good to go.
It doesn’t even have to do the same thing the hand did. You are sending it control codes, sure, but those control codes match up to signals your arm is sending to the hand. It’s like calling an API: you are just calling a function name with the expected parameters. If the developers of the API change what their code does, then so long as it keeps the same name and the same number and type of parameters, all your old API calls will still work just fine – they’ll just be doing something different.
So it is with the hand. Take it off, and put, say, a projectile weapon on the end of your wrist. What would ordinarily pull your thumb inwards works the trigger. Waggling your pinky toggles the safety. Tweaking the index finger loads a new round and expels the old one. It sends different sensations down your arm corresponding to how many rounds are left – one round left feels like touching an apple, say; two left feels like touching glass; three left feels like carpet… and so on. You’ll learn the codes soon enough. The main thing is it would work. The connection would be seamless, and short of learning what you had to envisage yourself doing to access each command, it would work from the moment you plugged the thing onto the end of your arm socket.
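In code terms, the swap looks something like this Python sketch. The class names and signal codes are invented; the point is that the socket forwards the same codes regardless of what is listening:

```python
# Hypothetical sketch: two attachments answering the same "neural codes"
# in completely different ways. The socket neither knows nor cares which
# device is plugged in -- it just forwards the code.
class Hand:
    def signal(self, code):
        return {"thumb_in": "grip", "pinky": "waggle",
                "index": "point"}.get(code, "idle")

class Weapon:
    # Same interface, same codes -- entirely different behaviour.
    def signal(self, code):
        return {"thumb_in": "fire", "pinky": "toggle_safety",
                "index": "cycle_round"}.get(code, "idle")

    def feedback(self, rounds_left):
        # Rounds remaining reported back as familiar touch sensations.
        return {1: "apple", 2: "glass", 3: "carpet"}.get(rounds_left, "none")

def wrist_socket(device, code):
    """The arm just forwards whatever the brain sends."""
    return device.signal(code)

print(wrist_socket(Hand(), "thumb_in"))    # grip
print(wrist_socket(Weapon(), "thumb_in"))  # fire
print(Weapon().feedback(2))                # glass
```

Same signature, different implementation – exactly the API analogy above, with the arm playing the role of the caller whose code never changes.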
Your question about flight works the same way. Wings do not have to be mounted on the end of the socket; only something to translate the signals has to be there. You could be sitting in a plane – a small Cessna two-seater. It has no traditional controls whatsoever. The control console has the normal gauges, but there is no stick, no flap or aileron controls. Instead there is a small black box at the end of your prosthetic arm, connected to the socket where your hand should go. Every minute flick of the muscles that your brain commands instead controls the plane directly.
You have hopefully spent countless hours in the simulator testing this out before you do it for real. You send the command to make a fist to that hand, and instead the engine hums into life. You tilt your hand forward at the wrist and the plane taxis forward, responding with speed proportional to the degree of tilt. Already you can feel the air flowing over your wings and the tarmac moving under your wheels, sent back to you as touch signals against your hand, transmitted from the black box into the prosthetic and up into your brain. You can feel every pressure change over the wings and tail as a soft wind blowing over your hand. There’s no pain, and there never will be, unless something in the plane goes wrong – and then the pain will tell you what it is that has failed. As you taxi down the runway gathering speed, you open your fist. The engine won’t cut out whilst you’re moving, so instead, as the plane leaves the ground, you use the tilt of your fingers to control the plane’s orientation. It climbs and descends, banks and rolls, in direct response to how you feel your hand moving, the correct flap and aileron movements happening automatically, like subconscious responses to your muscle controls. The wind blowing over the plane’s skin is felt as wind over your hand and down your arm. You can feel every shift, like a bird with its feathers, feeling out the thermals by touch alone. As the pressure shifts, so do you, and your plane banks into the rising column of air.
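The proportional taxi control alone can be sketched as a one-line mapping. The tilt range and speed limit here are arbitrary figures, chosen just to show the idea:

```python
# Hypothetical: wrist tilt (0-45 degrees) maps linearly onto taxi speed.
MAX_TAXI_SPEED = 30.0  # knots; an invented figure for this sketch

def taxi_speed(tilt_degrees):
    """Clamp the tilt to its valid range, then scale it onto ground speed."""
    tilt = max(0.0, min(45.0, tilt_degrees))
    return MAX_TAXI_SPEED * tilt / 45.0

print(taxi_speed(45.0))  # full tilt -> 30.0
print(taxi_speed(9.0))   # slight tilt -> 6.0
print(taxi_speed(-5.0))  # tilting back does nothing -> 0.0
```

Every other axis – pitch, roll, throttle – would be the same kind of clamped linear map from a muscle signal to a control surface.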
This becomes even more fun when you realise it doesn’t have to be just your hand interfacing this way. We could do it with your whole body. As far as you could feel, whilst sitting in the pilot’s seat, you would be the plane: every shift in wind or pressure raising goosebumps on your skin, the moisture in the air dampening you as far as touch could tell, your own eyes augmented by seeing through the radar directly. Embodied as the plane you are flying. A living vehicle soaring through the skies.
There’s no doubt it’s doable. The only real question is ‘how do we interface it?’