How Smart Can You Be If You Ain't Got No Body?
The rate at which a middle-aged man is going to grow new leg bone and/or ligament tissue is
a) fairly fixed; b) not fast.
Which gives me time to catch up on various blogs, including the one written by Scott Locklin. His post "Open Problems In Robotics" warms my heart, because he and I have come independently to some of the same conclusions, and have been influenced by some of the same concepts. He's a scientist, while I'm a computer scientist. The gap between these two professions is immense, and entirely to the advantage of the real scientists. Yet since I'm also a writer by trade, allow me to take a shot at making a few things clear(er) on this particular topic.
About a month before Locklin's "open problems" post, I told Hagerty readers that we were very, very far away from a car that could run a Burger King errand for you. Locklin's chosen example of robotic/autonomous difficulty is an even simpler one: a robot that will get you a beer out of the fridge on command, the same way that another human being would.
We all know that Amazon and the automakers and various other companies have "solved" this problem, largely by reducing the number of variables to zero. Honda's East Liberty plant is filled with "robots" that can go get, say, the proper seat for the next CR-V coming down the line. This is done by mapping out every single inch of the journey and making sure that the seat is always presented in exactly the same way, and so on. The various sort-and-stack machines used by FedEx, Amazon, and other firms operate in similar fashion. As Locklin notes, they are all too "stupid" to even know where they are; if you unbolted the sorting machine and moved it two inches to the left, you would render it useless.
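If you want to see how little "intelligence" is involved, here's a toy Python sketch of that zero-variable approach. The waypoints and the offset are invented for illustration, not taken from any real factory system; the point is that the machine replays memorized moves and never senses where it actually is:

```python
# Invented waypoints for illustration; no real plant runs on four tuples.
ROUTE = [(0.0, 0.0), (5.0, 0.0), (5.0, 3.0), (2.0, 3.0)]   # the pre-mapped path

def replay_route(start, route):
    """Dead-reckon the memorized moves from wherever we happen to power on."""
    x, y = start
    visited = [(x, y)]
    for (ax, ay), (bx, by) in zip(route, route[1:]):
        x, y = x + (bx - ax), y + (by - ay)   # moves are memorized, not sensed
        visited.append((x, y))
    return visited

print(replay_route((0.0, 0.0), ROUTE))    # lands on the seat rack, as mapped
print(replay_route((0.05, 0.0), ROUTE))   # bolted down 5 cm off: every stop is
                                          # now wrong, and it has no way to know
```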
What Locklin wanted to build was a robot that could be turned on at any time, that would immediately figure out where it was, then wait for the command to get a beer, then find the fridge, then move anything that was not a beer out of the way of the thing that was a beer without causing damage, then obtain the beer, then repack the fridge, then find Locklin and hand the beer over. As with the mythical Burger King Errand Car, we are no closer to accomplishing this goal today than we were in, say, 1980.
Locklin points out that not even the most sophisticated robot on earth can automatically decide to avoid something that is swatting at it. A fly can do that, using 135,000 neurons. It takes a lot of computing power to simulate 135,000 neurons in accurate fashion --- a lot, like "fills a room" lot --- but the mere act of creating 135,000 pseudo-neurons doesn't actually get you anything.
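To make that concrete, here's a deliberately crude Python sketch of what "simulating 135,000 neurons" amounts to. The constants are made up rather than fly physiology, and the synaptic wiring, which is the genuinely expensive part, is omitted entirely:

```python
# A minimal leaky integrate-and-fire sketch, with invented constants.
import numpy as np

N, STEPS = 135_000, 1_000          # a fly's worth of neurons, 1,000 time steps
LEAK, THRESHOLD = 0.95, 1.0        # made-up decay and firing threshold

rng = np.random.default_rng(0)
v = np.zeros(N)                    # membrane potentials
total_spikes = 0
for _ in range(STEPS):
    # decay toward rest, plus noisy input current; no synapses here, since a
    # dense 135,000 x 135,000 weight matrix alone would need roughly 145 GB,
    # which is part of why doing this accurately fills a room
    v = LEAK * v + rng.normal(0.03, 0.1, N)
    fired = v >= THRESHOLD
    total_spikes += int(fired.sum())
    v[fired] = 0.0                 # fire and reset

print(f"{total_spikes} spikes from 135,000 pseudo-neurons, and not one swat dodged")
```

It runs, it spikes, and it does nothing. That's the whole argument in fourteen lines.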
Why, as KRS-One asked, is that?
Locklin suggests that a brain without a body is useless. In other words, the brain is somehow programmed by the body as an organism grows and develops. This is also the theory employed by Steve Grand in the charming book Growing Up With Lucy. Perhaps "programmed" is the wrong word. Let's go back to the fly for a minute. After it is born, it learns to operate by firing different neurons, seeing what happens, and strengthening the neural connections which result in successful behavior. This was how I learned to play the beginning of Supertramp's "Goodbye Stranger" on the piano this morning. I read the sheet music, then I tried operating my hands while listening to the results. When things went wrong, I stopped, which was a negative reinforcement. When they went well, I repeated the performance, which was a positive reinforcement. Now I can do something I could not do this morning, kinda-sorta.
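Compressed to its skeleton, that trial-and-reinforcement loop looks something like the Python sketch below. The "actions" and the reward are invented stand-ins, and real motor learning is vastly messier, but the shape of the thing is recognizable:

```python
import random

ACTIONS = ["stretch", "curl", "cross_over"]            # invented motor commands
strength = {a: 1.0 for a in ACTIONS}                   # connection strengths

def attempt(action):
    """Stand-in for the world: did the notes come out right?"""
    return action == "cross_over"                      # pretend one move works

for _ in range(200):
    # fire in proportion to how strongly each habit is wired in
    action = random.choices(ACTIONS, weights=list(strength.values()))[0]
    if attempt(action):
        strength[action] *= 1.10   # it worked: strengthen (repeat performance)
    else:
        strength[action] *= 0.95   # it went wrong: weaken (stop, back off)

print(strength)                    # "cross_over" ends up dominating
```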
All the things that a robot can't really do --- easily understand its position in a room by looking around, make a guess about how hard to push an object to get it out of the way, find a refrigerator that has moved by eight inches since it last went for a beer --- are things that a growing organism learns to do through physical-neural feedback. Moreover, it would appear that many of these skills are really just small manifestations of greater skills. They have robots that can ride bicycles, but they don't have a robot that can get off a 29" downhill MTB, get onto a 20" BMX bike, and ride the same course again without crashing. (To be fair, that's not always something that I can do, either.) So the "ride a bike" skill is really a small visible part of the "learn to ride a random bike" skill, which in turn is a "learn to ride a random bike on random terrain, in random weather" skill, and so on, and so forth, and you get the idea.
If this idea that the body is father to the mind turns out to be correct --- and it's far better-supported than any competing idea at the moment --- then it suggests that we will never get "artificial intelligence" just by loading a program into a really powerful computer. Instead, you'd somehow have to "grow" the computer and a physical manifestation of that computer together.
"Wait, do it in simulation!" you respond. Except that we don't really know which parts of the physical/mental interface to simulate. What if you spend a decade having a supercomputer learn to throw an imaginary ball via simulation, only to discover a decade later that it was the act of rubbing fingers together that starts the path towards consciousness? More likely, what if you spend a hundred years trying to simulate the inciting incident of intelligence generation, and never get anywhere?
As I've previously noted in these pages, it's easier to teach a computer to beat Bobby Fischer than it is to teach a robot to consistently touch its own nose. It does not seem likely that this will change any time soon. Computers will become better and better at handling data, and they will become faster and faster at it, but they are very unlikely to do "robot-like" or "autonomous-like" things in the foreseeable future.
Unless.
Unless you are willing to change your idea of what a robot is. If you started right now, you could create a mentat long before anyone creates useful AI. Which is to say that you could do some mild genetic engineering on a human pattern, then raise that human being in such a way as to apply hyper-intelligence to the problems you place before it. This was how Frank Herbert waved his hand at supercomputers in the Dune books. Poor Frank grew up in an era where "strong AI" was always right around the corner, but he didn't want to write about that, so he came up with the "Butlerian Jihad" that destroyed all the AI, leaving enhanced humans, mentats, to do the work of supercomputers. Sixty years later, we are no closer to strong AI than the scientists of his time --- but we could have raised five generations of human mentats in that time, the same way you create new dog breeds.
Naturally, you're not going to get away with openly breeding people like animals, against their will and whatnot. Instead, you'd want to create artificial conditions to ensure that the highest-IQ people had higher-IQ children who could then be matched and bred. Your homework, dear reader, is to
a) conceive of an American society that disproportionately incentivized breeding with people of like intelligence, at the expense of all other qualities; b) demonstrate how, if at all, this differs from the way we pack certain American colleges with a pre-determined ratio of high-IQ men and women nowadays.
If I wanted to write sci-fi, I would start with this concept: It's XXX years in the future. Most "computing" is done biologically, by creatures that are vat-grown with bodies that enhance their abilities to develop certain forms of intelligence. General-purpose people are thin on the ground, and there are some bio-mechs in power who don't like the idea of letting them run around at all. Pow. There's your conflict all set up and ready to go.
The problem, of course, is that you'd have to come up with some sort of hand-waving idea as to why the general-purpose humans aren't trivially easy to destroy. Maybe they're the only people who can eat fresh-grown food or something like that. Otherwise it's a turkey shoot. Feel free to leave your ideas in the comments.
All of this goes a long way to suggest that post-humanism will become a reality well before the Golden Age Sci-Fi scenarios of Planet-Sized Supercomputers and whatnot. Frank Herbert might have been on the right track with his Bene Gesserit and Guild Navigators. Maybe the way you get to artificial intelligence isn't by starting with artifice and making it intelligent; maybe you start with intelligence, and make it more artificial.
Apropos of nothing, this is how the rebooted Battlestar Galactica series worked: the "Cylon Raiders" were cyborgs with living tissue inside a metal ship. That's not quite right, unless you have a way to let the ship grow and develop along with the brain. Which is a very difficult problem, but probably not as hard as creating "strong AI" out of an Intel chip. That's never going to happen. As Locklin notes elsewhere, when the tech firms say something will be handled by "AI" they might really mean "Aliens and/or Immigrants". For longer than anyone reading this blog will be alive, the cheapest way to address a problem will continue to be the sourcing of cheaper labor. Which means that you're already living in the future, every time you order an Uber or eat some foodie meal that can't be harvested and assembled in any world that pays a living wage to all of its laborers. Another way to look at it: Scott Locklin's "beer robot" has existed for thousands of years, as an "indentured servant" or "slave" or just "employee". If it ever gets rendered in metal-and-motors form, there still might be a biological brain doing the work behind the mask. Would such a creature have a soul? What about the creature who created it in the first place? What about any of us?