Weekly Roundup: He Doesn't Know Why, And "He" Means "It"
This is the story of an ethical dilemma. In the summer of 2018 I took a gig with JPMorgan Chase on the team that designed automated teller machine software. There's been a fundamental change in the way ATMs work over the past few years. The original ATMs were simply video terminals with an ability to dispense cash; all of your interactions with the machine were actually with a mainframe located on the other end of a dedicated phone line. By the Nineties, ATMs were "real-time" computers which could do a little bit of thinking on their own in-between those mainframe queries. Those worked very well for a very long time.
About half a decade ago, the ATM was reimagined as --- ugh --- a Web browser with the ability to perform a few mechanical interactions on-site. This is also how the VitalPath oncology dispensing system at Cardinal Health works; I developed its server-side infrastructure and Arduino-based electromagnetic cabinet locking. The advantage of this method is that you can make things look very nice in a hurry, using low-skill developers and established methods. That's about the only advantage, but it's a very important one in the modern inverted-pyramid software development lifecycle, in which there are multiple "app owners" and "project managers" and "scrum leaders" for every coder who actually does anything.
The disadvantages, on the other hand, are readily apparent to everyone: the machines don't work reliably, they don't work quickly, and they are prone to all sorts of unpredictable behavior. It perhaps has not escaped your notice that ATMs don't work as well as they used to. It certainly hasn't escaped mine, and I took the Chase gig with an intent to help address this situation.
Which is where the ethical situation comes in. My manager asked me to develop some machine-learning routines to predict ATM failures before they happened. The idea was that the ATMs would often throw various combinations of errors prior to closing up shop. He offered to free up some serious computing power and to give me carte blanche on the project for as long as I needed. This was hugely attractive to me --- and did I mention that this particular Chase office had free motorcycle parking? There was just one problem: We didn't need a machine-learning environment to accomplish this task. There were a few pre-existing log analysis tools which could do the trick. At least two of them were actively licensed and used by our department for other purposes.
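(For the curious: the non-ML version of the task is almost embarrassingly simple. Here's a minimal sketch, in Python, of what I mean --- the error codes, the time window, and the "precursor" combinations are all invented for illustration, not anything pulled from Chase's actual logs or tooling.)

```python
# A sketch of the "boring" approach: no machine learning, just scanning
# recent error logs for combinations of faults that tend to show up before
# a machine goes dark. Error codes, window, and precursor sets are invented.
from datetime import datetime, timedelta

FAILURE_PRECURSORS = [
    {"CASSETTE_JAM", "CARD_READER_TIMEOUT"},
    {"RECEIPT_PRINTER_FAULT", "HOST_LINK_RETRY"},
]

def atms_likely_to_fail(log_entries, window_hours=6):
    """log_entries: iterable of (timestamp, atm_id, error_code) tuples."""
    cutoff = datetime.now() - timedelta(hours=window_hours)

    # Collect each machine's recent error codes.
    recent_errors = {}
    for ts, atm_id, code in log_entries:
        if ts >= cutoff:
            recent_errors.setdefault(atm_id, set()).add(code)

    # Flag any machine whose recent errors contain a known precursor combination.
    return [
        atm_id for atm_id, codes in recent_errors.items()
        if any(precursor <= codes for precursor in FAILURE_PRECURSORS)
    ]
```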
So: Should I let my manager know about this, set up the solution, and return to the pool of sysadmins who were wrestling with a monstrously ill-advised project to move much of our banking infrastructure to the Amazon Clown? Or should I spend six months or even a year of solitary time designing an overly-complex system, enjoying a rare chance to do some actual computer science?
Luckily for me, this Gordian knot was neatly cut by the Pirelli World Challenge series. I needed to participate in a Friday practice at Watkins Glen, and my manager said I was absolutely not permitted to take the day off. Goodbye machine learning opportunity, hello Optima Battery Best Start award! But I picked up quite a bit of machine-learning knowledge on the run-up to that fateful weekend, and that's why I'm not surprised at the fact that a machine can play halfway decent chess without actually being programmed to do so.
SlateStarCodex, the existence of which I was recently reminded by a reader, just did a piece about using machine learning to play chess without knowing the rules of chess. The program is called GPT-2, and it works like this: You let it "read" a lot of text, and it can eventually write text in the style it's been reading. Feed it all the Shakespeare plays, and it will create plays that sound like Shakespeare --- even though they don't necessarily have much of a plot. Let it read stock tickers, and it will create stock tickers. Let it read kindergarteners' handwriting assignments, and it will eventually produce the complete works of Doug DeMuro. Just kidding about that last one.
Someone let GPT-2 read millions of chess-game descriptions using the standard chess notation. It then started "writing" chess moves, based on what it had read. The program had no knowledge of chess or its rules. It was simply noticing patterns in existing chess games and riffing off them. So if chess masters typically responded to a certain pawn move with a certain knight move, GPT-2 would probably do something similar.
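If you want to see the idea at toy scale, here's a sketch. This is not GPT-2 --- GPT-2 is a large transformer, and this is a bigram counter a child could follow --- but it pulls the same trick in miniature: read move sequences, then emit whatever tended to come next, with zero knowledge of what a knight is. The three-game "corpus" below is made up.

```python
# A toy stand-in for the pattern-matching trick: a bigram model over chess
# moves. It knows nothing about the rules; it only knows which moves followed
# which in the games it "read". The tiny corpus below is illustrative.
import random
from collections import defaultdict

corpus = [
    "e4 e5 Nf3 Nc6 Bb5 a6",
    "e4 e5 Nf3 Nc6 Bc4 Bc5",
    "d4 d5 c4 e6 Nc3 Nf6",
]

# Count which move followed which, across all the games.
followers = defaultdict(list)
for game in corpus:
    moves = game.split()
    for prev, nxt in zip(moves, moves[1:]):
        followers[prev].append(nxt)

def reply(move):
    """Pick a continuation purely from observed patterns --- no rules of chess."""
    options = followers.get(move)
    return random.choice(options) if options else "(no idea)"

print(reply("e5"))   # prints "Nf3", because that's what usually came next
```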
Using this technique, GPT-2 gave the author of SlateStarCodex a run for his money in a friendly chess game. This isn't impressive on the face of it, because an Android phone can beat all but the best grandmasters at chess. But the phone does it with a purpose-built chess program, whereas GPT-2 is simply a pattern-recognition machine. In theory, it would be just as good at checkers, Monopoly, poker, or Cards Against Humanity. It just needs a sufficiently large dataset with which to start.
Some of you will recognize that GPT-2 is basically acting as a Chinese Room, John Searle's classic thought experiment, long beloved by computer people. In this scenario, you have a bunch of people sitting in a room. Each of them has a manual showing two sets of weird shapes. Pieces of paper are handed into the room. The people look at their manual, which tells them to replace certain shapes on the paper with other shapes, then to hand the paper back out of the room. The people who work in the room have no idea what the shapes mean --- but the people outside of the Chinese Room know it as a place where you hand Chinese sentences in and get English (or Japanese, or Korean) sentences in return. Even though the people in the room actually speak Spanish. They don't know what the shapes mean --- but if they can follow the rules properly, that's no impediment to them providing outstanding service.
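Reduced to code, the Chinese Room is nothing more than a lookup table. A quick sketch, with the phrase pairs invented for illustration:

```python
# The Chinese Room as a lookup table: the "room" follows its manual
# perfectly and understands nothing. Phrase pairs are invented.
MANUAL = {
    "你好": "Hello",
    "谢谢": "Thank you",
    "再见": "Goodbye",
}

def chinese_room(paper_handed_in):
    # Swap the shapes the manual recognizes; shrug at anything else.
    return MANUAL.get(paper_handed_in, "(not in the manual)")

print(chinese_room("谢谢"))   # "Thank you", produced with zero comprehension
```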
So GPT-2 is a self-training Chinese Room. Right now it's pretty harmless. (For a terrifying example of a Chinese Room, read Blindsight.) GPT-2 won't hurt you. But it's also pretty good at chess.
The author of SlateStarCodex is worried about the future implications of creating superintelligent AI, and he's been writing about that lately --- but I think he might be asking the wrong questions. True artificial intelligence may never come to pass, because it would require consciousness, and right now we don't know how to produce artificial consciousness. The Terminator-movie idea of SkyNet suddenly assuming consciousness and attacking the human race is probably not gonna happen. We will have a lot of machines which simply become catatonic first. (Hey, maybe that's what's happening to the Chase ATMs!) Successful consciousness is a real balancing act, and it doesn't always work even in human beings, who are basically designed for and around the concept.
No, what worries me is the specter of Chinese Room behavior run rampant. You've probably seen a few examples of this already --- think of the occasionally bizarre results you get from online advertising, Amazon suggested items, that sort of thing. Facebook's Chinese Rooms are very good at suggesting new friends to its members, but a certain percentage of the suggestions are simply bizarre, based on a Chinese Room mismatch of social interactions. Google reads all your mail and suggests products you'll like. Most are relevant. Some are flat-out strange. There's no consciousness at work, no true intelligence --- just predictive pattern matching.
Except. There is a lot of evidence to suggest that human intelligence is merely a certain kind of predictive pattern matching. With two caveats. Humans are conscious, which means they can consider the pattern-matching in their own heads and modify it to suit. Human beings are also very good at coming up with original ideas which they then run through their pattern-matching routines. That's how music is written. That's how I wrote this column: I considered a few different associations between things I'd done and read, ran them through some pattern-matching to determine how feasible they were, and then proceeded accordingly.
Programs like GPT-2 "seed" their new chess moves or poetry with randomness, generating candidate text and then shaping it to resemble the patterns they were trained on. Human beings don't do that. Most of our new thoughts arrive almost fully formed. That's why I can write an article about as fast as I can type, and that's why I can improvise music as quickly as my clumsy fingers can perform my ideas.
None of this mattered much to normal folk until recently, but machine learning is now an extremely hot topic among computer people. My manager didn't come up with his idea out of thin air; he was copying things that other managers were doing. In other words, he was acting like GPT-2 himself, taking the ideas around him and modifying them slightly. I guess he could easily be replaced with an Intel Core i7 --- but that's true of nearly every technical manager I've ever known. So his idea was not original, nor has it been rare. There are a lot of machine-learning routines being "trained" as we speak, in nearly every field and topic you can imagine.
At some point, someone is gonna have the bright idea to let the successors of GPT-2 start doing real work. Most of the time, it will go pretty well. You could use GPT-2 to answer customer service emails, schedule appointments, determine more efficient routings for freight and packages and whatnot. The more success these programs have, the more chances they will get to succeed. So one day you will have GPT-12 or GPT-32 in charge of hospitals, power grids... SkyNet. And then SkyNet will decide to eliminate humanity. Not because it's conscious, and not because it hates people. Just because that seems like a reasonable pattern. It will be the unthinking malice of a cobra in a child's crib.
This is the story of an ethical dilemma. Should you sabotage machine learning everywhere it appears? Should you actively hinder the progress of machine learning? Should you attack SkyNet when it's built? What about the "expert system" which will eventually turn off your life support when it judges that the costs exceed the value of keeping you alive? If the machines aren't conscious, is it any less wrong to act against them? What about when they eventually strike back? Will it make any difference that they don't know why they're killing you? Or that they don't even know what they're doing? If you're beaten at chess by a machine that thinks it's generating random text, are you twice the loser as a result, or no loser at all?
* * *
For Hagerty, I addressed the idea of cheap Miatas, which has led to a fascinating thread on Miata.net.