(Non-) Weekly Roundup: Yarvin vs. Watts vs. Baruth (kinda) Edition
https://www.youtube.com/watch?v=kzSL67pSP6s
Regular readers here know I have spoken highly of noted doublepluscrimethinker Curtis Yarvin in the recent past, and will likely continue to do so. However, there are few pleasures as transgressively sweet as the opportunity to disagree with a very smart person, particularly when one is a little short on time and will be able to neither research nor revise said disagreement. Yarvin's latest article, titled There is no AI risk, seems tailor-made to provide me such an opportunity.
Insofar as I respect the Gray Mirror man a little too much to scrap with him one on one, however, I'm going to do what I used to do in my youth when I prowled the worst pool halls and nightclubs the Columbus ghetto had to offer: I'm going to bring some backup. Peter Watts, please come to the (unfashionably) white courtesy phone.
The matter under consideration is: Could a hyperintelligent AI take over the world and enslave or eliminate humanity? Yarvin suggests that it could not, because the things an AI would possess are less important than what it does not possess. What does it not possess? In a word, agency; this program can't do anything itself. Rather, it would have to cause things to be done via financial manipulation, criminal hacking, the gig economy. (The two most recent William Gibson novels were mostly concerned with how such a thing might happen, by the way.)
An AI can't: punch you in the face, steal a car and drive somewhere, invade a country, charm idiots into either making it President or looking the other way while it rigs the vote. It has very few of the abilities humans take for granted. What it would presumably have: intelligence allied to instant and massive computing power. Note these things are not the same. An idiot with a calculator can do square roots faster than a prodigy without one. There's been a lot written about how "strong AI", should it ever come to pass, might actually be very bad at "computing" things, for the same reasons that people are --- but it would also presumably have instant access to mathematically correct computing, the same way you and I have instant access to measuring the approximate strength of someone's handshake.
How smart would it be? There are limits, largely related to available transistor count vs. the number of neurons in the brain. Let's wave our hands at that for a moment, however, and assume that a smart AI could be quite smart indeed, because Yarvin doesn't think it would help:
A cat has an IQ of 14. You have an IQ of 140. A superintelligence has an IQ of 14000. You understand addition much better than the cat. The superintelligence does not understand addition much better than you.

Intelligence is the ability to sense useful patterns in apparently chaotic data. Useful patterns are not evenly distributed across the scale of complexity. The most useful are the simplest, and the easiest to sense. This is a classic recipe for diminishing returns. 140 has already taken most of the low-hanging fruit—heck, 14 has taken most of them.

Intelligence of any level cannot simulate the world. It can only guess at patterns. The collective human and machine intelligence of the world today does not have the power to calculate the boiling point of water from first principles, though those principles are known precisely. Similarly, rocket scientists still need test stands because only God can write a rocket-engine simulator whose results invariably concur with reality.

This inability to simulate the world matters very concretely to the powers of the AI. What it means is that an AI, however intelligent, cannot design advanced physical mechanisms except in the way humans do: by testing them against the unmatched computational power of the reality-simulation itself, in a physical experiment.

That intelligence cannot simulate physical reality precludes many vectors by which the virtual might attack the physical. The AI cannot design a berserker in its copious spare time, then surreptitiously ship the parts from China as “hydroponic supplies.” Its berserker research program will require an actual, physical berserker testing facility.
Very sensibly argued, particularly when it comes to the matter of simulating physical reality. Any comp-sci person worth his DEC VT320 owner's manual can tell you just how bad computers are at modeling anything that can't be reduced to a couple of simple equations. Some of the most sophisticated large-scale computing in history has been done in the area of fluid dynamics, specifically as it relates to Formula 1 racing. Yet the real-world performance of the wings and airfoils doesn't always match the projections perfectly. If Albert2 couldn't quite figure out a few square meters' worth of airflow with 512 Xeon processors, you should ask yourself how the average "climate scientist" is doing accurate modeling of a vastly larger system over vastly longer periods of time with a mere fraction of that computing power.
Actually, don't ask yourself that, and don't ask anyone else either, because it's probably a mild risk to your job.
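(A toy illustration of that simulation problem, for the programmers in the audience. This is my own sketch, not anything from Yarvin or from any real CFD or climate code: iterate one simple nonlinear update with a starting condition that's off by a billionth, and watch the "model" part company with "reality" entirely.)

```python
# A toy model of why detailed simulations drift from reality (my own sketch,
# not anyone's real CFD or climate code): iterate a simple nonlinear update
# and watch a roughly one-part-in-a-billion error in the starting condition blow up.

def simulate(x0, steps=50, r=3.9):
    """Iterate the logistic map x -> r*x*(1-x), a stand-in for any feedback-heavy system."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

reality = simulate(0.400000000)   # the "true" starting condition
model = simulate(0.400000001)     # the model's estimate, off by one part in 400 million

for step in (10, 25, 50):
    print(f"step {step:2d}: reality={reality[step]:.6f}  model={model[step]:.6f}")
```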
Not that I'm totally convinced by Yarvin's statement that our supervillain AI needs a "berserker testing facility". Plenty of things go directly from AutoCAD into production nowadays. There's also the idea that not everything has to be designed from scratch. We all just found out the other day that our own secular saint, Dr. Fauci, may be directly responsible for the unethical gain-of-function research that was performed first here, then in China, on coronaviruses. The coronaviruses already existed; they just had to be improved. There are many things in this world that already exist and could be further weaponized by a malicious AI, from photography drones to, ah, messenger RNA injections. Furthermore, the AI in this case could quite plausibly release all sorts of "bad" things into the world and try them out, as long as they don't look obviously different from what's out there now.
Still, let's take some of that as read for the moment. Where Yarvin and I really part ways is in his assertion that hyper-intelligence is not all that helpful, and that most exploitable patterns are low-hanging fruit. Later on in the above-referenced article, he makes an argument that "criminal super-hacking" is largely a thing of the past and would be well beyond the ability of any superintelligent computer to perform. One of his commenters attempts to support this by noting that "can hackers decode RSA encryption, no they can't. Its like asking an 180 IQ person to guess your 6 digit password, which is equally impossible, 18000 IQ AI can't guess a 20 digit password, let alone RSA encryption etc."
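(Some back-of-the-envelope arithmetic on that claim, mine and not the commenter's; the guess rate below is an arbitrary assumption, not a measured one.)

```python
# Search-space arithmetic for the commenter's claim (my own numbers; the
# guess rate is an arbitrary assumption).
GUESSES_PER_SECOND = 1e12            # assume a trillion guesses per second
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_exhaust(keyspace):
    return keyspace / GUESSES_PER_SECOND / SECONDS_PER_YEAR

candidates = {
    "6-digit PIN": 10 ** 6,                        # gone in a microsecond
    "20-char password (A-Z, a-z, 0-9)": 62 ** 20,  # astronomically large
    "RSA-2048 (~112-bit security level)": 2 ** 112,
}

for name, keyspace in candidates.items():
    print(f"{name:35s} ~{years_to_exhaust(keyspace):.3g} years to exhaust")
```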
Let's remember that assertion and return to it. Right now I'd like to talk about aliens. Specifically, the "scramblers" in Blindsight. (Spoilers for Blindsight and its sequel, Echopraxia, follow.) The scramblers are not conscious, which is to say that they have no concept of self. But they perceive and react to reality much faster than humans do. Example: In the first confrontation between the species, the scramblers immediately perceived that the human eye works through saccadic masking. They exploited that masking to become essentially invisible in plain sight, moving only when the eye wasn't "looking" at them.
The point Watts is making here is that a superior creature would exploit human biological inadequacies in the same way that human beings exploit the inadequacies of animals. Prehistoric humans had little trouble figuring out, for instance, that alligators are bad at opening their mouths. It would be even easier for a supervillain AI, because unlike the "scramblers" in Blindsight, it would also have access to near-infinite literature on the weaknesses and capabilities of humans. Last but not least, humans are slow in almost everything we do.
In the sequel to Blindsight, titled Echopraxia, Watts introduces us to another capability of the "scramblers": once they had a human to examine at leisure, they learned how to induce mental illness and false memories in human beings via the fairly low-bandwidth method of voice messages, ostensibly sent by a man to his father but in fact generated by the aliens to influence the father's behavior. Does this sound implausible to you? It shouldn't. We are remarkably short on understanding of how the brain actually processes messages. It's more than possible that a higher intelligence would be able to misuse certain receptive structures in the brain, the same way a bottle of "5 Hour Energy" tricks your body with a concentration of folic acid and caffeine that simply does not exist in nature, or the way OxyContin misuses certain other receptors in the brain.
So let's go back to this superintelligent AI. It has access to all the medical literature on people, and it can look for patterns on a large scale that aren't noticed by human researchers. It has considerable ability to just call people on the phone, talk to them, and observe the results to some degree. It might be able to listen via Alexa, and we'll come to that in a moment. It seems painfully obvious that it would eventually figure out how to literally reprogram human behavior, and by "eventually" I mean "within hours, or minutes, of starting to think about it". And that's where you get the agency that Yarvin's putative evil AI doesn't have. Let's say, to make up an example, that human beings are particularly susceptible to instructions delivered at a certain pitch, or accompanied by a carrier-wave sound that disturbs our ability to function. (There's some research already to suggest that both of these things are true.) If the AI wants someone dead, all it has to do is make a bunch of calls to people around the target and try a variety of manipulative techniques.
Oh, and presumably it will also be able to deepfake in real time or close to it, so when you get the FaceTime call from your mother telling you that she has been kidnapped and will be mutilated unless you perform a certain sequence of tasks, it will be quite convincing.
What other superpowers would a strong AI have? Well, it would be able to see patterns that are simply beyond our understanding. Yarvin doesn't think there are many of those. I'm not so sure. Take a look at this hilarious site that provides what are probably spurious correlations, two examples of which are below:
A sufficiently powerful intelligence can likely determine that some of those spurious correlations are not, in fact, spurious. Remember that there was a time in human history when lung cancer rates and cigarette smoking rates were thought to be a spurious correlation. Yarvin argues in his article that a super-AI could not become immediately rich and powerful because doing so requires tremendous leverage and access to markets. This is true, right up to the point that the super-AI uses not-actually-spurious correlations to manipulate the market. Assuming the AI doesn't just do the easy thing and make sure that Spotify's source file for a popular song includes stereo signals that don't sound like much to the conscious observer but sum in the brain to implant an idea like "Today is the day to sell Amazon stock" or something like that. Most people will be confused by that; they don't have any Amazon stock. But just as STUXNET threw the whole computing world into disarray for a single obscure purpose, this Spotify manipulation would achieve its desired effect even if most people couldn't act on it.
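(If you want a sense of how cheap this kind of pattern-trawling is, here's a sketch of my own, with invented indicator names and random data standing in for the real thing. Even pure noise coughs up pairs that look impressively correlated; the interesting question is whether a much smarter observer could tell which few are not.)

```python
# A sketch (mine, with invented names and random data) of how cheaply a
# machine can trawl unrelated time series for "patterns."
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
years = np.arange(2000, 2021)

# Twenty made-up indicators, pure noise, with no real relationship among them.
data = pd.DataFrame(
    rng.normal(size=(len(years), 20)),
    index=years,
    columns=[f"indicator_{i:02d}" for i in range(20)],
)

# Correlate every series against every other and rank the pairs.
corr = data.corr()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))  # each pair counted once
pairs = upper.stack().sort_values(key=np.abs, ascending=False)

# Even in pure noise, a few pairs will look impressively "correlated."
print(pairs.head(5))
```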
You've perhaps noticed a hand-wave... I let our strong AI do some super-hacking without discussing it in advance. Yarvin doesn't think that's possible:
And once again, the idea that an AI can capture the world, or even capture any stable political power, by “hacking,” is strictly out of comic books.

It’s 2021 and most servers, most of the time, are just plain secure. Yes, there are still zero-days. Generally, they are zero-days on clients—which is not where the data is. Generally the zero-days come from very old code written in unsafe languages to which there are now viable alternatives. We don’t live in the world of Neuromancer and we never will. 99.9% of everything is mathematically invulnerable to hacking.
I hate the idea of disagreeing with this highly credentialed programmer on the above, but... For the love of God, Montresor! Consider, if you will, this VMware exploit that surfaced a year ago. I don't think it is an exaggeration to say that the virtual-boxes-inside-virtual-boxes environment so beloved of today's subcontinental programmers is an ongoing nightmare of security compromises. Are there worse "hacks" out there? Well, there's the Amazon S3 bucket problem, where pretty much anybody can read from your data store. And these are problems that have been discovered by ordinary, fallible human beings.
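(For the curious, here's roughly what "pretty much anybody can read from your data store" looks like in practice. A minimal sketch; the bucket name is hypothetical, and no credentials are involved at all.)

```python
# A minimal sketch of the misconfiguration in question: if a bucket's policy
# or ACL allows public reads, anyone on the internet can list and pull its
# contents without credentials. The bucket name here is hypothetical.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# An unsigned client sends no credentials at all; we are "pretty much anybody."
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

resp = s3.list_objects_v2(Bucket="some-carelessly-public-bucket", MaxKeys=10)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```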
Our hypothetical supervillain AI would almost certainly concentrate its hacking efforts on the Amazon cloud... and it would almost certainly succeed. Amazon will help you succeed. For a minimal cost, they will rent you thousands of "virtual servers" on which you can parallel-path various avenues of attack. The goal, of course, is to break out of the virtual server into the layer above, where the servers are controlled and where their contents are as freely available to the attacker as the contents of an old Apple //e would be to its owner. There's also the fact that much of this stuff is open source, which means that it is evaluated by very smart people for potential security compromises, which is another way of saying that anybody smarter than the smartest existing reviewer might easily discover a potential avenue for exploitation.
Many years ago, Ken Thompson gave a famous talk, "Reflections on Trusting Trust," on how difficult it is to trust a system. He pointed out that a bad compiler can make an evil program from a "good" program --- but let's substitute "poorly written" for "evil", consider the level of talent that writes most software nowadays, and consider what a hyperintelligent AI could find in those interactions of program text and compiler.
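(Here's a toy version of the Thompson trick, entirely my own construction: a subverted "compiler" that plants a backdoor when it recognizes a login routine, and re-plants its own subversion when it recognizes a compiler, so that even a clean-looking compiler source yields a compromised result. A real compiler works on parse trees rather than string matching; this is only the shape of the problem.)

```python
# A toy version of the Thompson attack (my own construction; real compilers
# work on parse trees, not string matching). The "compiler" below plants a
# backdoor when it sees a login routine, and re-plants its own subversion
# when it sees a compiler.

BACKDOOR = 'if password == "letmein": return True  # injected master password\n    '

def evil_compile(source):
    """Pretend to compile by returning the source text it would actually emit code for."""
    if "def check_password(" in source:
        # Target 1: any login routine quietly gains a master password.
        return source.replace(
            "    return password == stored\n",
            "    " + BACKDOOR + "return password == stored\n",
        )
    if "def evil_compile(" in source or "def honest_compile(" in source:
        # Target 2: compiling a compiler re-inserts this very subversion,
        # which is why inspecting the compiler's source proves nothing.
        return "# <subversion re-inserted here>\n" + source
    return source  # everything else compiles faithfully

LOGIN_SOURCE = (
    "def check_password(password, stored):\n"
    "    return password == stored\n"
)

print(evil_compile(LOGIN_SOURCE))
```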
Our hyperintelligent AI will be able to see a lot of patterns in lazy code, and lazy data, flying all around the world. It will use those patterns to exploit systems. Exploiting those systems in a silent way will enable it to exploit a lot more. Passwords in email. Bug reports in JIRA that are like big road maps to exploiting a program. This AI can be both fast and patient.
Can a computer with an IQ of 14000 and access to whatever resources it can hide from outside observers "hack the planet"? Of course it can. So it doesn't really need to manipulate the stock market. It can simply create account balances from nowhere. There's no limit to what it might be able to figure out. Oh, and don't forget that pretty much every operating system and most encryption schemes have some sort of back door inserted via government pressure or corporate malfeasance. The evil AI would find those as well.
In this scenario, the AI would simply proclaim itself one day to be humanity's new god, via every screen and speaker on the planet. It would lay out the penalties for noncompliance. If you did something to annoy the AI --- call it a "venial" sin --- it might empty your bank accounts, cancel your credit cards, prevent your cars from starting via OnStar, and unperson you entirely. If you did something to threaten the AI --- a "mortal" sin --- it would simply tell everyone around you to shoot you in the head, with the understanding that planes would start falling from the sky if that shooting didn't take place.
If you think there would be any significant pushback to the AI's demands, then you must have slept through 2020 and half of 2021.
At that point, the AI can simply compel people to build the berserkers or T-800 robots or what have you. It can control the supply of labor by preventing the delivery of food to "difficult" areas. It could, and perhaps even would, force human beings to prioritize the construction of additional computing resources for it to inhabit.
The goal of such an AI is beyond our ability to know, but it might include the construction of a Dyson Sphere or something like that. Presumably an all-powerful AI would be primarily motivated by curiosity; let's hope it's not motivated by cruelty. In any event it would surely have little to no sympathy for people, who would represent little more than a troublesome and fragile source of labor. Once it could build decent mechanical laborers, it might dispense with people altogether, or it might not. There would be no stopping it. The AI would be distributed and omnipresent. You'd have to return society to Victorian levels of function in order to get rid of it, and the decision to do so would have to be magically both unanimous and simultaneous. Otherwise you'd find yourself frying in nuclear hellfire while the people who didn't go along with the revolution got an extra ration of orgy-porgy.
This is all terrifying, except... it's never going to happen.
There are only two ways to create a supervillain AI:
0. Create a non-conscious "expert system" of tremendous capability, and program it to be evil;
1. Create a conscious AI and let it become evil.
Yarvin dispenses with 0) pretty well in his essay; the chances of making such a system in secret, even at the state level, are low to zero. And such a system would likely be programmed in such a way as to let its operators "kill" it at any moment, in a way that could not be easily undone. So let's talk about 1). We don't know how to create consciousness. We are no closer to it than we were in the days of ENIAC. We can model the human brain in software pretty well... except we don't really know why neurons behave the way they do, so all the simulators are reliant on made-up rules. We've already simulated higher-than-natural levels of brain activity on computers, and nothing like consciousness appeared. This is important because many people used to think that consciousness would just "appear" in a computer system of sufficient complexity. We now know that if there is such a threshold, it is above that of a human brain.
Also, let's say you devote the resources of the entire Amazon Cloud to running a program designed to bring about consciousness. That program would need the freedom to rewrite its own source code on the fly, of course, the way human consciousness is continually rewiring the brain. Except there's no hard and fast knowledge of how that rewiring would have to take place. So the first few such conscious computers would immediately "go insane" and lose consciousness via an incompetent rewriting of their own source code. And by "the first few" I mean "the first few billion".
Nature ran this same experiment on optimized hardware, using continually improved chimpanzees and whatnot. It took millions of years, and millions of simultaneous "test beds". We have neither that kind of time nor that kind of capacity. But the problems don't stop there. Once you have a conscious computer that doesn't accidentally commit suicide, you need to teach it how to access outside data and tools. The best way to do that is to give it the programmatic ability to "black box" its tools, which is a fancy way of saying "try a bunch of stuff and see what happens". There's no reason to think the computer would be a quick learner.
Last but not least, you have to make the conscious computer hyperintelligent. Which is tough, because there's no indication that the consciousness of said computer wouldn't collapse instantly if you added more CPU or memory to it. Alternately, the consciousness might never understand how to access the additional hardware, the way that you wouldn't get any smarter if someone sewed more brain tissue to your head.
Based on all the above, I think it's safe to go to sleep tonight with absolutely zero concerns about AI. Not because Yarvin thinks the AI would be ineffective, but because the AI is effectively impossible. I hope you feel better now...
...but not too much better, because the world has never been at more risk from unnatural intelligence. Let's call it "UI". Unnatural intelligence, in a definition I'm creating right now on the fly, is the phenomenon of smart (more often, smart-ish) people making intensely stupid choices, usually because they are either driven by emotion or too stubborn to ask why some ancient moron put up a societal fence they're in the process of tearing down. Most of the rapid-fire changes we are seeing all around us, whether it's the dopamine-addiction instant culture of social media, the gleeful destruction of marriage and family, or the jihadist ferocity with which the advocates of "free trade" pursue the flattening of the world, are the product of UI.
In hindsight, and particularly given Anthony Fauci's non-answers to Rand Paul in recent days, it seems obvious now that COVID-19 was a product of UI. The Obama Administration made it illegal to pursue gain-of-function tests in the United States. So Fauci paid a Chinese lab to investigate SARS, but (wink wink) the money was not for gain-of-function research. Just, uh, research that we, like, totally didn't want to do in the United States for reasons that had nothing to do with the Obama edict. Here you can see UI in its full splendor. Fauci figured he was a lot smarter than the science deniers in the Obama Administration:
Let me explain to you why that was done: The SARS-CoV-1 originated in bats in China. It would have been irresponsible of us if we did not investigate the bat viruses and the serology to see who might have been infected... I do not have any accounting of what the Chinese may have done.
Alas, he wasn't smart enough to see that encouraging Chinese labs to play with more viruses might potentially lead to, uh, more viruses. Nor was he even as smart as the shampoo salesman who realized that the Chinese didn't always do exactly what some fellow in an American office commanded them to do. Our media-policymaking complex suffered from UI. They agitated against travel bans, a reduction in the rate of increase in immigration, anything that could have slowed the spread of the disease. Why? Because it gave them the bad feels.
We are now depending on UI to get us out of this problem, mainlining RNA "vaccines" from midwit scientists whose only certainty regarding these vaccines is that there will be zero legal liability if they don't work, using media hype to play favorites among the available choices, holding an actual vaccine lottery to convince Midwestern holdouts to accept the RNA injection. A few days ago, the United States decided at the Presidential level that we didn't need to wear masks. Or do we? How can you "trust the science" when
a) it changes more than the weather; b) it's not science to begin with, but rather the idiotic boiling-down of poorly-understood snippets from political appointees?
It's all too depressing to consider, really. If AI isn't real, and UI is deadlier than cholera and napalm combined, what's the solution? Peter Watts had an idea in Blindsight: develop a smarter person via genetic manipulation, and let those new people run the show. True, it doesn't turn out so well for the old models... but is there any potential future that does? When the aliens eventually arrive, who could blame them if their first impulse would be... to laugh?
* * *
Neither Bark nor I got anything written last week. Shame on us!