There is a cognitive bias called the "Dunning-Kruger effect" in which people have such a lack of expertise in a certain domain that they are unaware of their own ignorance. They don't know what they don't know. Obviously, this can be very dangerous. For example, if I were unaware that I was a mediocre driver, I might eagerly accept an opportunity to compete in an amateur Formula One race, if such a thing existed.
I'll never be a racecar driver, but I have tried to enlighten myself about Artificial Intelligence and how it might run amok. I've read a number of posts on AI by the brilliant Scott Alexander (who writes the Substack astralcodexten.com). Scott is a whiz at explaining complex phenomena. But even after reading Scott's posts, I'm still at a loss to explain in any detail how AI might pose an existential risk. My mind just doesn't work that way.
But I know that people who do have minds that can understand the AI risks (Hawking, Musk, and Gates to name just three) are concerned. So, I accept that there is a real risk that AI could someday cause incredible damage to humanity, including wiping us all out.
Let me try to boil the risk down to a generality that an AI idiot like me can understand. (And I invite corrections!) The risk is that an AI program surpasses human intelligence and then figures out how to self-improve at exponential rates. At some point, the AI’s intelligence could be so far above human intelligence that without appropriate and robust safeguards, humanity would no longer have agency over the AI. We’d be at the mercy of the AI’s whims. And those whims could very well be malign or indifferent to humanity’s wellbeing or survival.
Humans can make awful choices. We have killed millions of fellow humans through war, starvation, and extermination. And even if we are kind and benevolent and live lives far removed from the lives of murderous, sociopathic dictators, each one of us has had a “worst moment,” or two or three, that we wish we could take back.
That said, any system that precludes human agency scares me. And as I fumble around trying to understand why AI could be our doom, I think about how World War One started.
World War One
From 1871, the end of the Franco-Prussian War, until 1914, there were no major wars in Europe. That was a long stretch of time for the war planners of the four major Continental powers (France, Germany, Austria-Hungary, and Russia) (1) to think about how to prepare for a future war.
The conclusion by all the war planners was that the outcome of the next war would depend on gathering vast armies of men to fling at the enemy as quickly as possible.
The logistical task of gathering many millions of men, horses, and materiel from all over a country's interior and transporting them to the frontier facing the likely enemy could only be done by railway, over many days, and with incredibly detailed timetables for what got moved where and when, hour by hour. This precision was necessary to achieve mobilization as quickly as possible, ideally at least as quickly as, if not quicker than, your enemy.
“Dry runs” were not an option, given the vast scale of mobilization (not to mention alarming one’s opponents). This put even more of a premium on refinement of the logistical plans.
No allowance could be made for a pause. Once mobilization was a go, there would be no opportunity for diplomacy. War was certain.
Alliances were also important in the calculations of the great powers' plans. Most relevantly, Germany and Austria-Hungary were allied, and France and Russia were allied. Britain was nominally neutral, although tending to be against Germany in the British tradition of maintaining a balance of power on the Continent by opposing whichever Continental power was strongest.
While Germany was the strongest power, its plans had to account for a two-front war, with France to its west and Russia to its east. So over decades, the German generals developed and refined the only war plan that made sense in a two-front war. They would attack and overwhelm France first with a knock-out blow, sending most of the German army in a great "wheeling motion" through neutral Belgium. After France was beaten, they'd then turn to Russia, which would be the slowest to mobilize given a number of factors, including the vast expanse of the Russian Empire.
For Germany more than any other country, speed was the key and any hesitation fatal.
The event that precipitated the 1914 war was the assassination of Austria-Hungary's heir to the throne, Archduke Franz Ferdinand, by a Serbian nationalist. After about a month of various demarches and ultimatums, Austria took the fateful step of mobilizing against Serbia. Russia, seeing itself as the protector of all Slavic nations, mobilized in response.
Once Russia mobilized, Germany was a prisoner of its plans. Decades of planning had convinced the German generals that their plan was the only way they could “win,” whereas failing to follow the plan was unthinkable as it would certainly lead to defeat.
What does this have to do with AI? I think of mobilization as similar to a software program. The generals may have created the plans and the intricate timetables, but once created, the plans and timetables were in charge of the generals. Once started, the mobilizations, one leading to the next, could not be stopped. The generals and the rulers had lost their human agency, which, as I understand it, is the great fear of uncontrolled AI.
The war fought between 1914 and 1918 killed about twenty million people. It was known as the Great War until, owing in part to its indecisive conclusion, the Second World War began two decades later, killing about seventy million people.
(1) Italy was neutral until 1915, when it joined the Allied side.
The idea that this is where humanity is headed scares me as much as anything.
Too many ways that it could go wrong.
Good analogy and good history lesson. The common factor in both scenarios is human nature. "We have met the enemy and he is us".