AI call home.

Cars, initially the playthings of the very rich, are now everywhere in the world.  The first time I saw a mobile phone was in the movie “Wall Street” (dir. Oliver Stone, 1987); recently I saw a newspaper picture of a displaced Palestinian family in Gaza riding on a donkey-cart, the obese “pater familias” talking on his cell-phone.  The point is that all useful new technologies tend to become cheap and widely available.[1] 

Today’s rapidly emerging technologies include artificial intelligence (AI)[2] and “synthetic biology.”[3]  Some people foresee the dawn of a new golden age.  “What dreams may come” true?  Great scientific breakthroughs, leading to cures for diseases, are vaunted.  International co-operation plus AI might end world poverty and hunger, or arrest climate change.  “There seems no obvious upper limit on what’s possible.” 

Other people feel more alarm than glee.  The fact that a new technology is useful, cheap, and widely available doesn’t guarantee that all its effects will be beneficial.  Cars run on carbon; cell-phones facilitate bullying, among other harms.  Then there are “guns of the hand.” 

Probably the most intense concern among the lay public is the fear that AI will escape human control, that we will end as slaves of the machine that we—“they” once it has happened and people are looking for someone to blame—have created.[4]  We’ll all have to shape up according to the dictates of a vegan-eating, Alcoholics Anonymous-attending, Pilates-loving, classical music-listening, PBS-watching, and armed-to-the-teeth-with-nuclear-weapons super-computer named Pythia. 

There is another, more realistic, fear.  What if “AI” does NOT escape human control?  What if it falls into the “wrong” hands as well as into the “right” ones?[5]  Criminals, terrorists, and countries or companies gone “rogue” are all drawn to the immense possibilities of “AI.”[6] 

One solution might be to restrict the legal right to develop AI and its offshoots to “responsible certified developers.”  This could be backed by some international apparatus of audits, controls on the transfer of the most advanced computer chips, and regulation of the flow of information on the internet.  It’s difficult to imagine how this would work effectively in a world of competing nation-states, a still very open world economy, and intractably curious scientists.  The proliferation of nuclear weapons illustrates the difficulties. 

Even if regulation limps behind any kind of innovation, it is worth asking “How can we guide technology in a way that allows us to benefit from its extraordinary promise without being destroyed by its exceptional power?”[7] 


[1] Mustafa Suleyman, with Michael Bhaskar, The Coming Wave: Technology, Power, and the 21st Century’s Biggest Dilemma (2023).  OTOH, we’re less than a quarter of the way into the 21st Century, so who knows? 

[2] See: Artificial intelligence – Wikipedia 

[3] On the latter, see, for starters: Synthetic biology – Wikipedia 

[4] We have been prepped for this fear by popular culture.  See the seminal works: “2001: A Space Odyssey” (dir. Stanley Kubrick, 1968); “Colossus: The Forbin Project” (dir. Joseph Sargent, 1970); “The Matrix” (dir. The Wachowskis, 1999); and “Ex Machina” (dir. Alex Garland, 2014).  The threat resonates most powerfully with liberal arts faculty.  Many deans would fail the Voight-Kampff Test.  Blade Runner – Voight-Kampff Test (HQ) – YouTube 

[5] Who has the “right” hands?  The European Community?  UNESCO?  The Sackler family?  OK, the Fed. 

[6] I’m reading William Gibson, Neuromancer (1984), so none of this seems far-fetched to me. 

[7] David Shaywitz, review of The Coming Wave, WSJ, 7 November 2023. 
