A Back to the Future exercise: time-traveling to survey the sci-tech scene in the mid-1980s
by Pepe Escobar with permission and first posted at Asia Times
I have been going through my Asia Times archives selecting reports and columns for a new e-book on the Forever Wars – Afghanistan and Iraq. But then, out of the blue, I found this palimpsest, originally published by Asia Times in February 2014. It happened to be a Back to the Future exercise – traveling in time to survey the scene in the mid-1980s across Silicon Valley, MIT’s AI lab, DARPA and the NSA, weaving an intersection of themes, and a fabulous cast of characters, which prefigure the Brave New Techno World we’re now immersed in, especially concerning the role of artificial intelligence. So this might be read today as a sort of preamble, or a background companion piece, to No Escape from our Techno-Feudal World, published early this month. Incidentally, everything that takes place in this account was happening 18 years before the end of the Pentagon’s LifeLog project, run by DARPA, and the simultaneous launch of Facebook. Enjoy the time travel.
In the spring of 1986, Back to the Future, the Michael J Fox blockbuster featuring a time-traveling DeLorean car, was less than a year old. The Apple Macintosh, launched via a single, iconic ad directed by Ridley (Blade Runner) Scott, was less than two years old. Ronald Reagan, immortalized by Gore Vidal as “the acting president,” was hailing the mujahideen in Afghanistan as “freedom fighters.”
The world was mired in Cyber Cold War mode; the talk was all about electronic counter-measures, with American C3s (command, control, communications) programmed to destroy Soviet C3s, and both the US and the USSR under MAD (mutually assured destruction) nuclear policies being able to destroy the earth 100 times over. Edward Snowden was not yet a three-year-old.
It was in this context that I set out to do a special report for a now-defunct magazine about artificial intelligence (AI), roving from the Computer Museum in Boston to Apple in Cupertino and Pixar in San Rafael, and then to the campuses of Stanford, Berkeley and MIT.
AI had been “inaugurated” in 1956 by Stanford’s John McCarthy and Marvin Minsky, a future MIT professor who at the time was still at Harvard. The basic idea, according to Minsky, was that any trait of intelligence could be described so precisely that a machine could be built to simulate it.
My trip inevitably involved meeting a fabulous cast of characters. At MIT’s AI lab, there was Minsky and also an inveterate iconoclast, Joseph Weizenbaum, who had coined the term “artificial intelligentsia” and believed computers could never “think” just like a human being.
At Stanford, there was Edward Feigenbaum, absolutely paranoid about Japanese scientific progress; he believed that if the Japanese developed a fifth-generation computer, based on artificial intelligence, that could think, reason and speak even such a difficult language as Japanese, then “the US will be able to bill itself as the first great post-industrial agrarian society.”
And at Berkeley, still under the flame of hippie utopian populism, I found Robert Wilensky – Brooklyn accent, Yale gloss, California overtones; and philosopher Hubert Dreyfus, a tireless enemy of AI who got his kicks delivering lectures such as “Conventional AI as a Paradigm of Degenerated Research.”
Meet Kim No-VAX
Soon I was deep into Minsky’s “frames” – a basic concept to organize every subsequent AI program – and the Chomsky paradigm: the notion that language is at the root of knowledge, and that formal syntax is at the root of language. That was the Bible of cognitive science at MIT.
Minsky was a serious AI enthusiast. One of his favorite themes was that people were afflicted with “carbon chauvinism”: “This is central to the AI phenomenon. Because it’s possible that more sophisticated forms of intelligence are not incorporated in cellular form. If there are other forms of intelligent life, then we may speculate over other types of computer structure.”
At the MIT cafeteria, Minsky delivered a futurist rap without in the least resembling Dr Emmett Brown in Back to the Future:
I believe that in less than five centuries we will be producing machines very similar to us, representing our thoughts and point of view. If we can build a miniaturized human brain weighing, let’s say, one gram, we can lodge it in a spaceship and make it travel at the speed of light. It would be very hard to build a spaceship to carry an astronaut and all his food for 10,000 years of travel …
With Professor Feigenbaum, in Stanford’s philosophical garden, the only space available was for the coming yellow apocalypse. But then one day I crossed Berkeley’s post-hippie Rubicon and opened the door of the fourth floor of Evans Hall, where I met none other than Kim No-VAX.
No, that was not the Hitchcock blonde and Vertigo icon; it was an altered hardware computer (No-VAX because it had moved beyond Digital Equipment Corporation’s VAX line of minicomputers), financed by the mellifluously acronymed Pentagon military agency DARPA, decorated with a photo of Kim Novak and humming with the sexy vibration of – at the time immense – 2,900 megabytes of electronic data spread over its body.
The US government’s Defense Advanced Research Projects Agency – or DARPA – was all about computer science. In the mid-1980s, DARPA was immersed in a very ambitious program linking microelectronics, computer architecture and AI, going way beyond a mere military program; it was comparable to Japan’s fifth-generation computer program. At MIT, the overwhelming majority of scientists were huge DARPA cheerleaders, stressing how the agency was leading research. Yet Terry Winograd, a computer science professor at Stanford, warned that had DARPA been a civilian agency, “I believe we would have made much more progress.”
It was up to Professor Dreyfus to provide the voice of reason amidst so much cyber-euphoria: “Computers cannot think like human beings because there’s no way to represent all retrospective knowledge of an average human life – that is, ‘common sense’ – in a form that a computer may apprehend.” Dreyfus’s drift was that with the boom of computer science, philosophy was dead – and he was a philosopher: “Heidegger said that philosophy ended because it reached its apex in technology. Philosophy in fact reached its limit with AI. They, the scientists, inherited our questions. What is the mind? Now they have to answer for it. Philosophy is over.”
Yet Dreyfus was still teaching. Likewise at MIT, Weizenbaum was condemning AI as a racket for “lunatics and psychopaths” – but still continued to work at the AI lab.
NSA’s wet web dream
In no time, helped by these brilliant minds, I figured out that the AI “secret” would be a military affair, and that meant the National Security Agency – already in the mid-1980s vaguely known as “no such agency,” with double the CIA’s annual budget to pay for snooping on the whole planet. The mission back then was to penetrate and monitor the global electronic net – that was years before all the hype over the “information highway” – and at the same time reassure the Pentagon over the inviolability of its lines of communication. For those comrades – remember, the Cold War, even with Gorbachev in power in the USSR, was still on – AI was a gift from God (beating Pope Francis by almost three decades).
So what was the Pentagon/NSA up to, at the height of the star wars hype, and over a decade and a half before the revolution in military affairs and the full spectrum dominance doctrine?
They already wanted to control their ships and planes and heavy weapons with their voices, not their hands; voice command a la HAL, the star computer in Stanley Kubrick’s 2001: A Space Odyssey. Still, that was a faraway dream. Minsky believed that “only in the next century” would we be able to talk to a computer. Others believed it would never happen. Meanwhile, IBM was already working on a system that accepted dictation; MIT on another that identified words spoken by different people; and Intel was developing a special chip for all this.
Although, predictably, prevented from visiting the NSA, I soon learned that the Pentagon expected to possess “intelligent” computing systems by the 1990s; Hollywood, after all, had already unleashed the Terminator series. It was up to Professor Wilensky, in Berkeley, to sound the alarm bells:
Human beings don’t have the appropriate engineering for the society they developed. Over a million years of evolution, the instinct of getting together in small communities, belligerent and compact, turned out to be correct. But then, in the 20th century, man ceased to adapt. Technology overtook evolution. The brain of an ancestral creature, like a rat, which sees provocation in the face of every stranger, is the brain that now controls the earth’s destiny.
It was as if Wilensky were describing the NSA as it would be 28 years later. Some questions still remain unanswered; for instance, if our species no longer fits the society it built, who could guarantee that its machines are properly engineered? Who could guarantee that intelligent machines act in our interest?
What was already clear by then was that “intelligent” computers would not end a global arms race. And it would take a long time, up to the Snowden revelations in 2013, for most of the planet to get a clearer idea of how the NSA orchestrates the Orwellian-Panopticon complex. As for my Back to the Future trip, in the end I did not manage to uncover the “secret” of AI. But I’ll always remain very fond of Kim No-VAX.