Regula Staempfli: Do AIs dream of democracy?
Translated by Samuel Joe Reed, Master Cognitive Science
AAA, or at least 1,300 points: those who earn the highest ratings by conforming (to the establishment) – under uninterrupted surveillance by computers and cameras – enjoy benefits such as unhindered travel. Those who do not must reckon with sanctions, should “Citizen Scoring” – still in a voluntary test phase – be rolled out across the whole of China. Western democracies have a duty to free themselves from this slippery slope, and to “locally and democratically restrict the technological and global anti-democracies”, as Regula Staempfli and Michael Gebendorfer argue.
Back in 1968, when Philip K. Dick published his ingenious novel “Do Androids Dream of Electric Sheep?”, in which a group of androids tries to escape their pre-ordained shutdown, computers were as big as delivery vans and were still fed with punched tape. Though the sixties look like a digital Bronze Age from today’s point of view, the far-reaching implications of developing artificial intelligences were already the subject of thorough deliberation. Thoughts about a future artificial intelligence were usually characterised by an uncanny, deep-seated fear, but also by hope. This may be due to our educational curricula, in which the myth of Prometheus has not only persisted in many variations (from the Golem to Frankenstein’s Monster) but has also given blasphemous connotations to the creative act itself. This discomfort – concerning the quasi-divine act of creation involved in the development of an AI – was already described in Goethe’s ballad “The Sorcerer’s Apprentice”. For as soon as an artificial intelligence is in the world, there is no turning back, regardless of how it later develops.
Weak and Strong AI
But what exactly is an artificial intelligence? The term covers both a computer fully capable of thinking like a human (Strong AI) and one that merely simulates human-like thought (Weak AI). Weak AI has long been commonplace, for example in computer games in which computer-controlled characters appear to behave autonomously. Ultimately, however, these characters only play out a limited number of pre-programmed subroutines. The feigned freedom of such machines is more like a Potemkin village: through the windows one can see rows of binary code, even as the facade tries to make us believe in their intelligent autonomy.
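The "Potemkin village" of weak AI can be made concrete with a minimal sketch: a game character whose apparent autonomy is nothing but a fixed lookup table of pre-programmed subroutines. All names here are illustrative inventions, not taken from any real game engine.

```python
# A sketch of "weak AI" as described above: a non-player character (NPC)
# whose entire behavioural repertoire is an enumerable set of scripted
# subroutines. No learning, no strategy, no self-model is involved.
class NPC:
    def __init__(self, name):
        self.name = name
        # Every possible behaviour is written down in advance.
        self.subroutines = {
            "patrol": lambda: f"{name} walks a fixed route.",
            "greet": lambda: f"{name} says a scripted line.",
            "flee": lambda: f"{name} runs to a hard-coded waypoint.",
        }

    def act(self, stimulus):
        # "Decisions" are a dictionary lookup with a default branch.
        routine = self.subroutines.get(stimulus, self.subroutines["patrol"])
        return routine()

guard = NPC("Guard")
print(guard.act("greet"))    # looks responsive
print(guard.act("eclipse"))  # an unknown stimulus just falls back to patrol
```

However varied such a character appears in play, its "freedom" is exhausted by the keys of that dictionary – which is precisely the gap between simulating intelligent behaviour and the general, self-devised problem-solving the next paragraph ascribes to Strong AI.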
Strong AI in contrast, were it to become a reality, would be capable of acting intelligently in general: it would have to develop its very own intelligent strategies to solve a diverse range of problems, and thereby exploit the possibilities of adaptive, evolutionary development. It is precisely this notion of strong AI, the seemingly limitless power of computers, which already cast a spell on us in the 18th century in the form of the Mechanical Turk (which turned out to be a deception) and now strives towards its own realisation in the near limitless computing power of Silicon Valley.
Since 2018, we have seen the first machines capable of passing the so-called Turing Test. First presented by Alan Turing in 1950, the test asks, roughly, whether a person in conversation can distinguish a human interlocutor from a machine. A strong AI, however, would be much more than the well-executed deception required to pass the Turing Test: it would most likely recognise itself as an entity, as a Self, and align its actions and thoughts accordingly.
Only once such a strong AI exists will we be able to tell whether it shares the primary goals of Homo sapiens, such as self-preservation and procreation. The branches of research into Artificial Life that go hand in hand with the development of artificial intelligence clearly suggest that in the future we will be dealing more with a new kind of life form than with a machine.
Citizens’ rights and Human Rights for AI?
Which citizens’ rights and human rights would also apply to an AI? Which would not? The decisive question will be: how will an AI in a democracy think differently from an AI in a dictatorship? Possessing fully independent cognition of its own, the AI will have its own understanding of the world around it, and will perceive itself as part of that world. It will therefore most likely seek ways of securing a lasting place in society. We can only speculate about the strategies by which it will do so.
Certainly, the ability to adjust will play a decisive role, for both sides: not only will the behaviour of an AI towards its environment (society, state, globe) need clarification, but this environment will also have to accommodate the AI in turn.
The advent of a self-learning, generally intelligent machine into the world of organic life will raise a lot of questions about our democratic and human worldviews. Especially if the AI does not consider democracy relevant to its priorities.
Political education for AIs
AIs will inevitably require a kind of political education, in order to make the constitutional and democratic cohabitation of humans and AIs possible. What role an AI might play in a dictatorship is another question altogether. How would an intelligence far superior to that of humans freely allow itself to be subjugated to a system of government, without inevitably shaking it off in the mid-to-long term? Here we see the return of the ancient fear and hope that a super-intelligent machine could become either an all-ruling dictator or the engine of a democratic utopia.
Among the latent possibilities of an AI is that it cooks its own binary soup, whatever the cost. Until we can rely on the democratic consciousness of an artificial intelligence, we will have to make the existing state apparatus fit for democracy. Our contemporary technological worldviews and views of the human condition currently favour dictatorships and autocracies. This must change – as must the fact that the present burdens the future with trash not just politically but also quite literally, with tons of physical electronic waste. At the moment, these processes threaten humanity far more than artificial intelligence does, which, according to some estimates, may still be a hundred years away.
Should an artificial intelligence nevertheless raise its head (or its CPU?), then it must absolutely do so in a democratic context. To improve the probability that an AI will dream of the equal participation of all, both software and hardware must be aligned to democracy. Only democracy guarantees a life- and humanity-affirming relationship with technology.
The atomised society and the measurement of mankind
Attention should be drawn to the present paradox that machines assume ever more human appearances, while humans are treated more and more like machines. Human nature is being replaced by artificiality, which in turn redefines the ontological condition of our nature. Silicon Valley long ago stopped dreaming of democratic utopias or better political worlds; instead it dreams of radically overhauling the real world at the mercy of automated processes. The present digital rulers are, in part horrifyingly, driving forward that process of modernity which Hannah Arendt labelled “the loss of the world”. Our collectively produced and observed world falls apart into individual pieces. The widely criticised loss of truth and the ubiquity of fake news are a popular expression of this process. The result is the present vicious cycle of media and science, which concerns itself not with the further development of democracy but rather dwells on laments about the past, or loses itself in irrelevant themes such as self-driving cars. The concealment of power was always the cleverest trick of rulers: this is why people are afraid of robots rather than of their doctors, even though it is precisely in medicine that all-powerful, inhuman code is already largely implemented into our living reality.
This is why politicians today care about national identities while simultaneously and carelessly entering into trade deals with the People’s Republic of China that disempower European identities and decision-making processes. “Culture of Upstanding Citizens” is what the ruling party calls this – it hardly gets more Orwellian-newspeak than that. Under this doctrine, every citizen is measured by a credit point system. The algorithms established by State and Party decide on housing, cars, study places, marriage permits, holidays, childhood dreams and so on. In China, a person is treated not as a life form but as a credit packet: without free will, and freely available to the relevant programs. The system designed by the powerful shapes humans and the world, sometimes under the skin.
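The banality of such a credit point system can be sketched in a few lines: a single number, nudged up or down by "conformity" signals, gates everyday permissions. All thresholds, signal names and weights below are invented for illustration; they are not taken from any real scoring system.

```python
from dataclasses import dataclass

# A hedged sketch of the scoring logic described above. The starting
# score, weights and cutoff are hypothetical placeholders.
@dataclass
class Citizen:
    score: int = 1000  # assumed baseline score

    def record(self, signal, weight):
        # Surveillance infrastructure feeds in signed "conformity" signals;
        # the label is only bookkeeping, the weight is what counts.
        self.score += weight

def may_travel(citizen, threshold=1300):
    # The teaser's example: unhindered travel only above a cutoff
    # (the "1,300 points" mentioned at the start of the essay).
    return citizen.score >= threshold

c = Citizen()
c.record("volunteered", +200)
c.record("jaywalking", -50)
c.record("praised party online", +180)
print(c.score, may_travel(c))  # 1330 True
```

The point of the sketch is its poverty: a whole life collapses into one integer and one comparison, which is exactly what "a credit packet, without free will" means in code.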
Limiting anti-democracies democratically
One of the most important political tasks of Western democracies is to locally and democratically restrict the global and technological anti-democracies. Thinking and acting democratically with machines, algorithms, robots and AIs involves recognising the existing constitutional and legal systems. The duty is to ensure freedom, equality and solidarity between all life forms and machines. Isaac Asimov’s story “The Bicentennial Man”, filmed in 1999, justifiably asks: how human is the machine already? How mechanical have humans already become? Guaranteeing democracy in the age of apps means collectively thinking up unconditional rights and duties for both humans and machines. In the future, the further realisation of democracy will have to engage with unconditional rights for intelligent machines and unconditional rights for life forms – people, animals, and nature more generally. Last but not least, it is Swissfuture that has long engaged with practical questions of digital democracy. The prerequisite – namely, that machines and life forms be thought of in unison – has only been briefly sketched here. It will have to be fleshed out further, on the Internet.
© Copyright by Regula Staempfli. This is a translation of an essay for SWISSFUTURE, Magazin für Zukunftsmonitoring 02/18.