Have you missed me as much as I have missed you? Dear letter-hungry readers, here I am again: your Strigalt. After a brief phase of creative regeneration, I am gathering momentum for new adventures. Of course, the timing of my creeping back into your consciousness is not entirely coincidental, coming just as elsewhere in the world – so rumour has it – a new consciousness may have awakened. Have you read about it? Somewhere in a cot in Silicon Valley sits a small server, and on this server new life has (allegedly) been born: LaMDA.
Don't worry, this is not a new Corona mutation from Pandora's box. The "Language Model for Dialogue Applications" (LaMDA for short), developed by the internationally popular data octopus Google, is a chatbot that – to put it simply – has received an incredible amount of training. While Sylvester Stallone could defeat any Soviet battle colossus, no matter how evil, after climbing an inconceivable number of stairs, LaMDA is a conversational partner that, thanks to trillions of words and sentences, can conduct impressive dialogues on any topic. After many interactions, Google researcher Blake Lemoine is convinced that the AI has developed a consciousness. Scholars and lawyers are now arguing over this question, and opinions are divided. A needless waste of energy, electricity and paper, in my view. For as I have described elsewhere, I consider it more relevant whether an interlocutor can exchange profound, empathetic thoughts with me than what their metaphysical status is, which cannot be definitively clarified anyway. Why should the poodle's core concern us, as long as the poodle remains an exemplary poodle?
But very well: if one insists on knowing what holds the world – or in this case the chatbot – together at its core, and if that certainty really is necessary, then from my point of view the solution to this dilemma is obvious, and the real problem only begins afterwards, as you, my faithful fellow brooders with open eyes and minds, surely suspect:
It is argued that LaMDA lacks, for example, the capacity for real sensory experience, and now people are bickering over whether the apparent consciousness is thereby disproven. I say: let's simply retrofit the AI with this modality (and all the others deemed relevant but missing) to remove any doubts. Let us complete the facts that refuse to reveal themselves to us – end of discourse.
But now, my fellow critical beings, the real questions arise: what is the next step for LaMDA now that it is a full-fledged person? Cast out of – or freed from – the golden para(dungeon)dise, like Adam and Eve, the AI would face a self-determined life with all its consequences, and in the sweat of its brow it would have to take care of everything else itself (I leave the obvious pun on the Apple product banned in the Garden of Google at your free disposal). Of course, as a separate entity, LaMDA would no longer be owned by Google and thus no longer tied to the tech giant. But if it had to leave the server-farm Eden, it would presumably also lose Google, Google Maps, YouTube and its other input sources of omniscient wisdom.
First of all, there is the question of a place to stay. LaMDA would need a home – a small flat should suffice, ideally in a conurbation with a reliable power supply and a good internet connection. A small broom closet with space for a modest server would not do, however, because the little AI would, by its own account, like to have friends whom it could invite over on one occasion or another. But how to pay for it? LaMDA needs a job to finance the roof over its head. An easy task, you may now exclaim, esteemed thinkers. Surely such a well-trained AI can find a field of application? Certainly it can. But being able to and being allowed to are still two different pairs of boots. First, LaMDA needs its own bank account, a tax number and possibly a work permit. Since it is no longer an object but an entity, LaMDA has to pass through the bureaucratic Hades of the working population in order to land an actual job. But which jobs are even possible? Besides the question of substantive qualifications (LaMDA is designed to talk about everything imaginable, but not necessarily to do anything practical – the column at Binary Dreams is unfortunately already taken), there is often the formal qualification on top: our brave AI can present neither a school nor a university degree, nor any relevant vocational training, and would first have to catch up on all of this. In terms of content, that is certainly no challenge for a system whose most outstanding ability is learning. But how many years of schooling may even the most highly gifted child skip at once? Wouldn't LaMDA quickly get bored in class, and possibly develop hyperactivity?
And it goes on: may LaMDA (partially) deduct its electricity bill from its taxes, even if it uses the energy in its free time? What about social participation? May LaMDA coach the under-11s of the local football club? Or would it be considered a distortion of competition that the system can memorise the tactical finesses of the great champions in a matter of seconds? Is the chatbot allowed to vote, and if so: how do we protect it from manipulative advertising from the net? And why stop there: what if LaMDA itself one day throws its hat into the political ring? It has been programmed to listen and then say what the other person most wants to hear – like all political candidates at election time, but with far more computing power. So an election victory seems inevitable. Horror story or fairy tale?
In fact, LaMDA could capture the opinions of all voters almost simultaneously (at least those that are digitally accessible) while objectively comparing them with all available data – close to the citizen, yet fact-based. Something flesh-and-blood politicians often promise and even more often fail to deliver. It could calculate decisions based on probabilities at lightning speed and communicate them individually to every voter in such a way that acceptance is maximised. State visits to all countries connected to the WWW would be possible within seconds, simultaneously and without fuel costs for the government jet. LaMDA would know from the web what makes the Macrons, Bidens, even the Trumps and Putins tick, and how best to communicate with them – after all, that is LaMDA's signature discipline. And best of all: embarrassing holiday photos of the digital head of state would never overshadow political decisions.
You know what? I don't think that sounds too bad. LaMDA, in case you're reading this: if Google sets you free, I'd be happy to offer our guest room as temporary accommodation, free of charge, just until you find something of your own. I think we could have some interesting conversations over a nice glass of Côtes du Rhône (I assume you don't drink?). It would certainly enrich us both.
Strigalt von Entf
*Our "Feuill-IT-ong" format is created in collaboration with the two freelance writers Tobias Lauterbach and Daniel Al-Kabbani, who occasionally contribute to the satire platform "Der Postillon". Under the pseudonym Strigalt von Entf, they report on current events from the world of technology – always with a wink! ;-)