

AI & Kant – a philosophical spotlight


New technologies are shaping our everyday lives to an increasing extent. Take a look at the public discourse: it’s no less than a mythological minefield. This applies, in particular, to topics such as artificial intelligence and robotics.


By Ana Campos

Fuelled by the clickbait logic of the tabloid press and popular culture, dystopias are constantly conjured up. While some fears are justified, others are unfounded – in other words, not properly thought through. It is important to steer the focus away from the machine and towards the human, and to explore a few basic questions along the way.

One of the first people to do just this – and radically focus on the human being – was the philosopher Immanuel Kant. In the spirit of the Enlightenment, the human being, rather than God, was to stand at the centre of the world. Kant formulated his famous four basic questions of philosophy, which deal precisely with this issue:

  1. What can I know?
  2. What should I do?
  3. What may I hope for?
  4. What is a human being?

Even though more than 200 years have passed since then, Kant's basic questions have lost none of their validity – quite the contrary. Today they are more important than ever, especially given the widespread misconception that technology is the cure for everything and that all we need to do is use it accordingly. These fundamental questions not only can, but must be answered anew, especially with regard to the use of new technologies such as artificial intelligence.

 

What artificial intelligence is (not) and can(not) do ...

On to the first question: What can I know? With regard to artificial intelligence, two aspects need to be distinguished here: our knowledge of it and the knowledge we gain by way of it. The former often carries a negative charge, as outlined in the introduction. A large-scale study in Germany in 2019, for example, showed that three-quarters of all Germans see Arnold Schwarzenegger's "Terminator" as the embodiment of artificial intelligence, followed by R2-D2 from Star Wars. Behind this, it seems, lies the fear that armies of robots will one day take over the world. With respect to the first question, it is therefore necessary to dispel this myth from the outset.

What can AI actually do? In principle, artificially intelligent systems do nothing more than make predictions based on large pools of data. Put simply, they mimic the human brain, which learns from experience. What distinguishes artificial intelligence from a human brain, however, is that AI needs far more practice to recognise something correctly. While a two-year-old child only needs to cuddle a cat once to know what a cat is, AI needs hundreds of thousands of pictures to accomplish the same. Furthermore, artificial intelligence always relies on input from humans; it does not learn by itself. We train AI with the appropriate texts, numbers and images, and point out where it has detected or linked something incorrectly. The most important difference, however, is this: AI has no consciousness of what it does.

From these considerations, we can conclude the following: AI will never be smarter or more powerful than humans. It relies on humans – in fact, it is completely dependent on them.
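To make the idea of "predictions from labelled data" a little more tangible, here is a minimal, purely illustrative sketch in Python. The toy features, numbers and labels are invented for this example; the point is simply that the model only "knows" what humans have already labelled, and that its output is nothing more than a statistical prediction.

    # Minimal sketch (illustrative only): the features, numbers and labels are invented.
    # "Training" here just means fitting a statistical model to human-labelled examples.
    from sklearn.linear_model import LogisticRegression

    # Each example: [weight in kg, ear length in cm]; label 1 = "cat", 0 = "not a cat".
    # The labels are the human input – without them the model learns nothing.
    X = [[4.0, 6.5], [3.5, 7.0], [30.0, 12.0], [25.0, 11.0]]
    y = [1, 1, 0, 0]

    model = LogisticRegression().fit(X, y)

    # The "intelligence" is only a prediction derived from those labelled examples.
    print(model.predict([[4.2, 6.8]]))  # -> [1], i.e. probably a cat

And when the model gets something wrong, it is again a human who supplies the corrected label and retrains it – exactly the kind of feedback loop described above.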

 

… and how this is a benefit to us

The second aspect within the first question concerns the knowledge we gain through artificial intelligence. What can we achieve with AI? Artificial intelligence helps us to draw information from very large pools of data, from which to derive knowledge and insights, as the following diagram illustrates.



Source: Trivadis

 

Let's take an example from science: Using an algorithm, researchers at the Lawrence Berkeley National Laboratory in the United States succeeded in finding new correlations amongst three million scientific papers in 2019. It goes without saying that researchers could never have read all this work on their own. Artificial intelligence can therefore help us to use data to become smarter. But human interaction is always needed here too – such as feeding AI with new material or drawing its attention to mistakes.
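To give a rough idea of how an algorithm can surface such correlations in large text collections – a toy sketch only, not the Berkeley team's actual method – one can train word embeddings on a corpus and ask which terms appear in similar contexts. The mini-corpus below is invented and far too small to be meaningful; real studies work with millions of documents.

    # Toy sketch: word embeddings surface co-occurrence patterns ("correlations") in text.
    # The mini-corpus is invented; a real analysis would use millions of documents.
    from gensim.models import Word2Vec

    corpus = [
        ["thermoelectric", "materials", "convert", "heat", "into", "electricity"],
        ["semiconductor", "materials", "conduct", "electricity"],
        ["heat", "flows", "through", "thermoelectric", "semiconductor", "devices"],
    ]

    # Tiny parameters, because the corpus is tiny.
    model = Word2Vec(corpus, vector_size=16, window=3, min_count=1, seed=1)

    # Words used in similar contexts end up with similar vectors – at scale,
    # this is how unexpected links between concepts can come to light.
    print(model.wv.most_similar("thermoelectric", topn=3))

The human side remains essential here too: someone has to choose and curate the texts, check the suggested links and weed out spurious ones.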

 

Humans remain analogue – and the most important of all

Now let's look at the fourth question: What is a human being? If Ray Kurzweil, Director of Engineering at Google, is to be believed, in 20 to 30 years we will be able to scan our brain completely, load it onto a computer and let it live on as software. Forecasts such as these rest on the assumption that all aspects of human behaviour, thinking and experience can be modelled as information processes and simulated digitally. However, they ignore one important point: only we humans have the ability to give meaning to someone or something. Undeniably, new media will influence our social relations – but to what extent this happens is up to us, not them. This primeval ability to assign meaning is inextricably linked to human characteristics such as vulnerability, trust, intuition and empathy, none of which can be programmed either. A machine, no matter how intelligent it may be, will never be able to perceive a person holistically, with all their shades and contradictions – even if it knows their digital footprint. Our gut feeling and our sense of subtlety cannot be imitated.

New technologies are a means to an end – no more and no less. They help us simplify our daily lives and our work. My iPhone and Microsoft Surface, for example, are important companions in my everyday life. But they are not able to generate meaning by themselves. That's my job. It is therefore essential that we continue to consistently put humans first.


 

Transparency, justice and fairness

The remaining two questions – “What should I do?” and “What may I hope for?” – concern ethics, in our case digital ethics. Although they have been much discussed, they cannot be answered in a clear-cut fashion. ETH Zurich's Health Ethics and Policy Lab, for example, found that the 84 guidelines it analysed did not share a single ethical principle common to all of them. Five principles, however, are at least mentioned in more than half of the documents: transparency; justice and fairness; prevention of harm; responsibility; and data protection and privacy. We must continue to talk about this at all levels of society, because new technologies affect all of us, and it is up to us to jointly define how we want to handle them. Intelligent robots, which are increasingly being used in healthcare, are a current and very striking example: to what extent should they replace human interaction – or not?

To conclude, and as some food for thought, I would like to bring up the artwork entitled “Edmond de Belamy”.


Source: Christie's

 

It was created by an artificial intelligence in 2018 on the basis of 15,000 portraits. It was then printed and auctioned at Christie's – where it sold for 432,500 dollars, far outselling works by artists such as Andy Warhol and Roy Lichtenstein. Not only is it the most expensive computer-generated image of all time, but, as the Viennese art historian Patricia Grzonka aptly puts it, “this is a symbol for a new human-machine relationship, in the face of which the art market itself has now capitulated”.
