Are ethical questions part of a technologist's everyday professional life?
In my case, yes. Already as a software engineer I realised that I was interested in technology not for its own sake, but as a means to create things and to address the problems that technology can bring into our lives. At the same time, I was always interested in "the design side of things".
Was there a particular moment when you realised that working as a software engineer or working with computers also involves ethical and moral issues?
During my studies 12 or 13 years ago, we weren't yet working with artificial intelligence but with data mining. During a project with the hospital in Lausanne, I realised that technological progress can also pose problems for many people. In my Bachelor's thesis, I explored this question in more depth. At the time, the SBB (Swiss Federal Railways) began installing ticket vending machines at stations throughout Switzerland. Many people with cognitive impairments such as Alzheimer's couldn't cope with them; they needed a person as a counterpart rather than a machine and a system, and this ultimately severely restricted their mobility.
Did you develop a solution for this in your work?
The software I developed at that time was supposed to train these people to use those systems. That's when I realised that when we push these technologies into the world, we create not only opportunities but also problems. Technological progress is often accompanied by a certain euphoria, and it is important that people who cannot keep up are not forgotten.
With your work, you bring this aspect to exhibitions around the world. What kind of reactions do you get?
When I started working on ethical issues in technological contexts in 2012, hardly anyone was interested in what I was doing. I was heavily involved in "speculative design", exploring what kinds of issues we might face if machines had to make ethical decisions, but there was little interest. This changed in 2017, when various museums began to engage with AI, and questions about programming autonomous vehicles also attracted a lot of attention.
One of your latest projects is "35000 feet", a simulation in which you fly over war zones as a tourist. In another project you combine scenes from the first-person shooter Counter-Strike with Google Earth. What do you want to say with these works?
Yes, I am concerned with the indifference we have here in the West. We proclaim that justice is important to us and tweet for a better world, but at the same time we sit in a plane 10 kilometres above a war zone, biting into our sandwich and looking forward to our holidays on the other side of the world. I don't mean to criticise with these projects; I'm an avid video gamer myself. But we shouldn't take our eyes off what is really happening in the world.
Another thorny question you tackle, in your project Ethical Autonomous Vehicles, is how autonomously controlled vehicles should be programmed.
Yes, it is the best and perhaps most relatable example for illustrating ethical decision-making in machines. How do you deal with a car accident when it happens, and how do you programme for that? It can seem like a very small detail, but it is one that may take years of development and of working with governments to find the right laws on how to implement it. It's a fascinating example that appeals to everyone.
Matthieu Cherubini's works will soon be shown at the Digital Arts Festival. It will take place from 27 to 31 October at various locations in the centre of Zurich. Around 80 national and international artists and artist groups will exhibit and perform. There will be installations, immersive experiences, performances, concerts and much more to discover.
How close are we to an answer to this question?
Some of the optimism shown by tech companies can be deceiving. There are two main groups driving the research and manufacture of autonomous vehicles: on the one hand, tech companies such as Google or Tesla; on the other, traditional car manufacturers such as the Volkswagen Group, for which I work in China. I think the tech companies have a very specific view of the world and believe that sooner or later all problems can be solved with machines. The traditional car manufacturers don't want to miss the boat and make similar promises. Slowly but surely, though, they are beginning to realise that there will not be a solution to this moral dilemma any time soon.
Are we reaching our limits?
There are many misconceptions about what technology can do and about where we stand, mainly because companies spread these misconceptions in their marketing. This is true not only for vehicles but for many other areas as well.
How can progress be achieved?
That is very difficult. I think the decisive impulses have to come from governments. Two or three years ago, Germany became the first country to propose laws on ethical rules for autonomous vehicles. But for defining how a vehicle should behave in an accident, these rules are not enough. In the automotive industry, the goal should be to make cars so safe that accidents don't happen at all. Accidents will continue to happen, but less and less often if we do our job well.
We might not need a solution to the accident question at all?
Yes, we do. As soon as a certain degree of autonomy is standard in vehicles and accidents happen, we will need an answer. The question remains relevant. Governments will probably then have to take action, as the German government did a few years ago, but in a more detailed and realistic way.
Then it might actually be the car industry that is the first to use machines that decide according to an ethical code?
Yes, cars are something many people use on a daily basis; they are a product highly relevant to our Western civilisation. Sure, there are other major players, such as the manufacturers of military robots, but they hardly affect people in the same way.
If we were to allow machines to make decisions, might they not be more moral than us humans? Not motivated by base drives?
Yes, this idea is one of the reasons why this debate keeps being initiated. If machines, detached from emotions, only follow rules or calculate things, then perhaps they really are more ethical than humans. I dare to predict, however, that we humans will continue to revise these machines and equip them with more features, and the same questions will arise again. After all, many are of the opinion that actual "emotions" are an essential part of ethical decision-making, and that therefore only humans can be ethical agents.
37-year-old Matthieu Cherubini is constantly concerned with the question of whether and how technology paves the way to a better world. Privately, he creates socially critical works with code and data, which he presents at exhibitions all over the world. Professionally, the former software engineer, originally from Bex VD, now works as a design technologist for VW, where he is helping to shape the mobility of the future. Cherubini lives and works in Beijing.