Mr. Thomsen, you are a futurologist. How far into the future do you look?
As a futurologist, I'm mainly interested in the next ten years. That sounds far off at first, but those 520 weeks – the unit we actually think in – pass very quickly. I'm often asked about a more distant future, such as "How will we be working in 2050?", but that is firstly hard to predict over such a long time span and secondly irrelevant to many people – because by then, they won't be working anymore. In ten years, on the other hand, most of us will still be around.
What do you think is the topic that will shape this next decade the most?
The topic that has been occupying me the most for several years now: Artificial Intelligence (AI). I believe we are dealing with a decisive turning point in the history of mankind – comparable to the invention of the printing press or the steam engine. The spread of AI will have an enormous impact on the job market: considering everything this technology could take over in the next ten years, we will soon have to fundamentally redefine what we mean by the term "work" in the first place, both personally and socially.
How exactly will the job market change?
Many of the jobs people are employed for today consist mainly of working through the same routines over and over again. In ten years' time, a great many of those routines will be handled much faster, better and more cheaply with the help of AI. That makes AI a double-edged sword. On the one hand, there are many people who are completely overloaded with routine tasks and therefore never get around to doing anything creative. Many of them would say, "bring on AI." On the other hand, for people whose jobs consist almost entirely of routines, this scenario raises a very real fear of losing their jobs.
And understandably so. After all, this new type of work would define a completely new profile of requirements for employees ...
Yes, it would. And other trends are related to this. We have an ever-increasing shortage of skilled workers: companies are finding it more and more difficult to find employees who, with their knowledge, talents, curiosity and creativity, are able to keep the company and its products sustainably competitive in a global competition for innovation. After all, that's what it's all about these days: the fastest and most efficient implementation of beneficial and creative ideas.
In order to survive in this competition, employees must not constantly be burdened with unproductive routines that do nothing at all for the company – doing accounts, for example. We should treat artificial intelligence as the ideal tool to take over these tasks.
Speaking of treating AI as a tool – Google's LaMDA is making headlines right now: an AI that is said to have developed consciousness. What impact would it have on the business world if companies had to treat intelligent systems as conscious?
This is a question we are currently debating intensively at our company, "future matters." The dialog that took place between human and machine in this context is extremely fascinating. You have to be aware, though: we may get the impression that consciousness has formed in the AI, but in the end it is still a simulation and imitation of human emotions. These are generated by a digital neural network that has no biological or even physical component. You could shut it down and power it up again later, or even transfer what it has learned to other computer networks – something that doesn't work with a biological neural network like the human brain.
Wouldn't such a shutdown still be unethical – even if it was "just" a simulation?
That's a legitimate philosophical question that will definitely occupy us in the years to come. However, I am more interested in what the practical use of this technology might mean for us as humanity. Over the course of our cultural history, we humans have developed countless tools and technologies that have made our work easier and thereby significantly shaped our progress, our culture and our lives. So far, however, this has mainly applied to mechanical work, power generation, processes and algorithms.
Moreover, for a long time we were left to our own devices when learning, and therefore became smarter only slowly: we had to gain experience ourselves, reflect on it, and draw our own conclusions. Only with time do we make sense of things we did but didn't understand. Our aspiration in life is really to reach that state earlier – in other words, not to be smarter only at 60, but perhaps already at 35.
So how could a system like Google LaMDA help us do that?
Imagine if we could combine our ability to learn with artificial intelligence – this is what I have started calling "augmented intelligence." It would be as if we all had a personal AI assistant who has been with us since our school days. If you don't understand something, this AI can explain it to you in the way you understand it best. The AI would also help you correct mistakes that you make over and over again. For example, if you wonder why some people treat you dismissively, the AI could point out why and train you to communicate better. Or the AI mediates in relationship problems. Imagine what we could do with that as a society and as individuals!
In what time frame do you believe such a development to be realistic?
Good question. In futurology, we always divide the establishment of a technological innovation into several phases. Right now, we're still in the first phase – we have systems like Siri or Google Assistant that we're just starting to have dialogues with. That's a big step – not long ago we were only giving them commands.
We expect that by 2025 we will be conversing with all kinds of things in our environment – our smartphones, cars, or our apartments and houses. The house then says, for example: "Hey, I noticed that you always leave the light on when you go out. Do you want me to turn it off for you? That way we could save about 40 francs of energy a year" – things like that. In the second half of the 2020s, we will start to get used to the fact that our PC is not just a device on which we receive emails, but can also do most of them for us, make suggestions, coordinate appointments and learn through our interactions in a similar way to a human assistant today.
In this phase, we will have to regulate and negotiate many things anew – for example, who owns the data and the intelligence created from it through pattern recognition, and who is allowed to use them. Both data protection and the usability of data matter here, because a system that knows my preferences and habits so well is naturally an El Dorado for anyone who wants to sell me things. At the same time, however, it also forms the basis for a genuinely beneficial augmented intelligence.
And then comes the time when we interact with an AI as if it were a human?
In the first half of the 2030s, AI will take over most of the tasks now performed by humans across large parts of our industries. Often, we won't even be able to tell whether we're talking to an AI about a problem or to a human being – you can already see early signs of this in today's chatbots and call centers. That's the big transformation – and the question arises: what about the people?
Exactly: What would you say, what about the people?
Many say this development would be an absolute disaster for our society. Personally, I see it a bit differently: I think we will have to take care of many challenges and problems in the future, and redefine how we use these technologies. Without AI taking certain jobs off our hands, we simply couldn't manage it. We would never get around to finally getting our climate under control, revising our energy supply, reorganizing our mobility and resources, and so on. There are countless areas that too few people can take care of right now because they are busy doing their expense reports from last week.
In the very best case, this is what happens: we use AI to make better use of our creative productivity and empathy than we have in the past. The prerequisite is that we fairly distribute the gains that come from using artificial intelligence. We can't have 1 percent of humanity becoming enormously rich because they can use AI, while 99 percent suffer because they no longer have a job. That's why I'm a big fan of something like a digital dividend – giving some of the productivity gains from AI back to people, for example in the form of an unconditional basic income. This could also counteract the social divisions that such drastic upheavals have often entailed in the past.
Another tech trend that promises to transform society is the metaverse ...
I actually don't think this is a particularly big upheaval, just an expansion of our possibilities. There are many ways in which we can use the metaverse. In the best case, we use it again to make ourselves smarter faster. For example, because we understand things faster through virtual reality than we could in other ways.
And the metaverse as a place to work? How promising is this vision?
In fact, we've already gone through a great social experiment in this area in recent years – with Covid, the rapid switch to working from home and the partial virtualization of the working world. What this experiment showed above all is that the virtual workspace cannot completely replace the physical one.
Elon Musk recently said that all employees would again have to be present on site at the company for at least 40 hours a week, otherwise they would run the risk of being dismissed. This caused a huge outcry. However, many fast-innovating and dynamic companies have a similar attitude: their offices are like a beehive. A large part of brainstorming, problem solving and creativity thrives on the social and direct interaction between people.
Moreover, there is the feeling we develop for our colleagues at work: we all get frustrated or stuck from time to time. It's much easier to recognize that when you pop your head into the office next door than when you only meet in virtual space.
Nevertheless, the role of the physical workplace will change.
Definitely. When companies ask me what the workplace of the future will look like, I say that in ten years, a company will be a kind of giant adventure playground where teams come together, you discuss things with customers and get things rolling with partners or service providers. Phases in which you rest will alternate with phases in which you work on a solution with a high level of concentration and sometimes argue, before you finally have the uplifting feeling of having mastered a challenge together.
With all these developments in the modern working world, what qualities will employees need to have in the future?
In the last hundred years, employers needed large numbers of workers on the shop floor and in the offices who could do what the machines could not (yet) do on their own: making sure the machines worked and handling the remaining processes as batches of routine tasks.
In contrast, we are now entering an era where different personality traits, talents and skills of people need to work together to primarily create innovation and customer loyalty. I think this has been somewhat underestimated so far. Everyone has a myriad of skills, different forms of empathy and interests. But so far, we have far too many people who have been trained along the lines of, "The only thing you have to do every day is work through exactly these ten tasks or check these three things, and when you're done, you can go home." Thus, an incredible amount of the skills that people actually have within them are never used during working hours. In the future, however, every company will need people who understand what their customers want, who can resolve conflicts, who can bring creativity to the table.
So the role of the workplace will change, as will the kind of work people do and the demands they have to meet. What about working hours?
In the next ten years, more and more companies will adopt working models completely different from those we know today. For example, there is the question of whether we will still need the 40-hour week in the future. For many, this is a completely new idea, but this precise definition of working hours hasn't actually been around for very long – only since the invention of the steam engine. Once we had introduced machines that could theoretically work 24 hours a day, 7 days a week, we had to define how many hours of work we considered reasonable for people.
If you had asked a farmer in the 15th century how many hours he worked per week, he would have said, "I don't know, it depends. Less in winter than in summer, when the days are longer. When the cow calves, I also work at night, when there is nothing to harvest, I do other things at home."
So we could ask ourselves: Is it reasonable for people to sit at a desk from 8:00 a.m. to 5:00 p.m. every day, typing numbers into a computer? Or would it be enough to assign tasks and responsibilities that people then work on, but no longer keep records of how long they do it?
Does this mean, thought through to the end, that at some point we will no longer work at all, or will hardly be able to distinguish between work and leisure?
To be honest, I already find it difficult to say what really is work and what isn't. Our interview now – is that work, or do I just do it because I find it interesting, and would also do it on a Saturday afternoon?
In my opinion, companies in the future will be more like value communities that people join and in which a group is found that has similar values and wants to achieve something together. We will no longer be measured by how many emails we answer in a day, because an AI can do that for us. It will be about how I can do something in my environment, with my fellow human beings, that has value – for me, for my company, for society.
That sounds like an optimistic vision of the future. But are there also issues related to AI that you worry about as a futurologist?
The opportunity right now is that AI will enable each of us individually to get smarter much faster. The danger is that big data and pattern recognition can also be used manipulatively, as we have already painfully experienced. In addition, there is a great deal of uncertainty and many latent fears among the population about what AI means for all of us. This fear already forms a welcome breeding ground for extremist and radical currents in politics and society.
So we may be entering a phase where there is an incredibly large societal divide between those who see the potential of AI and want to deal with it progressively and others who want to stoke fear and prevent progress. This is dangerous, because it divides and paralyzes a society when people no longer argue and dispute with facts, but with whipped-up emotions.
The second danger I see is the addictive potential associated with the metaverse. This could create a new social subgroup.
And third, there is a great likelihood that the split within the economy into innovators and conservatives will be so strong that only a few of today's companies will be able to cope with the new conditions in international competition and survive.
How can we address these challenges?
Unfortunately, I have the impression that we are currently all but ignoring this important topic. Politicians and the media are failing to give citizens a real vision of how we can deal with these issues, and to initiate and establish a vitally important social discourse. But that is exactly what we will need to do: we must find some form of consensus if we don't want to leave it to Meta or Google to define the framework of our future.
Futurologist Lars Thomsen (*1968) is considered one of the most influential experts on the future of energy, mobility and artificial intelligence. His organization "future matters," which he founded in 2001, advises companies, institutions and government-related bodies on the early identification of future opportunities and upheavals arising from economic, technological and social trends. Thomsen's clients include more than 800 companies from the mobility, energy and financial industries, among others. Lars Thomsen lives with his family on Lake Zurich in Switzerland.