
"We are decades away from machine superintelligence"

Whether for private individuals or for companies: computer science has become an important part of our lives. It helps us make decisions and solve problems - from very small ones in everyday life to very big ones like the coronavirus pandemic. In this interview, Donald Kossmann, head of the Microsoft Research Lab, talks about the opportunities, risks and limitations of current technologies, how he sees the future of computer science, and why it doesn't have to be as complicated as it sometimes seems.

Oliver Bosse spoke with Donald Kossmann

You emphasise that lifelong learning deserves particular attention in the world of computer science - from all of us, and especially from companies. Why?
Computer science and digitalisation really do affect everyone today. Everyone should benefit from technological progress. To do this, everyone must have a basic understanding of technology and learn how computers work and how they can help us today.

However, technology is constantly and rapidly changing. It is therefore not enough to develop this basic understanding once and then sit back. You have to stay on the ball and keep an eye on how technology is developing and how it might develop in the future.

Lifelong learning about information technology applies to individuals who want to develop in their profession or who have to make decisions about the use of technology in everyday life, such as choosing apps for their smartphone or using entertainment or security solutions at home. But it also applies to businesses, large and small, that need to make investment decisions.

You wrote the book "The Miracle of Computer Science" for non-computer scientists and young people; it describes the essential ideas and concepts of computer science and how they have developed over time. Why is a low-threshold approach to this topic important?
Computer science is still considered complicated, which is why many people don't engage with it. There are many misconceptions that lead people who are unsuited to computer science to study it and, almost worse, people who are predestined for it not to. The book therefore offers an intuitive introduction to the subject with many everyday examples. Computer science does nothing other than what people have been doing for thousands of years - even without computers. It merely formalises these ideas so that they can be automated with the help of the computer. Through this perspective and the everyday examples, the book aims to lower the entry threshold and create a basis so that non-computer scientists and young people can approach computer science and future developments much more calmly.

Can experienced computer scientists also learn something from the book?
I have also received positive feedback from computer scientists, especially about the chapter on artificial intelligence. The recent, rapid developments in artificial intelligence overwhelm even many computer scientists, and the book provides a simple perspective, with examples that demystify artificial intelligence.

More and more companies are successfully using AI

Companies from a wide range of industries have already discovered the benefits of artificial intelligence. According to a study by Accenture, 12% of them are already so-called "AI Achievers", who clearly stand out from the competition with their advanced AI projects and for whom AI accounts for a large share of their business success. According to the study, the share of "AI Achievers" will rise to 27% in the next few years.


First came the computer, then the internet, and according to you, the next big steps are the cloud and artificial intelligence (AI). How do these two things take us a decisive step further?
From my point of view, the floodgates of information technology have really opened with the cloud - and we are already feeling that today. Data that we collect with PCs and smartphones or on the web is increasingly moving to the cloud. There, we have practically unlimited possibilities to merge and process that data. The cloud can also be used to share experiences or data with others, who can in turn learn from them.

The cloud and AI allow us to solve many problems from experience. There is a saying that "smart people learn from the mistakes of others." This is exactly what the cloud and modern AI allow us to do. At the beginning of the coronavirus pandemic, Western governments, doctors and scientists learned from China's experience and data. Today, people in China are also learning from the experiences we have had in the West. All of this has been made possible by the sharing of data in the cloud - especially in science - and by modern AI methods that allow us to perform scientific analyses on these enormous amounts of data.

You are also convinced that mutual understanding between computer scientists and other professional groups will become increasingly important in the future, especially in relation to the topic of AI. Why?
For computer scientists to contribute to improving the processes of other professions, they need a basic understanding of those professions' activities. For example, they need to understand how a surgeon works in order to develop technology tailored to that work. Conversely, the surgeon must equally understand the essential concepts of computer science - or of AI specifically. The surgeon must ultimately decide which technologies to use and how. True innovation comes from the collaboration between computer scientists and surgeons.

At Microsoft you have to think big. You do that with your vision that not only will the whole world become a computer, but that there will be just one big computer for everyone. What are the considerations behind this?
Today we live in a world where everything is connected, and that blurs the boundaries. Take a car as an example. A car contains more than 100 computers: the fuel injection system, the navigation system, the controls for the window regulators, and so on. These systems are connected and disappear into the overall concept of "car". In the end, we perceive driving a car as one big, joyful experience and forget the complexity of the 100 computers and other technology under the bonnet. That's how technology should be.

If we think big, we should extrapolate from the car to the planet. Don't we all wish that our life on this planet were one big, joyful experience - and that we only had to look at the complexity of the subsystems when needed, when something is broken? To do this, we need to make the world programmable, just as a telephone or a car is already programmable today.

Could a big computer also be capable of solving the world's big problems - let's think of pandemics or climate change?
Ultimately, only humans can solve the world's big problems. There are no silver bullets and, as with the coronavirus pandemic, humans must decide how to balance the personal liberties of the population against the protection of at-risk groups.

But the computer - and especially the cloud, modern AI and the "world computer" - can be an important tool and aid for people. As with the coronavirus pandemic, and in dealing with the consequences of climate change, computers can help process experience and prepare decisions, so that humans can focus on their real task: asking the right questions and making the best possible decisions based on our desires and value systems.

From the opportunities to the risks - where do you see them in relation to your vision, and how can they be avoided? The term "transparent human being" immediately comes to mind.
When you talk about Big Data and modern, data-oriented AI, there are inevitably concerns about protecting the data that drives these technologies. The more digital data we have, the better these technologies work and the greater the social and economic benefits of these technologies. But the more digital data we collect, the higher the risk that this data will be misused.

Weighing up the benefits and risks of modern information technologies, in particular, is a key reason why computer scientists and non-computer scientists need to work together, and why non-computer scientists need to develop a basic understanding of computer science. How can society decide on the use of technology if it does not understand its benefits and risks?

The good news is that technological progress can also help to reduce the risks and create more transparency. At Microsoft, we consider privacy a fundamental human right. To this end, the Microsoft scientist Cynthia Dwork, for example, has developed a promising technology called "differential privacy". It mathematically limits how much can be learned about the underlying data from the results of statistical analyses. Each person can thus decide for themselves how much privacy they give up and, accordingly, how much benefit can be gained from their data.
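
To make the idea concrete, here is a minimal sketch of the Laplace mechanism, one standard way of achieving differential privacy - an illustration only, not Microsoft's implementation. The function name, the example data and the value of epsilon are assumptions made up for this sketch.

```python
# Minimal sketch of the Laplace mechanism behind differential privacy.
# Everything here (function name, data, epsilon) is illustrative.
import numpy as np

def private_count(values, threshold, epsilon):
    """Release a noisy count of how many values exceed `threshold`.

    Adding or removing one person changes the true count by at most 1
    (sensitivity = 1), so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy for this single query.
    """
    true_count = sum(v > threshold for v in values)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: how many measurements lie above 100?
measurements = [87, 94, 103, 121, 98, 110]
print(private_count(measurements, threshold=100, epsilon=0.5))
# A smaller epsilon means more privacy per person, but a noisier published result.
```

The parameter epsilon plays exactly the role described above: it mathematically caps how much the published result can reveal about any one person's data.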

Where do you see the social responsibility that a company like Microsoft has as a leader in the development of new technologies like AI, and how do you live up to it?
At Microsoft, we recognise that AI systems can be used for both desirable and undesirable purposes and that their use can have unintended consequences. As AI systems become more sophisticated and play a growing role in people's lives, we believe it is essential to define clear principles for their development, use and application. According to our guidelines, they must be fair, reliable and safe, private and secure, inclusive, transparent and accountable.

Let's take a step back. We have talked a lot about visions. Where do we stand in this development today?
Over the decades, the ambitions for AI have become smaller and smaller and the technology has become better and better. We are now in the fourth serious attempt to make AI usable as a technology. This time, however, there is hope that expectations have been scaled down enough, and computer science has advanced enough, that we can expect real results from AI. But we are still decades away from the vision of machine superintelligence conveyed in Hollywood films like The Terminator. Machines can already solve simple, targeted tasks today, for example in image recognition, but modern AI is simply still too inefficient to come close to human intelligence. To stay with image recognition: for the same simple task, for example recognising a cat or a dog in a photo, the computer needs about 1,000 times more energy than the human brain.
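
Purely as an illustration of such a simple, targeted task - this is a generic sketch, not something discussed in the interview - a pretrained image model can label a photo in a few lines. The choice of torchvision's ResNet-18 and the file name "photo.jpg" are assumptions for this example.

```python
# Sketch: labelling a photo (e.g. a cat or a dog) with a pretrained model.
# Assumes torch/torchvision/Pillow are installed and "photo.jpg" exists locally.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet18_Weights.DEFAULT        # ImageNet-pretrained weights
model = models.resnet18(weights=weights).eval()  # small model, inference only
preprocess = weights.transforms()                # matching resize/normalisation

image = preprocess(Image.open("photo.jpg")).unsqueeze(0)  # add a batch dimension
with torch.no_grad():
    probabilities = model(image).softmax(dim=1)

best = probabilities.argmax(dim=1).item()
print(weights.meta["categories"][best])          # e.g. "tabby" or "golden retriever"
```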

If you had to give a company a tip on what strategy it should adopt in terms of technology and how it should position itself, what would it be?
Satya Nadella once said that every company is becoming a software company. What he means is that every company must become an innovator in the field of digitalisation. To give an example: 50 per cent of car breakdowns are caused by software errors. But overall, the number of breakdowns has decreased over the years - thanks to software! Software causes breakdowns, but it prevents many more. Just like car manufacturers, bakers and surgeons must constantly consider how digitalisation can improve their products. If they don't, the competition will.

Finally: you once said in an interview that one of your biggest problems as a researcher is defining the right goal. Is there nevertheless one you can share with us?
My big goal is to democratise technology. As a company, our big goal is to ensure that our customers benefit as much as possible from all technologies - today and in the future. We provide the platforms so that our customers can use the latest technologies as easily as buying milk in the supermarket today.

 


About Donald Kossmann

Donald Kossmann (born in 1968) is a distinguished scientist and the Director of the Microsoft Research Lab in Redmond, USA. Prior to this, the native of Germany worked in teaching and research as a professor of computer science at ETH Zurich. His latest book, “Wunder Informatik” (The Miracle of Computer Science), shows that teaching IT is a matter close to his heart. With it, he hopes to bring children and young people closer to the world of IT with the help of many everyday examples.
