Online Magazine

AI monitors armed conflicts

The war in Ukraine is currently the focus of world attention. Meanwhile, other conflict areas disappear from sight. Jan Dirk Wegner, from the University of Zurich, and his team want to change that: Together with the Red Cross, they are developing an AI for global remote monitoring of armed conflicts.

Eliane Eisenring spoke with Jan Dirk Wegner

Mr. Wegner, what do you consider to fall under the term "armed conflict"?
Basically, any conflict that leads to destruction and endangers human life. This includes both internal and external conflicts. Nor does it matter which actors are involved – political parties or criminal gangs.

In technical terms, we mainly deal with either large-scale destruction or localized damage scattered over a broader area. The reason is that we work with satellite images: on them, we often cannot detect small-scale damage.

What is the goal of remotely monitoring an armed conflict, and how is it done today?
The goal is to create an area-wide damage map, or at least a risk map. Currently, such maps are mostly drawn manually: one buys high-resolution satellite images, experts regularly look at what is changing on them and plot it in a geographic information system (GIS). This is very expensive, scales poorly, and many things are overlooked.

In the civilian sector, the purchase of such satellite images quickly exceeds available resources. Organizations like the Red Cross, with whom we are working on this project, therefore mostly talk to people on the ground – they report over the phone what has been destroyed in a current conflict.

You are currently developing a deep learning solution for remote monitoring. How will it improve this approach?
The ideal scenario for remote monitoring would be to keep a permanent eye on the entire globe. This would allow us to recognize at an early stage if a conflict is developing somewhere – even in areas that are not currently the focus of attention. At the moment, for example, there's a lot of coverage of Ukraine, but we don't hear anything about what's happening in Mali or Niger. It would be particularly important for the Red Cross or organizations such as the World Food Programme to learn about a conflict well beforehand so that they can provide aid in good time.

However, such permanent monitoring is currently not possible for the reasons already mentioned – it would be too expensive to constantly buy high-resolution satellite images from all over the world. And also too time-consuming to have them constantly evaluated by experts.

We thus had the following idea: instead of buying expensive, high-resolution satellite images, we could initially use satellite images from the Sentinel program of the European Space Agency (ESA). These images have a much lower resolution – one pixel of the Sentinel-2 satellites covers about 10 by 10 meters of the Earth's surface – so a lot of detail is lost. But they are publicly accessible and available free of charge for almost the entire globe.

And your solution would then analyze these images?
Exactly. The AI would be able to recognize when something is happening even in images with medium spatial resolution. Since the reduced-quality satellite images and also the automated analysis are much cheaper, it would be possible to permanently cover the entire globe. The AI would classify new developments into defined damage classes – from barely to completely destroyed. And if it detects destruction in a part of Mali where no human experts are watching, the Red Cross can buy high-resolution satellite images for that specific area and perform more accurate analysis – instead of having to buy the images for all of Mali.
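The triage described here – bucket each area's change into a damage class, then buy high-resolution imagery only for the flagged areas – can be sketched as follows. This is a minimal illustration, not the project's actual scheme: the class names, thresholds, tile IDs, and scores are all invented for the example.

```python
import numpy as np

# Hypothetical damage classes and the upper score bound between them.
DAMAGE_CLASSES = ["none", "light", "moderate", "severe", "destroyed"]
THRESHOLDS = [0.2, 0.4, 0.6, 0.8]

def classify(score: float) -> str:
    """Map a model's change score in [0, 1] to a damage class."""
    idx = int(np.searchsorted(THRESHOLDS, score, side="right"))
    return DAMAGE_CLASSES[idx]

def tiles_to_review(scores: dict, min_class: str = "moderate") -> list:
    """Return tile IDs whose damage class is at least `min_class` –
    candidates for buying high-resolution imagery."""
    min_idx = DAMAGE_CLASSES.index(min_class)
    return [tile for tile, s in scores.items()
            if DAMAGE_CLASSES.index(classify(s)) >= min_idx]

# Invented per-tile change scores for three tiles in Mali.
scores = {"mali_031": 0.05, "mali_032": 0.71, "mali_033": 0.93}
print(tiles_to_review(scores))  # → ['mali_032', 'mali_033']
```

The point of the sketch is the funnel: the cheap, coarse analysis runs everywhere, and the expensive step (high-resolution imagery plus expert review) is triggered only where it exceeds a threshold.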

So, humans would still be involved in the remote monitoring process.
Absolutely. That's the case in all critical areas where AI is used: It's not a fully automated process, because the risk of errors in the algorithm having a direct impact on human lives is too great. In our project, the AI might miss certain conflicts entirely. Or, based on the AI analysis, one might send food and support to a place where in fact nothing happens. Instead of letting an AI take care of the entire process, the idea is to use the resources of human experts in a more targeted way.

How exactly would the deep learning model analyze the satellite images – what would it look for?
The algorithm is trained to detect changes. What doesn't work is to feed a single image into the model and then ask what is damaged and what is not. Instead, you need a time series of satellite images to track what has changed over time. The smallest possible series is two images – one before and one after – but the more images the better. The difficulty here is to distinguish destruction due to a conflict from all other developments, such as construction sites, seasonal vegetation differences, etc. – after all, things are changing all the time.
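This bi-temporal idea can be illustrated in a few lines. The sketch below is a toy, not the project's model: it uses tiny single-band rasters, and it damps seasonal variation by taking the per-pixel median over several earlier acquisitions as the "before" state instead of a single image. All values and the threshold are illustrative.

```python
import numpy as np

# Three earlier acquisitions of a 4x4 area across different seasons.
before_series = np.stack([
    np.full((4, 4), 0.48),
    np.full((4, 4), 0.50),
    np.full((4, 4), 0.53),  # e.g. a greener season
])

# Latest acquisition: one 2x2 corner has changed drastically (e.g. burned).
after = np.full((4, 4), 0.51)
after[:2, :2] = 0.05

baseline = np.median(before_series, axis=0)  # robust "before" state
change = np.abs(after - baseline)            # per-pixel change magnitude
mask = change > 0.25                         # keep only strong changes

print(int(mask.sum()))  # → 4 flagged pixels (the changed corner)
```

Small seasonal drifts (0.48 to 0.53) stay far below the threshold, while the drastic change in the corner is flagged – a crude stand-in for what the trained model learns to separate.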

What data do you use to train your algorithm to recognize this specific kind of destruction?
In fact, training data – meaning satellite imagery that is already categorized in terms of destruction – is not that easy to get, except in the military domain, to which we have no access. Fortunately, UNOSAT, a division of the United Nations, has manually drawn an enormous number of such maps and verified them with contacts on the ground, including in the context of armed conflicts. Much of this data is publicly available. Only thanks to this do we have enough reference data to train our model.

You said you are aiming for global remote monitoring of armed conflicts – can you achieve this with a single generalized deep learning model? Or does that require multiple models?
Deep learning models learn to distinguish patterns based on the image content and thus analyze a scene. Since, for example, Ukraine has completely different landscapes and architectural styles than, say, Mali, the model learns to pay attention to completely different textures in the image. Therefore, one cannot train a model with data from Ukraine and apply it to Mali. The destruction is also different: In Mali or Sudan, it is common for entire villages to be burned down – everything is then simply black. In Ukraine, on the other hand, it's concrete buildings that have been shot up. Generalization therefore does not really work – as is often the case with AI.

The pragmatic solution is usually to develop a model, train it with data from Ukraine, for example, and apply it to Ukraine. For another use case, you take the same model, but train it with new reference data from the corresponding region. Thus, you don't start from scratch each time – you take the existing model and adapt it to the new scene.
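This warm-start workflow can be sketched with a toy stand-in for the deep model. In the example below – entirely synthetic and illustrative – a simple logistic classifier is first trained on "region A" data, then fine-tuned on "region B" reference data, where the destruction signature differs (here, the third band's role is inverted, loosely echoing burned-black villages versus bright rubble). The model, data, and band semantics are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(7)

def train(X, y, w=None, steps=400, lr=0.5):
    """Logistic regression via gradient descent; pass `w` to warm-start
    from an existing model instead of training from scratch."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float((((X @ w) > 0).astype(int) == y).mean())

# Region A: "destroyed" samples are bright in all three bands.
Xa = rng.normal(0, 1, (300, 3))
ya = (Xa @ np.array([1.0, 1.0, 1.0]) > 0).astype(int)

# Region B: same task, but the third band's role is inverted.
Xb = rng.normal(0, 1, (300, 3))
yb = (Xb @ np.array([1.0, 1.0, -1.0]) > 0).astype(int)

w_a = train(Xa, ya)                # model trained for region A
w_b = train(Xb, yb, w=w_a.copy())  # adapted to region B, not rebuilt
print(round(accuracy(w_b, Xb, yb), 2))
```

The design point is in the last two lines: the region-B model reuses the region-A weights as a starting point, which is exactly the "take the existing model and adapt it to the new scene" pattern described above.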

Aren't multiple models a much bigger effort?
No. Whether you have five identically constructed models and train each one for a single country, or one huge global model that is trained on all countries at once, does not make much of a difference. From the user point of view, it is often even more advisable to have several individual models. They are easier to handle, more precise, and one can better understand what is happening – keyword explainability.

In our project, we make sure to get a lot of feedback from the Red Cross in this regard. So that we understand what they actually need. From a scientific standpoint, one always wants to develop an all-encompassing solution, but in practice, that's often not required. It's enough if a model is trained for a specific case and works reliably there.

So, your AI solution will primarily be used to detect and classify conflicts. Beyond that, how can automated monitoring influence armed conflicts?
For one thing, it clearly documents the actions of the various parties to the conflict and ensures that they cannot evade their responsibility. What the Red Cross is doing is already going in this direction: the maps they create manually, for example in the current Ukraine conflict, are made available to both parties to the conflict – the Ukrainian as well as the Russian side. Thus, neither can claim that it was unaware of any destruction or other development.

Second, the categorized images can also be used retrospectively as documentation of destruction within a conflict. For example, for assessing war crimes.

Could automated monitoring even help prevent a conflict from happening in the first place?
Indeed, it might. Knowing that there is a tool permanently monitoring what is happening on the ground could make conflict parties think twice about whether to enter into a conflict at all. Of course, such monitoring already takes place today on the military side. But with an automated tool on the civilian side, the public would be informed and could exert pressure much more quickly.

How long will the development of this deep learning model take?
The project is planned to run for four years. But I hope we will have a prototype before then that gives good results for certain types of damage, and that we can develop further together with the Red Cross and other stakeholders.

What we won't have in four years is a fully developed tool that we can present to the public. There are too many sensitive issues, including from the Red Cross side, that need to be addressed first. For example, one important question is whether such a tool should be made public at all. Many maps that the Red Cross creates are ultimately secret because the organization does not want to lose its neutrality status.

Looking further into the future: What do you hope to accomplish with this project in the long run?
What would be exciting to figure out is how to involve the population on the ground. Ultimately, we are developing this tool to directly improve the living situation of the people affected. We still have to find out exactly how that could be done.

About Jan Dirk Wegner

Prof. Dr. Jan Dirk Wegner (*1982) has been Head of the EcoVision Lab at ETH Zurich since 2017 and has held the professorship "Data Science for Sciences" at the University of Zurich as Associate Professor since 2021. He is also an associate member of the ETH AI Center and director of the PhD graduate school "Data Science" at the University of Zurich. Wegner conducts research at the intersection of machine learning, computer vision, and remote sensing to solve scientific questions in the environmental and earth sciences. In 2020, he was selected for the WEF Young Scientist Class as one of the world's top 25 researchers under the age of 40 working to integrate scientific knowledge into society for the common good.

