Machine-Learning to Detect Battle Damage Using Satellite Images

Army researchers discovered a machine-learning technique that gets accurate information to soldiers more quickly than ever before and can be implemented in the various devices soldiers use (source: https://www.army.mil/article/236647/machine_learning_algorithms_promise_better_situational_awareness).

Posted on December 19, 2022 | Completed on October 26, 2022 | By: Taylor H. Knight

What computer-vision/machine-learning work has been done to detect battle damage from satellite images?

The Defense Systems Information Analysis Center (DSIAC) was asked to identify machine-learning (ML) and/or computer vision research being done to detect battle damage using satellite images. The research can be applied to developing tools to automate physical battle damage assessment. DSIAC searched the Defense Technical Information Center’s repository, the R&E Gateway, a variety of literature databases, and open-source information to locate relevant research. A promising study from researchers in Spain combined computer vision techniques and publicly available, high-resolution satellite images to produce building destruction estimates. Researchers then trained a convolutional neural network to spot destruction features from heavy weaponry attacks in satellite images. Research related to ML that detects damage after natural disasters is included in this report, as some of the techniques and algorithms may be translated to detecting battle damage.

 


1.0 Introduction

The Defense Systems Information Analysis Center researched open-source information and literature databases to identify relevant battle damage research. Many studies have demonstrated the use of computer vision or ML on satellite imagery to identify different types of destruction from natural disasters. Natural disaster damage tends to be spatially concentrated and tied to a single point in time, and the corresponding studies rely on training datasets composed of damaged and undamaged images [1]. Existing data on building destruction in conflict zones, by contrast, typically come from eyewitness reports or manual detection, making datasets scarce, incomplete, and potentially biased. This lack of data limits media reporting, humanitarian relief efforts, human-rights monitoring, and battle damage assessments. An automated building-damage classifier for satellite imagery that has a low rate of false positives in unbalanced samples and allows tracking on-the-ground destruction in near-real-time would be valuable for a variety of U.S. Department of Defense purposes.

This report describes research that applies ML to battle damage detection and to natural disaster damage. The latter is included because its techniques may be applicable to battle damage detection in the future.

 


2.0 Detecting Battle Damage

2.1 Syrian Civil War Battle Damage Detection

In 2021, Spanish researchers introduced an automated method of measuring destruction in high-resolution satellite images using deep-learning techniques combined with label augmentation and spatial and temporal smoothing, which exploit the underlying spatial and temporal structure of destruction [1]. As a proof of concept, they applied this method to the Syrian Civil War and reconstructed the evolution of damage in major cities across the country.

Researchers combined computer-vision techniques and publicly available high-resolution satellite images to produce building-destruction estimates. They trained a convolutional neural network (CNN) to spot destruction features from heavy weaponry attacks in satellite images, including rubble from collapsed buildings and the presence of bomb craters.
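
For illustration, a minimal sketch of what such a tile-level destruction classifier could look like, assuming 64 × 64 RGB image tiles; the class name and layer sizes here are illustrative, and the actual CNN in [1] is deeper:

```python
import torch.nn as nn

# Minimal tile-level destruction classifier; layer sizes are illustrative.
class DestructionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 1),  # 64x64 input -> 16x16 maps after two pools
            nn.Sigmoid(),                # probability that the tile shows destruction
        )

    def forward(self, x):                # x: (batch, 3, 64, 64) image tiles
        return self.head(self.features(x))
```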

A label-augmentation method was used to expand the destruction class labels. Next, a two-stage classification process was used to control spatial and temporal noise: the CNN results are processed through a random-forest model that relies on spatial and temporal leads and lags to improve classification performance. Finally, the researchers applied their trained computer-vision model to repeated satellite images of the entire populated areas of major Syrian cities and produced longitudinal estimates of building destruction over the course of the Syrian Civil War.
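
A minimal sketch of the second-stage idea, assuming the per-tile CNN destruction scores are available as a (time, height, width) array; the exact feature construction here is a hypothetical stand-in for the authors' design:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def leads_and_lags(scores, t, y, x):
    """Feature vector for one tile: its own CNN score plus its four
    spatial neighbors and its temporal lead/lag scores (edges clamped)."""
    T, H, W = scores.shape
    feats = [scores[t, y, x]]
    for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:        # spatial neighbors
        feats.append(scores[t, min(max(y + dy, 0), H - 1),
                               min(max(x + dx, 0), W - 1)])
    for dt in (-1, 1):                                       # temporal lag and lead
        feats.append(scores[min(max(t + dt, 0), T - 1), y, x])
    return feats

def fit_second_stage(scores, labels):
    """scores: (T, H, W) CNN destruction probabilities over T imaging
    dates; labels: matching (T, H, W) array of 0/1 ground truth."""
    T, H, W = scores.shape
    X = np.array([leads_and_lags(scores, t, y, x)
                  for t in range(T) for y in range(H) for x in range(W)])
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    rf.fit(X, labels.ravel())
    return rf
```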

Results highlighted the importance of repeated satellite imagery combined with temporal filtering to improve monitoring performance. This approach can be applied to any populated area, provided that repeated, high-resolution satellite imagery is available. The method of identifying building destruction combines existing state-of-the-art computer-vision methods with an additional postprocessing step and exploits the time dimension of destruction data to expand the training dataset. Exploiting the repetition of imagery allowed the researchers to bring down error rates when classifying destruction. Due to these advances, they achieved an area under the receiver operating characteristic curve (AUC) above 0.9 and an average precision over 0.42 in the unbalanced sample from six Syrian cities. It was also shown that this approach can identify the timing and location of building destruction out of sample, i.e., in areas of Aleppo that were not used for training the classifier.
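
Both reported metrics are standard and straightforward to compute; a toy illustration on hypothetical scores follows (the 0.9 AUC and 0.42 average-precision figures above come from the study itself, not this snippet):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

# Hypothetical unbalanced sample: few destroyed tiles among many intact ones.
rng = np.random.default_rng(0)
y_true = np.array([0] * 95 + [1] * 5)
y_score = np.concatenate([rng.uniform(0.0, 0.6, 95),   # intact tiles score low
                          rng.uniform(0.3, 1.0, 5)])   # destroyed tiles score higher

print("AUC:", roc_auc_score(y_true, y_score))
print("Average precision:", average_precision_score(y_true, y_score))
```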

2.2 Google Artificial Intelligence (AI) Building Damage Detection

A team from Google AI used ML to automate the detection of building damage in satellite imagery after the 2010 Haiti earthquake by comparing the following four CNN models [2] (the twin-tower variants are sketched in code after the list):

  1. Concatenated Channel Model: Concatenate the pre- and postdisaster images into a single six-channel image to use as the baseline model.
  2. Postimage-Only Model: Uses only the three-channel postdisaster image as input. This model loses the information from the predisaster image but avoids problems such as misalignment and brightness differences between the pre- and postdisaster images.
  3. Twin-Tower Concatenate (TTC) Model: This architecture is designed to compare the pre- and postdisaster images based on abstract features extracted by the convolutional layers instead of comparing pixels directly. This makes the model more robust to nonuniformity in the pre- and postdisaster images, such as misalignment.
  4. Twin-Tower Subtract (TTS) Model: The same as TTC, except it combines the extracted feature values by subtracting them elementwise instead of concatenating them. This architecture is designed to capture the differences between the pre- and postdisaster images more directly, which is a good indicator of building damage.
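
For illustration, a compact sketch of the twin-tower idea, with layer sizes and names that are assumptions rather than the published architecture; the only difference between TTC and TTS is the fusion step:

```python
import torch
import torch.nn as nn

def make_tower():
    """Shared convolutional encoder applied to both images."""
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),     # -> 64-dim feature vector
    )

class TwinTower(nn.Module):
    def __init__(self, fusion="subtract"):
        super().__init__()
        self.tower = make_tower()                  # weights shared across both inputs
        self.fusion = fusion
        in_dim = 64 if fusion == "subtract" else 128
        self.head = nn.Sequential(nn.Linear(in_dim, 1), nn.Sigmoid())

    def forward(self, pre, post):
        f_pre, f_post = self.tower(pre), self.tower(post)
        if self.fusion == "subtract":              # TTS: elementwise difference
            fused = f_post - f_pre
        else:                                      # TTC: concatenate abstract features
            fused = torch.cat([f_pre, f_post], dim=1)
        return self.head(fused)
```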

Experiment results showed that the twin-tower models outperformed the single-tower models, with the TTS model achieving the best performance at 0.832 validation AUC. This indicated that useful information can be extracted by comparing buildings and their surroundings in the postdisaster images against those in the predisaster images. Based on these results, the researchers used the TTS model in all subsequent experiments and found that it can be applied to new regions and disasters once it is fine-tuned on a small set of samples from that region.

 


3.0 Detecting Natural Disaster Damage

Most ML research on detecting building damage in satellite imagery has focused on damage following natural disasters. Brief overviews of current research in this area are included here in the hope that battle damage can be assessed in a comparable manner.

3.1 University of California, Berkeley Research

A team based at the University of California, Berkeley, devised an ML system that taps the problem-solving potential of satellite imaging through low-cost, easy-to-use technology that could bring access and analytical power to researchers and governments worldwide [3]. The study, “A Generalizable and Accessible Approach to Machine Learning With Global Satellite Imagery,” was published in July 2021.

The system is called Multi-task Observation using Satellite Imagery and Kitchen Sinks (MOSAIKS). MOSAIKS is designed to analyze hundreds of variables drawn from satellite data at a global level, including, but not limited to, housing, health, poverty, and soil and water conditions.
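
The “kitchen sinks” in the name refers to random-feature regression. A toy sketch of that idea follows, assuming imagery arrives as small RGB arrays; the filter count, patch size, and data are all illustrative, not the published pipeline:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
K, P = 256, 3                                 # number of random filters, patch size
patches = rng.standard_normal((K, P, P, 3))   # fixed random filters, never trained

def featurize(img):
    """ReLU random-convolution features, average-pooled over the image."""
    H, W, _ = img.shape
    feats, n = np.zeros(K), 0
    for i in range(0, H - P + 1, P):
        for j in range(0, W - P + 1, P):
            window = img[i:i + P, j:j + P, :]
            feats += np.maximum((patches * window).sum(axis=(1, 2, 3)), 0)
            n += 1
    return feats / n

# The same feature matrix X serves every prediction task; only the cheap
# ridge regression is refit per task (housing, poverty, soil, etc.).
imgs = rng.random((100, 30, 30, 3))           # hypothetical imagery
y = rng.random(100)                           # hypothetical ground-truth variable
X = np.stack([featurize(im) for im in imgs])
model = Ridge(alpha=1.0).fit(X, y)
```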

3.2 Post-Natural-Disaster Satellite Imagery

Hurricanes

To improve the efficiency and accuracy of damage assessment, a team from the University of Washington proposed automatically detecting damaged buildings with image classification algorithms [4], applying the method to a case study of Hurricane Harvey (2017). The research demonstrated that automatic detection of damaged buildings can be accomplished satisfactorily through deep learning. Other researchers can use the dataset and methodology to study and experiment with different uses of satellite imagery in disaster response. A pretrained architecture that achieves satisfactory results is also in the works to facilitate transfer learning, whether through feature extraction, fine-tuning, or as a baseline to speed up learning in future developments/events with similar properties.
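
A minimal sketch of the two transfer-learning modes mentioned, assuming an ImageNet-pretrained torchvision backbone; the backbone choice and two-class head are assumptions, not the authors' exact setup:

```python
import torch.nn as nn
from torchvision import models

def damage_classifier(feature_extraction=True):
    """ImageNet-pretrained backbone with a new damaged/undamaged head."""
    net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    if feature_extraction:
        for p in net.parameters():
            p.requires_grad = False               # freeze backbone, train head only
    net.fc = nn.Linear(net.fc.in_features, 2)     # damaged vs. undamaged logits
    return net                                    # leave unfrozen to fine-tune instead
```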

In 2019, a team at North China University of Technology in Beijing, China, developed a damaged-building assessment method using a single-shot multibox detector (SSD) with pretraining and data augmentation. The following aspects were highlighted from the research [5]:

  1. Objects can be detected and classified into undamaged buildings, damaged buildings, and ruins.
  2. A convolutional autoencoder (CAE) built on VGG16 is constructed and trained using unlabeled postdisaster images. As a transfer-learning strategy, the weights of the SSD model are initialized with the weights of the CAE counterpart.
  3. Data augmentation strategies, such as image mirroring, rotation, Gaussian blur, and Gaussian noise processing, are utilized to augment the training dataset (sketched in code following the results below).

The researchers used hurricane photos, and the data showed that the pretraining strategy could improve overall accuracy by up to 10% compared with an SSD trained from scratch.
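
A sketch of such an augmentation pipeline using torchvision transforms; the kernel size, rotation range, and noise scale are illustrative, and Gaussian noise is approximated with a Lambda because torchvision does not provide it directly:

```python
import torch
from torchvision import transforms

# Mirroring, rotation, and Gaussian blur are built into torchvision;
# additive Gaussian noise is applied to the tensor after conversion.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),                       # image mirroring
    transforms.RandomRotation(degrees=15),                   # random rotation
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),
    transforms.ToTensor(),
    transforms.Lambda(lambda x: x + 0.01 * torch.randn_like(x)),  # Gaussian noise
])
```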

Earthquakes and Explosions

A research group based at the University of Twente in the Netherlands tested an advanced CNN for detecting visible structural damage after earthquakes and explosions [6]. Results showed that quality metrics were influenced by the composition of the training samples used in the network. Three pretrained networks, optimized for the spatial resolutions and viewing angles of satellite, airborne, and unmanned aerial vehicle (UAV) imagery, were made available to the scientific community to promote their wider use.

3.3 Defense Innovation Unit’s xView2 Challenge

In 2019, the Defense Innovation Unit’s (DIU’s) xView2 Challenge sought to automate postdisaster damage assessment. DIU challenged ML experts to develop computer-vision algorithms that speed up the analysis of satellite and aerial imagery by localizing and categorizing various types of building damage caused by natural disasters [7]. The competition focused on algorithms that locate and identify distinct objects on the ground that are useful to first responders.

DIU created a new dataset, xBD, to enable localization and damage assessment before and after disasters. The dataset provided the foundation for the challenge. While several open datasets for object detection from satellite imagery already exist (e.g., SpaceNet and xView), each represents only a single snapshot in time and lacks information about the type and severity of damage after a disaster.

xBD allows users to generate and test models to help automate building damage assessment. The open-source, electro-optical imagery (0.3-m resolution) xBD dataset encompasses 700,000 building annotations across 5,000 km2 of freely available imagery from 15 countries. Seven disaster types are included: wildfires, landslides, dam collapses, volcanic eruptions, earthquakes/tsunamis, winds, and floods.
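
As an illustration, a hypothetical tally of per-building damage labels from a single xBD-style label file, assuming the GeoJSON-like layout described for xBD; the field names used here are assumptions:

```python
import json
from collections import Counter

# Hypothetical reader for an xBD-style label file. It assumes each
# building polygon carries a damage "subtype" such as "no-damage",
# "minor-damage", "major-damage", or "destroyed"; field names are
# assumptions, not a confirmed schema.
def damage_counts(label_path):
    with open(label_path) as f:
        label = json.load(f)
    buildings = label.get("features", {}).get("xy", [])
    return Counter(b.get("properties", {}).get("subtype", "unclassified")
                   for b in buildings)

# Example (hypothetical file name):
# damage_counts("hurricane-harvey_00000000_post_disaster.json")
```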

The challenge resulted in more than 2,000 submissions, with top solutions being deployed to assist in the 2020 California wildfires. End users also included the Federal Emergency Management Agency, the National Geospatial-Intelligence Agency, the National Aeronautics and Space Administration, the California Air National Guard, the California Governor’s Office of Emergency Services, and the United Nations Satellite Centre (UNOSAT).

 


References

[1] Mueller, H., A. Groeger, J. Hersh, A. Matranga, and J. Serrat. “Monitoring War Destruction From Space Using Machine Learning.” Proceedings of the National Academy of Sciences, vol. 118, issue 23, 3 June 2021.

[2] Xu, J. Z., W. Lu, Z. Li, P. Khaitan, and V. Zaytseva. “Building Damage Detection in Satellite Imagery Using Convolutional Neural Networks.” Google AI, 14 October 2019.

[3] Lempinen, E. “A Machine Learning Breakthrough Uses Satellite Images to Improve Lives.” Berkeley News, https://news.berkeley.edu/2021/07/20/a-machine-learning-breakthrough-using-satellite-images-to-improve-human-lives/, 20 July 2021.

[4] Cao, Q. D., and Y. Choe. “Deep Learning Based Damage Detection on Post-Hurricane Satellite Imagery.” Department of Industrial and Systems Engineering, University of Washington, https://deepai.org/publication/deep-learning-based-damage-detection-on-post-hurricane-satellite-imagery, 4 July 2018.

[5] Li, Y., W. Hu, H. Dong, and X. Zhang. “Building Damage Detection From Post-Event Aerial Imagery Using Single Shot Multibox Detector.” Applied Sciences, vol. 9, issue 6, p. 1128, 18 March 2019.

[6] Nex, F., D. Duarte, F. G. Tonolo, and N. Kerle. “Structural Building Damage Detection with Deep Learning: Assessment of a State-of-the-Art CNN in Operational Conditions.” Remote Sensing, vol. 11, no. 23, p. 2765, 2019.

[7] Defense Innovation Unit. “Eye in the Sky: DoD Announces AI Challenge.” https://www.defense.gov/News/News-Stories/Article/Article/1934806/eye-in-the-sky-dod-announces-ai-challenge/, 15 April 2019.
