Durham e-Theses

Evaluating the Potential of Machine Learning to Automate Deforestation Mapping in Guyana

WIECEK, MATTHEW GREGORY (2025) Evaluating the Potential of Machine Learning to Automate Deforestation Mapping in Guyana. Doctoral thesis, Durham University.

PDF (PhD Dissertation), Accepted Version, 27 MB

Abstract

Global forest cover has decreased by 12% from 2001 to 2022, and the rate of loss is increasing, which has consequences for carbon flows and ecosystem health. A UN framework, Reducing Emissions from Deforestation and Forest Degradation (REDD+), seeks to provide financial support to developing countries to measure and mitigate deforestation at the country scale using satellite images. Deforestation measurement in the context of a REDD+ program would benefit from automation via machine learning, which could measure deforestation in countrywide data quickly and accurately.
Existing attempts at using image classification to measure deforestation in satellite images have yielded mixed results with regard to REDD+ reporting requirements, and they have relied on a limited range of satellites and a very limited range of classification algorithms. However, the remote sensing literature contains extensive research on the effectiveness of different satellite sensors and algorithms, and of their combinations, that can be applied here. This research compares a broader range of classification algorithms and satellite sensors (both multispectral and SAR), as well as combinations of satellites, tested across a range of deforestation drivers in Guyana, a country with a large proportion of mixed tropical rainforest and an advanced, experienced REDD+ program.
When comparing three algorithms (Random Forest, Gradient Boosted Trees, Naïve Bayes), with default probability thresholds for prediction decisions, on Sentinel-2 satellite data, the overall accuracies were 96%, 95% and 93%, respectively. When comparing satellites, the overall accuracies were 77% (ALOS-2), 69% (Sentinel-1), 90% (Landsat), 87% (Sentinel-2), 70% (RapidEye) and 89% (PlanetScope). The overall accuracy when classifying sedimented rivers was 96%, 99% for clear rivers, and 68% when the model was trained on sedimented rivers and tested on clear rivers. When classifying mining sites with no vegetation inside the mine, the Consumer's Accuracy was 91% for forest and 94% for mining, but when classifying mining sites with vegetation inside the mine, it was 98% for forest and 48% for mining. In all cases, manually tuning the class probability threshold for prediction decisions away from 50% produced a map of deforestation that followed the true labels closely. These results indicate that, in addition to choosing the algorithm and sensor, image classification of deforestation in a REDD+ context must account for the optimal probability threshold and for within-class variation.
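As a minimal sketch of the threshold-tuning and accuracy-assessment steps mentioned in the abstract (not taken from the thesis itself): the Python example below trains a scikit-learn Random Forest on placeholder per-pixel features and labels, replaces the default 50% probability threshold with an illustrative tuned value, and reports overall accuracy and per-class Consumer's (User's) Accuracy from the confusion matrix. The feature matrix, labels, and the 0.7 threshold are invented for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix

# Placeholder data: X holds per-pixel band values (e.g. Sentinel-2 reflectances),
# y holds binary labels (0 = forest, 1 = deforested). Both are synthetic.
rng = np.random.default_rng(0)
X_train, y_train = rng.random((1000, 10)), rng.integers(0, 2, 1000)
X_test, y_test = rng.random((400, 10)), rng.integers(0, 2, 400)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Probability of the "deforested" class for each test pixel.
proba = clf.predict_proba(X_test)[:, 1]

# Default decision rule: threshold the class probability at 0.5.
default_pred = (proba >= 0.5).astype(int)

# Tuned decision rule: move the threshold away from 0.5 (value is illustrative).
threshold = 0.7
tuned_pred = (proba >= threshold).astype(int)

def consumers_accuracy(y_true, y_pred):
    """Consumer's (User's) Accuracy per class: of the pixels mapped as a class,
    the fraction that truly belong to it (diagonal over predicted-column totals)."""
    cm = confusion_matrix(y_true, y_pred)
    return np.diag(cm) / cm.sum(axis=0)

overall = (tuned_pred == y_test).mean()
print("Overall accuracy (tuned threshold):", overall)
print("Consumer's Accuracy per class:", consumers_accuracy(y_test, tuned_pred))
```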

Item Type: Thesis (Doctoral)
Award: Doctor of Philosophy
Keywords: REDD+, Guyana, Machine Learning, Sensor Fusion, Accuracy Assessment, Remote Sensing, Deforestation
Faculty and Department: Faculty of Social Sciences and Health > Department of Geography
Thesis Date: 2025
Copyright: Copyright of this thesis is held by the author
Deposited On: 23 Jun 2025 15:12
