Fine-Tuning Foundation Models for Downstream Applications of Remote Sensing Data
Topics:
Keywords: Foundation Model, AI, Remote Sensing, Machine Learning, Satellite Imagery
Abstract Type: Paper Abstract
Authors:
Hamed Alemohammad Clark University
Abstract
Foundation models (FMs) are revolutionizing how machine learning (ML) models are developed. FMs are trained with self-supervised techniques on large volumes of unlabeled data, such as satellite imagery, which are abundant at the global scale. These models are then fine-tuned for different downstream applications using a limited amount of labeled data. This approach has proven to be highly effective (in terms of both accuracy and training cost), particularly where the collection and curation of labeled data is expensive.
In this presentation, we demonstrate the value of applying Prithvi, a geospatial FM, to three downstream applications: segmentation, image classification, and gap filling. The FM is trained on ~175,000 multi-spectral, multi-temporal image chips from Harmonized Landsat and Sentinel-2 (HLS) imagery. For each of the three downstream tasks, high-quality labeled data are curated and a baseline supervised model is trained for comparison.
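To make the fine-tuning setup concrete, the sketch below attaches a small task head to a frozen pretrained encoder and trains only the head on curated labeled chips. This is a minimal PyTorch sketch of the general pattern, not the actual Prithvi code; `load_pretrained_encoder`, `labeled_loader`, and the embedding dimension are illustrative assumptions.

```python
# Minimal sketch (not the actual Prithvi API): fine-tune a pretrained
# geospatial encoder for semantic segmentation with a small labeled set.
import torch
import torch.nn as nn


class SegmentationHead(nn.Module):
    """Lightweight decoder attached to the frozen FM encoder."""

    def __init__(self, embed_dim: int, num_classes: int):
        super().__init__()
        self.proj = nn.Conv2d(embed_dim, num_classes, kernel_size=1)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, embed_dim, H', W') feature map from the encoder
        logits = self.proj(features)
        # Upsample back to the input chip resolution
        return nn.functional.interpolate(logits, scale_factor=16, mode="bilinear")


encoder = load_pretrained_encoder("prithvi")  # hypothetical loader for the pretrained FM
for p in encoder.parameters():
    p.requires_grad = False                   # freeze the pretrained backbone

head = SegmentationHead(embed_dim=768, num_classes=4)
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for chips, masks in labeled_loader:           # small curated labeled dataset (assumed)
    features = encoder(chips)                 # multi-spectral, multi-temporal chips
    logits = head(features)
    loss = loss_fn(logits, masks)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```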
We demonstrate the trade-offs of using the FM versus the baseline in terms of the number of labeled samples required to reach a given accuracy, the training cost, and the highest achievable accuracy. Overall, the FM matches the performance of the baseline models with a much smaller sample size, and exceeds it when a similar sample size is used.
Submitted By:
Hamed Alemohammad Clark University
halemohammad@clarku.edu
This abstract is part of a session: GeoAI and Deep Learning Symposium: AI for Earth Observation