Multi-stream STGAN: A Spatiotemporal Image Fusion Model with Improved Spatial Transferability
Keywords: Spatiotemporal Image Fusion, Deep Learning, Remote Sensing
Abstract Type: Paper Abstract
Authors:
Fangzheng Lyu University of Illinois Urbana-Champaign
Zijun Yang University of Illinois Urbana-Champaign
Chunyuan Diao University of Illinois Urbana-Champaign
Shaowen Wang University of Illinois Urbana-Champaign
Abstract
Spatiotemporal satellite image fusion, the process of generating remote sensing images with both high spatial and high temporal resolutions, has gained significant attention in recent years. Spatiotemporal image fusion integrates remote sensing datasets (i.e., coarse and fine images) with distinct spatial and temporal resolutions. The coarse images have low spatial resolution but high temporal frequency, such as MODIS data, while the fine images have high spatial resolution but low temporal frequency, such as Landsat images. The resulting fusion images with high spatiotemporal resolutions can significantly enhance the application potential of remote sensing data, benefiting a variety of domains including agriculture, environment, natural resources, and disaster management. In this paper, we propose a multi-stream spatiotemporal fusion GAN (STGAN) model for spatiotemporal satellite image fusion that can accommodate drastic temporal differences. The proposed STGAN model aims to provide accurate prediction of fusion images even without temporally close images. STGAN employs a conditional GAN structure with multi-stream input, which allows the model to better learn the temporal changes in surface reflectance from the high-temporal-resolution MODIS images. Meanwhile, spatial features can be better retrieved by conditioning on the high-spatial-resolution Landsat image, which has great potential for predicting land cover changes over time. The proposed STGAN structure can thus better establish the mapping relationships between the low- and high-resolution images, accommodating changes in both spectral reflectance and spatial structure.
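To make the fusion setup concrete, the typical input arrangement pairs a coarse/fine image pair from a reference date with a coarse image from the target date to predict the fine target-date image. The sketch below is a minimal illustrative additive baseline in NumPy, not the authors' STGAN: it upsamples the coarse temporal change and adds it to the fine reference image, standing in for the coarse-to-fine mapping that the multi-stream conditional GAN learns (function names and the upsampling factor are assumptions for illustration).

```python
import numpy as np

def upsample_nearest(coarse, factor):
    """Nearest-neighbor upsample a coarse (MODIS-like) image to the fine grid.
    `factor` is the assumed spatial resolution ratio between fine and coarse."""
    return np.repeat(np.repeat(coarse, factor, axis=0), factor, axis=1)

def additive_fusion_baseline(fine_ref, coarse_ref, coarse_target, factor):
    """Illustrative baseline: fine_target ~ fine_ref + (coarse_target - coarse_ref).
    The coarse temporal change in reflectance is upsampled to the fine grid and
    added to the fine reference image; a learned model like STGAN replaces this
    simple additive rule with a multi-stream conditional mapping."""
    delta = upsample_nearest(coarse_target - coarse_ref, factor)
    return fine_ref + delta

# Toy example: 2x2 coarse pixels, each covering 2x2 fine pixels (factor = 2).
fine_ref = np.ones((4, 4))            # fine-resolution reference image
coarse_ref = np.zeros((2, 2))         # coarse image at the reference date
coarse_target = np.full((2, 2), 0.5)  # coarse image at the prediction date
pred = additive_fusion_baseline(fine_ref, coarse_ref, coarse_target, factor=2)
```

Here every fine pixel is shifted by the 0.5 reflectance change observed at the coarse scale, yielding a 4x4 prediction of 1.5 everywhere; the GAN-based model instead learns where and how that change should be distributed spatially.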
Submitted By:
Fangzheng Lyu Virginia Tech
fangzheng@vt.edu
This abstract is part of a session: HRDSG Wildfires and Risk Modeling