DeepSatellite: A Large-Scale Dataset and Benchmark of Deepfake Satellite Imagery Detection
Topics:
Keywords: satellite image, deepfake detection, dataset, benchmark, deep learning
Abstract Type: Paper Abstract
Authors:
Yifan Sun,
Qianheng Zhang,
Yuantai Li,
Bo Zhao,
Abstract
Despite its rising ubiquity and the widespread trust placed in it, satellite imagery, which offers an ostensibly objective depiction of Earth's surface, can be taken out of context, outdated, manipulated, or outright falsified, allowing misinformation to go viral. In light of increasingly powerful AI deepfake technology and the proliferation of deepfake multimedia, it is critical to establish countermeasures against existing and potential abuse of deepfake satellite imagery, which US government agencies have identified as a serious national security concern. Despite the promising performance of current deepfake satellite imagery detection models, existing datasets remain small-scale, under-representative, and mostly closed-source, which seriously hinders the in-depth development of such countermeasures and their industrial deployment. In direct response to this urgent need, this research constructs an open-source, large-scale deepfake detection dataset together with a benchmark. The released dataset, DeepSatellite_v1.0.0, covers four types of deepfake procedures (random generation, conditional generation, semantic manipulation, mask inpainting), five generation models (Pix2Pix, CycleGAN, Pix2PixHD, PGGAN, StyleGAN2-ADA), and four data sources of different resolutions (Google Earth levels 16 and 18, Sentinel-2, Landsat-8), comprising 50,000 RGB satellite images of 512×512 pixels. The benchmark reports both the performance of four existing deepfake satellite detection methods (about 85% accuracy) and human performance on the test dataset (about 65% accuracy).