Exploring multiscale object-based convolutional neural network (multi-OCNN) for remote sensing image classification at high spatial resolution

Date
2020-10-01
Authors
Martins, Vitor
Kaleita, Amy
Gelder, Brian
da Silveira, Hilton
Abe, Camila
Department
Agricultural and Biosystems Engineering
Abstract

Convolutional neural networks (CNNs) have been increasingly used for land cover mapping of remotely sensed imagery. However, large-area classification with a traditional CNN is computationally expensive and produces coarse maps when a sliding-window approach is used. To address this problem, the object-based CNN (OCNN) has become an alternative for improving classification performance. However, previous studies focused mainly on urban areas or small scenes, and an implementation of the OCNN method is still needed for large-area classification over heterogeneous landscapes. Additionally, the massive labeling of segmented objects requires a computationally practical approach, including object analysis and multiple CNNs. This study presents a new multiscale OCNN (multi-OCNN) framework for large-scale land cover classification at 1-m resolution over 145,740 km². Our approach consists of three main steps: (i) image segmentation, (ii) object analysis with a skeleton-based algorithm, and (iii) application of multiple CNNs for final classification. We also developed a large benchmark dataset, called IowaNet, with 1 million labeled images and 10 classes. In our approach, multiscale CNNs were trained to capture the best contextual information during the semantic labeling of objects, while the skeletonization algorithm provided a morphological representation (“medial axis”) of each object to guide the selection of convolutional locations for CNN predictions. Overall, the proposed multi-OCNN achieved better classification accuracy (overall accuracy ~87.2%) than the traditional patch-based CNN (81.6%) and the fixed-input OCNN (82.0%). In addition, the results showed that this framework is 8.1 and 111.5 times faster than traditional pixel-wise CNN16 and CNN256, respectively. Multiple CNNs and object analysis proved essential for accurate and fast classification. While multi-OCNN produced a high level of spatial detail in the land cover product, misclassification was observed for some classes, such as roads versus buildings or shadow versus lake. Despite these minor drawbacks, our results also demonstrated the benefits of the IowaNet training dataset for model performance: overfitting decreases as the number of training samples increases. The limitations of multi-OCNN are partially explained by segmentation quality and the limited number of spectral bands in the aerial data. With the advance of deep learning methods, this study supports the use of multi-OCNN for operational large-scale land cover products at 1-m resolution.
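
The three-step pipeline in the abstract can be illustrated with a short sketch. The following Python code is a minimal, hypothetical illustration, not the authors' implementation: scikit-image's felzenszwalb segmentation and skeletonize serve as stand-ins for the paper's segmentation and skeleton-based object analysis, and the patch-size rule, classify_objects function, and dummy_cnn callables are invented placeholders for the trained multiscale CNNs.

```python
import numpy as np
from skimage.segmentation import felzenszwalb
from skimage.morphology import skeletonize

def classify_objects(image, cnns, n_points=5):
    """Label each segmented object by majority vote of CNN predictions
    sampled at points along the object's skeleton ("medial axis")."""
    # (i) Segmentation into objects (felzenszwalb as a stand-in segmenter).
    segments = felzenszwalb(image, scale=100, sigma=0.8, min_size=50)
    label_map = np.full(segments.shape, -1, dtype=int)
    pad = max(cnns) // 2  # pad so patches near image borders stay in bounds
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    for obj_id in np.unique(segments):
        mask = segments == obj_id
        # (ii) Skeletonization keeps prediction sites inside the object,
        # away from boundaries where patches would mix land-cover types.
        rows, cols = np.nonzero(skeletonize(mask))
        if rows.size == 0:  # object too small/thin: fall back to all pixels
            rows, cols = np.nonzero(mask)
        idx = np.linspace(0, rows.size - 1, min(n_points, rows.size)).astype(int)
        # Illustrative multiscale rule: bigger objects get bigger patches,
        # so large fields or lakes are seen with more spatial context.
        size = min((s for s in sorted(cnns) if mask.sum() <= (4 * s) ** 2),
                   default=max(cnns))
        half = size // 2
        # (iii) Extract patches at the sampled locations, classify with the
        # CNN matching this patch size, and vote for the object's class.
        patches = np.stack([padded[r + pad - half:r + pad + half,
                                   c + pad - half:c + pad + half]
                            for r, c in zip(rows[idx], cols[idx])])
        votes = cnns[size](patches).argmax(axis=1)
        label_map[mask] = np.bincount(votes).argmax()
    return label_map

# Usage with stand-in "CNNs" (random scores over 10 IowaNet-style classes):
rng = np.random.default_rng(0)
dummy_cnn = lambda batch: rng.random((len(batch), 10))
image = rng.random((256, 256, 3))
labels = classify_objects(image, cnns={16: dummy_cnn, 64: dummy_cnn})
```

The design point the sketch captures is why an object-based approach is fast: the CNNs run only at a handful of skeleton points per object rather than at every pixel of a sliding window, and each object receives a single label by majority vote.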

Comments

This is a manuscript of an article published as Martins, Vitor S., Amy L. Kaleita, Brian K. Gelder, Hilton L. F. da Silveira, and Camila A. Abe. "Exploring multiscale object-based convolutional neural network (multi-OCNN) for remote sensing image classification at high spatial resolution." ISPRS Journal of Photogrammetry and Remote Sensing 168 (2020): 56-73. DOI: 10.1016/j.isprsjprs.2020.08.004. Posted with permission.

DOI
10.1016/j.isprsjprs.2020.08.004
Copyright
2020