AmirAbbas Davari


Hyperspectral Image Analysis using Limited Data

This thesis studies the use of hyperspectral images in two applications, namely remote sensing and art history. The common challenge in both applications is the limited availability of labeled data, caused by the tedious, time-consuming, and expensive manual labeling that must be performed by experts in each field. At the same time, hyperspectral images and their feature vectors are typically very high dimensional. The combination of these two factors challenges supervised machine learning algorithms. To tackle this problem, this work proposes to either adapt the limited data to the classifier, or adapt the classifier to the limited training data.

Any discrete dataset can be regarded as a set of samples drawn from an unknown, inaccessible underlying distribution. Access to this underlying distribution would allow drawing an arbitrary number of data points. Motivated by this idea, this work uses Gaussian mixture models (GMMs) to estimate the underlying distribution of each class in the dataset. Given the limited available data, the GMMs are constrained to diagonal covariance matrices in order to limit the number of parameters. On both phantom data and real hyperspectral images, it is shown that adding only a few synthetic training samples significantly improves an untuned classifier's performance. Furthermore, untuned classifiers reinforced with the synthesized training samples outperform tuned classifiers trained on the original training set. The latter suggests that synthetic samples can replace the expensive parameter-tuning process in classifiers.

In a different approach, this work proposes to adapt the classifier to the limited data. Traditional high-capacity classifiers often overfit on extremely small training sets. The Bayesian learning regime has a built-in regularization property in its formulation.
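The GMM-based augmentation described above can be sketched in a few lines. The following is a minimal, illustrative implementation (not the thesis code): it fits a diagonal-covariance GMM per class via EM and draws synthetic samples from it, which would then be appended to the real training set. All function names and the fixed component count are assumptions for the sketch.

```python
import numpy as np

def fit_diag_gmm(X, k, n_iter=50, seed=0):
    """Fit a k-component GMM with diagonal covariances via EM (minimal sketch)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    means = X[rng.choice(n, k, replace=False)]        # initialize means at data points
    vars_ = np.tile(X.var(axis=0) + 1e-6, (k, 1))     # shared initial variances
    weights = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: log density of each point under each diagonal Gaussian
        log_p = -0.5 * (((X[:, None, :] - means) ** 2 / vars_)
                        + np.log(2 * np.pi * vars_)).sum(axis=2)
        log_p += np.log(weights)
        log_p -= log_p.max(axis=1, keepdims=True)      # stabilize before exp
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)        # responsibilities
        # M-step: re-estimate weights, means, and per-dimension variances
        nk = resp.sum(axis=0) + 1e-12
        weights = nk / n
        means = (resp.T @ X) / nk[:, None]
        vars_ = (resp.T @ (X ** 2)) / nk[:, None] - means ** 2 + 1e-6
    return weights, means, vars_

def sample_diag_gmm(weights, means, vars_, n, seed=1):
    """Draw n synthetic samples from the fitted diagonal-covariance GMM."""
    rng = np.random.default_rng(seed)
    comps = rng.choice(len(weights), size=n, p=weights)
    noise = rng.normal(size=(n, means.shape[1]))
    return means[comps] + noise * np.sqrt(vars_[comps])
```

In the augmentation setting, one model would be fitted per class on that class's labeled pixels, and a handful of samples per class would be drawn and merged into the training set before classifier training.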
This property motivates the use of Bayesian neural networks to remedy the overfitting problem of standard (frequentist) convolutional neural networks (CNNs). The experimental results demonstrate that, for the same convolutional architecture, the Bayesian variant outperforms the frequentist one. Applying ensemble learning to networks sampled from the Bayesian network further improves the classification performance. Moreover, the evolution of the training and validation loss curves for both the Bayesian and the frequentist CNN clearly shows that the Bayesian CNN is significantly more robust against overfitting on extremely limited training data and generalizes better in this situation.

For the second application, i.e., layer separation in old master drawings, this work studies the effectiveness of hyperspectral images, introduces the use of extended multi-attribute profiles (EMAPs) and hyper-hue features, and compares them against other state-of-the-art features on synthesized and real data. The results show that EMAPs and hyper-hue are more informative and representative feature spaces; mapping the hyperspectral images into these spaces yields more accurate segmentation of the color pigment layers.
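The ensembling step over networks sampled from the Bayesian posterior can be illustrated compactly. The sketch below is an assumption-laden stand-in: a linear softmax classifier replaces the CNN, and a factorized Gaussian posterior over its weights (`w_mean`, `w_std`, both hypothetical names) is taken as given. Each draw of the weights yields one "sample network"; averaging their predicted class probabilities gives the ensemble prediction.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with max-subtraction for numerical stability."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def bayesian_ensemble_predict(X, w_mean, w_std, n_samples=30, seed=0):
    """Average class probabilities over weight draws from a factorized
    Gaussian posterior (minimal sketch of the ensembling step only)."""
    rng = np.random.default_rng(seed)
    probs = np.zeros((X.shape[0], w_mean.shape[1]))
    for _ in range(n_samples):
        W = w_mean + w_std * rng.normal(size=w_mean.shape)  # one sampled network
        probs += softmax(X @ W)
    return probs / n_samples
```

In the Bayesian CNN setting, the same idea applies with convolutional weights: the posterior is learned during training, and prediction averages the outputs of several sampled networks instead of using a single point estimate.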