Improvement of Multimodal Images Classification Based on DSmT Using Visual Saliency Model Fusion With SVM
Multimodal images carry complementary and redundant information; by modeling and combining this information, multimodal classification overcomes various problems that affect the unimodal classification task. Although this approach yields acceptable classification results, it still falls short of the human visual perception model, which classifies observed scenes with ease thanks to the powerful mechanisms of the human brain.
In order to improve the classification task in the multimodal image domain, we propose a methodology based on the Dezert-Smarandache formalism (DSmT) that fuses the combined spectral and dense SURF features extracted from each modality and pre-classified by an SVM classifier. We then integrate the visual perception model into the fusion process.
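The core of such a pipeline is combining the per-modality SVM belief assignments with a DSmT combination rule. The sketch below, a hypothetical simplification, implements the PCR5 (Proportional Conflict Redistribution rule no. 5) combination for two sources whose focal elements are restricted to singleton classes; the class labels and mass values are illustrative, not taken from the paper.

```python
def pcr5_combine(m1, m2):
    """Fuse two basic belief assignments with the PCR5 rule.

    m1, m2: dicts mapping a class label to its mass (each sums to 1).
    Focal elements are assumed to be singletons, so any pair of
    distinct classes is fully conflicting.
    """
    classes = sorted(set(m1) | set(m2))
    # Conjunctive consensus on agreeing hypotheses.
    fused = {c: m1.get(c, 0.0) * m2.get(c, 0.0) for c in classes}
    # Redistribute each partial conflict m1(X)*m2(Y), X != Y,
    # back to X and Y proportionally to the involved masses.
    for x in classes:
        for y in classes:
            if x == y:
                continue
            a, b = m1.get(x, 0.0), m2.get(y, 0.0)
            if a + b > 0:
                fused[x] += a * a * b / (a + b)
            a, b = m2.get(x, 0.0), m1.get(y, 0.0)
            if a + b > 0:
                fused[x] += a * a * b / (a + b)
    return fused


# Illustrative masses, e.g. SVM posteriors from two imaging modalities.
m_uv = {"pigment": 0.7, "background": 0.3}
m_ir = {"pigment": 0.6, "background": 0.4}
fused = pcr5_combine(m_uv, m_ir)
```

Unlike Dempster's rule, PCR5 never normalizes by the total conflict, so it remains stable even when the modalities strongly disagree, which is one motivation for preferring DSmT in a multi-sensor setting.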
To demonstrate the efficiency of using salient features in a fusion process with DSmT, the proposed methodology is tested and validated on large datasets acquired from cultural heritage wall paintings. Each set comprises four imaging modalities (UV, IR, visible, and fluorescence), and the results are promising.
Copyright (c) 2019 INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY
This work is licensed under a Creative Commons Attribution 4.0 International License.