TYPE Original Research
PUBLISHED 01 November 2022
DOI 10.3389/fncom.2022.1000435

Transfer learning-based modified inception model for the diagnosis of Alzheimer's disease

Sarang Sharma1, Sheifali Gupta1, Deepali Gupta1, Sapna Juneja2, Amena Mahmoud3, Shaker El-Sappagh4,5 and Kyung-Sup Kwak6*

OPEN ACCESS

EDITED BY Gaurav Dhiman, Government Bikram College of Commerce Patiala, India
REVIEWED BY Asadullah Shaikh, Najran University, Saudi Arabia; Vandana Khanna, The Northcap University, India
*CORRESPONDENCE Kyung-Sup Kwak

[email protected]

1 Department of Computer Science and Engineering, Chitkara University Institute of Engineering and Technology, Chandigarh, Punjab, India; 2 Department of Computer Science, KIET Group of Institutions, Ghaziabad, India; 3 Department of Computer Science, Kafrelsheikh University, Kafr el-Sheikh, Egypt; 4 Faculty of Computer Science and Engineering, Galala University, Suez, Egypt; 5 Information Systems Department, Faculty of Computers and Artificial Intelligence, Benha University, Banha, Egypt; 6 Department of Information and Communication Engineering, Inha University, Incheon, South Korea

RECEIVED 22 July 2022
ACCEPTED 29 August 2022
PUBLISHED 01 November 2022

CITATION Sharma S, Gupta S, Gupta D, Juneja S, Mahmoud A, El-Sappagh S and Kwak K-S (2022) Transfer learning-based modified inception model for the diagnosis of Alzheimer's disease. Front. Comput. Neurosci. 16:1000435. doi: 10.3389/fncom.2022.1000435

COPYRIGHT © 2022 Sharma, Gupta, Gupta, Juneja, Mahmoud, El-Sappagh and Kwak. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

Alzheimer's disease (AD) is a neurodegenerative ailment that gradually deteriorates memory and weakens the cognitive functions and capacities of the body, such as recall and logic. To diagnose this disease, CT, MRI, and PET scans are used. However, these methods are time-consuming and sometimes yield inaccurate results. Thus, deep learning models are utilized, which are less time-consuming, yield results with better accuracy, and can be used with ease. This article proposes a transfer learning-based modified inception model with the pre-processing methods of normalization and data augmentation. The proposed model achieved an accuracy of 94.92% and a sensitivity of 94.94%. It is concluded from the results that the proposed model performs better than other state-of-the-art models. For training purposes, a Kaggle dataset was used comprising 6,200 images: 896 mild demented (M.D) images, 64 moderate demented (Mod.D) images, 3,200 non-demented (N.D) images, and 1,966 very mild demented (V.M.D) images. These models could be employed for developing clinically useful results suitable for detecting AD in MRI images.

KEYWORDS feature visualization, modified inception, classification, confusion matrix, Alzheimer's disease

Introduction

Alzheimer's disease (AD) is a neurological condition that damages neurons and slowly deteriorates memory and hampers basic cognitive functions and abilities. The disease is detected by monitoring changes in the brain, which eventually result in the loss of neurons and their connections. According to the WHO, around 50 million people suffer from dementia, and nearly 10 million new cases of AD are reported every year. Ultimately, AD destroys the part of the brain that controls breathing and heart function, eventually leading to fatality. AD involves three stages: very mild, mild, and moderate (Feng et al., 2020; Rallabandi et al., 2020). However, an individual affected by AD begins to show symptoms only at the moderate stage. The disease affects the communication between neurons.

In the mild stage, progressive deterioration eventually hinders independence, with patients unable to perform most common activities of daily living. Speech difficulties become evident due to an inability to recall vocabulary, which leads to frequent incorrect word substitutions. Reading and writing skills are also progressively lost. Complex motor sequences become less coordinated as the disease progresses, so the risk of falling increases. During this phase, memory problems worsen, and patients may fail to recognize even close relatives. Long-term memory, which was previously intact, becomes impaired. Moreover, old age alone does not cause AD; several health, environmental, and lifestyle factors also contribute to it (Ebrahimi-Ghahnavieh et al., 2019; Talo et al., 2019; Nakagawa et al., 2020), including heart disease, lack of social engagement, and poor sleep (Hon and Khan, 2017; Aderghal et al., 2018; Islam and Zhang, 2018).

This study utilizes a novel modified inception-based model that classifies AD into four sub-categories: V.M.D, M.D, Mod.D, and N.D. The model was run on a large MRI dataset (Jha et al., 2017). The following research points can be inferred from the study:

1. A modified inception v3 model was implemented to classify AD into four classes.
2. The model was modified by adding six convolutional layers, four dense block layers, two dropout layers, one flattening layer, and one layer with an activation function.
3. Image enhancement and augmentation processes were utilized to expand the number of images in the dataset.

The proposed model was implemented using an Adam optimizer and 1,000 epochs.

Background literature

Most research has applied binary classification of AD (Jha et al., 2017; Aderghal et al., 2018; Ebrahimi-Ghahnavieh et al., 2019; Rallabandi et al., 2020) and used smaller datasets to design the proposed models, which may not be adaptable. Researchers working on large datasets (Hon and Khan, 2017; Talo et al., 2019; Feng et al., 2020; Nakagawa et al., 2020; Rallabandi et al., 2020) have implemented two-output classifications (Ali et al., 2016) or binary-input classifications (Bin.C) (Kang et al., 2021), which resulted in only marginal accuracy (Li et al., 2021). Table 1 compares the existing state-of-the-art models.

Materials and methods

This model classifies AD into N.D, V.M.D, M.D, and Mod.D classes (Sarraf and Tofighi, 2016). The proposed model utilizes the Kaggle dataset containing 6,200 AD images. The model involves augmentation of data (Zhao et al., 2017) and extraction of features using a modified inception model (Fan et al., 2008), as shown in Figure 1. The model is executed using the Keras package in Python with TensorFlow at the backend, on an Intel(R) Core(TM) i5-6400 CPU 2.70 GHz processor with 12 GB RAM.

Input dataset

The database used in this study consists of a total of 6,200 AD images retrieved from the Kaggle database. It comprises grayscale images of 896 M.D, 64 Mod.D, 3,200 N.D, and 1,966 V.M.D images, with a dimension of 208 × 176 × 3 pixels. The dataset is divided such that 80% of the image samples are utilized for training the model and the remaining 20% are utilized for testing the model (Filipovych et al., 2011). Figure 2 shows the database of MRI images. Table 2 shows the publicly available AD datasets.

Table 3 shows the dataset description, in which the numbers of training, testing, and validation images are given for the AD classes. The total number of images in the dataset is 6,200, of which 179 M.D, 12 Mod.D, 640 N.D, and 448 V.M.D images are used for validation. The complete dataset is divided into training and validation sets (Misra et al., 2009; Moradi et al., 2015).

Normalization

Data normalization preserves the numerical stability of the modified inception model (Serra et al., 2016; Rathore et al., 2017). The MRI images have pixel values ranging from 0 to 255. By utilizing the normalization technique, the images in the proposed model are trained faster (Rashid et al., 2022).
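As a minimal illustration of the normalization step (a sketch, not the authors' code; the batch shape and dtype are assumed), the 0-255 pixel intensities can be rescaled to [0, 1] before training:

```python
import numpy as np

def normalize_images(images: np.ndarray) -> np.ndarray:
    """Rescale 8-bit pixel intensities (0-255) into the range [0, 1]."""
    return images.astype(np.float32) / 255.0

# Example: a batch of 4 grayscale MRI slices of 208 x 176 pixels
batch = np.random.randint(0, 256, size=(4, 208, 176), dtype=np.uint8)
normalized = normalize_images(batch)
print(bool(normalized.min() >= 0.0 and normalized.max() <= 1.0))  # True
```

Keeping inputs in a small, fixed range like this is what gives the numerical stability and faster convergence mentioned above.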
Augmentation

To be most useful, a dataset requires the maximum number of samples, but numerous site, privacy, and data restrictions often accompany dataset acquisition. Therefore, to overcome these issues, augmentation of data is performed, which increases the original data quantity. Augmentation includes flipping (FL), rotation (Ro), and brightness (Bss). The vertical (VF) and horizontal flipping (HF) techniques (Dhankhar et al., 2021; Juneja A. et al., 2021; Juneja S. et al., 2021) are shown in Figure 3. The Ro technique, as shown in Figure 4, is implemented in an anticlockwise direction in steps of 90 degrees. Bss, as shown in Figure 5, is applied to the image dataset by taking brightness factor values of 0.3 and 0.7.

TABLE 1 Literature survey of existing models.

Ref. | Journal | Techniques | Aim | Challenges of the approach
Rallabandi et al. (2020) | Informatics in Medicine Unlocked | Non-linear SVM with 2D CNN | To develop an automated technique to classify normal, early, and late mild AD individuals | Dataset contained 1,167 brain MRI images. It utilized Bin.C and showed 75% accuracy.
Feng et al. (2020) | International Journal of Neural Systems | 3D CNN-SVM | To distinguish Mod.D and M.D individuals from N.D individuals, for improving value-related care of Mod.D individuals in medical facilities | Dataset contained 3,127 3T T1-weighted MRI brain images. It utilized three-input classification and showed 88.9% accuracy. It also aims to focus on regressing Mod.D individuals to healthy individuals to predict Mod.D progression and improve the diagnosis of AD in the future.
Nakagawa et al. (2020) | Brain Communications | Cox, DeepHit | To diagnose conversion time from normal individual to AD individual by using a deep survival analysis model | Dataset contained 2,142 T1-weighted images. It utilized three-input classification and showed 92.3% accuracy. It aims to diagnose the group of M.D individuals that would convert to AD in the future.
Ebrahimi-Ghahnavieh et al. (2019) | IAICT | GoogleNet, AlexNet, VGGNet16, VGGNet19, SqueezeNet, ResNet18, ResNet50, ResNet101, InceptionV3 | To detect AD on MRI scans using D.L techniques | Dataset contained 177 images. It utilized Bin.C and showed 84.38% accuracy. To comprise PET scans in the system to examine several aspects of AD.
Talo et al. (2019) | Computerized Medical Imaging and Graphics | AlexNet, VGGNet16, ResNet18, ResNet34, ResNet50 | To diagnose MRI images into N.D and Mod.D | Dataset contained 1,074 T2-weighted MRI images. It utilized multi-input classification and showed 95.59% accuracy.
Islam and Zhang (2018) | Brain Informatics | InceptionV4, ResNet, ADNet | To diagnose AD by utilizing a deep-CNN ensemble | Dataset contained 416 T1-weighted sMRI scans. It utilized multi-input classification and showed 93.18% accuracy. To distinguish AD from other brain diseases with the proposed model.
Aderghal et al. (2018) | CBMS | Data augmentation, CNN | To classify AD by using cross-modal transfer learning | Dataset contained 416 sMRI image scans. It utilized Bin.C and showed 82.1% accuracy. To utilize a longitudinal dataset and implement a cross-modal method based on ROI spatial optimization.
Hon and Khan (2017) | BIBM | CNN, transfer learning | To classify AD by using cross-modal transfer learning algorithms | Dataset contained 6,400 brain images and yielded 92.3% accuracy while utilizing binary classification.
Jha et al. (2017) | Journal of Healthcare Engineering | DTCWT, PCA, FNN | To develop a CAD system to early diagnose AD individuals | Dataset contained 416 T1-weighted image scans. It implemented binary classification and yielded an accuracy of 90.06%. To test 3D-DTCWT and wavelet packet analysis, and to utilize ICA, LDA, and PCA.
Ali et al. (2016) | International Journal of Computer Applications | VGG16, InceptionV4 | To classify AD by utilizing transfer learning algorithms in pre-trained models | Dataset contained 416 MRI AD and MCI image scans. It utilized Bin.C and showed 74.12% accuracy from scratch and 92.3% with transfer learning algorithms.
Kang et al. (2021) | CBM | 2D-CNN, VGG16 | To classify AD by using ensemble-based CNN | Dataset contained 798 T1-weighted image scans. It utilized Bin.C and showed 90.36% accuracy. To distinguish AD from MCI images by using 2D-CNN.
Li et al. (2021) | BMRI | SVM, CNN | To distinguish MCI from AD by using an SVM classifier with linear kernel | Dataset contained 1,167 T1-weighted image scans. It utilized Bin.C and showed 69.37% accuracy. To distinguish AD from MCI images by using SVM-CNN.
Venugopalan et al. (2021) | SR | SVM, k-NN, CNN | To distinguish MCI from AD by using SVM and k-NN | Dataset contained 1,311 T1- and T2-weighted image scans. It utilized Bin.C and showed 75% accuracy. To distinguish AD from MCI images by using SVM-CNN and k-NN.

FIGURE 1 Proposed Alzheimer's disease detection model.
FIGURE 2 Alzheimer's disease: (A) M.D, (B) Mod.D, (C) N.D, and (D) V.M.D.
FIGURE 3 FL applied to the dataset: (A) original, (B) HF, and (C) VF.
FIGURE 4 Ro applied to the dataset: (A) original, (B) 90 degrees anticlockwise, (C) 180 degrees anticlockwise, and (D) 270 degrees anticlockwise.

TABLE 2 Publicly available Alzheimer's disease datasets.

Dataset                  Classes  Class name  Class images  Total images
OASIS                    4        N.D         292           416
                                  V.M.D       24
                                  M.D         28
                                  Mod.D       72
ADNI                     3        N.D         159           469
                                  M.D         157
                                  Mod.D       153
Harvard Medical School   4        Mod.D       378           1,680
Kaggle                   4        M.D         896           6,126
                                  Mod.D       64
                                  V.M.D       1,966
                                  N.D         3,200

Table 3 exhibits the number of images before and after data augmentation. Furthermore, a disproportion in the number of images was found across the classes. To reduce this disproportion (Sharma et al., 2022c), augmentation of data was performed, as mentioned before. After its execution, the samples increased from 6,200 to 10,760 images. Augmentation is applied only to the training images. Before augmentation, the numbers of M.D, Mod.D, N.D, and V.M.D images were 896, 64, 3,200, and 1,966, respectively. After augmentation, the total number of training and validation images became 10,760.

TABLE 3 Alzheimer's dataset description.

S.No.  Alzheimer  Training  Validating  Before        After         Training  Validating
                  images    images      augmentation  augmentation  images    images
1      M.D        717       179         896           2,688         2,150     538
2      Mod.D      52        12          64            640           512       128
3      N.D        2,560     640         3,200         3,500         2,800     700
4      V.M.D      1,518     448         1,966         3,932         3,145     787
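The three augmentation operations described above (HF/VF flips, 90-degree anticlockwise rotations, and brightness factors 0.3 and 0.7) can be sketched with NumPy as follows; this is an illustrative sketch, not the authors' pipeline, and the image shape is assumed:

```python
import numpy as np

def augment(image: np.ndarray) -> list:
    """Generate flipped, rotated, and brightness-adjusted variants of one image."""
    variants = []
    variants.append(np.fliplr(image))        # horizontal flip (HF)
    variants.append(np.flipud(image))        # vertical flip (VF)
    for k in (1, 2, 3):                      # 90, 180, 270 degrees anticlockwise
        variants.append(np.rot90(image, k))
    for factor in (0.3, 0.7):                # brightness (Bss) factors
        variants.append(np.clip(image * factor, 0, 255).astype(image.dtype))
    return variants

image = np.random.randint(0, 256, size=(208, 176), dtype=np.uint8)
print(len(augment(image)))  # 7 augmented variants per input image
```

Applying these seven transforms only to the training split is what grows the under-represented classes without leaking augmented copies into the validation set.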
Feature extraction using the modified inception model

In the proposed model, input images with a dimension of 208 × 176 pixels were applied, as shown in Figure 6. The modified inception architecture consisted of 12 blocks. In the first and second blocks, two inception layers of size 3 and one max pooling layer of size 2 with 32, 64, and 128 filters, respectively; in the third and fourth blocks, two convolution layers with 32 filters; and in the fifth and sixth blocks, one convolution layer each with 256 and 128 filters, respectively, were applied. These layers were followed by a dropout layer with 128 filters. The seventh and eighth layers consisted of 256 and 512 filters, respectively, followed by another dropout layer with 128 filters. Then, the flattening layer was connected with 512 filters, and the ninth, 10th, 11th, and 12th dense layers consisted of 512, 128, 64, and 32 filters, respectively. At last, the fully connected layers were implemented, and the classified output was obtained.

FIGURE 5 Bss applied to the dataset: (A) original image, (B) with Bss factor 0.3, and (C) with Bss factor 0.7.
FIGURE 6 Modified inception architecture.

Table 4 exhibits the filter visualization for every convolution layer; the single kernel or filter for each convolutional layer is shown, and the images are filtered with the help of these kernels. Table 5 displays the first and last feature-visualized images for every convolutional layer (Chugh et al., 2022; Dhiman et al., 2022; Sharma et al., 2022a,b).

TABLE 4 Filter visualization for every convolution layer. [Layer-wise filter images for inception_v3_input, inception_v3, inception_v3_1, max_pooling2d, sequential through sequential_5, dropout, and dropout_1.]

TABLE 5 Images after each dense block. [First and last feature-visualized images for the same layers.]

Results

Various tuning parameters were applied to the AD images, such as the optimizer, batch size (BS), and number of epochs, which modify neural network features and thus minimize the losses. The Adam optimizer was used for training the deep learning algorithms, as it includes the functionalities of both the AdaGrad and RMSProp optimizers. BS specifies the number of images managed in a single iteration; a BS of 32 was utilized in these models. A large BS results in heavy computation during deep learning model training, whereas a small BS results in faster computation, so there is always a trade-off between large and small BS. The number of epochs should be large enough that the error is minimized during model training; however, a large number of epochs increases the computational time. In this study, the simulation of the proposed model is carried out using 1,000 epochs. Table 8 shows the hypertuning parameters and their values.

Confusion matrix

Figure 7 shows the confusion matrix, which represents the classification predictions. The accuracy of the entire model is 94.92%. The confusion matrix parameters are derived via the classification report and are given as follows:

a. Accuracy (Acc) is the ratio of true predictions to all predictions, as in Equation 1:

    Accuracy = (TP + TN) / (TP + TN + FP + FN)    (1)

b. Precision (Prec) is the ratio of correct positive predictions to the total positive predictions, as in Equation 2:

    Precision = TP / (TP + FP)    (2)

c. Specificity (Spec) is the ratio of correct negative predictions to the total negatives, as in Equation 3:

    Specificity = TN / (TN + FP)    (3)

d. Sensitivity (Sens) is the ratio of correct positive predictions to the total positives, as in Equation 4:

    Sensitivity = TP / (TP + FN)    (4)

FIGURE 7 Confusion matrix for the modified inception model.
FIGURE 8 Graph of confusion matrix parameters with F1-score.

Figure 7 displays the confusion matrix for the modified inception model; the accuracy value of the proposed model is 94.92%. Figure 8 exhibits the precision, sensitivity, and specificity values for all AD classes for a batch size of 32 with the Adam optimizer. In Figure 8, V.M.D exhibits the maximum precision of 100%, followed by N.D with a precision of 97.51%. V.M.D also exhibits a sensitivity of 100%, followed by N.D with a sensitivity of 97.97%. Furthermore, V.M.D displays a specificity of 100%, followed by N.D with a specificity of 99.16%. The average Prec, Sens, and Spec of the model with a batch size of 32 and the Adam, Adadelta, and SGD optimizers are exhibited in Tables 6-8, respectively.

TABLE 6 Confusion matrix constituents of the modified inception model with the Adam optimizer.

AD type           Precision  Sensitivity  Specificity  F1-score
N.D               97.51      97.97        99.16        97
V.M.D             100        100          100          100
M.D               92.19      90.93        97.31        91
Mod.D             90         90.86        96.74        91
Avg. precision    94.93      -            -            -
Avg. sensitivity  -          94.94        -            -
Avg. specificity  -          -            98.3         -
Avg. F1-score     -          -            -            94.75

TABLE 7 Confusion matrix constituents of the modified inception model with the Adadelta optimizer.

AD type           Precision  Sensitivity  Specificity
N.D               94.23      96.13        97.52
V.M.D             100        100          100
M.D               91.7       88.57        95.21
Mod.D             84.43      86.61        92.5
Avg. precision    92.59      -            -
Avg. sensitivity  -          92.83        -
Avg. specificity  -          -            96.31

A comparison of all the optimizers is shown in Table 9, where the SGD optimizer showed better average precision, average sensitivity, and average specificity than both the Adam and Adadelta optimizers. Similarly, the average Prec, Sens, and Spec for a batch size of 64 in the inception model with the Adam, Adadelta, and SGD optimizers are exhibited in Tables 10-12, respectively.
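Equations 1-4 can be read directly off one class's confusion-matrix counts. The sketch below is illustrative (not the authors' code; the example counts are invented):

```python
def confusion_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Compute Equations 1-4 from one class's confusion-matrix counts."""
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),   # Eq. 1
        "precision":   tp / (tp + fp),                    # Eq. 2
        "specificity": tn / (tn + fp),                    # Eq. 3
        "sensitivity": tp / (tp + fn),                    # Eq. 4
    }

# Illustrative counts for a single class
m = confusion_metrics(tp=90, tn=95, fp=5, fn=10)
print(round(m["accuracy"], 3))  # 0.925
```

For the four-class setting here, each class is scored one-vs-rest and the per-class values are averaged, which is how the average precision, sensitivity, and specificity rows in the tables are obtained.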
TABLE 8 Confusion matrix constituents of the modified inception model with the SGD optimizer.

AD type           Precision  Sensitivity  Specificity
N.D               97.62      98.6         99.2
V.M.D             100        100          100
M.D               94.5       92.27        98.33
Mod.D             91.7       92.5         96.2
Avg. precision    95.9       -            -
Avg. sensitivity  -          95.84        -
Avg. specificity  -          -            98.43

TABLE 9 Comparison of optimizers for the modified inception model.

Optimizers  Avg. precision  Avg. sensitivity  Avg. specificity
SGD         95.9            95.84             98.43
Adam        94.93           94.94             98.3
Adadelta    92.59           92.83             96.31

TABLE 10 Confusion matrix constituents of the modified inception model with the Adam optimizer (batch size 64).

AD type           Precision  Sensitivity  Specificity
N.D               91.6       92.5         96.3
V.M.D             100        100          100
M.D               90.11      88.7         95.17
Mod.D             85.7       86.2         92.8
Avg. precision    91.85      -            -
Avg. sensitivity  -          91.86        -
Avg. specificity  -          -            96.06

TABLE 11 Confusion matrix constituents of the modified inception model with the Adadelta optimizer (batch size 64).

AD type           Precision  Sensitivity  Specificity
N.D               89.38      91.13        93.71
V.M.D             100        100          100
M.D               87.32      88.57        94.93
Mod.D             84.43      85.14        91.17
Avg. precision    90.28      -            -
Avg. sensitivity  -          91.21        -
Avg. specificity  -          -            94.95

TABLE 12 Confusion matrix constituents of the modified inception model with the SGD optimizer (batch size 64).

AD type           Precision  Sensitivity  Specificity
N.D               95.13      96.5         97.51
V.M.D             100        100          100
M.D               93.91      90.2         95.8
Mod.D             89.18      89.82        92.13
Avg. precision    94.55      -            -
Avg. sensitivity  -          94.13        -
Avg. specificity  -          -            96.36

By adding a Gaussian NB classifier to the last layer of the inception model with a batch size of 32 and the Adam optimizer, the results denote a significant increase in the performance parameters, as shown in Table 13.

TABLE 13 Confusion matrix constituents of the modified inception model with the Gaussian NB classifier.

AD type           Precision  Sensitivity  Specificity
N.D               97.8       98.21        99.38
V.M.D             100        100          100
M.D               96.25      91.97        98.5
Mod.D             93.12      93.8         96.41
Avg. precision    96.79      -            -
Avg. sensitivity  -          95.9         -
Avg. specificity  -          -            98.57

Discussion

For the training of the proposed model, the Adam optimizer was utilized. Confusion matrix parameters and training performance parameters for the model are shown in Figure 8. From Figure 9, it can be inferred that this model obtained the comparatively highest parametric values, with a Prec of 94.93%, a Sens of 94.94%, a Spec of 98.3%, and a yielded Acc of 94.92%. Model accuracy was used for evaluating the classification models, and model loss was used for optimizing the parameter values. Figure 9A displays the graphs of training accuracy and validation accuracy for the modified inception model, from which it can be inferred that training accuracy was better than validation accuracy for all epochs. Figure 9B shows the graphs of the training and validation area under the curve (AUC), from which it can be deduced that the AUC for training data was 1, whereas the AUC was <1 for validation data. Figure 9C shows the graphs of training loss and validation loss for the modified inception model, from which it can be inferred that validation loss is high only around the 800th epoch; otherwise, its value is <0.5.

From Table 14, it can be deduced that at the 1,000th epoch, the training Acc value is maximum at BS 32, that is, 95.11%, whereas training loss is minimum, that is, 0.3483. Furthermore, at the 1,000th epoch, the training Acc value is maximum at BS 64, that is, 93.57%, whereas training loss is minimum, that is, 0.3442.

FIGURE 9 Modified inception model graphs containing (A) model accuracy, (B) model AUC, and (C) model loss.
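The paper trains with Keras's built-in Adam optimizer; as a hedged illustration of why Adam is described above as combining AdaGrad and RMSProp functionality, a from-scratch sketch of a single Adam parameter update (not the training code, with default-style hyperparameters assumed) is:

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: momentum (m) plus RMSProp-style gradient scaling (v)."""
    m = b1 * m + (1 - b1) * grad          # first-moment (momentum) estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment (RMSProp-like) estimate
    m_hat = m / (1 - b1 ** t)             # bias correction for early steps
    v_hat = v / (1 - b2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

# Minimize f(x) = x^2 for a few steps; the gradient is 2x
x, m, v = 5.0, 0.0, 0.0
for t in range(1, 201):
    x, m, v = adam_step(x, 2 * x, m, v, t, lr=0.1)
print(abs(x) < 5.0)  # True: the parameter moves toward the minimum at 0
```

The per-parameter scaling by the second-moment estimate is what makes the step size largely insensitive to the raw gradient magnitude, one reason Adam trains such deep models stably across 1,000 epochs.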
TABLE 14 Training performance of the modified inception model with the Adam optimizer.

Epoch  Train loss  Train accuracy  Validation loss  Validation accuracy
For batch size 32
200    0.0182      0.9964          0.2894           0.9413
400    0.0111      0.9969          0.3890           0.9482
600    0.0067      0.9987          0.3421           0.9502
800    0.0022      0.9996          0.7948           0.9419
1,000  0.0017      0.9998          0.3483           0.9517
For batch size 64
200    0.0212      0.9836          0.3577           0.9223
400    0.0152      0.9897          0.3463           0.9267
600    0.0082      0.9923          0.3861           0.9109
800    0.0065      0.9946          0.3458           0.9291
1,000  0.0023      0.9971          0.3442           0.9357

Performance evaluation with previous implementations

The results obtained from the model show that it achieved better parametric values than previous models, owing to its several pre-processing methods. However, some studies have utilized comparatively larger image datasets to validate their models (Hon and Khan, 2017; Talo et al., 2019; Feng et al., 2020; Nakagawa et al., 2020; Rallabandi et al., 2020). Furthermore, while Bin.C was used in most studies, previous studies have also performed tertiary or multiclass classification (Talo et al., 2019; Feng et al., 2020; Nakagawa et al., 2020). Table 15 displays the comparison between the proposed and existing models.

TABLE 15 Comparison with previous implementations.

Authors                            Database  Images  Techniques                Weighted Acc (%)
Rallabandi et al. (2020)           ADNI      1,167   SVM with D.L              75
Feng et al. (2020)                 ADNI      3,127   2D-CNN with D.L           82.57
Nakagawa et al. (2020)             ADNI      2,142   Cox model with D.L        92
Ebrahimi-Ghahnavieh et al. (2019)  ADNI      177     DenseNet-201              84.38
                                                     ResNet50                  81.25
Talo et al. (2019)                 Harvard   1,074   VGG16                     92.49
                                   Medical
                                   School
Islam and Zhang (2018)             OASIS     416     ResNet50                  93.18
Aderghal et al. (2018)             OASIS     416     Cross-modal transfer      83.57
                                                     learning
Hon and Khan (2017)                Kaggle    6,400   VGG16                     92.3
Jha et al. (2017)                  OASIS     416     DTCWT and PCA with FNN    90.06
Ali et al. (2016)                  OASIS     416     VGG16                     92.3
Kang et al. (2021)                 ADNI      798     2D-CNN, VGG16             90.36
Li et al. (2021)                   ADNI      1,167   SVM, CNN                  69.37
Venugopalan et al. (2021)          ADNI      1,311   SVM, k-NN, CNN            75
Proposed methodology               Kaggle    6,400   Transfer learning-based   94.92
                                                     modified inception model

Conclusion

In this study, the effectiveness of the proposed model for the detection of AD has been comprehensively evaluated. The AD dataset was acquired from Kaggle (Sarvesh Dubey). The results were attained after the training and analysis of these models. Furthermore, by duly tuning the optimizer and images, these results demonstrated the effectiveness of the proposed models. An Acc of 94.92% and a Sens of 94.94%, respectively, were achieved with the proposed model using the Adam optimizer. The study models performed better in both training and testing, with similar results.

A possible limitation concerns guaranteeing reproducibility; this issue could be addressed by using a large brain MRI dataset. The transfer learning-based approach places the convolution information into machine learning parts and the AD images into deep learning parts before combining the results of both processes. This study thus helps in forming a more accurate opinion for the development of D.L models. Different transfer learning-based models and optimization processes could also be employed to further enhance the effectiveness of the proposed model. Medical image analysis remains one of the challenging tasks that benefits from useful computational methods at the scale of imaging operations.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Author contributions

SS, SG, DG, and SJ contributed to conception and design of the study. AM organized the database and performed the statistical analysis. SE-S wrote the first draft of the manuscript. K-SK wrote sections of the manuscript. All authors contributed to manuscript revision, read, and approved the submitted version.

Funding

This work was supported by the National Research Foundation of Korea through a grant funded by the Korean government (Ministry of Science and ICT), NRF-2020R1A2B5B02002478.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Aderghal, K., Khvostikov, A., Krylov, A., Benois-Pineau, J., Afdel, K., Catheline, G., et al. (2018). "Classification of Alzheimer disease on imaging modalities with deep CNNs using cross-modal transfer learning," in 2018 IEEE 31st International Symposium on Computer-Based Medical Systems (CBMS), Vol. 1 (Bordeaux: IEEE), 345-350. doi: 10.1109/CBMS.2018.00067

Ali, E. M., Seddik, A. F., and Haggag, M. H. (2016). Automatic detection and classification of Alzheimer's disease from MRI using TANNN. Int. J. Comput. Appl. 148, 30-34. doi: 10.5120/ijca2016911320

Chugh, H., Gupta, S., Garg, M., Gupta, D., Juneja, S., Turabieh, H., et al. (2022). Image retrieval using different distance methods and color difference histogram descriptor for human healthcare. J. Healthc. Eng. 2022, 9523009. doi: 10.1155/2022/9523009

Dhankhar, A., Juneja, S., Juneja, A., and Bali, V. (2021). Kernel parameter tuning to tweak the performance of classifiers for identification of heart diseases. Int. J. E-Health Med. Commun. 12, 1-16. doi: 10.4018/IJEHMC.20210701.oa1

Dhiman, G., Juneja, S., Viriyasitavat, W., Mohafez, H., Hadizadeh, M., Islam, M. A., et al. (2022). A novel machine-learning-based hybrid CNN model for tumor identification in medical image processing. Sustainability 14, 1447. doi: 10.3390/su14031447

Ebrahimi-Ghahnavieh, A., Luo, S., and Chiong, R. (2019). "Transfer learning for Alzheimer's disease detection on MRI images," in 2019 IEEE International Conference on Industry 4.0, Artificial Intelligence, and Communications Technology (IAICT), Vol. 1 (Bali: IEEE), 133-138. doi: 10.1109/ICIAICT.2019.8784845

Fan, Y., Resnick, S. M., Wu, X., and Davatzikos, C. (2008). Structural and functional biomarkers of prodromal Alzheimer's disease: a high-dimensional pattern classification study. Neuroimage 41, 277-285. doi: 10.1016/j.neuroimage.2008.02.043

Feng, W., Halm-Lutterodt, N. V., Tang, H., Mecum, A., Mesregah, M. K., Ma, Y., et al. (2020). Automated MRI-based deep learning model for detection of Alzheimer's disease process. Int. J. Neural Syst. 30, 2050032. doi: 10.1142/S012906572050032X

Filipovych, R., Davatzikos, C., and Alzheimer's Disease Neuroimaging Initiative. (2011). Semi-supervised pattern classification of medical images: application to mild cognitive impairment (MCI). Neuroimage 55, 1109-1119. doi: 10.1016/j.neuroimage.2010.12.066

Hon, M., and Khan, N. M. (2017). "Towards Alzheimer's disease classification through transfer learning," in 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Vol. 1 (Kansas City, MO: IEEE), 1166-1169. doi: 10.1109/BIBM.2017.8217822

Islam, J., and Zhang, Y. (2018). Brain MRI analysis for Alzheimer's disease diagnosis using an ensemble system of deep convolutional neural networks. Brain Inform. 5, 1-14. doi: 10.1186/s40708-018-0080-3

Jha, D., Kim, J. I., and Kwon, G.-R. (2017). Diagnosis of Alzheimer's disease using dual-tree complex wavelet transform, PCA, and feed-forward neural network. J. Healthc. Eng. 21, 9060124. doi: 10.1155/2017/9060124

Juneja, A., Juneja, S., Kaur, S., and Kumar, V. (2021). Predicting diabetes mellitus with machine learning techniques using multi-criteria decision making. Int. J. Inf. Retr. Res. 11, 38-52. doi: 10.4018/IJIRR.2021040103

Juneja, S., Juneja, A., Dhiman, G., Jain, S., Dhankhar, A., Kautish, S., et al. (2021). Computer vision-enabled character recognition of hand gestures for patients with hearing and speaking disability. Mob. Inf. Syst. 2021, 4912486. doi: 10.1155/2021/4912486

Kang, W., Lin, L., Zhang, B., Shen, X., Wu, S., et al. (2021). Multi-model and multi-slice ensemble learning architecture based on 2D convolutional neural networks for Alzheimer's disease diagnosis.

Rallabandi, V. P. S., Tulpule, K., and Gattu, M. (2020). Automatic classification of cognitively normal, mild cognitive impairment and Alzheimer's disease using structural MRI analysis. Inform. Med. Unlocked 18, 100305. doi: 10.1016/j.imu.2020.100305

Rashid, J., Batool, S., Kim, J., Wasif Nisar, M., Hussain, A., Juneja, S., et al. (2022). An augmented artificial intelligence approach for chronic diseases prediction. Front. Public Health 10, 860396. doi: 10.3389/fpubh.2022.860396

Rathore, S., Habes, M., Iftikhar, M. A., Shacklett, A., and Davatzikos, C. (2017). A review on neuroimaging-based classification studies and associated feature extraction methods for Alzheimer's disease and its prodromal stages. Neuroimage 155, 530-548. doi: 10.1016/j.neuroimage.2017.03.057

Sarraf, S., and Tofighi, G. (2016). DeepAD: Alzheimer's disease classification via deep convolutional neural networks using MRI and fMRI. bioRxiv [preprint]. doi: 10.1101/070441

Serra, L., Cercignani, M., Mastropasqua, C., Torso, M., Spanò, B., Makovac, E., et al. (2016). Longitudinal changes in functional brain connectivity predicts conversion to Alzheimer's disease. J. Alzheimers Dis. 51, 377-389. doi: 10.3233/JAD-150961

Sharma, S., Gupta, S., Gupta, D., Juneja, S., Gupta, P., Dhiman, G., et al. (2022a). Deep learning model for the automatic classification of white blood cells. Comput. Intell. Neurosci. 2022, 7384131. doi: 10.1155/2022/7384131
Comput. Biol. Med. 36, 104678. Sharma, S., Gupta, S., Gupta, D., Juneja, S., Singal, G., Dhiman, G., et al. (2022b). doi: 10.1016/j.compbiomed.2021.104678 Recognition of Gurmukhi Handwritten City names using deep learning and cloud Li, Y., Fang, Y., Wang, J., Zhang, H., and Hu, B. (2021). Biomarker extraction computing. Sci. Program. 2022, 5945117. doi: 10.1155/2022/5945117 based on subspace learning for the prediction of mild cognitive impairment Sharma, S., Gupta, S., Gupta, D., Juneja, S., Turabieh, H., Sharma, L., et al. conversion. Biomed Res. Int. 1, 1940–1952. doi: 10.1155/2021/5531940 (2022c). Optimized CNN-based recognition of district names of Punjab state in Misra, C., Fan, Y., and Davatzikos, C. (2009). Baseline and longitudinal Gurmukhi script. J. Math. 2022, 1–10. doi: 10.1155/2022/6580839 patterns of brain atrophy in MCI patients, and their use in prediction of Talo, M., Yildirim, O., Baloglu, U. B., Aydin, G., and Acharya, U. short-term conversion to AD: results from ADNI. Neuroimage. 44, 1415–1422. R. (2019). Convolutional neural networks for multi-class brain disease doi: 10.1016/j.neuroimage.2008.10.031 detection using MRI images. Comput. Med. Imaging Graph. 78, 101673. Moradi, E., Pepe, A., Gaser, C., Huttunen, H., Tohka, J., and Alzheimer’s Disease doi: 10.1016/j.compmedimag.2019.101673 Neuroimaging Initiative. (2015). Machine learning framework for early MRI-based Venugopalan, J., Tong, L., Hassanzadeh, H. R., and Wang, M. D. (2021). Alzheimer’s conversion prediction in MCI subjects. Neuroimage. 104, 398–412. Multimodal deep learning models for early detection of Alzheimer’s disease stage. doi: 10.1016/j.neuroimage.2014.10.002 Sci. Rep. 11, 3254. doi: 10.1038/s41598-020-74399-w Nakagawa, T., Ishida, M., Naito, J., Nagai, A., Yamaguchi, S., Onoda, K., et al. Zhao, Y., Raichle, M. E., Wen, J., Benzinger, T. L., Fagan, A. M., (2020). Prediction of conversion to Alzheimer’s disease using deep survival analysis Hassenstab, J., et al. (2017). 
In vivo detection of microstructural correlates of MRI images. Brain Commun. 2, fcaa057. doi: 10.1093/braincomms/fcaa057 of brain pathology in preclinical and early Alzheimer Disease with magnetic Rallabandi, V. S., Tulpule, K., Gattufor, M., and The Alzheimer’s Disease resonance imaging. Neuroimage 148, 296–304. doi: 10.1016/j.neuroimage.2016. Neuroimaging Initiative. (2020). Automatic classification of cognitively normal, 12.026 Frontiers in Computational Neuroscience 13 frontiersin.org