medRxiv preprint doi: https://doi.org/10.1101/2022.06.27.22276966; this version posted June 28, 2022. The copyright holder for this preprint (which was not certified by peer review) is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity. All rights reserved. No reuse allowed without permission.

A Dense Sub-graph Based Approach for Automatic Detection of the Optic Disc

1. Subrata Jana, Master of Computer Application, Calcutta Institute of Technology, Howrah, India, [email protected]
2. Tribeni Prasad Banerjee, Electronics and Communication Engineering, Dr. B.C. Roy Engineering College, Durgapur, India, [email protected]
3. Gour Sundar Mitra Thakur, Computer Science and Engineering, Dr. B.C. Roy Engineering College, Durgapur, India, [email protected]
4. Pabitra Mitra, Computer Science and Engineering, Indian Institute of Technology, Kharagpur, India, [email protected]
Abstract: Glaucoma is a condition of higher-than-normal intraocular pressure within the eye, which damages the optic nerve that carries visual information to the brain. This paper uses a graph-based approach for automatic localization of the optic disc. We propose a modified dense sub-graph approach to locate the affected optic disc, evaluated on the DRIVE, STARE, and Drishti-GS1 databases. The model proves more accurate than other standard models, providing a new approach to optic disc localization with a system accuracy of 93%.

Keywords — Glaucoma, Feature Extraction, Saliency Detection, DBSCAN, Random Walk, k-dense sub-graph, Markov chain model, Equilibrium distribution

1 Introduction:
Glaucoma is a common disease in ophthalmology and a primary disease of the optic nerve. Many other diseases, such as diabetes and hypertension, also show symptoms on the fundus. The main physical structures in a fundus image are the blood-vessel network, the optic disc, the lens, and the macula, and locating the optic disc assists the subsequent localization of the macula. Optic disc localization relies mainly on the following characteristics: the optic disc is round or oval; the retinal blood vessels taper from source to end; the vessels across the retina follow a roughly parabolic course; and vessel orientation within the optic disc region is vertical, whereas elsewhere it is mostly horizontal. Humans perceive visually distinctive, or salient, scene regions effortlessly and rapidly; such regions are then filtered and refined to extract the relevant high-level information.
This ability has been studied extensively by cognitive scientists and has recently attracted considerable attention in the computer vision community, mainly because it helps locate the object [1] or region that dominates a view and thereby supports complex vision tasks such as scene understanding. The goal of this work is to locate the optic disc [2, 3] by extracting the salient region in an image through a combination of super-pixel segmentation and graph-theory-based saliency detection. Dense k-sub-graph computation is exploited to obtain and refine the saliency maps. First, we partition the image into regions using the SLIC [4] segmentation method. A graph-based Markov chain [5] random walk [6] saliency model uses colour, compactness, and intensity features. The SLIC step generates a smaller, sparser graph on which the k-dense sub-graph [7] is computed. Clustering algorithms are attractive for the task of class detection, but applying them to large spatial databases imposes additional constraints: (1) domain knowledge to decide the input parameters, since suitable values are often not known in advance for a large database; and (2) the ability to detect clusters of arbitrary shape, because clusters in a spatial database may be circular, elongated, linear, etc. We therefore use the SPDBSCAN [8] clustering algorithm.
It requires four input parameters, helps the user determine appropriate values for them, and discovers clusters of dissimilar shapes. SPDBSCAN scales well to large databases. Several graph-based saliency algorithms in the literature improve on contrast-based measures by modelling inter-nodal transition probabilities [9], using refined random walk measures, or employing various entropy functions [10]. Most graph-based techniques produce a fuzzy saliency map, and the most salient region [11, 32] is then filtered from that map. Our model is a graph-based technique that uses dense sub-graph computation and isolates the most salient regions after a random walk. By separating salient from non-salient regions, the resulting saliency map gives better results than existing methods. This is achieved by using a more informative local graph construction, i.e., a dense sub-graph, rather than a simple centrality evaluation when computing the map. A dense k-sub-graph based algorithm for compact salient region detection [12, 32] alone does not localize the optic disc well, so we modify the SLIC step and add SPDBSCAN, obtaining better results than the saliency technique by itself. The rest of the paper is organized as follows: Section 2 describes related work; Section 3 the proposed methodology; Section 4 graph-based saliency detection; Section 5 the experimental results; Section 6 presents the results graphically; and Section 7 the conclusion and future work.

2 Related Work:
Saliency computation is rooted in long-standing psychological theories of human attention, such as Feature Integration Theory.
The theory holds that various visual features are processed in parallel in different areas of the human brain, that feature positions are combined in a master map of locations, and that attention selects the current region of interest. DBSCAN [8] is a density-based technique used for clustering arbitrary shapes; it gives good results with high efficiency. The Graph-Based Visual Saliency model [9] detects highly salient regions with high saliency values in the image plane. Various methods are used to preprocess raw fundus images, such as contrast enhancement, illumination correction, and mask generation, and optic disc segmentation techniques have been categorized into property-based, vessel-junction, and template-matching methods. The Site Entropy Rate (SER) [10], following the information-maximization principle, defines visual saliency for both still images and video. This model estimates the transition probability between two graph nodes by fusing the difference in sub-band feature responses with the spatial distance, using a pre-set parameter. A graph-based saliency detection technique can be applied to images segmented by the SLIC super-pixel algorithm, reducing the k-dense sub-graph [12] problem to saliency detection for better extraction of the salient parts of the visual information. Salient objects [15] can also be found by formulating detection as image segmentation using local, regional, and global saliency cues; this formulation has wide uses.
Manually collecting and annotating images for object detection is very costly. A bottom-up saliency method with erroneous-boundary elimination [16] and normalized random-walk ranking successfully removes image border effects by relating border super-pixels to central ones. Another bottom-up approach [17] uses low-level features; it is a multi-scale approach in which preferred scales must be selected. Visual saliency identifies the most important nodes in a representation, and quantitative models of hierarchical [18] object-based attention serve computer vision; object-based and space-based attention can be integrated by using grouping-based salience to handle dynamic scene tasks. Saliency-based region selection improves object detection in highly cluttered [19] scenes and in predicting human fixations on images, confirming the influence of multiple scales, colour-space weights, and quaternion transform axes. The global and local properties of a region can be explored by performing random walks on a complete graph and a k-regular graph [21]. A cellular automata [22] technique builds a background-based map that takes both global colour distinctness and spatial distance into account; an automatic update mechanism based on cellular automata exploits the intrinsic connectivity of salient objects through interactions with neighbours. GBVS [9, 23] induces a centre bias, as shown by activating and then normalizing a uniform image with the algorithm; the modified algorithm predicts fixations well but still worse than GBVS. Visual attention involves three stages. First, a set of basic features is computed in parallel across the visual field and represented in a set of cortical topographic maps; these maps are combined into a saliency map encoding the relative conspicuity of locations in the visual scene.
Next, a winner-take-all (WTA) network operating on this map singles out the most salient location, and the properties of the selected location are routed to a central representation. The WTA network then shifts automatically to the next most salient location, which shortens [25] response times and reduces search errors when selecting useful objects, regions, and object groups, and allows flexible selection of visual items whether located in the foveal field or at the periphery. Work on optic disc [26] localization and macula detection also covers corner detection, combining optic disc localization with improved corner detection, as well as computational tools for automatic [27] glaucoma detection with thresholding-based cup measurement and segmentation ratios. A dense dilated feature-extraction block in an encoder-decoder [28] configuration extracts aggregated features at different scales, and an optic disc and optic cup segmentation technique uses the Graph Convolution Network (GCN) [29] method. G_Net and C_Net are two proposed techniques for locating the optic disc and optic cup, and iris segmentation uses IsqEUNet [34], with good results. Beyond PCA and Hessian-based methods, vessel segmentation using the max-flow graph [33] yields better results.

3 Proposed Methodology:
The block diagram of the optic disc localization framework is shown in Fig. 1. The framework is divided into three stages: the feature extraction stage, the training stage, and the testing and evaluation stage.

Step 1: First, image regions (super-pixels) are generated from the original image using the SLIC technique.
Step 2: We extract a feature map using the graph-based saliency model together with a compactness factor for saliency-map computation.
Step 3: We apply a graph-based edge threshold for sparse graph construction.
Step 4: The last stage is evaluation, where we apply k-dense subgraph computation, which yields the salient region.

Fig. 1. Block diagram of the optic disc localization framework
Fig. 2. Flow chart for the modified dense sub-graph technique

As Fig. 2 shows, we first extract the feature information and then use the graph-based saliency model to obtain an intermediate saliency map, which is further refined by dense sub-graph computation to obtain the final saliency map. The technique builds a connectivity graph over super-pixels, computed within a rectangular region. The HSV colour space was chosen because Euclidean distance in this colour space is uniform, and it has been experimentally shown to produce better results than YCbCr and RGB.

3.1. Algorithm:
SLIC stands for Simple Linear Iterative Clustering; we use a modified SLIC [4] technique. The algorithm accepts six parameters. The first is the input image, denoted im. The second is k, the preferred number of equally sized super-pixels; we use approximately 2000. The third is m, the shape-smoothing weight. The fourth is seRadius, which controls adjacent-region merging. The fifth is colopt, indicating how the colour centre is computed. The sixth, mw, is an optional window size. We use initial cluster centres Di = [li, ai, bi, xi, yi]^T on a regular grid with spacing denoted S.
We construct super-pixels of approximately equal size; for an image of N pixels, the grid interval is S = sqrt(N/k). Each cluster centre is moved to the minimum-gradient position in its 3x3 neighbourhood, which reduces the chance of seeding a super-pixel on a noisy pixel. Whereas k-means compares each pixel with every cluster centre, super-pixel clustering compares each pixel only with the centres of its small surrounding region. A super-pixel covers an area of approximately S x S, so each centre's search region is approximately 2S x 2S, and similarity is measured by a Euclidean distance D. Each pixel is assigned to the nearest cluster centre, and the cluster centres Di = [li, ai, bi, xi, yi]^T are then updated.

Algorithm: initialize cluster centres Di = [li, ai, bi, xi, yi]^T on a grid of spacing S, each moved to the lowest-gradient position in its 3x3 neighbourhood.
Step 1: Set label l(j) = -1 and distance p(j) = infinity for each pixel j.
Repeat:
Step 2: For each cluster centre Di do
Step 3: For each pixel j in the 2S x 2S region around Di do
Step 4: Compute the Euclidean distance P between Di and j; if P < p(j), set p(j) = P and l(j) = i.
Step 5: Compute new cluster centres and the residual error Δ.
Until Δ <= thresh.

The SPDBSCAN function determines the neighbourhood of any super-pixel using the following criteria: two super-pixels must be adjacent, and the clustering distance is the Lab colour distance between their colour centres; if two super-pixels are not adjacent, the clustering distance is taken to be infinite. SPDBSCAN requires four parameters: lm, Sp, Am, and E. The first parameter, lm, denotes the label image generated by the SLIC function.
The second parameter, Sp, gives in each column the attributes of a super-pixel region generated by SLIC. The third parameter, Am, is the adjacency matrix of the label image. The fourth parameter, E, is the threshold that controls which super-pixels are clustered together; in our program we set E = 5. The SPDBSCAN function finds a cluster beginning at an arbitrary starting super-pixel Sp and retrieves all density-reachable points. If a point m1 is a border point, it is not expanded, and SPDBSCAN proceeds to the next point. SPDBSCAN uses two global variables and can merge two clusters into a single cluster. The distance between two sets of points M1 and M2 is defined as

dist(M1, M2) = min{ dist(p, q) | p ∈ M1, q ∈ M2 }

Input: the label image of super-pixels, the adjacency matrix of segments, and the structure array of super-pixels.
Output: the new cluster regions.
Step 1: Set Np = length(structure array of super-pixels).
Step 2: While n < Np do: if super-pixel n has not been visited, set Visit(n) = 1.
Step 3: neighbours = regionQuery(Sp, Am, n, Ec)
Step 4: While ind <= length(neighbours) do
Step 5: nb = neighbours(ind)
Step 6: neighboursP = regionQueryM(Sp, Am, nb, Ec)
Step 7: neighbours = [neighbours neighboursP]

If a super-pixel has not yet been visited, it is marked as visited, its neighbours are found, and the number of clusters is increased. If a neighbour has not been visited, it is marked as visited and added to the list. If a neighbour is not yet a member of any cluster, it is added to the current cluster.
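The neighbourhood rule and cluster expansion described above can be sketched in Python. This is an illustrative re-implementation, not the authors' code: super-pixels are represented only by their mean L*a*b* colours, every super-pixel may seed a cluster (no minimum-points criterion), and the inputs used below are toy values.

```python
import numpy as np

def spdbscan(colors, Am, E=5.0):
    """DBSCAN-style clustering of super-pixels over an adjacency graph.

    colors: (n, 3) mean L*a*b* colour of each super-pixel.
    Am:     (n, n) boolean adjacency matrix from the segmentation.
    E:      colour-distance threshold. Non-adjacent super-pixels are
            treated as infinitely far apart, so only graph neighbours
            within colour distance E can end up in the same cluster.
    Returns an (n,) array of cluster labels starting at 0.
    """
    n = len(colors)
    labels = -np.ones(n, dtype=int)
    visited = np.zeros(n, dtype=bool)
    cluster = 0

    def region_query(i):
        # Neighbours = adjacent super-pixels within colour distance E.
        d = np.linalg.norm(colors - colors[i], axis=1)
        return [j for j in range(n) if Am[i, j] and d[j] <= E]

    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        labels[i] = cluster
        seeds = region_query(i)
        while seeds:                      # expand the current cluster
            j = seeds.pop()
            if not visited[j]:
                visited[j] = True
                seeds.extend(region_query(j))
            if labels[j] == -1:
                labels[j] = cluster
        cluster += 1
    return labels
```

Because the adjacency matrix restricts each region query to graph neighbours, the expansion touches each edge a bounded number of times, which is what makes the method practical for large numbers of super-pixels.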
3.2 Super-pixel Segmentation and Feature Extraction:
The image is first segmented using the SLIC method, which takes six input parameters, such as im for the input image and k for the number of desired super-pixels. The value of k is nominal; the actual number of super-pixels generated will generally be slightly larger. In our program, a k of about 100 pixels per super-pixel is used. The next parameter is m, the weighting factor between colour and spatial proximity; we set it to 25. The next parameter is seRadius: regions morphologically smaller than this are merged with adjacent regions. The seRadius range is 1 to 1.5; we set it to 1. The next parameter, colopt, indicates how the cluster colour centre is computed; we use the median. The next parameter, mw, is an optional median-filter window size. The SLIC method returns l, Am, and c, where l is a labelled image of super-pixels, Am is a segment adjacency matrix, and c is a super-pixel attribute structure array with fields L, a, b, r, c, stdL, stda, stdb, N, edges, and D. The next step is SPDBSCAN, the Super-Pixel Density-Based Spatial Clustering of Applications with Noise method, which clusters super-pixels for image segmentation. This function takes four parameters: l, Cp, Am, and E. The first, l, is the labelled image of clusters/regions generated by SLIC. The second, Cp, gives in each column the attributes of each super-pixel region, as returned by SLIC. The next, Am, is the adjacency matrix of the labelled image, also returned by SLIC. The last, E, is the matching tolerance (distance threshold) that controls which super-pixels are clustered together; we set it to 5. The following function is DRAWREGIONBOUNDARIES, which draws the boundaries of labelled regions on an image.
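A single assignment sweep of SLIC with parameters of this kind can be sketched in Python. This is a simplified, hypothetical version, not the authors' code: it initialises centres on a grid with interval S = sqrt(N/k), performs one assignment pass with the combined colour/spatial distance, and omits gradient-based seeding, the centre-update loop, and the seRadius/colopt/mw options.

```python
import numpy as np

def slic_step(img, k=4, m=10.0):
    """One assignment sweep of a simplified SLIC.

    img: (H, W, 3) float array (stand-in for the L*a*b* channels).
    k:   desired number of roughly equal-sized super-pixels.
    m:   compactness weight balancing colour vs. spatial distance.
    Returns (labels, centers) after a single sweep.
    """
    H, W, _ = img.shape
    S = int(np.sqrt(H * W / k))          # grid interval S = sqrt(N/k)

    # Initialise cluster centres [l, a, b, x, y] on a regular grid.
    centers = []
    for y in range(S // 2, H, S):
        for x in range(S // 2, W, S):
            centers.append([*img[y, x], x, y])
    centers = np.array(centers, dtype=float)

    labels = -np.ones((H, W), dtype=int)
    dist = np.full((H, W), np.inf)

    # Each centre only searches a 2S x 2S window, unlike full k-means.
    for i, (l, a, b, cx, cy) in enumerate(centers):
        y0, y1 = max(int(cy) - S, 0), min(int(cy) + S, H)
        x0, x1 = max(int(cx) - S, 0), min(int(cx) + S, W)
        patch = img[y0:y1, x0:x1]
        yy, xx = np.mgrid[y0:y1, x0:x1]
        dc2 = ((patch - np.array([l, a, b])) ** 2).sum(axis=2)
        ds2 = (xx - cx) ** 2 + (yy - cy) ** 2
        D = np.sqrt(dc2 + (m / S) ** 2 * ds2)   # combined distance
        better = D < dist[y0:y1, x0:x1]
        dist[y0:y1, x0:x1][better] = D[better]
        labels[y0:y1, x0:x1][better] = i
    return labels, centers
```

Because each centre searches only its 2S x 2S window, the cost per sweep is linear in the number of pixels, which is what keeps SLIC practical on full fundus images.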
The image is converted to HSV colour space: the H channel denotes hue, the S channel saturation, and the V channel value (intensity). We use only the H and S channels.

4 Graph-Based Saliency Computation:
Image segmentation is accomplished through the SLIC, SPDBSCAN, and DRAWREGIONBOUNDARIES steps. We then build a graph over the image, taking each segmented region as a vertex and the distance between regions as edge information. Graph-based visual saliency combines the weights as the product of the individual feature weights [12]:

ImI = [ImL*, Ima*, Imb*]^T    (1)

where ImL*, Ima*, Imb* are the normalized feature-strength maps corresponding to the L*, a*, b* components of the image, and ImI is a vector comprising the three feature maps.

f(m, n) = Σ_{t=1}^{3} ( Im_{m,t} − Im_{n,t} )²    (2)

where Im_{m,t} and Im_{n,t} are the mean intensity values of feature channel t (t = 1, 2, 3 for L*, a*, b*) for nodes m and n respectively [12].

Psp(m, n) = 1 − sqrt( (x_m − x_n)² + (y_m − y_n)² ) / D    (3)

where x_n and y_n denote the mean x and y coordinates of region n and D is the diagonal length of the image. MGBVS is a fully connected graph, whereas GGBVS is based on the saliency map; we then compute the k-dense sub-graph of this graph. The graph Gim yields an N x N transition matrix, where N is the number of nodes, and each element TP(i, j) is proportional to the edge weight w(i, j). The degree matrix W of the graph is the diagonal matrix [12]

W(i, i) = Σ_j w(i, j)    (4)

Each column of the transition matrix must sum to 1, so the transition matrix is [12]

TP = A W^{-1}    (5)

A random walk derived from the Markov chain is then used [22].
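The degree matrix of Eq. (4), the column-stochastic transition matrix TP = A W^{-1} of Eq. (5), and the equilibrium distribution of the resulting Markov chain can be sketched numerically. The power-iteration solver and the toy weight matrix are illustrative assumptions; the paper does not specify how the stationary distribution is computed.

```python
import numpy as np

def transition_matrix(w):
    """Column-stochastic transition matrix TP = A W^{-1}.

    w: (N, N) symmetric non-negative edge-weight matrix A.
    W is the diagonal degree matrix with W(i, i) = sum_j w(i, j),
    so dividing each column j of A by its degree makes every
    column of TP sum to 1.
    """
    deg = w.sum(axis=0)          # degree of each vertex (column sums)
    return w / deg               # broadcast: divide each column

def stationary(tp, iters=200):
    """Equilibrium distribution of the Markov chain by power iteration:
    repeatedly apply TP to a uniform start until pi = TP pi."""
    pi = np.full(tp.shape[0], 1.0 / tp.shape[0])
    for _ in range(iters):
        pi = tp @ pi
    return pi

# Toy 3-node weighted graph (weights are illustrative).
w = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
tp = transition_matrix(w)
pi = stationary(tp)
```

For a symmetric weight matrix the equilibrium distribution is proportional to the vertex degrees, here (3, 2, 3)/8, which gives a quick sanity check for the construction.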
Gimage has a fixed number of nodes and is fully connected by construction, so the stationary distribution of the Markov chain exists. MGBVS is a fully connected graph from which the GGBVS saliency graph is derived. The k-dense [7] subgraph computation then measures the density of the graph after edge thresholding.

4.2 Saliency Graph Thresholding:
The saliency map exhibits different feature values, and dissimilar features produce proportionally large edge weights; saliency computation favours the higher edge weights. A threshold partitions the graph: edges with weights above the threshold are accepted and those below it are discarded. The distribution of edge weights between all pairs of vertices is analyzed using a threshold t, giving two sets of edges: a rejected set Pr and an accepted set Pa. The threshold is selected using the entropy of this distribution. For a particular threshold t, the ratio r of the sum of weights in the rejected set Pr to the sum over the total set Pr ∪ Pa is [12]

r = ( Σ_{w_i ≤ t} w_i ) / ( Σ_i w_i )    (6)

Equation (6) relates the rejected set of edges to the total set of edges. The entropy En of this rejected/accepted split is [12]

En = −r log(r) − (1 − r) log(1 − r)    (7)

The entropy En varies with the threshold t; the edge-weight threshold Tmax is chosen where the entropy is maximal. In our experiments the threshold step is 0.1, with starting threshold Ti = 0.1 and end threshold Tf = 0.95. The maximum entropy value obtained is 0.69 at threshold 0.44, with a rejection ratio of 0.03 and an acceptance ratio of 0.97.
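The entropy-based selection of the edge-weight threshold in Eqs. (6)-(7) can be sketched as below. The function name, the scan loop, and the toy weights in the example are assumptions; the scan range and step (0.1 to 0.95 in steps of 0.1) follow the experiment described above.

```python
import numpy as np

def best_threshold(weights, ti=0.1, tf=0.95, step=0.1):
    """Scan thresholds t in [ti, tf] and return the one maximising the
    binary entropy of the rejected-weight ratio:
        r(t)  = (sum of weights <= t) / (total weight)      # Eq. (6)
        En(t) = -r*log(r) - (1 - r)*log(1 - r)              # Eq. (7)
    """
    weights = np.asarray(weights, dtype=float)
    total = weights.sum()
    best_t, best_en = ti, -np.inf
    t = ti
    while t <= tf + 1e-9:
        r = weights[weights <= t].sum() / total
        if 0.0 < r < 1.0:          # entropy undefined at r = 0 or 1
            en = -r * np.log(r) - (1 - r) * np.log(1 - r)
            if en > best_en:
                best_t, best_en = t, en
        t += step
    return best_t, best_en
```

The entropy criterion prefers a threshold that splits the total edge weight as informatively as possible, rather than one fixed by hand for every image.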
4.3 Dense k-Subgraph Computation:
Given a graph as a collection of vertices and edges, the densest k-subgraph problem asks for a sub-graph on exactly k vertices with the maximum number of edges; it is NP-hard. The dense k-subgraph problem can be formulated as [8]

max x^T A x
s.t. Σ_{i=1}^{n} x_i = k,  x_i ∈ {0, 1}, i ∈ {1, ..., n}    (8)

where A is the adjacency matrix, k is a positive integer between 3 and n−2, and x_i is a binary variable taking the value one if vertex i is selected. Let f(x) = x^T A x and n = N, with coordinate values renewed at each step. In each step i, let T_i with |T_i| = m be the set of randomly chosen coordinates updated concurrently. At each iteration, the random coordinate values are updated following [8]

U_i(x)_t = { w_t^i(x) if t ∈ T_i ; x_t otherwise },  t = 1, ..., n    (9)

where w^i(x) is the solution of the optimization problem

w^i(x) = argmax_w Σ_{t∈T_i} ∇_t f(x)(w_t − x_t) − (p/2) Σ_{t∈T_i} (w_t − x_t)²
s.t. Σ_{t∈T_i} w_t = p − Σ_{t∉T_i} x_t,  0 ≤ w_t ≤ 1, t ∈ T_i    (10)

Equation (10) is solved as a linear programming problem.

4.4 Saliency Map Computation:
The map is computed through the following steps.
Step 1: Let Ssalience = {v1, v2, v3, ..., vn} be the vertices of the dense subgraph, and let Msaliency(x, y) be the saliency value at pixel M(x, y), where x and y are pixel positions.
Step 2: For each pixel of the image, if the pixel belongs to a vertex i of the subgraph, the degree of vertex i is compared with the mean vertex degree and a saliency value is assigned accordingly; otherwise Msaliency(x, y) is set to zero.
Step 3: For the final saliency map, we use the following equation [11]:
Msal(x, y) = ( Deg(ver_i) / max_i Deg(ver_i) )^β    (11)

In the above equation β ≤ 1; pixels associated with low-degree vertices receive low saliency values, while pixels corresponding to high-degree vertices are allocated greater saliency values.

5 Experimental Results:
Fig. 3. Comparison of saliency maps across models: original image, proposed model, ROI model, dense saliency model, and graph-based manifold model.

We used the popular DRIVE and STARE datasets and implemented our algorithm alongside the other models. Our algorithm requires two parameters, one being the subgraph size k of the k-dense computation; the k-dense subgraph determines the region of salient values and is the component that improves localization accuracy. We compared our model against four others, including the Graph-Based Manifold model [13], the Dense Saliency model [12], and the ROI model [14], and our model outperformed all four. We use a large number of super-pixel nodes (n = 100) and a large m value (25) for smoother super-pixel shapes, with seRadius set to 1. We found experimentally that k = 0.8n gives better results than other settings, and that β ≤ 1 produces more distinct results than existing techniques; we chose β = 0.3 for the most prominent results.

6 Test Results Plotted Graphically:
Accuracy measures the proportion of test data that is correctly classified.
It is calculated according to the following equation:

Accuracy = (Tp + Tn) / (Tp + Fp + Tn + Fn) × 100    (12)

where Tp denotes true positives, Tn true negatives, Fp false positives, and Fn false negatives.

TABLE I. ACCURACY RESULTS
Model Name                     Accuracy (%)
Graph Manifold Model           83
Spectral Saliency Model        84
ROI Model                      85
Dense Saliency Model           87
Proposed Model                 93

The Graph Manifold model [13] achieves an average accuracy of 83%, the Spectral Saliency multichannel model [31] 84%, the ROI model [30] 85%, and the Dense Saliency model [12] 87%, while our model achieves an average accuracy of 93%.

Fig. 4. Graphical comparison of the different techniques.

7 Conclusion and Future Scope:
We have proposed a new technique for salient region detection. Our approach first applies simple linear iterative clustering and then segments the image via SPDBSCAN over the super-pixel clusters, which determines the neighbourhood of each super-pixel efficiently. It is faster and gives better results than other existing techniques. The experimental results show that tuning the method's two parameters produces prominent results. Future work includes detection with convolutional neural networks and noisy image datasets.
References
[1] Y. Sun and R. Fisher, "Object-based attention for computer vision", Artif. Intell., vol. 146, no. 1, pp. 77-123, 2003.
[2] A. K. Whardana, N. Suciati, "A simple method for optic disk segmentation from retinal fundus image", DOI: 10.5815/ijigsp.2014.11.05, 2014.
[3] J. R. H. Kumar, A. K. Pediredla, C. S. Seelamantula, "Active discs for automated optic disc segmentation", IEEE Global Conference on Signal and Information Processing, Orlando, FL, pp. 225-229, 2015.
[4] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua and S. Susstrunk, "SLIC superpixels compared to state-of-the-art superpixel methods", IEEE Trans. Pattern Anal. Mach. Intell., vol. 34, no. 11, pp. 2274-2282, 2012.
[5] B. Jiang, L. Zhang, H. Lu, C. Yang, M.-H. Yang, "Saliency detection via absorbing Markov chain", IEEE Int. Conf. on Computer Vision, pp. 1665-1672, 2013.
[6] V. Gopalakrishnan, Y. Hu, D. Rajan, "Random walks on graphs to model saliency in images", Proc. IEEE Int. Conf. Comput. Vision Pattern Recognition, pp. 1698-1705, 2009.
[7] U. Feige, G. Kortsarz, D. Peleg, "The dense k-subgraph problem", Algorithmica, vol. 29, pp. 410-421, 2001.
[8] M. Ester, H. P. Kriegel, J. Sander, X. Xu, "A density-based algorithm for discovering clusters in large spatial databases with noise", KDD-96, AAAI Press, pp. 226-231, 1996.
[9] J. Harel, C. Koch, P. Perona, "Graph-based visual saliency", Proc. Adv. Neural Inf. Process. Syst., pp. 545-552, 2006.
[10] W. Wang, Y. Wang, Q. Huang, W. Gao, "Measuring visual saliency by site entropy rate", Proc. IEEE Int. Conf. Comput. Vision Pattern Recognit., pp. 2368-2375, Jun. 2010.
[11] M. Z. Aziz, B. Mertsching, "Fast and robust generation of feature maps for region-based visual attention", IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 4, pp. 693-708, 2010.
[12] S. Chakraborty, P. Mitra, "A dense subgraph based algorithm for compact salient image region detection", Computer Vision and Image Understanding, vol. 145, DOI: 10.1016/j.cviu.2015.12.005, 2015.
[13] C. Yang, L. Zhang, H. Lu, X. Ruan, M.-H. Yang, "Saliency detection via graph-based manifold ranking", Proc. IEEE Int. Conf. on Computer Vision Pattern Recognition, pp. 3166-3173, 2013.
[14] B. H. Lee, J. Liu, H. Y. Lim, H. Li, "Optic disc region of interest localization in fundus image for glaucoma detection in ARGALI", IEEE Xplore, DOI: 10.1109/ICIEA.2010.5515221, 2010.
[15] T. Liu, J. Sun, N. Zheng, X. Tang, H. Shum, "Learning to detect a salient object", IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 2, pp. 353-367, 2011.
[16] C. Li, Y. Yuan, W. Cai, Y. Xia, D. D. Feng, "Robust saliency detection via regularized random walks ranking", IEEE Conference on Computer Vision and Pattern Recognition, DOI: 10.1109/CVPR.2015.7298887, 2015.
[17] R. Pal, A. Mukherjee, P. Mitra, J. Mukherjee, "Modelling visual saliency using degree centrality", IET Computer Vision, vol. 4, no. 3, pp. 218-229, Sep. 2010.
[18] Y. Sun, R. Fisher, "Object-based visual attention for computer vision", Artif. Intell., vol. 146, no. 1, pp. 77-123, 2003.
[19] D. Walther, U. Rutishauser, C. Koch, P. Perona, "On the usefulness of attention for object recognition", Workshop on Attention and Performance in Computational Vision at ECCV, 2004.
[20] B. Schauerte, R. Stiefelhagen, "Quaternion-based spectral saliency detection for eye fixation", European Conference on Computer Vision, 2012.
[21] V. Gopalakrishnan, Y. Hu, D. Rajan, "Random walks on graphs to model saliency in images", IEEE Trans. Image Processing, vol. 19, no. 12, pp. 3232-3242, Dec. 2010.
[22] Y. Qin, H. Lu, Y. Xu, H. Wang, "Saliency detection via cellular automata", IEEE Int. Conf. Comput. Vision Pattern Recognit., 2014.
[23] R. Sotirov, "On solving the densest k-subgraph problem on large graphs", Optimization Methods and Software, DOI: 10.1080/10556788.2019.1595620, 2019.
[24] A. M. N. Allam, A. A. H. Youssif, A. Z. Ghalwash, "Automatic segmentation of optic disc in eye fundus images: a survey", DOI: 10.5565/rev/elcvia.680, vol. 14, 2015.
[25] C. Koch, S. Ullman, "Shifts in selective visual attention: towards the underlying neural circuitry", Human Neurobiology, vol. 4, no. 4, pp. 219-227, 1985.
[26] B. Gui, R. J. Shuai, P. Chen, "Optic disc localization algorithm based on improved corner detection", Elsevier Ltd., vol. 131, pp. 311-319, 2018.
[27] J. Carrillo, L. Bautista, J. Villamizar, J. Rueda, M. Sanchez and D. Rueda, "Glaucoma detection using fundus images of the eye", 2019 XXII Symposium on Image, Signal Processing and Artificial Vision (STSIVA), Bucaramanga, Colombia, pp. 1-4, DOI: 10.1109/STSIVA.2019.8730250, 2019.
[28] L. Mou, L. Chen, J. Cheng, Z. Gu, Y. Zhao and J. Liu, "Dense dilated network with probability regularized walk for vessel detection", IEEE Transactions on Medical Imaging, vol. 39, no. 5, pp. 1392-1403, DOI: 10.1109/TMI.2019.2950051, May 2020.
[29] Z. Tian, Y. Zheng, X. Li, S. Du, X. Xu, "Graph convolutional network based optic disc and cup segmentation on fundus image", Biomedical Optics Express, vol. 11, no. 6, 2020.
[30] R. Bharath, L. Z. J. Nicholas, X. Cheng, "Scalable scene understanding using saliency-guided object localization", DOI: 10.1109/ICCA.2013.6565074, 2013.
[31] H. H. Yeh and C. S. Chen, "From rareness to compactness: contrast-aware image saliency detection", Proc. IEEE Int. Conf. Image Processing, Orlando, Florida, USA, 2012.
[32] H. Luo, G. Han, P. Liu, Y. Wu, "Saliency region detection using diffusion process with nonlocal connections", Applied Sci., DOI: 10.3390/app8122526, 2018.
[33] S. Jana, S. Ray, P. Adhikary, T. P. Banerjee, "Graph based approach for image data retrieval in medical application", Cognitive Computing in Human Cognition, Springer Cham, vol. 17, pp. 91-98, DOI: 10.1007/978-3-030-48118-6_9, 2020.
[34] M. Sardar, S. Banerjee, S. Mitra, "Iris segmentation using interactive deep learning", IEEE Access, vol. 8, pp. 219322-219330, DOI: 10.1109/ACCESS.2020.3041519, 2020.