Preprints (www.preprints.org) | NOT PEER-REVIEWED | Posted: 16 March 2018
doi:10.20944/preprints201803.0128.v1
Orthogonal Moment Extraction and Classification of Melanoma Images
Sudhakar Singh1, Shabana Urooj2
1,2 Department of Electrical Engineering, Gautam Buddha University, Greater Noida, India
sudhakarsingh86@gmail.com1, shabanaurooj@ieee.org2
Abstract: This paper investigates orthogonal moments (OMs), namely Zernike moments (ZM), pseudo-Zernike moments (PZM), and orthogonal Fourier–Mellin moments (OFMM), for the analysis of melanoma images. Moment invariants may vary with respect to geometric variations. For the analysis of the orthogonal moments, one hundred random melanoma images and one hundred non-melanoma images have been taken from a database of 570 melanoma images and 250 non-melanoma images, respectively. The orthogonal moments have been computed by varying the phase angle from 10 to 40 degrees in steps of 10 degrees for the orders 2, 4, 8, 16, 32, 64, 128, and 256. Particle Swarm Optimization (PSO) has been used to select the optimal OMs, and the resulting set of optimal OMs has been applied to classify melanoma images. A Support Vector Machine (SVM) has been used for the classification, achieving a sensitivity of 88.78%.
Keywords: Moment Invariants, ZM, PZM, OFMM, SVM, PSO.
1. Introduction
Melanoma is one of the most widely recognized skin diseases around the world, and its incidence is increasing steadily [1], [2]. Computational analysis frameworks have been developed to help dermatologists make an early and timely diagnosis of skin disease from dermoscopy images [3]. Skin conditions such as tanning, pigment darkening, sunburn, skin cancers, and infective diseases are rising more rapidly due to litter, ultraviolet light, and global warming. A one percent decline in ozone leads to a 2-5% increase in the prevalence of skin cancers and other related diseases [4]. Computer Aided Detection (CAD) [5], [6] can be applied to skin disease images to assist the expert dermatologist or radiologist as a second reader; CAD can find and differentiate skin disease that an expert may not spot [7], [8]. Figure 1 presents the distribution of diseases in India.
Figure 1: Distribution of diseases in India (skin disease: 15.83%; other diseases: 84.17%).
OMs play a very important role in image analysis, pattern recognition [9], texture classification [10], image indexing, and target orientation estimation.
This article presents the classification of melanoma images using OMs. Very few articles have used OM features for the classification of images. The performance of the proposed method is better than that of other existing techniques. OMs are non-redundant features, so fewer features are required for accurate melanoma classification, which reduces the computational complexity.
The paper is arranged in six sections: the first section is the introduction, the second section introduces the orthogonal moments, the third section explains the moment selection technique, the fourth section deals with the classification technique, the fifth section discusses the results, and the sixth section summarizes the paper.
1.1 Background
Orthogonal Rotation-Invariant Moments (ORIMs) are very significant features for imaging. Moments hold important information about an image and are invariant to image rotation. ORIMs include Zernike moments, pseudo-Zernike moments, and Orthogonal Fourier–Mellin Moments (OFMMs) [11], [12]. Zernike moments were proposed by Zernike in 1934 as the eigenfunctions of a differential equation; Zernike polynomials can also be derived from Legendre polynomials under certain constraints, as found by Bhatia and Wolf in 1954. Sergio Dominguez [13] proposed a technique for the recognition of 3-D objects and pose estimation, Chandan et al. [14] analysed the errors in the computation of ORIMs, and Sheng et al. proposed OFMMs [15]-[18] for pattern recognition. The magnitudes of orthogonal moments are invariant to geometric variations of the signal [16]. For imaging applications, orthogonal moments are digitized both geometrically, on the unit disk, and numerically, in the orthogonal function values [6], [11]. Specifically, a set of pixels is designated to cover the unit disk, and a value of each orthogonal function is assigned to each selected pixel [20]. Direct methods determine the orthogonal function value for a pixel by sampling the functions at one or several locations within the pixel [21]. Orthogonal moments [7], [10], [22], [23] are noise resistant [6] and are used as a basis to create moments with new properties. A few additional efforts have been made to reduce the computation time of the orthogonal moments [24].
However, digitization compromises the precision of the orthogonal moments. Two types of errors have been identified in the direct digitization method. The first error derives from the approximation of the continuous unit disk by a finite set of pixels and is called the geometric error. The second error arises from sampling the orthogonal functions and is denoted the numerical error; the smaller the pixel size, the smaller the numerical error [16]. Liao and Pawlak proposed a method for selecting the radius of the disk, and thus the number of pixels, to control the geometric and numerical errors when determining the orthogonal function values [21]. Recently, Xin et al. developed a technique to compute Zernike moments in polar space [21].
2. Methodology
Figure 2 shows the flow chart of the proposed methodology. The first step is to pick the images from the standard ISIC database [6] and to pre-process and normalize them; the OMs of each image are then computed, the computed moments are optimized by PSO, and the selected moments are finally classified by SVM. A minimal code sketch of this pipeline is given after the figure.
Figure 2: Flow chart of the proposed methodology (test image database → pre-processing → normalization → moment extraction → moment optimization → classification of melanoma images).
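For concreteness, the following minimal Python sketch (an illustration, not the authors' implementation) mirrors this pipeline under stated assumptions: synthetic feature vectors stand in for the moments extracted from ISIC images, a random index subset stands in for the PSO selection step detailed in Section 3, and scikit-learn's SVC provides the RBF SVM of Section 4.

# Minimal pipeline sketch: extracted OMs -> moment selection -> SVM classification.
# Synthetic stand-ins are used so the sketch is self-contained and runnable.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-in for "extract OMs from pre-processed, normalized images":
# 200 images (100 melanoma, 100 non-melanoma), 64 moment features each.
X = rng.normal(size=(200, 64))
y = np.repeat([1, 0], 100)            # 1 = melanoma, 0 = non-melanoma
X[y == 1] += 0.5                      # make the synthetic classes separable

# Stand-in for PSO-based moment selection: keep a subset of feature indices.
selected = rng.choice(64, size=16, replace=False)
X_sel = X[:, selected]

# Normalize the features and classify with an RBF SVM.
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
clf = SVC(kernel="rbf").fit(scaler.transform(X_tr), y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(scaler.transform(X_te))))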
The image function f(j, k) is defined on a rectangular array of pixels, where j indexes the rows and k indexes the columns. The ORIMs of the image function f(x_j, y_k) are therefore defined as

$$A_{m,n} = \frac{m+1}{\pi} \sum_{j=0}^{N-1} \sum_{k=0}^{N-1} f(x_j, y_k)\, V_{m,n}^{*}(x_j, y_k)\, \Delta^2$$ (1)

Here m is a non-negative integer, n is an integer, and $V_{m,n}^{*}(x, y)$ is the complex conjugate of $V_{m,n}(x, y)$. The moment basis function $V_{m,n}(x, y)$ is defined as

$$V_{m,n}(x, y) = R_{m,n}(\rho)\, e^{jn\theta}$$ (2)

where $\rho = (x^2 + y^2)^{1/2}$, $\theta = \tan^{-1}(y/x)$, $j = \sqrt{-1}$, and $R_{m,n}(\rho)$ is the radial polynomial. The image function f(x, y) can be reconstructed from its moments as

$$f(x, y) = \sum_{m} \sum_{n} A_{m,n}\, V_{m,n}(x, y)$$ (3)

In discrete form, the moments are computed over the sampled image as

$$A_{m,n} = \frac{m+1}{\pi} \sum_{j=0}^{N-1} \sum_{k=0}^{N-1} f(x_j, y_k)\, V_{m,n}^{*}(x_j, y_k)\, \Delta^2$$ (4)

where $\Delta = 2/D$ is the sampling interval for an image of width D pixels mapped onto the unit disk.
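As an illustration of Eqs. (2) and (4), the sketch below (an assumption-laden illustration, not the authors' code) computes a single moment A_{m,n} of a square image mapped onto the unit disk; the helper name orim_moment and the callback interface for the radial polynomial are hypothetical.

# Minimal sketch of Eq. (4): discrete ORIM of an N x N image on the unit disk.
# The radial polynomial R_{m,n} is passed in as a function `radial(m, n, rho)`.
import numpy as np

def orim_moment(img, m, n, radial):
    N = img.shape[0]
    delta = 2.0 / N                               # Delta = 2/D for a D-pixel-wide image
    coords = -1.0 + delta * (np.arange(N) + 0.5)  # pixel centres mapped to (-1, 1)
    x, y = np.meshgrid(coords, coords, indexing="xy")
    rho = np.hypot(x, y)
    theta = np.arctan2(y, x)
    inside = rho <= 1.0                           # keep only pixels on the unit disk
    basis = radial(m, n, rho) * np.exp(1j * n * theta)   # V_{m,n}(x, y), Eq. (2)
    return (m + 1) / np.pi * np.sum(img[inside] * np.conj(basis[inside])) * delta**2

# usage: A = orim_moment(image_array, m=4, n=2, radial=zernike_radial)
# (a zernike_radial sketch is given in Section 2.1 below)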
2.1 Zernike Moment
The proposed method has been applied to capture the pattern and edge characteristics of the image. Because the Zernike polynomials are orthogonal to each other, Zernike moments can characterize the properties of an image with no redundancy or overlap of information between the moments. Owing to these unique properties, these features have been widely used in many types of applications [19][25]. They have been used in content-based image retrieval, in edge detection, and as a feature set in pattern recognition.
Since the Zernike polynomials are orthogonal, Zernike moments can be extracted from a region of interest (ROI) irrespective of the shape of the target [14]. The formulation of the Zernike moments is very favourable, outperforming the alternatives in terms of noise resilience, information redundancy, and reconstruction capability.
The radial polynomial of the complex Zernike polynomial for a discrete image with pixel values f(x, y) is defined as

$$R^{Z}_{m,n}(\rho) = \sum_{s=0}^{(m-|n|)/2} \frac{(-1)^{s}\,(m-s)!}{s!\left(\frac{m+|n|}{2}-s\right)!\left(\frac{m-|n|}{2}-s\right)!}\; \rho^{\,m-2s}$$ (5)

where $|n| \le m$ and $m - |n|$ is even.
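A minimal sketch of Eq. (5), assuming |n| ≤ m and m − |n| even as stated above; the function name zernike_radial is illustrative and matches the callback used in the sketch after Eq. (4).

# Zernike radial polynomial R^Z_{m,n}(rho), Eq. (5).
import numpy as np
from math import factorial

def zernike_radial(m, n, rho):
    n_abs = abs(n)
    out = np.zeros_like(rho, dtype=float)
    for s in range((m - n_abs) // 2 + 1):
        coeff = ((-1) ** s * factorial(m - s) /
                 (factorial(s) *
                  factorial((m + n_abs) // 2 - s) *
                  factorial((m - n_abs) // 2 - s)))
        out += coeff * rho ** (m - 2 * s)
    return out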
2.3 Pseudo-Zernike Polynomial
The radial polynomial of the pseudo-Zernike polynomial is defined as

$$R^{P}_{m,n}(\rho) = \sum_{s=0}^{m-|n|} \frac{(-1)^{s}\,(2m+1-s)!}{s!\,(m+|n|+1-s)!\,(m-|n|-s)!}\; \rho^{\,m-s}$$ (6)

where $|n| \le m$.
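A corresponding sketch of Eq. (6); the name pseudo_zernike_radial is again illustrative.

# Pseudo-Zernike radial polynomial R^P_{m,n}(rho), Eq. (6), assuming |n| <= m.
import numpy as np
from math import factorial

def pseudo_zernike_radial(m, n, rho):
    n_abs = abs(n)
    out = np.zeros_like(rho, dtype=float)
    for s in range(m - n_abs + 1):
        coeff = ((-1) ** s * factorial(2 * m + 1 - s) /
                 (factorial(s) *
                  factorial(m + n_abs + 1 - s) *
                  factorial(m - n_abs - s)))
        out += coeff * rho ** (m - s)
    return out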
2.4 Orthogonal Fourier–Mellin Polynomial
The radial polynomial of the orthogonal Fourier–Mellin moments is defined as

$$R^{F}_{m}(\rho) = \sum_{s=0}^{m} \frac{(-1)^{s+m}\,(m+1+s)!}{s!\,(m-s)!\,(s+1)!}\; \rho^{s}$$ (7)

Note that the OFMM radial polynomials are independent of n, unlike those of the ZMs and PZMs.
For the computation of the moments, the image (or region of interest) is first mapped onto the unit disk using polar coordinates (ρ, θ), with the centre of the test image as the origin of the unit disk. A limited number of ORIMs at various orders (N = 2, 4, 8, 16, 32, 64, 128, 256) have been computed.
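A sketch of Eq. (7) follows; since the OFMM radial polynomial depends only on the order m, a small adapter (shown in the usage comment, and hypothetical like the other helper names) plugs it into the generic moment computation sketched after Eq. (4).

# Orthogonal Fourier-Mellin radial polynomial R^F_m(rho), Eq. (7).
import numpy as np
from math import factorial

def ofmm_radial(m, rho):
    out = np.zeros_like(rho, dtype=float)
    for s in range(m + 1):
        coeff = ((-1) ** (s + m) * factorial(m + 1 + s) /
                 (factorial(s) * factorial(m - s) * factorial(s + 1)))
        out += coeff * rho ** s
    return out

# usage with the generic moment of Eq. (4):
# A = orim_moment(image_array, m=8, n=3, radial=lambda m, n, rho: ofmm_radial(m, rho))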
3. Feature Selection using Particle Swarm Optimization (PSO)
Let L be a dataset of R images, each described by P features, so that L is an R × P array. The aim of feature (moment) selection is to obtain a subset of size p out of the total feature (moment) set, where p < P, that optimizes a criterion function A(·). The selected subset may contain features that are best globally as well as features that are best only locally within the subgroup. If F is the full feature set of the melanoma images, the goal is to find the feature subset S ⊆ F with |S| = p that maximizes the criterion function, i.e.

$$A(S) = \max_{Y \subseteq F,\; |Y| = p} A(Y)$$ (8)
Unfortunately, no single technique exists for computing a feature-set quality measure that works best for all melanoma images. Accuracy is the most widely used criterion function for the assessment of classification models; in this method, the classification accuracy obtained with a feature subset S is used as the fitness value. PSO was developed by Kennedy and Eberhart in 1995 [26]. Like other nature-inspired techniques, PSO uses a population of potential solutions to explore the search space: the swarm is assumed to "fly" through the search space in order to find a promising region of the solution space. Let particle i of the swarm be represented by the position vector xi = (xi1, xi2, ..., xid), and let the best particle of the swarm be denoted by the index g. The best previous position of particle i is recorded as pi = (pi1, pi2, ..., pid) [26], [27], and the velocity of particle i is Vi = (Vi1, Vi2, ..., Vid). Particles adjust their velocity and position during the search using two kinds of "best" values: the personal best (pbest), which is the position of the particle's highest fitness value [28], and the global best (gbest), which is the position of the best value obtained by any particle in the population. Particles update their positions and velocities according to the following equations:
$$v_j(i) = w\,v_j(i-1) + r_1 c_1\,\big(pbest_j - x_j(i-1)\big) + r_2 c_2\,\big(gbest - x_j(i-1)\big)$$ (9)

$$x_j(i) = x_j(i-1) + v_j(i)$$ (10)

Here $v_j(i)$ is the velocity of the jth particle in the ith iteration, $x_j(i)$ is the corresponding position, pbest and gbest are the corresponding personal (local) best and global best positions, respectively, w is the inertia weight, $c_1$ and $c_2$ are the acceleration parameters, and $r_1$ and $r_2$ are random values.

$$w = (w_{max} - 0.4)\,\frac{Maxitr - Itr}{Maxitr} + 0.4$$ (11)
The inertia weight decreases through the iterations, varying from 1.4 to 0.4 according to the above formula, and the PSO-based feature selection [30] follows the flow described above. For most of our tests, the same setting of the PSO algorithm parameters is used in order to enhance the robustness of the comparison with other techniques. For this specific problem, the parameters are set to D = 100, c1 = 2, c2 = 2, ωmax = 1.4, ωmin = 0.4, and Maxitr = 200. A minimal code sketch of this selection procedure is given below.
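The following sketch illustrates the PSO updates of Eqs. (9)-(11) for moment selection under simplifying assumptions: synthetic features stand in for the extracted OMs, a 0.5 threshold on each particle coordinate decides whether a moment is selected, the fitness is the 5-fold cross-validated accuracy of an RBF SVM, and the iteration count is reduced for brevity. It is an illustration, not the authors' implementation.

# PSO-based feature (moment) selection sketch using Eqs. (9)-(11).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 20))
y = rng.integers(0, 2, size=120)
X[y == 1, :5] += 1.0                      # only the first 5 features are informative

def fitness(mask):
    # cross-validated SVM accuracy of the selected features (0 if none selected)
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=5).mean()

n_particles, dim, max_itr, c1, c2, w_max = 20, X.shape[1], 30, 2.0, 2.0, 1.4
pos = rng.uniform(0, 1, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_fit = np.array([fitness(p > 0.5) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for itr in range(max_itr):
    w = (w_max - 0.4) * (max_itr - itr) / max_itr + 0.4                 # Eq. (11)
    r1, r2 = rng.uniform(size=(2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)   # Eq. (9)
    pos = np.clip(pos + vel, 0, 1)                                      # Eq. (10), kept in [0, 1]
    for i in range(n_particles):
        f = fitness(pos[i] > 0.5)
        if f > pbest_fit[i]:
            pbest_fit[i], pbest[i] = f, pos[i].copy()
    gbest = pbest[pbest_fit.argmax()].copy()

print("selected moment indices:", np.flatnonzero(gbest > 0.5))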
4. Support Vector Machine (SVM) Classifier
Generally, different techniques are available to train and verify the accuracy of a classification model. In this work, two strategies have been used. The first is to divide the data into two separate sets, referred to as the training and test sets; the classifier is then trained on the training set and validated on subsets drawn from the test set [29]. The second, more commonly applied strategy is n-fold cross-validation: the dataset of ZM, PZM, and OFMM magnitudes is divided into n equal subsets, and n rounds of training/testing are performed, where in every round n − 1 subsets are used for training and the remaining nth subset is used for testing; here 5-fold cross-validation has been used. The SVM determines the hyperplane that maximizes the margin between the hyperplane and the nearest data points of each class [28]. The decision function of the SVM classifier for a two-class problem can be formulated using a kernel function K(x, x_i), such as a linear or radial basis function (RBF) kernel, applied to a new test sample x and a training pattern x_i, as given below:

$$f(x) = \sum_{i} \alpha_i\, y_i\, K(x, x_i) + \alpha_0, \qquad \alpha_i \ge 0\ \forall i$$ (12)

where $y_i = \pm 1$ is the label of training sample $x_i$. The variables $\alpha_i \ge 0$ are optimized through the training process.
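The sketch below illustrates Eq. (12) with an RBF kernel on synthetic data and checks that the support-vector expansion reproduces scikit-learn's decision_function; it is illustrative only.

# Kernel decision function of Eq. (12): f(x) = sum_i alpha_i y_i K(x, x_i) + alpha_0.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 8))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)

gamma = 0.5
svm = SVC(kernel="rbf", gamma=gamma).fit(X, y)

X_new = rng.normal(size=(5, 8))
K = rbf_kernel(X_new, svm.support_vectors_, gamma=gamma)   # K(x, x_i)
f = K @ svm.dual_coef_.ravel() + svm.intercept_            # sum_i alpha_i y_i K + alpha_0

print(np.allclose(f, svm.decision_function(X_new)))        # True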
5. Results and Discussion
In the proposed work, a set of 570 melanoma and 250 non-melanoma images is taken from the ISIC ISBI-2016 challenge database [6]. All images are of the same size.
Figure 3: Examples of melanoma and non-melanoma images.
Table 1: Comparison of orthogonal moments for melanoma images. For each order of moment (N = 2, 4, 8, 16, 32, 64, 128, and 256) and each phase angle φ = 10°, 20°, 30°, and 40°, the table reports the normalized moment, the moment magnitude, and the elapsed computation time (in seconds) for the Zernike moments (ZM), pseudo-Zernike moments (PZM), and orthogonal Fourier–Mellin moments (OFMM).
Table 2: Classification results for melanoma images

Technique                 Accuracy    Specificity    Sensitivity
Sheha et al. [30]         89.30%      82.60%         84.00%
Silvio M. et al. [31]     81.90%      72.21%         84.76%
Proposed model            91.24%      88.76%         86.78%
In this paper, three orthogonal moments have been investigated, as shown in Table 1. The melanoma images are analysed by rotating the image phase for each order of moment. In Table 1, the moments of the melanoma images are investigated for different orders and phase angles: the order takes the values 2, 4, 8, 16, 32, 64, 128, and 256, and for each order the phase angle is varied from 10° to 40°. Table 1 confirms that, as the order of the moment increases from 2 to 256, the magnitude of the ZM and the normalized ZM increase up to N = 64 and then start to decrease. The magnitude of the PZM and the normalized PZM likewise increase up to N = 64, after which the magnitude starts to decrease. The magnitude of the OFMM increases gradually up to N = 8, after which it starts to decrease.
It is observed from Table 1 that increasing the order of the moments increases the magnitudes of the ZM and PZM, which means that more image information is captured and the melanoma images become easier to classify. At N = 64 the obtained moments are the most significant because the magnitudes of the ZM and PZM are maximal. For the classification, 100 melanoma and 100 non-melanoma images have been used. These images have been classified using the SVM, and the results have been compared with other existing techniques, as shown in Table 2. The performance of the proposed technique is found to be better than that of the two earlier techniques.
The first technique [30] was tested on 20 images, while the second technique [31] was tested on 171 images. Thus the results obtained with the proposed technique are better and useful for the diagnosis of melanoma images.
Classification performance is evaluated using specificity, sensitivity, and accuracy. If a skin image that has melanoma is classified as melanoma, the result is a true positive (TP). If a skin image that does not have melanoma is classified as non-melanoma, the result is a true negative (TN). When the test indicates the presence of melanoma in a skin image that actually has no such disease, the result is a false positive (FP). Conversely, if the test indicates that the disease is absent in a skin image that does have melanoma, the result is a false negative (FN). Then

Sensitivity = TP / (TP + FN)
Specificity = TN / (TN + FP)
Accuracy = (TN + TP) / (TN + TP + FN + FP)

Sensitivity is the proportion of true positives that are correctly identified by the classifier; it reveals how effectively the test detects the disease. Specificity is the proportion of true negatives that are correctly identified by the classifier [35]; it indicates how well the test identifies the normal (negative) condition. Accuracy is the proportion of correct outcomes, whether true positive or true negative, in the whole group. The sensitivity, specificity, and accuracy over the 570 melanoma images are 88.78%, 87.86%, and 91.24%, respectively, computed using the RBF SVM, which performs better than the linear SVM, as shown in Table 2.
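A minimal sketch of these three measures, computed from example predicted and true labels (1 = melanoma, 0 = non-melanoma); the numbers are illustrative, not the study's data.

# Sensitivity, specificity and accuracy from a confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"sensitivity={sensitivity:.2%}, specificity={specificity:.2%}, accuracy={accuracy:.2%}")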
6. Conclusion
In this paper, a set of orthogonal moments has been proposed for the classification of melanoma images. The moment invariants available in the literature and in past research have not made any significant contribution to melanoma image classification. The proposed method performs better because it is able to remove the symmetric redundancy of the majority of the moments. The extracted moments are selected using PSO, and the selected moments are then classified by SVM. The classification rate of the proposed method is 91.24%. It is clear from the obtained simulation results that the magnitudes of the ZM and PZM of the melanoma images at N = 64 are better than those at lower and higher orders. The proposed technique is superior to other existing techniques. OMs are robust to noise and non-redundant; hence, very few moments are required for better classification compared with colour features.
It is significant to assess the disease in a better way by analysing the image. The computer-based analyses available so far still need a rather specialized environment and tools, especially for less skilled skin experts. The proposed melanoma classification algorithm may be utilized to improve the diagnostic accuracy of melanoma and hence contribute to societal welfare.
References
[1] M. Rastgoo, R. Garcia, O. Morel, and F. Marzani, “Automatic differentiation of melanoma from dysplastic nevi,” Comput. Med. Imaging Graph., vol. 43, pp. 44–52, 2015.
[2] I. Maglogiannis and C. N. Doukas, “Overview of advanced computer vision systems for skin lesions characterization,” IEEE Transactions on Information Technology in Biomedicine, vol. 13, no. 5, pp. 721–733, 2009.
[3] O. Abuzaghleh, B. D. Barkana, and M. Faezipour, “Noninvasive real-time automated skin lesion analysis system for melanoma early detection and prevention,” IEEE J. Transl. Eng. Heal. Med., vol. 3, 2015.
[4] A. G. Manousaki et al., “A simple digital image processing system to aid in melanoma diagnosis in an everyday melanocytic skin lesion unit. A preliminary report,” Int. J. Dermatol., vol. 45, no. 4, pp. 402–410, 2006.
[5] S. Bakheet, “An SVM Framework for Malignant Melanoma Detection Based on Optimized HOG Features,” Computation, vol. 5, no. 1, p. 4, 2017.
[6] M. A. Marchetti et al., “Results of the 2016 International Skin Imaging Collaboration International Symposium on Biomedical Imaging challenge: Comparison of the accuracy of computer algorithms to dermatologists for the diagnosis of melanoma from dermoscopic images,” Journal of the American Academy of Dermatology, 2017.
[7] S. Jain, V. Jagtap, and N. Pise, “Computer aided melanoma skin cancer detection using image processing,” in Procedia Computer Science, 2015, vol. 48, no. C, pp. 736–741.
[8] R. B. Oliveira, J. P. Papa, A. S. Pereira, and J. M. R. S. Tavares, “Computational methods for pigmented skin lesion classification in images: review and future trends,” Neural Comput. Appl., pp. 1–24, 2016.
[9] Z. Ping, H. Ren, J. Zou, Y. Sheng, and W. Bo, “Generic orthogonal moments: Jacobi-Fourier moments for invariant image description,” Pattern Recognit., vol. 40, no. 4, pp. 1245–1254, 2007.
[10] R. Dhir, “Moment based invariant feature extraction techniques for Bilingual Character
Recognition,” ICETC 2010 - 2010 2nd Int. Conf. Educ. Technol. Comput., vol. 4, 2010.
[11] S. Urooj, S. P. Singh, and A. Q. Ansari, Computer Aided Detection of Breast Cancer using
Pseudo Zernike Moment as a Texture Descriptors, vol. 651, no. March. 2013.
[12] S. P. Singh and S. Urooj, “Accurate and Fast Computation of Exponent Fourier Moment,”
Arab. J. Sci. Eng., vol. 42, no. 8, pp. 3299–3306, 2017.
[13] S. Dominguez, “Simultaneous recognition and relative pose estimation of 3D objects using
4D orthonormal moments,” Sensors (Switzerland), vol. 17, no. 9, 2017.
[14] C. Singh and R. Upneja, “Error analysis in the computation of orthogonal rotation invariant
moments,” J. Math. Imaging Vis., vol. 49, no. 1, pp. 251–271, 2014.
[15] C. Kan and M. D. Srinath, “Invariant character recognition with Zernike and orthogonal
Fourier-Mellin moments,” Pattern Recognit., vol. 35, no. 1, pp. 143–154, 2002.
[16] H. Zhang, Z. Li, and Y. Liu, “Fractional orthogonal fourier-mellin moments for pattern
recognition,” in Communications in Computer and Information Science, 2016, vol. 662, pp.
766–778.
[17] H. Zhang, H. Z. Shu, P. Haigron, B. S. Li, and L. M. Luo, “Construction of a complete set
of orthogonal Fourier-Mellin moment invariants for pattern recognition applications,”
Image Vis. Comput., vol. 28, no. 1, pp. 38–44, 2010.
[18] Y. Sheng and H. H. Arsenault, “Experiments on pattern recognition using invariant Fourier–
Mellin descriptors,” J. Opt. Soc. Am. A, vol. 3, no. 6, p. 771, 1986.
[19] A. Khotanzad and Y. H. Hong, “Invariant Image Recognition by Zernike Moments,” IEEE
Trans. Pattern Anal. Mach. Intell., vol. 12, no. 5, pp. 489–497, 1990.
[20] T. V. Hoang and S. Tabbone, “Erratum: Generic orthogonal moments: Jacobi-Fourier
moments for invariant image description (Pattern Recognition),” Pattern Recognition, vol.
46, no. 11. pp. 3148–3155, 2013.
[21] X. Li and A. Song, “A new edge detection method using Gaussian-Zernike moment
operator,” CAR 2010 - 2010 2nd International Asia Conference on Informatics in Control,
Automation and Robotics, vol. 1. pp. 276–279, 2010.
[22] Z. Shao, H. Shu, J. Wu, B. Chen, and J. L. Coatrieux, “Quaternion Bessel-Fourier moments
and their invariant descriptors for object reconstruction and recognition,” Pattern Recognit.,
vol. 47, no. 2, pp. 603–611, 2014.
[23] J. Mennesson, C. Saint-Jean, and L. Mascarilla, “Color Fourier-Mellin descriptors for image
recognition,” Pattern Recognit. Lett., vol. 40, no. 1, pp. 27–35, 2014.
[24] Y. Sheng and L. Shen, “Orthogonal Fourier–Mellin moments for invariant pattern
recognition,” J. Opt. Soc. Am. A, vol. 11, no. 6, p. 1748, 1994.
[25] B. H. Shekar and D. S. Rajesh, “Affine Normalized Krawtchouk Moments Based Face
Recognition,” in Procedia Computer Science, 2015, vol. 58, pp. 66–75.
[26] F. Marini and B. Walczak, “Particle swarm optimization (PSO). A tutorial,” Chemom. Intell.
Lab. Syst., vol. 149, pp. 153–165, 2015.
[27] C. Tu, L. Chuang, J. Chang, and C. Yang, “Feature selection using PSO-SVM,” IAENG Int.
J. Comput. Sci., vol. 33, no. 1, pp. 1–6, 2007.
[28] H. Zeng and H. J. Trussell, “Feature selection using,” IAENG Int. J. Comput. Sci., vol. 33,
no. February, pp. 997–1000, 2006.
[29] J. Von Hagen, “Money growth targeting by the bundesbank*,” J. Monet. Econ., vol. 43, no.
3, pp. 681–701, 1999.
[30] M. Sheha, M. Mabrouk, and A. Sharawy, “Automatic detection of melanoma skin cancer
using texture analysis,” Int. J. Comput. …, vol. 42, no. 20, pp. 22–26, 2012.
[31] S. M. Pereira, M. A. C. Frade, R. M. Rangayyan, and P. M. Azevedo-Marques,
“Classification of color images of dermatological ulcers,” IEEE J. Biomed. Heal.
Informatics, vol. 17, no. 1, pp. 136–142, 2013.