Although not popular in the feature selection literature, perhaps the simplest way to understand the discriminatory nature of a feature in a two-class training set is to evaluate it with a naive Bayes classifier. A low probability of error for an individual feature, as obtained using the Bayesian classifier, indicates good discriminatory ability and asserts the usefulness of the feature.

A standard approach in the feature selection literature is to apply training and selection criteria directly on the feature values. However, when the natural variability in the data is high and the number of training samples is small, even minor changes in feature values introduce errors into the Bayes probability calculations. Classification methods such as SVM, on the other hand, try to get around this problem by normalising the feature values and by parametric training of the classifier against several possible changes in feature values. In classifier studies, this essentially shifts the focus from feature values to distance values: instead of directly optimising the classifier parameters based on feature values, the distance function itself is trained and optimised.

### Proposed method

In this work, we develop a feature selection technique based on the new concept of distance probability distributions. This differs from filter methods, which apply criteria such as inter-feature distance, Bayes error, or correlation measures to determine a set of features with low redundancy. Instead of complicating the feature selection process with search and filter schemes that remove redundant features while retaining relevant ones, we focus on using all features that are most discriminative and useful to a classifier. Further, rather than treating feature selection as a problem of finding inter-feature dependencies to reduce the number of features, we treat each feature individually and arrive at the features most able to contribute to an improvement in classifier performance.

Suppose there are *M* classes in a training set whose patterns each have *J* features, with {\omega}_{\mathrm{ij}} as the class label for feature *j*, where *i*∈{1,…,*M*} and *j*∈{1,…,*J*}. Let {x}_{\mathrm{jk}} be the value of feature *j* in the *k*th training pattern; these values are used to calculate the inter-class and intra-class distance probability distributions. The intra-class distances {y}_{j}^{a} of the *j*th feature are given by 1-{e}^{-|{x}_{\mathrm{jk}}-{x}_{j\stackrel{\u0304}{k}}|}, where *k*∈{1,…,*K*}, \stackrel{\u0304}{k}\in \{1,\dots ,K\} with k\ne \stackrel{\u0304}{k}, taken within a class having *K* samples. The inter-class distances {y}_{j}^{e} of a feature {x}_{\mathrm{jk}} belonging to a class {\omega}_{\mathrm{ij}} are given by 1-{e}^{-|{x}_{\mathrm{jk}}-{\stackrel{\u0304}{x}}_{j}|}, where {\stackrel{\u0304}{x}}_{j} is the value of feature *j* in a sample from a class other than that of {x}_{\mathrm{jk}}. We denote the set of classes that do not belong to the class {\omega}_{\mathrm{ij}} by {\stackrel{\u0304}{\omega}}_{\mathrm{ij}}. Then the intra-class distance probability distribution of feature *j* in class {\omega}_{\mathrm{ij}} is p({y}_{j}^{a}|{\omega}_{\mathrm{ij}}) and the corresponding inter-class distance probability distribution is p({y}_{j}^{e}|{\stackrel{\u0304}{\omega}}_{\mathrm{ij}}). The area of overlap of these distributions can be seen as the probability of error of feature *j* for class label *i*, and represents the discriminatory ability of the feature. Since in practice we are dealing with samples in discrete form, the probability density can be represented discretely with *m* bins, and the area of overlap {P}_{(j|i)} can be represented as:

{P}_{(j|i)}=\frac{1}{2}\sum _{m=-\infty}^{{y}_{0}}{p}_{m}({y}_{j}^{a}|{\omega}_{\mathrm{ij}})+\frac{1}{2}\sum _{m={y}_{0}}^{\infty}{p}_{m}({y}_{j}^{e}|{\stackrel{\u0304}{\omega}}_{\mathrm{ij}})

(1)
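As an illustrative sketch (all function and variable names here are our own), Eq. (1) can be estimated from raw data by histogramming the two distance distributions over a common set of *m* bins and taking the overlapping mass as the bin-wise minimum of the two discrete densities, which coincides with the two one-sided sums when {y}_{0} is the crossing point of the distributions:

```python
import numpy as np

def distance_overlap(values, labels, target_class, bins=20):
    """Overlap between the intra-class and inter-class distance
    distributions of a single feature, as in Eq. (1)."""
    x = np.asarray(values, dtype=float)
    same = x[labels == target_class]
    other = x[labels != target_class]
    # Intra-class distances y^a = 1 - exp(-|x_jk - x_jk'|), k != k'
    ya = np.array([1 - np.exp(-abs(a - b))
                   for i, a in enumerate(same)
                   for k, b in enumerate(same) if i != k])
    # Inter-class distances y^e against samples of the other classes
    ye = np.array([1 - np.exp(-abs(a - b)) for a in same for b in other])
    # Discretise both distance distributions on m common bins over [0, 1)
    edges = np.linspace(0.0, 1.0, bins + 1)
    pa = np.histogram(ya, bins=edges)[0] / len(ya)
    pe = np.histogram(ye, bins=edges)[0] / len(ye)
    # Area of overlap: probability mass shared by the two distributions
    return np.minimum(pa, pe).sum()
```

A small overlap indicates that the intra-class and inter-class distances of the feature are well separated, i.e. a low probability of error.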

The relative area of overlap of a feature among all the classes can then be found as:

{\widehat{P}}_{(j|i)}={P}_{(j|i)}-\underset{i}{\min}\phantom{\rule{1em}{0ex}}{P}_{(j|i)}

(2)

The minimum area of overlap for a feature across the different classes can then be calculated as a measure of the discriminatory ability of the feature:

{\widehat{P}}_{j}=1-\underset{i}{\min}\phantom{\rule{1em}{0ex}}{\widehat{P}}_{(j|i)}

(3)
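As a minimal sketch, assuming the overlap areas {P}_{(j|i)} are collected in an *M*×*J* array, Eqs. (2) and (3) can be computed as below; here the minimum in Eq. (3) is read as acting on the overlap values across classes, so that a feature scores highly when it separates at least one class well (the array layout and this reading are our own):

```python
import numpy as np

def discriminatory_scores(P):
    """P holds the overlap areas P(j|i): one row per class i, one
    column per feature j. Returns Eq. (2)'s relative overlaps and
    Eq. (3)'s per-feature discriminatory scores."""
    P = np.asarray(P, dtype=float)
    # Eq. (2): overlap relative to the best-separated class of each feature
    P_rel = P - P.min(axis=0, keepdims=True)
    # Eq. (3): high score when the smallest overlap across classes is small
    P_hat = 1.0 - P.min(axis=0)
    return P_rel, P_hat
```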

Taking the minimum value of {\widehat{P}}_{(j|i)} across the different classes ensures that features that discriminate well for any one class among many are retained, and such features can be considered useful for classification. The features are ranked in descending order of {\widehat{P}}_{j}: a value of 0 forces the feature to take a low rank, while a value of 1 forces the feature to take a top rank. Let *R* represent the set of {\widehat{P}}_{j} values arranged in the order of their ranks, each rank representing a feature or group of features. The set *R* can be used to form a rank-based probability distribution by normalising the {\widehat{P}}_{j}.

It is well known that almost every ranked distribution of an empirical nature, originating from realistic back-end data, follows a power-law distribution. The top ranked items in such a distribution often retain most of the information. This effect is observed across different problems and applications, and forms the basis of the winner-take-all and Pareto principles.

The ranked distribution is formed with {\stackrel{\u0304}{P}}_{r}=\frac{{\widehat{P}}_{j}}{\sum _{j=1}^{J}{\widehat{P}}_{j}} representing the normalised value of {\widehat{P}}_{j} for the feature at *j* having rank *r*. The cumulative ranked distribution {c}_{r} is obtained as:

{c}_{r}={\stackrel{\u0304}{P}}_{r}+{c}_{r-1},\phantom{\rule{1em}{0ex}}\text{where}\phantom{\rule{1em}{0ex}}{c}_{0}=0

(4)

The top ranked values of {c}_{r} can be used to select the most discriminative set of features. Applying the winner-take-all principle, and along the lines of the 20–80 concept of rank-size distributions, it is logical to assume that the top ranked features carry the maximum amount of discriminative information. The subset of features *X* of size *L*∈[1,*J*] is selected from the ranked features based on a selection threshold *θ*.

{x}_{j}\in X\phantom{\rule{2.77626pt}{0ex}}\iff \phantom{\rule{2.77626pt}{0ex}}{c}_{r}\le \theta

(5)

In other words, the features {x}_{j} corresponding to the ranks that fall below the cumulative area threshold *θ* are selected to form *X* of size *L*. The selection threshold *θ* for selecting the top ranked features is chosen using the proposed Definition 1.

#### Definition 1

The selection threshold *θ* is equal to the standard deviation *σ* of the distribution of {c}_{r}, where \sigma =\sqrt{\frac{1}{J}\sum _{r=1}^{J}{\left({c}_{r}-\frac{1}{J}\sum _{r=1}^{J}{c}_{r}\right)}^{2}}.
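Eqs. (4)–(5) and Definition 1 can be sketched together as follows: rank the scores, normalise them into {\stackrel{\u0304}{P}}_{r}, accumulate {c}_{r}, and keep the ranks whose cumulative value stays below *θ* (all names are illustrative):

```python
import numpy as np

def select_top_ranked(P_hat):
    """Select features whose cumulative ranked distribution c_r stays
    below the threshold theta of Definition 1 (the std. dev. of c_r)."""
    P_hat = np.asarray(P_hat, dtype=float)
    order = np.argsort(-P_hat)            # ranks, best feature first
    P_bar = P_hat[order] / P_hat.sum()    # normalised ranked values
    c = np.cumsum(P_bar)                  # Eq. (4)
    theta = c.std()                       # Definition 1
    selected = order[c <= theta]          # Eq. (5), as original feature indices
    return selected, c, theta
```

With a power-law-like score profile, only the few top ranked features accumulate mass below *θ*, which is the intended winner-take-all behaviour.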

If each feature in *X* is uncorrelated and independent, there will be few or no redundant features within *X*, and selecting *X* based on discriminatory ability alone is sufficient to ensure good classification performance. However, in a feature selection problem there is a chance that the subset of discriminant features contains very similar features, and such features are redundant for improving classification performance. Identifying the independence of the discriminant features ensures the detection of the least redundant features. For two features {x}_{r} and {x}_{r+1}, ranked in order of their {\stackrel{\u0304}{P}}_{r} and {\stackrel{\u0304}{P}}_{r+1} values, let p({x}_{r}) and p({x}_{r+1}) be the probability density functions, and p({x}_{r},{x}_{r+1}) be the joint probability density function, where *r*∈[1,*L*] is the rank of a feature in *X* corresponding to an index *j* in the original feature space. The features are independent if it can be established that p({x}_{r},{x}_{r+1})=p({x}_{r})p({x}_{r+1}). This independence test is utilised in finding an independence score for each feature. The area score between the probability densities p({x}_{r},{x}_{r+1}) and p({x}_{r})p({x}_{r+1}) in the discrete domain is calculated as:

{A}_{r,r+1}=\frac{1}{2}\sum _{m=-\infty}^{{x}_{0}}{p}_{m}({x}_{r}){p}_{m}({x}_{r+1})+\frac{1}{2}\sum _{m={x}_{0}}^{\infty}{p}_{m}({x}_{r},{x}_{r+1})

(6)

The independence score {I}_{r} of feature {x}_{r} with respect to the remaining *L*−1 features in *X* is determined as:

{I}_{r}=\frac{1}{L-1}\sum _{\stackrel{\u0304}{r}=1,\phantom{\rule{0.3em}{0ex}}\stackrel{\u0304}{r}\ne r}^{L}{A}_{r,\stackrel{\u0304}{r}}

(7)
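A sketch of Eqs. (6)–(7), estimating the joint density and the product of marginals with histograms over common bins and, as before, measuring their overlapping area via the bin-wise minimum (the bin count and all names are our choices):

```python
import numpy as np

def independence_score(X, r, bins=10):
    """Independence score I_r of the r-th ranked feature against the
    other L-1 selected features. X is (n samples x L features), with
    columns in rank order."""
    n, L = X.shape
    edges = [np.linspace(X[:, j].min(), X[:, j].max(), bins + 1)
             for j in range(L)]
    total = 0.0
    for s in range(L):
        if s == r:
            continue
        # Discrete joint density p_m(x_r, x_s)
        joint = np.histogram2d(X[:, r], X[:, s],
                               bins=[edges[r], edges[s]])[0]
        joint /= joint.sum()
        # Product of the marginal densities p_m(x_r) p_m(x_s)
        prod = np.outer(joint.sum(axis=1), joint.sum(axis=0))
        # Eq. (6): area shared by the joint and the product densities
        total += np.minimum(joint, prod).sum()
    return total / (L - 1)               # Eq. (7)
```

An independent feature yields a joint density close to the product of its marginals, so its score approaches 1; a duplicated feature concentrates the joint mass on the diagonal and scores much lower.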

A value of {I}_{r}=1 indicates that {x}_{r} is an independent feature in *X* (or {x}_{j} in the original feature space, with the *j*th feature corresponding to the *r*th ranked feature in *X*), while a low value of {I}_{r} indicates that {x}_{r} is redundant and should be removed. The independence score {I}_{r} corresponding to the feature at *j*, together with the discriminatory score {\widehat{P}}_{j}, can be used to select the most independent set of discriminant features.

{z}_{s}={x}_{j}\phantom{\rule{2.77626pt}{0ex}}\iff \phantom{\rule{2.77626pt}{0ex}}{I}_{r}{\widehat{P}}_{j}\le \epsilon

(8)

where *ε*=0.01 is a small number, and {z}_{s} is the set of the most relevant, discriminative, and independent features {x}_{j}, with *s*≤*J*.

This subset of top ranked features is considered useful for classification. However, the parameters and the nature of the decision boundary imposed by a specific classifier need to be considered before these features can be used for classification. Consider a nearest neighbour classifier: the relative importance of a feature {z}_{s}∈*X* can be rated based on the recognition performance obtained when using the individual feature {z}_{s} alone for classification. Assuming the independence of features, and using leave-one-out cross-validation, the classification accuracy of the *s*th feature for the *j*th sample in a training set of size *J*, with *l*∈*J*, is found by identifying the class as:

{w}^{\ast}=\text{arg}\phantom{\rule{0.3em}{0ex}}\underset{l,l\ne j}{min}\phantom{\rule{1em}{0ex}}d({z}_{\mathrm{sj}},{z}_{\mathrm{sl}})

(9)
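The leave-one-out nearest-neighbour evaluation of Eq. (9) for a single selected feature can be sketched as follows, taking the distance *d* to be the absolute difference (the function name is ours):

```python
import numpy as np

def loo_accuracy(z, labels):
    """Leave-one-out 1-NN accuracy of one feature z_s: each sample j is
    assigned the class w* of its nearest neighbour l != j (Eq. 9)."""
    z = np.asarray(z, dtype=float)
    correct = 0
    for j in range(len(z)):
        d = np.abs(z - z[j])           # d(z_sj, z_sl) for every l
        d[j] = np.inf                  # enforce l != j
        w_star = labels[np.argmin(d)]  # Eq. (9): class of nearest neighbour
        correct += int(w_star == labels[j])
    return correct / len(z)
```

Ranking the selected features by this per-feature accuracy gives the classifier-specific ordering.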

The selected features {z}_{s} are ranked in descending order based on the total number of correct class identifications {w}^{\ast}. The top ranked features represent the most discriminant features, while the lower ranked ones have relatively lower class discriminatory ability when using a nearest neighbour classifier. Such a ranking of the features for a given classifier identifies the features that respond best to that classifier.