D. ROC Analysis
As mentioned before, a simple classification algorithm for biometric verification relies on a binary decision based on the normalization metric. The higher the metric, the stronger the evidence that the biometric of interest is present in the query image. By comparing the metric against a suitable threshold, it is possible to decide whether there is a match somewhere on the image plane. The performance of classifiers of this type can be assessed in ROC space, where the rate of accurately classified images over a true-class dataset is compared against the rate of falsely identified matches over a false-class dataset. From the so-called confusion matrix and the behavior of the classifier in ROC space it is possible to obtain not only the ideal threshold value, but also the expected error in the performance of the filter.
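A minimal sketch of this decision rule is shown below. The function name and the use of the correlation-plane maximum as the score are illustrative assumptions, standing in for the project's actual normalization metric:

```python
import numpy as np

def is_match(correlation_plane: np.ndarray, threshold: float) -> bool:
    """Declare a match if the decision metric (here, the peak of the
    correlation plane) exceeds the threshold anywhere on the image plane."""
    return float(correlation_plane.max()) >= threshold
```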
When developing a classifier algorithm, it is clear that some images will be falsely identified as matches, whereas others may be wrongly classified as non-matches. Hence, four quantities should be computed to measure the performance of the algorithm and characterize the tradeoff between accuracy and simplicity of the biometric recognition application (a computational sketch follows the list):
- True Positive Rate (TPR): over a true-class dataset, the ratio of accurately identified matches to the total number of true matches in the set. Ideally, the classifier should give TPR = 1.
- False Negative Rate (FNR): over a true-class dataset, the ratio of wrongly rejected matches to the total number of true matches in the set. Ideally, the classifier should give FNR = 0.
- False Positive Rate (FPR): over a false-class dataset, the ratio of wrongly identified matches to the total number of impostor images in the set. Ideally, the classifier should give FPR = 0.
- True Negative Rate (TNR): over a false-class dataset, the ratio of accurately rejected matches to the total number of impostor images in the set. Ideally, the classifier should give TNR = 1.
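As referenced above, here is a sketch of how the four rates can be computed for a given threshold, assuming `true_scores` holds the metric values over the true-class dataset and `false_scores` over the false-class (impostor) dataset; these names are illustrative, not taken from the project code:

```python
import numpy as np

def confusion_rates(true_scores, false_scores, threshold):
    """Return (TPR, FNR, FPR, TNR) for a given decision threshold."""
    true_scores = np.asarray(true_scores)
    false_scores = np.asarray(false_scores)
    tpr = np.mean(true_scores >= threshold)   # accepted true matches
    fnr = 1.0 - tpr                           # rejected true matches
    fpr = np.mean(false_scores >= threshold)  # accepted impostors
    tnr = 1.0 - fpr                           # rejected impostors
    return tpr, fnr, fpr, tnr
```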
The four indicators are arranged in the so-called confusion matrix:

|                 | Classified as match | Classified as non-match |
|-----------------|---------------------|-------------------------|
| True-class set  | TPR                 | FNR                     |
| False-class set | FPR                 | TNR                     |

The better the performance, the closer this matrix is to the identity matrix. Each value of the threshold yields a corresponding confusion matrix. If the threshold is too low, virtually all images are classified as matches, so TPR = 1 and FPR = 1. On the other hand, if it is too high, TPR = 0 and FPR = 0. In both extremes the classifier is said to operate like a random classifier. For certain threshold values the classifier might perform perfectly, with TPR = 1 and FPR = 0; whether this is attainable is constrained by the actual metric computation and its ability to discriminate properly between the true-class and false-class sets. Once the ideal threshold value is determined, the expected Equal Error Rate (EER), the operating condition at which FPR = FNR, can be read from the corresponding point in ROC space (see the figure below).
The Receiver Operating Characteristic (ROC) is a common performance representation for statistical classifiers. The curve is obtained by plotting TPR vs. FPR. The more accurate the classifier, the nearer its ROC point lies to the ideal (0,1), the upper left corner of the space. Random classifiers tend to lie near the principal diagonal of the plane (y = x). A typical ROC curve is shown in the figure below (taken from Alfalou et al.).
Usually, the curve is generated by varying the threshold value of the classifier. A good classifier must have a ROC curve above the principal diagonal of the space, which means that it performs better than random. When the curve lies below this diagonal, the classifier is said to be negated, for it consistently rejects true matches and accepts impostors. A common measure of accuracy is the Area Under the Curve (AUC) of the ROC. Statistically, it measures the probability that the classifier assigns a higher score to a randomly chosen true match than to a randomly chosen impostor. AUC = 1 means that the classifier is completely accurate, AUC = 0 means that it is negated, and AUC = 0.5 means that it is likely random. Hence, the ROC curve and the AUC can be used to assess the accuracy of the whole classifier algorithm.
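A sketch of this construction, under the same assumed `true_scores`/`false_scores` arrays as before: the threshold is swept over the observed score range, and the AUC is obtained by trapezoidal integration of the resulting curve.

```python
import numpy as np

def roc_curve(true_scores, false_scores, num_thresholds=200):
    """Sweep the threshold and collect the (FPR, TPR) points of the ROC curve."""
    scores = np.concatenate([true_scores, false_scores])
    thresholds = np.linspace(scores.min(), scores.max(), num_thresholds)
    tpr = np.array([np.mean(true_scores >= t) for t in thresholds])
    fpr = np.array([np.mean(false_scores >= t) for t in thresholds])
    return fpr, tpr, thresholds

def auc(fpr, tpr):
    """Area under the ROC curve by trapezoidal integration
    (points sorted by FPR so the integral runs left to right)."""
    order = np.argsort(fpr)
    return float(np.trapz(tpr[order], fpr[order]))
```

The probabilistic interpretation of the AUC can also be estimated directly as `np.mean(true_scores[:, None] > false_scores[None, :])`, i.e. the fraction of (true, impostor) score pairs ranked correctly.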
To determine the most suitable threshold value, it is necessary to locate the ROC point nearest to the ideal (0,1), the upper left corner of the space. Once this point is found, the expected error of the classifier can be estimated from its coordinates: the abscissa gives the FPR, and one minus the ordinate gives the FNR.
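A sketch of both estimates, reusing the outputs of the `roc_curve` function assumed above:

```python
import numpy as np

def optimal_threshold(fpr, tpr, thresholds):
    """Pick the threshold whose ROC point lies closest to the ideal corner (0, 1)."""
    distances = np.hypot(fpr, tpr - 1.0)
    return thresholds[np.argmin(distances)]

def equal_error_rate(fpr, tpr):
    """EER is the error rate where FPR = FNR (= 1 - TPR); take the sweep
    point where the two error rates are closest."""
    fnr = 1.0 - tpr
    i = np.argmin(np.abs(fpr - fnr))
    return float((fpr[i] + fnr[i]) / 2.0)
```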
The fundamental references of the project are:
- [2]. Composite versus multichannel binary phase-only filtering. de la Tocnaye, Quemener & Petillot.
- [3]. A Technique for Optically Convolving Two Functions. C. S. Weaver and J. W. Goodman.
- [4]. Signal detection by complex spatial filtering. A. Vander Lugt.
- [5]. Face Verification using Correlation Filters. Kumar et al.
- [6]. MACE Correlation Filter Algorithm for Face Verification in Surveillance. Omidiora et al.
- [8]. New Perspectives in Face Correlation Research: A Tutorial. Wang et al.