
**Examensarbete** (Master's thesis)

LITH-ITN-MT-EX--04/002--SE

**Scale-Space Methods as a Means of Fingerprint Image Enhancement**

# Karl Larsson

2004-01-15

**Department of Science and Technology**

**Linköpings Universitet**

**SE-601 74 Norrköping, Sweden**


Thesis work carried out in Media Technology at Linköpings Tekniska Högskola, Campus Norrköping


Handledare: Kenneth Jonsson (Fingerprint Cards)

Examinator: Björn Kruse (ITN)

Norrköping, 2004-01-15


Electronic version: http://www.ep.liu.se/exjobb/itn/2004/mt/002/

Swedish title: *Skalrymdsmetoder som förbättring av fingeravtrycksbilder*


**Keywords**

fingerprint, template ageing, correlation, image processing, image analysis, image enhancement, scale-space, Gaussian scale-space, linear scale-space, nonlinear isotropic scale-space, nonlinear anisotropic scale-space, diffusion, scalar driven diffusion, scalar dependent diffusion, tensor driven diffusion, tensor dependent diffusion

**Abstract **

The usage of automatic fingerprint identification systems as a means of identification and/or verification has increased substantially during the last couple of years. It is well known that small deviations may occur within a fingerprint over time, a problem referred to as template ageing. This problem, and other causes of deviations between two images of the same fingerprint, complicates the identification/verification process, since distinct features may appear somewhat different in the two images that are matched. Different kinds of fingerprint image enhancement algorithms are commonly used to try to minimise this type of problem. This thesis tests different methods within the scale-space framework and evaluates their performance as fingerprint image enhancement methods.

The methods tested within this thesis range from linear scale-space filtering, where no prior information about the images is known, to scalar and tensor driven diffusion, where analysis of the images precedes and controls the diffusion process.

The linear scale-space approach is shown to improve correlation values, which was anticipated since the image structure is flattened at coarser scales. There is, however, no increase in the number of accurate matches, since inaccurate features also tend to get higher correlation values at large scales.

The nonlinear isotropic scale-space (scalar dependent diffusion), or edge-preserving, approach proves ill-suited for fingerprint image enhancement. This is because the analysis of edges may be unreliable, since edge structure is often distorted in fingerprints affected by the template ageing problem.

The nonlinear anisotropic scale-space (tensor dependent diffusion), or coherence-enhancing, method does not give any overall improvement in the number of accurate matches. It is, however, shown that for a certain type of template ageing problem, where the deviating structure does not significantly affect the ridge orientation, the nonlinear anisotropic diffusion is able to accurately match correlation pairs that resulted in a false match before they were enhanced.



**1 Introduction **

The usage of automatic fingerprint identification systems as a means of identification and/or verification has increased substantially over the last couple of years, and is likely to continue to grow in the near future. Systems of this type generally compare detailed features of two fingerprint images to assess whether an attempted verification corresponds to the registered fingerprint. However, it is well known that small deviations may occur within a fingerprint over time, a problem referred to as template ageing. These differences complicate the verification process since distinct features may appear somewhat different in the two fingerprint images that are matched. Different kinds of fingerprint image enhancement algorithms are commonly used to try to minimise this problem. This thesis tests different methods within the scale-space framework, and their performance as fingerprint image enhancement methods is analysed in detail.

*1.1 Aim*

The aim of this thesis is to find a scale-space method that enhances the correlation between two fingerprint images. The thesis especially focuses on fingerprints that vary considerably over time, a problem commonly referred to as template ageing. The expectation is to find a method that results in a more time-robust matching algorithm for localisation of distinct fingerprint features.

The methods tested within this thesis range from linear scale-space filtering, where no prior information about the images is known, to scalar and tensor driven diffusion, where analysis of the images precedes and controls the diffusion process. The scalar dependent diffusion is implemented as an edge-preserving method, adapted from Perona & Malik [24], where an edge detector ultimately determines the diffusion process. The tensor dependent diffusion used is a coherence-enhancing process as proposed by Weickert in [28].

The different scale-space methods are motivated through different approaches. However, the general idea is that scale-space filtering will suppress small-scale information, which is more likely to be unstable over time. The anticipation is that the suppression of these types of features will make two images of the same fingerprint more similar, and hence enhance the correlation value between them. It is, however, equally important that the image enhancement process does not suppress distinct features, since this would affect an identification or verification process negatively, leading to false mismatches or, in the worst case, to false matches.

The linear scale-space follows the approach that detailed information smaller than a certain scale, at any position in the image, is more likely to be unstable than larger features. If this assumption is accurate, scale-space smoothing of two images of the same fingerprint at a defined scale is likely to increase accurate matches.
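Linear scale-space filtering of this kind amounts to Gaussian smoothing, where the scale parameter t corresponds to a Gaussian of standard deviation σ = √(2t). The following is a minimal NumPy sketch of the idea; the separable convolution, edge padding and 3σ truncation radius are implementation choices of this illustration, not details taken from the thesis.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """1-D Gaussian kernel, normalised to sum to 1."""
    if radius is None:
        radius = int(3 * sigma + 0.5)  # truncate at ~3 sigma
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def scale_space_smooth(image, t):
    """Linear scale-space: convolve with a Gaussian of variance 2t."""
    sigma = np.sqrt(2 * t)
    k = gaussian_kernel(sigma)
    pad = len(k) // 2
    # Separable convolution: rows first, then columns, with edge padding
    # so the output has the same size as the input.
    padded = np.pad(image.astype(float), pad, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)
```

Smoothing both images of a fingerprint pair at the same scale t before correlating them is the operation whose effect the thesis evaluates.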

The approach of the nonlinear isotropic scale-space is that distinct information in a fingerprint is closely connected to, and/or bound by, edges between ridges and furrows. Hence, if the diffusion process is halted for structure that is considered an edge, it will preserve these features while smoothing structure that is not an edge. Again, if this assumption is correct, it is the non-distinct features of the fingerprint that are suppressed; hence this would increase correlation values and the number of accurate matches.
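The edge-stopping behaviour described above can be sketched with a Perona–Malik style explicit scheme, where the local gradient magnitude slows diffusion near edges. The diffusivity g(s) = 1/(1 + (s/K)²) and the parameter values below are illustrative choices, not the thesis's exact implementation.

```python
import numpy as np

def perona_malik(image, n_iter=10, K=0.1, dt=0.2):
    """Nonlinear isotropic (scalar driven) diffusion, explicit scheme.

    Diffusion is slowed where the local difference to a neighbour is
    large, so edges between ridges and furrows are preserved while
    flatter areas are smoothed.
    """
    u = image.astype(float).copy()
    g = lambda d: 1.0 / (1.0 + (d / K) ** 2)  # Perona-Malik diffusivity
    for _ in range(n_iter):
        # One-sided differences to the four neighbours (zero flux at borders).
        dN = np.roll(u, 1, axis=0) - u; dN[0, :] = 0
        dS = np.roll(u, -1, axis=0) - u; dS[-1, :] = 0
        dW = np.roll(u, 1, axis=1) - u; dW[:, 0] = 0
        dE = np.roll(u, -1, axis=1) - u; dE[:, -1] = 0
        u = u + dt * (g(dN) * dN + g(dS) * dS + g(dW) * dW + g(dE) * dE)
    return u
```

On a sharp step edge the neighbour differences are large, g is small, and the edge survives many iterations, while homogeneous regions diffuse freely.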


The nonlinear anisotropic scale-space method analyses structure at a larger scale than the nonlinear isotropic scale-space method. Instead of edges, the anisotropic approach considers ridge structure orientation, and suggests that small deviations in the orientation (i.e. smaller than ridge bifurcations and endings) are unstable over time. Hence, by enhancing the coherence in the fingerprint image, correlation values, as well as the total number of matches, are likely to increase.
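The coherence that the tensor driven approach aims to enhance can be made concrete via the structure tensor: from its eigenvalues μ₁ ≥ μ₂ one measure of local orientation strength is ((μ₁ − μ₂)/(μ₁ + μ₂))². A sketch follows; the box smoothing stands in for the Gaussian integration scale, and the window size is an illustrative assumption.

```python
import numpy as np

def box_smooth(a, r=2):
    """Cheap stand-in for Gaussian integration over a neighbourhood."""
    out = np.zeros_like(a)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
    return out / (2 * r + 1) ** 2

def coherence(image):
    """Coherence in [0, 1] from structure-tensor eigenvalues.

    Close to 1 for stripe-like (strongly oriented) structure such as
    clear ridge patterns, close to 0 for isotropic or flat regions.
    """
    gy, gx = np.gradient(image.astype(float))
    Jxx = box_smooth(gx * gx)
    Jxy = box_smooth(gx * gy)
    Jyy = box_smooth(gy * gy)
    # For the 2x2 tensor [[Jxx, Jxy], [Jxy, Jyy]]:
    diff = np.sqrt((Jxx - Jyy) ** 2 + 4 * Jxy ** 2)  # mu1 - mu2
    summ = Jxx + Jyy                                 # mu1 + mu2
    return np.where(summ > 1e-12,
                    (diff / np.maximum(summ, 1e-12)) ** 2, 0.0)
```

In a coherence-enhancing diffusion scheme such as Weickert's, a tensor built from these quantities steers smoothing along the ridge orientation rather than across it; the measure above only illustrates what "coherence" quantifies.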

*1.2 Methodology*

The methodological processes of this thesis are based on qualitative strategies, focusing on evaluating the effect of applying different scale-space methods to fingerprint images before verification. Qualitative evaluation in this case means that not only the overall effect on the final result is considered; analyses of the effects the method has on fingerprint image details are also performed. For this purpose a small sample of selected fingerprints from different individuals, representing different probable occurrences of template ageing problems, is used for testing. The same selection is used for all testing, to make comparison between the results of the different methods possible.

The fingerprints and the areas of interest for correlation have been selected manually and provided to the author by the company Fingerprint Cards AB. There has been no appraisal of whether a specific area is distinct enough to be used for identification purposes, the features have instead been selected mainly depending on how they are affected by the template ageing problem. The images have been selected to represent a broad range of fingerprint image quality, including clear ridge patterns that appear similar over time, clear ridge patterns that deviate substantially over time and smudgy (low quality) ridge pattern fingerprint images.

To be able to appropriately evaluate a scale-space method as intended, it is desirable to isolate the problem and eliminate other factors that may affect the results. As previously stated, the main aim is to suppress problems due to template ageing (differences in two images of the same fingerprint). Dissimilarities between two images that are caused by other factors should be eliminated before applying the method being evaluated.

One problem already identified is the non-uniform brightness in images of the same fingerprint, which may depend on the measurement device or on user input. To eliminate this problem, all images in the thesis are pre-processed to locally normalise their intensity values. The method utilised is a local linear histogram stretch, which produced the desired results and is used for pre-processing of all images in all testing throughout the report.
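A local linear stretch of this kind can be sketched blockwise: each block is linearly mapped so that its own minimum and maximum span the full intensity range, evening out brightness variations across the image. The block size and the non-overlapping tiling are assumptions of this illustration; the thesis's exact stretch may differ in detail.

```python
import numpy as np

def local_linear_stretch(image, block=16):
    """Blockwise linear histogram stretch to [0, 1].

    Each block is normalised independently, so regions that are
    uniformly darker or brighter (e.g. due to the sensor or finger
    pressure) end up with comparable local contrast.
    """
    img = image.astype(float)
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = img[y:y + block, x:x + block]
            lo, hi = tile.min(), tile.max()
            out[y:y + block, x:x + block] = (
                (tile - lo) / (hi - lo) if hi > lo else 0.0
            )
    return out
```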

All fingerprint images provided to the author by Fingerprint Cards AB had been manually matched (aligned) so that no translation or rotation appears between images of the same fingerprint (i.e. a feature in one image appears at the same position in all other images of the same fingerprint). This effectively excludes the problem of misregistration between two fingerprint images, which is outside the scope of this thesis.

All methods tested within this thesis utilise the same framework for testing and evaluating the results. This framework includes the selection of fingerprint images (mentioned above), pre-processing of images (image normalisation, as mentioned above) and definitions of evaluation measures. Two different evaluation measures are defined in this thesis: the modified normalised correlation coefficient and the relative uniqueness measure. The modified normalised correlation coefficient (*MNCC*) is the measurement used to compare two fingerprint images, or rather to correlate a feature from a fingerprint image over a target image. The relative uniqueness measure compares the correlation value at the feature position with the highest correlation value throughout the rest of the image, and also takes into consideration how distinct (unique) the feature is within the original fingerprint (referred to as the initial uniqueness measure, *UM_init*).
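The correlation step can be illustrated with the standard zero-mean normalised correlation coefficient, computed for a feature patch at every offset of the target image. The thesis's specific modification (*MNCC*) is not reproduced here, so this is only the textbook form under that caveat.

```python
import numpy as np

def ncc_map(feature, target):
    """Normalised correlation coefficient of `feature` over every
    position of `target` where the patch fits (values in [-1, 1])."""
    fh, fw = feature.shape
    th, tw = target.shape
    f = feature.astype(float)
    f = (f - f.mean()) / (f.std() + 1e-12)  # standardise the feature once
    out = np.empty((th - fh + 1, tw - fw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            t = target[y:y + fh, x:x + fw].astype(float)
            t = (t - t.mean()) / (t.std() + 1e-12)
            out[y, x] = (f * t).mean()  # Pearson correlation of the window
    return out
```

The peak of the returned map marks the best-matching position; a relative uniqueness measure would then compare the value at the expected feature position with the highest value elsewhere in the map.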

The use of a standardised framework for testing and evaluation has first and foremost been intended to isolate the problem at hand and to aid the comparability of results. It has also greatly supported the practical component, implementation and evaluation, of the thesis.

*1.3 Delimitation*

This thesis concentrates solely on the results of the scale-space methods, which is why no consideration has been given to computational efficiency or memory usage.

Limitations have also been made to the number and size of fingerprint images used for testing, as described above. Often a fingerprint identification/verification algorithm is evaluated by testing it on a large database. This thesis will instead focus on evaluation of the effects scale-space methods have on correlation between local areas of fingerprint images.

The thesis is divided into six main chapters. The first chapter is the introduction, which includes a description of the problem to be investigated, as well as motivation and brief descriptions of the methods evaluated within the thesis. It also includes an explanation of the methodology used during the practical part of the thesis work and for writing this report. The introductory chapter ends with an overview of the report structure.

The second chapter of the thesis is the background, and it includes all the theoretical information that the reader needs to be able to understand the implementation part. This chapter contains information that is essential to the thesis work, without being closely connected to the exact implementation used in the practical part of the thesis. In other words it includes essential information of a more general type. The second chapter describes, for instance, fingerprints, automatic fingerprint identification systems, the notion of scale, linear scale-space, nonlinear isotropic scale-space and nonlinear anisotropic scale-space.

Chapter 3 gives a brief overview of previously published papers that have adopted the same, or similar, scale-space methods as means of fingerprint image enhancement. The similarities and differences between those papers and this thesis are explained.

The fourth chapter defines and describes the selected data set, as well as the general testing framework and its measures. It also explains how the images were pre-processed to further isolate the template ageing problem as the only cause of deviations between two images of the same fingerprint.

Chapter 5 covers the implementation of the scale-space schemes and the evaluation of the methods and their results. Each scale-space method is described and motivated within the context of fingerprint image enhancement, with specific focus on the template ageing problem. The detailed analysis examines whether the anticipated results were achieved. Conclusions may then motivate and/or limit the subsequent method. The evaluation of each method also involves an assessment of whether it is suitable for use as fingerprint image enhancement or not.


The final part, which actually comprises two chapters, contains a summary of the results achieved in the implementation section, a concluding summary, and proposals for future work and improvements of the methods tested within the thesis.


**2 Background **

The main purpose of this chapter is to contextualise the thesis and give a brief introduction to the different areas investigated within this work. Chapter two will familiarise the reader with the theory and terminology needed to understand the rest of the thesis. Areas described within this chapter include the characteristics of fingerprints, the structure of Automatic Fingerprint Identification Systems, the basic theory of linear scale-space, and nonlinear isotropic and anisotropic scale-spaces.

*2.1 Fingerprints*

**2.1.1 Biometrics and Fingerprints **

Personal identification is usually divided into three types: by something one owns (e.g. a credit card or keys), by something one knows (e.g. a password or a PIN code), or by one's physiological or behavioural characteristics. The last method is referred to as biometrics, and the six most commonly used features are face, voice, iris, signature, hand geometry and, of course, fingerprints [1].

| Method | Examples |
| --- | --- |
| What you know | password, PIN code, user id |
| What you have | cards, keys, badges |
| What you are (biometrics) | fingerprint, face, voice, iris, signature, hand geometry |

**Table 2-1: Identification methods**

It has been established, and is commonly known, that everyone has a unique fingerprint [2] which does not change over time¹ [3]. Each person's finger has its own unique pattern; hence any finger could be used to successfully identify a person.

**2.1.2 A Fingerprint Autopsy **

A fingerprint’s surface is made up of a series of ridges and furrows. It is the exact pattern of these ridges and furrows (or valleys) that makes the fingerprint unique. The features of a fingerprint can be divided into three different scales of detail, of which the coarsest is the classification of the fingerprint.

The classification of fingerprints can be traced back to 1899, when Sir Edward Richard Henry, a British policeman, introduced the Henry Classification System [4, 5, 6], which classifies fingerprints into five different types: right loop, left loop, whorl, arch and tented arch. This classification system is still in use today but has been extended to include more types, for example double loop, central pocket loop and accidental.

¹ That a fingerprint does not change over time is actually a qualified truth; it is described further in the section on template ageing.


**Figure 2-1: Different fingerprint types of the Henry Classification System. Top to bottom, left to right: right loop, left loop, whorl, arch, tented arch [40]**

Fingerprint databases, which usually tend to be comprehensive, often index fingerprints based on their classification types [7]. Before searching, a quick classification of the fingerprint will help exclude most of the database, which consequently will reduce the search time. The classification indexing method is also often adopted by automatic fingerprint identification systems, where short search times are essential.

The second scale of fingerprint details consists of features at ridge level. The discontinuities (endings, bifurcations, etc.) that interrupt the otherwise smooth flow of ridges are called minutiae, and analysis of them, their position and direction, is what identifies a person. Many courts of law consider a match with 12 concurring points (*the 12-point rule*) present in a clear fingerprint as adequate for unique positive identification [3].

Some of the most common minutiae are presented in Table 2-2. The more unusual a minutia is, the more significance it has when used for identification.

- Ending (or termination)
- Bifurcation
- Independent (or short) ridge
- Dot
- Bridge
- Spur (or hook)
- Eye (or island)

**Table 2-2: Examples of minutiae types**


A special category of minutiae is the type usually referred to as singularity points, which include cores and deltas. A core is defined as the topmost point on the innermost upward-turning ridge [6], and a delta is defined as the centre of a triangular region where ridges flowing from three different directions converge [6]. The number of cores and deltas depends on the type of fingerprint; there are even fingerprint types completely lacking singularity points. However, using the number of singularity points, their location and type, is a common way to decide the fingerprint type in automatic fingerprint identification systems [8, 9], and they can also be used to calculate the rotation and translation between two images of the same fingerprint [10].

**Figure 2-2: Core (Ο) and delta (∆) points in three different types of fingerprints**

The third detail level is the finest level at which fingerprints can be analysed. Features at this scale include, for example, ridge path deviation, ridge width, ridge edge contour and pores. Analysis of third level detail requires that the method or device used for acquisition of the fingerprint pattern is highly detailed and accurate. Historically, pores have been used to assist in forensic identification; however, most matching methods mainly use minutiae comparisons, while pore correlation can sometimes be used as a secondary identification method [11].

**Figure 2-3: Level 3 detail of fingerprint [41] **

**2.1.3 Template Ageing and Fingerprint Quality **

It is commonly known that the two foremost advantages of fingerprint identification are that fingerprints are unique (individuality) [2] and remain unchanged over time (persistence) [3]. The latter statement is, however, a qualified truth, as a fingerprint may actually vary significantly during a short period of time. The main pattern will not change, but at a smaller, more detailed scale, differences may occur due to wear and tear, scars, wetness of the skin, etc. This is referred to as the problem of template ageing.

The reading of a fingerprint on two separate occasions may give relatively different results. For the minutiae extraction to be as accurate as possible, the quality of the fingerprint needs to be adequate. Apart from the method or technology used, the quality of an acquired fingerprint depends highly on the condition of the skin. The characteristic most likely to differ between two readings is the wetness of the skin.


Dry prints can appear broken or incomplete to electronic imaging systems, and with a broken ridge structure, identification becomes harder due to the appearance of false minutiae. Too wet a fingerprint, on the other hand, causes adjacent features to blend together.

Scar tissue is also highly affected by the wetness of the skin: with a dry finger the scar will not print well, while with a wet finger the scar has the appearance of a puddle. Since scar appearance is even more sensitive to the level of skin wetness than ordinary ridge structure, a scar that is not permanent can affect the accuracy of the minutiae extraction tremendously.

**Figure 2-4: Fingerprints of different qualities; (I) too dry, (II) too wet, (III) just right and (IV) scarred. Copyrighted images by BIO-key International, Inc., used with permission [42].**

**2.1.4 Automatic Fingerprint Identification System (AFIS) **

For contemporary applications, the fingerprint identification/verification process is undertaken automatically. Such a system is commonly known as an Automatic Fingerprint Identification System (AFIS).

A generic AFIS consists of five different stages: fingerprint acquisition, image enhancement, feature extraction, matching and decision, as illustrated in Figure 2-5.

**Figure 2-5: A generic Automatic Fingerprint Identification System [based on 1, 12]** (scanner → image enhancement → feature extraction → matcher → match / no match, with stored templates feeding the matcher)

**2.1.4.1 Identification and Verification **

An AFIS distinguishes between two different types of algorithm: identification and verification. For identification, also known as 1:N matching, a match for the acquired fingerprint is searched for in a database containing many different fingerprints. A match is achieved when a person is identified. For verification, also known as 1:1 matching, a single fingerprint template is available for comparison. In this case a match verifies that the person leaving the fingerprint is the same person from whose fingerprint the template was originally created. An example could be a smart card containing the template, in which case a verification would prove that the person who left the fingerprint is the actual owner of the smart card; hence it could, for instance, be used to replace the PIN code for a cash card.

**2.1.4.2 Enrolment **

To be identified as a valid user of a system, a person first needs to be registered on that system. For an AFIS this means the enrolment of one or more fingerprints. However, a fingerprint image requires relatively large storage space and contains a lot of unnecessary information, which is why only specific information used for identification purposes is stored in the system. The stored fingerprint information is usually referred to as a feature template (or feature vector).

**2.1.4.3 Acquisition **

The acquisition of a fingerprint is achieved via a fingerprint scanner, of which several different types exist. These scanners are known as "livescan" fingerprint scanners since they do not use ink but direct finger contact to acquire the fingerprint. They can be divided into five groups depending on the technique by which they acquire the fingerprint: optical, capacitive, thermal, ultrasound and non-contact methods [1]. The characteristics of the image a scanner returns depend on the type of scanner used. For example, optical and capacitive scanners tend to be sensitive to the dryness/wetness of the skin, and thermal scanners, although overcoming the wetness problem, give images with poor grey values [1].

**2.1.4.4 Image Enhancement **

After acquisition, a fingerprint image usually contains noise and other defects due to poor quality of the scanning device or similar reasons. Therefore image enhancement is required. The performance of a feature extraction algorithm relies heavily on the quality of the input fingerprint images, so the typical purpose of image enhancement in an AFIS is to prepare for feature extraction by improving the clarity of ridges and furrows [13] and suppressing noise [14].

It is, however, difficult to suppress noise and other spurious information without corrupting the actual fingerprint pattern. Various image processing techniques have been proposed, and which to use depends on what type of image defects need to be suppressed. Some examples include normalisation [13], clipping [8] and compensation for non-uniform inking or illumination characteristics of an optical scanner [2].

A further example of image processing, closely related to image enhancement, is the segmentation of fingerprint images [15]. A segmentation algorithm is used to decide which part of the image is the actual fingerprint and what part is the background (i.e. the noisy area at the borders of the image). Discarding the background will reduce the number of false features detected.

Also often used is some type of quality measure, which has a similar goal as a segmentation algorithm, namely to define the part of the image that contains fingerprint pattern of adequate quality. This is accomplished by determining the fingerprint image quality locally over the whole image, and then discarding parts of the fingerprint not reaching the required quality value. Examples include the coherence measure [16] and certainty level of the orientation field [12].

**2.1.4.5 Feature Extraction **

The fingerprint signal in its raw form contains the data necessary for successful identification hidden amongst a lot of irrelevant information. Thus image enhancement processes remove noise and other clutter before the next step of localising and identifying distinct features, so-called feature extraction. Today's AFISes commonly identify only ridge endings and bifurcations as distinct features [1, 12], mainly because all other types of minutiae can be expressed using only these two main types, and because they are by far the most common [19]. Algorithms often return too many features, some of which are not actual minutiae; hence some kind of post-processing to remove these spurious minutiae is necessary.

A typical feature extraction algorithm is shown in Figure 2-6, and is explained more thoroughly in [12]. It involves five operations: (I) orientation estimation, with the purpose of estimating local ridge directions; (II) ridge detection, which separates ridges from valleys using the orientation estimate, resulting in a binary image; (III) thinning/skeletonisation, giving the ridges a width of 1 pixel; (IV) minutiae detection, identifying ridge pixels with three ridge-pixel neighbours as ridge bifurcations and those with one ridge-pixel neighbour as ridge endings; and (V) post-processing, which removes spurious minutiae.

**Figure 2-6: Example of minutiae extraction algorithm; (I) input fingerprint, (II) orientation field, (III) extracted ridges, (IV) skeletonized image and (V) extracted minutiae. **
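Step (IV) above, counting ridge-pixel neighbours on the one-pixel-wide skeleton, can be sketched in a few lines. The helper below is a hypothetical illustration of that neighbour-counting rule, not the algorithm of [12]:

```python
import numpy as np

def classify_minutiae(skel):
    """Classify skeleton pixels by their number of ridge neighbours.

    skel : 2D binary array with one-pixel-wide ridges (value 1).
    A ridge pixel with one ridge neighbour is a ridge ending, and one
    with three ridge neighbours is a bifurcation; border pixels are
    skipped. Returns two lists of (row, col) coordinates.
    """
    endings, bifurcations = [], []
    rows, cols = skel.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if skel[r, c]:
                # Count ridge pixels in the 8-neighbourhood
                n = skel[r - 1:r + 2, c - 1:c + 2].sum() - 1
                if n == 1:
                    endings.append((r, c))
                elif n == 3:
                    bifurcations.append((r, c))
    return endings, bifurcations
```

On a Y-shaped skeleton this marks the three branch tips as endings and the junction as a bifurcation; a real extractor then applies the post-processing of step (V) to discard spurious detections.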

**2.1.4.6 Matching **

The matching module determines whether two different fingerprint representations (extracted features from test finger and feature template) are impressions of the same finger [12, 17].

There are six possible differences between the extracted template and the reference template that need to be compensated for [1]; (I) translation, (II) rotation, (III) missing features, (IV) additional features, (V) spurious features and (VI) elastic distortion between a pair of feature sets. Missing and additional features may depend on overlap mismatch due to translation between the two fingerprint readings.

Fingerprint matching algorithms usually adopt a two-stage strategy: first the correspondence between the feature sets is recognized, and then the actual matching is performed [17, 12]. The matching algorithm defines a metric (the match score) of the similarity between the two fingerprint feature sets, and a comparison with a system-defined decision threshold results in a match or a non-match. The value of the decision threshold decides the system security level; a high value will give a more secure system but will also result in more false rejections, while a lower value may give additional false acceptances and hence be less secure.

An example of a match score is the Goodness Index [18] which takes into consideration the number of spurious, missing, and paired minutiae and weighs them with a local quality factor.


The effect of the quality factor is that spurious and missing minutiae in a high quality area of the fingerprint affects the Goodness Index more than in a low quality area.

**2.1.4.7 Performance Evaluation **

There are four possible outcomes of an identification or verification attempt; a valid person being accepted (true positive), a valid person being rejected (false negative or false rejection), an impostor being rejected (true negative) and an impostor being accepted (false positive or false acceptance). The accuracy of an AFIS is defined by the relative number of false acceptances (false acceptance rate, FAR) and false rejections (false rejection rate, FRR).
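These four outcomes translate directly into FAR and FRR estimates when counted over sets of match scores. A minimal sketch, with made-up score lists rather than data from this thesis:

```python
def far_frr(genuine_scores, impostor_scores, threshold):
    """Estimate FAR and FRR for a score-based verifier.

    A score at or above the threshold means acceptance, so genuine
    scores below it are false rejections (FRR) and impostor scores at
    or above it are false acceptances (FAR).
    """
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return far, frr

# Made-up score lists, for illustration only
genuine = [0.9, 0.8, 0.6, 0.4]
impostor = [0.1, 0.3, 0.5, 0.2]
print(far_frr(genuine, impostor, 0.55))  # high threshold: FAR 0.0, FRR 0.25
print(far_frr(genuine, impostor, 0.25))  # low threshold:  FAR 0.5, FRR 0.0
```

Raising the threshold lowers FAR at the cost of FRR, and vice versa, which is exactly the trade-off discussed below.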

Figure 2-7 shows a plot of impostor (*H1*) and genuine (*H0*) distribution curves, with the match score (*s*) on the horizontal axis and a decision threshold (*Td*) defined as a specific match score.

A matching attempt giving a match score higher than the decision threshold will result in user acceptance, and a match score lower than the decision threshold will give a rejection. The area under the genuine distribution, left of the decision threshold, is the FRR, and the area under the impostor distribution, right of the decision threshold, is the FAR. An optimal situation would be for the distribution curves to be completely separated, since that would allow for a decision threshold resulting in zero FAR and FRR. In reality, however, no AFIS is that accurate, and the threshold must be decided depending on the sought characteristics of the AFIS. The value of the decision threshold is a trade-off between security and user convenience. For example, a high security access application for obvious reasons uses a high decision threshold to get a low FAR, whereas a less secure system may use a lower decision threshold to avoid unnecessary false rejections that could disturb a user of the system. Further examples of applications that use a low decision threshold include forensic applications, which want to make sure that the AFIS does not overlook a potential suspect; thus, at the cost of more false acceptances, a low decision threshold is preferred.

The FAR and the FRR distribution curves are usually used when evaluating an AFIS. The two measurements give information on different characteristics of an AFIS. FAR analysis focuses on the individuality of fingerprints (i.e. how unique a fingerprint or fingerprint representation actually is), as the reason for a high FAR is a high level of similarity between non-matching fingerprints or fingerprint representations. FRR analysis centres on the template ageing problem, because a high FRR is due to the dissimilarities between two different acquisitions of the same fingerprint [11].


**Figure 2-7: Impostor (H1) and genuine (H0) distribution curves **


*2.2 Image Representation at Different Scale *


**2.2.1 The Notion of Scale **

As stated by Tony Lindeberg in [20]: *“An inherent property of real-world objects is that they only exist as meaningful entities over certain ranges of scale”*. Every day humans view many objects over a large range of scales without reflecting on it. To better describe the concept of scale it helps to go outside the scale range perceivable by human vision. These scale ranges are less intuitive from the human vision point of view and will hopefully, because of that, make the notion of scale more comprehensible. A perfect example is the Powers of 10 series [39], where images at scales of integer powers of 10 meters are shown. A sample is shown in Figure 2-8, where the leftmost image, representing the scale of 10^21 m, illustrates a swirl of billions of stars within the Milky Way galaxy, the middle one, of scale 10^0 m, shows a man resting at a picnic, and the rightmost picture shows individual neutrons and protons that make up the structure of the carbon atom at the scale of 10^-14 m.

**Figure 2-8: Powers of 10. Images with scales of powers of 10 meters. (I) 10^21 m, a swirl of a hundred billion stars in our Milky Way galaxy, (II) 10^7 m, the Earth, (III) 1 m, a man, (IV) 10^-3 m, just below the skin of the man's hand and (V) 10^-14 m, an atom with individual neutrons and protons visible. [39] **

Each image in Figure 2-8 represents a certain scale range. For example, the image of Earth has a defined scale of 10^7 m; hence objects of larger scale are not visible in this image. This is referred to as the *outer scale* of the image. When observing the next image, of the man at scale 1 m, we realise that the man (or any other human being, for that matter) is not visible in the image of Earth. This means that there is also a smallest scale of what is being depicted in an image. This is referred to as the *inner scale* of the image, and it is defined by the resolution of the image. In the case of Figure 2-8 all images have a resolution of 198 x 198 pixels, which gives an inner scale of approximately 5.1 mm for the image of the man. This means that we are able to see the fingers of the man, since the scale range of a grown man's finger lies within the range 5.1 mm to 1 meter, but we are not able to identify the hands of his watch, since their scale range falls outside the image scale range.

The scale range of an object is closely connected to the process of observation. An observation is the measurement of a physical property made by an *aperture*. In a mathematical sense a measurement can be made infinitely small (sampling); for physical measurements, however, the aperture must for obvious reasons be of finite size. The physical property is integrated (a weighted integration) over the size of the aperture, and the size of the aperture defines the resolution of the resulting signal.


Take for instance a digital camera, where each element (aperture) of the CCD² integrates light over a spatial area, resulting in a pixel value in the final image. A measurement device (like an object) is also limited by a scale range, which is defined by the smallest (inner scale) and the largest (outer scale) size of measurable objects or features. The outer scale is thus bounded by the size of the detector (e.g. the whole CCD for a digital camera) and the inner scale is limited by the integration size of the smallest aperture (e.g. a pixel for a digital camera) [35]. However, the scale boundaries of, for example, a camera are not fixed, since they depend on the distance between the camera and the object of interest; instead the ratio between the outer scale and the inner scale is commonly used to define the dimension of a measurement device [35].

An example of the use of scale and measurement device limitations is image dithering. When printing a grey-scale image on a black and white laser printer, the grey-scale intensity values are achieved by adjusting the frequency of the black dots on the paper; the higher the frequency of printed black dots, the darker the colour. This is called dithering. It works because the dots applied on paper by a laser printer are smaller than what a human eye is able to perceive. Thus the eye will integrate the intensity values over a small area (defined by the inner scale of the eye), and the relative coverage of black and white within such an area defines the intensity value perceived by the eye. Figure 2-9 illustrates an example of image dithering.

**Figure 2-9: (I) Original image, (II) dithered image and (III) magnification of a detail. **

**2.2.2 Linear Scale-Space **

One of the most basic and important tasks in the field of image analysis is deriving useful information about the structure of a 2D signal. To extract data of any type of representation from an image, an operator is used to interact with the data. General questions that always need to be answered when developing a system for automatic image analysis include firstly what kind of operator to utilise, and secondly what size it should have. The answer to the first question, which type of operator should be used, depends on what feature or sort of structure is being detected in the image. Examples of image features commonly of interest (within the field of image analysis) include edges, corners and ridges (i.e. lines).

The second question, concerning the size of the operator, depends on the expected size of the features to detect. However, sometimes the size of the sought features is not known, which is why it may be of interest to search for features at different scales. This section of the thesis will

² A CCD (charge coupled device) is a small chip with several hundred thousand individual picture elements. It is commonly used in digital cameras, where at each pixel position it absorbs incident light and converts it to an electric signal. Each picture element on the CCD results in a pixel in the digital image.


describe how the notion of scale has been incorporated into the mathematics of uncommitted observation resulting in the framework of linear scale-space.

The initial scale-space idea is to be able to represent an image at arbitrary scales. An image is initially bound by its inner and outer scale, which limit the scale ranges that may be represented (i.e. there is no information in the 2D signal about objects or features outside the scale range of the image). In other words, it is impossible to derive an image of a finer scale than the inner scale of the original image without additional information. What is possible, however, is to describe the image at coarser scales by raising the inner scale. A very common practical example where this is useful is noise suppression. Since noise usually appears at fine scales (often at pixel level), it is effectively suppressed if the inner scale of the image is raised beyond the scale of the noise.

Representing an image at a coarser scale can be compared to observing it through an aperture of larger width than its inner scale. Practically this means that the image must be filtered by an operator (which represents the aperture). The basic question to ask is what operator to use.

To be able to derive an operator that is used to represent images at coarser scales, some requirements on its behaviour must be specified. Linear scale-space can be compared to the visual front-end of human vision. The visual front-end is defined as the first stage of vision, where an uncommitted observation is made. No prior knowledge about, and no feedback to, the image is available at this stage of the vision process. From this starting point several papers have defined a number of similar axioms, or requirements, from which they all, in different ways, have derived the Gaussian kernel as the unique kernel for a linear scale-space framework [36].

Using the concept of an uncommitted observation as a prerequisite the following axioms may be used to derive the Gaussian [21]:

• *linearity*, no a priori knowledge about, or model of, the image exists,

• *spatial shift invariance (homogeneity)*, no spatial position is preferred (i.e. the whole image is treated equally),

• *isotropy*, no preferred direction, features of all directions are treated equally (this axiom automatically results in a circular operator in 2D, and spherical in 3D [35])

• *scale invariance*, no specific scale is emphasized.

The Gaussian kernel supplies the tool of a one-parameter kernel family to describe images at arbitrary (coarser) scales. The two-dimensional Gaussian kernel is defined as

$$g(x, y; t) = \frac{1}{2\pi t}\, e^{-\frac{x^2 + y^2}{2t}}$$

where *x* and *y* are the spatial coordinates and *t* is the scale parameter. The relation between the scale parameter and the standard deviation of the Gaussian is *t* = σ². The factor in front of the exponential function is a normalising factor, which ensures that the integral of the Gaussian function is always exactly one:

$$\int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} g(x, y)\, dx\, dy = 1$$

This is an important feature of the Gaussian function when used within the scale-space concept, since it ensures that the average grey-level of the image remains the same when blurring with the Gaussian kernel [35]. The normalisation must not be forgotten when discretising the function. Figure 2-10 shows the Gaussian kernel both as a 2D image and a 3D mesh.

**Figure 2-10: 2D Gaussian kernel; (I) 2D image, (II) 3D mesh. **
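A discretised version of the kernel can be built by sampling the formula above and renormalising so the weights sum to exactly one. A minimal sketch (the truncation radius of three standard deviations is a common choice, not prescribed by the thesis):

```python
import numpy as np

def gaussian_kernel(t, radius=None):
    """Sampled 2D Gaussian g(x, y; t), with t = sigma^2, renormalised.

    Renormalising the discrete samples so that they sum to exactly one
    preserves the average grey level under smoothing.
    """
    sigma = np.sqrt(t)
    if radius is None:
        radius = int(np.ceil(3 * sigma))  # truncate at ~3 sigma
    x = np.arange(-radius, radius + 1)
    X, Y = np.meshgrid(x, x)
    g = np.exp(-(X ** 2 + Y ** 2) / (2 * t)) / (2 * np.pi * t)
    return g / g.sum()  # discrete renormalisation
```

Without the final renormalisation, the truncated samples would sum to slightly less than one and repeated smoothing would darken the image.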

An image scale-space (a term coined by Witkin 1983 [22] and Koenderink 1984 [21]) is defined as the stack of images created by including the original image and all subsequent images resulting from convolution with the Gaussian kernel of increasing width, with the scale parameter bounded by the image inner and outer scale. Although Witkin's and Koenderink's articles are considered to have pioneered the concept of linear scale-space in the western world, it is also necessary to point out that similar results had already been achieved in 1959 in Japan by Taizo Iijima [23]. However, these results and the research following them were not known in the western world until much later (the first known reference in western research literature is dated to 1996 [23]). Figure 2-11 shows three different images and samples from their scale-spaces. An image at scale zero (*t* = 0) is defined to be the original, unsmoothed, image.

**Figure 2-11: Scale-space images; (I) t = 0, (II) t = 4, (III) t = 8, (IV) t = 16 and (V) t = 32. **

It is easy to see how features of finer scales are suppressed at higher scales. Take for instance the images of the baboon in the top row of Figure 2-11. In the leftmost (original) image the fine structure of the hair in the baboon's fur is visible, but it quickly disappears when traversing up the scale-space ladder. At coarser scales somewhat larger features, like the eyes and nostrils, disappear, and at scale *t* = 32 only the outlines of the different parts of the baboon's face are visible. The images in the middle row illustrate a cosine signal with varying period. It can be interpreted as vertical lines of different scales. It is easily noticed how the lines of smaller scales disappear early in the image scale-space, and how thicker lines are successively smoothed at coarser scales. At a large enough scale an image will always converge towards a single grey value.

Figure 2-12 shows a detail of the fingerprint in Figure 2-11, and how it evolves at coarser scales. The upper left image is the original fingerprint feature, and following are (left-to-right, top-to-bottom) scale-space smoothed versions at t = 1, 4 and 16. The top right image (calculated at t = 1) shows how deviations within the ridges and furrows (i.e. very small features) are evened out. In the image calculated at scale t = 4 it is noticeable how the ridge/furrow pattern itself is weakened, and in the lower right image (t = 16) the feature is almost completely flattened and there is barely any structure left.


**Figure 2-12: Detail of fingerprint; (left-to-right, top-to-bottom) original and scale-spaced smoothed at t = {1, 4, 16}. **

As previously mentioned, linear scale-space has been derived in many different ways, of which one more is essential to mention. In the 1984 article “The structure of images” Koenderink was the first to show that the *diffusion equation* is the generating equation of a linear scale-space [21]. Koenderink used the concept of *causality* as a starting point, an axiom stating that no new level surfaces must be created in the scale-space representation when the scale parameter is increased [20]. This axiom has also been formulated in several different ways, one of which is that local extrema are not enhanced at coarser scales (i.e. intensity values of maxima decrease and those of minima increase).

The diffusion equation is defined as

$$\partial_s L = \nabla^2 L = \Delta L$$

where *s* is the scale parameter. The diffusion equation states that the derivative with respect to scale equals the divergence of the gradient of *L*, the luminance function (or image) in our case [21]. This is the same as the sum of the second partial derivatives, which is the Laplacean (ΔL). Bart ter Haar Romeny supplies an interpretation of the diffusion equation in the case of scale-space smoothing an image [35]: “The luminance can be considered a flow that is pushed away from a certain location by a force equal to the gradient. The divergence of this gradient gives how much the total entity (luminance in our case) diminishes with time”. The relationship between the scale parameter *t* in the Gaussian function and *s* in the diffusion equation is *t* = 2*s* [21]. In the two-dimensional case the diffusion equation becomes:

$$\partial_s L = \frac{\partial L}{\partial s} = \frac{\partial^2 L}{\partial x^2} + \frac{\partial^2 L}{\partial y^2} = L_{xx} + L_{yy}$$

With the requirements of causality, isotropy, homogeneity and linearity, the solution to the diffusion equation is the Gaussian kernel [21]. This solution is referred to as the Green's function of the diffusion equation. The initial condition of the diffusion equation is defined as

$$L(\,\cdot\,; 0) = L_0$$

which means that the scale-space image (*L*) at scale 0 is the original image (*L0*).

The diffusion equation is well known within physics and is often referred to as the *heat equation* within the field of thermodynamics, since it describes the heat distribution (*L*) over time (*s*) in a homogeneous medium with uniform conductivity [37].

Solving the linear diffusion equation and convolving with a Gaussian give the same result; thus there are two options when implementing a linear scale-space: to approximate the diffusion equation or the convolution process [36]. Throughout this report the linear scale-space is implemented by approximating the convolution with a Gaussian kernel. Nevertheless, the diffusion equation will prove to be a better alternative for implementing nonlinear scale-spaces, which are described in the following sections.
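The diffusion route can be sketched as an explicit iteration of the discrete Laplacean; the following is a minimal illustration, not the implementation used in this thesis. With reflecting boundaries (edge replication) the average grey level is conserved exactly, as required of scale-space smoothing:

```python
import numpy as np

def diffuse_linear(L, n_steps, ds=0.2):
    """Iterate the explicit linear scheme L <- L + ds * Laplacean(L).

    Edge replication implements reflecting boundaries, under which the
    sum (and hence the average grey level) of the image is conserved.
    ds must stay below 0.25 for the explicit scheme to be stable.
    """
    L = L.astype(float).copy()
    for _ in range(n_steps):
        P = np.pad(L, 1, mode="edge")
        lap = P[:-2, 1:-1] + P[2:, 1:-1] + P[1:-1, :-2] + P[1:-1, 2:] - 4 * L
        L = L + ds * lap
    return L
```

After n steps of size ds the total scale is s = n·ds, which with t = 2s corresponds approximately to convolution with a Gaussian of variance σ² = 2·n·ds.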

An essential additional result for linear scale-space is that the spatial derivatives (of arbitrary order) of the Gaussian are also solutions to the diffusion equation [21]. Shown in Figure 2-13 is an image and a 3D mesh of the 1st derivative in the x-direction of a 2D Gaussian kernel.

**Figure 2-13: 1st derivative of a 2D Gaussian in x-direction; (I) 2D image, (II) 3D mesh. **

Together with the Gaussian kernel, the Gaussian derivatives form a complete family of differential operators [21]. Using the scale-space family of differential operators it is possible to create a scale-space of any measurement. Figure 2-11 showed samples from the scale-space of image intensity values; Figure 2-14 shows a scale-space of the gradient magnitude. In this case the scale parameter defines the size of the Gaussian derivatives. The gradient magnitude at scale *t* is calculated by:

$$\|\nabla_t L\| = \sqrt{\left(\frac{\partial L_t}{\partial x}\right)^2 + \left(\frac{\partial L_t}{\partial y}\right)^2} = \sqrt{L_{t,x}^2 + L_{t,y}^2}$$

**Figure 2-14: Samples from a gradient magnitude scale-space; (I) original image, (II) t = 0.01, (III) t = 1, (IV) t = 4 and (V) t = 16. **

Obviously the edges of finer scale features are apparent in the images of low scale, and in the image at scale t = 16 only the edges of larger objects are visible. An interesting feature in the fingerprint image at scale t = 4 is that the gradient magnitude of the ridges close to the core point of the fingerprint is reasonably strong, while the gradient magnitude of ridges elsewhere in the fingerprint is barely noticeable. This is because the ridges in the centre of the fingerprint are slightly wider and further apart; thus they are still large enough to be detected as edges at scale t = 4.
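Computing such a gradient magnitude measurement amounts to separable filtering with a Gaussian and its first derivative. A minimal sketch (helper names are made up for illustration, not taken from the thesis):

```python
import numpy as np

def gaussian_deriv_1d(t, radius=None):
    """1D Gaussian (normalised) and its first derivative, with t = sigma^2."""
    sigma = np.sqrt(t)
    if radius is None:
        radius = int(np.ceil(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x ** 2 / (2 * t))
    g /= g.sum()
    dg = -x / t * g  # derivative of the Gaussian
    return g, dg

def gradient_magnitude(L, t):
    """||grad L|| at scale t via separable Gaussian-derivative filtering.

    np.convolve flips the kernel, which flips the sign of the derivative
    response; the gradient magnitude is unaffected by that sign.
    """
    g, dg = gaussian_deriv_1d(t)
    conv = lambda img, k, axis: np.apply_along_axis(
        lambda m: np.convolve(m, k, mode="same"), axis, img)
    Lx = conv(conv(L, g, 0), dg, 1)  # smooth along y, differentiate along x
    Ly = conv(conv(L, g, 1), dg, 0)  # smooth along x, differentiate along y
    return np.sqrt(Lx ** 2 + Ly ** 2)
```

On a step edge this responds strongly at the edge and is zero in the flat regions, and increasing t widens the response while suppressing finer detail.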

This section has described the concept of linear scale-space and demonstrated it to be a framework for multi-scale image analysis, based on a solid mathematical foundation, which gives us the tools of a one-parameter family of kernels to derive images and image measurements at arbitrary scales within the image scale range. The solid mathematical foundation highly motivates the usage of scale-space methods in image processing and analysis.

**2.2.3 Nonlinear Isotropic Scale-Spaces **

Chapter 2.2.2 showed that the isotropic Gaussian kernel is the unique kernel forming a linear scale-space, and the obvious choice when no prior knowledge is available about an image and its structure. There are, however, some important disadvantages to using operators from the linear scale-space family. Firstly, filtering an image with the Gaussian smoothes noise and other unwanted features as well as important features (like edges), which makes identification harder [36]. Secondly, edges are dislocated when smoothing an image at coarser scales [36], which makes it harder to relate edges detected at coarse scales to edges at finer scales. By relaxing the axioms defined for the linear scale-space it is possible to design the smoothing process to better preserve (or even enhance) important features like edges.


In the case of *nonlinear isotropic scale-spaces*, as described in this section, two of the required axioms for the linear scale-space are excluded, namely homogeneity and linearity.

Excluding the axiom of homogeneity (making the process *inhomogeneous*) introduces the possibility to smooth with different scales at different positions in the image. For example coarser scales of smoothing may be used in image areas of similar intensity values, while finer smoothing scales can be used at edges and at features where the gradient is strong (i.e. where the intensity values locally differ rapidly). Since the smoothing process changes the image it may be wise to reanalyse the structure of the image at different scales to create a more accurate image evolution towards coarser scales. This introduction of feedback to the system makes the process *nonlinear*, and requires an iterative implementation to allow for reanalysis of the image structure during the diffusion process.

The idea of a nonlinear isotropic scale-space was first introduced [36] in Perona & Malik's 1987 article “Scale space and edge detection using anisotropic diffusion” [24]. They proposed an implementation using the diffusion equation

$$\partial_s L = \nabla \cdot (c\, \nabla L)$$

where *c* is a scalar function dependent on spatial position, recalculated at each iteration.

Written in its spatial components, the partial differential equation is [26]:

$$\partial_s L = \partial_x (c\, \partial_x L) + \partial_y (c\, \partial_y L)$$

As the title of the Perona & Malik article states, their scale-space method is usually referred to as anisotropic. Following the terminology of Weickert [36], however, the Perona & Malik method should be considered isotropic, since the diffusion process is controlled by a scalar, resulting in equal diffusion in each spatial direction (i.e. isotropy).

The main consideration when implementing the diffusion equation described above is how to choose the scalar function *c*, often referred to as the conductivity function. Perona & Malik define three criteria [24, 25]: causality, immediate localisation and piecewise smoothing. The causality criterion has previously been explained in the section on linear scale-space (see 2.2.2), and Perona & Malik define it as “no ‘spurious detail’ should be generated passing from finer to coarser scales” [24, 25]. The second criterion, immediate localisation, means that region boundaries at a certain scale should be sharp and localised at positions meaningful for region boundaries at that particular scale, i.e. edges should not be dislocated at coarse scales. Piecewise smoothing means that the smoothing process should be stronger intra-regionally than inter-regionally; in other words, intensity values within a region should be blurred together before blurring with intensity values across region boundaries.

An intuitive representation of region boundaries is strong edges separating regions of similar intensity values. Considering this definition, it is reasonable to use an edge detection operator to define the diffusivity of an image. The most commonly used measure, also the one used by Perona & Malik [24, 25], is the gradient magnitude (||∇L||). Hence a function depending on the gradient magnitude, c(||∇L||), would be a consistent selection for the conductivity function. Before defining the conductivity function we shall take a closer look at an implementation of the nonlinear isotropic (or scalar driven) diffusion, to obtain a better understanding of its effect on the image evolution.


A discrete approximation of the scalar driven diffusion is described in [26], and is defined as:

$$\begin{aligned} L_{s+ds}(x,y) = L_s(x,y) + \frac{ds}{2}\big[ & \big(c_s(x{+}1,y)+c_s(x,y)\big)\big(L_s(x{+}1,y)-L_s(x,y)\big) \\ - & \big(c_s(x,y)+c_s(x{-}1,y)\big)\big(L_s(x,y)-L_s(x{-}1,y)\big) \\ + & \big(c_s(x,y{+}1)+c_s(x,y)\big)\big(L_s(x,y{+}1)-L_s(x,y)\big) \\ - & \big(c_s(x,y)+c_s(x,y{-}1)\big)\big(L_s(x,y)-L_s(x,y{-}1)\big) \big] \end{aligned}$$

where *s* is the scale, *x* and *y* are the spatial position, *L_s* is the image at scale *s* and *c* is the conductivity function. The scale-step, *ds*, should be set to less than 0.25 to ensure a stable diffusion process. With the conductivity function constant, the scheme reduces to:

$$L_{s+ds}(x,y) = L_s(x,y) + ds\big[L_s(x,y{+}1) + L_s(x,y{-}1) + L_s(x{+}1,y) + L_s(x{-}1,y) - 4L_s(x,y)\big]$$

Within the brackets on the right hand side of the equation is the commonly used discrete version of the Laplacean:

$$\begin{pmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{pmatrix}$$

Recollecting the definition of the linear scale-space diffusion equation (see 2.2.2), where the right hand side of the equation is the Laplacean, it is evident that replacing the conductivity function with a constant value of 1 results in a discrete approximation of the linear scale-space diffusion. If we instead set the conductivity function to a constant value of zero, the image at the coarser scale is the same as the one at the finer scale (i.e. the image is not diffused at all).

Using these two conclusions, the conductivity function should be selected to give values close to one (i.e. maximum diffusion) at low values of the gradient magnitude, and values close to zero (i.e. no diffusion) for strong gradients. This will preserve edges while blurring regions of similar intensity values. Perona & Malik [24, 25] proposed two different conductivity functions with these properties:

$$c_{PM1} = \exp\left(-\left(\frac{\|\nabla L\|}{\lambda}\right)^2\right) \qquad \text{and} \qquad c_{PM2} = \frac{1}{1 + \left(\frac{\|\nabla L\|}{\lambda}\right)^2}$$


The function curves are shown in the following figures.

**Figure 2-15: Perona & Malik conductivity function, c_PM1, of different λ. **

**Figure 2-16: Perona & Malik conductivity function, c_PM2, of different λ. **

Other conductivity functions have been proposed; apart from the two Perona & Malik functions described above, one more has been considered in this thesis, taken from Weickert [27]:

$$c_W = \begin{cases} 1 & \big(\|\nabla L\| = 0\big) \\[4pt] 1 - \exp\left(\dfrac{-3.315}{\left(\|\nabla L\|/\lambda\right)^4}\right) & \big(\|\nabla L\| > 0\big) \end{cases}$$
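The three conductivity functions translate directly into code. In the sketch below the Weickert form assumes the constant 3.315 and the fourth-power argument given above:

```python
import numpy as np

def c_pm1(g, lam):
    """Perona & Malik exponential conductivity."""
    return np.exp(-(g / lam) ** 2)

def c_pm2(g, lam):
    """Perona & Malik rational conductivity."""
    return 1.0 / (1.0 + (g / lam) ** 2)

def c_w(g, lam):
    """Weickert conductivity; equals 1 where the gradient vanishes.

    The constant 3.315 and the fourth-power argument are assumptions
    about the exact form, following the equation above.
    """
    g = np.asarray(g, dtype=float)
    out = np.ones_like(g)
    nz = g > 0
    out[nz] = 1.0 - np.exp(-3.315 / (g[nz] / lam) ** 4)
    return out
```

All three stay close to one for gradients well below λ (interior of a region, full diffusion) and fall towards zero for gradients well above λ (edges, no diffusion).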

Function curves for c_W with varying λ are shown in Figure 2-17.

**Figure 2-17: Weickert conductivity function, c_W, of different λ. **


Figure 2-18 displays examples of images taken from scale-spaces using the different diffusivities described above. The images have been calculated at scale *s* = 0.5, with λ = 0.01, during 100 iterations with *ds* = 0.2. The gradient magnitude image of the first iteration is shown in Figure 2-19 (top row, column *s* = 0.5).

**Figure 2-18: (top left) original image, resolution 400x300, (top right) Weickert diffusivity c_W, (bottom) Perona & Malik diffusivity c_PM1 (left) and c_PM2 (right). **

The images in Figure 2-18 show that the choice of conductivity function strongly affects the result of the diffusion process. This is also the case with the selection of the parameter λ, which is evident considering the images in Figure 2-19. In all three conductivity functions previously described, λ has the role of a contrast parameter, separating high contrast and low contrast regions. Image areas with gradient magnitude larger than lambda, i.e. ||∇L|| > λ, are considered to be edges, and areas with ||∇L|| < λ are regarded as belonging to the interior of a region [36, 27].

The remaining parameter that strongly affects the outcome of the diffusion process is related to the calculation of the image gradient. A common way to do this is to calculate the image derivatives through convolution with the Gaussian first derivatives, and to use those results to calculate the gradient magnitude. This was utilised to calculate the images in Figure 2-14. This means that the size (scale parameter) of the Gaussian kernels used to calculate the gradient also affects the results of the diffusion. The scale at which the gradient is calculated is sometimes referred to as the observation scale.

Figure 2-19 illustrates the results from 12 different scale-spaces of the top left image in Figure 2-18. The scale-spaces have been calculated using the Weickert conductivity function with different λ-values (0.005, 0.01 and 0.025) and different scale parameters, s (0.05, 0.5, 1 and 4), for the gradient calculation. Included in the figure are images of the gradient magnitude at different scales, calculated at the first iteration (top row). The intensity values of the gradient magnitude images have been rescaled to lie between 0 (black) and 0.5 (white), to obtain higher contrast. Since the gradient magnitude never exceeded 0.47, no information is lost by rescaling the intensity values. The images in Figure 2-19 are intended to show the effect the selection of observation scale and λ-value (for the Weickert conductivity function) has on the diffused images.

*(Figure panels: columns show the gradient magnitude ∇L at s = 0.05, 0.5, 1 and 4; rows show λ = 0.005, 0.010 and 0.025.)*

**Figure 2-19: Nonlinear isotropic scale-spaces (scalar driven diffusion) of top left image in Figure 2-18. Images evolved through 100 iterations using a scale-step, ds, of 0.2.**

Figure 2-20 illustrates how a detail of a fingerprint is affected by nonlinear isotropic scale-space smoothing. The fingerprint feature is the same as was previously shown in Figure 2-12. The top left image is the original signal, and the rest of the images have been calculated at 5, 25 and 100 iterations respectively.

**Figure 2-20: Detail of fingerprint; (left-to-right, top-to-bottom) original and nonlinear isotropic scale-space smoothed at 5, 25 and 100 iterations.**

In the case of linear scale-space it was established that the implementation could be done either by convolving an image with the Gaussian kernel or by approximating the linear isotropic diffusion process; both methods achieve the same result. An implementation of a nonlinear isotropic scale-space using Gaussian convolution would require the conductivity function to control the size of the Gaussian kernel at each spatial position in the image, and in that way steer the smoothing process. This is possible, but it should be noted that it will not render the same results as approximating the nonlinear diffusion process, since in the latter case the diffusivity (i.e. gradient magnitude) is recalculated for each iteration. This feedback is missed when using a Gaussian convolution implementation. Throughout the thesis, implementation of nonlinear isotropic scale-spaces has been undertaken using the discrete approximation of the diffusion process.
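A minimal sketch of such a discrete approximation, in the spirit of Perona and Malik's original nearest-neighbour scheme with the rational diffusivity (not necessarily the thesis's exact implementation; the function name is illustrative), showing the per-iteration recalculation of the diffusivity:

```python
import numpy as np

def diffuse_isotropic(L, lam, ds=0.2, n_iter=100):
    # Explicit scheme for dL/ds = div(c(|grad L|) grad L). The
    # diffusivity c is recomputed every iteration: the feedback that a
    # single Gaussian convolution cannot reproduce.
    L = L.astype(float).copy()
    for _ in range(n_iter):
        # One-sided differences to the four neighbours (no-flux border).
        dN = np.roll(L, 1, 0) - L;  dN[0, :] = 0
        dS = np.roll(L, -1, 0) - L; dS[-1, :] = 0
        dW = np.roll(L, 1, 1) - L;  dW[:, 0] = 0
        dE = np.roll(L, -1, 1) - L; dE[:, -1] = 0
        c = lambda d: 1.0 / (1.0 + (d / lam) ** 2)  # rational diffusivity
        L += ds * (c(dN) * dN + c(dS) * dS + c(dW) * dW + c(dE) * dE)
    return L
```

Small intensity variations (well below λ) are smoothed away while contrasts well above λ diffuse very little, so edges survive many iterations.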

**2.2.4 Nonlinear Anisotropic Scale-Spaces **

An additional type of scale-space to consider is the nonlinear anisotropic scale-space. Apart from the axioms relaxed in the case of nonlinear isotropic scale-space, the axiom of isotropy is also excluded. This makes the process both inhomogeneous and *anisotropic*: not only is it possible to decide the scale (i.e. amount of smoothing) at each image position (inhomogeneity), but it is also viable to define a preferred direction of the smoothing (anisotropy). In the case of nonlinear isotropic scale-spaces only the amount of smoothing is decided. In the previous section it was described how the scale could be steered towards zero near edges, meaning that no smoothing was performed there. This means that noise at edges is not suppressed. To smooth the edges, a preferred method would be to control the smoothing process to smooth more along edges than perpendicular to them. The framework of a nonlinear anisotropic scale-space makes this possible by defining the preferred smoothing direction to be along edges.

The implementation of a nonlinear anisotropic scale-space is preferably achieved by approximating the diffusion process, as was the case for nonlinear isotropic scale-space methods. To introduce the anisotropy in the diffusion equation, the diffusion process must be controlled by a tensor function instead of a scalar function, as was the case for nonlinear isotropic scale-spaces. The equation of tensor dependent diffusion is as follows:

∂_s *L* = ∇ ⋅ (*D* ∇*L*),

where *D* is a tensor function dependent on spatial position and recalculated at each iteration.

*D* is at each position a positive semidefinite symmetric tensor, which in the case of 2D images has the size 2x2 and is of the form [26]:

*D* = [ *a* *b* ; *b* *c* ]

Written in its spatial components, the partial differential equation is [26]:

∂_s *L* = ∂_x (*a* ∂_x *L* + *b* ∂_y *L*) + ∂_y (*b* ∂_x *L* + *c* ∂_y *L*)

As previously mentioned, it is preferred for a diffusion process to smooth more along edges than across them. For this the diffusion tensor must be adapted to the local image structure. A local coordinate frame is defined with one axis, *v*, in line with the isophote (i.e. line of constant intensity) and the other axis, *w*, orthogonal to the first and thus aligned with the gradient. This is referred to as Gauge coordinates [21]. In the case of fingerprint images the *v*-axis could be considered parallel to ridges and furrows, while the *w*-axis is perpendicular to them. When the local coordinate frame has been defined, it is only a matter of defining the conductivity coefficients in each direction, *c_v* and *c_w*. With these definitions the diffusion matrix can be defined as:

*D* = *R*^T [ *c_w* 0 ; 0 *c_v* ] *R*

, where *R* is the rotation matrix with column vectors normalised to length 1. If α is the angle between the Gauge axes and the image axes (i.e. *x* and *y*), then *R* can be calculated by:

*R* = [ cos(α) sin(α) ; −sin(α) cos(α) ]
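Assembling *D* from the angle α and the two conductivity coefficients can be sketched directly from these definitions (the function name is illustrative):

```python
import numpy as np

def diffusion_tensor(alpha, c_v, c_w):
    # D = R^T diag(c_w, c_v) R, with alpha the angle between the Gauge
    # axes (w along the gradient, v along the isophote) and the image
    # axes. The result is symmetric with eigenvalues c_w and c_v.
    R = np.array([[np.cos(alpha),  np.sin(alpha)],
                  [-np.sin(alpha), np.cos(alpha)]])
    return R.T @ np.diag([c_w, c_v]) @ R
```

For any α, equal coefficients give *D* = *c*·I, i.e. the process degenerates to isotropic diffusion, as stated below.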

When *c_v* and *c_w* are equal the diffusion process is isotropic. Interpreting the diffusion process from the diffusion tensor means that the image will be smoothed by the strength of the eigenvalues, in the direction of the corresponding eigenvectors [28]. For the diffusion tensor defined above, calculating the eigenvalues will result in *c_v* and *c_w*, and calculating the eigenvectors will give the column vectors in *R* [26, 19].

A straightforward way to calculate the diffusion tensor, and an intuitive development considering the scalar driven diffusion case, is to calculate the rotation matrix from the gradient direction and to control the amount of diffusion by the gradient magnitude. Such a method, referred to as edge enhancing diffusion, is mentioned in [29, 28] and described in more detail in [26].

Another implementation of tensor driven diffusion is the so called coherence-enhancing diffusion proposed by Weickert in [28]. This method is well suited for images with strong coherent structures (as is the case for fingerprint images), since in addition to smoothing noise at edges it also enhances coherent structures. The coherence is a measure of the strength of the local orientation. If the orientations of a structure within a local area are parallel the coherence becomes large, and for structures with orientations equally distributed over all directions (i.e. isotropic structures) the coherence tends to zero. Due to the flow-like structures (i.e. ridges and furrows) in fingerprint images the coherence, measured at ridge scale, is large. One of the advantages of coherence-enhancing diffusion is that it uses orientation, instead of direction, to calculate the diffusion tensor, as is the case when using the gradient. Directions with 180 degrees difference share the same orientation; for example, in a fingerprint the direction of a ridge has the opposite sign to the direction of the adjacent furrow, but they share the same orientation. The operator used by Weickert [28], as well as Almansa and Lindeberg [19], to analyse coherent flow-like structure and calculate the local orientation is the *structure tensor* (also referred to as the *second-moment matrix* or the *interest operator*):

*J*_0(∇_t *L*) = ∇_t *L* (∇_t *L*)^T = [ *L*_tx² *L*_tx *L*_ty ; *L*_tx *L*_ty *L*_ty² ],

where ∇_t *L* is the image gradient at scale *t*, referred to as the observation scale. The structure tensor is positive semi-definite, and its eigenvectors are parallel, respectively orthogonal, to the gradient direction [28]. The coherent structures in a fingerprint image are often larger than scale *t*; that is, ridges and furrows tend to stay parallel for longer than their width. Since we are considering orientations, as opposed to directions, it is possible to scale-space filter the structure tensor to a scale which better represents the size of the coherent image structures. In other words, the structure tensor is smoothed by a Gaussian *g*_ρ:

*J*_ρ(∇_t *L*) = *g*_ρ ∗ (∇_t *L* (∇_t *L*)^T) = [ *j*_11 *j*_12 ; *j*_12 *j*_22 ]

The scale of the Gaussian, ρ, is referred to as the integration scale and it should reflect the characteristic size of the typical image structures [28]. The Gaussian smoothing of the structure tensor integrates the orientation locally, meaning that local orientations become more parallel, which is the same as enhancing the coherence of the structures.
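The componentwise smoothing of the gradient outer product can be sketched as follows. As a simplification, central differences stand in for Gaussian derivatives at the observation scale, and the helper names are illustrative:

```python
import numpy as np

def _gauss_blur(img, sigma):
    # Separable Gaussian smoothing (zero-padded borders); sketch only.
    r = max(1, int(np.ceil(3 * sigma)))
    x = np.arange(-r, r + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2)); k /= k.sum()
    f = lambda m: np.convolve(m, k, mode='same')
    return np.apply_along_axis(f, 1, np.apply_along_axis(f, 0, img))

def structure_tensor(L, rho):
    # J_rho(grad L): the outer product of the gradient, smoothed
    # componentwise at the integration scale rho.
    Ly, Lx = np.gradient(L.astype(float))   # axis 0 is y, axis 1 is x
    j11 = _gauss_blur(Lx * Lx, rho)
    j12 = _gauss_blur(Lx * Ly, rho)
    j22 = _gauss_blur(Ly * Ly, rho)
    return j11, j12, j22
```

Because the products, not the gradients themselves, are smoothed, gradients of opposite sign along the same ridge reinforce each other rather than cancelling; this is how orientation survives the averaging that direction would not.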


Figure 2-21 shows a fingerprint image and the structure tensor orientation calculated at different observation and integration scales.

**Figure 2-21: (I) original image, (II) structure tensor orientation, t = 0.5, ρ = 1, (III) structure tensor orientation, t = 0.5, ρ = 16 and (IV) structure tensor orientation, t = 16, ρ = 0.**

It is important that the image gradient is calculated at a fairly detailed scale, to get an accurate estimation of the structure orientation, before the smoothing is carried out. Image IV in Figure 2-21 shows a case where this has not been considered. Here the observation scale, *t*, has been set to 16 and no smoothing has been performed afterwards (i.e. the integration scale ρ is 0). It is apparent that a derivative operator not accurately adjusted to the image structure will result in poor orientation estimation. In the image referred to, the scale of the derivative operators has been chosen far too large, resulting in no detection of the finer ridge structure within the fingerprint. Instead it only gives significant response from the edges separating the fingerprint and the background. Images II and III in Figure 2-21 show how the coherence of the structure tensor is enhanced for larger values of the scale ρ.

Weickert [28] defines a coherence measure, κ, as

κ = (µ_1 − µ_2)²,

where µ_1 and µ_2 are the eigenvalues of the structure tensor, calculated by

µ_{1,2} = ½ ( *j*_11 + *j*_22 ± √( (*j*_11 − *j*_22)² + 4 *j*_12² ) ).

The coherence measure becomes large for coherent structures and tends to zero for isotropic structures.
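The closed-form eigenvalues and the coherence measure translate directly into code (function names are illustrative):

```python
import math

def structure_tensor_eigs(j11, j12, j22):
    # Closed-form eigenvalues of the 2x2 structure tensor;
    # mu1 >= mu2 by construction.
    s = j11 + j22
    d = math.sqrt((j11 - j22) ** 2 + 4 * j12 ** 2)
    return 0.5 * (s + d), 0.5 * (s - d)

def coherence(j11, j12, j22):
    # Weickert's coherence measure: (mu1 - mu2)^2. Large for flow-like
    # structure, zero for isotropic structure (equal eigenvalues).
    mu1, mu2 = structure_tensor_eigs(j11, j12, j22)
    return (mu1 - mu2) ** 2
```

For a perfectly one-dimensional structure one eigenvalue vanishes, so κ reduces to the square of the remaining eigenvalue.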

The structure tensor's aptness to analyse coherent flow-like structure and calculate local orientation makes it suitable for defining the orientation of the diffusion tensor, which is why the diffusion tensor is constructed to have the same eigenvectors as the structure tensor. The remaining question is how to define the conductivity coefficients, that is, the eigenvalues of the diffusion tensor. The thesis continues to follow the approach described by Weickert [28], who proposed the following calculation of the eigenvalues:

λ_1 = α

λ_2 = α, if κ = 0; α + (1 − α) exp(−*C*/κ), for κ > 0,

where α ∈ (0, 1), *C* > 0 and κ is the coherence measure described above. *C* works as a threshold parameter and should be adapted to the expected value range of κ. For κ >> *C* we get λ_2 ≈ 1, and κ << *C* will result in λ_2 ≈ α. The parameter α defines the lowest level of smoothing, as well as the largest possible difference between diffusion along isophotes and perpendicular to them (i.e. 1 − α). For a κ close or equal to zero, which indicates isotropic structure, both eigenvalues get the value α, resulting in an isotropic diffusion process.

With the rotation matrix, *R*, defined by the eigenvectors of the structure tensor and the conductivity coefficients (or eigenvalues) defined as outlined above, the diffusion tensor *D* is calculated by [26]:

*D* = [ *a* *b* ; *b* *c* ], where

*a* = ½ ( λ_1 + λ_2 + (λ_1 − λ_2)(*j*_11 − *j*_22) / √((*j*_11 − *j*_22)² + 4 *j*_12²) )

*b* = (λ_1 − λ_2) *j*_12 / √((*j*_11 − *j*_22)² + 4 *j*_12²)

*c* = ½ ( λ_1 + λ_2 − (λ_1 − λ_2)(*j*_11 − *j*_22) / √((*j*_11 − *j*_22)² + 4 *j*_12²) )
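A sketch of this assembly of *D*, following the closed-form construction cited from [26] (the function name is illustrative, and the isotropic fallback when the structure tensor has equal eigenvalues is an added safeguard against division by zero):

```python
import math

def diffusion_tensor_components(j11, j12, j22, lam1, lam2):
    # a, b, c of D = [[a, b], [b, c]] sharing the structure tensor's
    # eigenvectors, with prescribed eigenvalues lam1 (across the flow,
    # paired with the larger structure-tensor eigenvalue) and lam2
    # (along the flow).
    d = math.sqrt((j11 - j22) ** 2 + 4 * j12 ** 2)
    if d == 0.0:
        # Isotropic structure: no preferred direction, D = lam1 * I.
        return lam1, 0.0, lam1
    a = 0.5 * (lam1 + lam2 + (lam1 - lam2) * (j11 - j22) / d)
    b = (lam1 - lam2) * j12 / d
    c = 0.5 * (lam1 + lam2 - (lam1 - lam2) * (j11 - j22) / d)
    return a, b, c
```

For a purely horizontal gradient (j12 = 0, j11 > j22) this reduces to a diagonal *D* with lam1 across the ridge flow and lam2 along it.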

A discrete approximation of the tensor dependent diffusion is described in [26]. Writing *L*_s for the image at scale *s*, with *x* and *y* the spatial position, *ds* the scale-step and *a*, *b* and *c* the components of the diffusion tensor, the scheme updates each pixel from its 3x3 neighbourhood:

*L*_{s+ds}(x, y) = *L*_s(x, y)
+ (ds/2) [ (a(x−1, y) + a(x, y)) (L_s(x−1, y) − L_s(x, y)) + (a(x+1, y) + a(x, y)) (L_s(x+1, y) − L_s(x, y)) + (c(x, y−1) + c(x, y)) (L_s(x, y−1) − L_s(x, y)) + (c(x, y+1) + c(x, y)) (L_s(x, y+1) − L_s(x, y)) ]
+ (ds/4) [ b(x+1, y) (L_s(x+1, y+1) − L_s(x+1, y−1)) − b(x−1, y) (L_s(x−1, y+1) − L_s(x−1, y−1)) + b(x, y+1) (L_s(x+1, y+1) − L_s(x−1, y+1)) − b(x, y−1) (L_s(x+1, y−1) − L_s(x−1, y−1)) ]

This discrete implementation has been used to calculate the images in Figure 2-22. The top row shows the structure tensor orientation calculated at the first iteration for each integration scale. The tensor diffused images illustrate how the integration scale, ρ, and the diffusivity constant, *C*, affect the diffusion process.
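One explicit iteration of the tensor driven diffusion can be sketched with a standard central-difference stencil for div(D grad L). This is a sketch under the assumption of replicated (no-flux) borders, not necessarily the exact scheme of [26]; the function name is illustrative:

```python
import numpy as np

def tensor_diffusion_step(L, a, b, c, ds):
    # One explicit step of dL/ds = div(D grad L), D = [[a, b], [b, c]]
    # given per pixel. Borders are handled by edge replication.
    L = L.astype(float)
    p = np.pad(L, 1, mode='edge')
    pa = np.pad(a, 1, mode='edge')
    pb = np.pad(b, 1, mode='edge')
    pc = np.pad(c, 1, mode='edge')
    C = p[1:-1, 1:-1]
    # d/dx(a dL/dx) and d/dy(c dL/dy) with averaged conductivities.
    fx = 0.5 * ((pa[1:-1, 2:] + a) * (p[1:-1, 2:] - C)
                - (pa[1:-1, :-2] + a) * (C - p[1:-1, :-2]))
    fy = 0.5 * ((pc[2:, 1:-1] + c) * (p[2:, 1:-1] - C)
                - (pc[:-2, 1:-1] + c) * (C - p[:-2, 1:-1]))
    # Mixed terms d/dx(b dL/dy) + d/dy(b dL/dx) via diagonal neighbours.
    fxy = 0.25 * (pb[1:-1, 2:] * (p[2:, 2:] - p[:-2, 2:])
                  - pb[1:-1, :-2] * (p[2:, :-2] - p[:-2, :-2])
                  + pb[2:, 1:-1] * (p[2:, 2:] - p[2:, :-2])
                  - pb[:-2, 1:-1] * (p[:-2, 2:] - p[:-2, :-2]))
    return L + ds * (fx + fy + fxy)
```

With b = 0 and a = c the step degenerates to isotropic diffusion; with c = 0 the smoothing acts only along the x-axis, which is the anisotropy the diffusion tensor is meant to provide.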


*(Figure panels: columns show ρ = 1, 4, 16 and 64.)*

**Figure 2-22: Nonlinear anisotropic scale-spaces (tensor driven diffusion) of leftmost image in Figure 2-21. Images have been calculated with the observation scale, t, set to 0.0625 and evolved through 50 iterations using a scale-step, ds, of 0.2.**

It has been explained that the integration scale, ρ, enhances the coherence of the structure tensor. Since the structure tensor is used to define the eigenvectors of the diffusion tensor (i.e. the preferred smoothing directions), the integration scale will have the effect of enhancing coherence for the diffusion process as well as for the diffused image. This effect is apparent when comparing images calculated using integration scales 1 and 64 in Figure 2-22 above. Too large an integration scale may destroy features with initially low coherence, such as ridge bifurcations, ridge endings and in the worst case singularity points. The diffusivity constant, *C*, affects the diffusion through its relation to the coherence, κ. The smaller the value of *C*, the lower the coherence needed for strong anisotropic diffusion, which is illustrated in the images in Figure 2-22.

Figure 2-23 illustrates how a detail of a fingerprint is affected by nonlinear anisotropic scale-space smoothing. The fingerprint feature is the same as was previously shown in Figure 2-12 and Figure 2-20. The top left image is the original signal, and the rest of the images have been calculated at 5, 25 and 100 iterations respectively.


**Figure 2-23: Detail of fingerprint; (left-to-right, top-to-bottom) original and nonlinear anisotropic scale-space smoothed at 5, 25 and 100 iterations.**

This section has described the theory of nonlinear anisotropic scale-spaces implemented as tensor driven diffusion, introduced the different parameters of the diffusion process and explained how they affect the outcome. The scale-space methods covered in previous sections could also be implemented by convolution with Gaussian kernels, and such a method exists for nonlinear anisotropic scale-spaces as well. The anisotropic smoothing is achieved by creating the Gaussian kernel from the diffusion tensor, making it anisotropic. This is often referred to as affine Gaussian scale-space. Using Gaussian convolution to implement anisotropic smoothing excludes the recalculation of the diffusion tensor during the smoothing, i.e. the feedback of the process is lost, hence the two implementation methods will not render the same result. For a more detailed description of affine Gaussian scale-space the reader is referred to [19] and Chapter 1.2.6 in [36].


This chapter will look at previously proposed fingerprint image enhancement methods within the scale-space framework, and highlight the similarities and differences between them and the schemes evaluated within this thesis.

Of the three different methods considered in this thesis, only the nonlinear anisotropic scale-space (or tensor dependent diffusion) has, to the best of our knowledge, been previously tested as a means of fingerprint image enhancement. It is fairly common that fingerprint images are used as examples of anisotropic diffusion [28, 36], since their flow-like structure is suitable for demonstrating the impressive visible results of the method. However, these papers do not evaluate the diffusion process as a fingerprint image enhancer. Four publications were found where anisotropic scale-space is implemented solely to enhance fingerprint image quality. Following is a summary of these four different approaches and their connection to this thesis work.

The first paper is the master's thesis "Fingerprint Enhancement by Shape-Adaption of Scale-Space Operators with Automatic Scale Selection", written by Andrés Almansa and supervised by Tony Lindeberg [19]. Almansa proposes an anisotropic scale-space smoothing that mainly utilises the second-moment matrix (or structure tensor) to analyse the image and control the diffusion process. Although the implementation deviates in a few ways from the coherence-enhancing diffusion tested in this thesis, the most noteworthy difference is that Almansa performs a scale-selection which is also involved in controlling the diffusion process. Almansa evaluates the method by extracting minutiae from the enhanced fingerprint images and comparing them to true minutiae marked by an expert. This result is then compared to several other fingerprint image enhancement approaches. The method renders significantly improved results.

Another paper that proposes an anisotropic filter for enhancement of fingerprint images is "Fingerprint Image Enhancement using Filtering Techniques" by Greenberg, Aladjem and Kogan [32]. They utilise a Gaussian-shaped filter kernel whose anisotropy ratio is controlled by two space constants, which can be compared to the standard deviation in each axis direction of a Gaussian kernel. The values of the space constants are set empirically. Greenberg et al. also evaluate their method by implementing a minutiae extraction algorithm and comparing their results to previously proposed approaches. Their anisotropic filter method is shown to outperform the algorithms to which it is compared.

The third paper, "Feature Extraction Using a Chaincoded Contour Representation of Fingerprint Images" by Govindaraju, Shi and Schneider [33], utilises the anisotropic filter proposed by Greenberg et al. [32]. Although the focus of their paper is the extraction of minutiae using chaincode, the anisotropic filter is used to enhance the images before their minutiae extraction algorithm. Overall the method shows substantial improvements in results. There are, however, no estimates of the actual effect the anisotropic filtering has on those results.

The fourth, and final, paper is "Fingerprint Enhancement Using Oriented Diffusion Filter" by Cheng, Tian, Chen and Ren. They initially analyse the image structure with the structure tensor, the same approach as proposed in [19] and [28], and as utilised in this thesis. The novel part of Cheng et al.'s paper is the computational efficiency of the diffusion process, not the diffusion filtering approach in itself. They evaluate the method in a way similar to that of Greenberg et al. [32]. Again the anisotropic filtering technique proves to give improved results.

The nonlinear anisotropic diffusion method that is evaluated within this thesis has previously been implemented in situations similar to those presented in this chapter. The largest difference between this thesis and the former approaches, and the motivation to include the method, is how it is evaluated. This thesis does not rely on a feature extraction algorithm, but instead involves a more detailed analysis of the effect the methods have on local fingerprint features. The investigation of the results is furthermore to a large extent focused on fingerprints affected by the template ageing problem. Another reason to include the nonlinear anisotropic diffusion is that it is tested on the same data set as the other methods considered within this thesis, making comparison between the results of the different methods possible.


**4 Datasets and Evaluation Framework **

This chapter describes the practical thesis work including definition of general testing framework and measures, implementation of scale-space methods and evaluation of the methods and their results.

The first few parts describe and define the general preparatory processes and measurements involved in one or more of the scale-space methods implemented. These include selection of test data, image pre-processing, definition of evaluation measures, description of the edge detection method and calculation of initial evaluation measures. These introductory parts end with a section describing the general framework defined and utilised for testing and evaluating all the scale-space methods investigated in this thesis.

This is followed by information pertaining to implementation, testing and evaluation of the different scale-space methods considered. Included in these sections are motivation, method specific implementation details, a summary of overall results and more detailed samples of representative results. Three different scale-space methods are implemented: linear scale-space, nonlinear isotropic diffusion and nonlinear anisotropic diffusion.

*4.1 Selection of Test Data*

The fingerprint database used for testing has been made available by the company Fingerprint Cards AB, and the images have been acquired using their "livescan" fingerprint scanner FPC1010. Throughout the thesis fingerprint images show ridges as dark and furrows as bright pixels.

Since the measurement of a "livescan" fingerprint scanner is done by direct contact with the finger it is possible to define its inner and outer scale. The characteristics of the FPC1010 are presented in Table 4-1, which shows an inner scale of 70x70 µm and an outer scale of 10.64x14.00 mm.

| Pixel cell size | 70x70 µm |
| Number of pixels | 152x200 pixels |
| Active sensing area | 10.64x14.00 mm |
| Pixel resolution | 8 bits |

**Table 4-1: Characteristics of "livescan" fingerprint scanner FPC1010 [38].**

The evaluation of scale-space methods is intended to be qualitative rather than quantitative, which is why the selection of images used for testing has been limited both in number and size. The main aim is to see how scale-space methods affect fingerprint images at detail level, which is why the selected test images are small, 53x53 pixel sub-areas of a fingerprint (thus reducing the outer scale to 3.71x3.71 mm). A maximum of two sub-areas from the same fingerprint have been selected. A total of 44 unique sub-areas are used, of which 34 have been acquired at five different occasions, while the remaining 10 have been acquired at three different occasions. This gives a total of 200 images, depicting 44 unique sub-areas of fingerprints. All three or five images representing the same sub-area will be referred to as a sub-area group. The sub-area images have been selected to represent different probable occurrences of template ageing problems. The defined selections of images will be used for all testing, to make comparison possible between results from the different scale-space methods tested.


All fingerprint images provided to the author by Fingerprint Cards AB had been manually matched (aligned) so that no translation or rotation appears between images of the same fingerprint (i.e. a feature in one image appears at the same position in all other images of the same fingerprint). This effectively excludes the problem of misregistration between two fingerprint images, a problem which is outside the scope of this thesis.

The fingerprint images have all been acquired over a period of 27 weeks with a minimum of one week between acquisitions. During testing and evaluation no consideration will be given to the actual time difference between the acquisitions of two fingerprints. Instead the focus lies on the amount of change or dissimilarity between the images in a sub-area group. The level of dissimilarity varies considerably between different sub-area groups; for one sub-area group the images may be very similar to each other (Figure 4-1), while another includes essential dissimilarities (Figure 4-2). Other sub-group types are included for reference and for conclusions to be as general as possible.

**Figure 4-1: Sub-area group with images very similar to each other. **

**Figure 4-2: Sub-area group including images with essential dissimilarities. **

In each sub-area group the centre area of size 12x12 pixels is selected as a distinct feature, hereafter referred to as the feature segment or feature tile. The feature tiles have been selected mainly depending on how they are affected by the template ageing problem. Since the images are aligned, the feature is at the same position (a small amount of deviation is allowed) in all images of a sub-area group. The criterion for a distinct feature is that it is unique compared to the rest of the sub-area image. A distinct feature is typically, but not exclusively, a minutia. No judgements have been made as to whether a feature tile is distinct enough for identification purposes.

For testing to be worthwhile, the performance of the methods and their results must be evaluated, hence it is necessary to define measures for evaluation. An evaluation measure should accurately represent the quality of the property it is to assess. This section defines, and motivates the use of, the different evaluation measures applicable to this thesis.


**4.2.1 Modified Normalised Correlation Coefficient **

The aim of this thesis is to enhance fingerprint images in the sense that images depicting the same finger become more alike. Consequently some kind of similarity/matching measure is apt to use. This gives a direct measure of the quality of the enhancement without depending on a minutiae extraction algorithm or the like.

Selecting a matching measure involves deciding on matching criteria. Since two images depicting the same object will probably look slightly different, it is important to define which kinds of dissimilarities are allowed while still considering two image segments a match. The intensity matching measure chosen for this thesis is the normalised correlation coefficient (NCC) [31].

*I*_1 and *I*_2 are the windows (or tiles) selected for correlation, and *Ī*_1 and *Ī*_2 represent the corresponding window means. Correlation of a window across an image is calculated by, for each pixel in the image, extracting a window of the same size as the correlation window and calculating the NCC.

*NCC* = Σ_m Σ_n (*I*_1(m, n) − *Ī*_1)(*I*_2(m, n) − *Ī*_2) / √( Σ_m Σ_n (*I*_1(m, n) − *Ī*_1)² · Σ_m Σ_n (*I*_2(m, n) − *Ī*_2)² )

Although some other distance metrics were considered, the normalised correlation coefficient was ultimately selected since it is invariant to linear brightness and contrast variations [31]. Such dissimilarities are likely to stem from causes other than differing fingerprint patterns.

The NCC value ranges between –1 and 1, where a negative result indicates similarity to the inverted pattern. Applications that only consider intensity differences in a pattern, and thus regard an inversion of the original image as a perfect match, typically use the absolute value of the NCC, which gives values between 0 (mismatch) and 1 (perfect match). However, in fingerprint matching the actual intensity values are important, since this is what distinguishes ridges from furrows. A ridge ending is always surrounded by a furrow bifurcation, and a ridge bifurcation surrounds a furrow ending. Allowing inverse patterns would in other words result in a high correlation score between a ridge ending and a ridge bifurcation of the same direction, which is a very misleading result. Therefore the original NCC value is used; however, to get a more intuitive result the NCC measure is mapped to lie between 0 and 1 instead of the initial range from –1 to 1. This measure will be referred to as the modified normalised correlation coefficient (MNCC).

*MNCC* = (*NCC* + 1) / 2
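The two measures translate directly into code for a single pair of tiles (function names are illustrative):

```python
import numpy as np

def ncc(I1, I2):
    # Normalised correlation coefficient of two equally sized tiles:
    # invariant to linear brightness and contrast changes.
    d1 = I1 - I1.mean()
    d2 = I2 - I2.mean()
    return (d1 * d2).sum() / np.sqrt((d1**2).sum() * (d2**2).sum())

def mncc(I1, I2):
    # Map NCC from [-1, 1] to [0, 1], so an inverted pattern
    # (NCC = -1) scores 0 rather than counting as a match.
    return (ncc(I1, I2) + 1.0) / 2.0
```

Correlating a feature tile across a whole image then amounts to evaluating `mncc` at every window position.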

Correlating a feature segment over each position in a target image creates a correlation image, which at each pixel contains the correlation result computed from the corresponding position in the target image.

37

Figure 4-3 shows the original image, an extracted tile, the image it is correlated over and the computed MNCC image. The correlation image contains values between zero (black) and one (white); thus black means a total mismatch while white represents a perfect match.

**Figure 4-3: (I) original image (II) extracted feature tile, (III) target image, (IV) MNCC image. **

Of interest to this thesis is the value in the MNCC image at the position where the feature was extracted from the original image. Even though the images have been aligned, some misregistration is likely to exist, hence the resulting correlation value is taken to be the maximum value of a 5x5 pixel area centred at the feature position in the MNCC image. This area is referred to as the ground truth region and the value is referred to as the feature tile modified normalised correlation coefficient, *MNCC_feat*.

**4.2.2 Uniqueness Measure **

Since the scale-space methods suppress details, a smoothed image will at coarser scales lose intricate information and the image pattern will be flattened. This especially holds for linear scale-space methods, whereas scalar diffusion may be an exception since it actually enhances edges. It is thus reasonable to assume that a selected feature tile will become more and more similar to the whole sub-area image it is correlated to when advancing up the scale-space ladder, given that both images are scale-space filtered. Consequently similarity by itself is not sufficient as a measure for evaluating the enhancement algorithms; some form of relative similarity is more informative. For this purpose the uniqueness measure (*UM*) is introduced. It is defined as the difference between the *MNCC_feat* and the highest correlation measure throughout the rest of the sub-area image. This means that a positive uniqueness measure is only achieved when the *MNCC_feat* is the highest correlation value in the sub-area image, meaning that the feature tile is correctly matched.

Regarding the implementation process what is interesting is the positions where the highest correlation values are found, or more accurately, whether they are within the 5x5 centre area or not. However, a correlation value is often related to the nearby correlation values, which means that a high correlation value will have several equally, or nearly as, high correlation values adjacent. Thus the only interesting correlation values are the maxima peaks. A maxima filtering process has been implemented to extract the maxima peaks across the MNCC image.

Such a method is often referred to as non-maximum suppression. Each pixel that has the highest value within a centred 5x5 pixel area keeps its value, while every other pixel is set to zero. Note that this does not correspond to finding all maxima in a mathematical sense, since if two maxima are close enough to each other the lower of them will be ignored.
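A minimal sketch of this non-maximum suppression, assuming the MNCC image is a 2-D array; the `maximum_filter`-based approach is one possible implementation, not necessarily the one used in the thesis:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def suppress_non_maxima(mncc, size=5):
    # A pixel survives only if it holds the highest value within the
    # centred size x size neighbourhood; all other pixels are set to zero.
    local_max = maximum_filter(mncc, size=size, mode='constant', cval=-np.inf)
    return np.where(mncc == local_max, mncc, 0.0)
```

As in the text, of two maxima closer together than the window size only the higher one survives.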


The result of the maxima filtering process is shown in Figure 4-4: the left image is the MNCC image, the centre is the maxima extracted image, and the right is an image where the maxima peaks have been added to a dimmed version of the MNCC image.

**Figure 4-4: (I) MNCC image, (II) extracted maxima, (III) maxima peaks shown on the dimmed MNCC image. **

Although the most important factor is whether the maximal value of an MNCC image appears at the actual position of the feature tile (that is, whether the uniqueness value is positive), it is also interesting to look at the actual value of the uniqueness measure. The higher the uniqueness measure, the less questionable the selection of the feature tile position is as a correct match. As previously stated, it is assumed that as the image pattern flattens at coarser scales the correlation value across the whole sub-area image is likely to increase, which in turn implies that the uniqueness measure will decrease at coarser scales.

The uniqueness measure, as previously explained, depends on two values: the *MNCC_feat* and the highest correlation measure throughout the rest of the sub-area fingerprint image. In other words, how high a correlation value is obtained at the correct position (that is, how similar the feature is in the two images), and the highest correlation value throughout the rest of the image (that is, how unique the feature is compared to the rest of the sub-area image). The uniqueness measure computed at different scales depends on the initial uniqueness of the feature; however, no initial uniqueness measure has yet been defined. To gain a more general uniqueness measure, which may be compared to uniqueness measures computed from another correlation pair, a relative uniqueness measure (*UM_rel*) is defined.

*UM_rel* = *UM* / *UM_init*

Where *UM* is the uniqueness measure and *UM_init* is the initial uniqueness measure. The initial uniqueness measure is only computed for the normalised original images and is calculated as follows: the feature tile is extracted from the sub-area fingerprint image and is then correlated over the same image, resulting in an MNCC image. Since the feature tile originates from the same image that it is correlated over, the highest *MNCC_feat* will be 1; hence *UM_init* will be one minus the highest correlation measure throughout the rest of the fingerprint image. By defining the initial uniqueness measure it is possible to calculate the relative uniqueness measure and thereby obtain a measure useful for comparison between results from different correlation pairs.
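Under the same assumptions as before (a precomputed autocorrelation MNCC image and a known feature position), *UM_init* and *UM_rel* could be sketched as; names are illustrative:

```python
import numpy as np

def um_init(auto_mncc, row, col, half=2):
    # The feature tile correlated over its own source image gives
    # MNCC_feat = 1, so UM_init is one minus the highest correlation
    # outside the 5x5 ground truth region (the region is assumed to lie
    # fully inside the image here, for brevity).
    rest = auto_mncc.copy()
    rest[row - half:row + half + 1, col - half:col + half + 1] = -np.inf
    return 1.0 - rest.max()

def um_rel(um, um_initial):
    # Relative uniqueness measure: UM_rel = UM / UM_init.
    return um / um_initial
```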

Normalisation of an image can mean quite different things depending on which image feature is to be normalised. In this thesis, image normalisation refers to the image intensity values, and a normalisation results in an 8-bit grey-scale image with pixel values between 0 and 1.

The acquisition of a fingerprint is a mapping of the three-dimensional ridge and valley pattern to a two-dimensional signal. The signal is usually visualised as a grey-scale image where ridges are shown as dark pixels and valleys as bright. Apart from the ridge/valley pattern, the actual pixel intensity values depend on skin wetness, the pressure of the finger on the sensor and the properties of the measurement device. Consequently two acquired images of the same finger might appear quite different (see Figure 4-5); hence some kind of grey-scale transformation is needed. Considering the mentioned problem, a sufficient solution would seem to be a linear histogram stretch to make the images use the full grey-scale range.

However, varying pressure and inexact measuring by the sensor across the finger may give rise to shifting ranges of pixel intensity across the fingerprint image, meaning that two spatially separated ridges (or furrows) of the same height in the fingerprint may not have the same intensities in the acquired image. This problem is not helped by a global operation like a linear histogram stretch; instead some local operation must be implemented. The aim is to normalise the image so that all ridges at their peaks have value 0 and all furrows at their bottoms have value 1, regardless of their spatial position. Because of the uncertainty in absolute ridge/valley height of a scanned fingerprint, the pixel intensity alone cannot be used for identification purposes, and therefore no vital information is lost by the normalisation procedure.

**Figure 4-5: Two different images of the same fingerprint **

The method for normalisation proposed in this thesis involves a local linear histogram stretch.

However, if such a grey-scale transformation is calculated area-wise, an unwanted block pattern may appear. Instead the computation is done pixel-wise, implemented as follows. For every pixel in the image, the normalised tile (linear histogram stretch) for an MxM area centred at the current pixel is computed, and the newly calculated value of the current pixel is saved in the new image. The procedure is repeated for all pixels in the image. For implementation efficiency, the calculation can be done using only the minimum and maximum pixel intensities in the selected area around the current pixel. The new pixel value is then calculated by the following formula:

*I_norm*(x, y) = ( *I*(x, y) − min(*W_x,y*) ) / ( max(*W_x,y*) − min(*W_x,y*) )

Where *I*(x, y) is the image pixel value at position (x, y), *I_norm* is the normalised image, and min(*W_x,y*) and max(*W_x,y*) are the minimum and maximum pixel values of the window *W* centred at (x, y).

The only parameter to be defined is the size of *W*, over which the grey-value transformation is calculated. It should be large enough to contain a ridge maximum and a furrow minimum, yet small enough for the computation to be considered local. A ridge/valley period (i.e. the width of one ridge and one valley) is typically between 6 and 10 pixels in the test images, which is why a window size of 11x11 has been chosen.
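The pixel-wise local linear histogram stretch described above can be sketched with running min/max filters; the epsilon guarding flat windows is an implementation choice of this sketch, not taken from the thesis:

```python
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

def local_histogram_stretch(image, window=11):
    # For every pixel, stretch with the min and max of the window W
    # centred at that pixel: (I - min(W)) / (max(W) - min(W)).
    img = image.astype(float)
    lo = minimum_filter(img, size=window, mode='mirror')
    hi = maximum_filter(img, size=window, mode='mirror')
    # Guard against division by zero in completely flat windows.
    return (img - lo) / np.maximum(hi - lo, 1e-12)
```

With an 11x11 window this maps local intensity minima (ridges) towards 0 and local maxima (valleys) towards 1, independently of their absolute intensities.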


**Figure 4-6: Normalised images of the two fingerprints from Figure 4-5, using local linear histogram stretch. **

Throughout the rest of the report all initial images used should be presumed normalised, if not stated otherwise.

As described in Chapter 2.2.3, the most commonly used conductivity coefficient, or diffusivity, for nonlinear isotropic diffusion is the gradient magnitude (|∇*L*|). The gradient magnitude works as an edge detector and allows the diffusion process to adapt the amount of smoothing to the edge response. There are however certain issues when using the gradient magnitude as an edge detector, especially at small scales, which motivates the implementation of an alternative method.

The gradient magnitude at scale *t* is calculated by:

|∇*L_t*| = √( (∂*L_t* / ∂*x*)² + (∂*L_t* / ∂*y*)² ) = √( *L_t,x*² + *L_t,y*² )

The value of the gradient magnitude depends on two measures: the first derivatives in the x and y directions. For a horizontal (vertical) edge of maximum strength (i.e. highest possible contrast) the gradient magnitude will equal the maximum response of the first derivative in the y (x) direction, since the response of the derivative parallel to the edge is zero while the derivative perpendicular to the edge gives maximum response. Considering Gaussian first derivative convolution kernels normalised to provide a maximum response of 1, the gradient magnitude for a horizontal or vertical edge will be 1.
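As a sketch, the gradient magnitude at scale t can be computed with Gaussian derivative filters; note that no renormalisation of the kernels to a maximum edge response of 1 (as discussed in the text) is included here, so absolute response levels differ:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_magnitude(image, t):
    # sigma = sqrt(t); first-order Gaussian derivatives along x and y,
    # combined as |grad L| = sqrt(L_x^2 + L_y^2).
    sigma = np.sqrt(t)
    lx = gaussian_filter(image, sigma, order=(0, 1), mode='mirror')
    ly = gaussian_filter(image, sigma, order=(1, 0), mode='mirror')
    return np.hypot(lx, ly)
```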

Figure 4-7 depicts an edge of maximum strength with varying orientation.

**Figure 4-7: Edge of maximum strength with varying orientation. **


For edges with an orientation that is neither perpendicular nor parallel to either of the two axes, both derivatives will have values larger than zero, and the gradient magnitude may take values larger than one. This effect is demonstrated in Figure 4-8, where the gradient magnitude of the image in Figure 4-7 has been calculated at five different scales. The scales have been selected to generate Gaussian derivative kernels of sizes 3x3, 5x5, 7x7, 9x9 and 11x11. The intensity values of the images have been adjusted to represent values between 0 (black) and 1.4 (white) in order to depict the gradient magnitude values.


**Figure 4-8: Top row: Gradient magnitude of Figure 4-7 at scales; (I) t = 0.0625, (II) t = 0.25, (III) t = 0.56, (IV) t = 1.0, (V) t = 1.56. Bottom row: Plots of maximum pixel value of each image column. **

From the examples in Figure 4-8 it can be concluded that there are three disadvantages to using the gradient magnitude as an edge detector:

• it is difficult to predict the response of the gradient magnitude for a certain type of edge, since it depends on two separate measures (i.e. the horizontal and vertical derivatives),

• the gradient magnitude is not only dependent on the edge strength, but also on the orientation,

• the response of the gradient magnitude is dependent on the scale at which it is calculated.

These problems are most evident at smaller scales, but are still present at rather large scales, as can be seen in Figure 4-8. Ridge structures in fingerprint images are fairly small; therefore an accurate response from an edge detector at detailed scales is essential.

A proposed alternative to the gradient magnitude is the first derivative along the Gauge axis perpendicular to the isophote. This is the measure implemented and used as conductivity coefficient for the scalar driven diffusion in this thesis. Since this method only involves a single measure, it is easy to control the result by normalising the maximum response. To get intuitive values when the Gauge derivative is used as an edge detector, it is normalised to give a maximum response of 1.

The Gauge derivative is implemented by calculating the Gaussian derivative in different directions at each position and selecting the maximum response. Since only the strength of the edge is considered, and not its direction, it is sufficient to calculate the Gaussian derivative over 180 degrees of rotation and take the absolute value of the result. The number of directions for which the Gaussian derivative should be calculated depends on the desired accuracy of the result.
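A sketch of this procedure; instead of rotating kernels explicitly as the text describes, the directional derivative is steered from L_x and L_y (mathematically equivalent for first-order Gaussian derivatives), sampled in 5-degree steps over 180 degrees. No normalisation to a maximum response of 1 is included here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gauge_derivative(image, t, step_deg=5):
    sigma = np.sqrt(t)
    lx = gaussian_filter(image, sigma, order=(0, 1), mode='mirror')
    ly = gaussian_filter(image, sigma, order=(1, 0), mode='mirror')
    best = np.zeros_like(lx, dtype=float)
    for deg in range(0, 180, step_deg):
        a = np.deg2rad(deg)
        # Directional first derivative in direction a (steerability of
        # first-order Gaussian derivatives); keep the strongest |response|.
        np.maximum(best, np.abs(np.cos(a) * lx + np.sin(a) * ly), out=best)
    return best
```

With 5-degree sampling the result is within a factor cos(2.5°) of the true maximum directional response.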


In Figure 4-9 the absolute value of the Gauge derivative of Figure 4-7 has been calculated at the same scales as the gradient magnitudes found in Figure 4-8, with an angular accuracy of 5 degrees. The intensity values of the images in Figure 4-9 have been adjusted to represent values between 0 (black) and 1.4 (white), to allow comparison with the images in Figure 4-8. The actual maximum value in the images is however 1, which is apparent when observing the plots in the bottom row.


**Figure 4-9: Top row: Absolute value of gauge derivative perpendicular to isophotes of Figure 4-7 at scales; (I) t = 0.06, (II) t = 0.25, (III) t = 0.56, (IV) t = 1.0, (V) t = 1.56. Bottom row: Plots of maximum pixel value of each image column. **

Comparing the images in Figure 4-9 to those in Figure 4-8 shows that the Gauge derivative provides a more even edge response. The three disadvantages of the gradient magnitude are no longer apparent when using the Gauge derivative. The drawback of the Gauge derivative is that it is computationally more complex. Computational efficiency is however not the focus of this thesis, hence this aspect is ignored.

*4.5 Initial Evaluation Measures *

To be able to appraise the quality of a fingerprint image enhancement method, it is necessary to compare evaluation measures for enhanced images with evaluation measures for original images. The latter are termed initial evaluation measures, and they are presented in this section.

As acknowledged earlier, the selection of fingerprints used in the tests includes a total of 44 unique sub-areas, of which 34 were acquired at five different occasions, while the remaining 10 were acquired at three different occasions. For each sub-area group, every image is correlated over the remaining images, resulting in 20 correlation pairs for a sub-area group with five images. The number of correlation pairs for the whole data set is 740. The initial evaluation measures are calculated for every correlation pair, using the normalised images.

Included in the initial evaluation measures are *MNCC*, *MNCC_feat*, *UM* and *UM_rel*. Figure 4-10 shows a histogram of the initial *UM_rel* values. Of the 740 correlation pairs, 666 initially have accurately matched feature tiles (i.e. *UM_rel* > 0).



**Figure 4-10: Histogram of UM_rel values for initial fingerprint images. **

Figure 4-11 shows a histogram of the initial *MNCC_feat* values. The bars in the histogram have been divided into two groups: correlation pairs where the feature tile has been correctly matched (dark grey bars) and where it has been mismatched (light grey bars).


**Figure 4-11: Histogram of MNCC_feat values for initial fingerprint images. Light grey bars are correlation pairs with UM_rel ≤ 0 and dark grey bars represent correlation pairs with UM_rel > 0. **

It is evident that there is a greater risk of a mismatch for lower *MNCC_feat* values; however, one interesting aspect is that even *MNCC_feat* values above 0.9 may give false matches. The fact that no single *MNCC_feat* value exists that separates correct and false matches shows the necessity of the uniqueness measures (i.e. *UM* and *UM_rel*) as a complement to the *MNCC_feat* measure.

*4.6 Framework for Testing and Evaluating Scale-Space Methods *

Many of the steps used when implementing, testing and evaluating the different scale-space methods are very similar. It is therefore motivated to define a general framework to be used for all methods tested within this thesis. This will simplify the comparison between results of the different methods, as well as make the descriptions of the methods easier to follow for the reader. This chapter details the different parts of this general framework.

The framework is divided into three areas: preparation, implementation and evaluation. The preparatory steps consist of the definition of evaluation measures, data set selection, normalisation of images and calculation of initial evaluation measures, all of which have previously been described in detail. The implementation part includes the different steps of implementing each scale-space method in practice, for example parameter definitions and parameter boundary specifications. The final part is the evaluation, and it includes a summary of overall results as well as more detailed samples of representative results.

**4.6.1 Preparation Framework **

The preparatory framework includes the steps performed prior to implementation of the scale-space methods; they will not be repeated in the sections describing the implementations.

*I) Definition of Evaluation Measures *

First the measures used to evaluate the resulting effects of the scale-space methods are defined. These measures consist of the modified normalised correlation coefficient (*MNCC*), the feature tile modified normalised correlation coefficient (*MNCC_feat*), the uniqueness measure (*UM*) and the relative uniqueness measure (*UM_rel*). These measures have all previously been described in detail.

*II) Data Set Selection *

All tests are performed on the same data set. The selection of the images included in this data set is described in 4.1 Selection of Test Data.

*III) Image Normalisation *

Image normalisation is explained in 4.3 Image Normalisation.

*IV) Calculation of Initial Evaluation Measures *

Calculation of initial evaluation measures is detailed in 4.5 Initial Evaluation Measures.

**4.6.2 Implementation Framework **

This section includes the parts involved in the implementation of the different scale-space methods. Only the steps shared by all methods are included in the implementation framework; additional method-specific elements are described within the corresponding sections.

*I) Description and Motivation of Method *

Each implemented method is outlined and motivated. Deviations from basic scale-space methods, as described in 2.2 Image Representation at Different Scale, are especially noted.

*II) Definition of Method Parameters and Boundaries *

Parameters which affect the scale-space method are defined. The effect of the different parameters is investigated and boundaries are set to include values which provide meaningful results for fingerprint image enhancement. Parameter boundaries not possible to decide by theoretical means are instead chosen through practical testing and visual assessment. A parameter will be set to a fixed value if no other logical choice exists, or if other reasonable choices of the parameter value render very similar results, hence not affecting the final result of the scale-space smoothing method.


*III) Specification of Parameter Sample Values *

This part specifies sample values (i.e. values to be tested during scale-space smoothing) for all parameters that were given boundaries in the previous step. For each parameter, the effect of the boundary values is investigated. The difference in the effect of the boundaries decides how many values, or samples, should be chosen for that parameter. For example, if the results from the boundary values differ greatly, many sampling points should be included when testing the scale-space method. If the boundary values render similar results, fewer sample values are required. The number of sample points selected and their actual values are specified through practical testing and visual assessment.

Parameter values are sampled more frequently around values that are likely to render more interesting results. It is important to mention that the purpose is to evaluate the effect different parameters have on the results and to try to find patterns which explain the behaviour. Hence, fine-tuning the parameters to provide the highest possible evaluation measures is not the focus of the thesis.

There are three different aspects to consider when selecting parameter sample values:

• to include a sufficient number of values to be able to detect patterns in the effect the parameter has on the result,

• for sample positions to represent interesting values (i.e. closer sampling at interesting values),

• to limit the number of values so that the generated amount of data is comprehensible.

The last aspect contradicts the two previous ones; therefore the actual selection of the number of values must be a balance between these three aspects.

*IV) Scale-Space Smooth Images *

In this step all 200 selected input images are scale-space smoothed by the current method, using every possible combination of the previously defined parameters. This means that each variable parameter adds a new axis to the space of result images. Consider for example the images in Figure 2-19 and Figure 2-22, which in both cases produce a two-dimensional result map, since they depend on two parameters.

*V) Calculate Evaluation Measures *

This step is performed for each combination of parameters. For each sub-area group the first image is selected as template image, and the feature tiles of the remaining images in the sub-area group are correlated over the template image. This results in four (for sub-area groups with five images) or two (for sub-area groups with three images) *MNCC* images. The *MNCC_feat* value is extracted and *UM* and *UM_rel* are calculated, as previously described. In the next step the second image is selected as template image and the procedure is repeated. The evaluation measures are calculated using every image in a sub-area group as template image, and this is repeated for all sub-area groups. Thereafter the images scale-space smoothed with the next combination of parameter values are selected and the whole procedure is repeated from scratch. The process is presented in pseudo-code below for a more comprehensible description of the calculation of the evaluation measures.


**for** each defined value of variable parameter 1, **par_1**
  **…**
  **for** each defined value of variable parameter n, **par_n**
    *Select all 200 scale-space images calculated with the current parameter values (par_1, …, par_n)*
    **for** each sub-area group
      **for** each sub-area image, **templImg = 1 to 5** (or **3**)
        *Select current image (templImg) as template*
        **for** remaining images, **corrImg = 1 to 4** (or **2**)
          *Extract feature tile from current image (corrImg)*
        **end**
      **end**
    **end**
  **end**
  **…**
**end**

**4.6.3 Evaluation Framework **

The evaluation of each method will start by finding the parameter settings which generate the best and worst overall results, estimated by counting the number of accurate matches. The parameter settings with the lowest number of correct matches are considered worst, and vice versa. This is the only time the result for the whole data set is considered. The following evaluation steps consider separate correlation pairs in the context of their sub-area groups.

Every method will generate a large amount of data; it is therefore impossible to analyse each correlation pair of each parameter setting in detail. Two starting points are defined from which the result data is analysed, and the amount of data (i.e. number of correlation pairs and sub-area groups) thoroughly investigated depends on whether a pattern is evident in the results or not. For each correlation pair the difference between the *UM_rel* and the initial *UM_rel* is calculated. The values are then sorted in a list, where positive values indicate a higher *UM_rel* for the image enhancement method. The beginning and the end of this list are the two starting points from which the in-depth analysis is executed. The detailed evaluation focuses on the cases where the scale-space enhanced images have performed best and worst.

One of the main focuses of the evaluation of the different scale-space methods is to examine how they perform for fingerprints affected by the template ageing problem. Template ageing is however a general concept describing deviations in a fingerprint over time, and it is difficult to define a measure for the amount of difference between two instances of a fingerprint. Hence, trying to estimate how much a correlation pair is affected by template ageing is not really viable. The fingerprint images that will be considered template ageing cases in this thesis are the correlation pairs that initially fail correlation. The calculation of the initial measures (Chapter 4.5) shows that there are 666 correlation pairs that are initially accurately matched. Thus the remaining 74 correlation pairs are considered significantly affected by the template ageing problem, and any of these correlation pairs that succeed correlation when scale-space smoothed will be investigated in detail.

Aspects considered when evaluating the effect of the image enhancement methods are the size of the fingerprint feature, the amount of difference between the correlation pair images, the clearness of the fingerprint structure and the similarity/difference compared to results from other correlation pairs of the same sub-area group.

To make it easier to go through the large amount of data generated for each method, a graphical user interface (GUI) was created in Matlab for this purpose (see Figure 4-12). The GUI allows for a quick and easy overview of initial and enhanced fingerprint images, as well as correlation results for a certain template image and specific parameter settings. The plot window was implemented to visualise the effect a specific parameter has on an evaluation measure.

**Figure 4-12: GUI for visualisation of generated result data. **

The evaluation framework described is used as a basis when evaluating the scale-space methods. Deviations might however occur since the evaluation of each method to a certain extent will depend on, and be adapted to, the differences and similarities between anticipated and actual results for that method.


**5.1.1 Description and Motivation of Method **

The implementation of a linear scale-space method involves smoothing with an isotropic Gaussian kernel, which suppresses features smaller than a defined scale. The main idea is that distinct stable features of a fingerprint image are larger than the smallest representation (i.e. a pixel), and that small enough features may be unstable. Hence smoothing detailed features should result in a more stable representation of the fingerprint pattern. Scale-space smoothing an image flattens it, and using too large a scale will suppress important structural information. It is therefore likely that this proposed method works best at smaller scales.

**5.1.2 Definition of Method Parameters and Boundaries **

The linear scale-space method is an uncommitted procedure where no analysis of the image precedes the smoothing. The only parameter to consider is the scale of the Gaussian kernel, referred to as the scale parameter, *t*. The Gaussian kernel suppresses most signal structure of size smaller than the standard deviation, σ [30]. The scale parameter is equal to the square of the standard deviation, *t* = σ². Apart from the size of a feature, its relative amplitude (i.e. contrast) is an additional important characteristic which decides at what scale the attribute is suppressed. Less distinct features (i.e. low-contrast structure) are smoothed away at smaller scales than high-contrast structure.

The boundaries for the scale parameter, *t*, should include values which may render interesting results. The lower boundary is selected very small in order to investigate the effect of smoothing barely noticeable to the naked eye. Scale-space filtering at a very detailed scale flattens small features without affecting the larger structures. The lower boundary has been chosen as 0.1 (see Figure 5-1), which results in a 3x3 Gaussian kernel.

The selection of the upper boundary is more delicate, since larger scales suppress larger features and may flatten the actual ridge structure. By visual assessment it has been decided that fingerprint images smoothed at scales up to *t* = 2 may give improvements in the evaluation measures (see Figure 5-1). An additional scale (of size 4) has been included in the tests to validate the response of *MNCC_feat* and *UM_rel* when distinct fingerprint structures are smoothed.

**5.1.3 Specification of Parameter Sample Values **

The linear scale-space method only depends on one parameter, thus the number of sampling values can be chosen fairly large without generating an incomprehensible amount of data.

Smaller scales are more interesting since it is presumed that detailed information may be more unstable over time than larger structures. By visual estimation it is decided that 6 sample values should be sufficient to accurately evaluate the effect of linear scale-space smoothing.

The sampling points are distributed between the specified boundaries as shown in Table 5-1.

| *t* | 0.1, 0.2, 0.5, 1, 2, 4 |

**Table 5-1: Parameter values for linear scale-space. **


Three different sub-area fingerprint images, smoothed at the scales defined in Table 5-1, are shown in Figure 5-1. The images have been selected to represent structures at different scales.

**Figure 5-1: Three different fingerprint sub-area images, original and scale-space filtered at scales t = {0.1, 0.2, 0.5, 1, 2, 4}. **

**5.1.4 Implementation of Linear Scale-Space **

The linear scale-space method has been implemented by convolution with a discrete Gaussian kernel. The kernel is normalised to give a maximum response of one at arbitrary scale.

The image borders are mirrored when convolving with the scale-space kernel, for the sake of preserving the mean grey-value intensity of the image.
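A minimal sketch of this step; scipy's sampled Gaussian stands in here for the discrete Gaussian kernel used in the thesis, and `mode='mirror'` gives the mirrored borders:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def linear_scale_space(image, t):
    # Scale parameter t = sigma^2, so sigma = sqrt(t). Mirrored borders
    # avoid darkening at the image edges and help preserve the mean
    # grey-value intensity; constant regions are left unchanged.
    return gaussian_filter(image.astype(float), np.sqrt(t), mode='mirror')
```

Smoothing the whole data set then amounts to calling this once per image and per sample value of *t* from Table 5-1.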

**5.1.5 Results **

The overall results for the linear scale-space technique show that the best performance is achieved for scale parameter *t* = 0.1, and that the number of correct matches decreases at coarser scales (see Table 5-2). There is however no single scale that performs better than correlation of the initial images. The percentages of correlation pairs with improved *UM_rel* and *MNCC_feat* presented in Table 5-2 are relative to the initial evaluation measures.

| *t* | Higher *UM_rel* (%) | Higher *MNCC_feat* (%) | Higher *UM_rel* and *MNCC_feat* (%) | Number of correct matches |
|-----|---------------------|------------------------|--------------------------------------|---------------------------|
| 0.1 | 40.9 | 97.3 | 40.0 | 666 |
| 0.2 | 36.6 | 95.5 | 34.9 | 665 |
| 0.5 | 32.8 | 93.5 | 30.9 | 654 |
| 1   | 29.2 | 88.0 | 27.3 | 634 |
| 2   | 20.4 | 75.7 | 18.9 | 580 |
| 4   | 10.8 | 61.8 | 10.0 | 493 |

**Table 5-2: Overall results for the linear scale-space method. Each correlation pair has been compared to the initial evaluation measures. **

The histograms of the *UM_rel* and *MNCC_feat* measures for scale *t* = 0.1 are shown in Figure 5-2 and Figure 5-3. It is difficult to visually estimate the differences between these histograms and those calculated for the initial images (see Figure 4-10 and Figure 4-11). Closer numerical analysis shows that at scale 0.1 more *UM_rel* values decrease than increase, as is also shown in Table 5-2. The *MNCC_feat* values are almost exclusively increasing.


**Figure 5-2: Histogram of UM_rel values for scale-space smoothed images (t = 0.1). **


**Figure 5-3: Histogram of MNCC_feat values for scale-space smoothed images (t = 0.1). **

The linear scale-space method depends on one parameter, making it easy to evaluate the overall results for all parameter settings. The histograms of *UM_rel* and *MNCC_feat* at all scales, although not included in the thesis, have been evaluated. The *UM_rel* values normally decrease at coarser scales. For the scales tested the *MNCC_feat* values generally increase, but the percentage of correlation pairs with improved *MNCC_feat* decreases at coarser scales (see Table 5-2). In other words, many feature tiles get a lower match score when smoothed at coarser scales. This was not anticipated, and it will be further investigated for separate correlation pairs in the succeeding detailed analysis.

What follows is an investigation of the effect that the linear scale-space smoothing process has on separate correlation pairs. Three different types of results will be examined. Firstly, the best and worst cases (i.e. where the *UM_rel* value diverges most compared to the initial evaluation measures) are considered. In addition, a total of 16 correlation pairs were falsely matched initially but correctly matched when scale-space smoothed; these correlation pairs constitute the third case, which will be studied in detail.

The most obvious feature of the results is that the worst cases (i.e. when the calculated *UM_rel* is considerably lower than the initial *UM_rel*) always appear at coarser scales, most commonly at scale 4, but to some extent also at scale 2. The reasons for a decreasing *UM_rel* at larger scales can be divided into two groups. The first case is in line with the predicted behaviour of the method, and it involves correlation pairs where the *MNCC_feat* increases for larger scales. Hence the single reason for a lower *UM_rel* value is that there are other features within the image that also obtain a higher correlation value at coarser scales. This may occur on two separate occasions: firstly, when the feature tile is not distinct enough (i.e. the initial *UM* is low), so that there is at least one further part of the fingerprint which is similar to the feature tile. This is often found in fingerprints of low quality, where the ridge structure is vague. In the second case, the parts of the feature tile that make it unique are so small that they are suppressed at larger scales, thus making the smoothed feature tile less distinct and more similar to other parts of the fingerprint. The latter situation is depicted in Figure 5-4, where the left-most valley is thinner than the adjacent ridges; hence the valley is smoothed faster than the ridges. The ridge ending blends with the neighbouring ridge and the feature tile gets less distinct. Although the *UM_rel* value decreases considerably at larger scales, the example in Figure 5-4 is still distinct enough at all scales to be singled out as the correct match.

**Figure 5-4: Feature tile from three weeks at scales t = {0, 0.1, 0.2, 0.5, 1, 2, 4}. **

The second group of fingerprints that produce significantly worse results when scale-space smoothed are those where the *MNCC_feat* value decreases for larger scales. This effect was not anticipated, but results of this type are in fact more common than the group previously described. Feature tiles that most often give rise to this kind of result are those that are initially fairly similar but include structure that locally deviates in size or strength (i.e. contrast). When scale-space smoothing such features, the local mean grey value will differ within each of the feature tiles, and this will result in low *MNCC_feat* values. In other words, the structure is more similar at pixel scale for unsmoothed images of a feature than for the smoothed versions of the same images. Figure 5-5 illustrates different readings of a feature tile with this characteristic. The feature segment in the top row includes a ridge which is both wide and strong within the whole tile. In the centre row the feature is as strong as the top row in the upper part of the image, but weaker in the lower part. In the bottom row the circumstance is reversed compared to the centre row (i.e. strong in the lower part of the image, and weaker in the upper part compared to the top row). At coarser scales this will result in a lower *MNCC_feat* value, thus decreasing the *UM_rel* value.


**Figure 5-5: Feature tile from three weeks at scales t = {0, 0.1, 0.2, 0.5, 1, 2, 4}. **

Occurrences where the linear scale-space method performs notably better than the correlation of the unsmoothed images can also be divided into two groups: the case when similar features in the fingerprint are smoothed quicker than the feature tile, and secondly when deviating details are smaller than the distinct structure of the feature segment. The former occasion does not involve enhancement of the feature tile, but rather destruction of similar information in the fingerprint. For this type of feature segment the *MNCC_feat* value is commonly equal to or lower than the initial *MNCC_feat* at coarser scales, but the *UM_rel* is still increased, since correlation values within the rest of the image decrease more than the *MNCC_feat*.

The second type of feature tiles that perform better when scale-space smoothed include distinct structure that is larger and/or stronger than the deviating details. The main reason for an increasing *UM_rel* value, when scale-space filtering this type of feature segment, is that the tile areas become more alike and thus the *MNCC_feat* value is enhanced. Figure 5-6 shows an example where the distinct structure is both larger and stronger than the deviating details, and Figure 5-7 depicts a feature tile where spurious information is weaker than the distinct feature. For the images in Figure 5-6 the highest *UM_rel* value was calculated at scale 4, and for the images in Figure 5-7 it was found at scales 0.5 and 1.

**Figure 5-6: Feature tile from three weeks at scales t = {0, 0.1, 0.2, 0.5, 1, 2, 4}. **


**Figure 5-7: Feature tile from three weeks at scales t = {0, 0.1, 0.2, 0.5, 1, 2, 4}. **

The final type of result that will be analysed in detail is the correlation pairs where the feature tile match initially failed, but succeeded when the images were scale-space filtered. There are a total of 16 different correlation pairs that for one or more scales belong to this category. Some of the initial false matches were due to misregistration; in these cases the maximum *MNCC* value ended up just outside the ground truth region, and thus they could not be considered genuine false matches. For the remaining occasions the most common reason for a negative initial *UM_rel* is that the feature tile is not distinct enough, either because of a low *UM_init* or because of considerable differences between the feature tiles of the two correlation pair images. These correlation pairs result in such a small *UM_rel* value that they cannot be considered reliable.

The remaining type of result is where the spurious information is apparent at detailed scales,

and the distinct features are of greater similarity at coarser scales. Figure 5-7 shows an

example of this type of result. The expectation of this type of effect was one of the main motivations to test the linear scale-space method. For the data set considered in this thesis there are however very few correlation pairs where this characteristic is found.

In conclusion, the linear scale-space method, although in many cases improving evaluation measures at small scales, cannot be considered apt for fingerprint image enhancement. Firstly, no single scale was found that gave an overall improvement of the evaluation measures. Secondly, in the cases where significant improvement could be proven, it more often depended on destruction of similar features in the fingerprint than on actual enhancement of the feature tile. There were occasions when the linear scale-space method enhanced feature tiles by suppressing unstable information; however, this almost exclusively occurred for feature tiles with a large and strong structure. These are the features that are initially very distinct and not noticeably affected by the template ageing problem, hence they are not in actual need of image enhancement. It has been shown that distinct structure may appear at very detailed scales, and thus uncommitted smoothing is as likely to suppress distinct features as spurious information.

*5.2 Nonlinear Isotropic Diffusion *

**5.2.1 Description and Motivation of Method **

The results from the linear scale-space method showed that relevant structure may appear even down to pixel scale. The proposal for testing nonlinear isotropic diffusion is that distinct features are related to edges, and that smoothing structure that is not considered to be edges (i.e. inside ridges and furrows) may enhance the evaluation measures.

Using the normalised Gauge first derivative as edge detector (see Chapter 4.4), the smoothing process is controlled to blur parts of the images where the edge response is weak. This will not only blur intra-regional areas of ridges and furrows, but it will also have the effect of enhancing the edges (i.e. increasing the contrast).

**5.2.2 Definition of Method Parameters and Boundaries **

The first parameter to consider is the conductivity coefficient (i.e. diffusivity), since this measurement decides what type of features should control the smoothing process. Following the previous motivation, an edge detection operator is a logical selection, and the method of choice is the first derivative along the Gauge axis perpendicular to the isophote. The background on the selection of this method has been presented in Chapter 4.4, Edge Detection.

The first parameter that has to be decided is the scale of the Gauge derivative. The images in the top row of Figure 5-8 represent ridge structure of different sizes, and the plots show the response of the Gauge derivative for the centre row (marked with a red line) of these images. The Gauge derivative has been calculated at scales *t* = 0.0625 (top row of plots), 0.25 (middle row) and 0.56 (bottom row).


**Figure 5-8: Plots of Gauge first derivative for the centre row of the images in the top row. Plots calculated at scales t = {0.0625, 0.25, 0.56} (top to bottom). **

The edge response for the large (right column) and medium sized (centre column) ridge structure in Figure 5-8 does not deviate much with the selection of the scale. For the small structure (in the left column) the edge response is somewhat weaker at coarser scales. Since the features in all of the images in Figure 5-8 represent edges, the calculated edge response should be as high as possible. Hence, a fixed value of 0.0625 for the scale of the Gauge derivative, *t*, is appropriate to use when testing the nonlinear isotropic diffusion.
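The edge detector described above can be sketched as the magnitude of the first-order Gaussian derivatives. This is a minimal illustration, not the thesis implementation; it assumes the convention that the scale *t* is the variance of the Gaussian (sigma = sqrt(t)), which is consistent with *t* = 0.0625 rendering 3x3 kernels:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gauge_first_derivative(image, t):
    """Edge response: the magnitude of the first-order Gaussian derivative,
    i.e. the derivative along the gauge axis perpendicular to the isophote.
    Assumes the scale t is the variance of the Gaussian, sigma = sqrt(t)."""
    sigma = np.sqrt(t)
    lx = gaussian_filter(image, sigma, order=(0, 1))  # derivative along x
    ly = gaussian_filter(image, sigma, order=(1, 0))  # derivative along y
    return np.hypot(lx, ly)
```

On a step edge the response peaks at the edge and is zero in flat regions, which is the property the conductivity function relies on.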


The next decision concerns which conductivity function should be used to control the diffusivity. By observing the edge responses in Figure 5-8 it can be determined that the function should diffuse much for a conductivity coefficient lower than approximately 0.4, and halt the diffusion process for values over 0.5. In other words, the response curve of the conductivity function should rapidly decrease around 0.4. The Weickert conductivity function, presented in Chapter 2.2.3, possesses this property. Changing the conductivity coefficient from the gradient magnitude to the absolute value of the Gauge first derivative, the function is written as:

$$
c_W(s) = \begin{cases} 1 & s = 0 \\[4pt] 1 - \exp\!\left(\dfrac{-3.315}{(s/\lambda)^4}\right) & s > 0 \end{cases}
$$

where *s* is the absolute value of the Gauge first derivative.
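The Weickert conductivity function can be transcribed directly; a minimal sketch (the helper name is hypothetical, the piecewise definition and the constant 3.315 follow the formulation above):

```python
import numpy as np

def weickert_conductivity(s, lam):
    """Weickert's conductivity as a function of the absolute value of the
    Gauge first derivative, s: equal to 1 where s = 0, and
    1 - exp(-3.315 / (s / lam)**4) where s > 0. The curve stays close to 1
    for responses below lam and drops off rapidly above it."""
    s = np.asarray(s, dtype=float)
    c = np.ones_like(s)
    pos = s > 0
    c[pos] = 1.0 - np.exp(-3.315 / (s[pos] / lam) ** 4)
    return c
```

With lam = 0.15 the curve is still above 0.9 at s = 0.15 but below 0.1 at s = 0.45, matching the requirement that diffusion should be halted for responses around 0.4 to 0.5.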

**Figure 5-9: Weickert conductivity function, c_W, of different λ.**

From Figure 5-8 and the plots in Figure 5-9 the boundaries for λ were initially decided to be 0.15 and 0.25. When practically testing the method it was evident that the response from edges occasionally could be even lower than 0.4, thus the lower boundary of λ was changed to 0.05.

The final parameters to specify boundaries for are the scale-step and the number of iterations of the diffusion process. These factors decide the maximum amount of diffusion that will be performed (i.e. when the conductivity function is 1). The scale-step should be selected smaller than 0.25 to ensure a stable solution [26], and it was therefore set to a constant value of 0.2. Since the structure that is to be smoothed is fairly detailed (i.e. the width of ridges and furrows), the number of iterations can be chosen rather small.


The lower and upper boundaries of the number of iterations were selected to 5 and 10 respectively. This was determined through visual assessment of the images in Figure 5-10. These images were calculated with different numbers of iterations for the diffusion process, and with the conductivity function set to a constant value of 1 (i.e. maximum homogeneous diffusion).

**Figure 5-10: Original image and maximum homogeneous diffusion calculated for 5, 10, 15 and 20 iterations, with scale-step ds = 0.2.**

**5.2.3 Specification of Parameter Sample Values **

With the diffusivity and conductivity function decided, and the Gauge derivative scale and the diffusion scale-step set to fixed values, there remain only two parameters to specify sample values for, namely the constant of the conductivity function, λ, and the number of iterations for the diffusion process.

The boundaries for the conductivity function constant have been set to 0.05 and 0.25 respectively. This constant decides which features are to be blurred, and is therefore essential for the result of the nonlinear isotropic diffusion process. Thus the number of samples chosen should be relatively high. From the plots in Figure 5-9 it was decided to include five values in the test: λ = {0.05, 0.1, 0.15, 0.2, 0.25}.

For the number of iterations it was decided that the boundary values (5 and 10) are enough and no further sampling values were used in the test.

The parameter values defined for the nonlinear isotropic diffusion method are summarised in Table 5-3.

| Parameter | Value(s) |
| --- | --- |
| *t* | 0.0625 |
| λ | 0.05, 0.1, 0.15, 0.2, 0.25 |
| *ds* | 0.2 |
| No. of iterations | 5, 10 |

**Table 5-3: Parameter values for nonlinear isotropic diffusion.**

**5.2.4 Implementation of Nonlinear Isotropic Diffusion **

The implementation of the scalar driven diffusion was adapted from the description given by Rein van den Boomgaard in [26]. Boomgaard defines the discrete approximation as follows:

$$
\begin{aligned}
L(s+ds,x,y) = L(s,x,y) + \frac{ds}{2}\Big[
&\big(c(s,x+1,y)+c(s,x,y)\big)\big(L(s,x+1,y)-L(s,x,y)\big)\\
-\;&\big(c(s,x,y)+c(s,x-1,y)\big)\big(L(s,x,y)-L(s,x-1,y)\big)\\
+\;&\big(c(s,x,y+1)+c(s,x,y)\big)\big(L(s,x,y+1)-L(s,x,y)\big)\\
-\;&\big(c(s,x,y)+c(s,x,y-1)\big)\big(L(s,x,y)-L(s,x,y-1)\big)\Big]
\end{aligned}
$$

where *s* is the scale, *x* and *y* are the spatial position, *L_s* is the image at scale *s*, *ds* is the scale-step and *c* is the conductivity function.
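The discrete scheme can be sketched as follows. This is a minimal NumPy version, not the thesis implementation; it averages the conductivity over neighbouring pixel pairs and replicates the image borders, which gives zero flux across the boundary:

```python
import numpy as np

def scalar_diffusion_step(L, c, ds=0.2):
    """One explicit step of scalar-driven (nonlinear isotropic) diffusion.
    The conductivity between a pixel and each 4-neighbour is approximated
    by the average of their c values; borders are replicated, so no flux
    crosses the image boundary and the total intensity is preserved."""
    Lp = np.pad(L, 1, mode='edge')
    cp = np.pad(c, 1, mode='edge')
    H, W = L.shape
    flux = np.zeros_like(L, dtype=float)
    for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
        Ln = Lp[1 + dy:H + 1 + dy, 1 + dx:W + 1 + dx]  # shifted neighbour image
        cn = cp[1 + dy:H + 1 + dy, 1 + dx:W + 1 + dx]  # shifted conductivity
        flux += 0.5 * (cn + c) * (Ln - L)
    return L + ds * flux
```

With a constant conductivity of 1 this reduces to homogeneous diffusion, the setting used to produce Figure 5-10; iterating the step with the nonlinear conductivity gives the scalar driven diffusion.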


Readers interested in details regarding the discrete approximation are referred to [26] for further reading.

**5.2.5 Results **

The three parameter settings that result in the largest number of correct matches are presented in Table 5-4, and the three cases with the fewest correct matches are shown in Table 5-5. The number of accurate matches for the initial images was 666, hence there is no combination of parameter values that renders a better overall result. Considering all parameter settings there were a total of 12 different correlation pairs that were initially false matches, but became successfully matched when scale-space filtered.

| iterations, λ | Higher *UM_rel* (%) | Higher *MNCC_feat* (%) | Higher *UM_rel* and *MNCC_feat* (%) | Number of correct matches |
| --- | --- | --- | --- | --- |
| 5, 0.05 | 38.6 | 46.8 | 24.1 | 665 |
| 10, 0.05 | 34.9 | 37.3 | 20.0 | 664 |
| 5, 0.1 | 31.1 | 33.9 | 17.8 | 658 |

**Table 5-4: The three best cases for scalar dependent diffusion. Percentile values are in comparison to the initial evaluation measures.**

| iterations, λ | Higher *UM_rel* (%) | Higher *MNCC_feat* (%) | Higher *UM_rel* and *MNCC_feat* (%) | Number of correct matches |
| --- | --- | --- | --- | --- |
| 10, 0.25 | 15.3 | 47.0 | 11.4 | 536 |
| 10, 0.2 | 19.3 | 29.9 | 11.4 | 579 |
| 10, 0.15 | 22.2 | 20.8 | 11.5 | 617 |

**Table 5-5: The three worst cases for scalar dependent diffusion. Percentile values are in comparison to the initial evaluation measures.**

The percentile improvement of the *UM_rel* can be compared to the results from the linear scale-space method. The *MNCC_feat* results conversely deteriorated; less than half of the correlation pairs get a higher *MNCC_feat* value when enhanced with the current method. This means that the most common case for the scalar driven diffusion makes an accurate match more difficult (i.e. decreases the *MNCC_feat*), while making a false match more likely (i.e. decreases the *UM_rel*). The overall results presented in Table 5-4 and Table 5-5 suggest that the proposed method is unsuitable for fingerprint image enhancement. The following detailed analysis will investigate whether this is generally true or not.

The overall results for nonlinear isotropic scale-space show that the best performance is achieved for λ = 0.05 calculated over 5 iterations (see Table 5-4). The *UM_rel* and *MNCC_feat* histograms for the best case are very similar to the histograms for the initial results (see Figure 4-10 and Figure 4-11) and no additional information could be gained from them. Therefore they are not included in this report.

The correlation pairs that performed best (i.e. significantly higher *UM_rel*) were generally calculated with a λ-value of 0.2 or 0.25, and to some extent also for λ = 0.15. In other words the edge tolerance of the conductivity function was set fairly high, thus these cases also smoothed across edges. A high edge response value, however, guarantees slower smoothing, even though the process is not stopped completely for high λ-values. The images included in this group of correlation pairs are in other words fairly similar to the images filtered by the linear scale-space method. The difference is that strong edge structure is preserved, or rather suppressed at a slower rate than non-edge structure. Figure 5-11 shows three examples of correlation pairs where the *UM_rel* was improved when the images were blurred by the scalar dependent diffusion method. The two columns to the left contain the initial images, and the two to the right depict the scale-space smoothed versions of these images.

**Figure 5-11: Three examples where the scalar dependent diffusion improved the *UM_rel* value; (left) initial correlation pair, (right) scale-space filtered images.**

More common for the scalar driven diffusion were correlation pairs where both the *UM_rel* and *MNCC_feat* decreased. A closer study shows that the proposed method includes some characteristics which make it futile for enhancement of fingerprints. Consider for instance a fingerprint with a broken ridge structure. In such a case the nonlinear isotropic scale-space method would smooth the valley created by the broken structure and thus enhance the contrast of the ridge gap. Another example is when part of the fingerprint is of low quality (i.e. smudged), which results in weak edges. In this case the scalar driven diffusion smoothes across the significant structure in the low quality area, while enhancing (or preserving) edges in areas of higher quality. When diffusing across ridge/valley structure, new edges will appear at the boundary between high and low quality areas, resulting in the creation of new structure that is not apparent in the original fingerprint. For smudgy fingerprints there is little chance that the edge response in two images of the same fingerprint is the same; hence not only will the diffusion process distort the distinct structure in one image, it will furthermore increase the dissimilarity between the two images.


Figure 5-12 depicts three examples where the *UM_rel* value decreased considerably due to this problem. The two image columns to the left are the initial correlation pairs, and the two columns to the right are the scale-space smoothed versions.

**Figure 5-12: Three examples where the scalar dependent diffusion noticeably decreased the *UM_rel* value; (left) initial correlation pair, (right) scale-space filtered images.**

The problem illustrated in Figure 5-12 occurred frequently for the diffusion filtered images. No single edge response value can be decided that accurately defines the edges of the actual fingerprint ridge and valley structure, since the edge response for smudgy fingerprints appears very low, while for broken ridge structure it is high.

The conclusion is that edges, as interpreted by the Gauge first derivative at a detailed scale, are not reliable as a definition of the boundaries of distinct structures in fingerprints. Due to the indistinctness of edges in low quality fingerprint images, the nonlinear isotropic scale-space method may perform uncontrolled piecewise smoothing across significant structure. This property alone classifies it as an ill-fitting method for fingerprint image enhancement.

*5.3 Nonlinear Anisotropic Diffusion *

**5.3.1 Description and Motivation of Method **

The two previous methods have suggested that unstable fingerprint features are not homogeneous and isotropic, nor can they exclusively be related to structure that is considered as edges. In other words, distinct characteristics may be found at any scale and are not only limited to edges. The approach of nonlinear anisotropic diffusion will investigate if important fingerprint attributes are connected to the orientation of structure at a certain scale. The suggestion is that deviations in orientation at detailed scales may be unstable; hence increasing the coherence locally would make two images of the same fingerprint more alike.

At first, this method may sound very similar to nonlinear isotropic diffusion (see Chapter 5.2); however, there are two essential differences between the previous method and nonlinear anisotropic diffusion. Firstly, the former method used edges (which are details of level 3 scale) to control the diffusion process, while the current method considers features of coarser scales, namely ridge orientation (level 2 detail). In practice this difference is primarily due to the use of the so-called integration scale, ρ, which decides at what scale the structure tensor should be smoothed. The value of the integration scale decides the size of the local area for which the coherence is enhanced; the higher the integration scale, the larger the area. The second difference, compared to nonlinear isotropic diffusion, is that the diffusion process in the current method is never entirely stopped. This means that features cannot be enhanced in the same way edges were in the previous method. Though a certain type of structure is preserved, specifically data perpendicular to the preferred smoothing orientation, there is always some amount of blurring that will occur for the entire image. Relatively, structure will appear to be enhanced, but the nonlinear anisotropic diffusion will never increase the local contrast.
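For illustration, the local coherence that drives this method can be computed in closed form from the structure tensor components. This sketch assumes the common definition κ = (μ1 − μ2)², the squared difference of the structure tensor eigenvalues, as used in coherence-enhancing diffusion:

```python
def coherence(jxx, jxy, jyy):
    """Coherence of the local structure: the squared difference of the two
    structure-tensor eigenvalues, (mu1 - mu2)^2, expanded in closed form.
    Zero for isotropic neighbourhoods, large where one orientation
    dominates (e.g. along a clean ridge)."""
    return (jxx - jyy) ** 2 + 4.0 * jxy ** 2
```

The closed form follows from the eigenvalues of a symmetric 2x2 matrix: their difference is sqrt((jxx - jyy)^2 + 4 jxy^2).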

The method proposed in this section, and similar methods utilising nonlinear anisotropic diffusion, have previously been tested as a means of fingerprint image enhancement [19, 32,

33, 34] and have been shown to improve results of fingerprint verification. The reason for including the method in this thesis is to be able to compare it to the results rendered from the other methods implemented within this work. The similarities and differences between the method used in this thesis and the formerly proposed methods have been explained in Chapter

**5.3.2 Definition of Method Parameters and Boundaries **

The method implemented and tested in this section is the coherence-enhancing diffusion filtering proposed by Weickert [28] and described in detail in Chapter 2.2.4. The structure tensor will be used to analyse the fingerprint structure, hence the first parameter to specify is the scale at which the structure tensor is to be calculated, referred to as the observation scale. Following the reasoning in Chapter 5.2.2, a small scale should be selected to guarantee an accurate estimation of the structure of detailed features. The observation scale, *t*, is set to 0.0625, which renders kernels of size 3x3 that are used to calculate the structure tensor.
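The structure tensor computation can be sketched as follows; a minimal version, not the thesis code, assuming Gaussian derivative filters with sigma = sqrt(t) at the observation scale and sigma = sqrt(ρ) at the integration scale:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor(image, t_obs, rho):
    """Structure tensor components Jxx, Jxy, Jyy: outer products of the
    Gaussian derivatives taken at the observation scale t_obs, averaged
    with a Gaussian at the integration scale rho. Both scales are treated
    as Gaussian variances, sigma = sqrt(scale)."""
    s = np.sqrt(t_obs)
    lx = gaussian_filter(image, s, order=(0, 1))  # derivative along x
    ly = gaussian_filter(image, s, order=(1, 0))  # derivative along y
    r = np.sqrt(rho)
    jxx = gaussian_filter(lx * lx, r)
    jxy = gaussian_filter(lx * ly, r)
    jyy = gaussian_filter(ly * ly, r)
    return jxx, jxy, jyy
```

For a pattern that varies only along x (e.g. vertical ridges), Jxy and Jyy vanish while Jxx is positive, so the tensor encodes the dominant local orientation.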

The next parameter to specify is the integration scale, ρ. It should be chosen large enough to suppress detailed orientation deviations, like ridge edge irregularities, but small enough not to destroy distinct features with deviating orientation, such as minutiae and singularity points. Considering the images in Figure 2-22, the integration scale boundaries are set to 0.5 and 8.

This will allow for investigation of the effect of enhancing correlation both at a very small local scale and at a rather large scale. An integration scale of 8 may seem too large a value to render reasonable results when considering a diffusivity constant of 5·10⁻⁵. However, for larger diffusivity constants, an integration scale of 8 is a more reasonable choice, and that is why it is included in the evaluation of the method.

The parameters that are left to define are the number of iterations, the scale-step, *ds*, and the two constants involved when calculating the eigenvalues of the diffusion tensor: α and the diffusivity constant, *C*. To get a more comprehensible data set and simplify evaluation of the method, the scale-step and the number of iterations are set to fixed values. In this way the α and *C* parameters single-handedly decide the amount of diffusion.


The calculation of the diffusion tensor eigenvalues follows the approach described by Weickert [28], and has been explained in Chapter 2.2.4. It is repeated here for reader convenience. λ*1* determines the diffusion perpendicular to the isophotes, and λ*2* defines the amount of smoothing along the isophotes.

$$
\lambda_1 = \alpha
$$

$$
\lambda_2 = \begin{cases} \alpha & \kappa = 0 \\[4pt] \alpha + (1-\alpha)\exp\!\left(\dfrac{-C}{\kappa}\right) & \text{otherwise} \end{cases}
$$

where *C* > 0, α ∈ (0, 1), and κ is the coherence measure as described in Chapter 2.2.4.
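The eigenvalue assignment can be transcribed directly. This is a sketch with a hypothetical helper name, using α = 0.01 and *C* = 0.05 as example defaults:

```python
import numpy as np

def diffusion_eigenvalues(kappa, alpha=0.01, C=0.05):
    """Eigenvalues of the coherence-enhancing diffusion tensor.
    lambda1 (across the isophotes) is the constant alpha; lambda2 (along
    the isophotes) equals alpha for zero coherence and approaches 1 as
    the coherence kappa grows, i.e. smoothing is strongest along clearly
    oriented structure."""
    kappa = np.asarray(kappa, dtype=float)
    lam1 = np.full_like(kappa, alpha)
    lam2 = np.full_like(kappa, alpha)
    pos = kappa > 0
    lam2[pos] = alpha + (1.0 - alpha) * np.exp(-C / kappa[pos])
    return lam1, lam2
```

Since lambda1 never drops below alpha, the process is never entirely stopped, which is the second difference from the nonlinear isotropic method noted above.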

The α parameter decides the minimum amount of smoothing, and the maximum difference between diffusion along isophotes and perpendicular to them. Setting it to 0.01 gives a minimum ratio of 1/100. The scale-step is set to 0.2 and the number of iterations to 50. These are the same parameter values that were used to calculate the images in Figure 2-22. With this α-value, scale-step value and number of iterations, the diffusion perpendicular to isophotes will be constant, and the amount of diffusion along isophotes will be defined by the diffusivity constant, *C*, alone.

Once again it is convenient to turn to the images in Figure 2-22. The boundary values for the diffusivity constant, *C*, are determined to be 0.05 and 5·10⁻⁵, through visual assessment. It should be noted that it is plausible that there exist combinations of α and *C* which will allow a lower number of iteration steps without noticeably changing the result of the diffusion process. For example, if the number of iterations is decreased to half of the initial value, then the α-value should be doubled and *C* must be trimmed (decreased) to give results comparable to the previous parameter setting. This would improve the speed of the diffusion calculation. However, this aspect is outside the scope of this thesis, and will not be examined further.

**5.3.3 Specification of Parameter Sample Values **

There are two parameters that sample values should be defined for, namely the diffusivity constant, *C*, and the integration scale, ρ. For both these cases the values have been chosen by visual assessment (see Figure 2-22). The integration scale is sampled with five values, and the diffusivity constant with four. The parameter values for the nonlinear anisotropic diffusion method are summarised in Table 5-6.

| Parameter | Value(s) |
| --- | --- |
| *t* | 0.0625 |
| *ds* | 0.2 |
| No. of iterations | 50 |
| α | 0.01 |
| *C* | 0.05, 0.005, 0.0005, 5·10⁻⁵ |
| ρ | 0.5, 1, 2, 4, 8 |

**Table 5-6: Parameter values for nonlinear anisotropic diffusion.**

**5.3.4 Implementation of Nonlinear Anisotropic Diffusion **

The calculation of the diffusion tensor follows the coherence-enhancing diffusion filtering method as proposed by Weickert [28] and described in detail in Chapter 2.2.4. The discrete approximation of the tensor driven diffusion was adapted from the description given by Rein van den Boomgaard in [26], and is defined as follows:

$$
\begin{aligned}
L(s+ds,x,y) = L(s,x,y) + ds\Big[
&\tfrac{1}{2}\big(a(x+1,y)+a(x,y)\big)\big(L(x+1,y)-L(x,y)\big)\\
-\;&\tfrac{1}{2}\big(a(x,y)+a(x-1,y)\big)\big(L(x,y)-L(x-1,y)\big)\\
+\;&\tfrac{1}{2}\big(c(x,y+1)+c(x,y)\big)\big(L(x,y+1)-L(x,y)\big)\\
-\;&\tfrac{1}{2}\big(c(x,y)+c(x,y-1)\big)\big(L(x,y)-L(x,y-1)\big)\\
+\;&\tfrac{1}{4}\big(b(x+1,y)+b(x,y+1)\big)L(x+1,y+1) - \tfrac{1}{4}\big(b(x-1,y)+b(x,y+1)\big)L(x-1,y+1)\\
-\;&\tfrac{1}{4}\big(b(x+1,y)+b(x,y-1)\big)L(x+1,y-1) + \tfrac{1}{4}\big(b(x-1,y)+b(x,y-1)\big)L(x-1,y-1)
\Big]
\end{aligned}
$$

where *s* is the scale, *x* and *y* are the spatial position, *L_s* is the image at scale *s*, *ds* is the scale-step, and *a*, *b* and *c* are the components of the diffusion tensor as described in 2.2.4.
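A tensor-driven diffusion step can be sketched as follows. This is not the nine-point stencil used in the thesis but a simplified central-difference discretisation of the same divergence form, div(D∇L) with D = [[a, b], [b, c]]:

```python
import numpy as np

def tensor_diffusion_step(L, a, b, c, ds=0.2):
    """One explicit step of tensor-driven diffusion,
    L <- L + ds * div(D grad L) with diffusion tensor D = [[a, b], [b, c]],
    discretised with central differences via np.gradient."""
    Ly, Lx = np.gradient(L)   # axis 0 is y, axis 1 is x
    jx = a * Lx + b * Ly      # flux components j = D grad L
    jy = b * Lx + c * Ly
    div = np.gradient(jx, axis=1) + np.gradient(jy, axis=0)
    return L + ds * div
```

With a = c = 1 and b = 0 this reduces to homogeneous diffusion and conserves the total image intensity; anisotropy enters through the off-diagonal component b and unequal a and c.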

**5.3.5 Results **

The results of the three best and the three worst cases for nonlinear anisotropic diffusion are presented in Table 5-7 and Table 5-8 respectively. Once again there is not a single parameter combination that generates better overall results than the initially calculated evaluation measures (see Chapter 4.5). Another conclusion that can be made is that the method performs best for high values of the diffusivity constant, *C*, and worst for low values. This means that a high threshold for the anisotropy of the diffusion is preferred, resulting in a relatively low difference between the smoothing along, and perpendicular to, the ridge structure.

| *C*, ρ | Higher *UM_rel* (%) | Higher *MNCC_feat* (%) | Higher *UM_rel* and *MNCC_feat* (%) | Number of correct matches |
| --- | --- | --- | --- | --- |
| 0.05, 2 | 40.1 | 97.0 | 40.1 | 664 |
| 0.05, 4 | 39.5 | 97.3 | 39.5 | 663 |
| 0.05, 1 | 41.1 | 97.2 | 40.9 | 663 |

**Table 5-7: Three best cases for tensor driven diffusion. Percentile values compared to initial evaluation measures.**

| *C*, ρ | Higher *UM_rel* (%) | Higher *MNCC_feat* (%) | Higher *UM_rel* and *MNCC_feat* (%) | Number of correct matches |
| --- | --- | --- | --- | --- |
| 5·10⁻⁵, 2 | 25.0 | 88.2 | 24.9 | 562 |
| 5·10⁻⁵, 4 | 24.5 | 88.5 | 24.1 | 562 |
| 5·10⁻⁵, 8 | 22.4 | 91.2 | 22.3 | 563 |

**Table 5-8: Three worst cases for tensor driven diffusion. Percentile values compared to initial evaluation measures.**

The histograms of *UM_rel* and *MNCC_feat* for the best performing parameter setting (i.e. *C* = 0.05 and ρ = 2) are very similar to the ones that were calculated for the linear scale-space method (Figure 5-2 and Figure 5-3) and are therefore not included. The *MNCC_feat* value is almost exclusively increasing for the three best cases presented in Table 5-7. This suggests that the reason why more than half of the correlation pairs get a lower *UM_rel* value is an increase of the *MNCC* value for structure that is similar to the feature tile.

Considering all parameter settings there were a total of 24 different correlation pairs that were initially false matches, but were successfully matched when scale-space filtered. This is far better than for the two previously tested methods, which had 12 and 16 correlation pairs respectively that rendered this type of result.


The detailed analysis will start by examining the correlation pairs that performed worse (i.e. had a substantially lower *UM_rel* value compared to the initially calculated *UM_rel*). There were generally two different reasons for a decreasing *UM_rel* value for scale-space filtered images. Firstly, two images of the same feature tile that differ locally in mean grey value may enhance this deviation when they are scale-space smoothed. This problem is very similar to the problem described for the linear scale-space method, where the *MNCC_feat* value decreased at larger scales (see Chapter 5.1.5 and Figure 5-5). This problem is depicted in the two top-most rows in Figure 5-13. The two images to the left are the initial correlation pair, and the two to the right are the scale-space filtered images.

The second problem, which is the more common, involves images of low quality with very detailed structure. In these cases the orientation estimate calculated from the structure tensor may deviate between two images of the same feature tile. Hence, the diffusion process will enhance the coherence in different directions for the two images. This renders very poor results correlation-wise, since new structure that does not exist in the original fingerprint may be created. This problem is illustrated in the two bottom rows in Figure 5-13. The worst cases almost exclusively appear for the two lowest values of the diffusivity constant, *C*, which are the situations involving maximum diffusion. It should however be noted that even though the problem is less serious for higher *C*-values (i.e. less diffusion), it still exists for correlation pairs that exhibit this type of structure.

**Figure 5-13: Four examples where the tensor driven diffusion noticeably decreased the *UM rel* value; (left) initial correlation pairs, (right) scale-space filtered images. **
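The orientation analysis whose failure mode is discussed above can be illustrated with a short sketch. This is a generic gradient structure tensor estimator, not the implementation used in this thesis; the function name and default scales are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dominant_orientation(img, grad_sigma=1.0, int_sigma=2.0):
    """Estimate local orientation and coherence from the structure tensor.

    grad_sigma: differentiation scale for the Gaussian derivatives.
    int_sigma:  integration scale over which tensor components are averaged.
    """
    # Gaussian derivatives of the image (axis 1 = x, axis 0 = y).
    gx = gaussian_filter(img, grad_sigma, order=(0, 1))
    gy = gaussian_filter(img, grad_sigma, order=(1, 0))
    # Outer-product tensor components, smoothed at the integration scale.
    jxx = gaussian_filter(gx * gx, int_sigma)
    jxy = gaussian_filter(gx * gy, int_sigma)
    jyy = gaussian_filter(gy * gy, int_sigma)
    # Angle of the dominant eigenvector (the direction across the ridges),
    # via the double-angle formula for a symmetric 2x2 matrix.
    theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
    # Coherence: 0 for isotropic structure, approaching 1 when one
    # orientation dominates.
    coherence = np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2) / (jxx + jyy + 1e-12)
    return theta, coherence
```

When the coherence is low, the orientation estimate is unreliable, which is exactly the situation in which two images of the same feature tile can end up being diffused in different directions.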

The nonlinear anisotropic diffusion method was able to correctly match 24 correlation pairs that failed the initial correlation. The images for these correlation pairs were investigated, and they could be divided into three groups depending on the reason for the improvement. Firstly, there were 6 correlation pairs from the same sub-area group, where the initial mismatch was due to a very similar feature receiving a higher correlation value. These false matches stem from a poor selection of a distinct feature rather than from the template ageing problem, and are therefore not investigated further. There were also a few correlation pairs that rendered false matches due to misregistration between the images. Again, this problem is not within the scope of the template ageing problem. The final category, which includes most of the improved matches, consists of fingerprints where the local structure is of low quality and the nonlinear anisotropic diffusion is able to suppress spurious information and enhance the coarser structure. In other words, the structure is more stable at a larger scale than the details within the feature tile. Four examples of this type of result are depicted in Figure 5-14. The two columns to the left are the initial correlation pairs, and the two to the right are the scale-space enhanced versions. This type of improvement is what was anticipated when the nonlinear anisotropic scale-space method was initially suggested. The examples in Figure 5-14 show that this method is capable of enhancing the correlation, and making an accurate match possible, for correlation pairs that are considerably affected by template ageing.

**Figure 5-14: Four examples where the tensor driven diffusion made possible an accurate match; (left) initial correlation pairs, (right) scale-space filtered images. **

The nonlinear anisotropic scale-space, or tensor driven diffusion, method proposed in this section was not able to produce an overall better result than the initially calculated evaluation measures. Furthermore, when diffused extensively there is an apparent risk that low quality structure is smoothed in orientations deviating from the actual fingerprint pattern, due to inaccuracy in the analysis of the local structure. It has thus been shown that the template ageing problem may affect structure at larger scales than the ridge structure itself. In these cases the structure tensor, at the integration scale used, is not able to accurately analyse the fingerprint structure. The risk that features are enhanced in incorrect orientations may be reduced by strongly limiting the maximum amount of diffusion allowed, as well as the maximum value of the integration scale. The most reasonable solution would be to decrease the number of iterations as a starting point, and then adapt the remaining parameters to produce valid results.
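The suggested remedy of capping the number of iterations can be sketched as a compact explicit coherence-enhancing diffusion loop in the spirit of Weickert's scheme. This is a simplified textbook-style sketch, not the implementation evaluated in the thesis; the function name and all default parameters are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def coherence_diffusion(img, n_iter=5, C=0.05, grad_sigma=1.0,
                        int_sigma=2.0, tau=0.15, alpha=0.01):
    """Explicit coherence-enhancing diffusion with a hard iteration cap."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Structure tensor at the integration scale.
        gx = gaussian_filter(u, grad_sigma, order=(0, 1))
        gy = gaussian_filter(u, grad_sigma, order=(1, 0))
        jxx = gaussian_filter(gx * gx, int_sigma)
        jxy = gaussian_filter(gx * gy, int_sigma)
        jyy = gaussian_filter(gy * gy, int_sigma)
        # Closed-form eigensystem of the 2x2 tensor; delta = mu1 - mu2.
        delta = np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2) + 1e-12
        c2t, s2t = (jxx - jyy) / delta, 2 * jxy / delta  # cos(2t), sin(2t)
        # Small diffusivity across the ridges; along the ridges it grows
        # with the coherence (mu1 - mu2)^2, controlled by the constant C.
        lam1 = alpha
        lam2 = alpha + (1 - alpha) * np.exp(-C / delta ** 2)
        # Diffusion tensor sharing the structure tensor's eigenvectors.
        d11 = 0.5 * (lam1 + lam2 + (lam1 - lam2) * c2t)
        d22 = 0.5 * (lam1 + lam2 - (lam1 - lam2) * c2t)
        d12 = 0.5 * (lam1 - lam2) * s2t
        # Explicit update: u <- u + tau * div(D grad u).
        uy, ux = np.gradient(u)
        fx = d11 * ux + d12 * uy
        fy = d12 * ux + d22 * uy
        u += tau * (np.gradient(fx, axis=1) + np.gradient(fy, axis=0))
    return u
```

Lowering `C` or raising `n_iter` strengthens the smoothing along the estimated ridge orientation, which is precisely where the orientation-deviation risk discussed above comes in.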

The tensor dependent diffusion proved capable of enhancing the correlation, and making an accurate match possible, for images that are considerably affected by template ageing.

This result alone makes the method highly interesting and motivates further investigation into possibilities to enhance the overall result as well. For example, some quality measure could be used to determine whether diffusion enhancement is necessary or not, and in this way only perform smoothing on images that are less likely to produce an accurate match initially.


**6 Results **

The linear scale-space approach improved correlation values, which was predicted since the image structure is flattened at coarser scales, but there was no increase in the number of accurate matches. There were a number of cases where the correlation value decreased at coarser scales. This was not anticipated, and was caused by deviations between the mean intensity values of local areas within the feature tile of the template and target images. The linear scale-space method is not suitable for fingerprint image enhancement, since the risk of increasing the correlation value with inaccurate features is too high.
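This failure mode is easy to reproduce. Assuming the evaluation is based on zero-mean normalized cross-correlation (the exact MNCC definition used in the thesis may differ in detail), a shift in the mean grey value of a sub-area lowers the score even when the ridge pattern is identical:

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equally sized tiles.

    Only the GLOBAL tile means are subtracted, so a deviation in the mean
    grey value of a local sub-area lowers the score even when the ridge
    structure itself agrees.
    """
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)
```

Subtracting the global mean cannot compensate for a purely local offset, so the score drops well below 1 for two structurally identical tiles that differ only in the brightness of one half.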

The nonlinear isotropic scale-space, or edge-preserving, approach also proved ill-suited for fingerprint image enhancement. This is due to the fact that the analysis of edges may be unreliable, since edge structure is often distorted in fingerprints affected by the template ageing problem. It was shown that the template ageing problem generally affects structure at larger scales than edges.
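For reference, the edge-preserving scheme dismissed here is essentially Perona-Malik diffusion [24]. A minimal explicit version is sketched below (illustrative parameters, not the thesis implementation); the diffusivity is driven entirely by the local edge strength, which is exactly the quantity that is unreliable on low-quality fingerprints:

```python
import numpy as np

def perona_malik(img, n_iter=10, kappa=0.1, tau=0.2):
    """Explicit Perona-Malik (edge-preserving, nonlinear isotropic) diffusion.

    The diffusivity g = exp(-(|grad u| / kappa)^2) slows smoothing wherever
    the gradient magnitude is large, i.e. at (presumed) edges.
    """
    u = img.astype(float).copy()
    for _ in range(n_iter):
        uy, ux = np.gradient(u)
        # Edge-stopping diffusivity in (0, 1].
        g = np.exp(-((ux ** 2 + uy ** 2) / kappa ** 2))
        fx, fy = g * ux, g * uy
        # Explicit update: u <- u + tau * div(g grad u).
        u += tau * (np.gradient(fx, axis=1) + np.gradient(fy, axis=0))
    return u
```

If the distorted fingerprint structure produces spurious strong gradients, `g` shuts the smoothing off precisely where enhancement would be needed, which matches the behaviour reported above.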

The nonlinear anisotropic scale-space, or coherence-enhancing, method did not give any overall improvement in the number of accurate matches. It did however show that for a certain type of template ageing problem, where the deviating structure does not significantly affect the ridge orientation, the nonlinear anisotropic diffusion is able to accurately match correlation pairs that resulted in a false match before image enhancement. There are occurrences of the template ageing problem that affect structure at larger scales than the ridge orientation, hence making ridge orientation estimation inaccurate. There are also situations where the template ageing problem strongly affects the local fingerprint structure, but the larger scale ridge orientation is preserved. These latter cases are the ones that can render accurate matches when using the nonlinear anisotropic scale-space.


**7 Conclusion and Future Work **

The only method that produced results interesting enough to investigate further is the nonlinear anisotropic scale-space (or tensor dependent diffusion). The coherence-enhancing diffusion solved certain instances of the template ageing problem; however, the overall result was never improved. Therefore it would be interesting to examine whether there is a measurement that can be used to determine if diffusion filtering of a correlation pair is necessary or not, and also to decide the values of the diffusion parameters. Such a method would be similar to the automatic scale-selection process described by Almansa & Lindeberg [19].

Another idea would be to diffuse the template image and the target (or correlation) image at different scales, for instance letting a local coherence measure decide the amount of diffusion and stopping the process when the two images reach a similar, pre-decided coherence value.
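This stopping criterion can be prototyped as a small control loop. In the sketch below, `step` is any smoothing operator (a plain Gaussian stand-in is used in the usage note) and coherence is measured globally from the gradient covariance; both are simplified placeholders for the local measures the thesis has in mind, and all names and thresholds are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def global_coherence(u):
    """Coherence of the mean gradient-covariance (structure) tensor:
    0 for isotropic content, approaching 1 for strongly oriented content."""
    gy, gx = np.gradient(u)
    jxx, jxy, jyy = (gx * gx).mean(), (gx * gy).mean(), (gy * gy).mean()
    return np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2) / (jxx + jyy + 1e-12)

def diffuse_to_coherence(img, step, target=0.9, max_iter=25):
    """Apply `step` repeatedly until the image reaches the target coherence.

    Running both the template and the target image through this with the
    same `target` leaves them at a similar, pre-decided coherence level,
    each having received only as much diffusion as it needed.
    """
    u = img.astype(float)
    for _ in range(max_iter):
        if global_coherence(u) >= target:
            break
        u = step(u)
    return u
```

A typical call would be `diffuse_to_coherence(tile, lambda u: gaussian_filter(u, 0.5))`; in a real system `step` would be one iteration of the coherence-enhancing diffusion instead of an isotropic Gaussian.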


**8 References **

[1] Nalini K. Ratha, Andrew Senior and Ruud M. Bolle (Mar. 2001). *Tutorial on Automated Biometrics*. Proceedings of the International Conference on Advances in Pattern Recognition, Rio de Janeiro, Brazil, March 2001.

[2] Norman Haas, Nalini K. Ratha and Ruud M. Bolle (Mar. 2002). *Pre-enhancing Non-uniformly Illuminated Fingerprint Images*. IEEE Conference on Automatic Identification Advanced Technologies (AutoID 2002), Tarrytown, NY, March 2002.

[3] Sharath Pankanti, Salil Prabhakar and Anil K. Jain (Dec. 2001). *On the Individuality of Fingerprints*. Proc. IEEE CVPR, pp. 805-812, Hawaii, Dec. 2001. http://citeseer.nj.nec.com/pankanti01individuality.html

[4] Anil Jain, Lin Hong, Sharath Pankanti and Ruud Bolle (Sept. 1997). *An Identity Authentication System Using Fingerprints*. Proceedings of the IEEE, vol. 85, no. 9, pp. 1348-1388, Sept. 1997. http://citeseer.nj.nec.com/jain97identity.html

[5] Andrew Senior (1997). *A Hidden Markov Model Fingerprint Classifier*. Proceedings of the 31st Asilomar Conference on Signals, Systems and Computers, 1997, pp. 306-310.

[6] Qinzhi Zhang, Kai Huang and Hong Yan (2001). *Fingerprint Classification Based on Extraction and Analysis of Singularities and Pseudoridges*. Proceedings of the Pan-Sydney Area Workshop on Visual Information Processing, Sydney, Australia, 2001, pp. 83-87. ISSN 1445-1336, ISBN 0-909-92589-5.

[7] Johan de Boer, Asker M. Bazen and Sabih H. Gerez (Nov. 2001). *Indexing Fingerprint Databases Based on Multiple Features*. ProRISC 2001 Workshop on Circuits, Systems and Signal Processing, Veldhoven, The Netherlands, November 2001.

[8] Meltem Ballan, F. Ayhan Sakarya and Brian L. Evans (Nov. 1997). *A Fingerprint Classification Technique Using Directional Images*. Proc. IEEE Asilomar Conference on Signals, Systems, and Computers, Nov. 3-5, 1997, Pacific Grove, CA, vol. 1, pp. 101-104.

[9] Kalle Karu and Anil K. Jain (1996). *Fingerprint Classification*. Pattern Recognition, vol. 29, no. 3, pp. 389-404, 1996.

[10] Asker M. Bazen and Sabih H. Gerez (Feb. 2001). *Extraction of Singular Points from Directional Fields of Fingerprints*. Mobile Communications in Perspective, Annual CTIT Workshop, Enschede, The Netherlands, University of Twente, Center for Telematics and Information Technology, February 2001.

[11] A. Roddy and J. Stosz (1997). *Fingerprint Features - Statistical Analysis and System Performance Estimates*. Proceedings of the IEEE, 85(9), pp. 1390-1421, 1997. http://citeseer.nj.nec.com/roddy99fingerprint.html

[12] Anil Jain and Sharath Pankanti. *Automated Fingerprint Identification and Imaging Systems*. http://citeseer.nj.nec.com/453622.html

[13] Lin Hong, Yifei Wan and Anil Jain (Aug. 1998). *Fingerprint Image Enhancement: Algorithm and Performance Evaluation*. IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 20, no. 8, pp. 777-789, Aug. 1998. http://citeseer.nj.nec.com/hong98fingerprint.html


[14] Jianwei Yang, Lifeng Liu, Tianzi Jiang and Yong Fan (Aug. 2003). *A Modified Gabor Filter Design Method for Fingerprint Image Enhancement*. Pattern Recognition Letters, vol. 24, no. 12, Aug. 2003, pp. 1805-1817.

[15] Asker M. Bazen and Sabih H. Gerez (Nov. 2001). *Segmentation of Fingerprint Images*. ProRISC 2001 Workshop on Circuits, Systems and Signal Processing, Veldhoven, The Netherlands, Nov. 2001.

[16] Asker M. Bazen and Sabih H. Gerez (Nov. 2000). *Directional Field Computation for Fingerprints Based on the Principal Component Analysis of Local Gradients*. ProRISC 2000 Workshop on Circuits, Systems and Signal Processing, Veldhoven, The Netherlands, November 2000.

[17] Ying Hao, Tieniu Tan and Yunhong Wang (2002). *An Effective Algorithm for Fingerprint Matching*. IEEE Region 10 Technical Conference on Computers, Communication, Control and Power Engineering, 2002.

[18] Nalini K. Ratha, Shaoyun Chen and Anil Jain (Nov. 1995). *Adaptive Flow Orientation Based Feature Extraction in Fingerprint Images*. Pattern Recognition, vol. 28, pp. 1657-1672, Nov. 1995. http://citeseer.nj.nec.com/ratha95adaptive.html

[19] Andrés Almansa and Tony Lindeberg (supervisor) (Mar. 1998). *Fingerprint Enhancement by Shape Adaptation of Scale-Space Operators with Automatic Scale-Selection*. Master's Thesis, March 30, 1998. http://www.fing.edu.uy/~almansa/thesis/thesis.html

[20] Tony Lindeberg (Sept. 1996). *Scale-space: A Framework for Handling Image Structures at Multiple Scales*. Technical Report CVAP-TN15, Royal Institute of Technology, Sept. 1996. http://citeseer.nj.nec.com/lindeberg96scalespace.html

[21] Bart M. ter Haar Romeny (Sept. 1996). *Introduction to Scale-space Theory: Multiscale Geometric Image Analysis*. Fourth International Conference on Visualization in Biomedical Computing, Hamburg, Germany, September 22, 1996.

[22] Andrew P. Witkin (1983). *Scale-Space Filtering*. In Proceedings of the Eighth International Joint Conference on Artificial Intelligence, pp. 1019-1022, 1983.

[23] Joachim Weickert, Seiji Ishikawa and Atsushi Imiya (Aug. 1997). *Scale-space has been discovered in Japan*. Technical Report DIKU-TR-97/18, Department of Computer Science, University of Copenhagen, Denmark, August 1997. http://citeseer.nj.nec.com/weickert97scalespace.html

[24] Pietro Perona and Jitendra Malik (1990). *Scale-Space and Edge Detection Using Anisotropic Diffusion*. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 7, pp. 629-639, July 1990.

[25] Pietro Perona, Takahiro Shiota and Jitendra Malik (1994). *Anisotropic Diffusion*. In: B.M. ter Haar Romeny (ed.). *Geometry-Driven Diffusion in Computer Vision*. Kluwer Academic Press, pp. 73-92, 1994.

[26] Rein van den Boomgaard. *Algorithms for Non-Linear Diffusion*. http://carol.wins.uva.nl/~rein/nldiffusionweb/material.html, nldiffusioncode.pdf (Dec. 18, 2003).

[27] Joachim Weickert, Bart M. ter Haar Romeny and Max A. Viergever (Mar. 1998). *Efficient and Reliable Schemes for Nonlinear Diffusion Filtering*. IEEE Transactions on Image Processing, vol. 7, no. 3, pp. 398-410, March 1998. http://citeseer.nj.nec.com/weickert98efficient.html


[28] Joachim Weickert (Apr. 1999). *Coherence-Enhancing Diffusion Filtering*. International Journal of Computer Vision, vol. 31, no. 2/3, pp. 111-127, April 1999. http://citeseer.nj.nec.com/weickert99coherenceenhancing.html

[29] Joachim Weickert (1997). *A Review of Nonlinear Diffusion Filtering*. In: Scale-Space Theory in Computer Vision, Lecture Notes in Computer Science, vol. 1252, Springer, Berlin, pp. 3-28, 1997.

[30] Tony Lindeberg and Bart M. ter Haar Romeny (1994). *Linear Scale-space*. In: Bart M. ter Haar Romeny (ed.). *Geometry-Driven Diffusion in Computer Vision*, pp. 1-77. Kluwer Academic Publishers, Dordrecht, 1994. http://citeseer.nj.nec.com/477405.html

[31] Dinkar N. Bhat and Shree K. Nayar (1996). *Ordinal Measures for Visual Correspondence*. IEEE Conference on Computer Vision and Pattern Recognition, pp. 351-357, 1996. http://citeseer.nj.nec.com/bhat97ordinal.html

[32] Shlomo Greenberg, Mayer Aladjem and Daniel Kogan (Jun. 2002). *Fingerprint Image Enhancement Using Filtering Techniques*. Real-Time Imaging, vol. 8, no. 3, June 2002, pp. 227-236. Earlier version (with I. Dimitrov): ICPR 2000, vol. III, pp. 326-329.

[33] Venu Govindaraju, Zhixin Shi and John Schneider (2003). *Feature Extraction Using a Chaincoded Contour Representation of Fingerprint Images*. AVBPA 2003, pp. 268-275.

[34] Jiangang Cheng, Jie Tian, Hong Chen, Qun Ren and Xin Yang (2003). *Fingerprint Enhancement Using Oriented Diffusion Filter*. AVBPA 2003, pp. 164-171.

[35] Bart M. ter Haar Romeny (2003). *Apertures and the Notion of Scale* and *The Gaussian Kernel*. In: *Front-End Vision and Multi-Scale Image Analysis*, pp. 1-12, 37-52. Kluwer Academic Publishers, 2003. ISBN 1402015070.

[36] Joachim Weickert (1998). *Anisotropic Diffusion in Image Processing*. Teubner-Verlag, Stuttgart, Germany, 1998. ISBN 3-519-02606-6.

[37] Tony Lindeberg (1994). *Scale-Space Theory in Computer Vision*. Kluwer Academic Publishers, 1994. ISBN 0-7923-9418-6.

[38] Fingerprint Cards AB. *FPC1010*. http://www.fingerprints.com/Media/customer_2/download/fpc1010.pdf (Dec. 18, 2003)

[39] *Powersof10.com*. http://www.powersof10.com/ (Dec. 18, 2003)

[40] *NIST Scientific and Technical Databases - Fingerprints*. http://www.nist.gov/srd/fing_img.htm (Dec. 18, 2003)

[41] *Level 1, 2 and 3 Details*. http://www.onin.com/fp/level123.html (Dec. 18, 2003)
