
Distance Assisted Training Programme for

Nuclear Medicine Technologists

Edited by: Heather E. Patterson, Brian F. Hutton

Positron Emission Tomography

PET Physics

Author: John Dickson



The material within this document should be regarded as the property of the International Atomic Energy Agency and should be reproduced or used only in accordance with the attached statement of ownership.

Statement of ownership


All materials which form part of the project 'Distance Assisted Training for Nuclear Medicine Technologists', including any translation of these materials, remain the property of the IAEA, Vienna. In addition, the names of the original authors and editors of the material shall be acknowledged at all times. If the materials are to be reproduced or printed in any manner, the statement of ownership, as well as the names of the original authors and editors, shall be included.


The project materials are freely available to lecturers and students for use in Nuclear Medicine training, provided they are not used for commercial purposes. The IAEA, authors and editors make no guarantee regarding the accuracy of material presented and accept no responsibility for any action arising from use of the materials.


The materials will normally be made available only as part of national formal training programmes approved by the IAEA. This is encouraged to ensure that students undertaking the training have adequate supervision and guidance. Also, formal recognition of students' training will only be provided subject to formal student assessment via national training programmes.

Your respect for the use of these materials will be very much appreciated.  

Please direct any queries regarding these materials or their use to: 


              Nuclear Medicine Section 

International Atomic Energy Agency, 

P.O. Box 100,  

A‐1400 Vienna,  

Austria

Positron Emission Tomography 

PET Physics 


Subject flowchart


Basic Science of PET

Coincidence Detection

Types of Coincidence

- True Events

- Random Events

- Scattered Events

- Multiple Events

Positron Emission Tomography Imaging

- Range of Positrons

- Positron Fraction


Detector geometry

Block Detector

Scintillation Crystals


Time of Flight








Randoms Correction


- From Singles Rates

- From Delayed Coincidence Channel

Deadtime Correction

Scatter Correction

- Tail Fitting

- Convolution Methods

- Simulation Based Approaches

Normalisation

Attenuation correction

- Using Transmission Sources

- Using CT

- Calculated Attenuation Correction



Iterative reconstruction

Maximum likelihood reconstruction

When to stop?

Improving speed: What is OS-EM?

Filtered Back Projection

3D PET reconstruction






From Measured Disintegration Events to Activity Concentration

Kinetic Modelling

Standardized Uptake Value (SUV)

- Variability of SUV


Spatial Resolution

- Positron Range

- Co-linearity

- Distance between detectors

- Intrinsic Spatial Resolution and Depth of Interaction

- Reconstruction Parameters


Countrate Performance

Scatter Fraction

Quality Assurance

Quality Assurance

Quality Control Testing

Acceptance Testing

- Spatial Resolution

- Scatter Fraction, Count Losses, and Randoms Measurement

- Sensitivity

- Accuracy: Corrections for Count Losses and Randoms

- Image Quality, Accuracy of Attenuation and Scatter Corrections

Routine Quality Control


- CT

Quality Control of other equipment used in PET imaging


PET Artefacts

- Hardware Failures

- Acquisition Problems

- Processing Issues

PET-CT Artefacts

- Misregistration Artefacts

- Truncation Artefacts

- The effect of CT artefacts on PET-CT

- Contrast induced artefacts in PET-CT















[Subject flowchart (diagram): maps the unit's sections — Basic Science of PET (Coincidence Detection, Types of Coincidence), Detector Geometry, PET Imaging, 2D PET & 3D PET and Time of Flight, corrections (Deadtime, Scatter, Normalisation, Attenuation), Activity Concentration, Kinetic Modelling and Standardized Uptake Value, System Performance (Spatial Resolution, Sensitivity, Countrate Performance, Scatter Fraction), Quality Assurance (Quality Control Testing, Routine Quality Control) and PET / PET-CT Artefacts — with 'OL ppt' markers indicating topics supported by narrated PowerPoint presentations on the DAT website.]

Positron Emission Tomography

Technical Writer: John Dickson

Production Editor: Heather Patterson


Although positron emission tomography has been around for some time, its use in routine clinical practice is a relatively new development. For those familiar with Nuclear Medicine, and in particular SPECT, many of the concepts and methods used in PET will have been seen before. However, there are differences. The aim of this unit is to help you understand the physics, technology and methods used in Positron Emission Tomography. Some of you may have been introduced briefly to PET technology in a previous unit. In this unit you will build on that knowledge to gain a detailed understanding of the physics and technology underpinning PET imaging.

At the beginning of this unit, the basic science underpinning PET, such as Positron Emission and Coincidence Detection, will be discussed before the hardware and scanning modes used in PET are introduced. Reconstruction and processing of PET data is similar to that used in SPECT, so the module on reconstruction in this unit will be something of a revision exercise for those who understand SPECT. However, some of the differences in reconstruction, together with the corrections performed in PET, will also be described. The robust level of corrections applied to PET data means that the data can be truly quantitative, which is one of the most important features of PET imaging. For interest, the way in which PET can quantify physiological processes in vivo will be discussed. To conclude this unit, the performance of PET systems, possible artefacts, and the quality control required to maintain high quality PET imaging will be described.

There are many new concepts and terms introduced in this unit. To assist you, there is a glossary of terms at the end of this document, and for some of the more difficult concepts there are narrated PowerPoint presentations available that will give audio and visual descriptions. Throughout this unit there are also analogies that help describe equivalent ideas that we may see in general nuclear medicine and SPECT.

You will be required to perform some exercises using PET scanners and their associated processing systems. If you have not used such systems before you may need to ask your supervisor for assistance. For those who do not have access to such systems, data and results will be provided where possible.



On completion of the subject, students should be able to:

Discuss the basic science behind PET including Positron Emission, Coincidence Detection and the different types of coincidence seen.

Be aware of the detector configurations, scintillation crystals, and scanning modes used in PET.

Describe the different types of reconstruction used in PET, and why PET acquired in 3D mode requires a slightly different approach.

Understand the different corrections applied in PET, and how they affect the quality of the image.

Recognise the quantitative power of PET and understand how this can be performed.

Perform routine QC in PET, understand the measures and results obtained, and recognise the types of artefact seen in PET.

Time Check:

Allow 25 hrs to complete the study of this subject, perform on-line self assessment revision tests and complete the exercises in your Workbook.




Basic Science of PET

In atomic physics, we remember that each atom consists of a nucleus of protons and neutrons, with a cloud of electrons surrounding this nucleus. If the atom is a radioactive element which undergoes beta decay, the unstable nucleus converts a neutron into a positively charged proton and ejects a negatively charged electron (beta particle, β-) from the atom. Positron emission is a version of radioactive beta decay. In this case (Figure 1(a)), one of the protons in the nucleus is converted into a neutron and a positively charged electron, which is known as a positron (β+).
Once emitted, the positron will have a number of interactions with neighbouring nuclei, losing energy on its way. Eventually, when the positron has almost stopped moving, it will combine with an electron, destroying (annihilating) both particles in the process. The energy released in the destruction of the electron and positron is released as two photons (γ) with energy of 511 keV travelling at 180 degrees from each other (Figure 1(b)). In PET, these two photons are individually known as 'singles'.

Figure 1:

(a) A radioactive nucleus undergoing positron decay with the positron leaving the atom. (b) The positron can have a twisting turning path with several interactions before annihilating with an electron resulting in two 511 keV photons.


On completion of this section you will be able to:

Understand the concept of beta decay and positron emission

Know how coincidence imaging works and understand what a ‘line of response’ is.

Appreciate how a PET scanner forms transaxial images.

Describe the main types of coincidence events, and understand the factors that affect them.

Know some of the radionuclides used in PET imaging, and the characteristics that describe their imaging properties.

Time Check:

Allow 2 hrs to complete the study of this section and complete the exercises in your Workbook and DAT website self assessment revision.


Coincidence Detection

Positron Emission Tomography (PET) is based on this physical process of two photons emitted 180 degrees from each other. In Figure 2 we see two detectors with a positron emitting source between them. Considering that the photons arising from positron annihilation travel at the speed of light, if detector A detects one photon, and at the same time detector B detects the second photon, we can assume that a positron emission occurred on a line between the two detectors. The detection of two photons at the same time is known as 'coincidence imaging', with the line between the two coincident events known as the 'line of response'.

Figure 2:

Two detectors detecting two annihilation events along a line of response.

In practice we cannot state that the photons need to be detected at the same time for a coincidence to have occurred. Firstly, unless the source is at the central point between the two detectors, one photon will arrive slightly before the second. However more importantly, because the time it takes for the detector to process the detection of the photon is not instantaneous, a timing window in which coincidences will be accepted needs to be set. Typically a valid coincidence is said to have occurred if the second photon arrived within a short period (8-12 nanoseconds) after the first.

Since the positional information about the radioactive disintegration given by the line of response is determined electronically (known as 'electronic collimation'), there is no need for the physical collimators used in SPECT. As a result, the sensitivity of PET is much higher than that of SPECT. However, because two photons are required to record a coincidence, compared with the single photon in SPECT, the sensitivity to coincidences in PET is not as great as that to 'single' events.

Types of coincidences


The type of event we have discussed so far, where two photons travel unimpeded to the detectors, is known as a 'true' event (Figure 3a). In an ideal system we would only want to record 'true' events. Unfortunately, not all events detected in PET follow this mechanism.



We mentioned earlier that if two photons are detected within 8-12 nanoseconds of each other, we consider that the disintegration happened on a line of response between those two detectors. However, there is a chance that two independent disintegrations have photons detected at the same time. This type of event is called a ‘random’ event (Figure 3b). The likelihood of a random event occurring is proportional to the width of our coincidence detection window, and the (singles) count rate at each detector. This means that in effect, the chances of a random coincidence are proportional to the square of the activity. So for example, if we set a wider coincidence window, there is a greater chance of two unrelated disintegrations occurring. Also if we have higher activity concentration there will be a shorter time between disintegrations, which means more disintegrations will fall within our coincidence window. It is important to note that random events cannot be distinguished from true events.
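The dependence of randoms on the window width and singles rates can be put into a quick numeric sketch. The expression R = 2τ·S1·S2 is the standard model for the expected randoms rate on a line of response; the window width and singles rates below are illustrative values, not figures from the text.

```python
# Illustrative sketch of the random coincidence rate on one line of
# response: R = 2 * tau * S1 * S2, where tau is the coincidence window
# width (s) and S1, S2 are the singles rates (counts/s) at the two
# detectors.  All numbers below are made up for illustration.

def randoms_rate(tau_s, singles_1, singles_2):
    """Expected random coincidence rate, in counts/s."""
    return 2.0 * tau_s * singles_1 * singles_2

tau = 10e-9          # a 10 ns coincidence window
s1 = s2 = 100_000    # singles rate at each detector

print(randoms_rate(tau, s1, s2))          # randoms at this activity
# Doubling the activity doubles each singles rate, so the randoms
# rate goes up four-fold -- the 'square of the activity' behaviour:
print(randoms_rate(tau, 2 * s1, 2 * s2))
```

Note how halving the window width halves the randoms rate, which is why crystals that permit a shorter coincidence window are so valuable.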

To reduce the number of random events in our images, we would ideally like to reduce the coincidence window width, and reduce the injected activity we give to the patient. Unfortunately, the width of our coincidence window is normally limited by the detector system, and a reduction of injected activity may lead to scanning times which are unacceptably long for our patients. This relationship between random events and count rate is interesting. Unlike in Nuclear Medicine imaging with gamma cameras, where the injected activity given to a patient is limited by the patient radiation dose, in PET the limitation to injected activity is often the equipment itself.

Other ways of reducing random events in our final images are to scan with limited collimation, which is known as 2D mode (see later section), and/or to correct our final images for random events. Again, this will be covered in a later section.

Figure 3:

(a) True, (b) Random, (c) Scatter, and (d) Multiple coincidences. Dotted lines show Lines of Response that may be recorded.


If one or more of the emitted photons are scattered, the line of response will be misplaced, leading to a 'scatter' event (Figure 3c). Scattered events can be a real problem in PET. As in SPECT, energy windows are used to minimise the number of scattered events in the final image by rejecting photons whose energy falls outside this energy window. Unfortunately, because of overall system performance, this energy window can be much larger than those used in standard nuclear medicine with gamma cameras. Typically these windows are set between 400 and 650 keV. It is quite common, therefore, for scattered photons to be included in the data, providing incorrect coincidence events. Unlike random events, the fraction of scattered events is not related to count rate. It can, however, be affected by the object that is being imaged and the radioactivity distribution within it.

Methods to reduce scattered photons include imaging with limited collimation in 2D mode (see later section), and correction post acquisition.


If we have two or more independent disintegrations causing the simultaneous detection of three or more photons, there can be uncertainty regarding the true coincidence. This is known as a ‘multiple’ event (Figure 3d). As with random events, multiple events are related to the coincidence timing window width of the system and the singles rate at each detector. Minimising the amount of multiple events in the final image can be achieved in the same way as minimising the number of random events.

Go to

DAT website play the powerpoint entitled ‘Positron Emission’ that explains the principles of Positron Emission and coincidence imaging. It also describes the different types of coincidences that can occur.

Go to

your Workbook (PET physics) and perform Exercise 1, also answer the questions given in Exercise 2, Exercise 3 and Exercise 4.

Positron Emission Tomography Imaging

In a typical PET scanner, instead of pairs of detectors, we have a ring of detectors for every transaxial slice through our object. Grouping detected lines of response for different angles provides us with projection data. Once we have this projection data we can then perform tomographic reconstruction to determine the transaxial activity distribution (Figure 4).

Figure 4:

Lines of response from ring detectors allow projection data to be acquired and used in tomographic reconstruction to create transaxial slices


Many rings of detectors are placed adjacent to each other to give an axial 'field of view' of between 15 cm and 20 cm. This axial field of view is commonly known as a 'bed position', because a standard whole body scan will need several images to be acquired with the bed in different positions to cover the required length of the patient (Figure 5).


Figure 5:

Several adjacent axial fields of view (bed positions) are scanned in wholebody PET investigations.

Go to

your Workbook (PET physics) and perform Exercise 5

A wide range of radionuclides are used in PET, many of which have half-lives that are much shorter than those used in SPECT (see Table 1). Such short half-lives often require a production facility (cyclotron) to be placed in close proximity to the PET imaging centre, unless generator-produced radionuclides such as Gallium-68 or Rubidium-82 are used. One exception to this is Fluorine-18, whose 110 minute half-life allows production facilities to be situated within a two-hour travelling distance of the PET imaging centre. This explains why Fluorine-18, and its tracer fluorodeoxyglucose (FDG), remains the most commonly used radionuclide in PET.
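A quick decay calculation shows why a two-hour journey is workable for Fluorine-18. This is a simple sketch; the 120 minute elapsed time is just the travelling limit mentioned above.

```python
# Fraction of activity remaining after radioactive decay:
# A(t)/A(0) = (1/2) ** (t / half_life)

def fraction_remaining(half_life_min, elapsed_min):
    """Fraction of the original activity left after elapsed_min of decay."""
    return 0.5 ** (elapsed_min / half_life_min)

# Fluorine-18 (110 min half-life) after a two-hour delivery journey:
print(fraction_remaining(110, 120))   # roughly 0.47 -- about half survives
```

For a nuclide such as Nitrogen-13 (about 10 min half-life) the same journey would leave essentially nothing, which is why those tracers need an on-site cyclotron.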

In addition to the half-life and production method, other properties are important in assessing the imaging properties of the radionuclide.

Nuclide        Half-life (min)   E max (MeV)   Mean range (mm)   Positron fraction (%)   Generator produced?
Carbon-11      20.4              0.959         -                 100                     No
Nitrogen-13    9.96              1.197         1.5               100                     No
Fluorine-18    110               0.633         0.6               97                      No
Copper-64      762               0.653         -                 18                      No
Gallium-68     68                1.898         2.9               -                       Yes
Rubidium-82    1.25              3.400         5.9               -                       Yes

Table 1: Characteristics of positron emitters used in Positron Emission Tomography

Energy (E max) and Range of Positrons

We mentioned earlier that the positron will have a number of interactions before coming to a standstill and interacting with a neighbouring electron. The energy of the positron dictates the distance it travels before it collides with the electron. In turn, this distance will determine the accuracy of locating the position of the atom which emitted the positron. So for a high energy positron, the two 511 keV photons created from the positron-electron collision are likely to be further from the atom than for a low energy positron. This means that radionuclides that emit high energy positrons will have poorer (larger) spatial resolution.

Though the average ranges of positrons given in Table 1 are quite large, in practice, because the positron undergoes a random winding path, the actual distance from the atom that the 511 keV photons are produced can be much smaller.

Positron Fraction

When an unstable nucleus undergoes a disintegration, there is a probability described by the ‘branching ratio’ which will determine how many of these disintegrations produce positrons. The percentage of disintegrations that produce positrons is known as the ‘positron fraction’. In SPECT, an analogy would be an isotope such as Iodine-131 which undergoes both gamma and beta decay.

In PET imaging, the positron fraction can affect the sensitivity of the system to a particular radionuclide. For example, in Table 1 we see that 97% of emissions from Fluorine-18 produce positrons whereas with Copper-64 this figure is only 18%. This means that for the same activity of radionuclide, we would need to image for (97/18) = 5.4 times longer with Copper-64 to record the same number of events.
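The scan-time comparison above can be written as a one-line calculation (the positron fractions are those given in Table 1):

```python
# Relative imaging time needed to match the counts of a reference
# nuclide, given the positron fraction (branching ratio) of each.

def relative_scan_time(fraction_ref_pct, fraction_other_pct):
    """How much longer the second nuclide must be imaged for equal counts."""
    return fraction_ref_pct / fraction_other_pct

# Fluorine-18 (97%) versus Copper-64 (18%):
print(round(relative_scan_time(97, 18), 1))   # ~5.4 times longer with Cu-64
```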

Go to

DAT website play the powerpoint entitled ‘Positron Emission Tomography’ that explains how coincidence imaging translates to PET imaging.

Go to

your Workbook (PET physics) and perform Exercise 6.

Key points:

Since the two photons from positron annihilation travel in opposite directions, and the resulting line of response gives positional information about the disintegration, physical collimation such as that used in SPECT is not necessary.

There are several types of coincidence event, some of which (scatter, random, multiple) are not valid when reconstructing image data.

The amount of unwanted coincidences is dependent on system design, and acquisition protocol.

The characteristics of PET radionuclides need to be considered when assessing their PET imaging qualities.

Go to

DAT website Revision Test 1 to assess your understanding of this section.





The equipment used to perform Positron Emission Tomography will be described in this section. System configurations will be explained, and the scintillation crystals used in past, present and future systems will be described.

The new technology known as ‘Time of Flight’ will also be discussed.


On completion of this section you will be able to:

Understand the composition of a Block Detector

Discuss the strengths and weaknesses of the scintillation crystals that have been used in PET

Explain how systems perform 2D and 3D PET and understand the advantages and disadvantages of each mode.

Understand the concept of ‘Time of Flight’.

Time Check:

Allow 3 hrs to complete the study of this subject and complete the exercises in your Workbook and on-line revision.

Detector Geometry

All commercial systems available at present use a full ring of PET detectors, although in the past partial ring systems that rotate around the patient, and gamma camera based PET systems have also been produced (Figure 6).

Figure 6:

Configurations of ring, partial ring, and gamma camera based PET systems.

In most commercial systems the rings of detectors are composed of elements known as block detectors.

Block Detector

Block detectors make use of the Anger principle seen in Nuclear Medicine gamma cameras. In a gamma camera we remember that a Sodium Iodide crystal, which normally sits behind a collimator, receives energy from the gamma radiation and releases the energy as a pulse of light. This scintillation crystal is coupled to photomultiplier tubes (PMTs) via lightguides, which convert the pulse of light into an amplified electric signal. The position of the scintillation is decoded by looking at the relative amount of light coming from each PMT.

Figure 7:

(a) Arrangement of scintillation crystals, lightguides and photomultiplier tubes. (b) The crystal which detected the gamma photon can be located using a numeric combination of the outputs from photomultiplier tubes A, B, C and D.

The block detector works on a similar principle. In this case, instead of one large crystal being decoded by all the PMTs in the system, in the GE Discovery DST system a grid of 6x6 crystals with dimensions of 6.3 x 6.3 x 30 mm is coupled to a lightguide, which is in turn coupled to four PMTs. The lightguide has slits of various depths that allow the PMTs to see different amounts of light from each crystal (Figure 7 (a)). The crystal that experienced the scintillation can then be determined by a simple numeric combination of the output of each PMT (Figure 7 (b)).
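The 'simple numeric combination' can be illustrated with the textbook Anger-logic ratios. Real scanners map these ratios onto individual crystals using calibrated look-up tables, so the formulas and PMT signal values below are a simplified sketch rather than any vendor's actual algorithm.

```python
# Simplified Anger-type position decoding for a block detector read
# out by four PMTs A, B, C, D (as in Figure 7(b)).  The position is
# estimated from how the scintillation light is shared between them.

def decode_position(a, b, c, d):
    """Estimate the scintillation position from four PMT outputs.

    Returns (x, y), each in the range -1..1 across the block face.
    """
    total = a + b + c + d
    x = ((b + d) - (a + c)) / total   # left-right light sharing
    y = ((a + b) - (c + d)) / total   # top-bottom light sharing
    return x, y

# Light shared equally: the event sits at the centre of the block.
print(decode_position(25, 25, 25, 25))   # (0.0, 0.0)
# Most light collected by PMT B: the event sits towards B's corner.
print(decode_position(10, 70, 10, 10))   # (0.6, 0.6)
```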

One of the advantages of this block system over the single crystal system used in gamma cameras is that because scintillations are only decoded by a subset of the total number of PMTs in the system, much higher count rates can be handled. The disadvantage of the block system is that precision is lost because there are a smaller number of PMTs decoding the position. In a current GE system there are 70 block detectors in a single ring, with four rings used to create the complete PET detector system.

Philips have recently developed and implemented an alternative to the block detector system called a pixelated detector matrix. This configuration uses small area crystal detector elements interfaced to a continuous light guide, which in turn is backed by a close packed arrangement of PMTs. The system has been optimized to locate the scintillation crystal where the event occurred while minimizing the deadtime in the area where the event happened.

Scintillation Crystals

The ideal scintillation crystal for PET systems should have:

- A high stopping power. The number of 511 keV gamma photons ‘stopped’ in the scintillation crystal to release energy (light) within the crystal should be maximized.

- High light output. Once energy has been deposited within the scintillation crystal, a crystal with a high light output will produce more light photons, which will in turn produce a larger signal with greater accuracy from the photomultiplier tubes. This improves energy resolution, which helps the system reject scattered photons more easily.


- A short decay time. Once an event produces scintillation within a crystal, a short decay time will allow the crystal to accept another event more quickly, allowing higher count rates to be achieved before deadtime effects occur. It will also allow a shorter coincidence timing window to be used, which rejects a greater number of random events.

- Good (small) energy resolution. A crystal with a small energy resolution value allows the system to use a narrower energy acceptance window, which helps reduce the number of scattered events accepted by the system.

Table 2 provides details of the properties of some PET scintillation materials.












Crystal   Density (g/cm3)   Effective Atomic No.   Light Output (photons/MeV)   Decay Time (ns)   Energy Resolution (%)
LYSO      7.1               65                     36,000                       42                10
LuAP      8.3               65                     12,000                       18                8
YAP       5.5               34                     17,000                       30                5
LaBr3     5.3               64                     61,000                       35                3.6

Table 2: Characteristics of scintillation crystals used in Positron Emission Tomography

Many crystals have been used in PET systems. Sodium Iodide (NaI) crystals are used in PET-capable gamma cameras and have also been used for dedicated PET systems. Though the crystal has a good energy resolution and a high light output, the decay time is not optimal, and the stopping power (shown in the table as a combination of density and effective Z) is poor. This explains why for a PET-capable gamma camera system a 1 inch (25.4 mm) crystal is used instead of the 3/8 inch (9.5 mm) crystal used in standard gamma cameras.

Bismuth Germanate (BGO) crystals were very common in PET systems until 2002. This crystal offers more favourable stopping power than sodium iodide, but at the expense of a marginally worse decay time and much poorer energy resolution.

Recent developments have led to the introduction of newer 'fast' crystals. LSO, LYSO and GSO are crystals with comparable stopping power to BGO and much improved decay times and energy resolution. LSO and LYSO also provide higher light output than many of their rivals. However, one of the disadvantages of the Lutetium-based crystals (LSO, LYSO) is that they are inherently radioactive, contributing a background count rate to the data.


Finally, the table shows a group of crystals with very good energy resolution, decay time and light output, albeit with lower stopping power (LuAP, YAP, LaBr3). These crystals are still in the laboratory rather than in commercial systems, but are being proposed as future PET scintillation crystals.

Go to

your Workbook (PET physics) and answer Question 7 and 8.

2D PET and 3D PET


In 2D PET, thin lead or tungsten septa are placed between each crystal ring to restrict coincidences (Lines of Response) to that slice (in-plane) or to closely neighbouring (cross-plane) slices (Figure 8 (a)). This data can then be easily reconstructed using reconstruction algorithms used in standard nuclear medicine e.g. OSEM.

Figure 8:

(a) PET acquired in 2D mode has septa introduced in front of the crystals to limit lines of response to in-plane or cross-plane events. (b) 3D PET removes the septa to allow all lines of response.

In 3D mode, the septa are removed to allow lines of response between all planes (Figure 8 (b)). Until recently, 3D data had to be reorganised into 2D lines of response before reconstruction using standard techniques. Now, new software algorithms and more powerful computer hardware allow these reconstructions to be performed using fully 3D reconstruction algorithms.

Further information on this reorganisation (rebinning) and on 3D reconstruction algorithms is given in a later section.

Impact on Sensitivity

The presence or absence of septa has a dramatic effect on sensitivity. Firstly, overall sensitivity increases massively when we move from 2D mode to 3D mode. On a modern PET system that is capable of both 2D and 3D imaging, sensitivity in 2D was measured as 1.4 counts/sec/kBq whereas in 3D mode the sensitivity increased to 6.6 counts/sec/kBq. This means that when performing imaging in 3D mode, much shorter scan times can be used for each field of view (section of the body imaged).
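The sensitivity figures quoted above translate directly into scan time per bed position. A rough sketch (the 180 second 2D bed time is a hypothetical example, not a figure from the text):

```python
# Comparing 2D and 3D sensitivity for the same scanner.  For equal
# recorded counts, scan time per bed position scales inversely with
# sensitivity.

def sensitivity_gain(sens_2d, sens_3d):
    """How many times more sensitive 3D mode is than 2D mode."""
    return sens_3d / sens_2d

gain = sensitivity_gain(1.4, 6.6)    # counts/sec/kBq, figures from the text
print(round(gain, 1))                # 3D is ~4.7x more sensitive
print(round(180 / gain))             # a hypothetical 180 s 2D bed -> ~38 s in 3D
```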

Another area where sensitivity is affected is the sensitivity profile as we move through the field of view (Figure 9). In 2D mode we see relatively consistent sensitivity. The sensitivity does, however, dip at the edges of the field of view, because lines of response are allowed between neighbouring slices, and as we approach the edge of the field of view neighbouring slices may not be available. The jagged pattern that is visible shows the relative change in sensitivity between in-plane and cross-plane coincidences.


Figure 9:

Sensitivity profiles of a modern scanner imaging in 2D and 3D mode.

When performing 3D PET we see a huge variation in the sensitivity profile. Again, this variation comes from the geometry of the system. At the centre of the field of view the system has many more lines of response available than at the edge of the field of view, where a smaller number of coincidences are available (Figure 10).

Figure 10:

(a) When a source is at the centre of the field of view, many more lines of response are possible than when the source is at the edge of the field of view (b).

One of the consequences of this sensitivity profile is the need to overlap fields of view when performing multiple frame imaging such as whole-body imaging.

With the reduction of sensitivity comes an increase in noise. So when imaging with multiple fields of view, instead of one field of view starting where the last one finishes, there needs to be an overlap of fields of view to maintain constant noise levels throughout the image. In 2D imaging, because cross-plane coincidences are often allowed for up to three planes, this overlap tends to be several slices. However, in 3D imaging to truly maintain constant noise levels, the overlap should ideally be half of the field of view. Fortunately, because sensitivity in 3D mode is high, very short scan times per field of view can be used to maintain acceptable scan times.

Although large overlaps are necessary in theory, present 3D reconstruction times can prohibit this solution. As a compromise smaller overlaps, typically around one third of the field of view are used.
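The effect of overlap on the number of bed positions can be sketched as follows. The 100 cm scan range is a hypothetical example; the 15 cm axial field of view and the one-third and one-half overlap fractions come from the text.

```python
import math

def bed_positions(scan_length_cm, axial_fov_cm, overlap_fraction):
    """Number of bed positions needed to cover scan_length_cm when each
    field of view overlaps its neighbour by overlap_fraction."""
    if scan_length_cm <= axial_fov_cm:
        return 1
    step = axial_fov_cm * (1.0 - overlap_fraction)   # fresh coverage per bed
    return 1 + math.ceil((scan_length_cm - axial_fov_cm) / step)

# Hypothetical 100 cm scan range with a 15 cm axial field of view:
print(bed_positions(100, 15, 1/3))   # compromise one-third overlap
print(bed_positions(100, 15, 0.5))   # ideal half-FOV overlap needs more beds
```

The larger the overlap, the more bed positions (and reconstructions) are needed, which is exactly the trade-off the compromise overlap addresses.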

Go to

your Workbook (PET physics) perform Exercise 9 and answer Question 10.


Scatter and Random Events

The removal of septa in 3D mode, in addition to increasing the sensitivity, also results in a massive increase in the number of scattered and random events. Indeed, in 3D PET up to 50% of all registered events can involve scattered photons.

Until recently this made 3D PET very difficult to perform. However, several recent developments have made good quality 3D PET imaging easier to achieve.

Faster system electronics and crystals with shorter decay times have helped reduce the amount of random and scattered events in the final image, with improved crystal energy resolution also allowing narrower energy windows to be used to reduce the amount of scattered events being registered. Correction for scattered events has also improved with more robust scatter correction methods now available to correct for the large levels of scattered events found in 3D PET.

Out of Field Activity

Something that can affect 3D PET quite dramatically is activity outside the field of view, e.g. a full bladder. In 3D PET, where there are no septa, it is important for the system to have adequate lead shielding at the ends of the axial field of view to stop out-of-field events contributing to the randoms rate within the field of view (Figure 11).

Figure 11: Shielding is placed at the ends of the axial field of view to prevent activity from outside the field of view, e.g. bladder or brain, increasing the number of random events detected.

Time of Flight

Time of flight is a process where the time each positron annihilation photon takes to reach the detector is used to help determine where the coincidence took place.

Traditionally in standard PET, as long as two photons are detected within a time deemed to be acceptable, the pair of events are said to be associated, with the positron annihilation occurring along a line of response between the two detected events (Figure 12 (a)). In time of flight, the difference in the time that the scintillation events were detected is noted and used to give a more accurate localisation of the positron annihilation. So instead of the event being somewhere along the line of response, it is said to be from a smaller portion of this line of response (Figure 12 (b)). The accuracy of this localisation is dependent on the timing resolution of the system, with faster crystals locating the annihilation to a smaller part of the line of response.

What time of flight achieves is an improvement in image noise which is proportional to the diameter of the object that is being imaged. This means for whole-body scanning, and particularly for large or obese patients, image noise should improve over what can be achieved with standard PET. This is particularly beneficial given that for these patients, a greater proportion of photons will be attenuated. However, for children, thin patients or for neurological imaging the advantage is less clear.

Figure 12:

(a) Assuming the two photons from the positron annihilation are detected as a coincidence, the annihilation event is deemed to occur along the line of response.

(b) The time difference between the two photons detected is used to localize the event to a subsection of the line of response (highlighted).
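The often-quoted relationship between timing resolution and localisation can be sketched in a few lines of Python. The 600 ps timing resolution and the square-root form of the noise gain are commonly used approximations, not exact results:

```python
import math

# Sketch of the commonly quoted time-of-flight numbers: the timing
# resolution limits how small a portion of the line of response the
# annihilation can be localised to, and the benefit grows with object size.
C_CM_PER_S = 3.0e10  # speed of light, cm/s

def tof_localisation_cm(timing_resolution_s):
    """Positional uncertainty along the line of response: dx = c * dt / 2."""
    return C_CM_PER_S * timing_resolution_s / 2.0

def tof_snr_gain(object_diameter_cm, timing_resolution_s):
    """Often-quoted estimate of the SNR gain: sqrt(D / dx)."""
    return math.sqrt(object_diameter_cm / tof_localisation_cm(timing_resolution_s))

dx = tof_localisation_cm(600e-12)        # 9 cm for a 600 ps timing resolution
gain_large = tof_snr_gain(40, 600e-12)   # large patient, roughly 2.1x
gain_head = tof_snr_gain(20, 600e-12)    # head, roughly 1.5x: less benefit
```

This is why the benefit is clearest for large or obese patients and less clear for children or neurological imaging, where the object diameter is smaller.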

Until 2007, time of flight was restricted to research systems in the laboratory. At the time of writing, however, commercial time of flight systems are available, with more likely to follow in the future.

Go to

DAT website, play the powerpoint entitled ‘2DPET_3DPET’ that shows the differences between 2D and 3D PET, including how differences in overlap can affect the image quality.

Go to

your Workbook (PET physics) and answer Question 11.

Key points:

The standard detector component in a PET system is a block detector.

Several scintillation crystals have been used in PET systems each with their own advantages and disadvantages.

There are two modes of scanning in PET: 2D mode, which uses inter-plane septa to reduce scatter and random events, and 3D mode, which does not use such septa.

Time of Flight is a technology that can localise a disintegration event to a small part of a line of response.

Go to

DAT website Revision Test 2 to assess your understanding of this section.





One of the main features of PET is its ability to quantify absolute activity in terms of activity per unit volume, e.g. kBq/ml. However, to achieve this, and to produce good quality images, a number of corrections are applied to the data before the images are finally displayed and reported. The aim of this chapter is to describe these corrections and what they achieve.


On completion of this section you will be able to:

Describe how corrections for random events can be made.

Describe the methods of scatter correction that can be applied.

Understand the effect of attenuation within PET images and how it can be corrected.

Know what normalisation corrections are and why they should be applied.

Time Check:

Allow 3 hrs to complete the study of this subject and complete the exercises in your Workbook.

Randoms Correction

We remember from Chapter 1 that random events are those where photons from two unrelated disintegrations are seen as a True event. Random events can occur anywhere in the field of view and are unrelated to the source distribution within the field of view. Because of this, images without randoms correction have reduced contrast and inaccurate activity concentrations.

The randoms count rate is proportional to the product of the singles rates of the two detectors and the coincidence timing window width. Shorter window widths therefore reduce the chance of random events being registered, and longer widths give a greater opportunity for random events to occur. Typically the coincidence window width is 3 – 4 times the timing resolution of the system, a compromise between sensitivity to True events and rejection of Random events. So for a crystal such as LSO, which has good temporal resolution, we get better rejection of random events than for a crystal such as BGO, which has poor temporal resolution. However, if the window becomes smaller than 3 – 4 nanoseconds, Time of Flight processing becomes necessary.

Though system and protocol design can help reduce the proportion of random events, some random events are still registered as True events. Corrections for random events are therefore necessary if an accurate representation of the source distribution is required. A description of some of the methods to correct for random events is given below.

Estimate from Singles Rates

Assuming that the distribution of activity does not change during imaging (except for radioactive decay), we can measure the singles rate on the detectors at a given time and use the relationship between singles rate, coincidence window width and random events to estimate the number of random events.
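This relationship is commonly written as R = 2·τ·S1·S2 for a detector pair, where τ is the coincidence window width and S1, S2 are the singles rates on the two detectors. A minimal Python sketch, with illustrative numbers:

```python
# Sketch of the singles-based randoms estimate for one detector pair:
#   R = 2 * tau * S1 * S2
# where tau is the coincidence window width and S1, S2 the singles rates.
def randoms_rate(singles_1_cps, singles_2_cps, window_s):
    return 2.0 * window_s * singles_1_cps * singles_2_cps

# A 4 ns window with 1 Mcps singles on each detector of a pair:
r_wide = randoms_rate(1.0e6, 1.0e6, 4.0e-9)    # 8000 randoms per second
# Halving the window (a faster crystal such as LSO) halves the rate:
r_narrow = randoms_rate(1.0e6, 1.0e6, 2.0e-9)  # 4000 randoms per second
```

The linear dependence on window width is why fast crystals, which allow narrower coincidence windows, reject randoms so much better.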

Delayed Coincidence Channel

This is a common method of determining random events. A coincidence event is registered when events are detected on two detectors within the defined coincidence window. With delayed coincidence channel randoms correction, one of these detectors is allowed to register an event a fixed time later (Figure 13). Since any event measured in this delayed channel paired with an event in the original coincidence channel must be a random event, we can determine the proportion of random events.

One of the problems with this method is that opening a delayed channel increases the dead time of the detector system. Furthermore, the random events measured in this way follow Poisson statistics and are often noisy, with this noise reproduced in the corrected data. Fortunately, methods have been proposed to reduce the level of noise in this delayed channel.

Figure 13:

A True coincidence is detected in Detectors A and B. The acceptance window on Detector B is open for a fixed time after the true event to capture a random event between Detectors A and B. This information is used to determine the fraction of random events.
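In its simplest form the correction is a subtraction: the counts in the delayed channel estimate the randoms contaminating the prompt channel. A minimal sketch, with invented counts for illustration:

```python
# Sketch of the delayed-channel correction: events in the delayed window
# must be random, so subtracting the delayed counts from the prompt counts
# leaves an estimate of the true (plus scattered) coincidences.
def trues_estimate(prompt_counts, delayed_counts):
    return prompt_counts - delayed_counts

def randoms_fraction(prompt_counts, delayed_counts):
    return delayed_counts / prompt_counts

p, d = 500_000, 150_000
net = trues_estimate(p, d)       # 350000 events after randoms subtraction
frac = randoms_fraction(p, d)    # 30% of the prompts were random
```

Because both channels are noisy, the subtraction adds their variances, which is exactly the noise problem mentioned above.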

Deadtime Correction

When an event enters a detector, the detector takes a fixed period of time to process it. If a second event enters the system while the first is still being processed, ‘pulse pile-up’ occurs. Pulse pile-up has two possible outcomes: the combined energy of the two events can cause both to fall outside the energy acceptance window and be rejected; or the pulses are detected but with an incorrect position and energy.

In addition to pulse pile-up each detector also needs time to be reset. If a second event comes into the detector during this time, the event is lost. The time describing the period where events are lost or incorrectly registered is known as the ‘Deadtime’.

For normal clinical activity concentrations, deadtime losses do not normally occur. However, for some 3D PET applications and for cardiac imaging using Rubidium-82, deadtime losses are likely. Nevertheless, it is standard practice for deadtime correction to be applied.


The simple and obvious way to perform a correction for deadtime would be to perform an experiment looking at the relationship between activity concentration and count rate, and then produce a reference table to correct for any count losses. However, this does not take into account how systems react to different source distributions, or the position of such distributions. A more common method of performing a deadtime correction is to use a mathematical model of the system and apply this model to the measured relationship between detected events and activity concentration.
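As an example of such a mathematical model, the sketch below uses the standard non-paralysable deadtime relationship m = n / (1 + n·τ). The 2 µs deadtime is purely illustrative, and real systems use more elaborate models fitted to measurements:

```python
# Sketch of a deadtime model: with a non-paralysable detector the measured
# rate m relates to the true rate n as m = n / (1 + n * tau). The correction
# simply inverts this relationship.
def measured_rate(true_rate_cps, deadtime_s):
    return true_rate_cps / (1.0 + true_rate_cps * deadtime_s)

def deadtime_corrected(measured_cps, deadtime_s):
    return measured_cps / (1.0 - measured_cps * deadtime_s)

tau = 2.0e-6                             # 2 us per processed event (illustrative)
m = measured_rate(50_000, tau)           # ~45455 cps actually recorded
recovered = deadtime_corrected(m, tau)   # back to ~50000 cps
```

At this illustrative rate roughly 9% of events are lost, and the correction recovers the true rate exactly because the same model is used in both directions.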

Scatter Correction

We remember that a scattered event is one where one of the annihilation photons scatters, resulting in a line of response away from where the disintegration occurred. Such events are quite common in PET, with around 15% of registered coincidences in 2D mode, and up to 50% of coincidences in 3D mode, being scattered events.

One of the easiest methods of reducing the percentage of scattered events is to use a narrower energy acceptance window. However, for crystals with poor energy resolution such as BGO, many scattered events will still be accepted into the energy window. Several approaches exist for correcting for scattered events.

Tail fitting

Once randoms correction has been applied, events found outside the activity distribution must be scattered events. Figure 14 shows the projection data we would obtain from a point source. If we fit a Gaussian or similar curve to the projection data, we can use the information in the tails of the distribution (shown by the dashed lines) as our estimate of scatter.


Figure 14:

A projection from a point source. The projection is fitted with a curve, with the areas outside the central portion used to calculate the number of scattered events.


This approach works well in neurological PET, where the activity distribution and tissue composition are simple. However, it works less well in the thorax, where the many different tissue types (bone, lung, soft tissue) make curve fitting difficult. It can also have problems where the object is large, because there can be very little ‘tail’ to work with.
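To make the idea concrete, here is a deliberately simplified sketch in which the scatter under the peak is taken as a constant level estimated from the tail bins. A real implementation would fit a Gaussian or similar smooth curve as described above:

```python
# Greatly simplified sketch of tail fitting: after randoms correction, any
# counts in projection bins outside the object must be scatter, so a level
# estimated from those tails can be subtracted from the whole profile.
def tail_scatter_correct(projection, object_start, object_stop):
    tails = projection[:object_start] + projection[object_stop:]
    scatter_level = sum(tails) / len(tails)
    return [max(c - scatter_level, 0.0) for c in projection]

profile = [5, 5, 5, 40, 120, 40, 5, 5, 5]   # point source sitting on a scatter tail
corrected = tail_scatter_correct(profile, 3, 6)
```

Note how the method depends entirely on having enough tail bins outside the object, which is exactly why it struggles with large patients.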


Convolution Methods

This method uses calibrations to determine how the system deals with scattered events. Using this information, the projection data is modified to give an estimate of the scatter in the image. Though this process is relatively simple in 2D PET, more complex versions of this method are needed for 3D PET.

This approach is good for simple anatomy and source distributions such as those in neurological PET. One of the weaknesses of the method is that it does not account for scattered events coming from outside the field of view.

Implementations of this method are used in some commercial scanners.

Simulation Based Approaches

Because the physics of photon interactions is well known, we can use computer based models and simulations to determine the scatter arising from imaging.

With good transmission scanning from CT or from a rotating transmission source, and an estimate of the source distribution from an initial image reconstruction we have all the components necessary for computer models to estimate the scatter contribution in the image. The models can be improved further by implementing methods to include scatter arising from activity out of the field of view.

Though computationally demanding, this approach to scatter correction works very well. One of the problems with the technique is that it becomes less accurate when patients are large, i.e. when there are a lot of scattered events.

Implementations of this method are used in some commercial scanners.

A more complex simulation approach uses Monte Carlo simulations to simulate both scattered and un-scattered contributions to projections. Instead of calculating the scattered contribution to a line of response like standard simulation techniques, this method actually simulates and tracks the annihilation photons from the annihilation event. Such approaches are hugely demanding on computer resources and because of this are not implemented on many systems at present.


No imaging system has a uniform response throughout all detector elements.

Whereas with gamma cameras we perform a uniformity correction to correct for non-uniformities within the detector, in PET we apply a ‘Normalisation’ correction to correct for non-uniformities in each line of response.

The standard approach to performing a normalisation correction is to irradiate all the detector pairs using a rotating rod source within the gantry (normally Germanium-68), or a cylindrical source filled with a positron-emitting radionuclide. The inverse of the output for each detector pair (line of response) is then used to correct the imaging system for non-uniformities.

One of the problems with performing normalisation corrections is that, to achieve good counting statistics for all lines of response, the scanning time needs to be long. In addition, the scatter environment is not typical of that found in a clinical PET scan. For 2D PET, however, where the number of lines of response is relatively low and the scatter correction relatively simple and robust, this approach to normalisation is sufficient. 3D PET requires more complex algorithms, where the response of each detector rather than of each line of response is analysed, or models of true and scatter responses are built into the normalisation.
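The basic arithmetic of a normalisation correction is simple, as this sketch shows: each line of response gets a factor equal to the inverse of its relative response to the uniform calibration source (counts are illustrative):

```python
# Sketch of a normalisation correction: a uniform source irradiates every
# line of response (LOR); the inverse of each LOR's response relative to the
# mean becomes its correction factor.
def normalisation_factors(calibration_counts):
    mean = sum(calibration_counts) / len(calibration_counts)
    return [mean / c for c in calibration_counts]

def apply_normalisation(measured, factors):
    return [m * f for m, f in zip(measured, factors)]

calib = [90.0, 100.0, 110.0, 100.0]         # unequal response to a uniform source
factors = normalisation_factors(calib)
flat = apply_normalisation(calib, factors)  # normalising the calibration data
                                            # itself gives a uniform result
```

The practical difficulty is not the arithmetic but collecting enough counts in every line of response so that the factors themselves are not noisy.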

An example of what Normalisation correction does to an image is shown in Figure 15. The attenuation corrected image on the left shows ring artefacts within the transaxial plane and general non-uniformity in the coronal plane, which are a direct effect of the non-uniformities in each line of response. The image on the right, which has been corrected for attenuation and has had a normalisation correction applied, does not show these ring artefacts. The horizontal lines seen in both coronal slices are a result of the differing sensitivities of in-plane and cross-plane coincidences in 2D PET, which were also discussed in an earlier section.

Figure 15:

Attenuation corrected images without Normalisation on the left, and with Normalisation on the right. Transaxial and coronal slice views clearly show non-uniformities in response without Normalisation.

Attenuation Correction

In PET, for an event to be registered, both of the photons from the positron annihilation need to be detected. If one of the photons is absorbed within the object, the coincidence event will not be recorded. Even though the energy of the photons in PET is much higher than that used in general nuclear medicine with gamma cameras, the need for both photons to be detected for the event to be registered means that the effects of attenuation are far greater in PET than in general nuclear medicine.


When we look at attenuation in PET, we find that it depends on the combined path of both photons. This means that for any given line of response, the attenuation is independent of the source position, i.e. the attenuation of a coincidence arising at the centre of the field of view is exactly the same as that arising from a source at the edge of the field of view (Figure 16).


Figure 16:

The attenuation from an event occurring at the centre of the field of view on a given line of response is exactly the same as that near the edge of the field of view.
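This independence follows directly from the exponential law of attenuation: the two photons together always cross the full tissue thickness, so the product of their survival probabilities is the same wherever the annihilation occurred on the line. A short numerical check, using an approximate soft tissue value of µ ≈ 0.096 /cm at 511 keV:

```python
import math

# Sketch of why PET attenuation is independent of source position on a line
# of response: the two photons together always traverse the full thickness D,
# so exp(-mu*d) * exp(-mu*(D-d)) = exp(-mu*D) for any split d.
def pair_survival(mu_per_cm, d1_cm, d2_cm):
    return math.exp(-mu_per_cm * d1_cm) * math.exp(-mu_per_cm * d2_cm)

MU = 0.096   # /cm, approximate soft tissue value at 511 keV
D = 30.0     # total tissue path along the line of response, cm
central = pair_survival(MU, 15.0, 15.0)  # source at the centre
edge = pair_survival(MU, 1.0, 29.0)      # source near the edge
```

This is also why PET attenuation correction only needs the total attenuation along each line of response, not the position of the source on it.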

Until recently, attenuation correction was performed using transmission sources within the system’s gantry. However, with the advent of PET/CT, using CT to create attenuation maps for PET data has become the norm. Each method of performing attenuation correction is described below.

Attenuation Correction using transmission sources

If we have a rod source within the PET gantry which rotates around the ring, then when the source is at a particular point we can measure the attenuation from the source (typically Ge-68) to the nearest (adjacent) ring detector and to any detector which lies at the other side of the patient (Figure 17 (a)). If we perform this scan with and without an object/patient present, we can use the ratio of the two to give the attenuation due to the patient.

Figure 17:

(a) A source within the detector ring creates coincidences in the neighbouring detector and an opposing detector. (b) Several sources are used in some systems to maximise count statistics. (c) In systems based on singles rates, the detector adjacent to the source is shielded.
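The correction itself is just the blank-to-transmission ratio applied along each line of response, as in this sketch (counts are invented for illustration):

```python
# Sketch of transmission-based attenuation correction: the ratio of the
# blank scan (no patient) to the transmission scan (patient in place) gives
# the attenuation correction factor (ACF) for each line of response.
def attenuation_correction_factors(blank_counts, transmission_counts):
    return [b / t for b, t in zip(blank_counts, transmission_counts)]

def correct_emission(emission_counts, acf):
    return [e * f for e, f in zip(emission_counts, acf)]

blank = [10_000.0, 10_000.0, 10_000.0]
transmission = [5_000.0, 2_500.0, 8_000.0]  # more attenuation, fewer counts
acf = attenuation_correction_factors(blank, transmission)  # [2.0, 4.0, 1.25]
corrected = correct_emission([100.0, 50.0, 160.0], acf)
```

Because the factors are ratios of two count measurements, noise in either scan propagates straight into the corrected image, which is why the segmentation described below is often applied.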


To achieve good count statistics with this approach to attenuation correction, a long scan time or a strong source is required. However, long scan times are impractical and strong sources give large deadtime losses, particularly when adjacent to a detector. To overcome this problem, up to three sources are used, located at different positions in the gantry (Figure 17 (b)). In some systems the detector nearest the source is shielded so that only the singles rate on the side opposite the source is detected and used for attenuation correction (Figure 17 (c)). This method is used for systems where non-PET sources such as Cs-137 are used.


Even when using multiple sources for attenuation correction, the resulting attenuation files can be noisy for scanning times appropriate for clinical PET, with the noise translating into the corrected images. To overcome this noise, the transmission maps are often segmented (separated) out into different tissue types e.g. bone, soft tissue, and lung with the appropriate attenuation coefficient values given to these tissue types.

When performing attenuation correction in clinical studies it is normal to perform the scan without the patient (known as the blank scan) at the beginning of the day. Once the patient is injected and ready to be imaged, the patient passes through the scanner having interleaved emission and transmission scans. So, for example, the patient will have an emission scan of 5 minutes, then a transmission scan of 2 minutes, before moving on to the next bed position.

Attenuation Correction using CT

CT is simply a map of attenuation values created by rotating a transmission source (in this case an X-ray tube) and CT detector around an object. It is relatively simple therefore to use CT for PET attenuation correction. Compared to the transmission scanning described above, CT offers better counting statistics and spatial resolution, and faster scanning times. However CT attenuation correction is not without its problems.

The speed of CT means that lungs are imaged at a particular phase of the respiratory cycle. The heart may also be imaged at a particular phase of the cardiac cycle. However, in PET, because imaging takes place over several minutes, time-averaged images of the heart and lungs are acquired. This mismatch in the speed of the two imaging modalities can lead to misregistration of the CT and PET data, which in turn creates inaccurate attenuation correction of the PET data. For particular applications cardiac or respiratory gating may be required; however, for whole-body oncology PET scans this attenuation inaccuracy is rarely clinically relevant.

Another property of CT is that the average photon energy is typically between 60 – 80 keV. This is very different from the 511 keV of the photons used in PET and traditional PET transmission imaging. Algorithms do exist to convert attenuation at CT energies to attenuation at PET energies, typically using a bi-linear relationship (Figure 18). However, if CT contrast agents are used this relationship fails and more complicated algorithms are necessary.
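A sketch of such a bilinear conversion is given below. The break point at 0 Hounsfield Units matches Figure 18, and the value of µ for water at 511 keV is approximate; the slope used above water is an assumed, illustrative number, since real scanners use calibrated, kVp-dependent values:

```python
# Sketch of a bilinear conversion from CT numbers (HU) to 511 keV
# attenuation coefficients. Constants are illustrative, not calibrated.
MU_WATER_511 = 0.096   # /cm, approximate soft tissue value at 511 keV
BONE_SLOPE = 5.0e-5    # /cm per HU above water -- assumed, for illustration

def mu_511_from_hu(hu):
    if hu <= 0:
        # straight line from air (-1000 HU, mu = 0) to water (0 HU)
        return MU_WATER_511 * (hu + 1000.0) / 1000.0
    # a shallower slope above water, because bone's attenuation rises less
    # steeply at 511 keV than at CT energies
    return MU_WATER_511 + BONE_SLOPE * hu

mu_air = mu_511_from_hu(-1000)   # 0.0
mu_water = mu_511_from_hu(0)     # 0.096
mu_bone = mu_511_from_hu(1000)   # 0.146 with the assumed slope
```

Contrast agents break this scheme because their high HU values come from high atomic number, not high density, so the bone branch of the curve over-estimates their attenuation at 511 keV.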

Dense objects in CT such as prostheses or dental work can cause so-called ‘Beam-Hardening’ artefacts, which result in streaking in the CT data. This in turn causes artefacts in the attenuation corrected data. However, these artefacts rarely have a clinical impact, particularly in modern CT systems where methods are employed to reduce the artefacts caused by such objects.


Figure 18:

A bilinear relationship exists between CT and PET attenuation, with the change in relationship occurring around zero Hounsfield Units.

Calculated Attenuation Correction

This method is similar to the Chang attenuation correction method used in SPECT. Assuming that the object has a uniform attenuation coefficient, the algorithm uses operator-defined boundaries on the object to correct for the attenuation within these boundaries. Typically used for neurological PET, this method can lead to errors in the attenuation correction because of the differences between the attenuation coefficients of bone, sinus and brain.

Go to

DAT website, play the powerpoint entitled ‘Corrections’, which describes some of the corrections performed in PET.

Go to

your Workbook (PET physics), perform Exercises 12, 13 and 14.

Key points:

Random events cause noise throughout the field of view, the magnitude of which is dependent on the singles rate and coincidence timing window.

Correction can be performed by looking at singles rates, or taking measurements during the scan to estimate the randoms rate.

Scattered events can be corrected by looking at projection data to estimate the amount of scattered events, or through simulations of scatter distributions or individual photons themselves.

To correct for non-uniformities between detectors and lines of response, a normalisation correction should be applied.

Attenuation correction can be performed using isotope transmission sources or CT. Each method has its advantages and disadvantages.

Go to

DAT website Revision Test 3 to assess your understanding of this section.





Creating 3D image sets from the PET projection data requires tomographic reconstruction. The basics of filtered back projection will be mentioned, although because most reconstructions performed in PET use iterative methods, emphasis will be put on these methods. The aim of this chapter is to describe the principles of the reconstruction methods used in PET, highlighting their strengths and weaknesses.


On completion of this section you will be able to:

Describe the principles of iterative reconstruction methods.

Understand how filtered back projection works, and where such reconstruction methods can be useful.

Know how projection data from 3D PET can be reconstructed with 2D reconstruction algorithms.

Understand the concepts and advantages of fully 3D iterative reconstruction.

The following section is duplicated in both the SPECT and PET subjects as it is applicable to both: If you have already completed this section you should only spend a little time to revise the section, noting that there are very small differences compared to SPECT and then simply proceed to the section marked ***. In this section there will be further discussion of iterative reconstruction relevant to PET.

Time check:

Allow 3 hours to read and understand the following section on iterative reconstruction plus 4 hours to complete the related exercises.


Iterative reconstruction: general principles

There are many ways in which iterative reconstruction can be performed, but all have similar approaches. The most commonly used method is called maximum likelihood reconstruction, often referred to as ML-EM reconstruction (or its accelerated form, OS-EM). These refer to particular algorithms, but you do not need to worry about what the names stand for. Rather than concentrate on a specific method, it is initially more useful to understand what is meant by iterative reconstruction in a general sense.

Definition: EM – expectation maximization algorithm

Let us start once again by considering a simple analogy:


I am sure most of you have tried to thread a small needle with thread. Think how this is usually done. You try to push the thread through the eye of the needle, but usually find that, on the first attempt, the thread hits the needle edge. When you see this happen you tend to consciously make a small adjustment to the position of your fingers and you try again. Often it takes several attempts, making small adjustments each time until you eventually thread the needle. This process of repeating what you do, making small adjustments based on what you observe to be the problem is what you can consider as an iterative process. Typically the process gets you closer and closer to a solution the more times you try. We refer to this as converging on a solution.

Another example is the following game:

Try this!

Think of a number between 1 and 100, but do not tell your opponent what the number is. Your opponent must guess the number in the minimum number of attempts, and all you can answer is ‘high’ (if the guess is too high) or ‘low’ (if the guess is too low). You will find that the person will automatically make guesses based on your answers that gradually get closer and closer to the correct number. Try this and see how long it takes to converge to the solution.

Note that the change in number gets smaller the closer you get to the correct answer too. So stopping after a number of iterations will typically be quite close to the correct answer whereas the initial guess can be quite wrong. This really is very similar to what happens in iterative reconstruction. As you go through this game think of what happens at each step: we will analyse this afterwards.

Proceed now to play the game with a colleague: take notes to remind yourself what is happening. Note down each of the guesses so you can plot these on a graph afterwards. How many guesses did it take to reach the correct answer?

Try the game a second time; did you reach the solution with the same number of guesses; how many steps did it take to be within 5 of the correct number?


After you complete the game consider what steps were taken:
a) Your colleague (let’s call him Bill) selects a number
b) You make an initial guess and tell Bill
c) Bill compares your guess with the number he has selected and checks if it is too high or too low
d) Bill tells you which direction to move
e) You revise your guess in this direction and choose another number
f) You tell Bill the new number
g) ……. and so on, continuing to use steps c) to f) in a loop until you reach the correct number, then you stop.
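Steps c) to f) above are exactly a loop that a computer can run. The sketch below plays the game by always guessing the middle of the remaining range, which is why the guesses converge so quickly:

```python
# The guessing game played by a computer: each guess halves the remaining
# interval, converging on the answer just as described in the text.
def guess_number(secret, low=1, high=100):
    guesses = []
    while True:
        guess = (low + high) // 2   # guess the middle of what remains
        guesses.append(guess)
        if guess == secret:
            return guesses
        if guess < secret:
            low = guess + 1         # the answer was "too low"
        else:
            high = guess - 1        # the answer was "too high"

history = guess_number(73)   # six guesses: 50, 75, 62, 68, 71, 73
```

For any number between 1 and 100 this strategy needs at most seven guesses, and each correction is smaller than the last, just as in your own game.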

Note !

It will be useful to refer back to this game as the steps taken with an iterative reconstruction are very similar; the only difference is that the computer is trying to simultaneously guess thousands of numbers i.e. the voxel activity at each point in the object being imaged!

Go to

your Workbook PET section question 15 and answer the question relating to iterative reconstruction.


Prepare for an Exercise

Iterative Reconstruction


An alternative to filtered back projection is iterative reconstruction techniques.

These involve taking progressively smaller steps that gradually get closer to the correct (or an acceptable) solution.

In iterative reconstruction, projections estimated from the current solution are compared with the measured projections, and the current solution is modified to minimise the difference. Although iterative reconstruction is quite complex, the principle can be easily understood since it is something that you automatically use in various situations (e.g. threading a needle).


On completion of this exercise you should be able to explain what is involved in iterative estimation.

Time Check:

Allow 2hrs to complete this exercise.


You want to measure the weight of some object using a balance. This operates by placing weights on one side of the balance until the object to be measured is balanced. If you have a balance or weighing scales where weight must be added you can perform the following experiment.

However you can make your own simple balance by arranging a ruler so that it is firmly supported in the centre by attaching it to a pen. This can act like a balance.

Place an object on one side of the balance and it will tip to one side. Place two light cups one on each side so that the ruler is balanced as shown.

Place a small object in one cup so that the ruler tips (make sure the cups are securely fixed to the ruler).

Now, using a syringe add water to the other cup so that the ruler is again balanced.

Measure the volume added step by step in order to get the ruler to balance (this may require adding or removing water as you get close to balance).

The final volume of water (in mL) will equal the mass of the object (in g), since the density of water is 1 g/mL.


Figure: 19.

Write the steps that you have taken to find the volume showing how you used an ‘iterative’ technique.

Draw a diagram or flowchart to illustrate the steps and decisions.

You should show how the final result involves ‘iteration’

Go to

your Workbook PET section question 16 and answer the question relating to iterative reconstruction.


Maximum likelihood reconstruction

Consider the figure below (Figure 20). What we are trying to determine is the activity distribution in the patient. We can start off with a guess as to what this distribution looks like, such as the initial number that you guessed. One way of guessing would be to simply perform a simple back projection, without any filtering. We know it is wrong but remember that it did look a little like the correct reconstruction. In this case however you have to have some means to determine if the guess makes sense. This is done by using the guess to estimate what the projections would look like i.e. what would the detectors have seen if the guessed object was imaged. It would be similar to what was measured (i.e. the sinogram) but would not be the same. You can then compare the estimated projections, based on your guess, with the real measured projections and use this comparison to alter the result.

This is like the number game: when the opponent guesses a number you compare the number with the correct answer and tell him / her if the number is too high or too low. Your opponent used this information to alter their initial guess so as to move closer to the correct number. In the same way the iterative reconstruction program uses the difference between actual measured projections and your estimated projections to alter the initial guessed distribution of activity so as to get closer to the correct activity distribution.

When you get to the correct solution the difference between the estimated and true projections will ideally be zero, or at least very small (in the case of the number game the solution is reached when the revised guess equals the number originally chosen). The whole process is repeated, using the difference between estimated and true projections to alter the estimate at each iteration.

Figure 20:

General iterative reconstruction (left) involves both back projection (BP) and forward projection (FP) as illustrated (right). For PET the Forward Projection (FP) involves simply summing the estimated counts along the line-of-response (this corresponds to the area between opposite detectors from which annihilation photons could be detected in coincidence by those detectors).


We see from the figure that iterative reconstruction really involves two steps: back projection (as used previously in filtered back projection) and the opposite process of estimating what the projections are, given a reconstructed object. This opposite process is known as forward projection.

Let us first consider a simplified model of what happens when you acquire an image. In the simplest case we assume that photons travel directly to opposite detectors and are only registered if they are detected in coincidence. This is illustrated in Figure 21 (a).

Of course, in practice we know things are more complicated, but even this simple model can be used for reconstruction (in fact these are the same simplifications assumed in filtered back projection).

All we need to know for the iterative reconstruction is: what is the probability (chance) of photons from a particular object voxel being detected at a particular ‘pair’ of detectors?

Figure 21a.

Unlike SPECT, each detected event involves a pair of opposite detectors.

For photons originating from a single object point only certain pairs of detectors will be able to measure a coincidence (e.g. detectors marked a1 or a2) whereas the possibility of detection by other pairs is zero (detectors marked b).

This set of probabilities can be stored and is called the system matrix. It looks like the figure below, except that in practice there are thousands of object voxels and thousands of detector pixels. For a particular object voxel (horizontal row) you can check along the row to find the probability that a photon would be detected at a particular detector pixel ‘pair’ (vertical column).


Figure 21b.

System matrix, which lists the probability that a photon emitted from a particular object voxel will be detected at any detector pixel ‘pair’.

Note that this matrix can be extremely large as each voxel in the object must be linked to every possible pair of pixels in the detector.

You can immediately see that, if we want to accurately estimate what a detector would measure, given a radioactive patient in front of it, then we need to estimate exactly what happens to the photons as they pass through tissue (including their attenuation). Provided we know the attenuation at each pixel, the exact attenuation along any path can be calculated. Therefore attenuation can be included in the forward projection step, as well as in the back projection step. Unlike filtered back projection, where the filtering is needed to correct for errors, iterative reconstruction will converge to a reasonable estimate of the imaged activity distribution, provided that the emission of photons, their transport through tissue and their ultimate detection are accurately modelled and included. In fact other factors can also be included, such as scatter or random coincidences, but we do not need to consider that here.


In summary the steps of the iterative reconstruction are as follows:

1. Make an initial guess of the activity distribution (often the starting image is completely uniform)

2. Using this guess, use forward projection to estimate what the projections would be for this distribution.

3. Compare the estimated projections with the actual projections measured on the detector.

4. Use the difference between estimated and true projections to alter the previous estimate of the activity distribution (usually involving further back projection).

Note that for ML-EM reconstruction the ratio between measured and estimated projections is back projected.


5. Return to step 2 and continue until the difference in step 3 is very small or stop after a defined number of iterations.
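The five steps above can be sketched in a few lines of code. This is a toy, hypothetical illustration only: the 2×2 system matrix `A`, the ‘true’ activity and the noise-free measured projections are invented for the demonstration and bear no relation to a real scanner.

```python
import numpy as np

# A[d, v] is the (hypothetical) probability that a photon from object
# voxel v is detected by detector pair d.
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])
true_activity = np.array([4.0, 1.0])
measured = A @ true_activity          # noise-free projections for the demo

estimate = np.ones(2)                 # step 1: uniform initial guess
sensitivity = A.sum(axis=0)           # ML-EM normalisation term
for _ in range(50):                   # step 5: repeat a fixed number of times
    forward = A @ estimate            # step 2: forward projection
    ratio = measured / forward        # step 3: compare (ratio for ML-EM)
    estimate *= (A.T @ ratio) / sensitivity   # step 4: back project the ratio

print(np.round(estimate, 2))          # converges towards [4. 1.]
```

Note that, as the text says, ML-EM back projects the ratio of measured to estimated projections rather than their difference.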

It is worth noting the following about the iterative reconstruction:

No filtering is necessary to reach a solution, although a smoothing filter is frequently used to control noise, usually applied after reconstruction.

It is relatively easy to incorporate more detailed information about attenuation or other physical factors; this is difficult using filtered back projection.

A disadvantage with iterative reconstruction is that it does take several iterations to reach an acceptable solution; often many iterations. Each iteration takes at least as much time as a single filtered back projection; therefore iterative reconstruction is much slower than filtered back projection. With fast computers and efficient reconstruction programs this is no longer considered a major problem.

An advantage of iterative reconstruction such as the maximum likelihood method is that the final image has a different noise appearance, with reduced noise in areas with low counts and virtually no streak artifacts. The images therefore are very appealing for many clinical applications.

Figure 22:

Streak artifacts become prominent at low counts for FBP, however the reconstruction is reasonable using EM.

When to stop?

Iterative reconstruction is different from filtered back projection since you do not necessarily have a filter that controls noise. You need to understand what happens during reconstruction so that you can choose an appropriate stopping point for the iterative process. Figure 23 illustrates the images obtained at different iteration numbers in a reconstruction. You can see that at early iterations there is little detail and the image is blurred. As the number of iterations increases, the image gets sharper but also noisier. The effect is a little like increasing the cutoff frequency of a smoothing filter.


How then, in the clinical situation does one control the type of image that you end up with?

There are two approaches:

1. As can be seen from the figure, you can stop the iterative procedure at a relatively small number of iterations, at a point where the image looks reasonably sharp but the noise is reasonably controlled (for example 15 – 20 iterations).


2. An alternative is to perform a larger number of iterations, using a fixed number for all studies, and then apply a post-reconstruction smoothing filter, choosing an appropriate cutoff frequency as usual.

It is suggested that the second approach provides better reconstruction, but in practice many centres simply stop with an early number of iterations.

Figure 23.

EM reconstruction of a thorax phantom for different numbers of iterations.

At low iteration numbers little detail is visible; at high iteration numbers the image becomes noisy.

Improving speed: What is OS-EM?

As pointed out earlier, a problem with iterative reconstruction is the much longer processing time compared with filtered back projection. For example, 20 iterations of ML-EM takes approximately 40 times as long as filtered back projection.

Fortunately there are means of accelerating iterative reconstruction, the most commonly used being ordered subsets EM reconstruction, or OS-EM. The difference between OS-EM and ML-EM is easy to understand. In normal ML-EM, each time you forward-project, back-project and update the reconstruction, these calculations are performed for all projection angles (e.g. 1, 2, 3, …, 64). If instead you select only a subset of size 4 that uses projection angles 1, 17, 33 and 49, and use these to forward and back project, the update takes far less time.

The surprising fact is that the update to the reconstruction is almost identical whether you use a small subset of projection angles or all angles (see Figure 24).

Figure 24:

The same reconstruction as before is illustrated using ML-EM versus OS-EM with a subset size of either 4 or 2. The results are visually identical, demonstrating accelerations of 15 and 30 respectively.

A different subset of the projection angles is used for each subsequent update (referred to as a sub-iteration), until all projection angles have been used (which we still refer to as one iteration). So in this example, if we had 64 projections, one complete iteration would involve 64/4 = 16 updates, and so would reach the same point as 16 standard ML-EM iterations, but in 1/16 of the time.

Figure 25 illustrates a simplified example for 8 angles with a subset size of 2. In this case 1 OS-EM iteration is equivalent to 4 ML-EM iterations. Note that the subsets are chosen in an ordered fashion so as to maximize the new information being added each sub-iteration: hence the name ‘ordered subsets’.
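The selection of ordered subsets might be sketched as follows (an illustrative assumption: real scanners may order the subsets differently). With 0-based numbering, the first subset for 64 projections and a subset size of 4 contains the same angles as the text’s 1-based example (1, 17, 33, 49).

```python
# Sketch of choosing ordered subsets of projection angles. Each subset
# contains angles spread evenly around the full set of projections.
def ordered_subsets(n_projections, subset_size):
    n_subsets = n_projections // subset_size
    return [list(range(s, n_projections, n_subsets)) for s in range(n_subsets)]

print(ordered_subsets(8, 2))        # [[0, 4], [1, 5], [2, 6], [3, 7]]
print(ordered_subsets(64, 4)[0])    # [0, 16, 32, 48]
```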


Figure 25:

The top diagram illustrates that during each ML-EM iteration forward and back projections are computed at all angles (in this case 8 angles); using OS-EM with a subset size of 2, four sub-iterations (updates) occur during the single iteration, each time using a different pair of projections.

Note that in practice you need to choose the number of subsets (= number of projections / subset size) and the number of iterations; the effective number of ML-EM iterations is equal to the product of these:

i.e. number of ML-EM iterations = number of subsets × number of iterations.

Note that if you choose the subset size too small there can be problems, so do not choose a subset size of less than 4. Also take care if the reconstructed image looks too noisy: if there is a large number of acquired projections (e.g. 128) and you choose a subset size of 4, this will result in 32 subsets. One iteration will then be equivalent to 32 iterations of ML-EM, which is already quite noisy. Choosing instead one iteration with a subset size of 8 would result in 16 ML-EM-equivalent updates and would look smoother.
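The bookkeeping described above can be written out as a small helper function (the function name is our own invention):

```python
# Effective number of ML-EM-equivalent updates for an OS-EM run,
# using the terminology of the text:
# number of subsets = number of projections / subset size.
def effective_mlem_iterations(n_projections, subset_size, n_iterations):
    n_subsets = n_projections // subset_size
    return n_subsets * n_iterations

# 128 projections, subset size 4, 1 iteration -> 32 equivalent updates (noisy)
print(effective_mlem_iterations(128, 4, 1))  # 32
# subset size 8 instead -> 16 equivalent updates (smoother image)
print(effective_mlem_iterations(128, 8, 1))  # 16
```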

Note, however, that it is common to apply a smoothing filter (Butterworth or Gaussian) after reconstruction to smooth the final results.

Note: take care not to confuse subset size and number of subsets!


Prepare for an Exercise

Using Iterative Reconstruction


As with filtered back projection it is important to experiment with iterative reconstruction in order to better understand how to obtain optimal images. For this exercise you should if possible use the iterative reconstruction available on your own system. However as an alternative you can use the on-line exercise.

The appearance of images after iterative reconstruction is somewhat different from those obtained using filtered back projection; in particular, the noise control is different: increasing the number of iterations produces sharper but noisier images.


On completion of this exercise you should be able to choose appropriate parameters to perform iterative reconstruction.

Time Check:

Allow 2hrs to complete this exercise.


Locate the operator manuals or instructions for your system computer and check if there is a section on iterative reconstruction: read this thoroughly and discuss the option with your supervisor. Then locate the options on your computer system. To undertake the experiment you will need to locate a clinical study that has been undertaken on your system. Make sure that the study has already been analyzed for clinical use and backed up and preferably use a copy of the original study.

If you cannot use your own system then log in to the DAT website (ref) and locate the exercise there; you will find specific instructions for use of the reconstruction exercise.

Having located the reconstruction tool on your system select the study that you wish to process. Use your workbook to take notes as you proceed through the following exercises:

To start, reconstruct the study using the normal filtered back projection and filter parameters used clinically; you can use this for comparison.

Note and record the possible options for iterative reconstruction on your system:


What parameters can you change (number of iterations, number of subsets, subset size, filter parameters, type of reconstruction (FBP, ML-EM, OS-EM) etc)?

First select to reconstruct without any filter. Try reconstructing with FBP and ML-EM if available, then proceed to try reconstructions with OS-EM using different parameters. Try to store the results so you can compare your findings.

After you have experimented without a filter then try to filter some of the reconstructions using different parameters.

Go to

your Workbook PET physics section question 17

Based on your findings answer the following questions in your workbook:

Vary subset size (or number of subsets) and iteration number:

- if you halve the number of subsets but double the iteration number you should get the same results: is this true?

Comment on the image appearance compared to FBP.

What happens when you go to a very high number of iterations?

What parameters appear to give you the best results?

Comment on the speed of reconstruction

Try to identify a clinical study where there are obvious streak artefacts and reconstruct this study using OS-EM. What happens to the streak artefacts?

Key Points:

Iterative reconstruction involves matching estimated projections, based on the current estimate of the reconstruction, with the original measured projections.

Unlike filtered back projection, no ramp filter is required in iterative reconstruction.

The advantages of iterative reconstruction are that noise is less pronounced in background regions, streak artifacts are reduced, and complex corrections (e.g. for non-uniform attenuation) can be directly incorporated.

Using ordered subsets can greatly speed up the computation; choose a number of iterations so that the product of number of subsets x number of iterations gives the desired number of ML-EM iterations.



Please start here if you have skipped the previous section

Filtered Back Projection

Though filtered back projection is rarely used in PET, it is useful to have a quick reminder of the processes involved in this methodology. If we imagine that we have a distribution of activity within our PET scanner, parallel lines of response can be used to give us profiles of this distribution at different angles (Figure 26 (a)). In practice we do not know the distribution of activity within the scanner, but we do have lines of response giving us our profiles. So, if we back-project our profiles for many different angles, adding the contributions from each projection together, we recover information about our source distribution (Figure 26 (b)).

Figure 26:

(a) Parallel lines of response from a source distribution can give profiles of that distribution. (b) Adding profiles together can be used to reconstruct an unknown source distribution.

The example we have given in Figure 26 shows only a few projections; however, we can already see that although we start to recover the source distribution, the process of back projection adds a lot of artefactual blurring to our data. The way we remove this blurring is to filter the data before, during, or after the back projection process.

If we consider filtering before back projection, the filtering process adds negative counts to our profile (Figure 27) so that when we reconstruct much of the blurring is removed. More on Filtered Back Projection is given in the SPECT and CT modules.
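The appearance of negative values after filtering can be demonstrated with a short sketch, here applying a ramp filter in frequency space to a point-source profile. This is a simplified illustration; real implementations window the ramp and handle sampling more carefully.

```python
import numpy as np

# A point-source profile of 64 samples with the source at sample 32.
profile = np.zeros(64)
profile[32] = 1.0

# Ramp filtering in frequency space: multiply the profile's spectrum by
# the absolute spatial frequency, then transform back.
freqs = np.fft.fftfreq(64)
filtered = np.real(np.fft.ifft(np.fft.fft(profile) * np.abs(freqs)))

print(filtered.min() < 0)               # True: negative lobes appear
print(int(filtered.argmax()) == 32)     # True: the peak stays at the source
```

The negative lobes either side of the peak are exactly the negative counts described above; when many such filtered profiles are back projected, these lobes cancel the blurring that unfiltered back projection would leave behind.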


Figure 27:

(a) Profile from a point source without filtering (b) with filtering the profile gains negative values which through back projection reduces streak artefacts.

Filtered Back Projection (FBP) is rarely used in PET because OS-EM or other iterative reconstruction algorithms provide better ways of applying corrections within the reconstruction algorithm itself. Iterative algorithms are also better suited to differentiating hot spots in a low activity background whereas reconstruction with FBP can produce star (streak) artefacts making localisation of such ‘hotspots’ more difficult.

However iterative reconstruction techniques also have their own problems. The accuracy of iterative reconstruction depends on the number of iterations used, and to achieve full convergence (accuracy) throughout the image often requires a large number of iterations. As well as being time consuming, performing reconstruction in this manner can result in images that appear very noisy. In practice, what often happens is that the number of iterations used is reduced to give the best scan appearance and not the best scan accuracy.

This uncertainty in the accuracy of iterative reconstruction methods means that for applications where quantitative accuracy is particularly important, Filtered Back Projection is used. Examples of such areas are cardiac studies, or studies investigating the kinetics of a particular tracer.

3D PET Reconstruction

In 3D PET, image reconstruction is not as simple as in the 2D case. For 2D PET, lines of response are limited to within planes (or neighbouring planes), which allows standard FBP or OS-EM algorithms to be implemented in a similar way to SPECT (Figure 28 (a)). However, in 3D PET there are many more possible lines of response, many of which are not perpendicular (at right angles) to the detector ring (Figure 28 (b)). This means that reconstruction cannot be performed in the standard way.

A further problem is that some of the lines of response cannot be used, with much of the data not measured because of the finite length of the field of view (Figure 28 (c)). Because of this loss of data, the data are said to be ‘truncated’.

Fortunately methods have been devised which allow us to reconstruct 3D PET data, many of which are based on the reconstruction methods we have already described.


Figure 28:

(a) In 2D PET all measured lines of response can be used with standard within-plane reconstruction such as OS-EM and FBP; (b) 3D PET has lines of response at oblique angles to the detector which cannot be reconstructed using standard methods; (c) some lines of response in 3D PET are not measured.

An early attempt at reconstructing 3D data based on filtered back projection is known as the Reprojection reconstruction algorithm. This technique consists of performing standard Filtered Back Projection on the within plane (2D) lines of response, and ‘reprojecting’ (forward projecting) these data to estimate the lines of response that were not measured because of truncation. After combining this measured and estimated data together, reconstruction using a 3D implementation of Filtered Back Projection is performed.

One of the problems found with 3D datasets is that they are very large and can require a lot of computational power to process. Until recently one of the ways around this problem was to re-bin or reassign the information into 2D projections for 2D slices. Two methods of achieving this are in common use.

One method, called ‘Single Slice Rebinning’ (SSRB), takes oblique lines of response between detectors (i.e. lines of response that are not within a detector plane) and assigns each line of response to the plane midway between those detectors. This can distort images away from the centre of the field of view, so it is only really used for neurological or small animal PET. A more reliable solution is called Fourier Rebinning (FORE). Performed in Fourier (frequency) space, this method takes oblique lines of response and looks at them in terms of their contributions to 2D (in-plane) coincidences. For both of these methods, once the data are rebinned, reconstruction can be performed using standard Filtered Back Projection or OS-EM algorithms.
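The SSRB assignment can be sketched in one line: an oblique line of response is simply attributed to the plane midway between the two detector rings. The ring numbers here are hypothetical illustration only.

```python
# Sketch of single slice rebinning (SSRB): an oblique line of response
# between detector rings ring1 and ring2 is assigned to the plane midway
# between them.
def ssrb_plane(ring1, ring2):
    return (ring1 + ring2) / 2.0

print(ssrb_plane(3, 3))   # 3.0  (in-plane LOR stays in its own plane)
print(ssrb_plane(2, 6))   # 4.0  (oblique LOR assigned to the middle plane)
```

The midpoint is only correct for sources near the scanner axis, which is why SSRB distorts images away from the centre of the field of view.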

Improvements in computer technology, in terms of disk space and processing power, have recently allowed demanding 3D iterative reconstruction methods to be implemented on commercial systems. As we have learned, OS-EM and other iterative reconstruction methods allow models of scatter, attenuation etc. to be built into the estimate the algorithm makes during reconstruction. Fully 3D iterative algorithms also incorporate the 3D nature of the data into the system model, removing the need to rebin the data. Though reconstruction by these methods often takes several minutes to complete, no assumptions about the data are made – unlike rebinning and reprojection algorithms. This helps preserve the noise characteristics of the data, producing better quality images.


Go to

DAT  website  play the powerpoint entitled ‘3D_Reconstruction’, which describes the different reconstruction methods used in 3D PET.

Go to

your Workbook (PET physics), and perform Exercise 18, and answer questions

19 and 20.

Key points:

Filtered back projection is rarely used in PET although for imaging where accurate quantification is necessary, FBP may be preferred.

Many lines of response in 3D cannot be measured because of the finite length of the axial field of view.

3D reconstruction can be performed using reassignment into 2D planes with standard reconstruction algorithms; a hybrid 3D FBP reconstruction method known as reprojection; or more recently using fully 3D iterative reconstruction methods.

3D PET reconstruction methods can take much longer to perform than 2D PET reconstructions.

Go to

DAT website Revision Test 4 to assess your understanding of this section.





One of the great advantages of PET is the ability to quantify the uptake of tracer in vivo. In simple terms this can be a matter of looking at the activity concentration or Standardised Uptake Value in a lesion. At a more complex level, quantification of dynamic PET data can lead to molecular information about physiological processes. The aim of this module is to explain the processes that are involved in PET quantification.


On completion of this section you will be able to:

Understand how the conversion is made from measured disintegrations to activity concentration.

Describe the ideas behind kinetic modelling, and compartmental modelling.

Know the concept of Standardised Uptake Value (SUV), understand how it is measured, and the errors that can be associated with such measurements.

Time Check:

Allow 3 hrs to complete the study of this subject and complete the exercises in your Workbook.

From Measured Disintegration Events to Activity Concentration

Once we have removed bias in the data by performing all the corrections described in a previous section, we can convert recorded disintegration events to the activity concentrations (kBq/ml) in our source distribution. Such a conversion is possible once we have performed what is known as an ‘Activity Concentration Correction’ or a ‘Well Counter Correction’.

In some clinical systems an activity concentration is not readily available, with these values converted directly into Standardised Uptake Value (SUV). It is still important to know about the conversion to activity concentration, however, since it is used to calculate SUV.

The Activity Concentration Correction follows a similar methodology to the sensitivity correction that may be performed for calculating percentage uptake in thyroid scintigraphy. A water-filled cylinder has a known amount of activity added to it so that we can calculate the activity concentration (activity/cylinder volume) within the phantom. After imaging over a time period sufficient to produce low noise images, the data are reconstructed and the corrections applied (i.e. for randoms, scatter, deadtime, attenuation, normalisation) to give us a PET image of measured disintegration events. It is then a simple process to calculate a conversion factor from events recorded per pixel per second to activity concentration.
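As a sketch, the calibration factor might be derived as follows; all the numbers (cylinder activity, volume, count rates) are hypothetical.

```python
# Sketch of deriving an activity concentration (well counter) calibration
# factor from a uniform cylinder scan. All values are hypothetical.
activity_kBq = 40_000.0        # activity added to the cylinder (kBq)
volume_ml = 6_000.0            # cylinder volume (ml)
true_conc = activity_kBq / volume_ml            # known kBq/ml in the phantom

mean_counts_per_voxel_per_s = 12.5              # from the corrected image
calibration = true_conc / mean_counts_per_voxel_per_s  # (kBq/ml) per (count/voxel/s)

# Applying the factor to a value from a corrected patient image:
patient_counts_per_voxel_per_s = 3.0
print(patient_counts_per_voxel_per_s * calibration)    # activity conc. in kBq/ml
```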


Normally the calibration of activity concentration is made against the radionuclide calibrator that measures the pre- and post-injection activities used in clinical studies. In turn, the accuracy of the radionuclide calibrator is normally maintained by calibration to nationally traceable activity standards.

For kinetic modelling (which will be described later in this section), the activity concentrations measured in imaging often have to be compared to the activity levels measured in blood samples taken from the patient. Since the blood samples are often measured in a small sample ‘well counter’, the cross-calibration between scanning and blood sample activity levels is called a ‘well counter calibration’.

The terms activity concentration correction and well counter correction are often used to mean the same thing irrespective of whether the calibration is to a dose calibrator, or less commonly a well counter.

In theory a different activity concentration calibration should be performed for each radionuclide used on the scanner. This is because the positron fraction differs between radionuclides. So, for example, with Fluorine-18, 97% of the disintegrations lead to positrons, whereas for Gallium-68 only 88% of disintegrations give positrons. This is further complicated when other gamma emissions, which may lead to measured true events, are involved. In practice, however, many vendors apply factors to convert measurements made from a Fluorine-18 calibration to other radionuclides.
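A minimal sketch of such a conversion, using the positron fractions quoted above (the scaling direction and function name are our own illustrative assumptions, not a vendor’s method):

```python
# Positron (branching) fractions quoted in the text.
positron_fraction = {"F-18": 0.97, "Ga-68": 0.88}

def conversion_factor(from_nuclide, to_nuclide):
    # A nuclide with a lower positron fraction yields fewer coincidences
    # per disintegration, so the calibration factor must be scaled up.
    return positron_fraction[from_nuclide] / positron_fraction[to_nuclide]

print(round(conversion_factor("F-18", "Ga-68"), 3))  # 1.102
```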

Kinetic Modelling

The ability of PET to quantify activity concentrations within the body, and the availability of PET tracers which can be based on physiologically occurring elements such as carbon (¹¹C), nitrogen (¹³N) and oxygen (¹⁵O), has made PET a useful tool for imaging physiological processes in vivo.

‘Kinetic modelling’ takes the information we can gain from PET a step further.

If we have a tracer that is designed to measure a particular physiological process, using dynamic imaging of the tracer and mathematical models, we can determine the underlying processes that affect the uptake and metabolism of a tracer.

An example of this from general nuclear medicine would be the Renogram.

Dynamic imaging of the kidneys using MAG3 or DTPA allows us to look at the relative function of each kidney, as well as more complex parameters such as the time it takes the tracer to pass through the kidney e.g. the transit time.

Furthermore, if imaging with DTPA, we can also extract information on the Glomerular Filtration Rate (GFR) from our data.

The standard way to describe a kinetic model is in terms of compartments and the movement of tracer into and out of these compartments. A compartment can take a physical form such as an organ, tissue or cell type, or, be different chemical forms within the same tissue e.g. metabolised FDG and unmetabolised FDG.


A simple model would be one such as the injection of colloid in general nuclear medicine. After injection, colloid is available in plasma which is then taken up by a particular tissue e.g. a lymph node. Once in that tissue, the colloid is trapped and does not return to the plasma.

This can be explained by the diagram in Figure 29 (a). Cplasma defines the concentration of colloid in the plasma, CT describes the concentration of colloid in the tissue, and k1 describes the transfer of colloid from plasma into the tissue.

In compartmental modelling terms, k1 is known as a ‘rate constant’, and the tissue T can be thought of as a compartment. So, the example of the injected colloid is a non-reversible (because the tracer does not leave the compartment once it is in that compartment) single compartment model.

Practically, what we do is use dynamic imaging and ROI analysis over our tissue to get information about the change in tracer concentration within the tissue over time. To get information about the concentration of tracer within plasma we take multiple blood samples at different time points during our dynamic imaging. Using a centrifuge to separate the plasma from the blood, we can then put the samples into a well counter to determine the concentration at the given time points. With both pieces of information it is then possible to determine the rate constant and therefore the flow of tracer from plasma to the tissue in question.
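The irreversible single compartment model of Figure 29 (a) can be simulated with a few lines of code. This is a sketch under assumed values: the rate constant k1 and the mono-exponential plasma curve are hypothetical.

```python
import numpy as np

# Irreversible one-compartment model: dC_tissue/dt = k1 * C_plasma(t).
k1 = 0.1                       # rate constant (1/min), hypothetical
dt = 0.01                      # time step (min)
t = np.arange(0, 60, dt)
c_plasma = np.exp(-0.05 * t)   # assumed mono-exponential plasma clearance

c_tissue = np.zeros_like(t)    # tissue starts with no tracer
for i in range(1, len(t)):     # simple Euler integration of the model
    c_tissue[i] = c_tissue[i - 1] + k1 * c_plasma[i - 1] * dt

# Analytic solution for comparison: (k1/0.05) * (1 - exp(-0.05 t))
analytic = (k1 / 0.05) * (1 - np.exp(-0.05 * t))
print(np.allclose(c_tissue, analytic, atol=0.01))  # True
```

In practice the problem runs the other way: c_plasma comes from the blood samples, c_tissue from the dynamic images, and a fitting routine adjusts k1 until the model curve matches the measured one.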

Figure 29:

(a) An irreversible one compartmental model where tracer flows in but not out, (b) a reversible one compartmental model with tracer flow in and out of compartment, (c) a reversible two compartment model.

In some instances, tracer that flows into a compartment may also flow out. This reversible single compartment model is shown in Figure 29 (b). Another, more complicated, model is the one shown in Figure 29 (c). In this model the tracer can flow into and out of one particular compartment T1, and any tracer in T1 can also flow into and out of another compartment T2.

In PET, the most common kinetic model used is the one which explains glucose metabolism using ¹⁸F FDG. A model explaining how FDG is metabolised in the body is shown in Figure 30. The metabolised FDG concentration Cm is normally what we are interested in when we look at uptake within tumours, or activity within the brain. Fortunately the transfer of metabolised FDG out of tissue (k4) is small, meaning much of the activity that is metabolised is trapped within the tissue. This trapping of FDG in tissue, allowing the measurement of glucose metabolism, is the reason for the success of FDG PET imaging.


Figure 30:

Three compartment model showing how FDG is metabolised in the body


Standardized Uptake Value (SUV)

Characterisation of glucose metabolism can be very important in PET because it allows us to determine the aggressiveness and therefore the significance of lesions within the body. This can be particularly important when assessing treatment response, where, by imaging on several occasions we can assess changes in tumour activity.

The requirement of taking blood samples and placing numerous regions of interest on images means that full kinetic modelling is rarely used in routine clinical practice. To overcome this, the concept of ‘Standardized Uptake Value’ was derived, allowing glucose metabolism to be approximated by placing one region over the feature of interest.

If we return to Figure 30, we can explain how SUV is used to approximate glucose metabolism. The concentration of metabolised FDG (Cm) within the feature of interest is strongly related to glucose metabolism. However, placing a region of interest over the feature we want to look at will give us the combined concentrations of metabolised (Cm) and non-metabolised FDG. If imaging occurs at a time point greater than 45 minutes after the injection of FDG, it is known that the amount of non-metabolised FDG in tissue is small. It can therefore be ignored. This leaves just the concentration of metabolised FDG that we are interested in.

Once we have Cm, we need to correct for the concentration of FDG that was initially available in the body. This is because higher activity concentrations within the body lead to increased metabolism of FDG. It is normally assumed that the available activity concentration can be calculated by dividing the injected activity by the weight of the subject. So to calculate our ‘Standardized Uptake Value’ (SUV) we simply divide the activity concentration in our region of interest by (injected activity / weight). Typically on a PET viewing station, the ROI tool can be set up to give a direct measurement of SUV automatically.
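The SUV calculation described above can be sketched as follows. The values are hypothetical, and the function assumes 1 g of tissue occupies about 1 ml, which makes SUV dimensionless.

```python
# Sketch of the basic (body-weight) SUV calculation.
def suv(roi_conc_kBq_per_ml, injected_activity_MBq, weight_kg):
    # Available concentration ~ injected activity / body weight.
    # MBq/kg is numerically equal to kBq/g, and with ~1 g/ml tissue
    # density this matches the kBq/ml units of the ROI value.
    available_conc = injected_activity_MBq / weight_kg
    return roi_conc_kBq_per_ml / available_conc

# 350 MBq injected into a 70 kg patient; lesion ROI reads 25 kBq/ml:
print(suv(25.0, 350.0, 70.0))  # 5.0
```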

Some people have questioned the assumption that the initial activity concentration is reflected by activity/weight. Body fat does not hold FDG and so it is suggested that instead of dividing injected activity by weight, the activity should be divided by lean body weight (i.e. weight excluding body fat) or body surface area.


The advantage of excluding body fat from SUV calculations is particularly important when comparing tumour SUV in different patients, where the amount of fat may differ, or when monitoring treatment effectiveness in a single patient by imaging at multiple time points. During treatment the patient's weight may vary while the lean body mass remains constant. Since the amount of available FDG is better reflected by the non-fat measure, SUVs corrected by lean body mass would be a better value to use. Both lean body weight and body surface area can be calculated using knowledge of the patient's height and weight.

Other corrections

Since metabolism of FDG is not the same as metabolism of glucose, correction factors can be used to give a more accurate representation of glucose metabolism. Concentrations of normal glucose within plasma can also affect FDG metabolism, and it has been suggested that this should be corrected for. In practice, however, neither correction is commonly performed.

Variability of SUV

Many factors can affect the value of SUV that is calculated.

Activity: SUV is calculated from the injected activity. It is important to measure and record accurately the pre-injection activity of the syringe and its time of measurement, and the post-injection (residual) activity and time.
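As a sketch of the decay arithmetic involved (using the 109.8 min half-life of Fluorine-18; the activities and times are hypothetical), the net injected activity can be referred to a common time point:

```python
import math

# Net injected activity from pre- and post-injection syringe measurements,
# decay-corrected to the injection time. F-18 half-life = 109.8 min.
HALF_LIFE_MIN = 109.8

def decay_correct(activity_MBq, minutes_elapsed):
    # Positive minutes_elapsed decays the activity forward in time;
    # negative values correct a later measurement back to injection time.
    return activity_MBq * math.exp(-math.log(2) * minutes_elapsed / HALF_LIFE_MIN)

pre = 400.0        # MBq, measured 10 min before injection
post = 20.0        # MBq residual, measured 5 min after injection
net = decay_correct(pre, 10) - decay_correct(post, -5)
print(round(net, 1))
```

Getting the times wrong by even a few minutes changes the net activity, and therefore the SUV, by several percent, which is why accurate recording is stressed above.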

Weight (and Height): Weight should be measured and recorded using accurate weighing scales since this value is used to correct SUV for the initial FDG concentration. Height should also be measured and recorded accurately if lean body mass or body surface area versions of SUV are used.

Time (post injection) of imaging: Although the amount of non-metabolised FDG is relatively small after 45 minutes, when performing multiple studies on the same patient it is a good idea to keep the time between injection and imaging constant, so that changes in lesion SUV can be compared.

Plasma Glucose Levels: Non-FDG glucose within plasma will affect the metabolism of FDG and therefore SUV. It can be corrected for, but in practice only a small difference in SUV is found.

Scanner and Acquisition Protocol: Different scanners with different calibrations, different acquisition protocols, and different reconstruction algorithms will also have a subtle effect on SUV. This is particularly important to remember if a patient who has already been scanned at a different centre comes to be scanned in your centre. All these factors should be kept consistent wherever possible.

Physiological Factors: Many physiological factors that may not be relevant to the scan report, e.g. the systemic condition of the body, can also affect SUV.

Go to

DAT website and play the powerpoint entitled ‘Kinetic Modelling’, which describes concepts used in Kinetic Modelling and SUV.


Go to

your Workbook PET physics and perform exercises 21, 22, 23 and 24.

Key points:

A valid activity concentration correction is necessary to convert detected coincidence events to activity concentrations.

Kinetic modelling can be used to explore the underlying processes involved in the uptake of tracers during physiological processes.

Standardized Uptake Value (SUV) provides an easy to measure indication of glucose metabolism.

It is important when measuring SUVs that height and weight are recorded correctly, and for follow-up scanning that scanning time post injection is kept the same.

Go to

DAT website Revision Test 5 to assess your understanding of this section.



System Performance


To understand how to make the best use of PET, it is important to know the capabilities and limitations of the system. The aim of this module is to introduce the measures which help us describe and understand the abilities of PET systems.


On completion of this section you will be able to:

Understand the concept of spatial resolution and how it can be affected by system design and physical processes

Describe the term Noise Equivalent Count Rate (NECR) and how it is composed

Know how the scatter fraction is derived in PET imaging

Understand how sensitive different PET systems are

Time Check:

Allow 3 hrs to complete the study of this subject and complete the exercises in your Workbook and on-line revision tests.

Spatial Resolution

Spatial resolution is normally defined as the ability to distinguish two neighbouring point sources. Typically it is measured by placing a small point source of activity at several places within a particular slice (Figure 31 (a)). It is important to note that a source is not placed in the exact centre of the field of view, because all possible lines of response would pass through that point, resulting in a false measure of resolution. From the reconstructed images, profiles are drawn through the source position and the resolution is defined as the width of the profile at half, and at a tenth, of the maximum value (Figure 31 (b)). These are known as the Full Width Half Maximum (FWHM) and Full Width Tenth Maximum (FWTM) respectively. Two measures of resolution are then made within each transaxial slice plane: Tangential and Radial (Figure 31 (c)).

Spatial resolution is also measured in the axial direction, i.e. along the length of the bore. However because this is limited by the size of the detector element in this direction, several images of the point source are made with very small shift after each one to make up the profile from which Axial resolution can be calculated.
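The FWHM/FWTM measurement described above can be sketched as follows; linear interpolation between the samples either side of the half- (or tenth-) maximum level is our assumed implementation detail.

```python
def width_at_fraction(positions, counts, fraction):
    """Width of a single-peaked profile at `fraction` of its maximum,
    using linear interpolation between samples. `positions` and
    `counts` are equal-length lists."""
    peak = max(counts)
    level = fraction * peak
    i_peak = counts.index(peak)

    def crossing(indices):
        # Walk away from the peak until the profile drops below `level`,
        # then interpolate between the last point above and first below.
        prev = i_peak
        for i in indices:
            if counts[i] < level:
                f = (counts[prev] - level) / (counts[prev] - counts[i])
                return positions[prev] + f * (positions[i] - positions[prev])
            prev = i
        raise ValueError("profile does not fall below the level")

    left = crossing(range(i_peak - 1, -1, -1))
    right = crossing(range(i_peak + 1, len(counts)))
    return right - left

def fwhm(positions, counts):
    """Full Width Half Maximum of the profile."""
    return width_at_fraction(positions, counts, 0.5)

def fwtm(positions, counts):
    """Full Width Tenth Maximum of the profile."""
    return width_at_fraction(positions, counts, 0.1)
```

For a Gaussian profile of standard deviation sigma, this recovers the analytic values FWHM ≈ 2.355 sigma and FWTM ≈ 4.292 sigma.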


Figure 31:

(a) Point source positions used to measure spatial resolution, with the central point 1 cm from the centre of the field of view and external points at 10 cm. (b) Profiles used to measure Full Width Half Maximum (FWHM) and Full Width Tenth Maximum (FWTM). (c) Definitions of the Tangential and Radial resolutions.

Many factors limit/affect spatial resolution, some of which are given below.

Positron Range

The energy of the emitted positron has a distinct effect on the distance the positron travels before it undergoes annihilation with a neighbouring electron. This ‘path length’ affects the spatial resolution possible with different positron emitters. For example, Fluorine-18 has a maximum range in water of 2.4 mm, whereas for Rubidium-82 this range becomes 14.1 mm. In practice, however, the relative loss of resolution is limited because of the random tortuous (winding) route of the positron before annihilation and the distribution of positron energies. The average positron energy is approximately one third of the maximum possible energy, and this is reflected in the average positron range. Figure 32 demonstrates this with images of the heart using Fluorine-18 and Rubidium-82.

Figure 32:

Example of how positron length influences spatial resolution with the short path length of (a) Fluorine-18 and the longer path length of (b) Rubidium-82


Non-colinearity

Previously we said that the two photons arising from the positron annihilation travel in directly opposite directions, i.e. that they are co-linear. In fact, because the positron and electron are not completely motionless when they collide, they carry momentum that causes the two annihilation photons to be emitted at an angle that may not be exactly 180 degrees.

Typically any deviation from 180 degrees is less than 0.5 degrees, which can limit PET resolution to 2-3 mm. Interestingly, this means that if sufficient photons could be gathered in SPECT, the spatial resolution of SPECT could actually be better than that of PET.


Distance between two detectors

This is related to the lack of co-linearity in PET. If the two photons travel at an angle of e.g. 180.5 degrees, the deviation over a small distance is smaller than over a large distance. Typically for a small-animal scanner the deviation can be approximately 0.3 mm, whereas for a modern large-bore whole-body PET scanner it can be around 2 mm.
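The scaling of this deviation with detector separation is often captured by a simple rule of thumb, FWHM ≈ 0.0022 × D, where D is the distance between the two detectors (the ring diameter). The constant is a common approximation on our part, not a value given in this text, but it reproduces the magnitudes quoted above.

```python
def noncolinearity_fwhm_mm(detector_separation_mm):
    """Approximate resolution loss (mm FWHM) from photon
    non-colinearity; 0.0022 x D is a common rule of thumb."""
    return 0.0022 * detector_separation_mm

small_animal = noncolinearity_fwhm_mm(150.0)  # ~15 cm ring
whole_body = noncolinearity_fwhm_mm(800.0)    # ~80 cm ring
```

This gives roughly 0.33 mm for a 15 cm small-animal ring and 1.76 mm for an 80 cm whole-body ring, consistent with the ~0.3 mm and ~2 mm figures above.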

Intrinsic Spatial Resolution and Depth of Interaction

The widths (w) of the individual crystals themselves can limit the spatial resolution possible in the system. Geometrically the coincidence response between two detectors changes from a Full Width Half Maximum (FWHM) of w/2 at the central point between two detectors to w at a point adjacent to the detector (Figure 33).

Figure 33:

The coincidence profile from two crystals changes from a FWHM of w/2 at the midpoint between the detectors to w at a point adjacent to one of the detectors.

The angle of incidence of the photon on the detector can also affect the spatial resolution through an effect called the ‘depth of interaction’ effect. When the photon interacts with the detector at an angle perpendicular to the crystal face (Figure 34 (a)), the apparent width of the crystal is much narrower than when the photon interacts at an oblique angle (Figure 34 (b)). For a typical detector of width 4 mm and depth 3 cm in a standard PET bore 80 cm in diameter, spatial resolution can be degraded by 50%, even at 10 cm from the centre of the field of view.
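The independent blurring contributions discussed so far (detector width, positron range, non-colinearity) are often combined in quadrature to estimate the overall resolution. Both this additive-in-quadrature model and the 0.0022 × D non-colinearity term are standard approximations, not figures from this text.

```python
import math

def combined_fwhm_mm(detector_width_mm, positron_blur_fwhm_mm,
                     ring_diameter_mm):
    """Approximate overall FWHM near the centre of the field of view:
    detector term w/2, positron-range blur, and non-colinearity
    (~0.0022 x ring diameter), added in quadrature."""
    detector = detector_width_mm / 2.0
    noncolinearity = 0.0022 * ring_diameter_mm
    return math.sqrt(detector ** 2
                     + positron_blur_fwhm_mm ** 2
                     + noncolinearity ** 2)
```

For a 4 mm crystal, a small F-18 positron blur of about 0.5 mm and an 80 cm ring, this gives roughly 2.7 mm, comparable to the resolutions quoted for modern whole-body scanners.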

Figure 34:

The width of the detector to a point source perpendicular to the detector face (a) is much narrower than that when the source is off-centre (b).


Reconstruction Parameters

In tomographic reconstruction we can define parameters that modify the amount of smoothing applied in the image. This smoothing can alter the measured spatial resolution.

Typical results from resolution measurements of a modern PET scanner are given in Figure 35.


Figure 35:

Transaxial and Axial profiles for a point source positioned in a mid-slice, 1 cm from the centre of the field of view. Also shown are the radial, tangential and axial spatial resolutions for a point source positioned 1 cm and 10 cm from the centre of the field of view. Note that for the 1 cm source, the radial and tangential results are equivalent.


Sensitivity

As in general nuclear medicine, sensitivity in PET is given as the count rate (for true events) for a given activity. It is typically around 7 counts/second/kBq for a modern scanner working in 3D mode imaging Fluorine-18, and can be several times less in 2D mode. The scintillation crystal used in the scanner also has an effect on the system sensitivity. When comparing the sensitivities of crystals in PET, it is important to remember that, because two photons must be detected, sensitivity is proportional to the square of the stopping power of the crystal.

Another factor that can have an effect on sensitivity is the positron fraction of the isotope being used. For radionuclides used in PET, not all radioactive events lead to the emission of a positron. The percentage of events (disintegrations) that lead to a positron being emitted is known as the ‘positron fraction’. So for example, if we had 100 MBq of Fluorine-18 in a phantom, 97% of the disintegrations would lead to positrons, whereas if we imaged with Copper-64, far fewer of the disintegrations (18%) would produce positrons. This would result in us needing to scan for (97/18 = 5.4) times longer with Copper-64 than with Fluorine-18 to achieve the same count levels in the image, or we could inject 5.4 times more activity. The latter can often be prohibitive.
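The worked example above amounts to a one-line calculation:

```python
def scan_time_factor(positron_fraction_a, positron_fraction_b):
    """Factor by which scan time (or injected activity) must increase
    when moving from isotope A to isotope B to keep the same counts,
    assuming equal injected activity and identical geometry."""
    return positron_fraction_a / positron_fraction_b

factor = scan_time_factor(0.97, 0.18)  # F-18 (97%) vs Cu-64 (18%)
```

The same ratio applies whichever way the deficit is made up: scan 5.4 times longer, or inject 5.4 times more activity.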

Count Rate Performance

Count Rate Performance describes the relationship between the activity concentration in the patient/object and the count rate measured by the scanner. It is measured so that we can determine the level at which increased activity no longer gives the benefit of increased count rates. In general nuclear medicine using gamma cameras, the count rate performance of a system is normally dictated by the deadtime of the detector system, and saturation normally occurs at activity levels that would give a prohibitively large dose to the patient.

However, in PET, because of unwanted coincidences from random, scatter and multiple events, the contributions of these mechanisms all need to be considered. Indeed in 3D PET, because the randoms rate increases with the square of the activity, a PET scanner can have an upper limit on useful activity concentrations that is far below that defined by the deadtime of the system.

To make sense of the impact of true, random, scatter and multiple events on count rate performance, the composite measure of Noise Equivalent Count Rate (NECR) was defined.

Once corrections for random and scattered events have been applied, the data becomes noisier than it was before correction. It loses its Poisson nature, i.e. the noise is no longer the square root of the number of events. The Noise Equivalent Count Rate is therefore defined as the count rate that would give rise to the observed noise if randoms and scatters were removed. This meets the objective of describing the count rate without random and scattered events present. Since corrected data has higher noise levels, the noise equivalent count rate is smaller than the actual count rate.

Mathematically NECR is written as either:

NECR = (Trues)² / Total

or

NECR = (Trues)² / (Total + Randoms)

where Total = Trues + Scatter + Randoms.

The difference between the two equations depends on how the correction for random events is applied. Since different corrections for random and scattered events can result in different values of NECR, it is important to note how data is corrected before comparing these figures.
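The two forms above can be sketched as a single function; which one applies depends on how the randoms estimate is obtained (online delayed-window subtraction doubles the randoms noise contribution, giving the "Total + Randoms" form).

```python
def necr(trues, scatter, randoms, online_randoms_subtraction=True):
    """Noise Equivalent Count Rate (all rates in the same units,
    e.g. counts/second). Online delayed-window subtraction of randoms
    gives the 'Total + Randoms' (2R) form; a smoothed or noiseless
    randoms estimate gives the plain 'Total' (1R) form."""
    total = trues + scatter + randoms
    if online_randoms_subtraction:
        return trues ** 2 / (total + randoms)
    return trues ** 2 / total
```

With 100 kcps trues, 40 kcps scatter and 30 kcps randoms, the 1R form gives about 58.8 kcps and the 2R form exactly 50 kcps, both well below the trues rate itself, reflecting the noise added by the corrections.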

Normally Count Rate Performance is determined by acquiring a dynamic study of a short half-life isotope (e.g. Fluorine-18) over many half-lives, so that a curve of activity concentration against NECR can be drawn. A special phantom is often used for this purpose. Details of the phantom and how these relationships are derived can be found in NEMA NU 2-2007. An example of the count rate performance of a modern PET system working in 2D mode is given below in Figure 36.



Figure 36:

Chart showing the Count Rate Performance of a modern PET system. Also shown are the performance parameters from these curves.

Many parameters can affect NECR; as we have already mentioned, the method by which NECR is calculated is one of them. Other things that can affect NECR include:

- The scintillation crystal. Some crystals are naturally radioactive and so the NECR measurement has to be performed differently. Also, crystals with a faster decay time will handle higher count rates better.

- The energy acceptance window and coincidence acceptance window. Again linked to the crystal type, different crystals will allow these windows to be set at different levels, so affecting the random and scatter events detected in the system.

- Acquisition mode e.g. 2D or 3D. Systems working in 3D mode will have a much higher randoms and scatter rate when compared to systems working in 2D mode.

It is always important when considering NECR performance to relate this to the activity concentrations found in clinical studies. For an injected activity of 400 MBq, activity concentrations can be approximately 2 kBq/cc in soft tissue, 8 kBq/cc in liver and 15 kBq/cc in the brain.

Scatter Fraction

The Scatter Fraction measures the ability of the system to reject scattered events. Scatter can arise from several sources: scatter within the patient, scatter off the gantry, and scatter within the scintillation crystal.

Measured as part of the NECR calculation, the scatter fraction can vary from being almost inconsequential in 2D PET to up to 50% of the measured signal in 3D PET.

In addition to being affected by the scanning mode, the scatter fraction is also highly influenced by the energy acceptance window. The settings for these windows are in turn dependent on the scintillation crystal that is used.
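The scatter fraction itself is a simple ratio of rates:

```python
def scatter_fraction(scatter_rate, trues_rate):
    """Scatter fraction SF = S / (S + T): the fraction of the recorded
    (non-random) coincidences that were scattered."""
    return scatter_rate / (scatter_rate + trues_rate)
```

As an illustrative (assumed) example, a 2D acquisition might give SF around 0.15, while a 3D acquisition of the same object can approach 0.5.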

Go to

DAT website and play the powerpoint entitled ‘Performance’, which describes the performance measures used in PET.


Go to

your Workbook PET physics and perform exercises 25, 26 and 27.

Key points:

Spatial resolution varies across the field of view, and is different in axial and transaxial planes.

The sensitivity of a PET system differs greatly between 2D and 3D modes.

Noise Equivalent Count Rate is a measure of the noise in the image once random and scatter corrections have been performed. This noise is compared with that which would have been achieved if random and scattered events had not been present.

The percentage of scatter within an image can be almost inconsequential in 2D imaging, and up to 50% of the measured signal in 3D PET.

Go to

DAT website Revision Test 6 to assess your understanding of this section.



Quality Assurance


To achieve consistent good quality imaging, a Quality Assurance program should be in place. This section covers how a good quality system can be set up and maintained over time so that quality PET imaging can always be achieved.


On completion of this section you will be able to:

Understand how the processes of Acceptance Testing and Quality Control help achieve a good quality system.

Know what tests should be performed during acceptance testing and why they are performed.

Know what PET quality control tests should be performed and why they should be performed.

Know what CT quality control tests should be performed and why they should be performed.

Time Check:

Allow 2 hrs to complete the study of this subject and complete the exercises in your Workbook.

Quality Assurance

Quality Assurance is normally defined as a management process that ensures that a quality product is always produced. In our case this product is the PET-CT images and reports we produce. The quality assurance program normally comprises the performance and review of Quality Control tests, and the regular review of protocols and clinical images. Clinical audit of the reporting of such images can also be part of the program, though this will not be covered here. The results of these reviews and analyses are then judged against an agreed level of quality, with corrective action taken where necessary.

Quality Control Testing

Quality Control can be split into acceptance testing and routine testing.

Acceptance Testing

When preparing to purchase a PET system it is normal to give a list of specifications that the equipment should ideally meet. In response, the manufacturer will often give a list of the system's performance so that comparison can be made between the purchaser's requirements and the system's abilities. To maintain consistency in the way performance figures are quoted, they are often measured following the guidelines set by the National Electrical Manufacturers Association (NEMA).

Once a new PET system is installed in the department, acceptance testing is often performed (by a physicist or by the manufacturer) to ensure that the performance measured on site matches or betters the performance quoted. This assures the quality of the installed system. Acceptance testing also allows us to set a baseline for testing that may be performed as part of our routine quality control testing.

Following the guidelines of NEMA NU 2-2007, the tests given below should be performed.

Spatial Resolution

This test was explained in detail in the previous section. It is important to note, however, that during acceptance testing measurements are made in air using FBP with a ramp reconstruction filter, effectively giving the best spatial resolution possible. Though this is not what we are interested in when performing clinical PET, it does allow measurements to be consistent whenever, and on whatever system, they are performed.

Scatter Fraction, Count Losses, and Randoms Measurement

This measurement, described in a previous section, characterises the relationship between activity concentration and the True, Scatter, Random and Noise Equivalent Count Rates (NECR). The result of this measurement for a modern 3D scanner is given in Figure 37.

Figure 37:

Results from the Scatter Fraction, Count Losses and Randoms measurements, showing the relationship between activity concentration and the True (solid), Randoms (dotted), Scatter (dashed), NECR (dash-dot) and NECR 2R (dash-dot-dot) rates.


Sensitivity

The sensitivity of the system, expressed in counts per second per MBq, should be measured at acceptance testing. The problem with measuring sensitivity, however, is that the positrons need material in which to annihilate to produce the two 511 keV photons, and any such material will attenuate the radiation produced, thereby affecting the measured sensitivity. A method proposed by Bailey et al to overcome this problem is to use an activity-filled polyethylene tube over which aluminium tubes are placed to provide the attenuating material (Figure 38).


Figure 38:

Aluminium tubes used to provide attenuation material for NEMA sensitivity measurements


We acquire an image with one aluminium tube and measure the sensitivity of the scanner, then place another tube over the first and repeat the sensitivity measurement. After acquiring data with several aluminium tubes we can produce a graph of sensitivity against thickness of attenuating material (Figure 39). If we extrapolate this data back we can determine the sensitivity the system would have if no attenuating material were present. It is this figure that is quoted from acceptance testing.
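Because attenuation is exponential in the wall thickness, the extrapolation is usually done as a straight-line fit of the logarithm of the count rate against thickness; exponentiating the intercept gives the attenuation-free rate. A sketch of that fit (the function name is our own):

```python
import math

def unattenuated_count_rate(thicknesses_cm, rates_cps):
    """Least-squares fit of ln(rate) vs wall thickness; exponentiating
    the intercept gives the count rate with no attenuator present."""
    n = len(thicknesses_cm)
    ys = [math.log(r) for r in rates_cps]
    sx = sum(thicknesses_cm)
    sy = sum(ys)
    sxx = sum(x * x for x in thicknesses_cm)
    sxy = sum(x * y for x, y in zip(thicknesses_cm, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return math.exp(intercept)
```

Feeding the fit with rates that fall exponentially with wall thickness recovers the zero-thickness rate exactly, which is the figure quoted at acceptance testing.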


Figure 39:

A plot of count rate against thickness of attenuating material, used by the NEMA methodology to determine the sensitivity with no attenuation present.

Accuracy: Corrections for Count Losses and Randoms

This test uses the same methodology as that used to produce our NECR curves. However, in this instance we apply the corrections for randoms and count losses and assess their accuracy by extrapolating the response at low activity concentrations, where randoms and count losses are minimal, to high activity concentrations, where count losses and the number of random events are high.
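The accuracy figure can be expressed as the deviation of the corrected rate from the ideal linear response fitted at low activity; a sketch of that comparison (function name and numbers are illustrative assumptions):

```python
def correction_error_percent(activity_kbq_per_cc, corrected_rate_cps,
                             low_activity_slope_cps_per_kbq_cc):
    """Percentage deviation of the corrected count rate from the ideal
    linear response extrapolated from low activity concentrations."""
    ideal = low_activity_slope_cps_per_kbq_cc * activity_kbq_per_cc
    return 100.0 * (corrected_rate_cps - ideal) / ideal
```

If the low-activity response is 100 cps per kBq/cc, a corrected rate of 4750 cps at 50 kBq/cc corresponds to a -5% error in the corrections.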


Image Quality, Accuracy of Attenuation and Scatter Corrections

Figure 40:

NEMA Image Quality phantom with fillable balls and lung compartment.

Though technical measurements of count rates, count losses and spatial resolution are interesting, it is often difficult to understand how they affect the clinical situation. The last test that is often performed in acceptance testing is therefore an image quality test using a phantom that mimics some of the environments found in clinical imaging (Figure 40). This phantom has a large background region where the homogeneity of uptake can be assessed, fillable spheres where the ability of the scanner to recover known lesion-to-background ratios can be evaluated, and a cylinder containing polystyrene balls and water to mimic lung tissue. To mimic standard wholebody imaging, activity is also placed outside the field of view. This is particularly important for 3D imaging where, as we have discussed earlier, out-of-field activity can affect count rates.

Overall, by imaging this phantom, a full assessment of image quality, including the accuracy of the attenuation and scatter correction models, can be made. An example of an image acquired of an image quality phantom is shown in Figure 41.

Figure 41:

Image from a NEMA image quality phantom.

Routine Quality Control

Once there is confidence in the system that has been installed, periodic testing should be performed to ensure that the quality of the system is maintained. The following tests are recommended to be performed periodically.



Daily Quality Control

On a daily basis it is important to ensure that the PET detectors are responding correctly. Each manufacturer has their own version of daily QC which should be performed. Examples of daily QC tests are given below.

One of the simplest ways to do this is to perform a blank scan using a transmission source that may be used for attenuation correction. In this scan, the source is rotated internally within the gantry giving a uniform flux of radiation to the PET detectors.

An alternative method, used for systems where PET transmission scanning is not available, is to scan a cylinder with a uniform distribution of activity. The cylinder can be filled with a Fluorine-18 solution or, for simplicity, a solid Germanium-68 cylinder can be used (Figure 42). The cylinder should extend the full length of the field of view. On some systems a Na-22 point source can be used instead.

Figure 42:

Germanium phantom typical of that used for measuring PET image uniformity.

If we look at the sinograms from such acquisitions, we should see a uniform response from the detector pairs (Figure 43 (a)). If there is a problem with one of the detectors we may see a diagonal line similar to that shown in Figure 43 (b). The sinograms can also be compared to reference sinograms to detect changes in detector performance and calibration.

Figure 43:

(a) Sinogram from a correctly functioning device showing a uniform response, i.e. no streaks (b) a sinogram displaying a streak which is a result of a faulty detector pair.


After reconstructing a corrected (including attenuation corrected) PET image of the cylinder, we can additionally assess the measured uniformity of the distribution in the cylinder for any obvious artefacts.

If the daily QC procedure values are outside the accepted tolerance, then the scanner should not be used for patient studies until the problem has been rectified and daily QC results are again within tolerance.

Weekly Quality Control 

On a weekly basis many manufacturers recommend some minor hardware tuning of the system. To perform these tests you should consult your manufacturer's operator manual. Typical tuning of the system may include alterations of PMT voltages.

One generic test that is useful to perform on a weekly basis is a check on the activity concentration and/or SUV measured by the system using standard clinical protocols. A cylindrical phantom is filled with a known volume of F-18 solution, with the pre- and post-injection syringe activities recorded so that the activity concentration in the cylinder can be calculated. The filled phantom should also be weighed to allow the calculation of SUV by body weight. A normal whole body study is then performed over two bed positions, entering the activities, volume and weight into the system so that activity concentrations and SUVs can be calculated (as would be the case for a standard patient study). Once the data are acquired, reconstructed and corrected, regions of interest are placed on different slices within the phantom and the resulting activity concentrations and/or SUVs compared to the calculated values. If there is perfect agreement between imaged and measured concentrations, the SUV should be equal to one.
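For a uniform phantom the expected values are easy to verify by hand; a sketch of the check (decay correction omitted for brevity, and the function names are our own):

```python
def expected_concentration_kbq_per_ml(activity_mbq, volume_ml):
    """True activity concentration in the filled phantom."""
    return activity_mbq * 1000.0 / volume_ml

def phantom_suv(imaged_kbq_per_ml, activity_mbq, phantom_weight_kg):
    """SUV reported for the phantom; ~1.0 if the scanner calibration
    and the entered weight and activity are all consistent."""
    return imaged_kbq_per_ml / (activity_mbq / phantom_weight_kg)
```

A 20-litre phantom (about 20 kg of water) filled with 100 MBq has a true concentration of 5 kBq/ml, and if the scanner reports exactly that concentration the SUV is 1.0; a systematic departure from 1.0 points at the calibration.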

Infrequent Quality Control 

This last section describes tests that should be performed periodically at intervals greater than one week.

After every preventative maintenance visit by the manufacturer, after any major work on the PET detector system, or at least on a 4 monthly basis, an overall tune of the detector systems looking at PMT gains, positional maps, and energy peaking should be performed. This will normally be performed by the manufacturer's service engineers. Since the tuning is likely to affect the measurement of disintegration events, the calibration converting recorded disintegration events to activity concentration (see the earlier section on activity concentration calibrations) should be performed to ensure that the conversion to activity concentration is accurate. Normalisation corrections will also be performed at this point.


On a three-four monthly basis the registration between the CT and PET gantries should be checked so that images acquired on the PET-CT system remain inherently aligned. Each manufacturer provides different phantoms for this purpose, so when performing this test, guidance should be taken from the system's operator manual.


More details on periodic CT quality control will be given in the CT module of this training. For completeness, a summary of the periodic CT quality control that should be performed is given below.

Daily Quality Control 

Before the CT tube is used on any given day, or if the tube has been inactive for a few hours, the tube should be warmed up prior to use on patients. This is necessary because the CT tube needs to be at a particular operating temperature before it performs properly.

Following a tube warm-up, a CT blank calibration should be performed to give the scanner the series of blank scans for the different kVp, mAs, filter and collimator configurations. These calibrations are necessary so that the ratio of the data with and without the patient present can be used to calculate the map of Hounsfield Units that makes up a CT image.

Weekly Quality Control 

On at least a weekly basis, or at the frequency recommended by the vendor, an acquisition of a CT image quality phantom should be made to assess the uniformity of response through a uniform part of the phantom. The manufacturer of the CT system often provides a phantom fit for this purpose, or a standard CT phantom such as a Catphan® phantom can be used. In addition to visually assessing the uniformity of the slice, it is normal to measure the average and standard deviation of the Hounsfield Units (HU) for several regions of interest within the slice (Figure 44). For a water-filled phantom, the HU should be equal to or very close to zero. The standard deviation gives us a measure of the noise in the data.

Figure 44:

An assessment of the CT uniformity using three small and one large region of interest within a uniform slice.
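The weekly uniformity numbers are simply ROI statistics; a minimal sketch, where the ±4 HU water tolerance is a typical illustrative value we have assumed rather than one set by this text:

```python
def roi_mean_std(hu_values):
    """Mean and sample standard deviation of the HU values in an ROI."""
    n = len(hu_values)
    mean = sum(hu_values) / n
    variance = sum((v - mean) ** 2 for v in hu_values) / (n - 1)
    return mean, variance ** 0.5

def water_roi_ok(hu_values, tolerance_hu=4.0):
    """Pass/fail check: water should read close to 0 HU."""
    mean, _ = roi_mean_std(hu_values)
    return abs(mean) <= tolerance_hu
```

The mean flags calibration drift (water moving away from 0 HU), while the standard deviation tracks image noise from week to week.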


Infrequent Quality Control

On a quarterly to annual basis or at the frequency recommended by the vendor, it is recommended that more rigorous CT quality control is performed. Using the same phantom as that described above for the weekly

CT quality control check, parameters such as high contrast spatial resolution, low contrast detectability and slice thickness can be assessed.

Images from such tests are given in Figure 45.

Figure 45:

(a) Image showing line pairs which represent the spatial resolution of the system and slice thickness gradations. (b) An image of low contrast details.

Annual Quality Control

Annually, your local radiology medical physics expert should be invited to perform more thorough tests on the CT tube. During this visit, measurements of kV, mA, CT Dose Index (CTDI) and slice thickness will be made, together with a detailed assessment of image quality.

Quality Control of other equipment used in PET imaging

Though we have mentioned the quality assurance necessary for PET equipment, it is important to note that quality control should also be performed on other equipment peripheral to PET scanning.

The dose calibrator used to measure injected activities for PET patients is critical to the measurement of activity concentrations or SUVs in vivo. On an annual basis, the dose calibrator should be checked with an activity that has also been measured by (or is traceable to) a dose calibrator held at a national or international metrology laboratory. Once this link is in place, a daily check using a solid source, e.g. Cs-137, is normally performed to ensure the consistency of dose calibrator measurements.

Another piece of equipment that requires QC is the glucometer used to measure the patient's blood sugar level. It is essential to measure blood sugar prior to injecting and scanning a patient using PET to ensure that the image is of good quality. Normally QC of glucometers is performed following the manufacturer's recommended guidelines.

Go to

DAT website and play the powerpoint entitled ‘Quality Control’, which describes the Quality Control performed in PET.


Go to

your Workbook PET section and complete Exercises 28, 29, 30, 31 and 32.

Key points:

Maintaining quality using a quality assurance program should include procedural review, audit, and equipment assessment.

Equipment acceptance testing ensures that the equipment is of the required quality

A continuing equipment quality control program ensures that the equipment consistently produces high quality imaging.





Artefacts

This small section will cover some of the artefacts that may occur in PET and PET-CT imaging, together with the physical explanation of why such artefacts occur. Clinical or physiological artefacts will not be covered.


On completion of this section you will be able to:

Know of artefacts that are PET related, and how they can be attributed to Hardware Failures, Acquisition Problems, and Processing Issues.

Understand artefacts that arise because of the multimodality nature of PET-CT.

Time Check:

Allow 2 hrs to complete the study of this subject and complete the exercises in your Workbook.

PET Artefacts

The type of artefacts found in PET can be divided into 3 groups. Those caused by:

Hardware Failures

Acquisition Problems

Processing Issues

Hardware Failures

For many equipment failures, the system itself will stop the operator from scanning. One exception to this could be when one or more block detectors in the system fail. If a block were to fail, we would hope that it would be seen during the daily QC. If a block or blocks failed during the working day, the relevance of any imaging artefacts would depend on how many blocks failed, and on their position.

If one block failed, it is unlikely that a problem would be seen. This is partly because we have more than enough blocks to get sufficient projection data, and also because OSEM reconstruction is still able to work with missing data. If there is more than one block failure, this may be seen as a fan-like ripple (undulation) in the image. In the sinogram, block failure is more easily identified as a black streak through the sinogram.

Acquisition Problems

One of the key problems that occur during imaging is movement of the patient.

With traditional PET scanning without CT taking up to an hour, patient motion is a distinct possibility. Fortunately with each bed position taking less than 10 minutes, there is unlikely to be too much of a problem as long as the patient does not move the part of the body that is being imaged at a given time.


Artefacts that may arise from patient motion could include:

Discontinuity of the image between bed positions if, for example, the patient shifts position between one bed position and the next (Figure 46 (a)).

Overlaid images if, for example, the arm(s) are in one position for part of the scan and another position for the remaining part of the scan. The arm would then appear to be in two positions simultaneously (Figure 46 (b)).

Attenuation artefacts, where attenuation correction is applied incorrectly because of motion between the emission and transmission scans. In a brain scan, for example, the misapplication of attenuation correction could lead to a relative decrease in tracer uptake in regions where uptake should actually be the same throughout the brain (Figure 46 (c)).


Figure 46:

(a) Patient movement up the bed during the scan resulted in image discontinuities between each bed position, (b) Movement of arms from the arms up position in CT to arms down in PET, (c) Misregistration of CT and PET images resulting in incorrect attenuation correction of the PET data and poor characterisation of metabolism within the brain.

Processing Issues

Incorrect attenuation correction is not the only correction that may cause artefactual data. Given below is an example of poor scatter correction (Figure 47). Around the bladder, we see a blooming artefact caused by an extremely hot feature surrounded by a low activity concentration. The scatter correction artefact here makes interpretation of transaxial images in this area extremely problematic.


Figure 47:

Gallium-68 DOTATATE image showing a 'blooming' artefact around the bladder. This scatter-correction-based artefact is caused by the hot bladder lying in a low activity environment.

Artefacts from other corrections are unlikely to be problematic. A poor randoms correction affects the whole field of view equally, so if there were a problem with the randoms correction there may be a slight loss of contrast throughout the image, which is unlikely to affect the clinical report. Deadtime corrections are rarely needed in PET, and normalisation errors or activity concentration errors are likely to be spotted during routine quality control.

Reconstruction artefacts could occur, such as streak artefacts from FBP. These have been covered in a previous section.

PET-CT Artefacts

Misregistration Artefacts

In addition to PET artefacts, there are artefacts that are specific to PET-CT systems. Most PET-CT artefacts are purely due to the CT images themselves. However, one PET-CT artefact is a consequence of the multimodality nature of PET-CT systems.

All PET-CT systems comprise a PET system and a CT system attached together, with the fusion of PET and CT coming from a known and fixed translation of the patient from the CT to the PET gantry (Figure 48).



Figure 48:

A PET-CT scanner should be calibrated so that a known shift of the bed (x) will scan the same part of the body. In this case we demonstrate the top of the head being scanned in both positions.

If the system has been set up and calibrated properly, this mechanical translation of the patient should lead to registered PET-CT data. However, if there is some sag in the patient bed between the two positions, or if there is some mechanical slippage, the calibration may no longer be valid. This will lead to misregistration artefacts (Figure 49).

Misregistration artefacts have two effects. Firstly, the data will no longer be registered, so, for example, a metabolically active lymph node in PET may not correspond to any lymph node in CT. Fortunately, an experienced physician or radiologist could probably deal with this issue. What is more important is that, because in PET-CT systems the CT is used to provide data for attenuation correction, any misregistration of PET and CT data will lead to poor correction for photon attenuation. Such a problem would produce quantitative inaccuracies, for example incorrect SUVs, or the brain example we gave earlier.


Figure 49:

(a) Poor calibration of the translation between gantries can lead to the top of the head in CT being registered with eye level in PET. (b) Poorly installed tables may create table sag between the CT and PET positions.


Another type of misregistration that can occur, which is not a consequence of mechanical problems or voluntary patient motion, is due to involuntary motion. Examples of involuntary motion include respiratory motion of the lungs and the expansion and contraction of the heart. Respiratory motion can lead to banana-like artefacts on top of the liver, such as that shown in Figure 50. Though this could lead to problems with attenuation correction and quantification of lesions in the dome of the liver, the base of the lung, or the diaphragm, such lesions are rare, so this artefact is rarely clinically important.

Figure 50:

Artefact caused by a mismatch in the position of the dome of the liver and lung in PET and the CT used for attenuation correction.

Misregistration in the heart due to involuntary motion is more of a problem, because in the heart it is normal to look at relative uptake. If we look at Figure 51 (a), we see a PET-CT image showing misregistration of the PET and CT data. The image in Figure 51 (b) shows a splash screen of patient data arranged in pairs of rows, the top of each pair being data corrected for misregistration, and the bottom showing the data resulting from the misregistration shown in Figure 51 (a). We can clearly see a reduction in uptake in the basal part of the anterior wall.




Figure 51:

(a) Misregistration between the PET scan and the CT scan used for attenuation correction, showing lateral misregistration of PET data into the lung. (b) Splash screen with the top of each pair of rows showing registered PET and CT data, and the bottom showing the misregistration in (a). The basal part of the lateral wall shows a misregistration artefact.

Truncation Artefacts

In many PET-CT systems, though the transaxial PET field of view may be, for example, 70 cm, because of mechanical and geometrical limitations the CT field of view will be smaller, for example 50 cm. Normally this is not a problem; however, if we scan a large patient, or a patient whose arms are placed down by their side, the patient may extend outside the CT field of view (Figure 52). The CT that we acquire will therefore not extend far enough to accurately correct the PET data for attenuation.

Figure 52:

The CT field of view is often smaller than the PET field of view in PET-CT, which can lead to poor correction for attenuation.

One way manufacturers have worked around this problem is to reconstruct the data using the lateral projections, where truncation (loss) of data is less likely. Though these images are non-diagnostic in quality, they can give a good estimate of the attenuation in these areas.


The effect of CT artefacts on PET-CT

In PET-CT systems, because the CT is used to generate attenuation correction maps, many artefacts that are seen in CT (e.g. photon starvation from dental work) may translate through to the attenuation corrected PET data. In practice, however, because the CT data is normally heavily smoothed before being used for attenuation correction to reduce image noise, such artefacts can be quite subtle.

An example of a subtle photon starvation artefact is given in Figure 53.


Figure 53:

Photon starvation caused by dental treatment in CT can produce small artefacts in the attenuation corrected PET data.

Contrast-induced artefacts in PET-CT

As we have seen in a previous section, the translation of attenuation at CT energies to attenuation at PET energies follows a bi-linear relationship. For tissue with Hounsfield units around that of water and below, one slope determines the relationship between attenuation at the two energies; for Hounsfield units above that of water, a different slope is used.
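The bilinear relationship described above can be sketched in a few lines of code. The slope values and the water attenuation coefficient below are representative textbook numbers chosen for illustration, not the calibrated values used by any particular scanner:

```python
# Illustrative bilinear conversion of CT numbers (Hounsfield units) to
# linear attenuation coefficients at 511 keV. The constants below are
# assumed, representative values, not vendor calibrations.

MU_WATER_511 = 0.096   # cm^-1, attenuation coefficient of water at 511 keV

def hu_to_mu511(hu: float) -> float:
    """Map a CT number to an approximate 511 keV attenuation coefficient."""
    if hu <= 0:
        # Air (-1000 HU) to water (0 HU): scale linearly from 0 to the water value
        return MU_WATER_511 * (1.0 + hu / 1000.0) if hu > -1000 else 0.0
    else:
        # Above water (soft tissue to bone): a shallower, bone-like slope,
        # because the photoelectric boost that raises HU at CT energies
        # largely disappears at 511 keV.
        return MU_WATER_511 + hu * 4.7e-5  # cm^-1 per HU, illustrative

print(hu_to_mu511(-1000))  # air  -> 0.0
print(hu_to_mu511(0))      # water -> 0.096
```

Contrast-enhanced soft tissue breaks this mapping: its high CT number sends it up the bone-like slope, even though its true attenuation at 511 keV is close to that of plain soft tissue.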

If we have, for example, intravenous contrast in soft tissue, the attenuation at CT energies is greater than that of soft tissue alone (this is of course why we use contrast). However, at PET energies the attenuation is almost identical to that of soft tissue without contrast. This means our bilinear relationship fails. As a consequence, we find that the uptake in areas of contrast accumulation is overcorrected, leading to higher than expected SUVs. The magnitude of this effect varies depending on the type of contrast used; for oral contrast the problem is smaller than for dense i.v. contrast material.

In practice the clinical effect of these artefacts is minimal. In many instances contrast is used to delineate tissues unrelated to the lesion or area of interest.

Where this is not the case, the reporting physician/radiologist should take the artefact into consideration, but it is likely that a meaningful report can still be given, albeit without the affected SUV. Recently, manufacturers have introduced algorithms that overcome the problem of contrast affecting SUVs, so this is less of a problem than it once was.
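Since SUVs appear throughout this discussion, a reminder of how the body-weight SUV is computed may help. This is a minimal sketch; the injected activity, patient weight and lesion concentration below are invented numbers for illustration:

```python
def suv(tissue_kbq_per_ml: float, injected_mbq: float, weight_kg: float) -> float:
    """Body-weight SUV = tissue activity concentration /
    (injected activity / body weight).
    Assumes tissue density of ~1 g/ml, so kBq/ml is treated as kBq/g."""
    injected_kbq = injected_mbq * 1000.0   # MBq -> kBq
    weight_g = weight_kg * 1000.0          # kg  -> g
    return tissue_kbq_per_ml / (injected_kbq / weight_g)

# e.g. 370 MBq injected into a 70 kg patient; a lesion measuring 26.4 kBq/ml
print(suv(26.4, 370, 70))  # roughly 5.0
```

A scatter- or attenuation-correction error that inflates the measured tissue concentration inflates the SUV by the same factor, which is why the artefacts above matter for quantification.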

Go to the DAT website and play the PowerPoint entitled 'Artefacts', which describes some of the artefacts found in PET and how to minimise their occurrence.



The processes of positron emission and coincidence detection give us the very powerful imaging technique of Positron Emission Tomography. There are many hardware and software similarities between PET and nuclear medicine/SPECT imaging, which we have shown, together with several differences that give Positron Emission Tomography the ability to accurately quantify and map physiological processes. It is these abilities, together with strong imaging tracers, that make PET an exciting and powerful tool to help us diagnose and understand disease processes.

Key Points:

Positron emission, and the resulting annihilation of the positron with a neighbouring electron, allow us to use the process of coincidence detection to determine the occurrence and position of the radioactive disintegration.

Most PET systems are made up of rings of block detectors, with each block detector made from scintillation crystals interfaced via a scored lightguide to a series of photomultiplier tubes.

There are two scanning modes in PET: 2D mode, which uses interslice rings (septa) of lead or tungsten, and 3D mode, which does not have these rings.

Variability in scintillation crystals and scanning modes available in PET systems can lead to quite different performance characteristics in these systems.

PET data acquired using 2D mode can be reconstructed using standard reconstruction algorithms used in SPECT, whereas 3D PET requires different approaches.

Applying a series of corrections allows absolute quantification of PET data.

Some suggestions for further reading are given below:

Positron Emission Tomography: Basic Sciences

Dale L Bailey, David W Townsend, Peter E Valk, and Michael N Maisey (Eds)

Springer-Verlag London Limited 2005

ISBN 1852337982

Physics in Nuclear Medicine

Simon R Cherry, James A Sorenson, Michael E Phelps

Saunders (3rd Edition) 2003

ISBN 072168341X



2D Mode

This is a PET scanning mode where thin rings of lead or tungsten shielding (septa) are used to separate each crystal ring in an attempt to restrict coincidences to those occurring within a particular slice, or between closely neighbouring slices.

3D Mode

If a scanner does not have the septa in place that are used for scanning in 2D mode, the scanner is said to be working in 3D mode i.e. coincidences are allowed across all slices.

Acceptance Testing

This constitutes a range of tests of a system that are used to assess whether a system is fit for use on patients.

Activity Concentration Correction

This calibration is used to calibrate the activity concentration measured on a PET scanner against the administered activity measured on a radionuclide calibrator.

Annihilation Photons

When a positron is emitted it travels a short distance in tissue, losing energy. It eventually combines with an electron and the two annihilate (disappear), with the mass being converted into energy in the form of two gamma rays (511 keV) that travel in opposite directions.

Beam Hardening

A CT artefact caused by X-rays passing through dense materials, which can result in dark bands or streaks across the CT image. Though corrected, the artefact can still be seen when imaging through dense bone or metal dental work.

Bed Position

One PET field of view is typically 15 cm in length, so to perform whole body imaging several images are required. Because the imaging couch is moved to acquire data for neighbouring fields of view, images from e.g. three fields of view are said to have been acquired over three 'bed positions'.
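The arithmetic behind the number of bed positions is straightforward. This is a sketch only; the 15 cm field of view matches the typical value quoted above, while the 3 cm overlap between neighbouring positions and the 100 cm scan length are assumed figures for illustration:

```python
import math

def bed_positions(scan_length_cm: float, fov_cm: float = 15.0,
                  overlap_cm: float = 3.0) -> int:
    """Number of couch positions needed to cover scan_length_cm,
    given an axial field of view and an overlap between positions."""
    step = fov_cm - overlap_cm  # fresh axial coverage gained per extra position
    if scan_length_cm <= fov_cm:
        return 1
    return 1 + math.ceil((scan_length_cm - fov_cm) / step)

# e.g. a 100 cm eyes-to-thighs scan with a 15 cm field of view
print(bed_positions(100))  # -> 9 bed positions
```

With each position taking several minutes, this is why whole body PET studies historically took tens of minutes to acquire.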

Block Detector

A detector component in the PET scanner which normally comprises approximately 20-30 individual scintillation crystals, a lightguide and several photomultiplier tubes.

BGO (Bismuth Germanate)

Detector material commonly used in PET cameras. It has a higher density than NaI and is therefore well suited to the detection of the high energy (511 keV) annihilation photons.

Branching Ratio

When an unstable nucleus undergoes a radioactive disintegration, there are a number of transitions that may take place. Some disintegrations may lead to gamma decay, while others give rise to beta decay. The branching ratio gives the percentage of each type of transformation.


Chang attenuation correction

This is the method for attenuation correction commonly supplied by manufacturers. It is only an approximate correction suitable for areas of the body where attenuation can be considered uniform (head and lower abdomen). However it should not be used for the thorax where attenuation is nonuniform.

Coincidence imaging

In order to detect the two gammas emitted from a positron event, two detectors are used and a valid event is recorded when both detectors record an interaction at the same time (or within a very short time of each other). The detectors operate in coincidence.

Dead time

It takes a fixed amount of time to process a detected photon. Photons arriving at the detector while the initial photon is being processed are often lost or assigned an incorrect energy. The time period over which this occurs is known as the 'dead time'.

Depth of interaction effect

Photons incident at an oblique angle to a scintillation crystal present a larger area of crystal with which to interact than those incident perpendicular to the crystal, and are therefore more likely to interact at a greater depth within the crystal. This effect, known as the depth of interaction effect, can be a limiting factor in the spatial resolution of the system.

Electronic Collimation

Since annihilation photons travel in opposite directions, the origin of the annihilation can be defined by the straight line joining the points of detection of the two photons, without the need for conventional collimation.

Field of View

This is a term used to describe the area available for scanning. Along the axial (into bore) direction the field of view is normally fixed by the number of PET detector rings; typical values are anything between 15 cm and 20 cm. The field of view within a transaxial slice is physically limited by the diameter of the bore. However, on reconstruction the field of view can be changed to give a smaller field of view. This may be appropriate in brain imaging, where only 25-50 cm would be necessary. By changing the field of view the pixel size will change, much in the same way as zoom does in traditional nuclear medicine.
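The link between reconstructed field of view and pixel size mentioned above is simply field of view divided by matrix size. A quick sketch, where the 128 × 128 matrix and the two field-of-view values are assumed, illustrative numbers:

```python
def pixel_size_mm(fov_mm: float, matrix: int) -> float:
    """Reconstructed pixel size is the field of view divided by the matrix size."""
    return fov_mm / matrix

# A wide whole-body field of view vs a zoomed brain field of view, 128 x 128 matrix
print(pixel_size_mm(700, 128))  # whole body: roughly 5.5 mm pixels
print(pixel_size_mm(250, 128))  # zoomed brain: roughly 2.0 mm pixels
```

Halving the field of view at a fixed matrix size halves the pixel size, which is why zooming the reconstruction is useful for small structures such as the brain.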

Fourier Rebinning

Fourier technique used in 3D PET to re-assign lines of response that are oblique to the detector face into lines of response that are perpendicular to the detector face.





FDG (Fluorodeoxyglucose)

This compound labelled with Fluorine-18 behaves similarly to glucose (a sugar); however, it is trapped during metabolism. The resulting distribution provides an image related to glucose metabolism. FDG is the main compound used clinically for PET imaging, particularly for applications in oncology, neurology and cardiology.

GSO (Gadolinium Silicate)

Detector material currently being used in some commercial PET systems. It has a better energy resolution and decay time than BGO.

Iterative reconstruction

This general term applies to a number of reconstruction algorithms that involve a repetitive process of comparison to find the best estimate of the activity distribution that matches the measured projections.

Lightguide

A device that is placed between the scintillation crystal and photomultiplier tube to increase the efficiency of light collection. In PET, the lightguide may also have slits in it to help locate the position of the scintillation.

Line of Response

When two photons are detected by the system at the same time, the positron is said to have been emitted along a line between the two detected events. This line is called the Line of Response (LoR).


LSO (Lutetium Silicate)

Detector material currently being used in some commercial PET systems. It has a better energy resolution, decay time and light output than BGO.

LYSO (Lutetium Yttrium Silicate)

Detector material currently being used in some commercial PET systems. It has a better energy resolution, decay time and light output than BGO.

Maximum likelihood (ML) reconstruction

ML reconstruction is a specific iterative method that finds the most likely activity distribution that matches the acquired projections. Usually this is achieved using the expectation maximisation (EM) algorithm.

Monte Carlo simulations

A mathematical technique that tries to simulate the path of emitted particles. The name Monte Carlo is used because the random way the technique comes to a solution is similar to the random nature of events in a casino.

Multiple Coincidence

When more than two photons are detected at the same time it is not possible to identify which two events are related to a single positron emission. This event is called a multiple coincidence.


Path Length

This is the distance a positron travels before it interacts with an electron to produce two 511 keV photons. The length is dependent on the energy of the positron, which is in turn dependent on the radionuclide being used.

Photomultiplier Tube

This is a device that converts light photons from the scintillation crystal into an electrical signal. This is achieved by a photocathode that performs the conversion, and a series of dynodes that amplify the signal.

Positron Emission Tomography

Tomography based on the detection of the dual annihilation photons that originate from positron emission. The technique involves detection of the dual photons in coincidence (at the same time).

Positron Fraction

This is the fraction of radioactive disintegrations that lead to the emission of a positron. Similar to the branching ratio, this reflects the fact that some radionuclides may have a mix of radioactive decays.

Pulse Pileup

This term is used to describe when, at high count rates, the pulses from different scintillations overlap and cannot be distinguished from one another. The overlapping pulses are then processed together to give an energy that falls either outside the energy acceptance window, with the loss of those events, or inside the energy acceptance window but with the wrong energy and therefore the wrong positional information.

Quality Assurance

This is a term that is used to describe a management process that makes sure that the product (in our case PET images) is of a high quality.

Quality Control

Quality Control is a series of protocols and routine tests that are performed to ensure that the quality of imaging is maintained. For PET systems this would probably include acceptance testing and routine quality control testing.

Random Coincidence

When two gammas originating from quite independent sources (e.g. two separate positron emissions) are detected at the same time, the line of response defined by the points of detection does not necessarily correspond to a positron emission. This incorrectly located coincidence event is referred to as a random event.


Reprojection (3DRP)

This is a reconstruction technique for 3D PET data, based on filtered back projection, that estimates and incorporates lines of response that have been lost because of 3D data truncation.


Scattered Coincidence

When one or both photons originating from a positron event are scattered and detected in coincidence, the line of response defined by the points of detection does not necessarily correspond to the positron emission. This event is referred to as a scattered coincidence.



Single Event

When a photon is detected without a corresponding coincident photon, this is referred to as a single event. Due to the probability of detection, there are many more singles detected than coincidences.

Single Slice Rebinning

This is a method of handling 3D PET data which re-assigns oblique lines of response between detectors (i.e. lines of response that are not within a transaxial slice) to a plane midway between these detectors.

Standardised Uptake Value (SUV)

This is a measure that is used to characterise glucose metabolism within a patient, though because of several assumptions that are made, it is not a true measure of glucose metabolism. The measure is widely used in oncology PET to determine the aggressiveness of tumours.

Time of Flight

Since the two annihilation photons both travel at the speed of light, any difference in their arrival times at opposing detectors can be used to estimate the location of the positron emission. However, inaccuracy in defining the time difference currently limits the accuracy of this estimation to roughly an 8 cm region.

Tracer

This term is used to describe a substance that follows a physiological process.

True Coincidence

When two annihilation photons originating from a single positron annihilation are detected in coincidence, this is referred to as a true coincidence.

Well Counter

See Activity Concentration Correction. This term is often used instead of Activity Concentration Correction because, in kinetic modelling studies, the activity concentrations measured on a PET scanner are often calibrated against activities measured in a well or sample counter.
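The time-of-flight localisation described in the glossary follows from delta_x = c * delta_t / 2 (the factor of two arises because a shift of the emission point changes the two photon path lengths in opposite directions). As a quick check of the 'roughly 8 cm' figure quoted above, the 500 ps timing resolution used here is an assumed, era-typical value:

```python
C_CM_PER_S = 3.0e10  # speed of light in cm/s (approximate)

def tof_uncertainty_cm(timing_resolution_s: float) -> float:
    """Positional uncertainty along the line of response:
    delta_x = c * delta_t / 2."""
    return C_CM_PER_S * timing_resolution_s / 2.0

print(tof_uncertainty_cm(500e-12))  # 500 ps -> 7.5 cm
```

A 500 ps coincidence timing resolution therefore localises the annihilation to about 7.5 cm, consistent with the figure quoted in the glossary entry.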

