University of Manchester
CS3291: Digital Signal Processing 2003/2004
Solutions to selected problems in Notes
Section 1
1.1. Electrical waveform analogous to continuous variation in some other quantity, such as air pressure.
1.2. Never!
1.3. x(t) = 0 for t < 0; x(t) = sin(100πt) for t ≥ 0.
1.4. Zero for t<0.
1.5. x[n] = 0 for n < 0; x[n] = sin(0.5πn) for n ≥ 0, with fs = 10 Hz.
1.6. See notes.
1.7. (i) Multiplies by constant.
(ii) Any linear time-invariant signal processing system of finite order.
(iii) Output is modulus of input (full-wave rectification).
(iv) Multiplies given signal by another signal.
1.8. Filtering music to increase or decrease bass or treble power: hi-fi tone control.
1.9. Permanent storage of data: no ageing as with magnetic tape.
1.10. Loss of data can be catastrophic and total, e.g. due to a virus, rather than partial. Also data can be easily copied by unauthorized people.
1.11. Fixed point: only integers possible; fractions require a decimal or binary point to be assumed.
Floating point: fractions with wide dynamic range possible using mantissa and exponent.
1.12. The term "filter" is often assumed to mean a device for removing or "filtering off" unwanted frequency components of a signal. However, a filter may be more generally defined as any finite order LTI system. An analogue filter acts directly upon an analogue signal, for example using capacitors, inductors, and operational amplifiers. A digital filter operates to similar effect on digital signals.
1.13. The input signal is processed as it is being received, sample by sample or block by block. Any output will be produced with fixed delay with respect to the input, as would be required, for example, for processing speech in a telephone conversation. The processor must be fast enough to keep up with the incoming data. The opposite is batch mode processing, where the incoming data may be captured and stored, for example, on a CD.
1.14. Restoring historical music recordings.
EE3271(CS3291) DSP Solutions 2 BMGC
1.15. Analysing or predicting the behaviour of the stock exchange.
1.16. Using the "rectangular-to-polar" function on a calculator: Mod(3+4j) = 5, Arg = 0.927 radians.
(Enter 3, shift or inv R→P, 4, "=", which gives Mod (5); "X↔Y" gives Arg (0.927).)
Mod(−3e^{4j}) = 3, Arg = 4 + π.
1.17. y(t) = −4ω sin(ωt). Amplitude = 4ω for any ω.
1.18. y(t) = (4/ω) sin(ωt). Amplitude = 4/ω for any ω > 0.
1.19. (1 − e^{−(N+1)jω}) / (1 − e^{−jω}) when ω ≠ 0; otherwise N + 1.
1.20. 1 / (1 − 0.9e^{−jω})
1.21. 1 + e^{−jω} = e^{−jω/2}(e^{jω/2} + e^{−jω/2}) = e^{−jω/2} · 2cos(ω/2)
1.22. x² + 0.9x + 0.81 = (x − Re^{jθ})(x − Re^{−jθ}). Therefore R = 0.9 and 2Rcos(θ) = −0.9, so θ = arccos(−1/2) = 2.094.
1.23. x(t) = cos(ωt + π/2) + (1/2)cos(2ωt + π) + (1/3)cos(3ωt + 3π/2) + (1/4)cos(4ωt + 2π) + …
= 0.5e^{j(ωt+π/2)} + 0.5e^{−j(ωt+π/2)} + 0.25e^{j(2ωt+π)} + 0.25e^{−j(2ωt+π)} + …
1.24. Lecture notes.
1.25. Just look at the formulae and observe what happens when ω is replaced by −ω.
1.26. G(0) = 1, i.e. 0 dB. G(ω_C) = 1/√(1 + 1) = 1/√2, i.e. −3 dB.
G(10ω_C) = 1/√(1 + 10^{2n}) ≈ 1/√(10^{2n}) = 1/10^n, i.e. −20n dB.
Section 2
2.1. Yes
2.2. If the input to L1 in the top diagram is δ(t), the output from L1 is h1(t), i.e. its impulse-response, and this forms the input to L2 to produce an overall output h1(t)⊗h2(t) as given by the convolution formula in the notes. If the input to L2 in the bottom diagram is δ(t), the output from L2 is h2(t), and this forms the input to L1 to produce an overall output h2(t)⊗h1(t). Hence the impulse-response of the top arrangement is h1(t)⊗h2(t) and the impulse-response of the bottom arrangement is h2(t)⊗h1(t), and we know that h1(t)⊗h2(t) = h2(t)⊗h1(t). So the two arrangements have exactly the same impulse-response, and this means that their responses to any other input signal will also be identical. A similar argument may be used when L1 and L2 are discrete time LTI systems.
2.3. H(jω) = ∫_{−∞}^{∞} h(t) e^{−jωt} dt = ∫_0^1 e^{−jωt} dt = (−1/(jω)) [e^{−jωt}]_0^1 when ω ≠ 0
= (1/(jω))(1 − e^{−jω}) = e^{−jω/2}(e^{jω/2} − e^{−jω/2})/(jω) = (2/ω) e^{−jω/2} sin(ω/2) when ω ≠ 0, and 1 when ω = 0
= e^{−jω/2} sinc(ω/(2π)) for any ω
2.4. H(jω) = e^{−3jω/2} sinc(ω/(2π)) for any ω.
Same gain-response, but different phase-response.
2.5. System would be stable and causal.
Calculate Fourier transform of h(t).
2.6. |H(jω)| = |∫_{−∞}^{∞} h(t) e^{−jωt} dt| ≤ ∫_{−∞}^{∞} |h(t) e^{−jωt}| dt = ∫_{−∞}^{∞} |h(t)| dt = finite
2.7. By applying the inverse Fourier transform and performing a calculation similar to that in 2.3, we get h(t) = (1/π) sinc(t/π). This is a 'sinc' function of time that remains non-zero for all t from −∞ to +∞ and hence is the impulse-response of a non-causal filter.
Section 3
3.1. We can find two simple signals for which the rule for linearity does not apply.
Let x1[n] = 1 for all n and x2[n] = −1 for all n.
Response to {x1[n]} is {y1[n]} with y1[n] = 1 for all n.
Response to {x2[n]} is {y2[n]} with y2[n] = 1 for all n.
As x1[n] + x2[n] = 0 for all n, the response to {x1[n] + x2[n]} will be {0²}, i.e. zero for all n.
But this is not {y1[n] + y2[n]}, which would be 2 for all n.
3.2. { …,0,… 1, 1, 1, 1, 4, 0, …, 0, … }
3.3. Signal flow graph (i) is non-recursive like Fig. 3.3, whereas (ii) is recursive like Fig. 3.4.
3.4. (i) { …, 0, …, 1, 1, 0, …, 0, … } stable & causal. It is a finite impulse response.
(ii) { …, 0, …, 1, 1, 1, 1, 1, 1, …, 1, 1, … } causal but unstable. It is an infinite impulse response.
Note that not all filters with infinite impulse responses are unstable, but this one is.
3.5. { …, 0, …, 0, 1, 0, 0, …, 0, …}
3.6. 800 Hz.
3.7. H(e^{jΩ}) = 1 − e^{−jΩ} = e^{−jΩ/2}(e^{jΩ/2} − e^{−jΩ/2}) = 2j sin(Ω/2) e^{−jΩ/2} = 2 sin(Ω/2) e^{−j(Ω/2 − π/2)}
G(Ω) = 2|sin(Ω/2)|
When Ω > 0 then φ(Ω) = −(Ω/2 − π/2); when Ω < 0 then φ(Ω) = −π − (Ω/2 − π/2) = −(Ω/2 + π/2).
When Ω = 0 the phase is arbitrary as G(Ω) = 0; call it zero.
This is not linear phase.
3.8. { …, 0, …, 0, 4, 8, 16, 32, … } Unstable.
3.9. We know that {e^{jΩn}} produces {H(e^{jΩ}) e^{jΩn}} and therefore {e^{−jΩn}} produces {H(e^{−jΩ}) e^{−jΩn}}.
We also know that H(e^{−jΩ}) is the complex conjugate of H(e^{jΩ}).
So, since H(e^{jΩ}) = G(Ω)e^{jφ(Ω)}, it follows that H(e^{−jΩ}) = G(Ω)e^{−jφ(Ω)}.
Now {cos(Ωn)} = {0.5(e^{jΩn} + e^{−jΩn})} = 0.5{e^{jΩn}} + 0.5{e^{−jΩn}}.
Therefore the response to {cos(Ωn)} is 0.5{G(Ω)e^{jφ(Ω)} e^{jΩn}} + 0.5{G(Ω)e^{−jφ(Ω)} e^{−jΩn}}
= 0.5G(Ω){e^{j(φ(Ω)+Ωn)} + e^{−j(φ(Ω)+Ωn)}} = {G(Ω) cos(Ωn + φ(Ω))}
3.10. H(e^{jΩ}) = 1 + 2e^{−jΩ} + 3e^{−2jΩ} + 2e^{−3jΩ} + e^{−4jΩ}
= e^{−2jΩ}(e^{2jΩ} + 2e^{jΩ} + 3 + 2e^{−jΩ} + e^{−2jΩ}) = e^{−2jΩ}(3 + 4cos(Ω) + 2cos(2Ω))
It may be shown by various means that 3 + 4cos(Ω) + 2cos(2Ω) ≥ 0 for all Ω.
(For example, show that 3 + 4cos(Ω) + 2cos(2Ω) = 1 + 4cos(Ω) + 4cos²(Ω) = (1 + 2cos(Ω))².)
Therefore the phase response φ(Ω) is −2Ω for all Ω.
This is linear phase with a phase delay of 2 samples.
Note that the impulse response { …, 0, …, 0, 1, 2, 3, 2, 1, 0, …, 0, … } is symmetric about n=2.
A symmetric impulse response gives a linear phase response.
Section 4
4.1. Cut-off frequency = π/2 radians/sample.
G(Ω) = 1 for |Ω| ≤ π/2; G(Ω) = 0 for π/2 < |Ω| ≤ π.
Taking φ(Ω) to be 0, H(e^{jΩ}) = G(Ω) and the ideal impulse response is, by the inverse DTFT:
h[n] = (1/(2π)) ∫_{−π}^{π} H(e^{jΩ}) e^{jΩn} dΩ = (1/(2π)) ∫_{−π/2}^{π/2} e^{jΩn} dΩ = sin(nπ/2)/(nπ) when n ≠ 0
It may be checked that h[n] = 0.5 when n = 0.
Therefore {h[n]} is the following infinite sequence:
{ …, −1/(7π), 0, 1/(5π), 0, −1/(3π), 0, 1/π, 0.5, 1/π, 0, −1/(3π), 0, 1/(5π), 0, −1/(7π), … }
Rectangularly windowing for −5 ≤ n ≤ 5 gives the following sequence:
{ …, 0, …, 0, 1/(5π), 0, −1/(3π), 0, 1/π, 0.5, 1/π, 0, −1/(3π), 0, 1/(5π), 0, …, 0, … }
Delaying by 5 samples to make the impulse response causal gives:
{ …, 0, …, 0, 1/(5π), 0, −1/(3π), 0, 1/π, 0.5, 1/π, 0, −1/(3π), 0, 1/(5π), 0, …, 0, … }
We can now draw the signal-flow graph of the FIR filter.
H(z) = 1/(5π) − (1/(3π))z^{−2} + (1/π)z^{−4} + 0.5z^{−5} + (1/π)z^{−6} − (1/(3π))z^{−8} + (1/(5π))z^{−10}
y[n] = (1/(5π))x[n] − (1/(3π))x[n−2] + (1/π)x[n−4] + 0.5x[n−5] + (1/π)x[n−6] − (1/(3π))x[n−8] + (1/(5π))x[n−10]
4.2. G(Ω) = 1 for π/4 ≤ |Ω| ≤ π/2; G(Ω) = 0 for |Ω| < π/4 and for π/2 < |Ω| ≤ π.
h[n] = (1/(2π)) ∫_{−π}^{π} H(e^{jΩ}) e^{jΩn} dΩ = (1/(2π)) ∫_{−π/2}^{−π/4} e^{jΩn} dΩ + (1/(2π)) ∫_{π/4}^{π/2} e^{jΩn} dΩ
= (1/(nπ))(sin(nπ/2) − sin(nπ/4)) when n ≠ 0, and 0.25 when n = 0. Hence etc.
4.3. y[n] = (1/(5π))x[n] − (1/(3π))x[n−2] + (1/π)x[n−4] + 0.5x[n−5] + (1/π)x[n−6] − (1/(3π))x[n−8] + (1/(5π))x[n−10]
= 0.06366x[n] − 0.1061x[n−2] + 0.3183x[n−4] + 0.5x[n−5] + 0.3183x[n−6] − 0.1061x[n−8] + 0.06366x[n−10]
= (637x[n] − 1061x[n−2] + 3183x[n−4] + 5000x[n−5] + 3183x[n−6] − 1061x[n−8] + 637x[n−10]) / 10000
The following is the "bare bones" of a C program to read 1000 16-bit integer samples of a signal from a binary file, pass these through the 10th-order filter above implemented using integer arithmetic only, and store the output samples generated in a binary output file.
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
int main(void)
{ FILE *fpin, *fpout;
  long n, y;
  short m, ix, iy, x[11], a[11];   /* 16-bit integers */
  fpin = fopen("c:\\..\\infilename.dat", "rb");
  fpout = fopen("c:\\..\\outfilename.dat", "wb");
  a[0]=637; a[1]=0; a[2]=-1061; a[3]=0; a[4]=3183; /* etc. */
  for (m=1; m<11; m++) x[m]=0;
  for (n=0; n<1000; n++)
  { fread(&ix, sizeof(short), 1, fpin); x[0] = ix;
    y = x[0]*a[0];
    for (m=10; m>0; m--) { y = y + x[m]*a[m]; x[m] = x[m-1]; }
    iy = (short)(y / 10000);
    fwrite(&iy, sizeof(short), 1, fpout); }
  fclose(fpin); fclose(fpout);
  return 0;
}
4.4. G(Ω) = 1 for π/3 < |Ω| ≤ π; G(Ω) = 0 for |Ω| ≤ π/3.
h[n] = (1/(2π)) ∫_{−π}^{π} H(e^{jΩ}) e^{jΩn} dΩ = (1/(2π)) ∫_{−π}^{−π/3} e^{jΩn} dΩ + (1/(2π)) ∫_{π/3}^{π} e^{jΩn} dΩ
= −sin(nπ/3)/(nπ) when n ≠ 0, and 2/3 when n = 0. Hence etc.
4.5. No.
4.6. Straightforward calculation.
4.7. If the filter is linear phase with phase-delay N, then −φ(Ω)/Ω = N sampling intervals. Therefore:
h[n] = (1/π) ∫_0^π |H(e^{jΩ})| cos(Ω(n − N)) dΩ
Now h[N+n] = (1/π) ∫_0^π |H(e^{jΩ})| cos(Ωn) dΩ
and h[N−n] = (1/π) ∫_0^π |H(e^{jΩ})| cos(Ω(−n)) dΩ = (1/π) ∫_0^π |H(e^{jΩ})| cos(Ωn) dΩ = h[N+n]
since cos(−θ) = cos(θ) for any θ.
(Sketch: h[n] plotted against n, symmetric about n = N.)
So h[n] must be symmetric about n=N as sketched above. If {h[n]} is an infinite impulse-response, we will clearly have a problem with causality: since it goes forward for all time, it must also go backward for all time and give us non-zero values of h[n] for n < 0. We cannot have an IIR digital filter which is exactly linear phase.
But we can have an FIR digital filter which is exactly linear phase. Can you see why?
Note: By a similar argument, a linear phase analogue filter must have an impulse-response which is symmetric about some point in time, t = D say. This means that for all t, h(D+t) = h(D−t), and if h(t) remains non-zero as t → ∞, it must also remain non-zero as t → −∞. But analogue filters have infinite impulse-responses, which means that an exactly linear phase analogue filter must be non-causal and therefore unrealisable.
The argument is the same for discrete time filters where h[n] is symmetric about n=N where N is an integer; i.e. h[N−n] = h[N+n] for all n.
It is also easily shown to be true where 2N is an integer and h[N+0.5+n] = h[N−0.5−n] for all n.
The argument for discrete time filters is a little more complicated where the symmetry is about n=M and
2M is not an integer. Fortunately we rarely encounter this case.
Examples of symmetric impulseresponses corresponding to linear phase FIR digital filters are:
{ …, 0, …, 0, 1, 2, 3, 7, 3, 2, 1, 0, …, 0, … } N = 3; h[2] = h[4], etc.
{ …, 0, …, 0, 1, 2, 5, 5, 2, 1, 0, …, 0, … } N = 2.5; h[2] = h[3], h[1] = h[4], h[0] = h[5].
Section 5
5.1. (a) H(z) = 2 − 3z^{−1} + 6z^{−4}
(b) H(z) = z^{−1} / (1 + z^{−1} + 0.5z^{−2})
5.2. Corresponding difference equation is y[n] = x[n1]
5.3. (i) When input x[n] = z^n then output y[n] = H(z) z^n.
Substituting, H(z)z^n = z^n − 0.9 H(z)z^{n−1}.
Therefore H(z)[z^n + 0.9z^{n−1}] = z^n, and so H(z) = 1 / (1 + 0.9z^{−1}).
(ii) {h[n]} = { …, 0, …, 0, 1, −0.9, 0.81, −0.9³, 0.9⁴, … }
H(z) = 1 + (−0.9z^{−1}) + (−0.9z^{−1})² + (−0.9z^{−1})³ + …
= 1 / (1 + 0.9z^{−1}) assuming |0.9z^{−1}| < 1, i.e. |z| > 0.9.
5.4. y[n] = x[n] + 3x[n−1] + 2x[n−2] − 0.9y[n−1]
Zeros: z = −2 & z = −1; poles at z = 0 & z = −0.9.
5.5. Difference equation is: y[n] = x[n] + 2y[n−1].
{h[n]} = { …, 0, …, 0, 1, 2, 4, 8, 16, 32, … } Unstable! (NB not all IIR filters are unstable)
5.6. H(z) = (1 − 0.9z^{−1} + 0.81z^{−2}) / (1 − 0.95z^{−1} + 0.9025z^{−2})
= (z² − 0.9z + 0.81) / (z² − 0.95z + 0.9025)
= (z − 0.9e^{jπ/3})(z − 0.9e^{−jπ/3}) / ((z − 0.95e^{jπ/3})(z − 0.95e^{−jπ/3}))
The gain response will have a peak of amplitude 2 (6 dB) at Ω = π/3. The gain at frequencies not close to π/3 will be approximately one. To find out how sharp the peak is we can do various easy things.
You may choose to estimate the gain at π/3 ± 0.05. A bit of geometry (right-angle triangles as usual) tells us that the gain at π/3 ± 0.05 is √(0.1² + 0.05²) / √(0.05² + 0.05²) = √5/√2, i.e. 4 dB. The distance to the pole has increased by a factor √2 (decreasing the gain by 3 dB) but the distance to the zero has increased from 0.1 to 0.112, i.e. a factor 1.12 (corresponding to an increase in gain of about 1 dB).
Overall the gain decreases by 2 dB from its value of 6 dB at π/3. This information will allow a reasonable sketch (showing 4 dB points rather than 3 dB points), but if you insist on finding the 3 dB points, you can do it quite easily by finding the increase θ in relative frequency such that (distance to zero) / (distance to pole) reduces from 2 to approximately √2. As usual we neglect changes to the distances to the complex conjugates of this pole and zero as they are far away.
Distance to zero ≈ √(θ² + 0.1²) and distance to pole ≈ √(θ² + 0.05²).
Solving √(θ² + 0.1²) / √(θ² + 0.05²) = √2 gives θ = ±0.071 radians/sample.
Hence 3 dB points are at π/3 ± 0.071.
Follow-up exercise: repeat this problem with the poles at 0.99exp(±jπ/3) and the zeros unchanged.
The peak at π/3 now becomes much higher, i.e. 20 dB, and you will find that the points where the gain drops by 3 dB from 20 dB to 17 dB occur very close to π/3 ± 0.01. But where does the gain become approximately 3 dB? Solving √(θ² + 0.1²) / √(θ² + 0.01²) = √2 gives θ = ±0.1 radians/sample; i.e. the gain drops to 3 dB at Ω = π/3 ± 0.1. A gain response sketch for this follow-up exercise can therefore be drawn with minimal calculation.
5.7. H(e^{jΩ}) = (r − e^{−jΩ}) / (1 − re^{−jΩ}) = (r − (cos(Ω) − j sin(Ω))) / (1 − r(cos(Ω) − j sin(Ω)))
= ((r − cos(Ω)) + j sin(Ω)) / ((1 − r cos(Ω)) + j r sin(Ω))
|H(e^{jΩ})|² = ((r − cos(Ω))² + sin²(Ω)) / ((1 − r cos(Ω))² + r² sin²(Ω))
= (r² − 2r cos(Ω) + 1) / (1 − 2r cos(Ω) + r²(cos²(Ω) + sin²(Ω))) = 1
5.8. See Example 5.4.
5.9. The coefficients a0, a1, a2, b1, & b2 are clearly not integers, and if we just round each to the nearest integer or take its integer part, the result will be rather silly. So we must choose a scaling factor K which is a large integer, say 100 or 1000 or 1024 or maybe 100000. We multiply each coefficient by
K and then round to the nearest integer. The effect of rounding is now less drastic, and we can compensate later for the scaling up of the coefficients by dividing by K. Clearly the larger K, the less serious will be the effect of rounding. However the integers produced must not be too large as overflow may occur. In a 16bit microprocessor like the TMS32010, stored integers representing filter coefficients are limited to the range –32768 to 32767. Similarly signal values from the ADC lie between –32768 to 32767. The processor can multiply together two 16bit words (e.g. a signal sample and an integerised filter coefficient) to produce a 32bit result, and can add 32bit numbers together. But any resulting 32bit number must be scaled back to 16bits before it can be stored as a signal and subjected to further multiplication processes, or output to a DAC. The scaling back to 16 bits is achieved by dividing by K. It is also much easier to divide by a constant K which is a power of two, such as 1024 than to divide by say 100 or 1000. Since second order IIR section filter coefficients normally lie between –2 and +2, choosing K=1024 is fairly safe, though not necessarily optimal. Having chosen K, we then calculate the integerised coefficients as follows:
IB1 = int (K*b1); IB2 = int (K*b2); IA0 = int(K*a0); IA1 = int(K*a1); IA2 =int(K*a2);
Now we can write the program:
IW1:=0; IW2:=0; (these are 16bit integers)
L: Input IX; (16bit integer from ADC)
P := IX*K − IB1*IW1 − IB2*IW2; (32-bit result in P)
IW := P / K; (integer divide by shifting to produce the 16bit IW)
P := IW*IA0 + IW1*IA1 + IW2*IA2; (32bit result in P)
IY := P / K; (integer divide by shifting to produce the 16bit IY)
Output IY; (send 16bit result to DAC)
IW2 := IW1;
IW1 := IW;
Goto L (Go back for next sample)
These program steps are easily understood and converted to a different language such as C or assembly language. Apologies for the “goto” statement.
The program given above may be understood in a more professional way by defining IB1, IB2, etc. to be
“Q12” fixedpoint numbers; i.e. a decimal point (strictly a “binary” point) would be assumed to exist after the most significant 4 bits. If the programmer chooses to define IX as a Q12 number also, P becomes a
Q24 number which is scaled back to a Q12 number by the statement IY=P/K. The programmer must remember the Qformats assumed for each word and keep track of what happens at each stage of the calculation. Thinking about Qfactors rather than scaling by K constants is ultimately more elegant and flexible, but students tend to prefer K constants to begin with. You can generate the same code by either mode of thinking. Clearly good documentation is going to be very important as calculations get complicated. Fixed point programming is important for mobile communications since it simplifies the hardware and computational complexity (though not the programming effort) and this leads to power savings and longer battery life.
5.10. If the order is as in the question, the impulse response of the combination is {h1[n]}⊗{h2[n]}, as we know.
If the order is reversed, the impulse response becomes {h2[n]}⊗{h1[n]}. This is equal to {h1[n]}⊗{h2[n]}.
5.11. Notch frequency is π/2. Place a zero on the unit circle at z = exp(jπ/2) and its complex conjugate at z = exp(−jπ/2). Place poles at z = (1−α)exp(jπ/2) and z = (1−α)exp(−jπ/2). If α is small, the 3 dB bandwidth is 2α. Therefore 2α = 3.2 × 2π/200 and α = 0.05.
H(z) = (z − exp(jπ/2))(z − exp(−jπ/2)) / ((z − 0.95exp(jπ/2))(z − 0.95exp(−jπ/2)))
= (z² + 1) / (z² + 0.95²) = (1 + z^{−2}) / (1 + 0.9025z^{−2})
Hence etc.
Section 6
6.1. Refer to general formula.
6.2. Multiply out the denominator in 6.1, then to scale the cut-off frequency from 1 radian/second to ω_C, replace s by s/ω_C:
y(t) + (2/ω_C) dy(t)/dt + (2/ω_C²) d²y(t)/dt² + (1/ω_C³) d³y(t)/dt³ = x(t)
ω_C = 1000π. Sampling interval T = 0.0001 seconds. Tω_C = π/10.
H(z) = 1 / (1 + 20(1 − z^{−1})/π + 200(1 − 2z^{−1} + z^{−2})/π² + 1000(1 − 3z^{−1} + 3z^{−2} − z^{−3})/π³)
= 1 / (1 + 6.366(1 − z^{−1}) + 20.264(1 − 2z^{−1} + z^{−2}) + 32.2515(1 − 3z^{−1} + 3z^{−2} − z^{−3}))
= 1 / (59.8815 − 143.649z^{−1} + 117.019z^{−2} − 32.2515z^{−3})
I have not checked this yet.
6.3. Ω_C = π/2 radians/sample.
Pre-warped analogue frequency: 2tan(Ω_C/2) = 2 radians/second.
Required analogue prototype transfer function is:
H(s) = 1 / ((1 + s/2)(1 + s/2 + s²/4))
Replacing s by 2(z−1)/(z+1) and rearranging, we obtain:
H(z) = (1/6)(1 + z^{−1})(1 + 2z^{−1} + z^{−2}) / (1 + (1/3)z^{−2})
6.5. Cut-off frequency is Ω_C = π/2.
Gain must be −20 dB or less at Ω = 3π/4.
The analogue prototype transfer function must have cut-off ω_C = 2tan(π/4) = 2 radians/second.
Gain of the analogue prototype must be −20 dB or less at ω = 2tan(3π/8) = 4.828 radians/second.
Gain of an nth-order Butterworth lowpass analogue filter with cut-off frequency ω_C = 2 radians/second is:
G(ω) = 1 / √(1 + (ω/2)^{2n})
So we need to find the smallest integer value of n such that 20 log₁₀(G(4.828)) < −20 dB.
This means that we must have log₁₀(1 + (2.414)^{2n}) > 2, i.e. (1 + (2.414)^{2n}) > 100, i.e. 5.827^n > 99.
The smallest possible integer value of n is three.
We can now design the filter by applying the bilinear transformation to the 3rd-order H(s) with cut-off ω_C = 2.
Section 7
7.1. See recommended textbook.
7.2. Yes it is possible. We can "oversample" to simplify the analogue filters required to avoid aliasing.
Then we can digitally filter the resulting oversampled signal so that we can then reduce the sampling rate.
Lowering the sampling rate allows more efficient processing, storage and transmission. The digital lowpass filter operating on the oversampled signal can be much sharper and more reliable (no variations with temperature, manufacturing tolerance or aging, for example) than an equivalent analogue filter.
7.3. "Images" (see lecture notes) are increased in frequency. They are further away in frequency from the signal itself, and are hence easier to filter out without significantly affecting the signal.
7.4. QN power in range −10 kHz to 10 kHz: Δ²/12.
Max sine-wave amplitude: 2⁸Δ/2 = 2⁷Δ. Max power: (2⁷Δ)²/2 = 2¹³Δ².
Max SQNR: 2¹³Δ² / (Δ²/12) = 12 × 2¹³ in range −10 kHz to 10 kHz.
Signal range is −4 kHz to 4 kHz, so we can filter off QN above 4 kHz.
SQNR in range −4 kHz to 4 kHz: 12 × 2¹³ / 0.4, i.e. 53.9 dB.
Replacing the 8-bit by a 10-bit ADC adds 12 dB. Decreasing the sampling rate to 10 kHz means that we can no longer divide by 0.4. The answer is approximately 62 dB. We need a more expensive ADC and a sharper analogue anti-aliasing filter.
7.5. By the argument in the notes (p. 2.6), H(jω) = 0.25 sinc(ωT/(8π)).
When ω = π/T (= half the sampling frequency), H(jω) = 0.25 sinc(1/8) = 0.24.
Therefore the gain only falls from 0.25 to 0.24, i.e. from −12 dB to −12.4 dB.
We only drop by 0.4 dB as ω goes from zero to half the sampling frequency.
Disadvantage: reduced signal-to-noise ratio. This can be corrected by increasing the pulse height, but this means that we have very high voltages.
Bookwork. See notes.
7.6. See notes for explanation of zero-order hold or "sample & hold" reconstruction.
H(jω) = e^{−jωT/2} sinc(ωT/(2π))
7.7. In notes
7.8. Ideally we need a digital filter with H(e^{jΩ}) = 1 / sinc(Ω/2).
Can we design an FIR filter by the windowing method? The inverse DTFT gives us a formula that is probably too hard to integrate. Various alternative approaches exist, such as sampling 1/sinc(Ω/2) in the frequency domain and using the DFT or FFT in place of the inverse DTFT to perform the inverse transform.
Alternatively, we can approximate H(e^{jΩ}) = 1/sinc(Ω/2) by a simpler function such as the linear function H(e^{jΩ}) = 1 + Ω/(2π).
7.9. Must have 20 log₁₀(1/√(1 + (ω_a/ω_c)²)) ≤ −37 dB. Therefore ω_a ≥ 889.5 × 10³ radians/second.
If aliasing is going to affect our 0–2 kHz useful signal, the aliased component's frequency must be greater than 141.6 kHz, otherwise it would not be attenuated sufficiently by the filter.
If the sampling rate is fs, any noise above fs/2 will be aliased, but it will not affect the signal unless the result lies between 0 and 2 kHz. The lowest frequency that will cause problems is fs − 2 kHz.
Therefore we must ensure that fs − 2 kHz ≥ 141.6 kHz.
Therefore, the minimum sampling frequency is 143.6 kHz for the original question.
Section 8
For problems on Section 8 and their solutions, refer to past examination papers.