Convex Optimization Techniques for Super-resolution Parameter Estimation

Yuejie Chi (ECE and BMI, The Ohio State University)
Gongguo Tang (ECE, Colorado School of Mines)

IEEE International Conference on Acoustics, Speech and Signal Processing
Shanghai, China, March 2016
Acknowledgements
- Y. Chi thanks A. Pezeshki, L. L. Scharf, P. Pakrooh, R. Calderbank, Y. Chen, Y. Li, and J. Huang for collaborations and help with materials reported in this tutorial.
- G. Tang thanks B. Bhaskar, Q. Li, A. Prater, B. Recht, P. Shah, and L. Shen for collaborations and help with materials reported in this tutorial.
- This work is partly supported by the National Science Foundation under grants CCF-1464205 and CCF-1527456, and by the Office of Naval Research under grant N00014-15-1-2387.
Parameter Estimation or Image Inversion I
- Image: Observable image y ∼ p(y; θ), whose distribution is parameterized by unknown parameters θ.
- Inversion: Estimate θ, given a set of samples of y.
- Examples:
  - Source location estimation in MRI and EEG
  - DOA estimation in sensor array processing
  - Frequency and amplitude estimation in spectrum analysis
  - Range, Doppler, and azimuth estimation in radar/sonar
Parameter Estimation or Image Inversion II
- Canonical Model: Superposition of modes:

    y(t) = Σ_{i=1}^r ψ(t; ν_i) α_i + n(t)

  - p = 2r unknown parameters: θ = [ν_1, . . . , ν_r, α_1, . . . , α_r]^T
  - Parameterized modal function: ψ(t; ν)
  - Additive noise: n(t)
- After Sampling:

    [ y(t_0)     ]       r  [ ψ(t_0; ν_i)     ]         [ n(t_0)     ]
    [ y(t_1)     ]   =   Σ  [ ψ(t_1; ν_i)     ] α_i  +  [ n(t_1)     ]
    [   ⋮        ]      i=1 [   ⋮             ]         [   ⋮        ]
    [ y(t_{n−1}) ]          [ ψ(t_{n−1}; ν_i) ]         [ n(t_{n−1}) ]

  or

    y = Ψ(ν)α + n = Σ_{i=1}^r ψ(ν_i)α_i + n

- Typically, the t_i's are uniformly spaced, and almost always n > p.
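The sampled canonical model above can be sketched in a few lines of numpy. This is a minimal illustration, not the tutorial's own code; the mode parameters, amplitudes, and noise level below are hypothetical choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: r = 2 complex sinusoidal modes on n = 64 uniform samples.
n, r = 64, 2
nu = np.array([0.10, 0.23])          # mode parameters (frequencies, cycles/sample)
alpha = np.array([1.0, 0.5j])        # complex amplitudes

t = np.arange(n)
# Modal matrix Psi(nu): column i is psi(nu_i) = exp(j*2*pi*nu_i*t).
Psi = np.exp(2j * np.pi * t[:, None] * nu[None, :])

# y = Psi(nu) alpha + n: superposition of modes plus additive noise.
noise = 0.01 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
y = Psi @ alpha + noise
```

Here there are p = 2r = 4 unknowns (two frequencies and two complex amplitudes) but n = 64 > p samples, matching the over-determined, nonlinear regime described above.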
Parameter Estimation or Image Inversion III
Canonical Model:

    y = Ψ(ν)α + n = Σ_{i=1}^r ψ(ν_i)α_i + n

- DOA estimation and spectrum analysis:

    ψ(ν) = [e^{jt_0 ν}, e^{jt_1 ν}, . . . , e^{jt_{m−1} ν}]^T

  where ν is the DOA (electrical angle) of a radiating point source.
- Radar and sonar:

    ψ(ν) = [w(t_0 − τ)e^{jωt_0}, w(t_1 − τ)e^{jωt_1}, . . . , w(t_{m−1} − τ)e^{jωt_{m−1}}]^T

  where w(t) is the transmit waveform and ν = (τ, ω) collects the delay and Doppler coordinates of a point scatterer.
New Challenges for Parameter Estimation
- Limited sampling rate: ultra-wideband signals, large antenna arrays, etc.
- Noise, corruptions, and missing data: sensor failures, attacks, outliers, etc.
- Multi-modal data: the received signal exhibits superpositions of multiple modal functions, which occurs frequently in multi-user/multi-channel environments.
- Calibration and blind super-resolution: the modal function needs to be calibrated or estimated before performing parameter estimation.
Motivating applications: Super-resolution Imaging
- Single-molecule based super-resolution techniques (STORM/PALM) achieve nanometer spatial resolution by integrating the temporal information of the switching dynamics of fluorophores (emitters).
- In each frame, our goal is to localize a point source model by observing its convolution with a point spread function (PSF) g(t):

    z(t) = (Σ_{i=1}^r d_i δ(t − t_i)) ∗ g(t) = Σ_{i=1}^r d_i g(t − t_i)

- The final image is obtained by superimposing the reconstructions of all frames.
- The reconstruction requires estimating the locations of the point sources.
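The single-frame model z(t) = Σ_i d_i g(t − t_i) can be simulated directly. The Gaussian PSF, source locations, and brightnesses below are hypothetical stand-ins (a real microscope's g(t) would be measured or modeled):

```python
import numpy as np

# Observation model z(t) = sum_i d_i g(t - t_i), with an assumed Gaussian PSF.
t = np.linspace(0, 1, 512)
locations = np.array([0.30, 0.34, 0.70])   # point-source locations
d = np.array([1.0, 0.8, 1.2])              # brightnesses

sigma = 0.02                               # PSF width (hypothetical)
g = lambda t: np.exp(-t**2 / (2 * sigma**2))

# Convolving the spike train with g(t) = superposing shifted PSFs.
z = sum(di * g(t - ti) for di, ti in zip(d, locations))
```

The two sources at 0.30 and 0.34 are closer than the PSF width and blur into a single lobe, which is exactly why localization (rather than naive peak-picking) is needed for super-resolution.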
Three-Dimensional Super-resolution Imaging
- This principle can be extended to reconstruct 3-D objects from 2-D images by modulating the shape, e.g. ellipticity, of the PSFs along the z-dimension.
- The reconstruction requires separation of point sources modulated by different PSFs.

[Figure: 3-D STORM reconstruction; color scale 0–800 nm; scale bar 1 µm.]

M. J. Rust, M. Bates, and X. Zhuang, "Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM)," Nature Methods, vol. 3, pp. 793–795, 2006.
J. Huang, M. Sun, K. Gumpper, Y. Chi, and J. Ma, "3D Multifocus Astigmatism and Compressed Sensing (3D MACS) Based Superresolution Reconstruction," Biomedical Optics Express, 2015.
Motivating Applications: Neural Spike Sorting
- The electrode measures firing activities of neighboring neurons with unknown characteristic functions (or PSFs).
- The goal is to identify and separate the firing times of each neuron from the observed voltage trace at the electrode.
- The reconstruction requires simultaneous estimation of the activation times and the PSFs.
Motivating Applications: Blind Multi-path Channel Identification

- In multi-user communication systems, each user transmits a waveform g(t) modulated by unknown data symbols, which arrives at the receiver asynchronously:

    y(t) = Σ_{i=1}^r α_i g_i(t − t_i)

- The goal is to simultaneously decode and estimate the multi-path delays.
Tutorial Outline
- Review of conventional parameter estimation methods, with a focus on spectrum estimation
- Super-resolution parameter estimation via ℓ1 minimization: consequences of basis mismatch
- Super-resolution parameter estimation via atomic norm minimization
- Super-resolution parameter estimation via structured matrix completion
- Final remarks
Classical Parameter Estimation: Matched Filtering I
- Matched filtering:
  - A sequence of rank-one subspaces, or 1-D test images, is matched to the measured image by filtering, correlating, or phasing.
  - Test images are generated by scanning a prototype image (e.g., a waveform or a steering vector) through frequency, wavenumber, Doppler, and/or delay at some desired resolution ∆ν.
  - The matched filter estimates the signal power at each test parameter:

      P(ℓ) = |ψ(ℓ∆ν)^H y|^2

  [Figures: bearing response of a conventional (Bartlett) beamformer; cross-ambiguity function.]

- Peak locations are taken as estimates of ν_i, and peak values are taken as estimates of the source powers |α_i|^2.
- Resolution: the Rayleigh limit (RL), inversely proportional to the number of measurements.
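The matched-filter scan P(ℓ) = |ψ(ℓ∆ν)^H y|² can be sketched as a periodogram-style search. The frequencies, sample count, and scan resolution below are hypothetical choices for illustration:

```python
import numpy as np

# Matched-filter scan over a frequency grid of resolution dv (assumed 1/(4n)).
n = 64
t = np.arange(n)
true_nu = np.array([0.20, 0.45])                 # two well-separated tones
y = sum(np.exp(2j * np.pi * nu * t) for nu in true_nu)

dv = 1 / (4 * n)                                 # scan resolution
grid = np.arange(0, 1, dv)
steering = np.exp(2j * np.pi * t[:, None] * grid[None, :])  # psi(l*dv) in columns
P = np.abs(steering.conj().T @ y) ** 2           # matched-filter output power

# A peak location is taken as a frequency estimate.
nu_hat = grid[np.argmax(P)]
```

Because the tones here are separated by far more than the Rayleigh limit 1/n, the peak lands within a grid step of a true frequency; for tones closer than 1/n the two peaks merge, which is the resolution limitation noted above.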
Classical Parameter Estimation: Matched Filtering II
- Matched filtering (cont.):
  - Extends to subspace matching for those cases in which the model for the image is comprised of several dominant modes.
  - Extends to the whitened matched filter, the minimum variance unbiased (MVUB) filter, and the generalized sidelobe canceller.

H. L. Van Trees, "Detection, Estimation, and Modulation Theory: Part I."
D. J. Thomson, "Spectrum estimation and harmonic analysis," Proc. IEEE, vol. 70, pp. 1055–1096, Sep. 1982.
T.-C. Lui and B. D. Van Veen, "Multiple window based minimum variance spectrum estimation for multidimensional random fields," IEEE Trans. Signal Process., vol. 40, no. 3, pp. 578–589, Mar. 1992.
L. L. Scharf and B. Friedlander, "Matched subspace detectors," IEEE Trans. Signal Process., vol. 42, no. 8, pp. 2146–2157, Aug. 1994.
A. Pezeshki, B. D. Van Veen, L. L. Scharf, H. Cox, and M. Lundberg, "Eigenvalue beamforming using a multi-rank MVDR beamformer and subspace selection," IEEE Trans. Signal Process., vol. 56, no. 5, pp. 1954–1967, May 2008.
Classical Parameter Estimation: ML Estimation I
- ML estimation in separable nonlinear models:
  - Low-order separable modal representation for the image:

      y = Ψ(ν)α + n = Σ_{i=1}^r ψ(ν_i)α_i + n

    The parameters ν in Ψ are nonlinear parameters (like frequency, delay, and Doppler), and α are linear parameters (complex amplitudes).
  - Estimates of the linear parameters (complex amplitudes of modes) and the nonlinear mode parameters (frequency, wavenumber, delay, and/or Doppler) are extracted, usually based on maximum likelihood (ML), or some variation on linear prediction, using ℓ2 minimization.
Classical Parameter Estimation: ML Estimation II
- Estimation of complex exponential modes:
  - Physical model:

      y(t) = Σ_{i=1}^r α_i ν_i^t + n(t);   ψ(t; ν_i) = ν_i^t

    where ν_i = e^{d_i + jω_i} is a complex exponential mode, with damping d_i and frequency ω_i.
  - Uniformly sampled measurement model: y = Ψ(ν)α, with the Vandermonde matrix

      Ψ(ν) = [ ν_1^0       ν_2^0       · · ·   ν_r^0
               ν_1^1       ν_2^1       · · ·   ν_r^1
               ν_1^2       ν_2^2       · · ·   ν_r^2
                 ⋮           ⋮          ⋱        ⋮
               ν_1^{n−1}   ν_2^{n−1}   · · ·   ν_r^{n−1} ].

    Here, without loss of generality, we have taken the samples at t = ℓt_0, for ℓ = 0, 1, . . . , n − 1, with t_0 = 1.
Classical Parameter Estimation: ML Estimation III
- ML estimation of complex exponential modes:

      min_{ν,α} ‖y − Ψ(ν)α‖_2^2

  - α̂_ML = Ψ(ν)† y
  - ν̂_ML = argmin_ν y^H P_{A(ν)} y, where A(ν) satisfies A^H Ψ = 0
  - Prony's method (1795), modified least squares, linear prediction, and Iterative Quadratic Maximum Likelihood (IQML) are used to solve exact ML or its modifications.
  - Rank reduction is used to combat noise.
  - Requires estimating the modal order.

D. W. Tufts and R. Kumaresan, "Singular value decomposition and improved frequency estimation using linear prediction," IEEE Trans. Acoust., Speech, Signal Process., vol. 30, no. 4, pp. 671–675, Aug. 1982.
D. W. Tufts and R. Kumaresan, "Estimation of frequencies of multiple sinusoids: Making linear prediction perform like maximum likelihood," Proc. IEEE, vol. 70, pp. 975–989, 1982.
L. L. Scharf, "Statistical Signal Processing," Prentice Hall, 1991.
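A minimal noise-free Prony/linear-prediction sketch makes the idea concrete: the modes ν_i are the roots of a prediction polynomial fitted to y. The modes and model order below are hypothetical (and in the noise-free case the recovery is exact, which is what the example checks):

```python
import numpy as np

# Noise-free linear prediction with assumed model order r.
n, r = 32, 2
t = np.arange(n)
true_modes = np.array([np.exp(-0.05 + 2j * np.pi * 0.20),   # damped mode
                       np.exp(2j * np.pi * 0.45)])          # undamped mode
y = sum(mode**t for mode in true_modes)

# Forward linear prediction: y[k] = a1*y[k-1] + ... + ar*y[k-r] for k >= r.
Y = np.column_stack([y[r - 1 - i : n - 1 - i] for i in range(r)])
a, *_ = np.linalg.lstsq(Y, y[r:], rcond=None)

# The roots of the prediction polynomial z^r - a1 z^{r-1} - ... - ar
# are the mode estimates (damping and frequency together).
modes_hat = np.roots(np.concatenate(([1.0], -a)))
```

With noise, the least-squares step is replaced by rank-reduced (SVD-truncated) variants, as the Tufts-Kumaresan references above describe.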
Classical Parameter Estimation: ML Estimation IV

- Example: Exact recovery via linear prediction, for two damped and two undamped modes.

[Figure: four panels — actual modes, conventional FFT, compressed sensing, and linear prediction. Linear prediction recovers the modes exactly.]
Classical Parameter Estimation: Fundamental Limits
- Estimation-theoretic fundamental limits and performance bounds:
  - Fisher information
  - Kullback-Leibler divergence
  - Cramér-Rao bounds
  - Ziv-Zakai bound
  - SNR thresholds
- Key fact: Any subsampling of the measured image (e.g., compressed sensing) has consequences for resolution (or bias) and for variability (or variance) in parameter estimation.

L. L. Scharf, "Statistical Signal Processing," Prentice Hall, 1991.
CS and Fundamental Estimation Bounds I
- Canonical model before compression:

    y = Ψ(ν)α + n = s(θ) + n

  where θ^T = [ν^T, α^T] ∈ C^p and s(θ) = Ψ(ν)α ∈ C^n.
- Canonical model after compression (of noisy data):

    Φy = Φ(Ψ(ν)α + n) = Φ(s(θ) + n)

  where Φ ∈ C^{m×n}, m ≪ n, is a compressive sensing matrix.
- Observation: y ∼ p(y; θ) (or Φy ∼ p(Φy; θ) after compression).
CS and Fundamental Estimation Bounds II
Estimation-theoretic measures:
- Fisher information matrix: the covariance of the Fisher score,

    {J(θ)}_{i,j} = E[ (∂/∂θ_i) log p(y; θ) · (∂/∂θ_j) log p(y; θ) | θ ]
                 = −E[ (∂²/∂θ_i ∂θ_j) log p(y; θ) | θ ]

- Cramér-Rao lower bound (CRB): lower bounds the error covariance of any unbiased estimator T(y) of the parameter vector θ from the measurement y:

    tr[cov_θ(T(y))] ≥ tr[J^{−1}(θ)]

  In particular, the ith diagonal element of J^{−1}(θ) lower bounds the MSE of any unbiased estimator T_i(y) of the ith parameter θ_i from y.
CS, Fisher Information, and CRB
Question: What is the impact of compression (e.g., CS) on the Fisher information matrix and the CRB for estimating parameters?

Theorem (Pakrooh, Pezeshki, Scharf, Chi '13)
(a) For any compression matrix, we have

    (J^{−1}(θ))_{ii} ≤ (Ĵ^{−1}(θ))_{ii} ≤ 1/λ_min(G^T(θ) P_{Φ^T} G(θ))

(b) For a random compression matrix, we have

    (Ĵ^{−1}(θ))_{ii} ≤ λ_max(J^{−1}(θ)) / (C(1 − ε))

with probability at least 1 − δ − δ'.

Remarks:
- (Ĵ^{−1})_{ii} is the CRB for estimating the ith parameter θ_i after compression.
- The CRB always gets worse after compressive sampling.
- The theorem gives a confidence interval and a confidence level for the increase in the CRB after random compression.
CS, Fisher Information, and CRB
    (Ĵ^{−1}(θ))_{ii} ≤ λ_max(J^{−1}(θ)) / (C(1 − ε))

- δ satisfies

    Pr[ ∀q ∈ ⟨G(θ)⟩ : (1 − ε)‖q‖_2^2 ≤ ‖Φq‖_2^2 ≤ (1 + ε)‖q‖_2^2 ] ≥ 1 − δ.

- 1 − δ' is the probability that λ_min((ΦΦ^T)^{−1}) is larger than C.
- If the entries of Φ ∈ R^{m×n} are i.i.d. N(0, 1/m), then

    δ ≤ ⌈(2√p/ε_0)^p e⌉ e^{−m(ε²/4 − ε³/6)},  where  (3ε_0/(1 − ε_0))² + 2(3ε_0/(1 − ε_0)) = ε.

- δ' is determined from the distribution of the largest eigenvalue of a Wishart matrix, and the value of C from a hypergeometric function.

P. Pakrooh, L. L. Scharf, A. Pezeshki, and Y. Chi, "Analysis of Fisher information and the Cramer-Rao bound for nonlinear parameter estimation after compressed sensing," in Proc. 2013 IEEE Int. Conf. on Acoust., Speech and Signal Process. (ICASSP), Vancouver, May 26-31, 2013.
CRB after Compression
Example: Estimating the DOA of a point source at boresight, θ_1 = 0, in the presence of a point interferer at electrical angle θ_2.

- The figure shows the after-compression CRB (red) for estimating θ_1 = 0 as θ_2 is varied inside the (−2π/n, 2π/n] interval. Gaussian compression is done from dimension n = 8192 to m = 3000. Bounds on the after-compression CRB are shown in blue and black. The upper bounds in black hold with probability at least 1 − δ − δ', where δ' = 0.05.
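The qualitative message — compression inflates the CRB — can be checked numerically. This sketch uses our own simplifying assumptions: a single frequency with known unit amplitude in unit-variance complex Gaussian noise (so J = 2 Re(G^H G) with G = ∂s/∂ν), and a random compressor with orthonormalized rows (so the compressed noise stays white):

```python
import numpy as np

rng = np.random.default_rng(1)

# Single-frequency model s(nu) with known amplitude; Fisher information for nu.
n, m, nu = 128, 32, 0.2
t = np.arange(n)
s = np.exp(2j * np.pi * nu * t)
G = 2j * np.pi * t * s                  # d s / d nu

J = 2 * np.real(np.vdot(G, G))          # Fisher information before compression

# Random compression matrix with orthonormal rows (m x n).
Phi = np.linalg.qr(rng.standard_normal((n, m)))[0].T
Gc = Phi @ G
J_hat = 2 * np.real(np.vdot(Gc, Gc))    # Fisher information after compression

crb, crb_hat = 1 / J, 1 / J_hat         # scalar CRBs
```

Since Φ^TΦ is an orthogonal projection here, ‖ΦG‖ ≤ ‖G‖ always, so `crb_hat >= crb`: the after-compression CRB is never better, in line with the theorem above.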
Applying `1 minimization to Parameter Estimation
- Convert the nonlinear modal representation into a linear system via discretization of the parameter space at the desired resolution:

    s(θ) = Σ_{i=1}^r ψ(ν_i)α_i = Ψ_ph α        (over-determined & nonlinear)

    s ≈ [ψ(ω_1), · · · , ψ(ω_n)] [x_1, . . . , x_n]^T = Ψ_cs x        (under-determined, linear & sparse)

- The set of candidate ν_i ∈ Ω is quantized to Ω̃ = {ω_1, · · · , ω_n}, n > m; Ψ_ph is unknown and Ψ_cs is assumed known.
Basis Mismatch: A Tale of Two Models

Mathematical (CS) model: s = Ψ_cs x. The basis Ψ_cs is assumed, typically a gridded imaging matrix (e.g., an n-point DFT matrix or the identity matrix), and x is presumed to be k-sparse.

Physical (true) model: s = Ψ_ph α. The basis Ψ_ph is unknown, and is determined by a point spread function, a Green's function, or an impulse response; α is k-sparse and unknown.

Key transformation:

    x = Ψ_mis α = Ψ_cs^{−1} Ψ_ph α

x is sparse in the unknown basis Ψ_mis, not in the identity basis.
Basis Mismatch: From Sparse to Incompressible

DFT grid mismatch:

    Ψ_mis = Ψ_cs^{−1} Ψ_ph =
      [ L(∆θ_0 − 0)            L(∆θ_1 − 2π(n−1)/n)    · · ·   L(∆θ_{n−1} − 2π/n)
        L(∆θ_0 − 2π/n)         L(∆θ_1 − 0)            · · ·   L(∆θ_{n−1} − 2π·2/n)
          ⋮                      ⋮                     ⋱         ⋮
        L(∆θ_0 − 2π(n−1)/n)    L(∆θ_1 − 2π(n−2)/n)    · · ·   L(∆θ_{n−1} − 0) ]

where L(θ) is the Dirichlet kernel:

    L(θ) = (1/n) Σ_{ℓ=0}^{n−1} e^{jℓθ} = (1/n) e^{jθ(n−1)/2} sin(θn/2)/sin(θ/2).

Slow decay of the Dirichlet kernel means that the presumably sparse vector x = Ψ_mis α is in fact incompressible.

[Figure: the Dirichlet kernel sin(nθ/2)/(n sin(θ/2)) decays slowly away from the grid point.]
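The incompressibility claim is easy to verify numerically: a single tone that sits half a bin off the DFT grid has coefficients given by samples of the Dirichlet kernel, and its energy leaks into every bin. The signal length and offset below are hypothetical:

```python
import numpy as np

# One unit-amplitude mode, half a DFT bin off the grid.
n = 64
t = np.arange(n)
theta = 10.5 * 2 * np.pi / n            # off-grid frequency
s = np.exp(1j * theta * t)

x = np.fft.fft(s) / n                   # coefficients in the assumed DFT basis
mag = np.sort(np.abs(x))[::-1]          # sorted coefficient magnitudes

# On the grid, one coefficient would be 1 and the rest 0. Off the grid, the
# largest coefficient shrinks and the best 1-term approximation error is large.
residual = np.linalg.norm(mag[1:], 1)   # ||x - x_1||_1
```

For this half-bin offset the largest coefficient is only about 2/π ≈ 0.64 and the remaining ℓ1 mass exceeds 1, so x is far from k-sparse for any small k — exactly the degeneration of the best k-term approximation discussed next.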
Basis Mismatch: Fundamental Question

Two models:

    s = Ψ_0 x = Ψ_1 θ

Key transformation:

    x = Ψθ = Ψ_0^{−1} Ψ_1 θ

x is sparse in the unknown basis Ψ_mis, not in the identity basis.

Question: What is the consequence of assuming that x is k-sparse in I, when in fact it is only sparse in an unknown basis Ψ_mis, which is determined by the mismatch between Ψ_cs and Ψ_ph?

    Physical model: s = Ψ_ph α  →  CS sampler: y = Φs  →  CS inverter: min ‖x‖_1 s.t. y = ΦΨ_cs x
Sensitivity to Basis Mismatch
- CS inverter: the basis pursuit solution satisfies

    Noise-free:  ‖x^* − x‖_1 ≤ C_0 ‖x − x_k‖_1
    Noisy:       ‖x^* − x‖_2 ≤ C_0 k^{−1/2} ‖x − x_k‖_1 + C_1 ε

  where x_k is the best k-term approximation to x.
- Similar bounds hold for CoSaMP and ROMP.
- Where does mismatch enter? The k-term approximation error:

    x = Ψ_mis α = Ψ_cs^{−1} Ψ_ph α

- Key: analyze the sensitivity of ‖x − x_k‖_1 to basis mismatch.
Degeneration of Best k−Term Approximation
Theorem (Chi, Scharf, Pezeshki, Calderbank, 2011)
Let Ψ_mis = Ψ_cs^{−1} Ψ_ph = I + E, where x = Ψ_mis α. Let 1 ≤ p, q ≤ ∞ and 1/p + 1/q = 1.
- If the rows e_ℓ^T ∈ C^{1×n} of E are bounded as ‖e_ℓ‖_p ≤ β, then

    ‖x − x_k‖_1 ≤ ‖α − α_k‖_1 + (n − k)β‖α‖_q.

- The bound is achieved when the entries of E satisfy

    e_{mn} = ±β · e^{j(arg(α_m) − arg(α_n))} · (|α_n|/‖α‖_q)^{q/p}.
Y. Chi, L.L. Scharf, A. Pezeshki, and A.R. Calderbank, “Sensitivity to basis mismatch in compressed sensing,” IEEE Transactions on
Signal Processing, vol. 59, no. 5, pp. 2182–2195, May 2011.
Bounds on Image Inversion Error
Theorem (inversion error)
Let A = ΦΨ_mis satisfy δ_{2k}^A < √2 − 1, and let 1/p + 1/q = 1. If the rows of E satisfy ‖e_m‖_p ≤ β, then

    ‖x − x^*‖_1 ≤ C_0 (n − k)β‖α‖_q   (noise-free)
    ‖x − x^*‖_2 ≤ C_0 (n − k)k^{−1/2} β‖α‖_q + C_1 ε   (noisy)

- Message: in the presence of basis mismatch, exact or near-exact sparse recovery cannot be guaranteed. Recovery may suffer large errors.
Y. Chi, L.L. Scharf, A. Pezeshki, and A.R. Calderbank, “Sensitivity to basis mismatch in compressed sensing,” IEEE Transactions on
Signal Processing, vol. 59, no. 5, pp. 2182–2195, May 2011.
Mismatch of DFT Basis in Modal Analysis I
- Frequency mismatch:

[Figure: four panels — actual modes, conventional FFT, compressed sensing, and linear prediction — under frequency mismatch.]
Mismatch of DFT Basis in Modal Analysis II
- Damping mismatch:

[Figure: four panels — actual modes, conventional FFT, compressed sensing, and linear prediction — under damping mismatch.]
Mismatch of DFT Basis in Modal Analysis III
- Frequency mismatch, noisy measurements:

[Figure: four panels — actual modes, conventional FFT, compressed sensing, and linear prediction with rank reduction — under frequency mismatch with noise.]
Mismatch in DFT Frame for Modal Analysis I
But what if we make the grid finer and finer?
- Over-resolution experiment:
  - m = 25 samples.
  - Equal-amplitude complex tones at f_1 = 0.5 Hz and f_2 = 0.52 Hz (half the Rayleigh limit apart), mismatched to the mathematical basis.
  - The mathematical model is s = Ψ_cs x, where Ψ_cs is the m × mL "DFT" frame that is over-resolved to ∆f = 1/mL:

      Ψ_cs = (1/√m) [ 1    1                  · · ·   1
                      1    e^{j2π/mL}         · · ·   e^{j2π(mL−1)/mL}
                      ⋮      ⋮                 ⋱        ⋮
                      1    e^{j2π(m−1)/mL}    · · ·   e^{j2π(m−1)(mL−1)/mL} ].
Mismatch in DFT Frame for Modal Analysis II
- The MSE of inversion is noise-defeated, noise-limited, or null-space limited, depending on the SNR.

[Figure: MSE (dB) vs. SNR (dB) against the CRB — ℓ1 (SOCP) inversions for L = 2, 4, 6, 8; OMP for L = 2, 4, 6, 8; and OMP for L = 8, 12, 14.]

- The results are worse for a weak mode in the presence of a strong interfering mode.
L. L. Scharf, E. K. P. Chong, A. Pezeshki, and J. R. Luo, “Sensitivity considerations in compressed sensing,” in Conf. Rec. Asilomar’11,
Pacific Grove, CA,, Nov. 2011, pp. 744–748.
Intermediate Recap: Sensitivity of CS to Basis Mismatch
- Basis mismatch is inevitable when exploiting ℓ1 minimization, and the sensitivities of CS to basis mismatch need to be fully understood. No matter how finely we grid the parameter space, the actual modes almost never lie on the grid.
- The consequence of over-resolution (very fine gridding) is that performance follows the Cramér-Rao bound more closely at low SNR, but at high SNR it departs more dramatically from the Cramér-Rao bound.
- This matches intuition gained from more conventional modal analysis, where there is a qualitatively similar trade-off between bias and variance. That is, bias may be reduced with frame expansion (over-resolution), but there is a penalty to be paid in variance.
References on Model Mismatch in CS
- Y. Chi, A. Pezeshki, L. L. Scharf, and R. Calderbank, "Sensitivity to basis mismatch in compressed sensing," in Proc. ICASSP'10, Dallas, TX, Mar. 2010, pp. 3930–3933.
- Y. Chi, L. L. Scharf, A. Pezeshki, and A. R. Calderbank, "Sensitivity to basis mismatch in compressed sensing," IEEE Transactions on Signal Processing, vol. 59, no. 5, pp. 2182–2195, May 2011.
- L. L. Scharf, E. K. P. Chong, A. Pezeshki, and J. R. Luo, "Compressive sensing and sparse inversion in signal processing: Cautionary notes," in Proc. DASP'11, Coolum, Queensland, Australia, Jul. 10-14, 2011.
- L. L. Scharf, E. K. P. Chong, A. Pezeshki, and J. R. Luo, "Sensitivity considerations in compressed sensing," in Conf. Rec. Asilomar'11, Pacific Grove, CA, Nov. 2011, pp. 744–748.
- M. A. Herman and T. Strohmer, "General deviants: An analysis of perturbations in compressed sensing," IEEE J. Selected Topics in Signal Processing, vol. 4, no. 2, pp. 342–349, Apr. 2010.
- D. H. Chae, P. Sadeghi, and R. A. Kennedy, "Effects of basis-mismatch in compressive sampling of continuous sinusoidal signals," Proc. Int. Conf. on Future Computer and Commun., Wuhan, China, May 2010.
Some Remedies to Basis Mismatch : A Partial List
These approaches still assume a grid.
- H. Zhu, G. Leus, and G. B. Giannakis, "Sparsity-cognizant total least-squares for perturbed compressive sampling," IEEE Transactions on Signal Processing, vol. 59, May 2011.
- M. F. Duarte and R. G. Baraniuk, "Spectral compressive sensing," Applied and Computational Harmonic Analysis, vol. 35, no. 1, pp. 111-129, 2013.
- A. Fannjiang and W. Liao, "Coherence-Pattern Guided Compressive Sensing with Unresolved Grids," SIAM Journal of Imaging Sciences, vol. 5, no. 1, pp. 179-202, 2012.
Inspirations for Atomic Minimization I
- Prior information to exploit: there are only a few active parameters (sparse!), the exact number of which is unknown.
- In compressive sensing, a sparse signal is simple: it is a parsimonious sum of the canonical basis vectors {e_k}.
- These basis vectors are the building blocks for sparse signals.
- The ℓ1 norm enforces sparsity w.r.t. the canonical basis vectors.
- The unit ℓ1 norm ball is conv{±e_k}, the convex hull of the basis vectors.
- A hyperplane will most likely touch the ℓ1 norm ball at spiky points, which correspond to sparse solutions.
- This is the geometric reason that minimizing ‖x‖_1 subject to y = Ax produces a sparse solution.
Inspirations for Atomic Minimization II
- Given a finite dictionary D = [d_1 · · · d_p], we can consider simple signals that have sparse decompositions w.r.t. the building blocks {d_k}.
- We promote sparsity w.r.t. D by using the norm

    ‖x‖_D = min{‖α‖_1 : x = Dα}

- The unit norm ball is precisely the convex hull conv{±d_k}.
- Minimizing ‖·‖_D subject to a linear constraint is likely to recover solutions that are sparse w.r.t. D.
Inspirations for Atomic Minimization III
- A low-rank matrix has a sparse representation in terms of unit-norm, rank-one matrices.
- The dictionary D = {uv^T : ‖u‖_2 = ‖v‖_2 = 1} is continuously parameterized and has an infinite number of primitive signals.
- We enforce low-rankness using the nuclear norm:

    ‖X‖_* = min{‖σ‖_1 : X = Σ_i σ_i u_i v_i^T}

- The nuclear norm ball is the convex hull of the unit-norm, rank-one matrices.
- A hyperplane touches the nuclear norm ball at low-rank solutions.
Atomic Norms I
Convex geometry.
- Consider a dictionary or set of atoms A = {ψ(ν) : ν ∈ N} ⊂ R^n or C^n.
- The parameter space N can be finite, countably infinite, or continuous.
- The atoms {ψ(ν)} are building blocks for signal representation.
- Examples: canonical basis vectors, a finite dictionary, rank-one matrices.
- Line spectral atoms:

    a(ν) = [1, e^{j2πν}, . . . , e^{j2π(n−1)ν}]^T : ν ∈ [0, 1]

- 2-D line spectral atoms:

    a(ν_1, ν_2) = a(ν_1) ⊗ a(ν_2), ν_1, ν_2 ∈ [0, 1]

- Tensor atoms: A = {u ⊗ v ⊗ w ∈ R^{m×n×p} : ‖u‖ = ‖v‖ = ‖w‖ = 1}, the unit-norm, rank-one tensors.
Atomic Norms II
- Prior information: the signal is simple w.r.t. A: it has a parsimonious decomposition using atoms in A,

    x = Σ_{k=1}^r α_k ψ(ν_k)

- The atomic norm of any x is defined as (Chandrasekaran, Recht, Parrilo, & Willsky, 2010)

    ‖x‖_A = inf{‖α‖_1 : x = Σ_k α_k ψ(ν_k)} = inf{t > 0 : x ∈ t conv(A)}

- The unit ball of the atomic norm is the convex hull of the atomic set A.
Atomic Norms III
Finite optimization.
- Given linear measurements of a signal x^*, possibly with missing data and corrupted by noise and outliers, we want to recover the signal.
- Suppose we have some prior information that the signal is simple: it has a sparse representation with respect to an atomic set A.
- We can recover the signal by solving convex optimizations:

    Basis Pursuit: minimize ‖x‖_A subject to y = Ax
    LASSO: minimize (1/2)‖y − Ax‖_2^2 + λ‖x‖_A
    Demixing: minimize ‖x‖_A + λ‖z‖_1 subject to y = x + z
Atomic Norms IV
- The dual atomic norm is defined as

    ‖q‖_A^* := sup_{x: ‖x‖_A ≤ 1} |⟨x, q⟩| = sup_{a∈A} |⟨a, q⟩|

- For line spectral atoms, the dual atomic norm is the maximal magnitude of a complex trigonometric polynomial:

    ‖q‖_A^* = sup_{a∈A} |⟨a, q⟩| = sup_{ν∈[0,1]} |Σ_{k=0}^{n−1} q_k e^{j2πkν}|

  Atoms                          | Atomic Norm          | Dual Atomic Norm
  canonical basis vectors        | ℓ1 norm              | ℓ∞ norm
  finite atoms                   | ‖·‖_D                | ‖D^T q‖_∞
  unit-norm, rank-one matrices   | nuclear norm         | spectral norm
  unit-norm, rank-one tensors    | tensor nuclear norm  | tensor spectral norm
  line spectral atoms            | ‖·‖_A                | ‖·‖_A^*
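For line spectral atoms the dual norm is easy to approximate numerically: evaluate the trigonometric polynomial on a fine grid via a zero-padded FFT and take the maximum magnitude. This is an illustrative approximation (the FFT's sign convention does not affect the magnitude); the vector q and grid size are hypothetical:

```python
import numpy as np

# Approximate ||q||_A^* = sup_nu |sum_k q_k e^{j 2 pi k nu}| on a fine grid.
n = 8
rng = np.random.default_rng(2)
q = rng.standard_normal(n) + 1j * rng.standard_normal(n)

pad = 4096                      # grid of frequencies nu = i/pad
poly = np.fft.fft(q, pad)       # trig polynomial samples (zero-padded FFT)
dual_norm = np.abs(poly).max()  # ~ ||q||_A^*
```

Two sanity checks follow from the definition: the dual norm is at most ‖q‖_1 (triangle inequality at each ν) and at least ‖q‖_2 (the maximum of |poly|² dominates its mean, which equals ‖q‖_2² by Parseval).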
Atomic Norms V
Measure optimization.
- Rewrite the decomposition x = Σ_{k=1}^r α_k ψ(ν_k) as

    x = ∫_N ψ(ν) µ(dν)

  where µ = Σ_{k=1}^r α_k δ(ν − ν_k) is a discrete signed measure defined on the parameter space N.
- The atomic norm ‖x‖_A equals the optimal value of an infinite-dimensional ℓ1 minimization:

    minimize_{µ∈M(N)} ‖µ‖_TV subject to x = ∫_N ψ(ν) µ(dν)

- Here M(N) is the set of all measures defined on N, and ‖µ‖_TV is the total variation norm of a measure.
Atomic Norms VI
- When µ = Σ_{k=1}^r α_k δ(ν − ν_k) is a discrete measure, ‖µ‖_TV = Σ_{k=1}^r |α_k|.
- When µ has a density function ρ(ν), ‖µ‖_TV = ∫_N |ρ(ν)|dν = ‖ρ(ν)‖_{L1}.
- The equivalent measure-optimization definition allows us to apply optimization theory and convex analysis to study atomic norm problems.
- The dual problem is a semi-infinite program:

    maximize ⟨q, x⟩ subject to |⟨q, ψ(ν)⟩| ≤ 1, ∀ν ∈ N   (i.e., ‖q‖_A^* ≤ 1)
Problems I
Fundamentals:
- Atomic decomposition: given a signal, which decompositions achieve the atomic norm?
- Recovery from noise-free linear measurements: how many measurements do we need to recover a signal that has a sparse representation w.r.t. an atomic set?
- Denoising: how well can we denoise a signal by exploiting its simplicity structure?
- Support recovery: how well can we approximately recover the active parameters from noisy data?
- Resolution limit: what is the fundamental limit in resolving active parameters?
- Computational methods: how shall we solve atomic norm minimization problems?
Problems II
Special cases and applications:
- Atomic norm of tensors: how to find atomic decompositions of tensors?
- Atomic norm of spectrally-sparse ensembles: how to define the atomic norm for multiple measurement vector (MMV) models?
- Super-resolution of mixture models: how to solve the problem when multiple forms of atoms exist?
- Blind super-resolution: how to solve the problem when the form of the atoms is not known precisely?
- Applications to single-molecule imaging.
Atomic Decomposition I
- Consider a parameterized set of atoms A = {ψ(ν), ν ∈ N} and a signal x with decomposition

    x = Σ_{k=1}^r α_k^* ψ(ν_k^*).

  Under what conditions on the parameters {α_k^*, ν_k^*} do we have ‖x‖_A = ‖α^*‖_1?
- For A = {±e_k}, this question is trivial.
- For A = {uv^T : ‖u‖_2 = ‖v‖_2 = 1}, the composing atoms should be orthogonal (singular value decomposition).
- For A = {±d_k}, a sufficient condition is that the dictionary matrix D satisfies the restricted isometry property.
Atomic Decomposition II
Optimality condition.
- Define µ^* = Σ_{k=1}^r α_k^* δ(ν − ν_k^*). We are asking when µ^* is the optimal solution of

    minimize_{µ∈M(N)} ‖µ‖_TV subject to x = ∫_N ψ(ν) µ(dν)

- Atomic decomposition studies the parameter estimation ability of total variation minimization in the full-data, noise-free case.
- Recall the dual problem:

    maximize ⟨q, x⟩ subject to |⟨q, ψ(ν)⟩| ≤ 1, ∀ν ∈ N   (i.e., ‖q‖_A^* ≤ 1)

- Optimality condition: µ^* is optimal if and only if there exists a dual certificate q such that

    |⟨q, ψ(ν)⟩| ≤ 1, ∀ν ∈ N
    ⟨q, x⟩ = ‖µ^*‖_TV
Atomic Decomposition III
- Define the function q(ν) = ⟨q, ψ(ν)⟩. The optimality condition becomes

    dual feasibility: ‖q(ν)‖_{L∞} ≤ 1
    complementary slackness: q(ν_k^*) = sign(α_k^*), k ∈ [r]

- To ensure the uniqueness of the optimal solution µ^*, we strengthen the optimality condition to:

    strict boundedness: |q(ν)| < 1, ν ∈ N \ {ν_k^*, k ∈ [r]}
    interpolation: q(ν_k^*) = sign(α_k^*), k ∈ [r]

[Figure: a dual certificate q(ν) interpolating the signs ±1 at the true parameters while staying strictly inside (−1, 1) elsewhere.]
Atomic Decomposition IV
Subdifferential.
- The subdifferential of ‖·‖_A at x is

    ∂‖x‖_A = {q : ‖q‖_A^* ≤ 1, ⟨q, x⟩ = ‖x‖_A},

  which coincides with the optimality condition.
- Therefore, the dual certificate is a subgradient of the atomic norm.
- Example: for the nuclear norm, if the reduced SVD of a matrix X is UΣV^T, then the subdifferential has the characterization

    ∂‖X‖_* = {Q : Q = UV^T + W, U^T W = 0, WV = 0, ‖W‖ ≤ 1}

- For general atomic norms, it seems hopeless to fully characterize the subdifferential.
- To find atomic decomposition conditions, a dual certificate is usually constructed, which merely finds one subgradient in the subdifferential.
Atomic Decomposition V
Minimal energy dual certificate.
- The boundedness and interpolation conditions imply that the function q(ν) achieves a maximum or minimum at each ν = ν_k^*.
- We require a pre-certificate function to satisfy

    (∂/∂ν) q(ν_k^*) = 0, k ∈ [r]
    q(ν_k^*) = sign(α_k^*), k ∈ [r]

- To ensure that |q(ν)| is small, we push it down by minimizing the (possibly weighted) energy of q, obtaining a pre-certificate as the solution of

    minimize (1/2) q^T W^{−1} q
    subject to ⟨q, (∂/∂ν)ψ(ν_k^*)⟩ = 0, ⟨q, ψ(ν_k^*)⟩ = sign(α_k^*), k ∈ [r]
Atomic Decomposition VI
- This leads to the following kernel expansion of the pre-certificate function:

    q(ν) = Σ_{k=1}^r c_k K(ν, ν_k^*) + Σ_{k=1}^r d_k ∂K(ν, ν_k^*)

  where the kernel K(ν, ξ) = ψ(ν)^T W ψ(ξ).
- For line spectral atoms, when W = diag(w) with w the autocorrelation sequence of the triangle function, the corresponding kernel K(ν, ξ) = K(ν − ξ) is the Jackson kernel (squared Fejér kernel), which decays rapidly.

[Figure: the Jackson kernel |K(ν)| decays much faster than the Dirichlet kernel |D(ν)|.]
Atomic Decomposition VII
Line spectral decomposition.
- Using these ideas, for line spectral atoms a(ν) = [1, e^{j2πν}, · · · , e^{j2πnν}]^T, Candès and Fernandez-Granda obtained the following theorem.

Theorem (Candès & Fernandez-Granda, 2012)
If the true parameters {ν_k^*} are separated by 4/n, then the atomic norm ‖x‖_A = Σ_{k=1}^r |α_k^*|.

- The critical separation was improved to 2.52/n (Fernandez-Granda, 2015).
- The separation condition is similar in flavor to the restricted isometry property for finite dictionaries, and to the orthogonality condition for the singular value decomposition.
- For atomic decomposition results (full data, noise-free), the sparsity level is typically only restricted by the separation constraint and can be large.
Atomic Decomposition VIII
Other decomposition results.
- Finite dictionary: restricted isometry property [Candès, Romberg, Tao, 2004]
- 2-D line spectral atoms: separation of parameters [Candès & Fernandez-Granda, 2012]
- Symmetric rank-1 tensors: soft-orthogonality of the factors [Tang & Shah, 2015]
- Non-symmetric rank-1 tensors: incoherence, Gram isometry, etc. [Li, Prater, Shen & Tang, 2015]
- Translation-invariant signals: separation of translations [Tang & Recht, 2013; Bendory, Dekel & Feuer, 2014]
- Spherical harmonics: separation of parameters [Bendory, Dekel & Feuer, 2014]
- Radar signals: separation of time-frequency shifts [Heckel, Morgenshtern & Soltanolkotabi, 2015]
Resolution Limits I
Why is there a resolution limit?
- To simultaneously interpolate sign(α_i^*) = +1 and sign(α_j^*) = −1 at ν_i^* and ν_j^* respectively, while remaining bounded, imposes constraints on the derivative of q(ν):

    ‖∇q(ν̂)‖_2 ≥ |q(ν_i^*) − q(ν_j^*)| / ∆_{i,j} = 2/∆_{i,j}

- For N ⊂ R, there exists ν̂ ∈ (ν_i^*, ν_j^*) such that

    q'(ν̂) = 2/(ν_j^* − ν_i^*)

[Figure: a bounded dual certificate must swing from +1 to −1 between two nearby parameters, forcing a large derivative.]
Resolution Limits II
- For certain classes of functions F, if the function values are uniformly bounded, this limits the maximal achievable derivative, i.e.,

    sup_{g∈F} ‖g'‖_∞ / ‖g‖_∞ < ∞.

- For F = {trigonometric polynomials of degree at most n},

    ‖g'(ν)‖_∞ ≤ 2πn ‖g(ν)‖_∞.

  This is the classical Markov-Bernstein inequality.
- Resolution limit for line spectral signals: if min_{i≠j} |ν_i^* − ν_j^*| < 1/(πn), then there is a sign pattern for {α_k^*} such that Σ_k α_k^* a(ν_k^*) is not an atomic decomposition.
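The Markov-Bernstein inequality invoked above can be sanity-checked numerically on a random trigonometric polynomial. The degree, coefficients, and grid size below are hypothetical; the derivative is computed exactly term by term, and the sup norms are approximated on a fine grid:

```python
import numpy as np

# Check ||g'||_inf <= 2*pi*n*||g||_inf for g(nu) = sum_{|k|<=n} c_k e^{j2*pi*k*nu}.
rng = np.random.default_rng(3)
n = 5
k = np.arange(-n, n + 1)
c = rng.standard_normal(k.size) + 1j * rng.standard_normal(k.size)

nu = np.linspace(0, 1, 8192, endpoint=False)
E = np.exp(2j * np.pi * np.outer(nu, k))
g = E @ c                                # polynomial values
gp = E @ (2j * np.pi * k * c)            # exact derivative, term by term

ratio = np.abs(gp).max() / np.abs(g).max()
```

For a generic coefficient vector the ratio sits well below the extremal value 2πn (equality requires a pure degree-n exponential), which is why a dual certificate that must swing between ±1 over an interval shorter than ~1/(πn) cannot exist.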
Resolution Limits III
- Using a theorem by Turán about the roots of trigonometric polynomials, Duval and Peyré obtained a better critical separation bound:

    min_{i≠j} |ν_i^* − ν_j^*| > 1/n.

- The sign pattern of {α_j^*} plays a big role. There is no resolution limit if, e.g., all α_j^* are positive [Schiebinger, Robeva & Recht, 2015].
Recovery from Gaussian Measurements I
I
Given y = Ax? where the entries of A are i.i.d. Gaussian, we recover x?
by solving
minimize kxkA subject to y = Ax.
I
Highlight the power of atomic regularization.
I
When does this work? How many generic (Gaussian) measurements do
we need to recover x? exactly?
I
Summary of atomic minimization recovery bounds (Chandrasekaran,
Recht, Parrilo, & Willsky, 2010):
Recovery from Gaussian Measurements II
I
Tangent (descent) cone: the set of directions that decrease the norm at x? :
TA (x? ) = {d : kx? + αdkA ≤ kx? kA for some α > 0}
I
x? is the unique minimizer iff null(A) ∩ TA (x? ) = {0}.
I
When does the random subspace null(A) intersect the descent cone
TA (x? ) only at the origin?
I
The size of the descent cone matters, as measured by the mean width:
we need
m ≥ n w(TA (x? ) ∩ Sn−1 )²
for the recovery of x? .
Recovery from Gaussian Measurements III
I
Here the mean width
w(TA (x? ) ∩ Sn−1 ) := (1/2) ∫_{Sn−1} sup_{x∈TA (x? ), kxk2 =1} hx, ui du
≤ (1/2) ∫_{Sn−1} inf_{z∈NA (x? )} kz − uk2 du
I
The normal cone NA (x? ) is the polar cone of the descent cone, the cone
induced by the subdifferential at x? .
I
Find a z ∈ NA (x? ) that is good enough (depending on u), which
requires some knowledge of the subdifferential.
Recovery with Missing Data I
I
Suppose we observe only a (random) portion of the full signal x? ,
y = x?Ω , and would like to complete the rest.
I
E.g., matrix completion, recovery from partial Fourier transform in
compressive sensing
I
Optimization formulation:
minimize kxkA subject to xΩ = x?Ω .
x
I
Results for line spectral signals:
Theorem (Tang, Bhaskar, Shah & Recht, 2012)
If we observe x? = Σ_{k=1}^r αk? a(νk? ) on a size-O(r log(r) log(n)) random
subset of {0, 1, . . . , n − 1} and the true parameters are separated by 4/n, then
atomic norm minimization successfully completes the signal.
Theorem (Chi and Chen, 2013)
Similar results hold for multi-dimensional spectral signals.
Recovery with Missing Data II
Recovery with Missing Data III
I
Dual certificate: x? is the unique minimizer iff there exists a dual
certificate vector q such that the dual certificate function
q(ν) = hq, a(ν)i satisfies
q(νk? ) = sign(αk? ), k ∈ [r]
|q(ν)| < 1, ∀ν ∈
/ {νk? , k ∈ [r]}
qi = 0, ∀i ∈
/ Ω.
Recovery with Missing Data IV
I
The minimal energy construction yields
q(ν) = Σ_{k=1}^r ck Kr (ν − νk? ) + Σ_{k=1}^r dk ∂Kr (ν − νk? )
where the (random) kernel
Kr (ν) = a(0)H W a(ν) = Σ_l wl I_{l∈Ω} e^{j2πνl}
I
When the observation index set Ω is random, argue that q(ν) is close to
the Candès-Fernandez-Granda dual certificate function
using concentration of measure.
Figure : the dual polynomial Q(ν) constructed from the random kernel.
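The random kernel is easy to simulate. A minimal sketch; the uniform weights wl = 1/m and the sizes are simplifying assumptions, not the weights used in the actual proof:

```python
import numpy as np

# Sketch of the random kernel K_r(nu) = sum_{l in Omega} w_l e^{j 2 pi nu l}
# from the minimal-energy dual certificate. With nonnegative weights summing
# to one, |K_r| peaks at nu = 0, where it equals 1.
rng = np.random.default_rng(1)
n, m = 64, 32
Omega = rng.choice(n, size=m, replace=False)    # random observed indices
w = np.ones(n) / m                              # uniform weights (assumption)

nu = np.linspace(-0.5, 0.5, 1001)
K = np.array([np.sum(w[Omega] * np.exp(2j * np.pi * v * Omega)) for v in nu])

peak = np.abs(K[np.argmin(np.abs(nu))])         # value at nu = 0
assert peak >= np.abs(K).max() - 1e-9           # kernel peaks at the origin
```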
Denoising I
Slow rate for general atomic norms
I
Observe noisy measurements: y = x? + w, with w the noise.
I
Denoise y by solving
x̂ = argmin_x (1/2) kx − yk2² + λkxkA .
I
Choose λ ≥ EkwkA* .
Theorem (Bhaskar, Tang & Recht, 2012)
Error Rate:
(1/n) Ekx̂ − x? k2² ≤ (λ/n) kx? kA .
I
Specialize to line spectral signals: suppose the signal
x? = Σ_{k=1}^r αk? a(νk? ) and the noise w ∼ N (0, σ² In ).
I
We can choose λ = σ√(n log n).
Theorem (Bhaskar, Tang & Recht, 2012)
Error Rate:
(1/n) Ekx̂ − x? k2² ≤ σ √(log(n)/n) Σ_{l=1}^r |αl? |.
Denoising II
Fast rate with well-separated frequency parameters.
Theorem (Tang, Bhaskar & Recht, 2013)
Fast Rate:
(1/n) kx̂ − x? k2² ≤ Cσ²r log(n)/n
if the parameters are separated.
The rate is minimax optimal:
I
No algorithm can do better than
(1/n) E kx̂ − x? k2² ≥ C 0 σ²r log(n/r)/n
even if the parameters are well-separated.
I
No algorithm can do better than
(1/n) kx̂ − x? k2² ≥ C 0 σ²r/n
even if we know a priori the well-separated parameters.
Denoising III
Figure : MSE (dB) versus SNR (dB) comparing AST, Cadzow, MUSIC, and Lasso.
Noisy Support Recovery/Parameter Estimation I
Gaussian noise (Tang, Bhaskar & Recht, 2013)
I
When the noise w is Gaussian, we denoise the signal and recover the
frequencies using
x̂ = argmin_x (1/2) kx − yk2² + λkxkA .
I
The dual problem projects y onto the dual norm ball of radius λ:
maximize (1/2) (kyk2² − ky − zk2²) subject to kzkA* ≤ λ.
I
Optimality condition: the dual certificate for x̂, q = (y − x̂)/λ, is a
scaled version of the noise estimator.
I
The places where |hq̂, ψ(ν)i| = 1 correspond to the support.
Noisy Support Recovery/Parameter Estimation II
I
Spurious amplitudes: Σ_{l:ν̂l ∈F} |α̂l | ≤ C1 σ √(r² log(n)/n).
I
Frequency deviation: Σ_{l:ν̂l ∈Nj} |α̂l | {n min_{νj?} d(νj? , ν̂l )}² ≤ C2 σ √(r² log(n)/n).
I
Near-region approximation: |αj? − Σ_{l:ν̂l ∈Nj} α̂l | ≤ C3 σ √(r² log(n)/n).
Noisy Support Recovery/Parameter Estimation III
I
For any νi? such that |αi? | > C3 σ √(r² log(n)/n), there exists a recovered
frequency ν̂i such that
|νi? − ν̂i | ≤ (√(C2 /C3 )/n) { |αi? | / (C3 σ √(r² log(n)/n)) − 1 }^{−1/2}
Bounded noise (Fernandez-Granda, 2013)
I
When the noise w is bounded, kwk2 ≤ δ, we denoise the signal and
recover the frequencies by solving
minimize kxkA subject to ky − xk2 ≤ δ.
I
Spurious amplitudes: Σ_{l:ν̂l ∈F} |α̂l | ≤ C1 δ.
I
Frequency deviation: Σ_{l:ν̂l ∈Nj} |α̂l | {n min_{νj?} d(νj? , ν̂l )}² ≤ C2 δ.
I
Near-region approximation: |αj? − Σ_{l:ν̂l ∈Nj} α̂l | ≤ C3 δ.
Noisy Support Recovery/Parameter Estimation IV
I
For any νi? such that |αi? | > C3 δ, there exists a recovered frequency ν̂i
such that
|νi? − ν̂i | ≤ (1/n) √( C2 δ / (|αi? | − C3 δ) )
Small noise.
Theorem (Duval & Peyré, 2013)
Suppose the frequency parameters are well-separated and the coefficients
{αi? } are real. When both the noise w and the regularization parameter λ are
small, regularized atomic norm minimization recovers exactly r parameters
in a small neighborhood of the true parameters.
Computational Methods I
Semidefinite Reformulations/Relaxations.
I
The dual problem involves a dual norm constraint of the form
kzkA* ≤ 1 ⇔ |hz, ψ(ν)i| ≤ 1 ∀ν ∈ N
I
Line spectral atoms:
kzkA* ≤ 1 ⇔ |Σ_{k=0}^{n−1} zk e^{j2πνk} | ≤ 1 ∀ν ∈ [0, 1]
I
The latter states that the magnitude of a complex trigonometric
polynomial is bounded by 1 everywhere.
I
Bounded real lemma (Dumitrescu, 2007):
|Σ_{k=0}^{n−1} zk e^{j2πνk} | ≤ 1 ∀ν ∈ [0, 1]
⇔ there exists Q such that
[ Q z ; zH 1 ] ⪰ 0, trace(Q, j) = δ(j = 0), j = 0, . . . , n − 1,
where trace(Q, j) sums the j-th diagonal of Q.
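The dual-norm constraint can be probed numerically on a grid: for the scaled atom z = a(ν0)/n, the dual polynomial hz, a(ν)i is a Dirichlet kernel whose magnitude stays at most 1 and attains 1 exactly at ν0. A minimal sketch; the sizes and ν0 are arbitrary choices:

```python
import numpy as np

# Grid check of the dual-norm constraint |<z, a(nu)>| <= 1 for z = a(nu0)/n.
n = 32
nu0 = 0.3
k = np.arange(n)
a = lambda nu: np.exp(2j * np.pi * nu * k)    # line spectral atom
z = a(nu0) / n

nu = np.linspace(0, 1, 5000, endpoint=False)
q = np.array([np.vdot(z, a(v)) for v in nu])  # q(nu) = <z, a(nu)>

assert np.abs(q).max() <= 1 + 1e-9            # dual norm of z is at most 1
assert abs(np.abs(np.vdot(z, a(nu0))) - 1) < 1e-9   # attained at nu0
```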
Computational Methods II
I
This leads to an exact semidefinite representation of the line spectral
atomic norm (Bhaskar, Tang & Recht, 2012):
kxkA = inf { (1/2)(t + u0 ) : [ Toep(u) x ; xH t ] ⪰ 0 }
I
Therefore, line spectral atomic norm regularized problems have exact
semidefinite representations, e.g.,
minimize kxkA subject to xΩ = x?Ω
⇔
minimize (1/2)(t + u0 ) subject to [ Toep(u) x ; xH t ] ⪰ 0, xΩ = x?Ω
Computational Methods III
Discretization.
I
The dual atomic problem involves a semi-infinite constraint
kzkA* ≤ 1 ⇔ |hz, ψ(ν)i| ≤ 1 ∀ν ∈ N
I
When the dimension of N is small, discretize the parameter space to get
a finite number of grid points Nm .
I
Enforce a finite number of constraints:
|hz, ψ(νj )i| ≤ 1, ∀νj ∈ Nm
I
Equivalently, we replace the set of atoms with a discrete one:
kxkAm = inf { kαk1 : x = Σj αj ψ(νj ), νj ∈ Nm }
Computational Methods IV
I
What happens to the solutions when
ρ(Nm ) = max_{ν∈N} min_{ν 0 ∈Nm} d(ν, ν 0 ) → 0 ?
Theorem (Tang, Bhaskar & Recht, 2014; Duval & Peyré, 2013)
I
The optimal values converge to the original optimal values.
I
The dual solutions converge with speed O(ρm ).
I
The primal optimal measures converge in distribution.
I
When the SNR is large enough, the solution of the discretized problem is
supported on pairs of parameters which are neighbors of the true parameters.
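The discretized problem is just an l1 (Lasso) program over grid atoms. A minimal sketch; taking the grid size m equal to n makes the grid dictionary the orthonormal DFT (a simplification — finer grids m > n work the same way), and the noise level and regularization weight are ad hoc choices:

```python
import numpy as np

# Discretized atomic-norm denoising via ISTA on a frequency grid N_m.
rng = np.random.default_rng(2)
n = m = 64
Psi = np.exp(2j * np.pi * np.outer(np.arange(n), np.arange(m) / m)) / np.sqrt(n)

x_true = 2.0 * Psi[:, 10] + 1.5 * Psi[:, 30]        # two on-grid atoms
y = x_true + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

lam = 0.3
step = 1.0 / np.linalg.norm(Psi, 2) ** 2            # 1 / Lipschitz constant
alpha = np.zeros(m, dtype=complex)
for _ in range(200):                                # ISTA: gradient + shrinkage
    g = alpha - step * (Psi.conj().T @ (Psi @ alpha - y))
    alpha = np.maximum(np.abs(g) - lam * step, 0) * np.exp(1j * np.angle(g))

support = set(np.flatnonzero(np.abs(alpha) > 0.5))
assert support == {10, 30}                          # both grid atoms recovered
```

With an orthonormal dictionary ISTA reduces to soft-thresholding of Psi^H y, so it converges immediately; the same loop applies unchanged to finer, coherent grids, where convergence is slower.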
Computational Methods V
Figure : dual polynomials of the discretized problems for grid sizes
m = 32, 64, 128, 256, 512, 1024.
Problems
Special cases and applications:
I
Atomic norm of tensors: how to find the atomic decomposition of
tensors?
I
Atomic norm of spectrally-sparse ensembles: how to define the atomic
norm for multiple measurement vector (MMV) models?
I
Super-resolution of mixture models: how to solve the problem when
multiple forms of atoms exist?
I
Blind super-resolution: how to solve the problem when the form of the
atoms is not known precisely?
I
Applications in single-molecule imaging.
Atomic Decomposition of Tensors I
Tensor decomposition.
I
Given a tensor decomposition
T = Σ_{i=1}^r αi? ui? ⊗ vi? ⊗ wi? = ∫_K u ⊗ v ⊗ w dµ?
where the parameter space K = Sn−1 × Sn−1 × Sn−1 , the
decomposition measure µ? = Σ_{i=1}^r αi? δ(u − ui? , v − vi? , w − wi? ) is a
nonnegative measure defined on K.
I
We propose recovering the decomposition measure µ? by solving
minimize µ(K) subject to T = ∫_K u ⊗ v ⊗ w dµ.
I
The optimal value of this optimization defines the tensor nuclear norm.
I
To certify the optimality of µ? , we construct a pre-certificate following
the minimal energy principle to get
q(u, v, w) = hQ, u⊗v⊗wi = Σ_{i=1}^r (ai ⊗vi? ⊗wi? + ui? ⊗bi ⊗wi? + ui? ⊗vi? ⊗ci )
Atomic Decomposition of Tensors II
I
This pre-certificate satisfies tensor eigenvalue-eigenvector
relationships such as
Σ_{j,k} Q:,j,k vi? (j) wi? (k) = ui? , i ∈ [r]
Atomic Decomposition of Tensors III
Theorem (Li, Prater, Shen, Tang, 2015)
Suppose
I
Incoherence: max_{p6=q} {|hup? , uq? i|, |hvp? , vq? i|, |hwp? , wq? i|} ≤ polylog(n)/√n
I
Bounded spectra: max{kU ? k, kV ? k, kW ? k} ≤ 1 + c √(r/n)
I
Gram isometry: k(U ?0 U ? ) ⊙ (V ?0 V ? ) − Ir k ≤ polylog(n) √r/n, and similar
bounds for (U ? , W ? ) and (V ? , W ? )
I
Low-rank (but still overcomplete): r = O(n^{17/16} / polylog(n))
Then µ? is the optimal solution of the total mass minimization problem, as
certified by the minimal energy dual certificate.
Corollary (Li, Prater, Shen, Tang, 2015)
Suppose that the factors {u?p }, {vp? } and {wp? } follow uniform distributions
on the unit sphere, then the first three assumptions are satisfied with high
probability.
Atomic Decomposition of Tensors IV
SOS Relaxations.
I
Symmetric tensor atoms:
kZkA* ≤ 1 ⇔ Σ_{i,j,k} Zijk ui uj uk ≤ 1 ∀kuk2 = 1
I
The latter states that a third order multivariate polynomial is bounded
by 1, or that 1 − Σ_{i,j,k} Zijk ui uj uk is nonnegative on the unit sphere.
I
The general framework of Sum-of-Squares (SOS) for non-negative
polynomials over semi-algebraic sets leads to a hierarchy of increasingly
tight semidefinite relaxations for the symmetric tensor spectral norm.
I
Taking the dual yields a hierarchy of increasingly tight semidefinite
approximations of the (symmetric) tensor nuclear norm.
Atomic Decomposition of Tensors V
Theorem (Tang & Shah, 2015)
For a symmetric tensor T = Σ_{k=1}^r λk xk ⊗ xk ⊗ xk , if the tensor factors
X = [x1 , · · · , xr ] satisfy kX 0 X − Ir k ≤ 0.0016, then the (symmetric) tensor
nuclear norm kT k∗ equals both Σ_{k=1}^r λk and the optimal value of the
smallest SOS approximation.
Atomic Decomposition of Tensors VI
Low-rank Factorization.
I
Matrix atoms: {u ⊗ v : kuk2 = kvk2 = 1}
I
Tensor atoms: {u ⊗ v ⊗ w : kuk2 = kvk2 = kwk2 = 1}
I
For a matrix X with rank r, when r̃ ≥ r, the matrix nuclear norm equals
the optimal value of
minimize_{(up ,vp )} (1/2) Σ_{p=1}^{r̃} [kup k2² + kvp k2²] subject to X = Σ_{p=1}^{r̃} up vpT
I
For a tensor T with rank r, when r̃ ≥ r, the tensor nuclear norm equals
the optimal value of
minimize_{(up ,vp ,wp )} (1/3) Σ_{p=1}^{r̃} [kup k2³ + kvp k2³ + kwp k2³] subject to T = Σ_{p=1}^{r̃} up ⊗ vp ⊗ wp
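The matrix case of this factorization characterization is easy to check numerically: plugging the SVD factors into the objective recovers the nuclear norm exactly. A minimal sketch with arbitrary sizes:

```python
import numpy as np

# At u_p = sqrt(s_p) U[:,p], v_p = sqrt(s_p) V[:,p], the objective
# (1/2) sum_p (||u_p||^2 + ||v_p||^2) equals sum_p s_p = ||X||_*.
rng = np.random.default_rng(3)
n, r = 8, 3
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r matrix

U, s, Vt = np.linalg.svd(X)
Uf = U[:, :r] * np.sqrt(s[:r])          # factors u_p
Vf = Vt[:r, :].T * np.sqrt(s[:r])       # factors v_p
obj = 0.5 * ((Uf ** 2).sum() + (Vf ** 2).sum())

assert np.allclose(Uf @ Vf.T, X)        # feasibility: X = sum u_p v_p^T
assert np.isclose(obj, s[:r].sum())     # objective equals the nuclear norm
```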
Atomic Decomposition of Tensors VII
Incorporate these nonlinear reformulations into atomic norm regularized
problems.
Theorem (Haeffele & Vidal, 2015)
I
When r̃ > r, any local minimizer such that one component is zero, e.g.,
ui0 = vi0 = wi0 = 0, is a global minimizer.
I
There exists a non-increasing path from any initial point (u(0) , v(0) , w(0) ) to a
global minimizer of the nonlinear formulation.
Figure : empirical recovery as a function of rank r and dimension n.
Atomic Norm for Ensemble of Spectral Signals I
Signal model.
I
In applications such as array signal processing, we receive multiple
snapshots of observations impinging on the array.
I
Recall the atom for line spectra is defined as
a(ν) = [1, e^{j2πν} , . . . , e^{j2π(n−1)ν} ]T , ν ∈ [0, 1).
I
We consider L signals, stacked in a matrix X = [x1 , . . . , xL ], where
each xl ∈ Cn is composed of the same set of atoms:
xl = Σ_{i=1}^r ci,l a(νi ), l = 1, . . . , L.
I
Continuous analog of group sparsity.
Atomic Norm for Ensemble of Spectral Signals II
I
We define the atomic set as
A = { A(ν, b) = a(ν)bH : kbk2 = 1 }.
I
The atomic norm kXkA is defined as
kXkA = inf {t > 0 : X ∈ t conv(A)}
I
The atomic norm kXkA can be written equivalently as
kXkA = inf_{u∈Cn , W∈CL×L} { (1/2) u0 + (1/2) Tr(W) : [ toep(u) X ; XH W ] ⪰ 0 }.
I
The dual norm of kXkA can be defined as
kYkA* = sup_{f ∈[0,1)} kYH a(f )k2 := sup_{f ∈[0,1)} kQ(f )k2 ,
where Q(f ) = YH a(f ) is a length-L vector with each entry a
polynomial in f .
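The dual norm can be evaluated by sweeping f over a fine grid; for a single-atom Y = a(f0) bH the supremum equals n and is attained at f0. A minimal sketch; the sizes, f0, and the choice of b are arbitrary:

```python
import numpy as np

# Grid evaluation of the MMV dual norm ||Y||_A^* = sup_f ||Y^H a(f)||_2.
n, L = 32, 4
f0 = 0.25
a = lambda f: np.exp(2j * np.pi * f * np.arange(n))
b = np.ones(L) / np.sqrt(L)                      # unit-norm b (assumption)
Y = np.outer(a(f0), b.conj())                    # single atom Y = a(f0) b^H

fgrid = np.linspace(0, 1, 4000, endpoint=False)
Q = np.array([np.linalg.norm(Y.conj().T @ a(f)) for f in fgrid])

assert abs(Q.max() - n) < 1e-6                   # supremum equals n ...
assert abs(fgrid[np.argmax(Q)] - f0) < 1e-3      # ... attained at f0
```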
Atomic Norm for Ensemble of Spectral Signals III
I
Recovery of missing data:
min kXkA subject to YΩ = XΩ .
I
For noncoherently generated snapshots, increasing the number of
measurement vectors increases the localization resolution.
Figure : The reconstructed dual polynomial for randomly generated spectral
signals with r = 10, n = 64, and m = 32: (a) L = 1, (b) L = 3.
Atomic Norm for Ensemble of Spectral Signals IV
I
Denoising: consider noisy data Z = X + N, where each entry of N is
CN (0, σ²):
X̂ = argmin_X (1/2) kX − ZkF² + τ kXkA .
Theorem (Li and Chi, 2014)
Set τ = σ (1 + 1/log n)^{1/2} { L + log(αL) + √(2L log(αL)) + √(πL/2) + 1 }^{1/2} ,
where α = 8πn log n. Then the expected error rate is bounded as
(1/L) E kX̂ − X ? kF² ≤ (2τ /L) kX ? kA .
I
As τ is set on the order of √L, if kX ? kA = o(√L), then the
per-measurement-vector MSE vanishes as L increases.
Super-resolution of Mixture Models I
Mixture Model for Multi-modal Data
I
Formally, consider inverting the following mixture model:
y(t) = Σ_{i=1}^I xi (t) ∗ gi (t) + w(t),
where ∗ is the convolution operator,
I
I is the total number of mixtures, assumed known;
I
xi (t) is a parametrized point source signal with Ki unknown:
xi (t) = Σ_{j=1}^{Ki} aij δ(t − τij ), τij ∈ [0, 1], aij ∈ C;
I
gi (t) is a point spread function with a finite cut-off frequency 2M ;
I
w(t) is the additive noise.
The goal is to invert the locations and amplitudes of the point sources
for each mixture, {aij , τij }_{j=1}^{Ki} , 1 ≤ i ≤ I.
Super-resolution of Mixture Models II
I
Set I = 2 for simplicity; the analysis generalizes to I ≥ 2.
I
In the frequency domain, we have the vector-formed signal
y = g1 ⊙ x1? + g2 ⊙ x2? + w,
where ⊙ denotes the point-wise product, gi is the DTFT of the PSF gi (t),
and the xi? 's are spectrally-sparse signals:
x1? = Σ_{k=1}^{K1} a1k c(τ1k ), x2? = Σ_{k=1}^{K2} a2k c(τ2k ),
where c(τ ) = [e^{−j2πτ (−2M )} , . . . , 1, . . . , e^{−j2πτ (2M )} ]T .
I
Conventional methods such as MUSIC and ESPRIT do not apply due to
interference between the different components.
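The frequency-domain mixture is straightforward to simulate. A minimal sketch of the model only; the source locations and amplitudes are arbitrary choices, and the demixing SDP itself is omitted:

```python
import numpy as np

# Build y = g1 .* x1 + g2 .* x2 with point-source spectra
# x_i = sum_k a_ik c(tau_ik) and PSF spectra drawn uniformly on the
# complex unit circle (the incoherence condition of the slides).
rng = np.random.default_rng(8)
M = 16
freqs = np.arange(-2 * M, 2 * M + 1)                 # indices -2M .. 2M
c = lambda tau: np.exp(-2j * np.pi * tau * freqs)    # atom c(tau)

x1 = 1.0 * c(0.20) + 0.8 * c(0.55)                   # K1 = 2 sources
x2 = 1.2 * c(0.35)                                   # K2 = 1 source
g1 = np.exp(2j * np.pi * rng.uniform(size=4 * M + 1))  # random unit-modulus
g2 = np.exp(2j * np.pi * rng.uniform(size=4 * M + 1))  # PSF spectra

y = g1 * x1 + g2 * x2                                # noiseless observation
assert y.shape == (4 * M + 1,)
```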
Super-resolution of Mixture Models III
I
Convex demixing: promote the spectral sparsity of both components
by minimizing the atomic norms:
{x̂1 , x̂2 } = argmin_{x1 ,x2} kx1 kA + kx2 kA , s.t. y = g1 ⊙ x1 + g2 ⊙ x2 .
I
Incoherence condition: each entry of the sequences g1 , g2 is generated
i.i.d. from a uniform distribution on the complex unit circle.
I
The PSF functions should be incoherent across components.
Theorem (Li and Chi, 2015)
Under the incoherence condition, assume the signals are generated with
random signs from the unit circle and satisfy the separation condition 4/n. Then the
recovery of convex demixing is unique with high probability if
M/ log M & (K1 + K2 ) log(K1 + K2 ).
Super-resolution of Mixture Models IV
Phase Transition: Set the separation condition ∆ = 2/n.
Figure : Successful rates of the convex demixing algorithm as a function of
(K1 , K2 ) when (a) M = 8 and (b) M = 16.
Super-resolution of Mixture Models V
Comparison with the CRB for parameter estimation:
I
We also compare with the Cramér-Rao bound to benchmark the
performance of parameter estimation in the noisy case with K1 = 1
and K2 = 1 for estimating the source locations.
Figure : average MSE |τ̂k − τk |² of the source locations versus SNR,
against the CRBs for τ1 and τ2 , for M = 10 and M = 16.
Blind Super-resolution I
Super-resolution with unknown point spread functions:
I
Model the observed signal as
y(t) = Σ_{i=1}^r ai g(t − τi ) = x(t) ∗ g(t),
where ∗ is the convolution operator,
I
x(t) is a point source signal with complex amplitudes, where r is
unknown:
x(t) = Σ_{i=1}^r ai δ(t − τi ), τi ∈ [0, 1], ai ∈ C;
I
g(t) is the unknown point spread function of the sensory system.
I
In the frequency domain, we have
y = g ⊙ x,
where x = Σ_{i=1}^r ai c(τi ).
Blind Super-resolution II
I
Extremely ill-posed without further constraints.
I
Subspace assumption: we assume the PSF g lies in some known
low-dimensional subspace:
g = Bh ∈ C^{4M +1} ,
where B = [b−2M , · · · , b2M ]T ∈ C^{(4M +1)×L} and h ∈ CL .
I
Self-calibration of uniform linear arrays: the antenna gains g may be
well-approximated as lying in a low-dimensional (smooth) subspace.
I
Blind channel estimation: the transmitted data signal g is coded by
projection onto a low-dimensional subspace (e.g., via the generating matrix).
Blind Super-resolution III
I
Applying the lifting trick, write the i-th entry of y, yi = xi gi , as
yi = xi · gi = (eTi x)(bTi h) = eTi (xhT )bi := eTi Z? bi ,
where ei is the i-th column of I4M +1 and bi is the i-th row of B.
I
Now y becomes linear measurements of Z? = xhT ∈ C^{(4M +1)×L} :
y = X (Z? ),
with (4M + 1) equations and (4M + 1)L unknowns.
I
Z? can be regarded as an ensemble of spectrally-sparse signals:
Z? = xhT = [ Σ_{i=1}^r ai c(τi ) ] hT .
Blind Super-resolution IV
I
Blind super-resolution via AtomicLift:
min kZkA s.t. y = X (Z).
I
Incoherence condition: each row of the subspace matrix B is i.i.d. sampled
from a population F , i.e. bn ∼ F , that satisfies the following:
I
Isometry property: EbbH = IL , b ∼ F .
I
Incoherence property: for b = [b1 , . . . , bL ]T ∼ F , define the coherence
parameter µ of F as the smallest number such that max_{1≤i≤L} |bi |² ≤ µ.
Theorem (Chi, 2015)
Assume µ = Θ(1). For deterministic point source signals satisfying the
separation condition of 1/M , M/ log M = O(r²L²) is sufficient for successful
recovery of Z with high probability.
Blind Super-resolution V
Figure : Blind spikes deconvolution using AtomicLift: (a) PSF; (b) convolution
between the PSF in (a) and a sparse spike signal; (c) deconvolution with the
PSF using (b); (d) exact localization of the spikes via the dual polynomial.
Blind Super-resolution VI
I
Alternatively, consider a different modulation for each point source:
y(t) = Σ_{i=1}^r αi gi (t − τi ),
motivated by asynchronous multi-user communications.
I
The frequency domain model becomes
y = Σ_{i=1}^r αi a(νi ) ⊙ gi
I
Assume all gi lie in the same subspace B. Applying the same lifting
procedure, we obtain linear measurements of Z = Σ_{i=1}^r αi hi a(νi )H .
Theorem (Yang, Tang, Wakin, 2015)
For point sources with random signs satisfying the separation condition of
1/M , M = O(rL) is sufficient for successful recovery of Z with high
probability.
Blind Super-resolution VII
Figure : empirical phase transitions of AtomicLift versus the subspace
dimension K, the number of spikes J, and the number of samples N
(with N = 64, K = 4, and J = 4 fixed in the respective panels).
Application to Single-molecule imaging I
Synthetic data: discretization-based reconstruction (CSSTORM)
I
Bundles of 8 tubes of 30 nm diameter
I
Sparse density: 81049 molecules on 12000 frames
I
Resolution: 64 × 64 pixels
I
Pixel size: 100 nm × 100 nm
I
Field of view: 6400 nm × 6400 nm
I
Target resolution: 10 nm × 10 nm
I
Discretize the FOV into 640 × 640 pixels
I
Image model: I(x, y) = Σj cj PSF(x − xj , y − yj ),
(xj , yj ) ∈ [0, 6400]², (x, y) ∈ {50, 150, . . . , 6350}²
Application to Single-molecule imaging II
Application to Single-molecule imaging III
TVSTORM [Huang, Sun, Ma and Chi, 2016]: atomic norm regularized
Poisson MLE:
χ̂ = argmin_{χ∈G} `(y|χ) + kχkA
Our algorithm avoids the realization of the dense dictionary introduced by
discretization in CSSTORM.
Figure : identified density, false discovery rate, precision (nm), and
execution time (s) versus emitter density for CSSTORM, TVSTORM, and
MempSTORM, for both 2D and 3D super-resolution.
Application to Single-molecule imaging IV
Practical super-resolution reconstruction on real data:
Figure : reconstruction before and after calibration/deconvolution (scale in nm).
Allowing Damping for Spectral Compressed Sensing
Two-Dimensional Frequency Model
I
Stack the signal x(t) = Σ_{i=1}^r di e^{j2πht,fi i} into a matrix X ∈ C^{n1 ×n2} .
I
The matrix X has the following Vandermonde decomposition:
X = Y · D · ZT .
Here, D := diag{d1 , · · · , dr } and Y, Z are Vandermonde matrices,
Y := [yi^k ]_{0≤k≤n1 −1, 1≤i≤r} , Z := [zi^k ]_{0≤k≤n2 −1, 1≤i≤r} ,
where yi = exp(j2πf1i ), zi = exp(j2πf2i ), fi = (f1i , f2i ).
I
Goal: we observe a random subset of the entries of X, and wish to recover
the missing entries.
I
Damping modes are allowed when fi ∈ C².
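The Vandermonde decomposition, including damping, can be simulated directly, and rank(X) ≤ r holds by construction. A minimal sketch with arbitrary sizes and random frequencies:

```python
import numpy as np

# Build X = Y D Z^T for r random 2-D modes with a damping factor on the
# first dimension, and confirm the rank bound rank(X) <= r.
rng = np.random.default_rng(6)
n1, n2, r = 16, 16, 3
f1, f2 = rng.uniform(0, 1, r), rng.uniform(0, 1, r)
damp = rng.uniform(0.9, 1.0, r)                      # damping modes allowed
y = damp * np.exp(2j * np.pi * f1)                   # y_i (damped)
z = np.exp(2j * np.pi * f2)                          # z_i
d = rng.standard_normal(r) + 1j * rng.standard_normal(r)

Y = y[None, :] ** np.arange(n1)[:, None]             # Vandermonde in y
Z = z[None, :] ** np.arange(n2)[:, None]             # Vandermonde in z
X = Y @ np.diag(d) @ Z.T

assert np.linalg.matrix_rank(X) <= r
```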
Revisiting Matrix Pencil: Matrix Enhancement
Given a data matrix X, Hua proposed the following matrix enhancement
for two-dimensional frequency models (MEMP):
I
Choose two pencil parameters k1 and k2 ;
I
An enhanced form Xe is a k1 × (n1 − k1 + 1) block Hankel matrix:
Xe = [ X0 X1 · · · Xn1 −k1 ;
       X1 X2 · · · Xn1 −k1 +1 ;
       ...
       Xk1 −1 Xk1 · · · Xn1 −1 ],
where each block Xl is a k2 × (n2 − k2 + 1) Hankel matrix:
Xl = [ xl,0 xl,1 · · · xl,n2 −k2 ;
       xl,1 xl,2 · · · xl,n2 −k2 +1 ;
       ...
       xl,k2 −1 xl,k2 · · · xl,n2 −1 ].
Low Rankness of the Enhanced Matrix
I
Choose pencil parameters k1 = Θ(n1 ) and k2 = Θ(n2 ); the
dimensionality of Xe is proportional to n1 n2 × n1 n2 .
I
The enhanced matrix can be decomposed as follows:
Xe = [ ZL ; ZL Yd ; . . . ; ZL Yd^{k1 −1} ] D [ ZR , Yd ZR , · · · , Yd^{n1 −k1} ZR ],
where ZL and ZR are Vandermonde matrices specified by z1 , . . . , zr ,
and Yd = diag[y1 , y2 , · · · , yr ].
I
The enhanced form Xe is low-rank:
I
rank (Xe ) ≤ r;
I
spectral sparsity ⇒ low rankness, which holds even with damping modes.
Hua, Yingbo. "Estimating two-dimensional frequencies by matrix enhancement and matrix pencil." IEEE Transactions on Signal
Processing 40, no. 9 (1992): 2267-2280.
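The enhancement step can be sketched in a few lines; spectral sparsity of the data then shows up as low rank of the block-Hankel matrix. A minimal sketch with small, arbitrary sizes:

```python
import numpy as np

# Build the MEMP block-Hankel enhancement X_e of a 2-D data matrix and
# check rank(X_e) <= r for an r-mode signal.
def enhance(X, k1, k2):
    n1, n2 = X.shape
    def hankel_row(x):  # k2 x (n2 - k2 + 1) Hankel matrix from one row
        return np.array([x[i:i + n2 - k2 + 1] for i in range(k2)])
    blocks = [hankel_row(X[l]) for l in range(n1)]
    return np.block([[blocks[i + j] for j in range(n1 - k1 + 1)]
                     for i in range(k1)])

rng = np.random.default_rng(7)
n1, n2, r = 12, 12, 3
f1, f2 = rng.uniform(0, 1, r), rng.uniform(0, 1, r)
d = rng.standard_normal(r)
X = sum(d[i] * np.outer(np.exp(2j * np.pi * f1[i] * np.arange(n1)),
                        np.exp(2j * np.pi * f2[i] * np.arange(n2)))
        for i in range(r))

Xe = enhance(X, k1=6, k2=6)
assert Xe.shape == (6 * 6, 7 * 7)          # (k1*k2) x ((n1-k1+1)*(n2-k2+1))
assert np.linalg.matrix_rank(Xe) <= r      # spectral sparsity => low rank
```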
Enhanced Matrix Completion (EMaC) I
I
Motivated by matrix completion, we seek the low-rank solution via
nuclear norm minimization:
(EMaC) : minimize_{M∈C^{n1 ×n2}} kMe k∗ subject to Mi,j = Xi,j , ∀(i, j) ∈ Ω.
I
Define GL and GR as r × r Gram matrices such that
(GL )i,l = K(k1 , k2 , f1i − f1l , f2i − f2l ),
(GR )i,l = K(n1 − k1 + 1, n2 − k2 + 1, f1i − f1l , f2i − f2l ),
where K(k1 , k2 , f1 , f2 ) is the 2-D Dirichlet kernel.
I
The incoherence condition holds w.r.t. µ if
σmin (GL ) ≥ 1/µ, σmin (GR ) ≥ 1/µ.
It only depends on the locations of the frequencies, not their amplitudes.
Enhanced Matrix Completion (EMaC) II
I
Performance guarantee in the noise-free case:
Theorem (Chen and Chi, 2013)
Let n = n1 n2 . If all measurements are noiseless, then EMaC recovers X
perfectly with high probability if
m > Cµr log³ n,
where C is some universal constant.
I
µ = Θ(1) holds (w.h.p.) under many scenarios:
I
Randomly generated frequencies;
I
Mild perturbation of grid points;
I
In 1D, frequencies well-separated by 2 RL [Liao and Fannjiang, 2014].
Figure : the incoherence measure as a function of the frequency
separations on the x and y axes.
Enhanced Matrix Completion (EMaC) III
Robustness to bounded noise.
I
Assume the samples are noisy, X = Xo + N, where N is bounded noise:
(EMaC-Noisy) : minimize_{M∈C^{n1 ×n2}} kMe k∗ subject to kPΩ (M − X) kF ≤ δ.
Theorem (Chen and Chi, 2013)
Suppose Xo satisfies kPΩ (X − Xo )kF ≤ δ. Under the conditions of Theorem
1, the solution to EMaC-Noisy satisfies
kX̂e − Xe kF ≤ { 2√n + 8n + (8√2 n²)/m } δ
with probability exceeding 1 − n^{−2} .
I
The average entry inaccuracy is bounded above by O((n/m) δ). In practice,
EMaC-Noisy usually yields a better estimate.
Enhanced Matrix Completion (EMaC) IV
Robustness to sparse outliers.
I
Assume a constant portion of the measurements are arbitrarily corrupted
as Xi,l^corrupted = Xi,l + Si,l , where Si,l is of arbitrary amplitude.
I
Reminiscent of the robust PCA approach [Candès et al. 2011,
Chandrasekaran et al. 2011], solve the following program:
(RobustEMaC) : minimize_{M,S∈C^{n1 ×n2}} kMe k∗ + λkSe k1
subject to (M + S)i,l = Xi,l^corrupted , ∀(i, l) ∈ Ω
Theorem (Chen and Chi, 2013)
Assume the fraction s of corrupted entries is a small constant. Set
n = n1 n2 and λ = 1/√(m log n). Then RobustEMaC recovers X with high
probability if
m > Cµr² log³ n,
where C is some universal constant.
I
Sample complexity: m ∼ Θ(r² log³ n), a slight loss compared with the previous case;
I
Robust to a constant portion of outliers: s ∼ Θ(1).
Comparisons between EMaC and ANM

                      EMaC                  Atomic Norm
Signal model          Deterministic         Random
Observation model     Random                Random
Success condition     Coherence             Separation condition
Sample complexity     Θ(r log³ n)           Θ(r log r log n)
Bounded noise         Yes                   Yes
Sparse corruptions    Yes                   Yes
Damping modes         Yes                   No
Comparisons of EMaC and ANM
Phase transition for line spectrum estimation: numerically, the EMaC
approach seems less sensitive to the separation condition.
Figure : phase transitions (sparsity level r versus number of samples m)
for EMaC and atomic norm minimization, without separation and with
1.5 RL separation.
References to Atomic Norm and Super Resolution I
I
Chandrasekaran, Recht, Parrilo, Willsky (2010): general framework of
atomic norm minimization.
I
Tang, Bhaskar, Shah, Recht (2012): line spectrum estimation using
atomic norm minimization with random sampling.
I
Bhaskar, Tang, Recht (2012): line spectrum denoising using atomic
norm minimization with consecutive samples.
I
Candès and Fernandez-Granda (2012): Super-resolution using total
variation minimization (equivalent to atomic norm) from low-pass
samples.
I
Chi (2013): line spectrum estimation using atomic norm minimization
with multiple measurement vectors.
I
Xu et al. (2014): atomic norm minimization with prior information.
I
Chen and Chi (2013): multi-dimensional frequency estimation via
enhanced matrix completion.
I
Xu et al. (2013): exact SDP characterization of atomic norm
minimization for high-dimensional frequencies.
References to Atomic Norm and Super Resolution II
I
Tang et al. (2013): near minimax line spectrum denoising via atomic
norm minimization.
I
Chi and Chen (2013): higher dimensional spectrum estimation using
atomic norm minimization with random sampling.
I
Hua (1992): matrix pencil formulation for multi-dimensional frequencies.
I
Liao and Fannjiang (2014): analysis of the MUSIC algorithm with
separation conditions.
I
Li and Chi (2014): atomic norm for multiple spectral signals.
I
Chi (2015): Guaranteed blind super-resolution with AtomicLift.
I
Yang, Tang and Wakin (2016): blind super-resolution with different
modulations via AtomicLift.
Concluding Remarks
I
Compression, whether by linear maps (e.g., Gaussian) or by subsampling,
has performance consequences for parameter estimation: Fisher
information decreases, the CRB increases, and the onset of the breakdown
threshold occurs at higher SNR.
I
Model mismatch can result in considerable performance degradation,
and therefore sensitivities of CS to model mismatch need to be fully
understood.
I
Recent off-the-grid methods (atomic norm and structured matrix
completion) provide a way forward for a class of problems, where modes
to be estimated respect certain separation or coherence conditions.
These methods are also useful for other problems where traditional
methods cannot be applied.
I
But sub-Rayleigh resolution still eludes us!