anonymous - 1 year ago

C# Question

I have written the following application to achieve the Convolution of two images in the frequency domain.

I want to Convolve Lena with itself.

The steps I followed:

(1) Convert Lena into a matrix of complex numbers.

(2) Apply FFT to obtain a complex matrix.

(3) Multiply the two complex matrices element by element (if that is indeed how convolution is done in the frequency domain).

(4) Then I would apply IFFT to the result of multiplication.
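For reference, the four steps above implement circular convolution via the convolution theorem. A minimal sketch (in Python/NumPy rather than C#, for brevity; the function name is illustrative):

```python
import numpy as np

def convolve_freq(image, mask):
    """Circular convolution of two equal-size 2-D arrays via the FFT.

    Mirrors the question's steps: FFT both inputs, multiply the
    spectra element-wise, then inverse-FFT the product.
    """
    if image.shape != mask.shape:
        raise ValueError("padding needed")
    f_image = np.fft.fft2(image)   # step (2): forward FFT
    f_mask = np.fft.fft2(mask)
    product = f_image * f_mask     # step (3): element-wise product
    return np.fft.ifft2(product)   # step (4): inverse FFT

# Sanity check: convolving with a unit impulse at the origin
# returns the input image unchanged.
img = np.arange(16.0).reshape(4, 4)
impulse = np.zeros((4, 4))
impulse[0, 0] = 1.0
out = convolve_freq(img, impulse)
assert np.allclose(out.real, img)
```

Note that with equal-size inputs and no padding this is *circular* convolution, which matters later in the answer.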

I have written the C# code below, but the output does not come out as expected.

Here is an excerpt from a book that shows what the output of the convolution should look like.

Some relevant source code:

```
public static class Convolution
{
    public static Complex[,] Convolve(Complex[,] image, Complex[,] mask)
    {
        Complex[,] convolve = null;

        int imageWidth = image.GetLength(0);
        int imageHeight = image.GetLength(1);
        int maskWidth = mask.GetLength(0);
        int maskHeight = mask.GetLength(1);

        if (imageWidth == maskWidth && imageHeight == maskHeight)
        {
            FourierTransform ftForImage = new FourierTransform(image); ftForImage.ForwardFFT();
            FourierTransform ftForMask = new FourierTransform(mask); ftForMask.ForwardFFT();

            Complex[,] fftImage = ftForImage.FourierTransformedImageComplex;
            Complex[,] fftKernel = ftForMask.FourierTransformedImageComplex;

            Complex[,] fftConvolved = new Complex[imageWidth, imageHeight];

            // Element-wise product of the two spectra.
            for (int i = 0; i < imageWidth; i++)
            {
                for (int j = 0; j < imageHeight; j++)
                {
                    fftConvolved[i, j] = fftImage[i, j] * fftKernel[i, j];
                }
            }

            FourierTransform ftForConv = new FourierTransform();
            ftForConv.InverseFFT(fftConvolved);
            convolve = ftForConv.GrayscaleImageComplex;
            //convolve = fftConvolved;
        }
        else
        {
            throw new Exception("padding needed");
        }

        return convolve;
    }
}
```

Source code for the GUI:

```
private void convolveButton_Click(object sender, EventArgs e)
{
    Bitmap lena = inputImagePictureBox.Image as Bitmap;
    Bitmap paddedMask = paddedMaskPictureBox.Image as Bitmap;

    Complex[,] cLena = ImageDataConverter.ToComplex(lena);
    Complex[,] cPaddedMask = ImageDataConverter.ToComplex(paddedMask);
    Complex[,] cConvolved = Convolution.Convolve(cLena, cPaddedMask);

    Bitmap convolved = ImageDataConverter.ToBitmap(cConvolved);
    convolvedImagePictureBox.Image = convolved;
}
```

Here is the zipped source code as a VS2013 solution.

Also, there is a thread on SO that seems to discuss the same topic.


Answer Source

There is a difference in how you call `InverseFFT` between the working FFT->IFFT application and the broken convolution application. In the latter case you do not explicitly pass the `Width` and `Height` parameters (which you are supposed to get from the input image):

```
public void InverseFFT(Complex[,] fftImage)
{
if (FourierTransformedImageComplex == null)
{
FourierTransformedImageComplex = fftImage;
}
GrayscaleImageComplex = FourierFunction.FFT2D(FourierTransformedImageComplex, Width, Height, -1);
GrayscaleImageInteger = ImageDataConverter.ToInteger(GrayscaleImageComplex);
InputImageBitmap = ImageDataConverter.ToBitmap(GrayscaleImageInteger);
}
```

As a result both `Width` and `Height` are 0 and the code skips over most of the inverse 2D transformation. Initializing those parameters should give you something that is at least not all black.

```
if (FourierTransformedImageComplex == null)
{
FourierTransformedImageComplex = fftImage;
Width = fftImage.GetLength(0);
Height = fftImage.GetLength(1);
}
```

Then you should notice some sharp white/black edges. Those are caused by wraparounds in the output values. To avoid this, you may want to rescale the output after the inverse transform to fit the available scale with something such as:

```
double maxAmp = 0.0;
for (int i = 0; i < imageWidth; i++)
{
    for (int j = 0; j < imageHeight; j++)
    {
        maxAmp = Math.Max(maxAmp, convolve[i, j].Magnitude);
    }
}

double scale = 255.0 / maxAmp;
for (int i = 0; i < imageWidth; i++)
{
    for (int j = 0; j < imageHeight; j++)
    {
        convolve[i, j] = new Complex(convolve[i, j].Real * scale, convolve[i, j].Imaginary * scale);
    }
}
```
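In NumPy terms the same rescaling is just a division by the maximum magnitude (a Python sketch with illustrative names; the hypothetical helper mirrors the C# loops above):

```python
import numpy as np

def rescale_to_byte_range(convolved):
    """Scale a complex array so the largest magnitude maps to 255,
    preventing wraparound when values are later cast to 8-bit pixels."""
    max_amp = np.abs(convolved).max()
    if max_amp == 0:
        return convolved
    return convolved * (255.0 / max_amp)

data = np.array([[3 + 4j, 0 + 0j],
                 [0 + 510j, 255 + 0j]])
scaled = rescale_to_byte_range(data)
# Largest magnitude (510) is mapped exactly to 255; others scale with it.
assert np.isclose(np.abs(scaled).max(), 255.0)
```

Scaling the real and imaginary parts by the same factor preserves the phase of each sample, so only the displayed intensity changes.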

This should then give a more reasonable output:

However that is still not as depicted in your book. At this point we have a 2D circular convolution. To get a 2D linear convolution, you need to make sure both images are padded to the sum of their dimensions (anything at least N+M-1 samples per axis works):

```
Bitmap lena = inputImagePictureBox.Image as Bitmap;
Bitmap mask = paddedMaskPictureBox.Image as Bitmap;
Bitmap paddedLena = ImagePadder.Pad(lena, lena.Width + mask.Width, lena.Height + mask.Height);
Bitmap paddedMask = ImagePadder.Pad(mask, lena.Width + mask.Width, lena.Height + mask.Height);
Complex[,] cLena = ImageDataConverter.ToComplex(paddedLena);
Complex[,] cPaddedMask = ImageDataConverter.ToComplex(paddedMask);
Complex[,] cConvolved = Convolution.Convolve(cLena, cPaddedMask);
```
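The padding claim is easy to check numerically. A Python/NumPy sketch (illustrative names) comparing FFT-based convolution at the padded size against a direct spatial-domain sum:

```python
import numpy as np

def linear_convolve_via_fft(a, b):
    """Linear 2-D convolution: zero-pad both inputs to the full output
    size (Na + Nb - 1 per axis) before the FFT round-trip. Padding to
    Na + Nb, as in the answer, also works since it is at least as large."""
    shape = (a.shape[0] + b.shape[0] - 1, a.shape[1] + b.shape[1] - 1)
    # fft2's second argument zero-pads each input to `shape`.
    return np.fft.ifft2(np.fft.fft2(a, shape) * np.fft.fft2(b, shape)).real

def direct_convolve(a, b):
    """Reference implementation: accumulate shifted, scaled copies of b."""
    out = np.zeros((a.shape[0] + b.shape[0] - 1, a.shape[1] + b.shape[1] - 1))
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            out[i:i + b.shape[0], j:j + b.shape[1]] += a[i, j] * b
    return out

rng = np.random.default_rng(0)
a, b = rng.random((8, 8)), rng.random((8, 8))
assert np.allclose(linear_convolve_via_fft(a, b), direct_convolve(a, b))
```

Without the padding, samples that should fall outside the frame wrap around and add into the other side of the image, which is exactly the circular-convolution artifact described above.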

And as you adjust the padding, you may want to change the padding color to black, otherwise the padding will itself introduce a large correlation between the two images:

```
public class ImagePadder
{
    public static Bitmap Pad(Bitmap maskImage, int newWidth, int newHeight)
    {
        ...
        Grayscale.Fill(resizedImage, Color.Black);
```
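Why the padding color matters can also be shown numerically. A Python/NumPy sketch (illustrative) comparing zero padding against white (255) padding for the same small image:

```python
import numpy as np

def circular_conv(a, b):
    """Unpadded FFT round-trip: circular convolution of equal-size arrays."""
    return np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)).real

img = np.ones((4, 4))  # a small all-bright test image

zero_pad = np.zeros((8, 8))
zero_pad[:4, :4] = img           # black border contributes nothing

white_pad = np.full((8, 8), 255.0)
white_pad[:4, :4] = img          # white border correlates with everything

clean = circular_conv(zero_pad, zero_pad)
dirty = circular_conv(white_pad, white_pad)

# The white border swamps the actual image content in the result.
assert dirty.max() > clean.max()
```

With zero padding the border adds nothing to any product term, so only genuine image overlap contributes to the output; a bright border instead dominates every output sample.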

Now you should be getting the following:

We are getting close, but the peak of the autocorrelation result is not in the center, and that's because you call `FourierShifter.FFTShift` in the forward transform but do not call the corresponding `FourierShifter.RemoveFFTShift` in the inverse transform. If we adjust those (either remove `FFTShift` in `ForwardFFT`, or add `RemoveFFTShift` in `InverseFFT`), then we finally get:
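The shift-pairing requirement is easy to demonstrate with NumPy's equivalents of these helpers, `np.fft.fftshift` and `np.fft.ifftshift` (a Python sketch; odd sizes are used because there `fftshift` is not its own inverse):

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((5, 5))

# Forward path applies a shift (DC bin moved to the center of the spectrum).
spec_shifted = np.fft.fftshift(np.fft.fft2(img))

# Undoing the shift before the inverse FFT recovers the image exactly...
good = np.fft.ifft2(np.fft.ifftshift(spec_shifted)).real
# ...while leaving the shift in corrupts the spatial-domain result.
bad = np.fft.ifft2(spec_shifted).real

assert np.allclose(good, img)
assert not np.allclose(bad, img)
```

The same logic applies to the answer's fix: either no shift is applied on either path, or the forward shift is exactly undone on the inverse path.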
