C++ Question

Generated Gaussian kernel and the one given in the book are not the same. Why?

Why don't the Gaussian kernel values generated by the equation match the ones given in the book?

I created a Gaussian kernel using the following code:

#include <cmath>
#include <vector>

// Unnormalized 1D Gaussian; the kernel is normalized as a whole below.
double gaussian(double x, double mu, double sigma) {
    return std::exp(-(((x - mu) / sigma) * ((x - mu) / sigma)) / 2.0);
}

typedef std::vector<double> kernel_row;
typedef std::vector<kernel_row> kernel_type;

kernel_type produce2dGaussianKernel(int kernelRadius) {
    double sigma = kernelRadius / 2.;
    kernel_type kernel2d(2 * kernelRadius + 1, kernel_row(2 * kernelRadius + 1));
    double sum = 0;
    // compute values
    for (int row = 0; row < (int)kernel2d.size(); row++)
        for (int col = 0; col < (int)kernel2d[row].size(); col++) {
            double x = gaussian(row, kernelRadius, sigma)
                     * gaussian(col, kernelRadius, sigma);
            kernel2d[row][col] = x;
            sum += x;
        }
    // normalize so the kernel sums to 1
    for (int row = 0; row < (int)kernel2d.size(); row++)
        for (int col = 0; col < (int)kernel2d[row].size(); col++)
            kernel2d[row][col] /= sum;

    return kernel2d;
}
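
The 3x3 result below corresponds to calling produce2dGaussianKernel(1), i.e. kernelRadius = 1 and sigma = 0.5. The question doesn't show a driver; a minimal hypothetical one that reproduces the output would be:

#include <cstdio>

int main() {
    // Hypothetical driver: radius 1 yields the 3x3 kernel printed below.
    kernel_type kernel = produce2dGaussianKernel(1);
    for (const kernel_row& row : kernel) {
        for (double v : row)
            std::printf("%.5f ", v);
        std::printf("\n");
    }
}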


Its result is:

0.01134 0.08382 0.01134
0.08382 0.61935 0.08382
0.01134 0.08382 0.01134


And this is the 3x3 Gaussian kernel given in the book:

{ 1 / 16.0f, 2 / 16.0f, 1 / 16.0f,
  2 / 16.0f, 4 / 16.0f, 2 / 16.0f,
  1 / 16.0f, 2 / 16.0f, 1 / 16.0f };


I wonder why the two sets of coefficients are not the same, and at which value of sigma the book's Gaussian kernel mask is generated.
Note: I used the Gaussian equation to generate my kernel.

Edit: I added the gaussian function to the code above.

Answer

OK: the book's kernel is a binomial-weighted kernel, whose weights can be read off Pascal's triangle, on the row with 3 coefficients: 1 2 1. Each cell in the matrix is the coefficient corresponding to its row index, multiplied by the coefficient corresponding to its column index, and then the whole thing is normalized.
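
To make this concrete, here is a small sketch (an illustration; the book gives only the final matrix) that forms the outer product of the row 1 2 1 with itself and divides by the total, 16, reproducing the book's kernel exactly:

#include <cstdio>

int main() {
    const int pascal[3] = {1, 2, 1};  // Pascal's triangle row with 3 coefficients
    int total = 0;                    // ends up as 16, the normalizer
    for (int r = 0; r < 3; r++)
        for (int c = 0; c < 3; c++)
            total += pascal[r] * pascal[c];
    for (int r = 0; r < 3; r++) {
        for (int c = 0; c < 3; c++)
            std::printf("%7.5f ", pascal[r] * pascal[c] / (double)total);
        std::printf("\n");
    }
    // Prints 0.06250 0.12500 0.06250 etc., i.e. 1/16, 2/16, 4/16 as decimals.
}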

Unfortunately, I can't find the source of your gaussian function. I could compare the output of your function with my guess about what it does, but I don't really want to take the time to do that.

The reason this matters is that there are actually two types of Gaussian kernels. I suspect your kernel uses the familiar exponential-based Gaussian, a normalized exp(-x*x/(2*s*s)) or something like that. But a kernel like that is only really valid in convolutions involving continuous functions, not sets of discrete samples such as image data.

So the other one is called the "Discrete Gaussian Kernel". For large sigma it very closely approximates the continuous Gaussian kernel, but for a sigma smaller than 4 pixels the discrete version is a bit different from the continuous one.

Here is a link to the Wikipedia article on the Discrete Gaussian Kernel: https://en.wikipedia.org/wiki/Scale_space_implementation#The_discrete_Gaussian_kernel Unfortunately, it can be difficult to calculate because it relies on modified Bessel functions rather than the exponential function available in the standard math library.
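
As an aside, C++17 did add special math functions to the standard library: where std::cyl_bessel_i is available (libstdc++ and MSVC ship it; not every implementation does), the discrete kernel T(n, t) = exp(-t) * I_n(t), with t = sigma^2, can be evaluated directly. A minimal sketch, assuming that support:

#include <cmath>
#include <cstdio>

// Discrete Gaussian via C++17's std::cyl_bessel_i: T(n, t) = exp(-t) * I_n(t),
// where t = sigma squared. Requires a standard library that ships the
// special math functions.
double discreteGaussianBessel(int n, double sigma) {
    const double t = sigma * sigma;
    const double nu = n < 0 ? -n : n;  // I_{-n}(t) == I_n(t) for integer n
    return std::exp(-t) * std::cyl_bessel_i(nu, t);
}

int main() {
    for (int x = -5; x <= 5; ++x)
        std::printf("%2d    %.8f\n", x, discreteGaussianBessel(x, std::sqrt(0.5)));
    // These values match the "Discrete" column in the output further below.
}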

Edit

I went ahead and dug up my code for calculating the Discrete Gaussian Kernel. I can't fully justify the algorithm behind this calculation because I wrote it a few years ago and can't remember how I came up with it. The code below compares values from a Discrete Gaussian Kernel with values sampled from the continuous Gaussian function, for a fixed sigma and several inputs.

#include <iostream>
#include <iomanip>
#include <cmath>

// x is expected to be an integer value
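// (It appears to sum, in log space, the series for the modified Bessel
// function, returning exp(-t) * I_x(t) with t = Sigma^2, i.e. the discrete
// Gaussian described above.)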
double DiscreteGaussian(double x, double Sigma) {
    x = std::fabs(x);

    if(x > Sigma * 10) return 0;

    double k = 0;

    const double LnSigma_x_2 = std::log(Sigma) * 2;

    const double Ca = x * (LnSigma_x_2 - std::log(2.0)) - (Sigma * Sigma);

    double Ra = 0;                  // accumulated LnGamma(k + 1)
    double Rb = std::lgamma(x + 1); // accumulated LnGamma(x + k + 1)
    double Rc = 0;                  // accumulated k * (4*Ln(Sigma)-2*Ln(2))

    const double Cc = 2.0 * (LnSigma_x_2 - std::log(2.0));   // const for Rc

    double Sum;
    double Next = std::exp(-Rb);

    do {
        Sum = Next;
        k += 1;
        Ra += std::log(k);
        Rb += std::log(x + k);
        Rc += Cc;
        const double ExpTerm = Rc - Ra - Rb;
        Next = Sum + std::exp(ExpTerm);
    } while(Next != Sum);

    return Sum * std::exp(Ca); 
}

double ContinuousGaussian(double x, double Sigma) {
    static const double NORMER = 0.3989422804014327;  // 1/sqrt(2*pi)
    const double x_as_sigmas = x / Sigma;
    return std::exp(-0.5 * x_as_sigmas * x_as_sigmas) * NORMER / Sigma;
}

int main() {
    // t is sigma squared; this example uses t = 0.5, i.e. sigma = sqrt(0.5)
    const double sigma = std::sqrt(0.5);
    std::cout << "Sigma " << sigma << '\n';
    std::cout << " x    Discrete          Sampled\n";
    for(int x = -5; x<=5; ++x) {
        const double yd = DiscreteGaussian(x, sigma);
        const double yc = ContinuousGaussian(x, sigma);
        std::cout << (x < 0 ? "" : " ") << x << "    "
            << std::setw(14) << std::setprecision(8) << std::fixed << std::left
            << yd << "    " << yc << '\n';
    }
}

The output looks like this:

Sigma 0.707107
 x    Discrete          Sampled
-5    0.00000499        0.00000000
-4    0.00009996        0.00000006
-3    0.00160434        0.00006963
-2    0.01935206        0.01033349
-1    0.15642080        0.20755375
 0    0.64503527        0.56418958
 1    0.15642080        0.20755375
 2    0.01935206        0.01033349
 3    0.00160434        0.00006963
 4    0.00009996        0.00000006
 5    0.00000499        0.00000000

As you can see, even for a small sigma the differences are modest and, depending on the application, may not be significant enough to justify switching away from the sampled continuous Gaussian function.
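
If you do want the discrete version, here is a minimal sketch of how DiscreteGaussian above could slot into the question's kernel builder, assuming the kernel_type and kernel_row typedefs from the question are in scope:

kernel_type produce2dDiscreteGaussianKernel(int kernelRadius) {
    const double sigma = kernelRadius / 2.0;  // same choice of sigma as the question
    kernel_type kernel2d(2 * kernelRadius + 1, kernel_row(2 * kernelRadius + 1));
    double sum = 0;
    for (int row = 0; row < (int)kernel2d.size(); row++)
        for (int col = 0; col < (int)kernel2d[row].size(); col++) {
            // The kernel is separable: each 2D weight is a product of 1D weights.
            const double v = DiscreteGaussian(row - kernelRadius, sigma)
                           * DiscreteGaussian(col - kernelRadius, sigma);
            kernel2d[row][col] = v;
            sum += v;
        }
    // Renormalize: truncating at the radius drops a little of the tail mass.
    for (kernel_row& row : kernel2d)
        for (double& v : row)
            v /= sum;
    return kernel2d;
}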