I'm currently trying to implement some DSP algorithms using C++ and am curious as to whether or not I'm being efficient.
I'm specifically trying to design a 'DigitalFilter' class which will produce a filtered output when given a series of inputs. The issue I'm facing is that the size of the filter (i.e. the number of filter coefficients) can vary, so the size of a DigitalFilter instance will vary as well. For example, one instance of DigitalFilter may only need to hold 4 filter coefficients, while another may need to hold 90.
The obvious and easy way to store these coefficients would be a std::vector. Since a vector can vary in size, it seems appropriate for my application.
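To make the question concrete, here is a minimal sketch of what I have in mind. The class name matches my description above, but the member and method names (`coefs_`, `history_`, `process`) are just illustrative, and I'm assuming a direct-form FIR filter:

```cpp
#include <cstddef>
#include <vector>

// Minimal sketch of the idea -- member names and the FIR structure are
// illustrative assumptions, not a finished design.
class DigitalFilter {
public:
    // Coefficients are fixed at construction; neither vector is resized later.
    explicit DigitalFilter(std::vector<double> coefficients)
        : coefs_(std::move(coefficients)), history_(coefs_.size(), 0.0) {}

    // Direct-form FIR: y[n] = sum over k of coefs[k] * x[n - k]
    double process(double input) {
        // Shift the delay line and insert the newest sample at the front.
        for (std::size_t i = history_.size() - 1; i > 0; --i)
            history_[i] = history_[i - 1];
        history_[0] = input;

        double output = 0.0;
        for (std::size_t k = 0; k < coefs_.size(); ++k)
            output += coefs_[k] * history_[k];
        return output;
    }

private:
    std::vector<double> coefs_;    // filter coefficients (4, 90, ...)
    std::vector<double> history_;  // delay line of past inputs
};
```

Both vectors are allocated once in the constructor, so the filtering loop itself never touches the allocator.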
However, I also know that a vector is implemented using heap-allocated memory (as opposed to stack memory). Thus, once I set up my filter and start using it for mathematically intensive calculations, it will constantly be referencing heap data. I realize the expense typically associated with vectors is the need to reallocate and move the elements when a vector outgrows its current capacity; that shouldn't be a big concern in my application, because all of the vectors will be sized before filtering operations begin and never resized afterwards. Still, I'm curious about the efficiency.
So my question(s): What kind of time hit is involved in referencing heap data versus stack data? Is it possible that the processor's cache will hold onto this heap data for faster access?
The access time of heap memory and stack memory is the same on any standard PC hardware: both live in ordinary RAM, and the CPU caches both the same way. What matters for performance is the access pattern, and a vector's elements are stored contiguously, so iterating over them is cache-friendly.
Since you are not resizing the vector in your filtering algorithm, you can specify the size of the vector when you create it:
You could also use a dynamically allocated array directly (here `num_coefs` stands for the filter length, known at runtime):

int * coef = new int[num_coefs];  // allocate num_coefs coefficients
// ... use the filter, then: delete[] coef;