is very convenient and relatively fast. In certain contexts I'd like to compare its output against a non-interpolated variant in which the sparse values are propagated into the denser output, so the result is piecewise constant between the sparse inputs. The function I want could also be called a "sparse -> dense" converter that carries the latest sparse value forward until it encounters a later one (a kind of null interpolation, as if no time/distance had elapsed since the earlier value).
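For concreteness, here is a tiny hypothetical example (the array names are my own) contrasting what ordinary linear interpolation produces with the piecewise-constant output described above:

```python
import numpy as np

xp = np.array([0.0, 2.0, 5.0])                 # sparse sample positions
fp = np.array([10.0, 20.0, 30.0])              # sparse sample values
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])   # denser query grid

linear = np.interp(x, xp, fp)  # blends adjacent samples, e.g. 15.0 at x=1
# The desired zero-order-hold output would instead be
# [10, 10, 20, 20, 20, 30]: each value held until the next sample arrives.
```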
Unfortunately, it's not easy to tweak the source for that function, because it's just a thin wrapper around a compiled routine. I can write this myself using Python loops, but I was hoping to find a C-speed way to solve the problem.
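A plain-Python baseline, just to pin down the semantics (a hypothetical helper; it assumes both `x` and `xp` are sorted ascending and that `x` starts at or after `xp[0]`):

```python
import numpy as np

def zero_hold_py(x, xp, fp):
    """Pure-Python zero-order hold: for each query point, carry forward
    the value of the most recent sample at or before it."""
    out = np.empty(len(x), dtype=float)
    j = 0
    for i, xv in enumerate(x):
        # advance to the last sample position not exceeding xv
        while j + 1 < len(xp) and xp[j + 1] <= xv:
            j += 1
        out[i] = fp[j]
    return out
```

This is the slow version the question wants to avoid: the Python-level loop dominates once the inputs reach hundreds of thousands of elements.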
The solution below is quite slow: it takes more than 10 seconds per call (e.g. for an input 500k in length that's 50% populated). It implements the conversion using a zero-order spline, and the spline evaluation is what dominates the runtime. However, the source code for the default (linear) interpolation gives an excellent template for solving the problem in straight numpy; the minimal change is to zero out the slope term. That code shows how to locate the bracketing sample points at C speed, and the resulting runtime is similar to an ordinary interpolation call, so the problem is solved by tweaking the linear-interpolation implementation to just skip the interpolation step (a nonzero slope is what blends the adjacent values).
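A sketch of that idea in straight numpy (the function name is my own, and I'm assuming `np.searchsorted` as the lookup step): each query point takes the value of the nearest sample at or to its left verbatim, with no slope applied.

```python
import numpy as np

def zero_hold(x, xp, fp):
    """Vectorized zero-order hold: find each query's position among the
    sparse sample points, then index the value to its left directly
    instead of blending it with the next sample."""
    xp = np.asarray(xp)
    fp = np.asarray(fp)
    # index of the last sample position <= each query point
    idx = np.searchsorted(xp, x, side='right') - 1
    # queries that fall before the first sample clamp to the first value
    return fp[np.clip(idx, 0, len(xp) - 1)]
```

Unlike the loop version, this does not require the query points to be sorted, since `searchsorted` handles each query independently.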