`numpy.interp` is very convenient and relatively fast. In certain contexts I'd like to compare its output against a non-interpolated variant in which the sparse values are propagated into the "denser" output, so the result is piecewise constant between the sparse inputs. The function I want could also be called a "sparse -> dense" converter that copies the latest sparse value forward until it finds a later one (a kind of null interpolation, as if no time/distance had elapsed since the earlier value).
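To make the desired behavior concrete, here is a small illustration (the sample arrays are my own hypothetical data, not from the question): `numpy.interp` blends adjacent sparse values, while the step-function variant should hold each value until the next sample point.

```python
import numpy as np

xp = np.array([0.0, 2.0, 5.0])                 # sparse sample points
fp = np.array([1.0, 3.0, 2.0])                 # sparse values
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])  # dense query points

# numpy.interp blends linearly between the sparse samples:
print(np.interp(x, xp, fp))  # -> [1. 2. 3. 2.6667 2.3333 2.]

# The desired "sparse -> dense" step function would instead hold the
# latest sparse value until the next one appears:
#    [1. 1. 3. 3. 3. 2.]
```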

Unfortunately, it's not easy to tweak the source of `numpy.interp`, because it's just a wrapper around a compiled function. I could write this myself using Python loops, but I hope to find a C-speed way to solve the problem.

**Update:** the solution below (`scipy.interpolate.interp1d` with `kind='zero'`) is quite slow, taking more than 10 seconds per call (e.g. on an input 500k in length that's 50% populated). It implements `kind='zero'` using a zero-order spline, and the call to `spleval` is very slow. However, the source code for `kind='linear'` (i.e. the default interpolation) gives an excellent template for solving the problem in straight numpy (the minimal change is to set `slope=0`). That code shows how to use `numpy.searchsorted`, and the runtime is similar to calling `numpy.interp`. So the problem is solved by tweaking the `scipy.interpolate.interp1d` implementation of linear interpolation to skip the interpolation step (a nonzero slope blends the adjacent values).