I have some code that logs data from a device, timestamping each sample with the system time in milliseconds.
I was using a
struct timespec request;
uint64_t stamp0 = (uint64_t)((uint64_t)request.tv_sec * 1000 + (uint64_t)request.tv_nsec / 1000000);
I'm guessing that the timestamp
1130802699 should be more like 1478599582064.
If so, you may be able to recover the timestamps by adding the appropriate multiple of 1<<32 (I'm assuming that your
long is 32 bits and that the truncated values don't roll over into the negative range). In this case the multiple is 344,
i.e. 1477468749824 to add to each value.
Your 32-bit values will roll over every 2^32 ms, which is roughly 50 days (about 7 weeks), so you might have to do something a bit cleverer if your files span a longer range.
If you're wondering how I came up with that value, work backwards.
We know the
uint64_t was truncated to (roughly) an
int32_t, and if we make some reasonable assumptions (e.g. 2's complement arithmetic), that truncation amounts to masking with 0xffffffff:
stamp0 & 0xffffffff
That's equivalent to subtracting off the high 32 bits:
stamp0 - (stamp0 & 0xffffffff00000000)
This difference will be constant across a large range of values, and approximately equal to the difference between the actual and expected values.
1478599582064 - 1130802699 is 1477468779365, or 0x15800007365.
So I think the offset to add back on is actually 0x15800000000.