David - 10 months ago

C++ Question

I have the time since Jan 1 1970 00:00 as an int64 count of nanoseconds, and I'm trying to convert it into month/day/year/day of week.

It's easy to do this iteratively, and I have that working, but I want to do it formulaically. I'm looking for the actual math.

Answer Source

New answer for old question:

Rationale for this new answer: The existing answers either do not show the algorithms for the conversion from nanoseconds to year/month/day (e.g. they use libraries with the source hidden), or they use iteration in the algorithms they do show.

This answer has no iteration whatsoever.

The algorithms are here, and explained in excruciating detail. They are also unit tested for correctness over a span of +/- a million years (way more than you need).

The algorithms don't count leap seconds. If you need that, it can be done, but requires a table lookup, and that table grows with time.

The date algorithms deal only with units of days, not nanoseconds. To convert days to nanoseconds, multiply by `86400*1000000000` (taking care to ensure you're using 64-bit arithmetic). To convert nanoseconds to days, divide by the same amount. Or better yet, use the C++11 `<chrono>` library.

There are three date algorithms from this paper that are needed to answer this question.

1. `days_from_civil`:

```
// Returns number of days since civil 1970-01-01.  Negative values indicate
//    days prior to 1970-01-01.
// Preconditions:  y-m-d represents a date in the civil (Gregorian) calendar
//                 m is in [1, 12]
//                 d is in [1, last_day_of_month(y, m)]
//                 y is "approximately" in
//                   [numeric_limits<Int>::min()/366, numeric_limits<Int>::max()/366]
//                 Exact range of validity is:
//                 [civil_from_days(numeric_limits<Int>::min()),
//                  civil_from_days(numeric_limits<Int>::max()-719468)]
template <class Int>
constexpr
Int
days_from_civil(Int y, unsigned m, unsigned d) noexcept
{
    static_assert(std::numeric_limits<unsigned>::digits >= 18,
             "This algorithm has not been ported to a 16 bit unsigned integer");
    static_assert(std::numeric_limits<Int>::digits >= 20,
             "This algorithm has not been ported to a 16 bit signed integer");
    y -= m <= 2;
    const Int era = (y >= 0 ? y : y-399) / 400;
    const unsigned yoe = static_cast<unsigned>(y - era * 400);      // [0, 399]
    const unsigned doy = (153*(m + (m > 2 ? -3 : 9)) + 2)/5 + d-1;  // [0, 365]
    const unsigned doe = yoe * 365 + yoe/4 - yoe/100 + doy;         // [0, 146096]
    return era * 146097 + static_cast<Int>(doe) - 719468;
}
```

2. `civil_from_days`:

```
// Returns year/month/day triple in civil calendar
// Preconditions:  z is number of days since 1970-01-01 and is in the range:
//                   [numeric_limits<Int>::min(), numeric_limits<Int>::max()-719468].
template <class Int>
constexpr
std::tuple<Int, unsigned, unsigned>
civil_from_days(Int z) noexcept
{
    static_assert(std::numeric_limits<unsigned>::digits >= 18,
             "This algorithm has not been ported to a 16 bit unsigned integer");
    static_assert(std::numeric_limits<Int>::digits >= 20,
             "This algorithm has not been ported to a 16 bit signed integer");
    z += 719468;
    const Int era = (z >= 0 ? z : z - 146096) / 146097;
    const unsigned doe = static_cast<unsigned>(z - era * 146097);          // [0, 146096]
    const unsigned yoe = (doe - doe/1460 + doe/36524 - doe/146096) / 365;  // [0, 399]
    const Int y = static_cast<Int>(yoe) + era * 400;
    const unsigned doy = doe - (365*yoe + yoe/4 - yoe/100);                // [0, 365]
    const unsigned mp = (5*doy + 2)/153;                                   // [0, 11]
    const unsigned d = doy - (153*mp+2)/5 + 1;                             // [1, 31]
    const unsigned m = mp + (mp < 10 ? 3 : -9);                            // [1, 12]
    return std::tuple<Int, unsigned, unsigned>(y + (m <= 2), m, d);
}
```

3. `weekday_from_days`:

```
// Returns day of week in civil calendar [0, 6] -> [Sun, Sat]
// Preconditions:  z is number of days since 1970-01-01 and is in the range:
//                   [numeric_limits<Int>::min(), numeric_limits<Int>::max()-4].
template <class Int>
constexpr
unsigned
weekday_from_days(Int z) noexcept
{
    return static_cast<unsigned>(z >= -4 ? (z+4) % 7 : (z+5) % 7 + 6);
}
```

These algorithms are written for C++14. If you have C++11, remove the `constexpr`. If you have C++98/03, remove the `constexpr`, the `noexcept`, and the `static_assert`s.

Note the lack of iteration in any of these three algorithms.

They can be used like this:

```
#include <cstdint>
#include <iostream>
#include <tuple>

int
main()
{
    int64_t z = days_from_civil(2015LL, 8, 22);
    int64_t ns = z*86400*1000000000;
    std::cout << ns << '\n';
    const char* weekdays[] = {"Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"};
    unsigned wd = weekday_from_days(z);
    int64_t y;
    unsigned m, d;
    std::tie(y, m, d) = civil_from_days(ns/86400/1000000000);
    std::cout << y << '-' << m << '-' << d << ' ' << weekdays[wd] << '\n';
}
```

which outputs:

```
1440201600000000000
2015-8-22 Sat
```

The algorithms are in the public domain. Use them however you want. The date algorithms paper has several more useful date algorithms if needed (e.g. `weekday_difference` is both remarkably simple and remarkably useful).

These algorithms are wrapped up in an open source, cross platform, type-safe date library if needed.

If timezone or leap second support is needed, there exists a timezone library built on top of the date library.

**Update: Different local zones in same app**

See how to convert among different time zones.

**Update:** Are there any pitfalls to ignoring leap seconds when doing date calculations in this manner?

This is a good question from the comments below.

*Answer:* There are some pitfalls. And there are some benefits. It is good to know what they both are.

Almost every source of time from an OS is based on Unix Time. Unix Time is a count of time since 1970-01-01 *excluding* leap seconds. This includes functions like the C `time(nullptr)` and the C++ `std::chrono::system_clock::now()`, as well as the POSIX `gettimeofday` and `clock_gettime`. This is not a fact specified by the standard (except it is specified by POSIX), but it is the de facto standard.

So if your source of seconds (nanoseconds, whatever) neglects leap seconds, it is exactly correct to ignore leap seconds when converting to field types such as `{year, month, day, hours, minutes, seconds, nanoseconds}`. In fact, taking leap seconds into account in such a context would actually *introduce* errors.

So it is good to know your source of time, and especially to know if it also neglects leap seconds as Unix Time does.

If your source of time *does not* neglect leap seconds, you can *still* get the correct answer down to the second. You just need to know the set of leap seconds that have been inserted. Here is the current list.

For example if you get a count of seconds since 1970-01-01 00:00:00 UTC which *includes* leap seconds and you know that this represents "now" (which is currently 2016-09-26), the current number of leap seconds inserted between now and 1970-01-01 is 26. So you could subtract 26 from your count, and *then* follow these algorithms, getting the exact result.

This library can automate leap-second-aware computations for you. For example to get the number of seconds between 2016-09-26 00:00:00 UTC and 1970-01-01 00:00:00 UTC *including* leap seconds, you could do this:

```
#include "chrono_io.h"
#include "tz.h"
#include <iostream>

int
main()
{
    using namespace date;
    auto now  = to_utc_time(sys_days{2016_y/sep/26});
    auto then = to_utc_time(sys_days{1970_y/jan/1});
    std::cout << now - then << '\n';
}
```

which outputs:

```
1474848026s
```

Neglecting leap seconds (Unix Time) looks like:

```
#include "chrono_io.h"
#include "date.h"
#include <iostream>

int
main()
{
    using namespace date;
    using namespace std::chrono_literals;
    auto now  = sys_days{2016_y/sep/26} + 0s;
    auto then = sys_days{1970_y/jan/1};
    std::cout << now - then << '\n';
}
```

which outputs:

```
1474848000s
```

For a difference of `26s`.

This upcoming New Year's (2017-01-01) we will insert the 27th leap second.

Between 1958-01-01 and 1970-01-01 10 "leap seconds" were inserted, but in units smaller than a second, and not just at the end of Dec or Jun. Documentation on exactly how much time was inserted and exactly when is sketchy, and I have not been able to track down a reliable source.

Atomic time keeping services began experimentally in 1955, and the first atomic-based international time standard TAI has an epoch of 1958-01-01 00:00:00 GMT (what is now UTC). Prior to that the best we had was quartz-based clocks which were not accurate enough to worry about leap seconds.