My Musings on the 2038 Problem

I have long been vaguely aware of the 2038 problem, similar in nature to the Y2K problem – which, despite being at the ripe old age of 8 when it “happened”, I don’t really recall ever actually happening – but the key difference is that in 2038 some critical systems genuinely could face issues, in particular 32-bit systems that store dates, i.e. every single 32-bit system.

So what exactly is the problem and what exactly is expected to happen?

In an n-bit system, dates are typically stored as a signed n-bit count of seconds since the Unix Epoch, 1970-01-01 00:00:00 UTC, meaning they can go up to 2^(n-1) - 1 seconds after that, which for 32-bit systems means 2038-01-19 03:14:07 UTC. We’re already long past the relevant points for 8-bit and 16-bit systems. In fact, I’m not even sure how 8-bit systems would have handled it, given that 2^7 - 1 = 127 seconds is a whole 2 minutes and 7 seconds. The same goes for 16-bit systems, which would at least get you past 9 AM on 1970-01-01, but not by much (09:06:07 to be precise), so at least you’d get 6 working minutes out of it.
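
As a quick sanity check (a little Python sketch of my own, nothing official), you can ask datetime where each of those limits lands:

from datetime import datetime, timezone

# The largest value of a signed n-bit counter is 2^(n-1) - 1 seconds after the epoch.
for bits in (8, 16, 32):
    max_seconds = 2 ** (bits - 1) - 1
    print(bits, max_seconds, datetime.fromtimestamp(max_seconds, tz=timezone.utc))

# 8  127        1970-01-01 00:02:07+00:00
# 16 32767      1970-01-01 09:06:07+00:00
# 32 2147483647 2038-01-19 03:14:07+00:00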

So given that timestamps stored in 8- and 16-bit systems wouldn’t even get you a full day, getting 68 years out of a 32-bit system isn’t bad at all, right? Well, no, it isn’t bad, but it still isn’t enough.

If I don’t do anything and still store my timestamps as signed 32-bit numbers, what will happen?

Here is where the distinction between signed and unsigned comes in; let’s use a 4-bit example just to keep things easy.

An unsigned 4-bit number can go all the way from 0000 = 0 to 1111 = 15 (= 2^4 - 1), whereas a signed 4-bit number can go from 1000 = -8 to 0111 = 7 (= 2^3 - 1). This is because the first bit is used as the sign: 1 is negative and 0 is positive (or zero).
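
To make that concrete, here is a tiny sketch (mine, not part of the original argument) that interprets every 4-bit pattern both ways:

# Interpret every 4-bit pattern as unsigned, then as two's-complement signed.
for value in range(16):
    signed = value - 16 if value >= 8 else value  # subtract 2^4 when the sign bit is set
    print(f"{value:04b}  unsigned: {value:2d}  signed: {signed:2d}")

# 0111 -> unsigned 7,  signed  7
# 1000 -> unsigned 8,  signed -8
# 1111 -> unsigned 15, signed -1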

There is still a need, of course, to store dates before the Unix Epoch. This is typically handled by storing them as negative numbers, e.g. 1969-12-31 23:59:00 would be stored as -60. Bit-wise (in two’s complement), setting the first bit to 1 puts you at -2^(n-1), and counting the remaining (n-1) bits up from there takes you all the way to -1, i.e. 1 second before the epoch.
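
For example (again, just a quick illustration of mine using Python’s datetime):

from datetime import datetime, timezone

print(datetime.fromtimestamp(-60, tz=timezone.utc))  # 1969-12-31 23:59:00+00:00
print(datetime.fromtimestamp(-1, tz=timezone.utc))   # 1969-12-31 23:59:59+00:00, 1 second before the epoch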

This is why dates are stored as signed rather than unsigned numbers: dates existed before 1970. If we were to switch to unsigned integers we would get an extra 68 years of breathing space, taking us to 2106-02-07 06:28:15 UTC (there’s a quick check of that date just after this list). There would be two main problems with this, however:

  1. The first one is hopefully obvious: we would completely lose the ability to work with any dates and times before 1970.
  2. Even that date will quite possibly arrive within the lifetime of people alive today. Someone born on the day I am writing this (21st October 2022) will be 83 when it comes around, and I wouldn’t consider 83 to be an unreasonably long lifespan.
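
Here is that quick check of the unsigned upper bound (my own sketch again):

from datetime import datetime, timezone

# The largest value of an unsigned 32-bit counter is 2^32 - 1 seconds after the epoch.
print(datetime.fromtimestamp(2 ** 32 - 1, tz=timezone.utc))  # 2106-02-07 06:28:15+00:00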

Now as to what will actually happen: time will keep moving, and the stored binary number 01111111111111111111111111111111 will turn into 10000000000000000000000000000000, which will now be interpreted as -2^31, i.e. 2^31 seconds before the Unix Epoch. This will take us all back to 1901-12-13 20:45:52 UTC. Naturally, this will cause chaos, especially if you believe time travel is possible and we’ve cracked it by then! I may go and buy a DeLorean just in case.
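
You can simulate that rollover by forcing the arithmetic into a signed 32-bit integer (a sketch of mine using struct; the real overflow would happen in C, not Python):

import struct
from datetime import datetime, timezone

def as_int32(n):
    # Wrap n the way a signed 32-bit counter would.
    return struct.unpack("<i", struct.pack("<I", n & 0xFFFFFFFF))[0]

last = 2 ** 31 - 1
print(datetime.fromtimestamp(as_int32(last), tz=timezone.utc))      # 2038-01-19 03:14:07+00:00
print(as_int32(last + 1))                                           # -2147483648, i.e. -2^31
print(datetime.fromtimestamp(as_int32(last + 1), tz=timezone.utc))  # 1901-12-13 20:45:52+00:00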

So, what is the solution?

Enter 64-bit

Most computers and processors you can buy today run on 64 bits, so it is incredibly unlikely that this will still be a problem for end-user devices by 2038, and who is to say that by then we won’t have moved on to 128 bits, or even 256? The trouble comes with architectures still running on 32 bits.

How long can you get out of a 64-bit system?

Doubling the number of bits squares the number of values you can represent (2^64 = (2^32)^2), so it essentially (almost) squares the length of time, in seconds, that an unsigned integer of n bits can handle; the maths of squaring doesn’t quite carry over to signed integers because of the sign bit. So how many years could we get out of 64 bits? A century? A millennium? A decamillennium (that’s 10 millennia, or 10,000 years)? Nope, you’d get 584.9 billion years. There are many comparisons you could make, but the main one in my mind is that it’s roughly 42 times the age of the universe. This of course becomes 292.5 billion years either side of the Unix Epoch when you consider signed rather than unsigned numbers.
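
The back-of-the-envelope maths, if you want to check it yourself (my own sketch, using 365-day years so the figures match the ones above):

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

unsigned_years = 2 ** 64 / SECONDS_PER_YEAR   # full unsigned 64-bit range
signed_years = 2 ** 63 / SECONDS_PER_YEAR     # signed range either side of the epoch

print(round(unsigned_years / 1e9, 1))         # 584.9 (billion years)
print(round(signed_years / 1e9, 1))           # 292.5 (billion years)
print(round(unsigned_years / 13.8e9))         # 42 - roughly 42 times the age of the universe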

Call it morbid but the human race will be long gone by the time the year 292,500,001,970 comes around.

That said, with the advent of 64-bit numbers, could it be time to reconsider what we use as the Epoch? Is there really a need to arbitrarily start dates from 1st January 1970 any more? I’d like to propose two new possibilities for the Epoch:

Point in time | Pros | Cons
--- | --- | ---
0000-00-00 00:00:00 UTC | We can calculate exactly when it was | Things still happened before it, meaning numbers would have to be signed. It is also based on religion, which not everyone subscribes to, even if most countries recognise it as year 0 and this current year as the 2022nd one after it.
The Big Bang | Nothing happened before it (that we know of at least; I happen to believe otherwise). There would be no need to use a signed integer. | We cannot calculate exactly when it was; even the official age of the universe is quoted with a margin of ±0.02 billion years, which is 20 million!

Possible new Epochs with pros and cons for each

Re-approaching the Project Euler Problems: Dealing with large files

Happy new year everyone! Welcome to 2022 and let’s hope it’s at least a bit better than the last couple of years have turned out to be.

Over the break I was tinkering with my Project Euler repo and ran into a problem that part of me always suspected I would hit eventually: one of my files (either a results CSV or an expected answers JSON) getting too big and GitHub saying “no, you can’t host that here”. I always saw this as an “eventually” issue, though, rather than a “during Christmas 2021” issue.

Whilst starting work on problem 2, I noticed that the numbers involved would be considerably larger, especially as the problem itself expects a default input of 4 million rather than the 10,000 in problem 1. So I got to work as I had done with problem 1, manually calculating the results against inputs of up to 40 and checking my Python script against those, before deciding I could trust it to generate answers all the way up to 4 million. All good, although I must confess it took a while!

Time to do some other bits and pieces before retiring for the evening, and then to git push:

remote: Resolving deltas: 100% (24/24), completed with 7 local objects.
remote: error: Trace: aa212c3521a5fdbf4c114882235a794bf0c397722cee81565295fe45a1c5e3d3
remote: error: See http://git.io/iEPt8g for more information.
remote: error: File problem_2/problem_2_expected_answers.json is 222.32 MB; this exceeds GitHub's file size limit of 100.00 MB
remote: error: GH001: Large files detected. You may want to try Git Large File Storage - https://git-lfs.github.com.
To https://github.com/gavinsykes/project-euler.git
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'https://github.com/gavinsykes/project-euler.git'

Yikes.

There is quite an alarming amount of red there, by which I mean there is any red at all. And that isn’t me just highlighting bits red for emphasis, that is git itself printing red characters to the terminal.

Luckily, having taken a look at the Git LFS it mentions, it seems really quite simple to use: just tell it which files you expect to be larger than 100MB and it will sort them all out for you.

brew install git-lfs                       # install Git LFS itself (here via Homebrew)
git lfs install                            # set up the LFS hooks for your repos
git lfs track "*.csv"                      # tell LFS which file patterns to manage
git lfs track "*_expected_answers.json"

This should create a .gitattributes file with the following content:

*.csv filter=lfs diff=lfs merge=lfs -text
*_expected_answers.json filter=lfs diff=lfs merge=lfs -text

But there is still a problem: I had committed the large file (the expected answers JSON for problem 2, going by the error message) somewhere within the last 13 commits, before installing LFS. This means that even though installing LFS picked up the files I asked it to track so that I could recommit them, my history still contained a commit with the large file untracked by LFS, so GitHub still didn’t want to know.

So how do I manage this? I believe I have found the solution.

Run git status and it should tell you that Your branch is ahead of 'origin/master' by 13 commits. (Your number of commits may vary.)

Delete the suspected offending file(s) on your local machine and commit the deletion.

Reset back the relevant number of commits; this should now be 14 (in my case it was 15, because I decided to tweak some other scripts in the middle of doing this, but don’t do that. Why would you do that? Why would you make it more complicated than it needs to be, unless you’re an idiot like me?)

git reset --soft HEAD~15

If you’re in VSCode, you should see all the changes you made within the last x commits reappear in your staged changes. We can now “squash” them into a single commit, and that one commit should push to remote sin problema.

Now for the moment of truth: LFS is all set up and appears to have been working on the current (not too big, yet) JSON and CSV files, so let’s try it on the problem 2 expected answers JSON!

Uploading LFS objects: 100% (1/1), 190 MB | 1.3 MB/s, done.
Enumerating objects: 7, done.
Counting objects: 100% (7/7), done.
Delta compression using up to 12 threads
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 463 bytes | 463.00 KiB/s, done.
Total 4 (delta 2), reused 0 (delta 0), pack-reused 0
remote: Resolving deltas: 100% (2/2), completed with 2 local objects.
To https://github.com/gavinsykes/project-euler.git
0e78144..9ac5c2d master -> master

So, other than the remarkably low upload speed of 1.3MB/s (my router isn’t the greatest and I’m not exactly close to it), I think we can call that a success! 😁😁