Tiny Core Linux
Tiny Core Base => TCB Talk => Topic started by: destroyedlolo on January 19, 2023, 02:00:32 AM
-
Hello,
Today on reddit, someone raised the upcoming problem of 32-bit time_t (https://www.reddit.com/r/linux/comments/10fqx1t/today_is_y2k38_commemoration_day/).
My current project with TCL is a very long term archiving system, speaking about decades, mostly to store my family photos, important documents and such.
Is the current 13.1 TCL for 32-bit systems already 2k38 safe ? If not, what about the upcoming 14.x ?
Thanks
Laurent
-
Nothing special has been done in tinycore, so its 2k38 preparedness is that of the source from which it is built.
-
As per this information I found : https://stackoverflow.com/questions/14361651/is-there-any-way-to-get-64-bit-time-t-in-32-bit-programs-in-linux/60709400#60709400,
using a 5.6+ kernel and a recent libc, it should be "de facto" migrated. But I don't know if that applies to the libc used by TCL ?
-
The link says glibc >2.32
tc-13 is 2.34
tc-14 is 2.36
..but your apps would still need recompiling.
-
How are major releases made (especially 13.x and 14.x) : do you rebuild everything from scratch, or do you keep some binaries from the previous release ?
-
The toolchain is built from scratch and the kernel is compiled with the new toolchain with each major release.
See, for example: http://www.tinycorelinux.net/14.x/x86_64/release/src/
The majority of the extensions are copied over from the previous major release.
-
enjoyed perusing the OP hotlink:
https://www.reddit.com/r/linux/comments/10fqx1t/today_is_y2k38_commemoration_day/
definitely more thought-provoking than entertaining
another fun one:
https://jvns.ca/blog/2023/01/18/examples-of-problems-with-integers/
20230119-0745am-modified-added another fun link
-
Hi, destroyedlolo.
My current project with TCL is a very long term archiving system, speaking about decades, mostly to store my family photos, important documents and such.
64-bit linux has been 2k38 ready for a while. 32-bit linux has been ready since kernel version 5.6. See here:
https://en.wikipedia.org/wiki/Year_2038_problem
Technology changes fast and hard drives, USB drives, etc. are susceptible to bit rot, making archiving a difficult job.
For family photos we went with having them professionally printed and then making photo albums. For family videos we went with making video DVDs and burning them to M-DISCs, which supposedly last forever. Note: only some DVD burners can write M-DISCs (e.g., LG brand), but the majority of players (all I've tried) can play them just like a normal DVD.
https://en.wikipedia.org/wiki/M-DISC
For both pictures and videos, we also put everything on a USB solid state drive formatted with ext4 and on a different USB solid state drive formatted with ntfs.
What we did is still not a guarantee that grandchildren and great-grandchildren will be able to access this stuff, of course. I think the photos will outlive the usefulness of the DVDs and USB devices, but we made a gamble on these technologies and hope they will still be available in antique shops of the future.
Good luck :)
-
Hi destroyedlolo
If you want to verify whether your 32 bit kernel/glibc will handle
64 bit time, see this:
https://stackoverflow.com/questions/71599103/compiling-old-c-code-y2038-conform-still-results-in-4-byte-variables
Create a file called test2038.c:
#include <sys/types.h>
#include <sys/stat.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    struct stat sb;

    printf("sizeof time_t: %zu\n", sizeof(time_t));
    printf("sizeof stat timestamp: %zu\n", sizeof(sb.st_atime));
    return 0;
}
Don't call it time.c like the author did: time is a busybox applet.
Install the compiler:
tce-load -w -i compiletc
Compile and run the program:
tc@E310:~/y2038$ gcc -D_TIME_BITS=64 -D_FILE_OFFSET_BITS=64 test2038.c -o test2038
tc@E310:~/y2038$ ./test2038
sizeof time_t: 4
sizeof stat timestamp: 4
tc@E310:~/y2038$
I ran this under TC10 and it shows 32 bit times. The sizeof results will be 8 for 64 bit times.
-
You should use 64-bit linux to be sure; that has had a 64-bit time_t for ages.
-
Hi all,
Let me present to you my solution:
In addition to printed books, I need digital backups, as not everything is printed (and printing a video is not very ... efficient ;) ).
I now have more than 20 years of digital photos and videos, and some are important to us (wedding where we splurged ;D, birth and important moments of children, ...).
Based on an archiving study I did at work (I'm infrastructure/solution architect), I identified the following potential issues :
- resolutions : my first camera did, fortunately, 1080p photos, but videos only in 640x480, which looks like a postage stamp now.
-> But technology evolves; nothing can be done here (except upscaling, which is of limited use with such low resolutions)
- data format : in the '90s, Amiga's IFF was very common; only a little software supports it now.
-> for that, the day JPEG and MPEG-4 become obsolete, I'll convert ...
- the last problem is media sustainability.
-> For the moment, data are mirrored on 4 machines running different OSes (2x TCL, 1 Gentoo, 1 NetBSD) to avoid bugs / viruses. They are all very old and obsolete machines, now too slow or too power-hungry for any other usage. This is where 32-bit enters the game : the most powerful one is a P4.
If one fails, I don't care, as I have 3 other ones ...
In addition, when a storage technology becomes obsolete, I add a new machine with a more recent one.
Initially it was SCSI disks (I deprecated them, as the disks were too small / too expensive), then IDE, and now SATA. I have at least 2 machines, each able to read at least 2 formats.
Now, there is the problem of data rot : for that, I created a daemon (https://github.com/destroyedlolo/Mer-de-Glace) that associates a digital signature with every file. It is able to detect file creation, deletion and modification/corruption. As it runs on all mirrors, I'm able to identify which one is safe and which one is corrupted.
Even if it is already doing the job, development is still ongoing ... I need at least to implement inotify, then I'll issue a TCL package.
Comments, ideas on my solution or any help (coding, docs for Mer-De-Glace) are obviously welcome :)
-
a little more if you've got the time(pun intended):
https://rachelbythebay.com/w/2023/01/19/time/
-
For corruption, there are methods like par2, which create additional recovery data and are able to fix small corruption errors.
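For what it's worth, the par2cmdline workflow looks roughly like this (a sketch; the -r10 redundancy level and the file names are arbitrary examples):

```shell
# Create recovery data covering ~10% of the input (illustrative level);
# this writes archive.par2 plus recovery volume files alongside it.
par2 create -r10 archive.par2 photos/*.jpg

# Later: check the files against the recovery data.
par2 verify archive.par2

# If verify reports damage, attempt to reconstruct the originals.
par2 repair archive.par2
```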
-
For corruption, there are methods like par2, which create additional recovery data and are able to fix small corruption errors.
Most of the time, when data start to be corrupted, it's highlighting the beginning of a larger issue (hardware failure, an SSD losing its electrical charge, HD demagnetization, ...). That's why I would prefer to be warned about it and rely on another mirror rather than on corrective actions.
I mean, this kind of data is "sleeping" : very, very infrequent access and no urgency. So having to start another machine to access it is not really an issue ;)
-
Are you printing these checksums to paper to avoid data rot ??
And if not, how do you keep the checksums themselves from suffering data rot ??
-
Are you printing these checksums to paper to avoid data rot ??
And if not, how do you keep the checksums themselves from suffering data rot ??
Checksums are stored on disk. There is a checksum associated with each signature (and a signature is compared against its checksum even when held in memory during processing).
And this file is also present on each backup server.
So, if the backup file is not readable, is incomplete or doesn't match the expected format, or if at least one checksum doesn't correspond to its signature (even in memory, after loading), I consider this server bad. So I elect another one as the reference, after obviously applying the same checks to it.