Tiny Core Base > Alpha Releases

Tiny Core 14.0 Alpha 1 Testing


patrikg:
I am not aiming for faster booting, just saying that we could use our multi-core CPUs more efficiently.
Loading in parallel could be a good solution, but I think you would need some kind of dependency tree to see what the system needs to extract first. And, as you said, you could combine the extensions together: just extract them all into a tmp directory first and then compress them all into one piece.
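The "extract into a tmp directory, then repack" idea can be sketched with plain directories. In reality each .tcz is a squashfs image, so the unpack/repack steps would use unsquashfs and mksquashfs; the extension names and paths below are made up for illustration:

```shell
#!/bin/sh
# Sketch of merging two unpacked extensions into one tree.
# On Tiny Core you would unpack with unsquashfs and repack the merged
# tree with mksquashfs; here plain directories stand in for the images.
set -e
work=$(mktemp -d)

# Pretend these are the unpacked contents of two extensions:
mkdir -p "$work/a/usr/local/lib" "$work/b/usr/local/lib"
echo A > "$work/a/usr/local/lib/libA.so"
echo B > "$work/b/usr/local/lib/libB.so"

# Merge both trees into one directory; a real script would then run
# something like: mksquashfs "$work/merged" big.tcz
mkdir -p "$work/merged"
for d in "$work/a" "$work/b"; do
    cp -a "$d"/. "$work/merged"/
done
ls "$work/merged/usr/local/lib"    # lists libA.so and libB.so
```

The merged tree mounts (and loads) as one extension, which is the whole point of the Xorg-full.tcz experiment mentioned later in the thread.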

nick65go:
@patrikg: I am glad that you agree with the idea of combining a tcz and its dependencies into one big tcz, for extensions with huge dependency chains (like the Xorg server)  :)

Regarding multi-core CPUs: it is a waste of (expensive) electricity to have 100 cores and NOT use them all together for 1 second, instead of one core for 100 seconds (with the other 99 cores just sleeping). But as the saying goes, "double of nothing is nothing!"
Because I am not a programmer/developer, the Pareto principle applies for me: 80% of the benefit for 20% of the work. So spending 2-3 hours of time to "improve" tce-load, just to recover them over 2-3 years, is not worth my time. Example: 365 days/year x 3 years x 10 sec/day / 3600 sec/hour => 3.04 hours.
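The arithmetic can be checked directly: a 10-second saving, once per day, over 3 years:

```shell
# Hours recovered by loading 10 seconds faster, once per day, for 3 years:
awk 'BEGIN { printf "%.2f hours\n", 365 * 3 * 10 / 3600 }'   # 3.04 hours
```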

PS: the dep tree can be seen from the TC Apps tool, or from /tmp/tcload/extensions.

CNK:

--- Quote ---Regarding multi-core CPUs: it is a waste of (expensive) electricity to have 100 cores and NOT use them all together for 1 second, instead of one core for 100 seconds (with the other 99 cores just sleeping). But as the saying goes, "double of nothing is nothing!"
Because I am not a programmer/developer, the Pareto principle applies for me: 80% of the benefit for 20% of the work. So spending 2-3 hours of time to "improve" tce-load, just to recover them over 2-3 years, is not worth my time. Example: 365 days/year x 3 years x 10 sec/day / 3600 sec/hour => 3.04 hours.

--- End quote ---

The kernel does most of the work; the "mount" and "cp -ais" commands called from tce-load largely just tell the Linux kernel what to do. The kernel is already multi-threaded itself. You can list all the kernel threads currently running with:

--- Code: ---ps | grep '\['

--- End code ---

Currently I've got 86 kernel threads, so if I had 100 CPU cores I might have some to spare, but as it is I've only got four. If I run top and press "1" to see all the CPU cores while loading an extension with many dependencies at the same time, all four cores show an increased "sys" usage, so the kernel must be having some success at splitting the mount and symlink tasks over all four cores.

nick65go:
@CNK: I also have just 4 cores (or maybe 2 cores with 4 threads?) on my AMD APU, so it will be no big deal whether or not tce-load loads TCZs in parallel. But I saw a big difference using a manually made Xorg-full.tcz, because the CPU was not switching between each thread/tce-load loop. Or maybe because reading each tcz from a slow rotational HDD left the faster cores sleeping, waiting for I/O data that was not yet in RAM/cache.

It was all theoretical. I understand that a 4-core CPU will split the workload of a single command (loading one tcz) into at most 4 threads. The possible improvement discussed was about loading many (100) tcz in parallel. Because B-two.tcz has to wait (the dependency list is sequential) until A-one.tcz finishes, even if in the meantime 3 cores are done with A-one.tcz while the 4th core is still processing its last thread [so I think]. So it was about the n-1 cores sleeping because at least one core is still processing the initial command.
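The potential win from loading independent extensions concurrently can be simulated with shell background jobs, using sleep as a stand-in for tce-load (which itself loads sequentially; the extension names are made up):

```shell
#!/bin/sh
# Simulate loading three independent extensions in parallel.
# sleep stands in for tce-load; each "load" pretends to take 1 second.
load_one() { sleep 1; }

start=$(date +%s)
for ext in libA libB libC; do
    load_one "$ext" &          # start the three independent "loads"
done
wait                           # all three finish after ~1 s, not 3 s
elapsed=$(( $(date +%s) - start ))
echo "elapsed: ${elapsed}s"
```

A real implementation would still need the dependency tree from the earlier posts, so that an extension is only started once everything it depends on has finished (the `wait` above is that synchronization point).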

nick65go:
Maybe another option for faster loading of 100 tcz could be a tce-load switch/parameter at boot, to bypass on demand the loading of 100 md5 files (tcz checksums) from a slow HDD, plus implicitly bypassing the 100 calculations and comparisons of those md5 sums. Or maybe ordering the index/files inside the squashfs tcz, because Xorg will not show until all the .so files are loaded from many tcz.
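The per-extension check being discussed is essentially an md5sum over the whole image. What that costs can be demonstrated on a scratch file; on a real Tiny Core system the pair would be foo.tcz plus foo.tcz.md5.txt in the tce/optional directory (the names and size below are illustrative):

```shell
#!/bin/sh
# What one checksum verification costs: read the whole file, hash it,
# compare against the stored sum. Done here on a scratch file.
set -e
dir=$(mktemp -d)
cd "$dir"
head -c 1048576 /dev/zero > demo.tcz     # stand-in for a 1 MB extension
md5sum demo.tcz > demo.tcz.md5.txt       # the stored checksum file
md5sum -c demo.tcz.md5.txt               # prints "demo.tcz: OK"
```

Multiply that read-and-hash by 100 extensions on a slow rotational HDD, and skipping the check on trusted local media starts to look attractive.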

PS: I wonder how many programs (as a % of the total) still run under Xvesa/Xfbdev in today's TC 14. Most of them ask for gtk3, which in turn asks for the X libs -> Xorg. Any proper (modern) browser asks for Xorg.
