
Suggestion: XZ compression in kernel + tcz; squashfs with xz + bigger block size


andyj:
Remember to keep your eye on the prize when it comes to compression. The goal might be the smallest size when transferring large files over the internet, because there bandwidth is the limiting factor. When booting, however, size is less important: you compress once but decompress many times, so the fastest decompression is preferred. If you want to make a convincing argument, put together a test we can all run on the different systems we have, so we get some real data on the compression flavors.
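
Something along these lines would do as a starting point. It is only a rough sketch: payload.tar is a placeholder for whatever test data you pick (e.g. the unpacked contents of a typical extension, tarred up so every compressor sees the same input), and it assumes the gzip, xz, zstd and lz4 binaries are all installed:

--- Code ---
#!/bin/sh
# Rough compression/decompression timing sketch, not a rigorous benchmark.
# payload.tar is a placeholder; each tool runs at its default level.
f=payload.tar

for pair in "gzip gz" "xz xz" "zstd zst" "lz4 lz4"; do
    set -- $pair                      # $1 = tool, $2 = suffix
    $1 -c "$f" > "$f.$2"              # compress once
    echo "== $1: $(wc -c < "$f.$2") bytes"
    # boot time is dominated by repeated decompression, so time that part:
    time sh -c "for i in 1 2 3 4 5; do $1 -dc $f.$2 >/dev/null; done"
done
--- End code ---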

hiro:
thanks andyj. that's kinda what i would have preferred to have said, hehe.
https://code.fb.com/core-data/zstandard/

curaga:
Just wondering what the target case here is. "Copy the tcz to RAM, then mount from there" is not a mode we designed. copy2fs unpacks to RAM, so XZ compression would not save any RAM for copy2fs.

There are also slow x64 CPUs, mainly Atoms and Bobcats.

I do think x64 does not have the constraints of x86, and could use higher block sizes or a different algo. The question is what makes the most sense.
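
For experimenting, something like the following would let us compare. The block-size/algorithm combinations are only illustrative, ext.dir and the output names are placeholders, none of this reflects an official default, and the zstd line assumes a squashfs-tools build with zstd support:

--- Code ---
# Build the same directory tree with a few block-size/algorithm combos.
mksquashfs ext.dir ext-gzip-4k.tcz   -comp gzip -b 4K   -noappend
mksquashfs ext.dir ext-xz-64k.tcz    -comp xz   -b 64K  -Xbcj x86 -noappend
mksquashfs ext.dir ext-zstd-256k.tcz -comp zstd -b 256K -noappend

ls -l ext-*.tcz    # compare sizes

# Time a mount plus a full read of the contents for each image:
mkdir -p /mnt/test
time sh -c 'mount -o loop,ro ext-xz-64k.tcz /mnt/test &&
            tar -cf /dev/null -C /mnt/test . && umount /mnt/test'
--- End code ---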

nick65go:
Thank you to all who helped with their ideas/reasons. Of course I can re-compress the tcz, re-master the core, etc.; that is not the point, otherwise I could just recompile the kernel tuned to my CPU and be done. But that is not portable, and it will not help the TC user community.

Hi curaga; as you said:

--- Quote ---I do think x64 does not have the constraints of x86, and could use higher block sizes or a different algo. The question is what makes the most sense.
--- End quote ---
I just asked to clarify, for myself and maybe other users, what the experts/developers intend to do in the future for TCx64: what target audience is covered regarding resources (minimum CPU), what speed gains (if any) would come from changing the compression algorithm, whether for booting or for internet bandwidth, any lower RAM usage, etc.

EDIT: I am just a user, without multiple devices to test on, nor the time (or knowledge) to PROPERLY test and measure the various data sets. Also, if the gains are too small, it may not be worth my time. So I would rather trust the experts/experiments.

hiro:
i already gave my rather naive opinion (that everything but lz4 will be too slow to utilize the system to its fullest available extent, since in this case the bottleneck is believed to be in the IO, not the cpu), BUT i think testing it out in practice would be a very, very valuable experience. :)

about higher block sizes: if you *really* care about all performance scenarios, more modern algorithms using the dictionary approach seem more practical than higher block sizes.
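
e.g. zstd's dictionary mode, roughly like this. the paths are just placeholders, and note this only demonstrates the zstd feature itself -- i'm not claiming squashfs exposes external dictionaries:

--- Code ---
# train a shared dictionary on a corpus of small sample files
zstd --train samples/* -o shared.dict

# compress/decompress individual small files against that dictionary;
# cross-file redundancy is captured once, in the dictionary
zstd -19 -D shared.dict some_small_file -o some_small_file.zst
zstd -dc -D shared.dict some_small_file.zst > /dev/null
--- End code ---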

it's an exciting topic, so i hope you have fun :)
