Tiny Core Extensions > TCE Talk
upx for large executables?
CardealRusso:
--- Quote from: hiro on May 18, 2024, 02:51:03 AM ---tested on tinycorelinux?
--- End quote ---
yes
hiro:
ah, you did your homework a year ago already :O
trying to analyze the results from the table for your example extension
1) going from gzip at 4k to zstd at 128k can cut the size by roughly a quarter, at the cost of a bigger change for all existing extensions, giving up the former consistency.
if size isn't super important, i would say in this instance the change isn't worth it at all.
if size is super important you should probably make a dedicated single file with high blocksize and strong compression, for all your extensions and mydata. for this, you could put a script in your exittc process or so...
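One way to sketch that "dedicated single file" idea is a single zstd squashfs with a large block size covering all extensions plus mydata. The paths below are illustrative, not Tiny Core defaults, and the compression level is just a starting point:

```shell
# Sketch: pack everything into one high-blocksize, strongly compressed
# squashfs image. SRC/OUT paths are assumptions for illustration.
SRC=/etc/sysconfig/tcedir/optional
OUT=/tmp/all-in-one.sq

# -b 128K raises the block size from the small default;
# -comp zstd with a high -Xcompression-level trades pack time for size.
if command -v mksquashfs >/dev/null; then
  mksquashfs "$SRC" "$OUT" -b 128K -comp zstd -Xcompression-level 19 -noappend
fi
```

A script like this could run from the exittc process, as suggested, so the image is refreshed on shutdown.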
2) going from 4k to 128k seems to generally halve the time needed for decompression, regardless of compression codec
3) ram usage stays mostly the same for all tested combinations, so that one is largely irrelevant.
bonus: on the one hand i'm surprised to see no direct benefit for lz4, on the other hand, i think it's not super realistic to load applications and not use them.
i would propose a complete end-to-end test, where you measure the boot time plus loading that webbrowser extension plus how long it takes to also automatically navigate a webbrowser to some bloated website...
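A rough harness for that end-to-end measurement could time the extension load plus a page fetch. The extension name and URL are placeholders, and a plain wget stands in for actually driving a GUI browser:

```shell
# Rough end-to-end timing sketch: extension load + page fetch.
# "firefox.tcz" and the URL are placeholders, not real TC specifics.
start=$(date +%s)
tce-load -i firefox.tcz 2>/dev/null || true   # hypothetical extension name
# fetching a heavy page is a stand-in for "navigate to a bloated website"
wget -q -O /dev/null https://example.com/ 2>/dev/null || true
end=$(date +%s)
echo "load+fetch took $((end - start))s"
```

Measured from power-on (e.g. with a bootlocal.sh hook), this would capture the whole path the thread is arguing about instead of decompression time in isolation.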
neonix:
Can upx compress dll files?
neonix:
It looks like upx uses its own UCL algorithm, not gzip.
Not many people know that upx can also compress shared libraries and static libraries.
I tried to compress libfltk.so.1.3 with the new version of upx -9 and got about a 50% reduction. After compression with squashfs the difference in the new tcz is not great, about 50 kB.
upx --lzma doesn't work with libfltk.so.1.3 because it results in a Trace/breakpoint trap. I also tried compressing the libraries in /lib and /usr/lib with the --android-shlib argument but got a PT_NOTE error.
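The experiment above can be sketched as follows; the ~50% figure comes from the libfltk.so.1.3 test reported here, and the guard just keeps the sketch safe to run anywhere:

```shell
# Sketch of the upx-on-a-shared-library experiment.
# libfltk.so.1.3 is the library from the test above.
LIB=libfltk.so.1.3
if command -v upx >/dev/null && [ -f "$LIB" ]; then
  upx -9 "$LIB"   # default UCL/NRV compression, ~50% smaller in the test
  # note: upx --lzma hit a Trace/breakpoint trap on this library,
  # so the default filters are the safer choice here
fi
```

Since tcz files are squashfs images, any upx gain largely overlaps with what squashfs would have compressed anyway, which matches the small 50 kB difference observed.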
I also discovered that the Linux kernel supports ko.xz compression.
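For the ko.xz point: a kernel built with xz module compression support (CONFIG_MODULE_COMPRESS_XZ) can load `.ko.xz` files directly. A minimal sketch, with an illustrative module path:

```shell
# Sketch: xz-compress a kernel module in place; modprobe can load the
# resulting .ko.xz if the kernel was built with xz module support.
# The module path here is purely illustrative.
MOD=/lib/modules/$(uname -r)/kernel/fs/cifs/cifs.ko
if [ -f "$MOD" ] && command -v xz >/dev/null; then
  xz -9 "$MOD"   # leaves cifs.ko.xz in place of cifs.ko
  depmod -a      # refresh the module dependency lists afterwards
fi
```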
nick65go:
IMHO, we should first define the "environment" where we want to use the "optimizations". UPX compression, for example, was intended to shrink executables (it was initially used on Windows for pseudo-protected/obfuscated executables, to hide their resources and code from disassembling/debugging).
If we want "common" scripts etc. for 32 bits and 64 bits (admin hiro style), then the result will be AVERAGE (not optimal) for any specific environment.
1. if we use a powerful environment, like SMP -- multithreading, multicore, 64-bit CPU, with a fast SSD -- not HDD, lots of RAM (over 4GB) -- then UPX, zstd, gzip, etc. do NOT matter too much. The differences are not worth the effort. The time spent testing/benchmarking, re-building TCZs, etc. will never be recovered, even if used by 100 users and 100 applications. If you do not believe me, then let's try to estimate/agree how much you hypothetically want to GAIN in speed/size/RAM usage etc. (be it relative %, or absolute values), versus how much you want to INVEST to succeed, as time/money/pride etc. So basically, define the target and the budget.
2. if we use a 32-bit, slow 486 CPU (tiny core user compatible), not SMP, not multithreaded, with slow media like an HDD/CDROM, then maybe UPX can be discussed with better success. In this environment the small/minuscule gains should matter, for some particularly BIG applications. For an already small tcz, the algorithm or block size does not matter much anyway.
PS: I hope I did not offend anyone with my comment. For me it is about efficiency: small effort for small reward, or big effort for big reward, but not big effort for small reward. YMMV.