
Author Topic: upx for large executables?

Offline curaga

  • Administrator
  • Hero Member
  • *****
  • Posts: 11037
Re: upx for large executables?
« Reply #15 on: May 17, 2024, 11:18:27 AM »
The mksquashfs program supports block sizes up to 1 MB, and if 1 MB gives a good benefit over 512 KB, it's fine to use. The entire range is OK on 64-bit, really.
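For reference, the block size is passed to mksquashfs via -b; a minimal sketch, assuming a package tree in /tmp/pkg (paths are hypothetical):

    # build the image with a 1 MB block size (1048576 bytes)
    mksquashfs /tmp/pkg pkg-1m.tcz -b 1048576 -noappend

    # same tree at 512 KB, for an A/B size comparison
    mksquashfs /tmp/pkg pkg-512k.tcz -b 524288 -noappend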
The only barriers that can stop you are the ones you create yourself.

Offline GNUser

  • Wiki Author
  • Hero Member
  • *****
  • Posts: 1490
Re: upx for large executables?
« Reply #16 on: May 17, 2024, 11:24:41 AM »
Duly noted. Thanks!

Offline hiro

  • Hero Member
  • *****
  • Posts: 1229
Re: upx for large executables?
« Reply #17 on: May 17, 2024, 11:55:28 AM »
it depends on how the data is going to be accessed. as an example, if there's a huge raw-format video file embedded in the binary that plays an intro every time you open the application, it will be accessed sequentially in one big batch at startup and would benefit from an efficient, high-blocksize form of compression. even more, it would benefit from a lossy video codec.

these are 2 extremes.

a third extreme is a binary full of pre-computed lookup tables or something like that, where only a few entries are ever looked at. then it makes sense to keep the blocksize small enough that the blocks holding actual runnable instructions mostly contain just those, and not whatever other never-accessed data.

or, a program is extremely bloated and contains everything and their grandmother: then you might be unlucky&lucky and most stuff is never accessed, or you might be unlucky&unlucky and you'll page in everything because every second half-block has to be used!
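a rough way to feel that difference (a sketch; the tree and file names are placeholders, and dropping caches needs root): build the same tree at two block sizes and time one small cold-cache read on each.

    mksquashfs app app-4k.sq -b 4096    -noappend   # 4 KB blocks
    mksquashfs app app-1m.sq -b 1048576 -noappend   # 1 MB blocks
    mount -o loop app-4k.sq /mnt/a
    sync && echo 3 > /proc/sys/vm/drop_caches       # cold cache
    # a 4 KB random read only has to decompress one 4 KB block here;
    # on the 1 MB image the same read drags in a whole megabyte
    time dd if=/mnt/a/bigfile of=/dev/null bs=4k count=1 skip=5000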

Offline CardealRusso

  • Full Member
  • ***
  • Posts: 178
Re: upx for large executables?
« Reply #18 on: May 17, 2024, 03:52:21 PM »
Fair enough. Is there at least a range of sizes that you would consider to be reasonable?
A hint:

Comp          Size     RAM      Decomp (real)
zstd (128k)   119 MB   391 MB   0.37 s
lz4 (128k)    181 MB   381 MB   0.35 s
gzip (128k)   131 MB   379 MB   0.32 s
zstd (4k)     154 MB   385 MB   1.12 s
gzip (4k)     160 MB   374 MB   1.15 s
« Last Edit: May 17, 2024, 03:54:07 PM by CardealRusso »
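For anyone wanting to reproduce numbers like these, a minimal sketch (the source directory name is a placeholder; zstd support needs squashfs-tools 4.4+):

    for comp in gzip lz4 zstd; do
      for bs in 4096 131072; do                # 4k and 128k
        mksquashfs chromium-root img-$comp-$bs.sq -b $bs -comp $comp -noappend
        echo "$comp/$bs: $(stat -c %s img-$comp-$bs.sq) bytes"
        time unsquashfs -f -d /tmp/out img-$comp-$bs.sq   # bulk decompression
      done
    done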

Offline GNUser

  • Wiki Author
  • Hero Member
  • *****
  • Posts: 1490
Re: upx for large executables?
« Reply #19 on: May 17, 2024, 07:35:39 PM »
Interesting, but that's only two block sizes, so not enough to find a sweet spot. Still, the data suggests a larger block size gives a smaller extension and faster decompression, with no significant difference in RAM usage.

Offline hiro

  • Hero Member
  • *****
  • Posts: 1229
Re: upx for large executables?
« Reply #20 on: May 17, 2024, 07:37:43 PM »
i'm not sure what kind of "ram usage" this is measuring.
but i presume it's most likely during bulk extraction of the whole file (which we do not do), and not the one we actually care about (ram overhead due to block overhead).

though now that i think about it, i don't even know whether either of these ram usages would be significant enough for us to even begin bothering about, versus for example performance effects (due to large-block access overhead/inefficiencies during actual smaller, partial-block, highly random access scenarios).

fewer third-party random numbers, more first-party testing please. and explain your numbers.
« Last Edit: May 17, 2024, 07:40:35 PM by hiro »
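to illustrate the distinction, a sketch of measuring the overhead we do care about (assumes a tcz loop-mounted under /tmp/tcloop, which is where tce-load puts them; the app path is hypothetical):

    sync && echo 3 > /proc/sys/vm/drop_caches    # start from a cold cache
    grep -E '^MemFree|^Cached' /proc/meminfo     # baseline
    /tmp/tcloop/app/usr/local/bin/app &          # touch only some pages
    sleep 5
    grep -E '^MemFree|^Cached' /proc/meminfo     # delta ~ blocks paged in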

Offline GNUser

  • Wiki Author
  • Hero Member
  • *****
  • Posts: 1490
Re: upx for large executables?
« Reply #21 on: May 17, 2024, 07:43:36 PM »
Quote from: hiro
i'm not sure what kind of "ram usage" this is measuring.
Good point, hiro.

At the end of the day, the amount of effort in benchmarking and defining terms might be much greater than the technical gains.

Offline hiro

  • Hero Member
  • *****
  • Posts: 1229
Re: upx for large executables?
« Reply #22 on: May 17, 2024, 07:47:57 PM »
if you try it and the benefit can be felt even without careful measurement, then it's likely worth measuring to optimize further down the same route.
in other words: if you find hints, it's good to use that newly found knowledge and investigate further and try to understand the full situation ;)

Offline CardealRusso

  • Full Member
  • ***
  • Posts: 178
Re: upx for large executables?
« Reply #23 on: May 17, 2024, 08:15:37 PM »
Quote from: hiro
i'm not sure what kind of "ram usage" this is measuring.
This is the total RAM usage of the system, with different algorithms and block sizes for ungoogled-chromium. It's the memory usage after a fresh start of the system, without running any program other than top.
It's not very accurate, but it definitely gives us some idea.
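On Tiny Core, one way to make that snapshot repeatable is to log it from /opt/bootlocal.sh, so every boot is measured at the same point (a sketch; the log path is arbitrary):

    # run once per boot, at the same point in startup each time
    echo 'free -m >> /home/tc/boot-mem.log' >> /opt/bootlocal.sh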

Offline hiro

  • Hero Member
  • *****
  • Posts: 1229
Re: upx for large executables?
« Reply #24 on: May 18, 2024, 02:51:03 AM »
tested on tinycorelinux?

Offline CardealRusso

  • Full Member
  • ***
  • Posts: 178
Re: upx for large executables?
« Reply #25 on: May 18, 2024, 05:53:48 AM »

Offline hiro

  • Hero Member
  • *****
  • Posts: 1229
Re: upx for large executables?
« Reply #26 on: May 18, 2024, 08:12:12 AM »
ah, you did your homework a year ago already :O

trying to analyze the results from your table for your example extension:

1) going from gzip at 4k to zstd at 128k potentially cuts the size by about a quarter, at the cost of a bigger change for all existing extensions, giving up the former consistency.
if size isn't super important, i would say in this instance the change isn't worth it at all.
if size is super important, you should probably make a dedicated single file with a high blocksize and strong compression for all your extensions and mydata. for this, you could put a script in your exittc process or so...

2) going from 4k to 128k seems to roughly halve the time needed for decompression, regardless of the compression codec.

3) ram usage stays mostly the same for all tested combinations, so that one is highly irrelevant.

bonus: on the one hand i'm surprised to see no direct benefit for lz4; on the other hand, i think it's not super realistic to load applications and not use them.

i would propose a complete end-to-end test, where you measure the boot time plus loading that webbrowser extension plus how long it takes to automatically navigate the browser to some bloated website...
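something along these lines, perhaps (the extension and binary names and the URL are placeholders; --headless and --dump-dom are standard Chromium flags):

    sync && echo 3 > /proc/sys/vm/drop_caches    # cold cache, like a fresh boot
    time sh -c 'tce-load -i ungoogled-chromium.tcz &&
      chromium --headless --dump-dom https://example.com > /dev/null'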

Offline neonix

  • Wiki Author
  • Sr. Member
  • *****
  • Posts: 391
Re: upx for large executables?
« Reply #27 on: May 20, 2024, 05:56:25 AM »
Can upx compress dll files?

Offline neonix

  • Wiki Author
  • Sr. Member
  • *****
  • Posts: 391
Re: upx for large executables?
« Reply #28 on: May 24, 2024, 06:50:01 AM »
It looks like upx uses its own UCL algorithm, not gzip.

Not many people know that upx can also compress shared libraries and static libraries.

I tried to compress libfltk.so.1.3 with the new version of upx (-9) and got about a 50% reduction. After compression with squashfs, the difference in the new tcz is not great: about 50 KB.

upx --lzma doesn't work with libfltk.so.1.3 because it results in a Trace/breakpoint trap. I also tried to compress the libraries in /lib and /usr/lib with the --android-shlib argument but got a PT_NOTE error.

I also discovered that the Linux kernel supports .ko.xz module compression.
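Since upx packing is lossless, a failed experiment is easy to revert; a short sketch with standard upx flags:

    upx -9 libfltk.so.1.3    # compress at maximum compression level
    upx -t libfltk.so.1.3    # test that the packed file unpacks cleanly
    upx -d libfltk.so.1.3    # decompress back to the original if it misbehaves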

Offline nick65go

  • Hero Member
  • *****
  • Posts: 831
Re: upx for large executables?
« Reply #29 on: May 24, 2024, 04:02:55 PM »
IMHO, we should first define the "environment" where we want to use the "optimizations". UPX compression was intended to shrink executables (initially it was used on Windows for pseudo-protected/obfuscated executables, to hide their resources and code from disassembly/debugging).

If we want "common" scripts etc. for 32-bit and 64-bit (admin hiro style), then the result will be AVERAGE (not optimal) for any specific environment.

1. if we use a powerful environment (SMP: multithreading, multicore, a 64-bit CPU, a fast SSD rather than an HDD, lots of RAM, over 4 GB), then UPX vs. zstd vs. gzip does NOT matter too much. The differences are not worth the effort: the time spent testing/benchmarking, rebuilding TCZs, etc. will never be recovered, even with 100 users and 100 applications. If you do not believe me, then let's try to estimate/agree on how much you hypothetically want to GAIN in speed/size/RAM usage (relative % or absolute values), versus how much you are willing to INVEST in time/money/pride to succeed. So basically, define the target and the budget.

2. if we use a slow 32-bit 486 CPU (tiny core user compatible), no SMP, no multithreading, and slow media like an HDD/CD-ROM, then maybe UPX can be discussed with better success. In this environment the small/minuscule gains should matter, for some particular BIG applications. For an already small tcz, the algorithm or block size does not matter too much anyway.

PS: I hope I did not offend anyone with my comment. For me it is about efficiency: small effort for small reward, or big effort for big reward, but not big effort for small reward. YMMV.