Good afternoon, @nick65go!
The machines in question are "21st Century" - so they go both ways!
As needed, they're booted into an x86 kernel and/or a virtualized one. There's also hope of building these up into DistCC nodes for cross-compiling, but that depends on whether it's justified.
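For the DistCC angle, the wiring itself is light; a minimal sketch, assuming hypothetical hostnames (mini1, mini2) and a 192.168.1.0/24 LAN:

    # On each donor node: start the distcc daemon, restricted to the LAN
    distccd --daemon --allow 192.168.1.0/24

    # On the machine driving the build: list the donors, route the
    # compiler through distcc, and size -j to roughly the total cores
    export DISTCC_HOSTS="mini1 mini2 localhost"
    make -j12 CC="distcc gcc"

The catch for cross-compiling is that every donor needs a matching cross toolchain answering to the same compiler name, which is a big part of the "is it justified" question.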
Here's the process:
There are a number (currently 7) of Dell and HP "mini" workstations (4- and 8-core units, 16 and 32 GB each) intended to handle x86 "on demand."
The 12+ core machines (also currently 7) are Dell Precision multi-Xeon boxes, v4 and above, intended to handle the x86_64 builds "on demand."
There's also an array of RasPi and similar SoCs lined up for the ARM's Race. Since they're much less powerful, more units were dedicated to that job.
In the event the x86 queue falls behind, the Precision machines can be tasked to reboot into x86 mode (working now) or to launch a virtual x86 instance (still being built).
The 4/8-core boxes are set up for the reverse: if x86_64 falls behind, the workstations can boot into x86_64 accordingly.
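On the "virtual x86" piece, a minimal sketch of the idea, assuming KVM is available on the Precisions and the ISO name is a placeholder:

    # Boot a 32-bit builder VM on an x86_64 Precision box. -enable-kvm
    # keeps it near native speed, and 3 GB sits under the practical
    # ceiling of a non-PAE x86 kernel anyway.
    qemu-system-i386 -enable-kvm -smp 4 -m 3G \
        -cdrom core-x86-builder.iso -boot d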
If we utilize 7zz, it has to be compiled for all of the above platforms, and the copy that WE use can be tuned native... but the builders' job is to compile kernels/extensions for the more "general" population. Since each extension has its own list of build dependencies, I'd probably create 7-Zip as a normal extension so the builders can just load it in along with the compiler (instead of having a "special" seven and then a "public" one.)
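The native-vs-general split would just be a compile-flag decision at build time. A rough sketch (7zz's actual makefiles may want the flags passed differently, so treat this as illustrative):

    # Our in-house copy: tune for the exact CPU it runs on
    make CFLAGS="-O2 -march=native" CXXFLAGS="-O2 -march=native"

    # The extension the builders load: conservative baseline that runs
    # on every 4/8-core mini and every Precision alike
    make CFLAGS="-O2 -mtune=generic" CXXFLAGS="-O2 -mtune=generic"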
Building each extension forcefully unloads all existing extensions (save for kernel modules, etc., which would be actively in use) to get as close to a "clean slate" as possible without rebooting. Considering the Precision servers take almost 45 seconds just in P.O.S.T., rebooting isn't desirable if we can avoid it.
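For the curious, the core of that "forceful unload" is just tearing down the loop mounts; a minimal sketch, assuming the stock Tiny Core layout where each loaded .tcz sits under /tmp/tcloop (busy mounts refuse to unmount, which handles the "actively in use" exception for free, though the symlinks into the live filesystem still need separate cleanup):

    # Unmount every loop-mounted extension we can; anything busy
    # (in-use kernel modules, the running compiler, etc.) fails the
    # umount and is simply left loaded.
    for ext in /tmp/tcloop/*; do
        umount "$ext" 2>/dev/null && echo "unloaded: ${ext##*/}"
    done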
UPX vs. the 12+ core machines... no, RAM and cores aren't an issue for x86_64 builds, so the couple megs' difference isn't concerning - and even the "baby boxes" (the x86 minis), as I've come to call them, have enough RAM that we don't have to conserve TOO much, even if we had to boot native x86 with a 3.x GB ceiling. But this entire thread has me pondering a few UPX-related experiments! If UPX does what you've noted, why not UPX-compress virtually every binary in every TCL extension? Given how SquashFS works, to my understanding, a smaller binary footprint would technically allow for a smaller memory spend??
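If I run that experiment, measuring it is straightforward; a sketch, where some-extension.tcz is a placeholder and the mksquashfs block size mirrors the traditional .tcz packing (worth verifying against how your extensions were actually built):

    # Unpack an extension, UPX everything that'll take it, repack,
    # and compare the resulting image sizes
    unsquashfs -d ext-root some-extension.tcz
    find ext-root -type f -perm -u+x -exec upx --best {} \; 2>/dev/null
    mksquashfs ext-root some-extension-upx.tcz -b 4k -noappend
    ls -l some-extension.tcz some-extension-upx.tcz

One wrinkle worth watching: SquashFS already compresses its contents, so UPX-packed binaries may not shrink the image much, and a UPX'd binary unpacks itself into RAM at every launch, so the memory side could cut either way. Hence: experiments.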
In any event, thanks for the feedback and take care!