Tiny Core Linux
Tiny Core Base => TCB Talk => Topic started by: CentralWare on August 13, 2024, 04:34:23 AM
-
Good morning!
@Curaga, @Juanito, @Rich, @Paul_123 and other TCL extension maintainers:
I had to build a few TCL-related extensions over the past couple of weeks, and I kept finding myself hunting for things buried X number of releases back. So I put together a few scripts to help clean out and consolidate v5.x through v15.x tcz/src content (currently x86 and x64 only; I haven't gotten to ARM yet), and I now have a set of lists showing a good number of extensions that have no build content on file. This part of the project, gathering build notes, is PHASE 1 (of 5). The ENTIRE project will be repeated for ARM once x86/64 starts PHASE 3.
For example, @Juanito is listed as the maintainer for "unzip" but the script I put together didn't find anything in tcz/src for it throughout v5~15. (I'm not picking on @Juanito :) unzip just happened to be a random choice from the first run.)
The scripts I ran are far from extensive or perfect in any way, and they do not de-dupe related extensions in any fashion, as this was intended only as a starting point.
Two text/list files are enclosed in the attachment:
The file nosource.lst lists extensions for which we did not find a source directory, build script, etc. in v5~v15.
The file nohomes.lst lists tcz/src items that exist but did not match up exactly to extension filenames (for example, tcz/src/xorg wouldn't see "Xorg-7.7" as a match.)
What I'm hoping to accomplish is to gather build notes for extensions that otherwise have none. Hopefully these notes contain web links to source tarballs, any exports/flags (CC/CXX/etc.), compile and runtime dependencies, anomalies, troubleshooting experiences, etc. pertaining to each given extension - anything we can gather and build upon for the years to come. Granted, there's a good number of source packages that configure > make > install without an ounce of effort, but even so much as a link to sources (www, github, etc.) could save hours of hunting over time. (Yes, some of these links are in filename.tcz.info - I haven't gotten that far, yet! :) )
These build notes will eventually be turned into full compilation scripts which handle version control, configuration, compilation, extension creation, dependency checking, etc. IN PHASE 2.
@Anyone with extension or compilation experience is more than welcome to help chip away at the list in any capacity they can offer! If creating build notes/scripts, please use the naming convention of extension_filename.tcz.build so they line up with the actual TCZ files.
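For the sake of illustration, a build-note file might look something like the sketch below - every field name here is my own placeholder rather than an agreed standard, and the URL is made up:

# mc.tcz.build (hypothetical sketch, not a fixed format)
SRC: https://example.org/mc-4.8.x.tar.xz
CC: gcc -mtune=generic -Os -pipe
DEP: ncurses-dev    (compile-time dependency, one per line)
RUN: ncurses        (runtime dependency)
NOTE: configure with --with-screen=ncurses or it tries to pick up slang

Even a file with nothing but the SRC: line would already be a win.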
For PHASE 3, a group of networked hardware is going to be available online (ranging across numerous ARMv6, v7, v8 and A64 devices, plus quite a few x86 and x64 machines) where someone can download a script template, update it to manage a specific extension, and submit the builder on actual, live equipment. There are more than 3,200 extensions in TCL's 80x86/64 history, and I'm guessing around 10% of those are driver/kernel related, but that still leaves a large number of packages that need attention!
* This will eventually include the kernel, toolchain, busybox and other core components.
-
this will be very beneficial to Tiny Core Linux and will ultimately assist other projects that silently copy/emulate/monitor/watch TCL.
@CentralWare
please keep us posted on progress, as well as any potential ability to accept _donations_of_whatever_might_be_of_assistance.
-
About five years ago a few of us on the forum decided to do something similar. I developed a database for PostgreSQL, some shell scripts to load the data, and some PHP scripts to present a RESTful interface. Unfortunately we couldn't agree on how it should be designed, as one of the team members thought the build system should be purely file based. That, plus the fact that I do back-end web programming and had no desire to write a front end, meant it basically died there. The scripts I have were developed for TCL 8, but they would all probably still work today. If this sounds like something you might be interested in, I could spend a little time getting it going again. If this isn't the direction you have in mind, you won't hurt my feelings if you say no.
-
For what it's worth, mine are all individual scripts that I keep on my git server. At one point I tried to use a few common scripts plus a configuration file for each specific package, but I found that for the packages I build, I could not get enough commonality in the scripts to make it worth my time.
The hardest part is the "vision"
BTW: The build information for some base packages is in the toolchain notes, as they get built very early in the toolchain process and/or are parts of the initrd.
-
If this sounds like something you might be interested in, I could spend a little time getting it going again.
ALL directions are valid directions! Take your LAMP scripts, for example... they're more like a story to you: when you read the story, you personally know how to get from chapter to chapter with the least amount of effort. Someone else comes in and reads your story and they could be entirely lost. I was able to read your scripts rather easily, and if memory serves, I even commended you on your note-taking and thoroughness even when you struggled with MariaDB. Yes, everything that CAN be offered is welcome! Very few things could be considered valueless here... even if a build-note file contained more of a wish list than anything, but still had CC flags, or pointed to a website hosting a GZ file when the project's GIT site had only a master download with no heads or tails to version management... even the URL could save HOURS of wasted work!
The hardest part is the "vision"
BTW: The build information for some base packages is in the toolchain notes, as they get built very early in the toolchain process and/or are parts of the initrd.
I'm guessing your "vision" is what I noted above to AndyJ... one person being able to "see" another person's thoughts, in writing... or be able to "read" another's chicken-scratch to make heads or tails out of it. However, I'm game! I can't afford to bring the crew in and pay their salary based on this particular vision... but down the road may be a different story!
@Everyone: There's no WRONG anything here! Currently there's no "standard" to go by, so there are no templates to follow (yet!) but that's ~ PHASE 4, when we reach out even to some of the application authors and see if they're willing to chip in... you know, when they're testing a new beta or putting up a new release... they had to compile it themselves! "Get your notes together and make 'em available!" (or better yet, grab a template and fill it in!) Even if they build on Debian, or Red Hat, or even Slack... every piece of information offered could be one less step for us to figure out on our own... and SO much less wasted PERSON time and PROCESSOR time. At the end of the day, when this project is complete, it could easily be retrofitted for just about any 'nix out there.
@Paul_123: The toolchain notes are a little scattered as to the WHY portion of a few things (and missing links or /src content for the last half) but I'm guessing there's a reason for the specific order of things. The links I can probably find most of via extension.tcz.info files. Manually... all by myself... at three in the morning... instead of sleeping... (LOL!)
When you see the proposed "templates" you'll see why "all under one roof" really matters. We want to be able to do everything necessary from the command line; from downloading the source "tarball" to stripping and packaging the finished product. ZERO third party content... no dependencies at all which are not required from the extension source. (ie: No Perl unless Perl is needed to create the extension itself. No PHP... not even Bash... I'm trying to put this project together using nothing more than the Common Core.) "...but I created my build using Bash!?" That's rarely ever a problem; converting Bash to (A)sh isn't all that big of a deal; I'm just trying to keep everything do-able via Busybox/Core supported functionality.
What you could help with in the months to come, IF you don't mind - especially with TOOLCHAIN - is to create a list of dependencies at each of your "cd extension" lines (DEP: kernel headers or DEP: glibc-x.xx, one DEP per line would be ideal, but if it's easier to copy/paste 20 into a single line... I'll take it!) The upcoming templates have two dependency lists, one for compilation and a second for runtime, so if extension-1.23.tar.xz needs ncurses(-dev) to compile, that is very helpful information to have up front. YOU know why libstdc is that far down the list, because you've already compiled it and found two or three things it needed before it was able to compile... but "vision" isn't seeing it yet :) How many times have you compiled something just to have to rebuild numerous times because library-xyz is a prerequisite... and then app-abc turns out to be needed after that one? (ESPECIALLY on apps like MariaDB, where even a 2x40 CPU server can take forever to compile, just to get to 97% and... hey! Another dependency!!!) The DEP: lines above can cut much of that hunting out of the equation, especially since a good number of authors don't list their dependencies, co-deps, etc. on their websites, gits, and so on.
Toolchain Example: libstdc gets built well after its gcc parent. I'm GUESSING there's an app/library or two that are built between GCC and LibStdC which makes it necessary to separate things in this fashion? Which ones? I don't know yet... I haven't really dug that deep. However, if I had a DEP: list, I'd know which BUILD TOOLS to load which we've already compiled --- OR, if a couple are missing, I'd know which ones NEED to be compiled, then I'd come back to the current one when they were done. This way, if I was building mc for example, the system could toss out an error saying "Hey, dummy! I need curses and readline before I can do this... sit tight and I'll be back when those are done..."
Back to the LibStdC example... let's say there was a single dependency needed... and it could have been built before EITHER of our intended libraries. If its own DEP: list was already tended to, it could be placed higher up on the priority list and "out of the way" - and the build manager would be responsible for prioritizing extensions based on those DEPs.
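A minimal busybox-ash sketch of how such a build manager might chew through those DEP: lines - build_one, the marker directory and the .tcz.build layout are all hypothetical:

#!/bin/sh
# Build a package's DEP: entries first (recursively), then the package.
# No cycle detection - this is a sketch, not production code.
DONE=/tmp/built
mkdir -p "$DONE"

build_one() {
    [ -e "$DONE/$1" ] && return 0
    # Each dependency line looks like: DEP: ncurses-dev
    grep '^DEP:' "$1.tcz.build" 2>/dev/null | while read -r tag dep; do
        build_one "$dep"    # loop runs in a subshell; marker files still persist
    done
    echo "building $1 ..."  # download/configure/compile/package would go here
    touch "$DONE/$1"
}

build_one mc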
@Paul_123: Another note after reading (and expanding) this thread, again, to be even more long-winded... one of the toolchain goals is JUST the compilation and creation of the extensions (build tools) which MAKE the toolchain possible. Any and all methods to create the initrd, for example, really should be separated into the core tools (which is another monster for the near future.) For right now, the v14 toolchain note file is perfect as I can disregard everything else until after all of the tools compile perfectly as separate scripts.
NOTE: The system that is being built assumes NOTHING UP FRONT. This project's first job is to kill off any and all running extensions to leave the system as fresh and clean as possible, so everything has to be declared. Yes, compiletc is a cheat but since it exists, no harm... no foul. Someday, however, I'd like to see everything listed on its own without macro extensions. Only the CORE remains (unless there are locked files) so we get a clean slate with EVERY build, allowing numerous builds with a guaranteed starting point and the DEP: lists are accurate for any operating system on (most) any platform with our BusyBox and inherent tools as the only (portable) dependencies.
As with TOOLCHAIN, the template's job is also to allow BUILD TOOLS (building extensions specifically to help create other ones, not necessarily creating TCZs for the repo.) Due to the fact that there are dozens of chain apps/libraries it may be necessary to "call it a day" 33% of the way through building the chain... creating build extensions takes the place of /tools BUT similar to TCZ files, can be shut down and loaded fresh the next day with a dedicated partition to mount and a "guaranteed clean" TCL motto. (copy2fs mentality compared to squash mounting will possibly be required... but it's too soon to tell.)
The final stage of this project (PHASE 5) involves creating a client/server backbone (like peer-to-peer networking, but focused on compilations and testing as opposed to file sharing.) We have about 40 Raspberry Pi units (thanks to a TCL member, we now have a RasPi-1 again!) and a dozen or two x86/64 systems which will be the starting point for this part of the project, along with the machines of anyone else out there who wishes to participate by loaning us a thread or more, similar to the VPS (Virtual Private Server) type of operation. This also allows us to test extensions on SO many more physical platforms for debugging, where otherwise we wouldn't be able to "see for ourselves" the outcome of a build - or a failed one, in particular.
Example: We have a TCL user who is trying to use an older AMD Athlon mid-tower and an even older x86 Celeron laptop to run an extension we haven't supported for a good number of years (boinc) and though the extension seems to work perfectly fine on my newer i686 workstation, it crashes on his. PHASE 5 will allow the user to send me an "ID" from his test machine allowing me to single out his specific machine and remotely compile some of the extensions in question on his hardware, allowing me to gather hardware notes and build logs from his specific hardware. PHASE 5 is intended to be completely user controlled and if everything works as the theory predicts, we won't even have to ponder firewalls or routers as it'll be client -> web server managed and most of the time, non-interactive unless we have a situation like his.
As always, thoughts and opinions are welcome! :)
Take care, Peeps!
any potential ability to accept _donations_of_whatever_might_be_of_assistance.
Okay... how does this sound...
You place $1 USD in an envelope and mail it in. (other options would exist for non-US residents for supplies instead of dollars.)
You then find two friends who are willing to repeat the process and those two friends get two more friends each and so on!
YOU spend $1 and a few minutes of your time. What happens afterward is...
$0.80 of each dollar will be used to furnish computer and/or IoT hardware to children ages 5 to 16 (on average) for an introduction to computers, 3D printing, IoT development, robotics and a few other topics under consideration, in a project called "IND001 INTRO DESIGN AND FABRICATION" which is in trials right now (it starts Monday) in our local school district, grades 1 through 10. The school's current funding is dismal (which is expected in a trial run), and the program is only scheduled to run this 2024/25 school year unless the public response is above terrific. Elementary kids have "labs" during the summer months, everyone else during the school year.
$0.12 of each dollar will be used to hopefully entice a few bigger names and companies to stop by and throw a mini "seminar" for the kids throughout the year (mostly to keep their costs to a minimum, not for anyone to think they're going to make it rich! :) )
The remaining $0.08 is planned for maintenance, fuel/shipping, supplies, etc.
If this gets enough attention and we're able to repeat it for 2025/26 we'll be bringing in a camera crew, getting local media involved, etc. so that we can create a curriculum out of the entire year which we'd put out there for the world to share in. Again, it's hoped to gain traction and if we're lucky, other schools across the planet will pick up on the program!
-
reminds me of Ken Starks and his many efforts.
for your convenience, and respectful of your limited time, links included (although dated, and not sure if Ken is still around anymore):
https://www.heliosinitiative.org/help.html
https://fossforce.com/2021/11/ken-starks-hangs-up-his-spurs-at-reglue/
https://yourswryly.blogspot.com/2020/01/broken.html
https://linuxlock.blogspot.com/
https://techaeris.com/2014/05/01/lucky-break-puts-technology-hands-disadvantaged-kids/
https://lwn.net/Articles/437057/
even more dated but from one of Ken's donors:
https://linuxlock.blogspot.com/2013/06/one-for-money.html#comment-6245728272361132352
https://thomasaknight.com/blog/15/
have never met Ken but very highly respect his efforts and tenacity
-
also, since we're talking about "schools" and subsequently "students" these by Nadia Asparouhova are definitely thought-provoking
https://nayafia.substack.com/p/protecting-our-attention
(found via her rss feed: https://nayafia.substack.com/feed)
referenced in above commentary:
https://arenamag.com/2024/07/31/playing-with-guns-and-phones/
-
The toolchain order is based on the Linux From Scratch documentation.
-
I am not a programmer, but I'd recommend taking inspiration from the well-proven PKGBUILD (package build) scripts from Arch Linux or Alpine Linux. Yes, they have a DATABASE as a back-end (optionally with a crypto key for validation). This allows them to see many important FIELDS (URL, version, dependency packages, etc.), something like TC .info + .dep files.
Plus, on the Arch Linux web site you can see (for any package) the list of files in the package (aka TCZ) but also the library dependencies (*.so*). It is OK to have a new original vision and a new template format, but there is no harm in taking inspiration from the "competitors".
In the end, neither Arch Linux nor Alpine Linux [nor Debian, openSUSE, etc.] develops its own software (except for its own front-end package MANAGER); instead they merely package the upstream-developed software AND sometimes apply their specific PATCHES. Their main servers' workload is all about packaging with scripts.
-
reminds me of Ken Starks and his many efforts.
@Gadget42: Thanks for the reading material!! I promise I'll look into it when time permits; though as of last night and the universe's ultimate gut-punch a parent could ever receive regarding the word cancer, it might be a minute. Virtually every medical journal out there states the words "...when caught and treated early..." yet these asinine physicians today all seem to be in a daze as if to say "...eh, I'll get around to it if and when I feel like it!" My kid has more ambition and he's a Nintendo addict! ("...ooowwwww! Dang, Dad, what is that stuff?!?!" the child screams as the window dressing is pulled to the side. "It's called sunlight, my son. GET OUT AND GET SOME!" Just obviously not too much these days! Maybe these kids know something I don't!?)
@nick65go: PKBUILD has been added to my journal for investigation; thank you! As just mentioned, it might be a minute as I'm about to go into Rambo + Terminator mode, ready to storm the medical community with one-liners starting with "...DO your damn job!" and ending with "I'll be back!" :)
@Paul_123: LFS documentation... understood. Thanks!
As for "school" stuff... my teen (above notes) starts school Monday and they have him in a class that focuses around a machine shop environment (lathe/mill/press/welders/etc.) and I laughed during orientation last night saying "...if you show promise with the concept of machining and with safety guidelines that "I" would approve, I'll give you a set of keys to the CNCs, 3D printers and laser rigs!" and his eyes just lit up. My six year old then broke into the conversation with "What about me, Daddy?"
...
...
"Anyone hungry??"
-
mentioning the "big C" reminded of:
https://jakeseliger.com/2024/08/04/starting-hospice-the-end/
via Bradley Taunt:
https://btxx.org/posts/perspective/
re: machining/machinist/CNC/etc, my niece's husband does that sort of thing for Blue Origin(seems good machinists are few and far between...go figure)
-
It is OK to have a new original vision and a new template format, but there is no harm in taking inspiration from the "competitors".
The beauty of being "me" is that I have no "competitors."
I see everyone as friends with similar interests.
LOL - not everyone carries that same mentality, but hey...
Linus could have said "Closed Source!"
Gates could have said "Free!"
Jobs could have said "...apples AND oranges..."
Oh, what the world would be like today... :P
-
@Paul_123: There's a reason I keep an almost-military haircut... it's too short to pull out when frustrated, and LFS led me RIGHT down that path! :)
GIT the initial build content for alfs... check
Mount a RAID-5 based share to do all of the work under... check
Move the build content into its new home and run "make" to configure it... check
GET TO LINE 19 and CRASH... check!
(come to find out, A/LFS doesn't LIKE being on a share, as it can't create links in the fashion it wants to for some reason)
One hour, nineteen minutes flushed down the loo...
Regardless, she's downloading all of the support sources as we speak, finally, and I'm curious to see what awaits!
-
@nick65go:
Plus, on the Arch Linux web site you can see...
Arch looks very similar to what we're building (and very promising!); ours is just more elaborate (which it needs to be, considering everything it's supporting.)
Granted, the first app I usually pick on is Midnight Commander (I don't know why... it's just been this way since Norton) and I was blown back when I couldn't find it on Arch/sources/packages :)
(It does seem to be in /community... but still, she's LONG time tested... should be in the home extensions!! :) )
Alpine, however, I haven't completely traced out their repository (haven't found sources, but found binaries) so that's a mission for another day.
@gadget42: Jake's situation is heart-wrenching... my logic screams, though "...what the :-X is he doing getting Bess prego again when he knows things are far from perfect??" Regardless, our prayers go out to them.
-
@CentralWare: For Alpine Linux, I usually go first to https://pkgs.alpinelinux.org/packages
, then filter by "package name" and optionally "ARCH"
, so for MC you arrive at https://pkgs.alpinelinux.org/package/edge/main/x86_64/mc
; and then click on "Git repository" and arrive at the source: https://git.alpinelinux.org/aports/tree/main/mc/APKBUILD
-
@CentralWare: for Arch Linux, first you go to https://archlinux.org/packages/
in the field "Keyword" just type "mc", for Midnight Commander, to filter the results.
then in the field "Name" just click on it ("mc") and it is done.
if you want the source package, then click (at top left) on "Source Files" and you arrive at its PKGBUILD:
https://gitlab.archlinux.org/archlinux/packaging/packages/mc/-/blob/main/PKGBUILD?ref_type=heads
-
+1 for something like aur PKGBUILD, it's comfy and familiar
-
@CentralWare: for Arch Linux...
Arch is already complete. I have a copy here (62GB!!!) on one of the file servers - the sources plus community (5,000+ builds) - and for just build scripts... this puppy is huge!
Alpine's APKBUILD may come in handy for those packages which weren't on Arch, or failed using Arch's scripts, or failed with our own build scripts, etc. (There's too much uncertain navigating to make it worthwhile in an automated fashion - which is our endgame. It's also very likely why Alpine doesn't share their GIT as openly as Arch does (it must be accessed via their web interface), BUT... I'm grateful nonetheless!) I haven't tried to do a git clone of Alpine yet (it's being scanned right now), but having those available without having to traverse their interface could be very worthwhile, too.
Update: Alpine is flagged to be used "as needed" - their tested script list is a lot smaller than I had imagined. It's broken down into three categories, MAIN, COMMUNITY and TESTING, where it looks like more than half of their extensions are in testing. (Midnight Commander, though, is part of their main releases!!!!) Don't get me wrong - it's still very valuable and I'm grateful to have access to it, but "testing" tells me possibly "unknown status", so those are seen as "notes" more than scripts.
@yvs: PKGBUILD itself is a BASH system, and we'll likely avoid Bash-specific constructs, but the command structure will possibly be similar. The goal here is to avoid commands/functions/etc. that are likely to fail under different shells, and to use third party apps (like grep, awk, etc.) as little as possible without causing bloat or lag by scripting around them. For example, Bash brace expansion lets you pack a list of alternatives into one command, such as {item1,item2,item3}, whereas this fails in Ash. We then have to write it in a fashion where it works in both; in this case that may mean three separate commands (or a loop), which takes longer to write but should add little or no computing time. I want the end product to function in most any environment on most any 'nix foundation.
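A small illustration of that kind of portability trap (the file names are made up):

# Bash brace expansion - fine in bash, fails in busybox ash:
cp config.{json,yaml,toml} /backup/

# Portable POSIX/ash equivalent - a loop instead of the expansion:
for ext in json yaml toml; do
    cp "config.$ext" /backup/
done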
LMAO... Alpine's APKBUILD scripts are almost identical to Arch's. In fact, if it weren't for changed names here and there, I'd say they were the same source.
The only flaw shared by Arch and Alpine is that neither of them has version management.
For Example: filename-2.3.4.extension may have a dependency on OpenSSH versions 1.1.0 through 1.1.1-J, whereas OpenSSH 1.1.1-M and newer won't work for that release. Neither Alpine nor Arch looks as though it takes versioning into account, which means some things are bound to break... which is fine... it's why this project was launched! We'll likely use Tiny Core's version history to create ball-park entries for this (ie: to build MC version 1.2.3.4, which was found, let's say, in TCL v10.x, we can check the dependencies found in that release and see what their versions were at the time. It's not ideal... but it should work for now, and it can be automated!)
-
The goal here is to avoid commands/functions/etc. that are likely to fail under different shells, and to use third party apps (like grep, awk, etc.) as little as possible without causing bloat or lag by scripting around them.
in a totally declarative way? Kinda a tradeoff between predictability and flexibility. There could be solutions in between, something like OBS services.
I want the end product to function in most any environment on most any 'nix foundation.
Usually it's enough to have a working solution for a target distro. If that's about a universal solution... idk, I've not seen any popular ones (maybe because of the cost of the tradeoffs on the way to achieving that).
For Example: filename-2.3.4.extension may have a dependency on OpenSSH versions 1.1.0 through 1.1.1-J, whereas OpenSSH 1.1.1-M and newer won't work for that release.
PKGBUILD #4.1
Version restrictions can be specified with comparison operators; if multiple restrictions are needed, the dependency can be repeated for each
-
PKGBUILD #4.1
Version restrictions can be specified with comparison operators; if multiple restrictions are needed, the dependency can be repeated for each
Based on the limited number of build scripts I've looked at so far from Arch and Alpine, I haven't come across anything (yet) that looks restrictive, but that's not to say they won't show up in the trials this week! I'm also looking forward to seeing how they handle drivers and other similar kernel-interactive packages!
Complete! The local archives for both Arch and Alpine have finished! Now we can start having some fun!
PKGBUILD... for a program that's somewhat famous for building and compiling software, it's rather elusive where the source code FOR PKGBUILD lives! :) I was trying to determine whether PKGBUILD was a script system or a binary... I still cannot say for certain, but I was able to find pkgbuild-assistant, which was C source, so I'm going to just assume pkgbuild is too and leave well enough alone. (Compiled binaries tell me there are different releases for each platform they're used on, whereas scripts "tend" to be more universal... so we won't be borrowing anything directly from pkgbuild, BUT if I can find the source somewhere, I can look within to get command-line calls and internal logic without having to actually install Arch/Alpine just to have access to it.)
-
Hi CentralWare
... PKGBUILD... for a program that's somewhat famous for building and compiling software... it's rather elusive where the source code FOR PKGBUILD lives! :) I was trying to determine whether PKGBUILD was a script system or a binary... still cannot say for certain, ...
Have you seen these:
https://wiki.archlinux.org/title/PKGBUILD
https://wiki.archlinux.org/title/Makepkg
I only briefly skimmed those Wiki links, but the impression I got was that a PKGBUILD serves as a guide for makepkg. Kind of like a Makefile for make.
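For reference, a minimal PKGBUILD skeleton looks roughly like this (the version number and source URL are illustrative, not real):

pkgname=mc
pkgver=4.8.31
pkgrel=1
pkgdesc="Midnight Commander"
arch=('x86_64')
url="https://midnight-commander.org"
license=('GPL')
depends=('ncurses')
source=("https://example.org/$pkgname-$pkgver.tar.xz")
sha256sums=('SKIP')

build() {
    cd "$pkgname-$pkgver"
    ./configure --prefix=/usr
    make
}

package() {
    cd "$pkgname-$pkgver"
    make DESTDIR="$pkgdir" install
}

makepkg sources this file and drives the download/build/package steps from it.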
-
Hi CentralWare
Also found this PKGBUILD template on github:
https://gist.github.com/valeth/f94e42cd9ecf76034ef7
-
I don't know if PKGBUILD is better suited to making TC extensions, but I described using Pkgsrc in TC (http://www.ombertech.com/cnk/tinycore/pkgsrc_guide.htm) a year ago. It's a similar sort of thing for doing scripted builds from source code (which break too often for my taste when things change).
-
I don't know if PKGBUILD is better suited...
When it comes to change, I don't think there is such a thing as a "best way" or even a "better way" but here's my take on it:
PKGBUILD, pkgsrc, etc. all reach their destinations on the assumption that nothing has changed. For a good number of extensions, this likely holds true.
MY goal here is to take (approx) 6,000 individual extensions and run "logical" build scripts which eventually expand into AI (Automated Ingenuity - the other one doesn't really exist :) )
Example: (I'm using dropbear because it currently doesn't require secondary dependencies)
Let's say 01-01-2024 dropbear_1.2.3.src.tar.gz compiles perfectly using glibc_a.b.c and gcc_d.e.f, make_1.2.3, etc. - we then have a documented history of successes.
On 01-01-2025 our builder throws up a flag and tells us the newest stable release doesn't compile using newer ingredients as a foundation. We compare notes to find the sore spot(s):
Dropbear's build script (its filename) is sent to the Builder Network with a Research flag, meaning that if/when it fails to compile, the network attempts a dozen or more combinations to see whether the NEW dropbear compiles with a slightly downgraded foundation, or whether the OLDER dropbear compiles with our newest foundation members.
- First, attempt to build the most recent successful, stable dropbear using the newer foundation items. Leave notes/findings for a developer to investigate once successful
- IF FAILED, downgrade each foundation item until we're successful again - leave notes for a developer to investigate
This is basically what the maintainer would be responsible for - updating scripts when change demands it of us; in this case, given a head start plus notes on what was already attempted and what the results of those attempts were (a rough sketch of the loop follows below).
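Something like this, as a minimal sketch - try_build, log and the version list are all invented for illustration:

#!/bin/sh
# Research flag: retry the newest dropbear against progressively older
# foundations, recording every attempt for the maintainer.
for gcc_ver in 13.2 12.3 11.4; do
    if try_build dropbear-2024.85 gcc="$gcc_ver"; then
        log "OK: dropbear-2024.85 built against gcc $gcc_ver"
        break
    fi
    log "FAIL: dropbear-2024.85 against gcc $gcc_ver"
done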
PRO: With the automation hardware system we're building, a ton of maintainer hours is expected to be alleviated. (This also costs $, so we're working on methods to cut this as well)
CON: There are still going to be apps which require hand-holding every so often to keep things compliant with our system. It's not yet "completely" hands free! It may never be.
PRO: The final stage of the project will be us creating a peer-to-peer-like volunteer network to assist with the cost of compiling (behind the scenes)
CON: ...assuming we can forcefully maintain security procedures as hacked extensions defeat us doing any of this in the first place.
@Rich: Yes, I have seen those pages/links and have already torn apart a hand-made template based on live source packages. There are exported variables, and I need to find out how they get from PKG to the build scripts, or however it works within arch/alpine/etc. (Thank you, though - every angle is appreciated!)
Example: $CHOST, $CBUILD, $CFLAGS, $CPPFLAGS and others are randomly spotted throughout some of their extensions, but PKG looks to be what creates these exports, so knowing where some of these defaults come from (and what their preferred defaults would be) would help me "see" what they were thinking when PKG was created. I have no clue as to whether there is cross-compiling going on or whether all of these scripts are executed on end-user machines only... and since I've never had reason to tinker with Arch/Alpine, I've never had reason to break it apart before.
Here's a kicker: "A PKGBUILD is a Bash shell script containing the build information required by Arch Linux packages." Terminology on G00gle is the same way... discussing the shell script(s) AS if they WERE pkgbuilds, so I'm going to have to launch a virtual Arch/Alpine and do some open-heart surgery to figure it all out, with front row seats.
"makepkg is a script to automate the building of packages. The requirements for using the script are a build-capable Unix platform and a PKGBUILD."
Again, they call makepkg a script... and they talk about "...and a PKGBUILD" which makes me assume these (https://archlinux.org/packages/) are PKGBUILDs to them.
If so, PKGBUILD <-- MakePKG <-- PacMan looks to be the method of the madness!
If anyone has arch/alpine installed, send in a copy of makepkg if you would! :) I haven't found a link to it yet and it doesn't spawn results on arch or alpine.
I take that back, I MAY have found something (https://github.com/fusion809/package-management/blob/master/makepkg). LOL - all 2,400 lines of it.
-
I don't know if PKGBUILD is better suited to making TC extensions, but I described using Pkgsrc in TC (http://www.ombertech.com/cnk/tinycore/pkgsrc_guide.htm) a year ago. It's a similar sort of thing for doing scripted builds from source code (which break too often for my taste when things change).
pkgsrc (from NetBSD) and ports (from FreeBSD) are good, but they're based on a bunch of BSD make files, and mostly target those systems
% uname -sr
NetBSD 10.0
% ls -l /usr/pkgsrc/mk/**/*(.) | wc -l
668
for pkgsrc there's also wip (work-in-progress) for non-system repositories.
And if I got it correctly, some mix of concepts from pkgsrc and PKGBUILD is used on Void Linux with xbps
-
@CentralWare: For PHASE 1 of the project, you could gain speed using STATICALLY LINKED TOOLS - I mean ash/bash, sed, grep, awk, diff, patch, make. A few reasons:
- the new scripts for TC 16.x should run with independent tools, NOT depending on the LIBC version of the TC root.fs. There can be new bugs in new versions, or changed argument syntax, etc. Better to use the SAME VERSION of the tools for the same ARCH (ex: x86). Ex: Arch Linux did not change their packaging tools for years, even with new kernel + libc, etc.
- You bypass (for the time being) the syntax differences between ASH and BASH; you can concentrate on other aspects of the scripts.
- some tools are faster than others (ex: GAWK is maybe 5x faster than busybox awk). You could compile (for ARCH=x86) on a farm of AMD ZEN5 servers (with an ARCH=x86_64 base).
- the scripts COULD be (nearly) the same, invariant of glibc/musl, by changing just the CFLAGS in/out of the scripts.
PS: I suggest not biting off too much in one step; better version 1 and test, then version 2 and test/measure. Because of the Pareto principle: you can gain almost 80% of the final result with just 20% of the effort; the rest is... tuning and diminishing returns.
-
You can chroot into any mini-root and have a live experience of how their packaging works, or compare it with your future TC scripts.
mini-root for Alpine Linux: go to https://www.alpinelinux.org/downloads/
and download https://dl-cdn.alpinelinux.org/alpine/v3.20/releases/x86_64/alpine-netboot-3.20.2-x86_64.tar.gz (3,409 KB).
mini-root for Arch Linux: from https://mirror.cmt.de/archlinux/iso/2024.08.01/ download archlinux-bootstrap-x86_64.tar.zst (113,189 KB).
PS: It is amazing how Arch Linux's PACMAN downloads/upgrades 10+ packages (tar.zst) in PARALLEL if, in /etc/pacman.conf, section [options], you set ParallelDownloads = 10. Sorry, I cheat a little because I have a 12-thread APU and my pacman is from CachyOS (arch=x86-64-v3). But if your machine is modern, you could do it too, at least to download/compile; what matters is the final result being correct, not how you got it.
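The stanza in question (the option exists in pacman 6.0 and later):

[options]
ParallelDownloads = 10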
-
It seems that the "theory" is simple:
https://wiki.archlinux.org/title/Arch_packaging_standards#Makepkg_duties
When makepkg is used to build a package, it does the following automatically:
1. Checks if package dependencies and makedepends are installed
2. Downloads source files from servers
3. Checks the integrity of source files
4. Unpacks source files
5. Does any necessary patching
6. Builds the software and installs it in a fake root
7. Strips symbols from binaries
8. Strips debugging symbols from libraries
9. Compresses manual and/or info pages
10. Generates the package meta file which is included with each package
11. Compresses the fake root into the package file
12. Stores the package file in the configured destination directory (i.e. the current working directory by default)
FYI: https://reproducible-builds.org/
"Whilst anyone may inspect the source code [of free and open source software] for malicious flaws, most software is distributed pre-compiled with no method to confirm whether they correspond...
The motivation behind the Reproducible Builds project is therefore to allow verification that no vulnerabilities or backdoors have been introduced during this compilation process."
https://reproducible-builds.org/who/projects/
ex: Alpine Linux, Arch Linux, Fedora, Debian, openSUSE;
wow, just a small bunch of distros (at most 37, out of 269 shameless clones) take security as a concern; all the others just pay lip service.
-
"Simple" broken down
1. Checks if package dependencies and makedepends are installed - YES, we do similar
2. Downloads source files from servers - YES
3. Checks the integrity of source files - Assuming signatures, not authenticity*
4. Unpacks source files - YES
5. Does any necessary patching - It just runs existing/instructed/included patch files
6. Builds the software and installs it in a fake root - In an environment OTHER than TCL**
6. Strips symbols from binaries - Same as us
7. Strips debugging symbols from libraries - Same as us
...
* There are some packages out there with numerous "maintainers" or authors... it's sometimes hard to determine genuine sources of sources.
** TCL standard core doesn't necessarily play nice with chroot outside of the root/tc accounts.
The above also implies running yet another third-party application (pacman > makepkg > PKGBUILD) - something else where, if changes are made by their authors/maintainers, we have to rebuild everything to compensate. This is why we're taking as much out of the equation as possible.
- some tools are faster than others (ex: GAWK is maybe 5x faster than busybox awk). You could compile (for ARCH=x86) on a farm of AMD ZEN5 servers (with an ARCH=x86_64 base).
Are these tools foolproof for compiling ARM and other processors? Or are you suggesting buying a farm of AMD ZEN5 (rather specific??) just to suit x86/64?
I never said anything about cranking out an entire repository in a day. Speed is nice; it's not vital. Success is.
"The AMD Zen 5 release date is August 15, 2024." Feel free to donate a few 9950X to the farm! I'll even buy the motherboards, RAM, etc.!
-
@CentralWare: Hi, maybe I should clarify a few things, even at the risk of repeating myself. I've already provided pieces of this personal info, but maybe they are split across many forum categories.
I am not a zealot (stubborn fan) of any particular distros (Linux kind or not). I like many of them (KolibriOS, WinXP-64, Win11), and I even love a few of them (Tiny Core, Alpine Linux, Arch Linux). But as in life/love, nothing is perfect to my taste, yet I accept compromise, because otherwise I would be upset most of the time. So, I prefer a laptop machine and an x86-64 APU (because of my laziness and their affordable price - for now, for me), plus I like a compact machine (no desktop + monitor in my small "house") and a silent one (no fan for CPU/GPU). For the time being I am not focused on the ARM architecture, sorry.
I use the computer for its software, not for its basic blocks (OS, kernel + drivers); like I use a car to move from A to B - it does not matter the color, or the engine type.
I provided just a few ideas; sorry if they do not match your preferences/goals. The long-surviving Linux distros have tested and learned some efficient ways to do things for their fans/contributors. Some distros even try to please a larger (non-contributing) audience, such as Ubuntu. Each of them has good ideas, but not the same ones implemented all over.
As software becomes more bloated, thanks to powerful and (relatively) cheap hardware, maybe it's time to re-think and collect the best ideas from each of those distros. Even today there is no consensus on the type of package (NixOS, snap, flatpak) or the compression type (tar.gz, tar.zst), because their target audiences are different. Some distros want compatibility with older CPUs (x86) while others focus on the performance (CachyOS) of modern machines (ex: UEFI + x86-64-v4).
I am looking forward to testing/using the new Tiny Core software and learning a few more new things in my spare time. I accept that I cannot change the world, but I can change "my world" as I see it through my pink-coloured glasses :)
-
For the time being I am not focused on the ARM architecture, sorry.
YOU do not have to concern yourself with ARM, nor does probably half the planet. This is my undertaking... my challenge (aka: my problem! :) ). So on the first day, when I chose to pull out the first blank page of virtual paper to start on this novel, I had to consider all of the different materials that would go into the making of this work of art - and in the end, with the numerous hands that will go into this project, it will in fact end up being a true work of art! Granted, in the end very few people will be able to take a few steps back and gaze through said pink glasses (or rose, or whatever color one chooses!) Optimism is part of a universal language I prefer to speak. :)
This project is not intended solely for Tiny Core Linux, but is more so intended for all 'nix brethren under processors and drivers we can support thus far.
This project is not intended as much for end-users as it is for distro staff (who in turn "feed" their end users...) and software authors, including brand new ones.
OH... It's 'nix based only. Sorry, everyone, I'm too old to clean up after and build new windows!
Every window I know of that has ever been constructed, eventually, leaks!™
Bloated: That's not me... that's more distro level. We're the blacksmiths... we just make the tools.
"...sorry if they do not match your preferences..." NO... First, there's nothing to be sorry about. (Asking someone to chuck $20k or so for a couple dozen "just released" processors/computers will flip my sarcasm switch regardless - especially when it's my bank that's financing it!) You did say a "farm" of machines - which to me, starts around 20 or so and works its way up. People do tend to be avid shoppers when they're spending someone else's money! :)
What triggered my interest in building a project like this was, for example, a comment @Paul_123 made a number of months ago, where he spoke about his already overwhelming workload and being tasked with even more responsibility. (You'll notice the "Retired" below "CentralWare" - it has something to do with it, but is far from being the only piece.) Paul is one of the many, many people who spend quite a few hours, hundreds even, tending to some of the mundane things that everyone else generally takes for granted, most of the time oblivious that they even exist.
If I showed up with a few thousand already-compiled binary ZIP files (where the build scripts for each were public domain, and the binaries had a paper trail of everything involved in their creation), @Paul_123 and the rest of the TCL staff and maintainers could spend a short while perfecting an ARC2TCZ script to their liking, converting those ZIP files into something Tiny Core Linux can use for its repository, and automating the entire thing, reducing those wasted hours. The Ubuntu, Slack, Debian, etc. crews could come in later and do the same if they wished, and so on. I don't care WHO uses it... I just hope they help keep it alive after its infancy, as I won't live forever, and I've seen first hand here at TCL what happens when an admin/moderator/host/maintainer vanishes, quits, dies... potential chaos that can go on for years.
"Hey, what about five to ten years from now when a new type of processor is released?"
I've paved the way attempting to put some method of standardization together... to bring a little bit of calm to the chaos of attempting to read the minds of thousands of software authors and incorporate "change" as though it were a friend - or at least a necessary evil. There are patches which pertain to only a given distribution... and there are patches that fix quirks with a given software application - we'll eventually grow to where we'll be able to support both. When there's a new processor to add, though, "Here you go, peeps! You have the building blocks! Make it so!" LOL - I can't be expected to do everything! :p
Kernels and Cores (initrd) are a part of my personal quest for Tiny, but they're more of a side gig and testing grounds for this project (what I've lovingly named SimpleNIX; Keep It Simple S___ sounded too creepy when you abbreviate it to KissNix :D ) and each distro has their own recipe for both which are so painfully different I couldn't begin to see how I could accommodate an automation system (other than toolchain apps, possibly) to please that kind of an audience.
Notice: This project will not be able to comply to every distro's way of doing things but should be able to provide binary content packages where the distro's methods can be met with some basic scripting.
@nick65go: I wouldn't curse you with my home "computer"; it was built right before the COVID lock-down here in the States, on the assumption a "lock-down" was going to take place and we needed a way to work from home, since initially the US Government didn't see us as "First Responders", so we were forcefully closed like most everyone else. Until sh*t of theirs started breaking. If it wasn't for this rig and three others like it, we probably wouldn't have survived the lock-down. And if it weren't for these rigs, I wouldn't have been diagnosed with COVID five separate times and wouldn't now be dealing with the long-term effects of being a First Responder. Having a smaller abode with a laptop, a tablet or whatever makes it worthwhile for you is actually envied... I'm here with a family of five where four of the five are truly, clinically addicted to digital... whereas personally... I'd rather walk away from it.
All this... to avoid the COVID shut-down. I'm not really certain it was worth it.
BUT... take all of this and think of this project's goal... to help people stop wasting precious, irreplaceable time...
THAT is what makes it... worth it.
Okay, I'm taking a TCL break for a bit to get some real work done on this instead of rambling on a forum. If you guys need anything, shoot me an email. If you want to lend a hand, shoot me an email. If you want to buy the next pot o'coffee, shoot me an email. Okay... too much shooting going on... take care!
-
UPDATE:
16,004 extensions found between Alpine, Arch and Tiny Core
- About a dozen or so will be Tiny Core module related (filename-KERNEL.tcz) (oops!)
- A good chunk of these will be secondary extensions (ie: filename, filename-dev, filename-doc, etc.)
- Many will be Related and Unrelated extensions (alsa, alsa-mixer, etc.) and some Relateds will be from the same source package.
- Some of these extensions will be duplicates due to naming (ie: xorg versus Xorg)
From A to Z we're almost at the end of the F's and she's only been running ~10 minutes
Due to the extent of the unknowns, we're not filtering builder content at all. (ie: When searching for "alsa-mixer" it's reasonably precise as-is; when searching for "alsa", both alsa and alsa-mixer build content will be included in the "alsa" directory. This is perfectly fine this early on, as the sorting logic would be astronomical.)
Due to the many, many filename naming conventions and versioning methods, we may have to get creative with the resulting repository. :)
Due to the number of naming issues which caused problems with our trimming methods (filename-ver.si.on.extension) - odd long-file-names-1.2.3, or versions which are not period based (such as dropbear-2024-85.tcz) - there will be a number of extension directory names which will have to be manually cleaned up afterward and likely merged with others created by TCL's info.lst
I'd estimate at least a couple/few dozen will be empty references, where Tiny Core has a tcz/src/extension directory containing source code but no builder info, which may also not have an exact match on A/A. These will have to be created using an empty template, but compared to the total... this is nothing!
-
UPDATE:
- From A to Z on the second pass we're currently at py3-x* where the repository is being filled by Alpine, Arch and Tiny content where applicable.
- There are a number of "empty" extensions which I'm guessing are probably Tiny's macro extensions (such as compile-tc which is just .dep/.info files) and AA's that are similar.
- Tiny's extensions are being scoured from version 5.x to current, on both PC and ARM/AARCH platforms. dCore doesn't fit the project "as is", and PowerPC is a maybe-someday at the moment: we don't stock PPCs, so unless someone donated a few minis we'd have no hardware to test or build on, and I don't see myself investing in ~20-year-old Macs for a crowd which may not even exist any longer.
- Tiny's kernel content is scrubbed (ie: anything-tinycore and anything-piCore extensions were removed.) I'm guessing there's some AA content similar to these.
- This pass should be complete sometime tomorrow (Friday), then run through the cleaner and afterward, we'll add empty extensions back into the scanner for one last run before flagging them as Lost Boys.
I'm estimating Monday for commencement of the new builder system, but may need to take 'er to work with me to pass it onto the rack servers.
The current part of the project is being called Search, Rescue and Save, and GIT will have three directories named accordingly - this is the search phase, rescue is the sorting process, and save holds the new scripts that come from it all. Search & Rescue will eventually become empty directories, leaving just the Saved and Lost Boys directories; some actual extensions may be flagged as Lost Boys temporarily if they're problematic, deprecated, etc., to be researched when time permits after the bulk is done.
@Admins and @Builders: please donate a few minutes by offering up your personal choices for DEFAULT link/compile/etc. flags for the architectures of x86, x64, arm6, arm7 and aarch64 which are tried and tested for Core. Granted, most will likely be the same as the next person's... but this is a good time to bring up opinions such as "...but this comes in handy in instances such as..." as we're likely to come across these conditions as we rebuild a large chunk of the 'Nix world! :)
-
CC="gcc -march=i486 -mtune=i686 -Os -pipe" CXX="g++ -march=i486 -mtune=i686 -Os -pipe -fno-exceptions -fno-rtti"
CC="gcc -mtune=generic -Os -pipe" CXX="g++ -mtune=generic -Os -pipe -fno-exceptions -fno-rtti"
CC="gcc -march=armv6zk -mtune=arm1176jzf-s -mfpu=vfp -Os -pipe" CXX="g++ -march=armv6zk -mtune=arm1176jzf-s -mfpu=vfp -Os -pipe -fno-exceptions -fno-rtti
CC="gcc -march=armv8-a+crc -mtune=cortex-a72 -Os -pipe" CXX="g++ -march=armv8-a+crc -mtune=cortex-a72 -Os -pipe -fno-exceptions -fno-rtti"
-flto and -DNDEBUG can also be used, -fno-exceptions and -fno-rtti sometimes fail.
autotools
CC="one of the above" CXX="one of the above" ./configure --prefix=/usr/local --localstatedir=/var --disable-static --libexecdir=/usr/local/lib/pkgname
"find . -name Makefile -type f -exec sed -i 's/-g -O2//g' {} \;" removes "-g -O2" sprinkled through the Makefiles, occasionally it is -O3
cmake
-DCMAKE_C_FLAGS_RELEASE="one of the above" -DCMAKE_CXX_FLAGS_RELEASE="one of the above" -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_LIBDIR=lib [x86_64 only otherwise lib64 is used] -DCMAKE_INSTALL_PREFIX=/usr/local
meson
CC="one of the above" CXX="one of the above" ./configure --prefix=/usr/local --localstatedir=/var --disable-static --libexecdir=/usr/local/lib/pkgname --buildtype=plain
-
What he said. :)
There are a handful of packages that use no automated configure script, which requires manually editing config files, makefiles, etc.
-
As well as Juanito's GCC options, I've been trying -fno-asynchronous-unwind-tables sometimes since this thread (http://forum.tinycorelinux.net/index.php/topic,26375.msg170165.html). When I haven't it's because I forgot.
-
@Juanito: Thank you! (I didn't even have an armv8 of my own yet; I must be getting old and lazy :P )
The "templates" for each supported platform will all have DEFAULT flags which will eventually be tweaked per application, so I figured I'd ask the Great Gurus of Core for theirs to launch with!
There are a handful of packages that use no automated configure script, which requires manually editing config files, makefiles, etc.
@Paul_123: yes, but sometimes we're stuck with "...it is what it is..." OR if/when frustration or repetition makes somebody do something about it! :) Thus the whole purpose of this project!
Not EVERY package we've ever come across will fit into this project as if it were MADE for it... but if we're lucky, maybe some of those authors may join in and make their own contribution!?
Final automated numbers:
14,832 "Searched" Extensions (one or more "anything" files found from Tiny, Alpine or Arch repositories for a given filename wildcard) (See Attachment)
1,250 Lost Boys (Most are likely due to tcz/src directories for a given app simply not existing on the repo from the looks of it)
The attachment is an extension list of non-lost-boys with file counts in the format of [EXTENSION NAME] [ALPN] [ARCH] [TINY]
SOME of the Tiny sources look as though they may be just source tarballs and nothing more; a few I saw whizzing by looked to be just patches... we have our work cut out for us!
-
As well as Juanito's GCC options, I've been trying -fno-asynchronous-unwind-tables sometimes since this thread (http://forum.tinycorelinux.net/index.php/topic,26375.msg170165.html). When I haven't it's because I forgot.
@CNK: Thank you; notation added
-
@Everyone: Good evening!
I'm trying to come up with a reliable, shell only method to determine CPU architecture and looking for suggestions to add checks/double-checks considering the kernel tends to sway results based on what it was compiled for.
For ARM processors, I just used the Revision codes to determine which board was being used (again, this might be swayed by piCore's kernel -- yet to be tested)
For Intel/AMD/IBM processors (ARE there any Cyrix processors still running out there today??) all I have right now are the /proc/cpuinfo processor flags, and the only one I'd call dependable is lm (long mode), which the CPU reports when it's 64-bit capable - tm, for the record, is the thermal monitor flag, not an address-width indicator - so I'm not sure flag testing alone is a reliable determination.
For PPC, I'm not sure yet how to go about hardware detection... /proc/cpuinfo > grep Platform > grep Power* maybe?
From what I just tested, compiler detection is swayed by the kernel (cc -dumpmachine reflects the running kernel, not the hardware's capabilities - I have an x64 running 14x86 and CC says it's a 486 :P) so this isn't any better than flag testing, which doesn't require external software.)
Note: 3rd party apps such as cc, lscpu, etc. all seem to be using the kernel's uname or cpuinfo results to display content; so far we haven't found anything that digs deeper than the obvious, nor have I yet envisioned a WAY to dig deeper myself.
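For x86 at least, the lm flag allows a kernel-independent 64-bit test from plain busybox sh - a minimal sketch, assuming /proc/cpuinfo is mounted:

#!/bin/sh
# The flags line in /proc/cpuinfo comes from the CPU itself (CPUID),
# so a 32-bit kernel on 64-bit hardware still reports 'lm' (long mode).
if grep -qw lm /proc/cpuinfo; then
    echo "CPU is 64-bit capable (can build/run x86 and x86_64)"
else
    echo "CPU is 32-bit only (x86)"
fi
echo "kernel says: $(uname -m)"   # what we are booted into right now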
-
I'm trying to come up with a reliable, shell only method to determine CPU architecture and looking for suggestions to add checks/double-checks considering the kernel tends to sway results based on what it was compiled for.
For ARM processors, I just used the Revision codes to determine which board was being used (again, this might be swayed by piCore's kernel -- yet to be tested)
I haven't looked at the kernel code, but the origin of the revision code for the Pis is the information reported by the VPU firmware that's returned in response to a request from the CPU through the mailbox property interface (https://github.com/raspberrypi/firmware/wiki/Mailbox-property-interface). The Pi revision codes shouldn't change between kernels unless there's a bug. Although I think the kernel generates the text description shown in RPi OS (but not on the versions of PiCore I've tried).
I don't know any specifics for universal x86 or x86_64 CPU identification. DMI would help for newer hardware, but that might need dmidecode.
-
@Everyone: Good evening!
I'm trying to come up with a reliable, shell-only method to determine CPU architecture, and I'm looking for suggestions for checks/double-checks considering the kernel tends to sway results based on what it was compiled for.
If that's for shell conditionals dependent on repository arch, why not `uname -m` for the four TCL15 archs? (something like: i686 -> build for the x86 repository, x86_64 -> x86_64, armv7l -> armhf, ..)
If that's for code optimization (beyond compiler defaults), I wouldn't use any (not much of a performance boost, but really hard to find out when it doesn't work).
-
@yvs:
uname -m running TCL 14 x86 kernel on an Intel i7 responds with i686 - which is "correct enough."
uname -m running TCL 14 x86 kernel on an AMD X2 responds with i486 - which is untrue.
This is why we're looking for ways to check and double-check results in the function so we can be reasonably sure the returned value is at least "kinda' accurate."
The need for this information (and its accuracy) is for compiling - but maybe a bit more than one would assume.
If a machine is x86 and x64 compliant, it means we can reboot the machine between the two kernels as needed to work on software packages from both platforms.
If a machine is armv8 and is backward compatible with armv7 and armv6, the same applies - we can have it natively build apps in all three environments. RasPi5 may be the exception; we'll see just how backward compatible those are when we get ready to launch the AI. For the EC-9100 boards, I think we're good for armv7, but armv6 fails with a panic I haven't bothered investigating yet.
* We're not after speed or performance at this point... we're after success rates.
@CNK: We created a grid of Revision codes dating back to the Pi A/B/Zero processors up through the current Pi5 and PiZero2. It's only lightly tested right now, but the tests we've run so far SEEM to be stable (ie: running piCore13 armv7 on a RasPi4 shows the CPU level (armv8) versus the kernel level (armv7) the way she's supposed to - so far! :) ). The problem arises when it's not a RasPi card we're dealing with. In those cases, so far, I've had to resort to cpuinfo's model name - for example, showing *armv7* - which can be unreliable when the next card shows *Armv7*, or when only the vendor_id's *Cortex A7* hints at the hardware platform.
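To give a feel for the grid, the Pi portion reduces to roughly this (a simplified, lightly-tested sketch; new-style revision codes keep the SoC in bits 12-15, and I'm ignoring the old warranty/overvolt bits):
# decode the SoC family from the cpuinfo Revision code
rev=$(awk '/^Revision/ {print $3; exit}' /proc/cpuinfo)
if [ $(( 0x$rev & 0x800000 )) -ne 0 ]; then
    # new-style code (bit 23 set)
    case $(( (0x$rev >> 12) & 0xf )) in
        0) arch=armv6 ;;    # BCM2835: Pi 1 / Zero
        1) arch=armv7 ;;    # BCM2836: Pi 2
        2) arch=armv8 ;;    # BCM2837: Pi 3 / Zero 2
        3) arch=armv8 ;;    # BCM2711: Pi 4
        4) arch=armv8 ;;    # BCM2712: Pi 5
        *) arch=unknown ;;
    esac
else
    arch=armv6              # old-style code: always BCM2835
fi
echo "Pi CPU level: $arch"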
-
uname -m running TCL 14 x86 kernel on an Intel i7 responds with i686 - which is "correct enough."
uname -m running TCL 14 x86 kernel on an AMD X2 responds with i486 - which is untrue.
Supposing there aren't too many `uname -m` options, they can be grouped for x86, for x86_64, and so on in some table. No?
It was maybe a hundred years ago when I used it, and if I recollect correctly... devel/cpuflags from NetBSD-pkgsrc for Linux used `uname -m` plus the "model name" string from /proc/cpuinfo to choose the CPU architecture from small hardcoded tables.
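e.g. something like this (a sketch - the repository names are the TCL15 ones, the groupings hypothetical):
# group uname -m values into repository archs
case "$(uname -m)" in
    i?86)           repo=x86 ;;
    x86_64)         repo=x86_64 ;;
    armv6l|armv7l)  repo=armhf ;;
    aarch64)        repo=aarch64 ;;
    *)              repo=unknown ;;
esac
echo "$repo"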
-
Hi CentralWare
This is what I use when I set up a build script.
I have this function that I source from another file:
# ---------------------------------------------------------------------------------------- #
GetProcessorType()
{
    PROCESSOR_TYPE=$(uname -m)
    echo "$PROCESSOR_TYPE detected."
    case "$PROCESSOR_TYPE" in
        i686)
            CFLAGS="$FLTO -fuse-linker-plugin -march=i486 -mtune=i686 $OPTIMIZE $SYMBOLS $DEFINES -pipe -Wall -Wextra -fno-plt"
            CXXFLAGS="$FLTO -fuse-linker-plugin -march=i486 -mtune=i686 $OPTIMIZE $SYMBOLS -pipe -Wall -Wextra -fno-exceptions -fno-rtti"
            LDFLAGS="-Wl,-T/usr/local/lib/ldscripts/elf_i386.xbn"
            ;;
        x86_64)
            CFLAGS="$FLTO -fuse-linker-plugin -mtune=generic $OPTIMIZE $SYMBOLS $DEFINES -pipe -Wall -Wextra -fno-plt"
            CXXFLAGS="$FLTO -fuse-linker-plugin -mtune=generic $OPTIMIZE $SYMBOLS -pipe -Wall -Wextra -fno-exceptions -fno-rtti"
            LDFLAGS="-Wl,-T/usr/local/lib/ldscripts/elf_x86_64.xbn"
            ;;
        armv*)
            CFLAGS="-march=armv6zk -mtune=arm1176jzf-s -mfpu=vfp $OPTIMIZE $SYMBOLS $DEFINES -pipe -Wall -Wextra"
            CXXFLAGS="-march=armv6zk -mtune=arm1176jzf-s -mfpu=vfp $OPTIMIZE $SYMBOLS -pipe -Wall -Wextra -fno-exceptions -fno-rtti"
            LDFLAGS="-Wl,-O1"
            ;;
        aarch64)
            CFLAGS="-march=armv8-a+crc -mtune=cortex-a72 $OPTIMIZE $SYMBOLS $DEFINES -pipe -Wall -Wextra"
            CXXFLAGS="-march=armv8-a+crc -mtune=cortex-a72 $OPTIMIZE $SYMBOLS -pipe -Wall -Wextra -fno-exceptions -fno-rtti"
            LDFLAGS="-Wl,-O1"
            ;;
        *)
            echo "$PROCESSOR_TYPE: Unknown processor type. Please add an entry for it in this script."
            exit 1
            ;;
    esac
}
# ---------------------------------------------------------------------------------------- #
Then the build script contains this:
# ---------- Set compiler options.
OPTIMIZE="-Os"
SYMBOLS="-g"
FLTO="-flto"
# Used to pass -D defines such as DEFINES="-DX_DISPLAY_MISSING" for example.
DEFINES=""
GDEBUG="No"
# Uncomment the next line to compile a version that can be run under gdb.
#GDEBUG="Debug"
if [ "$GDEBUG" == "Debug" ]
then
OPTIMIZE="-O0 -ggdb"
# -flto interferes with gdb.
FLTO=""
fi
# ---------- End compiler options.
GetProcessorType
export CFLAGS CXXFLAGS LDFLAGS
----- Snip -----
# Strip programs and libraries
if [ "$GDEBUG" == "No" ]
then
cd $CUR/"$PROGRAM"_all
sudo find . | xargs file | grep "executable" | grep ELF | grep "not stripped" | cut -f 1 -d : | xargs strip --strip-all 2> /dev/null
sudo find . | xargs file | grep "shared object" | grep ELF | grep "not stripped" | cut -f 1 -d : | xargs strip --strip-unneeded 2> /dev/null
cd $CUR
fi
-
The problem on tinycore, at least, is not particularly choosing the cpu, but more ensuring that all 32-bit extensions compile for i486 or armv6, whereas 64-bit doesn't seem to be a problem.
Forcing i486 is becoming increasingly difficult, but armv6 only seems to become a problem when neon is involved.
-
LOL - you guys are awesome!
Okay, let me see if I can shed a little more light on the cause and effect of needing processor types...
Let's say I dedicate a slightly aged Intel i3 into this project.
"I" know it's an x64 and without spending any grey matter, and I'm reasonably certain it'll handle most if not all x86 functions as expected.
Once the machine is plugged into the associated network and TCL is installed with the AI foundation, the box becomes a part of the smog... or cloud... or what ever we want to call it! :)
Now, TCL fires up and goes to do a self examination. It knows nothing about itself except for the kernel that booted and what that kernel sees of itself. This is where CPU_TYPE() becomes important.
TCL logs into the master machine and is given a set of extensions it needs to compile based on what capabilities the machine has.
If it's an x64 machine, it does its handshake with the master machine, and if it's running x64 at that moment but needs to build x86 apps, for example, it reboots with an x86 kernel.
(x86 apps would then be loaded as needed based on the build's specs; x86 and x64 having their own "optional" directories, per se.)
If it were a 32-bit machine and we assumed incorrectly that it had a 64-bit structure... when it went to reboot into an x64 kernel... "Error! Error! Warning Will Robinson!"
THEN... once we're running the appropriate kernel, apps, etc. we'd put cpu associated flags into motion for compiling as shown in @Rich's post.
@yvs: I listed a couple of anomalies with uname -m that came up on odd motherboards, hoping we could think up other methods to fine-tune the results. Yes, there are "tables" out there - but with SO MANY processors over the years, a table would be huge. (ie: CPU World (https://www.cpu-world.com/cgi-bin/CPUID.pl))
@Juanito: I've learned that for a cleaner i486 I had to use older hardware, which takes forever to compile but seems to come closer to the real thing - which is why I kept some old Atom motherboards on the rig. It's not PERFECT, but it seemed to have fewer flaws. The down-side is that my last 486-DX went out the door over a decade ago, as we no longer had requests for it, so "testing" apps is really no longer possible - at least without emulation. Rumor had it a couple years ago that Linus was considering dropping i486 support; I'm not sure what came of it as I didn't dig any deeper.
-
It seems that checking the Long Mode (https://en.wikipedia.org/wiki/Long_mode) flag in the CPUID response ("lm" flag in /proc/cpuinfo) is the common way of telling 32-bit and 64-bit x86 apart.
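In shell, that check might look like this (a minimal sketch):
# "lm" in the cpuinfo flags line means the CPU supports 64-bit long mode,
# even when a 32-bit kernel is running
if grep '^flags' /proc/cpuinfo | grep -qw lm; then
    echo "64-bit capable CPU"
else
    echo "32-bit only"
fi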
-
@CNK:
For ARM processors, I just used the Revision codes to determine which board was being used (again, this might be swayed by piCore's kernel -- yet to be tested)
For Intel/AMD/IBM processors (ARE there any Cyrix processors still running out there today??) all I have right now are processor flags LM (16 bit), TM (32 bit) and RM (64 bit) but I'm not sure these are reliable determinations.
For PPC, I'm not sure yet how to go about hardware detection... /proc/cpuinfo > grep Platform > grep Power* maybe?
We'll experiment accordingly, but it would be nice to get some people with older hardware to join in on the testing (later), as the hardware we're dedicating to this project will all be running headless - save for a few "controllers" tied into a KVM, but we'll call those 99% headless. I don't foresee adding i586 or older boards to the rack, so we're good there.
-
"LM (16 bit)" doesn't make sense to me - you're saying that the LM flag indicates a 16bit CPU? Maybe I'm completely wrong then, but see the Wikipedia link.
-
@CNK: I may have typed the flags in the wrong order when I posted.
I think what I did was type in the flag acronyms without paying attention to their "order" and probably added 16/32/64 after the fact.
Long Mode I vaguely remember being introduced as an AMD64 64-bit flag, so your assessment makes sense. (IF memory serves, ia64 was Intel's - but with oddities.)
TM (and I believe there was a TM2) I'm thinking was Thermal Monitor(ing), which I may have mistaken for a pre-32-bit indicator; I'll have to check.
Real Mode would be 16-bit, dating back to the 8088.
Good catch!
-
Looking for inspiration! :) Two topics...
The first topic is a generalized file extractor based on file suffixes.
extract() {
    FNAME=$1
    # The compression switch must come before -f; otherwise tar treats
    # the switch itself as the archive name.
    case $FNAME in
        *.tar.bz2 | *.bz2 | *.tbz | *.tz2 | *.tb2 | *.tbz2 ) tar -jxvf "$FNAME" || exit 1;;
        *.tar.gz | *.tgz | *.taz ) tar -zxvf "$FNAME" || exit 1;;
        *.tar.lz ) tar --lzip -xvf "$FNAME" || exit 1;;   # busybox tar may lack --lzip
        *.tar.lzma | *.tlz | *.lzma ) tar --lzma -xvf "$FNAME" || exit 1;;
        *.lzop | *.tar.lzo ) tar --lzop -xvf "$FNAME" || exit 1;;   # busybox tar may lack --lzop
        *.tar.xz | *.txz | *.xz ) tar -Jxvf "$FNAME" || exit 1;;
        *.tar.Z | *.tZ | *.taZ ) tar -Zxvf "$FNAME" || exit 1;;   # compress(1) archives
        *.tar.zst | *.tzst ) tar --zstd -xvf "$FNAME" || exit 1;;   # busybox tar may lack --zstd
        *.zip ) unzip "$FNAME" || exit 1;;
        *) echo "${RED}ERROR! ${YELLOW} Unknown Compression For ${FNAME}${NORMAL}"; exit 1 ;;
    esac
}
The goal here is to extract source tarballs which can be packaged in a number of flavors; please feel free to add to this list (along with a command to extract)
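For reference, a typical call (hypothetical tarball name) would just be:
extract mc-4.8.30.tar.xz
...which unpacks into the current directory, or bails out of the build script on failure.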
The second topic... I'm looking for a clean method in sh/ash to replace a string ( REPLACE_ME! ) with the content from a file or variable.
Example: we have a script file named file.sh and a number of lines into this file we have a flag we want to replace in the middle of the script:
#!/bin/sh
## COMMENT 1
## COMMENT 2
REPLACE_ME!
## COMMENT 3
## COMMENT 4
sed couldn't handle it, since $REPLACE could contain any of a number of unescaped characters that break the stream.
Splitting the file into two using awk -F"REPLACE_ME!" didn't pan out well, either: '{print $1}' worked, $2 didn't for some reason.
I ended up using wc to count how many lines the file has, grep -n to find which line number REPLACE_ME! lives on and head/tail to do the job
input.sh
string="This is a long
string of text which can
be on multiple lines"
...leaves me with
#!/bin/sh
## COMMENT 1
## COMMENT 2
string="This is a long string of text which can be on multiple lines"
## COMMENT 3
## COMMENT 4
...unless I encapsulate "${REPLACE}" within double quotes, and then it preserves LF. (Oops!)
It just doesn't feel as though it's the most efficient way - again, within sh/busybox confines.
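For completeness, the head/tail/wc dance boils down to something like this (a simplified sketch; file.sh, file.new and $REPLACE are hypothetical names):
# paste head + replacement + tail around the placeholder line
n=$(grep -n 'REPLACE_ME!' file.sh | head -n 1 | cut -d: -f1)
total=$(wc -l < file.sh)
{ head -n $((n - 1)) file.sh
  printf '%s\n' "$REPLACE"
  tail -n $((total - n)) file.sh
} > file.new
(printf here keeps the embedded LFs that echo/quoting was mangling.)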
Thoughts/suggestions welcome!
-
Doesn't tar read the magic number and use the corresponding option to extract the file?
I never use anything more than xvf to extract files with lots of different compression types.
Maybe there's a difference between busybox and GNU tar?
-
@patrikg: One would assume so, yes, but more times than not -xf (busybox) doesn't know how to manage bz2, for example, unless its trigger is specified, so to play it safe I have to throw all of the triggers/switches into the mix. It can still DO the job... just not without being told. I'm guessing that's why the help text lists the engines without hinting that it knows any better:
-Z (De)compress using compress
-z (De)compress using gzip
-J (De)compress using xz
-j (De)compress using bzip2
--lzma (De)compress using lzma
-a (De)compress based on extension
It's okay though... this project is centered on keeping everything under one roof, where TCL's cut of busybox is the only real dependency, and thus far the project is coming along "swimmingly!" (Which translates to... if the project fails, the computer will likely find itself in a large body of water... :) Swimmingly!)
-
Hi CentralWare
... The second topic... I'm looking for a clean method in sh/ash to replace a string ( REPLACE_ME! ) with the content from a file or variable. ...
Is this what you are looking for:
#!/bin/sh
# Newline variable
NL="
"
needle="This is a long
string of text which can
be on multiple lines"
newstring="I ended up using wc to count how many lines the file has, grep -n to
find which line number REPLACE_ME! lives on and head/tail to do the job"
haystack=""
# Clear out contents of file.
> ReplaceMe.txt
for N in $(seq 1 2)
do
    echo "## COMMENT $N" >> ReplaceMe.txt
done
echo "$needle" >> ReplaceMe.txt
for N in $(seq 3 4)
do
    echo "## COMMENT $N" >> ReplaceMe.txt
done
echo "$needle" >> ReplaceMe.txt
for N in $(seq 5 6)
do
    echo "## COMMENT $N" >> ReplaceMe.txt
done
echo "$NL""This is ReplaceMe.txt"
cat ReplaceMe.txt
# Read the file into a variable.
haystack="$(cat ReplaceMe.txt)"
# Global replace of needle by newstring and save to file.
echo "${haystack//$needle/$newstring}" > ReplaceMe.txt
echo "$NL""This is modified ReplaceMe.txt"
cat ReplaceMe.txt
This is the result:
tc@E310:~/ReplaceMe$ ./ReplaceMe.sh
This is ReplaceMe.txt
## COMMENT 1
## COMMENT 2
This is a long
string of text which can
be on multiple lines
## COMMENT 3
## COMMENT 4
This is a long
string of text which can
be on multiple lines
## COMMENT 5
## COMMENT 6
This is modified ReplaceMe.txt
## COMMENT 1
## COMMENT 2
I ended up using wc to count how many lines the file has, grep -n to
find which line number REPLACE_ME! lives on and head/tail to do the job
## COMMENT 3
## COMMENT 4
I ended up using wc to count how many lines the file has, grep -n to
find which line number REPLACE_ME! lives on and head/tail to do the job
## COMMENT 5
## COMMENT 6
tc@E310:~/ReplaceMe$
-
@Rich: Sorry, guess my description is in need of a little expression.
Envision a filename.tcz.info where we're trying to replace the field content for
Title: [TITLE]
Author: [AUTHOR]
Date: [DATE]
...SED was having a field day, choking on almost every test due to unescaped characters in either the TAG NAME (such as the "[" in [TITLE]) or the replacement string.
For example:
sed -i "s/[TITLE]/$STRING/" filename.tcz.info
SED cries about the s command not being closed - first because of the [, and then because of $STRING, should it contain anything outside the typical alphanumerics.
I created a function which uses head/tail/wc to find the first occurrence of [TITLE] and pastes head+$STRING+tail together skipping the [TITLE] line entirely.
Not pretty, but worked "well enough for the moment" if $STRING contained the entire line of content, not just the value.
In the end, I had to create a tiny encode() function which basically escapes "everything out of the ordinary" and SO FAR, (a-z we're at "D") it seems to be doing 'eh okay.
encode() {
    # Backslash-escape everything that isn't a letter, digit or space so
    # sed treats the replacement string literally (embedded newlines excepted).
    echo "${@}" | sed 's/[^a-zA-Z 0-9]/\\&/g'
}
so now we're escaping the [ in [TITLE] up front and sending $STRING through encode():
sed -i "s/\[TITLE]/$(encode $STRING)/" filename.tcz.info
Reasonably functional... not necessarily perfect for the cause yet. Bear in mind, encode() is being sent anywhere from a word to a half page of content.
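One more angle that might hold up better - letting awk splice the text in with index()/substr(), so neither sed nor a regex ever sees the replacement (an untested sketch, assuming busybox awk's ENVIRON; REPL and the .new filename are hypothetical):
# splice ENVIRON["REPL"] over the first literal [TITLE] on each line
REPL="$STRING" awk '{
    i = index($0, "[TITLE]")
    if (i) print substr($0, 1, i - 1) ENVIRON["REPL"] substr($0, i + 7)
    else print
}' filename.tcz.info > filename.tcz.info.new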
-
Hi CentralWare
How about something like this:
#!/bin/sh
TITLE="[TITLE]"
Title="MyExtension.tcz"
AUTHOR="[AUTHOR]"
Author="CentralWare"
DATE="[DATE]"
Date="$(date +"%D")"
# Newline variable
NL="
"
# Clear out contents of files.
> ReplaceMe.txt
> ReplaceMe.tmp
echo "Title: $TITLE
Author: [AUTHOR]
Date: [DATE]
Comment: What a great extension.
Change-log: First version.
Current: $Date" > ReplaceMe.txt
while read -r Line
do
    case $Line in
        *"$TITLE"*)  echo "${Line/"$TITLE"/"$Title"}" >> ReplaceMe.tmp ;;
        *"$AUTHOR"*) echo "${Line/"$AUTHOR"/"$Author"}" >> ReplaceMe.tmp ;;
        *"$DATE"*)   echo "${Line/"$DATE"/"$Date"}" >> ReplaceMe.tmp ;;
        *)           echo "$Line" >> ReplaceMe.tmp ;;
    esac
done < ReplaceMe.txt
echo "$NL""This is ReplaceMe.txt"
cat ReplaceMe.txt
echo "$NL$NL""This is ReplaceMe.tmp"
cat ReplaceMe.tmp
This is the result:
tc@E310:~/ReplaceMe$ ./ReplaceMe.sh
This is ReplaceMe.txt
Title: [TITLE]
Author: [AUTHOR]
Date: [DATE]
Comment: What a great extension.
Change-log: First version.
Current: 09/05/24
This is ReplaceMe.tmp
Title: MyExtension.tcz
Author: CentralWare
Date: 09/05/24
Comment: What a great extension.
Change-log: First version.
Current: 09/05/24
tc@E310:~/ReplaceMe$
-
@Rich: Interesting while read > case concept. From A to Z I'm at "T" at the moment so I'll likely have to wait until Friday evening to tinker with implementing something like it.
Issues:
We're repeating this process with 6,000+ files...
Each file has been in the hands of different humans...
Each human likes to do things differently...
OMG what a brain-strain! :)
Here's an example source file: https://git.alpinelinux.org/aports/tree/main/mc/APKBUILD (https://git.alpinelinux.org/aports/tree/main/mc/APKBUILD)
We're populating a template file with a layout slightly similar to filename.tcz.info, using the search/replace tag system noted earlier and the content from pages such as these.
If you'll notice in the link above, the "source=" line is split across a number of rows AND doesn't use the source="filename::weblink" format you're "supposed to" use per Alpine.
LOL - so many What If's to tend to in order to help prevent the same what-ifs later down the road
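For the multi-row source= case, the rough cut I'm playing with looks like this (assumes the value is double-quoted as in the mc example; far from bullet-proof):
# print each entry between source=" and the closing quote
awk '/^source="/ { grab=1; sub(/^source="/, "") }
     grab { if (sub(/".*/, "")) grab=0
            gsub(/^[ \t]+|[ \t]+$/, "")
            if ($0 != "") print }' APKBUILD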
-
"LM (16 bit)" doesn't make sense to me - you're saying that the LM flag indicates a 16bit CPU? Maybe I'm completely wrong then, but see the Wikipedia link.
is any of this https://guix.gnu.org/blog/2024/building-packages-targeting-psabis// *generally* relevant? wrt hw / flags etc.
... https://gitlab.com/x86-psABIs/x86-64-ABI ?
.. https://hpc.guix.info/blog/2022/01/tuning-packages-for-a-cpu-micro-architecture/
-
i happened to (just) read this:
I doubt there is a single distribution keeping separate repositories for every processor.
which reminds me of https://hpc.guix.info/blog/2022/01/tuning-packages-for-a-cpu-micro-architecture/
also (for encouragement ;-)
some similar sentiment wrt build-script is mentioned @ https://forum.tinycorelinux.net/index.php/topic,963.msg5548.html#msg5548
-
Good mornin' everyone!
@Paul_123: Are there any mods/drivers built directly into the TCL kernel? I haven't tried to dissect the build notes for PC/ARM / release(s) / src / toolchain yet, so I thought it quicker to just ask.
@Paul_123, @Rich, etc.: Please check sorter.sh --> the kvm entry - and determine whether the wildcard for arch/x86/kvm/* should be there. My tests on 6.1.x and 6.11.x were odd, so I just removed "/*" and everything seemed good; it could have been a fluke, though. (kvm.tcz wasn't being built, yet arch/x86/kvm was empty, leading one to assume it was already processed OR something went wonky during make_modules.)
Additionally, what "sorter" is used, or how is this one used, when compiling piCore?
Thanks, guys!
-
piCore defconfigs come from Raspberry Pi. Typically, things like onboard ethernet and USB controllers are compiled into the kernel. These are the 5 kernels built:
armv6) DEFCONFIG=bcmrpi_defconfig;;
armv7) DEFCONFIG=bcm2709_defconfig;;
armv7l) DEFCONFIG=bcm2711_defconfig;;
armv8) DEFCONFIG=bcm2711_defconfig;;
armv8_16k) DEFCONFIG=bcm2712_defconfig;;
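(Roughly speaking, the 32-bit ones are configured with ARCH=arm and the 64-bit ones with ARCH=arm64 - a sketch, with hypothetical cross-compiler prefixes:)
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- bcm2709_defconfig
make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- bcm2711_defconfig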
It was based on sorter.sh, but I use something local due to the way I build the kernel module extensions. I have a different initrd module list for each of the architectures; I had to make it separate since the initrds had drastically different modules. They are now quite similar. If you are building around sorter.sh, that will work, but you will likely need to have
sorter_armv6
sorter_armv7
...etc..
-
...But I use something local due to the way I build the kernel module extensions...
That's what I figured (thus why I asked), as I didn't see anything in git or on the repo that hints at how we're managing builds and modules for piCore.
Having separate "sorter" routines for 86/64/a6/a7/a8 would probably be the only sensible direction, but the same is true of kernel .config files, so it's just a matter of adding another platform file.
Once the kernel template is finished I'll send you an invite.
Thanks!