Tiny Core Linux
Tiny Core Extensions => TCE Talk => Topic started by: c-coder on December 15, 2008, 02:24:33 PM
-
Hello.
I have just booted into Tiny Core Linux and installed Opera for surfing the net.
Everything works great and is extremely fast.
After such a great impression, I downloaded and installed compiletc to try to compile some programs.
At the moment I am missing the Subversion program for fetching source code and compiling it in TCL.
Is somebody already working on including svn in compiletc?
If not, is it possible to compile svn on my machine and send it to somebody to include it in compiletc?
Thanks for the help and the great work so far.
Greetings c-coder
-
svn will probably not be included in compiletc because it is not a tool required for compiling.
It is more likely that it would be in a separate extension should you or someone else want to create one.
-
What about X headers perhaps?
Although there's already an Xorg-dev extension that one can use.
-
I'll look at making a separate svn extension - I've downloaded the tarballs; now I just have to find the time to compile them.
For the X headers - since there's both the base XVesa Kdrive and the Xorg-7.4 extension that people would commonly compile against, I think it's best that the headers are kept separate from compiletc.
-
I have tried to compile svn in Tiny Core Linux.
Normally building svn from source is not a problem:
you need the two source tarballs, plus the openssl TCL extension installed.
After that everything should go smoothly.
For some reason, however, Tiny Core Linux freezes after svn has been compiling for a while.
I can't move the mouse pointer, nor can I type anything on the keyboard.
It's a pity that Tiny Core Linux isn't as stable as I thought it was.
I am asking myself why TCL freezes while I try to compile svn from source.
Does somebody else have this problem too?
-
My guess is that you were compiling in memory and ran out of it.
-
it would be awesome if just before running out of free memory, linux severed the network connections, kill -9'ed whatever process took up the most memory, and then gave you the option of restoring the internet connections.
it would be an os-wide version of something i've seen happen to browsers in 2.6, if not by design. it's cooler than what i used to see in dsl: when you ran out of memory or the cpu was at 100%, it would just stop; i couldn't even switch to a tty or leave xwin. in that regard, tc is much more stable (for surfing) than dsl 4.4.2 on the same machines i've used both on.
-
I recall that there were some reports that enabling some swap would help...
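For machines that don't already have a swap partition, a swap file can be added along these lines (a sketch; the path and size are just examples, and the commands need root):

```shell
# create and enable a 256 MB swap file (path and size are examples; run as root)
dd if=/dev/zero of=/mnt/hda1/swapfile bs=1M count=256
mkswap /mnt/hda1/swapfile
swapon /mnt/hda1/swapfile
cat /proc/swaps        # the new swap file should now be listed
```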
-
it would be awesome if just before running out of free memory, linux severed the network connections, kill -9'ed whatever process took up the most memory, and then gave you the option of restoring the internet connections.
That sounds like a horrible solution. An operating system shouldn't shut down an arbitrary application simply because it has run out of memory. It should merely wait until running processes have done their business. If that means you can't do anything until those processes complete, then it is up to you to decide if you want to force certain apps to close...or force a shutdown. The OS should never make these decisions for you.
-
Ok...maybe that sounded a little harsh. I'm sorry, but I think it would be a bad idea to let the OS decide what to shut down....the user should be given the freedom to choose, even if it means a little inconvenience.
-
The Linux OOM killer is one such thing that shuts down a process before memory is exhausted and lockup occurs, but of course it can't read minds, and the complaints I've read are that it shuts down the wrong process or too many processes. It tries to aim at the offending process that is gobbling memory. I have also seen scripts that do the same thing.
I have to admit that when running with tce's and no /usr/local mounting I have in the past locked up my system while either compiling or simply untarring a kernel. Compiling glibc totally in RAM on a system with 1GB memory and no swap ground it to a halt. The solution of course is to use storage media to untar and build on, and to use tclocal or tcz's. I rarely do anything totally in RAM, except on the above-mentioned 1GB Windows-only machine I sometimes have use of that has no working USB ports.
For a remote server that you can only reach through ssh or some other means, with no physical access, having the system shut down a process before locking up - allowing you to ssh into it, free some memory or take care of other issues, and restart the process - may be of some value. In that case a hard lockup is the worst-case scenario.
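As a sketch of the remote-server case: on 2.6 kernels the OOM killer's choice can be influenced per process through /proc, so a critical daemon like sshd can be shielded from it (the pid lookup is an example; this needs root):

```shell
# shield the ssh daemon from the OOM killer (2.6 kernels; needs root)
# -17 means "never OOM-kill this process"; newer kernels use
# /proc/<pid>/oom_score_adj with a -1000..1000 range instead
pid=$(pidof sshd)
echo -17 > /proc/$pid/oom_adj
```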
-
no offense taken, i'm just glad you didn't phrase it "dumb idea." that was kind.
ideally i agree with you, but when the only choice is waiting (literally) an hour for a process that never frees up the ram, or the os cutting off something using all the ram, i'll take the latter. a really great compromise would be if the os said "hey, i'm about to run out of ram. should i just assume that's what you want, or do you want me to kill the process before you're unable to?"
as often as it happened, that would be great.
but as someone mentioned, swap helps too. in any case, something so very "wouldn't it be nice" was intended only as such: wishful thinking, not a formal rfc. your idea is good, i like the combination of yours and mine the best, and there's probably some way to make it even better than that. just don't think it was very serious, it was only a thought :) jason: thanks for mentioning oom, never heard of it. if they take mikshaw's idea that could improve it.
-
You can also monitor your ram usage by uncommenting the monitor call in your .jwmrc-tray, which by default will show percentage and update every 1.5s.
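The same usage percentage the tray monitor shows can be computed by hand from /proc/meminfo - a rough sketch (MemFree ignores buffers/cache, so this slightly overstates real usage):

```shell
# rough RAM usage percentage from /proc/meminfo
# (the same kind of figure the jwm tray monitor reports)
mem_used_pct() {
  awk '/^MemTotal/ {t=$2} /^MemFree/ {f=$2} END {printf "%d", (t-f)*100/t}' /proc/meminfo
}
mem_used_pct; echo
```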
-
..and BTW the "..0%S" works nicely with no swap now - thanks
-
You can also monitor your ram usage by uncommenting the monitor call in your .jwmrc-tray, which by default will show percentage and update every 1.5s.
conky is doing a fine job of that. it doesn't become a problem so much because of not monitoring, but because an application, usually a browser (i tried different ones; dillo did it more than firefox!), couldn't surf too much without hogging all the ram, even if i cleared the cache and history and turned off picture loading. this was more of a problem with dsl and 2.4 kernels (whether by cause or coincidence), although it happened with dsl-n too. i'm happy to say tc seems a lot better about it. maybe ff3 is more stable than the versions of ff2 i was using with dsl? seems likely. although i liked ff2 a lot.
-
Compiling glibc totally in RAM on a system with 1GB memory and no swap ground it to a halt. The solution of course is to use storage media to untar and build on, and to use tclocal or tcz's. I rarely do anything totally in RAM, except on the above-mentioned 1GB Windows-only machine I sometimes have use of that has no working USB ports.
If enabling swap and mounting storage media would prevent TCL from freezing, why doesn't TCL automount and enable the needed partitions during the boot process?
On the machine where TCL boots I have plenty of swap and partition space.
-
I believe TC does enable any possible swap partitions (and certain dos swap files) automatically at boot, if detected (and hw configured). If you have to load some extra modules out of the base/remaster, you'll probably have to manually enable the swap after.
As for data space, it is up to the user how to manage it.
-
I'll look at making a separate svn extension - I've downloaded the tarballs; now I just have to find the time to compile them.
Hello Juanito.
Thank you for the svn extension.
I have just now downloaded and installed the extension.
Does svn work for you on your PC?
I always get the error message "Unrecognized URL scheme ..." if I try to download a repo with svn.
I searched the Internet a little for this problem and found this:
http://azimbabu.blogspot.com/2008/07/using-svn-and-got-unrecognized-url.html
If I run "svn --version" I get the message that the svn modules ra_svn and ra_local are installed.
ra_dav isn't included, probably because of missing WebDAV support at compile time.
The ra_dav module, however, is the most used svn module if somebody wants to download a repo over http.
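For reference, ra_dav in the svn of that era was provided by the neon library, which ships in the subversion-deps tarball; a rebuild enabling http:// support might look roughly like this (version numbers and flags are assumptions, not the exact build used for the extension):

```shell
# rebuild svn with ra_dav (http://) support via the bundled neon library
# -- a sketch; version numbers are assumptions
tar xzf subversion-1.5.4.tar.gz
tar xzf subversion-deps-1.5.4.tar.gz   # unpacks neon/apr into the source tree
cd subversion-1.5.4
./configure --with-ssl
make && make install                   # install step needs root
svn --version                          # ra_dav should now appear in the RA module list
```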
Thanks for any help
Greetings c-coder
-
If enabling swap and mounting storage media would prevent TCL from freezing, why doesn't TCL automount and enable the needed partitions during the boot process?
not to mention that some people think a livecd should never touch the host machine or its partitions unless asked to. but as tc has 4 official boot methods, i could presumably expect this only of "cloud." even then, it's up to the community, not a standard. if the noswap boot option works, that's good.
-
I believe TC does enable any possible swap partitions (and certain dos swap files) automatically at boot, if detected (and hw configured).
Maybe it sounds strange to others, but I have a swap partition on the hard disk.
Linux distros like Puppy or Ubuntu recognize and mount this swap partition with no problems.
If I boot into Tiny Core Linux and start the "System Stats" application from the System Panel, I can't find any swap partition mounted under the "filesystems" tab.
To me it looks like Tiny Core Linux doesn't enable my swap partition.
Could someone verify this too, so I can be sure it's not only related to me?
-
not to mention that some people think a livecd should never touch the host machine or its partitions unless asked to.
At least when the extensions are loaded from the hard disk (boot option tce=hdax), Tiny Core Linux should boot with an enabled swap partition.
-
For me TC has always used my available swap partitions like it should.
Note that they are not mounted the usual way. If you type "free" in a console, do you see your swap? Another way is "cat /proc/swaps"
-
For me TC has always used my available swap partitions like it should.
Note that they are not mounted the usual way. If you type "free" in a console, do you see your swap? Another way is "cat /proc/swaps"
Thanks for the tip.
By typing the command "free" in the console I can now see that TC really has booted with the swap partition enabled.
Maybe this information should also be shown under the "Filesystem" tab in the Stats application.
The problem looks to me like I don't have sufficient RAM and swap memory.
             total       used       free     shared    buffers
Mem:        186464     183756       2708          0         92
Swap:       321260     140168     181092
Total:      507724     323924     183800

Filename     Type        Size     Used    Priority
/dev/hda5    partition   321260   140260   -1
Thanks for the help.
Greetings c-coder
-
Indeed 500 megs might not be enough for the heaviest compiles, such as glibc. I'm surprised svn was so heavy though. For these compiles, it's better to do them on a hard disk so that ram/swap is not used for the actual files.
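A sketch of doing exactly that - untarring and building on real storage so RAM and swap only hold the compiler's working set, not the source tree (the mount point, path, and version are examples):

```shell
# build on disk instead of in tmpfs (mount point and paths are examples)
mkdir -p /mnt/hda1/build
cd /mnt/hda1/build
tar xzf /mnt/hda1/src/subversion-1.5.4.tar.gz
cd subversion-1.5.4
./configure && make
```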
-
Both glibc and oo required more than 1GB of RAM - I compiled these by using space on a USB stick (not good for the longevity of the stick, I know, but needs must).
BTW the modified svn, which didn't require much ram at all, has been posted for a few days now (works with http://)
-
BTW the modified svn, which didn't require much ram at all, has been posted for a few days now (works with http://)
Juanito thank you very much for the new svn extension.
I have just booted only the tce base.
After that I downloaded svn and tried to download some http repos.
The new version of the svn extension works great on my PC.
I agree with you that svn does not really use a lot of RAM while compiling and building, but if you boot into TCE with a lot of extensions and run Opera with several open tabs, then compiling some software can freeze TCL.
Once again thanks for the svn extension.
Greetings c-coder