Tiny Core Linux
Tiny Core Extensions => TCE Q&A Forum => Topic started by: Ztoragefool on November 03, 2010, 03:31:51 PM
-
hello everyone!
i am new to TC (and quite inexperienced concerning linux), but i've got big aims ;-) and i love this whole idea behind tc!
right now i'm just frustrated because i can't find mdraid in appbrowser. bug? not supported? am i too tired?
*any thoughts appreciated*
-
hmm... you're right, mdadm is not in the 3.x repo, although it is present in 2.x. seems somebody should take care of it and make an extension for 3.x :-/
-
Umm, neither is md.tcz (kernel modules)
-
hey thanks for the quick response!
so i *should* have found md.tcz, and... any more extensions?
i found that i can load modules like raid1.ko to make /proc/mdstat appear... however, no mdadm, no management... so what should i do now?
-
I think md.tcz is replaced by raid-dm-2.6.33.3-tinycore.tcz (maybe not fully, depends on testing), so you would probably only need mdadm.tcz and raid-dm-2.6.33.3-tinycore.tcz as extensions.
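For reference, fetching them from the command line should be something like this (assuming those exact extension names in the repo):
tce-load -wi mdadm.tcz
tce-load -wi raid-dm-2.6.33.3-tinycore.tcz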
-
Just sent mdadm.tcz for submission, tried it myself with loop devices and raid 0,1,4 - works great!
Be a little patient until it arrives in the repo ;-)
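In case someone wants to reproduce the loop device test, it went roughly like this (file and device names are just examples - pick loop devices that losetup shows as free, since TC uses loops for mounted extensions):
dd if=/dev/zero of=/tmp/d0.img bs=1M count=64
dd if=/dev/zero of=/tmp/d1.img bs=1M count=64
sudo losetup /dev/loop6 /tmp/d0.img
sudo losetup /dev/loop7 /tmp/d1.img
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/loop6 /dev/loop7
cat /proc/mdstat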
-
root@box:~# mdadm -Es
ARRAY /dev/md0 UUID=02a0ac6a:25e79bb1:46239aa3:c992519e
ARRAY /dev/md2 UUID=a0c3d5f2:894a8eaa:1eddabe0:16f45c0d
here i am! turns out raid-dm...tcz has already been installed by lvm2, that's why i've been able to load the kernel module. now i found a fresh mdadm.tcz in appbrowser...
THANK YOU SO MUCH!!!
-
i'm trying to get my system to boot from raid1. is there any alternative to changing my initrd? if not, what am i missing?
my system: microcore64.gz with some added xorg stuff
ok, my results so far:
- extlinux boots bzImage64 from an ext2 fs on sda1 (no raid for now)
- i fetched mdadm and raid-dm-...tinycore64.tgz
- created a test-raid1 on /dev/sdb + missing
- on reboot i have to modprobe raid1, then i can reassemble: mdadm -As
i guess the raid module should be in my ramdisk to let the kernel see my raid at boot time, so i
- extracted my initial ram disk
- copied in my ...drivers/md/ dir
- found the file rcS symlinks to and added a "modprobe raid1" after "modprobe loop"
- packed back my new microcore64
and now i wonder what's missing, since the module still isn't loaded after rebooting.
any hints, please?
-
Perhaps a 'depmod [OPTIONS]' command is needed in the process of remastering?
Search the forum for such.
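For reference, the whole remaster cycle with the depmod step included would look roughly like this (paths and the kernel version string are assumptions - adjust to your system):
mkdir /tmp/extract && cd /tmp/extract
zcat /path/to/microcore64.gz | sudo cpio -i -d
# copy the raid modules into the extracted tree
sudo cp -a /lib/modules/2.6.33.3-tinycore64/kernel/drivers/md lib/modules/2.6.33.3-tinycore64/kernel/drivers/
# rebuild modules.dep against the extracted tree, not the running system
sudo depmod -a -b /tmp/extract 2.6.33.3-tinycore64
# repack
sudo find . | sudo cpio -o -H newc | gzip -2 > /tmp/microcore64-new.gz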
-
gaaah you're right - blind i have been!
it's mentioned on the remastering wiki page, i thought the whole part was just for tc 2.x versions...
what exactly does it do? i'm not getting it from the help...
anyway, i'll try
-
http://www.phpman.info/index.php/man/depmod/8
-
sorry - i dared you to RTFM me... guess i've read too much yesterday, i even started seeing flickering lines on my walls. but let's go forwards...
i used depmod and checked that the new modules.dep file contains the raid modules. copied the freshly repacked initrd file to the boot dir.
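(the check itself was just a grep on the extracted tree, something like:
grep raid1 /tmp/extract/lib/modules/2.6.33.3-tinycore64/modules.dep
)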
still, lsmod doesn't show raid1 after reboot. it does show loop. both modules are probed in rcS during the init process... dmesg shows that loop got loaded, but raid1 isn't mentioned anywhere.
a few weeks ago, i managed to make the citrix xenserver distro (deriving from centos) boot from raid1, mainly following this howto: http://forums.citrix.com/thread.jspa?threadID=236269&tstart=0
so this terrain is not completely new to me. maybe there are some more preconditions for tinycore? must count on some guru's help on this.
thanks for your time!
-
hmmm no more ideas, anyone?
booting from raid1 arrays, c'mon!
first of all, what would you think: is this a silly newbie issue, and you know three people who already got it running? or is it likely to remain unsolved for the next week?
-
You cannot boot from software RAID-1.
You can boot from a RAID member as long as you do not attempt to mount it before the RAID is started.
-
exactly - boot managers will not care about raid, but as long as they are happy with the drive's geometry (i.e. no stripes), booting will work from any drive the system treats as a raid1 member.
but all this is step 2. step 1 is: i'd love to see the raid1 module loaded during boot.
init calls the rcS file which loads via modprobe:
loop
md_mod
raid1
after booting lsmod shows
loop
md_mod
but _no_ raid1. since i inserted md_mod to load, i am sure that the commands get executed. now i gotta find the error messages from the failing raid1.
...to be continued
-
ok, i took the 2> suppression out of rcS, and now i get my error message on the console:
modprobe: can't load module md_mod (kernel.tclocal/drivers/md/md-mod.ko.gz): No such file or directory
funny: md_mod IS loaded - directly after booting i see it via lsmod, and this has been the case ever since i inserted it in rcS, so... ??? when and why does it get loaded, if it fails during init?
however, this path "kernel.tclocal/" is a symlink to modules installed under tce, i.e. in /usr/local/lib/..., which definitely isn't there at boot time. so for now i'll trick depmod by temporarily taking the link away...
yup, got my module. now let's see about getting the raid up during boot.
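(in shell terms the trick was something like this - same example paths as before:
cd /tmp/extract/lib/modules/2.6.33.3-tinycore64
sudo mv kernel.tclocal /tmp/kernel.tclocal.saved   # hide the dangling symlink from depmod
sudo depmod -a -b /tmp/extract 2.6.33.3-tinycore64
sudo mv /tmp/kernel.tclocal.saved kernel.tclocal   # put it back before repacking
)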
-
This is an interesting project.
Once you assemble the array, you will need to modify the rules
so that the RAID members are not scanned, and the new array is scanned
for the tce directory.
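Just to sketch the direction (this is NOT the stock rules file, the key names are from memory):
# tag raid members so they are not treated as plain filesystems, and feed
# each one to mdadm for incremental assembly as udev discovers it
SUBSYSTEM=="block", ENV{ID_FS_TYPE}=="linux_raid_member", RUN+="/sbin/mdadm -I $env{DEVNAME}"
With 'mdadm -I' (incremental mode) the array gets assembled piece by piece as its members show up.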
-
gerald, sounds like you've whetted my appetite!
yeah, i suppose you're talking about udev rules? i've been afraid they would get involved - man, i see those rules and they scare my *** off...
for now i just see 3 different versions of etc/udev/rules.d/64-md-raid.rules:
one contained in the initrd
one in /
one in /usr/local
EDIT: sorry, the latter two are identical, so there's one for runtime and a -slightly- different one inside the initrd. it seems to differ only in one rule dealing with raid members being commented out.
the initrd version should be relevant for boot time, but... nnnah, messy... first gotta read more stuff about udev. cya >:(
-
hi,
let's take a look at what i've got:
- sda1 partition type FD
- md1 raid level 1 (one member missing), formatted with ext4, contains the boot/* and tce/* directories.
- called extlinux -ir /mnt/md1/boot/extlinux to make it boot (config sketch below)
- oh yeah, microcore64.gz and bzImage64 - wanna have full memory support
in fact, the microcore64 initrd is modified:
- added raid1 and md-mod kernel modules to load at boot time
- updated via depmod
- 64-md-raid.rules: uncommented first rule (describing raid members)
- rcS (i.e. tc-config) calls modprobe raid1 in line 107 - this is before udev is called
now tc64 boots but still ignores my partition at boot time, so there's solely the command line and nothing installed from tce. i can immediately sudo mdadm -As and get my raid drive, so - be it the rules like gerald suggested or a wrong placement of modprobe raid1 - it's all there, but something's still wrong...
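the extlinux config, for completeness (a sketch, values are examples):
# /mnt/md1/boot/extlinux/extlinux.conf
DEFAULT mc64
LABEL mc64
  KERNEL /boot/bzImage64
  APPEND initrd=/boot/microcore64.gz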
i read a lot and tried hard... anyone up for assistance?
-
FWIW: The included 64-md-raid.rules in the tinycore base is neither sensible nor helpful, since it calls /sbin/mdadm, which is not in the base - and even when mdadm.tcz is loaded the rules will not work, because that puts mdadm at /usr/local/sbin/mdadm. That's why atm the install script of mdadm overwrites the base version... actually I'd say it should go out of the base, because there's no use for it.
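(A quick manual workaround until then, assuming mdadm.tcz is loaded: sudo ln -s /usr/local/sbin/mdadm /sbin/mdadm)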
-
ah right,
forgot to mention i copied mdadm to /sbin inside the initrd. (so you guys call it base, right?)
so your next try would include different rules? it might be an idea to compare ours to the xenserver rules, with which i got it booting.
just for brainstorming i blast out some more questions i got:
a) dmesg still mentions raid1 quite late; it looks to me as if hardware detection still happens before it. is there any place other than tc-config (rcS) to load the raid1 module? my guess is it should be loaded when udev recognises the disks, clever udev rules provided...
b) maybe the kernel configuration lacks something needed for md disks at boot time? can it work with the current image?
again, thanks for spending your time!
-
Quote: "The included 64-md-raid.rules in the tinycore base is neither sensible nor helpful [...] actually I'd say it should go out of the base, because there's no use for it."
Agreed. It is supplied in the mdadm.tcz extension. Will be removed from base.
-
WOW!
just updated from 3.2 to 3.3 (microcore64) out of curiosity and... yay! the default kernel/base files now boot tc from my partitioned (!) raid drive. so, to whoever took care of making this run smoother...
THANK YOU!
however, later on i'd like to list some small issues i still encounter that keep the whole solution from being really transparent. i take care of them within bootlocal.sh, so i've got it running, but i want to give this feedback in case you stumble upon some lines of code that just need to be swapped to achieve this during the boot/init phase.
mainly, i observe that my raid partition /dev/md1p1 gets mentioned in fstab and the /mnt directory, which makes me very happy. but it doesn't get completely preferred over /dev/sda1, since that is the one still being mounted and presented in the .backup_device and .tce_dir files.
ok, i could give more details on this and my fancy vision of transparency, if needed.
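for reference, the bootlocal.sh part is roughly this (the dot-file locations and formats are what i found by poking around on my box - double-check on yours before copying):
# assemble the array, mount it, and hand the tce dir over to it
mdadm -As
mkdir -p /mnt/md1p1
mount /dev/md1p1 /mnt/md1p1
echo "/mnt/md1p1/tce" > /opt/.tce_dir
echo "md1p1" > /opt/.backup_device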
-
Regarding the "preference" of TC to use '/dev/sda1': during the boot process TC will search the 'root' of all detected storage devices for a 'tce' directory and, unless "guided" otherwise, will pick the first one it comes across. This "guidance" to choose a particular one can be achieved via the 'tce=DEV' boot code. Possible values for DEV are things like device names (e.g. 'hda1') or file system identifiers (e.g. 'LABEL=...' or 'UUID=...').
Yours is certainly a special case, as I don't know whether your target filesystem is already "known" at the time this scanning is performed. But maybe the 'waitusb=X' boot code can be used for your situation as well. I'm thinking that in its 'waitusb=X:UUID=...' form it might be just what you need.
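Concretely, something along these lines in your append line (the UUID is a placeholder - take the real one from 'blkid /dev/md1p1'):
APPEND initrd=/boot/microcore64.gz waitusb=10:UUID=<fs-uuid> tce=UUID=<fs-uuid>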