Tiny Core Linux
Tiny Core Base => TCB Q&A Forum => Topic started by: suxi on September 19, 2012, 12:20:52 PM
-
Hello Forum
I am sure this is a real noob question but I searched this forum and the net for quite a while and still haven't found a solution.
I would like to mount a RAID drive at boot, but adding mount /dev/sdb1 to my bootlocal.sh doesn't do anything.
This is my fstab file:
# /etc/fstab
proc /proc proc defaults 0 0
sysfs /sys sysfs defaults 0 0
devpts /dev/pts devpts defaults 0 0
tmpfs /dev/shm tmpfs defaults 0 0
/dev/zram0 swap swap defaults,noauto 0 0
/dev/sda1 /mnt/sda1 ext4 noauto,users,exec 0 0 # Added by TC
/dev/sda2 none swap defaults 0 0 # Added by TC
/dev/sda3 /mnt/sda3 ext4 noauto,users,exec 0 0 # Added by TC
/dev/sdb1 /mnt/sdb1 ntfs noauto,users,exec,ro,umask=000 0 0 # Added by TC
/dev/sdb2 /mnt/sdb2 ntfs noauto,users,exec,ro,umask=000 0 0 # Added by TC
Thanks a lot for any help
suxi
-
Did you install ntfs support?
I wouldn't know if RAID might require additional extensions as well.
-
Be careful with Windows BIOS fakeraid. Mounting a fakeraid member under Linux will likely break your RAID.
-
Hi suxi
If you have a hardware RAID controller card, see if this thread helps:
http://forum.tinycorelinux.net/index.php/topic,12624.msg68787.html#msg68787
-
Thanks a lot for the fast reply!
It's a 3ware controller and after I installed these extensions:
scsi-3.0.3-tinycore.tcz
raid-dm-3.0.3-tinycore.tcz
sg3-utils.tcz
the sdb1 and sdb2 partitions of the raid have been automatically added to my fstab. I can manually mount those after boot with mount /dev/sdb1 and mount /dev/sdb2.
Actually, I don't think it's a raid issue, because after boot only the TCE partition sda1 is mounted. I have to mount all other partitions manually in order to use them.
How do you guys automatically mount partitions during boot?
Again, thanks for your support
suxi
EDIT:
Did you install ntfs support?
I think that came out of the box. I can access both partitions after mounting them manually.
Did I mention that I loooove the tiny core concept? I am setting up a simple samba file server for 10 windows users and the frugal concept is thrilling :)
-
Hi suxi
Are you running a 3.0.3 kernel? Open a terminal and enter:
uname -r
If it comes back as 3.0.21-tinycore, replace the scsi and raid-dm extensions with the correct versions.
Tinycore only automatically mounts the partitions it needs to run. You can add your mount command to bootlocal.sh.
-
Good morning!
It's the 3.0.21 kernel and (accidentally, thanks for pointing this out) I have the 3.0.21 scsi and raid extensions.
I tried adding mount /dev/sdb1 as well as mount /dev/sdb1 /mnt/sdb1 and mount -a to bootlocal.sh, but nothing gets mounted at boot. I can mount them manually with exactly those commands, though.
I've just noticed that I am getting two errors during boot (please see attached jpeg):
1) mount: can't find /dev/sdb1 in etc/fstab. Could it be that the fstab table has not been created at that point of the boot?
2) Problems loading libtinfo.so. I don't know what this is or how I can solve it.
Thanks a lot and have a great day!
suxi
(https://dl.dropbox.com/u/19320906/tiny_core_boot_screen.jpg)
-
Problems loading libtinfo.so
If you use the "provides" function in the apps browser, it'll show you which extension you're missing.
In this case libtinfo.so is provided by the ncurses extension.
Note that if you use the apps browser to download/install extensions, it will automatically download/install all required dependency extensions.
-
Thank you for the tip on this "side issue". My apps browser hasn't got a provides function, but I could choose update, which found two ncurses extensions that I updated. Unfortunately, that didn't solve the mount problem, which really is a bummer ;)
-
Hi suxi
Maybe it's a timing problem. To verify, try this in your bootlocal.sh:
sleep 10
mount /dev/sdb1
If that works, we'll come up with a cleaner solution.
-
Hi Rich
sleep 10
This did the trick. Now I am really looking forward to the cleaner solution.
Thank you!
suxi
-
Hi suxi
Create the following script in your /opt directory:
mountsdb1.sh
#!/bin/sh
# Wait until the system has added sdb1 to fstab, then mount it.
until grep sdb1 /etc/fstab
do
sleep 1
done
mount /dev/sdb1
Make it executable like this
chmod 775 mountsdb1.sh
Add the following to your bootlocal.sh
/opt/mountsdb1.sh &
If you also wish to mount sdb2, make a mountsdb2.sh script and add a line in bootlocal.sh to call it too.
These scripts will run in the background until they find the device they are looking for, then mount the drive.
-
Maybe I'm getting the issue wrong, but the clean solution here would be to make everything you need to boot your system persistent, so it's available during kernel boot, and use fstab the regular way.
It screams to me that you have to do a mount -a right after boot :( I think all these issues should be taken care of during boot, not after.
-
Hi ananix
If you look at the screen shot, fstab has been set up but sdb1 is not yet present. Tinycore boots very quickly,
and what is probably happening is that the RAID drivers have not yet finished configuring the drive. Once the
drivers have finished their task, the system is notified that another drive is present, and fstab is rebuilt to
include it. Having a script wait in the background for the drive to show up seems like the best solution.
Making fstab persistent would probably not solve anything, since /dev/sdb1 is not present until the drivers
run; the mount command would still fail.
-
Yeah, I get you (I didn't really think about fstab, more about the extensions). I thought the problem was pretty much like my Broadcom Xtreme firmware problem, if you remember: drivers and firmware not being present early enough. So I did misunderstand this subject :)
But it has always seemed to me that disk mounting at boot would wait if there were anything to wait for. Then again, I have never been this far down in the boot process of a Linux; thinking about it, other distros use init scripts later in the process, where they can hide things like this away as just "the system". But this is a core, and I LOVE it :)
It also strikes me that on my first Red Hat 5.2 I had to do a lot in rc.local (the bootlocal.sh equivalent), but I left Linux (for Tru64 and Guardian) for a few years back then, as it was too unstable; when I came back it was not only stable but also more "smooth", hiding this stuff away.
-
Hey Rich
Wow, that did it. I actually changed your script a bit so that it mounts sdb1 and sdb2 after it found the sdb1 entry in the fstab.
Now that the raid is up and running, I would also like to add something like tw_cli in order to check the status with another script. Would this also be possible with TC, and could you maybe point me in the right direction?
Again, thank you all very much for your help. This is a great little community!
Best wishes
suxi
-
Hi suxi
Wow, that did it. I actually changed your script a bit so that it mounts sdb1 and sdb2 after it found the sdb1 entry in the fstab.
You can do that, but suppose the unthinkable happens and sdb1 is not recognized by the system. The script
will block forever and nothing gets mounted. Or suppose the script happens to sample fstab one microsecond
after the sdb1 entry is added and sdb2 has not yet been detected: sdb1 will get mounted, but sdb2 might not
if the second mount command executes before sdb2 shows up.
Using two scripts running backgrounded as I showed you ensures that neither drive can block the other from
being detected and avoids a slim but possible race condition.
.... something like tw_cli in order to check the status ....
I would suggest Googling for source or executable files in tar format and start a new thread if you need help with that.
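As an aside, the "block forever" case mentioned above could be handled with a bail-out counter. Here is a minimal, self-contained sketch of that pattern; it polls a stand-in file in /tmp instead of the real /etc/fstab, and DEV, FSTAB, and MAX_TRIES are illustrative names, not part of the scripts in this thread:

```shell
#!/bin/sh
# Sketch: wait for an fstab entry, but give up after MAX_TRIES polls so a
# missing drive cannot block forever. On the real system the script would
# watch /etc/fstab and then run: mount /dev/sdb1
DEV="sdb1"
FSTAB="/tmp/demo_fstab"      # demo stand-in for /etc/fstab
MAX_TRIES=5

: > "$FSTAB"
# Simulate the system adding the fstab entry a moment later.
( sleep 1; echo "/dev/$DEV /mnt/$DEV ext4 noauto 0 0" >> "$FSTAB" ) &

tries=0
until grep -q "$DEV" "$FSTAB"
do
    tries=$((tries + 1))
    [ "$tries" -ge "$MAX_TRIES" ] && break
    sleep 1
done

if grep -q "$DEV" "$FSTAB"; then
    echo "found $DEV, would run: mount /dev/$DEV"
else
    echo "gave up waiting for $DEV"
fi
```

In the timeout branch the real script could log the failure instead of silently looping forever.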
-
... and avoids a slim but possible race condition.
Ok, that sounds very plausible. I'll stick to your suggested solution.
-
Hey guys!
A new challenge came up: when I connect a USB drive to the box, TC registers the USB drive as sdb and the raid controller as sdc at boot, while without the USB drive the raid is sdb. Is there a way to reserve sdb for the raid no matter what?
Again, thanks for your help. Have a great day!
suxi
-
There should be a way to use UUIDs to get around this?
-
Hi suxi
OK, try this:
mountsdb1.sh
#!/bin/sh
# Identify the RAID partition by UUID so a USB drive can't shift its name.
RAID1UUID="77f3e5df-806f-480c-b6cc-905cb3132753"
RAID1Dev=""
# Wait for a device with that UUID to show up.
while [ -z "$RAID1Dev" ]
do
sleep 1
RAID1Dev=$(blkid -U $RAID1UUID)
done
# Wait for the system to add it to fstab.
until grep $RAID1Dev /etc/fstab
do
sleep 1
done
mount $RAID1Dev
# Link a fixed name to whichever /mnt/sdXN the drive landed on.
mkdir /mnt/RAID1
ln -sf /mnt/$(echo $RAID1Dev | cut -c6-) /mnt/RAID1
Boot up without the USB drive plugged in so your RAID drives are found. Execute:
blkid /dev/sdb1
Copy the value to the right of UUID= and use it to replace the value to the right of RAID1UUID= in the script.
Save and reboot to make sure it works. If it did, you should be able to see the drive at both /mnt/sdb1 and /mnt/RAID1.
Then make another script for sdb2 and change ALL occurrences of RAID1 to RAID2. Reboot with a USB drive
plugged in. Your /mnt directory should have RAID1 and RAID2 entries in it.
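For the curious, the ln -sf line in the script works by stripping the leading "/dev/" (five characters) from the device path that blkid returned. A quick illustration (dev and name are throwaway demo variables):

```shell
# cut -c6- keeps everything from character 6 onward, dropping the
# 5-character "/dev/" prefix to leave the bare device name.
dev="/dev/sdb1"
name=$(echo "$dev" | cut -c6-)
echo "$name"        # -> sdb1
echo "/mnt/$name"   # -> /mnt/sdb1
```

So if the RAID comes up as /dev/sdc1 instead, the symlink simply points at /mnt/sdc1 and the fixed /mnt/RAID1 path still works.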
-
Thank you so much, you are all my new heroes!
I'll try this over the weekend. Have a great one!
suxi
-
Hi suxi
You are welcome. Let us know if it works. If it works, access the drives through /mnt/RAID1 and /mnt/RAID2 as
these points will never change on you.
-
It sort of worked. Now I have an sdb1 folder inside the raid1 folder, which means I'll run into problems with my samba shares. I need the drive's contents to be located just below the raid1 directory. Does this make sense?
-
Hi suxi
One more time:
#!/bin/sh
RAID1UUID="77f3e5df-806f-480c-b6cc-905cb3132753"
RAID1Dev=""
# Wait for a device with that UUID to show up.
while [ -z "$RAID1Dev" ]
do
sleep 1
RAID1Dev=$(blkid -U $RAID1UUID)
done
# Wait for the system to add it to fstab.
until grep $RAID1Dev /etc/fstab
do
sleep 1
done
# Mount straight onto the fixed mount point instead of symlinking.
sudo mkdir /mnt/RAID1
sudo mount $RAID1Dev /mnt/RAID1
Now the drive will be mounted directly to /mnt/RAID1
-
Hi Rich, that's very strange: I can't get your last solution to work. Nothing gets mounted, and if I start the mount script manually after boot, I get the following error:
mount: mounting /dev/sdc1 on /mnt/raid1 failed: Invalid argument
What was the idea behind your first script, and why did that one work? What a bummer; thank you for your patience.
suxi
-
Hi suxi
I'm not sure why that failed. If you execute:
ls -l /mnt
what does it show?
-
Hi Rich, this comes up:
tc@suxi:~$ ls -l /mnt
total 5
drwxr-xr-x 2 root root 40 Sep 22 00:50 raid1/
drwxr-xr-x 2 root root 40 Sep 22 00:50 raid2/
drwxr-xr-x 4 root root 1024 Sep 14 18:40 sda1/
drwxrwxrwx 4 root root 4096 Sep 19 12:37 sda3/
drwxr-xr-x 2 root root 40 Sep 22 00:49 sdb1/
drwxr-xr-x 2 root root 40 Sep 22 00:50 sdc1/
drwxr-xr-x 2 root root 40 Sep 22 00:50 sdc2/
tc@suxi:~$
-
Hi suxi
Make sure you have:
ntfsprogs.tcz
fuse.tcz
installed. Then change:
sudo mount $RAID1Dev /mnt/RAID1
to
sudo mount.ntfs-fuse $RAID1Dev /mnt/RAID1
and see if that works.
I see you changed RAID1 to lower case, which is OK. Just remember that Linux is case sensitive, so if you changed
it in the mkdir line, it has to match in the mount line.
-
Ok, thank you. This really seems to be getting complicated.
I changed the scripts and rebooted; the second raid partition (named godzilla2; I named the mount points after their device labels) got mounted, but not the first one.
I then executed the mount script for the first partition manually and received the following error:
root@suxi:~# /opt/mount_godzilla1.sh
/dev/sdc1 /mnt/sdc1 ntfs noauto,users,exec,ro,umask=000 0 0 # Added by TC
mkdir: can't create directory '/mnt/godzilla1': File exists
Volume is scheduled for check.
Please boot into Windows TWICE, or use the 'force' option.
NOTE: If you had not scheduled check and last time accessed this volume
using ntfsmount and shutdown system properly, then init scripts in your
distribution are broken. Please report to your distribution developers
(NOT to us!) that init scripts kill ntfsmount or mount.ntfs-fuse during
shutdown instead of proper umount.
Mount failed.
root@suxi:~#
This worries me, because I did manually mount and work with the drives before without getting any errors.
Thanks a lot for your help
suxi
EDIT: I have found a working solution: using mount with the -t option.
I am now using the following mount command in the script: sudo mount -t ntfs $RAID1Dev /mnt/godzilla1
Rich, I would love to learn what your thoughts were in recommending ntfsprogs.tcz and
fuse.tcz, and what the difference is between that, plain mount, and mount -t.
Thank you!
-
Hi suxi
I suggested ntfsprogs.tcz because it was installed on my system for gparted.tcz and because I could not
reproduce your mount problems; fuse.tcz is an optional dependency of ntfsprogs.tcz.
Using mount -t lets you tell mount what kind of file system is on the device. I created and formatted an
NTFS file system on a spare partition and did not need the -t option to mount it. I did need to use mount.ntfs-fuse
to mount it as RW; otherwise it got mounted as RO. Maybe you could post a copy of one of the scripts in its
current form.
-
Hi Rich
Thanks for your explanation. Maybe I'll give ntfsprogs another try, as I am having problems unmounting the partitions in another backup script. I can only umount -l them.
This is one of the boot scripts I am using now. Do you approve?
Best wishes
suxi
#!/bin/sh
RAID1UUID="2C20120C2011DE20"
RAID1Dev=""
# Wait for the device with that UUID to show up.
while [ -z "$RAID1Dev" ]
do
sleep 1
RAID1Dev=$(blkid -U $RAID1UUID)
done
# Wait for the system to add it to fstab.
until grep $RAID1Dev /etc/fstab
do
sleep 1
done
sudo mkdir /mnt/godzilla1
sudo mount -t ntfs $RAID1Dev /mnt/godzilla1
-
Hi suxi
I am having problems unmounting the partitions in another backup script. I can only umount -l them.
If you are getting the message:
Device or resource busy
it means the device is still in use. Examples of in use include:
1. Open a terminal and cd /mnt/godzilla1
2. Viewing a directory in /mnt/godzilla1 with a file manager
3. Your backup script executed a cd /mnt/godzilla1 command and did not cd back out (sounds likely)
The boot script looks fine.
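Item 3 is worth a runnable illustration. The sketch below uses a throwaway directory in /tmp as a stand-in for /mnt/godzilla1; the point is that any process whose working directory sits inside the mount keeps the filesystem busy, so a backup script should cd back out before calling umount. (On a real system, a tool such as fuser or lsof, if installed, can list the processes holding a mount point.)

```shell
#!/bin/sh
# Demo of the "device busy" pitfall: a script that cd's into a mount point
# and never cd's back keeps the filesystem in use. MNT is a demo stand-in.
MNT="/tmp/demo_mnt"
mkdir -p "$MNT"

cd "$MNT"            # while we sit here, umount of a real mount would fail
echo "working in: $(pwd)"

cd /                 # cd back out before the umount
echo "now safe to umount, cwd: $(pwd)"
```

If the backup script did this, a plain umount should succeed and umount -l would no longer be needed.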