Tiny Core Linux
Tiny Core Base => Raspberry Pi => Topic started by: Tcore on February 04, 2019, 03:05:01 PM
-
Hello everyone
I’m new to piCore so please forgive my silly question:
In an earlier post from 2015 I read that mdadm.tcz was built for piCore 6.1. However, I can't find it in piCore 9.0.3. Has it been replaced by something else?
My goal is to run a simple RAID1 configuration on piCore, using two external hard drives.
Has anyone ever done something similar?
Best regards
Serge
-
You could try using the mdadm extension from piCore-6.x in piCore-9.x and see if it works.
-
I would love to, but I only found a post indicating that mdadm was built for piCore 6.1: http://forum.tinycorelinux.net/index.php/topic,19261.msg118778.html#msg118778
What I would need is either a package to install or someone who could point me in the right direction.
-
Hi Tcore
Since you haven't received a reply I'll give it a shot.
The mdadm extension appears to exist in the piCore-7.x repository and it's still mis-named as mdam.tcz. You'll need:
http://tinycorelinux.net/7.x/armv6/tcz/mdam.tcz
http://tinycorelinux.net/7.x/armv6/tcz/mdam.tcz.dep
The .dep file lists raid-dm-KERNEL.tcz, which I think you should get from the piCore-9.x repository because it's kernel-specific.
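Something like this might work; untested on my end, and the /mnt/mmcblk0p2/tce path is only a guess, so adjust it to wherever your tce directory actually lives (this also assumes you're on the 4.9.22 kernel, check uname -r):
wget http://tinycorelinux.net/7.x/armv6/tcz/mdam.tcz
wget http://tinycorelinux.net/7.x/armv6/tcz/mdam.tcz.dep
wget http://tinycorelinux.net/9.x/armv6/tcz/raid-dm-4.9.22-piCore.tcz
cp mdam.tcz mdam.tcz.dep raid-dm-4.9.22-piCore.tcz /mnt/mmcblk0p2/tce/optional/
tce-load -i mdam.tcz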
-
Thanks for the help.
It does something, but I'm quite sure it's not entirely correct:
tce-load -i mdam.tcz
sudo mdadm --create /dev/md/mdraid --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
This part works fine. However, I would have expected to get a device node at /dev/md/mdraid.
What I get instead is the array at /dev/md127. I then tried to mount it, but with no success.
Then I thought I might have to format it first, so I ran
sudo mkfs.ntfs -f /dev/md127
That worked fine, but it still won't let me mount it; it says the RAID is not active. So I tried to reactivate it:
sudo mdadm --stop /dev/md127
sudo mdadm --assemble --scan
But still no luck. I thought (I have to confess I'm a Windows child :-X) that a reboot couldn't hurt. Well, it did:
Now I don't have the /dev/md127 entry anymore, and the assemble command doesn't recreate it.
Without /dev/md127 I'm back at square one. The RAID configuration is still somewhere on the disks, according to
mdadm --examine --brief --scan --config=partitions
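One thing I might try next, in case --assemble --scan needs a config file to work from, is feeding it that output (just an idea; /etc/mdadm.conf is where I assume mdadm looks, and it won't survive a reboot unless it's added to the backup):
sudo sh -c 'mdadm --examine --brief --scan --config=partitions > /etc/mdadm.conf'
sudo mdadm --assemble --scan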
Another small thing: the installation of mdadm has to be repeated after every reboot.
I already put it in /mnt/mm....p2/tce/optional, but I think I still have to register it somewhere. Of course I could add the installation to /opt/bootlocal.sh, but that seems like a workaround. Maybe there is a nicer way around this.
Does anyone have an idea about the problems above?
-
Hi Tcore
Did you also get:
http://tinycorelinux.net/9.x/armv6/tcz/raid-dm-4.9.22-piCore.tcz
and did you remember to also get:
http://tinycorelinux.net/7.x/armv6/tcz/mdam.tcz.dep
The .dep file tells the system to also load the raid extension.
"Another small thing: the installation of mdadm has to be repeated after every reboot."
Open your tce/onboot.lst file and add mdam.tcz to it.
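From the command line that is just appending one line (the /mnt/mmcblk0p2/tce path is an assumption, use whatever your tce directory actually is):
echo mdam.tcz >> /mnt/mmcblk0p2/tce/onboot.lst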
-
Well, I installed everything.
"Open your tce/onboot.lst file and add mdam.tcz to it."
That worked very well (thanks for that, Rich). I even added raid-dm-4.9.22-piCore.tcz there, in the hope this would change something (although you told me the .dep file should take care of that).
What I do manage by now is that when I run
sudo mdadm --assemble --scan
after a reboot, I get the /dev/md127 device back. Mounting that thing is a different story, though. I tried:
mount /dev/md127 -t ntfs-3g -o permissions /mnt/raid
which works perfectly when the devices are not in a RAID configuration.
The message I get is the following:
Error reading bootsector: Input/output error
Failed to sync device /dev/md127: Input/output error
Failed to mount '/dev/md127': Input/output error
NTFS is either inconsistent, or there is a hardware fault, or it's a
SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows
then reboot into Windows twice. The usage of the /f parameter is very
important! If the device is a SoftRAID/FakeRAID then first activate
it and mount a different device under the /dev/mapper/ directory, (e.g.
/dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation
for more details.
Another rather strange thing is the effect the mdadm --assemble --scan command has: before I run it, the external hard drives show up as sda1 and sdb1. Afterwards, I get the following:
Disk /dev/sda: 932 GB, 1000204883968 bytes, 1953525164 sectors
121601 cylinders, 255 heads, 63 sectors/track
Units: cylinders of 16065 * 512 = 8225280 bytes
Device Boot StartCHS EndCHS StartLBA EndLBA Sectors Size Id Type
/dev/sda1 0,1,1 1023,254,63 63 1953520064 1953520002 931G 7 HPFS/NTFS
Disk /dev/sdb: 932 GB, 1000204883968 bytes, 1953525164 sectors
121601 cylinders, 255 heads, 63 sectors/track
Units: cylinders of 16065 * 512 = 8225280 bytes
Device Boot StartCHS EndCHS StartLBA EndLBA Sectors Size Id Type
/dev/sdb1 0,1,1 1023,254,63 63 1953520064 1953520002 931G 7 HPFS/NTFS
Disk /dev/md127: 931 GB, 1000068022272 bytes, 1953257856 sectors
244157232 cylinders, 2 heads, 4 sectors/track
Units: cylinders of 8 * 512 = 4096 bytes
Device Boot StartCHS EndCHS StartLBA EndLBA Sectors Size Id Type
/dev/md127p1 20 365,99,47 371,114,37 6579571 1924427647 1917848077 914G 70 Unknown
Partition 1 does not end on cylinder boundary
/dev/md127p2 65 288,115,51 364,116,50 1953251627 3771827541 1818575915 867G 43 Unknown
Partition 2 does not end on cylinder boundary
/dev/md127p3 20 288,116,47 372,101,51 225735265 225735274 10 5120 72 Unknown
Partition 3 does not end on cylinder boundary
/dev/md127p4 0,0,0 0,0,0 2642411520 2642463409 51890 25.3M 0 Empty
Partition 4 does not end on cylinder boundary
What I find a little strange is that I never told the system to create four partitions. I could live with that, but there are no md127pX entries under /dev, so they cannot be mounted. The second strange thing is that there should be a RAID 1 array on it, and to me it looks like there is too much disk space in total for that.
So I asked myself: do I have to run mkfs.ntfs before mdadm --create, or after? From my understanding it would be after the RAID is created. Or am I wrong here?
-
Well, I did some research; here is what I found:
First, a partition has to be created on each HDD. Next, the RAID array can be created. Finally, the filesystem has to be created on the RAID array.
So far so good.
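In command form the sequence would look something like this (just a sketch; the device names and the /mnt/raid mount point are my assumptions for two blank USB drives):
# 1. create one partition on each drive, e.g. with fdisk /dev/sda and fdisk /dev/sdb
# 2. build the mirror out of the two partitions
sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# 3. create the filesystem on the array device, not on the member partitions
sudo mkfs.ntfs -f /dev/md0
# 4. mount the array
sudo mkdir -p /mnt/raid
sudo mount -t ntfs-3g -o permissions /dev/md0 /mnt/raid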
After running mdadm --create, I checked with mdadm --detail how my RAID is doing.
sudo mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Mon Feb 11 20:07:11 2019
Raid Level : raid1
Array Size : 976628928 (931.39 GiB 1000.07 GB)
Used Dev Size : 976628928 (931.39 GiB 1000.07 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Mon Feb 11 20:09:50 2019
State : clean, resyncing
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Resync Status : 0% complete
Name : RaspberryNAS:0 (local to host RaspberryNAS)
UUID : 239fcea3:4e176598:6064387b:6b16267a
Events : 31
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
Looking good so far. Just to double-check how long it would take to synchronise, I checked mdstat:
cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
976628928 blocks super 1.2 [2/2] [UU]
[>....................] resync = 0.1% (1530048/976628928) finish=1084.5min speed=14984K/sec
bitmap: 8/8 pages [32KB], 65536KB chunk
unused devices: <none>
18 hours… well, I've got time! The disk speed seems about right.
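(Checking the estimate: (976628928 - 1530048) KiB remaining ÷ 14984 KiB/s ≈ 65,076 s ≈ 1085 min, i.e. roughly 18 hours.)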
I then left the system to it for a while, even though it should already be possible to create a filesystem on the array.
After about 10 minutes I got this:
tc@RaspberryNAS:~$ sudo mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Mon Feb 11 20:07:11 2019
Raid Level : raid1
Array Size : 976628928 (931.39 GiB 1000.07 GB)
Used Dev Size : 976628928 (931.39 GiB 1000.07 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Mon Feb 11 20:09:56 2019
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 1
Spare Devices : 0
Number Major Minor RaidDevice State
0 8 1 0 active sync
2 0 0 2 removed
1 8 17 - faulty
Well, the drives are new, which is why I don't think it's them. My current best guess is that the power supply to the drives is only borderline sufficient. I'm using a USB hub that is powered externally by another USB charger; it's more of a travel setup. I got myself another (more powerful) USB hub to rule that out.
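One way I plan to double-check that theory is to look at the kernel log right after the array degrades; something like this should show whether a drive dropped off the USB bus (the exact messages will vary):
dmesg | tail -n 40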
If anyone has seen similar behaviour with external HDDs, feel free to share.
-
Hi Tcore
I looked at a few examples online and they all seem to avoid letting mdadm figure out what to do on its own. They spell it out, like:
sudo mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sda1 /dev/sdb1
Maybe there is some useful information here:
https://blog.alexellis.io/hardened-raspberry-pi-nas/#30theraidarray
Ignore the part about editing fstab.
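Since the fstab approach doesn't apply here, one option is to assemble and mount from /opt/bootlocal.sh instead; just a sketch, reusing the mount options from your earlier post (note the array may come back as /dev/md127 rather than /dev/md0 after a reboot, so check /proc/mdstat first):
mdadm --assemble --scan
mkdir -p /mnt/raid
mount -t ntfs-3g -o permissions /dev/md0 /mnt/raid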
-
Well, it was the power to the USB hub. After a few minutes of operation, the power failed for a second and the drives powered down.
I replaced the AC/DC converter with a more powerful one, and now it works like it should.
Thanks for your help, Rich.