
Author Topic: mdadm on piCore 9  (Read 2748 times)

Offline Tcore

  • Newbie
  • *
  • Posts: 6
mdadm on piCore 9
« on: February 04, 2019, 12:05:01 PM »
Hello everyone
I’m new to piCore so please forgive my silly question:
In an early post from 2015 I read that mdadm.tcz was built for piCore 6.1. However, I can't find it in piCore version 9.0.3. Or has it been replaced by something else?
My goal is to run a simple RAID1 configuration on piCore, using two external hard drives.
Has anyone ever done something similar?
Best regards
Serge


Offline Juanito

  • Administrator
  • Hero Member
  • *****
  • Posts: 14516
Re: mdadm on piCore 9
« Reply #1 on: February 04, 2019, 11:13:05 PM »
You could try using the mdadm extension from piCore-6.x in piCore-9.x and see if it works.
« Last Edit: February 04, 2019, 11:18:14 PM by Juanito »

Offline Tcore

  • Newbie
  • *
  • Posts: 6
Re: mdadm on piCore 9
« Reply #2 on: February 06, 2019, 10:08:35 AM »
I would love to, but I only found a post that indicates mdadm was built for piCore 6.1: http://forum.tinycorelinux.net/index.php/topic,19261.msg118778.html#msg118778

However, what I would need is a package to install, or someone who could point me in the right direction.
« Last Edit: February 06, 2019, 10:12:03 AM by Tcore »

Offline Rich

  • Administrator
  • Hero Member
  • *****
  • Posts: 11178
Re: mdadm on piCore 9
« Reply #3 on: February 06, 2019, 12:01:24 PM »
Hi Tcore
Since you haven't received a reply I'll give it a shot.
The  mdadm  extension appears to exist in the  piCore-7.x  repository and it's still mis-named as  mdam.tcz.  You'll need:
http://tinycorelinux.net/7.x/armv6/tcz/mdam.tcz
http://tinycorelinux.net/7.x/armv6/tcz/mdam.tcz.dep
The  .dep  file lists  raid-dm-KERNEL.tcz  which I think you should get from the  piCore-9.x  repository because it's kernel specific.
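Something along these lines might work (untested on my end; the kernel-specific  raid-dm  file name below is taken from the 9.x repository and assumed to match your kernel, so adjust it if needed):

Code: [Select]
# untested sketch: fetch the piCore-7.x mdadm extension plus the 9.x raid modules
wget http://tinycorelinux.net/7.x/armv6/tcz/mdam.tcz
wget http://tinycorelinux.net/7.x/armv6/tcz/mdam.tcz.dep
wget http://tinycorelinux.net/9.x/armv6/tcz/raid-dm-4.9.22-piCore.tcz
# load the kernel modules first, then mdadm itself
tce-load -i raid-dm-4.9.22-piCore.tcz
tce-load -i mdam.tcz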

Offline Tcore

  • Newbie
  • *
  • Posts: 6
Re: mdadm on piCore 9
« Reply #4 on: February 07, 2019, 12:49:57 PM »
Thanks for the help.

It does something, but I'm quite sure it's not entirely correct:

Code: [Select]
tce-load -i mdam.tcz
sudo mdadm --create /dev/md/mdraid --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1

This part works fine. However, I would have expected to end up with a reference in /dev/md/mdraid.
What I get instead is the array at /dev/md127. I then tried to mount it, but with no success.
Then I thought I might have to format it first, so I ran

Code: [Select]
sudo mkfs.ntfs -f /dev/md127

That worked fine, but again it wouldn't let me mount it. It says that the RAID is not active. Then I tried to reactivate it:

Code: [Select]
sudo mdadm --stop /dev/md127
sudo mdadm --assemble --scan

But still no luck. I thought (well, I have to confess I'm a Windows child  :-X) a reboot couldn't hurt. Well, it did:
currently I don't have the /dev/md127 entry any more, and the "assemble" command does not recreate it.
Without /dev/md127 I'm back at the beginning. The RAID configuration is still somewhere on the disks according to

Code: [Select]
mdadm --examine --brief --scan  --config=partitions
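
One thing I have not tried yet is assembling the array by naming the members explicitly instead of scanning; roughly what I have in mind (assuming the member partitions are still /dev/sda1 and /dev/sdb1):

Code: [Select]
# rough idea, assuming the member partitions are still sda1 and sdb1
sudo mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1
# then check whether the array actually came up
cat /proc/mdstat
sudo mdadm --detail /dev/md0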


Another small thing is that the installation of mdadm has to be repeated after every reboot.
I already put it in /mnt/mm....p2/tce/optional, but I think I still have to register it somewhere. Of course I could add the installation to /opt/bootlocal.sh, but that seems like a workaround. Maybe there is a nicer way around this.

Does anyone have an idea about the problem described above?

Offline Rich

  • Administrator
  • Hero Member
  • *****
  • Posts: 11178
Re: mdadm on piCore 9
« Reply #5 on: February 07, 2019, 01:03:08 PM »
Hi Tcore
Did you also get:
http://tinycorelinux.net/9.x/armv6/tcz/raid-dm-4.9.22-piCore.tcz

and did you remember to also get:
http://tinycorelinux.net/7.x/armv6/tcz/mdam.tcz.dep
The  .dep  file tells the system to also load the raid extension.
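The .dep file itself is just a plain text list of extensions to load first; going by the 7.x repository it should contain nothing more than this (KERNEL is a placeholder that gets swapped for your running kernel version when the extension is loaded):

Code: [Select]
$ cat mdam.tcz.dep
raid-dm-KERNEL.tcz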

Quote
Another small thing is, that the installation of mdadm has to be repeated after every reboot.
Open your  tce/onboot.lst  file and add  mdam.tcz  to it.
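A quick way to do that from the command line, assuming your tce directory is on the second SD card partition (typically  /mnt/mmcblk0p2  on a Pi; adjust the path if yours is different):

Code: [Select]
# path assumed to be /mnt/mmcblk0p2 - adjust to wherever your tce directory lives
echo mdam.tcz >> /mnt/mmcblk0p2/tce/onboot.lst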

Offline Tcore

  • Newbie
  • *
  • Posts: 6
Re: mdadm on piCore 9
« Reply #6 on: February 09, 2019, 02:09:17 AM »
Well I installed everything.

Quote
Open your  tce/onboot.lst  file and add  mdam.tcz  to it.

That worked very well (thanks for that, Rich). I even added raid-dm-4.9.22-piCore.tcz there, in the hope that this would change something (although you told me that the dep file should take care of that).

What I do manage now is that when I run

Code: [Select]
sudo mdadm --assemble --scan

after a reboot, I get /dev/md127 back. Mounting that thing, however, is a different story. I tried:

Code: [Select]
mount /dev/md127 -t ntfs-3g -o permissions /mnt/raid

which works perfectly when the devices are not in a RAID configuration.
The message I get is the following:

Quote
Error reading bootsector: Input/output error
Failed to sync device /dev/md127: Input/output error
Failed to mount '/dev/md127': Input/output error
NTFS is either inconsistent, or there is a hardware fault, or it's a
SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows
then reboot into Windows twice. The usage of the /f parameter is very
important! If the device is a SoftRAID/FakeRAID then first activate
it and mount a different device under the /dev/mapper/ directory, (e.g.
/dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation
for more details.

Another rather strange thing is the effect the mdadm --assemble --scan command has: before I run it, the external hard drives are sda1 and sdb1. Afterwards, I get the following:

Quote
Disk /dev/sda: 932 GB, 1000204883968 bytes, 1953525164 sectors
121601 cylinders, 255 heads, 63 sectors/track
Units: cylinders of 16065 * 512 = 8225280 bytes

Device  Boot StartCHS    EndCHS        StartLBA     EndLBA    Sectors  Size Id Type
/dev/sda1    0,1,1       1023,254,63         63 1953520064 1953520002  931G  7 HPFS/NTFS
Disk /dev/sdb: 932 GB, 1000204883968 bytes, 1953525164 sectors
121601 cylinders, 255 heads, 63 sectors/track
Units: cylinders of 16065 * 512 = 8225280 bytes

Device  Boot StartCHS    EndCHS        StartLBA     EndLBA    Sectors  Size Id Type
/dev/sdb1    0,1,1       1023,254,63         63 1953520064 1953520002  931G  7 HPFS/NTFS
Disk /dev/md127: 931 GB, 1000068022272 bytes, 1953257856 sectors
244157232 cylinders, 2 heads, 4 sectors/track
Units: cylinders of 8 * 512 = 4096 bytes

Device     Boot StartCHS    EndCHS        StartLBA     EndLBA    Sectors  Size Id Type
/dev/md127p1 20 365,99,47   371,114,37     6579571 1924427647 1917848077  914G 70 Unknown
Partition 1 does not end on cylinder boundary
/dev/md127p2 65 288,115,51  364,116,50  1953251627 3771827541 1818575915  867G 43 Unknown
Partition 2 does not end on cylinder boundary
/dev/md127p3 20 288,116,47  372,101,51   225735265  225735274         10  5120 72 Unknown
Partition 3 does not end on cylinder boundary
/dev/md127p4    0,0,0       0,0,0       2642411520 2642463409      51890 25.3M  0 Empty
Partition 4 does not end on cylinder boundary

What I find a little strange is that I never told the system to make four partitions. Well, I could live with that, but in the /dev/… folder there are no md127px entries, so they cannot be mounted! The second strange thing is that there should be a RAID 1 array on it; it looks to me as if there is too much disk space altogether for that.

So I asked myself whether I have to run mkfs.ntfs before mdadm --create or after. From my understanding it would be after I create the RAID. Or am I wrong here?

Offline Tcore

  • Newbie
  • *
  • Posts: 6
Re: mdadm on piCore 9
« Reply #7 on: February 11, 2019, 12:28:19 PM »
Well, I did some research; here is what I found:

First, a partition has to be created on each HDD. Next, the RAID array can be created. Finally, the filesystem has to be created on the RAID array.
So far so good.
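
Putting that order into commands, this is roughly what I'm aiming for (NTFS only because that's what I used earlier, and sda1/sdb1 assumed as before):

Code: [Select]
# rough outline, not a polished script
# 1. partition each disk first (e.g. with fdisk), then build the mirror on the partitions
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# 2. put the filesystem on the array, not on the member partitions
sudo mkfs.ntfs -f /dev/md0
# 3. mount the array
sudo mkdir -p /mnt/raid
sudo mount -t ntfs-3g -o permissions /dev/md0 /mnt/raid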

After running mdadm --create, I checked with mdadm --detail how my RAID is doing.

Quote
sudo mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Feb 11 20:07:11 2019
     Raid Level : raid1
     Array Size : 976628928 (931.39 GiB 1000.07 GB)
  Used Dev Size : 976628928 (931.39 GiB 1000.07 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Mon Feb 11 20:09:50 2019
          State : clean, resyncing
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

  Resync Status : 0% complete

           Name : RaspberryNAS:0  (local to host RaspberryNAS)
           UUID : 239fcea3:4e176598:6064387b:6b16267a
         Events : 31

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1

Looking good so far. Just to double-check how long it would take to synchronise, I checked mdstat:

Quote
cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      976628928 blocks super 1.2 [2/2] [UU]
      [>....................]  resync =  0.1% (1530048/976628928) finish=1084.5min speed=14984K/sec
      bitmap: 8/8 pages [32KB], 65536KB chunk

unused devices: <none>

18 hours… well, I've got time! The disk speed seems about right.
I then just left the system to it, although it should already be able to take a filesystem.
After about 10 minutes I got this:

Quote
tc@RaspberryNAS:~$ sudo mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Feb 11 20:07:11 2019
     Raid Level : raid1
     Array Size : 976628928 (931.39 GiB 1000.07 GB)
  Used Dev Size : 976628928 (931.39 GiB 1000.07 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Mon Feb 11 20:09:56 2019
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 1
  Spare Devices : 0

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync
       2       0        0        2      removed

       1       8       17        -      faulty

Well, the drives are new, which is why I don't think it's them. My current best guess is that the power supply to the drives is borderline sufficient. I'm using a USB hub that is powered externally by another USB charger; it's more of a travelling setup. I got myself another (more powerful) USB hub to rule that out.

If anyone has seen similar behaviour from external HDDs, feel free to share.

Offline Rich

  • Administrator
  • Hero Member
  • *****
  • Posts: 11178
Re: mdadm on piCore 9
« Reply #8 on: February 11, 2019, 09:39:55 PM »
Hi Tcore
I looked at a few examples online and they all seem to avoid letting  mdadm  figure out what to do. They spell it out like:
Code: [Select]
sudo mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sda1 /dev/sdb1
Maybe there is some useful information here:
https://blog.alexellis.io/hardened-raspberry-pi-nas/#30theraidarray
Ignore the part about editing  fstab.
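If you want the array assembled and mounted automatically at boot, a couple of lines at the end of  /opt/bootlocal.sh  should do instead (just a sketch; the mount point and options are taken from your earlier posts):

Code: [Select]
# sketch for /opt/bootlocal.sh - runs as root at boot
mdadm --assemble --scan
mkdir -p /mnt/raid
mount -t ntfs-3g -o permissions /dev/md0 /mnt/raid

Remember to run  filetool.sh -b  afterwards so the change to bootlocal.sh is included in your backup.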
« Last Edit: February 12, 2019, 06:46:29 AM by Rich »

Offline Tcore

  • Newbie
  • *
  • Posts: 6
Re: mdadm on piCore 9
« Reply #9 on: February 12, 2019, 11:09:54 AM »
Well, it was the power to the USB hub. After some minutes in operation, the power failed for a second and the drives powered down.

I replaced the AC/DC converter with a more powerful one and now it works like it should.

Thanks for your help, Rich