Scenario: I'm going to be putting together a file server rig with the following soft-RAID (mdadm) configuration:
5x 1TB drives = 4TB single partition under RAID-5 using the motherboard's 6x SATA-III bridge
3x 1.5TB drives = 3TB single partition under RAID-5 using a SiL PCI 4x SATA-II card with onboard soft-RAID
Multiple 600W/80%-efficiency power supplies are chained together to power both the system and the drives.
The OS is on its own drive on an IDE channel shared with a PATA CD/DVD drive; the machine has 16GB RAM and a dual-core processor.
I'll be fabricating a fan-forced heat-sink for each array to keep drive temperatures down.
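For concreteness, here's a minimal sketch of how I expect to create the 5x1TB array with mdadm (the /dev/sd[b-f]2 member names are placeholders, not the final device layout):

    # Build the 5-drive RAID-5 from the type-FD member partitions
    mdadm --create /dev/md0 --level=5 --raid-devices=5 \
          /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2 /dev/sdf2
    # Record the array so it assembles at boot (config path as on CentOS)
    mdadm --detail --scan >> /etc/mdadm.conf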
- I haven't tested which gets better results, software RAID under *NIX (mdadm) or the card's onboard soft-RAID, nor have I tested whether the card's RAID supports non-MBR layouts.
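For the comparison itself, I'd plan on running an identical large-file throughput test against each configuration; /mnt/test is a placeholder mount point:

    # Sequential write, bypassing the page cache
    dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=8192 oflag=direct
    # Drop caches, then time a sequential read back
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/mnt/test/bigfile of=/dev/null bs=1M iflag=direct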
- The machine will rarely be powered down; once configuration and testing are complete, a warm reboot will happen perhaps 1-3 times per month at most.
- One of the goals is to dual-boot a mainstream OS (CentOS7) with TC, where TC will eventually become the primary operating system; until then the setup will be used to compare test results between the two. The kernel and mdadm versions will be kept close to one another for compatibility.
- There are debates online about spinning down drives in general: some claim it's the best thing since sliced bread, others that the spin-up/spin-down cycles are unnecessary wear and tear that leads to early hardware failure. Personal experiences are requested as to what would be best for longevity as opposed to power savings. One of the arrays will be used frequently (development) while the other may see use once or twice per day.
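Whichever way opinion falls, the spin-down behaviour itself is easy to set per drive with hdparm; the timer values below are only examples:

    # Disable spin-down entirely (drive stays spinning)
    hdparm -S 0 /dev/sdb
    # ...or spin down after 30 minutes idle (241 maps to 30 min in hdparm's encoding)
    hdparm -S 241 /dev/sdb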
- With mdadm, the drives' superblocks record the machine's hostname (homehost). If it became necessary to transport a RAID array to a different machine (kernel/mdadm being similar in versions and support), what would be any foreseen problems in making such a transfer?
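My working assumption for the mechanics of the move (device and host names are placeholders): the superblock's UUID identifies the array, so on the new machine it should amount to examining the members and re-assembling, rewriting the recorded homehost if needed:

    # Inspect a member's superblock (shows array UUID and recorded homehost)
    mdadm --examine /dev/sdb2
    # Assemble on the new machine, updating the hostname stored in the superblocks
    mdadm --assemble /dev/md0 --update=homehost --homehost=newbox /dev/sd[b-f]2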
- Initial single-user testing of the first array (5x1TB) shows 105+MB/s writes and 140+MB/s reads through soft-RAID5 using large files. The physical drives were simply fdisk'ed (MBR) with two partitions each: a 1MB EXT4 partition (somewhere I could write a few text files noting which array/LUN the drive belongs to, plus a copy of the mdadm config), followed by an autodetect RAID partition (type=FD) of equal size on all drives, since one of the drives had a few bytes more than the others. The assembled md device was then given a single GPT partition using parted. Is this layout efficient, or would it be more suitable to partition the physical drives with GPT as well?
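For comparison, the all-GPT member layout I'm weighing would look something like this per physical drive (partition names and sizes are illustrative only):

    parted -s /dev/sdb mklabel gpt
    parted -s /dev/sdb mkpart notes ext4 1MiB 9MiB    # small notes/label partition
    parted -s /dev/sdb mkpart member 9MiB 931GiB      # equal-sized RAID member
    parted -s /dev/sdb set 2 raid on                  # GPT equivalent of type FD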
- Finally, these are all SATA-based drives and controllers. I was also wondering what support, if any, TC has for SCSI hardware using older Adaptec AHxxxx controllers, for purposes of hard drive(s), media changers and/or tape drives, as I was also considering putting together some older SCSI arrays and backup devices.
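My assumption is that cards of that generation depend on the aic7xxx driver, so the first check on TC would be whether the kernel ships it (the module names here are assumptions about the exact card models):

    # Is the Adaptec driver present in the running kernel?
    find /lib/modules/$(uname -r) -name 'aic7xxx*'
    # Load it along with tape (st) and media-changer (ch) support
    modprobe -a aic7xxx st ch
    cat /proc/scsi/scsi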
Thanks for your thoughts and efforts!