On my Dell R510 storage server, I used to run a RAID-10 configuration. With twelve 2TB WD Red drives, that gave me about 12TB usable. My collection and projects kept growing, and I was starting to run out of space. I watched the free space shrink from 1TB, to 700GB, to 500GB, and finally 200GB. That's when I decided to redo the array as RAID-6.
While RAID-10 offered me more protection and some speed benefits, I really didn't need either for what I was using it for. I'm not pushing that much traffic through my home network, and the workload wasn't demanding enough to necessitate that level of protection.
Once I made sure everything was backed up, I rebooted the server, entered the RAID card's BIOS, and deleted the existing RAID-10 volume. I re-created it as a RAID-6 volume with around 18TB usable, saved the config, and let the controller initialize the array. After a few minutes it completed, and I restarted the server.
The next tasks were to:
- Find the new block device
- Format it as ext4
- Mount it
- Add an entry for it to /etc/fstab
- Reboot and confirm it comes back up
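The steps above can be sketched roughly like this. It's a minimal sketch, not my exact commands: the device name `/dev/sdb` and mount point `/mnt/storage` are assumptions, and yours will differ, so check `lsblk` first.

```shell
# Hypothetical device and mount point -- confirm the real device with lsblk
DEV=/dev/sdb
MNT=/mnt/storage

# Format the new RAID-6 volume as ext4 (destructive -- double-check $DEV!)
sudo mkfs.ext4 -L storage "$DEV"

# Create the mount point and mount it
sudo mkdir -p "$MNT"
sudo mount "$DEV" "$MNT"

# Add an fstab entry keyed on UUID so it survives reboots and device renames
UUID=$(sudo blkid -s UUID -o value "$DEV")
echo "UUID=$UUID  $MNT  ext4  defaults  0  2" | sudo tee -a /etc/fstab

# Sanity check: the fstab entry parses and the filesystem is mounted
sudo mount -a
findmnt "$MNT"
```

Using the UUID in /etc/fstab instead of the raw device name avoids surprises if the kernel enumerates disks in a different order after a reboot.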
After confirming the mount worked locally, I accessed it from my computer and VMs to make sure it was reachable over the network too.
Then came the long process of re-syncing the external hard drives connected to my desktop back to the server. Since the data was going from the externals over USB 3 to my desktop, and then from the desktop through the switch to the server, I wasn't seeing true gigabit speed. It usually hovered around 800-900 Mbit/s. That wasn't too bad, as it only took around two days to re-scan everything and send it over the network.
After that, I confirmed all the data was there and then it was business as usual!
It currently holds all my media, backups, datasets, and a 512GB iSCSI volume for my DB server.
Here are some pictures showing the current setup.