Assuming you aren’t striping, up to 36 TB. If you follow even halfway decent practices with basically any kind of RAID other than 0, hopefully 0 Bytes.
The main worry with stuff like this is that it potentially takes a while to recover from a failed drive even if you catch it in time (alert systems are your friend). And 36 TB is a LOT of data to work through and recover, which means a LOT of stress on the remaining drives for a few days.
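For a rough sense of scale, here’s a back-of-envelope sketch in Python; the throughput figure is an assumption, and a real rebuild on an array that’s still serving traffic will usually be slower:

    # Rough rebuild-time estimate; the rate below is an assumption, not a measurement.
    drive_tb = 36                  # capacity of the failed drive
    rebuild_mb_per_s = 150         # assumed sustained rebuild rate while the array stays in use

    seconds = (drive_tb * 1e12) / (rebuild_mb_per_s * 1e6)
    print(f"~{seconds / 3600:.0f} hours (~{seconds / 86400:.1f} days)")
    # ~67 hours (~2.8 days) with these numbers, before any slowdowns or read retries.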
I think you mean “are striping”.
But even with striping you have backups, right? Local redundancy is for availability, not durability.
Words hard
And I would go so far as to say that nobody who is buying 36 TB spinners is doing offsite backups of that data. Any org doing offsites of that much data is almost guaranteed to be using a tape drive of some form because… tape drives pay for themselves pretty fast and are much better for actual cold storage backups.
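To put a very rough number on “pay for themselves” (every figure here is an assumption, not a quote; actual pricing varies a lot by generation and vendor):

    # Ballpark tape-vs-disk break-even; every number below is an assumption.
    tape_drive_cost = 5000     # assumed up-front cost of an LTO drive
    tape_media_per_tb = 8      # assumed $/TB for LTO cartridges
    hdd_per_tb = 18            # assumed $/TB for large-capacity HDDs

    savings_per_tb = hdd_per_tb - tape_media_per_tb
    breakeven_tb = tape_drive_cost / savings_per_tb
    print(f"break-even around {breakeven_tb:.0f} TB of backed-up data")  # ~500 TB with these numbers

Past that point every additional offsite copy gets cheaper on tape, which is the “pay for themselves pretty fast” part.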
Seagate et al. keep pushing these truly massive spinners and I really do wonder who the market is for them. They are overly expensive for cold storage, and basically any setup with that volume of data is going to be better off slowly rotating out smaller drives: partially because of recovery times, and partially because nobody but a sponsored youtuber is throwing out their 24 TB drives just because 36 TB drives hit the market.
I assume these are a byproduct of some actually useful tech, sold to help offset the costs to whoever maybe REALLY REALLY REALLY wants 72 TB in their four bay Synology.
I wouldn’t buy a Synology, but either way I’d want a 5 or 6 bay for RAID 6 with two parity drives. Going from 4 bay (RAID 6 or 10) to 5 bay (RAID 6) is 50% more user data for 25% more drives. I wouldn’t do RAID 5 with drives of this size.
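The capacity math behind that, as a quick sketch (assuming equal-size 36 TB drives and ignoring filesystem overhead):

    # Usable capacity for RAID 6 (two drives' worth of parity) with equal-size disks.
    def raid6_usable_tb(bays, drive_tb=36):
        return (bays - 2) * drive_tb

    four_bay = raid6_usable_tb(4)   # 72 TB usable, same as a 4-bay RAID 10
    five_bay = raid6_usable_tb(5)   # 108 TB usable

    print(f"{five_bay / four_bay - 1:.0%} more user data")   # 50%
    print(f"{5 / 4 - 1:.0%} more drives")                     # 25%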
It would probably take days to rebuild the array.
It’s also important to note that RAID (or alternatives such as unRAID) is not a backup system and should not be relied on as one. If you have a severe brownout that fries more than two or three drives at once, for example, you will lose data if you’re not backing up.