• 2 Posts
  • 111 Comments
Joined 1 month ago
Cake day: June 22nd, 2025

  • It isn’t that limited, but ZFS on ARM seems to perform much worse, and you often don’t get a full picture of the true system load. The biggest limitation is the I/O: it is very bad for 5 drives in raidz1, where the data is striped across all 5 drives along with parity. The Pi can only do around 500 MB/s to an NVMe drive, while many other platforms will see 3000 MB/s, and that 500 MB/s is shared across all 5 drives, which is why it suffers so much in this case. The most you’d get is around 10 MB/s of real transfer, I reckon, and that is roughly what you are seeing. You’d be better off with 3 larger drives in raidz1.
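A rough sketch of the arithmetic behind that estimate, using the figures from the comment above (the 500 MB/s bus figure and the 5-drive raidz1 layout; the function names are mine, and real-world ZFS overheads such as record size, sync writes, and checksumming will cut the numbers much further, which is why observed throughput is far lower):

```python
def per_drive_bandwidth(bus_mb_s: float, n_drives: int) -> float:
    """Bandwidth each drive gets when one shared bus is split evenly."""
    return bus_mb_s / n_drives

def raidz1_data_fraction(n_drives: int) -> float:
    """In raidz1, one drive's worth of each stripe is parity, not data."""
    return (n_drives - 1) / n_drives

bus = 500.0   # MB/s total NVMe bandwidth, the Pi figure quoted above
drives = 5

share = per_drive_bandwidth(bus, drives)        # raw bus share per drive
usable = bus * raidz1_data_fraction(drives)     # data bandwidth after parity
print(f"{share:.0f} MB/s per drive, {usable:.0f} MB/s of data before ZFS overheads")
```

With 3 drives instead of 5, each drive gets a larger share of the same bus, which is part of why the 3-drive suggestion helps.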


  • This is the limit of a slow interface plus 5 drives; see my other reply about enabling faster PCIe speeds. Because of how ZFS works, 5 drives are slower than 3: they take more cache, and write speeds especially will be quite a lot slower. With 5 drives and 16 GB of RAM you can easily have a ZFS cache of 12 GB to help it along, and I’d guess this is why you are getting large gaps between writes. As someone else said, a Pi doesn’t do well in this case, but I reckon you can improve it. However, as also said, it is never going to be a speedy solution: secure and safe for your data, but not fast.
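For reference, the ZFS cache mentioned here is the ARC, and its ceiling can be pinned with the `zfs_arc_max` module parameter. A sketch of what that looks like on Linux, assuming the 12 GB figure from the comment (the exact value and whether you want to cap it at all depend on your workload):

```
# /etc/modprobe.d/zfs.conf
# Cap the ZFS ARC at 12 GiB (12 * 1024^3 bytes) on a 16 GB machine.
# Takes effect after the zfs module is reloaded or the system is rebooted.
options zfs zfs_arc_max=12884901888
```

Lowering `zfs_arc_max` leaves more RAM for the rest of the system; raising it can smooth out the write bursts the parent comment describes, at the cost of memory pressure elsewhere.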