I’m personally leaning toward a small DIY rack stuffed with commodity HDDs off eBay, with an LVM volume group spanned across a bunch of RAID1 arrays. I don’t want any complex architectural solution, since my homelab’s scale is always exactly 1. As far as I can tell, this has few obvious drawbacks. What do you think?
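For concreteness, a minimal sketch of the mdadm + LVM layout described above, assuming two pairs of disks (all device and volume names here are placeholders):

```shell
# Build two RAID1 mirrors from two pairs of disks.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd

# Turn each mirror into an LVM physical volume.
pvcreate /dev/md0 /dev/md1

# One volume group spanning both mirrors...
vgcreate tank /dev/md0 /dev/md1

# ...and a logical volume carved out of it.
lvcreate -L 500G -n data tank
mkfs.ext4 /dev/tank/data
```

Growing later means creating another mirror pair, `pvcreate` on it, and `vgextend tank /dev/md2` — no restructuring of the existing volumes.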

    • melfie@lemy.lol · 4 days ago

      Ha, I went down the whole Ceph and Longhorn path as well, then ended up with hostPath and btrfs. Glad I’m not the only one who considers the former options too much of a headache after fully evaluating them.

    • MrModest@lemmy.world · 4 days ago

      Why btrfs and not ZFS? In my info bubble, btrfs has a reputation as an unstable FS, with people ending up with unrecoverable data.

      • squinky@sh.itjust.works · 3 hours ago

        All I know about ZFS is that it’s under Oracle’s CDDL license, which is incompatible with the GPL, so it can’t ship in the mainline kernel. I hear it’s good, and it seems popular, I just avoid anything Oracle-adjacent.

        As for btrfs, the only thing that’s claimed to be unstable is RAID5/6, and people who run it in production say those claims are overblown. I don’t; I use it in RAID1 mode. But RAID1 in btrfs doesn’t require a bunch of matching drives. It lets you glom together a number of mismatched disks and just keeps two copies of every chunk on different devices. So it’s a nice cross between RAID and JBOD.
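        That mismatched-disk RAID1 setup is a one-liner; a hedged sketch, with hypothetical device names of deliberately different sizes:

        ```shell
        # raid1 for both data (-d) and metadata (-m): two copies of
        # every chunk, each copy on a different device.
        mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd
        mount /dev/sdb /mnt/pool

        # Usable capacity is roughly half the combined total,
        # not capped at the smallest drive as in classic RAID1.
        btrfs filesystem usage /mnt/pool
        ```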

      • unit327@lemmy.zip · 4 days ago

        Btrfs used to be easier to install because it’s part of the mainline kernel, while ZFS required out-of-tree module shenanigans, though I think that has changed now.

        Btrfs also just works with whatever mismatched drive sizes you throw at it, and adding more later is easy. This used to be impossible with ZFS pools, but I think it’s a feature now?
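        Adding a drive later really is just two commands — a sketch assuming the pool from above is mounted at a placeholder path:

        ```shell
        # Attach the new (hypothetical) disk to the existing filesystem.
        btrfs device add /dev/sde /mnt/pool

        # Rebalance so existing chunks are redistributed across
        # all devices; safe to run while the filesystem is in use.
        btrfs balance start /mnt/pool
        ```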

      • ikidd@lemmy.world · 4 days ago

        Just the RAID5/6 modes are shit. And its weird willingness to let you boot off a degraded RAID without telling you a drive is borked.
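        Given that silence, you end up checking for a dead mirror member yourself. A sketch of the usual checks (mount point is a placeholder; the stats command works well from a cron job):

        ```shell
        # Per-device error counters; any non-zero value means trouble.
        btrfs device stats /mnt/pool

        # Periodic scrub verifies checksums and repairs bad blocks
        # from the surviving raid1 copy.
        btrfs scrub start /mnt/pool
        btrfs scrub status /mnt/pool
        ```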

      • non_burglar@lemmy.world · 4 days ago

        That is apparently not the case anymore, but ZFS is certainly richer in features and more battle-tested.