
conversely, running a firewall on something like ZFS also sounds like too much. Ideally I'd want a read-only root FS, with maybe /etc and /var managed by an overlay.




Sounds like overcomplicating in the name of simplification. ZFS is a good, reliable, general-purpose system; often the right answer is to just put everything on ZFS and get on with your life.

I've had more problems with zfs than all other filesystems combined, including FAT. It's IMO overkill for a root partition.

Problems like? I run zfs on 20gb VMs and a 100tb pool and I’ve never had a problem that wasn’t my own fault. I love root on zfs, you can snapshot your entire OS at a whim. The only other way to get that I know of is btrfs which genuinely does have well known issues.
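For anyone who hasn't tried it, a rough sketch of what that looks like -- the dataset names are just an example, assuming a typical root-on-ZFS layout under rpool/ROOT:

    # recursively snapshot the whole OS in one shot
    zfs snapshot -r rpool/ROOT@before-tinkering

    # list what you have
    zfs list -t snapshot -r rpool/ROOT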

Not the OP, but I have a similar experience with ZFS. Over 22 years of maintaining servers, I have had serious issues exclusively with ZFS. My pool is there, but it doesn't want to mount no matter how much IRC/reddit/SO/general googling I apply to try and help it boot. After it happened for the second time, I removed ZFS from the list of technologies I want to work with (I still have to use it, due to Proxmox, but without the fascination).

As another anecdote:

I've been working with systems for a long time, too. I've screwed things up.

I once somehow decided that using an a.out kernel would be a good match for a Slackware diskset that used ELF binaries. (It didn't go well.)

In terms of filesystems: I've had issues with FAT, FAT32, HPFS, NTFS, EXT2, ReiserFS, EXT3, UFS, EXT4, and exFAT. Most of those filesystems are very old now, but some of these issues have trashed parts of systems beyond comprehension, and those issues are part of my background in life whether I like it or not.

I've also had issues with ZFS. I've only been using ZFS in any form at all for about 9 years so far, but in that time I've always been able to wrest the system back into order, even on the seemingly most unlikely, least-resilient, garbage-tier hardware -- including after unlikely problems that I introduced myself by dicking around with stuff in unusual ways.

Can you elaborate upon the two particular unrecoverable issues you experienced?

(And yeah, Google is/was/has been poisoned for a long time as it relates to ZFS. There was a very long streak of people proffering bad mojo about ZFS under an air of presumed authority, and this hasn't been helpful to anyone. The sheer perversity of the myths that have popularly surrounded ZFS is profoundly bizarre, and does not help with finding actual solutions to real-world problems.

The timeline is corrupt.)


>The sheer perversity of the myths that have popularly surrounded ZFS is profoundly bizarre

Cyberjock sends his regards, I'm sure.


> Over 22 years of maintaining servers, I have had serious issues exclusively with ZFS.

I've been using ZFS since it initially debuted in Solaris 10 6/06 (also: zones and DTrace), and later on FreeBSD and Linux, and I've never had issues with it. ¯\_(ツ)_/¯


Not to be deliberately argumentative, but still no concrete examples of zfs failures are shown, just hand-wavy "I had issues I couldn't google my way out of". I've never heard of a healthy pool not mounting, and I've never heard of a pool becoming unhealthy without a hardware failure of some sort. To the contrary, zfs has perfectly preserved my bytes for over a decade now in the face of shit, failing hardware, from memory that throws errors when clocked faster than stock JEDEC speeds to brand new hard drives that just return garbage after reporting successful writes.

> I’ve never had a problem that wasn’t my own fault.

I'm including that. zfs takes more skill to manage properly.


From my understanding of ZFS:

When it is treated as just a filesystem, then it works about like any other modern filesystem does.

ZFS features like scrubs aren't necessary. Multiple datasets aren't necessary -- using the one created by default is fine. RAIDZ, mirrors, slog, l2arc: None of that is necessary. Snapshots, transparent compression? Nope, those aren't necessary functions for proper use, either.

There are a lot of features that a person may elect to use, but ZFS is no worse than, say, ext4 or FFS2 when those features are ignored completely.
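To make that concrete, a minimal sketch of ZFS treated as "just a filesystem" -- the pool and device names here are made up:

    # one disk, one pool, one default dataset mounted at /tank
    zpool create tank /dev/sdb

    # from here on it's ordinary file I/O, same as ext4 or FFS2
    cp -a /home/me/projects /tank/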

(It can be tricky to get Linux booting properly with a ZFS root filesystem. But that difficulty is not shared at all with FreeBSD, wherein ZFS is a native built-in.)


Linux takes more skill to manage than Windows or macOS, yet we all know Linux. zfs is _the_ one true filesystem, and the last one you need to know. Besides that, to know zfs is to have a deeper understanding of what a filesystem is and does.

I will admit though, to truly get zfs you need to change how you think about filesystems.


Interesting, can you share specifics?

> Ideally I'd want a read-only root FS with maybe an /etc and /var managed by an overlay.

OpenZFS 2.2 added support for overlays, so you can have the main pool(s) mounted as read-only:

* https://github.com/openzfs/zfs/releases/tag/zfs-2.2.0
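For example, a rough sketch of a read-only root with a writable /etc overlay -- the pool/dataset names are made up, and this assumes Linux with OpenZFS >= 2.2:

    # keep the root dataset read-only
    zfs set readonly=on rpool/ROOT/default

    # one writable dataset holding the overlay's upper/work dirs for /etc
    zfs create -o mountpoint=/var/overlay/etc rpool/overlay-etc
    mkdir -p /var/overlay/etc/upper /var/overlay/etc/work

    mount -t overlay overlay \
      -o lowerdir=/etc,upperdir=/var/overlay/etc/upper,workdir=/var/overlay/etc/work \
      /etc

Same idea for /var, with its own dataset.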


> conversely, running a firewall on something like ZFS also sounds like too much.

this makes no sense. firewalling does not touch the filesystem much, if at all.

what FS is being used is essentially orthogonal to firewall performance.

if anything, having a copy-on-write filesystem like ZFS on your firewall/router means you have better integrity in case of configuration mistakes and OS upgrades (just roll back the dataset to the previous snapshot!)
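Something like this, assuming a root dataset named rpool/ROOT/default (names are illustrative):

    # before the upgrade or config change
    zfs snapshot rpool/ROOT/default@pre-upgrade

    # if it goes sideways
    zfs rollback rpool/ROOT/default@pre-upgrade   # add -r if newer snapshots exist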


my point was that if a hardware vendor were to approach this problem, they'd probably have two (prev/next) partitions that they write firmware to, plus separate mounts for config and logs, rather than a kitchen-sink CoW FS

What aspect of ZFS prevents the kind of layout that you envision, do you suppose?

ZFS works just fine with partitions, if that's how a person/company/org wants to use it today.
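For what it's worth, one way to sketch that prev/next layout on ZFS -- the dataset names are made up, and the bootfs bit is roughly how FreeBSD/illumos boot environments already work:

    # two OS "slots" plus separate config and log datasets
    zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/slot-a
    zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/slot-b
    zfs create -o mountpoint=/cfg  rpool/cfg
    zfs create -o mountpoint=/logs rpool/logs

    # point the loader at whichever slot should boot next
    zpool set bootfs=rpool/ROOT/slot-b rpool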



