There is a similar question on the site which must not be named.
My question has a slightly different spin, though:
It seems to me that one of the biggest selling points of Nix is basically infrastructure as code. (Of course being immutable etc. is nice by itself.)
I wonder how big the delta is for people like me: all my desktops/servers are based on Debian stable with heavy customization, but 100% automated via Ansible. It seems to me that a lot of the vocal Nix users (fans) switched from a pet desktop, discovered IaC via Nix, and are in the end raving about IaC (which Nix may or may not be a good vehicle for).
When I gave Silverblue a try, I totally loved it, but to configure it for my needs I basically would have needed to configure the host system, some containers and overlays to replicate my Debian setup, so for me it seemed like too much effort to end up roughly where I started. (And of course I can use distrobox/podman and have containerized environments on Debian without trouble.)
Am I missing something?
I would separate NixOS from other immutable distros. NixOS is really about giving you a blank slate and letting you fully configure it.
You do that configuration using a declarative config language that can be far more idempotent than Ansible. It can also define packages that are well contained and don't rely on dynamic linking being set up by manually installing other packages.
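To make that concrete, here is a minimal sketch of what such a configuration module might look like (the hostname, user and packages are invented, and exact option names shift between NixOS releases):

```nix
{ config, pkgs, ... }:

{
  # Made-up example host, not a recommendation.
  networking.hostName = "workstation";

  # Packages live in isolated store paths instead of a shared /usr,
  # so adding or removing one cannot break another.
  environment.systemPackages = with pkgs; [ git htop ripgrep ];

  # Services are described as options; `nixos-rebuild switch`
  # converges to this description every time, rather than applying
  # a one-off task list.
  services.openssh = {
    enable = true;
    settings.PasswordAuthentication = false;
  };

  users.users.alice = {
    isNormalUser = true;
    extraGroups = [ "wheel" ];
  };
}
```

Rebuilding against this file repeatedly always produces the same system, which is what the idempotence point above is getting at.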
Immutable distros, on the other hand, really have no advantage over your setup and will probably feel more restrictive. The main use I see for them is someone new or lazy who wants to get a working system up and running quickly.
Well, maybe you yourself are too new to recognize some of the appeal ;)
One large advantage of Silverblue is that the composition of the OS does not take place on the target machine. That means all the issues that could arise will not arise on the target machine and can be dealt with beforehand. In the simple case this just means enjoying vanilla Silverblue without having to think about possibly borking the machine. In an advanced use case it could mean building the OS images in a GitLab CI/CD pipeline (with the mature tooling that already exists for Docker etc.), then having automated tests in the pipeline ensure that everything important works as expected. Only if the tests pass does the image get pushed to the repository's image registry, from which the target machines fetch it automatically and rebase to it.
No, I fully understand it. But if you build the whole system such that every package is isolated, none of the packages interfere with each other, and every package is tested across a wide array of architectures, you can just as safely put together your ideal OS setup and don't have to deal with being locked into a very simple and bare system.
The right place for immutable OSes is when you're using them as servers for container workloads, where you will never customize the base system, or when you never want to customize your system at all. Yes, you can customize the system image, but that breaks the guarantees the image gives you, because the packages themselves are not isolated: by bumping the wrong dependency for a custom package you can still break the whole system.
Partly yes, but just installing a package without running into conflicts does not yet guarantee a working system. You have to cater for the right configuration too, for example in a corporate setting with all kinds of networking woes (shares, VPNs and such). I think you could get this to work with Nix somehow, but you want to test these things beforehand, and if you do so using images, then you already have the thing to ship to the machines in your hands; there's no need to compose the OS and configuration over and over again for every machine.
Another aspect of non-atomic OS composition on the target is that you have to deal with the transient phase from one state to the next. In this phase all kinds of things can happen; for example, an update of the NVIDIA drivers renders CUDA dysfunctional until the next reboot, because the userspace and kernelspace parts no longer fit together. With any of the Fedora Atomic variants, transient phases with essentially undefined behaviour do not exist, and the time the system is not guaranteed to be in working order is reduced to just the reboot.
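For what it's worth, on the Nix side that kind of pre-deployment testing is usually done with the NixOS VM test framework, which boots the configuration in a QEMU VM at build time. A rough sketch (the test name, node config and assertions are invented; the helper is `nixosTest` or `testers.runNixOSTest` depending on the nixpkgs version):

```nix
# Build-time VM test: the build only succeeds if the assertions hold,
# so a broken configuration never reaches a target machine.
pkgs.testers.runNixOSTest {
  name = "ssh-comes-up";
  nodes.machine = { ... }: {
    services.openssh.enable = true;
  };
  testScript = ''
    machine.wait_for_unit("sshd.service")
    machine.succeed("systemctl is-active sshd.service")
  '';
}
```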
Nix is cool and definitely better than any traditional package manager. But it is not an ultimate solution; to be honest, so far it seems to me like it lives in a niche of enthusiasts who are smart enough to put up with its unique configuration language. Below that niche you have ordinary Linux users who may just be happy with Silverblue without any modifications, and above it you have corporations building their own images in CI/CD, CoreOS and all that jazz.
My favorite example of how idempotent NixOS is has to do with the DE. If you've ever looked at switching from GNOME to KDE, or the other way around, most distros suggest just reinstalling, because each DE leaves so much cruft behind and it's so hard to remove everything safely.
With NixOS, you just change one line in your config, and the DE is cleanly swapped.
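A hedged illustration of what that one-line change looks like (option paths vary a bit between NixOS releases and between Plasma 5 and 6):

```nix
{
  services.xserver.enable = true;

  # Before:
  # services.xserver.desktopManager.gnome.enable = true;

  # After: the old DE and all its packages simply drop out of the system closure.
  services.xserver.desktopManager.plasma5.enable = true;
}
```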
Or you can add specialisations, which to be fair might require a reboot (system accounts might change during a specialisation switch, which confuses the script trying to reload services for a now non-existent user). But that is how I have multiple DEs installed without their applications flooding the other ones, each with its own login manager (SDDM for Plasma, GDM for GNOME, greetd for Sway).
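Roughly, such a setup might look like the following (the names and option paths are illustrative, not copied from the poster's actual config):

```nix
{ lib, ... }:

{
  # Base system: sway, greeted by greetd.
  programs.sway.enable = true;
  services.greetd.enable = true;

  # Each specialisation becomes its own boot entry with its own DE
  # and display manager, without leaking applications into the others.
  specialisation.plasma.configuration = {
    services.greetd.enable = lib.mkForce false;
    services.xserver.enable = true;
    services.xserver.displayManager.sddm.enable = true;
    services.xserver.desktopManager.plasma5.enable = true;
  };

  specialisation.gnome.configuration = {
    services.greetd.enable = lib.mkForce false;
    services.xserver.enable = true;
    services.xserver.displayManager.gdm.enable = true;
    services.xserver.desktopManager.gnome.enable = true;
  };
}
```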
Thanks, that clarified things for me. The only thing I would really love to have is atomic updates, but for that I could probably just use BTRFS snapshots.
NixOS already has that: every rebuild creates a new system generation that you can roll back to from the boot menu. And if you use flakes, you can fully lock down your package versions, so the install is 100% identical on every machine no matter when you run it.
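A minimal sketch of the flake side of that (the host name and nixpkgs branch are placeholders): the generated `flake.lock` pins the exact nixpkgs revision, which is what makes the rebuild reproducible across machines and time.

```nix
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";

  outputs = { self, nixpkgs }: {
    # Hypothetical host; the actual system config lives in configuration.nix.
    nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [ ./configuration.nix ];
    };
  };
}
```

Until you explicitly run `nix flake update`, every machine that builds this gets the same package set.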