> If you write a playbook to install some software on your home network, you have some YAML that took longer to write than it would have taken to do the task manually and isn't likely to be useful to anybody else.
This is debatable. Some of those manual tasks you'd simply forget about, and you'd end up with a partially reproduced environment later - which is fine if the missing bits are configuration you don't particularly care about. It is much more annoying when you expect everything to work but discover that your workflows are randomly blocked because you forgot to manually install and configure some piece of software when reinstalling the OS, or when you have to set up your IDEs completely anew because the configuration directories weren't preserved anywhere and are now simply lost. One of the benefits of something like Ansible (or perhaps NixOS, even though it has usability concerns presently) is avoiding exactly these situations.
In the more common case of using Ansible to manage servers (perhaps a homelab) rather than just personal devices, the implications are far more sinister - I've seen Java applications break after migrating to a new OS release because nobody documented that fonts had to be installed. And even when instructions exist (say, the setup guide for a package, or your own documentation of your setup), there's no guarantee you'll follow all of them to the letter, or you might simply gloss over a failure in one of the steps. That becomes more likely as your homelab and the number of your personal devices grow.
Clearly the impact of failures like that is far lower in a homelab than in a professional environment, but that also means your workplace will (hopefully) push you towards some sort of automation solution to avoid them, so using it for your personal stuff is just the next logical step. No one likes dealing with failures that aren't immediately apparent and could have been avoided entirely. I don't know about you, but I find that Ansible's standard modules provide really good reusability, so I've definitely borrowed examples of how to do something from my personal setup for my work projects and vice versa (not verbatim, but the syntax, how to use parameters and how to get things done in general).
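To make that concrete: capturing the "forgot to install it after a reinstall" steps as a playbook doesn't take much. A minimal sketch, with the host group, package names and paths as placeholders for whatever your own setup actually needs:

```yaml
# workstation.yml - a sketch, not a drop-in file; names and paths are placeholders.
- hosts: workstations
  become: true
  tasks:
    - name: Install the packages that otherwise get forgotten after a reinstall
      ansible.builtin.apt:
        name:
          - fonts-dejavu   # the "app broke because fonts were missing" class of problem
          - git
          - tmux
        state: present
        update_cache: true

    - name: Restore editor/IDE configuration kept alongside the playbook
      ansible.builtin.copy:
        src: files/ide-config/           # hypothetical directory in the repo
        dest: /home/myuser/.config/ide/  # hypothetical target path
        owner: myuser
        group: myuser
```

Running it again after the next reinstall is what gets you out of the "partially reproduced environment" trap - the playbook is the documentation.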
> And without maintenance, that playbook will eventually decay and stop working. The scarf will continue to keep you warm forever.
This doesn't feel entirely accurate - everything, from your software to your scarf, will eventually decay. It's mostly a matter of time, though you can mitigate it by using more stable OS distributions like Ubuntu LTS (until very recently I would have also recommended CentOS), or by using better materials for your scarf. Oh, and pick the stable versions of boring software, and perhaps cut the most rapidly changing technologies out of your stack entirely until development there slows down.
For me, that currently means:
- using Debian because it's stable and boring enough for my needs (both servers and desktop/laptop with XFCE)
- using Ansible for servers, but treating personal devices as disposable (no attempt to preserve configuration, too much effort)
- using automated incremental backup software for the data, just in case
- manually provisioning any VMs/VPSes that I require, but automating most of their configuration in the same way
- using Docker containers liberally within those VMs/VPSes, to separate software runtime environments from their data output and their configuration input
- using Docker Swarm to make managing all of that simpler and partially automated, alongside something like Portainer to make the process more user friendly (a minimal stack sketch follows this list)
- using Caddy so I never have to deal with certificates manually, even though I manage DNS manually (see the Caddyfile sketch after the list)
- not updating software I don't expose publicly and don't need the newest versions of (GIMP, Blender, LibreOffice, some private containers)
- using automated security upgrades for everything else, while sticking to the latest stable versions of server software, never bleeding edge (a sketch of that follows the list as well)
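To illustrate the Swarm point: a stack file keeps the runtime environment (the image), the configuration input (a Docker config) and the data output (named volumes) cleanly separated. A minimal sketch - the app image, domain and names here are made up, not a real deployment:

```yaml
# stack.yml - a sketch of the "config in, data out" layout.
version: "3.8"

services:
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
    configs:
      - source: caddyfile
        target: /etc/caddy/Caddyfile   # configuration input
    volumes:
      - caddy_data:/data               # issued certificates end up here
    deploy:
      replicas: 1

  app:
    image: registry.example.com/my-app:1.2.3   # placeholder image
    volumes:
      - app_data:/var/lib/app          # data output, survives container replacement
    deploy:
      replicas: 1

configs:
  caddyfile:
    file: ./Caddyfile

volumes:
  caddy_data:
  app_data:
```

Deployed with `docker stack deploy -c stack.yml mystack`; Portainer essentially gives you a UI on top of the same stack files.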
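The Caddyfile referenced above is similarly short, which is the whole appeal - point a (placeholder) domain at the host and Caddy obtains and renews the certificate on its own; the upstream port is whatever the app listens on internally:

```
app.example.com {
    reverse_proxy app:8080
}
```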
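And the automated security upgrades bullet mostly boils down to installing unattended-upgrades on Debian and enabling the periodic run, which is easy to bake into the same playbooks. A sketch, with the host group as a placeholder; the two config lines are the stock ones Debian documents for enabling it:

```yaml
- hosts: servers
  become: true
  tasks:
    - name: Install unattended-upgrades
      ansible.builtin.apt:
        name: unattended-upgrades
        state: present

    - name: Enable daily package list updates and unattended upgrades
      ansible.builtin.copy:
        dest: /etc/apt/apt.conf.d/20auto-upgrades
        content: |
          APT::Periodic::Update-Package-Lists "1";
          APT::Periodic::Unattended-Upgrade "1";
        mode: "0644"
```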
A lot of it is about finding what works for your particular circumstances, seeing which parts cause the most pain, and then automating those.