Yes, this is the key distinction: old software that works vs old software that sucks.
The one that sucks was a so-so compromise back in the day, and became a worse and worse compromise as better solutions became possible. It's holding the users back, and is a source of regular headaches. Users are happy to replace it, even at the cost of a disruption. Replacing it costs you, but not replacing it also costs you.
The one that works just works now, but used to, too. Its users are fine with it, feel no headache, and loathe the idea of replacing it. Replacing it is usually a costly mistake.
Or it doesn't. Because "software as an organic thing", like all analogies, is an analogy, not truth. Systems can sit there and run happily for a decade, performing the needed function in exactly the way that is needed, with no 'rot'. And then maybe the environment changes and you decide to replace it with something new because you decide the time is right. Doesn't always happen. Maybe not even the majority of the time. But in my experience running high-uptime systems over multiple decades, it happens. Not having somebody outside forcing you to change because it suits their philosophy or profit strategy is preferable.
Or more likely the 'whole' accesses the stable bit through some interface. The stable bit can happily keep doing its job via the interface, and the whole can change however it likes, knowing that for that particular task (which hasn't changed) it can just call the interface.
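A minimal sketch of that idea, with hypothetical names (`TaxCalculator`, `LegacyTaxCalculator`, `invoice_total` are all made up for illustration): the stable bit sits behind an interface, and the evolving "whole" only ever talks to the interface, never the implementation.

```python
from typing import Protocol

class TaxCalculator(Protocol):
    """The interface: the only thing the rest of the system knows about."""
    def tax_due(self, amount_cents: int) -> int: ...

class LegacyTaxCalculator:
    """The stable bit: unchanged for years, still doing its one job correctly."""
    def tax_due(self, amount_cents: int) -> int:
        # A flat 20% tax, the rule this component has always implemented.
        return amount_cents // 5

def invoice_total(calc: TaxCalculator, amount_cents: int) -> int:
    # The evolving 'whole' calls through the interface; it can be rewritten
    # freely without touching (or even understanding) the legacy component.
    return amount_cents + calc.tax_due(amount_cents)

print(invoice_total(LegacyTaxCalculator(), 1000))  # 1200
```

As long as the task behind `tax_due` doesn't change, a replacement component only has to honor the same interface, and the stable one never has to be touched at all.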