Raymond's posts are always fun to read, but sometimes he focuses more on the "proper" methods and does not even acknowledge that there are hacky workarounds.
Like in this case - sure, you cannot redefine the standard output handle, but that's not what the customer asked for, is it? They said "read", and I can see a whole bunch of ways to do so: ReadConsoleOutput plus a heuristic for scrolling, code injection into the console host, attaching a debugger, setting up a detour on the logging function, a custom kernel module...
To be fair, for an MS support person, it's exactly the right thing to do. You don't want the person to start writing a custom kernel module when they should redirect stdout on process start instead. But as a random internet reader, I'd love to read all about the hacky ways to achieve the same!
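For the curious, the first of those routes looks roughly like this. This is my own rough Python/ctypes sketch (Windows only, not from Raymond's post, and the scrolling heuristic is left out): attach to the target process's console, open CONOUT$, and read characters straight out of the screen buffer.

    import ctypes
    from ctypes import wintypes

    kernel32 = ctypes.windll.kernel32
    kernel32.CreateFileW.restype = wintypes.HANDLE
    kernel32.CreateFileW.argtypes = [
        wintypes.LPCWSTR, wintypes.DWORD, wintypes.DWORD, wintypes.LPVOID,
        wintypes.DWORD, wintypes.DWORD, wintypes.HANDLE]
    kernel32.ReadConsoleOutputCharacterW.argtypes = [
        wintypes.HANDLE, wintypes.LPWSTR, wintypes.DWORD,
        wintypes._COORD, wintypes.LPDWORD]

    GENERIC_RW = 0x80000000 | 0x40000000  # GENERIC_READ | GENERIC_WRITE
    FILE_SHARE_RW = 0x00000003
    OPEN_EXISTING = 3

    def read_console_row(pid, row, width=120):
        kernel32.FreeConsole()               # detach from our own console
        if not kernel32.AttachConsole(pid):  # attach to the target process's
            raise OSError("AttachConsole failed")
        # Open the now-attached console's active screen buffer...
        conout = kernel32.CreateFileW("CONOUT$", GENERIC_RW, FILE_SHARE_RW,
                                      None, OPEN_EXISTING, 0, None)
        # ...and read `width` characters starting at column 0 of `row`.
        buf = ctypes.create_unicode_buffer(width)
        n = wintypes.DWORD(0)
        kernel32.ReadConsoleOutputCharacterW(conout, buf, width,
                                             wintypes._COORD(0, row),
                                             ctypes.byref(n))
        kernel32.FreeConsole()
        return buf.value[:n.value]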
> Raymond's posts are always fun to read, but sometimes he focuses more on the "proper" methods and does not even acknowledge that there are hacky workarounds.
Nor should he, IMO. Hacky workarounds are almost always a terrible idea that will bite you in the ass someday.
Hacky workarounds aren't rare exceptions; they're the plumbing of modern software. Anti-cheat and antivirus tools only work because they lean on strange kernel behaviors. Cloud platforms ship fixes that rely on undefined-but-stable quirks. Hardware drivers poke at the system in ways no official API ever planned for.
Yeah, they're ugly, but in practice the choice isn't between clean and hacky; it's between shipping and not shipping. Real-world software runs on constraints, not ideals.
I think that's exactly the kind of thing that makes people hate AI.
The author's example for the internal help channel has a rule: "NPM related: recommend reading the Fullstack onboarding guide...". So no comprehension, no contextual response. But hey, it's using an AI bot for something that could have been a simple FAQ document!
At the same time, the author is clearly proud of the effort and does not seem to be discouraged by the negative experience... I was wondering about this disconnect, until I went to their page and found out they are the CTO. I guess the old quote was right: "It is difficult to get a man to understand something, when his salary depends on his not understanding it."
It's another robots.txt-like standard with no teeth, so I'd expect it to be as effective as robots.txt ever was, which is: "every AI will ignore it".
Renting a few residential proxy IPs and faking a user agent is way cheaper than paying any fees.
Using bare metal requires competent Unix admins, and the Actions team is full of javascript clowns (see: the decision to use dashes in environment variable names; the lack of any sort of shell quoting support in templates; keeping logs next to binaries in self-hosted runners). Perhaps they would be better off using infra someone else maintains.
Low-value secrets are OK with low-effort key management.
If you are already using UUIDv7 and just want to hide the timestamp part, you don't need an HSM or key rotation. Make up a key, hardcode it into source code (or into your terraform files), and use it with AES/Blowfish. This will not stop nation-state APT attackers, but it will provide immediate protection from a random person on the internet. Just make sure that this is not the _sole_ method protecting user identity.
And the most important part: to guard against overenthusiastic security folks, _never_ call this "encryption", always "obfuscation", especially in the source code. Seeing "EncryptCustomerID" triggers hard questions about key management and could be pretty dangerous ("We encrypt the customer ID, I saw it in the source code... which means we don't need a password"). On the other hand, "ObfuscateCustomerID" makes the intent much clearer.
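To make the idea concrete, here's a minimal sketch (mine, not from any particular codebase; assumes the Python `cryptography` package and a made-up hardcoded key). A UUID is exactly one 16-byte AES block, so a single-block ECB pass is just a keyed permutation of the ID space: no IV, no padding, fully reversible, and the UUIDv7 timestamp is no longer visible.

    import uuid
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    _OBFUSCATION_KEY = bytes.fromhex("00112233445566778899aabbccddeeff")  # made up, lives in source

    def obfuscate_customer_id(u: uuid.UUID) -> str:
        # One AES block == one UUID, so ECB here is a reversible permutation
        # rather than the classic "ECB leaks patterns" footgun (only one block).
        enc = Cipher(algorithms.AES(_OBFUSCATION_KEY), modes.ECB()).encryptor()
        return (enc.update(u.bytes) + enc.finalize()).hex()

    def deobfuscate_customer_id(s: str) -> uuid.UUID:
        dec = Cipher(algorithms.AES(_OBFUSCATION_KEY), modes.ECB()).decryptor()
        return uuid.UUID(bytes=dec.update(bytes.fromhex(s)) + dec.finalize())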
Deprecations via warnings don't reliably work anywhere, in general.
If you are a good developer, you'll have extensive unit test coverage and CI. You never see the unit test output (unless a test fails), so warnings go unnoticed.
If you are a bad developer, you have no idea what you are doing and you ignore all warnings unless the program crashes.
You can turn warnings into errors with the `-Werror` option. I personally use that in CI runs, along with the `-X dev` option to enable additional runtime checks. Though that won't solve the author's problem, since most Python devs don't use either of those options.
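For anyone wanting to copy this, a minimal version (the pytest invocation is just an example runner):

    # In CI, something like:  python -W error -X dev -m pytest
    # or the same thing from code, narrowed to deprecations only:
    import warnings
    warnings.filterwarnings("error", category=DeprecationWarning)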
In PHP I don’t think there is a native way to convert E_DEPRECATED into E_ERROR, but the most common testing framework has a quick way of doing the same.
When I update the Python version, Python packages, container image, etc. for a service, I take a quick look at the CI output, in addition to all the other checks I do (like a couple of basic real-world end-to-end tests), to "smoke test" whether something not caught by an outright CI failure caused some subtle problem.
So, I do often see deprecation warnings in CI output, and fix them. Am I a bad developer?
I think the mistake here is making some warnings default-hidden. The developer who cares about the user running the app in a terminal can add a line of code to suppress them for users, and be more aware of this whole topic as a result (and have it more evident near the entrypoint of the program, for later devs to see also).
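That line of code is cheap, too. Something like this near a hypothetical entrypoint:

    import warnings

    def main():
        ...  # hypothetical application entrypoint

    if __name__ == "__main__":
        # Hide deprecation noise from end users; tests import the module
        # directly and never hit this branch, so CI still sees the warnings.
        warnings.filterwarnings("ignore", category=DeprecationWarning)
        main()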
I think that making warnings errors, or hiding them, removes warnings as a useful tool.
Author here! Agreed that there are different levels of "engaged" among users, which is okay. The concerning part of this finding is that even dependent users that I know to be highly engaged didn't respond to the deprecation warnings, so they're not working even for the most engaged users.
There was this one library we depended on; it was sort of in limbo during the Python 2 -> 3 migration. During that period it was maintained by this one person who'd just delete older versions when newer ones became available. In one year I think we had three or four instances where our CI and unit tests just broke randomly one day, because the APIs had changed and the old version of the library had been yanked.
In hindsight it actually helped us, because in frustration we ended up setting up our own Python package repo and started to pay more attention to our dependencies.
> If you are a good developer, you'll have extensive unit test coverage and CI. You never see the unit test output (unless a test fails), so warnings go unnoticed.
In my opinion test suites should treat any output other than the reporter saying that a test passed as a test failure. In JavaScript I usually have part of my test harness record calls to the various console methods. At the end of each test it checks to see if any calls to those methods were made, and if they were, it fails the test and logs the output.

Within tests, if I expect or want some code to produce a message, I wrap the invocation of that code in a helper which requires two arguments: a function to call and an expected output. If the code doesn't output a matching message, doesn't output anything, or outputs something else, then the helper throws and explains what went wrong. Otherwise it just returns the result of the called function:
let result = silenceWarning(() => user.getV1ProfileId(), /getV1ProfileId has been deprecated/);
expect(result).toBe('foo');
This is dead simple code in most testing frameworks. It makes maintaining and working with the test suite much easier, because when something starts behaving differently it's immediately obvious rather than being hidden in a sea of noise. It makes working with dependencies easier because it forces you to acknowledge things like deprecation warnings when they get introduced and either solve them right there or create an upgrade plan.
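For the Python folks in this thread, a comparable pattern with pytest (my rough analogue of the JS helper above, not the same code): `filterwarnings = error` in pytest.ini turns stray warnings into failures, and `pytest.warns` plays the role of `silenceWarning`:

    import warnings
    import pytest

    def get_v1_profile_id():
        warnings.warn("getV1ProfileId has been deprecated", DeprecationWarning)
        return "foo"

    def test_profile_id_with_expected_deprecation():
        # Fails if the expected warning is missing or doesn't match; with
        # filterwarnings = error, unexpected warnings elsewhere fail the run.
        with pytest.warns(DeprecationWarning, match="deprecated"):
            assert get_v1_profile_id() == "foo"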
Why is it that CI tools don't make warnings visible? Why are they ignored by default in the first place? Seems like that should be a rather high priority.
> Why is it that CI tools don't make warnings visible?
A developer setting up CI decides to start an ubuntu 24.04 container and run 'apt-get install npm'
This produces 3,600 lines of logging (5.4 log lines per package, 668 packages) and 22 warnings (all of them about man page creation being skipped).
Then they decide "Nobody's going to read all that, and the large volume might bury important information. I think I'll hide console output for processes that don't fail."
It isn't that easy. If you have a new warning on upgrade, you probably want to work on it "next week", but that means you need to ignore it for a bit. Or you might still want to support a really old version without the new API, and so you can't fix it now.
> If you have a new warning on upgrade you probably want to work on it "next week", but that means you need to ignore it for a bit.
So you create a bug report or an issue or a story or whatever you happen to call it, and you make sure it gets tracked, and you schedule it with the rest of your work. That's not the same thing as "ignoring" it.
Why does your codebase generate hundreds of warnings, given that every time one initially appeared, you should have stamped it out (or specifically marked that one warning to be ignored)? Start with one line of code that doesn't generate a warning. Add a second line of code that doesn't generate a warning...
> Why does your codebase generate hundreds of warnings
Well, it wasn't my codebase yesterday, because I didn't work here.
Today I do. When I build, I get reports of "pkg_resources is deprecated as an API" and "Tesla T4 does not support bfloat16 compilation natively" and "warning: skip creation of /usr/share/man/man1/open.1.gz because associated file /usr/share/man/man1/xdg-open.1.gz (of link group open) doesn't exist" and "datetime.utcnow() is deprecated and scheduled for removal in a future version"
The person onboarding me tells me those warnings are because of "dependencies" and that I should ignore them.
It's rare that I work on a project I myself started. If I start working on an existing codebase, the warnings might be there already. Then what do I do?
I'm also referring to all the warnings you might get if you use an existing library. If the requirements entail that I use this library, should I just silence them all?
But I'm guessing you might be talking about more specific warnings. Yes, I do fix lints specific to my new code before I commit it, but a lot of warnings might still be logged at runtime, and I may have no control over them.
If this code crashed all the time there'd be a business need to fix it and I could justify spending time on this.
But that's not what we're discussing here, we're discussing warnings that have been ignored in the past, and all of a sudden I'm supposed to take the political risk to fix them all somehow, even though there's no new crash, no new information.
I don't know how much freedom you have at your job; but I definitely can't just go to my manager and say: "I'm spending the next few weeks working on warnings nobody else cared about but that for some reason I care about".
Because most people are working at Failure/Feature factories where they might work on something and, at the last minute, find out something is now warning. If they work on fixing it, the PM will scream about time slippage: "I want you to work on X, not Y, which can wait".
You found that out at the last minute. So then you did a release. It's no longer the last minute. Now what's your excuse for the next release?
If your management won't resource your project to the point where you can assure that the software is correct, you might want to see if you can find the free time to look for another job. You'll have to do that anyway when they either tank the company, or lay you off next time they feel they need to cut more costs.
That's why you, very early on, release code that slows the API down once it has been deprecated. Every place you issue a deprecation warning, you also sleep 60. Problem solved.
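A tongue-in-cheek sketch of that strategy in Python (names made up):

    import functools
    import time
    import warnings

    def deprecated_and_slow(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(f"{func.__name__} is deprecated", DeprecationWarning,
                          stacklevel=2)
            time.sleep(60)  # make ignoring the warning genuinely painful
            return func(*args, **kwargs)
        return wrapper

    @deprecated_and_slow
    def get_v1_profile_id():
        return "foo"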
> I teach English courses at Boston College. I don’t lecture much. Mostly, we engage in conversation, paying attention to one another and to the book we have all read.
This would be OK only if your class is small enough (something like 10-30 people). I don't think this will work at all in 100+ person classes, and that's how most of my classes were back in college.
Also, the "not for worse" part is totally missing from the article. This working implies "for the better", but all there is are the methods how to force people to ignore AI (and read on their own), and how to punish people who rely on AI too much. There is no improvements mentioned. I'd title this article "AI has changed my classroom, and it could have been worse".