'If you could classify your project as "AI," you were safe and prestigious. If you couldn't, you were nobody. Overnight, most engineers got rebranded as "not AI talent."'
It hits weirdly close to home. Our leadership didn't technically mandate use, but 'strongly encourages' it. I haven't even had my review yet, but I know that once we get to the goals part, use of AI tools will be an actual metric (which, wherever I land between skeptic and evangelist, strikes me as dumb).
But the 'AI talent' part fits. For mundane stuff like a data model, I need full committee approval from people who don't get it anyway (and whose entire contribution is 'what other companies are doing').
I know of at least one bigco that will no longer hire anyone, period, who doesn't have at least six months of experience using genAI to code and isn't enthusiastic about it. No exceptions. I assume this is true of other companies too.
I think it makes some amount of sense if you've decided you want to be "an AI company", but it also makes me wary. Apocryphally, Google for a long time struggled to hire certain people because they weren't an 'ideal culture fit', i.e. you're trying to hire someone to fix the Linux kernel bugs you hit in production, but they don't know enough Java or Python to pass the interview gauntlet...
As with any tool, the longer you use it, the better you learn where you can extract value from it and where you can't, where you should lean on it and where you shouldn't. Because your behaviour shapes what you get out of the LLM, this can be quite individual, and you have to learn to work with it through trial and error. But in the end engineers do appear to become more productive 'pairing' with an LLM, so it's no surprise companies are favouring LLM-savvy engineers.
> But in the end engineers do appear to become more productive 'pairing' with an LLM
Quite the opposite: LLMs reduce productivity; they don't increase it. They merely give the illusion of productivity because you can generate code fast, but that isn't actually useful once you've spent the time fixing all the mistakes it made. It is absolutely insane that companies are stupid enough to require people to use something that cripples them.
So far, for me, it's just an annoying tool that gets worse outcomes, potentially faster, than just doing it by hand.
It doesn't matter how much I use it. It's still just an annoying tool that makes mistakes, which you try to correct by arguing with it before eventually just fixing them yourself. At best it gets you 80% of the way there.
The full quote from that section is worth repeating here.
---------
"If you could classify your project as "AI," you were safe and prestigious. If you couldn't, you were nobody. Overnight, most engineers got rebranded as "not AI talent." And then came the final insult: everyone was forced to use Microsoft's AI tools whether they worked or not.
Copilot for Word. Copilot for PowerPoint. Copilot for email. Copilot for code. Worse than the tools they replaced. Worse than competitors' tools. Sometimes worse than doing the work manually.
But you weren't allowed to fix them—that was the AI org's turf. You were supposed to use them, fail to see productivity gains, and keep quiet.
Meanwhile, AI teams became a protected class. Everyone else saw comp stagnate, stock refreshers evaporate, and performance reviews tank. And if your team failed to meet expectations? Clearly you weren't "embracing AI.""
------------
On the one hand, if you were going to bet big on AI, there are aspects of this approach that make sense, e.g. force everyone to use the company's no-good AI tools so that the tools get good. However, not permitting employees outside the "AI org" to fix things neatly nixes the gains you might see while still incurring the full cost.
It sounds like MS's management, like that of many other tech corps, has been caught up in a conceptual bubble of "AI as panacea". If that bubble doesn't pop soon, MS's products could wind up in a very bad place. There are some very real threats to some of MS's core incumbencies right now (e.g. from Valve).