As a fellow elder millennial I agree with your sentiment.
But I don't see the mechanics of how it would work. Rewind to October 2022. How, exactly, does the money* invested in AI since that time get redirected towards whatever issues you find more pressing?
The article resonated but I disagree with his terminology. To me, broke and poor are exactly the same thing. He can define them differently if he wants, of course, but what he calls "broke" I would call "feeling poor" and what he calls "poor" I would call "being poor".
I grew up in circumstances that were very much "broke"/"feeling poor", and it took a long time to learn that we really weren't poor. Some of the simple actions that get misdirected at the truly poor (a second job, DIY car and home maintenance, better financial planning) would have elevated our circumstances quite a bit. Not to the point of being rich, but definitely to something less precarious. And, selfishly, I would likely not have spent my childhood feeling like an impoverished outcast among my peers.
I travel internationally. These arcane rules also do not affect me.
Me: Lifelong, native-born citizen of a western nation. 1 or 2 international trips of less than 2 weeks each year.
Author: Immigrant to his country of residence. Applying, or soon to apply, for citizenship or permanent residency. Has taken multiple lengthy international trips and also appears to have held immigration status in different countries.
Conclusion: If you are more like me than the author, then international travel will not require navigating arcane and contradictory rules.
Agreed. Upon passage of the ACA, my company switched insurance coverage to a high-deductible plan with an HSA. So if anything, the ACA appeared to increase the prevalence of HSAs. But that is my narrow social circle, and the grandparent poster seems to have had a different experience.
Binary searching your commit history and using version control software to automate the process just seems so...obvious?
I get that the author learned a new-to-him technique and is excited to share it with the world. But to this dev, with a rapidly greying beard, the article has the vibe of "Hey bro! You're not gonna believe this, but I just learned the Pope is Catholic."
Binary search is one of the first things you learn in algorithms, and in a well-managed branch the commit tree is already a sorted straight line, so it's just obvious as hell, whether you use your VCS to run the bisect or do it by hand.
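Git automates exactly this with git bisect: mark one known-good and one known-bad commit and it checks out the midpoint for you, and "git bisect run <your-test-command>" will even drive the whole loop. Stripped of git, the search it performs is just the following (a toy sketch in C, with a made-up linear history and a stand-in for "check out commit i and run the tests"):

    #include <stdio.h>

    /* Toy model: a linear history of commits, oldest first.
     * broken_from is the (unknown to the searcher) first bad commit. */
    #define N_COMMITS 64
    static const int broken_from = 41;   /* hypothetical culprit */

    /* Stand-in for "check out commit i and run the test suite". */
    static int test_passes(int commit) {
        return commit < broken_from;
    }

    int main(void) {
        int good = 0;              /* oldest commit, known good */
        int bad  = N_COMMITS - 1;  /* newest commit, known bad  */
        int steps = 0;

        /* Classic binary search: each test halves the suspect range. */
        while (bad - good > 1) {
            int mid = good + (bad - good) / 2;
            if (test_passes(mid))
                good = mid;
            else
                bad = mid;
            steps++;
        }
        printf("first bad commit: %d (found in %d tests)\n", bad, steps);
        return 0;
    }

Six tests to pin down one commit out of 64. That's the whole trick.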
In a past life I held the title Firmware Engineer. The day-to-day development process did not differ from subsequent positions as a Software Engineer. Write-Compile-Test-Repeat. Then put it up for review so your colleagues can skewer it, or, on rare occasion, offer considerate and thorough feedback.
Firmware development is indeed taught in higher education. But not under the name "Firmware". It will be an "embedded systems" course or series of courses. At least in my experience, those courses are run by the Electrical Engineering department and the average Computer Science student stays far away from them.
How do you test stuff that's deployed to firmware? At least when it comes to normie software your Linux or Mac box resembles the server you deploy to. And you can close the distance quite a bit with Docker.
But firmware? Totally different.
FreeRTOS does have a POSIX backend, which helps some. Maybe you can run it under a hardware emulator. But it seems like lots of the stuff you want to test isn't really testable from the perspective of what a typical dev knows.
A lot of firmware dev iteration seems to be build -> flash -> watch serial connection for debug prints.
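The usual partial answer I've seen: keep the pure logic (parsers, checksums, state machines) separate from the register-poking code, and unit-test that logic on your host machine, since it compiles anywhere. A minimal sketch, with a hypothetical checksum routine standing in for real firmware logic:

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical firmware module: pure logic, no register access,
     * so it builds and runs on the host as well as on the target. */
    static uint8_t checksum8(const uint8_t *buf, size_t len) {
        uint8_t sum = 0;
        for (size_t i = 0; i < len; i++)
            sum += buf[i];
        return (uint8_t)(~sum + 1);   /* two's-complement checksum */
    }

    int main(void) {
        const uint8_t frame[] = {0x01, 0x02, 0x03};
        uint8_t c = checksum8(frame, sizeof frame);

        /* Defining property: payload plus checksum sums to zero (mod 256). */
        assert((uint8_t)(0x01 + 0x02 + 0x03 + c) == 0);
        printf("host-side checks passed\n");
        return 0;
    }

The hardware-touching layer still needs the real board, something like QEMU or Renode, or a hardware-in-the-loop rig, but this shrinks the portion that can only fail on the device.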
> A lot of firmware dev iteration seems to be build -> flash -> watch serial connection for debug prints.
Serial port? Maybe for the fancy folks. Us lunch-pail types would find an unused GPIO, blue-wire an LED, and blink out a code of your choosing.
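For the uninitiated, it looks roughly like this. The register address and pin bit are invented for illustration; the real ones come from the chip's datasheet or the vendor's headers:

    #include <stdint.h>

    /* Hypothetical memory-mapped GPIO output register and LED pin bit. */
    #define GPIO_OUT  (*(volatile uint32_t *)0x40020014u)
    #define LED_PIN   (1u << 5)

    static void delay(volatile uint32_t n) {
        while (n--) { /* crude busy-wait, not calibrated */ }
    }

    /* Blink "code" pulses, then pause: e.g. 3 blinks = sensor init failed. */
    static void blink_error_code(unsigned code) {
        for (unsigned i = 0; i < code; i++) {
            GPIO_OUT |=  LED_PIN;   /* LED on  */
            delay(200000);
            GPIO_OUT &= ~LED_PIN;   /* LED off */
            delay(200000);
        }
        delay(800000);              /* gap so consecutive codes stay readable */
    }

Call blink_error_code(3) from the fault path of your choosing, then count flashes with your eyeballs.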
Okay, I never actually had to blink an LED, but it was in my bag of tricks. The real golden ticket for debugging embedded devices was a development kit with JTAG. JTAG, coupled with expensive additional hardware and an equally expensive software license, gave you a gdb-like debugging interface: breakpoints, stack traces, all the good stuff you take for granted when working with Windows, Linux, or those ugly abominations we call web browsers.
Emulators were also a thing, especially if your product had a custom ASIC and you needed to do pre-silicon development. But I didn't use them much myself, and it seemed like by the time you had your emulation environment set up, there was first-gen silicon and a debug board sitting on your desk.
I'm not a firmware dev, but the ones I've seen working usually have all sorts of fancy test kits, debug instrumentation, software (ChipScope?), etc. to debug with, not just print debugging.
Not to say print debugging isn't a valid way to debug, but there are definitely better options available.
As someone in the security field who is currently in a security degree program... my coursework has major overlap with the EE and computer engineering degree requirements.
I've dabbled in some basic MIPS assembly and some microcontroller programming, but I don't consider that as complicated as boot-level firmware or, say, the firmware that controls complex stuff.
I view embedded as the closest I will ever get to actual hardware engineering and it shocks me how complex everything is.
So much EE-related math becomes trivial (or at least not-hard) once you've internalized this formula.
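For anyone who wants the refresher, the formula is Euler's:

    e^{i\theta} = \cos\theta + i\sin\theta

and the classic payoff is that the angle-sum identities fall straight out of e^{i(a+b)} = e^{ia} e^{ib}:

    \cos(a+b) + i\sin(a+b) = (\cos a + i\sin a)(\cos b + i\sin b)
                           = (\cos a\cos b - \sin a\sin b) + i(\sin a\cos b + \cos a\sin b)

The EE-specific payoff is phasors: A\cos(\omega t + \phi) = \mathrm{Re}\{A e^{i\phi} e^{i\omega t}\}, so steady-state AC analysis becomes complex arithmetic instead of trig-identity wrangling.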
What I am trying to decide is 1) Did I zone out in class when Euler's formula was introduced or 2) Did my secondary school mathematics classes just kind of gloss over it?
I lean towards 2, but unfortunately none of my college classes reintroduced the formula, and I ended up making a lot of problems harder than they needed to be (I have an EE undergrad).
I'm not sure any of my secondary-school classes taught it, but I dropped out of secondary school to go to the university, so maybe they would have in the following year. In secondary school I was in the "smart kids" math track, though, and they also had an "average kids" track and a "dumb kids" track, and I really doubt those tracks covered it.
No link to the original research note. No real details on the methodology used. A few notes on the well-known lack of an AI business model (similar things were said about search in the late '90s).
I just don't see how the broader market is exposed to an AI crash in the way it was exposed to subprime loans. If OpenAI goes belly up is it really taking anyone else down with it?
During the dot-com era, internet or IT in general accounted for a much smaller percentage of the GDP. So, I'm not sure how the percentage of GDP can help us gauge the scale of the bubble, if any.
When you're talking about investment on the scale AI-centric companies have received, on the order of hundreds of billions of dollars, there's no way the wider market isn't exposed.
But I agree with you, the article is too light on details for how inflammatory it is.
There's good reason to believe that OpenAI's success (or failure) and the success of many other firms are correlated. If OpenAI's bubble bursts, then that is likely to spread to other close firms and – depending on severity – any other firms that are merely associated.
NVDA, MSFT, AAPL, META, and GOOG are all investing heavily in AI right now, and together make up about 28% of the S&P 500 by index weight. Simply investing in the S&P 500, which many people do, exposes you to meaningful downside risk from an AI bubble pop.
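Back-of-envelope, taking that 28% weight and a purely hypothetical 40% drawdown across those five names while the rest of the index stays flat:

    \Delta_{\text{index}} \approx 0.28 \times (-40\%) \approx -11\%

An eleven percent hit to the index from those five names alone, before counting any knock-on effects.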
Don't get me wrong; I'm no fan of the billionaires. Eat the rich, etc. But I don't want the billionaires to lose everything suddenly, because I'm 100% sure my 401k will go down with them, and 50% sure my job will.
https://www.pinkbike.com/news/netflix-in-exclusive-talks-for...
(yes Pinkbike is my source)