They do have a loophole; they import them as kits and “build” them at a Magna facility in Arizona (similar to how early Sprinter vans were re-assembled in the US and sold as Freightliners). But they are FMVSS compliant (aside from the steering wheel) and have had several NHTSA-organized recalls, like any other compliant car might.
Tariffs are easy: just pay them. Federal Motor Vehicle Safety Standards are harder... But maybe there's a loophole for commercial transport? Or maybe they paid to have the testing done?
Which isn’t even really that prohibitive because Chinese vehicles beat Western pricing by five figures.
Plus, all the sensor equipment is made in China anyway. There’s almost certainly no way to have it manufactured in the US.
On top of that, fleet sales don’t have to deal with the antiquated dealer network laws in the US.
And of course American-market car manufacturers refuse to produce vehicles like this one: space-efficient and reasonably sized. Instead they opt for gigantic bean-shaped SUVs with sloping rear roofs that rob you of cargo space while taking up maximum curb real estate.
The Ford E-Transit is an electric van, for a lot of money. It looks like Ford wants to stop making them, though, and two-seat models are much easier to find. But you'd be able to fit your board no problem.
Not sure if it's sold in the US (assuming you are from there), but the Kia PV5 is probably your best bet. On top of that, it's very reasonably priced (in contrast to the ID. Buzz).
The more likely reason is that Supabase is a BaaS: between the client and the DB there is no backend to hold secrets, so RLS is the only way to safely expose an API directly on the DB.
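
To make that concrete, here's a minimal sketch (not from this thread; the todos table, user_id column, and policy wording are assumptions): the browser holds only the public anon key and talks to Postgres through supabase-js, so an RLS policy is the only thing deciding which rows the caller can read.

    import { createClient } from '@supabase/supabase-js'

    // The anon key ships to the browser, so it cannot be a secret.
    // Authorization has to live in the database as an RLS policy on the
    // (hypothetical) todos table, e.g. USING (auth.uid() = user_id).
    const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!)

    async function listMyTodos() {
      // Sign in first so auth.uid() resolves to this user inside Postgres.
      await supabase.auth.signInWithPassword({ email: 'me@example.com', password: 'secret' })

      // No backend in between: this query hits the DB directly,
      // and RLS filters it down to rows the signed-in user owns.
      const { data, error } = await supabase.from('todos').select('*')
      if (error) throw error
      return data
    }

Without that policy, anyone with the anon key could read every row, which is why RLS carries the weight a traditional backend normally would.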
Many times these test suites are more valuable than the code itself, particularly in legacy software. Finding and documenting the thousands of edge cases a piece of software like Excel must have is harder than implementing them.
The difference is that the previous version of Alexa wasn't good enough to pay for. Now it is good enough that millions of users are paying $10-100 for these services.
Once AI improves its cost/error ratio enough, the systems you are suggesting for humans will work here too. Maybe Claude/OpenAI will be doing the pair programming and Gemini reviewing the code.
That's exactly the problematic mentality. Putting everything in a black box and then saying "problem solved; oh it didn't work? well maybe in the future when we have more training data!"
We're suffering from black-box disease and it's an epidemic.
The training data: the entirety of the internet and every single book we could get our hands on.
"Surely we can just somehow give it more and it will be better!"
Also, once people stop cargo-culting $trendy_dev_pattern, it'll become less impactful.
Every time something new comes along, the same thing happens: people start exploring by putting it absolutely everywhere, whether or not it makes sense. Add in a huge amount of cash that VCs don't know what to spend on, and you end up with solutions galore, none of them solving any real problems.
Microservices are a good example of a previous $trendy_dev_pattern that is now cooling down: people are starting to at least ask "Do we actually need microservices here?" before design and implementation, something that had been lacking since it became trendy. I'm sure the same will happen with LLMs eventually.