> Leave rates are lower in the life sciences and higher in AI and quantum science but overall have been stable for decades
The US has been completely dominant in technology innovation for the last several decades. So, the answer is no: the loss of 1/4 of the STEM scientists is not important.
Do quant traders use Mathematica? I would guess this is a great use case for a tool that lots of people love: pretty language, a huge boatload of built-in tools, high-powered mathematics, great visualization capabilities. Quant firms should be able to live with the price tag. I assume it has a compiler that can produce fast executables for HFT.
Hypothesis C: failure of human memory. A human read Stephenson's book(s) 20 years ago, remembers that the endings were a bit unsatisfying. The same human also read some other book many years ago, which ends mid-sentence. In that person's mind, the two are conflated.
If I were writing a book review for my company (a big, famous VC that cares about its reputation), I would probably have at least popped the book open and read a few chapters if it had been years since I read it.
Another hypothesis: have AI generate a top-50 list of books, and add a book you want your website to promote into the mix somewhere near the top to increase its sales. Cheap marketing; it wouldn't be the first time.
> AI systems exist to reinforce and strengthen existing structures of power and violence. They are the wet dream of capitalists and fascists.
Persuasion tip: if you write comments like this, you are going to immediately alienate a large portion of your audience who might otherwise agree with you.
Wait a minute - the attackers were using the API to ask Claude for ways to run a cybercampaign, and it was only defeated because Anthropic was able to detect the malicious queries? What would have happened if they were using an open-source model running locally? Or a secret model built by the Chinese government?
I just updated my P(Doom) by a significant margin.
Why would the increase be a significant margin? It's basically a security research tool, but with an agent in the loop that uses an LLM instead of another heuristic to decide what to try next.
I think AWS should use, and provide as an offering to big customers, a Chaos Monkey tool that randomly brings down specific services in specific AZs. Example: DynamoDB is down in us-east-1b. IAM is down in us-west-2a.
Other AWS services should be able to survive this kind of interruption by rerouting requests to other AZs. Big company clients might also want to test against these kinds of scenarios.
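A rough client-side sketch of what such a tool could look like, purely as an assumption of my own (the service/AZ lists, the `ChaosProxy` name, and the `ConnectionError` failure mode are made up for illustration, not anything AWS ships):

```python
import random

# Hypothetical candidates for an injected outage.
SERVICES = ["dynamodb", "sqs", "lambda"]
AZS = ["us-east-1a", "us-east-1b", "us-west-2a"]

def pick_outage(rng: random.Random) -> tuple[str, str]:
    """Pick a random (service, AZ) pair to 'take down' for this experiment."""
    return rng.choice(SERVICES), rng.choice(AZS)

class ChaosProxy:
    """Wraps a service client and fails any call routed to the downed (service, AZ).

    Callers are expected to catch the error and retry against another AZ,
    which is exactly the behavior the experiment is meant to verify.
    """

    def __init__(self, client, service: str, az: str, outage: tuple[str, str]):
        self._client = client
        self._service = service
        self._az = az
        self._outage = outage

    def __getattr__(self, name):
        def call(*args, **kwargs):
            if (self._service, self._az) == self._outage:
                raise ConnectionError(f"chaos: {self._service} is down in {self._az}")
            return getattr(self._client, name)(*args, **kwargs)
        return call

if __name__ == "__main__":
    service, az = pick_outage(random.Random())
    print(f"this run: {service} is down in {az}")
```

The value would be in running something like this continuously, game-day style, so teams learn whether their cross-AZ failover actually works before a regional incident tests it for them.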
I used to tell people there that my favorite development technique was to sit down and think about the system I wanted to build, then wait for it to be announced at that year's re:Invent. I called it "re:Invent and Simplify". "I" built my best stuff that way.
A big chunk, perhaps the majority, of the "Accidents" is from cars. Another infographic I saw recently showed that, for children, the risk of death due to traffic accidents was greater than all other risks combined.
People should be raving and screaming for a faster rollout of self-driving cars. If self-driving cars were an experimental drug undergoing a clinical trial, the trial would be canceled at this point because it would be unethical to continue denying the drug to the control group.
> People should be raving and screaming for faster rollout of self-driving cars
That's assuming it'll meaningfully reduce the rates of child deaths due to automobiles.
You know what will reduce the rate of child fatalities due to automobiles for sure, and to an even greater degree? Massively reducing the odds that kids and automobiles mix. How do we do that? Have more protected walkable and bikeable spaces. Have fewer automobiles driving around. Design our cities better so kids aren't walking along narrow sidewalks next to roads where the speed limit is posted at 40 but traffic often flows at 55+.
It's insane to me that there are neighborhoods less than a mile from their associated public schools that have to have bus service because there is no safe path for the kids to walk. What a true failure of city design.
Looks good to me. I use a tab-completion trick where the tab-completer tool calls the script I'm about to invoke with special arguments, and the script reflects on itself and responds with possible completions. But because of slow imports, it often takes a while for the completion to respond.
I could, and sometimes do, go through all the imports to figure out which ones are taking a long time to load, but it's a chore.
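A minimal sketch of that pattern, with everything hypothetical (the script name, the `--complete` flag, the subcommand table): the trick is to answer the completion request before any of the slow imports run, so the completer stays fast even when the tool itself is slow to start.

```python
#!/usr/bin/env python3
# mytool.py -- hypothetical script that completes itself before doing heavy imports.
import sys

SUBCOMMANDS = ["build", "deploy", "status"]

def print_completions(prefix: str) -> None:
    # Reflect on our own subcommand table and print matches, one per line,
    # which is the shape most shell completion hooks can consume.
    for name in SUBCOMMANDS:
        if name.startswith(prefix):
            print(name)

if __name__ == "__main__":
    # Handle completion requests *before* importing anything slow.
    if len(sys.argv) >= 2 and sys.argv[1] == "--complete":
        prefix = sys.argv[2] if len(sys.argv) > 2 else ""
        print_completions(prefix)
        sys.exit(0)

    # Heavy imports only happen on a real invocation.
    import argparse  # stand-in for the actually-slow imports
    parser = argparse.ArgumentParser()
    parser.add_argument("command", choices=SUBCOMMANDS)
    args = parser.parse_args()
    print(f"running {args.command}")
```

For the "which import is slow" chore, CPython's `python -X importtime your_script.py` prints per-module import times to stderr, which turns the hunt into a sort.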
To me the reason ARC-AGI puzzles are difficult for LLMs and possible for humans is that they are expressed in a format for which humans have powerful preprocessing capabilities.
Imagine the puzzle layouts were expressed in JSON instead of as a pattern of visual blocks. How many humans could solve them in that case?
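As a toy illustration of that representation gap (the grid values below are made up, not taken from a real ARC task), here is the same data once as the JSON a text-only model would be fed and once rendered the way a human sees it:

```python
import json

# A tiny ARC-style grid: 0 = background, 3 = a colored cell.
grid = [
    [0, 0, 3, 0],
    [0, 3, 3, 0],
    [0, 0, 3, 0],
    [0, 0, 0, 0],
]

# The serialized view: [[0, 0, 3, 0], [0, 3, 3, 0], ...]
print(json.dumps(grid))

# The visual view: the shape and its symmetry pop out immediately.
GLYPHS = {0: "·", 3: "█"}
for row in grid:
    print("".join(GLYPHS.get(cell, "?") for cell in row))
```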
We have powerful preprocessing blocks for images: strong computer vision capabilities predate LLMs by several years. Image classification, segmentation, object detection, etc. All differentiable and trainable in the same way as LLMs, including jointly.
To the best of my knowledge, no team has shown really high scores by adding in an image preprocessing block?
Everyone who had access to a computer that could convert the JSON into something more readable for humans, and who knew that was the first thing they needed to do?
You might as well have asked how many English speakers could solve the questions if they were in Chinese. All of them. They would call up someone who spoke Chinese, pay them to translate the questions, then solve them. Or failing that, they would go to the bookstore, buy books on learning Chinese, and solve them three years from now.
Bingo. We simply made a test for which we are well trained. We are constantly making real-time decisions with our eyes. Interestingly, certain monkeys are much better at certain kinds of visual pattern recognition than we are. They might laugh and think humans haven’t reached AGI yet.
> LLM-powered NPCs is one thing I'm most excited about in the future of gaming
Incoming new type of health crisis: video game addiction coupled with LLM-induced psychosis. Dudes spending 12 hours a day farming gold in an MMORPG while flirting with their AI girlfriend sidekick.