//Two AI experts who actually love the technology explain why chasing AGI might be the worst thing for AI's future—and why the current hype cycle could kill the field we're trying to save.//
> - Provides stable pricing in shorter term while accommodating price changes over longer term
How? If you pre-pay $5, your account is credited $5, and when you make an API request you're charged at whatever rate applies to the model you called at the time you use it. You aren't buying some virtual currency or locking in a specific price.
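A minimal sketch of that prepaid, pay-as-you-go scheme (class and rate values are hypothetical, not any provider's actual API or prices): the balance is plain dollars, and each request is metered at whatever the rate happens to be when the call is made.

```python
# Hypothetical prepaid-credit billing: dollars in, metered charges out.
# No virtual currency, no locked-in price.

class Account:
    def __init__(self):
        self.balance_usd = 0.0

    def prepay(self, amount_usd):
        # Pre-paying credits the balance dollar-for-dollar.
        self.balance_usd += amount_usd

    def charge_request(self, model, tokens, current_rates):
        # Billed at the rate in effect *at the time of the call*.
        cost = current_rates[model] * tokens / 1_000_000
        self.balance_usd -= cost
        return cost

# Rates per million tokens; the provider may change these at any time.
rates = {"model-a": 3.00}
acct = Account()
acct.prepay(5.00)
acct.charge_request("model-a", 10_000, rates)  # billed at today's rate
rates["model-a"] = 4.00                        # provider raises the price
acct.charge_request("model-a", 10_000, rates)  # same call now costs more
```

The point of the sketch: the $5 never buys anything at a fixed price; it just sits as a balance that later requests draw down at then-current rates.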
> - Excites engagement
More accurately, irritates customers by keeping their money without providing any service in return.
Regarding Sun's slogan: in the mid-80s I supported a VLSI-component and software-engineering group at Intel built from Motorola 68K-based Sun workstations, provisioned to engineering staff in groups of 5: one diskful server and 4 diskless clients attached via 10Mb thicknet, running Sun's bootp, NFS, and Yellow Pages. That was the meaning of "the network is the computer" to the Sun salespeople. It gave each engineer a VAX-11/780's worth of CPU performance at a time when compute had been provisioned at 10–15 engineers per 780. And the kit was all on or under the desk, not in a special room with AC and a raised floor. The internet was a 2400-baud leased line to an ARPA IMP, used only for file transfers with researchers at CMU. External mail and USENET went over UUCP via VAXes.
Lacks any quantitative claim of performance, and weasels all the way.
As companies compete within the rapidly expanding bubble and it nears the point of maximum inflation, expect an increasing frequency of raving use-case testimonials: companies know that which firms survive will be determined by mindshare in the interval immediately ahead of the pop. So expect the hype to keep accelerating.
--
Recall Jim Cramer of CNBC raving that all signs pointed to buy the day before the 2008 derivatives bubble popped.
(Since then he's retconned himself as having sounded the alarm on 2008)
All you need to make your case is an intelligible definition of thought as an activity.
So far your claim is trapped behind the observation that when an AI produces an output, it looks like thought to you.
In the vein of Searle's arguments about the appearance of cognition, and given your premise, consider the mechanics of consulting a book alongside the mechanics (so to speak) of solicited thought:
There's something you want to know, so you pick up a book, prompt the TOC or index, and it returns a page of stored thought. Whether the retrieved thought is appropriate and useful depends entirely on your judgment.
No one argues that books think.
Explain how interacting with an LLM to retrieve thought stored in its matrix is distinct from consulting a book in a manner that manifests thought.
If the distinction is only in the complexity of the device's internal retrieval mechanism, then explain precisely what about the LLM's mechanism brings its functioning into the realm of thought where a book's doesn't.
To do that you'll first need to formulate a definition of thinking that's about more than retrieval of stored thoughts.
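To make the mechanical contrast concrete, here is a toy illustration (entirely hypothetical code, not a claim about how any real LLM works): a "book" that can only return text stored verbatim under an index entry, versus a minimal bigram generator that composes sequences from learned word statistics, so its output need not exist anywhere in its training text.

```python
from collections import defaultdict
import random

# A "book": pure retrieval. The answer must already exist, verbatim.
book = {"gravity": "Objects attract in proportion to their masses."}

def consult_book(query):
    # Returns a stored passage or nothing; it never composes anything.
    return book.get(query)

def toy_bigram_generate(corpus, start, n):
    # A minimal bigram "model": each next word is drawn from the words
    # that followed the current word in the training text. The output
    # string need not appear anywhere in the corpus.
    follows = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    out = [start]
    for _ in range(n):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(random.choice(nxt))
    return " ".join(out)

random.seed(0)
corpus = "the cat sat on the mat"
generated = toy_bigram_generate(corpus, "the", 4)
```

Whether that compositional step amounts to "thought" is exactly the question at issue; the sketch only shows that the retrieval mechanics differ in kind, not merely in complexity.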
Or are you truly saying that your 'knowing thinking when you see it' is sufficient for a scientific discourse on the matter?
Imagine common operation of vehicles without windshield wipers!
"you get strikes for not watching the road"
What does Doctorow call this risk, where a machine's demand for your constant vigilance gets confused with liberation?
Another risk looms as vehicle occupants are officially required to act as back-seat drivers for a system that affords them no actual driving experience...
Everywhere you look at AI applications you see the hazard of mechanical mad-cow disease.
"Kenyans don't talk like AI!"