Variable refresh rate screens aren’t just about making the phone feel snappier; they’re also needed to make the battery last longer.
If your production volume isn’t high enough to justify having a custom screen cut, you are stuck with what is available on the market.
And even if 5” screens are available now in the form of NOS units or upcycled refurbs, that may not be the case 2 or 3, let alone 5+, years down the line.
So you have to go not only with what is available today but with what is still likely to be available throughout the expected usable lifetime of your product.
Nuclear program, ballistic missile program, drones, establishing and supporting multiple proxies in the region.
For a fraction of what they spent on all that, they could have built desalination plants on the Caspian Sea and a waterway capable of supplying water to their capital.
That dongle has its own Bluetooth stack and is exposing a standard audio device via USB.
Indeed, that currently seems to be the only way, but then the stack needs configuration input somehow, which in the case of this one requires proprietary Win/Mac software.
Regardless of this situation I actually think that websites like Archive and TWBM should be fully transparent.
A very large portion of citations on Wikipedia, for example, relies on them. Most of the pages that cite archived copies do so because the live version is no longer available. I would like some assurance that archive.is and the like are not altering their content in any way over time.
Unironically, content-sensitive hashing of archived pages might be one of the few use cases where something like a blockchain could actually be useful.
You do need some kind of reliable, distributed storage, though. The sequential nature of a blockchain also ensures that such stored content is held, no matter what, by every full node.
The hash to verify content is only half of the problem. You also need to store the _actual_ content of the page. What's the point of having Wikipedia reference a URL + hash if the page does not exist anymore?
A blockchain is, at its core, a distributed database; it is made for exactly this use case.
A quick check indicates that storing something on the Bitcoin blockchain costs about a dollar. How many millions (billions?) would Wikipedia need to spend to stash everything they reference in the blockchain?
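To make that rhetorical question concrete, here is a hedged back-of-envelope sketch. Both figures are rough assumptions, not measured data: the ~$1 per write comes from the parent's "quick check," and the citation count is only an assumed order of magnitude.

```python
# Back-of-envelope: cost of anchoring every Wikipedia citation on-chain.
cost_per_write_usd = 1.0      # assumed, per the parent's "about a dollar"
citation_count = 50_000_000   # assumed order of magnitude, not a real count

total_usd = cost_per_write_usd * citation_count
print(f"~${total_usd / 1e6:.0f} million")  # ~$50 million
```

And that would only cover one small write per citation (e.g. a hash), not the archived page content itself.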
> What's the point of having Wikipedia reference a URL + hash if the page does not exist anymore?
It would be way cheaper for Wikipedia to run a durable archive service themselves than to use the blockchain as an archive.
> A quick check indicates that storing something on the Bitcoin blockchain costs about a dollar
That's nonsensical; the price of using a service on a blockchain is essentially a floating value. That is the whole point of having a token in the first place: people who want storage and people who provide it together set the price of the service.
Last I checked, filecoin was a few cents per GB per month.
You can create a blockchain of kind-hearted people to store Wikipedia as well; it's really up to you. But comparing apples and oranges makes no sense.
> That's nonsensical, the price of using a service on a blockchain is essentially a floating value.
This is kind of a ridiculous response. The price of oil is also a floating value and yet it is not nonsensical to discuss the price of a barrel of oil.
Yes, the cost to store something on the bitcoin blockchain floats. Several sources indicate that roughly a dollar is a reasonable approximation currently. If you disagree I’d be interested in seeing your data.
> Last I checked, filecoin was a few cents per GB per month.
I don’t know a ton about filecoin but it seems like retrieving data is pretty cumbersome. It’s not clear that this would actually be useable for a Wikipedia reference archive.
> You can create a blockchain of kind hearted people to store Wikipedia as well, it's really up to you. But comparing apples and oranges makes no sense.
Blockchain for its own sake. Sure, you could create a custom blockchain. You could also just pay AWS for georeplicated blob storage and it would be way less complex.
You need both to generate the hash correctly at the point of archival and to store it in a way that cannot be modified later on.
Doing that with blockchain-like tech is one of the few use cases where the tech itself actually adds value.
Heck, you might even be able to store entire pages on a blockchain or blockchain-linked storage.
The problem with these sites is that we implicitly trust them. Unlike a book or other print media, where editing or destroying all existing unedited copies is effectively impossible, a shady actor could easily start editing archived news articles and other sites that are no longer publicly available.
This is getting to blockchain for the sake of blockchain.
If Wikipedia recorded the hash of every referenced page you could verify that the archive.is page is unchanged.
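A minimal sketch of that verification step, assuming Wikipedia stored a SHA-256 of the archived page's bytes at citation time (the function names here are hypothetical, not any real Wikipedia or archive.is API):

```python
import hashlib

def content_hash(page_bytes: bytes) -> str:
    """SHA-256 hex digest of the archived page's raw bytes."""
    return hashlib.sha256(page_bytes).hexdigest()

def is_unmodified(archived_bytes: bytes, recorded_hash: str) -> bool:
    """True if the archive copy still matches the hash recorded at citation time."""
    return content_hash(archived_bytes) == recorded_hash

# At citation time, the hash of the archived copy is recorded...
original = b"<html>news article as archived</html>"
recorded = content_hash(original)

# ...later, anyone can re-fetch the archived copy and check it.
tampered = b"<html>quietly edited article</html>"
print(is_unmodified(original, recorded))   # True
print(is_unmodified(tampered, recorded))   # False
```

The catch is that byte-exact hashing is brittle against harmless changes (re-rendered timestamps, ad markup), which is presumably why the thread above talks about content-sensitive hashing rather than plain file hashes.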
You could certainly argue that archive.is isn’t the right place to store archives (I have no idea) but attempting to move all this to the blockchain would be very expensive.
You only need the hash of the original content. No blockchain is necessary. The problem is that there is no source for that hash except for the scraper that archives it since people don't put the hash in a hyperlink.
If you download an ISO for a Linux OS, for example, they give you the hash of the file so you can check it. They don't build an entire blockchain just to validate the hash.
No, the Internet Archive is an organization that runs a web archive at archive.org while archive.is is an alternative domain for archive.today, a competing web archive run by an individual.
Symbolic AI didn’t die, though; it was merged with deep learning, either as a complement from the get-go, e.g. AlphaGo, which uses symbolic AI (tree search) to feed a deep neural network, or now as a post-processing / intervention technique for guiding and optimizing LLM outputs. Human-in-the-loop and MoR are very much symbolic AI techniques.
Exactly this, well said. Symbolic AI works so well that we don’t really think of it as AI anymore!
I know I, for one, was shocked to take my first AI course in undergrad and discover that it was mostly graph search algorithms… To say the least, those are still helpful in systems built around LLMs.
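For instance, the bread and butter of those courses, breadth-first search, is still a perfectly reasonable tool for walking a planning or tool-call graph around an LLM. A generic sketch, not tied to any particular framework:

```python
from collections import deque

def bfs_path(graph: dict, start, goal):
    """Shortest path (fewest edges) from start to goal in an
    unweighted directed graph, or None if goal is unreachable."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None

# Nodes could be tool-call states, planner steps, etc.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(bfs_path(graph, "A", "E"))  # ['A', 'B', 'D', 'E']
```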
Which, of course, is what makes Mr. Marcus so painfully wrong!
In theory, yes. The problem is that even hardware x86 emulation, done to run x86 code natively without recompiling, can drag you into a legal mess that any Western company will avoid.
NVIDIA got pinched for this over a decade ago.
I’m not entirely sure how Qualcomm and Apple didn’t.
But overall, the more you try to make an x86-enabled alternative viable, the more likely you’ll get served with papers, and even if you win, it would take a decade and cost hundreds of millions to fight.