I'm confused; of course if you await immediately, it's not going to have a chance to do anything else _before_ returning.

If you do the following, it works as expected:

    import asyncio

    async def child():
        print("child start")
        await asyncio.sleep(0)
        print("child end")

    async def parent():
        print("parent before")
        task = child()   # just a coroutine object; nothing has run yet
        print("parent after")
        await task       # only now does child() start executing

    asyncio.run(parent())
The real difference is that the coroutine is not going to do _anything_ until it is awaited, but I don't think the asyncio task is really different in a meaningful way. It's just a wrapper with an actual task manager so you can run things 'concurrently'.
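
For comparison, here's a minimal sketch (my own, using asyncio.create_task, which needs a running event loop) of what the task wrapper changes: the task is scheduled as soon as it's created, so the child gets a chance to run at the next suspension point, even before the explicit await.

    import asyncio

    async def child():
        print("child start")
        await asyncio.sleep(0)
        print("child end")

    async def parent():
        print("parent before")
        task = asyncio.create_task(child())  # scheduled on the loop immediately
        await asyncio.sleep(0)               # yield control; "child start" prints here
        print("parent after")
        await task                           # wait for the child to finish

    asyncio.run(parent())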

Python does have two different kinds of coroutines, but they're generators and async functions. You can go from one to the other.
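
For instance, a small sketch (my own, using types.coroutine from the standard library) of awaiting a plain generator from inside an async function:

    import asyncio
    import types

    @types.coroutine
    def yield_once():       # an ordinary generator...
        yield               # ...whose bare yield hands control back to the event loop once

    async def demo():
        print("before")
        await yield_once()  # a generator-based coroutine awaited from an async function
        print("after")

    asyncio.run(demo())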


None for the foreseeable future I hope.

Why is that? I must have missed the episode of Black Mirror you watched that would make that a bad thing.

Antimatter is more of a Star Trek (and Revelation Space and a few others) issue than a Black Mirror episode.

I am far from a domain expert; I only know of four current or speculated uses for antimatter: energy storage, inducing nuclear reactions, medical imaging, and one specific tumour removal method.

For the first one, antimatter has about 1000x the energy density of fission, but also, unlike a fission bomb, all of it reacts (with an equal mass of normal matter), which means 1 gram of the stuff is a bigger boom than Fat Man and Little Boy combined.

Fortunately, "15000 antihydrogen atoms" is a factor of 4e19 away from 1 gram, and even if it wasn't we'd probably have to fuse the antihydrogen into antilithium to hold that much in a not completely absurd storage system.
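
Back-of-envelope check on those two numbers (my own arithmetic, using E = mc^2 and Avogadro's number, not figures from the article):

    c = 3.0e8                  # m/s
    energy = 2e-3 * c**2       # 1 g antimatter + 1 g ordinary matter annihilating, ~1.8e14 J
    print(energy / 4.184e12)   # ~43 kt of TNT; Fat Man + Little Boy were ~36 kt combined

    atoms_per_gram = 6.022e23  # hydrogen is ~1 g/mol
    print(atoms_per_gram / 15_000)   # ~4e19, the factor quoted above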

Inducing nuclear reactions might make for some interesting propulsion systems, or might make atomic weapon proliferation even harder to prevent. That's expected to take around 10^18 antiprotons (the microgram level), which is still a factor of about 1e14 more than CERN announced; even if it works, this use is hypothetical because current production is so much less than that: https://en.wikipedia.org/wiki/Antimatter-catalyzed_nuclear_p...

Medical imaging is already done with positron sources (doesn't need complete antihydrogen atoms), and antiproton beam therapy doesn't need the antiprotons to be turned into antihydrogen at any point: https://home.cern/science/experiments/ace etc.


Well, what is the most obvious application of a highly volatile, energy-dense substance?

This substance can basically only do two things.

1) whatever ordinary hydrogen can

2) explode violently on contact with matter

Sure it's interesting to test 1) from a physics research point of view, but 2) is the only practical application that I know of.


As a luddite, when I get into my car to drive to work, I harness the power of several hundred violent explosions per minute to get my non-science-fiction car there. If I had a homelab-sized source of antimatter explosions, YouTube university in this hypothetical universe could help me harness it into something that generates electricity or something fun and useful, and not for killing people.

If your car suffers a catastrophic exothermic event with its fuel, that looks like a car fire.

If the same energy explodes as antimatter, then to the extent that it is not radiation shielded, what you get is a pulse of high-energy ionising radiation*; while to the extent that it is radiation shielded, it looks like order-of as many kg of TNT as the number of kWh stored, give or take.

* with a note that 10 joules of absorbed energy per kg of body mass is "if you're lucky you will fall into a coma and die in about a week, if you're unlucky you'll be conscious", and that 1 litre of ICE fuel ~= 10 kWh = 36 megajoules.
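
To put rough numbers on the kWh-to-TNT comparison (my own arithmetic, standard conversion factors):

    kwh = 3.6e6                 # joules per kWh
    kg_tnt = 4.184e6            # joules per kg of TNT
    print(kwh / kg_tnt)         # ~0.86 kg of TNT per kWh, hence "order-of" 1:1

    litre_of_fuel = 36e6        # ~36 MJ per litre of ICE fuel, as in the footnote
    print(litre_of_fuel / kwh, litre_of_fuel / kg_tnt)   # ~10 kWh, ~8.6 kg TNT equivalent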


Propulsion with antimatter drives is another application. That’s not consumer-facing though.

This exchange is somewhat hilarious. Oh, how on earth do we keep things safe and secure if everyone can see the code and verify what it does! Who would keep us safe if we turn our backs on unverifiable, unvetted, unprofitable security fixes by for-profit companies!


The biggest joke is that most proprietary routers, both consumer and enterprise grade, are often running some old, outdated version of custom-tuned OpenWrt, lol. This goes for TP-Link and almost everyone else.


> how on earth do we keep things safe and secure if everyone can see the code and verify what it does!

That's not always the silver bullet you seem to think it is. Have you ever tried to build something like Chromium, Firefox, or LLVM yourself? It's not realistic to do that on a mid-tier, let alone low-end, device.

Even when you go to the trouble of getting a local build set up, more often than not the build system immediately attempts to download opaque binary blobs of uncertain provenance. Try building some common pieces of software in a network isolated environment and you will likely be surprised at how poorly it goes.

If projects actually took this stuff seriously, you'd be able to bootstrap from a sectorlisp and pure human-readable source code without any binary blobs or network access involved. Instead we have the abomination that is npm.


Debian manages to build Chromium, Firefox, and LLVM on servers of multiple architectures, including quite slow riscv64 machines, without any network access during the builds, for any architecture.

https://buildd.debian.org/status/package.php?p=firefox-esr

See Bootstrappable Builds for starting from almost nothing; so far only GNU Guix and StageX have worked out how to start from the BB work to get a full distro. It should be fairly trivial for other distros too if they cared.

https://bootstrappable.org/ https://guix.gnu.org/blog/2023/the-full-source-bootstrap-bui... https://stagex.tools/


For context, I once found a bug in Chromium and fixed it; the initial build took a few days, on and off, on my development laptop, which was pretty beefy for the time. I say on and off because I had to interrupt the build whenever I wanted to do anything else computationally taxing. They have incremental builds and caches properly set up, so you can just continue where you left off. After the initial build it's pretty fast, 5 minutes or so per build for me. On a low-end device you're easily looking at a build time of a week or more if you're starting from scratch.


LLVM isn't so bad compared to the browsers. It's a relatively standard CMake build with a mostly self-contained C++ codebase and few third-party dependencies. You don't need a crazy Threadripper workstation to do a build in reasonable time; a somewhat modern 8-16 core desktop CPU should be able to do it in 10-20 minutes or faster. Based on compilation benchmarks I have seen, even some 15-year-old 4-core CPUs or 5-year-old mid/low-tier mobile CPUs can do it in under an hour.

Most importantly, you need to pay attention to RAM usage, reducing parallelism if necessary so that the build doesn't need to swap.


Ah, that was why. Oh well, I just needed to get the code to the server, so I didn't really need GitHub anyway.


Seems a bit harsh to leave out the rest of the apology and only focus on the part that is not much of an apology.


Just map quality q to e^q or something and it will be sublinear again.

Or more directly, if your argument for why effort scales linearly with perceived quality doesn't discuss how we perceive quality then something is wrong.

A more direct argument would be that it takes roughly an equal amount of effort to cut the remaining distance between a rough work and its ideal by a constant factor. Going from 90% to 99% takes about the same effort as going from 99% to 99.9%, but the latter only covers a tenth of the distance. If our perception is more sensitive to the _absolute_ size of the error, you get an exponential effort to improve something.
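
A toy version of that argument (my own sketch, not from the article), in which one unit of effort always cuts the remaining error by a factor of ten:

    import math

    for quality in (0.9, 0.99, 0.999, 0.9999):
        error = 1 - quality
        effort = -math.log10(error)   # units of effort needed under this model
        print(f"quality={quality}  error={error:.4f}  effort={effort:.0f}")

Each extra unit of effort closes only a tenth as much absolute distance as the one before, which is the exponential blow-up if perceived quality tracks the absolute error.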


Your first line assumes that `q` fails to refer to an objective property. The `e^q` space isn't quality, just as `e^t` isn't temperature (holding the property we are talking about fixed). Thus the comment ends up being circular.


The issue was with the word "it". In the sentence, that word is acting as an indirection to both q and e^q instead of referring to a unitary thing. So yes, "it" does become linear/sublinear, but "it" is no longer the original subject of discussion.


I don't think you're wrong but I think I failed to convey the point I wanted to make.

What I was getting at is that without an objective way of measuring quality, the whole idea of super- or sub-linearity becomes ill-defined. You can kind of define something to be sub-linear by definition, so the argument becomes tautological, or indeed circular.

So an article that talks about perceived quality, without any discussion of how people perceive quality or, more importantly, differences in quality, can say pretty much anything and it will be true for some definition of quality. You can't just silently assume perceived quality to be something objective; if you give no arguments, you should assume it to be subjective.


If it's unethical it's not because of what the robots.txt says.

Blindly violating it is bad manners, but deliberately scraping a single website over a month isn't the worst.


For the intersect page you probably want to order the users by the size of their shelf. For some of the more obscure combinations I'm mostly getting users who have read 10,000s of books, which is less useful than the users with <1000 books.


At what point are they feeding reviews into an LLM? From what I can tell, the only personal data they're using is which user read which books.


...What?

You're saying it's not zero-sum because, to make it zero-sum, you'd have to include the big profitable corporations that are pumping in money?

True, it's not zero-sum, but that's like saying a casino isn't zero-sum because you're playing against the house!

