TimSchumann's comments | Hacker News

It helps when you never question whether, as in his own essay describing other ‘bad writers’ weaving falsehoods, you’re the one lying to yourself.


Especially about your own significance.

I think PG's essays are (mostly) well-written, and are worth studying as examples of persuasive rhetoric.

But rhetoric has no morals and no relationship to truth.

Persuasion is what salespeople do. Grifters, lawyers, PR firms, politicians, and CEOs all make a living from being persuasive.

That doesn't mean you can trust them not to lie to you.

It also doesn't mean that flawed rhetoric implies flawed beliefs. Suggesting it does is itself a misleading rhetorical trick.


I’d argue that ‘a bunch of additional code’ to solve for memory safety is exactly what you’re doing in the ‘defining memory safety away’ example with Rust or Swift.

It’s just code you didn’t write and thus likely don’t understand as well.

This can potentially lead to performance and/or control flow issues that get incredibly difficult to debug.
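As a concrete illustration, here is a minimal sketch (toy code, not from any real project) of one piece of such invisible code: Rust emits a bounds check on every slice index, a runtime branch you never wrote that can cost time in hot loops unless the optimizer proves it away.

    // Minimal sketch: each `xs[i]` below compiles to a hidden bounds check.
    fn gather(xs: &[u64], indices: &[usize]) -> u64 {
        let mut total = 0;
        for &i in indices {
            // The compiler inserts the equivalent of
            // `if i >= xs.len() { panic!() }` here - code you didn't write,
            // which can show up in profiles of hot loops.
            total += xs[i];
        }
        total
    }

    fn main() {
        let xs = [10u64, 20, 30];
        println!("{}", gather(&xs, &[0, 2])); // prints 40
    }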


That sounds a bit unfair. In the case of Rust, all that code we neither wrote nor understand is the borrow checker or the compiler itself doing what it does best - i.e., “defining memory safety away”. If that’s the case, then labeling such tooling and language-enforced memory safety mechanisms as “a bunch of additional code…you didn’t write and…don’t understand” seems somewhat inaccurate, no?


It is quite fair as far as Rust is concerned. Even simple data structures, like a doubly linked list, are hard problems for Rust.


So? That wasn't the claim. The GP poster said this:

> This can potentially lead to performance and/or control flow issues that get incredibly difficult to debug.

Writing a linked list in Rust isn't difficult because of control flow issues, or because Rust makes code harder to debug. (If you've spent any time in Rust, you quickly learn that the opposite is true.) Linked lists are simply a bad match for the constraints Rust's borrow checker puts on your code.

In the same way, writing an OS kernel or a high-performance B-tree is a hard problem for JavaScript. So what? Every language has things it's bad at. Design your program differently or use a different language.
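To be concrete, here's a minimal sketch of the usual safe workaround (toy code): a doubly linked node can't have two owners, so the idiomatic pattern is Rc for the forward link, a non-owning Weak for the back link, and RefCell for interior mutability - exactly the extra ceremony the borrow checker forces on this data structure.

    use std::cell::RefCell;
    use std::rc::{Rc, Weak};

    // Shared ownership forward, non-owning pointer backward.
    struct Node {
        value: i32,
        next: Option<Rc<RefCell<Node>>>,
        prev: Option<Weak<RefCell<Node>>>,
    }

    fn main() {
        let first = Rc::new(RefCell::new(Node { value: 1, next: None, prev: None }));
        let second = Rc::new(RefCell::new(Node { value: 2, next: None, prev: None }));
        first.borrow_mut().next = Some(Rc::clone(&second));
        second.borrow_mut().prev = Some(Rc::downgrade(&first));

        // Walk the links in both directions.
        let fwd = first.borrow().next.as_ref().map(|n| n.borrow().value);
        let back = second.borrow().prev.as_ref().and_then(|w| w.upgrade()).map(|n| n.borrow().value);
        println!("forward: {:?}, backward: {:?}", fwd, back); // Some(2), Some(1)
    }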


> This can potentially lead to performance and/or control flow issues that get incredibly difficult to debug.

The borrow checker only runs at compile time. It doesn't change the semantic meaning - or the resulting performance - of your code.

The borrow checker makes Rust a much more difficult and frustrating language to learn. The compiler will refuse to compile your code entirely if you violate its rules. But there's nothing magical going on in the compiler that changes your program. A Rust binary is almost identical to the equivalent C binary.
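A minimal sketch of what that means in practice (toy code): the rule below is enforced entirely at compile time, and once the code compiles, nothing of the check remains in the binary.

    fn main() {
        let mut s = String::from("hi");
        let r = &s; // immutable borrow of `s`

        // Uncommenting the next line is a compile-time error, because `s`
        // can't be mutated while the borrow `r` is still live:
        // s.push('!');

        println!("{r}"); // last use of `r`; the borrow ends here
        s.push('!');     // fine now - and no runtime check was ever emitted
        println!("{s}");
    }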


IIRC Nebula has an exclusivity clause in their contract not allowing content creators to upload to other platforms, though I could be thinking of CuriosityStream.


You must mean CuriosityStream because most of my favourite Nebula creators are also YouTubers. Usually, the Nebula version of their video is slightly longer, too.


This point was raised in the video as well, that Nebula has an exclusivity clause. Occasionally, there is a Nebula-exclusive upload, but my experience is the same as yours; most videos are also uploaded to YouTube (usually with a sponsored ad inserted).


It could be a time-based clause: Nebula-exclusive for two weeks prior to any other upload, etc.


Yeah in the video he brings that up. Nebula + YouTube only.


That’s good to know. That makes it a hard no for me.


Didn’t know the people behind the mp3 format were into tooling for metalworking. Guess it makes sense: it involves a practical application of sound, and they are a research institution.

I wonder if the metal can hear the difference if it’s not the full 192 kHz.


That's not the same group. "Fraunhofer" is not a single group but the umbrella organization for 76 different institutes: https://en.wikipedia.org/wiki/Fraunhofer_Society


I'm not sure we can call it "the people behind the mp3 format". Fraunhofer is a very big institution.


May be "PeopleS behind the mp3 format"? :))))

Really, I hear in early 90s, how Telefunken developed PAL TV standard (and also RGB, YUV and some other things, now you could read about on Wikipedia).

- They working like Gallop - asked people from street, to answer simple questions like "is this color Red or Yellow?", and with large number of samples they got statistical approximation, about curves of sensitivity of human eye, and then just use these approximation as direct wavelengths for Red/Green/Blue respectively.

Fraunhofer, as I know, used similar approach, but for sound, and got model of sensitivity of human ear.

So, what I want to say, for these researches don't need many scientists, but need wide enough sample and good reliable execution of math.

BTW, much later I read about research conducted in US air forces, targeting some ideal human pilot size to made most convenient plane control (and sure cheapest).

But they got so disappointing results, that have decide to make pivot - instead of make one standard size, they designed usual for us now adjustable chairs, tilting steering wheel, and pedals with adjustable suspension.


They shine through my windows at night and are truly horrific.

They’re down the entire alleyway behind my place, and a walk to the grocery store at 7pm during the winter makes your body and mind think it’s sunrise.


Listen more than you speak.


Please retain counsel.


Adding together all the different standards/feature sets a chip supports and then aggregating the bandwidth into a single number is actually a very reasonable way to arrive at an approximation for total chip computational throughput.

Ultimately, unless the chip architecture is oversubscribed - i.e., its interfaces together can demand more bandwidth than the silicon can actually supply - the features are all meant to be usable simultaneously, and thus the bits being read/written have to come from somewhere.

That somewhere is a % of the total throughput of the chip.

Stated another way — people forget that there’s almost always a single piece of silicon backing the total bandwidth throughput of modern computing devices regardless of what ‘standard’ is being used.
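As a toy illustration of that arithmetic (the interfaces and numbers below are hypothetical, not taken from any real chip):

    fn main() {
        // Hypothetical peak bandwidth per interface, in GB/s.
        let interfaces = [
            ("PCIe 5.0 x16", 64.0),
            ("Memory bus", 100.0),
            ("USB4 ports", 10.0),
            ("Display out", 8.0),
        ];

        // If every feature is meant to be usable at once, all of those bits
        // cross the same piece of silicon, so the sum approximates the total
        // throughput the die must sustain.
        let total: f64 = interfaces.iter().map(|&(_, bw)| bw).sum();
        println!("aggregate peak throughput: {total} GB/s"); // 182 GB/s
    }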


Yeah this is odd.

I've taken multiple 10-year-old T-shirts, with holes through 10% of them, into the Patagonia store, and they've let me walk out with new product off the rack.


> Without CPUs, we can be freed from the tyranny of the halting problem.

Can someone please explain to me what this even means in this context?

Serious question.


He also claims that the cardinality of the reals is the same as that of the integers.

https://news.ycombinator.com/item?id=36074287

You could say he had a history of using big words to talk shit.
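For reference, the standard counterexample is Cantor's diagonal argument; a quick sketch: suppose some $f : \mathbb{N} \to [0,1]$ enumerated every real, with $f(n)$ having decimal digits $d_{n,1} d_{n,2} d_{n,3} \ldots$ Build $x \in [0,1]$ by making its $n$-th digit differ from the diagonal, say $x_n = 5$ if $d_{n,n} \neq 5$ and $x_n = 6$ otherwise. Then $x$ disagrees with every $f(n)$ in at least one digit, so no enumeration exists and $|\mathbb{R}| > |\mathbb{N}|$.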


A neural network is perfectly deterministic; the runtime is predictable before you run it. Which I don't think is going to be true much longer.

https://news.ycombinator.com/item?id=41623474


It's gibberish. For one thing... https://arxiv.org/abs/1901.03429


Think of it as unwinding a program all the way until it's just a list of instructions. You can know exactly how long that program will take, and it will always take that same time.
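A minimal sketch of the idea (toy Rust): a loop with a fixed bound unrolls into straight-line code whose instruction count - and therefore running time - is known before you run it.

    fn main() {
        // Looped form: the trip count is fixed at compile time.
        let mut looped = 0;
        for i in 0..4 {
            looped += i;
        }

        // Fully unrolled form: a straight list of instructions with no
        // branches; by construction it does the same work every execution.
        let mut unrolled = 0;
        unrolled += 0;
        unrolled += 1;
        unrolled += 2;
        unrolled += 3;

        assert_eq!(looped, unrolled);
        println!("{looped}"); // 6
    }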


But will it always solve the task? Because without that, it is trivially easy to “solve” the halting problem by just declaring that the Turing machine halts after X steps.


Wouldn’t this also imply a lack of Turing completeness, and thus not be good for general purpose computing?

