I’d argue that ‘a bunch of additional code’ to solve for memory safety is exactly what you’re doing in the ‘defining memory safety away’ example with Rust or Swift.
It’s just code you didn’t write and thus likely don’t understand as well.
This can potentially lead to performance and/or control flow issues that get incredibly difficult to debug.
That sounds a bit unfair. All that code we neither wrote nor understand is, in the case of Rust, the borrow checker or the compiler itself doing what it does best, i.e., “defining memory safety away”. If that’s the case, then labeling such tooling and language-enforced memory safety mechanisms as “a bunch of additional code…you didn’t write and…don’t understand” seems somewhat inaccurate, no?
So? That wasn't the claim. The GP poster said this:
> This can potentially lead to performance and/or control flow issues that get incredibly difficult to debug.
Writing a linked list in rust isn't difficult because of control flow issues, or because rust makes code harder to debug. (If you've spent any time in rust, you quickly learn that the opposite is true.) Linked lists are simply a bad match for the constraints rust's borrow checker puts on your code.
In the same way, writing an OS kernel or a high-performance b-tree is a hard problem for javascript. So what? Every language has things it's bad at. Design your program differently or use a different language.
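To make the linked-list point concrete, here's a minimal sketch (illustrative only, not anyone's production code): a singly linked list works fine with Box, but a doubly linked node needs two pointers aimed at it, which is exactly what single ownership forbids. The usual safe workaround is Rc<RefCell<...>> with Weak breaking the cycle (the standard library's LinkedList uses unsafe internally instead).

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Singly linked is easy: Box gives each node exactly one owner.
struct Node {
    value: i32,
    next: Option<Box<Node>>,
}

// Doubly linked is where the borrow checker pushes back: every node needs
// two pointers aimed at it (next from one neighbour, prev from the other),
// which clashes with Box's single ownership. The usual safe workaround is
// shared ownership plus interior mutability, with Weak breaking the cycle.
struct DNode {
    value: i32,
    next: Option<Rc<RefCell<DNode>>>,
    prev: Option<Weak<RefCell<DNode>>>,
}

fn main() {
    let head = Node { value: 1, next: Some(Box::new(Node { value: 2, next: None })) };

    let first = Rc::new(RefCell::new(DNode { value: 10, next: None, prev: None }));
    let second = Rc::new(RefCell::new(DNode {
        value: 20,
        next: None,
        prev: Some(Rc::downgrade(&first)),
    }));
    first.borrow_mut().next = Some(Rc::clone(&second));

    println!("{} -> {}", head.value, second.borrow().value);
}
```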
> This can potentially lead to performance and/or control flow issues that get incredibly difficult to debug.
The borrow checker only runs at compile-time. It doesn't change the semantic meaning - or the resulting performance - of your code.
The borrow checker makes rust a much more difficult and frustrating language to learn. The compiler will refuse to compile your code entirely if you violate its rules. But there's nothing magical going on in the compiler that changes your program. A rust binary is almost identical to the equivalent C binary.
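A tiny illustration of that point (a hypothetical snippet, not from any real project): the commented-out lines are rejected before a binary is ever produced, and the version that compiles carries no extra runtime machinery from the borrow checker.

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    // Rejected at compile time: `first` would borrow `v` immutably while
    // `v.push(4)` needs a mutable borrow. Uncomment to see the error.
    // let first = &v[0];
    // v.push(4);
    // println!("{first}");

    // Reordered to satisfy the borrow checker. No runtime checks were added;
    // the borrow checker's work ends once compilation succeeds.
    v.push(4);
    let first = v[0];
    println!("{first}");
}
```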
IIRC Nebula has an exclusivity clause in their contract not allowing content creators to upload to other platforms, though I could be thinking of CuriosityStream.
You must mean CuriosityStream because most of my favourite Nebula creators are also YouTubers. Usually, the Nebula version of their video is slightly longer, too.
This point was raised in the video as well, that Nebula has an exclusivity clause. Occasionally, there is a Nebula-exclusive upload, but my experience is the same as yours; most videos are also uploaded to YouTube (usually with a sponsored ad inserted).
Didn’t know the people behind the mp3 format were into tooling for metalworking. I guess it makes sense: it’s a practical application of sound, and they are a research institution.
I wonder if the metal can hear the difference if it’s not the full 192 kHz.
Actually, I heard back in the early 90s how Telefunken developed the PAL TV standard (and also RGB, YUV, and some other things you can now read about on Wikipedia).
- They worked like Gallup: they asked people off the street to answer simple questions like "is this color red or yellow?", and with a large number of samples they got a statistical approximation of the human eye's sensitivity curves, which they then used directly to pick the wavelengths for red, green, and blue.
Fraunhofer, as far as I know, used a similar approach, but for sound, and ended up with a model of the human ear's sensitivity.
So what I want to say is: this kind of research doesn't need many scientists, but it does need a wide enough sample and good, reliable execution of the math.
BTW, much later I read about research conducted by the US Air Force, targeting some ideal pilot size in order to make the most convenient (and, of course, cheapest) airplane controls.
But the results were so disappointing that they decided to pivot: instead of settling on one standard size, they designed what is now usual for us: adjustable seats, tilting steering wheels, and adjustable pedals.
They shine through my windows at night and are truly horrific.
They’re down the entire alleyway behind my place, and a walk to the grocery store at 7pm during the winter makes your body and mind think it’s sunrise.
Adding together all the different standards/feature sets a chip supports and then aggregating the bandwidth into a single number is actually a very reasonable way to arrive at an approximation for total chip computational throughput.
Ultimately, unless the chip architecture is oversubscribed or overloaded (unsure what the right term is), the features are all meant to be used simultaneously and thus the bits being read/written have to come from somewhere.
That somewhere is a % of the total throughput of the chip.
Stated another way — people forget that there’s almost always a single piece of silicon backing the total bandwidth throughput of modern computing devices regardless of what ‘standard’ is being used.
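To put rough numbers on it (a sketch with made-up figures, not the specs of any actual chip): summing the per-interface peaks gives the aggregate the silicon would have to sustain if everything ran at once, and each interface's share of that total.

```rust
// Back-of-the-envelope sketch of the "add up the interfaces" argument.
// The figures below are made-up placeholders, not any real chip's spec sheet.
fn main() {
    // (interface, claimed peak bandwidth in GB/s)
    let interfaces = [
        ("PCIe lanes", 64.0_f64),
        ("USB4 ports", 10.0),
        ("Display outputs", 32.0),
        ("Memory controller", 100.0),
    ];

    let total: f64 = interfaces.iter().map(|(_, gbs)| *gbs).sum();

    for (name, gbs) in &interfaces {
        // Each interface's share of the aggregate traffic the silicon must move.
        println!("{name:>17}: {gbs:6.1} GB/s ({:4.1}%)", 100.0 * gbs / total);
    }
    println!("{:>17}: {total:6.1} GB/s", "aggregate");
}
```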
I've taken multiple 10-year-old T-shirts with holes through 10% of them into the Patagonia store, and they've let me walk out with new product off the rack.
Think of it as unwinding a program all the way until it's just a list of instructions. You can know exactly how long that program will take, and it will always take that same time.
But will it always solve the task? Because without that, it is trivially easy to “solve” the halting problem by just declaring that the Turing machine halts after X steps.
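For what it's worth, here is a minimal sketch of the "unwound program" idea from the parent, using a hypothetical fixed-size sum: the unrolled version has a step count you can read off before running it, which, as noted above, only gives you a guaranteed bound for programs you can actually unroll this way.

```rust
// Minimal sketch of "unwind it into a flat list of instructions": with a
// fixed bound, the loop is equivalent to straight-line code whose step
// count is known before it ever runs.
fn sum_loop(xs: &[i64; 4]) -> i64 {
    let mut acc = 0;
    for x in xs {
        acc += *x;
    }
    acc
}

// The fully unrolled form: no backward branches, always the same work.
fn sum_unrolled(xs: &[i64; 4]) -> i64 {
    xs[0] + xs[1] + xs[2] + xs[3]
}

fn main() {
    let xs = [1, 2, 3, 4];
    assert_eq!(sum_loop(&xs), sum_unrolled(&xs));
    println!("sum: {}", sum_unrolled(&xs));
}
```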