The POs are fun to play with, but from experience I would much rather recommend getting a used Mk1 Novation Circuit. For $150-$200 you get a four-track drum machine plus two tracks of a great synth engine, with a built-in mixer and simple effects, MIDI in if you want to use a keyboard, and the thing is still compact and battery powered.
Many hardware manufacturers do the same by claiming MIL-STD-810 compliance, which can mean almost anything unless further details are specified.
E.g. you can choose to test against MIL-STD-810 Method 500.6 Procedure I, to verify that the device tolerates low pressure such that it can be safely transported via air freight. That is a test no consumer electronics product in existence is going to fail.
Part of the reason the MIT license and friends are more popular surely has to do with the license text itself. I can understand the MIT license, and my corporate lawyer can easily understand all the consequences of using something under the MIT license. With the GPL, not so much: it's verbose and complex and has different versions.
Would it really be impossible to have a license with the brevity of MIT but the consequences of the GPL?
Brevity maybe, but ease of understanding no. Copyleft licenses interact with copyright law in ways that permissive licences just don't need to. The closest you can get is probably MPL-2.0.
The GPL is particularly bad here, as it pretends to define what is or isn't a derivative work, which is outside the scope of a licence but within the scope of a court. The EUPL was created partly because EU directives constrain the viral clause in ways the FSF won't admit to, although that one isn't simple either (I'm not a fan of its compatibility clause).
No, the MIT license is short exactly because it has so few restrictions. You simply can't encode the desired result of the GPL into 160 words the way MIT can.
> I can understand the MIT license, and my corp lawyer can easily understand all the consequences of using something under MIT license.
Sounds like you need a better lawyer.
The consequences of the GPL are not all that complicated. In most cases it boils down to offering the source if you distribute the code outside your organisation.
Therein lies the problem. When dealing with the law, you don't want to be relatively sure that you won't go to prison or won't get sued for $1M, you want to be completely sure.
Something like the GPL is complex and non-standard as far as its interactions with the legal system go, because it is essentially a hack of copyright law. If it goes before a court, you have no idea what might happen. So rather than deal with that kind of complexity and uncertainty, you'd use something under the MIT or Apache license, which is just much better understood.
> Therein lies the problem. When dealing with the law, you don't want to be relatively sure that you won't go to prison or won't get sued for $1M, you want to be completely sure.
Not really. Unless you are redistributing, it does not affect you at all.
> Something like the GPL is complex and non-standard as far as its interactions with the legal system go
It is widely used, and most people are running some copyleft software anyway, most commonly the GPL or a variant such as the LGPL: Linux servers, Android phones, routers, web browsers...
Say you have linked against a GPL'd dependency which you have not modified but for which there is no API-compatible substitute. Are you now bound by the terms of the GPL? The FSF says "yes", which tends to surprise people. But if there is an API-compatible, non-GPL'd substitute, the answer arguably becomes "no". Note that this depends neither on your work nor on your direct dependency, but on the existence of some random artifact out in the ecosystem, which may come into existence at any time.
An API is not a random artifact but a dedicated feature designed to separate one program from another. It also implies much more stability for the code's external connections.
It's like saying that a display is a "random artifact" of a computer that makes no difference to its design or usage.
There are whole industries built around the notion that "License: MIT" is all that is required to meet the notice requirements in the license. So I wouldn't say that the MIT license is easy to understand.
> I can understand the MIT license, and my corp lawyer can easily understand all the consequences of using something under MIT license. With the GPL, not so much.
Any IP lawyer who hasn't come across the GPL yet is probably not worth listening to.
I mean, would you listen to a bridge engineer who hasn't yet heard of a calculator? Sure, they may understand their shit, but if they haven't heard of calculators yet they are clearly not in the industry.
Any IP lawyer who hasn't yet read the GPL and formed a professional opinion on it isn't equipped to handle IP matters at all.
Au contraire, the Rust (or other "modern" lang) dependencies come in addition to the OS dependencies. The C (or other "old" lang) programs typically have very few dependencies apart from the OS, with absolutely glacial release cycles. And unless you're on Arch or similar, the OS package manager updates are primarily just minor version bumps.
It seems pretty indisputable that "modern" langs substantially increase your supply chain attack surface. Of course some (like JS) are worse than others.
As a result, whether the net security benefit of using Rust vs C is positive or negative depends heavily on the program in question. There is a huge difference between e.g. Firefox and Wireguard in this respect.
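To make that concrete, here is a rough sketch of how you might eyeball the two surfaces. It assumes a Rust project with a Cargo.lock in the current directory and a built C binary at ./myprog (both placeholder names); counting lockfile entries vs. linked shared libraries is crude, but the order-of-magnitude gap it reveals is usually the point:

    # Rough sketch: compare the third-party surface of a Rust project
    # with that of a C binary. "Cargo.lock" and "./myprog" are placeholders.
    import re
    import subprocess

    # Every [[package]] entry in Cargo.lock is a crate your build pulls in.
    with open("Cargo.lock") as f:
        crates = len(re.findall(r"^\[\[package\]\]", f.read(), flags=re.M))

    # For a C program, the dependencies are roughly its shared libraries.
    ldd = subprocess.run(["ldd", "./myprog"], capture_output=True, text=True)
    libs = len([line for line in ldd.stdout.splitlines() if "=>" in line])

    print(f"Rust crates in lockfile: {crates}")
    print(f"shared libraries linked by the C binary: {libs}")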
> The C (or other "old" lang) programs typically have very few dependencies apart from the OS, with absolutely glacial release cycles.
Very few by number, but that's more an artifact of C's poor package management than a true reflection of how much third-party code you're actually pulling in. Something like APR is the equivalent of hundreds of Rust packages, and comes with a similar security risk. Sure, maybe there's someone who signs off on each release, but do you think they personally know and validate each of the dozen or more mostly-independent projects that actually make up their library? No, they delegate to separate maintainers - that information just isn't surfaced in the package management system.
I think Python is a bit special in this case, as many people tend to use it more like duct tape for connecting different libraries (which often contain compiled and optimized code) rather than a language for doing stuff bottom-up.
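For instance, a typical "duct tape" script does almost no computation in Python itself; it just wires compiled libraries together. A minimal sketch, using NumPy and SciPy as stand-ins:

    # Python as duct tape: the actual number crunching happens in
    # compiled code inside NumPy and SciPy, not in Python.
    import numpy as np
    from scipy import signal

    # Build a noisy test signal with NumPy (C-backed arrays)...
    t = np.linspace(0, 1, 1000)
    x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(t.size)

    # ...and low-pass filter it with SciPy (wrapping compiled routines).
    b, a = signal.butter(4, 0.1)
    y = signal.filtfilt(b, a, x)

    print(f"noisy std: {x.std():.3f}, filtered std: {y.std():.3f}")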
It's also what scientists have done to visualize solutions of PDEs since the 1960s. The author should download ParaView and give it a twirl to get this perspective.
First create a mesh (Sources -> Plane for 2D, or Sources -> Box if you want to do it in 3D). Set reasonably high values for Resolution on this source. Then use a filter to apply your function, either Filters -> Alphabetical -> Calculator for easy stuff, or Filters -> Alphabetical -> Python Calculator if you want complicated stuff. The "coordsX" etc. are your spatial coordinates on the mesh. Pick whatever color map you want (diverging types are good for this), change the limits on coloring, use a log scale, whatever.
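If you'd rather script those steps, here is a minimal pvpython sketch of the same 2D workflow (the function string is a placeholder; substitute your own):

    # Minimal pvpython sketch of the 2D workflow described above.
    from paraview.simple import *

    # Sources -> Plane, with reasonably high resolution
    plane = Plane(XResolution=500, YResolution=500)

    # Filters -> Calculator: evaluate a function of the mesh coordinates
    calc = Calculator(Input=plane)
    calc.ResultArrayName = 'error'
    calc.Function = 'abs(sin(10*coordsX)*cos(10*coordsY))'  # placeholder

    # Color by the result; tweak color map, limits and log scale afterwards
    display = Show(calc)
    ColorBy(display, ('POINTS', 'error'))
    display.RescaleTransferFunctionToDataRange(True)
    Render()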
If you do this in 3D on a box, you can then use a slice to scrub through the result on an arbitrarily oriented plane. You could visualize translucent isosurfaces of constant "error" and raytrace them. Or you could take the gradient of your "error" and plot it as a vector field. With a bit of legwork you can add a fourth coordinate (time) and make animations. And you can combine all of these. The sky is the limit.
In Norway there is a state-owned monopoly on selling wine and liquor (anything above 4.75% ABV). They have 350+ physical shops, a large online shop and around $2bn annual revenue. This makes them one of the largest purchasers of wine and spirits in Europe, and they can get some very good deals.
So even though you have high taxes and a restrictive alcohol policy, the end result is shops with high customer satisfaction, because they have very competent staff, an excellent selection and surprisingly good prices for quality products.
The downsides are the limited opening hours and the absence of cheap low-quality wine. The tax disproportionately impacts the low-quality stuff: almost nobody will buy shitty wine at $7 per bottle when the decent stuff costs $10, so the shitty wine just doesn't get imported. But for most of the population these are minor drawbacks.
IIUC your talk "just" suggests using sandbox-exec on Mac, which (as you point out) is sadly labeled as deprecated.
Is that really the best solution the world has to offer in 2025? LLMs aside, there is a whole host of supply chain risk issues that would be resolved by deploying convenient and strong sandboxes everywhere.
1. A sandbox on someone else's computer: Claude Code for web, Codex Cloud, Gemini Jules, GitHub Codespaces, ChatGPT/Claude Code Interpreter.
2. A Docker container. I think these are robust enough to be safe (a minimal sketch of this approach follows the list).
3. sandbox-exec related tricks. I haven't poked hard enough at Claude Code's new sandbox-exec sandbox yet; they only released it on Monday. OpenAI Codex CLI was using sandbox-exec too last time I looked, but again, I've not reviewed it enough to be comfortable with it.
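For option 2, a sketch of what a locked-down container invocation could look like, driven from Python. The image, resource limits and paths are illustrative assumptions, not a vetted security recipe:

    # Sketch: run an untrusted command inside a locked-down Docker container.
    # Image name, limits and paths are placeholder assumptions.
    import subprocess

    subprocess.run([
        "docker", "run", "--rm",
        "--network", "none",            # no network access from inside
        "--read-only",                  # read-only root filesystem
        "--cap-drop", "ALL",            # drop all Linux capabilities
        "--memory", "1g",               # cap memory...
        "--pids-limit", "256",          # ...and process count
        "-v", "/tmp/agent-work:/work",  # the only writable host directory
        "-w", "/work",
        "python:3.12-slim",
        "python", "untrusted_script.py",
    ], check=True)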
I'm hoping more credible options come along for the sandboxing problems.
I found Vibekit's (open source, https://docs.vibekit.sh/sdk) approach of letting you choose your own sandboxing solution for any coding CLI the most flexible. It also works with openCode and local or cloud sandboxes! A really quality piece of software that more devs should know about. I'm surprised Simon hasn't tried it yet.
Yeah, they shipped that feature on Monday; you can access it via the /sandbox command. I haven't put it through its paces enough to get a feel for whether I trust it yet, though.
I'm considering NFS with RDMA for a handful of CFD workstations plus one file server on a 25GbE network. Anyone know if this will perform well? I will be using XFS on some NVMe disks as the base FS on the file server.
Quite some time ago I implemented NFS for a small HPC cluster on a 40GbE network. A colleague set up RDMA later, since at the start it didn't work with the available Ubuntu kernel. The file server was all-NVMe too. While the raw performance using ZFS was kind of underwhelming (mdadm+XFS was about 2x faster), network performance was fine, I'd argue: serial transfers easily hit ~4GB/s on a single node, and 4K benchmarking with fio was comparable to a good SATA SSD (IOPS and throughput) on multiple clients in parallel!
Yes, you might want to tune your NFS parameters, stick to NFSv4.2, consider whether caching is appropriate for your workloads and at what level, and think about how much of your NFS + networking stack you can keep in kernel space if you decide to further upgrade your network's throughput or really expand it.
Also consider what your server and client machines will be running, some NFS clients suck. Linux on both ends works really well.
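To sanity-check a setup like this from a client, a fio run along the lines of the 4K benchmark mentioned above works well. A sketch, with the mount point, size and job counts as placeholders to adapt to your workload:

    # Sketch: 4K random-I/O benchmark against an NFS mount, driven from Python.
    # /mnt/nfs and all sizes/counts are placeholders.
    import subprocess

    subprocess.run([
        "fio",
        "--name=nfs-4k-randrw",
        "--directory=/mnt/nfs",   # your NFS mount point
        "--rw=randrw", "--bs=4k",
        "--size=4g",
        "--direct=1",             # bypass the client page cache
        "--ioengine=libaio", "--iodepth=32",
        "--numjobs=4",
        "--runtime=60", "--time_based",
        "--group_reporting",
    ], check=True)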