Hacker News | cyrnel's comments


This is about a different part of the controversy, and doesn't respond to the allegation of a monetization proposal.


Yes it does. He's refuting that in this part of the post:

> When they finally did reply, they seem to have developed some sort of theory that I was interested in “access to PII”, which is entirely false. I have no interest in any PII, commercially or otherwise. As my private email published by Ruby Central demonstrates, my entire proposal was based solely on company-level information, with no information about individuals included in any way. Here’s their response, over three days later.


A very specific denial: "I didn't propose this specific type of monetization." It would be better if he had followed up with "Yes, I proposed monetization, but what I had in mind was this more specific, benign form of monetization:"


+1 for Node-RED. If we've learned anything from elasticsearch/redis/bitnami/and dozens of others, it should be "don't build important things on code that isn't enshittification-resistant"


What did elasticsearch/redis do so wrong?

AWS and Google stole their only revenue source.

Those guys did all the work and AWS and Google get to collect a cool hundred million each quarter for doing next to nothing. And the original authors can't compete with that.

Are we all gonna go sign up to "Enterprise Redis Cloud" to support them? I don't think so. They have nothing left. They got picked dry.

If they'd preserved a "no managed offerings" freedom for themselves, they could collect a few hundred million from each hyperscaler each quarter and put that directly into product. And their engineers would be extra nicely compensated.

But that's not how this story plays out. Big tech just takes things and figures out how to monetize them in ways that don't require giving anything back.

And we've been trained (by big tech?) to yell at the little guys that try to carve out a space for themselves.


Both are billion-dollar companies; we as individuals have nothing in common with them. Enshittification happens due to market conditions that apply to small and large companies alike. Redis and elasticsearch aren't underdogs fighting for the little guy; they are just a smaller-scale version of the same shit.

I'd rather have a software commons and have tech be owned by the workers and not soul-sucking corporations, no matter the size.


Sure, so would I. I would also prefer to live in a world without borders. Meanwhile, I'm just a peace activist.

Life on this imperfect Earth makes you grumpy too early if you are not pragmatic.


I know, right? It's sycophancy.

If you are actually against the policy and suspect a lot of people are too, then don't silence your employees by keeping their feedback isolated to 1:1s which you admit are ineffective.

Executives need clear feedback to avoid making major mistakes.


I don't think I've ever worked anywhere that executives are actually capable of taking feedback constructively

In all cases they have eventually just told me to shut up, no matter how diplomatic I try to be, and in some cases it has led to me being terminated

Feedback is just taken as a sign that you aren't aligned with their vision, so you have to go


Can't upvote this enough, and I live in NL where worker protections make it hard/impossible for them to fire me for stuff like this. I'm a very honest person, so I've never had issues speaking my mind to anyone because I also don't really believe in the whole hierarchy thing.

I have approached management with stats, hard facts and level headed, calm discussions many times. It doesn't make a difference. Execs do not give a shit, unless you're also an exec, and they will pretty much always ignore anything you have to say. If you don't align with whatever idea they have at that moment, you're "not being a team player"


Everyone has an ego. Everyone wants to exercise their power their way when it's their time to shine. I wish I could upvote this twice. If you are reading the above early in your career, please don't take the comment as cynical. It doesn't have to be. Rather, take it as a sign that you're ready to find your next role. Companies will never give you that signal clearly. This is often the closest heuristic you'll find, and if you act on it with the right timing, you can leave with grace. If someone asks you why you're leaving, keep it to yourself.


out of curiosity, what would it look like to take feedback constructively, but not follow it?

i'm asking because (in my experience) executives get hundreds of pieces of feedback and advice. they can't follow all of it, and so they have to prioritize, and their priorities might not overlap completely with those of ICs.


One sign would be occasionally changing course in response to overwhelming employee feedback. If that never or almost never happens, the feedback is being ignored, not taken constructively and not followed.


> out of curiosity, what would it look like to take feedback constructively, but not follow it

You need to have good reasons

> their priorities might not overlap completely with those of ICs

Then their ICs are fully justified in being pissed off

If a company's desires don't align with the desires of its workers, then there is a big problem imo


thought i was taking crazy pills when no one but you two pointed out how ridiculous a two-faced manager is. If the execs don't get proper feedback they will never change, and employees who complain to this type of manager will immediately start looking elsewhere.


Code signing, 2FA, and reducing dependencies are all incomplete solutions. What we need is fine-grained sandboxing, down to the function and type level. You will always be vulnerable as long as you're relying on fallible humans (even yourself) to catch or prevent vulnerabilities.

Apparently people have tried to implement this in JavaScript, but the language is generally too flexible to resist a malicious package running in the same process.

We need to be using different languages with runtimes that don't allow privileged operations by default.


That doesn't solve it either. If you need to grant hundreds of permissions, people will just wave them all through (remember the UAC debacle in Windows Vista?). I like Deno's approach way better; and you could also ask why any application can just read files in your home folder, or make network requests to external hosts. OSes really are part of the equation here.


It's true that we were all sold the lie of individual actions being the way to solve the climate crisis (recycling, turning off lights, etc.) But I think the conclusion is to try other strategies rather than giving up when the first strategy didn't work.


The ideal situation would be building a society that believes everyone deserves to be fed, clothed, and housed regardless of their ability to make profitable things. Weird how politically unpopular that seems to be.

Both producers and consumers of media are in the same boat of barely surviving. Maybe we can work with each other instead of against each other? :)


That is a very nice thought. I've been told the Europeans seem to do this to a much higher degree :)


This seems to address only a few of the nine threats to the software supply chain, mainly "(D) External build parameters"; maybe the content-addressable storage also addresses some of the distribution-phase threats: https://slsa.dev/spec/v1.1/threats

There are still many other ways that a dependency can be exploited before or after the build phase.


Nix doesn't, can't, and will obviously never be able to audit your dependencies, but what it can do is give you a way in which you can audit everything byte-for-byte and end-to-end from input to output. In most architectures it is pretty hard to even get to this point because there is no rigorous tracking of dependencies and side-effects; e.g. if your builds are not sandboxed from the network, how can you be sure that the inputs you audited really account for all of the inputs to a build? Nix has a (complete) answer for that, among other things.


Debian reproducible builds, Guix, StageX and Yocto/OpenEmbedded have also worked in this area.


Reproducible builds are adjacent, but ultimately orthogonal, to what Nix does.

Reproducible builds provide strong evidence that a given set of inputs were used to produce a specific output. A lot of the work to be done here by each of these individual projects is beneficial to the entire ecosystem, including Nix. A lot of it is just fixing bugs and removing accidental non-determinism from builds. The main value this provides is that it allows a third-party to verify with relatively good certainty that binaries provided by some entity match the expected source code.

Nix provides a hermetic environment to build software with every single input from the bootstrap seed to the final build fully accounted for. Builds can't access the Internet or the host filesystem; they can only access files from their inputs. Impurities must go through fixed-output derivations (FODs), and the outputs of FODs have to match a prerecorded cryptographic hash, so they must be fully bit-reproducible or the build fails. When you have a reproducible Nix derivation, it is a strict superset of a reproducible package virtually anywhere else, because you have very high assurance that you know, and can individually audit, every input to the derivation. This is useful for both auditing and reproducible builds.
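
To make that concrete, here is a minimal sketch of what such a derivation looks like (the package name, URL, and hashes are hypothetical placeholders, not taken from any real package):

    { lib, stdenv, fetchurl }:

    stdenv.mkDerivation {
      pname = "example";        # hypothetical package
      version = "1.0";

      # Fixed-output derivation: the only part of the build allowed to
      # touch the network, and its result must match the recorded hash
      # bit-for-bit, otherwise the build aborts.
      src = fetchurl {
        url = "https://example.org/example-1.0.tar.gz";
        hash = lib.fakeHash;    # placeholder; a real package pins the real hash
      };

      # The build itself runs in a sandbox: no network, no host
      # filesystem, only the declared inputs are visible.
    }

Building this with the placeholder hash fails with a hash-mismatch error that reports the actual hash, which is the usual way these values get pinned in the first place.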

Reproducible builds are important, but reproducible builds alone are not a panacea. They obviously don't tell you if your source code is free of defects, accidental or otherwise, and neither does Nix. Still, Nix does something basically nothing else does, by making the entire build process fully hermetic.

(Guix, being inspired by Nix, is, as far as I know, roughly the same, although Guix has put more effort into the bootstrap and package reproducibility. Still, Guix and Nix stand in a class of their own as far as usefulness to supply chain security goes, even if they probably won't fit neatly into the compliance theater version of supply chain security.)


This usage of `orthogonal` feels misleading. Nix is built to be a purely functional software distribution model where the outputs can be computed from the specified set of inputs. Although it is possible for derivations to not be reproducible, it seems incorrect to say that Nix is orthogonal to reproducible builds, as the point of the functional model is to nominally be able to get the same outputs given a set of inputs. Sure, Nix doesn't guarantee reproducibility of builds, but it certainly is designed to facilitate them (i.e. clearly correlated with reproducible builds and not being orthogonal to them).


Reproducible builds really are orthogonal to Nix, even if it seems like they're not. Nix was not built with reproducible builds in mind, and the problems it solves are ultimately independent even though they are related in some ways. The Nix PhD thesis by Eelco Dolstra, the founder of Nix, lays out the motivations for Nix very plainly. I decided I'd probably be better off quoting it directly than summarizing it poorly, so here goes:

From The Purely Functional Software Deployment Model, "1.3. Motivation"[1]:

    From the previous discussion of existing deployment systems it should be clear that they
    lack important features to support safe and efficient deployment. In particular, they have
    some or all of the following problems:

    • Dependency specifications are not validated, leading to incomplete deployment.
    • Dependency specifications are inexact (e.g., nominal).
    • It is not possible to deploy multiple versions or variants of a component side-by-side.
    • Components can interfere with each other.
    • It is not possible to roll back to previous configurations.
    • Upgrade actions are not atomic.
    • Applications must be monolithic, i.e., they must statically contain all their depen-
    dencies.
    • Deployment actions can only be performed by administrators, not by unprivileged
    users.
    • There is no link between binaries and the sources and build processes that built them.
    • The system supports either source deployment or binary deployment, but not both;
    or it supports both but in a non-unified way.
    • It is difficult to adapt components.
    • Component composition is manual.
    • The component framework is narrowly restricted to components written in a specific
    programming language or framework.
    • The system depends on non-portable techniques.

    The objective of the research described in this thesis is to develop a deployment system
    that does not have these problems.
Of course, part of the reason why reproducible builds were not considered for this is probably because it was simply not a hot topic at the time (had the term 'reproducible builds' even been coined yet?) and there were much bigger fish to fry with package management than reproducibility at that time, considering how relatively poorly packages were specified. Since then, "traditional" package managers and package repositories have put substantial work into cleaning up their package manifests and ensuring that they have accurate dependencies and other specifications, such that today highly reproducible systems can be and have been built on top of them, limitations from the lack of hermetic guarantees notwithstanding.

Despite this, because Nix does guarantee bit-exact external inputs to a derivation, it does indeed make an excellent starting point for reproducible builds, but Nix itself as a tool is definitely orthogonal to reproducible builds, as it solves an entirely different problem that just happens to be related. You don't need the purity guarantees Nix gives you to get reproducible builds, and having those purity guarantees don't automatically give you reproducible builds (though as seen by Nixpkgs, it isn't uncommon for a build to coincidentally be reproducible just as a result of packaging it into a Nix derivation... just, not really specifically guaranteed by anything.)

[1]: https://edolstra.github.io/pubs/phd-thesis.pdf


The entire point of purity within Nix is to have reproducibility by its nature: `In this model a binary component is uniquely defined by the declared inputs used to build the component` (section 1.5). It is not clear to me how this is an entirely different problem than that of reproducible builds. Yes, there are more problems to solve regarding achieving bit-for-bit reproducible builds that Nix does not fully solve alone, but reproducible software builds and deployments clearly are goals of Nix as a technology.


> The entire point of purity within Nix is to have reproducibility by its nature: `In this model a binary component is uniquely defined by the declared inputs used to build the component` (section 1.5). It is not clear to me how this is an entirely different problem than that of reproducible builds. Yes, there are more problems to solve regarding achieving bit-for-bit reproducible builds that Nix does not fully solve alone, but reproducible software builds and deployments clearly are goals of Nix as a technology.

Sorry, but you pretty much quoted what I would've quoted to refute your claim: it's exactly right that binary components are uniquely defined by their declared inputs, but note the subtle consequence that has: they are not defined at all by their outputs, only their declared inputs. (Making them be defined by their outputs is entirely possible, FWIW; this is pretty much what content-addressed Nix paths are for.) This also applies recursively: the bit-exactness of external inputs is guaranteed by cryptographic hashes, but if you have any inputs that are, themselves, derivations, you can extremely trivially add impurities, because the system is just simply not designed to stop you from doing this. An example:

    stdenv.mkDerivation {
      name = "trivialimpurity";
      # $out ends up containing the current date and time, so no two
      # builds of this derivation produce the same output.
      unpackPhase = "date>$out";
    }
It would be quite possible to make a system that is specifically designed to be resistant to this; consider that Nix goes to great lengths to accomplish what it does, literally ripping apart ELF binaries to force dynamic symbols to be resolved in a static manner. There's no reason why you couldn't go further to force deeper reproducibility in a similar vein; it's just that Nix simply doesn't.

I think it's actually OK that Nix doesn't attempt to resolve this problem, because it is a pretty hairy one. Obviously you can't practically make the build environment 100% completely reproducible as it would be painfully slow and weird, so the best you could really probably do is intercept syscalls and try to make threading behavior deterministic (which would also make builds slow as balls, but less slow than running every build in unaccelerated qemu or something like that.) What Nix does do is solve the living shit out of the problem it was designed to solve, something you can feel very viscerally when you compare the state of Nixpkgs to the state of the AUR (a package repo I consider to be very good, but definitely can give some perspective about the problems Nix is designed to solve IMO, based on some of the problems I've run into with it.)


Yes, Nix does not provide a guarantee that there are no impurities in builds, which the extensional model specifically caters to, and the intensional model improves upon this by providing a system based on content hashes instead of input hashes.

However, Nix specifically attempts to facilitate a pure software distribution model, which is why it does everything it does, e.g. forcing all input files to be copied to the store and providing mechanisms such as the pure evaluation mode, which restricts access to the current system and time information. Yes, there are other ways to introduce impurities, but Nix tries in many ways to systematically remove these sources to increase the purity of the deployment process.

If the entire process of building an artifact is pure, then the artifact would be entirely reproducible, given that you have access to the same inputs. Yes, there are many ways to introduce impurity, however claiming that Nix as a purely functional software distribution model, where the central point is to achieve purity, is fully orthogonal to reproducible builds, seems incorrect.


Reproducibility is a second order effect. It's not a direct goal (at least originally) and as already discussed in this thread, more or less to death, not something that is required. It is very easy to write a derivation that cannot be reproduced. Sometimes it's even useful. Purity isn't really required at all. I wouldn't even say reproducibility is encouraged. It just sort of falls out of the system.

If Nix was worried about impurities that impact reproducible builds, they certainly wouldn't make live timestamps readily available in the pure environment, would they?


> Yes, Nix does not provide a guarantee that there are no impurities in builds, which the extensional model specifically caters to, and the intensional model improves upon this by providing a system based on content hashes instead of input hashes.

Actually, the intensional model doesn't improve matters at all here, I was only pointing it out to demonstrate that the fact that binary components are addressed by their inputs doesn't really have anything to do with reproducibility. Of course the intensional model would mean that if you made the same build twice and it wasn't reproducible, then you'd get a different hash; however, that's not really an improvement over the current approach, which is to just build it twice and compare the output results. If anything, it just makes things more convoluted for reproducibility, due to the fact that you have to factor out self-references to check the hash.

The main advantage of the intensional model, as far as I know, is that it simplifies the trust model a bit. In the extensional model, you have to trust the substitutor, otherwise it could poison your Nix store. In the intensional model, derivations are addressed by their contents, so it's impossible to really poison the Nix store per-se, since you can definitely validate that the store path is correct for a content-addressed path.

Really though, it doesn't have a lot to do with reproducibility, and even in the work done in recent years I've not seen it mentioned at all in relation to reproducible builds, though I fully admit that it's very possible it's somehow useful and I just missed it.
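
For what it's worth, a rough sketch of what a floating content-addressed derivation looks like (hedged: this relies on the experimental `ca-derivations` feature, and the attribute set below is illustrative rather than canonical):

    stdenv.mkDerivation {
      name = "ca-example";
      # With the experimental `ca-derivations` feature enabled, the store
      # path of the output is computed from the output's contents rather
      # than from the derivation's inputs.
      __contentAddressed = true;
      outputHashAlgo = "sha256";
      outputHashMode = "recursive";
      # buildCommand replaces the usual stdenv phases.
      buildCommand = "echo hello > $out";
    }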

> If the entire process of building an artifact is pure, then the artifact would be entirely reproducible, given that you have access to the same inputs.

That is true. Nix, though, explicitly only makes certain parts of the process pure, and the parts that it makes pure are specifically driven by the motivations outlined above. It is true that if you made the entire process completely pure, the build would be reproducible, and it is also true that Nix very intentionally does not try to do this, because it just simply wasn't in the list of problems Nix was solving at the time.

Likewise, though, you can still make a build reproducible without functional purity, which is exactly what has been done by various other reproducible build projects. They just happen to avoid impurities that would impact the result without any specific guarantees, which happens to be exactly what you have to do to make a reproducible build in Nix.

> Yes, there are many ways to introduce impurity, however claiming that Nix as a purely functional software distribution model, where the central point is to achieve purity, is fully orthogonal to reproducible builds, seems incorrect.

I don't know what "fully orthogonal" means relative to just "orthogonal". I am using "orthogonal" to mean "independent", i.e. what Nix solves is fully independent of reproducible builds. This follows because:

- It is possible to do reproducible builds without "purity" guarantees or sandboxing.

- It is possible to have builds that are not reproducible provided the purity guarantees of Nix.

The closest that Nix's purity comes to being related to reproducible builds is that it does, in fact, prevent some of the causes of unreproducible builds by enforcing the exact inputs, but that's why I consider it to be adjacent but ultimately orthogonal. If it's the word orthogonal that in particular is too strong, then maybe a better word would be "independent".


Yes, I agree that reproducible builds is something that can be solved fully independently of Nix, and that Nix alone does not solve it, and that as such it is independent of Nix.

However, as the point of Nix is to achieve reproducible software deployments through a pure and functional description of the deployment, where it also provides mechanisms that systematically improve build reproducibility, I feel that orthogonal is misleading (as was my original disagreement) because reproducible builds correlate with Nix, and because achieving reproducible software deployment is a clear original goal of Nix, e.g. by seeking to remove implicit dependencies through mechanisms such as sandboxing of builds.


Thanks for the detailed explanation. Yocto has offline builds, but is missing host filesystem isolation.

Are you familiar with StageX, https://codeberg.org/stagex/stagex/#comparison? There's a comparison chart on that page which claims that Nix(OS?) is not fully reproducible. It would be useful to know which subset, if any, is not reproducible.


It is correct that NixOS is not fully reproducible.

Regarding Nix vs NixOS vs Nixpkgs:

- Nix is the name of the programming language (that Nix derivations are written in) and the package manager/build tool. (Though at times the term 'Nix' is just used to refer to the broader ecosystem including NixOS and Nixpkgs.)

- NixOS is an operating system that is built on top of Nix derivations.

- Nixpkgs is a giant repository of Nix derivations, including NixOS (NixOS and Nixpkgs used to be separate, but they were merged into a single repository at some point.)

The reason why NixOS+Nixpkgs are not fully reproducible, despite all of the guarantees Nix gives you, is simply because there are derivations with non-determinism during the build process. An example of how this might play out is that you could wind up with the order in which operations complete in a parallel build process somehow getting encoded into the final package.
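
To make that concrete, a hedged sketch of one blunt mitigation that is sometimes applied when parallel make is the culprit (the package itself is hypothetical; `enableParallelBuilding` is a real stdenv attribute):

    stdenv.mkDerivation {
      pname = "example";              # hypothetical package
      version = "1.0";
      src = ./.;
      # Parallel make can leak scheduling order into the output;
      # building serially is a blunt but simple way to avoid that.
      enableParallelBuilding = false;
    }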

Unfortunately, I don't think there's funding or a ton of interest going into improving the reproducibility of NixOS at the moment, so progress towards squashing reproducibility issues has been slow for a while. You can see relatively up-to-date progress on getting the install media fully reproducible here:

https://reproducible.nixos.org/nixos-iso-minimal-runtime/

StageX also correctly points out that the Nix bootstrap is inferior to some of the more extreme reproducibility projects. The Nix bootstrap is fairly large, unfortunately. The Guix team has put a substantial amount of effort into minimizing the bootstrap seed and reproducibility of packages. The Nixpkgs bootstrap seed (for my machine, anyway) is currently 27 MiB. The GuixSD bootstrap seed is, I believe, 357 bytes, which is a stunning accomplishment.

StageX considers NixOS trust to be centralized and GuixSD trust to be distributed; this is likely because of the Hydra binary cache which Nix is typically configured to trust by default. You can turn off the Hydra cache to remove this centralized entity, at the cost of obviously needing to build almost everything from scratch. I'm not sure what "distributed" trust actually means here, versus "decentralized".

StageX uses OCI image building as a base. It also doesn't seem to talk about sandboxing anywhere, so it is presumed that StageX is using Dockerfile OCI builds as their only sandboxing, which still allows Internet access. Having Internet access during builds is convenient, but it makes it pretty hard to guarantee that all inputs are accounted for. Their Rust example is pretty interesting:

> RUN ["cargo", "add", "regex"]

There's nothing inherently wrong about this, but despite all of the effort to make the base StageX OCI images reproducible, if you were to build this exact same OCI image months apart, you would presumably be liable to get different results here: you could, for example, get an entirely different version of the regex crate. With Nix, if you make a derivation to build a Rust package, you have to account for the Cargo dependencies in the build, as Nix builds aren't allowed to access the Internet, with the exception of fixed-output derivations. While this doesn't result in Nix derivations being bit-for-bit reproducible, it does ensure that every (external) input is bit-for-bit identical across builds of the same exact derivation, something you can't really easily achieve without a custom build tool like Nix or Guix. If it were possible to sneak an impurity into a Nix build, it would likely be a CVE. (There are some exceptions on macOS due to limitations in Darwin sandboxing, but on Linux I believe this holds true. None of the exceptions would make it possible to easily accidentally introduce impurities on macOS, though; you'd pretty much need to do it on purpose.)
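
For contrast, a sketch of how that kind of dependency pinning usually looks for a Rust package in Nixpkgs (the package here is hypothetical; `rustPlatform.buildRustPackage` and `cargoHash` are the real Nixpkgs mechanism, with the vendored crates fetched through a fixed-output derivation):

    { lib, rustPlatform, fetchFromGitHub }:

    rustPlatform.buildRustPackage {
      pname = "example";              # hypothetical crate
      version = "1.0.0";

      src = fetchFromGitHub {
        owner = "example";
        repo = "example";
        rev = "v1.0.0";
        hash = lib.fakeHash;          # placeholder: pins the source
      };

      # All Cargo dependencies are vendored via a fixed-output derivation;
      # if Cargo.lock or any crate changes, this hash no longer matches
      # and the build fails instead of silently pulling something new.
      cargoHash = lib.fakeHash;       # placeholder: pins the crate set
    }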

Even that aside, StageX uses the same impressive 357 byte bootstrap seed as GuixSD, so it is pretty cool for what it does. It's just a bit lower in scope than Nixpkgs and GuixSD. Nixpkgs is probably the largest single software repository ever built with over 100,000 packages, all of which having to follow this schema of hermetic builds.


that Rust example is gonna bite us in the ass until the day i die, i need to remove it.

The Keyfork project is probably the best example of how an _actual_ Rust project is developed and shipped with stagex (disclaimer, I'm a maintainer of both). Actual Rust programs are built using the following steps:

1. Before building, the stagex tooling downloads and verifies a hash-locked version of the source package.
   * Additionally, all dependencies for the package (compilers _and_ linked libraries) are verified to have been built.
2. Packages and pallets (collections of packages) are unpacked `FROM scratch` into a bare container.
3. `cargo fetch --locked` is then invoked to fetch all the dependencies.
4. `RUN --network=none` is used when compiling the actual binary to ensure no network access happens _after_ the `cargo fetch` stage. Admittedly, it is not ideal to allow turning network access on and off throughout a build, but `--network=none` has helped us identify some odd cases where network access _does_ happen.
5. Once the binary is built, the binary is added on top of the base "filesystem" package, and is considered "done".

Unless some source file gets completely yoinked off the internet (which has happened, and we've had to "rebuild the world" because of it), every stagex package should be 100% bit-for-bit reproducible even if run several years down the line.

There may be some cases where we miss a datestamp or something similar, but hopefully as time goes on, we get the infrastructure to mock system times and throw other wrenches to test how reproducible these packages really are.


I probably should've mentioned that I don't actually have any familiarity with StageX, I did write that at some point but must've accidentally removed it from my reply while still working on it. Even so, I had a feeling the example wasn't a good example of how to actually use it properly, and I feel a little bad because I didn't really mean to critique StageX because of that particular issue, I just thought it was a good example of how Nix differs (Nix enforces purity, Dockerfile builds don't.) It seems like with StageX the goal is to ensure that the build is bit-for-bit reproducible as this would be a relatively good assurance that the inputs are also reproducible. On the other hand, it might be relatively hard to actually debug what went wrong in the more subtle cases where the inputs are not reproducible, since presumably the main artifact of this will be the output differing unexpectedly.

I'm definitely biased as a person who works on Nix stuff but I am not an absolutist when it comes to any of these things, based on what I'm reading about it I'd happily rely on StageX if I wanted reproducible OCI builds (and didn't feel like using Nix to do it, which has plenty of complexities on its own as nice as it can be.)


Oh and of course thanks for your opinions and feedback.


Yocto is the worst thing I've ever worked with. It's just a pile of dirt on top of a pile of dirt.


> pile of dirt on top of a pile of dirt

Infinite flexibility for reproducible commercial packaging of dirt permutations.

Packaging system pain exists at the border of chaos and simulated order.

There are useful concepts in Yocto but they were never formalized in academic papers, unlike some build systems for Haskell. There are packaging nuances encoded in bitbake recipes that will likely die there because they work "enough", instead of being further studied for long term lessons.

Given that shellcheck is written in Haskell, it might be an interesting academic exercise to write a Haskell replacement for bitbake, which converts bitbake recipes (shell+python) into something more maintainable.


Yocto is significantly better than what preceded it. I could fill a book with ways it has pushed the envelope for embedded development.

Is Yocto a steaming pile? Yes absolutely. But it remains a means to an end.

I'd struggle to accomplish with Nix what can be accomplished out of the box with Yocto. Now that being said I'd certainly try and use Nix over Yocto any day of the week.

I remain optimistic that the two will converge and it will more or less not matter, though I'd much prefer Nix to consume embedded development than for Yocto to become Nix-pilled.


I got NixOS running on an NXP board we were using at my last employer, and Nix is a blessing if your device is juicy enough to run systemd.

If I were doing it again I'd reach for a Raspberry Pi instead, since they're supported everywhere. If the workload can run on it, that is.


BNPL is only "good" if your definition of "good" is about GDP, market flexibility, high-performance index funds, and other things that have nothing to do with human happiness.

I'll believe that BNPL is good when all the companies become non-profits that use excess funds to cancel debts rather than lining the pockets of rich investors.


This has some good advice, but I can't help but notice that none of this solves a core problem with the tj-actions/changed-files issue: The workflow had the CAP_SYS_PTRACE capability when it didn't need it, and it used that permission to steal secrets from the runner process.

You don't need to audit every line of code in your dependencies and their subdependencies if your dependencies are restricted to only doing the thing they are designed to do and nothing more.

There's essentially nothing nefarious changed-files could do if it were limited to merely reading a git diff provided to it on stdin.

Github provides no mechanism to do this, probably because posts like this one never even call out the glaring omission of a sandboxing feature.


What would be outside the sandbox? If you create a sandbox that only allows git diff, then I suppose you fixed this one issue, but what about everything else? If you allow the sandbox to be configurable, then how do you configure it without that just being programming?

The problem with these "microprograms" has always been that once you delegate so much, once you are willing to put in that little effort, you can't guarantee anything.

If you are willing to pull in a third party dependency to run git diff, you will never research which permissions it needs. Doing that research would be more difficult than writing the program yourself.


GitHub Actions by default provides an isolated VM with root privilege to a workflow. I don't think job-level privilege isolation is in its threat model currently, although it does allow job-level scopes for the default GitHub token.

Also, the secrets are accessible only when a workflow is invoked from a trusted trigger, i.e. not from a forked repo. Not sure what else can be done here to protect against a compromised 3rd-party action.


People have been running different levels of privileged code together on the same machine ever since the invention of virtual machines. We have lots of lightweight sandboxing technologies that could be used when invoking a particular action such as tj-actions/changed-files, to give it only the permissions it needs.

You may do a "docker build" in a pipeline which does need root access and network access, but when you publish a package on pypi, you certainly don't need root access and you also don't need access to the entire internet, just the pypi API endpoint(s) necessary for publishing.


Yes, by default things should be sandboxed - no network, no repo writes, ... - and should be easy to add extra caps (ex: safelist dockerhub)

Similar to modern smartphones asking whether they should remove excess unused privileges granted to certain apps, GHAs should likewise detect these super common overprovisionings and make it easy for maintainers to flip those configs, e.g., a "yes" button.


Where can I read about this? I see no reference in its repo: https://github.com/search?q=repo%3Atj-actions%2Fchanged-file...


Every action gets these permissions by default. The reason we know it had that permission is that the exploit code read from /proc/pid/mem to steal the secrets, which requires some permissions: https://blog.cloudflare.com/diving-into-proc-pid-mem/#access...

Linux processes have tons of default permissions that they don't really need.


Amazon really encourages valkey in the elasticache dashboard. There's a banner advertising lower prices and it's listed first in the dropdown when you go to create one. Default settings do have power.


Sure, but the impact of new customers and their decisions takes a long time to show up in net statistics. All evidence I can find, regardless of domain or context, suggests Redis vs. valkey marketshare is something around a 99%/1% difference.


> All evidence I can find, regardless of domain or context, suggests Redis vs. valkey marketshare is something around a 99%/1% difference.

What evidence did you find?


Ask the LLM of your choice the following question: "Among the top cloud service providers, for product offerings that can be implemented by Redis or valkey, based on all available evidence, what is the relative market share and/or usage of Redis vs. valkey?"


LLMs are not a source.


but what if they confirm my priors?


sick burn bro


Why/how would you expect an LLM to know that info?

