When using the OpenTelemetry Collector you might need a set of modules not supported by a standard distribution, e.g. the otelcol-k8s variant but with some extra exporter from otelcol-contrib. I found building such a custom distribution a bit cumbersome, and this project made it much easier.
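For anyone curious, the upstream OpenTelemetry Collector Builder (ocb) works off a small manifest listing the Go modules to compile in. A minimal sketch of that manifest (the module paths are real, but the versions and the component selection here are just placeholders you'd pin yourself):

    # ocb builder manifest sketch: OTLP in/out plus one extra exporter
    # pulled in from otelcol-contrib; versions are placeholders
    dist:
      name: otelcol-custom
      description: Custom OpenTelemetry Collector distribution
      output_path: ./otelcol-custom

    receivers:
      - gomod: go.opentelemetry.io/collector/receiver/otlpreceiver v0.100.0

    exporters:
      - gomod: go.opentelemetry.io/collector/exporter/otlpexporter v0.100.0
      # the extra exporter from contrib
      - gomod: github.com/open-telemetry/opentelemetry-collector-contrib/exporter/fileexporter v0.100.0

ocb then generates a Go module from this that you compile into your own collector binary.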
Rust runs quite well via WebAssembly today. Continuing to improve interop between Web APIs, WASM, and language runtimes seems like a good route: it lets people use the language they prefer against shared Web APIs.
What's really neat is that most of the problems are so compact that you can read the full problem statement, then spend hours or days thinking about possible solutions.
Isn't it good that there are some forcing factors to help ensure the quality of the content? I get that there's plenty of drama and difficulty in building and moderating Wikipedia's content, but it certainly does not appear to be stagnating in terms of content if you look at, e.g., the number of articles on English Wikipedia. The overall process appears to produce great outcomes, and it is the greatest collection of knowledge ever created.
I took it as a statement that it is prohibitively difficult to contribute to Wikipedia, and wanted to point out that a large number of contributions are being made and that the resulting quality is high, in part because of the difficulty of making contributions.
My comment was disputing the statement above that anyone can just stick junk in Wikipedia. While, yes, anyone can submit edits, it's pretty hard to get them accepted, so the content on Wikipedia is more reliable than a public notepad.
My statement was that the quality of Wikipedia overall is high, and that one reason is that they set and enforce standards for contributions.
Certainly many people are put off by the process and won't have time to deal with it, but my belief is that this is more likely on controversial topics and less likely on uncontroversial ones. Collaborating on difficult topics is inherently a difficult process, which also means there are likely no easy answers for how to keep the process from discouraging anyone.
The forcing factors aren't what they are supposed to be though. "Credible" sources and citations are exclusively up to the article moderators' personal tastes, which are very rarely objective.
> The forcing factors aren't what they are supposed to be though.
Is it clear what they should be instead, and are there any examples of mechanisms that have worked better at this scale? How are you judging that they are not what they are supposed to be?
If the resulting body of work, the totality of Wikipedia, can be a curated, high-signal collection of knowledge as a result of these mechanisms, how can it be said that they are not working? Having forcing factors that push contributors to raise the quality of their edits in order to get them accepted, even if those factors are not ideally aligned or executed, seems like a good thing overall. I'm not saying that its processes and mechanisms cannot be improved; I'm saying I believe it is incorrect to say that they are not working as a whole.
> "Credible" sources and citations are exclusively up to the article moderators' personal tastes, which are very rarely objective.
Overall, I believe Wikipedia is curated by a large group of people who coordinate through various rules and consensus mechanisms, so I don't think it is correct to say that sources and citations are exclusively up to any specific article moderator, since moderators need to build consensus and coexist with each other.
Precisely because Wikipedia is such a large body of work, having a large number of curators with different tastes and motivations seems to make it more resistant to corruption. How would you determine that their selection of sources and citations is very rarely objective, especially when objectivity itself seems quite hard to agree on for many of the topics covered?
From my perspective it seems far more important to consider the quality and value of the totality of Wikipedia, which is massive and a sign that many things are working, than to insist that it is not working, especially at a time when knowledge is being broadly attacked and Wikipedia is one of the targets.
The SpecTec tooling mentioned in the announcement is really cool. They're using a single source of truth to derive LaTeX, Sphinx docs, Coq definitions for proofs, and an AST schema. Building the language spec so that its soundness can be proven, with everything derived from that one source of truth, seems super useful.
This "plan" reads more like a vision announcement than Master Plan #3 did. MP3 had a structured and quantified thesis, while this post offers few details. I wonder if they'll release a more substantiated version.
I've been thinking a lot about this kind of thing recently, and put up a prototype, htvend [1], that lets you archive out dependencies during an image build. The idea is that if you have a mix of private and public dependencies, the upstream ones can be saved off locally as blobs, so your build process can be re-run in the future even if the upstream assets become unavailable (as appears to be the case here).
Their Dockerfiles include things like downloading pre-built binaries from $SECRET_BASEURL, which is hosted by them; it can still be found in the git log, though. I imagine it will go offline or gain auth soon enough.
Or, if you have a decent-sized deployment in one of the clouds, it's extremely likely you already use its internal registry (e.g. AWS ECR). I know that we do. So it's just a case of setting up a few docker build projects in git that push to your own internal registry.
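For anyone who hasn't done the mirroring part before, it's just a login/pull/tag/push; a rough sketch (the account ID, region, repository name, and image tag below are all made-up placeholders, and the ECR repository has to exist first):

    # mirror an upstream image into your own ECR registry
    # 123456789012, eu-west-1 and mirrored/postgresql are placeholders
    aws ecr get-login-password --region eu-west-1 \
      | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com

    docker pull docker.io/bitnami/postgresql:17
    docker tag docker.io/bitnami/postgresql:17 \
      123456789012.dkr.ecr.eu-west-1.amazonaws.com/mirrored/postgresql:17
    docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/mirrored/postgresql:17

Once your deployments reference the internal registry, an upstream image disappearing stops being an emergency for rebuilds and rollbacks.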
It is at the top of the announcement. This only affects OCI images, not source code: "The source code for containers and Helm charts remains available on GitHub under the Apache 2.0 license."