Hacker News | LelouBil's comments

OpenSSH is preinstalled on Windows as well, so I think it's not a stretch to say everyone has ssh now.

That would be really interesting if you have enough ML knowledge to do it!

Well, with things like this, I feel like someone could get interesting results pretty quickly:

https://lintangwisesa.github.io/MediaPipe-in-JavaScript/inde...

https://github.com/cosyneco/MediaPipe.NET

https://ai.google.dev/edge/mediapipe/solutions/guide

I'd think you'd start with a tool to automatically cut video into scenes, and kind of go from there...


I would for sure enable this, my personal server can handle being unreachable for a few hours in exchange for (potentially) interesting features.

Can you elaborate? I'm not sure what you mean by "at the last step"

I'm not sure which endpoint gp meant, but as I understood it, as an example, imagine a three-way handshake that's only available to enterprise users. Instead of failing a regular user on the first step, they allow steps one and two, but then do the check on step three and fail there.

The API endpoint I am talking about requires an external verification step. They let you complete the external verification before checking whether the user is on the enterprise plan or not.

The feature is only available to enterprise plans, so it should not even allow the external verification to start.
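A minimal sketch of the anti-pattern being described, in Python; every name here (start_handshake, verify_external, the "enterprise" flag) is hypothetical, not the actual API:

```python
# Sketch of the entitlement-check-last anti-pattern described above.
# All function and field names are illustrative stand-ins.

def start_handshake(payload):
    # Step 1: trivial stand-in for the first handshake step.
    return f"token:{payload}"

def verify_external(token):
    # Step 2: stand-in for the external verification side effect.
    pass

def finish_handshake(token):
    # Step 3: stand-in for the final step of the flow.
    return "ok"

def handle_request(user: dict, payload: str) -> str:
    """The broken ordering: steps 1 and 2 run for everyone."""
    token = start_handshake(payload)
    verify_external(token)  # external side effect has already happened
    # Only now is the plan checked, at the last step. Too late:
    # non-enterprise users have already exercised the feature's plumbing.
    if not user.get("enterprise", False):
        raise PermissionError("enterprise plan required")
    return finish_handshake(token)

def handle_request_fixed(user: dict, payload: str) -> str:
    """Correct ordering: gate the whole flow on the entitlement first."""
    if not user.get("enterprise", False):
        raise PermissionError("enterprise plan required")
    token = start_handshake(payload)
    verify_external(token)
    return finish_handshake(token)
```

In the fixed version a non-enterprise user fails before step one, so no external verification is ever triggered on their behalf.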


I am in the process of redoing all of my self-hosting (cloud storage, SSO, media server, and a lot more), which was previously a bunch of Docker Compose files deployed by Ansible. That quickly became unmanageable.

Now I have almost finished setting it up as a single-node (for now) Kubernetes cluster running Talos Linux, with all of the manifest files managed with CUE (seriously, I would have abandoned it if I had not discovered CUE to generate and type-check all of the YAML).
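To give a flavor of what "CUE generating and type-checking the YAML" means, here is a rough sketch; the schema, app name, and fields are illustrative, not the commenter's actual setup:

```cue
// Illustrative CUE schema: constraints are checked at export time,
// so a typo or wrong type fails before anything reaches the cluster.
#Deployment: {
	apiVersion: "apps/v1"
	kind:       "Deployment"
	metadata: name: string
	spec: {
		replicas: int & >=1 | *1 // must be positive, defaults to 1
		selector: matchLabels: app: metadata.name
		template: {
			metadata: labels: app: metadata.name
			spec: containers: [{
				name:  metadata.name
				image: string
				ports: [...{containerPort: int & >0 & <65536}]
			}]
		}
	}
}

// A concrete app; `cue export` turns this into valid YAML/JSON.
jellyfin: #Deployment & {
	metadata: name: "jellyfin"
	spec: template: spec: containers: [{
		image: "jellyfin/jellyfin:latest"
		ports: [{containerPort: 8096}]
	}]
}
```

The appeal is that the labels, selectors, and container names are derived from one field instead of being repeated by hand across the manifest.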

I think Kubernetes is the right solution for the complexity of what I'm running, but even though it was a hassle to manage the storage, the backups, the auth, the networking and so on, I much prefer having all of this hosted at my house.

But I agree with the control plane part, just pointing out my use case for self-hosting k8s


Maybe they'll secretly fund an open source project to emulate only the Windows kernel calls that anti-cheats use.

As a former cheat developer, I think it is impossible, since it digs into Windows-specific internals. For example, some anti-cheats use kernel callbacks such as PsSetCreateProcessNotifyRoutine and ObRegisterCallbacks to strip process handle permissions, and those things can't be emulated well; there is simply nothing in the Linux kernel nor in the Wine server to facilitate that yet. What about having a database of games and anti-cheats that do that? And what if the anti-cheat also has a whitelist of apps allowed to "inject" themselves into the game process? Those also need to be handled and dealt with.

Plus, there are some really simple side-channel exploits: if a whitelisted app has vulnerabilities, you can grab a full-access handle to the anti-cheat-protected game, rendering the kernel-level protection useless. That only gets you an external cheat rather than a full-blown internal one; internal cheats carry far more risk, but also far more reward, such as fine-grained game modification, or even exploiting 0-days found in the game's network stack (say a buffer overflow or double-free), making it possible to send malicious payloads to other players and achieve RCE. (It is still possible to do internal cheat injection from an external cheat, using techniques such as manual mapping/reflective DLL injection, which effectively replicate the PE loading mechanism; you then hijack some execution routine to call your injected code, whether by creating a new thread, hijacking an existing thread's context, APC callback hijacking, or even vectored exception handler hijacking; in general, hijacking any kind of control flow. But anti-cheat software actively looks for that "illegal" stuff in memory, raises a red flag, and bans you immediately.)

From what I've seen over the years, the biggest problem for anti-cheat on Linux is that there is too much liberty and freedom, while anti-cheat/antivirus is the antithesis of liberty and freedom. Anti-cheat wants to use strong protection mechanisms borrowed from antivirus techniques to provide a fair gaming experience, at the cost of lower framerates, more processing power, and the occasional BSOD.

And I know it is very cliché at this point, but I always love to quote Benjamin Franklin: "Those who would give up essential liberty to purchase a little temporary safety, deserve neither liberty nor safety." So lately I only keep Windows around to play games: I switched to a new laptop, installed CachyOS on it, and transferred all my development stuff over. You could basically say my main PC at home is now a more "free" Xbox.

Speaking of Xbox, they have even stricter control over games: one of the anti-cheat techniques, HVCI (hypervisor-protected code integrity, part of VBS), comes straight out of Xbox tech, where Hyper-V is used to isolate the game process from the main OS, making the Xbox essentially impossible to jailbreak. On Windows it also prevents some degree of DMA attack by leveraging the IOMMU and encrypting memory content beforehand to make sure it is not visible to external devices over the PCIe bus.

That said, it is ultimately all about the tradeoff between freedom and control.

A similar concept, trusted computing: https://en.wikipedia.org/wiki/Trusted_Computing


Oh, so they did not just "adopt" FEX, they actually financed the lead developer from the start.

Great distro! I have been using it for the last 2 years on my Framework Laptop 16 without any issues. I even have a "fork" of sorts that adds Hyprland plus all of my "desktop" config, which I think of as being part of the OS.

I really think immutable distributions are the future of the Linux desktop, especially distributions that use OCI images, because they are a lot easier to work with than, say, NixOS.

If you want your own custom Bazzite, you just write "FROM bazzite:<whatever-version-you-want-to-pin>" and add the stuff you want.

Of course, you lose a bit of reproducibility, since container images do not usually pin packages (and maybe there are other reproducibility issues I am not aware of), but it is way easier to work with.
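A minimal Containerfile along those lines might look like this; the base-image tag and the packages added are illustrative examples, not a tested recipe:

```dockerfile
# Illustrative custom image on top of a Universal Blue base.
FROM ghcr.io/ublue-os/bazzite:stable

# Layer your own packages into the base OS image, then commit the layer
# so it is usable as an ostree/bootc image.
RUN rpm-ostree install htop tmux && \
    ostree container commit
```

Once published to a registry, the resulting image can be booted like any other Bazzite release, and rebasing back to stock is just pointing at the upstream image again.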


I'm an https://getaurora.dev user and I agree uBlue is awesome. I'd like to create a custom image too, but it doesn't seem quite as easy as you say: https://youtube.com/watch?v=IxBl11Zmq5w

I learned about Aurora from an HN comment some weeks ago, and it has been awesome. I really haven't been as impressed with a distro since the first Ubuntu. It's just a rock-solid base, awesome defaults, and KDE being delightful.

While the video is long, the actual process of setting everything up only took me about 20 minutes. The template they offer is extremely convenient.

I will offer a second positive but more reserved data point. It took me closer to a day to get my custom Bazzite build working.

Switching over to my images using bootc failed because of what I eventually tracked down to a permissions issue that I didn't see mentioned in any of the docs: in short, the packages you publish to GitHub's container registry must be public.

Another wrinkle: the Bazzite container-build process comes pretty close to the limits of the default GitHub runners you can use for free. If you add anything semi-large to your custom image, the build may fail. For example, adding Microsoft's VS Code was enough to break my image builds because of resource limits.

Fortunately, both of these issues can be fixed by improving the docs.


There's also BlueBuild [1], which abstracts the process of building your own images further away into YAML configuration.

It takes away a bit of direct control over the process, but covers the majority of things you would want to configure.

[1] https://blue-build.org/
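For flavor, a BlueBuild recipe along these lines might look roughly like the following; this is sketched from memory of their docs, so treat the field and module names as assumptions and check blue-build.org for the real schema:

```yaml
# Hypothetical recipe.yml sketch; verify field names against the BlueBuild docs.
name: my-custom-image
description: Personal Bazzite variant with a few extra packages
base-image: ghcr.io/ublue-os/bazzite
image-version: stable

modules:
  - type: rpm-ostree
    install:
      - htop
      - tmux
```

The trade-off mentioned above is visible here: you declare what you want and BlueBuild owns the Containerfile and CI wiring for you.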


The actual process for the image really is just what I said. In the video he sets up an automatic GitHub Actions build and adds signing with cosign (which are also steps you really want to do), but getting custom stuff into your base OS really is as easy as a Dockerfile (or should I say Containerfile?).

Immutable is very good for new Linux users, but I personally don't like the restriction, and I find rpm-ostree extremely slow at installing literally anything. It does make sense to use immutable distros in routers, firewalls, etc.

I try not to use rpm-ostree to install anything (only steam-input, codecs and NVIDIA drivers), and rely on Homebrew, AppImages, Flatpaks and Toolbox for my app needs. It works so far...

There is something about immutable linuxes that feels right, and I cannot pinpoint why exactly, but it's like things are segregated correctly.


I have disabled read-only mode and I use pacman like you would in a normal Arch system. Makes it a lot easier to install packages

It’s probably that they’re near impossible to break or end up with an unusable system.

Kotlin is trying to be one with notebooks [0]. I even heard they have fancy code generation so that your dynamic data can still have typed properties (after the first evaluation, members corresponding to your field names are generated, or something to that extent; I never used it).

[0] https://kotlinlang.org/docs/kotlin-notebook-overview.html


Yes and no; usually when a malicious package goes public, it's some third-party cybersecurity firm that scans packages that found it.
