Flux159's comments

Nice, this is similar to what I was wondering about - it looks like it's pretty limited in capability right now (it only supports the Canvas 2D API at the moment: https://nxjs.n8.io/runtime/rendering/canvas), but in theory it would let you build a layer to convert WebGPU or WebGL games for the Switch (ignoring the huge performance drop going from V8 / JIT-ed JS engines to QuickJS).


The interesting part here is AthenaEnv. It looks like it uses QuickJS as the JavaScript interpreter and wraps the native system libraries that the PS2 provides.

I'm wondering if there's a similar modern project that would allow writing JavaScript canvas games (WebGPU / WebGL) and publishing on Switch / Switch 2, PS5, and Xbox.

From my understanding, they explicitly disallow JITs, so you can't just wrap your JS game with Electron / node-webkit and use V8. I'm not sure if anyone has tried publishing a game using a V8-jitless Electron fork - the SDKs for consoles are under NDA, so there's not much written about it publicly, and most games using Unreal or Unity don't deal with these things themselves.

PC, Mac, and even mobile are surprisingly easy here because you can just run the JS via Electron, or in a WebView on mobile.


I think CrossCode (a game written in JS) was ported to consoles using a pretty involved process.

Here is a detailed blog post about the topic: https://www.radicalfishgames.com/?p=6892


Yeah, I saw the video about that earlier which is what led me to wonder if there was a native JS way now.

They used Kha to port the console versions only; the desktop versions remained JS, from my understanding: https://github.com/Kode/Kha, which is built on top of Haxe. This works, but it also means no longer having a single codebase, which would be one of the benefits of a JS-based system.

There are other options here - something like an AOT JS compiler such as Porffor, but from my understanding that's never been tried for this (and would probably be missing a lot of support to get it working - like shimming canvas and providing a WebGPU context that the compiled JS could execute against).
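To sketch what "shimming canvas" would even mean here: a thin object that quacks like a `CanvasRenderingContext2D` and forwards draw calls to a native backend. Everything below is hypothetical - a made-up command buffer standing in for whatever renderer the platform actually exposes, not any real console API.

```javascript
// Hypothetical sketch: a minimal CanvasRenderingContext2D-style shim that
// records draw calls so a native backend can replay them. A real port
// would forward these to the platform's renderer instead of an array.
function createContext2DShim(backend) {
  return {
    fillStyle: "#000",
    fillRect(x, y, w, h) {
      backend.push({ op: "fillRect", x, y, w, h, style: this.fillStyle });
    },
    clearRect(x, y, w, h) {
      backend.push({ op: "clearRect", x, y, w, h });
    },
  };
}

// Fake "native" command buffer standing in for a real renderer.
const commands = [];
const ctx = createContext2DShim(commands);
ctx.fillStyle = "#ff0000";
ctx.fillRect(0, 0, 64, 64);
```

The hard part isn't this object shape - it's covering the huge surface area of the real canvas/WebGPU APIs with correct semantics.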


The official Nintendo 3DS and Wii U SDKs both provided an Electron-like framework that allowed games to be written with web technologies. I seem to recall that it was discontinued at some point before the Switch? The Switch does have a WebKit browser applet that games can call to display web-based content, but it's pretty limited since JIT is disabled like you say. I've only ever seen it used for e-manuals.


Was able to sign up for the Max plan & start using it via opencode. It does a way better job than Qwen3 Coder in my opinion. Still extremely fast, but in less than an hour I used 7M input tokens, so with a single agent running I could easily blow past that 120M daily token limit. The speed difference compared to Claude Code is significant though - to the point where I'm not waiting for generation most of the time; I'm waiting for my tests to run.

For reference, each new request has to resend all previous messages, and tool calls force new requests too. So it's essentially cumulative when you're chatting with an agent: my opencode agent's context window is only 50% used at 72k tokens, but Cerebras's online tracking shows that I've already used 1M input tokens and 10k output tokens.


> For reference, each new request has to resend all previous messages, and tool calls force new requests too. So it's essentially cumulative when you're chatting with an agent: my opencode agent's context window is only 50% used at 72k tokens, but Cerebras's online tracking shows that I've already used 1M input tokens and 10k output tokens.

This is how every "chatbot" / "agentic flow" / etc works behind the scenes. That's why I liked that "you should build an agent" post a few days ago. It gets people to really understand what's behind the curtain. It's requests all the way down, sometimes with more context added, sometimes with less (subagents & co).
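A minimal sketch of that "requests all the way down" accumulation - token counts here are made up for illustration, but the shape is why billed input tokens grow much faster than the context window fills:

```javascript
// Sketch of why agent token usage is cumulative: each request resends
// the full message history. Token counts below are illustrative only.
function billedInputTokens(turnSizes) {
  let history = 0;
  let billed = 0;
  for (const t of turnSizes) {
    history += t;      // new message appended to the conversation
    billed += history; // the whole history is sent as input each turn
  }
  return billed;
}

// Three turns of 1000 tokens each: 1000 + 2000 + 3000 = 6000 tokens
// billed, even though the context window only ever holds 3000.
const total = billedInputTokens([1000, 1000, 1000]);
```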


Many API endpoints (and local services, for that matter) do caching at this point though, with much cheaper prices for input tokens that hit the cache. I know Anthropic does this, and I think DeepSeek does too, at the very least.
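Roughly how that billing works, as a sketch - the rates and discount factor below are placeholders, not any provider's actual prices:

```javascript
// Sketch of prefix-cache billing: tokens that hit the cached prefix are
// billed at a discounted rate. Rates here are made-up placeholders.
function inputCost(totalTokens, cachedTokens, perTokenRate, cacheDiscount) {
  const fresh = totalTokens - cachedTokens;
  return fresh * perTokenRate + cachedTokens * perTokenRate * cacheDiscount;
}

// A 100k-token request where 90k is a cache hit, at a hypothetical
// 10x discount: only 10k tokens are billed at the full rate.
const cost = inputCost(100_000, 90_000, 3e-6, 0.1);
```

With long agent conversations the prefix (everything before the newest message) is identical request-to-request, which is exactly what makes this scheme effective.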


They don't have prefix caching? Claude and Codex have this.


At those speeds, it's probably impossible. It would require enormous amounts of memory (which the chip simply doesn't have, there's no room for it) or rather a lot of bandwidth off-chip to storage, and again they wouldn't want to waste surface area on the wiring. Bit of a drawback of increasing density.
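Back-of-the-envelope on why the memory adds up fast - all the model dimensions below are assumed for illustration, not any specific model or chip:

```javascript
// Rough KV-cache size: 2 (K and V) * layers * kv_heads * head_dim *
// bytes per value * tokens. Dimensions below are assumed, not real.
function kvCacheBytes({ layers, kvHeads, headDim, bytesPerValue, tokens }) {
  return 2 * layers * kvHeads * headDim * bytesPerValue * tokens;
}

// e.g. 60 layers, 8 KV heads, head_dim 128, fp16, 100k tokens of context:
const bytes = kvCacheBytes({
  layers: 60, kvHeads: 8, headDim: 128, bytesPerValue: 2, tokens: 100_000,
});
const gib = bytes / 2 ** 30; // roughly 23 GiB for one cached conversation
```

And that's per conversation kept warm in the cache, which is why on-chip SRAM alone doesn't get you there.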


Is this built with JS / something like Fabric.js? There are some things that feel very similar to a web app I worked on before. Wondering if there are plans to have a plugin API at some point, if it is.



I have a plus subscription & was able to get it to work on X. It shows you what it's doing on the tab as it's moving around.


One interesting thing here is that the chat side panel is agentic - it can read tab contents, open links in the existing tab or create new tabs, and do most of the standard "summarize", etc. things too.

This might be the first time that I move off of Chrome for an extended period of time.


I was able to add uBlock Origin Lite as an extension, and it seems to work the same as it does in Chrome.


uBlock Origin Lite kinda sucks compared to the OG uBlock Origin, though. YouTube videos have this awkward buffering at the start now, sometimes YouTube homepage ads still load, sponsored placements on GrubHub/DoorDash show up and can't be removed, etc.


I thought that was more specific to YT than UB?

My UB experience remains largely unchanged since the switch to Manifest V3; I pay for YT to avoid ads and support creators directly.


"I pay to remove ads so my experience with a neutered adblocker isn't as bad" is a weird take.

If you think the end game is companies deciding they're comfortable with removing ads in exchange for a subscription, rather than a subscription with a gradually increasing amount of ads, then I have a bridge to sell you.

I support the creators I watch by donating to them directly.


I use UB on all the other websites, not YT; it's a weird take to associate UB usage and experience with a single domain.

> I support the creators I watch by donating to them directly.

Me too, on top of the monthly fee that gets distributed to those I watch. More for every creator, even those I only watch once or sporadically.


I mentioned multiple domains...? I said it also impacts sponsored listings on food delivery platforms. Those used to be blocked and, more broadly, the ability to manually block specific elements of a webpage was lost with the transition to UB lite.


uBO Lite does not support a lot of filters and there is no element picker. Also, a lot of other add-ons are unsupported due to no MV2 support.


It can block sponsored chat response content too? And agentic behaviors that act on behalf of sponsors?


I wonder how this works with mcpb (renamed from dxt / Desktop Extensions): https://github.com/anthropics/mcpb

Specifically, it looks like skills have a different structure than MCP, but overlap in what they provide? Skills seem to be just a markdown file and some scripts (instead of the prompts and tool calls defined in MCP?).

Question I have is why would I use one over the other?


One difference I see is that with tool calls, the LLM doesn't see the actual code - it delegates the task to the tool. With scripts in an agent, I think the agent can see the code being run and can decide to run something different. I may be wrong about this. The documentation says that assets aren't read into context; it doesn't say the same about scripts, which is what makes me think the LLM can read them.


This looks great! At a previous job we had a fork of Jupyter notebooks that some teams used this way. I see that remote execution is on the roadmap, but I was also wondering if you'll have some form of parallel remote execution as well (i.e. one runbook run across 10 or 100 VMs, similar to parallel ssh). Definitely more complicated than single execution, but potentially very powerful for debugging fleets where you don't have Chef or Ansible. I guess the alternative is to just have the runbook run locally but invoke pssh in some commands to get a similar result.
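The pssh-style fan-out I mean could be sketched like this - the `runner` callback (standing in for an SSH exec wrapper), the hostnames, and the function names are all made up for illustration, not the tool's actual API:

```javascript
// Sketch of fleet fan-out: run one command on many hosts with a
// concurrency cap, pssh-style. `runner(host, command)` is an injected
// async function (e.g. wrapping ssh exec); everything is illustrative.
async function runOnFleet(hosts, command, runner, concurrency = 10) {
  const results = new Map();
  const queue = [...hosts];
  async function worker() {
    while (queue.length > 0) {
      const host = queue.shift();
      try {
        results.set(host, { ok: true, out: await runner(host, command) });
      } catch (err) {
        results.set(host, { ok: false, err: String(err) });
      }
    }
  }
  // Spin up `concurrency` workers draining the shared queue.
  await Promise.all(Array.from({ length: concurrency }, worker));
  return results;
}
```

Collecting per-host success/failure instead of throwing on the first error matters a lot during a SEV, when some fraction of the fleet is expected to be unreachable.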


Absolutely!

We already support execution of script + terminal blocks over SSH, but want much tighter integration. Parallel execution is certainly part of that too. Anything else you'd want to see?


Nothing in particular - when I wrote and used the Jupyter 'runbooks', they were most helpful when a SEV (site event / severe error) was happening, or when a new person on the team was handling oncall for the first time.


This is helpful when I don't want to use AI.

Any chance you could add "extract clips" in addition to extract frames? Specifically, I recently had to split a video into x-second clips and had to use AI to get the right command for that.
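For reference, the command I eventually landed on uses ffmpeg's segment muxer: `ffmpeg -i in.mp4 -c copy -map 0 -f segment -segment_time 10 -reset_timestamps 1 out_%03d.mp4`. The little helper below just assembles that argument list in Node (the helper itself is illustrative glue, not part of any tool):

```javascript
// Builds an ffmpeg argument list that splits a video into fixed-length
// clips using the segment muxer with stream copy (no re-encode).
function segmentArgs(input, seconds, outPattern = "out_%03d.mp4") {
  return [
    "-i", input,
    "-c", "copy",             // stream copy: fast, no quality loss
    "-map", "0",              // keep all streams from the input
    "-f", "segment",
    "-segment_time", String(seconds),
    "-reset_timestamps", "1", // each clip's timestamps start at 0
    outPattern,
  ];
}

const args = segmentArgs("input.mp4", 10);
// e.g. child_process.spawn("ffmpeg", args) in Node.
```

One caveat: with `-c copy` the cuts land on keyframes, so clip lengths are approximate; re-encoding is needed for exact boundaries.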


This is AI-generated, and the commands are wrong in multiple cases.

