Hacker News | professoretc's comments

I wonder why that would be? Presumably if the batteries are low then the pressure the machine "thinks" it's inflated the cuffs to is higher than the actual pressure...


And with eye/face tracking it can tell if you really watched it, with a smile.


Sorry, you are all out of door unlocks for today!

Dance along with the characters of the new Series, now streaming on $sponsor, and achieve a score of at least 6/10 to unlock your door again.

---

Your dance wasn't good enough; try again, or buy a door unlock with the flash discount code "Dystopia" for 99 cents.


I miss TkDesk, which I discovered many years ago when I was first trying Linux, partly because it supports unlimited splits, not just two. In fact, if I'm remembering correctly, when navigating to a subdirectory the default was just to open it in a new split. You ended up with splits containing the full path from wherever you started to your eventual subdirectory (you could scroll the view of splits horizontally once there got to be too many).

https://tkdesk.sourceforge.net/


This also lets you run QEMU over SSH, if you want. I use this in my assembly language course; towards the end I give an assignment to write Hello, World! as a 16-bit real mode MBR bootloader. Students can do the whole thing on our SSH server, including testing in QEMU (and even attaching GDB to it to debug), without needing to install anything locally.
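The 512-byte sector layout is the part students most often get wrong. As a sketch (in Python rather than the course's assembly, and with made-up names), the padding and boot-signature logic looks like:

```python
# Pad an assembled 16-bit real-mode program into a valid MBR boot
# sector: exactly 512 bytes, ending in the 0x55 0xAA boot signature.
BOOT_SECTOR_SIZE = 512
BOOT_SIGNATURE = b"\x55\xaa"

def make_boot_sector(code: bytes) -> bytes:
    if len(code) > BOOT_SECTOR_SIZE - len(BOOT_SIGNATURE):
        raise ValueError("program too large to fit in one boot sector")
    # Zero-fill up to the signature, then append 0x55 0xAA at offset 510.
    return code.ljust(BOOT_SECTOR_SIZE - len(BOOT_SIGNATURE), b"\x00") + BOOT_SIGNATURE
```

QEMU can then boot the resulting image directly, e.g. `qemu-system-i386 -drive format=raw,file=boot.img`.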


I saw "Hugginface" listed alongside C++, React, and SQL as skills on a resume recently. Wasn't quite sure what to make of that.


Honestly, it's a large enough library, with enough weirdness, untested areas, footguns, and bugs, that I'd deem it just as valid a skill as React, for example.

Why did tensor_parallel have output += mod instead of output = output + mod? (The += breaks backprop). Nobody tested it! A user had to notice it was broken and make a PR!
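The distinction matters because for mutable objects Python's `+=` mutates in place, while `= +` builds a new object. A minimal sketch with plain lists (not torch, whose autograd raises an error in the in-place case when the original tensor was saved for the backward pass):

```python
# `x += y` calls __iadd__ and mutates x in place; `x = x + y` builds a
# new object and rebinds the name. An autograd engine that saved the
# original `output` for backprop sees it silently overwritten in the
# first case but not the second.
def demo():
    saved = output = [1.0, 2.0]   # imagine autograd saved `output` here
    output += [3.0]               # in place: `saved` is mutated too
    in_place_alias = saved is output

    saved = output = [1.0, 2.0]
    output = output + [3.0]       # new list: `saved` keeps the old value
    rebound_alias = saved is output
    return in_place_alias, rebound_alias
```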


For a uni course I tried to fine-tune Gemma in a few days. It wasn't easy, because tutorials were often written for old versions of the HF libraries that now work differently. There's a lot of room for improvement; everything still seems fairly fresh, so it's a pain in the ass to deviate from simple walkthroughs to something tailored to your needs.


I've found I benefit most from AI when I ask it questions about technical topics, like programming or using a device such as a synthesizer or DAW software. There's a psychological effect, especially when I get an answer that says "that feature is not supported": I get the feeling that it's not my fault that something feels very difficult. When somebody tells me there's no easy way to do what I want, I know WHY it's difficult, so I don't waste any more time hunting for the solution. I must look elsewhere then.

So I wonder: when trying to learn AI and how to use it, shouldn't AI itself be the best guide for understanding AI? Maybe not so much for the latest research or latest products, because AI is not yet trained on those, but sooner or later AI should feel as easy a subject as, say, JavaScript programming.


Maybe that's why the models are so eager to spit out reams and reams of code, it lets their masters claim a higher percentage (even if most of the code they emit is never used).


Doesn't it still treat 64-bit code as something of an afterthought?


My wife and I agreed to expunge "just" from our vocabulary, at least when asking each other to do things. It's almost always kind of belittling, implying that the thing you're asking for is easy and obvious, and that you're an idiot/lazy for either not having done it already or for trying to explain why it's more difficult than it looks.


That's called a "unity" build, isn't it? I was under the impression that it was a relatively well-known technique, such that there are existing tools to merge a set of source files into a single .c file.
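One common form of such a tool doesn't even merge file contents; it just generates a single translation unit that `#include`s every source file. A hypothetical minimal generator:

```python
def unity_file(sources: list[str]) -> str:
    # Generate one .c file that #includes every source file, so the
    # compiler sees a single big translation unit. (Real unity-build
    # tools also have to worry about clashing file-scope `static`
    # names, which this sketch ignores.)
    return "\n".join(f'#include "{src}"' for src in sources) + "\n"
```

Compiling the generated file alone then builds the whole program in one compiler invocation.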


> I think any NPC with dialogue important to a goal (a quest, a tutorial, etc) is going to be hard to use generative AI for. It not only needs to be coherent with the story, but it needs to correctly include certain ideas. I.e. if the NPC gives a quest to go find some item at some location, it needs to say what the item is and where it is.

That was my experience when I was experimenting with using current LLMs to generate quests. You can of course ask for both a human-readable quest description and also a JSON object (according to some schema) describing the important quest elements, but the failure rate of the results was too high. Maybe 10% of quests would have some important mismatch between the description and the JSON; the description would mention an important object but it would be left out of the JSON, or the JSON would mention an important NPC but the description wouldn't, etc.

As a player, I think it would get frustrating quickly if 10% of quests were unsolvable, especially since, as a player, you don't know when a quest is unsolvable; maybe you just haven't found the item/NPC yet.
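One half of that mismatch check (JSON entity missing from the description) can be automated cheaply, so bad generations can be rejected and retried. A hypothetical sketch, with made-up field names standing in for whatever schema you use:

```python
# Reject a generated quest when an entity named in the structured JSON
# never appears in the human-readable description. The reverse check
# (description mentions something the JSON lacks) is the harder half,
# since it requires extracting entities from free text.
def quest_is_consistent(description: str, quest_json: dict) -> bool:
    text = description.lower()
    entities = [quest_json.get(key, "")
                for key in ("item", "location", "giver")]
    return all(e.lower() in text for e in entities if e)
```

A generation loop would call this and re-prompt on failure, trading latency for a lower rate of unsolvable quests.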


Yeah, 10% roughly jibes with what I would expect, under the assumption that the generated text needs to be non-deterministic (i.e. no careful prompt tuning or turning the temperature down to basically 0).

An interesting flip side I was just thinking about is the AI saying too much. NPCs keeping secrets until the player gets enough reputation or does a favor or whatever is pretty common. I wonder how good they are at keeping those secrets.

Prompt injection is one thing, and vaguely equivalent to cheat codes, which is fine. But what is the likelihood that a player just asking for more info ends with the AI spitting out the secret without the quest being completed? And will the game know to unlock the next area or whatever, given there's then no reason for the player to do that NPC's quest?

Should be neat stuff; I'm looking forward to seeing how it all works together once the kinks get ironed out.
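The most robust defense I can think of is structural rather than prompt-based: keep the secret out of the model's context entirely until the game state unlocks it, since an LLM can't be talked into leaking text it never saw. A hypothetical sketch:

```python
from dataclasses import dataclass

@dataclass
class Npc:
    persona: str
    secret: str
    secret_threshold: int  # reputation needed before the NPC may tell

def build_npc_context(npc: Npc, reputation: int) -> str:
    # Only include the secret in the prompt once the player has earned
    # it; below the threshold, no amount of clever asking can extract
    # text that isn't in the context.
    parts = [npc.persona]
    if reputation >= npc.secret_threshold:
        parts.append(npc.secret)
    return "\n".join(parts)
```

Quest completion and area unlocks would similarly stay in ordinary game logic, with the LLM only narrating around state it's explicitly handed.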

