Hacker News | falcrist's comments

One of the interesting things about the movie was how well they conveyed the mood and atmosphere on subs.

I don't know exactly how to describe it, but the sub force just has a different temperament than the surface fleet.

Of course, all of that went out the window when people in the movie started yelling at each other. From that point on it's a fictional scenario contrived to create a dramatic story.

Same with Apollo 13. Everything I see and hear about NASA personnel indicates that these people are consummate professionals who stay cool under extreme circumstances... but that wouldn't make for a good movie.

I should probably note that this is coming from the perspective of someone who grew up with a father who was a career enlisted man (CPO/EM-N) stationed mostly on boomers.


Right. The thing that bugs me about Apollo 13 is that they played up the drama unnecessarily, because the ground crew was so large, well-trained, and in sync. Take the scene where they dump a box of stuff and say "You have to make this go into here using just this?" The actual story is that one of the engineers on the ground realized basically as soon as he heard about the accident (and the LM lifeboat) that they would need to use the CM scrubbers, and within five minutes of talking to another engineer they had figured it out in principle. The delay came from wanting to walk through all the steps to make sure the documentation was correct, and the only CM scrubber available was at Kennedy, so they had to wait while it was put on a plane and flown to Houston to be mated with the rest of the practice equipment.

Similarly, the "oh, we forgot the moon rocks!" bit was actually the engineers realizing it ahead of time and changing the prep checklist to account for it, rather than a last-second dash. This was only possible because there were so many engineers, they were so immersed in the task, and they had such good lines of communication that someone identified the problem and was able to escalate the fix to the right level at the appropriate time. This didn't happen by accident; it was the result of years of working together, both in training and on actual flights, that made these teams so good.

Separately, there were a few things the movie got wrong just as one-off moments. At launch, the arms retracted simultaneously rather than sequentially as shown in the movie (not quite as cool-looking), and if you listen to the bit where Lovell says "Houston, check my math here," he is doing addition, which can't be done on a slide rule.


It’s a reality of cinema that when they do a biopic or a film about a real event, they often have to make changes that enhance the drama. Some of these are reasonable, some not so. In Apollo 13, they upped the drama by significantly shortening the time required to get the LEM up and running as a lifeboat. I’m okay with that. Inserting drama where there was none is less justifiable, as it speaks to the character of the persons involved.

That said, as a retired NASA contractor, I can say that Apollo 13 is highly respected at NASA. Hidden Figures has a lot of fans there, too. In spite of the horrible physics, Gravity also has its fans - some astronauts said it really captured the feeling of being in space.


I've read that when they adapted the story of Desmond Doss into the movie "Hacksaw Ridge", they had to tone down some of the true events, because the scriptwriters decided that audiences would find parts of the story unrealistic and unbelievable.


> "Houston, check my math here" he is doing addition, which can't be done on a slide rule.

I still love that scene where it cuts to everyone whipping out their slide rules. It just adds to the mystique that we put men on the moon with tech so antiquated compared to modern standards (even the standards of the '90s, when the movie was made).


You've probably seen it, but you'd definitely love the movie "Hidden Figures".

They do seem to get the adding machines more or less correct.


> Everything I see and hear about NASA personnel indicates that these people are consummate professionals who stay cool under extreme circumstances

On the whole, they were consummate professionals. And then there is the Apollo 10 turd incident.


It reads a bit like a verse from the song Time by Pink Floyd, which perfectly encapsulates the sense of existential dread that so many of us feel more and more often as we get older.


I don't think it's overly philosophical to point out that these are large language models, not truth engines or AGI or knowledge directories. They're not using logic to reason their way to an answer. They're just predicting the next word that would sound like part of a human answer.


Fair enough. I think a lot of people are going to end up blindly trusting AI because it's right often enough. But for those who are interested in what it really means to know something, I wonder if this will push people back towards embracing the idea that there is a fundamental, objective, knowable truth at the core of the universe, even if we can't ever know that truth perfectly.


> They're not using logic to reason their way to an answer. They're just predicting the next word that would sound like part of a human answer.

OpenAI claims recent models are actually reasoning to some extent.


They're just outputting tokens that resemble a reasoning process. The underlying tech is still the same LLM it always has been.

I can't deny that doing it that way improves results, but any model could do the same thing if you add extra prompts to encourage the reasoning process, then use that as context for the final solution. People discovered that trick before "reasoning" models became the hot thing. It's the "work it out step by step" trick, just baked into a dedicated fine-tune.
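As a rough illustration, that two-pass trick looks something like this (a minimal sketch in Python; complete() is a stand-in for whatever completion call your model exposes, not a real API):

    def complete(prompt: str) -> str:
        """Placeholder for a single LLM completion call; wire this to your model of choice."""
        raise NotImplementedError

    def answer_with_reasoning(question: str) -> str:
        # First pass: nudge the model into "thinking out loud".
        scratchpad = complete(
            f"{question}\n\nWork it out step by step before giving a final answer."
        )
        # Second pass: feed that scratchpad back in as context and ask for the answer.
        return complete(
            f"{question}\n\nReasoning so far:\n{scratchpad}\n\nFinal answer:"
        )

A "reasoning" fine-tune essentially bakes that first pass into the model itself.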


> They're just outputting tokens that resemble a reasoning process.

Looking at one such process of emulating reasoning (I've got deepseek-70B running locally), I'm starting to wonder how that differs from actual reasoning. We "think" about something, may make errors in that thinking, look for things that don't make sense, and correct ourselves. That "think" step is still a black box.

I asked that LLM a typical question about gas exchange between containers; it made some errors, but it also noticed calculations that didn't make sense:

> Moles left A: ~0.0021 mol

> Moles entered B: ~0.008 mol

> But 0.0021 +0.008=0.0101 mol, which doesn't make sense because that would imply a net increase of moles in the system.

Well, that's a totally invalid calculation; it should be a "-" in there. It also noticed elsewhere that those two quantities should be the same.
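To spell out the check it was fumbling toward (assuming the usual closed two-container setup): whatever leaves A must enter B, so the two numbers should match, and the sanity check is a difference, not a sum. The nonzero gap is the actual red flag:

    % closed two-container system: gas leaving A must enter B
    \Delta n_A = -\Delta n_B
    \;\Rightarrow\;
    n_{\text{entered }B} - n_{\text{left }A} \approx 0.008 - 0.0021 = 0.0059~\text{mol} \neq 0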

Eventually, after 102 minutes and 10141 tokens of checking the answer from different angles multiple times, it output an approximately correct response.


Only by conveniently redefining the word.

Instead, they predict the next tokens of a "think out loud" example, and wrap it up with a "conclusion and summary" example.

It doesn't know why this writing pattern is the semantic space it is exploring: it has simply been set up to do so in the first place.


Does it matter if it doesn't know why this particular pattern is suitable? Also, do you always ask yourself why you use a particular pattern, or do you just use it?


It seems like you are implying that I don't think before I speak. Maybe that is sometimes the case, but I would venture to say, "not usually, and certainly not always."

The point I'm making here is that all of these observations are made after-the-fact. We humans see five different categories of output:

1. "I do know X" where X is indeed correct information

2. "I do know X" where X is false information or nonsense

3. "I don't know" when it really doesn't

4. "I don't know" when a slightly different prompt would lead to option #1

5. Output that is not phrased as a direct answer to a question.

The article introduced #2 as "hallucinations". I introduced #4 in my previous comment (and just now #5), and propose that all five are hallucinations.

As far as the LLM is concerned, there is only one category of output: the most likely next token. Which of the five it will be is determined by the examples present in the training corpus, which are weighted during training.
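To make that concrete, here is a toy sketch of what "the most likely next token" amounts to (the vocabulary and scores are invented for the example):

    import math

    vocab = ["I", "don't", "know", "Paris", "42"]
    logits = [1.2, 0.3, 0.5, 2.8, -1.0]  # hypothetical scores from one forward pass

    # Softmax turns the scores into a probability distribution over the vocabulary.
    exps = [math.exp(x) for x in logits]
    probs = [e / sum(exps) for e in exps]

    # Greedy decoding: emit whichever token is most probable, true or not.
    next_token = vocab[probs.index(max(probs))]
    print(next_token)  # -> "Paris"

Whether that token lands in category 1, 2, 3, 4, or 5 is decided entirely by which continuations scored well in training.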

Logic is not present in the process. It is only present in the result.


> It seems like you are implying that I don't think before I speak.

I'm implying that most of the time you don't think before you think or after you think (you and I typically don't meta-think).

I'm saying that very often I (and, it seems, a lot of people around me) don't think much before speaking. I have an internal monologue when I'm "thinking something out", but I typically don't think things through when I'm speaking with people in day-to-day conversations, only when I encounter a problem I haven't seen yet and am not "trained" in solving. Maybe some people can fully reason out their sentences in the split second before they start talking, but not me. IIRC those two modes are called fast and slow thinking.

> Logic is not present in the process. It is only present in the result.

I'm talking about that process. Have you seen the "thinking" part of current reasoning LLMs? It does indeed look like a process of using logic. After the "thinking" part, there is an "output" part that draws conclusions from the thinking process. Recently I asked a local version of DeepSeek about a gas exchange problem and it thought about this at length, making some small mistakes in logic, correcting them, and ultimately returning an approximately valid result. It even made some small errors in calculation and corrected itself by multiplying parts of the numbers and adding them to get the correct result. I've put that example online[1] if you'd like to read it; it's pretty interesting.

[1] https://pastebin.com/mXyLGCGQ


I guess the crux of it is this: is it training or awareness?

What I see happening between the <think> tags of Deepseek-R1 is essentially a premade set of circular prompts. Each of these prompts is useful, because it explores a path of tokens that are likely to match a written instance of logical deduction.
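Mechanically, it's mundane: the "reasoning" is just more generated text in the same token stream, split off from the answer by the tags. A rough sketch, with an invented transcript:

    # The scratchpad and the visible answer come from the same next-token process;
    # the <think> tags are the only thing separating them.
    raw = "<think>Container A loses gas, so B must gain the same amount...</think>So the answer is about 0.006 mol."

    think, _, answer = raw.partition("</think>")
    think = think.removeprefix("<think>").strip()

    print("scratchpad:", think)
    print("answer:", answer.strip())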

When the <think> continuation rewrites part of a prompt as a truthy assertion, it reaches a sort of fork in the road: to present a story of either acceptance or rejection of that assertion. The path most likely followed depends entirely on how the assertion is phrased (both in the prompt, and in the training corpus). Remember that back in the training corpus, example assertions that look sensible are usually followed by a statement of acceptance, and example assertions that look contradictory or fallacious are usually followed by a statement of rejection.

Because the token generation process follows an implicit branching structure, and because that branching structure is very likely to match a story of logical deduction, the result is likely to be logically coherent. It's even likely to be correct!

The distinction I want to make here is that these branches are not logic. They are literary paths that align to a story, and that story is - to us - a well-formed example of written logical deduction. Whether that story leads to fact or fiction is no more and no less than an accident. We humans often tend to follow a similar process, but we can actively choose to do real critical thinking instead.

This design pattern is really useful for a few reasons:

- it keeps the subjects of the prompt in context

- it presents the subjects of the prompt from different perspectives

- it often stumbles into a result that is equivalent to real critical thinking

On the other hand,

- it may fill the context window with repetitive conversation, and lose track of important content

- it may get caught in a loop that never ends

- it may confidently present a false conclusion to itself, then expand that conclusion into a whole thread

- the false conclusions it presents will be much less obvious, because they will always be written as if they came out of a thorough process of logical deduction

I find that all of these problems are much more likely to occur when using a smaller locally hosted copy of the model than when using the full-sized one that is hosted on chat.deepseek.com. That doesn't mean these are solved by using a bigger model, only that the set of familiar examples is large enough to fit most use cases. The more unique and interesting your conversation is, the less utility these models will have.


> We humans often tend to follow a similar process, but we can actively choose to do real critical thinking instead.

> - it may confidently present a false conclusion to itself, then expand that conclusion into a whole thread

I want to know how that differs from human "real critical thinking", because I may be missing this function. How do you know whether what you thought of is true or false? I only know it because I think I know it. I have made a lot of mistakes in the past with a lot of confidence.

> The more unique and interesting your conversation is, the less utility these models will have.

Yeah, that also happens with a lot of people I know.

> ... the result is likely to be logically coherent. It's even likely to be correct!

Yeah, a lot of training data made sure that what it outputs is as correct as possible. I still remember my own training over many days and nights to be able to multiply properly, with two different versions of the multiplication table and many false results until I got it right.

> I guess the crux of it is this: is it training or awareness?

I don't think LLMs are really aware (yet). But they do indeed follow a logical reasoning method, even if it isn't perfect yet.

Just a thought: when do you think about how and what you think (awareness of your thoughts)? While you actually think through a problem, or after that thinking? Maybe to be self-aware, AIs should be given some "free-thinking time". Currently it's "think about this problem and then immediately stop; do not think any more". Current training data discourages any "out-of-context" thinking, so they don't do it.


We know what true and false mean. An LLM knows what true and false are likely to be surrounded with.

The problem is that expressions of logic are written many ways. Because we are talking about instances of natural language, they are often ambiguous. LLMs do not resolve ambiguity. Instead, they continue it with the most familiar patterns of writing. This works out when two things are true:

1. Everything written so far is constructed in a familiar writing pattern.

2. The familiar writing pattern that follows will not mix up the logic somehow.

The self-prompting, train-of-thought LLM pattern is good at keeping its exploration inside these two domains. It starts by attempting to phrase its prompt and context in a particular familiar structure, then continues to rephrase it with a pattern of structures that we expect to work.

Much of the logic we actually write is quite simple. The complexity is in the subjects we logically tie together. We also have some generalized preferences for how conditions, conclusions, etc. are structured around each other. This means we have imperfectly simplified the domain that the train-of-thought writing pattern is exploring. On top of that, the training corpus may include many instances of unfamiliar logical expressions, each followed by a restatement of that expression in a more familiar/compatible writing style. That can help trim the edge cases, but it isn't perfect.

---

What I'm trying to design is a way to actually resolve ambiguity, and to do real logical deduction from there. Because ambiguity cannot be resolved to a single correct result (that's what ambiguity means), my plan is to use, each time, an arbitrary backstory for disambiguation. This way, we could be intentional about the process instead of relying on the statistical familiarity of tokens to choose for us. We would also guarantee that the process itself is logically sound, and fix it where it breaks.


If you define being a kid as "the ability to express emotions and enjoy playing", then yes.

I feel like that's a terrible definition, though.


I think the important point that isn't yet well accepted in our culture is that maturity doesn't happen automatically with age. There are extremely competent, mature, and smart 14-year-olds, and there are 29-year-olds who haven't emotionally matured yet.

Education is slowly shifting towards "mastery learning" (instead of just taking a test and moving on, you keep practicing a thing until you get better). This is clearly the superior approach and how people learn things anyway (it's how people pick up skills on the job). We will hopefully see a similar shift around maturity: accept that some people get very far in life without necessary skills, remove the shame, and help people level up.


Mastery learning works great for some subjects but not as well for subjects best learned in groups. If you put 14-year-olds and 29-year-olds together in the same class, that's likely to be problematic. We can't really state that any one pedagogical approach is "superior"; the data doesn't support that.


There aren't many Microsoft products that seem to be genuinely good without many complications or drawbacks.

Most of those products have the words "Visual" and "Studio" in the name.

Atmel Studio can come too.


Being intelligent might correlate with a more liberal outlook, but I know a few highly intelligent people who are big fans of trump.

My father was a nuclear reactor operator and electrician for a big chunk of his 22 years in the Navy (EM-N, E-8, worked mostly on fast-attack subs). He was a manager at multiple nuclear power plants on the East Coast before leaving that industry. He's honestly highly intelligent and curious.

He's definitely a trump fan. He thinks Biden is going to destroy the US. Dude's politics are unhinged.

One of my siblings is disabled, bi, and trans, yet both of my parents supported trump openly in 2016.

C'est la vie. It goes to show you never can tell.


I don't doubt what you have said, and it follows my own experiences of politics in Australia.

But I must say, when I hear the reasons why people vote the way they do, my estimation of their intelligence drops significantly.


Do intelligence and voting have some sort of correlation? Should a vote from an unintelligent person count less in a hypothetical better system?


Speaking in pure hypotheticals, the most effective form of governance is benevolent dictatorship.

... But in terms of forms that are practically implementable without perfect knowledge and control of the future, there are huge risks involved in altering how much a person's vote counts based on some kind of gameable criteria.


It can't stop him from running because prison doesn't disqualify a presidential candidate.

IIRC, Eugene V. Debs ran from prison in 1920, and he got something like 5% of the vote... As a third party candidate. He was with the socialists.


> It can't stop him from running because prison doesn't disqualify a presidential candidate.

I'm unfortunately reminded once again that committing certain felonies (or failing to pay legal debts) can permanently disqualify people from voting [1][2].

[1] https://en.wikipedia.org/wiki/Felony_disenfranchisement_in_t...

[2] https://en.wikipedia.org/wiki/Felony_disenfranchisement_in_t...


It also may be worth simply providing an outlet for people to make a comment so it doesn't intrude on usual business.


But not DRM content. The system will recognize and avoid screenshotting that.

Banking details? Fine.

Passwords? A-OK!

Copyrighted content? STOP, YOU FOOLS, BEFORE YOU ANGER THE MOUSE!

This whole thing is dystopian, but making it a priority to avoid DRM-controlled content is wild to me. The screenshots are ostensibly a cache, which is part of how video is displayed anyway. Is avoiding a cache of six frames per minute really a higher priority than all of the privacy this undermines?


You haven't been able to take screenshots of DRM-controlled content for as long as DRM-controlled content has existed. That's what makes it DRM. This technology isn't doing anything special at all. If you wanted to develop it for Windows, you could, and it would work the same way.


This doesn't sound very likely. In order to show pixels on a screen, something has to write those numbers somewhere. A screenshot should just be writing those pixels to a file in an image format, not consulting whatever programs are running to ask if there are areas they don't want drawn.


The pixels are literally encrypted on the PCIe bus on their way over to the GPU; it's quite draconian. It's been this way since Vista ("Blu-ray is a bag of hurt" - Steve Jobs).

https://en.wikipedia.org/wiki/Protected_Media_Path


That is a crazy thing to do! Looks like it's Windows-specific, at least.


This is rank populism. You're implying sinister intent when the technical and legal constraints that led to this should be obvious. You already can't screenshot DRM content.


That might suggest that we have sinister copyright law. Aside from that, I'm pretty sure this feature threatens to violate a full battalion of privacy laws.


I could see the US pulling back on Israel if it starts to cost them soft power elsewhere, but you're fundamentally correct. Israel is the primary instrument of US hegemony in the Middle East, and they aren't going to risk losing that.


Biden has already started drawing lines, though. He has actively been distinguishing between defensive and offensive use cases and basically saying that US aid isn't for the latter.


Likely just political nonsense to help stop the (supposed) bleeding in the swing states. As it stands, if the election were held today, polls show a massive loss for him.

What even is his plan anymore? Keep beating the "I'm better than Trump" drum and hope for the best?


Yes, that appears to be his plan.


Personally, I'm no fan of the two-party duopoly, and 2020 was my first time voting for a mainstream party in a national election (after decades of voting). Lest we forget, the last Trump term had a paralyzed federal government incapable of responding to national or international crises (or unwilling to?), the polar opposite of leadership, with the bully pulpit used to divide as if still campaigning, and it culminated in an economic catastrophe of massive inflation that we're still reeling from today. And that was all before the chode embraced wholesale-reality-rejecting big lies and grew a massive chip on his shoulder indicating a desire for straight revenge on his political opponents. So at least to me, affirmatively supporting the conservative option of Biden, simply so we continue to have a country to criticize, despite all of the abhorrent status quo military-industrial-surveillance-complex shit continuing to go on, has a pretty strong appeal. If "I'm better than Trump" can't carry the election on its own, then frankly we're doomed.


All of what you said basically indicates your elite status in a country where a majority of people are in so much trouble that they can't scrounge up $500 for an emergency expense [1].

[1]: https://www.cnbc.com/2023/08/31/63percent-of-workers-are-una...

There is an argument to be made that the country is not thinking about any of this stuff at all and they will base their vote on how they feel come election time.

On the flipside, there is also evidence to indicate that Trump has completely lost the independent vote which you need to win the election.

You know what would help save the country? If Biden did not seek re-election. After all, he originally called himself a "transition" president in 2020. His selfish behavior is the reason we are even having this discussion. There still might be time to swap him out at the convention if things get really bad, but there is an argument to be made that the large Muslim community in Michigan would rather punish him over Israel/Palestine and accept Trump than give him the satisfaction he so desires of being a two-term president.

That's all Biden seems to really care about, and making him the president who lost the country to Trump would be a greater way to get payback than anything else they could try.


I certainly do understand that dynamic of economic stress causing poor decisions. It's just a terrible setup when a president/party can trash the economy, and then pin the results on their opponent-successor as the effects play out over several years. See also the ongoing Trump income tax increases that were baked into law in 2017. To the extent voters are gullible enough to fall for this, as I said, we're doomed.

In general I don't see why the Israel situation should be laid at Biden's feet specifically, as opposed to the Israeli lobby that has had a stranglehold on US politics for quite some time. Short of waiting for generational change, the only way to sidestep that would be to paralyze the US government's foreign policy, but that also means the end of US hegemony, USD as a reserve currency, and the generally high standard of living we take for granted. Frankly it's odd to attack such conservatism as a "privileged" position, when the fact of the matter is that people with wealth will be generally fine regardless of what happens.

As for Biden, his personal aspirations, and internal party jockeying, I've got little opinion. Swap in another not-Trump candidate and I'll vote for them instead. Personally, at this point I'd likely support someone even if they were a blue-flavored extremist, because the incumbent power structure would be inherently set to resist them, whereas a reelection of criminal Trump would essentially be throwing in the towel on our system of government. But obviously another extremist would turn a lot of people off, and let so-called "conservatives" maintain their cognitive dissonance that Trumpism is about conserving anything at all.

Ultimately I think if we want to throw off the chains of the military industrial surveillance complex, we've got to take that bottom-up from the grassroots (eg decentralized communications tech) rather than thinking a power vacuum at the top will lead to anything good.


I don't see evidence of any lines drawn

