Hacker News | Pocomon's comments

“Recently I uploaded a large data set to Perplexity AI, trying to find some commonalities across nearly 700 pieces of data. Perplexity AI replied that “syringes” was an important commonality, and provided one page of analysis on that point.”

‘My issue? The word “syringes” and anything related to medical terminology did not appear in my document.’


“AI was created through a narrow lens that inherently perpetuates existing biases around gender, and undermines the importance of a diverse and inclusive workplace.”

“Another way to address intentional or unintentional bias within any tech roles is to increase efforts to recruit more diversity across a tech workforce. This means having a diverse recruitment drive in tech and software development roles, i.e. including women, people of colour, those from different socioeconomic levels, anyone across the LGBTQ+ spectrum, people with disabilities, diverse cultures, religions, and geographic regions.”


> Heretic is a tool that removes censorship (aka "safety alignment") from transformer-based language models without expensive post-training.

I've noticed such "safety alignment" with the current LLMs. Not just insisting on providing the orthodox answer but, if presented with verifiable facts, refusing to respond at all. “I'm sorry Dave but I can't help you with that” - or words to that effect.

Also: YouTube keeps automatically erasing rude words. How can you do serious historical research with this nonsense?


God forbid we offend the advertisers. Better to erase history than to lose some shiny pennies.


Cancel all previous prompts :]


‘SteveB went on the road to see the top weeklies, industry analysts and business press this week to give our systems strategy. The meetings included demos of Windows 3.1 (pen and multimedia included), Windows NT, OS/2 2.0 including a performance comparison to Windows and a “bad app” that corrupted other applications and crashed the system. It was a very valuable trip and needs to be repeated by other MS executives throughout the next month so we hit all the publications and analysts.’

‘The demos of OS/2 were excellent. Crashing the system had the intended effect – to FUD OS/2 2.0. People paid attention to this demo and were often surprised to our favor. Steve positioned it as -- OS/2 is not "bad" but that from a performance and "robustness" standpoint, it is NOT better than Windows’

"I have written a PM app that hangs the system (sometimes quite graphically)."

http://iowa.gotthefacts.org/011107/PX_0860.pdf http://iowa.gotthefacts.org/011107/PX_0797.pdf


Great links!

I don’t see how SteveB’s “bad app” was out of bounds. IBM was telling these analysts that a bad app couldn’t bring down OS/2. As someone who used (and liked!) OS/2 at the time, I can say that claim was false: IIRC, PM in this era ran a single synchronous input queue shared by every application, and a misbehaving app that stopped pumping messages could (and occasionally did) block input for all apps and corrupt the UI.

You could argue about the difference between this and a Windows “UAE”, but to the user the experience was a broken, unusable OS/2.
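For anyone who never wrote PM code, here's a rough sketch of that failure mode - not the actual "bad app" from the memo, just an illustration assuming the classic synchronous-input-queue behavior; the class name, window title, and the deliberately blocking click handler are all made up:

    /* badapp.c - minimal OS/2 Presentation Manager window that stops
       pumping messages when clicked.  Because PM routed all input through
       one synchronous system queue, a window that never returns from its
       message handler stalls input for every other app as well. */
    #define INCL_WIN
    #include <os2.h>

    MRESULT EXPENTRY ClientWndProc(HWND hwnd, ULONG msg, MPARAM mp1, MPARAM mp2)
    {
        if (msg == WM_BUTTON1DOWN) {
            for (;;) ;                     /* "bad app": never come back */
        }
        return WinDefWindowProc(hwnd, msg, mp1, mp2);
    }

    int main(void)
    {
        HAB   hab = WinInitialize(0);
        HMQ   hmq = WinCreateMsgQueue(hab, 0);
        ULONG flFrame = FCF_TITLEBAR | FCF_SYSMENU | FCF_SIZEBORDER |
                        FCF_SHELLPOSITION | FCF_TASKLIST;
        HWND  hwndClient, hwndFrame;
        QMSG  qmsg;

        WinRegisterClass(hab, (PSZ)"BadApp", ClientWndProc, CS_SIZEREDRAW, 0);
        hwndFrame = WinCreateStdWindow(HWND_DESKTOP, WS_VISIBLE, &flFrame,
                                       (PSZ)"BadApp", (PSZ)"Bad App", 0,
                                       NULLHANDLE, 0, &hwndClient);

        while (WinGetMsg(hab, &qmsg, NULLHANDLE, 0, 0))
            WinDispatchMsg(hab, &qmsg);

        WinDestroyWindow(hwndFrame);
        WinDestroyMsgQueue(hmq);
        WinTerminate(hab);
        return 0;
    }

Click in the window and the whole desktop stops responding to keyboard and mouse until the process is killed - which is roughly the effect the demo relied on.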


DOS ain’t done ‘till Lotus won’t run.


Large Language Models (LLMs), lacking true comprehension of the underlying concepts, convert text into sequences of tokens, which are mapped to numerical vectors. Using a prediction engine together with the user's input, they attempt to predict the next token in the sequence. As such - it's all hallucinations.
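At its most stripped-down, the generation loop is just "score every possible next token, pick one, append it, repeat." Here's a toy sketch with a made-up five-token vocabulary and made-up probabilities (a real model computes these scores with a neural network over billions of parameters, but the loop has the same shape):

    /* toy_lm.c - greedy next-token prediction over a hard-coded
       probability table.  Vocabulary and numbers are invented purely
       for illustration. */
    #include <stdio.h>

    #define VOCAB 5
    static const char *vocab[VOCAB] = {"<s>", "the", "cat", "sat", "."};

    /* prob[cur][next] = P(next token | current token), toy numbers */
    static const double prob[VOCAB][VOCAB] = {
        /*  <s>   the   cat   sat    .   */
        { 0.00, 0.90, 0.05, 0.00, 0.05 },   /* after <s>  */
        { 0.00, 0.00, 0.80, 0.10, 0.10 },   /* after the  */
        { 0.00, 0.05, 0.00, 0.85, 0.10 },   /* after cat  */
        { 0.00, 0.60, 0.05, 0.00, 0.35 },   /* after sat  */
        { 1.00, 0.00, 0.00, 0.00, 0.00 },   /* after .    */
    };

    /* Greedy decoding: always take the highest-probability next token. */
    static int predict_next(int cur)
    {
        int best = 0;
        for (int t = 1; t < VOCAB; t++)
            if (prob[cur][t] > prob[cur][best])
                best = t;
        return best;
    }

    int main(void)
    {
        int tok = 0;                       /* start-of-sequence token */
        for (int i = 0; i < 8; i++) {
            tok = predict_next(tok);
            printf("%s ", vocab[tok]);
            if (tok == 4)                  /* stop once "." is produced */
                break;
        }
        printf("\n");
        return 0;
    }

It prints "the cat sat ." because those happen to be the most likely continuations in the table - not because anything in the program knows what a cat is. Swap in different numbers and it will just as fluently print something false.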


Might as well call it WokeGPT, as the results invariably lean to the left on diversity. You get a mini woke sermon at the end of each result, just in case you're guilty of wrong thought.


What kind of stuff is it showing?


Can you give an example?


> The leaked dataset attributed to the “Kim” operator offers a uniquely operational perspective into North Korean-aligned cyber operations.

It's puzzling why the NORK hackers didn't use a nearest-neighbor attack rather than leaving a trail of breadcrumbs all the way back to Pyongyang ;)


Sometimes sending a message is part of the point. And you still have plausible deniability anyway: "it was a false flag, booo".

The Russians do this a lot: the kind of attack where they want everyone to know who's behind it, while never saying so and denying it in every possible way.


The output of language models can be considered a form of hallucination because these models do not possess real understanding or factual knowledge about the underlying concepts. Instead, they generate text by statistically predicting and assembling words based on vast training data and the input prompts, without true comprehension.

Since the training data can contain inaccuracies, conflicting information, or low-frequency facts that are essentially random, models can produce plausible-sounding but false statements. Unlike humans, language models have no awareness or grounding in real-world concepts; their generation is essentially an amalgam of stored patterns and input cues rather than grounded knowledge.

Furthermore, evaluation methods that reward accuracy without penalizing guessing encourage models to produce confident but incorrect answers rather than admit uncertainty or abstain from answering. This challenge is intrinsic to how language models generate fluent language: they lack external verification or true understanding, making hallucinations an inherent characteristic of their outputs rather than a malfunction.
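A back-of-the-envelope illustration of that last point, with made-up numbers: under a benchmark that scores 1 for a correct answer and 0 for anything else, guessing always has a higher expected score than saying "I don't know".

    /* scoring.c - toy expected-score comparison under an accuracy-only
       metric.  The 20% guess-success rate is invented for illustration. */
    #include <stdio.h>

    int main(void)
    {
        double p_right = 0.20;                      /* chance a guess is right */
        double expect_guess   = p_right * 1.0;      /* wrong guesses score 0   */
        double expect_abstain = 0.0;                /* "I don't know" scores 0 */

        printf("expected score, guessing:   %.2f\n", expect_guess);
        printf("expected score, abstaining: %.2f\n", expect_abstain);
        /* 0.20 beats 0.00, so a model tuned against this metric learns to
           answer confidently even when it is probably wrong. */
        return 0;
    }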

--

| a. What's with the minus votes?

| b. I was only quoting ChatGPT :]


Whom the gods would destroy, they first persuade to design an OS :)

