The Real Story Behind Sam Altman’s Firing From OpenAI (wsj.com)
119 points by Philpax 8 months ago | 115 comments



I’m even more confused now than before I read the article:

- Sutskever and Murati compile evidence of Altman’s lies and manipulation to oust him.

- Sutskever emails the evidence to the board and suggests they act on it.

- The board fires Altman but refuses to explain why.

- Murati demands the board explain why.

- The board refuses, and Murati and Sutskever rebel against the board and petition with other employees to reinstate Altman.

It all makes no sense. And why wouldn’t the board just explain their decision if Murati herself was imploring them to do so?


I read the article on archive and figured there was a big chunk missing. It really does not make any sense.

Sutskever and Murati were methodical: they waited until the board was favorable to the outcome they wanted, engaged with board members individually to lay the groundwork... and then just changed their minds when it actually happened!?


The article says Sutskever was blindsided by the rank-and-file being on Sam's side. Presumably he thought the outcome was going to be business as more-or-less usual but with Murati or someone as CEO and then panicked when that didn't happen.


Or someone said "If you don't switch and back me, I am going to fight every bit of your compensation. Or you can back me and leave with favorable terms."

Panic is a less likely driver.


The board did not plan or execute their ouster well, which forced Murati and Sutskever to coup their coup to maintain the stability of the company. The board and Sutskever were expecting the general support of the company, so they had no real backup plan or evidence ready that they could publicly release.


Why couldn’t they release the evidence? At least some of it is here in the article, and it’s damaging to Sam but not particularly damaging to the company. If Murati demanded they release the evidence, why refuse?


Murati didn't demand they release the evidence, as far as I could tell. The board is described as not wanting to throw Murati under the bus by revealing that the evidence came from her, which makes sense if their goal was to install her as the new CEO.


> [Murati] and the rest of the executive team gave the board a 30-minute deadline to explain why they fired Altman or resign—or else the executive team would quit en masse.

> The board felt they couldn’t divulge that it had been Murati who had given them some of the most detailed evidence of Altman’s management failings.

Unclear whether she wanted it released internally or publicly, but this sounds like a demand to me.


She/the team demanded an explanation, not the Murati-sourced evidence. You could read this as Murati asking the board to provide a narrative the rest of the team could buy into without directly using her evidence; I would assume that she was operating under the assumption that the board had already prepared such a narrative.


It makes perfect sense if you don't try to read too much logic into their actions and view it solely through social dynamics and emotions (Murati realised early that the coup led by Sutskever, Toner and her had failed). Besides that, the board installing her as the new CEO, when she had provided the main claims for ousting the old CEO, wouldn't fly with employees and partners. She knew that. Also, some of the people on the board clearly weren't qualified for the job, as you can see from how this whole coup was carried out.


> And why wouldn’t the board just explain their decision if Murati herself was imploring them to do so?

I think because they were in over their heads. They were on the board to run a non-profit and then it metastasized into a high-stakes Fortune 50-sized company.


People cared about the OpenAI drama when it looked like they might have some real edge and the future of AI depended on them. Now it’s clear the tech is cool but rapidly converging into a commodity with nobody having any edge that translates into a sustainable business model.

In that reality they can drama all they want now, nobody really cares anymore.


Yes, and open source models + local inference are progressing rapidly. This whole API idea is kind of limited by the fact that you need to round-trip to a datacenter + trust someone with all your data.

Imagine when OpenAI has their 23&me moment in 2050 and a judge rules all your queries since 2023 are for sale to the highest bidder.


It doesn't need to wait until 2050. The queries would be for sale as soon as they stop providing a competitive advantage.


Even worse for these LLM-as-a-service companies is that the utility of open source LLMs largely comes down to the customization: you can get a lot of utility by restricting token output, varying temperature, and lightly retraining them for specific applications.

The use-cases for LLMs seem unexplored beyond basic chatbot stuff.
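
To make the customization point concrete, here is a minimal sketch of what "restrict token output, vary temperature" looks like with a local open model via Hugging Face transformers; the model name and prompt are just illustrative picks, not anything from the thread:

    # Hedged sketch: restricting output length and lowering temperature on a
    # small local model. Model choice is illustrative; any small instruct model works.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # illustrative
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    prompt = "Summarize this support ticket in one sentence: ..."
    inputs = tokenizer(prompt, return_tensors="pt")

    outputs = model.generate(
        **inputs,
        max_new_tokens=40,   # hard cap on output tokens
        do_sample=True,
        temperature=0.3,     # lower temperature = less random, more focused output
    )
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    print(tokenizer.decode(new_tokens, skip_special_tokens=True))

Light retraining for a specific application could be, e.g., a LoRA-style fine-tune on top of this, but the knobs above already cover a lot of the customization being described.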


I'm surprised at how little their utility for turning unstructured data into structured data, even with some margin of error, is discussed. It doesn't even take an especially large model to accomplish it, either.

I would think entire industries could reform around having an LLM as a first pass on data, with software and/or human error checking at significant cost reduction over previous strategies.
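
A rough sketch of that pattern, LLM first pass then a software second pass, assuming a hypothetical complete(prompt) helper that calls whichever model you have on hand (the field names are made up for illustration):

    # Hedged sketch: the LLM turns unstructured text into a candidate JSON record,
    # then plain software validates it before anything downstream trusts it.
    import json

    def extract_invoice(text, complete):
        prompt = (
            "Extract these fields from the invoice below and reply with JSON only: "
            "vendor (string), total (number), currency (string).\n\n" + text
        )
        raw = complete(prompt)      # first pass: model output, may be wrong or malformed
        try:
            record = json.loads(raw)
        except json.JSONDecodeError:
            return None             # second pass: reject non-JSON output outright
        if not isinstance(record.get("vendor"), str):
            return None
        if not isinstance(record.get("total"), (int, float)) or record["total"] < 0:
            return None
        if record.get("currency") not in {"USD", "EUR", "GBP"}:
            return None
        return record               # only well-formed, sanity-checked records survive

Records that fail the checks can be routed to a human, which is roughly where the cost reduction comes from: the model does the bulk transcription, software and people only handle the residue.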


The software-based second pass is where the most value lies (and the hard problems)


Selling tokens is likely to be a tough business in a couple of years


There's more to business than tech. There's more to business than product.

The software behind Facebook as an app wasn't particularly unique, yet it eclipsed the competition. The same could be said for Google. Google didn't even have any real lock-in for years, but it still owned consumer mindshare, which gave it the vast majority of search traffic, which made it one of the most valuable companies in the world.

ChatGPT is in a similar position. The fact of the matter is, the average person knows what ChatGPT is and how to use it. Many hundreds of millions of normal people use ChatGPT weekly, and the number is growing. The same cannot be said of Claude, DeepSeek, Grok, or the various open source models.

And the gap is massive. It's not even close. It's like 400M weekly ChatGPT actives vs 30M monthly Claude actives.

So yes, the average Hacker News contrarian who thinks their tiny bubble represents the entire world might think that "nobody cares," in part because nobody they know cares, and in part because that assessment aligns with their own personal biases and desires.

But anyone who's been paying attention to how internet behemoths grow for the past 30 years certainly still cares about OpenAI.


> The software behind Facebook as an app wasn't particularly unique, yet it eclipsed the competition. The same could be said for Google.

I remember the search engines of the time and Google was a quantum leap.

ChatGPT is even more revolutionary but whatever Google is now, once it was brilliant.


It gets more interesting the older I get, seeing people speculate about the internet now vs. the early internet without having lived through the 90s web. My first response is "do you not remember what that was like?!", then I remember that, no, in fact they might not even have been born yet.


Useful search unlocked the web. I will take that over LLMs in their present state.


I agree, just saying, ChatGPT was a quantum leap, too. That's why it has all the consumer mindshare.


You can't compare Facebook with ChatGPT because the costs per user are in totally different orders of magnitude. One $5/mo VPS can serve the traffic of several hundred thousand Facebook users, while ChatGPT needs an array of GPUs per active user. They can optimize this somewhat, but never as much as Facebook can.

This means that they're stuck with more expensive monetization plans to cover their free tier loss leader, hence the $200/mo Pro subscription. And once you're charging that kind of price to try to make ends meet, you're ripe for disruption no matter how good your name recognition.


"ChatGPT needs an array of GPUs per active user" - nit: you're exaggerating by a few orders of magnitude.

First, queries from users can be combined and fed into servers in batches so that hundreds of queries can be concurrently served by a single node. Second, people aren't on and asking ChatGPT questions every second of every day. I'd guess the median is more like ~single digit queries per day. Assuming average response length of 100 tokens and throughput of 50 tok/s at batch size 50, that's 25 QPS or 2.1M queries per day, or 420k users served per node at 5 queries per user per day.

Now, a single 8xH100 node is a lot more expensive than $5/mo, so you're directionally correct there, but I'd wager you can segment your market aggressively and serve heavily distilled/quantized models (small enough to fit onto single commodity GPUs, or even CPUs) to your free tier. Finally, this is subject to Huang's Law, which says every 2 years the cost of the same performance will more than halve.
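
For what it's worth, the arithmetic above holds together; here's the same back-of-envelope in Python, with every input being the assumptions stated above (reading 50 tok/s as per-stream throughput, and noting the exact result is nearer 430k users per node than 420k):

    # Back-of-envelope throughput math; all inputs are the assumptions from the comment above.
    tokens_per_response = 100
    tokens_per_sec_per_stream = 50   # interpreted as per-stream decode speed
    batch_size = 50
    queries_per_user_per_day = 5

    node_tokens_per_sec = tokens_per_sec_per_stream * batch_size      # 2,500 tok/s
    queries_per_sec = node_tokens_per_sec / tokens_per_response       # 25 QPS
    queries_per_day = queries_per_sec * 86_400                        # ~2.16M queries/day
    users_per_node = queries_per_day / queries_per_user_per_day       # ~432k users/node

    print(f"{queries_per_sec:.0f} QPS, {queries_per_day/1e6:.2f}M queries/day, "
          f"~{users_per_node/1e3:.0f}k users per node")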


People said similar things about Facebook. "Oh their user growth might be amazing, but they're not making any money, it's not a real business."

But it turns out that with enough funding, you can prioritize growth over profit for a very long time. And with enough growth, you can raise unlimited funds before you get to that point. And going this route is smart and effective if you want to get to a $1T valuation in under a decade.

So yeah, ChatGPT's margins might not be as high as Facebook's. But it doesn't really matter at this point, they're in growth mode. What matters is whether or not they'll be able to turn their lead and their mindshare into massive profits eventually, and while we can speculate on that, it's far too early to definitively say the answer is no.


Like I said, that doesn't apply when money isn't free, and it doesn't apply when your marginal cost per user is so high.


Rather than getting into the nitty-gritty of monetization: when we ask whether OpenAI can nail product the way Facebook did (I guess) and become the next tech giant, I think we have to ask whether that's even possible now that the tech industry is as established as it is.

You would think existing megacaps would be all over any new market if there is a profit to be made. Facebook's competition was basically other startups. That said, Google seems to be dropping the ball almost as badly as Yahoo did.

But sure, if there's absolutely no way to make money from consumer AI then that will also make it hard for OpenAI to win the game.


> Google didn't even have any real lock-in for years, but it still owned consumer mindshare, which gave it the vast majority of search traffic, which made it one of the most valuable companies in the world.

This isn't correct at all. Google's search engine was an important stepping stone to the behavior that actually gave them lock-in, which was an aggressive, anti-competitive and generally illegal effort to monopolize the market for online advertising through acquisitions and boxing out competitors.

It was really only possible because, for whatever reasons, we decided to completely stop enforcing antitrust laws for a decade or two.


The Microsoft antitrust case also gave them a couple of years to grow without that threat.


400 million use it for free, and you can give away 400 million of anything for free. The question is how many are willing to pay the monthly fee required to stop OpenAI from bleeding $5 billion/year and return the promised trillions to investors.


You absolutely can't give away 400 million of anything for free. Have you ever tried?


[flagged]


Regardless of how wrong someone is or you feel they are, can you please make your substantive points thoughtfully? This was not a good Hacker News comment, and you've unfortunately been doing this repeatedly lately.

https://news.ycombinator.com/newsguidelines.html


OpenAI is spending $2 for every $1 it earns. It's certainly eating its investors' lunch, but it's not a sustainable business yet and from all accounts doesn't have a clear plan for how to become one.

Meanwhile, the ZIRP policies that made this kind of non-strategy strategy feasible are gone.


I wouldn't worry. Ending ZIRP gave governments two choices: reduce spending by ~10% on average (in Europe), or cheat and scheme to bring it back, on a roughly two-year timer.

Interest rates rose, but came back down before the two years were even up (the rate rise started 27 Jul 2022 and rates started coming back down 12 Jun 2024), governments have been caught cheating, and the number of central bankers replaced has gone up dramatically. Oh, and none of the governments have reduced spending. Literally not a single one. In fact, Germany has agreed to an unprecedented increase in debt financing of its government.

In other words, ZIRP, even negative rates, is coming back, and a lot sooner than most people think. Your next house, despite everything that's happened, will be more expensive. But I doubt this will save either OpenAI or Tesla.


>They had banked on Murati calming employees while they searched for a CEO. Instead, she was leading her colleagues in a revolt against the board.

I finally finished the fourth of Caro's books about LBJ, "The Passage of Power", the largest part of which is about how LBJ dealt with the assassination. Over and over it shows how LBJ made sure that nobody, meaning world leaders, citizens, others in government, and, relevant here, those in the Kennedy administration, would feel lost and want to resign. Caro makes sure to note how difficult a task this is and how it required LBJ to act differently than he normally would, but also how important it is not to let things fall into disarray, which easily happens.

Side note: it's astounding to read how bills that weren't going to get through Congress under Kennedy were pushed through and made possible by Johnson. A quote ending one chapter, from Richard Russell, a complete southern segregationist and racist: "You know, we could have beaten John Kennedy on civil rights, but we can't beat Johnson." On the other side, Caro makes certain to show how the coming issues of Vietnam reveal the darker side of LBJ, so you don't get fully caught up in his consolidation of power and civil rights successes.

Maybe these are all cases of those who want power are usually those who shouldn't have it.


To save others the lookup- this is not talking about assassinations carried out by the administration abroad, but about the Kennedy assassination.


So Sam let the cat out of the bag (ChatGPT) behind the backs of "safety review" and the board. Probably why Google was caught flat-footed and how ChatGPT became the household name.

A dubious moral decision but an excellent business one. Perhaps the benefit of hindsight, knowing ChatGPT didn't cause immediate societal collapse, helps here.


ChatGPT was already out when the story picks up; the article is talking about concerns around GPT-4.

And the story isn't about that single incident of Altman dodging review and working behind the backs of the board—it's about a pattern of deception and toxic management practices that culminated in Altman lying to Murati about what the legal department had said, which lie was given to the board as part of a folio of evidence that he needed to be ousted.

You're trying to distill a pattern of toxicity and distrust into a single decision, which ameliorates it more than is fair.


Yeah to me the overt lying is more damning than any particular decision. If he owned the decision to bypass ethics review and release a model, fine, we can argue if that was prudent or not, but at least it's honest leadership. Lying that the counsel said it was ok when they hadn't is a whole other thing! When someone starts doing that repeatedly, and it keeps getting back to you that stuff they said was just outright false, you can't work with them at all imo.

If this is something he's been doing for years, it becomes clearer why Y Combinator fired him, though they have been kind of cagey about it.


The question then remains: if you have a lying, toxic, manipulative boss, who would want to work for them? Especially their direct reports.


From the story it sounds like the direct reports generally did not want to work with Altman, Brockman excluded. Even Murati was one of the primary instigators of the firing, but she changed her mind for reasons that the article doesn't really explore.


My interpretation is that she realised the blowback was going to be worse than reinstating Altman, but her grievances didn't really change, which is why she left to start Thinking Machines.


Money.


Aside from becoming the opposite of the values their name suggests, there are two main mistakes OpenAI made in my view: violate copyright when training, and rush to release the chatbot. Stealing original work is going to bite them legally (opening them to all sorts of lawsuits while killing their own ability to sue competitors piggy-backing off their model output, for example), and it is a special case of them being generally shortsighted: they passed on an opportunity to build a truly Apple- or Amazon-scale business by applying strategy and longer-term thinking. Even if someone else had released an LLM chatbot before them, they had the funds and the talent to build something higher level, properly licensed, and much more difficult to commoditise.

If this was the fault of Altman, it is understandable that certain people would want him out.


> violate copyright when training

If we could incrementally update our own brains by swapping cells for chips, what percentage of our brain has to be chips before us learning from a book is a violation of copyright?

When learning to recite a recent children's poem in kindergarten, what level of accuracy can a child attain before their ability to repeat it privately to one other person at a time is a copyright violation?


I don't think the concern is related specifically to training on computer chips with copyrighted content.

If you are going to use human brain cells to memorize protected content and sell it as a product, that's still an issue based on current copyright laws.


> If you are going to use human brain cells to memorize protected content and sell it as a product, that's still an issue based on current copyright laws.

And yet, that's what most billable hours at McKinsey, BCG, and KPMG are for. Those consultants memorized copyrighted stuff so your executives didn't have to.

It's very difficult to explain how GPT is not consulting.


The question there still comes down to what they did with the memorized content. There's nothing wrong with memorizing copyrighted content, there's legally a problem with trying to resell it without paying royalties under contract to the owner of the copyright.


The problem with LLMs is the techbro crowd trying to pretend they are like thinking humans (hence all these analogies with us memorizing things) when it comes to rights like access to information and copyright, but not thinking humans when it comes to using LLMs themselves.

You’d think any logical person would believe only one or the other is true, but big tech got many people believing this paradox because the industry depends on it. As soon as the paradox is over, the industry is revealed to be either based on IP theft or on slavery.

There is no problem to argue about, really: laws and basic rights and freedoms exist for humans; if %thing% (be it made of chips or brain cells) is not considered human then laws and rights apply to humans who operate it; if the thing is considered human then it itself has human rights to be reckoned with.


Once again...LLMs are not massive archives of data.

You would never want to use an LLM to archive your writings or documents. They are incredibly bad at this.


They were never designed to be archives, though; of course they're bad at something that not only wasn't a goal but is the opposite of a primary design factor.

LLMs are massive, lossy compressed datasets. They were designed to store the gist of effectively all digital language content humans ever created so an inference engine could use that data space to predict what a person might say to a prompt.

They were never designed to regurgitate exact copies of the original sources; just use your favorite zip algorithm for that.

The question would be how closely an LLM can regurgitate an answer before running into copyright issues, and how the original training dataset was obtained.


Want to abolish economic copyright altogether? I could get behind that. Making a legal exception because of some imagined future metaphysical property of this particular platform sounds like being fooled.


Why not abolish copyright only when it suits multi-billion corporations and leave it in place for us, ordinary people who end up providing training data so that we can be replaced at our jobs?


Abolishing economic copyright while protecting (non-transferrable) moral rights is in this direction, I believe.


This is one issue with Microsoft's Total Recall thing, right? I wonder how they're dealing with that.


Others replied to this and I am still not sure what your point is. Are you saying big tech should be able to get away with this because LLMs are just like us humans?


> If we could incrementally update our own brains by swapping cells for chips, what percentage of our brain has to be chips before us learning from a book is a violation of copyright?

The same percentage at which you stop qualifying to be human and become an unthinking tool, fully controlled by its operator to do whatever they want, without free will of its own and without any ethical concerns about abuse and slavery, like is the case with all LLMs.

(Of course, it is a moot point, because creating a human-level consciousness with chips is a thought experiment not grounded in reality.)

> When learning to recite a recent children's poem in kindergarten, what level of accuracy can a child attain before their ability to repeat it privately to one other person at a time is a copyright violation?

Any level thanks to the concept called human rights and freedoms, famously not applied to machines and other unthinking tools.


This seems short sighted. The idea of when a "mechanical man" should be given the same rights as a man has been explored for a long time, as an echo of the past when people had the same debate about women and non-Europeans.


The ideas of FTL travel or existence of bearded people in the sky were also explored for a long time.

> as an echo of the past when people had the same debate about women and non-Europeans.

If you need help spotting the difference between skin color or gender variation and an imagined ability to imbue something we cannot even define (like consciousness) onto arbitrary substrates, I am not the right person for that.


If you think LLMs are like conscious humans and should be given the same rights, I don’t see anything wrong with it. You should realize, however, that this means the LLM industry would instantly be dead because you cannot really subject conscious humans to what LLMs are subjected to in order to be profitable.


Do the copyright claims have any legs at all? IANAL, but I thought it was pretty settled that statistical compilations of copyrighted works (indexes, concordances, summaries, full-text search databases) were considered "facts" and not copies.

(This would be separate from the contributory infringement claim if the model will output a copyrighted work verbatim)


1. Google was, and in some developed countries still is, under fire merely for summarising search results too much, so I think yes, the claims have legs.

> This would be separate from the contributory infringement claim if the model will output a copyrighted work verbatim

2. Commercial for-profit models were shown to do that, and (other legal arguments aside, such as model and/or its output being a derivative work, etc.) in some cases that was precisely the smoking gun for the lawsuit, if I recall correctly.

I have not seen any conclusive outcome, I suppose it will depend on jurisdiction.


I can guarantee you if one jurisdiction limits AI training via copyright law, another one won’t, and it will have a huge competitive advantage as a result. That competitive advantage alone means you either have to leave the race or change your laws.


> I can guarantee you if one jurisdiction limits AI training via copyright law, another one won’t, and it will have a huge competitive advantage as a result.

What is the competitive advantage of using chatbots, exactly?

Any advantage from ML that I can think of comes from advanced applications that require pre-existing knowledge, like biotech, metamaterials, etc.

— I fail to see how being able to use ML to cheat on a test or to avoid learning something (including command of own language) provides an advantage.

— I can see how requiring an LLM trained on unlicensed work to be opened up and free for everyone to use could provide an advantage.

— I can see how lack of regulation in this industry could result in a quick reduction of the number of jobs in a number of industries and therefore be a disadvantage to the economy, in the worst case benefiting a small number of for-profit corporate operators.


I was with you until “immediate societal collapse”, what?


Obviously safety is not a problem if the lack of it doesn't cause an immediate end to civilization.

/s


So, why didn't the board tell the other executives (and employees) what Murati had told them? When it was them in the firing line, why didn't Ilya tell that story? They could have just fired Murati (based on the screenshots presented) and continued as before. Or what am I missing?


Yeah, I don't understand this either. Make a case or don't, but keeping it incredibly vague wasn't going to work, especially when so much money was on the line due to the secondary share sale.


So Sam was getting paid, possibly in egregious amounts, while lying to Congress?


VC huckster lies to the public, news at 11.


Why has safety taken such a back seat? Were the fears overblown back in 2022 or have model providers gotten better at fine tuning the worst away?


TLDR:

In November 2023, OpenAI CEO Sam Altman was suddenly fired by the board—not because of AI safety fears or Effective Altruism, but due to concerns over his leadership, secrecy, and possibly misleading behavior. CTO Mira Murati and chief scientist Ilya Sutskever shared evidence of Altman’s actions, like skipping safety protocols and secretly controlling OpenAI’s startup fund.

The board didn’t explain the firing well, and it backfired. Murati, who at first supported the board, turned on them when they wouldn’t give clear reasons. Nearly all OpenAI employees, including Murati and Sutskever, threatened to quit unless Altman came back. With the company on the brink of chaos, the board caved and he was reinstated days later.



From the outside it really seems like Peter Thiel was a brilliant kid who read Lord of the Rings and became obsessed with becoming the real world Sauron, manipulating weak-minded men in Silicon Valley into following the path of soulless corruption.


Did he read the end?


Skill issue. Easily overcome.


Not closely guarding the only means to destroy his source of power was such an obvious plot hole and oversight. ;)


You can innovate that part away /s


Text-only, works where archive.is is blocked:

https://assets.msn.com/content/view/v2/Detail/en-in/AA1BRU7s


For anybody who followed the saga at the time, there's nothing revelatory here, despite what the title implies, but the essay (an excerpt from a book) is a useful summary if you wanted one.



One of the strengths of the Chinese companies is they are more aligned on their goals as team members: make cutting edge LLMs and sell access. When you have all these competing interests you end up with internal strife. This faction wants to prevent the literal end of humanity, this faction wants to make the world a better place, this faction wants to curry favor with Washington apparatchiks. Nobody is really that interested in making money by inventing new algorithms. The result is frankly embarrassing drama for the whole world to indulge in.

“One Company for All People” is a great weakness of American companies and is contributing to this economic downturn, and not just in tech. Corporate Universalism needs to go the way of history.


Chinese companies have one massive advantage in aggregate: they know that from 2028 onwards they will be competing for a captive domestic market of >1.3B people. The CCP have declared as their industrial [service] policy that by the end of 2027, all Chinese companies must be using services exclusively from Chinese suppliers. The target ratio of domestic/foreign services is being ramped up year over year, so that by 2028 the base expectation is everyone to have 100% Chinese suppliers only.

From then on, every exception must be justified to, and approved by, their respective politburo.

An obvious second-order effect is that there has been an explosion of Chinese B2B companies eager to get themselves established in the market. They know that in just a few years they can still sell their services outside China, but can expect very limited competition from non-Chinese companies. And inside the country, they have a population of ~4x of US to compete for.


Chinese strength is that they have a manufacturing economy and an oversupply of everything.

They have a great incentive to accelerate AI science, as it will lead to disillusionment with, if not replacement of, the knowledge economy, in other words the US economy. I believe this to be the hidden motive, and it's not about profit.


That, and state sponsored hacking groups and corporate moles sending everything they can back home.


This is old info. China is more than a manufacturing economy nowadays.

It’s quickly surpassing the US.


I'm sure other priorities at those Chinese companies are also in conflict with making money.


Indeed, making money is obviously a much better goal than embarrassing things like preventing the end of humanity or making the world a better place.


What evidence are you basing this in?

The way I see it, everything points to the opposite being true: US companies, by and large, are completely dominant in technology. Google seems to be winning the race and is exactly the "one company for all people" kind of place you're talking about. Academic studies have generally shown that diverse teams outperform monocultures.


Meh, I personally don't care that much about OpenAI drama anymore. By now it is clear that they do not hold any edge and that they won't be able to establish an AI monopoly, and that's all I ever cared about.


That’s not what their revenue says, though, right?


We lose a dollar on every sale, but we'll make it up on volume!


It’s good for the community to move beyond drama and build


Literally nothing gives me any trust in using and adopting tech from a sociopath like Altman.


Sam Altman is immune to consequences


That tends to happen if you’re rich.


This is definitely the new US. Boy have my illusions been shattered.

And then to see people here defending this new state of affairs. It's a different world we are living in now.


To be honest I think this has been the case in every country for all time.

What’s changed is that certain individuals don’t even try to hide privilege any more.



Some of the early comments here almost read as astroturfing from friends of 'sama. This article provides interesting context to one of the most consequential SV events of the decade.


This is fiction.

Edit: Rejoice in your ignorance, then. Many of the things mentioned in the article are provably false. The single most important detail about why they couldn't touch @sama is not even remotely addressed. But flies love shit, so there's that.


if you say something's provably false but you don't prove or even allude to a reason why someone should believe you, um, it's not very convincing


You're 100% right on that.

But this should be enough for you,

"This account is based on interviews with dozens of people who lived through one of the wildest business stories of all time"


That sounds like... they did extensive research and standard journalistic double-checking so it's probably correct?

The WSJ isn't some random blog. Regardless of what you think of their opinion pages, their reporting is generally factually legit.

So you seem to be suggesting that their story is, indeed, correct.


Good, then I also did "extensive research and standard journalistic double-checking" or whatever.


Fantastic! You interviewed dozens of people directly involved? And corroborated everything independently at least once?

Then please share your own version of events, we'd all love to read it! If you spent weeks doing all that research, I'm sure it was so others could learn?


?? that quote makes me think it's accurate, not inaccurate


Can you explain your pov in detail? I’m interested.


Sure, send me an email and I'll be glad to.

I wouldn't feel comfortable writing about it, it's not illegal but it just doesn't feel right.


Why wouldn't it feel right? What an odd thing to say.

From HN comment guidelines:

> The most important principle on HN, though, is to make thoughtful comments. Thoughtful in both senses: civil and substantial.

Your claim that this is fiction, without any evidence or "proof", as you put it, is neither substantial nor thoughtful.

I am genuinely curious as to why you would make a splash in the comment section with this and then not explain yourself.


You're willing to make provocative claims here in public, but only back them up privately?

Then please don't make provocative claims in public in the first place. It's just trolling then. As the saying goes, put up or shut up.


[flagged]


Who are you? And yes, there is something wrong. You posted simply:

> This is fiction.

This is against HN guidelines:

> Please don't post shallow dismissals... A good critical comment teaches us something.

Your comment is a perfect example of a shallow dismissal that teaches us nothing. dang doesn't need to add any rules -- you on the other hand need to review the guidelines. "Enjoy!" reading them.


I think your reply is more against the rules, because you're starting conflict. What if he really has good info?



