
>write like a late night kitchen gizmo ad?

I naturally wrote "it's not just X, it's Y" long before ChatGPT's November 2022 release. Probably because I picked up on it from many people.

It's a common rhetorical template with a parallel form, where the "X" re-states the obvious surface-level thing and the "Y" adds something that's not as obvious.

E.g. examples of regular people using that rhetorical device on HN for 15+ years, outside the context of advertising gadgets:

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

So AI-slop writes like that because a lot of us humans wrote like that and it copies the style. Today is the first time I've learned that the "It's not X, it's Y" pattern really irritates many readers. Personally, I've always found it helpful when it reveals a "Y" that's non-obvious.


1) None of those had an Em dash

2) Most of those, while they contained the two statements, did not put the statements in immediate succession.

There are maybe 4 unique examples in the search over the past 15 years, which is why the explosion of the pattern seen today is very telling, and that is most likely due to LLMs.


>1) None of those had an Em dash

I was responding in particular to the "you write like a late night kitchen gizmo ad?" fragment ... which would be a speech pattern people hear. In the audio case, it doesn't matter what punctuation symbol separates the "it isn't/it's" pattern because the comma or em dash would be inaudible.

E.g. a blog author also complains about "it isn't/it's" separated by a comma here: https://saigaddam.medium.com/it-isnt-just-x-it-s-y-54cb403d6...

>There are maybe 4 unique examples in the search over the past 15 years,

No, (1) the Algolia search engine HN uses is not exhaustive and always returns incomplete results, and (2) I couldn't construct a regex to capture all occurrences. It didn't capture the dozens of times I used it before 2022.

More pre-2022 examples that match the "it isn't/it's" pattern that the blog author is complaining about :

2012 It Isn't Just Buzz, It's About Relationships : https://news.ycombinator.com/item?id=4567529

2019 It Isn’t the Kids. It’s the Cost of Raising Them : https://news.ycombinator.com/item?id=19468214

2021 It isn't Facebook, it's us : https://news.ycombinator.com/item?id=29402146

2017 It isn't trig, or geometry, or analysis: it's maths : https://news.ycombinator.com/item?id=14310575

The same gp mentioned that it's also common in "ad copy". That's also true of the Navy's famous slogan "It's not just a job. It's an adventure." E.g. this 1981 tv commercial: https://www.youtube.com/watch?v=Tc9g2tagYms

That's a slogan people heard rather than read with an em dash. LLM engines picked up on a common phrasing used for decades.


I understand that there are multiple people in this conversation, but you are attempting to pick and choose points to discuss at the expense of your own internal consistency. If you were responding to "which would be a speech pattern people hear," why did you only quote written examples from the HN search and not provide video or audio clips?

>why did you only quote written examples from the HN search and not provide video or audio clips?

At the risk of stating the obvious, highlighting the HN _texts_ demonstrates in a very literal way the "write like" fragment in gp's question, "You write like a late night kitchen gizmo ad?". The other fragment was the "late night kitchen gizmo ad", which is the audio comparison. The gp was making that comparison between the writing style and the speech style when asking the question. (https://news.ycombinator.com/item?id=46165248)

Providing audio links would not show the "writes like". The gp (and you) already know what the "It isn't/It's" audio pattern sounds like. It's the written text the gp was wondering about.

The point is people really did write text like that (no em dashes required) before ChatGPT existed.

EDIT reply to: >He just said that it is traditionally associated with late-night ads, and that the explosion in use of the phrase (especially with the em-dash)

Actually, the gp (0_____0) I was responding to didn't mention the em dash in either of the 2 comments. Gp used a comma instead of em dash. Gp only mentioned the comparison to ad copy. The em dash wasn't relevant in the subthread we're in. That's something extra you brought up that's not related to gp's specific question.

https://news.ycombinator.com/item?id=46165237

EDIT reply to: >Quick HN tip: It is usually better to reply to a post instead of editing the original post.

I agree but the "reply" option was not available. This is a "cool down" mechanism HN uses to discourage flame wars. I don't know if it's 30 minutes or what the value is before the reply link shows up. It was just easier to reply in my post rather than wait an indeterminate time.

>This statement is incorrect, as the original post mentioned, "'it's not just x — it's y' format is the hallmark

Yes, but that's not the ggp (ceroxylon) I was responding to. Instead, I was responding to gp (0_____0)'s question and the 2 times the writing was compared to ad copy with no mention of em dashes. Sorry for not making that clear.

>Showing fewer than a dozen uses of the phrase

Again, there are thousands of examples but the Algolia search engine will not show all of them.


Quick HN tip: It is usually better to reply to a post instead of editing the original post.

>Actually, the GP (0_____0) I was responding to didn't mention the em dash in either of the two comments. GP used a comma instead of an em dash. GP only mentioned the comparison to ad copy. The em dash wasn't relevant in the subthread we're in. That's something you brought up.

This statement is incorrect, as the original post mentioned, "'it's not just x — it's y' format is the hallmark of mediocre articles being written/edited by AI." (note the quotes in the first post), and the next post said, "It's simply how literate people write."

All of this is beside the point, however, because your statement, "The point is people really did write text like that (no em dashes required) before ChatGPT existed," was never contended in this thread, and I do not think anyone has ever thought that ChatGPT created that phrase, so it just doesn't add to the discussion. Showing fewer than a dozen uses of the phrase (with or without the em dash) in a 15-year period just further proves that it was not a common written turn of phrase before ChatGPT.


>The point is people really did write text like that (no em dashes required) before ChatGPT existed.

OK, I think I can see your point, but at best it is irrelevant. At no point did the original poster imply that ChatGPT created the phrase, or that it wasn’t in spoken or written language before then. He just said that it is traditionally associated with late-night ads, and that the explosion in use of the phrase (especially with the em-dash) is most likely attributed to increased LLM use.


You're wasting your time on that person.

>Everyone likes a service when it’s subsidized by VC dollars.

Netflix went public in 2002. It was 8 years later, in 2010, that the streaming-only service was launched. The digital streaming wasn't "subsidized by VC".

Netflix had more content from everybody back then because the other studios licensed their content for cheap prices to Netflix. But those studios then realized that Netflix was growing rapidly on the backs of their content. Once those multi-year contracts expired, studios like Disney didn't renew with Netflix and instead, started their own platform (e.g. Disney+).


You're not wrong, but that doesn't mean they weren't still in "growth" phase.

Their pricing and their doubling down on account-sharing policies over the last few years have shown that they are no longer in a growth phase.

I cancelled my Netflix account a few months ago because I had gotten the "You're not accessing this from your typical location" blocker. Even though I was trying to watch from my permanent residence and I was the account owner / payee.

The reason that happened was that my wife and I own two properties. We are happily married, not separated, but we just like our space... especially with two adult daughters who still live at home with one of their significant others also living in the house.

We are a single family "unit" but have two locations. Furthermore, my wife has sleeping issues and was using Netflix at night in order to fall asleep. Having to get me to check my email for an access code was a total deal breaker, since I would be fast asleep. So that cut her off from her typical usage of Netflix.

And the reason Netflix thought that I was accessing the service from a different location was that I hardly ever watched it. Every time I'd pull it up, I would spend more time scrolling for something to watch than actually watching anything... and typically I'd just give up and go watch a 30m YouTube video instead.

So I was paying more, receiving less ... mostly had the account purely for my wife and daughters who watched it the most ... and then the final deal breaker was logistical barriers preventing me from being able to use what I'm paying for.

Fuck Netflix.


Agree, but I think they moved away from growth to this not because they lost investor money / faced VC demands, but because they started losing a lot of licensing deals and content, and had to shift from redistribution to making more and more originals, with the capital investment costs that entails.

Slightly different reasons for enshittification - if Spotify lost half of their catalogue suddenly they might move in the same way I guess.


These content library contracts are only for a couple of years, and each time one lapses, some terms get negotiated. Nobody in the streaming industry is successful because they have a long term lock on someone else’s content. It’s all about eyeballs and margins.

Netflix had a 4-year deal with Starz, which is where a significant chunk of their early streaming content came from (including all the Disney films).

Sure, that was very early though. You could argue that was crucial for establishing their brand, but the industry has caught up and doesn't do that very much now.

>I'm not messaging all the other sellers and suggesting we all raise prices by 10%,

The way competitors legally message each other to suggest a price increase is via the prices themselves.

E.g. an airline wants to raise the price of a ticket from New York to Los Angeles from $500 to $530 -- and they secretly want the other airlines to follow them and raise their prices too.

1) The airline submits the price increase to the global travel reservation system that all airlines can see. All the other airlines have computers constantly monitoring all the other airlines' ticket prices and can instantly adjust prices in response.

2) The airline that wants the price increase waits to see how the other airlines respond. Either (1) the competitor airlines keep their lower prices to "take market share" -- or -- (2) they also raise their prices to match, which "maintains the status quo of market share" but lets all competitors take advantage of charging the higher price.

3) If the other airlines don't match the higher price, the airline that "proposed" the higher price then rolls it back to $500. All this can happen within a few hours.

That's the way competitors "collude" to raise prices out in the open. The publicly visible prices are the messaging system. The loophole here is that the changing prices must be visible because the potential passengers buying the tickets need to see them too.

The above scenario has been studied by various papers and the government. The prices simultaneously act as both a "cost to buy" and as a "message to cooperate".

Legal "collusion" via price signals is easier in concentrated industries with few competitors (e.g. airlines). It's harder for fragmented markets or markets with hundreds-to-thousands of competitors. E.g. a barbershop wanting to raise the price of haircuts by $5 isn't going to get the hundred other barbershops to also raise their prices by $5.


>The above scenario has been studied by various papers and the government. The prices simultaneously act as both a "cost to buy" and as a "message to cooperate".

Yea I mean. A simple watch of movie film "A Beautiful Mind" starring John Nash as math genius russel crowe. Crowe equilibrium or whatever it's called. That scene where the nerds were in the bar trying to get the girl. his friends said let the best man win and crowe said - no - only way to win is we collude. and then they won. Now imagine that -- but it's not russel crowe, it's united airlines.

I mean if you look at companies from that crowe equilibrium perspective and treat them as sophisticated and rational... one would expect most everything to be rigged!

https://www.youtube.com/watch?v=LJS7Igvk6ZM


>, UTF-8 predates Windows NT.

Windows NT started development in 1988, and the public beta was released in July 1992, which was before Ken Thompson devised UTF-8 on a napkin in September 1992. Rob Pike gave a UTF-8 presentation at USENIX in January 1993.

Windows NT's general release was July 1993, so it's not realistic to replace all UCS-2 code with UTF-8 after January 1993 and have it ready in less than 6 months. Even Linux didn't have UTF-8 support in July 1993.


> public beta

Which, let's not forget, also meant an external ecosystem already developing software for it


>I'll never really understand how they ruined the opportunity presented,

Money. It's easier to understand it if you realize each studio is trying to maximize its own revenue.

Consider the common advice given to content creators and startups : "You don't want to be a sharecropper on somebody else's platform."

Well, the other studios like Disney, HBO-WarnerBros, Paramount, etc are just taking that same advice by not being beholden to Netflix's platform.

E.g. instead of Disney simply licensing all of their catalog to Netflix and getting a fraction of Netflix's $17.99 subscription revenue, Disney would rather create their own platform and keep 100% of their own $19.99 revenue. In addition, the Disney+ subscribers are Disney's customers instead of Netflix's.

Everybody avoiding the "sharecropping" model inevitably leads to fragmentation of content. Everybody pursuing their self-interested revenue maximization leads to not sharecropping on Netflix's platform because Netflix (i.e. the Netflix subscribers) won't pay the equivalent higher prices that Disney thinks they can get on their own.

To create a truly unified video streaming service with everything for one cheap monthly price means multiple studios have to willingly give up revenue. Most customers are not willing to pay Netflix a hypothetical $150+ per month such that all studios like Disney think it's a waste of money to maintain their own exclusive digital streaming service and would be happy with the fractional revenue share from Netflix.


gp was using a more restrictive definition of "native controls". I.e. "o/s builtin UI controls" vs "framework canvas painted elements".

For Windows, "native" would be the classic Win32 UI "common control" elements from "Comctl32.dll"[0] that is directly used by older GUI frameworks such as Windows Forms. Those map to classic Win32 API CreateWindow(L"BUTTON", ...). In contrast, the newer frameworks of WPF and Xamarin Forms and Qt Quick "paints controls on a canvas" which are not "native" and makes every app UI look different instead of standardized "look & feel" of common controls.

But others include custom-canvas painting UI objects as "native" -- as long as it's not Electron.

[0] https://learn.microsoft.com/en-us/windows/win32/controls/com...


It's worth noting that some cross-platform toolkits are non-native in the strict sense, but mimic each platform's native controls.

This is harder to get right than one might think; small differences in text rendering look very much alien to me, and user input handling that isn't exactly the same as the platform's native conventions will make me stumble every time I perform common operations.

In my experience, Qt does an excellent job with this. It's not technically native (except on KDE and other Qt-based desktops), but it looks and feels right, or so close that I find it comfortable and well integrated with the rest of each platform I've tried. I haven't found any other cross-platform toolkit to match Qt in this area, so that's what I use for now.

Some day, I hope we'll see an alternative that accomplishes this at least as well as Qt, while being more flexible to license, easier to bind to other languages, and better at memory safety. (It's written in C++.) There seems to be renewed interest in GUI toolkit development lately, perhaps fueled by the excitement for newer languages like Zig and Rust, so perhaps I'll get my wish.


I've never used Qt Quick so I learned something new. It's like Flutter then.

React Native and .NET MAUI/Xamarin.Forms use real native UI widgets - WinUI on Windows and AppKit on macOS.


Graydon Hoare, a former C++ programmer on Mozilla Firefox and the original creator of Rust, acknowledges that for many people, Rust has become a viable alternative to C++ :

https://graydon2.dreamwidth.org/307291.html

And on slide #4, he mentions that "C++ is well past expiration date" :

https://venge.net/graydon/talks/intro-talk-2.pdf

It's possible that Graydon's earliest private versions of Rust in the 4 years prior to that pdf were an OCaml-inspired language, but it's clear that once the team of C++ programmers at Mozilla started adding their influences, they wanted it to be a cleaner version of C++. That's also how the rest of the industry views it.


> Rust has become a viable alternative to C++

Alternative yes, derivative no. Rust doesn't approach C++'s metaprogramming features, and it probably shouldn't given how it seems to be used. It's slightly self-serving for browser devs to claim Rust solves all relevant problems in their domain and therefore eclipses C++, but to me in the scientific and financial space it's a better C, making tradeoffs I don't see as particularly relevant.

I say this as a past contributor to the Rust std lib.


So?

Zig, D, and C are also alternatives to C++. It’s a class of languages that have zero cost abstractions.

Rust is NOT a beautiful language hiding inside of C++. It is not an evolution of C++. I’m pointing out that what you said is objectively wrong.

Can rust replace C++ as a programming language that has a fast performance profile due to zero cost abstractions? Yes. In the same way that Haskell can replace Python, yes it can.


Whenever someone argues the uselessness or redundancy of a particular word, a helpful framework to understand their perspective is "Lumpers vs Splitters" : https://en.wikipedia.org/wiki/Lumpers_and_splitters

An extreme caricature example of a "lumper" would just use the word "computer" to label all Turing Complete devices with logic gates. In that mindset, having a bunch of different words like "mainframe", "pc", "smartphone", "game console", "FPGA", etc. is redundant because they're all "computers", which makes the various other words pointless.

On the other hand, the Splitters focus on the differences and I previously commented why "transpiler" keeps being used even though it's "redundant" for the Lumpers : https://news.ycombinator.com/item?id=28602355

We're all Lumpers vs Splitters to different degrees for different topics. A casual music listener who thinks of orchestral music as background sounds for the elevator would "lump" both Mozart and Bach together as "classical music". But an enthusiast would get irritated and argue "Bach is not classical music, it's Baroque music. Mozart is classical music."

The latest example of this I saw was someone complaining about the word "embedding" used in LLMs. They were asking ... if an embedding is a vector, why didn't they just re-use the word "vector"?!? Why is there an extra different word?!? Lumpers-vs-splitters.


"Compiler" encompassing "transpiler" I think is wrong anyway. There's a third term that doesn't seem to get nearly as much pushback, that didn't come up in your link, has yet to be mentioned here, and isn't in the article, but adds context for these two: decompiler.

Compiling is high-level to low-level (source code to runnable, you rarely look at the output).

Decompiling is low-level to high-level (runnable to source code, you do it to get and use the output).

Transpiling is between two languages of roughly the same level (source code to source code, you do it to get and use the output).

Certainly there's some wishy-washy-ness due to how languages relate to each other, but none of these terms really acts like a superset of the others.


I like your definitions, but all three of these could be called subsets of compilers.


By the definitions given they cannot, as no function subsumes another. By whatever you define as "compiler", maybe, but I see no point in this kind of interaction that essentially boils down to subsumption to an entity you refuse to describe any further.

Is there a merit to this? Can whatever you call compiler do more? Is it all three of the things mentioned combined? Who knows - as it stands I only know that you disagree with the definitions given/proposed.


I think they are fine definitions. I think a transpiler, a term rewriter, an assembler, a stand-alone optimizer, and even some pretty printers are subclasses of compilers.

I define a compiler as something that takes an input in a language, does transformations, and produces a transformed output in a language. All of them do that, and they are more specific terms for types of compilers.


Except that they do what useful words do: provide (more) useful information.


Fair. I don’t believe I said they were useless terms for differentiation of types of compilers, though. I just said they can all be thought of, as a class, as different types of compilers.


So what? A car, a bike, and a truck can all be called subsets of vehicles, but we still have (and need) different words for each type.


Primates, canids, felids, and ungulates are all subsets of mammals and all have further subsets. Mammalia is further a subset of animalia. When we’re discussing categorizations, it’s often helpful to have multiple levels of categories. I’m not sure why you seem to be calling out specificity as a gotcha, when my argument isn’t at all that we don’t need multiple terms. It’s that we should consider these things in terms of similarity and specific differences, not throw away a term as useless as the article and its headline suggest.


It's all about context, isn't it? "Humans vs. animals" is an important distinction to make in some contexts and useless in others. Insisting on the fact that humans are also animals if we're talking about, say, "language in humans vs. animals" is unproductive. It just makes discussions harder by forcing everyone to add "_non-human_ animals" to every mention. But if we're talking about, say, cellular biology, it's unproductive to force everyone to write "human and animal cells" instead of just "animal cells".

Similarly, distinguishing between transpilers and compilers might be important in some contexts and useless in others. Transpilers are source-to-source compilers, a subset of compilers. Whether it matters depends on the context.


I think the argument here is not really where one should draw the line and whether transpiler should be a different word...

I think the argument centers on how transpilers are often justified as being something quite different in difficulty than writing a whole compiler -- and in practice, nearly the whole set of problems of writing a compiler show up.

So, it's more like, don't use the distinction to lie to yourself.


I'm not convinced your L/S dichotomy applies. The concern there is that the natural world (or some objective target domain) has natural joints, and the job of the scientist (, philosopher, et al.) is to uncover those joints. You want to keep 'hair splitting' until the finest bones of reality are clear, then group hairs up into lumps, so their joints and connections are clear. The debate is whether the present categorisation objectively under/over-generates, and whether there is a fact of the matter. If it over-includes, then real structure is missing.

In the case of embeddings vs. vectors, classical vs. baroque, transpiler vs. compiler -- I think the apparent 'lumper' is just a person ignorant of the classification scheme offered, or at least ignorant of what property it purports to capture.

In each case there is a real objective distinction beneath the broader category that one offers in reply, and that settles the matter. There is no debate: a transpiler is a specific kind of compiler; an embedding vector is a specific kind of vector; and so on.

There is nothing at stake here as far as whether the categorisation is tracking objective structure. There is only ignorance on the part of the lumper: the ignorant will, of course, always adopt more general categories ("thing" in the most zero-knowledge case).

A real splitter/lumper debate would be something like: how do we classify all possible programs which have programs as their input and output? Then a brainstorm which does not include present joint-carving terms, eg., transformers = whole class, transformer-sourcers = whole class on source code, ...


> I think the apparent 'lumper' is just a person ignorant of the classification scheme offered, or at least ignorant of what property it purports to capture.

>In each case there is a real objective distinction

No, Lumper-vs-Splitter doesn't simply boil down to plain ignorance. The L/S debate in the most sophisticated sense involves participants who actually know the proposed classifications but _choose_ to discount them.

Here's another old example of a "transpiler" disagreement subthread where all 4 commenters actually know the distinctions of what that word is trying to capture but 3-out-of-4 still think that extra word is unnecessary: https://news.ycombinator.com/item?id=15160415

Lumping-vs-Splitting is more about emphasis vs de-emphasis via the UI of language. I.e. "I do actually see the extra distinctions you're making but I don't elevate that difference to require a separate word/category."

The _choice_ by different users of language to encode the difference into another distinct word is subjective not objective.

Another example could be the term "social media". There's the seemingly weekly thread where somebody proclaims, "I quit all social media" and then there's the reply of "Do you consider HN to be social media?". Both the "yes" and "no" sides already know and can enumerate how Facebook works differently than HN so "ignorance of differences" of each website is not the root of the L/S. It's subjective for the particular person to lump in HN with "social media" because the differences don't matter. Likewise, it's subjective for another person to split HN as separate from social media because the differences do matter.


> Here's another old example of a "transpiler" disagreement subthread where all 4 commenters actually know the distinctions of what that word is trying to capture but 3-out-of-4 still think that extra word is unnecessary

Ha. I see this same thing play out often where someone is arguing that “X is confusing” for some X, and their argument consists of explaining all relevant concepts accurately and clearly, thus demonstrating that they are not confused.


I agree there can be such debates; that's kinda my point.

I'm just saying, often there is no real debate; it's just one side is ignorant of the distinctions being made.

Any debate in which one side makes distinctions and the other is ignorant of them will be an apparent L vs. S case -- to show "it's a real one" requires showing that answering the apparent L's question doesn't "settle the matter".

In the vast majority of such debates you can just say, eg., "transpilers are compilers that maintain the language level across input/output langs; and sometimes that's useful to note -- eg., that typescript has a js target." -- if such a response answers the question, then it was a genuine question, not a debate position.

I think in the cases you list most people offering L-apparent questions are asking a sincerely learning question: why (because I don't know) are you making such a distinction? That might be delivered with some frustration at their misperception of "wasted cognitive effort" in such distinction-making -- but it isn't a technical position on the quality of one's classification scheme.


> it's just one side is ignorant of the distinctions being made.

> No, Lumper-vs-Splitter doesn't simply boil down to plain ignorance.

If I can boil it down to my own interpretation: when this argument occurs, both sides usually know exactly what each other are talking about, but one side is demanding that the distinction being drawn should not be important, while the other side is saying that it is important to them.

To me, it's "Lumpers" demanding that everyone share their value system, and "Splitters" saying that if you remove this terminology, you will make it more difficult to talk about the things that I want to talk about. My judgement about it all is that "Lumpers" are usually intentionally trying to make it more difficult to talk about things that they don't like or want to suppress, but pretending that they aren't as a rhetorical deceit.

All terminology that makes a useful distinction is helpful. Any distinction that people use is useful. "Lumpers" are demanding that people not find a particular distinction useful.

Your "apparent L's" are almost always feigning misunderstanding. It's the "why do you care?" argument, which is almost always coming from somebody who really, really cares and has had this same pretend argument with everybody who uses the word they don't like.


I mean, I agree. I think most L's are either engaged in a rhetorical performance of the kind you describe, or they're averse to cognitive effort, or ignorant in the literal sense.

There are a small number of highly technical cases where an L vs S debate makes sense, biological categorisation being one of them. But mostly, it's an illusion of disagreement.

Of course, the pathological-S case is a person inviting distinctions which are contextually inappropriate ("this isn't just an embedding vector, it's a 1580-dim EV!"). So there can be S-type pathologies, but I think those are rarer and mostly people roll their eyes rather than mistake it as an actual "position".


> I'm not convinced your L/S dichotomy applies.

Proceeds to urm actually split.


All ontologies are false. But some are useful.


All ontologies people claim to be ontologies are false in toto

All "ontologies" are false.

There is, to disquote, one ontology which is true -- and the game is to find it. The reason getting close to that one is useful, the explanation of utility, is its singular truth


To be a lumper for a second, all models are flawed. But some are useful.


Ahh, so you're a meta-splitter.

https://xkcd.com/2518/


>An extreme caricature example of a "lumper" would just use the word "computer" to label all Turing Complete devices with logic gates.

"Computer"? You mean object, right?


You mean point of local maximum in the mass field?


> An extreme caricature example of a "lumper" would just use the word "computer" to label all Turing Complete devices with logic gates.

I don't think that's a caricature at all; I've often seen people argue that it should include things like Vannevar Bush's differential analyzer, basically because historically it did, even though such devices are neither Turing-complete nor contain logic gates.


'computer' is an ambiguous word. In a mathematical sense a computational process is just any process which can be described as a function from the naturals to the naturals. Ie., any discrete function. This includes a vast array of processes.

A programmable computer is a physical device which has input states which can be deterministicaly set, and reliably produce output states.

A digital computer is one whose state transition is discrete. An analogue computer has continuous state transition -- but still, necessarily, discrete states (by def of computer).

An electronic digital programmable computer is an electric computer whose voltage transitions count as states discretely (ie., 0/1 V cutoffs, etc.); it's programmable because we can set those states causally and deterministically; and its output state arises causally and deterministically from its input state.

In any given context these 'hidden adjectives' will be inlined. The 'inlining' of these adjectives causes an apparent gatekeepery Lumper/Splitter debate -- but it isn't a real one. It's just ignorance about the objective structure of the domain, and so a mistaken understanding about what adjectives/properties are being inlined.


In fact ‘computer’ used to be a job description: a person who computes.


Yes, definitely. And "nice" used to mean "insignificant". But they don't have those meanings now.


Most functions from the naturals to naturals are uncomputable, which I would think calls into question your first definition.

It's unfortunate that "computer" is the word we ended up with for these things.


Ah well, that's true -- so we can be more specific: discrete, discrete computable, and so on.

But to the overall point, this kind of reply is exactly why I don't think this is a case of L vs. S -- your reply just forces a concession to my definition, because I am just wrong about the property I was purporting to capture.

With all the right joint-carving properties to hand, there is a very clear matrix and hierarchy of definitions:

abstract mathematical hierarchy vs. physical hierarchy

With the physical serving as implementations of partial elements of the mathematical.


Word definitions are arbitrary social constructs, so they can't really be correct or incorrect, just popular or unpopular. Your suggested definitions do not reflect current popular usage of the word "computer" anywhere I'm familiar with, which is roughly "Turing-complete digital device that isn't a cellphone, tablet, video game console, or pocket calculator". This is a definition with major ontological problems, including things such as automotive engine control units, UNIVAC 1, the Cray-1, a Commodore PET, and my laptop, which have nothing in common that they don't also share with my cellphone or an Xbox. Nevertheless, that seems to be the common usage.


> Word definitions are arbitrary social constructs, so they can't really be correct or incorrect, just popular or unpopular.

If you mean that classifications are a matter of convention and utility, then that can be the case, but it isn’t always and can’t be entirely. Classifications of utility presuppose objective features and thus the possibility of classification. How else could something be said to be useful?

Where paradigmatic artifacts are concerned, we are dealing with classifications that join human use with objective features. A computer understood as a physical device used for the purpose of computing presupposes a human use of that physical thing “computer-wise”; that is to say, objectively, no physical device per se is a computer, because nothing inherent in the thing is computing (what Searle called “observer relative”). But the physical machine is objectively something, which is to say ultimately a collection of physical elements of certain kinds operating on one another in a manner that affords a computational use.

We may compare paradigmatic artifacts with natural kinds, which do have an objective identity. For instance, human beings may be classified according to an ontological genus and an ontological specific difference such as “rational animal“.

Now, we may dispute certain definitions, but the point is that if reality is intelligible–something presupposed by science and by our discussion here at the risk of otherwise falling into incoherence–that means concepts reflect reality, and since concepts are general, we already have the basis for classification.


No, I don't mean that classifications are a matter of convention and utility, just word definitions. I think that some classifications can be better or worse, precisely because concepts can reflect reality well or poorly. That's why I said that the currently popular definition of "computer" has ontological problems.

I'm not sure that your definition helps capture what people mean by "computer" or helps us approach a more ontologically coherent definition either. If, by words like "computing" and "computation", you mean things like "what computers do", it's almost entirely circular, except for your introduction of observer-relativity. (Which is an interesting question of its own—perhaps the turbulence at the base of Niagara Falls this morning could be correctly interpreted as finding a proof of the Riemann Hypothesis, if we knew what features to pay attention to.)

But, if you mean things like "numerical calculation", most of the time that people are using computers, they are not using them for numerical calculation or anything similar; they are using them to store, retrieve, transmit, and search data, and if anything the programmers think of as numerical is happening at all, it's entirely subordinate to that higher purpose, things like array indexing. (Which is again observer-relative—you can think of array indexing as integer arithmetic mod 2⁶⁴, but you can also model it purely in terms of propositional logic.)

And I think that's one of the biggest pitfalls in the "computer" terminology: it puts the focus on relatively minor applications like accounting, 3-D rendering, and LLM inference, rather than on either the machine's Protean or universal nature or the purposes to which it is normally put. (This is a separate pitfall from random and arbitrary exclusions like cellphones and game consoles.)


> That's why I said that the currently popular definition of "computer" has ontological problems.

Indeed. To elaborate a bit more on this...

Whether a definition is good or bad is at least partly determined by its purpose. Good as what kind of definition?

If the purpose is theoretical, then the common notion of "computer" suffers from epistemic inadequacy. (I'm not sure the common notion rises above mere association and family resemblance to the rank of "definition".)

If the purpose is practical, then under prevailing conditions, what people mean by "computer" in common speech is usually adequate: "this particular form factor of machine used for this extrinsic purpose". Most people would call desktop PCs "computers", but they wouldn't call their mobile phones computers, even though ontologically and even operationally, there is no essential difference. From the perspective of immediate utility as given, there is a difference.

I don't see the relevance of "social construction" here, though. Sure, people could agree on a definition of computer, and that definition may be theoretically correct or merely practically useful or perhaps neither, but this sounds like a distraction.

> I'm not sure that your definition helps capture what people mean by "computer" or helps us approach a more ontologically coherent definition either.

In common speech? No. But the common meaning is not scientific (in the broad sense of that term, which includes ontology) and inadequate for ontological definition, because it isn't a theoretical term. So while common speech can be a good starting point for analysis, it is often inadequate for theoretical purposes. Common meanings must be examined, clarified, and refined. Technical terminology exists for a reason.

> If, by words like "computing" and "computation", you mean things like "what computers do", it's almost entirely circular

I don't see how. Computation is something human beings do and have been doing forever. It preexists machines. All machines do is mechanize the formalizable part of the process, but the computer is never party to the semantic meaning of the observing human being. It merely stands in a relation of correspondence with human formalism, the same way five beads on an abacus or the squiggle "5" on a piece of paper denote the number 5. The same is true of representations that denote something other than numbers (a denotation that is, btw, entirely conventional).

Machines do not possess intrinsic purpose. The parts are accidentally arranged in a manner that merely gives the ensemble certain affordances that can be parlayed into furthering various desired human ends. This may be difficult for many today to see, because science has - for practical purposes or for philosophical reasons - projected a mechanistic conceptual framework onto reality that recasts things like organisms in mechanistic terms. But while this can be practically useful, theoretically, this mechanistic mangling of reality has severe ontological problems.


That's very interesting!

Splitters make more sense to me since different things should be categorized differently.

However, I believe a major problem in modern computing is when the splitter becomes an "abstraction-splitter."

For example, take the mouse. The mouse is used to control the mouse cursor, and that's very easy to understand. But we also have other devices that can control the mouse cursor, such as the stylus and touchscreen devices.

A lumper would just say that all these types of devices are "mouses" since they behave the same way mouses do, while a splitter would come up with some stupid term like "pointing devices" and then further split it into "precise pointing devices" and "coarse pointing devices", ensuring that nobody has any idea what they are talking about.

As modern hardware and software keeps getting built on piles and piles of abstractions, I feel this problem keeps getting worse.


Doesn't it make sense to use words that mean what you're using them to mean?

By your logic I could use the term "apple" to describe apples, oranges, limes, and all other fruit because they all behave in much the same ways that apples do. But that's silly because there are differences between apples and oranges [citation needed]. If you want to describe both apples and oranges, the word for that is "fruit", not "apple".

Using a touchscreen is less precise than using a mouse. If the user is using a touchscreen, buttons need to be bigger to accommodate for the user's lack of input precision. So doesn't it make sense to distinguish between mice and touchscreens? If all you care about is "thing that acts like a mouse", the word for that is "pointing device", not "mouse".


The point is that it's simpler to understand what something is by analogy (a touchscreen is a mouse) than by abstraction (a mouse is a pointing device; a touchscreen is also a pointing device), since you need a third, abstracting concept to do the latter.


Whenever someone argues the uselessness or redundancy of a particular word we just have to remember that the word exists because at least two parties have found it useful to communicate something between them.


But they may have done so before the meaning shifted or before other, more useful words were coined.


in addition to that, some people just seem to have an extreme aversion to neologisms. I remember being surprised by that when ajax (the web technology) first came out and there was a huge "why does this thing which is just <I honestly forget what it was 'just'> need its own name?" faction.


> But an enthusiast would get irritated and argue "Bach is not classical music, it's Baroque music. Mozart is classical music."

Baroque music is a kind of classical music, though.


and a combination lock is a permutation lock


>I don't understand why most people don't just torrent? [...] It's actually more convenient than streaming services.

I think your technical sophistication means you're somewhat out-of-touch with what "most" people do.

Most normal people watch Netflix/HBO/etc on smartphones/tablets, or on streaming devices like the Amazon Fire Stick, Google Chromecast puck, Apple TV cube, or the "smart tv app" built into their Samsung or LG tv. None of those "mainstream devices used by most" make it easy to access torrenting sites or files. Sure, one could hypothetically sideload a torrenting app on a Google Chromecast, but now you're beyond the demographic of "most people" because you have the extra complexity of also adding some USB storage to save the torrent or pointing to a local network share.

The types of situations that make "torrenting more convenient" involve people watching everything on a laptop or having a dedicated HTPC media server hooked up to their tv.

I'm technically savvy and it was not easy to sideload Kodi player onto Amazon Fire Stick to legitimately play DVD ISOs. It required a lot of google searches to finally figure it out. (E.g. after realizing VLC app for Fire Stick doesn't work, and then finally stumbling across a "developer setting", and then getting the SMB network path correct, and so on...) Thinking that most people could just torrent is being unrealistic.


Torrents nowadays have <1% the number of people they had a decade ago. It didn't use to be considered technically sophisticated, just a new version of the file sharing that everyone used two decades ago.


I just stream files from my MacBook via AirPlay to my tv. Works pretty seamlessly. No media server setups. They don't even have to be on the same wifi network. I think most TVs support several streaming protocols nowadays? I got the cheapest Samsung smart TV.

You are probably right I'm out of touch with technology, but I also think that many people do much more advanced technical stuff like using VPNs, which became pretty mainstream.


>I just stream files from my Macbook via AirPlay to my tv.

Ok, explaining your situation with a laptop clarifies where you're coming from. (Which my prior reply anticipated and covered in my 3rd paragraph about torrents being easy for people using laptops.)

In any case, most normal people do not use AirPlay from their laptop, nor cast from a Chromebook, nor cast/mirror a Windows to their tv to play Netflix/HBO/Disney. Instead they just use the mainstream hardware streaming devices or the built-in tv app. Torrents would be much less convenient for the way most non-techie people watch tv (Roku/FireStick/SamsungTVapp/etc). Netflix has stated many times that the majority of their customers' watch time comes from smartphones/tablets/tv and not desktop/laptop web browsers.


>You can’t have your UI disrespecting [...] and I care too much!

I think it just highlights that people care about different things. I've seen the "(s)" placeholder for decades in computer UIs and it's never bothered me. On the other hand, the blog article characteristics that bug me are titles consisting of a non-descriptive teaser with an ellipsis (...) that doesn't describe the main point, and not having a publication date at the top.

But I'm not going to complain about blog articles that "disrespect" readers that way because apparently, it's ok with some writers and some readers.

Likewise, someone using the Comic Sans font enrages some folks but it never bothers me. On the other hand, displaying big numbers without any thousands separators is very annoying.

I code a lot of utilities for myself and I always avoid the "(s)" problem by re-ordering the text. Instead of:

  Uploading 3 image(s)
The UI is:

  Number of images uploaded:  3
That looks ok for all quantities and doesn't require tedious ternary logic everywhere :

  Number of images uploaded:  0
  Number of images uploaded:  1
  Number of images uploaded:  2
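
And a minimal sketch of the two approaches side by side (the function names are just made up for illustration):

  def sentence_style(n):
      # needs plural logic repeated for every message
      return f"Uploading {n} image" + ("" if n == 1 else "s")

  def label_style(n):
      # re-ordered label: reads fine for 0, 1, or many
      return f"Number of images uploaded:  {n}"

  for n in (0, 1, 2):
      print(sentence_style(n), "|", label_style(n))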


I'd be curious if that holds for all languages.

The plural version does not, and as you pointed out, there's a solution in English that works. But I'm curious if there are any languages where that type of solution doesn't work.

https://perldoc.perl.org/Locale::Maketext::TPJ13#A-Localizat...
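
For what it's worth, the usual machinery for languages whose plural rules don't fit the English singular/plural split is gettext-style plural forms, where each locale's catalog carries its own plural rule (Polish has three forms, for example). A rough Python sketch, assuming hypothetical compiled .mo catalogs under ./locale/<lang>/LC_MESSAGES/ for a made-up "uploads" domain:

  import gettext

  # Falls back to the English strings below if no catalog is found.
  t = gettext.translation("uploads", localedir="locale",
                          languages=["pl"], fallback=True)

  def uploaded(n):
      # ngettext picks the right form for n using the catalog's Plural-Forms rule
      return t.ngettext("Uploaded %d image", "Uploaded %d images", n) % n

  for n in (1, 2, 5):
      print(uploaded(n))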

