This is mostly computer literacy for people who are already committed to using office and AI products from Microsoft, Google and Apple. For those people the article provides good, actionable advice. But I'd like to contest that this is required to achieve "computer literacy in 2026". In many cases, AI is a vehicle for sending your personal and business data to a Big Tech cloud. Depending on the tasks you need to perform, AI may not be a significant help.
"social" is the most important word. I'm surprised that so many people in this thread focus on algorithms, ranking and addiction. These things can be part of social media platforms, but they are orthogonal to what social media is: A platform that is centered around the identities of its users and the relationship between users.
The dtype constraint can be checked with pyright (and presumably other type checkers). We're still on older versions of numpy, so I don't have first-hand knowledge of the shape constraints.
It's also because browsers don't do anything useful with the additional tags: whether I use <article>, <section> or <div> doesn't make any difference; my browser doesn't use that to generate a TOC or let me open an <article> in a new tab. Even the, in theory, incredibly useful <time> tag seems to be completely invisible to my browser, and many other potentially useful tags don't exist in the first place (e.g. <unit> would be useful).
Exactly this. I really wish browsers would use semantic html to make content more accessible for me, a sighted user! Why does my browser not give me a table of contents I can use to navigate a long page?
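All the information a TOC needs is already in the page; something like this userscript-style sketch would do it (the heading selection and names are just illustrative, not any real browser feature):

    // Build a TOC from the semantic structure the browser already has.
    // Everything here is standard DOM; only the feature itself is missing.
    function buildToc(root: ParentNode = document): HTMLElement {
      const toc = document.createElement("nav");
      toc.setAttribute("aria-label", "Table of contents");
      const list = document.createElement("ol");
      toc.appendChild(list);

      // Headings inside <article>/<section> are exactly the tree a TOC needs.
      root
        .querySelectorAll<HTMLHeadingElement>(
          "article h2, article h3, section h2, section h3"
        )
        .forEach((heading, i) => {
          if (!heading.id) heading.id = `toc-heading-${i}`; // ensure a link target
          const item = document.createElement("li");
          const link = document.createElement("a");
          link.href = `#${heading.id}`;
          link.textContent = heading.textContent ?? "";
          item.appendChild(link);
          list.appendChild(item);
        });
      return toc;
    }

    document.body.prepend(buildToc());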
I think the parent has a good point: browsers don't do anything with these tags for sighted users, who are unfortunately the majority of developers. If they were to notice benefits to using semantic tags, maybe they'd use them more often.
It’s interesting, because if you imagine sites actually using these tags to be maximally compatible with reader mode and other accessibility modes, they’re setting themselves up perfectly to be viewed without ads.
I use reader mode by default in Safari because it’s essentially the ultimate ad blocker: it throws the whole site away and just shows me the article I want to read.
But this is in opposition to what the website owners want, which is to thwart reader mode and stuff as many ads in my way as they can.
It’s possible good accessibility is antithetical to the ad-driven web. It’s no wonder sites don’t bother with it.
Reader mode seems to still work if you have a div with article text in it. I would be interested to see a comparison of what works and what doesn’t if such a reference exists though!
Reader mode is based on a whole slew of heuristics including tag names, class names, things like link-to-text-content ratio, and other stuff I can't recall. IIRC it considers semantic tag names a stronger signal than e.g. class names, so having <article> in your page isn't necessary but helps a lot.
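A toy version of that kind of scoring, just to illustrate the idea (this is not Readability's actual algorithm; the weights and patterns are made up):

    // Toy content-scoring heuristic in the spirit of reader mode.
    // Weights, patterns and names are invented for illustration only.
    function scoreCandidate(el: HTMLElement): number {
      let score = 0;

      // Semantic tag names are a strong positive signal...
      if (el.tagName === "ARTICLE") score += 30;
      if (el.tagName === "SECTION") score += 10;

      // ...class/id names a weaker one.
      const hints = el.className + " " + el.id;
      if (/article|content|post|body/i.test(hints)) score += 5;
      if (/comment|sidebar|footer|nav/i.test(hints)) score -= 5;

      // Link-dense blocks are probably navigation, not prose.
      const textLen = Math.max(el.textContent?.length ?? 0, 1);
      const linkLen = Array.from(el.querySelectorAll("a")).reduce(
        (n, a) => n + (a.textContent?.length ?? 0),
        0
      );
      if (linkLen / textLen > 0.5) score -= 20;

      return score;
    }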
Yes, I think that is what browser vendors should spend money on instead of inventing new syntax. Google Chrome still doesn't support alternate stylesheets. But I refuse to stop using them simply because a rich company can't be bothered to implement decades-old standards.
Not true. Using semantic HTML and relying on its implicit ARIA roles allows the browser to construct an accurate AOM tree (Accessibility Object Model) which makes it possible for screen readers and other human interface software to create TOCs and other landmark-based navigation.
> Not true. Using semantic HTML and relying on its implicit ARIA roles allows the browser to construct an accurate AOM tree (Accessibility Object Model) which makes it possible for screen readers and other human interface software to create TOCs and other landmark-based navigation.
Sure, it allows the browser to do that. GP is complaining that even though browsers are allowed to do all that, they typically don't.
We just don't have enough tags that we can really take advantage of on a semantic or programmatic level, and that has led to other tags getting overloaded and thus losing meaning.
Why don't we just have markup for a table of contents in 2025?
That'd open a whole new can of worms. Browsers are already gargantuan pieces of software even with the relatively primitive tags we have today. We don't need to further argue with each other about what the <toc> tag should look and behave like, deal with unforeseen edge cases, and handle someone's special use case that ends up requiring them to implement the whole thing with <ol>s and <li>s anyway.
Then let the edge cases use <ol> and <li>, and in some sense all those website style simplifiers that come built in with Safari will just have to deal with those edge cases. Similarly, we have a built-in date picker, and if you don't think it's good enough then build a better one.
If you want a specific behavior for <time> then write a browser plugin which e.g. converts the <time> content to your local time.
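A minimal sketch of what such a plugin's content script could look like (assuming a WebExtension injects it; the formatting choice is arbitrary):

    // Content-script sketch: rewrite every <time datetime="..."> into the
    // viewer's local time zone. Formatting choices are arbitrary.
    for (const el of document.querySelectorAll<HTMLTimeElement>("time[datetime]")) {
      const parsed = new Date(el.dateTime);
      if (!Number.isNaN(parsed.getTime())) {
        el.title = el.textContent ?? ""; // keep the author's original text on hover
        el.textContent = parsed.toLocaleString(); // render in local time
      }
    }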
But if you are a developer, you should see value in <article> and <section> keeping your markup much cleaner, which in turn should make your tests much easier to write.
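For example, with Testing Library's role queries (assuming a jsdom-style DOM test setup), the semantic tags give you stable selectors for free:

    import { screen, within } from "@testing-library/dom";

    // <article> has the implicit ARIA role "article", so no test-id or
    // brittle class selector is needed to find it.
    const article = screen.getByRole("article");

    // <section> only gets the "region" role when it has an accessible
    // name, e.g. <section aria-labelledby="comments-heading">.
    const comments = within(article).getByRole("region", { name: /comments/i });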
Back when I explored Helix as a long-time vim user, I had some LSPs set up with neovim. But I was very much in doubt about how to take advantage of these LSPs and what kind of configuration options make sense. The "hard" part is understanding what LSPs can do for you and what key bindings to set up so that you can use the relevant features.
Helix gives you a sane user interface to LSPs that is discoverable.
It's probably a learning process for sounds that are typical in a language. Later in life we can also distinguish different languages even though we don't speak them. Many non-English speakers will still identify a song like https://en.wikipedia.org/wiki/Prisencolinensinainciusol as sounding English, even though the lyrics are gibberish built only from sounds that are typical of English.
I started learning Arabic at about 45, and there are many sounds I've learned to make with my mouth but cannot distinguish with my ears when other people speak, for instance ك/ق or س/ص or د/ض.
It's as if my brain is binning these sounds together and I cannot retrain the binning.
Can you hear the difference when you ask someone to really exaggerate their articulation and enunciation? Because with most languages I speak I tend to have difficulty understanding people when they don't talk like a radio weather forecaster.
Saying and hearing are not the same. The parent struggles to hear the differences but says they can pronounce them.
Polish has a retroflex and palatalized “sh” pair of sounds (sz vs. ś) that I can pronounce perfectly but not clearly distinguish as a listener.
I learned Polish when I was 5, moved back to the States when I was 11, barely used it for 7 years, and relearned it when I was 18. I don’t know if (at 5) I ever distinguished between the two. But I certainly struggle now.
Same for me with Dzongkha: cha (ཅ) / chha (ཆ) and tsa (ཙ) / thsa (ཚ). I can aspirate the second of these fine, so that's not the problem. Hearing the difference is. This also makes it harder to remember the spelling as I mix them up all the time.
To be fair, I've only been learning for a short while (8 months or so) and haven't had much opportunity to listen to a lot of different speakers, so perhaps this will get better.
If it’s anything like the aspirated consonants in Hindi, usually the aspirated one of the pair ends up being sounded out slightly longer because of the expelling of breath. That’s how I learned to recognize it. Aspirating was actually much harder for me at first than hearing it.
Thank you. Yes, I am able to articulate the sounds more or less.
That said, any other tips on pronunciation are greatly appreciated! I did not realise that the tongue makes the same shape with both letters - I was not making a proper bowl with ض.