I'm regularly asked by coworkers why I don't run my writing through AI tools to clean it up, and instead spend time iterating over it, re-reading, perhaps with a basic spell checker and maybe a grammar check.
That's because, from what I've seen to date, it'd take away my voice. And my voice -- the style in which I write -- is my value. It's the same as with art... Yes, AI tools can produce passable art, but it feels soulless and generic and bland. It lacks a voice.
It also slopifies your work in a way that's immediately obvious. I can tell with high confidence when someone at work runs their email through ChatGPT, and it makes me think less of the person: now I have to waste time reading through an overly verbose email with very little substance when they could have just sent the prompt and saved us all the time.
I manage an employee from another country who speaks English as a second language. The way they learned English gives them a distinct speaking style that I personally find convincing, precise, and engaging. I started noticing their writing losing that voice, so I asked if they were using an LLM, and they were. It was a tough conversation, because as a native English speaker I have it easy, so I tried to frame my side of the conversation as purely my personal observation that I could see the change in tone and missed the old one. They've modified their use of LLMs to restore their previous style, but I still wonder if I was out of line socially for saying anything. English is tough, and as a manager I have a level of authority that is there even when I think it isn't. I don't know what my point is, except that I'm glad you're keeping your voice.
As a non-native English speaker living in AU, I can offer my opinion in case it's helpful.
Of course I can't speak to the person you mentioned, but if you said what you did with respect and courtesy, then they probably would've appreciated it. I know I would have. To me, there's no problem discussing and approaching these issues, and even laughing about cultural differences, as long as it's done with respect.
I once had a manager who told me that a certain client finds the way I speak scary. When I asked why, it turned out they weren't expecting the directness in my manner of speech. Which is strange to me, since we were discussing implementation and requirements, where directness and precision are critical; when they're not... well, that's how projects fail, in my opinion. On the other hand, there were times when speaking to sales people left me dizzy from all the spin. Several sentences later, I still had no idea if they'd actually answered the question. I guess that client was expecting more of the latter. Extra strange, since that would've made them spend more money than they had to.
Now running my own business, I have clients that thank me for my directness. Those are the ones that have had it with salespeople who think doing sales means agreeing to everything the client says, promising delivery of it all, and then just walking away, leaving the client with a bigger problem than the one they started with.
I often ask AI to give only grammar and spelling corrections, and then only as a change set I apply manually. In other words, the same functionality as every word processor since… Y2K?
Why not just use one of those word processors, then? It seems like you'd expend less effort (unless there's an advantage of your approach that I'm missing), since the proof-reading systems built into a word processor have a built-in queue UI with integrated accept/reject functionality that won't randomly tweak other parts of the paragraph behind your back.
It's far better at catching some types of mistakes. Word only has so many hardcoded rules past the basic grammar. LLMs operate on semantics and pick up on errors like "the sentence is grammatically correct, but uses an obviously wrong term, given the context".
That's not the kind of thing I'd trust to a language model: I'd expect it to persuade me to change something correct to something incorrect more often than it catches a genuine error. But ymmv, I suppose.
I have definitely seen Grammarly make suggestions that are actually wrong, but I think it's generally pretty ok, and it does seem to make fewer mistakes than I normally do.
Sometimes I use incorrect grammar on purpose for rhetorical purposes, but usually I want the obvious mistakes to be cleaned up. I don't listen to it for any of its stylistic changes.
I've had good results doing something similar. My spelling and grammar have always been a challenge, and even when I put the effort into checking something, I go blind to things like repeated words or phrases when I try to restructure sentences.
I sometimes also ask for justification of why I should change something which I hope, longer term, rubs off and helps me improve on my own.
I consider myself an above-average writer and a great editor. I will just throw my random thoughts about something that happened at work at ChatGPT, ask it to keep digging deeper into my question, and give it my opinion of what I should do. Then I ask it to give me the “devil’s advocate” and the “steel man” opinions, and then ask it to write a blog post [1].
I then edit it for tone, get rid of some of the obvious AI tells. Make some edits for voice, etc.
Then I throw it into another session of ChatGPT and ask whether it sounds “AI written”. It will usually call out some things and give me “advice”. I take the edits that sound like me.
Then I put the text through Grok and Gemini and ask them the same thing. I make more edits and keep going around until I am happy with it. By the time I’m done, it sounds like something I would write.
You can make AI-generated prose have a “voice” with careful prompting, and I give it some of my writing as samples.
Why don’t I just write it myself if I’m going through all that? It helps me get over writers block and helps me clarify my thoughts. My editing skills are better than my writing skills.
As I do it more and give it more writing samples, it gets faster to go from bland AI to my “voice”.
[1] my blog is really not for marketing. I don’t link to it anywhere and I don’t even have my name attached to it. It’s more like a public journal.
> By the time I’m done, it sounds like something I would write.
As a writer myself, this sounds incredibly depressing to me. The way I get to something sounding like something I would write is to write it, which in turn is what makes me a writer.
What you’re doing sounds very productive for producing a text but it’s not something you’ve actually written.
Maybe he just wants to summarize things. I write in Spanish. Of course I won't let AI write this very post, even in my bad English. But there are things in my Obsidian written in Spanish, by AI. They sound like nothing; sometimes you need something to sound that way: informative, aseptic. But it's good to hear from you anyway, when some people think, or pretend to think, that AI can write, let's say, fiction.
I am torn. As someone who is learning Spanish and should be at a strong A1 [1] by the end of the year, I would be horrified to think about posting something in a public forum based on my Spanish-speaking ability.
On the other hand, I’ve had enough conversations with Spanish speakers in Florida, like at my barbershop and at a local bar in a tourist area, who speak limited English, and I would much rather have real conversations between my broken Spanish and their broken English than listen to or read AI slop.
[1] according to this scale, I’m past A1 into A2.1 category now. But I still feel like I’m A1
I write to communicate with myself or other people. Just like I use AI to go from “I need to do $X based on my ideas and designs” to “I did $X”. It’s not about “art” or “passion”. It’s about a paycheck.
I don’t think it needs to be about art or passion. I just don’t think someone who relies entirely on AI generated text can accurately call themselves “a writer.”
I don’t call myself a writer. I call myself an employee who needs to exchange labor for money to support my addictions to food and shelter. I was writing and developing long before AI.
When I’m writing something for work where I know the end goal, I don’t. When I’m streaming random thoughts without any coherent end goal, for my blog or for internal notes as a retrospective on something that happened at work, I will use it.
Just to repeat myself: my blog isn’t for marketing. I don’t have any advertising on it, I don’t post a link to it anywhere, and I have no idea if anyone besides me has ever read it, since I don’t have any analytics. I don’t have my name or contact information on it.
I don't buy that it can tell if something sounds AI. Multiple times I have given it direct AI-slop writing and it could not tell it was AI written. As a matter of fact, it would insist it wasn't.
This flow sounds like what an intern did in PR reviews, and it made me want to throw something out a window. Please just use your own words. They are good words, and much better words than you may think.
I can’t share links from Gemini or Grok. But they both immediately flagged the first one as AI generated and the second most likely human.
I didn’t actually do anything here except tell ChatGPT to rewrite it in the form of an article I found in an old PDF, “97 Things a software engineer should know” from 2010, then ask Grok if it sounded AI generated (it did), then ask Grok to rewrite it to remove telltale signs (it still kept the em dashes), and then copy it back to ChatGPT.
Could I tell if the last one is AI? Absolutely. Throwing a few "damns" in there didn't convince me. And all the reworking you've done, while it makes it a little more passable, has made it arguably worse in quality. The point of the final article is so muddy. It has no central point and sprawls on and on about random nonsense.
With some human editing to make it sound less douchey, or with better prompting, do you think you could tell?
In other words: I did no human editing and didn't even play with the prompt.
For instance, I would have definitely reworded this: “a solid meeting isn’t just about not screwing up the logistics. It’s a snapshot of how your team actually operate”
The “it isn’t just $X. It is $Y” construction is something that AI loves to do.
The larger point is that AI is really good at detecting its own slop. Gemini is really good at detecting first-pass AI slop from another LLM, and out of curiosity I put in a few other articles I knew were written before 2022 to see if it gave false positives.
I agree. I use Grammarly for finding outright mistakes (spelling and the like, or a misplaced comma or something), but I don't listen to any of the suggestions for writing.
I feel like when I try writing through Grammarly, it feels mechanical and really homogeneous. It's not "bad" exactly, but it sort of lacks anything interesting about it.
I dunno. I'm hardly some master writer, but I think I'm OK at writing things that are interesting to read, and I feel Grammarly takes that away.
The thing is, ask it something right away and it'll use its own voice. Give it lots of data from your own writing, through examples and extrapolations of your speech patterns, and it will impersonate your voice more. It's like how it can impersonate Trump: it has lots of examples to pull from. You? It doesn't know you. LLMs need a large amount of input to give a really good output.
I said almost exactly that to a coworker a few hours ago. My writing is me, it’s who I am. But I know that is not true for everyone, and in particular non-native speakers.
I just detest that AI writing style, especially for business writing. It’s the kind of writing that leaves the reader less informed for the effort.
If I were asked a direct question, especially in a job interview, I would be truthful. That answer stops any sniping about using AI and lets me focus on my skills.
Ah, I misunderstood the parent comment as having that disclaimer on the CV itself.
I agree that if asked directly, it makes sense to talk about it candidly. Hopefully an employer would be happy about someone who understands their weak spots and knows how to correctly use the tools as an aid.
Asking about AI usage in a CV is pointless, in my opinion. You are always responsible for what's written in there. If they don’t like the writing style, then they don’t.
Interviewers directly asking whatever bothers them is fine IMHO. The alternative is keeping a negative impression when there could have been an insightful exchange, and the candidate also gets to know what to expect from the company.
If you have access to Microsoft Word, I'd customize the grammar checker settings to flag more than what is enabled by default. They have a lot of helpful rules that many are oblivious to because it's all buried deep in the preferences. Then adopt the stance of taking the green lines under advisement but ignore them if your original words suit your preference. That will get you polished up without submitting to AI editorial mundanity.
Honestly, the issue is that most people are poor writers. Even “good” professional writing, like the NY Times science section, can be so convoluted. AI writing is predictable now, but generally better than most human writing. Yet it can be irritating at the same time.
I wonder who actually discovered this attack? Can we credit them? The phrasing in these posts is interesting, with some taking direct credit and others just acknowledging the incident.
Aikido says:
> We were alerted to a large-scale attack against npm...
Socket says:
> Socket.dev found compromised various CrowdStrike npm packages...
Ox says:
> Attackers slipped malicious code into new releases...
Safety says:
> The Safety research team has identified an attack on the NPM ecosystem...
Phoenix says:
> Another supply chain and NPM maintainer compromised...
Semgrep says:
> We are aware of a number of compromised npm packages
Mackenzie here, I work for Aikido.
This is a classic example of the security community all playing a part. The very first notice of this was from a developer named Daniel Pereira. He alerted Socket, who did the first review of the malware and discovered 40 packages. Afterwards, Aikido discovered an additional 147 packages, plus the CrowdStrike packages.
I'm not sure how Step found it, but they were the first to really understand the malware and realize that it was a self-replicating worm. So multiple parties all played a part, kind of independently. It's pretty cool.
Question: how does your product help in these situations? I imagine it'd require someone to report a compromised package, and then you guys could detect it in my codebase?
Yes to the "you guys can detect it in my codebase" part, but it's generally not required for someone to report a compromised package; we also discover them ourselves quite fast thanks to automated scans of npm package updates. This is how Aikido was first to discover the previous supply chain hack.
Several individual developers seem to have noticed it at around the same time with Step and Socket pointing to different people in their blogs.
And then vendors from Socket, Aikido, and Step all seem to have detected it via their upstream malware detection feeds - Socket and Aikido do AI code analysis, and Step does eBPF monitoring of build pipelines. I think this was widespread enough it was noticed by several people.
Since so many vendors discovered these packages seemingly independently, you'd think that they would share those mechanisms with NPM itself so that those packages would never be published in the first place. But I guess that removes their ability to sell an "early alert" mechanism through their offerings...
NPM is owned by GitHub/Microsoft. I'm sure they could afford to buy one of these products or just build their own, but clearly security is not a thing they care about.
> The entire attack design assumes Linux or macOS execution environments, checking for os.platform() === 'linux' || 'darwin'. It deliberately skips Windows systems
If I were the conspiracy-minded sort I might jump to some wild conclusions here.
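As an aside, the condition as literally quoted would always be truthy in JavaScript ('darwin' is a non-empty string), so presumably the actual check compares both values. A hedged sketch of what such a gate looks like (assumed shape, not the real payload):

```ts
import os from "node:os";

// Hypothetical sketch of the platform gate described above, not the
// actual payload. Note that the condition as literally quoted,
//   os.platform() === 'linux' || 'darwin'
// always evaluates truthy, so the real code presumably checks both:
const targeted = os.platform() === "linux" || os.platform() === "darwin";

if (!targeted) {
  process.exit(0); // Windows and everything else deliberately skipped
}
// ...the payload would only execute past this point.
```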
I watched an interview with Jeff Snover once and he said that they tried to make a unixy bash-like shell a few times and decided it was never going to fit in Windows. So they went a different way and took a lot of inspiration from OpenVMS.
So don’t expect PowerShell to be like a UNIX shell. It isn’t, and wasn’t meant to be one. It’s different, on purpose :)
I'm a die-hard Linux user, and some years ago I took a Windows gig on a whim. I find PowerShell fantastic; it's the only thing that makes my role bearable. Now, one of the first things I install on Linux is PowerShell.
Why should MS buy any of these startups when a developer (not any automated tech) found the malware? It looks like these startups did after-the-fact analysis for PR.
On the other hand, the previous supply chain attack was found by automated tech.
Also, if MS would be so kind as to run similar scans at the time a package is updated, instead of after the package is updated (which is the only way the automated tech can run if npm doesn't integrate it), then malware like this would be way less common.
Hi, I'm Charlie from Aikido, as mentioned above. Yes, we detected it automatically, and I alerted Josh to the situation on BSky.
There's no reason why Microsoft/npm can't do what we're doing, or any of the other handful to dozen companies that do similar things to us, to protect the supply chain.
Usually security companies monitor CVEs and the security mailing lists. That's how they all end up releasing the blog posts at the same time. It's because they are all using the same primary source.
I've never heard of Socket before this thread. They could be taking advantage of this news and promoting the company, as it's mentioned quite a few times in this thread. Or it's just a good service that I should probably be using.
NPM deserves some blame here, IMO. Countless third party intel feeds and security startups can apparently detect this malicious activity, yet NPM, the single source of truth for these packages, with access to literally every data event and security signal, can't seem to stop falling victim to this type of attack? It's practically willful ignorance at this point.
NPM is owned by GitHub, and therefore Microsoft, who is too busy putting Copilot into apps that have zero reason to have any form of generative AI in them.
But GitHub does loads of things with security, including reporting compromised NPM packages. I didn't know NPM was owned by Microsoft these days, though. Now that I think about it, Microsoft of all parties should be right on top of this supply chain attack vector: they've been burned hard by security issues for decades, especially in the mid-to-late 90s and early 2000s, when hundreds of millions of devices were connected to the internet but their OS wasn't ready for it yet.
The difference is in the apparent available resources. You can't get to "professional" without time and money, and NPM, post-acquisition, presumably has more of both. Granted, NPM probably doesn't have a revenue model to speak of, which means Microsoft is probably not paying it much attention.
Dozens of businesses have been built to try fixing the npm security problem. There's clearly money in it, even if MS were to charge an access fee for security features.
An identical, highly obfuscated (and thus suspicious-looking) payload was inserted into 22+ packages from the same author (many dormant for a while) and published simultaneously.
What kind of crazy AI could possibly have noticed that on the NPM side?
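None needed, arguably; a boring, AI-free registry-side heuristic would do it. Here's a hedged sketch (a hypothetical pipeline, not anything npm actually runs) that just hashes newly published files and flags the same blob landing in many packages at once:

```ts
import { createHash } from "node:crypto";

// Hypothetical registry-side check, not anything npm actually runs:
// remember which packages each newly seen file hash appears in, and
// alert when one previously unseen blob shows up everywhere at once.
const seenIn = new Map<string, Set<string>>(); // sha256 -> package names

export function onFilePublished(pkg: string, contents: Buffer): void {
  const digest = createHash("sha256").update(contents).digest("hex");
  const pkgs = seenIn.get(digest) ?? new Set<string>();
  pkgs.add(pkg);
  seenIn.set(digest, pkgs);

  // An identical payload landing in 22+ packages from one author is the
  // pattern described above; even a crude threshold like this catches it.
  if (pkgs.size >= 10) {
    console.warn(`identical payload ${digest} now in ${pkgs.size} packages`);
  }
}
```

A real system would obviously need time windows, allowlists for legitimately shared files, and so on, but the signal itself is trivial to compute.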
This is frustrating as someone who has built and published apps and extensions for other software providers for years and must wait days or weeks for a release to be approved while it's scanned and analyzed.
For all the security wares that MS and GitHub sell, NPM has seen practically no investment over the years (e.g. just go review the NPM security page... oh, wait, where?).
I blame the prevalence of package managers in the first place. Never liked 'em, for just this reason. Things were fine before they became mainstream. Another annoyance is package files that are set to grab the latest version, randomly breaking your environment. This isn't just npm, of course; I hate them all equally.
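For the npm case specifically, the "grab the latest version" behavior is just the version range syntax in package.json: a caret range floats to new releases on the next install, while a bare version pins exactly. A minimal illustration (the package names are made up):

```json
{
  "dependencies": {
    "some-floating-lib": "^1.2.3",
    "some-pinned-lib": "1.2.3"
  }
}
```

The caret line will happily pull in any future 1.x release, which is exactly the "randomly breaking your environment" behavior; the exact pin plus a committed lockfile is what keeps installs reproducible.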
> As in, things were fine before we had commonplace tooling to fetch third party software?
In some ways they were. I remember how much friction there was in taking a dependency in your typical desktop C++ or Delphi app in the late 90s and early 00s. And because of that, developers would generally be hesitant to add a new dependency without a strong justification, especially any kind of dependency that comes with its own large dependency tree. Which, in turn, creates incentives for library authors to create fairly large, framework-style libraries. So you end up with an ecosystem where dependencies are much more coarse and there are fewer of them, so dependency graphs are shallow. Whether this is an advantage or a disadvantage in its own right can be debated, but it's definitely less susceptible to this kind of attack, because updating dependencies in such a system is also much more involved; it's not something that you do with a single `npm install`.
I mostly share GP's sentiment, although they didn't argue their point very well.
> As in, things were fine before we had commonplace tooling to fetch third party software?
Yes. The languages without a dominant package manager (basically C and C++) are the only ones that have self-contained libraries, that you can just drag into your source tree.
This is how you write good libraries - as can be seen by the fact that for many problems, there's a powerful C (or C++, but usually C) library with minimal (and usually optional) dependencies, that is the de-facto standard, and has bindings for most other languages. Think SDL, ffmpeg, libcurl, zlib, libpng/jpeg, FreeType, OpenSSL, etc, etc.
That's not the case for libraries written in JS, Python, or even other compiled languages like Go and Rust - libraries written in those languages come with a dependency tree, and are never ported to other languages.
I can see the value, but to do the things you're describing, the AI needs to be given fairly highly-privileged credentials.
> Right now, Datafruit receives read-only access to your infrastructure
> "Grant @User write access to analytics S3 bucket for 24 hours"
> -> Creates temporary IAM role, sends least-privilege credentials, auto-revokes tomorrow
These statements directly conflict with one another.
So it needs "iam:CreateRole," "iam:AttachPolicy," and other similar permissions. Those are not "read-only." And, they make it effectively admin in the account.
What safeguards are in place to make sure it doesn't delete other roles, or make production-impacting changes?
Ahh. To clarify: changes like granting users access would be done by our agent modifying IaC, so you would still have to manually apply the changes. Every potentially destructive change being an IaC change helps keep humans in the loop. This admittedly makes the agents a little more annoying to work with, but safer.
Lots of people have asked us this! We try to do more than just being an AI-enabled IDE by giving the agent access to your infrastructure and observability tools. So you can query your AWS account, get information about metrics from the past few days, and so on. We also plan to integrate with more DevOps tools as our customers ask for them. We also try to be less like an IDE and more like an autonomous agent. We've noticed that DevOps engineers actually like being engineers and enjoy some infrastructure tasks, while there are others they would rather automate away. Not sure if you have experienced this sentiment?
Also, auto-revoke right now can be handled by creating a role in Terraform that can be assumed and expires after a certain time. But we’re exploring deeper integrations with identity providers like Okta to handle this better.
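To make the expiring-role idea concrete, here's a minimal sketch (AWS SDK for JavaScript v3; the principal ARN and role name are placeholders, and this is one way to do it rather than our exact implementation). The trust policy's aws:CurrentTime condition means sts:AssumeRole simply stops working after the deadline, with nothing left to clean up:

```ts
import { IAMClient, CreateRoleCommand } from "@aws-sdk/client-iam";

// Sketch: a role that can only be assumed for the next 24 hours.
// Once aws:CurrentTime passes the deadline, AssumeRole is denied,
// so the grant "auto-revokes" without anyone deleting anything.
const deadline = new Date(Date.now() + 24 * 60 * 60 * 1000).toISOString();

const trustPolicy = {
  Version: "2012-10-17",
  Statement: [
    {
      Effect: "Allow",
      Principal: { AWS: "arn:aws:iam::123456789012:user/SomeUser" }, // placeholder
      Action: "sts:AssumeRole",
      Condition: { DateLessThan: { "aws:CurrentTime": deadline } },
    },
  ],
};

const iam = new IAMClient({});
await iam.send(
  new CreateRoleCommand({
    RoleName: "temp-analytics-access", // placeholder
    AssumeRolePolicyDocument: JSON.stringify(trustPolicy),
  })
);
// A least-privilege permissions policy would still be attached separately.
```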
I can put some AWS creds in my terminal and Claude Code is perfectly happy writing AWS CLI commands (or whole Python scripts if necessary) to work out what it needs to know about my infrastructure.
In China, nearly everything works via the same app (WeChat) and via QR code. Every grocery store, coffee shop, train station, or point of sale has the same scanner where you can flash your QR code. I don't think I saw physical currency exchanged even once in the entire six weeks I was there.
I keep hearing that X wants to be the "everything" app. WeChat is _already_ the everything app. It's DoorDash, Venmo, Facebook, Instagram, and about 500 other apps in one.
I will say that I disliked the pattern of every restaurant using a WeChat "mini app" where it basically loads an entirely new app within WeChat just to see the menu or order. It felt much clunkier than just using a web page.
You're misunderstanding: Musk wants X to be the everything app in the US. He knows about WeChat; that is in fact where part of the inspiration came from, and more generally he talked about it back in the PayPal days.
Companies should automate this: write their own outage monitoring, feed the results, plus the cumbersome format you have to send to the provider, into an LLM, and have it spit out an email requesting SLA credits or whatever the contract specifies.
Probably not worth it for low cost services, but if you’re paying GitHub $x millions per year, maybe it is.
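The credit math part doesn't even need the LLM. A hedged sketch (the SLA threshold and credit tiers below are made-up placeholders for whatever the contract actually says):

```ts
// Sketch: turn your own outage log into an uptime figure and a credit
// claim. The 99.9% threshold and tier percentages are hypothetical;
// substitute the numbers from your actual contract.
interface Outage {
  start: Date;
  end: Date;
}

function uptimePercent(outages: Outage[], minutesInMonth: number): number {
  const downMinutes = outages.reduce(
    (sum, o) => sum + (o.end.getTime() - o.start.getTime()) / 60_000,
    0,
  );
  return (100 * (minutesInMonth - downMinutes)) / minutesInMonth;
}

function creditPercent(uptime: number): number {
  if (uptime >= 99.9) return 0; // SLA met, no credit
  if (uptime >= 99.0) return 10; // hypothetical tier
  return 25; // hypothetical tier
}
```

With those numbers computed, all the LLM has to generate is the boilerplate around them in the provider's claim format.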
They intentionally underreport outages. Everybody does. When your performance metrics for your customers, managers, and individual contributors all include uptime, what you get isn't better uptime but lies about uptime.
Some customers of my product, StatusGator, do this with our API. They can extract the outage data, including the time when we detect the outage before it's acknowledged, and then use that to get SLA credits.
It's great that your specific product does this, but as a whole I have to monitor the service separately to keep you honest (well, not you specifically; I'm sure you are honest and do as much as you can to be, but not every company is), and of course to monitor the problems I have that you don't detect.
I'm still working on https://wut.dev/ - a simpler, privacy-focused, read-only AWS resource viewer. I did a "show Reddit" post a few weeks back and it got quite a bit of interest, so doubling down with actual user feedback now.
I don't know if this style of... discussion is something the Cluely team made popular recently, or if it took off sooner, but I really hope it doesn't catch on further.
> We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone
And down in Enforcement (emphasis mine):
> Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at [INSERT CONTACT METHOD]. All complaints will be reviewed and investigated promptly and fairly.
What’s the point of this dance when you can’t even bother to fill out the contact method?
Honestly, how many projects with "Codes of Conduct" actually use or follow them? Maybe if you're a big CNCF project or something like that, but the average GitHub project just adds one because GitHub told them to. If they were actually more about enforcing community standards and not about social signaling, they'd probably just be called "contributing rules" or something boring and nondescript, like how Internet rules have always been.
In other words, the point of this dance is to check a box. I mean literally, GitHub will check a box in Insights -> Community Standards when you add one.
Did anyone read that linked thread in full? There's no "style of discussion" there; there's a lot of people engaging in a very normal, constructive discussion, which is being interrupted by a single disruptive commenter (Zaid).
Nothing there seems to reflect poorly on the project as far as I can tell?
"A single disruptive commenter (Zaid)"
And he is one of the top contributors. That doesn't quite fit the narrative that it was just some weirdo interfering.
According to GitHub insights, he contributed 66 lines of code. I mean, they could be really valuable lines, especially if he fixed bugs. Somehow I doubt it.
However, I'd get rid of this 'contributor' ASAP if I were a maintainer.
If it were a more legitimate trademark claim, that would be one thing; a lot of OSS people think you can just name your project after something popular so you can coast off the reputation of the more popular product.
But since this is a BS claim, I think the following approach is totally appropriate:
- Have one person post the antagonistic garbage the OP deserves
- Have another person play the “rear guard” and follow up with the actual legal reasons they won’t comply.
Why? OSS projects aren't somehow exempt from trademark law, and at the very least a project can have its repo taken down. The trademark in this case might not be airtight, but that's a separate issue.
I have little respect for people petty enough to involve legalese against people doing something for free, just for the love of the game.
I would let them do a takedown, just so they expend billable lawyer hours, only for me to do a search-and-replace and reupload under a different name out of spite.
In various jurisdictions, a trademark that doesn't get defended makes it much more likely that you'll lose your trademark altogether when trying to stop actual infringement. If the infringers can point at others and say "look, they let others infringe on their trademark for years and only now they're going after us", that can have an impact on the viability of your case.
That said, China regularly blocks or attacks Github users, so I don't think any open source project needs to be too wary unless they're trying to do business in China.
>In various jurisdictions, a trademark that doesn't get defended makes it much more likely that you'll lose your trademark altogether when trying to stop actual infringement. If the infringers can point at others and say "look, they let others infringe on their trademark for years and only now they're going after us", that can have an impact on the viability of your case.
Oh, I didn't know that. At least now it makes some sense. Thanks.
The law is the only reason why a corporation can't take your open source project and rerelease it without any attribution. Laws for thee and not for me is juvenile.
Then frankly you're being a child. The point isn't to shut down the project... it's asking them to change the name due to trademark, and asking civilly without lawyers is nicer than most do up front.
Nah, you still come off as childish when you reply like that, no matter the validity of the claim. If the claim is nonsense, then ignore it or call it out as nonsense and move on... but when you act like that you sour your image, and if it is a legit claim you've now just made things worse.
Not saying it’s the adult thing to do but again, it’s understandable. Their takedown attempt seems to be disrespectful and wrong. I typically advocate for taking the high road but I certainly understand some situations where people don’t, even if I would have.
edit: thinking about it, we could look at your tone here as well to illustrate the point. Sure you weren't as uhh "passionate," but let's look at it more closely:
>Nah, you still come off as childish when you reply like that
1) "Nah" is a very dismissive way of saying "I disagree." Then you follow it up 2) by calling someone "childish." There are certainly more respectful ways to make your point! Then again, I don't think it's that big of a deal. Someone else might though.
>Their takedown attempt seems to be disrespectful and wrong
How was the initial claim "disrespectful"? It might not be using maximally cuddly language, but "[...] your platform seriously infringes on our legitimate rights and interests, please rename to other one." seems pretty respectful to me. Was it only "disrespectful" because it was wrong?
Fair question - my use of "disrespectful" is a bit forced. It appears their claim is frivolous so I just considered the whole act "disrespectful." There are certainly better words though.
Comparing me saying "nah" and describing something as "childish" to the types of things that Zaid-maker was saying in that comment thread is kind of a stretch, isn't it?
The person I was referring to as childish:
>No one gives a shit, get out of here!
>BLAH BLAH BLAH
(and then when someone points out he is in fact NOT the maintainer and is a contributor)
>I am the maintainer of this project who says I am not. I only made changes to CI by my choice. Ask maze before making false exceptions to me otherwise you will be hearing from my lawyers next time for harassment.
It is the late teens/young twenties online communication style. Generally pretty aggressive, but easy to ignore because they are usually not really saying anything of substance. They are "ragebaiting" you.
When I see people commenting "shut your bitch ass up" (now deleted), "triggered", "keep dreaming my guy", it is distinctly generational (young gen z) to me. It is a style of communication.
Torvalds may have "invented" OSS toxicity, but, as far as I can tell, he was not popping in saying SYBAU like he was commenting on a tiktok brainrot compilation.
Who knows, maybe he is just the childish persona of the maintainer. It is rare to use more than one in a specific project but a lot of people maintain more than one persona online.
Let me introduce you to OpenCart, an open-source eCommerce platform in use on hundreds of thousands of websites handling customer payments, which recently struck a multi-million-dollar deal with PayPal, and whose founder and practically sole developer responds to bug reports and CVEs with careful, well-thought-out replies like "JUST FUCK OFF!":
Hard to tell who’s who, but the Zaid person who claims to be a maintainer is apparently not a maintainer. He contributed some small changes and started claiming to be a maintainer.
Wut.Dev (https://wut.dev) - a fast, client-side, privacy-focused, alternative to the AWS console.
I got tired of using the AWS console for simple tasks, like looking up resource details, so I built a fast, privacy-focused, no-signup-required, read-only, multi-region, auto-paginating alternative using the client-side AWS JavaScript SDKs, where every page has a consistent UI/UX and resources are displayed as a searchable, filterable table with one-click CSV exports. You can try a demo here [1].
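For the auto-paginating part, the v3 JavaScript SDK's built-in async paginators do the heavy lifting and run fine in the browser. A minimal sketch using S3 listing as an arbitrary example (not necessarily how wut.dev itself is implemented):

```ts
import { S3Client, paginateListObjectsV2 } from "@aws-sdk/client-s3";

// Sketch: the SDK paginator keeps fetching pages until the listing is
// exhausted, so the UI never has to juggle continuation tokens by hand.
const client = new S3Client({ region: "us-east-1" });

async function listAllKeys(bucket: string): Promise<string[]> {
  const keys: string[] = [];
  for await (const page of paginateListObjectsV2({ client }, { Bucket: bucket })) {
    for (const obj of page.Contents ?? []) {
      if (obj.Key) keys.push(obj.Key);
    }
  }
  return keys;
}
```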
Unsolicited feedback (and take it with a grain of salt, since I'm probably not your target buyer):
- the subheading describes the "how", not the "what". Meaning: what would you use this product for?
- in general, all the headlines could be positioned from the "what would a user do" scenario. E.g., instead of saying "Resource Relationship Diagrams"… say "See Resource Relationships with Ease"
- if I'm understanding the tool correctly, this seems like a "lookup" tool. In which case, lookup.dev is for sale… just FYI.
“Not fancy security tools. Not expensive antivirus software. Just asking my coding assistant…”
I actually feel like AI articles are becoming easier to spot. Maybe we’re all just collectively noticing the patterns.