"The simultaneous activity from US and India confirmed we were dealing with a single attacker using multiple VPNs or servers, not separate actors."
Did it really? It's not clear to me why the possibility that the exfiltrated credentials were shared with other actors, each acting independently, is ruled out.
"... resent SO and Reddit trying to gatekeep": Am curious why you felt they were gatekeeping your content. They are free websites, and anybody who wants/needs to read your content, can.
"... not giving them free money if they're notnpassing some of the benefits ..." - Could you expand on the specific benefits you wanted them to pass on to the community? As a user, being able to find other people's content that is relevant to my current need is already a pretty solid benefit.
The term "machine vision" is mainly used in highly controlled, narrow industrial applications, think factory assembly lines, steel inspection, monitoring for cracks in materials, shape or size classification of items, etc. The task is usually very well defined, and the same thing needs to be repeated under essentially the same conditions over and over again with high reliability.
But many other things exist outside the "glue some GPT-4o vision API stuff together for a mobile app to pitch to VCs" space. Like inspecting and servicing airplanes (Airbus has vision engineers who build tools for internal use; you don't have datasets of a billion images for that). There are also things like 3D motion capture of animals, such as mice or even insects like flies, which requires very precise calibration and proper optical setups. Or estimating the meat yield of pigs and cows on farms from multi-view images combined with weight measurements. There are medical applications, like cell counting, 3D reconstruction of facial geometry for plastic surgery, dentistry, and a million other things besides chatting with ChatGPT about images, classifying cats vs. dogs, or drawing bounding boxes around people in a smartphone video.
Thank you for your thoughtful comment! I completely agree.
It’s great to see someone emphasize the importance of mastering the fundamentals—like calibration, optics, and lighting—rather than just chasing trendy topics like LLMs or deep learning. Your examples are a great reminder of the depth and diversity in machine vision.
Your clever remark highlights poor emotional intelligence and weak communication skills. Sarcasm might have its place in casual conversation, but in professional discussions, it signals insecurity and a lack of respect—neither of which contribute to meaningful dialogue.
Your disdain for LLMs is equally puzzling. Are you seriously suggesting I shouldn’t use tools to improve my grammar and delivery simply because they don’t align with your engineering view? Ironically, LLM-based tools likely support your own work—whether through coding assistance, debugging, or other tasks—even if you choose not to acknowledge it.
By the way, I used an LLM to craft this reply too—who doesn’t?
Most don't use LLMs, and I'm telling you, many people are going to be pissed if they figure out that you're writing to them through LLMs. Maybe you find this reaction strange, but it's at least good to know in advance and not be surprised.
You claim that 'most people' will be upset—are you their appointed spokesperson, or is this just your personal assumption? What I find strange is that I complimented and thanked you for your thoughts on machine vision, yet you responded with hostility. Is this how you communicate in real life too?
If 'most people' are upset about others using LLMs to improve their written communication, maybe they should reflect on why they hold such outdated views—or consider that the person replying might not be a native English speaker. Are platforms like Hacker News meant only for native English speakers?
Warning: The statement above was written by an LLM, so don’t be surprised—I’m letting you know in advance.
I use LLMs daily for coding. They are great. But they are not a replacement for reading a book like the one linked here, or for understanding image formation, lenses, etc. Many people seem to imagine that all this stuff is now obsolete, and that all you need to do is wire up some standard APIs and ask an LLM to glue the JSON together, and that's all there is to being a computer vision engineer nowadays. Maybe even pros will self-deprecatingly say that, but after a bit of chatting it becomes obvious they have plenty of background knowledge beyond prompting vision language models.
So it's not disdain; I'm simply trying to broaden the horizons of those who only know computer vision from OpenAI announcements, tech news, and FOMO-driven social media influencers.
Here are two examples where the right camera, optics, and lighting make a huge difference:
Semiconductor Wafer Inspection: Detecting tiny defects like scratches or edge chips requires high-resolution cameras, precision optics, and specific lighting (e.g., darkfield) to highlight defects on reflective surfaces. Poor choices here can easily miss critical flaws.
Food Packaging Quality Control: Ensuring labels, seals, and packaging are error-free relies on the right camera and lighting. For instance, polarized lighting reduces glare on shiny surfaces, helping detect issues that might otherwise go unnoticed.
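To make the darkfield point above concrete, here is a toy sketch (my own illustration, not from the comment; the threshold values and synthetic image are assumptions). Under darkfield lighting, a flat defect-free surface reflects almost no light into the camera, so defects show up as bright pixels on a near-black background and a simple global threshold can already separate them:

```python
import numpy as np

def find_defects(img, thresh=0.5, min_pixels=3):
    """Flag bright blobs in a (normalized 0..1) darkfield image.

    In a darkfield setup the background is near zero, while scratches
    and chipped edges scatter light toward the camera and appear bright,
    so thresholding alone is often a reasonable first pass.
    """
    mask = img > thresh                       # bright pixels = candidate defects
    defective = int(mask.sum()) >= min_pixels  # enough bright area -> reject part
    return mask, defective

# Synthetic darkfield frame: dark background plus one bright scratch.
frame = np.full((64, 64), 0.05)
frame[30, 10:20] = 0.9  # simulated scratch scattering light

mask, defective = find_defects(frame)
```

With brightfield lighting the same scratch would be a faint contrast change on a bright background, and this trivial threshold would not work; that gap is exactly why the choice of lighting geometry matters before any algorithm is written.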
Not certain I agree about the payment part. Why not leave it up to the DM receiver to decide whether or not they want to charge for their time, and respond accordingly?
I meant the platform should charge to let the DM through, but yes, the DM receiver can also set the charge beforehand. If they're a small-time player, they could be allotted some default fee, or perhaps they could receive DMs for free. This can be fine-tuned later.
In the same vein: it's outright bizarre that the HN community in general has so much difficulty handling sarcasm and irony, and proceeds to downvote knee-jerkingly.
I'm confused: after all, on which bank of the Seine are the snowflakes?