Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

For anyone who missed the (poorly-explained) trick, the website uses a CSS trick to insert an equals sign, thus showing different code if read or if copied/pasted. That's how the author knows whether you solved it in your head or pasted it somewhere.




The thing I found particularly fascinating, because the article was talking about discarding AI applicants, is that if you take a screenshot and ask ChatGPT, then it works fine (of course, it cannot really see the extra equals sign)

So this is not really foolproof, and also makes me think that feeding screenshots to AI is probably better than copy-pasting


Sure, it's not foolproof, but a large percentage of folks would just copy&paste rather than taking the screenshot. Now that may start to change.

There was story like this on NPR recently where a professor used this method to weed out students who were using AI to answer an essay question about a book the class was assigned to read. The book mentioned nothing about Marxism, but the prof inserted unseeable text into the question such that when it was copy&pasted into an AI chat it added an extra instruction to make sure to talk about Marxism in relation to this book (which wasn't at all related to Marxism). When he got answers that talked extensively about the book in Marxist terms he knew that they had used AI.


This didn’t work for me because Reader Mode popped up and showed the “hidden” equals sign.

I did the same thing. After seeing the answer I thought “this CTO is a booger eating moron”…

Thanks, I was wondering how in the hell that many would get the answer wrong and what is this hidden equal sign he was talking about.

Maybe the question could be flipped on its head to filter further with "50% of applicants get this question wrong -- why?" to where someone more inquisitive like you might inspect it, but that's probably more of a frontend question.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: