
I think it’s important to test these systems. Let some % of candidates who get this wrong through to the next stage and see what happens. Does failing this test actually correlate with being a bad fit later?

If you want to ineffectively filter out most candidates, just auto-reject everything that doesn’t arrive on a timestamp ending in 1.





> Let some % of candidates who get this wrong through

Really, the better test would be not to discriminate on it before you know it's useful, but to store their answer to compare later.


You're right. I agree.

_How_ can you be a good hire for a _software engineering_ position, if you can’t get that one correct though?

It depends on why they didn't get it "correct" (asked ChatGPT: bad; used a Python REPL: not so bad; used a screen reader: not bad at all) and on what "correct" even means for this problem.

There's a bizarro version of this guy who rejects people who solve it in their head, because they weren't told not to use an interpreter and he values using the tools available to solve a problem. In his mind, the = is definitely part of the code, and you should have double-checked.


Oh. I was reading this on a phone, and didn’t realise there’s a hidden equal sign (though it’s mentioned).

That does change it, in that I can see how false negatives may arise. Though, when hiring, you generally care a lot more about false positives than false negatives.


> Let some % of candidates who get this wrong through to the next stage and see what happens.

This isn't a good methodology. To do your validation correctly, you'd want to hire some percentage of candidates who get it wrong and see what happens.

Your way, you're validating whether the test is informative as to passing rate in the next stage of your hiring process, not whether it's informative as to performance on the job.

(Related: the 'stage' model of hiring is a bad idea.)
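The validation being described could be sketched in a few lines. This is a hypothetical illustration with made-up data and names (`got_it_right`, `perf_scores`): hire some candidates regardless of their answer, record it, and later check whether the answer correlates with actual job performance rather than with next-stage pass rate.

```python
# Hypothetical sketch: validate a screening question by recording answers,
# hiring some candidates who got it wrong anyway, and later checking whether
# the answer predicts on-the-job performance.

def correlation(xs, ys):
    """Plain Pearson correlation, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Made-up data: 1 = answered the puzzle correctly; scores are later
# performance reviews for people who were hired either way.
got_it_right = [1, 1, 0, 1, 0, 0, 1, 0]
perf_scores  = [4.1, 3.8, 3.9, 4.4, 4.0, 3.2, 4.2, 3.7]

r = correlation(got_it_right, perf_scores)
print(f"correlation between test result and performance: {r:.2f}")
```

A correlation near zero would suggest the question filters on something other than job performance, which is the point being made above.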



