No, it's an attempt to force, and has the same flaw as all attempts to force, which is: the other side might just call your bluff.
"Oh you think that going ahead before a decision is made and just publish a post will get us to decide in your favor because we're afraid of looking like we're backtracking? How about we don't decide in your favor and delete your post? What you gonna do now?"
I asked GPT-4 about this first and it didn't really know. What is it about "first principles" that's cringeworthy or pretentious? I think I'm out of the loop on this one.
> I've come to the conclusion that convex functions are functions that are trivial to optimize
I think a lot of maths is created like this. I'm trying to solve problem P in domain D. I can't do that, but I notice that I can do it in a subset D1 of the domain. If problem P is important enough and D1 large enough, I become very famous within mathematics, and people start studying D1 by itself, without connection to P. After a generation or more, many experts in D1 might not know the original reason someone decided to study it. Eventually some historians of science rediscover the original motivation and publish it as a curiosity.
Very true, and combined with the fact that deep understanding of D1 sometimes turns out, centuries later, to be of practical use in building some sort of physical machinery.
Sounds about right...? By the end of November 2023 the average German worker had 20 sick days already. Add 2 for December and 2 for good measure and you're at 2/month.
Sometimes it blows my mind to discover that some bleeding obvious things aren't well established. Next thing you're gonna discover that kids with funnier parents tell more jokes, or that kids with angrier parents shout more.
Assuming you're right, it doesn't change the point. Commenters here need to follow the rules regardless of what others are doing.
> Are you going to parrot that BS at every one of them, or just me, you dong?
As we already had to ask you not to post like this (https://news.ycombinator.com/item?id=38003769), it seems that you don't want to use HN as intended. I've therefore banned the account.
If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. But please don't create accounts to break HN's rules with.
The title is an obvious statement of fact that anyone would naturally expect: through either genes or nurture, children tend to behave similarly to their parents.
A more interesting statement from the article, and a better title, would have been "Global data set suggests socioeconomic status does not play a role in children's language development".
Otherwise it reads like an Onion article or something:
"Study shows that children's skin color correlates with their parent's skin color".
Of course, the interesting science would be: nature or nurture?
E.g. in a "village", would the kids of quiet parents be exposed all day to all these other talkative people, and would the effect therefore disappear? Or is it genetic?
Good identical-twin studies would be very interesting here.
And I agree that, due to its complexity, this is one of the most fascinating questions out there: how much, and with what likelihood, can I as a parent influence my child?
Because, due to the complexity of it all, it sometimes happens that bad parents, providing a bad environment and perhaps bad genes as well, get good children, and vice versa. No one likes the idea of this happening, though, as it dismisses the idea of responsibility, and no one likes the idea that they won't have 100% control over raising their children.
It would be great to know the exact odds: e.g. if you classified parents into different buckets, how often would children of parents who provide a seemingly perfect environment still grow up without the desired characteristics?
Then sometimes identical twins who grow up with the same parents still develop different personalities. What exactly explains that? Is it that one of them just happens to take the leading role in their relationship, the other balances it out, and the difference between them snowballs because of the dynamic they build with each other?
It may be obvious, but as a relative HN long-timer, I've been hit too many times with "source please" when making statements based on what I thought were obviously well-known things, so studies like these are ammo for those sticklers who insist on seeing the sauce for everything.
Are you sure it's obvious? My parents are always joking, and what they transmitted to me is to hate everything. In my experience, people tend to hate the traits they grow up with and are continuously exposed to. As I often hear: "Who did he take that from?"
The whole point of a study is to break down these "obvious" conclusions and make sure we understand the exact causal chain, free of confounding variables. Extremely often, something that seems "obvious" is studied and turns out to be anything but.
I would still think that multiple dispatch is a better way to build libraries than OOP, but I agree that wouldn't be enough for something as drastic as changing languages.
True randomness doesn't drive better results for humans because of some intrinsic human quality; rather, it eliminates the AI's advantage of large-volume data access and processing. Basically, true randomness levels the playing field. Backgammon is a perfect example.
You want a game where searching large amounts of data, computing moves, and calculating probabilities quickly doesn't help.
Maybe some randomness will help, but might not be enough.
My idea is that bots can't win in the real-world economy: no fund driven solely by algorithms can outperform funds driven by humans.
So if we can find a game modeled like the economy, where nothing is random but many things are uncertain, then it might be harder for the software to win.
I would imagine that in many games, randomness (e.g. throwing dice, or drawing cards from a shuffled deck) adds noise, but the underlying strategy (including processing data, calculating probabilities, etc.) still helps. In such a game, even though an AI might be superior, the randomness might mean that the human occasionally wins. Or, for an extreme example: if nothing you do in the game really matters and it's all just down to random chance, then the AI:human win rate should approach 50%. But such a game would probably not be particularly enjoyable to play.
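The 50% claim in that extreme case is easy to sanity-check with a tiny simulation. This is just an illustrative sketch (the game and function name are made up): since neither player's choices enter the outcome, the "superior" player wins no more often than a coin flip would suggest.

```python
import random

def pure_chance_win_rate(num_games: int, seed: int = 0) -> float:
    """Simulate a game whose outcome ignores both players' moves:
    each round, the winner is decided by a fair coin flip."""
    rng = random.Random(seed)
    # Count the games the "AI" wins; its skill never enters the outcome.
    ai_wins = sum(rng.random() < 0.5 for _ in range(num_games))
    return ai_wins / num_games

print(pure_chance_win_rate(100_000))  # hovers around 0.5
```

With any strategic component at all, the rate would drift above 0.5 in proportion to how much the better-calculated moves actually influence the result.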
But yes, I think you're right that you'd need a game where crunching a lot of numbers really fast doesn't give you an advantage. For instance, if the state space of the game is so big that number crunching is useless, and other approaches like AI-style pattern matching (used, IIRC, by AlphaGo?) don't work either.
Though ultimately, what is the uniquely human trait that would allow a human to beat an AI? Can you make a game that depends on that? Is there even such a thing?
> if nothing you do in the game really matters, it's all just down to random chance... But such a game would probably not be particularly enjoyable to play.
I guess all the slot machine players tend to differ in opinion there :)
Sure, slot machine operators adjust the winning chances so players keep playing, but from the player's perspective, the outcome isn't influenced by anything they do, other than the belief that "just one more time and I'll win".
TBH, "real world economy" does allow humans to rewrite the rules, and they've done so: issuing more bonds, deflating the currency, printing more money, bailing out broke banks, hiding facts and selling before downturn goes public, pure and simple fraud...
I’m not saying a human would be better than an AI. Just saying it would be a more equal playing field, since neither would have a significant advantage.
I think this depends on whether the human has a chance in a particular run (yes) or on average over many runs (probably not, because the AI will calculate the probabilities better).
But in the extreme case of a random game (like rolling the highest number on a die) they are equal (obviously).
>I would like to hear your reasoning as to why you think that a human is especially good at dealing with uncertainty.
We can ask any successful CEO. Fortune 500 companies would use bots if that were possible.
Uncertainty doesn't equal randomness. Randomness is flipping a coin and asking you the result. Uncertainty is hiding the coin behind my back and asking you which hand it's in.
That's disanalogous to board games. We're comparing board games with uncertainty to board games without it. In either category, the thing that makes AI competent is unlimited training data from self-play.
"Oh you think that going ahead before a decision is made and just publish a post will get us to decide in your favor because we're afraid of looking like we're backtracking? How about we don't decide in your favor and delete your post? What you gonna do now?"