
It would be interesting to see if this could be combined with those little sidewalk bots to do the last mile, effectively having the Waymos act like buses for the bots.



Those things need a last 100ft microbot


I’m curious about how you confirmed some things you wrote were in the dataset.


When Node was just getting big I met Mikeal for the first time at some Node event or another. It’s hard to overstate or forget how welcoming he was, always excited to see folks, and the one getting the energy of the room going.

The more time I spent with Mikeal the more I saw him doing all the small things that needed doing for a community, or an event. Even just hanging out, Mikeal was always so considerate and tried to make things special for everyone.

He will be missed.


His own events were rad. I remember O’Reilly running some giant corporate event (maybe called JSfest) and Mikeal rented out the Marines Memorial Theatre in San Francisco for a couple of days beforehand and had way more interesting and relevant talks by nearly every major person in the JavaScript ecosystem.


Yes! He organized a barebones conference out of the Meetup office in New York. The premise was that all attendees would do a 10 minute talk on any subject (I think it was called 10conf?). Mikeal spoke about his pour-over coffee setup while using it to make coffee for everyone.

I really enjoyed time I spent with him and appreciated his kindness and leadership in the community. My thoughts are with his family and loved ones.


Memory unlocked! I remember his coffee obsession and him going to extra effort to make sure that his events actually had real coffee rather than the usual conference garbage that was popular at the time.


Also “England” but the shape is the UK.


I think this is a great point. The author seems to have a selection bias. All the “great” problems in maths are the ones that have remained hard to solve.

All the stuff in the middle has been solved and then taught, and is no longer “interesting”. And as we build on those results, the new problems sit a bit further from the fundamentals, so you have to look in more specialized domains to find new areas.

What the author seems to forget is that most of the stuff we take for granted now was at one point the cutting edge of maths and obscure to all but the leading mathematicians of the time.


Is that based on the average level of maths at the time? I would argue that there are far more mathematicians today who understand the results from that period than there were then, because mathematical literacy, especially in higher education and in the developing world, has increased significantly.

The fact that those results are easier to understand comes down to that increased literacy. Trigonometry was the cutting edge of maths at one point, when mathematical literacy was even lower. Now it’s material for tweens.


I think this is one of those examples where many people wouldn’t think about botulism because both garlic and oil are common and store safely out of the fridge uncombined.

Things like meat people might be more skeptical about, but imo this comes back to whether Google et al. really trust their LLMs to give definitive answers on food safety. If it were me, this is one area where I’d have the LLM refuse or hedge all answers, like with other sensitive topics.


Actually the first hit [1] on Google I got for “is it safe to infuse garlic oil without heating” is this article from OK state saying that it’s only safe without heat if you use citric acid.

Of course, the incorrect Gemini answer was still listed above it.

[1] https://news.okstate.edu/articles/agriculture/2020/gedon_hom...


Thanks for your correction. This makes me think that how Gemini arrived at this answer is that it mashed together "heating first" with "no heating but with citric acid first" articles, but left out the (critically important) citric acid part.

I think this "failure mode" really highlights how LLMs aren't "thinking", but just mashing up statistically probable tokens. For example, there was an HN article recently about how law-focused LLMs made tons of mistakes. A big reason for this is that the law itself is filled with text that is contradictory: laws get passed that are then found unconstitutional, some legal decisions are overturned by higher courts, etc. When you're just "mashing this text together", which is basically what LLMs do, it doesn't really know which piece of text is now controlling in the legal sense.


I also like to pickle garlic. You don’t have to cook it, but if you don’t, you do have to pickle it and store it in the fridge. If you do that it can be safe for years (and actually pretty good after 5+ years). Even with acid it’s not foolproof at room temperature for long; I think you only get a few days.


If the phone was locked, passcoded, remote-erased, and SIM-locked, how did they know her number? I’m not doubting the story, I’m just curious how they figured that out.


I was wondering the same thing. Maybe it was on the lock screen? Seems unlikely that they managed to remember its association with the phone throughout, though.

edit: we have the answer https://news.ycombinator.com/item?id=40579230


> As you can see, most of the phones she tried didn’t have passcodes but were still linked to iCloud accounts

I guess she didn’t have a passcode on.


Sam Altman holds... no degree? So presumably using their academic qualifications as a basis for their eligibility for the board of OpenAI is as silly as using Mr Altman’s?

Would you give him the benefit of the doubt as an “AI expert” because of his well-known work experience at YC? Without discussing their careers at all, this comment seems at best unhelpful.

