Hacker News | g9yuayon's comments

> so we have been unwilling to invest in our own children.

School districts like SFUSD are actually sabotaging the growth of our kids in the name of equity. They're committed to ideas from people like Jo Boaler, and they have tried very hard to dumb down the curriculum. The real tragedy is that kids from wealthy families will just get other means of education to make up the difference. It's the kids who desperately need quality education who are going to be left behind.

If it were up to me, I'd send those people to jail (yes yes, I know. I'm just angry and lashing out)


I looked up this lady on Wikipedia, but I couldn't find any obvious problems. It says she's a math educator with degrees from known universities and lots of published research?


https://stanfordreview.org/jo-boaler-and-the-woke-math-death..., and the Wikipedia article on the Math Wars: https://en.wikipedia.org/wiki/Math_wars

Personally, I find Boaler's advocacy extreme. Her famous quote: "Every student is capable of understanding every theorem in mathematics – and beyond – the mathematics curriculum. They just need the opportunity to struggle with rich tasks and see mathematics as a conceptual, creative subject." This sounds inspiring, but in practice she advocated policies that truly dumbed down math curricula and textbooks. If nothing else, shouldn't she at least demonstrate that she herself can understand any theorem? Instead, she advocated that SFUSD eliminate algebra from 8th grade. Another example: the curriculum she advocated, College Preparatory Mathematics, was boring and trivial. She also said something along the lines of "Traditional mathematics teaching is repetitive and uninspiring. We give students 30 similar problems to do over and over again, and it bores them and turns them off math for life." What's funny is that the alternatives Boaler prescribed were themselves quite uninspiring and low-level: https://www.youcubed.org/tasks/. All I can derive from her policies and complaints is that she couldn't do math. Why people would listen to someone who sucked at math about math education is beyond me.


I'm curious why Lisp didn't gain mass popularity despite its advantages. In fact, I wonder whether its popularity has even decreased in the past decade or so. I remember that in the 2000s and even the early 2010s there were active discussions of Clojure, Scheme, and functional/logic programming in general. There seems to be much less discussion or usage nowadays. One theory is that popular languages have absorbed many features of functional programming, so mainstream programmers do not feel the need to switch. My pet theory is that many of us mortals get our productivity boost from the ecosystem, in particular powerful libraries and frameworks. Given that, the amazing features of Lisp, such as s-expressions, may not be powerful enough to sway users to switch.


Lisp has disadvantages too. I wrote about some of them in https://paulgraham.com/redund.html.


Is it the right link? "Lisp" is mentioned once, in a good way. It's an old post too; it mentions Emacs' ilisp mode, the now-unused ancestor of Slime.


it’s also Paul Graham’s website…

https://paulgraham.com/bio.html


It's confusing, but "kragen sitaker" actually appears in the article title, as if it were an invited post.


I originally posted it to a mailing list where we were discussing the issues: https://people.csail.mit.edu/gregs/ll1-discuss-archive-html/...

He posted it on his website with my approval, presumably because he thought it was a different perspective worth reading.


> I'm curious why Lisp didn't gain mass popularity despite its advantages.

In my opinion, Lisp is too flexible. I think the ideal use of Lisp is to have one or a few talented developers explore the problem space and create the MVP, then have a follow-on team reimplement it in a mainstream language that's more maintainable by "mere mortals".

Ime it’s similar to the fact that projects implemented with statically typed languages are easier to maintain than dynamically typed languages. Lisp is so flexible even lexical scoping (of variables for example) is a choice. Not what I’d care to have juniors or run of the mill seniors responsible for!


Clojure seems to be pretty strong. At least here in Brazil, several companies use it as their main programming language.


Lisp was the first language to have first-class functions and garbage collection, which have become common language features by now. But it also has many features that still are not widespread. Its metaprogramming is unparalleled, especially reader macros; a lot of its power comes from that. Rust macros and C++26 compile-time reflection are steps in the right direction but still nowhere near what Lisp offers. Java's Project Babylon is also cool but not in the same ballpark.

When doing joint debugging with teammates, I've seen so many of them randomly add or remove & and * from C++ statements trying to get their code to compile, without bothering to reason through the issue. I suspect this stochastic approach to code development is pretty common. That is not going to unlock the benefits of metaprogramming either, where you have to deliberately build up the language you want to have.

Metaprogramming is extremely powerful but hard to use, especially for novices. I also think a general lack of education about programming languages and compilers is at play here. So much of Lisp's power comes from that kind of knowledge.


I think both can be true. I learned a lot at my university, and that learning has been carrying me ever since. Case in point: it was never a problem for me to pick up functional programming or programming-language concepts in general, because the courses on programming languages were so wonderful. I had no problem tapping into formal verification, data science, or distributed systems because my universities gave me solid fundamentals. Heck, I was not even a good student back then. It was Sam Toueg of failure-detector fame who taught us distributed systems, yet I was lost most of the time and thought he was talking abstract nonsense. Only after I graduated could I appreciate the framework for analyzing distributed systems that he taught us.

On the other hand, we certainly learned more after graduation (or something is wrong, right?). When I took the AI course, the CS department was all about symbolic reasoning; I didn't even know that Hinton was in the same department. I think what matters is that the core training stayed with me and helped me learn new stuff year after year.


My own experience: https://www.quora.com/Could-online-coding-programs-and-codin...

And my wife's experience: https://www.quora.com/What-is-it-like-to-learn-computer-scie...

In short, the training that we got from our universities was invaluable, and I always feel fortunate and grateful to my CS department.


I can attest to how useful Bayesian analysis is. My team recently needed to sample from many millions of items to test their quality. The question was: given a certain budget and expectations, what is the minimum or maximum number of items we need to sample? There was an elegant solution to this problem.

What was surprising, though, was how reluctant the engineers were to learn such basic techniques. It's not as if the math were hard. They all went through first-year college math, and I'm sure they did reasonably well.
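
For the curious, here is a minimal sketch of the kind of calculation I mean. The pass/fail quality check, the Beta prior, and the specific thresholds below are illustrative assumptions, not the exact setup my team used:

```
# A sketch of a Beta-Binomial sample-size calculation (illustrative; the
# threshold, confidence, and prior are assumptions, not my team's real numbers).
from scipy.stats import beta

def min_sample_size(threshold=0.01, confidence=0.95, max_failures=0,
                    prior_a=1.0, prior_b=1.0, n_cap=100_000):
    """Smallest n such that seeing at most `max_failures` bad items implies
    P(defect_rate < threshold | data) >= confidence under a Beta prior."""
    for n in range(1, n_cap + 1):
        # Posterior after k failures in n trials is Beta(prior_a + k, prior_b + n - k).
        k = max_failures
        if beta.cdf(threshold, prior_a + k, prior_b + n - k) >= confidence:
            return n
    return None  # the cap was reached without meeting the requirement

# With a uniform prior and zero observed failures, roughly 300 samples are enough
# to be 95% sure the defect rate is below 1% (the classic "rule of three").
print(min_sample_size())
```

The point is not this particular prior or threshold, but that once you know the model, the whole decision fits in a dozen lines.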


What were they reluctant to learn? Why do they need to learn it?

Plenty of engineers have to take an introductory stats course, but it's not clear why you'd want your engineers to learn bayesian statistics? I would be surprised if they could correctly interpret a p-value or regression coefficient, let alone one with interaction effects. (It'd be wholly useless if they could, fwiw).

It'd be nice if the statisticians/'data scientists' on my team learned their way around the CI/CD pipelines, understood kubernetes pods, and could write their own distributed training versions of their pytorch models, but division-of-labor is a thing for a reason, and I don't expect them to nor need them to.


I guess I have a different philosophy: whoever owns the problem should learn everything necessary to solve it. In my case, the engineers showed no interest in learning the algorithm and the math behind it. For instance, when they built the dashboard for the testing, they omitted a few important columns and got the column names wrong. When I tested them on their understanding of the method, there was none. At the very least, my team should know enough to challenge me in case I made any mistake, or so I assume.

On a side note, I believe it is an individual's responsibility to find the coolness in their project. What's the fun of building a dashboard I have built a thousand times? What's the fun of carrying out a routine that does not challenge me? But solving a problem in the most rigorous and generalized way? That is something in which an engineer can find some fun. Or maybe it's just me.


And competition from Airbus may not make Boeing better either. On the contrary, Boeing may well enter a death spiral and die a slow but painful death. What competition really means is that incumbents can die without impacting customers, because more competent alternatives will fill the void.


The example in the article does not look like LLM Inflation so much as a case where an LLM can't reduce the waste in a bureaucratic process.


I used to try services like Blinkist. Did anyone have a similar experience to mine? I simply couldn't remember what I read, let alone what I listened to. The summaries, despite being reasonably detailed and having key points and representative examples, were still bland and boring, to the point that they left little impression on me.


This isn't due to Blinkist, but to how you consume (high-information-density) content. What you need to do is write down the insights you get from it in a way that ensures you will see them again when they are relevant.

Lengthy material has lots of repetition and offers different access routes to the insights and information. Even then, the above approach works much better than hoping that passive consumption will lead to memorization.


That’s why Patrick said it helps to have a strong reputation going in. Still, you can absolutely negotiate—just make sure you have real leverage. That usually means a competing offer from another solid company (ideally a competitor).

Keep this in mind: it’s really hard for companies to hire good engineers. The onsite-to-offer ratio might be 20:1 or worse. So when a recruiter says they’ll just move on to the next candidate, they’re probably bluffing.

But what if they do have 20 people lined up? Then you don’t have leverage with that company—and that’s fine. Take the offer if it’s good enough, or walk and try elsewhere.

P.S., a fun anecdote: when Netflix was extending an offer to a renowned engineer, he brought his PR to negotiate. Apparently, it worked well for him.

P.P.S., always interview for a higher title. I get it: it's tough with hot companies like OpenAI. But for most places, it's worth a shot. At the very least, don't aim lower than your current level. It's funny how the human mind works: interviewers anchor their expectations to your title. And ironically, a senior engineer interview is often just as hard as a staff-level one. If you're feeling cynical, just remember: title inflation is real and everywhere, and plenty of high-level ICs are great at navigating politics, drawing boxes, and sounding confident, but not necessarily skilled at delivering real value, like solving hard engineering problems. So if you can't beat the game, why not play it?


Specifying a system correctly can be hard with the previous generation of tools. For instance, using LTL to describe system properties is not necessarily easy. I remember there used to be a pattern library for model checking and temporal logic. For something as simple as a bounded-existence check (e.g., that P occurs at most twice between Q and R), one has to write an LTL formula like the one below. That is certainly beyond most people's interest. Fortunately, tools have improved a lot, and for many cases engineers no longer need to study temporal logic deeply.

```
[]((Q & <>R) -> ((!P & !R) U (R | ((P & !R) U (R | ((!P & !R) U (R | ((P & !R) U (R | (!P U R))))))))))
```



Thanks! I used to use the same pattern library, hosted at either CMU or PSU, IIRC. Glad that it has a new home.

