Hacker News | jedberg's comments

Looking back at the best leaders I've ever worked for, they all followed that philosophy that was explicitly stated at Netflix: Context not Control.

The goal of the manager was to explain to their reports what problems the team needed to solve and why, make sure the team was aware of any factors elsewhere in the org that might make a difference, and then connect the people on their team with the people on other teams they needed to talk to.

Beyond that the leader's job was to seek out such context from their peers and leadership.

But then it was up to the IC to figure out the how. The manager never told me how to accomplish the task unless I asked, and even then it was more as a mentor than as a manager. And when I was a junior, most of that mentorship came from my more senior peers rather than my manager.


Yeah, this makes sense. What I am missing from both the blog post and the comments is that the most valuable contribution a manager can provide is information, not some elusive, esoteric, corporate-speak mumbo-jumbo. And when I write information I mean information in advance (you can call this a call for comments if you wish), not after the fact when decisions have already been made. 99% of managers would rather focus on some self-patting feel-good mindfulness guru teachings instead, and I wonder why...?

I've been saying this for years, since the first AI coding models came out. Where do the juniors go to learn? I'm a senior engineer because I got to do a bunch of annoying tasks and innovate just slightly to make them better.

That opportunity is now lost. In a few years we will lack senior engineers because right now we lack junior engineers.

All is not lost however. Some companies are hiring junior engineers and giving them AI, and telling them to learn how to use AI to do their job. These will be our seniors of the future.

But my bigger concern is that every year the AI models become more capable, so as the "lost ladder" moves up, the AI models will keep filling in the gaps, until they can do the work of a Senior supervised by a Staff, then the work of a Staff supervised by a Principal, and so on.

The good news is that this is a good antidote to the other problem in our industry -- a lot of people got into software engineering for the money in the last few decades, not for the joy of programming. These are the folks that will be replaced first, leaving only those who truly love solving the hardest problems.


I'm more pessimistic. It costs too much to go back to college and retrain. The result is going to be a generation of ambitious people doing a craft they hate. The results are going to be dismal.

They're overstated. The median commute time in the USA is about 27 minutes each way. NYC is the highest at 33 min.

For tech hubs? Because tech hubs tend to be in some of the cities with the worst traffic. I have worked in DC and Atlanta. My commute for every job except one was an hour. The one exception was 20 minutes, because it was a small, oddly located company that just happened to be in the suburb next to mine.

For all my other jobs, I had to commute to a business district I didn't live near, because business districts rarely mix with low prices (when young) or great schools (when older).

Yeah, I know the median commute in these areas is low, but that figure counts retail workers and teachers. I bet the median for tech workers is pretty high because of where tech offices tend to be located.


In a real tech hub, it's definitely going to be a longer commute. Nashville, for instance, is not a tech hub. Yet it has some of the worst commute times for people who have an office there.

Most cities on earth have mixed zoning, with offices and living spaces side by side. There, this problem does not occur.

Is that one way?

> 27 minutes each way

If you reload the page a few times, and you're using a modern browser, you'll almost certainly find it's a different fingerprint every time. Most modern browsers add randomization so that fingerprinting can't be used for tracking.

So yes, your fingerprint is unique, but it's a different unique every time, making it pretty useless for anything.


Seems right, I'm on "Mozilla/5.0 (X11; Linux x86_64; rv:145.0) Gecko/20100101 Firefox/145.0" and reloading the page I get a new fingerprint each time. "Unique Fingerprint ID" seems to be the only attribute that changes each reload, but it isn't clear how that's derived.

Edit: Ah, it turns out "Unique Fingerprint ID" is just the same fingerprint ID printed at the top; it isn't one of the attributes used to calculate the ID, it is the ID. Guess I was confused by its placement.


Yeah, I made the mistake of including all features, even ones that change on every refresh like canvas or audio.

The fingerprint should really only use stable features that don’t fluctuate between reloads. That way it’s consistent for the same device.


> No, I have an iPhone Pro and am in the PST time zone, set to English. It has the exact same finger print as millions of other devices among the 40 million people in the PST time zone.

Your IP address, ASN, and location make this not true.


Those have nothing to do with "what the browser exposes". They are exposed regardless of what you use to connect.

But the browser knows them too and the site can get you to reveal it, even behind a vpn.

Unlikely, given that you don't know it has no door until after you get there.

And also, when I travel with my kids, I still want to close the door.


Pydantic also has a first party integration with DBOS, which doesn't require an external state server.

https://ai.pydantic.dev/durable_execution/dbos/


I am not a smart person. How are you avoiding using webhooks but giving the same functionality? I read the materials but still don't understand.

We eat all the webhook pain so you don't have to. Much of our code deals with the complex state transitions triggered by webhooks. We just hand you the end result.

Instead of having to implement all that yourself, you just read the latest billing and entitlement state for each of your customers from Flowglad:

const billing = flowglad(user.id).getBilling()

const hasFastGen = billing.checkFeatureAccess('fast_generations')
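To make the shape of that concrete, here's a toy mock of the pattern (the real Flowglad SDK surely differs; `billingStore` and `handleWebhook` are invented stand-ins): webhook events get absorbed on the vendor side into per-customer state, and the app just reads the latest snapshot.

```javascript
// Vendor side: webhook events mutate a per-customer state store.
const billingStore = new Map(); // customerId -> { features: Set }

function handleWebhook(event) {
  const state = billingStore.get(event.customerId) || { features: new Set() };
  if (event.type === "entitlement.granted") state.features.add(event.feature);
  if (event.type === "entitlement.revoked") state.features.delete(event.feature);
  billingStore.set(event.customerId, state);
}

// App side: no webhook handlers, just a read of the latest state.
function flowglad(customerId) {
  return {
    getBilling: () => ({
      checkFeatureAccess: (feature) =>
        (billingStore.get(customerId)?.features ?? new Set()).has(feature),
    }),
  };
}

handleWebhook({ customerId: "u1", type: "entitlement.granted", feature: "fast_generations" });
const access = flowglad("u1").getBilling().checkFeatureAccess("fast_generations");
console.log(access); // true
```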


Oh I see, you're still running a server, the open source bit is just your SDK. I thought your entire system was open source and didn't understand how you could trigger workflows without webhooks. Makes more sense now, thanks!

What are you using to manage your workflows on your backend?


We're using trigger.dev for processing workflows and we really love the product (and team)

Oh excellent! I'm the CEO of DBOS, and we're fans of trigger too. If you ever branch out from typescript or need to run on your own infra, we can help you out!

Visa and Mastercard don't want you to integrate with them. They want you to go through an intermediary that they can have more contractual control over and that they can rate limit and monitor.

Up until today, the general advice was use Opus for deep research, use Haiku for everything else. Given the reduction in cost here, does that rule of thumb no longer apply?

In my opinion Haiku is capable, but there's no reason to use anything lower than Sonnet unless you are hitting usage limits.
