
Looks interesting, thank you!


> 2. You end up needing to use `uv pip` so it's not even a full replacement for pip.

Needing pip and virtualenvs was enough to make me realize uv wasn't what I was looking for. If I still need to manage virtualenvs and call pip, I'm just going to do so with both of them directly.

I had been hoping someone would introduce the non-virtualenv package management solution that every single other language has where there's a dependency list and version requirements (including of the language itself) in a manifest file (go.mod, package.json, etc) and everything happens in the context of that directory alone without shell shenanigans.


> I had been hoping someone would introduce the non-virtualenv package management solution that every single other language has where there's a dependency list and version requirements (including of the language itself) in a manifest file (go.mod, package.json, etc) and everything happens in the context of that directory alone without shell shenanigans.

Isn't that exactly a pyproject.toml via the uv add/sync/run interface? What is it missing that you need?
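
For reference, a minimal sketch of what that manifest can look like (the name and version pins here are placeholders, not from any real project):

    [project]
    name = "my-app"
    version = "0.1.0"
    requires-python = ">=3.12"
    dependencies = [
        "requests>=2.31",
    ]

The `requires-python` field is the part that pins the language version itself, like the engines field in package.json.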


> pyproject.toml

Ah ok I was missing this and this does sound like what I was expecting. Thank you!


What do you need `uv pip` for? I don't think I ever call into pip from uv for anything nowadays. I typically just need `uv sync` and `uv run`, maybe sometimes `uvx` if I want to run some random 3rd party Python tool.
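
My whole workflow is roughly this (the script name is hypothetical):

    uv sync          # create or update the venv from pyproject.toml and uv.lock
    uv run main.py   # run inside that environment, no manual activation
    uvx ruff check   # run a third-party tool in a throwaway environment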


> I had been hoping someone would introduce the non-virtualenv package management solution that every single other language has where there's a dependency list and version requirements (including of the language itself) in a manifest file (go.mod, package.json, etc) and everything happens in the context of that directory alone without shell shenanigans.

If you are using uv, you don't need to do shell shenanigans; you just use `uv run`. So I'm not sure how uv with pyproject.toml doesn't meet this description (yes, the venv is still there, but it is used exactly as you describe).


In most cases, you don't really need to manage virtual envs though? uv commands that need a venv will just create one for you or install to the existing one automatically.


Pixi is an alternative that you may want to try.


Really sounds like you're using it wrong, no? I completely forgot about virtualenvs, pip, and requirements.txt since I started using uv.



Wow, I didn't know this history about Akamai; thanks for mentioning it. Interesting as a former Linode guy and a fan of consistent hashing.


What is the reason or benefit of them being so secretive about this?


Usually, though not always, a company will tell you if they're making money on something, and if they're not, they beat around the bush like this. Notice how, for example, Gwynne Shotwell never beats around the bush like this when talking about Starlink.

Notice the weird language:

> That’s making progress in terms of unit economics very, very positive.

He says the "progress" is "very, very positive," but if you're not paying close enough attention you might come away thinking that the unit economics are what's very very positive.

All that said, what he's saying makes sense. They're able to charge more for their rides since they offer the convenience of not having to deal with a driver, and they're not paying the driver, who is the most expensive part, so yea, I'm bullish on them.


They don't gain much from disclosing anything, imo; their competition reads every word they say. I'm not sure it matters that much, but as a habit I don't see why they should disclose exact numbers.


Waymo doesn't gain anything. Google, i.e. Alphabet Inc., does.

Especially these days. Every scrap of news that could pump the stock price is publicized aggressively.

And this makes the absence of such actions suspicious.


Not really: these past few years, listed companies have tended to be _very_ pessimistic in their quarterly projections, and then reveal either that it wasn't that bad and nothing changes, or that it was great and their valuation shoots up. Weirdly, the market doesn't react to those pessimistic projections, so it seems it's just a safe play for CEOs. They started doing that in Europe as well.


Because no highly indebted company is going to "strongly hint" that they aren't just hemorrhaging cash like everyone assumes; they will absolutely let you know. "Hints" are just best-effort accounting aesthetics to make it seem like the dream is just around the corner.


They have to follow SEC rules about disclosing it.


Guessing from the relevant master's thesis linked on his site and his time at Materialize and TigerBeetle, Jamie's been working on databases for at least like 15 years?

At least personally when Jamie says something I listen. :)


Many of these points are not compelling to me when 1) you can filter both rows and columns (in Postgres logical replication anyway [0]; see the sketch below) and 2) SQL views exist.

[0] https://www.postgresql.org/docs/current/logical-replication-...
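
To illustrate 1), Postgres 15 and later let a publication do both at once; table and column names here are made up:

    -- replicate only three columns, and only shipped orders
    CREATE PUBLICATION orders_pub
        FOR TABLE orders (order_id, status, total)
        WHERE (status = 'shipped');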


Is it possible to create a filter that can work over a complex join operation?

That's what IVM systems like Noria can do. With an application + cache setup, the application stores the final result in the cache. With these new IVM systems, you get that precomputed data directly from the database.

Views in Postgres are not materialized, right? So every small delta would require a refresh of the entire view.
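
i.e., something like this (names hypothetical), where even a one-row change to `orders` means recomputing everything:

    CREATE MATERIALIZED VIEW customer_totals AS
        SELECT customer_id, sum(total) AS total
        FROM orders
        GROUP BY customer_id;

    -- recomputes the full query, no matter how small the delta was
    REFRESH MATERIALIZED VIEW customer_totals;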


Take a look at Alex Miller's diagrams for what function calls are actually doing on various systems.

https://transactional.blog/how-to-learn/disk-io


> Sqlite's test suite simulates just about every kind of failure you can imagine

The page you link even mentions failure scenarios they know do happen and that they still assume won't. So even SQLite doesn't make anywhere near as strong a claim as you make.

> SQLite assumes that the operating system will buffer writes and that a write request will return before data has actually been stored in the mass storage device. SQLite further assumes that write operations will be reordered by the operating system. For this reason, SQLite does a "flush" or "fsync" operation at key points. SQLite assumes that the flush or fsync will not return until all pending write operations for the file that is being flushed have completed. We are told that the flush and fsync primitives are broken on some versions of Windows and Linux. This is unfortunate. It opens SQLite up to the possibility of database corruption following a power loss in the middle of a commit. However, there is nothing that SQLite can do to test for or remedy the situation. SQLite assumes that the operating system that it is running on works as advertised. If that is not quite the case, well then hopefully you will not lose power too often.
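
The assumed protocol boils down to something like this sketch (Python just for illustration, not SQLite's actual code):

    import os

    # append a commit record to the journal file
    fd = os.open("journal", os.O_WRONLY | os.O_APPEND | os.O_CREAT)
    os.write(fd, b"commit record\n")

    # the key assumption: fsync does not return until every pending
    # write to this file is on stable storage; if the OS lies here,
    # a power loss mid-commit can corrupt the database
    os.fsync(fd)
    os.close(fd)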


There was a time that Oracle databases used raw disk partitions to minimize the influence of the OS in what happens between memory and storage. It was more for multiple instances looking at the same SCSI device (Oracle Parallel Server).

I don't think that is often done now.


> So even SQLite doesn't make anywhere near as strong a claim as you make.

And? If you write to a disk and later this disk is missing, you don't have durability. SQLite cannot automatically help you commit your writes to a satellite for durability against a species-ending event on Earth, and hence its "durability" has limits exactly as spelled out by them.


You're arguing a strawman, and I pointed at a specific example. Sticking with my specific example: they could probe for this behavior or this OS version and crash immediately, telling the user to update their OS. Instead, it seems they acknowledge this issue exists and hope it doesn't happen. Which, hey, everybody does, but that's not the claim OP was making.


It's not really a library's job to cover all bases like you're suggesting. They outline the failure scenarios fairly well and users are expected to take note.


To the contrary: when all the good New York meetups (Papers We Love, Linux User Group) didn't come back, and inspired by the continuously running Munich Database Meetup and TUMuchData, I started the NYC Systems Coffee Club and co-started NYC Systems (a talk series), after which came Berlin Systems Group, Bengaluru Systems Meetup, San Francisco Systems Club, Systems from HEL, DC Systems, Vancouver Systems, South Bay Systems, and Seattle Systems.

So I think people are really eager for high quality talks and chances to gather with smart people.

What's more I think there are not enough meetups in almost any major city to satisfy the demand of speakers or attendees. For example, NYC Systems gets hundreds of people asking to speak (we have 12 speakers a year) and gets 2-3x as many attendees wanting to come as we have space for.

