Hacker News | tdumitrescu's comments

If the last 9 months have shown us anything, it's that long-running government institutions are a lot easier to kill than we thought. And the idea of archive.org being under the control of an administration like the current one in the US is pretty frightening. They would have absolutely zero qualms about deleting and changing that data.


Ditto for the other side of the aisle. We still don't know who was really the president, while everyone pretended Biden was not a dementia patient.


Totally agreed. They are just less obvious about it.


Frankly, I envy people who still have sides and don't see the world as cynically as I do. For me, Clinton, Obama, Bush, Trump don't matter; they're just different slave masters for the same system.


That's the main problem with evented servers in general, isn't it? If any one of your workloads is CPU-intensive, it has the potential to block the serving of everything else on the same thread, so requests that should always be snappy can end up taking randomly long times in practice. Basically, if you have any CPU-heavy work, it shouldn't go in that same server.
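The failure mode is easy to reproduce with asyncio. Below is a minimal sketch (function names and workload sizes are made up for illustration): running CPU work inline on the event loop delays an unrelated quick request, while handing the same work to the default thread-pool executor keeps the loop responsive.

```python
import asyncio
import time

def cpu_heavy(n: int) -> int:
    # Stand-in for report generation, inference, etc.
    return sum(i * i for i in range(n))

async def measure():
    loop = asyncio.get_running_loop()

    # Inline: the CPU work runs on the loop thread, so the quick
    # request can't even be scheduled until it finishes.
    t0 = time.perf_counter()
    quick = asyncio.create_task(asyncio.sleep(0.001))
    cpu_heavy(3_000_000)
    await quick
    inline_latency = time.perf_counter() - t0

    # Offloaded: the same work runs on a worker thread via the
    # default executor, so the loop keeps serving the quick request.
    t0 = time.perf_counter()
    quick = asyncio.create_task(asyncio.sleep(0.001))
    work = loop.run_in_executor(None, cpu_heavy, 3_000_000)
    await quick
    offloaded_latency = time.perf_counter() - t0
    await work
    return inline_latency, offloaded_latency

inline_latency, offloaded_latency = asyncio.run(measure())
print(f"inline: {inline_latency:.3f}s  offloaded: {offloaded_latency:.3f}s")
```

Even offloaded, pure-Python work still contends for the GIL, so the quick request's latency improves by orders of magnitude but doesn't vanish entirely.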


Indeed. Async is one of those things which makes a big difference in a handful of scenarios but which got promoted as a best practice for everything. Python developers have simply joined Node and Go developers in learning that it's not magic “go faster” spray, and that reasoning about things like peak memory load or shared resource management can be harder.


My system is written in Python because it is supported by a number of batch jobs that use code from SBERT, scikit-learn, numpy and such. Currently the server doesn't do any complex calculations but under asyncio it was a strict no-no. Mostly it does database queries and formats HTML responses but it seems like that is still too much CPU.

My take on gunicorn is that it doesn't need any tuning or care to handle anything up to the large workgroup size other than maybe "buy some more RAM" -- and now if I want to do some inference in the server or use pandas to generate a report I can do it.
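For reference, the "no tuning" setup described above boils down to a few lines of gunicorn config. The values here are illustrative guesses, not the commenter's actual configuration:

```python
# gunicorn.conf.py -- a minimal sketch of the "just buy more RAM" approach.
workers = 8             # pre-fork processes: CPU work in one can't block the rest
worker_class = "sync"   # plain blocking workers, so there is no event loop to starve
timeout = 120           # allow the occasional slow pandas/inference request to finish
```

With sync workers, a request doing heavy inference merely occupies one process; the other workers keep serving snappy requests, at the cost of one process's worth of RAM each.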

If I had to go bigger I probably wouldn't be using Python in the server and would have to face up to either dual language or doing the ML work in a different way. I'm a little intimidated about being on the public web in 2025 though with all the bad webcrawlers. Young 'uns just never learned everything that webcrawler authors knew in 1999. In 2010 there were just two bad Chinese webcrawlers that never sent a lick of traffic to anglophone sites, but now there are new bad webcrawlers every day it seems.


OS threads are for CPU bound work.

Async is for juggling lots of little initialisations, completions, and coordinating work.

Many apps are best single threaded with a thread pool to run (single threaded) long running tasks.


Traditionally, there are two strategies:

1) Use the network thread pool to also run application code. Then your entire program has to be super careful not to block or do CPU-intensive work. This is efficient but leads to difficult-to-maintain programs.

2) The network thread pool passes work back and forth between an application executor. That way, the network thread pool is never starved by the application, since it is essentially two different work queues. This works great, but now every request performs multiple thread hops, which increases latency.

There has been a lot of interest lately in combining scheduling and work-stealing algorithms to create a best-of-both-worlds executor.

You could imagine, theoretically, an executor that auto-scales, maintains different work queues, and tries to avoid thread hops when possible, while ensuring there are always threads available for the network.
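Strategy 2 above can be sketched in a few lines with two thread pools. The pool sizes, names, and handlers are hypothetical, and a real server would chain a callback rather than block a network thread on `.result()`; blocking here just keeps the sketch short:

```python
from concurrent.futures import ThreadPoolExecutor

# A small "network" pool that must never be starved, handing
# potentially expensive work to a separate "application" pool.
network_pool = ThreadPoolExecutor(max_workers=2, thread_name_prefix="net")
app_pool = ThreadPoolExecutor(max_workers=4, thread_name_prefix="app")

def parse_request(raw: bytes) -> dict:
    # Cheap work: fine on the network pool.
    return {"body": raw.decode()}

def handle(req: dict) -> str:
    # Potentially CPU-heavy application logic: runs on the app pool.
    return req["body"].upper()

def serve(raw: bytes) -> str:
    # A network thread parses, hops to the app pool, and the result
    # hops back for the (cheap) write. Two hops = extra latency,
    # but the network pool is never blocked by slow application code
    # for longer than a parse and a write.
    req = parse_request(raw)
    return app_pool.submit(handle, req).result()

print(network_pool.submit(serve, b"hello").result())  # HELLO
```

The thread hops are exactly the latency cost the comment describes: each `submit` crosses a queue and wakes another thread, which is why work-stealing designs try to keep a request on one thread when it stays cheap.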


Backend developers finding out why user interfaces have a thread for the GUI and a thread for doing work :D


Seriously. This is one of those cases where rolling your own really does make sense. Flags in a DB table, flags in a json file, all super simple to build and maintain, and 100x faster and more reliable than making the critical paths of your application's request cycle depend on an external provider.


You know what I would find worse than telling my customers that they can't access the application they paid for because I farmed my auth out to a 3rd party that is having an outage?

Telling them that my auth provider isn't out, but the thing I use to show them a blue button vs a red button is.

Oof.


Has this actually been a problem? We've been using LaunchDarkly for years, and if they do have an outage (which is really, really rare) the flag will be set to the default value. It's also very cheap, maybe $500 a month.


$500 a month and it has had many major outages in 2024 alone. lol

https://status.launchdarkly.com/uptime?page=5


What about ConfigCat?


I don't know, why don't you do the research and let us know?


We did this. Two tables. One for feature flags, with name, desc, id, enum (none, defaultToEnabled, overrideToDisabled). One for user flag overrides, with flagId, userId, enum (enabled, disabled).

The combination of these two has been all we've ever needed. User segmentation, A/B testing, pilot soft launch etc are all easy.
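The two-table scheme described above can be sketched with SQLite. The column names, enum values, and precedence rules (a global `overrideToDisabled` kill switch beating per-user overrides) are my own guesses at the design, not the commenter's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE feature_flags (
    id    INTEGER PRIMARY KEY,
    name  TEXT NOT NULL UNIQUE,
    descr TEXT,
    mode  INTEGER NOT NULL DEFAULT 0  -- 0=none, 1=defaultToEnabled, 2=overrideToDisabled
);
CREATE TABLE user_flag_overrides (
    flag_id INTEGER NOT NULL REFERENCES feature_flags(id),
    user_id INTEGER NOT NULL,
    enabled INTEGER NOT NULL,          -- 1=enabled, 0=disabled
    PRIMARY KEY (flag_id, user_id)
);
""")

NONE, DEFAULT_ENABLED, OVERRIDE_DISABLED = 0, 1, 2

def flag_enabled(name: str, user_id: int) -> bool:
    row = conn.execute(
        "SELECT id, mode FROM feature_flags WHERE name = ?", (name,)
    ).fetchone()
    if row is None:
        return False
    flag_id, mode = row
    if mode == OVERRIDE_DISABLED:  # global kill switch beats everything
        return False
    override = conn.execute(
        "SELECT enabled FROM user_flag_overrides WHERE flag_id = ? AND user_id = ?",
        (flag_id, user_id),
    ).fetchone()
    if override is not None:
        return bool(override[0])
    return mode == DEFAULT_ENABLED

# Pilot soft launch: flag off by default, enabled for a single user.
conn.execute("INSERT INTO feature_flags (id, name, mode) VALUES (1, 'new_report', ?)", (NONE,))
conn.execute("INSERT INTO user_flag_overrides VALUES (1, 42, 1)")
print(flag_enabled("new_report", 42), flag_enabled("new_report", 7))  # True False
```

A/B testing falls out of the same tables: bucket users into overrides by any hash of `user_id`, and flipping `mode` later rolls the feature out (or kills it) for everyone at once.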


Would you mind expanding on the usage of enums for the feature flags table? Why not use a boolean?


We actually did use booleans, I just found it easier to explain using enums, and the code would have been simpler if we'd done it that way.


In years of trying to sell things I've found that one of the best selling points to management is "susceptible to vendor lock-in", "you don't own your customer database", etc.

I have no idea why that is.


I'm confused. Are you saying this ironically or have you literally pitched management with the risks of using your product?


I developed an open source "user management system" circa 2001 which I used on several sites, including one that had 400,000+ users, a famous preprint archive and the web site of a county-level green party. It was patterned on what sites like Yahoo and Amazon had at the time, did email verification and numerous things that were a hassle to implement, had great screens for the administrators, all of that.

I couldn't get anybody else to adopt this software despite putting a lot of work into making it easy to pick up and install.

10 years later competitors popped up like mushrooms and were adopted quickly. The thing they all had in common was somebody else owned your user database. So yeah I feel pretty cynical.


There's such a thing as being too early.

My university had a great shared browser bookmark management system, even with a basic discussion support for them. In 1998. It was not super popular because people just didn't have that many links to share, eventually it fell offline and got accidentally deleted in 2001.


Nobody is going through the significant work of raising VC and then TRYING to fail. All for the sake of temporarily using some fancy office furniture.


OP's not suggesting they're trying to fail, OP is suggesting they're not trying very hard to succeed.

A typical bootstrapped business wouldn't dream of spending money on equipment that wasn't absolutely essential because it pointlessly burns valuable runway. But when you're spending someone else's money and don't see this startup as your one shot—when you can always start again with a new VC when this one fails, and the new VC will be totally understanding of your past failures because that's just how the dice roll—the need to be careful isn't there, creating an incentive to live it up while you're flush with cash.


High spending on office furniture etc. is _not_ common behaviour for early-stage startups, though. It's really something you see more from mid-stage, and generally at a point where it doesn't _really_ matter; the amounts just aren't significant enough to make more than a few days' difference to the startup's lifespan if the next raise doesn't work out.


> wouldn't dream of spending money on equipment that wasn't absolutely essential

The issue is that if they don't create a desirable workplace, they will have trouble attracting and keeping talent. Further, most of the luxury items pale in comparison to a single engineer's salary: e.g. 10 engineers with a cushy setup vs. 11 engineers with a very lean one. The former may be more attractive to the kinds of engineers you want. The trick of course is identifying the talent; IMHO and IME this is why most startups don't make it.


These days, just offering WFH would be more than enough to attract the right kind of talent, with the benefit of not only not needing to pay for a cushy office but not needing to pay for an office at all.


I will take WFH over anything, especially over forced "team" exercises.

Pay me a competitive salary. Allow me to WFH. Spare me all the cheap benefits.


Some talent is going to be more attracted to sensible cost controls than lots of cushy perks.

But up to you what kind of talent you want to attract.


Often, one of those cost controls will be penny-pinching on salary. Don't get me wrong, I'd love a place that pays a generous salary and then buys knockoff/used Aerons instead of brand-new/brand-name ones.

But often the company is either stingy or generous, and they view the chairs and the engineer comp as both things to invest in, or neither.


You know your salary before accepting the offer, so if they're that kind of cost control you should have advance warning when they lowball you.

I think the big reason why this is so common is because the type of talent a lot of these startups want to attract is explicitly immature talent. People who are drawn to fluffy perks without realizing that they (a) don't actually benefit from them in any real way and (b) are actually a warning sign of what's to come financially.


Given that most employees are underpaid at very-early-stage startups with stock options, it seems deliberate fraud is not common at all. Besides monetary compensation, the "have a failed startup on the CV" argument doesn't seem a big draw either.

I don't see how VC funding is being siphoned away by fraudsters at all. Siphoned away by incompetence, yes; not deliberate fraud.


Amusing office perks (and any other benefits down to the Mr. Coffee) are a branding and recruiting exercise. They are essential.

It's just a matter of budget and efficiency.


> Nobody is going through the significant work of raising VC and then TRYING to fail.

Unfortunately there are individuals who do almost this. Their names do travel in the VC circles. Their goal is to get VC money, enjoy the funds and preferably fail within a year or so; rinse and repeat.

Spend 3 to 9 months securing the funds, then be the boss leading a lavish lifestyle for 12 to 18 months, all while getting paid and enjoying lots of perks.

edit: @lolinder wrote it better "..they're not trying very hard to succeed."


Hey if they wanna pay me for real transparent food I'll cook them up some "Emperor's New Hollandaise"


> If you only do something very rarely anyway, spending time to automate it won’t have a great ROI

For code-editing, maybe. But in general software engineering, there are tasks that I have to do maybe once a year or less that are always way more painful than they need to be because I don't remember the details, and anytime I automate even part of them (or yes, just document a little better), it turns out to be well worth it. Stuff like bootstrapping new environments, some database-related work, etc.


Pasting all the commands in a doc that you keep around seems to give 90% of the benefit of automating it for 1-4 times per year tasks.

It's not typing the commands that takes time, it's remembering what commands to type.



Absolutely, I try to "automate" or at least have scripted almost everything I do.

I do this almost always for reproducibility and documentation purposes first.


It may not be worth the time to automate those, but I always take the time to document them, usually in Obsidian, where I can easily find them later.


I’m a big fan of writing things down for future reference. Keeping a journal with things like exactly which commands I used, or transferring that knowledge to something like a README or a wiki page for others to see, is a low effort but sometimes high reward habit. Even for something done only rarely, saving a lot of time next time can justify the documentation effort.

Similarly, if the automation effort is literally only copying those commands verbatim into a shell script or similar, that’s often time well spent. The trap, as the XKCD helpfully demonstrates, is when you then start spending significantly longer on making that script more “flexible”. Turning hard coded values into parameters. Looking context up from the environment. On multiple development platforms. In staging, UAT and production deployments too. With detailed help text to explain it all and say what the defaults for everything are if you don’t specify them explicitly. And then it turns out that you only actually run that script once every six months anyway and never use 95% of that extra flexibility anyway…


I think everyone's got a different threshold for where returns start diminishing sharply. While I'm squarely in the "don't waste time micro-tweaking your editor" camp, there are some little bits of shortcuts and tooling that made me much more fluent at code-editing with very little investment. One example that stands out is the multi-cursor support that Sublime Text popularized (and which I use all the time in vscode now). It eliminates a good 80% of repetitive typing, or symbol refactoring that would have involved clunking through menus in old IDEs, and makes experimentation that much quicker. Feels fundamental, like copy/paste shortcuts which everyone knows now.


Yes, I tend on the side of only tweaking things when I realise I'm repetitively using the same context menus. That's the only point I learn the keyboard shortcuts, or map one, these days. I really haven't had to learn too many shortcuts. The best ones are multi-cursor editing like you've mentioned.


More than multi-cursor, recordable keyboard macros save me a ton of time. I miss them sorely in vscode.


Nitpicking: Crook Manifesto is the sequel to Harlem Shuffle, which came out years earlier. There's a third installment planned.


Ah, yes. I didn’t know about the third planned. CM I thought was brilliantly written. Just a joy to follow down each little flashback.


Whoa. That article says that SpaceX does tender offers twice a year?! That's so much better than 99% of private companies, it makes it almost as liquid for employees as a public company.


Which in a real way makes the threat of being left out of liquidity rounds that much more powerful a tool for keeping people who are looking forward to an actual windfall in their lane.


Take from a musicologist and semi-professional performer: IMSLP is great for finding quick scores of random stuff, but the editions are often pretty shoddy, like either super outdated with weird editorial decisions that no one's agreed with for the last 50 years, or someone's totally amateur transcription full of errors and equally weird decisions. Treat it like you might treat Wikipedia as a research source (ok as an initial point of entry, but soon you want to dig into the real sources).


I work for IMSLP.

I am working on adding new scores, but in many cases these old things are either the only editions published or the only editions we can take.

IMSLP accepts urtext editions over 25 years old. We have a good amount of these from series for the most famous composers (e.g., Mozart, Bach...) but these are only a minority of the collection. We of course would like to add more.

Otherwise, for "original" editorial editions, as well as arrangements, etc., we take them so long as the original creative contributors died in 1971 or earlier, or, as the case may be with amateur editions, with permission (note: we actually do have a significant number of quite good editions/arrangements contributed with permission, but we have a lot of files on the site overall).

The other thing is that we draw from many existing digitizations (such as those of national libraries), and most of their digitized documents are quite old.

If you have any particular requests, please add them to the wishlist: https://imslp.org/wiki/Wishlist

Anyway, we are not really like Wikipedia, since we have a great deal of those primary sources on our site (incl. many autograph mss., as well as historical editions), and we're fundamentally more a collection of sources than a synthesis of them like WP. But we are certainly not a complete collection of sources, by any means (for both practical and legal reasons).


For sure, I'm not meaning to denigrate IMSLP, as it's a great project that offers access to so much material that's otherwise a pain to get to. I was just trying to make the distinction that when I want the highest-quality materials I can find for performance or study, that's not where I'll find them. But it's wonderful for quick easy access, invaluable for browsing and making finds when you're far from any university library etc.


> IMSLP accepts urtext editions over 25 years old.

Do you have a working definition of "urtext"? A lot of famous Bach and Mozart keyboard stuff was written on C clef for the right hand, but many so-called urtexts transcribe it to G. I don't understand how this can be seriously called urtext.


A mere clef change is a purely mechanical adjustment. The point of an urtext edition is to try to avoid new creative work being added.


What counts as a purely mechanical adjustment? Can you apply any trivial and reversible operation to the notation? I guess changing the time signature to an equivalent one is acceptable. Changing all the notes to half their length? Replacing dotted notes by ligatures? Switching the key would be probably a "creative" change, but it's just as mechanical as the others. What if I move the key up a semitone but then I add a text that says to tune the instrument a semitone down? Does said text really matter, considering the historical tuning has changed by almost a whole semitone along the centuries?

I feel very uneasy changing the notation of the original, even if the music sounds the same, and naming it "urtext".

No big deal... after all, you can buy "urtext" with editor-provided fingerings from many publishers, so the word is almost meaningless at this point.


Any change that can be made without "skill and judgment" as would be involved in the creation of an original work is definitively mechanical per Canadian copyright law. Changing the clef from one to another is just like changing the font from one to another, which in Canada is considered the prototypical mechanical change.

Maybe you don't like modern clefs, but by no means are they any sort of "original" work by the standards of copyright law in any country. (Even in the small handful of countries that recognize typographical arrangements/engravings per se as copyrightable in themselves, like the UK, changing the clefs used is not considered original in any way.)

Pure transposition, by the way, is also not considered an original copyrightable creative change.

I think you have focused in on the wrong aspect entirely from the musical standpoint... the purpose of any of these editions is not to be a facsimile of some existing document (facsimiles of autograph manuscripts exist), but really just to be critical editions taking into account a number of sources and intending to establish the work as it is in a certain state (generally, an early state) according to certain editorial principles, rather than reflecting the editor's personal creative opinions on how the work should be interpreted.

As for the fingerings provided in some critical editions, these are not really relevant, because they are clearly marked (when applicable) as separate from the text itself, and they can easily be redacted (as we need to do sometimes) or ignored. By the same token, the preface or editorial footnotes in these editions are separate from the presented musical object — do you believe a preface or footnotes somehow changes the nature of the thing provided in the same pages?


Thanks for your detailed answer! I guess I'm just used to critical editions in poetry, which are supposed to be identical to a facsimile but without transparencies nor coffee stains. The critical comments and the text are separated in odd/even pages. The even pages have a lot of notes, and are sometimes longer than the actual text! They explain the history of a single comma along the several editions of the poem in question, etc. I always hoped that musical urtexts would be like that, but it doesn't seem to be the case. Fortunately, we have IMSLP where we can at least see most of the originals nowadays!


> IMSLP is great for finding quick scores of random stuff, but the editions are often pretty shoddy, like either super outdated with weird editorial decisions that no one's agreed with for the last 50 years, or someone's totally amateur transcription full of errors and equally weird decisions.

You piqued my interest.

- what makes an "edition" ("score"?) outdated?

- can you give examples for a weird editorial decision, maybe by additionally providing an alternative good editorial version?

- what happened that something deemed correct (?) over 50 years ago is nowadays something no one agrees with?


In recent decades, philological standards for reproducing the original form of a musical work, taking into account historical musical traditions and composing and performance practices, have become much more thorough. Editions from the 19th and early 20th century, which you can find on IMSLP because they are out of copyright restrictions, often don't meet these scholarly standards. They very often reflect the performance practices and style of their own time. See also https://en.wikipedia.org/wiki/Urtext_edition.


But – there are tons of prestigious, high-quality, Urtext editions on IMSLP, because

https://imslp.org/wiki/IMSLP:Copyright_Made_Simple#Urtext_Ed...

(copyright laws afford Urtext editions shorter terms of protection than original works)


I'd add that some of the older texts are really badly laid out: often handwritten, or in a strange typeface that makes them really hard to mentally parse when you're used to the uniformity of modern scores.

That said, IMSLP is still a life-saver, and playing music would be a lot more painful (and expensive!) without it.


Is there a standard file format for scores? I see MusicXML and MEI.


To give an example to further illustrate answers you've received. In the 1800s it was common to produce heavily editorialized volumes of Bach, Scarlatti and other baroque keyboard works, with tempo and dynamic indications, written-out ornamentation, phrasing and articulation marks, pedalling, and other directions that would have been a relatively alien language in the actual autograph and early editions. These were much more reflective of (romantic) performance practices of the day than of the original period. While much of it could be taken as potentially interesting suggestion about interpretation on modern instruments, it gets hopelessly muddled when the score fails to distinguish between what content is the editors' vs. the composer's, as they so often do. Something like this of Bach's C major prelude from The Well-Tempered Clavier, book 1, shows basically all the different kinds of unwanted editorializing I described: https://imslp.org/wiki/File:PMLP05948-WTC_Mugellini_No._1-12...

Compare that with: https://imslp.org/wiki/File:PMLP05948-WTC_1_No_1-12.pdf

A modern Urtext edition would also include detailed information about which manuscript and early edition sources it was prepared from, and any unresolved or variant readings between these sources, with performance suggestions (apart sometimes from fingerings) relegated to supplementary notes that are clearly written by the editors and not the composer.


I don't presume to know what they meant, but as an example of something I think fits the bill, I downloaded "Bill Bailey Won't You Please Come Home" and the lyrics are full of a certain type of phonetic transliteration from vernacular Afro-American English (as if it's a foreign language or something) the likes of which you would probably never see today (e.g. lots of "dat" and "dey" and "throw'd" and "de whole day" and so on).


I'd love to know if there was a whole class of people that actually did speak in "derogatory caricature".



