
That's rather sad; union types are really what I like most about TypeScript. This might explain why VS Code sometimes feels so slow when type checking, since I have a few types that rely heavily on unions.


> However, if your union has more than a dozen elements, it can cause real problems in compilation speed. For instance, to eliminate redundant members from a union, the elements have to be compared pairwise, which is quadratic. This sort of check might occur when intersecting large unions, where intersecting over each union member can result in enormous types that then need to be reduced.

This statement makes me think... how come the TS compiler is not using something like a hash/map (object) of union members to basically ignore redundancy?

Or any other strategy really. The union of unique values in 2 or more arrays is a classic CS problem for which there are many, many performant solutions.
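The classic version of the problem really is linear with a hash set, which is presumably what the commenter has in mind (a minimal sketch; the replies below explain why this doesn't transfer directly to types):

```typescript
// Deduplicating the union of two arrays is linear time with a Set,
// because membership is decided by identity/equality alone.
function unionUnique<T>(a: T[], b: T[]): T[] {
  return [...new Set([...a, ...b])];
}

console.log(unionUnique([1, 2, 3], [2, 3, 4])); // [1, 2, 3, 4]
```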

Anyone familiar with the TS internals? Maybe I'm not seeing the forest.


What you’re not seeing is that TypeScript is a structurally typed language, not a nominally typed one. Two type signatures defined in different places with different names may be assignable to each other, so TypeScript usually has to deep-compare the two types, recursively, until a conflicting field is found.
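A small illustration of what structural assignability means (type names here are made up for the example):

```typescript
// These two interfaces are declared separately under different names,
// yet they are mutually assignable because their shapes match.
interface Point { x: number; y: number }
interface Coord { x: number; y: number }

const p: Point = { x: 1, y: 2 };
const c: Coord = p; // OK: assignability is decided by structure, not by name

// The comparison has to recurse into nested members too:
interface Wrapped { inner: { x: number; y: number } }
const w: Wrapped = { inner: c }; // OK after a deep structural check

console.log(c.x + c.y); // 3
```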


So just hash the structures?


TypeScript team member here: the structures are arbitrarily deep and recursive. Also, as explained in a sibling comment, it's not just about type identity, but type assignability.


Hashing lets you know whether it's the same type or two different types, but you still need to look at the members to find out whether one is a subtype of the other.
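Concretely (with illustrative names): these two types would hash differently, yet one is still assignable to the other, so a hash lookup alone can't decide assignability:

```typescript
interface Named { name: string }
interface Person { name: string; age: number }

const alice: Person = { name: "Alice", age: 30 };

// Person is not identical to Named (a hash would distinguish them),
// but it has at least Named's members, so the assignment is allowed.
const n: Named = alice;

console.log(n.name); // "Alice"
```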


> This statement makes me think... how come the TS compiler is not using something like a hash/map (object) of union members to basically ignore redundancy?

The trouble is the operation isn't "is X a member of Y", rather it's "does X match any values of Y according to predicate P."

You can break that out if you have knowledge of possible X's and P, as is the case with type matching.

Say we are checking G[] against {str, "foo literal", int[]}. I have no idea how TS implements these internally, but say the underlying type expressions are:

    [{head: String}, 
     {head: String, constraint:"foo literal"}, 
     {head: Array, element:{head: Integer}}]
And G[] is {head:Array, element: {head: Generic, name: "G"}}.

We could reasonably require that heads are all simple values, and then group types as such:

    {String: [{head: String},
              {head: String, constraint:"foo literal"}],
     Array: [{head: Array, element: {head: Integer}}]}
You'd still have to try to match that generic parameter against all the possible Arrays, but you could add more levels of hashing.

The downside is, of course, it's quite tricky to group types like this and prove that it returns the same results as checking all pairs, especially when you have a complex type system.
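The grouping idea above can be sketched in a few lines. This uses a made-up tagged representation of types (TypeScript's real internals differ); the point is just that bucketing candidates by a simple "head" shrinks the set a query has to be matched against:

```typescript
// Hypothetical type representation with a simple discriminant "head".
type Ty =
  | { head: "String"; constraint?: string }
  | { head: "Integer" }
  | { head: "Array"; element: Ty }
  | { head: "Generic"; name: string };

// Bucket union members by their head so a query only needs to be
// compared against its own bucket rather than every member.
function groupByHead(tys: Ty[]): Map<string, Ty[]> {
  const groups = new Map<string, Ty[]>();
  for (const t of tys) {
    const bucket = groups.get(t.head) ?? [];
    bucket.push(t);
    groups.set(t.head, bucket);
  }
  return groups;
}

const union: Ty[] = [
  { head: "String" },
  { head: "String", constraint: "foo literal" },
  { head: "Array", element: { head: "Integer" } },
];
const groups = groupByHead(union);

// G[] only has to be matched against the "Array" bucket:
const query: Ty = { head: "Array", element: { head: "Generic", name: "G" } };
const candidates = groups.get(query.head) ?? [];
console.log(candidates.length); // 1
```

A real implementation would still need the extra care described above: generics can match several buckets, and the grouping has to be proven equivalent to checking all pairs.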


I've found VS Code to be pretty slow regardless of what I've used it for. The UI is snappy, but the hinting/linting/type-checking is sometimes shockingly slow.

The only reason I've used VS Code for more than an hour in the last few years is because its Svelte plugin was much better, but now JetBrains has a good Svelte plugin, and I'm back to JetBrains 100% of the time. It's worth every penny.


Yeah, I've been using VSCode these past couple of weeks as I'm now coding in Python + JS after a few years of only JS with WebStorm... and I'm buying a PyCharm license tomorrow.

VSCode is fantastic in many ways, and I'll keep using it for my markdown dev notes and for general-purpose programming occasionally, but for anything of substance I'm a JetBrains convert.


In this case it's literally the LSP; maybe JetBrains has a more efficient way of doing all of this, or it's just that many LSPs are not written in the most efficient language.

We're beginning to see more JavaScript tooling written in other languages such as Go and Rust that just blows away existing tooling in terms of performance. Look up esbuild/swc.

Last I heard, JetBrains is also looking into adopting LSP, so maybe we could even pay for a JetBrains LSP to use with VSCode some day.


I'd run some benchmarks on your codebase first. I would be surprised if union types were a top bottleneck.


How do you do that? My company’s 60kLOC TypeScript codebase takes several minutes to compile, and I don’t know how I’m supposed to diagnose what the problem might be; a diagnostics flag exists, but I don’t understand how I’m supposed to act on its output. The current plan is to break the project into a lot of smaller project references, but compile speed has probably cost my company significant productivity, and I don’t feel confident about addressing it.
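For what it's worth, a project-references split is usually wired up roughly like this (a minimal sketch; the package paths are hypothetical):

```json
// Root tsconfig.json: no files of its own, just references.
{
  "files": [],
  "references": [
    { "path": "./packages/core" },
    { "path": "./packages/app" }
  ]
}
```

Each referenced project then sets `"composite": true` in its own tsconfig, which lets `tsc --build` compile the projects incrementally and skip the ones that haven't changed.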


Your response tells me you definitely have the talent to solve it. It will require some brute work, but I think you'll come out of it a lot stronger (and help your team a lot). "tsc --extendedDiagnostics" is a great place to start, as is "tsc --traceResolution > resolution.txt"; both are from the OP. I'd get a book on advanced TypeScript pronto as well. Create a tiny TS project from scratch and use that as your comparable. Measure your progress and have a beer or ice cream every time you make a jump. I worked on one huge TypeScript project years ago at MSFT with slow compile speeds, and was always too scared to try my hand at fixing that, and sort of regret it (though it was a hard problem since we were on Windows ;) )



