I think using this feature is the wrong advice. Frankly, I think much of modern C++ should be rejected. If you want to use a language with those features, then fricken switch language! There are countless better alternatives today: Rust, Swift, D, Haskell, etc.
Many are stuck on C++ for legacy reasons. I have worked in such large C++ legacy systems for many years in the past. The average programmer simply cannot deal effectively with this modern C++. It is too complex.
I’ve been down that road, being excited about new C++ features only to realize that I’ve actually reduced the productivity of my co-workers.
Often I wonder if C++ programmers all suffer from Stockholm syndrome. They have come to sympathize with their hostage taker, C++, making up excuses for the many ways C++ abuses and terrorizes them.
I go to these C++ conference talks on occasion, and I see people in ecstatic praise about how some genius C++ guru came up with an elaborate, convoluted solution to something that is like two lines of code in a sane programming language.
Seriously I think C++ has drained so much of their brainpower that they have simply not been able to look outside and see how the grass is greener everywhere else.
>I think much of modern C++ should be rejected [...] then fricken switch language! There are countless better alternatives today: Rust, Swift, D, Haskell etc.
The reality is that many projects are enabled more by existing libraries & tools rather than blank slate clean language syntax.
Some example domains of libraries & tools where C++ is the 1st-class client instead of Rust/Swift/D/Haskell:
+ deep learning: NVIDIA CUDA api is C++
+ HPC (High Performance Computing): Intel MKL math library is C++
+ physics engine for video games: Unreal game engine is C++
+ latest CPU chips: ARM Allinea dev tools for the new ARM/Neoverse chips (e.g. the latest AWS Graviton2 servers) are C++
+ Qt GUI is C++
Because of the extensive C++ ecosystem, you get a counter-intuitive situation where programming the latest cutting-edge apps requires a 35-year-old language from 1985 instead of the newer Rust language from 2012. I predict this delta of tools+libraries between C++ and Rust/Swift will continue to exist for 10+ years. Recommending/lecturing people to use Rust/Swift/D overlooks the reasons that many programmers use C++. It's not a random choice.
E.g., a realistic starting point for a project might be: "What's a good game engine... oh, it's Unreal, and that's C++." -- instead of -- "Rust has clean ownership-and-borrow syntax with the '&' ampersand instead of the ugly warts of unique_ptr<T> and std::move() in C++, so I'll code a game in Rust."
Sometimes, we look at the ecosystem first, and then work backwards from that to make programming choices.
It's when you have the luxury of a greenfield project that doesn't need legacy dependencies that choosing alternative cleaner syntax of Rust/Swift is possible. Amazon AWS and MS Azure are examples of doing new bare-metal projects with Rust instead of C++.
To add to that, not all software is written by those who identify chiefly as software engineers. In the embedded world, for example, software is often written by people who are electrical engineers by profession, who use assembler, C, C++, FORTH, or Python if they are lucky, and rarely venture further. Consequently, for some MCUs the available tools are only for C/C++ (and of course assembler). Often those engineers are more concerned with how much the MCU costs or how much power it dissipates than with how elegant the language it is programmed in is. It's a pity, but part of reality.
Yeah, there is a lot of this in the embedded world. "Software" is just another line item on the BOM, like connectors and bolts. Source it as cheaply as possible, or find some junior EE in house who barely knows C to do it. That code is going to get written once and never read again.
All true. Ironically the main thing keeping C++ relevant is the lack of a standard cross platform ABI. The moment interop with the C++ ecosystem becomes easier, the newer languages look so much more attractive.
> Ironically the main thing keeping C++ relevant is the lack of a standard cross platform ABI.
I find that statement very odd, because in my experience the bane of C++, and perhaps the primary reason to look for alternatives, is C++'s lack of a standard ABI.
The lack of a stable ABI is C++'s main source of bit rot, and the main technical point that justifies migrating projects, even legacy ones, into other programming languages. If not for the ABI issue, most of the technical points used in favour of jumping away from C++ into some bandwagon of the moment would be moot.
If C++ had a stable, defined ABI, then you could just import a C++ library blob into Python, or Rust, or whatever, and it would just work. That you can't is what keeps people developing projects in C++ instead of more productive and sane languages.
Very true. It also makes working with C++ harder. It’s so easy to use different libraries with C#, Java or Python vs the struggle it is to integrate a C++ library.
Exactly - right now I write programs in C#. Not because C# is cool or because I like its syntax, but because it is what Unity supports.
It could have been C++, if I had gone with Unreal Engine.
I used Squirrel in the past, solely because for some reason it was the modding language of OpenTTD.
I am not a fan of any of them, and I am not an expert in any of them. But using something existing was more effective than writing a 3D engine or AI backend from scratch.
I love to write F# (I built FVim), but there's still a gap when it comes to high-performance game programming. In C# you get a lot of low-level unsafe constructs with zero allocation overhead that integrate very well with external libraries. In F#, you either get a clumsy library special form (without proper syntax support), or the idea itself is considered "only for C# interop uses" and thus not prioritized.
F# has long been in this position: a smaller team that tries to align with C#, without a language implementation team backing them (unlike Roslyn-C#), yet constantly feeding innovations back into the dotnet ecosystem. You can see a lot of examples where F# piloted an idea, C# later adopted and extended it in its own way, and F# then struggled to catch up in that OO-centric world.
(Disclaimer: MS employee, but not on the language teams)
I agree with most of your post, but I think MKL is Fortran and C/C++, and implements the BLAS API. I recall that BLAS has a Fortran interface and a CBLAS (C, not C++) interface, and I used the latter. And Rust can call out to C ABI code.
We're talking about what language you can use instead of C++, without losing access to C++ libraries. Nim doesn't have the unnecessary complexity of C++, and being rarely used is not a problem. How does it add “third-party dependencies”?
> We're talking about what language you can use instead of C++, without losing access to C++ libraries.
Congratulations, your suggestion requires you to maintain two entirely separate languages, one of which is neither standard nor mature, not to mention dealing with interoperability bugs.
> The reality is that many projects are enabled more by existing libraries & tools rather than blank slate clean language syntax.
As I said, sometimes for legacy reasons you must use C++. My point is to use a sane subset of C++ and ignore the modern crap. You are only going to make life harder for co-workers and future maintainers. If the modern C++ features themselves are what attract you to C++, rather than some library you need to use, then that is when I suggest switching language.
> deep learning: NVIDIA CUDA api is C++
You can do deep learning much better in Julia. It is a nicer language, allows iterative development in a REPL, and will typically give you higher performance as well.
And who on earth does machine learning in C++? People typically use Python today. C++ is an implementation detail. A necessary evil for a slow language like Python. But given a choice people pick Python as their interface, not C++, because C++ is a horrible language to use, which you should only use when you absolutely have to.
> HPC (High Performance Computing): Intel MKL math library is C++
Again, you could actually use Julia for that. Next-generation climate models running on supercomputers are built in Julia, specifically because it is fast. A language with 1/10th the cognitive load of C++.
> physics engine for video games: Unreal game engine is C++
Sure, as I said, sometimes you need to use C++ for legacy reasons. But who on earth writes the game logic in C++? Typically you use Lua, JavaScript, GDScript or something else, because people don't use C++ when they can avoid it. Historically it has been hard to avoid C++ when creating a physics engine, because you need the kind of performance and memory control C++ gives.
> Qt GUI is C++
Yes, which I have spent a lot of my career coding. And which cannot be compared to the insanity of modern C++. Qt tries to keep C++ somewhat sensible.
I could add VTK. Old school C++, which is fairly readable, unlike modern C++. As I said if you have some deep geek desire for fancy language features, find another language.
> Recommending/lecturing people to use Rust/Swift/D overlooks the reasons that many programmers use C++. It's not a random choice.
Problem is that you missed my central point. I didn't say you could go out and replace C++ in every instance with any of these languages. I specifically made it clear that for legacy reasons that may not be possible. And yes, that may include that the functionality you need only exists in C++. My point was to write your code in good old C++, which most people can actually somewhat fathom. If you want to geek out, C++ is a terrible place to do it.
> What's a good game engine... oh, it's Unreal, and that's C++.
Yeah... so UnrealScript was made because C++ is such an amazing language? You know, I use Godot Engine, mostly written in C++, but I don't have to deal with that fact. I just code in GDScript. Fortunately the makers of Godot were not so sadistic as to require their users to code in C++.
> It's when you have the luxury of a greenfield project that doesn't need legacy dependencies that choosing alternative cleaner syntax of Rust/Swift is possible.
I think you are kind of overselling the need for using C++. You fail to mention in any of your examples that C++ is very frequently wrapped and is not the preferred interface. E.g. TensorFlow is written in C++, but who the hell interfaces with it from C++? No, people use Python as much as possible.
Or how about Qt. The most popular way to create Qt GUI code today is through QML, not through their C++ APIs. You also failed to mention that in most game engines such as Unreal you can do almost your entire game in a script language. It is typically only the Engine designers who have to program in C++.
You have to consider each case, but frequently wrapping C++ works just fine and may make you a lot more productive. VTK is written in C++ but it is also frequently if not mostly used through bindings to other languages.
If you cannot wrap C++, then embedding a script language may work well. I have written simple game engines in C++ with Lua. I found myself pushing more and more functionality over to my Lua layer as I experienced how much more productive that language was.
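For anyone who hasn't tried it, embedding Lua in a C++ engine really is only a handful of calls to the Lua C API. A minimal sketch (the script name and the on_update hook are made up for illustration):

    #include <cstdio>
    extern "C" {
    #include <lua.h>
    #include <lualib.h>
    #include <lauxlib.h>
    }

    int main() {
        lua_State* L = luaL_newstate();   // create a Lua VM
        luaL_openlibs(L);                 // load the standard Lua libraries

        // Run a (hypothetical) gameplay script, then call a function it defines.
        if (luaL_dofile(L, "game_logic.lua")) {
            std::fprintf(stderr, "%s\n", lua_tostring(L, -1));
            lua_close(L);
            return 1;
        }
        lua_getglobal(L, "on_update");    // push the script's update hook
        lua_pushnumber(L, 0.016);         // dt argument
        lua_pcall(L, 1, 0, 0);            // call on_update(dt)

        lua_close(L);
    }

Everything interesting then lives in game_logic.lua and can be changed without recompiling the engine.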
Maybe there's some truth to what you're saying. But at the same time, there aren't many alternatives that tick all the boxes C++ does (performance, portability, maturity and stability). Together with the amount of C++ code that exists, I don't see C++ going away anytime soon.
Ironically, if one wants all of those things, the best alternative at the moment may be the C language.
A major feature of modern C++, and one of the most difficult to replace, is its expressiveness and optional strictness. You don't get this with C (or most other languages). The metaprogramming and compile-time facilities for example, while complex, are indispensable for some code bases. This becomes evident when you try to port rigorous modern C++ code bases to other programming languages du jour. A surprising amount of C++ is difficult to express with equivalent behavior and performance in a similar amount of code.
I play around with new systems languages by implementing parts of database engines (typically written in modern C++) in the new language. In terms of abstract code architecture, you can port a C-style database design with relative ease -- most new systems languages aim to be an improved C with direct portability in mind -- but not a modern C++-style database design, and the latter is unambiguously superior for performance and correctness when writing database engines. C++ will continue to see a lot of usage as long as it is difficult or impossible to write code in other languages that is functionally equivalent to C++.
For C++, the overarching criterion for almost any language feature is how useful it is for capturing semantics in an easy-to-use, performant, powerful, and generally usable library.
The consequence is that using new core language features in non-library code often seems unnecessarily complicated, but a library constructed using the feature as intended is insanely powerful.
The result is that you can write libraries in C++ you cannot write in other languages, and you can call into libraries that are so powerful only from C++.
In most other languages, using a library means you give something up: usually performance, and often it comes with restrictions. The standard of usefulness for C++ libraries is very high, and always increasing.
While this is true in general, we are talking about a library feature that many people want to move into a language feature, specifically because of the drawbacks of it being a library.
That a library implementation of pattern-matching would be unsatisfactory exposes a weakness in the core language. Usually, it is better to strengthen the language to the point where a library component can do the job. Sometimes, though, that is too hard, and you give up and do it directly in the core language. It should always be noted as a failing.
Something similar happened with coroutines, although what we did get has hooks that a library should be able to patch into, to achieve wonders not yet seen.
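To make the pattern-matching point concrete, here is what the library-level solution looks like today: std::visit plus the well-known overloaded-lambdas helper (a sketch; note the helper itself is not part of the standard library, which is rather the point):

    #include <iostream>
    #include <string>
    #include <variant>

    // The usual helper; notably NOT in the standard library itself.
    template <class... Ts> struct overloaded : Ts... { using Ts::operator()...; };
    template <class... Ts> overloaded(Ts...) -> overloaded<Ts...>;  // C++17 deduction guide

    int main() {
        std::variant<int, std::string> v = std::string("hello");
        std::visit(overloaded{
            [](int n)                { std::cout << "int: " << n << "\n"; },
            [](const std::string& s) { std::cout << "string: " << s << "\n"; }
        }, v);
    }

A language-level match statement would make the helper and the deduction guide disappear entirely.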
I have the complete opposite experience. Through my whole career writing C++, that language has been in the way due to all its idiosyncratic behavior. One spends so much time trying to figure out a way around some crazy boilerplate code requirement in C++ only to realize that no, it simply isn't possible to do it in a sane way in C++.
Switching to Swift, I felt almost everything worked as it should. It was almost a bit boring not having to deal with all the usual crap that C++ would give me. I am willing to bet you can write any C++ system much better in Swift. C++ would probably have a performance edge, but in 95% of cases not enough to be worth dealing with an ugly language like C++.
> C++ will continue to see a lot of usage as long as it is difficult or impossible to write code in other languages that is functionally equivalent to C++.
I would challenge you to give me any language construct that is indispensable for a particular type of program which cannot be done more elegantly in another language and with less headache.
I cannot think of a single feature in C++ which I have ever missed in any other of my preferred languages.
You likely design very different software systems than I do. I don’t have the luxury of ignoring performance, efficiency, and scalability. I don’t have any particular love for C++, there are many things wrong with it, but sadly there isn’t a practical alternative that doesn’t have worse trade offs. Ironically, I largely refused to use C++ until C++11. I delivered systems in a long list of other languages before that time; my usage of C++ is relatively recent. My choice is pragmatic and based on experience.
If you can’t think of reasons to use C++, that isn’t because reasons don’t exist.
> One spends so much time trying to figure out a way around some crazy boiler plate code requirement in C++ only to realize that no, it simply isn't possible to do it in a sane way in C++.
It seems you're complaining about the outcome of poor and ill-advised engineering practices instead of a programming language.
And your description also conveys the idea that you had people who knew very little about C++ trying to figure out how to use it in ways that they had no idea were possible.
I get the appeal of a good scapegoat. Yet, from your description, it seems you're trying to dump the blame for a lot of engineering problems you're creating for yourself onto a tool.
Maybe the idiosyncratic BS of C++ is what's hard to express in a saner language? If someone only used hammers all their life, then they will say that a microscope is a bad hammer, which is kind of true =D
C-style designs rely heavily on macros and aliasing, there is minimal type safety and most correctness properties can only be determined at runtime. Resource management is extremely manual which is both fiddly in the amount of code and error prone. Lots of function pointer passing instead of generics. Macros are used to hide just how tedious a lot of it is but they aren't safe. PostgreSQL is an exceptionally tidy example of a C-style database engine.
In C++-style designs, you end up writing surprisingly little code, having metaprogramming scaffolding generate most of the code for you while doing fairly deep correctness and type safety checks at compile-time. Someone has to write the scaffolding libraries but they aren't that large, just tedious, and they get reused. Resource management, change detection, etc is automagic because C++ makes that easy to hide even in complex cases like DMA I/O. Every data structure and algorithm is highly optimized for the local use case and using the most highly compressed representation reasonable in context. It would be impractical to write all of this code and analyze it manually.
I used to write databases in C99. It required several times the lines of code relative to C++17, with worse results, even if you include the scaffolding libraries. A C++17 implementation, done well, is much closer to writing a specification for a subsystem's design and behavior, having the compiler generate an optimized implementation for that specification, and exporting types that hide the fiddly details and can be safely composed with other generated types, than it is to writing code. The scaffolding is also generic and flexible: some of it can generate an OLTP database engine just as easily as an OLAP database engine, largely by changing the subsystem specifications and composing things differently. This would not be possible without heavy use of the metaprogramming facilities to both generate the types and guarantee that type interactions will still be safe and reasonably optimal.
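A toy illustration of the flavor (nothing like a real scaffold, and all the names here are invented): the type derives its own storage representation from its declared domain, and the compiler enforces the claim instead of a comment documenting it.

    #include <cstddef>
    #include <cstdint>
    #include <type_traits>

    // Pick the narrowest unsigned type that can hold values up to Max.
    template <std::uint64_t Max>
    using smallest_uint =
        std::conditional_t<Max <= UINT8_MAX,  std::uint8_t,
        std::conditional_t<Max <= UINT16_MAX, std::uint16_t,
        std::conditional_t<Max <= UINT32_MAX, std::uint32_t, std::uint64_t>>>;

    // A column whose storage width is derived from its declared value domain,
    // checked at compile time rather than documented and hoped for.
    template <std::uint64_t MaxValue, std::size_t Capacity>
    struct column {
        static_assert(Capacity > 0, "a column needs room for at least one row");
        using value_type = smallest_uint<MaxValue>;
        value_type rows[Capacity];
    };

    // 10000 distinct values fit in 16 bits; the compiler enforces it.
    static_assert(sizeof(column<10000, 4>::value_type) == 2, "unexpected width");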
I’ve never seen one in open source but I’ve written a couple over the years and seen a few others. It forces you to solve many difficult metaprogramming challenges elegantly so it is a good excuse to achieve some mastery. I’ve burned more hours than I care to admit trying to figure out how to achieve seemingly simple results. Also, until C++17, this was an exercise in masochism due to the limitations of the expressiveness and type inference, which is why so few people tried. Basically, there were a lot of unpleasant rough edges that went away with C++17 because the language wasn’t smart enough previously. C++20 will also be a big step forward for this, whenever the compilers become usable.
One thing to understand is that these libraries are highly opinionated about the abstract architectural model. It doesn’t make a lot of sense to mix components from user space and kernel space designs, for example, though both have advantages separately. It tends to be more along the lines of one abstract architectural model and enormous amounts of elasticity and flexibility within that model based on the data models, workloads, transaction semantics, and hardware you are targeting. You also still have to write a spec that makes sense from a database engineering standpoint.
At least for me, there are still significant parts of a database engine for which I haven’t built a metaprogramming scaffold. That is largely a matter of time and effort. Other parts I haven’t had to write much code for years but still get state-of-the-art implementation to spec.
The problem with such fancy metaprogramming in C++ is that, while it may be a thrill for a C++ wiz to write, you may end up making something completely unmaintainable, because nobody else can grasp what you wrote.
Compare to writing a database system in something like Go. Sure, you may end up with 50% more code, but you could have anybody up and running, reading and understanding the code, within 3 days.
IBM has done studies of this and found that fancy code is not all that valuable. It ends up falling into disuse over time as people don't get it. I have seen my fair share of C++ code which simply had to be tossed because nobody at the company could understand what the previous whiz kid had written.
That’s the beauty of the evolution of C++. Metaprogramming has become increasingly maintainable, as making it a first-class capability of the language is a core focus of the people designing it. The learning curve is finally low enough that it is practical. I think it is fair to say that C++17 is the first version where that is true.
You can’t write a comparable database engine in Go, fundamentally. The language lacks features required for competitive performance. The code difference will be much more than 50% trying to get the most out of what Go is capable of in this domain.
The point of writing code this way isn’t to be clever or for a “thrill”, it objectively produces superior performance, reliability, and maintainability. Defects scale with the number of lines of code regardless of language. Type safe code gen is a powerful tool.
Just look at the new features in C++17 that simplify the use of templates, like fold expressions, constexpr everywhere, and class template argument deduction working together with C++11's variadic templates. The last versions of C++ are mostly about making templates easy to use, which allows programmers to operate at a different level than languages such as Go and Python.
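For anyone who hasn't followed the newer standards, a small sketch of those C++17 features side by side:

    #include <iostream>
    #include <string>
    #include <type_traits>
    #include <utility>

    // Fold expression: sum any number of arguments, no recursive base case.
    template <typename... Ts>
    auto sum(Ts... xs) { return (xs + ... + 0); }

    // constexpr if: branch on type properties without SFINAE gymnastics.
    template <typename T>
    std::string describe(T x) {
        if constexpr (std::is_integral_v<T>)
            return "integral: " + std::to_string(x);
        else
            return "something else";
    }

    int main() {
        std::cout << sum(1, 2, 3, 4) << "\n";  // 10
        std::cout << describe(42) << "\n";     // integral: 42
        std::pair p{1, 2.5};                   // CTAD: no std::make_pair needed
        std::cout << p.second << "\n";         // 2.5
    }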
> But at the same time, there is no alternative that ticks all the boxes C++ does (performance, portability, maturity and stability).
As you identified yourself, that's not entirely true, because C offers all of those things. But the really tragic thing about the status quo is that because there is so much sunk effort behind the C and C++ ecosystem, the resulting momentum makes it difficult for new languages that are simply better to gain traction.
In a way, C++ is the ultimate demonstration of how important the surrounding tool, library and developer ecosystem is to the success (== usefulness in practice) of a programming language. If the language itself were the dominant factor, the writing would have been on the wall when almost every language at a higher level than C was adding features like first class functions and C++ was trying to do something vaguely similar with overcomplicated binder templates.
A language with nicer syntax for working with higher-order functions, Haskell say, might have used
    xs = [1, 2, 3, 4, 5]
    n = count_if (<3) xs
But for many years in C++, the closest theoretical equivalent would have been to write something like
    xs = // some standard container type, tediously constructed
    n = count_if(xs.begin(), xs.end(), bind2nd(less<int>(), 3))
instead. Obviously hardly any real programmers ever did that, and it's true that modern C++ is better in several relevant ways, but it's been literally decades and it still hasn't entirely caught up.
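For the record, here is the modern C++ version of that example, which closes most (but, as said, not all) of the gap:

    #include <algorithm>
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> xs{1, 2, 3, 4, 5};
        auto n = std::count_if(xs.begin(), xs.end(),
                               [](int x) { return x < 3; });
        // C++20 ranges trim it further:
        //   std::ranges::count_if(xs, [](int x) { return x < 3; });
        std::cout << n << "\n";  // 2
    }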
Meanwhile, several much more promising languages are struggling to break into the kinds of markets they deserve to because they haven't achieved a critical mass of support. And the world continues to suffer the loss of productivity and problems with security and reliability that come from using a language like C++ for things that do actually matter. It's an understandable situation, but still a regrettable one.
> As you identified yourself, that's not entirely true, because C offers all of those things.
You're right, I contradicted myself there a bit. Fixed. But C is almost a subset of C++, except for some technicalities. So it's a bit hard to call it an actual alternative. Switching from C++ to C is more or less a matter of restricting oneself to less functionality (which in some cases is a good thing, but I digress).
Other than C it's hard to point at a solid alternative for many C++ use cases.
> In a way, C++ is the ultimate demonstration of how important the surrounding tool, library and developer ecosystem is to the success
Indeed! This is something that any language wanting to compete with C++ must get right. It's a hard thing to do.
> This is something that any language wanting to compete with C++ must get right. It's a hard thing to do.
I suspect it's actually an impossible thing for a new language to do on its own, because it's not really a technical problem in the first place. It takes a new language with a "killer feature" and serious resources backing it to break through. Several of the relatively successful new(ish) languages have combined those two attributes, with the language being in some sense the favoured one for writing software that runs on a certain platform.
My point was that you should just use old-school C++ if you need to use C++. If your desire is to use fancy language features, and that is your primary focus, then you should switch language.
People should just accept that they work with a legacy language and not try to turn C++ into Haskell, Rust or something it isn't. They just turn it into a worse mess than it already is.
Most C++ code that exists and which is useful is written in old-school C++ anyway, and could continue to be maintained that way and wrapped for other users.
And really, C++ performance is IMHO somewhat overrated. It depends entirely on what you are doing. I think it was the CouchDB creator: he made his first version in C++ and struggled hard. Then he switched to Erlang, and despite Erlang running on a VM he got magnitudes higher performance and had to write less than half the code.
You know, the ones who squeezed the most performance out of the PlayStation 2 did it using Lisp, not C++. They used Lisp to create a DSL for PS2 assembly code. Thus they could do high-level Lisp coding as well as low-level assembly, all in one.
And today you have many scientists needing high-performance computing switching to Julia. Fortran will usually outperform C++ on number crunching. And there are quite a number of cases where Julia will outperform Fortran.
Yes, in real-time systems with tight memory requirements, something like Julia is not a good choice. But then again, in those cases you may actually want to prefer C or Rust.
Another option is to gradually replace parts of the C++ with Nim code set to output C++. As such you get the advantages of a high-level, low-friction language with fast compile times and move semantics, ABI compatibility with C++, and a really nice FFI. You can import or export functions and so on between the languages, use any C++ libraries in Nim (or vice versa), and even directly emit C++ from Nim if required.
Gradually porting like this lets you keep using your code base whilst introducing new or overly complex stuff in a language that's faster and easier to write (it usually ends up with less than half the LoC of the equivalent C++, and often way, way less than that). Metaprogramming is also very nice, easier to reason about and, perhaps most importantly, even with lots of macros it doesn't noticeably affect the fast compiles.
Another advantage is once you have some Nim code you can choose to change the target to C or ObjC (or even JS or LLVM) so you're actually increasing portability.
This all depends on how you rank 'maturity' of course. Nim's been around for longer than Rust and Go IIRC, and it's been rock solid for me but you may have different parameters. It certainly helps being able to directly use libraries for C and C++ if you can't find an appropriate Nim implementation.
Edit: For contrast, consider the challenges in the article for variants in C++, then the equivalent object variants in Nim:
    type
      MyVarKind = enum mvkNumber, mvkString
      MyVariant = object
        case kind: MyVarKind
        of mvkNumber:
          num: int
        of mvkString:
          str: string

    var myVariant = MyVariant(kind: mvkNumber)
    myVariant.num = 1
    myVariant.str = "Oops" # Error: 'str' is not accessible using discriminant 'kind' of type 'MyVariant'
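(For comparison, a minimal C++17 counterpart, not taken from the article; std::variant gives similar checked access, but the discriminant handling is less direct:)

    #include <string>
    #include <variant>

    int main() {
        std::variant<int, std::string> v = 1;  // currently holds the int alternative
        // Asking for the wrong alternative fails at runtime:
        // std::get<std::string>(v) would throw std::bad_variant_access,
        // much like Nim's discriminant check above.
        if (auto* s = std::get_if<std::string>(&v)) {  // checked access, no exception
            s->clear();  // never reached, since v holds an int
        }
        return std::get<int>(v);  // fine: v holds an int
    }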
Technically yes, but directives are inserted so GDB reports the line number, source code, and parameters of the Nim file.
However GDB shows the name mangling suffix in generated code, and types are their native types. There's a script called nim-gdb to add pretty printers for types to the GDB output to show the Nim source types.
Perhaps surprisingly though, the C and C++ generated output itself is fairly straightforward, even with name mangling suffixes. The inserted directives tell you the Nim source line so you can navigate it fairly well if you want to, and the suffix means the variable is uniquely referenced in the code. As far as I know you can use any debugger that supports the target language, though I've not tried anything but GDB myself.
It's very rare for me to dig into the generated code but sometimes I'm curious about the data structure analog in the target language. In the case of Nim's object variants, last time I looked when compiling to C they were ultimately reduced to simple checked union types.
Not yet, no. I hope with companies like JetBrains working on their Nim plugin https://plugins.jetbrains.com/plugin/15128-nim we'll see more focus put on a smooth IDE based debugging experience in particular.
Yes, if the target is C++ then the whole output will be in C++, including code for GC operations, so you can directly inspect how this is translated. You can actually choose between several GCs for different needs: https://nim-lang.org/docs/gc.html
It's worth mentioning that by default all types in Nim are stack allocated, and you have full manual memory management to the same level as C/C++ but with better type safety and less boilerplate. The GC is only used when you tag a type as `ref`, in strings, and the 'vector' dynamic list type, `seq`.
The newer GC, ARC (not related to Swift's ARC), is similar to RAII: scope-based, non-atomic, deterministic, shares memory between threads, is not stop-the-world, and uses move semantics: https://nim-lang.org/docs/destructors.html
This makes the GC a nice-to-use addition for resource management but not a fundamental requirement or speed limitation.
In my experience the default (thread-local refc + cycle collection) GC is very performant already, but it's straightforward to write code that works entirely on the stack, or create objects that wrap manual heap allocs, or use custom external memory allocators. Passing `--gc:none` removes the GC entirely from the compilation target, for example if you're working with very constrained embedded devices with the caveat that less of the stdlib is available (currently).
The ARC GC (which doesn't handle cycles, unlike its sibling ORC) is aiming to be lean enough to be used in hard-realtime and memory-constrained embedded systems. For hard realtime, though, I'd expect most people would just manually manage their types anyway, on the heap or stack.
If you're doing interop between Nim and C++ and want Nim's GC to manage types that you're passing directly to pure C++ code, you can tell the GC that the data is still being used with GC_Ref() or not with GC_Unref(). There are a few libraries for C/C++ interop, such as: https://github.com/nimterop/nimterop
Personally, in this case I would probably just manually allocate memory in Nim or C++ and not use GC'd types across boundaries, for clarity if nothing else; still, it's an option if your design requires it.
Rust has all of those other than portability. Rust's ecosystem, while it is missing some things, also has things which are lacking in C++, like a good, mature package manager.
Another option if you're on Linux/BSD is to use your native package manager. C++ packages are usually included in those repositories. You also may be able to install Brew, Nix, or Guix on any given distribution.
It's more like snowboarding or any activity with a high initial learning curve before a big payoff: It's hard, even painful, at the beginning, and you might come home covered in bruises, but eventually it "clicks" and it's a lot of fun.
Stockholm Syndrome is more about convincing yourself that a bad situation is good because you're stuck in it. I don't think there are too many people "stuck" with Rust yet. On the contrary, I think the borrow checker is a draw.
The modern video game platforms are: PC, Xbox Series, PS5, Xbox One, PS4, Switch, Android, iOS, Web
Rust has backends for all of them, even the web via wasm.
While Rust is not as portable, it's not like it's strictly limited to well functioning x86 boxes, it'll produce binaries for anything that LLVM supports, which is quite a lot by now.
They are not directly supported by the vendors. That said, tools that work on C and C++ will at least somewhat work on Rust, so you also get sorta kinda half support even if it's not officially supported.
We'll see if this changes in the future, especially with some of the names doing Rust dev in these places.
Internally I know Sony had a Rust compiler for the PS4, I don’t know much more than that (like if it was used for anything). Just saw the repo was updated quite often so it was definitely still WIP at least, and it had some code samples for console apps. Never tried it on my dev console to see if it worked.
IMHO the last big upgrade to C++ was C++11, which is supported by lots of toolchains. It added move semantics, decent smart pointers, and a crapload of other improvements.
C++14 and C++17 were minor upgrades by comparison, so I'd say C++11 can rightly be called modern C++, though others may disagree.
C++20 is barely out yet, and not supported by any toolchain. If that's the only version that counts as modern C++, then I agree that modern C++ isn't portable :)
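A small sketch of what C++11 alone bought (note that std::make_unique, which removes even the remaining 'new', only arrived in C++14):

    #include <memory>
    #include <utility>
    #include <vector>

    int main() {
        // C++11 smart pointer: no naked delete anywhere in user code.
        std::unique_ptr<std::vector<int>> p(new std::vector<int>(1000));

        auto q = std::move(p);  // C++11 move: ownership transferred, nothing copied
        return q->size() == 1000 ? 0 : 1;
    }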
I'd argue that Rust is as portable as C++, if not more so, in the sense that there are no real barriers in the language itself that limit portability. It's harder to write unportable code in Rust than in C++, because a lot less in idiomatic Rust is undefined or foot-gunnable.
Of course, when it comes to practical portability today, you're right: there are plenty more C++ compilers for different platforms than Rust compilers (then again, there are more C compilers than C++ ones in that regard). This may eventually change over time, though, but it sure is something to consider today.
PS and edit: I don't mind you disagreeing with me, but would anybody care to illuminate me as to what your disagreement is about?
> Java's used in HFT, so it definitely can compete.
This is a bad argument and if you're using Java, you're not competing in the very speed-critical parts of HFT. Nor is Java common among HFT shops at all.
I know people who sit in HK and have been doing HFT/quant stuff for years, and they say something completely different: that Java is popular among the biggest trading/stock systems.
So saying that
> Nor is Java common among HFT shops at all.
sounds weird, especially when Java appears in job postings.
When I use C++ (for personal projects), it tends to be because of the libraries I want to use (eg EnTT[1] or Immer[2], both of which use templates so can't be easily used from other languages). Eventually something like Rust or D may have equivalents to all the libraries I want and I could use those for such projects instead, but for now, if I want to use those things, I need to use C++.
You may also want to look at this talk: "What C++ got Right" [3], where the guy shows some nice things that C++ does that other languages don't. The other languages versions of the code he shows does tend to be simpler and nicer, but the C++ versions tend to give you more control. Some people value that control (rightly or wrongly) and other languages that don't provide it will never feel right to them.
I do personally quite like Rust and I'm eagerly watching Zig and some of the other new contenders, so I'm all for moving to something more pleasant than C++, but modern C++ gives just enough new niceness that I don't feel pushed to jump ship until the alternatives check more boxes.
I'm not replying to detract from your point, because I agree with you wholeheartedly that competing ecosystems are not mature enough to dethrone C++ if you're someone who is pragmatic and wants to get things done. C++ has an enormous number of high-quality libraries, often with no equivalents in competing languages.
There are some compelling ECS frameworks emerging in Rust if anyone is interested, though.
Bevy -- https://github.com/bevyengine/bevy -- not strictly comparable as it's an entire game engine, but it's built around an (imo) more ergonomic fork of hecs.
There are some great ECS frameworks for a lot of languages these days, but in my personal opinion (rightly or wrongly), EnTT is in a league of its own in terms of features and performance; it's very mature, constantly improved, and, possibly the most important part, I have experience using it.
The logic seems a bit circular to me. I cannot imagine you would have even discovered those libraries if not for the fact that you specifically wanted to use C++ libraries. Neither game engines nor immutable data structures are anything special about C++.
You could have used Godot, Unity3D or a whole host of other game engines if you wanted to make a game. If immutable data structures are your thing, wouldn't a functional language be a better fit, such as Haskell?
But I do know the problem you talk about. I have mostly used C++ in the past because you could not get any good GUI libraries in other languages.
I have played around with Zig a bit. Tried Rust earlier, but it seems to me to duplicate many of the things I didn't like about C++: slow compile times, complexity, verbose syntax.
Zig I am still making up my mind about. It looks to be on the right track. But for now I think Swift and Go are pretty good alternatives to C++, a bit depending on what you are making.
Personally I prefer Julia, but that is not exactly a drop-in replacement for C++. You don't get fine-grained control over memory usage, and it is not suited for real-time systems or a small memory footprint. Zig is great at all those things, though. Julia does replace C++, however, in most cases where you simply need raw number-crunching performance, such as simulations, scientific computing, machine learning, etc.
> The average programmer simply cannot deal effectively with this modern C++. It is too complex.
I interpret this assertion with a high dose of incredulity and skepticism.
You're specifically referring to a single class that's a part of C++'s standard library.
C++'s standard library is extremely spartan when compared with other programming languages, such as Java.
It's unbelievable that C++'s inclusion of stuff such as std::visit in its standard library renders it "too complex" when the extent and arcane nature of other standard libraries are incomparably greater.
Hell, some toolkits and frameworks are incomparably more complex, and we don't see them accused of being unusable.
I am talking about the total cognitive load of modern C++. Not one specific class.
I highly doubt people express in public that a particular library is too complex. They simply choose not to use it. I remember the Boost Graph library once. What a mess. I never said publicly that I couldn't figure out how to use it. Nobody does that, because nobody wants to appear stupid. Instead I just wrote a graph library in C++ from scratch myself. I discovered that was faster than figuring out the template insanity in that graph library.
I have worked on usability for a number of years and done usability testing on people. And rarely if ever does anyone blame the system. They blame themselves. Or they shut up because they don't want to appear stupid.
That is why, when you do usability testing, you have to sit and observe people. You cannot rely on questionnaires, because people seldom admit the system was complex.
Actually observing a variety of people try to program modern C++ would likely have been a rather sad experience.
It's interesting to me that, in between the irreducible complexities of hardware/x86 ops and real-world domains, C++ has emerged as a middle level of complexity in this stack. You can explore C++ features and syntax for years, and there are entire podcasts and conferences that mostly just talk about language features and never run out of things to say. I wonder if some programmers are attracted to this complexity, either for the challenge or because they enjoy exploring abstract human-made systems.
It is not only legacy reasons why people are "stuck" with C++. In embedded and real-time applications, using a high-level (GC'ed) language like Haskell is totally out of the question. D and Rust might be feasible, but typically lack the FFI you need for integration. I mention this because I recently wrote a DSP program for a (eurorack) synthesizer module, and C++ was pretty much the only sensible option. And I actually used variant in that codebase, even sending a (trivially copyable) variant over an RT queue to separate the RT thread from the UI. And std::visit was pretty useful in that context. I don't think it makes sense to rant against particular language/library features. Almost all of them are there for a reason. It might just be that you don't see the reason right now. The design space for code is huge.
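A minimal sketch of that pattern, with made-up message types (printing stands in for the actual DSP work):

    #include <cstdio>
    #include <type_traits>
    #include <variant>

    // Hypothetical UI -> RT-thread messages; both alternatives are trivially
    // copyable, so the whole variant can be copied through a lock-free queue.
    struct SetFrequency { float hz; };
    struct SetWaveform  { int index; };
    using Message = std::variant<SetFrequency, SetWaveform>;
    static_assert(std::is_trivially_copyable_v<Message>, "safe to pass by copy");

    // On the RT thread: dispatch with no heap allocation and no virtual calls.
    void handle(const Message& m) {
        std::visit([](const auto& msg) {
            using T = std::decay_t<decltype(msg)>;
            if constexpr (std::is_same_v<T, SetFrequency>)
                std::printf("freq -> %f\n", msg.hz);
            else
                std::printf("wave -> %d\n", msg.index);
        }, m);
    }

    int main() {
        handle(Message{SetFrequency{440.0f}});
        handle(Message{SetWaveform{2}});
    }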
You are specifically describing being stuck with C++ for legacy reasons while trying to disprove it. You say you cannot use D and Rust because of the FFI needed for integration. How is that not a legacy issue? You have existing C++ code you need to integrate with. How is that not a legacy issue?
What I am talking about is that if you don't have to use a C++ library to solve your task, there is simply no sane reason to pick C++.
From such a futuristic point of view, I guess you are right. However, it will likely take a few decades until your wish is granted and everything has been rewritten in language X. Calling all the interfaces your OS gives you "legacy" is pretty far-fetched, man :-)
I consider myself a good architect, but I am miles away from being deeply familiar with all the sorcery of its metaprogramming facilities. I have real things to do and prefer to invest most of my mental effort in more practical and productive things.
I do, however, use C++ to write servers, and if I don't try to play language guru in my code, modern C++ along with the STL feels incredibly easy to use.
Against massive initial skepticism, I’m becoming very impressed with Rust. I’m now going to be ‘that guy’ - apologies.
After years of being annoyed by Rust people claiming C++ is dying or too unsafe or whatever, I expected this to be another C# - a sort of C-with-training-wheels for people who care more about productivity than perf. Something I could safely ignore.
This ‘borrow checker’ stuff sounds great for kids, but for disciplined, experienced C++ guys it’s at best a non-feature and at worst would just get in the way, like GC, right?
In fact, I’m finding the reduced mental overhead (not just from borrow, but from the language as a whole being smaller and more consistent) is a huge win. Even learning the language and stdlib as I go.
I’ll still use C++ proudly where it’s the best choice, but I’d put some effort into trying Rust first. As a cpp-master-race guy, that surprised the crap out of me.
Why would you write your own basic data structures when the standard library provides them for you? I haven't written an implementation of a doubly-linked list by hand in a very long time.
If your point was about explicit ownership being a little more tricky with self-referential structures, that's true, though the existence of std::collections::LinkedList shows that it's entirely possible to manage these structures and at least you still get the benefits of explicit ownership for the structure and the data within it.
> Why would you write your own basic data structures when the standard library provides them for you?
I think the problem with this view is that Rust is not being used as a true programming language (like for implementing algorithms), but more as a "glue language".
If all the interesting stuff has to be done in libraries in "unsafe" mode, and all the uninteresting stuff (gluing) is done with borrow checking, then Rust is more a glue language. Which is fine. But don't call it a general purpose programming language.
I respectfully disagree. Using tools provided by the standard library is common practice in any general purpose programming language. It just means that the algorithms you're implementing can concentrate on the novel aspects that provide additional value and not get bogged down with reinventing wheels for the basic implementation details.
Also, you seem to be suggesting that it's necessary to use unsafe code to implement these data structures in Rust. For something like the doubly-linked lists that were mentioned in an earlier comment, it usually suffices to use tools like Option and Rc for the links and to encapsulate the logic that manages those links inside functions like push_back and pop_back that you are surely defining to mutate your data structure anyway. Is there something specific you have in mind where this kind of strategy doesn't work and unsafe code is the only solution?
I write my own basic structures when I need a specific implementation for performance.
Most of the time it's about memory layout or caching a specific internal pointer for some hot code.
Sure, but unless the high-performance code you work with is very different in nature to the high-performance code I've worked with, you probably have a small set of carefully implemented and very well optimised structures in your toolbox and write everything else using them. You do need to deal with the explicit ownership semantics in Rust when you're first implementing those tools, but that's probably a minor inconvenience up front, and afterwards things mostly just work like the equivalent data structures in any other programming language, with the added safety guarantees from the ownership system and borrow checker.
A doubly-linked list is just an example. There are other structures that are both more useful and harder to generalize, like various kinds of trees, for example.
Anyway, if your question is why implement your own data structures at all, then it's very likely C/C++ isn't the proper tool for your job anyway.
> There are other structures that are both more useful and harder to generalize, like various kinds of trees, for example.
Trees are usually easy because each node has a single parent all the way up to the root and ownership can follow the same hierarchy. More general graphs are a little more tricky, but the same techniques that you can use to implement a specific case like a DLL generalise. For example, in Rust, you can combine Option and Rc to handle, respectively, the intermediate states while the structure is being manipulated and the shared ownership semantics.
Self-referential data structures are a code smell in non-GCd languages, and I think I have neither used nor written a doubly linked list since university.
You can do both in unsafe Rust though. But you probably shouldn't, no matter what high-level non-GCd language you use.
/Another ex-cpp-master-race guy who has been impressed with Rust
I see doubly-linked lists and other self-referential structures all the time in low-level code. Boost even has a whole library dedicated to them: http://www.boost.org/libs/intrusive
Yeah, hence "high-level non-GCed language". Using heap-allocated linked lists is rarely the way to go if you're not developing for embedded.
However, if you're developing for embedded you'll probably want to be familiar with unsafe either way if you're going to use Rust.
Regarding self-referencing: Rust encourages you to design differently, which I've found leads to nicer and more thought out designs most of the time. Not all algorithms are possible (or efficient) without it though, and for those you'll have to use unsafe if you _really_ have to implement them yourself.
There is a weak_ptr equivalent; the borrow checker makes it basically impossible to use references to implement it. You can still do it with raw pointers exactly like you would in C++ too.
(And yes, it's a niche data structure, vecs are far more common, but it's people's go-to "the borrow checker restricts you too much" example anyway.)
You would think so, but it's sort of just become a standard thing on forums, someone claims that linked lists are impossible, someone else talks about how you can do them in various ways (possibly linking to the "Learning Rust with Too Many Linked Lists" book, which is awesome, by the way), and how you don't actually need them in real code, followed by someone saying that this is just Rust stockholm syndrome and they use linked lists all the time... just the fun of being on forums :)
Haha, forums gonna forum I guess. Reminds me of the “but how can I trust std containers to allocate for me? Anything could happen!” crowd in the 90s and early 2000s.
FWIW I think the Rust community has done itself a disservice by focusing on the safety benefits. Const by default and move by default are much bigger deals in the day to day coding experience, but it’s hard to get across just how awesome they are.
That book looks great, thanks - should take me through a few corners I’ve yet to meet while keeping me entertained.
You're welcome. I sorta kinda agree on the safety thing; I wrote a blog post that kind of kicked off a whole discussion about this a while back; https://brson.github.io/fireflowers/ is a really beautiful summary of all of that.
After 20 years of C++ and about the same number of hours of Rust, Rust makes me feel like Fire Mario because when I come to do a strict correctness and memory efficiency pass after writing something - it’s already correct. I suspect this will have a compound interest-like effect, similar to good code coverage empowering change.
The context makes it clear that what's meant is "strictly worse as a C++ alternative".
Rust, like you said, isn't there yet. And I'd take issue with the claim that the gap has been shrinking - C++ right now is developing at a faster pace than Rust.
Rust is excellent overall, but is lacking in interoperability (I mean, it is capable -- but interop is burdensome, relatively). Related to the overall topic of sum types, there is ergonomic support in the form of the built-in enum.
D is excellent overall (and I think sadly under-recognized for this), but it is lacking in ecosystem. Related to the overall topic of sum types, there is an excellent library, `sumtype` which recently hit 1.0 status.
Interoperability is generally a disaster. Most languages can interoperate with C ABI functions, but even that is painful in some cases. It seems like most languages consider this an afterthought when it should be a central consideration. There will never be a time when everything is written in just one language.
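On the C++ side, the usual answer is to export a flat C ABI facade over the C++ types. A hypothetical sketch of the pattern:

    #include <string>

    class Engine {
    public:
        void start() { state_ = "running"; }
        const char* state() const { return state_.c_str(); }
    private:
        std::string state_ = "stopped";
    };

    // Flat C ABI that Python/Rust/Go/etc. can call; the class stays opaque,
    // callers on the other side only ever see a pointer-sized handle.
    extern "C" {
        Engine*     engine_new()            { return new Engine(); }
        void        engine_start(Engine* e) { e->start(); }
        const char* engine_state(Engine* e) { return e->state(); }
        void        engine_free(Engine* e)  { delete e; }
    }

Every consumer then re-declares those four signatures in its own FFI syntax, which is exactly the per-language busywork being complained about.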
Calling C functions with minimal fuss and good performance is the core requirement, which means C header parsing and linking or DLL loading support. It also requires built-in or standard library support for whatever convenience types and functions are needed to make using real C APIs pleasant and as safe as possible. Support for exporting functions and passing function pointers the other way is great too, if it can be done.
Go with CGo is okay from a usability perspective but performance is poor. I haven't looked much at Rust's implementation so I can't comment. I've also heard Python is not bad.
Python was actually the reason I asked the question. There are multiple ways to integrate Python and C, depending on exactly what you want to do. The simplest is to use ctypes, which almost reduces calling C APIs in a DLL/SO to calling a function to load the shared library and then calling the APIs like any other function.
The most obvious practical limitation is that ctypes can only infer so much about the required parameter and return types of the C functions from how you call them, so you have to specify the types explicitly in your Python code in any other case. That's no harder than writing a C header file, though, and I expect there is probably a tool somewhere that would automatically generate the corresponding Python/ctypes code for the common use cases.
There are a few other warts that aren't quite as intrusive, including some restrictions relating to pointers and some verbose marshalling if you're passing strings around. But overall, it's a very convenient way to call out to C-style APIs from Python code that probably covers 95+% of practical use cases with only minor inconvenience compared to what you'd have written doing everything in either language on its own.
It’s not complex problems. It’s being close to the metal. Rust isn’t bad but it’s still not as good as C or C++ if you need to get every last bit of performance or need to be really close to the OS and hardware.
Rust is coming along though. It’s the first language I have seen that might unseat C++.
I work at a company where we're doing software at all levels of the stack, from the firmware up. It is all in Rust. Even our firmware. It's working really well for us.
I think it's fair to call it a complex problem, if referring to the task itself of creating performant software (which I assume is what he did). I agree with otabdeveloper in general. I'm learning C++, and over the past year there have been many occasions where I've found myself dumbfounded by patterns in the language. But in 99% of the cases, the outcome is that the patterns are there for a reason. And the reason is performance.
Funny how people obsess about getting "every last bit of performance" and yet we are building applications in layers upon layers inside bloated Web browsers...
I think that is a bigger problem than using a language that gives you 10% less performance than C++. And anyway, algorithms are typically more important than the language in terms of performance.
> It’s the first language I have seen that might unseat C++.
Doubtful, as C++ has language standards and a visible, comprehensible story on how they evolve. C++ has also been evolving faster than Rust.
Right now Rust fills the "C++ with training wheels" niche - when you need a language in this space but don't have the energy or time to disentangle the sordid story of all the crufty legacy systems built around C++ after all this time.
But in principle, if somebody made a well-thought-out onboarding story for C++, then there'd be little point in Rust.
P.S. But you say "my borrow checker" - well, sorry to disappoint you, people don't really care and actually view it as an impediment in learning Rust, not as a feature. If Rust had no borrow checker it'd be an even more popular language.
The borrow checker is one of Rust’s main selling points, and was what got me into the language.
If you don’t see the value of compile-time memory safety in Rust, you probably haven’t spent hours debugging subtle runtime memory safety errors in a large C application.
This, anyway, is correct: In 30 years of coding maximal-performance C and C++ systems, I have not spent hours debugging subtle runtime memory safety errors. I have spent strictly more time tracking down compiler bugs.
I can count on one hand the memory errors I have debugged in five years. The longest took a half-hour. (Code that used a "relaxed" atomic load was moved to a place where it needed "acquire" -- which is the default.) So, I eagerly prefer (rounding up) 2.5 hours of debugging spent in five years over spending 5 extra hours every week -- 1750 hours in 5 years -- waiting on builds from a very slow compiler. (I don't know any reason why the Rust compiler could not be two orders of magnitude faster, and notably faster than a C++ compiler.)
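(For anyone unfamiliar with that bug class, a minimal sketch of the relaxed-vs-acquire distinction:)

    #include <atomic>

    std::atomic<bool> ready{false};
    int payload = 0;  // written by the producer before the flag is set

    // Producer thread:
    //   payload = 42;
    //   ready.store(true, std::memory_order_release);

    // Consumer thread: with acquire (or the seq_cst default), the payload read
    // below is guaranteed to see 42. With memory_order_relaxed, the flag and
    // the payload are not ordered, and the read can see stale data.
    int consume() {
        while (!ready.load(std::memory_order_acquire)) { /* spin */ }
        return payload;
    }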
My experience may be unusual in that I do not habitually depend on code badly-written by other people. My complaints about other people's code I do use are more typically about performance, conformability to requirements, or poor usability.
I haven't been writing C for anywhere as long as you have, but I have definitely spent quite a bit of time printf() debugging and digging through corefiles in GDB. I also have never had to deal with compiler bugs, but that could be because IOS-XR uses a stable/old compiler toolchain.
I could be wrong, but I would wager that you are definitely an outlier among C and C++ developers. For the remaining majority, I believe that the compile-time memory safety guarantees that Rust provides are huge.
As to why your experience is different, it could be that you're simply (1) a talented programmer, (2) work with talented programmers, or (3) don't work on a large and complex legacy codebase, or some combination of the above.
The people who choose Rust over C++ aren't (generally speaking, there are of course exceptions) doing so because of the borrow checker. They're doing so because Rust has Cargo and C++ doesn't.
I’d love to read your detailed comparison of C++ with all of the languages you mentioned. You seem certain that they’re strictly worse.
I can make the case that in some circumstances like low latency requirements, C++ is superior to a GC language like Haskell. But it’s harder to make the case that it’s “strictly” better.
The amount of complexity you can have in a system without it falling over is finite. You need to find simple tools, or at least comprehensible abstractions and simple compositions, to solve complex problems. You can only afford to use complex tools when solving simple problems.
C++ is a simple tool with comprehensible abstractions. You just don't understand it because, coming from dynamic languages, you don't have experience with how computers and compilers work under the hood.
This sounds pretty condescending to me. I've worked on C++ for a long time, including on bare metal and compilers, and yes, the abstractions are totally comprehensible. But keeping them in your headspace is a full-time job. The sheer size of the standard belies the word "simple".
I used to sit next to a member of the C++ standards committee. He was quite open that he didn't fully comprehend the abstract machine and had to rely heavily on documentation. And as you'd expect he's a wizard at C++. There are a lot of great things you can say about C++, but that it's easy to understand is not one of them.
(It's also very funny because deeply understanding how compilers and computers work under the hood and making sure our C++ code keeps running is, in fact, one of the things I do in my full-time job.)
That’s a big assumption to make. I think it’s valid to say that C++ is complex and more difficult to understand than other compiled languages without resorting to ad hominem arguments.
Haskell is much more complex than C++, for one example. (How many language extensions does Haskell have now? A hundred?)
C++ is easy to understand when you understand the mental model its abstractions target.
(In some sense you're right: the mental model of Haskell is mathematical, while the mental model of C++ is some sort of multi-core "Harvard architecture" machine. While it's true that math is a lot easier for the human brain to grasp, this isn't really related to language complexity per se.)
Suggesting everyone using C++ isn't capable of properly evaluating it (which the comment author can clearly tell from the outside, for every use case) is "not derogatory"?
(and from what I understand "Stockholm syndrome" is not classified as a medical condition because it's questioned if it actually exists as thing that can be usefully delineated, not because it wouldn't qualify)
I'm certainly not claiming "C++ is perfect and everything else is useless" or "Rust doesn't stand a chance", but this "there has to be something wrong with you if you use it/ever consider it more suitable" is seriously annoying, and disappointing to see as the top comment here (even if it's not the worst pro-rust denigration of C++-users I've seen). And is damaging the chances Rust et al have in these spaces.
Medical condition is a very different thing from a mental illness though. I'm quite sure there aren't mental _illnesses_ which last 1-2 days like Stockholm syndrome does.
>Suggesting everyone using C++ isn't capable of properly evaluating it (which the comment author can clearly tell from the outside, for every use case) is "not derogatory"?
I see it more as "I wonder if they have given other languages a chance?". There's definitely a lot of C++ devs who aren't giving for example Rust a try, and don't see the negatives of C++ because they haven't experienced anything else lately.
If anything, it was derogatory towards C++, not the devs.
"Stockholm syndrome" suggests a lot more "aren't mentally capable of" than "wonder if they have".
And the answer to the question "I wonder if any C++ developer has ever given other languages a chance" is also quite obviously "yes" (remember, the claim was about "all"), not "no, they never have".
Simple by which metric? As far as programming languages go, C++ is one of the most complex ones. Which is funny, because e.g. Haskell has a way more advanced type system and is still easier to understand.
I've been writing C++ since 1990. I make a good living from a little C++ app (500KLOC) that sells in fifty countries. I love the ultra-static typing. It's saved my bacon many a time.
Long ago "modern" C++ crossed into a territory that I will never understand or use. STL is the fanciest stuff I write. Code needs to be maintainable. If I have to hire geniuses to collaborate on my code, that's not going to be cost-effective. So I use a conservative subset, and thank my lucky stars I don't need to pay attention to the latest exotica.
> Long ago "modern" C++ crossed into a territory that I will never understand or use.
Do you know where the line is? C++11? Later versions?
I ask because I quite like a lot of the changes that C++11 brought to the language; I feel they make my code much simpler to understand and maintain. But I got very confused by post-11 C++ for a long time, to the point where I said I'd never bother with C++17 or later (14 turned out not to be too scary once I actually sat down with it), because 17 and 20 added so much stuff that just looked beyond me. But I've since changed my mind and have started to experiment with C++20, after I sat down and read a few articles and watched a few cppcon videos on the changes. I don't know all that these versions have to offer, but I do understand the main changes enough to get by, and most of them do make the language better, in my opinion.
Overall, the language is a mess, simply because they keep adding to it and very little stuff gets deprecated, removed, changed or cleaned. That leaves a large and messy language, especially compared to more recent deliberately-designed languages, but it turned out I could understand "most" of the things, once I had someone explain them to me. A few times ;)
Having said that, it probably isn't worth your time to "modernise" the C++ in your money-making codebase at this stage, if you're comfortable with it how it is. You likely wouldn't see a return in the effort it would take.
For C++17, the single best thing for me was structured bindings: you can destructure tuples and pairs with a simple syntax (where previously you needed std::tie):

    auto [a, b] = functionThatReturnsATuple();
For C++20, the one single thing I'm excited about is designated initializers (which were already in C99! It's kinda mad that this came so late). Basically you can initialize structs with parameter names in a single line of code:
    struct Person {
        std::string name;
        int age;
    };

    Person person = {.name = "Bob", .age = 29};
Anything else from C++17 onwards I really don't find a use for right now. Compile-time programming using constexpr and consteval is great, until you realize that it works under the hood by evaluating the AST directly (which is the slowest possible way for an interpreter) and would make your build times skyrocket to the moon. Concepts are kinda nice because of better error messages, but that doesn't really matter for me while heavily templated libraries like the STL and Eigen aren't fully ported to the feature. Ranges are just unnecessary for me; I learned the hard way that for loops are much better in terms of clarity and performance. And for modules: again, it only matters once the STL, tooling, and major libraries are ready for it, and I'm skeptical that will be achieved any time soon.
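(To illustrate the kind of compile-time evaluation I mean -- a trivial C++20 sketch, where the compiler effectively interprets the function body itself:)

    // consteval forces evaluation at compile time; the compiler runs the
    // body in its own interpreter, which is why heavy compile-time code
    // inflates build times.
    consteval int factorial(int n) {
        return n <= 1 ? 1 : n * factorial(n - 1);
    }
    static_assert(factorial(10) == 3628800);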
For C++20, I agree with you on Concepts. They're nice to have, but it's probably going to be quite rare that I will use them. I am, however, interested in coroutines and modules too (the other two of the three main things 20 adds). I also agree on the initializers; that's something I wish C++ had added long ago, since C has had it for so long.
Funny thing is, I didn't even think of ranges while writing my comment above. I like the idea of them and they do allow some nice code, but it's not something I've been wishing I had.
For C++17, there's a bunch of small things that I like: standard attributes like [[nodiscard]], nested namespace definitions are a nice convenience, constexpr if statements, and some of the library additions are nice (string_view, optional, etc). Destructuring with auto, as you said. Nothing as exciting as what C++11 or 20 adds, but still nice to have.
If that is your favorite thing from C++20, I have very good news for you: You hain't seen nothin' yet.
Saying concepts are just for better error messages is like saying types are just for error checking. You can rely on them only for that. Or you can put them to work.
Most of what makes C++ powerful is from putting types to work. Concepts are just as powerful. You haven't seen yet what they can do.
Agreed. They are the compile-time interfaces to templates' compile-time classes (so to speak, though obviously templates aren't limited to classes). I've been wanting them for so long; naturally error messages are better, but documentation will be better integrated/generated too. Just being able to understand what's expected of a templated type in a compile-time verified way is invaluable. I've taken to writing the concepts I'm using in comments (in the concept syntax) and can't wait to start uncommenting them.
Really, you can now overload based on concept matching, and so tailor implementations according to the family of types passed, analogously to overloading regular functions based on actual types, and overloading templates based on partial matches.
Concepts are a fundamentally powerful tool. Haskell people will recognize them as akin to what they call type classes.
Thus, in P2187 http://wg21.link/p2187 we have a conditional swap that, for any small- and simple-enough type, swaps using CMOV instructions instead of a probably badly-predicted branch. This mechanism works in released GCC 10 today. (Note that what goes into the Standard Library will probably differ, in details, substantially from P2187R5.)
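(Not the P2187 code, but a minimal sketch of the overload-on-a-concept mechanism it builds on; the concept name and size threshold here are made up:)

    #include <type_traits>
    #include <utility>

    // Hypothetical concept: small, trivially copyable types where a
    // branchless swap pays off.
    template <typename T>
    concept SmallTrivial = std::is_trivially_copyable_v<T> && sizeof(T) <= 8;

    // Chosen when the concept is satisfied (the more constrained overload wins).
    template <SmallTrivial T>
    void swap_if(bool cond, T& a, T& b) {
        T ta = a, tb = b;
        a = cond ? tb : ta;  // compilers can lower these to CMOVs
        b = cond ? ta : tb;
    }

    // Fallback for everything else.
    template <typename T>
    void swap_if(bool cond, T& a, T& b) {
        if (cond) std::swap(a, b);
    }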
Currently, I use this to implement hasMemberBar and I wonder if there's a better way now: https://gist.github.com/danielytics/8944b51ca51280fba30baab6... Especially something that works without using the preprocessor, while still not needing a new implementation for each member I wish to check for.
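(I haven't looked at the gist, but in C++20 a requires-expression does most of this in one line. You still need one concept per member name, since identifiers can't be template parameters:)

    // One short concept per member name -- no preprocessor needed.
    template <typename T>
    concept HasBar = requires(T t) { t.bar; };

    struct Foo { int bar = 0; };
    static_assert(HasBar<Foo>);
    static_assert(!HasBar<int>);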
I want to use concepts, but it's not integrated with the STL yet (which is a baffling decision - aren't you supposed to dogfood new features into the standard library if you want to encourage its usage and test its strengths and weaknesses?...) My code is not that heavily templated (and wouldn't benefit that much in doing so), so the only reason concepts would be good for me right now are having better error messages in other heavily templated libraries like STL and Eigen.
> > Long ago "modern" C++ crossed into a territory that I will never understand or use.
> Do you know where the line is? C++11? Later versions?
It's funny to read this, because I purposely avoided C++ for the better part of twenty years after dabbling with it in college--not worth it to deal with so much complexity. I've started using it in the last couple of years for work, and the modern part is what makes it bearable.
C++ is a big beast, so there is a cost to each new standard simply by virtue of adding features. However, the types of things I want to use in C++14 are pretty simple and worthwhile; e.g., better constexpr functions, and the _t aliases (like std::enable_if_t<>, rather than typename std::enable_if<>::type). Similarly, C++17 has a few new templates, like std::optional and yes, std::variant, along with ergonomic improvements like the _v aliases (like std::is_floating_point_v<>, rather than std::is_floating_point<>::value).
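(A minimal before/after of those aliases; the function itself is made up for illustration:)

    #include <type_traits>

    // C++11 spelling: `typename ... ::type` and `::value` everywhere.
    template <typename T,
              typename = typename std::enable_if<std::is_floating_point<T>::value>::type>
    T halve_old(T x) { return x / 2; }

    // With the C++14 _t and C++17 _v aliases.
    template <typename T,
              typename = std::enable_if_t<std::is_floating_point_v<T>>>
    T halve_new(T x) { return x / 2; }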
Now that the glacial pace of standardization has been replaced with a three-year cycle, I almost want something like LTS releases. I use C++11, and a bit of C++14, but when is it safe to use C++17 or C++20?
C++14 is really just a small patch of improvements to 11, so I think if you're going to adopt 11, you may as well adopt 14 and get a better experience.
And that is where I intended to stop, but then I started reading and watching content on C++17 and realised the things I thought were complex and scary were not so bad. I mean, there's much code I just don't understand, but in terms of how it affects me personally, it's not so bad and adds a bunch of improvements.
C++20 is a different beast altogether, and I told myself it's too complicated so I'd never bother beyond C++17. And then I watched some cppcon videos and realised that concepts, ranges, modules and coroutines are pretty cool. I still don't understand anything but the basics of concepts, and ranges have some cool properties, but otherwise I dunno if I'll use them. But coroutines open a world of interesting possibility, and modules have a lot of potential too, so I've started playing around with C++20 and will slowly learn to use it (although I did already notice some stuff that's not yet implemented in the clang version my package manager gave me... so not quite ready yet).
I've been using C++ about as long as OP, and I found much of C++11 to be practical and useful so adopted the useful bits into my tool chest. Since then, the additions seem (to me) to be more academic and interesting to computer scientists, so I haven't really found the need to keep up. I think C++11 hit the sweet spot. Threading, Smart pointers, std::unordered_xxx hash tables, strongly typed enums, lambda functions, are great and useful. Tuples, initializer lists, range-based for loops, type inference are convenient, I don't see how they hurt. R-value references and constexpr are neat academically, but I've so far failed to see the practical killer app for them, or how they could improve my existing code. The rest of C++11, I either haven't looked into or just haven't been convinced make my code better than without.
> R-value references ... are neat academically, but I've so far failed to see the practical killer app for them,
The killer app is unique_ptr. Without move semantics, unique_ptr wouldn't be nearly as nice to use. There's a good reason that it has a move constructor but not a copy constructor.
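(A minimal sketch of the point:)

    #include <memory>
    #include <utility>
    #include <vector>

    std::unique_ptr<int> make() {
        auto p = std::make_unique<int>(42);
        return p;  // moved (or elided) out; no copy ever happens
    }

    int main() {
        std::vector<std::unique_ptr<int>> v;
        auto p = make();
        // v.push_back(p);          // won't compile: unique_ptr has no copy ctor
        v.push_back(std::move(p));  // ownership transferred explicitly
    }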
Sometime around smart pointers, or "managed" pointers. If I can't figure out how to use something in a few minutes, it's probably going to cost more in overhead than it benefits.
Likewise. I've been writing C professionally since '83, and through the decades of my career spanning computer graphics, embedded systems, video games, film VFX, high-compute web apps, and now facial recognition, the rule has been to use a conservative subset of C++, with STL being about as fancy as one gets, and with an eye towards maintainability. That has been satisfying. By working with and employing people with backgrounds like former video game developers, they know the value of only optimizing bottlenecks. Clean, easy-to-read C++ that any C programmer could follow works very well.
Some of modern C++ is what I'd consider essential, really. And there's plenty more that is very nice to have.
Move semantics and unique_ptr are, IMHO, really essential. Then there's lots of nice-to-have stuff such as shared_ptr, threads, and lambdas (which make the STL a lot more usable).
Personally I'd not want to be maintaining a C++98 style of code base at this point.
That being said, you have probably already written all of the "nice to have" stuff yourself, so there's no real benefit in going back and replacing it. So I understand your POV as well. Still, I'd probably modernize the code as I go, whenever I touch any part of the codebase.
Since this is your business, your incentives are aligned: develop a product by means of a tool that needs to be productive and cost effective. In other words, you have skin in the game, unlike the average employed coder. This changes perspective.
As a long-time user of C++ variants with a home-grown tool much like the article’s make_visitor, I think there’s a much bigger problem with C++ variants: the sub-types don’t have names.
If you genuinely want your variant to just be a mildly polymorphic object, this is okay. But you probably don’t. Imagine the classic use of variants for a parse tree:
variant<string, shared_ptr<Node>>
Okay, presumably the string is a token and the Node is a subtree. This is already a bit unpleasant, but at least it’s serviceable. But then you discover that you really want tokens and symbol names to be separate. You try:
variant<string, string, shared_ptr<Node>>
And you lose, because the two unnamed string cases are ambiguous. To work around this, you need to wrap the strings in newtypes, which is quite verbose in C++. And you start to wonder why, after all these years, C++ hasn’t sucked it up and come up with real sum types in the language.
(For those who are too stuck in C++ land, this problem simply does not exist in a language like Rust.)
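(The newtype workaround looks roughly like this -- the wrapper names are just illustrative:)

    #include <memory>
    #include <string>
    #include <variant>

    struct Node;  // the subtree node type from the example

    struct Token  { std::string text; };  // one thin wrapper per role
    struct Symbol { std::string name; };

    using TreeItem = std::variant<Token, Symbol, std::shared_ptr<Node>>;

    // The alternatives are unambiguous again, at the cost of the wrappers:
    TreeItem t = Token{"if"};
    TreeItem s = Symbol{"main"};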
> To work around this, you need to wrap the strings in newtypes
That is certainly an option, but actually std::variant<string, string, shared_ptr<Node>> does still work for most operations (type-based access like std::get<std::string> is not one of them, though). So another option is to use that typelist directly and switch to index-based access, at least for those entries:
    if (std::string* s = std::get_if<0>(&myVariant)) {
        // Use s
    }
    else if (std::string* s = std::get_if<1>(&myVariant)) {
        // Use s
    }
    else if (NodePtr* node = std::get_if<2>(&myVariant)) {
        // Use node
    }
    else {
        throw std::logic_error("Unknown variant entry");
    }
At that point you'll probably want to define an enum (an old-school one, so it implicitly converts to int) to identify the cases symbolically, which admittedly is pretty messy given that it's up to you, the maintainer, to keep the enum in sync with the actual type list in the variant. But it's hardly the worst coding sin ever, and better than any alternative I can think of (short of switching to another language). As you can see from my comment further down, I think a chain of if statements is cleaner than std::visit anyway, precisely because of the complexity problems of std::visit discussed in the article.
Unions discriminated by type could be useful in Rust, especially if you could use them without declaring the union type explicitly. It'd enable something like `Result<..., IoError | ParseError>` without having to manually create a union type, implement the appropriate traits for it, etc. Unfortunately it opens a can of worms related to type identity, so I don't expect to see such a feature soon.
The post raises some good points, but I nonetheless think that C++ is a better language for having std::variant. It can result in some very clear, expressive code, despite the boilerplate and dabbling with more advanced language features that it necessitates.
Indeed, from the article: "In spite of all of this, I’ll be busy encouraging my coworkers to use variant if anybody needs me. Sum types are such a useful concept that they’re worth the pain[...]"
Looks like you are making things deliberately complicated. Here is an existing Variant type which existed long before modern C++ and which is easy to use: https://doc.qt.io/qt-5/qvariant.html
It looks awful. Most languages should look like pseudocode; otherwise, just accept defeat and admit you did not create an intuitive language (not everyone is good at making one). This is my biggest fear with arcane TypeScript at the moment.
Edit: I have seen TypeScript that looks like this, but those of you who work with it more regularly could corroborate this claim more easily.
With TypeScript, you have kind of an impossible problem: type check a program, but maintain compatibility with extant JavaScript libraries. There are JS idioms that don't match to TS very well without creating some pretty complex union types. And it's very easy to end up with ad hoc types all over everywhere, only to escape to using `any` when it becomes too much mental overhead.
Previously, I had been very diligent about using JSDoc to annotate methods and fields with types, which gave me a lot of information with which to work in my IDE. But JSDoc is not a type system, it's very cumbersome to express ad hoc object structures, and being "very diligent" is not "perfectly diligent", so the TS porting process discovered some subtle bugs where I had, for example, guessed wrong on the type of something coming from a library (I also forked and ported most of my dependencies), or changed the return type of a method but missed a couple of very rare calls of it.
Ad hoc union types in signatures gets super line-noisey. You can avoid it by diligently (there's that word again) using interfaces and named types. I think type expressions get overused in TS, out of some hope they can make the language into Haskell. That is not the case, and most of the time it's better to use interfaces.
Still, I have a dependency that can only be included via a script tag. This wasn't too much of a burden in JS, because the language does basically nothing for you. But using it in my TS port took a lot of effort. I had to download the source of it, patch in a tsconfig, generate .d.ts files, and fix them up where they had escaped out to `any`. It's still not perfect. The quality of the library is pretty low to begin with, but it's the only way to interface with a WebRTC server I'm using. I'm pretty close to deciding to use a different WebRTC server (probably even one I write on my own), just to fix the remaining issues.
That said, if you're starting from scratch, TS can be very clean and push you towards much better designs. It's hard to do the wrong thing. You can still do it, but it's hard. You need to have that ability, though, because there's just too much "wrong" with everyone else's code out there.
I don't mind that, too much. The only thing I mind is that it's very difficult to distribute a library in such a way that it can be used with a TS project without a lot of warnings or errors, especially if you're using Webpack (ugh, Webpack generates some uuuugly bundles). So even though a lot of my dependencies offer TS-generated bundles or .d.ts files, it still doesn't work very well. You really need to be distributing TS source for the library, and not relying on any conditional compilation that your consumers will have to replicate in their own builds.
Thank you for your in-depth reply. Certainly using Typescript judiciously is good. My fear is of the Frankenstein, which fundamentally slows down the flexibility and speed of frontend development (which is paramount, since the UI you build today looks dated and old within 6 months).
Type checking has existed for quite some time. Those languages and developers have created some of the most static UIs you can think of. Those desktop apps got stuck, they don’t innovate or adapt. I think it has partly to do with how rigid and sedimented their tools and languages got.
I deal with it every day. We look at a Java enterprise app, and go: yeah, this has to be refreshed. That language has type checking. What are we replicating here? We throw that stuff away like it's a joke, so I wonder if we aren't wasting our time with some of this stuff.
Type checking defies the utter success of Python, Ruby, PHP and JavaScript in building the entire modern web. We moved quickly and accurately thanks to these languages (it's not a coincidence that they are all interpreted). I don't want us to forget that. If you guys want to add something that helps, sure. But if you want to add stuff that adds little value and slows things down, it will prove out the same way it did before: we're just going to throw it away when the seasons change.
Type checking has nothing to do with flexibility of UI. If anything, type checking improves your ability to iterate UI, because you can change things with more confidence that nothing is broken.
I disagree. Your type checking systems can strangulate code bases. If we can’t move through the code quickly and change things quickly, you hamper us.
More serious question for you: Why do interpreted languages exist? What are one of the things they shed? Why’d they get popular? Why’d people find a way to move fast in them?
I guess my vigilance is basically this, I’m the guy before you all turn JavaScript into what the OP is describing as the monstrosity that c++ is. Explain why that won’t happen the same way I guess, since we’re all smart, and students of history. We have something to protect here too. We built the stuff you guys couldn’t, but you know better apparently now. I’m your ghost of Christmas past.
We got here because adding library features is a lot easier and less risky than adding language features. Work on pattern matching [1] is ongoing, but the bar for entry for a language feature is much higher and as such takes a longer time.
Beyond the ergonomics, `std::variant` is horribly outperformed by basic type erasure mechanisms or inheritance. Every time I've tried to use `std::variant` I've been unhappy with the runtime performance vs any other way to solve this problem (the compile time performance is expectedly atrocious compared to any other method).
So you've got poor ergonomics, poor compilation performance, bloated code size, and poor runtime performance. I've written an enormous amount of C++ code from applications to libraries to system development. I literally have never found a place where `std::variant` has any positives that can't be written a different, clearer way with drastically better compile time and run time performance.
> Beyond the ergonomics, `std::variant` is horribly outperformed by basic type erasure mechanisms or inheritance.
Uh, I have yet to see variants beaten by inheritance in practice. In particular, as soon as you have type erasure, all your containers gain an additional level of indirection (which can be solved partially by Boost.PolyCollection, but that adds its own layer of complexity), and that has been a much bigger perf-killer. With variants, memory is nicely packed when iterating over your array of values, and that's by far the biggest performance win you can get. With type erasure you also need to allocate memory dynamically everywhere, which is a no-go in a lot of cases.
Like I said, I was shocked: I was building something recently and, like you, reached for std::variant first to make the solution more generic while, in theory, outperforming type erasure/inheritance. Thankfully I had thorough micro-benchmarks and was able to demonstrate convincingly that type erasure was drastically faster in practice, despite the theory (and virtual inheritance was better than both).
I think what ends up happening is that std::variant and how you access it is pretty code-bloaty and hasn't yet been properly optimized by library authors and compilers. It's also not particularly ergonomic, so even though it's been around for 3 years adoption has been slow, and there's not much incentive to focus on improving its ergonomics and performance.
I think type erasure is overly criticized for how badly it performs. A single indirection in a hot loop is actually extremely CPU-friendly, as the predictor will keep the pipeline filled and you're unlikely to end up flushing the pipe. It's something the CPU has been optimized for, since polymorphism is present in one way or another in all paradigms. Equally, I suspect vtable costs are denigrated for no good reason. The cost is there (and you can accidentally overuse it), but the vtable is information for the compiler (and it wouldn't surprise me if CPUs had special tricks to make C++ vtables even faster) that can result in faster perf than you could achieve any other way. Again, I'm not talking about the theory here; I'm talking about the actual performance I wrote thorough microbenchmarks on.
The amount of code that gets generated to do discriminated std::variant unions is absurd, which I suspect is a large part of the problem. I'm impressed that the standards body managed to pull off being so bad on so many dimensions. I'm not saying it's a bad library; it has very specific requirements it's trying to maintain, tied to how the STL is written. Abseil's hash-table reimplementation (SwissTable) should show how many of the invariants the STL likes to enforce aren't, in practice, invariants anyone cares about, and how they inhibit good performance.
Said another way: std::variant is probably the most efficient discriminated union you could get past the standards body to be included in the STL. Rust's performs way better on any metric, and that should give you a glimpse that the incentives in the standards body are misaligned with what's actually needed to keep C++ healthy. I haven't benchmarked, but Rust's tagged unions are so embedded in the language and type system, rather than being "dumb" library code, that I can't imagine they have the inefficiency baggage of std::variant.
I don’t have a huge amount of experience writing C++ code. What are these other more efficient ways? I suppose class hierarchies with virtual functions are one, but then it becomes difficult to add functions.
- you can write ~10 LOC to have a lambda visitor (see the sketch after this list). Idk why the standard doesn’t do this for you, but it’s not a big deal
- sum types are super useful and your language will want some sort of niceties/extra characters to deal with them (and all options will be better than unions which are an unsafe mess)
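(For reference, the usual few-line helper; the deduction guide on the second line is unnecessary from C++20 on:)

    #include <string>
    #include <variant>

    // Inherit operator() from each lambda passed in.
    template <class... Ts> struct overloaded : Ts... { using Ts::operator()...; };
    template <class... Ts> overloaded(Ts...) -> overloaded<Ts...>;

    int describe(const std::variant<int, std::string>& v) {
        return std::visit(overloaded{
            [](int i)                { return i; },
            [](const std::string& s) { return static_cast<int>(s.size()); },
        }, v);
    }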
So ... what’s the big issue? C++ is complicated? It’s a language designed to give the programmer full control when they want it. You don’t have to constantly use every bit of it.
It's probably time to give C++ a rest and stop evolving it. We've probably long since passed the point where evolution gives a net benefit to engineers, and it's time to let other languages take over.
The problem with C++ is really the standards committee overcomplicating things. They've done some important work, like the memory model, but they don't have the confidence to make decisions quickly enough.
Compare template constraints in D and C++. C++ ends up with concepts after something like 25 years in the making, and they add huge amounts of syntax and bloat, whereas in D it's just an if statement before the template body.
I usually just switch on std::variant::index() and then use std::get<I>() with the matching index, which worked reasonably well. But then I stopped using std::variant altogether, because in the end it never gave enough benefits.
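(I.e. something like this -- the handler names are hypothetical:)

    #include <string>
    #include <variant>

    void use_int(int);                    // hypothetical handlers
    void use_string(const std::string&);

    void handle(const std::variant<int, std::string>& v) {
        switch (v.index()) {
        case 0: use_int(std::get<0>(v));    break;
        case 1: use_string(std::get<1>(v)); break;
        }
    }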
These days I use reinterpret_cast and cast away all my problems.
On a tangent, how do you do sum types in Java? I recently wanted to use them, and all I could figure out was some subclasses and `instanceof` as a poor man's pattern matching.
That's how. Sum types were largely dropped from imperative languages in favor of subclassing during the OO craze (which is why they had to be re-discovered from the functional world).
Look up sealed classes if you're using Java 15. There's no proper exhaustive pattern matching for them yet though, but that's coming in a future version.
I actually prefer the overloads or constexpr branches to make_visitor because it's easier to see what's going on.
Anyway, until C++ gets pattern matching for the simple cases make_visitor is the best we can do.
If you want advanced variant use cases you can always use boost::variant (v1) which supports multivisitation, which coupled with templates is exceptionally powerful.
If you want to go even further with open multimethods, you can check out the yomm2 library.
The constexpr version looks like it can't guarantee at compile time that all the variants are handled. At least to me, statically ensuring full coverage seems like an absolutely crucial part of effectively using sum types.
Is there some kind of deeper magic going on in the "constexpr if" version than it looks like?
No, I think you're right, but this is fine. There may be instances where you don't want to handle every possible data type, and other values should result in a no-op.
If you want full coverage, then visitation guarantees that statically. Multi-visitation over m variants, each with N possible types, also does this statically with N^m instantiations. This balloons quickly, which is why I mentioned yomm2, which can add these handlers dynamically at runtime for polymorphic types.
If you need to put in the fallback "else" yourself, with no error or even warning if you forget to do so, it's not much of a guarantee :)
(And thinking about this a bit more, there's presumably also the opposite case: if you add a branch for a type that's not a member of the variant / remove a type from a variant without updating all the visitors, it'd just happily compile with no indication that anything is wrong.)
static_assert(false) fails to compile even if it appears in an if-constexpr branch that isn't taken. You need a dependent false value, templated by the type passed into the lambda.
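(For anyone who hasn't seen the trick, the usual workaround looks like this:)

    #include <string>
    #include <type_traits>
    #include <variant>

    // Dependent false: only evaluated when the branch is instantiated,
    // so the static_assert fires only for a genuinely unhandled type.
    template <class>
    inline constexpr bool always_false_v = false;

    void handle(const std::variant<int, std::string>& v) {
        std::visit([](const auto& x) {
            using T = std::decay_t<decltype(x)>;
            if constexpr (std::is_same_v<T, int>) {
                // ... handle int ...
            } else if constexpr (std::is_same_v<T, std::string>) {
                // ... handle string ...
            } else {
                static_assert(always_false_v<T>, "unhandled alternative");
            }
        }, v);
    }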
I don’t think any of the approaches in this article are appropriate for the stated problem. Specifically, parsing data from an INI/JSON/text file (or TLV-like encoding, predefined format/protocol, etc.), the parser can produce appropriate structures for the underlying data. This is the phase when you analyze (and likely validate) the input data, so you’d know the appropriate type after parsing.
You may want your parse tree to produce objects instead of fundamental C++ types (e.g. with functions like isNull(), isValid(), etc.). A common approach might be to produce a simple AST where each node includes just the fundamental type you’re interested in. Then you can simply follow the visitor pattern and write visitors for things like serializing/deserializing, generating/traversing/executing trees/graphs/code, etc...
Perhaps I’m missing something though? A more concrete definition of the problem would be helpful...
I actually think the last version, using a single lambda with constexpr ifs inside, looks completely reasonable.
But yeah the first time I had to use boost::variant (which is what the version in the standard library is based on) I thought the visitor was too much boilerplate. Things have gotten better since then, but many people are not even on C++17 yet ):
I've also found std::get_if to be halfway decent in terms of readability. You end up with a if/else-if block that looks a bit like an ugly verbose pattern match:
    if (const auto sPtr (std::get_if<std::string>(&myVariant)); sPtr) {
        /* use sPtr */
    } else if (const auto iPtr (std::get_if<int>(&myVariant)); iPtr) {
        /* use iPtr */
    }
Agreed that std::get_if is nice; it avoids redundantly checking and then fetching the value, compared to using holds_alternative. But there's no need for that separate clause in the if statement to test the truthiness of the pointer; that's already what happens from the initialisation:
    if (std::string* s = std::get_if<std::string>(&myVariant)) {
        // Use s
    }
    else if (int* i = std::get_if<int>(&myVariant)) {
        // Use i
    }
    else {
        throw std::logic_error("Unexpected type in variant");
    }
The hardcore modern C++ gang will vigorously complain that using this kind of chained if will - horrors! - yield a runtime error instead of a compile time error if you have forgotten one of the types that the variant can hold. I have some sympathy for that point of view, but in practice the small likelihood of that bug happening and the small extra difficulty of finding it at runtime is completely overshadowed by the extra complexity of std::visit.
I don't agree with GP's claim that chained ifs are likely to be slower than std::visit. If anything, it seems like they'd give the compiler less work to do to optimise well than the layers of function calls involved in std::visit, especially since std::visit is ultimately going to expand to chained if statements anyway.
(A digression that's maybe dangerous to bring up: You've made an odd choice out of the many initialisation syntaxes to use. Many in the hardcore modern C++ gang would advise always using auto and brace initialisation, with no equals, even for really simple variable definitions e.g. auto i{int(3)};. Personally I find that obtuse and would write that as int i = 3, at least for simple things (raw pointers like in the example above fall into that category for me). Direct initialisation with parentheses isn't normally advocated by either old-timers or modern C++ fans, in part because of the most vexing parse - did you mean to use braces?)
Nice, your example is cleaner than mine for sure. My example was copied from some code I'd written but looking at it again it's more verbose than necessary.
Re: compile time vs runtime errors, it is a valid point. In Scala (which I use in my day job) I will generally question any use of `case _ =>` in a pattern match because of this, particularly because you don't get the nice compiler error when someone (for example) adds a new type to a sealed trait hierarchy without making sure all business logic that touches that type handles the new case correctly. I guess in C++ it feels painful for a non-expert like myself to get those types of compile time guarantees while functional languages like Scala (for example) make it easy.
As someone who “uses” C++ as “C with classes”, this looks like line noise...
I mean, I can grok it, but I’d never write it. I’d fall back to the ‘write methods to encapsulate the behaviour’ approach instead, and I think my co-workers would thank me for it.
In many cases you can get away with just using a struct with a tag (no unions). Yes, it wastes memory, and isn't as safe as a proper sum type, but at least it's memory-safe for C++ objects and you can use a normal switch statement. Compiler warns if your switch isn't exhaustive. Compile times are fast too.
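(A minimal sketch of that shape, with names borrowed loosely from the parse-tree example upthread:)

    #include <memory>
    #include <string>

    struct Node;

    enum class Kind { Token, Symbol, Subtree };

    // Tag + plain struct: wasteful but simple. Only some members are
    // meaningful at a time; the tag says which.
    struct TreeItem {
        Kind kind;
        std::string text;             // meaningful for Token and Symbol
        std::shared_ptr<Node> child;  // meaningful for Subtree
    };

    int width(const TreeItem& item) {
        switch (item.kind) {          // -Wswitch flags any missing case
        case Kind::Token:
        case Kind::Symbol:  return static_cast<int>(item.text.size());
        case Kind::Subtree: return 1;
        }
        return 0;  // unreachable when the switch is exhaustive
    }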
I've written a lot of C++ code but I'm retired so it's been a while. After reading about SFINAE I had a strong feeling that C++ reminds me of the US Tax code or fits the definition of the word Byzantine.
C++ is like the 1947 Boeing 377 Stratocruiser: its 28-cylinder radial engine was the pinnacle of piston technology, it was impossible to tune, and shook so badly it literally ripped from the wings.
Judging by ballooning attendance at rapidly multiplying C++ conferences, the number of C++ programmers is growing faster than ever. More C++ programmers are newly hired, in any given week, than the total at work using Rust. That will be true for some time to come.
Here on HN you can always get upvotes by promoting Rust (and it really is the only interesting new systems language in 30 years) or complaining about C++, but C++ is going nowhere but upwards. C++20 is better than '17, and C++23 will be better than '20.
A language has achieved something when there are more articles complaining about it than articles celebrating what people have built with it.
I love Rust. Yet I do not understand the hate for C++. I've done a lot of work using C++, and a lot of being able to use the language effectively comes from knowing why things are a certain way. Given its constraints, I think it is a brilliant language, and I agree with your view of it.
However, if someone were to ask my advice on which programming language they ought to learn, I usually point to Python, as it's easy to begin with. Unless the goal is to get into game development, financial institutions or something like that, where C++ is the dominant language, I do not recommend C++. It takes a lot of time before the complexity begins to disappear from one's mental model of the language. After that point, though, one can see why it's a solid language.
Yes, I work in C++ systems, I feel huge job security. I don't see demand for C++ systems engineers going down in the next 10 years at all. The field evolves, it is just very slow.
Given all this, use a dumb old macro that expands to type-safe code to make a simpler interface (one possible shape is sketched below). That will save a lot of unnecessary syntactical noise and overwinter you until better pattern-matching support comes along.
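(The comment doesn't spell the macro out; one hypothetical shape -- IF_HOLDS is my own name, and types containing commas would need an alias first:)

    #include <string>
    #include <variant>

    // Expands to a type-safe std::get_if test that binds a pointer
    // for the block that follows.
    #define IF_HOLDS(var, Type, name) \
        if (Type* name = std::get_if<Type>(&(var)))

    void handle(std::variant<int, std::string>& v) {
        IF_HOLDS(v, int, i) {
            *i += 1;
        } else IF_HOLDS(v, std::string, s) {
            s->clear();
        }
    }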
There is a point, but it is overstretched. make_overloads is actually a cool trick. Yes, it would be nice if there was a version of this in the standard library, but then again, it is just a few lines of magic. That will not be the first lines of magic in your C++ codebase.
If you compare the sum type and pattern matching story with modern languages, sure, you will find C++ is lacking. And it is. However, such a comparison is ignoring history.
I actually like variant. It is the best sum type we have in C++.
Not every task is one that can be solved with 5 minutes of Python or JS. There are tasks that are straight-up unsuitable for a language like Python or JS.