In my own experience with “modern-ish C++” (the platform I work with only supports up to C++17 for now), since we started using smart pointers like unique_ptr and shared_ptr, iterator invalidation has become the primary source of memory safety errors. You have to be so careful any time you hold a reference into a container.
In a lot of cases the solution is already sitting there for you in <algorithm>, though. One of the more common places we’ve run into this is when someone has a simple task like “delete the items from this vector that match some predicate” and just writes a for-loop that does it, without handling the fact that the iterators go bad the moment you modify the vector. The algorithms library has functions for exactly this (remove_if plus erase), but without a good mental checklist of what’s in there, people generally just do the simple (and unfortunately wrong) thing.
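For what it’s worth, here is a minimal sketch of what the <algorithm>-based version looks like under C++17 (the function name and predicate are placeholders, not from the original post). The point is that no loop ever holds an iterator across a mutation of the vector:

```cpp
#include <algorithm>
#include <vector>

// Erase-remove idiom (pre-C++20): std::remove_if shuffles the kept
// elements to the front and returns an iterator to the new logical end;
// the single erase() call then trims the tail. The container is only
// modified once, after all iteration is done.
void drop_even(std::vector<int>& items)
{
    items.erase(
        std::remove_if(items.begin(), items.end(),
                       [](int v) { return v % 2 == 0; }),  // stand-in predicate
        items.end());
}
```

(C++20 wraps this up as std::erase_if, but on a C++17 toolchain the two-step idiom above is the standard answer.)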
Not enough that we’ve ever noticed it being at all significant in the regular profiling we do. The system is an Edge ML processor pushing about 600 megapixels/sec through a pipeline that does pre- and post-processing on multiple CPUs and inference on the GPU using TensorRT. In general, allocation isn’t anywhere near our bottleneck; all of the image processing work is.