Would it be possible to artificially delay the world by ~15 ms? A person would have to have a headset fully on (so it'd be more like VR than AR), but perhaps it could deliver a time-delayed view of the world only once the augmented pieces are ready to render.
Edit: you'd still have the motion-sickness challenge, but perhaps at least the 'layers', so-to-speak, wouldn't appear separately.
No. The important thing is keeping your sensory inputs in sync with your vestibular system. There was some research a few years ago into whether the vestibular system itself could be hacked.
But in VR we can have even lower latencies for synthetic content.
Because we have the head tracker's recent history, we can run prediction on the pose trajectory and effectively know where the head will be at the moment the current frame is displayed, then render the scene from that predicted pose. That type of optimization isn't possible for the camera feed in see-through VR or AR, since you can't predict the real world's pixels in advance.
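A minimal sketch of that kind of pose prediction, under a constant-angular-velocity assumption. The function name and the single-axis (yaw-only) model are mine for illustration; real trackers predict full 6-DoF pose with filtering:

```python
def predict_yaw(timestamps, yaws, display_time):
    """Extrapolate head yaw (radians) to the expected display time.

    Toy model: estimate angular velocity from the two most recent
    tracker samples and extrapolate linearly. Real systems use a full
    6-DoF pose and a filtered velocity estimate, not two raw samples.
    """
    omega = (yaws[-1] - yaws[-2]) / (timestamps[-1] - timestamps[-2])
    dt = display_time - timestamps[-1]  # how far ahead we must predict
    return yaws[-1] + omega * dt

# Tracker history: head turning at 1 rad/s, sampled every 10 ms.
ts = [0.000, 0.010, 0.020]
yaw = [0.000, 0.010, 0.020]
# Predict the pose 15 ms ahead, at the time the frame hits the display,
# and render from that pose instead of the stale measured one.
predicted = predict_yaw(ts, yaw, 0.035)
print(round(predicted, 3))  # 0.035
```

The point of the example: the renderer never uses the last *measured* pose, only the pose extrapolated to scan-out time, which is what hides most of the motion-to-photon latency for synthetic content.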
The second optimization is timewarp: the rendered scene is distorted in screen space after the fact, based on tracker data sampled just a few milliseconds before display. I wonder whether that type of optimization would create artifacts in AR.
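A toy version of that screen-space correction, reduced to one axis. This is an assumption-laden sketch, not any SDK's implementation: real timewarp does a full per-eye reprojection with lens distortion, and the sign convention and small-angle pixel-shift model (shift ≈ focal length × yaw delta) are simplifications of mine:

```python
def timewarp_shift(frame, yaw_at_render, yaw_at_display, focal_px):
    """Shift an already-rendered frame to compensate for head rotation
    measured after rendering, just before display.

    Small-angle approximation: a yaw change of d radians corresponds to
    roughly focal_px * d pixels of horizontal shift in screen space.
    """
    delta_px = round(focal_px * (yaw_at_display - yaw_at_render))
    width = len(frame[0])
    out = []
    for row in frame:
        # Vacated pixels are filled with 0 (black). In AR, exactly this
        # kind of edge would expose the real world behind the overlay,
        # which is the sort of artifact the comment above wonders about.
        if delta_px >= 0:
            out.append(row[delta_px:] + [0] * min(delta_px, width))
        else:
            out.append([0] * min(-delta_px, width) + row[:delta_px])
    return out

# 4x4 toy frame; head yawed 0.002 rad between render and display,
# focal length ~1000 px, so a 2 px corrective shift.
frame = [[r * 4 + c for c in range(4)] for r in range(4)]
warped = timewarp_shift(frame, 0.0, 0.002, 1000)
print(warped[0])  # [2, 3, 0, 0]
```

The black fill at the edge illustrates why timewarp is cheap for a fully synthetic VR scene (the disocclusion is small and brief) but trickier when warped content has to stay registered against a live view of the world.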