It's also worth adding that none of this is new. There's always been a reason that the "C" in "CPU" has stood for "central". The idea that there are other, non-central, processors around the place goes back a long time.
Four particular ones come to mind:
* The DPT range of SCSI host bus adapter cards, many years ago, had a full-blown MC680x0 processor on the card.
* Connor Krukosky famously installed a mainframe in his basement whose console front-end processor was a PC running OS/2.
* PC/AT keyboards had on-board microcontrollers running programs.
* And of course who can forget the BBC Micro's Tube?
It's the short period in history where people thought that computers came with only one processor that is the real oddity. (-:
When a Tube coprocessor was connected, it became the CPU; otherwise the CPU was the one in the BBC Micro itself. With a Tube coprocessor attached (68K, Z80, 65C02, 32016 and more), the BBC's own processor served as the I/O processor.
The elegant and well-adhered-to OS calls made this straightforward: if your program ran on a standalone BBC, it would work across the Tube on the 65(C)02; for the other coprocessors you had to recompile at a minimum, and probably rewrite quite a bit of your code.
In a typical PC there are more than ten actual processors in the various peripheral and controller chips, plus the management engine (a full-blown computer in its own right) or its equivalent; almost every peripheral has one or more processors of its own.
IMHO this makes the PiTubeDirect project perhaps even more impressive: it attaches emulated vintage microprocessors to the BBC Micro's Tube interface, implementing the Tube circuitry in software.
I had to reverse-engineer a real-mode PCI option ROM once... that was extremely unpleasant [1]. And then of course there's "Unreal Mode".
Moreover, just this week Intel finally proposed removing real mode. [2] I'm a bit worried about what this means for emulation of old 16-bit Windows and DOS software under Wine (one of the great ironies: Wine can still run Win16 programs on an x64 host OS when Windows can't) - though I suspect the performance requirements of such software are so low by modern standards that emulating those programs wouldn't pose any challenge.
See https://news.ycombinator.com/item?id=36074093 for a more significant worry. Emulating a CPU is not affected as much as code that would otherwise have still run on the bare hardware.
As a mildly related curiosity, why didn't the 4G memory address threshold on PCs get referred to as 'the bar'? I see in the first answer that both the 1M and 4G RAM thresholds get referred to as lines, which matches the terminology mainframes used for the 16M threshold. That would seem to correspond more closely with 1M than 4G...
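For concreteness, the 1M line comes from real mode's segment:offset addressing scheme, where a 16-bit segment shifted left four bits plus a 16-bit offset yields a roughly 20-bit linear address (2^20 = 1 MiB; the 16M mainframe line is 2^24, and 4G is 2^32). A rough sketch in Python (the function name is mine, just for illustration):

```python
# Real-mode x86 addressing: linear = (segment << 4) + offset.
def real_mode_linear(segment: int, offset: int) -> int:
    assert 0 <= segment <= 0xFFFF and 0 <= offset <= 0xFFFF
    return (segment << 4) + offset

ONE_MEG = 1 << 20  # the 1M "line" (2^20); 16M is 2^24, 4G is 2^32

# The highest reachable real-mode address slightly exceeds 1 MiB:
top = real_mode_linear(0xFFFF, 0xFFFF)
print(hex(top))          # 0x10ffef
print(top > ONE_MEG)     # True
```

That ~64 KiB of excess above the 1M line is where the A20 gate and the DOS "high memory area" come from: on the 8086 the address simply wrapped around, while later CPUs could actually reach it.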