Looking at the wrong holes
In my opinion nobody should expect the hardware to take care of software security, and in many ways it's a great pity that it was ever attempted. If I understand correctly, the original idea was only to protect against buggy software *accidentally* causing mayhem, not to defend against a deliberate attack.
Once any malicious software has managed to get itself running on a computer, it will always be able to do damage by one means or another. Effective security means preventing anything malicious from being executed by the CPU in the first place. It's like protecting machinery from sabotage: you physically shield the machine (or the entire building it's housed in); you don't design it with gears that can withstand a spanner being poked between them, you stop the spanner from reaching the gears at all.
I develop embedded systems. In critical systems, to protect against a malicious firmware update, all new code is signed with a private key that exists only on a secure machine within the company. Every firmware image contains the matching public key, which is used to check the signature before the code is flashed to memory or executed.
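The flow can be sketched in a few lines. This is a toy illustration only: it uses textbook RSA with a deliberately tiny hardcoded keypair (p = 61, q = 53) so it runs with nothing but the standard library. A real signing scheme uses a vetted crypto library with Ed25519 or 2048-bit+ RSA, padding, and constant-time comparison; the names and key values here are my own, not from any particular product.

```python
import hashlib

# Toy RSA keypair for illustration only.
# p = 61, q = 53  ->  n = 3233, e = 17, d = 2753
PUBLIC_KEY = (3233, 17)     # (n, e) -- baked into every firmware image
PRIVATE_KEY = (3233, 2753)  # (n, d) -- lives only on the secure build machine

def digest(firmware: bytes) -> int:
    # Reduce the image to a small integer so the toy keypair can handle it.
    return int.from_bytes(hashlib.sha256(firmware).digest(), "big") % PUBLIC_KEY[0]

def sign(firmware: bytes) -> int:
    # Runs on the secure machine: raise the digest to the private exponent.
    n, d = PRIVATE_KEY
    return pow(digest(firmware), d, n)

def verify(firmware: bytes, signature: int) -> bool:
    # Runs on the device before flashing or executing: recover the digest
    # with the public exponent and compare. No private key needed here.
    n, e = PUBLIC_KEY
    return pow(signature, e, n) == digest(firmware)

image = b"\x7fFIRMWARE v1.2"
sig = sign(image)
print(verify(image, sig))               # True
print(verify(image, (sig + 1) % 3233))  # False: forged signature rejected
```

The key property is the asymmetry: compromising a device (which holds only the public half) gains an attacker nothing, because producing a signature that verifies requires the private exponent that never leaves the signing machine. A tampered image would likewise fail verification, since its digest no longer matches the signed one.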
Of course, a malicious actor with physical access could remove the flash memory and substitute firmware that contains no checks, but that's not the threat being guarded against. A person with a hammer could also crash the unit!