Re: RPO 0 is good, but...
Data replication (synchronous/real-time) absolutely does provide an RPO of 0. Regardless of the state of the data at the primary, the goal is to maintain the exact same state, with zero delta, at a secondary (or P') location. In the scenario posed here, the proverbial site obliteration would cause an instant failover to P'. Under no circumstances is there any change in the live data presented to applications, even if it is infected by ransomware or contains human error.
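To make the point concrete, here's a minimal sketch of a synchronous write path. All class and method names are hypothetical, not any vendor's API; the point is only that the write is acknowledged after both sites persist it (so RPO is 0), and that bad data replicates just as faithfully as good data.

```python
class Volume:
    """Toy block device: maps a logical block address to its data."""
    def __init__(self):
        self.blocks = {}

    def write(self, lba, data):
        self.blocks[lba] = data


class SyncReplicatedVolume:
    """Acknowledge a write only after BOTH sites have persisted it."""
    def __init__(self, primary, secondary):
        self.primary = primary
        self.secondary = secondary

    def write(self, lba, data):
        self.primary.write(lba, data)    # persist at the primary
        self.secondary.write(lba, data)  # persist at P' before acking
        return "ack"                     # app sees success only when both match


p, p_prime = Volume(), Volume()
vol = SyncReplicatedVolume(p, p_prime)
vol.write(0, b"payroll")       # good data replicates...
vol.write(0, b"ENCRYPTED!")    # ...and so does ransomware, byte for byte
assert p.blocks == p_prime.blocks  # zero delta: failover to P' is seamless
```

Note the last two lines: the replica is identical to the primary, which is exactly why synchronous replication alone can't protect against ransomware or fat-fingered deletes.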
Snapshots are an effective way to mitigate the dangers posed here, but they must also be delta-replicated so that P' can restore from them should any of the common scenarios you've mentioned occur. The drawback of snapshots is that they capture the entirety of the resource at a particular point in time and must be scoured for the particular data desired. A DR plan should include regularly mounting and restoring snapshots, both as a rehearsal for when a restore is actually necessary and to verify their integrity and application consistency.
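The delta-replication and verification steps above can be sketched like this. Again, the data structures are stand-ins (a snapshot as a plain dict of blocks), not a real snapshot API; the idea is that only changed blocks cross the wire, and that a checksum comparison is the kind of integrity check a DR test should run before the snapshot is ever needed.

```python
import hashlib


def take_snapshot(volume):
    return dict(volume)  # point-in-time copy of the whole resource


def delta(prev_snap, new_snap):
    # Ship only the blocks that changed since the last replicated snapshot.
    return {lba: d for lba, d in new_snap.items() if prev_snap.get(lba) != d}


def apply_delta(remote_snap, changes):
    remote_snap.update(changes)


def checksum(snap):
    h = hashlib.sha256()
    for lba in sorted(snap):
        h.update(str(lba).encode() + snap[lba])
    return h.hexdigest()


volume = {0: b"a", 1: b"b"}
s1 = take_snapshot(volume)
remote = dict(s1)                   # initial full replication to P'

volume[1] = b"b2"                   # data changes at the primary
s2 = take_snapshot(volume)
apply_delta(remote, delta(s1, s2))  # only block 1 crosses the wire

# DR-test step: verify the replicated snapshot matches before you need it.
assert checksum(remote) == checksum(s2)
```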
Backups are easily the most common form of data protection, but as the author points out, they do not deliver an RPO of 0 unless they are continuous and (you guessed it) replicated in real time to P'. Coupled with snapshots and application integration, backups definitely provide the most seamless and comprehensive way to recover locally and in a metro environment.
I saw someone mention application-level awareness and involvement. Absolutely. Intelligence is the answer to handling ceilings in latency and throughput: the less data read and written, the better. Storage systems provide that intelligence in the form of access-pattern recognition, data compression, deduplication, and, in certain cases, delta-only growth and transfers. Storage systems don't provide data- or application-level intelligence; if they did, they'd just be servers with storage, and you might as well host the apps right there and have each server handle its own protection (good luck with shared-link replication).
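Deduplication is the simplest of those techniques to illustrate. This is a hedged sketch, assuming a hash-then-send protocol where the payload is transferred only for chunks the target has never seen; the `DedupStore` class and its methods are invented for illustration.

```python
import hashlib


class DedupStore:
    """Content-addressed chunk store: identical chunks are kept (and sent) once."""
    def __init__(self):
        self.chunks = {}   # sha256 digest -> chunk data
        self.volume = []   # ordered list of chunk digests making up the volume

    def ingest(self, data):
        digest = hashlib.sha256(data).hexdigest()
        new = digest not in self.chunks
        if new:
            self.chunks[digest] = data  # payload crosses the wire only once
        self.volume.append(digest)      # duplicates become cheap references
        return new                      # False = dedup hit, nothing transferred


store = DedupStore()
sent = [store.ingest(b) for b in (b"blockA", b"blockB", b"blockA")]
# sent == [True, True, False]: the repeated block is referenced, not re-sent
```

The same hash-comparison trick is what makes delta-only replication cheap: the arrays exchange fingerprints and move only the blocks whose fingerprints differ.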
Cost is generally the inhibiting factor. CapEx chokes the ability to create a good DR strategy (if distance and latency don't first). Oftentimes it's acceptable to run slower, higher-capacity drives at P' and simply operate in a degraded (slower) mode should disaster strike. The capacity gained and cost saved also allow for DR testing, snapshot testing/purging, application recovery, virus scanning, etc.
Every environment and organization is unique in most respects; there is no single right answer that covers all. A combination of techniques and products forming a final strategy, one that captures mission-critical, essential, and non-essential apps, RPO(s), RTO(s), data TTL, and scheduled testing, matters more than simply stating "RPO 0". A documented DR plan and strategy needs to be created and followed with stated expectations...and then it's all your fault anyway :-)