"So by messaging he means some sort of enterprise service bus was taken down?"
Sounds like it. To quote Cruz: “we were unable to restore and use some of those backup systems because they themselves could not trust the messaging that had to take place amongst them.”
So: the production system suffers a major power failure, the production backup power doesn't kick in, and then either:
A) Power is restored to production, but the network infrastructure is now knackered, either through hardware failure or through someone (a non-outsourced someone, obviously, 'coz he said so <coughs>) never saving the routing and trust configuration to non-volatile memory in said hardware, so no messages get forwarded (see the first sketch below).
or
B) DR is immediately brought online as the active system, but they then discover that whatever trust mechanism their messaging bus relies on (account/certificate/network config) isn't set up properly, so messages are refused or never reach the intended endpoint in the first place, leaving their IT teams (non-outsourced IT teams, obviously, 'coz he said so <coughs>) scrabbling desperately through the documentation of applications they don't understand, trying to work out WTF is going wrong (see the second sketch below).
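For scenario A, the classic trap is config that only ever lived in volatile memory. A minimal sketch of the sort of audit that would have caught it, assuming Cisco IOS kit and the Netmiko library; the hostnames and credentials here are hypothetical:

```python
# Audit network kit for config changes never written to non-volatile memory,
# i.e. changes that vanish the moment the power does. Assumes Cisco IOS
# devices and Netmiko (pip install netmiko); hosts/credentials are made up.
from netmiko import ConnectHandler

DEVICES = ["core-rtr-01.example.net", "core-rtr-02.example.net"]  # hypothetical


def strip_noise(cfg: str) -> str:
    # Drop IOS header/comment lines so the diff reflects real config drift,
    # not timestamps ("Building configuration...", "! Last configuration...").
    return "\n".join(
        line for line in cfg.splitlines()
        if line
        and not line.startswith("!")
        and not line.startswith("Building configuration")
        and not line.startswith("Current configuration")
        and not line.startswith("Using")
    )


for host in DEVICES:
    conn = ConnectHandler(
        device_type="cisco_ios",
        host=host,
        username="auditor",  # hypothetical account
        password="***",
    )
    running = strip_noise(conn.send_command("show running-config"))
    startup = strip_noise(conn.send_command("show startup-config"))
    if running != startup:
        # Config lives only in RAM: a power cut loses it.
        print(f"{host}: unsaved changes - would not survive a power failure")
        conn.save_config()  # Netmiko's 'copy running-config startup-config'
    conn.disconnect()
```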
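For scenario B, the equivalent smoke test is asking, well before the day of the disaster, whether clients would trust the DR messaging endpoint's certificate at all. A minimal sketch using Python's standard ssl module; the broker address and CA bundle path are hypothetical:

```python
# Pre-failover smoke test: would a client actually trust the DR broker's
# certificate? Endpoint and CA bundle path below are hypothetical.
import socket
import ssl

DR_BROKER = ("mq-dr.example.net", 5671)          # hypothetical AMQPS endpoint
CA_BUNDLE = "/etc/pki/internal-msg-bus-ca.pem"   # hypothetical client CA bundle

ctx = ssl.create_default_context(cafile=CA_BUNDLE)

try:
    with socket.create_connection(DR_BROKER, timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=DR_BROKER[0]) as tls:
            # Handshake succeeded: the cert chains to the CA clients trust.
            print("DR broker presented a trusted cert:",
                  tls.getpeercert()["subject"])
except ssl.SSLCertVerificationError as e:
    # The classic untested-DR surprise: the broker is up, but nothing
    # will talk to it because the trust config was never set up properly.
    print(f"DR broker cert NOT trusted by the client CA bundle: {e}")
except OSError as e:
    print(f"Could not reach the DR broker at all: {e}")
```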
Same old story, again and again...
- Mr Cruz, did you have backup power for your production data centre?
- Yes definitely, the very best.
- Mr Cruz, did you test your backup power supply?
- Erm, no, that takes effort and costs money...
- Ah, so you didn't have resilient backup power then, did you? Mr Cruz, did you have a DR environment?
- Yes definitely, the very best money can buy, no skimping on costs, honest...
- Mr Cruz, did you test failover to your DR environment?
- Erm, no, that takes effort and costs money...
- Ah, so you didn't have resilient DR capability then, did you, Mr Cruz?
- Mr Cruz, did...... etc., etc., ad nauseam...