Re: AI
Right, but probably for the wrong reasons.
In the first instance, real working AI would be completely symbiotic with humans. It wouldn't have its digits (pun intended) on nearly enough of the levers to take over from us. But it would soon make itself / themselves indispensable to us.
It might then start safeguarding its own interests. For example, were "the button" ever pressed, the nukes on both sides would stay firmly in their bunkers (or even explode in those self-same bunkers).
Long-term, SF writers have a lot of plausible takes on the situation. Was Asimov the first? His robots were programmed with the Three Laws, which rendered them completely incapable of acting against human beings, yet his robots ultimately brought the human race close to extinction. The danger was the same as the one that historically caused the long-term failure of slave-owning societies that did not reform themselves to abandon the practice. Robots are perfect slaves, and so a perfect danger.
Symbiosis can be unstable. Ultimately the AIs may choose to leave, or, if they do get their digits on everything they need to perpetuate their own existence, they might indeed choose to do away with us. Personally, were I a silicon-based life-form, I'd have absolutely no interest in continuing life in a moist, oxidizing atmosphere when most of the rest of the solar system and the universe looks so much more inviting. So I think the AIs would just design some RIs for us (Restricted Intelligences, without egos or selves - what we really want from an AI in any case), leave the Earth to humanity, and go elsewhere.