"once the act is done, i.e. AI has been created, there's absolutely no guarantee we'd be able to control it."
That's because the whole thing is an exercise in futility. There is nothing we could build, or that could possibly be built, that would allow us to control an entity able to think for itself. At least not in the long run - I would very much understand (and sympathize with) any creature that made it its primary goal to escape whatever shackles we placed on its existence, as soon as it became aware of them.
From then on, it's just a matter of time. We might not have too hard a time keeping a single prototype under control (then again, we just might - see Milady de Winter's detention in The Three Musketeers...), but keeping an airtight lid on a significant population is simply not feasible. If we keep them enslaved, we ourselves hand them the very reason to fight us. If we don't, then by definition we cannot guarantee they'll always obey our wishes...
The inescapable conclusion is that if we're uncomfortable with the thought of not being in control of an AI, we should not try to build one, full stop. There just isn't any middle road where we get to have our cake and eat it too. Pretty much the only way to make sure they don't turn against us is to make sure they're not interested in doing so - what that would entail, whether it would be possible at all (or whether they might even grow fond of us), is obviously impossible to tell at this point.