Artificial Intelligence, for me, means that we can "make" a construct that, when activated, is capable of learning and deciding on its own.
Are you arguing that, to qualify as AI, the construct must have free will or volition? I think that is problematic - it's a question that has exercised us humans for aeons (I'm not even sure whether I have free will).
Perhaps you mean, rather, that an AI must exhibit some form of autonomy. But even that is hard to discern in practice: how do you actually tell whether an apparently autonomous action cannot simply be traced back through some logical, deterministic chain of events? This is particularly difficult for neural net-based systems, which are typically inscrutable, even to their designers.
To take your "I want a cigarette even though I know it's bad for me" example, I can explain that pretty easily in reductionist physiological/brain-chemistry terms ("I know it's bad for me, but my addiction to nicotine overrides that"). Or, even if someone has never smoked before, they may decide it must be worthwhile, despite the health risks, because so many other people do it. I could imagine comparably perverse behaviours manifesting even in current machine learning/AI systems.
Like a teenager, if you will.
Not sure that teenagers are the ideal paradigm for "learning and deciding on their own" (I happen to have one of those knocking around at home).