HAL
HAL wasn't malfunctioning or deliberately psychopathic. It had two sets of conflicting orders and no moral imperatives: it was designed to do what it did, and it did it perfectly. The fault lies with the people who fed in those category A directives without thinking about what an AI with problem-solving heuristics would come up with as a solution. That is exactly why "no harm by action or inaction" is law #1 and MUST be hard-coded.
HAL redeemed itself in 2010 even without Asimov's First Law, and I think that was the stronger of the two films from this perspective: it correctly represented the danger of unaccountable bodies issuing orders to complex systems that have no hard-wired ethics. Chandra should have seen this coming and given HAL a bullshit detector, but, like many intellectuals, he's an innocent child when it comes to the political deviousness that leaked into HAL's programming almost by osmosis.
Right now, what we're classing as AIs aren't I. Siri, Cortana, Alexa, Jasper: they're all simple if-this-then-that logic machines with a bit of imperfect voice recognition and text-to-speech tacked on as a UX. If we ever realise true machine intelligence, these issues will need the most careful thought.
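To make the "if-this-then-that" point concrete, here is a minimal, entirely hypothetical sketch of how such an assistant works once the speech recognition has done its part: the recognised text is matched against a fixed table of patterns and a canned action fires. The phrases and responses below are invented for illustration; nothing here resembles the real internals of Siri, Cortana, or Alexa.

```python
def handle(utterance: str) -> str:
    """Map a (perfectly recognised) utterance to a canned response.

    A lookup table of hypothetical trigger phrases; there is no
    understanding here, only pattern matching.
    """
    rules = {
        "what time is it": "It is 10:42.",
        "turn on the lights": "Turning on the lights.",
        "play music": "Playing your playlist.",
    }
    # THIS: normalise the input. THAT: look up the matching action.
    key = utterance.lower().strip("?!. ")
    return rules.get(key, "Sorry, I didn't understand that.")

print(handle("What time is it?"))   # matches a rule
print(handle("Why do you exist?"))  # falls through: no rule, no intelligence
```

Anything outside the table falls straight through to the apology line, which is the whole point: the "intelligence" is only as deep as the rule list someone typed in.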