This is not even a logical question.
"Last week Fujitsu CTO Joseph Reger raised the example of two autonomously driven vehicles, both containing human passengers, en route for an “inevitable” head-on collision on a mountain road"
Unless the two AIs are in (fast) communication, there is only one decision process going on here.
Each AI must make the best decision available to it for the humans it is currently responsible for. It is not in charge of the 'other' vehicle's occupants and therefore cannot decide for them.
What if the other car doesn't have an AI?
What if the other car's AI has comms problems?
What if the other AI is making a similar decision to sacrifice its own humans, based on slightly differently biased information? Then everyone would die!
The only morality we can program an AI with is the equivalent of a human's.
For example, you are driving your family along a road at night. All of a sudden a group of people much like your own emerges from a hidden path and is now, on foot, directly in front of you.
You only have time to either:
a) brake as hard as you can and plough into the pedestrians
b) take avoiding action and drive yourself and your family off the road, next to which is a 200 ft drop to a river, meaning almost certain death.
I would choose a) in an instant. I would also choose a) after serious thought, since it offers the best chance to the most people. If I try to avoid them, my family and I would almost certainly die; if I hit them, my family and I would almost certainly survive. The pedestrians would certainly survive if I went off the cliff, but they stand a better chance of surviving being hit by a braking car than we would of surviving a river after a 200 ft nosedive. A rough sketch of that comparison is below.
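For what it's worth, that "best chance to the most people" reasoning can be written down as a crude expected-survivors comparison. This is purely an illustration: the group sizes and survival probabilities below are numbers I've made up for the example, not real crash data.

```python
# Crude expected-survivors comparison for the two options above.
# All probabilities are invented for illustration only.

OCCUPANTS = 4      # you and your family in the car
PEDESTRIANS = 4    # the similar group on foot

# Option a) brake hard and hit the pedestrians
p_occupant_survives_braking = 0.95   # assumed: belted occupants in a braking car
p_pedestrian_survives_impact = 0.50  # assumed: struck at much-reduced speed

# Option b) swerve off the road, 200 ft drop into a river
p_occupant_survives_drop = 0.05      # assumed: near-certain death
p_pedestrian_survives_swerve = 1.00  # pedestrians untouched

expected_a = (OCCUPANTS * p_occupant_survives_braking
              + PEDESTRIANS * p_pedestrian_survives_impact)
expected_b = (OCCUPANTS * p_occupant_survives_drop
              + PEDESTRIANS * p_pedestrian_survives_swerve)

total = OCCUPANTS + PEDESTRIANS
print(f"Expected survivors, option a (brake):  {expected_a:.1f} of {total}")
print(f"Expected survivors, option b (swerve): {expected_b:.1f} of {total}")
```

Under those made-up numbers, braking comes out ahead (about 5.8 expected survivors versus 4.2), which is the same conclusion as the instinctive choice.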
In essence, most of the time you can only act to save yourself and those you are responsible for, because that taps into your basic survival instincts; anything else takes up precious moments, and everyone could die.