Will a human driver make a better choice than AI when faced with sacrificing the passengers or killing innocents?
I'm tired of hearing these silly worries. NOTHING is ever that definite. NOBODY knows what might happen with either choice.
With a human or an AI alike, we won't know why a decision like this went one way rather than the other. Neither neural networks nor human brains run on explicit IF-THEN rules.
As always, the courts will decide whether there was negligence by a human driver or by a manufacturer of self-driving cars.