That's the thing: no one really knows until presented with the situation, and at that point the brain makes about a zillion calculations and might even find a third alternative that wasn't included in the computer program. No one really knows how they would react, but I think most people would choose not to plow down a group of people. We all have some degree of humanity, I would hope.

It's not the dying, it's the lack of choice that bothers me. Would the car even know there was a group of people there, and not just some random bushes, until after it had pointed itself in that direction? And would you rather have the computer make that decision so you can walk away with a "machine" to blame? It's still something you have to live with, because you made the decision to ride in a car you were not in control of. It seems to me there would be unlimited algorithms behind every movement the car makes.

The car is not going to get sued; you or the manufacturer will be shelling out the cash. You can bet that before you can incorporate that technology into your life, you will be signing away the right to go after the manufacturer for damages. The fine print will need a Hubble Telescope-strength microscope to read.
Well, if a human made a choice in a no-win situation, what do we do with that human? Charge them with manslaughter? Or do we judge they had no good choice and let them go free?
You do the same with the computer.
The only difference is that when a human is in that type of situation, we can, after the fact, make an informed decision about which option was the lesser of two evils. Once we do, we can tell all the A.I.s: if you're ever in a similar situation, here is the lesser of two evils, so do that. And they will.