
Misc thoughts
The very basic ethical rules of non-maleficence, beneficence, justice, fidelity, proportionality and respect for autonomy, although widely acknowledged, are not always interpreted consistently in the context of policy making.
Ethics should be translated into law via policy making; to make this happen, a set of ethical guidelines needs to be established. Ethical consistency leads to standardisation, which is a prerequisite for law making.
Experimental ethics: are we moving towards an empirical morality?
The definition of bug-free algorithms to confront moral dilemmas requires, as input, an infinite number of scenarios and a large number of moral decisions on each scenario.
When it comes to running accident scenarios, do we adopt a deductive reasoning approach, or an inductive one combined with counterexamples? This can be read as a dilemma of perfection vs achievability within a reasonable timeframe.
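A minimal sketch of the inductive route, assuming a hypothetical corpus of scenarios paired with recorded human judgments (every name and data field below is illustrative, not an existing dataset or API):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Scenario:
    """A hypothetical accident scenario and the options available to the vehicle."""
    description: str
    options: List[str]
    human_choice: str   # the option most survey respondents judged acceptable

def find_counterexamples(policy: Callable[[Scenario], str],
                         corpus: List[Scenario]) -> List[Scenario]:
    """Return every scenario where the candidate policy disagrees with the
    recorded human judgment -- the counterexample-driven, inductive loop."""
    return [s for s in corpus if policy(s) != s.human_choice]

# Toy usage: a policy that always brakes, tested against a two-scenario corpus.
corpus = [
    Scenario("pedestrian crossing ahead", ["brake", "swerve"], "brake"),
    Scenario("obstacle on a bridge, cyclist alongside", ["brake", "swerve"], "swerve"),
]
always_brake = lambda s: "brake"
print(find_counterexamples(always_brake, corpus))   # -> only the bridge scenario
```

Each counterexample either refutes the candidate policy or forces a refinement of the guideline it encodes; coverage grows with the corpus rather than being proven once and for all.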
Surveys on self-sacrificing scenarios reveal the moral dilemma of programming autonomous vehicles: the majority of people agree that a driverless vehicle should sacrifice its passenger in order to save numerous pedestrians, yet would not be keen on purchasing a vehicle controlled by such an algorithm.
What’s achievable should lie within the intersection of ‘legal framework’, ‘technical capacity’ and ‘ethics’ sets of potential outcomes.
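Read literally, that intersection is just a conjunction of filters over candidate behaviours; a toy sketch, where the three predicates stand in for whatever the real legal, engineering and ethical checks would be (all option names are made up):

```python
def is_legal(option: str) -> bool:
    return option != "cross_double_line"     # stand-in for a traffic-law check

def is_technically_feasible(option: str) -> bool:
    return option != "instant_stop"          # stand-in for sensor/actuator limits

def is_ethically_acceptable(option: str) -> bool:
    return option != "target_pedestrian"     # stand-in for agreed ethical guidelines

candidates = {"brake", "swerve", "instant_stop", "cross_double_line", "target_pedestrian"}
achievable = {o for o in candidates
              if is_legal(o) and is_technically_feasible(o) and is_ethically_acceptable(o)}
print(achievable)   # -> {'brake', 'swerve'}
```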
Who to save first in case of a fatal accident? Human value assessment approaches: Equality vs Social Darwinism / elitism / capitalism (most ‘valuable’ comes first) vs humanitarianism (most vulnerable or marginalised comes first).
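These approaches can be contrasted as interchangeable prioritisation rules; a toy sketch, in which the Person fields and the idea of a single 'social value' or 'vulnerability' score are deliberately crude assumptions for illustration only:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Person:
    age: int
    social_value: float    # whatever an elitist metric would claim to measure
    vulnerability: float   # e.g. children, elderly, disabled score higher

def equality(people: List[Person]) -> List[Person]:
    """Everyone counts the same: no reordering (ties resolved by chance upstream)."""
    return list(people)

def elitism(people: List[Person]) -> List[Person]:
    """The most 'valuable' are prioritised first."""
    return sorted(people, key=lambda p: p.social_value, reverse=True)

def humanitarianism(people: List[Person]) -> List[Person]:
    """The most vulnerable or marginalised are prioritised first."""
    return sorted(people, key=lambda p: p.vulnerability, reverse=True)
```

Swapping one rule for another leaves the rest of the decision pipeline untouched, which is why the choice between them is a policy question rather than an engineering one.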
How to define human value: cross-check integrated sources of information (life insurance companies, demographics, search engines, etc.)?
Car insurance is a key factor here: who will insure the car and the driver and undertake legal liability? The most obvious answer seems to be 'the software producers', or any third parties holding the IP rights to the driving software.
Will states be these third parties? Privatisation of the autonomous driving software production sector will imply a capitalistic approach to valuing human life, even if the core value on which the algorithm is based is 'equality'.
Take the expensive neighbourhood example: insurance companies will opt for more conservative driving in order to avoid high compensation for damage to expensive cars and other property in upper-class areas, which implies lower risk for pedestrians there too.
And who undertakes liability for software hacking incidents?
Risk distribution among property, drivers, cyclists, pedestrians, animals and nature should take into account animal and environmental ethics, which might affect the gravity factors in the equation.
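Purely as an illustration of what such an equation could look like (the symbols and category list below are assumptions, not an established standard), expected harm can be written as a gravity-weighted sum over the affected classes:

\[
\mathbb{E}[H(a)] \;=\; \sum_{c \in C} w_c \sum_{i \in c} p_i(a)\, s_i ,
\qquad C = \{\text{property, drivers, cyclists, pedestrians, animals, environment}\}
\]

where $w_c$ is the gravity factor of category $c$, $p_i(a)$ the probability that action $a$ harms individual or asset $i$, and $s_i$ the severity of that harm; animal and environmental ethics would enter precisely through the $w_c$ weights.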
How does 'minimising human losses' relate to 'complying with the law'? Breaking the law (like crossing the double line) is preferable to a fatal incident. But what happens when a fatal accident cannot be avoided and the choice is between an individual who breaks the law and one who doesn't? What weight does the law-breaking factor carry, compared to the other differences between the two individuals, in the human value assessment?
Are we optimising safety within the legal framework, or regardless of it? How is an ethical law-breaking margin defined?
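One possible formalisation of such a margin, again only a sketch with assumed notation: let $A$ be the available actions, $A_{\mathrm{legal}} \subseteq A$ the lawful ones, $\mathbb{E}[H(a)]$ the expected harm from the previous note, and $\delta$ the margin; an unlawful action is admissible only when it beats the best lawful one by at least $\delta$:

\[
a^\ast =
\begin{cases}
\arg\min_{a \in A} \mathbb{E}[H(a)] & \text{if } \min_{a \in A_{\mathrm{legal}}} \mathbb{E}[H(a)] \;-\; \min_{a \in A} \mathbb{E}[H(a)] \;\ge\; \delta ,\\[4pt]
\arg\min_{a \in A_{\mathrm{legal}}} \mathbb{E}[H(a)] & \text{otherwise.}
\end{cases}
\]

Setting $\delta$ (for instance, only when a fatality is otherwise unavoidable) is exactly the policy question of how large an ethical law-breaking margin society is willing to accept.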
Public opinion vs criminal justice: known to be both largely uninformed and frequently led by emotions, public opinion should not affect policy making. But it does affect the commercial success of autonomous vehicles, which will eventually affect politics, unless deprivatisation is secured and encouraging UV buyers is not a top priority.
How high does unmanned vehicle ethics rank on governmental agendas across the globe?
What should happen when it comes to driving for pleasure (leisure, hobbies, racing, sports) and how will this affect the free market financially (betting, TV rights)?
Risk assessment should cover: terrorism, vehicles carrying hazardous materials.
Skepticism about AI may lead to an AI winter, but being overoptimistic might increase the risk to which human lives are exposed.