
To Err is Human ... BUT ... Who is Responsible for Machine-Made Decisions?

By Ran Kamil, Soroker Agmon Nordman | IP & beyond

The second decade of the twenty-first century will probably be marked by future historians as the dawn of the Artificial Intelligence (AI) era. While we are yet to be hunted by legions of killer robots guided by an AI resolved to correct God’s mistake of saving Noah from the flood, we no longer use machines merely to replace or enhance human physical labour; we also use them as a substitute for human discretion in decision-making.

With much media attention focused on autonomous vehicles making moral decisions, choosing between the safety of passengers and that of bystanders, it is easy to forget that while autonomous vehicles and self-guided drones are still under development, other AI machines are already making decisions that affect our everyday lives.

In Machines We Trust?

Whether they employ the latest AI technologies like artificial neural networks (ANNs) or a simple, old-fashioned, if-then flowchart algorithm, we use machines (which we call “computers”) daily to make decisions for us and replace our human discretion. Lost on your way, or just seeking to avoid traffic? Navigation software can decide on the best route for you. Too busy to sort through your e-mails? A spam filter can decide which ones to keep and which ones to discard. Trying to figure out what to watch? An app can analyse your past choices to decide which movie you may like.

Who is responsible when you are led into a traffic jam and are late for a meeting with your boss? Who is to blame when that important message from a potential client is overlooked because it was filed in the junk folder, or for an evening wasted on a boring or distasteful movie? Who is liable for the result of a wrong decision when no human is involved in the decision-making process?

The answer is easy: the person who decides to rely on a machine’s decision for his or her convenience can be made to agree to assume the risks of a wrong decision in exchange for that convenience. The answer gets more complicated when the subjects of machine-made decisions have no choice or are unaware that machines are used in the decision-making process, and especially when the decisions may have a greater effect on their lives.

Need a loan? You can most likely get one online without any human involvement. A computer will analyse your credit history and asset value to calculate your risk factor and, based on that factor, determine the amount you can borrow and the interest rate you will be offered. The same is true when deciding what an insurance premium will be, which applicant should be admitted to a coveted school, or even how much bail money a suspect must post to avoid detention. In such cases, the operator of the decision-making machine uses it to make many decisions affecting many individuals, to whom the liability for erroneous decisions cannot be transferred.
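To make the point concrete, here is a minimal, purely illustrative sketch of such an automated loan decision. The thresholds, weights, and field names are invented for the example and do not reflect any real lender’s model.

```python
# Purely illustrative sketch of an automated online loan decision.
# All thresholds, weights, and field names here are hypothetical.

def decide_loan(credit_score: int, asset_value: float, requested: float):
    """Return (approved, amount, annual_rate) with no human in the loop."""
    # Fold credit history and assets into one risk factor (0 = safe, 1 = risky).
    risk = 1.0 - (credit_score / 850) * 0.7 - min(asset_value / 500_000, 1.0) * 0.3
    risk = max(0.0, min(1.0, risk))
    if risk > 0.8:
        return False, 0.0, None                        # too risky: automatic rejection
    amount = min(requested, asset_value * (1 - risk))  # cap the loan by risk
    rate = 0.03 + 0.10 * risk                          # riskier borrowers pay more
    return True, round(amount, 2), round(rate, 4)

print(decide_loan(credit_score=720, asset_value=250_000, requested=100_000))
# -> (True, 100000, 0.0557)
```

Note that nobody at the lender ever sees the individual case: the operator is directly responsible for every output the formula produces.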

The Content ID™ algorithm used by YouTube™ to identify copyright infringement has been the focus of numerous lawsuits for falsely flagging original or public-domain creations as infringing. The Correctional Offender Management Profiling for Alternative Sanctions software (“COMPAS”), used by US courts to assess flight risk and set bail, is criticised and challenged for being biased against certain minorities. Biometric facial recognition systems are slammed for being inaccurate, vulnerable, and easy to deceive. False, discriminatory, unfair, inaccurate, or otherwise wrong decisions, which expose those who rely or act upon them to claims, are not exclusive to humans; in fact, they are much more common with decision-making machines.

Machines Make Decisions, Not Responsibility

A quotation attributed to American scientist Paul Ehrlich, “To err is human, but it takes a computer to really foul things up”, sums things up well. The advantage of having a machine make multitudinous decisions faster and more cheaply than any human can easily become a disadvantage when the machine gets it wrong. In such a case, traditional defences applicable to human decision makers, such as the mistake being an isolated incident, a deviation from the organisation’s policy, the result of bias or malice on the part of the person in question, or of that person exceeding his or her authority or acting alone, cannot be applied to a machine without attributing responsibility directly to its operator. Thus, the multitude of decisions on the one hand, and the operator’s direct responsibility for each of them on the other, increase the operator’s exposure to undesirable results such as class actions, negative media coverage, damage to reputation, inquiries by consumer-protection authorities, increased regulation, and so on.

Obviously, the operator of the decision-making machine can seek indemnification from the provider or developer of the machine. However, in most cases the operator and the developer are the same entity, or the operator itself is involved in adjusting or training the machine. In such cases, several measures can be taken to minimise or mitigate the exposure:

Transparency

As with any human-made decision, disclosing the decision-making process and the criteria behind it makes the decision less arbitrary and more predictable, and reduces the frustration of the person(s) affected by it. Even where full transparency is not possible, for reasons such as proprietary decision-making technologies, partial transparency, in the form of giving the reasoning behind the decision or explaining which terms or criteria were or were not met by the subject of the decision, is preferable. Transparency can demonstrate that the decision, although made by a machine, was not arbitrary, biased, discriminatory, or otherwise unfair. Admittedly, some AI technologies, such as ANNs, pose challenges in implementing such transparency.
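As a purely hypothetical illustration of such partial transparency, the sketch below returns, alongside the decision itself, which criteria the subject met or failed; the criterion names and thresholds are invented for the example, and the proprietary model behind each threshold can remain undisclosed.

```python
# Hypothetical sketch of "partial transparency": the machine returns not only
# its decision but also which criteria the applicant met or failed.
# Criterion names and thresholds are invented for illustration.

CRITERIA = {
    "credit_score_at_least_650": lambda a: a["credit_score"] >= 650,
    "income_covers_repayments":  lambda a: a["monthly_income"] >= 3 * a["monthly_repayment"],
    "no_recent_defaults":        lambda a: a["defaults_last_2_years"] == 0,
}

def decide_with_reasons(applicant: dict) -> dict:
    # Evaluate every criterion so the subject sees which terms were or were
    # not met, even though the logic behind each threshold stays undisclosed.
    results = {name: check(applicant) for name, check in CRITERIA.items()}
    return {"approved": all(results.values()), "criteria": results}

print(decide_with_reasons({
    "credit_score": 700,
    "monthly_income": 9_000,
    "monthly_repayment": 2_500,
    "defaults_last_2_years": 1,
}))
# -> approved: False; only 'no_recent_defaults' failed, and the subject knows it.
```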

Option to Appeal

Offering subjects of machine-made decisions the option to appeal decisions they believe to be erroneous to a human referee, even if such an appeal involves costs and bureaucratic procedure, can serve to shift some of the responsibility for an erroneous machine-made decision from the operator of the machine to its subjects. This is because, by exercising their discretion in deciding whether and why to appeal, the subjects of the decision are no longer entirely passive and thus share, at least in part, the responsibility for the final outcome. Of course, the appeal process must be reasonably accessible, and the human referee authorised and capable of reversing or amending the decision where the appeal is justified.

Alternatives

When the subjects of decisions are given a choice as to whether the decision in their case will be made by a human or by a machine, they can also be asked to assume the risks of erroneous machine decisions in exchange for benefits such as receiving the decision faster or free of charge.

In conclusion, with emerging AI technologies making machine-made decisions more and more common, relying on such decisions may increase exposure to liability when wrong decisions occur. Operators of decision-making machines, and those relying on their decisions, should be made aware of these potential exposures and take measures to minimise or mitigate them.


Ran Kamil

Soroker Agmon Nordman | IP & beyond, Herzliya, Israel & Singapore
T: +972 9 950 7000
W: www.sanlaw.legal


Published: GGI Insider, No. 103, September 2019 | Photo: scharfsinn86 - stock.adobe.com
