Machines: helpers or discriminators?

Since its first appearances on e-commerce websites such as Amazon and eBay[1], the application of machine learning (often referred to simply as Artificial Intelligence) has grown significantly and become commonplace; AI is now applied in multiple ways across various markets.

AI has also been applied to the legal field, giving rise to what is called “legal informatics”: the application of technology within the legal environment, involving law-related organizations such as law offices and Courts. Within legal informatics lies the branch of “computational law”, which concerns the automation and mechanization of legal reasoning.

It is through the application of these methods, and under the assumption that juridical reasoning can be automated, that this article analyzes machines able to issue legal decisions, reflecting on how they, in spite of the so-called “machine bias”, can perform better and even prove less biased than human Judges.

The scope of action of these machines has so far been limited to basic decisions taken in those phases of trials, or for those pronouncements, that Judges usually rule on not through a detailed investigation but simply by making a calculation. This is the case of COMPAS[2] – used in many US states and particularly famous for the case Loomis v. Wisconsin[3] – which calculates the likelihood of recidivism based on the evaluation of data collected through a questionnaire. Similar machines could also be used in bail-release proceedings, where the Judge only has to assess whether fixed prerequisites are met.
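
To make the idea of a questionnaire-based calculation concrete, the sketch below shows how such a risk score might be computed in principle. The feature names, weights and logistic form are invented for illustration only; they do not reflect the actual, proprietary COMPAS model.

```python
# Hypothetical illustration of a questionnaire-based risk score.
# Features and weights are invented and do NOT reflect the real COMPAS model.
import math

def recidivism_risk(answers: dict) -> float:
    """Return a probability-like score in [0, 1] from questionnaire answers."""
    weights = {
        "prior_offenses": 0.35,
        "age_at_first_arrest": -0.04,
        "employment_status": -0.50,      # 1 = employed, 0 = unemployed
        "substance_abuse_history": 0.60,
    }
    baseline = -1.0
    raw = baseline + sum(weights[k] * answers.get(k, 0) for k in weights)
    return 1 / (1 + math.exp(-raw))      # logistic link: raw score -> probability

example = {"prior_offenses": 3, "age_at_first_arrest": 19,
           "employment_status": 0, "substance_abuse_history": 1}
print(f"Estimated risk: {recidivism_risk(example):.2f}")
```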

These machines work thanks to predictive technology – a technology that “makes forecasts keeping in perspective the previous records/data”[4] – trained through a supervised learning model; the training data set usually consists of previous decisions taken by human Judges. This is the most sensitive step: once the software is developed, the machine needs to be “fed” with data inserted by human operators. In a supervised learning system, in fact, the machine “learns” from the data it is given; hence the choice of data, and the action of inserting it into the machine, is pivotal because, in the presence of biased data, the machine will output biased results.
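
The following minimal sketch illustrates this supervised-learning setup under wholly synthetic, assumed data: a model is trained on features of past cases with the human Judges' past rulings as labels, and a bias encoded in those labels reappears in the trained model.

```python
# Minimal, purely illustrative sketch of training on past decisions.
# All data is synthetic; column meanings are assumptions for this example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

neutral = rng.normal(size=(n, 2))        # e.g. offence severity, prior record
group = rng.integers(0, 2, size=n)       # e.g. a protected category (0/1)

# Labels taken from hypothetical past human rulings: if those rulings
# penalised one group, the bias is baked into the training labels...
past_rulings = (neutral[:, 0] + 0.8 * group + rng.normal(size=n) > 0).astype(int)

X = np.column_stack([neutral, group])
model = LogisticRegression().fit(X, past_rulings)

# ...and the trained model reproduces it: the coefficient on the sensitive
# attribute is clearly non-zero, so predictions shift with group membership alone.
print("learned coefficients:", model.coef_.round(2))
```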

Despite the social resistance these machines currently face – both from Justice workers who, legitimately, fear being replaced by machines, and from Justice subjects, reluctant towards the idea of a machine deciding on their fundamental rights – the advantages of computational law tools can be numerous and of great impact, both on Courts’ organization and on the people subject to the decisions.

The most evident advantages are the help machines give Courts in easing the workload – leaving Judges more time to assess those cases and phases of trials that require investigation and a proper “human process” – and in speeding up proceedings, resulting in a better service for those awaiting decisions.

A less evident advantage is the absence of (or, at least, the possibility of removing) human bias which, contrary to machine bias, is usually underestimated and not always challengeable. When dealing with AI and its decisions on human rights, the first point usually tackled is transparency. The problem with AI lies, in fact, in the “reasoning” it follows. It is usually said that humans think – or at least are supposed to think – in terms of causation, while machines proceed by connection; this can make it difficult to properly understand how the machine goes from point A to point B. This process of connection is conducted by the system behind the machine, that is, the algorithm that runs it. The problematic aspect is that, as also highlighted in the mentioned case of Loomis v. Wisconsin[5], algorithms are protected as trade secrets, and this can represent an obstacle when trying to understand the process followed by the machine: revealing the algorithm could jeopardize someone’s intellectual property.

However, through a few adjustments, it is possible to challenge or, at least, minimize machine bias.

Can the same be done with human bias?

It is true that the human Judge has to follow a fair and legal process when taking his or her decisions, and this is, of course, a very strong safeguard; nevertheless, it is also true that on many occasions – especially those imagined as the scope of action of the machines analyzed here – there is more than one law-compliant solution and, among them, some are more, or less, benevolent towards the subject of the decision; hence, within this area of lawful possibilities, the Judge can freely and, more importantly, legally choose which one better suits the case at hand. In doing so, it is not unreasonable to think that a bias, when present – and it can be of any kind: among the most common are biases against ethnicities, genders, sexual orientations, or social backgrounds – could intervene, perhaps even subconsciously, and, except for those cases that are evidently flawed by prejudice, it is quite difficult to challenge the decision.

Contrary to human bias, which, in order to be challenged, requires, in the best case, an additional level of justice – and consequently a duplication of proceedings, with the related costs and time – challenging a machine bias can be faster and more effective. This is the pivotal element that distinguishes machine bias from human bias: given that the algorithm behind the machine is merely a “procedure or set of instructions”[6], if an error is present – that is, the instruction which gives rise to the discrimination – then, as soon as the error is identified, the instruction can be modified.

A famous example of this feature is the 2015 Google Photos case. Google Photos labels photos on the basis of the elements the machine recognizes in them; the incident arose when a picture of two African-American people was labeled “gorillas”. The incident, of course, led people to claim that the machine was racist and biased against black people.

Google, though, intervened immediately and removed the issue from the machine.

When dealing with humans, instead, there is always the possibility that a person is biased against a category, and also that the bias is subconscious, so that even the person perpetrating the prejudice may not be fully aware of it. Especially in these cases, challenging such human prejudice can be very hard, because what has to be proven is the flawed reasoning that led the person in charge of the decision to adopt one legal solution instead of another, equally legal one[7]. The process of removing such biases in people, therefore, can be very long, difficult and expensive. Moreover, even in the scenario where a specific biased case is resolved in favor of a fairer decision, it is possible that the biased Judge still holds the bias, and will therefore repeat it in similar situations.

As demonstrated in the Google Photos case, instead, once the flaw in a machine is identified, a single intervention to remove the bias is enough to prevent similar discrimination.

Another important insight from the Google Photos case comes from the statement made by Google’s chief social architect, Mr. Yonatan Zunger.

Mr. Zunger claimed that machines are not biased per se, but they can easily learn biases from people. Herein lies the key to making machines helpers rather than discriminators: as human products, machines need to be properly designed and developed.

In predictive technologies, in fact, what is crucial is the human intervention of feeding the machine with previous data. This operation, if properly conducted and under strict parameters, can avoid “teaching” biases to the machine. An accurate choice of cases, a proper balance among the categories taken into account (for example, white and black people, men and women), or even the omission of those sensitive details (ethnicity, gender, etc.) that commonly trigger biases but are not strictly necessary for the decision, could prevent the perpetuation of typical human biases; such removal, evidently, cannot be done when the Judge is facing the person.
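
A minimal sketch of this kind of pre-processing is shown below, assuming the past cases are held in a pandas DataFrame with hypothetical column names. It illustrates the two steps mentioned – balancing the categories and dropping sensitive attributes – while noting that removing such columns alone does not guarantee fairness, since proxy variables may remain.

```python
# Sketch of pre-processing described above; column names are hypothetical.
import pandas as pd

SENSITIVE = ["ethnicity", "gender"]

def prepare_training_set(cases: pd.DataFrame, balance_on: str = "gender") -> pd.DataFrame:
    # 1. Balance the data set across the chosen category by down-sampling
    #    each group to the size of the smallest one.
    smallest = cases[balance_on].value_counts().min()
    balanced = (cases.groupby(balance_on, group_keys=False)
                     .apply(lambda g: g.sample(smallest, random_state=0)))
    # 2. Drop sensitive attributes not strictly necessary for the decision,
    #    so the model cannot condition on them directly.
    return balanced.drop(columns=SENSITIVE)

# Hypothetical toy data to demonstrate the helper.
cases = pd.DataFrame({
    "severity":     [3, 1, 2, 5, 4, 2],
    "prior_record": [1, 0, 0, 2, 1, 0],
    "ethnicity":    ["A", "B", "A", "B", "A", "A"],
    "gender":       ["F", "M", "F", "M", "M", "M"],
})
print(prepare_training_set(cases))
```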

While these advantages are indisputable, less evident are the solutions to the existing problems of machine accountability and transparency.

An interesting perspective is the one provided by the Council of Europe’s insight on AI and human rights, and by the GDPR’s[8] tools.

In the recent document issued by the Commissioner for Human Rights of the Council of Europe, “Unboxing Artificial Intelligence: 10 steps to protect Human Rights”[9], the recommendation is that, if an AI system is used to issue decisions that have a “meaningful impact on person’s human rights”, it needs to be “identifiable”; furthermore, the recommendation states that no AI system should be so complex that it does not allow for human review and scrutiny.

The GDPR, in addition, through Article 22(3), provides for the so-called “right to explanation”, stating that the data subject, whose personal data has been processed in order to produce a legal effect, has the right to “contest the decision”; in order to do so, the data subject needs to know how the decision was made, that is, to understand how the machine works. This does not mean that the algorithm has to be revealed – thus balancing the safeguard of the right to due process with the protection of the developers’ intellectual work – but that the data processor is obliged to provide a specific explanation for each contested case; this process could also lead to a re-evaluation of the data sets used, in order to ascertain whether or not they are biased and hence producing prejudiced decisions.
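
One way such a case-specific explanation could be produced without publishing the model itself is to report how much each input contributed to that single decision. The sketch below is only illustrative and assumes a simple linear scoring model with invented feature names and weights; it is not a statement of what the GDPR requires in practice.

```python
# Illustrative only: a per-case explanation that reports each input's
# contribution to one decision, without disclosing the full model.
# Feature names and weights are hypothetical.
weights = {"prior_offenses": 0.35, "employment_status": -0.50,
           "substance_abuse_history": 0.60}
baseline = -1.0

def explain_decision(case: dict) -> str:
    contributions = {k: weights[k] * case.get(k, 0) for k in weights}
    lines = [f"  {k}: {v:+.2f}" for k, v in
             sorted(contributions.items(), key=lambda kv: -abs(kv[1]))]
    total = baseline + sum(contributions.values())
    return ("Contributions to the raw score:\n" + "\n".join(lines) +
            f"\n  baseline: {baseline:+.2f}\n  total: {total:+.2f}")

print(explain_decision({"prior_offenses": 3, "employment_status": 0,
                        "substance_abuse_history": 1}))
```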

A different but also viable solution could be to require machines involved in human rights-related decisions to have an open-source algorithm, allowing anyone to read it and possibly challenge it. This does not mean that Lawyers or Judges would be required to acquire computational skills; it would only imply the involvement of other professionals in the process, as already happens in many other fields of science.

Moreover, it has to be noted and remembered that AI systems are not meant to totally replace humans, but only to accompany them in certain tasks, as has already occurred in past technological revolutions; hence human intervention – as stated in recital 71 of the GDPR – can be provided upon request or foreseen by design.

As an extrema ratio, in those unfortunate cases in which a system cannot abide by appropriate standards of transparency and accountability – the recommendation by the Commissioner for Human Rights of the Council of Europe states – it should not be used.

In conclusion, what has to be taken into account is that, however innovative, AI systems are nevertheless human products and, as such, they can be controlled and regulated by law – or even forbidden if they do not meet the standards – like any other tool.

The human mind, which is where human prejudices reside, cannot instead be regulated or easily scrutinized; hence entrusting it with important decisions is not automatically safer than entrusting them to machines.


[1] B. Casey, A. Farhangi, R. Vogl, Rethinking explainable machines: the GDPR’s “right to explanation” debate and the rise of algorithmic audits in enterprise, Berkeley Technology Law Journal, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3143325.

[2] Acronym for “Correctional Offender Management Profiling for Alternative Sanctions”.

[3] Loomis v. Wisconsin, 881 N.W.2d 749 (Wis. 2016).

[4] Definition from Techopedia, https://www.techopedia.com/definition/14525/predictive-technology.

[5] For example, COMPAS was developed by a private entity, hence even the States that make use of it are unable to unpack the algorithm behind it.

[6] As defined by Julia Angwin in Making Algorithms Accountable, ProPublica (Aug. 2016), https://www.propublica.org/article/making-algorithms-accountable.

[7] Obviously, decisions where the bias is evident and oriented toward an unlawful choice are not considered here, as they pose no problem in terms of challenge.

[8] Regulation (EU) 2016/679.

[9] Available at https://www.coe.int/en/web/commissioner/-/unboxing-artificial-intelligence-10-steps-to-protect-human-rights.
