
Employment, security, justice: where do AI “biases” come from and can we avoid them?


In 2018, Amazon scrapped its automatic resume-screening system after realizing that it was systematically downgrading women’s resumes. Why such behavior? Because previous recruitments, that is to say, the base from which the system had been trained, were almost exclusively male: the AI system reproduced the earlier “biases.”

Problems of this type are legion, and they are of the utmost importance because they are a source of discrimination, whether in employment, security, or justice. They are also the source of multiple scandals and help make public opinion suspicious of Artificial Intelligence, especially when AI plays a social role. Of course, other areas seem less affected by these issues, such as when AI is used to analyze X-rays in the medical field or to prevent accidents through assisted driving.

Some argue that technology is neutral: it is the data that is biased, not the technology. This is partly true but also a bit short-sighted: an artificial intelligence first “learns” from the data but then produces its own results from this training. If its results are biased, it is difficult to argue that the technology is neutral, especially since it can even, in the worst case, amplify bias by reinforcing trends contained in the training data.
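
As a minimal, purely synthetic sketch of this reinforcement effect (all data and variable names below are invented), one can train a simple classifier on skewed historical decisions and watch it reproduce the skew in its own predictions:

```python
# Minimal synthetic illustration: a model trained on skewed historical
# decisions reproduces, and can even sharpen, that skew. All data is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)      # 0 = group A, 1 = group B
skill = rng.normal(size=n)              # the "legitimate" signal

# Historical labels favour group A regardless of skill.
hired = (skill + 0.8 * (group == 0) + rng.normal(scale=0.5, size=n)) > 0.5

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: historical hire rate {hired[group == g].mean():.2f}, "
          f"predicted hire rate {pred[group == g].mean():.2f}")
```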

Others rightly believe that this is a real problem that should be corrected. But what is a bias? Who decides? How do you decide that training data are “representative,” “complete,” and “balanced”?

Few venture to define it, and the notion of bias is rarely made explicit. For example, the National Pilot Digital Ethics Committee (CNPEN) gives an example rather than a definition:

“Recorded speech data may contain only adult voices when the system is supposed to interact with children as well, or a body of text may statistically use female pronouns more frequently than male ones.” (Call for contributions from the National Pilot Digital Ethics Committee, 2020)

To be unbiased, should a corpus systematically include as many feminine as masculine pronouns? If the goal is to develop an information system on violence against women, wouldn’t it be wise for the corpus to contain more female references?
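
Whatever the answer, it helps to be able to measure such imbalances before deciding whether they matter for the intended application. A small illustrative sketch (the corpus and pronoun lists are placeholders to adapt):

```python
# Quick audit of pronoun frequencies in a text corpus (illustrative only;
# the corpus and the pronoun sets are placeholders to adapt).
import re
from collections import Counter

FEMININE = {"she", "her", "hers"}
MASCULINE = {"he", "him", "his"}

def pronoun_counts(texts):
    counts = Counter()
    for text in texts:
        for token in re.findall(r"[a-z']+", text.lower()):
            if token in FEMININE:
                counts["feminine"] += 1
            elif token in MASCULINE:
                counts["masculine"] += 1
    return counts

corpus = ["She said he would call her back.", "His report cites her work."]
print(pronoun_counts(corpus))   # Counter({'feminine': 3, 'masculine': 2})
```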

What is a bias?

One can first seek to eliminate false statements (such as “London is the capital of France” or “2 + 2 = 5”), but a false statement is not a bias. Note that AIs also sometimes miss the mark entirely; these are errors, not biases. This is quite rare in practice, but big mistakes do happen.

Another type of problem is more related to the data. If the data used for training is far from the intended application domain, then the resulting software will perform poorly (this is essentially what the CNPEN questionnaire was aiming at). In theory, it would suffice to select diverse and representative training data. In practice, this is often problematic, because suitable training data is unavailable or insufficient, or simply because deciding what counts as diverse and representative is not a trivial problem. It is, in fact, a major source of bias.
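
One simple, hedged way to make this concrete is to compare the distribution of a relevant variable in the training data with the distribution expected in the target population; the figures below are invented, echoing the CNPEN example of children’s voices:

```python
# Compare the distribution of a relevant variable (invented age bands)
# between the training data and the target population of the application.
train_counts = {"child": 120, "adult": 9_880}      # e.g. recorded voices
target_share = {"child": 0.30, "adult": 0.70}      # intended users

total = sum(train_counts.values())
for category, expected in target_share.items():
    observed = train_counts.get(category, 0) / total
    print(f"{category:>6}: {observed:6.1%} of training data "
          f"vs {expected:6.1%} of target population")
```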

But the Amazon case examined at the outset is not so much a matter of data selection: Amazon had obviously used its own recent recruitment data to train its system, which then reproduced the biased behavior of past recruitments. We are then close to “cognitive bias,” as defined by Wikipedia:

“A cognitive bias is a distortion in the cognitive processing of information. The term bias refers to a systematic deviation of logical and rational thinking from reality.” (Wikipedia)

In Amazon’s case, the bias was a systematic disadvantage for female applicants, as the training data was not neutral.

Should the training data be corrected, and how?

It will be noted that Amazon then chose to switch off this algorithm. Another choice would have been to try to “correct” it by influencing its decisions to the benefit of female candidates. But what is the point of a learning approach if the data we learn from is biased? Can we really “correct” the data to obtain a more neutral system? This is one of the big questions currently being asked in AI.
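
One family of possible “corrections” (not what Amazon did, just a common technique) consists in reweighting the training examples so that each group carries comparable weight. A minimal sketch with scikit-learn, where X, y, and group are assumed to be the features, labels, and protected attribute:

```python
# Sketch of one possible "correction": reweight training examples so that
# under-represented groups weigh as much as over-represented ones.
# X, y and group are assumed to exist (features, labels, protected attribute).
import numpy as np
from sklearn.linear_model import LogisticRegression

def group_balancing_weights(group):
    """Give each group the same total weight, whatever its size."""
    group = np.asarray(group)
    values, counts = np.unique(group, return_counts=True)
    per_group = {v: len(group) / (len(values) * c) for v, c in zip(values, counts)}
    return np.array([per_group[g] for g in group])

# weights = group_balancing_weights(group)
# model = LogisticRegression().fit(X, y, sample_weight=weights)
```

Whether such a reweighted system is actually “more neutral” remains, as the paragraph above suggests, an open question rather than a settled technique.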

Every application has a purpose, and therefore a social dimension, which necessarily implies a point of view, a subjectivity. There is, therefore, no one-size-fits-all solution to the problems posed by AI, and these problems resonate with those of the living world. Defining and prioritizing the criteria for choosing among candidates, for example, is a complex process that is difficult to model.

So what to do?

Best practices for limiting sources of AI bias

Avoiding possible bias begins with a careful preliminary study, if possible collegial, of the problem and the target population. We must try to bring to light the relevant variables, be as exhaustive as possible, and develop a corpus (the data that will be used to “train,” that is to say, fine-tune, the system) according to these variables, ensuring that each important element is well represented. This seems obvious but often runs up against practice: lack of time, lack of interest in this type of study, lack of means. The corpora used are also often data taken directly from the Web, difficult to control and therefore quite often uncorrelated with the intended target.
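
As a hedged sketch of what “ensuring that each important element is well represented” can look like in practice, here is a small coverage report over the variables identified in the preliminary study (the variable names, corpus, and threshold are invented):

```python
# Coverage report: check that every category of each relevant variable
# reaches a minimum share of the corpus. Variables and threshold are invented.
from collections import Counter

def coverage_report(records, variables, min_share=0.05):
    """records: list of dicts; variables: names chosen in the preliminary study."""
    report = {}
    for var in variables:
        counts = Counter(r.get(var, "missing") for r in records)
        total = sum(counts.values())
        report[var] = {cat: (n / total, n / total >= min_share)
                       for cat, n in counts.items()}
    return report

corpus = [{"speaker_age": "adult", "gender": "f"},
          {"speaker_age": "adult", "gender": "m"},
          {"speaker_age": "child", "gender": "f"}]
for var, cats in coverage_report(corpus, ["speaker_age", "gender"]).items():
    print(var, cats)
```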

It is also necessary to document and make public the elements underlying the decisions (which code, which data are used, and why) and to state the doubts and unknowns (for example, when the training corpus is a set of texts taken from the Web, poorly controlled and whose biases cannot be precisely known in advance), so that others can examine them, criticize them, and possibly correct them. Currently, both code and data are often kept closed, which is rather counterproductive, arousing mistrust and suspicion.
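
In the spirit of this recommendation, and loosely inspired by the idea of datasheets for datasets (the fields below are only an illustrative guess, not a standard schema), such documentation can start as a simple structured record published alongside the model:

```python
# Illustrative documentation record published alongside the model; the fields
# and values are an invented example, not a standard schema.
import json

dataset_card = {
    "name": "recruitment-corpus-demo",
    "source": "internal application records, 2015-2018",
    "intended_use": "ranking candidates for technical positions",
    "known_limitations": [
        "past decisions over-represent male candidates",
        "web-scraped texts with uncontrolled biases",
    ],
    "collection_decisions": "variables and filters chosen by the project team",
    "contact": "to be filled in by the data owners",
}

print(json.dumps(dataset_card, indent=2))
```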

One of the big challenges for current AIs is to explain their decisions: explainability is necessary for decisions to be accepted, but it is also a real technical challenge given the complexity of current systems.
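
Explainability is an open research area; as one modest example among many (not a complete answer), permutation importance, available in scikit-learn, estimates how much each input variable contributes to a trained model’s predictions:

```python
# One simple, model-agnostic explanation tool: permutation importance.
# It measures how much a model's score drops when one feature is shuffled.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                    # three invented features
y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)    # feature 0 drives the label

model = LogisticRegression().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["feature_0", "feature_1", "feature_2"],
                       result.importances_mean):
    print(f"{name}: importance {score:.3f}")
```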

It is the law that frames and limits the use of AI systems. This makes it possible to avoid certain biases and deviant behaviors: an AI should neither discriminate based on gender nor utter homophobic remarks, quite simply because it is against the law. Moreover, French law prohibits an administrative decision from being taken solely by a machine-learning system, without prohibiting the use of such systems altogether. It is a fairly reasonable compromise, at least on paper, which allows the system’s proposals to be monitored and validated, provided the entire process is open and transparent. We thus add a level of human supervision, which is itself probably not free of bias. Still, at least the responsibility falls on a human and is not delegated to an abstract machine.

Does objectivity exist?

The last point is to realize that the absence of bias in human decisions does not exist: a decision is a choice and, as such, is necessarily marked by subjectivity. Likewise, there is no such thing as an unbiased corpus: by nature, all words and data are the fruit of a point of view. This does not mean that there is no truth and that everything is equal, but being aware of biases also means being aware of the multiple factors, conscious or not, that influence our decisions.

