Yesterday Alison and I drove home by the lakeside road. It was a lovely drive in the mottled shade of maples, with views of the water on one side and the hillside like a hanging garden on the other. Why did I decide to take that route, longer and slower? Perhaps I remembered similar drives on similar days, or at least anticipated it might be pleasant. I can’t explain it. At most I can guess at some uncertain reasons.
When we consider the decisions we make every day, they are mostly like this. Pushed for an explanation, we can come up with justifications - in business, mostly defences - from a mix of prior experiences, learned responses, and intuition; but none of these explanations are themselves easily explainable.
This is the landscape we navigate when we think about accountability in machine learning, automation and AI. That road is twisty and dappled with shade, but unlike our lakeside drive, it’s hard work.
The rights approach
Machine learning is now central to many aspects of our lives, some of which we are only marginally aware of. As these algorithms become more critical, so too do the issues of accountability that arise. In this context, you will often hear of the right to an explanation. Indeed, this very right is solemnly set down in laws around the world.
Given the complexity of the algorithms we work with, is the right to an explanation a hopeless quest? Is it even the right quest?
I know many of my readers are experts in machine learning and have already thought deeply about this topic. But many readers are not, and for you I want to make sure you are not short-changed by legislators or tech companies offering the right to an explanation as an answer to what are fundamental, and deeply human, issues facing us today.
The obligation to be fair
Firstly, I am not attached to the language of rights. I tend to agree with Simone Weil (see L’Enracinement, or The Need for Roots) that rights are secondary to obligations. This means that the existence of a right is dependent on the recognition and fulfillment of an obligation. For example, the right to be free from hunger is only meaningful if there is an obligation on others to provide food.
So the right to an explanation depends on the obligation of, say, tech companies to provide that explanation. However, providing an explanation in the realm of machine learning or human reasoning can be challenging, if not impossible. Or the explanation may be beyond the understanding of the person who needs it.
Why would someone want an explanation anyway? In order to know if we have been treated fairly. Has the decision-making process considered all relevant factors? Has it disregarded irrelevant or prejudiced biases? Has it treated everyone impartially and equitably?
Fairness is fundamental.
Many animals, primates and dogs being familiar examples, show a sense of fairness. They can exhibit signs of real distress or protest when they perceive unequal treatment. This suggests that fairness is not just a human societal construct, but a fundamental aspect of social life across species.
Research shows that our sense of fairness is more than just being upset when we get less than others. It has evolved as a two-part process.
The first part is something we see in many animals. This is the ability to recognize when we're getting less than someone else, especially someone we're working with. This is important because it helps us figure out who is a good partner to cooperate with. If someone we're working with keeps getting more than us, we might decide they're not a good partner and stop working with them.
The second part is being able to see when we're getting more than someone else and feeling like this isn't fair. This is also important for maintaining cooperation because if we're always getting more than our partner, they might decide to stop working with us.
The first step helps us recognize good cooperative partners, and the second step helps us keep these partners by making sure we're not taking advantage of them.
This evolutionary model suggests that with careful governance and oversight it will be in the interests of tech companies to treat us fairly. This mutuality can help us.
Rather than a right to an explanation, we may find more success with an obligation to be fair.
This shift comes with its own complexities. But, while fairness is a complex notion with various interpretations and implications, its essence—equity, impartiality, justice—is widely understood, even if indistinctly expressed. Just like animals instinctively understand the concept of fair treatment, humans can sense when a decision or process is fundamentally fair or unfair. By embracing the responsibility to strive for fairness in machine learning, we acknowledge this deep-rooted instinct and need.
Fairness is broadly based. An explanation need only describe how one decision is made. Fairness requires an understanding of how a decision was made, in comparison to all other related decisions.
This also leaves fewer hiding places for tech companies, who, faced with the demand for an explanation, would love to deliver PhD-level technical defences of their decision-making process.
Fairness, on the other hand, a dog can understand or, more to the point, a jury of our peers can judge.
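To make that contrast concrete, here is a minimal sketch, in Python, of what judging a decision against all related decisions could look like: comparing approval rates across groups and reporting a disparate-impact style ratio. The toy data, the group labels and the "four-fifths" rule of thumb are assumptions for illustration only, not a claim about how any particular system works.

```python
# Illustrative only: judging decisions in the context of all related
# decisions. The data, group labels and the 0.8 "four-fifths" rule of
# thumb are assumptions for this sketch, not a recommendation.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest approval rate across groups."""
    return min(rates.values()) / max(rates.values())

# Hypothetical outcomes from many related decisions, not one decision in isolation.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

rates = approval_rates(decisions)
print(rates)                                    # {'group_a': 0.75, 'group_b': 0.25}
print(round(disparate_impact_ratio(rates), 2))  # 0.33 -- far below the 0.8 rule of thumb
```

No dissection of any single prediction is needed here; the question a jury could grasp is whether the pattern of outcomes, taken together, looks fair.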
Competitive fairness
The implication for AI would be profound. If we recognize fairness as a basic principle of social interaction, it’s reasonable - essential, even - to extend this principle to the artificial intelligences which are becoming increasingly involved in decision-making affecting our lives.
This doesn’t mean it will be easy to define or measure fairness in every scenario involving AI, or the related concepts of equity, impartiality or justice. These are age-old concepts that have been the subject of debates among scholars, rulers and subjects for centuries. They are to some extent subjective and can vary drastically based on cultural, societal, and individual values. So, how can a tech company, whose primary expertise lies in the domain of science and technology, be expected to define and apply these concepts?
Yet, there are compelling reasons for tech companies to take on this task. The response need not be defensive. Tech companies can partner with experts in ethics, social sciences, law, and policy to better understand and incorporate fairness and justice into their algorithms *and the processes which use those algorithms.* Multi-disciplinary collaboration can be extremely beneficial in this context, building public trust which in turn enables customer acquisition, retention and higher engagement.
If the goal is fairness and accountability, then what we should be aiming for is not just transparency, but systems and processes that can be audited and held accountable. Just as businesses have financial audits, tech companies deploying algorithms should have algorithmic audits.
This means ensuring that there’s a record (if not an explanation) of the decision-making process that an algorithm goes through, and that the results of these decisions can be examined and evaluated for fairness and justice. Documentation is key here. It would involve, for instance, recording how the algorithm was designed, what data it was trained on, and what its goals or objectives were.
Such an audit would also consider how the algorithm was tested for fitness to these ends before being deployed, how it has been reviewed and monitored since, and how it is updated or modified over time. Importantly, there should be a mechanism for redress if unfairness or injustice is found, and a plan for preventing such issues in the future.
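As one illustration, here is a hedged sketch in Python of the kind of record such an audit might draw on. Every field name is an assumption made for the example; the point is only that the design, training data, objectives, testing, monitoring, modifications and redress route described above can be captured as a documented artefact that an auditor can inspect.

```python
# Illustrative only: a possible shape for an algorithmic audit record.
# Field names are assumptions for this sketch, not an audit standard.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class AlgorithmAuditRecord:
    system_name: str
    design_rationale: str             # how and why the algorithm was designed
    training_data_sources: list[str]  # what data it was trained on
    stated_objectives: list[str]      # what its goals or objectives were
    pre_deployment_tests: list[str]   # how fitness was tested before deployment
    monitoring_reviews: list[str] = field(default_factory=list)  # ongoing review and monitoring
    modifications: list[str] = field(default_factory=list)       # updates or changes over time
    redress_mechanism: str = "unspecified"                        # route for challenge and correction
    last_reviewed: date = field(default_factory=date.today)

# A hypothetical record for a hypothetical screening system.
record = AlgorithmAuditRecord(
    system_name="loan_screening_v2",
    design_rationale="Rank applications by predicted likelihood of repayment.",
    training_data_sources=["applications_2015_2022"],
    stated_objectives=["reduce default rate", "treat applicants consistently"],
    pre_deployment_tests=["approval-rate comparison across demographic groups"],
    redress_mechanism="human review of any contested decision",
)
print(record.system_name, record.last_reviewed)
```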
If we shift our focus from explanation to fairness, we have the potential to develop a new perspective in AI and machine learning. This approach is less about dissecting the black box of algorithms and more about setting and meeting standards for how those algorithms should impact the world.
Testing times
This doesn’t mean that explanation has no place in algorithmic accountability. There is still value in understanding, to the extent possible, how an AI system makes its decisions. This is especially critical for quality assurance, testing and maintenance of systems. However, our primary ethical concern should be whether the algorithm is doing its job in a way that aligns with our societal values of fairness and justice.
An important reason this shift in perspective is necessary is that our values of fairness and justice change over time. Forms of discrimination that we now unhesitatingly call racist or misogynist were once widely considered reasonable and natural.
We need to be on guard for this, because the very prevalence of algorithmic decision making in our society may itself change our perceptions of what is fair. And it's important to note that consistency and objectivity do not necessarily equate to fairness. Algorithms, after all, can perpetuate existing biases or create new ones.
The historian Lorraine Daston explained this well in a recent interview with The Nation about her marvelous new book Rules: A Short History of What We Live By.
“Even if it’s not just a set of white, male, European faces that train the facial-recognition algorithm—a problem that could be corrected—the deeper problem is that all of these examples are from the past. This means if the future deviates from the past, as it is wont to do now and again, the algorithm is no longer a good fit.”
It’s worth noting that a perspective of fairness would require cooperation from all stakeholders involved in AI and machine learning, from the engineers and data scientists who design the algorithms, to the executives who make strategic decisions, to the policymakers who set regulatory standards, and to the users who interact with these systems daily.
By replacing the right to an explanation with the obligation of fairness, we can create a legal, political and technical environment where algorithms serve society in a manner that is aligned with our most fundamental values.