EXPLAINABLE AI: PART 2 - CAN WE HAVE A STATUS QUO WHERE AI IS EXPLAINABLE?

[Student IDEAS] by Nelberto Nicholas Quinto and Kunal Purohit - Master in Management at ESSEC Business School

As established in Part 1, knowing how an AI system reaches a decision, and which parties can be held accountable for it, is of paramount importance where the risk is classified as high or unacceptable under the European Commission's framework. In scenarios where explainability is consequential, adopting AI and realizing the efficiencies that the technology brings is much less straightforward. Explainability is a key ingredient in establishing trust between those who use AI and those who produce it. One can trust an algorithm for being accurate and reliable, that is, for predicting well and 'getting the job done', but in many circumstances this simply is not enough.

Faced with a trade-off between performance and explainability, how should firms and governments alike go about choosing one over the other? Furthermore, when it comes to risky contexts where lives and individual freedoms are at stake, how should regulators ensure that organizations make the right choices? And when algorithmic design can always be hidden under the veil of proprietary secrecy, what hope is there for fairness, justice, and transparency when they are needed most?

To fully realize the benefits that AI can bring, it is essential to move beyond simply exploiting its ability to predict, automate, and decide, towards a system in which different stakeholders trust the technology (and each other!). This is especially important in high-stakes environments, where the returns to trustable AI are magnified. But if adoption requires that users trust algorithms to predict with precision, sound judgment, and fairness, what does it really mean to trust AI?

According to Arun Das and Paul Rad of IEEE, the world's largest technical professional organization dedicated to advancing technology for the benefit of humanity:

“Trustability of artificial intelligence models is a measure of confidence, as humans, as end-users, in the intended working of a given model in dynamic real-world environments. Thus, ‘Why a particular decision was made’ is of prime importance to improve the trust of end-users including subject matter experts, developers, law-makers, and laypersons alike” (Das & Rad, 2020).

In this definition, explainability (why a decision was made) appears vital to trusting AI. But is explainability the only thing that matters? Is it synonymous with trust? Or is it just one of many components of trust? If so, how does explainability interact with the others?

According to McKinsey & Co., trust in AI algorithms rests on several components besides explainability, including performance, fairness, and transparency. Since each of these is intricately linked to explainability, having AI that is explainable becomes central and critical to producing trustable AI. With performance, for instance, whether measured by accuracy or precision, there is a potential trade-off with explainability, as discussed in the previous article. Then there is fairness, a measure of how free an AI's decision-making is from bias. Automated decisions can be skewed against certain groups or individuals, especially when sensitive variables such as gender, race, or disability are involved; but to gauge whether an algorithm biases decisions, it must first be understood. Lastly, even if underlying algorithms are deemed reliable and marketed as 'fair,' end-users may still fear that AI can be misused to exploit their personal information, collect and process more sensitive data than is necessary, or simply not be as bias-free as advertised. This underscores the value of transparency in how algorithms are used for decision-making; and just like fairness, it is difficult to be transparent when the inner workings of an algorithm are not well understood to begin with. Therefore, while performance may come at the cost of explainability, other components of trust such as fairness and transparency require it.

What’s preventing explainable AI?

If explainability is so crucial for end-users to build trust in AI, and if it matters even more in riskier cases, then what is stopping organizations from adopting it? Is it simply the performance trade-off? To understand this, we turn to the dynamics between different AI stakeholders (organizations, end-users, and regulators) and examine the familiar case of mortgage applications.

Organizations, usually firms or government institutions, produce AI for commercial, administrative, or policy-making purposes. They also use the technology to improve their goods, operations, and services. In the case of mortgages, these are the banks that use credit-scoring and risk-assessment algorithms to make automated loan decisions. They supply this "AI service" to end-users, in this case the loan applicants. For something as important as providing a home for your family, you would expect the bank to be able to explain its decisions, especially rejections. Even successful applicants offered a particular borrowing rate may want assurance that the right factors were considered and that sensitive attributes such as race or gender were left out. In the same way, regulators such as the U.S. Consumer Financial Protection Bureau want to make sure that consumers are treated fairly in such a process. But because of the classic trade-off between performance and explainability, a bank that invests in making its algorithms explainable may lose some degree of predictive ability, which can lead to a very costly increase in the number of defaults.

From the scenario above, we can postulate that end-users prefer products they can trust, while organizations face decisions about calibrating the level of explainability and performance needed for a trustworthy and reliable product. Furthermore, despite its appeal for establishing consumer trust, explainability does not automatically become a priority, since organizations have to account for other factors: the impact on profits from losing predictive capability, the feasibility constraints of implementing explainable AI, and the risk of underlying algorithms being replicated by competitors. For these reasons, if explainable AI is to become part of the status quo, regulators are left with the problem of creating the necessary incentives and standards for banks, technology firms, and all AI-producing bodies alike to invest in explainability.

Incentives for Firms

When the organization that supplies AI is a firm, its default goal is to maximize profit, so investing in explainability only makes sense if it improves the bottom line. Since profit is simply the difference between total revenue and total costs, firms will usually invest if one of two conditions is met. First, the marginal increase in trust produced by making AI more explainable (often at the expense of predictive ability) must create an advantage that drives revenue for the firm. Second, investing in explainability must reduce costs. To produce more explainable AI, organizations can ask their engineers to change data pre-processing techniques, embed new tools, use different datasets, and so on, all of which entail some costs in the early stages of AI development. But most of the expenses related to investments in explainability do not come from tuning algorithms to make them more explainable, fair, or transparent; rather, they come from the costs associated with regulatory compliance. Notably, a third, necessary but not sufficient, condition is that firms are able to facilitate adoption and ensure the correct use of AI by both their staff and, ideally, their customers.
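To make this calculus concrete, the two conditions can be collapsed into a single back-of-the-envelope inequality. The symbols below are purely illustrative placeholders and do not come from any of the studies cited here: ΔR_trust is the extra revenue attributable to increased end-user trust, ΔC_avoided the compliance costs avoided, C_XAI the cost of building and maintaining explainability, and ΔR_perf the revenue lost to reduced predictive accuracy.

\[
\underbrace{\Delta R_{\text{trust}}}_{\text{revenue gained from trust}} \;+\; \underbrace{\Delta C_{\text{avoided}}}_{\text{compliance costs avoided}} \;>\; \underbrace{C_{\text{XAI}}}_{\text{cost of explainability}} \;+\; \underbrace{\Delta R_{\text{perf}}}_{\text{revenue lost to reduced accuracy}}
\]

Only when the left-hand side plausibly exceeds the right-hand side does a profit-maximizing firm have a clear incentive to invest in explainability on its own.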

Demand for Trustable AI

In a study based on interviews with 4,400 end-users of AI, Capgemini found that end-users' views on ethics and AI threaten both company reputation and the bottom line: 41% said they would complain if an AI interaction resulted in ethical issues, 36% would demand an explanation, and 34% would stop interacting with the company. From this, we see that, broadly speaking, it is possible to lose revenue when an AI product or service is not explainable. But does the converse hold true? Do investments in explainability normally translate into more revenue? That depends on how explainability affects a product's competitive advantage, which in turn is a function of two things: how important explainable AI is to end-users in a given market, and how much more trustable AI can be when its predictive reliability is dampened to make it more explainable.

What this means for firms is that they have to understand just how important explainability is to their end-users, and whether investing in end-users' trust in AI can be a source of differentiation from the competition, and hence a possible source of competitive advantage. But while explainable AI can make an organization more competitive in this way, it could also lead to rivals learning about the intricacies of a firm's algorithm. If the inner workings of a firm's AI were made available, any advantage derived from intellectual property would be at risk. For this reason, Rudina Seseri, managing partner of Glasswing Ventures, insists that: “if a startup created a proprietary AI system, for instance, and was compelled to explain exactly how it worked, to the point of laying it all out — it would be akin to asking that a company disclose its source code.” She further posits that regulators who push for transparency should be vigilant, as such policies can stifle innovation by killing off startups whose value and ability to compete with incumbents is primarily driven by their intellectual property.

Firms are then left to weigh any potential upside of having explainable AI against the probability and consequences of competitors benefiting from learning their algorithmic blueprint. This is an opportunity for regulators to enact policies that protect proprietary algorithms while incentivizing firms to invest in explainable AI. But non-compliance with such policies entails costs for firms, from direct expenses such as fines to less direct ones such as the opportunity cost of not being able to offer certain goods, features, or services in a given market due to regulatory blockage.

Regulation and Costs

When firms face fines for non-compliance, investments in explainable AI are rarely worthwhile if they exceed those fines. What complicates matters further is that audit efficacy is sometimes affected by explainability. When it is not, regulatory audits will normally suffice to induce investment if the economics make sense. However, recent research by economists Xavier Lambin and Adrien Raizonville (2021) suggests that when explainability facilitates regulatory audits, firms may hide misconduct by making their algorithms less understandable to regulators. In other words, explainable AI could invite 'regulatory opportunism,' in which regulators focus their scrutiny on firms whose algorithms are simply 'easier to understand' than others, thereby making regulatory oversight counterproductive as an incentive for explainability. In summary, as far as costs are concerned, firms will invest in explainable AI if non-compliance costs generally exceed investment costs and if they operate in a regulatory climate where explainability does not facilitate audits ex ante. If the latter does not hold, however, firms need to trust that regulators will not engage in 'regulatory opportunism' once their algorithms are made explainable. To merit such trust, regulators also need to signal that they will not do so even if they technically could.
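In the same illustrative spirit as the earlier sketch, the cost side can be written as a simple expected-value condition. Here F (the fine or other non-compliance cost) and p(e) (the probability that non-compliance is detected, possibly increasing in the level of explainability e) are hypothetical symbols, not quantities drawn from Lambin and Raizonville's model.

\[
\text{invest in explainability if}\qquad C_{\text{XAI}} \;<\; p(e)\,F
\]

The wrinkle is the dependence of p on e: if opaque algorithms are harder to audit, a firm that keeps its algorithm obscure also lowers its detection probability, shrinking the right-hand side and weakening the incentive to invest, which is precisely the obfuscation problem described above.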

Incentives for Government Institutions

For government institutions that both produce and use AI, the dynamics are not too different from what we have seen with firms. While they may not need to generate revenue, these institutions still need to ensure that the AI-powered services they provide produce reliable predictions if made explainable. Since public services such as judicial systems, defense, healthcare, and transportation are all high-risk environments that can fundamentally affect the lives and freedoms of citizens, it is safe to say that in most cases the benefits of having a trustable, fair, and transparent algorithm outweigh the need for a high-performance one.

So when will government agencies not invest in explainable AI? An obvious case is when contracted firms do not provide such AI in the first place. Many government agencies use AI but do not necessarily produce it; instead, they contract these services from traditional for-profit firms. Hence, when such firms do not meet the conditions mentioned above, the government agencies that contract them are left to use what is available. Such is the case with COMPAS, a highly controversial AI solution currently used in criminal cases to predict a defendant's likelihood of re-offending. Despite several lawsuits and major public scandals, this judicial algorithm is still widely used in the United States because the company contracted by the government has not invested in explainable AI. This further underscores the need for regulators to exert the right pressure on firms to adopt explainability in high-risk environments.

Another reason for non-adoption of explainable AI in the public sector is when understanding the causes of a decision simply does not matter nearly as much as getting accurate, fast results. Examples include allocating emergency resources, rescue operations, tracking criminal activity, or administering radiology treatment. While these examples would not fall under limited or minimal risk in the European Commission's classification, speed and precision can outweigh the value judgments associated with explainability in such cases, bringing us back to the performance-explainability trade-off.

What’s next

Critical dialogue needs to happen between organizations, end-users, and regulators for explainable AI to become ingrained in the status quo. Just as organizations need to produce AI that end-users can trust, regulators need to earn the trust of firms that marginal investments in explainability will not be used opportunistically to increase scrutiny and will not result in the loss of intellectual-property advantages. Similarly, just as regulators need to signal to firms that they can be trusted to protect their interests should firms comply, end-users need to signal to organizations that trustworthy algorithms matter to them.

Bibliography

Benjamin, M., Buehler, K., Dooley, R., & Zipparo, P. (2021). What the draft European Union AI regulations mean for business. QuantumBlack, AI by McKinsey.

Burt, A. (2021, April 30). New AI Regulations Are Coming. Is Your Organization Ready? Harvard Business Review.

Cowgill, B., & Tucker, C. E. (2019). Economics, Fairness and Algorithmic Bias. SSRN Electronic Journal.

Das, A., & Rad, P. (2020). Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey.

Dhasarathy, A., Khan, N., & Jain, S. (n.d.). AI in the public sector. McKinsey & Company. Retrieved September 1, 2022.

Egglesfield, L., Cook, B., Golbind, I., & Ani, T. (2018). Explainable AI: Driving Business Value through Greater Understanding. PwC. 

Gade, K., Geyik, S., Kenthapadi, K., Mithal, V., & Taly, A. (2020). Explainable AI in Industry: Practical Challenges and Lessons Learned. Companion Proceedings of the Web Conference 2020, 303–304.

Raizonville, A., & Lambin, X. (2021). Algorithmic Explainability and Obfuscation under Regulatory Audits. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3958902

Rossi, F. (2019, February 6). Building Trust in Artificial Intelligence. Journal of International Affairs, SIPA.

Seseri, R. (2018). The problem with ‘explainable AI’. TechCrunch.

Stanton, B., & Jensen, T. (2021). Trust and Artificial Intelligence. NIST. 

Yong, E. (2018, January 17). A Popular Algorithm Is No Better at Predicting Crimes Than Random People. The Atlantic. 

Why Addressing Ethical Questions in AI Will Benefit Organizations. (2019). Capgemini Research Institute.
