Artificial intelligence, or AI, is increasingly finding its way into the products, services, and decisions that we encounter each day. From chatbots to streaming platforms to automated vehicles, and even the tools that power law enforcement, loan approvals, and criminal convictions, AI has permeated deep into business and society. In fact, Grand View Research estimates that the market for AI-powered products will grow at an impressive CAGR of 57.2% from 2017 to 2025.1 With such technology so ubiquitous and influential in daily life, there has understandably been a recent push for regulation that mitigates some of the risks it comes with. One key agenda for policy is to find solutions for the ever-so-controversial ‘black box’ nature of AI.
In traditional computing parlance, a ‘black box’ is a program or system whose inputs and outputs are clearly defined but whose inner workings are not. AI is often called a black box because many of the tools and systems it fuels with immense predictive power come at the expense of opaque, incomprehensible results. This leaves both experts and the firms who rely on AI blind to how these sophisticated algorithms arrive at decisions that can dramatically affect the lives of many.2
Figure A: Illustration of the Black-Box Dilemma
Photo taken from: GoodFirms.co
Because of this dilemma, there has been growing demand for explainable AI. According to IBM, AI is deemed explainable when we can “comprehend and trust the results and output” created by its underlying algorithms. This generally involves understanding its decision-making process, its expected impact, and its potential biases. For example, if a bank uses AI to predict whether a mortgage application should be approved, a customer who finds themselves on the wrong end of a decision would conceivably want to know why they were rejected. If the bank’s technical experts are able to come up with an explanation, then we have AI that is explainable.3 However, this is rarely the case. More and more companies are relying on highly convoluted yet powerful models, powered by neural networks and the like. Where high-performing algorithms like these are involved, results become nearly impossible to explain due to their sheer complexity. This trade-off between performance and explainability is a key defining tension in the journey towards more explainable AI and will be discussed in more detail later on.
Presently, at least in the banking sector, governments around the world are setting regulatory boundaries for AI and explainability. This involves requiring some level of disclosure on the reasoning behind loan application decisions, putting pressure on the industry to incorporate explainable AI into its strategy and investment horizon.4 Unfortunately, for many business leaders across other, less regulated industries, explainability remains irrelevant. So long as their firm’s AI practice produces fruitful results for the business, there is little to no need to invest in opening that mystery ‘black box’ and confronting all the potential problems and trade-offs it could come with.
Understanding the Performance and Explainability Trade-Off
Making AI explainable generally hinges on aggregating what are called “model interpreters” to describe a decision process. To illustrate, take a simple scenario of AI trying to identify which images contain a cat. The algorithm maps the concept of a cat onto the same features that humans use to recognize the creature: it looks for groups of pixels that represent whiskers, a tail, and perhaps a collar. These are then understood as a collection of interpretable features that contributed to the final decision: cat or no cat.
Interpretability can therefore be understood as the extent to which a model can assign interpretable features to a prediction, and it is thus closely linked to explainability. But this association generally comes at a cost: the more complex a model, the more accurate it tends to be, but the less interpretable it becomes. DARPA, the research agency of the U.S. Department of Defense, illustrates this by mapping the relative explainability of different machine learning algorithms in Figure B below.
Figure B: Performance and Explainability Trade-Off
Photo by: DARPA
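Returning to the cat example above, here is a minimal sketch of what “interpretable features contributing to a decision” can look like in practice. It is not drawn from any of the article’s sources: the features (has_whiskers, has_tail, has_collar) and the toy data are hypothetical, and a simple logistic regression stands in for a real image pipeline so that each feature’s contribution to a single prediction can be read directly from its coefficient.

```python
# Illustrative sketch only: a linear classifier over hypothetical,
# human-readable features. Per-feature contribution to one prediction
# is simply coefficient * feature value.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["has_whiskers", "has_tail", "has_collar"]

# Toy training data: each row is one image described by extracted features.
X = np.array([
    [1, 1, 1],  # cat with a collar
    [1, 1, 0],  # cat without a collar
    [0, 1, 0],  # tail but no whiskers
    [0, 0, 0],  # neither
])
y = np.array([1, 1, 0, 0])  # 1 = cat, 0 = not a cat

model = LogisticRegression().fit(X, y)

# Explain one new image in terms of the same features.
new_image = np.array([[1, 0, 1]])  # whiskers and a collar, no visible tail
contributions = model.coef_[0] * new_image[0]

print("Prediction:", "cat" if model.predict(new_image)[0] == 1 else "not a cat")
for name, value in zip(feature_names, contributions):
    print(f"  {name}: contribution {value:+.2f}")
```

A linear model like this sits at the interpretable end of the spectrum in Figure B: every prediction decomposes into additive, named contributions, which is exactly what more powerful models give up.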
Because predictive power is linked with better business and organizational outcomes, firms and other AI-producing bodies can’t simply switch to more explainable algorithms. Left unregulated, profit-making organizations tend to engage in risky behavior that drives economic results at the expense of exposing them to unintended, often disastrous, consequences. Think of financial products, for instance. The rule of thumb in the trading industry is: the less oversight, the better the possible returns. But this is like playing Russian roulette. There is an intrinsic risk, and horrible outcomes will eventually surface - in this case, markets crashing. Regulation exists to prevent such occurrences.
Regulation in the context of AI and explainability will be discussed in the next article. Even in the absence of regulation, however, some firms still adopt practices and strategies to make AI more explainable, because they want to sustainably reap the benefits of this technology and avoid treading down the path of bankruptcy or self-destruction. While use cases will be discussed in the next sections, the first step for such firms is to clearly navigate the complexity associated with their algorithms of choice.5
How can we get around this complexity conundrum?
Complexity is driven by the type of model used. AI models expose parameters that can be tuned and adjusted to influence both their performance and their relative complexity. For decision tree algorithms, which are mostly used for classification decisions (reject or accept, cat or not a cat, etc.), the depth of a tree, the number of its leaves, or the number of trees in an ensemble are examples of such parameters. Decision trees, as can be seen from the figure above, are highly amenable to explanation. In Figure C, we can see an example of the output of a decision tree model used to predict customer churn. Technical experts can interpret this with the naked eye and readily understand the algorithmic decision process. For more complex trees, commercially available software can generate similar diagrams that outline the relevant decision process and trace clear paths to extract the key determinants of a prediction.
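As a minimal sketch of how readable such a model can be, the snippet below trains a shallow decision tree on hypothetical churn features and prints its entire decision logic as rules. The feature names and data are invented for illustration; the depth and leaf-count settings are the kind of complexity knobs mentioned above.

```python
# Illustrative churn-style sketch: a shallow decision tree whose full
# decision logic can be printed and read directly. Feature names and data
# are hypothetical; max_depth / max_leaf_nodes cap the tree's complexity.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["monthly_charges", "tenure_months", "support_calls"]

X = np.array([
    [70, 2, 5],
    [20, 48, 0],
    [90, 1, 7],
    [30, 36, 1],
    [80, 3, 4],
    [25, 60, 0],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = churned, 0 = stayed

tree = DecisionTreeClassifier(max_depth=2, max_leaf_nodes=4, random_state=0).fit(X, y)

# Print the tree as human-readable if/else rules, one line per split or leaf.
print(export_text(tree, feature_names=feature_names))
```

The printed rules are, in effect, the explanation: each prediction corresponds to one path from the root to a leaf.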
But for more complex models such as neural networks, the diagrams tend to become too large and convoluted to produce a useful graphical representation: they contain too many interacting nodes to be interpreted directly. Given these constraints, many businesses have developed (or are developing) software aimed at producing granular investigations that shed light on the decision paths of such complex models. One popular toolkit is IBM’s AI Explainability 360,6 an open-source library that helps users comprehend how machine learning models predict labels by various means throughout the AI application lifecycle. There are also companies that have successfully navigated this dilemma and produced remarkable explainable AI use cases in spite of the trade-off. Below, we will examine two such cases.7
Figure C: Decision Tree
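To give a flavor of the post-hoc tooling described above, here is a generic, model-agnostic sketch. It is not the AI Explainability 360 API itself, just scikit-learn’s permutation importance applied to a small neural network trained on synthetic data: shuffling one input at a time and measuring the drop in accuracy gives a rough view of which features the otherwise opaque model relies on.

```python
# Generic post-hoc sketch (not the AIX360 API): permutation importance
# applied to a small neural network on synthetic data. A larger accuracy
# drop when a feature is shuffled suggests the model leans on that feature.
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in data; real use would plug in the production model and features.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: mean accuracy drop when shuffled = {importance:.3f}")
```

Techniques like this do not open the black box itself, but they produce the kind of granular, feature-level evidence that investigators in the use cases below rely on.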
Use Case 1: Explainable AI in the Energy Sector
Commercial losses resulting from energy theft drain utilities worldwide, costing an estimated $89B per year and raising energy prices for customers. Electricity thieves use many methods to steal energy: they can tap the line between a transformer and a house, hook into a neighbor’s meter, tamper with their own meter so it under-records usage, and so on.
To reduce theft, a Revenue Protection Officer needs a comprehensive list of possible theft cases to investigate. Such a list can be generated by an ML model trained on smart meter data and external factors such as weather and environmental hazards. The tool must be flexible enough to adapt to evolving theft methods, able to identify the factors that increase the risk of theft, and able to pinpoint the location of each case.
Different types of theft require different investigative action. A tampered meter needs to be disconnected; when someone is stealing a neighbor’s power, the relevant authorities need to be informed and the meter changed; when the line is tapped at the transformer, a truck needs to be sent to the correct location; and so on. This requires a deep understanding of the model’s decision process, and hence there is a strong need for explainable AI to facilitate theft investigations. This case shows that there is intrinsic business value in making AI explainable.
Use Case 2: Explainable AI in Climate Prediction
Droughts are among the most challenging natural hazards to predict, owing to the multiple predictors involved and the non-linear relationships between them. A paper by Dikshit and Pradhan (2021) used explainable AI to help local governments provide more transparency to the public. Here, explainability was essential for interpreting drought-forecasting models so that future drought scenarios could be examined preemptively. This also provides a nice use case where the right balance of performance and explainability was achieved. The researchers used deep learning algorithms while incorporating techniques for explainability. Their solution had an accuracy of around 85%, which was sufficient for drought prediction. While further tuning of the parameters and model could have yielded higher accuracy, it would have risked convoluting the results to the point where explanation became impossible.8
When does AI explainability ultimately matter? A European solution
The European Union became the first regulatory body to issue an exhaustive set of regulations on AI systems. The objective, as portrayed by the EU, is to encourage AI excellence alongside Trustworthy AI. On one hand, the Commission plans to invest €1 billion per year in AI to encourage and spread the usage of the technology; on the other, it has proposed three legal initiatives to regulate AI. One of these initiatives is a framework that protects fundamental rights and addresses safety risks.9
A report by McKinsey found that only 48 percent of organizations report recognizing regulatory compliance risks, and even fewer (38 percent) report actively working to address them. The study emphasizes that AI systems have revolutionized how companies perform and have a fundamental impact on consumers across all industries.10 Yet the firm’s analysts also insist that there is still a large opportunity to push the boundaries of innovation and value creation in AI if the inherent risks associated with it are addressed. Inevitably, one of these is un-explainability. The same research highlights that companies with the highest AI returns are more likely to have addressed the technology’s risks.
From McKinsey’s research, we see that investments in explainability can have a large upside even from a purely business standpoint. But from the EU’s risk classification in Figure D below, we can also infer that explainable AI is not always necessary. Because there are trade-offs associated with the performance of algorithms, it may be prudent to apply explainability where it matters most, such as in applications deemed “Unacceptable” or “High-risk” in that classification: those where safety, individual freedoms, lifelong economic opportunities, and other high-stakes affairs are on the line. For more casual, commercially oriented applications, such as chatbots, automated emails, and inventory optimization, which would be classified as “limited” or “minimal risk,” it would make sense to prioritize AI that makes strong predictions over AI that is more explainable.11
In any case, the next article in this series will shed light on how companies that utilize AI can be incentivized to build trust with consumers, central to which is having explainable AI, in spite of this performance-explainability trade-off. It will address questions such as “what is truly preventing explainable AI from becoming part of the status quo?” and “what are the different incentives that each stakeholder faces?”, from firms, to government bodies, to regulators, to AI users. Finally, it will push for an agenda in which each stakeholder can learn to trust the others, given the constraints and incentives they face, in order to facilitate the critical dialogue that leads to explainable AI being found where it matters most.12