Model Explainability in an Unregulated World

In my last blog on the topic of transparent AI, I wrote about the necessity of transparency in heavily regulated industries. This post is about the importance and value of model transparency in lightly regulated or unregulated industries, such as retail, digital advertising, and manufacturing. Companies in these industries want or require model transparency for several reasons that have nothing to do with regulation.

The first reason is internal buy-in. Once an idea becomes a model, some data science teams consider the mission accomplished. Others go a step further and only declare victory once the model is deployed and running in production. However, I would argue that the mission is still not complete. The most important piece is missing: is the company doing anything differently now that the model is in production? In other words, “Did it make an impact?”

Many times the answer is no. When an executive is asked why the model is not being used to make decisions, the usual answer is an unscientific, “I don’t trust the model.” Data scientists get frustrated when they hear this, but they should recognize that the answer itself comes from a place of frustration on the other side: frustration with AI investments that too often fail to deliver meaningful ROI.

A second point, related to the first, is that when a company actually acts on the insights these models produce, it wants as much insight as possible: the more information a company has, the more targeted the action can be. For example, an online retailer might have a model that predicts whether the items in a customer’s cart will be purchased or abandoned. Knowing the items are likely to be abandoned might trigger a generic response to nudge the customer toward the purchase, and that generic action will probably be more effective than no action at all. Now imagine that the model could not only predict the likelihood of the cart being abandoned, but also explain why it is making this prediction and suggest actions that have worked in similar scenarios. This can be achieved with model explainability techniques. That additional, specific insight is invaluable, because targeted actions are more personal and more effective at increasing sales.
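To make the idea concrete, here is a minimal sketch of per-prediction explainability. The feature names and data are invented for illustration, and a simple logistic regression stands in for whatever model the retailer actually uses; for a linear model, each feature’s contribution to the predicted log-odds is just its coefficient times its value, which is one of the simplest ways to answer “why this prediction?”

```python
# Hypothetical sketch: explaining a single cart-abandonment prediction.
# Feature names and training data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["cart_value", "num_items", "session_minutes", "has_coupon"]

# Synthetic training data: 500 sessions, label 1 = cart abandoned.
X = rng.normal(size=(500, 4))
y = (X[:, 2] - 0.8 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# For a linear model, each feature's contribution to the log-odds
# is simply coefficient * feature value.
x = X[0]
contributions = model.coef_[0] * x
logit = model.intercept_[0] + contributions.sum()

# Print features ranked by the size of their contribution.
for name, c in sorted(zip(feature_names, contributions),
                      key=lambda t: -abs(t[1])):
    print(f"{name:16s} {c:+.3f}")
print(f"P(abandon) = {1 / (1 + np.exp(-logit)):.2f}")
```

Instead of a bare probability, the business user sees which factors pushed this particular prediction up or down, which is exactly the kind of output that can drive a targeted action rather than a generic one.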

Finally, model transparency helps the data scientist understand which information is crucial and which additional information should be acquired. This makes better use of the data scientist’s time and helps surface problems such as data leaks and anomalous model behavior.
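A quick sketch of how transparency can surface a data leak: in this invented example, a feature (“refund_issued”) is only known after the outcome it supposedly predicts, so it is effectively a copy of the label. Permutation importance, one standard transparency technique, makes the leak obvious because that feature’s importance dwarfs everything else.

```python
# Hypothetical sketch: using permutation importance to expose a data leak.
# "refund_issued" is an invented feature that is only known after the
# outcome, so it leaks the label into the training data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 1000
genuine = rng.normal(size=(n, 3))  # legitimate features
y = (genuine[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)
leak = (y + rng.normal(scale=0.1, size=n)).reshape(-1, 1)  # near-copy of label
X = np.hstack([genuine, leak])

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)

names = ["tenure", "avg_order", "visits", "refund_issued"]
for name, imp in zip(names, result.importances_mean):
    print(f"{name:14s} {imp:.3f}")
```

A model that leans this heavily on one suspiciously predictive feature is a red flag worth investigating before anyone trusts its production accuracy.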

As discussed above, model transparency is an essential capability. It makes the data science team’s job easier while helping the models’ internal end users understand, and grow comfortable with, the business decisions those models drive.

So, whether your company is in a regulated or unregulated industry, AI transparency is a vital requirement of proper AI implementation within an organization.
