When choosing an AI technology for a given problem, we often spend a lot of time evaluating whether it is possible for the technology to get the job done.
In the public sector, this is not enough. We must also weigh several other aspects, such as transparency, equity, and risk mitigation and management.
Each of these aspects (transparency, equity, and risk management) translates into a technological criterion for evaluating AI:
Transparency --> Explainability: A technology is explainable if you can determine how a given output was produced or calculated from the provided input. Being able to explain why a technology produced a certain output is critical for ensuring that processes that use AI remain transparent.
Accountability, Fairness, & Equity --> Consistency: A technology is consistent if, given the same set of inputs, it always produces the same output. We want to ensure the technology treats two identical inputs in exactly the same way. Consistency also enables greater accountability, because it is difficult to be accountable for a system that can randomly produce differing results.
Risk Management --> Maturity: A technology is mature if it is currently relied upon and widely used by critical systems in the real world. Mature technologies are less risky because they have been tried and tested in production environments.
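To make the first two criteria concrete, here is a minimal sketch in plain Python (the feature names and weights are hypothetical, chosen only for illustration) of a scoring function that is both explainable and consistent: every score decomposes into per-feature contributions, and the same input always yields the same output because the function involves no randomness.

```python
# Hypothetical linear eligibility scorer (illustrative weights only).
WEIGHTS = {"income": 0.5, "debt": -0.3, "years_employed": 0.2}

def score(applicant: dict) -> tuple:
    """Return the total score and the contribution of each feature."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

applicant = {"income": 4.0, "debt": 2.0, "years_employed": 5.0}

# Explainability: the output is fully traceable to its inputs.
total, parts = score(applicant)
print(round(total, 2))  # 2.4
print(parts)            # per-feature breakdown of the score

# Consistency: identical inputs always produce identical outputs.
assert score(applicant) == score(applicant)
```

A deep neural network would fail the explainability half of this sketch (its output cannot be decomposed this simply), and a sampling-based generative model would fail the consistency half, which is exactly what the criteria above are designed to surface.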
In this blog we will explain how we evaluate six common kinds of AI technologies against the criteria of Explainability, Consistency, and Maturity. It is worth noting that, as technologies evolve, they tend to become more explainable, more consistent, and more mature.