Artificial intelligence (AI) is becoming big business, with all kinds of fascinating opportunities. Growth has been extraordinary: in 2015, global AI revenues were $126 billion, and last year revenues were $482 billion. The prediction for 2024 is that revenues will top $3.061 trillion.
Advances in AI are making it possible for computers to take on more tasks that were formerly done by humans. While this trend is creating greater efficiencies, it is also increasing the degree to which people feel that they are talking to a wall. Just think about the difference between speaking to a live person on the phone and navigating a series of menu options, which by the way have recently changed, so please listen carefully.
The role of AI in financial services is large. In 2017, about 85% of online payment and credit card transactions relied on AI techniques. As fintech grows, there will be times when the “intelligence” in “artificial intelligence” might seem like a misnomer, leading people to feel justified in saying “go figure.” This is because some AI can act like a “black box,” meaning that its processes are opaque and difficult for anyone to understand, including AI developers. With “black boxes” and “go figure” comes frustration.
Frustration is an emotional response, and the emotion is fueling a demand for explainable AI. I recently watched an excellent online tutorial about explainable AI entitled “Explainable AI in Industry.” The tutorial was presented by two groups at a recent KDD conference on AI and Data. One presenting group was from Fiddler Labs, a firm that specializes in explainable AI, and the other was from LinkedIn’s Fair Machine Learning team.
One particular slide in the online tutorial caught my attention, about users of new machine learning models asking themselves: “When can I trust you?” with the “you” being the models. As an exciting Silicon Valley startup, Fiddler Labs understands that it is a technology firm whose products deliver trust.
Emotions, like trust, are part of our psychology. Broadly speaking, trust is an essential component of exchange in modern economies. Most people do not have sufficient knowledge to second-guess those they regard as experts. People rely on the advice of physicians, accountants, dentists, mechanics, and contractors. More often than not, they trust the advice they receive. That trust often stems from past experience. However, in the absence of experience, trust often stems from referrals, which in turn are based on what we hear from others whom we trust.
It is important to think about trust in the context of explainable AI. In general terms, we can break trust down into four components. Those components can be summed up as:
- benevolence;
- ethics;
- competence; and
- predictability.
People will distrust AI if they regard it as a foe, not a friend. People will distrust AI if they believe that it is structured to operate in violation of ethical norms. People will distrust AI if they feel the algorithms lack competence. Finally, people will distrust AI if they find its results to be unpredictable.
Concern about the effect that AI is having on our lives is becoming so great that there is a bipartisan effort in Congress to deal with it through regulation. This past June 26, the House Financial Services Committee’s Task Force on Artificial Intelligence held a hearing titled, “Perspectives on Artificial Intelligence: Where We Are and the Next Frontier in Financial Services.”
A memorandum written by the Financial Services Committee Majority Staff raised the issue of “explainable AI,” expressing “potential concern for financial service applications” with respect to “opacity and explainability of the systems.” By explainability, they meant the ability to “explain decisions and actions to human users.”
Psychologist Paul Slovic tells us that people’s perceptions of risk tend to be driven by two features. The first is dread, the extent to which people dread the potential outcome of a risky situation. The second is knowledge based on familiarity, the extent to which people feel they understand the risk.
People will lack trust in activities they perceive to be risky because they realize that they do not understand the associated risk and dread the potential consequences.
To take a nonfinancial example, AI underlies autonomous vehicles, meaning self-driving cars. People react with dread to incidents of pedestrians being killed by autonomous vehicles. Not understanding why a vehicle’s sensors failed to recognize a pedestrian in the car’s path compounds the sense of risk.
Of course, the Financial Services Committee is focusing on the financial sector, not the automobile sector. That is why the Committee invited testimony from Jesse McWaters, who works for the World Economic Forum on issues associated with financial innovation. McWaters’s focus is “on exploring the role that new technologies, including artificial intelligence, are playing in the rapid transformation of the financial system.” He testified about trust, noting the importance of the “context, transparency and control deemed necessary to have informed trust in a model for a particular use case.” Bonnie Buchanan, an academic from the University of Surrey, also mentioned trust in her testimony, connecting it to issues of governance.
The financial stakes associated with AI are great. Trust is part of those stakes. Trust is more easily lost than gained. That is why making AI explainable is so critical, for both financial and ethical reasons.