
It’s Not the Algorithm, It’s the Ethics
This article was originally published in the Journal of Financial Compliance, Volume 6, Number 3.
Abstract
Machine learning and artificial intelligence (ML/AI) technologies have transformed nearly every industry, helping to realize unprecedented efficiency and effectiveness in a variety of tasks once thought the exclusive domain of humans. The financial compliance industry, however, lags its peers in adopting ML/AI tools, despite their being readily available and promising to reduce costs for financial institutions. This paper argues that the reason for the delay in adoption is not ignorance of the technology but the lack of a moral consensus around its use in financial compliance. It explores the ethics and morality behind the adoption of ML/AI tools and why compliance professionals are discouraged from incorporating them into their compliance programs. The paper introduces the trolley car problem and shows how it explains the lack of a moral consensus on the use of ML/AI in compliance. It then explores why, even though machines today can pass the Turing test, they remain incapable of making moral judgments, meaning humans stay responsible for the actions taken by ML/AI. This places an unprecedented burden of moral decision-making on compliance officials, without any real benefit to those who want to do good. The paper argues that if regulators shifted the incentive structure away from conformity and towards saving lives, making that the moral regime guiding the use of ML/AI, technology adoption would increase and allow the compliance industry to change the world for the better.