For the tech industry, Artificial Intelligence (AI) is an undoubted focal point.
The research and innovation taking place in this field is influencing industry verticals such as legal, financial and healthcare, among others. So much so that AI appears to be the foundation upon which a fundamental shift in operations is occurring.
But with new developments and breakthroughs come new problems and challenges.
As algorithms advance and automate ever more complicated decisions, it becomes more difficult to discern how they work.
In part, this is down to the reticence of the companies developing these algorithms to allow third-party scrutiny of their proprietary systems. Put simply, as the challenges they tackle become more complex, their inner workings become more opaque and less accountable.
AI’s big challenge
All this AI innovation is starting to pose a problem.
As more and more of our economic and social interactions – from mortgage applications to financial transactions, insurance policies to recruitment and legal processes – are carried out by AI systems, users naturally ask for an explanation of how a particular decision has been reached.