Can we really trust AI when it comes to accounting?
In Isaac Asimov’s I, Robot, the famous “Three Laws of Robotics” were meant to safeguard humanity from the potential perils of AI, ensuring machines would always act ethically.
Yet, as the stories unfold, these laws are tested, bent, and even broken, revealing the fragility of ethical frameworks when applied to artificial minds. Fast forward to today, and what was once science fiction is inching closer to reality.
The use of AI in everything from decision-making to data processing has opened up a Pandora’s box of ethical dilemmas—questions that even the most sophisticated algorithms struggle to answer.
Imagine an AI entrusted with making hiring decisions, armed with vast datasets of past employee performance. The risk? Biases deeply ingrained in the data, subtly nudging the AI to favour one demographic over another, much as HAL 9000 in 2001: A Space Odyssey malfunctioned, prioritising its own directive over human welfare.
This scenario raises the question: as we hand over more responsibility to AI, can we ensure that these systems will act ethically, or are we merely constructing digital overlords with flawed moral compasses?
A recent Accountancy Age broadcast featuring Tim Baker, CEO of Kloo, and Sean Smith, Accountant Evangelist at Sage, discussed why accountants must keep ethics front of mind when integrating AI tools into their tech stack.
Smith noted that professionals are particularly concerned about data handling and the sources of AI-generated information.
This trust extends beyond just data handling. It encompasses the reliability of AI-generated insights, the transparency of AI decision-making processes, and the assurance that these systems align with professional and ethical standards. Baker emphasised AI’s capability to comprehend and supplement invoice data, showcasing both the potential and the need for verification in AI-generated information.
The accounting profession doesn’t operate in a vacuum, and the regulatory landscape for AI is rapidly evolving. In Europe, the EU AI Act is the first comprehensive law to govern AI across sectors, including financial services. It categorises AI applications by risk levels, with high-risk systems like those used in credit scoring and risk assessments facing strict scrutiny.
In contrast, the UK is adopting a principles-based, pro-innovation approach led by the Prudential Regulation Authority (PRA) and the Financial Conduct Authority (FCA). The US, while lacking federal AI regulation specific to finance, is addressing AI risks through agencies like the Consumer Financial Protection Bureau (CFPB) and the Securities and Exchange Commission (SEC).
Data security forms the foundation of trust in AI systems. The sensitive nature of financial data demands robust protection measures. Emerging technologies like federated learning, homomorphic encryption, differential privacy, and secure multi-party computation offer promising solutions.
These advanced methods allow for AI model training and data analysis while maintaining strict privacy controls. For instance, federated learning enables model training on decentralised data without moving it to a central server, while homomorphic encryption allows computations on encrypted data without decryption.
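The decentralised principle behind federated learning can be illustrated with a toy sketch. This is not a production implementation (real federated learning also involves iterative model updates and secure aggregation); the firm names and invoice figures below are purely hypothetical, used only to show that each site shares aggregates rather than raw records.

```python
# Hypothetical sketch: two accounting firms contribute to a combined
# average invoice value without either firm's raw invoice data ever
# leaving its own systems -- only local aggregates are shared.

def local_summary(invoices):
    """Computed on-site: return (sum, count); raw invoices stay local."""
    return sum(invoices), len(invoices)

def federated_average(summaries):
    """Run by the coordinator: combine per-site aggregates only."""
    total = sum(s for s, _ in summaries)
    count = sum(c for _, c in summaries)
    return total / count

# Illustrative figures only.
firm_a_invoices = [120.0, 80.0, 100.0]
firm_b_invoices = [200.0, 300.0]

global_avg = federated_average(
    [local_summary(firm_a_invoices), local_summary(firm_b_invoices)]
)
# global_avg == 160.0, yet neither firm has seen the other's invoices
```

The same pattern scales up in genuine federated learning, where the shared artefacts are model-weight updates rather than simple sums, but the privacy rationale is identical: computation travels to the data, not the reverse.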
Implementing such technologies can help accounting firms address the concerns raised in the broadcast about data privacy and security. It allows for the benefits of AI-driven insights without compromising on data protection.
The integration of AI into accounting practices is also transforming audit procedures. AI-enhanced audit processes are improving speed, accuracy, and depth of analysis. The broadcast highlighted the challenge many organisations face in measuring the real impact of AI beyond basic metrics, suggesting that AI could help bridge this gap in performance evaluation.
Real-time auditing, enabled by AI, is replacing traditional periodic approaches. This allows for continuous monitoring and immediate insights into discrepancies or potential risks. Moreover, as organisations deploy AI systems, auditors are now expected to evaluate AI governance frameworks, ensuring compliance with regulations and ethical guidelines.
Perhaps most intriguingly, new methods are being developed to audit AI systems themselves. This involves evaluating algorithms' decision-making processes and the quality of the data used for training, and ensuring transparency in how AI models function.
Despite the advanced capabilities of AI, the broadcast emphasised the importance of thoughtful implementation, where AI supports rather than replaces human judgment. This echoes the sentiment across the industry that AI should be viewed as an augmentation tool rather than a replacement for human expertise.
The challenge lies in striking the right balance. The speakers cautioned about the practical challenges of implementing highly personalised AI solutions, highlighting the need to be mindful of the complexities involved in integrating AI into diverse accounting roles and organisations.
To watch the full live broadcast, click here.