Artificial Intelligence (AI) is rapidly transforming how Australian businesses compete in today's market. It streamlines operations and enhances decision-making, offering efficiency and innovation at an unprecedented scale. However, the risks associated with AI are often underestimated, especially as directors remain bound by their duties under the Corporations Act 2001 (Cth), even when decisions involve algorithms or AI support.
Under these duties, directors must exercise care and diligence, act in good faith in the best interests of the company, and act for a proper purpose. AI complicates compliance because it can generate decisions and content based on unverified information and can reproduce bias or discrimination present in its sources. Directors who place too much trust in AI without appropriately checking its output before using it in any capacity risk breaching their duties.
That’s not to say that directors shouldn’t use AI to assist them in decision-making. However, AI should never be the sole basis for a decision. Directors need to apply human judgment alongside AI systems and verify AI-generated information against reliable sources.
A Quick Path to an Easy Marketing Plan — or a Fast Track to Legal Trouble?
So, how should directors mitigate any risks of breaching their duties when using AI?
Place yourself in the shoes of a sole director of a company using AI to create a marketing campaign for a new product. You’ve heard through connections how efficient and innovative a particular AI platform is for marketing purposes. Your connections have even shown you examples, so naturally you come to trust its reliability and precision. You enter your prompts, and the platform generates a marketing plan for the new product.
What you didn’t realise is that the AI drew on opinions from anonymous Reddit threads about similar products, fake product reviews, and biased influencer commentary. Orders come in, and after a few weeks customers begin complaining that the claims made about the product are false and that it does not work as advertised. The company’s reputation deteriorates, and a frustrated client commences legal proceedings against the company for negligent and deceptive advertising. You, as the director, are joined to the proceedings for facilitating the negligent use of AI. After legal costs and a loss of customers, the company cannot recover and ultimately enters an insolvency restructure.
This is just one of many risks businesses take when incorporating AI into decision-making, and it is a realistic scenario for many directors and their businesses. In this example, the director would be in serious breach of their legal duties, exposing both the company and themselves to an array of legal claims.
Approving unverified AI-generated material without fact-checking it against known and reliable sources is a clear breach of a director’s duty to exercise care and diligence. Although this is one specific example, the risk of AI use contributing to a business’s insolvency exists in any business using an AI platform that draws on an unrestricted range of information sources, that is, a platform not limited to information fed directly to it. The exposure may arise from something as simple as using AI to create content, develop products, or produce formal documents.
Mitigating Risks
These outcomes could have been avoided, and future risks mitigated, had the business adopted an internal AI checklist. Its contents will vary from business to business, but it may include:
- Establish risk management strategies to identify and mitigate AI-specific risks.
- Incorporate human judgment by reviewing and verifying all AI-generated information against reliable sources before it is used.
- Disclose and document when AI is being used, to foster transparency and trust.
- Establish a process for individuals affected by AI systems to raise complaints about AI-assisted decisions.
- Ensure that staff and management understand that delegating tasks to AI is not the same as delegating to employees.
- Retain director oversight: obtain AI training and independent advice where knowledge is limited, and seek regular reports on the accuracy and incident history of the AI platform in use.
- Maintain clear records of AI usage and compliance for accountability purposes.
- Conduct conformity assessments to confirm compliance when deploying AI.
Conclusion
The risks associated with the use of AI are not hypothetical or distant concerns; they are immediate and real for companies implementing AI technologies, and they can quickly escalate into significant legal consequences and even insolvency. It is therefore essential that company directors adopt a proactive approach to assessing and mitigating AI-related risks, both to ensure compliance with their legal duties and to safeguard the company’s reputation and long-term goals.