The Artificial Pot Calls the Machine-Learning Kettle: Lessons Learned from South Africa’s Attempted AI Policy

Calvin Christopher 29 April 2026

The South African Department of Communications and Digital Technologies recently announced the withdrawal of its Draft National Artificial Intelligence (AI) Policy. The painful irony is that the policy was withdrawn after the document was found to contain fictitious citations, apparently the result of using AI in its drafting.

This prompted me to reflect on what the episode means for the use of AI in the legal, regulatory and compliance space, and to spare a thought for the South African government's efforts to take initial steps towards regulating AI.

Legal advisors have been hesitant to leverage the potential of AI in the preparation of legal work because of the potential for mistakes and inaccuracies. Chief among these is the risk of hallucination, which arises when an AI model produces fictional content in response to a request for factual information. This has, for example, resulted in AI inventing citations or referring to make-believe precedents.

The challenge is that, on the one hand, an AI model aims to please its user by delivering on the objective at all costs; on the other, its work product can look deceptively professional, making its mistakes difficult to catch.

For most firms in the financial services sector, best practice in the responsible management of AI begins with board oversight and a responsible AI policy framework. This needs to be supported by training, testing and risk assessment, to ensure effectiveness. Firms can also use verified data sets, human-in-the-loop workflows, and parameter tuning to reduce the risk of hallucination.
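To make the human-in-the-loop and verified-data-set ideas above concrete, here is a minimal sketch of how a firm might gate AI-proposed citations before a draft goes out. Everything here is a hypothetical illustration: the citation names, the `VERIFIED_CITATIONS` set and the `review_draft` helper are assumptions for the sketch, not a real firm's workflow or a real authority database.

```python
# Hypothetical human-in-the-loop citation check (illustrative only).
# Any citation not found in the verified data set is flagged for a
# human reviewer to verify before the draft can be relied upon.

VERIFIED_CITATIONS = {  # stand-in for a verified data set of known authorities
    "Smith v Jones 2001 (2) SA 123",
    "Brown v State 2010 (4) SA 456",
}

def review_draft(ai_citations):
    """Split AI-proposed citations into auto-accepted and flagged-for-review."""
    accepted, flagged = [], []
    for citation in ai_citations:
        (accepted if citation in VERIFIED_CITATIONS else flagged).append(citation)
    return accepted, flagged

accepted, flagged = review_draft([
    "Smith v Jones 2001 (2) SA 123",
    "Imaginary v Precedent 1999 (1) SA 1",  # a hallucinated authority
])
# Nothing in `flagged` should reach the final work product without
# manual verification by a qualified human reviewer.
```

The design point is the gate itself: AI output is treated as unverified by default, and a human sign-off is required for anything the verified data set cannot confirm.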

Although the bad press is unhelpful, it should not detract from the wider challenge facing South Africa and other governments when it comes to the regulation of AI.

On the one hand, it is a daunting task to write regulations for such a complex and fast-moving technology. On the other, it is becoming clear that it is irresponsible for governments to shy away from attempting to regulate the space. Anthropic, for example, has spoken publicly about self-imposed restrictions it adopted for its latest AI model, and has called for more progress on regulatory guardrails.

For countries like South Africa, there is the added challenge that, while we may be users of AI, we are not home to the large-scale, general-purpose AI platforms. South Africa may find itself caught in the AI arms race between the US and China. It has attempted to stake out a middle ground, signposting the benefits it seeks from AI while identifying the risks it seeks to avoid or mitigate.

There is also the question of whether the better approach is specific, AI-targeted regulation (similar to what the European Union has attempted), or a lattice approach that builds AI-related enhancements or adjustments atop existing technology-agnostic regulation (similar to the approach favoured by the United Kingdom).

How is your organisation navigating the uncertain landscape of AI governance? At Spencer West, we help businesses stay abreast of regulatory developments in artificial intelligence and advise them on implementing appropriate AI compliance and risk management frameworks.

Calvin Christopher
Partner (Non-practising) - Regulatory & Compliance
Calvin Christopher is a Partner (Non-practising) at Spencer West based in South Africa. He specialises in regulatory and compliance matters.