Shadow AI in South African Businesses: Why Banning AI Doesn’t Work

South African companies are experiencing a surge in the use of generative AI tools like ChatGPT and Microsoft's Copilot. A recent study, which surveyed more than 100 large enterprises nationally, found that about 67% are already using GenAI in some form, up from 45% in 2024. Yet most of these organisations lack formal guidance on AI use – in 2025, only 15% had an official policy for such tools. This gap has fuelled a growing trend of "shadow AI," where employees use AI tools unofficially, whether the company ignores the technology or outright bans it. Simply banning AI in the workplace is proving ineffective and potentially harmful. Instead, businesses need to acknowledge the reality of AI usage and respond with a clear strategy, staff training, and strong guardrails. Below, we explore how widespread unofficial AI use has become, why banning it doesn't solve the problem, and how South African firms can proactively and safely integrate AI into their operations.

Shadow AI: Unofficial AI Use on the Rise

Many employees are quietly using generative AI tools at work without official approval, a phenomenon dubbed "shadow AI." This unofficial experimentation is on the rise in South Africa, as staff turn to AI assistants like ChatGPT to boost productivity even when their companies lack formal AI policies or strategies. While this shows enthusiasm for AI's potential benefits, it also raises concerns about data security, compliance risks, and the absence of oversight. Employers are increasingly discovering that AI adoption is happening regardless of official plans, underscoring the need to catch up with governance and clear guidelines.

According to the South African Generative AI Roadmap 2025, roughly one-third of surveyed businesses already have employees using GenAI tools informally, without company sanction. This share has climbed sharply from about a quarter of firms the year before, rising from 23% in 2024 to 32% in 2025. The same World Wide Worx/Dell study found that only 15% of organisations have an official policy for AI tool usage, highlighting a significant governance gap. In practice, many staff are experimenting with AI on their own – often through personal accounts or unvetted apps – because formal strategies and oversight have yet to catch up. It's a pattern seen elsewhere, too: one global security survey found that companies deploy dozens of AI tools on average, yet 90% of those run without any formal IT approval. These figures underscore how rapidly AI use is outpacing policy, and they reinforce the urgent need for businesses to establish clear AI guidelines and oversight.

Why are employees doing this? Quite simply, many find these AI tools enormously helpful in their day-to-day work. For example, Capitec Bank reported significant time savings from Microsoft's Copilot assistant, cutting a financial reconciliation process from six hours to just one minute. With results like that, it's easy to see why workers are tempted to use AI even where company policy is silent or prohibitive. They might use personal devices or home computers to access AI tools, creating an "AI underground" of employees solving problems with ChatGPT, Bard, or other platforms outside official channels. The risk is that this happens without any oversight or guidance, which can lead to mistakes or security breaches.

Why Banning AI Backfires

Some companies have reacted to AI's risks by attempting outright bans – forbidding employees from using tools like ChatGPT on work devices or with work data.
However, simply banning AI is not an effective solution. As the data shows, employees often circumvent bans when they see clear benefits. Generative AI's value is simply too high for a blanket ban to be realistic, especially as competitors embrace the technology. When organisations say "no AI," they tend to drive the behaviour underground rather than stop it.

Banning AI can actually increase a company's exposure to risk. If employees feel they have to hide their AI usage, they won't ask for guidance or permission, and they may use tools in insecure ways. For instance, an employee might paste sensitive client information into a public AI service from home – a dangerous practice that could violate South Africa's data protection laws. Under POPIA (the Protection of Personal Information Act), even a single instance of personal data being fed into a public AI model without authorisation could constitute a breach, with serious financial and legal repercussions. Without approved tools or clear policies, workers might inadvertently share confidential data with AI platforms or rely on unverified AI outputs, leading to errors. In other words, a ban doesn't prevent the behaviour – it prevents the organisation from managing it.

There's also a competitive dimension: companies that ban generative AI outright risk falling behind more innovative rivals. If your staff are forbidden from using tools that boost productivity while other firms are leveraging them (with proper safeguards), you could be at a disadvantage. It's telling that 84% of South African businesses surveyed say that oversight of AI use is an important or very important success factor for GenAI deployment. Businesses increasingly recognise that the answer is not to ignore or suppress AI, but to supervise and manage its use. As tech researcher Arthur Goldstuck warns, many companies are enthusiastically adopting AI "in a regulatory and ethical vacuum" – and "the longer this continues, the more harm can be caused… before these guardrails are in place." Simply put, the genie is out of the bottle with AI adoption, and banning it won't put it back in. The smarter move is to guide how it is used.

The Need for a Formal AI Strategy

The first step to regaining control is for businesses to develop a formal AI strategy. Shockingly, only 14% of South African companies have a defined company-wide GenAI strategy in place. A further 22% have a strategy for specific divisions only, while the majority have nothing concrete yet. In many firms, the adoption of AI has outpaced any kind of plan or policy. Goldstuck noted that what is most startling is that many companies think using a GenAI tool is the same as having an AI strategy.