Shadow AI in South African Businesses: Why Banning AI Doesn’t Work

South African companies are experiencing a surge in the use of generative AI tools like ChatGPT and Microsoft’s Copilot. A recent study, which surveyed over 100 large enterprises nationally, found that about 67% are already using GenAI in some form, up from 45% in 2024. Yet most of these organisations lack formal guidance on AI use – in 2025, only 15% had an official policy for using such tools. This gap has fuelled a growing trend of “shadow AI,” where employees use AI tools unofficially, even where the company ignores the technology or bans it outright. Simply banning AI in the workplace is proving ineffective and potentially harmful. Instead, businesses need to acknowledge the reality of AI usage and respond with a clear strategy, staff training, and strong guardrails. Below, we explore how widespread unofficial AI use has become, why banning it doesn’t solve the problem, and how South African firms can proactively and safely integrate AI into their operations.

Shadow AI: Unofficial AI Use on the Rise

Many employees are quietly using generative AI tools at work without official approval, a phenomenon dubbed “shadow AI.” This unofficial experimentation is on the rise in South Africa, as staff turn to AI assistants like ChatGPT to boost productivity even when their companies lack formal AI policies or strategies. While this shows enthusiasm for AI’s potential benefits, it also raises concerns about data security, compliance risks, and the absence of oversight. Employers are increasingly discovering that AI adoption is happening regardless of official plans, underscoring the need to catch up with governance and clear guidelines.

According to the South African Generative AI Roadmap 2025, roughly one-third of surveyed businesses already have employees using GenAI tools informally without company sanction – a share that has climbed sharply from 23% in 2024 to 32% in 2025. The same World Wide Worx/Dell study found that only 15% of organisations have an official policy for AI tool usage, highlighting a significant governance gap. In practice, many staff are experimenting with AI on their own – often via personal accounts or unvetted apps – because formal strategies and oversight have yet to catch up. It’s a pattern seen elsewhere, too: one global security survey found that companies deploy dozens of AI tools on average, yet 90% of those run without any formal IT approval. These statistics underscore how rapidly AI use is outpacing policy, and they reinforce the urgent need for businesses to establish clearer AI guidelines and oversight.

Why are employees doing this? Quite simply, many find these AI tools genuinely helpful in their day-to-day work. For example, Capitec Bank reported significant time savings by using Microsoft’s AI Copilot assistant – cutting a financial reconciliation process from six hours to just one minute. With results like that, it’s easy to see why workers are tempted to use AI, even if company policy is silent or prohibitive. They might use personal devices or home computers to access AI tools, creating an “AI underground” of employees solving problems with ChatGPT, Bard, or other platforms outside official channels. The risk is that this happens without any oversight or guidance, which can lead to mistakes or security breaches.

Why Banning AI Backfires

Some companies have reacted to AI’s risks by attempting outright bans – forbidding employees from using tools like ChatGPT on work devices or with work data. However, simply banning AI is not an effective solution. As the data shows, employees often circumvent bans when they see clear benefits. Generative AI’s value is just too high for a blanket ban to be realistic, especially as competitors embrace the technology. When organisations say “no AI,” they may drive the behaviour underground rather than stop it.

Banning AI can actually increase a company’s exposure to risk. If employees feel they have to hide their AI usage, they won’t ask for guidance or permission, and they might use tools in insecure ways. For instance, an employee might paste sensitive client information into a public AI service from home – a dangerous practice that could violate South Africa’s data protection laws. Under the Protection of Personal Information Act (POPIA), even a single instance of personal data being fed into a public AI model without authorisation could constitute a breach, with serious financial and legal repercussions. Without any approved tools or clear policies, workers might inadvertently share confidential data with AI platforms or rely on AI outputs without verification, leading to errors. In other words, a ban doesn’t prevent the behaviour – it prevents the organisation from managing it.

There’s also a competitive aspect: companies that ban generative AI outright risk falling behind more innovative rivals. If your staff are forbidden from using tools that boost productivity, while other firms are leveraging them (with proper safeguards), you could be at a disadvantage. It’s telling that 84% of South African businesses surveyed say that oversight of AI use is an important or very important success factor for GenAI deployment. Businesses increasingly recognise that the answer is not to ignore or suppress AI, but to supervise and manage its use. As tech researcher Arthur Goldstuck warns, many companies are enthusiastically adopting AI “in a regulatory and ethical vacuum” – and “the longer this continues, the more harm can be caused… before these guardrails are in place.” Simply put, the genie is out of the bottle with AI adoption, and banning it won’t put it back in. The smarter move is to guide how it’s used.

The Need for a Formal AI Strategy

The first step to regaining control is for businesses to develop a formal AI strategy. Shockingly, only 14% of South African companies have a defined company-wide GenAI strategy in place. A further 22% have some strategy but only for specific divisions, while the majority have nothing concrete yet. In many firms, the adoption of AI has outpaced any kind of plan or policy. Goldstuck noted that what’s most startling is that many companies think using a GenAI tool is the same as having an AI strategy. It’s not. Buying licences for ChatGPT or allowing Copilot in Office 365 is not a strategy – it’s a tactic. A true AI strategy means understanding where AI can add value, how to integrate it into business processes, and how to mitigate risks and ethical concerns.

Without a clear strategy, companies are essentially “walking blindfolded” into an AI-driven future. They might achieve quick wins with AI, but lack alignment with broader goals or preparedness for AI’s pitfalls. Leadership must define why and how the organisation will use AI. This includes setting priorities (e.g. enhancing customer service vs. automating internal tasks), deciding which AI tools are acceptable, and aligning AI initiatives with compliance requirements and company values. A formal strategy also assigns ownership: a named leader responsible for AI oversight. Currently, in many SA businesses, the CIO or IT leaders are taking charge of AI governance, but 9% report that no one is formally in charge at all. An AI strategy should establish clear accountability for monitoring AI projects and usage.

Training Employees and Establishing Guardrails

Having a strategy is not enough on its own – companies must also invest in training and clear guardrails to support responsible AI use. A striking 87% of South African businesses say they have committed to GenAI upskilling or training for employees, recognising that people need new skills to use AI tools effectively. Training programmes should educate staff on how to use AI for their tasks, how to fact-check AI outputs, and, importantly, how to avoid pitfalls like sharing confidential data or relying on AI for decisions beyond its capabilities. Enhancing employees’ skill sets is essential to getting the most out of any GenAI deployment – as one Dell Technologies executive put it, without investing in people and processes, the risk of “shadow AI” only increases, along with the likelihood of reputational and operational fallout from misuse. In other words, if staff aren’t taught how to use AI properly under guidance, they’ll likely use it improperly in secret.

Guardrails are the policies and controls that ensure AI is used safely, ethically, and in compliance with regulations. Unfortunately, only 13% of enterprises have implemented comprehensive AI guardrails – such as safety protocols, privacy protections, and bias mitigation measures – to govern AI use. Every company using AI should define clear principles and rules: for example, guidelines on what data employees can input into AI systems, restrictions on using AI for sensitive tasks, and processes for reviewing AI-generated content. Implementing access controls or approved AI platforms is another key guardrail – providing a secure, company-sanctioned tool can dissuade employees from turning to unvetted apps. Strong governance not only reduces risks like data leaks or compliance violations, but it also builds trust in AI initiatives. When employees see that the company has clear rules and is looking out for ethical and legal issues, they’re more likely to engage with AI responsibly rather than recklessly. As Goldstuck cautions, “without governance, organisations are walking blindfolded into a future shaped by AI… that is not sustainable.”
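To make the input-screening guardrail concrete, here is a minimal sketch, in Python, of the kind of check a company-sanctioned AI gateway might run before a prompt leaves the network. Everything here is an illustrative assumption rather than a reference to any real product: the `BLOCKED_PATTERNS` rules, the `submit_to_ai` wrapper, and the `call_approved_model` stub are hypothetical, and a production deployment would rely on a proper data-loss-prevention service rather than two regexes.

```python
import re

# Illustrative patterns for data that should never reach an external AI
# service. These two regexes are assumptions for the sketch, not a vetted
# rule set; a real deployment would use a data-loss-prevention service.
BLOCKED_PATTERNS = {
    "SA ID number": re.compile(r"\b\d{13}\b"),  # South African IDs are 13 digits
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any blocked patterns found in the prompt."""
    return [name for name, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(prompt)]

def call_approved_model(prompt: str) -> str:
    """Stand-in for a company-sanctioned AI endpoint (an assumption)."""
    return f"[model response to {len(prompt)} characters of input]"

def submit_to_ai(prompt: str) -> str:
    """Refuse prompts that appear to contain personal data; forward the rest."""
    violations = screen_prompt(prompt)
    if violations:
        raise ValueError(
            f"Prompt blocked, possible personal data detected: {violations}. "
            "Redact it before resubmitting."
        )
    return call_approved_model(prompt)

print(submit_to_ai("Summarise our Q3 customer-churn trends."))   # allowed
# submit_to_ai("Client 8001015009087 owes R12,000")              # would raise
```

The design point worth noting is that the check sits in front of the approved tool rather than in a report after the fact: the employee gets immediate, actionable feedback, which is exactly the guidance a blanket ban removes.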

To safely harness AI’s benefits, experts recommend that companies take the following steps:

  1. Develop a formal AI policy and strategy – Define how AI will be used to meet business goals, who is responsible for oversight, and what is and isn’t allowed. With only 14% of firms having a company-wide roadmap in place, closing the strategy gap is urgent.

  2. Train and upskill your staff – Equip employees with the knowledge to use AI tools effectively and ethically. Regular training can cover how to interpret AI outputs, protect sensitive data, and follow company AI guidelines. Most SA businesses realise this need, with 87% committing to GenAI training initiatives.

  3. Establish clear guardrails – Implement governance frameworks and ethical guidelines for AI use. This includes privacy safeguards (critical since about 29% of businesses cite data privacy concerns around AI), accuracy checks, and bias mitigation. Also, ensure compliance with local laws like POPIA when handling personal data. Set up approval processes or monitoring for AI-driven projects to catch issues early (a minimal audit-logging sketch follows this list).

  4. Foster an open culture around AI – Encourage employees to discuss their use of AI tools with managers instead of hiding it. Make it easy for staff to ask questions about what’s acceptable. An open dialogue can help identify shadow AI usage and bring it under the official policy, so that it can be guided rather than ignored.
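Picking up on the monitoring suggestion in step 3, the sketch below shows how lightweight an AI audit trail can be. The function and logger names are assumptions for illustration, not any vendor’s API, and the stub endpoint stands in for whatever sanctioned tool a company provides. It deliberately logs the prompt’s size rather than its content, so the audit trail itself does not become a new store of personal data.

```python
import getpass
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def call_approved_model(prompt: str) -> str:
    """Stand-in for a company-sanctioned AI endpoint (an assumption)."""
    return "[model response]"

def logged_ai_call(tool: str, purpose: str, prompt: str) -> str:
    """Record who used which AI tool, when, and why, then forward the call."""
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": getpass.getuser(),
        "tool": tool,
        "purpose": purpose,
        # Record the prompt's size, not its content, to keep the
        # audit log itself free of personal or confidential data.
        "prompt_chars": len(prompt),
    }))
    return call_approved_model(prompt)

logged_ai_call("copilot", "draft client follow-up email", "Please draft a polite reminder...")
```

A log like this turns shadow AI into visible AI: managers can see which tools and tasks dominate, which in turn informs the training priorities in step 2 and the policy scope in step 1.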

By taking these steps, companies create an environment where AI can be used as a positive force for productivity and innovation – within a controlled, well-understood framework.

Embrace AI with Oversight, Not Opposition

The reality in South Africa’s workplaces is that AI is here to stay, whether officially sanctioned or not. Banning or ignoring it won’t stop employees from finding a way to use tools that make their jobs easier – it will only increase the chances of mistakes and security incidents happening in the dark. Businesses that proactively develop strategies, educate their teams, and set guardrails will be far better positioned to benefit from AI’s potential while protecting themselves from its risks. In the words of Arthur Goldstuck, when it comes to AI in business, “the genie is out of the bottle, but that does not mean it should be allowed to wander around the office unsupervised.” By embracing AI with proper oversight rather than trying to ban it, South African companies can innovate confidently and safely in the new era of intelligent tools.

Sources:

  • Daniel Puchert, “AI warning for businesses in South Africa,” MyBroadband (17 July 2025), mybroadband.co.za.

  • Philippa Larkin, “South African enterprises are rapidly adopting Generative AI but without formal strategies, study finds,” IOL Business Report (17 July 2025), iol.co.za.

  • Arthur Goldstuck, “GenAI gains ground in SA but governance lags,” Gadget (21 July 2025), gadget.co.za.

  • Nadeem Mahomed et al., “Unchecked AI, unseen dangers: What the DeepSeek breach means for SA companies and POPIA compliance,” Cliffe Dekker Hofmeyr via Labour Guide (3 March 2025), labourguide.co.za.

  • Craig Leppan, “Navigating the AI Adoption Wave in South Africa: Urgency Meets Opportunity,” Imbila (22 July 2025), imbila.ai.
