Demand for AI driving rise of Shadow AI

Andrew Walls, Distinguished VP Analyst, Gartner

Effective management of Shadow AI requires the deployment of security controls, such as web-based monitoring, data leakage prevention and AI usage control. These tools can help manage unauthorised AI activity without resorting to punitive measures, says Andrew Walls at Gartner.

Artificial Intelligence is rapidly transforming the workplace, empowering employees to experiment with new tools and approaches that can drive productivity and innovation. However, this surge in AI adoption is also fuelling the rise of Shadow AI—the use of AI tools, applications or features without formal approval or oversight.

Shadow AI introduces significant risks, including the potential exfiltration of confidential data and violations of company policy, which can threaten enterprise value and reputation. To address these challenges, CISOs must establish robust programmes for employee training, monitoring, and filtering that encourage responsible innovation while mitigating the risks associated with unsanctioned AI use.

Shadow AI manifests in several forms across organisations. The most widespread is employee Shadow AI, where employees use Generative AI tools to enhance productivity or for personal projects unrelated to business needs. Developer Shadow AI occurs when software developers within an organisation experiment with open-source AI models outside of sanctioned corporate repositories, often bypassing established cybersecurity controls.

Technology provider Shadow AI arises when enterprise software vendors embed new AI features into their products without adequate notification, inadvertently expanding the organisation’s attack surface.

Finally, third-party provider Shadow AI involves partners or contractors leveraging AI tools without informing the organisation or adhering to its security and privacy requirements. Each of these forms presents unique challenges for CISOs seeking to maintain oversight and control.

The scale of the Shadow AI challenge is significant. According to Gartner’s 2025 Cybersecurity Innovations in AI Risk Management and Use survey, 69% of organisations suspect or have evidence that employees are using prohibited public Generative AI tools. Additionally, 79% report suspected or confirmed misuse of approved public Generative AI, while 52% have concerns about employees building custom Generative AI solutions without proper cybersecurity risk evaluation.

Despite these risks, only a small fraction of organisations has taken strong preventive measures—just 16% block public Generative AI by default, and only 9% block embedded or custom-built AI. This gap highlights the urgent need for more proactive and comprehensive risk management strategies.

To balance the value and risk of Shadow AI, CISOs should focus on discovering and monitoring unsanctioned AI use, particularly where unauthorised Generative AI tools are involved. Engaging directly with users to understand their objectives, the types of data being shared and the skills being developed is essential for crafting effective policies.
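In practice, discovery often starts with data the organisation already holds. The following is a minimal sketch, assuming a CSV export of web-proxy logs with 'user' and 'host' columns and a hand-maintained watchlist of public Generative AI domains (both assumptions for illustration); a mature programme would draw on a secure web gateway or CASB inventory instead.

import csv
from collections import Counter

# Hand-maintained watchlist of public GenAI endpoints (illustrative, not exhaustive).
GENAI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "gemini.google.com",
    "claude.ai", "copilot.microsoft.com", "perplexity.ai",
}

def genai_usage(log_path: str) -> Counter:
    """Count requests per user to known GenAI hosts in a proxy log.

    Assumes a CSV with 'user' and 'host' columns; adapt to the
    actual export format of your proxy.
    """
    hits: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["host"].lower().removeprefix("www.")
            if host in GENAI_DOMAINS:
                hits[row["user"]] += 1
    return hits

if __name__ == "__main__":
    # "proxy_log.csv" is a hypothetical export path.
    for user, count in genai_usage("proxy_log.csv").most_common(10):
        print(f"{user}: {count} GenAI requests")

Even a crude report like this shows which teams are experimenting with which tools, which is the natural starting point for the user conversations described above.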

Clear, practical guidelines should be established, such as restricting the sharing of confidential or regulated data with unauthorised AI, opting out of data reuse where possible, validating AI outputs before use and ensuring transparency through proper attribution.
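Some of these guidelines can be reinforced technically as well as on paper. Below is a minimal sketch of a pre-submission check that flags obvious sensitive content before a prompt leaves the gateway; the pattern names and regular expressions are illustrative assumptions, and a production data leakage prevention engine would go well beyond simple pattern matching.

import re

# Illustrative patterns only; a real DLP policy would be far richer.
SENSITIVE_PATTERNS = {
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "internal_marker": re.compile(r"\b(?:CONFIDENTIAL|INTERNAL ONLY)\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in a prompt."""
    return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]

findings = screen_prompt("Summarise this INTERNAL ONLY product roadmap ...")
if findings:
    print(f"Hold on: prompt matches {findings}; remove sensitive data first.")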

Building open AI communities within the organisation can foster a culture of support and transparency, encouraging employees to surface both risky and innovative uses of AI.

Effective management of Shadow AI also requires the deployment of security controls, such as web-based monitoring, data leakage prevention and emerging AI usage control technologies. These tools can help detect and manage unauthorised AI activity without resorting to punitive measures that may drive it further underground.
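Conceptually, an AI usage control sits between outright blocking and doing nothing: depending on a tool's status and the sensitivity of the data involved, the appropriate response may be to allow, to coach or to block. The sketch below illustrates such a decision table; the tool statuses, sensitivity tiers and default action are assumptions for illustration, not any vendor's actual policy schema.

from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    COACH = "coach"  # non-blocking: warn and log, but let the request through
    BLOCK = "block"

# Illustrative decision table: (tool status, data sensitivity) -> action.
POLICY = {
    ("sanctioned", "public"): Action.ALLOW,
    ("sanctioned", "confidential"): Action.COACH,
    ("unsanctioned", "public"): Action.COACH,
    ("unsanctioned", "confidential"): Action.BLOCK,
}

def decide(tool_status: str, data_sensitivity: str) -> Action:
    """Look up the response for a Generative AI request; default to coaching."""
    return POLICY.get((tool_status, data_sensitivity), Action.COACH)

print(decide("unsanctioned", "confidential"))  # Action.BLOCK

Defaulting to coaching rather than blocking reflects the earlier point: heavy-handed enforcement tends to drive Shadow AI further underground.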

Policies should be aligned with business needs and risk priorities, offering flexible options like enterprise AI licenses or internal chatbots for scenarios involving sensitive data.

Ultimately, building a secure AI culture depends on practical, jargon-free education and leveraging existing data governance resources. By bringing AI use out of the shadows and into the open, CISOs can empower employees to innovate safely and responsibly.

A balanced approach that combines education, supportive communities and non-blocking controls—such as splash pages or real-time coaching—can improve risk awareness and influence behaviour, enabling organisations to harness the benefits of AI while protecting valuable assets.
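As one illustration of a non-blocking control, a gateway can answer a user's first request to an unsanctioned Generative AI site with a coaching interstitial rather than a hard block. The sketch below shows only the response-shaping logic, with persistence of the per-user 'already coached' state and the onward forwarding left to the surrounding proxy; all names here are hypothetical.

COACHED_USERS: set[str] = set()  # illustrative; persist per user or session in practice

SPLASH_HTML = """<html><body>
<h1>Before you continue</h1>
<p>This AI tool is not yet approved. Do not paste confidential or regulated data.
See the AI usage policy, or request an enterprise licence instead.</p>
<a href="{url}">I understand, continue</a>
</body></html>"""

def handle_request(user: str, url: str) -> tuple[int, str]:
    """Coach on a user's first visit to an unsanctioned AI site, then let requests pass."""
    if user not in COACHED_USERS:
        COACHED_USERS.add(user)
        return 200, SPLASH_HTML.format(url=url)  # serve the coaching page instead of the site
    return 307, ""  # pass through; a real proxy would forward the original request

print(handle_request("asmith", "https://genai.example.com/chat")[0])  # 200: coached first
print(handle_request("asmith", "https://genai.example.com/chat")[0])  # 307: passed through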

===============================================================

Key takeaways

  • Employee Shadow AI is where employees use Generative AI tools to enhance productivity or for personal projects.
  • Developer Shadow AI occurs when software developers experiment with open-source AI models bypassing cybersecurity controls.
  • Technology provider Shadow AI arises when enterprise software vendors embed new AI features into their products without notification.
  • Third-party provider Shadow AI involves partners or contractors leveraging AI tools without adhering to the organisation's security and privacy requirements.
  • 69% of organisations suspect or have evidence that employees are using prohibited public Generative AI tools.
  • 79% of organisations report suspected or confirmed misuse of approved public Generative AI.
  • 52% of organisations have concerns about employees building custom Generative AI solutions without cybersecurity risk evaluation.

===============================================================
