But mostly, unsurprisingly, shadow AI (like most forms of shadow technology and bring-your-own-device activity) is viewed as a negative, an infringement and a risk.
Shadow AI Breeding Ground
The issue today is that AI is still in its infancy, only beginning its first wave of implementation. Many users’ exposure to AI is limited to amusing images generated by ChatGPT and other tools (think human plastic toy blister packs last week, cats on diving boards this week, and something even more bizarre next week, no doubt), meaning widespread enterprise adoption of AI tools has yet to become standard practice. That time seems imminent, but in the current state of AI development some usage is slipping under the radar.
The unauthorized use of AI tools by developers is becoming a serious problem as application development continues to accelerate. Scott McKinnon, CSO for UK&I at Palo Alto Networks, says building modern, cloud-native applications is no longer just about writing code; the pressure to ship new enterprise software services quickly means teams now operate in a perpetual “continuous beta mode.”
“The resulting effect is that developers face intense pressure to deliver fast and reduce time to market. Given this, it’s understandable why many developers are turning to AI tools to boost efficiency and meet these demanding expectations,” said McKinnon. “Our research shows enterprise generative AI traffic surged by over 890% in 2024, and with organizations increasingly using these apps, some can be considered high-risk. Meanwhile, data loss prevention incidents linked to generative AI have more than doubled, clearly signaling governance failures.”
Go-Around Guardrails
When all these realities are combined, it’s easy to see why software developers might be tempted to bypass an organization’s AI guardrail policies and controls. In practice, they may plug into open-source large language models outside approved platforms, generate code with AI that receives no oversight, or skip data governance policies to speed up deployment. The result can be exposed intellectual property, compliance breaches and compromised system security.
“It all comes down to one thing: if developers are to balance speed with security, they must adopt a new operational model. This model should embed clear, enforceable AI governance and oversight directly into the continuous delivery pipeline rather than tacking them on afterward,” explained McKinnon. “When developers use AI tools outside sanctioned channels, one major concern is supply chain integrity. Pulling in untested or unvetted AI components introduces opaque dependencies often hiding vulnerabilities.”
What Are Opaque Software Dependencies?
It’s a term that sounds ominous enough already, and opaque software dependencies truly are problematic. Software dependencies are the component parts that larger data services rely on: software libraries handling database connections, frameworks managing user interfaces, or modules drawn from external third-party applications. Useful software dependencies are transparent and easy to inspect; opaque ones, while functional, reveal little about their origins and internal components. Technically speaking, an opaque software dependency is one that developers cannot call (and so establish a connection to) via a public application programming interface.
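As a loose illustration of the difference in practice, the sketch below (a minimal Python example, not anything McKinnon prescribes) audits an installed environment and flags packages that expose little metadata about their origins. The heuristic used here, missing license and homepage fields, is an illustrative assumption rather than a formal test of opacity.

```python
# A minimal sketch of a dependency audit using only the Python standard
# library. The "opaque" heuristic below (no license or homepage metadata)
# is an illustrative assumption, not a formal definition of opacity.
from importlib.metadata import distributions

def audit_dependencies() -> None:
    """Flag installed packages that reveal little about where they came from."""
    for dist in distributions():
        meta = dist.metadata
        name = meta.get("Name", "<unknown>")
        # Packages missing basic provenance metadata are harder to vet.
        if not meta.get("License") and not meta.get("Home-page"):
            print(f"possibly opaque: {name} {dist.version}")

if __name__ == "__main__":
    audit_dependencies()
```

Running something like this against a typical development environment quickly reveals how many transitively pulled-in packages a team has never consciously vetted.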
According to McKinnon, another significant threat is prompt injection attacks, where malicious actors manipulate AI inputs to force unintended and dangerous behaviors. These types of vulnerabilities are hard to detect and can erode trust and safety in AI-powered applications. When unchecked, such practices create new attack surfaces and increase the overall risk of cyber incidents. Organizations need to get ahead of this by securing AI development environments, rigorously vetting tools, and ensuring developers have the support needed to work effectively.
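To make the mechanics concrete, here is a deliberately naive sketch of two common mitigations: keeping untrusted input out of the instruction channel and screening it for obvious override phrasing. The phrase list and prompt layout are illustrative assumptions; production defenses are layered and go well beyond pattern matching.

```python
# A deliberately naive prompt-injection guard. The patterns and prompt
# layout are illustrative assumptions; real defenses combine many layers.
import re

INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"reveal (the|your) system prompt",
]

def looks_like_injection(user_text: str) -> bool:
    """Screen untrusted text for common instruction-override phrasing."""
    lowered = user_text.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)

def render_prompt(system_rules: str, user_text: str) -> str:
    """Keep instructions and untrusted data in clearly separated sections."""
    if looks_like_injection(user_text):
        raise ValueError("possible prompt injection detected")
    return f"{system_rules}\n\n--- UNTRUSTED USER DATA ---\n{user_text}"
```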
The Road To Platformization
"To effectively tackle the risks from unsanctioned AI use, organizations need to move beyond fragmented tools and processes toward a unified platform approach. This means consolidating AI governance, system controls, and developer workflows into a single integrated system offering real-time visibility. Without this, organizations struggle to keep pace with the speed and scale of modern development environments, leaving gaps adversaries can exploit,” said McKinnon.
His vision of platformization (and broader platform engineering) enables organizations to apply consistent policies across all AI usage, identify risky behavior early, and provide developers with secure, approved AI capabilities within existing workflows.
“This reduces friction for software developers, enabling faster work without compromising security or compliance. Instead of juggling multiple disconnected tools, organizations gain a centralized view of AI activity, making monitoring, auditing, and responding to threats easier. Ultimately, a platform approach is about balance—delivering safeguards and controls to minimize risk while preserving the agility and innovation developers require,” concluded Palo Alto Networks’ McKinnon.
At its worst, shadow AI can lead to so-called model poisoning (also known as data poisoning), a scenario described by application and API reliability company Cloudflare as one in which an attacker manipulates the outputs of an AI or machine learning model by altering its training data. The goal of an AI model poisoner is to make the model produce biased or harmful results once it moves to inference, the stage that ultimately delivers the AI-driven insights we consume.
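A toy experiment makes the mechanism visible. The sketch below, which assumes scikit-learn is available, trains one classifier on clean labels and another on labels an attacker has partially flipped, then compares their accuracy. Real poisoning attacks are far subtler, but the principle is the same: corrupted training data skews the model that comes out the other side.

```python
# A toy label-flipping (data poisoning) demonstration. Assumes
# scikit-learn; real attacks are subtler than flipping 30% of labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The attacker silently flips 30% of the training labels.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=len(poisoned) * 3 // 10, replace=False)
poisoned[flip] = 1 - poisoned[flip]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("clean model accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned model accuracy:", poisoned_model.score(X_test, y_test))
```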
According to Mitchell Johnson, chief product officer at software supply chain management specialist Sonatype, “Shadow AI includes any AI application or tool used outside an organization’s IT or governance frameworks. Think shadow IT but with far greater potential (and risk). It’s the digital equivalent of prospectors staking claims during a gold rush, cutting through red tape to strike gold in efficiency and innovation. Examples include employees using ChatGPT to draft proposals, leveraging new AI-powered code assistants, building machine learning models on personal accounts, or automating repetitive tasks with unofficial scripts.”
Johnson notes that shadow AI is increasingly prevalent due to the rise of remote working, where teams operate outside traditional oversight and firms lack comprehensive AI governance, creating policy gaps that leave room for improvisation.
From Out Of The Shadows
There is clearly a network system health concern tied to shadow AI; after all, it’s the first issue raised by tech industry commentators warning about any form of shadow IT. There are wider implications too, including certain IT teams gaining what might seem like an unfair advantage, and some developer teams introducing rogue AI that causes bias and hallucinations.
To borrow a meteorological truism, shadows are usually good news only during a heatwave… which typically means there’s a lot of humidity present, with storms potentially on the way.