67 Percent of CISOs Cannot See Their Own AI Systems — Here Is the Enterprise AI Security Stack That Actually Works in 2026

67 Percent of CISOs Cannot See Their Own AI Systems — Here Is the Enterprise AI Security Stack That Actually Works in 2026

I was on a call with a CISO named Greg last Wednesday at 9:15 PM — because apparently security leaders only have time to talk after their kids go to bed — and he told me something that stuck with me. "I have 340 employees. At least 200 of them are using some form of AI at work. I can name maybe 12 of the tools they are using. Maybe."

Greg is not an outlier. According to Pentera's AI and Adversarial Testing Benchmark Report 2026, surveying 300 US CISOs and senior security leaders, 67 percent reported limited visibility into how AI is being used across their organization. Not one single respondent — zero out of 300 — said they have full visibility.

Let that sink in. Not one CISO in the entire study could say "I know every AI system running in my environment." Not one.

The Problem Is Not Budget — It Is Skills

Here is where most articles about AI security go wrong: they assume the problem is money. It is not. Only 17 percent of CISOs cited budget constraints as their primary barrier. The actual top obstacles are:

Lack of internal expertise: 50 percent
Limited visibility into AI usage: 48 percent
Insufficient AI-specific security tools: 36 percent

Translation: companies are willing to spend. They just do not know what to buy, who to hire, or where to start. I had lunch with a recruiter last Tuesday who specializes in cybersecurity hiring, and she said, "The words AI security engineer did not exist two years ago. Now every Fortune 500 company wants three of them and there are maybe 400 qualified people on the planet."

She was exaggerating. Probably. But the point stands — this is a skills gap dressed up as a tooling problem.

Why Legacy Security Tools Fall Short

The Pentera report found that 75 percent of CISOs are using legacy security controls — endpoint protection, application security, cloud security tools — to protect AI systems. Only 11 percent reported having tools designed specifically for AI infrastructure.

That is like using a metal detector to find a gas leak. The tool works great at what it was designed for. But it was not designed for this.

AI systems break the assumptions baked into traditional security tools in three fundamental ways:

Non-deterministic behavior. A traditional application does the same thing given the same input. An LLM might give you a haiku, a legal disclaimer, or your company's proprietary training data, depending on how someone phrases their prompt. Endpoint security was not built to monitor for "the application decided to do something creative."

Indirect access paths. When your marketing team connects ChatGPT to your CRM via Zapier, your customer data now flows through OpenAI's infrastructure. Traditional DLP tools see an API call to Zapier. They do not see that your customer list just left the building.

Privileged system-to-system interaction. AI agents that can read your Jira, search your Confluence, and send emails on your behalf have more access than most employees. But they show up in your IAM as a service account, not a user, so nobody is reviewing their permissions quarterly like they would for a human.

The AI Security Stack That Actually Works

After talking to Greg and about a dozen other security leaders over the past two months, plus digging through every vendor's documentation I could find, here is the practical stack that companies with mature AI security programs are actually deploying. Not aspirational, not theoretical — what is working right now in production.

Layer 1: AI Asset Discovery and Inventory

You cannot secure what you cannot see. Start here.

Wiz AI-SPM (AI Security Posture Management): This is the one I hear mentioned most often. Wiz scans your cloud environments — AWS, Azure, GCP — and automatically discovers AI services, models, SDKs, and training data. It creates what they call an "AI Bill of Materials," essentially a complete inventory of every AI asset in your environment. Genpact used it to achieve 100 percent visibility across their multi-cloud LLM deployments. Pricing is enterprise-only (read: call them and prepare your checkbook — expect $150K-$400K annually depending on cloud spend).

Robust Intelligence (now part of Cisco): Focuses on AI model validation and monitoring. Think of it as continuous testing for your AI models, checking for data drift, adversarial vulnerability, and unexpected behavior changes. Good for companies that build their own models. Pricing starts around $50K/year for mid-market.

DIY alternative: If your budget is Greg's-size-startup-budget (his words), you can build basic AI asset discovery using your existing CASB (like Netskope or Zscaler) to detect AI service API calls. It will not be as thorough as Wiz, but it will at least tell you which AI services your employees are hitting. Budget: $0 additional if you already have a CASB.

Layer 2: AI-Specific Threat Detection

Protect AI Guardian: Monitors LLM interactions in real-time for prompt injection, data exfiltration attempts, and jailbreak attacks. I watched a demo where someone tried to extract training data through a series of escalating prompts, and Guardian flagged it on the third attempt. Not the first — the third. But that is still better than most tools, which flag nothing. Pricing starts around $30K/year.

Lakera Guard: Specifically designed to detect prompt injection attacks. It sits between your users and your AI application as a middleware layer. The API is genuinely fast — under 10ms latency on most calls, which matters when you are adding it to a customer-facing chatbot. Free tier available for up to 10K API calls/month. Production plans from $500/month.

NVIDIA NeMo Guardrails: Open-source framework from NVIDIA that lets you define conversational boundaries for LLMs. You write rules like "do not discuss competitor products" or "always verify financial data before presenting" and NeMo enforces them. It requires more engineering effort than Lakera, but it is free and remarkably flexible. My friend Marcus spent a weekend implementing it for his startup and called it "the best free tool I have found for keeping our chatbot from going rogue."

Layer 3: Model Security and Red-Teaming

IBM Adversarial Robustness Toolbox (ART): Open-source Python library backed by the Linux Foundation. It includes 39 attack modules and 29 defense modules for testing ML models against evasion, poisoning, extraction, and inference attacks. It supports TensorFlow, PyTorch, and most major frameworks. The learning curve is steep — Derek from my team spent three days just getting the poisoning tests configured properly — but it is the most complete free option available.

Meta Purple Llama: Open-source suite specifically for testing LLM safety. Includes CyberSecEval (benchmarks for cybersecurity risks in code generation), Llama Guard (content moderation), and Prompt Guard (prompt injection detection). If you are deploying any open-source LLM, Purple Llama should be part of your testing pipeline. It is free and surprisingly well-documented for a Meta project.

Mindgard: Commercial AI red-teaming platform that automates adversarial testing. You point it at your AI application and it runs hundreds of attack scenarios automatically. Think of it as Burp Suite but for AI. Pricing from $25K/year.

Layer 4: Data Protection for AI Pipelines

Nightfall AI: DLP specifically designed for AI data flows. It monitors what data goes into and comes out of LLMs, catching PII, credentials, and sensitive business data before it leaves your environment. Integrates with Slack, ChatGPT, GitHub Copilot, and most enterprise AI tools. Starts around $5/user/month.

Private AI: Automatically redacts PII from data before it reaches your AI models. Instead of sending "John Smith, SSN 123-45-6789, owes $45,000" to your AI, it sends "[NAME], SSN [REDACTED], owes [AMOUNT]." The AI can still process the query without ever seeing the sensitive data. Starts at $0.001 per API call.

What a Realistic Implementation Looks Like

Greg asked me to sketch out what a phased rollout would look like for a 300-person company with a $200K annual security budget. Here is what I recommended:

Month 1-2: Discovery ($0-$5K)
Configure your existing CASB to detect AI service API calls. Run a manual audit of AI tools (just ask department heads — you will be horrified). Deploy NeMo Guardrails on any customer-facing AI. Set up Purple Llama for testing.

Month 3-4: Protection ($15K-$30K)
Deploy Lakera Guard on production AI applications. Implement Nightfall for data monitoring. Run ART against your custom models if you have any. Create an AI acceptable use policy (yes, you need one — 72 percent of companies do not have one, according to ISACA).

Month 5-6: Maturity ($50K-$100K)
Evaluate Wiz or Robust Intelligence for full AI-SPM. Hire or contract an AI security specialist (budget $180K-$250K salary if full-time, $300-$500/hour if contract). Begin quarterly AI red-teaming exercises.

Ongoing: Governance
Monthly AI asset inventory reviews. Quarterly adversarial testing. Annual policy updates. Board-level AI risk reporting (if your board is not asking about AI risk yet, they will be by Q3 — Gartner says 60 percent of boards will require AI risk reporting by 2027).

The One Thing Nobody Wants to Hear

Look, I know this is a tools article, but I would be dishonest if I did not say this: no tool will fix a culture problem. If your developers are copy-pasting proprietary code into ChatGPT (58 percent of them are, according to GitHub's 2025 developer survey), no DLP tool will catch every instance. If your marketing team signed up for 14 different AI writing tools using personal email addresses, your CASB cannot see what it cannot see.

The companies that are actually getting AI security right are doing two things that have nothing to do with software: they are training their people (not a one-hour webinar — actual hands-on training with their specific AI tools), and they are making it easy to use approved AI tools so employees do not feel the need to sneak around policy.

Greg got that. By the end of our call — it was almost 11 PM at that point — he said something I think about: "I spent $400K on security tools last year. I spent $8K on training. I think I had the ratio backwards."

He was right. Tools are necessary. But tools without trained humans operating them are just expensive decorations. Like that $3,200 Peloton I bought in 2022 that has been a very sophisticated clothes hanger for the last four years.

Start with visibility. Add protection layer by layer. Train your people. And maybe — just maybe — you will be in that zero percent who can say they actually know what AI is running in their organization.

Greg is aiming for Q4. I told him I would check back in. I will let you know how it goes.

For more on AI security threats, read how Chinese state hackers spent six years watching Southeast Asian militaries. Also check out our take on Leanstral, the AI agent that mathematically proves your code is correct, and why MCP is not actually dead.

Found this helpful?

Subscribe to our newsletter for more in-depth reviews and comparisons delivered to your inbox.