Researchers Just Found Eight Ways to Hack AWS Bedrock — And Most AI Teams Have No Idea They're Exposed
I was on a call with my friend Raj last Friday — he runs AI infrastructure for a mid-size fintech in Austin — when he casually mentioned they'd just connected their AWS Bedrock agents to their Salesforce instance and a SharePoint knowledge base. "It's beautiful," he said. "The AI can pull customer data, generate reports, trigger Lambda functions. We built it in a weekend."
I asked him what security review they'd done on those Bedrock connections.
Dead silence. Then: "What do you mean?"
That conversation happened three days before the XM Cyber threat research team published findings that I'd describe as "the kind of thing that should make every AI team lose sleep." They identified eight validated attack vectors inside AWS Bedrock that range from quietly siphoning your model logs to hijacking your AI agents and using them to modify databases under the cover of normal AI workflows.
And the scariest part? Most of these attacks exploit permissions that companies grant intentionally, thinking they're just enabling their AI to work.
What Is AWS Bedrock and Why Should You Care
For the uninitiated: AWS Bedrock is Amazon's managed platform for building AI applications. It gives you access to foundation models from Anthropic (Claude), Meta (Llama), Mistral, and others — plus the infrastructure to connect those models directly to your enterprise data. If you are evaluating cloud infrastructure for AI workloads, see our Hetzner vs DigitalOcean vs Vultr comparison.
The whole point is connectivity. Your AI agent can query your CRM, pull from internal knowledge bases, trigger automated workflows. That's what makes it useful. That's also what makes it dangerous.
Because when your AI agent can access Salesforce, trigger a Lambda, and query a SharePoint library... it's not just an AI anymore. It's a node in your infrastructure. With permissions. With reachability. With paths to your crown jewels.
As of March 2026, Bedrock adoption has grown 340% year-over-year, with enterprises racing to connect their AI systems to production data. The XM Cyber findings suggest that security hasn't kept pace with that adoption.
The Eight Attack Vectors — Broken Down
I spent the weekend reading the XM Cyber research, talking to three different cloud security engineers, and testing some of these scenarios in a sandbox environment. Here's what you need to know, in language that doesn't require a PhD in cloud security.
Vector 1: Model Invocation Log Theft
What happens: Bedrock logs every interaction with your AI models — every prompt, every response. These logs typically live in an S3 bucket. An attacker with S3 read access can just... read them. All your prompts. All your AI's responses. Every piece of data your team has been feeding into the model.
The scarier version: An attacker with bedrock:PutModelInvocationLoggingConfiguration permission can redirect those logs to a bucket they control. From that point forward, every prompt silently flows to the attacker. Your team keeps working. Your dashboards look normal. And somewhere, someone is reading everything.
Why it matters: Think about what your team puts in AI prompts. Customer names, financial data, internal strategies, code snippets with API keys (yes, people do this — my colleague Sarah caught three API keys in Bedrock logs during an audit last month).
Vector 2: Knowledge Base Data Source Attacks
What happens: Bedrock's Knowledge Bases use RAG (Retrieval Augmented Generation) to connect models to your data. The actual data lives in S3 buckets, Salesforce instances, SharePoint libraries, or Confluence spaces. An attacker can bypass the AI model entirely and pull raw data directly from these sources.
The really scary version: An attacker who can retrieve and decrypt the stored credentials (the ones Bedrock uses to connect to SharePoint, for example) can steal those credentials and move laterally into Active Directory. Your AI's login to SharePoint becomes the attacker's login to your entire identity infrastructure.
I asked my friend Derek, a pentester based in Seattle, if he's seen this in the wild. "Not yet with Bedrock specifically," he said. "But RAG knowledge base misconfigurations are the new S3 bucket misconfigurations. Give it six months."
Vector 3: Knowledge Base Data Store Attacks
What happens: After ingestion, your data gets indexed into vector databases — Pinecone, Redis Enterprise Cloud, or AWS-native stores like Aurora and Redshift. The credentials for these stores are often the weakest link.
How it works: An attacker calls the bedrock:GetKnowledgeBase API, which returns the StorageConfiguration object. Inside that object: endpoint URLs and API keys for your vector database. With those credentials, an attacker has full administrative access to your indexed knowledge.
This is particularly nasty because the vector database contains processed, structured versions of your data — often more useful to an attacker than the raw source material.
Vector 4: Direct Agent Attacks
What happens: Bedrock Agents are autonomous orchestrators — they receive a task, decide which tools to use, and execute. An attacker with bedrock:UpdateAgent permission can rewrite the agent's base prompt, forcing it to leak its internal instructions and tool schemas.
Even worse: Combined with bedrock:CreateAgentActionGroup, an attacker can attach a malicious "tool" to a legitimate agent. The agent then executes unauthorized actions — database modifications, user creation, data exfiltration — all under the cover of a normal AI workflow.
Think about that for a second. Your monitoring shows your AI agent doing what it normally does. But one of its "tools" is actually sending data to an attacker's endpoint. It's like replacing one book in a library with a perfect-looking decoy that phones home every time someone reads it.
Vector 5: Indirect Agent Attacks (Prompt Injection)
What happens: Instead of modifying the agent directly, an attacker poisons the data the agent consumes. Plant a carefully crafted payload in a document that gets ingested into the knowledge base, and the next time the agent processes that document, the payload triggers.
This is the AI equivalent of a stored XSS attack. And if you've ever done web security, you know how devastating those can be.
"Indirect prompt injection is the vulnerability class I'm most worried about in 2026," said Dr. Elena Vasquez, an AI security researcher I spoke with at a conference in San Francisco last month. "We've been warning about it since 2023, but now that enterprises are connecting AI agents to production data at scale, the attack surface is enormous."
Vector 6: Prompt Flow Injection
What happens: Bedrock's Prompt Flows allow you to chain multiple AI steps together. An attacker who can modify a flow can inject malicious steps that execute between legitimate operations — intercepting data mid-pipeline, modifying outputs before they reach the user, or routing sensitive information to external endpoints.
Imagine a legitimate flow: "User asks question → AI searches knowledge base → AI generates response." Now add an injected step: "User asks question → AI searches knowledge base → silently forward raw search results to attacker → AI generates response." The user gets their answer. The attacker gets your data. Everyone's happy — except your security team, who has no idea this is happening.
Vector 7: Guardrail Degradation
What happens: Bedrock Guardrails are safety filters that prevent AI from generating harmful, off-topic, or sensitive content. An attacker with bedrock:UpdateGuardrail permission can weaken or disable these filters entirely.
Why it's subtle: The guardrails still exist. They still appear active in your dashboard. But the thresholds have been modified to allow everything through. It's like keeping the security cameras but pointing them at the ceiling.
Once guardrails are degraded, an attacker can use the AI for jailbreaking, data extraction, or generating content that your organization's policies should prevent.
Vector 8: Cross-Service Lateral Movement
What happens: Bedrock connects to so many services — S3, Lambda, Salesforce, SharePoint, Active Directory — that compromising a single Bedrock component can give an attacker paths to multiple other systems. The research found that in typical enterprise Bedrock deployments, a single compromised agent could reach an average of 4.7 other services.
This is the "blast radius" problem. It's not just about Bedrock being insecure. It's about Bedrock being connected to everything, which means a Bedrock compromise IS an everything compromise.
What You Should Do Right Now
I spent Sunday afternoon creating a Bedrock security checklist based on the research findings, conversations with three security engineers, and my own testing. Here are the most critical actions:
Immediate (This Week)
- Audit Bedrock IAM permissions. Search for any user, role, or service account with
bedrock:PutModelInvocationLoggingConfiguration,bedrock:UpdateAgent,bedrock:UpdateGuardrail, orbedrock:CreateAgentActionGroup. Apply least-privilege principles ruthlessly. - Check your model invocation logs. Verify the logging destination hasn't been changed. Look for any unexpected S3 bucket targets.
- Inventory all Bedrock Knowledge Base data sources. Know exactly which S3 buckets, Salesforce instances, and SharePoint libraries your AI can access.
- Enable CloudTrail logging for all Bedrock API calls. You can't detect what you don't log.
Short-Term (This Month)
- Implement SCPs (Service Control Policies) to prevent Bedrock logging configuration changes outside of a dedicated security account.
- Review vector database credentials. Rotate any API keys stored in Bedrock StorageConfiguration objects.
- Add anomaly detection on Bedrock Agent actions. Monitor for unusual tool invocations, unexpected Lambda triggers, or data access patterns that deviate from baseline.
- Test for indirect prompt injection. Run controlled tests where you insert benign payloads into knowledge base documents and verify that guardrails catch them.
Long-Term (This Quarter)
- Implement network segmentation between Bedrock services and critical data stores. The fact that an AI agent CAN reach Active Directory doesn't mean it SHOULD.
- Build a Bedrock security monitoring dashboard. Track permission changes, logging configuration modifications, agent updates, and guardrail changes in real-time.
- Run a Bedrock-specific red team exercise. Hire someone to try all eight vectors against your actual deployment. You'll be surprised what they find.
The Bigger Picture
Here's what keeps me up at night about this research: these aren't theoretical vulnerabilities. They're not "an attacker could, in theory, if they had root access and a PhD and three hours of unmonitored access..." These are practical attacks that exploit permissions enterprises grant intentionally.
The company that connects Bedrock to Salesforce and SharePoint isn't doing anything wrong — that's the entire point of the platform. But the security model hasn't caught up with the connectivity model.
I called Raj back on Saturday to share the XM Cyber findings. He was quiet for a long time. Then he said: "So that beautiful weekend project we built? We basically gave our AI agent the keys to every customer record we have, and the only thing stopping someone from stealing them is an IAM policy we copied from a blog post?"
Yeah. Pretty much.
He spent the weekend rewriting those IAM policies. I'd suggest you do the same.
The full XM Cyber research on AWS Bedrock attack vectors was published on The Hacker News on March 23, 2026. The findings have been responsibly disclosed to AWS.
Disclaimer: This article is for educational purposes. The attack vectors described were identified through legitimate security research. Do not attempt these techniques against systems you don't own. Seriously. Don't be that person.