Your AI Tools Are Leaking Data — Here's What Your CISO Won't Tell You

March 2026

Last month I sat in a boardroom where the CEO proudly announced their team had been using AI to "supercharge productivity." When I asked if they'd done a security review, the room went quiet. They hadn't. And they had no idea that six months of proprietary client data, internal financials, and competitive strategy had been flowing through a third-party AI platform with no data retention policy.

The New Shadow IT

Ten years ago, the security nightmare was employees spinning up unauthorized cloud services. We called it shadow IT. Today, the same thing is happening with AI — except the stakes are higher.

When an employee pastes a client contract into ChatGPT to "summarize the key terms," where does that data go? When your marketing team feeds customer lists into an AI writing tool, who else can access that data? When your finance team uses an AI assistant to analyze quarterly numbers, are those numbers now part of a training dataset?

Most CEOs can't answer these questions. And that's a problem.

What's Actually at Risk

Compliance Violations

If you're in healthcare, finance, legal, or any regulated industry, feeding client data into an AI tool could violate HIPAA, SOX, or state privacy laws. The AI vendor's terms of service don't supersede your compliance obligations. And "we didn't know" has never been a successful defense.

Competitive Intelligence Leaks

Some AI tools train on user inputs. That means your proprietary processes, pricing models, and strategic plans could theoretically inform outputs served to your competitors. Even if the risk is small, the downside is catastrophic.

Client Trust

If a client discovers their confidential data was processed by an AI tool they never authorized, you don't have a technology problem. You have a trust problem. And trust problems kill businesses faster than technology problems.

The Five Questions You Need to Answer Today

  1. What AI tools are your employees actually using? Not what's approved — what's actually in use. Survey your team. You'll be surprised.
  2. Where does the data go? Read the terms of service for every AI tool in your stack. Specifically: data retention, training usage, and third-party sharing clauses.
  3. What data types are off-limits? Create a clear policy: what can and cannot be entered into AI tools. Make it specific, not vague.
  4. Who approved the tools? If the answer is "nobody," you have shadow AI. Establish an approval process before more tools show up.
  5. What's your incident plan? If an AI tool is breached or misuses your data, what do you do? If you don't have an answer, you're not ready.

This Doesn't Mean Don't Use AI

I'm not anti-AI. I help companies adopt AI every day. But I'm anti-reckless adoption, and right now, most AI adoption is exactly that.

The companies that will win with AI are the ones that adopt it with guardrails. That means security reviews before deployment, clear data handling policies, and ongoing monitoring. It's not sexy, but it's the difference between an AI strategy and an AI liability.

As the only CISSP in the NOLA market working on AI strategy, I bring this lens to every engagement. Because the fastest way to turn AI from a profit engine into a cost center is a security incident you didn't see coming.

Want to get your AI house in order? Download the free AI Readiness Checklist — it includes the security questions most companies skip. Or book Rene to speak on Secure AI Adoption at your next event.

Free: The AI Readiness Checklist

10 questions every CEO should answer before investing in AI.