The Advisor's Guide to Using AI Without Compliance Headaches

Every advisor we talk to has the same concern about AI: "My compliance department will never approve this."

We get it. Financial services is one of the most heavily regulated industries, and the idea of using AI to help with client communications feels like walking into a minefield.

But here's the reality: AI isn't going away. Your competitors are already using it. The question isn't whether to use AI—it's how to use it responsibly in a regulated environment.

After months of working with AI tools in our own RIA practice (and plenty of conversations with compliance consultants), here's the practical framework we've developed.

The Core Principle: AI Assists, Humans Decide

The single most important thing to understand is that AI should never make decisions for clients or generate final client-facing content without human review.

Think of AI as a very capable intern. You wouldn't let an intern send client emails without your approval. You wouldn't let them make investment recommendations. You wouldn't let them sign off on financial plans.

The same rules apply to AI. It can draft, research, summarize, and suggest—but a licensed professional must review and approve everything before it reaches a client.

The golden rule: Never let AI output go directly to a client without human review. Every email, every piece of content, every recommendation needs your eyes on it before sending.

What's Safe vs. What's Risky

Generally Safe Uses

These are activities where AI can add significant value with relatively low compliance risk:

  • Drafting client emails and letters that you review and personalize before sending
  • Summarizing meeting notes and long documents for internal use
  • Research and brainstorming (content topics, ways to explain a concept)
  • Editing and proofreading your own writing

Higher Risk Uses (Proceed with Caution)

These activities require more careful consideration and potentially compliance pre-approval:

  • Anything resembling an investment recommendation or personalized advice
  • Content containing specific numbers, performance figures, or regulatory citations
  • Marketing and advertising copy subject to the SEC Marketing Rule
  • Any output that reaches a client without a licensed professional's review

Caution: AI can hallucinate facts, cite non-existent regulations, or make mathematical errors. Never trust AI output for anything involving specific numbers, legal citations, or regulatory requirements without independent verification.

Data Privacy: What Can You Share with AI?

This is where many advisors get stuck. The answer depends on which AI tool you're using and how it handles data.

Claude (Anthropic)

At the time of writing, Claude Pro does not use your conversations to train its models. Your conversations are retained for 30 days for trust and safety purposes, then deleted. Anthropic also offers a business tier with additional privacy controls. Verify the current privacy policy before relying on this; these terms change.

ChatGPT (OpenAI)

By default, OpenAI may use your conversations for model training. You can opt out in settings, but many compliance departments are more comfortable with tools that don't train on user data by default.

Best Practices for Any AI Tool

Before sharing client information with AI:

  • Anonymize data when possible (use "Client A" instead of names)
  • Never input Social Security numbers, account numbers, or passwords
  • Avoid sharing specific portfolio holdings or account balances
  • Review your firm's data handling policies
  • Consider using enterprise versions with enhanced privacy

A practical approach: Share the situation, not the identity. Instead of "John Smith has $2.3M in his IRA and is worried about RMDs," try "A 72-year-old client with a significant IRA balance is asking about RMD strategies. Help me explain the options."
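Part of this anonymization step can be automated. Below is a minimal sketch, our own illustration rather than a vetted compliance tool, that scrubs obvious identifiers from text before it goes to an AI tool. The regex patterns and placeholder names are assumptions you would adapt to your own data, and a human should still review anything sensitive:

```python
import re

# Hypothetical helper: redact obvious PII before sending text to an AI tool.
# Patterns are illustrative, not exhaustive -- real data still needs review.
PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),   # Social Security numbers
    (re.compile(r"\b\d{8,12}\b"), "[ACCOUNT#]"),       # likely account numbers
    (re.compile(r"\$[\d,.]+\s?[MK]?"), "[AMOUNT]"),    # dollar amounts like $2.3M
]

def redact(text: str, client_names: list[str]) -> str:
    """Replace known client names and PII patterns with placeholders."""
    for name in client_names:
        text = text.replace(name, "Client A")
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

prompt = redact(
    "John Smith has $2.3M in his IRA, account 12345678, SSN 123-45-6789.",
    client_names=["John Smith"],
)
print(prompt)
# → Client A has [AMOUNT] in his IRA, account [ACCOUNT#], SSN [SSN].
```

Even with a helper like this, treat redaction as a backstop, not a substitute for sharing the situation rather than the identity in the first place.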

Documentation: Covering Your Bases

If your compliance department asks "Did you use AI to write this?", you should have a clear answer and documentation.

What to Document

At a minimum, note which AI tool you used, what it helped with, and who reviewed and approved the final output.

Simple Documentation Approach

We keep it simple: If AI helped with something client-facing, we add a note to the CRM record. Something like: "Email drafted with AI assistance, reviewed and personalized by [advisor name]."

This isn't currently required by regulation, but it's good practice and shows good faith if questions ever arise.

Building a Firm Policy

If you're a solo advisor, your "policy" can be your own guidelines. If you have a team or work under a broker-dealer, you'll want something more formal.

Elements of a Basic AI Use Policy:

  • Approved AI tools (and which are prohibited)
  • Approved use cases (email drafting, research, etc.)
  • Prohibited use cases (investment recommendations, etc.)
  • Data handling requirements (what can/cannot be shared)
  • Review and approval requirements
  • Documentation requirements
  • Training requirements for staff
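If you want the policy to be more than a document, the elements above can also be captured in a simple machine-checkable form, for example to drive an intake form or an internal approval workflow. A minimal sketch, where the structure, tool names, and use-case labels are our assumptions, not any standard:

```python
# Hypothetical firm AI-use policy expressed as data, so approved tools and
# use cases can be checked programmatically instead of only on paper.
POLICY = {
    "approved_tools": {"claude", "chatgpt-enterprise"},
    "approved_uses": {"email_draft", "research", "meeting_summary"},
    "prohibited_uses": {"investment_recommendation", "performance_claim"},
    "requires_review": True,  # every client-facing output needs human approval
}

def is_allowed(tool: str, use_case: str) -> bool:
    """Return True only for an approved tool doing an approved use case."""
    return (
        tool in POLICY["approved_tools"]
        and use_case in POLICY["approved_uses"]
        and use_case not in POLICY["prohibited_uses"]
    )

print(is_allowed("claude", "email_draft"))                # True
print(is_allowed("claude", "investment_recommendation"))  # False
```

Keeping the policy as a single data structure also makes updates auditable: adding or removing a tool is one visible change, not an edit buried in prose.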

What Regulators Are Saying

As of early 2026, there's no specific SEC or FINRA rule about advisors using AI tools. But that doesn't mean it's the Wild West.

Existing regulations still apply:

  • Books-and-records requirements cover AI-assisted communications like any others
  • The SEC Marketing Rule still governs advertisements, however they were drafted
  • Your fiduciary duty doesn't change because a tool helped you write
  • Supervision and review obligations apply to AI output the same as to human work
The regulatory landscape is evolving. The SEC has signaled interest in AI-related risks, and we expect more specific guidance in the coming years. Building good practices now positions you well for whatever comes.

Practical Tips from Our Experience

Start Small

Don't try to automate everything at once. Start with internal tasks like meeting summaries or research. Build confidence before moving to client-facing communications.

Always Review Output

Even for simple tasks, read what the AI produces. Look for factual errors, tone problems, or anything that doesn't sound like you.

Keep Compliance in the Loop

If you work under a broker-dealer or have an external CCO, have a conversation about AI use before diving in. It's easier to get buy-in upfront than to ask for forgiveness later.

Train Your Voice

The better you train AI to write in your voice, the less editing you'll need—and the less likely clients will notice AI involvement. Check out our guide on training Claude to write like you.

Stay Current

AI capabilities and regulatory guidance are both evolving rapidly. What's true today may change in six months. Follow industry publications and compliance bulletins.

The Bottom Line

Using AI in your advisory practice isn't inherently risky—using it carelessly is. With proper guardrails, human oversight, and documentation, AI can make you more efficient without creating compliance problems.

The advisors who figure this out now will have a significant advantage over those who wait until it's "perfectly clear" what the rules are. That clarity may take years.

Start with low-risk use cases, document your process, and always remember: AI assists, humans decide.

Get our compliance checklist

Join our email list and we'll send you our AI compliance checklist—a simple framework you can adapt for your practice.