Generative AI has arrived in healthcare, and it’s not waiting for your compliance team to catch up.

Clinicians are using large language models to draft clinical notes. Administrative teams are summarizing patient records with AI tools. Revenue cycle teams are leveraging AI to automate coding and denial management. Research departments are analyzing datasets with AI-powered platforms. And across every department, individual employees are using ChatGPT, Microsoft Copilot, Google Gemini, and similar tools for dozens of tasks that may — or may not — involve protected health information.

The productivity gains are real. The compliance risks are equally real.

HIPAA wasn’t written with generative AI in mind. But HIPAA’s requirements absolutely apply to generative AI systems that process, store, or transmit electronic protected health information (ePHI). And the gap between how healthcare organizations are actually using generative AI and how they should be using it from a compliance standpoint is enormous.

This guide breaks down exactly what HIPAA requires when your organization uses generative AI, where the biggest compliance risks are, and how to build a framework that lets you leverage AI’s benefits without putting your organization at regulatory risk.

The Core Problem: Generative AI and ePHI Don’t Mix Without Safeguards

Every time someone in your organization enters patient information into a generative AI tool, a series of HIPAA questions arises:

Is the AI vendor a business associate? If the vendor’s system receives, creates, maintains, or transmits ePHI on your behalf, the answer is yes. That triggers the requirement for a Business Associate Agreement (BAA) before any data is shared.

What happens to the data? Many generative AI platforms retain user inputs to train and improve their models. If those inputs contain ePHI, the vendor is now using patient data for purposes that almost certainly weren’t authorized by the patient and may not be permitted under your HIPAA obligations.

Where does the data go? Cloud-based AI systems may process data across multiple geographic locations, through multiple subprocessors, with varying levels of security. HIPAA requires you to know where ePHI is and who has access to it.

Is the output accurate? Generative AI hallucinations — confident but factually incorrect outputs — create patient safety risks when used in clinical contexts. While accuracy isn’t explicitly a HIPAA requirement, the Security Rule’s integrity controls require that ePHI isn’t improperly altered, and using AI-generated clinical content without verification processes raises compliance questions.

These aren’t theoretical concerns. They’re the reality of how generative AI is being used in healthcare organizations right now, often without any of these questions being formally addressed.

What HIPAA Actually Requires When You Use Generative AI

Let’s be specific about the HIPAA requirements that apply to generative AI use in healthcare. These aren’t new requirements — they’re existing requirements applied to a new technology.

Business Associate Agreements Are Non-Negotiable

Under 45 CFR §164.502(e) and §164.504(e), any entity that handles ePHI on behalf of a covered entity must have a BAA in place. If your staff are entering patient data into a generative AI tool, the vendor behind that tool is a business associate. Period.

Here’s the practical challenge: Many of the most popular generative AI platforms don’t offer BAAs on their free or standard tiers. OpenAI offers a BAA for ChatGPT Enterprise and through its API with specific terms. Microsoft offers a BAA for Azure OpenAI Service and certain Copilot configurations. Google has specific healthcare offerings with BAA options. But the free versions of ChatGPT, Copilot, and Gemini that your employees might be using? No BAA. No HIPAA compliance. No exception.

Real-world scenario: A health system’s IT department deployed Microsoft Copilot across the organization without configuring it through Azure’s HIPAA-compliant pathway. Staff began using it to summarize patient emails, draft referral letters, and analyze clinical data. The Copilot deployment was connected to the organization’s Microsoft 365 environment, which contained ePHI. But the specific Copilot configuration didn’t fall under the organization’s existing Microsoft BAA. It took a compliance audit to identify the gap, by which point thousands of interactions involving patient data had already occurred.

The Minimum Necessary Standard Applies

HIPAA’s minimum necessary standard (45 CFR §164.502(b)) requires that organizations limit the use and disclosure of ePHI to the minimum amount necessary to accomplish the intended purpose. This has direct implications for generative AI use.

If a clinician is using an AI tool to help draft a patient communication, does the tool need the patient’s full medical history? Probably not. If an administrator is using AI to summarize a meeting about a patient case, does the AI need the patient’s Social Security number or insurance ID? Definitely not.

Organizations need clear guidelines about what data can and cannot be entered into generative AI tools, even tools that have BAAs and appropriate security configurations. The fact that a tool is compliant doesn’t mean every use of that tool is compliant.

Risk Analysis Must Include Generative AI

The HIPAA Security Rule requires covered entities and business associates to conduct a thorough Security Risk Analysis that identifies risks and vulnerabilities to ePHI. If your organization uses generative AI — or if your workforce is using it without formal approval — those AI systems represent risks that need to be identified, documented, and managed.

Your SRA should address: which generative AI tools are in use, what data flows to those tools, what security controls are in place, what vendor agreements govern data handling, and what residual risks remain after controls are applied. If your most recent SRA doesn’t mention generative AI, it has a gap that needs to be addressed.

Technical Safeguards Must Be in Place

HIPAA’s technical safeguard requirements (45 CFR §164.312) apply to generative AI systems that process ePHI. This includes:

Access controls: Only authorized users should be able to use AI tools that process ePHI. Individual user accounts, authentication, and role-based access are required.

Audit controls: You need to be able to track who used AI tools, what data was entered, and when. Many consumer AI platforms don’t provide the audit trail capabilities that HIPAA requires.

Transmission security: Data sent to and from AI platforms must be encrypted in transit. Most major AI platforms meet this standard, but it needs to be verified, not assumed.

Integrity controls: Mechanisms must be in place to ensure ePHI isn’t improperly altered. In the context of generative AI, this raises important questions about AI-generated content that modifies, summarizes, or reinterprets patient data.

The Biggest Generative AI Compliance Risks in Healthcare

1. Uncontrolled Data Exposure Through Prompts

Every prompt entered into a generative AI tool is data that leaves your organization’s control. When those prompts contain patient names, diagnoses, medications, treatment plans, or any other ePHI, you’ve created a data disclosure that HIPAA governs.

The challenge is that staff often don’t think of typing a question into ChatGPT as a “data disclosure.” But from a regulatory standpoint, it absolutely is. You’ve transmitted ePHI to a third party. The fact that it happened through a chat interface rather than an email or fax doesn’t change the legal analysis.

Mitigation: Create explicit policies about what types of data can be entered into AI tools. Provide approved de-identification procedures for staff who need to use AI for tasks involving patient information. Consider implementing technical controls that detect and block ePHI from being entered into unauthorized AI platforms.
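
To make the last point concrete, here is a minimal sketch of the kind of pattern-based screening a data loss prevention (DLP) control might apply before a prompt leaves your network. The patterns and function names are illustrative only — production DLP tools combine pattern matching with NLP-based entity recognition and are far more thorough than a handful of regexes.

```python
import re

# Illustrative patterns only -- a real DLP control would cover many more
# identifier types and use entity recognition, not just regexes.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "dob": re.compile(r"\b(?:0[1-9]|1[0-2])/(?:0[1-9]|[12]\d|3[01])/(?:19|20)\d{2}\b"),
    "phone": re.compile(r"\b\(?\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the identifier types detected in a prompt, empty if none."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(prompt)]

def is_blocked(prompt: str) -> bool:
    """Block the prompt if any likely identifier is present."""
    return bool(screen_prompt(prompt))
```

A screen like this is a backstop, not a substitute for policy and training — it catches obvious identifiers, while free-text PHI (names, diagnoses) requires more sophisticated detection.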

2. Model Training on Patient Data

Many generative AI platforms use customer inputs to refine and improve their models. This means patient data entered into the system doesn’t just get processed and forgotten — it may become part of the model’s training data, influencing future outputs for other users.

This creates multiple compliance issues. First, using ePHI for model training is a use of patient data that likely exceeds what was authorized. Second, once data is incorporated into a model, it can’t be reliably extracted or deleted — which conflicts with patients’ rights regarding their health information. Third, there’s a theoretical risk of model memorization, where the AI could reproduce fragments of patient data in responses to other users.

What to verify: Before deploying any generative AI tool, confirm in writing whether the vendor uses customer inputs for model training. Ensure your BAA explicitly addresses this. Opt out of data training programs wherever possible. For enterprise deployments, negotiate terms that prohibit the use of your data for model improvement.

3. Shadow AI Usage

The most pervasive generative AI compliance risk in healthcare is the one your organization can’t see. Staff across every department are using AI tools — personal accounts, free tiers, browser extensions, mobile apps — without IT or compliance awareness. We covered this in depth in our AI Security Risks in Healthcare guide, but it bears repeating in the HIPAA compliance context.

Every unauthorized AI interaction involving patient data is a potential HIPAA violation. And because these interactions are invisible to your compliance monitoring, you can’t document them, you can’t assess them, and you can’t mitigate them.

The practical fix: Don’t just ban unauthorized AI — provide approved alternatives. If your staff need AI tools to be productive (and many do), give them compliant options with clear usage guidelines. Organizations that provide approved AI pathways see dramatically less shadow AI usage than those that simply prohibit everything.

4. Inadequate Documentation and Audit Trails

HIPAA requires that you maintain records of how ePHI is accessed, used, and disclosed. For most healthcare IT systems, audit logging is built in. But for generative AI interactions, audit capabilities are often limited or nonexistent.

If an OCR investigator asks, “Show me records of how patient data was used in your AI systems,” can you produce that documentation? For most organizations, the honest answer is no.

What you need: Deploy generative AI through enterprise platforms that provide comprehensive logging. Maintain records of which users accessed AI tools, what types of data were processed, and what outputs were generated. Include AI usage in your routine audit procedures.
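
As a sketch of what such a record might look like, the function below builds one structured audit entry per AI interaction. The field names are hypothetical; enterprise AI platforms provide native logging, but an internal gateway can add its own HIPAA audit trail in this spirit.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative audit wrapper. Note that it logs metadata about the
# interaction (who, when, which tool, what class of data) rather than
# the prompt text itself -- logging raw prompts would copy ePHI into
# yet another system.
audit_log = logging.getLogger("ai_audit")
audit_log.setLevel(logging.INFO)

def record_ai_interaction(user_id: str, tool: str, data_class: str,
                          prompt_chars: int, output_chars: int) -> dict:
    """Build and emit one structured audit record for an AI interaction."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "tool": tool,
        "data_classification": data_class,  # e.g. "ephi", "deidentified", "none"
        "prompt_chars": prompt_chars,
        "output_chars": output_chars,
    }
    audit_log.info(json.dumps(entry))
    return entry
```

Records like these are what let you answer an investigator's question with documentation instead of guesswork.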


5. AI-Generated Clinical Content Without Review

When generative AI drafts clinical documentation, summarizes patient records, or generates treatment-related content, the accuracy of that output has direct patient safety implications. Generative AI hallucinations — where the model produces confident but incorrect information — are well-documented and unpredictable.

From a HIPAA perspective, the integrity of ePHI is a compliance requirement. If an AI tool generates an inaccurate patient summary that becomes part of the medical record, you have an integrity problem. If clinical decisions are made based on AI-generated content that contains errors, you have both a compliance problem and a patient safety problem.

The standard: Every piece of AI-generated clinical content must be reviewed by a qualified professional before it becomes part of the patient record or informs clinical decisions. Document your review processes. Train staff to treat AI outputs as drafts that require professional judgment, not finished products.

Building a HIPAA-Compliant Generative AI Program

Here’s the good news: You don’t need to ban generative AI to be HIPAA compliant. You need to govern it. Here’s a practical framework for building a compliant generative AI program.

Step 1: Establish an AI Acceptable Use Policy

Create a clear, specific policy that defines how generative AI may be used in your organization. This policy should address: which AI tools are approved for use with patient data (and which are not), what types of data may be entered into AI tools, who is authorized to use AI tools in clinical versus administrative contexts, what review and validation processes are required for AI-generated content, how to report unauthorized AI usage, and the consequences of policy violations.

Keep the policy practical. Staff are more likely to follow guidelines that make sense than rules that feel arbitrary. Explain the “why” behind each requirement — when people understand that entering patient data into ChatGPT creates a HIPAA violation because there’s no BAA in place, they’re more likely to comply than if they’re simply told “don’t use ChatGPT.”

Step 2: Evaluate and Approve AI Vendors

Before deploying any generative AI tool that will process ePHI, conduct a thorough vendor evaluation. Your evaluation should include: whether the vendor offers a BAA (and what its terms include), how the vendor handles data storage, processing, and retention, whether customer data is used for model training, what security certifications the vendor holds (SOC 2, HITRUST, etc.), what audit and logging capabilities are available, the vendor’s breach notification procedures, and data portability and deletion capabilities.

Document your evaluation process and its results. This documentation becomes part of your compliance record and demonstrates due diligence if your AI practices are ever reviewed by regulators.

Step 3: Configure for Compliance

Having a BAA isn’t enough. The AI tool must be configured properly for HIPAA compliance. This includes: disabling features that share data with the vendor for training purposes, enabling encryption for all data in transit and at rest, configuring access controls and user authentication, enabling comprehensive audit logging, setting data retention policies that align with your organization’s requirements, and restricting AI tool access to authorized users only.
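
One way to make that list enforceable is to check each deployment's settings against a required baseline before go-live. The field names below are hypothetical, but the idea — a documented, repeatable compliance check rather than a one-time manual review — is general.

```python
# Hypothetical baseline: each key mirrors one of the configuration
# requirements above. Real vendor settings will have different names.
REQUIRED_SETTINGS = {
    "training_on_customer_data": False,  # vendor must not train on inputs
    "encryption_in_transit": True,
    "encryption_at_rest": True,
    "sso_required": True,                # access control / authentication
    "audit_logging": True,
}

def compliance_gaps(config: dict) -> list[str]:
    """Return the settings that deviate from the required baseline."""
    return [key for key, required in REQUIRED_SETTINGS.items()
            if config.get(key) != required]

# Example deployment with one gap: SSO not yet enforced.
deployment = {
    "training_on_customer_data": False,
    "encryption_in_transit": True,
    "encryption_at_rest": True,
    "sso_required": False,
    "audit_logging": True,
}
```

Running a check like this at deployment and on a recurring schedule also produces the configuration documentation the next step calls for.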

Work with your IT team and the vendor to ensure that the configuration meets your compliance requirements. Document the configuration settings and include them in your security documentation.

Step 4: Train Your Workforce

Your AI acceptable use policy is only as good as your staff’s understanding of it. Include generative AI training in your HIPAA training program. Make it practical: show examples of compliant and non-compliant AI usage. Demonstrate what approved AI tools look like and how to use them properly. Explain in plain language why entering patient data into free AI tools creates compliance risk.

Don’t make training a one-time event. The generative AI landscape is evolving rapidly. New tools emerge, capabilities change, and the ways your staff want to use AI will evolve. Update your training regularly to reflect current tools, current policies, and current best practices.

Step 5: Update Your Security Risk Analysis

Add generative AI as a specific category in your Security Risk Analysis. For each approved AI tool, document the risks, current safeguards, and residual risks. For shadow AI, document the risk of unauthorized usage and the controls you have in place to detect and prevent it. Include AI-specific risk categories: data exposure through prompts, model training on patient data, hallucination risks in clinical contexts, vendor security, and integration vulnerabilities.
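
For teams that track SRA entries in structured form, the documentation described above might be sketched as a simple risk register. The structure and field names here are illustrative, not a prescribed SRA format.

```python
from dataclasses import dataclass, field

# Hypothetical structure for AI-specific entries in a Security Risk
# Analysis: each entry records the risk, current safeguards, and the
# residual risk after those safeguards are applied.
@dataclass
class AiRiskEntry:
    tool: str
    risk: str                 # e.g. "data exposure through prompts"
    likelihood: str           # "low" / "medium" / "high"
    impact: str
    safeguards: list[str] = field(default_factory=list)
    residual_risk: str = "medium"

register = [
    AiRiskEntry(
        tool="Azure OpenAI Service",
        risk="data exposure through prompts",
        likelihood="medium",
        impact="high",
        safeguards=["DLP screening", "workforce training", "BAA in place"],
        residual_risk="low",
    ),
]
```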

Step 6: Monitor and Enforce Continuously

Compliance isn’t a one-time achievement. Monitor AI tool usage through network traffic analysis, audit logs, and periodic surveys. Review vendor compliance status regularly — vendors update their terms and capabilities, and your compliance posture needs to keep pace. Conduct periodic audits of AI usage patterns. Update policies as the technology and regulatory landscape evolve.

The Regulatory Direction: What’s Coming Next

The regulatory environment for AI in healthcare is tightening. While HIPAA’s current text doesn’t explicitly mention AI, the direction of regulatory activity makes clear that AI-specific enforcement is coming.

Proposed HIPAA Security Rule updates are expected to strengthen requirements around technology risk assessment, including explicit requirements to evaluate AI systems. Annual security risk assessments, more detailed documentation, and specific requirements for new technology implementations are anticipated. Organizations that build compliant AI programs now will be ahead of these requirements.

OCR enforcement focus. The Office for Civil Rights has indicated that technology-related compliance failures are a priority. When OCR begins investigating AI-related HIPAA violations — and it’s a question of when, not if — organizations without documented AI governance will face significant exposure.

State-level AI regulation. Multiple states have enacted or proposed AI transparency and governance requirements, several of which specifically address healthcare. Organizations operating across state lines need to track and comply with these evolving requirements.

Frequently Asked Questions

Q: Can we use ChatGPT for clinical documentation?

A: Only if you’re using a version covered by a BAA (such as ChatGPT Enterprise with an executed BAA), the tool is properly configured for HIPAA compliance, staff are trained on appropriate use, and all AI-generated clinical content is reviewed by a qualified clinician before being finalized. The free version of ChatGPT should never be used with patient data.

Q: What if we de-identify the data before entering it into an AI tool?

A: Properly de-identified data under HIPAA’s standards (either the Expert Determination method or the Safe Harbor method per 45 CFR §164.514) is not considered ePHI and is not subject to HIPAA’s restrictions. However, true de-identification is more rigorous than most people realize. Simply removing patient names is not sufficient. You must remove or adequately address all 18 HIPAA identifiers. If you’re going to rely on de-identification as a strategy, ensure your process meets HIPAA’s actual standards, and document it.
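
To illustrate how much narrower naive redaction is than true Safe Harbor de-identification, here is a deliberately partial sketch. It handles only two of the 18 identifier categories — production de-identification relies on validated tooling that covers names, geographic subdivisions smaller than a state, all date elements except year, phone and fax numbers, email addresses, SSNs, MRNs, and the remaining categories, or on expert determination.

```python
import re

# Partial illustration only: Safe Harbor requires removing ALL 18
# identifier categories. Two patterns are shown to make the mechanism
# concrete, not to suggest this is sufficient.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
]

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tokens."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text
```

The gap between this sketch and a compliant process is exactly why de-identification claims need to be documented against HIPAA's actual standard rather than assumed.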

Q: Does our existing Microsoft BAA cover Copilot?

A: Not necessarily. Microsoft’s BAA coverage varies by product and configuration. Some Copilot implementations are covered under existing Microsoft 365 BAAs; others are not. Review your specific BAA terms with Microsoft and verify which Copilot features and configurations are included. Don’t assume coverage — confirm it in writing.

Q: How do we handle the risk of AI hallucinations in clinical settings?

A: Require human review of all AI-generated clinical content. Establish clear workflows where AI outputs are treated as drafts, not final products. Train clinical staff to apply professional judgment to AI suggestions, just as they would to any other information source. Document your review processes as part of your compliance framework. Never allow AI-generated content to automatically populate medical records without clinician review.

The Bottom Line

Generative AI is going to be part of healthcare’s future. The organizations that figure out how to use it compliantly will gain significant advantages in efficiency, quality, and competitiveness. The organizations that ignore the compliance implications will face regulatory consequences that could far outweigh any productivity gains.

The path forward isn’t prohibition — it’s governance. Establish clear policies. Vet your vendors. Configure your tools properly. Train your people. Monitor usage. Document everything. These are the same compliance principles that apply to every other aspect of HIPAA — generative AI just adds a new dimension to an existing framework.

Start where you are. If your organization hasn’t addressed generative AI in your compliance program, the most important step is the first one: acknowledge that AI is being used (or will be), and begin building the governance framework to manage it responsibly.

Medcurity helps healthcare organizations navigate the intersection of emerging technology and HIPAA compliance. Our Security Risk Analysis platform and compliance experts can help you assess the risks that generative AI introduces to your environment and build a governance framework that enables innovation while maintaining the compliance standards your organization and patients depend on.
