Every healthcare organization is using artificial intelligence. The question is whether they know it — and whether they’ve assessed the risks.
AI has moved well beyond the pilot stage in healthcare. Clinical decision support tools, ambient documentation systems, AI-powered coding and billing platforms, predictive analytics for population health, and large language models for administrative tasks are all in active use across hospitals, clinics, health plans, and business associates of every size. Many of these tools were adopted quickly, driven by vendor promises and genuine operational needs, without the kind of structured risk assessment that healthcare organizations apply to other technology decisions.
That gap is becoming a serious problem. AI systems interact with protected health information in ways that traditional software doesn’t. They learn, adapt, and make decisions — or influence decisions — based on data patterns that can be opaque even to the people using them. They introduce risks that standard security risk assessments weren’t designed to capture: algorithmic bias that affects patient care, data exposure through model training, vendor relationships with unclear data handling practices, and regulatory obligations that are evolving faster than most compliance programs can keep up with.
The organizations that will navigate this well are the ones that build AI risk assessment into their existing compliance framework now — not as a separate initiative, but as an integrated extension of the security risk analysis process they’re already conducting. Here’s how to do that.
Why Traditional Risk Assessments Miss AI Risks
Most healthcare organizations conduct their Security Risk Analysis using frameworks built around traditional IT systems: servers, workstations, networks, EHR platforms, email systems, and portable devices. These frameworks are essential, and the proposed HIPAA Security Rule update would make annual SRAs explicitly mandatory. But they were designed for systems with predictable, well-understood behaviors.
AI systems are fundamentally different. A traditional application processes data according to explicit rules written by developers. An AI system processes data according to patterns it learned during training — patterns that may not be fully understood by anyone, including the developers who built the system. This creates categories of risk that traditional SRA frameworks don’t adequately address.
Data exposure through model training is one example. When a clinician enters patient information into an AI-powered documentation tool, where does that data go? Is it stored by the AI vendor? Is it used to train or fine-tune the model? Could information from one patient’s record influence outputs generated for another organization? These questions don’t have standard answers because every AI vendor handles them differently, and many vendors haven’t been transparent about their data practices.
Algorithmic bias is another. An AI clinical decision support tool trained primarily on data from large academic medical centers may perform poorly — or even generate harmful recommendations — when applied to a rural community health center serving a different patient population. This isn’t a traditional IT security risk. It’s a patient safety risk with regulatory implications.
Then there’s the problem of shadow AI. Just as organizations dealt with shadow IT a decade ago, they’re now dealing with employees using AI tools without organizational knowledge or approval. A staff member using ChatGPT to draft a prior authorization letter that includes patient details has created a data exposure that no traditional risk assessment would catch — because the organization doesn’t know it’s happening.
Building an AI Risk Assessment Framework
An effective AI risk assessment for healthcare doesn’t require starting from scratch. It requires extending your existing security risk assessment process to capture AI-specific risks. Here’s a practical framework that integrates with the SRA process most healthcare organizations already have in place.
Step 1: Inventory All AI Systems
You can’t assess risk for systems you don’t know about. The first step is building a comprehensive inventory of every AI tool, platform, and capability in your environment. This is harder than it sounds, for two reasons.
First, AI is embedded in many platforms that aren’t marketed as “AI products.” Your EHR likely has AI-powered features. Your billing platform may use machine learning for coding suggestions. Your phone system might use AI for call routing or transcription. Your cybersecurity tools almost certainly use AI for threat detection. These embedded AI capabilities need to be inventoried alongside standalone AI tools.
Second, you need to capture unauthorized AI use. Survey department leaders and frontline staff about AI tools they’re using — including personal accounts for tools like ChatGPT, Google Gemini, Microsoft Copilot, and similar platforms. Many employees don’t realize that using these tools with patient information creates a compliance risk. A non-judgmental discovery process will yield more complete information than a punitive one.
For each AI system, document: the vendor and product name, what the tool does, what data it accesses or processes, whether it interacts with ePHI, how data is transmitted and stored, whether data is used for model training, who authorized the tool’s use, and which staff members use it.
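To make that inventory repeatable rather than a one-off spreadsheet exercise, it helps to capture each system as a structured record. The sketch below is one way to do that in Python; the field names and example values are illustrative assumptions, not a standard or required schema.

```python
from dataclasses import dataclass, field

# Illustrative inventory record; the fields mirror the documentation items
# listed above and are assumptions, not a mandated schema.
@dataclass
class AISystemRecord:
    vendor: str                      # vendor name
    product_name: str
    description: str                 # what the tool does
    data_accessed: list[str]         # categories of data the tool touches
    interacts_with_ephi: bool
    transmission_and_storage: str    # how data is transmitted and stored
    used_for_model_training: bool
    authorized_by: str               # who approved the tool's use
    users: list[str] = field(default_factory=list)  # staff or roles using it

# Hypothetical example entry
example = AISystemRecord(
    vendor="ExampleVendor",
    product_name="AmbientScribe",
    description="Ambient clinical documentation",
    data_accessed=["encounter audio", "draft visit notes"],
    interacts_with_ephi=True,
    transmission_and_storage="TLS in transit; vendor-hosted cloud storage",
    used_for_model_training=False,
    authorized_by="CIO",
    users=["Primary care clinicians"],
)
```

Even if the inventory ultimately lives in a spreadsheet or GRC platform, agreeing on these fields up front keeps entries comparable across departments.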
Step 2: Classify AI Systems by Risk Level
Not all AI systems carry the same risk. A spell-check tool with no access to patient data is fundamentally different from an AI system that analyzes clinical images to support diagnostic decisions. Classifying your AI systems by risk level allows you to allocate assessment effort where it matters most.
High-risk AI systems are those that directly access, process, or generate ePHI; influence clinical decision-making; operate with significant autonomy; or involve data sharing with third-party vendors whose data practices aren’t fully transparent. These systems need the most thorough assessment and the most robust controls.
Medium-risk systems interact with operational or administrative data that may include patient information indirectly, support clinical workflows without directly influencing patient care decisions, or have well-documented vendor data practices with appropriate BAAs in place.
Lower-risk systems are those that don’t access patient data at all, operate entirely within your organization’s controlled environment, and perform functions that don’t affect patient care or regulatory compliance.
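One way to keep this triage consistent across reviewers is to write the tiering rules down explicitly. The sketch below encodes the three tiers described above as a simple rule of thumb; the inputs and the rules themselves are illustrative, not a regulatory standard.

```python
def classify_ai_risk(
    touches_ephi: bool,
    influences_clinical_decisions: bool,
    operates_autonomously: bool,
    vendor_practices_unclear: bool,
    touches_operational_or_admin_data: bool,
) -> str:
    """Rough triage sketch mirroring the tiers described above; illustrative only."""
    if (touches_ephi or influences_clinical_decisions
            or operates_autonomously or vendor_practices_unclear):
        return "high"
    if touches_operational_or_admin_data:
        return "medium"
    return "lower"

# Example: an ambient documentation tool that processes ePHI through an
# opaque vendor pipeline lands in the high-risk tier.
print(classify_ai_risk(True, False, False, True, True))  # -> high
```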
Step 3: Assess AI-Specific Risks
For each AI system — starting with high-risk systems — evaluate these specific risk categories:
Data Privacy and Security Risks. How does the AI system handle ePHI? Is data encrypted in transit and at rest? Does the vendor use customer data for model training? Where is data stored geographically? What happens to data when the vendor relationship ends? Is there a Business Associate Agreement in place that specifically addresses AI-related data handling? As we outlined in our guide to HIPAA compliance for generative AI, the standard BAA template doesn’t cover many AI-specific data risks.
Clinical Safety Risks. For AI systems that influence patient care, what validation has been done on the model’s accuracy and reliability? Does the system perform equally well across different patient populations? What happens when the system encounters data outside its training distribution? Are clinicians trained to understand the system’s limitations? Is there a clear process for clinicians to override AI recommendations?
Regulatory Compliance Risks. Does the AI system’s data handling comply with HIPAA requirements? Does the vendor’s data processing comply with applicable state privacy laws? If the AI system is used in clinical care, has it received appropriate FDA clearance or approval? Is the system’s use documented in a way that would satisfy an OCR investigation?
Operational Risks. What happens if the AI system becomes unavailable? Are there manual fallback processes? How dependent have workflows become on the AI system’s availability? What’s the process for updating or changing the AI system without disrupting operations? Is there vendor lock-in that limits your options?
Bias and Fairness Risks. Has the AI system been evaluated for bias across different demographic groups? Could biased outputs lead to disparities in patient care, resource allocation, or administrative decisions? Is the system’s decision-making process transparent enough to identify and address bias?
Step 4: Evaluate Existing Controls and Identify Gaps
For each identified risk, document what controls are currently in place and whether they’re adequate. Many organizations will find that their existing controls partially address AI risks but leave significant gaps.
Common gaps include: no BAA that specifically addresses AI data handling, no policy governing employee use of generative AI tools, no validation process for AI systems used in clinical settings, no monitoring of AI system outputs for accuracy or bias, no incident response procedures specific to AI system failures or data exposures, and no documentation of AI systems in the technology asset inventory.
For each gap, assess the likelihood and potential impact of the associated risk. This is the same risk evaluation process you use in your standard SRA — you’re just applying it to AI-specific scenarios.
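If your SRA uses a likelihood-by-impact scale, the same scoring can be applied to each AI-specific gap you identify across the categories in Step 3. The sketch below assumes a 1-to-5 scale and illustrative priority bands; adapt it to whatever scoring methodology your existing SRA already uses.

```python
# Likelihood-x-impact scoring sketch on a 1-5 scale. The scale, category names,
# and band thresholds are illustrative assumptions, not a mandated methodology.
RISK_CATEGORIES = [
    "data_privacy_security",
    "clinical_safety",
    "regulatory_compliance",
    "operational",
    "bias_fairness",
]

def score_gap(likelihood: int, impact: int) -> tuple[int, str]:
    """Return a numeric score (1-25) and a remediation priority band for one gap."""
    score = likelihood * impact
    if score >= 15:
        return score, "high: remediate immediately"
    if score >= 8:
        return score, "medium: remediate on a defined timeline"
    return score, "lower: schedule for a future cycle"

# Example: no BAA provision covering model training on a high-volume
# documentation tool (hypothetical scoring).
print(score_gap(likelihood=4, impact=5))  # -> (20, 'high: remediate immediately')
```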
Step 5: Develop and Implement Remediation Plans
Prioritize remediation based on risk level. High-risk gaps need immediate attention. Medium-risk gaps should be addressed within a defined timeline. Lower-risk gaps can be scheduled for future remediation cycles.
Practical remediation actions typically include: developing an AI acceptable use policy that defines what tools employees can use, how they can use them, and what data can be entered into AI systems; updating BAAs with AI vendors to include specific provisions for data handling, model training, data retention, and breach notification; implementing technical controls such as data loss prevention tools that detect ePHI being entered into unauthorized AI platforms; establishing a governance process for evaluating and approving new AI tools before they’re deployed; and training staff on AI risks and organizational policies.
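On the technical-control point, the check a DLP tool performs is conceptually simple even though real products are far more sophisticated. The sketch below shows the kind of pattern matching that might flag likely ePHI in text bound for an unapproved AI endpoint; the patterns are deliberately simplified and would miss most real ePHI, so treat this as an illustration of the idea rather than a workable control.

```python
import re

# Simplified patterns for identifiers that often indicate ePHI. Real DLP tools
# use dictionaries, ML classifiers, and context; these regexes are illustrative.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn_like": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "dob_like": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def flag_possible_phi(text: str) -> list[str]:
    """Return the names of patterns that matched, for review or blocking."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

# Hypothetical prior-authorization draft headed to a personal chatbot account
print(flag_possible_phi("Pt John Doe, MRN: 00482913, DOB 04/12/1957, denied auth"))
# -> ['mrn_like', 'dob_like']
```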
Integrating AI Risk Assessment into Your SRA
The most effective approach isn’t to create a separate AI risk assessment process running parallel to your existing SRA. It’s to integrate AI risk assessment into your annual Security Risk Analysis as a natural extension of the work you’re already doing.
When you conduct your security risk analysis, you already inventory technology assets, identify threats and vulnerabilities, evaluate existing controls, and develop remediation plans. AI risk assessment follows exactly the same structure — you’re adding AI systems to your asset inventory, adding AI-specific threats to your threat analysis, evaluating AI-specific controls, and incorporating AI remediation into your overall plan.
This integrated approach has several advantages. It eliminates redundant effort. It ensures AI risks are weighted against all other security risks in a single framework. It produces a single, comprehensive risk assessment document rather than multiple disconnected reports. And it aligns with how OCR evaluates compliance — they’re looking at your overall security program, not a separate AI checklist.
Common Mistakes in AI Risk Assessment
Assessing only the tools you know about. If you only assess AI tools that went through a formal procurement process, you’re missing the biggest risk category: unauthorized use. The shadow AI problem in healthcare is substantial. Your assessment needs to actively discover AI tools in use across your organization, including personal accounts and browser-based tools.
Relying solely on vendor assurances. AI vendors will tell you their product is HIPAA-compliant, secure, and proven. These claims need verification. Request SOC 2 reports, review their security practices, examine their BAA terms carefully, and ask specific questions about data handling, model training, and data retention. If a vendor can’t or won’t provide clear answers, that’s a risk factor in itself.
Treating AI risk assessment as a one-time project. AI is evolving rapidly. New tools are being introduced constantly. Existing tools are being updated with new capabilities. Your organization’s use of AI will change throughout the year. AI risk assessment needs to be an ongoing process, not an annual exercise. Build AI tool evaluation into your change management process so that new AI deployments are assessed before they go live.
Focusing only on data privacy. Data privacy is critical, but it’s not the only AI risk. Clinical safety, algorithmic bias, operational dependency, and vendor lock-in are all legitimate risks that need assessment. A narrow focus on data privacy will leave significant blind spots in your risk profile.
Creating policies nobody follows. An AI acceptable use policy that staff don’t know about or don’t understand is worse than useless — it creates a false sense of security. Policy development must be accompanied by training, communication, and enforcement. Staff need to understand not just what the policy says, but why it matters.
What Regulators Expect
OCR hasn’t published AI-specific enforcement guidance yet, but the trajectory is clear. The proposed HIPAA Security Rule update expands requirements for technology asset inventories, risk assessments, and documentation in ways that clearly encompass AI systems. OCR’s existing enforcement framework — which consistently focuses on whether organizations have identified their risks and taken reasonable steps to address them — applies directly to AI.
If your organization experiences a breach involving an AI system and you can’t demonstrate that you assessed the AI-related risks, implemented appropriate safeguards, and trained your workforce, OCR will view that as a compliance failure. The fact that AI is “new” or “complex” won’t be an acceptable explanation for failing to conduct a risk assessment.
State regulators are also paying attention. Several states have introduced or passed legislation addressing AI in healthcare, including requirements for transparency, bias testing, and patient notification when AI is used in care decisions. Your AI risk assessment should account for state-specific requirements in every jurisdiction where you operate.
Getting Started
If your organization hasn’t conducted an AI-specific risk assessment, the best time to start was six months ago. The second-best time is now. Here’s a practical starting point:
Start with discovery. Send a brief survey to department leaders asking what AI tools their teams are using. Review your vendor list for AI-powered products. Check your network traffic for connections to known AI platforms. Talk to your IT team about what they’re seeing. The goal is to build an initial inventory — it doesn’t have to be perfect on the first pass.
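For the network-traffic piece of discovery, even a crude check of exported proxy or DNS logs against a list of known AI platform domains will surface unauthorized use. The sketch below assumes a CSV export with a "domain" column; that log format is an assumption about your environment, and the domain list is a small illustrative sample rather than a complete catalog.

```python
import csv

# Known AI platform domains to look for; a small illustrative sample only.
KNOWN_AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "copilot.microsoft.com",
    "claude.ai",
}

def find_ai_traffic(log_path: str) -> set[str]:
    """Scan a proxy/DNS log export (CSV with a 'domain' column) for AI platforms."""
    seen = set()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row.get("domain", "").strip().lower()
            if domain in KNOWN_AI_DOMAINS:
                seen.add(domain)
    return seen

# Example usage (the file path is hypothetical):
# print(find_ai_traffic("proxy_export.csv"))
```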
Classify what you find. Sort your AI inventory into high, medium, and lower risk categories. Focus your initial detailed assessment on the high-risk systems — the ones that touch ePHI, influence clinical decisions, or involve vendors with unclear data practices.
Assess the high-risk systems first. Use the framework outlined above to evaluate data privacy, clinical safety, regulatory compliance, operational, and bias risks. Document your findings using the same format as your SRA so they can be integrated into your overall risk profile.
Develop your AI acceptable use policy. Even before you’ve completed a full assessment, establishing clear guidelines for employee AI use can mitigate your most immediate risk: unauthorized use of AI tools with patient data.
Integrate into your annual SRA. Going forward, include AI systems in every annual Security Risk Analysis so that, as your AI inventory grows and evolves, AI risks are continuously assessed and managed alongside all other security risks.
Medcurity helps healthcare organizations integrate AI risk assessment into their security risk analysis process. Our platform and compliance experts provide the framework, tools, and guidance to identify AI risks, document your assessment, and build remediation plans that satisfy both current HIPAA requirements and the upcoming Security Rule update. Whether you’re just starting to think about AI risk or looking to formalize an existing process, we can help.