April 27, 2026

Use of AI Tools and Protected Health Information (PHI)

Overview

As AI tools become more common in day-to-day work, it is important to use them in a way that protects patient information and aligns with our privacy and security requirements.

AI Tools and PHI

At this time, there are no AI tools approved for use with Protected Health Information (PHI) at Vynca.

This includes, but is not limited to:

  • ChatGPT
  • Claude
  • Perplexity
  • Other generative AI or automation tools

PHI may not be entered, uploaded, or processed in any AI tool.

Compliance and IT are actively evaluating AI solutions and will provide updates as approved, compliant options become available.

What This Means in Practice

To protect patient information, workforce members must follow these rules:

  • Do not paste patient information into AI tools
  • Do not upload files containing PHI into AI tools
  • Do not use AI tools to summarize, draft, or analyze content that includes PHI
  • Use AI tools only for general, non-sensitive work

If you are unsure whether something contains PHI, do not use AI tools.

Where to Find More Information

Visit Vynca Connect for additional guidance and resources, including:

  • Privacy & Security updates
  • SOPs and policies
  • Approved tools and workflows

You can also visit the “Lockbox” section for the latest updates. Content will continue to be added, so check back regularly.

Questions or New Tools

If you are considering a new tool, or have questions about appropriate use, please contact the Compliance or IT team before proceeding.

Protecting patient information is a shared responsibility. Thank you for helping maintain the privacy and security of our patients and our systems.