The Evolving AI Reality and Confidentiality in the Ombuds Practice
by Reese R(ai)mos
Director - University Ombuds Office, Virginia Tech
To understand where we’re heading, it helps to look back. Two decades ago, I read Ray Kurzweil’s The Singularity Is Near, where he predicted an exponential rise in technology that would culminate in artificial and human intelligence merging around 2045. Some called him unrealistic; others, like Bill Gates, praised his foresight. Whether you see him as a visionary or a dreamer, there’s no denying that AI is rapidly reshaping our world, and, for Ombuds, our profession.
Rather than debate the timing of the “singularity,” it’s more pressing to examine what AI is already doing to our practice. Across fields, technology is transforming how information is created, managed, and used. In July 2025, for example, a robot trained solely on surgical videos autonomously performed a major phase of a gallbladder removal at Johns Hopkins. That’s no longer science fiction; it’s our present.
Closer to home, AI is now embedded in everyday workplace tools. Zoom’s AI Companion can summarize or transcribe meetings automatically. Email clients draft messages for you. Note-taking apps like Otter and Read AI capture and analyze conversations. Useful? Absolutely. Compatible with Ombuds confidentiality? Often not.

Confidentiality in the Age of Default Record
Our biggest emerging challenge is simple: conversations we assume are private may now be recorded without anyone’s intent or awareness. Many devices and apps enable AI features by default, meaning an Ombuds must meticulously disable them across phones, computers, watches, meeting software, and productivity suites.
Even after adjusting settings, the landscape keeps shifting. We are, in a sense, living the Red Queen’s warning from Through the Looking Glass: “It takes all the running you can do to stay in the same place.”
Wearables: The New Confidentiality Threat
A visitor walking into your office wearing glasses used to be unremarkable. Today, Ray-Ban Meta or Oakley Meta glasses can discreetly record audio and video. Likewise, “The Friend” AI necklace or Bee wristbands can passively listen, summarize, and store conversations. What once required someone pulling out a phone now requires nothing more than them showing up wearing their accessories.
We cannot control visitors’ motivations. If someone wants to record, they will find a way. But as Ombuds, we can mitigate risk, set expectations, and safeguard our space as best we can.
Should Ombuds Offices Ban Devices?
In high-security environments, employees deposit phones into lockers before entering. Could Ombuds Offices adopt similar systems? Perhaps, but doing so may unintentionally communicate distrust, and it’s no guarantee against hidden wearables.
Instead, our task is to adapt thoughtfully: reinforcing expectations, adjusting our practices, and staying informed about AI developments that may affect confidentiality.
Action Steps for AI-Conscious Ombuds Practice
To maintain strong confidentiality, consider integrating the following “AI hygiene” practices:
- Review every device you use for work, such as phones, laptops, tablets, and watches, and disable built-in AI functions.
- Avoid using full visitor names in calendars, emails, or any digital system.
- Create an AI usage and privacy policy that outlines how your office uses (or does not use) AI and what you expect from visitors, especially regarding recording or transcription.
- Display this policy prominently on your website, brochures, email signatures, and office materials.
- Verbally reinforce expectations at the start of meetings:
“To protect confidentiality, I’m not using any AI tools, and I ask that you refrain from using or activating AI apps or devices during our conversation. Is that okay?”
- Train all Ombuds Office staff to recognize tools that auto-summarize or auto-transcribe interactions.
- Use privacy-compliant, local systems rather than cloud-based AI.
- Disable AI integrations in Microsoft 365 Copilot, Google Workspace Gemini, Zoom AI Companion, and similar tools.
- Avoid smart note-taking apps during confidential sessions.
- Keep confidential notes offline and outside AI-enabled platforms. Use locked storage for physical materials.
- Use encrypted, non-cloud communication channels when discussing confidential matters.
AI is evolving exponentially, so staying informed about breakthroughs is no longer optional. It’s essential to maintaining the standards of our profession.
Sample Language to Incorporate in Your Ombuds Practice Regarding AI:
University Ombuds Office Policy on AI Usage and Privacy
The Ombuds Office is committed to protecting the standards of practice of confidentiality, impartiality, independence, and informality that are essential to how services are provided. The use of artificial intelligence (AI) tools by the Ombuds will be carefully limited and governed by the following principles:
Confidentiality First
- No confidential information shared with the Ombuds Office will be entered into, stored by, or processed through external AI systems unless explicitly authorized by the visitor and secured through vetted, privacy-protective platforms.
- AI may only be used to support internal, de-identified analysis, trend spotting, or administrative efficiencies—not for recording, monitoring, or profiling individual visitors.
Transparency of Use
- Visitors to the Ombuds Office will be informed if AI tools are being used in any aspect of their interaction, and they will retain the right to opt out.
Human Judgment Prevails
- AI will never replace the Ombuds’ role as a confidential, impartial, and human point of contact. Decisions, guidance, and interventions will always be made by people, not algorithms.
Data Minimization
- Any AI-assisted work will be conducted with anonymized, aggregated data whenever possible to prevent the risk of visitor identification.
Continuous Safeguards
- The Office will regularly review AI systems and developments for compliance with privacy, security, and ethical standards, and will discontinue any use that risks undermining trust.
AI and Confidentiality in the Ombuds Office
The Ombuds Office is a confidential space. We do not use AI to record or analyze your individual conversations, and we ask that you also do not use AI tools or apps to record, transcribe, or share our discussions. If we suspect a recording is taking place, we reserve the right to disengage from further interactions, because engagement with the University Ombuds Office is premised on good faith, trust, and a commitment to confidentiality. The only exception to this policy would be a documented ADA accommodation.
Any limited AI we use is only for behind-the-scenes support, such as organizing systemic data in an anonymized way or enhancing our workshops, but never for maintaining a record about individuals. Your conversations remain private, and human judgment always guides our work.
Confidentiality goes both ways—privacy and trust come first.
*This blog post, and the sample language provided, was drafted with assistance from OpenAI’s ChatGPT, thus inspiring the nom de plume Reese R(ai)mos.

Thank you, Reese. This article is really well structured, so the assistants you consulted show their power. It is also practical, raising awareness in our community and providing guidance on how to declare and position our own use of AI in relation to our code of ethics and to our visitors. As you state, the future is now.