The Evolving AI Reality and Confidentiality in the Ombuds Practice
by Reese R(ai)mos
Director - University Ombuds Office, Virginia Tech
To understand where we’re heading, it helps to look back. Two decades ago, I read Ray Kurzweil’s The Singularity Is Near, in which he predicted an exponential rise in technology culminating in the merging of artificial and human intelligence around 2045. Some called him unrealistic; others, like Bill Gates, praised his foresight. Whether you see him as a visionary or a dreamer, there’s no denying that AI is rapidly reshaping our world and, for Ombuds, our profession.
Rather than debate the timing of the “singularity,” it’s more pressing to examine what AI is already doing to our practice. Across fields, technology is transforming how information is created, managed, and used. In July 2025, for example, a robot trained solely on surgical videos autonomously performed a major phase of a gallbladder removal at Johns Hopkins. That’s no longer science fiction; it’s our present.


As Ombuds, we often find ourselves supporting individuals and groups working in high-stakes, cross-functional, or interdisciplinary environments. These spaces can offer tremendous opportunities for innovation, but they also frequently bring the interpersonal and structural challenges that lead people to our doors. Researchers, in particular, often work within complex collaborations that span departments, institutions, cultures, and differing funding expectations. When roles, expectations, or communication norms are unclear, relational strain can quickly appear. Today’s complex research challenges demand effective teams, yet researchers rarely receive training in the teamwork skills crucial for collaborative success.
In celebration of Ombuds Day 2025,