Why AI Still Can’t Replace Human Interpreters: A Closer Look at the Risks, the Research, and the Realities

The Temptation of AI in a Multilingual World

We live in a world where artificial intelligence assists with driving directions, shopping lists, and even basic chat conversations. It’s no surprise that AI tools are now being explored to bridge language gaps too. But when it comes to critical interpretation—like explaining a diagnosis to a patient, translating courtroom testimony, or guiding a refugee through asylum procedures—the margin for error shrinks to zero.

At VriOpi, we support innovation. We use technology to scale and support our interpreting services every day. But we also draw a hard line: AI is not ready to replace human interpreters.

And the world’s leading institutions agree.


What the WHO Found When It Put AI to the Test

In one of the most eye-opening evaluations of AI-based interpretation, the World Health Organization (WHO) partnered with a multilingual interpretation platform called Wordly, which uses automatic speech recognition (ASR) and AI-generated translations to deliver real-time interpretation during conferences and public health meetings.

The result?

Out of 90 interpretation sessions analyzed by the WHO, 89 contained significant errors—many of which had the potential to cause confusion, misdirection, or even harm.

These weren’t small typos. The errors included:

  • Mistranslation of medical terminology
  • Broken sentence logic
  • Incorrect pronoun usage that shifted meaning
  • Cultural misunderstandings that undermined the intent of the speaker

For a health organization that deals with life-and-death communication across borders, this was an alarming outcome.

The WHO concluded that while tools like Wordly may help reduce cost and logistics in certain non-critical environments, they are not viable substitutes for professional interpreters when precision and clarity are non-negotiable.

This echoes what many language professionals have long feared: AI can mimic language, but it doesn’t understand it.


The ABA’s Position: Unacceptable for Legal Settings

In the legal world, even one mistranslation can have irreversible consequences. That’s why the American Bar Association (ABA) weighed in with a firm position:

Machine translation tools are unacceptably unreliable for use in legal settings.

The ABA highlights several legal and ethical concerns:

  1. Lack of confidentiality – Most AI translation tools (especially cloud-based services) cannot guarantee privacy, putting client data and case strategy at risk.
  2. No accountability – When an AI misinterprets a witness’s words or a police warning, who is responsible? AI tools offer no legal liability coverage.
  3. No context awareness – AI often fails to grasp legal nuance, intent, or cultural norms—especially in languages with layered meanings or formal structures.

In legal systems built on fairness, accuracy, and informed consent, relying on an AI tool to interpret someone’s words is not just risky—it’s potentially unconstitutional.


The Human Element: What AI Still Can’t Replicate

Let’s be clear: interpretation isn’t just about language.

It’s about trust. Emotion. Empathy. Awareness. Interpreters often serve as the emotional and ethical bridge in complex interactions.

Real-world example:

A refugee describing traumatic events in their native language may pause, shift tone, or choose words deliberately to soften emotional pain. A human interpreter understands how to convey this without stripping the sensitivity. An AI might translate it literally, losing the entire human weight of the moment.

Or consider a cancer diagnosis being delivered to a patient whose first language is Arabic or Mandarin. An AI might garble a key term or mistranslate “tumor” as “infection.” A human interpreter, trained in medical terminology and emotional intelligence, ensures the message is accurate, appropriate, and culturally respectful.


Why AI Tools Can’t Be Trusted for Critical Interpretation

Let’s break this down clearly.

1. They Misread the Message

AI doesn’t understand idioms, tone, sarcasm, or subtext. It translates literally, even when a literal rendering makes no sense. Phrases like “break a leg,” “cut him some slack,” or “on the fence” are ripe for misunderstanding.

2. They Can’t Be Held Responsible

When an AI tool leads to a misdiagnosis or wrongful conviction, the software provider is rarely liable. That leaves institutions (and people) exposed to legal and ethical consequences.

3. They Risk Breaking Laws

Many AI platforms are not compliant with HIPAA, GDPR, or other privacy standards. Interpreters are trained to protect confidentiality, and they can be disciplined if they don’t. AI tools? They’re just software—often owned by third parties.

4. They Undermine Trust

Patients, immigrants, and defendants rely on interpreters not just to understand, but to be understood. Trust is built with a human voice—one that listens, adapts, and advocates. Machines can’t offer that.


When AI Has a Place (And When It Doesn’t)

To be fair, AI can be useful—in the right context.

It works well for:

  • Tourist travel assistance
  • Menu translation
  • Navigating signs and schedules
  • Drafting rough outlines or helping interpreters prep for assignments

But that’s where its role should stop.

The moment stakes are high, emotions are complex, or legal and medical accuracy is required, AI must step aside for certified, accountable human interpreters.


A Better Way Forward: Human-Led, Tech-Supported

Rather than hoping AI will “catch up” to human interpretation (and risking lives in the process), the smart approach is to:

  • Expand interpreter education programs
  • Invest in better pay and protections to retain skilled professionals
  • Use secure remote platforms like VriOpi’s OPI and VRI systems to extend access
  • Use AI tools for back-end support, not front-line communication

Technology should make human interpreters more effective—not try to replace them.


What We’re Doing at VriOpi

At VriOpi, we’ve built a language solutions platform that centers humans where it matters most. We offer:

  • 300+ languages, available via phone (OPI), video (VRI), or in person
  • Certified interpreters, trained in legal, healthcare, and public sector domains
  • Strict privacy and compliance protocols (HIPAA-aligned, secure)
  • Technology integration that supports—not substitutes—human decision-making

We believe language access is a human right. That right cannot be automated.


Closing Thought: Ask the Right Question

The question isn’t just “Can AI do it?”

It’s: “What happens if it gets it wrong?”

In healthcare, that could mean a patient misunderstanding a life-altering diagnosis. In court, it could lead to wrongful detention. In school, it could isolate a child.

Until AI can feel, think, and take responsibility, human interpreters remain irreplaceable.


📞 Ready to work with a language solutions partner that puts people first?
👉 Request a Demo and experience what true professional interpretation feels like—with support from the best-trained interpreters in the field.

References

American Bar Association. (2025, June 23). Think AI should replace interpreters? Think again. Retrieved from https://www.atanet.org/advocacy-outreach/think-ai-should-replace-interpreters-think-again/

World Health Organization. (2024). Evaluation report on the use of AI-based interpretation tools in multilingual conferences. Internal publication; summary retrieved from https://www.linkedin.com/posts/chartered-institute-of-linguists_who-report-on-ai-interpretation-activity-7334217039382773760-KEO0

American Translators Association. (2025). ATA Language Services Directory. Retrieved from https://www.atanet.org/directory/

