Journalism · 5 min read · Feb 27, 2026

Why Journalists Still Misquote Sources — And How AI Transcription Fixes It

Reporters accurately recall only 40-60% of interview statements from memory. AI-powered transcription with speaker identification changes the accuracy equation.


A reporter wraps up a 45-minute interview with a whistleblower. The source described a specific timeline of events, named three people involved, and clarified twice that one statement was off the record. The reporter’s phone recorded everything. But back at the desk, the notes are a mess — timestamps don’t match, a key quote is paraphrased from memory, and the off-the-record boundary is fuzzy.

This happens every day in newsrooms. Interview accuracy is the foundation of journalism, yet the tools most reporters rely on — handwritten notes, basic voice memos, manual transcription — haven’t changed in decades. The stakes keep rising. One misquote can kill a source relationship, trigger a lawsuit, or destroy public trust.

The Accuracy Problem

Studies have found that reporters accurately recalled only 40-60% of specific statements from interviews conducted without recording. Even with recordings, manual transcription introduces errors — mishearing technical terms, losing speaker attribution in multi-source interviews, and spending 3-4 hours transcribing a single hour of tape.

For investigative reporters, the problem compounds. A six-month investigation might involve 50+ interviews. Finding the exact moment a source said something specific means scrubbing through hours of audio. Most reporters don’t bother — they rely on their notes and memory, which is how misquotes happen.

The source protection angle makes it worse. When a reporter can’t precisely identify where on-the-record ended and off-the-record began, they either over-censor (killing legitimate quotes) or under-censor (burning a source). Neither is acceptable.

Why Current Tools Fall Short

Handwritten notes capture fragments at best and lean on the 40-60% recall problem above. Voice memos record everything but leave the reporter to transcribe by hand — 3-4 hours per hour of audio — with no speaker labels and no way to search. Outsourcing transcription shifts the labor but puts sensitive source audio in a third party's hands, often with no clear guarantee about retention.

What Changes With AI-Powered Transcription

Accurate Domain Terminology

AmyNote uses OpenAI’s latest Speech API, which handles specialized vocabulary — legal terms, policy jargon, technical language — with accuracy that makes the transcript actually usable without heavy editing. “Eminent domain” doesn’t become “imminent domain.” “Amicus curiae” stays intact.

Speaker Identification With Cross-Session Memory

When you interview the same city council member three times over two months, the system recognizes their voice and labels them consistently across all transcripts. In a press conference with five speakers, each person’s statements are attributed correctly.
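
AmyNote doesn't publish its matching internals, but cross-session speaker recognition is typically built on voice embeddings compared against a stored roster of known speakers. The sketch below illustrates that general idea; the `SpeakerRoster` class, the similarity threshold, the speaker names, and the tiny hand-made embedding vectors are all invented for illustration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class SpeakerRoster:
    """Keeps one voice embedding per known speaker across sessions."""

    def __init__(self, threshold=0.8):
        self.known = {}          # name -> voice embedding
        self.threshold = threshold

    def identify(self, embedding, fallback_label):
        # Match against every known speaker; reuse the best label if
        # similarity clears the threshold, otherwise register a new one.
        best_name, best_sim = None, -1.0
        for name, emb in self.known.items():
            sim = cosine(embedding, emb)
            if sim > best_sim:
                best_name, best_sim = name, sim
        if best_name is not None and best_sim >= self.threshold:
            return best_name
        self.known[fallback_label] = embedding
        return fallback_label

roster = SpeakerRoster()
# Session 1: the council member's voice is registered under her name.
label1 = roster.identify([0.90, 0.10, 0.20], "Council Member Diaz")
# Session 2, weeks later: a near-identical voice maps to the same label,
# so quotes stay consistently attributed across transcripts.
label2 = roster.identify([0.88, 0.12, 0.21], "Speaker 1")
```

In practice the embeddings would come from a speaker-verification model rather than hand-typed vectors, but the roster-and-threshold pattern is the same.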

AI-Powered Semantic Search

Instead of scrubbing audio, you ask: “When did the CFO mention the Q3 revenue shortfall?” Anthropic’s Claude Opus processes the query and returns the exact passage with a timestamp. Across 50 interviews over six months, this turns hours of searching into seconds.
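
As a rough illustration of the retrieval step, the sketch below searches timestamped, speaker-attributed transcript segments for a query. A simple word-overlap score stands in for the LLM ranking the article describes; the segment data and the `find_passage` helper are hypothetical, not AmyNote's actual pipeline.

```python
def find_passage(segments, query):
    """Return the transcript segment that best matches the query.

    Ranking here is naive keyword overlap; in a real system an LLM or
    embedding model would score semantic relevance instead."""
    query_words = set(query.lower().split())

    def score(seg):
        return len(query_words & set(seg["text"].lower().split()))

    best = max(segments, key=score)
    return best if score(best) > 0 else None

# Hypothetical transcript segments with speaker labels and timestamps.
segments = [
    {"speaker": "CFO", "start": "00:12:40",
     "text": "The Q3 revenue shortfall was driven by delayed contracts."},
    {"speaker": "CEO", "start": "00:18:05",
     "text": "We expect hiring to resume next quarter."},
]

hit = find_passage(segments, "When did the CFO mention the Q3 revenue shortfall?")
# hit points at the CFO segment, timestamped 00:12:40.
```

The key point is the data shape: because every segment carries a speaker label and a timestamp, any match can be cited precisely and verified against the original audio.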

Privacy Architecture for Journalism

Both OpenAI and Anthropic contractually guarantee that user data is never used for model training. Audio is encrypted in transit, processed, and not retained on provider servers. All transcripts and recordings are stored locally on the reporter’s device with end-to-end encryption.

No source audio sitting on a third-party server. No confidential interview material feeding into model training pipelines. No data retention by AI providers after processing.

Getting Started

AmyNote handles transcription through OpenAI’s Speech API and AI analysis through Anthropic’s Claude Opus — both with zero-training guarantees. It works for in-person interviews, not just video calls, with support for 120+ languages.

Try it free for 3 days at amynote.app — no credit card required.


Originally published as an X Article.

Ready to try it?

AmyNote is built for professionals who need accurate, private transcription. Powered by OpenAI and Anthropic Claude Opus — both with contractual zero-training guarantees.

3-Day Free Trial — No Credit Card
