Although AI is changing the media, how much it’s changing journalism is unclear. Most editorial policies forbid using AI to help write stories, and journalists typically don’t want the help anyway. But when consulting with editorial teams, I often point out that, even if you never publish a single word of AI-generated text, it still has a lot to offer as a research assistant.
Well, that assertion might be a bit shakier now that the Columbia Journalism Review has gone and published its own study of how AI tools perform in that role, in some specific journalistic use cases. The result, according to CJR: AI can be a surprisingly uninformed researcher, and it might not even be a great summarizer, at least in some cases.
Let me stress: CJR tested AI models in journalistic use cases, not generic ones. For summarization in particular, the tools—including ChatGPT, Claude, Perplexity, and Gemini—were asked to summarize transcripts and minutes from local government meetings, not articles or PowerPoints.
So some of the results may go against intuition, but I also think that makes them much more useful: For artificial intelligence to be the force for workplace transformation it's so often hyped to be, it needs to deliver helpful output in workplace-specific use cases.
Read more | FAST COMPANY