While there are still some analog ways to detect that content was created with the help of AI, those implicit visual tip-offs are increasingly disappearing.
The limited release of Sora 2, OpenAI’s latest video-generation model, has only hastened this development, experts at multiple AI detection companies tell Fast Company—meaning we may soon be entirely dependent on digital and other technical tools to wade through AI slop. That has ramifications not only for everyday internet users but also for any institution with an interest in protecting its likeness or identity from theft and misappropriation.
“Even [for] analysts like me who saw the evolution of this industry, it’s really hard, especially on images,” Francesco Cavalli, cofounder of one of those firms, Sensity AI, tells Fast Company. “The shapes, the colors, and the humans are perfect. So without the help of a tool now, it’s almost impossible for the average internet user to understand whether an image or a video or a piece of audio is AI-generated or not.”