
This new AI attack steals models without touching the system

  • Apr 8
  • 1 min read


DIGITAL TRENDS — AI systems have long been treated like sealed black boxes, especially in areas like facial recognition and autonomous driving. New research suggests that opacity offers less protection than assumed.


A KAIST-led team shows that AI systems can be reverse engineered remotely using emissions that leak during normal operation, without direct intrusion. Instead, the approach listens.


Using a small antenna, the researchers captured faint electromagnetic traces from GPUs and reconstructed how the model was designed. It sounds like a heist trick, but the results hold up, and the security implications are immediate.
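The idea can be illustrated with a toy simulation. This sketch assumes, purely for illustration, that each layer type leaks a characteristic frequency while it executes; the signature values, function names, and the nearest-frequency matching are all hypothetical and are not the KAIST team's measurement setup or method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-layer-type "EM signatures" (illustrative values only):
# we assume each layer type leaks a characteristic frequency while it runs.
SIGNATURE_HZ = {"conv": 50.0, "dense": 120.0, "pool": 200.0}


def emit_trace(layer, n=512, fs=1000.0, noise=0.3):
    """Simulate the noisy EM trace leaked while one layer executes."""
    t = np.arange(n) / fs
    return np.sin(2 * np.pi * SIGNATURE_HZ[layer] * t) + noise * rng.standard_normal(n)


def classify_layer(trace, fs=1000.0):
    """Guess the layer type from the dominant frequency in the trace."""
    spectrum = np.abs(np.fft.rfft(trace))
    freqs = np.fft.rfftfreq(len(trace), 1 / fs)
    peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
    return min(SIGNATURE_HZ, key=lambda k: abs(SIGNATURE_HZ[k] - peak))


# "Listen" to a model running layer by layer and rebuild its architecture
# without ever touching the system itself.
secret_architecture = ["conv", "pool", "conv", "dense"]
recovered = [classify_layer(emit_trace(layer)) for layer in secret_architecture]
print(recovered)
```

Even this crude template-matching recovers the layer sequence from the simulated emissions, which is the intuition behind the attack: normal operation leaks structure.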


Read the full story  |  DIGITAL TRENDS




