The promise at the heart of the artificial intelligence (AI) boom is that programming a computer is no longer an arcane skill: a chatbot or large language model (LLM) can be instructed to do useful work in simple English sentences. But that promise is also the root of a systemic weakness.
The problem comes because LLMs do not separate data from instructions. At their lowest level, they are handed a string of text and choose the next word that should follow.
If the text is a question, they will provide an answer. If it is a command, they will attempt to follow it.
Read more | THE ECONOMIST