The problem with AI agents
Flash crashes are probably the most cited example of the risks posed by automated agents – programs that have the power to take action in the real world without human supervision. That power is the source of their value; the agents that precipitated the flash crash, for example, could trade far faster than any human. But it’s also why they can cause so much harm. “The great paradox of agents is that the very thing that makes them useful – that they’re able to accomplish a range of tasks – involves giving up control,” says Iason Gabriel, a senior staff research scientist at Google DeepMind who focuses on AI ethics.
“If we continue on the current path … we are basically playing Russian roulette with humanity.”
Yoshua Bengio, Professor of Computer Science, University of Montreal
Agents are already everywhere – and have been for many decades. Your thermostat is an agent: it automatically turns your heater on or off to keep your house at a specific temperature. So are antivirus software and Roombas. Like high-frequency traders, which are programmed to buy or sell in response to market conditions, these agents are all built to carry out specific tasks by following prescribed rules. Even agents that are more sophisticated, such as Siri and self-driving cars, follow predetermined rules when performing many of their actions.
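To make the distinction concrete, here is a minimal sketch of a rule-following agent in the thermostat mold – illustrative code written for this piece, not drawn from any product mentioned here. It senses a single value and applies a fixed rule; there is no learning and no open-ended goal.

    # Illustrative only: a rule-based agent that senses one value and
    # follows a fixed rule, with no learning and no open-ended goals.
    class ThermostatAgent:
        def __init__(self, target_temp: float, tolerance: float = 0.5):
            self.target = target_temp    # desired temperature, e.g. 20.0 degrees C
            self.tolerance = tolerance   # dead band to avoid rapid on/off switching
            self.heater_on = False

        def step(self, current_temp: float) -> bool:
            """Apply the fixed rule: heat when too cold, stop when warm enough."""
            if current_temp < self.target - self.tolerance:
                self.heater_on = True
            elif current_temp > self.target + self.tolerance:
                self.heater_on = False
            return self.heater_on

    agent = ThermostatAgent(target_temp=20.0)
    for reading in [18.2, 19.1, 20.8, 20.3]:
        print(reading, "->", "heater on" if agent.step(reading) else "heater off")

Every behavior such an agent can exhibit is written out in advance by its designer – which is exactly what the new generation of agents leaves behind.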
But in recent months, a new class of agents has arrived on the scene: those built using large language models. Operator, an agent from OpenAI, can autonomously navigate a browser to order groceries or make dinner reservations. Systems like Claude Code and Cursor’s chat feature can modify entire code bases with a single command. Manus, a viral agent from the Chinese startup Butterfly Effect, can build and deploy websites with little human supervision. Any action that can be captured in text – from playing a video game using written commands to running a social media account – is potentially within the purview of this kind of system.
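The basic loop behind such a system can be sketched in a few lines. The code below is a simplified, hypothetical illustration – call_llm and the tool names are placeholders invented for this article, not any vendor’s actual API. The model proposes its next action as plain text, the program carries it out, and the result is fed back in.

    # Illustrative only: a bare-bones LLM-agent loop. call_llm and TOOLS are
    # hypothetical placeholders, not a real model or product API.
    def call_llm(prompt: str) -> str:
        """Stand-in for a language-model call; a real agent would query a model here."""
        if "Observation" in prompt:
            return "stop()"                              # canned reply: finish after one step
        return 'search("dinner reservations for two")'   # canned first action for the demo

    TOOLS = {
        "search": lambda query: f"results for {query!r}",
        "click": lambda target: f"clicked {target!r}",
    }

    def run_agent(goal: str, max_steps: int = 5) -> None:
        history = f"Goal: {goal}"
        for _ in range(max_steps):
            action_text = call_llm(history)     # the model proposes its next action as text
            name, _, arg = action_text.partition("(")
            arg = arg.rstrip(")").strip('"')
            if name not in TOOLS:               # an unknown action (or "stop") ends the run
                break
            observation = TOOLS[name](arg)      # the program executes the action
            history += f"\nAction: {action_text}\nObservation: {observation}"
            print(action_text, "->", observation)

    run_agent("book a table for two on Friday")

Everything such an agent does flows through that text channel, which is why any task that can be expressed in text falls within its reach – and why giving it more tools means giving it more ways to act without oversight.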
LLM agents don’t have much of a track record yet, but to hear CEOs tell it, they will transform the economy – and soon. OpenAI CEO Sam Altman says agents might “join the workforce” this year, and Salesforce CEO Marc Benioff is aggressively promoting Agentforce, a platform that allows businesses to tailor agents to their own purposes. The US Department of Defense recently signed a contract with Scale AI to design and test agents for military use.
Researchers, too, are taking agents seriously. “Agents are the next frontier,” says Dawn Song, a professor of electrical engineering and computer science at the University of California, Berkeley. But, she says, “in order for us to really benefit from AI, to actually [use it to] solve complex problems, we need to figure out how to make them work safely and securely.”

That’s a tall order. Like chatbots, agents can be chaotic and unpredictable. In the near future, an agent with access to your bank account could help you manage your budget, but it might also spend all your savings or leak your information to a hacker. An agent that runs your social media accounts could ease some of the drudgery of maintaining an online presence, but it might also spread falsehoods or spout abuse at other users.
Yoshua Bengio, a professor of computer science at the University of Montreal and one of the so-called “godfathers of AI,” is among those worried about such risks. What concerns him most of all, though, is the possibility that LLMs could develop their own priorities and intentions – and then act on them, using their real-world abilities. An LLM confined to a chat window can’t do much without human assistance. But a powerful AI agent could potentially duplicate itself, override safeguards, or prevent itself from being shut down. From there, it might do whatever it wanted.
As of yet, there’s no foolproof way to guarantee that agents will act as their developers intend, or to prevent malicious actors from misusing them. And though researchers like Bengio are working hard to develop new safety mechanisms, they may not be able to keep up with the rapid expansion of agents’ capabilities. “If we continue on the current path of building agentic systems,” says Bengio, “we are basically playing Russian roulette with humanity.”