The year 2025 marked a decisive shift in artificial intelligence (AI). Systems once confined to research labs and prototypes began to appear as everyday tools. At the centre of this transition was the rise of AI agents: systems that can use other software tools and act on their own.
While researchers have studied AI for more than 60 years, and the term “agent” has long been part of the field’s vocabulary, 2025 was the year the concept became concrete for developers and consumers. AI agents moved from theory to infrastructure, reshaping how people interact with large language models, the systems that power chatbots such as ChatGPT. In 2025 the definition of an AI agent shifted from the academic framing of systems that perceive, reason and act to AI company Anthropic’s description: large language models capable of using software tools and taking autonomous action.
While large language models have long excelled at text-based responses, the recent change is their expanding capacity to act: using tools, calling application programming interfaces (APIs), coordinating with other systems and completing tasks independently. The shift did not happen overnight. A key inflection point came in late 2024, when Anthropic released the Model Context Protocol.
Read Full Article on TimesLIVE
The protocol allowed developers to connect large language models to external tools in a standardised way, effectively giving models the ability to act beyond generating text. With that, the stage was set for 2025 to become the year of AI agents. The momentum accelerated quickly.
In January, the release of the Chinese model DeepSeek-R1 as an open-weight model disrupted assumptions about who could build high-performing large language models, briefly rattling markets and intensifying global competition. An open-weight model is an AI model whose training, reflected in values called weights, is publicly available. Throughout 2025, major US labs such as OpenAI, Anthropic, Google and xAI released larger, high-performance models, while Chinese tech companies including Alibaba, Tencent and DeepSeek expanded the open-model ecosystem to the point where Chinese models have been downloaded more than American ones.
AI agents expanded what individuals and organisations could do, but they also amplified existing vulnerabilities. Systems that were once isolated text generators became interconnected, tool-using actors operating with little human oversight. These developments quickly found their way into consumer products. Tools such as Perplexity’s Comet, the Browser Company’s Dia, OpenAI’s ChatGPT Atlas, Copilot in Microsoft’s Edge, ASI X Inc’s Fellou, MainFunc.ai’s Genspark, Opera’s Opera Neon and others reframed the browser as an active participant rather than a passive interface.
For example, rather than helping you search for holiday details, an agentic browser can play a part in booking the holiday itself. At the same time, workflow builders such as n8n and Google’s Antigravity lowered the technical barrier for creating custom agent systems, extending what coding agents such as Cursor and GitHub Copilot had already begun. As agents became more capable, their risks became harder to ignore.
In November, Anthropic disclosed how its Claude Code agent had been misused to automate parts of a cyberattack. The incident illustrated a broader concern: by automating repetitive, technical work, AI agents can also lower the barrier for malicious activity.