Today, we send simple commands as prompts for machines to answer. We still manage the complex workflows, apply the tools, and make the decisions needed to achieve the broader goal. Imagine a world where complex tasks are not delegated to humans but executed by agents that have the creativity, agency, and tools to satisfy their stated goal. This is the grey, messy middle ground we struggle with today: prioritising the “right” set of decisions, and expecting machines to do it better than we do.
So, it should come as no surprise that there are many definitions of an agent (Willison, 2025a). Here are a few:
- Software that uses AI and tools to accomplish a goal requiring multiple steps (Shah, 2024)
- Systems where LLMs dynamically direct their own processes and tool usage, maintaining control over how they accomplish tasks (Schluntz and Zhang, 2024)
- An application that attempts to achieve a goal by observing the world and acting upon it using the tools that it has at its disposal (Wiesinger, Marlow and Vuskovic, 2025)
Simply put, an agent has agency. So what is agency?
- Agency refers to the capacity of an agent to act independently, make decisions, and achieve goals using reasoning, tools, and available information.
With this common ground of agency established, let me add one more definition:
- An agent is an application that accomplishes a goal independently of human intervention.
There are three important parts to this definition:
- Application. It’s technology. A piece of software.
- Goal. It has an objective to get something done.
- Independent. It can operate autonomously to achieve its stated goal.
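The three parts of this definition can be illustrated with a minimal sketch. All names here are hypothetical, chosen for this example only: the *application* is a small `Agent` class, the *goal* is a plain-text objective, and *independence* is a loop that selects and applies tools without human intervention. A real agent would use a model to choose actions and judge completion; this stub hard-codes both.

```python
def word_count(text: str) -> int:
    """A trivial example tool the agent can apply."""
    return len(text.split())

class Agent:
    """Application: it's a piece of software."""

    def __init__(self, goal: str, tools: dict):
        self.goal = goal      # Goal: an objective to get something done
        self.tools = tools    # Capabilities the agent may act with
        self.done = False

    def step(self):
        # In a real agent, a model would decide which tool to use next;
        # this stub simply applies the first available tool to the goal.
        name, tool = next(iter(self.tools.items()))
        result = tool(self.goal)
        self.done = True      # A real agent would test goal completion here
        return name, result

    def run(self):
        # Independence: loop autonomously, with no human intervention,
        # until the goal is judged complete.
        observation = None
        while not self.done:
            observation = self.step()
        return observation

agent = Agent(goal="Count the words in this sentence.",
              tools={"word_count": word_count})
print(agent.run())
```

The point is not the (trivial) logic but the shape: software, a stated goal, and an autonomous loop. Vendors differ in how each part is realised, which is the subject of the next paragraph.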
However, software vendors vary widely in how they build the application, define the goals, and achieve independence. And that’s okay. But before jumping into implementation details, we need to align on the elements of “agent design” within the context of applications, goals, and independence. So let’s do that.