Agents have come a long way…
I think some readers have started to feel a hint of nausea when they hear about LLMs: the new developments (not just in research, but across the entire socio-economic-technical system), the emerging and existing limits, the unsolved issues with training data, the legal challenges, and the discussions about their real capabilities and what they might bring about. Probably, a significant share of the people sharing this feeling were already involved in computer science or ICT before LLMs, whereas younger people are more curious and generally have more positive feelings about LLMs and the news surrounding them. I stumble more and more frequently upon comments that show outright resentment toward LLMs, an acrimony that is perhaps a symptom of envy of the attention they receive and the potential they are showing.
I think there are reasons for this, and they are related to the fact that we are in a phase in which we have this new, promising technology and we are understandably trying to grasp it, its limits, and its potential applications. It’s a pretty wild phase, potentially a revolutionary moment in time, deeply changing the way things are done in a large set of areas in ICT, and in many human activities. It’s also understandable that people working in those areas, who may have spent years building an awareness of the problems and a competence in their solutions, are cautious, conservative, and maybe even angry to see a context they thought they dominated change so radically, so fast.
However, I also think this is not just a psychological, quite human reaction lacking any rational or scientific motivation. Let us consider one of the recent concepts put forward in LLM-related developments, one that deals with a limit of this novel technology. LLMs have been developed with huge investments because of their generality, because they were showing capabilities that were not initially intended; well, maybe someone was hopeful, but there was no guarantee that you could ask an LLM to correct the typos in a text you’ve written, and maybe change its style to make it less colloquial, more formal, and more business-ready, and equally ask the same LLM to be the dungeon master in a role-playing session. Despite their generality, LLMs lack intention. This has been a topic of discussion at the intersection of philosophy and technology, with some arguing that LLMs cannot be considered agents because action is understood as autonomous, intentional behavior.
Agents, and multi-agent systems, are not a new concept in computer science and engineering, though. The concept of agent is basically the starting point of Russell and Norvig’s “Artificial Intelligence: A Modern Approach”, but agents are also instrumental in Genesereth and Nilsson’s “Logical Foundations of Artificial Intelligence”, and certainly in many more books, before and after the ones I mentioned. Those books include reactive agent architectures, with or without an internal state, that are not necessarily provided with a formal, explicit representation of a goal. Think of boids, for instance: very specialized “social” agents exhibiting a nice form of emergent, system-level behavior achieved as a consequence of very simple reaction rules (a minimal sketch follows below). We are, after all, seeking the knowledge and capacity to build artificial systems implementing our goals as modelers, designers, engineers, and users.
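Since the boids example carries the technical core of this point, here is a minimal, illustrative sketch in Python: my own toy rendition, not Reynolds’ original implementation. Each agent reacts only to nearby flockmates through three local rules (cohesion, alignment, separation), with no goal representation at all; the class name, radii, and weights are arbitrary illustrative choices.

```python
import random

class Boid:
    """A reactive agent: no goals, no plans, just local reaction rules."""
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

    def react(self, neighbors):
        if not neighbors:
            return
        # Cohesion: steer toward the local center of mass.
        cx = sum(b.x for b in neighbors) / len(neighbors)
        cy = sum(b.y for b in neighbors) / len(neighbors)
        # Alignment: match the neighbors' average velocity.
        avx = sum(b.vx for b in neighbors) / len(neighbors)
        avy = sum(b.vy for b in neighbors) / len(neighbors)
        self.vx += 0.01 * (cx - self.x) + 0.05 * (avx - self.vx)
        self.vy += 0.01 * (cy - self.y) + 0.05 * (avy - self.vy)
        # Separation: move away from boids that are too close.
        for b in neighbors:
            if abs(b.x - self.x) + abs(b.y - self.y) < 5:
                self.vx -= 0.05 * (b.x - self.x)
                self.vy -= 0.05 * (b.y - self.y)

    def move(self):
        self.x += self.vx
        self.y += self.vy

flock = [Boid() for _ in range(50)]
for _ in range(100):  # flocking emerges from purely local rules
    for boid in flock:
        neighbors = [b for b in flock if b is not boid
                     and abs(b.x - boid.x) + abs(b.y - boid.y) < 20]
        boid.react(neighbors)
    for boid in flock:
        boid.move()

xs = [b.x for b in flock]
print(f"flock x-spread after 100 steps: {max(xs) - min(xs):.1f}")
```

Note that no individual rule mentions “flocking”: the system-level behavior is entirely a by-product of the per-agent reactions, which is exactly the point about reactive architectures.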


It does not come as a surprise, at least to me, that the trend in LLM-related developments has reached the point of including a notion of agency. It is a very obvious development: since LLMs are so good at answering questions, at preparing texts of different kinds, and even computer code, for us, why not delegate more complicated tasks to them? Well, in this case, we naturally need to provide LLMs with an architecture granting them a (to a certain extent) persistent existence, the possibility to interact with external tools, maybe with other LLMs (other agents), and maybe also spontaneous behavior (technically, perhaps achieved as a reaction to the passage of time); a minimal sketch of such a loop follows below. The surprise is that people doing this, even in academia, are essentially neglecting the body of work carried out by the autonomous agents and multi-agent systems research community over more than thirty years. I commented on someone’s LinkedIn post about this point, basically saying that the current trend in computer science and engineering is the opposite of the punk rock movement: punk rock was about “no future”, whereas today’s developments in (at least certain areas of) computer science and engineering are about “no past”, since they do not even bother considering past research and experiences.
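To make those architectural ingredients concrete, here is a minimal, hypothetical sketch of such an agent loop in Python. Everything in it (the `query_llm` stub, the tool registry, the memory list, the tool-call convention) is an assumption of mine standing in for a real model API and real tools, not the interface of any actual framework.

```python
import time

def query_llm(prompt: str) -> str:
    # Placeholder: substitute a call to whatever LLM API one actually uses.
    # Canned reply so the sketch runs end to end without a model.
    return "I acknowledge the event."

TOOLS = {
    # External tools the agent may invoke by name; toy stand-ins.
    "search": lambda q: f"results for {q!r}",
    "calc": lambda expr: str(eval(expr)),  # illustration only: never eval untrusted input
}

class LLMAgent:
    """Persistent memory + tool access + time-driven activity."""

    def __init__(self) -> None:
        self.memory: list[str] = []  # persistent existence across interactions

    def step(self, event: str) -> str:
        self.memory.append(f"event: {event}")
        prompt = ("\n".join(self.memory)
                  + "\nReply directly, or answer 'TOOL:<name>:<args>' to use a tool.")
        decision = query_llm(prompt)
        if decision.startswith("TOOL:"):  # interaction with external tools
            _, name, args = decision.split(":", 2)
            self.memory.append(f"observation: {TOOLS[name](args)}")
            decision = query_llm("\n".join(self.memory))
        self.memory.append(f"action: {decision}")
        return decision

agent = LLMAgent()
for _ in range(3):  # "spontaneous" behavior as a reaction to the passage of time
    print(agent.step("timer tick"))
    time.sleep(1)
```

Stripped to this skeleton, the design is recognizably a sense-decide-act loop over an agent with internal state, which is precisely the kind of architecture the agents literature has been describing for decades.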
What’s funny is that this area of research is not exactly a small one. There is an International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS), which sponsors the annual International Conference on Autonomous Agents and Multiagent Systems (AAMAS), reaching its 24th edition in 2025. The conference is well recognized in the computer science community; for what it’s worth, it even has a very good position in the CORE Conference Rankings. My personal point of view is that, regrettably, research on agents and multi-agent systems has produced results whose impact has not been as deep as what has been achieved by other fields of AI. I remember a provocative editorial by Jim Hendler titled “Where Are All the Intelligent Agents?”, and that was quite some time ago (very good researchers replied to that editorial highlighting achieved results): to some extent, I think he had a point, although he was of course being provocative.

On the other hand, I also think that this is scientific research, and that this is what we think now. Research is a risky enterprise: there is always the chance that some line of work turns out to reveal unexpected or unsolvable issues, or that the motivations behind it change and the whole effort loses its justification. In the mid-nineties some very serious and skilled software engineers were investigating code mobility; ten years later their perspective was obviously quite different. Researchers in the area of autonomous agents and multi-agent systems can be angry right now; they can be thinking that LLMs and LLM-based systems need to integrate additional concepts, mechanisms, and results from their own field, and that what is going on should be acknowledged as not completely new. Similar considerations could be made for semantic technologies, although knowledge graphs are generally much more recognized in LLM research and development (see, events seem to show that maybe Jim Hendler had a point after all).

So, my final point is: I think people using terms and concepts related to autonomous agents for LLM-based developments, especially in academia, are plain wrong in not considering and acknowledging past research, results, and even failures to deliver expected results and impact (we don’t want history to repeat these failures, do we?). On the other hand, researchers in the agents community also need to consider that there’s this niche, a chance to contribute to the overall scientific process, and maybe to change its direction for the better (hopefully). It would be childish not to consider this possibility, as a research community, just because we are being ignored.