Funding scientific research… are we doing it well?
Throughout my career, I have focused on studying collective behaviors and complex systems, aiming to define computational models that reproduce overarching (emergent?) aggregate dynamic properties, for explanation or prediction.
I was thus thrilled to read a post on ArsTechnica about an experiment comparing humans and ants at solving a problem that requires cooperation. Essentially, a group of individuals must move a large object through bottlenecks resembling a maze. The object’s weight (it resembles a piano, hence the ‘piano-mover puzzle’ name) makes it impossible for a single individual to complete the task. Its peculiar shape further complicates determining a trajectory for the movement.


An intriguing aspect of this experiment (which might even be considered for an Ig Nobel Prize, potentially more so than the paper on the phase behavior of cacio e pepe sauce) is that when humans are prevented from communicating, verbally or non-verbally, the group’s overall performance becomes worse than that of a single individual.
The paper is published in PNAS, and if you’re curious - or if you think that my description is unclear, which is probably true - you can take a peek there.
The results, by the way, seem very much in tune with a conjecture described in The Enigma of Reason, by Hugo Mercier and Dan Sperber: their theory is that reason appeared relatively late in our evolution and enabled communication, cooperation, and also the justification of our behavior. They also show that, in several situations, problems that an individual can hardly solve become more tractable for a group of people, provided they can communicate, discuss, and even argue about things like the best course of action in a given situation.
This does not negate the so-called wisdom of crowds (from the book by James Surowiecki): the capacity of large groups of individuals, not necessarily communicating or otherwise coordinating with each other, to collectively come up with a joint solution that performs better than the best individual among them… it just shows that this is not always the case. To be honest, Surowiecki mentions a few cases in which crowds (and markets) fail; the book (at least in my memory of it) is not a universal praise of markets as decision-making mechanisms… but others have often tended to simplify its message.
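To make the statistical intuition behind the wisdom of crowds concrete, here is a minimal, purely illustrative sketch (not taken from Surowiecki’s book or from the PNAS paper; the quantity, sample size, and noise level are made-up assumptions): when many individuals produce independent, unbiased but noisy guesses of a hidden value, the average of their guesses is typically closer to the truth than the vast majority of the individual guesses.

```python
import random

# Purely illustrative toy model (assumed parameters, not from any cited work):
# many individuals independently guess a hidden quantity with unbiased, noisy
# estimates; we compare the crowd's average error with individual errors.
random.seed(42)

TRUE_VALUE = 1000      # e.g. beans in a jar (hypothetical)
N_GUESSERS = 500
NOISE_STD = 250        # assumed spread of individual guesses

guesses = [random.gauss(TRUE_VALUE, NOISE_STD) for _ in range(N_GUESSERS)]

# The crowd's estimate is simply the mean of all guesses.
crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - TRUE_VALUE)

# Count how many individuals are less accurate than the crowd as a whole.
individual_errors = [abs(g - TRUE_VALUE) for g in guesses]
worse_than_crowd = sum(err > crowd_error for err in individual_errors)

print(f"crowd estimate: {crowd_estimate:.1f} (error {crowd_error:.1f})")
print(f"individuals less accurate than the crowd: {worse_than_crowd}/{N_GUESSERS}")
```

Of course, this averaging argument only works when individual errors are reasonably independent and unbiased; correlated errors, herding, and systematic biases are exactly the kinds of failure modes in which crowds (and markets) go wrong.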

One of our collective decisions, as a human group, was that research funding should be allocated by means of competitive schemes. I recently had the chance to discuss the news about a non-competitive research funding scheme, in a post in which I tried to look for papers evaluating different types of research funding schemes.
More recently, one of the authors suggested that I read another PNAS paper, about the costs of competition in distributing research funds. I was quite happy to receive this suggestion, both because the topic interests me and because of the quality of the work. The authors analyze the different types of costs associated with competition: not just the economic ones, which are not trivial to evaluate but are certainly not the only ones.
There are also epistemic costs, related for instance to the fact that high-risk (and high-impact) research is generally not favored in funding decisions. Research project proposals must always claim to be leading towards groundbreaking results… they just need to show a credible operational plan, sometimes including risk analyses and contingency plans. These biases in proposal evaluation influence researchers’ behavior, much like the predominantly bibliometric approach to research evaluation has, in my opinion, contributed to the inflation of published papers (I talked a bit about it in several posts, also dealing with the public goods game), and I would like to remind you to read an important work on this topic.
There are also additional social and ethical costs of competition, intertwined with the epistemic ones, such as the sometimes unbearable pressure to win funding, which has a significant impact on the mental health and work-life balance of researchers, especially in the early stages of their careers. The competitive nature of the overall system also affects its local scale, producing work environments, such as departments, in which colleagues are certainly not encouraged to collaborate, reducing collegiality.
The authors explicitly say that they are not suggesting that more non-competitive, block funding, and a reduction of competition in the system would certainly solve these problems. Nonetheless, they do provide a wide set of references to works in the literature that build a case for a causal link between (at least) the current level of competition and (at least) some of these problems… but this is my point of view. The authors make a crucial point: we are not even attempting to evaluate whether our current approach to managing research funding achieves its intended effects or works effectively. I would add, metaphorically speaking, that we are acting like the humans in the piano-mover experiment who are not allowed to coordinate with each other.

I have often had the impression that academics have an excellent understanding of some portion of reality - its internal workings, mechanisms, and techniques… but they are often pretty bad at putting their knowledge into action just inches (or centimeters, if you prefer) outside their specific area of expertise. I am no exception to this, despite all my efforts. This probably has to do with the way our performance is evaluated, and/or with the way different forms of knowledge are organized and compartmentalized, sometimes in hard-to-escape silos.
So, I join the authors in urging the overall scientific community to do as they did, and as the authors of the paper on “the strain on scientific publishing” did: to start seriously considering the possibility of applying scientific research to scientific research itself, and to ask that funding agencies apply scientific approaches too. We must ask funding agencies to self-evaluate their mechanisms much more substantially and systematically. We are being asked to comply with the recommendations and good practices proposed by the Open Science movement, starting from data availability. I want to subscribe to a simple statement by the authors: “Nonsensitive data that do not raise privacy or data protection concerns should be accessible to the public without restrictions” (and I would extend this consideration to states, local authorities, municipalities, public administrations, etc.).
The authors do propose additional recommendations, which are also very interesting (e.g. systematically evaluating alternative ways to fund research), but this post is getting a tad too long, so I will stop here and strongly suggest that all readers do themselves a favor: read the paper, think it over, and share it with other colleagues.