Sin #6: Funding only rocket science, but not the rocket launcher.

This sin is committed mainly by research funding organisations, but scientists, especially the Silverbacks (Sin #1), are at the very least co-conspirators, as they dominate the review panels for grant proposals. As one of the most pressing issues, especially now that 'Open Science Clouds', 'Commons', 'Platforms' and 'Data Spaces' are suddenly fashionable, this Sin warrants a longer post than most of the others. In our High Level Expert Group report for the European Open Science Cloud, we already warned about most of what follows. As Chair of that group, I was told at the time that our suggestion to change the funding scheme for EOSC and avoid competing projects would be 'very difficult or close to impossible and would take a decade'. Well, a decade has passed; let's see where we are.

Most funding agencies have indeed been mandated to fund 'top' science (whatever the definition may be), and in some cases there is even an explicit policy to exclude the funding of infrastructure from the portfolio. I decided in this post to refrain from mentioning any particular agency, but I encourage everyone who reads this to explore how their national and regional funding system deals with this critical issue.

Let's define 'rocket science' as the 'ideal' research to fund for most agencies: spectacular, highly visible and potentially transformative research that will 'make it into Nature, Science and Cell'. As we saw earlier, such research addresses increasingly complex processes and systems and needs massive amounts of data as well as the assistance of machines. In order to make optimal use of existing and newly generated data as the basis for novel 'actionable knowledge', this type of research needs high-quality, reliable and reusable data and research infrastructures (the 'rocket launcher' in my metaphor). I could extend the rocket launcher concept to the innovation strategy of funding agencies, towards translational research and ultimately to real (societal) innovation, which is often not part of the portfolio either, but I would like to keep that for a later post. Here I will focus on the lack of structural and sustainable funding of the research infrastructures that support contemporary science.

One extreme is that the research funder expects the infrastructure to be (magically) available. In many cases that is wishful thinking, and there is no programme that will specifically fund 'just infrastructure'. In the few cases where infrastructure programmes do give funding to engineers or programmers to build a research infrastructure for a particular scientific domain, my personal experience is that Sin #3 comes into play: highly skilled people start building a 'wonderful rocket launcher' without a full understanding of what active experimental researchers in their domain really need on a day-to-day basis. The communication between these infrastructure developers and the domain specialists is cumbersome, even when it exists. 'Build it and they will come' is one of the most frequently heard slogans and usually leads to utter disappointment. That does not mean the infrastructure does not work per se, but, for instance, the entry and remote-use procedures are so cumbersome that scientists run away screaming after trying three times to even get access to the system. If they do get in, they encounter an environment that seems perfectly logical and consistent to computer scientists or material engineers, but not to domain experts with very little computer, data modelling or engineering skill. So we have a Rolls-Royce-level rocket launcher without rockets.

The other extreme is that funders include a work package for 'infrastructure development' in research grants. This may be even worse than the former extreme, as it leads to what I have earlier defined as 'Professorware'. The infrastructure works and is even much better tuned to the actual needs of the researchers, because it is built by an embedded 'engineer', but it falls apart when there are 'more than 10 users', and do not even ask about proper version control or a service level agreement (SLA)! This approach leads to a myriad of private rocket launchers that can each launch only one particular rocket and are not interoperable with any of the others. This non-interoperability is further stimulated by the request (of the agency and the reviewers) to 'demonstrate how innovative you are', in other words, why your rocket launcher will be 1% better (and very different) than existing ones. That is, if you and the reviewers know of their existence in the first place, because 'publishing' about a rocket launcher in a widely read journal is yet another story (see Sin #5).


[Image caption: After publishing a paper (a surrogate endpoint for rocket science), now what?]

So, is there a middle road that may actually work? I think there is, and I have offered this advice in several evaluation committees of national and international 'infrastructure programmes'. The advice has usually been greeted with terms like 'insightful', 'innovative thinking' or, even worse, 'interesting'. Well, I have learned over 40 years what these terms actually mean...

Now that I am retired and cannot be fired anymore, let me once more repeat my advice in public. I would be very interested to get in contact with funders where a similar 'dual approach' is actually put into practice.

First of all, professional and reliable infrastructure needs to be built by professional engineers, not by scientists or scientific programmers. The latter make Professorware, which in most cases is a very important first step; the term is in no way meant to be derogatory! Professional developers (be they in the public sector or in private companies) should in many cases actually take 'practical professorware' as their starting point and build a professional version of it, with proper version control, separation of development and production environments, and SLA provision. This is a first step towards actual use of the resulting infrastructure, as it was 'designed originally by the people who need it'.

So, instead of 'build it and they will come', we need to move to 'build it on what they designed as professorware' and professionalise that, without unnecessary extra entry barriers or exorbitant user costs. But why would professional engineers build something without a reasonable chance that researchers will actually use it and be able and willing to pay a market-conform fee for that use? Well, researchers are notoriously penny-pinching when it comes to the use of infrastructure. While they find it completely normal to pay for computers, reagents and so on, when it comes to infrastructure reuse they suddenly feel it should either be free, or they will rebuild the next generation of professorware, because their embedded engineer's salary is a hidden (and eligible) cost.

So the solution stares us in the face, does it not?

Funders (not necessarily the research funders) should:

  1. Fund infrastructure development only if it is user-driven (not engineer-driven) and carried out by development teams that include prospective users.
  2. Actively avoid the classical competitive funding schemes (which drive divergence and fragmentation) and set up specific funding instruments for the joint development of infrastructures.
  3. Recognise that many components of infrastructures (access control, basic hardware requirements, metadata models, payment schemes and so forth) are generic and need not be rebuilt over and over for each discipline.
  4. Also recognise that certain elements of an infrastructure are indeed domain-specific (for instance the ontologies used for data representation, or the levels of security and data access control) and insist that these are developed (if not yet available from recognised expert communities) in close conjunction with domain experts.
  5. Actively work on the coherence of already funded projects with a strong infrastructure component and take affirmative action to 'defragment the landscape', even if competitive funding has already created a diverging situation.
  6. Last but NOT LEAST: make the use of infrastructures an 'eligible cost' in subsequent research grants, educating researchers that the reuse of 'other people's data' and infrastructures comes at a cost (very much like PCs, laboratory equipment and reagents) and that these costs are to be budgeted for in each grant proposal!

The latter will also have the desired side effect that overly expensive (public or private) infrastructure services, or malfunctioning infrastructures built without researchers in the loop, will not be sustainable, while well-functioning infrastructures will be, based on the overlapping paid use by successful PIs. This also means that infrastructures will become self-sustaining after a while and will not need endless funding rounds and near-death experiences at every turn.

Is this so difficult? No. This is in fact what already happens in Open Access publishing, where (reasonable and transparent) article processing charges (APCs) are eligible costs for most funders. So why not services for FAIR data publishing, research hotels for highly expensive instruments, and other infrastructural services?

When funders ask me, for example, what it will cost to not only 'require FAIR data' but also make data publishing an eligible cost and monitor the output before closing the grant, I have now moved on from my notorious '5% of the total grant' to '10% of your savings'...

Ginny Hendricks

Chief Program Officer at Crossref. Working to make a difference in open science while transforming community, membership, metadata, and product.


Well said, this chimes with our experience too - we've seen either/or but not this combined approach too much. Although of course #notallfunders 😁 .
