Bjƶrn Brembs

These 2 papers provide compelling empirical evidence that competition in science leads to sloppy work being preferentially published in high-ranking journals:
journals.uchicago.edu/doi/10.1
academic.oup.com/qje/advance-a
Using the example of structural biology, the authors report that scientists overestimate the damage of being scooped, leading to corner-cutting and sloppy work in the race to be first. Faster scientists then end up publishing sloppier work in higher-ranking journals.

@brembs I really hope that more scientists adopt preprints and slow science!

@manisha

Yes! That would be a step in the right direction. But we need to get the institutions to support that. So we wrote this:

royalsocietypublishing.org/doi

@brembs @manisha
Perhaps the W3C might be able to propose something? It's the sort of thing they are very good at, perhaps starting with the same distributed protocols that Mastodon and the rest use.

@julesbl @manisha

Yes, this is actually something we explicitly mention in the article. Right on! šŸ‘

@brembs I’ve been kicking around a similar idea to what you propose here for a while and just reached out to the lemmy/ibis devs to get feedback before embarking on building it. We decided the first steps would be to mock up some user experiences to make the goal more tangible.

Would love to get your thoughts as the project matures.

@brembs this sounds like an amazing effort. I’m not sure it’s exactly what I’m aiming for, but there’s enough overlap that it’s worth learning more.

@brembs This is very true in my field as well. Very incomplete data sets in #connectomics get picked up by modelers who run with them, and who then don't understand our lack of enthusiasm for their findings, because the many limitations of the data weren't considered. To be fair, such limitations are buried as deeply as possible in most manuscripts.

@albertcardona

I'm not surprised that their finding in structural biology generalizes to other fields.

And given that this work is essentially a confirmation of a suspicion many have had for a very long time, I think it is also time to ask: how much more data and evidence does science need before we conclude it's time to act?

Or is this a question like how many academics are needed to change a lightbulb? šŸ˜†

@brembs The deeper issue is that of evaluation in academia. At the moment, and for quite some years now, many have taken "more" as better, in both the number of papers and the number of citations, plus the additional axis of perceived importance, i.e., the glamour-journal aura and the use of their impact factor as a rubber-stamp credential.

This house of cards collapses quickly when you consider that, e.g., a large double-digit percentage of papers in glamour journals are never cited at all (which motivated article-level metrics), and that a significant percentage ends up retracted (which invalidates any claim that more citations means better).

To evaluate scientists from their published papers, evaluators have to read the papers, discuss them among themselves, contextualize them to the needs and future of their institution, and make up their mind. There are no shortcuts.

@albertcardona

Exactly.
What I was trying to say is that the data seems by now overwhelming in support of dropping this kind of evaluation.

How can we get scientists to do what they always ask of the public regarding, e.g., the climate crisis: "follow the science"?

I mean, how can we expect anybody to "follow the science" if even we ourselves don't seem to bother with it?

@brembs When I say this, a question I get often is, how can you tell a paper is good, generically speaking?

A good paper comes in many forms. Some well-known forms are, based on attention:

1. A report on findings or tools that other labs are already relying on before it's even published.

2. A report that has been ignored for some time, and suddenly starts being referred to, surfacing as late citations. A "sleeper" paper, one that was ahead of its time.

No. 1 is a favourite of journals fishing for citations to boost their impact factor, since such papers are guaranteed to get many in the first years. But No. 2 signals exceptional, visionary work.

Some other, overlapping forms of papers, based on how foundational the findings are:

3. A report whose findings change how other labs will approach a particular field, or a paradigm within that field, from then on.

4. A report that is referred to from an undergraduate textbook.

Most of these, except No. 1, share the same "problem": it takes years for the field to appreciate them. The impact factor only considers 2 years, and it's journal-wise, not paper-wise: there's a lot of noise. But even paper-wise, article-level citations take time to build up and are very field-dependent, so they aren't reliable either.
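
(For reference, the standard two-year impact factor is a single journal-wide average; a sketch of the usual definition, with notation of my own rather than anything from the papers above:

\[
\mathrm{JIF}_Y \;=\; \frac{\text{citations received in year } Y \text{ by the journal's items from years } Y-1 \text{ and } Y-2}{\text{citable items the journal published in } Y-1 \text{ and } Y-2}
\]

One number for the whole journal, nothing at the level of the individual paper.)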

Truly there are no shortcuts to the evaluation of scientific research. A sensible strategy is hedging one's bets, because the chase for short-term clout can drastically cut out those sweet long-term rewards, and both matter. What also matters a lot is being a scientist yourself, in addition to an administrator, so as to be able to even approach the evaluation problem.

@albertcardona @brembs Allow me to be a jerk and raise another question that would set fire to the whole house of cards: "what is good research, even good science?"
The system, as it currently stands, has all of the problems you have pointed out, and also a problem of trends that rewards publishing on the latest fad that "will change the world". I know more than a few examples of principal investigators whose positions were the result of being in the right place at the right time.