Hacker News
Let’s Publish Everything (columbia.edu)
84 points by luu 7 hours ago | 6 comments





The lede got buried here. This article does not just propose "let's publish everything"; it proposes "let's publish everything and have endorsement as a completely separate step." I think that is a far more rational approach than the current model. The current model assumes that the only way to distribute information is by printing on paper. That is clearly no longer the case; in fact, it is a laughable assumption. So let's get the papers published online, and then separately have discussions leading to various kinds of endorsements. Or not, depending on the strength of the arguments.

The same thing is trying to happen in scientific publication that's been happening in every other published medium: scarcity is gone. So why not embrace the abundance? Well, if there's any contrary case to be made, it's that the explosion in quantity inevitably reduces the average quality. In the arts, low quality corresponds to that ineffable quality of shittiness. In news, it's fakeness. In science, it's irreproducibility. So what do you do? All the former built-in, scarcity-based ways of filtering for quality went out the window along with the scarcity itself. You can no longer use the limited space on physical sheets of papyrus (or vinyl LP records, or celluloid) as an excuse to exclude things. So you need new ways of sorting good-quality work out from the (suddenly vast) quantities of mediocre work. Seems like you're doing pretty well if you're lucky enough to have a whole new class of people step in to help with the filtering part. Unless, of course, you believe a scientific discovery is fundamentally the work of a person, an ego, rather than a fact that existed long before that particular person happened to discover it.

Democratization ends elitism; unfortunately elites sometimes really do consist of the best of a given thing. Especially newer elites. Every elite starts out as a meritocracy and ends up some kind of weird legacy-based cabal/cartel that deserves to be overthrown (that's the one you're probably accustomed to thinking of when you hear the term "elitism").


(I agree with the parent poster, but this seems like the right place to add the following.)

One thing that people who claim "the scarcity is gone, let's just publish everything" miss is that one scarcity remains: the time and attention of researchers.

Let me expand a bit on that from my own field, the study of reconnection in astrophysical plasmas. As you can tell from the description, it is not a well-defined, closed topic. Instead there are tons of more or less adjacent fields: the study of solar flares (which might be triggered by reconnection), the study of coronal heating (the heat might be due to magnetic field energy getting converted to heat by reconnection), astronomical observations of AGN jets (reconnection might be what produces the energetic particles we see in the SED of those sources), observations of pulsar wind nebulae (the energetic particles there might have been accelerated at the termination shock, or due to reconnection), observations of giant pulses in (some) pulsars (which might be due to reconnection at the Y point of the current sheet just outside the light cylinder), and so on. On top of the related physical topics, I need to keep on top of developments in the simulation method I use (particle-in-cell codes) and in several related simulation methods (either because improvements there might make them viable for studying reconnection in ways that weren't possible before, or because a neat new trick in method X might also improve the characteristics of PiC codes).

If I wanted to, I could easily spend 100 hours per week just reading papers. But staying current with the field is just 5 or maybe 10 percent of my job. So what I do is the following: papers that fall close enough to my special topic, I will read. All of them. I will actually print them out, annotate them, go over them with a fine-toothed comb. For many other papers I will read the abstracts (10 to 50 each morning) to find the 1 or 2 papers that are worth reading. And this is where selection by editors and limits on publication come in: they rank important papers. I am much more likely to read "a novel approach to plasma simulation" if it was accepted into the Astrophysical Journal than if it just appears on arXiv, because ApJ does not like pure code papers. So if it made the cut, 2 or 3 experts in the field deemed it worth the time of the community.

Now that doesn't mean that all arXiv papers are bad, or that we should publish less, or anything like that. When I talk to a colleague and ask about a technical detail, I am SOOOO happy when they say "it's in the arXiv paper ID 1906.bla". And on the other hand there is the notion of "it was in Nature, but it still might be right". Bottom line is:

Do not discount the sorting by topic (numerical vs observational vs theoretical), impact ("here is one data point" vs "here is a completely new approach"), and quality ("I'm not too convinced, but maybe it gives somebody an idea" vs "holy smokes, how did we all miss that!") that journals provide. Any better, future alternative needs this sorting and ranking. Just dumping it all onto the internet is not the solution.


>>> Publishing in Psychological Science and PNAS has value because these journals reject a lot of papers.

That seems to be the crux - scientific papers will have to fall into two camps "blogs or basically deciphering the lab notes of everyone in your field" and "look this is a real effect and worthy of your attention"

People seem to want the second without wading through the first - but I don't think you can have one without the other.


> That seems to be the crux - scientific papers will have to fall into two camps "blogs or basically deciphering the lab notes of everyone in your field" and "look this is a real effect and worthy of your attention"

This ignores reputation and expertise. If you're studying the sociology of Hollywood, there's Gabriel Rossman and maybe six other people who do quantitative work. They all know each other and read each other's work. If one of this small group has a grad student who does quantitative sociology on rich datasets from IMDB or somewhere else, that student will be introduced around at some point during their doctorate. The student is also part of other networks within sociology: economic sociologists, Princeton graduates. All of these networks are relatively small, and reputation and gossip travel quickly. People hear that others are good, or not getting tenure, etc.

Sociology is a normal social science. Papers go from an idea, to a sketch of an idea or maybe a poster, to a conference paper or graduate-seminar discussion, to a working paper, before finally being published. The peer-review step is the last one in the diffusion of knowledge about what's good. If people stopped publishing in legacy journals tomorrow, peer review would still happen; it'd just be post-publication peer review of working papers, the system Einstein published all but three of his papers under.

You can get the second without personally wading through the lab notes of everyone in your field, because if you're trying to get attention you need to present your results and advertise them so people will care, unless they already care because you have a reputation.


> the authors of the above article, and other people who present similar anti-replication arguments

Study replication is good, and fairly rare.




