We know how to fix peer review
It's an alignment problem. We can test this empirically, then publish the results in a journal of coordination science.
Two days ago, the Astera Institute announced that it's giving up on peer-reviewed journals:

> I no longer believe that incremental fixes are enough. Science publishing must be built anew. I help oversee billions of dollars in funding across several science and technology organizations. We are expanding our requirement that all scientific work we fund will not go towards traditional journal publications. Instead, research we support should be released and reviewed more openly, comprehensively, and frequently than the status quo.
This is very good news. Peer review is broken; everyone who understands how it works knows this. I think much better systems already exist, in theory. I’ve grown confident of this because I keep seeing it validated in the real world, in how scientists on the frontier already interact through social media.
Let me now try to spell out what this better system looks like, and how we can go about executing and testing it. I also want to make a meta point: this problem is itself an example of something a “coordination scientist” (aka “alignment scientist”) would work on, yet there is nowhere to publish such work or get recognized for success at it. That, too, is an alignment/coordination problem.
The Solution: Reward Honest Reviews
Elisabeth Bik makes a living peer reviewing microbiology papers and calling out bad science, as part of her “Science Integrity Digest” blog. Her work has led to “1331 Retractions, 215 Expressions of Concern, and 1074 Corrections (as of December 2024)”. She has found a way to fill this gap in the market AND be rewarded for it: she makes ~$2000 a month from her Patreon.
Sabine Hossenfelder does a similar thing on YouTube. She makes more money because of her bigger audience. The majority of her audience is the general public, but she does speak to & have an impact on academia, as evidenced by scientists attacking her. They attack her because her claims, calling a lot of the work BS, erode the field’s credibility (which hurts its funding). And because she speaks as an authority, an insider to the field, she’s not easily dismissed.
Both Hossenfelder and Bik are able to do work that is extremely important & valuable, and that few people are doing, because they’ve found a way to get recognized & rewarded for it. That’s it. Solve the (1) recognition and (2) reward problems, and the rest follows.
Ok, but HOW exactly do you reward honest reviews?
Here is a “day in the life” of the ideal system, end to end:
1. A young researcher with a promising idea tweets about it. She has 5 followers on Twitter.
2. Someone like me, who is trying to launch his science communication career, finds it & amplifies it. I do this by making it legible & @-ing the people who I think could move the needle.
3. Someone with a big audience, like Sabine, sees it, is impressed, and endorses it.
4. The researcher gets funded.
5. The researcher delivers a novel result, which rewards (a) her, (b) Sabine for being the one to break the news, and (c) me for being the first to spot it.
Notice that at every step of the way, every action is rewarded, and NOT necessarily by the same system. Sabine runs a YouTube channel that makes a lot of money, which requires feeding the content machine. Anyone who can surface novel, useful content to Sabine helps her. She doesn’t need to pay you for it, because you’re compensated by the attention her specific audience brings (it’s what gets the researcher funded).
What I get out of this is something very special: attention from those on the frontier. People see that I can recognize a novel, useful idea BEFORE it gets big. This is extremely valuable to (1) those with money who want to put it into the next big thing and (2) those with novel ideas who want to be recognized, and want resources & collaborators. Every “closed loop, end to end” success brings more attention & resources for me to wield.
I’ve been funding myself by converting this status & attention from people on the frontier into money & opportunities. Money-wise: I have a grant from Kanro, and one from Analogue Group. Opportunity-wise: I have my name on an upcoming NeurIPS paper through the memetics work I’m doing, and maybe a full-time job if I can deliver on what the project needs¹.

What does it mean to work on coordination / do alignment science?
I claim that the reason we haven’t yet solved coordination is that, just like with peer review, we have NOT yet figured out how to reward the behavior/outcomes we want:
> i think people don't understand that the way to solve coordination is by figuring out how to reward coordination. The better we get at that, the faster it gets fixed (align incentives, and the pieces & skills & funding materialize)
Source: https://x.com/DefenderOfBasic/status/1930070741476675757
A journal of “coordination science” does not yet exist². There isn’t even anyone whose job is to “solve coordination”. When I tweeted this, people said “that’s just economics” or “that’s just anthropology”, but they’re wrong, because no closed loop exists. What I want is to:
1. Come up with a theory of coordination.
2. Test it in the real world.
3. Either it creates the predicted value, OR it fails and lets me update my theory & write a paper on that.
Economists or whoever can write their little theories, but then they have to fight over prestige, status, and funding. If your theories worked, you could use them to fund yourself. If you are NOT a well-known researcher, or you’re outside of academia (gasp!), you can find smaller, second-tier companies or individuals who are willing to try your methods. You can’t pay them, but you can give them ideas. They can’t pay you, but they can execute & let you know if it worked.
Figuring out how to reward science communication & honest peer review causes a lot more of it to happen, AND it surfaces those who are best at delivering value. Similarly, figuring out how to reward solving coordination causes a lot more of it to happen, AND the best at it will surface. My theory here is that we should reward it by the outcome itself, NOT by any proxy metric. This gives you an unfakeable signal (the same principle behind “Unfakeable signals of good faith”).
The solution to something like peer review is to come up with a specific system that aligns the incentives.
The solution to something like coordination is to get good at the meta-skill of creating systems that align incentives. The portfolio of someone who works on coordination should be a list of end-to-end closed loops: theory → predicted outcome. You show that you are good at coordination by demonstrating your ability to solve coordination problems.
I basically wanted to be an open source memetic engineer, but that job didn’t exist, so I marketed it into existence so that I could apply for it. If I am a good fit for the job, I’ll do it and get paid. If I am not, I’ll take my reward in “attention/status” and keep looking. This is what it means to work on coordination: you get rewarded directly by the “downstream” benefits, the fruits of the coordination. It’s a risk, because it requires betting on your own competence & the success of your ideas. But that is a feature, not a bug.
Another reason a “coordination journal” doesn’t exist: if you figure out a method to create a ton of value, it makes more sense to just capitalize on it yourself than to “give it away” for nothing and do the difficult & expensive work of explaining and spreading it. One way around this is for the explanation to double as marketing. When a company like Stripe blogs about how it is so productive, it makes the company look good, and it also spreads useful information. Tying marketing to surfacing novel information is another “coordination theory” that, if applied, should make the companies that adopt it a lot of money. It requires that the company (1) has the competence to find & surface novel information and (2) benefits from transparency, because it is doing good work and isn’t hurt by consumers learning more about how it operates.