I’ve developed a “review rubric” for ORI[1] that I’m really happy with. For a given piece of information, I want to know which category it falls into for you:
A - “true & useful, AND new to me”
B - “true & useful, but NOT novel”
U - “unclear, undefined, unknowable, or just flat out wrong”
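If it helps to see the rubric as data, here’s a minimal sketch in Python. All the names (`Rating`, `RatedClaim`) are mine, not part of the protocol itself:

```python
from dataclasses import dataclass
from enum import Enum


class Rating(Enum):
    A = "true & useful, AND new to me"
    B = "true & useful, but NOT novel"
    U = "unclear, undefined, unknowable, or just flat out wrong"
    # note: deliberately no "F" (see footnote [2])


@dataclass
class RatedClaim:
    claim: str      # the piece of information being rated
    rater: str      # who gave the rating
    rating: Rating  # A, B, or U


# Example: one person rating one claim.
judgement = RatedClaim(
    claim="Consciousness isn't localized to the brain.",
    rater="alice",
    rating=Rating.U,
)
print(judgement.rating.name, "-", judgement.rating.value)
```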
This is designed to be an “unfakeable signal of good faith”. If I can give you information that you recognize as “A”, then I am “ahead of you” on this subject; I’ve proven my expertise.
If everything I can give you is “B”, that suggests that YOU are “ahead”. If you can then turn around and give me an “A”, then that confirms it.
If you give me something that is a “U”, that could mean that you are completely on the wrong track, OR that *I* am: I fail to recognize the truth you present to me[2].
In a typical unproductive internet interaction, both sides give each other U’s. This is a waste of time for both sides.
In a typical productive internet interaction, both sides are trying to give each other A’s, which may get interpreted as U’s. They then both “drop down” until they can find something that is a “B”. That is the floor. From there, you go up until one of you can give the other an “A”.
This is the “leveling” protocol. If the other person cannot give you any “A”, then there’s not much you can learn from them. This is a test that no one can cheat on.
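A toy sketch of that loop, assuming each side holds claims ordered from most advanced to most basic, with `ask` standing in for actually asking the other person (again, my assumptions, not a spec):

```python
from typing import Callable, List, Optional


def find_floor(claims: List[str], ask: Callable[[str], str]) -> Optional[int]:
    """Drop down the list until something lands as a "B" (the floor)."""
    for i, claim in enumerate(claims):
        if ask(claim) == "B":
            return i  # shared ground: they already know this
    return None  # everything was a "U": no common ground found


def level_up(claims: List[str], floor: int, ask: Callable[[str], str]) -> Optional[str]:
    """Walk back up from the floor until something lands as an "A"."""
    for claim in reversed(claims[:floor]):
        if ask(claim) == "A":
            return claim  # novel AND useful to them: you're "ahead" here
    return None  # you couldn't give them any "A"
```

If `level_up` returns None for both sides, you’ve hit the “not much you can learn from them” case.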
What if someone lies about their rating? What if you tell them something that is a novel breakthrough, but they say “oh, I already knew that, that’s a B for me”?
If they’re lying, it is VERY easy to detect. You can just ask them, “oh! how did you learn about this?” They will have to name a source, or explain how they deduced it. The proof that they’re lying is their inability to give YOU any A’s back.
This protocol is designed to be useful even if you are the only one who uses it. But if it starts gaining traction, then it helps everyone[3].
If a notable person ranks something as “A”, but to you it is a “B”, then you can start to level them: you realize that you know things that they are only just now learning.
If that notable person ranks things as “U” that you know to be true, that shows you the edge of their frontier.
If you rank things “A”, that can surface the people who produce novel, useful insight. I’d LOVE to see who across Substack, for example, you consider an “A”: which of their posts, and what idea IN their posts was an A for you[4].
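As a minimal sketch, assuming ratings were logged as (rater, author, rating) tuples (my assumption; no such log exists yet), surfacing those people is just an aggregation:

```python
from collections import Counter

# Hypothetical ratings log: (rater, author, rating).
ratings = [
    ("me", "writer_1", "A"),
    ("me", "writer_2", "B"),
    ("friend", "writer_1", "A"),
    ("friend", "writer_3", "U"),
]

# Count the "A"s per author to surface who produces novel, useful insight.
a_counts = Counter(author for _, author, rating in ratings if rating == "A")
for author, n_a in a_counts.most_common():
    print(f"{author}: {n_a} A's")
```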
The way I currently use this is to just ask someone, “hey, can you rate this A/B/U?” and drop this image. That way I can use it over any platform.
If it becomes enough of a norm, you can just reference it as “A/B/U”.
EDIT 1:
Someone asked if I can give out a piece of information as a test, so here’s one: The hard problem of consciousness is already solved - at least partially. First of all, we know that consciousness isn’t localized to the brain; it exists throughout the whole body.
It’s more accurate to say “consciousness uses the brain to compute things” than “consciousness is generated by the brain”.
(this is technically multiple ideas; you can extract one and rate it. If we were having a conversation, I’d give you one at a time to rate)
EDIT 2:
is an example of someone whose writing I get a lot of “A”s from. His recent piece, “Subjective Science”, was a B for me (because I’ve already read his twitter threads on the same topic, but I can see how he’s laying the foundation here for his audience, and I’m paying extra attention to his writing right now).

[1] ORI stands for “Open Research Institute”. At its best, this is an actual organization with funding, or a kind of “social network for truth seeking”. Right now it’s more of a concept / a group of people that I work with, with no formal legal entity. The closest thing(s) that exist to it today are Ronen Tamari’s “Semble: social knowledge network for researchers”, and separately what
is building with his network(s). I recently met Dylan Tull & Patrick Connolly and am very impressed with the direction they’re going in.

[2] It’s a very intentional decision to omit an “F”. There is no option for “this is wrong, and I am sure of it”, because in general, people don’t separate “I don’t understand it” from “I understand it, but I think it’s wrong”. You can test whether people understand it by asking them a series of questions, but they may still be convinced that they understand it even if they don’t.
[3] This is the ideal property we look for when engineering cultural change, as I describe in “How to build culture tech”. If it’s useful at small scales, and only gets better as more people use it, then it’s a great recipe for spreading. As opposed to things that only work if everyone buys in, which are difficult to get off the ground. And the worst category, which is: things that work really well if very few people do them, but completely break if everyone does them.
[4] This system is unbeatable because, if a lot of people coordinate to rank junk info “A” just to give it a boost in this algorithm, then their ranking drops: you look at the item, see that it’s a B or a U for you, and that de-ranks them, for you. This system is kind of the “holy grail of algorithms”, and no one has done it at scale yet, because it isn’t really scalable - it ONLY works if people put in the effort to curate & judge things accurately.
So naysayers will say “it will never scale”, but I argue it doesn’t matter. You & your network can rate things accurately, and that will let you surface the best information your collective stumbles across. It will scale alongside your trust network.
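To make that concrete, here’s a toy sketch of the de-ranking dynamic. The flat 0.5 decay and all the names are illustrative assumptions, not a spec:

```python
from collections import defaultdict

trust = defaultdict(lambda: 1.0)  # your personal trust in each rater


def score(item_ratings):
    """Score an item by the trust-weighted count of its "A" ratings."""
    return sum(trust[rater] for rater, rating in item_ratings if rating == "A")


def verify(rater, their_rating, my_rating):
    """After you rate the item yourself, de-rank raters who oversold it."""
    if their_rating == "A" and my_rating in ("B", "U"):
        trust[rater] *= 0.5  # coordinated junk-boosting decays fast


# Example: two raters boosted a junk item; you checked it and found a "U".
item = [("booster_1", "A"), ("booster_2", "A")]
print(score(item))  # 2.0: both boosters start at full trust
for rater, rating in item:
    verify(rater, rating, "U")
print(score(item))  # 1.0: each booster's weight halved, for you
```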