11 Comments
Lincoln Sayger

I'm guessing this is why agreeing with someone on, say, the minimum wage and then asking about the consequences is more successful than pointing out facts about those consequences.

"Okay, let's do it. Why not $100/hr?"

DC Sentence Club

Enjoyed the piece; the analogy to ensemble learning at the end raises some interesting implications. For example, ensemble methods can destroy your interpretability. Also, if you're thinking about mass truth-seeking as an ensemble learning problem, are we in the regime where we have way too many high-bias learners polluting the results, such that we'd gain accuracy by removing some of them? If so, this seems to be at least partially in conflict with your conclusion.
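
A quick toy simulation of that worry (made-up accuracies, nothing from the post): under a plain unweighted majority vote, piling on voters who are right less than half the time steadily drags the ensemble's accuracy down.

```python
# Hypothetical setup: 25 decent voters (65% accurate) plus a growing pool of
# biased voters (40% accurate), aggregated by simple majority vote.
import random

def majority_vote_accuracy(n_good, n_biased, p_good=0.65, p_biased=0.40,
                           trials=10_000, seed=0):
    """Fraction of trials where the unweighted majority guesses right."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        right = sum(rng.random() < p_good for _ in range(n_good))
        right += sum(rng.random() < p_biased for _ in range(n_biased))
        if right > (n_good + n_biased) / 2:
            correct += 1
    return correct / trials

for n_biased in (0, 10, 30, 50):
    acc = majority_vote_accuracy(n_good=25, n_biased=n_biased)
    print(f"{n_biased:2d} biased voters -> ensemble accuracy {acc:.3f}")
```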

Defender

The way you can include everything without it damaging the aggregate model is to weight each input based on performance. Essentially a stock-market kind of system: anyone can "vote", but if you're wrong you're going to lose all your money (in our case, what you lose is trust/weight in the aggregate prediction).

What that means practically is that I'll listen to everyone, no matter how wacky or consistently wrong they are. At worst their perspective adds zero predictive power, but there are usually cases where it's valuable. MAGA, for example, will almost always celebrate any decision that Trump makes. There isn't much signal in that, but SOMETIMES they falter and disagree with him, and that is valuable signal.
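
A minimal sketch of what I mean, loosely in the spirit of the classic weighted-majority / multiplicative-weights scheme (the voters, numbers, and update rule below are illustrative, not anything formal from the essay): everyone stays in the pool, but each wrong call shrinks your weight, so a consistently wrong voice fades toward zero influence instead of having to be removed.

```python
def weighted_vote(weights, votes):
    """Aggregate binary votes, weighted by the current trust in each voter."""
    yes = sum(w for w, v in zip(weights, votes) if v)
    return yes >= sum(weights) / 2

def update_weights(weights, votes, outcome, eta=0.5):
    """Each voter who called it wrong loses a fraction eta of their weight."""
    return [w * (1 - eta) if v != outcome else w
            for w, v in zip(weights, votes)]

weights = [1.0, 1.0, 1.0, 1.0]       # start out trusting everyone equally
history = [                          # (each voter's call, what actually happened)
    ([True, True, True,  False], True),
    ([True, True, False, False], True),
    ([True, False, True, False], True),
]
for votes, outcome in history:
    print(weighted_vote(weights, votes), [round(w, 3) for w in weights])
    weights = update_weights(weights, votes, outcome)
print("final weights:", [round(w, 3) for w in weights])
```

The always-wrong fourth voter never has to be kicked out: after three rounds their weight has decayed to an eighth of a reliable voter's, which is the "at worst, zero predictive power" case.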

Defender

I think this is only possible if you are willing & able to "step inside" the model(s) that you're aggregating over. I don't treat MAGA as a black box: you can weight their confidence given the topic, their incentives, how it feels emotionally in their world, what the stakes are, etc.

Do this for every single worldview and you get a pretty complete picture of human reality (in terms of not just what is true in the world, but what all groups think is true, why some good & useful things fail to spread or get executed, and why bad things persist). This is essentially what I mean when I talk about "The Human Memome Project": piecing this together collaboratively.

DC Sentence Club

Yeah, I like this interpretation and I think I'm generally aligned with it. The goal isn't to develop the most accurate model of the world for just yourself; it's more like figuring out how one would spread truths or truth-seeking behavior *as mediated by the models used by each person (or information network/community, since that level of abstraction makes things way easier)*. That last part is the tricky but important bit.

Antony Van der Mude

One thing that people who study urban legends point out is that even if the facts are false, the legend represents a fundamental fear (or sometimes hope).

This points to another fundamental truth: the main purpose of language and communication is to express emotion - facts are secondary. I learned this from having grown up with dogs. Dogs have a rich language, as much a body language of posture as one of growls, barks, and howls. But it mostly consists of expressing what they are feeling.

I was sitting in the passenger seat of the car when I was 11, with our West Highland White Terrier, Jock, sleeping at my feet. I knelt down to pat him. He didn't want to be patted. He bit my wrist - just two canine teeth through the flesh - glared at me, then closed his eyes again. I got the message.

Dad didn't see it that way. He stopped the car, gave Jock a good thrashing, staunched the blood, then drove to the doctor for a rabies shot. Sheesh! Can't a dog just be left to have a nap?

Jock left our house soon after, since I was the only kid he could tolerate; he hated human children, except me. I still have the scar, though.

To get back to the main point: if you say something that challenges a person's fundamental model, the emotional subtext is that they have been sleeping through life, with only an illusion of safety in their fantasies. The result is that they are liable to bite.

Defender

"the main purpose of language and communication is to express emotion - facts are secondary"

YES. This is such a big idea, but also very simple. Once it's understood, I think a LOT of confusion evaporates. We stop trying to fight against the system and start working with it.

This is what I was trying to explain in “Lima Beans & Butter Beans”: https://defenderofthebasic.substack.com/p/lima-beans-butter-beans-and-manipulation

"if you say something that challenges a person's fundamental model [..] The result is that they are liable to bite"

YES!! This happens over and over, ALL THE TIME, and people get shocked. They can't see that they're attempting to dismantle the fundamental model, despite the other person kicking & screaming (via ad-hoc rational arguments).

The “long way around” is understanding their fundamental model, entering inside of it, seeing why they rely on it, and charting a path alongside them. To come in from outside, knock down the walls, and leave them to deal with the debris is a very hostile act.

People don’t trust or listen to others who are different from them, and for good reason. If I let an outsider knock down my model, which has kept me alive so far, I am either forced to seek safe harbor in their world, or I am left with a broken model, exposed to the elements. It’s a great risk to jump ship, or to change the planks on a ship as it’s sailing.

(but, of course, if the wood is rotting, it will eventually collapse on itself)

colossalmini

makes sense. how 2 increase discernment?

Defender

I was going to end with "Ok but how do we actually increase discernment?? Tune in next time!!"

There are no rules or formulas here. The one thing I can say that applies to everyone is: practice testing your world model. That will tell you whether your world model is getting better or worse.

I wrote about my journey here on "learning to think for myself": https://defenderofthebasic.substack.com/p/geoffrey-hinton-on-developing-your

To tie it back to the diagrams in this essay: I used to hold the strict green/red view, then I just tossed it and started from scratch. I looked at every piece of information and tried to figure out whether it was true or not. I took what random people on Twitter said just as seriously as what experts and people on the radio said. I assumed an equal chance that they were lying or telling the truth.

So it's like clearing the whole space, taking a dot, testing whether it's true or false, and repeating this until I had built a model. Then comparing my model with other people's. Anyone who said something crazy, I'd try to figure out why they believed it. Did they have a blind spot, or did I?
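
If it helps, here's a toy formalization of that loop (my own sketch, nothing rigorous): track each source's reliability as a Beta distribution, start everyone at the same uniform prior (the "equal chance they're lying or telling the truth" assumption), and update it each time one of their claims can actually be verified.

```python
from dataclasses import dataclass

@dataclass
class Source:
    hits: int = 1    # Beta alpha: claims that checked out (+1 prior pseudocount)
    misses: int = 1  # Beta beta: claims that didn't (+1 prior pseudocount)

    def observe(self, claim_was_true: bool) -> None:
        if claim_was_true:
            self.hits += 1
        else:
            self.misses += 1

    @property
    def reliability(self) -> float:
        """Posterior mean: estimated chance this source's next claim is true."""
        return self.hits / (self.hits + self.misses)

# Identical priors: the expert gets no head start over the random account.
expert, twitter_rando = Source(), Source()
for checked_out in (True, True, False, True):
    expert.observe(checked_out)
print(f"expert: {expert.reliability:.2f}  rando: {twitter_rando.reliability:.2f}")
```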

Pixie

Quoting: “If you’re speaking to someone across worldviews, your goal is either (1) to convert them, or (2) to extract insights from their worldview that can improve the discernment of your own—or vice versa.”

Discernment is a super bottleneck! Gosh. Worldviews feel like comparing apples vs cats vs aeroplanes at times.

So, while this might overlap with (2), I'd propose that translating between worldviews is likely essential for mutual discernment. I frequently encounter worldview differences that stem from ontological mismatches: for example, comparing ideas situated at different levels within a causal chain.

Other differences concern how epistemic criteria are justified—for instance, when a worldview interprets the past through differing narrative agents aimed at differing end goals. These presupposed commitments shape the frameworks used to gather and interpret data.

FWIW, the other problematic cases arise when ideas are compared solely based on conclusions drawn from datasets produced by fundamentally different assessment frameworks. Discernment should immediately shift upstream for such cases—from comparing end outcomes towards evaluating the epistemic criteria behind each framework (Otherwise there is literally no basis for comparison).

I’m not sure how exactly to execute the above, but IMO, we can usually instantly “triage” worldview drift by examining how each party justifies their epistemic criteria. It’s especially revealing when one side says things like “this just is,” or “my data using this framework proves it—therefore my framework justifies itself.”

Since these responses instantly expose self-defeating circularity or relativism, the next step is really about each party's willingness and openness toward knowledge sharing and exchange.

Dave

"Show me the incentives and I'll show you the outcome."

-Charlie Munger
