>This hints at a very important but not well understood dynamic: you should be very skeptical of truth that comes from your enemy, even if it is true.
This is a fairly dangerous rule of thumb. Being “skeptical of truth” is a kind of doublespeak in and of itself.
Your opponents will nearly always be those who see the truth differently. The strategy being suggested in essence leads to recursive belief retention with a diminished ability to update priors.
If you dismiss information based on source rather than content, you lose access to perspectives that might reveal blind spots. Opponents often have the strongest incentives to identify and expose your weaknesses, errors, and contradictions. This makes them valuable epistemic resources, not epistemic threats.
Each application of the heuristic potentially compounds error:
- Round 1: Dismiss opponent’s accurate criticism of Position A
- Round 2: Maintain Position A, now with additional evidence against it
- Round 3: Dismiss new criticism as “more enemy propaganda”
- And so on…
This dynamic explains how groups become increasingly divorced from reality. Religious cults, political movements, and ideological bubbles all exhibit this pattern - external criticism becomes proof of persecution rather than signal for course correction.
Soviet leadership dismissed Western reports of famines partly because they came from ideological enemies. Corporate executives have ignored employee whistleblowers because they were “disgruntled.” Scientific establishments have rejected paradigm-shifting research because it came from outsiders.
A more defensible approach might be: evaluate information content independent of source motivation, while remaining aware that framing and selection effects matter. Truth doesn’t become false because an enemy speaks it.
Another way to think of this is: this default position of "only listen to truth from your trust network" makes sense because it has kept humans alive for a long time. Even if the tribe is incorrect, aligning with their version of reality is still probably safer (because then you're all in the same boat, and if a catastrophe arises, you'll deal with it together).
If you go off on your own, you have to have the confidence that you can deal with all the challenges that brings.
yes, I think you are exactly right. My vision was to turn "culture war" into "culture science". Instead of saying "I don't want to listen to the right wingers" -> "I want to understand them so I can change them, or spread my ideology faster".
This is the obvious path to winning, and yet trying to do this in practice, I have faced resistance. Why is it so hard to get people to listen to their opponents? My theory is that it's because they are afraid of a failure mode, where they get persuaded by their enemy (and NOT because the enemy was correct & true, but because their individual ability to discern truth is weak).
So, the correct path is to increase your discernment of truth, which includes listening to all sides. But if it's so good & advantageous, why doesn't everyone do it? Understanding their fear, helping them overcome it, is what will unblock this. And I think understanding in what ways truth can be weaponized is an/the answer.
This is very brave... the big problem is who can be trusted to decide what truth to suppress.
yes, this is exactly the difficult but necessary follow up question. Given that some truths are harmful, who do you trust to handle that for you?
I argue that a lot of people are already emerging to handle this role. Hank Green is one such case that people trust to do exactly this. Essentially he's taking on the role that was traditionally held by priests, to learn about reality from the chaos of what's "out there", and to translate the important parts to you.
I consider Hank Green a "good actor" because he is technically gatekeeping truth, BUT he increases people's awareness as he does this. He doesn't just say "this is what is good", he also raises the epistemology, he says "this is how I know it's good". This creates a more resilient system because then if he ever makes a mistake, his audience can contribute, and eventually start being able to discern harmful truth for themselves.
Ok, napkin math for what truths to highlight and which to ignore:
- Is the truth, if accepted, helpful in moving you where you want to go?
I think that question concerns everything except the first example. We DO want to save people… but at all costs? I guess once again, “where you want to go” is an individual judgement call. So society would have to cohere around individuals who share a set of values for truth (and its suppression) to be trusted from authority again.
yes! You get it.
I think me saying "suppression" distracted from the point. It's not about excusing blocking people from searching for truth, it's about: an algorithm to decide which truths are worth searching for in the first place, and once you find them, which ones are worth blasting/making legible/spreading.
I was going to write a follow up to this that says "Arbitration of Truth Is Necessary; So Who Do You Trust To Arbitrate?" -> this one is more precise. You go to the doctor, or mechanic, and they say "you need this treatment". They're not giving you the full truth, they're arbitrating. They're saying "you don't need to know the details, this is what you need to know".
The scenario is (1) this is always necessary and (2) given that it's necessary, it's been abused. And now I'd like to describe a world in which, when it is abused, people can tell.
(what this ultimately is going to culminate into is everyone recognizing the media landscape for the adversarial nature that it is. When a journalist writes about something, they are acting as a lawyer. They *should* be biased, to their side. If they pretend they don't have a side, that makes it harder to trust them).
Nice, I really like that forward thinking. Rather than just establishing a new (“good”) authority, make it so that there’s a built-in mechanism by which people operate to notice (and uproot?) those authority imbalances.
This is some serious memetics work… like the Constitution but for communication rather than government.
This reminds me of a “normative value of belief” framework that I’ve been thinking about. Considering agents will change their behavior depending on the information you give them, the framework takes into consideration the normative implication of belief onto action. This is contrasted with judging beliefs purely on their accuracy. I’m sure this is already explored territory, especially with the pragmatists, but I also want to apply some decision theory to it to perhaps find the “best beliefs.”
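As a toy illustration of what that decision-theoretic scoring could look like, here is a minimal sketch that blends an accuracy term with the value of the action a belief induces. The weights, thresholds, payoffs, and function names are invented purely for the example; this is a sketch of the idea, not a worked-out theory.

```python
# Toy sketch of the "normative value of belief" idea: score a candidate belief
# both on how accurate it is and on how well the actions it induces turn out.
# All weights, thresholds, and payoffs below are made up for illustration.

def accuracy_score(p_belief: float, truth: bool) -> float:
    """Brier-style accuracy: 1 is perfect, 0 is maximally wrong."""
    return 1.0 - (p_belief - (1.0 if truth else 0.0)) ** 2

def action_value(p_belief: float, act_threshold: float,
                 payoff_act: float, payoff_skip: float) -> float:
    """Expected payoff of the action an agent takes *given* this belief."""
    return payoff_act if p_belief >= act_threshold else payoff_skip

def belief_value(p_belief, truth, w_accuracy=0.5, w_action=0.5, **action_kwargs):
    """Weighted blend of epistemic accuracy and practical consequences."""
    return (w_accuracy * accuracy_score(p_belief, truth)
            + w_action * action_value(p_belief, **action_kwargs))

# Example: the proposition is true; believing it strongly triggers a useful action.
for p in (0.2, 0.6, 0.95):
    v = belief_value(p, truth=True, act_threshold=0.5, payoff_act=1.0, payoff_skip=0.0)
    print(f"credence {p:.2f} -> combined value {v:.2f}")
```

Setting `w_action` to zero recovers the pure-accuracy view; anything above zero is the "normative" correction the framework is gesturing at.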
I am then also reminded of the value and principle of honesty, and that those with norms of honesty get quite upset when they perceive that they have been lied to, even if it was for their own good. Take the COVID vaccine case for example. The negatives were generally downplayed and the positives were overstated, and as a result I would bet there was a net increase in vaccine deniers and increased ammunition against the government. I’ll generalize it: when the government lied/overemphasized/covered up something, even when it was motivated by good intentions, it seems to have ended up damaging trust.
Perhaps the damage to trust isn't a bad thing either, now we have institutions and individuals checking one another. But, the distrust could become a bad thing if side A starts triggering attacks based on false positives, which then creates motives for genuine retaliation for side B, then true positives for side A, and then a war starts (I would wager this is how many conflicts have started). So then, either we should loosen our grips on our honesty norms, and/or be a lot more careful about not making false positives (this is where the accuracy value of belief comes in handy!), and/or just have grace god dammit!!
yeah, this is why the answer to most questions of "is this good or bad" is, "it depends". But this doesn't mean an answer does not exist! It depends on everything else going on in the environment. Given a particular outcome, and a current world state, we CAN make an empirical claim that "belief X is better". It's empirical because we can see whether or not that achieves the expected outcome (via a prediction ahead of time, and we'll know for sure by what unfolds).
Thinking about this, I’m not sure that either outcome in these cases is “better” or “more moral”. We can’t know if the trajectory of the spaceship outcome was better or worse. What if risking some more lives and resources resulted in the advancement of other technologies? Or created economic benefit through the action? I might say these decisions just “are”. They were guided by certain principles, with a goal/outcome in mind, and they predicted how to achieve that outcome, and moved to create it. The success you might be perceiving is possibly simpler: they successfully predicted the outcome of their intervention.
Yes to all that, and:
(1) I think our basis for choosing whether or not to intervene must be our ability to predict what will happen
(2) Our ability to predict is something measurable and can improve (a rough sketch of how that could be tracked follows below)
(3) We can't test the counterfactual world but we can desire the best world that we can imagine for ourselves & loved ones and check if we are moving towards it
I think this is the core of suntzoogway's theory as I understand it. There are always going to be things we don't know that we don't know (what if things need to get worse before they get better?). They made this choice because they couldn't imagine a better possible world. Maybe they were right or wrong, but I need to look at it and decide how I will act upon having this choice in the future.
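On point (2), here is a minimal sketch of one way predictive ability could actually be measured: log predictions as probabilities before acting, resolve them against what unfolds, and track a Brier score over time. The record format, example claims, and numbers are assumptions made purely for illustration, not anything from the post.

```python
# A minimal, illustrative way to make "our ability to predict" measurable,
# assuming predictions are logged as probabilities ahead of time and then
# resolved as true/false. The Brier score is a standard accuracy measure;
# the record format and example data here are assumptions for the sketch.

from dataclasses import dataclass

@dataclass
class Prediction:
    claim: str          # what we committed to ahead of time
    probability: float  # our stated confidence, 0.0 to 1.0
    outcome: bool       # what actually unfolded

def brier_score(predictions: list[Prediction]) -> float:
    """Mean squared error between confidence and outcome (0 = perfect, 1 = worst)."""
    return sum((p.probability - (1.0 if p.outcome else 0.0)) ** 2
               for p in predictions) / len(predictions)

# Compare two periods to see whether our forecasting is actually improving.
earlier = [Prediction("intervention A helps", 0.9, False),
           Prediction("policy B backfires", 0.7, True)]
later   = [Prediction("intervention C helps", 0.8, True),
           Prediction("policy D backfires", 0.3, False)]

print(f"earlier Brier score: {brier_score(earlier):.2f}")
print(f"later Brier score:   {brier_score(later):.2f}  (lower is better)")
```

A falling score over time would be concrete evidence that the group's discernment is improving, which is what point (3) asks us to check.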
A system sees my creation as the “most moral” outcome. It took a lot of genocide, slavery, interpersonal harms, and personal suffering to create me. I also want to exist. Other people also view my existence as morally wrong, and making the world a worse place. Does it still create me?
So far it makes me think of how political ideologies suppress truth in order to propagate: the church suppressing heliocentrism research because of the “dangerous” ideas the new model would bring (whatever they were), or, since I am watching Chernobyl, the state suppressing truth (even denying truth) about how bad the situation was in order to protect state interests.
In both situations, suppression was bad because A) the reason for suppression was unfounded and based on fear, and B) the method was violent.
If an ideology is actually beneficial overall + no violence, then is suppression okay?
The question here is, how to decide whether the ideology/mission warrants the suppression
Okay, I should have finished the piece to the end; now I see your point. It is about getting it back to people we can trust. I guess the pro/con consideration becomes whether the benefit from suppressing truths outweighs the risk of the suppression system being corrupted again, because I feel any system of control is very likely to be corrupted at some point.
Yeah, I could have made this more clear, but the short answer is like, truth is good, period. The problem is, speaking truth doesn't mean the receiver hears truth. Worse: this can be weaponized. You can't fact check propaganda that is 100% true.
Just because something is 100% true doesn't make it NOT propaganda (as shown in the case of the rebel groups).
The IDEAL case is every person can decide for themselves what is true. But that is too expensive and also error prone. The better thing for all of our survival is forming trust networks, where if you spot something potentially bad, you can surface it, and you trust others in the network to do the same. You should be default skeptical of anything outside of the network. This is how humans have survived up to this point. In hostile environments, loyalty matters more than meritocracy. Similarly, coherence in a group matters more than truth. Even if your neighbors are wrong, it's better for survival to share the same beliefs about reality (purely because it's hard to survive alone. If there's a catastrophe, at least you will be together)
Of course if you know the truth and are confident you can survive alone, great, do that. Better if you can convince others to come along with you too.
I resonate with your point about trust networks; I think we already engage with that online. For example, there is a small set of accounts that I really trust, which have either been vetted by other accounts I really trust or have shared something that made me trust them. Do you notice something similar with your Twitter feed?
Also I understand your point better now
Fascinating read. One way of thinking about psychotherapy is that it is the illumination of truth, but also that "truth" must come to fruition in a particular way and context in order to be helpful (if not actively damaging). Also that neither party gets to be the absolute arbiter of what "truth" is, that often the truth has fuzzy edges.
yes, I wanted to include a personal example for that reason, but if you just blindly search for truth about yourself, you could end up destroying yourself (by focusing on the worst parts of yourself, which amplifies them, which causes you to get worse, downward spiral)
Learning about the good first, getting a stable foundation, then tackling the bad would be a much better path.
In some cases, the source of a truth being plausibly disconnected from some nodes is more important than absolute suppression. In particular, if proving the connection requires a contextual truth that neutralizes the potential harm, there is minimal danger in the truth being known.
I'm not sure if I fully understand this, but I think this is a really important point. Structuring it in such a way that you either don't know it, or, by virtue of what it takes to know it, you acquire it with enough context. Then you don't need to exert any control; you can set it free with that mechanism.
Really good review of this post by Eric
https://x.com/aporeticaxis/status/1940159148500689086
I think you can get equal or better results with effective framing of information without suppressing the truth, especially in the long run.
Great point. I think this used to be more 'common knowledge', but democratic culture has been pushing the idea that the 'everyman' is qualified to have judgment on everything.
The idea that partial truths are damaging but full truths are good assumes a high standard from the agent. You can imagine a person who, even presented with the whole truth, would do the wrong thing. (Think of the I/P scenario: the terrorist attack could still be carried out by someone who had full understanding. Also consider if there were a choice with NASA between saving the astronauts or saving 1,000 sick children with the same funds. I'm not sure people would make the right choice there.)
I think part of where I'm concerned with your model is that it disregards maturity in the agent as a factor in interpreting the information. It's not just info asymmetry, it's also the quality of interpretation. To use the I/P example: imagine one side knows they can do a deal that will be good for both sides, but will be 'unjust' in some way. Some crimes have to be forgotten, some criminals will get away, but it's good long term. Some people, even with full information, would make the wrong choice. Some would prefer power over peace.
Do you see this problem? How do you fit it in?
You would really enjoy "Philosophy Between the Lines" by Arthur Melzer.
This was quite a demoralizing read.
why! it points to a solution. There's a way through. Not seeing it keeps us stuck in the same cycle