Short on time and exasperated by the denial of their request, the engineers then went outside of reporting channels and contacted a friend with access to a military satellite that could provide the needed photographs. NASA found out about this request and shut it down as well. The computer simulation was, as expected, inconclusive, and all astronauts on board the shuttle perished on reentry. It was later determined that there was indeed a large hole in the wing of the spacecraft that made a safe return to Earth impossible.
Was this good or bad? What would you have done if you were in charge?
My understanding is: this was the right decision. Had this issue been widely reported, it would have put humanity in a lose-lose position:
publicly abandon the astronauts & condemn them to death
OR launch an extremely expensive repair mission with minimal chance of success, putting MORE lives at risk[1].
I now believe this is a general principle, that it is unavoidable, and that we will find many more cases of it: the best possible world requires some amount of suppression of truth.
If this is true, then it explains why we’ve been stuck solving this problem for so long:
We want the best outcomes
Sometimes, the best outcomes require *some* suppression of truth
This creates an opening for bad actors to abuse this & lead us to worse outcomes
The correct solution here is for us to understand: WHEN is suppression of truth necessary. If we understand this, we can fight all the bad actors who suppress truth, while leaving “the good ones” intact (so you don’t make things worse once you’re in charge).
The reason I am writing this now is that within the two biggest projects I’m working on, “Open Memetics Institute” and “Open Research Institute”, we’re potentially creating mechanisms to surface any truth that we want to find.
Is this dangerous? I think the answer is “yes”.
But for the longest time I couldn’t decide if it was dangerous because:
There exist bad actors who will fight this
There are good reasons why some things are suppressed & will lead to worse outcomes for everyone if you surface them
Basically, I’m saying the Open Memetics Institute needs a theory[2] of (1) how to discern infohazards and (2) how to handle them. We have to do this before we can open the floodgates.
And I think I’ve got it now.
Let me now give you a few more case studies. These, together with the NASA case above, form a “unit test suite” for any correct theory of infohazards.
Case Study 2 - Israel / Palestine on the verge of peace
Both sides have finally come to the table, ready to bury the hatchet. In this scenario there are a LOT of Israelis who do not trust the Arabs/Muslims, AND vice versa. But the pro-peace factions on each side have gotten just enough support.
In the final hours, a rogue Arab group runs off and bombs something.
By some miracle, the Israelis aren’t aware that it was an Arab group behind this attack.
Do you surface this truth?
The two trajectories at this inflection point are:
(A) treat the attack as a rogue one-off, sign the peace treaty. Trust builds, and violence is condemned by both sides as they heal & co-exist.
(B) the attack “confirms” the narrative that Arabs are not to be trusted, that they want to genocide the Israelis, and this causes the pro-peace people to lose their belief in peace. They now believe they must either fight or die at the hands of a people who will never co-exist with them.
This is the same pattern as before. Truth itself is good. What is bad is a “partial truth” that amplifies a specific model and leads to worse outcomes. In scenario A, the truth of the rogue Arab attack may surface years later, but by then it has context: it’s safe to spread because it no longer threatens the real truth that “enough Arabs want peace that they are willing to coexist” (by then they have already been co-existing; you have the proof).
Case Study 3 - Rebel groups fight each other in Andor
In Andor (the Star Wars TV show), there’s a scene where a group of rebels splits in two and fights over a shared resource: one ship that can get them off the planet, but can only fit half of them. Neither group trusts the other to come back for them.
At some point, they are fighting a war of attrition: one group yells to the other across the battle lines:
“You can’t live off of these coconuts forever!”
This is a truth, hurled by the enemy at you. Do you accept this truth & spread it to your group? Or do you suppress it?
The correct answer is to suppress it. This is the same case as before: a “partial truth” that leads you away from truth.
The truth is that your group can likely keep fighting, survive, and get off this planet
The “partial truth” demoralizes you & leads you to the (false) conclusion that you cannot survive, so you’d better give up and let the other group take the ship
This hints at a very important but not well understood dynamic: you should be very skeptical of truth that comes from your enemy, even if it is true.
Thankfully, this is already widely understood within the human memome. People hear true things from someone on the opposite end of the political spectrum, and they often reject them. This is the correct default. It keeps you safe from weaponized partial truths[3].
Future work
The main claim of this essay is that there exist cases where suppression of truth is good. A lot follows from this, if it’s true. Some next steps off the top of my head:
Outline a concrete case of suppression being good & necessary in a romantic relationship
Outline a concrete case of suppression being good & necessary in an individual’s relationship with themselves[4]
Given enough concrete cases of “good & necessary” suppression, articulate the framework for how you can tell in general. What is common/invariant across all the cases of good & necessary suppression?
Apply this to the current world state: (1) there once existed good actors who could handle suppression of infohazards, the people trusted them, and the people thrived. (2) At some point, bad actors co-opted these mechanisms, and now people don’t know who to trust. You cannot attack the suppression without also destroying everything.
How do you safely transition from world state (2) back to world state (1)?[5]
Get this “peer reviewed” by people I trust, and maybe add their reviews here. Top of my list is to get a review from Suntzoogway, who has, as far as I can tell, the most advanced testable theory of “the best possible world”, although he has not yet published much publicly.
Reviews
Really good review of this post by Eric:
The need to suppress wasn’t a necessary evil, but a symptom of already-broken trust where factual content can't flow without becoming misinformation - which is precisely the 'impoverishment of reality' Floridi diagnoses!
[1] I think the best possible outcome here is letting the concerned engineers bubble their concern all the way to the top, and then the people at the top say “look guys, those astronauts up there are dead. It’s too late. There’s nothing we can do”. The reason they didn’t do that is that they worried about this information leaking and triggering a misunderstanding of the situation, like “NASA chooses to abandon its astronauts instead of even trying to save them”, which isn’t really the true story (because it implies they could have saved them).
[2] Note that I say “theory” and not “a set of rules that we hope everyone will abide by”. The mechanism has to be designed such that the good outcome is inevitable. The way to do that is to design “win-win” mechanisms, where you either (1) do the good thing, and that doubles as a costly signal of your cooperation, or (2) do the bad thing, and that serves as a way to “out” you, which reduces your power & influence in the network. For more on this, see: “Unfakeable signals of good faith”.
[3] This of course only works as long as there exists someone on your side that you trust, and who is capable of “filtering” the truth for what is good & useful for you. In the case of Andor, this is the ship’s captain. The rebel fighters can ignore “you can’t survive on these coconuts” even if it’s true, because they believe the captain has more context and will tell them when it’s time to give up vs keep fighting. They trust the captain because he is, literally, in the same boat as them, as opposed to the other group hurling truth at them, who will not suffer the consequences of what that truth does.
[4] I talk about this a little bit in the “Uncanny Valley of Self Awareness”. Basically, sometimes there is a true thing about you that, if you learn it, leads to worse outcomes. In those cases, people naturally reject the truth. But they remain stuck if they confuse this good suppression with bad suppression (the latter keeps them stuck / makes the bad outcome keep happening). This is impossible to solve in the general case until you can identify when you’re pushing away useful but painful truth vs genuinely harmful truth.
[5] This is kind of what I’m trying to do with the “Our Story So Far” series: describing the current state, then describing the next state and how we get there, so we can think it through. If someone has the correct answer, we can “test it”: we can see it unfold, and update the story.
> This hints at a very important but not well understood dynamic: you should be very skeptical of truth that comes from your enemy, even if it is true.
This is a fairly dangerous rule of thumb. Being “skeptical of truth” is kind of a doublespeak in and of itself.
Your opponents will nearly always be those who see the truth differently. The strategy being suggested in essence leads to recursive belief retention with a diminished ability to update priors.
If you dismiss information based on source rather than content, you lose access to perspectives that might reveal blind spots. Opponents often have the strongest incentives to identify and expose your weaknesses, errors, and contradictions. This makes them valuable epistemic resources, not epistemic threats.
Each application of the heuristic potentially compounds error:
- Round 1: Dismiss opponent’s accurate criticism of Position A
- Round 2: Maintain Position A, now with additional evidence against it
- Round 3: Dismiss new criticism as “more enemy propaganda”
- And so on…
This dynamic explains how groups become increasingly divorced from reality. Religious cults, political movements, and ideological bubbles all exhibit this pattern - external criticism becomes proof of persecution rather than signal for course correction.
Soviet leadership dismissed Western reports of famines partly because they came from ideological enemies. Corporate executives have ignored employee whistleblowers because they were “disgruntled.” Scientific establishments have rejected paradigm-shifting research because it came from outsiders.
A more defensible approach might be: evaluate information content independent of source motivation, while remaining aware that framing and selection effects matter. Truth doesn’t become false because an enemy speaks it.
This is very brave... the big problem is who can be trusted to decide what truth to suppress.