I didn't manage to fit this in, but I wanted to say that one solution to this language problem is to lean further *the other* way, away from accessibility. The fact that we use this common "base" language, English, is deceptive. I think it makes things LESS accessible because, if you are at least aware that we're speaking different languages, you're going to spend some time trying to translate, vs. leaving with an incorrect understanding.
I recently learned about an old internet community that still does this, where a lot of their blog posts are written in made-up languages (https://github.com/DefenderOfBasic/notebook/issues/10). Presumably so that only those who can decode them can participate. It's a good way to speak about potentially dangerous things while still remaining open to anyone who can figure it out.
This is also a little bit similar to a viral post I saw last night (https://x.com/DefenderOfBasic/status/1956167843176697989) that talked about how terrible it is that generative AI means the author can't trust "anything on the internet anymore". And from my perspective, I'm like, THIS IS GOOD! It already WASN'T trustworthy; it was already being used to manipulate you. The fact that you weren't aware of it before, and are aware of it now because it's gotten worse, is actually an improvement.
It's the same thing with languages. Reading other subcultures' stuff is confusing, and leads to a lot of conflict. If the languages were even MORE different, it would be clearer that translation is required, and that would lead to less conflict.
I'm not a mystic or a rationalist, so this may have gone over my head, but it sounds kind of like you're just describing the observation vs. theory dichotomy. There are fields where observation has outpaced theory (astronomy, ML), causing those who want to work at the cutting edge to prefer the observational approach. The ML example is especially telling because it implies that humans are able to construct systems that are too complex for us to predict from first principles (yet? ever?). This seems to apply to plenty of social science fields and, of course, to memetics/the pursuit of trying to be a good online information-spreading agent.
It seems like people who succeed at the observational approach tend to create their own headcanons, cobbled together from the existing zeitgeist. It also seems like they tend to hold onto them loosely and update them often based on how useful they appear. Given that, folks working in this space would naturally gravitate towards your "mystic" archetype.
Can I try answering the homework questions?
1. Let's assume that tpot people (tPotters) can act in the roles of tPosters and tPeers. tPosters say things, and may be genuine or attempting to hijack. tPeers assess the contributions of tPosters and decide whether to believe them.
1.1. Earnest reputation: tPosters share a lot of their observations, feelings, and persuasions. Over time a tPoster produces enough signal that tPeers can conclude the tPoster behaves earnestly, and thus their future contributions are expected to be earnest too. A fake tPoster may have difficulty appearing consistently earnest over time.
1.2. Shared feelings: when tPeers assess each other's contributions, they don't accept them blindly. There's always some shared "feeling" at the base level. They see that they share enough of a way of seeing the world with a certain tPoster that they can credit that tPoster's intuition. With a fake tPoster, something might feel off to the tPeers, and there might not be much of a shared internal shape.
1.3. Spectrum and gravitation: tpot is actually a spectrum, and some clusters share more of these feelings with each other than with other clusters. Because tPeers can deal with spectrums, and not just binary preferences, they assign different weights of credibility to people/clusters based on their feelings (a toy sketch after these answers makes this concrete). They might appreciate some insight from a distant tPoster node because their intuition largely agrees with it, but they will not automatically accept all the other insights from that node. So if tPeers see contributions of a fake tPoster that they don't feel they agree with, they may just conclude this is someone from a different area and not engage.
1.4. All this works on a personal, slow level. That's why tpot wants to be a bit illegible to outsiders, so as not to attract the attention of bad actors. Tpot doesn't seek virality, because it brings lots of participants with unknown reputations and an unknown shared base, and it's harder to disengage from them.
2. Two paths:
2.1. A hacker uses falsified data or claims to have performed an experiment that is extremely hard to reproduce. The hacker has the proper credentials and uses the proper language, institutions, and publication channels. This way the fake piece can go unchallenged by other scientists for a long time. This works better in biology/medicine, etc.
2.2. A hacker starts in traditional science, gains traditional credibility, then branches out into parallel independent work and gains a massive media following. Laypeople believe automatically because of past credentials plus the current popular image. Scientists may not be bothered to debunk someone who now functions outside of the traditional scientific world. If the hacker keeps an affiliation with a large institution on the side, good-faith scientists will be even more unwilling to engage, so as not to elicit the wrath of both common people and large organizations.
In both paths the hacker produces signal that seems credible either to science people or laypeople, and in the absence of resources to check their claims they can stay believable for a long time.
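If it helps to make 1.3 concrete, here's a minimal toy sketch in Python (all names and numbers are invented for illustration) of credibility as a continuous weight rather than a binary accept/reject:

```python
# Toy model of 1.3: credibility as a continuous weight, not accept/reject.
# All names and numbers are invented for illustration.

# How much of "the way of seeing the world" a tPeer feels they share
# with each cluster, on a 0..1 scale.
felt_affinity = {
    "home_cluster": 0.9,
    "adjacent_cluster": 0.6,
    "distant_cluster": 0.2,
}

def weigh_insight(cluster: str, gut_agreement: float) -> float:
    """Combine standing affinity with the gut reaction to this one insight.
    Strong gut agreement can earn a distant node a hearing for one idea,
    but it never transfers blanket trust to that node's other claims."""
    return felt_affinity.get(cluster, 0.0) * gut_agreement

# A distant tPoster posts something that resonates strongly:
print(weigh_insight("distant_cluster", 0.95))  # ~0.19: engage with this idea
# The same tPoster's next post doesn't resonate:
print(weigh_insight("distant_cluster", 0.10))  # ~0.02: quietly disengage
```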
> 1.2. Shared feelings
Yes! I think this is what hits the mark. Many "mystics" answered this by saying, "what are you talking about? there's no bug here! I can tell when people are lying!"
The "bug" happens when someone who does not have this "rigorous intuition", this sensitivity to emotions, who can detect sincerity, but tries to use these methods. They will be lead astray.
There's a paragraph in "Of Water and the Spirit" that I stumbled on after writing this that articulates exactly this. A man tries to lie to the mystic elders who are teaching him something, because it's supposed to be 100% about his internal experience ("so how would they know if I was lying?"), but they can tell immediately. And he is baffled.
> 2.1. A hacker uses falsified data
This is the answer I was looking for! This mirrors the mystic path because the mystic's "source data" is their internal experience and what feelings are triggered.
Therefore, lying about your feelings is a direct analogy to a scientist lying about the evidence they found.
Thank you for playing, Olga, it's great to read your line of thought here and see you deduce the correct answers!!
Thank you for inviting me to play! It's curious to notice how tempting the path of least resistance was (to look up the twitter thread passively), but making the effort to actually form answers in writing helps with thinking things through.
This debate mirrors a long discussion in academia over the past several decades, crossing several disciplines, regarding a movement called 'The Social Construction of Science'. Regular, 'old fashioned', rigorous science was seen as inadequate for, or worse, obstructionist to, real knowledge, not just about your life but about the very things that science professes to know about, like electrons. I was in the middle of this in my academic travels (before I went into Rock-n-Roll!). The equivalent rationalist 'hack' that Defender is asking about is known in philosophy as 'The Problem of Induction'. Science is precisely designed to derive knowledge despite this problem. Social Constructivist-inspired anthropologists went into scientific laboratories in the 1970s and 1980s and (ironically) claimed to have shown that scientists did not in fact surmount this problem, thus something more than science (let's call it Science+) was needed for real knowledge. For a good summary and discussion of why the mission of these 'debunkers' was a failure, see Park Doing, "Give Me a Laboratory and I Will Raise a Discipline".
My answers:
1. I don't know how a community _would_ do this, but I'd guess it lies in the realm of principled rhetorical persuasion. "Can you convince me?" If I'm trying to develop rigorous intuitions, my intuitions _are_ the test I want to apply. Outcomes:
- I can, you and I can work together on this
- I can't, but that doesn't stop either of us from working from within our own frames, perhaps with carefully built APIs (costly method)
- I can't, and the interface-building is also too costly to invest in for whatever reason (hazard or DoS), in which case we've hit the limit on what information can be transmitted between us. The new entrant can't be in the club.
2. Inducing the "you can't claim that because you don't have proof and you can't get proof without making the claim and getting help" crashloop. Hijacking the middle step of the rationalist epistemic process, in other words.
But maybe that's not the "equivalent" -- maybe instead it's "I've found something sufficiently like 'a good logical argument' that your paradigm demands you invest resources in verifying or using it." Goodharting on the crucial step in both cases?
I think you're thinking of it as: each person has access to their own feelings, but they don't trust others' feelings unless they can expose the other person to the experience that generated those feelings?
Doing that is the "full proof" way. It'd be like saying you don't trust someone else's paper, so you have to reproduce it yourself. But scientists generally trust each other's papers.
Same here. A report that "I feel this way about X" will be trusted even if the other person doesn't have direct experience with X.
The missing piece here is: the mystic may not have direct experience with X, but they have direct experience of the person telling them their feelings. *That* can be run through their intuition to detect sincerity.
So, lying about your feelings is a direct analogy to a scientist lying about the evidence they found. But a mystic can "detect false evidence" in a way that isn't always available to the scientist. It'd be like if scientists were exchanging pieces of code that they run on their own machines, as opposed to sharing reports of experimental evidence that is too expensive to repro.
(basically the "gotcha" with this question is the idea that sincerity is detectable if you are a "good mystic")
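The "pieces of code" analogy can be made literal. Here's a minimal sketch (the data is invented) of the difference between a bare report, which a peer can only trust, and a claim that ships with its own means of verification:

```python
# Toy contrast between a bare report and a reproducible claim.
# The data and numbers are invented for illustration.

def bare_report() -> float:
    """A report of evidence: 'the mean was 4.2'. A peer can only trust it."""
    return 4.2

def reproducible_claim() -> float:
    """The data and the computation travel together, so a peer can
    re-run the 'experiment' instead of trusting the report."""
    data = [3.0, 4.0, 5.0, 4.8]  # hypothetical shared observations
    return sum(data) / len(data)

# Verification by re-execution: the analog of the mystic running a
# report through their own intuition rather than taking it on faith.
assert abs(reproducible_claim() - bare_report()) < 1e-9
```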
Good gotcha, annoyingly simple, very illustrative.
Another example I can think of (tell me if I'm misunderstanding something here): the common moderation wisdom that it's often a good idea to empower moderators to make decisions not covered by the letter of the forum rules, because you're unlikely to have a ruleset that covers everything you'd like it to. The effective filter in this scenario is "community members trust the mod's judgment because they can see what the mod is responding to and build a good model of their judgment". "What the mod says goes" _de jure_ looks dictatorial, but doesn't necessarily confer dictatorial powers in practice?
I looked at the quiz and thought, "I don't recognize the language, but I also don't care about the questions. They seem irrelevant and obtuse."
I read further and discovered that, as a scientist, I shouldn't be open to discovery based on intuition. I then learned that had I been a mystic, my intuition would have been the correct way to arrive at insight.
I spent many years working on the cutting edge of science.... there's no map there... there's only the great unknown and a scientist's intuition of where to go next. We spend most of our time intuiting where to look and then, once we find the interesting phenomenon, a great deal more time coming up with rigorous ways to test and explain the new with existing (or sometimes new) scientific language.
But the finding of the phenomena or grand theories.... this is all intuitive insight : a feeling in the gut, a prophetic dream, an unexpected connection made over something utterly unrelated.
When someone says that something is more of an art than a science, it tells me that they don't know science.
There are many things about the universe that I intuit and cannot explain. Sometimes another scientist comes along and does explain them. Science is magic... with a dedicated language and a methodology based on replication. But, at its core, it's still a search with improper tools and a language that cannot capture the full extent of the things it looks at.
I may not know the language of mysticism, but I am familiar with the experience.
I wouldn't call myself a rationalist, but I've observed those spaces for a while now, and I'm philosophically inclined towards them. But any movement has unavoidable issues when trying to perpetuate itself, and rationalism has definitely run into a few of these. Like all ethical theories, it's best to take the good and ignore the rest.
I felt pretty silly seeing your self-description as a mystic - of course it's apt, but somewhat appropriately, I had avoided labeling you in my head, instead just keeping your ideas as kind of a diffuse cloud floating around in the back of it. I do think some of your ontologies are kinda flawed, or obscure more than they reveal (like dark memetics, which I think is more emergent behavior and incentives than it is intentions). But of course, team better times is tautologically a wide umbrella.
> I felt pretty silly seeing your self-description as a mystic
I think this is a language thing. "Mystic = someone who just makes stuff up" was what that meant to me ~a year ago. It means something fairly specific now, which is what I'm trying to translate. "Good at recognizing patterns intuitively" would be another term that's closer. I feel like before I understood this, reading things about mysticism just sounded like gibberish. The idea that you can discern a good one from a bad one sounded completely arbitrary, because it's all made up. But now I can see how someone can discern this and debate intelligently about it. And how there are nuances that different schools of thought may disagree on.
And how there are still "new discoveries" to be made, or ground to tread here (same broadly with religion: the best religious scholars aren't the ones who just look to the past. The things people were studying back then didn't disappear. If all memory of human religion disappeared overnight, I think we would rediscover it from first principles. Not the exact wording, but something isomorphic. The same way we could rediscover all of science again).
> like dark memetics, which I think is more emergent behavior and incentives than it is intentions
I agree with this! I think the best analog to what I mean by dark/open memetics is black hat/white hat hacking. They do very similar things, but the dynamics are asymmetric. Example: an offensive hacker needs only one success to penetrate a system. A defensive white hat needs to secure ALL parts; any ONE mistake they make is all it takes for them to lose. This creates a power differential, which has nudged the white hats to coordinate together to keep up.
The other dynamic here is that the frontier of black hat hacking is necessarily unknown, by definition. And it's a similar thing with "dark memetics". These are the differences I wanted to lay out and study.
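To put rough numbers on that power differential, here's a minimal sketch (assuming independent components and a uniform per-component defense rate, which is a simplification):

```python
# Toy numbers for the black hat / white hat asymmetry: the defender must
# hold N independent components, the attacker needs just one to give way.
# With per-component defense success rate p, the attacker wins with
# probability 1 - p**N, which climbs quickly as N grows.

def attacker_win_probability(p_defended: float, n_components: int) -> float:
    return 1 - p_defended ** n_components

for n in (1, 10, 100):
    print(n, round(attacker_win_probability(0.99, n), 3))
# 1   0.01
# 10  0.096
# 100 0.634  -> even 99%-solid components lose at scale
```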
Intuition is a heuristic. If your intuition is more accurate than not, then it's useful for determining truth.
Some people are born with strong intuition, others have to practice to improve it.
"All suffering comes from the violation of intuition."
-Florence Scovel Shinn
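That first claim can be sanity-checked with a small Bayes computation: model intuition as a binary signal with accuracy p, and it moves you toward truth exactly when p > 0.5. A toy sketch with invented numbers:

```python
# Toy model: intuition as a symmetric binary channel with accuracy p.

def posterior(prior: float, p: float) -> float:
    """P(hypothesis | intuition says 'true'), by Bayes' rule."""
    return prior * p / (prior * p + (1 - prior) * (1 - p))

print(posterior(0.5, 0.7))  # 0.7 -> accurate intuition shifts belief
print(posterior(0.5, 0.5))  # 0.5 -> coin-flip intuition changes nothing
print(posterior(0.5, 0.3))  # 0.3 -> anti-accurate intuition misleads
```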
The computational nature of nature would probably mean that the dichotomy is actually false in reality, as both sides have their downsides and a simple switch separates something that should be whole. Intuition is great for compressing complexity, but fails to address when we are fooled, and we often are, therefore requiring rationality as a sanity check. Yet this process isn't dualistic, and rationality can develop into intuition and vice versa.
Really good. I’m excited to see the rationalist rebuttals!
Maybe you can call it Open M and Dark M for short, since open is such an overloaded word on its own.