How to estimate a pre-aligned value for a common discussion ground?
post by EL_File4138 · 2023-02-23T10:38:18.489Z · LW · GW · 2 comments
This is a question post.
All sorts of false science circulate in the wild, such as "the earth is flat" and "the pandemic is a government-driven lie", and they keep extending their influence even though the rationalist community has struggled to promote the rational thinking paradigm, which I believe is a strong prerequisite for the alignment question to be solved.
So the question is:
- How to define "absolute truth" under the rational thinking paradigm?
- How to forcefully teach this "absolute truth" to the general public, so that a common consensus can be presumed before any meaningful discussion can exist?
Edit: I find "forcefully" particularly ill-suited to this context, but I can't come up with a better expression at the moment. This question is mainly about the first point: not educating or convincing, but estimating a pre-aligned value.
Second edit: This question might not be well posed, because it merges a long-standing problem with a manipulative intention I did not mean to express. I've changed the title to a more suitable one. Thanks for replying.
*Background: I've recently been struggling, with little luck, to educate some fellow artists with no ML background about generative ML models.*
Answers
answer by localdeity

There are very few groups of people I would trust to correctly choose which "absolute truths" warrant forcibly shoving down everyone's throat, and none of them I would expect to remain in power for long (if they got it in the first place). Therefore, any mechanism for implementing this can be expected in the future to be used to shove falsehoods, propaganda, and other horrible things down everyone's throat, and is a bad idea.
Maybe people with ideas like yours need to be forcefully educated about the above truth. :-P
Maybe you meant "forcefully" less literally than I interpreted it.
↑ comment by EL_File4138 · 2023-02-23T10:56:14.554Z · LW(p) · GW(p)
Yes, that's also why I refused to bring this question up for quite a long time: "forcefully" sounds too close to what a propaganda machine would do.
What I mean in this particular context is some objective truth, like physical law, that should be a matter of consensus. Criticism from my peers suggests that physical law itself may not be an absolute objective truth, so I'm really curious from that standpoint: if there's no common ground one can reach, how can any discussion prove useful?
Replies from: localdeity
↑ comment by localdeity · 2023-02-23T11:14:41.889Z · LW(p) · GW(p)
Let's suppose that lots of people agree on the objective truth, such as the earth being round, and some wackos don't.
Some wackos are complete wackos, in the sense that you should keep an eye on them at all times and make sure you're never alone with them unless you're prepared to defend yourself. Others have a crazy belief or two while being astonishingly functional in all other aspects of their behavior; my understanding is that this is a thing that happens and isn't even rare.
Discussion with the first kind of wacko is dangerous. Discussion with the second kind of wacko about the issue they're wrong about may or may not be helpful; discussion about other things, like organizing a party, may be helpful and good; such a person could even become a friend as you gain confidence in their functionality outside the one issue. Discussion that lets you find out which kind of wacko a person is (and whether they're a wacko) is useful.
Replies from: EL_File4138
↑ comment by EL_File4138 · 2023-02-24T04:41:05.737Z · LW(p) · GW(p)
After some reading, I found a post Scott wrote. I think it's the answer I needed, and it's pretty similar to yours. Thanks!
answer by TAG

People generally reject the teachings of mainstream culture because they belong to a subculture that tells them to. But rationalism is itself a culture with some contrarian beliefs, so why would rationalists know how to fix the problem?
↑ comment by EL_File4138 · 2023-02-24T04:09:19.372Z · LW(p) · GW(p)
Yes, after some reflection I've recognized that my question is really "how to spread the rationalist thinking paradigm", a long-standing problem the rationalist community has not solved. My question was just another way of asking it incorrectly.
answer by alex.herwix

False premise. There is no "absolute truth". I don't want to come across as condescending, but please have a look at any somewhat recent science textbook if you doubt this claim.
I would suggest reframing to: how can we establish common ground that (a) all or most people can agree on and (b) facilitates productive inquiry?
↑ comment by EL_File4138 · 2023-02-23T11:52:34.362Z · LW(p) · GW(p)
And that raises a question: if there is no "absolute truth", then how "relative" is a truth that most people agree on (such as 1 + 1 = 2 in mathematics)?
Sorry if this question seems too naive; I'm at an early stage of exploring philosophy, and views other than objectivity under positivism don't yet seem convincing to me.
Replies from: alex.herwix
↑ comment by alex.herwix · 2023-02-23T17:16:52.970Z · LW(p) · GW(p)
False premise. You seem to be assuming that many people using symbols reliably in similar ways points to anything other than this convention being reliably useful in achieving some broadly desired end. It doesn't.
Your mathematics example is also misleading because it directs attention to "mathematical truths" which are generally only considered to be valid statements within the framework of mathematics and, thus, inherently relative to a particular framework and not "absolute".
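To make the "relative to a framework" point concrete, here is a minimal sketch in Lean 4, assuming only the core library's inductively defined natural numbers (the theorem name is just illustrative):

```lean
-- "1 + 1 = 2" is a theorem only once a framework is fixed: here, Lean's
-- natural numbers, whose definition of addition makes both sides reduce
-- to the same value, so reflexivity (`rfl`) closes the proof.
theorem one_plus_one_eq_two : 1 + 1 = 2 := rfl
```

The proof goes through only because of the definitions in scope; before such a framework is fixed, the string of symbols has no determinate meaning, let alone a truth value.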
As soon as you move to "real life" cases you are faced with the question of how to frame a situation in the first place (also see the "frame problem" in AI research). There is no "absolute" answer to this. Maybe a little bit tongue in cheek, but ask yourself: why is this website called "Less Wrong" and not "Absolute Truth"?
If you are looking to educate yourself, have a look at the following resources; I found them quite insightful.
On philosophy:
Dewey, J. (1938). Logic: The Theory of Inquiry. Henry Holt and Company, Inc.
Ulrich, W. (2006). Critical Pragmatism: A New Approach to Professional and Business Ethics. In Interdisciplinary Yearbook for Business Ethics (Vol. 1). Peter Lang Pub Inc.
On the frame problem:
Vervaeke, J., Lillicrap, T. P., & Richards, B. A. (2012). Relevance Realization and the Emerging Framework in Cognitive Science. Journal of Logic and Computation, 22(1), 79–99. https://doi.org/10.1093/logcom/exp067
Andersen, B. P., Miller, M., & Vervaeke, J. (2022). Predictive processing and relevance realization: Exploring convergent solutions to the frame problem. Phenomenology and the Cognitive Sciences. https://doi.org/10.1007/s11097-022-09850-6
Replies from: EL_File4138
↑ comment by EL_File4138 · 2023-02-24T03:51:56.795Z · LW(p) · GW(p)
This seems useful. Thanks!
2 comments
comment by Dagon · 2023-02-23T15:47:11.627Z · LW(p) · GW(p)
I'd enjoy (perhaps; it will depend on what you actually mean) more exploration of the specifics of your shared truth-seeking with your fellow artists about generative ML models. I don't think it makes for very good general discussion until you have some successes in smaller, more direct interactions.
I am a bit concerned about your framing of how to "educate" the public or your fellow artists, rather than using any sort of cooperative or agency-respecting mechanism.
Replies from: EL_File4138
↑ comment by EL_File4138 · 2023-02-24T04:01:35.501Z · LW(p) · GW(p)
"educate" is used here because I found these kinds of discussions would not be easily conducted if the AI part were introduced before any actual progress can be made. Or, to frame it that way, my fellow tends to panic if they are introduced to generative ML models and related artwork advancements, and refuse to listen to any technical explanation that might make them more understood. There was no manipulative intention, and I'm willing to change my interaction method if the current one seems manipulative.