Time Binders
post by Slimepriestess (Hivewired) · 2020-02-24T09:55:55.672Z · LW · GW · 10 comments
This is a link post for https://hivewired.wordpress.com/2020/02/24/time-binders/
My continued exploration of Korzybski and the history of rationality.
Comments sorted by top scores.
comment by Gordon Seidoh Worley (gworley) · 2020-02-24T22:07:23.234Z · LW(p) · GW(p)
> Unfortunately for Korzybski, General Semantics never really took off or achieved prominence as the new field he had set out to create. It wasn’t without some success and it has been taught in some colleges. But overall, despite trying to create something grounded in science and empiricism, over the years the empiricism leaked out of general semantics and a large amount of woo and pseudoscience leaked in. This looks like it was actually a similar failure mode to what had started happening with Origin before I stopped the project.
> With Origin, I introduced a bunch of rough draft concepts and tried to bake in the idea that these were rough ideas that should be iterated upon. However, because of the halo effect, those rough drafts were taken as truth without question. Instead of quickly iterating out of problematic elements, the problematic elements stuck around and became accepted parts of the canon.
> Something similar seems to have happened with General Semantics, at a certain point it stopped being viewed as a science to iterate upon, and began being viewed in a dogmatic, pseudoscientific way. It would eventually spin off a bunch of actual cults like Scientology and Neuro-Linguistic Programming, and while the Institute of General Semantics still exists and still does things, no one seems to really be trying to achieve Korzybski’s goal of a science of human engineering. That goal would sit on a shelf for a long time until finally it was picked back up by one Eliezer Yudkowsky.
This makes me wonder to what extent we fail at this in the rationality movement. I think we're better at it, but I'm also not sure we're as systematic about fighting against it as we could be.
↑ comment by Yoav Ravid · 2020-02-26T10:53:47.748Z · LW(p) · GW(p)
I agree. I love LessWrong (and its surroundings), but I think it hasn't yet lived up to its promise. To me it seems the community/movement suffers somewhat from focusing on the wrong things and from premature optimization.
It also seems that the Sequences suffer from the same halo effect as the author's project (Origin, which I'm not familiar with). They were written more than 10 years ago, ending on a note that there was still much to be discovered and improved about rationality - and even with their release as a book, Eliezer noted his mistakes with them in the preface. Since there seems to be agreement on the usefulness of a body of information everybody is expected to read (e.g. "read the Sequences"), I'd expect there would at least be work or thought on some sort of second version.
Just to be clear, since intentions sometimes don't come through in text, I'm saying this out of love for the project, not spite. I came across this site a bit more than a year ago and have read a ton of content here; I both love it and am somewhat disappointed -
In short, I feel there's still a level above ours.
↑ comment by Raemon · 2020-02-27T00:28:31.264Z · LW(p) · GW(p)
> I'd expect there would at least be work or thought on some sort of second version.
Note that the current version of R:AZ has been updated and is half as long as the original (with some additional edits in the works). There's definitely effort in this direction, it's just a lot of work.
↑ comment by Slimepriestess (Hivewired) · 2020-02-27T05:58:31.895Z · LW(p) · GW(p)
Shorter definitely seems better. Ideally I think there'd be a version that was less than a hundred pages. Something as short and concise as possible. Do we really need to list every cognitive bias to explain rationality? How much is really necessary and how much can be cut?
↑ comment by Raemon · 2020-02-27T21:10:18.854Z · LW(p) · GW(p)
It's a nontrivial operation to figure out what stuff can be cut. The work isn't just listing a bunch of facts, it's weaving them in a compelling way that helps people integrate them. Trimming things down requires new ways of fitting them together.
(Basically I'm saying "yes, people are taking this seriously, and the reason the job isn't done already is that it's hard.")
↑ comment by Wei Dai (Wei_Dai) · 2020-02-25T10:00:51.117Z · LW(p) · GW(p)
> I think we’re better at it, but I’m also not sure we’re as systematic about fighting against it as we could be.
I'm trying to do my part by pointing out misuses or overuses of UDT (for example trying to derive strong conclusions about human rationality from it), at least when I see them on LW, and being as clear as I can about its flaws and inadequacies. I also try to do this for Aumann Agreement [LW · GW] which is another idea that has the potential to become viewed in a dogmatic, pseudoscientific way.
Would be interested in ideas on how to go about doing this more systematically.
comment by Vanessa Kosoy (vanessa-kosoy) · 2020-02-24T20:08:34.517Z · LW(p) · GW(p)
I wonder whether Korzybski was indeed a "memetic ancestor" of LessWrong or more like a slightly crazy elder sibling. In other words, were Yudkowsky or other prominent rationalists significantly influenced by Korzybski, or did they just come up with similar-ish ideas independently?
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2020-02-27T00:21:54.316Z · LW(p) · GW(p)
Yes, via "Language in Thought and Action" and the Null-A novels.
↑ comment by Gordon Seidoh Worley (gworley) · 2020-02-24T22:06:18.351Z · LW(p) · GW(p)
I'm hoping we'll find out in the next post, but I would guess the answer is "yes": General Semantics influenced science fiction writers, who in turn influenced transhumanism and the extropians, out of which SL4, Eliezer, and this whole thing grew. Even if it wasn't known at the time, the ideas were "in the water" in such a way that you could make a strong argument that they did have an influence.
comment by Yoav Ravid · 2020-02-26T10:54:39.571Z · LW(p) · GW(p)
I like the term "memetic ancestors" that you used (coined?).
Typo:
"even if Korzybski gets lets himself get sucked into"