I’ve been reading the hardcover SSC collection in the mornings, as a way of avoiding getting caught up in internet distractions first thing when I get up. I’d read many of Scott Alexander’s posts before, but nowhere near everything posted; and I hadn’t before made any attempt to dive into the archives to “catch up” to the seeming majority of rationalists who have read everything Scott Alexander has ever written.
Just a note that these are based on the SlateStarCodexAbridged edition of SSC.
I still think that this problem is intractable so long as people refuse to define 'rationality' beyond 'winning'.
https://www.thelastrationalist.com/rationality-is-not-systematized-winning.html
I, in general, try to avoid using the frame of 'rationality' as much as possible precisely because of this intractability. If you talk about things like existential risk, it's clearer what you should know to work on that.
This talk is required reading for designing a tag system: https://idlewords.com/talks/fan_is_a_tool_using_animal.htm
I can also recommend the book The Intellectual Foundation Of Information Organization by Elaine Svenonius.
https://www.thelastrationalist.com/memento-mori-said-the-confessor.html
I don’t see how the law of “people are obligated to respond to all requests for clarifications”, or even “people always have to define their terms in a way that is understood by everyone participating” is somehow an iron law of communication. If anything, it is not an attribute that any existing successful engine of historical intellectual progress has had. Science has no such norms, and if anything strongly pushes in the opposite direction, with inquiries being completely non-public, and requests for clarification being practically impossible in public venues like journals and textbooks. Really very few venues have a norm of that type (and I would argue neither has historical LessWrong), even among those that strike me as having produced large volumes of valuable writing and conceptual clarification.
Some thoughts.
I don’t see how the law of “people are obligated to respond to all requests for clarifications”
I feel like Said is either expressing himself poorly here, or being unreasonable. After all, the logical conclusion of this would be that people can DDoS an author by spamming them with bad faith requests for clarification.
However I do think there is a law in this vein, something more subtle, more nuanced, a lot harder to define. And its statement is something like:
In order for a space to have good epistemics, here defined as something like "keep out woo, charlatans, cranks, etc", that space must have certain norms around discourse. These norms can be formulated many different ways, but at their core they insist that authors have an obligation to respond to questions which have standing and warrant.
Standing means that:
- The speaker can reasonably be assumed not to be acting in bad faith
- The speaker is, in the abstract, a "member of the community"
- It is generally agreed on by the audience that this person's input is in some way valuable
There are multiple ways to establish standing. The most obvious is to be well respected, so that when you say something people have the prior that it is important. Another way to establish standing is to write your comment or question excellently, as a costly signal that this is not low-effort critique or Paul Graham's infamous "middlebrow dismissal".
Warrant means that:
- There are either commonly assumed or clearly articulated reasons for asking this question. We are not privileging the hypothesis without justification.
- These reasons are more or less accepted by the audience.
Questions & comments lacking either standing or warrant can be dismissed; in fact, the author does not even have to respond to them. In practice the determination of standing and warrant is made by the author, unless something seems worthy enough that their ignoring it is conspicuous.
I think you would be hard pressed to argue to me in seriousness that academics do not claim to have norms that people's beliefs are open to challenge from anyone who has standing and warrant. I would argue that the historical LessWrong absolutely had implicit norms of this type. Moreover, EY himself has written about insufficient obligation to respond as a major bug in how we do intellectual communication.
I have this intuitive notion that:
I do think the relevant question is whether your comments are being perceived as demanding in a similar way. From what I can tell, the answer is yes, in a somewhat lesser magnitude, but still a quite high level, enough for many people to independently complain to me about your comments, and express explicit frustration towards me, and tell me that your comments are one of the major reasons they are not contributing to LessWrong.
I agree that you are not as bizarrely demanding as curi was, but you do usually demand quite a lot.
When people talk about "demanding" in this sense, what they're actually doing is making a very low-level reasoning mistake EY talks about in his post on Security Mindset:
AMBER: That sounds a little extreme.
CORAL: History shows that reality has not cared what you consider “extreme” in this regard, and that is why your Wi-Fi-enabled lightbulb is part of a Russian botnet.
AMBER: Look, I understand that you want to get all the fiddly tiny bits of the system exactly right. I like tidy neat things too. But let’s be reasonable; we can’t always get everything we want in life.
CORAL: You think you’re negotiating with me, but you’re really negotiating with Murphy’s Law. I’m afraid that Mr. Murphy has historically been quite unreasonable in his demands, and rather unforgiving of those who refuse to meet them. I’m not advocating a policy to you, just telling you what happens if you don’t follow that policy. Maybe you think it’s not particularly bad if your lightbulb is doing denial-of-service attacks on a mattress store in Estonia. But if you do want a system to be secure, you need to do certain things, and that part is more of a law of nature than a negotiable demand.
Namely: there is a certain level of detail and effort that simply has to go into describing concepts if you want to do so clearly and reliably. There are inviolable, non-negotiable laws of communication. We may not be able to precisely define them, but that doesn't mean they don't exist. We certainly know some of their theoretical aspects thanks to scholars like Shannon.
I think a lot of what Said does is insist that people put in that effort, that The Law be followed so to speak. Unfortunately there is no intrinsic punishment for not following the law besides being misunderstood (which isn't really so costly to the speaker, and is hard for them to detect in a blog format). That means they commit a map/territory error analogous to the Rust programmer who insists Rust makes things much harder than C does. There's probably some truth to this, but much of it is just that Rust forces the programmer to write code at the level of difficulty it would have if C didn't let you get away with things being broken.
Before and After
At the start of the decade I was 13, I'm now 23.
Philosophy
Before: I was a recovering conspiracy theorist. I'd figured out on my own that my beliefs should be able to predict the future, and started insisting they do. I wrote down things I expected to happen by a certain time in a giant list, and went back to record the outcome. I wanted to be a video game developer, but didn't know how to start.
A 13 year old boy sits on a swingset in his backyard, listening to Owl City[0] and Lemon Demon[1] as frosty dew melts off green grass in the morning sun. He's daydreaming about the end of the world and his impending death. There is no god and nobody is coming to save him.
After: The oldest copy of Harry Potter and The Methods Of Rationality I can find on my computer is dated January 1st of 2011 at 4:13AM. Now in 2019 I have read many books about phreakers, hackers, makers, computer wizards, rationalists, stats nerds, and the subjects that interest them. My enthusiastic anarchism has given way to a grim realpolitik that still values freedom but understands there are no easy solutions and everything runs on incentives. I call myself an extropian because 'singularitan' sounds too awkward.
A young man is washing the main board of an original Xbox with vinegar. His work bench has an overhead light, it's the brightest thing in the room and everything else looks dim by comparison. The intent of the Xbox was that its data be confined to its aging hardware. He remembers taking Adderall that day, he has perfect focus as he washes away the corrosion left behind by the clock capacitor. During this task he reflects on the decay inherent in all things. The data in his brain is also confined to its aging hardware, and as it ages it corrodes. In his reflections he is no different from this Xbox, peering into his magnifying glass at an eroded trace on the board he sees the infinite void ahead of him. He imagines himself to be washing the body of an embryonic god.
Skills
Before: I was probably most skilled at playing Halo, and only so good at that. I found the idea of writing a 2-3 page essay an imposition. It was around this time that I first installed Linux; I could not program.
After: I am now probably most skilled at writing, but only so good at that. ;) I can write a 12 page lab report in a weekend. I'm skilled enough at programming to write a compiler.
Career & Lifestyle
Career is just starting, though I did make a point of trying to do Real Things during school. Lifestyle is more or less unchanged, a lot of time spent indoors on nerdy things.
Between
Oops and Duh
- The curse of dimensionality makes it easy to get confused about people's ability relative to each other. It is however a map/territory error to believe that your confusion means there is no sense in which some people are massively more competent than others. Duh.
- People are only a little altruistic, and only value 'purity' in products a little for its own sake. Distributed systems will generally lose to centralized systems which are more convenient, because they more or less compete on the same metrics. If you want people to use them, you need to work a lot harder. Oops.
- The reason why you got diagnosed with ADD as a kid isn't because it was a fad, it's because you had every symptom including the emotional regulation issues[2] which are part of the disorder but not in the DSM. Incidentally, you have to fight so hard to do schoolwork because you have untreated ADD. Oops.
- Instead of trying to write your own programs while you learn to program, you'd be better off trying to clone other programs that already exist. This frees you from having to do any of the design while you struggle with programming, gives you an objective measure of progress, ensures you are capable of doing useful work, and has other benefits as well. I wasted lots of time by not knowing this. Duh.
Habits
Probably the biggest habit I broke was playing video games. I rarely play video games these days, and go out of my way to avoid television and fiction stories as well. Life is too short to waste it on transient hallucinations, the real world is much more interesting.
I think the biggest habit I started was talking to people, a lot. With the Internet and smart phones you can basically always be in a conversation if you want to. I started making a point of always talking to people about my ideas, getting feedback, practicing persuasion, etc.
Experiences
I spent 7 of the last 10 years in school, and I hate school. Realistically then if I'm being honest with myself, this was not a fun decade for me. I probably had more bad experiences than good, but the good experiences were good enough to balance it out.
Maybe I'll come back to this section later and edit in more, maybe I won't. :)
Worth Noting
- I'm overall satisfied with this decade. I could have done more if I was playing perfectly, but I feel pretty good about where I am right now.
- My past self should really get his ADD treated before he spends 4 years of high school struggling against it. He should also stop focusing so much on program 'correctness' or whatever, which he's not even qualified to understand, and just focus on replicating the computer programs he interacts with. It's okay to use a web framework. The reason he's not intellectually satisfied with the web is that all the knowledge he wants is on Google Scholar, buried in academic PDFs and print books. I think my past self would probably be pretty skeptical of a lot of this, and then figure out it's true as he fails to make progress fast enough.
- I'll probably remember the 2010's for: Anonymous, Wikileaks, Machine Learning, frivolous smartphone-driven social media apps, memes, the Lain-ification of the Internet with the alt-right & Trump (etc), economic anxiety and rent seeking, and the death of journalism.
[0]: This Is The Future by Owl City
[1]: Sundial by Lemon Demon
[2]: I grew up and no longer have emotional regulation issues.
The CFAR branch of rationality is heavily inspired by General Semantics, with its focus on training your intuitive reactions, evaluation, the ways in which we're biased by language, etc. Eliezer Yudkowsky mentions that he was influenced by The World of Null-A, a science fiction novel about a world where General Semantics has taken over as the dominant philosophy of society.
Question: Considering the similarity of what Alfred Korzybski was trying to do with General Semantics to the workshop and consulting model of CFAR, are you aware of a good analysis of how General Semantics failed? If so, has this informed your strategic approach with CFAR at all?
Does CFAR have a research agenda? If so, is it published anywhere?
By looking in-depth at individual case studies, advances in cogsci research, and the data and insights from our thousand-plus workshop alumni, we’re slowly building a robust set of tools for truth-seeking, introspection, self-improvement, and navigating intellectual disagreement—and we’re turning that toolkit on itself with each iteration, to try to catch our own flawed assumptions and uncover our own blindspots and mistakes.
This is taken from the about page on your website (emphasis mine). I also took a look at this list of resources and notice I'm still curious:
Question: What literature (academic or otherwise) do you draw on the most often for putting together CFAR's curriculum? For example, I remember being told that the concept of TAPs was taken from some psychology literature, but searching Google Scholar didn't yield anything interesting.
https://slatestarcodex.com/2013/06/14/the-virtue-of-silence/
Altruistic silence is probably my default position, but from a strictly rational standpoint, is there some way to get paid for my continued silence (other than with the joy of living in a world ignorant of this idea)?
This betrays a misunderstanding of what 'rational' means. Rational does not mean homo economicus; it means doing what a person would actually want to do on reflection if they had a good understanding of their options.
I doubt your idea is actually that dangerous, so I'm treating this as a hypothetical. But in general, if your idea is dangerous and you want hush money to keep silent about it, then this is really more like a blackmail threat than anything else. I think you should reflect on what life decisions you've made such that posting what amounts to an "it'd be a real shame if..." threat on a public forum seems like a good idea.
And while you're at it, delete this.
I just don't comment in these sorts of threads because I figure the site is a lost cause and the mods will ban all the interesting people regardless of what words I type into the box.
Thirding this.
I'd have to read the LW 2 source to confirm, but from my experience with the API and relevant data models I'd imagine it's just a matter of changing the "post" field on a comment and all its children. Then making that a button which lets you write a new post and append the comment tree to it.
So it's a useful feature, but probably not a particularly difficult one.
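For the curious, here's a minimal sketch of that logic, assuming a Django-style ORM like the one Accordius uses; the model structure and field names are hypothetical, not LW 2's actual schema:

```python
# Hypothetical sketch: re-parent a comment and all its descendants onto a
# new post. Assumes a Django-style ORM where comments have a "post" foreign
# key and a "children" reverse relation; names are illustrative only.
def move_comment_tree(root_comment, new_post):
    queue = [root_comment]
    while queue:
        comment = queue.pop()
        comment.post = new_post
        comment.save()
        # Walk the tree so every descendant points at the new post too.
        queue.extend(comment.children.all())
```

The button would then just be a view that creates the new post and calls something like this on the comment tree.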
Yet, somehow, it is you saying that there were people who left the rationality movement because of the Solstice ritual, which is the kind of hysterical reaction I tried to point at. (I can’t imagine myself leaving a movement just because a few of its members decided to meet and sing a song together.)
I don't think it's really "a few people singing songs together". It's more like...an overall shift in demographics, tone, and norms. If I had to put it succinctly, the old school LessWrong was for serious STEM nerds and hard science fiction dorks. It was super super deep into the whole Shock Level memeplex thing. Over time it's become a much softer sort of fandom geek thing. Rationalist Tumblr and SlateStarCodex aren't marginal colonies, they're the center driving force behind what's left of the original 'LessWrong rationality movement'. Naturally, a lot of those old guard members find this abhorrent and have no plans to ever participate in it.
I don't blame them.
Yes, also see my 2017 post Guided Mental Change Requires High Trust.
I think it's a sort of double entendre? It's also possible the author didn't actually read Zvi's post in the first place. This is implied by the following:
Slack is a nerd culture concept for people who subscribe to a particular attitude about things; it prioritizes clever laziness over straightforward exertion and optionality over firm commitment.
In the broader nerd culture, slack is a thing from the Church of the SubGenius, where it means something more like a kind of adversarial zero-sum fight over who has to do all the work. In that context, the post title makes total sense.
For an example of this, see: https://en.wikipedia.org/wiki/Chez_Geek
I was about to write up some insight porn about it, and then was like “you know, Raemon, you should probably actually think about this for real, since it seems like Pet Psychology Theories are one of the easier ways to get stuck in dumb cognitive traps.”
Thank you. I'm really really sick of seeing this kind of content on LW, and this moment of self reflection on your part is admirable. Have a strong upvote.
Thanks for inspiring GreaterWrong's new ignore feature.
For what it's worth, I don't feel like 'escalation spiral' is particularly optimal. The concept you're going for is hard to compress into a few words because there are so many similar things. It was just the best I could come up with without spending a few hours thinking about it.
"Uphill battle" is a standard English idiom, such idioms are often fairly nonsensical if you think about them hard enough (e.g, "have your cake and eat it too"), but they get a free pass because everyone knows what they mean.
and one feature of the demon thread is ‘everyone is being subtly warped into more aggressive, hostile versions of themselves’
See, that's obvious in your mind, but I don't think it's obvious to others from the phrase 'demon thread'. In fact, hearing it put like that, the name suddenly makes much more sense! However, it would never be apparent to me from hearing the phrase. I would go for something like "Escalation Spiral" or "Reciprocal Misperception" or perhaps "Retaliation Bias".
One thing I like to do before I pick a phrase in this vein, is take the most likely candidates and do a survey with people I know where I ask them, before they know anything else, what they think when they hear the phrase. That's often steered me away from things I thought conveyed the concept well but actually didn't.
That post is a fairly interesting counterargument, thanks for linking it. This passage would be fun to try out:
This prompted me to think that it might be valuable to buy a bunch of toys from a thrift store, and to keep them at hand when hanging out with a particular person or small group. When you have a concept to explore, you’d grab an unused toy that seemed to suit it decently well, and then you’d gesture with it while explaining the concept. Then later you could refer to “the sparkly pink ball thing” or simply “this thing” while gesturing at the ball. Possibly, the other person wouldn’t remember, or not immediately. But if they did, you could be much more confident that you were on the same page. It’s a kind of shared mnemonic handle.
My problem with s1 and s2 is that it's very difficult to remember which is which unless you've had it reinforced a bunch of times. I tend to prefer good descriptive names to nondescriptive ones, but certainly nondescriptive names are better than bad names which cause people to infer meaning that isn't there.
Most people don't learn jargon by reading the original source for a term or phrase, they learn it from other people. Therefore one of the best ways to stop your jargon from being misused is to coin it in such a way that the jargon is a compressed representation of the concept it refers to. Authors in this milieu tend to be really bad at this. You yourself wrote about the concept of a 'demon thread', which I would like to (playfully) nominate for worst jargon ever coined on LessWrong. Its communicated meaning without the original thread boils down to 'bad thread' or 'unholy thread', which means that preserving the meaning you wanted it to have is a multi-front uphill battle in snow.
Another awful example from the CFAR handbook is the concept of 'turbocharging', which is a very specific thing but the concept handle just means 'fast electricity' or 'speedy movement'. Were it not for the context, I wouldn't know it was about learning at all. Even when I do have that context, it isn't clear what makes it 'turbo'. If it were more commonly used it would be almost instantly diluted without constant reference back to the original source.
For a non-LessWrong example, consider the academic social justice concept of 'privilege', which has (or had) a particular meaning that was useful to have a word for. However mainstream political commentary has diluted this phrase almost to the point of uselessness, making it a synonym for 'inequality'.
It'd be interesting to do a study of, say, 20-50 jargon terms and see how the level of dilution corresponds to degree of self-containment. In any case I suspect that trying to make jargon more self-contained in its meaning would reduce misuse. "Costly Signaling" is harder to misinterpret than "Signaling", for example.
I like the spirit of this post, but think I object to considering this 'too smart for your own good'. That framing feels more like an identity-protecting maneuver than trying to get at reality. The reality is that you think you're smarter than you are, and it causes you to trip over your untied shoelaces. You acknowledge this of course, but describing it accurately seems beyond your comfort zone. The closest you get is when you put 'smart' in scare quotes near the end of the essay.
Just be honest with yourself, it hurts at first but the improvement in perspective is massive.
You have the year wrong in the title.
It's been a classic guideline of the site for a long time, that you should avoid the word 'rational' or 'rationalist' in titles as an adjective to describe stuff. In the interest of avoiding a repeat of the LW 1 apocalypse, I (and probably others) would really appreciate if you changed it.
Suggested feature: adding a “link option” to answers. I’m not sure what this is actually called, but it’s a feature that comments have. For example, here is a link to this comment.
This is generally called a permalink.
I think my broader response to that is "Well, if I could change one thing about LW 2 it would be the moderation policy."
That seems strictly off topic though, so I'll let it be what it is.
My Complaint: High Variance
Well, to put it delicately, the questions have seemed high variance when it comes to quality. That is, the questions posed have been either quite good or stunningly mediocre, with little in between.
2 examples of good questions
https://www.greaterwrong.com/posts/8EqTiMPbadFRqYHqp/how-old-is-smallpox
https://www.lesswrong.com/posts/Xt22Pqut4c6SAdWo2/what-self-help-has-helped-you
3 examples of not as good questions
I'd prefer to be gentle when listing examples of not-so-good questions, but a few I think are unambiguously in this category are:
https://www.lesswrong.com/posts/D62GoptY4uX9e2iwM/what-does-it-mean-to-believe-a-thing-to-be-true
(No clarification given in post, whole premise is kind of odd)
https://www.lesswrong.com/posts/TKHvBXHpMakRDqqvT/in-what-ways-are-holidays-good
(Bizarre, alien perspective. If I were a visitor and I saw this post I would assume the forum is an offshoot of Wrong Planet)
https://www.lesswrong.com/posts/AAamNiev4YsC4jK2n/sunscreen-when-why-why-not
(I don't quite understand what the warrant is for discussing this on LW. Yes it's a decision, which involves risk, but lots of things in our lives are decisions involving risk. If those are the only criteria for discussion I don't really see any reason why we should be discussing rationality-per-se as opposed to the thousands of little things like this we face throughout our life.)
What I Would Like To See
Personally I think that it would help if you clarified the purpose and scope of the questions feature: what sort of questions people should be asking, what features make a good question, some examples of well-posed questions, etc. Don't skimp on this or chicken out. Good principles should exclude things; they should even exclude some things which would be net positive to discuss! Otherwise net-negative gray areas will come to dominate in the name of preserving positive edge cases.
That is to say, I want some concrete guidelines I can point to and say "Sorry but this question doesn't seem appropriate for the site." or "Right now this question isn't the best it could be, some ways you could improve it to be more in line with our community policy is..."
The official LessWrong 2 server is pretty heavy, so running it locally might be a problem for some people.
Whistling Lobsters 2.0 uses a clone of the LW 2 API called Accordius as its backend. Accordius is, with some minor differences, nearly an exact copy of the pre-October LW 2 API. It was developed with the goal that you could put the GreaterWrong software in front of it and it would function without changes. Unfortunately due to some implementation disagreements between Graphene and the reference GraphQL library in JavaScript, it's only about 95% compatible at the time of cloning.
Still, this thing will run on a potato (or more specifically, my years-old Intel Atom-based netbook) with GreaterWrong running on the same box as the front end. That makes it a pretty good option for anyone who's looking to understand GraphQL and the LW 2 API. This implementation does not take into account the changes made in the big API update in October. As a consequence, it may be more useful at this point for learning GraphQL than the LW 2 API specifically.
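For anyone poking at it, here's a minimal sketch of querying such a GraphQL endpoint from Python, assuming a local Accordius instance on port 8000; the query shape and field names are assumptions for illustration, so check the actual schema:

```python
# Hypothetical sketch: hit a local GraphQL endpoint with a bare HTTP POST.
# The endpoint URL, query name, and fields are illustrative assumptions.
import json
import urllib.request

query = "{ PostsList(terms: {limit: 5}) { title slug } }"
request = urllib.request.Request(
    "http://localhost:8000/graphql",
    data=json.dumps({"query": query}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read()))
```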
(Note to future readers: The GraphQL API is considered legacy for Accordius in the long term, so if you're reading this many months or even years from now, you may have to go back to the first alpha releases to get the functionality described here. Pre 1.0 perhaps.)
A great deal of my affection for hackers comes from the unique way they bridge the world of seeking secrets about people and secrets about the natural world. This might seem strange, since the stereotype is that hackers are lonely people that are alienated from others, but this is only a half-truth. In both the open source MIT tradition and the computer intrusion phone phreaking tradition, the search for secrets and excellence are paramount, but fellow travelers are absolutely welcome on the journey. Further, much of even the ‘benign’ hacking tradition relies on the manipulation of social reality, the invisible relationships between people and symbols and things that are obvious to us but might confuse a visitor from Mars. For example, this story from the Jargon File about sneaking a computer into a hospital exemplifies the nature of social reality well. In Sister Y’s essay she hypothesizes that nerds are people with a natural ability to see the underlying mechanisms of social reality that are invisible to most people, mostly through their natural inability to intuitively absorb it in one way or another. Things that normal people take for granted confuse nerds, which provides the impetus for making discoveries about social reality itself.
A dictionary definition might be something like:
The map of the world which is drawn by our social-cultural universe, and its relationship to the standard protocols of societal interaction & cooperation. Implicit beliefs found in our norms & behavior toward others, as expressed through: coercive norms, rituals, rank, class, social status, authority, law, and other human coordination constructs.
One aspect of social reality is the offsets between our shared map and the territory. In many old African regional faiths, it was thought necessary for commoners to be kept away from upper-class shamans and wizards, since otherwise the commoners' influence might damage their powers, or cause them to lose emotional control and damage the community. The idea that these people have magic powers and must be protected, along with the social norms and practices that arise from that, is an example of social reality. It has very little to do with any real magic powers, but clearly there was some in-territory sequence of events that got everyone to decide to interpret the world this way.
This foreign, ancient example is useful because you have no emotional attachment to it, so you're in a position to evaluate it objectively. Ask yourself how people might react to a lower class person that insisted on touching the magic king. What about someone who refused to recant their belief that the magic king had no influence on the weather? As you imagine the reactions, consider what things in your own social sphere or society would be met with similar feelings from others. Then ask yourself if they're a human universal, or something that could theoretically be different if people felt differently. Once you've identified a handful of these you're on your way to examining social reality as a phenomenon. I suggest you keep most of these thoughts to yourself, for your own protection.
Another aspect is the invisible models and expectations of others. In the Jargon File example above, the guard has been told that his role is to prevent unauthorized items from entering the building. This role is very much real, and its "procedures" are as rote and trickable as any computer program. As Morpheus tells us:
This is a sparring program, similar to the programmed reality of The Matrix. It has the same basic rules, rules like gravity. What you must learn is that these rules are no different than the rules of a computer system. Some of them can be bent. Others, can be broken.
A great deal of the phone phreaking tradition is about running a wedge into the places where social reality and the territory don't meet, and performing wild stunts based on them. For example, did you know that one of the most common attacks against locks is to just order a second lock because they're keyed-alike?
The big difference of course is that when you trick a computer program, it doesn't notice. Humans are very likely to notice you tricking them if you violate their expectations. So the art of social engineering is a very different realm in that respect, the technical complexity is lower but the solution space is narrowed by what people won't perceive as too strange. It engages your EQ, at least as much as it engages your IQ.
----
Some book recommendations for a better sense:
Ghost In The Wires by Kevin Mitnick
The Challenger Launch Decision by Diane Vaughan
The Righteous Mind by Jonathan Haidt
I think users that are used to Markdown will often use single bold words as heading, and I feel hesitant to deviate too much from the standard Markdown conventions of how you should parse Markdown into HTML.
Don't know where you got this notion from, but absolutely not. Markdown has dedicated syntax for headings (`#` prefixes, or `===`/`---` underlines), and I've never used bolded text as a replacement for a proper heading.
(As a wider point, Said Achmiz is as usual correct in his approach and it would be much appreciated if you didn't inflict any more appalling HTML practices on API consumers)
I don't use them.
(My guess is you wanted to write “Can’t I post any Open Questions I have right now...“, so I will respond to that, but let me know in case I misunderstood)
Nope. My question was literally just whether I can post some open questions I have right now to LessWrong. This sounds like an excellent direction for the website to take.
We’re interested in people’s thoughts on the idea so far. Any questions about Open Questions?
Can I post any Open Questions I have right now with a title like:
"[Open Question] Bla bla bla bla?"
Will second not enjoying Neuromancer very much.
I missed that line and I apologize. A strong upvote for your troubles.
I have not invented a "new style," composite, modified or otherwise that is set within distinct form as apart from "this" method or "that" method. On the contrary, I hope to free my followers from clinging to styles, patterns, or molds. Remember that Jeet Kune Do is merely a name used, a mirror in which to see "ourselves". . . Jeet Kune Do is not an organized institution that one can be a member of. Either you understand or you don't, and that is that. There is no mystery about my style. My movements are simple, direct and non-classical. The extraordinary part of it lies in its simplicity. Every movement in Jeet Kune Do is being so of itself. There is nothing artificial about it. I always believe that the easy way is the right way. Jeet Kune Do is simply the direct expression of one's feelings with the minimum of movements and energy. The closer to the true way of Kung Fu, the less wastage of expression there is. Finally, a Jeet Kune Do man who says Jeet Kune Do is exclusively Jeet Kune Do is simply not with it. He is still hung up on his self-closing resistance, in this case anchored down to reactionary pattern, and naturally is still bound by another modified pattern and can move within its limits. He has not digested the simple fact that truth exists outside all molds; pattern and awareness is never exclusive. Again let me remind you Jeet Kune Do is just a name used, a boat to get one across, and once across it is to be discarded and not to be carried on one's back.
While writing the about page for the upcoming Whistling Lobsters 2.0 forum, I took a shot at giving a brief history of and definition of rationality. The following is the section providing a definition. I think I did an okay job:
The Rationalist Perspective
Rationality is related to but distinct from economics. While they share many ideas and goals, rationality is its own discipline with a different emphasis. It has two major components: instrumental and epistemic rationality. Instrumental means "in the service of"; it's about greater insight in the service of other goals. Epistemic means "related to knowledge", and focuses on knowing the truth for its own sake. Instrumental rationality might be best described as "regret minimization". Certainly this phrase captures the key points of the rationalist perspective:
- Rationality cares about opportunity cost, which is the biggest shared trait with economics. Rationality is not skepticism; skeptics only care about not-losing. Rationalists care about winning, which means that the failure to realize full benefits is incorporated into the profit/loss evaluation. (See the toy sketch after this list.)
- A rationalist should never envy someone else just for their choices. Consider Spock, the 'rational' first officer of the USS Enterprise in Star Trek. Often Spock will insist against helpful action because it would "be illogical". The natural question is "Illogical to whom?". No points are awarded for fetishism. If there are real outcomes to consider, perhaps you hold yourself back for some social benefit, that is all well and good. But there is nothing noble in doing things that make you or the world worse off because you've internalized fake rules.
- Long term thinking. Regret is generally something you start doing after you've had a bit of experience; it's something you need to think about early to avoid. You don't regret wasting your 20's until you're in your 30's. Regret is about your life path, which is utility vs. time. Most economics focuses on one-shot utility maximization scenarios, or iterated games. But the real world has just about every kind of game imaginable, and your 'score' is how you perform on all of them.
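To make the opportunity cost point concrete, here's a toy sketch with made-up numbers; the "utility units" are purely illustrative:

```python
# Toy illustration: regret is the gap between the best option in hindsight
# and the option actually taken. All numbers are made up.
def regret(chosen_utility, all_option_utilities):
    return max(all_option_utilities) - chosen_utility

# A skeptic only checks that the chosen option didn't lose (60 > 0).
# Counting regret also charges you the 30 units of benefit left unrealized.
print(regret(60, [40, 60, 90]))  # -> 30
```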
Yup. Empirically, people who lose lots of weight and keep it off have a CONSTANT VIGILANCE mindset going.
This isn't to say that OP's post is untrue, but rather that they're underestimating just how badly the odds are stacked against those who are obese.
HBO's The Weight Of The Nation documentary goes into the National Weight Control Registry study on long term weight loss, and the common factors between people who manage to keep it off:
https://www.youtube.com/watch?v=hLv0Vsegmoo&t=1h1m28s
Even if we take that interpretation, I think 3 and 4 are useful operational expansions of 1 and 2. They're concrete things you can do to implement them.
"How hard it is to obtain the truth is a key factor to consider when thinking about secrets. Easy truths are simply accepted conventions. Pretty much everybody knows them. On the other side of the spectrum are things that are impossible to figure out. These are mysteries, not secrets. Take superstring theory in physics, for instance. You can’t really design experiments to test it. The big criticism is that no one could ever actually figure it out. But is it just really hard? Or is it a fool’s errand? This distinction is important. Intermediate, difficult things are at least possible. Impossible things are not. Knowing the difference is the difference between pursuing lucrative ventures and guaranteed failure."
- Peter Thiel’s CS183: Startup - Class 11 Notes Essay - Secrets
One of the reasons why academia has all those strict norms around plagiarism and citing sources is that it makes the "conceptual family tree" legible. Otherwise it just kind of becomes soupy and difficult to discern.
So how many "confirmed kills" of ideas found in the sequences actually are there? I know the priming studies got eviscerated, but the last time I looked into this I couldn't exactly find an easy list of "famous psychology studies that didn't replicate" to compare against.
To be really frank, and really succinct:
Abuse of the word 'rational' was one of the original social stressors that killed LessWrong.
It is not more fitting, and you should actually go back and edit your post to change it.
The most common pattern I run into, where I’m not sure what to do, is patterns of comments from a given user that are either just barely over the line, or where each given comment is under the line, but so close to a line that repetition of it adds up to serious damage – making LW either not fun, or not safe feeling.
What I used to do on the #lesswrong IRC was put every comment like this I saw into a journal, and then once I found myself really annoyed with someone, open the journal to help establish the pattern. I'd also look at people's individual chat history to see if there's a consistent pattern of them doing the thing routinely, or if it's a thing they just sometimes happen to do.
I definitely agree this is one of the hardest challenges of moderation, and I pretty much always see folks fail it. IMO, it's actually more important than dealing with the egregious violations, since those are usually fairly legible and just require having a spine.
My most important advice would be don't ignore it. Do not just shrug it off and say "well nothing I can do, it's not like I can tell someone off for being annoying". You most certainly can and should for many kinds of 'annoying'. The alternative is that the vigor of a space slowly gets sucked out by not-quite-bad-actors.
On the one hand, I too resent that LW is basically an insight porn factory near completely devoid of scholarship.
On the other hand, this is not a useful comment. I can think of at least two things you could have done to make this a useful comment:
- Specified even a general direction of where you feel the body of economic literature could have been engaged. I know you might resent doing someone else's research for them if you're not already familiar with said body, but frankly the norm right now is to post webs spun from the fibrous extrusions of people's musing thoughts. The system equilibrium isn't going to change unless some effort is invested into moving it. Notice you could write your comment on most posts while only changing a few words.
- Provided advice on how one might go about engaging with 'the body of economic literature'. Many people are intelligent and reasonably well informed, but not academics. Taking this as an excuse to mark them swamp creatures beyond assistance is both lazy and makes the world worse. You could even link to reasonably well written guides from someone else if you don't want to invest the effort (entirely understandable).
For anyone else reading, Harvard has a nice page up on how to do a strong literature review: https://guides.library.harvard.edu/c.php?g=310271&p=2071512
Excellent question. The short answer is when I'm not swamped and running on razor-thin margins of slack, hopefully soon.
This is actually a fairly powerful intuition that I hadn't considered before. In case it might help others:
Keep in mind that a Dunbar-sized tribe of 300 people or so is going to have more than 1 'leader' (and 300 is the upper limit on tribe size). Generally you're looking at a small suite of leaders. Let's say there are a dozen of them. In that case we should naively expect the level of personal fitness required to 'lead a tribe' to be somewhere in the 1-in-25 to 1-in-30 range (300 / 12 = 25): you meet people who would have been leaders in the ancestral environment quite literally every day, multiple times a day even.
Reconcile this with what you actually observe in your life.