Posts

The "Think It Faster" Exercise 2024-12-11T19:14:10.427Z
Subskills of "Listening to Wisdom" 2024-12-09T03:01:18.706Z
The 2023 LessWrong Review: The Basic Ask 2024-12-04T19:52:40.435Z
JargonBot Beta Test 2024-11-01T01:05:26.552Z
The Cognitive Bootcamp Agreement 2024-10-16T23:24:05.509Z
OODA your OODA Loop 2024-10-11T00:50:48.119Z
Scaffolding for "Noticing Metacognition" 2024-10-09T17:54:13.657Z
"Slow" takeoff is a terrible term for "maybe even faster takeoff, actually" 2024-09-28T23:38:25.512Z
2024 Petrov Day Retrospective 2024-09-28T21:30:14.952Z
[Completed] The 2024 Petrov Day Scenario 2024-09-26T08:08:32.495Z
What are the best arguments for/against AIs being "slightly 'nice'"? 2024-09-24T02:00:19.605Z
Struggling like a Shadowmoth 2024-09-24T00:47:05.030Z
Interested in Cognitive Bootcamp? 2024-09-19T22:12:13.348Z
Skills from a year of Purposeful Rationality Practice 2024-09-18T02:05:58.726Z
What is SB 1047 *for*? 2024-09-05T17:39:39.871Z
Forecasting One-Shot Games 2024-08-31T23:10:05.475Z
LessWrong email subscriptions? 2024-08-27T21:59:56.855Z
Please stop using mediocre AI art in your posts 2024-08-25T00:13:52.890Z
Would you benefit from, or object to, a page with LW users' reacts? 2024-08-20T16:35:47.568Z
Optimistic Assumptions, Longterm Planning, and "Cope" 2024-07-17T22:14:24.090Z
Fluent, Cruxy Predictions 2024-07-10T18:00:06.424Z
80,000 hours should remove OpenAI from the Job Board (and similar EA orgs should do similarly) 2024-07-03T20:34:50.741Z
What percent of the sun would a Dyson Sphere cover? 2024-07-03T17:27:50.826Z
What distinguishes "early", "mid" and "end" games? 2024-06-21T17:41:30.816Z
"Metastrategic Brainstorming", a core building-block skill 2024-06-11T04:27:52.488Z
Can we build a better Public Doublecrux? 2024-05-11T19:21:53.326Z
some thoughts on LessOnline 2024-05-08T23:17:41.372Z
Prompts for Big-Picture Planning 2024-04-13T03:04:24.523Z
"Fractal Strategy" workshop report 2024-04-06T21:26:53.263Z
One-shot strategy games? 2024-03-11T00:19:20.480Z
Rationality Research Report: Towards 10x OODA Looping? 2024-02-24T21:06:38.703Z
Exercise: Planmaking, Surprise Anticipation, and "Baba is You" 2024-02-24T20:33:49.574Z
Things I've Grieved 2024-02-18T19:32:47.169Z
CFAR Takeaways: Andrew Critch 2024-02-14T01:37:03.931Z
Skills I'd like my collaborators to have 2024-02-09T08:20:37.686Z
"Does your paradigm beget new, good, paradigms?" 2024-01-25T18:23:15.497Z
Universal Love Integration Test: Hitler 2024-01-10T23:55:35.526Z
2022 (and All Time) Posts by Pingback Count 2023-12-16T21:17:00.572Z
Raemon's Deliberate (“Purposeful?”) Practice Club 2023-11-14T18:24:19.335Z
Hiring: Lighthaven Events & Venue Lead 2023-10-13T21:02:33.212Z
"The Heart of Gaming is the Power Fantasy", and Cohabitive Games 2023-10-08T21:02:33.526Z
Related Discussion from Thomas Kwa's MIRI Research Experience 2023-10-07T06:25:00.994Z
Thomas Kwa's MIRI research experience 2023-10-02T16:42:37.886Z
Feedback-loops, Deliberate Practice, and Transfer Learning 2023-09-07T01:57:33.066Z
Open Thread – Autumn 2023 2023-09-03T22:54:42.259Z
The God of Humanity, and the God of the Robot Utilitarians 2023-08-24T08:27:57.396Z
Book Launch: "The Carving of Reality," Best of LessWrong vol. III 2023-08-16T23:52:12.518Z
Feedbackloop-first Rationality 2023-08-07T17:58:56.349Z
Private notes on LW? 2023-08-04T17:35:37.917Z
Exercise: Solve "Thinking Physics" 2023-08-01T00:44:48.975Z

Comments

Comment by Raemon on Basics of Rationalist Discourse · 2024-12-19T22:02:44.271Z · LW · GW

The complaints I remember about this post seem mostly to be objecting to how some phrases were distilled into the opening short "guideline" section. When I go reread the details it mostly seems fine. I have suggestions on how to tweak it.

(I vaguely expect this post to get downvotes that are some kind of proxy for vague social conflict with Duncan, and I hope people will actually read what's written here and vote on the object level. I also encourage more people to write up their own versions of The Basics of Rationalist Discourse as they see them.)

The things I'd want to change are:

1. Make some minor adjustments to the "Hold yourself to the absolute highest standard when directly modeling or assessing others' internal states, values, and thought processes." (Mostly, I think the word "absolute" is just overstating it. "Hold yourself to a higher standard" seems fine to me. How much higher-a-standard depends on context)

2. Somehow resolve an actual confusion I have with the "...and behave as if your interlocutors are also aiming for convergence on truth" clause. I think this is doing important, useful work, but a) it depends on the situation, b) it feels like it's not quite stating the right thing.

Digging into #2...

Okay, so when I reread the detailed section, I think I basically don't object to anything. I think the distillation sentence in the opening paragraphs conveys a thing that a) oversimplifies, and b) some people have a particularly triggered reaction to.

The good things this is aiming for that I'm tracking:

  • Conversations where everyone trusts that each other are converging on truth are way less frictiony than ones where everyone is mistrustful and on edge about it.
  • Often, even when the folk you're talking to aren't aiming for convergence on truth, proactively acting as if they are helps make it more true. Conversational vibes are contagious.
  • People are prone to see others' mistakes as more intense than their own mistakes, and if most humans aren't specifically trying to compensate for this bias, there's a tendency to spiral into a low-trust conversation unnecessarily (and then have the wasted motion/aggression of a low-trust conversation instead of a medium-or-high one). 

I think maybe the thing I want to replace this with is more like "aim for about 1-2 levels more trusting-that-everyone-is-aiming-for-truth than currently feel warranted, to account for your own biases, and to lead by example in having the conversation focus on truth." But I'm not sure if this is quite right either.

...

This post came a few months before we created our New User Reject Template system. It should have at least occurred to me to use some of the items here as advice we have easily on hand to give to new users (either as part of a rejection notice, or just "hey, welcome to LW, but it seems like you're missing some of the culture here").

If this post were voted into the Top 50, and a couple of points were resolved, I'd feel good making a fork with minor context-setting adjustments and then linking to it as a moderation resource, since I'd feel like The People had a chance to weigh in on it.

The context-setting I'm imagining is not "these are the official norms of LessWrong", but, if I think a user is making a conversation worse for reasons covered in this post, be more ready to link to this post. Since this post came out, we've developed better Moderator UI for sending users comments on their comments, and it hadn't occurred to me until now to use this post as reference for some of our Stock Replies.

(Note: I currently plan to make it so that, during the Review, anyone can write Reviews on a post even if they're normally blocked from commenting. Ideally I'd make it so they can also comment on Review comments. I haven't shipped this feature yet but hopefully will soon.)

Comment by Raemon on Dear Self; we need to talk about ambition · 2024-12-19T21:52:43.370Z · LW · GW

Previously, I think I had mostly read this through the lens of "what worked for Elizabeth?" rather than actually focusing on which parts might be useful to me. I think that's a tradeoff on the "write to your past self" vs "attempt to generalize" spectrum – generalizing in a useful way is more work.

When I reread it just now, I found the "Ways to Identify Fake Ambition" the most useful section (both for the specific advice of "these emotional reactions might correspond to those motivations", and the meta-level advice of "check for your emotional reactions and see what they seem to be telling you.")

I'd kinda like to see a post that is just that section, with a bit of fleshing out to help people figure out when/why they should check for fake ambition (and how to relate to it). I think literally a copy-paste version would be pretty good, and I think there's a more (well, um) ambitious version that does more interviewing with various people and seeing how the advice lands for them.

I might incorporate this section more directly into my metastrategy workshops.

Comment by Raemon on Subskills of "Listening to Wisdom" · 2024-12-18T18:38:26.801Z · LW · GW

Well to be honest in the future there is probably mostly an AI tool that just beams wisdom directly into your brain or something.

Comment by Raemon on Everything you care about is in the map · 2024-12-18T18:35:59.261Z · LW · GW

I wrote about 1/3 of this myself fyi. (It was important to me to get it to a point where it was not just a weaksauce version of itself but where I felt like I at least might basically endorse it and find it poignant as a way of looking at things)

Comment by Raemon on Being Present is Not a Skill · 2024-12-18T01:47:22.786Z · LW · GW

One way I parse this is "the skill of being present (may be) about untangling emotional blocks that prevent you from being present, more than some active action you take."

It's not like untangling emotional blocks isn't tricky! 

Comment by Raemon on Being Present is Not a Skill · 2024-12-18T01:39:17.444Z · LW · GW

I don't have a strong belief that this experience won't generalize, but I want to flag the jump between "this worked for me" and an implied "this'll work for everyone/most-people." (I expect most people would benefit from hearing this suggestion; I just generally have a yellow flag about some of the phrasings here.)

Comment by Raemon on Everything you care about is in the map · 2024-12-18T00:19:07.657Z · LW · GW

Nod. 

Fwiw I mostly just thought it was funny in a way that was sort of neutral on "is this a reasonable frame or not?". It was the first thing I thought of as soon as I read your post title.

(I think it's both true that in an important sense everything we care about is in the Map, and also true in an important sense that it's not, and in the ways it was true it felt like a kind of legitimately poignant rewrite that felt like it helped me appreciate your post, and insofar as it was false it seemed hilarious (non-meanspiritedly, just in a "it's funny that so many lines from the original remain reasonable sentences when you reframe it as about epistemology"))

Comment by Raemon on Everything you care about is in the map · 2024-12-17T21:24:00.250Z · LW · GW

lol at the strong downvote and wondering if it is more objecting to the idea itself or more because Claude co-wrote it?

Comment by Raemon on Everything you care about is in the map · 2024-12-17T20:15:25.533Z · LW · GW

Look again at that map. 

That's here. That's all we know. That's us. 

On that map lies everything you love, everyone you know, everything you've ever heard of. The aggregate of our joy and suffering, thousands of confident religions, ideologies, and economic doctrines, every thought and feeling, every hero and villain, every creator and destroyer of ideas, every paradigm and perspective, every romantic notion, every parent's love, every child's wonder, every flash of insight and exploration, every moral framework, every friendship, every "universal truth", every "fundamental principle" - all of these lived there, in a mere approximation suspended in consciousness.

Our mind is a very small theater in the vast unknown of reality. Think of the endless conflicts between holders of one corner of this mental map and the barely distinguishable beliefs of another corner, how frequent their misunderstandings, how eager they are to impose their models on one another, how fervent their certainties. Think of the rivers of ink spilled by all those philosophers and ideologues so that, in glory and triumph, they could become the momentary arbiters of a fraction of a map.

It has been said that epistemology is a humbling and character-building pursuit. There is perhaps no better demonstration of the folly of human certainty than this recognition of our lenses' limits. To me, it underscores our responsibility to hold our maps more lightly, to deal more kindly with those whose maps differ, and to preserve and cherish this precious capacity for understanding, the only world we've ever known.

(partially written by Claude because I was too lazy busy to write the whole thing by hand)

Comment by Raemon on The 2023 LessWrong Review: The Basic Ask · 2024-12-17T17:18:43.323Z · LW · GW

Can you post a screenshot?

One confounder: by default it's filtering to posts you've read. Toggle off the read filter to see the full count.

Comment by Raemon on The 2023 LessWrong Review: The Basic Ask · 2024-12-16T18:43:18.458Z · LW · GW

(I've appreciated your reviews that went and took this to heart, thanks!)

Comment by Raemon on A Way To Be Okay · 2024-12-16T07:23:09.675Z · LW · GW

Another piece of the "how to be okay in the face of possible existential loss" puzzle. I particularly liked the "don't locate your victory conditions inside people/things you can't control" frame. (I'd heard that elsewhere I think but it felt well articulated here)

Comment by Raemon on Competitive, Cooperative, and Cohabitive · 2024-12-16T07:18:58.622Z · LW · GW

I appreciated both this and Mako Yass' Cohabitive Games so Far (I believe Screwtape's post actually introduced the term "cohabitive", which Mako adopted). I think both posts are worth reading together.

I have an inkling that cohabitive games may turn out to be important for certain kinds of AI testing and evaluation – can an AI not only win games with ruthless optimization, but also be a semi-collaborative player in an open-ended context? (This idea is shaped in part by some ideas I got reading about Encultured.)

Comment by Raemon on Fighting without hope · 2024-12-16T07:10:39.173Z · LW · GW

A simple but important point, that has shaped my frame for how to be an emotionally healthy and productive person, even if the odds seem long.

Comment by Raemon on Biological risk from the mirror world · 2024-12-14T19:57:08.166Z · LW · GW

Curated. I'd previously heard vague things about Mirror Life but didn't understand why it would be threatening. This post laid out the case much more clearly than I'd previously heard.

Comment by Raemon on Cohabitive Games so Far · 2024-12-14T19:30:50.118Z · LW · GW

I think it's fine to edit in "here's a link to the thing I shipped later" at the top and/or bottom and/or middle of the post.

Comment by Raemon on Communications in Hard Mode (My new job at MIRI) · 2024-12-14T03:15:54.630Z · LW · GW

Mod note: I frontpaged this. It was a bit of an edge case because we normally don't frontpage "organizational announcements", but I felt like this one had enough implicit models that I'd feel good reading it in a couple years, even if MIRI is no longer pursuing this particular strategy.

Comment by Raemon on Cohabitive Games so Far · 2024-12-14T00:16:42.755Z · LW · GW

Now they aren't :) This is a case where I think the review's sort of caught the development process in amber.

I'm not sure I understand what the topic is, but, flagging that you are encouraged to edit posts during the Review to make them the better, more timeless versions of themselves.

Comment by Raemon on The "Think It Faster" Exercise · 2024-12-13T20:28:39.484Z · LW · GW

I dunno, @sarahconstantin do you remember?

(I'm also curious what @Eliezer Yudkowsky thinks of this post, for that matter, if he's up for it)

Comment by Raemon on The "Think It Faster" Exercise · 2024-12-13T20:15:26.840Z · LW · GW

I don't actually know for sure. 

The thing I think he meant was "he trains (over a longish period of time, or at least more than 30 seconds) to perform only the essential steps (in 30 seconds)". That's at least what I'm aiming at. (I'm setting the less ambitious initial goal of "~15 minutes.")

This essay doesn't actually focus much on the followup "drill yourself until you can actually do the steps in [30 seconds / 15 minutes]" because it feels early stage enough that I'm not quite sure which things make most sense to drill. 

Although now that I draw my attention to that I think I should maybe be prioritizing the followup drilling harder. I'm trying to have more Purposeful Practice be part of my life but it's labor-intensive.

Comment by Raemon on [deleted post] 2024-12-13T20:01:30.621Z

I think this is still sort of the wrong frame. I also plan to explain AI risk through various social media. I will use different phrasings when talking to different target audiences that I expect to have different cruxes. I think what you're calling "explaining rationally", I'd describe as "being insufficiently skilled at explaining things." (To be fair, it's often quite hard to explain things! But, that's because it's hard, not because people are irrational or rational explaining is impossible)

Comment by Raemon on [deleted post] 2024-12-13T19:35:19.969Z

I think you're conflating some things. I recommend reading Bucket Errors, and rereading Rationalist Taboo and breaking down more clearly what you mean by "rational." (It's very underspecified what "be rational with the public" means. It could mean a lot of different things)

Comment by Raemon on First Thoughts on Detachmentism · 2024-12-13T19:27:52.867Z · LW · GW

Mod note: This was a post I was on the fence about approving as a first post. I think in some sense the post is "fine", but I didn't expect it to fare well on LessWrong (mostly for not asking or answering questions that the LessWrong community finds particularly interesting, in ways it's likely to find helpful). 

I don't currently have a good cached idea of how to handle posts in that reference class. (Especially given that I didn't really read the post in much detail.)

Interested in takes from LW users (and from the author, Jacob Peterson, on what they think they would have preferred).

Apologies to Jacob Peterson for the awkwardness of using your post as an example.

Comment by Raemon on The "Think It Faster" Exercise · 2024-12-13T19:19:23.290Z · LW · GW

Someone pointed out that I didn't really explain how "identify all the constraints" fit into "think it faster". I added another piece to the Example section to tie it back together more.

Comment by Raemon on [deleted post] 2024-12-13T19:13:31.425Z

I think you are overgeneralizing with the "all." I (and most people I know) might be mistaken about how to interact with the public, but I don't think we're making the particular mistake you're worried about. (There might be some people who are, whom maybe you correctly identified, but you should be pretty careful about making sweeping statements about "all" people in a group.)

Comment by Raemon on [deleted post] 2024-12-13T18:14:25.929Z

fwiw I think this is missing the point about what Habryka is frustrated about. 

Comment by Raemon on The 2023 LessWrong Review: The Basic Ask · 2024-12-13T08:53:27.988Z · LW · GW

Note: I plan to extend the Nomination phase through ~Monday, I didn't mean for it to end partway through the weekend.

Comment by Raemon on Basics of Rationalist Discourse · 2024-12-13T08:19:41.539Z · LW · GW

Yeah, I'm not making any object level claims about this post one way or another, just thinking about the general principles.

Thinking a bit more:

I think the Review does fairly naturally make a Schelling time for people to write up more top-level responses to things they disagree with. I think it's probably important for it to be possible to write reviews on posts during the Review unless the author specifically removes the post from consideration (which maybe should be special-cased, and doesn't mean people can write more back-and-forth comments, just a top-level review). 

(that's still me musing-out-loud, not like making a final decision, but I will think about it more)

Comment by Raemon on Basics of Rationalist Discourse · 2024-12-13T04:01:20.246Z · LW · GW

I think the point is, anything aspiring to that needs to not have people blocked.

Comment by Raemon on The "Think It Faster" Exercise · 2024-12-12T17:37:36.621Z · LW · GW

Ah gotcha. Yeah, this is why Deliberate Grieving is a core rationalist skill.

Comment by Raemon on The "Think It Faster" Exercise · 2024-12-12T17:37:05.434Z · LW · GW

The idea of 'thinking it faster' is provocative, because it seems to be over-optimising for speed rather than other values, whereas the way you're implementing it is by generating more meaningful or efficient decisions which are underpinned by a meta-analysis of your process—which is actually about increasing the quality of your decision-making.

I considered changing it to "Think it Sooner", which nudges you a bit away from "try to think frenetically fast" and towards "just learn to steer towards the most efficient parts of your thought process, avoid wasted motion, and use more effective metastrategies." "Think It Sooner" feels noticeably harder to say so I decided to stick with the original (although I streamlined the phrasing from "Think That Thought faster" a bit so it rolled off the tongue)

Comment by Raemon on The "Think It Faster" Exercise · 2024-12-12T17:30:54.363Z · LW · GW

I actually think this third thing is likely to be a key lesson learned from meta-analysis, to not be stubborn and to pivot to the better solution more freely, what I call "back it up and break it".

I'm not sure I understood this point, could you say more?

Comment by Raemon on Here's Why I'm Hesitant To Respond In More Depth · 2024-12-12T17:23:04.299Z · LW · GW

Is this a thinly veiled attempt to get Elephant Seal 3 into the Review? :P

Comment by Raemon on Modal Fixpoint Cooperation without Löb's Theorem · 2024-12-12T17:18:33.822Z · LW · GW

I haven't had much success articulating why.

I'd be interested in a more in-depth review where you take another pass at this.

Comment by Raemon on Understanding Shapley Values with Venn Diagrams · 2024-12-11T20:50:13.178Z · LW · GW

Curated. This was a quite nice introduction. I normally see Shapley values brought up in a context that's already moderately complicated, and having a nice simple explainer is helpful!

I'd like it if the post went into a bit more detail about when/how Shapley values tend to get used in real world contexts.

Comment by Raemon on Second-Time Free · 2024-12-11T20:05:51.163Z · LW · GW

Mod note: I normally leave this sort of post on Personal Blog because it's pretty niche, but I frontpaged this one because it was a) short, and b) the principle seemed like it might generalize to other marketing situations.

Comment by Raemon on Subskills of "Listening to Wisdom" · 2024-12-11T19:15:30.292Z · LW · GW

Nod. So, first of all: I don't know. My own guesses would depend on a lot of details of the individual person (even those in a similar situation to you).

(This feels somewhat outside the scope of the main thrust of this post, but, definitely related to my broader agenda of 'figure out a training paradigm conveying the skills and tools necessary to solve very difficult, confusing problems')

But, riffing anyway. First, summarizing what seemed like the key points:

  • You want to predict optimization daemons as they arise in a system, and want a good mathematical basis for that, and don't feel satisfied with the existing tools.
  • You're currently exploring this on-the-side while working on some more tractable problems.
  • You've identified two broad-strategies, which are:
    • somehow "deliberate practice" this,
    • somehow explore and follow your curiosity intermittently

Three things I'd note:

  • "Deliberate practice" is very open-ended. You can deliberately practice noticing and cultivating veins of curiosity, for example.
  • You can strategize about how to pursue curiosity, or explore (without routing through the practice angle)
  • There might be action-spaces other than "deliberate practice" or "explore/curiosity" that will turn out to be useful.

My current angle for deliberate practice is to find problem sets that feel somehow-analogous to the one you're trying to tackle, but simpler/shorter. They should be difficult enough that they feel sort of impossible while you're working on them, but, also actually solvable. They should be varied enough that you aren't overfitting to one particular sort of puzzle.

After the exercise, apply the Think It Faster meta-exercise to it.

Part of the point here is to notice strategies like "apply explicit systematic thinking" and strategies like "take a break, come back to it when you feel more inspired", and start to develop your own sense of which strategies work best for you.

Comment by Raemon on Subskills of "Listening to Wisdom" · 2024-12-10T22:10:50.686Z · LW · GW

Shortly before finishing this post, I reread Views on when AGI comes and on strategy to reduce existential risk. @TsviBT notes that there are some difficult confrontation + empathy skills that might help with communicating with people doing capabilities research. But, this "goes beyond what's normally called empathy":

It may go beyond what's normally called empathy, understanding, gentleness, wisdom, trustworthiness, neutrality, justness, relatedness, and so on. It may have to incorporate a lot of different, almost contradictory properties; for example, the intervener might have to at the same time be present and active in the most oppositional way (e.g., saying: I'm here, and when all is said and done you're threatening the lives of everyone I love, and they have a right to exist) while also being almost totally diaphanous (e.g., in fact not interfering with the intervened's own reflective processes)

He also noted that there are people who are working on similar abilities, but they aren't pushing themselves enough:

Some people are working on related abilities. E.g. Circlers, authentic relaters, therapists. As far as I know (at least having some substantial experience with Circlers), these groups aren't challenging themselves enough. Mathematicians constantly challenge themselves: when they answer one sort of question, that sort of question becomes less interesting, and they move on to thinking about more difficult questions. In that way, they encounter each fundamental difficulty eventually, and thus have likely already grappled with the mathematical aspect of a fundamental difficulty that another science encounters.

This isn't quite the same thing I was looking at in this post, but something about it feels conceptually related. I may have more to say after thinking about it more.

Comment by Raemon on Exercise: Solve "Thinking Physics" · 2024-12-10T02:24:55.195Z · LW · GW

Nod. I think I would basically argue that wasn't really a reasonable probability to give the second option. (When I thought it was 90/80/2, I was like "okay, well, that's close to 50/50, which feels like a reasonable guess for the authorial intent as well as, in practice, what you can derive from unlabeled graphs.")

Comment by Raemon on Subskills of "Listening to Wisdom" · 2024-12-09T23:21:17.090Z · LW · GW

So, as this post describes, I think there's basically a skill of "being good at imagination", that makes it easier to (at least) extrapolate harder from your existing experience library to new things. A skilled imagineer that hasn't advanced in a competitive game, but has gained some other kind of confidence, or suffered some kind of "reality kick to the face", can probably extrapolate to the domain of competitive games.

But, also, part of the idea here is to ask "what do you actually need in order for the wisdom to be useful."

So, my challenge for you: what are some things you think someone will (correctly) do differently, once they have this combination of special qualia?

Comment by Raemon on Subskills of "Listening to Wisdom" · 2024-12-09T23:09:43.857Z · LW · GW

Yeah, a lot of my work recently has gone into figuring out how to teach this specific skill. I have another blogpost about it in the works. "Recursively asking 'Why exactly is this impossible?'"

Comment by Raemon on Exercise: Solve "Thinking Physics" · 2024-12-09T22:11:24.565Z · LW · GW

I just briefly skimmed your answer (trying not to actually engage with it enough to figure out the problem or your thought process), and then went and looked at the problem.

I got the answer B. The reason I went with B is that (especially contrasted with other illustrations in the book), the problem looks like it's going out of its way to signal that the squares are regular enough that they are trying to convey "this is the same relative size."

I think there's not going to be an objective answer here – sometimes, graphs without units are complete bullshit, or on a logscale, or with counterintuitive units or whatever. Sometimes, they are basically what they appear-at-first-glance to be.

instead I did a 90-80-2 split across these, getting it 'wrong' when the answer was deemed to be b.

Does this mean you assigned ~49% on B? (Not 100% sure how to parse this.)

The way I approach Thinking Physics problems is

a) I do assume I am trying to guess what the author thought, which does sometimes mean psychologizing (this is sort of unfortunate, but also not that different from most real-world practical examples, where you often get a task that depends on what other people think-they-meant, and you have to do a mix of "what is the true underlying territory?" and "am I interpreting the problem correctly?").

b) whenever there are multiple things I'm uncertain of ("what does 'pressure' actually mean?", "what does the author mean by 'pressure'?"), I try to split those out into multiple probabilities.
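As a minimal sketch of that "split out multiple probabilities" move: weight each interpretation of the question, estimate the answer's probability under each, and sum. (The interpretation labels and all the numbers below are purely illustrative assumptions of mine, not from the exercise.)

```python
# Hypothetical example: how likely is answer B, given uncertainty about
# whether to read the graph literally vs. guess the author's intent?
p_interpretation = {"literal_graph": 0.6, "authorial_intent": 0.4}  # must sum to 1
p_b_given = {"literal_graph": 0.9, "authorial_intent": 0.5}  # P(B | interpretation)

# Total probability: P(B) = sum_i P(interpretation_i) * P(B | interpretation_i)
p_b = sum(p_interpretation[i] * p_b_given[i] for i in p_interpretation)
print(round(p_b, 2))
```

The point is just bookkeeping: rather than one gut number, each source of uncertainty gets its own probability, and the law of total probability combines them.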

Comment by Raemon on sarahconstantin's Shortform · 2024-12-09T22:01:08.796Z · LW · GW

otoh I also don't think cutting off contact with anyone "impure", or refusing to read stuff you disapprove of, is either practical or necessary. we can engage with people and things without being mechanically "nudged" by them.

Is there a particular reason to believe this? Or is it more of a hope?

Comment by Raemon on Hazard's Shortform Feed · 2024-12-09T20:05:08.870Z · LW · GW

(I have not engaged with this thread deeply)

I've talked to Michael Vassar many times in person. I'm somewhat confident he has taken LSD based on him saying so (although if this turned out wrong I wouldn't be too surprised, my memory is hazy)

I definitely have the experience of him saying lots of things that sound very confusing and crazy, making pretty outlandish brainstormy-style claims that are maybe interesting, which he claims to take as literally true, but that seem either false or at least to involve a lot of inferential distance. I have also heard him make a lot of morally charged, intense statements that didn't seem clearly supported.

(I do think I have valued talking to Michael, despite this, he is one of the people who helped unstick me in certain ways, but, the mechanism by which he helped me was definitely via being kinda unhinged sounding.) 

Comment by Raemon on Subskills of "Listening to Wisdom" · 2024-12-09T19:47:55.200Z · LW · GW

(FYI this is George from the essay, in case people were confused)

Comment by Raemon on Subskills of "Listening to Wisdom" · 2024-12-09T19:26:48.050Z · LW · GW

My overall frame is it's best to have emotional understanding and system 2 deeply integrated. How to handle local tradeoffs unfortunately depends a lot on your current state, and where your bottlenecks are.

Could you provide a specific, real-world example where the tradeoff comes up and you're either unsure of how to navigate it, or you think I might suggest navigating it differently?

Comment by Raemon on Are there ways to artificially fix laziness? · 2024-12-08T23:43:09.603Z · LW · GW

I think many people have found this frame ultimately sort of unhelpful at the stated goal.

My experience has been that generally, "laziness" is more of a symptom than a cause, or at least not very useful as a model. 

Here are a few alternate frames:

  1. do you actually believe in the thing you are procrastinating away from doing?
    1. is the problem that your job just sucks and you should get a different one? Or a different work environment?
    2. can you connect more strongly with whatever is good about the thing you are trying to do?
  2. is there something particularly unpleasant about the thing you're procrastinating? Can you somehow address that?
  3. are you specifically addicted to particular flavors of brain-rot content?
  4. do you need rest? (Often I need rest, but then slide into addictive youtube content or videogames instead of resting, so #3 is also relevant but incomplete)

It's useful to be able to power-through with willpower, but if you're finding yourself needing willpower IMO it usually means something else is wrong.

Comment by Raemon on The 2023 LessWrong Review: The Basic Ask · 2024-12-07T04:06:42.280Z · LW · GW

Things I am interested in:

  • what have you learned since then? Have you changed your mind or your ontology?
  • What would you change about the post? (Consider actually changing it)
  • What do you most want people to know about this post, for deciding whether to read or review-vote on it?
  • How concretely have you (or others you know of) used or built on the post? How has it contributed to a larger conversation?

Comment by Raemon on The 2023 LessWrong Review: The Basic Ask · 2024-12-07T04:03:02.126Z · LW · GW

…the status quo is that we're hiding information on one page. I'm proposing we no longer do that, which sounds like what you want?

Comment by Raemon on The 2023 LessWrong Review: The Basic Ask · 2024-12-06T18:47:41.123Z · LW · GW

A thing unclear to me: is it worth hiding the authors from the Voting page?

On the first LessWrong Review, we deliberately hid authors and randomized the order of the voting results. A few years later, we've mostly shifted towards "help people efficiently sort through the information" rather than "making sure the presentation is random/fair." It's not like people don't know who the posts are by once they start reading them.

Curious what people think.