Posts

Why is it so hard to change people's minds? Well, imagine if it wasn't... 2023-07-10T13:55:38.239Z

Comments

Comment by Celarix on Rejecting Television · 2024-04-23T15:44:39.103Z · LW · GW

The visual techniques of TV—cuts, zooms, pans, and sudden noises—all activate the orient response.


Anecdotal, but this form of rapid cutting is most assuredly alive and well. I saw a promotional ad for an upcoming MLB game on TBS. In a mere 25 seconds, I counted over 35 different cuts: cuts between players, cuts between people in the studio, cut after cut after cut. It was strangely exhausting.

Comment by Celarix on A New Response To Newcomb's Paradox · 2024-04-20T02:14:22.706Z · LW · GW

The thing about Newcomb's problem for me was always the disparity between the two boxes: $1,000,000 in one and only $1,000 in the other. I'd rather not risk losing $999,000 for a chance at an extra $1,000! I could just one-box for real, take the million, then put it in an index fund and wait for it to go up by 0.1%.

I do understand that the question really comes into play when the amounts vary and Omega's success rate is lower - if I could one-box for $500 or two-box for $1,500 total, and Omega were observed to be wrong 25% of the time, that would be a different play.
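To make the hypothetical variant above concrete, here's a quick expected-value sketch (my own illustration; the $500/$1,500 amounts and 25% error rate come from the comment, and I'm assuming the standard setup where the transparent box always holds the difference):

```python
# Expected-value sketch for the hypothetical variant above:
# opaque box holds $500 if Omega predicted one-boxing, else $0;
# transparent box always holds $1,000; Omega is wrong 25% of the time.
OPAQUE, TRANSPARENT = 500, 1_000
P_CORRECT = 0.75

# If you one-box, Omega predicted it with probability P_CORRECT,
# so the opaque box is full that often.
ev_one_box = P_CORRECT * OPAQUE + (1 - P_CORRECT) * 0

# If you two-box, Omega predicted it with probability P_CORRECT,
# so the opaque box is usually empty.
ev_two_box = P_CORRECT * TRANSPARENT + (1 - P_CORRECT) * (OPAQUE + TRANSPARENT)

print(ev_one_box)  # 375.0
print(ev_two_box)  # 1125.0
```

Under those numbers two-boxing dominates, which is exactly why the smaller stakes and noisier predictor make it "a different play."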

Comment by Celarix on Taking into account preferences of past selves · 2024-04-16T02:09:56.720Z · LW · GW

I don’t want to spend ten years figuring this out.

A driving factor in my own philosophy around figuring out what to do with my life. Some people spend decades doing something they don't like, or living with something trivially correctable: putting up with a cluttered basement for years on end when a single weekend would clean it up.

Comment by Celarix on Medical Roundup #2 · 2024-04-10T14:32:50.733Z · LW · GW

Hmm. My family and I always let the ice cream sit for about 10 to 15 minutes to let it soften first. Interesting to see the wide range of opinions, wasn't even aware that wasn't a thing.

Comment by Celarix on Open Thread Spring 2024 · 2024-04-05T15:14:02.010Z · LW · GW

My thinking is that the more discussed threads would have more value to the user. Small threads with 1 or 2 replies are more likely to be people pointing out typos or just saying +1 to a particular passage.

Of course, there is a spectrum - deeply discussed threads are more likely to be angry back-and-forths that aren't very valuable.

Comment by Celarix on Open Thread Spring 2024 · 2024-04-03T14:26:03.566Z · LW · GW

Ooh, nice. I've been wanting this kind of discussion software for a while. I do have a suggestion: maybe, when hovering over a highlighted passage, you could get some kind of indicator of how many child comments are under that section, and/or change the highlight contrast for threads that have more children, so we can tell which branches of the discussion got the most attention.

Comment by Celarix on Vernor Vinge, who coined the term "Technological Singularity", dies at 79 · 2024-03-29T14:35:49.735Z · LW · GW

Noted, thank you. This does raise my confidence in Alcor.

Comment by Celarix on Vernor Vinge, who coined the term "Technological Singularity", dies at 79 · 2024-03-28T13:37:39.989Z · LW · GW

This doesn't really raise my confidence in Alcor, an organization that's supposed to keep bodies preserved for decades or centuries.

Comment by Celarix on A Teacher vs. Everyone Else · 2024-03-22T03:08:36.493Z · LW · GW

I can kind of see the original meme's point in the extremes. Consider a mechanic shop that has had very, very slow business for months and is in serious financial trouble. I can see the owners Moloching their way into "suggesting" that their technicians maybe don't fix it all the way. After all, what's the harm in having a few customers come back a little more often if it means maybe saving the business?

But this is only on the extremes.

Comment by Celarix on The Felt Sense: What, Why and How · 2024-03-13T15:15:19.315Z · LW · GW

Noted, thank you.

Comment by Celarix on The Felt Sense: What, Why and How · 2024-03-12T15:06:15.723Z · LW · GW

Here's Duncan Sabien describing the experience of honing down on a particular felt sense


I'm confused - the original author seems to be Connor Morton?

Comment by Celarix on 2023 Survey Results · 2024-02-22T13:39:01.920Z · LW · GW

I mean, sure, but that kinda answers the question within the question - "if event X happens, should you believe that event X is possible?" Well, yes, because it happened. I guess, in that case, the question could be measuring something more like "I, a Rationalist, would not believe in ghosts because that would lower my status in the Rationalist community, despite seeing strong evidence for them."

Sort of like asking "are you a Rationalist or are you just saying so for status points?"

Comment by Celarix on 2023 Survey Results · 2024-02-22T01:28:36.287Z · LW · GW

I kinda disagree - if you see ghosts, almost all the probability space should be moving to "I am hallucinating".

Comment by Celarix on Wrong answer bias · 2024-02-04T20:46:46.811Z · LW · GW

Fair! That's a simple if not easy solution, definitely bottom-left quadrant instead of bottom-right!

Comment by Celarix on Wrong answer bias · 2024-02-04T00:45:22.645Z · LW · GW

Likely true. The sorts of problems I was thinking about for the razor are ones that have had simple solutions for a very long time - walking, talking, sending electrical current from one place to another, illuminating spaces, stuff like that.

Perhaps a 2x2 grid would be helpful?

A 2x2 grid of whether a solution exists or not vs. whether people believe one exists or not.

I feel like this post is standing against the top-left quadrant and would prefer everyone to move to the bottom-left quadrant, which I agree with. My concern is the people in the bottom-right quadrant, which I don't believe lukehmiles is in, but I fear they may use this post as fuel for their belief - i.e. "depression is easy, you attention-seeking loser! just stop being sad, it's a solved problem!"

Comment by Celarix on Wrong answer bias · 2024-02-03T17:26:25.394Z · LW · GW

Yes, so long as one can tell the difference between a problem that is solved (construction, microprocessor design, etc.) and one that is not ("depressed? just stop being sad, it's easy")

Also, we might apply an unnamed razor: If a problem has a simple solution, everyone would already be doing it.

Comment by Celarix on Wrong answer bias · 2024-02-03T01:56:01.532Z · LW · GW

I broadly agree, but I think it's worth it to learn to distinguish scenarios where a simple solution is known from ones where it is not. We have, say, building design and construction down pat, but AGI alignment? A solid cure for many illnesses? The obesity crisis? No simple solution is currently known.

Comment by Celarix on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-09T02:38:56.176Z · LW · GW

Pretty good overall. My favorite posts are the ones about theories of the human mind that help me build a model of my own mind and the minds of others, especially in how the mind can go wrong (mental illness, ADHD, et al.)

The AI stuff is way over my head, to the point where my brain just bounces off of the titles alone, but that's fine - not everything is for everyone. Also, reading the acronyms EDT and CDT always makes me think of the timezones, not the decision theories.

About the only complaint I have is that the comments can get pretty dense and recursively meta, which can be a bit hard to follow. Zvi will occasionally talk about a survey of AI safety experts giving predictions about stuff and it just feels like a person talking about people talking about predictions about risks associated with AI. But this is more of a me thing and probably people who can keep up find these things very useful.

Comment by Celarix on Limerence Messes Up Your Rationality Real Bad, Yo · 2023-12-07T16:12:43.121Z · LW · GW

This post demonstrates another surface of the important interplay between our "logical" (really just verbal) part-of-mind and our emotional part-of-mind. Other posts on this site, including by Kaj Sotala and Valentine, go into this interplay and how our rationality is affected by it.

It's important to note, both for ourselves and for our relationships with others, that the emotional part is not something that can be dismissed or fought with, and I think this post does well in explaining an important facet of that. Plus, when we're shown the possible pitfalls ahead of any limerence, we can be more aware of it when we do fall in love, which is always nice.

Comment by Celarix on (briefly) RaDVaC and SMTM, two things we should be doing · 2023-12-05T06:34:58.676Z · LW · GW

My review mostly concerns SMTM's A Chemical Hunger portion of this post. RaDVaC was interesting if not particularly useful, but SMTM's series has been noted by many commenters to be a strange theory, possibly damaging, and there was, as of my last check, no response by SMTM to the various rebuttals.

It does not behoove rationalism to have members who do not respond to critical looks at their theories. Such theories stand to do a lot of damage and cost a lot of lives if taken seriously.

Comment by Celarix on Sherlockian Abduction Master List · 2023-12-04T06:19:16.947Z · LW · GW

Oh, yes, true. However, I still maintain that particularly jerkish people would be happy to misgender in that manner as they'd think that the only good gender is male or somesuch nonsense.

Comment by Celarix on Sherlockian Abduction Master List · 2023-12-03T20:05:05.357Z · LW · GW

Counterpoint: it could also be because the speaker thinks male is default and automatically thinks of an unknown person as male.

Comment by Celarix on [Linkpost] George Mack's Razors · 2023-11-29T15:15:16.753Z · LW · GW

20. Memetic Razor: If you hear news "through the grapevine" or see something on the "popular" feeds of social media, it has likely traveled a long journey of memetic selection to get to you, and is almost certainly modified from the original.

Comment by Celarix on Will no one rid me of this turbulent pest? · 2023-10-16T13:24:20.205Z · LW · GW

I notice I feel some opposition to this, mostly on the grounds that messing with nature tends to end rather poorly for us. Nature is trillions of deeply interconnected dimensions doing who-knows-what at every layer; there is a small chance that release of these gene drives could be an x-risk. So do we take a guaranteed 600,000 dead every year, or an x% chance of accidentally wiping out all life on Earth? What value of x is acceptably low?
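As a rough back-of-the-envelope sketch of that trade-off (my own illustration; the 600,000 figure is from the comment, while the population and the 50-year horizon are assumptions I'm adding):

```python
# Break-even sketch: compare expected deaths from waiting N years
# against an x chance that the gene drive causes a catastrophe that
# kills everyone. Illustrative numbers only.
ANNUAL_MALARIA_DEATHS = 600_000
WORLD_POPULATION = 8_000_000_000
YEARS = 50  # hypothetical horizon for the comparison

deaths_if_we_wait = ANNUAL_MALARIA_DEATHS * YEARS

# Value of x at which the two expected death tolls are equal:
break_even_x = deaths_if_we_wait / WORLD_POPULATION

print(break_even_x)  # 0.00375, i.e. about 0.4% over the whole horizon
```

Note this naive comparison only counts present lives; an x-risk also destroys all future generations, so the genuinely acceptable x would be far lower still.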

Comment by Celarix on Why is it so hard to change people's minds? Well, imagine if it wasn't... · 2023-08-02T14:07:31.008Z · LW · GW

This is good stuff, thank you. I think these are all good ways to avoid the trap of letting others decide your goals for you, and I like the idea of continuously changing your goals if you find they aren't working/have been Goodharted/etc.

Comment by Celarix on Why is it so hard to change people's minds? Well, imagine if it wasn't... · 2023-08-02T00:08:35.633Z · LW · GW

Good catch, didn't think of that. Peer pressure definitely seems like a better way to change minds than one-on-one persuasion. This is still parasitism, though - I don't know if I'd trust most people to form a group to hold me accountable for changes in my behavior. It seems too easy for them to, intentionally or not, shape my request in ways that benefit them.

For example, I might form a group to help me lose weight. I care very much about my physical wellbeing and reducing discomfort, but they might care more about my ugly appearance and assume that's what I'm going for, too. Worse yet, my discomfort is invisible to them, and my ugliness in their eyes is invisible to me!

Certainly not an insurmountable obstacle, of course, but one to be aware of.

EDIT: I read your paragraph on cults and then completely ignored it when writing my response. Of course you know that peer pressure can be bad, you said it yourself. My mistake.

Comment by Celarix on I'm consistently overwhelmed by basic obligations. Are there any paradigm shifts or other rationality-based tips that would be helpful? · 2023-07-22T15:09:07.993Z · LW · GW

I do have this feeling from time to time. Some stuff that's helped me:

  • Simplify, simplify, simplify. Do your chores with as little effort as possible, per https://mindingourway.com/half-assing-it-with-everything-youve-got/. Buy stuff that does your work for you. Your goal is to complete your chores, not to do work; work is in service to the goal, not the goal itself.
  • Success by default. Make the easiest, lowest-energy, laziest way to do the thing also the right way. Have exactly one clothes hamper for dirty clothes. Buy clothes that don't need ironing. Put decorative items on unused surfaces so you don't put random junk on them.
  • Suggested to me by ACX commenters: random rewards. Pick something you like (chocolates, kisses from your wife, etc.) and set up a timer that goes off at a randomly selected interval which entitles you to that reward. The randomness might help your brain to associate chores with better feelings.
  • Make it satisfying: Power-washers for big jobs, steam cleaners, compressed air, powered dish brushes. There's a lot of cleaning content on /r/oddlysatisfying for a reason.

This next one's a lot more speculative; feel free to completely disregard it. Seriously, I'm suggesting the following because it works for me, but my anxiety response might be way weaker than yours, and my method might be totally useless for you.

Anyway, it sounds like you have a strong anxiety response when either doing chores or thinking about them. I had a similar issue, and my strategy was flooding, exposure therapy turned up to eleven. Do not recommend.

My real suggestion: when you feel these racing thoughts and fast breathing, stop the chore for a moment and stand there. Don't try to distract yourself, because the focus is, for a moment, now on the anxiety response itself.

For me, anxiety is a series of waves of intense physical symptoms like fast heartbeat and breathlessness. There's definitely a sense of "here it comes!" before each next wave. One thing that helped me was realizing that, while the anxiety response can be quite long-lived, each wave was only 10-15 seconds long. Noticing this gave me strength; if I make it through this wave, I'll have a few minutes to work with the anxiety before the next. This may not be true for you, though.

You say that the endless nature of these chores terrifies you. In this moment of standing still, your goal is to dig a bit deeper, to try to ask "why does the fact that these chores are endless scare me so much?"

Possible answers could be:

  • Endless chores scare me because they'll take up a huge chunk of my finite life.
  • Endless chores scare me because every time I finish them, I have to do them again soon; all my previous work has amounted to nothing.
  • Endless chores scare me because my thoughts race and my breathing speeds up.

Or something else entirely! It might take a while to get a good answer, and you might have to think through many of them until one seems to fit.

All this might sound very familiar; you say you're taking anxiety medication, but you may also have heard all this through therapy (Cognitive Behavioral Therapy) or meditation. Again, my advice works for me but may not work for you! Please don't feel bad if this doesn't apply!

Comment by Celarix on I'm consistently overwhelmed by basic obligations. Are there any paradigm shifts or other rationality-based tips that would be helpful? · 2023-07-22T02:25:57.583Z · LW · GW

Not a full answer, but Kaj Sotala's Multiagent Models of Mind (https://www.lesswrong.com/s/ZbmRyDN8TCpBTZSip) is a great sequence that introduces some of these concepts.

Comment by Celarix on Introducing Fatebook: the fastest way to make and track predictions · 2023-07-13T15:40:11.053Z · LW · GW

Cool, thank you!

Comment by Celarix on Introducing Fatebook: the fastest way to make and track predictions · 2023-07-11T23:09:55.506Z · LW · GW

Oh HELL yeah. I tried Metaculus's private predictions for this, but they needed just as much detail as the public ones did, at least in terms of "this field is required". They seem to be aiming more for the superforecaster camp, people who actually give their predictions some thought, which is perfectly fine, but not suited for me; I just want something quick and simple.

Signup was easy, I love how it watches for dates in the question and automatically sets them in the resolve field. Posting a comment containing a link by itself (https://www.cnbc.com/2023/04/24/bitcoin-btc-price-could-hit-100000-by-end-2024-standard-chartered.html) on one of my private predictions seems to have posted a blank comment, though. (p.s. my odds on 1 BTC = $100,000 are about 15%)

Absolutely love this one. I've already migrated my (admittedly few) existing predictions over. Thanks for this!

EDIT: "Each day I’ll write down whether I want to leave or stay in my job. After 2 months, will I have chosen ‘leave’ on >30 days?" is such a good suggested question, both to demonstrate details about the site and also to get people to think about what prediction can do for them, personally! And I love how you can just click the suggestion and have it auto-populate into the title field.

Comment by Celarix on Why is it so hard to change people's minds? Well, imagine if it wasn't... · 2023-07-11T13:26:07.784Z · LW · GW

That's actually one I wanted to link but I just could not remember the title for the life of me. Thanks!

Comment by Celarix on Why is it so hard to change people's minds? Well, imagine if it wasn't... · 2023-07-10T22:54:08.395Z · LW · GW

Sounds about right! Thanks for these links, I look forward to reading them. Pulling sideways is an underappreciated life skill - sometimes you have to question the playing field, not just the game.

Comment by Celarix on Optimized for Something other than Winning or: How Cricket Resists Moloch and Goodhart's Law · 2023-07-07T22:21:29.343Z · LW · GW

This is why it is important that the 'spirit of cricket' is never properly codified into laws. If it was, then players would simply game the rules and find the most successful strategy that operates within the laws of the game and the process would be Goodharted.


This is a fascinating take! Ambiguity, and things different people see differently, as a defense against Moloch and Goodhart. I think there are a lot of people in this community, myself very much included, who don't like ambiguity and would prefer that everything had a solid, clear, objective answer.

Comment by Celarix on Most people should probably feel safe most of the time · 2023-05-10T13:36:17.424Z · LW · GW

I'd say kind of... you definitely have to keep your attention and wits about you on the road, but if you're relying on anxiety and unease to help you drive, you're probably actually doing a bit worse than optimal safety - too quick to assume that something bad will happen, likely to overcorrect and possibly cause a crash.

Comment by Celarix on Should LW have an official list of norms? · 2023-04-27T15:11:14.935Z · LW · GW

I'm afraid I don't have the time for a full writeup, but the Stack Exchange community went through a similar problem: should the site have a place to discuss the site? Jeff Atwood, cofounder, said [no](https://blog.codinghorror.com/meta-is-murder/) initially, but the community wanted a site-to-discuss-the-site so badly that they considered even a lowly phpBB instance. Atwood eventually [realized he was wrong](https://blog.codinghorror.com/listen-to-your-community-but-dont-let-them-tell-you-what-to-do/) and endorsed the concept of Meta Stack Exchange.

Comment by Celarix on Moderation notes re: recent Said/Duncan threads · 2023-04-27T00:56:20.417Z · LW · GW

I think if you model things as just "an internet community" this will give you the wrong intuitions. 

This, plus Vaniver's comment, has made me update - LW has been doing some pretty confusing things if you look at it like a traditional Internet community that make more sense if you look at it as a professional community, perhaps akin to many of the academic pursuits of science and high-level mathematics. The high dollar figures quoted in many posts confused me until now.

Comment by Celarix on Moderation notes re: recent Said/Duncan threads · 2023-04-27T00:47:27.198Z · LW · GW

Yeah, that does seem like what LW wants to be, and I have no problem with that. A payout like this doesn't really fit neatly into my categories of what money paid to a person is for, and that may be on my assumptions more than anything else. Said could be hired, contracted, paid for a service he provides or a product he creates, paid for the rights to something he's made, paid to settle a legal issue... the idea of a payout to change part of his behavior around commenting on LW posts was just, as noted on my reply to habryka, extremely surprising.

Comment by Celarix on Moderation notes re: recent Said/Duncan threads · 2023-04-27T00:41:29.920Z · LW · GW

The amount of moderator time spent on this issue is both very large and sad, I agree, but I think it causes really bad incentives to offer money to users with whom moderation has a problem. Even if only offered to users in good standing over the course of many years, that still represents a pretty big payday if you can play your cards right and annoy people just enough to fall in the middle between "good user" and "ban".

I guess I'm having trouble seeing how LW is more than a (good!) Internet forum. The Internet forums I'm familiar with would have just suspended or banned Said long, long ago (maybe Duncan, too, I don't know).

I do want to note that my problem isn't with offering Said money - any offer to any user of any Internet community feels... extremely surprising to me. Now, if you were contracting a user to write stuff on your behalf, sure, that's contracting and not unusual. I'm not even necessarily offended by such an offer, just, again, extremely surprised.

Comment by Celarix on Moderation notes re: recent Said/Duncan threads · 2023-04-26T14:35:34.809Z · LW · GW

by offering him like $10k-$100k to change his commenting style or to comment less in certain contexts

What other community on the entire Internet would offer 5 to 6 figures to any user in exchange for them to clean up some of their behavior?

how is this even a reasonable-

Isn't this community close in idea terms to Effective Altruism? Wouldn't it be better to say "Said, if you change your commenting habits in the manner we prescribe, we'll donate $10k-$100k to a charity of your choice?"

I can't believe there's a community where, even for a second, having a specific kind of disagreement with the moderators and community (while also being a long-time contributor) results in considering a possibly-six-figure buyout. I've been a member on other sites with members who were both a) long-standing contributors and b) difficult to deal with in moderation terms, and the thought of any sort of payout, even $1, would not have even been thought of.

Comment by Celarix on Contra Yudkowsky on AI Doom · 2023-04-25T00:18:13.367Z · LW · GW

I don't have much to contribute on AI risk, but I do want to say +1 for the gutsy title. It's not often you see the equivalent of "Contra The Founding Mission of an Entire Community".

Comment by Celarix on Goodhart's Law inside the human mind · 2023-04-22T00:39:11.248Z · LW · GW

A lot of good stuff here, especially the part on how we can be wrong about our emotions being subtly noticeable. I feel like this supports somewhat tighter cycle times and checking in with subagents more often, so one doesn't spend years chasing the wrong ideas.

Thanks for the reply!

Comment by Celarix on Goodhart's Law inside the human mind · 2023-04-18T02:05:29.639Z · LW · GW

This is one of my favorite sequences on this site and I'm quite glad to see a new entry. I do have a question regarding the last section:

Rather, I would suggest opening up to feelings. Becoming familiar with them, understanding where they come from and what they are trying to do, and allowing them to become updated with new evidence and feedback.

How does one gain confidence that their read on their own emotions is an accurate description of the message those emotions are trying to communicate? That is, how can one be sure that they're actually listening to their emotions and not just assuming?

For example, many of us might be familiar with the type that listens to half of your description of an issue, assumes they immediately understand it perfectly, then gives you advice that doesn't match your problem at all. ("I've been feeling sad late-" "oh yeah I know, man. Just get some more sleep, you'll perk right up!") How do I know I'm not doing that to my own emotions?

It seems like the Rationalist approach to psychology has reached some incredibly important yet very subtle places, where the valuable signals we want to pay attention to (i.e. the true intent of an emotion) are incredibly weak. People wander the metaphorical wilderness for decades without truly seeing what's going on in their heads, even many who regularly go to therapy. I'm afraid of ascribing the completely wrong message to what my emotions are trying to tell me and getting stuck examining the wrong model for large parts of my life.

Anyway, an excellent post in an excellent sequence. Your work and Valentine's work, more than many others here, have made things make sense to me. Thank you!

Comment by Celarix on Reflective journal entries using GPT-4 and Obsidian that demand less willpower. · 2023-04-16T17:28:10.950Z · LW · GW

Reflection: this adds a lot of extra writing in order to answer the questions. Worth being aware of; adding GPT-4 to your journaling process may increase the time it takes to complete it each day/week/etc.

Comment by Celarix on Reflective journal entries using GPT-4 and Obsidian that demand less willpower. · 2023-04-15T18:54:39.959Z · LW · GW

I might try this with GPT 3.5, as I can't yet justify spending the $20/month on GPT-4. Something like this seems quite interesting!

(oh god we're going to have to be using this near-AGI stuff for everything soon, aren't we?)

Comment by Celarix on LW Team is adjusting moderation policy · 2023-04-07T14:38:45.621Z · LW · GW

3. Needing help with depression, akrasia, or medical advice with confusing mystery illness.

Bit of a shame to see this one, but I understand it. It's crunch time for AGI alignment and there's a lot on the line. Maybe those of us interested in self-help can post our thoughts on some of the rationalsphere blogs, or maybe start our own.

I got a lot of value out of the more self-help and theory of mind posts here, especially Kaj Sotala's and Valentine's work on multiagent models of mind, and it'd be cool to have another place to continue discussions around that.

Comment by Celarix on Shutting down AI is not enough. We need to destroy all technology. · 2023-04-02T00:44:17.746Z · LW · GW

There's only one way to save us from AI armageddon. Either the AI mercilessly tears us apart, or we choose our own dignified exit.

Cause vacuum decay. Obliterate the universe. Guarantee that nothing will ever occur again.

Comment by Celarix on The Overton Window widens: Examples of AI risk in the media · 2023-03-26T16:09:30.701Z · LW · GW

COVID at least had some policy handles that the government could try to pull: lockdowns, masking, vaccines, etc. What could they even do against AGI?

Comment by Celarix on GPT-4 and ASCII Images? · 2023-03-19T15:52:04.499Z · LW · GW

I like this question - if it proves true that GPT-4 can produce recognizable ASCII art of things, that would mean it was somehow modelling an internal sense of vision and ability to recognize objects.

Comment by Celarix on Dice Decision Making · 2023-03-11T02:31:50.198Z · LW · GW

The pedant in me wants to say that three tails means 7 (tails, tails, tails → 111 → 1 + 2 + 4 → 7).

EDIT: Ah, now I see you started with 1, so the max value is indeed 8.
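A minimal sketch of the coin-to-number mapping being discussed (my own illustration, assuming tails = 1 as in the binary reading above, with the result shifted so the range starts at 1):

```python
# Map three coin flips to a number from 1 to 8 by reading the flips
# as binary digits (heads = 0, tails = 1), then adding 1 so the
# range starts at 1 instead of 0.
def flips_to_choice(flips):
    value = 0
    for flip in flips:  # each flip is "H" or "T"
        value = value * 2 + (1 if flip == "T" else 0)
    return value + 1

print(flips_to_choice("HHH"))  # 1
print(flips_to_choice("TTT"))  # 8
```

Each of the eight flip sequences maps to a distinct value in 1 through 8, so three fair coins give a uniform choice among eight options.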

Comment by Celarix on AI #2 · 2023-03-03T01:24:41.178Z · LW · GW

The voting response seems a bit harsh to this one; I read it as "where there is opportunity, there will be people selling pickaxes and shovels, and there will also be grifters teaching courses on how to market pickaxes and shovels."