Posts

Back to Basics: Truth is Unitary 2024-03-29T21:10:33.399Z
Many people lack basic scientific knowledge 2024-03-29T06:43:19.219Z
Non-Confusion 2024-03-12T02:46:27.853Z
Mushin 2024-03-06T03:27:14.491Z
flowing like water; hard like stone 2024-02-20T03:20:46.531Z
Lsusr's Rationality Dojo 2024-02-13T05:52:03.757Z
A Socratic Dialogue about Socratic Dialogues 2023-12-19T07:50:05.935Z
The Dark Arts 2023-12-19T04:41:13.356Z
What is the next level of rationality? 2023-12-12T08:14:14.846Z
Embedded Agents are Quines 2023-12-12T04:57:31.588Z
A Socratic dialogue with my student 2023-12-05T09:31:05.266Z
[Bias] Restricting freedom is more harmful than it seems 2023-11-22T09:44:12.445Z
Petrov Day [Spoiler Warning] 2023-09-27T19:20:04.657Z
Newcomb Variant 2023-08-29T07:02:58.510Z
When Omnipotence is Not Enough 2023-08-25T19:50:51.038Z
2084 2023-08-25T07:42:13.053Z
[Review] Two People Smoking Behind the Supermarket 2023-05-16T07:25:10.511Z
[Prediction] Humanity will survive the next hundred years 2023-02-25T18:59:57.845Z
The Caplan-Yudkowsky End-of-the-World Bet Scheme Doesn't Actually Work 2023-02-25T18:57:00.105Z
Self-Reference Breaks the Orthogonality Thesis 2023-02-17T04:11:15.677Z
Beyond Reinforcement Learning: Predictive Processing and Checksums 2023-02-15T07:32:55.931Z
Path-Dependence in ChatGPT's Political Outputs 2023-02-04T02:02:21.936Z
Mlyyrczo 2022-12-26T07:58:57.920Z
Predictive Processing, Heterosexuality and Delusions of Grandeur 2022-12-17T07:37:39.794Z
Free Will is [REDACTED] 2022-12-06T08:14:31.281Z
MrBeast's Squid Game Tricked Me 2022-12-03T05:50:02.339Z
Always know where your abstractions break 2022-11-27T06:32:09.643Z
Science and Math 2022-11-27T04:05:25.977Z
[Book Review] "Station Eleven" by Emily St. John Mandel 2022-11-07T05:56:19.994Z
The Teacup Test 2022-10-08T04:25:16.461Z
What are you for? 2022-09-06T03:32:23.536Z
Seattle Robot Cult 2022-08-25T19:29:52.721Z
How do you get a job as a software developer? 2022-08-15T14:45:20.923Z
Checksum Sensor Alignment 2022-07-11T03:31:51.272Z
The Alignment Problem 2022-07-11T03:03:03.271Z
Deontological Evil 2022-07-02T06:57:18.085Z
Dagger of Detect Evil 2022-06-21T06:23:01.264Z
To what extent have ideas and scientific discoveries gotten harder to find? 2022-06-18T07:15:44.193Z
The Mountain Troll 2022-06-11T09:14:01.479Z
The Burden of Worldbuilding 2022-06-04T01:15:44.078Z
Silliness 2022-06-03T04:59:51.456Z
Here's a List of Some of My Ideas for Blog Posts 2022-05-26T05:35:28.236Z
Glass Puppet 2022-05-25T23:01:15.473Z
Seattle Robot Cult 2022-05-07T00:25:13.975Z
Squires 2022-05-02T03:36:13.490Z
One master. One apprentice. 2022-05-01T17:38:18.926Z
The Gospel of Martin Luther 2022-04-28T04:29:58.601Z
Letter to my Squire 2022-04-28T04:16:38.905Z
Rationality Dojo 2022-04-24T00:53:57.384Z
Re: So You Want to Be a Dharma Teacher 2022-04-23T22:31:33.400Z

Comments

Comment by lsusr on Bayeswatch 12: The Singularity War · 2024-04-19T04:50:55.983Z · LW · GW

Fixed. Thanks.

Comment by lsusr on Symbiotic Conflicts · 2024-04-10T19:13:37.058Z · LW · GW

Fixed. Thanks.

Comment by lsusr on Bayeswatch 1: Jewish Space Laser · 2024-04-05T23:36:49.129Z · LW · GW

Fixed. Thanks.

Comment by lsusr on Glass Puppet · 2024-04-03T16:47:08.431Z · LW · GW

Fixed. Thanks.

Comment by lsusr on Anti-Corruption Market · 2024-04-03T16:46:45.660Z · LW · GW

Fixed. Thanks.

Comment by lsusr on The Mountain Troll · 2024-04-03T16:44:55.080Z · LW · GW

Fixed. Thanks.

Comment by lsusr on The Teacup Test · 2024-04-03T16:44:24.083Z · LW · GW

Fixed. Thanks.

Comment by lsusr on Back to Basics: Truth is Unitary · 2024-03-31T18:10:38.810Z · LW · GW

Details aside, you nailed the ambiance.

In my imagination there's no statue in the center, just a pool of water. But I like the second row of statues. The acolyte in that picture works well too.

Did you use the keyword "Parthenon"? That's what the building is based on.

Comment by lsusr on Many people lack basic scientific knowledge · 2024-03-30T00:14:48.637Z · LW · GW

I do! Never seen that one before. It's interesting. I wish I had an easy way to confirm its accuracy, but the more I think about it, the more of my real life experience I connect it to.

The recursion example rings especially true. It's not just in writing that the ability to do recursion seems to have a hard cutoff.

That greentext helps me understand other people so much better. I take the ability to distinguish ethical anachronisms for granted, and hadn't realized how difficult it must be for other people.

Comment by lsusr on Many people lack basic scientific knowledge · 2024-03-30T00:11:13.670Z · LW · GW

Not much. I'm using it as a proxy measurement for general knowledge.

I was thinking about that scene when I wrote this post.

Comment by lsusr on Back to Basics: Truth is Unitary · 2024-03-29T21:58:09.395Z · LW · GW

That part seems reasonable.

Comment by lsusr on Many people lack basic scientific knowledge · 2024-03-29T18:42:52.163Z · LW · GW

Wow. "Level 2" includes things like "the respondent may have to make use of a novel online form".

Comment by lsusr on Many people lack basic scientific knowledge · 2024-03-29T18:30:56.324Z · LW · GW

The thing I'm trying to do is calibrate my model of the distribution of human intelligence. The actual distribution is way lower than my immediate environment makes it appear. Here's another post I wrote which should provide some context on what I mean when I write about "human intelligence". The basic idea is that things like "can fix a carburetor" and "understands genetics" are correlated, not anti-correlated.

Comment by lsusr on Many people lack basic scientific knowledge · 2024-03-29T18:27:42.391Z · LW · GW

Here's the exact title and subtitle.

Title: New Poll Gauges Americans' General Knowledge Levels

Subtitle: Four-fifths know earth revolves around sun

Why are they even wronger?

Comment by lsusr on Many people lack basic scientific knowledge · 2024-03-29T18:27:13.817Z · LW · GW

That's a good point. Human intuitions are geocentric, so the number of people guessing on the heliocentrism question is probably less than 18%. From an expected value perspective, we can treat 18% as guessing, whereas from a default geocentric perspective we can treat 0% as guessing.

But it goes both ways. For questions matching human intuition, if some percentage guessed wrong, then we should assume at least that same percentage got it correct by guessing.

This is where the word "belief" gets fuzzy. I think what's actually going on with the laser question is that people read "Lasers work by focusing <mumble>", which does match the truth. Due to bad heuristics, it's possible for more than 50% of a survey population to guess wrong on a true-or-false question, which means the things they guess right need to be adjusted downward, or else we get nonsensical results.
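The adjustment for lucky guessers can be sketched as a toy calculation. This is a hypothetical model I'm adding for illustration (everyone either knows the answer or guesses uniformly at random on a true/false question); the function name and numbers are mine, not the survey's methodology:

```python
def knowers_fraction(observed_correct: float) -> float:
    """Estimate the fraction of respondents who actually knew the answer
    to a true/false question, under a toy model where each respondent
    either knows the answer or flips a fair coin.

    observed_correct = knowers + 0.5 * guessers
    observed_wrong   = 0.5 * guessers   (every wrong answer is a guess)
    => knowers = observed_correct - observed_wrong
    """
    observed_wrong = 1.0 - observed_correct
    return observed_correct - observed_wrong  # subtract the lucky guessers

# If 82% answered the heliocentrism question correctly, this model says
# only about 64% actually knew the answer:
print(knowers_fraction(0.82))  # ≈ 0.64
```

As the comment notes, geocentric intuition cuts against the uniform-guessing assumption here: many wrong answers reflect a genuine (wrong) default belief rather than a coin flip, so this model likely over-corrects on this particular question.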

Comment by lsusr on Many people lack basic scientific knowledge · 2024-03-29T18:22:03.681Z · LW · GW

Yup. I feel similarly about "human values". The values of specific people are great. Humanity's declared preferences are contradictory and incoherent. Humanity's revealed preferences are awful.

Comment by lsusr on Many people lack basic scientific knowledge · 2024-03-29T18:21:11.221Z · LW · GW

Those are great links! They help me understand Apple's business model so much better.

This is so outside my personal experience. The most non-technical person at my company uses spreadsheets, which puts him well into Level 3.

Comment by lsusr on Economics Roundup #1 · 2024-03-26T18:56:26.731Z · LW · GW

I like the bit about the security of checks.

Comment by lsusr on My Clients, The Liars · 2024-03-08T00:41:51.537Z · LW · GW

I know all of those words. I'm not super-comfortable with "chicanery", but it didn't cause any issues with my reading. Please click [agree] on my comment if you know all three words and [disagree] if there is at least one you don't know.

Comment by lsusr on Using axis lines for good or evil · 2024-03-06T17:30:38.541Z · LW · GW

There's something about the way you write introductions that reminds me of good YouTube videos. It's a combination of easy-to-understand illustrations, simple words, and starting with an interesting question.

Comment by lsusr on My Clients, The Liars · 2024-03-05T23:35:44.887Z · LW · GW

I like these kinds of posts.

Comment by lsusr on The Parable Of The Fallen Pendulum - Part 1 · 2024-03-01T01:19:05.331Z · LW · GW

To measure the period of a pendulum, the pendulum must leave a position and then return to it. The pendulum is not leaving its current position. Therefore it is incorrect to conclude that the pendulum's period is 0.0 seconds.

The students should continue monitoring the pendulum until it leaves its position and then returns to it.
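The measurement logic above can be sketched in code. This is my own illustrative sketch, assuming the pendulum is released from rest at an extreme so that the first return to the starting position marks one full period:

```python
import math

def measure_period(samples, times, tol):
    """Estimate a pendulum's period from position samples.

    Assumes release from rest at an extreme, so the first return to the
    starting position (after leaving it) marks one full period. A pendulum
    that never leaves its starting position yields None (undetermined),
    not a period of 0.0 seconds.
    """
    start = samples[0]
    has_left = False
    for pos, t in zip(samples, times):
        if not has_left:
            if abs(pos - start) > tol:
                has_left = True            # the pendulum has left its position
        elif abs(pos - start) <= tol:
            return t - times[0]            # first return: one full period
    return None  # hasn't left and returned yet: keep monitoring

# Simulated pendulum with a 2-second period, sampled every millisecond:
times = [i * 0.001 for i in range(2101)]
samples = [math.cos(math.pi * t) for t in times]
print(measure_period(samples, times, tol=1e-3))  # ≈ 2.0 (within sampling error)

# A fallen pendulum that never moves has no measurable period yet:
print(measure_period([1.0] * 100, times[:100], tol=1e-3))  # None
```

The `None` return is the point: a stalled measurement is "undetermined, keep watching", never "period = 0.0 seconds".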

Comment by lsusr on Open Thread – Winter 2023/2024 · 2024-02-23T00:44:54.743Z · LW · GW

The secret is out. Ben's secret identity is Ben Pace.

Comment by lsusr on flowing like water; hard like stone · 2024-02-20T19:26:11.045Z · LW · GW

These are good guidelines.

Comment by lsusr on Open Thread – Winter 2023/2024 · 2024-02-16T02:41:31.282Z · LW · GW

I think the Dialogue feature is really good. I like using it, and I think it nudges community behavior in a good direction. Well done, Lightcone team.

Comment by lsusr on Lsusr's Rationality Dojo · 2024-02-15T23:52:38.747Z · LW · GW

How do you know that this approach doesn't miss entire categories of error?

Comment by lsusr on Lsusr's Rationality Dojo · 2024-02-15T22:16:51.301Z · LW · GW

The points you bring up are subtle and complex. I think a dialogue would be a better way to explore them rather than a comment thread. I've PM'd you.

Comment by lsusr on Lsusr's Rationality Dojo · 2024-02-15T16:44:52.890Z · LW · GW

I tried that too. It didn't work on my first ~1 hour attempt.

Comment by lsusr on Open Thread – Winter 2023/2024 · 2024-02-15T06:11:33.715Z · LW · GW

I want to express appreciation for a feature the Lightcone team implemented a long time ago: Blocking all posts tagged "AI Alignment" keeps this website usable for me.

Comment by lsusr on I played the AI box game as the Gatekeeper — and lost · 2024-02-13T23:52:38.779Z · LW · GW

I will bet at odds 10:1 (favorable to you) that I will not let the AI out

I too am confident enough as gatekeeper that I'm willing to offer similar odds. My minimum and maximum bets are my $10,000 USD vs your $1,000 USD.

Comment by lsusr on Lsusr's Rationality Dojo · 2024-02-13T22:40:20.843Z · LW · GW

I was wondering how long it would take for someone to ask these questions. I will paraphrase a little.

How does rhetorical aikido differ from well-established Socratic-style dialogue?

Socratic-style dialogue is a very broad umbrella. Pretty much any question-focused dialogue qualifies. A public schoolteacher asking a class of students "What do you think?" is both "Socratic" and ineffective at penetrating delusion.

The approach gestured at here is entirely within the domain of "Socratic"-style dialogue. However, it is far more specific. The techniques I practice and teach are laser-focused on improving rationality.

Here are a few examples of techniques I use and train, but which are not mandatory for a dialogue to be "Socratic":

  • If, while asking questions, you are asked "what do you believe" in return, you must state exactly what you believe.
  • You yield as much overt frame to the other person as possible. This is especially the case with definitions. In all but the most egregious situations, you let the other person define terms.
  • There are basic principles about how minds work that I'm trying to gesture at. One of my primary objectives in the foundational stages is to get students to understand how the human mind lazily [in the computational sense of the word "lazily"] evaluates beliefs and explanations. Socrates himself was likely aware of these mechanics but, in my experience, most teachers using Socratic methods are not aware of them.
  • I use specific conversational techniques to draw attention to specific errors. Which brings us to….

Is there any existing "taxonomy" of conversational methods, classified with respect to the circumstances in which they are most effective?

It depends on your goal. There are established techniques for selling things, seducing people, telling stories, telling jokes, negotiating, and getting your paper accepted into an academic journal. Truth in Comedy: The Manual of Improvisation is a peerless manual for improvisation. But it's not a rationalist handbook.

I have been assembling a list of mistakes and antidotes in my head, but I haven't written it down (yet?).

Here are a few quick examples.

  • The way to get an us-vs-them persuasion-oriented rambler to notice they're mistaken is via an Intellectual Turing Test. If they're a Red and assume you're a Blue, then you let them argue about why the Blues are wrong. After a while, you ask "What do you think I believe?" and you surprise them when they find out you're not a Blue. They realize they've wasted their reputation and both of your time. One of my favorite sessions with a student started with him arguing against the Blues. He was embarrassed to discover that I wasn't a Blue. Then he spent an hour arguing about why I'm wrong for being a Green. The second time I asked "What do you think I believe?" was extra satisfying, because I had already warned him of the mistake he was making.
  • If someone is making careless mistakes because they don't care about whether they're right or wrong, you ask if you can publish the dialogue on the Internet. The earnest people clean up their act. The disingenuous blowhards slink away.
  • If someone does a Gish gallop, you ask them to place all their chips on the most important claim.
  • If someone says "Some people argue X" you ask "Do you argue X?" If yes, then they now have skin in the game. If no, then you can dismiss the argument.

Comment by lsusr on Lsusr's Rationality Dojo · 2024-02-13T19:26:26.007Z · LW · GW

Thanks. ❤️

I stole that line from Eric Raymond who stole it from Zen.

Comment by lsusr on Childhood and Education Roundup #4 · 2024-01-30T18:45:15.705Z · LW · GW

I skipped two years of math in grade school. That saved me two years of class time, but the class was still too easy. That's because the speed of the class was the same. Smart kids don't just know more. They learn much faster.

For smart students to learn math at an appropriate speed, it's not enough to skip grades. They need an accelerated program.

Comment by lsusr on Childhood and Education Roundup #4 · 2024-01-30T18:39:10.368Z · LW · GW

Personal counterfactual: I was smarter than my peers and didn't skip any grades.

Result: I didn't physically play with or date the other students.

Exceptions: I did play football and did Boy Scouts, but those were both after-school activities. Moreover, neither was strictly segregated by age. Football was weight-based, and Boy Scouts lumped everyone from 11 to 17 into the same troop.

Putting students in the same math class based on age (ignoring intelligence) is like putting students on the same football team based on age (ignoring size).

Comment by lsusr on Luna Lovegood and the Chamber of Secrets - Part 1 · 2024-01-29T13:17:58.847Z · LW · GW

Different people have different preferences regarding translation. Personally, I'm okay with you translating anything I write here as long as you include a link back to my original here on Less Wrong.

I don't believe this website has any official English-only policy. However, English is the primary language used here. I recommend you just post it in Russian, but include a short note in English at the top explaining something like "This is a Russian translation of …. The original can be found at …."

Comment by lsusr on Universal Love Integration Test: Hitler · 2024-01-11T09:03:45.249Z · LW · GW

The video can be summarized by these two lines at timestamp 5:39.

Justin: How do you feel genuine love towards those that cause—you know—monumental suffering for others?

Lsusr: How can you not? They're human beings.

I use the word "love" but, as you noted, that word has many definitions. It would be less ambiguous if I were to say "compassion".

Comment by lsusr on Open Thread – Winter 2023/2024 · 2024-01-10T01:21:16.674Z · LW · GW

That's funny. When I read lc's username I think "that username looks similar to 'lsusr'" too.

Comment by lsusr on What is the next level of rationality? · 2023-12-13T04:25:59.763Z · LW · GW

I don't plan to read David Chapman's writings. His website is titled "Meta-rationality". When I'm teaching rationality, one of the first things I have to do is tell students, repeatedly, to stop being meta.

Empiricism is about reality. "Meta" is at least one step away from reality, and therefore at least one step farther from empiricism.

Comment by lsusr on The Mountain Troll · 2023-12-12T09:14:34.907Z · LW · GW

The first paragraph was supposed to be sarcastic satire.

Comment by lsusr on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-11T19:58:16.294Z · LW · GW

I meant side-comments. I never use them myself, but people often use them to comment on my posts. When they do, the comments tend to be constructive, especially compared to blockquotes.

Comment by lsusr on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-11T19:44:43.717Z · LW · GW

Another improvement I didn't notice until right now is the "respond to a part of the original post" feature. I feel like it nudges comments away from nitpicking.

Comment by lsusr on A Socratic dialogue with my student · 2023-12-11T18:30:22.053Z · LW · GW

TL;DR: I don't think it matters much.

This question is a rounding error compared to a much bigger problem in civic planning: car-centric cities are expensive and deliver a worse quality of life than traditional, walkable cities. They're not even natural. They only exist as a result of government intervention. For a more detailed dive into this subject, I recommend the Not Just Bikes YouTube channel.

Comment by lsusr on A Socratic dialogue with my student · 2023-12-11T03:52:20.876Z · LW · GW

I'm glad you enjoyed it.

The way I think about things, if the person I'm talking with is smiling, laughing, and generally having a good time, then that's what's important.

In a more recent video, I've tried out a toga instead.

Comment by lsusr on A Socratic dialogue with my student · 2023-12-11T03:19:59.260Z · LW · GW

Hm... your new student seems like an interesting person to talk to. Mind asking if he'd be interested in a chat with someone else his age?

I've sent you his Discord information via PM. (After obtaining permission, of course.)

Say with a straight face that student loans help the economy, and the power of social cognition will make it so.

XD

Yep. In a debate competition, you can win with arguments that are obviously untrue to anyone who knows what you're talking about, which is why I'm much less interested in traditional debate these days. (Not to discourage you, of course. The dark arts are useful.) When teaching Socratic dialogues, the first thing I have to teach is "Don't give arguments you don't actually believe in."

There are lots of tricks I use to get around this in real life (mostly betting face, since betting money only works for facts), but they're not allowed in a debate tournament.

Comment by lsusr on A Socratic dialogue with my student · 2023-12-11T03:03:27.949Z · LW · GW

Thank you for checking my numbers.

Comment by lsusr on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-07T05:02:08.334Z · LW · GW

Many readers appeared to dislike my example post. IIRC, prior to mentioning it here, its karma (excluding my auto hard upvote) was close to zero, despite it having about 40 votes.

Comment by lsusr on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-07T01:36:16.669Z · LW · GW

Which makes you feel like it's improving how you think?

I'm learning how to film, light and edit video. I'm learning how to speak better too, and getting a better understanding about how the media ecosystem works.

Making videos is harder than writing, which means I learn more from it.

Comment by lsusr on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-07T00:46:33.878Z · LW · GW

Here's part of a comment on one of my posts. The comment negatively impacted my desire to post deviant ideas on LessWrong.

Bullshit. If your desire to censor something is due to an assessment of how much harm it does, then it doesn't matter how open-minded you are. It's not a variable that goes into the calculation.

I happen to not care that much about the object-level question anymore (at least as it pertains to LessWrong), but on a meta level, this kind of argument should be beneath LessWrong. It's actively framing any concern for unrestricted speech as poorly motivated, making it more difficult to have the object-level discussion.

The comment doesn't represent a fringe opinion. It has +29 karma and +18 agreement.

Comment by lsusr on A Socratic dialogue with my student · 2023-12-06T19:27:39.563Z · LW · GW

Thanks for watching out! Your comment thoroughly passes any reasonable cost-benefit expected value calculation. That post is a useful, concise resource.

I actually did run into (what I think are) vitamin deficiency issues initially. I began taking a daily multivitamin (that includes vitamin B12, among other things), and the problems went away. I also drink a bit of milk that seems to be tolerably-sourced.

Comment by lsusr on How do you feel about LessWrong these days? [Open feedback thread] · 2023-12-06T06:50:35.152Z · LW · GW

First of all, I appreciate all the work the LessWrong / Lightcone team does for this website.

The Good

  • I was skeptical of the agree/disagree voting. After using it, I think it was a very good decision. Well done.
  • I haven't used the dialogue feature yet, but I have plans to try it out.
  • Everything just works. Spam is approximately zero. The garden is gardened so well I can take it for granted.
  • I love how much you guys experiment. I assume the reason you don't do more is just engineering capacity.

And yet…

Maybe there's a lot of boiling feelings out there about the site that never get voiced?

I tend to avoid giving negative feedback unless someone explicitly asks for it. So…here we go.

Over the past 1.5 years, I've been less excited about LessWrong than at any time since I discovered this website. I'm uncertain to what extent this is because I changed or because the community did. Probably a bit of both.

AI Alignment

The most obvious change is the rise of AI Alignment writings on LessWrong. There are two things that bother me about AI Alignment writing.

  • It's effectively unfalsifiable. Even betting markets don't really work when you're betting on the apocalypse.
  • It's highly political. AI Alignment became popular on LessWrong before AI Alignment became a mainstream political issue. I feel like LessWrong has a double-standard, where political writing is held to a high epistemic standard unless it's about AI.

I have hidden the "AI Alignment" tag from my homepage, but there is still a spillover effect. "Likes unfalsifiable political claims" is the opposite of the kind of community I want to be part of. I think adopting lc's POC || GTFO burden of proof would make AI Alignment dialogue productive, but I am pessimistic about that happening on a collective scale.

Weird ideas

When I write about weird ideas, I get three kinds of responses.

  • "Yes and" is great.
  • "I think you're wrong because X" is fine.
  • "We don't want you to say that" makes me feel unwelcome.

Over the years, I feel like I've gotten fewer "yes and" comments and more "we don't want you to say that" comments. This might be because my writing has changed, but I think what's really going on is that this happens to every community as it gets older. What was once radical eventually congeals into dogma.

I used to post my weird ideas immediately to LessWrong. Now I don't, because I feel like the reception on LessWrong would bum me out.[1]

I wonder what fraction of the weirdest writers here feel the same way. I can't remember the last time I've read something on LessWrong and thought to myself, "What a strange, daring, radical idea. It might even be true. I'm scared of what the implications might be." I miss that.[2]

I get the basic idea

I have learned a lot from reading and writing on LessWrong. Eight months ago, I had an experience where I internalized something very deep about rationality. I felt like I graduated from Level 1 to Level 2.

According to Eliezer Yudkowsky, his target audience for the Sequences was 2nd grade. He missed and ended up hitting college-level. They weren't supposed to be comprehensive. They were supposed to be Level 1. But after that, nobody wrote a Level 2. (The postrats don't count.) I've been trying―for years―to write Level 2, but I feel like a sequence of blog posts is a suboptimal format in 2023. Yudkowsky started writing the Sequences in 2006, when YouTube was still a startup. That leads me to…

100×

The other reason I've been posting less on LessWrong is that I feel like I'm hitting a soft ceiling with what I can accomplish here. I'm nowhere near my personal skill cap, of course. But there is a much larger potential audience (and therefore impact) if I shift from writing essays to filming YouTube videos. I can't think of anything LessWrong is doing wrong here. The editor already allows embedded YouTube links.


  1. Exception: I can usually elicit a positive response by writing fiction instead of nonfiction. But that takes a lot more work. ↩︎

  2. This might be entirely in my head, due to hedonic adaptation. ↩︎