Comments
Fixed. Thanks.
Fixed. Thanks.
Details aside, you nailed the ambiance.
In my imagination there's no statue in the center, just a pool of water. But I like the second row of statues. The acolyte in that picture works well too.
Did you use the keyword "Parthenon"? That's what the building is based on.
I do! Never seen that one before. It's interesting. I wish I had an easy way to confirm its accuracy, but the more I think about it, the more of my real life experience I connect it to.
The recursion example rings especially true. It's not just in writing that the ability to do recursion seems to have a hard cutoff.
That greentext helps me understand other people so much better. I take the ability to distinguish ethical anachronisms for granted, and hadn't realized how difficult it must be for other people.
Not much. I'm using it as a proxy measurement for general knowledge.
I was thinking about that scene when I wrote this post.
That part seems reasonable.
Wow. "Level 2" includes things like "the respondent may have to make use of a novel online form".
The thing I'm trying to do is calibrate my model of the distribution of human intelligence. The actual distribution is way lower than my immediate environment makes it appear. Here's another post I wrote which should provide some context on what I mean when I write about "human intelligence". The basic idea is that things like "can fix a carburetor" and "understands genetics" are correlated, not anti-correlated.
Here's the exact title and subtitle.
Title: New Poll Gauges Americans' General Knowledge Levels
Subtitle: Four-fifths know earth revolves around sun
Why are they even wronger?
That's a good point. Human intuitions are geocentric, so the number of people guessing on the heliocentrism question is probably less than 18%. From an expected value perspective, we can treat 18% as guessing, whereas from a default geocentric perspective we can treat 0% as guessing.
But it goes both ways. For questions matching human intuition, if x% guessed wrong, then we should assume that at least another x% got it correct by guessing.
This is where the word "belief" gets fuzzy. I think what's actually going on with the laser question is that people read "Lasers work by focusing <mumble>", which does match the truth. Due to bad heuristics, it's possible for more than 50% of a survey population to guess wrong on a true-or-false question, which means the things they guess right need to be adjusted downward, or else we get nonsensical results.
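The guessing adjustment discussed in this thread can be sketched as a quick calculation. This is my own illustration, assuming non-knowers guess 50/50 on a true-or-false question; the function name and the specific input rates are hypothetical, chosen to match the 18%-wrong figure mentioned above.

```python
def knowledge_fraction(observed_correct: float) -> float:
    """Infer the fraction who actually know the answer to a
    true-or-false question, assuming everyone else guesses 50/50.
    observed = k + (1 - k) / 2  =>  k = 2 * observed - 1
    """
    return 2 * observed_correct - 1

# If 82% answered the heliocentrism question correctly (18% wrong),
# the naive guessing correction says only 64% actually knew it.
print(round(knowledge_fraction(0.82), 2))  # 0.64

# The laser-question problem: when bad heuristics push the observed
# correct rate below 50%, the inferred "knowledge" goes negative,
# which is the nonsensical result the adjustment has to handle.
print(round(knowledge_fraction(0.45), 2))  # -0.1
```

A negative output is the model telling you its "everyone guesses 50/50" assumption is wrong for that question, not that negative knowledge exists.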
Yup. I feel similarly about "human values". The values of specific people are great. Humanity's declared preferences are contradictory and incoherent. Humanity's revealed preferences are awful.
Those are great links! They help me understand Apple's business model so much better.
This is so outside my personal experience. The most non-technical person at my company uses spreadsheets, which puts him well into Level 3.
I like the bit about the security of checks.
I know all of those words. I'm not super-comfortable with "chicanery", but it didn't cause any issues with my reading. Please click [agree] on my comment if you know all three words and [disagree] if there is at least one you don't know.
There's something about the way you write introductions that reminds me of good YouTube videos. It's a combination of easy-to-understand illustrations, simple words, and starting with an interesting question.
I like these kinds of posts.
To measure the period of a pendulum, the pendulum must leave a position and then return to it. The pendulum is not leaving its current position. Therefore it is incorrect to conclude that the pendulum's period is 0.0 seconds.
The students should continue monitoring the pendulum until it leaves its position and then returns to it.
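The measurement rule above can be sketched in code. This is a toy illustration with made-up sine-wave sample data; the function `measure_period` and its details are my own hypothetical example, not from the post.

```python
import math

def measure_period(times, positions, reference):
    """Return the pendulum's period: the time between successive
    passes through `reference` while moving in the same direction.
    Returns None if the pendulum never leaves and returns.
    """
    crossings = []
    for i in range(1, len(positions)):
        # Count only upward crossings of the reference position.
        if positions[i - 1] < reference <= positions[i]:
            crossings.append(times[i])
    if len(crossings) < 2:
        return None  # never left and came back: no period measured
    return crossings[1] - crossings[0]

# Simulated pendulum with a 2-second period, sampled every 10 ms.
ts = [i * 0.01 for i in range(500)]
xs = [math.sin(2 * math.pi * t / 2.0) for t in ts]
print(round(measure_period(ts, xs, 0.0), 3))  # 2.0

# A stationary pendulum never yields a period at all.
print(measure_period([0.0, 1.0, 2.0], [5.0, 5.0, 5.0], 5.0))  # None
```

The `None` case is the point of the comment: a reading that never leaves its position is an unfinished measurement, not a period of 0.0 seconds.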
The secret is out. Ben's secret identity is Ben Pace.
These are good guidelines.
I think the Dialogue feature is really good. I like using it, and I think it nudges community behavior in a good direction. Well done, Lightcone team.
How do you know that this approach doesn't miss entire categories of error?
The points you bring up are subtle and complex. I think a dialogue would be a better way to explore them rather than a comment thread. I've PM'd you.
I tried that too. It didn't work on my first ~1 hour attempt.
I want to express appreciation for a feature the Lightcone team implemented a long time ago: Blocking all posts tagged "AI Alignment" keeps this website usable for me.
I will bet at odds 10:1 (favorable to you) that I will not let the AI out.
I too am confident enough as gatekeeper that I'm willing to offer similar odds. My minimum and maximum bets are my $10,000 USD vs your $1,000 USD.
I was wondering how long it would take for someone to ask these questions. I will paraphrase a little.
How does rhetorical aikido differ from well-established Socratic-style dialogue?
Socratic-style dialogue is a very broad umbrella. Pretty much any question-focused dialogue qualifies. A public schoolteacher asking a class of students "What do you think?" is both "Socratic" and ineffective at penetrating delusion.
The approach gestured at here is entirely within the domain of "Socratic"-style dialogue. However, it is far more specific. The techniques I practice and teach are laser-focused on improving rationality.
Here are a few examples of techniques I use and train, but which are not mandatory for a dialogue to be "Socratic":
- If, while asking questions, you are asked "what do you believe" in return, you must state exactly what you believe.
- You yield as much overt frame to the other person as possible. This is especially the case with definitions. In all but the most egregious situations, you let the other person define terms.
- There are basic principles about how minds work that I'm trying to gesture at. One of my primary objectives in the foundational stages is to get students to understand how the human mind lazily [in the computational sense of the word "lazily"] evaluates beliefs and explanations. Socrates himself was likely aware of these mechanics but, in my experience, most teachers using Socratic methods are not aware of them.
- I use specific conversational techniques to draw attention to specific errors. Which brings us to….
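The computational sense of "lazily" mentioned above can be illustrated with a toy sketch. The class and its names are my own analogy, not a claim about the techniques themselves: a lazily evaluated value is not computed until something demands it, much like a belief that sits unexamined until a question forces the evaluation.

```python
class LazyBelief:
    """A value that is only evaluated when first queried, then
    cached -- a toy analogy for beliefs that go unexamined until
    a question forces evaluation."""

    def __init__(self, evaluate):
        self._evaluate = evaluate  # thunk: runs only on demand
        self._value = None
        self.evaluated = False

    def query(self):
        if not self.evaluated:  # the first query forces evaluation
            self._value = self._evaluate()
            self.evaluated = True
        return self._value

belief = LazyBelief(lambda: "heliocentric")
print(belief.evaluated)  # False: nothing has demanded the value yet
print(belief.query())    # heliocentric
print(belief.evaluated)  # True: evaluation happened on first demand
```

In the analogy, a well-aimed question plays the role of `query()`: it forces an evaluation that would otherwise never run.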
Is there any existing "taxonomy" of conversational methods, classified with respect to the circumstances in which they are most effective?
It depends on your goal. There are established techniques for selling things, seducing people, telling stories, telling jokes, negotiating, and getting your paper accepted into an academic journal. Truth in Comedy: The Manual of Improvisation is a peerless manual for improvisation. But it's not a rationalist handbook.
I have been assembling a list of mistakes and antidotes in my head, but I haven't written it down (yet?).
Here are a few quick examples.
- The way to get an us-vs-them persuasion-oriented rambler to notice they're mistaken is via an Intellectual Turing Test. If they're a Red and assume you're a Blue, then you let them argue about why the Blues are wrong. After a while, you ask "What do you think I believe?" and surprise them when they find out you're not a Blue. They realize they've wasted their reputation and both of your time. One of my favorite sessions with a student started with him arguing against the Blues. He was embarrassed to discover that I wasn't a Blue. Then he spent an hour arguing about why I'm wrong for being a Green. The second time I asked "What do you think I believe?" was extra satisfying, because I had already warned him of the mistake he was making.
- If someone is making careless mistakes because they don't care about whether they're right or wrong, you ask if you can publish the dialogue on the Internet. The earnest people clean up their act. The disingenuous blowhards slink away.
- If someone does a Gish gallop, you ask them to place all their chips on the most important claim.
- If someone says "Some people argue [X]," you ask "Do you argue [X]?" If yes, then they now have skin in the game. If no, then you can dismiss the argument.
Thanks. ❤️
I stole that line from Eric Raymond who stole it from Zen.
I skipped two years of math in grade school. That saved me two years of class time, but the class was still too easy. That's because the speed of the class was the same. Smart kids don't just know more. They learn much faster.
For smart students to learn math at an appropriate speed, it's not enough to skip grades. They need an accelerated program.
Personal counterfactual: I was smarter than my peers and didn't skip any grades.
Result: I didn't physically play with or date the other students.
Exceptions: I did play football and did Boy Scouts, but those were both after-school activities. Moreover, neither of them were strictly segregated by age. Football was weight-based, and Boy Scouts lumped everyone from 11 to 17 into the same troop.
Putting students in the same math class based on age (ignoring intelligence) is like putting students on the same football team based on age (ignoring size).
Different people have different preferences regarding translation. Personally, I'm okay with you translating anything I write here as long as you include a link back to my original here on Less Wrong.
I don't believe this website has any official English-only policy. However, English is the primary language used here. I recommend you just post it in Russian, but include a short note in English at the top explaining something like "This is a Russian translation of …. The original can be found at …."
The video can be summarized by these two lines at timestamp 5:39.
Justin: How do you feel genuine love towards those that cause—you know—monumental suffering for others?
Lsusr: How can you not? They're human beings.
I use the word "love" but, as you noted, that word has many definitions. It would be less ambiguous if I were to say "compassion".
That's funny. When I read lc's username I think "that username looks similar to 'lsusr'" too.
I don't plan to read David Chapman's writings. His website is titled "Meta-rationality". When I'm teaching rationality, one of the first things I have to do is tell students, repeatedly, to stop being meta.
Empiricism is about reality. "Meta" is at least one step away from reality, and therefore at least one step farther from empiricism.
The first paragraph was supposed to be sarcastic satire.
I meant side-comments. I never use them myself, but people often use them to comment on my posts. When they do, the comments tend to be constructive, especially compared to blockquotes.
Another improvement I didn't notice until right now is the "respond to a part of the original post" feature. I feel like it nudges comments away from nitpicking.
TL;DR: I don't think it matters much.
This question is a rounding error compared to a much bigger problem in civic planning: car-centric cities are expensive and deliver worse quality of life than traditional, walkable cities. They're not even natural; they only exist as a result of government intervention. For a more detailed dive into this subject, I recommend the Not Just Bikes YouTube channel.
I'm glad you enjoyed it.
The way I think about things, if the person I'm talking with is smiling, laughing, and generally having a good time, then that's what's important.
In a more recent video, I've tried out a toga instead.
Hm... your new student seems like an interesting person to talk to. Mind asking if he'd be interested in a chat with someone else his age?
I've sent you his Discord information via PM. (After obtaining permission, of course.)
Say with a straight face that student loans help the economy, and the power of social cognition will make it so.
XD
Yep. In a debate competition, you can win with arguments that are obviously untrue to anyone who knows what you're talking about, which is why I'm much less interested in traditional debate these days. (Not to discourage you, of course. The dark arts are useful.) When teaching Socratic dialogues, the first thing I have to teach is "Don't give arguments you don't actually believe in."
There are lots of tricks I use to get around this in real life (mostly betting face, since betting money only works for facts), but they're not allowed in a debate tournament.
Thank you for checking my numbers.
Many readers appeared to dislike my example post. IIRC, prior to mentioning it here, its karma (excluding my automatic hard upvote) was close to zero, despite it having about 40 votes.
Which makes you feel like it's improving how you think?
I'm learning how to film, light and edit video. I'm learning how to speak better too, and getting a better understanding about how the media ecosystem works.
Making videos is harder than writing, which means I learn more from it.
Here's part of a comment on one of my posts. The comment negatively impacted my desire to post deviant ideas on LessWrong.
Bullshit. If your desire to censor something is due to an assessment of how much harm it does, then it doesn't matter how open-minded you are. It's not a variable that goes into the calculation.
I happen to not care that much about the object-level question anymore (at least as it pertains to LessWrong), but on a meta level, this kind of argument should be beneath LessWrong. It's actively framing any concern for unrestricted speech as poorly motivated, making it more difficult to have the object-level discussion.
The comment doesn't represent a fringe opinion. It has +29 karma and +18 agreement.
Thanks for watching out! Your comment thoroughly passes any reasonable cost-benefit expected value calculation. That post is a useful, concise resource.
I actually did run into (what I think are) vitamin deficiency issues initially. I began taking a daily multivitamin (that includes vitamin B12, among other things), and the problems went away. I also drink a bit of milk that seems to be tolerably-sourced.
First of all, I appreciate all the work the LessWrong / Lightcone team does for this website.
The Good
- I was skeptical of the agree/disagree voting. After using it, I think it was a very good decision. Well done.
- I haven't used the dialogue feature yet, but I have plans to try it out.
- Everything just works. Spam is approximately zero. The garden is gardened so well I can take it for granted.
- I love how much you guys experiment. I assume the reason you don't do more is just engineering capacity.
And yet…
Maybe there's a lot of boiling feelings out there about the site that never get voiced?
I tend to avoid giving negative feedback unless someone explicitly asks for it. So…here we go.
Over the past 1.5 years, I've been less excited about LessWrong than at any time since I discovered this website. I'm uncertain to what extent this is because I changed or because the community did. Probably a bit of both.
AI Alignment
The most obvious change is the rise of AI Alignment writings on LessWrong. There are two things that bother me about AI Alignment writing.
- It's effectively unfalsifiable. Even betting markets don't really work when you're betting on the apocalypse.
- It's highly political. AI Alignment became popular on LessWrong before AI Alignment became a mainstream political issue. I feel like LessWrong has a double-standard, where political writing is held to a high epistemic standard unless it's about AI.
I have hidden the "AI Alignment" tag from my homepage, but there is still a spillover effect. "Likes unfalsifiable political claims" is the opposite of the kind of community I want to be part of. I think adopting lc's POC || GTFO burden of proof would make AI Alignment dialogue productive, but I am pessimistic about that happening on a collective scale.
Weird ideas
When I write about weird ideas, I get three kinds of responses.
- "Yes and" is great.
- "I think you're wrong because [reason]" is fine.
- "We don't want you to say that" makes me feel unwelcome.
Over the years, I feel like I've gotten fewer "yes and" comments and more "we don't want you to say that" comments. This might be because my writing has changed, but I think what's really going on is that this happens to every community as it gets older. What was once radical eventually congeals into dogma.
I used to post my weird ideas immediately to LessWrong. Now I don't, because I feel like the reception on LessWrong would bum me out.[1]
I wonder what fraction of the weirdest writers here feel the same way. I can't remember the last time I've read something on LessWrong and thought to myself, "What a strange, daring, radical idea. It might even be true. I'm scared of what the implications might be." I miss that.[2]
I get the basic idea
I have learned a lot from reading and writing on LessWrong. Eight months ago, I had an experience where I internalized something very deep about rationality. I felt like I graduated from Level 1 to Level 2.
According to Eliezer Yudkowsky, his target audience for the Sequences was 2nd grade. He missed and ended up hitting college-level. They weren't supposed to be comprehensive. They were supposed to be Level 1. But after that, nobody wrote a Level 2. (The postrats don't count.) I've been trying―for years―to write Level 2, but I feel like a sequence of blog posts is a suboptimal format in 2023. Yudkowsky started writing the Sequences in 2006, when YouTube was still a startup. That leads me to…
100×
The other reason I've been posting less on LessWrong is that I feel like I'm hitting a soft ceiling with what I can accomplish here. I'm nowhere near my personal skill cap, of course. But there would be a much larger potential audience (and therefore impact) if I shifted from writing essays to filming YouTube videos. I can't think of anything LessWrong is doing wrong here. The editor already allows embedded YouTube links.