Posts
Comments
Thank you for taking it!
. . . yep, it should be. I think I just fixed it, but I can't figure out how that got like that in the first place. Thanks!
"Hourly" doesn't count while asleep. If you use it for work, weekends don't count against "Daily." Etc.
Thank you, that's appreciated!
Hrm. My definition of "anti-agathic" is something that prolongs life, so it doesn't obviously exclude a brain transplant into a younger body.
I'm somewhat opposed to tweaking the wording on long-standing parts of the census, since that makes it harder to compare to earlier years. If we want to go this route, I'd rather write a new question and ask both some year so we can compare them.
Not easily.
In order to give people a copy of their responses with google forms, I would need to collect emails. It even becomes a required question on the form. Collecting emails changes the tenor of the survey quite a bit I think, even if I invited people to enter nonsense for an email if they didn't want to give that information.
You're welcome! Thank you for taking it.
Noted. I'm already expecting marginal values of IQ to be weird since IQ isn't a linear scale in the first place.
I admit I'm testing a chain of conjectures with those questions and probably will only get weak evidence for my actual question. The feedback is really appreciated!
Argh, I hate tweaking historical questions. This seems equivalent so let's try it.
It wound up phrased that way trying to make a minimal change from the historical version of the question, where the question and the title were at odds.
Whelp, that's a dumb error on my part. Fixed and thank you.
Done!
I imagine two people are talking and one says "oh, I think you should read this essay, here's the link!" and the second asks "oh, what's it about? Any good quotes?"
If the first doesn't have an answer to that, then it feels like a weird recommendation? I guess that's the second stage, where people review them.
We're looking for speakers for the Boston Solstice. This year Solstice is December 28th, 7pm. Being a speaker at Solstice is pretty straightforward; public speaking skill is useful, but you can read off a script, so don't feel like you need to memorize anything.
If you're at all interested, reach out. We have speeches ranging from very short and silly to a couple of pages and somber.
Additionally, if you feel like you have an original speech on the themes of persistence or camaraderie, especially if you feel you have a good speech about not giving up even when it's hard, then please feel free to send a draft! The overall arc is set at this point but you might have something better for a given slot.
Tentative support for only auto-importing the first few paragraphs; if not that, then start by auto-importing the whole post and wait until anybody complains. My guess (~65%?) is that somebody will. I'm against having an LLM extract important highlights; if highlights are the way to go, I think whoever nominated the piece for the review can find them?
I'd love it if I could use LessWrong as a central place to read rationalsphere content, and since more and more rationalist-sphere writers are writing elsewhere, this seems worth trying.
London, UK
December 19th, 7pm.
Event link: https://partiful.com/e/0ML9Ec1F8SCWh6TszHgt
(I'm not running it, but was asked by one of the organizers to put this here!)
I don't know who you met at LessOnline, but there are a few people looking for roommates.
There's a Discord server for attendees with a Finding Roommates channel. The way I envision this working is people show up in Discord, introduce themselves, and ask if anyone wants to room together. Once people have grouped up, one of them rents the room and the other reimburses them.
This involves a bit more lateral trust than last year where people indicated how many roommates they were comfortable with, paid me for their share of the room, and I sorted people together. On the other hand it allows for a bit more choice and offloads a bit of setup from me to the attendees, which is increasingly useful as megameetup scales up.
I think NYC is the only solstice with a megameetup tradition. Does anyone know of a second?
Boston
December 28th, 6:30pm.
Connexion, 149 Broadway, Somerville
Facebook: https://www.facebook.com/events/1217047559386391
Not the most important response to this essay but "Leave the hand-wringing to those with all their fingers" made me laugh. Thanks for the smile.
I can understand that feeling. I currently disagree with it, but I think I understand it.
Lots of people seem to do something like this on intuition. Some people don't. Take the “why do you care about something boring like horses?” example. What do you say to someone who makes that kind of mistake?
"Did you mean to make them upset?" "No."
"Did you think about how they would react to you calling their interest boring?" "No. I didn't mean to call it boring."
"If you think about it, do you understand how they interpreted what you said as calling their interest boring?" "Yeah, that makes sense."
"Did you think about how they would interpret what you said before you said it?" "Not really."
"Can you think about how someone will interpret what you say before you say it next time?" "Yeah, I can do that."
I say please and thank you when asking for a dish at the table. I worked out what kinds of raised voice parse as anger, and don't use them unless I'm actually angry; even then, I try to say calmly that something makes me angry rather than yelling at people. There are countless small touches in how we phrase things and how we hold ourselves that help everyone feel better about social interactions, and some people genuinely do not do those things automatically. I think it's better to do them by explicitly thinking about it rather than not do them at all.
You can overdo this, leading to complicated webs of half-truths and things needing to be said just right, and I think that can be bad. You can also overdo this and leave yourself an anxious wreck, overindexed on whether anything you say or do will make people upset with you. But for people who don't do the thing, and who are regularly running into people getting mad at them? Yeah, I think it's worth taking some time and energy to practice this.
Huh! I view it as a bit overbroad since "what do I think I know?" is sometimes about things like "is the bloke across the poker table from me holding an ace?" but I think most of my "what do I think I know?" internal questions are about what's happened in the past. "Does sugar dissolve in water?" often breaks down into "the last time I tried it, did sugar dissolve in water?" or "have people told me that sugar dissolves in water and were they usually right about things like that?"
Still, the past/present/future frame isn't the key part of the third fundamental question. Best of luck and skill with the new technique!
Yep, compilers and booting are good examples. Making a compiler from scratch is a pain in the rear; making a second compiler when you already have the first is easier.
For a concrete example: I once screwed up my operating system and got it into a state where it wouldn't boot. Downloading a fresh copy of an OS is pretty straightforward, if you have a working copy of an operating system already, but I didn't. In this case, I wound up asking a friend to download a copy and then used that to get my machine working again.
I'm not sure I understand your point, but I think you're pointing out that these aren't always booleans?
There are cases where, if you're doing well, it's easier to do even better. Money is fairly continuous, but so is friendship. (You might have acquaintances even if you don't have close friends.) The central example of an Anvil here is boolean though; if you have enough juice in your car battery to start the car you're fine and can charge it up more, but if you don't have enough juice then you need someone to jump you.
Darn. Seems like this particular bit of jargon is already taken. I haven't commonly heard this use of Anvil Problem, hence thinking the phrase was open, but oh well.
The "Anvil" part is pretty core to my mnemonic for it. Anyone have thoughts on whether something like Anvil Issues or Anvil Blockers would be workable?
Yep, and to spell out the general case: there are techniques you shouldn't use unless you're confident you can use them correctly, because they do not degrade gracefully. Often these techniques aren't taught unless the instructor is reasonably sure the student has the other pieces to use it well.
As a note on pedagogy, I usually prefer it when the teacher says something like "This is the basic way to do it, and we're going to practice this first. If you're unsure, do it this way. We might get into variations later."
Per request, I just added "LLM Frequency" and "LLM Use case" to the survey, under LessWrong Team Questions. I'll probably tweak the options and might move it to Bonus Questions later when I can sit down and take some time to think. Suggestions on the wording are welcome!
On it!
I just added "LLM Frequency" and "LLM Use case" to the survey, under LessWrong Team Questions. I'll probably tweak the options and might move it to Bonus Questions later. Suggestions welcome!
So, I think Fight 1 is funny, but it is kind of high context, involving reading two somewhat long stories. (Planecrash in particular is past a million words long!) I'd considered "Who would win in a fight, Eliezer Yudkowsky or Scott Alexander? ["Eliezer", "Scott", "Wait, what's this? It's Aella with a steel chair!"]" and "Who is the rightful caliph? ["Eliezer Yudkowsky","Scott Alexander", "Wait, what's this? It's Robin Hanson with a steel chair!"]" but feel a bit weird about including real people.
I think they're just as funny though, and far more people will understand them, so maybe I should switch. Anyone have convincing thoughts here?
I have no opinion on the difference and chatgpt agrees with you, so sure, changed to "eighty percent of the benefit."
Thanks for the year catch.
I could check their expected price of bitcoin, but that feels like more weight than I want to put on bitcoin; it already overlaps a little with the S&P question. What I'd like to replace it with is something that 1. will have a definitive answer by next summer, 2. people have enough context to understand the question, and 3. isn't as obvious.
The questions are not checking for social skills. I am not sure how I'd do that on an online survey that's going to be self-reported, and if you have thoughts about that I'm kind of curious? What percentage of the survey being about social skills would be sufficient? (I'm heavily into meetups and in-person gatherings for LessWrong events, so I might be one of the more receptive audiences for this line of argument!)
I could, but what if someone genuinely thinks it's that high number? Someone put 1,000,000 on the 2022 version of that question.
Expanding a little:
I think something like speaking the truth even when you're afraid to is a skill. I've noticed apprehension holds me back sometimes, both consciously and in a sneaky quiet voice in the back of my head asking if I'm sure, why not check again, surely this isn't the fight I want to pick. When I imagine an idealized rationalist, they don't keep quiet because of nagging anxiety about what might happen and that feels important.
I don't know if it's like, one of the top ten core rationalist skills I want to ask about, and I'm not at all sure this is the right phrasing.
Many worlds and the Simulation question are probably not going to change our anticipated experiences. I do think we can put probabilities on things we don't expect to change our experiences- for instance, if you flip a coin, look at it, and commit to never telling me whether it came up heads, I still think the coin has a 50% chance of coming up heads. That's less ontologically weird though.
Those two are longstanding census standard questions, and I'm probably going to keep them because I like being able to do comparisons over time. Many Worlds in particular is interesting to me as an artifact of the Sequences.
Hrm.
So, if I want that information I think I could get close by looking at everyone who answered the question before and the question after, but didn't answer Singularity.
I'll change the text to say they should enter something that isn't a number, like "N/A", and then filter out anything that isn't a number when I'm doing math on it.
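For concreteness, a minimal sketch of that filtering step, assuming the responses export to a CSV; the file and column names here are hypothetical, not the survey's actual labels:

```python
import pandas as pd

# Hypothetical file and column names, just to illustrate the filtering step.
responses = pd.read_csv("census_responses.csv")

# Coerce the Singularity-year column to numeric: entries like "N/A" become NaN,
# and dropna() removes them before any math gets done on the column.
years = pd.to_numeric(responses["singularity_year"], errors="coerce").dropna()

print(years.median())
```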
Yeah, the skills section is very much a draft that I'm hoping people will have good ideas for.
I've changed the wording to "speaking the truth even against social pressure" but I don't think this is good, just a little better.
I adapted the version from 2022 and added it to Bonus Political.
"Voting
Did you vote in your country's last major national election? If you were ineligible to vote for some reason, the answer is No. [Yes, No, My country does not have elections]"
Should be fixed now. Thanks!
. . . This is going to mess up comparisons to previous years, I can already tell.
Wait. No. That market is for a release before 2025, not by the end of 2025.
. . . I was pretty sure there was a market for end of 2025 and now I can't find it. Hrm.
Yeah, this would either need separate options for many countries or one schema that works across many countries.
Asking whether they voted or not in a national election is straightforward enough, and there have been past questions like that.
"Voting Did you vote in your country's last major national election?"
Hrm. I parse this as part of an example: if you are partnered and monogamous (and faithful!) then you should put down 1. If you're polyamorous, but happen to have one partner, you would also put 1 for this question. There's a Relationship Styles question that gets at what people prefer.
Do you think this example will confuse people?
I'm being a little bit sneaky here, and trying to compare the LessWrong community to Manifold. Here's the Manifold Market I'm trying to track.
I don't want to add multiple paragraphs to the question text, but there's probably a way to make this a little clearer.
I'm planning to run the unofficial LessWrong Community Census again this year. There's a post with a link to the draft and a quick overview of what I'm aiming for here, and I'd appreciate comments and feedback. In particular, if you
- Have some political questions you want to get into detail with or
- Have experience or opinions on the foundational skills of rationality and how to test them on a survey
then I want to hear from you. I care a lot about rationality skills but don't know how to evaluate them in this format; I have some clever ideas if I can find a signal to sift out of the survey. I don't care about politics, but lots of people do and I don't want to spoil their fun.
You can also propose other questions! I like playing with survey data :)
I've been thinking that EA should try to elect a president, someone who is empowered but also accountable to the general people in the movement, a Schelling person to be the face of EA.
Counterargument: I think there are enough different streams of EA that this would not be especially helpful.
There exists a president of GiveWell. There exists a president of 80k Hours. There exists a president of Open Philanthropy. Those three organizations seem pretty close to each other, and there's a lot of others further afield. I think there would be a lot of debating, some of it acrimonious, about who counted as 'in the movement' enough to vote on a president of EA, and it would be easy to wind up with a president that nobody with a big mailing list or a pile of money actually had to listen to.
This is a crux. I acknowledge I probably share more values with a random EA than a random university student, but I don't think that's actually saying that much, and I believe there are a lot of massively impactful differences in culture and values.
My best guess is something like a third of rationalists are also EAs, at least going by identification. (I'm being lazy for the moment and not cross-checking "Identifies as Rationalist" against "Identifies as EA", but I can if you want me to, and I'm like 85% sure the less-lazy check will bear that out.) My educated but irresponsible guess is something like 10% of EAs are rationalists. Last time I did a straw poll at an ACX meetup, more than half the people attending were also EAs. Whatever the differences are, it's not stopping a substantial overlap in membership, and I don't think that's just at the level of random members but includes a lot of the notable members.
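If I do get around to the less-lazy check, it would look something like this; the column names and answer values are illustrative, not the census export's actual labels:

```python
import pandas as pd

# Illustrative column names and answer values; the real export uses the
# survey's own question labels and options.
responses = pd.read_csv("census_responses.csv")

# Cross-tabulate the two self-identification questions to see the overlap.
print(pd.crosstab(responses["identifies_as_rationalist"],
                  responses["identifies_as_ea"]))

# Share of self-identified rationalists who also identify as EA.
rationalists = responses[responses["identifies_as_rationalist"] == "Yes"]
print((rationalists["identifies_as_ea"] == "Yes").mean())
```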
I'd be pretty open to a definition of 'rationalist' that was about more than self-identification, but to my knowledge we don't have a workable definition better than that. It's plausible to me that the differences matter as you lean on them a lot, but I think it's more likely the two groups are aligned for most purposes.
From my observations it's fairly common for post-rationalists to go to rationalist events and vice-versa, so there's at least engagement on the level of waving hello in the lunchroom. There's enough overlap in identification that some people in both categories read each other's blogs, and the essays that wind up at the intersection of both interests will have some back and forth in the comments. Are you looking for something more substantial than that?
I can't think of any reverting rationalists off the top of my head, though they might well be out there.
I think the best Less Wrong Census for mental illness would be 2016, though 2012 did ask about autism. You're probably going to have better luck using the 2024 SSC/ACX survey data, as it's more recent and bigger.
Have fun!
I have heard of them. The first time was when someone at LessWrong Community Weekend used their cards as part of an exercise, the second time when they came up on Clearer Thinking.
School of Thought is at least adjacent via Clearer Thinking. I think your question is a little under-defined. Are you asking if the people running it identify as rationalists?
I'm not in AI Safety so if someone who is in the field has better suggestions, assume they're right and I'm wrong. Still, I hang out adjacent to AI Safety a lot. The best, easily accessible on-ramp I'm aware of is AiSafety.Quest. The best program I'm aware of is probably AI Safety Fundamentals, though I think they might get more applications than they can take.
Best of luck and skill, and I'm glad to have people working on the problem.
Hello, and welcome! I'm also a habitual roleplayer (mostly tabletop RPGs for me, with the occasional LARP) and I'm a big fan of Alexander and Yudkowsky's fiction. Does any particular piece of fiction stand out as your favourite? It isn't one of theirs, but I love The Cambist and Lord Iron.
I've been using Zvi's articles on AI to try and keep track of what's going on, though I tend to skim them unless something catches my eye. I'm not sure if that's what you're looking for in terms of resources.