Posts

[Site Meta] Feature Update: More Tags! (Experimental) 2020-04-22T02:12:00.518Z · score: 59 (16 votes)
LW Team Updates: Pandemic Edition (March 2020) 2020-03-26T23:55:02.238Z · score: 37 (11 votes)
The Danes wish to know more about the coronavirus 2020-03-14T16:39:46.697Z · score: 13 (5 votes)
Conversational Cultures: Combat vs Nurture (V2) 2020-01-08T20:23:53.772Z · score: 136 (49 votes)
Conversation about whether LW Moderators should express individual opinions about mod policy - 2019/12/22 2019-12-23T03:46:31.060Z · score: 18 (5 votes)
LW Team Updates - December 2019 2019-12-05T20:40:39.473Z · score: 41 (14 votes)
[LW Team] Request for User-Interviews about Tagging/Search/Wikis 2019-11-14T22:16:35.840Z · score: 14 (4 votes)
LW Team Updates - November 2019 (Subscriptions & More) 2019-11-08T02:39:29.498Z · score: 30 (13 votes)
[Team Update] Why we spent Q3 optimizing for karma 2019-11-07T23:39:55.274Z · score: 66 (20 votes)
[Site Update] Subscriptions, Bookmarks, & Pingbacks 2019-10-29T04:32:31.109Z · score: 95 (24 votes)
Open & Welcome Thread - October 2019 2019-10-01T23:10:57.782Z · score: 10 (3 votes)
LW Team Updates - October 2019 2019-10-01T23:08:18.283Z · score: 32 (11 votes)
Novum Organum: Introduction 2019-09-19T22:34:21.223Z · score: 81 (24 votes)
Open & Welcome Thread - September 2019 2019-09-03T02:53:21.771Z · score: 10 (4 votes)
LW Team Updates - September 2019 2019-08-29T22:12:55.747Z · score: 41 (13 votes)
[Resource Request] What's the sequence post which explains why you should continue to believe things about a particle that's moving beyond your ability to observe it? 2019-08-04T22:31:37.063Z · score: 7 (1 votes)
Open & Welcome Thread - August 2019 2019-08-02T23:56:26.343Z · score: 13 (5 votes)
Do you fear the rock or the hard place? 2019-07-20T22:01:48.392Z · score: 43 (14 votes)
Why did we wait so long for the bicycle? 2019-07-17T18:45:09.706Z · score: 49 (19 votes)
Causal Reality vs Social Reality 2019-06-24T23:50:19.079Z · score: 40 (31 votes)
LW2.0: Technology Platform for Intellectual Progress 2019-06-19T20:25:20.228Z · score: 32 (8 votes)
LW2.0: Community, Culture, and Intellectual Progress 2019-06-19T20:25:08.682Z · score: 28 (5 votes)
Discussion Thread: The AI Does Not Hate You by Tom Chivers 2019-06-17T23:43:00.297Z · score: 36 (10 votes)
Welcome to LessWrong! 2019-06-14T19:42:26.128Z · score: 148 (93 votes)
LessWrong FAQ 2019-06-14T19:03:58.782Z · score: 69 (27 votes)
An attempt to list out my core values and virtues 2019-06-09T20:02:43.122Z · score: 26 (6 votes)
Feedback Requested! Draft of a New About/Welcome Page for LessWrong 2019-06-01T00:44:58.977Z · score: 30 (5 votes)
A Brief History of LessWrong 2019-06-01T00:43:59.408Z · score: 22 (14 votes)
The LessWrong Team 2019-06-01T00:43:31.545Z · score: 25 (8 votes)
Site Guide: Personal Blogposts vs Frontpage Posts 2019-05-31T23:08:07.363Z · score: 36 (11 votes)
A Quick Taxonomy of Arguments for Theoretical Engineering Capabilities 2019-05-21T22:38:58.739Z · score: 29 (6 votes)
Could humanity accomplish everything which nature has? Why might this not be the case? 2019-05-21T21:03:28.075Z · score: 8 (2 votes)
Could humanity ever achieve atomically precise manufacturing (APM)? What about a much-smarter-than-human-level intelligence? 2019-05-21T21:00:30.562Z · score: 8 (2 votes)
Data Analysis of LW: Activity Levels + Age Distribution of User Accounts 2019-05-14T23:53:54.332Z · score: 27 (9 votes)
How do the different star-types in the universe (red dwarf, etc.) relate to habitability for human-like life? 2019-05-11T01:01:52.202Z · score: 6 (1 votes)
How many "human" habitable planets/stars are in the universe? 2019-05-11T00:59:59.648Z · score: 6 (1 votes)
How many galaxies could we reach traveling at 0.5c, 0.8c, and 0.99c? 2019-05-08T23:39:16.337Z · score: 6 (1 votes)
How many humans could potentially live on Earth over its entire future? 2019-05-08T23:33:21.368Z · score: 9 (3 votes)
Claims & Assumptions made in Eternity in Six Hours 2019-05-08T23:11:30.307Z · score: 49 (14 votes)
What speeds do you need to achieve to colonize the Milky Way? 2019-05-07T23:46:09.214Z · score: 6 (1 votes)
Could a superintelligent AI colonize the galaxy/universe? If not, why not? 2019-05-07T21:33:20.288Z · score: 6 (1 votes)
Is it definitely the case that we can colonize Mars if we really wanted to? Is it reasonable to believe that this is technically feasible for a reasonably advanced civilization? 2019-05-07T20:08:32.105Z · score: 8 (2 votes)
Why is it valuable to know whether space colonization is feasible? 2019-05-07T19:58:59.570Z · score: 6 (1 votes)
What are the claims/arguments made in Eternity in Six Hours? 2019-05-07T19:54:32.061Z · score: 6 (1 votes)
Which parts of the paper Eternity in Six Hours are iffy? 2019-05-06T23:59:16.777Z · score: 18 (5 votes)
Space colonization: what can we definitely do and how do we know that? 2019-05-06T23:05:55.300Z · score: 33 (10 votes)
What is corrigibility? / What are the right background readings on it? 2019-05-02T20:43:45.303Z · score: 6 (1 votes)
Speaking for myself (re: how the LW2.0 team communicates) 2019-04-25T22:39:11.934Z · score: 47 (17 votes)
[Answer] Why wasn't science invented in China? 2019-04-23T21:47:46.964Z · score: 80 (27 votes)
Agency and Sphexishness: A Second Glance 2019-04-16T01:25:57.634Z · score: 27 (14 votes)

Comments

Comment by ruby on Tag Index [Beta] · 2020-05-26T00:18:29.460Z · score: 2 (1 votes) · LW · GW

Oops, that's a mistake. Fixed now. Thanks.

Comment by ruby on Literature Review For Academic Outsiders: What, How, and Why · 2020-05-12T00:44:48.224Z · score: 23 (15 votes) · LW · GW

Curated (with multiple endorsements from the mod team). As noted in my previous comment, this post includes lots of links and references to further resources, but it also motivates the need for lit reviews well. It's not just a "how-to" guide but a "why" guide as well. It's a timely post too.

Go back a few years, and lukeprog was the champion/symbol of scholarship on LessWrong. Unfortunately for us, he's not able to contribute to LessWrong as much anymore, which makes it great that others are taking up the banner and reminding us of the need to build on existing knowledge (and helping people know how to do so).

I say this post is timely because making LessWrong more scholarly continues to be a major focus of my work on the LessWrong team. Scholarship/lit reviews are actually a major goal of the new Tagging/Wiki system, whose still larger goal is increasing LessWrong's intellectual output. The hope is to make it much easier for writers on LessWrong to discover and build upon LessWrong's decade of previous work. "Shoulders of giants," etc.

Obviously, the overwhelming supermajority of the world's knowledge isn't in LessWrong's posts (though the very best insights might be), and our thinkers absolutely need the skills (and virtue) to mine the troves of knowledge outside our shores. Hence the value of this post.

[At the same time, I do think we shouldn't let a requirement of lit review become too high a barrier to contributing on LessWrong. There's a lot of value in thinking through things for yourself fresh, and sometimes just getting random uninformed thoughts published stimulates discussion and provides motivation to then go for a thorough survey of the literature.]

All in all, kudos.

(And thanks for the recommendation of Intellectual Foundation of Information Organization, that was a good one.)

Comment by ruby on Open & Welcome Thread—May 2020 · 2020-05-11T02:38:40.856Z · score: 4 (2 votes) · LW · GW

Welcome!

The dictionary definition of "persuade" misses some of the connotations. Persuading someone often means "get them to agree with you" and not "jointly arrive at what's true, which includes the possibility that others can point out your mistakes and you change your mind." Explaining usually means something more like "lay out your reasoning and facts, which might lead someone to agree with you if they think your reasoning is good."

The key difference might be something like: "persuade" is written to get the reader to accept what is written regardless of whether it's true, while "explain" wants you to accept the conclusion only if it's true. It's the idea of symmetric vs asymmetric weapons in this post.

Sorry if that's still a bit unclear, I hope it helps.

Comment by ruby on Literature Review For Academic Outsiders: What, How, and Why · 2020-05-10T17:29:55.070Z · score: 11 (7 votes) · LW · GW

Many thanks for writing this. It's great overall, and I really like the large number of links and references to other resources too (and I would have said that even if they weren't actually the whole topic :P). I'm so pleased when LW gets another resource about how to study/research. I gave this a strong tag relevance vote on the Scholarship & Learning wikitag.

Comment by ruby on Coping and Cultures · 2020-05-07T05:29:13.446Z · score: 7 (4 votes) · LW · GW

I believe that military stuff, including and maybe especially culture, is a long-term interest of LW user Lionhearted. You could message him, and also look at his writing on mental toughness within the Strategic Review series.

Comment by ruby on [Site Meta] Feature Update: More Tags! (Experimental) · 2020-05-05T01:06:17.070Z · score: 6 (4 votes) · LW · GW

Rationalist culture and life extension might make sense. We have a Cryonics tag already. If we can round up a few posts on either of those topics, I'd create these.

Comment by ruby on [Site Meta] Quick Guide to Tagging · 2020-04-30T16:07:47.681Z · score: 2 (1 votes) · LW · GW

To remove a tag, just downvote it (it might look like it's gone to -2, which is fine; upon refresh it will be gone).

Yeah, some of those definitely seem like good tags. I've had the idea for Coordination/Cooperation, Group Rationality, and Communication.

For the others, I think we'd want to ensure there isn't too much overlap with existing things. There's a Programming tag; does that do the thing for Software? And I'm curious what you see going in Tools vs the existing Techniques tag (which might also cover "soft skills").

It's good to see all these suggestions though. Even if we don't make a tag because of an existing one, soon we might set up "redirects" from terms towards things that are almost the same, or at least the closest match.

Comment by ruby on [Site Meta] Feature Update: More Tags! (Experimental) · 2020-04-26T04:56:04.997Z · score: 4 (2 votes) · LW · GW

It's reasonable to mention "there's this comment which is relevant to this topic..."

Comment by ruby on [Site Meta] Feature Update: More Tags! (Experimental) · 2020-04-26T02:06:40.369Z · score: 4 (2 votes) · LW · GW

These are really good. 

Embedded Agency is a clear win. 

Mechanism Design/Aligning Incentives seems good too. Agreed there are choices about the name, and I guess scope too. Do you mean it to be material about how to align incentives, but excluding related examples where incentives failed to be aligned? Would the Boeing 737 MAX MCAS, as an agent corrigibility failure, be part of it?

"Resource Bounded Epistemics" sounds like a cool category. So does "Interdisciplinary Analogies", or should it be "Interdisciplinary Applications"? 

Anyhow, these are great. More are welcome.

Fake Frameworks, yeah, hmm. We might consider "only authors can apply these tags"; I'm not sure. That might make sense for general "epistemic state" tags.
 

Comment by ruby on [Site Meta] Feature Update: More Tags! (Experimental) · 2020-04-26T01:52:23.982Z · score: 2 (1 votes) · LW · GW

These are great! I'll make these soon. Those posts definitely justify doing so in my mind. Re: Wei Dai's comment, I think it's reasonable to mention it in the tag description text (tag descriptions will soon be everyone-editable wiki entries and should include extra info relevant to the tag/wiki/concept, including "notable comments").

Comment by ruby on [Site Meta] Feature Update: More Tags! (Experimental) · 2020-04-23T21:54:57.257Z · score: 3 (2 votes) · LW · GW

Yeah, I agree with most of that.

I do think there will be an appreciable number of tags (even if they're a minority) that are strictly subsets of, say, AI Alignment: everything under Value Learning or Embedded Agency, etc., and maybe it's worth it to have those automatically update.

I do feel that tag descriptions linking to other tags is extremely important for the system to work, and it will help a lot here.

Comment by ruby on [Site Meta] Feature Update: More Tags! (Experimental) · 2020-04-23T21:41:23.936Z · score: 2 (1 votes) · LW · GW

Interesting. I agree we want more specific tags for that post too. Though "Problem-Solving Tactics" actually feels pretty broad as well; a good definition/description might help give it shape. I'll think about it; I'm not sure if you had one in mind.

Another thing that helps is having other posts in mind too for the tag.

Comment by ruby on [Site Meta] Feature Update: More Tags! (Experimental) · 2020-04-22T22:59:34.104Z · score: 2 (1 votes) · LW · GW

I think I understand the motivation behind that. They're too easy to create and end up applying to too many different things? Does that seem right?

A challenge that's stark in my mind is how to avoid creating too many heavily overlapping tags, which seems easy to do with higher-level tags.

Comment by ruby on [Site Meta] Feature Update: More Tags! (Experimental) · 2020-04-22T19:09:36.849Z · score: 3 (2 votes) · LW · GW

To untag a post, just downvote its tag relevance. (Either in the hover-over or on the tag page).

Yeah, agreed that we need a better solution for showing currently available tags. In the meantime, you can look at www.lesswrong.com/tags or www.lesswrong.com/tags/all

A heuristic the team has discussed is that tags should have 3 good posts by at least two different authors. I do want some kind of wellbeing category, and a separate health one makes sense too. Anatomy, if it isn't a topic discussed by others, may or may not make sense; I'm not sure. If it's to help people find your other writing (the main goal of tagging), you could create a sequence or two to link them.

Comment by ruby on [Site Meta] Feature Update: More Tags! (Experimental) · 2020-04-22T19:05:50.616Z · score: 3 (2 votes) · LW · GW

I'm inclined to treat COVID-19 posts as an exception and not tag them with anything except Coronavirus, unless they're also applicable more broadly and timelessly.

Comment by ruby on [Site Meta] Feature Update: More Tags! (Experimental) · 2020-04-22T19:05:08.216Z · score: 6 (3 votes) · LW · GW

Nice, will definitely look at these.

Comment by ruby on What's the upper bound of how long COVID is contagious? · 2020-04-11T01:06:13.844Z · score: 6 (3 votes) · LW · GW

These papers on viral load probably help inform the answer. It was flagged to me that Ct might not have a straightforward interpretation, but I haven't looked into it, so I'm posting these as resources.

https://www.thelancet.com/journals/laninf/article/PIIS1473-3099(20)30113-4/fulltext

https://www.nejm.org/doi/full/10.1056/NEJMc2001737

Comment by ruby on What is the safe in-person distance for COVID-19? · 2020-04-10T23:47:23.607Z · score: 2 (1 votes) · LW · GW

This came up on my Facebook feed. I have only glanced at it briefly, but it's probably of interest here:

Belgian-Dutch Study: Why in times of COVID-19 you should not walk/run/bike close to each other.

Comment by ruby on Core Tag Examples [temporary] · 2020-04-07T19:24:06.978Z · score: 2 (1 votes) · LW · GW

AI Alignment Tag Examples

  1. Embedded Agency (full-text version)
  2. Goodhart Taxonomy
  3. An Untrollable Mathematician Illustrated
  4. AlphaGo Zero and the Foom Debate
  5. The unexpected difficulty of comparing AlphaStar to humans
  6. Outperforming the human Atari benchmark
  7. How does OpenAI's language model affect our AI timeline estimates?
  8. Jeff Hawkins on neuromorphic AGI within 20 years
  9. What failure looks like
  10. Soft takeoff can still lead to decisive strategic advantage
  11. My current framework for thinking about AGI timelines

Examples of AI Alignment Community Posts (tagged both AI Alignment and Community)

  1. 2019 AI Alignment Literature Review and Charity Comparison
  2. What I'll be doing at MIRI
  3. Offer of collaboration and/or mentorship
  4. Where are people thinking and talking about global coordination for AI safety?

Comment by ruby on Core Tag Examples [temporary] · 2020-04-07T19:00:49.214Z · score: 2 (1 votes) · LW · GW

Rationality Tag Examples
 
Positive Examples

  1. Highly Advanced Epistemology 101 for Beginners
  2. Where Recursive Justification Hits Rock Bottom
  3. Novum Organum: Introduction
  4. What is Evidence?
  5. Sequence introduction: non-agent and multiagent models of mind
  6. Adult Neurogenesis – A Pointed Review
  7. Biases: An Introduction
  8. A Sketch of Good Communication
  9. Double Crux — A Strategy for Resolving Disagreements
  10. "Focusing" for Skeptics
  11. Active Curiosity vs Open Curiosity
  12. The Schelling Choice is "Rabbit", not "Stag"
  13. Understanding information cascades
  14. The Neglected Virtue of Scholarship
  15. The mechanics of my recent productivity
  16. More Dakka

Negative Examples

  1. The First Rung: Insights from 'Linear Algebra Done Right'
  2. Tech economics pattern: "Commoditize Your Complement"
  3. On Building Theories of Histories

Unusual Edge Cases (included)

  1. Can You Prove Two Particles Are Identical?

Comment by ruby on How will this recession differ from the last two? · 2020-03-31T01:40:21.478Z · score: 10 (5 votes) · LW · GW

I don't know enough economics to have great thoughts here, but I almost wonder if it's something like people's ability to trade has been eroded. The "market", i.e., the place where you trade, has been lost. I would still purchase handmade cocktails and live music experiences if I could, and you would still produce them if you could, but now we're no longer able to trade, no longer able to exchange value.

I'm not sure where this line of thought leads.

Comment by ruby on What should we do once infected with COVID-19? · 2020-03-24T19:12:33.079Z · score: 8 (4 votes) · LW · GW

Ah, the problem is my brain is not working today. I missed the word "not" despite intentionally looking for its presence or absence. My bad. Question retracted.

Comment by ruby on What should we do once infected with COVID-19? · 2020-03-24T19:01:30.544Z · score: 2 (1 votes) · LW · GW

As I reach for the Ibuprofen and hesitate:

France is recommending against NSAIDs and against ibuprofen in particular. I will be very surprised if that ends up being born out (and WHO agrees with me)

Which part of the WHO status makes you think they don't think it will be borne out? It says they're recommending what France says for now, even though they don't currently have evidence that it's a problem.

Comment by ruby on What should we do once infected with COVID-19? · 2020-03-24T18:00:15.458Z · score: 4 (2 votes) · LW · GW

Some additional thoughts:

I have a lot of uncertainty when hearing the 5% runny nose figure from the data. Things like:
1) How did they define runny nose? Maybe their cutoff is much more stringent. If the paper defines this, it isn't getting passed along.
2) It's possible that different strains/mutations of the coronavirus elicit different symptoms. I don't know enough to judge how likely that is. Same for whether different populations might present differently.
3) Allergies might cause runny nose independently of COVID-19.

Comment by ruby on What should we do once infected with COVID-19? · 2020-03-24T17:44:00.311Z · score: 4 (2 votes) · LW · GW

Do you have a runny nose? Probably not COVID-19

I'm concerned about this one as advice. I think it's fine to say it's a likelihood ratio of 20x against, but in the presence of severe fever, cough, and difficulty breathing, I think a person should still place non-negligible probability on it being COVID-19, notwithstanding having had a runny nose at some point. I'm worried about people hearing "runny nose != COVID" and updating so hard that they conclude they don't have it. 1 in 20 people isn't that rare.

I think it's more reasonable to say that if you don't have a fever and do have a runny nose, the odds are probably in your favor, but the runny nose alone shouldn't be an overriding diagnostic consideration.
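
To make that concrete, here's a toy version of the update (a minimal sketch; the prior is made up for illustration, and only the ~20x likelihood ratio comes from the figures above):

```python
# Toy Bayes update -- illustrative numbers only, not medical advice.
prior = 0.5                       # hypothetical prior given fever, cough, etc.
prior_odds = prior / (1 - prior)  # 1:1 odds
lr = 1 / 20                       # runny nose: ~20x likelihood ratio against
posterior_odds = prior_odds * lr
posterior = posterior_odds / (1 + posterior_odds)
print(f"P(COVID-19 | severe symptoms, runny nose) ~= {posterior:.1%}")  # ~4.8%
```

Even after the 20x update, that's a probability you'd want to act on, which is the sense in which the runny nose shouldn't be overriding.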

Comment by ruby on Coronavirus Justified Practical Advice Summary · 2020-03-16T02:15:56.381Z · score: 6 (3 votes) · LW · GW

I recall there being some concern that a residue can build up on the copper tape making it less effective. Certainly the tape on the back of my phone is discolored now.

If the tape ended up being not effective but it made people complacent such that they didn't clean the surfaces as much, that could be worse?

Did this get resolved?

I write this atop a mound of copper tape.

Comment by ruby on [Site Update] Subscriptions, Bookmarks, & Pingbacks · 2020-03-01T17:47:32.232Z · score: 4 (2 votes) · LW · GW

Seems no one replied to this. The top 5 is a limitation we are well aware of, and it's on the list to fix before Pingbacks is moved out of beta status. I think it was there just because it takes extra UI work to have an expanding list. I agree the lack of indication that it's happening is pretty bad.

I also really like the notification idea, I do hope we make that happen.

Comment by ruby on More writeups! · 2020-02-07T11:32:52.478Z · score: 4 (2 votes) · LW · GW

Check out www.lesswrong.com/tag/postmortems; it's an experimental tag within the under-development tagging feature.

Comment by ruby on LessWrong FAQ · 2020-01-29T20:12:39.897Z · score: 3 (2 votes) · LW · GW

Good question! I think this was missed in the FAQ and I'll add it in. Currently, multiple authors can only be added by an admin. If that works for you, send us a message through Intercom, or email team@lesswrong.com.

Comment by ruby on 2018 Review: Voting Results! · 2020-01-24T22:46:34.574Z · score: 2 (1 votes) · LW · GW

There are five people on the team. I wasn't the most involved, but I was still very involved. You'll hear from all of us soon, don't you worry.

Comment by ruby on 2018 Review: Voting Results! · 2020-01-24T17:26:32.364Z · score: 13 (6 votes) · LW · GW

The team will be conducting a Review of the Review where we take stock of what happened, discuss the value and costs of the Review process, and think about how to make the review process more effective and efficient in future years.

I just want to speak up for myself: as I mentioned in a different comment, at least in my mind we need to properly review this year's Review before we definitely commit to running this every year. I think the OP implies a greater level of confidence that the project was a "success" and will be repeated in subsequent years than I feel.

Even just so far, I've seen a lot of good come from this year's Review that I'm very pleased with, but it's a costly project (for the team and the community), so that calculation needs to be done carefully.

This comment shouldn't be interpreted as a sign that I'm negative on the Review. This is my attitude to every project that takes up significant resources. I won't have a firm opinion until I've thought about the Review a lot more and discussed it at length with the team. We had to get the results out there quickly though. ;)

Comment by ruby on 2018 Review: Voting Results! · 2020-01-24T17:07:07.749Z · score: 5 (3 votes) · LW · GW

If voters are at all consistent, you'd expect at least some positive correlation, because the same factors that made them upvote for karma also made them vote for posts in the Review.

Beyond that, I'm guessing people voted for the posts they'd read, and people would have read higher karma posts more often since they get more exposure, e.g. sticking around the Latest Posts list for longer.

Comment by ruby on 2018 Review: Voting Results! · 2020-01-24T17:03:39.484Z · score: 11 (6 votes) · LW · GW

So, my question is - do the organizers think it was worth it? And if yes, do you think it is worth it enough for publishing in a book? And if yes to both - what would failure have looked like?

These are really excellent questions. The OP mentions the intention to "review the review" in coming weeks; there will be posts about this, so hang tight. Obviously the whole project had very high costs, so we have to think carefully through whether the benefits justify them and whether we should continue the Review process in future years. Speaking for myself, it's not obvious that it was worth it, though it's still quite possible. It's a hard question because I expect many of the benefits to accrue over time and not be straightforward to measure.

I think we should do a thorough review now with what we know now, and we would need to do another review in about a year's time before pressing go on the next iteration.

I've generally been pushing for all major projects at LW to be properly reviewed with an eye to: Were they worth it? What did we learn? And what remains to be done?

Comment by ruby on 2018 Review: Voting Results! · 2020-01-24T17:00:33.869Z · score: 17 (5 votes) · LW · GW

It seems like very few people voted overall if the average is "10-20" voters per post. I hope they are buying 50+ books each otherwise I don't see how the book part is remotely worth it.

I'm confused by this. Why would only voters be interested in the books? Also, this statement assumes that you have to sell 500-1000 books for it to be worth it; what's the calculation for the value of a book sold vs the cost of making the books?

The voting was broken in multiple ways - you could spend as many points as possible, but instead of a cut-off, your vote was just cast out due to the organizers' mistake to allow it.

I was surprised by this design decision too, though I'll note that the number of points spent was displayed and went red once you exceeded the budget. (Which has the advantage that, if you're going over, you can place a vote and then decide whether to remove it or another one.) Everyone except for the single person who spent 10,000 points kept to 500 or less.

Comment by ruby on 2018 Review: Voting Results! · 2020-01-24T16:48:22.874Z · score: 13 (6 votes) · LW · GW

If a similar system is used on future occasions, it might be a good idea to limit how strong votes are made for users who don't cast many votes.

The quadratic-vote-allocator's multiplier for non-quadratic votes was capped at 6x. A "No" vote starts out with a cost of 4, so even if you only voted "No" on one item, it wouldn't become more than a cost of 24, which translates into a vote with weight -6.

I'd say the -30 was intentional.
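
For concreteness, here's a sketch of that arithmetic (my reconstruction; I'm assuming a triangular quadratic cost function c(w) = w*(w+1)/2, which fits the numbers above):

```python
# Reconstruction of the capped scaling described above (cost function assumed).
def weight_from_cost(cost):
    # Largest integer weight whose quadratic cost fits within the spent points.
    w = 0
    while (w + 1) * (w + 2) // 2 <= cost:
        w += 1
    return w

base_cost = 4                         # a lone "No" vote's starting cost
capped_multiplier = 6                 # cap on the normalizing multiplier
capped_cost = base_cost * capped_multiplier  # 24
print(weight_from_cost(capped_cost))  # 6, i.e. a final vote of weight -6
```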

Comment by ruby on 2018 Review: Voting Results! · 2020-01-24T02:13:03.627Z · score: 24 (7 votes) · LW · GW

Bounty offered for Analysis of the Results

I'm offering a pool of $100+ of my personal money for the best analyses of the results, as judged by me. I'm looking for meaningful insights drawn from the data, e.g. modeling the interaction between the karma score of a post and its vote outcomes.

There are a number of aggregate stats for each post included in the linked spreadsheet, but I'm also open to making available further stats or data to people upon request so long as they keep the voters anonymous.

EDIT: Be creative in what analyses you might run, and don't limit yourself to just what's in the spreadsheet. As above, I'll share more data if it seems appropriate. This might be data about posts, comments, and anything else to do with the site.
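
As a trivial sketch of a starting point (the file and column names here are hypothetical; adapt them to the actual spreadsheet):

```python
# Minimal example analysis of the Review results spreadsheet.
import pandas as pd

df = pd.read_csv("2018_review_results.csv")  # hypothetical export of the linked sheet
# e.g., how strongly does a post's pre-existing karma predict its Review votes?
print(df[["karma", "total_vote_score"]].corr(method="spearman"))
```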

Comment by ruby on Player vs. Character: A Two-Level Model of Ethics · 2020-01-20T17:31:17.116Z · score: 2 (1 votes) · LW · GW

I voted very hard for this post. The idea feels correct, though I'd describe it as pointing at a key unresolved confusion/conflict for me. It fuels this quiet voice of doubt about everything I do in my life (and about others in theirs). I'm not entirely sure what to do with this model though; like, the entailment is missing or something. I voted hard mostly because I see it as the start of an issue to be resolved, not a finished work.

I'm not sure if the lack of "solution/response" or possibility of bad solution/responses is what you think is dangerous, or perhaps something in the very framing itself (if so, I'm not seeing it).

I should probably give the whole topic a bit more thought rather than looping on my feelings of being "stuck" around it.

Comment by ruby on Explicit and Implicit Communication · 2020-01-20T00:42:30.037Z · score: 2 (1 votes) · LW · GW

[Rambly notes while voting.] This post has some merit, but it feels too... jumpy, and, as the initial comments point out, it's unclear what's being considered "explicit" vs "implicit" communication. Only upon getting to the comments did I realize that the author's sense of those words was not quite my own.

I'm also not sure whether it's 1) telling the whole picture, or 2) correct. A couple of examples are brought, but examples are easy to cherry-pick. The fact that the Bruce Lee case seemed to favor a non-compassionate approach feels, maybe, like an existence proof, but I'm not even sure of what. I do think the military example could be fleshed out as a case of when it doesn't make sense to communicate at length about everything.

As pitched, I do think the recommendation for Difficult Conversations sounds pretty cool.

Comment by ruby on Voting Phase of 2018 LW Review · 2020-01-19T07:26:23.455Z · score: 8 (4 votes) · LW · GW

We see information about how much individuals vote

For accuracy's sake, I'll add that we have all the data about who voted on what. Our internal policy is not to look at votes by specific users on specific posts unless we have really good reason to, such as suspecting foul play.

Ray is correct about what we in fact look at, but it feels important to say that we could in principle see it all if we chose to, and that we're requesting some trust from the community.

Comment by ruby on Conversational Cultures: Combat vs Nurture (V2) · 2020-01-19T00:01:06.199Z · score: 5 (2 votes) · LW · GW

Appendix 4: Author's Favorite Comments

Something I've never had the opportunity to do before, since I've never revised a post before, is collect the comments that I think added the most to the conversation by building on, responding to, questioning, or contradicting the post.

Here's that list for this post:

  • This comment from Said Achmiz, which seems correct to me in both its points: 1) that Nurture-like Cultures can be abused politically, and 2) that close interpersonal relationships trend Combative as closeness grows.
  • Benquo's comment about the dimension of whether participants are trying to minimize or maximize the scope of a disagreement.
  • Ben Pace's comment talking about when and where the two cultures fit best, and particularly regarding how Nurture Culture is required to hold space when discussing sensitive topics like relationships, personal standards, and confronting large life choices.
  • PaulK's comment about "articulability": how a Nurturing culture makes it easier to express ill-formed, vague, or not yet justifiable thoughts.
  • AdrianSmith's comment about how Combat Culture can help expose the weak points in one's beliefs which wouldn't come up in Nurture Culture (even if one only updates after the heat of "battle"), and Said Achmiz's expansion of this point with quotes from Schopenhauer, claiming that continuing to fight for one's position without regard for truth might actually be epistemically advantageous.

Best humorous comments:

Comment by ruby on Being a Robust Agent (v2) · 2020-01-18T23:28:22.017Z · score: 2 (1 votes) · LW · GW

I feel like perhaps the name "Adaptive Agent" captures a large element of what you want: an agent capable of adapting to shifting circumstances.

Comment by ruby on Conversational Cultures: Combat vs Nurture (V2) · 2020-01-17T20:34:44.383Z · score: 2 (1 votes) · LW · GW

Appendix 3: How to Nurture

These are outtakes from a draft revision for Nurture Culture which seemed worth putting somewhere:

A healthy epistemic Nurture Culture works to make it possible to safely have productive disagreement by showing that disagreement is safe. There are better and worse ways to do this. Among them:

  • Adopting a “softened tone” which holds the viewpoints as object and at some distance: “That seems mistaken to me, I noticed I’m confused” as opposed to “I can’t see how anyone could possibly think that”.
  • Expending effort to understand: “Okay, let me summarize what you’re saying and see if I got it right . . .”
  • Attempting to be helpful in the discussion: “I’m not sure what you’re saying, is it this: <some description or model>?”
  • Mentioning what you think is good and correct: “I found this post overall very helpful, but paragraph Z seems gravely mistaken to me because <reasons>.” This counters perceived reputational harms and can put people at ease.

Things which are not very Nurturing:

  • “What?? How could anyone think that”
  • A comment that only says “I think this post is really wrong.”
  • You’re not accounting for X, Y, Z. <insert multiple paragraphs explaining issues at length>

Items in the first list start to move the dial on the dimensions of collaborativeness and are likely to be helpful in many discussions, even relatively Combative ones; however, they have the important additional Nurturing effect of signaling hard that a conversation has the goal of mutual understanding and reaching truth together, a goal whose salience shifts the significance of attacking ideas to the purely practical rather than the political.

While this second list can include extremely valuable epistemic contributions, they can heighten the perception of reputational and other harms [1] and thereby i) make conversations unpleasant (counterfactually causing them not to happen), and ii) raise the stakes of a discussion, making participants less likely to update.

Nurture Culture concludes that it’s worth paying the costs of more complicated and often indirect speech in order to make truth-seeking discussion a more positive experience for all.

[1] So much of our wellbeing and success depends on how others view us. It is reasonable for people to be very sensitive to how others perceive them.

Comment by ruby on Conversational Cultures: Combat vs Nurture (V2) · 2020-01-17T20:33:01.757Z · score: 2 (1 votes) · LW · GW

Appendix 2: Priors of Trust

I’ve said that Combat Culture requires trust. Social trust is complicated and warrants many dedicated posts of its own, but I think it’s safe to say that having the following priors helps one feel safe in a “combative” environment:

  • A prior that you are wanted, welcomed and respected,
  • that others care about you and your interests,
  • that one’s status or reputation is not under a high level of threat, 
  • that having dumb ideas is safe and that’s just part of the process,
  • that disagreement is perfectly fine and dissent will not be punished, and 
  • that you won’t be punished for saying the wrong thing.

If one has strong priors for the above, one can have a healthy Combat Culture.

Comment by ruby on Conversational Cultures: Combat vs Nurture (V2) · 2020-01-17T20:32:22.077Z · score: 2 (1 votes) · LW · GW

Appendix 1: Conversational Dimensions

Combat and Nurture point at regions within conversation-space; however, as commenters on the original pointed out, there are actually quite a few different dimensions relevant to conversations. (I'm focused here on truth-seeking conversations.)

Some of them:

  • Competitive vs Cooperative: within a conversation, is there any sense of one side trying to win against the others? Is there a notion of “my ideas” vs “your ideas”? Or is it just us trying to figure it out together?
    • Charitability is a related concept.
    • Willingness to Update: how likely are participants to change their position within a conversation in response to what’s said?
  • Directness & Bluntness: how straightforwardly do people speak? Do they say “you’re absolutely wrong” or do they say, “I think that maybe what you’re saying is not 100%, completely correct in all ways”?
  • Filtering: Do people avoid saying things in order to avoid upsetting or offending others?
  • Degree of Concern for Emotions: How much time/effort/attention is devoted to ensuring that others feel good and have a good experience? How much value is placed on this?
  • Overhead: how much effort must be expended to produce acceptable speech acts? How many words of caveats, clarification, softening? How carefully are the words chosen?
  • Concern for Non-Truth Consequences: how much are conversation participants worried about the effects of their speech on things other than obtaining truth? Are people worrying about reputation, offense, etc?
  • Playfulness & Seriousness: is it okay to make jokes? Do participants feel like they can be silly? Or is it no laughing business, too much at stake, etc.?
  • Maximizing or Minimizing the Scope of Disagreement: are participants trying to find all the ways in which they agree and/or sidestep points of disagreement, or are they clashing and bringing to the fore every aspect of disagreement? [See this comment by Benquo.]

Similarly, it’s worth noting the different objectives conversations can have:

  • Figuring out what’s true / exchanging information.
  • Jointly trying to figure out what’s true vs trying to convince the other person.
  • Fun and enjoyment.
  • Connection and relationship building.

The above are conversational objectives that people can share. There are also objectives that most directly belong to individuals:

  • To impress others.
  • To harm the reputation of others.
  • To gain information selfishly.
  • To enjoy themselves (benignly or malignantly).
  • To be helpful (for personal or altruistic gain).
  • To develop relationships and connection.

We can see which positions along these dimensions cluster together and which correspond to the particular clusters that are Combat and Nurture.

A Combat Culture is going to be relatively high on bluntness and directness, and can be more competitive (though isn’t strictly); if there is concern for emotions, it’s going to be a lower priority and probably less effort will be invested.

A Nurture Culture may inherently be prioritizing the relationships between and experiences of participants more. Greater filtering of what’s said will take place and people might worry more about reputational effects of what gets said.

These aren’t exact, and different people will focus on cultures which differ along all of these dimensions. I think of Combat vs Nurture as tracking an upstream generator that impacts how various downstream parameters get set.

Comment by ruby on Conversational Cultures: Combat vs Nurture (V2) · 2020-01-17T20:30:46.315Z · score: 2 (1 votes) · LW · GW

[2] A third possibility is someone who is not really enacting either culture: they feel comfortable being combative towards others but dislike it if anyone acts in kind towards them. I think this is straightforwardly not good.

Comment by ruby on Conversational Cultures: Combat vs Nurture (V2) · 2020-01-17T20:30:28.520Z · score: 2 (1 votes) · LW · GW

[1] I use the term attack very broadly to include any action which may cause harm to the person acted upon. The harm caused by an attack could be reputational (people think worse of you), emotional (you feel bad), relational (I feel distanced from you), or opportunal (opportunities or resources are impacted).

Comment by ruby on Conversational Cultures: Combat vs Nurture (V2) · 2020-01-17T20:30:05.476Z · score: 2 (1 votes) · LW · GW

Footnotes

Comment by ruby on Conversational Cultures: Combat vs Nurture (V2) · 2020-01-17T20:29:08.454Z · score: 11 (3 votes) · LW · GW

Changes from V1 to V2

This section describes the most significant changes from version 1 to version 2 of this post:

  • The original post opened with a strong assertion that it intended to be descriptive. In V2, I’ve been more prescriptive/normative.
  • I clarified that the key distinction between Combat and Nurture is the meaning assigned to combative speech-acts.
  • I changed the characterization of Nurture Culture to be less about being “collaborative” (which can often be true of Combat), and more about intentionally signaling friendliness/non-hostility.
  • I expanded the description of Nurture Culture which in the original was much shorter than the description of Combat, including the addition of a hopefully evocative example.
  • I clarified that Combat and Nurture aren’t a complete classification of conversation-culture space (far from it), and further described their degenerate neighbors: Combat without Safety, Nurture without Caring.
  • I added appendices which cover:
    • Dimensions along which conversations and conversational cultures vary.
    • Factors that contribute to social trust.

 

Shout out to Raemon, Bucky, and Swimmer963 for their help with the 2nd Version.

Comment by ruby on Conversational Cultures: Combat vs Nurture (V2) · 2020-01-17T20:27:18.894Z · score: 7 (1 votes) · LW · GW

SUPPLEMENTAL CONTENT FOR V2
 

Please do post comments at the top level.

Comment by ruby on Please Critique Things for the Review! · 2020-01-17T06:08:06.736Z · score: 3 (2 votes) · LW · GW

Yeah, true, that seems like a fair reason why there wouldn't be more reviews. Thanks for sharing your personal reasons.