bendini's Shortform 2019-12-19T19:59:04.859Z · score: 4 (1 votes)
Welcome to Kernel Project (Manchester, UK) [Edit With Your Details] 2018-03-12T06:26:06.039Z · score: 7 (2 votes)
The Craft & The Community - A Post-Mortem & Resurrection 2017-11-02T03:45:46.384Z · score: 109 (63 votes)


Comment by bendini on Construct a portfolio to profit from AI progress. · 2020-07-26T08:06:50.464Z · score: 4 (5 votes) · LW · GW

I've given this a strong downvote, but I'm writing a comment so the OP and passersby aren't confused about why a long comment that provides relevant answers is (currently) sitting at -3 karma:

  1. Repeating the false but popular assertion that smart people can't outperform indexes without insider knowledge/huge amounts of luck.
  2. Conflating whether it's moral to invest in China with whether it is profitable.
  3. The suggestion that the asker look into selling moonshine/other contraband. (This isn't a moral complaint, it's just bad advice: starting a manufacturing business isn't a remotely suitable replacement for stock investing, and the risk-adjusted returns of such a business are very poor.)
Comment by bendini on Raemon's Shortform · 2020-02-25T03:50:03.359Z · score: 1 (1 votes) · LW · GW

I agree, but I also think there's a bit of a chicken-and-egg problem there. Leaders fear that enforcing order will result in a mutiny, but if that fear is based on an accurate perception of what will happen, telling leadership to grow a pair is not going to fix it.

Comment by bendini on Raemon's Shortform · 2020-02-19T09:49:56.081Z · score: 6 (2 votes) · LW · GW

Thinking about my own experiences of seeing these bottlenecks in action, I don't think either is a subset of the other. It seems more like there are a ton of situations where the only way forward is for a few people to grow a spine and have the tough conversations, and an adjacent set of problems that need centralised competent leadership to solve, but that's in short supply for the usual economic reasons plus things like "rationalists won't defer to anyone they don't personally worship unless bribed with a salary".

Comment by bendini on Potential Ways to Fight Mazes · 2020-01-31T09:44:12.373Z · score: 1 (1 votes) · LW · GW

As food for thought on the last line, here's my comment from a previous post on moral mazes:

Comment by bendini on How Doomed are Large Organizations? · 2020-01-22T05:38:47.058Z · score: 2 (2 votes) · LW · GW

It was meant to include Canada (because I suspect it still applies to them and I was unsure if they were included in Moral Mazes) but not Mexico or any countries south of Mexico which are technically in North America. This was not clear in retrospect and I have edited my comment in light of that.

Comment by bendini on How Doomed are Large Organizations? · 2020-01-22T00:43:03.338Z · score: 1 (1 votes) · LW · GW

Fortunately or unfortunately, this problem seems much worse in America than in other Western countries. Unfortunately, because most of the audience lives and works there. Fortunately, because it means large organisations aren't destined to become hellholes. By no means are moral mazes absent elsewhere, but when I researched this they seemed far less intense.

Have you looked into the workings of large organisations outside of the US or Canada?

Comment by bendini on How to Escape From Immoral Mazes · 2020-01-17T14:00:36.507Z · score: 14 (6 votes) · LW · GW
As George Carlin says, some people need practical advice. I didn't know how to go about providing what such a person would need, on that level. How would you go about doing that?

The solution is probably not a book. Many books have been written on escaping the rat race that could be downloaded for free in the next 5 minutes, yet people don't download them, and if some do in reaction to this comment they probably won't get very far.

Problems that are this big and resistant to being solved are not waiting for some lone genius to find the 100,000 word combination that will drive a stake right through the middle. What this problem needs most is lots of smart but unexceptional people hacking away at the edges. It needs wikis. It needs offline workshops. It needs case studies from people like you so it feels like a real option to people like you.

Then there's the social and financial infrastructure part of the problem. Things such as:

  • Finding useful things for people to do outside of salaried work that don't feel like sitting at the kids' table. (See: every volunteer role outside of open source.)
  • Establishing intellectual networks outside of the high cost of living/rat race cities. (Not necessarily out of cities in general.)
  • Developing things that make it cheaper to maintain a comfortable standard of living at a lower level of income.
  • Finding ways to increase productivity on household tasks so it becomes economically practical to do them yourself rather than outsource them.
Comment by bendini on How to Escape From Immoral Mazes · 2020-01-16T16:23:12.147Z · score: 10 (5 votes) · LW · GW

I've been following your whole series on moral mazes. I felt the rest of them were important because they explained why "working for the man" was bad in explicit terms, but this one was a pleasant surprise. Until about halfway through this post, I was under the impression you were articulating the dangers of moral mazes in the abstract while carefully ignoring any implications it would have for your own career on Wall Street. The point I realised you'd actually quit was a jaw-dropping moment, given that I already knew you weren't staying in that situation because you had a good use for the money.

My only complaint about this post would be that the intellectually detached way it's written and the lack of object-level game plans will prevent it from feeling like a real option to a lot of readers. Most people know that something is wrong with these systems, but when the rubber meets the road, they default to the familiar script the same way you did. Intellectual understanding of a problem is necessary for a certain kind of person to take action, but it isn't sufficient, and in some cases it can leave people dangerously unprepared for reality the same way that learning karate does for a street fight.

Comment by bendini on Please Critique Things for the Review! · 2020-01-12T07:58:15.015Z · score: 3 (2 votes) · LW · GW
Often what needs reviewing is less like "author made an unsubstantiated claim or logical error" and more like "is the entire worldview that generated the post, and the connections the post made to the rest of the world, reasonable?"

I agree with this, but given that these posts were popular because lots of people thought they were true and important, deeming the entire worldview of the author flawed would also imply that the worldview of the community was flawed. It's certainly possible that the community's entire worldview is flawed, but even if you believe that to be true, it would be very difficult to explain in a way that people would find believable.

Comment by bendini on Please Critique Things for the Review! · 2020-01-12T07:16:36.087Z · score: 3 (2 votes) · LW · GW

Those numbers look pretty good in percentage terms. I hadn't thought about it from that angle and I'm surprised they're that high.

FWIW, my original perception that there was a shortage was based on the ratio between the quantity of reviews and the quantity of new posts written since the start of the review period. In theory, the latter takes a lot more effort than the former, so it would be unexpected if more people do the higher-effort thing unprompted while fewer people do the lower-effort thing despite explicit calls to action and $2000 in prize money.

Comment by bendini on Please Critique Things for the Review! · 2020-01-12T04:34:10.505Z · score: 3 (2 votes) · LW · GW

I'm not surprised to learn that is the case.

This is my understanding of how karma maps to social prestige:

  • People with existing social prestige will be given more karma for a post or a comment than if it was written by someone unknown to the community.
  • Posts with more karma tend to be more interesting, which helps boost the author's prestige because more people will click on a post with higher karma.
  • Comments with high karma are viewed as more important.
  • Comments with higher karma than other comments in the same thread are viewed as the correct opinion.
  • Virtually nobody looks at how much karma you've got to figure out how seriously to take your opinions. This is probably because by the time you have accumulated enough for it to mean something, regulars will already associate your username with good content.
Comment by bendini on Please Critique Things for the Review! · 2020-01-11T23:16:14.548Z · score: 5 (7 votes) · LW · GW

The shortage of reviews is both puzzling and concerning, but one explanation for it is that the expected financial return of writing reviews for the prize money is not high enough to motivate the average LessWrong user, and the expected social prestige for commenting on old things is lower per unit of effort than writing new things. (It's certainly true for me: I find commenting way easier than posting, but I've never got any social recognition from it, whereas my single LW post introduced me to about 50 people.)

Another potential reason is that it's pretty hard to "review" the submissions. Like most essays on LessWrong, they state one or two big ideas and then spend the vast majority of the words on explaining the ideas and connecting them to other things we know. This insight density is what makes them interesting, but it also makes it very hard to evaluate the theories within them. If you can't examine the evidence that's behind a theory, you have to either assume it or challenge the theory as a whole, which is what usually happens in the comments section after it's first published. If true, this means that you're not really asking for reviews, but for lengthy comments that can say something that wouldn't have been said last year.

Comment by bendini on Speaking Truth to Power Is a Schelling Point · 2019-12-30T10:47:27.542Z · score: 9 (5 votes) · LW · GW

I find this theory intuitively plausible, and I expect it will be very important if it's true. Having said that, you didn't provide any evidence for this theory, and I can't think of a good way to validate it using what I currently know.

Do you have any evidence that people could use to check this independently?

Comment by bendini on Why is the mail so much better than the DMV? · 2019-12-30T09:43:34.923Z · score: 3 (4 votes) · LW · GW

One possibility is that

1. The DMV is especially bad, because people don't have to tolerate using it on a weekly basis.

2. The USPS isn't especially good, but it's hard to notice because American delivery companies aren't much better.

Comment by bendini on How was your decade? · 2019-12-29T06:12:12.050Z · score: 4 (3 votes) · LW · GW

I've already given this an upvote, but I'm also leaving a comment because I think LessWrong has a shortage of this kind of content. I think broad personal overviews are particularly important because a lot of useful information you can get from "comparing notes" is hard to turn into standalone essays.

Comment by bendini on bendini's Shortform · 2019-12-19T19:59:05.438Z · score: 6 (4 votes) · LW · GW

Yesterday I noticed that some of what I'd attributed to cultural differences in communication strength between myself and the LessWrong audience was actually due to differences in when I would choose to verbalise something. I originally thought this was me opting to state my positions clearly instead of couching them in false uncertainty so they would sound less abrasive, but yesterday I left some comments where I found myself wanting to use vocabulary that was significantly more "nuanced" than it used to be (example), and yet I didn't feel like I was being insincere.

I don't think this is a case of learning from my youthful hubris or assimilating into rationalist culture, as I still endorse both the opinion and the tone it was expressed in. The real difference seems to be the *stage* at which I voiced my opinion. In the old comment, I was discussing a topic I had spent a lot of time thinking about and researching, and came to the conclusion that the community was making insane decisions because they were the default option. Whereas in yesterday's set of comments, I had a few strong points, but I hadn't reached a strong conclusion overall before I entered the discussion.

I think this raises an important problem with our discussion norms. If you've figured out that the community has made a big mistake, you are at a disadvantage if you've managed to "read ahead of the class" because effective persuasion requires you to emulate ignorance of information more than a few inferential steps ahead of the audience.

Comment by bendini on Propagating Facts into Aesthetics · 2019-12-19T18:25:55.293Z · score: 10 (5 votes) · LW · GW

I like this post a lot, but the example debates that look like intractable aesthetic disagreements seem to be missing 2 key ideas that are preventing resolution:

1. Shared verbal acknowledgement that regardless of the aesthetic considerations, the status quo is not working. If you're debating the merits of "everyone pitch in" vs "specialise and outsource" and you've failed to recognise that people are generally not clearing up after themselves or funnelling money towards the problem, your first order of business shouldn't be to get into a long-winded philosophical debate over aesthetics.

2. Overlooking resource constraints and avoiding fuzzy quantification. In the case of clean code vs quick hacks, unless you are writing the code just for fun, what each person prefers is much less relevant than the business-world constraints you are under. If you are under extreme time pressure and the thing must be done, the choice is quick hacks or death. If you are trying to "scale up" but there are no impending deadlines, whether a piece of code should be written cleanly depends on how reliant you expect to be on that code in future, how clearly you understand the problem it is solving, how much longer it will take to do things properly and the opportunity cost of that time. While you won't have precise answers to these things, they will be a lot more tractable than reconciling aesthetic disagreements.

Comment by bendini on "You can't possibly succeed without [My Pet Issue]" · 2019-12-19T04:18:40.745Z · score: 3 (2 votes) · LW · GW

For what it's worth, I think that post made the right tradeoff. There will probably be some people who glossed over it due to the lack of examples, but I think that was an acceptable price to pay.

What I'm referring to is when the community does this by default, not when the author has explicitly weighed up the pros and cons. Not wanting to get into an issue is okay in isolation, but when everyone does this it impedes the flow of information in ways that make it even more difficult to avoid talking past each other.

Comment by bendini on "You can't possibly succeed without [My Pet Issue]" · 2019-12-19T03:44:47.842Z · score: 18 (5 votes) · LW · GW

I don't disagree with that, but I do think one reason we find it difficult to form good models and coordinate is that there's an insane norm of only ever talking about issues in abstract terms like X and Y. Maybe the issue in question here is super sensitive, since I have no idea what you are talking about, but "raising awareness of general patterns" often seems to be used as a (mostly subconscious) justification for avoiding the object level because it might make someone important look bad.

Comment by bendini on "You can't possibly succeed without [My Pet Issue]" · 2019-12-19T02:22:26.242Z · score: 9 (2 votes) · LW · GW


My first reaction was thinking of a few scenarios that were analogous to the original framing, one example being "if it takes you years to coordinate the local removal of [obvious abuser], why do you think you will be able to coordinate safe AI development on a global scale?"

This isn't a pet issue of mine, but I suspect it is important to be able to say things like this. I guess my overall view is that crystallising this pattern might be putting duct tape over a more structural problem.

Comment by bendini on "You can't possibly succeed without [My Pet Issue]" · 2019-12-19T01:48:50.226Z · score: 10 (3 votes) · LW · GW

I have no trouble believing that this is a common thing to hear if you're in a position of power, but what about situations where this is correct? After all, if it were never correct, people would never find it persuasive.

Are there any heuristics you use to figure out when this is likely to be true?

Comment by bendini on More Dakka · 2019-12-09T10:47:56.511Z · score: 1 (1 votes) · LW · GW

I'm reading this again now because I remember liking it and wanted to link it in something I'm writing, however:

Yes, some countries printed too much money and very bad things happened, but no countries printed too much money because they wanted more inflation. That’s not a thing.

That is absolutely a thing that some governments do. Even if we disregard hyperinflation, when a government's tax brackets, spending commitments and sovereign debt are denominated in nominal currency and it needs more money for stuff, the political cost of high inflation is sometimes less than it would be to raise taxes, cut spending or default on bonds.

Comment by bendini on Karate Kid and Realistic Expectations for Disagreement Resolution · 2019-12-09T09:12:59.244Z · score: 4 (2 votes) · LW · GW

(Site meta: it would be useful if there was a way to get a notification for this kind of mention)

Some thoughts about specific points:

the whole point of this sequence is to go "Yo, guys, it seems like we should actually be able to be good at this?"

This is true for the sequence overall, but this post and some others you've written elsewhere follow the pattern of "we don't seem to be able to do the thing, therefore this thing is really hard and we shouldn't beat ourselves up about not being able to do it" that seems to come from a hard-coded mindset rather than a balanced evaluation of how much change is possible, how things could be changed and whether it was important enough to be worth the effort.

I think the mindset of "things are hard, everyone is doing the best we can" can be very damaging, as it reduces our collective agency by passively addressing the desire for change in a way that takes the wind out of its sails.

There is a risk that if you try earnestly to look at the evidence and change your mind, but your partner is just pushing their agenda, and you don't have some skills re: "resilience to social pressure", then you may be sort of just ceding ground in a political fight without even successfully improving truthseeking.

Resilience to social pressure is part of it, but there also seem to be a lot of people who lack the skill to evaluate evidence in a way that doesn't bottom out at "my friends think this is true" or "the prestigious in-group people say this is true".

It seems like having some kind of mutually-trustable-procedure for mutual "disarmament" would be helpful.

A good starting point for this would be listing out both positions in a way that orders claims separately, ranked by importance, and separating the evidence for each into 1) externally verifiable 2) circumstantial 3) non-verifiable personal experience 4) intuition.

if one person says "this UI looks good" and another person says "this UI looks bad", there's an aspect of that that doesn't lend itself well to "debate"

I've had design arguments like this (some of them even about LW), but my takeaway from them was not that this can't be debated, but that:

1) People usually believe that design is almost completely subjective

2) Being able to crux on design requires solving 1 first

3) Attempts to solve 1 are seen as the thin end of the wedge

4) If you figure out how to test something they assumed couldn't be tested, they feel threatened by it rather than see it as a chance to prove they were right.

5) The question "which design is better" contains at least 10 cruxable components which need to be unpacked.

6) If the other person doesn't know how to unpack the question, they will see your attempts as a less funny version of proving that 1 = 2.

7) People seem to think they can bury their heads in the sand and the debate will magically go away.

Arguments about design have a lot of overlap with debates about religion, but if you're trying to debate "does God exist?" at face value rather than questions like "given the scientific facts we can personally verify, what is the probability that God exists?" and "regardless of God's existence, which religious teachings should we follow anyway?" then it is unlikely to make progress.

Comment by bendini on LW For External Comments? · 2019-12-06T04:43:35.729Z · score: 5 (3 votes) · LW · GW

I strongly support this suggestion.

Comment by bendini on Karate Kid and Realistic Expectations for Disagreement Resolution · 2019-12-05T09:52:19.000Z · score: 7 (3 votes) · LW · GW

a) that you don't think disagreements take a long time for the reasons discussed in the post

Disagreements aren't always trivial to resolve, but if you've been actively debating an issue for a month and zero progress has been made, either the resolution process is broken or someone is doing something other than putting maximum effort into resolving the disagreement.

b) that rationalists should easily be able to avoid the traps of disagreements being lengthy and difficult if only they "did it right".

Maybe people who call themselves rationalists "should" be able to, but that doesn't seem to be what happens in practice. Then again, if you've ever watched a group of them spend 30 minutes debating something that can be googled, you have to wonder what else they might be missing.

I'm concerned you'll be missing ways to actually solve disagreements in more cases by dismissing the problem as other people's fault.

It's true that if you are quick to blame others, you can fail to diagnose the real source of the problem. However, the reverse is also true. If the problem is that you or others aren't putting in enough effort, but you've already ruled it out on principle, you will also fail to diagnose it.

Something about this comment feels slightly off.

I'm not surprised that the comment feels off, it felt off to write it. Saying something that's outside the Overton window that doesn't sound like clever contrarianism feels wrong. (Which may also explain why people rarely leave comments like that in good faith.)

Comment by bendini on Karate Kid and Realistic Expectations for Disagreement Resolution · 2019-12-05T08:14:56.552Z · score: 9 (8 votes) · LW · GW

I'm glad this post was written, but I don't think it's true in the sense that things have to be this way, even without new software to augment our abilities.

It's true that 99% of people cannot resolve disagreements in any real sense, but it's a mistake to assume that because Yudkowsky couldn't resolve a months-long debate with Hanson and the LessWrong team can't resolve their disagreements that they're inherently intractable.

If the Yud vs Hanson debate was basically Eliezer making solid arguments and Hanson responding with interesting contrarian points because he sees being an interesting contrarian as the purpose of debating, then their inability to resolve their debate tells you little about how easy the disagreement would be to resolve.

If the LessWrong team is made up entirely of conflict-avoidant people who don't ground their beliefs in falsifiable predictions (this is my impression, having spoken to all of them individually), then the fact that their disagreements don't resolve after a year of discussion shouldn't be all that surprising.

The bottleneck is the dysfunctional resolution process, not the absolute difficulty of resolving the disagreement.

Comment by bendini on Is daily caffeine consumption beneficial to productivity? · 2019-11-27T14:25:29.218Z · score: 5 (4 votes) · LW · GW

I deliberately avoided giving a citation because I don't remember which paper I read that confirmed it, so searching for one that backs up a cached memory to appear more rigorous would be bad epistemic practice.

Instead, my confidence that this is true rests on several pieces of circumstantial evidence:

  • My experience of it working this way for other drugs.
  • The SSC survey, where the majority of people reported not becoming dependent on other stimulants at therapeutic doses over the long term.
  • The fact that coffee has become universal to workplace culture (metis knowledge).
  • The fact that even if coffee gave you a focus boost that nets to 0, being able to borrow energy from the 2/3 of the day you aren't working into the 1/3 that you are would still boost net productivity.
  • The fact that I used to believe I was being clever by never using caffeine because of the idea that there is no free lunch, and changed my mind a few years ago.
  • Other things I can't recall right now but know I could recall if I sat down for several hours trying to remember them. (How could I possibly know this? It happens on a regular basis.)

I don't necessarily expect you to believe it, but it occurred to me that the implicit choice between:

A. showing you the watertight meta-analysis that I've spent a week going over with a fine-tooth comb, and

B. saying nothing at all and likely leaving you with no responses, because everyone else assumes a response needs to do A to be worth giving

is one of the reasons why LessWrong is a terrible place to find practical knowledge.

I'd be happy to bet on it being true at at least 4 to 1 odds, although you will have to devise an objective test that can be judged true or false on the original question rather than the proxy question. Then again, even saying I'm willing to bet doesn't mean much as a bet of $100 still wouldn't be worth your time to organise on a financial basis. This makes the bet less likely and therefore boosts the credibility of an argument with a costly signal that's actually far less costly than it appears. (This is currently an open problem.)

Comment by bendini on Is daily caffeine consumption beneficial to productivity? · 2019-11-26T13:31:58.930Z · score: 2 (2 votes) · LW · GW

Yes. When it comes to tolerance of stimulant drugs, there is such a thing as a free lunch.

While you will get some tolerance, and ceasing use will give you some withdrawal effects, tolerance will eventually plateau unless you are taking far more than you should be. After tolerance is accounted for, using caffeine will still give you a higher baseline of productivity than taking nothing at all.

Comment by bendini on Do you get value out of contentless comments? · 2019-11-22T06:47:19.868Z · score: 3 (2 votes) · LW · GW

I don't get any value out of content-free comments, but a sentence or two explaining what someone liked about my post gives me better feedback than an anonymous upvote. And even if it's just a phatic "Good post!", just knowing who said it can be quite useful.

Comment by bendini on Comment, Don't Message · 2019-11-18T23:09:20.687Z · score: 2 (2 votes) · LW · GW

I'd like to second this and say my experience has also been completely different.

There are some conversations that make sense to have 1v1, and most of the value I've gained from writing things has been when someone contacts me in private.

It does seem that while LessWrong doesn't actively discourage it, the site's UX makes it quite inconvenient to have those interactions.

Comment by bendini on Arguing about housing · 2019-11-16T02:34:52.249Z · score: 14 (5 votes) · LW · GW
Squeezing everyone into college-dorm-style housing would certainly reduce living costs, but people who want that can already do it. Most don't.

You're right that dorm-style housing is an existing option, and most people don't want to live in it for obvious reasons. However:

  • There isn't going to be a one-size-fits-all solution to high housing costs, but that's okay. Housing isn't an all-or-nothing problem, progress can be made on the margin. If you come up with something that gets on the front page of Hacker News and receives 500 comments saying it's the worst idea ever, but just 50 people find it works for their unique circumstances and save $200/month over the next 3 years because of it, you'll have made the problem $360,000 smaller.
  • While I would never want to live in a PodShare, hundreds of Californians seem to think paying $1200/month to sleep in an open-plan room with 20 strangers is better than their current alternatives. The fact that this is true should indicate some *very* low-hanging fruit here.
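The $360,000 figure in the first bullet is just the marginal savings multiplied out. A quick sketch, using only the illustrative numbers from the comment above (not real data):

```python
# Back-of-the-envelope: marginal impact of a niche housing solution.
# All numbers are the hypothetical ones from the comment, not measurements.
adopters = 50          # people the idea actually works for
monthly_saving = 200   # $ saved per person per month
months = 3 * 12        # three years

total_saved = adopters * monthly_saving * months
print(f"${total_saved:,}")  # → $360,000
```

The point of the arithmetic is that even a widely mocked idea shrinks the aggregate problem linearly in the number of people it happens to fit.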
Your solution is... a bunk bed with cabinets built in?

You could call it a loft bed for adults, but that doesn't tell you why anyone would want one.

It's not so much a loft bed as a system designed from first principles around the specific constraints of a freelancer aged 20-30 renting a small room (or half of a large one) inside a grouphouse. Considerations such as:

  • Privacy
  • Having somewhere for your clothes and suitcase
  • Having a secure place to store valuables and sensitive documents
  • Having somewhere to dry your towel
  • Having a romantic partner be able to stay the night
  • Being able to have sex without waking up the whole house
  • Low ceilings
  • Being able to have sex without one of you hitting their head on the ceiling
  • Not having to crouch when walking under the bed if you're 6ft2
  • Having a work-space that helps you to be productive
  • Having no control over the location of sockets or lights
  • Not being able to change the landlord's curtains
  • Not being able to put any holes in the wall
  • Being able to bring the system with you when you move and having it fit in your new room
  • Being able to build the system yourself
    • Without knowing the exact dimensions of the room beforehand
    • With cheap and commonly available materials
    • With only handyman-level skills and a few basic power tools
    • Being able to cut the wood and do most of the assembly outside/in a garage
    • Being able to get the components through a bedroom doorway
    • Being able to assemble them like an IKEA flatpack and have everything fit together correctly
    • Having it look neat and precise enough that people don't assume you made it yourself
Comment by bendini on Arguing about housing · 2019-11-15T21:00:54.393Z · score: 9 (5 votes) · LW · GW

(Thoughts translated from private message)

As I've said before, if political solutions were viable then this would have been solved 5+ years ago.

Addressing the problem will require an approach that doesn't assume you can build more housing in the expensive metro areas with good jobs. While that doesn't leave many options, I can think of at least 3 that are somewhat practical:

1. Find ways to increase the quality of the average grouphouse so more people want to live in them.

2. Coordinate groups of people to move from NIMBY cities with 10/10 jobs and 10/10 house prices to YIMBY cities with 8/10 jobs but 3/10 house prices.

3. Find ways to reduce the overall cost of living that don't require someone to expend much effort per $ saved, reduce their quality of life or shift negative externalities onto someone else's balance sheet.

The project I've been running (Kernel) has been doing some research on this, and we've found potential solutions in all 3 areas. To give one example, if you found a way to increase the efficiency of a grouphouse bedroom so everything that would usually take 150ft2 can be done in 75ft2 without throwing important considerations under the bus, someone would only need to rent half as much room to maintain the same quality of life.

(Yes, I have found a way to do this. Yes, I accounted for that consideration. And that one. That one too. Yes, this is designed to bait everyone into asking questions.)

Comment by bendini on Make more land · 2019-10-23T20:10:20.735Z · score: 17 (5 votes) · LW · GW

The problem with this proposal is not that it's a bad idea.

The problem is that you--a smart individual with no domain experience--can come up with an extremely sensible and pragmatic way to address a problem that:

  • Is causing over a trillion dollars of economic misallocation.
  • Has existed for 2 decades and gotten significantly worse over time.
  • Has reached a crisis point such that it has visceral effects on the day-to-day life of millionaires that they can't buy their way out of (e.g. faeces everywhere, being attacked by crazy homeless people).
  • Has a laundry list of founders, VCs and tech CEOs desperately trying to solve it.

...yet is still not solved. Which should make you wonder, is a lack of sensible ideas really the main bottleneck?

Comment by bendini on Deleted · 2019-10-23T19:08:45.003Z · score: 32 (17 votes) · LW · GW

I have no particular interest in sharing any of my own, but there does seem to be a bad dynamic going on here that is worth pointing out.

Some people are downvoting the comments that they find abhorrent. This would normally be fine, but in this case it punishes people for correctly following instructions.

I've done what I can to remedy this by giving a strong upvote to the responses with low scores, but LessWrong needs to have a way to deal with this in future so the platform doesn't disincentivize the very behaviours it wants to encourage.

Comment by bendini on Noticing Frame Differences · 2019-09-30T01:57:48.534Z · score: 5 (4 votes) · LW · GW

I'm interested to find out what worked for you, but I suspect that the root cause of failure in most cases is lacking enough motivation to converge. It takes two to tango, and without a shared purpose that feels more important than losing face, there isn't enough incentive to overcome epistemic complacency.

That being said, better software and social norms for arguing could significantly reduce the motivation threshold.

Comment by bendini on Are there technical/object-level fields that make sense to recruit to LessWrong? · 2019-09-30T00:56:28.481Z · score: 4 (4 votes) · LW · GW

Aside from what's already here, I can think of a few "character profiles" of fields that would benefit from LessWrong infrastructure:

  • Hard fields that are in decent epistemic health but could benefit from outsiders and cross pollination with our memeplex (e.g. economics).
  • Object level things where outside experts can perform the skill but the current epistemological foundations are so shaky that procedural instructions work poorly (e.g. home cooking).
  • Things that are very useful where good information exists but finding it requires navigating a lemon market (e.g. personal finance).
  • Fields that have come up regularly as inputs into grand innovations that required knowledge from multiple areas (e.g. anything Elon needed to start his companies).

I don't think the bottleneck is lack of recruitment though, the problem is that content has no place to go. As you rightly point out, things that aren't interesting to the general LW audience get crickets. I have unusual things I really want to show on LessWrong that are on their 5th rewrite because I have to cross so many inferential gaps and somehow make stuff LW doesn't care about appealing enough to stay on the front page.

Comment by bendini on Meetups: Climbing uphill, flowing downhill, and the Uncanny Summit · 2019-09-23T04:31:07.173Z · score: 3 (2 votes) · LW · GW

The somewhat cynical take is that open attendance events (and LW) are like group projects where organizers are competing for attendees. This makes organizing events a servant role rather than a leadership role: if you expend the resources to put on an interesting talk and offer free pizza, people will think they've done their bit by showing up and adding entropy. Just as people balk at paying for software now that Google et al. have figured out it's more efficient to take it out of your back pocket via advertising, people treat meetups the same way, because organizers have zero leverage when attendees can go to some other meetup with free pizza that's a recruitment funnel for a tech company.

Fixing this will require more than words alone. Informing attendees that the meetup is a "take it seriously" meetup does not cause them to take it seriously because there's no way at present to give those words credibility.

(Unrelated: I stumbled on this post by happenstance only to see a comment I made form a key part of it. This seems exactly like the sort of thing that should go in a user's notifications)

Comment by bendini on Meetups as Institutions for Intellectual Progress · 2019-09-17T14:02:23.980Z · score: 7 (6 votes) · LW · GW

As someone who has organised meetups outside of the main hubs my experience matches pretty much everything said here. The current format is not ideal for accomplishing anything, so much so that I've stepped down from organising mine because they were providing so little value. It's a sad state of affairs, but from what I can tell the majority are content with them being low-effort social groups.

In terms of coordinating between regional hubs I would suggest opting for LessWrong instead of Facebook. Many people simply won't see the content due to either algorithms or newsfeed blockers, and Facebook no longer maintains the monopoly over everyone's social calendar that it had just 2 years ago.

Comment by bendini on A new rationality YouTube channel emerges · 2019-09-06T05:46:32.830Z · score: 2 (2 votes) · LW · GW

Focusing on video quality instead of talking to a webcam is a differentiator, so that should raise your odds of success.

Comment by bendini on A new rationality YouTube channel emerges · 2019-08-30T08:17:34.476Z · score: 15 (7 votes) · LW · GW

I disagree.

If someone specifically asks for criticism and I have something to say, I like to treat them like an adult instead of assuming they're just repeating tribal shibboleths. This also has the bonus of punishing people who are insincere about wanting criticism while rewarding those who honestly seek it.

While it's possible to gain useful skills from a failed project, opportunity costs are real. I don't think people should be risk averse (quite the opposite), but I do think people should put a bit of thought into a viable strategy before committing the time needed to determine if a project will succeed.

Yes, I'm aware that my comment resembles the snark you get on Hacker News, but there is a distinction: I'm saying "There's a pile of skulls on this mountain; if you are going to climb it, figure out how to avoid making the same mistakes."

Comment by bendini on A new rationality YouTube channel emerges · 2019-08-29T07:40:36.400Z · score: 10 (6 votes) · LW · GW

Critical question: if you've done some cursory research, you'll know that you aren't the first person to think of this. There have been somewhere between 10 and 100 channels started that focused on the Sequences, with only a couple achieving minor success (e.g. Julia Galef's channel). Given this reality, what do you plan to do differently so this doesn't end up as a waste of time?

Comment by bendini on Raemon's Shortform · 2019-08-15T14:20:42.476Z · score: 1 (3 votes) · LW · GW

The fact that such debates can go on for 500 pages without significant updates from either side points towards a failure to 1) systematically determine which arguments are strong and which are distractions, and 2) restrict the scope of the debate so opponents have to engage directly rather than shift to more comfortable ground.

There are also many simpler topics that could have meaningful progress made on them with current debating technology, but they just don't happen because most people have an aversion to debating.

Comment by bendini on Diversify Your Friendship Portfolio · 2019-07-10T03:04:58.991Z · score: 22 (9 votes) · LW · GW

I see how the idea is sensible for some, but I've never felt satisfied with compartmentalised friendships where I share a small facet of myself with each group.

In addition to diversification being somewhat alienating, there are some benefits of tight-knit groups you'd struggle to replicate in a diversified social portfolio:

  • Lowered social transaction costs - when you divide your social time between fewer people you have more time to learn how best to work with each person
  • Easier trust coordination - repeated interactions over a long period of time mean you have a lot of past data to evaluate someone's trustworthiness
  • Emotional investment - loyalty is rational when each person isn't a replaceable commodity. Having tough conversations that will cause friction but pay off in the long run is worth it if there's actually going to be a long run.
Comment by bendini on "Rationalizing" and "Sitting Bolt Upright in Alarm." · 2019-07-09T07:53:07.270Z · score: 12 (3 votes) · LW · GW

Meta beliefs about jargon: There are some benefits to using a new word free of existing connotations, but costs often exceed the benefits. In the first stage only a few insiders know what it means. In the second stage you can use it with most of the community, but you need to translate it for casual members and a general audience. In the third stage the meaning becomes diluted as the community starts using it for everything, so you're basically back where you started.

In addition to the tendency for jargon to be diluted in general, jargon that's shorthand for "I see pattern X and that has very important implications" will be very powerful, so it's almost certain to be misused unless there are real costs (i.e. social punishments) for doing so. A better method may be to use existing phrases that are more linguistically stable.

Some draft proposals:

  • Carl is engaging in motivated cognition -> Carl has a conflict of interest/Carl is deceiving himself/Carl is quite attached to this belief (depending on which one is applicable)
  • Carl is wrong about something and it's influencing others -> Carl is a bad influence
  • Everyone in the community is saying X -> Our community has a systemic bias regarding idea X
  • Alice is "blatantly" wrong about X -> Alice has substantial disagreements with us about X

Most of these proposals sound quite confrontational, but that's inherent to what's being communicated. You can't use jargon for "Alice is saying dangerous things" within earshot of Alice and avoid social repercussions if the meaning is common knowledge.

Comment by bendini on LW authors: How many clusters of norms do you (personally) want? · 2019-07-08T07:07:33.137Z · score: 3 (3 votes) · LW · GW

I generally prefer norms that look like sparring - anything that's relevant is fair game, anything on the boundary of personal attack is fair game so long as you can make the case for its relevance.

Personal preferences aside, the biggest norm problem I've encountered is when people make an assertion based on priors that are taboo to discuss but you can't make a solid counterargument without addressing them.

Comment by bendini on Being Wrong Doesn't Mean You're Stupid and Bad (Probably) · 2019-06-30T01:08:56.667Z · score: 9 (7 votes) · LW · GW

This post relies on several assumptions that I believe are false:

1. The rationalist community has managed to avoid bringing in any outside cultural baggage, so when someone admits they were wrong about something important (and isn't making a strategic disclosure), people will only raise their estimate of incompetence by a Bayesian 0.42%.

2. The base rate of being "stupid and bad" by rationalist standards is 5% or lower (The sample has been selected for being better than average, but the implicit standards are much higher)

3. When people say they are worried about being "wrong" and therefore "stupid" and "bad", they are referring to things with standard definitions that are precise enough to do math with.

4. The individuals you're attempting to reassure with this post get enough of a spotlight that their 1 instance of publicly being wrong is balanced by a *salient* memory of the 9 other times they were right.

5. Not being seen as "stupid and bad" in this community is sufficient for someone to get the things they want/avoid the things they don't want.

6. In situations where judgements must be made with limited information (e.g. job interviews) using a small sample of data is worse than defaulting to base rates. (Thought experiment: you're at a tech conference and looking for interesting people to talk to, do you bother approaching anyone wearing a suit on the chance that a few hackers like dressing up?)

Comment by bendini on Discussion Thread: The AI Does Not Hate You by Tom Chivers · 2019-06-19T16:36:57.592Z · score: 14 (8 votes) · LW · GW

Just finished the book today, I'm somewhat impressed by how it came out given the suspicion many people had.

The author managed to take the AI arguments seriously while also striking a balance between writing an honest account of his interactions with the community, keeping it interesting for the typical reader and avoiding lazy potshots against nerds.

My only wish was for a section on the practical side of rationality, but that side has been widely neglected by many of the hardcore fans, so its absence is hardly a fair critique of a book about AI safety.

Comment by bendini on The Craft & The Community - A Post-Mortem & Resurrection · 2018-04-27T04:21:32.631Z · score: 11 (9 votes) · LW · GW

The amounts are disputed due to damages resulting from Greg's personal negligence, and if all points in our counterclaim for damages hold water, you would actually owe thousands to us. After the amounts were disputed, you rebuffed all claims as trivial and gave us 36 hours to pay up or else. Since then you have taken this to every platform you could find, including contacting one person's startup team members and potential seed accelerators, and another person's immediate family, in an attempt to pressure them into compliance.

With regards to the vision, please don't pretend to mourn something you actively opposed during the nine months you shared a house with us.

Comment by bendini on Updates from Boston · 2017-12-05T20:26:02.392Z · score: 10 (3 votes) · LW · GW

I like this post, and would like to see more posts like this.

Did you discover why Order of the Sphex failed?

Comment by bendini on Civility Is Never Neutral · 2017-11-26T04:02:02.420Z · score: 7 (4 votes) · LW · GW

I agree with the idea that civility norms as they are currently implemented are never neutral, but not that neutral enforcement is humanly impossible.

Incisive questioning of a locally unpopular view is called “being insightful”; the proponent of a locally unpopular view being triggered by it is called “letting your emotions run away with you in a rational discussion” and “blowing up at someone for no reason.” Incisive questioning of a locally popular view is called “uncharitable” and “incredibly rude”; the proponent of a locally popular view being triggered by it is called “a reasonable response to someone else being a jerk.” It all depends on whether the people doing the enforcement find it easier to put themselves in the shoes of the upset person or the person doing the questioning.

It does, if the enforcers see themselves as adjudicators of good taste rather than the people who execute the rules other people have agreed on. I suppose this is one of the few situations where not questioning authority would actually be beneficial.

It's also worth stating that if you want more than just the pretense of civil discourse, a person who retaliates against a harsh but true criticism of their idea has to be reprimanded, not in spite of but because the audience is sympathetic to their emotional reaction.

Conversely, Great-Aunt Bertha skipped school in the fifties to go get drunk with sailors and was the first woman in the Hell’s Angels. Great-Aunt Bertha thinks it is very rude that Great-Aunt Gertrude keeps saying “a-HEM” five times a sentence just because she’s talking the way she normally talks. It’s not polite to interrupt what people are saying by getting offended and storming out. And that whole “sir” and “ma’am” business is actually offensive. Children are people and it is wrong to treat them as if they are subservient to adults.
Great-Aunt Bertha and Great-Aunt Gertrude will have some difficulty agreeing about what is polite behavior at the Thanksgiving table.

I'm not sure if this is true of your typical Aunt Bertha, but in my experience everyone, including the more Bertha-ish types such as myself, agrees that politeness means something approximating Aunt Gertrude. The real question is not whether politeness is completely subjective, but what point along the continuum between blunt honesty and hyper-politeness is best in a given situation.

This isn't the same for respect, as that is an internal reaction, rather than a consensus based social norm. Many hacker-types will only take the time out of their day to poke holes in an idea if it at least has some parts that are worth saving. This makes critisism a mark of respect in those subcultures, in opposition to almost everywhere else.

On the other hand, many aspects of etiquette have nothing to do with being nice to people but instead are ways of signalling that one is upper-class, or at least a middle-class person with pretensions of same. (Most obviously, anything about what forks one uses; more controversially, rules about greetings, introductions, when to bring gifts, etc.) You wind up excluding poor and less educated people, which people in many spaces don’t want.

I'd like to use this to register an informal complaint that the norms in the rationalist community, including the ones on discourse, contain a large proportion of things that suit the aesthetic sensibilities of WASPy middle-class intellectuals rather than what's instrumentally rational for achieving most of our stated goals.