I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.
AFAICT jQuery UI is something like a component library, which is (possibly) a piece of what you might build this out of, but not the thing itself (which is to say, a well-functioning, maintainable, complete website).
Although I don't think it's really designed to do the sort of thing I'm talking about here.
A fairly common mod practice has been to fix typos and stuff in a sort of "move first and then ask if it was okay" thing. (I'm not confident this is the best policy, but it saves time/friction, and meanwhile I don't think anyone has had an issue with it). But, your preference definitely makes sense and if others felt the same I'd reconsider the overall policy.
(It's also the case that adding an image is a bit of a larger change than the usual typo fixing, and may have been more of an overstep of bounds)
In any case I definitely won't edit your stuff again without express permission.
I know I'll go to programmer hell for asking this... but... does anyone have a link to a github repo that tried really hard to use jQuery to build their entire website, investing effort into doing some sort of weird 'jQuery based components' thing for maintainable, scalable development?
People tell me this can't be done without turning into terrifying spaghetti code but I dunno I feel sort of like the guy in this xkcd and I just want to know for sure.
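I don't have a repo to link either, but here's a rough sketch of what people usually mean by the "jQuery component" pattern: a factory function that owns one root element, keeps its state in a closure, and exposes a small public API. The names here (`Counter`, `fakeElement`) are mine, not from any real library, and `fakeElement()` is a tiny stand-in for a jQuery-wrapped DOM node so the sketch runs without a browser; with real jQuery you'd use `$(selector)` and call `.text()` / `.on()` on it.

```javascript
// Stand-in for a $-wrapped element, so this runs without a browser.
function fakeElement() {
  return { text: "" };
}

// One "component": a factory that owns one element and keeps its
// state private, rather than stashing it in globals or on the DOM.
function Counter(el) {
  let count = 0;

  function render() {
    el.text = "Count: " + count; // real jQuery: el.text("Count: " + count)
  }

  function increment() {
    count += 1;
    render();
  }

  render();
  // Only the public API escapes the closure.
  return { increment, getCount: () => count };
}

const counterEl = fakeElement();
const counter = Counter(counterEl);
counter.increment();
counter.increment();
console.log(counterEl.text); // "Count: 2"
```

Whether this scales to a whole site without turning into spaghetti is exactly the open question; the discipline is all in the convention, since nothing in jQuery itself enforces the boundaries.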
I edited the image into the comment box, predicting that the reason you didn't was because you didn't know you could (using markdown). Apologies if you prefer it not to be here (and you can edit it back if so)
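For reference, markdown's inline image syntax looks like this (the URL here is a placeholder, not the actual image):

```markdown
![alt text describing the image](https://example.com/image.png)
```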
I've found the "set a 5 minute timer" meme to not-quite-work because it takes me like 15 minutes just to get all my cached thoughts out, before I get to anything original. But yeah this basic idea here is a big part of my "actually thinking for real" toolkit.
Once an experienced analyst has the minimum information necessary to make an informed judgment, obtaining additional information generally does not improve the accuracy of his or her estimates. Additional information does, however, lead the analyst to become more confident in the judgment, to the point of overconfidence
Experienced analysts have an imperfect understanding of what information they actually use in making judgments. They are unaware of the extent to which their judgments are determined by a few dominant factors, rather than by the systematic integration of all available information. Analysts actually use much less of the available information than they think they do
Example Experiment: How many variables are relevant to betting on horses?
Eight experienced horserace handicappers were shown a list of 88 variables found on a typical horse-past-performance chart. Each handicapper identified the 5 most important items of information—those he would wish to use to handicap a race if he were limited to only five items of information per horse. Each was then asked to select the 10, 20, and 40 most important variables he would use if limited to those levels of information.
At this point, the handicappers were given true data (sterilized so that horses and actual races could not be identified) for 40 past races and were asked to rank the top five horses in each race in order of expected finish. Each handicapper was given the data in increments of the 5, 10, 20 and 40 variables he had judged to be most useful. Thus, he predicted each race four times—once with each of the four different levels of information. For each prediction, each handicapper assigned a value from 0 to 100 percent to indicate degree of confidence in the accuracy of his prediction.
When the handicappers’ predictions were compared with the actual outcomes of these 40 races, it was clear that average accuracy of predictions remained the same regardless of how much information the handicappers had available.
3 of the handicappers showed less accuracy as the amount of information increased
2 improved their accuracy
3 were unchanged.
All, however, expressed steadily increasing confidence in their judgments as more information was received.
Partial review / thoughts / summaries of "Psychology of Intelligence Analysis" (Work in progress)
This book generally reads as if a CIA analyst wrote "Thinking Fast and Slow" with CIA analysts as the target audience (although written in 1999, a decade earlier). Mostly it's arguing that the CIA should take cognitive biases and other intelligence failure modes seriously, and implement study and training to improve the situation. He has some overall suggestions on how to go about that, which I didn't find very surprising.
I generally agree with the "more explanation is better, all else being equal". A background belief that has me less-than-fully-enthusiastically agreeing with you is that a stronger norm of "always include explanations and caveats like this" has a decent chance of causing people to not bother writing things at all (esp. if they're on a busy day).
I guess I also just thought it was totally fine for you to ask me for additional information (and I'm updating that it may be more common than I thought for the OP phrasing to make people feel like they couldn't ask).
First: I accidentally quoted the wrong section (all three paragraphs were relevant, with the final "we can be reasonably sure the second example decreased value" being the most relevant bit). Not sure if that changes the rest of your comment. I've now updated the OP.
The most important bit of information I intended to communicate here is not the particular reasons to disagree with the framing, but simply the fact that there exist prominent LWers who would not agree that "we can be reasonably sure the second example (the restaurant canceling its AMF donations) decreased value."
I understand it being frustrating to not get to understand or discuss the reasons why, but it seems important for it to be a socially acceptable move to say "hey, your blanket statement does not apply to me, or to people I know of" without having to take time to explain why.
In this case my own answer of "am I up for being asked" is "you can certainly ask, and I may or may not get around to responding." Although I can say briefly that possible reasons here include 'you might not think AMF is net positive, or you might think the general practice of donating to things like AMF is not a good strategy.'
Just wanted to note that I am thinking about this exchange, hope to chime in at some point. I'm not sure whether I'm on the same page as Ben about it. May take a couple days to have time to respond in full.
A few years ago, EA was small, and it was hard to get funding to run even one organization. Spinning up a second one with the same focus area might have risked killing the first one.
By now, I think we have the capacity (financial, coordinational, and in human talent) that this is less of a risk. Meanwhile, I think there are a number of benefits to having more, better, friendly competition.
A few reasons I think competition is good:
Diversity of worldviews is better. Two research orgs might develop different schools of thought that lead to different insights. This can lead to more ideas as well as avoiding the tail risks of bias and groupthink.
Easier criticism. When there's only one org doing A Thing, criticizing that org feels sort of like criticizing That Thing. And there may be a worry that if the org lost funding due to your criticism, That Thing wouldn't get done at all. Multiple orgs can allow people to think more freely about the situation.
Competition forces people to shape up a bit. If you're the only org in town doing a thing, there's just less pressure to do a good job.
"Healthy" competition enables certain kinds of integrity. Sort of related to the previous two points. Say you think Cause X is really important, but there's only one org working on it. If you think Org A isn't being as high-integrity as you'd like, your options are limited (criticize them, publicly or privately, or start your own org, which is very hard). If you think Org A is overall net positive, you might risk damaging Cause X by criticizing it. But if there are multiple Orgs A and B working on Cause X, there are fewer downsides to criticizing one of them. (An alternate framing is that maybe criticism wouldn't actually damage Cause X, but it may still feel that way to a lot of people, so getting a second Org B can be beneficial.) Multiple orgs working on a topic make it easier to reward good behavior.
In particular, if you notice that you're running the only org in town, and you want to improve your own integrity, you might want to cause there to be more competition. This way, you can help set up a system that creates better incentives for yourself, which remain strong even if you gain power (which may be corrupting in various ways).
There are some special caveats here:
Some types of jobs benefit from concentration.
Communication platforms sort of want to be monopolies so people don't have to check a million different sites and facebook groups.
Research orgs benefit from having a number of smart people bouncing ideas around.
See if you can refactor a goal into something that doesn't actually require a monopoly.
If it's particularly necessary for a given org to be a monopoly, it should be held to a higher standard – both in terms of operational competence and in terms of integrity.
If you want to challenge a monopoly with a new org, there's likewise a particular burden to do a good job.
I think "doing a good job" requires a lot of things, but some important things (that should be red flags to at least think about more carefully if they're lacking) include:
Having strong leadership with a clear vision
Make sure you have a deep understanding of what you're trying to do, and a clear model of how it's going to help
Not trying to do a million things at once. I think a major issue facing some orgs is lack of focus.
Probably don't have this be your first major project. Your first major project should be something it's okay to fail at. Coordination projects are especially costly to fail at because they make the job harder for the next person.
You have a restaurant. It's a local monopoly, and you're running decent profits. A new restaurant opens down the street. Some of your customers are diverted, so you lower your prices. You can no longer buy that sweet Ferrari.
You have a restaurant. It's a local monopoly, and you're running decent profits. A new restaurant opens down the street. Some of your customers are diverted, so you lower your prices. You have to cancel most of your donations to AMF.
In the first example, we can be reasonably sure that competition increased value. In the second example, we can be reasonably sure that competition decreased value.
Just flagging that I know people who would disagree strongly with this framing, fwiw.
FYI, a feature I expect to build in the not-too-distant future is "subscribe to author", which might address that particular use case more directly. Curious if that feels like it's pointing in a direction that's useful to you.
We definitely should make an overall better pagination system. And given how tucked away it is and that it's probably used mostly by power-users who want to read everything I probably agree it should show negative karma comments.
I agree that /allComments should be linked from _somewhere_ although no, it's not meant to be a primary use case. We switched to the Recent Discussion setup because "show all comments in order" naturally creates cascading effects where a single huge thread becomes even more huge because it's the only thing most readers see in the comments section.
Most users don't read through every single comment. It probably makes sense to me to have a section that works well for dedicated power-user-comment readers, but I expect it'd be a feature that maybe 5-20 people would use total. (If I turned out to be wrong about that I might prioritize it higher)
First: note that you can turn this off in your user settings. (Habryka mentioned this in the stickied post about it)
The reason we're pushing meetups this month is because of SlateStarCodex Meetups Everywhere. This is not a permanent change to the site, it's a temporary solution to a fairly difficult problem. The point is not that people who aren't interested in meetups should be forced to engage with them, the problem is that we do want everyone who is interested in meetups to know that there's a major meetup coordination event happening this month.
(I do think we should make the "hide map" button easier to access and will probably add something like that next week)
There's a generally hard problem where LessWrong as a site does a lot of different things, and there's really only one space to put things where everyone will see them. We mostly haven't emphasized meetups over the past couple years because they don't quite make the cut of "top 2 things that we want everyone to see on the frontpage". (And meanwhile this has made it harder for meetup organizers to get traction)
I actually think the compromise solution of "once a year, for a few weeks, the site reminds everyone that meetups exist and gets them to sign up for notifications if they want them" might actually be the least-bad option. (Although what I notice just now is that the front-page map fails to have the most important call-to-action there, the "sign up for meetup announcements" button)
The frontpage map in general was somewhat rushed to be "in time to be useful to SSC everywhere", so there are probably some more optimizations we can do to it, both to make it easier for people to turn off if they don't care, and to actually sign up for notifications if they want.
It certainly seemed better than rapid-fire commenting.
I don't know whether it was better than not commenting at all – I spent this thread mostly feeling exasperated that after 20 hours of debate and doublecrux it seemed like the conversation hadn't really progressed. (Or at least, I was still having to re-explain things that I felt I had covered over and over again)
I do think Zack's final comment is getting at something fairly important, but which still felt like a significant topic shift to me, and which seemed beyond scope for the current discussion.
This is as good a time as any to note that the Opt Into Beta flag will now show you "Hover previews" for links to lesswrong posts (i.e. if a link would go to a LW post you'll see the title/author/karma/highlight)
I've found this post useful for crystallizing my own thinking, both about rules I follow (as a human taking actions), and even a bit helpful for grokking the overall Law vs Toolbox distinction.
Looking over the comments, I see some people seem to have found it less crystallizing than I did. I have a sense that there's a version of this post that could have bridged some inferential gulfs better. But also, there's not necessarily such a thing as a universally good explanation.
I have some sense that the post could be improved if it was given a second draft whose goal was specifically to find someone who didn't grok the first version of the post, and to explore various different explanations until it clicks.
But, also there's no such thing as a universally compelling explanation and maybe this is just a case where it was useful to add one more road to Rome that was helpful for at least some people.
One thing to doublecheck (I'm not sure this matters that much in your case but worth checking): if you open your Recommendations settings, you should see a checkbox that says "show only unread". Is that checked for you?
Random anecdote about time management and life quality. Doesn't exactly have an obvious life lesson.
I use Freedom.to to block lots of sites (I block LessWrong during the morning hours of each day so that I can focus on coding LessWrong :P).
Once upon a time, I blocked the gaming news website, Rock/Paper/Shotgun, because it was too distracting.
But a little while later I found that there was a necessary niche in my life of "thing that I haven't blocked on Freedom, that is sort of mindlessly entertaining enough that I can peruse it for awhile when I'm brain dead, but not so bottomlessly entertaining that it'll consume too much of my time." If I didn't have such a site, I would find one.
If I didn't have a standardized one, I would find one at random, and it'd be a bit of a crap shoot whether it was 5 minutes of eyes-glazed-skimming, or an hour of tabsplosioning.
The site I ended up settling on as my default blah-time was Kotaku, which was... basically RockPaperShotgun but worse. Gaming news that was sort of pointless and devoid of personality but juuuust over the threshold of "interesting enough that I actually wanted to read it."
Which I thought about a bit and then decided I reflectively endorsed.
Meanwhile, while I could access RockPaperShotgun in the evenings... I didn't, because, well, it wasn't that important and I was trying to cut back on videogames anyway.
Two years later... I dunno I found myself sort of thinking "you know, I wish I was passively gaining more interesting videogame news."
And... I unblocked RockPaperShotgun.
And I was surprised to notice
a) wow, most of the content was actually interesting, tailored to the sorts of games I like, and written in a more entertaining voice
b) there were only a couple articles per day, whereas Kotaku used a vaguely facebook-like algorithm of "most of the articles are crap, but every few articles there's a gem," which sort of gets me into a skinner-box that (I realized, in retrospect) probably had me reading _more_ than RPS did.
Huh. I don't think that's been true of any fantasy series I've read – magic usually has lots of limitations in the stories I've read, and often is less or comparably powerful to modern technology AFAICT.
Sort of small, but important: I had a few instances recently of someone pointing out a thing I did wrong or a way I could improve that made me defensive, but I took a breath and shifted gears into "okay, how can I learn from this?" instead of arguing.
I see "friendship" as basically requiring both. Someone recently asked me "how do people become friends" and I said: "Step one: have lots of sporadic, unplanned opportunities to bump into each other and figure out if you like each other in low stakes situations. Step two: go to Mordor together."
Doing things on purpose requires that you have people who are coordinated in some way
Being coordinated requires you to be able to have a critical mass of people who are actually trying to do effortful things together (such as maintain norms, build a culture, etc)
If you don't have a fence that lets some people in and doesn't let in others, and which you can ask people to leave, then your culture will be some random mishmash that you can't control
There are a few existing sets of fences.
The strongest fences are group houses, and organizations. Group houses are probably the easiest and most accessible resource for the "village" to turn into a stronger culture and coordination point.
Some things you might coordinate using group houses for:
Select people who actually have a decent chance of wanting to be good friends
Don't stress overmuch about getting the perfect set of people – overly stressing about finding the 'best' people to be friends with is one of the pathologies in the Bay area that make friendship harder. If everyone's doing it, no one has the ability to let a friendship actually grow, which takes time.
DO find people you enjoy hanging out with, talking to, and share some interests with
It may take multiple years to find a group house where everyone gets along with everyone. I think it makes sense, earlier on, to focus on exploring (i.e. if you've just moved to the Bay, don't worry about getting a group house culture that is a perfect fit), but within 3 years I think it's achievable for most people to have found a group house that is good for friendship.
Once you've got a group house that seems like a good longterm home, actually invest in it.
Do things with your roommates.
Allocate time, not just for solving logistical problems, but for getting on the same page emotionally
"Deep friendships often come from striving and growing together." Look for opportunities for shared activities that are active rather than passive and involve growing skills that you are excited about.
But, probably don't try to force this. Sometimes you're at the same stage in a life trajectory as someone else, and you're growing in the same way at the same time. But not always. And later on you may want to keep growing in a direction where someone else feels that they've solved their bottleneck and growing more in that direction isn't that relevant to them anymore. That's okay.
Having a nicer place to live
I think this is an important "lower Maslow hierarchy" level than the strong friendships one. If your house isn't a nice place to live, you'll probably have a harder time forming friendships with people there.
"Nice place to live" means different things to different people. Form a group house with people who have similar desires re: cleanliness and approaches to problem solving and aesthetics, etc.
Deliberately cultivating your incentives
What sort of environment you're in shapes what sort of ways you grow. You might care about this for reasons other than incidentally helping deepen friendships.
This depends both on having people who want to cultivate the same sorts of incentives that you do, and on actually coordinating with each other to hold each other to those incentives
Be wary of your, and others', desire to have the self-image of someone who wants to grow in a particular way. I've seen a failure mode where people felt vaguely obligated to pay lip service to certain kinds of growth, but it wasn't actually what they wanted.
Be wary of "generic emphasis on growth". A thing I've seen a few group houses try is something like "self improvement night" where they try to help each other level up, and it often doesn't work because people are just interested in pretty different skillsets.
Also, programming and the internet is just literally magic (down to the "if you learn someone's true name you gain power over them"), and if you're not inspired by it you probably wouldn't be inspired by "regular magic" if it were real.
I dunno man, investigating the legend of the mole people was pretty fun. The main difference between it and fiction was that in fiction I'd be killing orcs, but if I wanted to kill people I'd just go to war zones. (but, also, killing is actually bad and scary and I don't want it)
It occurs to me that there's probably a weird divide of people who find WikiWand strictly better than wikipedia (because of generally nicer typography, less clutter, etc), who have adblock and so don't notice the part where it's full of obnoxious ads... and those who are like "why the hell would you want this?"
Nod. I haven't actually been to CFAR recently, not sure how they go about it there. But I think for local meetups doing practice breaking it down into subskills seems pretty useful and I agree with active listening being another key one.
After a recent 'doublecrux meetup' (I wasn't running it but observed a bit), I was reflecting on why it's hard to get people to sufficiently disagree on things in order to properly practice doublecrux.
But, it still sure is handy to have practiced doublecruxing before needing to do it in an important situation. What to do?
Two options that occur to me are
First try to develop a plan for building an actual product together, THEN find a thing to disagree about organically through that process.
[note: I haven't actually talked much with the people who's major focus is teaching doublecrux, not sure how much of this is old hat, or if there's a totally different approach that sort of invalidates it]
One challenge about doublecrux practice is that you have to find something that you have strong opinions about and that someone else also has strong opinions about. So... just sidestep that problem by only worrying about something that you have strong opinions about.
Pick a belief that is actually relevant to your plans (such as where you're planning to go to college, or what kind of career to go into, or ideally a project you're actually working on that you're excited about).
What beliefs are you confident in, that are underpinning your entire approach? (i.e. "going to college in the first place is the right move" or "A job in this industry will make me happier than this other industry" or "this project is a good idea because people will buy the product I'm building.")
Instead of practicing discussing this with someone else, you can just ask yourself, with no one else around you, why you believe what you believe, and what would change your mind about it.
Having considered this, I think I like it a lot as a "doublecruxing 101" skill.
One problem with learning doublecrux is that doing it properly takes awhile, and in my experience starts with a phase that's more about model-sharing, before moving to the "actually find your own cruxes and figure out what would change your mind." But, the first part isn't actually all that different from regular debate, or discussion. And it's not quite clear when to transition to the second part (or, it naturally interweaves with the first part. See postformal doublecrux).
This makes it hard to notice and train the specific skills that are unique to doublecrux.
I like the notion that "first you learn singlecrux, then doublecrux" because a) it's just generally a useful skill to ask why you actually believe the things you do and what would change your mind, and b) I think it's much easier to focus on the active, unique ingredients when the topic isn't getting blurred with various other conversational skills, and/or struggling to find a thing that's worth disagreeing about in the first place.
It'd also have the advantage that you can think about something that's actually slightly triggering/uncomfortable for you to consider (which I think is pretty valuable for actually learning to do the skill "for real"), but where you only have to worry about how you feel about it, rather than also have to figure out how to relate to someone else who might also feel strongly.
I think this'd be particularly good for local meetups that don't have the benefit of instructors with a lot of practice helping people learn the doublecrux skill.
(I might still have people pair up, but only after thinking privately about it for 5 minutes, and the pairs of people would not be disagreeing with each other, just articulating their thought processes to each other. At any given time, you'd have one "active participant" talking through why they believe what they believe and how they might realistically change their mind about it, and another partner acting more as a facilitator to notice if they're stuck in weird thought pattern loops or something)
Finding a product you both might actually want to build, then disagreeing about it
But, still, sooner or later you might want to practice doublecruxing. What then? How do you reliably find things you disagree about?
The usual method I've observed is have people write down statements that they believe in confidently, that they think other people might disagree with, and then pair up based on shared disagreement. This varies in how well it produces disagreements that feel really 'alive' and meaningful.
But if doublecrux actually is mostly for building products, a possible solution might be instead to pair up based on shared interests in the sorts of projects you might want to build. You (probably?) won't actually build the product, but it seems important that you be able to talk about it as realistically as possible.
Then, you start forming a plan about how to go about building it, while looking for points of disagreement. Lean into that disagreement when you notice it, and explore the surrounding concept-space.
At last night's meetup, I paired with someone and suggested this idea to them. We ended up with the (somewhat meta) actual shared product of "how to improve the Berkeley rationality community." We discussed that for 20 minutes, and eventually found disagreement about whether communities require costly signals of membership to be any good, or whether they could instead be built off other human psychological quirks.
This was not a disagreement I think we would have come up with if we "listed a bunch of things we felt strongly about." And it felt a lot more real.
(I do think there's a risk of most pairs of people ending up with "the local community" or something similarly meta as their 'product', but I expect the actual disagreements to still be fairly unique and dependent on the people in question)