$1,000 Bounty for Pro-BLM Policy Analysis 2020-06-18T01:48:52.725Z · score: 14 (11 votes)
Petrov Day in Boston 2019-09-15T22:13:33.563Z · score: 3 (1 votes)
Boston SSC Meetup 2018-10-18T03:43:53.398Z · score: 3 (1 votes)
Petrov Day in Boston 2018-09-16T02:14:11.886Z · score: 3 (1 votes)
Boston SSC Meetup 2018-09-16T01:23:32.613Z · score: 3 (1 votes)


Comment by taymon-beal on $1,000 Bounty for Pro-BLM Policy Analysis · 2020-06-18T17:28:16.233Z · score: 1 (1 votes) · LW · GW

Cross-posting from Facebook:

Any policy goal that is obviously part of BLM's platform, or that you can convince me is, counts. Police reform is the obvious one but I'm open to other possibilities.

It's fine for "heretics" to make suggestions, at least here on LW where they're somewhat less likely to attract unwanted attention. Efficacy is the thing I'm interested in, with the understanding that the results are ultimately to be judged according to the BLM moral framework, not the EA/utilitarian one.

Small/limited returns are okay if they're the best that can be done. Time preference is moderately high (because that matches my assessment of the BLM moral framework) but still limited.

Suggestions from non-Americans are fine.

Comment by taymon-beal on Reality-Revealing and Reality-Masking Puzzles · 2020-01-17T04:48:05.747Z · score: 23 (10 votes) · LW · GW
It is easy to get the impression that the concerns raised in this post are not being seen, or are being seen from inside the framework of people making those same mistakes.

I don't have a strong opinion about the CFAR case in particular, but in general, I think this impression is pretty much what happens by default in organizations, even when the people running them are smart and competent and well-meaning and want to earn the community's trust. Transparency is really hard, harder than I think anyone expects until they try to do it, and to do it well you have to allocate a lot of skill points to it, which means allocating them away from the organization's core competencies. I've reached the point where I no longer find even gross failures of this kind surprising.

(I think you already appreciate this but it seemed worth saying explicitly in public anyway.)

Comment by taymon-beal on [deleted post] 2019-09-28T17:55:48.774Z

The organizer wound up posting their own event:

Comment by taymon-beal on [deleted post] 2019-07-23T18:55:11.424Z

This looks like a duplicate.

Comment by taymon-beal on Nash equilibriums can be arbitrarily bad · 2019-05-01T21:59:59.631Z · score: 17 (7 votes) · LW · GW

Nit: I think this game is more standardly referred to in the literature as the "traveler's dilemma" (Google seems to return no relevant hits for "almost free lunches" apart from this post).

Comment by taymon-beal on Book review: The Sleepwalkers by Arthur Koestler · 2019-04-25T03:29:17.894Z · score: 6 (3 votes) · LW · GW

Irresponsible and probably wrong narrative: Ptolemy and Simplicius and other pre-modern scientists generally believed in something like naive realism, i.e., that the models (as we now call them) that they were building were supposed to be the way things really worked, because this is the normal way for humans to think about things when they aren't suffering from hypoxia from going up too many meta-levels, so to speak. Then Copernicus came along, kickstarting the Scientific Revolution and with it the beginnings of science-vs.-religion conflict, spurring many politically-motivated clever arguments about Deep Philosophical Issues. Somewhere during that process somebody came up with scientific anti-realism, and it gained traction because it was politically workable as a compromise position, being sufficiently nonthreatening to both sides that they were content to let it be. Except for Galileo, who thought it was bullshit and refused to play along, which (in conjunction with his general penchant for pissing people off, plus the political environment having changed since Copernicus due to the Counter-Reformation) got him locked up.

Comment by taymon-beal on Book review: The Sleepwalkers by Arthur Koestler · 2019-04-23T04:25:00.085Z · score: 5 (3 votes) · LW · GW

Oh, I totally buy that it was relevant in the Galileo affair; indeed, the post does discuss Copernicus. But that was after the controversy had become politicized and so people had incentives to come up with weird forms of anti-epistemology. Absent that, I would not expect such a distinction to come up.

Comment by taymon-beal on Book review: The Sleepwalkers by Arthur Koestler · 2019-04-23T00:52:55.445Z · score: 14 (5 votes) · LW · GW

This essay argues against the idea of "saving the phenomenon", and suggests that the early astronomers mostly did believe that their models were literally true. Which rings true to me; the idea of "it doesn't matter if it's real or not" comes across as suspiciously modern.

Comment by taymon-beal on What LessWrong/Rationality/EA chat-servers exist that newcomers can join? · 2019-04-03T02:42:52.241Z · score: 14 (5 votes) · LW · GW

For EAs and people interested in discussing EA, I recommend the EA Corner Discord server, which I moderate along with several other community members. For a while there was a proliferation of several different EA Discords, but the community has now essentially standardized on EA Corner and the other servers are no longer very active. Nor is there an open EA chatroom with comparable levels of activity on any other platform, to the best of my knowledge.

I feel that we've generally done a good job of balancing access needs associated with different levels of community engagement. A number of longtime EAs with significant blogosphere presences hang out here, but the culture is also generally newcomer-friendly. Discussion topics range from 101 stuff to open research questions. Speaking only for myself, I generally strive to maintain civic/public moderation norms as much as possible.

Also you can get a pretty color for your username if you donate 10% or do direct work.

Comment by taymon-beal on LW Update 2019-03-12 -- Bugfixes, small features · 2019-03-13T04:38:42.080Z · score: 7 (4 votes) · LW · GW

The Slate Star Codex sidebar is now using localStartTime to display upcoming meetups, fixing a longstanding off-by-one bug affecting displayed dates.

Comment by taymon-beal on LW2.0 Mailing List for Breaking API Changes · 2019-02-26T01:10:25.108Z · score: 5 (4 votes) · LW · GW

You probably want to configure this such that anyone can read and subscribe but only you can post.

Comment by taymon-beal on Open Thread January 2019 · 2019-01-19T16:13:57.889Z · score: 2 (4 votes) · LW · GW

I don't feel like much has changed in terms of evaluating it. Except that the silliness of the part about cryptocurrency is harder to deny now that the bubble has popped.

Comment by taymon-beal on Norms of Membership for Voluntary Groups · 2018-12-12T00:30:20.885Z · score: 31 (15 votes) · LW · GW

I linked this article in the EA Discord that I moderate, and made the following comments:

Posting this in #server-meta because it helps clarify a lot of what I, at least, have struggled to express about how I see this server as being supposed to work.
Specifically, I feel pretty strongly that it should be run on civic/public norms. This is a contrast to a lot of other rationalsphere Discords, which I think often at least claim to be running on guest norms, though I don’t have a super-solid understanding of the social dynamics involved.
The standard failure mode of civic/public norms is that the people in charge, in the interest of not having a too-high standard of membership (as this set of norms requires), are overly tolerant of behaviors with negative externalities.
The problem with this is not simply that negative externalities are bad, it’s that if you have too many of them it ceases to be worth good actors’ while to participate, at which point they leave because the whole thing is voluntary. Whatever the goals of the space are, you probably can’t achieve them if there’s nobody left but trolls.
Thus it is occasionally argued that civic/public norms are self-defeating. In particular, in the rationalsphere something like this has become accepted wisdom (“well-kept gardens die by pacifism”), and attempts to make spaces more civic/public are by default met with suspicion.
(Of course, it can also be hard to tell a principled attempt at civic/public norms apart from a simple bias towards inaction on the part of the people in charge. Such a bias can stem from aversion to social conflict. Certainly, I myself am so averse.)
The way we deal with this on this server, I think, is to identify patterns that if left unchecked would cause productive people to leave (not specific productive people, but rather in the abstract), and then as principledly as possible tweak the rules to officially discourage and/or prohibit those behaviors.
It’s a fine line to walk, but I don’t think it’s impossible to do well. And there are advantages; I suspect that insecure and/or conflict-averse people may have an easier time in this kind of space, especially if they don’t have a guest or coalitional space that happens to favor them and so makes them feel safe. (Something something typical mind fallacy.)
Also, civic/public norms are the best at preventing forks and schisms. Guest norms are the worst at this. One can of course argue about whether it’s worth it, but these do very much have costs.
The other thing I found especially interesting was this quote: “Asking for “inclusiveness” is usually a bid to make the group more Civic or Coalitional.”
I found this interesting because recently I made an ex cathedra statement that almost used the word “inclusive” in reference to what this server strives to be. By this I meant civic/public. I took it out because the risk of misinterpretation seemed high, because in the corners of the internet that many of us frequent, “inclusive” more often means coalitional.

Comment by taymon-beal on LW Update 2018-11-22 – Abridged Comments · 2018-12-10T03:13:26.496Z · score: 10 (6 votes) · LW · GW

I fear that this system doesn't actually provide the benefits of a breadth-first search, because you can't really read half a comment. If I scroll down a comment page without uncollapsing it, I don't feel like I got much of a picture of what anyone actually said, and also repeatedly seeing what people are saying cut off midsentence is really cognitively distracting.

Reddit (and I think other sites, but on Reddit I know I've experienced this) makes threads skimmable by showing a relatively small number of comments, rather than a small snippet of each comment. At least in my experience, this actually works, in that I've skimmed threads this way and felt like I got a good picture of the overall gist of the thread without having to read every comment.

I know you don't like Reddit's algorithm because it feeds the Matthew effect. But if most comments were hidden entirely and only a few were shown, you could optimize directly for whatever it is you're trying to do, by tweaking the algorithm that determines which comments to show. As a degenerate example, if you wanted to optimize for strict egalitarianism, you could just show a uniform random sample of comments.
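To make the degenerate example concrete, here is a minimal sketch of the "show a few whole comments, tune the selection policy" idea. The function name and shape are purely hypothetical, not the actual LW codebase; the egalitarian policy is just a uniform random sample via a partial Fisher-Yates shuffle.

```javascript
// Hypothetical sketch: instead of truncating every comment, pick a small
// subset to display in full. The selection policy is the tunable knob;
// the strictly egalitarian degenerate case is a uniform random sample.
function pickCommentsToShow(comments, k, rng = Math.random) {
  // Partial Fisher-Yates shuffle: the first k slots end up holding a
  // uniform random sample of the input, without mutating the original.
  const pool = comments.slice();
  for (let i = 0; i < Math.min(k, pool.length); i++) {
    const j = i + Math.floor(rng() * (pool.length - i));
    [pool[i], pool[j]] = [pool[j], pool[i]];
  }
  return pool.slice(0, Math.min(k, pool.length));
}
```

Swapping in a different policy (highest karma, most recent, novelty-weighted) only means replacing the sampling step, which is the point: the optimization target becomes explicit and adjustable.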

Comment by taymon-beal on LW Update 2018-11-22 – Abridged Comments · 2018-12-10T00:54:34.625Z · score: 3 (3 votes) · LW · GW

You don't currently expand comments that are positioned below the clicked comment but not descendants of it.

Comment by taymon-beal on LW Update 2018-11-22 – Abridged Comments · 2018-12-09T22:04:35.425Z · score: 5 (4 votes) · LW · GW

Idea: If somebody has expanded several comments, there's a good chance they want to read the whole thread, so maybe expand all of them.

Comment by taymon-beal on Speculative Evopsych, Ep. 1 · 2018-11-23T00:07:14.660Z · score: 6 (2 votes) · LW · GW

Would you mind saying in non-metaphorical terms what you thought the point was? I think this would help produce a better picture of how hard it would have been to make the same point in a less inflammatory way.

Comment by taymon-beal on Rationality Is Not Systematized Winning · 2018-11-12T19:58:39.592Z · score: 13 (3 votes) · LW · GW

There's an argument to be made that even if you're not an altruist, that "societal default" only works if the next fifty years play out more-or-less the same way the last fifty years did; if things change radically (e.g., if most jobs are automated away), then following the default path might leave you badly screwed. Of course, people are likely to have differing opinions on how likely that is.

Comment by taymon-beal on Modes of Petrov Day · 2018-09-24T14:09:09.907Z · score: 1 (1 votes) · LW · GW

No, we didn't participate in this in Boston. Our Petrov Day is this Wednesday, the actual anniversary of the Petrov incident.

Comment by taymon-beal on Modes of Petrov Day · 2018-09-23T03:26:42.946Z · score: 3 (2 votes) · LW · GW

Some disconnected thoughts:

In Boston we're planning Normal Mode. (We rejected Hardcore Mode in previous years, in part because it posed a serious problem for people who had undergone significant inconvenience in order to attend.)

I'm good at DevOps and might be able to help the Seattle folks make their app more available if they need it.

I happened to give a eulogy of sorts for Stanislav Petrov last year.

I'm currently going through the latest version of the ritual book and looking for things to nitpick, since I know that a few points (notably the details of the Arkhipov story) have fallen into dispute since last year.

I'd be curious to know what considerations are affecting your decisions to possibly change Petrov Day.

Comment by taymon-beal on Berkeley REACH Supporters Update: September 2018 · 2018-09-17T01:01:46.928Z · score: 17 (5 votes) · LW · GW

Thanks for this update!

I have a question as a donor that I regret not thinking of during the fundraising push. Could you identify a few possible future outcomes, measurable as successes or failures within a year, that if achieved would indicate that REACH was probably producing significant value from an EA perspective (as opposed to a community-having-nice-things perspective)? And could you offer probability estimates on those outcomes being achieved?

I certainly understand if this would be overly time-consuming, but I'd feel comfortable donating more if I had a good answer to this in hand.

Edit: Kelsey on Discord proposed a few possible outcomes that might (or might not, depending on how you envision REACH working) be answers to this question:

  • The regular meetups REACH hosts get ~50 people to attend at least four EA meetups a year when they wouldn't have attended any.
  • As a result of the things they learned at those meetups, at least ten people change where they're donating to or what they're prioritizing in the next year.
  • At least five people join the community via REACH events/staying there/interacting with people staying there, and at least one of them is doing useful work in an EA priority area.

Comment by taymon-beal on Ask Us Anything: Submit Questions Asking About What We Think SSC is Wrong About, and Why · 2018-09-08T19:49:55.896Z · score: 3 (2 votes) · LW · GW

Then I think the post should have waited until those arguments were up, so that the discussion could be about their merits. The problem is the "hyping it up to Be An Internet Event", as Ray put it in a different subthread; since the thing you're hyping up is so inflammatory, we're left in the position of having arguments about it without knowing what the real case for it is.

Comment by taymon-beal on Ask Us Anything: Submit Questions Asking About What We Think SSC is Wrong About, and Why · 2018-09-08T16:42:07.035Z · score: 5 (5 votes) · LW · GW

I think it's an antisocial move to put forth a predictably inflammatory thesis (e.g., that an esteemed community member is a pseudo-intellectual not worth reading) and then preemptively refuse to defend it. If the thesis is right, then it would be good for us to be convinced of it, but that won't happen if we don't get to hear the real arguments in favor. And if it's wrong, then it should be put to bed before it creates a lot of unproductive social conflict, but that also won't happen as long as people can claim that we haven't heard the real arguments in favor (kind of like the motte-and-bailey doctrine).

I don't doubt your sincerity; I accept that you're doing this because your friend believes the thesis, not because you believe it yourself. But I don't think that makes it okay. If your friend, or at least someone who actually believes the thesis, is not going to explain why it should be taken seriously, then it's bound to be net negative for intellectual progress and you shouldn't post it.

Comment by taymon-beal on Ask Us Anything: Submit Questions Asking About What We Think SSC is Wrong About, and Why · 2018-09-08T16:31:29.688Z · score: 1 (5 votes) · LW · GW

Unless a comment was edited or deleted before I got the chance to read it, nobody but you has used the word "violence" in this thread. So I don't understand how an argument about the definition of "violence" is in any way relevant.

Comment by taymon-beal on Last Chance to Fund the Berkeley REACH · 2018-06-30T02:52:09.986Z · score: 10 (2 votes) · LW · GW

Hmmm. Do you think that's a bug, or a feature?

LessWrong seems like a bit of a weird example since CFAR's senior leadership were among the people pushing for it in the first place. IIRC even people working at EA meta-orgs have encountered difficulties and uncertainty trying to personally fund projects through the org.

Comment by taymon-beal on Last Chance to Fund the Berkeley REACH · 2018-06-30T02:42:01.686Z · score: 18 (6 votes) · LW · GW

I've just pledged $40 per month.

I could afford to pay more. I'd do so if I ever actually visited REACH, but I live thousands of miles away (and did give a small donation when I visited for the pre-EA Global party, and will continue to do so if I ever come back). I'd also pay more if I were more convinced that it was a good EA cause, but the path from ingroup reinforcement to global impact is speculative and full of moral hazard and I'm still thinking about it.

My pledge represents a bet that REACH will ultimately make a difference in my life by some causal pathway not yet visible. Perhaps I ultimately wind up in the Bay and it helps me connect to the community there, or perhaps its success ultimately facilitates other community-building projects that aren't so geographically limited (which is a thing I'd really like to see). It'd be nice to be able to wait and see, but that won't work if REACH runs out of startup capital and dies—so I'm taking the risk.

Comment by taymon-beal on Last Chance to Fund the Berkeley REACH · 2018-06-30T02:24:41.582Z · score: 10 (2 votes) · LW · GW

This is a problem I've been thinking about for awhile in a broader EA context.

It's claimed fairly widely that EA needs a lot more smallish projects, including ones that aren't immediately legible enough to be fundable by large institutional donors (e.g., because the expected value depends on assessments of the competence and value alignment of the person running the project, which the large institutional funders can't assess). It's also claimed (e.g., by Nick Beckstead of OpenPhil at EA Global San Francisco 2017) that smallish earning-to-give donors' best bet to do the most good is to use their local knowledge to find and fund promising opportunities that the big institutional donors aren't already covering.

This creates a seemingly obvious opportunity for an EA org to make it easier for donors to crowdfund these kinds of projects. E.g., by being a 501(c)(3) they can funnel donations from DAFs, which individuals can't accept. (For me, at least, this is a bigger deal than tax deductibility; my DAF is overprovisioned relative to my personal savings right now, so I'd rather make donations from there.)

The two obvious hypotheses for why nobody's already doing this are 1) all the EA meta-orgs are too constrained on staff time to set it up, and 2) it doesn't actually work because the level of oversight required to avoid undue legal and/or reputational risk would destroy the efficiency gains. I would very much like to know to what extent each of these is the case.

Comment by taymon-beal on Using the LessWrong API to query for events · 2018-06-23T02:31:39.426Z · score: 4 (2 votes) · LW · GW

Re: local events: Although I haven't checked this with Scott, my default assumption for the SSC sidebar is that keeping it free of clutter and noise is of the highest importance. As such, I'm only including individual events that a human actually took explicit action to advertise, to prevent the inclusion of "weekly" events from groups that have since flaked or died out.

(This is also why the displayed text only includes the date and Google-normalized location, to prevent users from defacing the sidebar with arbitrary text.)

LW proper may have different priorities. Might be worth considering design options here for indicating how active a group is.

Comment by taymon-beal on Using the LessWrong API to query for events · 2018-06-23T02:26:15.249Z · score: 4 (2 votes) · LW · GW

So correct me if I'm wrong here, but the way timezones seem to work is that, when creating an event, you specify a "local" time, then the app translates that time from whatever it thinks your browser's time zone is into UTC and saves it in the database. When somebody else views the event, the app translates the time in the database from UTC to whatever it thinks their browser's time zone is and displays that.

I suppose this will at least sometimes work okay in practice, but if somebody creates an event in a time zone other than the one they're in right now, it will be wrong, and if you're viewing an event in a different time zone from your own, it'll be unclear which time zone is meant. Also, Moment.js's guess as to the user's time zone is not always reliable.

I think the right way to handle this would be to use the Google Maps API to determine, from the event's location and the given local time, what time zone the event is in, and then to attach that time zone to the time stored in the database and display it explicitly on the page. Does this make sense?
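A minimal sketch of the display side of this proposal, assuming the event record stores a UTC instant plus an explicit IANA zone resolved once from the venue at creation time (the function and field names are hypothetical, and `Intl.DateTimeFormat` is used here only to show that zone-aware formatting needn't depend on guessing the browser's zone):

```javascript
// Hypothetical sketch: the event stores a UTC timestamp plus the IANA
// time zone of the venue (resolved once at creation, e.g. via a
// geocoding/time-zone lookup). Display always happens in the event's
// own zone, with the zone abbreviation shown explicitly, so viewers in
// other zones aren't left guessing which time is meant.
function formatEventTime(utcMillis, eventTimeZone) {
  return new Intl.DateTimeFormat("en-US", {
    year: "numeric",
    month: "short",
    day: "numeric",
    hour: "numeric",
    minute: "2-digit",
    timeZoneName: "short",   // e.g. "EDT", so the zone is unambiguous
    timeZone: eventTimeZone, // the event's zone, not the viewer's
  }).format(new Date(utcMillis));
}
```

Because the zone is attached to the event rather than inferred from the creator's browser, creating an event for a meetup in another city stays correct, and everyone sees the same labeled local time.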

Comment by taymon-beal on Using the LessWrong API to query for events · 2018-05-29T01:27:48.584Z · score: 3 (1 votes) · LW · GW

Also, two other questions:

  • Is there any way to link the new event form to have a type box prechecked? How hard is this to implement in Vulcan?
  • How do time zones of events work?

Comment by taymon-beal on Using the LessWrong API to query for events · 2018-05-29T00:32:52.045Z · score: 3 (1 votes) · LW · GW

Thanks. I'd originally written up a wishlist of server-side functionality here, but at this point I'm thinking maybe I'll just do the sorting and filtering on the client, since this endpoint seems able to provide a superset of what I'm looking for. It's less efficient and definitely an evil hack, but it means not needing server-side code changes.

I'll note that filter: "SSC" doesn't work in the GraphiQL page; events that don't match the filter still get returned.

More generally, the way the API works now basically means that you can only ask for things that correspond to features of the web client. In effect, the server-side implementations of those features are what you're exposing as the API. There's an additional problem with this besides it just being limiting: you're likely to want to change those features later, and you risk breaking third-party clients if you do. If you want to support those clients, maybe they should instead use a more general API for querying the database (although I'm not sure exactly how to implement that while maintaining security).
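For concreteness, the client-side workaround mentioned above might look something like this (the field names are guesses for illustration, not the real LW schema):

```javascript
// Hypothetical client-side workaround: fetch the superset of events from
// the endpoint, then filter and sort locally rather than relying on the
// non-functional server-side `filter` argument.
function upcomingSSCEvents(events, now = Date.now()) {
  return events
    .filter(e => Array.isArray(e.types) && e.types.includes("SSC"))
    .filter(e => new Date(e.startTime).getTime() >= now)
    .sort((a, b) => new Date(a.startTime) - new Date(b.startTime));
}
```

This is the "evil hack" trade-off in miniature: the client transfers and discards data the server could have excluded, but no server-side changes are required.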

Comment by taymon-beal on Meta-tations on Moderation: Towards Public Archipelago · 2018-02-25T22:13:54.786Z · score: 3 (1 votes) · LW · GW

I think I agree that if you see the development of explicit new norms as the primary point, then Facebook doesn't really work and you need something like this. I guess I got excited because I was hoping that you'd solved the "audience is inclined towards nitpicking" and "the people I most want to hear from will have been prefiltered out" problems, and now it looks more like those aren't going to change.

Comment by taymon-beal on Meta-tations on Moderation: Towards Public Archipelago · 2018-02-25T21:12:02.258Z · score: 7 (2 votes) · LW · GW

I guess there's an inherent tradeoff between archipelago and the ability to shape the culture of the community. The status quo on LW 2.0 leans too far towards the latter for my tastes; the rationalist community is big and diverse and different people want different things, and the culture of LW 2.0 feels optimized for what you and Ben want, which diverges often enough from what I want that I'd rather post on Facebook to avoid dealing with that set of selection effects. Whether you should care about this depends on how many other people are in a similar position and how likely they are to make valuable contributions to the project of intellectual progress, vs. the costs of loss of control. I'm quite confident that there are some people whose contributions are extremely valuable and whose style differs from the prevailing one here—Scott Alexander being one, although he's not active on Facebook in particular—but unfortunately I have no idea whether the costs are worth it.

Comment by taymon-beal on Meta-tations on Moderation: Towards Public Archipelago · 2018-02-25T20:38:41.318Z · score: 3 (1 votes) · LW · GW

Yes, this was what I was trying to suggest.

Comment by taymon-beal on Meta-tations on Moderation: Towards Public Archipelago · 2018-02-25T07:25:29.816Z · score: 22 (9 votes) · LW · GW

Thanks for articulating why Facebook is a safer and more pleasant place to comment than LW. I tried to post pretty much this on a previous thread but wasn't able to actually articulate the phenomenon so didn't say anything.

That being said, I still feel like I'd rather just post on Facebook.

There are two specific problems with Facebook as a community forum that I'm aware of. The first is that the built-in archiving and discovery tools are abysmal, because that's not the primary use case for the platform. Fortunately, we know there's a technical solution to this, because Jeff Kaufman implemented it on his blog.

The second problem is that a number of prominent people in the community are ideologically anti-Facebook and we don't want to exclude them. There's a partial technical solution for this; a site that mirrored Facebook comments could also let users comment directly and interleave those comments with the Facebook ones. But I don't think those comments could be made to show up on Facebook, so the conversation would still be fractured. I admit I would probably care more about this if not for my disagreement with the central claim that Facebook is uniquely evil.

Other than that, Facebook seems to have the whole "archipelago" thing pretty much solved.

Meanwhile, if I post on LessWrong I still expect to be heavily nitpicked, because I expect the subset of the community that's active on this site to be disproportionately prone to nitpicking. Similarly, certain worldviews and approaches to problem-solving are overrepresented here relative to the broader community, and these aren't necessarily the ones I most want to hear from.

Maybe this just boils down to the problem of my friends not being on here and it's not worth your time to try to solve. But it still feels like a problem.

Comment by taymon-beal on Arbital postmortem · 2018-01-31T00:07:48.504Z · score: 34 (11 votes) · LW · GW

Thanks for the informative writeup.

I already said all of this on Facebook, but just to reiterate:

  • I believed from the first announcement, and continue to believe, that much of the value of Arbital as it exists is in the software itself. (By comparison, if Wikipedia stopped existing, MediaWiki would still be important and valuable.)
  • I, personally, want my own Arbital instance that I can use to write about EA donation opportunities. (I think Malcolm Ocean has said he wants one too.)
  • If and when it gets open sourced under any of the usual open source licenses, I will contribute documentation, automation scripts, and/or settings cleanup as needed to make it self-hostable.