Posts

Boston Secular Solstice 2024 2024-11-19T05:05:47.131Z
ACX Ballot Meetup: Boston 2024-10-08T01:09:31.670Z
$1,000 Bounty for Pro-BLM Policy Analysis 2020-06-18T01:48:52.725Z
Petrov Day in Boston 2019-09-15T22:13:33.563Z
Boston SSC Meetup 2018-10-18T03:43:53.398Z
Petrov Day in Boston 2018-09-16T02:14:11.886Z
Boston SSC Meetup 2018-09-16T01:23:32.613Z

Comments

Comment by Taymon Beal (taymon-beal) on MIRI 2024 Mission and Strategy Update · 2024-01-07T12:51:45.282Z · LW · GW

Does that logic apply to crawlers that don't try to post or vote, as in the public-opinion-research use case? The reason to block those is just that they drain your resources, so sophisticated measures to feed them fake data would be counterproductive.

Comment by Taymon Beal (taymon-beal) on MIRI 2024 Mission and Strategy Update · 2024-01-05T18:46:01.894Z · LW · GW

I didn't downvote (I'm just now seeing this for the first time), but the above comment left me confused about why you believe a number of things:

  • What methodology do you think MIRI used to ascertain that the Time piece was impactful, and why do you think that methodology isn't vulnerable to bots or other kinds of attacks?
  • Why would social media platforms go to the trouble of feeding fake data to bots instead of just blocking them? What would they hope to gain thereby?
  • What does any of this have to do with the Social Science One incident?
  • In general, what's your threat model? How are the intelligence agencies involved? What are they trying to do?
  • Who are you even arguing with? Is there a particular group of EAsphere people who you think are doing public opinion research in a way that doesn't make sense?

Also, I think a lot of us don't take claims like "I've been researching this matter professionally for years" seriously because they're too vaguely worded; you might want to be a bit more specific about what kind of work you've done.

Comment by Taymon Beal (taymon-beal) on Brighter Than Today Versions · 2023-12-20T23:08:46.584Z · LW · GW

For people in Boston, I made a straw poll to gauge community sentiment on this question: https://forms.gle/5BJEG5fJWTza14eL9

Comment by Taymon Beal (taymon-beal) on Another Way to Be Okay · 2023-02-19T21:41:51.264Z · LW · GW

I assume this is referring to the ancient fable "The Ant and the Grasshopper", which is about what we would today call time preference. In the original, the high-time-preference grasshopper starves because it didn't spend the summer stockpiling food for winter, while the low-time-preference ant survives because it did. Of course, alternate interpretations have been common since then.

Comment by Taymon Beal (taymon-beal) on Solstice 2022 Roundup · 2022-11-27T00:19:25.803Z · LW · GW

Boston

Saturday, December 17; doors open at 6:30, Solstice starts at 7:15
69 Morrison Ave., Somerville, MA 02144

RSVPs appreciated for planning purposes: https://www.facebook.com/events/3403227779922411

Let us know in advance if you need to park onsite (it's accessible by public transportation). We're up a flight of stairs.

Comment by Taymon Beal (taymon-beal) on LW Petrov Day 2022 (Monday, 9/26) · 2022-09-23T03:35:26.662Z · LW · GW

As someone who was very unhappy with last year's implementation and said so (though not in the public thread), I think this is an improvement and I'm happy to see it. In previous years, I didn't get a code, but if I'd had one I would have very seriously considered using it; this year, I see no reason to do that.

I do think that, if real value gets destroyed as a result of this, then the ethical responsibility for that loss of value lies primarily with the LW team, and only secondarily with whoever actually pushed the button. So if the button got pushed and some other person were to say "whoever pushed the button destroyed a bunch of real value" then I wouldn't necessarily quibble with that, but if the LW team said the same thing then I'd be annoyed.

Comment by Taymon Beal (taymon-beal) on Vavilov Day Discussion Post · 2022-01-31T04:41:39.658Z · LW · GW

So this wound up going poorly for me for various reasons. I ultimately ended up not doing the fast, and have been convinced that I’m not going to be able to in the future either, barring unanticipated changes in my mental-health situation. Other people are going to be in a different situation and that seems fine. But there are a couple community-level things that I feel ought to be expressed publicly somewhere, and this is where they're apparently allowed, so:

First, it's not a great situation if there are like three rationalist holidays and one of them is this dangerous/unhealthy for a substantial fraction of people (e.g., eating disorders, which appear to exist at a high rate in the ratsphere). As far as I can tell, nobody intended that outcome; the original Vavilov Day proposal was like 90% “individual thing to do for personal reasons”, 10% “new rationalist holiday”, and then commenters here and on social media seized on the 10% because we currently don’t have enough rationalist holidays and people are desperate for more. (This is why, e.g., the original suggestion that people propose alternative ways of honoring Vavilov didn't get any traction; that wouldn't have met the pent-up demand for more ritual as effectively, so there wasn't interest.) But it meant that the choice was between “do something that's maybe not at all a good idea for you” and “lose access to communal affirmation of shared values with no available substitute”. The idea here isn't that there shouldn't be anything this risky; it’s that something this risky should be one thing among many, and right now we aren't there.

The counterpoint is that if we hold every new idea to a “good for the overall shape of the community” standard then defending ideas from critics becomes too unrewarding and we don't get any new ideas at all. Bulldozer vs. vetocracy, except mediated by informal community attitudes rather than by any authority. This seems like a valid point to me and I don't have any particularly helpful thoughts about how to navigate this tradeoff.

(It might have been possible to mitigate the tradeoff—assuming we wanted something like Vavilov Day to be a rationalist holiday at all, rather than an individual thing, which maybe we didn't—by putting more overt focus on questions like “how should people decide whether this is good for them” and “how should people whom this isn't good for relate to it”. But while these seem pretty non-costly to me, it might be the case that other people have different ideas for what non-costly precautions should be taken, and if you try to take all of them then it's not non-costly anymore. Again, I don't know.)

Second, I’ve heard from multiple sources that some people had concerns about the event but felt that they couldn’t express them in public. (You should take this claim with a grain of salt; not all of my knowledge here is firsthand, and even with respect to what is, since I’m not providing any details, you can’t trust that I haven’t omitted context that would lead you to a different conclusion if you knew it.) The resulting appearance of unanimity definitely left me feeling pretty unnerved and made it hard to tell whether I should participate. There are obvious reasons for people to refrain from public criticism—to the extent that it’s a personal thing, maybe we shouldn't criticize people's life choices, and to the extent that it’s a community thing, maybe we should err on the side of non-criticism in order to prevent chilling effects—and I don't really have any useful thoughts about what to think or do about this. I’m not sure anyone should particularly do anything differently based on this information. But I'd feel remiss if I allowed it to just not exist in public at all.

(This wound up being mostly about the meta-level ritual/holiday stuff, but I’m posting it in this thread rather than the other one because I wanted to say something about the application of that meta-level stuff to this particular situation, rather than about how to build rationalist ritual/holidays in full generality. I'm basically in favor of the things being suggested in the other thread; my only serious worry is that nobody will actually do them, given that many of them have been suggested before.)

Comment by Taymon Beal (taymon-beal) on Exterminating humans might be on the to-do list of a Friendly AI · 2021-12-08T03:01:09.715Z · LW · GW

This strikes me as a purely semantic question regarding what goals are consistent with an agent qualifying as "friendly".

Comment by Taymon Beal (taymon-beal) on [$10k bounty] Read and compile Robin Hanson’s best posts · 2021-10-20T23:09:26.163Z · LW · GW

He tweeted his approval.

Comment by Taymon Beal (taymon-beal) on Petrov Day 2021: Mutually Assured Destruction? · 2021-09-26T20:22:58.761Z · LW · GW

Correction: The annual Petrov Day celebration in Boston has never used the button.

Comment by Taymon Beal (taymon-beal) on Takeaways from one year of lockdown · 2021-03-04T21:04:58.797Z · LW · GW

I've talked to some people who locked down pretty hard pretty early; I'm not confident in my understanding but this is what I currently believe.

I think characterizing the initial response as over-the-top, as opposed to sensible in the face of uncertainty, is somewhat the product of hindsight bias. In the early days of the pandemic, nobody knew how bad it was going to be. It was not implausible that the official case fatality rate for healthy young people was a massive underestimate.

I don't think our community is "hyper-altruistic" in the Strangers Drowning sense, but we do put a lot of emphasis on being the kinds of people who are smart enough not to pick up pennies in front of steamrollers, and on not trusting the pronouncements of officials who aren't incentivized to do sane cost-benefit analyses. And we apply that to altruism as much as anything else. So when a few people started coordinating an organized response, and used a mixture of self-preservation-y and moralize-y language to try to motivate people out of their secure-civilization-induced complacency, the community listened.

This doesn't explain why not everyone eased up on restrictions once the epistemic Wild West of February and March gave way to the new normal later in the year. That seems more like a genuine failure on our part. I think I prefer Raemon's explanation from this subthread: the concentrated attention that was required to make the initial response work turned out to be a limited resource, and it had been exhausted. By the time it replenished, there was no longer a Schelling event to coordinate around, and the problems no longer seemed so urgent to the people doing the coordinating.

Comment by Taymon Beal (taymon-beal) on We Need Browsers as Platforms · 2021-02-11T23:14:05.403Z · LW · GW

Docker is not a security boundary.

Comment by Taymon Beal (taymon-beal) on Manifesto of the Silent Minority · 2020-11-26T07:57:47.762Z · LW · GW

Eh, if you read the raw results most are pretty innocuous.

Comment by Taymon Beal (taymon-beal) on Industrial literacy · 2020-10-04T20:02:57.547Z · LW · GW

Not at the scale that would be required to power the entire grid that way. At least, not yet. This is of course just one study (h/t Vox via Robert Wiblin) but provides at least a rough picture of the scale of the problem.

Comment by Taymon Beal (taymon-beal) on On "Not Screwing Up Ritual Candles" · 2020-09-28T01:52:26.119Z · LW · GW

I feel obligated to link to my house's Petrov Day "Bad/X-risk Future" candle.

Comment by Taymon Beal (taymon-beal) on $1,000 Bounty for Pro-BLM Policy Analysis · 2020-06-18T17:28:16.233Z · LW · GW

Cross-posting from Facebook:

Any policy goal that is obviously part of BLM's platform, or that you can convince me is, counts. Police reform is the obvious one but I'm open to other possibilities.

It's fine for "heretics" to make suggestions, at least here on LW where they're somewhat less likely to attract unwanted attention. Efficacy is the thing I'm interested in, with the understanding that the results are ultimately to be judged according to the BLM moral framework, not the EA/utilitarian one.

Small/limited returns are okay if they're the best that can be done. Time preference is moderately high (because that matches my assessment of the BLM moral framework) but still limited.

Suggestions from non-Americans are fine.

Comment by Taymon Beal (taymon-beal) on Reality-Revealing and Reality-Masking Puzzles · 2020-01-17T04:48:05.747Z · LW · GW

"It is easy to get the impression that the concerns raised in this post are not being seen, or are being seen from inside the framework of people making those same mistakes."

I don't have a strong opinion about the CFAR case in particular, but in general, I think this impression is pretty much what happens by default in organizations, even when the people running them are smart and competent and well-meaning and want to earn the community's trust. Transparency is really hard, harder than I think anyone expects until they try to do it, and to do it well you have to allocate a lot of skill points to it, which means allocating them away from the organization's core competencies. I've reached the point where I no longer find even gross failures of this kind surprising.

(I think you already appreciate this but it seemed worth saying explicitly in public anyway.)

Comment by taymon-beal on [deleted post] 2019-09-28T17:55:48.774Z

The organizer wound up posting their own event: https://www.lesswrong.com/events/ndqcNdvDRkqZSYGj6/ssc-meetups-everywhere-1

Comment by taymon-beal on [deleted post] 2019-07-23T18:55:11.424Z

This looks like a duplicate.

Comment by Taymon Beal (taymon-beal) on Nash equilibriums can be arbitrarily bad · 2019-05-01T21:59:59.631Z · LW · GW

Nit: I think this game is more standardly referred to in the literature as the "traveler's dilemma" (Google seems to return no relevant hits for "almost free lunches" apart from this post).

Comment by Taymon Beal (taymon-beal) on Book review: The Sleepwalkers by Arthur Koestler · 2019-04-25T03:29:17.894Z · LW · GW

Irresponsible and probably wrong narrative: Ptolemy and Simplicius and other pre-modern scientists generally believed in something like naive realism, i.e., that the models (as we now call them) that they were building were supposed to be the way things really worked, because this is the normal way for humans to think about things when they aren't suffering from hypoxia from going up too many meta-levels, so to speak. Then Copernicus came along, kickstarting the Scientific Revolution and with it the beginnings of science-vs.-religion conflict, spurring many politically-motivated clever arguments about Deep Philosophical Issues. Somewhere during that process somebody came up with scientific anti-realism, and it gained traction because it was politically workable as a compromise position, being sufficiently nonthreatening to both sides that they were content to let it be. Except for Galileo, who thought it was bullshit and refused to play along, which (in conjunction with his general penchant for pissing people off, plus the political environment having changed since Copernicus due to the Counter-Reformation) got him locked up.

Comment by Taymon Beal (taymon-beal) on Book review: The Sleepwalkers by Arthur Koestler · 2019-04-23T04:25:00.085Z · LW · GW

Oh, I totally buy that it was relevant in the Galileo affair; indeed, the post does discuss Copernicus. But that was after the controversy had become politicized and so people had incentives to come up with weird forms of anti-epistemology. Absent that, I would not expect such a distinction to come up.

Comment by Taymon Beal (taymon-beal) on Book review: The Sleepwalkers by Arthur Koestler · 2019-04-23T00:52:55.445Z · LW · GW

This essay argues against the idea of "saving the phenomena", and suggests that the early astronomers mostly did believe that their models were literally true. Which rings true to me; the idea of "it doesn't matter if it's real or not" comes across as suspiciously modern.

Comment by Taymon Beal (taymon-beal) on What LessWrong/Rationality/EA chat-servers exist that newcomers can join? · 2019-04-03T02:42:52.241Z · LW · GW

For EAs and people interested in discussing EA, I recommend the EA Corner Discord server, which I moderate along with several other community members. For a while there was a proliferation of several different EA Discords, but the community has now essentially standardized on EA Corner and the other servers are no longer very active. Nor is there an open EA chatroom with comparable levels of activity on any other platform, to the best of my knowledge.

I feel that we've generally done a good job of balancing access needs associated with different levels of community engagement. A number of longtime EAs with significant blogosphere presences hang out here, but the culture is also generally newcomer-friendly. Discussion topics range from 101 stuff to open research questions. Speaking only for myself, I generally strive to maintain civic/public moderation norms as much as possible.

Also you can get a pretty color for your username if you donate 10% or do direct work.

Comment by Taymon Beal (taymon-beal) on LW Update 2019-03-12 -- Bugfixes, small features · 2019-03-13T04:38:42.080Z · LW · GW

The Slate Star Codex sidebar is now using localStartTime to display upcoming meetups, fixing a longstanding off-by-one bug affecting displayed dates.

Comment by Taymon Beal (taymon-beal) on LW2.0 Mailing List for Breaking API Changes · 2019-02-26T01:10:25.108Z · LW · GW

You probably want to configure this such that anyone can read and subscribe but only you can post.

Comment by Taymon Beal (taymon-beal) on Open Thread January 2019 · 2019-01-19T16:13:57.889Z · LW · GW

I don't feel like much has changed in terms of evaluating it. Except that the silliness of the part about cryptocurrency is harder to deny now that the bubble has popped.

Comment by Taymon Beal (taymon-beal) on Norms of Membership for Voluntary Groups · 2018-12-12T00:30:20.885Z · LW · GW

I linked this article in the EA Discord that I moderate, and made the following comments:

Posting this in #server-meta because it helps clarify a lot of what I, at least, have struggled to express about how I see this server as being supposed to work.
Specifically, I feel pretty strongly that it should be run on civic/public norms. This is a contrast to a lot of other rationalsphere Discords, which I think often at least claim to be running on guest norms, though I don’t have a super-solid understanding of the social dynamics involved.
The standard failure mode of civic/public norms is that the people in charge, in the interest of not having a too-high standard of membership (as this set of norms requires), are overly tolerant of behaviors with negative externalities.
The problem with this is not simply that negative externalities are bad, it’s that if you have too many of them it ceases to be worth good actors’ while to participate, at which point they leave because the whole thing is voluntary. Whatever the goals of the space are, you probably can’t achieve them if there’s nobody left but trolls.
Thus it is occasionally argued that civic/public norms are self-defeating. In particular, in the rationalsphere something like this has become accepted wisdom (“well-kept gardens die by pacifism”), and attempts to make spaces more civic/public are by default met with suspicion.
(Of course, it can also be hard to tell a principled attempt at civic/public norms apart from a simple bias towards inaction on the part of the people in charge. Such a bias can stem from aversion to social conflict. Certainly, I myself am so averse.)
The way we deal with this on this server, I think, is to identify patterns that if left unchecked would cause productive people to leave (not specific productive people, but rather in the abstract), and then as principledly as possible tweak the rules to officially discourage and/or prohibit those behaviors.
It’s a fine line to walk, but I don’t think it’s impossible to do well. And there are advantages; I suspect that insecure and/or conflict-averse people may have an easier time in this kind of space, especially if they don’t have a guest or coalitional space that happens to favor them and so makes them feel safe. (Something something typical mind fallacy.)
Also, civic/public norms are the best at preventing forks and schisms. Guest norms are the worst at this. One can of course argue about whether it’s worth it, but these do very much have costs.
The other thing I found especially interesting was this quote: “Asking for “inclusiveness” is usually a bid to make the group more Civic or Coalitional.”
I found this interesting because recently I made an ex cathedra statement that almost used the word “inclusive” in reference to what this server strives to be. By this I meant civic/public. I took it out because the risk of misinterpretation seemed high, because in the corners of the internet that many of us frequent, “inclusive” more often means coalitional.

Comment by Taymon Beal (taymon-beal) on LW Update 2018-11-22 – Abridged Comments · 2018-12-10T03:13:26.496Z · LW · GW

I fear that this system doesn't actually provide the benefits of a breadth-first search, because you can't really read half a comment. If I scroll down a comment page without uncollapsing it, I don't feel like I got much of a picture of what anyone actually said, and also repeatedly seeing what people are saying cut off midsentence is really cognitively distracting.

Reddit (and I think other sites, but on Reddit I know I've experienced this) makes threads skimmable by showing a relatively small number of comments, rather than a small snippet of each comment. At least in my experience, this actually works, in that I've skimmed threads this way and felt like I got a good picture of the overall gist of the thread without having to read every comment.

I know you don't like Reddit's algorithm because it feeds the Matthew effect. But if most comments were hidden entirely and only a few were shown, you could optimize directly for whatever it is you're trying to do, by tweaking the algorithm that determines which comments to show. As a degenerate example, if you wanted to optimize for strict egalitarianism, you could just show a uniform random sample of comments.
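
To make the degenerate example concrete, here's a minimal sketch; the generic comment type and the sample size k are placeholders, not anything from the actual LW codebase:

```typescript
// Strict-egalitarian selection: show k comments sampled uniformly at random,
// so every comment has the same chance of being seen regardless of karma.
function uniformSample<T>(comments: T[], k: number): T[] {
  const pool = [...comments];
  const n = Math.min(k, pool.length);
  // Partial Fisher-Yates shuffle: after the loop, the first n slots hold a
  // uniform random sample of the pool.
  for (let i = 0; i < n; i++) {
    const j = i + Math.floor(Math.random() * (pool.length - i));
    [pool[i], pool[j]] = [pool[j], pool[i]];
  }
  return pool.slice(0, n);
}
```

Swapping in any other selection rule would just mean replacing the shuffle with a sort over whatever metric you're actually trying to optimize.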

Comment by Taymon Beal (taymon-beal) on LW Update 2018-11-22 – Abridged Comments · 2018-12-10T00:54:34.625Z · LW · GW

You don't currently expand comments that are positioned below the clicked comment but not descendants of it.

Comment by Taymon Beal (taymon-beal) on LW Update 2018-11-22 – Abridged Comments · 2018-12-09T22:04:35.425Z · LW · GW

Idea: If somebody has expanded several comments, there's a good chance they want to read the whole thread, so maybe expand all of them.

Comment by Taymon Beal (taymon-beal) on Speculative Evopsych, Ep. 1 · 2018-11-23T00:07:14.660Z · LW · GW

Would you mind saying in non-metaphorical terms what you thought the point was? I think this would help produce a better picture of how hard it would have been to make the same point in a less inflammatory way.

Comment by Taymon Beal (taymon-beal) on Rationality Is Not Systematized Winning · 2018-11-12T19:58:39.592Z · LW · GW

There's an argument to be made that even if you're not an altruist, that "societal default" only works if the next fifty years play out more-or-less the same way the last fifty years did; if things change radically (e.g., if most jobs are automated away), then following the default path might leave you badly screwed. Of course, people are likely to have differing opinions on how likely that is.

Comment by Taymon Beal (taymon-beal) on Modes of Petrov Day · 2018-09-24T14:09:09.907Z · LW · GW

No, we didn't participate in this in Boston. Our Petrov Day is this Wednesday, the actual anniversary of the Petrov incident.

Comment by Taymon Beal (taymon-beal) on Modes of Petrov Day · 2018-09-23T03:26:42.946Z · LW · GW

Some disconnected thoughts:

In Boston we're planning Normal Mode. (We rejected Hardcore Mode in previous years, in part because it posed a serious problem for people who had undergone significant inconvenience to be able to attend.)

I'm good at DevOps and might be able to help the Seattle folks make their app more available if they need it.

I happened to give a eulogy of sorts for Stanislav Petrov last year.

I'm currently going through the latest version of the ritual book and looking for things to nitpick, since I know that a few points (notably the details of the Arkhipov story) have fallen into dispute since last year.

I'd be curious to know what considerations are affecting your decisions to possibly change Petrov Day.

Comment by Taymon Beal (taymon-beal) on Berkeley REACH Supporters Update: September 2018 · 2018-09-17T01:01:46.928Z · LW · GW

Thanks for this update!

I have a question as a donor that I regret not thinking of during the fundraising push. Could you identify a few possible future outcomes, measurable as successes or failures within a year, whose achievement would indicate that REACH was probably producing significant value from an EA perspective (as opposed to from a community-having-nice-things perspective)? And could you offer probability estimates on those outcomes being achieved?

I certainly understand if this would be overly time-consuming, but I'd feel comfortable donating more if I had a good answer to this in hand.

Edit: Kelsey on Discord proposed a few possible outcomes that might (or might not, depending on how you envision REACH working) be answers to this question:

  • The regular meetups REACH hosts get ~50 people to attend at least four EA meetups a year when they wouldn't have attended any.
  • As a result of the things they learned at those meetups, at least ten people change where they're donating to or what they're prioritizing in the next year.
  • At least five people join the community via REACH events/staying there/interacting with people staying there, and at least one of them is doing useful work in an EA priority area.

Comment by Taymon Beal (taymon-beal) on Ask Us Anything: Submit Questions Asking About What We Think SSC is Wrong About, and Why · 2018-09-08T19:49:55.896Z · LW · GW

Then I think the post should have waited until those arguments were up, so that the discussion could be about their merits. The problem is the "hyping it up to Be An Internet Event", as Ray put it in a different subthread; since the thing you're hyping up is so inflammatory, we're left in the position of having arguments about it without knowing what the real case for it is.

Comment by Taymon Beal (taymon-beal) on Ask Us Anything: Submit Questions Asking About What We Think SSC is Wrong About, and Why · 2018-09-08T16:42:07.035Z · LW · GW

I think it's an antisocial move to put forth a predictably inflammatory thesis (e.g., that an esteemed community member is a pseudo-intellectual not worth reading) and then preemptively refuse to defend it. If the thesis is right, then it would be good for us to be convinced of it, but that won't happen if we don't get to hear the real arguments in favor. And if it's wrong, then it should be put to bed before it creates a lot of unproductive social conflict, but that also won't happen as long as people can claim that we haven't heard the real arguments in favor (kind of like the motte-and-bailey doctrine).

I don't doubt your sincerity; I understand that you're doing this not because you believe the thesis yourself, but because your friend does. But I don't think that makes it okay. If your friend, or at least someone who actually believes the thesis, is not going to explain why it should be taken seriously, then it's bound to be net negative for intellectual progress and you shouldn't post it.

Comment by Taymon Beal (taymon-beal) on Ask Us Anything: Submit Questions Asking About What We Think SSC is Wrong About, and Why · 2018-09-08T16:31:29.688Z · LW · GW

Unless a comment was edited or deleted before I got the chance to read it, nobody but you has used the word "violence" in this thread. So I don't understand how an argument about the definition of "violence" is in any way relevant.

Comment by Taymon Beal (taymon-beal) on Last Chance to Fund the Berkeley REACH · 2018-06-30T02:52:09.986Z · LW · GW

Hmmm. Do you think that's a bug, or a feature?

LessWrong seems like a bit of a weird example since CFAR's senior leadership were among the people pushing for it in the first place. IIRC even people working at EA meta-orgs have encountered difficulties and uncertainty trying to personally fund projects through the org.

Comment by Taymon Beal (taymon-beal) on Last Chance to Fund the Berkeley REACH · 2018-06-30T02:42:01.686Z · LW · GW

I've just pledged $40 per month.

I could afford to pay more. I'd do so if I ever actually visited REACH, but I live thousands of miles away (and did give a small donation when I visited for the pre-EA Global party, and will continue to do so if I ever come back). I'd also pay more if I were more convinced that it was a good EA cause, but the path from ingroup reinforcement to global impact is speculative and full of moral hazard and I'm still thinking about it.

My pledge represents a bet that REACH will ultimately make a difference in my life by some causal pathway not yet visible. Perhaps I ultimately wind up in the Bay and it helps me connect to the community there, or perhaps its success ultimately facilitates other community-building projects that aren't so geographically limited (which is a thing I'd really like to see). It'd be nice to be able to wait and see, but that won't work if REACH runs out of startup capital and dies—so I'm taking the risk.

Comment by Taymon Beal (taymon-beal) on Last Chance to Fund the Berkeley REACH · 2018-06-30T02:24:41.582Z · LW · GW

This is a problem I've been thinking about for awhile in a broader EA context.

It's claimed fairly widely that EA needs a lot more smallish projects, including ones that aren't immediately legible enough to be fundable by large institutional donors (e.g., because the expected value depends on assessments of the competence and value alignment of the person running the project, which the large institutional funders can't assess). It's also claimed (e.g., by Nick Beckstead of OpenPhil at EA Global San Francisco 2017) that smallish earning-to-give donors' best bet to do the most good is to use their local knowledge to find and fund promising opportunities that the big institutional donors aren't already covering.

This creates a seemingly obvious opportunity for an EA org to make it easier for donors to crowdfund these kinds of projects. E.g., by being a 501(c)(3) they can funnel donations from DAFs, which individuals can't accept. (For me, at least, this is a bigger deal than tax deductibility; my DAF is overprovisioned relative to my personal savings right now, so I'd rather make donations from there.)

The two obvious hypotheses for why nobody's already doing this are 1) all the EA meta-orgs are too constrained on staff time to set it up, and 2) it doesn't actually work because the level of oversight required to avoid undue legal and/or reputational risk would destroy the efficiency gains. I would very much like to know to what extent each of these is the case.

Comment by Taymon Beal (taymon-beal) on Using the LessWrong API to query for events · 2018-06-23T02:31:39.426Z · LW · GW

Re: local events: Although I haven't checked this with Scott, my default assumption for the SSC sidebar is that keeping it free of clutter and noise is of the highest importance. As such, I'm only including individual events that a human actually took explicit action to advertise, to prevent the inclusion of "weekly" events from groups that have since flaked or died out.

(This is also why the displayed text only includes the date and Google-normalized location, to prevent users from defacing the sidebar with arbitrary text.)

LW proper may have different priorities. Might be worth considering design options here for indicating how active a group is.

Comment by Taymon Beal (taymon-beal) on Using the LessWrong API to query for events · 2018-06-23T02:26:15.249Z · LW · GW

So correct me if I'm wrong here, but the way timezones seem to work is that, when creating an event, you specify a "local" time, then the app translates that time from whatever it thinks your browser's time zone is into UTC and saves it in the database. When somebody else views the event, the app translates the time in the database from UTC to whatever it thinks their browser's time zone is and displays that.

I suppose this will at least sometimes work okay in practice, but if somebody creates an event in a time zone other than the one they're in right now, it will be wrong, and if you're viewing an event in a different time zone from your own, it'll be unclear which time zone is meant. Also, Moment.js's guess as to the user's time zone is not always reliable.

I think the right way to handle this would be to use the Google Maps API to determine, from the event's location and the given local time, what time zone the event is in, and then to attach that time zone to the time stored in the database and display it explicitly on the page. Does this make sense?
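
Concretely, the flow I have in mind might look like this rough sketch against the Google Maps Time Zone API (the function name, error handling, and stored-record shape are all my own assumptions, not anything in the LW codebase):

```typescript
// Sketch: resolve the event's IANA time zone from its coordinates, then store
// it alongside the UTC timestamp instead of guessing from the viewer's browser.
async function resolveEventTimeZone(
  lat: number,
  lng: number,
  eventTimeUtc: Date,
  apiKey: string
): Promise<string> {
  const ts = Math.floor(eventTimeUtc.getTime() / 1000);
  const url =
    "https://maps.googleapis.com/maps/api/timezone/json" +
    `?location=${lat},${lng}&timestamp=${ts}&key=${apiKey}`;
  const body = await (await fetch(url)).json();
  if (body.status !== "OK") {
    throw new Error(`Time zone lookup failed: ${body.status}`);
  }
  return body.timeZoneId; // e.g. "America/New_York"
}

// What gets stored and displayed: the zone is explicit, so "7:15 PM EST"
// means the event's own zone, not whatever zone the viewer happens to be in.
interface StoredEventTime {
  utc: string;        // ISO 8601 instant, e.g. "2018-12-18T00:15:00Z"
  timeZoneId: string; // IANA zone where the event physically takes place
}
```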

Comment by Taymon Beal (taymon-beal) on Using the LessWrong API to query for events · 2018-05-29T01:27:48.584Z · LW · GW

Also, two other questions:

  • Is there any way to link to the new event form with a type box prechecked? How hard would this be to implement in Vulcan?
  • How do time zones of events work?

Comment by Taymon Beal (taymon-beal) on Using the LessWrong API to query for events · 2018-05-29T00:32:52.045Z · LW · GW

Thanks. I'd originally written up a wishlist of server-side functionality here, but at this point I'm thinking maybe I'll just do the sorting and filtering on the client, since this endpoint seems able to provide a superset of what I'm looking for. It's less efficient and definitely an evil hack, but it means not needing server-side code changes.

I'll note that filter: "SSC" doesn't work in the GraphiQL page; events that don't match the filter still get returned.

More generally, the way the API works now basically means that you can only ask for things that correspond to features of the lesswrong.com web client. In effect, the server-side implementations of those features are what you're exposing as the API. There's an additional problem with this besides it just being limiting: you're likely to want to change those features later, and you risk breaking third-party clients if you do. If you want to support those clients, maybe they should instead use a more general API for querying the database (although I'm not sure exactly how to implement that while maintaining security).
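
For reference, the client-side workaround I have in mind is roughly the following sketch (the query shape and the "types" field are illustrative guesses at the schema; localStartTime is the one field I know exists):

```typescript
// Evil-hack sketch: fetch a superset of events, then redo the broken
// filter and the sorting locally.
interface LwEvent {
  title: string;
  localStartTime: string;
  types?: string[];
}

async function fetchSscEvents(endpoint: string): Promise<LwEvent[]> {
  const query = `{ EventsList { title localStartTime types } }`; // hypothetical query shape
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  const { data } = await res.json();
  return (data.EventsList as LwEvent[])
    .filter((e) => e.types?.includes("SSC")) // the filter the server ignores
    .sort(
      (a, b) =>
        new Date(a.localStartTime).getTime() -
        new Date(b.localStartTime).getTime()
    );
}
```

Obviously this pulls down more data than necessary; the hope is that once server-side filtering works, the hack can just be deleted.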

Comment by Taymon Beal (taymon-beal) on Meta-tations on Moderation: Towards Public Archipelago · 2018-02-25T22:13:54.786Z · LW · GW

I think I agree that if you see the development of explicit new norms as the primary point, then Facebook doesn't really work and you need something like this. I guess I got excited because I was hoping that you'd solved the "audience is inclined towards nitpicking" and "the people I most want to hear from will have been prefiltered out" problems, and now it looks like those aren't going to change.

Comment by Taymon Beal (taymon-beal) on Meta-tations on Moderation: Towards Public Archipelago · 2018-02-25T21:12:02.258Z · LW · GW

I guess there's an inherent tradeoff between archipelago and the ability to shape the culture of the community. The status quo on LW 2.0 leans too far towards the latter for my tastes; the rationalist community is big and diverse and different people want different things, and the culture of LW 2.0 feels optimized for what you and Ben want, which diverges often enough from what I want that I'd rather post on Facebook to avoid dealing with that set of selection effects. Whether you should care about this depends on how many other people are in a similar position and how likely they are to make valuable contributions to the project of intellectual progress, vs. the costs of loss of control. I'm quite confident that there are some people whose contributions are extremely valuable and whose style differs from the prevailing one here—Scott Alexander being one, although he's not active on Facebook in particular—but unfortunately I have no idea whether the costs are worth it.

Comment by Taymon Beal (taymon-beal) on Meta-tations on Moderation: Towards Public Archipelago · 2018-02-25T20:38:41.318Z · LW · GW

Yes, this was what I was trying to suggest.

Comment by Taymon Beal (taymon-beal) on Meta-tations on Moderation: Towards Public Archipelago · 2018-02-25T07:25:29.816Z · LW · GW

Thanks for articulating why Facebook is a safer and more pleasant place to comment than LW. I tried to post pretty much this on a previous thread, but I wasn't able to actually articulate the phenomenon, so I didn't say anything.

That being said, I still feel like I'd rather just post on Facebook.

There are two specific problems with Facebook as a community forum that I'm aware of. The first is that the built-in archiving and discovery tools are abysmal, because that's not the primary use case for the platform. Fortunately, we know there's a technical solution to this, because Jeff Kaufman implemented it on his blog.

The second problem is that a number of prominent people in the community are ideologically anti-Facebook and we don't want to exclude them. There's a partial technical solution for this; a site that mirrored Facebook comments could also let users comment directly and interleave those comments with the Facebook ones. But I don't think those comments could be made to show up on Facebook, so the conversation would still be fractured. I admit I would probably care more about this if not for my disagreement with the central claim that Facebook is uniquely evil.

Other than that, Facebook seems to have the whole "archipelago" thing pretty much solved.

Meanwhile, if I post on LessWrong I still expect to be heavily nitpicked, because I expect the subset of the community that's active on this site to be disproportionately prone to nitpicking. Similarly, certain worldviews and approaches to problem-solving are overrepresented here relative to the broader community, and these aren't necessarily the ones I most want to hear from.

Maybe this just boils down to the problem of my friends not being on here and it's not worth your time to try to solve. But it still feels like a problem.