Posts

ACX Biannual Meetup (with Vancouver rationality & EA) 2022-04-17T21:06:11.199Z
Why effective altruists should do Charity Science’s Christmas fundraiser 2015-12-01T23:59:14.688Z
What Makes the New Atheists So Charitable? 2015-10-29T01:50:45.618Z
SSC Discussion: No Time Like The Present For AI Safety Work 2015-06-05T02:34:28.645Z
SSC discussion: "bicameral reasoning", epistemology, and scope insensitivity 2015-05-27T05:08:37.621Z
SSC discussion: growth mindset 2015-04-11T15:13:59.432Z
Slate Star Codex: alternative comment threads on LessWrong? 2015-03-27T21:05:43.039Z
Join a major effective altruist fundraiser: get sponsored to eat for under $2.50 a day 2015-03-16T22:55:40.580Z
What topics are appropriate for LessWrong? 2015-01-12T18:58:16.791Z
Productivity poll: how frequently do you think you *should* check email? 2015-01-10T16:36:57.680Z
The new GiveWell recommendations are out: here's a summary of the charities 2014-12-01T21:20:23.819Z
Shop for Charity: how to earn proven charities 5% of your Amazon spending in commission 2014-11-24T08:29:32.996Z
Introducing an EA Donation Registry, covering existential risk donors 2014-10-21T14:10:19.784Z
Introducing Effective Altruist Profiles 2014-10-03T23:01:48.913Z
2014 Survey of Effective Altruists 2014-05-05T02:32:28.735Z
Email tone and status: !s, friendliness, 'please', etc. 2014-05-03T19:05:43.491Z
Jobs and internships available at the Centre for Effective Altruism: new 'EA outreach' roles added 2014-02-21T11:50:45.044Z
Jobs and internships available at the Centre for Effective Altruism 2014-02-07T12:16:19.212Z

Comments

Comment by tog on Notes on Integrity · 2021-06-14T05:27:36.150Z · LW · GW

An underrated and little-understood virtue in our culture.

And a nice summary with many good, non-obvious and practical points. I've done a lot of what you describe in the section on process, and can testify to its effectiveness.

I'd be curious to hear any non-obvious examples you have of integrity-maintaining ways of playing a role, particularly cases where a simpler high-integrity approach might naively conclude that one simply shouldn't play the role at all.

Comment by tog on Takeaways from one year of lockdown · 2021-03-02T01:27:45.055Z · LW · GW

I'm curious, what countries have and haven't seen substantial focus on hand hygiene?

We have that here in Canada.

Comment by tog on Covid 1/28: Muddling Through · 2021-01-29T01:51:49.314Z · LW · GW

Also I somehow keep not giving holidays proper respect.

I thought you were an advocate of the Sabbath? 😉

Comment by tog on Rest Days vs Recovery Days · 2021-01-26T02:29:00.583Z · LW · GW

"Free Day", while perhaps not the best option overall, has the merit that these days involving freeing the part of you that communicatess through your gut (and through what you feel like doing). During much of our working (and non-working) week, that part is overridden by our mind's sense of what we have to do. 

By contrast, in OP's Recovery Days this part is either:

(a) doing the most basic recharging before it can do things it positively feels like and enjoys, or

(b) overridden or hijacked by addictive behaviours that it doesn't find as roundly rewarding as Free Day activities.

Addiction can also be seen as a lack of freedom. 

Comment by tog on Rest Days vs Recovery Days · 2021-01-25T17:34:08.785Z · LW · GW

I agree about the names. 'Rest' days are particularly confusing, since recovery days involve a lot of rest. A main characteristic of 'rest' days instead seems to be doing what you feel like and following your gut.

Comment by tog on Covid 10/1: The Long Haul · 2020-10-21T04:01:07.777Z · LW · GW

Yes, it seems more reasonable to treat it as evidence of an upper bound. Still weak evidence IMO, due to the self-reporting of perceived symptoms.

Comment by tog on Covid 10/1: The Long Haul · 2020-10-10T14:13:41.562Z · LW · GW

They say they haven't accounted for sampling bias, though, which makes me doubt the methodology overall, as sampling bias could be huge over 90 day timespans.

Yes, the article doesn't describe the exact methodology, but they could well be deriving the percentages from people who choose to self-report how they're doing after 30 and 90 days. These would be far more likely to be people who still feel unwell.

As a separate point, and I'm skirting around using the word "hypochondria" here, asking people if they still feel unwell or have symptoms a month or three after first contracting covid is going to get some fairly subjective answers. All in all, I don't think this particular study tells us much about the likelihood of covid causing permanent damage.

Comment by tog on Stupid Questions December 2016 · 2016-12-23T20:21:28.770Z · LW · GW

That plus it's a more intelligent than average community with shared knowledge and norms of rationality. This is why I personally value LessWrong and am glad it's making something of a comeback.

Comment by tog on Why effective altruists should do Charity Science’s Christmas fundraiser · 2015-12-11T06:11:45.968Z · LW · GW

These aren't letters from charities, asking for your money for themselves (even if they then spend some or most or all of it on others). If you get a stock letter signed by the president of Charity X, who you don't know, saying they hope your family is well, that's quite different.

Comment by tog on Take the EA survey, help the EA movement grow and potentially win $250 to your favorite charity · 2015-12-08T16:42:44.044Z · LW · GW

Yep - we were thinking Dec 31st, but we've now decided to make it Jan 31st as some student EA groups have said they'd like to share it in their newsletters after students return from the holidays.

Comment by tog on Why effective altruists should do Charity Science’s Christmas fundraiser · 2015-12-08T16:40:39.155Z · LW · GW

I think it's possible to send versions of these emails which aren't annoying. I've sent a bunch myself and people haven't seemed to find them annoying.

Comment by tog on Why effective altruists should do Charity Science’s Christmas fundraiser · 2015-12-08T16:39:40.357Z · LW · GW

I disagree - I know Peter was genuinely interested in hearing back from people.

Comment by tog on Take the EA survey, help the EA movement grow and potentially win $250 to your favorite charity · 2015-12-01T17:43:22.561Z · LW · GW

For reference, here are the results from last year's survey, along with Peter's analysis of them. This includes a link to a Github repository including the raw data, with names and email addresses removed.

Notable findings included:

  • The top three sources people in our sample first heard about EA from were LessWrong, friends, or Giving What We Can. LessWrong, GiveWell, and personal contact were cited as the top three reasons people continued to get more involved in EA. (Keep in mind that EAs in our sample might not mean all EAs overall, as discussed in .)
  • 66.9% of the EAs in our sample were from the United States, the United Kingdom, and Australia, but we have EAs in many countries. You can see the public location responses visualized on the Map of EAs!
  • The Bay Area had the most EAs in our sample, followed by London and then Oxford. New York and Washington DC had surprisingly many EAs and may have flown under the radar.
  • The EAs in our sample in total donated over $5.23 million in 2013. The median donation size was $450 in 2013 donations.
  • 238 EAs in our sample donated 1% of their income or more, and 84 EAs in our sample gave 10% of their income. You can see the past and planned donations that people have chosen to make public on the EA Donation Registry.
  • The top three charities donated to by EAs in our sample were GiveWell's three picks for 2013: AMF, SCI, and GiveDirectly. MIRI was the fourth largest donation target, followed by unrestricted donations to GiveWell.
  • Poverty was the most popular cause among EAs in our sample, followed by metacharity and then rationality.
  • 33.1% of EAs in our sample were either vegan or vegetarian.
  • 34.1% of EAs in our sample who indicated a career said they were aiming to earn to give.

Comment by tog on Open thread, Nov. 30 - Dec. 06, 2015 · 2015-11-30T08:50:02.713Z · LW · GW

I'd like to draw your attention to this year's Effective Altruism Survey, which was recently released and which Peter Hurford linked to in LessWrong Main. As he says there:

This is a survey of all EAs to learn about the movement and how it can improve. The data collected in the survey is used to help EA groups improve and grow EA. Data is also used to populate the map of EAs, create new EA meetup groups, and create EA Profiles and the EA Donation Registry.

If you are an EA or otherwise familiar with the community, we hope you will take it using this link. All results will be anonymised and made publicly available to members of the EA community. As an added bonus, one random survey taker will be selected to win a $250 donation to their favorite charity.

Take the EA Survey

Comment by tog on Take the EA survey, help the EA movement grow and potentially win $250 to your favorite charity · 2015-11-30T02:30:24.331Z · LW · GW

For reference, here are the results from last year's survey, along with Peter's analysis of them. This includes a link to a Github repository including the raw data, with names and email addresses removed.

Notable findings included:

  • The top three sources people in our sample first heard about EA from were LessWrong, friends, or Giving What We Can. LessWrong, GiveWell, and personal contact were cited as the top three reasons people continued to get more involved in EA. (Keep in mind that EAs in our sample might not mean all EAs overall, as discussed in .)
  • 66.9% of the EAs in our sample were from the United States, the United Kingdom, and Australia, but we have EAs in many countries. You can see the public location responses visualized on the Map of EAs!
  • The Bay Area had the most EAs in our sample, followed by London and then Oxford. New York and Washington DC had surprisingly many EAs and may have flown under the radar.
  • The EAs in our sample in total donated over $5.23 million in 2013. The median donation size was $450 in 2013 donations.
  • 238 EAs in our sample donated 1% of their income or more, and 84 EAs in our sample gave 10% of their income. You can see the past and planned donations that people have chosen to make public on the EA Donation Registry.
  • The top three charities donated to by EAs in our sample were GiveWell's three picks for 2013: AMF, SCI, and GiveDirectly. MIRI was the fourth largest donation target, followed by unrestricted donations to GiveWell.
  • Poverty was the most popular cause among EAs in our sample, followed by metacharity and then rationality.
  • 33.1% of EAs in our sample were either vegan or vegetarian.
  • 34.1% of EAs in our sample who indicated a career said they were aiming to earn to give.

Comment by tog on You Can Face Reality · 2015-10-02T15:24:55.822Z · LW · GW

You're conflating something here. The statement only refers to "what is true", not your situation; each pronoun refers only to "what is true".

In that case saying "Owning up to the truth doesn't make the truth any worse" is correct, but doesn't settle the issue at hand as much as people tend to think it does. We don't just care about whether someone owning up to the truth makes the truth itself worse, which it obviously doesn't. We also care about whether it makes their or other people's situation worse, which it sometimes does.

Comment by tog on Graphical Assumption Modeling · 2015-08-19T15:15:15.431Z · LW · GW

I like the name it sounds like you may be moving to - "guesstimate".

Comment by tog on Graphical Assumption Modeling · 2015-08-19T15:12:48.300Z · LW · GW

Do you think you'd use this out of interest Owen?

Comment by tog on Yvain's most important articles · 2015-08-18T05:12:47.964Z · LW · GW

And a friend requests an article comparing IQ and conscientiousness as predictors of different things.

Comment by tog on Yvain's most important articles · 2015-08-18T05:12:20.554Z · LW · GW

I asked for a good general guide to IQ (and in particular its objectivity and importance) on the LW FB group a while back. I got a bunch of answers, including these standouts:

http://www.psych.utoronto.ca/users/reingold/courses/intelligence/cache/1198gottfred.html

http://www.newscientist.com/data/doc/article/dn19554/instant_expert_13_-_intelligence.pdf

But there's still plenty of room for improvement on those so I'd be curious to hear others' suggestions.

Comment by tog on Yvain's most important articles · 2015-08-18T05:10:01.297Z · LW · GW

I've been looking for this all my life without even knowing it. (Well, at least for half a year.)

Comment by tog on Effective Altruism from XYZ perspective · 2015-07-15T20:16:07.797Z · LW · GW

That being said, what I'm not interested in as my sole aim is to be maximally effective at doing good. I'm more interested in expressing my values in as large and impactful a way as possible - and in allowing others to do the same. This happens to coincide with doing lots and lots of good, but it definitely doesn't mean that I would begin to sacrifice my other values (eg fun, peace, expression) to maximize good.

It's interesting to ask to what extent this is true of everyone - I think we've discussed this before Matt.

Your version and phrasing of what you're interested in is particular to you, but we could broaden the question out to ask how far people have moved away from having primarily self-centred drives which overwhelm others when significant self-sacrifice is on the table. I think some people have gone a long way in moving away from that, but I'm sceptical that any single human being goes the full distance. Most EAs plausibly don't make any significant self-sacrifices, if measured in terms of their happiness significantly dipping.* The people I know who have gone the furthest may be Joey and Kate Savoie, with whom I've talked about these issues a lot.

* Which doesn't mean they haven't done a lot of good! If people can donate 5% or 10% or 20% of their income without becoming significantly less happy then that's great, and convincing people to do that is a low hanging fruit that we should prioritise, rather than focusing our energies on then squeezing out extra sacrifices that start to really eat into their happiness. The good consequences of people donating are what we really care about after all, not the level of sacrifice they themselves are making.

Comment by tog on Productivity poll: how frequently do you think you *should* check email? · 2015-07-15T08:14:12.588Z · LW · GW

People's expectation clock starts running from the time they hit send. More importantly, deadlines related to the email content really set the agenda for how often to check your email.

Then change people's expectations, including those of the deadlines appropriate for tasks communicated by emails that people may not see for a while! (Partly a tongue-in-cheek answer - I know this may not be feasible, and you make a fair point.)

Comment by tog on Effective Altruism from XYZ perspective · 2015-07-13T01:27:27.153Z · LW · GW

As far as I know, nobody who identifies with EA routinely makes individual decisions between personal purchases and donating. [ ... ] Very few people, if any, cut personal spending to the point where they have to worry about, e.g., electricity bills.

I do know - indeed, live with :S - a couple.

Comment by tog on Effective Altruism from XYZ perspective · 2015-07-12T16:44:17.967Z · LW · GW

Effective altruism ≠ utilitarianism

Here's the thread on this at the EA Forum: Effective Altruism and Utilitarianism

Comment by tog on 'Effective Altruism' as utilitarian equivocation. · 2015-07-12T16:43:34.994Z · LW · GW

Here's the thread on this at the EA Forum: Effective Altruism and Utilitarianism

Comment by tog on Lesswrong, Effective Altruism Forum and Slate Star Codex: Harm Reduction · 2015-06-19T07:11:52.304Z · LW · GW

Potentially worth actually doing - what'd be the next step in terms of making that a possibility?

Relevant: a bunch of us are coordinating improvements to the identical EA Forum codebase at https://github.com/tog22/eaforum and https://github.com/tog22/eaforum/issues

Comment by tog on SSC Discussion: No Time Like The Present For AI Safety Work · 2015-06-05T17:00:50.820Z · LW · GW

Thanks, fixed, now points to http://slatestarcodex.com/2015/05/29/no-time-like-the-present-for-ai-safety-work/

Comment by tog on SSC discussion: "bicameral reasoning", epistemology, and scope insensitivity · 2015-05-27T05:11:09.228Z · LW · GW

For my part, I'm interested in the connection to GiveWell's powerful advocacy of "cluster thinking". I'll think about this some more and post thoughts if I have time.

Comment by tog on Effective effective altruism: Get $400 off your next charity donation · 2015-04-23T17:29:58.320Z · LW · GW

http://www.moneysavingexpert.com/ is the best way to learn about these.

Comment by tog on Effective effective altruism: Get $400 off your next charity donation · 2015-04-20T22:52:55.218Z · LW · GW

Shop for Charity is much better - 5%+ directly to GiveWell-recommended charities, plus browser plugins people have made that apply this every time you buy from Amazon.

Comment by tog on What level of compassion do you consider normal, expected, mandatory etc. ? · 2015-04-20T22:47:33.593Z · LW · GW

Did you edit your original comment?

Not that I recall.

Comment by tog on What level of compassion do you consider normal, expected, mandatory etc. ? · 2015-04-15T15:07:08.890Z · LW · GW

Some people offer arguments - eg http://philpapers.org/archive/SINTEA-3.pdf - and for some people it's a basic belief or value not based on argument.

Comment by tog on What level of compassion do you consider normal, expected, mandatory etc. ? · 2015-04-11T16:20:40.655Z · LW · GW

This is a good solution when marginal money has roughly equal utility to Alice and Bob, but suffers otherwise.

Comment by tog on What level of compassion do you consider normal, expected, mandatory etc. ? · 2015-04-11T16:18:14.893Z · LW · GW

If C doesn't want A to play music so loud, but it's A's right to do so, why should A oblige? What is in it for A?

Some (myself included) would say that A should oblige if doing so would increase total utility, even if there's nothing in it for A self-interestedly. (I'm assuming your saying A had a right to play loud music wasn't meant to exclude this.)

Comment by tog on What level of compassion do you consider normal, expected, mandatory etc. ? · 2015-04-11T16:14:13.635Z · LW · GW

"Tit-for-tat is a better strategy than Cooperate-Bot."

Can you use this premise in an explicit argument that expected reciprocation should be a factor in your decision to be nice toward others? How big a factor, relative to others (e.g. what maximises utility)? If there's an easy link to such an argument, all the better!

Comment by tog on What level of compassion do you consider normal, expected, mandatory etc. ? · 2015-04-11T16:10:54.895Z · LW · GW

What if people don't believe in 'duty' - eg certain sorts of consequentialists?

Comment by tog on Slate Star Codex: alternative comment threads on LessWrong? · 2015-03-29T20:49:45.797Z · LW · GW

Upvotes/downvotes on LW might take care of the quality worry.

Comment by tog on Discussion of Slate Star Codex: "Extremism in Thought Experiments is No Vice" · 2015-03-29T06:50:31.600Z · LW · GW

How about moral realist consequentialism? Or a moral realist deontology with defeasible rules, like a prohibition on murder? These can certainly be coherent. I'm not sure what you require for them to be non-arbitrary, but one case for consequentialism's being non-arbitrary would be that it is based on a direct acquaintance with, or perception of, the badness of pain and the goodness of happiness. (I find this case plausible.) For a paper on this, see http://philpapers.org/archive/SINTEA-3.pdf

Comment by tog on Slate Star Codex: alternative comment threads on LessWrong? · 2015-03-29T01:18:12.358Z · LW · GW

Are you good to do these posts in the future? If not, is anyone else?

Comment by tog on Discussion of Slate Star Codex: "Extremism in Thought Experiments is No Vice" · 2015-03-28T15:17:18.093Z · LW · GW

I largely agree with the post. Saying that Robertson's thought experiment was off limits and that he was fantasising about beheading and raping atheists is silly. I think many people's reaction was explained by their frustration with his faulty assumption that all atheists are necessarily (implicitly or explicitly) nihilists of the sort who'd say there's nothing wrong with murder.

One amendment I'd make to the post is that many error theorists and non-cognitivists wouldn't be on board with what the murderer is saying in the thought experiment. For example, they could be quasi-realists. I say this as someone who personally leans moral realist.

Comment by tog on Slate Star Codex: alternative comment threads on LessWrong? · 2015-03-28T15:06:28.656Z · LW · GW

The latest from Scott:

I'm fine with anyone who wants reposting things for comments on LW, except for posts where I specifically say otherwise or tag them with "things i will regret writing"

In this thread some have also argued for not posting the most hot-button political writings.

Would anyone be up for doing this? Ataxerxes started with "Extremism in Thought Experiments is No Vice"

Comment by tog on Slate Star Codex: alternative comment threads on LessWrong? · 2015-03-28T15:03:11.811Z · LW · GW

On fragmentation, I find Raemon's comment fairly convincing:

2) Maybe it'll split the comments? Sure, but the comments there are already huge and unwieldy (possibly more-than-dunbar's number worth of commenters) so I'm actually fine with that. Discussion over there is already pretty split up among comment threads in a hard to follow fashion.

Comment by tog on Slate Star Codex: alternative comment threads on LessWrong? · 2015-03-28T02:43:09.055Z · LW · GW

To be clear, I don't have the time to do it personally, I'd just do it for any posts I'd particularly enjoy reading discussion on or discussing. So if someone else feels it's a good idea and Scott's cool with it, their doing it would be the best way to make it happen.

Comment by tog on Slate Star Codex: alternative comment threads on LessWrong? · 2015-03-28T01:33:03.271Z · LW · GW

I would be more in favour of pushing SSC to have up/downvotes

That doesn't look like a goer given Scott's response that I quoted.

I would certainly be against linking every single post here given that some of them would be decisively off topic.

Noting that it may be best to exclude some posts as off topic.

Comment by tog on Slate Star Codex: alternative comment threads on LessWrong? · 2015-03-27T22:01:56.386Z · LW · GW

I'm not sure those topics are outside the norms of LW, aside from the puns. Cf. this discussion: http://lesswrong.com/r/discussion/lw/lj4/what_topics_are_appropriate_for_lesswrong/

Comment by tog on Slate Star Codex: alternative comment threads on LessWrong? · 2015-03-27T22:00:30.657Z · LW · GW

There's discussion of this on the LW Facebook group: https://www.facebook.com/groups/144017955332/permalink/10155300261480333/

It includes this comment from Scott:

I've unofficially polled readers about upvotes for comments and there's been what looks like a strong consensus against it on some of the grounds Benjamin brings up. I'm willing to listen to other proposals for changing the comments, although if it's not do-able via an easy WordPress plugin someone else will have to do it for me.

Comment by tog on Join a major effective altruist fundraiser: get sponsored to eat for under $2.50 a day · 2015-03-19T23:05:23.284Z · LW · GW

SCI used them some previous years.

Comment by tog on Join a major effective altruist fundraiser: get sponsored to eat for under $2.50 a day · 2015-03-19T16:14:50.973Z · LW · GW

Yes, LBTL actually doesn't have any GiveWell charities this year, and also charges the charities a 10% fee plus thousands up front; we don't take any cut. We're officially partnered with SCI on this and are their preferred venue.

Comment by tog on [LINK] Terry Pratchett is dead · 2015-03-12T17:49:02.885Z · LW · GW

Very sad. I enjoyed his books - I'd particularly recommend Small Gods for LessWrongers (it's also the one I enjoyed most in general).

Has anyone seen anything on how he died?