Posts

4 days left in Giving What We Can's 2015 fundraiser - £34k to go 2015-06-27T02:16:22.956Z
Giving What We Can needs your help! 2015-05-29T16:30:26.038Z
Six Ways To Get Along With People Who Are Totally Wrong* 2015-05-27T12:37:40.552Z
Could you be Prof Nick Bostrom's sidekick? 2014-12-05T01:09:37.040Z
The Centre for Effective Altruism is hiring to fill five roles in research, operations and outreach 2014-11-19T22:41:16.968Z
The principle of ‘altruistic arbitrage’ 2012-04-09T01:29:17.794Z

Comments

Comment by RobertWiblin on A LessWrong Crypto Autopsy · 2018-01-31T02:42:21.462Z · LW · GW
1: Our epistemic rationality has probably gotten way ahead of our instrumental rationality

I would defend the instrumental rationality of having a rule of thumb that unless you're quite wealthy, you don't bother looking into anything that appears to be a 'get rich quick' scheme, or seek to invest in high-risk high-return projects you can't evaluate.

Yes, sometimes it will fail big: you miss the boat on bitcoin, or Facebook, or whatever. Every strategy fails in some scenarios. Sometimes betting it all on 23 red will have been the right call.

But because it i) lowers risk, ii) saves you from wasting time looking into lots of dud investments to find the occasional good one, and iii) makes you less of a mark for scams and delusions, I think it's sensible for most people.

Comment by RobertWiblin on A LessWrong Crypto Autopsy · 2018-01-31T02:23:14.563Z · LW · GW

From a selfish point of view, I don't think most rationalists would benefit significantly from a bit of extra money, so it doesn't make much sense for them to dedicate their truly precious resources (time and attention) to identifying high-risk high-return investments like bitcoin, and in this case figuring out how to buy and store it safely. And I'm someone who bought bitcoin for the sake of entertainment.

From an altruistic point of view, yes, I expect hundreds of millions of dollars to be donated, and the current flow is consistent with that - I know of 5 million in the last few months, and there's probably more that hasn't been declared.

"then it's no longer so plausible that "hundreds of millions is a substantial fraction as good as billions"."

At the full community level the marginal returns on further donations also declines, though more slowly: https://80000hours.org/2017/11/talent-gaps-survey-2017/#how-diminishing-are-returns-in-the-community

Comment by RobertWiblin on A LessWrong Crypto Autopsy · 2018-01-30T14:35:42.809Z · LW · GW

Collectively the community has made hundreds of millions from crypto. But it did so by getting a few wealthy people to buy many bitcoin, rather than many people to buy a few bitcoin. This is a more efficient model because it avoids big fixed costs for each individual.

It also avoids everyone in the community having to dedicate some of their attention to thinking about what outstanding investment opportunities might be available today.

Due to declining marginal returns, hundreds of millions is a substantial fraction as good as billions. So I think we did alright.

Comment by RobertWiblin on Effective altruism is self-recommending · 2017-04-21T20:01:20.643Z · LW · GW

"After they were launched, I got a marketing email from 80,000 Hours saying something like, "Now, a more effective way to give." (I’ve lost the exact email, so I might be misremembering the wording.) This is not a response to demand, it is an attempt to create demand by using 80,000 Hours’s authority, telling people that the funds are better than what they're doing already. "

I write the 80,000 Hours newsletter and it hasn't yet mentioned EA Funds. It would be good if you could correct that.

Comment by RobertWiblin on 80,000 Hours: EA and Highly Political Causes · 2017-01-29T00:26:51.572Z · LW · GW

"If we could somehow install Holden Karnofsky as president it would probably improve the lives of a billion people"

Amusingly, our suggestion of these two charities is entirely syndicated from a blog post put up by Holden Karnofsky himself: http://www.openphilanthropy.org/blog/suggestions-individual-donors-open-philanthropy-project-staff-2016

Comment by RobertWiblin on 80,000 Hours: EA and Highly Political Causes · 2017-01-29T00:24:49.918Z · LW · GW

Thanks for your interest in our work.

As we say in the post, on this and most problem areas 80,000 Hours defers charity recommendations to experts on that particular cause (see: What resources did we draw on?). In this case our suggestion is based entirely on the suggestion of Chloe Cockburn, the Program Officer for Criminal Justice Reform at the Open Philanthropy Project, who works full time on that particular problem area and knows much more than any of us about what is likely to work.

To questions like "does 80,000 Hours have view X that would make sense of this" or "is 80,000 Hours intending to do X" - the answer is that we don't really have an independent view on any of these things. We're just syndicating content from someone we perceive to be an authority (just as we do when we include GiveWell's recommended charities without having independently investigated them). I thought the article was very clear about this, but perhaps we needed to make it even more so in case people skipped down to a particular section without reading the preamble.

If you want to get these charities removed then you'd need to speak with Chloe. If she changes her suggestions - or another similar authority on this topic appears and offers a contrary view - then that would change what we include.

Regarding why we didn't recommend the Center for Criminal Justice Reform: again, that is entirely because it wasn't on the Open Philanthropy Project's list of suggestions for individual donors. Presumably that is because they felt their own grant - which you approve of - had filled their current funding needs.

All the best,

Rob

Comment by RobertWiblin on 4 days left in Giving What We Can's 2015 fundraiser - £34k to go · 2015-07-20T01:57:51.800Z · LW · GW

Yes, thanks so much to everyone who contributed! :)

Comment by RobertWiblin on Giving What We Can needs your help! · 2015-06-27T00:09:53.938Z · LW · GW

Hi Eric - no they don't!

Comment by RobertWiblin on Giving What We Can needs your help! · 2015-05-29T16:32:47.273Z · LW · GW

This fundraiser has been promoted on the Effective Altruism Forum already, so you may find your questions answered on the thread:

http://effective-altruism.com/ea/hz/please_support_giving_what_we_can_this_spring/

http://effective-altruism.com/ea/j9/giving_what_we_can_needs_your_help/

Comment by RobertWiblin on Six Ways To Get Along With People Who Are Totally Wrong* · 2015-05-27T12:38:14.527Z · LW · GW

I'll re-post this comment as well:

"If I was going to add another I think it would be

  7. Have fun

Talking to people who really disagree with you can represent a very enjoyable intellectual exploration if you approach it the right way. Detach yourself from your own opinions, circumstances and feelings and instead view the conversation as a neutral observer who was just encountering the debate for the first time. Appreciate the time the other person is putting into expressing their points. Reflect on how wrong most people have been throughout history and how hard it is to be confident about anything. Don't focus just yet on the consequences or social desirability of the different views being expressed - just evaluate how true they seem to be on their merits. Sometimes this perspective is described as 'being philosophical'."

Comment by RobertWiblin on [Link] An argument on colds · 2015-01-18T22:05:03.504Z · LW · GW

When someone has an incurable and lethal respiratory illness, I think we do require them to stay in quarantine and this is broadly accepted. The reason this doesn't apply to HIV and other such diseases is that they are barely contagious.

Comment by RobertWiblin on [Link] An argument on colds · 2015-01-18T21:15:28.104Z · LW · GW

Well, I wasn't proposing a strict quarantine or limits on travel - merely preventing people from coming into close contact with colleagues at work, where the risk of contagion is highest, and requiring that they be given the option to reschedule their (expensive) travel. People are already familiar and comfortable with regulations in workplaces and aviation.

If I were proposing a thoroughgoing quarantine, I expect people wouldn't be nearly as enthusiastic.

Comment by RobertWiblin on Could you be Prof Nick Bostrom's sidekick? · 2014-12-13T12:24:58.045Z · LW · GW

Thanks for the feedback.

Note it was also the most popular post on the Facebook group (as measured by likes) in almost two weeks, so clearly some other members thought this was a sensible proposal.

I can see how it could come across as 'hero worship', except that Bostrom is indeed a widely recognised, world-leading academic at the highest-ranked philosophy department in the world. There are sound reasons to be respectful of his work.

"sexual innuendo"

I can assure you the intended level of sexual innuendo in this ad is less than zero.

Comment by RobertWiblin on Could you be Prof Nick Bostrom's sidekick? · 2014-12-08T15:31:59.487Z · LW · GW

As we have not secured funding yet, it would be premature to do either of these things. We can negotiate a salary later in the process, depending on the person's qualifications.

Comment by RobertWiblin on Could you be Prof Nick Bostrom's sidekick? · 2014-12-05T14:57:09.736Z · LW · GW

I think it'll be faster to get a sense of that from a personal conversation.

Comment by RobertWiblin on Could you be Prof Nick Bostrom's sidekick? · 2014-12-05T13:02:59.741Z · LW · GW

Exactly - if anything I am trying to make the job seem less appealing than it will be, so we attract only the right kind of person.

Comment by RobertWiblin on When should an Effective Altruist be vegetarian? · 2014-11-24T22:27:46.463Z · LW · GW

I was just giving what would be sufficient conditions, but they aren't all strictly necessary.

Comment by RobertWiblin on When should an Effective Altruist be vegetarian? · 2014-11-24T20:24:11.063Z · LW · GW

If you can't otherwise improve their lives, the death is painless, and murder isn't independently bad.

Comment by RobertWiblin on When should an Effective Altruist be vegetarian? · 2014-11-24T18:58:58.963Z · LW · GW

"Isn't it suspicious that people who make the strange claim that animals count as objects of moral concern also make the strange claim that animal lives aren't worth living"

No, this makes perfect sense. 1. They decide animals are objects of moral concern. 2. Look into the conditions they live in, and decide that in some cases they are worse than not being alive. 3. Decide it's wrong to fund expansion of a system that holds animals in conditions that are worse than not being alive at all.

Comment by RobertWiblin on When should an Effective Altruist be vegetarian? · 2014-11-24T18:44:34.778Z · LW · GW

For what it's worth, I've found being vegetarian almost no effort at all. Being vegan is a noticeable inconvenience, especially cutting out the last bits of dairy (and that shows up in your examples, which are both about dairy).

Comment by RobertWiblin on The Centre for Effective Altruism is hiring to fill five roles in research, operations and outreach · 2014-11-21T18:30:14.660Z · LW · GW

Yes, you can apply for whatever combination of positions you like.

Comment by RobertWiblin on The Centre for Effective Altruism is hiring to fill five roles in research, operations and outreach · 2014-11-21T18:29:59.651Z · LW · GW

If not immediately, then at some point, yes.

Comment by RobertWiblin on The Centre for Effective Altruism is hiring to fill five roles in research, operations and outreach · 2014-11-20T12:11:38.356Z · LW · GW

Hey, this doesn't seem like the best location for it. Is there a post on the 80,000 Hours or EA blogs related to your criticism you could use?

Comment by RobertWiblin on 2013 Survey Results · 2014-03-24T18:19:53.695Z · LW · GW

"Finally, at the end of the survey I had a question offering respondents a chance to cooperate (raising the value of a potential monetary prize to be given out by raffle to a random respondent) or defect (decreasing the value of the prize, but increasing their own chance of winning the raffle). 73% of effective altruists cooperated compared to 70% of others - an insignificant difference."

Assuming an EA thinks they will use the money better than the typical other winner, the most altruistic thing to do could be to increase their chances of winning, even at the cost of a lower prize. Or maybe they like the person putting up the prize, in which case they would prefer it to be smaller.

Comment by RobertWiblin on Effective Altruism Through Advertising Vegetarianism? · 2013-06-15T11:07:27.258Z · LW · GW

"Public declarations would only be signaling, having little to do with maximizing good outcomes."

On the contrary, trying to influence other people in the AI community to share Eliezer's (apparent) concern for the suffering of animals is very important, for the reason given by David.

"I am also not aware of any Less Wrong post or sequence establishing (or really even arguing for) your view as the correct one."

a) Less Wrong doesn't contain the best content on this topic.

b) Most of the posts disputing whether animal suffering matters are written by un-empathetic non-realists, so we would have to discuss meta-ethics and how to deal with meta-ethical uncertainty to convince them.

c) The reason has been given by Pablo Stafforini - when I directly experience the badness of suffering, I don't only perceive that suffering is bad for me (or bad for someone with blonde hair, etc.), but that suffering would be bad regardless of who experienced it (so long as they did actually have the subjective experience of suffering).

d) Even if there is some uncertainty about whether animal suffering is important, that would still require that it be taken quite seriously; even if there were only a 50% chance that other humans mattered, it would be bad to lock them up in horrible conditions, or to signal through my actions to potentially influential people that doing so is OK.

Comment by RobertWiblin on Effective Altruism Through Advertising Vegetarianism? · 2013-06-15T01:57:58.953Z · LW · GW

Needless to say, I think 1 is settled. As for the second point - Eliezer and his colleagues hope to exercise a lot of control over the future. If he is inadvertently promoting bad values to those around him (e.g. it's OK to harm the weak), he is increasing the chance that any influence they have will be directed towards bad outcomes.

Comment by RobertWiblin on Effective Altruism Through Advertising Vegetarianism? · 2013-06-14T23:29:06.313Z · LW · GW

I think David is right. It is important that people who may have a big influence on the values of the future lead the way by publicly declaring and demonstrating that suffering (and pleasure) are important wherever they occur, whether in humans or mice.

Comment by RobertWiblin on Effective Altruism Through Advertising Vegetarianism? · 2013-06-14T23:24:06.657Z · LW · GW

I think some weighting for the sophistication of a brain is appropriate, but I think the weighting should be sub-linear w.r.t. the number of neurones; I expect that in simpler organisms, a larger share of the brain will be dedicated to processing sensory data and generating experiences. I would love someone to look into this to check if I'm right.

Comment by RobertWiblin on Peter Singer and Tyler Cowen transcript · 2012-04-09T04:59:52.380Z · LW · GW

Thanks for doing this. I found it very memorable when it first aired years ago.

Comment by RobertWiblin on The principle of ‘altruistic arbitrage’ · 2012-04-09T02:44:20.372Z · LW · GW

To Larks and Shminux - I am twisting the idea of arbitrage to be more like 'economic profit' or 'being sure to beat the market rate of return on investment/altruism'. Maybe I should stop using the term arbitrage.

"Isn't your point basically just that consumer surplus can be unusually high for individuals with unusual demand functions because the supply (of chances to do good) is fixed so lower demand => lower price?"

Yes, though the supply curve just slopes upwards - it isn't vertical.

I could re-write the principle as: 'when supply curves slope upwards, the purchases that offer the highest consumer surplus to you will mostly be things that you value but others don't.' In financial markets that isn't so important, as most investors have very similar values, but in other areas it matters more.

I like your point about feedback loops in finance, but shouldn't proven effective philanthropists attract more donations if people cared about efficacy?

Comment by RobertWiblin on Attention Lurkers: Please say hi · 2010-04-19T09:46:26.776Z · LW · GW

I lurked until I read something I really disagreed with.

Comment by RobertWiblin on Attention Lurkers: Please say hi · 2010-04-19T09:45:43.226Z · LW · GW

I lurked until a few weeks back, when I read something I really disagreed with.

Comment by RobertWiblin on Pain and gain motivation · 2010-04-11T16:25:52.939Z · LW · GW

Possibly doing nothing is a good idea for hunter-gatherers facing starvation, but that seems worth checking against the anthropology research. If starvation were a frequent risk, lethargy would surely have been prompted by insufficient food intake, which is rare for humans today; we wouldn't just be lazy for that reason all the time. During times of abundance you ought to gather and store as much food as possible.

Apparently hunter-gatherer bands were egalitarian, so it's unlikely people would have been beaten up by (non-existent) leaders just for hunting and gathering well, especially given food was shared. Again, the conditions under which people would be picked on in bands are something we can find out by looking at existing anthropology research. Nonetheless, it's hard to imagine that hunter-gatherer bands which pushed out members merely for contributing to the food intake of the group would be the most successful. We don't favour do-nothings over well-meaning incompetents today, as far as I can tell.

Comment by RobertWiblin on Pain and gain motivation · 2010-04-10T12:46:40.196Z · LW · GW

I am skeptical of the evolutionary explanation he proposes for inactivity.

I don't believe large numbers of people were typically thrown out of hunter-gatherer bands for incompetence - surely not more than were thrown out for inactivity (http://books.google.com.au/books?sitesec=reviews&id=ljxS8gUlgqgC). And in how many crisis situations is doing nothing really the best option? Hiding from a predator would surely be one of only a few.

Comment by RobertWiblin on SIA won't doom you · 2010-03-26T01:23:36.077Z · LW · GW

That's because there seem to be many ways for us to go extinct. If we are necessarily held back from space for thousands of years, it's very unlikely we would last that long just here on Earth.

Comment by RobertWiblin on Welcome to Heaven · 2010-01-30T09:17:56.895Z · LW · GW

I am using (total) preference utilitarianism to mean: "we should act so as to maximise the number of beings' preferences that are satisfied anywhere at any time".

"As for hedonistic utilitarians, why would any existing mind want to build something like that or grow into something like that?"

Because they are not selfish and they are concerned about the welfare of that being in proportion to its ability to have experiences?

"Further, why would something like that be better at seizing resources?"

That's a weakness, but at some point we have to start switching from maximising resource capture to using those resources to generate good preference satisfaction (or good experiences if you're a hedonist). At that point a single giant 'utility monster' seems most efficient.

Comment by RobertWiblin on That Magical Click · 2010-01-30T06:36:16.415Z · LW · GW

I can understand valuing oneself more than others (simple selfishness is unsurprising), but I think Eliezer is saying cryonics is a positive good, not just that it benefits some people at the equal expense of others.

If uploading succeeds, as is probably required for frozen people to be woken up, the future will be able to make minds as diverse as it likes.

Comment by RobertWiblin on Welcome to Heaven · 2010-01-30T05:28:38.140Z · LW · GW

What if we could create a wirehead that made us feel as though we were doing 1 or 2? Would that be satisfactory to more people?

Comment by RobertWiblin on Welcome to Heaven · 2010-01-30T04:07:38.392Z · LW · GW

OK, I'm new to this.

Comment by RobertWiblin on That Magical Click · 2010-01-29T17:35:50.154Z · LW · GW

Why is it worse to die (and people cryonically frozen don't avoid the pain of death anyway) than to never have been born? Assuming the process of dying isn't painful, they seem the same to me.

Comment by RobertWiblin on Welcome to Heaven · 2010-01-29T17:24:58.076Z · LW · GW

Probably the proportion of 'kill all humans' AIs that are friendly is low. But perhaps the proportion of FAIs that 'kill all humans' is large.

Comment by RobertWiblin on That Magical Click · 2010-01-29T17:09:20.879Z · LW · GW

Sorry, this may be a stupid question, but why is it good for people to get cryonically frozen? Obviously if they don't they won't make it to the future - but other people will be born or duplicated in the future and the total number of people will be the same.

Why care more about people who live now than future potential people?

Comment by RobertWiblin on You cannot be mistaken about (not) wanting to wirehead · 2010-01-29T16:58:13.818Z · LW · GW

Silly to worry only about the preferences of your present self - you should also act to change your preferences to make them easier to satisfy. Your potential future self matters as much as your present self does.

Comment by RobertWiblin on Welcome to Heaven · 2010-01-29T16:43:54.529Z · LW · GW

What are they then?

Comment by RobertWiblin on Welcome to Heaven · 2010-01-29T16:39:44.646Z · LW · GW

Utility as I care about it is probably the result of information processing. It's not clear why information should only be able to be processed in that way by human-type minds, let alone fleshy ones.

Comment by RobertWiblin on Welcome to Heaven · 2010-01-29T14:05:36.456Z · LW · GW

You will be gone and something which does want to be a big integer will replace you and use your resources more effectively. Both hedonistic and preference utilitarianism demand it.

Comment by RobertWiblin on Welcome to Heaven · 2010-01-29T13:58:23.231Z · LW · GW

Needn't be total - average utilitarianism would suggest creating one single extremely happy being, probably not human.

Needn't only include hedonic pleasure - a preference utilitarian might support eliminating humans and replacing them with beings whose preferences are cheap to satisfy (hedonic pleasure being one cheap preference). Or you could value multiple kinds of pleasure, but see hedonic pleasure as always more efficient to deliver, as proposed in the post.

Comment by RobertWiblin on Welcome to Heaven · 2010-01-29T13:53:49.544Z · LW · GW

Who cares about humans exactly? I care about utility. If the AI thinks humans aren't an efficient way of generating utility, we should be eliminated.