Posts

[Madison] Collaborative Truthseeking 2019-03-26T04:09:55.881Z · score: 3 (1 votes)
[Madison] Meditations on Moloch 2018-11-28T19:21:08.231Z · score: 4 (2 votes)
Social Meetup: Bandung Indonesian 2018-11-17T06:44:22.671Z · score: 3 (1 votes)
Is skilled hunting unethical? 2018-02-17T18:48:21.635Z · score: 16 (15 votes)

Comments

Comment by elephantiskon on Information empathy · 2019-07-30T05:56:23.620Z · score: 12 (4 votes) · LW · GW

Why should we say that someone has "information empathy" instead of saying they possess a "theory of mind"?

Possible reasons: "theory of mind" is an unwieldy term; it might be useful to distinguish in fewer words a theory of mind with respect to beliefs from a theory of mind with respect to preferences; or you want to emphasise a connection between empathy and information empathy.

I think if there's established terminology for something we're interested in discussing, there should be a pretty compelling reason why it doesn't suffice for us.

Comment by elephantiskon on On AI and Compute · 2019-04-04T15:30:17.491Z · score: 8 (7 votes) · LW · GW

It felt weird to me to describe shorter timeline projections as "optimistic" and longer ones as "pessimistic": AI research taking place over a longer period is more likely to give us friendly AI, right?

Comment by elephantiskon on (Why) Does the Basilisk Argument fail? · 2019-02-10T14:12:26.657Z · score: 1 (1 votes) · LW · GW

This approach can be made a little more formal with FDT/LDT/TDT: being the sort of agent who robustly does not respond to blackmail maximises utility more than being the sort of agent who sometimes gives in to blackmail, because you will not wind up in situations where you're being blackmailed.

Comment by elephantiskon on Subjunctive Tenses Unnecessary for Rationalists? · 2018-10-09T16:52:24.784Z · score: 3 (3 votes) · LW · GW

The subjunctive mood, and really anything involving modality, is complicated. Paul Portner has a book on mood which is probably a good overview if you're willing to get technical. Right now I think of moods as expressing presuppositions on the set of possible worlds you quantify over in a clause. I don't think it's often a good idea to try to get people to speak their native language in a way incompatible with how they acquired it in childhood; it adds extra cognitive load and probably doesn't affect how people reason (the exception being giving them new words and categories, which I think can clearly help reasoning in some circumstances).

Comment by elephantiskon on A compendium of conundrums · 2018-10-08T21:14:48.140Z · score: 4 (3 votes) · LW · GW

These are a blast!

Comment by elephantiskon on Advice Wanted; Reconcile with religious parent · 2018-09-22T15:27:53.065Z · score: 2 (2 votes) · LW · GW

I'm an atheist and had an awesome Yom Kippur this year, so believing in God isn't a pre-req for going to services and not being unhappy. I think it would be sad if your father's kids gave up ritual practices that were especially meaningful to him and presumably to his ancestors. I think it would be sad if you sat through services that were really unpleasant for you year after year. I think it would be really sad if your relationship with your father blew up over this.

I think the happiest outcome would be that you wind up finding bits of the high holidays that you can enjoy, and your dad is satisfied with you maybe doing a little less than he might like. Maybe being stuck in synagogue for an entire day is bad, but going there for an hour or two gives you some interesting ethnographic observations to mull over. Talk it out with him, see what he really values, and compromise if you can.

Comment by elephantiskon on Wirehead your Chickens · 2018-06-21T21:43:39.485Z · score: 31 (10 votes) · LW · GW

I've seen this discussed before by Rob Wiblin and Lewis Bollard on the 80,000 Hours podcast (edit: tomsittler actually beat me to the punch in mentioning this).

Robert Wiblin: Could we take that even further and ultimately make animals that have just amazing lives that are just constantly ecstatic like they’re on heroin or some other drug that makes people feel very good all the time whenever they are in the farm and they say, “Well, the problem has basically been solved because the animals are living great lives”?
Lewis Bollard: Yeah, so I think this is a really interesting ethical question for people about whether that would, in people's minds, solve the problem. I think from a pure utilitarian perspective it would. A lot of people would find that kind of perverse, having, for instance, particularly I think if you're talking about animals that might psychologically feel good even in terrible conditions. I think the reason why it's probably going to remain a thought experiment, though, is that it ultimately relies on the chicken genetics companies and the chicken producers to be on board...

I encourage anyone interested to listen to this part of the podcast or read it in the transcript, but it seems clear to me right now that it will be far easier to develop clean meat which is widely adopted than to create wireheaded chickens whose meat is widely adopted.

In particular, I think that implementing these strategies from the OP will be at least as difficult as creating clean meat:

  • breed animals who enjoy pain, not suffer from it
  • breed animals that want to be eaten, like the Ameglian Major Cow from the Hitchhiker's Guide to the Galaxy

I think that getting these strategies widely adopted is at least as difficult as getting enough welfare improvements widely adopted to make non-wireheaded chicken lives net-positive:

  • identify and surgically or chemically remove the part of the brain that is responsible for suffering
  • at birth, amputate the non-essential body parts that would give the animals discomfort later in life

I think that breeding for smaller brains is not worthwhile because smaller brain size does not guarantee reduced suffering capacity, and getting it widely adopted by chicken breeders is not obviously easier than getting many welfare improvements widely adopted.

I'm not as confident that injecting chickens with opioids would be a bad strategy, but getting this widely adopted by chicken farms is not obviously easier to me than getting many other welfare improvements widely adopted. I would be curious to see the details of the study romeostevensit mentioned, but my intuition is that outrage at that practice would far exceed outrage at current factory farm practices because of "unnaturalness", which would make adoption difficult even if the cost of opioids is low.

Comment by elephantiskon on Beyond Astronomical Waste · 2018-06-12T11:54:25.512Z · score: 3 (1 votes) · LW · GW

Nothing, if your definition of a copy is sufficiently general :-)

Am I understanding you right that you believe in something like a computational theory of identity and think there's some sort of bound on how complex something we'd attribute moral patienthood or interestingness to can get? I agree with the former, but don't see much reason for believing the latter.

Comment by elephantiskon on A Rationalist Argument for Voting · 2018-06-08T01:30:04.672Z · score: 4 (2 votes) · LW · GW

I just listened to a great talk by Nick Bostrom I'd managed to miss before now which mentions some considerations in favor of and opposed to voting. He does this to illustrate a general trend that in certain domains it's easy to come across knock-down arguments ("crucial considerations") that invalidate or at least strongly counter previous knock-down arguments. Hope I summarized that OK!

When I last went to the polls, I think my main motivation for doing so was functional decision theory.

Comment by elephantiskon on Beyond Astronomical Waste · 2018-06-08T01:17:53.867Z · score: 6 (3 votes) · LW · GW

I feel like scope insensitivity is something to worry about here. I'd be really happy to learn that humanity will manage to take good care of our cosmic endowment but my happiness wouldn't scale properly with the amount of value at stake if I learned we took good care of a super-cosmic endowment. I think that's the result of my inability to grasp the quantities involved rather than a true reflection of my extrapolated values, however.

My concern is more that reasoning about entities in simpler universes capable of conducting acausal trades with us will turn out to be totally intractable (as will the other proposed escape methods), but since I'm very uncertain about that I think it's definitely worth further investigation. I'm also not convinced Tegmark's MUH is true in the first place, but this post is making me want to do more reading on the arguments for and against. It looks like there was a Rationally Speaking episode about it?

Comment by elephantiskon on Shadow · 2018-03-15T02:38:20.456Z · score: 3 (1 votes) · LW · GW

I actually like the idea of building a "rationalist pantheon" to give us handy, agenty names for important but difficult concepts. This requires more clearly specifying what the concept being named is: can you clarify a bit? I love A Wizard of Earthsea, but don't get what you're pointing at here.

Comment by elephantiskon on Is skilled hunting unethical? · 2018-02-19T15:31:31.335Z · score: 3 (1 votes) · LW · GW

I think normal priors on moral beliefs come from a combination of:

  • Moral intuitions
  • Reasons for belief that upon reflection, we would accept as valid (e.g. desire for parsimony with other high-level moral intuitions, empirical discoveries like "vaccines reduce disease prevalence")
  • Reasons for belief that upon reflection, we would not accept as valid (e.g. selfish desires, societal norms that upon reflection we would consider arbitrary, shying away from the dark world)

I think the "Disney test" is useful in that it seems like it depends much more on moral intuitions than on reasons for belief. In carrying out this test, the algorithm you would follow is (i) pick a prior based on the movie heuristic, (ii) recall all consciously held reasons for belief that seem valid, (iii) update your belief in the direction of those reasons from the heuristic-derived prior. So in cases where our belief could be biased by (possibly unconscious) reasons for belief that upon reflection we would not accept as valid, where the movie heuristic isn't picking up many of these reasons, I'd expect this algorithm to be useful.

In the case of vaccinations, the algorithm makes the correct prediction: the prior-setting heuristic would give you a strong prior that vaccinations are immoral, but I think the valid reasons for belief are strong enough that the prior is easily overwhelmed.
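
To make step (iii) concrete, here's a toy version of that update in log-odds form (a sketch only; the numerical weights are invented for illustration):

```python
import math

def posterior_log_odds(prior_log_odds, evidence_log_odds):
    # (iii) Update the heuristic-derived prior with each explicit reason.
    return prior_log_odds + sum(evidence_log_odds)

def prob(log_odds):
    return 1 / (1 + math.exp(-log_odds))

# (i) The movie heuristic casts vaccination as villainous: a strong
# prior against it being moral (log-odds of "moral" are negative).
prior = -2.0

# (ii) Reflectively valid reasons for belief, e.g. "vaccines reduce
# disease prevalence", each weighted by its strength (weights invented).
evidence = [4.0]

print(f"P(moral) = {prob(posterior_log_odds(prior, evidence)):.2f}")  # ~0.88
```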

I can come up with a few cases where the heuristic points me towards other possible moral beliefs I wouldn't have otherwise considered, whose plausibility I've come to think is undervalued upon reflection. Here's a case where I think the algorithm might fail: wealth redistribution. There's a natural bias towards not wanting strong redistributive policies if you're wealthy, and an empirical case in favor of redistribution within a first-world country with some form of social safety net doesn't seem nearly as clear-cut to me as vaccines. My moral intuition is that hoarding wealth is still bad, but I think the heuristic might point the other way (it's easy to make a film about royalty with lots of servants, although there are some examples like Robin Hood in the other direction).

Also, your comments have made me think a lot more about what I was hoping to get out of the heuristic in the first place and about possible improvements; thanks for that! :-)

Comment by elephantiskon on Is skilled hunting unethical? · 2018-02-18T00:34:12.960Z · score: 2 (2 votes) · LW · GW

I don't think the vaccination example shows that the heuristic is flawed: in the case of vaccinations, we do have strong evidence that vaccinations are net-positive (since we know their impact on disease prevalence, and know how much suffering can be associated with vaccinatable diseases). So if we start with a prior that vaccinations are evil, we quickly update to the belief that vaccinations are good based on the strength of the evidence. This is why I phrased the section in terms of prior-setting instead of evidence, even though I'm a little unsure how a prior-setting heuristic would fit into a Bayesian epistemology. If there's decently strong evidence that skilled hunting is net-positive, I think that should outweigh any prior developed through the children's movie heuristic. But in the absence of such evidence, I think we should default to the naive position of it being unethical. Same with vaccines.

I'd be interested to know if you can think of a clearer counterexample though: right now, I'm basing my opinion of the heuristic on a notion that the duck test is valuable when it comes to extrapolating moral judgements from a mess of intuitions. What I have in mind as a counterexample is a behavior that upon reflection seems immoral but without compelling explicit arguments on either side, for which it is much easier to construct a compelling children's movie whose central conceit is that the behavior is correct than it is to construct a movie with the conceit that the behavior is wrong (or vice-versa).

Comment by elephantiskon on Is skilled hunting unethical? · 2018-02-17T21:47:31.707Z · score: 8 (2 votes) · LW · GW

Thanks for the feedback Raemon!

Concrete Concerns

I'd like to see ["when predators are removed from a system, a default thing that seems to happen is that death-by-predator is replaced by death-by-starvation" and "how do you do population control without hunting?"] at least touched on in wild-animal-suffering pieces

I'd like to see those talked about too! The reason I didn't is that I really don't have any insights on how to do population control without hunting, or on which specific interventions for reducing wild animal suffering are promising. I could certainly add something indicating I think those sorts of questions are important, but that I don't really have any answers beyond "create welfare biology" and "spread anti-speciesism memes so that when we have better capabilities we will actually carry out large interventions".

have a table of contents of the issues at hand

I had a bit of one in the premise ("wild animal welfare, movement-building, habit formation, moral uncertainty, how to set epistemic priors"), but it sounds like you might be looking for something different/more specific? You're not talking about a table of contents consisting of more or less the section headings right?

Aiming to Persuade vs Inform

My methodology was "outline different reasons why skilled hunting could remain an unethical action", but if the article came across as though I thought each reason was likely to be true, I did a poor job of writing! I did put probabilities on everything to calculate the 90% figure at the top, but since I don't consider myself especially well-calibrated I thought it might be better to leave them off... The only reason that I think is actually more likely to be valid than wrong is #3, but I do assign enough probability mass to the others that I think they're of some concern.

I thought the arguments in favor of skilled hunting (making hunters happy and preventing animals from experiencing lives which might involve lots of suffering) were pretty apparent and compelling, but I might be typical-minding that. I also might be missing something more subtle?

In terms of whether that methodology was front-page appropriate, I do think that if the issue I was writing about was something slightly more political this would be very bad. But as I saw it, the main content of the piece isn't the proposition that skilled hunting is unethical, it's the different issues that come up in the process of discussing it ("wild animal welfare, movement-building, habit formation, moral uncertainty, how to set epistemic priors"). My goal is not to persuade people that I'm right and you must not hunt even if you're really good at it, but to talk about interesting hammers in front of an interesting nail.

[Edit: Moved to personal blog.]

Comment by elephantiskon on Rationalist Lent · 2018-02-14T14:38:59.660Z · score: 7 (2 votes) · LW · GW

Why do you think we should be more worried about reading fiction? Associated addictiveness, time consumption, escapism?

Comment by elephantiskon on What Are Meetups Actually Trying to Accomplish? · 2018-02-09T01:10:29.794Z · score: 29 (13 votes) · LW · GW

Possible low-hanging fruit: name tags.

Comment by elephantiskon on What the Universe Wants: Anthropics from the POV of Self-Replication · 2018-01-12T20:43:43.783Z · score: 12 (3 votes) · LW · GW

What I'm taking away from this is that if (i) it is possible for child universes to be created from parent universes, and if (ii) the "fertility" of a child universe is positively correlated with that of its parent universe, then we should expect to live in a universe which will create lots of fertile child universes, whether this is accomplished through a natural process or, as you suggest, through inhabitants of the universe creating fertile child universes artificially.

I think that's a cool concept, and I wrote a quick Python script for a toy model to play around with. The consequences you draw seem kind of implausible to me, though (I might try to write more on that later).
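
A minimal sketch of the kind of toy model I have in mind (the Poisson offspring counts, the lognormal inheritance noise, and the population cap are illustrative assumptions, not a reconstruction of the original script):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(generations=8, n_roots=200, cap=100_000):
    # Each universe is represented by its fertility: the expected
    # number of child universes it spawns, per assumption (i).
    fertility = np.full(n_roots, 1.0)
    for gen in range(generations):
        # Offspring counts: Poisson with mean equal to each parent's fertility.
        kids_per_parent = rng.poisson(fertility)
        # A child inherits its parent's fertility with multiplicative noise,
        # giving the positive parent-child correlation in (ii).
        child_fertility = np.repeat(fertility, kids_per_parent)
        child_fertility = child_fertility * rng.lognormal(
            mean=0.0, sigma=0.3, size=child_fertility.size)
        if child_fertility.size == 0:
            print(f"generation {gen + 1}: all lineages died out")
            return
        if child_fertility.size > cap:
            # Subsample to keep the toy model tractable.
            child_fertility = rng.choice(child_fertility, size=cap, replace=False)
        fertility = child_fertility
        print(f"generation {gen + 1}: n={fertility.size}, "
              f"mean fertility={fertility.mean():.2f}")

simulate()
```

Mean fertility tends to climb across generations because high-fertility parents contribute disproportionately many descendants, which is the selection effect the post relies on.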

Comment by elephantiskon on Hidden Hope For Control · 2018-01-12T05:38:43.395Z · score: 7 (2 votes) · LW · GW

Essentially, I read this as an attempt at continental philosophy rather than analytic philosophy, and I don't find continental-style work very interesting or useful. I believe you that the post is meaningful and thoughtful, but the costs of time or effort to understand the meanings or thoughts you're driving at are too high for me at least. I think trying to lay things out in a more organized and explicit manner would be helpful for your readers and possibly for you in developing these thoughts.

I don't want to get too precise about answering the above unless you're still interested in me doing so and don't mind me stating things in a way that might come across as pretty rude. Also, limiting myself to one more reply here since I should really stop procrastinating work, and just in case.

Comment by elephantiskon on Hidden Hope For Control · 2018-01-11T20:03:38.393Z · score: 20 (5 votes) · LW · GW

I'm downvoting this post because I don't understand it even after your reply above, and the amount of negative karma currently on the post indicates to me that it's probably not my fault. It's possible to write a poetic and meaningful post about a topic and pleasant when someone has done so well, but I think you're better off first trying to state explicitly whatever you're trying to state to make sure the ideas are fundamentally plausible. I'm skeptical that meditations on a topic of this character are actually helpful to truth-seeking, but I might be typical-minding you.

Comment by elephantiskon on An Artificial paradise made by humans. (A bit Sci-fi idea) · 2018-01-11T03:33:38.515Z · score: 16 (4 votes) · LW · GW

I'm downvoting this because it appears to be a low-effort post which doesn't contribute or synthesize any interesting ideas. The Metamorphosis of Prime Intellect is the novel that first comes to mind as discussing some of what you're talking about, but several chapters are very disturbing, and there's probably better examples out there. If you have Netflix, San Junipero (Season 3, Episode 4 of Black Mirror) is fantastic and very relevant.

Comment by elephantiskon on The Loudest Alarm Is Probably False · 2018-01-03T00:30:23.591Z · score: 7 (3 votes) · LW · GW

I like this post's brevity, its usefulness, and the nice call-to-action at the end.

Comment by elephantiskon on Phoenix Song · 2018-01-03T00:24:45.679Z · score: 11 (3 votes) · LW · GW

I found the last six paragraphs of this piece extremely inspiring, to the extent that I think it non-negligibly raised the likelihood that I'll be taking "exceptional action" myself. I didn't personally connect much with the first part, though it was interesting. Did you use to want to want your reaction to idiocy to be "how can I help", even when it wasn't?

Comment by elephantiskon on The essay "Interstellar Communication Using Microbes: Implications for SETI" has implications for The Great Filter. · 2017-12-22T08:02:44.412Z · score: 11 (4 votes) · LW · GW

The case against "geospermia" here is vastly overstated: there's been a lot of research over the past decade or two establishing very plausible pathways for terrestrial abiogenesis. If you're interested, read through some work coming out of Jack Szostak's lab (there's a recent review article here). I'm not as familiar with the literature on prebiotic chemistry as I am with the literature on protocell formation, but I know we've found amino acids on meteorites, and it wouldn't be surprising if they and perhaps some other molecules important to life were introduced to Earth through meteorites rather than through terrestrial syntheses.

But in terms of cell formation, the null hypothesis should probably be that it occurred on Earth. Panspermia isn't ridiculous per se, but conditions on Earth appear to have been much more suitable for cell formation than those of the surrounding neighborhood, and sufficiently suitable that terrestrial abiogenesis isn't implausible in the least. When it comes to ways in which there could be wild-animal suffering on a galactic scale, I think the possibility of humans spreading life through space colonization is far more concerning.

Also, Zubrin writes:

Furthermore, it needs to be understood that the conceit that life originated on Earth is quite extraordinary. There are over 400 billion stars in our galaxy, with multiple planets orbiting many of them. There are 51 billion hectares on Earth. The probability that life first originated on Earth, rather than another world, is thus comparable to the probability that the first human on our planet was born on any particular 0.1 hectare lot chosen at random, for example my backyard. It really requires evidence, not merely an excuse for lack of evidence, to be supported.

This is poor reasoning. A better metaphor would be that we're looking at a universe with no water except for a small pond somewhere, and wondering where the fish that currently live in that pond evolved. If water is so rare, why shouldn't we be confused that the pond exists in the first place? Anthropic principle (but be careful with this). Disclaimer: Picking this out because I thought it was the most interesting part in the piece, not because I went looking for bad metaphors.

As a meta-note, I was a little suspicious of this piece based on some bad signaling (the bio indicates potential bias, tables are made through screenshots, the article looks like it wants to be in a journal but is hosted on a private blog). I don't like judging things based on potentially spurious signals, but this might have nevertheless biased me a bit and I'm updating slightly in the direction of those signals being valuable.

Comment by elephantiskon on Rationalist Politicians · 2017-12-22T01:39:39.808Z · score: 9 (3 votes) · LW · GW

Have a look at 80K's (very brief) career profile for party politics. My rough sense is that effective altruists generally agree that pursuing elected office can be a very high-impact career path for individuals particularly well-suited to it, but think that even with an exceptional candidate succeeding is very difficult.

Comment by elephantiskon on Improvement Without Superstition · 2017-12-16T22:14:47.553Z · score: 4 (3 votes) · LW · GW

Upvoted mostly for the surprising examples about obstetrics and CF treatment and for a cool choice of topic. I think your question, "when is one like the doctors saving CF patients and when is one like the doctors doing super-radical mastectomies?" is an important one to ask, and distinct from questions about modest epistemology.

Say there is a set A of available actions, of which a subset K have been studied intensively enough that their utility is known with a high degree of certainty, but the utility of the other available actions in A \ K is uncertain. Then your ability to surpass the performance of an agent who chooses actions only from K essentially comes down to a combination of whether choosing uncertain-utility actions from A \ K precludes also picking high-utility actions from K, and what the expected payoff is from choosing uncertain-utility actions in A \ K according to your best information.

I think you could theoretically model many domains like this, and work things out just by maximizing your expected utility. But it would be nice to have some better heuristics to use in daily life. I think the most important questions to ask yourself are really (i) how likely are you to horribly screw things up by picking an uncertain-utility action, and (ii) do you care enough about the problem you're looking at to take lots of actions that have a low chance of being harmful, but a small chance of being positive.
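
As a toy numerical version of that comparison (the action names, payoffs, and distributions are invented; a real analysis would plug in domain-specific estimates):

```python
import numpy as np

rng = np.random.default_rng(0)

# Best action in K: utility known with high certainty.
best_known_utility = 0.70

# Actions in A \ K: utility uncertain, represented by subjective
# outcome distributions (Monte Carlo samples).
uncertain_actions = {
    "radical variant": rng.normal(0.60, 0.30, size=100_000),
    "minor tweak": rng.normal(0.72, 0.05, size=100_000),
}

for name, outcomes in uncertain_actions.items():
    # (i) How likely is this action to go badly wrong relative to
    # the well-studied option?
    p_much_worse = (outcomes < best_known_utility - 0.20).mean()
    # (ii) What's the expected payoff under our best information?
    print(f"{name}: EU={outcomes.mean():.2f}, "
          f"P(much worse than known)={p_much_worse:.2f}")
```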

Comment by elephantiskon on Strategic High Skill Immigration · 2017-12-06T06:23:55.044Z · score: 2 (2 votes) · LW · GW

I don't have much of a thoughtful opinion on the question at hand yet (though I have some questions below), but I wanted to express a deep appreciation for your use of detail elements: it really helps readability!

One concern I would want to see addressed is an estimation of the negative effects of a "brain drain" on regional economies: if a focused high-skilled immigration policy has the potential to exacerbate global poverty, the argument that it has a positive impact on the far future needs to be very compelling. So would these economic costs be significant, or negligible? And would a more broadly permissive immigration policy have similar advantages? Also, given the scope of the issues at hand I would be very surprised if the advantages you ascribe to high-skilled immigration are all of roughly equal expected value: is there one which you think dominates the others? (Like reduced x-risk from AI?)

Comment by elephantiskon on Motivating a Semantics of Logical Counterfactuals · 2017-09-23T18:09:01.577Z · score: 1 (1 votes) · LW · GW

(Disclaimer: There's a good chance you've already thought about this.)

In general, if you want to understand a system (construal of meaning) forming a model of the output of that system (truth-conditions and felicity judgements) is very helpful. So if you're interested in understanding how counterfactual statements are interpreted, I think the formal semantics literature is the right place to start (try digging through the references here, for example).

Comment by elephantiskon on Fish oil and the self-critical brain loop · 2017-09-15T13:47:28.891Z · score: 1 (1 votes) · LW · GW

Muting the self-critical brain loop (and thanks for that terminology!) is something I'm very interested in. Have you investigated vegan alternatives to fish oil at all?

Comment by elephantiskon on Open thread, Jan. 16 - Jan. 22, 2016 · 2017-01-16T21:17:17.123Z · score: 2 (2 votes) · LW · GW

At what age do you all think people have the greatest moral status? I'm tempted to say that young children (maybe aged 2-10 or so) are more important than adolescents, adults, or infants, but don't have any particularly strong arguments for why that might be the case.