Should a rationalist be concerned about habitat loss/biodiversity loss?

post by InquilineKea · 2011-06-03T12:25:27.635Z · LW · GW · Legacy · 41 comments

It's an interesting question that I'm pondering.

Now, while I do question the intellectual honesty of this blog, I'll link to it anyway, since the evidence it presents does seem interesting, at the very least: http://wattsupwiththat.com/2010/01/04/where-are-the-corpses/

http://wattsupwiththat.com/2011/05/19/species-extinction-hype-fundamentally-flawed/

It does seem that environmentalism can mimic some qualities of religion (I know, since I used to be an environmentalist myself). As such, it can cause many extremely intelligent people to reject evidence that goes against their worldview. 

Furthermore, it's also possible that computational chemistry may soon become our primary means of drug discovery, rather than the search for new biological compounds in particular ecosystems. (That said, drug discovery is entirely different from drug synthesis, and discovering a gene that codes for a particular protein and splicing it into an E. coli bacterium will be far easier than anything computational chemistry can do in the near future.)

With all that said, what now? I do believe that something of value is lost as habitat gets destroyed, but it's hard to quantify value in these cases. Certain animals, like crows, chimpanzees, orcas, and elephants, are cognitively advanced enough to have their own cultures. If one of their subcultures gets destroyed (which can happen without a full-scale extinction), is anything of value lost, besides value for scientific research that might be applicable elsewhere? And is it more important to worry about these separate cultures than about different subspecies of the same animal? Certainly, we're now beginning to discover novel social networks in dolphins and crows. But most of these animals are not at risk of extinction, and even the chimpanzees and bonobos will, at the very worst, only go extinct in the wild. There are other, less advanced animals that face a higher risk of permanent extinction.

What we're prone to systematically underestimating, of course, is the possible permanent loss of micro-organisms, and of the novel biological structures (and networks) they may contain.

41 comments

Comments sorted by top scores.

comment by JoshuaZ · 2011-06-03T21:26:18.508Z · LW(p) · GW(p)

There are extreme problems with the modern environmental movement, and some aspects of it certainly do come very close to religion in the level of irrationality involved (environmentalists objecting to fusion power because it is nuclear would be one good example). However, in a similar vein, I'd strongly advise against relying on Watts Up With That, which as a whole is extremely ideologically motivated. The website is at some level a running example of motivated cognition in action, and has been extensively criticized by scientists for being generally inaccurate.

As to the specific question about drug modeling and the like: such computer modeling is in its infancy, and it isn't clear when, if ever, it will completely obsolete the search for natural compounds. Without much better data on that, preserving biodiversity seems like a strategy that makes sense even for someone who is only mildly risk-averse.

comment by Paul Crowley (ciphergoth) · 2011-06-03T15:01:32.428Z · LW(p) · GW(p)

Biodiversity is interesting, and interestingness is something like a terminal value to me.

Replies from: Nic_Smith
comment by Nic_Smith · 2011-06-03T17:51:13.052Z · LW(p) · GW(p)

Just because it seems like an obvious question -- Why that particular type of "interesting"?

Replies from: InquilineKea
comment by InquilineKea · 2011-06-03T17:56:23.011Z · LW(p) · GW(p)

Yeah, I actually agree with that. I can spend a lifetime studying interesting things, and there will always be interesting things in the artificial world that I can study. Yet (this is just for me), there are few other things that can amaze me as much as parts of the biological world do.

The other thing I think about is this: how stable are my terminal values, exactly? I've changed many of my terminal values over time, when I realized that I'd be happier without certain terminal values (e.g., deep ecology used to be one of my terminal values, but then I realized it was philosophically flawed).

If biodiversity got destroyed, I'd imagine that a number of people with biodiversity as a terminal value would also force themselves to adapt.

Replies from: Bongo
comment by Bongo · 2011-06-03T18:07:56.463Z · LW(p) · GW(p)

What you thought your terminal values were changed. Your terminal values didn't necessarily change.

Replies from: InquilineKea
comment by InquilineKea · 2011-06-03T18:42:19.282Z · LW(p) · GW(p)

Deep ecology, in itself, entails that you value (some metric of biology) as a terminal value. Since I no longer believe in it, my terminal value for it did change.

It's sort of like this: if a religious person had his religion (God) as a terminal value, but his God was then definitively proved not to exist, then he would have to change his terminal values too.

Replies from: Raemon
comment by Raemon · 2011-06-05T03:29:26.917Z · LW(p) · GW(p)

Is there anything about terminal values that means they are immutable? What's wrong with valuing something for its own sake, and then later changing your mind?

Replies from: InquilineKea
comment by InquilineKea · 2011-06-06T01:37:50.342Z · LW(p) · GW(p)

Well, in the long run we're talking about maximizing our utility, which means taking the time-integrated utility function.

So yes, it's true: valuing something for its own sake can still count even if the valuing isn't permanent.
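
(A minimal sketch of what I mean, with u(t) as a purely illustrative stand-in for my instantaneous utility and [t_0, T] as the whole span I care about:

U_total = \int_{t_0}^{T} u(t) \, dt

A value I hold only over some sub-interval [t_0, t_1] still contributes to the integral, even if I drop it afterward.)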

comment by jsalvatier · 2011-06-05T20:34:48.955Z · LW(p) · GW(p)

I'll be nitpicky for a moment: you probably don't want to ask 'what should a rationalist care about?'; rationality is a set of tools, not values. Also, keep your identity small.

Replies from: steven0461
comment by steven0461 · 2011-06-05T21:10:56.559Z · LW(p) · GW(p)

Most of us, if we found that some of our values rested on confusions, would say those values had never been our true values. This is true not just of confusions about means-ends relationships but also of other confusions. So it's unsafe to assume, the way people here often do, that if a disagreement seems on the surface to be about terminal values, it's not up for rational debate.

Also, what one cares about is a different question from what one is concerned about; to me, "concern" about some value implies a claim that the degree of caring, combined with the practical details of the situation, makes it worth sometimes choosing the value over values that one would otherwise have pursued.

Replies from: jsalvatier
comment by jsalvatier · 2011-06-05T22:23:13.730Z · LW(p) · GW(p)

I agree and did not mean to suggest otherwise. I meant to suggest that if you remove the 'rationalist' from the question, it stays the same. If rationalists 'should' be concerned, so 'should' non-rationalists.

Replies from: steven0461
comment by steven0461 · 2011-06-06T00:33:28.778Z · LW(p) · GW(p)

I agree with that point.

comment by steven0461 · 2011-06-03T17:55:36.994Z · LW(p) · GW(p)

One thing that's missing from this discussion is that we'll probably be able to use future technologies to regenerate as much biodiversity as we want.

Replies from: None, Mercy
comment by [deleted] · 2011-06-03T21:44:09.625Z · LW(p) · GW(p)

Stipulating that future technologies will be limitlessly powerful, a more accurate way of putting this is "future people will probably be able to use future technologies to regenerate as much biodiversity as current people want."

Should a present-day environmentalist, rational as you please, make decisions as though future people were likely to do so?

Replies from: steven0461
comment by steven0461 · 2011-06-03T22:21:17.259Z · LW(p) · GW(p)

Future people may want different things than we do, but regardless of what they want, they'll probably get their way once they're there. Arguments for caring about biodiversity anyway could be 1) it'll be valuable in the meantime, or 2) we'll want to have the same collection of species for old times' sake, and future people won't have enough data to reconstruct what that collection was unless we preserve it.

Replies from: Nornagest
comment by Nornagest · 2011-06-07T23:25:48.551Z · LW(p) · GW(p)

Ecosystem engineering's also likely to be extremely difficult and expensive, even if we've got all the raw data necessary to implement it. One way or another this probably won't end up being much of a thing to worry about in a post-Singularity future, or even one altered by sub-Singularity transformative technologies like advanced nanotech, but in the meantime, or given conservative assumptions about technological progress, it's still a cost-benefit analysis worth making.

comment by Mercy · 2011-06-07T23:10:09.630Z · LW(p) · GW(p)

Do you mean this thread? It's hardly missing from the broader discussion: much of the funding goes towards seed banks and so on, and on a broader scale a lot of conservation is effectively ecological cryonics; zoos and the like just keep endangered species in a holding pattern. Criticism of these is, as far as I can see, mostly ecological, e.g. reintroduced tree species will fail to thrive if planted in soils where their traditional symbiotic fungi have gone extinct in their absence.

comment by KPier · 2011-06-04T02:59:25.219Z · LW(p) · GW(p)

Yes, many people value biodiversity (and it's perfectly rational to do this). But I think the problem here is the same as the problem with worrying about global warming - yes, it's likely a problem, but there are a lot of environmentalists, so the marginal utility of additional worrying is probably pretty much zero, unless you think there is something you can uniquely bring to the movement. There are a lot of things we could worry about, and only so much energy we can exert to change them...

comment by Normal_Anomaly · 2011-06-03T16:30:57.570Z · LW(p) · GW(p)

If you have biodiversity as a terminal value, you should try to preserve it along with all your other values. Instrumentally, "biodiversity" is such a large concept that I can't really say.

comment by Subsumed · 2011-06-03T13:36:51.633Z · LW(p) · GW(p)

A brain, rational or not, can produce the "terminal value" state (or output, or qualia?) when presented with the habitat or biodiversity concepts. This can be independent of their instrumental value, which, on average, probably diminishes with technological progress. But it's also easy to imagine cases where the instrumental value of nature increases as our ability to understand and manipulate it grows.

comment by Axel · 2011-06-04T00:08:34.382Z · LW(p) · GW(p)

There was a comment by KrisC that lists various useful aspects of biodiversity: http://lesswrong.com/lw/2l6/taking_ideas_seriously/2fna

comment by UnclGhost · 2011-06-05T07:05:01.369Z · LW(p) · GW(p)

Related question: Independent of any ecological or economic concerns, should a rationalist be a vegetarian?

Replies from: mutterc
comment by mutterc · 2011-06-05T21:28:48.902Z · LW(p) · GW(p)

What upsides are left to being vegetarian once you leave out economics and ecology? I have trouble thinking of any. It's a bit easier for a vegetarian to eat "light", but one has to keep an eye on one's protein intake. As far as I can tell, the moral dimension (is it "wrong" to eat animals?) reduces to personal preference.

Is there an upside I'm missing?

Replies from: NancyLebovitz, UnclGhost
comment by NancyLebovitz · 2011-06-07T07:17:54.157Z · LW(p) · GW(p)

Enough people report being healthier when vegetarian that it might be a worthwhile thing to experiment with.

comment by UnclGhost · 2011-06-08T04:55:12.338Z · LW(p) · GW(p)

I was thinking more in terms of moral concerns, so I should have specified to ignore health as well.

I think asking whether or not to value biodiversity is the same sort of question--it reduces to personal preference.

comment by Clippy · 2011-06-03T15:49:12.474Z · LW(p) · GW(p)

No.

comment by XiXiDu · 2011-06-03T13:59:07.004Z · LW(p) · GW(p)

The question is not whether you should care, but whether you do care. Rationality does not tell you what to value, or how much utility you are supposed to assign to which goals. Rationality just helps you to recognize the facts and achieve your goals.

Replies from: Will_Newsome, wedrifid
comment by Will_Newsome · 2011-06-03T22:47:16.260Z · LW(p) · GW(p)

Your values are in the territory. You can't just make shit up based on naive introspection, in any domain. Simple things like verbal overshadowing are counter-intuitive. Treating your map of your values as your actual values leads to extreme overconfidence, often supported by self-righteousness. It is a very common failure mode everywhere.

Replies from: XiXiDu, Will_Newsome
comment by XiXiDu · 2011-06-05T15:58:04.942Z · LW(p) · GW(p)

Treating your map of your values as your actual values...

What is the alternative?

Your values are in the territory.

I don't understand.

comment by Will_Newsome · 2011-06-03T23:04:23.587Z · LW(p) · GW(p)

One way to get around this is to claim that shouldness flows from God (around here the equivalent is CEV) and then argue about the nature of God's will. I think this is a step up even if often the result of confusions of various kinds. At least people aren't as likely to treat their naive introspection or simplistic empiricism as gospel. Thinking of justification as flowing from timeless attractors rather than causal coincidences is a similar but less popular perspective. Obviously 'right view' would see the causal/teleological equivalence but conceptual flavors matter for correct connotations.

Replies from: XiXiDu, Nick_Tarleton
comment by XiXiDu · 2011-06-05T16:06:47.360Z · LW(p) · GW(p)

I don't understand half of what you are saying, I only know that at the end of the day I will do what I want, whatever that might be. Either I want what God or CEV wants, or I don't.

comment by Nick_Tarleton · 2011-06-06T23:13:49.736Z · LW(p) · GW(p)

causal/teleological equivalence

Could you elaborate on this?

Replies from: Will_Newsome
comment by Will_Newsome · 2011-06-07T06:57:53.752Z · LW(p) · GW(p)

"Timeful/timeless" was what I meant, I confused my terminology. I'm confused because it seems to me that there are an infinite number of logical nodes you could put in your causal graphs that constrain the structure of your causal graphs (in a way that is of course causally explicable, but whose simplest explanation sans logical nodes may have suspiciously high K complexity in some cases), and that 'cause' ferns to grow fractally, say, and certainly 'cause' more interesting events. Arising or passing away events particularly 'caused' by these timeless properties of the universe aren't exactly teleological in that they're not necessarily determined by the future (or the past); but because those timeless properties create patterns that constrain possible futures it's like the structures in the imaginable futures that are timelessly privileged as possible/probable are themselves causing their fruition by their nature. So in my fuzzy thinking there's this conceptual soup of attractors, causality, teleology, timeless properties, and the like, and mish-mashing them is all too easy since they're different perspectives that highlight different aspects rather than compartmentalized belief structures. If I just stick to switching between timeful and timeless perspectives (without confusing myself with attractors) then I have my feet firmly on the ground.

Anyway, to (poorly) elaborate on what I was originally talking about: if might makes right (or to put it approximately, morality flows forward from the past, like naive causal validity semantics in all their arbitrariness), but right enables might (very roughly, morality flows backward from the future, logical truths constrain CDT-like optimizers and at some level of organization the TDT-like optimizers win out because cooperation just wins; an "I" that is big like a human has fewer competitors and greater rewards than an "I" that is small like a paramecium; (insert something about ant colonies?); (insert something about thinking of morality as a Pareto frontier itself moving over time (moving along a currently-hidden dimension in a way that can't be induced from seeing trends in past movements along then-hidden dimensions, say, though that's probably a terrible abstraction), and discount rates of cooperative versus non-cooperative deal-making agents up through the levels of organization and over time, with hyperbolic discounters willingly yielding to exponential discounters and so on over time)), then seeing only one and not the other is a kind of blindness.

Emphasizing "might makes right" may cause one to see the universe as full of (possibly accidental) adversaries, where diverging preferences burn up most of the cosmic commons and the winners (who one won't identify with and whose preferences one won't transitively value) take whatever computronium is left. This sort of thinking is associated with reductionism, utilitarianism/economics, and disinterest (either passive or active) in morality qua morality or shouldness qua shouldness. I won't hastily list any counterarguments here for fear of giving them a bad name but will instead opine that the counterarguments seem to me vastly underappreciated (when noticed) by what I perceive to be the prototypical median mind of Less Wrong.

Emphasizing "right enables might" may cause one to see the future as timelessly determined to be perfect, full of infinitely intricate patterns of interaction and with every sacrifice seen in hindsight as a discordant note necessary for the enabling of greatest beauty, the unseen object of sehnsucht revealed as the timeless ensemble itself. This sort of thinking is associated with "objective morality" in all its vagueness, "God" in all its/His vagueness, and in a milder form a certain conception of the acausal economy in all its vagueness. Typical objections include: What if this timeless perfection is determined to be the result of your timeful striving? Moral progress isn't inevitable. How does knowing that it will work out help you help it work out? What if, as is likely, this perfection is something that may be approached but never reached? Won't there always be an edge, a Pareto frontier, a front behind which remaining resources should be placed, a causal bottleneck, somewhere in time? Then why not renormalize, and see that war, that battle, that moment where things are still up in the air, as the only world? Are you yourself not almost entirely a conglomeration of imperfections that will get burned away in Pentecostal fire, as will most everything you now love? Et cetera. There are many similar and undoubtedly stronger arguments to gnaw at the various parts of an optimist's mind; the arguments I gave are interior to those the Less Wrong community has cached.

Right view cuts through the drama of either tinted lens to find Tao if doing so is De. It's a hypothesis.

comment by wedrifid · 2011-06-03T15:26:07.826Z · LW(p) · GW(p)

As such, it can cause many extremely intelligent people to reject evidence that goes against their worldview.

You're Entitled to Arguments, But Not (That Particular) Proof.

That post does not mean what you think it means.

Replies from: XiXiDu, XiXiDu
comment by XiXiDu · 2011-06-03T16:33:34.909Z · LW(p) · GW(p)

That post does not mean what you think it means.

Yeah, it does. I especially thought about you when I linked to it.

comment by XiXiDu · 2011-06-03T16:47:49.680Z · LW(p) · GW(p)

That post does not mean what you think it means.

What do you think that I think it means? That post is one of many that are implicitly used to reject any criticism of AI going FOOM. I especially thought about you and your awkward responses about predictions and falsification. In and of itself I agree with the post, but used selectively it is a soldier against unwanted criticism. Reading the last few posts of the sequences rerun made me even more confident that the whole purpose of this project is to brainwash people into buying the AI FOOM idea.

Replies from: MatthewBaker
comment by MatthewBaker · 2011-06-03T22:32:44.330Z · LW(p) · GW(p)

I doubt that's the main point of the project; I hope I would know, as I have lurked here in great detail since it was first envisioned. That being said, I agree that wedrifid's answer is surprisingly terse.

Replies from: XiXiDu, Perplexed
comment by XiXiDu · 2011-06-05T17:15:51.385Z · LW(p) · GW(p)

I doubt that's the main point of the project...

Well, Eliezer Yudkowsky is making a living telling people that they ought to donate money to his "charity".

Almost a year ago I posted my first submission here. I have been aware of OB/LW for longer than that but didn't care until I read about the Roko incident. That made me really curious.

I am too tired to go into any detail right now, but what I have learnt since then didn't make me particularly confident of the epistemic standards of LW, despite the solemn assertions of its members.

The short version: the assurance that you are an aspiring rationalist might mislead some people into assigning some credence to your extraordinary claims, but it won't make them less wrong.

There are many reasons why I am skeptical. As I said in another comment above, reading some of the posts linked to by the sequences rerun made Eliezer Yudkowsky seem much less trustworthy in my opinion. Before, I thought that some of his statements, e.g. "If you don't sign up your kids for cryonics then you are a lousy parent.", were negligible lapses of sanity, but it now appears to me that such judgmental statements are the rule.

Replies from: MatthewBaker
comment by MatthewBaker · 2011-06-06T06:02:56.092Z · LW(p) · GW(p)

I think I understand your point of view and I agree with your sentiments, but do you honestly believe that Eliezer does this all for the money? I think that he likes being able to spend all his time working on this, and the Singularity Institute definitely treats him well, but the majority of people on Less Wrong, including him, really do want to save the world from what I've seen. As for his statement about cryonics: if he's passive about it, I don't think many of the lurkers would consider signing up. Cryonics seems like a long shot to me, but I think it's reasonable to assume that he writes so emotionally about it because he honestly just wants more people to be vitrified in case we do manage to create an FAI. I would love to hear more about your reasons for skepticism because I share many of the same concerns, but so far I've heard little to counter the prevailing wisdom on LW/OB.

comment by Perplexed · 2011-06-05T15:22:42.955Z · LW(p) · GW(p)

i agree that wedrifid's answer is surprisingly terse

Not surprisingly, to those who have experience with wedrifid. Merely annoyingly. Though in this case he is making an allusion to a well known trope. Google on the phrase "does not mean what you think it means". If XiXiDu has referenced that "You're Entitled ..." posting before, for roughly the same debunking purpose, then wedrifid's terse putdown strikes me as rather clever.

Replies from: XiXiDu