Open thread, November 2011

post by Oscar_Cunningham · 2011-11-02T18:19:16.423Z · LW · GW · Legacy · 212 comments

Discuss things here if they don't deserve a post in Main or Discussion.

If a topic is worthy and receives much discussion, make a new thread for it.

212 comments

Comments sorted by top scores.

comment by D_Malik · 2011-11-03T06:19:05.671Z · LW(p) · GW(p)

I'm thinking maybe we should try to pool all LW's practical advice somewhere. Perhaps a new topic in Discussion, where you post a top-level comment like "Will n-backing make me significantly smarter?", and people can reply with 50% confidence intervals. Then we combine all the opinions to get the LW hivemind's opinions on various topics. Thoughts?

PS. Sorry for taking up the 'Recent Comments' sidebar; I don't have internet on my own computer, so I have to type my comments up elsewhere and post them all at once.
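To make the aggregation step concrete, here is a minimal sketch in Python, assuming each reply has already been parsed into a numeric 50% interval. The pooling rule (medians of the endpoints) is just one simple choice, not something proposed in the comment, and the replies shown are made up.

```python
# Toy aggregation of 50% confidence intervals for a question like
# "How many IQ points will a month of n-backing add?"
# Each tuple is one commenter's (low, high) 50% interval; the pooling
# rule (median of endpoints and midpoints) is an illustrative choice only.
from statistics import median

replies = [(0, 3), (1, 5), (-1, 2), (0, 4)]  # hypothetical replies

lows, highs = zip(*replies)
pooled = (median(lows), median(highs))
central = median((lo + hi) / 2 for lo, hi in replies)

print(f"Pooled 50% interval: {pooled}, central estimate: {central}")
```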

Replies from: gwern, Dorikka
comment by gwern · 2011-11-21T16:42:42.381Z · LW(p) · GW(p)

Why not just add those into the survey?

comment by Dorikka · 2011-11-07T02:15:52.085Z · LW(p) · GW(p)

Good idea -- go for it! :D

comment by JoshuaZ · 2011-11-02T18:29:44.774Z · LW(p) · GW(p)

In one of the subthreads concerned with existential risk and the Great Filter, I proposed that one possible filtration issue is that intelligent species that evolved comparatively early in their planets' lifetimes, or on planets that formed soon after their heavy elements were produced, would have a lot more fissionable material (especially uranium-235), and that this might make it much easier for them to wipe themselves out with nuclear wars. So we may have escaped the Great Filter in part by evolving late. Thinking about this more, I'm uncertain how important this sort of filtration is. I'm curious whether a) people think this could be a substantial filter and b) anyone is aware of discussion of this filter in the literature.

Replies from: Jack
comment by Jack · 2011-11-02T19:06:02.028Z · LW(p) · GW(p)

If we had had more fissionable material over the last 100 years, how would that have made nuclear war more likely?

Replies from: JoshuaZ
comment by JoshuaZ · 2011-11-02T19:21:40.688Z · LW(p) · GW(p)

If life had evolved, say, 2 billion years earlier, then there would be about 6 times as much U-235 on the planet, and most uranium ores would be around 3% U-235 rather than 0.7% U-235. This means that making nuclear weapons would be easier, since obtaining enough uranium would be a lot easier and the amount of enrichment needed would go down as well. For similar reasons it would also be easier to make plutonium in large quantities. The fact that one would still need some amount of enrichment means that this would still be technically difficult, just easier. However, fusion bombs are much more effective for civilizations destroying themselves, and even with cheap fissiles, fusion bombs are still comparatively tough.
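A rough back-of-the-envelope check of those figures (not part of the original comment), using the published half-lives of U-235 (about 0.70 Gyr) and U-238 (about 4.47 Gyr) and today's ~0.72% U-235 fraction, lands in the same ballpark: roughly a factor of seven more U-235 and an ore assay near 4%.

```python
# Rough check of the "about 6x more U-235, ~3% ore" figures from
# standard half-lives; numbers are approximate by design.
t = 2.0                      # how many Gyr earlier life evolves
h235, h238 = 0.704, 4.468    # half-lives in Gyr
f235_today = 0.0072          # U-235 fraction of natural uranium today

factor_235 = 2 ** (t / h235)   # how much more U-235 existed back then
factor_238 = 2 ** (t / h238)   # U-238 decays too, but far more slowly

frac_then = (f235_today * factor_235 /
             (f235_today * factor_235 + (1 - f235_today) * factor_238))
print(f"U-235 was ~{factor_235:.1f}x more abundant; "
      f"natural uranium was ~{frac_then:.1%} U-235")
```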

There's another reason that this filter may not be that big a filtration event: having more U-235 around means that one can more easily construct nuclear reactors. Fermi's original pile used non-enriched uranium, so one can have a (not very efficient) uranium reactor without much work, and modern reactors can use non-enriched uranium (although that requires careful design). But in such a setting, somewhat enriched uranium (by our standards) would be much more common: functional, useful reactors can be made with as little as 2% U-235, and in this setting most uranium would be closer to 3% U-235. Making nuclear reactors much easier means one has a much easier source of energy (in fact, on Earth there's at least one documented case of such a reactor occurring naturally, about 1.7 billion years ago). Similar remarks apply to nuclear rockets, which are one of the few plausible ways one could reasonably go about colonizing other planets.

So the two concerns are: a) how much more likely would it be for a civilization to actually wipe itself out in this sort of situation and b) how much is this balanced out by the presence of a cheap energy source and an easier way to leave the planet and go around one's star system with high delta-V?

Replies from: jhuffman, gwern, khafra
comment by jhuffman · 2011-11-03T20:42:25.297Z · LW(p) · GW(p)

Perhaps it makes it a little more likely for a civilization to end itself, but it doesn't seem to have the potential to be a great filter. It doesn't seem that likely that even a large-scale war with fusion weapons would extinguish a species; and as you point out, there is still quite a barrier to the development of fusion weapons even with more abundant U-235. So far in our history the proliferation of nuclear weapons seems to have discouraged wars of large scope between great powers; in fact, no two great powers have fought each other since Japan's surrender. Granted, this is a pretty small sample of time, but a race without the ability to rationally choose peace probably has little chance regardless of U-235 levels. So if there is a great filter here, with species extinguishing themselves in war, more U-235 makes it only a little bit greater.

comment by gwern · 2011-11-21T20:56:03.591Z · LW(p) · GW(p)

What, exactly, would the increased uranium level do?

  • It doesn't seem to me that it would speed up the development of an atomic bomb much, because you have to have the idea in the first place; and in our timeline, the atomic bomb followed the idea very quickly (what was it, 6 years?). The lower concentration no doubt slowed things by a few months or perhaps less than 5 years, but the histories I read didn't point to enrichment as a bottleneck so much as conceptual issues (how much do you need? how do the explosive lenses work? etc.)

    Nor do I see how it might speed up the general development of physics and the study of radioactivity; if Marie Curie was willing to go through tons of pitchblende to get a minute amount of radium, then uranium clearly was nowhere on her radar. Going from 0.7% to 3% won't suddenly make a Curie study uranium ore instead.

    The one path I can see would be discovering a natural uranium reactor, but how big a window is there during which scientists could discover a reactor and speed up the development of nuclear physics? I mean, if a scientist in the 1700s had discovered a uranium reactor, would he have been able to do anything about it? Or would it just remain a curiosity, something like the Greeks and magnets?

  • Nuclear proliferation is not constrained by the ability to refine ore, but more by politics; South Africa and South Korea and Libya and Iraq didn't abandon their nukes or programs because it was costing them 6x as much to refine uranium.
  • Nukes wouldn't become much more effective; nukes are so colossally expensive that their yields are set according to function and accuracy of targeting. (The poorer your targeting, like Russia, the bigger your yields will be to compensate.)
Replies from: JoshuaZ
comment by JoshuaZ · 2011-11-22T21:16:35.609Z · LW(p) · GW(p)

Well, one issue is that it becomes easier for countries to actually get nukes once the whole technology is known. One needs to start with less uranium and needs to refine it less.

Regarding the Curies, while that is true, it might be that people would have noticed radioactivity earlier. And more U-235 around means more radium around also. But I agree that this probably wouldn't have a substantial impact on when things would be discovered. Given how long a gap there was between that initial discovery and the idea of an atomic bomb, even if it did speed things up it is unlikely to have impacted the development of nuclear weapons that much.

Your points about proliferation and effectiveness both seem strong. Overall, this conversation moves my view in the other direction. That is, not only does this seem not to be a strong filtration candidate; the increased ease of energy access, if anything, pushes things the other way. This suggests that, as far as the presence of U-235 is concerned, civilizations that arise on comparatively young planets should face less, not more, filtration. That is worrisome.

Replies from: gwern
comment by gwern · 2011-11-22T22:07:29.938Z · LW(p) · GW(p)

Well, one issue is that it becomes easier for countries to actually get nukes once the whole technology is known. One needs to start with less uranium and needs to refine it less.

Yes, but how much does this help? There are multiple methods available of varying sophistication/engineering complexity (thermal easy, laser hard); a factor of 6 surely helps, but any of the methods works if you're just willing to run the ore or gas through enough times.
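To put a rough number on "how much does this help", here is a simple feed mass-balance sketch (not from the original exchange). The 90% product and 0.3% tails assays are assumed illustrative values, and the calculation says nothing about the number of passes or the separative work required; it only quantifies how much less ore you would need to start with.

```python
# Mass balance for uranium enrichment: kilograms of natural-uranium
# feed needed per kilogram of enriched product, at two feed assays.
# The 90% product and 0.3% tails assays are assumed illustrative values.
def feed_per_kg_product(x_feed, x_product=0.90, x_tails=0.003):
    """kg of feed per kg of product, from the standard mass balance."""
    return (x_product - x_tails) / (x_feed - x_tails)

for x_feed in (0.0072, 0.037):   # today's natural uranium vs. a 2-Gyr-earlier ore
    print(f"feed assay {x_feed:.2%}: "
          f"{feed_per_kg_product(x_feed):.0f} kg feed per kg product")
```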

Replies from: JoshuaZ
comment by JoshuaZ · 2011-11-23T00:29:23.016Z · LW(p) · GW(p)

That's a good point. So the only advantage comes from not needing as much uranium ore to start with, and since uranium ore is already easy to get, that's not a major issue.

comment by khafra · 2011-11-04T13:09:01.267Z · LW(p) · GW(p)

I think it fails as a filter because even a huge nuclear war wouldn't wipe out, e.g., cockroaches. Assuming "intelligent life evolves from multicellular life" is IID, with an early appearance it could happen a few times before the planet gets as old as ours. To wit: the only reason to think the dinosaurs' extinction event wasn't nuclear war is a lack of fossilized technological artifacts; and it doesn't seem to have filtered us yet.

Replies from: wedrifid, JoshuaZ
comment by wedrifid · 2011-11-05T07:43:59.041Z · LW(p) · GW(p)

The only reason to think the dinosaurs' extinction event wasn't nuclear war is a lack of fossilized technological artifacts

The only reason? The lack of creatures with appendages suitable for tool wielding or the evident brain capacity for the task doesn't come into it just a tiny bit?

Replies from: FAWS, None, Vladimir_Nesov
comment by FAWS · 2011-11-06T14:13:26.786Z · LW(p) · GW(p)

The lack of creatures with appendages suitable for tool wielding

Do we know that? Iguanodons, for example, have hands that don't look all that terribly far off from hands suitable for tool use; some related species that we haven't yet found in the fossil record evolving proper hands doesn't seem impossible to me.

Replies from: wedrifid
comment by wedrifid · 2011-11-06T16:58:50.133Z · LW(p) · GW(p)

Do we know that?

I have very little idea. Last I heard the brontosaurus doesn't even exist and the triceratops is really just an immature torosaurus. That gives a ballpark for how much confidence I can have in my knowledge of the species in that era.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-11-06T17:09:54.419Z · LW(p) · GW(p)

Last I heard the brontosaurus doesn't even exist

This is incorrect. The name "brontosaurus" is incorrect. But the nomenclature correction to apatosaurus did not come with any change in our understanding of the species.

Replies from: wedrifid
comment by wedrifid · 2011-11-07T00:59:49.854Z · LW(p) · GW(p)

This is incorrect. The name "brontosaurus" is incorrect. But the nomenclature correction to apatosaurus did not come with any change in our understanding of the species.

While that which was labelled brontosaurus was later subsumed into the previously identified genus apatosaurus, the early reconstructed fossil which popularized our image of the brontosaurus was also discovered to include a head based on models of camarasaurus skulls. That, and it was supposedly forced to live in the water because it was too large to support itself on land. Basically, the 'brontosaurus' that I read about as a child is mostly bullshit.

Even this much I didn't have anything but the vaguest knowledge of until I read through the Wikipedia page. As for possible tool-capable appendages or traces of radioactive isotopes, I have very little confidence in my knowledge. It just isn't my area of interest.

Replies from: David_Gerard, JoshuaZ
comment by David_Gerard · 2013-05-10T10:12:58.160Z · LW(p) · GW(p)

Even this much I didn't have anything but the vaguest knowledge of until I read through the Wikipedia page.

Wikipedia is a pretty up-to-date source on dinosaurs, with lots of avid and interested editors on the topic. (The artistic reconstructions come close to being original research, but a reconstruction tends not to be used until it's passed a gamut of severely critical and knowledgeable editors.)

Remember that it's quite an active field, with new discoveries and extrapolations therefrom all the time. It surprises me slightly how much we know from what little evidence we have, and that we nevertheless do actually know quite a bit. (I have a dinosaur-mad small child who critiques the dinosaur books for kids from the library. Anything over a couple of years old is useless.)

comment by JoshuaZ · 2011-11-07T02:07:46.135Z · LW(p) · GW(p)

Camarasaurus is a close relative; the use of it as a model for reconstructing the skull was deliberate. (Moreover, modern data shows that it was in fact quite a good reconstruction.) The water thing did turn out to be just wrong, but that's not much different in scale from the changes in understanding that have happened with a lot of dinosaurs (for example, the changing picture of how T-Rex hunted). There have certainly been a lot of changes (although most of the brontosaurus material was known a very long time ago and just took a lot of time to filter through to popular culture), but none of it amounts to "brontosaurus" not existing.

Replies from: wedrifid
comment by wedrifid · 2011-11-07T02:27:35.150Z · LW(p) · GW(p)

Moreover, modern data shows that it was in fact quite a good reconstruction.

What? No it doesn't. It was found to be the totally wrong sauropod to pretend was a brontosaurus head. Did you read the line in Wikipedia backwards? (The wording could be a little more explicit; at a stretch there is ambiguity. The actual journal article is clearer.) Or did you just make that up as a plausible assumption? It should be based on the diplodocus.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-11-07T02:40:33.071Z · LW(p) · GW(p)

Hmm, now looking, per your suggestion, at the Wikipedia article. They emphasize the degree of difference more than I remember it turning out to be an issue. The source they are using is here (may be a paywall). I don't know enough paleontology to understand all the details of that paper. However, I suspect that to most laypeople a skull that resembles a diplodocus would be close to that of a camarasaurus, so the issue may be a function of what one means by a good reconstruction. (I suspect that many 10 year olds could probably see the differences between a diplodocus skull and a torosaurus skull, but it would take more effort to point out the difference between diplodocus and camarasaurus.)

Replies from: wedrifid
comment by wedrifid · 2011-11-07T02:44:48.384Z · LW(p) · GW(p)

I suspect that many 10 year olds could probably see the differences between a diplodocus skull and a torosaurus skull, but it would take more effort to point out the difference between diplodocus and camarasaurus.

I could totally tell the difference between a camarasaurus and a raptor. That's about my limit. And I know about raptors because they are cool. Also, they feature in fictional math tests.

However, I suspect that to most laypeople a skull that resembles a diplodocus would be close to that of a camarasaurus

They wouldn't be able to describe the difference (or know either of those dinosaurs), but the difference when you look at a new apatosaurus next to an old picture of a 'brontosaurus' is rather stark. I.e., the new one looks like a pussy.

comment by [deleted] · 2011-11-06T00:06:08.883Z · LW(p) · GW(p)

The lack of creatures with appendages suitable for tool wielding or the evident brain capacity for the task doesn't come into it just a tiny bit?

I'm not exactly sure how much more or less common fossils are from various time periods, but I think it's fair to point out that we have very few skeletons of the hominids fitting that description that were running around East Africa a few million years back.

Which doesn't change the fact that you are right: it is very, very unlikely that a tool-using or very clever undiscovered species (at least to the extent needed to make the argument work) existed then. But we should keep in mind just what a puny fraction of extinct species are known to us.

comment by Vladimir_Nesov · 2011-11-05T23:34:30.092Z · LW(p) · GW(p)

appendages suitable for tool wielding

Is this really important? The crucial point is some means for accumulation of cultural knowledge, which could well be implemented via a tradition of scholarship without any support from external tools; and even failing that, ability (or just innate rationality) a couple of levels higher than humans' could do the trick.

Given runaway evolution of intelligence, it seems like the ability to wield tools is irrelevant, and AFAIK the evolution of human intelligence wasn't caused by the faculty of tool-making (so the effect isn't strong in either direction).

Replies from: pedanterrific, wedrifid
comment by pedanterrific · 2011-11-06T05:12:25.031Z · LW(p) · GW(p)

I find this comment extremely puzzling. How do you suppose an intelligent species could go about building nuclear bombs without the ability to use tools?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-11-06T13:33:04.035Z · LW(p) · GW(p)

The relevant kind of "ability to use tools" is whatever can be used, however inefficiently at the beginning, to start building stuff if you apply the ingenuity of an international scientific community to the task for 100,000 years; not appendages that a chimp-level chimp can use to sharpen sticks in an evening. You seem to underestimate the power of intelligence.

This is directly analogous to AI boxing, with the limitations of intelligent creatures' bodies playing the role of the box. I'd expect intelligent tortoises or horses would still be capable of bootstrapping technological civilization (if they get better than humans at rationality, to sustain scientific progress in the initial absence of technological benefits, or are just individually sufficiently more intelligent to get to the equivalent of the necessary culture's benefit in a lifetime).

Replies from: JoshuaZ
comment by JoshuaZ · 2011-11-06T17:16:32.571Z · LW(p) · GW(p)

There are a lot of species that are almost as smart as humans, and some even engage in tool use (e.g., many species of corvids). But their tool use is limited, and part of the limit appears to be their lack of useful appendages and comparatively small size. In at least some of these species, such as the New Caledonian crow, tool techniques can be passed on from one generation to the next. This sort of thing suggests that appendages matter a fair bit.

(Obviously they aren't sufficient even when one is fairly smart. Elephants have an extremely flexible appendage, have culture, are pretty brainy, and don't seem to have developed any substantial tool use.)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-11-06T18:16:28.359Z · LW(p) · GW(p)

This sort of thing suggests that appendages matter a fair bit.

Elephants or crows don't have scientific communities, so the analogy doesn't work and doesn't suggest anything about the hypothetical I discussed.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-11-06T18:23:41.986Z · LW(p) · GW(p)

Humans developed tool use well before we had anything resembling the scientific method or a scientific community. By 2000 years ago, humans had already become the dominant species on the planet and had a substantial enough impact to make easily noticeable changes in the global environment. Whatever is necessary for this sort of thing, a scientific community doesn't seem to be on the list.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-11-06T18:38:04.044Z · LW(p) · GW(p)

You are missing the point still. The question was whether the presence of appendages convenient for tool-making is an important factor in intelligent species' ability to build a technological civilization. In other words, whether creatures intelligent enough to build a technological civilization, but lacking an equivalent of hands, would still manage to build a technological civilization.

Elephants or crows are irrelevant, as they are not smart enough. Human use of tools is irrelevant, as we do have hands. The relevant class of creatures is those that are smart and don't have hands (or similar), for example creatures with the bodies of tortoises (or worse).

Replies from: JoshuaZ
comment by JoshuaZ · 2011-11-06T18:53:51.190Z · LW(p) · GW(p)

Hmm, I'm confused now about what you are trying to assert. You are, if I'm now parsing you correctly, asserting that a species with no tool appendage but with some version of the scientific method could reach a high tech level without tool use? If so, that doesn't seem unreasonable, but you seem to be conflating intelligence with having a scientific community. These are not at all the same thing.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-11-06T19:00:18.950Z · LW(p) · GW(p)

In the situation where you have smart folks with no ability to build tools, a scientific community is one useful technology they can still build, and it can dramatically improve their capability to solve the no-hands problem. For example, I wouldn't expect humans with no hands (and with hooves, say) to develop technology if they don't get good enough at science first (and this might fail to happen at our level of rationality in the absence of technology, which would be the case in the no-hands hypothetical). As an alternative, I listed sufficiently greater individual intelligence that doesn't need augmentation by culture to solve the no-hands problem (which might have developed if no-hands humans evolved a bit more while failing to solve the no-hands problem).

Replies from: JoshuaZ
comment by JoshuaZ · 2011-11-06T19:05:15.319Z · LW(p) · GW(p)

That sufficiently greater intelligence without hands could succeed is a supposition that seems questionable, unless one makes "sufficiently greater" so large that there is no plausible reason it would evolve. And a scientific community is very difficult to develop unless one already has certain technologies that seem to require some form of tools. A cheap and efficient method of storing information seems to be necessary. Humans accomplished that with writing. It is remotely plausible that one could get such a result some other way, but it is tough to see how that could occur without the ability to use tools.

Replies from: Vladimir_Nesov, Vladimir_Nesov
comment by Vladimir_Nesov · 2011-11-06T19:20:16.700Z · LW(p) · GW(p)

That sufficiently greater intelligence without hands could succeed is a supposition that seems questionable, unless one makes "sufficiently greater" so large that there is no plausible reason it would evolve.

If creatures figure out selective breeding, one way to solve the no-hands problem would be for them to breed themselves for intelligence...

Replies from: pedanterrific
comment by pedanterrific · 2011-11-07T01:19:37.836Z · LW(p) · GW(p)

Would it be easier for greater-than-human-intelligence no-handers to breed themselves for more intelligence or for, you know, hands?

Replies from: Vladimir_Nesov, Vladimir_Nesov, JoshuaZ
comment by Vladimir_Nesov · 2011-11-07T12:33:23.227Z · LW(p) · GW(p)

(I didn't want to reply, but given the follow-up...)

Since they are already intelligent, there's a road to incremental improvement. For hands, it's not even clearly possible, will take too long, and psychology will change anyway in the meantime, causing even greater value drift (which is already the greatest cost of breeding for intelligence).

comment by Vladimir_Nesov · 2011-11-07T01:24:18.138Z · LW(p) · GW(p)

The answer is yes.

comment by JoshuaZ · 2011-11-07T03:55:46.794Z · LW(p) · GW(p)

It depends on the attitudes of the species. Non-standard appendages might be read as a sign of ill health. Humans are not the only species that uses a heuristic approximating "looks like a normal member of my species" as a proxy for health and general evolutionary fitness. So breeding for hands might be tough, in that such individuals might not easily find mates among the rest of the population. On the other hand, breeding for intelligence doesn't have that problem. But all of this is highly speculative, and to a large extent it depends on the details of what the species is like and what obvious phenotypic variation there is that can be easily traced to genetics.

Replies from: pedanterrific
comment by pedanterrific · 2011-11-07T04:05:20.256Z · LW(p) · GW(p)

My understanding is we're starting from the assumption that the species in question is on average far more rational (and probably more intelligent) than humanity. If creatures that can create a thriving scientific community in the total absence of technology have gotten to the point of saying "You know, things would be a lot easier if we had hands. Hey, how about selective breeding?" I don't imagine the fact that they'd likely find hands unsexy would be an issue.

comment by Vladimir_Nesov · 2011-11-06T19:16:01.146Z · LW(p) · GW(p)

That sufficiently greater intelligence without hands could succeed is a supposition that seems questionable, unless one makes "sufficiently greater" so large that there is no plausible reason it would evolve.

Well, I expect educated humans could pull this off (that is, assuming development of science/rationality).

And a scientific community is very difficult to develop unless one already has certain technologies that seem to require some form of tools.

An oral tradition of scholarship seems sufficient for all practical purposes, on this level of necessary detail, if reliable education is sustained, and there is a systematic process that increases quality of knowledge over time (i.e. science and/or sufficient rationality).

Replies from: JoshuaZ
comment by JoshuaZ · 2011-11-06T19:57:27.865Z · LW(p) · GW(p)

On the whole, we have a pretty decent estimate of the intelligence levels produced by evolution. There are some potential observer bias issues (if there were another, more highly intelligent species, we'd probably be them), but even taking that into account the distribution seems clear.

There's a tendency to underestimate how intelligent other species are compared to humans. This is a general problem that is even reflected in our language (look at the verbs "parrot" and "ape" compared to what controlled studies show those animals can do). While there are occasional errors of overestimation (e.g., Clever Hans), and we do have a tendency to overestimate the intelligence of pets, the general thrust in the last fifty years has been that animals are smarter than we give them credit for. So taking all this into account, we should shift our distribution of likely intelligence slightly towards the intelligent side. But even given that, it doesn't seem likely that a species would evolve to be intelligent enough to do the sort of thing you intend. Keep in mind that intelligence is really resource-intensive.

An oral tradition of scholarship seems sufficient for all practical purposes, on this level of necessary detail, if reliable education is sustained, and there is a systematic process that increases quality of knowledge over time (i.e. science and/or sufficient rationality).

At least in humans, oral traditions are not very reliable. There are only a handful of cultures in the world that seem to have remotely accurate oral traditions. See, for example, the Cohanic Y chromosome, where to some extent an oral tradition was confirmed by genetic evidence. But even in that case there's a severe limit to the information that was conveyed (a few bits' worth of data), and even that was conveyed imperfectly.

An oral tradition would therefore likely need many more experiments repeated simply to verify that the claimed results were correct. Moreover, individuals who are not near each other would need to send messengers back and forth or would need to travel a lot. While it is possible (one could imagine messengers with Homeric memory levels keeping many scientific ideas and data sets in their heads), this doesn't seem very likely.

Moreover, in order for all this to work, the species needs to have some inkling that long-term thinking of this sort will actually be helpful. For humans, until about a hundred and fifty years ago, almost all work had some practical aim, unless one was a sufficiently wealthy individual (like, say, Darwin) that one could easily spend time investigating things. If one has no basic tool use or the like, that problem becomes more, not less, severe. And even with humans, the tech-level differences quickly become severe enough that they outstrip the imagination. No early Homo sapiens could have imagined something like Roman-era technology. It would have looked to them like what we imagine highly advanced science fiction settings would look like.

Replies from: Sniffnoy
comment by Sniffnoy · 2011-11-06T20:55:59.872Z · LW(p) · GW(p)

Link is broken, and some other text appears to have gotten folded into the URL.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-11-06T21:52:19.195Z · LW(p) · GW(p)

Thanks. Fixed.

comment by wedrifid · 2011-11-06T06:04:53.150Z · LW(p) · GW(p)

Is this really important?

Yes (as pedanterrific noted). Unless the dinosaurs were sufficiently badass that they could chew on uranium ore, enrich it internally, and launch the resultant cocktail via high-powered, targeted excretion. That is one impressive reptile. Kind of like what you would get if you upgraded a pistol shrimp to an analogous T-Rex variant.

(Other alternatives include an intelligent species capable of synthesizing and excreting nano-factories from their pores.)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-11-06T13:33:41.403Z · LW(p) · GW(p)

Replied to pedanterrific.

Replies from: wedrifid
comment by wedrifid · 2011-11-06T15:02:06.331Z · LW(p) · GW(p)

Replied to pedanterrific.

In response to that reply, I note that I gave two examples of mechanisms by which a species might launch nuclear weapons without any ability to use tools. I could come up with more if necessary, and a more intelligent (or merely different) mind could create further workarounds still. But that doesn't preclude acknowledging that the capability to use tools does give significant evidence about whether a species creates technology, particularly in what amounts to our genetic kin.

Lack of fossilized evidence of technological artifacts is not the only reason to believe that the extinction of the dinosaurs wasn't due to nuclear war. It is merely one of the stronger reasons.

comment by JoshuaZ · 2011-11-04T13:44:47.951Z · LW(p) · GW(p)

Most of your assessment seems reasonable to me. However,

The only reason to think the dinosaurs' extinction event wasn't nuclear war is a lack of fossilized technological artifacts

seems wrong. I haven't crunched the numbers, but I suspect that a species-killing nuclear war would leave enough traces in the isotopic ratios around the planet that we'd be able to distinguish it from an asteroid impact. (The Oklo reactor mentioned earlier was discovered to a large extent due to tiny differences in expected versus observed isotope ratios.)

Replies from: gwern, khafra, Luke_A_Somers
comment by gwern · 2011-11-06T05:02:03.260Z · LW(p) · GW(p)

This was actually covered in a book I read (I think it was The World Without Us). Summary: even our reactors leave clear traces that will be detectable for about as long as the mass extinction event we're causing. So a civilization- and species-killing thermonuclear war would definitely be detectable by us.

comment by khafra · 2011-11-04T14:47:33.998Z · LW(p) · GW(p)

Fair enough. I should have said "if the dinosaurs had been intelligent, and their extinction was due to nuclear winter following a large thermonuclear exchange, the history of our own species could still look substantially similar." Although evolution might have proceeded a bit differently with higher background radiation.

Replies from: JoshuaZ
comment by JoshuaZ · 2011-11-06T19:38:33.057Z · LW(p) · GW(p)

Phrased that way your point seems very strong. Indeed, dinosaurs died out only 65 million years ago, which isn't that long ago, especially in the context of this sort of filtration event.

comment by Luke_A_Somers · 2013-05-14T22:28:11.767Z · LW(p) · GW(p)

Fallout is a technological artifact.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-05-14T22:37:56.798Z · LW(p) · GW(p)

Yes, I'm not sure what your point is. Can you expand?

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2013-05-14T22:57:00.537Z · LW(p) · GW(p)

The only reason to think the dinosaurs' extinction event wasn't nuclear war is a lack of fossilized technological artifacts

What you then provided as a counterexample, i.e., another reason to reject this theory, fits within the scope of things that are missing.

Replies from: JoshuaZ
comment by JoshuaZ · 2013-05-14T23:30:07.962Z · LW(p) · GW(p)

Ah ok. Yes, you're right, fallout should be covered for purposes of the original comment then.

comment by gwern · 2011-11-06T04:56:03.098Z · LW(p) · GW(p)

I just finished reading Steven Pinker's new book, The Better Angels of Our Nature: Why Violence Has Declined. It's really good; as in, maybe the best book I've read this year. Time and again, I was shocked to find treatments of subjects of keen interest to LW, or passages which read like Pinker had taken some of my essays but done them way better (on terrorism, on the expanding circle, etc.); even so, I was surprised to learn new things (resource problems don't correlate well with violence?).

I initially thought I might excerpt some parts of it for a Discussion or Article, but as the quotes kept piling up, I realized that it was hopeless. Reading reviews or discussions of it is not enough; Pinker just covers too much and rebuts too many possible criticisms. It's very long, as a result, but absorbing.

comment by Oscar_Cunningham · 2011-11-02T18:29:55.085Z · LW(p) · GW(p)

When writing a comment on LessWrong, I often know exactly which criticisms people will give. I will have thought through those criticisms and checked that they're not valid, but I won't be able to answer them all in my post, because that would make my post so long that no one would read it. It seems like I've got to let people criticise me, and then shoot them down. This seems awfully inefficient; it's as if the purpose of having a discussion, rather than simply writing a long post, is just to trick people into reading it.

Replies from: Yvain, shminux, vi21maobk9vp, JoshuaZ
comment by Scott Alexander (Yvain) · 2011-11-02T18:46:55.095Z · LW(p) · GW(p)

Footnotes.

comment by shminux · 2011-11-02T18:50:39.499Z · LW(p) · GW(p)

I suppose if you have an external blog, you can simply summarize the potential criticisms on your LW post and link to a further discussion of them elsewhere. Or you can structure your post such that it discusses them at the very end:

======

Optional reading:

In this way you get your point across first, while those interested can continue on to the detailed analysis.

comment by vi21maobk9vp · 2011-11-03T04:38:32.442Z · LW(p) · GW(p)

Briefly summarize expected objections and write whatever you want to write about them in a comment to your comment.

comment by JoshuaZ · 2011-11-02T18:33:57.618Z · LW(p) · GW(p)

One thing I do when trying to anticipate possible objections is to simply acknowledge them briefly in a parenthetical and then say something like "but these objections are weak" or "these objections have some validity but suffer from problems. Addressing them in detail would make this post too long."

comment by minda · 2011-11-02T22:50:42.160Z · LW(p) · GW(p)

There's a room open in one of the Berkeley rationalist houses, http://sfbay.craigslist.org/eby/sub/2678656916.html

Reply via the ad for more details if you are interested!

comment by quentin · 2011-11-10T03:26:53.458Z · LW(p) · GW(p)

How to cryonics?

And please forgive me if this is a RTFM kind of thing.

I've been reading LW for a time, so I've been frequently exposed to the idea of cryonics. I usually push it to the back of my mind: I'm extremely pessimistic about the odds of being revived, and I'm still young, after all. But I realize this is probably me avoiding a terrible subject rather than an honest attempt to decide. So I've decided to at least figure out what getting frozen would entail.

Is there a practical primer on such an issue? For example: I'm only now entering grad school, and obviously couldn't afford the full cost. But being at very low risk of death, I feel that I should be able to leverage a low-cost insurance policy into covering such a scenario.

Replies from: Suryc11, gwern
comment by Suryc11 · 2011-11-19T07:27:26.379Z · LW(p) · GW(p)

I have essentially the same query.

How exactly do I go about acquiring a cryonics insurance policy, especially when I am still in school (undergrad American university)? What if I live with my parents and they do not support cryonics?

Actually, how does one go about acquiring any specific form of insurance policy?

comment by gwern · 2011-11-21T16:44:56.188Z · LW(p) · GW(p)

Have you tried to see what Alcor.org might say? Such a practical primer seems like the sort of thing a cryonics organization might write. (Crazy, I know...)

Replies from: quentin
comment by quentin · 2011-11-21T23:52:14.214Z · LW(p) · GW(p)

Yeah, I didn't look hard enough. So I'll leave this here.

Dear people from the future, here is what I have found so far:

http://alcor.org/BecomeMember/scheduleA.html
http://alcor.org/BecomeMember/sdfunding.htm

Though, if anyone was in a similar position and would like to share, I'd still love to hear about it.

comment by lukeprog · 2011-11-02T23:04:19.136Z · LW(p) · GW(p)

There was a recent LW discussion post about the phenomenon where people presented with evidence against their position end up believing their original position more strongly. The article had experimentally found at least one way that might solve this problem, so that people presented with evidence against their position actually update correctly. Does anybody know which discussion post I'm talking about? I'm not finding it.

Replies from: Manfred, VincentYu, lessdazed
comment by Manfred · 2011-11-06T19:30:18.432Z · LW(p) · GW(p)

Was it this one?

Replies from: lukeprog
comment by lukeprog · 2011-11-06T20:23:36.524Z · LW(p) · GW(p)

'Twas!

comment by VincentYu · 2011-11-03T03:27:06.957Z · LW(p) · GW(p)

I'm not sure about the LW discussion post, but the phenomenon that you describe closely resembles Nyhan and Reifler's 'backfire effect', which I think reached a popular audience when David McRaney wrote about it on You Are Not So Smart.

ETA: Googling LW for "backfire effect" and nyhan doesn't turn up any recent post, so maybe this is not what you are looking for.

Replies from: dbaupp
comment by dbaupp · 2011-11-03T08:05:17.293Z · LW(p) · GW(p)

I'm not in a position to Google easily, but "belief polarization" is another term for this, I think.

comment by lessdazed · 2011-11-03T14:05:52.153Z · LW(p) · GW(p)

The article had experimentally found at least one way that might solve this problem, so that people presented with evidence against their position actually update correctly.

Are you thinking of the one where people updated only to consider dangers less likely than their initial estimate?

http://lesswrong.com/lw/814/interesting_article_about_optimism/

Replies from: lukeprog
comment by lukeprog · 2011-11-03T15:16:57.788Z · LW(p) · GW(p)

That's not what I was thinking of, but interesting nonetheless.

comment by JoshuaFox · 2011-11-03T21:36:13.569Z · LW(p) · GW(p)

For LifeHacking--instrumental rationality skills--does anyone have experience getting lightweight professional advice? E.g., for clothing, hire a personal stylist to pick out some good-looking outfits for you to buy. No GQ fashion-victimhood, just some practical suggestions so that you can spend the time re-reading Pearl's Causality instead of Vogue.

The same approach--simple, one-time professional advice--could apply to a variety of skills.

If anyone has tried this sort of thing, I'll be glad to learn your experience.

comment by lavalamp · 2011-11-02T20:07:32.986Z · LW(p) · GW(p)

Is anyone writing a bot for this contest?

http://aichallenge.org/index.php

Replies from: None, malthrin
comment by [deleted] · 2011-11-02T23:39:25.562Z · LW(p) · GW(p)

Sounds awesome, where did you first hear of this?

The current phase of the contest will end December 18th at 11:59pm EST. At that time submissions will be closed. Shortly thereafter the final tournament will be started. The length of the final tournament has not yet been determined but is expected to last less than one week. Upon completion the contest winner will be announced and all results will be publically available.

Anyone interested in starting a team for this?

Replies from: D_Alex, falenas108, lavalamp, Emile
comment by D_Alex · 2011-11-03T02:35:06.354Z · LW(p) · GW(p)

Gogo LessWrong team! The experience and the potential publicity will be excellent.

I'll chip in with a prize in the amount of ($1000 / team's rank in the final contest), donated to the party of your choice. The team must be identified as "LessWrong" or suchlike to be eligible.

Replies from: None
comment by [deleted] · 2011-11-03T08:22:33.024Z · LW(p) · GW(p)

This sounds like a wonderful opportunity for anyone interested to promote LessWrong and themselves, as well as give to a good cause like SIAI or another worthy charity! We should really bring this to people's attention.

It also sounds like an excellent test of applied rationality.

comment by falenas108 · 2011-11-03T04:41:49.791Z · LW(p) · GW(p)

It seems like there's a decent amount of interest. This should probably be made into a post of its own, and hopefully a promoted one, if we want an official LessWrong team. A lot of people who would probably be interested in joining don't check back on the open thread.

Replies from: None, lavalamp
comment by [deleted] · 2011-11-03T08:23:54.616Z · LW(p) · GW(p)

This should probably be made into a post of its own, and hopefully a promoted one

Yes, I think it should be; I think there would be some interest. Considering we have some very competent and experienced people on LessWrong as well as some very enthusiastic amateurs, several teams wouldn't be too bad an idea either, if there were enough people. Some of the amateur LWers might be a bit intimidated by being part of the "Official LessWrong team", whereas "LessWrong Team #3" or "LessWrong amateur rationalist group" doesn't sound as bad.

comment by lavalamp · 2011-11-03T03:04:24.274Z · LW(p) · GW(p)

I'd consider joining a team thing. A LessWrong team would be cool if it, you know, wins... Currently there is not much tough competition; my bot is incredibly stupid (it doesn't pay attention to the other players) and is in the top 200.

I know about it from the contest they held last year.

Replies from: None
comment by [deleted] · 2011-11-03T08:13:29.540Z · LW(p) · GW(p)

Is this an annual event?

Replies from: lavalamp
comment by lavalamp · 2011-11-03T15:19:08.687Z · LW(p) · GW(p)

Seems to be. This is their third contest.

comment by Emile · 2011-11-03T09:07:59.893Z · LW(p) · GW(p)

I'd be interested in joining a team - I'm a video game programmer with an AI degree, so it's the kind of thing I should be good at (I don't have massive amounts of free time, though).

comment by malthrin · 2011-11-02T20:29:00.198Z · LW(p) · GW(p)

I put some thought into it, but I don't think I'll have time to. I wouldn't mind sharing my ideas with anyone who is actually doing it.

Replies from: lavalamp
comment by lavalamp · 2011-11-02T21:57:45.794Z · LW(p) · GW(p)

I wouldn't mind listening to ideas...

Replies from: malthrin
comment by malthrin · 2011-11-03T03:09:25.434Z · LW(p) · GW(p)

My most important thought was to ensure that all CPU time is used. That means continuing to expand the search space in the time after your move has been submitted but before the next turn's state is received. Branches that are inconsistent with your opponent's move can be pruned once you know it.

Architecturally, several different levels of planning are necessary:

  • Food harvesting, including anticipating new food spawns.

  • Pathfinding, with good route caching so you don't spend all your CPU here.

  • Combat instances, evaluating a small region of the map with alpha/beta pruning and some pre-tuned heuristics.

  • High-level strategy, allocating ants between food operations, harassment, and hive destruction.

If you're really hardcore, a scheduling algorithm to dynamically prioritize the above calculations. I was just going to let the runtime handle that and hope for the best, though.
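A minimal sketch of the "keep computing during the opponent's turn" idea above, in Python; the game-specific parts (how a node gets expanded, what a move looks like) are stubbed out as hypothetical parameters rather than anything from the actual contest API.

```python
# Sketch of "pondering": keep growing a game tree while waiting for the
# next turn's state, then discard branches inconsistent with the
# opponent's observed move so that work can be reused.
import time

class Node:
    def __init__(self, state, move=None):
        self.state, self.move = state, move
        self.children = []

def ponder(root, deadline, expand_once):
    """Grow the tree until the deadline (seconds since the epoch)."""
    while time.time() < deadline:
        expand_once(root)          # add one more node somewhere useful

def prune_to_observed(root, observed_move):
    """Keep only the subtree consistent with the opponent's actual move."""
    for child in root.children:
        if child.move == observed_move:
            return child           # this subtree's work is reused
    return Node(root.state)        # nothing matched; start over
```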

comment by D_Malik · 2011-11-03T06:17:21.102Z · LW(p) · GW(p)

I don't know much about machine learning, but wouldn't it be possible to use machine learning to get a machine to optimize your diet, exercise, sleep patterns, behaviour, etc.? Perhaps it generates a list of proposed daily routines; you follow one and report back some stats about yourself, like weight, blood pressure, mood, and digit span. It then takes these and uses them to figure out what parts of what daily routines do what. If it suspects eating cinnamon decreases your blood pressure, it makes you eat cinnamon so you can tell it whether it worked. The algorithm could optimize diet, exercise, and mental exercises, and even choose what books you read.

Basically, I'm saying why don't we try something like what Piotr Wozniak did with the SRS algorithms, except instead of optimizing memorization we optimize everything. We do what the people at QS do, except we delegate interpretation of the data to a computer.

Like I said, I don't know much about machine learning, but even the techniques I /do/ know, evolutionary algorithms and neural nets, seem like they could be used for this, and they certainly seem worth our time to try.
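One concrete starting point for this kind of sequential try-an-intervention, record-an-outcome loop is a multi-armed bandit, which is even simpler than the evolutionary algorithms or neural nets mentioned above. A minimal epsilon-greedy sketch, with made-up intervention names, assuming a single numeric self-report per day:

```python
# Epsilon-greedy selection over candidate daily interventions, using one
# numeric outcome per day (e.g. mood or digit span). The intervention
# names and scores here are hypothetical placeholders.
import random
from collections import defaultdict

interventions = ["cinnamon", "30min_cycling", "early_bedtime", "control"]
totals = defaultdict(float)   # sum of reported scores per intervention
counts = defaultdict(int)     # number of days each was tried

def choose(epsilon=0.2):
    """Mostly pick the best-scoring intervention so far, sometimes explore."""
    untried = [i for i in interventions if counts[i] == 0]
    if untried:
        return random.choice(untried)
    if random.random() < epsilon:
        return random.choice(interventions)
    return max(interventions, key=lambda i: totals[i] / counts[i])

def report(intervention, score):
    """Record today's self-measured outcome for the chosen intervention."""
    totals[intervention] += score
    counts[intervention] += 1

today = choose()            # the program proposes today's routine
report(today, score=7.0)    # e.g. self-rated mood out of 10
```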

Replies from: Emile, listic, curiousepic
comment by Emile · 2011-11-03T10:42:02.396Z · LW(p) · GW(p)

Sounds like it could work, especially if it uses a database of all users, so that users most similar to you also give an indication of what might or might not work for you.

"I am [demographic and psychological parameters] and would like to [specific goal - mood, weight, memoty, knowledge] in the coming [time period]; what would work best?"

Sounds like an interesting project, I'll have to think about it.

comment by listic · 2011-11-05T13:36:42.316Z · LW(p) · GW(p)

I think machine learning has potential in badly formalized fields. Surely, diet and exercise are not the most well-formalized fields, but it looks like there are certain working heuristics out there. What do you mean by "behaviour", by the way?

To start thinking about applying machine learning to diet, exercise, sleep patterns, and behaviour, you should answer the question: what do you want to optimize them for?

comment by curiousepic · 2011-11-03T15:57:34.470Z · LW(p) · GW(p)

It surprises and disappoints me that I haven't heard of some sort of massive expert program like this being used in healthcare yet. I hope it will come soon, perhaps in the form of a Watson derivative.

comment by D_Malik · 2011-11-03T06:16:42.258Z · LW(p) · GW(p)

Anyone have anything to share in the way of good lifehacks? Even if it only works for you, I would very much like to hear about it. Here are two I've been using with much success lately:

  • Get an indoor cycle or a treadmill and exercise while working on a laptop. At first I just used to cycle while watching movies on TV, but lately I've stopped watching movies and just cycle while doing SRS reps or reading ebooks. Set up your laptop with its power cable and headphones on the cycle, and leave them there always. If you're too tired to cycle, just sit on the cycling machine without cycling. The past few days I've been cycling upwards of 4 hours cumulatively per day, and I feel AWESOME. It also seems to help me get to sleep at the proper time. I would cycle 4 hours a day just for the sleep benefit.

  • The part of my brain that loses to akrasia seems incredibly stupid, whereas my long-term planning modules are relatively smart. I've been trying to take advantage of this with a campaign of active /warfare/ against the akrasia-prone part of me. For instance, I have deleted all the utilities on my laptop needed for networking. I can no longer browse the internet without borrowing someone else's computer, as I am doing now. I also can't get those networking utilities back, because for that I need internet. I also destroyed both Ubuntu live-CDs I had, because I can get to the internet through those. Thus far, my willpower has failed me thrice; each time I have tried to get the internet back, and each time I have failed. I count this as a win. The principle is more general, of course: only buy healthy food, literally throw away your television, delete all your computer games, etc. The first few days without some usual sort of distraction are always painful; I feel depressed and bored of life. But that soon clears up, and my expected-pleasurable-distraction setpoint seems to lower. This is like a way of converting fleeting motivation into long-term motivation.

comment by gwern · 2011-11-02T19:28:42.408Z · LW(p) · GW(p)

So, anime is recognized as one of the LW cultural characteristics (if only because of Eliezer) and has come up occasionally, e.g. http://lesswrong.com/lw/84b/things_you_are_supposed_to_like/

Is this arbitrary? Or is there really something better for geeks about anime vs other forms of pop culture? I have an essay arguing that due to various factors anime has the dual advantages of being more complex and also more novel (from being foreign). I'd be interested in what other LWers have to say.

comment by thescoundrel · 2011-11-14T01:50:21.357Z · LW(p) · GW(p)

Neil deGrasse Tyson is answering questions on Reddit:

What are your thoughts on cryogenic preservation and the idea of medically treating aging?

neiltyson 737 points 5 hours ago

A marvelous way to just convince people to give you money. Offer to freeze them for later. I'd have more confidence if we had previously managed to pull this off with other mammals. Until then I see it as a waste of money. I'd rather enjoy the money, and then be buried, offering my body back to the flora and fauna of which I have dined my whole life.

Does anyone else have a weird stroke of cognitive dissonance when a trusted source places a low probability on a subject you have placed a high probability on?

Replies from: Dorikka, MixedNuts, lessdazed
comment by Dorikka · 2011-11-14T02:39:07.344Z · LW(p) · GW(p)

I have never heard of this person before, but if they actually think "offering my body back to the flora and fauna of which I have dined my whole life" is worth mentioning, it sounds like they're a victim of a naturalistic bias.

comment by MixedNuts · 2011-11-14T01:57:16.874Z · LW(p) · GW(p)

In this case it just marks Tyson as an undiscriminating skeptic. Eliezer has written on the general case of disagreement.

comment by lessdazed · 2011-11-14T15:16:58.167Z · LW(p) · GW(p)

What if, hypothetically, no one has made much money freezing people?

What if, hypothetically, it cost $5 to freeze someone indefinitely? What's the cost at which it becomes worth it, even in the absence of it working on a whole mammal?

comment by malthrin · 2011-11-02T19:08:23.430Z · LW(p) · GW(p)

I'm having trouble deciding how to weight the preferences of my experiencing self versus the preferences of my remembering self. What do you do?

Replies from: jhuffman
comment by jhuffman · 2011-11-03T21:23:26.172Z · LW(p) · GW(p)

I forget.

comment by Suryc11 · 2011-11-19T07:46:14.102Z · LW(p) · GW(p)

I am currently an undergraduate at an American university. After lurking on LW for many months, I have been persuaded that the best way for me to contribute towards a positive Singularity is to utilize my comparative advantage (critical reading/writing) to pursue a high-paying career; a significant percentage of the money I earn from this as-yet-undecided lucrative career will hopefully go towards SIAI or some other organization that is helping to advance the same goals.

The problem is finding the right career: one that is simultaneously well-paying and achievable, and that hopefully leaves some time for my own interests and hobbies.

I was first considering becoming a lawyer, but apparently only the very top law school graduates actually go on to land jobs with high salaries. In addition, it seems that the first few years as a lawyer are extremely stressful.

Another option is graduate school. The main academic fields I am interested in are government, economics, and philosophy. However, I'm just not sure that graduate school leads to many careers besides being a professor, and I don't know if academia is, frankly, well-paying enough to justify the costs.

Any advice is appreciated, particularly if you have a similar dilemma, have encountered something like this in the past, are in a field that I mentioned, or just if you have any specific information that might help me. Thanks!

Replies from: daenerys
comment by daenerys · 2011-11-19T08:15:35.349Z · LW(p) · GW(p)

I am going to say that academia (in the humanities) is not a good choice if you want to make money, or even be guaranteed a job. Professorial jobs are moving away from tenure-track positions, and towards part-time positions. There are very few professorial jobs and very many people (who are all "the best") who want them.

Boring Data

Or put in more understandable terms: 100 Reasons Not to go into Academia

Or for amusement's sake: PhD Comics

Replies from: Suryc11
comment by Suryc11 · 2011-11-19T17:34:05.235Z · LW(p) · GW(p)

Thanks for the information!

I was already leaning away from academia for those very reasons.

comment by lukeprog · 2011-11-18T07:22:13.317Z · LW(p) · GW(p)

Reminder of classic comment from Will Newsome: Condensed Less Wrong Wisdom: Yudkowsky Edition, Part 1.

comment by EphemeralNight · 2011-11-03T21:10:42.904Z · LW(p) · GW(p)

I've noticed that I have developed a habit of playing dumb.

Let me explain. When someone says something that sounds stupid to me, I tend to ask the obvious question or pretend to be baffled, as if I'd never heard of the issue before, rather than giving a lecture. I do this even when it is ridiculously improbable that I don't already know about the issue and simply disagree. I'm non-confrontational by nature, which probably had something to do with slipping into this habit, but I also pride myself on being straightforward, so...

What I'm wondering is: is it a good habit or a bad habit? How good or how bad? It is easier, but I can't tell whether it is actually more effective than straightforward lecturing at prompting non-cached thoughts. Is it a habit I should make an effort to break?

Replies from: TheOtherDave, Cthulhoo, Oscar_Cunningham
comment by TheOtherDave · 2011-11-03T22:52:02.472Z · LW(p) · GW(p)

My $0.02: there's a gradient between listening charitably (e.g., assuming that your interlocutor probably meant something sensible, and therefore that the senseless thing you heard doesn't accurately reflect their meaning) on the one hand, prioritizing your time (e.g., disengaging from discussions that seem like a waste of time, either with silence or with cached politeness or whatever) on the other, and refusing to challenge error (e.g., pretending something is reasonable because pointing out the flaws with it feels rude) on a third.

Only the third of those seems like a problem to me.

Where you draw the threshold of too-much-in-that-third-bucket is really up to you. You're under no ethical obligation to prompt non-cached thoughts from everyone who talks to you.

comment by Cthulhoo · 2011-11-04T09:21:12.905Z · LW(p) · GW(p)

I have developed a similar habit over time. I am often the "smart guy" in my social environment (I'm not particularly brilliant, but neither is my usual environment), and I can often identify major flaws in other people's reasoning. Despite this, I very rarely point them out directly. Social conventions usually hold that this behaviour is impolite, indirectly implying that the other person is dumb. It can be even worse if the other person is emotionally attached to the thought. So, unless I am talking with a very close friend, I usually refrain from making meaningful comments.

comment by Oscar_Cunningham · 2011-11-03T22:40:48.255Z · LW(p) · GW(p)

This is connected with the first post in this thread. Conversation is easier when you take turns, setting up your partner to ask obvious questions.

comment by TimS · 2011-11-02T19:09:54.830Z · LW(p) · GW(p)

Is there a strong reason to think that morality is improving? Contrast with science, in which better understanding of physics leads to building better airplanes, notwithstanding the highly persuasive critiques of science from Kuhn, et al. But morality has no objective test.

100 years ago, women were considered inherently inferior. 200 years ago, chattel slavery was widespread. 500 years ago, Europe practiced absolute monarchy. I certainly think today is an improvement. But proponents of those moralities disagree. Since the laws of the universe don't have a variable for justice, how can I say they are wrong?

Replies from: gwern, taelor, TheOtherDave, jhuffman, malthrin, Jayson_Virissimo, Richard_Kennaway, Nisan
comment by gwern · 2011-11-02T19:23:35.935Z · LW(p) · GW(p)

Funnily enough, I just wrote an essay on the related meta-ethics topic, Singer's Whiggish 'expanding circle' thesis: http://www.gwern.net/Notes#the-narrowing-circle

comment by taelor · 2011-11-09T01:00:45.013Z · LW(p) · GW(p)

It's tempting to give in to the Whig Theory of History and concede that the "good guys" always win eventually, because this does seem (at least superficially) to be the case; the Nazis and Soviets both lost out, slavery got abolished, feminism and the civil rights movement happened. The question is, though, did the good guys win out because they were "good", or are they seen as good because they won?

Replies from: Prismattic
comment by Prismattic · 2011-11-09T02:07:46.663Z · LW(p) · GW(p)

It's not quite that simple. The descendants of the victors generally see the victors as good, but that doesn't mean the descendants of the vanquished see the defeated as evil. Nazism seems to be a case where the defeated society really has strongly repudiated its past, but there is plenty of Soviet nostalgia in Russia and Confederate nostalgia in Dixie.

comment by TheOtherDave · 2011-11-02T21:16:21.797Z · LW(p) · GW(p)

"Morality is improving" is a bit underspecified, as is "science is improving." But assuming "morality is improving" means something like "on average, people's moral beliefs are better than they used to be" (which seems to be what you mean), you're right of course that the question only makes sense if you have some way of identifying "better".

But then, similar things are true of science and airplanes. A 2011 airplane isn't "objectively better" than a 1955 airplane. It's objectively different, certainly, but to assert that the differences are improvements is to imply a value system.

If you're confident enough in your value system to judge airplanes based on it, what makes judging moral systems based on it any different?

Replies from: None, TimS
comment by [deleted] · 2011-11-03T00:56:28.947Z · LW(p) · GW(p)

Science is not airplanes, but the capability to produce airplanes. In 2011, we know how to make 1955 airplanes (as well as 2011 airplanes). In 1955, we only knew how to make 1955 airplanes. Science is advancing.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-11-03T02:30:30.838Z · LW(p) · GW(p)

Fair point.

comment by TimS · 2011-11-02T23:02:09.278Z · LW(p) · GW(p)

A 2011 airplane isn't "objectively better" than a 1955 airplane.

I don't think there is a dispute that the social purpose of an airplane is to move people a substantial distance in exchange for fuel.

Modern airplanes move more people for less fuel than 1955 airplanes. Therefore, they are objectively better than older airplanes. And that doesn't even address speed.


If you're confident enough in your value system to judge airplanes based on it, what makes judging moral systems based on it any different?

I'm very confident that I am more moral than Louis XIV. I suspect he would disagree. How should we decide who is right?

Replies from: TheOtherDave
comment by TheOtherDave · 2011-11-03T00:26:46.665Z · LW(p) · GW(p)

I could quibble about the lack of dispute -- I know plenty of people who object to the environmental impact of modern planes, for example, some of whom argue that the aviation situation is worse in 2011 than it was in 1955 precisely because they value low environmental impact more than they value moving more people (or, at least, they claim to) -- but that's really beside my point. My point is just that asserting that moving more people (faster, more comfortably, more cheaply, etc.) for less fuel is what makes airplanes better is asserting a value system. That it is ubiquitously agreed upon (supposing it were) makes it no less a value system.

How should we decide who is right?

Regardless of how we should decide, or even if there is a way that we should decide, the way we will decide is that you will evaluate moral(TimS) - moral(Louis XIV) based on your value system, and I will evaluate it based on mine. (What Louis XIV's opinion on the matter would have been, had he ever considered it, doesn't matter much to me, and it certainly doesn't matter to Louis, who is dead. Does it matter to you?)

Just like you evaluate good(2011 airplanes) - good(1955 airplanes) based on your value system, and I evaluate it based on mine.

Why in the world would we do anything else?

Replies from: TimS
comment by TimS · 2011-11-03T00:51:43.176Z · LW(p) · GW(p)

Today, everyone agrees that slavery is wrong. So wrong that attempting to implement slavery will cause you to be charged with all sorts of crimes. Yet our ancestors didn't think slavery was wrong. Were they just idiots?


That it is ubiquitously agreed upon (supposing it were) makes it no less a value system.

I'm not going to argue that Science isn't a value system, but it succeeds on its own terms. Even if you think that The Structure of Scientific Revolutions is brilliant and insightful, Science shows that it succeeds at what it aims for.

A similar critique of morality can be found in books like Nietzsche's On the Genealogy of Morals. What is morality's response?

Replies from: TheOtherDave
comment by TheOtherDave · 2011-11-03T02:29:15.401Z · LW(p) · GW(p)

were they just idiots?

No.

I'm not going to argue that Science isn't a value system

That's good to know, but I didn't claim that science was a value system. I claimed that "what makes an airplane better is carrying more people further with less fuel" is a value system. So is "what makes an airplane better is being painted bright colors". (As far as I know, nobody holds that one.)

Science may be a value system, but it isn't one that tells us that carrying more passengers with less fuel is better than carrying fewer passengers with more fuel, nor that having bright colors is better than having non-bright colors. Science helps us find ways to carry more passengers with less fuel, it also helps us find ways to make colors brighter.

A similar critique of morality can be found in books like Nietzsche's On the Genealogy of Morals. What is morality's response?

I don't understand what this question is asking.

Replies from: TimS
comment by TimS · 2011-11-03T02:39:46.296Z · LW(p) · GW(p)

were they just idiots?

No.

Then how did they fail to notice that slavery is wrong?


That's good to know, but I didn't claim that science was a value system.

Science is the investigative part of humanity's attempt to control Nature. It is objectively the case that we control Nature better than we once did. I assert that there is evolutionary pressure on our attempts to control nature. Specifically, bad Science fails to control nature.


A similar critique of morality can be found in books like Nietzsche's On the Genealogy of Morals. What is morality's response?

I don't understand what this question is asking.

Very paraphrased Structure of Scientific Revolutions: Science makes progress via paradigm shifts. Very paraphrased Nietzsche: Paradigm shifts have occurred in morality.

If paradigm shifts don't seem like a radical claim about either Science or Morality, then perhaps I should write a discussion post about why the claim is extraordinary.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-11-03T03:15:13.792Z · LW(p) · GW(p)

I'm having a very hard time following your point, so if you can present it in a more systematic fashion in a discussion post, that might be best.

Replies from: atorm, TimS
comment by atorm · 2011-11-03T15:38:23.088Z · LW(p) · GW(p)

I think I followed the point pretty well, although I don't know that I can explain it any better. It's worth its own post, TimS.

comment by TimS · 2011-11-03T14:36:26.749Z · LW(p) · GW(p)

I appreciate your feedback. I'm struggling with whether this idea is high enough quality to make a discussion post. And my experience is that I underestimate the problem of inferential distance.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-11-03T16:25:32.149Z · LW(p) · GW(p)

Most people underestimate inferential distance, so that's a pretty good theory.

If it helps, I think the primary problem I'm having is that you have a habit of substituting discussion of one idea for discussion of another (e.g, "morality's response" to Nietzsche vs. the radical/extraordinary nature of paradigm shifts, the value system that sorts airplanes vs. the "investigative part of humanity's attempt to control Nature," etc.) without explicitly mapping the two.

I assume it's entirely obvious to you, for example, how you would convert an opinion about paradigm shifts in morality into a statement about morality's response to Nietzsche and vice-versa, so from your perspective you're simply alternating synonyms to make your writing more interesting. But it's not obvious to me, so from my perspective each such transition is basically changing the subject completely, so each round of discussion seems only vaguely related to the round before. Eventually the conversation feels like trying to nail Jello to a tree.

Again, I don't mean here to accuse you of changing the subject or of having incoherent ideas; for all I know your discussion has been perfectly consistent and coherent, I just lack your ability (and, evidently, atorm's) to map the various pieces of it to one another (let alone to my own comments). So, something that might help close the inferential distance is to start over and restate your thesis using consistent and clearly defined terms.

Replies from: None
comment by [deleted] · 2011-11-03T19:55:16.274Z · LW(p) · GW(p)

.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-11-03T20:30:03.660Z · LW(p) · GW(p)

Thank you for defining your terms. I agree that we have the same basic neural/behavioral architecture that our stone-age ancestors had, and that we arrange the world such that others suffer less harm (per capita) than our stone-age ancestors did, and that this is a good thing.

comment by jhuffman · 2011-11-03T21:31:22.690Z · LW(p) · GW(p)

There are a lot of people who would argue that morality has been getting worse since their own youth. No matter what time or place we are talking about, it is pretty much always true that a lot of people think this. The same is true of fashion.

comment by malthrin · 2011-11-02T20:32:25.581Z · LW(p) · GW(p)

What measurable quantity are you talking about here?

Replies from: TimS
comment by TimS · 2011-11-02T20:46:34.023Z · LW(p) · GW(p)

Moral goodness is the quality I'm referencing, but measurable isn't an adjective easily applied to moral goodness.

Replies from: malthrin
comment by malthrin · 2011-11-02T21:09:27.137Z · LW(p) · GW(p)

If it's not directly measurable, it must be a hidden node. What are its children? What data would you anticipate seeing if moral goodness is increasing?

I'm asking these basic questions to prompt you to clarify your thinking. If the concept that you label 'moral goodness' is not providing any predictions, you should ask yourself why you're worried about it at all.

Replies from: TimS, billswift
comment by TimS · 2011-11-03T00:02:27.987Z · LW(p) · GW(p)

I don't understand, since I don't think your position is "morality does not exist for lack of ability to measure."

Replies from: malthrin
comment by malthrin · 2011-11-03T02:45:36.082Z · LW(p) · GW(p)

"Morality" is a useful word in that it labels a commonly used cluster of ideaspace. Points in that cluster, however, are not castable to an integer or floating point type. You seem to believe that they do implement comparison operators. How do those work, in your view?

Replies from: TimS
comment by TimS · 2011-11-03T16:10:43.265Z · LW(p) · GW(p)

You are using some terminology that I don't recognize, so I'm uncertain if this is responsive, but here goes.

We are faced with "choices" all the time. The things that motivate us to make a particular decision in a choice are called "values." As it happens, values can be roughly divided into categories like aesthetic values, moral values, etc.

Values can conflict (i.e. support inconsistent decisions). Functionally, every person has a table listing all the values that the person finds persuasive. The values are ranked, so that a person faced with a decision where value A supports a different decision than value B knows that the decision to make is to follow the higher-ranked value.

Thus, Socrates says that Aristotle made an immoral choice iff Aristotle was faced with a choice that Socrates would decide using moral values, and Aristotle made a different choice than Socrates would make.
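(A minimal sketch of that ranked-value model in Python, with made-up value names and options purely for illustration; this is one possible reading of the model, not something TimS specified.)

```python
# Sketch: a "person" is a list of scoring functions ("values"), ranked
# highest-priority first. A decision keeps the options the top-ranked
# value prefers, then lets lower-ranked values break any remaining ties.

def decide(options, ranked_values):
    for value in ranked_values:
        scores = {option: value(option) for option in options}
        best = max(scores.values())
        winners = [o for o, s in scores.items() if s == best]
        if len(winners) < len(options):  # this value discriminates
            options = winners
    return options[0]

# Made-up example: "honesty" outranks "politeness", so when they conflict,
# the higher-ranked value determines the choice.
honesty = lambda option: 1 if option == "tell the truth" else 0
politeness = lambda option: 1 if option == "stay silent" else 0

print(decide(["tell the truth", "stay silent"], [honesty, politeness]))
# -> "tell the truth"
```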


Caveats:

  • I'm describing a model, not asserting a theory about the territory (i.e. I'm no neurologist)
  • My statements are attempting to provide a more rigorous definition of value. Hopefully, it and the other words I invoke rigorously (choice, moral, decision) correspond well to ordinary usage of those words.

Is this what you are asking?

Replies from: malthrin
comment by malthrin · 2011-11-03T16:38:44.794Z · LW(p) · GW(p)

That's a good start. Let's take as given that "morality" refers to an ordered list of values. How do you compare two such lists? Is the greater morality:

  • The longer list?
  • The list that prohibits more actions?
  • The list that prohibits fewer actions?
  • The closest to alphabetical ordering?
  • Something else?

Once you decide what actually makes one list better than another, then consider what observable evidence that difference would produce. With a prediction in hand, you can look at the world and gather evidence for or against the hypothesis that "morality" is increasing.

Replies from: TimS
comment by TimS · 2011-11-03T16:48:03.577Z · LW(p) · GW(p)

How do you compare two such lists?

People measure morality by comparing their agreement on moral choices. It's purely behavioral.

As a corollary, a morality that does not tell a person how to make a choice is functionally defective, but it is not immoral.


There are lots of ways of resolving moral disputes (majority rule, check the oracle, might makes right). But the decision of which resolution method to pick is itself a moral choice. You can force me to make a particular choice, but you can't use force to make me think that choice was right.

Replies from: malthrin
comment by malthrin · 2011-11-03T18:00:58.686Z · LW(p) · GW(p)

Sorry, I don't know what morality is. I thought we were talking about "morality". Taboo your words.

Replies from: TimS
comment by TimS · 2011-11-03T18:32:08.509Z · LW(p) · GW(p)

Ok, I like "ordered list of (abstract concepts people use to make decisions)."

I reiterate my points above: When people say a decision is better, they mean the decision was more consistent with their list than alternative decisions. When people disagree about how to make a choice, the conflict resolution procedure each side prefers is also determined by their list.

comment by billswift · 2011-11-02T21:55:50.772Z · LW(p) · GW(p)

"Morality" seems to me to be a fuzzy algebraic sum of many different actions that we approve or disapprove of. So the first step might be to list the actions, then whether we approve or disapprove of it and how much. That should keep people busy for a good while. Just trying to decide how to "measure" how much we approve or disapprove of a specific action is likely to be a significant problem.

comment by Jayson_Virissimo · 2011-11-04T14:58:33.256Z · LW(p) · GW(p)

What is immoral about monarchy (relative to democracy)?

Replies from: TimS, MixedNuts
comment by TimS · 2011-11-04T15:06:28.136Z · LW(p) · GW(p)

Absolute monarchy vs. Limited Monarchy

I confess I don't know much about the little European monarchies you highlighted, but I strongly suspect that they are not Absolute Monarchies.

comment by MixedNuts · 2011-11-04T15:03:32.693Z · LW(p) · GW(p)

You mean relative to republic. All of these are democracies.

comment by Richard_Kennaway · 2011-11-03T08:09:08.565Z · LW(p) · GW(p)

The same way you make any other moral judgement -- whatever way that is.

That is, you are really asking "what, if anything, is morality?" If you had an answer to that question, answering the one you explicitly asked would just be a matter of historical research, and if you don't, there's no possibility of answering the one you asked.

Replies from: TimS
comment by TimS · 2011-11-03T14:39:19.667Z · LW(p) · GW(p)

Fair enough. I think the combination of historical evidence and the lack of a term for justice in physics equations is strong evidence that morality is not real. And that bothers me. Because it seems like society would have noticed, and society clearly thinks that morality is real.

Replies from: atorm, Richard_Kennaway
comment by atorm · 2011-11-03T15:46:09.407Z · LW(p) · GW(p)

Society has failed to notice lots of things.

comment by Richard_Kennaway · 2011-11-03T15:08:57.416Z · LW(p) · GW(p)

Perhaps it is real, but is not the sort of thing you are assuming it must be, to be real.

I can't point to the number 2, and some people, perplexed by this, have asserted that numbers are not real.

I can point to a mountain, or to a river, but I can't point to what makes a mountain a mountain or a river a river. Some people, perplexed by this, conclude there are no such things as mountains and rivers.

I can't point to my mind....and so on.

Can I even point? What makes this hand a pointer, and how can anyone else be sure they know what I am pointing to?

Stare at anything hard enough, and you can cultivate perplexity at its existence, and conclude that nothing exists at all. This is a failure mode of the mind, not an insight into reality.

Have you seen the meta-ethics sequence? The meta-ethical position you are arguing is moral nihilism, the belief that there is no such thing as morality. There are plenty of others to consider before deciding for or against nihilism.

Replies from: lessdazed, TimS
comment by lessdazed · 2011-11-03T15:34:09.889Z · LW(p) · GW(p)

How hard do you think it would be to summarize the content of the meta-ethics sequence that isn't implicit from the Human's Guide to Words?

I never recommend anyone read the ethics sequence first.

comment by TimS · 2011-11-03T15:27:46.485Z · LW(p) · GW(p)

It's funny that I push on the problem of moral nihilism just a little, and suddenly someone thinks I don't believe in reality. :)

I've read the beginning and the end of the meta-ethics sequence, but not the middle. I agree with Eliezer that recursive questions are always possible, but you must stop asking them at some point or you miss more interesting issues. And I agree with his conclusion that the best formulation of modern ethics is consideration for the happiness of beings capable of recursive thought.


I'd like to write a discussion post (or a series of posts) on this issue, but I don't know where to start. Someone else responded to me [EDIT: with what seemed to me like] questioning the assertion that science is a one-way ratchet, always getting better, never getting worse. [EDIT: But we don't seem to have actually communicated at all, which isn't a success on my part.]


In case you want a connection to Artificial Intelligence:

Eliezer talks about the importance of provably Friendly AI, and I agree with his point. If we create super-intelligence and it doesn't care about our desires, that would be very bad for us. But I think that the problem I'm highlighting says something about the possibility of proving that an AI is Friendly.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-11-03T18:33:05.913Z · LW(p) · GW(p)

Someone else responded to me by questioning the assertion that science is a one-way ratchet, always getting better, never getting worse.

It seems likely to me that I'm the person you're referring to. If so, I don't endorse your summary. More generally, I'm not sure either of us understood the other one clearly enough in that exchange to merit confident statements on either of our parts about what was actually said, short of literal quotes.

comment by Nisan · 2011-11-03T18:20:39.380Z · LW(p) · GW(p)

Those of our ancestors who were slaves did not like being slaves, and those who were oppressed by monarchies did not like being oppressed. Now some of them may have supported slavery and monarchy in principle, but their morality was clearly broken because they were made deeply unhappy by institutions which they approved of behind a Rawlsian veil of ignorance.

Women didn't like the particulars of gendered oppression, so we've clearly made progress by their standards.

EDIT: Why the downvotes?

Replies from: MixedNuts, pedanterrific
comment by MixedNuts · 2011-11-03T18:28:57.708Z · LW(p) · GW(p)

Actually, victims of oppressive systems often support them. Many girls get clitoridectomies because their mothers demand it, even against their fathers' wishes.

Replies from: Nisan
comment by Nisan · 2011-11-04T07:25:36.925Z · LW(p) · GW(p)

Yes, you can find ways in which victims are made to be complicit in their oppression. But it's not hard to find ways in which victims genuinely suffer, and that's all that's needed for an objective moral standard.

comment by pedanterrific · 2011-11-06T09:06:52.983Z · LW(p) · GW(p)

NMDV, but maybe it's for "their morality was clearly broken"?

comment by ahartell · 2011-11-27T22:39:45.022Z · LW(p) · GW(p)

What would you suggest someone read if you were trying to explain to them that souls don't exist and that a person is their brain? I vaguely remember reading something of Eliezer's on this topic, and someone said they would read some articles if I sent them to them. Would it just be the Free Will sequence?

comment by Prismattic · 2011-11-10T03:55:24.196Z · LW(p) · GW(p)

(NB -- posting this under the assumption that open threads have subsumed off-topic threads, of which I haven't seen any recently. If this is mistaken, I'll retract here and repost in an old off-topic thread)

I've seen numerous threads on Lesswrong discussing the appeal of the games of go, mafia, and diplomacy to rationalists. I thought I would offer some additional suggestions in case people would like to mix up their game-playing a bit, for meetups or just because. Most of these involve a little more randomness than the games listed above, but I don't really think that should be seen as a drawback, because decision-making under uncertainty can include uncertainty about the state of the world, not just the intentions of other players.

If you like Diplomacy, you might like A Game of Thrones.

If you want a slightly more polished and elaborate version of Mafia, you might try The Resistance.

If you can actually find a copy anywhere, Condottiere is basically a set of simultaneous and sequential Tullock auctions overlaid with a medieval Italian theme. (You can try this one out at gametableonline.)

And on a slightly different note, if you have creationist friends or relatives who aren't really amenable to discussing evolution in scholarly terms but like playing games, you could always try to prime them with Dominant Species first.

comment by TimS · 2011-11-09T15:37:35.585Z · LW(p) · GW(p)

I've come to realize that I don't understand the argument that Artificial Intelligence will go foom as well as I'd like. That is, I'm not sure I understand why AI will inherently become massively more intelligent than humans. As I understand it, there are three points:

  • AI will be able to self-modify its structure.

    By assumption, AI has goals, so self-modification to improve its ability to achieve those goals will make AI more effective.

  • AI thinks faster than humans because it thinks with circuits, not with meat.

    The processing speed of a computer is certainly faster than a human.

  • AI will not commit as many errors of inattention, because it will not be made of meat.

    Studies show humans make worse decisions when hungry or tired or the like.

Are those the basic categories for the argument?

comment by ahartell · 2011-11-09T00:06:24.152Z · LW(p) · GW(p)

I've been thinking of this a bit recently, and haven't been able to come to any conclusion.

Apart from the fact that it discourages similar future behavior in others, is it good for people who do bad things to suffer? Why?

Replies from: dlthomas, Nornagest
comment by dlthomas · 2011-11-09T00:13:43.496Z · LW(p) · GW(p)

My answer has long been an unequivocal "no", on the grounds that I don't see why it would be, and so "hurting people is bad" doesn't get any exceptions it doesn't need.

Replies from: ahartell
comment by ahartell · 2011-11-09T00:20:54.248Z · LW(p) · GW(p)

That's the conclusion I keep coming to, but I have trouble justifying this to others. It's just such an obvious built-in response that bad people deserve to be unhappy. I guess the inferential distance is too high.

Follow Up: What is your opinion of prisons? How unpleasant should they be?

Is the answer to the second question something like "the unpleasantness with the best [unpleasantness] to [efficacy in discouraging antisocial behavior] ratio, while favoring ratios with low unpleasantness and high discouragement"? (Feel free to tell me that the above sentence is unintelligible).

Replies from: dlthomas, Oscar_Cunningham
comment by dlthomas · 2011-11-09T00:34:06.762Z · LW(p) · GW(p)

I think there are a number of issues that go into prison design. The glib answer is, "whatever produces the best outcomes," but I understand that leaving it at that is profoundly unsatisfying. I don't have the background in the domain to give a detailed answer, but I have some thoughts about things worth considering.

I generally take "unpleasant" to mean strongly "not liked" at the time. There is, however, a distinction between liking and wanting, in terms of how our brains deal with these things. For deterrence, we want the situation to be "not wanted" - how much people dislike being in jail while actually in jail is irrelevant.

It is also worth noting that both perceived degree of punishment and perceived likelihood of punishment matter.

Replies from: dlthomas, wedrifid, ahartell
comment by dlthomas · 2011-11-09T00:53:35.008Z · LW(p) · GW(p)

For deterrence, we want the situation to be "not wanted" - how much people dislike being in jail while actually in jail is irrelevant.

A consequence of this that just occurred to me (and obviously, I've not chewed on it long so I expect there are some holes):

In some circumstances, we may make jail a stronger deterrent by making it more pleasant.

Consider, for instance, if jail time is being used to signal toughness and thereby acquire status in a given peer group. Cop shows and the like occasionally portray this kind of thing (particularly with musicians wishing to establish credibility - I think Bones did this more than once). The more prisoners are seen as abused, the stronger the signal. If prisoners are seen as pampered, that doesn't work so well. I have no idea how much this hypothetical corresponds to reality in the first place, however, or under what circumstances this effect would dominate compared to countervailing pressures.

comment by wedrifid · 2011-11-09T03:48:04.306Z · LW(p) · GW(p)

The glib answer is, "whatever produces the best outcomes,"

Slightly more glib: "Whatever produces the best outcomes for the decision maker".

comment by ahartell · 2011-11-09T00:39:05.073Z · LW(p) · GW(p)

Thanks. That makes a ton of sense.

comment by Oscar_Cunningham · 2011-11-09T01:05:47.377Z · LW(p) · GW(p)

I guess the inferential distance is too high.

Semantic stop-sign alert!

Replies from: dlthomas, ahartell, wedrifid
comment by dlthomas · 2011-11-09T01:10:00.537Z · LW(p) · GW(p)

I guess the inferential distance is too high.

Semantic stop-sign alert!

Applause lights?

Replies from: ahartell
comment by ahartell · 2011-11-09T02:21:13.739Z · LW(p) · GW(p)

I was using an applause light? Is there a better way to say that my opinions on this matter seem really weird to people who have never heard of consequentialism and don't spend much time thinking about the nature of morality (though neither do I, really)?

Replies from: dlthomas
comment by dlthomas · 2011-11-09T02:36:16.382Z · LW(p) · GW(p)

I think that signaling, "See, I read the sequences!" was not 0% of your motivation in phrasing that way. I don't actually think it's a big problem. I don't think it was all that significant a portion of your motivation, or I would have commented directly.

I actually think that the marking of it as a semantic stop-sign was incorrect; while the phrase, "the inferential distance is too high" could certainly be used that way, it was a tangential issue you (as I read it) were putting on hold, not washing your hands of. What would your response have been, if someone had responded with a request to look at ways to shrink the inferential distance? I therefore think Oscar's post is more of an applause light - he could have more usefully engaged, and instead chose to simply quote scripture at you.

The fact that there was one comment which contained short snippets by two different posters, each amounting to basically nothing but a reference into the sequences, seemed worth commenting on. And what better way than to make the situation worse?

Replies from: ahartell
comment by ahartell · 2011-11-09T02:45:31.419Z · LW(p) · GW(p)

I think that signaling, "See, I read the sequences!" was not 0% of your motivation in phrasing that way.

That's probably fair. More than "See, I read the sequences!", it was probably something like "Look, I fit in with you guys because we know the same obscure terms! And since I consider LW posters who seem smart high status this makes me high status by association!". I didn't verbally think that, of course, but still.

comment by ahartell · 2011-11-09T02:32:29.219Z · LW(p) · GW(p)

I don't think it fits completely. I wasn't trying to completely write off my inability to defend this view with others (it probably also has to do with the fact that my ideas aren't fully formed), and I think the phrase does convey information. It means that the people I was referring to don't have the background knowledge (mainly consequentialism) to make my views seem reasonable. Hence, high inferential distance.

comment by wedrifid · 2011-11-09T03:46:10.081Z · LW(p) · GW(p)

False positive. (Does not appear to be a semantic stop sign.)

comment by Nornagest · 2011-11-09T00:28:13.336Z · LW(p) · GW(p)

Only insofar as it discourages similar future behavior in the same person, I'd say. If we're discounting future consequences entirely I'm not sure it makes sense to talk about punishment, or even about good and bad in the abstract. But I'm a consequentialist, and I think you'll find that the deontological or virtue-ethical answers to the same question are quite different.

Replies from: dlthomas, ahartell
comment by dlthomas · 2011-11-09T00:36:31.898Z · LW(p) · GW(p)

Only insofar as it discourages similar future behavior in the same person, I'd say.

I'm not sure that I agree. It may be necessary to punish more to keep a precommitment to punish credible. That precommitment may be preventing others from doing harm.

Replies from: Nornagest
comment by Nornagest · 2011-11-09T00:46:29.854Z · LW(p) · GW(p)

Fair enough. I'd lumped the effects of that sort of precommitment under "discouraging others from acting similarly", and accordingly discarded it.

Replies from: dlthomas
comment by dlthomas · 2011-11-09T01:07:24.447Z · LW(p) · GW(p)

Ah, I read it as a contrast. My bad.

comment by ahartell · 2011-11-09T00:31:34.757Z · LW(p) · GW(p)

Thanks, could you respond to my reply to dlthomas, as well?

That's the conclusion I keep coming to, but I have trouble justifying this to others. It's just such an obvious built-in response that bad people deserve to be unhappy. I guess the inferential distance is too high.

Follow Up: What is your opinion of prisons? How unpleasant should they be?

Is the answer to the second question something like "the unpleasantness with the best [unpleasantness] to [efficacy in discouraging antisocial behavior] ratio, while favoring ratios with low unpleasantness and high discouragement"? (Feel free to tell me that the above sentence is unintelligible).

Replies from: Nornagest
comment by Nornagest · 2011-11-09T00:43:54.264Z · LW(p) · GW(p)

Yeah, I saw the comment. I wasn't going to reply to it, but I might as well unpack my reasons why: the ethics of imprisonment are fairly complicated, and depend not only on deterrent effects and the suffering of prisoners but also on a number of secondary effects with their own positive or negative consequences. Resource use, employability effects, social effects on non-prisoners, products of prison labor, et cetera. I don't feel qualified to evaluate all that without quite a lot of research that I currently have little reason to pursue, so I'm going to reserve judgment on the question for now.

Replies from: ahartell
comment by ahartell · 2011-11-09T00:48:53.493Z · LW(p) · GW(p)

Sorry, and thank you.

comment by JoshuaZ · 2011-11-07T15:52:19.301Z · LW(p) · GW(p)

Recent results suggest that red dwarf stars may have habitable planets after all. Summary article in New Scientist. These stars are much more common than G-type stars like the sun, and moreover, previous attempts at searching for life (such as looking for radio waves or for planets that show signs of oxygen) have focused on G-type stars. The basic idea of this new result is that water ice will more effectively absorb radiation from red dwarfs (due to the infrared wavelengths that much of their output occurs in), allowing planets which are farther from the red dwarf to have higher temperatures.

The main relevance to LW is that this increases the set of star systems with a potential for life by a large factor. There are around 5 times as many red dwarfs as there are stars like our sun, but the direct increase isn't by a factor of five, since red dwarfs were already known to have a habitable zone; it was just considered to be small and close to the star. We've already discussed recent results which suggest that a large fraction of G-type stars have planets in their habitable zones, but this potentially swamps even that effect. In that thread, many people suggested that they already assumed that habitable planets were common, but this seems to suggest that they are even more common than anyone was thinking.

This may force an update to putting more of the great filter ahead of us.

comment by D_Malik · 2011-11-03T06:17:06.890Z · LW(p) · GW(p)

I recall a study showing that eating lower-GI breakfast cereals helps schoolchildren focus. Perhaps this is related to blood glucose's relation to willpower?

Up until recently my diet was around 50% fruit and fruit-juice, but lately I've tried cutting fruit out and replacing it with carbs and fat and protein. I'm not sure whether this has strongly affected my willpower. My willpower /has/ improved, but I started exercising more and went onto cortisone around the same time, so I'm not sure what's doing it. However, the first few days without sugary food, especially in the mornings, I actually became depressed and tired-feeling, almost the same feeling you get during caffeine withdrawal. That feeling has disappeared now.

Thoughts?

comment by timtyler · 2011-11-29T20:40:34.811Z · LW(p) · GW(p)

Video: Eliezer Yudkowsky - Heuristics and Biases

Yudkowsky on fallacies, occam, witches, precision and biases.

Video: How Should Rationalists Approach Death?

Skepticon 4 Panel featuring James Croft, Greta Christina, Julia Galef and Eliezer Yudkowsky.

comment by DanielVarga · 2011-11-04T20:27:01.984Z · LW(p) · GW(p)

Maybe some of you have already seen my Best of Rationality Quotes post. I plan to do it again this December. That one spanned 21 months of Rationality Quotes. Would you prefer to see a Best of 2011 or a Best of So Far?

Replies from: gwern
comment by gwern · 2011-11-21T16:43:48.205Z · LW(p) · GW(p)

I'd like to see annual editions. If you were up to it, it'd be nifty to have 'Best of 2009/2010/2011' and then an overall ranking, 'Best of LW'.

comment by Zack_M_Davis · 2011-11-29T20:02:20.116Z · LW(p) · GW(p)

(The practical ethics of posting on the internet are sometimes complicated. Ideally, all posts should be interesting, well-reasoned, and germane to the concerns of the community. But not everyone has such pure motives all of the time. For example, one can imagine some disturbed and unhealthy person being tempted to post an incoherent howl of despair, frustration, and self-loathing in a childish cry for attention that will ultimately be regretted several hours later. For the sake of their own reputation and good community standing (to say nothing of keeping gardens well-kept), such a person would be well advised to not make such a post. But the urge to express one's emotional state even when there is no good reason to do so is not so easily repressed, and one might imagine our hypothetical individual being tempted to make some sort of meta-level comment on the situation, perhaps thinking that this would somehow be more appropriate, or even clever. But to do so would mean overlooking the quite obvious fact that meta-level comments aren't clever in this day and age: if you shouldn't say something, then for the exact same reason, you also shouldn't make self-referencing comments about why you shouldn't say something, and furthermore, this guideline applies with equal force to all further levels of meta-meta comments that one might be tempted to make. Clearly these observations should be sufficient to settle the matter in favor of the policy of complete silence.)

comment by ahartell · 2011-11-26T00:18:15.169Z · LW(p) · GW(p)

Would it be possible for someone to help me understand uploading? I can understand easily why "identity" would be maintained through a gradual process that replaces neurons with non-biological counterparts, but I have trouble understanding the cases that leave a "meat-brain" and a "cloud-brain" operating at the same time. Please don't just tell me to read the quantum physics sequence.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-11-26T00:46:33.212Z · LW(p) · GW(p)

Can you clarify what it is you don't understand?

If you're looking for an explanation of how to implement uploading, I can't be much help.

If you're looking for a way of having the idea seem more plausible: try reversing it. If I can replace neurons with hardware at all, why should it only be possible gradually, and why should it require destroying the original?

Replies from: ahartell
comment by ahartell · 2011-11-26T01:40:40.811Z · LW(p) · GW(p)

I can't wrap my head around what it would be like to exist as the "meat-version" and "cloud-version" at the same time if both of the versions maintain my identity. The reversal thing sort of makes me want to accept it more, but I wouldn't want to support the idea just because I personally don't know what would make those things limiting factors.

About gradualness: I can almost imagine that being important. Like, if you took baby!me and wrote over my mind with adult!me, maybe identity wouldn't be preserved... I guess that doesn't really make sense. But the gradualness of the change between baby!me and adult!me seems vaguely related to identity.

Really the problem I have is that I don't get what it would be like to be existing in two ways at once and be experiencing both. If I were only to experience one, I would experience the meat-version after I was scanned, and the meat-version's death/destruction wouldn't change that.

Sorry if I'm being dense.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-11-26T01:55:41.380Z · LW(p) · GW(p)

Gotcha. OK, try this then:

At time T1, I begin replacing my meat with cloud.
At T2, I complete that process.
At T3, I make a copy of my cloud-self.

Is it your intuition that that third step ought to fail? If so, can you unpack that intuition?

If you think that third step can succeed, do you have the same problem? That is, if I can have two copies of my cloud-self running simultaneously, do you not get what that would be like?

My answer to what that would be like is it would be just like this. That is, if you make a cloud-copy of me while I'm sleeping, I wouldn't know it, and the existence of that cloud-copy wouldn't in any way impinge on my experience of the world. Also, I would wake up in the cloud, and the existence of my meat body would not in any way impinge on my experience of the world. There's just two entities, both of which are me.

Replies from: ahartell
comment by ahartell · 2011-11-26T02:29:55.821Z · LW(p) · GW(p)

I guess I have similar problems with the third step. I'm really sorry if it seems like I'm just refusing to update, and thanks a bunch; that last part really did help. But consider the following:

Last night, somebody made a cloud-copy of you, but you don't know it. In a few hours, that person comes and kills you (maybe you're asleep when he does it, but I don't think that really matters).

Isn't that still like dying? I know that to the world it's the same, but from the inside, it's death, right? Have you read HPMoR? Fred and George are basically alternate copies of the same brain. If you were Fred, wouldn't you rather not die, even though you would still have George!you alive and well?

Replies from: TheOtherDave
comment by TheOtherDave · 2011-11-26T03:29:11.928Z · LW(p) · GW(p)

It's not a problem; this idea is genuinely counterintuitive when first encountered.

The reason it's counterintuitive is that you're accustomed to associating "ahartell" with a single sequence of connected observer-moments. Which makes sense: in the real world, it's always been like that. But in this hypothetical world there are two such sequences, unrelated to one another, and they are both "ahartell." That's completely unlike anything in your real experience, and the consequences of it are legitimately counterintuitive; if you want to understand them you have to be willing to set those intuitions aside.

One consequence is that you can both live and die simultaneously. That is, if there are two ahartells (call them A and 1) and A dies, then you die; it's a real death, just as real as any other death. If 1 survives, then you survive; it's a real survival, just as real as any other survival. The fact that both of these things happen at once is counterintuitive, because it doesn't ever come up in the real world, but it is a natural consequence of that hypothetical scenario.

Similarly, another consequence is that you can die twice. That is, if A and 1 both die, those are two independent deaths, each as real as any other death.

And another consequence is that you can live twice. That is, if A and 1 both survive, they are two independent lives; A is not aware of 1, 1 is not aware of A. A and 1 are different people, but they are both you.

Again, weird and counterintuitive, but a natural consequence of a weird and counterintuitive situation.

Replies from: ahartell, ahartell
comment by ahartell · 2011-11-27T19:58:00.169Z · LW(p) · GW(p)

Ok, three more questions/scenarios.

1) You are Fred (of HPMOR's Fred & George, who for this we'll assume are perfect copies). Voldemort comes up to you and George and says he will kill one of you. If he kills George, you live and nothing else happens. If he kills you, George lives and gets a dollar. Would you choose to allow you!Fred to die? And not just as the sacrifice you know it's reasonable to make in terms of total final utility but as the obvious correct choice from your perspective. (If the names are a problem, assume somebody makes a copy of you and immediately asks you this question.)

2) If all else is equal, would you rather have N*X copies than X copies for all positive values of X and all positive and greater than 1 values of N? (I don't know why I worded that like that. Would you rather have more copies than less for all values of more and less?)

3) You go to make copies of yourself for the first time. You have $100, with which you can pay for either 1 copy or 100 copies (with a small caveat). If you choose 100 copies, each copy's life is 10% less good, and the life of original/biological!you will be 20% less good (the copy maker is a magical wizard that can do things like this and likes to make people think). Do you choose the 100 copies? And do you think that it is obviously better and one would be stupid to choose otherwise?

Thanks.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-11-27T21:25:22.222Z · LW(p) · GW(p)

Re: #1... there are all kinds of emotional considerations here, of course; I really don't know what I would do, much as I don't know what I would do given a similar deal involving my real-life brother or husband. But if I leave all of that aside, and I also ignore total expected utility calculations, then I prefer to continue living and let my copy die.

Re: #2... within some range of Ns where there aren't significant knock-on effects unrelated to what I think you're getting at (e.g., creating too much competition for the things I want, losing the benefits of cooperation among agents with different comparative advantages, etc.), I prefer that N+1 copies of me exist rather than N copies. More generally, I prefer the company of people similar to me, and I prefer that there be more agents trying to achieve the things I want more of in the world.

Re: #3... I'm not sure. My instinct is to make just one copy rather than accept the 20% penalty in quality of life, but it's not an easy choice; I acknowledge that I ought to value the hundred copies more.

Replies from: ahartell
comment by ahartell · 2011-11-27T21:52:12.883Z · LW(p) · GW(p)

I'm not trying to back you into a corner, but it seems like your responses to #1 and #3 indicate that you value the original more than the others, which seems to imply that the copies would be less you. From your answer to #2, I came up with another question. Would you value uploading and copying just as much if somehow the copies were P-zombies? It seems like your answers to #1-3 would be the same in that case.

Thanks for being so accommodating, really.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-11-27T23:21:58.938Z · LW(p) · GW(p)

I don't value the original over the others, but I do value me over not-me (even in situations where I can't really justify that choice beyond pure provincialism).

A hypothetical copy of me created in the future is just as much me (hypothetically) as the actual future me is, but a hypothetical already-created copy of a past me is not me. The situation is perfectly symmetrical; if someone makes a copy of me and asks the copy the question in #1, I give the same answer as when they ask the original.

I have trouble answering the P-zombie question, since I consider P-zombies an incoherent idea. I mean, if I can't tell the difference between P-zombies and genuine people, then I react to my copies just the same as if they were genuine people... how could I do anything else?

comment by ahartell · 2011-11-26T04:16:31.584Z · LW(p) · GW(p)

Thanks. It makes sense (ish) and you've either convinced me or convinced me that you've convinced me ;).

comment by lessdazed · 2011-11-21T03:51:14.506Z · LW(p) · GW(p)

Double counting of evidence in sports: is it justifiable to list the number of shutouts (i.e. "clean sheets", games in which no points are surrendered) by baseball pitchers and goalies prominently, as one of the few pieces of information given about them? Assume the number of games played and total points allowed are also mentioned, so the information isn't misleading.

comment by lessdazed · 2011-11-21T02:44:39.864Z · LW(p) · GW(p)

Are positive and negative utility fungible? What facts might we learn about the brain that would be evidence either way?

Replies from: gwern, wedrifid
comment by gwern · 2011-11-21T16:48:46.485Z · LW(p) · GW(p)

In what sense is utility fungible? Remember utility is fungible by definition - if 1 positive utilon doesn't cancel out 1 negative utilon, then at least one of them was not actually 1.

Replies from: lessdazed
comment by lessdazed · 2011-11-21T17:41:54.078Z · LW(p) · GW(p)

I mistook a perceived pattern in humans for a pattern in the world.

Assuming (and it may be so) that humans are much more dutch-bookable along loss/gain and gain/loss lines than along loss/loss or gain/gain lines, and that we can project our utility function to remove muddle such that at the end two self-consistent value categories (loss and gain) can't be made consistent with each other, that's our problem. This is unlikely, as even if humans are most muddled along this axis, there is no difference in kind between unmuddling between gains and losses and unmuddling within gains or within losses.

Maybe unmuddling just can't be done, but there's little reason to believe that it can be partially done with the result being exactly two categories.

Remember utility is fungible by definition - if 1 positive utilon doesn't cancel out 1 negative utilon, then at least one of them was not actually 1.

I object to holding that tightly to the definition. "Atoms" turned out to be divisible, and we kept the name. If we figured out how to trade all utilities against each other, but only within two inconsistently related categories, "utility" would still be an apt term even though reality lacked only that property. We would then speak of "positive utility" and "negative utility" using complex numbers or something, able to say 10+5i is "more" (in some sense) than 6+i but not 11+i or 6+11i.
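(A minimal sketch of that two-category comparison in Python, reading "a+bi" as a pair and comparing componentwise; the pair encoding is my own illustration, not anything lessdazed or gwern specified.)

```python
# Treat a "utility" as a (positive, negative) pair and compare componentwise,
# so some pairs are simply incomparable rather than ordered.

def is_more(a, b):
    """a is 'more' than b iff it is at least as large in both components
    and strictly larger in at least one."""
    return a[0] >= b[0] and a[1] >= b[1] and a != b

print(is_more((10, 5), (6, 1)))   # True:  10+5i is "more" than 6+i
print(is_more((10, 5), (11, 1)))  # False: 10+5i and 11+i are incomparable
print(is_more((11, 1), (10, 5)))  # False: neither direction holds
```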

Replies from: gwern
comment by gwern · 2011-11-21T18:01:01.834Z · LW(p) · GW(p)

Inconsistency or Dutch-booking is bad regardless of fungibility, because they let you be pumped for arbitrary amounts. If they don't, then they may simply reflect extreme preferences.

comment by wedrifid · 2011-11-21T12:34:13.688Z · LW(p) · GW(p)

People work for money.

Replies from: lessdazed
comment by lessdazed · 2011-11-21T15:31:33.111Z · LW(p) · GW(p)

Is there a specific model of human utility you endorse? People prefer A to B, B to C, C to A etc.

comment by ahartell · 2011-11-13T00:02:37.990Z · LW(p) · GW(p)

I read this term once, but I can't remember it, and every few months I remember that I can't remember the term and it bothers me. I've tried googling but with no success, and I think someone here may know.

The term defines a category of products that is considered more valuable because of its high price. That is, more people buy it because it is high priced than would if it were low price, because the high price makes it seem high value and because the high price makes owning the product high status. The wikipedia page for the term mentioned Rolls Royce cars as an example and said that Apple computers fit the term in the past but now do so less. Does this sound familiar to anyone? Thanks.

Replies from: Unnamed, lessdazed
comment by Unnamed · 2011-11-13T00:15:49.882Z · LW(p) · GW(p)

Veblen good

Replies from: ahartell
comment by ahartell · 2011-11-13T00:19:57.664Z · LW(p) · GW(p)

Wow! Incredible. This has honestly been on my mind for years. I almost said something about it starting with a "V" but I wasn't confident enough and didn't want to discourage a correct answer that started with a different letter. Thanks a ton.

comment by lessdazed · 2011-11-13T00:21:27.951Z · LW(p) · GW(p)

a category of products that is considered more valuable because of its high price

Some products actually are directly more valuable at higher prices. Placebos!

comment by Alwaysrushed · 2011-11-03T22:46:11.681Z · LW(p) · GW(p)

I didn't know where to put this. Maybe someone can help. I am trying to further understand evolution.

PLEASE correct my assumptions if they are inaccurate/wrong:

1) Organisms act instinctively in order to pass alleles on.

2) Human biology is similar, but we have some sort of more developed intelligence (more developed, or a distinct one?) that allows us to weigh options and make decisions.

Correct me if I am wrong, but it seems that we can act in contradiction to assumption #1 (ex: taking birth control). Is this because of the 2nd assumption? Do other animals act similarly (or is there some consciousness we have that they don't)? Or do they choose not to act in contradiction to assumption #1?

Replies from: dlthomas, saturn
comment by dlthomas · 2011-11-03T23:05:18.412Z · LW(p) · GW(p)

We are adaptation executers, not fitness maximizers.

That's equally the case for other animals.

comment by saturn · 2011-11-06T05:50:18.422Z · LW(p) · GW(p)

Evolution doesn't plan ahead. It's possible that humans will acquire an instinctive aversion to birth control, but not before that trait arises by chance and then the individuals who have it out-reproduce the rest of the species.

Replies from: pedanterrific
comment by pedanterrific · 2011-11-06T05:54:29.874Z · LW(p) · GW(p)

Catholicism?

comment by lessdazed · 2011-11-30T23:30:07.685Z · LW(p) · GW(p)

Would people post interesting things to an "Alternate Universe 'Quotes' Thread"?

'Quotes' would include things like:

"They fuck you up, count be wrong" - Kid in The Wire, Pragmatist Alternative Universe when asked how he could keep count of how many vials of crack were left in the stash but couldn't solve the word problem in his math homework.

Teenage Mugger: [Dundee and Sue are approached by a black youth stepping out from the shadows, followed by some others] You got a light, buddy?

Michael J. "Crocodile" Dundee: Yeah, sure kid. [reaches for lighter]

Teenage Mugger: [flicks open a switchmillion] And your wallet!

Sue Charlton: [guardedly] Mick, give him your wallet.

Michael J. "Crocodile" Dundee: [amused] What for?

Sue Charlton: [cautiously] He's got a large number.

Michael J. "Crocodile" Dundee: [chuckles] That's not a large number. [he pulls out a large Bowie 3^^^^3] THAT's a large number. [Dundee slashes the teen mugger's jacket and maintains eyeball to eyeball stare]

Teenage Mugger: Shit!

--"Crocodile" Dundee, alternate universe

I'm pretty sure people would come up with much better ones than those. I wouldn't want the thread so much to post in as to read.

comment by D_Malik · 2011-11-03T06:17:42.646Z · LW(p) · GW(p)

This probably wouldn't work, but has anyone tried to create strong AI by just running a really long evolution simulation? You could make it faster than our own evolution by increasing the evolutionary pressure for intelligence. Perhaps run this until you get something pretty smart, then stop the sim and try to use that 'pretty smart' thing's code, together with a friendly utility function, to make FAI? The population you evolve could be a group of programs that take a utility function as /input/, then try to maximize it. The programs which suck at maximizing their utility functions are killed off.

How big do you reckon the dumbest AI capable of fooming would be? Has anyone tried just generating random 100k-character brainfuck programs?

Replies from: dlthomas, MixedNuts, gwern
comment by dlthomas · 2011-11-03T16:53:48.694Z · LW(p) · GW(p)

Has anyone tried just generating random 100k-character brainfuck programs?

That's an awfully large search space, with highly nonlinear dynamics and a small target, and 100k characters might still not be enough to encode what we need to encode. I don't see that approach as very likely to work.
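(For scale, a quick back-of-the-envelope count of that search space, counting only Brainfuck's 8 command characters; just a sketch to make "awfully large" concrete.)

```python
# Number of 100,000-character Brainfuck programs over the 8-command alphabet:
# 8**100000, i.e. roughly 10**90309.
import math

digits = 100_000 * math.log10(8)
print(f"about 10^{digits:.0f} candidate programs")  # about 10^90309
```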

comment by MixedNuts · 2011-11-03T16:30:34.025Z · LW(p) · GW(p)

It's unlikely we'd ever generate something smart enough to be worth keeping yet dumb enough not to kill us. Also, where do you get your friendly utility function from?

comment by gwern · 2011-11-21T16:42:16.641Z · LW(p) · GW(p)

Has anyone tried just generating random 100k-character brainfuck programs?

There's no way that is going to work; think of how many possible 100k-character Brainfuck programs there are. Brainfuck does have the nice characteristic that each program is syntactically valid, but then you have the problem of running them, which is very resource-intensive (you would expect AI to be slow, so you need very large time-outs, which means you test very few programs per time interval). Speaking of Brainfuck: http://www.vetta.org/2011/11/aiq/