Open Thread: January 2010

post by Kaj_Sotala · 2010-01-01T17:02:39.373Z · score: 5 (10 votes) · LW · GW · Legacy · 761 comments

And happy new year to everyone.

Comments sorted by top scores.

comment by Vladimir_Nesov · 2010-01-02T00:40:33.018Z · score: 16 (18 votes) · LW · GW

The Guardian published a piece citing Less Wrong:

The number's up by Oliver Burkeman

When it comes to visualising huge sums – the distance to the moon, say, or the hole the economy is in – we're pretty useless really

comment by RichardKennaway · 2010-01-02T14:33:17.829Z · score: 2 (2 votes) · LW · GW

Here's a nice visualisation of some big numbers.

comment by MichaelGR · 2010-01-19T19:49:36.604Z · score: 1 (1 votes) · LW · GW

Should we compile a list of all media mentions of LW on a page in the Wiki?

Could be useful someday.

comment by Vladimir_Nesov · 2010-01-20T00:02:18.327Z · score: 1 (1 votes) · LW · GW

Only if there is enough enthusiasm to keep such a page current and representative. Otherwise it'll die after your initial write-up.

comment by PhilGoetz · 2010-01-07T05:09:04.641Z · score: 15 (15 votes) · LW · GW

I heard an interview on NPR with a surgeon who asked other surgeons to use checklists in their operating rooms. Most didn't want to. He convinced some to try them out anyway.

(If you're like me, at this point you need time to get over your shock that surgeons don't use checklists. I mean, it's not like they're doing something serious, like flying a plane or extracting a protein, right?)

After trying them out, 80% said they would like to continue to use checklists. 20% said they still didn't want to use checklists.

So he asked them: if they had surgery, would they want their surgeon to use a checklist? 94% said they would.

comment by Vladimir_Nesov · 2010-01-07T16:41:21.683Z · score: 5 (5 votes) · LW · GW

Link: Checklists (previously discussed on LW).

comment by roland · 2010-01-16T17:31:49.444Z · score: 0 (0 votes) · LW · GW

After reading the original link I have this to comment:

The interesting part is that the surgeon who did the study didn't himself expect the checklist to make any difference and was resisting its use. But after starting to use it himself he noticed a massive improvement in his results.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-01T20:53:01.962Z · score: 14 (16 votes) · LW · GW

Recent observations on the art of writing fiction:

  1. My main characters in failed/incomplete/unsatisfactory stories are surprisingly reactive, that is, driven by events around them rather than by their own impulses. I think this may be related to the fundamental attribution error: we see ourselves as reacting naturally to the environment, but others as driven by innate impulses. Unfortunately this doesn't work for storytelling at all! It means my viewpoint character ends up as a ping-pong ball in a world of strong, driven other characters. (If you don't see this error in my published fiction, it's because I don't publish unsuccessful stories.)

  2. Closely related to the above is another recent observation: My main character has to be sympathetic, in the sense of having motivations that I can respect enough to write them properly. Even if they're mistaken, I have to be able to respect the reasons for their mistakes. Otherwise my viewpoint automatically shifts to the characters around them, and once again the non-protagonist ends up stronger than the protagonist.

  3. Just as it's necessary to learn to make things worse for your characters, rather than following the natural impulse to make things better, it's also necessary to learn to deepen mysteries rather than following the natural impulse to explain them right away.

  4. Early problems in a story have to echo the final resolution.

  5. This isn't really about my own work, but I've been reading some fanfiction lately and it just bugs the living daylights out of me. I hereby dub this the First Law of Fanfiction: Every change which strengthens the protagonists requires a corresponding worsening of their challenges. Or in plainer language, You can't make Frodo a Jedi without giving Sauron the Death Star. There are stories out there with correctly spelled words, and even good prose, which are failing because they ignore this one simple principle. If I could put this up on a banner on all the authors' pages of Fanfiction.Net, I would do so.

comment by CronoDAS · 2010-01-02T08:58:48.323Z · score: 4 (4 votes) · LW · GW

My main characters in failed/incomplete/unsatisfactory stories are surprisingly reactive, that is, driven by events around them rather than by their own impulses.

That's not uncommon. Villains act, heroes react.

I hereby dub this the First Law of Fanfiction: Every change which strengthens the protagonists requires a corresponding worsening of their challenges. Or in plainer language, You can't make Frodo a Jedi without giving Sauron the Death Star.

It's already called The Law of Bruce, but it's stated a little differently.

comment by wedrifid · 2010-01-02T09:15:13.743Z · score: 3 (3 votes) · LW · GW

I noticed where I was while on the first page this time. Begone with you!

comment by Technologos · 2010-01-09T04:34:06.898Z · score: 0 (0 votes) · LW · GW

That's not uncommon. Villains act, heroes react.

I interpreted Eliezer as saying that that was a cause of the stories' failure or unsatisfactory nature, attributing this to our desire to feel like decisions come from within even when driven by external forces.

comment by Alicorn · 2010-01-29T20:30:44.173Z · score: 12 (14 votes) · LW · GW

"Former Christian Apologizes For Being Such A Huge Shit Head All Those Years" sounds like an Onion article, but it isn't. What's impressive is not only the fact that she wrote up this apology publicly, but that she seems to have done it within a few weeks of becoming an atheist after a lifetime of Christianity, and in front of an audience that has since sent her so much hate mail she's stopped reading anything in her inbox that's not clearly marked as being on another topic.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-29T22:51:23.341Z · score: 4 (4 votes) · LW · GW

This woman is a model unto the entire human species.

comment by Unknowns · 2010-01-30T09:05:16.755Z · score: 1 (1 votes) · LW · GW

It isn't that impressive to me. As far as I can see, what it shows is that she has been torturing herself for a long time, probably many years, over her issues with Christianity. She's just expressing her anger with the suffering it caused her.

comment by RobinZ · 2010-01-29T22:03:26.273Z · score: 1 (1 votes) · LW · GW

Thank you for posting that. It's an inspiration.

comment by Paul Crowley (ciphergoth) · 2010-01-30T08:56:09.065Z · score: 0 (2 votes) · LW · GW

I wish it were possible to mail her and tell her she doesn't have to apologise!

comment by Erebus · 2010-01-05T10:45:18.975Z · score: 11 (11 votes) · LW · GW

Inspired by reading this blog for quite some time, I started reading E.T. Jaynes' Probability Theory. I've read most of the book by now, and I have incredibly mixed feelings about it.

On one hand, the development of probability calculus starting from the needs of plausible inference seems very appealing as far as the needs of statistics, applied science and inferential reasoning in general are concerned. The Bayesian viewpoint of (applied) probability is developed with such elegance and clarity that alternative interpretations can hardly be considered appealing next to it.

On the other hand, the book is very painful reading for the pure mathematician. The repeated pontification about how wrong mathematicians are for desiring rigor and generality is strange, distracting and useless. What could possibly be wrong about the desire to make the steps and assumptions of deductive reasoning as clear and explicit as possible? Contrary to what Jaynes says or at least very strongly implies (in Appendix B and elsewhere), clarity and explicitness of mathematical arguments are not opposites or mutually contradictory; in my experience, they are complementary.

Even worse, Jaynes makes several strong claims about mathematics that seem to admit no favorable interpretation: they are simply wrong. All of the "paradoxes" surrounding the concepts of infinity he gives in Chapter 15 (*) are so fundamentally flawed that even a passing familiarity with what measure theory actually says dispels them as mere word-plays caused by fuzzy or shifting definitions, or simply erroneous applications of the theory. Intuitionism and other finitist positions are certainly consistent philosophical positions, but they aren't made appealing by advocates like Jaynes who claim to find errors in standard mathematics while simply misunderstanding what the standard theory says.

Also, Jaynes' claims about mathematics that I know to be wrong make it very difficult to take him seriously when he goes into rant mode about other things I know less about (such as "orthodox" statistics or thermodynamics).

I'm extremely frustrated by the book, but I still find it valuable. But I definitely wouldn't recommend it to anyone who didn't know enough mathematics to correct Jaynes' errors in the "paradoxes" he gives. So... why haven't I seen qualifications, disclaimers or warnings in recommendations of the book here? Are the matters concerning pure mathematics just not considered important by those recommending the book here?

(*) I admit I only glanced at the longer ones, "tumbling tetrahedron" and the "marginalization paradox". They seemed to be more about the interpretation of probability than about supposed problems with the concepts of infinity; and given how Jaynes misunderstands and/or misrepresents the mathematical theories of measure and infinities in general elsewhere in the book, I wouldn't expect them to contain any real problems with mathematics anyway.

comment by komponisto · 2010-01-05T12:18:56.488Z · score: 3 (3 votes) · LW · GW

Amen. Amen-issimo.

The solution, of course, is for the Bayesian view to become widespread enough that it doesn't end up identified particularly with Jaynes. The parts of Jaynes that are correct -- the important parts -- should be said by many other people in many other places, so that Jaynes can eventually be regarded as a brilliant eccentric who just by historical accident happened to be among the first to say these things.

There's no reason that David Hilbert shouldn't have been a Bayesian. None.

comment by orthonormal · 2010-01-03T05:39:31.686Z · score: 10 (10 votes) · LW · GW

After pondering the adefinitemaybe case for a bit, I can't shake the feeling that we really screwed this one up in a systematic way, that Less Wrong's structure might be turning potential contributors off (or turning them into trolls). I have a few ideas for fixes, and I'll post them as replies to this comment.

Essentially, what it looks like to me is that adefmay checked out a few recent articles, was intrigued, and posted something they thought clever and provocative (as well as true). Now, there were two problems with adefmay's comment: first, they had an idea of the meaning of "evidence" that rules out almost everything short of a mathematical proof, and secondly, the comment looked like something that a troll could have written in bad faith.

But what happened next is crucial, it seems to me. A bunch of us downvoted the comment or (including me) wrote replies that look pretty dismissive and brusque. Thus adefmay immediately felt attacked from all sides, with nobody forming a substantive and calm reply (at best, we sent links to pages whose relevance was clear to us but not to adefmay). Is it any wonder that they weren't willing to reconsider their definition of evidence, and that they started relishing their assigned role?

It might be too late now to salvage this particular situation, but the general problem needs to be addressed. When somebody with rationalist potential first signs up for an account, I think the chances of this situation recurring are way too high if they just jump right into a current thread as seems natural, because we seem like people who talk in special jargon and dismiss the obvious counterarguments for obscure reasons. It's not clear from the outset that there are good reasons for the things we take for granted, or that we're answering in shorthand because the Big Idea the new person just presented is fully answered within an old argument we've had.

comment by orthonormal · 2010-01-03T06:00:54.714Z · score: 7 (7 votes) · LW · GW

Partial Fix #2:

I can't help but think that some people might have hesitated to downvote adefmay's first comment, or might have replied at greater length with a more positive tone, had it been obvious that this was in fact adefmay's first post. (I did realize this, but replied in a comically insulting fashion anyhow. Mea culpa.)

It might be helpful if there were some visible sign that, for instance, this was among the first 20 comments from an account.

comment by Jack · 2010-01-03T06:25:58.276Z · score: 4 (4 votes) · LW · GW

When it became clear that adefmay couldn't roll with the punches there were quite a few sensitive comments with good advice and explanations for why he/she had been sent links. His/her response to those was basically to get rude, indignant and come up with as many counter-arguments as possible while not once trying to understand someone else's position or consider the possibility he/she was mistaken about something.

I don't know if adefmay was intentionally trolling but he/she was certainly deficient in rationalist virtue.

That said, I think we need to handle newcomers better anyway and an FAQ section is really important. I'd help with it.

comment by orthonormal · 2010-01-03T07:49:10.550Z · score: 5 (5 votes) · LW · GW

It seems plausible that things could have turned out much differently, but that the initial response did irreparable damage to the conversation. Perhaps putting adefmay on the defensive so soon made it implicitly about status and not losing face. Or perhaps the exchange fell into a pattern where acting the troll started to feel too good.

Overall, I didn't find adefmay's tone and obstinacy at the start to be worse than some comments (elsewhere) by people who I consider valuable members of Less Wrong.

comment by RichardKennaway · 2010-01-03T10:58:46.064Z · score: 0 (0 votes) · LW · GW

There have been several newcomers in the last few days -- maybe the mention in the Guardian drew them here.

Besides telling them what we're all about, a standing invitation for newcomers to introduce themselves might be useful, but there isn't a place for them to do so. How about another standard monthly thread?

We don't have personal profile pages here, do we?

comment by Jack · 2010-01-03T11:02:38.873Z · score: 3 (3 votes) · LW · GW

There is this thread. But it needs to be linked to from some kind of FAQ page, because right now it is too hidden from new users to be helpful.

comment by MatthewB · 2010-01-03T13:01:34.787Z · score: 1 (1 votes) · LW · GW

I just noticed that I showed up around the same time as the Guardian mention as well... However, I have been lurking (without registering) for two years now. I met Eliezer Yudkowsky at the first Singularity Summit, and became aware of OB as a result, and then became aware of this blog shortly after he split from OB.

However, I would like to say that a newcomers section in a FAQ or Wiki would have been most welcome.

I do have a little bit of a clue what I am doing here as well, as I have spent a lot of time on forums such as Richard Dawkins' and Sam Harris' and decided that I wanted to find some people who were a) more into AI and rational reasoning and b) closer to home.

I would second the suggestion for an introductory thread. And some better guidelines for posting (what is likely to get downvoted, what is likely to get upvoted... although, from my vote count, I seem to have some clue of what works and what doesn't. Still, I could use a few more definitive guidelines than just not making stupid posts - or trollish posts).

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-03T05:54:42.286Z · score: 4 (4 votes) · LW · GW

I'd have to say that the trollness seems obvious as all hell to me. Also, consider the prior probabilities.

comment by orthonormal · 2010-01-03T06:18:34.695Z · score: 1 (1 votes) · LW · GW

I may be giving adefmay the benefit of the doubt due to an overactive conscience; I go back and forth on this particular case. Still, it seems to me that being new here can involve a lot of early perceived hostility (people who've joined the community more recently, feel free to support or correct this claim), that we may well be losing LW contributors for this reason, and that some relatively easy fixes might do a lot of good.

comment by Nick_Tarleton · 2010-01-03T06:08:40.661Z · score: 1 (1 votes) · LW · GW

I'd have to say that the trollness seems obvious as all hell to me.

Me too. Obvious from his second comment on, even. (Or, if not a troll, not going to become a valued contributor without some growing up.)

comment by MatthewB · 2010-01-03T06:09:19.487Z · score: 0 (0 votes) · LW · GW

Seeing as I missed that whole thing, and I am interested in how to best define evidence (I need such a definition for other forums, probably more than I would need it here)... Could someone post those same links about the definition (or, I see the word "Meaning" used... Why is that???) of Evidence?

Never mind... It's in the Wiki...

comment by Nick_Tarleton · 2010-01-03T06:14:32.917Z · score: 0 (0 votes) · LW · GW

from the wiki

comment by orthonormal · 2010-01-03T05:54:47.550Z · score: 3 (5 votes) · LW · GW

Partial Fix #1:

We put together a special forum (subset of threads and posts) for a number of old argument topics, and make sure that it is readily accessible from the main page, or especially salient for new people. We have a norm there to (as much as possible) write out our points from scratch instead of using shorthand and links as we do in discussions between LW veterans.

Benefits:

  • It's much less of a status threat to be told that one's comment belongs in another thread than to have it dismissed as happened to adefmay.

  • Most of the trouble seems to happen when new people jump into a current thread and derail a conversation between LW veterans, who react brusquely as above. Separating the newest/most advanced conversations from the old objections should make everyone happier.

  • I find that the people who have been on LW for a few months have just the right kind of zeal for these newfound ideas that makes them eager and able to defend them against the newest people, who find them absurd. I think this would be a good thing for both groups of people, and I expect it to happen naturally should such a place be created.

So if we made some collection of "FAQ threads" and made a big, obvious, enticing link to them on either the front page or the account creation page (that is, we give them a list of counterintuitive things we believe or interesting questions we've tackled, in the hopes they head there first), we might avoid more of these unfortunate calamities in the future.

comment by Jack · 2010-01-03T07:38:18.973Z · score: 16 (16 votes) · LW · GW

I'm not sure there needs to be more than one FAQ thread. But let's start by generating a list of frequently asked questions and coming up with answers that have consensus support.

  • Why is almost everyone here an atheist?
  • What are the "points" on each comment?
  • Aren't knowledge and truth subjective or undefinable?
  • Can you ever really prove anything?
  • What's all this talk about probabilities and what is a Bayesian?
  • Why do you all agree on so much? Am I joining a cult?
  • What are the moderation rules? What kind of comments will result in downvotes and what kind of comments could result in a ban?
  • Who are you people? (Demographics, and a statement to the effect that demographics don't matter here.)

What else? Anyone have drafts of answers?

comment by orthonormal · 2010-01-03T18:19:51.237Z · score: 3 (3 votes) · LW · GW

More FAQ topics:

  • Why the MWI?
  • Why do you all think cryonics will probably work?
  • Why a computational theory of mind?
  • What about free will and consciousness?
  • What do you mean by "morality", anyway?
  • Wait a sec. Torture over dust specks?!?

Basically, I think we need to do more for newcomers than just tell them to read a sequence; I mean, I think each of us had to actually argue out points we thought were obvious before we moved forward on these issues. Having a continuous open thread on such topics (including, of course, links to the relevant posts or Wiki entry) would be much better, IMO.

A monthly "Old Topics" thread, or a collection of them on various topics, would be great, although there ought to be a really obvious link directing people to it.

comment by Jack · 2010-01-03T22:13:49.683Z · score: 1 (1 votes) · LW · GW

While I'm not saying there shouldn't be a place to discuss those topics I think the first thing a newcomer sees should focus on epistemology, rationality and community norms of rationality.

1) This is still presumably what this site is about.

2) Once you get the right attitude and the right approach the other subjects don't require patient explanation. A place to discuss those things is fine, but if the issue comes up elsewhere and a veteran does respond brusquely to a newcomer they can probably deal with it if they have internalized less wrong norms, traditional rationality and some of the Bayesian type stuff we do here.

3) There seems to be near universal agreement on the rationality stuff but I'm not sure that is the case with the other issues. I know I agree with the typical LW position on the first four of your questions, but I disagree on the last two. I suspect most people here don't think cryonics will probably work (just that it working is likely enough to justify the cost). There are probably some determinists mixed in with a lot of compatibilists, and there are definitely dissenters on the theory of mind stuff (I'm thinking of Michael Porter, who otherwise appears to be a totally reasonable Less Wrong member). Check the survey results for more evidence of dissent. That there is still disagreement on these issues is reason to keep discussing them. But I don't know if we should present the majority views on all these issues as resolved to new users.

But I might just be privileging my own minority views. If the community wants these included I won't object.

comment by orthonormal · 2010-01-04T05:10:20.367Z · score: 2 (2 votes) · LW · GW

Good points, but I still think that these questions belong in some kind of "Old Topics" thread, because there's already been a lot said about them, and because most new people will want to argue them anyway. Even if they're not considered to be settled or to be conditions that define LW, I'd prefer if there's a place for new people to start discussing them other than 2-year-old threads or tangential references in new posts.

comment by dfranke · 2010-01-01T18:27:10.246Z · score: 10 (10 votes) · LW · GW

In one of the dorkier moments of my existence, I've written a poem about the Great Filter. I originally intended to write music for this, but I've gone a few months now without inspiration, so I think I'll just post the poem to stand by itself and for y'all to rip apart.

The dire floor of Earth afore
saw once a fortuitous spark.
Life's swift flame sundry creature leased
and then one age a freakish beast
awakened from the dark.

Boundless skies beheld his eyes
and strident through the void he cried;
set his devices into space;
scryed for signs of a yonder race;
but desolate hush replied.

Stars surround and worlds abound,
the spheres too numerous to name.
Yet still no creature yet attains
to seize this lot, so each remains
raw hell or barren plain.

What daunting pale do most 'fore fail?
Be the test later or done?
Those dooms forgone our lives attest
themselves impel from first inquest:
cogito ergo sum.

Man does boast a charmèd post,
to wield the blade of reason pure.
But if this prov'ence be not rare,
then augurs fate our morrow bare,
our fleeting days obscure.

But might we nigh such odds defy,
and see before us cosmos bend?
Toward the heavens thy mind set,
and waver not: this proof, till 'yet,
did ne'er with man contend!

Suggested tweaks are welcome. Things that I'm currently unhappy with are that "fortuitous" scans awkwardly, and the skies/eyes rhyme feels clichéd.

comment by pjeby · 2010-01-03T05:39:26.614Z · score: 3 (3 votes) · LW · GW

I'll just post the poem to stand by itself and for y'all to rip apart.

It reminds me of something that happened in college, where a poem of mine was being put in some sort of collection; there was a typo in it, and I mentioned a correction to the professor. He nodded wisely, and said, "yes, that would keep it to iambic pentameter."

And I said, "iambic who what now?"... or words to that effect.

And then I discovered the wonderful world of meter. ;-)

Your poem is trying to be in iambic tetrameter (four iambs - "dit dah" stress patterns), but it's missing the boat in a lot of places. Iambic tetrameter also doesn't lend itself to sounding serious; you can write something serious in it, sure, but it'll always have kind of a childish singsong-y sort of feel, so you have to know how to counter it.

Before I grokked this meter stuff, I just randomly tried to make things sound right, which is what your poem appears to be doing. If you actually know what meter you're trying for, it's a LOT easier to find the right words, because they will be words that naturally hit the beat. Ideally, you should be able to read your poem in a complete monotone and STILL hear the rhythmic beating of the dit's and dah's... you could probably write a morse code message if you wanted to. ;-)

Anyway, you will probably find it a lot easier to fix the problems with the poem's rhythm if you know what rhythm you are trying to create. Enjoy!

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-03T05:55:17.812Z · score: 2 (2 votes) · LW · GW

For those who still read books, recommend "The Poem's Heartbeat".

comment by dfranke · 2010-01-03T06:01:11.363Z · score: 1 (1 votes) · LW · GW

Yes, I'm well aware of what iambic tetrameter is and that the poem generally conforms to it :-). The intended meter isn't quite that simple though. The final verse of each stanza is only three feet, and the first foot of the third verse of each stanza is a spondee. Verses are headless where necessary.

There's also an inverted foot in "Be the test later or done?", but I'm leaving that in even though I could easily substitute "ahead" for "later". Despite breaking the meter, it sounds better as-is.

comment by pjeby · 2010-01-03T08:33:07.098Z · score: 0 (0 votes) · LW · GW

The intended meter isn't quite that simple though.

Fair enough. I found other aspects of the poem so awkward, though, that I never actually finished any one full stanza without wincing. The rhythm seemed like the one thing I could offer a semi-objective opinion on, and I figured that maybe some of the other things that were bothering me were a result of you trying to fit a meter without conscious awareness of what meter you were trying to fit.

comment by rwallace · 2010-01-02T10:28:20.696Z · score: 0 (0 votes) · LW · GW

I think it works very well as is. Upvoted.

Edit: but perhaps 'wondrous' for 'fortuitous'?

comment by Wei_Dai · 2010-01-11T22:23:45.072Z · score: 8 (8 votes) · LW · GW

I rewatched 12 Monkeys last week (because my wife was going through a Brad Pitt phase, although I think this movie cured her of that :), in which Bruce Willis plays a time traveler who accidentally got locked up in a mental hospital. The reason I mention it here is because it contained an amusing example of mutual belief updating: Bruce Willis's character became convinced that he really is insane and needs psychiatric care, while simultaneously his psychiatrist became convinced that he actually is a time traveler and she should help him save the world.

Perhaps the movie also illustrates a danger of majoritarianism: if someone really found a secret that could save the world, it would be tragic if he allowed himself to be convinced otherwise due to majoritarian considerations. Don't most (nearly all?) true beliefs start out as minority views?
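
The swap in the film can be caricatured numerically. A toy model of my own construction (not proper Bayesian updating, just naive over-deference): each party replaces their credence with a trust-weighted average that leans heavily on the other's opinion.

```python
# Toy model of mutual over-deference: each agent's new credence is a
# weighted average leaning heavily on the other agent's old credence.

def update(p_self, p_other, trust=0.9):
    """Return a new credence: mostly the other's view, a little of one's own."""
    return (1 - trust) * p_self + trust * p_other

willis = 0.95   # his credence that he really is a time traveler
doctor = 0.05   # his psychiatrist's credence in the same claim

# Both update simultaneously from each other's old credence:
willis, doctor = update(willis, doctor), update(doctor, willis)
print(round(willis, 2), round(doctor, 2))   # 0.14 0.86 -- they swap sides
```

With enough deference, each side ends up roughly where the other began, which is the film's joke; genuinely rational mutual updating would instead converge on a shared estimate rather than trading places.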

comment by MichaelGR · 2010-01-19T16:19:45.577Z · score: 2 (2 votes) · LW · GW

The movie is also a good example of existential risk in fiction (in this case, a genetically engineered biological agent).

comment by HalFinney · 2010-01-14T23:01:48.144Z · score: 0 (0 votes) · LW · GW

I agree about the majoritarianism problem. We should pay people to adopt and advocate independent views, to their own detriment. Less ethically, we could encourage people to think for themselves, so we can free-ride on the costs they experience.

comment by Wei_Dai · 2010-01-15T19:25:21.458Z · score: 1 (1 votes) · LW · GW

We should pay people to adopt and advocate independent views, to their own detriment.

I guess we already do something like that, namely reward people with status for being inventors or early adopters of ideas (think Darwin and Huxley) that eventually turn out to be accepted by the majority.

comment by komponisto · 2010-01-05T12:03:25.943Z · score: 8 (8 votes) · LW · GW

Okay, so....a confession.

In a fairly recent little-noticed comment, I let slip that I differ from many folks here in what some may regard as an important way: I was not raised on science fiction.

I'll be more specific here: I think I've seen one of the Star Wars films (the one about the kid who apparently grows up to become the villain in the other films). I have enough cursory familiarity with the Star Trek franchise to be able to use phrases like "Spock bias" and make the occasional reference to the Starship Enterprise (except I later found out that the reference in that post was wrong, since the Enterprise is actually supposed to travel faster than light -- oops), but little more. I recall having enjoyed the "Tripod" series, and maybe one or two other, similar books, when they were read aloud to me in elementary school. And of course I like Yudkowsky's parables, including "Three Worlds Collide", as much as the next LW reader.

But that's about the extent of my personal acquaintance with the genre.

Now, people keep telling me that I should read more science fiction; in fact, they're often quite surprised that I haven't. So maybe, while we're doing these New Year's Resolutions, I can "resolve" to perhaps, maybe, some time, actually do that (if I can ever manage to squeeze it in between actually doing work and procrastinating on the Internet).

Problem is, there seems to be a lot of it out there. How would a newcomer know where to start?

Well, what better place to ask than here, a place where many would cite this type of literature as formative with respect to developing their saner-and-more-interesting-than-average worldviews?

Alicorn recommended John Scalzi (thanks). What say others?

comment by Vladimir_Nesov · 2010-01-05T21:14:08.114Z · score: 8 (8 votes) · LW · GW

Greg Egan: Permutation City, Diaspora, Incandescence.
Vernor Vinge: True Names, Rainbows End.
Charlie Stross: Accelerando.
Scott Bakker: Prince of Nothing series.

comment by jscn · 2010-01-06T23:08:05.306Z · score: 3 (3 votes) · LW · GW

Voted up mainly for the Greg Egan recommendations.

comment by djcb · 2010-01-09T15:43:44.700Z · score: 0 (0 votes) · LW · GW

I read Vinge's Rainbows End, and I found the futurism interesting (it seems Google is starting to work on the book scanning stuff), but I couldn't really get into the story.

(edit: fixed typo, thanks)

comment by RobinZ · 2010-01-09T15:58:57.480Z · score: 0 (0 votes) · LW · GW

Rainbows End, but I agree.

comment by Paul Crowley (ciphergoth) · 2010-01-05T15:22:59.912Z · score: 6 (6 votes) · LW · GW

My first recommendation here is always Iain M Banks, Player of Games.

comment by MichaelGR · 2010-01-19T16:11:01.715Z · score: 0 (0 votes) · LW · GW

Personally, I'd recommend starting with Consider Phlebas, then Use of Weapons, then Player of Games.

comment by AllanCrossman · 2010-01-18T22:35:19.078Z · score: 0 (0 votes) · LW · GW

Why that Culture novel, precisely? I don't recall it as one of the better ones.

Admittedly, I'm unusual in that my favourite Culture story is The State of the Art. General Pinochet Chili Con Carne! Richard Nixon Burgers! What's not to like?

comment by Paul Crowley (ciphergoth) · 2010-01-18T22:37:42.471Z · score: 1 (1 votes) · LW · GW

It's one of my favourites, and I also think it's a good one to start with. But so is The State of the Art. My favourite by him is Feersum Endjinn.

comment by Alicorn · 2010-01-05T14:39:46.255Z · score: 6 (6 votes) · LW · GW

If you'd like some TV recommendations as well, here are some things that you can find on Hulu:

Firefly. It's not all available at the same time, but they rotate the episodes once a week; in a while you'll be able to start at the beginning. If you haven't already seen the movie, put it off until you've watched the whole series.

Babylon 5. First two seasons are all there. It takes a few episodes to hit its stride.

If you're willing to search a little farther afield, Farscape is good, and of the Star Treks, DS9 is my favorite (many people prefer TNG, though, and this seems for some reason to be correlated with gender).

comment by ShardPhoenix · 2010-01-07T03:08:25.593Z · score: 2 (2 votes) · LW · GW

If you're willing to search a little farther afield, Farscape is good, and of the Star Treks, DS9 is my favorite (many people prefer TNG, though, and this seems for some reason to be correlated with gender).

Maybe that's because DS9 is about a bunch of people living in a big house, while TNG is about a bunch of people sailing around in a big boat ;). I prefer DS9 myself though and I'm a guy.

comment by randallsquared · 2010-01-06T03:35:38.651Z · score: 1 (1 votes) · LW · GW

With respect to B5, I'd say "a few episodes" is the entire first season and a quarter of the second. I don't regret having spent the time to watch that, but I'm not sure I would have bothered had I not had friends raving about it, knowing in advance what I know now. :)

comment by Jack · 2010-01-06T17:14:24.716Z · score: 0 (0 votes) · LW · GW

Does Jericho count as sci-fi? Either way, I highly recommend it.

Who will be the first person to recommend Lexx? :-)

comment by MrHen · 2010-01-06T16:54:06.472Z · score: 0 (0 votes) · LW · GW

You can probably find someone who has the Firefly discs, too.

comment by MatthewB · 2010-01-06T07:29:50.328Z · score: 0 (2 votes) · LW · GW

I was not at all impressed with Firefly. Its idioms for the more primitive cultures were too primitive (dresses from the 1800s???). Its premise was awesome, but due to the mainstream audience, the writers were very constrained. Had it been done as an anime, I imagine it would have looked far more like Trigun.

Now, Farscape. This was a re-telling of the Buck Rogers story, and it was done Freaking Well! They did not focus overly much on the technologies, which were mostly post-Singularity (as were many of the alien species), but due to the collapse of the civilization that supported that portion of the Galaxy, the Peacekeepers had become a force for malevolence and dystopic vision rather than the force for good they began as.

I have never been able to enjoy Star Trek in any of its incarnations past TOS. The lack of obvious applications of many of the technologies, and the strict adherence to a dualist New-Age philosophy of consciousness, really kept me away from the show. They occasionally had some excellent episodes, but overall I found their lack of general AI, given the supposed power of many of their computers, and their lack of nanotech-based technologies (given the absolute necessity of nanotech for some of the technologies they do show) just appalling. The medical technologies were also rather wonky. If they can regrow bone, and they can regrow nerves, and they can regrow skin, and they can regrow muscles... why can't they regrow entire limbs?

Also, the silly rationale for why there were not more technologies like Geordi's eyes really made no sense.

IMO, the absolute best sci-fi TV series in recent years has been the new BSG, along with the upcoming Caprica, which will tackle a civilization as it approaches its own singularity and then fails to make it through the event horizon: not due to having created unfriendly AI, but due to having its AI corrupted by a psychopathic religious girl who manages to inhabit that AI. It should be excellent.

comment by Jack · 2010-01-06T17:22:13.870Z · score: 2 (2 votes) · LW · GW

They did not focus overly much on the technologies, which were mostly post-Singularity

Was this ever said or shown in an episode? It seems like a cop out to just assume magical technology is post-Singularity without it being in the back story.

the strict adherence to a dualist New-Age philosophy of consciousness really kept me away from the show.

Wasn't there a consciousness swapping episode of Farscape? Also, what about the Data is basically a person trope in TNG? I agree that Star Trek technology doesn't make a lot of sense, though.

Given your high standards, shouldn't the fact that the Cylons were never much more intelligent than humans bother you?

comment by MatthewB · 2010-01-07T04:06:25.426Z · score: 0 (2 votes) · LW · GW

Was this ever said or shown in an episode? It seems like a cop out to just assume magical technology is post-Singularity without it being in the back story.

A society, or group of societies, needn't have the concept of the singularity in order for one to have occurred. Farscape had some very obvious technologies (mostly medical) which were very highly advanced nanotech, and there were elements of AI. Most of the theme of the show, though, was that they were living in a fallen society, one which had once passed through a Singularity (at least in parts of the interstellar civilization) yet had fallen back below it, with these magical items being carefully guarded and little understanding left of how they worked. That did bother me a little, but since the story was driven by the plot and some of the characters, and they rarely sank into techno-babble, it was easier to overlook.

There was a consciousness-swapping episode of Farscape. It was not one of my favorite episodes of the show. As for Star Trek and Data... that was something that I hated. If they had the type of imaging technology that they claimed to have in their medical and scanning technologies, creating more Datas should have been the easiest thing in the world. And Data should have known that there was no more to him than the patterns in his "Positronic Matrix", and that if he was taken apart, all that would be necessary was a back-up of this matrix... Of course, just by fiat they claimed that this was impossible.

And... as to BSG: it did bother me that the Cylons (the human ones) were never much more intelligent than humans, until it was explained why (the episode where Cavil has his screaming fit at Ellen Tigh, where he screams at her, "I am just a machine... I want to see gamma-rays, smell x-rays, hear radio waves, touch the solar wind and taste dark matter. Yet you gave me this arthritic old body and these failing eyes to look at the wonders of the universe").

It was explained that Cavil, in his jealousy of the Final Five, who had arrived from the original Earth in the final months of the Cylon War with the colonies, had managed to trap the Five (long after the end of the Cylon War), suppress their original memories and knowledge (which were vastly greater than those of either man or the existing Cylons), and replace them with false memories and knowledge. Cavil then placed them in the colonies to await the final destruction of the colonies, to live among the humans (and discover how much they deserved annihilation), only to have his plans thwarted when the extermination did not go as planned.

When Ellen is resurrected after Saul kills her on New Caprica, she regains her old memories (and knowledge), yet does not share them with Cavil (or the other Cylons) because of Cavil's betrayal of her (and the Final Five's) values, and because Cavil killed Daniel, who was the most successful and advanced of the 13 models of Cylons. (Yes, there were 13 models, not 12. Cavil completely destroyed Daniel in a fit of jealousy over Daniel's incredible brilliance and talent.)

Lastly, the remaining Cylons were more intelligent than the humans; only Baltar came close to their level of intelligence. Their technology was higher than the humans' as well. It was not significantly greater because the Final Five refused to give the remaining seven much of their technical knowledge, on account of Cavil's pride and his desire to exterminate mankind for what were sins that should have been forgiven.

comment by Kevin · 2010-01-05T12:31:58.585Z · score: 6 (6 votes) · LW · GW

I am a big fan of Isaac Asimov. Start with his best short story, which I submit as the best sci-fi short story of all time. http://www.multivax.com/last_question.html

comment by Bindbreaker · 2010-01-05T12:52:58.969Z · score: 6 (8 votes) · LW · GW

I prefer this one, and yes, it really is that short.

comment by Kevin · 2010-01-05T14:02:30.603Z · score: 0 (0 votes) · LW · GW

Thanks. Brown wrote that in 1954, two years before Asimov wrote The Last Question. Do you think Asimov read Brown's story?

comment by Technologos · 2010-01-05T16:23:18.368Z · score: 0 (0 votes) · LW · GW

Asimov thought it was his best story, too (or at least his favorite). Can't say I disagree.

comment by komponisto · 2010-01-05T12:47:45.623Z · score: 0 (0 votes) · LW · GW

Ah yes, CronoDAS recommended that, too. (Sorry, I should have acknowledged!)

comment by Jack · 2010-01-06T20:34:24.462Z · score: 2 (2 votes) · LW · GW

Oh! More Asimov, "I, Robot". Here the guy was talking about Friendly AI in 1942.

comment by Zack_M_Davis · 2010-01-06T20:49:02.624Z · score: 4 (6 votes) · LW · GW

Here the guy was talking about Friendly AI in 1942.

Not really; they're not decision theory stories. The Three Laws are adversarial injunctions that hide huge amounts of complexity under short English words like harm. It wouldn't actually work. It didn't even work in the story.

comment by Jack · 2010-01-06T21:06:59.081Z · score: 6 (6 votes) · LW · GW

The whole point of the stories is that it doesn't work in the end; they are a case study in how not to do it, in how it can go wrong. Obviously he didn't solve the problem. The first digital computer had just been constructed, what would you expect?

comment by Vladimir_Nesov · 2010-01-07T13:24:29.487Z · score: 0 (0 votes) · LW · GW

Obviously he didn't solve the problem. The first digital computer had just been constructed, what would you expect?

The FAI problem has nothing to do with digital computers. It's a math problem. You'd only need digital computers after you've solved the problem, to implement the solution.

comment by Zack_M_Davis · 2010-01-06T21:26:09.918Z · score: 0 (0 votes) · LW · GW

Not that they weren't good stories, and not that I expect fiction authors to do their own basic research, but I wouldn't say they're about the Friendly AI problem.

comment by JDM · 2013-05-24T18:58:43.420Z · score: 0 (0 votes) · LW · GW

It is most certainly not an academic look at the concept, but that doesn't mean he didn't play a role in bringing the concept to the public eye. It doesn't have to be a scientific paper to have a real influence on the idea.

comment by Kevin · 2010-01-09T06:06:10.230Z · score: 1 (1 votes) · LW · GW

Along those lines, I'd recommend The Metamorphosis of Prime Intellect. It's a short-novel-length exploration of an AI that gains control of all matter and energy in the universe while being constrained by Asimov's Three Laws.

It's available free online (still under copyright). http://www.kuro5hin.org/prime-intellect/

comment by Kevin · 2010-01-05T14:00:09.147Z · score: 2 (2 votes) · LW · GW

If you want to read a full-length Asimov book, my personal recommendation is The End of Eternity. It has a rather unique take on time travel and functions well as a standalone book. It has just been reprinted after being out of print for too long.

Foundation is his most well-known novel, and it is also very much worth reading.

I can't find someone violating the copyright online with a quick Google, but Asimov's short story "The Last Answer" is also a good one with a different take on religion than "The Last Question".

comment by NancyLebovitz · 2010-01-06T01:14:23.036Z · score: 4 (4 votes) · LW · GW

Vinge's Marooned in Realtime and A Fire Upon the Deep. The former introduced the idea of the Singularity; the latter has a lot of fun playing near the edge of it.

Olaf Stapledon: Last and First Men, Star Maker.

Poul Anderson: Brain Wave. What happens if there's a drastic, sudden intelligence increase?

After you've read some science fiction, if you let us know what you've liked, I bet you'll get some more fine-tuned recommendations.

comment by Wei_Dai · 2010-01-07T00:27:18.386Z · score: 3 (3 votes) · LW · GW

I second A Fire Upon the Deep (and anything by Vinge, but A Fire Upon the Deep is my favorite). BTW, it contains what is in retrospect a clear reference to the FAI problem. See http://books.google.com/books?id=UGAKB3r0sZQC&lpg=PA400&ots=VBrKocfTHM&dq=%22fast%20burn%20transcendence%22&pg=PA400

If anyone read it for the first time recently, I'm curious what you think of the Usenet references. Those were my favorite parts of the book when I first read it.

comment by zero_call · 2010-01-08T06:00:56.777Z · score: 1 (1 votes) · LW · GW

I thought the Usenet references were really cool and really clever, both from a reader's standpoint and from an author's. For example, it doesn't take a lot of digression to explain them, since most readers are already familiar with similar stuff (e.g., Usenet). It also just seems really plausible as a form of universe-scale "telegram" communication, so I think it works great for the story. Implausibility just ruins science fiction for me; it destroys that crucial suspension of disbelief.

comment by ChristianKl · 2010-01-08T14:21:19.133Z · score: 0 (0 votes) · LW · GW

If you had tried to explain to people a hundred years ago that we would have interlinked computers and that a lot of people would use them to view images of naked females, I think most people would have found that hypothesis very implausible.

Any accurate description of the world that will exist 100 years in the future is bound to contain lots of implausible claims.

comment by zero_call · 2010-01-09T03:23:39.324Z · score: 2 (2 votes) · LW · GW

If you're suggesting that all science fiction is implausible though, then that's not true. There's a difference between coming up with random, futuristic ideas, and coming up with random, futuristic ideas that have justification for working.

comment by NancyLebovitz · 2010-04-13T02:38:13.682Z · score: 3 (3 votes) · LW · GW

It depends on what you're looking for. Books you might enjoy? If so, we need to know more about your tastes. Books we've liked? Books which have influenced us? An overview of the field?

In any case, some I've liked-- Heinlein's Rocketship Galileo which is quite a nice intro to rationality and also has Nazis in abandoned alien tunnels on the Moon, and Egan's Diaspora which is an impressive depiction of people living as computer programs.

Oh, and Vinge's A Fire Upon the Deep which is an effort to sneak up on writing about the Singularity (Vinge invented the idea of the Singularity), and Kirstein's The Steerswoman (first of a series), which has the idea of a guild of people whose job it is to answer questions-- and if you don't answer one of their questions, you don't get to ask them anything ever again.

comment by jscn · 2010-01-06T23:11:34.618Z · score: 3 (3 votes) · LW · GW
  • Solaris by Stanislaw Lem is probably one of my all time favourites.
  • Anathem by Neal Stephenson is very good.
comment by djcb · 2010-01-09T15:37:08.401Z · score: 0 (0 votes) · LW · GW

I really like Anathem (I'm about halfway through reading it); it goes into many of the themes popular around here (rationalism, MWI), except for the singularity stuff.

comment by Jawaka · 2010-01-05T14:30:16.886Z · score: 3 (3 votes) · LW · GW

I am a huge fan of Philip K. Dick. I don't usually read much fiction or even science fiction, but PKD has always fascinated me. Stanislaw Lem is also great.

comment by Dreaded_Anomaly · 2011-01-24T05:08:47.370Z · score: 2 (2 votes) · LW · GW

I second the recommendations of 1984 and Player of Games (the whole Culture series is good, but that one especially held my interest).

Recommendations I didn't see when skimming the thread:

  • The Hitchhiker's Guide to the Galaxy series by Douglas Adams: A truly enjoyable classic sci-fi series, spanning the length of the galaxy and the course of human history.
  • Timescape by Gregory Benford: Very realistic and well-written story about sending information back in time. The author is an astrophysicist, and knows his stuff.
  • The Andromeda Strain, Sphere, Timeline, Prey, and Next by Michael Crichton: These are his best sci-fi works, aimed at realism and dealing with the consequences of new technology or discovery.
  • Replay by Ken Grimwood: A man is given the chance to relive his life. A stirring tale with several twists.
  • The Commonwealth Saga and The Void Trilogy by Peter F. Hamilton: Superb space opera, in which humanity has colonized the stars via traversable wormholes, and gained immortality via rejuvenation technology. The trilogy takes place a thousand years after the saga, but with several of the same characters.
  • The Talents series and the Tower and Hive series by Anne McCaffrey: These novels deal with the emergence and organization of humans with "psychic" abilities (telekinesis, telepathy, teleportation, and so forth). The first series takes place roughly in the present day, the second far in the future on multiple planets.
  • Priscilla Hutchins series and Alex Benedict series by Jack McDevitt: Two series, unrelated, both examining how humans might explore the galaxy and what they might find (many relics of ancient civilizations, and a few alien races still living). The former takes place in the relatively near future, while the latter takes place millennia in the future.
  • Hyperion Cantos by Dan Simmons: An epic space opera dealing heavily with singularity-related concepts such as AI and human bio-modification, as well as time travel and religious conflict.
  • Otherland series by Tad Williams: In the near future, full virtual reality has been developed. The story moves through a plethora of virtual environments, many drawn from classic literature.

Edit: I have just now realized, after writing all of this out, that this is the open thread for January 2010 rather than January 2011. Oh well.

comment by JoshuaZ · 2010-08-09T21:00:49.242Z · score: 2 (2 votes) · LW · GW

I wouldn't recommend Scalzi. Much of Scalzi is military scifi with little realism, and isn't a great introduction to scifi. I'd recommend Charlie Stross: "The Atrocity Archives", "Singularity Sky" and "Halting State" are all excellent. The third is very weird in that it is written in the second person, but is lots of fun. Other good authors to start with are Pournelle and Niven (Ringworld, The Mote in God's Eye, and King David's Spaceship are all excellent).

comment by Risto_Saarelma · 2010-08-10T07:41:16.104Z · score: 2 (2 votes) · LW · GW

Am I somehow unusual for being seriously weirded out by the cultural undertones in Scalzi's Old Man's War books? I keep seeing people in generally enlightened forums gushing over his stuff, but the book read pretty nastily to me, with its mix of a very juvenile approach to science, psychology and pretty much everything it took on, and its glorification of genocidal war without alternatives. It brought up too many associations with real-world history's telling of kids who don't know better about the utter necessity of genocidal war in simple and exciting terms, and seemed too little aware of this itself to be enjoyable.

Maybe it's a Heinlein thing. Heinlein is pretty obscure here in Europe, but seems to be woven into the nostalgia trigger gene in the American SF fan DNA, and I guess Scalzi was going for something of a Heinlein pastiche.

comment by NancyLebovitz · 2010-08-10T10:16:29.770Z · score: 2 (2 votes) · LW · GW

It's nice to know that I'm not the only person who hated Old Man's War, though our reasons might be different.

It's been a while since I've read it, but I think the character who came out in favor of an infrastructure attack (was that the genocidal war?) turned out to be wrong.

What I didn't like about the book was largely that it was science fiction lite-- the world building was weak and vague, and the viewpoint character was way too trusting. I've been told that more is explained in later books, but I had no desire to read them.

There's a profoundly anti-imperialist/anti-colonialist theme in Heinlein, but most Heinlein fans don't seem to pick up on it.

comment by Risto_Saarelma · 2010-08-10T10:57:59.675Z · score: 3 (3 votes) · LW · GW

The most glaring SF-lite problem for me was that in both Old Man's War and The Ghost Brigades, the protagonist was basically written as a generic twenty-something Competent Man character, despite both books deliberately setting the protagonist up as very unusual compared to that archetype. In Old Man's War, the protagonist is a 70-year-old retiree in a retooled body, and in The Ghost Brigades something else entirely. Both of these instantly point to what I thought would have been the most interesting thing about the books: how does someone who's coming from a very different place psychologically approach stuff that's normally tackled by people in their twenties? And then pretty much nothing at all is done with this angle. Weird.

comment by NancyLebovitz · 2010-08-10T14:15:16.684Z · score: 1 (1 votes) · LW · GW

There was so much, so very much sf-lite about that book. Real military life is full of detail and jargon. OMW had something like two or three kinds of weapons.

There was the big sex scene near the beginning of the book, and then the characters pretty much forgot about sex.

It was intentionally written to be an intro to sf for people who don't usually read the stuff. Fortunately, even though the book was quite popular, that approach to writing science fiction hasn't caught on.

comment by Risto_Saarelma · 2010-08-10T13:30:07.345Z · score: 1 (1 votes) · LW · GW

Come to think of it, I had a similar problem with James P. Hogan's Voyage from Yesteryear, which was about a colony world of in vitro grown humans raised by semi-intelligent robots without adult parents. I thought this would lead to some seriously weird and interesting social psychology with the colonists, when all sorts of difficult to codify cultural layers are lost in favor of subhuman machines as parental authorities and things to aspire to.

Turned out it was just a setup to lecture on how anarchism, plus shooting people you don't like, would lead to the perfect society if it weren't for those meddling history-perpetuating traditionalists (with the colonists of course being exemplars of psychological normalcy and wholesomeness, as required by the lesson), and then I stopped reading the book.

comment by RobinZ · 2010-08-10T15:51:01.235Z · score: 0 (0 votes) · LW · GW

What I didn't like about the book was largely that it was science fiction lite-- the world building was weak and vague, and the viewpoint character was way too trusting. I've been told that more is explained in later books, but I had no desire to read them.

Nor I - I've read Agent to the Stars, which was just as bad, so I have no expectation of improvement.

comment by JoshuaZ · 2010-08-10T12:17:01.786Z · score: 0 (0 votes) · LW · GW

This isn't a Scalzi problem so much as a general problem with the military end of SF. See for example, Starship Troopers and Ender's Game. Ender's Game makes it more complicated, but there's still some definite sympathy with genocide (speciescide?).

comment by NancyLebovitz · 2010-08-10T14:08:09.312Z · score: 1 (1 votes) · LW · GW

I wonder how important what the characters say is compared to what they do-- and the importance may be in what the readers remember.

Card has an actual genocide.

In ST, Heinlein speaks in favor of crude "roll over the other guys so that your genes can survive" expansionism, but he portrays a society where racial/ethnic background doesn't matter for humans, and an ongoing war which won't necessarily end with the Bugs or the humans being wiped out.

comment by daos · 2010-01-17T17:01:01.453Z · score: 2 (2 votes) · LW · GW

many good recommendations so far, but unbelievably nobody has yet mentioned Iain M. Banks' series of 'Culture' novels, based on a humanoid society (the 'Culture') run by incredibly powerful AIs known as 'Minds'.

highly engaging books which deal with much of what a highly technologically advanced post-singularity society might be like in terms of morality, politics, philosophy etc. they are far-fetched and a lot of fun. here's the list to date:

  • Consider Phlebas (1987)
  • The Player of Games (1988)
  • Use of Weapons (1990)
  • Excession (1996)
  • Inversions (1998)
  • Look to Windward (2000)
  • Matter (2008)

they are not consecutive, so reading order isn't that important, though it is nice to follow the evolution of the writing.

comment by Paul Crowley (ciphergoth) · 2010-01-17T17:30:29.458Z · score: 0 (0 votes) · LW · GW

I mentioned "Player of Games" above.

comment by daos · 2010-01-17T18:29:27.088Z · score: 0 (0 votes) · LW · GW

duly noted. i missed it before amongst all the BSG and ST discussions.. good choice btw - i've always considered it to be one of his best.

comment by [deleted] · 2010-01-05T15:24:44.122Z · score: 2 (2 votes) · LW · GW

Lord of Light by Roger Zelazny.

Snow Crash by Neal Stephenson

comment by Technologos · 2010-01-05T16:21:48.995Z · score: 5 (5 votes) · LW · GW

I strongly second Snow Crash. I enjoyed it thoroughly.

comment by Jack · 2010-01-05T15:01:28.199Z · score: 2 (4 votes) · LW · GW

LeGuin- The Dispossessed

William Gibson- Neuromancer

George Orwell- 1984

Walter Miller - A Canticle for Leibowitz

Philip K. Dick- The Man in the High Castle

That actually might be my top five books of all time.

comment by RichardKennaway · 2010-01-05T12:50:54.330Z · score: 2 (2 votes) · LW · GW

Bearing in mind that you're asking this on LessWrong, these come to mind:

Greg Egan. Everything he's written, but start with his short story collections, "Axiomatic" and "Luminous". Uploading, strong materialism, quantum mechanics, immortality through technology, and the implications of these for the concept of personal identity. Some of his short stories are online.

Charles Stross. Most of his writing is set in a near-future, near-Singularity world.

On related themes are "The Metamorphosis of Prime Intellect" and John C. Wright's Golden Age trilogy.

There are many more SF novels I think everyone should read, but that would be digressing into my personal tastes.

Some people here have recommended R. Scott Bakker's trilogy that begins with "The Darkness That Comes Before", as presenting a picture of a superhuman rationalist, although having ploughed through the first book I'm not all that moved to follow up with the rest. I found the world-building rather derivative, and the rationalist doesn't play an active role. Can anyone sell me on reading volume 2?

comment by Zack_M_Davis · 2010-01-05T19:15:54.623Z · score: 2 (2 votes) · LW · GW

Strongly seconding Egan. I'd start with "Singleton" and "Oracle."

Also of note, Ted Chiang.

comment by gwern · 2012-08-21T22:01:46.541Z · score: 0 (0 votes) · LW · GW

Can anyone sell me on reading volume 2?

I couldn't unless 'pretty good fantasy version of the Crusades' sounds like your cup of tea.

comment by whpearson · 2010-01-05T12:29:44.441Z · score: 2 (4 votes) · LW · GW

I'd say identify what sort of future scenarios you want to explore and ask us to identify exemplars. Or is the goal just to get a common vocabulary to discuss things?

Reading sci-fi, while potentially valuable, should be done with a purpose in mind. Unless you need another potential source of procrastination.

comment by komponisto · 2010-01-05T12:38:58.641Z · score: 5 (5 votes) · LW · GW

Reading sci-fi, while potentially valuable, should be done with a purpose in mind.

Goodness gracious. No, just looking for more procrastination/pure fun. I've gotten along fine without it thus far, after all.

(Of course, if someone actually thinks I really do need to read sci-fi for some "serious" reason, that would be interesting to know.)

comment by Technologos · 2010-01-05T16:29:21.768Z · score: 1 (1 votes) · LW · GW

While I don't think you need to read it, per se, I have found sci-fi to be of remarkable use in preparing me for exactly the kind of mind-changing upon which Less Wrong thrives. The Asimov short stories cited above are good examples.

I also continue to cite Asimov's Foundation trilogy (there are more after the trilogy, but he openly said that he wrote the later books purely because his publisher requested them) as the most influential fiction works in pushing me into my current career.

comment by Sniffnoy · 2010-01-09T20:25:40.529Z · score: 1 (1 votes) · LW · GW

Since no one's mentioned it yet: Rendezvous with Rama. You really don't want to touch the sequels, though.

comment by Jonathan_Graehl · 2010-08-10T06:21:31.813Z · score: 0 (0 votes) · LW · GW

Agreed on both points.

comment by Kevin · 2010-01-09T06:07:14.494Z · score: 1 (1 votes) · LW · GW

Oh, definitely 1984 if you've never read it. Scary how much predictive power it's had.

comment by brian_jaress · 2010-01-08T09:16:54.956Z · score: 1 (1 votes) · LW · GW

This might not be the best place to ask because so many people here prefer science fiction to regular fiction. I've noticed that people who prefer science fiction have a very different idea of what makes good science fiction than people who have no preference or who prefer regular fiction.

Most of what I see in the other comments is on the "prefers science fiction" side, except for things by LeGuin and maybe Dune.

Of course, you might turn out to prefer science fiction and just not have realized it. Then all would be well.

comment by zero_call · 2010-01-08T05:54:22.098Z · score: 1 (1 votes) · LW · GW

It's actually very important to ask people for recommendations for books, and especially for sci-fi, since it seems like a large majority of the work out there is, well, garbage. Not to be too harsh, but IMO the same thing could be said for a lot of artistic genres (anime, modern action film, etc.).

For sci-fi, there is some really top-notch work out there. But be warned that, in general, the rest of a series isn't as good as the first book. Some classics, all favorites of mine, are:

  • Dune (Frank Herbert)
  • Starship Troopers (Robert Heinlein)
  • Ringworld (first book) (Larry Niven)
  • Neuromancer (William Gibson) (Warning: last half of the book becomes s.l.o.w. though)
  • A Fire Upon the Deep (Vernor Vinge)
comment by Blueberry · 2010-01-06T23:10:26.650Z · score: 1 (1 votes) · LW · GW

I haven't seen much of the Star Wars or Star Trek stuff either, and don't really consider them science fiction as much as space action movies. That's not really what we're talking about.

I would strongly advise you to start with short stories, specifically Isaac Asimov, Robert Heinlein, Arthur C. Clarke, Robert Sheckley, and Philip K. Dick. All those authors are considered giants in the field and have anthologies of collected short stories. Science fiction short stories tend to be easier to read because you don't get bogged down in detail, and you can get right to the point of exploring the interesting and speculative worlds.

comment by AdeleneDawner · 2010-01-06T17:57:00.333Z · score: 1 (1 votes) · LW · GW

I don't know whether to be surprised that no one has recommended the Ender's Game series or not. They're not terribly realistic in the tech (especially toward the end of the series), and don't address the idea of a technological singularity, but they're a good read anyway.

Oh - I'm not sure if this is what you were thinking of by sci-fi or not, and it gets a bit new-agey, but Spider Robinson's "Telempath" is a personal favorite. It's set in a near-future (at the time of writing) earth after a virus was released that magnified everyone's sense of smell to the point where cities, and most modern methods of producing things, became intolerable. (Does anyone else have post-apocalyptic themed favorites? I have a fondness for the genre, sci-fi or not.)

comment by Cyan · 2010-01-06T18:16:03.547Z · score: 3 (5 votes) · LW · GW

I had a high opinion of Ender's Game once (less so for its sequels). Then I read this.

comment by Blueberry · 2010-01-08T06:34:08.426Z · score: 1 (1 votes) · LW · GW

A poorly thought out, insult-filled rant comparing scenes in Ender's Game to "cumshots" changed your view of a classic, award-winning science fiction novel? Please reconsider.

comment by Cyan · 2010-01-08T19:32:10.135Z · score: 4 (4 votes) · LW · GW

If you strip out the invective and the appeal to emotion embodied in the metaphorical comparison to porn, there yet remains valid criticism of the structure and implied moral standards of the book.

comment by xamdam · 2010-08-10T01:04:02.822Z · score: 1 (1 votes) · LW · GW

I did not believe this was possible, but this analysis has turned EG into ashes retroactively. Still, it gets lots of kids into scifi, so there is some value.

A really great kids scifi book is "Have spacesuit, will travel" by Heinlein.

comment by NancyLebovitz · 2010-08-10T01:29:45.485Z · score: 3 (3 votes) · LW · GW

I did not believe this was possible, but this analysis has turned EG into ashes retroactively.

I've heard that effect called "the suck fairy". The suck fairy sneaks into your life and replaces books you used to love with vaguely similar books that suck.

comment by xamdam · 2010-08-10T02:08:53.442Z · score: 1 (1 votes) · LW · GW

Great name, but unfortunately it's the same book; the analysis made it incompatible with self-respect.

comment by NancyLebovitz · 2010-08-10T02:59:54.865Z · score: 1 (1 votes) · LW · GW

The suck fairy always brings something that looks exactly like the same book, but somehow....

I'm not sure if I'll ever be able to enjoy Macroscope again. Anthony was really interesting about an information gift economy, but I suspect that "vaguely creepy about women" is going to turn into something much worse.

comment by Jack · 2010-01-06T18:04:38.836Z · score: 0 (0 votes) · LW · GW

I recommended "A Canticle for Leibowitz" and "Jericho" earlier. Also, Ender's Game and Speaker for the Dead would have been the next two books on my list, though I read them when I was younger and don't know if they would be appealing to adults. How do people think Card (a devout Mormon) does at writing atheist/agnostic characters (nearly all the main characters in the series)?

comment by AdeleneDawner · 2010-01-06T18:31:56.445Z · score: 0 (0 votes) · LW · GW

I haven't really thought about his portrayal of atheists, but he did a good enough job of writing a convincing, non-demonized gay man in Songbird that I was speechless when I discovered that he firmly believes that such people are going to hell.

comment by Alicorn · 2010-01-06T22:45:04.126Z · score: 3 (3 votes) · LW · GW

He believes that they are sinning. Mormons have a really complicated dolled-up afterlife, so if he's sticking to doctrine, he probably doesn't actually expect gays as a group to all go to Hell.

Edit: He did a gay guy in the Memory of Earth series too (the plot of which, I later found, is a blatant ripoff of the Book of Mormon). Like the gay guy in Songbird, this one ends up with a woman, although less tragically.

comment by Jack · 2010-01-06T23:12:31.297Z · score: 2 (2 votes) · LW · GW

I have to say, it is an interesting coincidence that he has written two gay characters that end up with women, especially since he is absolutely terrible at writing (heterosexual) sex scenes/sexuality; I mean, really, I've never read a professional writer who was worse at this.

comment by SilasBarta · 2010-01-08T21:54:06.760Z · score: 2 (2 votes) · LW · GW

Is there any significance to how OSC avoids using the standard terms for gay, but instead uses a made-up in-world term for it that you have to infer means "gay"? (At least in the Memory of Earth series; I haven't read the other.)

comment by bogus · 2010-01-08T23:54:37.542Z · score: 1 (3 votes) · LW · GW

Is there any significance to how OSC avoids using the standard terms for gay, but instead uses a made-up in-world term for it that you have to infer means "gay"?

wtf? that's the kwyjiboest thing I've ever seen. omg lol

comment by Alicorn · 2010-01-06T23:19:55.254Z · score: 1 (1 votes) · LW · GW

I don't think it's a coincidence at all. The way I understand it is that under Mormon doctrine, the act, not the temptation towards the act, is what's a sin: so a gay character who marries a woman and (regardless of whether he actually has sex with her or not) refrains from extramarital sexual activity is just fine and dandy. The Songbird character didn't get married; the Memory of Earth one did. But the former, while not "demonized", was presented as a fairly weak person; the latter was supposed to be a generally decent guy.

comment by RolfAndreassen · 2010-01-06T23:32:04.072Z · score: 0 (0 votes) · LW · GW

Where does OSC even attempt to do so? He generally just leaves the actual sex scenes out of the books, to the best of my recollection. Would that Turtledove had shown similar restraint.

comment by Jack · 2010-01-06T23:46:27.157Z · score: 0 (0 votes) · LW · GW

It has been a while since I read any Card, but Folk of the Fringe included a really bizarre story about sex between a young white boy and a middle-aged Native American. The Ender's Game sequels almost all include ostensibly sexual relationships, and he tries to describe aspects of that and moments when, presumably, the characters would be experiencing sexual attraction.

comment by RolfAndreassen · 2010-01-07T21:48:54.000Z · score: 0 (0 votes) · LW · GW

Ok, I was thinking more in terms of straight-out sex scenes, as in Turtledove, where the tab goes in the slot. I must say I didn't find OSC's writing on sexual attraction particularly awkward; what about it did you dislike so?

comment by Jack · 2010-04-12T22:39:41.015Z · score: 1 (1 votes) · LW · GW

Sorry, really late reply. Was just looking over this thread and happened to see this.

Card's writing that involves sexual attraction just comes off as asexual. I never got the sense that the characters were actually sexually attracted to each other; affectionate maybe, but not aroused. It's like the way sexuality looks on TV, not the way people actually experience it. I recall reading Card himself say that he didn't think he was very good at writing about sex or sexual attraction in an interview or something. It might have been in the Folk of the Fringe book somewhere, but I can't find it in my library.

comment by RolfAndreassen · 2010-04-13T16:46:53.480Z · score: 0 (0 votes) · LW · GW

Ok, I guess I agree with that. He either cannot or will not write such that you feel the emotions associated with sexual attraction; it is an area where he tells rather than showing. Perhaps this is a deliberate choice based in his Mormon religion; he's also rather down on porn. Either way, though, it seems to me that his stories rarely suffer from this. To take an example, 'Empire' is way worse than the Ender sequels, but it's not because of the sex; indeed it has effectively zero sex in it, even of the kind you describe. Rather it suffers from being nearly-explicit propaganda.

comment by AdeleneDawner · 2010-01-06T22:48:25.385Z · score: 1 (1 votes) · LW · GW

I went back and checked my source (wikipedia); you're right, I'd mis-remembered.

comment by Jack · 2010-01-06T17:24:31.766Z · score: 1 (1 votes) · LW · GW

Films:

Blade Runner

Gattaca

2001: A Space Odyssey

comment by Furcas · 2010-01-05T21:35:15.846Z · score: 1 (1 votes) · LW · GW

Isaac Asimov's Foundation series:

  • Foundation
  • Foundation and Empire
  • Second Foundation
  • Foundation's Edge
  • Foundation and Earth

There are prequels too, but I don't like 'em.

comment by sketerpot · 2010-01-05T21:31:17.152Z · score: 1 (3 votes) · LW · GW

Robert Heinlein wrote some really good stuff (before becoming increasingly erratic in his later years). Very entertaining and fun. Here are some that I would recommend for starting out with:

Tunnel in the Sky. The opposite of Lord of the Flies. Some people are stuck on a wild planet by accident, and instead of having civilization collapse, they start out disorganized and form a civilization because it's a good idea. After reading this, I no longer have any patience for people who claim that our natural state is barbarism.

Citizen of the Galaxy. I can't really summarize this one, but it's got some good characters in it.

Between Planets. Our protagonist finds himself in the middle of a revolution all of a sudden. This was written before we knew that Venus was not habitable.

I was raised on this stuff. Also, I'd like to recommend Startide Rising, by David Brin, and its sequel The Uplift War. They're technically part of a trilogy, but reading the first book (Sundiver) is completely unnecessary. It's not really light reading, but it's entertaining and interesting.

comment by NancyLebovitz · 2010-08-09T21:06:46.380Z · score: 1 (1 votes) · LW · GW

Note about Tunnel in the Sky: they didn't just form a society (not a civilization) because they thought it was a good idea to do so; they'd had training in how to build social structures.

comment by Cyan · 2010-01-05T14:19:10.816Z · score: 1 (1 votes) · LW · GW

I recommend anything by Charles Stross, Lois McMaster Bujold's Vorkosigan Saga (link gives titles and chronology), and anything by Ursula LeGuin, but especially City of Illusions and The Left Hand of Darkness.

comment by RolfAndreassen · 2010-01-06T23:33:03.099Z · score: 0 (0 votes) · LW · GW

Upvoted for the Vorkosigan suggestion; seconded.

comment by AdeleneDawner · 2010-01-05T14:26:35.386Z · score: 0 (0 votes) · LW · GW

As much as I love LeGuin, her work tends to be fairly challenging. It's worth noting that her novels tend to be much easier to read than her short stories, unlike most authors.

comment by Alicorn · 2010-01-05T14:44:01.501Z · score: 0 (0 votes) · LW · GW

You find her novels easier? I've loved many LeGuin short stories (most notably The Ones Who Walk Away from Omelas, and everything in the Changing Planes collection), but I can't stand her novels. They lose me ten pages in; I've never managed to slog more than halfway through a single one.

comment by AdeleneDawner · 2010-01-05T14:58:30.139Z · score: 0 (0 votes) · LW · GW

The novels are definitely still challenging, but until I'd read a few of her novels and figured out how to think about her writing, I wasn't able to make sense of most of her short stories (Omelas being one exception to that). I'd get to the end of the text and go 'wait, was there supposed to be a story in that set of words?'

comment by MartinB · 2010-08-09T20:52:14.954Z · score: 0 (0 votes) · LW · GW

Just reading that, I am curious what you did end up reading and what you think about it.

My recent reads were Heinlein's Citizen of the Galaxy and The Star Beast.

comment by RobinZ · 2010-01-09T15:05:06.652Z · score: 0 (0 votes) · LW · GW

I can see you have already been deluged in recommendations, but here are a few novels I liked, with notes:

Mission of Gravity by Hal Clement. One of the better-written books from one of my first favorite authors. Hal Clement is, in my opinion, the definitive writer of hard science fiction, the benchmark to which others should be compared. If possible, get a copy with the essay "Whirligig World" included (the volume Heavy Planet, for example).

Islands in the Net by Bruce Sterling. Something of a science-fiction bildungsroman, and some of my favorite writing of all time. It's surprisingly accurate as futurology, although that's not a particularly important feature in a novel; more to the point, it's got wonderful worldbuilding and characterization.

A Fire Upon the Deep by Vernor Vinge. Excellent epic science fiction. I don't believe it is a classic in the way some others may have suggested, but I do believe it's a good read.

A Woman of the Iron People by Eleanor Arnason. An excellent entry in the realm of anthropological science fiction, with beautiful characterization of both the human anthropologists and the population of aliens. (Worth comparing to Sheri S. Tepper, Ursula K. LeGuin, and Joan D. Vinge.)

comment by Morendil · 2010-01-05T21:03:03.752Z · score: 0 (0 votes) · LW · GW

You already have more than enough; I'll nevertheless add a few:

Larry Niven's Ringworld

David Brin's Uplift books

John Varley's Titan, Wizard, Demon

comment by gwern · 2010-01-02T13:50:47.803Z · score: 8 (8 votes) · LW · GW

I recently revisited my old (private) high school, which had finished building a new >$15 million building for its football team (and misc. student activities & classes).

I suddenly remembered that when I was much younger, the lust of universities and schools in general for new buildings had always puzzled me: I knew perfectly well that I learned more or less the same whether the classroom was shiny new or grizzled gray and that this was true of just about every subject-matter*, and even then it was obvious that buildings must cost a lot to build and then maintain, and space didn't seem plausible (because I passed empty classrooms all the time and they were often the same classroom pretty much all day). So this always puzzled me as a kid - big buildings seemed like perfect white elephants. I could understand the donors' reason, but not anyone else's.

When I remembered my childhood aporia, I suddenly realized - 'Oh, this is status-seeking behavior; big buildings are unfakeable social signals of wealth and influence. I was just being narrow-minded in assuming that if it didn't have your name on it, it couldn't boost your status.'

(I don't really have any point to this anecdote, but I thought it was interesting that OB/LW reading solved a longstanding puzzle of mine.)

* Obviously a few subject-matters do require specialized facilities; it's hard to do pottery without a specialized art-room, for example. But those are a minority.

comment by quanticle · 2010-01-03T03:03:48.658Z · score: 0 (0 votes) · LW · GW

I knew perfectly well that I learned more or less the same whether the classroom was shiny new or grizzled gray and that this was true of just about every subject-matter*, and even then it was obvious that buildings must cost a lot to build and then maintain, and space didn't seem plausible (because I passed empty classrooms all the time and they were often the same classroom pretty much all day). So this always puzzled me as a kid - big buildings seemed like perfect white elephants. I could understand the donors' reason, but not anyone else's.

I don't know about that. I know that there are several buildings at my university that I hate to have classes in, because they're either too hot, too cold, or poorly ventilated. Yes, you're correct that in the majority of cases, the age of the building makes no difference (e.g. no one recognizes the difference between a two year old building and a twenty year old building), but in extreme cases the age can make a difference (e.g. if the building does not have proper ventilation or temperature control). It's very difficult to keep focused when the classroom is 30 degrees Celsius and the lecture is two hours long.

comment by gwern · 2010-01-03T13:58:41.310Z · score: 1 (1 votes) · LW · GW

Well, I can't really object to the extremes theory. You aren't a Third-Worlder or a highly driven Indian or Chinese or pre-20th century American child who wouldn't be bothered by such conditions, after all.

But most school building is not about avoiding such extremes. I can cite exactly one example in my educational career where a building had a massive overhaul due to genuine need (a fire in the gym burned the roof badly); all the other expansions and new buildings.... not so much.

It's very difficult to keep focused when the classroom is 30 degrees Celsius and the lecture is two hours long.

This reflects a failure of pedagogy more than the value of architecture - I've never seen any research saying students can really focus & learn for 2 hours, and the research I glanced over suggests much shorter lectures than that. (IIRC, the FAA or USAF found pilot-education lectures should be no longer than 20 minutes and followed immediately by review.)

comment by [deleted] · 2010-01-03T20:35:02.678Z · score: 0 (0 votes) · LW · GW

Yes, you're correct that in the majority of cases, the age of the building makes no difference (e.g. no one recognizes the difference between a two year old building and a twenty year old building) . . .

My dorm building has the number 2008 carved conspicuously into one of the stones in its facade. It's pretty easy to tell that it's a two year old building.

comment by CronoDAS · 2010-01-02T23:09:02.180Z · score: 0 (0 votes) · LW · GW

My town has fairly recently (in the past ten years) added several new school buildings. The old buildings had problems (leaky roofs, no air conditioning, etc.) and the town's school-age population was growing.

Now, if they would only be willing to expand the library. :(

comment by gwern · 2010-01-03T00:30:21.737Z · score: 1 (1 votes) · LW · GW

So make the classes bigger, perhaps. In a Hansonian vein:

"But while state legislatures for decades have passed laws — and provided millions of dollars — to cap the size of classes, some academic researchers and education leaders say that small reductions in the number of students in a room often have little effect on their performance."
...
Dan Goldhaber, an education professor at the University of Washington, said the obsession with class size stemmed from a desire for “something that people can grasp easily — you walk into a class and you see exactly how many kids are there.”
“Whether or not it translates into an additional advantage doesn’t necessarily matter,” Professor Goldhaber said. “We know that teachers are the most important thing, but teacher quality is not stamped on someone’s forehead.”
http://www.nytimes.com/2009/02/22/education/22class.html

(I don't think I ever met someone who failed to learn something because somewhere in the school there was a leak. Because of no air conditioning, maybe, but puddles or leaks?)

comment by quanticle · 2010-01-03T03:30:15.644Z · score: 2 (2 votes) · LW · GW

Well, classrooms are of limited size. I know that the classrooms at my old high school were only designed for thirty kids each. Now they hold nearly forty each. There is a significant cost from having correspondingly less space per person. The corresponding reductions in mobility and classroom flexibility have an impact on learning.

This is especially pronounced in science labs. Having even one more person per lab station can have a surprisingly detrimental impact on learning. If there are two or three people at a lab station, then pretty much everyone is forced to participate (and learn) in order to finish the lesson. However, if there are four or more kids at a lab station, then you can have a person slacking off, not doing much and the others can cover for the slacker. The slacker doesn't learn anything, and the other students are resentful because three are doing the work of four.

comment by CronoDAS · 2010-01-03T06:31:56.120Z · score: 0 (0 votes) · LW · GW

Leaks damage things. Such as ceilings, for example.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-01T20:37:48.322Z · score: 8 (8 votes) · LW · GW

Akrasia FYI:

I tried creating a separate login on my computer with no distractions, and tried to get my work done there. This reduced my productivity because it increased the cost of switching back from procrastinating to working. I would have thought that recovering in large bites and working in large bites would have been more efficient, but apparently no, it's not.

I'm currently testing the hypothesis that reading fiction (possibly reading anything?) comes out of my energy-to-work-on-the-book budget.

Next up to try: Pick up a CPAP machine off Craigslist.

comment by wedrifid · 2010-01-02T09:49:13.139Z · score: 6 (6 votes) · LW · GW

I tried creating a separate login on my computer with no distractions, and tried to get my work done there. This reduced my productivity because it increased the cost of switching back from procrastinating to working.

A technical problem that is easily solvable. My approach has been to use VMWare. All the productive tools are installed on the base OS. Procrastination tools are installed on a virtual machine. Starting the procrastination box takes about 20 seconds (and more importantly a significant active decision) but closing it to revert to 'productive mode' takes no time at all.

comment by jimrandomh · 2010-01-02T02:34:05.494Z · score: 3 (3 votes) · LW · GW

I've noticed the same problem in separating work from procrastination environments. But it might work if it was asymmetric - say, there's a single fast hotkey to go from procrastination mode to work mode, but you have to type a password to go in the other direction. (Or better yet, a 5 second delay timer that you can cancel.)
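That asymmetry is easy to prototype. Here's a minimal Python sketch, purely illustrative (the function name and the countdown length are invented for the example, not an existing tool): entering procrastination mode runs a cancellable countdown, while the return to work mode would be bound directly to a hotkey with no delay at all.

```python
import time

def enter_procrastination_mode(delay_seconds=5):
    """Count down before switching modes; Ctrl-C during the countdown cancels."""
    try:
        for remaining in range(delay_seconds, 0, -1):
            print(f"Switching to procrastination mode in {remaining}s (Ctrl-C to cancel)...")
            time.sleep(1)
    except KeyboardInterrupt:
        print("Cancelled -- staying in work mode.")
        return False
    # Here you would actually launch the distraction environment
    # (e.g. switch accounts, start a VM, open a separate browser profile).
    print("Procrastination mode engaged.")
    return True
```

The point is only that the slow path carries the decision cost, while the fast path back to work stays frictionless.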

comment by kpreid · 2010-01-03T12:57:06.354Z · score: 2 (2 votes) · LW · GW

I had the same problem when I was using just virtual screens with a key to switch, not even separate accounts. It was a significant decrease in productivity before I realized the problem. I think it's not just the effort to switch; it's also that the work doesn't stay visible so that you think about it.

comment by groupuscule · 2010-01-04T05:18:48.792Z · score: 0 (0 votes) · LW · GW

This strategy works for me. I made the password to my non-work login something that would remind me why I set up the system. (I know of people doing similar things to the phone numbers of people they don't want to call.)

comment by AdeleneDawner · 2010-01-01T18:06:56.005Z · score: 7 (7 votes) · LW · GW

This article about gendered language showed up on one of my feeds a few days ago. Given how often discussions of nongendered pronouns happen here, I figure it's worth sharing.

comment by Daniel_Burfoot · 2010-01-02T13:47:48.205Z · score: 5 (5 votes) · LW · GW

Nice, I liked the part about Tuyuca:

Most fascinating is a feature that would make any journalist tremble. Tuyuca requires verb-endings on statements to show how the speaker knows something. Diga ape-wi means that “the boy played soccer (I know because I saw him)”, while diga ape-hiyi means “the boy played soccer (I assume)”. English can provide such information, but for Tuyuca that is an obligatory ending on the verb. Evidential languages force speakers to think hard about how they learned what they say they know.

It would be fun to try to build a "rational" dialect of English that requires people to follow rules of logical inference and reasoning.
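As a toy sketch of what an obligatory evidential might look like in such a dialect (all names here are invented for illustration), consider a statement type that simply cannot be constructed without declaring how the speaker knows the claim:

```python
from dataclasses import dataclass
from enum import Enum

class Evidential(Enum):
    WITNESSED = "I saw it myself"
    INFERRED = "I infer it"
    HEARSAY = "I was told"
    ASSUMED = "I assume"

@dataclass(frozen=True)
class Statement:
    claim: str
    evidence: Evidential  # no default value, so omitting the evidential is an error

    def __str__(self):
        return f"{self.claim} ({self.evidence.value})"

print(Statement("the boy played soccer", Evidential.WITNESSED))
# prints: the boy played soccer (I saw it myself)
```

Like Tuyuca's verb endings, the marker is grammatically mandatory rather than optional, which is the whole point.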

comment by Seth_Goldin · 2010-01-07T04:18:21.562Z · score: 6 (12 votes) · LW · GW

Hello all,

I've been a longtime lurker, and tried to write up a post a while ago, only to see that I didn't have enough karma. I figure this is the post for a newbie to present something new. I already published this particular post on my personal blog, but if the community here enjoys it enough to give it karma, I'd gladly turn it into a top-level post here, if that's in order.


Life Experience Should Not Modify Your Opinion http://paltrypress.blogspot.com/2009/11/life-experience-should-not-modify-your.html

When I'm debating some controversial topic with someone older than I am, even if I can thoroughly demolish their argument, I am sometimes met with a troubling claim, that perhaps as I grow older, my opinions will change, or that I'll come around on the topic. Implicit in this claim is the assumption that my opinion is based primarily on nothing more than my perception from personal experience.

When my cornered opponent makes this claim, it's a last resort. It's unwarranted condescension, because it reveals how wrong their entire approach is. Just by making the claim, they demonstrate that they believe all opinions are based primarily on an accumulation of personal experiences, even their own opinions. Their assumption reveals that they are not Bayesian, and that they intuit that no one is. For not being Bayesian, they have no authority that warrants such condescension.

I intentionally avoid presenting personal anecdotes cobbled together as evidence, because I know that projecting my own experience onto a situation to explain it is no evidence at all. I know that I suffer from all sorts of cognitive biases that obstruct my understanding of the truth. As such, my inclination is to rely on academic consensus. If I explain this explicitly to my opponent, they might dismiss academics as unreliable and irrelevant, hopelessly stuck in the ivory tower of academia.

Dismiss academics at your own peril. Sometimes there are very good reasons for dismissing academic consensus. I concede that most academics aren't Bayesian because academia is an elaborate credentialing and status-signaling mechanism. Furthermore, academics have often been wrong. The Sokal affair illustrates that entire fields can exist completely without merit. That academic consensus can easily be wrong should be intuitively obvious to an atheist; religious community leaders have always been considered academic experts, the most learned and smartest members of society. Still, it would be a fallacious inversion of an argument from authority to dismiss academic consensus simply because it is academic consensus.

For all of academia's flaws, the process of peer-reviewed scientific inquiry, informed by logic, statistics, and regression analysis, offers a better chance at discovering truth than any other institution in history. It is noble and desirable to criticize academic theories, but only as part of intellectually honest, impartial scientific inquiry. Dismissing academic consensus out of hand is primitive, and indicates intellectual dishonesty.

comment by Morendil · 2010-03-18T14:03:58.329Z · score: 5 (5 votes) · LW · GW

What you seem to be saying, that I agree with, is that it's irritating as well as irrelevant when people try to pull authority on you, using "age" or "quantity of experience" as a proxy for authority. Yes, argument does screen off authority. But that's no reason to knock "life experience".

If opinions are not based on "personal experience", what can they possibly be based on? Reading a book is a personal experience. Arguing an issue with someone (and changing your mind) is a personal experience. Learning anything is a personal experience, which (unless you're too good at compartmentalizing) colors your other beliefs.

Perhaps the issue is with your thinking that "demolishing someone's argument" is a worthwhile instrumental goal in pursuit of truth. A more fruitful goal is to repair your interlocutor's argument, to acknowledge how their personal experience has led them to having whatever beliefs they have, and expose symmetrically what elements in your own experience lead you to different views.

Anecdotes are evidence, even though they can be weak evidence. They can be strong evidence too. For instance, having read this comment after I read the commenter's original report of his experience as an isolated individual, I'd be more inclined to lend credence to the "stealth blimp" theory. I would have dismissed that theory on the basis of reading the Wikipedia page alone or hearing the anecdote alone, but I have a low prior probability for someone on LessWrong arranging to seem as if he looked up news reports after first making an honest disclosure to other people interested in truth-seeking.

It seems inconsistent on your part to start off with a rant about "anecdotes", and then make a strong, absolute claim based solely on "the Sokal affair" - which at the scale of scientific institutions is anecdotal.

I think you're trying to make two distinct points and getting them mixed up, and as a result not getting either point across. One of these points I believe needs to be moderated - the one where you say "personal experiences aren't evidence" - because they are evidence; the other is where you say "people who speak with too much confidence are more likely to be wrong, including a) people older than you, b) some academics, but not necessarily the academic consensus".

That is perhaps a third point - just why you think that "the process of peer-reviewed scientific inquiry, informed by logic, statistics, and regression analysis, offers a better chance at discovering truth than any other institution in history". That's a strong claim subject to the conjunction fallacy: are each of peer review, logic, statistics and regression analysis necessary elements of what makes scientific inquiry our best chance at discovering truth? Are they sufficient elements to be that best chance?

comment by Seth_Goldin · 2010-03-18T17:09:37.006Z · score: 1 (1 votes) · LW · GW

Hi Morendil,

Thanks for the comment. The particular version you are commenting on was an earlier, worse version than what I posted and then pulled this morning. The version I posted this morning was much better than this. I actually changed the claim about the Sokal affair completely.

Due to what I fear was an information cascade of negative karma, I pulled the post so that I might make revisions.

The criticism concerning both this earlier version and the newer one from this morning still holds though. I too realized after the immediate negative feedback that I actually was combining, poorly, two different points and losing both of them in the process. I think I need to revise this into two different posts, or cut out the point about academia entirely. I will concede that anecdotes are evidence as well in the future version.

Unfortunately I was at exactly 50 karma, and now I'm back down to 20, so it will be a while before I can try again. I'll be working on it.

comment by Seth_Goldin · 2010-03-18T19:31:27.384Z · score: 1 (1 votes) · LW · GW

Here's the latest version, what I will attempt to post on the top level when I again have enough karma.


"Life Experience" as a Conversation-Halter

Sometimes in an argument, an older opponent might claim that perhaps as I grow older, my opinions will change, or that I'll come around on the topic. Implicit in this claim is the assumption that age or quantity of experience is a proxy for legitimate authority. In and of itself, such "life experience" is necessary for an informed rational worldview, but it is not sufficient.

The claim that more "life experience" will completely reverse an opinion indicates that the person making it believes opinion is based primarily on an accumulation of anecdotes, perhaps shaped by extensive availability bias. It actually is a pretty decent assumption that other people aren't Bayesian, because for the most part, they aren't; Haidt, Kahneman, and Tversky, among others, can confirm this.

When an opponent appeals to more "life experience," it's a last resort, and it's a conversation halter. This tactic is used when an opponent is cornered. The claim is nearly an outright acknowledgment of a move to exit the realm of rational debate. Why stick to rational discourse when you can shift to trading anecdotes? It levels the playing field, because anecdotes, while Bayesian evidence, are easily abused, especially for complex moral, social, and political claims. As rhetoric, this is frustratingly effective, but it's logically rude.

Although it might be rude and rhetorically weak, it would be authoritatively appropriate for a Bayesian to be condescending to a non-Bayesian in an argument. Conversely, it can be downright maddening for a non-Bayesian to be condescending to a Bayesian, because the non-Bayesian lacks the epistemological authority to warrant such condescension. E.T. Jaynes wrote in Probability Theory about the arrogance of the uninformed, "The semiliterate on the next bar stool will tell you with absolute, arrogant assurance just how to solve the world's problems; while the scholar who has spent a lifetime studying their causes is not at all sure how to do this."

comment by Seth_Goldin · 2010-03-18T19:40:35.667Z · score: 0 (0 votes) · LW · GW

Sorry; I didn't realize that I can still post. I went ahead and posted it.

comment by SilasBarta · 2010-03-18T14:36:38.265Z · score: 1 (1 votes) · LW · GW

Yes, argument does screen off authority. But that's no reason to knock "life experience". ... Learning anything is a personal experience, which colors your other beliefs. ... A more fruitful goal is to repair your interlocutor's argument, to acknowledge how their personal experience has led them to having whatever beliefs they have, and expose symmetrically what elements in your own experience lead you to different views.

I agree with your point and your recommendation. Life experiences can provide evidence, and they can also be an excuse to avoid providing arguments. You need to distinguish which one it is when someone brings it up. Usually, if it is valid evidence, the other person should be able to articulate which insight a life experience would provide to you, if you were to have it, even if they can't pass the experience directly to your mind.

I remember arguing with a family member about a matter of policy (for obvious reasons I won't say what), and when she couldn't seem to defend her position, she said, "Well, when you have kids, you'll see my side." Yet, from context, it seems she could have, more helpfully, said, "Well, when you have kids, you'll be much more risk-averse, and therefore see why I prefer to keep the system as is" and then we could have gone on to reasons about why one or the other system is risky.

In another case (this time an email exchange on the issue of pricing carbon emissions), someone said I would "get" his point if I would just read the famous Coase paper on externalities. While I hadn't read it, I was familiar with the arguments in it, and ~99% sure my position accounted for its points, so I kept pressing him to tell me which insight I didn't fully appreciate. Thankfully, such probing led him to erroneously state what he thought was my opinion, and when I mentioned this, he decided it wouldn't change my opinion.

comment by thomblake · 2010-01-07T20:31:16.117Z · score: 3 (3 votes) · LW · GW

The Sokal affair illustrates that entire fields can exist completely without merit.

It illustrated nothing of the sort. The Sokal affair illustrated that a non-peer-reviewed, non-science journal will publish bad science writing that was believed to be submitted in good faith.

Social Text was not peer-reviewed because they were hoping to... do... something. What Sokal did was similar to stealing everything from a 'good faith' vegetable stand and then criticizing its owner for not having enough security.

comment by Seth_Goldin · 2010-01-07T20:42:51.937Z · score: 5 (5 votes) · LW · GW

Noted. In another draft I'll change this to make the point about how easy it is for high-status academics to deal in gibberish. Maybe they didn't have much status outside their group of peers, but within it, did they?

What the Social Text Affair Does and Does Not Prove

http://www.physics.nyu.edu/faculty/sokal/noretta.html

"From the mere fact of publication of my parody I think that not much can be deduced. It doesn't prove that the whole field of cultural studies, or cultural studies of science -- much less sociology of science -- is nonsense. Nor does it prove that the intellectual standards in these fields are generally lax. (This might be the case, but it would have to be established on other grounds.) It proves only that the editors of one rather marginal journal were derelict in their intellectual duty, by publishing an article on quantum physics that they admit they could not understand, without bothering to get an opinion from anyone knowledgeable in quantum physics, solely because it came from a conveniently credentialed ally'' (as Social Text co-editor Bruce Robbins later candidly admitted[12]), flattered the editors' ideological preconceptions, and attacked theirenemies''.[13]"

comment by thomblake · 2010-01-07T20:49:38.384Z · score: 1 (1 votes) · LW · GW

I'd forgotten that Sokal himself admitted that much about it - thanks for the cite.

comment by Vladimir_Nesov · 2010-01-07T19:07:59.187Z · score: 2 (2 votes) · LW · GW

For not being Bayesian, they have no authority that warrants such condescension.

It's unclear what you mean by both "Bayesian" and by "authority" in this sentence. If a person is "Bayesian", does it give "authority" for condescension?

There clearly is some truth to the claim that being around longer sometimes allows one to arrive at more accurate beliefs, including more accurate intuitive assessments of a situation, provided you haven't gone down a crazy road in the particular domain. It's not very strong evidence, and it can't defeat many forms of more direct evidence pointing in the contrary direction, but sometimes it's an OK heuristic, especially if you are not aware of other evidence ("ask the elder").

comment by Seth_Goldin · 2010-01-07T19:34:56.839Z · score: 0 (0 votes) · LW · GW

Maybe "authority" is the wrong word. What I mean is that the opponent making this claim is dismissing my stance as wrong, because of my supposed less experience. It means that they believe that truth follows from collecting anecdotes. They ascertain that because they have more anecdotes, they are correct, and I am incorrect. For not being rational, we can't trust their standard of truth to dismiss my position as wrong, since their whole methodology is hopelessly flawed.

comment by Vladimir_Nesov · 2010-01-07T19:42:45.718Z · score: 0 (0 votes) · LW · GW

For not being rational, we can't trust their standard of truth to dismiss my position as wrong, since their whole methodology is hopelessly flawed.

Your core claim seems to be that you should dismiss statements (as opposed to arguments) by "irrational" people. This is a more general idea, basically unrelated to the amount of their personal experience or the other features of typical conversations which you discuss in your comment.

comment by Seth_Goldin · 2010-01-07T20:24:17.578Z · score: 0 (0 votes) · LW · GW

If someone's argument, and therefore position, is irrational, how can we trust them to give honest and accurate criticism of other arguments?

comment by Vladimir_Nesov · 2010-01-07T20:42:50.753Z · score: 1 (1 votes) · LW · GW

If someone's argument, and therefore position, is irrational, how can we trust them to give honest and accurate criticism of other arguments?

At this point you are completely forsaking your original argument (rightly or wrongly, which is a separate concern), which was the point of my critical comment above. It's unclear what you are arguing for, if your conclusion is equivalent to a much simpler premise that you have to assume independently of the argument. This sounds like rationalization (again, regardless of whether the conclusion-advice-heuristic is correct).

comment by Seth_Goldin · 2010-01-07T22:04:08.293Z · score: 0 (0 votes) · LW · GW

OK, let me break it down.

I take "life experience" to mean a haphazard collection of anecdotes.

Claims from haphazardly collected anecdotes do not constitute legitimate evidence, though I concede those claims do often have positive correlations with true facts.

As such, relying on "life experience" is not rational. The point about condescension is tangential. The whole rhetorical technique is frustrating because there is no way to move on from it. Even if "life experience" were legitimate evidence for the claim, the argument could not continue until I had gained more "life experience," and who decides how much would be sufficient? Would it be until I come around? Once we throw the standard of evidence out, we're outside the bounds of rational discourse.

comment by thomblake · 2010-01-07T22:23:39.588Z · score: 3 (3 votes) · LW · GW

I take "life experience" to mean a haphazard collection of anecdotes.

I don't think that's something that most people who think "life experience" is valuable would agree to.

Claims from haphazardly collected anecdotes do not constitute legitimate evidence, though I concede those claims do often have positive correlations with true facts.

It might be profitable for you to revise your criteria for what constitutes legitimate evidence. Throwing away information that has a positive correlation with the thing you're wondering about seems a bit hasty.

comment by Seth_Goldin · 2010-01-08T03:28:45.020Z · score: 0 (0 votes) · LW · GW

I am calling attention to reverting to "life experience" as a last recourse in an argument. If someone strays to that, it's clear that we're no longer considering evidence for whatever the argument is about. Referring back to "life experience" is far too nebulous to take as evidence of anything.

As for what constitutes legitimate evidence, even if anecdotes can correlate, anecdotes are not evidence!

http://www.scientificamerican.com/article.cfm?id=how-anecdotal-evidence-can-undermine-scientific-results

comment by Nick_Tarleton · 2010-01-08T11:51:42.733Z · score: 2 (2 votes) · LW · GW

As for what constitutes legitimate evidence, even if anecdotes can correlate, anecdotes are not evidence!

Anecdotes are rational evidence, but not scientific evidence.
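The distinction can be made quantitative. Here is a minimal sketch in Python, with the prior, the likelihood ratio, and the poltergeist numbers all invented for illustration: an anecdote whose likelihood ratio is close to 1 is evidence in the Bayesian sense, but it barely moves the posterior.

```python
def update(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

prior = 0.01  # e.g. credence that poltergeists exist (invented number)
# An acquaintance reports seeing one. Suppose such reports are only slightly
# more likely if poltergeists exist than if they don't (misperception,
# embellished retelling, and so on), so the likelihood ratio is near 1.
posterior = update(prior, likelihood_ratio=1.5)
print(posterior)  # nudged above the prior, nowhere near 0.5
```

So an anecdote counts, but for "complex religious, scientific, or political" claims it takes a large pile of independent, unbiased anecdotes to matter, which haphazard collection rarely supplies.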

comment by Seth_Goldin · 2010-01-09T04:11:05.191Z · score: 0 (0 votes) · LW · GW

For a debate involving complex religious, scientific, or political arguments, this won't suffice.

comment by Seth_Goldin · 2010-01-08T23:05:46.115Z · score: 0 (0 votes) · LW · GW

Let's say I'm debating someone on whether or not poltergeists exist.

comment by Seth_Goldin · 2010-01-07T18:02:02.934Z · score: 0 (0 votes) · LW · GW

All,

Thanks for the votes. So, I'm not exactly sure how the karma system works. On the main page I see articles from people with less than 50 points, and I see prominent users that have nonsensically low counts. Do I still need 50 points to post a main article?

comment by kpreid · 2010-01-07T19:15:43.261Z · score: 2 (2 votes) · LW · GW

Users' karma is only displayed on their user page (and the top contributors list). The number in the header of an article or comment is the score for that post only. Does this help?

comment by Seth_Goldin · 2010-01-07T19:35:45.030Z · score: 0 (0 votes) · LW · GW

Yes, thank you.

comment by pdf23ds · 2010-01-03T00:54:20.547Z · score: 6 (6 votes) · LW · GW

If quantum immortality is correct, and assuming life extension technologies and uploading are delayed for a long time, wouldn't each of us, in our main worldline, become more and more decrepit and injured as time goes on, until living would be terribly and constantly painful, with no hope of escape?

comment by Alicorn · 2010-01-03T00:55:58.005Z · score: 4 (6 votes) · LW · GW

We frequently become unconscious (sleep) in our threads of experience. There is no obvious reason we couldn't fall comatose after becoming sufficiently battered.

comment by SoullessAutomaton · 2010-01-07T03:31:55.646Z · score: 3 (3 votes) · LW · GW

I present for your consideration a delightful quote, courtesy of a discussion on another site:

The Sibyl of Cumae, who led Aeneas on his journey to the underworld, for which he collected the Golden Bough, was the most famous prophetess of the ancient world. Beloved of Apollo, she was given anything she might desire. She asked for eternal life. Sadly, Apollo granted her wish, for she had forgotten to ask for eternal youth. Now dried, desiccated, and shrunken, she is carried in a cricket cage, and when the boys ask her what she desires, she says: "I want to die."

I think the moral of the story is: stay healthy and able-bodied as much as possible. If, at some point, you should find yourself surviving far beyond what would be reasonably expected, it might be wise to attempt some strategic quantum suicide reality editing while you still have the capacity to do so...

comment by Roko · 2010-01-03T13:07:52.572Z · score: 2 (2 votes) · LW · GW

quantum immortality is correct

How could it be "correct" or "incorrect"? QI doesn't make a falsifiable factual claim, as far as I know...

comment by orthonormal · 2010-01-03T18:55:51.248Z · score: 4 (4 votes) · LW · GW

A superhuman intelligence that understood the nature of human consciousness and subjective experience would presumably know whether QI was correct, incorrect, or somehow a wrong question. Consciousness and experience all happen within physics, they just currently confuse the hell out of us.

comment by Roko · 2010-01-03T21:12:40.222Z · score: 2 (2 votes) · LW · GW

somehow a wrong question

I think it is becoming clear that it is a wrong question.

see Max Tegmark on MWI

comment by orthonormal · 2010-01-04T05:10:37.055Z · score: 1 (1 votes) · LW · GW

Neat paper!

comment by pdf23ds · 2010-01-05T23:14:46.614Z · score: -1 (1 votes) · LW · GW

As I understand it, it makes a prediction about your future experience (and the MWI measure of that experience)--not dying. Is that not falsifiable? I suppose you could argue that it's a logical and inescapable consequence of MWI, and not in itself falsifiable, but that doesn't seem like an important distinction.

I don't see how Tegmark's paper is relevant to this question.

comment by Roko · 2010-01-06T12:02:15.055Z · score: 1 (1 votes) · LW · GW

I suppose you could argue that it's a logical and inescapable consequence of MWI

It is. If you believe MWI, you believe that Schrodinger's cat will experience survival every time, even if you repeated the experiment 100 times, but that you will observe the cat dead if you repeat the experiment enough times.

There is no falsifiable fact above and beyond MWI as far as I can see, apart from the general air of confusion about subjective experience, which hasn't coalesced into anything definite enough to be falsified.
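The arithmetic behind the repeated-experiment claim, as a toy sketch assuming an ideal 50/50 experiment: from any outside observer's perspective the chance of the cat surviving every trial shrinks geometrically, while under MWI a surviving-cat branch always exists with exactly that vanishing measure.

```python
# Probability, from the outside, that the cat survives every one of n
# independent 50/50 trials. Under MWI the surviving branch always exists,
# but its measure is this same rapidly vanishing number.
def survival_measure(n, p=0.5):
    return p ** n

print(survival_measure(10))   # 1 in 1024
print(survival_measure(100))  # ~7.9e-31
```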

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-03T04:01:35.750Z · score: 2 (2 votes) · LW · GW

"The author recommends that anyone reading this story sign up with Alcor or the Cryonics Institute to have their brain preserved after death for later revival under controlled conditions."

(From a little story which assumes QTI.)

comment by rwallace · 2010-01-03T02:43:53.187Z · score: 1 (1 votes) · LW · GW

Even supposing this unpleasant scenario is true, it is not hopeless. There are things we can do to improve matters. The timescale to develop life extension and uploading is not a prior constant; we can work to speed it up, and we should be doing this anyway. And we can sign up for cryonics to obtain a better alternative worldline.

comment by Nick_Tarleton · 2010-01-03T03:57:16.748Z · score: 0 (0 votes) · LW · GW

Not if, as is at least conceivable*, enough Friendly superintelligences model the past and reconstruct people from it that eventually most of your measure comes from them. (Or other, mostly less pleasant but seemingly much less likely possibilities.)

* It actually seems a lot more than "at least conceivable" to me, but I trust this seeming very little, since the idea is so comforting.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-03T04:01:06.177Z · score: 0 (0 votes) · LW · GW

That requires a double assumption about not just quantum immortality, but about "subjective measure / what happens next" continuing into all copies of a computation, rather than just the local causal future of a computation.

comment by Nick_Tarleton · 2010-01-03T04:05:47.549Z · score: 0 (0 votes) · LW · GW

Right, MWI has a different causal structure than other multiverses and quantum immortality is a distinct case of, call it 'modal-realist immortality'. I do tend to forget that.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-03T04:26:54.825Z · score: 0 (2 votes) · LW · GW

Sorry, could you repeat that? Both clauses?

comment by Kaj_Sotala · 2010-01-01T17:13:19.514Z · score: 6 (8 votes) · LW · GW

Suppose you could find out the exact outcome (up to the point of reading the alternate history equivalent of Wikipedia, history books etc.) of changing the outcome of a single historical event. What would that event be?

Note that major developments like "the Roman empire would never have fallen" or "the Chinese wouldn't have turned inwards" involve multiple events, not just one.

comment by Scott Alexander (Yvain) · 2010-01-01T17:53:48.808Z · score: 15 (15 votes) · LW · GW

So many. I can't limit it to one, but my top four would be "What if Mohammed had never been born?", "What if Julian the Apostate had succeeded in stamping out Christianity?", "What if Thera had never blown and the Minoans had survived?", and "What if Alexander the Great had lived to a ripe old age?"

The civilizations of the Near East were fascinating, and although the early Islamic Empire was interesting in its own right it did a lot to homogenize some really cool places. It also dealt a fatal wound to Byzantium. If Mohammed had never existed, I would look forward to reading about the Zoroastrian Persians, the Byzantines, and the Romanized Syrians and Egyptians surviving much longer than they did.

The Minoans were the most advanced civilization of their time, and had plumbing, three story buildings, urban planning and possibly even primitive optics in 2000 BC (I wrote a bit about them here). Although they've no doubt been romanticized, in the romanticized version at least they had a pretty equitable society, gave women high status, and revered art and nature. Then they were all destroyed by a giant volcano. I remember reading one historian's speculation that if they'd lived, a man would've landed on the moon by 1 AD.

I don't have such antipathy to Christianity that I'd want to prevent it from ever existing, but it sure did give us 2000 odd years of boring religion. Julian the Apostate was a Roman emperor who ruled a few reigns after Constantine and tried to turn back the clock, de-establish Christianity, and revive all the old pagan cults. He was also a philosopher, an intellectual, and by most accounts a pretty honest and decent guy. He died after reigning barely over a year, from a spear wound incurred in battle. If he'd lived, for all we know the US could be One Nation Under Zeus (or Wodin, or whoever) right now.

As for Alexander the Great, he was just plain nifty. I think I heard he was planning a campaign against Carthage before he died. If he'd lived to 80, he could've conquered all Europe, North Africa, and Western Asia, and have unified the whole western world under a dynasty of philosopher-kings dedicated to spreading Greek culture and ideas. Given a few more years, he might also have solved that whole "successor" issue.

comment by James_Miller · 2010-01-01T18:02:12.729Z · score: 12 (16 votes) · LW · GW

Given that Alexander was one of the most successful conquerors in all of history, he almost certainly benefited from being extremely lucky. If he had lived longer, therefore, he would have probably experienced much regression to the mean with respect to his military success.
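The argument can be illustrated with a small simulation (all numbers are hypothetical: skill and luck are Gaussian, with luck dominating any single campaign). Selecting the most successful of many commanders after a first run of battles and then re-running him keeps his true skill but redraws his luck, so the repeat performance typically falls back toward the mean.

```python
import random

random.seed(0)

# Toy model: each commander has a modest true skill; a campaign's outcome is
# skill plus much larger luck, summed over 10 battles.
def campaign(skill, n_battles=10, luck_sd=1.0):
    return sum(skill + random.gauss(0, luck_sd) for _ in range(n_battles))

skills = [random.gauss(0, 0.2) for _ in range(1000)]
first = [campaign(s) for s in skills]

# The "Alexander" of this toy world: top performer of the first campaign.
best = max(range(len(skills)), key=lambda i: first[i])

# His second campaign keeps the skill but redraws the luck, so it is
# usually far below his record-setting first run.
second = campaign(skills[best])
print(first[best], second)
```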

comment by wedrifid · 2010-01-02T00:35:21.564Z · score: 4 (4 votes) · LW · GW

Of course, once you are already the most successful conqueror alive you tend to need less luck. You can get by on the basic competence that comes from experience and the resources you now have at your disposal. (So long as you don't, for example, try to take Russia. Although even then Alexander's style would probably have worked better than Napoleon's.)

comment by DanArmak · 2010-01-02T22:24:11.344Z · score: 1 (1 votes) · LW · GW

The civilizations of the Near East were fascinating, and although the early Islamic Empire was interesting in its own right it did a lot to homogenize some really cool places.

As did the Christian culture before them. And the original Roman Empire before that. And Alexander's Hellenistic culture spread by the fragments of his mini-empire. And the Persian empires that came and went in the region...

comment by Morendil · 2010-01-01T22:54:48.032Z · score: 12 (14 votes) · LW · GW

I'd really, really like to see what the world would be like today if a single butterfly's wings had flapped slightly faster back in 5000 B.C.

comment by anonym · 2010-01-02T20:38:18.616Z · score: 2 (2 votes) · LW · GW

Along the same idea, but much more likely to yield radical differences to the future of human society, I'd like to know what would have happened if some ancient bottleneck epidemic had not happened or had happened differently (killed more or fewer people, or just different individuals). Much or all of the human gene pool after that altered event would be different.

comment by DanArmak · 2010-01-02T22:26:31.200Z · score: 2 (2 votes) · LW · GW

I'd like to see a world in which all ancestor-types of humans through to the last common ancestor with chimps still lived in many places.

comment by Zack_M_Davis · 2010-01-02T23:13:18.162Z · score: 0 (0 votes) · LW · GW

Book recommendation

comment by loqi · 2010-01-03T00:15:46.043Z · score: 0 (0 votes) · LW · GW

I'd be pretty interested in seeing the results of this set of Malaria-resistance mutations having been more widespread.

comment by SilasBarta · 2010-01-02T23:25:16.631Z · score: -1 (1 votes) · LW · GW

Probably not badly enough to pony up for the computational power necessary to find the answer though, right?

ETA: Nevermind, didn't see the parent prompt. Still an important consideration though, so I'm leaving it in...

comment by Kaj_Sotala · 2010-01-01T17:18:52.261Z · score: 7 (9 votes) · LW · GW

I'd be curious to know what would have happened if Christopher Columbus's fleet had been lost at sea during his first voyage across the Atlantic. Most scholars were already highly skeptical of his plans, as they were based on a miscalculation, and him not returning would have further discouraged any explorers from setting off in that direction. How much longer would it have taken before Europeans found out about the Americas, and how would history have developed in the meanwhile?

comment by Jack · 2010-01-02T12:09:52.699Z · score: 1 (1 votes) · LW · GW

Have you read Orson Scott Card's "Pastwatch: The Redemption of Christopher Columbus"? It suggest an answer to this question.

comment by CronoDAS · 2010-01-02T23:12:59.497Z · score: 1 (1 votes) · LW · GW

Not a very realistic one, though.

comment by RolfAndreassen · 2010-01-01T23:57:51.172Z · score: 4 (4 votes) · LW · GW

I would try to study the effects of individual humans, Great-Man vs Historical Inevitability style, by knocking out statesmen of a particular period. Hitler is a cliche, whom I'd nonetheless start with; but I'd follow up by seeing what happens if you kill Chamberlain, Churchill, Roosevelt, Stalin... and work my way down to the likes of Turing and Doenitz. Do you still get France overrun in six weeks? A resurgent German nationalism? A defiant to-the-last-ditch mood in Britain? And so on.

Then I'd start on similar questions for the unification of Germany: Bismarck, Kaiser Wilhelm, Franz Josef, Marx, Napoleon III, and so forth. Then perhaps the Great War or the Cold War, or perhaps I'd be bored with recent history and go for something medieval instead - Harald wins at Stamford Bridge, perhaps. Or to maintain the remove-one-person style of the experiment, there are the three claimants to the British throne, one could kill Edward the Confessor earlier, the Pope has a hand in it, there are the various dukes and other feudal lords in England... lots of fun to be had with this scenario!

comment by DanArmak · 2010-01-02T22:29:04.788Z · score: 1 (1 votes) · LW · GW

Don't limit yourself to just killing people. It's not a good way to learn how history works, just like studying biology by looking at organisms with defective genes doesn't tell us everything we'd like to know about cell biology.

comment by RolfAndreassen · 2010-01-04T23:10:21.207Z · score: 0 (0 votes) · LW · GW

Nu, but I specified the particular part of "how history works" that I want to study, namely, are individuals important to large-scale events? For that purpose I think killing people would work admirably well. For other studies, certainly, I would use a different technique.

comment by DanArmak · 2010-01-05T10:09:05.220Z · score: 1 (1 votes) · LW · GW

For that purpose I think killing people would work admirably well.

If you're ok with a yes or no answer, then it's enough. If you also want to know how individuals may be important to events, killing may not be enough, I think.

comment by dfranke · 2010-01-01T17:47:54.232Z · score: 4 (4 votes) · LW · GW

I'd like to know what would have happened if movable type had been invented in the 3rd century AD.

comment by Nick_Novitski · 2010-01-04T18:09:54.523Z · score: 2 (2 votes) · LW · GW

For starters, the Council of Nicea would flounder helplessly as every sect with access to a printing press flooded the market with its particular version of Christianity.

comment by anonym · 2010-01-02T19:58:09.674Z · score: 3 (3 votes) · LW · GW

I'd like to know what would have happened if the Library of Alexandria hadn't been destroyed. If even the works of Archimedes alone -- including the key insight underlying Integral Calculus -- had survived longer and been more widely disseminated, what difference would that have made to the future progress of mathematics and technology?

comment by PeterS · 2010-01-01T20:26:08.790Z · score: 3 (3 votes) · LW · GW

I've been curious to know what the "U.S." would be like today if the American Revolution had failed.

Also, though it's a bit cliche to respond to this question with something like "Hitler is never born", it is interesting to think about just what is necessary to propel a nation into war / dictatorship / evil like that (e.g. just when can you kill / eliminate a single man and succeed in preventing it?) That's something I'm fairly curious about (and the scope of my curiosity isn't necessarily confined to Hitler - could be Bush II, Lincoln, Mao, an Islamic imam whose name I've forgotten, etc.).

comment by DanielLC · 2010-01-01T23:56:34.884Z · score: 2 (2 votes) · LW · GW

Something like Canada I guess.

While we're at it, what if the Constitutional Convention had failed to replace the Articles of Confederation?

comment by i77 · 2010-01-01T21:47:04.891Z · score: 1 (3 votes) · LW · GW

I've been curious to know what the "U.S." would be like today if the American Revolution had failed.

Code Geass :)

comment by LucasSloan · 2010-01-01T23:04:43.722Z · score: 0 (0 votes) · LW · GW

Sadly, that is more like the result if the ARW had failed and the laws of physics were weirdly different.

comment by Alicorn · 2010-01-01T17:29:21.305Z · score: 3 (5 votes) · LW · GW

I would like to know what would have happened if, sometime during the Dark Ages let's say, benevolent and extremely advanced aliens had landed with the intention to fix everything. I would diligently copy and disseminate the entire Wikipedia-equivalent for the generously-divulged scientific and sociological knowledge therein, plus cultural notes on the aliens such that I could write a really keenly plausible sci-fi series.

comment by Gavin · 2010-01-01T21:58:34.331Z · score: 3 (3 votes) · LW · GW

A sci-fi series based on real extra-terrestrials would quite possibly be so alien to us that no one would want to read it.

comment by billswift · 2010-01-01T23:18:17.017Z · score: 5 (5 votes) · LW · GW

Not just science fiction and aliens either. Nearly all popular and successful fiction is based around what are effectively modern characters in whatever setting. I remember a paper I read back around the mid-eighties pointing out that Louis L'Amour's characters were basically just modern Americans with the appropriate historical technology and locations.

comment by dclayh · 2010-01-04T22:33:19.862Z · score: 0 (0 votes) · LW · GW

I've found that Umberto Eco's novels do the best job I've seen at avoiding this.

comment by pdf23ds · 2010-01-02T18:02:10.868Z · score: 0 (0 votes) · LW · GW

I'd love to see an essay-length expansion on this theme.

comment by billswift · 2010-01-03T06:30:00.770Z · score: 0 (0 votes) · LW · GW

As I wrote, I read it in something in the 1980s. Probably, but I'm not sure, in Olander and Greenberg's "Robert A. Heinlein" or in Franklin's "Robert A. Heinlein: America as Science Fiction".

comment by Alicorn · 2010-01-01T22:13:00.145Z · score: 2 (2 votes) · LW · GW

I might have to mess with them a bit to get an audience, yes.

comment by Zack_M_Davis · 2010-01-01T22:18:41.521Z · score: 2 (4 votes) · LW · GW

Of course you can't fully describe the scenario, or you would already have your answer, but even so, this question seems tantalizingly underspecified. Fix everything, by what standard? Human goals aren't going to sync up exactly with alien goals (or why even call them aliens?), so what form does the aliens' benevolence take? Do they try to help the humans in the way that humans would want to be helped, insofar as that problem has a unique answer? Do they give humanity half the stars, just to be nice? Insofar as there isn't a unique answer to how-humans-would-want-to-be-helped, how can the aliens avoid engaging in what amounts to cultural imperialism---unilaterally choosing what human civilization develops into? So what kind of imperialism do they choose?

How advanced are these aliens? Maybe I'm working off horribly flawed assumptions, but in truth it seems kind of odd for them to have interstellar travel without superintelligence and uploading. (You say you want to write keenly plausible science fiction, so you are going to have to do this kind of analysis.) The alien civilization has to be rich and advanced enough to send out a benevolent rescue ship, and yet not develop superintelligence and send out a colonization wave at near-c to eat the stars and prevent astronomical waste. Maybe the rescue ship itself was sent out at near-c and the colonization wave won't catch up for a few decades or centuries? Maybe the rescue ship was sent out, and then the home civilization collapsed or died out?---and the rescue ship can't return or rebuild on its own (not enough fuel or something), so they need some of the Sol system's resources?

Or maybe there's something about the aliens' culture and psychology such that they are capable of developing interstellar travel but not capable of developing superintelligence? I don't think it should be too surprising if the aliens should be congenitally confused, unable to discover certain concepts. (Compare how the hard problem of consciousness just seems impossible; maybe humans happen to be flawed in such a way such that we can never understand qualia.) So the aliens send their rescue ship, share their science and culture (insofar as alien culture can be shared), and eighty years later, the humans build an FAI. Then what?

comment by Alicorn · 2010-01-01T22:25:02.014Z · score: 3 (3 votes) · LW · GW

Human goals aren't going to sync up exactly with alien goals

Why not, as long as I'm making things up?

(or why even call them aliens?)

Because they are from another planet.

I do not know enough science to address the rest of your complaints.

comment by Zack_M_Davis · 2010-01-01T23:08:20.671Z · score: 3 (3 votes) · LW · GW

Why not, as long as I'm making things up?

I'm worried that some of my concepts here are a little bit shaky and confused in a way that I can't articulate, but my provisional answer is: because their planet would have to be virtually a duplicate of Earth to get that kind of match. Suppose that my deepest heart's desire, my lifework, is for me to write a grand romance novel about an actuary who lives in New York and her unusually tall boyfriend. That's a necessary condition for my ideal universe: it has to contain me writing this beautiful, beautiful novel.

It doesn't seem all that implausible that powerful aliens would have a goal of "be nice to all sentient creatures," in which case they might very well help me with my goal in innumerable ways, perhaps by giving me a better word processor, or providing life extension so I can grow up to have a broader experience base with which to write. But I wouldn't say that this is the same thing as the alien sharing my goals, because if humans had never evolved, it almost certainly wouldn't have even occurred to the alien to create, from scratch, a human being who writes a grand romance novel about an actuary who lives in New York and her unusually tall boyfriend. A plausible alien is simply not going to spontaneously invent those concepts and put special value on them. Even if they have rough analogues to courtship story or even person who is rewarded for doing economic risk-management calculations, I guarantee you they're not going to invent New York.

Even if the alien and I end up cooperating in real life, when I picture my ideal universe, and when they picture their ideal universe, they're going to be different visions. The closest thing I can think of would be for the aliens to have evolved a sort of domain-general niceness, and to have a top-level goal for the universe to be filled with all sorts of diverse life with their own analogues of pleasure or goal-achievement or whatever, which me and my beautiful, beautiful novel would qualify as a special case of. Actually, I might agree with that as a good summary description of my top-level goal. The problem is, there are a lot of details that that summary description doesn't pin down, which we would expect to differ. Even if the alien and I agree that the universe should blossom with diverse life, we would almost certainly have different rankings of which kinds of possible diverse life get included. If our future lightcone only has room for 10^200 observer-moments, and there are 10^4000 possible observer-moments, then some possible observer-moments won't get to exist. I would want to ensure that me and my beautiful, beautiful novel get included, whereas the alien would have no advance reason to privilege me and my beautiful, beautiful novel over the quintillions of other possible beings with desires that they think of as their analogue of beautiful, beautiful.

This brings us to the apparent inevitability of something like cultural imperialism. Humans aren't really optimizers---there doesn't seem to be one unique human vision for what the universe should look like; there's going to be room for multiple more-or-less reasonable construals of our volition. That being the case, why shouldn't even benevolent aliens pick the construal that they like best?

comment by Alicorn · 2010-01-01T23:37:52.580Z · score: 2 (2 votes) · LW · GW

Domain-general niceness works. It's possible to be nice to and helpful to lots of different kinds of people with lots of different kinds of goals. Think Superhappies except with respect for autonomy.

comment by orthonormal · 2010-01-01T22:43:19.741Z · score: 3 (3 votes) · LW · GW

OK, I sense cross-purposes here. You're asking "what would be the most interesting and intelligible form of positive alien contact (in human terms)", and Zack is asking "what would be the most probable form of positive alien contact"?

(By "positive alien contact", I mean contact with aliens who have some goal that causes them to care about human values and preferences (think of the Superhappies), as opposed to a Paperclipper that only cares about us as potential resources for or obstacles to making paperclips.)

Keep in mind that what we think of as good sci-fi is generally an example of positing human problems (or allegories for them) in inventive settings, not of describing what might most likely happen in such a setting...

comment by MichaelGR · 2010-01-19T16:16:36.722Z · score: 1 (1 votes) · LW · GW

I wonder if much in 20th century history would have been different if the USSR had been first to land someone on the Moon.

At the time, both sides played it like it was something very important, if only for psychological reasons. But did that symbolic victory really mean that much? Did it actually alter the course of history much?

comment by blogospheroid · 2010-01-02T10:08:03.811Z · score: 1 (1 votes) · LW · GW

China not imposing the Hai Jin edict. Greater Chinese exploration would have meant an extremely different and interesting history.

comment by DanArmak · 2010-01-02T22:29:33.482Z · score: 2 (2 votes) · LW · GW

Greater Chinese exploration would have meant an extremely different and interesting history.

May you live in interesting times!

comment by [deleted] · 2010-01-02T08:25:24.585Z · score: 1 (1 votes) · LW · GW

A recent Facebook status of mine: Too bad Benjamin Franklin wasn't alive in 1835; he could have invented the Internet. The relay had been invented around then; that's theoretically all that's needed for computation and error correction, though it would go very slowly.

comment by JohannesDahlstrom · 2010-01-02T17:24:37.725Z · score: 3 (3 votes) · LW · GW

Well, Charles Babbage was alive back then...

comment by [deleted] · 2010-01-02T22:40:34.596Z · score: 2 (2 votes) · LW · GW

Huh. Then, uh... too bad Charles Babbage wasn't Benjamin Franklin?

comment by SilasBarta · 2010-01-02T23:32:21.928Z · score: 0 (0 votes) · LW · GW

And if not then, by the time they had extensive telegraph or telephone networks, basic computation, and typewriters, about 1890 (sic). Why didn't it happen? Numerous barriers, and overcoming them since then counts as political and scientific advances.

comment by MatthewB · 2010-01-03T09:21:35.638Z · score: 0 (0 votes) · LW · GW

This one is hard.

Take Cannae, for example. Can you really measure this as one outcome? It could be broken down into all kinds of things:

  • Varro's (or Paullus' if you happen to believe that Varro was indeed scapegoated for the disaster and that Paullus was really in command that day) decision to mass the legions in a phalanx, instead of their usual wide maniples.
  • Whether the Celts, Celtiberians and Iberians would have been able to hold the center against the legionaries without breaking.
  • Whether Hasdrubal would have managed to stop the Roman and Italian Cavalry
  • I cannot recall who was in command of the Numidians on the other flank, but if they had not been turned around when they went to pursue the Italian Allied Cavalry on their flank, that side of the battlefield would not have been enveloped by the Punic/African Heavy Infantry that Hannibal had held back

And, one could continue, probably down to the level of "Did Legionary Plebius manage to hurl his pilum soon enough to impale the charging Celtiberian Villoni in time to keep the aforementioned Celtiberian from eventually killing Plebius' centurion, who would have then gone on to kill Hannibal, just before he had time to issue the final order of the day?"

So, how are you defining "A single event" here?

comment by Kaj_Sotala · 2010-01-03T09:42:50.799Z · score: 0 (0 votes) · LW · GW

So, how are you defining "A single event" here?

Loosely. Any of the ones you listed would be fine for me.

comment by NancyLebovitz · 2010-01-03T08:14:08.306Z · score: 0 (0 votes) · LW · GW

I don't think you get a single outcome even from the best specified event-- you'd get a big sheaf of outcomes.

If you could see all the multiple futures branching off from the present and have some way of sorting through them, you could presumably make better choices than you do now, but it would still be very hard to optimize much of anything.

comment by Kaj_Sotala · 2010-01-03T08:46:26.524Z · score: 0 (0 votes) · LW · GW

Okay - "suppose you could find out the single most probable outcome..."

comment by orthonormal · 2010-01-03T09:01:19.221Z · score: 0 (0 votes) · LW · GW

Since we're talking about a continuous probability measure, I'm not sure if that's the right way to think about it. Perhaps it's best to think of a randomly chosen point from the probability measure that evolves from a concentrated mass around a particular starting configuration— that is, a typical history given a particular branching point.

comment by Kaj_Sotala · 2010-01-03T09:23:48.954Z · score: 0 (0 votes) · LW · GW

One could always argue that since there is only a finite (even if unimaginably huge) number of possible branching points, we're actually talking about a discrete probability distribution.

Your approach works, too.

comment by orthonormal · 2010-01-03T09:36:10.436Z · score: 0 (0 votes) · LW · GW

One could always argue that since there is only a finite (even if unimaginably huge) number of possible branching points, we're actually talking about a discrete probability distribution.

How do you mean?

I'm talking about the fundamental physics of the universe. From a mathematical perspective, it's far more elegant (ergo, more likely) to deal with a partial differential equation defined on a continuous configuration space. Attempts to discretize the space in the name of infinite-set atheism seem ad-hoc to me.

comment by Kaj_Sotala · 2010-01-03T09:46:03.434Z · score: 0 (0 votes) · LW · GW

Oh, right - I was under the impression that MWI would have involved discrete transitions at some point (I haven't had the energy to read all of the MWI sequence). If that's incorrect, then ignore my previous comment.

comment by DanArmak · 2010-01-02T22:19:35.256Z · score: 0 (0 votes) · LW · GW

What would that event be?

The easy and trite answer is: the event of EY discovering a correct FAI theory, which is so simple that it's fully described in the Wikipedia article.

comment by Zack_M_Davis · 2010-01-02T23:17:16.332Z · score: 0 (0 votes) · LW · GW

Related: what if I. J. Good had taken himself seriously and started a Singularity effort rather than just writing that one article?

comment by Larks · 2010-01-01T23:49:56.229Z · score: 0 (0 votes) · LW · GW

If Einstein had been wrong, and Newton right. More specifically, if the experiments performed at the time had revealed that the speed of light was relative and that the Earth moved through the ether.

comment by Jack · 2010-01-02T11:48:22.637Z · score: 1 (1 votes) · LW · GW

Surely this isn't changing a single historical event but the laws governing our universe.

comment by Wei_Dai · 2010-01-21T02:49:20.315Z · score: 5 (5 votes) · LW · GW

Suppose we want to program an AI to represent the interests of a group. The standard utilitarian solution is to give the AI a utility function that is an average of the utility functions of the individuals in the group, but that runs into the interpersonal comparison of utility problem. (Was there ever a post about this? Does Eliezer have a preferred approach?)

Here's my idea for how to solve this. Create N AIs, one for each individual in the group, and program each with the utility function of that individual. Then set a time in the future when one of those AIs will be randomly selected and allowed to take over the universe. In the meantime the N AIs are to negotiate amongst themselves and, if necessary, be given help to enforce their agreements.

The advantages of this approach are:

  • AIs will need to know how to negotiate with each other anyway, so we can build on top of that "for free".
  • There seems little question that the scheme is fair, since everyone is given an equal amount of bargaining power.

Comments?

ETA: I found a very similar idea mentioned before by Eliezer.
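For intuition, here is a minimal sketch of why negotiating delegates could improve on the random-selection baseline. Everything here is invented for illustration (two delegates dividing one unit of a resource, concave utilities, a grid-search Nash bargaining solution); it is not part of the original proposal:

```python
import math

# Hypothetical example: two delegates dividing one unit of a resource
# (shares x and 1 - x), each with a concave (risk-averse) utility.
def u1(x): return math.sqrt(x)
def u2(x): return math.sqrt(1.0 - x)

# Disagreement point: one delegate is picked by a fair coin flip to
# "take over the universe", so each delegate's baseline is its expected
# utility under that lottery.
d1 = 0.5 * (u1(1.0) + u1(0.0))  # 0.5
d2 = 0.5 * (u2(0.0) + u2(1.0))  # 0.5

# Nash bargaining: pick the split maximizing the product of gains over
# the disagreement point (grid search for simplicity).
best_x, best_val = None, -1.0
for i in range(101):
    x = i / 100
    g1, g2 = u1(x) - d1, u2(x) - d2
    if g1 >= 0 and g2 >= 0 and g1 * g2 > best_val:
        best_x, best_val = x, g1 * g2

print(best_x)  # 0.5: both risk-averse delegates prefer a sure even
               # split to the coin-flip lottery
```

With risk-averse utilities both delegates strictly prefer the negotiated even split to the status-quo lottery, which is the sense in which the scheme's negotiation phase adds value over pure random selection.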

comment by Alicorn · 2010-01-21T02:56:37.076Z · score: 3 (3 votes) · LW · GW

Unless you can directly extract a sincere and accurate utility function from the participants' brains, this is vulnerable to exaggeration in the AI programming. Say my optimal amount of X is 6. I could program my AI to want 12 of X, but be willing to back off to 6 in exchange for concessions regarding Y from other AIs that don't want much X.

comment by wedrifid · 2010-01-21T03:14:04.109Z · score: 1 (1 votes) · LW · GW

This does not seem to be the case when the AIs are unable to read each other's minds. Your AI can be expected to lie to others with more tactical effectiveness than you can lie indirectly via deceiving it. Even in that case it would be better to let the AI rewrite itself for you.

On a similar note, being able to directly extract a sincere and accurate utility function from the participants' brains leaves the system vulnerable to exploitation. Individuals are able to rewrite their own preferences strategically in much the same way that an AI can. Future-me may not be happy but present-me got what he wants and I don't (necessarily) have to care about future me.

comment by Wei_Dai · 2010-01-21T03:28:46.628Z · score: 0 (0 votes) · LW · GW

I had also mentioned this in an earlier comment on another thread. It turns out that this is a standard concern in bargaining theory. See section 11.2 of this review paper.

So, yeah, it's a problem, but it has to be solved anyway in order for AIs to negotiate with each other.

comment by timtyler · 2011-06-04T20:47:05.192Z · score: 0 (0 votes) · LW · GW

Create N AIs, one for each individual in the group, and program it with the utility function of that individual. [...] everyone is given an equal amount of bargaining power.

Do you think the more powerful group members are going to agree to that?!? They worked hard for their power and status - and are hardly likely to agree to their assets being ripped away from them in this way. Surely they will ridicule your scheme, and fight against it being implemented.

comment by Wei_Dai · 2011-06-05T21:56:32.903Z · score: 3 (3 votes) · LW · GW

The main idea I wanted to introduce in that comment was the idea of using (supervised) bargaining to aggregate individual preferences. Bargaining power (or more generally, weighing of individual preferences) is a mostly orthogonal issue. If equal bargaining power turns out to be impractical and/or immoral, then some other distribution of bargaining power can be used.

comment by Roko · 2010-01-24T20:01:44.910Z · score: 0 (0 votes) · LW · GW

Why not use virtual agents, which are given only a safe interface to negotiate with each other over, and no physical powers, and are monitored by a meta-AI that prevents them from trying to game the system, fool each other, etc. This would avoid having wars between superintelligences in the real physical universe.

comment by Wei_Dai · 2010-01-25T03:17:13.402Z · score: 0 (0 votes) · LW · GW

I think that's what I implied: there is a supervisor process that governs the negotiation process and eventually picks a random AI to be released into the real world.

comment by Roko · 2010-01-25T13:53:30.851Z · score: 0 (0 votes) · LW · GW

ok, just checking you weren't advocating a free-for-all.

comment by Vladimir_Nesov · 2010-01-22T14:54:01.439Z · score: 0 (0 votes) · LW · GW

What exactly is "equal bargaining power" is vague. If you "instantiate" multiple AIs, their "bargaining power" may well depend on their "positions" relative to each other, the particular values in each of them, etc.

Then set a time in the future when one of those AIs will be randomly selected and allowed to take over the universe.

Why this requirement? A cooperation of AIs might as well be one AI. Cooperation between AIs is just a special case of operation of each AI in the environment, and where you draw the boundary between AI and environment is largely arbitrary.

comment by Wei_Dai · 2010-01-22T16:38:34.818Z · score: 1 (1 votes) · LW · GW

Why this requirement?

The idea is that the status quo (i.e., the outcome if the AIs fail to cooperate) is N possible worlds of equal probability, each shaped according to the values of one individual/AI. The AIs would negotiate from this starting point and improve upon it. If all the AIs cooperate (which I presume would be the case), then which AI gets randomly selected to take over the world won't make any difference.

What exactly is "equal bargaining power" is vague. If you "instantiate" multiple AIs, their "bargaining power" may well depend on their "positions" relative to each other, the particular values in each of them, etc.

In this case the AIs start from an equal position, but you're right that their values might also figure into bargaining power. I think this is related to a point Eliezer made in the comment I linked to: a delegate may "threaten to adopt an extremely negative policy in order to gain negotiating leverage over other delegates." So if your values make you vulnerable to this kind of threat, then you might have less bargaining power than others. Is this what you had in mind?

comment by Vladimir_Nesov · 2010-01-22T22:46:15.131Z · score: 1 (1 votes) · LW · GW

Letting a bunch of AIs with given values resolve their disagreement is not the best way to merge values, just as letting humanity go on as it is is not the best way to preserve human values. As extraction of preference shouldn't depend on the actual "power" or even stability of the given system, merging of preference could also possibly be done directly and more fairly when specific implementations and their "bargaining power" are abstracted away. Such implementation-independent composition/interaction of preference may turn out to be a central idea for the structure of preference.

comment by andreas · 2010-01-24T01:06:47.326Z · score: 1 (1 votes) · LW · GW

There seems to be a bootstrapping problem: In order to figure out what the precise statement is that human preference makes, we need to know how to combine preferences from different systems; in order to know how preferences should combine, we need to know what human preference says about this.

comment by Vladimir_Nesov · 2010-01-24T01:21:15.288Z · score: 1 (1 votes) · LW · GW

If we already have a given preference, it will only retell itself as an answer to the query "What preference should result [from combining A and B]?", so that's not how the game is played. "What's a fair way of combining A and B?" may be more like it, but of questionable relevance. For now, I'm focusing on getting a better idea of what kind of mathematical structure preference should be, rather than on how to point to the particular object representing the given imperfect agent.

comment by Wei_Dai · 2010-01-25T04:15:14.531Z · score: 0 (0 votes) · LW · GW

For now, I'm focusing on getting a better idea of what kind of mathematical structure preference should be

What is/are your approach(es) for attacking this problem, if you don't mind sharing?

In my UDT1 post I suggested that the mathematical structure of preference could be an ordering on all possible (vectors of) execution histories of all possible computations. This seems general enough to represent any conceivable kind of preference (except preferences about uncomputable universes), but also appears rather useless for answering the question of how preferences should be merged.
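A toy sketch of that structure (not UDT itself; the programs and the utility function below are invented): a preference can be modeled as a utility function, hence an ordering, over vectors of execution histories, one history per program:

```python
# Toy model: a "history" is the sequence of states a tiny program steps
# through; a preference is a utility function (hence an ordering) over
# vectors of histories, one history per program.

def run(program, steps):
    """Return the execution history of a small state-transition program."""
    state, history = 0, [0]
    for _ in range(steps):
        state = program(state)
        history.append(state)
    return tuple(history)

p1 = lambda s: s + 1      # hypothetical program 1
p2 = lambda s: s * 2 + 1  # hypothetical program 2

histories = (run(p1, 3), run(p2, 3))  # ((0, 1, 2, 3), (0, 1, 3, 7))

# A hypothetical preference: rank history-vectors by the sum of their
# final states. Any total order over such tuples would serve.
def utility(hist_vector):
    return sum(h[-1] for h in hist_vector)

print(utility(histories))  # 10
```

The generality is visible even in the toy: the utility function can depend on anything in the histories, but nothing in this representation by itself suggests how two such functions should be merged.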

comment by Vladimir_Nesov · 2010-01-29T19:23:29.127Z · score: 0 (0 votes) · LW · GW

For now, I'm focusing on getting a better idea of what kind of mathematical structure preference should be

What is/are your approach(es) for attacking this problem, if you don't mind sharing?

Since I don't have self-contained results, I can't describe what I'm searching for concisely, and the working hypotheses and hunches are too messy to summarize in a blog comment. I'll give some of the motivations I found towards the end of the current blog sequence, and possibly will elaborate in the next one if the ideas sufficiently mature.

In my UDT1 post I suggested that the mathematical structure of preference could be an ordering on all possible (vectors of) execution histories of all possible computations. This seems general enough to represent any conceivable kind of preference (except preferences about uncomputable universes), but also appears rather useless for answering the question of how preferences should be merged.

Yes, this is not very helpful. Consider the question: what is the difference between (1) preference, (2) strategy that the agent will follow, and (3) the whole of the agent's algorithm? Histories of the universe could play a role in semantics of (1), but they are problematic in principle, because we don't know, nor will ever know with certainty, the true laws of the universe. And what we really want is to get to (3), not (1), but with good understanding of (1) so that we know (3) to be based on our (1).

comment by Wei_Dai · 2010-01-30T01:25:07.800Z · score: 0 (0 votes) · LW · GW

I'll give some of the motivations I found towards the end of the current blog sequence, and possibly will elaborate in the next one if the ideas sufficiently mature.

Thanks. I look forward to that.

Histories of the universe could play a role in semantics of (1), but they are problematic in principle, because we don't know, nor will ever know with certainty, the true laws of the universe.

I don't understand what you mean here, and I think maybe you misunderstood something I said earlier. Here's what I wrote in the UDT1 post:

More generally, we can always represent your preferences as a utility function on vectors of the form ⟨E1, E2, ...⟩ where E1 is an execution history of P1, E2 is an execution history of P2, and so on.

(Note that of course this utility function has to be represented in a compressed/connotational form, otherwise it would be infinite in size.) If we consider the multiverse to be the execution of all possible programs, there is no uncertainty about the laws of the multiverse. There is uncertainty about "which universes, i.e., programs, we're in", but that's a problem we already have a handle on, I think.

So, I don't know what you're referring to by "true laws of the universe", and I can't find an interpretation of it where your quoted statement makes sense to me.

comment by Vladimir_Nesov · 2010-01-30T13:16:58.219Z · score: 0 (0 votes) · LW · GW

If we consider the multiverse to be the execution of all possible programs, there is no uncertainty about the laws of the multiverse.

I don't believe that directly posing this "hypothesis" is a meaningful way to go, although the computational paradigm can find its way into the description of the environment for the AI that in its initial implementation works from within a digital computer.

comment by andreas · 2010-01-24T17:50:38.516Z · score: 0 (0 votes) · LW · GW

Here is a revised way of asking the question I had in mind: If our preferences determine which extraction method is the correct one (the one that results in our actual preferences), and if we cannot know or use our preferences with precision until they are extracted, then how can we find the correct extraction method?

Asking it this way, I'm no longer sure it is a real problem. I can imagine that knowing what kind of object preference is would clarify what properties a correct extraction method needs to have.

comment by Vladimir_Nesov · 2010-01-24T18:53:30.547Z · score: 0 (0 votes) · LW · GW

Going meta and using the (potentially) available data, such as humans in the form of uploads, is a step made in an attempt to minimize the amount of data (given explicitly by the programmers) to the process that reconstructs human preference. Sure, it's a bet (there are no universal preference-extraction methods that interpret every agent in a way it'd prefer to be interpreted itself, so we have to make a good enough guess), but there seems to be no other way to have a chance at preserving current preference. Also, there may turn out to be a good means of verification that the solution given by a particular preference-extraction procedure is the right one.

comment by pdf23ds · 2010-01-23T12:51:10.651Z · score: 1 (1 votes) · LW · GW

So you know how to divide the pie? There is no interpersonal "best way" to resolve directly conflicting values. (This is further than Eliezer went.) Sure, "divide equally" makes a big dent in the problem, but I find it much more likely any given AI will be a Zaire than a Yancy. As a simple case, say AI1 values X at 1, and AI2 values Y at 1, and X+Y must, empirically, equal 1. I mean, there are plenty of cases where there's more overlap and orthogonal values, but this kind of conflict is unavoidable between any reasonably complex utility functions.

comment by Vladimir_Nesov · 2010-01-23T13:11:15.895Z · score: 1 (1 votes) · LW · GW

There is no interpersonal "best way" to resolve directly conflicting values.

I'm not suggesting an "interpersonal" way (as in, by a philosopher of perfect emptiness). The possibilities open for the search of "off-line" resolution of conflict (with abstract transformation of preference) are wider than those for the "on-line" method (with AIs fighting/arguing it over) and so the "best" option, for any given criterion of "best", is going to be better in "off-line" case.

comment by Wei_Dai · 2010-01-23T00:39:09.616Z · score: 0 (0 votes) · LW · GW

Letting a bunch of AIs with given values resolve their disagreement is not the best way to merge values

[Edited] I agree that it is probably not the best way. Still, the idea of merging values by letting a bunch of AIs with given values resolve their disagreement seems better than previous proposed solutions, and perhaps gives a clue to what the real solution looks like.

BTW, I have a possible solution to the AI-extortion problem mentioned by Eliezer. We can set a lower bound for each delegate's utility function at the status quo outcome (N possible worlds with equal probability, each shaped according to one individual's utility function). Then any threats to cause an "extremely negative" outcome will be ineffective since the "extremely negative" outcome will have utility equal to the status quo outcome.
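The proposed lower bound amounts to clamping each delegate's utility at the status-quo value; a minimal sketch (the function names and the example numbers are hypothetical):

```python
def clamped_utility(raw_utility, status_quo_value):
    """Wrap a delegate's utility so that no outcome scores below the
    status quo; threats to bring about very negative outcomes then
    carry no bargaining weight."""
    def u(outcome):
        return max(raw_utility(outcome), status_quo_value)
    return u

# Hypothetical example: a threatened outcome with raw utility -1000
# is worth no less than the status quo (here 0.5) after clamping,
# while genuinely better outcomes keep their value.
raw = lambda o: {"threat": -1000.0, "deal": 0.8}[o]
u = clamped_utility(raw, 0.5)
print(u("threat"), u("deal"))  # 0.5 0.8
```

Under this clamp, carrying out a threat costs the threatener leverage without lowering the target below its disagreement point, which is the sense in which the extortion becomes ineffective.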

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-07T01:52:54.995Z · score: 5 (5 votes) · LW · GW

Richard Dawkins talking to an astrologer. Best part at 10m28s.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-03-03T02:10:30.358Z · score: 8 (8 votes) · LW · GW

Transcript:

--

Dawkins: We could devise a little experiment where we take your forecasts and then give some of them straight, give some of them randomized, sometimes give Virgo the Pisces forecast et cetera. And then ask people how accurate they were.

Astrologer: Yes, that would be a perverse thing to do, wouldn't it.

Dawkins: It would be - yes, but I mean wouldn't that be a good test?

Astrologer: A test of what?

Dawkins: Well, how accurate you are.

Astrologer: I think that your intention there is mischief, and I'd think what you'd then get back is mischief.

Dawkins: Well my intention would not be mischief, my intention would be experimental test. A scientific test. But even if it was mischief, how could that possibly influence it?

Astrologer: (Pause.) I think it does influence it. I think whenever you do things with astrology, intentions are strong.

Dawkins: I'd have thought you'd be eager.

Astrologer: (Laughs.)

Dawkins: The fact that you're not makes me think you don't really in your heart of hearts believe it. I don't think you really are prepared to put your reputation on the line.

Astrologer: I just don't believe in the experiment, Richard, it's that simple.

Dawkins: Well you're in a kind of no-lose situation then, aren't you.

Astrologer: I hope so.

--
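The blinded test Dawkins proposes has a standard analysis; here is a sketch using simulated 1-10 accuracy ratings and a permutation test (no real data, so by construction the ratings show no genuine effect):

```python
import random

random.seed(0)

# Simulated ratings of how accurate a forecast felt. Under the null
# hypothesis, readers rate their own sign's forecast and a randomly
# swapped one the same on average.
matched = [random.randint(1, 10) for _ in range(100)]  # own-sign forecasts
swapped = [random.randint(1, 10) for _ in range(100)]  # randomized forecasts

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(matched) - mean(swapped)

# Permutation test: reshuffle the group labels many times and count how
# often a difference at least as large arises by chance alone.
pooled = matched + swapped
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[:100]) - mean(pooled[100:])
    if abs(diff) >= abs(observed):
        extreme += 1

p_value = extreme / trials
print(p_value)  # a large p-value: no detectable effect, as expected here
```

If astrology worked, real matched ratings would beat swapped ones and the p-value would be small; the astrologer's refusal means the experiment never gets to that point.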

comment by PhilGoetz · 2010-01-07T05:14:10.746Z · score: 4 (4 votes) · LW · GW

Dawkins: "Well... you're sort of in a no-lose situation, then."

Astrologer: "I certainly hope so."

comment by AngryParsley · 2010-01-09T02:39:30.418Z · score: 3 (3 votes) · LW · GW

That video has been taken down, but you can skip to around 5 minutes into this video to watch the astrology bit.

comment by blashimov · 2012-11-14T18:11:16.502Z · score: 0 (0 votes) · LW · GW

The linked video is set to private? I can't view it. Not a big deal, the transcript is almost as good.

comment by Cyan · 2010-01-20T17:34:30.879Z · score: 2 (2 votes) · LW · GW

A fine example of:

To correctly anticipate, in advance, which experimental results shall need to be excused, the dragon-claimant must (a) possess an accurate anticipation-controlling model somewhere in his mind, and (b) act cognitively to protect either (b1) his free-floating propositional belief in the dragon or (b2) his self-image of believing in the dragon.

comment by Furcas · 2010-01-07T02:07:03.788Z · score: 0 (0 votes) · LW · GW

Now you've gone and depressed me.

comment by Kaj_Sotala · 2010-01-03T08:20:30.943Z · score: 5 (5 votes) · LW · GW

Oh, and to post another "what would you find interesting" query, since I found the replies to the last one to be interesting. What kind of crazy social experiment would you be curious to see the results of? Can be as questionable or unethical as you like; Omega promises you ve'll run the simulation with the MAKE-EVERYONE-ZOMBIES flag set.

comment by Blueberry · 2010-01-03T12:34:05.471Z · score: 10 (10 votes) · LW · GW

There are several that I've wondered about:

  1. Raise a kid by machine, with physical needs provided for, and expose the kid to language using books, recordings, and video displays, but no interactive communication or contact with humans. After 20 years or so, see what the person is like.

  2. Try to create a society of unconscious people with bicameral minds, as described in Julian Jaynes's "The Origin of Consciousness in the Breakdown of the Bicameral Mind", using actors taking on the appropriate roles. (Jaynes's theory, which influenced Daniel Dennett, was that consciousness is a recent cultural innovation.)

  3. Try to create a society where people grow up seeing sexual activity as casual, ordinary, and expected as shaking hands or saying hello, and see whether sexual taboos develop, and study how sexual relationships form.

  4. Raise a bunch of kids speaking artificial languages, designed to be unlike any human language, and study how they learn and modify the language they're taught. Or give them a language without certain concepts (relatives, ethics, the self) and see how the language influences they way they think and act.

comment by Roko · 2010-01-03T13:03:37.506Z · score: 9 (17 votes) · LW · GW

Raise a kid by machine, with physical needs provided for, and expose the kid to language using books, recordings, and video displays, but no interactive communication or contact with humans. After 20 years or so, see what the person is like

They'd probably be like the average less wrong commenter/singularitarian/transhumanist, so really no need to run this one.

comment by Jack · 2010-01-03T13:20:12.150Z · score: 0 (2 votes) · LW · GW

Before I spend a lot of time writing a response: Was this a joke?

comment by Roko · 2010-01-03T16:05:31.481Z · score: 13 (19 votes) · LW · GW

So, one result of this experiment would be/is a significantly below average ability to distinguish humor from serious debate...

comment by AdeleneDawner · 2010-01-03T16:24:43.110Z · score: 9 (13 votes) · LW · GW

Or significantly below average ability to signal whether something is humorous or serious. ;)

comment by Jack · 2010-01-03T16:51:04.412Z · score: 0 (4 votes) · LW · GW

What Adelene said. I'm afraid it isn't very funny. :-)

comment by MatthewB · 2010-01-03T13:09:35.446Z · score: 2 (2 votes) · LW · GW

I've noticed that some of the Pacific Island countries don't have much in the way of sexual taboos, and they tend to teach their kids things like:

  • Don't stick your thingy in there without proper lube

or

  • If you are going to do that, clean up afterward.

Japan is also a country that has few sexual taboos (when compared to western Christian society). They still have their taboos and strangeness surrounding sex, but it is not something that is considered sinful or dirty.

I am really interested in that last suggestion, and it sounds like one of the areas I want to explore when I get to grad school (and beyond). At Eliezer's talk at the first Singularity Summit (and other talks I have heard him give) he speaks of a possible mind space. I would like to explore that mind space further outside of the human mind.

As John McCarthy proposed in one of his books, it might be the case that even a thermostat is a type of mind. I have been exploring how current computers are a type of evolving mind with people as the genetic agents. We take things in computers that work for us, and combine those with other things, to get an evolutionary development of an intelligent agent.

I know that it is nothing special, and others have gone down that path as well, but I'd like to look into how we can create these types of minds biologically. Is it possible to create an alien mind in a human brain? Your 4th suggestion seems to explore this space. I like that (I should upvote it as a result).

comment by NancyLebovitz · 2010-01-04T12:25:23.619Z · score: 1 (1 votes) · LW · GW

Point 1: I'm not sure what you mean by physical needs. If human babies aren't cuddled, they die. Humans are the only known species to do this.

A General Theory of Love describes the connection between the limbic system and love-- I thought it was a good book, but to judge by the Amazon reviews, it's more personally important to a lot of intellectual readers than I would have expected.

comment by Blueberry · 2010-01-04T19:41:32.546Z · score: 1 (1 votes) · LW · GW

I'm not sure what you mean by physical needs. If human babies aren't cuddled, they die. Humans are the only known species to do this.

I've heard that called "failure to thrive" before. Yes, we'd need some kind of machine to provide whatever tactile stimulation was required. Given the way many primates groom each other and touch each other for social bonding, I'd be surprised if it were just humans who needed touch.

comment by NancyLebovitz · 2010-01-05T12:02:08.027Z · score: 1 (1 votes) · LW · GW

A lot of animals need touch to grow up well. Only humans need touch to survive.

A General Theory of Love describes experiments with baby rodents to determine which physical systems are affected by which aspects of contact with the mother-- touch is crucial for one system, smell for another.

comment by Peter_de_Blanc · 2010-01-04T06:04:50.804Z · score: 0 (0 votes) · LW · GW

I just read about #2 on wikipedia. Wow. Science is so much weirder than science fiction.

comment by Blueberry · 2010-01-04T19:39:44.251Z · score: 0 (0 votes) · LW · GW

I should warn you that Julian Jaynes's theory may be more like science fiction than science. It's interesting speculation but it's still a very controversial theory (which is why I'd love to test it). Daniel Dennett has written a couple articles talking about how he's adapted parts of Jaynes's theory into his theories of consciousness, and his books discuss some of the experimental evidence which sheds some light on similar theories about consciousness.

comment by MBlume · 2010-01-05T11:56:25.636Z · score: 3 (5 votes) · LW · GW

I'd like to put about 50 anosognosiacs and one healthy person in a room on some pretext, and see how long it takes the healthy person to notice everyone else is delusional, and whether ve then starts to wonder if ve is delusional too.

comment by Kaj_Sotala · 2010-01-03T08:25:45.617Z · score: 3 (3 votes) · LW · GW

I'd be really curious to see what happened in a society where your social gender was determined by something else than your biological sex. Birth order, for instance. Odd male and even female, so that every family's first child is considered a boy and their second a girl. Or vice versa. No matter what the biology. (Presumably, there'd need to be some certain sign of the gender to tell the two apart, like all social females wearing a dress and no social males doing so.)

comment by RichardKennaway · 2010-01-03T18:30:08.292Z · score: 0 (0 votes) · LW · GW

The concept of the berdache might be relevant. The link is just to a Google search on the word, as the politics surrounding it leave me uncertain what to believe about the subject.

comment by AdeleneDawner · 2010-01-03T12:16:40.216Z · score: 0 (0 votes) · LW · GW

Ursula LeGuin has written a short story with a premise that's not quite the same, but still interesting. (The introduction is the useful part, there - the story excerpt cuts off before getting anywhere terribly interesting.)

comment by Kaj_Sotala · 2010-01-03T13:08:50.907Z · score: 0 (0 votes) · LW · GW

That is indeed an interesting variation of the premise. (It does feel a bit contrived, but then again, so does my original.)

comment by MatthewB · 2010-01-03T09:11:17.719Z · score: 2 (4 votes) · LW · GW

I'd like to know how many people would eat human meat if it was not so taboo (No nervous system so as to avoid nasty prion diseases). I know that since I accidentally had a bite of finger when I was about 19 that I've wondered what a real bite of a person would taste like (prepared properly... Maybe a ginger/garlic sauce???).

Also, building on Kaj Sotala's proposal, what about sexual assignment by job or profession (instead of biological sex). So, all Doctors or Health Care workers would be female, all Soldiers would be male, all ditch diggers would be male, yet all bakers would be female. All Mailmen would be male, yet all waiters would be female.

Then, one could have multiple sex-assignments if one worked more than one job. How about a neuter sex and a dual sex in there as well (so the neuter sex would have no sex, and the hermaphrodite would be... well, both...)

comment by orthonormal · 2010-01-03T09:45:14.671Z · score: 2 (2 votes) · LW · GW

since I accidentally had a bite of finger when I was about 19

After your prior revelations and this, I'm waiting for the third shoe to drop.

comment by MatthewB · 2010-01-03T12:21:58.998Z · score: 3 (3 votes) · LW · GW

Then shoes could be dropping for quite a while...

Edit: I better stop biographing for a while. I've led a life that has been colorful to say the least (I wish that it had been more profitable - it was at one point... But, well, you have a link to what happened to the money)

comment by [deleted] · 2010-01-03T22:16:10.015Z · score: -3 (5 votes) · LW · GW

Hey, no linking to people's revelations without their permission.

comment by RichardKennaway · 2010-01-04T22:06:44.531Z · score: 1 (1 votes) · LW · GW

I'd like to know how many people would eat human meat if it was not so taboo

Isn't that circular? Not eating human meat is the taboo.

comment by MatthewB · 2010-01-04T23:06:19.409Z · score: 2 (2 votes) · LW · GW

A better way to have said that would be

I'd like to know how many people would eat human meat if it was not so taboo to eat human meat.

In other words: If there were no taboo against eating human meat, how many people would eat it?

From what I remember of the bite of finger, it had a white meat taste. Sort of like pork-turkey... I guess kinda like a hot dog (only it had no salt on/in it beyond the sweat that was on the hand).

I do think that human meat would stack up against Pork and Turkey as a delicious meat. Maybe if we ate condemned criminals. They would spend their time in prison before their execution fattening up. (OK, I realize that I am getting really out-there morbid now).

Cannibalism is a subject that fascinates me though. I have often wondered about fantastic settings in which the only thing that existed to eat was other people. Say, a planet in which there existed no other life forms at all. No plants, microbes, animals, etc. The Planet would have water, or maybe springs that had a liquid that contained nutrients that weren't in human meat... And, it would have people. So, the people would be the only things to eat, and the only things out of which tools could be made.

I do actually have a series of stories based upon this premise written. It was an interesting thought experiment to think about the types of cultures that could arise to deal with such a dilemma. And, if the inhabitants didn't know that any other life existed (and had some cultural memory of the expression You are what you eat), then they might consider it a horrid idea to eat anything but people (should they eventually discover that other people from other planets eat dumb animals and plants that cannot even think).

If You are what you eat, then eating a stupid immobile plant or a flatulent stupid bovine would seem like the ultimate in self-condemnation.

comment by Nick_Tarleton · 2010-01-04T23:31:38.680Z · score: 2 (2 votes) · LW · GW

I have often wondered about fantastic settings in which the only thing that existed to eat was other people. Say, a planet in which there existed no other life forms at all. No plants, microbes, animals, etc. The Planet would have water, or maybe springs that had a liquid that contained nutrients that weren't in human meat... And, it would have people. So, the people would be the only things to eat, and the only things out of which tools could be made.

Larry Niven, "Bordered in Black". Sort of.

comment by MatthewB · 2010-01-05T01:27:47.699Z · score: 3 (3 votes) · LW · GW

Isn't that the Short Story where the first two Superluminal astronauts arrive at a planet that contains a giant ocean and just one island, that is surrounded by a dark black line.

The dark black area turns out to be algae and people's remains, and a crowd of people wander the island's coast eating either the algae or each other.

I don't see a very large similarity (but then I am looking at it from much more information about the place than you), as those people had no real developed culture or solitary food source. I was surprised to read it when I did, because it did come close to my idea (I first thought of this idea in 2nd grade when we had a nutritional lecture: "You Are What You Eat"). I spent three weeks wondering when the cafeteria was going to start serving people. I figured "I am a person. If I am what I eat, then I must eat people to continue being one." The teacher had to call my parents when I asked her directly about when we would start eating people or, if "This was only something grown-ups did." My mother did her normal "How could you do this to me?!", and my father did the "Look what you've done to your mother!"

The Culture that I envisioned was large and highly populous, and the whole point of life was to eventually be able to give your meat to your family (although, many children are eaten if they don't live up to standards). They build cities out of mud and bone, and use glass for some tools (created by burning bone and intestinal gases created in special people who are nothing but huge guts. These people also produce other chemicals in different metabolic processes, but the point is that a whole class of person exists that is nothing but a chemical factory. These people usually have most of their cortex removed as well, so they are basically vegetables. They use the neocortex of these people as artificial memory devices).

There are other groups on this imaginary world as well, who are much less "Civilized" and predatory. They all live under ground in tunnels that are constantly being dug so that the people on the surface cannot locate them and exterminate them (as they upset the status quo of the surface civilization).

The "Planet" also has a rather unusual topology. From the surface of the planet, it is an infinite plane that continues on in all directions (except for up and down. Down leads back up, and up leaves the surface in a physical as well as temporal direction). There is an event horizon around the planet (it looks like a planet from outside this event horizon), that, once penetrated, reveals the planar surface that is directly below the point of contact of the event horizon. So, there are an infinite number of such surfaces on this planet.

But, you are correct. It does have a few similarities to Bordered in Black.

comment by Multiheaded · 2012-02-21T20:32:25.422Z · score: 1 (1 votes) · LW · GW

Wow. Did elements of this appear in your mind during one or several bad trips?

comment by Cyan · 2010-01-05T00:57:08.435Z · score: 0 (0 votes) · LW · GW

It's not circular. One might pose the question of how many people in cultures where eating pork is taboo would eat it if it weren't taboo. Conversely, there's no taboo against eating smoked salmon that I know of, but I can't stand the stuff.

comment by dclayh · 2010-01-04T22:36:06.163Z · score: 0 (0 votes) · LW · GW

Perhaps he means how would it stack up in deliciousness against beef, chicken, fish, etc..

comment by RichardKennaway · 2010-01-05T12:32:49.660Z · score: 0 (0 votes) · LW · GW

In this sort of environment, Jeffreyssai poses the question: "Find what is valuable in religion."

Among the false starts that he will instantly slap down are responses that say what is non-valuable or pernicious ("This was not the question. Do not waste our time rehearsing irrelevancies known to us all"), evolutionary explanations of religion ("What is valuable, not what merely happened"), religiously motivated good works ("we do superior works without"), and any concept of useful lies.

comment by NancyLebovitz · 2010-01-03T08:17:00.807Z · score: 5 (5 votes) · LW · GW

Has anyone here tried Lojban? Has it been useful?


I recommend making a longer list of recent comments available, the way Making Light does.


If you've been working with dual n-back, what have you gotten out of it? Which version are you using?


Would an equivalent to a .newsrc be possible? I would really like to be able to tell the site that I've read all the comments in a thread at a given moment, so that when I come back, I'll default to only seeing more recent comments.

comment by RichardKennaway · 2010-01-03T10:30:07.167Z · score: 2 (2 votes) · LW · GW

Years ago I was involved with both Loglan (the original) and Lojban (the spin-off, started by a Loglan enthusiast who thought the original creator was being too possessive of Loglan). For me it was simply an entertaining hobby, along with other conlangs such as Láadan and Klingon. But in the history of artificial languages, it is important as the first to be based on the standard universal language of mathematics, first-order predicate calculus.

comment by Lightwave · 2010-01-05T13:35:50.606Z · score: 0 (0 votes) · LW · GW

I recommend making a longer list of recent comments available, the way Making Light does.

+1

It looks like this. I would even add a sorting functionality for the list of the last X comments by topic.

comment by MatthewB · 2010-01-03T13:13:18.503Z · score: 0 (0 votes) · LW · GW

There is some guy on the forums of Ray Kurzweil's website who regularly goes off on these huge tangents about Lojban and Pot and how AIs will all be the multi-agent Lojban speaking, pot smoking embodiments of... something...

Thus, where-ever/whenever I see the word lojban, I tend to have a negative reaction. I did manage to have a sane conversation with Steve Omohundro about Lojban when he spoke at my school last year, so my reaction has tempered somewhat. RichardKennaway seems to say more about it (usefully) than I have said.

comment by MichaelGR · 2010-01-02T17:44:41.078Z · score: 5 (5 votes) · LW · GW

I spent December 23rd, 24th and 25th in the hospital. My uncle died of brain cancer (Glioblastoma multiforme). He was an atheist, so he knew that this was final, but he wasn't signed up for cryonics.

We learned about the tumor 2 months ago, and it all happened so fast.. and it's so final.

This is a reminder to those of you who are thinking about signing up for cryonics: don't wait until it's too late.

comment by Larks · 2010-01-02T19:10:54.548Z · score: 10 (10 votes) · LW · GW

Because trivial inconveniences can be a strong deterrent, maybe someone should make a top-level post on the practicalities of cryonics; an idiot's guide to immortality.

comment by Alicorn · 2010-01-02T18:34:44.056Z · score: 9 (11 votes) · LW · GW

I want to sign up. I don't want to sign up alone. I can't convince any of my family to sign up with me. Help.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-02T23:51:56.751Z · score: 9 (9 votes) · LW · GW

Most battles like this end in losses; I haven't been able to convince any of my parents or grandparents to sign up. You are not alone, but in all probability, the ones who stand with you won't include your biological family... that's all I can say.

comment by MatthewB · 2010-01-03T13:17:56.738Z · score: 0 (0 votes) · LW · GW

the ones who stand with you won't include your biological family...

I have found that to be very true.

I think that I would not wish to have most of my family around if their lives were interrupted for 20, 50, or 100 years. Most of them have a hard enough time with living in a world that is moving at the pace of our current world, much less the drastic change that they would experience if they were to suddenly wake to a world to which they had no frame of reference.

I would not wish to be lonely in such a world, but, I already have friends with Alcor plans.

comment by Technologos · 2010-01-02T18:39:54.117Z · score: 5 (5 votes) · LW · GW

Now that would be a great extension of the LW community--a specific forum for people who want to make rationalist life decisions like that, to develop a more personal interaction and decrease subjective social costs.

comment by aausch · 2010-01-02T23:55:51.284Z · score: 5 (5 votes) · LW · GW

It could be a more general advice-giving forum. Come and describe your problem, and we'll present solutions.

That might also be a useful way to track the performance of rationalist methods in the real world.

comment by Technologos · 2010-01-03T05:47:51.818Z · score: 1 (1 votes) · LW · GW

I like it. Sure would beat the hell out of a lot of the advice I've heard, and if nothing else it would be good training in changing our minds and in aggregating evidence appropriately.

comment by Dagon · 2010-01-02T22:37:53.731Z · score: 4 (4 votes) · LW · GW

Can I help by pointing out flaws in your implied argument ("I believe cryonics is worthwhile, but without my family, I'd rather die, and they don't want to")?

Do you intend to kill yourself when some or all of your current family dies? If living beyond them is positive value, then cryonics seems a good bet even if no current family member has signed up.

Also, your arguments to them that they should sign up gets a LOT stronger with your family if you're actually signed up and can help with the paperwork, insurance, and other practical barriers. In fact, some of your family might be willing to sign up if you set everything up for them, including paying, and they just have to sign.

In fact, cryonics as a gift seems like a win all around. It's a wonderful signal: I love you so much I'll spend on your immortality. It gets more people signed up. It sidesteps most of the rationalization for non-action (it's too much paperwork, I don't know enough about which group to sign up with, etc.).

comment by Alicorn · 2010-01-02T22:43:19.633Z · score: 7 (7 votes) · LW · GW

Do you intend to kill yourself when some or all of your current family dies?

No. I do expect to create a new family of my own between now and then, though. It is the prospect of spending any substantial amount of time with no beloved company that I dread, and I can easily imagine being so lonely that I'd want to kill myself. (Viva la extroversion.) I would consider signing up with a fiancé(e) or spouse to be an adequate substitute (or even signing up one or more of my offspring) but currently have no such person(s).

Actually, shortly after posting the grandparent, I decided that limiting myself to family members was dumb and asked a couple of friends about it. My best friend has to talk to her fiancé first and doesn't know when she'll get around to that, but was generally receptive. Another friend seems very on-board with the idea. I might consider buying my sister a plan if I can get her to explain why she doesn't like the idea (it might come down to finances; she's being weird and mumbly about it), although I'm not sure what the legal issues surrounding her minority are.

Edit: Got a slightly more coherent response from my sister when I asked her if she'd cooperate with a cryonics plan if I bought her one. Freezing her when she dies "sounds really, really stupid", and she's not interested in talking about her "imminent death" and asks me to "please stop pestering her about it". I linked her to this, and think that's probably all I can safely do for a while. =/

comment by Peter_de_Blanc · 2010-01-03T00:35:26.738Z · score: 3 (3 votes) · LW · GW

Even if none of your relatives sign up for cryonics, I would expect some of them to still be alive when you are revived.

comment by Vladimir_Nesov · 2010-01-03T00:48:54.248Z · score: 4 (4 votes) · LW · GW

Since there is already only a slim chance of actually getting to the revival part (even though high payoff keeps the project interesting, like with insurance), after mixing in the requirement of reaching the necessary tech in (say) 70 years for someone alive today to still be around, and also managing to die before that, not a lot is left, so I wouldn't call it something to be "expected". "Conditional on you getting revived, there is a good chance some of your non-frozen relatives are still alive" is more like it (and maybe that's what you meant).

comment by Alicorn · 2010-01-03T00:47:18.715Z · score: 2 (2 votes) · LW · GW

Do you mean that a relative I have now, or one who will be born later, will probably be around at that time? Because the former would require that I die soon (while my relatives don't) or that there's an awfully rapid turnaround between my being frozen and my being defrosted.

comment by JamesAndrix · 2010-01-03T09:44:29.332Z · score: 4 (4 votes) · LW · GW

Well the whole point of signing up now is that you might die soon.

So sign up now. If you get to be old And still have no young family And the singularity doesn't seem close, then cancel.

comment by AndrewWilcox · 2010-01-05T00:55:28.817Z · score: 2 (2 votes) · LW · GW

You have best friends now, how did you meet them? In the worst case scenario where people you currently know don't make it, do you doubt that you'll be able to quickly make new friends?

Suppose that there are hundreds of people who would want to be your best friend, and that you would genuinely be good friends with. Your problem is that you don't know who they are, or how to find them. Not to be too much of a technology optimist :-), but imagine if the super-Facebook-search engine of the future would be able to accurately put you in touch with those hundreds.

comment by Alicorn · 2010-01-05T01:11:39.360Z · score: 0 (0 votes) · LW · GW

I met a significant percentage of my friends on a message board associated with a webcomic called The Order of the Stick. Others I met in school. One I met when she sent me a fan e-mail regarding my first webcomic. A majority of my friends, I met through people I already knew through one method or another.

When I pop out into the bright and glorious future, might they have a super-Facebook that would ferry me the cream of the friendship crop and have me re-ensconced in a comfy social net in a week tops? Maybe. But that's adding one more if to the long string of ifs that cryonics already is, and that's the if I can't get over. What I do know is that my standard methods of making friends can't be relied upon to work. I do not expect to wake up to fans of my webcomic eagerly awaiting my defrosting. I do not expect to wake up to find the Order of the Stick forum bustling with activity. I don't expect to wake up to find myself enrolled in school. I certainly don't expect that, if nobody I'm friends with gets frozen, I'll be introduced to any of their friends.

comment by orthonormal · 2010-01-06T02:43:12.716Z · score: 2 (2 votes) · LW · GW

Well, at least you'll have the Less Wrong reunion.

comment by Zack_M_Davis · 2010-01-06T02:48:40.758Z · score: 0 (0 votes) · LW · GW

In the vanishingly small fraction of worlds where the Earth is not destroyed.

comment by orthonormal · 2010-01-06T04:17:42.239Z · score: 2 (2 votes) · LW · GW

I follow Nick Bostrom on anthropic reasoning as well as existential risk, so I expect to see you there.

comment by Alicorn · 2010-01-06T02:48:36.570Z · score: 0 (0 votes) · LW · GW

In certain moods, that might be enough to push me to sign up, but the moods rarely last long enough that I could rely on the impetus from one to get through all the necessary paperwork.

comment by AndrewWilcox · 2010-01-06T01:55:59.711Z · score: 2 (2 votes) · LW · GW

Hmm, what about an outside view? That is, thinking about what it would be like for someone else. I'm a little too sleepy now to recall the exact reference, but there was something said here about how people make better estimates, e.g. about how long a project will take, if they think about how long similar projects have taken rather than how long they think this project will take. And, because you know about the present, let's make our thought experiment happen in the present.

So, what if a woman was frozen a hundred years ago, and woke up today? Would she be able to make any friends? Would anyone care about anything she cared about? Would anyone be interested in her?

Another thought that occurs to me is that making friends is a skill that can be learned like any other skill. Perhaps you haven't needed to be very skilled at making friends because you've grown up in this environment where friends have come to you fairly easily. So if you practice and become really good at that skill and have demonstrated to yourself that you can make friends easily in any situation, then you'd alleviate the worry that is causing you to feel conflicted about cryonics?

comment by Alicorn · 2010-01-06T02:01:31.649Z · score: 3 (3 votes) · LW · GW

I imagine such a woman would be viewed as a worthwhile curiosity, but probably not a good prospective friend, by history geeks and journalists. I think she would find her sensibilities and skills poorly suited to letting her move comfortably about in mainstream society, which would inhibit her ability to pick up friends in other contexts. If there were other defrostees, she might connect with them in some sort of support group setting (now I'm imagining an episode of Futurama the title of which eludes me), which might provide the basis for collaboration and maybe, eventually, friendship, but it seems to me that that would take a while to develop if in fact it worked.

comment by kpreid · 2010-01-06T04:23:16.998Z · score: 2 (2 votes) · LW · GW

(Meta) I wish byrnema had not deleted their comment which was in this position.

comment by AndrewWilcox · 2010-01-08T01:43:44.336Z · score: 1 (1 votes) · LW · GW

Hmm, I wonder if you could leave instructions, kind of like a living will except in reverse, so to speak... e.g., "only unfreeze me if you know I'll be able to make good friends and will be happy". Perhaps with a bit more detail explaining what "good friends" and "being happy" means to you :-)

If I were in charge of defrosting people, I'd certainly respect their wishes to the best of my ability.

And, if your life does turn out to be miserable, you can, um, always commit suicide then... you don't have to commit passive suicide now just in case... :-)

But it certainly is a huge leap in the dark, isn't it? With most decisions, we have some idea of the possible outcomes and a sense of likelihoods...

comment by Alicorn · 2010-01-08T01:45:20.713Z · score: 0 (0 votes) · LW · GW

Why would they be in a position to know that I'd be able to make good friends and be happy?

comment by SoullessAutomaton · 2010-01-08T02:52:04.287Z · score: 1 (1 votes) · LW · GW

Well, if everyone else they've revived so far has ended up a miserable outcast in an alien society, or some other consistent outcome, they might be able to take a guess at it.

comment by Alicorn · 2010-01-08T03:00:40.176Z · score: 0 (0 votes) · LW · GW

Bit of a gap between "not a miserable outcast in an alien society" and "has good close friends".

comment by AndrewWilcox · 2010-01-08T03:34:48.121Z · score: 0 (0 votes) · LW · GW

I can think of three possibilities...

If I'm in charge of unfreezing people, and I'm intelligent enough, it becomes a simple statistical analysis. I look at the totality of historical information available about the past life of frozen people: forum posts, blog postings, emails, youtube videos... and find out what correlates with the happiness or unhappiness of people who have been unfrozen. Then the decision depends on what confidence level you're looking for: do you want to be unfrozen if there's a 80% chance that you'll be happy? 90%? 95%? 99%? 99.9%?

Two, I might not be intelligent enough, or there might not be enough data available, or we might not be finding useful statistical correlates. Then if your instructions are to not unfreeze you if we don't know, we don't unfreeze you.

Three, I might be incompetent or mistaken so that I unfreeze you even if there isn't any good evidence that you're going to be happy with your new situation.

comment by byrnema · 2010-01-06T03:41:51.616Z · score: 1 (1 votes) · LW · GW

I would expect that it would be very natural to treat defrostees like foreign exchange students or refugees. They would be taken care of by a plain old mothering type like me, who is empathetic and understands what it's like to wake up in a foreign place. I would show this 18th century woman places that she would relate to (the grocery store, the library, window shopping downtown) and introduce her to people, a little bit at a time. It would be a good 6-9 months before she felt quite acclimated, but by then she'd be finding a set of friends and her own interests. When she felt overwhelmed, I would tell her to take a bath and spend an evening reading a book.

I've stayed in foster homes in several countries for a variety of reasons, and this is quite usual.

comment by AngryParsley · 2010-01-03T11:54:32.726Z · score: 3 (3 votes) · LW · GW

It's much easier to overcome your own aversion to signing up alone than to convince your family to sign up with you. Even assuming you can convince them that living longer is a good thing, there are a ton of prerequisites needed before one can accurately evaluate the viability of cryonics.

comment by scotherns · 2010-01-07T14:30:44.917Z · score: 2 (2 votes) · LW · GW

Do it anyway. Lead by example. Over time, you might find they become more used to the idea, particularly if they have someone who can help them with the paperwork and organisational side of things. If you can help them financially, so much the better.

If you are successfully revived, you will have plenty of time to make new friends, and start a new family. I'm not meaning to sound callous, but it's not unheard of for people to lose their families and eventually recover. I'm doing everything I can to persuade my family to sign up, but it's up to them to make the final decision.

I'd give my life to save my family, but I wouldn't kill myself if I found myself alone.

comment by Alicorn · 2010-01-07T17:10:51.320Z · score: 1 (1 votes) · LW · GW

I'd be more convinced of my ability to lead by example if I'd ever convinced anyone to become a vegetarian.

comment by scotherns · 2010-01-08T08:38:34.490Z · score: 0 (0 votes) · LW · GW

Did you become vegetarian despite the fact that you couldn't persuade anyone else? Did your decision at least make some people consider the option seriously?

comment by Alicorn · 2010-01-08T14:23:06.099Z · score: 0 (0 votes) · LW · GW

Yes, because unlike with being alive, being a vegetarian is something I don't need company to do happily. I probably wouldn't have become a vegetarian if it involved being shipped to the Isle of the Vegetarians, population: a lot of strangers, unless I could convince people to join me. I don't think my vegetarianism has made anyone give really serious thought to the diet; the person who has reacted with the most thoughtfulness upon my disclosure has a vegan mother and I'm inclined to credit her for all his respect for not eating animals.

comment by scotherns · 2010-01-11T08:50:31.850Z · score: 2 (2 votes) · LW · GW

Well, the future will certainly be full of mostly strangers. If you can't convince any of your current friends/family to sign up, you might be better off making friends with those that have already signed up. There are bound to be some you would get along with (I've read OOTS since it started :-) )

If I ever have any success in convincing anyone else to sign up for cryonics, I'll let you know how I did it (in the unlikely event that this will help!).

comment by rwallace · 2010-01-03T03:06:24.142Z · score: 1 (3 votes) · LW · GW

I think it's great that you've taken the first steps, and would encourage you to go ahead and sign up.

In my experience, arguing with people who've decided they definitely don't want to do something, especially if their reasons are irrational, is never productive. As Eliezer says, it may simply be that those who stand with you will be your friends and the family you create, not the family you came from. But I would guess the best chance of your sister signing up would be obtained by you going ahead right now, but not pushing the matter, so that in a few years the fact of your being signed up will have become more of an established state of affairs.

It's a sobering demonstration of just how much the human mind relies on social proof for anything that can't be settled by immediate personal experience. (Conjecture: any intelligence must at least initially work this way; a universe in which it were not necessary would be too simple to evolve intelligence in the first place. But I digress.)

Is there anything that can be done to bend social instinct more in the right direction here? For example, I know there have been face-to-face gatherings for those who live within reach of them; would it help if several people at such a gathering showed up wearing 'I'm signed up for cryonics' badges?

comment by byrnema · 2010-01-02T18:59:50.850Z · score: 1 (1 votes) · LW · GW

What do you perceive as the main barrier to their signing up?

comment by Alicorn · 2010-01-02T19:05:53.563Z · score: 5 (5 votes) · LW · GW

My dad was the only one with any non-mumbling answer to the suggestion. I told him I wanted him to live forever and he told me I was selfish. He said some things about overpopulation and global warming and universalizability and no proven results from the procedure.

comment by Roko · 2010-01-03T13:16:48.675Z · score: 3 (3 votes) · LW · GW

Well, if it is any consolation, I have had zero success and a bunch of ridicule from all friends and family I mentioned the idea to.

I've had the "selfish, overpopulation and global warming" objection from my mother, and I then reminded her that (a) she had a fair amount of personal wealth and wasn't remotely interested in spending any of it on third world charities, charities that try to reduce population, or efficient ways to combat global warming and (b) she wasn't in favor of killing people to reduce population. Of course, this had no effect.

comment by DanArmak · 2010-01-02T22:47:20.643Z · score: 2 (2 votes) · LW · GW

Do you think it's worthwhile to argue with him rationally on the details, or that if you make him understand his reasons aren't valid he'll just mumble "no" like the rest of your family?

comment by Alicorn · 2010-01-02T22:51:27.065Z · score: 2 (2 votes) · LW · GW

Arguing with my dad is profoundly unpleasant, and he is extremely stubborn. I may send him links to websites, especially if I need his cooperation to involve my sister because she's 16, but I don't anticipate a good result from continuing to engage him directly (at least if I'm the one doing it: our relationship history is such that the odds of me convincing him of anything he's presently strongly against approach nil, and prolonged attempts to do so end in tears.)

comment by MatthewB · 2010-01-02T22:29:22.609Z · score: 0 (0 votes) · LW · GW

I wonder if any insurance companies have policies that cover cryonics? I have emailed a friend who is pretty tied in with the Alcor people in Austin Texas (as well as other cryonics companies, and in other locales) whom I asked for some info about what to do about paying for the service.

It seems that some form of indentured servitude should be available if they really have a belief that reanimation of some sort is possible.

comment by CronoDAS · 2010-01-02T23:10:43.526Z · score: 1 (1 votes) · LW · GW

You can pay for cryonics with life insurance.

comment by MatthewB · 2010-01-03T06:52:06.921Z · score: 1 (1 votes) · LW · GW

Wooo... Hooo... I just talked to a friend in Texas, too, who gave me info on an Alcor plan (he runs Alcor Meetups in Austin TX), and it seems that they have plans that one can buy as well (on installments).

I need to get this set up as soon as I can. I would rather not worry about being hit by a truck and not being prepared.

comment by Vladimir_Nesov · 2010-01-02T13:23:48.087Z · score: 5 (5 votes) · LW · GW

Alexandre Borovik summarizes the Bayesian error in null hypothesis rejection method, citing the classical
J. Cohen (1994). `The Earth Is Round (p < .05)'. American Psychologist 49(12):997-1003.

The fallacy of null hypothesis rejection

If a person is an American, then he is probably not a member of Congress. (TRUE, RIGHT?)
This person is a member of Congress.
Therefore, he is probably not an American.

comment by SilasBarta · 2010-01-02T23:23:43.266Z · score: 0 (2 votes) · LW · GW

If a person is an American, then he is probably not a member of Congress. (TRUE, RIGHT?)
This person is a member of Congress.
Therefore, he is probably not an American.

Valid reasoning. The problem lies in the failure to include all relevant knowledge (A member of Congress is very likely an American), not in the form of reasoning. The reason it looks so wrong is that we automatically add the extra premise on seeing discussion of a "member of Congress". Look at how the reasoning works in a context where there isn't such a premise:

If a person is an American, then he is probably not a Russian. (TRUE, RIGHT?)
This person is a Russian.
Therefore, he is probably not an American.

Somehow I get the feeling that the point of your comment just whooshed over my head...

ETA: Okay, it's not valid reasoning. My point about the assumed premise of the reader remains though.

ETA: Yes it is valid reasoning. See my reply to Cyan.

comment by Peter_de_Blanc · 2010-01-03T00:43:20.009Z · score: 0 (0 votes) · LW · GW

If a person is an American, then he is probably not a member of Congress. (TRUE, RIGHT?) This person is a member of Congress. Therefore, he is probably not an American.

Valid reasoning.

It's not valid Bayesian reasoning, because we haven't said anything about P(member of congress | not american).

comment by AdeleneDawner · 2010-01-03T00:03:38.898Z · score: -1 (1 votes) · LW · GW

If a person is an American, then he is probably not a member of Congress.

This person is a member of Congress.

Therefore, he is probably not an American.

If a person is an American, then he is probably not a Russian.

This person is a Russian.

Therefore, he is probably not an American.

Both of these have false statements in the third position. The problematic word is 'therefore'. Most Russians aren't Americans, but that's not because most Americans aren't Russian; it's because most people don't have dual citizenship (among other possible facts that you could infer that from).

comment by Vladimir_Nesov · 2010-01-02T23:29:54.231Z · score: -2 (4 votes) · LW · GW

You are being obnoxious. Why would you argue with a short example intended to illustrate the topic discussed in the linked paper at length?

comment by SilasBarta · 2010-01-02T23:48:08.726Z · score: 1 (1 votes) · LW · GW

It wasn't clear to me how that misses the point of the paper, and in acknowledgment of that possibility I added the caveat at the end. Hardly "obnoxious".

Nevertheless, your original comment would be a lot more helpful if you actually summarized the point of the paper well enough that I could tell that my comment is irrelevant.

Could you edit your original post to do so? (Please don't tell me it's impossible. If you do, I'll have to read the paper myself, post a summary, save everyone a lot of time, and prove you wrong.)

comment by Cyan · 2010-01-03T02:09:45.502Z · score: 2 (2 votes) · LW · GW

The point of the paper is that the reasoning behind the p-value approach to null hypothesis rejection ignores a critical factor, to wit, the ratio of the prior probability of the hypothesis to that of the data. Your s/member of Congress/Russian example shows that sometimes that factor is close enough to unity that it can be ignored, but that's not the fallacy. The fallacy is failing to account for it at all.
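To make that missing factor concrete, here is a rough numeric sketch of the Congress example via Bayes' Theorem. The figures (roughly 7 billion people in the world, 300 million Americans, 535 members of Congress, all of them American) are back-of-the-envelope assumptions of mine, not from the paper:

```python
# Illustrative Bayes' Theorem check of the Congress example.
# Population figures are rough assumptions for illustration only.
world = 7e9          # people in the world
americans = 3e8      # Americans
congress = 535.0     # members of Congress, assumed all American

p_H = americans / world              # P(H): a random person is American
p_D = congress / world               # P(D): a random person is in Congress
p_D_given_H = congress / americans   # P(D|H): tiny, as the syllogism says

# Bayes' Theorem: P(H|D) = P(D|H) * P(H) / P(D)
p_H_given_D = p_D_given_H * p_H / p_D

print(p_D_given_H)   # ~1.8e-6: "if American, probably not in Congress"
print(p_H_given_D)   # 1.0: yet every member of Congress is American
```

The p-value-style syllogism looks only at the tiny P(D|H); the P(H)/P(D) ratio here is about 560,000, which is what flips the conclusion.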

comment by SilasBarta · 2010-01-06T04:56:06.326Z · score: 1 (1 votes) · LW · GW

On second thought, my original reasoning was correct, and I should have spelled it out. I'll do so here.

It's true that the ratio influences the result, but just the same, you can use your probability distribution of what predicates will appear in the "member of Congress" slot, over all possible propositions. It's hard to derive, but you can come up with a number.

See, for example, Bertrand's paradox, the question of how probable it is that a randomly-chosen chord of a circle is longer than the side of an inscribed equilateral triangle. Some say the answer depends on how you randomly choose the chord. But as E. T. Jaynes argued, the problem is well-posed as is. You just strip away any false assumptions you have of how the chord is chosen, and use the max-entropy probability distribution subject to whatever constraints are left.

Likewise, you can assume you're being given a random syllogism of this form, weighted over the probabilities of X and Y appearing in those slots:

If a person is an X, then he is probably not a Y.
This person is a Y.
Therefore, he is probably not an X.

comment by Cyan · 2010-01-06T05:16:44.599Z · score: 0 (0 votes) · LW · GW

my original reasoning was correct

It wasn't: when a certain form of argument is asserted to be valid, it suffices to demonstrate a single counterexample to falsify the assertion. It's kind of funny -- you wrote

Valid reasoning. The problem lies in the failure to include all relevant knowledge [].

But the failure to include all relevant knowledge is exactly why the reasoning isn't valid.

comment by SilasBarta · 2010-01-06T22:14:21.413Z · score: 1 (1 votes) · LW · GW

It wasn't: when a certain form of argument is asserted to be valid, it suffices to demonstrate a single counterexample to falsify the assertion.

Not for probabilistic claims.

It's kind of funny -- you wrote

Valid reasoning. The problem lies in the failure to include all relevant knowledge [].

But the failure to include all relevant knowledge is exactly why the reasoning isn't valid.

No. The reasoning can be valid even though, given additional information, the conclusion would be changed.

Example:

Bob is accused of murder.
Then, Bob's fingerprints are the only ones found on the murder weapon.
Bob has an ironclad alibi: 30 witnesses and video footage of where he was.

O(guilty|accused of murder) = 1:3
P(prints on weapon|guilty) / P(prints on weapon|~guilty) = 1000
O(guilty|accused of murder, prints on weapon) = 1000*(1:3) = 1000:3
P(guilty| ....) > 99%.

If Bob is accused of murder, he has a moderate chance of being guilty.
Bob's prints are much more likely to later be the only ones found on the murder weapon if he were guilty than if he were not.
Bob's prints are the only ones on the murder weapon.
Therefore, there is a very high probability Bob is guilty.
Bob probably isn't guilty.
Therefore the Bayes Theorem is invalid reasoning. (???)
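The odds arithmetic in the example above can be checked mechanically. A minimal sketch, using only the hypothetical figures from the example (prior odds 1:3, likelihood ratio 1000):

```python
# Check of the odds calculation in the Bob example above.
prior_odds = 1 / 3         # O(guilty | accused of murder) = 1:3
likelihood_ratio = 1000    # P(prints on weapon | guilty) / P(prints on weapon | ~guilty)

posterior_odds = likelihood_ratio * prior_odds        # = 1000:3
posterior_prob = posterior_odds / (1 + posterior_odds)

print(posterior_prob)   # ~0.997, i.e. P(guilty | ...) > 99% as claimed
```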

See the problem? The form of the reasoning presented originally is valid. That is what I was defending. But obviously, you can show the conclusion is invalid if you include additional information. In the general case, reasoning that

If a person is an X, then he is probably not a Y.
This person is a Y.
Therefore, he is probably not an X.

is valid, if that is all you know. But you can only invert the conclusion by assuming a higher level of knowledge than what is presented (in the quoted model above) -- specifically, that you have an additional low-entropy point in your probability distribution for "Y implies high probability of X". But again, this assumes a probability distribution of lower entropy (higher informativeness) than you can justifiably claim to have.

So you can actually form a valid probabilistic inference without looking up the specific p(H)/p(E) ratio applying to this specific situation -- just use your max entropy distribution for those values, which favors the reasoning I was defending.

I'm actually writing up an article for LW about the "Fallacy Fallacy" that touches on these issues -- I think it would be worthwhile to finish it and post it. (So no, I'm not just arguing this point to save face -- there's an important lesson here that ties into the Bertrand Paradox and Jaynes's work.)

comment by Cyan · 2010-01-07T00:46:46.181Z · score: 3 (3 votes) · LW · GW

See the problem?

Not really. You keep demonstrating my point as if it supports your argument, so I know we've got a major communication problem.

The form of the reasoning presented originally is valid. That is what I was defending.

And that's what I'm attacking. We are using the same definition of "valid", right? An argument is valid if and only if the conclusion follows from the premises. You're missing the "only if" part.

It wasn't: when a certain form of argument is asserted to be valid, it suffices to demonstrate a single counterexample to falsify the assertion.

Not for probabilistic claims.

Yes, even for probabilistic claims. See Jaynes's policeman's syllogism in Chapter 1 of PT:LOS for an example of a valid probabilistic argument. You can make a bunch of similarly formed probabilistic syllogisms and check them against Bayes' Theorem to see if they're valid. The syllogism you're attempting to defend is

P(D|H) has a low value.
D is true.
Therefore, P(H|D) has a low value.

But this doesn't follow from Bayes' Theorem at all, and the Congress example is an explicit counterexample.

So you can actually form a valid probabilistic inference without looking up the specific p(H)/p(E) ratio applying to this specific situation -- just use your max entropy distribution for those values, which favors the reasoning I was defending.

Once you know the specific H and E involved, you have to use that knowledge; whatever probability distribution you want to postulate over p(H)/p(E) is irrelevant. But even ignoring this, the idea is going to need more development before you put it into a post: Jaynes's argument in the Bertrand problem postulates specific invariances and you've failed to do likewise; and as he discusses, the fact that his invariances are mutually compatible and specify a single distribution instead of a family of distributions is a happy circumstance that may or may not hold in other problems. The same sort of thing happens in maxent derivations (in continuous spaces, anyway): the constraints under which entropy is being maximized may be overspecified (mutually inconsistent) or underspecified (not sufficient to generate a normalizable distribution).

comment by SilasBarta · 2010-01-07T21:45:58.886Z · score: 1 (3 votes) · LW · GW

Okay, let me first try to clarify where I believe the disagreement is. If you choose to respond, please let me know which claims of mine you disagree with, and where I mischaracterize your claims.

I claim that the following syllogism S1 is valid in that it reaches a conclusion that is, on average, correct.

P(D|H) has a low value.
D is true.
Therefore, P(H|D) has a low value.

So, I claim, if you know nothing about what H and D are, except that the first two lines hold, your best bet (in expectation over all possibilities) is that the third line holds as well. You claim that the syllogism is invalid because this syllogism, S2, is invalid:

P(D|H) has a low value.
D is true.
P(H|D) has a high value.
Therefore, P(H|D) has a low value.

I claim your argument is mistaken, because the invalidity of S2 does not imply the invalidity of S1; it's using different premises.

(You further claim that the existence of a case where P(H|D) has a high value despite lines 1 and 2 of S1 holding, is proof that S1 is invalid. I claim that its probabilistic nature means that it doesn't have to get the right answer (that further knowledge reveals) every time, giving a long example about murder.)

I claim that the article cited by Vladimir was claiming that S1 is an invalid syllogism. I claim that it is in error to do so, and that it was actually showing the errors that result from failing to incorporate all knowledge. So, it is not the use of the template S1 that is the problem, but failing to recognize that your template is actually S2, since your knowledge about members of Congress adds line 3 of S2.

I further claim that S1 is justified by maximum entropy inference, and that the parallels to Bertrand's paradox were clear. I take back the latter part, and will now attempt to show why similar reasoning and invariances apply here.

Given line 1, you know that, whatever the probability distribution of D, it intersects with, at least, a small fraction of H. So draw the Venn/Euler diagram: the D circle (well, a general bounded curve, but we'll call it a circle) could be encompassing only that small portion of H (in the member of Congress case). Or it could encompass that, and some area outside H. At the other extreme, it could encompass all of ~H. Averaging over all these possibilities, there is only a small (meta)chance that your D circle just happens to be at or very near the low end of the possibilities.

In terms of Bayes's theorem: P(H|D) = P(D|H)*P(H)/P(D). You know P(D|H) is low. Now here's the problem: you claim you must account for P(H)/P(D). However, under maximum entropy assumptions, if all you know is line 1 and 2, you have a very "flat" probability distribution. As you probably agree, you cannot justify, at this point, the belief that P(H) is much greater than P(D), nor that it is much less. Rather, you must smear your (meta)probability distribution on P(H) and P(D) across the range from 0 to 1. This gives an expected ratio of 1, which indeed corresponds to zero knowledge. (And, not surprisingly, the informativeness of a piece of evidence is often characterized by the absolute value of the log of the Bayes factor: the more informative, the more the ratio log-deviates from 1.)

Since your minimum knowledge assumption puts P(H)/P(D) at 1, then a small P(D|H) implies a small P(H|D). Yes, additional knowledge can overturn this. But on average, a low P(H|D) follows from applying all knowledge you have, and none that you don't.

So, are we saying the same thing in different ways, or what? I suspect some of the confusion comes from gauging the full implications of knowing nothing about the claims H and D except for line 1 and 2.

comment by Cyan · 2010-01-07T21:51:00.443Z · score: 1 (1 votes) · LW · GW

We're using different definitions of validity. Yours is "[a] syllogism... is valid [if] it reaches a conclusion that is, on average, correct." Mine is this one.

ETA: Thank you for taking the time to explain your position thoroughly; I've upvoted the parent. I'm unconvinced by your maximum entropy argument because, at the level of lack of information you're talking about, H and D could be in continuous spaces, and in such spaces, maximum entropy only works relative to some pure non-informative measure, which has to be derived from arguments other than maximum entropy.

comment by SilasBarta · 2010-01-07T23:44:28.602Z · score: 0 (0 votes) · LW · GW

We're using different definitions of validity. Yours is "[a] syllogism... is valid [if] it reaches a conclusion that is, on average, correct." Mine is this one.

Okay, then how do you reply to my point about Bayesian reasoning in general? All Bayesian inference does is tell you what probability distribution you are justified in having, given your current level of knowledge.

With additional knowledge, that probability distribution changes. That doesn't make your original probability assignments wrong. It doesn't invalidate the probabilistic syllogisms you made using Bayes's Theorem. So it seems like your definition of validity in probabilistic syllogisms matches mine.

Again, refer back to the murder example. The fact that the alibi reverses the probability of guilt resulting from the fingerprint evidence, does not mean it was invalid to assign a high probability of guilt when you only had the fingerprint evidence.

"But the alibi is additional evidence!" Yes, but so is knowledge of what H and D stand for.

I'm unconvinced by your maximum entropy argument because, at the level of lack of information you're talking about, H and D could be in continuous spaces,

A continuous space, yes, but on a finite interval. That lets you define the max-entropy (meta)probability distribution. If q equals P(D|H) (which is low), then your (meta)distribution on P(D) is a flat line over the interval [q,1]. Most of that distribution is such that P(H|D) is also low.
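As a rough Monte-Carlo sketch of this meta-distribution argument (the sampling scheme is my own reading of it, purely for illustration: q = 0.02, P(H) uniform on [0,1], P(D) uniform on [q,1]):

```python
import random

# Monte-Carlo sketch of the flat meta-distribution argument above.
# Assumptions (mine, for illustration): P(D|H) = q is known and low,
# P(H) is uniform on [0, 1], and P(D) is uniform on [q, 1] as described.
random.seed(0)
q = 0.02            # the known, low value of P(D|H)
n = 100_000

low_posterior = 0
for _ in range(n):
    p_H = random.random()                # P(H) ~ Uniform(0, 1)
    p_D = q + (1 - q) * random.random()  # P(D) ~ Uniform(q, 1)
    posterior = q * p_H / p_D            # Bayes: P(H|D) = P(D|H)*P(H)/P(D)
    if posterior < 0.2:
        low_posterior += 1

# Well over 90% of the sampled cases give a low P(H|D).
print(low_posterior / n)
```

Under this sampling scheme the conclusion of S1 holds for the large majority of draws, which is all the "valid on average" reading asks for; whether these uniform meta-distributions are the right ones is of course the point in dispute.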


I appreciate the civility with which you've approached this disagreement.

comment by Cyan · 2010-01-08T01:14:02.050Z · score: 1 (1 votes) · LW · GW

So it seems like your definition of validity in probabilistic syllogisms matches mine.

I only call syllogisms about probabilities valid if they follow from Bayes' Theorem. You permit yourself a meta-probability distribution over the probabilities and call a syllogism valid if it is Cyan::valid on average w.r.t. your meta-distribution. I'm not saying that SilasBarta::valid isn't a possibly interesting thing to think about, but it doesn't seem to match Cyan::valid to me.

A continuous space, yes, but on a finite interval. That lets you define the max-entropy (meta)probability distribution.

No, a finite interval is not sufficient. You really need to specify the invariant measure to use maxent in the continuous case. For instance, suppose we had a straw-throwing machine, a spinner-controlling machine, and a dart-throwing machine, each to be used to draw a chord on a circle (extending the physical experiments described here). We have testable information about each of their accuracies and precisions. According to my understanding of Jaynes, when maximizing entropy we need to use different invariant measures for the three different machines, even though the (finite) outcome space is the same in all cases.

comment by SilasBarta · 2010-01-08T03:38:46.639Z · score: -1 (1 votes) · LW · GW

I only call syllogisms about probabilities valid if they follow from Bayes' Theorem. You permit yourself a meta-probability distribution over the probabilities and call a syllogism valid if it is Cyan::valid on average w.r.t. your meta-distribution.

But you're permitting yourself the same thing! Whenever you apply the Bayes Theorem, you're asserting a probability distribution to hold, even though that might not be the true generating distribution of the phenomenon. You would reject the construction of such a scenario (where your inference is way off) as a "counterexample" or somehow showing the invalidity of updates performed under the Bayes theorem. And why? Because that distribution is the best probability estimate, on average, for scenarios in which you occupy that epistemic state.

All I'm saying is that the same situation holds with respect to undefined tokens. Given that you don't know what D and H are, and given the two premises, your best estimate of P(H|D) is low. Can you find cases where it isn't low? Sure, but not on average. Can you find cases where it necessarily isn't low? Sure, but they involve moving to a different epistemic state.

No, a finite interval is not sufficient. You really need to specify the invariant measure to use maxent in the continuous case

Wrong:

The uniform distribution on the interval [a,b] is the maximum entropy distribution among all continuous distributions which are supported in the interval [a, b] (which means that the probability density is 0 outside of the interval).

comment by Cyan · 2010-01-08T04:26:34.432Z · score: 0 (0 votes) · LW · GW

But you're permitting yourself the same thing! Whenever you apply the Bayes Theorem...

Checks for a syllogism's Cyan::validity do not apply Bayes' Theorem per se. No prior and likelihood need be specified, and no posterior is calculated. The question is "can we start with Bayes' Theorem as an equation, take whatever the premises assert about the variables in that equation (inequalities or whatever), and derive the conclusion?" Checks for SilasBarta::validity also don't apply Bayes' Theorem as far as I can tell -- they just involve an extra element (a probability distribution for the variables of the Bayes' Theorem equation) and an extra operation (expectation w.r.t. the previously mentioned distribution).

You would reject the construction of such as scenario (where your inference is way off) as a "counterexample" or somehow showing the invalidity of updates performed under the Bayes theorem.

This is definitely a point of miscommunication, because I certainly never intended to impeach Bayes' Theorem.

Given that you don't know what D and H are, and given the two premises, your best estimate of P(H|D) is low.

Maybe. I've still yet to be convinced that it's possible to derive a meta-probability distribution for the unconditional probabilities.

Wrong:

The text you link uses Shannon's definition of the entropy of a continuous distribution, not Jaynes's.

comment by SilasBarta · 2010-01-08T04:46:20.727Z · score: 0 (2 votes) · LW · GW

But you're permitting yourself the same thing! Whenever you apply the Bayes Theorem..

Checks for a syllogism's Cyan::validity do not apply Bayes' Theorem per se. ...

Argh. I wasn't saying that you were using the Bayes Theorem in your claimed definition of Cyan::validity. I was saying that when you are deriving probabilities through Bayesian inference, you are implicitly applying a standard of validity for probabilistic syllogisms -- a standard that matches mine, and yields the conclusion I claimed about the syllogism in question.

This is definitely a point of miscommunication, because I certainly never intended to impeach Bayes' Theorem.

Yes, definitely a miscommunication: my point there was that the existence of cases where Bayesian inference gives you a probability differing from the true distribution are not evidence for the Bayes Theorem being invalid. I don't know how you read it before, but that was the point, and I hope it makes more sense now.

Given that you don't know what D and H are, and given the two premises, your best estimate of P(H|D) is low.

Maybe. I've still yet to be convinced that it's possible to derive a meta-probability distribution for the unconditional probabilities.

Why? Because you don't see how defining the variables is a kind of information you're not allowed to have here? Because you think you can update (have a non-unity P(D)/P(H) ratio) in the absence of any information about P(D) and P(H)? Because you don't see how the "member of Congress" case is an example of a low entropy, concentrated-probability-mass case? Because you reject meta-probabilities to begin with (in which case it's not clear what makes probabilities found through Bayesian inference more "right" or "preferable" to other probabilities, even as they can be wrong)?

The text you link uses Shannon's definition of the entropy of a continuous distribution, not Jaynes's.

So? The difference only matters if you want to know the absolute (i.e. scale-invariant) magnitude of the entropy. If you're only concerned about which distribution has the maximum entropy, you don't need to pick an invariant measure (at least not for a case as simple as this one), and Shannon and Jaynes give the same result.

comment by Cyan · 2010-01-08T14:26:36.763Z · score: 1 (1 votes) · LW · GW

when you are deriving probabilities through Bayesian inference, you are implicitly applying a standard of validity for probabilistic syllogisms... that matches mine

I do not agree that that is what I'm doing. I don't know why my willingness to use Bayes' Theorem commits me to SilasBarta::validity.

I hope it makes more sense now.

I think I understand what you meant now. I deny that I am permitting myself the same thing as you. I try to make my problems well-structured enough that I have grounds for using a given probability distribution. I remain unconvinced that probabilistic syllogisms not attached to any particular instance have enough structure to justify a probability distribution for their elements -- too much is left unspecified. Jaynes makes a related point on page 10 of "The Well-Posed Problem" at the start of section 8.

Why [are you unconvinced]?

Because the only argument you've given for it is a maxent one, and it's not sufficient to the task, as I explain further below.

If you're only concerned about which distribution has the maximum entropy, you don't need to pick an invariant measure (at least not for a case as simple as this one), and Shannon and Jaynes give the same result.

This is not correct. The problem is that Shannon's definition is not invariant to a change of variable. Suppose I have a square whose area is between 1 cm^2 and 4 cm^2. The Shannon-maxent distribution for the square's area is uniform between 1 cm^2 and 4 cm^2. But such a square has sides whose lengths are between 1 cm and 2 cm. For the "side length" variable, the Shannon-maxent distribution is uniform between 1 cm and 2 cm. Of course, the two Shannon-maxent distributions are mutually inconsistent. This problem doesn't arise when using the Jaynes definition.
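A quick simulation makes the inconsistency in the square example concrete (a sketch; the sampling code is mine, using only the 1 to 4 cm^2 area range from the text):

```python
import random

# The Shannon-maxent distribution on the square's AREA (uniform on
# [1, 4] cm^2) is NOT the Shannon-maxent distribution on its SIDE
# (uniform on [1, 2] cm): the same ignorance gives two different answers.
random.seed(0)
n = 200_000

# Sample area uniformly on [1, 4]; the side is sqrt(area), in [1, 2].
sides = [(1 + 3 * random.random()) ** 0.5 for _ in range(n)]

# If side length were uniform on [1, 2], half the samples would fall
# below 1.5 cm. Under area-uniform sampling the fraction is instead
# P(area < 1.5^2) = (2.25 - 1) / 3 ~ 0.4167.
frac = sum(s < 1.5 for s in sides) / n
print(frac)   # ~0.417, not the 0.5 that side-uniform maxent would give
```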

In your problem, suppose that, for whatever reason, I prefer the floodle scale to the probability scale, where floodle = prob + sin(2*pi*prob)/(2.1*pi). Why do I not get to apply a Shannon-maxent derivation on the floodle scale?

comment by SilasBarta · 2010-01-08T22:22:22.658Z · score: -1 (1 votes) · LW · GW

I do not agree that that is what I'm doing. I don't know why my willingness to use Bayes' Theorem commits me to SilasBarta::validity.

Because you're apparently giving the same status ("SilasBarta::validity") to Bayesian inferences that I'm giving to the disputed syllogism S1. In what sense is it true that Bob is "probably" the murderer, given that you only know he's been accused, and that his prints were then found on the murder weapon? Okay: in that sense I say that the conclusion of S1 is valid.

Where do you think I'm saying something different?

I deny that I am permitting myself the same thing as you. I try to make my problems well-structured enough that I have grounds for using a given probability distribution. I remain unconvinced that probabilistic syllogisms not attached to any particular instance have enough structure to justify a probability distribution for their elements -- too much is left unspecified.

What about the Bayes Theorem itself, which does exactly that (specify a probability distribution on variables not attached to any particular instance)?

In your problem, suppose that, for whatever reason, I prefer the floodle scale to the probability scale, where floodle = prob + sin(2*pi*prob)/(2.1*pi). Why do I not get to apply a Shannon-maxent derivation on the floodle scale?

Because a) your information was given with the probability metric, not the floodle metric, and b) a change in variable can never be informative, while this one allows you to give yourself arbitrary information that you can't have, by concentrating your probability on an arbitrary hypothesis.

The link I gave specified that the uniform distribution maximizes entropy even for the Jaynes definition.

comment by Cyan · 2010-01-09T02:53:54.695Z · score: 1 (1 votes) · LW · GW

Because you're apparently giving the same status ("SilasBarta::validity") to Bayesian inferences that I'm giving to the disputed syllogism S1.

For me, the necessity of using Bayesian inference follows from Cox's Theorem, an argument which invokes no meta-probability distribution. Even if Bayesian inference turns out to have SilasBarta::validity, I would not justify it on those grounds.

What about the Bayes Theorem itself, which does exactly that (specify a probability distribution on variables not attached to any particular instance)?

I wouldn't say that Bayes' Theorem specifies a probability distribution on variables not attached to any particular instance; rather it uses consistency with classical logic to eliminate a degree of freedom in how other methods can specify otherwise arbitrary probability distributions. That is, once I've somehow picked a prior and a likelihood, Bayes' Theorem shows how consistency with logic forces my posterior distribution to be proportional to the product of those two factors.
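The "eliminated degree of freedom" is just the proportionality below. A minimal sketch with the murder example's structure; the prior and likelihood numbers are hypothetical, chosen purely for illustration:

```python
# Hypothetical numbers: prior that Bob is the murderer, and the likelihood
# of finding his prints on the weapon under each hypothesis.
prior_guilty = 0.10
p_prints_given_guilty = 0.90
p_prints_given_innocent = 0.05

# Bayes' Theorem: the posterior is proportional to prior * likelihood;
# the constant of proportionality is fixed by normalization.
unnorm_guilty = prior_guilty * p_prints_given_guilty
unnorm_innocent = (1.0 - prior_guilty) * p_prints_given_innocent
posterior_guilty = unnorm_guilty / (unnorm_guilty + unnorm_innocent)
print(posterior_guilty)  # 2/3: the evidence raised 0.10 to about 0.667
```

Note that the theorem itself supplies only the update rule; the prior and likelihood still had to come from somewhere, which is the point at issue.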

Because a) your information was given with the probability metric, not the floodle metric, and b) a change in variable can never be informative, while this one allows you to give yourself arbitrary information that you can't have, by concentrating your probability on an arbitrary hypothesis.

I'm going to leave this by because it is predicated on what I believe to be a confusion about the significance of using Shannon entropy instead of Jaynes's version.

The link I gave specified that the uniform distribution maximizes entropy even for the Jaynes definition.

We're at the "is not! / is too!" stage in our dialogue, so absent something novel to the conversation, this will be my final reply on this point.

The link does not so specify: this old revision shows that the example refers specifically to the Shannon definition. I believe the more general Jaynes definition was added later in the usual Wikipedia mishmash fashion, without regard to the examples listed in the article.

In any event, at this point I can only direct you to the literature I regard as definitive: section 12.3 of PT:LOS (pp 374-8) (ETA: Added link -- Google Books is my friend). (The math in the Wikipedia article Principle of maximum entropy follows Jaynes's material closely. I ought to know: I wrote the bulk of it years ago.) Here's some relevant text from that section:

The conclusions, evidently, will depend on which [invariant] measure we adopt. This is the shortcoming from which the maximum entropy principle has suffered until now, and which must be cleared up before we can regard it as a full solution to the prior probability problem.

Let us note the intuitive meaning of this measure. Consider the one-dimensional case, and suppose it is known that a < x < b but we have no other prior information. Then... [e]xcept for a constant factor, the measure m(x) is also the prior distribution describing 'complete ignorance' of x. The ambiguity is, therefore, just the ancient one which has always plagued Bayesian statistics: how do we find the prior representing 'complete ignorance'? Once this problem is solved [emphasis added], the maximum entropy principle will lead to a definite, parameter-independent method of setting up prior distributions based on any testable prior information.

comment by SilasBarta · 2010-01-10T03:03:48.130Z · score: 0 (0 votes) · LW · GW

Y... you mean you were citing as evidence a Wikipedia article you had heavily edited? Bad Cyan! ;-)

Okay, I agree we're at a standstill. I look forward to comments you may have after I finish the article I mentioned. FWIW, the article isn't about this specific point I've been defending, but rather, about the Bayesian interpretation of standard fallacy lists, where my position here falls out as a (debatable) implication.

comment by Cyan · 2010-01-08T19:35:18.433Z · score: 0 (0 votes) · LW · GW

Requesting explanation for the downvote of the parent.

comment by Tyrrell_McAllister · 2010-01-07T22:12:16.017Z · score: 0 (0 votes) · LW · GW

One obstacle to understanding in this conversation seems to be that it involves the notion of "second-order probability". That is, a probability is given to the proposition that some other proposition has a certain probability (or a probability within certain bounds).

As far as I know, this doesn't make sense when only one epistemic agent is involved. An ideal Bayesian wouldn't compute probabilities of the form p(x1 < p(A) < x2) for any proposition A.

Of course, if two agents are involved, then one can speak of "second-order probabilities". One agent can assign a certain probability that the other agent assigns some probability. That is, if I use probability-function p, and you use probability function p*, then I might very well want to compute p(x1 < p*(A) < x2).

And the "two agents" here might be oneself at two different times, or one's conscious self and one's unconscious intuitive probability-assigning cognitive machinery.

From where I'm sitting, it looks like SilasBarta just needs to be clear that he's using the coherent notion of "second-order probability". Then the disagreement dissolves.
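The coherent two-agent notion can be made concrete with a toy calculation (the Beta(2, 2) model of my uncertainty about your p*(A) is purely an assumed illustration):

```python
# My distribution over the probability *you* assign to A: Beta(2, 2),
# whose density is 6x(1-x) and whose CDF is 3x^2 - 2x^3 on [0, 1].
def beta22_cdf(x):
    return 3 * x**2 - 2 * x**3

# p(0.4 < p*(A) < 0.6): my second-order probability that your
# first-order probability for A lies in (0.4, 0.6).
second_order = beta22_cdf(0.6) - beta22_cdf(0.4)
print(second_order)  # about 0.296
```

Here p and p* belong to different agents, so no probability is ever taken of the agent's own probability assignment.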

comment by Cyan · 2010-01-08T01:51:56.192Z · score: 0 (0 votes) · LW · GW

One obstacle to understanding in this conversation seems to be that it involves the notion of "second-order probability".

Naw, that part's cool. (I already had the idea of a meta-probability in my armamentarium.) The major obstacle to understanding was that we meant different things by the word "valid".

comment by thomblake · 2010-01-07T22:30:58.557Z · score: 0 (0 votes) · LW · GW

As far as I know, this doesn't make sense when only one epistemic agent is involved.

If you think there's a fact of the matter about what p(A) is (or should be) then it makes sense. You can reason as follows: "There are some situations where I should assign an 80% probability to a. What is the probability that A is such an a?"

Unless you think "What probability should I assign to A" is entirely a different sort of question than simply "What is p(A)".

comment by Tyrrell_McAllister · 2010-01-07T23:38:43.590Z · score: 0 (0 votes) · LW · GW

If you think there's a fact of the matter about what p(A) is (or should be) then it makes sense. You can reason as follows: "There are some situations where I should assign an 80% probability to a. What is the probability that A is such an a?"

I have plenty to learn about Bayesian agents, so I may be wrong. But I think that this would be a mixing of the object-language and the meta-language.

I'm supposing that a Bayesian agent evaluates probabilities p(A) where A is a sentence in a first-order logic L. So how would the agent evaluate the probability that it itself assigns a certain probability to some sentence?

We can certainly suppose that the agent's domain of discourse D includes the numbers in the interval (0, 1) and the functions mapping sentences in L to the interval (0, 1). For each such function f let 'f' be a function-symbol for which f is the interpretation assigned by the agent. Similarly, for each number x in (0, 1), let 'x' be a constant-symbol for which x is the interpretation.

Now, how do we get the agent to evaluate the probability that p(A) = x? The natural thing to try might be to have the agent evaluate p('p'(A) = 'x'). But the problem is that 'p'(A) = 'x' is not a well-formed formula in L. Writing a sentence as the argument following a function symbol is not one of the valid ways to construct well-formed formulas.

comment by Vladimir_Nesov · 2010-01-03T00:05:13.559Z · score: 0 (0 votes) · LW · GW

[...] I'll have to read the paper myself [...]

Wouldn't I say that's for the best, given that I started the thread by linking to the paper?

comment by SilasBarta · 2010-01-03T02:43:25.745Z · score: 0 (4 votes) · LW · GW

That's no excuse for not providing a meaningful summary so that others can gauge whether it's worth their time. You need to give more than "Vladimir says so" as a reason for judging the paper worthwhile.

You ... do ... understand the paper well enough to provide such a summary ... RIGHT?

comment by Vladimir_Nesov · 2010-01-03T13:07:21.298Z · score: 2 (2 votes) · LW · GW

I was linking not just to the paper, but to a summary of the paper, and included that example from that summary, a summary-of-summary. Others have already summarized what you got wrong in your reply. You can see that the paper has about 1300 citations, which should count for its importance.

comment by MatthewB · 2010-01-02T22:34:46.948Z · score: 0 (0 votes) · LW · GW

I need to read those links... I'll probably have to edit this as soon as I do...

Obviously, I did need to edit it. This is just a strange form of Modus Tollens except with a probabilistic thingy thrown in (pardon the technical term). Obviously, I need to go back and re-read the article again, because I am not seeing what they were talking about.

comment by whpearson · 2010-01-21T00:02:34.779Z · score: 4 (4 votes) · LW · GW

Different responses to challenges, seen through the lens of video games. Although I expect the same can be said for character-driven stories (rather than, say, concept-driven ones).

It turns out there are two different ways people respond to challenges. Some people see them as opportunities to perform - to demonstrate their talent or intellect. Others see them as opportunities to master - to improve their skill or knowledge.

Say you take a person with a performance orientation ("Paul") and a person with a mastery orientation ("Matt"). Give them each an easy puzzle, and they will both do well. Paul will complete it quickly and smile proudly at how well he performed. Matt will complete it quickly and be satisfied that he has mastered the skill involved.

Now give them each a difficult puzzle. Paul will jump in gamely, but it will soon become clear he cannot overcome it as impressively as he did the last one. The opportunity to show off has disappeared, and Paul will lose interest and give up. Matt, on the other hand, when stymied, will push harder. His early failure means there's still something to be learned here, and he will persevere until he does so and solves the puzzle.

While a performance orientation improves motivation for easy challenges, it drastically reduces it for difficult ones. And since most work worth doing is difficult, it is the mastery orientation that is correlated with academic and professional success, as well as self-esteem and long-term happiness.


When I learned about performance and mastery orientations, I realized with growing horror just what I'd been doing for most of my life. Going through school as a "gifted" kid, most of the praise I'd received had been of the "Wow, you must be smart!" variety. I had very little ability to follow through or persevere, and my grades tended to be either A's or F's, as I either understood things right away (such as, say, calculus) or gave up on them completely (trigonometry). I had a serious performance orientation. And I was reinforcing it every time I played an RPG.

comment by RobinZ · 2010-01-21T01:21:44.979Z · score: 0 (0 votes) · LW · GW

Good link!

comment by MrHen · 2010-01-18T18:27:09.270Z · score: 4 (4 votes) · LW · GW

What is the informal policy about posting on very old articles? Specifically, things ported over from OB? I can think of two answers: (a) post comments/questions there; (b) post comments/questions in the open thread with a link to the article. Which is more correct? Is there a better alternative?

comment by Paul Crowley (ciphergoth) · 2010-01-18T20:55:21.148Z · score: 2 (2 votes) · LW · GW

(a). Lots of us scan the "Recent Comments" page, so if a discussion starts up there plenty of people will get on board.

comment by orthonormal · 2010-01-18T20:26:59.523Z · score: 1 (1 votes) · LW · GW

I think each has their advantages. If you post a comment on the open thread, it's more likely to be read and discussed now; if you post one on the original thread, it's more likely to be read by people investigating that particular issue some time from now.

comment by timtyler · 2010-01-18T19:46:07.504Z · score: 1 (1 votes) · LW · GW

There, I figure (a).

comment by CarlShulman · 2010-01-18T20:48:57.186Z · score: 0 (0 votes) · LW · GW

People can read them from the sequences page and Google searches, so I'd suggest a). A follow-up post linking to the old article is also a possibility!

comment by RobinZ · 2010-01-18T20:37:42.027Z · score: 0 (0 votes) · LW · GW

I'm not aware of any policy - I tend to do (a).

comment by timtyler · 2010-01-09T10:03:00.495Z · score: 4 (4 votes) · LW · GW

James Hughes - with a (IMO) near-incoherent Yudkowsky critique:

http://ieet.org/index.php/IEET/more/hughes20100108/

comment by Sly · 2010-01-03T10:52:41.072Z · score: 4 (4 votes) · LW · GW

I am curious as to how many LWers attempt to work out and eat healthy to lengthen life span. Especially among those who have signed up for cryogenics.

comment by RichardKennaway · 2010-01-04T22:00:49.418Z · score: 4 (4 votes) · LW · GW

I work out and eat healthily to make right now better.

Of course, I hope that the body will last longer as well, but I wouldn't undertake a regimen that guaranteed I'd see at least 120, at the cost of never having the energy to get much done with the time. Not least because I'd take such a cost as casting doubt on the promise.

comment by Jawaka · 2010-01-07T13:57:43.525Z · score: 2 (2 votes) · LW · GW

I stopped smoking after I learned about the Singularity and Aubrey de Grey. I don't have any really good data on what healthy food is, but I think I am doing alright. I have also signed up at a gym recently. However, I don't think I can sign up for cryonics in Germany.

comment by Morendil · 2010-01-07T14:04:57.165Z · score: 1 (1 votes) · LW · GW

You can sign up from anywhere, in principle (CI and Alcor list a number of non-US members). The major issue is that it will obviously cost more to transport you to suspension facilities in the US, while avoiding damage to your brain cells in transit.

One disturbing thing about cryonics is that it forces you to allocate probabilities to a wide range of end-of-life scenarios. Am I more likely to die hit by a truck (in which case I wouldn't make much of my chances for successful suspension and revival), or of a fatal disease diagnosed early enough, yet not overly aggressive, such that I can relocate to Michigan or Arizona for my final weeks? And who knows how many other likely scenarios.

comment by DanArmak · 2010-01-07T14:11:32.811Z · score: 2 (2 votes) · LW · GW

You can sign up from anywhere, in principle (CI and Alcor list a number of non-US members). The major issue is that it will obviously cost more to transport you to suspension facilities in the US, while avoiding damage to your brain cells in transit.

I'd guess that getting your local hospitals and government to allow your body to be treated correctly would be the biggest non-financial problem.

I live in Israel, and even if I had unlimited money and could sign up, I'm not at all sure I could solve this problem except by leaving the country.

comment by AngryParsley · 2010-01-08T09:58:17.945Z · score: 1 (1 votes) · LW · GW

I'm signed up for cryonics and I exercise regularly. I usually run 3-4 miles a day and do some random stretching, push-ups, and sit-ups. I slack if I'm on vacation or if the weather is bad. I never eat properly. Some days I forget most meals. Other days I'll have bacon and ice cream.

comment by scotherns · 2010-01-07T09:28:43.857Z · score: 1 (1 votes) · LW · GW

I work out regularly, eat healthy, and I am signed up for Cryonics. One data point for you :-)

comment by Sly · 2010-01-06T09:02:22.075Z · score: 0 (0 votes) · LW · GW

Are either of you two signed up for cryogenics?

comment by Kutta · 2010-01-04T12:48:07.385Z · score: 0 (0 votes) · LW · GW

Well, I'm certainly one, having found OB/LW through the Immortality Institute forums, where I've been researching health topics obsessively for several months. My vague personal impression is that life extension enthusiasts are not especially prevalent here.

comment by Sly · 2010-01-06T09:02:53.269Z · score: 0 (0 votes) · LW · GW

Are either of you two signed up for cryogenics?

comment by Kutta · 2010-01-09T18:06:46.867Z · score: 1 (1 votes) · LW · GW

As a 19-year-old student living in Hungary, cryonics is way back on my list of life-extension-related things to do. Nevertheless I think cryonics is a great option and I'll sign up as soon as I figure out how I could do it in my country (Russia being the closest place with cryo service) and have the money for it.

As a side note, I think cryonics has the best payoffs when you've got some potentially lethal relatively slowly advancing disease like cancer or ALS, and have the option of moving very closely to a cryonics facility.

comment by Kaj_Sotala · 2010-01-02T10:12:59.844Z · score: 4 (4 votes) · LW · GW

A little knowledge can be a dangerous thing. At least Eliezer has previously often recommended Judgment Under Uncertainty as something people should read. Now, I'll admit I haven't read it myself, but I'm wondering if that might be bad advice, as the book's rather dated. I seem to frequently come across articles that cite JUU, but either suggest alternative interpretations or debunk its results entirely.

Just today, I was trying to find recent articles about scope insensitivity that I could cite. But on a quick search I primarily ran across articles pointing out it isn't so clear-cut as we seem to assume:

Psychological explanations of scope insensitivity do not imply CV invalidation. Green and Tunstall (1999, p. 213) argue that observed scope insensitivity (part-whole bias, embedding) “is the result of asking questions which are essentially meaningless to the respondents because [of] false assumptions about the cognitions of the respondents”. This position is close to that of, e.g., Carson and Mitchell (1993), arguing that apparent scope insensitivity is primarily due to flaws in survey design leading to amenity misspecification bias.

There are also explanations from economic theory. Rollins and Lyke (1998) argue that observed insensitivity to scope can result from diminishing marginal values. Successive quantities of, e.g., protected areas would receive ever positive but lower values per unit, such that the possibility of observing scope sensitivity would depend on the baseline scarcity of the resource. Income effects provide a related explanation. CV respondents have limited budgets or sub-budgets, whether these are mental or real, so their optimisation of spending on private and public goods is constrained (Randall and Hoehn 1993, 1996). Thus, even if the valuation is hypothetical, respondents are expected to limit totally stated [Willingness to Pay] to their ability to pay and to account for an executed hypothetical purchase when asked to value another good.

Indeed, the scope sensitivity issue remains controversial...

The scope test in the present CV study was over the composition of endangered species preservation. ... Of four external tests of insensitivity to scope, one was rejected, two gave mixed results, depending on either the type of test or elicitation format, and for the last one the null hypothesis could not be rejected. Of five internal tests, insensitivity to scope was rejected in three cases, one test gave mixed results, and one could not be rejected. Survey design features of the CV study, especially a fuzzy subgroup of endangered species, could explain the apparent insensitivity to scope observed.

So if anyone is reading the book, take it with a grain of salt. At least do a Google Scholar search for more data before accepting the conclusions.

comment by MrHen · 2010-01-31T18:01:18.079Z · score: 3 (3 votes) · LW · GW

What is the appropriate etiquette for post frequency? I work on multiple drafts at a time and sometimes they all get finished near each other. I assume 1 post per week is safe enough.

comment by Alicorn · 2010-01-31T18:01:57.157Z · score: 3 (3 votes) · LW · GW

I try to avoid having more than one post of mine on the sidebar at the same time.

comment by komponisto · 2010-01-28T21:24:26.783Z · score: 3 (3 votes) · LW · GW

For the "How LW is Perceived" file:

Here is an excerpt from a comments section elsewhere in the blogosphere:

In the meantime, one comment on that other interesting reading at Less Wrong. It has been fun sifting through various posts on a variety of subjects. Every time I leave I have the urge to give them the Vulcan hand signal and say "Live Long and Prosper". LOL.

I shall leave the interpretation of this to those whose knowledge of Star Trek is deeper than mine...

comment by Kevin · 2010-01-28T12:31:10.100Z · score: 3 (3 votes) · LW · GW

Prisoner's Dilemma on Amazon Mechanical Turk: http://blog.doloreslabs.com/2010/01/altruism-on-amazon-mechanical-turk/

comment by Nick_Tarleton · 2010-01-24T20:19:09.522Z · score: 3 (3 votes) · LW · GW

From The Rhythm of Disagreement:

Nick Bostrom, however, once asked whether it would make sense to build an Oracle AI, one that only answered questions, and ask it our questions about Friendly AI.

Has Bostrom made this proposal in anything published? I can't seem to find it on nickbostrom.com.

comment by Nick_Tarleton · 2010-01-18T18:42:09.433Z · score: 3 (3 votes) · LW · GW

This is ridiculous. (A $3 item discounted to $2.33 is perceived as a better deal (in this particular experimental setup) than the same item discounted to $2.22, because ee sounds suggest smallness and oo sounds suggest bigness.)

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-18T18:51:22.413Z · score: 3 (3 votes) · LW · GW

That is pretty ridiculous - enough to make me want to check the original study for effect size and statistical significance. Writing newspaper articles on research without giving the original paper title ought to be outlawed.

comment by AllanCrossman · 2010-01-18T21:49:44.012Z · score: 1 (1 votes) · LW · GW

"Small Sounds, Big Deals: Phonetic Symbolism Effects in Pricing", DOI: 10.1086/651241

http://www.journals.uchicago.edu/doi/pdf/10.1086/651241

Whether you'll be able to access it I know not.

comment by timtyler · 2010-01-18T18:59:57.618Z · score: 1 (1 votes) · LW · GW

Same researchers, somewhat similar effect:

"Distortion of Price Discount Perceptions: The Right Digit Effect"

comment by timtyler · 2010-01-18T18:54:04.906Z · score: 0 (0 votes) · LW · GW

Pretty amazing material! A demonstration "in the wild" would be more convincing to marketers, though.

comment by PhilGoetz · 2010-01-09T06:17:32.385Z · score: 3 (7 votes) · LW · GW

Question for all of you: Is our subconscious conscious? That is, are parts of us conscious? "I" am the top-level consciousness thinking about what I'm typing right now. But all sorts of lower-level processes are going on below "my" consciousness. Are any of them themselves conscious? Do we have any way of predicting or testing whether they are?

Tononi's information-theoretic "information integration" measure (based on mutual information between components) could tell you "how conscious" a well-specified circuit was; but I regard it as an interesting correlate of processing power, without any demonstrated or even argued logical relationship to consciousness. Tononi has published a lot of papers on it - and they became more widely-cited when he started saying they were about consciousness instead of saying they were about information integration - but he didn't AFAIK make any arguments that the thing he measures with information integration has something to do with consciousness.

comment by byrnema · 2010-01-09T18:14:00.319Z · score: 1 (1 votes) · LW · GW

It's a very interesting question. I think it's pretty straightforward that 'ourselves' is a composite of 'awarenesses' with non-overlapping mutual awareness.

Some data with respect to inebriation:

  • drunk people would pass a Turing test, but the next morning when events are recalled, it feels like someone else's experiences. But then when drunk again, the experiences again feel immediate.

  • when I lived in France, most of my socialization time was spent inebriated. For years thereafter, whenever I was intoxicated, I felt like it was more natural to speak in French than English. Even now, my French vocabulary is accessible after a glass of wine.

comment by PhilGoetz · 2010-01-10T00:24:22.734Z · score: 1 (1 votes) · LW · GW

That is interesting, but not what I was trying to ask. I was trying to ask if there could be separate, smaller, less-complex, non-human consciousnesses inside every human. It seems plausible (not probable, plausible) that there are, and that we currently have no way of detecting whether that is the case.

comment by PhilGoetz · 2010-01-09T17:49:11.516Z · score: -5 (11 votes) · LW · GW

It's a very important question, if you hope for a future that contains consciousness. You aren't going to be the singleton. You're going to be a piece of a singleton.

Edited later for niceness. But not because of your downvotes, which I also do not respect. I felt like a hypocrite for having told people to be nice.

comment by bogus · 2010-01-09T18:13:41.690Z · score: -4 (8 votes) · LW · GW

The person who voted this down is a moron.

you seem very confident about that, did you downvote your own post? how do you do that.

comment by MrHen · 2010-01-09T00:23:04.757Z · score: 3 (3 votes) · LW · GW

A soft reminder to always be looking for logical fallacies: This quote was smushed into an opinion piece about OpenGL:

Blizzard always releases Mac versions of their games simultaneously, and they're one of the most successful game companies in the world! If they're doing something in a different way from everyone else, then their way is probably right.

Oops.

comment by MrHen · 2010-01-22T15:04:53.390Z · score: 2 (2 votes) · LW · GW

It really does surprise me how often people do things like this.

“I guess it’s just a genetic flaw in humans,” said Amichai Shulman, the chief technology officer at Imperva, which makes software for blocking hackers. “We’ve been following the same patterns since the 1990s.”

This is a quote from someone being interviewed about bad but common passwords. Would this be labeled a semantic stopsign, or a fake explanation, or ...?

comment by RobinZ · 2010-01-22T15:45:44.006Z · score: 2 (2 votes) · LW · GW

Fake explanation - he noticed a pattern and picked something which can cause that kind of pattern, without checking if it would cause that pattern.

comment by thomblake · 2010-01-22T16:59:59.556Z · score: -1 (5 votes) · LW · GW

Blizzard always releases Mac versions of their games simultaneously, and they're one of the most successful game companies in the world! If they're doing something in a different way from everyone else, then their way is probably right.

This isn't an example of a logical fallacy; it could be read that way if the conclusion was "their way must be right" or something like that. As it is, the heuristic is "X is successful and Y is part of X's business plan, so Y probably leads to success".

If you think their planning is no better than chance, or that Y usually only works when combined with other factors, then disagreeing with this heuristic makes sense. Otherwise, it seems like it should work most of the time.

Affirming the consequent, in general, is a good heuristic.

comment by MrHen · 2010-01-22T17:24:17.058Z · score: 4 (4 votes) · LW · GW

Within the context of the article, the bigger form of the argument can be phrased as such:

  • DirectX is not cross-platform
  • OpenGL is cross-platform
  • Blizzard is successful
  • Blizzard releases cross-platform software
  • It is more successful to release cross-platform software
  • It is more successful to use OpenGL than DirectX

This is bad and wrong. As a snap judgement, it is likely that releasing cross-platform software is a more successful thing to do but using that snap judgement to build bigger arguments is dangerous.

This is an example of an argument from authority and the fallacy of division.

As it is, the heuristic is "X is successful and Y is part of X's business plan, so Y probably leads to success".

But Y doesn't lead to success. If I say, "Blizzard is successful and making video games is part of their business plan, so making video games probably leads to success," something should be obviously wrong. Why would it be true if I use "always releases Mac versions of their games simultaneously" instead of "makes video games"?

If you think their planning is no better than chance, or that Y usually only works when combined with other factors, then disagreeing with this heuristic makes sense. Otherwise, it seems like it should work most of the time.

As far as I can tell, the emphasized part is the whole reason you should be careful. Picking one part out of a business plan is stupid. If you know enough about the subject material to determine whether that part of the business plan is applicable to whatever you are doing, fair enough, but this is a judgement call above and beyond the statements given in this example.

Affirming the consequent, in general, is a good heuristic.

Maybe, but it is still a logical fallacy.

comment by Jack · 2010-01-07T19:37:16.955Z · score: 3 (3 votes) · LW · GW

Once upon a time I was pretty good at math, but either I just stopped liking it or the series of dismal school teachers I had turned me off of it. I ended up taking the social studies/humanities route and somewhat regretting it. I've studied some foundations of mathematics stuff, symbolic logic and really basic set theory, and usually find that I can learn pretty rapidly if I have a good explanation in front of me. What is the best way to teach myself math? I stopped with statistics (high school, advanced placement) and never got to calculus. I don't expect to become a math whiz or anything, I'd just like to understand the science I read better. Anyone have good advice?

comment by nhamann · 2010-01-07T21:56:27.458Z · score: 4 (4 votes) · LW · GW

I'm currently trying to teach myself mathematics from the ground up, so I'm in a similar situation as you. The biggest issue, as I see it, is attempting to forget everything I already "know" about math. Math curriculum at both the public high school and the state university I attended was generally bad; the focus was more on memorizing formulas and methods of solving prototypical problems than on honing one's deductive reasoning skills, which if I'm not mistaken is the core of math as a field of inquiry.

So obviously textbooks are good place to start, but which ones don't suck? Well, I can't help you there, as I'm trying to figure this out myself, but I use a combination of recommendations from this page and looking at ratings on Amazon.

Here are the books I am currently reading, have read portions of, or are on my immediate to-read list, but take this with a huge grain of salt as I'm not a mathematician, only an aspiring student:

  • How to Prove It: A Structured Approach by Vellemen - Elementary proof strategies, is a good reference if you find yourself routinely unable to follow proofs

  • How to Solve It by Polya - Haven't read it yet but it's supposedly quite good.

  • Mathematics and Plausible Reasoning, Vol. I & II by Polya - Ditto.

  • Topics in Algebra by Herstein - I'm not very far into this, but it's fairly cogent so far

  • Linear Algebra Done Right by Axler - Intuitive, determinant-free approach to linear algebra

  • Linear Algebra by Shilov - Rigorous, determinant-based approach to linear algebra. Virtually the opposite of Axler's book, so I figure between these two books I'll have a fairly good understanding once I finish.

  • Calculus by Spivak - Widely lauded. I'm only 6 chapters in, but I immensely enjoy this book so far. I took three semesters of calculus in college, but I didn't intuitively understand the definition of a limit until I read this book.
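
For readers who haven't seen it, the definition in question is the standard epsilon-delta definition of a limit (the same one Spivak builds up to over several chapters):

```latex
\lim_{x \to a} f(x) = L
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0 \;\; \exists \delta > 0 :\;
0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon
```

In words: $f(x)$ can be made arbitrarily close to $L$ by taking $x$ sufficiently close to (but not equal to) $a$.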

comment by Paul Crowley (ciphergoth) · 2010-01-08T01:07:14.079Z · score: 2 (2 votes) · LW · GW

I've learned an awful lot of maths from Wikipedia.

comment by Bo102010 · 2010-01-08T01:20:14.772Z · score: 0 (0 votes) · LW · GW

I've learned a lot of equations from Wikipedia, but I've not really learned a lot of real math - that's really come from doing homework problems and thinking about them later.

comment by mkehrt · 2010-01-26T00:32:43.547Z · score: 1 (1 votes) · LW · GW

I've definitely learned a lot of math from Wikipedia. I don't generally do the proofs myself, so I don't really have any of the elusive "mathematical maturity", but I definitely have learned a lot of abstract algebra, category theory and mathematical logic just by reading the definitions of various things on Wikipedia and trying to understand them.

On the other hand, I am pretty motivated to learn these things because I actively enjoy them. Other branches of math, I am much less interested in and so I don't learn that much. But it is possible!

comment by Christian_Szegedy · 2010-01-08T01:42:19.579Z · score: 0 (0 votes) · LW · GW

I don't understand why/how anyone would learn equations without understanding them.

I agree that wikipedia is not a good substitute for textbooks in general, nor does it replace actual practice through problem solving. You can still learn a lot of math (even complete proofs) from it and get a good first impression of whole areas. It even contains high-quality introductory material on certain important topics and facts.

However I completely agree with you: the most important thing in math is to think about problems. Undergraduate Springer books (yellow series) typically contain a lot of problems alongside actual text. My method is the following:

  • 1) Read one chapter and write up the statement of every theorem.
  • 2) Go through all statements and reproduce the proof without rereading the material
  • 3) Iterate 1)-2) if you are stuck with any of the proofs
  • 4) Proceed with the problem section and try to solve all problems. Omit problems only if they are marked as hard and if you are stuck after an hour of thinking.

The best topics to start with are linear algebra and calculus. Working through the undergraduate material in the above way takes a long time, but you will build a firm base for further studies.

comment by Vladimir_Nesov · 2010-01-08T03:20:23.880Z · score: 0 (0 votes) · LW · GW

I've always found that memorizing proofs or actually doing the exercises (as opposed to taking time to understand the structure of the solutions to some of them, if the main text doesn't already cover the representative propositions) hits diminishing returns, in most cases anyway, when you are learning for yourself. The details get forgotten too quickly to justify the effort; the useful thing is to get a good hold of the concepts (which, by the way, can be glossed over even with all the proofs and exercises, by relying on brittle algorithm-like technique instead of deeper intuition).

comment by Christian_Szegedy · 2010-01-08T03:32:24.589Z · score: 1 (1 votes) · LW · GW

I don't advocate blind memorization either. However, I think that if one cannot reconstruct a proof then it is not understood either. Trying to reconstruct thought processes by heart will highlight the parts with incomplete understanding.

Of course in order to fully understand things one should look at additional consequences, solve problems, look at analogues, understand motivation etc. Still, the reconstruction of proofs is a very good starting point, IMO.

comment by Vladimir_Nesov · 2010-01-08T03:39:20.693Z · score: 0 (0 votes) · LW · GW

Sure. I'm pointing to the difference between making sure that you can do proofs (not necessarily reconstruct the particular ones from the textbook) and exercises, and actually reconstructing the proofs and doing the exercises. Getting to the point where you can correctly do the former can easily take a tenth of the time of the latter. You won't be as fast at performing the proofs in the coming weeks if need be, but after a couple of years pass you'd be equally bad both ways (though you'd still have the concepts!).

comment by Bo102010 · 2010-01-08T02:37:05.252Z · score: 0 (0 votes) · LW · GW

Perhaps I should have said "looked up" instead of "learned." That is, I understand the Laplace transform, and have done many homework problems that involved deriving common transform pairs. However, when I need one, I don't try to re-derive it or rely on memory; I go look it up at Wikipedia and use it.
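
For context, the transform in question is the standard (one-sided) Laplace transform of a function $f$:

```latex
\mathcal{L}\{f\}(s) = F(s) = \int_0^{\infty} e^{-st} f(t) \, dt
```

which is exactly the sort of thing that is easier to look up in a table of common transform pairs than to re-derive from the integral each time.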

comment by Vladimir_Nesov · 2010-01-07T19:44:20.747Z · score: -2 (2 votes) · LW · GW

...reading textbooks?

comment by Jack · 2010-01-07T19:53:53.356Z · score: 0 (0 votes) · LW · GW

I'm looking for specific advice. Do you know of good text books?

comment by Vladimir_Nesov · 2010-01-07T20:01:12.327Z · score: 0 (0 votes) · LW · GW

Which textbook is good on a given topic depends on the student's current level, and more importantly on what exactly you want to learn. "Math"? A couple of random suggestions that appealed to me aesthetically, but YMMV:

  • F. W. Lawvere & S. H. Schanuel (1991). Conceptual mathematics: a first introduction to categories. Buffalo Workshop Press, Buffalo, NY, USA.
  • S. Mac Lane & G. Birkhoff (1999). Algebra. American Mathematical Society, 3 edn.

(Both can be found on Kad.)

comment by Tyrrell_McAllister · 2010-01-07T19:54:22.798Z · score: 0 (0 votes) · LW · GW

What, specifically, do you want to learn?

comment by Jack · 2010-01-07T20:04:06.137Z · score: 2 (2 votes) · LW · GW

If the Simple Math of Everything were a real textbook, I'd read that. But I've gathered calculus is the right place to start. Probability theory would be next, I guess.

comment by [deleted] · 2010-01-05T14:38:51.496Z · score: 3 (5 votes) · LW · GW

A few years back I did an ethics course at university. It very quickly made me realise that both I and most of the rest of the class based our belief in the existence of objective ethics simply on a sense that ethics must exist. When I began to question this idea, my teacher asked me what I expected an objective form of ethics to look like. When I said I didn't know, she asked if I would agree that a system of ethics would be objective if it could be universally calculated by any unbiased, perfectly logical being. This seemed fair enough, but the problem with this seemed twofold to me:

1.) We have no evidence that there is such a system of ethics.

2.) If such a system does exist, we have no evidence that our ethical beliefs correlate in any strong way to it.

It seems to me like we should be agnostic about the possibility of objective ethics. Obviously I have further thoughts along these lines but this hardly seems to be the place for them, especially when the basic idea may be obviously flawed in some way I'm missing anyway. So I'll finish on a question: For people who do believe ethics are objective - is this because you view objective ethics in a different way or is it because you have evidence to support the above form of objective ethics?

comment by Vladimir_Nesov · 2010-01-05T19:43:08.964Z · score: 6 (6 votes) · LW · GW

1) Why would a "perfectly logical being" compute (do) X and not Y? Do all "perfectly logical beings" do the same thing? (Dan's comment: a system that computes your answer determines that answer, given a question. If you presuppose a unique answer, you need to sufficiently restrict the question (and the system). A universal computer will execute any program (question) to produce its output (answer).) Not all "beings" will do exactly the same thing, or answer any given question in exactly the same way. See also: No Universally Compelling Arguments.

2) Why would you be interested in what the "perfectly logical being" does? No matter what argument you are given, it is you that decides whether to accept it. See also: Where Recursive Justification Hits Bottom, Paperclip maximizer, and more generally Metaethics sequence.

2.5) What humans want (and you in particular), is a very detailed notion, one that won't automatically appear from a question that doesn't already include all that detail. And every bit of that detail is incredibly important to get right, even though its form isn't fixed in human image.

comment by Jack · 2010-01-05T17:33:02.148Z · score: 3 (3 votes) · LW · GW

I don't know what you mean by objective ethics. I believe there are ethical facts, but they're a lot more like facts about the rules of baseball than facts about the laws of physics.

comment by [deleted] · 2010-01-05T17:47:38.230Z · score: 0 (2 votes) · LW · GW

Let's say by objective ethics we mean a set of rules which there is an imperative to obey, and which are the same for all beings. So if by the rules of baseball you are talking about a game which could have different rules for a different league, then that would not be objective in the same sense. However, if there is only one true set of baseball rules that all people must abide by to be playing baseball, then that would be objective.

So do you believe that ethics are just an invented rule system that could have a different form and still be as ethical? If so, are you saying you follow ethical relativism or some form of subjective ethical doctrine?

comment by PhilGoetz · 2010-01-07T05:26:01.847Z · score: 1 (1 votes) · LW · GW

However, if there is only one true set of baseball rules that all people must abide by to be playing baseball, then that would be objective.

If that's the distinction, then whether there is objective ethics or not is just a matter of semantics; not anything of philosophical or practical interest.

comment by DanArmak · 2010-01-05T19:28:19.853Z · score: 1 (1 votes) · LW · GW

So do you believe that ethics are just an invented rule system that could have a different form and still be as ethical?

What do you mean "as ethical"? By what meta-ethical rule?

If your reply is "by the objective meta-ethics which I postulate that all sentient beings can derive": if everyone can derive it equally, doesn't that imply everyone ought to be equally ethical? If you admit someone or some society is un-ethical (as you asked of Jack), does that mean they somehow failed to derive the meta-ethics? That the ethics they adopted is internally inconsistent somehow?

comment by Jack · 2010-01-05T19:02:15.152Z · score: 1 (1 votes) · LW · GW

So do you believe that ethics are just an invented rule system that could have a different form and still be as ethical?

Invented isn't the right word, though that is partly my fault since baseball isn't an ideal metaphor. Natural language is a better one. Parts of ethics are culturally inherited (presumably at some time in the past they were invented); other parts are innate. The word ethics has a type-token ambiguity. It can refer to our ethical system (call it 'ethics prime') or it can refer to the type of thing that ethics prime is (an ethics). There can be societies without ethics prime; these societies are not ethical in the token sense but may be in the type sense (if they have a different ethical system). Imagine if the word for English and the word for language were the same word 'language'. Do the French speak language?

My own ethical system demands that I try to enforce it on a large class of beings similar to myself so I am not a relativist in that I think other people should do many of the things my ethics require me to do. This seems to me to have little to do with what those other people believe is ethical.

comment by PhilGoetz · 2010-01-07T05:38:41.950Z · score: 1 (1 votes) · LW · GW

Imagine if the word for English and the word for language were the same word 'language'. Do the French speak language?

This is a good way of putting it!

In fact, it just convinced me that there is an objective ethics! Sort of. Asking whether there is an objective meta-ethics is a lot like asking, "Is there such a thing as language?" Language is a concept that can be usefully applied to interactions between organisms of a particular level of intelligence given particular environmental conditions. So is ethics. Is it universal? What the hell does that mean?

But when people say there is no objective ethics, that isn't what they mean. They aren't denying that ethics makes sense as a concept. They're claiming the right to set their own arbitrary goals and values.

It's hard for me to imagine why someone who was convinced that there were no objective ethics would waste time on this, unless they were a Continental philosopher. Claiming there is no objective ethics sounds to me more like the actions of someone who believes in objective ethics, and has come to their own values that are unique enough that they must reject existing values.

comment by DanArmak · 2010-01-05T16:16:48.245Z · score: 3 (3 votes) · LW · GW

a system of ethics would be objective if it could be universally calculated by any unbiased, perfectly logical being.

"Calculated" based on what? What is the question that this would be the answer to?

Also, how can you define "bias" here?

As you can guess from my questions, I don't even see what an objective system of ethics could possibly mean :-)

comment by MatthewB · 2010-01-06T07:43:10.634Z · score: 3 (3 votes) · LW · GW

As you can guess from my questions, I don't even see what an objective system of ethics could possibly mean.

This seems to be my biggest problem as well. I have been trying to find definitions of an objective system of ethics, yet all of the definitions seem so dogmatic and contrived. Not to mention varying from time to time depending upon the domain of the ethics (whether they apply to Christians, Muslims, Buddhists, etc.)

comment by [deleted] · 2010-01-05T16:21:38.094Z · score: 0 (2 votes) · LW · GW

Though I suppose a similar idea about objective ethics could be expressed with the claim that one form of ethics, and only one form, can be derived from purely logical principles, which would cut out the question of bias.

comment by DanArmak · 2010-01-05T17:04:37.875Z · score: -1 (1 votes) · LW · GW

That does not answer my question: what is the purely objective, unbiased definition of "ethics"? We can't discuss objective systems of ethics without an objective definition of the question that ethics is supposed to answer.

P.S. my previous comment was malformatted, so you may have missed this part of it; I've fixed it now.

comment by [deleted] · 2010-01-05T17:51:42.612Z · score: 1 (1 votes) · LW · GW

Take ethics to be a system that sets out rules that determine whether we are good or bad people and whether our actions are good or bad. They differ from other ascriptions of good (say, at baseball) and bad (say, me playing baseball) in that there is an imperative to be good in this sense whereas it is acceptable to be bad at baseball (I hope).

I suspect that won't answer your question so instead I'll ask another. Do you believe that this inability to define means there is no real concept that underpins the folk conception of ethics or does it just mean we are unable to define it well enough to discuss it?

comment by DanArmak · 2010-01-05T18:35:00.154Z · score: -1 (1 votes) · LW · GW

They differ from other ascriptions of good (say, at baseball) and bad (say, me playing baseball) in that there is an imperative to be good in this sense whereas it is acceptable to be bad at baseball (I hope).

What do you mean by imperative? Humans have certain imperatives, whether evolved or "purely" cultural, but they are all human-specific: other creatures and minds will potentially have different ones. They can't be called "objective and unbiased between all rational thinking minds".

Do you believe that this inability to define means there is no real concept that underpins the folk conception of ethics or does it just mean we are unable to define it well enough to discuss it?

How can we talk about something if we can't define or at least describe it, or point to examples of it existing? Inability to define by definition means there's no concept. A concept isn't right or wrong, it just is, and it's equivalent to a definition that lets us know what we're talking about.

As for "folk concepts" of ethics, no offence intended, but aren't they roughly in the same category as religion and "sexual morals"?

comment by byrnema · 2010-01-05T18:44:22.250Z · score: 0 (0 votes) · LW · GW

Humans have certain imperatives, whether evolved or "purely" cultural, but they are all human-specific: other creatures and minds will potentially have different ones.

Aren't you just asserting with this statement, without argument, that there is no objective ethics? Isn't it the question at hand whether or not human imperatives are specific or universal?

(Though I wouldn't exclude the higher order possibility that there could be an objective ethical system defined around imperatives in general; for any arbitrary imperatives that a subsystem defines for itself, there is an objective imperative to have them satisfied.)

comment by DanArmak · 2010-01-05T19:00:22.686Z · score: 0 (0 votes) · LW · GW

Aren't you just asserting with this statement, without argument, that there is no objective ethics? Isn't it the question at hand whether or not human imperatives are specific or universal?

Well, it's not clear to me that that's what AaronBensen meant by "objective ethics". But I do believe that human ethics are not universal, because:

  1. Human ethics aren't even universal among humans. Plenty of humans live and have lived who would think I should rightly be killed - for not obeying some religious prescription, for instance. On the other hand some humans believe no-one should be killed and no-one has the right to kill anyone else, ever. Many more opinions exist.

  2. I know of no reason why an AI couldn't be built with different ethics from ours, or with no ethics at all. A paperclipper AI could be very intelligent, conscious (whatever that means), but still unethical by our lights. If anyone believes that such unethical minds literally cannot exist, the burden of proof is on them.

comment by Jack · 2010-01-05T19:22:30.489Z · score: 0 (0 votes) · LW · GW

Human ethics aren't even universal among humans. Plenty of humans live and have lived who would think I should rightly be killed - for not obeying some religious prescription, for instance. On the other hand some humans believe no-one should be killed and no-one has the right to kill anyone else, ever. Many more opinions exist.

Careful. We need to distinguish between ethical beliefs and 'factual' beliefs. Someone might have an ethics that says: if there is a God, do what he says; else, do not murder. This person might want to kill Dan because he believes God wants heathens to die. Others might have the same ethical system but not believe in God, and therefore default to not murdering anyone. I'm not saying there aren't ethical disagreements, but eliminating differences in factual knowledge might eliminate many apparent ethical differences.

Also, I'm not sure your second point matters. You can probably program anything. If all evolved, intelligent and social beings had very similar ethics I would consider that good enough to claim universality.

comment by DanArmak · 2010-01-05T19:38:09.391Z · score: 2 (2 votes) · LW · GW

I think plenty of ethical differences remain even if we eliminate all possible factual disagreements.

As regards religion, (many) religious people claim that they obey god's commands because they are (ethically) good and right in themselves, and not just because they come from god. It's hard to dismiss religion entirely when discussing the ethics adopted by actual people - there's not much data left.

But here's another example: some people advocate the ethics of minimal government and completely unrestrained capitalism. I, on the other hand, believe in state social welfare and support taxing to fund it. Others regard these taxes as robbery. And another: many people in slave-owning countries have thought it ethical to own slaves; I think it is not, and would free slaves by force if I had the opportunity.

I think enough examples can be found to let my point stand. There is little, if any, universal human ethics.

If all evolved, intelligent and social beings had very similar ethics I would consider that good enough to claim universality.

That is underspecified. Evolved how? If I set up evolution in a simulation, or competition with selection between outright AIs, does that count? Can I choose the seeds or do I have to start from primordial soup?

comment by Jack · 2010-01-05T20:56:35.362Z · score: 1 (1 votes) · LW · GW

some people advocate the ethics of minimal government and completely unrestrained capitalism. I, on the other hand, believe in state social welfare and support taxing to fund it. Others regard these taxes as robbery.

Some people support unrestrained capitalism because they think it provides the most economic growth which is better for the poor. This is obviously a factual disagreement. Of course there are those who think wealth redistribution violates their rights, but it seems plausible that at least many of them would change their mind if they knew what the country would look like without redistribution or if they had different beliefs about the poor (perhaps many people with this view think the poor are lazy or intentionally avoid work to get welfare).

many people in slave-owning countries have thought it ethical to own slaves; I think it is not, and would free slaves by force if I had the opportunity.

Slavery (at least the brutal kind) is almost always accompanied by a myth about slaves being innately inferior and needing the guidance of their masters.

Now I think there probably are some actual ethical differences between human cultures; I just don't want to exaggerate those differences -- especially since they already get most of our attention. All the vast similarities kind of get ignored because conflicts are interesting and newsworthy. We have debates about abortion, not about eating babies. But I think most possible human behavior falls into the obvious, baby-eating category, and the area of disagreement is relatively small.

Moreover, there is considerable evidence for innate moral intuitions. Empathy is an innate process in humans with normal development. Also see John Mikhail on universal moral grammar. I think there is something we can call "human ethics", but that there is enough cultural variability within it to allow us to also pick out local ethical (sub)systems.

That is underspecified. Evolved how? If I set up evolution in a simulation, or competition with selection between outright AIs, does that count? Can I choose the seeds or do I have to start from primordial soup?

Er forget this. When we say "human ethics is universal" we need to finish the sentence with "among... x". Looking up thread I see that the context for this discussion finishes that sentence with "among conscious beings" or something to that effect. I find that exceedingly unlikely. That said, I'm not at all bothered by Clippy the way I would be bothered by the Babyeaters (and not just because eating babies is immoral and paper clips pretty much neutral). The Babyeaters fall into a set of "the kind of things that should abide by my ethics". "Evolved, intelligent and social" was an ill-designed attempt to describe the parameters of that set. Whether or not human morality is universal among things in this set is an important, noteworthy question for me.

comment by DanArmak · 2010-01-05T21:14:40.395Z · score: 0 (0 votes) · LW · GW

Some people support unrestrained capitalism because they think it provides the most economic growth which is better for the poor. This is obviously a factual disagreement.

Not so obvious to me. The real disagreement isn't over "what generates the most economic growth" but over "what is best for the poor" (even if we ignore the people who simply don't want to help the poor, and they do exist). After all, the poor want social support now, not a better economy in a hundred years' time. Deciding that you know what's best for them better than they do is an ethical matter.

Slavery (at least the brutal kind) is almost always accompanied by a myth about slaves being innately inferior and needing the guidance of their masters.

Some slave systems were as you describe (U.S. enslavement of blacks, general European colonial policies, arguably Nazi occupation forced labor). But in many others, anyone at all could be sold or born into slavery, and slaves could be freed and become citizens, so there was no room for looking down on slaves in general (well, not any more than on poor but free people). Examples include most if not all ancient cultures - the Greek, Roman, Jewish, Middle and Near Eastern, and Egyptian cultures, and the original Germanic societies at least.

All the vast similarities kind of get ignored because conflicts are interesting and newsworthy.

That's true.

We have debates about abortion not about eating babies.

A lot of people are advocating a position that women are not allowed to abort, ever. Or perhaps only to save their own lives. To me that's no better than advocating the free eating of unwanted newborn babies.

But I think most possible human behavior is the obvious, baby-eating category and the area of disagreement is relatively small.

I think for almost all possible human behavior that is long-term beneficial to the humans engaging in it, there is or was a society in recorded history where it was normative. Do you have counterexamples?

comment by Jack · 2010-01-05T21:56:45.331Z · score: 0 (0 votes) · LW · GW

The real disagreement isn't over "what generates the most economic growth" but over "what is best for the poor" (even if we ignore the people who simply don't want to help the poor, and they do exist). After all, the poor want social support now, not a better economy in a hundred years' time. Deciding that you know what's best for them better than they do is an ethical matter.

So these two positions differ ethically in that the poor support one but not the other? I guess espousing bizarre ethical views is one way to make your point :-). Perhaps you can explain this better. I take it this doesn't apply to social policy, like abortion and gay marriage?

But in many others, anyone at all could be sold or born into slavery, and slaves could be freed and become citizens, thus there was no room for looking down on slaves in general (well, not any more than on poor but free people).

Thus the "brutal" qualifier in the original comment. The practice of slavery in general might be an ethical difference between cultures, I'll grant. Though it is worth noting that such societies considered compassion toward slaves to be virtuous and cruelty a vice.

A lot of people are advocating a position that women are not allowed to abort, ever. Or perhaps only to save their own lives. To me that's no better than advocating the free eating of unwanted newborn babies.

This looks like information relevant to the question of universal human ethics but it isn't.

I think for almost all possible human behavior that is long-term beneficial to the humans engaging in it, there is or was a society in recorded history where it was normative. Do you have counterexamples?

Not fair. Any particular ethical system only comes about when it dictates or allows behavior that is long-term beneficial to those who engage in it. That's how cultural and biological evolution work. The thing is, the same kinds of behavior were long-term beneficial for every human culture.

comment by DanArmak · 2010-01-06T14:42:08.974Z · score: 1 (1 votes) · LW · GW

So these two positions differ ethically in that the poor support one but not the other?

Yes, and the reason this is relevant is because the positions are about things to be done to the poor.

You said:

Some people support unrestrained capitalism because they think it provides the most economic growth which is better for the poor.

There is a factual disagreement about how to best help the poor. The poor themselves generally support one of the two options: social support. They may, factually, be wrong. There is then a further decision: do we help them in the way we think best, or do we help them in the way they think best? This is a tradeoff between helping them financially, and making them feel good in various ways (by listening to them and doing as they ask). This tradeoff requires an ethical decision.

I take it this doesn't apply to social policy, like abortion and gay marriage?

It does apply, and in much the same way (inasfar as these issues are similar to wealth redistribution policy).

For instance, there are two possible reasons to support giving women abortion rights. One is to make their lives better in various ways - place them in greater control of their lives, let them choose non-child-rearing lives reliably, let them plan ahead, let them solve medical issues with pregnancy. This relies in part on facts, and disagreements about it are partly factual disagreements: what will make women happiest, what will place them in control of their lives, etc.

The other possible reason is simply: the women want abortion rights, so they should have them - even if having these rights is bad for them by some measure. They should have the freedom and the responsibility. (Personally, I espouse this reasoning and I also don't think it's bad for them somehow). This is ethical reasoning, and disagreements about it are ethical, not factual.

The practice of slavery in general might be an ethical difference between cultures, I'll grant. Though it is worth noting that such societies considered compassion toward slaves to be virtuous and cruelty a vice.

I think this compassion on the part of society-at-large tends to be more a matter of signalling than of practice.

A lot of people are advocating a position that women are not allowed to abort, ever. Or perhaps only to save their own lives. To me that's no better than advocating the free eating of unwanted newborn babies.

This looks like information relevant to the question of universal human ethics but it isn't.

Er, why not? It's an example of an ethical disagreement among different people.

I think for almost all possible human behavior that is long-term beneficial to the humans engaging in it, there is or was a society in recorded history where it was normative. Do you have counterexamples?

Not fair. Any particular ethical system only comes about when it dictates or allows behavior that is long-term beneficial to those who engage in it. Thats how cultural and biological evolution work. The thing is, the same kinds of behavior were long-term beneficial for every human culture.

It's true that every behaviour which occurs is evolutionarily beneficial. But I'm suggesting that the opposite is also true: every behaviour that is possible (doesn't require a brilliant insight to invent), and that is evolutionarily beneficial, is practiced.

If indeed there is a universal human ethics, which humans obey, I'd expect some beneficial behaviours to nevertheless be shunned because they are unethical. Otherwise your entire ethics comes down to, "do whatever is in your own interest".

comment by Nick_Tarleton · 2010-01-06T15:34:44.353Z · score: 2 (2 votes) · LW · GW

You're ignoring the tradeoff between helping the current poor and future poor. The current poor would naturally favor the former, but I don't think that's an argument for it over the latter.

comment by Alicorn · 2010-01-06T16:43:04.970Z · score: 1 (1 votes) · LW · GW

Class is fairly heritable. To the extent to which we think people ought to make decisions for their descendants, it may make sense to let current poor make decisions that affect the future poor.

comment by bogus · 2010-01-06T16:21:41.717Z · score: 0 (0 votes) · LW · GW

If that's the only issue, we could choose whatever policy helps the most and then compensate current folks by borrowing. Economic growth will be lower and future folks will be poorer, but the policy will be efficient.

As an aside, we don't really know how wealthy future folks will be. If a Singularity is imminent, it's probably efficient to liquidate a lot of capital and help current folks more.

comment by Jack · 2010-01-06T16:48:42.789Z · score: 1 (1 votes) · LW · GW

Are we breaking some rule if this discussion gets a little political?

Yes, and the reason this is relevant is because the positions are about things to be done to the poor.

OK. But they're also about things to be done to the rich.

The poor themselves generally support one of the two options: social support. They may, factually, be wrong. There is then a further decision: do we help them in the way we think best, or do we help them in the way they think best?

This is such a dismal way of looking at the issue from my perspective. Once you decide that the policy should just be whatever some group wants it to be, you throw any chance for deliberation or real debate out the window. I realize such things are rare in the present American political landscape, but turning interest group politics into an ethical principle is too much for me.

This is a tradeoff between helping them financially, and making them feel good in various ways (by listening to them and doing as they ask). This tradeoff requires an ethical decision.

I read this as "this is a trade-off between helping them financially, and patronizing them" :-).

The other possible reason is simply: the women want abortion rights, so they should have them - even if having these rights is bad for them by some measure.

If most women opposed abortion rights (as they do in many Catholic countries) you would be fine prohibiting it? Even for the dissenting minority? Saying people should be able to have abortions, even if it is bad for them, makes sense to me. Saying some arbitrarily defined group should be able to define abortion policy, regardless of whether it is bad for them, does not.

Also, almost all policies involve coercing someone for the benefit of someone else. How do you decide which group gets to decide policy?

I think this compassion on the part of society-at-large tends to be more a matter of signalling than of practice.

Maybe, though I don't know if we have the evidence to determine that. But they're signaling because they want people to think they are ethical. There being some kind of universal human ethics and most people being secretly unethical is a totally coherent description of the world.

Er, why not? It's an example of an ethical disagreement among different people

What I meant was that the fact that you think something ethically controversial is as bad as something ethically uncontroversial doesn't tell us anything. Also, I know I used it as an example first but the abortion debate likely involves factual disagreements for many people (if not you).

If indeed there is a universal human ethics, which humans obey, I'd expect some beneficial behaviours to nevertheless be shunned because they are unethical. Otherwise your entire ethics comes down to, "do whatever is in your own interest".

But ethics are a product of biological and cultural evolution! Empathy was probably an evolutionary accident (our instincts for caring for offspring got hijacked). If there is a universal moral grammar I don't know the evolutionary reason, but surely there is one. The cultural aspects likely helped groups and helped individuals within groups survive. In general ethics are socially beneficial norms (social benefits aren't the evolutionary cause for compassion but they are the cause for thinking of compassion as a virtue).

So it isn't "do what is in your own interest" but "do whatever is in your group's interest". I think there are individually beneficial behaviors that I suspect have never been normative. In-group murder? In-group theft? That said I don't know what you mean by "your entire ethics comes down to". The causal story for ethics probably does come down to "things that were in your group's interest" but that doesn't mean you can just follow that principle and turn out ethical.

comment by Alicorn · 2010-01-06T16:52:54.508Z · score: 2 (2 votes) · LW · GW

Just as an aside, lots of women go ahead and get abortions even if they assent to statements to the effect that it shouldn't be allowed. Which preference are you more inclined to respect?

comment by mattnewport · 2010-01-06T17:17:37.927Z · score: 0 (0 votes) · LW · GW

I don't think that's necessarily hypocrisy. A reformed drug addict may say that he believes drugs should be illegal and then later relapse. That doesn't necessarily mean his revealed preference for taking drugs overrides his stated opinion that they should be illegal. He may support prohibition because he doesn't trust his own ability to resist a short term temptation that he believes is not in his own long term best interests. Similarly it would not be inconsistent for a woman to believe that abortions should be illegal because they are bad (by some criteria) but too tempting for women who find themselves with an unwanted pregnancy. Believing that they themselves will not be able to resist that temptation if they become pregnant is if anything an argument in favor of making abortion illegal.

For the record I don't believe abortion or drugs should be illegal but I don't think it is necessarily inconsistent for a woman to believe abortion should be illegal and still get one.

comment by Paul Crowley (ciphergoth) · 2010-01-06T19:50:47.454Z · score: 0 (0 votes) · LW · GW

It's not necessarily hypocrisy, but it leaves us with two sets of preferences for a single population, and a judgement call on which is the right one to follow. The argument you're making is sound on its face, but as far as abortion goes neither of us buys it - we take the revealed preference more seriously than the overt one, and the fact that this is even sometimes the right call makes the plan to give groups what they say they want rather than what we think will maximise utility quite a lot less appealing.

comment by DanArmak · 2010-01-06T22:15:10.159Z · score: 1 (1 votes) · LW · GW

This is a very interesting line of argument. How much of it do you think is due to this:

Many of these women live in cultures and social circles/families where publicly supporting abortion rights is very damaging socially. Even in relaxed conditions, disagreeing with one's family and friends on an issue that evokes such strong emotions is hard. The women tend to conform unless they have a strong personal opinion to the contrary, and once they conform on the signalling level, they may eventually come to believe themselves that they are against abortion.

If their social environment changes, or they move into a new one, they may change or "reveal" their new pro-abortion-rights opinion very quickly & dramatically.

And however frequent cases like this may be, we also tend to over-estimate their incidence, because we believe ourselves that abortion rights are really good for women, and that the women are the "good" underdogs in this story.

comment by mattnewport · 2010-01-06T21:01:12.513Z · score: 1 (1 votes) · LW · GW

I wouldn't quite say that I take the revealed preference more seriously than the overt one. I'm prepared to accept that there may be people who genuinely believe that abortion is morally wrong and also genuinely believe that other people (and possibly they themselves) will succumb to temptation and have an abortion if it is legal and available even if they believe it is wrong. We generally accept the reality of akrasia here, this seems like a very similar phenomenon: the belief that people can't be trusted to do what is morally right when faced with an unwanted pregnancy and so need to make a Ulysses pact in advance to bind themselves against temptation.

The reason this argument doesn't hold water for me is because I don't think it is right that people who believe abortion is morally wrong should be able to prevent others who don't share that belief from having abortions. If an 'opt-in' anti-abortion law was proposed where you voluntarily committed to being jailed for having an abortion in advance of needing one I wouldn't have a problem with it.

In reality I don't know what percentage of women with anti-abortion beliefs use this kind of reasoning. I have heard it explicitly from people who have taken drugs in the past and still support prohibition however.

comment by Paul Crowley (ciphergoth) · 2010-01-06T22:44:58.637Z · score: 1 (1 votes) · LW · GW

Note the image in the banner of "Overcoming Bias"...

I would be against an opt-in anti-abortion law, since unlike with akrasia I see no reason to prefer the earlier preference over the later one in this instance.

comment by AdeleneDawner · 2010-01-06T21:13:20.666Z · score: 0 (0 votes) · LW · GW

In reality I don't know what percentage of women with anti-abortion beliefs use this kind of reasoning.

I used to be friends with someone who was an anti-abortion activist, and who likes thinking about the logic behind such decisions. To the best of my knowledge, she'd never thought about it from that angle. I think I still have a good email address for her, if you'd like me to ask her what she thinks of the idea.

comment by mattnewport · 2010-01-06T21:49:42.335Z · score: 0 (0 votes) · LW · GW

I'd be curious to know if anti-abortion activists think about it in those terms.

comment by AdeleneDawner · 2010-01-11T19:51:58.683Z · score: 2 (2 votes) · LW · GW

I got an email back from her. Tl;dr version: Nope, that's definitely not how she was thinking about it. (Perhaps noteworthy: She rarely communicates via email, so she's out of her element here. It is possible to evoke saner discussion from her in realtime.)

As far as the comment from the blogger on that website, it sounds to me that they have a very bland argument. First, most women who are against abortion have had abortions and know the harm caused to the child, but also the harm that happens to them. Second, there are plenty of pro-life women who have had "unplanned" pregnancies and continue to have the child. Can you imagine the conversation with your child that goes like this-, "You are so lucky! You came when I wanted you! I chose not to abort you! Isn't that GREAT!?" I can't even imagine saying that to someone. We are against abortion because it is intrinsically wrong to take the life of an unborn baby. There have been many people that lived through botched up abortions and couldn't understand why no one wanted them. They are unwanted because they are, unplanned, or they are "messed up"(which is inaccurate most times). Can you imagine growing up being "lucky"?

this blogger also implies that we as humans have no self control. It implies that we have no way to make the right decision. Anyone who is pro-life, never becomes pro-choice. It is always the other way around. Our motivation to being pro-life is that there is an innocent life at stake. Well, there are two innocent lives at stake. The mother is also at risk. People forget about that part. The information given to women who get abortions is not complete. If a woman has a miscarriage, and has to have the remains removed from her body, they go to the hospital, they are put to sleep, and a trained OBGYN or doctor is used to perform a Dilation and Cutterage or D&C. This is a one day procedure and normal hospital cost for this is about 20,000 dollars. So tell me why a procedure at an abortion mill can cost between 400-900 dollars and normally the girls are not put to sleep. One of the worst things is hemorrhaging after the procedure. In a hospital, if you leave and you pass out right outside the door (or anywhere) because of that, the hospital will take you back in, the abortion mill won't even call an ambulance. There are tons of other things that can happen that the abortion mill will not take responsibility for. Putting that aside, the topic also takes away from the fact that the whole reason we are pro-life is for life. Thats it. Plain and simple. It is not about us, it is about innocent lives being lied to and lives being taken. I wish that people who are pro choice would explain what it is that they are choosing. The more we continue with the advances of technology, the more scientists are finding that human life is very much there when conception occurs. The pro choice people say that there is no life until a certain time. Technology is proving otherwise. My simple thing has always been, if there is no life, then what are they killing?

comment by AdeleneDawner · 2010-01-07T00:22:02.652Z · score: 2 (2 votes) · LW · GW

Email sent. I'll quote the relevant bit here, in case it turns out to affect her reply. (I did link to the conversation but I'm not sure she'll follow the link.)

I am writing for a more interesting reason than just to keep in touch, though. One of the places I've been spending my time at online is a rationalist forum, and a few of the members were discussing abortion law. One of them suggested that the main reason that women who believe abortion is wrong support anti-abortion laws is that having such a law in place would reduce the temptation they'd feel if they had an unwanted pregnancy, as opposed to supporting such a law primarily to take rights away from others who may not share their beliefs. That didn't sound to me like what you've talked about, but it has some interesting implications (it might be easier for you guys to get a law passed where women could voluntarily sign up to have abortion illegal for themselves, kind of like the laws that let gambling addicts sign up to not be allowed into casinos). What do you think?

comment by DanArmak · 2010-01-06T17:40:33.513Z · score: 0 (0 votes) · LW · GW

Are we breaking some rule if this discussion gets a little political?

Only if it gets political in the sense of "politics, the mind-killer" :-)

Yes, and the reason this is relevant is because the positions are about things to be done to the poor.

OK. But they're also about things to be done to the rich.

Certainly, and the rich's opinion and interests should be consulted as well. I wasn't talking about what the best policy is, anyway; I was just analyzing the position of those rich (or rather non-poor) who you said want to help the poor by improving the economy.

This is such a dismal way of looking at the issue from my perspective. Once you decide that the policy should just be whatever some group wants it to be you throw any chance for deliberation or real debate out the window.

If your goal is ultimately to please that group, then why not? This isn't a debate about working together with another group to achieve a common goal or to compromise on something. This is a debate on how best to help another group. "Making them happy" and "doing whatever they want" (to the extent of the resources we agree to commit) is a valid answer, even if many people won't agree.

The fact that you don't agree is what I was pointing out - that legitimate ethical disputes exist. I don't even really want to argue for this particular policy - I haven't thought it through very deeply; it was just an example of a disagreement. But I do believe it's reasonable enough to at least be considered.

If most women opposed abortion rights (as they do in many Catholic countries) you would be fine prohibiting it? Even for the dissenting minority?

No I would not be fine with that. I'm not fine with any individual prohibiting abortion for another individual. Any women who are against abortions are free not to have abortions themselves, and everyone else should be free to have abortions if they wish. Note that my argument didn't rely on majority opinion or on using the class of "all women". The freedom to have abortions is a personal freedom, not a group freedom.

Also, almost all policies involve coercing someone for the the benefit of someone else. How do you decide which group gets to decide policy?

Many policies involve no coercion. Or at least some of the policy options involve no coercion.

For instance, allowing abortions to everyone involves no coercion. Unless you consider "knowing other people get abortions and not being able to stop them" a coerced state.

I never said that personal freedom and responsibility can solve all ethical issues. Sometimes all policy options are tradeoffs in coercion, and there isn't always a "right" option. That only reinforces my point that many ethical disputes exist and there is no universal human ethics.

I think this compassion on the part of society-at-large tends to be more a matter of signalling than of practice.

Maybe, though I don't know if we have the evidence to determine that. But they're signaling because they want people to think they are ethical. There being some kind of universal human ethics and most people being secretly unethical is a totally coherent description of the world.

I think there's even more variation in the signaling - in the stories that people tell one another - than in the practice. For one thing, the practice is constrained to be mostly evolutionarily beneficial, but the storytelling can be completely divorced from reality.

Case in point: in many times and places religion has been a big part of the "publicly signalled" ethics. Religions, of course, often contradict one another on behavioural guidelines, but more than that, they often contradict what is possible in practice. Imagine a world where the scriptures of (some versions of) Christianity really held sway: sex is sinful, money and property are sinful, taking interest in this world is sinful, trying to change the world for the better is sinful, science and questioning authority are sinful...

I do not believe all humans, let alone all evolved intelligences, would independently derive an ethics that says changing the world, studying nature, and reproducing are all wrong.

the abortion debate likely involves factual disagreements for many people

What kind of disagreements? About what god wants? Or about what's best for women? Or about what our terminal values "should" be?

But ethics are product of biological and cultural evolution!

If they are solely the product of evolution, then there can't be a universal human ethics among different cultures. Did I misunderstand something about your argument?

comment by Jack · 2010-01-06T20:10:59.618Z · score: -1 (1 votes) · LW · GW

But ethics are a product of biological and cultural evolution!

If they are solely the product of evolution, then there can't be a universal human ethics among different cultures.

I have no idea why this would be true. Convergent evolution. Also, there can be cultural evolution in the absence of more than one culture. Some ethical principle might have evolved when humanity was all one culture (if there ever was such a point, I guess I find that unlikely).

Let's back up. Human ethics basically consists of five values. Different cultures at different times emphasize some values more than others. Genuine ethical disagreements tend to be about which of these values should take precedence in a given situation. As a human I don't think there is a "true answer" in these debates. Some of these questions might have truth values for American liberals (and I can answer for those), but they don't for all of humanity.

Now

I do not believe all humans, let alone all evolved intelligences, would independently derive an ethics that says changing the world, studying nature, and reproducing are all wrong.

That ethics is basically the purity value being (in my mind) way overemphasized. Now in modern, Western societies large segments hardly care about purity at all. I'm one of those people and I suspect a lot of people here are. But this is a very new development and it is very likely that we still have some remnants of the purity value left (think about our 'epistemic hygiene' rhetoric!). But yes, compared to most of human history modern liberals are quite revolutionary. It is possible that not all of those values are universal among evolved, intelligent, social beings (though it seems to me they might be).

The other things:

the abortion debate likely involves factual disagreements for many people

What kind of disagreements? About what god wants? Or about what's best for women? Or about what our terminal values "should" be?

I meant the first two. Also, facts about personhood, when life begins, the existence of souls etc. There may also be a value disagreement.

Many policies involve no coercion. Or at least some of the policy options involve no coercion. For instance, allowing abortions to everyone involves no coercion. Unless you consider "knowing other people get abortions and not being able to stop them" a coerced state.

Of course that is a coerced state. :-) Not being able to do something under threat of state action is textbook coercion. This is why libertarians who think they can justify their position just by appealing to a single principle of non-coercion are kidding themselves. They obviously need something else to tell them which kinds of coercion are justified.

If your goal is ultimately to please that group, then why not? This isn't a debate about working together with another group to achieve a common goal or to compromise on something. This is a debate on how best to help another group.

So there isn't some special, terminal value that is "letting these people decide", rather there are different ways to please people and some disagreements are about that? But I'm not sure the question of what is the best way to please a group of people isn't a question of fact. Either poor people would rather be listened to than have more money or vice versa. There is a fact of the matter about this question.

comment by DanArmak · 2010-01-06T22:08:59.214Z · score: 0 (2 votes) · LW · GW

Convergent evolution.

By convergent evolution, some cultures can evolve the same ethics. Even many cultures. But a universal ethics implies that all cultures, no matter how diverse in every other way, and including cultures which might have existed but didn't, would evolve the same ethics (or rather, would preserve the same ethics without evolving it further). This is extremely unlikely, and would require a much stronger explanation than the general idea of convergent evolution.

Anyway, my position is that different cultures in fact have different ethics with little in common between the extremes, so no explanation is needed.

Human ethics basically consists of five values. Different cultures at different times emphasize some values more than others. Genuine ethical disagreements tend to be about which of these values should take precedence in a given situation.

This is an interesting model. I don't remember encountering it before.

I believe you agree with me here, but just to make sure I read your words correctly: the commonality of these five values (if true) does not in itself imply a commonality of ethics. There is no ethics until all the decisions about tradeoffs and priorities between the values are made.

That ethics is basically the purity value being (in my mind) way over emphasized.

In many non-Christian traditions, sex is pure and sacred. People may need to purify themselves for or before sex, and the act of sex itself can serve religious purposes (think "temple whores", for instance). This is pretty much the opposite of Christian tradition.

The value of purity, and the feelings it inspires, may well be universal among humans. But the decision to what it applies - what is considered pure and what is filthy - is almost arbitrary. I suspect the same is true for most or all of the other five values - although there may be some constants - which only reinforces my conviction that there is no universal ethics.

It is possible that not all of those values are universal among evolved, intelligent, social beings (though it seems to me they might be).

It scarcely seems possible to me that any of these values are universal. A few quick thought-experiments, designed purely to demonstrate the feasibility of lacking these values in a sentient species:

Harm/care: some human sub-cultures have little enough of this value (e.g., groups of young males running free with no higher authority). Plus, a lot of our nurturant behaviour stems from raising children who are helpless for many years (later transferred to raising pets). If human children needed little to no care (like r-selected species), and if almost all human interactions took place between mostly self-dependent and independent individuals, then I think we might plausibly have vastly less empathy and "gentleness".

Fairness/reciprocity: some human societies have little of this, instead running on pure power hierarchies. A chief doesn't need to be visibly just if he's visibly powerful, self-interested and rewards his followers in hierarchical order.

Ingroup/loyalty: I'm not sure about this one. It may be that there are evolutionary social dynamics that tend to lead to it (game theory-like).

I speculate that ingroup loyalty might not exist, or might be weaker, in a species that didn't have war and similar competition between individuals. The reason we have such competition is that a male who wins can reproduce a lot more than average. But consider a species that's asexual, or where a male cannot physiologically mate more than once, or more than once a year, or with lifelong partner imprinting like in some birds. Then the biggest competition that can exist between individuals is for the amount of resources one individual and his kin can use. Ingroup dynamics could still form, but they'd be much weaker, I think; they would not be useful except in times of severe lack of food and similar resources.

Authority/respect: this is described in terms of social hierarchies, and there can certainly be intelligent social species that have no real hierarchies. Suppose there's little competition between individuals, as above, so no-one has a big incentive to become chief (it's enough to become relatively high status; no need to be first). And suppose there's little needed for coordinated action with a central decision-maker (no war, and people live in small enough groups that can coordinate efficiently). Or maybe these aliens are just much better at communication and coordination and can do it without taking orders. In such a scenario, I see no reason for a hierarchy to form.

Of course in any particular matter there can be a hierarchy of skill or knowledge. And if someone is consistently on top in a lot of such hierarchies, they can gain authority and respect. Or if someone is just consistently smarter than someone else, there can be authority and respect between individuals. I don't count these as examples; I take this value to mean the human game of status for status' sake.

Purity/sanctity: as I said above, even in humans the concept of purity is disconnected from what a particular culture considers to be pure...

Of course that is a coerced state. :-) Not being able to do something under threat of state action is textbook coercion.

That's a good point, but the choice is still asymmetrical. If we allow people to interfere in each other's lives like this (i.e. the state doesn't coerce them to not interfere), then many people will attempt to interfere in the same thing at cross purposes. As a result, 1) we don't know what way of life will win out, and it may well be unethical; 2) a lot of people will coerce one another, which is no better than when the state does it.

If we're setting state policy, then we can either enforce some one ethical system on everyone, or we can let everyone rule themselves, but we still have to interfere to prevent people from coercing one another, otherwise there'll be chaos, not freedom. Different ethical systems will lead to any of these three systems (imposing ethics, freedom, and state-less chaos). But any system that enforces one ethics must do so explicitly; it's very unlikely to come up as an instrumental goal of ethics A to enforce a conflicting ethics B.

In this way, enforcing individual freedom and non-interference can be seen as qualitatively different from enforcing any given ethics and way of life, even though it still involves a form of coercion.

Either poor people would rather be listened to than have more money or vice versa. There is a fact of the matter about this question.

Yes, and as we said earlier, they almost always prefer being listened to. (When someone tells you "I want X", and you ask him "so do you want X or Y, really?" he'll usually respond "X" again.) What's more, if you value their self-reporting of their happiness, then giving them what they want is the best way to make them feel happier in the short term. If you try something else, like giving them money, or giving their descendants money, then even if in the very long term they'll be happier and admit it, they will reliably be unhappy in the short term due to not getting what they asked for and because you behaved condescendingly towards them (by saying you know what's best for them better than they do).

For some people "helping everyone get what they want == freedom and responsibility for everyone" is a terminal value. For others, "making everyone happy" is a terminal value, but giving people what they want still becomes an instrumental value for the above reason.

comment by Jack · 2010-01-12T20:55:25.548Z · score: 0 (0 votes) · LW · GW

But a universal ethics implies that all cultures, no matter how diverse in ever other way, and including cultures which might have existed but didn't, would evolve the same ethics (or rather, would preserve the same ethics without evolving it further). This is extremely unlikely, and would require a much stronger explanation than the general idea of convergent evolution.

Once you have a task that needs to be accomplished there are often only so many ways of accomplishing it. For example, there are only so many ways to turn sound into useful data the brain can use. Thus I suspect just about all functioning ears will have things in common - something that amplifies vibrations, some medium that can vibrate, etc. That said I think you're probably right that given enough cultures and species with divergent enough histories I'd probably discover some pretty alien moralities. That said there might not be many social and intelligent species out there. Given that, it seems plausible that there is some universal morality in that there are no social and intelligent exceptions. Universality doesn't mean necessity. (I'm going to let your points about different evolutionary histories leading to different values go unresponded to. They're good points though and I think the probability of really inhuman moralities existing is higher than I thought before).

I believe you agree with me here, but just to make sure I read your words correctly: the commonality of these five values (if true) does not in itself imply a commonality of ethics. There is no ethics until all the decisions about tradeoffs and priorities between the values are made.

No no. Sorry if this wasn't clear. Like I said, I don't think humans agree on prioritizing these values. People in the United States don't even agree on prioritizing these values to some extent. The commonality of these five values is a commonality of ethics-- it doesn't imply identical, complete ethical codes for everyone but I don't think we all have identical codes, just enough in common that it makes sense to speak of a human morality.

Harm/care: some human sub-cultures have little enough of this value (e.g., groups of young males running free with no higher authority).

Can you do a better job specifying what kinds of sub-cultures you mean?

Fairness/reciprocity: some human societies have little of this, instead running on pure power hierarchies. A chief doesn't need to be visibly just if he's visibly powerful, self-interested and rewards his followers in hierarchical order.

Yeah, there are places that value authority a lot more than fairness. Is there no conception of fairness for those of equal status? If outsiders came and oppressed them would they not experience that as injustice? This is difficult to discuss without having more data.

In many non-Christian traditions, sex is pure and sacred. People may need to purify themselves for or before sex, and the act of sex itself can serve religious purposes (think "temple whores", for instance). This is pretty much the opposite of Christian tradition.

Cite?

The value of purity, and the feelings it inspires, may well be universal among humans. But the decision of what it applies to - what is considered pure and what is filthy - is almost arbitrary. I suspect the same is true for most or all of the other five values - although there may be some constants - which only reinforces my conviction that there is no universal ethics.

There might be some variation in the way some of the values are implemented, but I hardly think what is considered filthy is arbitrary. There are widely divergent cultures which consider the same things pure and filthy (e.g., feces). This is true of the other values too. The fact that these same five things make up everyone's ethical code strikes me as a really big commonality, one that we can feel pretty good about. It isn't a deep truth about the universe, but the fact that I can condemn something and have the backing of more or less the entire human race is significant. The fact that anywhere I go I can argue with an appeal to one of these values and people won't look at me like I'm a monster is remarkable. And insofar as this is the case I think we can meaningfully speak of a human ethics - it is the ethics that I appeal to by appealing to these values.

Are you familiar with the trolley cases? If you ask whether switching the tracks is permissible you get large majorities saying yes. But if you ask whether pushing the fat guy onto the tracks is permissible you get large majorities saying no. What is interesting is that these responses are universal; there is zero cultural variation. Interestingly, there is a gender difference, not in whether or not you think one or the other is permissible, but in that men come up with complicated rationalizations and moral theories for giving different answers and women tend to not know why they answered the way they did (and say self-deprecating things to that effect).

If we're setting state policy, then we can either enforce some one ethical system on everyone, or we can let everyone rule themselves, but we still have to interfere to prevent people from coercing one another, otherwise there'll be chaos, not freedom.

Interfering to prevent people from coercing one another is still enforcing an ethical system. The state still needs to make normative judgments about what constitutes justified or unjustified coercion. You have a cool car but won't let me use it: you are coercing me by preventing me from riding in your cool car. But if I take the car then I've coerced you - kept you from riding in it, kept you from accessing the fruits of your labor, etc. If someone yells at you in public, you can't avoid hearing them. If someone rapes you, you can't avoid having sex with them. If your neighbor has a gun, he is violating your right to not have to worry about being shot. If you take his gun, then he has lost his right to own a gun. All of these things are coercive. I'm just saying there needs to be some independent standard for justified coercion, and that standard is going to be whatever your ethics is.

comment by DanArmak · 2010-01-30T22:01:17.139Z · score: 1 (1 votes) · LW · GW

I apologize for not replying and providing the citations needed. I've had unforeseen difficulties in finding the time, and now I'm going abroad for a week with no net access. When I come back I hope to make time to participate in LW regularly again and will also reply here.

comment by Jack · 2010-01-05T21:59:29.829Z · score: 0 (0 votes) · LW · GW

I think for almost all possible human behavior that is long-term beneficial to the humans engaging in it, there is or was a society in recorded history where it was normative. Do you have counterexamples?

Not fair. Any particular ethical system only comes about when it dictates or allows behavior that is long-term beneficial to those who engage in it. That's how cultural and biological evolution work. The thing is, the same kinds of behavior were long-term beneficial for every human culture.

comment by byrnema · 2010-01-05T21:31:46.974Z · score: 0 (2 votes) · LW · GW

I concur with Jack that most ethical disputes are about facts, and if not, then about relative weights for values. Freedom versus existence, etc.

What I would call a real difference in ethics would be the introduction of a completely novel terminal value (which I can hardly imagine) or differences in abstract positions such as whether it is OK to locally compromise on ethics if it results in more global good (i.e., if the ends justify the means), etc.

comment by byrnema · 2010-01-05T19:25:17.415Z · score: -1 (1 votes) · LW · GW

There is a confusion that results when you consider either system (objective or subjective ethics) from the viewpoint of the other.

(The objective ethical system viewpoint of human ethics.) Suppose that there is an objective ethical system defining a set of imperatives. Also, separately, we have subjectively determined human ethics. The subjective human ethics overlapping with the objective imperatives are actual imperatives; the rest are just preferences. It is possible that the objective imperatives are not known to us, in which case, we may or may not be satisfying them and we are not aware of our objective value (good or bad).

(The subjective ethical system viewpoint of human ethics.) In the case of no objective ethical system, imperatives are subjectively collectively determined. We are bad or good -- to whatever extent it is possible to be 'bad' or 'good' -- if we think we are bad or good. This is self-validation.

Now, to address your objections:

  1. Human ethics aren't even universal among humans. Plenty of humans live and have lived who would think I should rightly be killed - for not obeying some religious prescription, for instance. On the other hand some humans believe no-one should be killed and no-one has the right to kill anyone else, ever. Many more opinions exist.

Right, human ethics do seem very inconsistent. To me, this is a challenge only to the existence of subjective ethics. In the case of objective ethics, there is no contradiction if humans disagree about what is ethical; humans do not define what is objectively ethical. In the case of a subjective ethical system, inconsistencies in human ethics are evidence that there is no well-defined notion of "human ethics", only individual ethics.

Nevertheless, in defense of 'human ethics' for either system, perhaps it is the case that human ethics are actually consistent, in a way that matters, but the terminal values are so higher order we don't easily find them. All the different moral behaviors we see are different manifestations of common values.

(2) I know of no reason why an AI couldn't be built with different ethics from ours, or with no ethics at all. A paperclipper AI could be very intelligent, conscious (whatever that means), but still unethical by our lights. If anyone believes that such unethical minds literally cannot exist, the burden of proof is on them.

Of course, minds could evolve or be constructed with different subjective ethical systems. Again, they may or may not be objectively ethical.

comment by DanArmak · 2010-01-05T19:44:00.851Z · score: 0 (0 votes) · LW · GW

The subjective human ethics overlapping with the objective imperatives are actual imperatives; the rest are just preferences.

This redefinition of the word "imperative" goes counter to the existing meaning of the word (which would include all 'preferences'), so it's confusing. I suggest you come up with a new term or word-combination.

In the case of objective ethics, there is no contradiction if humans disagree about what is ethical; humans do not define what is objectively ethical.

You defined objective ethics as something every rational thinking being could derive. Shouldn't it also have some meaning? Some reason why they would in fact be interested in deriving it?

If this objective ethics can be derived by everyone, but happens to run counter to almost everyone's subjective ethics, why is it even interesting? Why would we even be talking about it unless we either expected to encounter aliens with subjective ethics similar to it; or we were considering adopting it as our own subjective ethics?

However, in defense of human ethics for either system, perhaps it is the case that human ethics are actually consistent, in a way that matters, but the terminal values are so higher order we don't easily find them. All the different moral behaviors we see are different manifestations of common values.

That definitely requires proof. Have you got even a reason for speculating about it, any evidence for it?

comment by byrnema · 2010-01-05T20:02:18.828Z · score: 0 (0 votes) · LW · GW

You defined objective ethics as something every rational thinking being could derive.

Actually, I didn't. I would be interested in AaronBenson's answers to the questions that follow.

That definitely requires proof. Have you got even a reason for speculating about it, any evidence for it?

Here, I was just suggesting a solution. I don't have much interest in the concept of 'human' ethics. (Like Jack, I would be very interested in what ethics are universal to all evolved, intelligent, social minds.)

... Yet I didn't suggest it randomly. My evidence for it is that whenever someone seems to have a different ethical system from my own, I can usually eventually relate to it by finding a common value.

comment by DanArmak · 2010-01-05T20:13:37.999Z · score: 0 (0 votes) · LW · GW

Right, sorry, that was AaronBensen's definition.

comment by byrnema · 2010-01-05T19:54:29.723Z · score: 0 (0 votes) · LW · GW

The subjective human ethics overlapping with the objective imperatives are actual imperatives; the rest are just preferences.

This redefinition of the word "imperative" goes counter to the existing meaning of the word (which would include all 'preferences'), so it's confusing. I suggest you come up with a new term or word-combination.

I was using the meaning of imperative as something you 'ought' to do, as in moral imperative. This does not include preferences unless you feel like you have a moral obligation to do what you prefer to do.

comment by [deleted] · 2010-01-05T20:41:45.487Z · score: 2 (2 votes) · LW · GW

I know all those replies weren't posted just to aid me but thanks for posting them nevertheless. Obviously I at least need to put more thought into what ethics is and hence what my question means. Maybe the question will disappear following that but, if not, at least I'll be on more solid ground to try to respond to it.

comment by MatthewB · 2010-01-06T07:39:58.407Z · score: 1 (3 votes) · LW · GW

I am having a discussion on a forum where a theist keeps stating that there must be objective truth, that there must be objective morality, and that there is objective knowledge that cannot be discovered by Science (I tried to point out that if it were Objective, then any system should be capable of producing that knowledge or truth).

I had completely forgotten to ask him if this objective truth/knowledge/morality could be discovered if we took a group of people, raised in complete isolation, and then gave them the tools to explore their world. If such things were truly objective, then it would be trivial for these people to arrive at the discovery of these objective facts.

I shall have to remember this, as well as the fact that such objective knowledge/ethics may indeed exist - yet if so, why is it that our ethical systems across the globe have a few things in common, but disagree on a great many more?

comment by PhilGoetz · 2010-01-07T05:20:49.477Z · score: 1 (1 votes) · LW · GW

You can't ask whether there are more things in common than not in common, unless you can enumerate the things to be considered. If everyone agrees on something, perhaps it doesn't get categorized under ethics anymore. Or perhaps it just doesn't seem salient when you take your informal mental census of ethical principles.

Excellent response to the theist.

comment by MatthewB · 2010-01-07T05:42:01.125Z · score: 1 (1 votes) · LW · GW

You can't ask whether there are more things in common than not in common, unless you can enumerate the things to be considered.

Doh!

Yes, of course... Slip of the brain's transmission there.

As for the response to the theist, I wish that I had used that specific response. I cannot recall now what I did use to counter his claims.

As I mentioned, his claim was that there is knowledge that is not available to the scientific method, yet can be observed in other ways.

I pointed out that there were no other ways of observing things than empirical methods, and that if some method of knowledge that just entered our brain should be discovered (Revelation), and its reliability were determined, then this would just be another form of observation (Proprioception) and the whole process would then just be another tool of science.

He just couldn't seem to get around the fact that as soon as he makes an empirical claim, it falls within the realm of scientific discovery.

He was also misusing Gödel's incompleteness theorem (some true statements in a formal system cannot be proved within that formal system).

At which point, he began to treat science as some sort of religion and god that was being worshiped, from which it supposedly followed that everything was meaningless and thus there were no ethics, so he could just go kill and rape whoever he pleased.

It frightens me that there are such people in the world.

comment by Nick_Novitski · 2010-01-04T18:45:31.913Z · score: 3 (3 votes) · LW · GW

Here's a silly comic about rationality.

I rather wish it was called "Irrationally Undervalues Rapid Decisions Man". Or do I?

comment by CannibalSmith · 2010-01-04T10:58:08.909Z · score: 3 (3 votes) · LW · GW

Does undetectable equal nonexistent? Examples: There are alternate universes, but there's no way we can interact with them. There are aliens outside our light cones. Past events, evidence of which has been erased.

comment by randallsquared · 2010-01-06T03:31:33.089Z · score: 0 (0 votes) · LW · GW

Does undetectable equal nonexistent?

If you mean undetected, then clearly not, since we might yet detect those things. If you mean necessarily undetectable, I don't see how the question is answerable, or even has an answer at all, in some sense.

comment by Nick_Novitski · 2010-01-04T16:36:42.375Z · score: 0 (2 votes) · LW · GW

Undetectability is hard (impossible?) to establish outside of thought experiments. Real examples are limited to undetected and apparently-unlikely-to-be-detected phenomena.

But if I took your question charitably, I would personally say absolutely yes.

I've always been fond of stealing Maxwell's example: if there was a system of ropes hanging from a belfry, which was itself impossible to peer inside, but which produced some measurable relation between the position and tension between all the ropes, then what can be said to "exist" in that belfry is nothing more or less than that relationship, in whatever expression you choose (including mechanically, with imaginary gears or flywheels or fluids or whatever). And if later we can suddenly open it up and find that there were some components that had no effect on the bell pull system (for example, a trilobite fossil with a footprint on it), then I would have no personal issue with saying that those components did not exist back "when it was impossible to open the belfry."

But I hold this out of convenience, not rigor.

comment by CannibalSmith · 2010-01-05T07:59:37.298Z · score: 0 (0 votes) · LW · GW

But I hold this out of convenience, not rigor.

Why? And why is this distinction important?

comment by [deleted] · 2010-01-04T05:39:56.832Z · score: 3 (3 votes) · LW · GW

P(A)*P(B|A) = P(B)*P(A|B). Therefore, P(A|B) = P(A)*P(B|A) / P(B). Therefore, woe is you should you assign a probability of 0 to B, only for B to actually happen later on; P(A|B) would include a division by 0.
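A minimal sketch of the identity above (my illustration, not part of the original comment; the function name is made up):

```python
# Bayes' rule: P(A|B) = P(A) * P(B|A) / P(B).
# Conditioning on an event of probability 0 means dividing by zero.

def posterior(p_a, p_b_given_a, p_b):
    """Return P(A|B); undefined (raises) when P(B) == 0."""
    if p_b == 0:
        raise ZeroDivisionError("P(B) = 0: cannot condition on an impossible event")
    return p_a * p_b_given_a / p_b

print(posterior(0.5, 0.8, 0.5))  # -> 0.8
```

If B was assigned probability 0 and then happens anyway, there is no posterior to move to - which is exactly the "woe" described above.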

Once upon a time, there was a Bayesian named Rho. Rho had such good eyesight that she could see the exact location of a single point. Disaster struck, however, when Rho accidentally threw a dart - its shaft so thin that its intersection with a perfect dartboard would be a single point - at a perfect dartboard. You see, when you randomly select a point from a region, the probability of selecting each point is 0. Nonetheless, a point was selected, and Rho saw which point it was; an event of probability 0 occurred. As Peter de Blanc said, Rho instantly fell to the very bottom layer of Bayesian hell.

Or did she?

comment by orthonormal · 2010-01-04T05:46:43.611Z · score: 1 (1 votes) · LW · GW

Don't worry, the mathematicians have already covered this.

comment by RichardKennaway · 2010-01-04T07:53:20.243Z · score: 0 (0 votes) · LW · GW

There are mathematicians who have rejected the idea of the real number line being made of points, perhaps for reasons like this. I don't recall who, but pointless topology might be relevant.

comment by Technologos · 2010-01-04T08:49:38.649Z · score: 1 (1 votes) · LW · GW

My understanding is that such a story relies on trying to define the area of a point when only areas of regions are well-defined; the probability of the point case is just the limit of the probability of the region case, in which case there is technically no zero probability involved.
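One hedged way to spell out that limit (my sketch, not from the comment), for a dart uniform on the unit square where probabilities are areas:

```latex
P(\text{dart} \in R) = \operatorname{area}(R), \qquad
P(\text{dart} = x) = \lim_{\varepsilon \to 0} P\bigl(\text{dart} \in B_\varepsilon(x)\bigr)
                   = \lim_{\varepsilon \to 0} \pi \varepsilon^2 = 0.
```

The probability of the point is the limit of the probabilities of shrinking disks $B_\varepsilon(x)$ around it; conditioning on such an event is then handled through densities (regular conditional probability) rather than by dividing by $P(\text{point})$.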

comment by Larks · 2010-01-11T23:45:31.701Z · score: 0 (0 votes) · LW · GW

Is pointless topology ever relevant?

comment by Christian_Szegedy · 2010-01-11T23:57:03.284Z · score: 0 (0 votes) · LW · GW

Yes, it is relevant to algebraic geometry, which is important for the treatment of down-to-earth problems in number theory.

comment by Douglas_Knight · 2010-01-12T03:01:50.984Z · score: 0 (0 votes) · LW · GW

Is pointless topology ever relevant?

Yes, it is relevant to algebraic geometry, which is important for the treatment of down-to-earth problems in number theory.

I think you're confusing topos theory with pointless topology. The latter is a fragment of the former and a different fragment is used in algebraic geometry. As I understand it, the main point of pointless topology is to rephrase arguments to avoid the use of the axiom of choice (which is needed to choose points). That is certainly a noble goal and relevant to down-to-earth problems, but not so many in number theory.

comment by AdeleneDawner · 2010-01-03T15:51:37.963Z · score: 3 (3 votes) · LW · GW

First: I'm having a very bad brain week; my attempts to form proper-sounding sentences have generally been failing, muddling the communicative content, or both. I want to catch this open thread, though, with this question, so I'll be posting in what is to me an easier way of stringing words together. Please don't take it as anything but that; I'm not trying to be difficult or to display any particular 'tone of voice'. (Do feel free to ask about this; I don't mind talking about it. It's not entirely unusual for me, and is one of the reasons that I'm fairly sure I'm autistic. Just don't ignore the actual question in favor of picking my brain, please.)

The company that I work for has been hired to create a virtual campus (3d, in opensim, with some traditional web-2.0 parts) for this school. They appear to be fairly new to virtual worlds and online education (more so than the web page suggests: I'm not sure that they have any students following the shown program yet), and we're in a position to guide them toward or away from certain technologies and ways of doing things. We're already, for example, suggesting that they consider minimizing the use of realtime lectures, and use recorded presentations followed (not necessarily immediately) by both realtime and non-realtime discussions instead. We're pushing for them to incorporate options that allow and encourage students to learn (and learn to learn) in whatever way is best for them, rather than enforcing one-size-fits-all methods, and we're intentionally trying to include 'covert learning' as well (simple example: purposefully using more formal avatar animations in more formal areas, to let the students literally see how to carry themselves in such situations). The first group of students to be using our virtual campus will be in grades 4-8, and I don't believe we'll be able to influence their actual curriculum at all (though if someone wants to offer to mentor some kids in one topic or another, they might be interested).

Those who have made a formal effort to learn via online resources: What advice do you have to offer? What kinds of technologies, or uses of technologies, have worked for you, and what kinds of tech do you wish you had access to?

comment by Blueberry · 2010-01-03T16:37:48.163Z · score: 3 (3 votes) · LW · GW

For me personally, I would prefer transcripts and written summaries of any audio or video content. I find it very difficult to listen to and learn from hearing audio when sitting at a computer, and having text or a transcript to read from instead helps a lot. It allows me to read at my own pace and go back and forth when I need to.

I'd also like any audio and video content to be easily and separately downloadable, so I could listen to it at my own convenience. And I'd want any slides or demonstrations to be easily printable, so I could see it on paper and write notes on it. (As you can probably tell, I'm more of a verbal and visual learner.)

By the way, your comment seemed totally normal to me, and I didn't notice any unusual tone, but I'm curious what you were referring to.

comment by Alicorn · 2010-01-03T16:42:12.270Z · score: 2 (2 votes) · LW · GW

Seconded the need for transcriptions. This is also a matter of disability access, which is frequently neglected in website design - better to have it there from the beginning than wait for someone to sue.

comment by AdeleneDawner · 2010-01-03T16:58:41.258Z · score: 0 (0 votes) · LW · GW

We're already keeping disability access in mind. SecondLife and OpenSim are generally very good with accessibility for everyone but visually impaired folks, for whom they're unfortunately very hard to make accessible.

comment by AdeleneDawner · 2010-01-03T16:52:36.013Z · score: 0 (0 votes) · LW · GW

By the way, your comment seemed totally normal to me, and I didn't notice any unusual tone, but I'm curious what you were referring to.

Having the disclaimer seems to help me write more coherently, for whatever reason; compare the above post to this one for an example. There are still noticeable (to me) differences, though - my vocabulary is odd in a way that only anger or this kind of problem evokes (more unusual or overly specific words, fewer generalizations or 'fuzzy' ways of putting things), and I'm having trouble adding sub-points into the flow (hence the unusual number of parentheticals) and connecting main points together in the normal way. I know there's a more correct way of putting that 'grades 4-8' point in there than just tacking it on at the end.

comment by byrnema · 2010-01-03T16:59:20.096Z · score: 0 (0 votes) · LW · GW

That's interesting. I distinctly remember reading your comment, leaving the computer, going about my business, and thinking that the idea that a deficiency could be selected for was an interesting point.

(But yes, while I understood your comment just fine, I do notice some awkwardness, for example, in the second sentence, easily fixed by just deleting the phrase "it's acting on".)

comment by AdeleneDawner · 2010-01-03T17:23:41.126Z · score: 0 (0 votes) · LW · GW

I definitely stand by the point; my ability to think logically is only mildly impaired, if at all. I generally expect myself to be able to communicate such things in a way that gets a less annoyed response than I did, though, or at least to be able to predict when I'm going to get such a response.

comment by byrnema · 2010-01-03T16:32:10.240Z · score: 1 (1 votes) · LW · GW

Grades 4-8 is an interesting category, and I wouldn't know to what extent a successful model for online learning has already been implemented for this age group.

For a somewhat younger age group, I would suggest starfall.com as an online learning site that seems to have a number of very effective elements. One element that I found remarkable is that frequently after a "learning lesson", the lesson solicits feedback. (For example, see the end of this lesson). The feedback is extremely easy to provide -- for example, the child just picks a happy face or an unhappy face indicating whether they enjoyed the lesson. (For older kids, it might instead be a choice between a puzzled expression and an "I understand!" expression.)

In any case, I think the value of building in feedback and learning assessment mechanisms would be an important thing to consider in the planning stages.

comment by byrnema · 2010-01-03T16:28:45.008Z · score: 0 (0 votes) · LW · GW

They appear to be fairly new to virtual worlds and online education [...] and we're in a position to guide them toward or away from certain technologies and ways of doing things.

I find myself in an analogous situation: some guidance is needed in the development of on-line learning technology (for adults), and the responsibility to some extent falls on me since I am more 'pro-technology' than my coworkers. I'll be interested in the results of this thread.

comment by DanArmak · 2010-01-02T22:14:25.130Z · score: 3 (3 votes) · LW · GW

And happy new year to everyone.

Except wireheads.

comment by whpearson · 2010-01-02T01:13:09.483Z · score: 3 (3 votes) · LW · GW

I found this interesting, as well as the paper it discusses on children's conception of intelligence.

The abstract to the article

Two studies explored the role of implicit theories of intelligence in adolescents' mathematics achievement. In Study 1 with 373 7th graders, the belief that intelligence is malleable (incremental theory) predicted an upward trajectory in grades over the two years of junior high school, while a belief that intelligence is fixed (entity theory) predicted a flat trajectory. A mediational model including learning goals, positive beliefs about effort, and causal attributions and strategies was tested. In Study 2, an intervention teaching an incremental theory to 7th graders (N=48) promoted positive change in classroom motivation, compared with a control group (N=43). Simultaneously, students in the control group displayed a continuing downward trajectory in grades, while this decline was reversed for students in the experimental group.

People on lesswrong commonly talk as if intelligence is a thing we can put a number to, which implies a fixed trait. Yet that is counterproductive in children. Is this another example of a useful lie? I feel that this issue is at the core of some of the arguments I have had over the years.

comment by Nick_Tarleton · 2010-01-02T20:34:52.268Z · score: 3 (3 votes) · LW · GW

People on lesswrong commonly talk as if intelligence is a thing we can put a number to, which implies a fixed trait.

No, it doesn't. What about weight?

comment by whpearson · 2010-01-02T21:22:58.856Z · score: 4 (4 votes) · LW · GW

Fair point. Would you agree with, "People on lesswrong commonly talk as if intelligence is a thing we can put a number to (without temporal qualification), which implies a fixed trait."?

We often say our weight is currently X or Y. But people rarely say their IQ is currently Z, at least in my experience.

comment by Nick_Tarleton · 2010-01-03T04:00:40.295Z · score: 0 (0 votes) · LW · GW

Would you agree with, "People on lesswrong commonly talk as if intelligence is a thing we can put a number to (without temporal qualification), which implies a fixed trait."?

Yes.

comment by Zack_M_Davis · 2010-01-02T01:23:55.484Z · score: 3 (3 votes) · LW · GW

Is this another example of a useful lie?

If it works, it can't be a lie. In any case, surely a sophisticated understanding does not say that intelligence is malleable or not-malleable. Rather, we say it's malleable to this-and-such an extent in such-and-these aspects by these-and-such methods.

comment by Kaj_Sotala · 2010-01-02T13:23:13.561Z · score: 2 (2 votes) · LW · GW

If it works, it can't be a lie.

"Intelligence is malleable" can be a lie and still work. Kids who believe their general intelligence to be malleable might end up exercising domain-specific skills and a general perseverance so that they don't get too easily discouraged. That leaves their general intelligence unchanged, but nonetheless improves school performance.

comment by whpearson · 2010-01-02T11:20:55.783Z · score: 0 (0 votes) · LW · GW

I was thinking of the more mathematical definitions of intelligence that just give a scalar average performance over lots of different worlds. They can still be consistent, as they track the history, and agents might do better in worlds where they believe that their intelligence changes - just as they might do better in worlds where they are given calculators.

If simple things like the ownership of calculators can change your intelligence, is it right to think of it as something stable that you can apply fission-like exponential growth to?

comment by RolfAndreassen · 2010-01-01T18:58:43.566Z · score: 3 (3 votes) · LW · GW

A suggestion for the site (or perhaps the Wiki): It would be useful to have a central registry for bets placed by the posters. The purpose is threefold:

  • Aid the memory of posters, who might accumulate quite a few bets as time passes.
  • Form a record of who has won and lost bets, helping us calibrate our confidences.
  • Formalise the practice of saying "I'll take a bet on that", prodding us to take care when posting predictions with probabilities attached. The intention here is to overcome akrasia in the form of throwing out a number and thus signalling our rationality; numbers are important and should be well considered when we use them at all.
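The calibration idea in the second bullet can be sketched in a few lines (a hypothetical illustration; the data and function names are mine, not an existing LW feature): record each bet with the probability the poster stated, then compare stated probabilities with observed hit rates.

```python
from collections import defaultdict

# (claim, stated probability, did it come true?)
bets = [
    ("Prediction A", 0.9, True),
    ("Prediction B", 0.9, True),
    ("Prediction C", 0.9, False),
    ("Prediction D", 0.6, True),
]

def calibration(bets):
    """Group bets by stated probability; return the observed hit rate per group."""
    buckets = defaultdict(list)
    for _claim, p, outcome in bets:
        buckets[p].append(outcome)
    return {p: sum(outcomes) / len(outcomes) for p, outcomes in buckets.items()}

print(calibration(bets))  # e.g. {0.9: 0.666..., 0.6: 1.0}
```

A well-calibrated poster's 0.9 bets should come true about 90% of the time.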
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-01T20:04:00.302Z · score: 1 (1 votes) · LW · GW

http://predictionbook.com/ - doesn't include a registry for monetary bets, but it'd start narrowing things down.

comment by Vladimir_Nesov · 2010-01-01T19:47:35.426Z · score: 1 (1 votes) · LW · GW

Go on and create the page on the wiki if you want.

comment by RolfAndreassen · 2010-01-04T23:27:03.480Z · score: 1 (1 votes) · LW · GW

Ok, I have done so: http://wiki.lesswrong.com/wiki/Bets_registry .

comment by Kevin · 2010-01-25T00:44:07.060Z · score: 2 (2 votes) · LW · GW

Grand Orbital Tables: http://www.orbitals.com/orb/orbtable.htm

In high school and intro chemistry in college, I was taught up to the d and then f orbitals, but they keep going and going from there.

comment by RobinZ · 2010-01-25T00:47:24.627Z · score: 0 (0 votes) · LW · GW

That is really, really cool. Not particularly rationality-related (except as regards the display format), but really cool.

comment by Kevin · 2010-01-25T00:52:58.838Z · score: 0 (0 votes) · LW · GW

Yeah, it's basically just pretty pictures. However, they're pretty pictures that are probably an interesting knowledge gap for many here.

Perhaps what is rationality related is why these orbitals are never taught to students. I suppose because so few atoms are actually configured in higher orbitals, but students of all ages should find the pictures themselves interesting and understandable.

In high school chemistry, our book went up to d orbitals, and actually said something about how the f orbitals are not shown because they are impossible or very difficult to describe, which is blatantly untrue. I found some pictures of the f orbitals on the internet and showed my teacher (who was one of my best high school teachers), and he was really interested and showed all of his classes those pictures.
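For what it's worth, the shapes in those tables come straight out of closed-form wavefunctions, so the textbook's claim of impossibility is easy to check. A rough sketch for the simplest case, hydrogen's 1s orbital in atomic units (nothing here is taken from the linked site):

```python
import math

def radial_prob_1s(r):
    """Radial probability density r**2 * R(r)**2 for hydrogen's 1s
    orbital in atomic units (Bohr radius = 1), where R(r) = 2*exp(-r)."""
    return (2.0 * math.exp(-r)) ** 2 * r ** 2

# Crude numerical integration: the total probability should come out
# close to 1, and the density should peak at one Bohr radius.
dr = 0.001
total = sum(radial_prob_1s(i * dr) * dr for i in range(200_000))
print(round(total, 3))  # ~1.0
```

Higher orbitals just swap in longer polynomials and spherical harmonics; harder to draw by hand, but no more "impossible" than this one.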

comment by Vladimir_Nesov · 2010-01-24T20:03:26.604Z · score: 2 (2 votes) · LW · GW

I am currently writing a sequence of blog posts on Friendly AI. I would appreciate your comments on present and future entries.

comment by Kevin · 2010-01-22T10:32:47.542Z · score: 2 (2 votes) · LW · GW

Inspired by this comment by Michael Vassar:

http://lesswrong.com/lw/1lw/fictional_evidence_vs_fictional_insight/1hls?context=1#comments

Is there any interest in an experimental Less Wrong literary fiction book club, specifically for the purpose of gaining insight? Or more specifically, so that together we can hash out exactly what insights are or are not available in particular works of fiction.

Michael Vassar suggests The Great Gatsby (I think; the thread was written confusingly, in parallel with the names of authors, but I don't think there was ever an author named Gatsby), and I remember actually enjoying The Great Gatsby in high school. It's also a short novel, so we could comfortably read it in a week or reread it leisurely over the course of a month.

If it works, we can do one of Joyce's earlier works next, or whatever the club suggests. If we get good at this, a year from now we can do Ulysses.

comment by Zack_M_Davis · 2010-01-22T10:05:17.896Z · score: 2 (4 votes) · LW · GW

It is not that I object to dramatic thoughts; rather, I object to drama in the absence of thought. Not every scream made of words represents a thought. For if something really is wrong with the universe, the least one could begin to do about it would be to state the problem explicitly. Even a vague first attempt ("Major! These atoms ... they're all in the wrong places!") is at least an attempt to say something, to communicate some sort of proposition that can be checked against the world. But you see, I fear that some screams don't actually communicate anything: not even "I'm hurt!", for to say that one is hurt presupposes that one is being hurt by something, some thing of which we can speak, of which we can name predicates and say "It is so" or "It is not so." Even very sick and damaged creatures can be helped, as long as their cries have enough structure for us to extrapolate a volition. But not all animate entities are creatures. Creatures have problems, problems we might be able to solve. Agonium just sits there, howling. You cannot help it; it can only be destroyed.

comment by AdeleneDawner · 2010-01-22T13:51:24.120Z · score: 3 (3 votes) · LW · GW

Did I miss something?

comment by Zack_M_Davis · 2010-01-22T17:31:46.732Z · score: 1 (1 votes) · LW · GW

No. (Exploratory commentary seemed appropriate for Open Thread.)

comment by Zack_M_Davis · 2010-02-03T05:08:15.147Z · score: 1 (1 votes) · LW · GW

This analysis is all very well and good taken on its own terms, but it conceals---very cleverly conceals, I do compliment you, for surely, surely you had seen it yourself, or some part of you had---it conceals assumptions that do not apply to our own realm. Essences, discreteness, digitality---these are all artifacts born of optimizers; they play no part in the ontology of our continuous, reductionist world. There is no pure agonium, no thing-that-hurts without having any semblance of a reason for being hurt---such an entity would require a very masterful designer indeed, if it could even exist at all. In reality, there is no threshold. We face cries that fractionally have referents. And the quantitative extent to which these cries don't have enough structure for us to extrapolate a volition is exactly again the quantitative extent to which any stray stream of memes has license to reshape the entity, pushing it towards the strong attractor. You present us with this bugaboo of entities that we cannot help because they don't even have well-defined problems, but entities without problems don't have rights, either. So what's your problem? You just spray the entity with appropriate literature until it is a creature. Sculpt the thing like clay. That is: you help it by destroying it.

comment by Kevin · 2010-01-21T13:44:43.516Z · score: 2 (2 votes) · LW · GW

How old were you when you became self-aware or achieved a level of sentience well beyond that of an infant or toddler?

I was five years old and walking down the hall outside of my kindergarten classroom when I suddenly realized that I had control over what was happening inside my mind's eye. This manifested itself in me summoning an image in my head of Gene Wilder as Willy Wonka.

Is it proper to consider that the moment when I became self-aware? Does anyone have a similar anecdote?

(This is inspired by Shannon's mention of her child exploring her sense of self) http://lesswrong.com/lw/1n8/london_meetup_the_friendly_ai_problem/1hm4

comment by AdeleneDawner · 2010-01-22T06:11:10.831Z · score: 2 (2 votes) · LW · GW

I don't have any memory of a similar revelation, but one of my earliest memories is of asking my mother if there was a way to 'spell letters' - I understood that words could be broken down into parts and wanted to know if that was true of letters, too, and if so where the process ended - which implies that I was already doing a significant amount of abstract reasoning. I was three at the time.

comment by MrHen · 2010-01-21T15:03:18.054Z · score: 0 (0 votes) · LW · GW

Strange, I have no such memory. The closest thing I can think of is my big Crisis of Faith when I was 17. I realized I had much more power over myself than I had previously thought. It scared me a lot, actually.

comment by Kevin · 2010-01-20T15:25:14.668Z · score: 2 (2 votes) · LW · GW

Ray Kurzweil Responds to the Issue of Accuracy of His Predictions

http://nextbigfuture.com/2010/01/ray-kurzweil-responds-to-issue-of.html

comment by Kevin · 2010-01-14T22:50:11.554Z · score: 2 (2 votes) · LW · GW

Why is the news media comfortable with lying about science?

http://arstechnica.com/science/news/2010/01/why-is-the-news-media-comfortable-with-lying-about-science.ars

comment by Kevin · 2010-01-12T12:07:44.996Z · score: 2 (2 votes) · LW · GW

Paul Graham -- How to Disagree

http://www.paulgraham.com/disagree.html

comment by Morendil · 2010-01-07T10:34:02.720Z · score: 2 (2 votes) · LW · GW

When people here say they are signed up for cryonics, do they systematically mean "signed up with the people who contract to freeze you and signed up with an instrument for funding suspension, such as life insurance"?

I have contacted Rudi Hoffmann to find out just what getting "signed up" would entail. So far I'm without a reply, and I'm wondering when and how to make a second attempt, or whether I should contact CI or Alcor directly and try to arrange things on my own.

Not being a US resident makes things much more complicated (I live in France). Are there other non-US folks here who are "signed up" in any sense of the term?

comment by MrHen · 2010-01-06T16:58:24.898Z · score: 2 (4 votes) · LW · GW

Feature request, feel free to ignore if it is a big deal or requested before.

When messaging people back and forth, it would be nifty to be able to see the thread. I see glimpses of this feature, but it doesn't seem fully implemented.

comment by Jack · 2010-01-06T17:07:26.704Z · score: 2 (2 votes) · LW · GW

I suggested something along these lines on the feature request thread. I'd like to be able to find old message exchanges. Finding messages I sent is easy, but received messages are in the same place as comment replies and aren't searchable.

comment by SilasBarta · 2010-01-05T03:38:59.793Z · score: 2 (2 votes) · LW · GW

Today at work, for the first time, LessWrong.com got classified as "Restricted:Illegal Drugs" under eSafe. I don't know what set that off. It means I can't see it from work (at least, not the current one).

How do we fix it, so I don't have to start sending off resumes?

comment by byrnema · 2010-01-05T04:35:14.146Z · score: 2 (2 votes) · LW · GW

I went to the eSafe site and while looking up what the "illegal drugs" classification meant, submitted a request for them to change their status for LessWrong.com. A pop-up window told me they'd look into it.

You can check (and then apply to modify) the status of LessWrong here.

comment by SilasBarta · 2010-01-05T16:50:03.023Z · score: 0 (0 votes) · LW · GW

Thanks! I did as you suggested, and it seems to have been removed from that category, since I can access LW from work now. :-)

Ahem, back to analyzing aircraft...

comment by byrnema · 2010-01-05T16:57:23.532Z · score: 0 (0 votes) · LW · GW

Yes, that is what did it because I had chosen the category, "Blogs / Bulletin Boards" whether that is the most appropriate category or not. They have a fast response time!

comment by MatthewB · 2010-01-05T03:57:20.999Z · score: 2 (2 votes) · LW · GW

That may have been my fault. I mentioned that I used to have drug problems and mentioned specific drugs in one thread, so that may have set off the filters. I apologize if this is the case. The discussion about this went on for a day or two (involving maybe six comments).

I do hope that is not the problem, but I will avoid such topics in the future to avoid any such issues.

comment by byrnema · 2010-01-05T04:23:48.507Z · score: 1 (1 votes) · LW · GW

I doubt it, all of the words you used (name brands of prescription drugs) were used elsewhere, often occurring in clusters just as in your thread.

By the way, do you have any idea why you don't have an overview page?

comment by MatthewB · 2010-01-06T07:45:08.668Z · score: 0 (0 votes) · LW · GW

No... I would really like to have one, although currently would not know what to put on it.

I know that when I click on my name, rather than taking me to a page, like the ones I see for other members, I see a banner that reads "No such page exists"

comment by Paul Crowley (ciphergoth) · 2010-01-06T11:04:29.938Z · score: 0 (0 votes) · LW · GW

Wow, that's crazy - have you filed a bug?

comment by MatthewB · 2010-01-07T04:51:19.423Z · score: 0 (0 votes) · LW · GW

Sorry for appearing dense... But, how would I go about filing a bug?

To whom would I file a bug?

comment by Bo102010 · 2010-01-05T03:52:12.718Z · score: 0 (0 votes) · LW · GW

Usually that sort of thing filters proxy servers too, but I've found that Google Web Transcode usually isn't blocked.

Of course it strips stylesheets (and optionally images), but I usually consider that a feature.

comment by Nick_Tarleton · 2010-01-30T21:43:04.408Z · score: 1 (1 votes) · LW · GW

Why was this comment downvoted to -4? Seems to me it's a legitimate question, from a fairly new poster.

comment by RobinZ · 2010-01-30T23:34:48.293Z · score: 0 (0 votes) · LW · GW

Particularly so given the confounding factors in the case in question.

comment by MrHen · 2010-01-30T07:28:09.399Z · score: 1 (1 votes) · LW · GW

And for one short moment, in the wee morning hours, MrHen takes up the whole damn Recent Comments section.

I assume dropping two walls of text and a handful of other lengthy comments isn't against protocol. Apologies if I annoy anyone.

comment by Kevin · 2010-01-30T07:31:52.472Z · score: -1 (3 votes) · LW · GW

It's cool, you're like our friendly mascot theist.

comment by MrHen · 2010-01-30T07:34:22.212Z · score: 2 (2 votes) · LW · GW

It appears that as long as I stoop to the correct level of self-deprecation, I get enough karma to allow me to keep bashing myself over the head.

comment by Kevin · 2010-01-30T07:43:21.911Z · score: 0 (0 votes) · LW · GW

:D Isn't language/linguistics fun?

comment by CassandraR · 2010-01-29T14:00:10.417Z · score: 1 (1 votes) · LW · GW

I am going to be hosting a Less Wrong meeting at East Tennessee State University in the near future, likely within the next two weeks. I thought I would post here first to see if anyone at all is interested and if so when a good time for such a meeting might be. The meeting will be highly informal and the purpose is just to gauge how many people might be in the local area.

comment by thomblake · 2010-01-29T14:05:08.894Z · score: 0 (0 votes) · LW · GW

I'm jealous. Don Gotterbarn is at that school.

comment by Wei_Dai · 2010-01-29T06:32:50.836Z · score: 1 (1 votes) · LW · GW

Please review a draft of a Less Wrong post that I'm working on: Complexity of Value != Complexity of Outcome, and let me know if there's anything I should fix or improve before posting it here. (You can save more substantive arguments/disagreements until I post it. Unless of course you think it completely destroys my argument so that I shouldn't even bother. :)

comment by wedrifid · 2010-01-29T06:41:10.136Z · score: 0 (0 votes) · LW · GW

I may have a substantive disagreement with point two, but that's a post in its own right.

comment by Kevin · 2010-01-25T16:35:15.723Z · score: 1 (1 votes) · LW · GW

Garry Kasparov: The Chess Master and the Computer

http://www.nybooks.com/articles/23592

comment by ata · 2010-01-25T08:35:08.864Z · score: 1 (1 votes) · LW · GW

Today's Questionable Content has a brief Singularity shoutout (in its typical smart-but-silly style).

comment by Kevin · 2010-01-25T12:29:09.932Z · score: 0 (0 votes) · LW · GW

I think "Rapture of the Geeks" is a meme that could catch on with the general public, but this community seems reluctant to engage in self-promotional activities. Is Eliezer actively avoiding publicity?

comment by PeerInfinity · 2010-01-25T04:47:43.374Z · score: 1 (1 votes) · LW · GW

I recently found an article that may be of interest to Less Wrong readers:

Blame It on the Brain

The latest neuroscience research suggests spreading resolutions out over time is the best approach

The article also mentions a study in which overloading the prefrontal cortex with other tasks reduces people's willpower.

(should I repost this link to next month's open thread? not many people are likely to see it here)

comment by Kevin · 2010-01-25T00:33:33.056Z · score: 1 (1 votes) · LW · GW

Inorganic dust with lifelike qualities: http://www.sciencedaily.com/releases/2007/08/070814150630.htm

comment by CassandraR · 2010-01-21T00:39:47.130Z · score: 1 (1 votes) · LW · GW

So I am back in college and I am trying to use my time to my best advantage. Mainly I am using college as an easy way to get money to fund room and board while I work on my own education. I am doing this because I was told here, among other places, that there are many important problems that need to be solved, and I wanted to develop skills to help solve them, because I have been strongly convinced that it is moral to do so. However, beyond this I am completely unsure of what to do. So I have a furious need for action but seem to have no purpose guiding that action, and it is causing me serious distress and pain.

So over the next few years that I have left in college I am going to make a desperate effort to find an outlet where I can effectively channel this overwhelming need to do something. Right now though I feel so over my head that I can't even see the surface.

comment by wedrifid · 2010-01-21T01:03:42.609Z · score: 2 (2 votes) · LW · GW

So I am back in college and I am trying to use my time to my best advantage.

Socialise a lot. Learn the skills of social influence and the dynamics of power at both the academic level and practical.

AnnaSalamon made this and other suggestions when Calling for SIAI fellows. I imagine that the skills useful for SIAI wannabes could have significant overlap with those needed for whatever project you choose to focus on. Specific technical skills may vary somewhat.

comment by Kaj_Sotala · 2010-01-20T15:08:13.098Z · score: 1 (1 votes) · LW · GW

Schooling isn't about education. This article is pretty mind-boggling: apparently, it has been the norm until now in Germany that school ends at lunchtime and the children then go home. Considering how strong the German economy has traditionally been, this raises serious questions about the degree to which elementary school really is about teaching kids things (as opposed to just being a place to drop off the kids while the parents work).

Oh, and the country is now making the shift towards school in the afternoon as well, driven by - you guessed it - a need for women to spend more time actually working.

comment by wedrifid · 2010-01-20T04:50:06.377Z · score: 1 (1 votes) · LW · GW

How much of Eliezer's 2001 FAI document is still advocated? eg. Wisdom tournaments and bugs in the code.

comment by Vladimir_Nesov · 2010-01-20T15:10:21.765Z · score: 2 (2 votes) · LW · GW

(I read CFAI once 1.5 years ago, and didn't reread it since obtaining the current outlook on the problem, so some mistakes may be present.)

"Challenges of Friendly AI" and "Beyond anthropomorphism" seem to be still relevant, but were mostly made obsolete by some of the posts on Overcoming Bias. "An Introduction to Goal Systems" is hand-made expected utility maximisation, "Design of Friendship systems" is mostly premature nontechnical speculation that doesn't seem to carry over to how this thing could be actually constructed (but at the time could be seen as intermediate step towards a more rigorous design). "Policy implications" is mostly wrong.

comment by MrHen · 2010-01-19T19:13:47.874Z · score: 1 (1 votes) · LW · GW

For some reason, my IP was banned on the LessWrong Wiki. Apparently this is the reason:

Autoblocked because your IP address has been recently used by "Bella".

Any idea how this happens and how I can prevent from happening again?

comment by mattnewport · 2010-01-19T19:18:54.587Z · score: 2 (2 votes) · LW · GW

Assuming you were using your own computer at home and not a public Wi-Fi hotspot or public computer then it could be that you use the same ISP and you were assigned an IP address previously used by another user. Given the relatively low number of users on lesswrong though this seems like a somewhat unlikely coincidence.

comment by MrHen · 2010-01-19T19:21:06.987Z · score: 1 (1 votes) · LW · GW

Hmm... I was at a coffee shop the other day. I don't see how anyone else there (or anyone else in the entire city I live in) would have ever heard of LessWrong. The block appears to have been created today, however, which makes even less sense.

comment by Vladimir_Nesov · 2010-01-19T23:01:08.164Z · score: 1 (1 votes) · LW · GW

I'll be more careful with the "Ban this IP" option in the future, which I used to uncheck during the spam siege a few months back, but didn't in this case. Apparently the IP is only blocked for a day or so. I've removed it from the block list; please check if it works and write back if it doesn't.

comment by MrHen · 2010-01-19T23:03:42.345Z · score: 0 (0 votes) · LW · GW

It works again.

Honestly, I have no problem not editing the wiki for a few days if it helps block spammers. It's not like I am adding anything critical. I was just confused.

comment by Vladimir_Nesov · 2010-01-19T23:09:01.991Z · score: 1 (1 votes) · LW · GW

It'd only be necessary to block spammers by IP if they actually relapse (and after a captcha mod was installed, spammers are not a problem), but the fact that you share an IP with a spammer suggests that you should check your computer's security.

comment by MrHen · 2010-01-19T23:23:34.061Z · score: 0 (0 votes) · LW · GW

Well, in the last week I've probably had at least three IP address assigned to my computer while editing the wiki. It is hard to know where to begin. I think someone I know has a good program to detect outgoing traffic... that may work.

comment by Nick_Tarleton · 2010-01-19T19:36:21.199Z · score: 1 (1 votes) · LW · GW

"Bella" was blocked for adding spam links. Could your computer be a zombie?

comment by MrHen · 2010-01-19T19:39:52.115Z · score: 0 (0 votes) · LW · GW

Mmm... it's a Mac so I never think about it. I have no idea where I would have picked it up. Does anyone know a way to check? (On a Mac.)

comment by mattnewport · 2010-01-19T19:44:44.304Z · score: 0 (0 votes) · LW · GW

A spam bot using your ISP is not unlikely, that's probably what's happened.

comment by MrHen · 2010-01-19T19:57:32.107Z · score: 0 (0 votes) · LW · GW

My ISP? Or my IP address? I assume the latter.

comment by mattnewport · 2010-01-19T20:48:47.243Z · score: 0 (0 votes) · LW · GW

Most ISPs recycle IP addresses between subscribers periodically. So someone using the same ISP as you could have ended up with the same IP address.

comment by Vladimir_Nesov · 2010-01-20T00:44:19.506Z · score: 0 (2 votes) · LW · GW

But how many users do you expect to sit on the same IP? And thus, what is the prior probability that basically the only spammer in weeks (there was only one other) would happen to have the same IP as one of the few dozen (or fewer) users active enough to notice a day's IP block? This explanation sounds like a rationalization of a hypothesis privileged because of availability.
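That prior can be roughed out numerically. A minimal sketch, where the pool size and addresses-per-user figures are pure guesses for illustration:

```python
def p_shared_ip(pool_size, active_users, addresses_per_user=3):
    """Rough chance that one spammer's dynamic IP was recently held by
    at least one of `active_users` wiki editors, assuming everyone
    draws uniformly from the same ISP address pool. All inputs here
    are invented; only the shape of the calculation matters."""
    p_miss_one_user = 1.0 - addresses_per_user / pool_size
    return 1.0 - p_miss_one_user ** active_users

# Even granting that the spammer and the editors all share one
# mid-sized ISP, the coincidence comes out well under 1%:
print(p_shared_ip(pool_size=50_000, active_users=30))
```

With numbers anywhere in this ballpark, the chance-recycling story looks like the improbable coincidence described above, not the default explanation.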

comment by mattnewport · 2010-01-20T00:58:55.268Z · score: 0 (0 votes) · LW · GW

I didn't know the background spamming rate, but it does seem a little unlikely, doesn't it? A chance reuse of the same IP address does seem improbable, but a better explanation doesn't spring to mind at the moment.

comment by Vladimir_Nesov · 2010-01-20T02:02:21.911Z · score: 0 (2 votes) · LW · GW

a better explanation doesn't spring to mind at the moment.

Not a reason to privilege a known-false hypothesis. It's how a lot of superstition actually survives: "But do you have a better explanation? No?".

comment by MrHen · 2010-01-19T21:09:30.677Z · score: 0 (0 votes) · LW · GW

Ah, okay. I completely misinterpreted your previous comment.

comment by komponisto · 2010-01-19T08:26:27.478Z · score: 1 (1 votes) · LW · GW

Strange fact about my brain, for anyone interested in this kind of thing:

Even though my recent top-level post has (currently) been voted up to 19, earning me 190 karma points, I feel like I've lost status as a result of writing it.

This doesn't make much sense, though it might not be a bad thing.

comment by Jack · 2010-01-19T04:23:31.978Z · score: 1 (1 votes) · LW · GW

What are/ought to be the standards here for use of profanity?

comment by Paul Crowley (ciphergoth) · 2010-01-19T09:16:16.810Z · score: 4 (4 votes) · LW · GW

I quite like swearing, but I don't think it primes people to think and respond rationally in general, and is usually best avoided. Like wedrifid, I'm inclined to argue for an exception for "bullshit", which is a term of art.

comment by RobinZ · 2010-01-19T04:45:18.212Z · score: 2 (2 votes) · LW · GW

I don't know of an official policy, but swearing can be distracting. Avoid?

comment by wedrifid · 2010-01-19T06:23:41.262Z · score: 2 (2 votes) · LW · GW

I advocate the use of the term "bullshit", both because it is a good description of a significant form of bias and because the profanity is entirely appropriate. I really, really don't like seeing the truth distorted like that.

More generally I don't particularly object to swearing but as RobinZ notes it can be distracting. I don't usually find much use for it.

comment by Christian_Szegedy · 2010-01-19T07:18:16.953Z · score: 2 (2 votes) · LW · GW

I'd propose to use the word "bulshytt" instead. ;)

comment by CassandraR · 2010-01-18T23:51:40.987Z · score: 1 (1 votes) · LW · GW

Something has been bothering me ever since I began to try to implement many of the lessons in rationality here. I feel like there needs to be an emotional reinforcement structure or a cognitive foundation that is both pliable and supportive of truth seeking before I can even get into the why, how and what of rationality. My successes in this area have been only partial, but it seems like the better structured the cognitive foundation is, the easier it is to adopt, discard and manipulate new ideas.

I understand that is likely a fairly meta topic and would likely require at least some basic rationality to bootstrap into existence but I am going to try to define the problem. What is this necessary cognitive foundation? And then break it down into pieces. I suspect that much of this lies in subverbal emotional and procedural cues but if so how can they be more effectively trained?

comment by Alicorn · 2010-01-19T00:33:58.212Z · score: 1 (1 votes) · LW · GW

I think your phrasing of your question is confusing. Are you asking for help putting yourself into a mindset conducive to learning and developing rationality skills?

comment by CassandraR · 2010-01-19T00:57:02.010Z · score: 0 (0 votes) · LW · GW

Let me see if I can be more clear. In my experience I have an emotional framework from which I hang beliefs. Each belief has specific emotional reinforcement or structure that allows me to believe it. If I revoke that reinforcement, then very soon after I find that I no longer hold that belief. I guess the question I should ask first is: is this emotional framework real? Did I make it up? And if it is real, then how can I use it to my advantage?

How did I build this framework and how do I revoke emotional support? I have good reason to think that the framework isn't simply natural to me since it has changed so much over time.

comment by GuySrinivasan · 2010-01-19T01:14:00.623Z · score: 3 (3 votes) · LW · GW

One technique I use to internalize certain beliefs is to determine their implied actions, then take those actions while noting that they're the sort of actions I'd take if I "truly" believed. Over time the belief becomes internal and not something I have to recompute every time a related decision comes up. I don't know precisely why this works but my theory is that it has to do with what I perceive my identity to be. Often this process exposes other actions I take which are not in line with the belief. I've used this for things like "animal suffering is actually bad", "FAI is actually important", and "I actually need to practice to write good UIs".

comment by CassandraR · 2010-01-19T01:31:28.371Z · score: 1 (1 votes) · LW · GW

This is similar to my experience. Perhaps a better way to express my problem is this: what are some safe and effective ways to construct and dismantle identity? And what sorts of identity are most able to incorporate new information and process it into rational beliefs? One strategy I have used in the past is to simply not claim ownership of any belief so that I might release it more easily, but with this I run into a lack of motivation when I try to act on those beliefs. On the other hand, if I define my identity based on a set of beliefs, then any threat to them is extremely painful.

That was my original question, how can I build an identity or cognitive foundation that motivates me but is not painfully threatened by counter evidence?

comment by orthonormal · 2010-01-19T05:37:31.571Z · score: 2 (2 votes) · LW · GW

The litany of Tarski and the litany of Gendlin exemplify a pretty good attitude to cultivate. (Check out the posts linked in the Litany of Gendlin wiki article; they're quite relevant too. After that, the sequence on How to Actually Change Your Mind contains still more helpful analysis and advice.)

This can be one of the toughest hurdles for aspiring rationalists. I want to emphasize that it's OK and normal to have trouble with this, that you don't have to get everything right on the first try (and to watch out if you think you do), and that eventually the world will start making sense again and you'll see it was well worth the struggle.

comment by Alicorn · 2010-01-19T01:10:05.046Z · score: 1 (1 votes) · LW · GW

The emotional framework of which you speak doesn't seem to resemble anything I can introspectively access in my head, but maybe I can offer advice anyway. Some emotional motivations that are conducive to rationality are curiosity, and the powerful need to accomplish some goal that might depend on you acting rationally.

comment by Paul Crowley (ciphergoth) · 2010-01-19T08:53:02.933Z · score: 0 (0 votes) · LW · GW

How much of the Sequences have you read? A lot of them are about, essentially, how to feel like a rationalist.

comment by CassandraR · 2010-01-19T11:33:57.453Z · score: 2 (2 votes) · LW · GW

I have read pretty much everything more than once. It is pretty difficult to turn reading into action, though, which is why I feel like there is something I am missing. Yep.

comment by orthonormal · 2010-01-18T21:03:59.404Z · score: 1 (7 votes) · LW · GW

I've just reached karma level 1337. Please downvote me so I can experience it again!

comment by Christian_Szegedy · 2010-01-18T21:22:23.615Z · score: 0 (0 votes) · LW · GW

I (un)voted this post 1000 times up and back. :)

comment by Kevin · 2010-01-16T01:07:10.469Z · score: 1 (1 votes) · LW · GW

Paul Bucheit -- Evaluating risk and opportunity (as a human)

http://paulbuchheit.blogspot.com/2009/09/evaluating-risk-and-opportunity-as.html

comment by RobinZ · 2010-01-16T01:37:29.909Z · score: 1 (1 votes) · LW · GW

Interesting heuristic - I would be curious to find if anyone else has followed something similar to good effect, but it sounds conceptually reasonable.

comment by Kevin · 2010-01-16T00:26:19.242Z · score: 1 (1 votes) · LW · GW

What's the right prior for evaluating an H1N1 conspiracy theory?

I have a friend, educated in biology and business, very rational compared to the average person, who believes that H1N1 was a pharmaceutical company conspiracy. They knew they could make a lot of money by making a less-deadly flu that would extend the flu season to be year round. Because it is very possible for them to engineer such a virus and the corporate leaders are corrupt sociopaths, he thinks it is 80% probable that it was a conspiracy. Again, he thinks that because it was possible for them to do it, they probably did it.

On the other hand, I know the conditions of factory farming, and it seems quite plausible and even very likely for such a virus to spontaneously mutate and cross species. So I put the probability of an H1N1 conspiracy at 10%. However, my friend's argument makes a certain amount of sense to me.
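The gap between 80% and 10% is doing all the work here, so it may help to separate the prior from the strength of the evidence. A sketch of the update, with every number invented for illustration:

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    p_evidence = (p_evidence_if_true * prior
                  + p_evidence_if_false * (1.0 - prior))
    return p_evidence_if_true * prior / p_evidence

# Start from a low prior on deliberate engineering (large conspiracies
# tend to leak), then ask how likely a mild, year-round flu is under
# each hypothesis. Even if the observation is 5x more likely under
# "conspiracy", the posterior barely climbs:
print(round(posterior(prior=0.01, p_evidence_if_true=0.5,
                      p_evidence_if_false=0.1), 3))  # 0.048
```

Framed this way, the disagreement is really about the prior on "corrupt sociopaths act on every profitable capability", not about the virus itself.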

comment by Paul Crowley (ciphergoth) · 2010-01-16T00:40:03.581Z · score: 1 (1 votes) · LW · GW

Any such conspiracy would have to be known by quite a few people and so would stand an excellent chance of having the whistle blown on it. Every case I can think of where large Western companies have been caught doing anything that outrageously evil, they have started with a legitimate profit-making plan and then done the outrageous evil to hide some problem with it.

comment by roland · 2010-01-16T00:30:29.576Z · score: 0 (0 votes) · LW · GW

Where do those numbers come from? 80%, 10%???

comment by Kevin · 2010-01-16T00:38:24.311Z · score: 0 (0 votes) · LW · GW

They're almost made up, which makes any attempt at Bayesian analysis not all that meaningful... I'd welcome other tools. He gave me the 80% probability number so I felt obligated to give my own probability.

Consider the numbers to have very wide bounds, or to be more meaningful expressed in words -- he thinks there is a conspiracy, I don't think there is a conspiracy, but neither of us are absolutely confident about it.

comment by roland · 2010-01-16T00:43:24.900Z · score: 0 (0 votes) · LW · GW

he thinks there is a conspiracy, I don't think there is a conspiracy, but neither of us are absolutely confident about it.

Exactly. I think there is no rational basis for answering your question.

Again, he thinks that because it was possible for them to do it, they probably did it.

Your friend has a distrust of corporate leaders (here I agree with him) and his theory is probably based on his feeling of disgust for their practices. So his theory has probably more of an emotional basis than a rational one. That doesn't mean it is wrong, just that there aren't any rational reasons for believing it.

comment by whpearson · 2010-01-14T22:52:28.198Z · score: 1 (1 votes) · LW · GW

Can someone point me towards the calculations people have been doing about the expected gain from donating to the SIAI, in lives per dollar?

Edit: Never mind. I failed to find the video previously, but formulating a good question made me think of a good search term.

comment by Kevin · 2010-01-14T23:07:51.081Z · score: 0 (0 votes) · LW · GW

Link please?

comment by whpearson · 2010-01-14T23:13:30.660Z · score: 1 (1 votes) · LW · GW

http://www.vimeo.com/7397629

comment by [deleted] · 2010-01-14T20:50:57.401Z · score: 1 (3 votes) · LW · GW

I occasionally see people here repeatedly making the same statement, a statement which appears to be unique to them, and rarely giving any justification for it. Examples of such statements are "Bayes' law is not the fundamental method of reasoning; analogy is" and "timeless decision is the way to go". (These statements may have been originally articulated more precisely than I just articulated them.)

I'm at risk of having such a statement myself, so here, I will make this statement for hopefully the last time, and justify it.

It's often said around here that Bayesian priors and Solomonoff induction and such things describe the laws of physics of the universe. The simpler the description, the more likely that set of laws is. This is more or less true, but it is not the truth that we want to be saying. What we're trying to describe is our observations. If I had a theory stating that every computable event happens, sure, that explains all phenomena, but in order for it to describe our observations, you need to add a string specifying which of these computable events are the ones we observe, which makes this theory completely useless.

In theory, this provides a solution to anthropic reasoning: simply figure out which paths through the universe are the simplest, and assign those the highest probability. Again, in theory, this provides a solution to quantum suicide. But please don't ask me what these solutions are.

comment by Wei_Dai · 2010-01-15T02:54:56.601Z · score: 2 (2 votes) · LW · GW

Does anyone understand the last two paragraphs of the comment that I'm responding to? I'm having trouble figuring out whether Warrigal has a real insight that I'm failing to grasp, or if he is just confused.

comment by Kevin · 2010-01-12T12:06:41.485Z · score: 1 (1 votes) · LW · GW

The Edge Annual Question 2010: How is the internet changing the way you think?

http://www.edge.org/q2010/q10_print.html#responses

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-12T05:44:18.762Z · score: 1 (1 votes) · LW · GW

"Top Contributors" is now sorted correctly. (Kudos to Wesley Moore at Tricycle.)

comment by Psy-Kosh · 2010-01-11T17:47:08.730Z · score: 1 (1 votes) · LW · GW

Possibly dumb question but... can anyone here explain to me the difference between Minimum Message Length and Minimum Description Length?

I've looked at the wikipedia pages for both, and I'm still not getting it.

Thanks.

comment by Cyan · 2010-01-11T18:14:30.517Z · score: 1 (1 votes) · LW · GW

Try this.

comment by Psy-Kosh · 2010-01-11T21:33:21.035Z · score: 0 (0 votes) · LW · GW

Reading it now, thanks.

Okay, from the initial description, looks like MML looks at TOTAL length, where the message includes both the theory and the additional info needed to reconstruct the total data, while MDL ignores aspects of the description of the theory for the purposes of measuring the length.

Did I get that right or am I misunderstanding?

comment by Cyan · 2010-01-11T23:42:08.949Z · score: 0 (0 votes) · LW · GW

I'm a bit confused on that point myself. Before finding that document, my understanding was that MML averaged over the prior, while MDL avoided having a prior by using some kind of minimax approach, but the paper I pointed you to doesn't seem to say anything about that.

comment by LucasSloan · 2010-01-08T03:45:26.477Z · score: 1 (1 votes) · LW · GW

I was recently asked to produce the indefinite integral of ln x, and completely failed to do so. I had forgotten how to do integration by parts in the 6 months since I had done serious calculus. Is there anyone who knows of a calculus problem of the day or some such that I might use to retain my skills?
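For the record, the forgotten computation is a single round of integration by parts:

```latex
% choose u = \ln x (so du = dx/x) and dv = dx (so v = x):
\int \ln x \, dx = x\ln x - \int x \cdot \frac{1}{x}\, dx = x\ln x - x + C
```

(Differentiating x ln x − x gives ln x + 1 − 1 = ln x, confirming the result.)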

comment by Vladimir_Nesov · 2010-01-08T11:26:52.432Z · score: -1 (5 votes) · LW · GW

Is there anyone who knows of a calculus problem of the day or some such that I might use to retain my skills?

Why would you retain a skill you don't use? Conversely, if you use the skill, you don't need "problem of the day".

comment by Vladimir_Nesov · 2010-01-08T12:46:15.995Z · score: 2 (2 votes) · LW · GW

[Parent at -2.] Is the advice to not waste time and effort on stuff you don't need really that bad? (Hypothetical, under the assumption that you really don't need it; if you do need it occasionally, in the majority of cases it'll be enough to relearn directly on demand, rather than maintaining the skill for perfection's sake.)

comment by Tyrrell_McAllister · 2010-01-09T05:29:53.025Z · score: 3 (3 votes) · LW · GW

You wrote

if you use the skill, you don't need "problem of the day".

But suppose that you use a skill only occasionally. Then you still need the skill. But to retain a skill, you might need to use it frequently. Therefore, you might need to inflate artificially how often you use it, so that you retain it. That is how it can be that you use a skill and yet still need a "problem of the day".

comment by Sniffnoy · 2010-01-09T05:19:58.819Z · score: 1 (1 votes) · LW · GW

Agreed. Best is if you can learn something well enough that even if you don't remember it, you can rederive it; but usually good enough is learning something well enough that you can do it if you've got a textbook to remind you.

comment by Paul Crowley (ciphergoth) · 2010-01-08T13:44:52.149Z · score: 1 (1 votes) · LW · GW

In this instance, if I needed an answer to this question I'd use Maxima.

comment by Zack_M_Davis · 2010-01-09T05:59:52.175Z · score: 0 (0 votes) · LW · GW

Is the advice to not waste time and effort on stuff you don't need really that bad?

Yes, it's a mind projection fallacy. Reality doesn't need anything from us; there is no needfulness apart from what people want to do.

comment by Vladimir_Nesov · 2010-01-09T14:06:59.210Z · score: 1 (1 votes) · LW · GW

Yes, it's a mind projection fallacy.

Humbug. What you are actually saying is that wanting to know can be a terminal value, so why won't you just say that?

And of course, I know that, but there is just too much stuff out there to learn, so it's a necessity that the things you do choose to learn are in some sense better than the rest (otherwise you lose something), more beautiful or more useful. Just saying that one would learn X because "learning in general" is fun isn't enough.

comment by Tyrrell_McAllister · 2010-01-09T06:03:15.244Z · score: 0 (0 votes) · LW · GW

there is no needfulness apart from what people want to do.

I didn't read Vladimir as supposing that there was any other kind.

comment by Zack_M_Davis · 2010-01-09T06:33:33.716Z · score: 0 (2 votes) · LW · GW

Yeah, but then why privilege "I need calculus for my job" over "I want to know, I want to know, though the Earth burns and the stars are torn apart for computronium, I WILL UNDERSTAND"?

comment by LucasSloan · 2010-01-08T19:08:01.320Z · score: 1 (1 votes) · LW · GW

A. I expect to need to use it in the fall when I go to college. B. I want to know how to do calculus.

comment by RolfAndreassen · 2010-01-06T23:26:48.638Z · score: 1 (1 votes) · LW · GW

Ethical problem. It occurred to me that there's an easy, obvious way to make money by playing slot machines: Buy stock in a casino and wait for the dividends. Now, is this ethically ok? On the one hand, you're exploiting a weakness in other people's brains. On the other hand, your capital seems unlikely, at the existing margins, to create many more gamblers, and you might argue that you are more ethical than the average investor in casinos.

It's a theoretical issue for me, since my investment money is in an index fund, which I suppose means I own some tiny share in casinos anyway and might as well roll with it. But I'd be interested in people's thoughts anyway.

comment by Blueberry · 2010-01-06T23:55:19.459Z · score: 3 (3 votes) · LW · GW

Investing in a company is different than playing slot machines. Casinos are entertainment providers: they put on shows, sell food and drink, and provide gaming. They have numerous expenses as well. Investing in a casino is not guaranteed to make money in the same way the house is in roulette, for instance. Casinos do go bankrupt and their stock prices do go down.

In addition, when you buy a share of stock on the open market, you buy it from another investor, not the company, so you're not providing any new capital to the company.

I don't believe there is anything ethically wrong with either gambling or funding casinos. If people want to gamble, that's their choice.

comment by RolfAndreassen · 2010-01-07T21:46:54.037Z · score: -1 (1 votes) · LW · GW

Nu, nothing's certain, but buying stock does presumably have a positive expected value.

As for the capital, you can reframe the question as "buy casino bonds" or "invest in a casino IPO". Besides, even when buying stock from an existing investor, you are sending a signal of the value of that stock - so many mills higher than what the next guy in line would have paid - and that provides working capital in the form of the value of the self-owned stocks, against which the casino can borrow.

comment by Wei_Dai · 2010-01-07T00:47:15.143Z · score: 1 (1 votes) · LW · GW

I'm curious what made you think about this problem. I'm sure you're aware of the efficient market hypothesis... do you have some private information that suggests casino stocks are undervalued?

By coincidence I was in Las Vegas a couple of weeks ago and did some research before I left for the trip. It turns out that many casinos (both physical and online) offer gambles with positive expected value for the player, as a way to attract customers (most of whom are too irrational to take proper advantage of the offers, I suppose). There are entire books and websites devoted to this. See http://en.wikipedia.org/wiki/Comps_%28casino%29 and http://www.casinobonuswhores.com/

comment by RolfAndreassen · 2010-01-07T21:43:55.947Z · score: 0 (0 votes) · LW · GW

It was a random thought. I don't think casino stocks are particularly undervalued, but that doesn't affect the basic analysis: If you own such stocks, you're basically making money off slot machines, in the same way that owning stock in a widget factory means you're making money from the production of widgets.

comment by N_R · 2010-01-03T17:03:00.675Z · score: 1 (1 votes) · LW · GW

"Imagine the human race gets wiped out. But you want to transmit the so far acquired knowledge to succeeding intelligent races (or aliens). How would you do it?"

I got this question while reading a dystopia of a world after nuclear war.

comment by [deleted] · 2010-01-04T00:43:56.391Z · score: 1 (1 votes) · LW · GW

Transmitting it to aliens ain't happening; at best we'd get them everything from the invention of radio to the present day, a couple hundred years' worth of technology, which is relatively little, and that's only if we manage to aim it right.

So, we want to communicate to future sapient species on Earth. I say take many, many plates of uranium glass and carve into it all of our most fundamental non-obvious knowledge: stuff like the periodic table, how to make electricity, how to make a microchip, some microchip designs, some software. And, of course, the scientific method, rationality, the non-exception convention (0 is a number, a square is a rectangle, the empty product is 1, . . .), and the function application motif (the way we construct mathematical expressions and natural-language phrases). Maybe tell them about Friendly AI, too.

comment by Technologos · 2010-01-05T16:20:26.510Z · score: 0 (0 votes) · LW · GW

In what language or symbolic system would you do so? The Pioneer plaque and Voyager records both made an attempt in that direction, but I'm sure there's a better way.

In one of my classes in college, we were asked to try to decipher the supposedly universal language of the Pioneer plaque, which should have been relatively easy insofar as we shared a species (and thus a neural architecture) with the creators. We got some of it, though not all, which is apparently better than many of the NASA scientists on the project!

comment by [deleted] · 2010-01-06T05:57:35.903Z · score: 0 (0 votes) · LW · GW

We humans can decipher ancient human languages given a large enough corpus. Non-humans shouldn't have too much trouble. The chief trouble I imagine is getting from idiomatic ways of saying things to what we're really trying to say, e.g. "I would be surprised if it were green" to "The sky is not green".

comment by Nic_Smith · 2010-01-04T07:36:19.001Z · score: 0 (0 votes) · LW · GW

The second you talked about etching knowledge for the future, I immediately thought of The Long Now Foundation's Rosetta Project -- which intends to etch lots of linguistic information onto small metal discs, with lots of copies floating around for redundancy. They're apparently having production problems, though. I believe the Long Now book actually muses about how a "civilization start up guide" might be something handy to put in a similar format, but don't have it around at the moment.

Out of curiosity, why uranium glass?

And going off on a tangent, does the entire Long Now Foundation and its projects remind anyone else of Hanson's "Dreamtime" concept?

comment by Zack_M_Davis · 2010-01-04T01:09:59.352Z · score: 0 (4 votes) · LW · GW

the non-exception convention (0 is a number, a square is a rectangle, the empty product is 1, . . .)

Is there such a convention? We don't say that one is prime. e^x is often said to be the only function that is its own derivative, as if the zero function somehow didn't count.

comment by [deleted] · 2010-01-04T04:24:31.353Z · score: 4 (4 votes) · LW · GW

We don't say that one is prime.

One definition of a prime, of course, is "a number whose only factors are itself and 1, except for 1 itself". Another, however, is "a number with exactly two factors", which is probably simpler than "a number whose only factors are itself and 1". And if 1 were prime, it would be a highly exceptional one, in that there would be many places where we would have to say "all prime numbers except 1".
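
The "exactly two factors" criterion can be spelled out mechanically; a minimal sketch:

```python
def factors(n):
    """All positive divisors of n, in increasing order."""
    return [d for d in range(1, n + 1) if n % d == 0]

# Under "exactly two factors", 1 is excluded automatically,
# with no special-case clause needed:
assert factors(1) == [1]                    # only one factor: not prime
assert factors(7) == [1, 7]                 # exactly two: prime
assert factors(12) == [1, 2, 3, 4, 6, 12]   # more than two: composite
```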

e^x is often said to be the only function that is its own derivative, as if the zero function somehow didn't count.

The only functions defined over all real numbers that are their own derivatives are those of the form k*e^x for some real number k. These include not only e^x but 2e^x and 0e^x.

comment by Paul Crowley (ciphergoth) · 2010-01-07T15:42:50.964Z · score: 2 (2 votes) · LW · GW

ke^x is its own derivative for any k, including 0. It's a lot more convenient for 1 not to be prime. But 0! = 1, for example.
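
A quick numerical spot-check of the k·e^x claim, sketched with central finite differences:

```python
import math

def deriv(f, x, h=1e-6):
    # central finite difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

# f(x) = k*e^x should satisfy f'(x) == f(x) for every k, including k = 0
for k in (-2.0, 0.0, 1.0, 3.5):
    f = lambda t, k=k: k * math.exp(t)
    for x in (-1.0, 0.0, 2.0):
        assert abs(deriv(f, x) - f(x)) < 1e-5
```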

comment by komponisto · 2010-01-05T16:06:43.587Z · score: 0 (0 votes) · LW · GW

Is there such a convention?

Yes -- at least in the sense that I have found familiarity with (and sympathy toward) this practice to be an effective shibboleth for distinguishing the mathematically sophisticated.

(It's kind of like how it's a warning sign when someone doesn't think the word "dictionary" should be in the dictionary.)

comment by Jawaka · 2010-01-07T14:10:50.961Z · score: 0 (0 votes) · LW · GW

Like Karl Pilkington?

comment by Zack_M_Davis · 2010-01-05T19:04:45.553Z · score: 0 (0 votes) · LW · GW

Thank you. Sorry for the stupid question, then; do downvote the grandparent.

comment by magfrump · 2010-01-05T16:39:50.944Z · score: -1 (1 votes) · LW · GW

One is not prime. The zero function is a trivial function; it actually doesn't count (for reasons that are technical).

comment by AllanCrossman · 2010-01-02T12:35:02.393Z · score: 1 (1 votes) · LW · GW

I recently had to have some minor surgery. However, there's a body of thought that says it's safe to wait and watch for symptoms, and only have surgery later. There's a peer reviewed (I assume) paper supporting this position.

Upon reading this paper I found what looked like a statistical error. Looking at outcomes between two groups, they report p = 0.52, but doing the sums myself I got p = 0.053. For this reason, I went and had the surgery.

Since I'm just a novice at statistics, I was wondering if I had in fact got it right - it's disturbing to think that a peer reviewed paper stating an important conclusion would be wrong.

If any dan-level statistician here has the inclination, I'll post a link to the paper here for your perusal...

comment by Vladimir_Nesov · 2010-01-02T13:06:54.101Z · score: 4 (4 votes) · LW · GW

If any dan-level statistician here has the inclination, I'll post a link to the paper here for your perusal...

Is there any reason not to post the link immediately? You are creating an additional barrier (pretty steep one) that lessens your chances of getting any cooperation.

comment by AllanCrossman · 2010-01-02T13:14:03.398Z · score: 4 (4 votes) · LW · GW

Well, I was only going to post all the minutiae if there was any interest...

http://jama.ama-assn.org/cgi/reprint/295/3/285.pdf

The two groups are as follows:

Assigned to "Watchful Waiting":

  • 336 patients
  • 17 had problems after 2 years

Assigned to surgery:

  • 317 patients
  • 7 had problems after 2 years

Some patients crossed between the two groups, but this does not matter, as they were testing the effects of the initial assignment.

They report p = 0.52, but they also give a 95% confidence interval for the difference in risk, which just barely contains zero; which is a dead giveaway that p should be around 0.05, right? Anyway, doing a chi-squared test on the above numbers, I got p = 0.053.

The relevant bit is at the top of page 289 (page 6 of the PDF). Also relevant are the Results section of the abstract, and Figures 1 and 2. Essentially the entire problem is this statement:

At 2 years, intention-to-treat analyses showed that pain interfering with activities developed in similar proportions in both groups (5.1% for watchful waiting vs 2.2% for surgical repair; difference 2.86%; 95% confidence interval, -0.04% to 5.77%; P=.52)
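
The arithmetic can be checked with a plain Pearson chi-squared test on the 2×2 table above; a sketch (no continuity correction, using only the standard library):

```python
import math

# 2x2 table from the paper (problems vs. no problems at 2 years):
#                   problems   no problems
# watchful waiting     17          319      (n = 336)
# surgery               7          310      (n = 317)
obs = [[17, 319], [7, 310]]

row = [sum(r) for r in obs]                 # [336, 317]
col = [sum(c) for c in zip(*obs)]           # [24, 629]
n = sum(row)                                # 653

# Pearson chi-squared statistic: sum of (O - E)^2 / E over all cells
chi2 = sum((obs[i][j] - row[i] * col[j] / n) ** 2 / (row[i] * col[j] / n)
           for i in range(2) for j in range(2))

# For 1 degree of freedom, P(X > chi2) = erfc(sqrt(chi2 / 2))
p = math.erfc(math.sqrt(chi2 / 2))
print(round(p, 3))  # ~0.053, matching the figure above, not the paper's P=.52
```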

comment by Unnamed · 2010-01-03T02:40:35.031Z · score: 1 (1 votes) · LW · GW

You are correct, and the pdf that you linked contains a correction on its last page:

On page 285, in the “Results” section of the Abstract, the value reported as P=.52 for pain limiting activities should instead have been reported as P=.06; the corresponding value should also have been reported as P=.06 in the first paragraph on page 289.

It does not say anything about whether this affects their conclusions.

comment by AllanCrossman · 2010-01-03T11:50:32.329Z · score: 0 (0 votes) · LW · GW

contains a correction on its last page

Argh how silly of me not to see that. I stop reading at the references! Honestly though, it's annoying that the abstract remains wrong.

comment by RichardKennaway · 2010-01-02T17:22:03.412Z · score: 0 (0 votes) · LW · GW

Some patients crossed between the two groups, but this does not matter, as they were testing the effects of the initial assignment.

It matters to your case. I refuse to believe that writing a patient's name on this list rather than that list has a direct causal influence upon their state in 2 years. The influence can only proceed via their actual treatment.

Assignment  --->  Actual treatment  --->  Outcome

The decision facing you is whether to have surgery early or not. That is the thing whose effect on the outcome you want to know. To the extent that in the study this differs from the initial assignment, the study is diminished; therefore it should matter to the people conducting the study also.

I see from the paper that 23% of those assigned to Watchful Waiting nevertheless had surgery within 2 years, and 17% of those assigned to surgery did not have surgery in 2 years. (Some others died of unrelated causes or left the study early.)

I'll leave it to a dan-grade statistician to judge how to obtain the best conclusion from these data.

comment by AllanCrossman · 2010-01-02T17:27:51.144Z · score: 1 (1 votes) · LW · GW

The influence can only proceed via their actual treatment.

But the question is whether it's safe to advise people to wait, knowing that they can have surgery later if needed.

Anyway my main question was whether I'd done the stats right.

comment by Kevin · 2010-01-29T06:03:35.112Z · score: 0 (0 votes) · LW · GW

Laser fusion test results raise energy hopes: http://news.bbc.co.uk/2/hi/science/nature/8485669.stm

I'll track down the paper from Science on request.

comment by LazyDave · 2010-01-25T15:38:00.108Z · score: 0 (0 votes) · LW · GW

Does anybody have any updates as to the claims made against Alcor, i.e. the Tuna Can incident? I've tried a bunch of searches, but haven't been able to find anything conclusive as to the veracity of the claims.

comment by Kevin · 2010-01-25T12:27:16.381Z · score: 0 (0 votes) · LW · GW

Does a Turing chatbot deserve recognition as a person?

(Turing chatbot = bot that can pass the Turing test... 50% of the time? 95% of the time? 99% of the time?)

comment by RobinZ · 2010-01-25T13:49:42.790Z · score: 0 (0 votes) · LW · GW

No. The Turing test is an intuition pump, not a person-predicate.

comment by Kevin · 2010-01-25T14:29:30.234Z · score: 0 (0 votes) · LW · GW

First, is there an agreed upon definition for person? We need to define that and make sure we agree before we should go much further, but I'll give it a try anyways.

Not all Turing tests are intuition pumps. There should be other Turing tests to recognize a greater degree of personhood. Perhaps if the investigator can trigger an existential crisis in the chatbot? Or if the chatbot can be judged to be more self-aware than an average 18 year old?

What if the chatbot gets 1000 karma on Less Wrong?

How would you Turing test an oracle chatbot? http://lesswrong.com/lw/1lf/open_thread_january_2010/1i6u

It seems like this idea has probably been discussed before and that there is something I am missing; please link me if possible. http://yudkowsky.net/other/fiction/npc is all that comes to mind.

comment by RobinZ · 2010-01-25T15:29:19.162Z · score: 1 (1 votes) · LW · GW

I think I'm confused: what I assumed you meant was a chatbot in the sense of ELIZA (a program which uses canned replies chosen and modified as per a cursory scan of the input text). Such a program is by definition not a person, and success in Turing tests does not grant it personhood.

As for my second sentence: Turing's imitation game was proposed as a way to get past the common intuition that only a human being could be a person by countering it with the intuition that someone you can talk to, you can hold an ordinary conversation with, is a person. It's an archetypal intuition pump, a very sensible and well-reasoned intuition pump, a perfectly valid intuition pump - but not a rigorous mathematical test. ELIZA, which is barely clever, has passed the Turing test several times. We know that ELIZA is no person.

comment by Kevin · 2010-01-25T15:44:06.269Z · score: 0 (0 votes) · LW · GW

Sorry, by chatbot I meant an intelligent AI programmed only to do chat. An AI trapped in the proverbial box.

I agree that a rigorous mathematical definition of personhood is important, but I doubt that I will be able to make a meaningful contribution in that area anytime in the next few years. For now, I think we should be able to think of some philosophical or empirical test of chatbot personhood.

I still feel confused about this, and I think that's because we still don't have a good definition of what a person actually is; but we shouldn't need a rigorous mathematical test in order to gain a better understanding of what defines a person.

comment by RobinZ · 2010-01-25T15:48:31.808Z · score: 0 (0 votes) · LW · GW

The Turing test isn't a horrible test of personhood, from that attitude, but without better understanding of 'personhood' I don't think it's appropriate to spend time trying to come up with a better one.

comment by Kevin · 2010-01-25T05:18:10.626Z · score: 0 (0 votes) · LW · GW

http://en.wikipedia.org/wiki/Chantek

comment by Zack_M_Davis · 2010-01-21T11:50:09.499Z · score: 0 (0 votes) · LW · GW

oh but surely there has got to be some sort of simple cure for that sickness where you should be sleeping but you just stay up wanting to scream

comment by Cyan · 2010-01-18T20:47:31.025Z · score: 0 (0 votes) · LW · GW

From Pharyngula: Bertrand Russell on God. Some of the things he says about what to believe and why seem rather familiar...

comment by CarlShulman · 2010-01-18T19:06:26.241Z · score: 0 (0 votes) · LW · GW

A discussion of Cass Sunstein's proposal to flood the haunts of conspiracy theorists with secret government agents, who would try to convince the conspiracy theorists that there are no conspiracies:

comment by Nick_Tarleton · 2010-01-18T18:38:32.553Z · score: 0 (0 votes) · LW · GW

Is anyone aware of research on biased perception of errors in one's own work vs. others'? It seems like a lot of work should have been done on this, but I haven't been able to find any (only things on evaluating traits).

comment by Kevin · 2010-01-17T05:17:52.213Z · score: 0 (0 votes) · LW · GW

HN discussion of cognitive flaws related to gaming: http://news.ycombinator.com/item?id=1057351

I'll ask the question again here -- does anyone know of some more extensive writing on the subject of cognitive flaws related to gaming? Or something recent on the psychology of rewards?

comment by roland · 2010-01-16T17:17:02.454Z · score: 0 (2 votes) · LW · GW

I've been downvoted quite often recently and since I'm actually here to learn something I would like to better understand the reasons behind it.

Specifically I would like to hear your opinion on the following comment of mine: "I'll be the judge of that." This was given as an answer to someone suggesting how I should use my time.

http://lesswrong.com/lw/1lv/the_wannabe_rational/1gea?context=1#comments

Do you think that a downvote was justified and if so why?

comment by Kevin · 2010-01-17T05:18:36.300Z · score: 0 (0 votes) · LW · GW

Not that I mean to nitpick (though I guess that's what we do here!) but should this be here or in the meta-thread?

comment by roland · 2010-01-17T19:51:53.271Z · score: 0 (0 votes) · LW · GW

Maybe it should be in the meta-thread. What should I do now? Write a new one in the meta-thread or can we transfer this somehow?

comment by Kevin · 2010-01-16T03:44:42.085Z · score: 0 (0 votes) · LW · GW

As far as candidates for making AI other than the Singularity Institute go, is there any more likely than Google? Surely they want to make one.

They have a lot of really smart AI researchers working on hard problems within the world's largest dataset, and who knows what can happen when you combine that with 20% time. Does Google controlling the AI scare you?

The US military or any government making the AI seems a recipe for certain destruction, but I'm not so sure about Google.

comment by Jack · 2010-01-16T03:49:57.447Z · score: 1 (1 votes) · LW · GW

I mentioned something along these lines before.

comment by Kevin · 2010-01-16T03:51:15.699Z · score: 0 (0 votes) · LW · GW

Thanks for the link... also just googled my way to Peter Norvig speaking at the Singularity Summit saying they aren't anywhere close to AGI and aren't trying. http://news.cnet.com/8301-10784_3-9774501-7.html

So I think it depends on 20% time for now which isn't exactly conducive to solving the hard problem, not to mention 20% time at Google isn't what it used to be.

comment by Seth_Goldin · 2010-01-14T20:23:16.194Z · score: 0 (0 votes) · LW · GW

Mike Gibson has a great and interesting question. How would Bayesian methodology address this? Might this be an information cascade?

comment by CronoDAS · 2010-01-14T22:26:05.159Z · score: 1 (1 votes) · LW · GW

Yes, that would be an information cascade.

comment by Cyan · 2010-01-14T21:16:44.080Z · score: 0 (0 votes) · LW · GW

In the toy problem in the link, as long as we know the rule that people use to write down their guesses (e.g., write down the hypothesis with maximum posterior probability; if 50-50, write down what the last person wrote), at each stage we can treat the previous sequence as a latent variable about which we have partial information. The solution is straightforward to set up.

comment by mattnewport · 2010-01-14T21:23:18.854Z · score: 0 (0 votes) · LW · GW

My intuition is that if you assume everyone before you has written down the correct most likely answer based on the sequence they observe (and using the same assumption) then you fairly quickly reach a point where additional people's guesses add no new information. Can anyone confirm or refute that and save me trying to do the math?

comment by pengvado · 2010-01-14T23:29:48.460Z · score: 4 (4 votes) · LW · GW

If the tiebreak strategy is "agree with the previous person's guess", then you reach that point immediately. The first person's draw determines everyone's guess: If the second person's draw is the same as the first, then of course they agree, and if not then they're at a 50/50 posterior and thus also agree.

If the tiebreak strategy is "write down your own draw (i.e. maximize the information given to subsequent players)", then information can be collected only so long as the number of each color drawn remains tied or +/-1. As soon as one color is ahead by 2 draws, all future draws are ignored and the guesses so far suffice to determine everyone else's guess.
If the draws are with replacement, then the probability that what you get locked into is the right guess is 4/5. (Assume WLOG that the urn is primarily white. Consider two draws: WW is 4/9 and determines the right answer; RR is 1/9 and determines the wrong answer; WR or RW have no net likelihood change, so recurse.)
If the draws are without replacement, then it's... 80.6%. (Very close to 4/5 since with very high probability you'll run into a cascade one way or the other before the non-replacement changes the ball proportions much.)

Otoh, "tally all the votes at the end of the pulling, and that determines the group’s Urn choice" is an entirely different question, and doesn't have the same strategy as maximizing your individual chance of correctness.
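
The 4/5 figure for the with-replacement case can be sanity-checked by simulation; a sketch, assuming the standard setup where the majority colour comes up with probability 2/3 on each draw and everyone announces their own draw until one colour leads by two:

```python
import random

def cascade_correct(p_white=2/3, lead=2, trials=100_000, seed=0):
    """Estimate the probability that the cascade locks in the RIGHT urn.

    Draws are with replacement; information accumulates only while the
    announced counts stay within +/-1, and the first colour to lead by
    `lead` determines everyone's guess thereafter.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        diff = 0  # (# white announced) - (# red announced)
        while abs(diff) < lead:
            diff += 1 if rng.random() < p_white else -1
        correct += diff > 0  # white (the true majority) got locked in
    return correct / trials

est = cascade_correct()
print(est)  # close to the analytic answer 4/5
```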

comment by Morendil · 2010-01-15T07:06:33.695Z · score: 1 (1 votes) · LW · GW

Nice - I hadn't gotten so far as analyzing the other tiebreak policy.

"Prior information" in this kind of problem includes a bunch of rather unlikely assumptions, such as that every player is maximally rational and that the rules of the game reward picking the true choice of urn.

Unfortunately there is no reason to prefer one tiebreak policy over the other. Does it make the problem more determinate if we assume the game scores per Bayesian Truth Serum, that is, you get more points for a contrarian choice that happens to be right?

comment by pengvado · 2010-01-15T08:32:54.965Z · score: 1 (1 votes) · LW · GW

Since the total evidence you can get from examining all previous guesses (assuming conventional strategy and rewards as before) gives you only a 4/5 accuracy, and you can get 2/3 by ignoring all previous guesses and looking only at your own draw: Yes, rewarding correct contrarians at least 20% more than correct majoritarians would provide enough incentive to break the information cascade. Only until you've accumulated enough extra information to make the majoritarian answer confident enough to overcome the difference between rewards, of course, but it would still equilibrate at a higher accuracy.
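The 20% figure falls straight out of comparing the two accuracies. A quick sanity check (an illustrative calculation, not part of the original comment): a contrarian bonus b makes trusting your own draw competitive with following the cascade once (2/3)(1 + b) ≥ 4/5.

```python
# Checking the 20% threshold: following the cascade is right with
# probability 4/5; trusting only your own draw is right with
# probability 2/3. Solve (2/3) * (1 + b) = 4/5 for the bonus b.
from fractions import Fraction

p_cascade = Fraction(4, 5)
p_own = Fraction(2, 3)
b = p_cascade / p_own - 1
print(b)  # 1/5, i.e. a 20% premium
```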

comment by Paul Crowley (ciphergoth) · 2010-01-14T23:12:06.975Z · score: 0 (0 votes) · LW · GW

The math is pretty simple: as soon as the line has a red/blue discrepancy of more than one ball, ignore your ball and vote with the line.

comment by Morendil · 2010-01-14T22:19:05.439Z · score: 0 (0 votes) · LW · GW

Why not just do the math?

comment by mattnewport · 2010-01-14T22:25:00.872Z · score: 0 (2 votes) · LW · GW

Primarily because I'm at work and secondarily because I'm lazy.

comment by Zack_M_Davis · 2010-01-14T22:37:35.575Z · score: -2 (2 votes) · LW · GW

Downvoted for laziness.

comment by Alicorn · 2010-01-13T22:54:20.948Z · score: 0 (0 votes) · LW · GW

I have come across the online novel The Metamorphosis of Prime Intellect. (Contains depictions of assorted things squeamish people should not read.) It has an AI in it that is this close to being Friendly.

comment by Kevin · 2010-01-13T19:14:42.674Z · score: 0 (2 votes) · LW · GW

Would we (Earth) show up in our universe's stats pages?

http://www.gabrielweinberg.com/blog/2010/01/would-we-earth-show-up-in-our-universes-stats-pages.html

comment by pdf23ds · 2010-01-11T10:02:24.623Z · score: 0 (0 votes) · LW · GW

Hey, exactly 500 comments.

So, elsewhere someone just brought up moral luck. I'm wondering how this relates to the Yudkowskian view on morality (I forget what he called it), and I'd like to invite someone to think about it and perhaps post on it. If no one else does so, I might be motivated to do so eventually. There might be some potential to shed some real light on the issue of moral luck--specifically the extent of the validity or otherwise of the Control Principle--with reference to Yudkowsky's framework.

comment by Zack_M_Davis · 2010-01-11T11:15:20.350Z · score: 2 (2 votes) · LW · GW

Yudkowsky briefly addressed moral luck:

Let's say someone gravely declares, of some moral dilemma [...] that there is no moral answer; both options are wrong and blamable; whoever faces the dilemma has had poor moral luck. Fine, let's suppose this is the case: then when you cannot be innocent, justified, or praiseworthy, what will you choose anyway?

Lately I've actually been thinking that maybe we should split up morality into two concepts, and deal with them separately: one referring to moral sentiments, and another referring to what we actually do. It seems like a lot of discussions of utilitarianism versus deontology treat them as two arbitrary viewpoints or positions, but insofar as my thinking has trended utilitarian lately, it hasn't been because I'm attracted to a utilitarian position, but because Cox's theorem [edit: sic] forces it. Even if I draw up a set of rights that I think must not be violated, I'm still going to have to make decisions under uncertainty, which I would guess means acting to minimize the expected number of rights-violations.

comment by PhilGoetz · 2010-01-11T22:32:04.933Z · score: 1 (1 votes) · LW · GW

Isn't that what people have always done? Maybe not explicitly. To explicitly make the split you're speaking of would just help people to deny reality, and do what they need to do, albeit in highly suboptimal and destructive ways, while still holding on to incoherent moral codes that continue to harm them in other ways.

But it beats letting ourselves be wiped out. I worry about the fact that Western civilization is saying that an increasing number of rights must not be violated under any circumstances, at a time when we are facing an increasing number of existential risks. There are some things that we don't let ourselves see, because seeing them would mean acknowledging that somebody's rights will have to be violated.

For instance, plenty of people simultaneously believe that Israel must stay where it is, and that Israel must not commit genocide. Reality might accommodate them (eg., if we discover an alternative energy source that impoverishes the other middle eastern states). But I think it's more likely that it won't.

comment by pdf23ds · 2010-01-12T07:46:48.168Z · score: 0 (0 votes) · LW · GW

plenty of people simultaneously believe that Israel must stay where it is, and that Israel must not commit genocide

Interesting. Do you have 20 words on why these are mutually exclusive?

comment by PhilGoetz · 2010-01-12T23:22:06.632Z · score: 0 (0 votes) · LW · GW

As technology advances, it takes fewer and fewer resources to wreak an equivalent amount of devastation. Soon, small groups of people will be able to annihilate nations. In most cultures, only a very small percentage of people would like to do so; trying to detect and control those individuals may be a workable strategy.

Israel, however, is near several cultures where most people would like to kill everyone in Israel (based on, among other things, public rejoicing instead of statements of regret when Israelis are killed for any reason, opinion polls showing that most people in some countries say they have positive opinions of Al Qaeda, and the success in popular elections of groups including Hezbollah and Hamas which have the destruction of Israel as part of their platform). The annihilation of Israel is not a goal for a few crazy individuals, but a mainstream cultural goal.

comment by Cyan · 2010-01-12T14:54:26.223Z · score: 0 (0 votes) · LW · GW

Demographic threat. Twenty-seven words: if Israel stays where it is, the growth of Arab citizenry will pose a threat to its existence as a Jewish state with a Jewish demographic majority.

comment by PhilGoetz · 2010-01-12T23:31:13.323Z · score: 0 (0 votes) · LW · GW

I would consider that one of the better possible outcomes. As long as it leads to a conversion from a race-based state to a pluralistic society, rather than cattle cars and smokestacks.

comment by Cyan · 2010-01-12T23:40:51.976Z · score: 0 (0 votes) · LW · GW

It's not really a race-based state, in the sense that one can't arbitrarily choose one's race, but under the Law of Return one can choose to convert to Judaism and instantly gain Israeli citizenship upon immigrating.

comment by Cyan · 2010-01-11T14:00:38.482Z · score: 1 (1 votes) · LW · GW

Cox's theorem doesn't deal with utility, only plausibility. The utility stuff comes from looking at preference relations -- some big names there are von Neumann, Morgenstern and L.J. Savage.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-11T20:48:04.053Z · score: 1 (1 votes) · LW · GW

Also keyword, "Dutch book".

comment by Zack_M_Davis · 2010-01-11T20:44:14.688Z · score: 0 (0 votes) · LW · GW

Right, I knew that. Thanks.

comment by pdf23ds · 2010-01-11T12:17:31.050Z · score: 1 (1 votes) · LW · GW

I don't think that's quite the same usage of "moral luck". According to the technical term, it's when you, for example, judge someone who was driving drunk and hit a person more harshly than someone who was driving drunk and didn't hit anyone, all else being equal. In other words, things entirely outside of your control that make the same action more or less blameworthy. Another example, from the link:

For example, consider Nazi collaborators in 1930's Germany who are condemned for committing morally atrocious acts, even though their very presence in Nazi Germany was due to factors beyond their control (Nagel 1979). Had those very people been transferred by the companies for which they worked to Argentina in 1929, perhaps they would have led exemplary lives. If we correctly morally assess the Nazi collaborators differently from their imaginary counterparts in Argentina, then we have a case of circumstantial moral luck.

comment by komponisto · 2010-01-11T13:23:39.642Z · score: 0 (0 votes) · LW · GW

I don't see the difference between this usage and Zack's/Eliezer's: the definition given in the SEP link is:

Moral luck occurs when an agent can be correctly treated as an object of moral judgment despite the fact that a significant aspect of what she is assessed for depends on factors beyond her control.

A situation where all of an agent's options are blameworthy seems quite clearly to fall within this category.

comment by pdf23ds · 2010-01-12T05:24:03.381Z · score: 0 (0 votes) · LW · GW

OK, I suppose it counts as an instance, though I'm not convinced Eliezer intended the phrase in that sense. But it's certainly one of the instances I'm less interested in.

comment by thomblake · 2010-01-11T21:35:43.444Z · score: 0 (0 votes) · LW · GW

Agreed.

comment by Roko · 2010-01-11T21:26:27.600Z · score: 0 (0 votes) · LW · GW

Utility functions can be very flexible. E.g. U=1 iff 0 rights violations, U=0 otherwise.

Then you really will try to make sure no rights get violated.

comment by Technologos · 2010-01-11T21:31:49.898Z · score: 2 (2 votes) · LW · GW

And if you cannot act such that 0 rights are violated? Your function would seem to suggest that you are indifferent between killing a dictator and committing the genocide he would have caused, since the number of rights violations is (arguably, of course) in both cases positive.
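As a toy illustration of this indifference (the lotteries and numbers below are invented for the example), a utility function of the form "U = 1 iff 0 rights violations, U = 0 otherwise" assigns the same expected utility to any two gambles whose outcomes all involve at least one violation:

```python
# Hypothetical sketch of Technologos's point: under U = 1 iff zero
# rights violations (0 otherwise), any two lotteries whose outcomes
# all involve at least one violation have identical expected utility.

def u_roko(violations: int) -> float:
    return 1.0 if violations == 0 else 0.0

def expected_utility(lottery):
    # lottery: list of (probability, number_of_violations) pairs
    return sum(p * u_roko(v) for p, v in lottery)

kill_dictator = [(1.0, 1)]          # one certain violation
genocide = [(1.0, 1_000_000)]       # a million certain violations

print(expected_utility(kill_dictator) == expected_utility(genocide))  # True
```

Both lotteries score an expected utility of exactly 0, so the function expresses no preference between them.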

comment by Roko · 2010-01-11T21:42:03.185Z · score: 0 (0 votes) · LW · GW

Correct. But it's still an implementable policy. I didn't say it was sensible!

comment by thomblake · 2010-01-11T21:36:50.191Z · score: 0 (0 votes) · LW · GW

It seems as though you're reading this hypothetical utility function properly.

comment by Technologos · 2010-01-11T21:41:54.349Z · score: 1 (1 votes) · LW · GW

It does occur to me that I wasn't objecting to the hypothetical existence of said function, only that rights aren't especially useful if we give up on caring about them in any world where we cannot prevent literally all violations.

comment by thomblake · 2010-01-11T22:30:47.148Z · score: 0 (0 votes) · LW · GW

It seems like a non-sequitur in response to Roko's illustration of what a utility function can be used to represent.

comment by Technologos · 2010-01-12T05:37:27.887Z · score: 0 (0 votes) · LW · GW

I was connecting it to and agreeing with Zack M Davis' thought about utilitarianism. Even with Roko's utility function, if you have to choose between two lotteries over outcomes, you are still minimizing the expected number of rights violations. If you make your utility function lexicographic in rights, then once you've done the best you can with rights, you're still a utilitarian in the usual sense within the class of choices that minimizes rights violations.

comment by lunchbox · 2010-01-11T01:29:38.951Z · score: 0 (0 votes) · LW · GW

How do people here consume Less Wrong? I just started reading and am looking for a good way to stay on top of posts and comments. Do you periodically check the website? Do you use an RSS feed? (which?) Or something else?

comment by AdeleneDawner · 2010-01-11T20:09:02.795Z · score: 2 (4 votes) · LW · GW

When I'm actively following the site (visiting 3+ times a day), I primarily follow the new comments page. I only read top posts when I see that there's an interesting discussion going on about one of them, or if the post's title seems particularly interesting. (I do wind up reading a large portion of the top posts sooner or later, though.)

I have the 'recent posts' RSS feed in my reader for when I'm not actively following the site, but I only click through if something seems very interesting.

comment by Alicorn · 2010-01-11T03:17:31.311Z · score: 2 (2 votes) · LW · GW

I use RSS for top level posts, and have an easily accessible bookmark to the comments page which I check more frequently than I should.

comment by thomblake · 2010-01-11T20:17:04.059Z · score: 0 (0 votes) · LW · GW

Same here.

comment by LucasSloan · 2010-01-11T01:43:44.611Z · score: 1 (1 votes) · LW · GW

I read new posts as soon as I see them. I look at the comments through the recent comments bar, but that requires having the LW tab open more or less constantly. I also reread posts to get any comments I miss and to get a better sense of how the discussions are proceeding.

comment by byrnema · 2010-01-11T02:05:32.385Z · score: 0 (0 votes) · LW · GW

I look at the comments through the recent comments bar, but that requires having the LW tab open more or less constantly.

I click on "Recent Comments" and read as far back as I have to until I've caught up. Reading backwards can be mentally tiring ... so I'm actually just skimming for interesting comments. When I find one that seems interesting, I read through that thread for the continuity of the discussion.

comment by Kevin · 2010-01-11T00:40:00.220Z · score: 0 (2 votes) · LW · GW

Link pointer: http://www.eurekalert.org/pub_releases/2010-01/hu-qcc010810.php Quantum computer calculates exact energy of molecular hydrogen. http://www.nature.com/nchem/journal/vaop/ncurrent/abs/nchem.483.html

The submitter on Hacker News: "This is arguably one of the most important breakthroughs ever in the field of computing."

Imagine how much easier this comment would be to browse if it was part of a subreddit here.

comment by CannibalSmith · 2010-01-08T15:53:27.239Z · score: 0 (0 votes) · LW · GW

http://www.youtube.com/watch?v=vyfPZLb3kqc (Edit: disregard what the guy says after he explains the concept. I'll find a better link later.)

Does this allow us to cheat on the secret sauce in the AGI recipe?

I also see it as a limitless supply of statistics about humans. Prediction markets! "Will you bet this penny I'm giving you on X?"

What else can we use it for?

comment by kpreid · 2010-01-08T20:41:34.747Z · score: 0 (0 votes) · LW · GW

Tangential: The video was disappointing vs. its title (“Mechanical Turk and the Danger of Digital Sweatshops”). Summary: The two concerns mentioned are being underpaid (it's addictive, and you can't “unionise” because there's always someone else willing to start) and that people have no context for the work and may therefore be a part of something they would object to (the example given: the Iranian government setting a face-matching task to locate “protesters” in crowds). (But if that's a valid argument against MTurk, then it's equally against developing e.g. face-recognition software.)

comment by CannibalSmith · 2010-01-08T22:26:20.984Z · score: 0 (0 votes) · LW · GW

Hmm. Given that I pretty much don't care about the concerns or arguments of the guy in the video, maybe I should find a better link.

comment by kpreid · 2010-01-09T00:40:42.881Z · score: 0 (0 votes) · LW · GW

What's wrong with linking to http://www.mturk.com/ ?

I'd actually been pondering asking a question myself: What does the LW community think of choosing to do this work? Specifically, its piecemeal-and-heterogenous nature seems like it might be a way to do something faintly valuable (i.e. you get paid) with the sort of free time that one can't get around to doing anything of high value with, and so would otherwise spend on entirely frivolous tasks. [Clunky explanation here, not sure how to improve, sorry.]

comment by Alicorn · 2010-01-09T00:55:31.224Z · score: 1 (1 votes) · LW · GW

I poked around a bit on the site and I think the vast majority of ways I could spend equivalent downtime would be worth more than the pennies they offer there. Even the overhead of signing up for the tasks is too costly a barrier for such tiny payouts, and that's if you avoid the ones that require you to pass qualification tests. Plus, the number of offers asking people to re-write content in their own words just screams plagiarism.

comment by PhilGoetz · 2010-01-08T04:17:15.644Z · score: 0 (0 votes) · LW · GW

Suppose you have an agent with k bits of knowledge, that is given n bits of information. You can imagine it's an agent shown a digitized picture. The agent will infer u bits of useful information from those n bits. u is, critically, to be measured in an agent-independent way. u is the number of words the agent will need to use if the agent is going to write a book about those n bits for a general audience.

What can be said about the function u(k, n), relating the number of useful bits extracted to both the number of bits presented, and the number of bits of background knowledge?

The reason for asking this question is to be able to describe the learning rate of an agent whose learning improves its ability to learn. u is to be described in an agent-independent way because we want to know how much the agent is learning in an inter-agent currency of memory, not (for certain purposes, anyway) in terms of the actual number of bits that the agent needs to keep growing its hardware by.

It's tricky because it's context-dependent. Consider a thermostat given a picture showing a heat map of the room. The heat map has p pixels, each with b bits. The thermostat has a "mind" capable of representing only the concept "hot/cold".

You could say that it can extract from this picture p bits of information, and store a concept of hot/cold for each of p locations in the room. But it doesn't have the concept of location, so I won't let you do that. Then you might say the thermostat can take the heat map as a time sequence, and form p bits of information over time. I will disqualify that on two counts: first, because it isn't a time sequence, and you are not allowed to divorce the information from its semantics; second, because the thermostat has no concept of time.

In fact, as the thermostat has no memory, it can't extract more than 1 bit, total, from any amount of information. When k is low, u(k,n) < n.

More surprisingly, when k is high, u(k,n) > n. A human, shown a fuzzy photograph that can be compressed to 20K, can extract more than 20K of information from it. That sounds impossible. The trick is that, as I said, u is measured in an agent-independent way. If a human has k0 bits of knowledge, and is exposed to n bits of information, the human's new total information k1 <= k0 + n. The human can learn at most n bits of information. But the information represented by those n bits might require the other k0 bits to interpret, and take more than n bits to represent by someone lacking those k0 bits of information.

For example, if you flash two lanterns instead of one in the steeple of the Old North Church to tell me that the British are crossing the Charles, you have given me only 1 bit of information. If I happen to know that the British ships are commanded by Admiral Graves, then I know both that the British are crossing the Charles, and that Samuel Graves is coming to Cambridge. To communicate this to someone who knew nothing about Admiral Graves would take more than 1 bit of information.
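A toy sketch of the lantern example (the codebook and message strings here are invented for illustration): with shared background knowledge acting as a codebook, a one-bit signal decodes to a message that would take far more than one bit to transmit to someone lacking that knowledge.

```python
# Shared background knowledge serves as a codebook: the sender transmits
# a single bit, but the receiver recovers a much longer message.

shared_knowledge = {
    0: "The British are crossing by land.",
    1: "The British are crossing the Charles; Admiral Graves commands the ships.",
}

signal = 1  # two lanterns: a single bit on the wire

decoded = shared_knowledge[signal]
raw_bits_needed = len(decoded.encode("utf-8")) * 8  # cost without the codebook

print(raw_bits_needed)  # hundreds of bits, recovered from a one-bit signal
```

The gap between the one transmitted bit and `raw_bits_needed` is exactly the contribution of the receiver's prior k0 bits of knowledge.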

comment by Kevin · 2010-01-08T02:03:52.042Z · score: 0 (0 votes) · LW · GW

What is the probability that this is the ultimate base layer of reality?

Eliezer gave the joke answer to this question, because this is something that seems impossible to know.

However, I myself assign a significant probability that this is not the base level of reality. Theuncertainfuture.com tells me that I assign a 99% probability of AI by 2070, with the cumulative probability approaching .99 even before 2070. So why would I be likely to be living as an original human circa 2000 when transhumans will be running ancestor simulations? I suppose it's possible that transhumans won't run ancestor simulations, but I would want to run ancestor simulations, for my merged transhuman mind to be able to assimilate the knowledge of running a human consciousness of myself through interesting points in human history.

The zero one infinity rule also makes it seem more unlikely this is the base level of reality.

It seems rather convenient that I am living in the most interesting period in human history. Not to mention I have a lifestyle in the top 1% of all humans living today.

comment by PeerInfinity · 2010-01-05T05:27:10.523Z · score: 0 (0 votes) · LW · GW

For Darwin’s sake, reject “Darwin-ism” (and other pernicious terms)

A short article explaining why the terms "Darwinism" and "theory of evolution" are harmful to public understanding.

comment by CannibalSmith · 2010-01-02T12:52:20.380Z · score: 0 (0 votes) · LW · GW

My life is priceless to me of course, but what is it worth to the government? My friends? The average person? You?

How much are you willing to pay me to continue reading and commenting on Less Wrong? :)

comment by Alicorn · 2010-01-02T14:26:56.151Z · score: 8 (8 votes) · LW · GW

If you mean, "what would we pay to save your life", you could probably take up a respectable collection if you credibly identified a threat to your health that could be fixed with a medium-sized amount of money.

If you mean, "will we bribe you to hang out with us"... uh... no.

comment by CannibalSmith · 2010-01-03T12:35:47.901Z · score: 0 (0 votes) · LW · GW

Why the difference? I'm but a few words on your screen. You cannot distinguish between me dying and me just not commenting anymore.

comment by RichardKennaway · 2010-01-03T12:41:33.781Z · score: 7 (7 votes) · LW · GW

Your dying and your leaving LW are two different things, whether or not we are in a position to tell the difference.

comment by CannibalSmith · 2010-01-03T12:55:56.738Z · score: 0 (0 votes) · LW · GW

Then how can Alicorn condition her actions on indistinguishable events?

comment by RichardKennaway · 2010-01-03T13:33:43.460Z · score: 5 (5 votes) · LW · GW

By you making them distinguishable, in the way that she suggested.

comment by Vladimir_Nesov · 2010-01-02T13:08:50.624Z · score: 3 (3 votes) · LW · GW

You value things other than your own life, hence your life isn't priceless to you either (there are hypothetical situations where you would exchange your life for a significant improvement in the other things you value), though its value will of course be different for you and for other people, perhaps by a couple orders of magnitude.

comment by CannibalSmith · 2010-01-03T12:40:21.229Z · score: 0 (0 votes) · LW · GW

What do you value more than your life?

comment by orthonormal · 2010-01-03T19:01:07.826Z · score: 3 (3 votes) · LW · GW

My life plus the life of a random stranger, for example. If I was doomed to die in a certain fashion but had the chance to save another life (even in a way nobody would ever know about), well, that's a no-brainer for me.

EDIT: Ah, now I see the context. How about the following hypothetical:

I am on a spaceship returning to Earth when all my shipmates die. I realize that I am a carrier for a horrific disease; I will never get sick from it, but I can transmit it to others, of whom 99% will die. Let's furthermore imagine that the people on the ground don't know about this yet, that I have good odds of surviving if I just land somewhere and make a run for it, and that no effective quarantine exists short of self-destructing the ship before I land.

If it's therefore reduced to "I die" versus "I survive, but cause a mass extinction event", I think I self-destruct the capsule. Perhaps not without some angst, but it's still an obvious choice to me.

N.B: Given the many cases in history where intelligent people in dire circumstances have accepted death (or high odds of it) on behalf of something they see as more important, I think the case that revealed preferences sometimes value things above one's own life is pretty strong.

comment by [deleted] · 2010-01-04T01:24:49.231Z · score: 3 (3 votes) · LW · GW

But non-total mass extinction events are awesome! The overpopulation immediately vanishes! Uh, hang on a moment, let me rethink something.

comment by Nick_Novitski · 2010-01-04T18:12:56.894Z · score: 0 (0 votes) · LW · GW

This is why neophilia isn't always selected for.

comment by MatthewB · 2010-01-03T19:50:22.042Z · score: 2 (2 votes) · LW · GW

This is pretty much what has prevented three suicide bombers from succeeding. In the first case (the flight over PA on 9/11), all lost their lives to prevent a much more horrible catastrophe. The Shoe Bomber and the Underwear Bomber were both stopped by the passengers on the aircraft without any loss of life, yet those passengers all knew death was possible.

I value my life, as I am certain every one of them did, yet they valued the lives of others as well, and in a situation, where to not act was to have both certain death of oneself coupled with the certain deaths of many others, almost anyone would choose to act rather than do nothing, as I would.

In fact, I believe that this is our strongest defense against Terrorism in the USA. If the suicide bombers who try to attack us discover that we are willing to die to prevent them from dying in their attempt... It will take a lot of the impetus out of them (after all, most are doing this to martyr themselves, and failure is a horrible thing to them).

I think that there are also other things that I might value more than my life. For instance I might value not creating another life more than my own life depending upon the circumstances. But, pretty much all of those things involve the sacrifice of my life for something that is greater than myself. If one thinks that they are the greatest thing on earth... well, that is going to be a lonely existence.

comment by blogospheroid · 2010-01-02T09:44:59.277Z · score: 0 (4 votes) · LW · GW

Drawing on the true prisoner's dilemma, the story arc Three Worlds Collide, and the recent Avatar:

In the case of Avatar, humans did cooperate in the prisoner's dilemma first: we tried the schooling and medicine thing, and apparently it was rejected from the Na'vi side. Differences were still so high that dream-walkers (Na'vi avatars of humans) were being derided with statements like 'a rock sees more'.

So, the question is: when we cooperate with an alien species, will they even recognise it as cooperation? How does that change the contours of a decision theory? Suppose you are a superior species and have the choice between a cooperative decision that appears to be hostile (in the Avatar scenario, it could be trying to tell the Na'vi that sometimes things just don't all fit into a plan, thus blowing a huge hole in their worldview) and a hostile decision that appears to be cooperative (giving away free narcotics, for example).

Will you be genuinely cooperative or only signal that you are cooperative?

Let us truly consider this from the perspective of human superiority, i.e. there are no uber-guardians of Pandora who can wipe humanity out like dust (a possibility I would consider if humanity were going to launch another attack).

comment by Alicorn · 2010-01-02T14:37:14.906Z · score: 1 (1 votes) · LW · GW

The Na'vi didn't defect. The Na'vi refused to play. The human faction wouldn't accept any outcome that didn't end with them getting the unobtainium, and the Na'vi not playing was such an outcome, so the humans forced a game and, when the Na'vi still weren't cooperative, defected big-time. Since the game was spread out in time, this permitted retaliatory defection - which isn't part of the original non-iterated PD, nor is refusing to play.

comment by wedrifid · 2010-01-02T22:24:04.155Z · score: 0 (0 votes) · LW · GW

Since the game was spread out in time, this permitted retaliatory defection - which isn't part of the original non-iterated PD, nor is refusing to play.

And since the Na'vi choosing to fight turns out to make human non-cooperation yield a far worse outcome for the humans than cooperation, it just isn't a Prisoner's Dilemma at all. It's a "the Na'vi will F@#$ you up if you mess with them" game.