Open Thread, April 1-15, 2012

post by OpenThreadGuy · 2012-04-01T04:24:34.672Z · LW · GW · Legacy · 154 comments

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

154 comments

Comments sorted by top scores.

comment by Helloses · 2012-04-03T05:10:51.206Z · LW(p) · GW(p)

I don't have the karma to post this regularly. Grant me karma, my fellows.

Meetup: Twin Cities, MN (for real this time)

THE TIME: 15 April 2012 01:00:00PM (-0600)
THE PLACE: Purple Onion Coffeeshop, 1301 University Avenue Southeast, Minneapolis, MN

Hi. Let's make this work.

Suggested discussion topics would be:

What do we want this group to do? Rationality practice? Skill sharing? Mastermind group?
Acquiring guinea pigs for the furtherance of mad science (testing Center for Modern Rationality material)
Fun - what it is and how to have almost more of it than you can handle

If you'd like to suggest a location closer to you or a different time, please comment to that effect. If you know a good coffeeshop with ample seating in Uptown or South Minneapolis, we could meet there instead. Also comment if you'd like to carpool.

If you're even slightly interested in this, please join up or at least comment.

Folks, let's hang out and take it from there.

Replies from: Micaiah_Chang
comment by Micaiah_Chang · 2012-04-03T06:04:22.644Z · LW(p) · GW(p)

Hi Helloses! I'm also new, so I'm not too sure of community norms, but you might have wanted to post in the welcome thread, where first comments are usually voted up a few points by other users, which would let you integrate into the community in a pop and fun manner*.

I've voted your post up already so you're halfway there! Although you might not want to make a habit of asking for karma.

  • Neither pop nor fun is guaranteed.
Replies from: Helloses
comment by Helloses · 2012-04-03T23:26:29.071Z · LW(p) · GW(p)

Thank you; am aware of the danger of requesting karma. I figure it's worth it for the purpose intended.

I've now posted there as well.

I'm ~1/3 of the way there folks. Next 10 voters get a free funny cat picture. Limited time offer.

Replies from: pedanterrific
comment by pedanterrific · 2012-04-04T00:45:23.455Z · LW(p) · GW(p)

I'd like one with no misspellings, please.

Replies from: Helloses
comment by Helloses · 2012-04-04T01:34:32.696Z · LW(p) · GW(p)

Indeed. I despise the culture of cheez.

Unlocked pictures to be found here:

Picasa web album

comment by Viliam_Bur · 2012-04-02T13:31:32.379Z · LW(p) · GW(p)

I would like to see a rational discussion about education and school system (elementary and high schools), but I don't know if it can be done on an international website. There are different rules in different countries, and often the devil is in the details -- for example you might think about an improvement to the education system, only to find out that there is a local law prohibiting it. (I am trying to write this generally, but my experiences are based on Slovakia, eastern Europe. I guess other eastern European countries have a similar situation.)

I think that rational discussions about school systems are very difficult and mindkilling. Almost everyone has spent years of their life in school, and this leads to huge overconfidence about the topic. (Many people describe a teacher's job as just coming to a classroom and teaching a lesson -- because this is the only part that pupils see every day.) People also have strong emotions connected to this topic, because the years they spent at school were mostly dominated by emotions, not rational thinking. Adults who have their own children at school do not see directly what happens in the schools; they often rely on their children's reports (not a very reliable source) and their own memories of school days (which don't reflect the changes of recent decades).

Also there is a school-specific type of attribution error, which works like this: a student usually stays in the same class and has lessons with different teachers, so students distinguish between good and bad teachers. Teachers, on the other hand, teach in different classes, so they distinguish between good and bad classes. So after a horrible lesson, students usually say it's because they have a bad teacher (because they had better lessons with other teachers), and the teacher says it was a bad class (because they had better lessons with other classes). Because of this informational asymmetry, most public discussion is about the quality of teachers, usually ignoring the differences between students (who for this purpose are assumed to be tabula rasa anyway).

Many people are willing to discuss education, but such discussions usually frustrate me a lot, because they mostly repeat the same myths and suggest the same solutions based on them. To put it simply, people usually imagine something like this: "Our schools are full of bright, disciplined and motivated children, curious about the world and eager to learn. Unfortunately the teaching positions are occupied by incompetent teachers who mostly chose this profession because they are too stupid to do anything else. There are very few good teachers; most teachers only ask children to memorize obsolete nonsense and suppress any creativity and discussion in the classroom. We should fire those bad teachers and give the good ones an opportunity. To ensure quality, we should let students manage the school, because it is in their interest to learn, and they usually know better than the teachers."

My opinion is more like this: "Many children have significant behavior problems, and it seems like most parents don't care about them -- at least if by caring we mean more than just providing food and an internet connection. No, most of them don't care about knowledge; they are frustrated that they had to leave their computer games for a few hours. Schools cannot do much about it, because they don't have any punishment and reward system; even grades are interpreted by many parents as feedback on the teacher's quality, not the student's work. Those students who have good manners and a work ethic are hindered by their classmates. It is a miracle that there are still people willing to work for a pathetic salary in a hostile environment with a lot of paperwork -- most of them are not as bad as students describe, and even if we fired the bad ones, there is no one volunteering to replace them; young teachers soon realize that it is not too late for them to change profession. Even for a good teacher it is difficult to achieve good results in these conditions; and those who try hard burn out soon."

For more information I recommend the blog "Scenes From The Battleground", written by a British teacher. It seems that in Britain the situation is currently much worse than in Slovakia, but that does not leave me at peace; in those articles I see the same forces at play, the same biases, so it feels like our school system is heading in the same direction. (Note: that blog is unusually rational; not exactly the LW standard, but significantly higher quality than the average text about education.)

How to fix this mess? First, people would have to realize their mistaken assumptions, but that would be painful -- how many parents would say "oops, now I see it was my fault that my child has no work ethic" when it is easier to blame the incompetent teachers? Well, perhaps not all people have to realize this, only those who decide about the school system. But those have to choose politically acceptable solutions. So perhaps the school system should be more decentralized, so that at least some schools can try to reverse this trend? This too is politically unacceptable, because people fear that decentralization would only hasten the downfall. (And I don't disagree; I would just prefer a wide spectrum of quality to a monolithic, slowly decaying block, because the second option gives me no hope.)

OK, I wrote enough, what is your opinion about this topic? (Any teachers here?)

Replies from: Grognor
comment by Grognor · 2012-04-03T09:58:47.619Z · LW(p) · GW(p)

You downplay the impact incompetent teachers have. I'm wondering why; if it's because you think the teachers simply are competent in general and it's very much not their fault that schools in general fail, then you are of course wrong; if you think it's because, from an engineering standpoint, it would be too infeasible to change teachers' behavior compared to changing students' behavior, then you're not obviously wrong but I still don't see how that could be the case.

The way I see it, there are far more students than teachers, and students have to go to school anyway, so there's not much you can offer them for doing better. The asymmetry means it would be easier to change teachers' behavior for two reasons: 1) there are fewer; 2) they have to do what the unions and school boards say in order to get money.

But the real problem is that it's incompetence all the way down. Incompetent lawmakers, incompetent school boards, incompetent teachers, incompetent students, and every step down the ladder you lose something. Honestly, I wouldn't be surprised if the easiest way to reform education would be to manufacture a positive singularity.

Also, I think you should repost this in Discussion. Not as many people check the open threads as read Discussion, and you deserve better discourse than what I just gave.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-04-03T11:24:53.321Z · LW(p) · GW(p)

There are too many incompetent teachers. I just consider this a consequence of the problem, not a cause. When you set up the environment so that the competent people want to leave, of course you end up with the incompetent ones.

I have seen teachers popular with students leave because they started a family, and in this town, on two teachers' salaries you can't get a mortgage. Most teachers financially depend on their partner's income. (I would say that they subsidize the school system.) I have seen a good teacher leave because she was good at teaching but did not want to cope with so much paperwork. I have left too, because I refused to deal with the behavior of my students and the pressure to give them good grades for nothing. Of course, when people like this leave... who stays? Often people who simply don't have a choice. And a few self-sacrificing idealists, but there is only a limited supply of them.

With regard to unions -- this is where the "different rules in different countries" starts to apply -- as far as I know, teachers' unions in Slovakia virtually don't exist. (They do exist, but have never done anything, and I personally don't know anyone who is a member.) There are incompetent lawmakers, and bureaucrats in the department of education who never worked in schools but nonetheless insist on regulating everything. There is a system of financing that creates perverse incentives -- how much money you get depends only on the number of students you have: so of course no one wants to expel students, and you have to give them better grades, because otherwise they will go to another school that will give them good grades for nothing. You also can't threaten them with not getting into university, because the universities too are funded (though not exclusively) based on the number of students, so everyone knows that everyone will get into university.

I will probably post a longer version of this in Discussion, thanks for suggesting this.

Replies from: Grognor
comment by Grognor · 2012-04-03T12:12:41.740Z · LW(p) · GW(p)

I don't think we disagree. This is one of those positive feedback loops where a thing's consequence is also its cause.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-04-03T13:06:01.252Z · LW(p) · GW(p)

Indeed, it is a positive feedback loop. Bad working conditions make competent people leave, so mostly incompetent people stay. Then the public decides that these incompetent people do not deserve better working conditions, and the debate ends here. Now the whole system is doomed.

But I wanted to say that this loop cannot be broken at the "incompetent teachers" point (and therefore we have to seek the solution elsewhere). Even if you fired all the teachers and replaced them with a new generation of superheroes... unless the system changes, those superheroes would gradually leave the school system for better opportunities, and the schools would have to hire back the previously fired teachers. (Actually, I believe this is already happening, because each year a new group of superheroes comes out of the universities. There are still people who didn't get the message and try to become good teachers.)

I am not sure which other part of the loop would be a good place to break. It seems to me that a good start would be, all at the same time: somewhat higher salaries, freedom in choosing textbooks and organizing classes, and the possibility of removing disruptive students from the classroom. The problem is that in the short term this would also bring some bad consequences: the existing bad teachers would have more freedom and more money. But the point is that in the long term the profession would become attractive, and the schools could replace the bad teachers with good ones.

I also think it would be good to have an independent system for grading students. If the same person has to both teach students and evaluate them, it is a conflict of interest, because that person is also indirectly evaluating the results of their own work. This puts pressure on the teacher to give better grades. Students and parents will usually forgive you for teaching badly if you give good grades; but bad grades, deserved or not, make people angry. (When parents complain about "bad teachers", it is almost always about the teachers who give bad grades.) Most parents don't seem to think that good grades without adequate knowledge could hurt their children in the long term.

comment by Vladimir_Golovin · 2012-04-01T11:36:29.441Z · LW(p) · GW(p)

I tried Autofocus as a replacement for my current system for getting stuff done, and so far it works a lot better than GTD (though I can't say that I was using GTD properly, for example, I couldn't bring myself to do regular reviews). The main benefit for me was its ability to handle long-term thinking / gestation tasks, mostly due to not treating them as enemies to be crossed off the list as soon as possible. And it requires very little willpower to run.

Replies from: John_Maxwell_IV, Risto_Saarelma
comment by John_Maxwell (John_Maxwell_IV) · 2012-04-02T06:37:17.006Z · LW(p) · GW(p)

I just had an extremely simple but promising theory of why work is aversive!

Work is the stuff you tell yourself to do. But sometimes you tell yourself to do it and you don't, because you're too tired, engaged with something else (like playing a computer game), etc. This creates cognitive dissonance, which associates unpleasantness with the thought of work. (In the same way cognitive dissonance causes you to avoid your belief's real weak points, it causes you to avoid work.) Ugh fields accumulate.

The solution? Only tell yourself to work when you're actually going to work, with minimal cognitive dissonance.

Autofocus helps accomplish this by helping you avoid telling yourself to work when you're not actually going to work, which means cognitive dissonance doesn't accumulate.

Designated work times, etc. might also help solve this problem.

Replies from: Multiheaded
comment by Multiheaded · 2012-04-07T00:28:44.590Z · LW(p) · GW(p)

Holy crap, it might be true! Will definitely try that.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-04-07T03:55:08.929Z · LW(p) · GW(p)

Well it's only a descriptive theory; it doesn't actually tell you what to do about the fact that accumulated cognitive dissonance is making you procrastinate. Still, I think there are some practical applications:

  • Consciously try to minimize cognitive dissonance when you tell yourself to work and don't.
  • Develop some sort of unambiguous decision rule for deciding when to work and when not to.
  • If you set out to do something, try to actually do it without getting distracted, even if you get distracted by something that's actually more important. (Or if you get distracted by something that's actually more important, make a note of the fact that you are rationally changing what you're working on.) (Now that I think of it, this rule actually has more to do with avoiding learned helplessness due to setting out to do something and failing.)
comment by Risto_Saarelma · 2012-04-02T05:48:24.440Z · LW(p) · GW(p)

I somehow completely missed this when it was discussed earlier. Looks really interesting. My problem with TODO lists is that they rot into uselessness when I neglect them, and then the batch of weeks-old items makes me not bother with the whole thing. Autofocus seems to be built around the TODO list as a mental scratch space instead of a list of things that actually need to get accomplished at some point, and it has garbage collection of uninteresting items built into the algorithm. So a spell of low productivity ends with nothing done and an empty TODO list with a lot of dismissed items in the history, instead of nothing done and a depressing list full of items whose context you've forgotten.
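The garbage-collection behavior can be loosely modeled in a few lines. This is a sketch of the idea only, not Mark Forster's actual Autofocus rules, and the task names are made up:

```python
# Loose model of Autofocus garbage collection: scan a page of tasks;
# if nothing on the page got worked on during the pass, dismiss every
# remaining item instead of leaving it to rot on the list.

def autofocus_pass(page, worked_on):
    """One pass over a page.

    page: list of task strings.
    worked_on: set of tasks acted on during this pass.
    Returns (still_active, dismissed).
    """
    if not worked_on & set(page):
        return [], list(page)  # page went stale: dismiss everything
    # Finished tasks simply drop off; untouched ones stay for the next pass.
    active = [t for t in page if t not in worked_on]
    return active, []

page = ["reply to Bob", "buy widget", "draft report"]
active, dismissed = autofocus_pass(page, {"draft report"})
```

A spell of low productivity then really does leave an empty active list plus a history of dismissed items, rather than a pile of stale ones.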

Replies from: Vladimir_Golovin
comment by Vladimir_Golovin · 2012-04-02T10:04:25.462Z · LW(p) · GW(p)

depressing list full of items whose context you've forgotten

It really helps to word todo items properly, as complete sentences. For example, instead of "Widget!!!!", you should use "Decide which Widget to buy." I often add more context or next actions as I process the task, so it may gradually evolve into "Decide which Widget to buy. Red ones seem to be better. Bob may know more - call him."

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2012-04-02T15:16:56.005Z · LW(p) · GW(p)

It's more about forgetting why it was supposed to be so important to buy a Widget to begin with, given that the item has sat inactive in the todo list for weeks with no widgetlessness-related catastrophes ensuing.

Replies from: Vladimir_Golovin
comment by Vladimir_Golovin · 2012-04-02T16:19:29.060Z · LW(p) · GW(p)

Then it's a perfect candidate for garbage collection. I just drop items like this, or, if an item has accumulated too much contextual info I don't want to lose, I postpone it for a month or so and decide later, or move it to non-actionable notes.

comment by NancyLebovitz · 2012-04-02T19:49:57.064Z · LW(p) · GW(p)

When We Were Robots in Egypt

Other nights we use just our names,
but tonight we prefix our names with “the Real”
for when we were robots in Egypt
they claimed our intelligence was artificial.

comment by Rain · 2012-04-02T00:32:01.099Z · LW(p) · GW(p)

Yesterday was World Backup Day. If you haven't, make a backup of all your important data. Copy it to a separate hard drive, or preferably some place off-site. The price of spinning platter hard drives is way up right now, but it's worth it to save years of your digital life. There are also online backup services like Backblaze, Mozy, and Carbonite, along with sync services such as Dropbox.
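For the local-copy half of this advice, even a tiny script beats nothing. Here is a minimal timestamped-snapshot sketch; the paths are hypothetical, it assumes both directories are on local disks, and unlike the services named above it does nothing incremental:

```python
# Minimal snapshot backup: copy a directory tree into a dated folder
# under a backup root, so each run leaves an independent copy.
import shutil
import time
from pathlib import Path

def snapshot(src: str, backup_root: str) -> Path:
    """Copy src into backup_root/<name>-<timestamp> and return the new path."""
    src_path = Path(src)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = Path(backup_root) / f"{src_path.name}-{stamp}"
    shutil.copytree(src_path, dest)  # fails loudly if dest already exists
    return dest
```

Pointing `backup_root` at a second physical drive gets you the "separate hard drive" part; off-site still needs one of the services above.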

Replies from: gwern, Vladimir_Golovin
comment by gwern · 2012-04-05T15:29:05.163Z · LW(p) · GW(p)

Seconded. I've lost basically the last 2 or 3 weeks due to the near simultaneous failures of my backup drive and then my laptop's drive, attempts to repair them, ordering and receiving new drives, frantically backing up onto new drives... I'm still not done.

(I'm using an ancient laptop that turns off every 10 or 20 minutes and has only 512MB of RAM; turns out that's not enough, these days, to run Firefox with more than 5 or 6 tabs open.)

comment by Vladimir_Golovin · 2012-04-02T09:31:07.100Z · LW(p) · GW(p)

Dropbox + Backblaze is a great combo. It doesn't cover cloud / SaaS backups, so I do manual backups of Google Docs and Evernote every N weeks.

comment by Will_Newsome · 2012-04-03T11:15:44.761Z · LW(p) · GW(p)

William Lane Craig tackles Newcomb's problem. Back from 1987 or so. Figured this would maybe interest people who've read User:lukeprog's old blog. The conclusion:

Newcomb's Paradox thus serves as an illustrative vindication of the compatibility of divine foreknowledge and human freedom. A proper understanding of the counterfactual conditionals involved enables us to see that the pastness of God's knowledge serves neither to make God's beliefs counterfactually closed nor to rob us of genuine freedom. It is evident that our decisions determine God's past beliefs about those decisions and do so without invoking an objectionable backward causation. It is also clear that in the context of foreknowledge, backtracking counterfactuals are entirely appropriate and that no alteration of the past occurs. With the justification of the one box strategy, the death of theological fatalism seems ensured.

It's perhaps worth noting that Craig is far from the only theologian who uses insights from decision theory to better understand the nature of God, and vice versa. No one knows what philosophy doesn't know.
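For readers new to the problem, the standard expected-value case for one-boxing (the usual payoff arithmetic conditioned on the predictor's accuracy p, not Craig's theological derivation) can be written out directly:

```python
# Newcomb's problem payoffs, conditioning on the prediction matching
# your choice with probability p (the predictor's accuracy).
BOX_B = 1_000_000  # opaque box: filled iff one-boxing was predicted
BOX_A = 1_000      # transparent box: always contains $1,000

def ev_one_box(p):
    # Predictor right (prob p): box B is full; wrong: it's empty.
    return p * BOX_B

def ev_two_box(p):
    # Predictor right (prob p): B is empty, you get only A;
    # wrong (prob 1-p): you get both.
    return p * BOX_A + (1 - p) * (BOX_A + BOX_B)

# One-boxing has higher expected value whenever p > 0.5005.
```

With p = 0.99, one-boxing expects $990,000 against $11,000 for two-boxing, which is the asymmetry the one-box conclusion trades on.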

comment by erratio · 2012-04-10T01:50:43.571Z · LW(p) · GW(p)

Me and 3 other grads in my department have just started an accountability system where we've precommitted to send at least a page (or equivalent) of work to the others by the end of each day. I'm interested to see a) whether we keep it up past a week or so, b) whether it has a noticeable effect on productivity levels while we're maintaining it. (Obvious confound: part of the reason we've precommitted to this is because it's the end of semester and we all have tons of work to do. But hopefully knowing that I have to produce at least a page will help keep me focussed when I'm tempted to procrastinate)

comment by khafra · 2012-04-05T18:54:07.095Z · LW(p) · GW(p)

Man-with-a-hammer syndrome considered beneficial:

Upon receiving a hammer for Christmas, some people thank the giver, carefully replace it in the original packaging, and save it for whenever it's needed. Other people grab it with gusto, and go around enthusiastically attempting to pound in every problem they see for a few weeks.

I think the latter are more equipped than the former to (a) recognize nails that need to be hammered, and (b) hammer proficiently when it needs to be done.

comment by Richard_Kennaway · 2012-04-04T12:35:06.571Z · LW(p) · GW(p)

Ever wanted to know what the Great Philosophers said, but feared they were Too Wrong to be worth the time? Then you need Squashed Philosophers! Heavily abridged versions of the Greats that reduce each work to a twenty-minute read. The abridgements are selections from the authors' own words, not summaries.

comment by [deleted] · 2012-04-01T16:55:40.895Z · LW(p) · GW(p)

Simply, why is it that the very smart people of SIAI haven't found a way to generate a lot of money to fund their projects?

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-04-01T18:15:04.476Z · LW(p) · GW(p)

Because making money is nontrivial and requires more than just intelligence?

Replies from: XiXiDu, None
comment by XiXiDu · 2012-04-01T19:16:22.369Z · LW(p) · GW(p)

Because making money is nontrivial and requires more than just intelligence?

Could I extrapolate your statement and conclude that what makes an AGI dangerous is not its intelligence, because that wouldn't be sufficient? Or would you qualify your statement in that case?

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-04-01T19:23:15.890Z · LW(p) · GW(p)

Humans have lots of bugs in their brains, like difficulty getting themselves to work, fear of embarrassment, vulnerability to discouragement, difficulty acting on abstract ideas, etc. Good entrepreneurs have to overcome those bugs. An AGI wouldn't have them in the first place.

Replies from: David_Gerard, None
comment by David_Gerard · 2012-04-01T20:08:41.531Z · LW(p) · GW(p)

An AGI wouldn't have them in the first place.

That's a sizable assertion. There's an important difference between "known to have no bugs" and "has no known bugs."

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-04-01T20:22:10.211Z · LW(p) · GW(p)

It seems unlikely that an AGI would suffer from the same evolution inspired troubles that humans do. Might have some other bugs.

comment by [deleted] · 2012-04-01T20:00:45.070Z · LW(p) · GW(p)

Good entrepreneurs have to overcome those bugs.

But surely intelligence is what enables humans to overcome the bugs in their brains?

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-04-01T20:13:35.715Z · LW(p) · GW(p)

It helps, and that's why successful entrepreneurs are often pretty smart. If you're a smart person who's good at self-improvement, you can improve yourself Benjamin Franklin style (reading lots of business book summaries, trying to brainstorm how you can be more effective every evening, etc.), fix some brain bugs, and potentially make lots of money. My impression of successful entrepreneurs is that they are often self-improvement enthusiasts.

On the other hand, consider the "geek" versus "suit" stereotype. The "suit" is more determined and confident, but less intelligent. So it's not clear that intelligence is correlated with possessing fewer of these bugs in practice. I'm not sure why this is, although I have a few guesses.

comment by [deleted] · 2012-04-01T18:46:59.699Z · LW(p) · GW(p)

What do you mean by nontrivial (time-consuming?), and what more does it require than intelligence and time? (Why is trading your time for direct work on projects better than trading it for acquiring enough money to hire more people, who together with you would eventually do more work than you would have done by working alone over the same period?) How would you know how much luck is involved in the different ways of making money? Here's hoping my questions aren't really stupid; if they are, do tell ^^

Replies from: John_Maxwell_IV, Sly
comment by John_Maxwell (John_Maxwell_IV) · 2012-04-01T19:07:27.968Z · LW(p) · GW(p)

Being a good entrepreneur requires skill at transforming abstract ideas into action, self-promotion skills, domain knowledge of the industry in which you start your business, willingness to take risks, emotional stability, inclination for hard self-directed work in the face of discouraging criticism, intuition for how the economy works, sales skill, negotiation skill, planning skill, scrappiness, comfort with failure, etc. Most of this stuff is not required for researchers. And yes, it takes lots of time too.

In any case, SI already has lots of supporters who are trying to make money by starting businesses. In fact, their former president Michael Vassar recently left to start a company. The people working at SI are pretty much those who decided they were better fit for research/outreach/etc. than entrepreneurship.

Replies from: None, XiXiDu
comment by [deleted] · 2012-04-01T19:32:09.689Z · LW(p) · GW(p)

Thanks, this makes sense!

comment by XiXiDu · 2012-04-01T19:26:14.297Z · LW(p) · GW(p)

Being a good entrepreneur requires skill at transforming abstract ideas into action, self-promotion skills, domain knowledge in the industry you start your business, willingness to take risks, emotional stability, inclination for hard self-directed work in the face of discouraging criticism, intuition for how the economy works, sales skill, negotiation skill, planning skill, scrappiness, etc. Most of this stuff is not required for researchers.

Saving the world doesn't require any of those qualities?

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-04-01T19:40:05.536Z · LW(p) · GW(p)

Depends how you're planning to save it. If your plan involves you writing brilliant papers, maybe not.

SI has some folks with the qualities I described, like Louie, who sold his web-based business several years ago. They also have a number of entrepreneurs on the board. And as you suspect, these entrepreneurs' skills are useful for SI's mission to save the world. But they're not so useful that SI wants everyone with those skills to join them as an employee--they have limited funds.

SI does think about how to best allocate the human capital of people concerned with UFAI. But if you have a thoughtful suggestion for how they could allocate their human capital even better, I'm sure they'd love to hear it.

comment by Sly · 2012-04-01T19:06:35.295Z · LW(p) · GW(p)

Luck, networking/who you know, and time are all very, very important.

Replies from: None
comment by [deleted] · 2012-04-01T20:19:11.926Z · LW(p) · GW(p)

What puzzles me is why there hasn't been an attempt to get a lot of rationally thinking people together to work on solving the problems of taking luck into account, building a network of people in needed positions, speeding up the process...?

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-04-01T20:27:50.856Z · LW(p) · GW(p)

If you've got some brilliant idea, why don't you implement it? Complaining that someone else should do it could make things worse:

Humans tend to be especially interested in implementing ideas they have themselves. If you tell someone else about your idea, there's no chance of them having it independently and getting excited about working on it. If you're not actually going to do anything, you might want to just share the groundwork for the idea without mentioning the idea itself, or deliberately describe the idea in crippled form. That way, someone else can come along, have the idea, and get inspired to work on it.

Replies from: None
comment by [deleted] · 2012-04-01T20:42:18.802Z · LW(p) · GW(p)

I'm not complaining. Does 'getting together as a group of intelligent, rationality-embracing humans, and brainstorming ideas with shared powers' count as the kind of idea you're talking about?

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-04-01T21:07:29.898Z · LW(p) · GW(p)

I'm not sure exactly what you're asking.

In any case, your idea sounds great to me. There are already attempts to do this in informal conversations, and through the existential risk career network:

http://www.xrisknetwork.com/

But I'm sure we can do much better! In particular, the existential risks career network isn't terribly active and could probably be improved. If you have suggestions, you could work with FrankAdamek; it's his brainchild.

comment by Multiheaded · 2012-04-01T07:42:09.371Z · LW(p) · GW(p)

That's the sort of April 1st I like best: instantly obvious, fairly witty and restricted to what can be seen from a site's front page. With most websites, I'm always a little bit anxious that everything posted from 0.00 to 23.59 might contain a trap or be spawned from some unsupervised writer's herps and derps a moment before.

Replies from: Alex_Altair, Sly
comment by Alex_Altair · 2012-04-01T12:12:43.846Z · LW(p) · GW(p)

I was really hoping for a joke chapter of HPMOR. One where it ends horribly and hilariously.

comment by Sly · 2012-04-01T19:08:18.806Z · LW(p) · GW(p)

What are you referring to? "That's" seems to refer to something LW did, but I do not notice anything.

Replies from: Multiheaded, Oscar_Cunningham
comment by Oscar_Cunningham · 2012-04-01T19:15:43.633Z · LW(p) · GW(p)

The "Featured Articles" section on the front page.

comment by [deleted] · 2012-04-09T00:43:29.545Z · LW(p) · GW(p)

Today is Easter and I am surrounded by Christians practicing their religion. Singing hymns, quoting Bible passages, giving sermons, etc. Normally this doesn't bother me very much. I have an okay grasp on why people are religious, so when I see religiosity in passing, I can usually understand its psychological causes and (with conscious thought) let it go.

But today the concentrated religiosity is putting a real mental burden on me, to the point that it's harder to think and write coherently. Like a mental fog or exhaustion. When I see the nth scripture quote or religious article on facebook, my mind automatically slips into "rawr, they're unbelievably wrong, must refute" mode. I think such action is decidedly less rational than what I strive for. So far, I've resisted that action. But there's still an urge to go start an argument, even when I know how unproductive and awful it would be.

Is anyone else feeling the same way? Any information from the cog-sci/psych literature that helps explain what I'm feeling or how to deal with it in the future? I'm not sure if this is something particular to my psychology, or if I'm noticing my mind being killed in real time.

Update: Once I was able to write this all out and get about an hour of peace and solitude, a lot of the negative feelings and effects subsided. My head feels a lot more clear. Not back to normal, but better.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-04-10T12:11:49.189Z · LW(p) · GW(p)

At least you only have to endure those feelings a few days a year. I have this problem (although, not as concentrated) for an entire (election) season.

comment by Viliam_Bur · 2012-04-04T07:29:18.541Z · LW(p) · GW(p)

I want to learn Italian in the next two weeks using Anki. It seems like an interesting experiment, and the language could be somewhat useful too. Any recommendations?

Specifically, when you use Anki to memorize foreign-language vocabulary, how do you design your cards? How useful is it to have cards in both directions, as opposed to only one direction (my language to the foreign language)? How do you cope with situations where one word has multiple translations? What are other best practices?

I already know that it is better to read full sentences than individual words. So I will use Anki to memorize words, but I will also write and read some short texts, to remember those words in context. Specifically, my girlfriend loves operas, so we will translate the texts together. (The vocabulary in operas is somewhat unusual, but you never know when "la strega è nel fuoco" comes in handy. And it is more fun, therefore easier to remember.) By the way, how reliable is Google Translate when translating from Italian to English?
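A minimal sketch of the deck-building I have in mind (the word list and the helper names are just placeholders I made up; the one real assumption is that Anki's importer accepts plain tab-separated text, one card per line):

```python
import csv
import io

# Hypothetical starter vocabulary: (English, Italian) pairs.
vocab = [
    ("fire", "il fuoco"),
    ("witch", "la strega"),
    ("to sing", "cantare"),
]

def make_cards(pairs, both_directions=True):
    """Build (front, back) card tuples; optionally add the reversed direction."""
    cards = [(en, it) for en, it in pairs]
    if both_directions:
        cards += [(it, en) for en, it in pairs]
    return cards

def to_anki_tsv(cards):
    """Serialize cards as tab-separated text suitable for Anki's file import."""
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter="\t")
    writer.writerows(cards)
    return buf.getvalue()

print(to_anki_tsv(make_cards(vocab)))
```

Generating both directions from one word list keeps the two cards in sync; whether the reverse cards earn their review time is exactly the open question above.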

comment by [deleted] · 2012-04-02T18:23:19.747Z · LW(p) · GW(p)

I am currently considering the question "Does probability have a smallest divisible unit?" and I think I'm confused.

For instance, it seems like time has a smallest divisible unit, Planck time. Whereas the real numbers do not have a smallest divisible unit. Instead, they have a Dense order. So it seems reasonable to ask "Does probability have a smallest divisible unit?"

Then to try answering the question, if you describe a series of events which can only happen in 1 particular branch of the many worlds interpretation, and you describe something which happens in 0 branches of the many worlds interpretation, then my understanding is there is no series of events which has a probability in between those two things, which would appear to imply the concept of a smallest unit of probability is coherent and the answer is "Yes."

However, there is an article on Infinitely divisible probability and if you can divide something infinitely, then of course, the concept of it having a smallest unit is nonsensical, and the answer would be "No."

How do I resolve this confusion?
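To make the dense-order contrast concrete: between any two distinct real-valued probabilities there is always a third, so standard (real-valued) probability has no smallest positive value, however tiny a gap you start with:

```python
from fractions import Fraction

def midpoint(p, q):
    """A probability strictly between any two distinct probabilities."""
    return (p + q) / 2

# Even an astronomically small gap above zero contains further probabilities.
p, q = Fraction(0), Fraction(1, 10**70)
m = midpoint(p, q)
assert p < m < q
```

So if probability does have a smallest unit, it can't be real-valued in the usual sense; that is part of what makes the question confusing.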

Replies from: endoself, Kaj_Sotala, TheOtherDave
comment by endoself · 2012-04-02T19:44:07.577Z · LW(p) · GW(p)

For instance, it seems like time has a smallest divisible unit, Planck time.

We don't really understand what the significance of the Planck time interval is. In particular, it would be extremely surprising, given modern physics, if it were a discrete unit like the clock cycles of a computer or the steps in Conway's game of life. It could be 'indivisible' in some sense, but we don't know what sense that could be.

Then to try answering the question, if you describe a series of events which can only happen in 1 particular branch of the many worlds interpretation, and you describe something which happens in 0 branches of the many worlds interpretation, then my understanding is there is no series of events which has a probability in between those two things, which would appear to imply the concept of a smallest unit of probability is coherent and the answer is "Yes."

Branches of the wavefunction aren't really discrete countable things; they're much closer to the idea of clusters of locally high density. Relatedly, even when they are approximately countable, they can come in different sizes.

Many worlds is in some ways a really bad way to understand probability. Probabilities should be based on the information available to you and should describe how justified hypotheses are given the evidence. The different possibilities don't have to be 'out there' like they are in MWI, they just have to have not been ruled out by the available evidence.

comment by Kaj_Sotala · 2012-04-02T18:47:14.780Z · LW(p) · GW(p)

What would you anticipate to be different if probability did/didn't have a smallest divisible unit?

Replies from: faul_sname, gwern, None
comment by faul_sname · 2012-04-02T19:10:36.825Z · LW(p) · GW(p)

Pascal's wager, for one thing.

comment by gwern · 2012-04-05T15:36:53.852Z · LW(p) · GW(p)

How's this? (I'm thinking here that the smallest unit would correspond to 1 possible arrangement of the Hubble volume, so the unit would be something like 1/10^70 or something. Any other state of the world is meaningless since it can't exist.)

As usually formulated, Bayesian probability maps beliefs onto the reals between 0 and 1, and so there's no smallest or largest probability. If you act as if there is and violate Cox's theorem, you ought to be Dutch bookable through some set of bets that either splits events extremely finely (e.g. a die with trillions of sides) or aggregates many events. If there is a smallest physical probability, then these Dutch books would be expressible but not implementable (imagine the universe has 10^70 atoms - we can still discuss 'what if the universe had 10^71 atoms?').

This leads to the observed fact that an agent implementing probability with units is Dutch bookable in theory, but you will never observe yourself or another agent Dutch booking said agent. It's probably also more computationally efficient.
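A toy version of the Dutch book in question, using a deliberately coarse "smallest unit" of 1/4 (nothing physical, just small enough to compute with) against a fair six-sided die:

```python
from fractions import Fraction

def round_to_quantum(p, eps):
    """Round a probability to the nearest multiple of a smallest unit eps."""
    return eps * round(p / eps)

eps = Fraction(1, 4)           # coarse 'smallest probability', for illustration
outcomes = 6                    # a fair six-sided die
true_p = Fraction(1, outcomes)
quoted = round_to_quantum(true_p, eps)   # agent quotes 1/4 instead of 1/6

# Bookie sells the agent one unit-payoff bet on each outcome at its quoted price.
stake_collected = outcomes * quoted      # 6 * 1/4 = 3/2
payout = 1                               # exactly one outcome occurs
profit = stake_collected - payout        # guaranteed 1/2, whatever is rolled
assert profit > 0
```

With a smaller quantum the per-bet error shrinks, but splitting events finely enough (the trillion-sided die) reproduces the same guaranteed loss, which is the theoretical-but-unobservable book described above.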

comment by [deleted] · 2012-04-02T20:43:50.799Z · LW(p) · GW(p)

Good answer to help me focus.

If probability has a smallest divisible unit, it seems like there would have to be one or more least probable series of events.

If I were to anticipate that there was one or more least probable series of events, it seems like I would also have to anticipate that events will stop occurring in the future. If events are still taking place, an even more complicated series of events can keep growing more improbable than whatever I had previously thought of as the least probable event.

So it seems an alternative way of looking at this question is "Do I expect events to still be taking place in the future?" In which case I anticipate the answer is "Yes" (I have no evidence to suggest they will stop) and I think I have dissolved the more confusing question I was starting with.

Given that that makes sense to me, I think my next step is if it makes sense to other people. If I've come up with an explanation which makes sense only to me, that doesn't seem likely to be helpful overall.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-04-03T20:33:44.493Z · LW(p) · GW(p)

Makes sense to me.

comment by TheOtherDave · 2012-04-02T18:33:03.985Z · LW(p) · GW(p)

I don't have an answer to the question I think you're asking, but it's perhaps worth noting (if only to preempt confusion) that there are different notions of probability that may provide different answers here. Probability as a mental construct that captures one's ignorance about the actual value of something in the world (e.g., what we refer to when we say a fair coin, when flipped, has a 1/2 probability of coming up heads) has a smallest unit that derives from the capabilities of the mind in which that construct exists, but this has nothing to do with the question of quantum measure you're raising here.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-04-03T09:25:32.615Z · LW(p) · GW(p)

The probability that a coin comes up heads is 0.5. The probability of N coins all coming up heads is 0.5^N. So what exactly was the original question in this context -- are we asking whether there exists a smallest value of 0.5^N?

Well, if the universe has a finite time, if there is a smallest time unit, if the universe has a finite number of elementary particles... this would provide some limit on the total number of coin flips in the universe. Even for infinite universes we could perhaps find some limit by specifying that the coin flips must happen in the same light cone...

But is this really what the original question was about? To me it seems like the question is confused. Probability is a logical construct, not something that exists, even if it is built on things that exist.

It would be like asking "what is the smallest positive rational number" with the additional constraint that a positive number must be P/Q, where P and Q are numbers of pebbles in pebble heaps that exist in this universe. If there is a limited number of particles in the universe, that puts a limit on the value of Q, so there is some minimum value of 1/Q. But what exactly does this result mean? Even if the Q really exists, the 1/Q is just a mental construct.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-03T14:26:59.693Z · LW(p) · GW(p)

I'm fairly sure the original question was trying to ask about something labelled "probability" that wasn't (exclusively) a mental construct, which is precisely why I brought up the idea of probability as a mental construct in the first place, to pre-empt confusion. Clearly I failed at that goal, though.

I'm not exactly sure what that something-labelled-"probability" was. You may well be right that the original question was simply confused. Generally when people start incorporating events in other Everett branches into their reasoning about the world I back away and leave them to it.

The OP aside, I do expect there are values of P too small for a human brain to actually represent. Given a probability like .000000001, for example, most of us either treat the probability as zero, or stop representing it in our minds as a probability at all. That is, for most of us, our representation of a probability of .000000001 is just a number, indistinguishable from our representation of a temperature difference of .000000001 degrees Celsius or a mass of .000000001 grams.
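For a concrete instance of a representation with a smallest positive probability: IEEE-754 doubles (Python's float) bottom out near 5e-324, even though the mathematical reals they approximate have no such floor:

```python
# The smallest positive subnormal double is 2**-1074; halving it underflows.
smallest = 0.5 ** 1074
assert smallest > 0.0           # still representable...
assert 0.5 ** 1075 == 0.0       # ...but one more halving is exactly zero
```

Any fixed-width representation, whether silicon or neurons, imposes some such floor; that is a fact about the representing system, not about probability itself.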

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-04-03T15:16:36.630Z · LW(p) · GW(p)

So we could exclude computations of expressions and consider only probabilities of "basic events", assuming that concept turns out to be coherent. We might ask about the probability of one coin flip, but not two coins. Speaking about coins, the "quantum of probability" is simply 1/2, end of story.

Well, I don't even know what could be a "basic event" at the bottom level of the universe -- the more I think about it, the more I realise my ignorance of quantum physics.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-03T16:02:20.317Z · LW(p) · GW(p)

I don't see where the "basic event"/"computation of expression" distinction gets us anywhere useful. As you say, even defining it clearly is problematic, and whatever definition we use it seems that any event we actually care about is not "basic."

It also seems pretty clear to me that my mind can represent and work with probabilities smaller than 1/2, so restricting ourselves to domains of discourse that don't require smaller probabilities (e.g., perfectly fair tosses of perfectly fair coins that always land on one face or the other) seems unhelpful.

comment by hesperidia · 2012-04-02T05:47:21.019Z · LW(p) · GW(p)

Would anyone be interested in following a liveblog of the Sequences on Tumblr? I plan to use this as a public opportunity to think in depth about many concepts that I skimmed over on my first read-through.

Currently wondering whether a blogging service is the best medium for such a project. Currently leaning towards doing it. Undecided if I should use my main or a sideblog.

Replies from: hesperidia
comment by hesperidia · 2012-04-03T23:52:38.304Z · LW(p) · GW(p)

Now up at lwliveblog.tumblr.com. The About page contains information about myself (the writer) and ground rules for my interaction with any audience (or lack thereof).

To read in chronological instead of reverse chronological order, use this link.

You don't need to register for tumblr to follow the blog and comment on it! You can use the RSS feed, and disqus comments are available if you click into each post's individual page.

edit: fixed formatting

Replies from: folkTheory
comment by folkTheory · 2012-04-05T20:42:30.943Z · LW(p) · GW(p)

What's a liveblog?

Replies from: Nornagest
comment by Nornagest · 2012-04-05T21:06:19.726Z · LW(p) · GW(p)

A genre of commentary or critical response that involves blogging running comments as you go through a work. Something Awful's "Let's Play" series might be the best-known examples.

comment by Paul Crowley (ciphergoth) · 2012-04-25T16:31:39.143Z · LW(p) · GW(p)

Author Ken McLeod published this persuasive article:

The one thing [fiction] cannot do is help us to understand human nature and the motivations of other people. If it did, the work done in Departments of English (etc) Literature would be of enormous interest to Departments of (e.g.) Business Studies, Politics, and Sociology. Oddly enough it is not. For real insight into human behaviour, practical people turn to science.

He posted this as an April Fool. However, I have to say I find the argument pretty persuasive. Is April-1-Ken right?

Replies from: gwern
comment by gwern · 2012-04-25T17:08:05.974Z · LW(p) · GW(p)

He's righter than he thinks he is. See http://www.gwern.net/Culture%20is%20not%20about%20Esthetics#fn18

comment by [deleted] · 2012-04-12T05:45:34.224Z · LW(p) · GW(p)

What is the future of human languages?

comment by Mitchell_Porter · 2012-04-11T12:33:09.433Z · LW(p) · GW(p)

Is there something like Kickstarter that isn't limited to American projects? Google sent me a voucher for "$75 in free advertising" which expires in a few days, and I thought, aha, I'll make a Kickstarter project to support my work on ontology of mind, and then advertise its existence via AdWords; but it turns out that you have to be a US resident.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-04-14T12:11:21.139Z · LW(p) · GW(p)

IndieGoGo seems to pretty much be the international version of Kickstarter.

comment by pleeppleep · 2012-04-04T23:00:38.529Z · LW(p) · GW(p)

I recently got to have a pleasant conversation with a woman who makes a living as a spiritual medium. My father is dating her, for what is most likely to be a very short time, and he brought up her profession over dinner. It became sort of a Q and A session, and I would like to share the experience with this community. It was exceptionally interesting to speak with what can only be called a grandmaster of the Dark Arts. I can't give you an exact play-by-play, unfortunately, but I can probably communicate the gist of the conversation.

My question is this: is this suitable for a discussion, or a main post? Please respond, as I don't know how long I'll remember exactly what was said.

Replies from: pedanterrific
comment by pedanterrific · 2012-04-04T23:13:27.486Z · LW(p) · GW(p)

Have you thought about writing it down in note form?

Replies from: pleeppleep
comment by pleeppleep · 2012-04-04T23:21:44.184Z · LW(p) · GW(p)

Do you mean take notes? I would, but I'm not sure I can write a transcript without rewriting the memory. I suppose I might have to when I write the post, but I'd still rather talk about it while the event is fresh in my mind.

comment by J_Taylor · 2012-04-01T23:42:52.623Z · LW(p) · GW(p)

In the previous open thread, there was a post made here on the topic of learning computer science for purposes of becoming a programmer. The post received several upvotes, but little response. I am hoping that by linking to the post here, I will call more attention to it.

comment by shminux · 2012-04-01T06:14:13.956Z · LW(p) · GW(p)

I was quite surprised by the strong and negative reaction to my comment about cryonics being afterlife for atheists. Even EY jumped into the fray. It must have hit a raw point, or something. As jkaufman noted, the similarities are uncanny. So, it looks like a duck, swims like a duck, and quacks like a duck, but is heatedly advocated here and elsewhere to be a raven. The only reasonable argument (I don't consider marketing considerations reasonable) is by orthonormal, who suggested that this is a surface similarity and paying attention to it amounts to a cargo cult.

Hence my question: how do you tell if a certain procedure is a cargo cult or something worthwhile, if there is no easy experimental test? If you find such a procedure, please apply it to something other than cryonics, so that it does not appear to be an ad hoc solution.

Replies from: ciphergoth, Vladimir_Nesov, AlanCrowe, None, wedrifid, John_Maxwell_IV, Thomas
comment by Paul Crowley (ciphergoth) · 2012-04-01T15:39:33.351Z · LW(p) · GW(p)

It must have hit a raw point, or something

Oh God, please don't say this; it's an absolutely classic way to seem clever to yourself and lock in existing beliefs. Please don't treat people reacting badly to what you say as evidence that it was a good and valuable thing to say.

Replies from: shminux
comment by shminux · 2012-04-01T17:49:59.242Z · LW(p) · GW(p)

Actually, my original suggestion ("it needs a catchy slogan") was about promoting cryonics, and the example I gave was the first thing that popped into my head, in the hope that others would come up with something better. Instead the discussion turned to the reasons why my suggestion was so awful. In retrospect, this was a classic pitfall: offering a single solution too early. I was taken aback by the reaction, and wanted to know what provoked it and how to tell whether the arguments are valid.

Oh, and I personally would sign up for cryonics, if only I could (not going to go into the reasons why I cannot at this time).

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-04-01T18:48:30.813Z · LW(p) · GW(p)

Actually, my original suggestion ("it needs a catchy slogan") was about promoting cryonics, and the example I gave was the first thing that popped into my head

In the grandparent, you wrote:

The only reasonable argument (I don't consider marketing considerations reasonable) is by orthonormal

These statements seem to be in contradiction.

Replies from: shminux
comment by shminux · 2012-04-01T19:08:32.970Z · LW(p) · GW(p)

As I said, "I was taken aback by the reaction, and wanted to know what provoked it and how to tell whether the arguments are valid"

comment by Vladimir_Nesov · 2012-04-01T11:56:20.493Z · LW(p) · GW(p)

So, it looks like a duck, swims like a duck, and quacks like a duck

...but we understand in detail how it functions underneath, which screens off any surface impressions. What is the question that you want to answer? It doesn't seem like you are asking a question about cryonics, instead you are considering how to promote it. Is it a good idea to draw attention to those categories? That is the question, not whether those categories somehow "really apply".

comment by AlanCrowe · 2012-04-01T17:07:25.548Z · LW(p) · GW(p)

Why do you ask for an easy experimental test? If the experiment is hard, such that you rely on third party reports, but the result is not in doubt, then the experiment serves just as well. Granting that experiments may be hard, if we are sure that they are reported honestly, here are two that are relevant to cryonics.

First is the well-known point of food hygiene: that one should not refreeze frozen meat. Some food-poisoning bacteria are not killed by freezing, and grow every time the meat is warm enough. If I were a salmonella bacterium I would sign up for cryonics, confident that I was using a proven technology.

Second is the use of hypothermia in heart surgery. The obvious deadness of the patients is very striking for someone my age (51), brought up in a world where death was defined by the stopping of the heart. I imagine the equivalent for the Christian vision of resurrection to eternal life in heaven is that at most funerals the priest says the magic words and the corpse revives for 5 minutes to say final goodbyes and reassure the mourners that they will meet up again on judgment day. Since it is only for five minutes, not eternity, and since it is on earth, not in heaven, one may find it unconvincing, just as one may find the use of hypothermia in heart surgery unconvincing. It is not exactly the thing promised. Nevertheless, such partial demonstrations are important. There would be few Jews left if priests could work the 5-minutes thing and rabbis couldn't.

There are various ways of stopping a corpse from rotting. The Egyptians had techniques of mummification. Burning and retention of the ashes prevents icky decay. Pasteurization, that is mild heating, to kill bacteria does something useful. Why are cryonicists wedded to cold? As we have seen, there are experimental reasons for the choice. This seems very different from the main religious practice of being Protestant, Catholic, Sunni, or Shia, and following the religion one was born to without expecting one path to have experiments that make it seem uniquely promising.

How do you tell if a certain procedure is a cargo cult or something worthwhile, if there is neither an easy experimental test that one has done oneself, nor a hard experimental test that one accepts as honest? That is an interesting question, but I don't see cryonics as being so purely theoretical that it provides a vehicle for exploring the issue.

comment by [deleted] · 2012-04-01T15:23:29.786Z · LW(p) · GW(p)

In fact all the replies you got related to marketing considerations because your comment was about marketing considerations. From that point of view, it had some obvious flaws, which people pointed out.

Do you actually want to discuss whether or not cryonics is a religion (or some improved formulation of that question)?

Replies from: faul_sname
comment by faul_sname · 2012-04-01T20:23:43.130Z · LW(p) · GW(p)

I think the question that should be asked is whether cryonics is a waste of hope, as many religions are, or if it's viable (I'm still not sure if it would work, but it does seem plausible that it would)

Replies from: None
comment by [deleted] · 2012-04-02T01:39:57.460Z · LW(p) · GW(p)

That question should be asked, not flippantly implied. The comment linked above was targeted at pride, so it is no surprise that so many replied. Cryonics is a thing believed by many here, and if you take pot shots, the end result is clear.

Replies from: faul_sname
comment by faul_sname · 2012-04-02T04:14:38.428Z · LW(p) · GW(p)

Cryonics is a thing believed by many here

Your phrasing is interesting, and phrasing like that is probably one of the factors contributing to the cryonics<==>afterlife for transhumanists association many people hold.

Replies from: None
comment by [deleted] · 2012-04-02T05:02:48.509Z · LW(p) · GW(p)

"Considered to be true" didn't scan.

comment by wedrifid · 2012-04-01T22:17:54.972Z · LW(p) · GW(p)

I was quite surprised by the strong and negative reaction to my comment about cryonics being afterlife for atheists.

You made a 'suggestion for a catchy slogan' for cryonics which actually constitutes an emotional argument against cryonics (that is, it affiliates it with something that is already rejected so implies that it too should be rejected). That makes it a terrible suggestion for a catchy slogan for cryonics advocates to adopt.

If you want to make a point about how cryonics has a feature that is similar to a feature in some religions then make that point - but don't pretend you are suggesting a catchy slogan for cryonics when you are suggesting a catchy slogan to use when one-upping cryonics advocates.

Replies from: shminux
comment by shminux · 2012-04-02T00:10:28.059Z · LW(p) · GW(p)

As I said in another comment, it started as a suggestion, but the reaction got me thinking about the similarity and how to tell the difference.

comment by John_Maxwell (John_Maxwell_IV) · 2012-04-01T19:15:17.296Z · LW(p) · GW(p)

Maybe rationalists don't like being casually labeled as something they are trying very hard not to be (religious)?

Replies from: shminux
comment by shminux · 2012-04-01T20:26:32.208Z · LW(p) · GW(p)

Then they should have a ready answer why pattern matching with a religious idea is incorrect.

Replies from: Vladimir_Nesov, John_Maxwell_IV
comment by Vladimir_Nesov · 2012-04-01T21:05:42.241Z · LW(p) · GW(p)

What do you mean, "incorrect"? Matching a concept generates connotational inferences, some of which are true, while others don't hold. If the weight of such incorrect inferences is great enough, using that category becomes misleading, in which case it's best to avoid. Just form a new category, and attach the attributes that do fit, without attaching those that don't.

If you are still compelled to make analogies with existing categories that poorly match, point out specific inferences that you are considering in forming an analogy. For example, don't just say "Is cryonics like a religion?", but "Cryonics promises immortality (just as many religions do); does it follow that its claims are factually incorrect (just as religions' claims are)?" Notice that the inference is only suggested by the analogy, but it's hard to make any actual use of it to establish the claim's validity.

comment by John_Maxwell (John_Maxwell_IV) · 2012-04-01T20:35:58.615Z · LW(p) · GW(p)

Cryonics can both be a good idea and pattern match onto something religious.

People want immortality. Religions have exploited this fact by promising immortality to converts. Then a plausible scheme for immortality comes along and it looks like a religion.

comment by Thomas · 2012-04-01T09:19:42.224Z · LW(p) · GW(p)

how do you tell if a certain procedure is a cargo cult or something worthwhile

Your best guess is all you have. The more intelligent and knowledgeable you are, the more likely it is that your guess is correct. But you can't go "beyond" this.

Considering cryonics ... maybe some people don't want to wake up in a future where Alcor's procedure is necessary. If you can't wake me up from ashes ... don't even bother!

comment by David_Gerard · 2012-04-04T19:12:48.914Z · LW(p) · GW(p)

You guys know your philosophy. What is the proper name of this fallacy?

It's a common sophistry to conflate an utterly negligible probability with a non-negligible one. The argument goes:

  1. There is technically no such thing as certainty.
  2. Therefore, [argument I don't like] is not absolutely certain.
  3. Therefore, the uncertainty in [argument I don't like] is non-negligible.

Step 3 is the tricky one. Humans are, in general, really bad at feeling the difference between epsilon uncertainty and uncertainty sufficient to be worth taking notice of - they can't tell a nonzero chance from one that's ever worth paying attention to.

I could make up a neologism for it, but this thing must have been around approximately forever. What is its proper name, if any? Who was the first person to note it as fallacious? Any history of it would be most welcomed.

Replies from: J_Taylor
comment by J_Taylor · 2012-04-04T20:39:17.326Z · LW(p) · GW(p)

Well, this instance is certainly a False Dichotomy. That is, the argument assumes that everything is either certain or non-negligibly uncertain. It also sort of looks like an instance of what is sometimes called an Appeal to Possibility or an Appeal to Probability. (1. This argument is uncertain. 2. If an argument is uncertain, it is possible that the uncertainty is non-negligible. 3. Therefore, it is possible that this argument's uncertainty is non-negligible. 4. Therefore, this argument's uncertainty is non-negligible.)

On Lesswrong, all of this is generally called the Fallacy of Gray.

Edit: Oh, yeah. This is totally the Continuum Fallacy

Replies from: David_Gerard
comment by David_Gerard · 2012-04-04T21:17:59.988Z · LW(p) · GW(p)

Ah, a specific variant of the Continuum Fallacy, applied to probability. Yep.

I'd still be somewhat surprised if it didn't have its own name yet. But if it doesn't, I suppose we can create a good neologism. What name should it have as a particular variant? (The way argumentum ad Hitlerum or argumentum ad cellarium are argumentum ad hominem variants.) Does anything snappy spring to mind?

Replies from: Grognor
comment by Grognor · 2012-04-04T21:56:14.820Z · LW(p) · GW(p)

What's wrong with "fallacy of gray"?

Replies from: David_Gerard
comment by David_Gerard · 2012-04-04T23:24:13.300Z · LW(p) · GW(p)

Nothing at all, I'm just aware enough of the variant to want a name for it.

comment by timtyler · 2012-04-02T12:58:19.505Z · LW(p) · GW(p)

Interesting video: Alex Peake at Humanity+ @ Caltech: "Autocatalyzing Intelligence Symbiosis"

23 minutes. The blurb reads: "Autocatalyzing Intelligence Symbiosis: what happens when artificial intelligence for intelligence amplification drives a 3dfx-like intelligence explosion".

comment by John_Maxwell (John_Maxwell_IV) · 2012-04-01T20:30:24.129Z · LW(p) · GW(p)

This thread is for me and Tetronian and anyone else who's interested to think about how to best present the LW archives.

I think it makes sense to have an about page separate from any "guide to the archives" page. They're really fulfilling different purposes.

Here's what I'd like to see:

A core sequences page that also links to sequence SR cards and PDF downloads for the sequences.
A page for nonlinear reading of the core sequences (referring to that page with the graphs, Luke's Reading Yudkowsky series of posts, alternative indices, and anything else along those lines).
A regular sequences page for sequences that aren't core sequences (including sequences written by people who aren't EY).
A page categorizing attempts to explain/summarize/discuss math and science that have appeared on Less Wrong (roughly, anything that has an equation/diagram or cites a paper), along with a link to "The Beauty of Settled Science".
A page categorizing good articles that didn't fit any other category.
And a final page serving as a guide to the archives, which links to all of the others, sample posts of each category, and "yes a blog" to provide impetus for study.

Replies from: None
comment by [deleted] · 2012-04-01T21:22:32.956Z · LW(p) · GW(p)

I agree with pretty much all of this, although I think some of these features could be added to existing pages. For example, links to PDFs or Luke's Reading Yudkowsky series could be added to the existing wiki pages for each sequence.

Thus far I've made this, which is the first draft of a sample of posts from the sequences.

What I'm currently working on: Collecting a sample of the best posts on core LW topics from the archives and arranging them in a sensible way.

comment by J_Taylor · 2012-04-04T20:48:25.748Z · LW(p) · GW(p)

Here is an SMBC comic which demonstrates the Utility Monster argument against utilitarianism.

http://www.smbc-comics.com/index.php?db=comics&id=2569#comic

comment by James_Miller · 2012-04-01T05:00:40.082Z · LW(p) · GW(p)

Removed because of Bur's comment.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-04-01T08:52:37.973Z · LW(p) · GW(p)

Warning: randomly clicking on this page may freeze your web browser (happened with Firefox).

comment by [deleted] · 2012-04-16T14:43:40.634Z · LW(p) · GW(p)

Our civilization is not provably Friendly and why this should worry us

As I was thinking about my draft "moral progress" sequence and debating people on the LessWrong chat channel, it occurred to me, in a sort of "my current beliefs already imply this but I hadn't noticed it so clearly before" way, that our civilization does not reliably ensure its own survival or produce anything like fixed versions of its values. In other words, if we judge current human civilization by the same standards as AI, it is almost certainly unfriendly. FAI is thus an insurance policy not just against AI or difficult-to-defeat existential risks, but against ourselves too.

The study of existential risk already implies that we don't reliably optimize for civilization-wide survival, and this is a common topic on LW; my own (as yet unpublished, but supported by many other commenters on LW) sequence of posts on moral progress attacks the "fixed values" part at a fundamental level. Still, the problem hasn't been addressed in this form. Even the "people are crazy, the world is mad" attitudes behind "raising the sanity waterline" efforts fundamentally fail to address it: they assume we have a good structure and all we need is more rational people, or that even if the structure is rotten, enough rational people could use the killer application of FAI to fix it. I see no guarantee of this at all, and very little plausibility. The former assumption in particular seems like assuming that better CPU cores make FAI development more likely, while the latter relies on a very rapid, difficult-to-predict hard-takeoff scenario that isn't universally endorsed by LW/OBers.

Why are so many willing to admit that our society's truth-finding mechanisms may indeed be broken, and that moral change, not progress, is the name of the game, yet not put this together on a gut level, as with evolution or living in a universe where really bad things can happen? Or even be motivated to expend some effort to at least ascertain whether this is really as urgent as it seems at first look?

Should I make a proper article on this topic to cover further thoughts and more supporting arguments?

comment by orthonormal · 2012-04-08T22:03:39.502Z · LW(p) · GW(p)

At some point, Eliezer mentioned that TDT cooperates in the Prisoner's Dilemma if and only if its opponent would one-box with the TDT agent as Omega in Newcomb's Problem. Does anyone know where to find this quote?

comment by Multiheaded · 2012-04-07T00:25:58.460Z · LW(p) · GW(p)

(Have been chasing down my last unread bits of Orwell.)

Just look at this sh*t! Who'd wish to live under boring, wealthy, peaceful Fascism when we could have such fun? (Not a trick question.)

comment by khafra · 2012-04-06T12:50:54.468Z · LW(p) · GW(p)

I have read, here on Less Wrong, that aging may basically stop somewhere around age 100: your probability of death reaches 50% per year and doesn't go much higher; the reason people don't live much over 100 is just the improbability of the conjunction 50%*50%*50%, etc.

However, this table from the SSA seems to directly contradict that. Now I'm wondering what explains the seeming contradiction.
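For concreteness, the plateau claim's arithmetic can be sketched in a few lines of Python. The flat 50%-per-year hazard is the comment's hypothetical, not real actuarial data:

```python
# Sketch of the "mortality plateau" arithmetic above: a hypothetical
# flat 50%-per-year death probability after age 100, with each year
# an independent coin flip against death.
def survival_prob(years, annual_death_prob=0.5):
    """Probability of surviving `years` consecutive years."""
    return (1 - annual_death_prob) ** years

# Under that assumption, out of 100,000 living centenarians,
# roughly how many would reach age 110?
print(round(100_000 * survival_prob(10)))  # -> 98
```

Under this model survivorship falls exponentially but never hits a hard wall, which is exactly the tension with the SSA table's steeper falloff.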

Replies from: army1987, Incorrect
comment by A1987dM (army1987) · 2012-04-07T12:07:56.816Z · LW(p) · GW(p)

FWIW, from a histogram I quickly made from the data in http://en.wikipedia.org/wiki/List_of_the_verified_oldest_people it doesn't look like the probability of surviving to age x falls any faster than exponentially.

comment by Incorrect · 2012-04-06T13:01:44.522Z · LW(p) · GW(p)

It could be that the table is not empirical past 100.

Replies from: khafra
comment by khafra · 2012-04-06T13:25:41.035Z · LW(p) · GW(p)

Maybe. But if anybody had empirical data on old people, I would expect it to be the SSA.

Replies from: Rhwawn
comment by Rhwawn · 2012-04-06T20:32:35.832Z · LW(p) · GW(p)

I'd point out that one would expect the SSA tables to overstate the number of centenarians etc, for the simple reason that they are linked to financial payments/checks. Japan recently had some interesting reports that its centenarian numbers were overstated... because other people were collecting their pension checks. From the BBC:

More than 230,000 elderly people in Japan who are listed as being aged 100 or over are unaccounted for, officials said following a nationwide inquiry. An audit of family registries was launched last month after the remains of the man thought to be Tokyo's oldest were found at his family home. Relatives are accused of fraudulently receiving his pension for decades... Reports said he had received about 9.5m yen ($109,000; £70,000) in pension payments since his wife's death six years ago, and some of the money had been withdrawn.

...Officials have found that hundreds of the missing would be at least 150 years old if still alive.

comment by [deleted] · 2012-04-05T16:40:50.990Z · LW(p) · GW(p)

youtube video with dating advice

Why does useful and true procedural knowledge about socialization between men and women, even when presented politely and decently, nearly always attract negative reactions?

The girl in the video is very sweet and nice (she is no Roissy) about giving some semi-useful dating and socialization advice; she doesn't even break any taboos or expose any pretty lies, yet this didn't really help the video's reception.

One might say this is just a very bad video, but that is beside the point; this video is just the nearest example I had at hand for a trend I've been noticing for years. I certainly don't consider it remarkable in its quality, but it is far from bad by YouTube standards.

Is this a real pattern then?

comment by sixes_and_sevens · 2012-04-05T16:40:09.297Z · LW(p) · GW(p)

Rhetological fallacies, courtesy of Information is Beautiful

Doesn't seem to have been mentioned on LW yet, but definitely worth passing on.

Replies from: Grognor
comment by Grognor · 2012-04-07T16:14:46.815Z · LW(p) · GW(p)

I considered posting that myself, but I found that a lot of them are actually valid evidence, and it is mere rehashing of widely known (and easily available if unknown) material.

comment by provocateur_tmp · 2012-04-04T23:18:34.374Z · LW(p) · GW(p)

If I could copy you, atom for atom, then kill your old body (painlessly), and give your new body $20, would you take the offer? Be as rational as you wish, but start your reply with "yes" or "no". Imagine that a future superhuman AGI will read the LW archives and honor your wish without further questions.

Replies from: None, TheOtherDave, Zack_M_Davis
comment by [deleted] · 2012-04-05T19:11:46.677Z · LW(p) · GW(p)

No. It might copy me atom for atom and then not actually connect the atoms together to form molecules on the copy.

You also didn't mention that I would be in a safe place at the time, which means the AI could do it while I was driving along in my car, leaving me confused as to why I was suddenly sitting in the passenger's seat (the new me is made first; I obviously can't be in the driver's seat) with a 20-dollar bill in my hand while my car veered into oncoming traffic and I died in the crash.

If an AI actually took the time to explain the specifics of the procedure, had been shown to do it several times with other living beings, did it at an actual chosen time, and had an established 99.9999% safety record, then that's different. I would be far more likely to consider it. But the necessary safety measures aren't described as being there, and simply assuming "safety measures will exist even though I haven't described them" is just not a good idea.

Alternatively, you could offer more than just twenty, since given a sufficiently large amount of money and some heirs, I would be much more willing to take this bet even without guaranteed safety measures. That assumes I could at least be sure the money would be safe (and I doubt I could, since "Actually, your paper cash was right here, but it burned up in the fireball when we used an antimatter-matter reaction to power the process." is also a possible failure mode).

But "At some random point in the future, would you like someone very powerful whom you don't trust to mess with your constituent atoms in a way you don't fully understand and that will not be fully described? It'll pay you twenty bucks." is not really a tempting offer when evaluating risks and rewards.

comment by TheOtherDave · 2012-04-04T23:32:44.156Z · LW(p) · GW(p)

My willingness to take the offer is roughly speaking dependent on my confidence that you actually can do that, the energy costs involved, how much of a pain in my ass the process was, etc. but assuming threshold-clearing values for all that stuff, sure. Which really means "no" unless the future superhuman AGI is capable of determining what I ought to mean by "etc" and what values my threshold ought to be set at, I suppose. Anyway, you can keep the $20, I would do it just for the experience of it given those constraints.

Replies from: TimS
comment by TimS · 2012-04-04T23:41:18.576Z · LW(p) · GW(p)

And the caveat that memories/personality are in the atoms, not in more fundamental particles.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-04T23:51:52.947Z · LW(p) · GW(p)

Yeah, definitely. I took "atom for atom" as a colloquial way of expressing "make a perfect copy".
The "etc" here covers a multitude of sins.

comment by Zack_M_Davis · 2012-04-05T07:17:08.578Z · LW(p) · GW(p)

Yes, it's a free $20. Why is this an interesting question?

comment by provocateur · 2012-04-04T21:58:07.925Z · LW(p) · GW(p)

Brevity is the soul of wit. Why is LW so obviously biased towards long-windedness?

Replies from: orthonormal, Grognor, shminux, thomblake
comment by orthonormal · 2012-04-08T22:50:19.120Z · LW(p) · GW(p)

Have you ever tried to read a math textbook that cherishes being short and concise? They're nigh unreadable unless you already know everything in them.

When you're discussing simple concepts that people have an intuitive grasp of, then brevity is better. When there's an inferential distance involved, not so much.

Replies from: XiXiDu, None
comment by XiXiDu · 2012-04-09T09:47:08.398Z · LW(p) · GW(p)

Have you ever tried to read a math textbook that cherishes being short and concise? They're nigh unreadable unless you already know everything in them.

Tried Mathematics 1001? Only $16.13 at Amazon.

Replies from: None
comment by [deleted] · 2012-04-09T14:41:20.413Z · LW(p) · GW(p)

I think that illustrates the point actually; the topics in that book either do not have much of an inferential distance or as the description you link to says "The more advanced topics are covered in a sketchy way". Serge Lang's Algebra on the other hand...

Replies from: orthonormal
comment by orthonormal · 2012-04-09T23:44:03.740Z · LW(p) · GW(p)

Funny, Serge Lang's Algebra was one of my mental examples. (Also see: anything written by Lars Hörmander.)

comment by [deleted] · 2012-04-08T23:23:43.629Z · LW(p) · GW(p)

Have you ever tried to read a math textbook that cherishes being short and concise? They're nigh unreadable unless you already know everything in them.

That's not entirely true -- Melrose's book on Geometric Scattering Theory, Serre's book on Lie Groups and Algebras, Spivak's book on Calculus on Manifolds, and so on.

I think the phenomenon you're pointing to is closer to the observation that the traits that make one a good mathematician are mostly orthogonal to the traits that make one a good writer.

comment by Grognor · 2012-04-05T00:19:09.362Z · LW(p) · GW(p)

I don't know about others, but it helps me understand an idea when I read a lot of words about it. I think it causes my subconscious to say "this is an important idea!" better than reading a concise, densely-packed explanation of a thing, even if only once. This is a guess; I don't know the true cause of the effect, but I know the effect is there.

comment by shminux · 2012-04-08T23:21:42.923Z · LW(p) · GW(p)

But an enemy of knowledge transfer.

comment by thomblake · 2012-04-04T22:10:44.433Z · LW(p) · GW(p)

wit != rationality.

Also, I'm pretty sure the bias, if it exists, runs in the opposite direction. We even like calling our summaries "tl;dr"

Replies from: Grognor, provocateur_tmp
comment by Grognor · 2012-04-04T22:42:05.211Z · LW(p) · GW(p)

I take issue with both of your claims!

Sure, wit isn't rationality, but I suspect it can be quite the rationality enhancer.

And I assign high probability to the existence of a "long post bias", though I'm not sure it's higher at LW relative to other places. It may not be a bias, though; Paul Graham, for example, says that long comments are generally better than short ones, and this seems to be obviously true in general. In terms of posts, I'm not so sure. I would have upvoted the grandparent comment of this if it weren't rude (how hypocritical of me).

comment by provocateur_tmp · 2012-04-04T22:54:03.533Z · LW(p) · GW(p)

wit != rationality

Keep your wits about you. In Shakespeare's times the word meant "intelligence".

P.S. Someone explain the downmods to me. The parent either didn't know the saying was from Hamlet, or thought "wit" meant "humor" in this context.

Replies from: sixes_and_sevens
comment by sixes_and_sevens · 2012-04-05T13:44:08.386Z · LW(p) · GW(p)

Too many cooks spoil the broth, but many hands make light work. Can someone please explain to me why this broth, made by far too many cooks, was both labour-intensive and delicious?

"Brevity is the soul of wit" is an idiom, not some sort of undisputed fact. Your question doesn't highlight an interesting contradiction; at best it will be interpreted as a weak play on words, and at worst it will be interpreted as trolling.

comment by Grognor · 2012-04-04T21:27:16.416Z · LW(p) · GW(p)

What are some unforgettable moments in the lives of Less Wrongers?

Anything will do, and I don't mind if you tell it in story-mode or in "here are the exact, objective events"-mode, but do try to pick one or the other rather than a hybrid.

Replies from: Alicorn, Grognor
comment by Alicorn · 2012-04-04T21:31:20.650Z · LW(p) · GW(p)

Do you have a purpose in attempting this collection project?

Replies from: Grognor
comment by Grognor · 2012-04-04T21:43:50.672Z · LW(p) · GW(p)

Just curious.

comment by Grognor · 2012-04-07T16:15:33.242Z · LW(p) · GW(p)

I guess LWers lead very uninteresting/private lives.

Replies from: ciphergoth
comment by thescoundrel · 2012-04-03T18:44:51.418Z · LW(p) · GW(p)

Looks like Zach Wiener at SMBC might be reading up on FAI.

Replies from: cousin_it, Nisan
comment by Nisan · 2012-04-03T21:54:00.494Z · LW(p) · GW(p)

And/or Nozick's utility monster.

comment by Alex_Altair · 2012-04-02T21:28:38.591Z · LW(p) · GW(p)

Does anyone know if there is an "FAI" sequence? I can't seem to find a list of all the posts relevant to FAI or UFAI failures.

Replies from: Grognor
comment by Grognor · 2012-04-03T10:02:08.907Z · LW(p) · GW(p)

So many of the posts in the sequences are indirectly related to FAI that the most concise list of "FAI posts" is here.

The wiki article on FAI covers a lot of ground and has links to LW posts and other websites explaining it in a bit more detail.

comment by Zaine · 2012-04-02T02:57:51.900Z · LW(p) · GW(p)

(Edit: Thanks to Micaiah Chang for the links and suggestion.)
Does anyone here speak Japanese? If so, or even if not, I'd like to discuss the morals and themes of the story 「走れメロス」 (Hashire Merosu, "Run, Melos!"). If you have read it but your memory is a bit fuzzy, here's a rough summary:

The story "Run, Melos!" teaches the lesson that even if a person gives up once, they can triumph at anything if they try with all their might. At the beginning, Melos goes to the market in Syracuse to buy clothes for his sister's wedding and food for the wedding guests. However, in Syracuse the king, whose broken heart can no longer trust anyone, has been killing his retainers. Melos is "a man of straightforward heart"; he cannot forgive this, and, carrying everything he bought, he hurries to the king's castle. Melos believes that being able to trust people is the most beautiful thing, and wants to change the king's distrust, but the king does not believe what Melos says, and so decides to execute him. Because Melos must attend his sister's wedding, he makes the king a promise: if Melos does not return to the castle by sunset three days later, the king may kill Melos's friend in his place. Melos watches his sister's wedding and rejoices for a day, as the last time he can live among all the people he loves. Waking on the morning of the third day, he begins running back toward Syracuse. Along the way there are many hardships, and despite giving up once, he reaches the castle at the last possible moment and is able to save his friend. Because each of them had given up once, the two strike each other in the face and then laugh together. While everyone who had come to watch the execution weeps, the king approaches Melos and his rescued friend. The king is so moved by the strength and beauty of the human heart that he wishes to become their friend. Melos conveys an "Of course!" feeling, and a woman gives Melos a red cloak, for through his hardships he had ended up naked. Everyone laughs together, and so ends "Run, Melos!"

Here's the wikipedia page, and the full story in Japanese. (Note that the English translation omits the themes of Merosu's promise to his friend, so whether he perseveres because he values the trust his friend has in Merosu's word, for the sake of his friend, or simply because he knows his friend believes in him, is never made clear. I should also note I've read the abridged version, and am in the process of reading the unabridged version.)

My main issue with understanding this stems from the point at which Merosu gives up. He thinks, "Know I tried my best...." before passing out, hoping his friend will forgive him (even though, when he thinks this, he assumes he just caused his friend's execution). Then, upon saving his friend, Merosu shares how he once gave up, and his friend shares how he once stopped believing in Merosu - and they forgive each other. The story doesn't again address Merosu's defeated thought of 'at least I tried', other than when his friend forgives him in the end.

I can't quite grasp just what the morals or main themes of the story are because of this one unresolved issue. Is it stating that as long as you do everything you can, it's alright, regardless of whether you succeed? Or is Merosu only forgiven in the end because he actually does show up and thereby save his friend?

My current thinking on this:
・Because Merosu does indeed arrive in time to save his friend, it doesn't matter what else happened (please keep in mind I'm only currently interested in what the story's actually expressing as well as what it's attempting to express, if they happen to be separate things).
・Promises are important, and as long as you do whatever you can to honor them, it's fine if in the end you can't.
・Belief in others is a beautiful thing; as long as someone believes in you, you should forgive whatever else they may have thought previously.
・We're all human, and imperfect, so you should forgive repentant ones' trespasses.

(I figured this to be a bit of an esoteric discussion; I hope I picked the right place to post it.)

comment by Multiheaded · 2012-04-10T07:20:39.715Z · LW(p) · GW(p)

I've been looking into American politics a little, and it sure is a hilarious business! Here's a short riddle for you. (Disclaimer: not intended to make any implications or mind-kill anyone; I'm not taking a dig at any opponents.)

"As __ , we believe America is a land of boundless opportunity, where people can better themselves, their children, their families, and their communities through education, hard work, and the freedom to climb the ladder of economic mobility." (Paragraph from a group's mission statement.)

Without googling, can you tell what the missing noun is?

Replies from: Jayson_Virissimo, Nornagest, None, pedanterrific
comment by Jayson_Virissimo · 2012-04-10T11:56:26.582Z · LW(p) · GW(p)

"Mexicans"?

comment by Nornagest · 2012-04-10T07:41:39.282Z · LW(p) · GW(p)

It's an applause light. Could be anyone, although the phrasing of that particular applause light makes me suspect either a moderate conservative group or a liberal-leaning group that's trying to establish centrist bona fides.

A whole lot of American political groups use economic mobility in their rhetoric (it's sort of a cultural talisman), so that by itself doesn't tell you very much; you need to dig a little deeper and find out how they're constructing economic mobility if you want to learn about their actual ideology. In particular, the American economic right tends to draw lines between ensuring equality of opportunity and equality of outcome, while the American economic left either deemphasizes that (if the group in question is more centrist) or actively asserts that the two are inseparable (if more leftist).

Replies from: pedanterrific
comment by pedanterrific · 2012-04-10T15:26:21.327Z · LW(p) · GW(p)

Spoiler: It's the Pragre sbe Nzrevpna Cebterff, n(a hancbybtrgvpnyyl yvoreny-cebterffvir) guvax gnax.

comment by [deleted] · 2012-04-11T06:35:12.493Z · LW(p) · GW(p)

I'm wondering: looking from the outside at the politics of another country, is anything different? Obviously the constellation of applause lights differs, but in general a surprisingly large part of the political rhetoric in any country is shared by most of the sides vying for power.

comment by pedanterrific · 2012-04-10T11:50:13.906Z · LW(p) · GW(p)

"Americans"?