Posts

LINK: Quora brainstorms strategies for containing AI risk 2016-05-26T16:32:02.304Z · score: 5 (6 votes)
Help Build a Landing Page for Existential Risk? 2015-07-30T06:03:26.238Z · score: 12 (13 votes)
Reminder Memes 2013-05-13T23:56:09.245Z · score: -12 (17 votes)
LINK: Human Bio-engineering and Coherent Extrapolated Volition 2012-04-28T01:40:02.931Z · score: 2 (9 votes)
Hearsay, Double Hearsay, and Bayesian Updates 2012-02-16T22:31:53.118Z · score: 47 (59 votes)
List of Donors, Fall 2011 2011-08-28T07:45:10.649Z · score: 1 (6 votes)
Should Rationalists Tip at Restaurants? 2011-07-12T05:28:32.675Z · score: 8 (10 votes)
Meetup : San Francisco & Tortuga Go Surfing 2011-07-10T01:36:07.802Z · score: 1 (2 votes)
An Outside View on Less Wrong's Advice 2011-07-07T04:46:17.611Z · score: 60 (90 votes)
Meetup : Marin & SF Less Wrong Make Things Go Boom 2011-07-05T04:38:25.480Z · score: 3 (4 votes)
Poll for next article 2011-06-24T03:23:30.575Z · score: 2 (5 votes)
San Francisco Meetup every Tues 5/10, 7 pm 2011-05-05T06:28:23.386Z · score: 3 (4 votes)
San Francisco Meetup 4/28 2011-04-24T23:48:49.736Z · score: 4 (5 votes)
Intro to Naturalist Metaethics? 2011-02-14T22:32:00.611Z · score: 4 (5 votes)
Rational Repentance 2011-01-14T09:37:11.613Z · score: 36 (55 votes)
Link: NY Times covers Bayesian statistics 2011-01-12T08:45:37.236Z · score: 7 (8 votes)
Rationalist Clue 2011-01-08T08:21:55.266Z · score: 23 (30 votes)
I'm scared. 2010-12-23T09:05:24.807Z · score: 41 (42 votes)
How to Live on 24 Hours a Day 2010-12-04T09:12:04.436Z · score: 15 (26 votes)
Rational Project Management 2010-11-25T09:32:04.283Z · score: 6 (7 votes)
The Instrumental Value of Your Own Time 2010-07-14T07:57:21.408Z · score: 23 (34 votes)
What if AI doesn't quite go FOOM? 2010-06-20T00:03:09.699Z · score: 11 (22 votes)

Comments

Comment by mass_driver on The abruptness of nuclear weapons · 2018-03-10T05:40:47.649Z · score: 6 (3 votes) · LW · GW

I like the style of your analysis. I think your conclusion is wrong because of wonky details about World War 2. 4 years of technical progress at anything important, delivered for free on a silver platter, would have flipped the outcome of the war. 4 years of progress in fighter airplanes means you have total air superiority and can use enemy tanks for target practice. 4 years of progress in tanks means your tanks are effectively invulnerable against their opponents, and slice through enemy divisions with ease. 4 years of progress in manufacturing means you outproduce your opponent 2:1 at the front lines and overwhelm them with numbers. 4 years of progress in cryptography means you know your opponent's every move and they are blind to your strategy.

Meanwhile, the kiloton bombs were only able to cripple cities "in a single mission" because nobody was watching out for them. Early nukes were so heavy that it's doubtful whether the slow clumsy planes that carried them could have arrived at their targets against determined opposition.

There is an important sense in which fission energy is discontinuously better than chemical energy, but it's not obvious that this translates into a discontinuity in strategic value per year of technological progress.

Comment by Mass_Driver on [deleted post] 2017-05-26T07:51:01.063Z

1) I agree with the very high-level point that there are lots of rationalist group houses with flat / egalitarian structures, and so it might make sense to try one that's more authoritarian to see how that works. Sincere kudos to you for forming a concrete experimental plan and discussing it in public.

2) I don't think I've met you or heard of you before, and my first impression of you from your blog post is that you are very hungry for power. Like, you sound like you would really, really enjoy being the chief of a tribe, bossing people around, having people look up to you as their leader, feeling like an alpha male, etc. The main reason this makes me uncomfortable is that I don't see you owning this desire anywhere in your long post. Like, if you had said, just once, "I think I would enjoy being a leader, and I think you might enjoy being led by me," I would feel calmer. Instead I'm worried that you have convinced yourself that you are grudgingly stepping up as a leader because it's necessary and no one else will. If you're not being fully honest about your motivations for nominating yourself to be an authoritarian leader, what else are you hiding?

3) Your post has a very high ratio of detailed proposals to literature review. I would have liked to see you discuss other group houses in more detail, make reference to articles or books or blog posts about the theory of cohousing and of utopian communities more generally, or otherwise demonstrate that you have done your homework to find out what has worked, what has not worked, and why. None of your proposals sound obviously bad to me, and you've clearly put some thought and care into articulating them, but it's not clear whether your proposals are backed up by research, or whether you're just reasoning from your armchair.

4) Why should anyone follow you on an epic journey to improve their time management skills if you're sleep-deprived and behind schedule on writing a blog post? Don't you need to be more or less in control of your own lifestyle before you can lead others to improve theirs?

Comment by mass_driver on Expecting Short Inferential Distances · 2016-12-23T10:08:49.201Z · score: 0 (0 votes) · LW · GW

And if you think you can explain the concept of "systematically underestimated inferential distances" briefly, in just a few words, I've got some sad news for you...

"I know [evolution] sounds crazy -- it didn't make sense to me at first either. I can explain how it works if you're curious, but it will take me a long time, because it's a complicated idea with lots of moving parts that you probably haven't seen before. Sometimes even simple questions like 'where did the first humans come from?' turn out to have complicated answers."

Comment by mass_driver on Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality” · 2016-12-14T01:28:31.587Z · score: 1 (1 votes) · LW · GW

I am always trying to cultivate a little more sympathy for people who work hard and have good intentions! CFAR staff definitely fit in that basket. If your heart's calling is reducing AI risk, then work on that! Despite my disappointment, I would not urge anyone who's longing to work on reducing AI risk to put that dream aside and teach general-purpose rationality classes.

That said, I honestly believe that there is an anti-synergy between (a) cultivating rationality and (b) teaching AI researchers. I think each of those worthy goals is best pursued separately.

Comment by mass_driver on Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality” · 2016-12-13T08:25:10.278Z · score: 4 (4 votes) · LW · GW

Yeah, that pretty much sums it up: do you think it's more important for rationalists to focus even more heavily on AI research so that their example will sway others to prioritize FAI, or do you think it's more important for rationalists to broaden their network so that rationalists have more examples to learn from?

Shockingly, as a lawyer who's working on homelessness and donating to universal income experiments, I prefer a more general focus. Just as shockingly, the mathematicians and engineers who have been focusing on AI for the last several years prefer a more specialized focus. I don't see a good way for us to resolve our disagreement, because the disagreement is rooted primarily in differences in personal identity.

I think the evidence is undeniable that rationality memes can help young, awkward engineers build a satisfying social life and increase their productivity by 10% to 20%. As an alum of one of CFAR's first minicamps back in 2011, I'd hoped that rationality would amount to much more than that. I was looking forward to seeing rationalist tycoons, rationalist Olympians, rationalist professors, rationalist mayors, rationalist DJs. I assumed that learning how to think clearly and act accordingly would fuel a wave of conspicuous success, which would in turn attract more resources for the project of learning how to think clearly, in a rapidly expanding virtuous cycle.

Instead, five years later, we've got a handful of reasonably happy rationalist families, an annual holiday party, and a couple of research institutes dedicated to pursuing problems that, by definition, will provide no reliable indicia of their success until it is too late. I feel very disappointed.

Comment by mass_driver on Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality” · 2016-12-13T05:13:07.718Z · score: 1 (1 votes) · LW · GW

Well, like I said, AI risk is a very important cause, and working on a specific problem can help focus the mind, so running a series of AI-researcher-specific rationality seminars would offer the benefit of (a) reducing AI risk, (b) improving morale, and (c) encouraging rationality researchers to test their theories using a real-world example. That's why I think it's a good idea for CFAR to run a series of AI-specific seminars.

What is the marginal benefit gained by moving further along the road to specialization, from "roughly half our efforts these days happen to go to running an AI research seminar series" to "our mission is to enlighten AI researchers"? The only marginal benefit I would expect is the potential for an even more rapid reduction in AI risk, caused by being able to run, e.g., 4 seminars a quarter for AI researchers, instead of 2 for AI researchers and 2 for the general public. I would expect any such potential to be seriously outweighed by the costs I describe in my main post (e.g., losing out on rationality techniques that would be invented by people who are interested in other issues), such that the marginal effect of moving from 50% specialization to 100% specialization would be to increase AI risk. That's why I don't want CFAR to specialize in educating AI researchers to the exclusion of all other groups.

Comment by mass_driver on Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality” · 2016-12-12T16:23:30.226Z · score: 8 (8 votes) · LW · GW

I dislike CFAR's new focus, and I will probably stop my modest annual donations as a result.

In my opinion, the most important benefit of cause-neutrality is that it safeguards the integrity of the young and still-evolving methods of rationality. If it is official CFAR policy that reducing AI risk is the most important cause, and CFAR staff do almost all of their work with people who are actively involved with AI risk, and then go and do almost all of their socializing with rationalists (most of whom also place a high value on reducing AI risk), then there will be an enormous temptation to discover, promote, and discuss only those methods of reasoning that support the viewpoint that reducing AI risk is the most important value. This is bad partly because it might stop CFAR from changing its mind in the face of new evidence, but mostly because the methods that CFAR will discover (and share with the world) will be stunted -- students will not receive the best-available cognitive tools; they will only receive the best-available cognitive tools that encourage people to reduce AI risk. You might also lose out on discovering methods of (teaching) rationality that would only be found by people with different sorts of brains -- it might turn out that the sort of people who strongly prioritize friendly AI think in certain similar ways, and if you surround yourself with only those people, then you limit yourself to learning only what those people have to teach, even if you somehow maintain perfect intellectual honesty.

Another problem with focusing exclusively on AI risk is that it is such a Black Swan-type problem that it is extremely difficult to measure progress, which in turn makes it difficult to assess the value or success of any new cognitive tools. If you work on reducing global warming, you can check the global average temperature. More importantly, so can any layperson, and you can all evaluate your success together. If you work on reducing nuclear proliferation for ten years, and you haven't secured or prevented a single nuclear warhead, then you know you're not doing a good job. But how do you know if you're failing to reduce AI risk? Even if you think you have good evidence that you're making progress, how could anyone who's not already a technical expert possibly assess that progress? And if you propose to train all of the best experts in your methods, so that they learn to see you as a source of wisdom, then how many of them will retain the capacity to accuse you of failure?

I would not object to CFAR rolling out a new line of seminars that are specifically intended for people working on AI risk -- it is a very important cause, and there's something to be gained in working on a specific problem, and as you say, CFAR is small enough that CFAR can't do it all. But what I hear you saying is that the mission is now going to focus exclusively on reducing AI risk. I hear you saying that if all of CFAR's top leadership is obsessed with AI risk, then the solution is not to aggressively recruit some leaders who care about other topics, but rather to just be honest about that obsession and redirect the institution's policies accordingly. That sounds bad. I appreciate your transparency, but transparency alone won't be enough to save the CFAR/MIRI community from the consequences of deliberately retreating into a bubble of AI researchers.

Comment by mass_driver on Rationality Quotes Thread February 2016 · 2016-02-19T17:03:15.702Z · score: 1 (1 votes) · LW · GW

Does anyone know what happened to TC Chamberlin's proposal? In other words, shortly after 1897, did he in fact manage to spread better intellectual habits to other people? Why or why not?

Comment by mass_driver on Help Build a Landing Page for Existential Risk? · 2015-08-14T22:04:12.518Z · score: 0 (0 votes) · LW · GW

Thank you! I see that some people voted you down without explaining why. If you don't like someone's blurb, please either contribute a better one or leave a comment to specifically explain how the blurb could be improved.

Comment by mass_driver on Help Build a Landing Page for Existential Risk? · 2015-08-09T16:44:13.631Z · score: 0 (0 votes) · LW · GW

Sure!

Comment by mass_driver on Help Build a Landing Page for Existential Risk? · 2015-08-07T18:38:05.731Z · score: 1 (1 votes) · LW · GW

Again, fair point -- if you are reading this, and you have experience designing websites, and you are willing to donate a couple of hours to build a very basic website, let us know!

Comment by mass_driver on Help Build a Landing Page for Existential Risk? · 2015-08-07T18:36:48.055Z · score: 2 (2 votes) · LW · GW

Sounds good to me. I'll keep an eye out for public domain images of the Earth exploding. If the starry background takes up enough of the image, then the overall effect will probably still hit the right balance between alarm and calm.

A really fun graphic would be an asteroid bouncing off a shield and not hitting Earth, but that might be too specific.

Comment by mass_driver on Help Build a Landing Page for Existential Risk? · 2015-08-07T18:34:08.339Z · score: 0 (0 votes) · LW · GW

Great! Pick one and get started, please. If you can't decide which one to do, please do asteroids.

Comment by mass_driver on Help Build a Landing Page for Existential Risk? · 2015-08-07T18:33:39.791Z · score: 1 (1 votes) · LW · GW

It would go to the best available charity that is working to fight that particular existential risk. For example, the 'donate' button for hostile AI might go to MIRI. The donate button for pandemics might go to the Centers for Disease Control, and the donate button for nuclear holocaust might go to the Global Threat Reduction Initiative. If we can't agree on which agency is best for a particular risk, we can pick one at random from the front-runners.

If you have ideas for which charities are the best for a particular risk, please share them here! That is part of the work that needs to get done.

Comment by mass_driver on Help Build a Landing Page for Existential Risk? · 2015-08-05T17:21:03.969Z · score: 2 (2 votes) · LW · GW

Hi Dorikka,

Yes, I am also concerned that the banner is too visually complicated -- it's supposed to be a scene of a flooded garage workshop, suggesting both major problems and a potential ability to fix them, but the graphic is not at all iconic. If you have another idea for the banner (or can recommend a particular font that would work better), please chime in.

I am not convinced that www.existential-risk.org is a good casual landing page, because (a) most of the content is in the form of an academic CV, (b) there is no easy-to-read summary telling the reader about existential risks, and (c) there is no donate button.

Comment by mass_driver on Final Words · 2015-03-17T05:12:54.574Z · score: 1 (1 votes) · LW · GW

It's probably "Song of Light," or if you want a more literal translation, "Hymn to Light."

Comment by mass_driver on A discussion of heroic responsibility · 2014-10-29T07:26:34.772Z · score: 4 (4 votes) · LW · GW

You might be wrestling with a hard trade-off between wanting to do as much good as possible and wanting to fit in well with a respected peer group. Those are both good things to want, and it's not obvious to me that you can maximize both of them at the same time.

I have some thoughts on your concepts of "special snowflake" and "advice that doesn't generalize." I agree that you are not a special snowflake in the sense of being noticeably smarter, more virtuous, more disciplined, whatever than the other nurses on your shift. I'll concede that you and they have -basically- the same character traits, personalities, and so on. But my guess is that the cluster of memes hanging out in your prefrontal cortex is more attuned to strategy than their meme-clusters -- you have a noticeably different set of beliefs and analytical tools. Because strategic meme-clusters are very rare compared to how useful they are, having those meme-clusters makes you "special" in a meaningful way even if in all other respects you are almost identical to your peers. The 1% more-of-the-time that you spend strategizing about how best to accomplish goals can double or triple your effectiveness at many types of tasks, so your small difference in outlook leads to a large difference in what kinds of activities you want to devote your life to. That's OK.

Similarly, I agree with you that it would be bad if all the nurses in your ward quit to enter politics -- someone has to staff the bloody ward, or no amount of political re-jiggering will help. The algorithm that I try to follow when I'm frustrated that the advice I'm giving myself doesn't seem to generalize is to first check and see if -enough- people are doing Y, and then switch from X to Y if and only if fewer-than-enough people are doing Y. As a trivial example, if forty of us are playing soccer, we will probably all have more fun if one of us agrees to serve as a referee. I can't offer the generally applicable advice "You should stop kicking the ball around and start refereeing." That would be stupid advice; we'd have forty referees and no ball game. But I can say "Hm, what is the optimal number of referees? Probably 2 or 3 people out of the 40 of us. How many people are currently refereeing? Hm, zero. If I switch from playing to refereeing, we will all have more fun. Let me check and see if everyone is making the same leap at the same time and scrambling to put on a striped shirt. No? OK, cool, I'll referee for a while." That last long quote is fully generalizable advice -- I wish literally everyone would follow it, because then we'd wind up with close to an optimal number of referees.

Comment by mass_driver on Entropy, and Short Codes · 2014-10-05T22:38:07.722Z · score: 0 (0 votes) · LW · GW

OK, but why is "chair" shorter than "furniture"? Why is "blue" shorter than "color"? Furniture and color don't strike me as words that are so abstract as to rarely see use in everyday conversation.

Comment by mass_driver on Entropy, and Short Codes · 2014-10-03T18:42:23.637Z · score: 0 (0 votes) · LW · GW

I'm confused. What makes "chair" the basic category? I mean, obviously more basic categories will have shorter words -- but who decided that "solid object taking up roughly a cubic meter designed to support the weight of a single sitting human" was a basic category?

Comment by mass_driver on Rationality Quotes July 2014 · 2014-09-10T22:56:43.628Z · score: 3 (3 votes) · LW · GW

That's an important warning, and I'm glad you linked me to the post on ethical inhibitions. It's easy to be mistaken about when you're causing harm, and so allowing a buffer in honor of the precautionary principle makes sense. That's part of why I never mention the names of any of my clients in public and never post any information about any specific client on any public forums -- I expect that most of the time, doing so would cause no harm, but it's important to be careful.

Still, I had the sense when I first read your comment six weeks ago that it's not a good ethical maxim to "never provide any information (even in the mathematical/Bayesian sense of "information") to anyone who doesn't have an immediate need to know it."

I think I've finally put my finger on what was bothering me: in order to provide the best possible service to my clients, I need to make use of my social and emotional support structure. If I carried all of the burdens of my work solely on my own shoulders, letting all of my clients' problems bounce around solely in my head, I'd go a little crazier than I already am, and I'd provide worse service. My clients would suffer from my peculiar errors of viewpoint. In theory, I can discuss my clients with my boss or with my assistants, but both of those relationships are too charged with competition to serve as an effective emotional safety valve -- I don't really want to rely on my boss for a dose of perspective; I'm too busy signalling to my boss that I'm competent.

I think this is probably generally applicable -- I want my doctors to have a chance to chat about me (without using my real name) in the break room or with their poker buddies, so that they can be as stable and relaxed as possible about giving me the best possible treatment. Same thing with my accountant -- I'm much more concerned that my accountant is going to forget to apply for a legal tax exemption that'll net me thousands of dollars than I am that my accountant is going to leak details about me to his friend who, unbeknownst to the accountant, is friends with the husband of an IRS agent who will then decide to give me an unfriendly audit. Sure, it's important to me that my medical and financial details stay reasonably private, but I'm willing to trade a small amount of privacy for a moderate increase in professional competence.

Do you feel differently? I suspect that some of the people who make bold, confident assertions about how "nobody should ever disclose any private information under any circumstances" are simply signalling their loyalty and discretion, rather than literally describing their preferred policies or honestly describing their intended behavior. Perhaps I'm just falling prey to the Typical Mind fallacy, though.

Comment by mass_driver on Rationality Quotes September 2014 · 2014-09-08T21:37:47.127Z · score: 15 (19 votes) · LW · GW

It seems to me that educated people should know something about the 13-billion-year prehistory of our species and the basic laws governing the physical and living world, including our bodies and brains. They should grasp the timeline of human history from the dawn of agriculture to the present. They should be exposed to the diversity of human cultures, and the major systems of belief and value with which they have made sense of their lives. They should know about the formative events in human history, including the blunders we can hope not to repeat. They should understand the principles behind democratic governance and the rule of law. They should know how to appreciate works of fiction and art as sources of aesthetic pleasure and as impetuses to reflect on the human condition.

On top of this knowledge, a liberal education should make certain habits of rationality second nature. Educated people should be able to express complex ideas in clear writing and speech. They should appreciate that objective knowledge is a precious commodity, and know how to distinguish vetted fact from superstition, rumor, and unexamined conventional wisdom. They should know how to reason logically and statistically, avoiding the fallacies and biases to which the untutored human mind is vulnerable. They should think causally rather than magically, and know what it takes to distinguish causation from correlation and coincidence. They should be acutely aware of human fallibility, most notably their own, and appreciate that people who disagree with them are not stupid or evil. Accordingly, they should appreciate the value of trying to change minds by persuasion rather than intimidation or demagoguery.

Steven Pinker, The New Republic 9/4/14

Comment by mass_driver on Rationality Quotes July 2014 · 2014-07-22T06:53:02.054Z · score: 8 (8 votes) · LW · GW

You're...welcome? For what it's worth, mainstream American legal ethics try to strike a balance between candor and advocacy. It's actually not OK for lawyers to provide unabashed advocacy; lawyers are expected to also pay some regard to epistemic accuracy. We're not just hired mercenaries; we're also officers of the court.

In a world that was full of Bayesian Conspiracies, where people routinely teased out obscure scraps of information in the service of high-stakes, well-concealed plots, I would share your horror at what you describe as "disclosing personal information." Mathematically, you're obviously correct that when I say anything about my client(s) that translates as anything other than a polite shrug, it has the potential to give my clients' enemies valuable information. As a practical matter, though, the people I meet at dinner parties don't know or care about my clients. They can't be bothered to hack into my firm's database, download my list of clients, hire an investigator to put together dossiers on each client, and then cross-reference the dossier with my remarks to revise their probability estimate that a particular client is faking his injury. Even if someone chose to go to all that trouble, nobody would buy the resulting information -- the defense lawyers I negotiate with are mathematically illiterate. Finally, even if someone bought the resulting information, it's not clear what the defense lawyers would do if they could confidently upgrade their estimate of the chance that Bob was faking his injury from 30% up to 60% -- would they tail him with a surveillance crew? They do that anyway. Would they drive a hard bargain in settlement talks? They do that anyway. Civil legal defense tactics aren't especially sensitive to this kind of information.

All of which is to say that I take my duties to my clients very seriously, and I would never amuse myself at a cocktail party in ways that I thought had more than an infinitesimal chance of harming them. If you prefer your advocates to go beyond a principle of 'do no harm' and live by a principle of 'disclose no information', and you are willing to pay for the extra privacy, then more power to you -- but beware of lawyers who smoothly assure you that they would never disclose any client info under any circumstances. It's a promise that's easy to make and hard to verify.

Comment by mass_driver on Rationality Quotes July 2014 · 2014-07-21T23:34:15.842Z · score: 13 (15 votes) · LW · GW

Is that revelation grounds for a lawsuit, a criminal offense or merely grounds for disbarment?

None of the above, really, unless you have so few murder cases that someone could plausibly guess which one you were referring to. I work with about 100 different plaintiffs right now, and my firm usually accepts any client with a halfway decent case who isn't an obvious liar. Under those conditions, it'd be alarming if I told you that 100 out of 100 were telling the truth -- someone's bound to be at least partly faking their injury. I don't think it undermines the justice system to admit as much in the abstract.

If you indiscreetly named a specific client who you thought was guilty, though, that could get you a lawsuit, a criminal offense, and disbarment.

Comment by mass_driver on Too good to be true · 2014-07-11T22:10:21.843Z · score: 4 (4 votes) · LW · GW

I'm confused about how this works.

Suppose the standard were to use 80% confidence. Would it still be surprising to see 60 of 60 studies agree that A and B were not linked? Suppose the standard were to use 99% confidence. Would it still be surprising to see 60 of 60 studies agree that A and B were not linked?

Also, doesn't the prior plausibility of the connection being tested matter for attempts to detect experimenter bias this way? E.g., for any given convention about confidence intervals, shouldn't we be quicker to infer experimenter bias when a set of studies conclude (1) that there is no link between eating lithium batteries and suffering brain damage vs. when a set of studies conclude (2) that there is no link between eating carrots and suffering brain damage?
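A rough sketch of the arithmetic behind that first question, assuming each study is independent and reports a spurious link with probability 1 - confidence when no real link exists (all numbers illustrative):

```python
# If A and B are truly unlinked, each study using significance level
# alpha = 1 - confidence has probability alpha of reporting a spurious link.
# How surprising is it that all 60 independent studies report "no link"?
for confidence in (0.80, 0.95, 0.99):
    alpha = 1 - confidence
    p_all_null = (1 - alpha) ** 60       # chance of zero false positives
    expected_spurious = alpha * 60       # links we'd expect by chance alone
    print(f"confidence {confidence:.0%}: "
          f"P(60/60 find no link) = {p_all_null:.3g}, "
          f"expected spurious links = {expected_spurious:.1f}")
```

Under those assumptions, 60 out of 60 null results would be unremarkable at 99% confidence (probability roughly 0.55) but astonishing at 80% (roughly 1.5e-6), so the answer does depend on the convention in use.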

Comment by mass_driver on Harry Potter and the Methods of Rationality discussion thread, part 25, chapter 96 · 2013-08-05T07:06:11.821Z · score: 1 (1 votes) · LW · GW

Yes, Voldemort could probably teach DaDA without suffering from the curse, and a full-strength Voldemort with a Hogwarts Professorship could probably steal the stone.

I'm not sure either of those explains how Voldemort got back to full strength in the first place, though. Did Voldemort fake the charred hulk of his body? And Harry forgot that apparent charred bodies aren't perfectly reliable evidence of a dead enemy because his books have maxims like "don't believe your enemy is dead until you see the body"? But then what was Voldemort doing between 1975 and 1990? He was winning the war until he tackled Harry; why would he suddenly decide to stop?

Comment by mass_driver on Harry Potter and the Methods of Rationality discussion thread, part 25, chapter 96 · 2013-08-02T21:11:07.143Z · score: 1 (1 votes) · LW · GW

Puzzle:

Who is ultimately in control of the person who calls himself Quirrell?

  • Voldemort

If Voldemort is possessing the-person-pretending-to-be-Quirrell using the path Dumbledore & co. are familiar with, or for that matter by drinking unicorn blood, then why isn't Voldy's magic noticeably weaker than before? Quirrell seems like he could at least hold his own against Dumbledore, and possibly defeat him.

If Voldemort took control of the-person-pretending-to-be-Quirrell's body outright using incredibly Dark magic, then why would Quirrell openly suggest that possibility to the DMLE Auror in Taboo Tradeoffs I?

If Voldemort returned to life via the Philosopher's Stone, then how did he get past the 'legendary' and 'fantastic' wards on the forbidden corridor without so much as triggering an alarm?

  • David Monroe

If Monroe disappeared on purpose in 1975, and has been having random other international adventures since then, and has only just now decided to teach Battle Magic at Hogwarts (thereby ensuring his demise, per the Dark Lord's curse on the position) because his zombie syndrome is worsening and he is worried about living out the year, then what is his purpose in teaching Battle Magic? Is it just for the fun of it? This seems unlikely; he is very serious about his subject and rarely indulges in jokes or in irrelevant scholastic diversions.

Is it because he expects that teaching the students Battle Magic will help them learn to fight back and resist Dark wizards? Then why did he plan so poorly for his big Yuletide speech about resistance and unity as to allow Harry to seriously disrupt it? Could someone as intelligent as Monroe, whose major goal is to sway political opinion, really only give one big political speech and then, at that speech, fail to prevent one (admittedly precocious) student from giving a moderately persuasive opposing speech? Why not, e.g., cast a silent, wandless Silencio charm on Harry? Or simply inform him that he has 30 words in which to state his backup wish, or else it is forfeit? Or pretend to honor the wish that he would teach Defense against the Dark Arts next year? All of these alternatives (plus others) seem obviously better to me than tolerating such blatant interference with his primary goal.

  • Lucius Malfoy

If he had those kinds of powers, he would wield them openly and just take over Britain. Also, it's hard to imagine he wouldn't have been keeping a closer watch on his son, to the point where he would know if his son was involved in a duel and/or sitting around freezing for six to eight hours.

  • Slytherin's Monster

It has mysteriously powerful lore from the ancient past, and there's no firm evidence that it was killed or locked back in the Chamber of Secrets after Voldy broke in. In fact, the person who claims that Voldy's last words to the Monster would have been Avada Kedavra is...Quirrell. Not exactly a trustworthy source if Quirrell is the Monster.

OTOH, this would be ludicrously under-foreshadowed -- canon!Monster was a non-sentient beast, and the only HPMOR foreshadowing for the Monster focused on its being very long lived and able to speak Parseltongue. It's not clear how a rationalist would deduce, from available information, that the Monster was responsible -- we have very little data on what the Monster is like, so it's very hard to strongly match the actions we observe to the actions we expect from the Monster.

  • Albus Dumbledore

Lots of pieces of weak evidence point here; Dumbledore and Quirrell are two of the highest-powered wizards around, and are two of the weirdest wizards around, and have roughly the same power level, so the hypothesis that says they are both caused by the same phenomenon gets a simplicity bonus. Dumbledore is frequently absent without a good explanation; Quirrell is frequently zombie-ish without a good explanation; Quirrell is zombie-ish more often as Dumbledore starts to get more energetic and activate the Order of the Phoenix; I cannot think of any scenes where both Dumbledore and Quirrell are being very active at exactly the same time. Sometimes Dumbledore expresses skepticism at something Quirrell says, but I cannot think of any examples of them engaging in magical cooperation or confrontation. If they are the same person, then it is convenient that Quirrell made Dumbledore promise not to investigate who Quirrell is.

We know Dumbledore snuck into Harry's room (in his own person) and left messages for Harry warning Harry not to trust Dumbledore; perhaps Dumbledore also turns into Quirrell and warns Harry in Quirrell's body not to trust Dumbledore. It is a little unclear why Dumbledore would want to limit Harry's trust in him, but it could have to do with the idea of heroic responsibility (nihil supernum) or even just standard psychology -- if Quirrell and Dumbledore agree on something, even though Quirrell says not to trust Dumbledore, then Harry is very likely to believe it.

It is hard to imagine Dumbledore murdering Hermione in cold blood, but, as Harry has been musing, you can only say "that doesn't seem like his style" so many times before the style defense becomes extremely questionable. Dumbledore prevented Hermione from receiving a Time-Turner and was suspiciously absent at the time of the troll attack (but showed up immediately after it was complete, with just enough time in between to have obliviated Fred and George, who, conveniently, handed the Marauder's Map over to the Headmaster and then forgot all about it).

OTOH, having Hermione attempt to kill Draco and then having the troll kill Hermione on school grounds is terrible for Dumbledore's political agenda -- he winds up losing support from the centrists over the attack on Draco, and losing support from everyone over incompetent security. The school, where he has been Headmaster for decades and where he must keep the Philosopher's Stone, might even be closed. It's hard to understand how putting his entire power base in grave jeopardy could be a deliberate plot on his part, nor is it easily explained in terms of feeling plot-appropriate (it doesn't) or Dumbledore's insanity (a fully general explanation).

Comment by mass_driver on Post ridiculous munchkin ideas! · 2013-05-13T04:20:23.043Z · score: 1 (1 votes) · LW · GW

Is there more to the Soylent thing than mixing off-the-shelf protein shake powder, olive oil, multivitamin pills, and mineral supplement pills and then eating it?

Comment by mass_driver on Rationality Quotes May 2012 · 2013-05-10T09:15:53.252Z · score: 1 (1 votes) · LW · GW

Isn't there a very wide middle ground between (1) assigning 100% of your mental probability to a single model, like a normal curve, and (2) assigning your mental probability proportionately across every conceivable model a la Solomonoff?

I mean the whole approach here sounds more philosophical than practical. If you have any kind of constraint on your computing power, and you are trying to identify a model that most fully and simply explains a set of observed data, then it seems like the obvious way to use your computing power is to put about a quarter of your computing cycles on testing your preferred model, another quarter on testing mild variations on that model, another quarter on all different common distribution curves out of the back of your freshman statistics textbook, and the final quarter on brute-force fitting the data as best you can given that your priors about what kind of model to use for this data seem to be inaccurate.

I can't imagine any human being who is smart enough to run a statistical modeling exercise yet foolish enough to cycle between two peaks forever without ever questioning the assumption of a single peak, nor any human being foolish enough to test every imaginable hypothesis, even including hypotheses that are infinitely more complicated than the data they seek to explain. Why would we program computers (or design algorithms) to be stupider than we are? If you actually want to solve a problem, you try to get the computer to at least model your best cognitive features, if not improve on them. Am I missing something here?

Comment by mass_driver on Rationality Quotes February 2013 · 2013-02-02T08:20:12.310Z · score: 12 (12 votes) · LW · GW

What's the percent chance that I'm doing it wrong?

Comment by mass_driver on Rationality Quotes January 2013 · 2013-01-25T22:00:41.899Z · score: 15 (17 votes) · LW · GW

I once heard a story about the original writer of the Superman Radio Series. He wanted a pay rise, his employers didn't want to give him one. He decided to end the series with Superman trapped at the bottom of a well, tied down with kryptonite and surrounded by a hundred thousand tanks (or something along these lines). It was a cliffhanger. He then made his salary demands. His employers refused and went round every writer in America, but nobody could work out how the original writer was planning to have Superman escape. Eventually the radio guys had to go back to him and meet his wage demands. The first show of the next series began "Having escaped from the well, Superman hurried to..." There's a lesson in there somewhere, but I've no idea what it is.

-http://writebadlywell.blogspot.com/2010/05/write-yourself-into-corner.html

I would argue that the lesson is that when something valuable is at stake, we should focus on the simplest available solutions to the puzzles we face, rather than on ways to demonstrate our intelligence to ourselves or others.

Comment by mass_driver on Morality is Awesome · 2013-01-25T21:58:27.770Z · score: 2 (2 votes) · LW · GW

Ironically, this is my most-upvoted comment in several months.

Comment by mass_driver on Morality is Awesome · 2013-01-25T21:57:42.857Z · score: 1 (1 votes) · LW · GW

OK, so what other way of getting people to gate-check the troublesome, philosophical, misleading parts of their moral intuitions would have fewer undesirable side effects? I tend to agree with you that it's good when people pause to reflect on consequences -- but then, when they evaluate those consequences, I want them to just consult their gut feeling, as it were. Sooner or later the train of conscious reasoning had better dead-end in an intuitively held preference, or it's spectacularly unlikely to fulfill anyone's intuitively held preferences. (I, of course, intuitively prefer that such preferences be fulfilled.)

How do we prompt that kind of behavior? How can we get people to turn the logical brain on for consequentialism but off for normative ethics?

Comment by mass_driver on Morality is Awesome · 2013-01-08T00:42:29.687Z · score: 23 (23 votes) · LW · GW

Given at least moderate quality, upvotes correlate much more tightly with accessibility / scope of audience than with quality of writing. Remember, the article score isn't an average of hundreds of scalar ratings -- it's the sum of thousands of ratings of [-1, 0, +1] -- and the default rating of anyone who doesn't see, doesn't care about, or doesn't understand the thrust of a post is 0. If you get a high score, that says more about how many people bothered to process your post than about how many people thought it was the best post ever.

Comment by mass_driver on Morality is Awesome · 2013-01-07T20:19:12.483Z · score: 1 (1 votes) · LW · GW

OK, let's say you're right, and people say "awesome" without thinking at all. I imagine Nyan_Sandwich would view that as a feature of the word, rather than as a bug. The point of using "awesome" in moral discourse is precisely to bypass conscious thought (which a quick review of formal philosophy suggests is highly misleading) and access common-sense intuitions.

I think it's fair to be concerned that people are mistaken about what is awesome, in the sense that (a) they can't accurately predict ex ante what states of the world they will wind up approving of, or in the sense that (b) what you think is awesome significantly diverges from what I (and perhaps from what a supermajority of people) think is awesome, or in the sense that (c) it shouldn't matter what people approve of, because the 'right' thing to do is something else entirely that doesn't depend on what people approve of.

But merely to point out that saying "awesome" involves no conscious thought is not a very strong objection. Why should we always have to use conscious thought when we make moral judgments?

Comment by mass_driver on Morality is Awesome · 2013-01-06T20:08:54.906Z · score: 0 (2 votes) · LW · GW

To say that something's 'consequentialist' doesn't have to mean that it's literally forward-looking about each item under consideration. Like any other ethical theory, consequentialism can look back at an event and determine whether it was good/awesome. If you going white-water rafting was a good/awesome consequence, then your decision to go white-water rafting and the conditions of the universe that let you do so were good/awesome.

Comment by mass_driver on Rationality Quotes December 2012 · 2012-12-07T02:13:15.177Z · score: 4 (4 votes) · LW · GW

Also, this book was a horrible agglomeration of irrelevant and un-analyzed factoids. If you've already read any two Malcolm Gladwell books or Freakonomics, it'd be considerably more educational to skip this book and just read the cards in a Trivial Pursuit box.

Comment by mass_driver on Train Philosophers with Pearl and Kahneman, not Plato and Kant · 2012-12-06T23:36:40.478Z · score: 1 (1 votes) · LW · GW

The undergrad majors at Yale University typically follow lukeprog's suggestion -- there will be 20 classes on stuff that is thought to constitute cutting-edge, useful "political science" or "history" or "biology," and then 1 or 2 classes per major on "history of political science" or "history of history" or "history of biology." I think that's a good system. It's very important not to confuse a catalog of previous mistakes with a recipe for future progress, but for the same reasons that general history is interesting and worthwhile for the general public to know something about, the history of a given discipline is interesting and worthwhile for students of that discipline to look into.

Comment by mass_driver on Train Philosophers with Pearl and Kahneman, not Plato and Kant · 2012-12-06T23:33:22.461Z · score: 4 (4 votes) · LW · GW

I honestly have no idea which, if any, of the reddit philosophers are trolling. It's highly entertaining reading, though.

Comment by mass_driver on Rationality Quotes October 2012 · 2012-10-16T18:49:50.248Z · score: -1 (1 votes) · LW · GW

We could bemoan these legacies, but it makes more sense to confront them head on, to consider just how we should live not in light of the bodies we wish we had but instead with the ones we are born with, bodies that evolved in the wild, thanks to ancestors who only just barely got away.

http://www.slate.com/articles/health_and_science/human_evolution/2012/10/evolution_of_anxiety_humans_were_prey_for_predators_such_as_hyenas_snakes.2.html

Comment by mass_driver on How To Have Things Correctly · 2012-10-16T17:58:28.006Z · score: 7 (7 votes) · LW · GW

If money doesn't buy you happiness, you don't have enough money.

It's trivially true that multiplying the amount of money you have by 10,000 will probably make you much happier, but the interesting question is whether this is the easiest or most efficient route to increasing happiness. Since most people have no practical path to acquiring ten billion dollars, and most people could learn to enjoy their possessions more, Alicorn's piece is quite useful.

Comment by mass_driver on The Optimizer's Curse and How to Beat It · 2012-10-16T07:59:58.299Z · score: 1 (1 votes) · LW · GW

The problem with this analysis is that it assumes that the prior should be given the same weight both ex ante and ex post. I might well decide to evenly weight my prior (intuitive) distribution showing a normal curve and my posterior (informed) distribution showing a huge peak for the Green Revolution, in which case I'd only think the Green Revolution was one of the best charitable options, and would accordingly give it moderate funding, rather than all available funding for all foreign aid. But, then, ten years later, with the benefit of hindsight, I now factor in a third distribution, showing the same huge peak for the Green Revolution. And because the third distribution is based not on intuition or abstract predictive analysis but on actual past results, it's entitled to much more weight. I might calculate a Bayesian update based on observing my intuition once, my analysis once, and the historical track record ten or twenty times. At that point, I would have no trouble believing that a charity was 100x as good as the 90th percentile. That's an extraordinary claim, but the extraordinary evidence to support it is well at hand. By contrast, no amount of ex ante analysis would persuade me that your proposed favorite charity is 100x better than the current 90th percentile, and I have no problem with that level of cynicism. If your charity's so damn good, run a pilot study and show me. Then I'll believe you.
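A minimal sketch of the kind of update described above, under assumed numbers: work in log10(effectiveness), start from a skeptical prior, and treat the ex ante analysis as one noisy observation and the years of track record as many more.

```python
# Toy conjugate-normal version of the update described above. We work in
# log10(effectiveness relative to a typical charity), so "100x better" = 2.0.
# All variances and observation counts are illustrative assumptions.

def update(mean, var, obs, obs_var):
    """Precision-weighted (conjugate normal) Bayesian update."""
    new_var = 1.0 / (1.0 / var + 1.0 / obs_var)
    new_mean = new_var * (mean / var + obs / obs_var)
    return new_mean, new_var

mean, var = 0.0, 0.5 ** 2    # skeptical prior: probably an ordinary charity
obs_var = 0.5 ** 2           # assumed noise of any single source of evidence

mean, var = update(mean, var, 2.0, obs_var)   # ex ante analysis says ~100x
for _ in range(15):                           # 15 years of observed results
    mean, var = update(mean, var, 2.0, obs_var)

print(f"posterior estimate: ~{10 ** mean:.0f}x a typical charity "
      f"(log10 mean {mean:.2f}, sd {var ** 0.5:.2f})")
```

With these illustrative numbers the repeated observations swamp the skeptical prior, which is the sense in which hindsight evidence can support an estimate that no single ex ante analysis could.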

Comment by mass_driver on [SEQ RERUN] The Truly Iterated Prisoner's Dilemma · 2012-08-23T22:54:16.800Z · score: 1 (1 votes) · LW · GW

I'm not sure what's silly about it. Just because there's only one game of IPD doesn't mean there can't be multiple rounds of communication before, during, and after each iteration.

As for the asymmetrical problem, if you're really close to 100% confident, would you like to bet $500 against my $20 that I can't find hard experimental evidence that there's a better solution than simple TFT, where "better" means that the alternative solution gets a higher score in an arena with a wide variety of strategies? If I do find an arena like that, and you later claim that my strategy only outperformed simple TFT because of something funky about the distribution of strategies, I'll let you bid double or nothing to see if changing the distribution in any plausible way you care to suggest changes the result.
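A minimal sketch of what such an arena test could look like; the strategy set, payoff matrix, and round count here are illustrative assumptions, not anything specified in the thread:

```python
import itertools
import random

# Standard PD payoffs: (my points, their points) for (my move, their move).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    return 'C' if not their_hist else their_hist[-1]

def forgiving_tft(my_hist, their_hist):
    # Like TFT, but forgives a defection 10% of the time.
    if not their_hist or their_hist[-1] == 'C':
        return 'C'
    return 'C' if random.random() < 0.10 else 'D'

def always_defect(my_hist, their_hist):
    return 'D'

def grim_trigger(my_hist, their_hist):
    return 'D' if 'D' in their_hist else 'C'

def play(a, b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = a(hist_a, hist_b), b(hist_b, hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        hist_a.append(move_a); hist_b.append(move_b)
        score_a += pa; score_b += pb
    return score_a, score_b

arena = [tit_for_tat, forgiving_tft, always_defect, grim_trigger]
totals = {s.__name__: 0 for s in arena}
for a, b in itertools.combinations(arena, 2):   # every pairing, played once
    sa, sb = play(a, b)
    totals[a.__name__] += sa
    totals[b.__name__] += sb

for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {total}")
```

Which strategy comes out on top in a tournament like this depends heavily on the mix of opponents, which is exactly what the double-or-nothing clause above is meant to probe.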

Comment by mass_driver on The Ethical Status of Non-human Animals · 2012-08-23T22:42:27.424Z · score: 3 (3 votes) · LW · GW

Even though there's no moral realism, it still seems wrong that such an important ethical question turns out to hinge on whether humans or paper-clip-maximizers started breeding first. One way of not biting that bullet is to say that we shouldn't be "voting" at all. The only good reason to vote is when there are scarce, poorly divisible resources. For example, it makes sense to vote on what audio tracks to put on the Voyager probe; we can only afford to launch, e.g., 100 short sound clips, and making the clips even shorter to accommodate everyone's preferred tracks would just ruin them for everyone. On the other hand, if five people want to play jump rope and two people want to play hopscotch, the solution isn't to hold a vote and make everyone play jump rope -- the solution is for five people to play jump rope and two people to play hopscotch. Similarly, if 999 billion Clippys want to make paperclips and a billion humans want to build underground volcano lairs, and they both need the same matter to do it, and Clippys experience roughly the same amount of pleasure and pain as humans, then let the Clippys use 99.9% of the galaxy's matter to build paperclips, and let the humans use 0.1% of the galaxy's matter to build underground volcano lairs. There's no need to hold a vote or even to attempt to compare the absolute value of human utility with the absolute value of Clippy utility.

The interesting question is what to do about so-called "utility monsters" -- people who, for whatever reason, experience pleasure and pain much more deeply than average. Should their preferences count more? What if they self-modified into utility monsters specifically in order to have their preferences count more? What if they did so in an overtly strategic way, e.g., +20 utility if all demands are met, and -1,000,000 utility if any demands are even slightly unmet? More mundanely, if I credibly pre-commit to being tortured unless I get to pick what kind of pizza we all order, should you give in?

Comment by mass_driver on [SEQ RERUN] The Truly Iterated Prisoner's Dilemma · 2012-08-23T19:19:43.106Z · score: 0 (0 votes) · LW · GW

I mean, "terribly horrible" on what scale? If the criterion is "can it be strictly dominated by another strategy in terms of results if we ignore the cost of making the strategy more complicated," then, sure, a strategy that reliably allows opponents to costlessly defect on the first of 100 rounds fails that criterion. I'd argue that a more interesting set of criteria are "is the expected utility close to the maximum expected utility generated by any strategy," "is the standard deviation in expected utility acceptably low," and "is the strategy simple enough that it can be taught, shared, and implemented with little or no error?" Don't let the perfect become the enemy of the good.

Comment by mass_driver on [SEQ RERUN] The Truly Iterated Prisoner's Dilemma · 2012-08-23T08:43:46.032Z · score: 0 (0 votes) · LW · GW

Are there strategies that, if publicly announced, will let a more sophisticated player defect on the first round and get away with it? Sure. There are also slightly better strategies that can be publicly announced without allowing for useful first-round defection. Either way, though, the gains from even shaky cooperation in a 100-round game are on the order of 70 or 80 million lives -- letting those gains slip by because you're worried about losing 1 million lives on the first round is a mistake. There's a tendency to worry about losing face, or, as Andreas puts it, not being defeated. But with real stakes on the table, you should only worry about maximizing points. Your pride isn't worth millions of lives.

Comment by mass_driver on How to Save the World · 2012-08-02T23:47:45.784Z · score: 1 (1 votes) · LW · GW

If your moral system leads you to do things that make your moral intuition queasy, you should question your moral system.

Mmm, depends whether you are using "question" as a euphemism for "reject." Certainly, you should re-examine your explicit reasoning about ethics if the conclusions you reach conflict with many of your moral intuitions. However, you should also re-examine your moral intuitions when they fail to agree with your explicit reasoning about ethics. Otherwise, there would be very little point in conducting ethical analysis -- if your analysis can't ever validly prompt you to discard or ignore a moral intuition, then you may as well stop searching your conscience and just do whatever 'feels right' at any given moment. Sometimes your intuitions give way, and sometimes your formal reasoning gives way -- that's how you reach reflective equilibrium.

Biting bullets is an overly simple solution to moral dilemma. You find yourself making monsters without much effort.

Ah, but is "don't make monsters" your most important moral objective? Suppose you had to become a monster in the eyes of your friends in order to save a village full of innocent children. Is it obvious that it would be wrong to become a monster in this sense?

Comment by mass_driver on Welcome to Less Wrong! (July 2012) · 2012-07-29T05:20:16.961Z · score: 2 (2 votes) · LW · GW

The way to bridge that gap is to only volunteer predictions when you're quite confident, and otherwise stay quiet, change the subject, or murmur a polite assent. You're absolutely right that explicitly declaring a 65% confidence estimate will make you look indecisive -- but people aren't likely to notice that you make predictions less often than other people; they'll be too focused on how, when you do make predictions, you have an uncanny tendency to be correct... and also on how you're pleasantly modest and demure.

Comment by mass_driver on Article about LW: Faith, Hope, and Singularity: Entering the Matrix with New York’s Futurist Set · 2012-07-26T20:26:03.909Z · score: 9 (9 votes) · LW · GW

No offense taken! I was that kid in middle school, but I've grown a lot since then. I've learned to read people very well, and as a result I've been able to win elections in school clubs, join a fraternity, date, host dinner parties, and basically have a social life that's as active and healthy as anyone else's.

I think often we assume that people are criticizing us because we are starting out from a place of insecurity. If you suspect and worry and fear that you deserve criticism, then even a neutral description of your characteristics can feel like a harsh personal attack. It's hard to listen to someone describe you, just like it's hard to listen to an audiotape of your voice or a videotape of your face. We are all more awkward in real life than we imagine ourselves to be; this is just the corollary of overconfidence/optimism bias, which says that we predict better results for ourselves than we are likely to actually obtain. It's OK, though. Honest, neutral feedback can be uncomfortable to hear, and still not be meant as criticism, much less as trolling.

Are there thousands of narrow-minded people who will read the article and laugh and say, "Haha, those stupid Less Wrongers, they're such weirdos?" Of course. But I don't think you can blame the journalist for that -- it's not the journalist's job to deprive ignorant, judgy readers of any and all ammunition, and, after all, we are a bit strange. If we weren't any different from the mainstream, then why bother?

Comment by mass_driver on Article about LW: Faith, Hope, and Singularity: Entering the Matrix with New York’s Futurist Set · 2012-07-26T02:11:15.454Z · score: 18 (22 votes) · LW · GW

Just read the article. I thought it was very nice! It takes us seriously, it accurately summarizes many of the things that LWers are doing and/or hope to do, and it makes us sound like we're having a lot of fun while thinking about topics that might be socially useful while not hurting or threatening anyone. How could this possibly be described as trolling? I think the OP should put the link back up -- the Observer deserves as much traffic as we can muster, I'd say.

Comment by mass_driver on Rationality Quotes July 2012 · 2012-07-13T02:47:44.539Z · score: 3 (5 votes) · LW · GW

I dunno, I think all of that is overstated. I mean, sure, perfectly rational agents will always win, where "win" is defined as "achieving the best possible outcome under the circumstances."

But aspiring rationalists will sometimes lose, and therefore be forced to choose the lesser of two evils, and, in making that choice, may very rationally decide that the pain of not achieving your (stated, proactive) goal is easier to bear than the pain of transgressing your (implicit, background) code of morality.

And if by "win" you mean not "achieve the best possible outcome under the circumstances," but "achieve your stated, proactive goal," then no, rationalists won't and shouldn't always win. Sometimes rationalists will correctly note that the best possible outcome under the circumstances is to suffer a negative consequence in order to uphold an ideal. Sometimes your competitors are significantly more talented and better-equipped than you, and only a little less rational than you, such that you can't outwit your way to an honorable upset victory. If you value winning more than honor, fine, and if you value honor more than winning, fine, but don't prod yourself to cheat simply because you have some misguided sense that rationalists never lose.

EDIT: Anyone care to comment on the downvotes?