Posts

Politics Is Upstream of AI 2016-09-28T21:47:40.988Z · score: 4 (6 votes)
Launched: Friendship is Optimal 2012-11-15T04:57:47.924Z · score: 40 (45 votes)
Friendship is Optimal: A My Little Pony fanfic about an optimization process 2012-09-08T06:16:09.920Z · score: 74 (69 votes)

Comments

Comment by iceman on MIRI's 2017 Fundraiser · 2017-12-08T06:03:26.620Z · score: 8 (8 votes) · LW · GW

I donated about $20,000, most of that in ETH. (Employer matching programs add another $12,000 on top of that.)

Comment by iceman on [deleted post] 2017-10-25T21:18:29.951Z

The conflict you feel resonates with me. The parts of the greater rationalist community that make me feel uncomfortable are firmly White; I disagree with most of their moral framework and am often annoyed that many of their moral proclamations go unquestioned and are assumed to be 'good'; e.g., effective altruism, animal rights charities, etc.

A large part of what drives me is a Blue/Black desire to know things to help myself and make my life more awesome. Unlike Sarah above, I am excited by Blue words ("knowing", "understanding") because they cash out in better ways to achieve my Black desires.

Whatever MIRI's public persona is, I think of their most exciting research as Blue/Black. The study of decision theory is firmly about enlightened self-interest, especially when you start thinking of the differences between PrudentBot and FairBot. Let the two of us trade, TDT/FDT style, to our mutual benefit. Any constraints on our behaviour are not imposed, as White might, but are consensual self-modifications to our decision processes so that we may maximize our individual utility through superior understanding.
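
For readers who haven't met these bots, here is a toy sketch of the FairBot/PrudentBot distinction. It uses bounded mutual simulation where the Robust Cooperation work uses provability logic; the depth parameter and the optimistic base cases are my simplifications, not the actual formalism.

```python
COOPERATE, DEFECT = "C", "D"

def cooperate_bot(opponent, depth):
    return COOPERATE

def defect_bot(opponent, depth):
    return DEFECT

def fair_bot(opponent, depth):
    # Cooperate iff (bounded) simulation says the opponent cooperates with us.
    if depth == 0:
        return COOPERATE  # optimistic base case, standing in for the Löbian handshake
    return opponent(fair_bot, depth - 1)

def prudent_bot(opponent, depth):
    # Cooperate iff the opponent cooperates with us AND punishes DefectBot --
    # the test that CooperateBot fails.
    if depth == 0:
        return COOPERATE
    coops_with_us = opponent(prudent_bot, depth - 1) == COOPERATE
    punishes_defectbot = opponent(defect_bot, depth) == DEFECT
    return COOPERATE if coops_with_us and punishes_defectbot else DEFECT

bots = [("CooperateBot", cooperate_bot), ("DefectBot", defect_bot),
        ("FairBot", fair_bot), ("PrudentBot", prudent_bot)]
for name_a, bot_a in bots:
    for name_b, bot_b in bots:
        print(f"{name_a:>12} vs {name_b:<12}: {bot_a(bot_b, 3)} / {bot_b(bot_a, 3)}")
```

Running it shows the difference: FairBot cooperates with anything that cooperates with it, including CooperateBot, while PrudentBot defects against CooperateBot yet still cooperates with FairBot and with itself.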

Comment by iceman on HOWTO: Screw Up The LessWrong Survey and Bring Great Shame To Your Family · 2017-10-09T21:12:34.067Z · score: 1 (0 votes) · LW · GW

(Comment copied from the old site; where are we supposed to be commenting during this transitory period?)

if you took the survey and hit 'submit', your information was saved and you don't have to take it again.

I'm not sure this is true.

I took the survey over two sessions: I filled out most of the multiple choice questions in the first session and most of the long form questions in the second. When I finally submitted, I also downloaded a copy of my answers, and was annoyed to find that it didn't contain my long form responses. At the time, I assumed this was just an export error, but you might want to verify that long form responses from later sessions actually get saved.

Comment by iceman on 2017 LessWrong Survey · 2017-09-17T21:05:29.113Z · score: 12 (12 votes) · LW · GW

Survey taken!

Comment by iceman on Seeking better name for "Effective Egoism" · 2016-11-27T23:27:09.840Z · score: 0 (0 votes) · LW · GW

I have previously been critical of Effective Altruism, comparing it to trying to be an Effective CooperateBot (example).

If we were to extend Effective CooperateBot to the other sorts of players in prisoner's dilemmas, by analogy we'd call egoists Effective FairBot / Effective PrudentBot, depending on how your brand of egoism deals with CooperateBots.

That's a mouthful and might be too technical, but it might be a nice way of reframing the question; when you said 'egoism,' I didn't know exactly what you meant.

Comment by iceman on Politics Is Upstream of AI · 2016-09-28T21:49:14.417Z · score: 3 (3 votes) · LW · GW

I also enjoyed the linked Politics Is Upstream of Science, which went in-depth on the state interventions in science talked about in the beginning of this piece.

Comment by iceman on Open Thread May 30 - June 5, 2016 · 2016-06-03T23:47:02.066Z · score: 5 (5 votes) · LW · GW

As a person who donates to MIRI and tries not to associate this with my powerword, I'd like to encourage people not to attempt to unmask pseudonyms.

Comment by iceman on Open Thread April 25 - May 1, 2016 · 2016-04-27T22:58:31.935Z · score: 4 (4 votes) · LW · GW

Now, now, I'm entirely down with the use of ponies to make points about rationality.

Comment by iceman on Turning the Technical Crank · 2016-04-06T21:08:09.939Z · score: 2 (2 votes) · LW · GW

Easy entrance is how Eternal September happened, both on LessWrong and on Usenet.

My personal bias here is that I see little hope for most of the application-level network protocols built in the 80s and 90s, but have high hopes for future federated protocols -- Urbit in particular, since a certain subtribe of the LW diaspora will already be moving there as soon as it's ready.

Comment by iceman on Consider having sparse insides · 2016-04-04T23:43:19.973Z · score: -1 (1 votes) · LW · GW

I will take minor exception to your exceptions. One of the big lessons of LessWrong for me is how different decision processes react in the iterated prisoner's dilemma. In your exceptions, you don't condition your behaviour on the expected behaviour of your trading partner. The greatest lesson I took away from LessWrong was Don't Be CooperateBot. I would however, endorse FairBot versions of your statements:

"I am the kind of person who keeps promises to the kind of person who keeps promises," and "I am a person who can be relied upon to cooperate with people who can be relied upon to cooperate."

(You'll notice that I cut out the loyalty part on that second one. I am undecided here. A lot of social technology at least vaguely pattern matches to CliqueBot, which is how I generally map loyalty to the prisoner's dilemma. However, I'm not going to endorse it as optimal.)

Comment by iceman on Lesswrong 2016 Survey · 2016-03-26T03:03:27.312Z · score: 40 (40 votes) · LW · GW

Survey achieved.

Comment by iceman on Open Thread, Feb 8 - Feb 15, 2016 · 2016-03-02T00:36:18.398Z · score: 1 (1 votes) · LW · GW

That's the account that I got the spam from AND they just messaged me again.

Comment by iceman on Where does our community disagree about meaningful issues? · 2016-02-12T21:37:39.633Z · score: 5 (5 votes) · LW · GW

The value of the Effective Altruism movement.

Comment by iceman on Open Thread, Feb 8 - Feb 15, 2016 · 2016-02-11T23:24:31.279Z · score: 4 (4 votes) · LW · GW

I just got the weirdest piece of direct messaging spam from a 0 karma account:

Hi good day. My boss is interested on donating to MIRI's project and he is wondering if he could send money through you and you donate to miri through your company and thus accelertaing the value created. He wants to use "match donations" as a way of donating thats why he is looking for people in companies like you. I want to discuss more about this so if you could see this message please give me a reply. Thank you!

I'm not sure exactly what the scam is in the above, but the plan on its face is so ridiculous that it has to be one (p=0.95). Is anyone else receiving these, or is it just me, since I'm a well-known MIRI funder?

Comment by iceman on Open thread, Jan. 25 - Jan. 31, 2016 · 2016-01-26T00:52:35.277Z · score: 1 (1 votes) · LW · GW

What's wrong with the economics on the home page? It seems fairly straightforward and likely. Mass technological unemployment seems at least plausible enough to be raised to attention. (Also.)

Comment by iceman on Open Thread, January 4-10, 2016 · 2016-01-05T21:20:23.129Z · score: 2 (2 votes) · LW · GW

Use RAID on ZFS. RAID is not a backup solution, but a proper RAID-Z2 (double parity) configuration will protect you against common hard drive failure scenarios. Put all your files on ZFS. I use a dedicated FreeNAS file server for my home storage. Once everything you have is on ZFS, turn on snapshotting. I have my NAS configured to take a snapshot every hour during the day (set to expire in a week), and one snapshot every Monday which lasts 18 months. The short-lived snapshots let me quickly recover from brain snafus like overwriting a file.

Long-lived snapshotting is amazing. Once you have filesystem snapshots, incremental backups become trivial. I have two portable hard drives, one onsite and one offsite. I plug in the hard drive, issue one command, and a few minutes later I've copied the incremental snapshot to my offline drive. My backup hard drives become append-only logs of my state. ZFS also lets you configure a dataset so that it stores every block twice (the copies=2 property), so I have that turned on just to protect against the remote chance of random bitflips on the drive.

I do this monthly, and it only burns about 10 minutes a month. However, it isn't automated. If you're willing to trust the cloud, you could make this entirely automated with something like rsync.net's ZFS snapshot support. I think other cloud providers offer snapshotting now, too.
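
For concreteness, a minimal sketch of the snapshot-then-incremental-send step as a Python wrapper around the standard zfs CLI. The pool/dataset names and the date-based snapshot naming are made up for illustration, and the bookkeeping for which snapshot the backup drive last received is elided.

```python
import subprocess
from datetime import date

DATASET = "tank/home"   # hypothetical source dataset
BACKUP = "backup/home"  # hypothetical dataset on the plugged-in backup drive

def snapshot(name: str) -> None:
    # Take a named snapshot of the source dataset.
    subprocess.run(["zfs", "snapshot", f"{DATASET}@{name}"], check=True)

def send_incremental(prev_snap: str, new_snap: str) -> None:
    # `zfs send -i old new | zfs receive` copies only the blocks that changed
    # between the two snapshots -- this is the "one command" step above.
    send = subprocess.Popen(
        ["zfs", "send", "-i", f"{DATASET}@{prev_snap}", f"{DATASET}@{new_snap}"],
        stdout=subprocess.PIPE)
    subprocess.run(["zfs", "receive", "-F", BACKUP],
                   stdin=send.stdout, check=True)
    send.wait()

today = date.today().isoformat()
snapshot(today)                        # e.g. tank/home@2016-01-05
send_incremental("2015-12-01", today)  # last month's snapshot name assumed
```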

Comment by iceman on Open Thread, Dec. 28 - Jan. 3, 2016 · 2015-12-31T01:39:46.896Z · score: 4 (4 votes) · LW · GW

Epistemic status: vague conjecture and thinking out loud.

So this article by Peter Watts has been making the rounds, talking about how half the people with cranial cavities filled 95% with cerebrospinal fluid still have IQs over 100. One of the side discussions on Hacker News was about how most of the internal tissue in the brain is used for routing, while most 'logic' happens in the outer millimeters.

So far, I haven't seen anyone make the connection to cryonics and plastination. If it's true that most of the important data is stored near the outside of the brain, does that make identity preservation through cryonics more or less likely? I vaguely remember reading that getting the core of the brain down to LN temperatures takes time. But if most of the data is near the outside of the brain, which reaches LN temperatures first, shouldn't that raise our estimate that personal identity is preserved?

Comment by iceman on MIRI's 2015 Winter Fundraiser! · 2015-12-12T05:13:56.723Z · score: 7 (7 votes) · LW · GW

You're welcome. I wasn't planning on this, but otherwise I would have left a coworker's private matching funds on the table.

Comment by iceman on MIRI's 2015 Winter Fundraiser! · 2015-12-11T21:31:35.034Z · score: 16 (16 votes) · LW · GW

$1000. (With an additional $1000 because of private, non-employer matching.)

Comment by iceman on LessWrong 2.0 · 2015-12-04T06:15:24.735Z · score: 7 (7 votes) · LW · GW

As much as I like Omnilibrium as a concept, there's a lot of work needed to productize the site. Much of the styling is bare whitespace; there are empty boxes instead of icons (notably the bookmark icon); there's no password reset (and the site barfed on my >20 character autogenerated password, so if I lose my current browser profile, I'm going to lose my account there); etc. Even worse, people didn't move there en masse, so the site was never bootstrapped.

I'm not convinced that the karma system as it exists today actually performs its desired task anymore because a good chunk of the voting seems to be done by the unquiet spirits. Back when I cared about karma here, it was because it reflected the opinions of people that I very much respected. I don't feel that way anymore.

One possible[*] solution would be to port the Omnilibrium algorithm back to LessWrong, customizing the scoring for each user, but this might be a place where we should hold off proposing solutions.

[*] As in, "Well I suppose that's technically possible, but..."

Comment by iceman on Open thread, Sep. 21 - Sep. 27, 2015 · 2015-09-25T06:50:34.693Z · score: 1 (1 votes) · LW · GW

It's not just you. I can't use the arrow keys either. Chrome 45 on Windows 8.

Comment by iceman on Rationality Quotes Thread August 2015 · 2015-08-29T01:47:44.020Z · score: 2 (2 votes) · LW · GW

I'm not sure about the mathematical details, but as described in their FAQ, they presume it's inevitable that people will form into local Blue and Green tribes, so they attempt to cluster the population into Blues and Greens, not just to be a better recommendation engine for both sides, but also to calculate a nonpartisan score from upvotes by the other side and downvotes by your own side.

In general, I thought this was fascinating because it gets to the heart of what voting is for on social websites. If we're trying to build a recommendation engine, an extremely diverse set of viewpoints is probably something we want in the input stream of links and discussion. However, we then don't want everyone's votes collapsed into a single score variable, because people are different and have different worldviews. Mixing everyone's scores together makes a homogenized mess that doesn't really speak to anyone.

The idea of tracking partisanship, not just to weight votes for better recommendations to users but to get a sense of nonpartisan quality, really impressed me as an idea that's totally obvious...in retrospect. I do wonder how well it scales, as Omnilibrium is fairly small right now.
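
As a toy illustration of the scoring rule as I understand it -- the two-cluster setup and the weighting are my guesses at the mechanism described in their FAQ, not Omnilibrium's actual algorithm:

```python
from collections import defaultdict

# (voter, item, vote); voters and item authors are pre-clustered into tribes.
votes = [("alice", "post1", +1), ("bob", "post1", +1),
         ("carol", "post1", -1), ("dave", "post1", +1)]
cluster = {"alice": "blue", "bob": "blue", "carol": "green", "dave": "green"}
author_cluster = {"post1": "green"}

nonpartisan = defaultdict(int)
for voter, item, v in votes:
    same_side = cluster[voter] == author_cluster[item]
    # Cross-cutting votes (upvotes from the other side, downvotes from your
    # own side) carry weight; predictable tribal votes are discounted to zero.
    if (v > 0 and not same_side) or (v < 0 and same_side):
        nonpartisan[item] += v

print(dict(nonpartisan))  # {'post1': 1}
```

A naive sum would score these four votes +2; the nonpartisan score is +1, because dave's own-tribe upvote is predictable and carries no information about quality.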

Comment by iceman on Rationality Quotes Thread August 2015 · 2015-08-21T22:48:52.659Z · score: -1 (5 votes) · LW · GW

I am going to publicly call for banning user VoiceOfRa [...] VoiceOfRa almost certainly downvote bombed the user who made the grandparent comment, including downvoting some very uncontroversial and reasonable comments.

Consequentially...why bother even if this is true?

Assuming you are correct, Eugene's response to being banned (twice!) was to just make another account. It's highly likely that if you ban this new account, he will make a fourth account. That account will quickly gain karma because, as you note, Eugene's comments are actually valuable. You are proposing that we do the same thing a third time and expect a different result.

Possible actual solutions that are way too much work:

  • move LW onto an Omnilibrium-like system of voting, where Eugene's votes will put him strongly into the optimate cluster and won't hurt as much.

  • give up on democratic moderation on the web.

Comment by iceman on MIRI's 2015 Summer Fundraiser! · 2015-07-21T19:20:37.971Z · score: 50 (50 votes) · LW · GW

Donated $25,000. My employer will also match $6,000 of that, for a grand total of $31,000.

Comment by iceman on Open Thread, May 25 - May 31, 2015 · 2015-05-27T07:12:24.930Z · score: 15 (19 votes) · LW · GW

(Disclaimer: My lifetime contribution to MIRI is in the low six digits.)

It appears to me that there are two LessWrongs.

The first is the LessWrong of decision theory. Most of the content in the Sequences contributed to making me sane, but the most valuable part was the focus on decision theory and considering how different processes performed in the prisoner's dilemma. Understanding decision theory is a precondition to solving the friendly AI problem.

The first LessWrong results in serious insights that should be integrated into one's life. In Program Equilibrium in the Prisoner's Dilemma via Löb's Theorem, the authors take a moment to discuss the issue of "Defecting Against CooperateBot"--if you know that you are playing against CooperateBot, you should defect. I remember when I first read the paper and the concept just clicked. Of course you should defect against CooperateBot. But this was an insight that I had to be told, and LessWrong is valuable to me in that it has helped me internalize game theory. The first year that I took the LessWrong survey, I answered that of course you should cooperate in the one-shot, non-shared-source-code prisoner's dilemma. On the latest survey, I instead put the correct answer.
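
To make the dominance argument concrete, here is the arithmetic with the standard payoff ordering T > R > P > S (temptation, reward, punishment, sucker); the specific numbers are illustrative:

```python
T, R, P, S = 5, 3, 1, 0
payoff = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}

opponent_move = "C"  # CooperateBot's move is unconditional
assert payoff[("D", opponent_move)] > payoff[("C", opponent_move)]  # T=5 > R=3
```

Nothing you do changes CooperateBot's move, so defection strictly dominates. Conditional cooperators like FairBot are a different matter, because their move depends on yours.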

The second LessWrong is the LessWrong of utilitarianism, especially of a Singerian sort, which I find to clash with the first LessWrong. My understanding is that Peter Singer argues that because you would ruin your shoes to jump into a creek to save a drowning child, you should incur an equivalent cost to save the life of a child in the third world.

Now never mind that saving the child might have positive expected value to the jumper. We can restate Singer's moral obligation as a prisoner's dilemma, and then we can apply something like TDT to it and make the FairBot version of Singer: I want to incur a fiscal cost to save a child on the other side of the world iff parents on the other side of the world would incur a fiscal cost to save my child. I believe Singer would deny this statement (and would be more aghast at the PrudentBot version), and would insist that there's a moral obligation regardless of any theoretical reciprocation.

I notice that I am being asked to be CooperateBot. I don't think CFAR has "Don't be CooperateBot," as a rationality technique, but they should.

Practically, I find that 'altruism' and 'CooperateBot' are synonyms. The question of reciprocity hangs in the background. It must, because Azathoth generates both those who are CooperateBot and those who exploit CooperateBots.

I will also point out that this whole discussion is happening on the website that exists to popularize humanity's greatest collective action problem. Every one of us has a selfish interest in solving the friendly AI problem. And while I am not much of a utilitarian, I would assume that the correct utilitarian charity answer in terms of number of people saved/generated would be MIRI, and that the most straightforward explanation for why it isn't the consensus answer is Hansonian cynicism.

Comment by iceman on Open Thread, May 4 - May 10, 2015 · 2015-05-07T22:50:49.311Z · score: 0 (0 votes) · LW · GW

On the off chance that you haven't heard about it, I would recommend Christopher Lasch's The Culture of Narcissism as background reading if you enjoy TLP. Lasch treats narcissism as a general cultural phenomenon instead of a strict clinical diagnosis. It is over 30 years old now, so it doesn't directly address modern topics like social media.

Comment by iceman on April 2015 Media Thread · 2015-04-04T03:34:00.874Z · score: 2 (2 votes) · LW · GW

I would be careful about taking the events in The King of Kong at face value. Jason Scott, the BBS and Infocom documentary guy, hates it and points out several parts of the narrative which appear to be made up. (Second, much longer writeup, which is as much about Scott's approach to being a documentarian as it is about problems with The King of Kong, but is also worth a read.)

Comment by iceman on Open thread, Mar. 23 - Mar. 31, 2015 · 2015-03-24T21:05:28.228Z · score: 1 (1 votes) · LW · GW

To speak to the second problem, naming things: I'm a big fan of content-addressable everything. Addressing all content by hash_function(content) has major advantages. This may require another naming layer to give human-recognizable names to hashes, but I think it still goes a long way towards making things better.
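
As a toy illustration (my own sketch, not one of Armstrong's algorithms): identical content hashes to the same name, so deduplication falls out for free.

```python
import hashlib

class ContentStore:
    def __init__(self):
        self.blobs = {}

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()  # the name *is* the content's hash
        self.blobs[key] = data                  # idempotent: duplicates collapse
        return key

    def get(self, key: str) -> bytes:
        return self.blobs[key]

store = ContentStore()
a = store.put(b"the same bytes")
b = store.put(b"the same bytes")  # same content, same address, stored once
assert a == b and len(store.blobs) == 1
```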

You might find Joe Armstrong's The Mess We're In interesting; it provides some simple strawman algorithms for deduplication, though they probably aren't sophisticated enough to run in practice.

(My roommate walked in while I was watching that lecture with headphones on, and just saw the final conclusion slide:

  • We've made a mess
  • We need to reverse entropy
  • Quantum mechanics sets limits to the ultimate speed of computation
  • We need Math
  • Abolish names and places
  • Build the condenser
  • Make low-power computers -- no net environmental damage

And just did that smile and nod thing. The above makes it sound like Armstrong is a crank, but it all makes sense in context, and I've deliberately copied just this last slide without any other context to try to get you to watch it. If you like theoretical computer science, I highly recommend watching the lecture.)

Comment by iceman on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapters 105-107 · 2015-02-19T07:59:14.139Z · score: 5 (5 votes) · LW · GW

One of the ongoing patterns in HPMoR is how certain spells require people to believe certain things or to be in certain emotional states. Harry can perform partial transfiguration because he actually believes in timeless physics. Harry can cast Patronus 2.0 because of his beliefs about life and death. Avada Kedavra requires hate (or indifference).

I see no references to the conversation between McGonagall and Quirrell. "Professor Quirrell made a sharp gesture, as though to indicate a concept for which he had no words." McGonagall reacts. There is a concept that these two characters know of, but which has not actually been explained to the reader. I expect this to play a part in the grand finale.

Almost everything related to Quirrell is related to death. There are simply too many instances to list exhaustively; these are just the things that immediately come to mind. Voldemort was all about death during his reign; "mort" is in his name. He talks about stars dying on multiple occasions. He brings the dementor to Hogwarts. He brings Harry to Azkaban. Hermione. Unicorns. The actual outcome of the conversation I linked above is that McGonagall whispers to Harry, "I had a sister once," which then leads to Harry going on his field trip with Lupin, which is about the Peverell brothers, which is about death. He has made his own wasting away prolonged and visible. Over the last few chapters, he has done certain things that would be counterproductive if his goal were merely to obtain the philosopher's stone.

My final prediction: everything in the last few chapters showing him as a sloppy cartoon villain is a ruse, done deliberately to manipulate Harry. Quirrell plays the game One Level Higher Than You. Quirrell's plot is to manipulate Harry into a certain mental state, which is directly related to the gesture he made to McGonagall, which is one of the major unresolved questions.

As to what end, there I am slightly hazy. My roommate believes that Methods is a retelling of The Sword of Good, and that Quirrell is at minimum an antivillain seeking positive utilitarian gains, possibly by vanquishing death. I think that's likely but am not confident enough to bet on it.

While we are at it, where are the Deathly Hallows? Quirrell took the Cloak of Invisibility. I presume that he has the Resurrection Stone if he's plotting something related to death. As far as I know, Dumbledore has the Elder Wand. Hey, didn't Quirrell say that he had a plan to defeat the Headmaster if he showed up?

Comment by iceman on Harry Potter and the Methods of Rationality discussion thread, February 2015, chapter 104 · 2015-02-16T02:55:04.250Z · score: 12 (12 votes) · LW · GW

The obvious question is, why did Quirrell cause Harry to believe that Quirrell is Voldemort? As a background assumption, Quirrell plays the game one level higher than Harry. Quirrell is able to model people very accurately, and has recently been uncharacteristically sloppy. Quirrell is also obviously prepared for Harry to figure this out (gun, smile, etc.), and I think it likely that he could have crafted a plan that successfully avoided suspicion, so I'm expecting that Harry believing that Quirrell is Voldemort is part of the plan.

What's his game?

The only thing that comes to me instantly is that Harry is now more likely to just accept whatever Quirrell villainously monologues next as true. I'll note that people are already accepting this as some sort of confirmation that Harry is Tom Riddle. CONSTANT VIGILANCE, PEOPLE! All we have seen is Quirrell say "Hello, Tom Riddle," to Harry!

Comment by iceman on Open thread, Dec. 29, 2014 - Jan 04, 2015 · 2014-12-31T07:18:02.534Z · score: 2 (2 votes) · LW · GW

Vitalik Buterin mentions LW in his latest, On Silos:

I consider economics and game theory to be a key part of cryptoeconomic protocol analysis, and consider the primary academic deficit of the cryptocurrency community to be not ignorance of advanced computer science, but rather economics and philosophy. We should reach out to http://lesswrong.com/ more.

Comment by iceman on 2014 Less Wrong Census/Survey · 2014-10-24T21:28:09.971Z · score: 38 (38 votes) · LW · GW

Surveyed!

Thank you for continuing to run it.

Comment by iceman on What are your contrarian views? · 2014-09-15T22:26:53.681Z · score: 3 (3 votes) · LW · GW

Correct meaning what? I'm interpreting "the traditional left" as a value system instead of a set of statements about the world.

Comment by iceman on Open thread, 18-24 August 2014 · 2014-08-22T01:40:30.396Z · score: 7 (7 votes) · LW · GW

Vitalik Buterin, one of the guys behind Ethereum, talks about the positives and negatives of futarchy, and how decentralized autonomous organizations (corporations that live on the blockchain) could use it as a system of governance.

Comment by iceman on MIRI's 2014 Summer Matching Challenge · 2014-08-08T01:30:26.734Z · score: 25 (27 votes) · LW · GW

Sent a check for $15,000.

I'm glad to see that publishing the Sequences is being prioritized. LessWrong is, sadly, dying and I'd love to have a published, edited version of Eliezer's original work that I can send to people.

Comment by iceman on An Experiment In Social Status: Software Engineer vs. Data Science Manager · 2014-07-16T20:14:43.166Z · score: 7 (7 votes) · LW · GW

This doesn't work, because it gives the master exactly zero motivation to do anything; he is already getting from you whatever he wants.

Or to put it in local game theory terms: Your boss is significantly more likely to be PrudentBot than FairBot, and PrudentBot defects against CooperateBot.

Comment by iceman on Bragging Thread, June 2014 · 2014-06-10T20:58:33.286Z · score: 9 (9 votes) · LW · GW

At work, I launched the project that I've been working on for the last two years, which deleted over 100,000 lines of unmaintained legacy code and I got promoted for doing so.

Comment by iceman on Ergonomics Revisited · 2014-04-22T23:20:42.309Z · score: 2 (2 votes) · LW · GW

I am moving in the other direction: I currently have two screens and am going back to a single big one. There doesn't seem to be strong evidence that two monitors make us more productive. (That said, measuring pixels is probably not capturing what's really important here.)

I will once again plug the Kinesis Advantage keyboards; I've used them for over seven years now. I previously had really bad RSI and it's now rare that I get any pain in my wrists at all.

Comment by iceman on Open thread for December 24-31, 2013 · 2013-12-24T16:49:45.774Z · score: 5 (5 votes) · LW · GW

If I understand correctly, one of the posts in the "creepiness is male weakness" / conceptual-superweapons sequence was linked to recently by Marginal Revolution. The comments weren't kind, and this was the immediate cause of Yvain locking down his blog, even if he had planned to do so for a while.

I wouldn't want any gender discussion linked to under my Real Name either. As much as I'm disappointed that I can't read his posts, I can't say that I would have reacted any differently.

Comment by iceman on MIRI's Winter 2013 Matching Challenge · 2013-12-17T22:19:39.438Z · score: 30 (30 votes) · LW · GW

I put a check for $10,000 in the mail earlier this week. (That said, I don't believe my donation is available for the 3x Thiel matching, as I'm a preexisting large donor. Likewise, my employer will only match $1,000 of it, since they have an annual cap.)

In general, I'm much happier with MIRI/SIAI as an organization now than I've ever been in the past. I'm highly supportive of more public-facing research and more engagement with the academic community. The workshops appear to be producing fantastic results, like the probabilistic logic paper, and I'm hoping to see more things like that.

Comment by iceman on Harry Potter and the Methods of Rationality discussion thread, part 28, chapter 99-101 · 2013-12-13T04:36:01.780Z · score: 28 (28 votes) · LW · GW

Premise: Quirrell plays the game one level higher than Harry Potter.

Observation: This entire incident is uncharacteristically sloppy. Why were the unicorn corpses found? Why was Quirrell discovered?

Observation: Harry Potter is now really pissed off that herds of unicorns to slay aren't standard procedure for stable-izing people with life-threatening injuries. He has just been given another "if only" to fixate on. It has been brought to his attention in ways that wouldn't trip his "why am I being told this" sense.

Father had told Draco that to fathom a strange plot, one technique was to look at what ended up happening, assume it was the intended result, and ask who benefited.

Hypothesis: Reminding Harry that there were ways the wizarding world could have saved Hermione was the primary effect. Possible secondary effects may include impressing on Harry just how ridiculously powerful he is. Perhaps implanting the desire to save Quirrell into Harry's mind? Quirrell may not actually need the blood right now, though I suspect it doesn't hurt.

Comment by iceman on 2013 Less Wrong Census/Survey · 2013-11-22T05:09:39.765Z · score: 46 (46 votes) · LW · GW

Survey Taken.

Comment by iceman on 2013 Census/Survey: call for changes and additions · 2013-11-05T04:46:47.971Z · score: 23 (31 votes) · LW · GW

I'm going to channel gwern from last year: give us a question that allows us to express disapproval about the handling of the basilisk.

When I was interviewed about Friendship is Optimal, there was a minor side discussion about the basilisk in the comments on the interview. The comments were nonspecific enough that I think it's OK linking there; my point is that this is not going away, since it came up unprompted on something that merely mentioned LessWrong. That interview is from 3 months ago, nearly a year after Yvain rejected having a basilisk question on the 2012 census.

This is still an issue. It will continue to be an issue. The way forward through it is to have something linkable that says "XX% of LessWrongers (dis)agreed with the handling of the situation," so that the next time (Xixidu / RW / some internet rando) mentions the situation, we can point out what the majority of LessWrongers actually think. (The phrasing there obviously suggests what I think, but if the results come back the other way, that too is useful information!)

Comment by iceman on November 2013 Media Thread · 2013-11-03T07:35:09.541Z · score: 3 (3 votes) · LW · GW

Thank you for introducing me to Explosions in the Sky. I've listened to The Earth Is Not a Cold Dead Place all the way through multiple times today. I'm fairly partial to the final track, Your Hand In Mine.

Comment by iceman on MIRI's 2013 Summer Matching Challenge · 2013-07-23T23:49:27.834Z · score: 1 (3 votes) · LW · GW

Yes, though I find it improbable that they'd Really Want ponies.

(Devil's advocate: there are people who participate in the fandom daily, and have big chunks of their identity tied up in being a brony. If there were actually a population where people would Really Want ponies, this would be the one.)

Comment by iceman on MIRI's 2013 Summer Matching Challenge · 2013-07-23T06:44:09.463Z · score: 38 (40 votes) · LW · GW

Wrote a cheque for $5,000.

(I put the redacted image of my donation online because someone else decided to start an ad-hoc fundraising effort for MIRI on FIMFiction.)

Comment by iceman on "Stupid" questions thread · 2013-07-13T23:17:38.696Z · score: 0 (0 votes) · LW · GW

LessWrong has been linked to multiple times, at least from the /mlp/ board. (Friendship is Optimal may be a proximate cause for most of these links...)

Comment by iceman on Open Thread, July 1-15, 2013 · 2013-07-13T23:10:47.609Z · score: 5 (5 votes) · LW · GW

There is now fanfic about Eliezer in the Optimalverse. I'm not entirely sure what to make of it.

Comment by iceman on Responses to Catastrophic AGI Risk: A Survey · 2013-07-08T23:07:58.917Z · score: 1 (1 votes) · LW · GW

Did you mean to post this in discussion? I would assume a MIRI blog post would be cross posted to Main.