Posts

[LINK] Opacity as a defense against bias? 2011-12-28T18:15:11.598Z
[LINK] Fermi Paradox paper touching on FAI 2011-12-03T19:22:25.731Z
Probability puzzle 2011-11-28T21:33:36.850Z
5 Second Level: Substituting the Question 2011-10-28T00:20:06.270Z
HPMoR: What do you think you know? 2011-10-23T04:17:31.650Z
Biases to watch out for while job hunting? 2011-05-21T19:28:03.877Z
[LINK] Human Brain Project aims to emulate brain by 2024 2011-05-18T22:38:54.339Z
Elitist Jerks: A Well-Kept Garden 2011-04-25T18:56:27.094Z

Comments

Comment by malthrin on Harry Potter and the Methods of Rationality discussion thread, part 25, chapter 96 · 2013-07-25T20:49:23.186Z · LW · GW

'Shall be' refers to a change of future state, so it can't be about the way things are now.

Comment by malthrin on "Stupid" questions thread · 2013-07-17T19:38:08.446Z · LW · GW

Space colonization is part of the transhumanist package of ideas originating with Nikolai Fyodorov.

Comment by malthrin on Learning programming: so I've learned the basics of Python, what next? · 2013-06-21T18:08:15.188Z · LW · GW

Build something you need. What you don't know, you'll learn in the process.

Comment by malthrin on The "Friendship is Witchcraft" expectation test · 2013-01-15T19:50:38.883Z · LW · GW

You may have some inferential distance issues here.

Comment by malthrin on Parenting and Happiness · 2012-10-18T19:15:27.217Z · LW · GW

IQ reverts to the mean across generations.

Comment by malthrin on Harry Potter and the Methods of Rationality discussion thread, part 14, chapter 82 · 2012-04-09T21:05:31.355Z · LW · GW

This memory?

Into the vacuum rose the memory, the worst memory, something forgotten so long ago that the neural patterns shouldn't have still existed.

Comment by malthrin on Harry Potter and the Methods of Rationality discussion thread, part 14, chapter 82 · 2012-04-06T19:05:10.642Z · LW · GW

Thanks, I didn't realize that was a real thing.

Harry's sleep schedule wasn't on the red herring list. Further investigation warranted.

Comment by malthrin on Harry Potter and the Methods of Rationality discussion thread, part 13, chapter 81 · 2012-03-28T03:23:41.132Z · LW · GW

Regarding the ending comments about Godric's Hollow: there was some earlier discussion about the wizarding community's consensus here.

Comment by malthrin on Harry Potter and the Methods of Rationality discussion thread, part 12 · 2012-03-27T20:32:54.368Z · LW · GW

That sure is a lot of burdensome details.

Comment by malthrin on Harry Potter and the Methods of Rationality discussion thread, part 12 · 2012-03-26T16:01:31.725Z · LW · GW

That description of the line of Merlin at the beginning sure sounded 'sacred'.

Comment by malthrin on Harry Potter and the Methods of Rationality discussion thread, part 11 · 2012-03-24T14:10:52.266Z · LW · GW

Agreed. I'm not sure why everyone's so fixated on a tradeoff by Harry.

Comment by malthrin on Harry Potter and the Methods of Rationality discussion thread, part 11 · 2012-03-23T16:08:11.279Z · LW · GW

What happened here?

The Veritaserum was brought in then, and Hermione looked for a brief moment like she was about to sob, she was looking at Harry - no, at Professor McGonagall - and Professor McGonagall was mouthing words that Harry couldn't make out from his angle. Then Hermione swallowed three drops of Veritaserum and her face grew slack.

Comment by malthrin on Harry Potter and the Methods of Rationality discussion thread, part 11 · 2012-03-21T14:47:33.257Z · LW · GW

This fits very well. Nice job!

Comment by malthrin on Harry Potter and the Methods of Rationality discussion thread, part 11 · 2012-03-19T18:28:30.994Z · LW · GW

So, what happened to Narcissa?

Comment by malthrin on Is Sunk Cost Fallacy a Fallacy? · 2012-02-07T16:06:28.341Z · LW · GW

Good point. My interpretation of what you're saying is that the error is actually failure to re-plan at all, not bad math while re-planning.

Comment by malthrin on The Singularity Institute's Arrogance Problem · 2012-01-19T01:18:42.085Z · LW · GW

To educate myself, I visited the SI site and read your December progress report. I should note that I've never visited the SI site before, despite having donated twice in the past two years. Here are my two impressions:

  • Many of these bullet points are about work in progress and (paywalled?) journal articles. If I can't link it to my friends and say, "Check out this cool thing," I don't care. Tell me what you've finished that I can share with people who might be interested.
  • Lots on transparency and progress reporting. In general, your communication strategy seems focused on people who are already aware of SIAI and follow it closely. These people are loud, but they're a small minority of your potential donors.

Comment by malthrin on The Singularity Institute's Arrogance Problem · 2012-01-19T00:48:45.128Z · LW · GW

There's a phrase that the tech world uses to describe the kind of people you want to hire: "smart, and gets things done." I'm willing to grant "smart", but what about the other one?

The sequences and HPMoR are fantastic introductory/outreach writing, but they're all a few years old at this point. The rhetoric about SI being more awesome than ever doesn't square with the trend I observe* in your actual productivity. To be blunt, why are you happy that you're doing less with more?

*I'm sure I don't know everything SI has actually done in the last year, but that's a problem too.

Comment by malthrin on Leveling Up in Rationality: A Personal Journey · 2012-01-17T16:36:53.683Z · LW · GW

You're harder to relate to now that you've made progress on problems the rest of us are still struggling with. Don't take it personally.

Comment by malthrin on AI Challenge: Ants - Post Mortem · 2012-01-15T00:19:23.615Z · LW · GW

The winning program ignored a lot of information, and there weren't enough entries to convince me that the information couldn't be used efficiently.

Comment by malthrin on AI Challenge: Ants - Post Mortem · 2012-01-14T21:13:08.273Z · LW · GW

Agreed. We can certainly do better than that. Unless I have a major life-event before the next AI challenge, I'll enter and get the LW community involved in the effort.

Comment by malthrin on Procedural knowledge gap: public key encryption · 2012-01-12T22:15:49.921Z · LW · GW

Right. Encryption is a lever*; it permits you to use the secrecy of a small piece of data (the key) to secure a larger piece of data (the message). The security isn't in the encryption math. It's in the key storage and exchange mechanism.

*I stole this analogy from something I read recently, probably on HN.
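To make the lever concrete, here's a minimal, stdlib-only sketch (a toy XOR stream cipher keyed by SHA-256 in counter mode; purely illustrative, not a real cryptosystem): 16 secret bytes protect a message of any length, so all the security pressure moves onto protecting and exchanging those 16 bytes.

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudorandom stream of the given length from a short key
    by hashing the key with an incrementing counter (SHA-256 counter mode)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(out[:length])

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Encrypt or decrypt by XOR with the keystream (XOR is its own inverse)."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = secrets.token_bytes(16)              # a small secret: 16 bytes...
message = b"a much longer message " * 100  # ...secures kilobytes of plaintext
ciphertext = xor_cipher(key, message)
assert xor_cipher(key, ciphertext) == message
```

The point of the sketch is the asymmetry: the ciphertext can be public, and everything rides on the 16-byte key.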

Comment by malthrin on AI Challenge: Ants - Post Mortem · 2012-01-12T20:46:22.994Z · LW · GW

I look forward to reading your thoughts. Ants looked like a fun problem.

Comment by malthrin on Procedural knowledge gap: public key encryption · 2012-01-12T18:51:20.375Z · LW · GW

The main reason is that it requires your recipient to take an extra step. If you send an encrypted email to someone else, and they haven't configured their mail client for encryption, then they won't be able to read it. For most people, that negative outweighs the privacy gain.

Comment by malthrin on On accepting an argument if you have limited computational power. · 2012-01-11T18:48:49.997Z · LW · GW

There's a similar guideline in the software world:

There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.

Comment by malthrin on More intuitive explanations! · 2012-01-07T00:05:01.111Z · LW · GW

Is this the one you meant?

P(A & B) = P(B | A) P(A) = P(A | B) P(B)

Set the last two expressions equal to each other and divide by P(A):

P(B | A) = P(A | B) * P(B) / P(A)
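A quick numeric sanity check of that identity, with made-up probabilities for two binary events:

```python
# Joint and marginal probabilities for two binary events A and B
# (the numbers are arbitrary, chosen only for illustration).
p_a_and_b = 0.12
p_a = 0.30
p_b = 0.40

p_b_given_a = p_a_and_b / p_a  # conditional from the joint: 0.4
p_a_given_b = p_a_and_b / p_b  # conditional the other way:  0.3

# Bayes' rule recovers P(B|A) from P(A|B), P(B), and P(A):
assert abs(p_b_given_a - p_a_given_b * p_b / p_a) < 1e-12
```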

Comment by malthrin on . · 2012-01-05T18:12:54.804Z · LW · GW

That was interesting, thanks. Here's another take - specific to the field of language modeling, but addresses the same question of statistical versus formal models: http://norvig.com/chomsky.html

Comment by malthrin on [SEQ RERUN] Zut Allais! · 2011-12-27T17:29:06.119Z · LW · GW

It's a hack. Computation isn't free.

Comment by malthrin on [LINK] Question Templates · 2011-12-23T19:57:03.799Z · LW · GW

This reminds me of Explain/Worship/Ignore. Am I getting the right idea?

Comment by malthrin on Applied Rationality Practice · 2011-12-23T19:36:39.770Z · LW · GW

As Kahneman points out in his new book, failures of reasoning are much easier to recognize in others than in ourselves. His book is framed around introducing the language of heuristics and biases to office water-cooler gossip. Practicing on the hardest level (self-analysis) doesn't seem like the best way to grow stronger.

Comment by malthrin on Is every life really worth preserving? · 2011-12-23T19:19:00.261Z · LW · GW

Voted you down. This is deontologist thought in transhumanist wrapping paper.

Ignoring the debate concerning the merits of eternal paradise itself and the question of Heaven's existence, I would like to question the assumption that every soul is worth preserving for posterity.

Consider those who have demonstrated through their actions that they are best kept excluded from society at large. John Wayne Gacy and Jeffrey Dahmer would be prime examples. Many people write these villains off as evil and give their condition not a second thought. But it is quite possible that they actually suffer from some sort of Satanic corruption and are thus not fully responsible for their crimes. In fact, there is evidence that the souls of serial killers are measurably different from those of normal people. Far enough in the future, it might be possible to "cure" them. However, they will still possess toxic memories and thoughts that would greatly distress them now that they are normal. To truly save them, they would likely need to have many or all of their memories erased. At that point, with an amnesic brain and a cloned body, are they even really the same person, and if not, what was the point of saving them?

Forming a robust theory of mind and realizing that not everyone thinks or sees the world the same way you do is actually quite difficult. Consider the immense complexity of the world we live in and the staggering scope of thoughts that can possibly be thought as a result. If eternal salvation means first and foremost soul preservation, maybe there are some souls that just shouldn't be saved. Maybe Heaven would be a better, happier place without certain thoughts, feelings and memories--and without the minds that harbor them.

Comment by malthrin on Is anyone else worried about SOPA? Trying to do anything about it? · 2011-12-21T22:21:51.064Z · LW · GW

Make sure you know which "SOPA" you're referring to. This piece of legislation has undergone significant change from the version that sparked popular outrage.

Added after reading some other comments: if you've made cynical predictions about SOPA's progress through Congress or its effects in the real world, don't forget to update your beliefs on the eventual outcome. Write this prediction down somewhere.

Comment by malthrin on Talking to Children: A Pre-Holiday Guide · 2011-12-21T22:17:13.619Z · LW · GW

Regarding "convincing" children of things: this AI koan is relevant.

In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.

“What are you doing?”, asked Minsky.

“I am training a randomly wired neural net to play Tic-Tac-Toe” Sussman replied.

“Why is the net wired randomly?”, asked Minsky.

“I do not want it to have any preconceptions of how to play”, Sussman said.

Minsky then shut his eyes.

“Why do you close your eyes?”, Sussman asked his teacher.

“So that the room will be empty.”

At that moment, Sussman was enlightened.

Comment by malthrin on What is your rationality blind spot? · 2011-12-21T22:12:36.246Z · LW · GW

Alcohol.

Comment by malthrin on Too busy to think about life · 2011-12-20T06:06:22.344Z · LW · GW

So, I missed my goal of scoring 100% in the Stanford AI class. Time to do better - to do what others can't, or just haven't thought of yet.

Comment by malthrin on Uncertainty · 2011-12-12T17:00:30.353Z · LW · GW

Sure. S results from HH or from TT, so we'll calculate those two cases independently and combine them at the end: P(p=x|S) = P(p=x|HH) P(HH|S) + P(p=x|TT) P(TT|S), where by symmetry P(HH|S) = P(TT|S) = 1/2.

We start out with a uniform prior: P(p=x) = 1. After observing one H, by Bayes' rule, P(p=x|H) = P(H|p=x) P(p=x) / P(H). P(H|p=x) is just x. Our prior is 1. P(H) is our prior, multiplied by x, integrated from 0 to 1. That's 1/2. So P(p=x|H) = x · 1 / (1/2) = 2x.

Apply the same process again for the second H. Bayes' rule: P(p=x|HH) = P(H|p=x,H) P(p=x|H) / P(H|H). The first term is still just x. The second term is our updated belief, 2x. The denominator is our updated belief, multiplied by x, integrated from 0 to 1. That's 2/3 this time. So P(p=x|HH) = x · 2x / (2/3) = 3x^2.

Calculating tails is similar, except we update with 1-x instead of x. So our belief goes from 1, to 2-2x, to 3(1-x)^2 = 3x^2 - 6x + 3. Then substitute both of these into the original equation: (3/2)(x^2) + (3/2)(x^2 - 2x + 1). From there it's just a bit of algebra to get it into the form I linked to.
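Those closed forms are easy to sanity-check numerically. Here's a sketch using a midpoint Riemann sum (the function names are mine):

```python
def integrate(f, n=100_000):
    """Midpoint Riemann-sum integral of f over [0, 1]."""
    return sum(f((i + 0.5) / n) for i in range(n)) / n

# Posterior after HH: proportional to x^2, which normalizes to 3x^2.
post_hh = lambda x: 3 * x**2
# Posterior after TT: proportional to (1-x)^2, which normalizes to 3(1-x)^2.
post_tt = lambda x: 3 * (1 - x)**2
# Mixture for "same face twice", weighting each case by 1/2.
post_s = lambda x: 0.5 * post_hh(x) + 0.5 * post_tt(x)

assert abs(integrate(post_hh) - 1) < 1e-6  # proper densities integrate to 1
assert abs(integrate(post_s) - 1) < 1e-6
# Matches the expanded form (3/2)x^2 + (3/2)(x^2 - 2x + 1):
x = 0.3
assert abs(post_s(x) - (1.5 * x**2 + 1.5 * (x**2 - 2*x + 1))) < 1e-9
```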

Comment by malthrin on Mitt Romney's $10,000 bet · 2011-12-12T03:46:43.219Z · LW · GW

Why is your name Miley Cyrus?

Comment by malthrin on [LINK] Fermi Paradox paper touching on FAI · 2011-12-03T20:05:01.178Z · LW · GW

Whoops, fixed.

Comment by malthrin on 5 Axioms of Decision Making · 2011-12-02T15:27:55.220Z · LW · GW

There's a Stanford online course next semester called Probabilistic Graphical Models that will cover different ways of representing this sort of problem. I'm enrolled.

Comment by malthrin on 5 Axioms of Decision Making · 2011-12-02T15:24:56.846Z · LW · GW

This recursive expected value calculation is what I implemented to solve my coinflip question. There's a link to the Python code in that post for anyone who is curious about implementation.
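For the curious, the shape of that calculation (this is a generic sketch, not a reconstruction of the code linked in that post) is a memoized recursion over game states, taking the max of "stop now" and the expected value of continuing. The game below is a hypothetical stand-in: flip a fair coin up to a fixed number of times, each heads adds 1 to your bank, the first tails wipes the bank out, and you may stop at any point and keep it.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def value(flips_left: int, bank: int) -> float:
    """Expected value under optimal play: either stop and keep the bank,
    or flip a fair coin (heads: bank + 1, tails: lose everything)."""
    if flips_left == 0:
        return float(bank)
    keep = float(bank)
    flip = 0.5 * value(flips_left - 1, bank + 1) + 0.5 * 0.0
    return max(keep, flip)
```

Under these toy rules the recursion says to stop as soon as the bank reaches 1, since double-or-nothing on a fair coin is never strictly better than keeping what you have.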

Comment by malthrin on Uncertainty · 2011-12-01T03:56:34.013Z · LW · GW

Speaking only for myself, I'm in that awkward middle stage - I understand probability well enough to solve toy problems, and to follow explanations of it in real problems, but not enough to be confident in my own probabilistic interpretation of new problem domains. I'm looking forward to this sequence as part of my education and definitely appreciate seeing the formality behind the applications.

Comment by malthrin on Uncertainty · 2011-11-30T20:47:39.453Z · LW · GW

Can you elaborate on the calculation for S? I think it should be this, but I'm not confident in my math.

Comment by malthrin on Probability puzzle · 2011-11-29T01:53:22.189Z · LW · GW

That N is correct, or at least it's what I calculated. Nice work.

Comment by malthrin on Probability puzzle · 2011-11-29T00:52:59.137Z · LW · GW

No, you have to state N before you start flipping coins.

Comment by malthrin on Probability puzzle · 2011-11-29T00:33:40.993Z · LW · GW

You're missing the third option - the choice to stop playing.

Comment by malthrin on Probability puzzle · 2011-11-28T23:41:44.061Z · LW · GW

Right. The coin has a fixed value for P(heads), set when your friend tampered with it. You just don't know what it is.

Comment by malthrin on 5-second level case study: Value of information · 2011-11-22T18:19:18.088Z · LW · GW

Good post. I like how you explained both the technique and the process that you used to develop it.

I see another potential benefit in estimating VoI. Asking myself, "Does any state of knowledge exist that would make me choose differently here?" bypasses some of my involuntary defenses against, "What state of knowledge would make me choose differently here?" The difference is that the former triggers an honest search, while the latter queries for a counterfactual scenario but gives up quickly because one isn't available.

Comment by malthrin on [link] I Was Wrong, and So Are You · 2011-11-09T19:36:08.159Z · LW · GW

The meta-pattern for reasoning errors is question substitution. A question with an available answer is substituted for the actual query and the answer is translated using intensity matching if the units don't match.

In this case, the subjects were primed to recall the cheers of their football team by the context of a political survey. The question they substituted was, "Does this statement resemble any of the professed beliefs of my political affiliation?"

Their answers were never considered empirically. Most questions never are.

Comment by malthrin on Open thread, November 2011 · 2011-11-03T18:00:58.686Z · LW · GW

Sorry, I don't know what morality is. I thought we were talking about "morality". Taboo your words.

Comment by malthrin on Open thread, November 2011 · 2011-11-03T16:38:44.794Z · LW · GW

That's a good start. Let's take as given that "morality" refers to an ordered list of values. How do you compare two such lists? Is the greater morality:

  • The longer list?
  • The list that prohibits more actions?
  • The list that prohibits fewer actions?
  • The closest to alphabetical ordering?
  • Something else?

Once you decide what actually makes one list better than another, then consider what observable evidence that difference would produce. With a prediction in hand, you can look at the world and gather evidence for or against the hypothesis that "morality" is increasing.

Comment by malthrin on Open thread, November 2011 · 2011-11-03T03:09:25.434Z · LW · GW

My most important thought was to ensure that all CPU time is used. That means continuing to expand the search space in the time after your move has been submitted but before the next turn's state is received. Branches that are inconsistent with your opponent's move can be pruned once you know it.

Architecturally, several different levels of planning are necessary:

  • Food harvesting, including anticipating new food spawns.
  • Pathfinding, with good route caching so you don't spend all your CPU here.
  • Combat instances, evaluating a small region of the map with alpha/beta pruning and some pre-tuned heuristics.
  • High-level strategy, allocating ants between food operations, harassment, and hive destruction.

If you're really hardcore, a scheduling algorithm to dynamically prioritize the above calculations. I was just going to let the runtime handle that and hope for the best, though.
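The combat-instance idea can be sketched with a plain alpha/beta routine (the tree encoding and function names here are mine, just to show the pruning; a real bot would evaluate local board states rather than toy leaves):

```python
import math

def alphabeta(node, depth, alpha, beta, maximizing, evaluate, children):
    """Depth-limited minimax with alpha/beta pruning. `evaluate` scores
    leaf positions; `children` yields successor positions."""
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        best = -math.inf
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, evaluate, children))
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # the minimizer will never allow this line: prune
        return best
    else:
        best = math.inf
        for child in kids:
            best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                       True, evaluate, children))
            beta = min(beta, best)
            if alpha >= beta:
                break  # the maximizer will never allow this line: prune
        return best
```

For example, on the toy tree `[[3, 5], [2, 9]]` with integer leaves as scores, the search returns 3: the maximizer's best guaranteed outcome once the minimizer replies.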