Posts

AGI/FAI Theorist for Hire 2011-07-15T15:50:43.996Z
Some rationality tweets 2010-12-30T07:14:01.341Z
Pseudolikelihood as a source of cognitive bias 2010-11-20T20:06:32.222Z
Shock Levels are Point Estimates 2010-02-14T04:31:25.506Z
Philadelphia LessWrong Meetup, December 16th 2009-12-16T03:13:02.633Z
The Domain of Your Utility Function 2009-06-23T04:58:55.550Z
Epistemic vs. Instrumental Rationality: Approximations 2009-04-28T03:12:55.675Z

Comments

Comment by Peter_de_Blanc on Who Wants To Start An Important Startup? · 2012-08-14T08:47:25.477Z · LW · GW

I'm really excited about software similar to Anki, but with task-specialized user interfaces (vs. self-graded tasks) and better task-selection models (incorporating something like item response theory), ideally to be used for both training and credentialing.
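In case it helps to make the task-selection idea concrete, here is a minimal sketch of item-response-theory-based task selection (a two-parameter logistic model with Fisher-information selection; every name and number below is hypothetical, not part of any existing system):

```python
import math

def p_correct(ability, difficulty, discrimination=1.0):
    """Two-parameter logistic (2PL) item response model:
    probability that a learner with this ability answers correctly."""
    return 1.0 / (1.0 + math.exp(-discrimination * (ability - difficulty)))

def pick_next_task(ability, tasks):
    """Pick the task with the largest Fisher information at the current
    ability estimate -- the most informative item for both training and
    credentialing."""
    def information(task):
        p = p_correct(ability, task["difficulty"], task["discrimination"])
        return task["discrimination"] ** 2 * p * (1.0 - p)
    return max(tasks, key=information)

# Hypothetical task pool:
tasks = [
    {"name": "easy drill", "difficulty": -1.0, "discrimination": 1.2},
    {"name": "medium drill", "difficulty": 0.0, "discrimination": 1.0},
    {"name": "hard drill", "difficulty": 2.0, "discrimination": 0.8},
]
print(pick_next_task(ability=0.3, tasks=tasks)["name"])  # "medium drill"
```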

Comment by Peter_de_Blanc on An Intuitive Explanation of Solomonoff Induction · 2012-07-09T02:46:53.210Z · LW · GW

No hypothesis is a prefix of another hypothesis.
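(To unpack that a little: the hypotheses are programs in a prefix-free encoding, so no program is an initial segment of another; roughly, this is what lets the program lengths define a probability via the Kraft inequality. A small illustrative check, not from the original article:)

```python
def is_prefix_free(codes):
    """Check that no code word is an initial segment of another
    (duplicates also count as violations)."""
    ordered = sorted(codes)
    return all(not b.startswith(a) for a, b in zip(ordered, ordered[1:]))

print(is_prefix_free(["0", "10", "110", "111"]))  # True: a valid prefix-free set
print(is_prefix_free(["0", "01", "11"]))          # False: "0" is a prefix of "01"
```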

Comment by Peter_de_Blanc on [deleted post] 2012-06-20T06:57:28.882Z

Shannon is a network hub. I spent some time at her previous house and made a lot of connections, including my current partners.

Comment by Peter_de_Blanc on Ask an experimental physicist · 2012-06-17T08:18:59.424Z · LW · GW

What happens when an antineutron interacts with a proton?

Comment by Peter_de_Blanc on Biased Pandemic · 2012-03-14T09:19:21.271Z · LW · GW

I now realise you might be asking "how does this demonstrate hyperbolic, as opposed to exponential, discounting", which might be a valid point, but hyperbolic discounting does lead to discounting the future too heavily, so the player's choices do sort of make sense.

That is what I was wondering. Actually, exponential discounting values the (sufficiently distant) future less than hyperbolic discounting. Whether this is too heavy depends on your parameter (unless you think that any discounting is bad).
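A quick numeric sketch of that comparison (the discount parameters below are arbitrary, chosen only for illustration):

```python
import math

def exponential_discount(t, rate=0.05):
    return math.exp(-rate * t)

def hyperbolic_discount(t, k=0.05):
    return 1.0 / (1.0 + k * t)

# At short delays the two are nearly identical; at long delays the
# exponential discounter values the future far less than the hyperbolic one.
for t in (1, 10, 100, 1000):
    print(t, round(exponential_discount(t), 6), round(hyperbolic_discount(t), 6))
```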

Comment by Peter_de_Blanc on Biased Pandemic · 2012-03-13T19:40:36.685Z · LW · GW

Another player with Hyperbolic Discounting went further: he treated cities, any city near him, while carrying 5 red city cards in his hand and pointing out, in response to entreaties to cure red, that red wasn't much of an issue right now.

How does this demonstrate hyperbolic discounting?

Comment by Peter_de_Blanc on Excluding the Supernatural · 2011-12-19T09:29:40.938Z · LW · GW

What's special about a mosquito is that it drinks blood.

Phil originally said this:

My point was that vampires were by definition not real - or at least, not understandable - because any time we found something real and understandable that met the definition of a vampire, we would change the definition to exclude it.

Note Phil's use of the word "because" here. Phil is claiming that if vampires weren't unreal-by-definition, then the audience would not have changed their definition whenever provided with a real example of a vampire as defined. It follows that the original definition would have been acceptable had it been augmented with the "not-real" requirement, and so this is the claim I was responding to with the unreal mosquito example.

Comment by Peter_de_Blanc on Excluding the Supernatural · 2011-12-19T04:45:06.457Z · LW · GW

I understand that Phil was not suggesting that all non-real things are vampires. That's why my example was a mosquito that isn't real, rather than, say, a Toyota that isn't real.

Comment by Peter_de_Blanc on Excluding the Supernatural · 2011-12-19T03:39:41.111Z · LW · GW

My point was that vampires were by definition not real

So according to you, a mosquito that isn't real is a vampire?

Comment by Peter_de_Blanc on Is latent Toxoplasmosis worth doing something about? · 2011-11-18T02:54:22.150Z · LW · GW

My fencing coach emphasizes modeling your opponent more accurately and setting up situations where you control when stuff happens. Both of these skills can substitute somewhat for having faster reflexes.

Comment by Peter_de_Blanc on Is latent Toxoplasmosis worth doing something about? · 2011-11-18T00:15:38.512Z · LW · GW

Sounds like you should do more Tae Kwon Do.

Comment by Peter_de_Blanc on Amanda Knox: post mortem · 2011-10-21T07:04:34.198Z · LW · GW

This argument does not show that.

Comment by Peter_de_Blanc on Rationality and Video Games · 2011-09-20T03:42:43.215Z · LW · GW

I still don't see why you would want to transform probabilities using a sigmoidal function. It seems unnatural to apply a sigmoidal function to something in the domain [0, 1] rather than the domain R. You would be reducing the range of possible values. The first sigmoidal function I think of is the logistic function. If you used that, then 0 would be transformed into 1/2.
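For concreteness, here is what the standard logistic function does to inputs already in [0, 1] (just a sketch of the objection above):

```python
import math

def logistic(x):
    """Standard logistic function; its natural domain is all of R."""
    return 1.0 / (1.0 + math.exp(-x))

# Fed probabilities, it compresses [0, 1] into roughly [0.5, 0.73]:
print(logistic(0.0))  # 0.5  -- an impossible event gets mapped to 1/2
print(logistic(0.5))  # ~0.62
print(logistic(1.0))  # ~0.73
```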

I have no idea how something like this could be a standard "game design" thing to do, so I think we must not be understanding Chimera correctly.

Comment by Peter_de_Blanc on Rationality and Video Games · 2011-09-19T06:34:11.375Z · LW · GW

The standard "game design" thing to do would be push the probabilities through a sigmoid function (to reward correct changes much more often than not, as well as punish incorrect choices more often than not).

I don't understand. You're applying a sigmoid function to probabilities... what are you doing with the resulting numbers?

Comment by Peter_de_Blanc on Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased) · 2011-08-21T05:24:48.922Z · LW · GW

The setting in my paper allows you to have any finite amount of background knowledge.

Comment by Peter_de_Blanc on [Link] "Upload", a video-conference between a girl and her dead grandfather · 2011-07-24T04:47:21.420Z · LW · GW

There are robots that look like humans, but if you want an upload to experience such a robot as a human body, you would want it to be structured like a human body on the inside too, e.g. by having the same set of muscles.

Comment by Peter_de_Blanc on Preference For (Many) Future Worlds · 2011-07-18T05:02:39.346Z · LW · GW

Evolution.

Comment by Peter_de_Blanc on Preference For (Many) Future Worlds · 2011-07-18T02:27:30.745Z · LW · GW

You can't use the mind that came up with your preferences if no such mind exists. That's my point.

Comment by Peter_de_Blanc on Preference For (Many) Future Worlds · 2011-07-17T13:02:32.840Z · LW · GW

Why wouldn't I just discard the preferences, and use the mind that came up with them to generate entirely new preferences

What makes you think a mind came up with them?

Comment by Peter_de_Blanc on AGI/FAI Theorist for Hire · 2011-07-17T08:33:30.638Z · LW · GW

There was a specific set of algorithms that got me thinking about this topic, but now that I'm thinking about the topic I'd like to look at more stuff. I would proceed by identifying spaces of policies within a domain, and then looking for learning algorithms that deal with those sorts of spaces. For sequential decision-making problems in simple settings, dynamic Bayesian networks can be used both as models of an agent's environment and as action policies.

I'd be interested in talking. You can e-mail me at peter@spaceandgames.com.

Comment by Peter_de_Blanc on AGI/FAI Theorist for Hire · 2011-07-17T08:26:31.217Z · LW · GW

It is not available. The thinking on this matter was that sharing a bibliography of (what we considered) AGI publications relevant to the question of AGI timelines could direct researcher attention towards areas more likely to result in AGI soon, which would be bad.

Comment by Peter_de_Blanc on The Domain of Your Utility Function · 2011-06-11T13:34:05.284Z · LW · GW

I don't think that human values are well described by a PDU. I remember Daniel talking about a hidden reward tape at one point, but I guess that didn't make it into this paper.

Comment by Peter_de_Blanc on St. Petersburg Mugging Implies You Have Bounded Utility · 2011-06-11T05:31:22.182Z · LW · GW

This tracks how good a god you are, and seems to make the paradox disappear.

How? Are you assuming that P(N) goes to zero?

Comment by Peter_de_Blanc on St. Petersburg Mugging Implies You Have Bounded Utility · 2011-06-10T08:27:20.434Z · LW · GW

LCPW cuts two ways here, because there are two universal quantifiers in your claim. You need to look at every possible bounded utility function, not just every possible scenario. At least, if I understand you correctly, you're claiming that no bounded utility function reflects your preferences accurately.

Comment by Peter_de_Blanc on St. Petersburg Mugging Implies You Have Bounded Utility · 2011-06-09T08:09:59.524Z · LW · GW

That doesn't sound like an expected utility maximizer.

Comment by Peter_de_Blanc on St. Petersburg Mugging Implies You Have Bounded Utility · 2011-06-09T08:09:08.823Z · LW · GW

It seems to me that expanding further would reduce the risk of losing the utility it was previously counting on.

Comment by Peter_de_Blanc on St. Petersburg Mugging Implies You Have Bounded Utility · 2011-06-08T08:41:20.079Z · LW · GW

what if the universe turns out to be much larger than previously thought, and the AI says "I'm at 99.999% of achievable utility already, it's not worth it to expand farther or live longer"?

It's not worth what?

Comment by Peter_de_Blanc on Seeing Red: Dissolving Mary's Room and Qualia · 2011-05-28T13:06:46.268Z · LW · GW

Depth perception can be gained through vision therapy, even if you've never had it before. This is something I'm looking into doing, since I also grew up without depth perception.

Comment by Peter_de_Blanc on Teachable Rationality Skills · 2011-05-28T04:52:50.026Z · LW · GW

Our disagreement on this matter is a consequence of our disagreement on other issues that would be very difficult to resolve, and for which there are many apparently intelligent, honest and well informed people on both sides. Therefore, it seems likely that reaching agreement on this issue would take an awful lot of work and wouldn't be much more likely to leave us both right than to leave us both wrong.

You say that as if resolving a disagreement means agreeing to both choose one side or the other. The most common result of cheaply resolving a disagreement is not "both right" or "both wrong", but "both -3 decibels."

Comment by Peter_de_Blanc on Circular Altruism · 2011-05-16T10:25:21.954Z · LW · GW

Obviously I didn't mean that being broke (or anything) is infinite disutility.

Then what asymptote were you referring to?

Comment by Peter_de_Blanc on Circular Altruism · 2011-05-16T08:10:31.851Z · LW · GW

I thought human utility over money was roughly logarithmic, in which case loss of utility per cent lost would grow until (theoretically) hitting an asymptote.

So you're saying that being broke is infinite disutility. How seriously have you thought about the realism of this model?
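To spell out the inference (a sketch that takes the parent comment's logarithmic model literally):

\[
U(w) = \log w \quad\Longrightarrow\quad \lim_{w \to 0^{+}} U(w) = -\infty,
\]

so the only asymptote a purely logarithmic utility function has is at zero wealth, which is exactly the claim that being broke carries unbounded disutility.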

Comment by Peter_de_Blanc on People who want to save the world · 2011-05-15T15:59:50.926Z · LW · GW

I praise you for your right action.

Comment by Peter_de_Blanc on The 5-Second Level · 2011-05-08T02:33:45.835Z · LW · GW

First, I imagine a billion bits. That's maybe 15 minutes of high quality video, so it's pretty easy to imagine a billion bits. Then I imagine that each of those bits represents some proposition about a year - for example, whether or not humanity still exists. If you want to model a second proposition about each year, just add another billion bits.
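As a rough check of that size estimate (the bitrate implied below is my own back-of-the-envelope figure, not something stated in the comment):

```python
bits = 10 ** 9
megabytes = bits / 8 / 1e6            # a billion bits is 125 megabytes
seconds = 15 * 60
bitrate_mbps = bits / seconds / 1e6   # ~1.1 megabits per second

print(megabytes)     # 125.0
print(bitrate_mbps)  # ~1.11 -- a plausible rate for compressed video
```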

Comment by Peter_de_Blanc on The 5-Second Level · 2011-05-07T14:09:52.594Z · LW · GW

Me: Well, you're human, so I don't think you can really have concerns about what happens a billion years from now because you can't imagine that period of time.

In what sense are you using the word imagine, and how hard have you tried to imagine a billion years?

Comment by Peter_de_Blanc on Procedural Knowledge Gaps · 2011-02-09T18:15:07.487Z · LW · GW

Instead of a strict straight/bi/gay split, I prefer to think of it as a spectrum where 0 is completely straight, 5 is completely bisexual and 10 is completely gay.

Hah! You're trying to squish two axes into one axis. Why not just have an "attraction to males" axis and an "attraction to females" axis? After all, it is possible for both to be zero or negative.

Comment by Peter_de_Blanc on On Charities and Linear Utility · 2011-02-06T01:37:11.859Z · LW · GW

OK, I guess my biggest complaint is this:

"If this approximation is close enough to the true value, the rest of the argument goes through: given that the sum Δx+Δy+Δz is fixed, it's best to put everything into the charity with the largest partial derivative at (X,Y,Z)."

What does "close enough" mean? I don't see this established anywhere in your post.

I guess one sufficient condition would be that a single charity has the largest partial derivative everywhere in the space of reachable outcomes.
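A toy version of the allocation rule being discussed (a sketch that takes the linear approximation at face value; the three-charity utility function below is made up for illustration):

```python
import math

def numerical_gradient(u, point, eps=1e-6):
    """Estimate the partial derivatives of u at the given point."""
    grads = []
    for i in range(len(point)):
        bumped = list(point)
        bumped[i] += eps
        grads.append((u(bumped) - u(point)) / eps)
    return grads

def allocate(u, point, budget):
    """Under the linear approximation, give the whole budget to the
    charity with the largest partial derivative at the current point."""
    grads = numerical_gradient(u, point)
    best = max(range(len(grads)), key=lambda i: grads[i])
    allocation = [0.0] * len(point)
    allocation[best] = budget
    return allocation

# Hypothetical diminishing-returns utility over three charities:
u = lambda p: math.log(1 + p[0]) + 2 * math.log(1 + p[1]) + 0.5 * p[2]
print(allocate(u, point=[10.0, 10.0, 10.0], budget=1.0))  # [0.0, 0.0, 1.0]
```

The "close enough" question is then whether the same charity would win this argmax everywhere in the reachable region, not only at the starting point.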

Comment by Peter_de_Blanc on On Charities and Linear Utility · 2011-02-05T02:11:03.866Z · LW · GW

I voted this post down. You claim to have done math, and you tell a narrative of doing math, but for the most part your math is not shown. This makes it difficult for someone to form an opinion of your work without redoing the work from scratch.

[Edit: I was unnecessarily rude here, and I've removed the downvote.]

Comment by Peter_de_Blanc on Building Weirdtopia · 2011-01-24T18:04:54.213Z · LW · GW

Weirdtopia: sex is private. Your own memories of sex are only accessible while having sex. People having sex in public will be noticed but forgotten. Your knowledge of who your sex partners are is only accessible when it is needed to arrange sex. You will generally have warm feelings towards your sex partners, but you will not know the reason for these feelings most of the time, nor will you be curious. When you have sex, you will take great joy in realizing/remembering that this person you love is your sex partner.

Comment by Peter_de_Blanc on The Illusion of Sameness · 2011-01-23T16:35:53.685Z · LW · GW

If you want to predict how someone will answer a question, your own best answer is a good guess. Even if you think the other person is less intelligent than you, they are more likely to say the correct answer than they are to say any particular wrong answer.

Similarly, if you want to predict how someone will think through a problem, and you lack detailed knowledge of how that person's mind happens to be broken, then a good guess is that they will think the same sorts of thoughts that a non-broken mind would think.

Comment by Peter_de_Blanc on The Illusion of Sameness · 2011-01-23T14:17:33.819Z · LW · GW

This paper says its variance is from mutation-selection balance. I.e. it is a highly polygenic trait giving it a huge mutational target size, which makes it hard for natural selection to remove its variance.

That's what I said in the comment you are replying to.

Comment by Peter_de_Blanc on The Illusion of Sameness · 2011-01-22T08:37:17.866Z · LW · GW

There is no contradiction in believing that a prototypical human is smarter than most humans. Perhaps the variance in human intelligence is mostly explained by different degrees of divergence from the prototype due to developmental errors.

Comment by Peter_de_Blanc on The Finale of the Ultimate Meta Mega Crossover · 2011-01-19T08:08:31.379Z · LW · GW

Yeah, that bothered me too. But maybe Old One didn't know how long it would take to activate the Countermeasure.

Comment by Peter_de_Blanc on Working hurts less than procrastinating, we fear the twinge of starting · 2011-01-02T08:06:55.865Z · LW · GW

This sounds reasonable. What sort of thought would you recommend responding with after noticing oneself procrastinating? I'm leaning towards "what would I like to do?"

Comment by Peter_de_Blanc on Tallinn-Evans $125,000 Singularity Challenge · 2011-01-01T23:46:36.189Z · LW · GW

I think it would be helpful to talk about exactly what quantities one is risk averse about. If we can agree on a toy example, it should be easy to resolve the argument using math.

For instance, I am (reflectively) somewhat risk averse about the amount of money I have. I am not, on top of that, risk averse about the amount of money I gain or lose from a particular action.

Now how about human lives?

I'm not sure if I am risk averse about the amount of human life in all of spacetime.

I think I am risk averse about the number of humans living at once; if you added a second Earth to the solar system, complete with 6.7 billion humans, I don't think that makes the universe twice as good.

I think death events might be even worse than you would predict from the reduction in human capital, but I am not risk averse about them; 400 deaths sound about twice as bad as 200 deaths if there are 6.7 billion people total.

Nor am I risk averse about the size of my personal contribution to preventing deaths. If I personally save 400 people, that is about twice as good as if I save 200 people.

I'd like to hear how you (and other commenters) feel about each of these measures.
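To make the distinction concrete, here is a small sketch of what being risk averse about one quantity but not another cashes out to (all numbers are arbitrary):

```python
import math

def expected_utility(lottery, utility):
    """lottery is a list of (outcome, probability) pairs."""
    return sum(p * utility(x) for x, p in lottery)

# Risk aversion about wealth: a concave (log) utility prefers a sure $100k
# to a 50/50 gamble between $50k and $150k, even though the means match.
log_u = math.log
print(expected_utility([(100_000, 1.0)], log_u)
      > expected_utility([(50_000, 0.5), (150_000, 0.5)], log_u))  # True

# Risk neutrality about deaths prevented: a linear utility is indifferent
# between saving 200 people for sure and a 50/50 shot at saving 400.
linear_u = lambda n: n
print(expected_utility([(200, 1.0)], linear_u)
      == expected_utility([(400, 0.5), (0, 0.5)], linear_u))  # True
```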

Comment by Peter_de_Blanc on Some rationality tweets · 2011-01-01T23:09:09.477Z · LW · GW

Real analysis is the first thing that comes to mind. Linear algebra is the second thing.

Lately I've been thinking about whether and how learning math can improve one's thinking in seemingly unrelated areas. I should be able to report on my findings in a year or two.

Comment by Peter_de_Blanc on Some rationality tweets · 2010-12-31T22:37:55.141Z · LW · GW

Converting Go positions from SGF to LaTeX.

Comment by Peter_de_Blanc on Some rationality tweets · 2010-12-31T11:16:47.924Z · LW · GW

Writing the above comment got me thinking about agents having different discount rates for different sorts of goods. Could the appearance of hyperbolic discounting come from a mixture of different rates of exponential discounting?

I remembered that the same sort of question comes up in the study of radioisotope decay. A quick Google search turned up this blog, which says that if you assume a maximum-entropy mixture of decay rates (constrained by a particular mean energy), you get hyperbolic decay of the mixture. This is exactly the answer I was looking for.
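For reference, the calculation behind that result (a sketch, treating the discount rate analogously to the decay rate, with an exponential distribution as the maximum-entropy choice for a fixed mean rate):

\[
\int_0^{\infty} \lambda e^{-\lambda r}\, e^{-r t}\, dr \;=\; \frac{\lambda}{\lambda + t} \;=\; \frac{1}{1 + t/\lambda},
\]

so a maximum-entropy mixture of exponential discount rates with mean \(1/\lambda\) yields exactly hyperbolic discounting of a delay \(t\).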

Comment by Peter_de_Blanc on Some rationality tweets · 2010-12-31T11:01:00.788Z · LW · GW

In this metaphor, are learning and knowing investments that will return future cash? Why should there be different discount rates?

By learning, I mean gaining knowledge. Humans can receive enjoyment both from having stuff and from gaining stuff, and knowledge is not an exception.

It's true that a dynamically-consistent agent can't have different discount rates for different terminal values, but bounded rationalists might talk about instrumental values using the same sort of math they use for terminal values. In that context it makes sense to use different discount rates for different sorts of goods.

Comment by Peter_de_Blanc on Some rationality tweets · 2010-12-31T10:40:37.827Z · LW · GW

Peter isn't the only person on that twitter account.

How did you figure that out?

Comment by Peter_de_Blanc on Some rationality tweets · 2010-12-30T22:36:03.959Z · LW · GW

Very good!