Comments

Comment by Erik on 2013 Less Wrong Census/Survey · 2013-12-02T13:30:02.361Z · LW · GW

Took the survey.

Comment by Erik on Best of Rationality Quotes, 2011 Edition · 2012-01-09T15:52:17.128Z · LW · GW

Thanks, nice work.

The entry "13 points Hey 02 November 2011 09:01:09AM" is maybe something you want to remove.

Comment by Erik on Less Wrong: Open Thread, September 2010 · 2010-09-03T07:38:53.401Z · LW · GW

At least you're not alone.

Comment by Erik on Open Thread: April 2010 · 2010-04-06T07:34:17.719Z · LW · GW

West and Brown have done some work on this, which seemed pretty solid to me when I read it a few months ago. The basic idea is that biological systems are designed in a fractal way, which messes up the dimensional analysis.

From the abstract of http://jeb.biologists.org/cgi/content/abstract/208/9/1575:

We have proposed a set of principles based on the observation that almost all life is sustained by hierarchical branching networks, which we assume have invariant terminal units, are space-filling and are optimised by the process of natural selection. We show how these general constraints explain quarter power scaling and lead to a quantitative, predictive theory that captures many of the essential features of diverse biological systems. Examples considered include animal circulatory systems, plant vascular systems, growth, mitochondrial densities, and the concept of a universal molecular clock. Temperature considerations, dimensionality and the role of invariants are discussed. Criticisms and controversies associated with this approach are also addressed.

A Science article of theirs containing similar ideas: http://www.sciencemag.org/cgi/content/abstract/sci;284/5420/1677

Edit: A recent Nature article showing that there are systematic deviations from the power law, somewhat explainable with a modified version of West and Brown's model:

http://www.nature.com/nature/journal/v464/n7289/abs/nature08920.html
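
For concreteness, the quarter-power scaling in question is exemplified by Kleiber's law for metabolic rate; a short note on what the exponent means, using only the standard form of the law:

```latex
% Kleiber's law: basal metabolic rate B scales with body mass M as
\[
  B = B_0 \, M^{3/4} ,
\]
% a quarter-power exponent: a 10^4-fold increase in mass raises metabolic
% rate only about 10^3-fold. Naive surface-to-volume dimensional analysis
% would instead suggest B \propto M^{2/3}.
```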

Comment by Erik on Think Before You Speak (And Signal It) · 2010-03-20T10:58:59.383Z · LW · GW

"To convey an idea that is obvious in retrospect, an idea you can be confident in immediately"

Solutions to hard puzzles are good examples of this. NP problems, where finding a solution is (believed to be) exponentially harder than checking its correctness, are the extreme case.
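
As a rough illustration (a minimal sketch with names of my own choosing), take subset sum: checking a proposed answer is cheap, while the naive search has to try every subset.

```python
from itertools import combinations

def verify(numbers, target, candidate):
    """Check a proposed certificate: is `candidate` drawn from `numbers`
    and does it sum to `target`? Cheap (polynomial time)."""
    remaining = list(numbers)
    for x in candidate:
        if x not in remaining:
            return False
        remaining.remove(x)
    return sum(candidate) == target

def find(numbers, target):
    """Brute-force search: try all 2^n subsets until one sums to `target`.
    Exponentially more work than verifying a given answer."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
print(verify(nums, 9, [4, 5]))  # True: easy to confirm once handed the answer
print(find(nums, 9))            # [4, 5]: found only after searching many subsets
```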

Comment by Erik on Bayesian Flame · 2009-07-27T12:39:47.509Z · LW · GW

It's called an improper prior. There has been some argument about their use, but they seldom lead to problems. The posteriors usually have much better behavior at infinity, and when they don't, that's the theory telling us that the information doesn't determine the solution to the problem.

The observation that an improper prior cannot be obtained as a posterior distribution is kind of trivial. It is meant to represent a total lack of information with respect to some parameter. As soon as you have made an observation, you have more information than that.
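
A standard example, sketched for concreteness: a flat improper prior on a location parameter already becomes a proper posterior after a single Gaussian observation.

```latex
% Flat improper prior on a location parameter: p(\theta) \propto 1 on \mathbb{R}.
% One observation x \sim \mathcal{N}(\theta, \sigma^2) with known \sigma gives
\[
  p(\theta \mid x) \;\propto\; p(x \mid \theta)\, p(\theta)
    \;\propto\; \exp\!\left( -\frac{(x - \theta)^2}{2\sigma^2} \right) ,
\]
% which is a proper \mathcal{N}(x, \sigma^2) density in \theta, even though
% the prior itself integrates to infinity and hence can never arise as a
% posterior:
\[
  \int_{-\infty}^{\infty} p(\theta)\, d\theta = \infty ,
  \qquad
  \int_{-\infty}^{\infty} p(\theta \mid x)\, d\theta = 1 .
\]
```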

Comment by Erik on Information cascades · 2009-04-01T07:29:40.548Z · LW · GW

[Sorry for not answering earlier, I didn't find the inbox until recently.]

I perhaps was a bit unclear, but when I say "ideal Bayesian" I mean a mathematical construct that does full Bayesian updating, i.e. incorporates all prior knowledge into its calculations. This is of course impossible for anyone not extremely ignorant of the world, which is why I called it a minor point.

An ideal Bayesian calculation would include massive deductive work on, e.g., the psychology of voting, knowledge of the functioning of this community in particular, etc.

My comment wasn't really an objection. Doing a full Bayesian calculation on a real-world problem is comparable to using quantum mechanics for macroscopic systems: one must use approximations, and the hard part is knowing when they break down.

Comment by Erik on Akrasia, hyperbolic discounting, and picoeconomics · 2009-03-30T12:48:36.035Z · LW · GW

Reading the Wikipedia article on hyperbolic discounting, it seems there is some evidence for quasi-hyperbolic discounting. Looking at the formula, the interpretation is exponential discounting for all future times considered, but with special treatment of the present.
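
For reference, the quasi-hyperbolic (beta-delta) form referred to above is the standard one:

```latex
% Quasi-hyperbolic (beta-delta) discounting: the present gets weight 1, and
% every later period t is discounted exponentially with an extra one-off
% factor \beta, where 0 < \beta < 1 and 0 < \delta < 1.
\[
  D(t) =
  \begin{cases}
    1, & t = 0, \\
    \beta\, \delta^{\,t}, & t = 1, 2, 3, \dots
  \end{cases}
\]
```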

How to explain this? It is not unlikely that the brain uses one system for thinking about the present and another for thinking about the future. Given the usual workings of evolution, the latter is most likely a much later feature than the former. One could perhaps even argue that it would be surprising if there weren't any differences between the systems.

There seems to be some literature referenced in the Wikipedia article. I suggest looking into it if you are interested; I sadly don't have the time right now.

Comment by Erik on Ask LW: What questions to test in our rationality questionnaire? · 2009-03-29T14:10:59.938Z · LW · GW

This is a project that would really profit from recruiting a few psychologists with experience in creating personality tests, IQ tests, or the like. It sounds a bit like we're trying to create a new subfield here. Not that I want to sound discouraging: I think it is very important to get the ball rolling, and even small, preliminary results could prove very useful, but there is probably enough material here to base quite a few academic careers on.

I'll have to agree with Kaj that a short survey is better for most purposes, but throwing out a long list of ideas first and later paring it down to a more efficient one is a good idea.

Comment by Erik on Defense Against The Dark Arts: Case Study #1 · 2009-03-28T05:51:52.862Z · LW · GW

I think you may very well be correct in your interpretation of the original author's intention. However, I think Yvain's interpretation better matches the majority of the upvotes the comment got.

Comment by Erik on Moore's Paradox · 2009-03-08T09:32:25.778Z · LW · GW

Is it harder for you to say "Evidence indicates that God exists" than to say "I believe God exists"? Just curious; it's a bit of a pet theory of mine. If you don't want to expend energy just to provide another data point for me, no hard feelings.

If you would be so kind, you could try to indicate how comfortable you are with the different qualifiers jimrandomh gave.

Comment by Erik on Moore's Paradox · 2009-03-08T09:25:32.158Z · LW · GW

Ah, but the point is that "believe" is the weaseliest of words. I know a few intelligent people, and would guess there are quite a lot more, who readily state "I believe that there is a God" but who would be very hesitant if you asked them to use "Evidence indicates that".

I would say that what you call weasel words occupy a scale, and that it's not equally easy to use them all in any given context, at least not for reasonably intelligent people.

Comment by Erik on Does blind review slow down science? · 2009-03-07T09:22:56.501Z · LW · GW

The title of the post is "Does blind review slow down science?", not "Does blind review stop science?". The prestigious researchers may have the time, but there are plenty of members of humanity who don't. Science is slow enough as it is; we would be well advised to consider any factors that might speed up progress.

Comment by Erik on Information cascades · 2009-03-06T07:13:57.315Z · LW · GW

Points 1, 2 and 4 at the end are more or less equivalent; they are worth repeating, though. There isn't really any worth in a score of votes on the true quality, at least not for Bayesians. A score of votes on individual judgments would contain all the useful information.

A thought experiment: you could use a double voting system, where you cast one vote on your belief before updating on the consensus, and another vote, in a separate count, on your updated belief. The point would be to update on the consensus of the first vote count and use the second vote count for all other purposes, e.g. promoting posts to the front page. This would allow broadcasting each person's novel evidence (their individual judgement) while keeping some kind of aggregate score for the site's algorithms to work with. It would probably be easy to create an algorithm that makes full use of the first score alone, though, and as long as one can't think of a good use for the second count, I guess one shouldn't vote one's updated beliefs in a single-vote system.
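
A minimal sketch of that double-vote mechanism, with hypothetical names (DoubleVotePost and its methods are illustrative, not an actual site feature):

```python
from dataclasses import dataclass, field

@dataclass
class DoubleVotePost:
    # Votes cast *before* seeing the consensus: each voter's independent
    # judgement, i.e. the novel evidence worth broadcasting.
    pre_update_votes: list = field(default_factory=list)
    # Votes cast *after* updating on the first tally: available for other
    # purposes, e.g. deciding what to promote to the front page.
    post_update_votes: list = field(default_factory=list)

    def cast(self, pre: int, post: int) -> None:
        """Record one voter's pair of votes (+1 or -1 each)."""
        self.pre_update_votes.append(pre)
        self.post_update_votes.append(post)

    def independent_score(self) -> int:
        return sum(self.pre_update_votes)

    def updated_score(self) -> int:
        return sum(self.post_update_votes)

# Usage: one voter privately likes the post but defers to a negative
# consensus on their second vote; another upvotes in both counts.
post = DoubleVotePost()
post.cast(pre=+1, post=-1)
post.cast(pre=+1, post=+1)
print(post.independent_score(), post.updated_score())  # 2 0
```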

A minor point about the calculations: an ideal Bayesian wouldn't do the calculation you did. Knowing the voting procedure, they would dismiss any votes not contributing new information. Since the order of the votes isn't public, they would have to keep a prior over the different orders and update on that. This is of course a minor quibble, as this would require far too much calculation to be a reasonable model for any real reader.