Posts

Helsinki LW Meetup - Sat March 5th 2011-02-21T22:10:23.496Z

Comments

Comment by Erebus on The ethics of breaking belief · 2012-05-12T11:07:41.073Z · LW · GW

I have recently had the unpleasant experience of being subjected to the kind of dishonest emotional manipulation that is recommended here. A (former) friend tried to convert me to his religion by using these tricks, and I can attest that they are effective if the person on the receiving end is trusting enough and doesn't realize that they are being manipulated. In my case the absence and avoidance of rational argument eventually led to the failure of the conversion attempt, but not before severe emotional distress had been inflicted on me by a person I used to trust.

Needless to say, I find it unpleasant that these kinds of techniques are mentioned without also noting that they are indeed manipulative, dishonest and very easy to abuse.

Comment by Erebus on Stupid Questions Open Thread · 2012-01-03T10:47:27.544Z · LW · GW

Solomonoff's universal prior assigns a probability to every individual Turing machine. Usually the interesting statements or hypotheses about which machine we are dealing with are more like "the 10th output bit is 1" than "the machine has the number 643653". The first statement describes an infinite number of different machines, and its probability is the sum of the probabilities of those Turing machines that produce 1 as their 10th output bit (as the probabilities of mutually exclusive hypotheses can be summed). This probability is not directly related to the K-complexity of the statement "the 10th output bit is 1" in any obvious way. The second statement, on the other hand, has probability exactly equal to the probability assigned to the Turing machine number 643653, and its K-complexity is essentially (that is, up to an additive constant) equal to the K-complexity of the number 643653.

So the point is that generic statements usually describe a huge number of different specific individual hypotheses, and that the complexity of a statement needed to delineate a set of Turing machines is not (necessarily) directly related to the complexities of the individual Turing machines in the set.
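To make the summing concrete, here is a toy sketch. Everything in it is my own construction, not the actual universal prior: "machines" are indexed by positive integers n, each gets the weight 4^(-bitlength(n)) as a stand-in for a prefix-free prior, and the made-up output rule is that machine n prints the binary expansion of n.

```python
# Toy illustration of summing the prior over a set of machines.
# NOT the real Solomonoff prior: weights and the output rule are invented
# purely to show how a generic statement collects mass from many machines.

def weight(n):
    # Stand-in for 2^-K(n): simpler numbers (shorter bit strings) weigh more.
    return 4.0 ** (-n.bit_length())

machines = range(1, 2 ** 20)
total = sum(weight(n) for n in machines)

def prior(pred):
    """Probability mass of all machines satisfying pred; the hypotheses
    are mutually exclusive, so their weights simply add up."""
    return sum(weight(n) for n in machines if pred(n)) / total

def tenth_output_bit_is_1(n):
    # Toy output rule: machine n "prints" the binary expansion of n.
    bits = bin(n)[2:]
    return len(bits) >= 10 and bits[9] == "1"

p_generic = prior(tenth_output_bit_is_1)    # an enormous set of machines
p_specific = prior(lambda n: n == 643653)   # exactly one machine

print(p_generic / p_specific)  # the generic statement has far more mass
```

The point survives the toyness: the generic statement collects the weight of an exponentially large set of machines, while the specific statement gets the weight of exactly one.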

Comment by Erebus on Rationality Lessons Learned from Irrational Adventures in Romance · 2011-10-10T09:15:52.804Z · LW · GW

Of course it is still valid, unless X corresponds directly to some observable and clearly identifiable element of physical reality, so that its existence is not Platonic, but physically real. Obviously it wouldn't make sense to discuss whether someone has, say, committed theft if there didn't exist a precise and agreed-upon definition of what counts as theft -- or otherwise we would be hunting for some objectively existing Platonic idea of "theft" in order to see whether it applies.

Of course? There must be a miscommunication.

Do you think it makes sense to discuss, say, intelligence, friendship or morality? Do you think these exist either as physically real things or Platonic ideas, or can you supply precise and agreed-upon definitions for them?

I don't count any of my three examples as physically real in the sense of being a clearly identifiable part of physical reality. Of course they reduce to physical things at the bottom, but only in the trivial sense in which everything does. Knowing that the reduction exists is one thing, but we don't judge things as intelligent, friendly or moral based on their physical configuration; we judge them on higher-order abstractions. I'm not expecting us to have a disagreement here. I wouldn't consider any of the examples a Platonic idea either. Our concepts and intuitions do not have their source in some independently existing ideal world of perfections. Since you seemed to point to Platonism as a fallacy, we probably don't disagree here either.

So I'm led to expect that you think that to sensibly discuss whether a given behaviour is intelligent, friendly or moral, we need to be able to give precise definitions for intelligence, friendship and morality. But I can only think that this is fundamentally misguided: the discussions around these concepts are relevant precisely because we do not have such definitions at hand. We can try to unpack our intuitions about what we think of as a concept, for example by tabooing the word for it. But this is completely different from giving a definition.

However, to use the same example again, when people are accused of theft, in the overwhelming majority of cases, the only disagreement is whether the facts of the accusation are correct, and it's only very rarely that even after the facts are agreed upon, there is significant disagreement over whether what happened counts as theft. In contrast, when people are accused of sexism, a discussion almost always immediately starts about whether what they did was really and truly "sexist," even when there is no disagreement at all about what exactly was said or done.

This only reflects on the easiest ways of making or defending against particular kinds of accusations, not at all on the content of the accusations. Morality is similar to sexism in this respect, but it still makes sense to discuss morality without being a Platonist about it or without giving a precise agreed-upon definition.

Comment by Erebus on Rationality Lessons Learned from Irrational Adventures in Romance · 2011-10-09T08:08:02.634Z · LW · GW

[...] Discussing whether some institution, act, or claim is "sexist" makes sense only if at least one of these two conditions applies:

  1. There is some objectively existing Platonic idea of "sexism," [...]

  2. There is a precise and agreed-upon definition of "sexism," [...]

Replace "sexism" by "X". Do you think this alternative is still valid?

Or maybe you should elaborate on why you think "sexism" gives rise to this alternative.

Comment by Erebus on Rationality Lessons Learned from Irrational Adventures in Romance · 2011-10-04T15:36:00.477Z · LW · GW

I am troubled by the vehemence with which people seem to reject the notion of using the language of the second-order simulacrum -- especially in communities that should be intimately aware of the concept that the map is not the territory.

Understanding signaling in communication is almost as basic as understanding the difference between the map and the territory.

A choice of words always contains an element of signaling. Generalizing statements are not always made in order to describe the territory with a simpler map; they are also made in order to signal that the exceptions to the general case are not worth mentioning. This element of signaling is present even if the generalization is made out of a simple desire not to "waste space" - indeed, the exceptional cases were not mentioned! Thus a sweeping generalization is evidence that the speaker doesn't consider the exceptions to the stated general rule worth much (bounded above by the trouble of mentioning them). And when dealing with matters of personal identity, not all explanations for the small worth of the set of exceptional people are as charitable as a supposedly small size of the set.

Comment by Erebus on Open Thread: September 2011 · 2011-09-21T07:40:48.117Z · LW · GW

Maybe I misinterpreted your first comment. I agree almost completely with this one, especially the part

(...) not relying on some magic future technology that will solve the existing problems.

Comment by Erebus on Open Thread: September 2011 · 2011-09-19T17:42:26.270Z · LW · GW

What would be the point of criticizing technology on the basis of its appropriate use?

Technologies do not exist in a vacuum, and even if they did, there'd be nobody around to use them. Thus restricting attention to the "technology itself" alone is bound to miss the point of the criticism of technology. When considering the potential effects of future technology we need to take into account how the technologies will be used, and it is certainly reasonable to believe that some technologies have been and will be used to cause more harm than good. That a critical argument takes into account the relevant features of the society that uses the technology is not a flaw of the argument, but rather the opposite.

Comment by Erebus on Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased) · 2011-08-19T08:30:13.172Z · LW · GW

The argument is that simple numbers like 3^^^3 should be considered much more likely than random numbers with a similar size, since they have short descriptions and so the mechanisms by which that many people (or whatever) hang in the balance are less complex.

Consider the options A = "a proposed action affects 3^^^3 people" and B = "the number 3^^^3 was made up to make a point". Given my knowledge about the mechanisms that affect people in the real world and about the mechanisms people use to make points in arguments, I would say that the likelihood of A versus B is hugely in favor of B. This is because the relevant probabilities for calculating the likelihood scale (for large values and up to a first order approximation) with the size of the number in question for option A and the complexity of the number for option B. I didn't read de Blanc's paper further than the abstract, but from that and your description of the paper it seems that its setting is far more abstract and uninformative than the setting of Pascal's mugging, in which we also have the background knowledge of our usual life experience.
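A back-of-the-envelope version of this scaling argument, with model assumptions entirely of my own making: take the probability of the claim under A to scale like 1/N with the size of the number, and under B like 2^(-8k) for a k-character description of the number, then compare on a log scale for a number big enough to matter but small enough to compute with.

```python
import math

# Made-up toy models for the two options (not from the original comment):
#   A: the number is the true count of affected people  ->  P ~ 1/N
#   B: the number was chosen to make a point            ->  P ~ 2^-(8k),
#      with complexity crudely proxied as 8 bits per character of the
#      number's shortest common description.

def log2_p_given_A(n):
    # Probability scales (inversely) with the number's magnitude.
    return -math.log2(n)

def log2_p_given_B(desc):
    # Probability scales with the description's length, not the magnitude.
    return -8 * len(desc)

n = 10 ** 100  # a stand-in for a huge but simply describable number
log_lr = log2_p_given_A(n) - log2_p_given_B("10^100")
print(log_lr)  # ~ -332 + 48 = about -284: log-odds hugely favor B
```

For 3^^^3 the gap is incomparably wider: the description stays a few characters long while 1/N becomes unimaginably small, which is exactly the asymmetry the paragraph above describes.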

Comment by Erebus on Helsinki meetup Saturday 9.4 (cancelled) · 2011-04-05T07:03:13.813Z · LW · GW

I can't make it before mid-August, so waiting for me is probably not a good idea.

Comment by Erebus on Helsinki meetup Saturday 9.4 (cancelled) · 2011-04-04T17:38:17.622Z · LW · GW

A mailing list is a fine idea. With the amount of traffic on the front page these days, a dedicated mailing list might be a more reliable way of contacting less active readers. Assuming, of course, that we can get them to sign up on the list :)

Unfortunately I'm unable to participate in the meetup this time, as I'll be out of the country for quite some time starting on Friday.

Comment by Erebus on Less Wrong NYC: Case Study of a Successful Rationalist Chapter · 2011-03-18T08:24:45.611Z · LW · GW

This post inspires me. I'll definitely keep this in mind when considering the next meetup in Helsinki.

(Unfortunately for organizing meetups, I'll be traveling until August. I hope my motivation won't have subsided when I come back.)

Comment by Erebus on Helsinki LW Meetup - Sat March 5th · 2011-03-07T19:17:23.444Z · LW · GW

Thanks to everyone who attended!

For the next meetup we should probably think of discussion topics in advance. Risto asked about the concrete benefits of having read Less Wrong. At least I feel that I wasn't able to articulate a satisfactory answer, so that might be one topic for next time. Since most of us were quite young, another thing that comes to mind is optimal career or study choices.

Comment by Erebus on Helsinki LW Meetup - Sat March 5th · 2011-03-02T07:12:56.261Z · LW · GW

I didn't have any specific topics in mind when proposing the meetup. Since this is the first Helsinki meetup, I think it might be a good idea to start with something like rationalist origin stories to get the discussion started.

Comment by Erebus on Helsinki LW Meetup - Sat March 5th · 2011-02-22T13:19:56.313Z · LW · GW

I wouldn't expect that having the meetup in English would be a problem for most of the prospective participants.

Comment by Erebus on Starting a LW meet-up is easy. · 2011-02-07T10:57:50.860Z · LW · GW

Judging from the comments, I guess we can fix the date of the meet as Saturday 5th of March. Any suggestions for a place? I'd think a not-too-noisy cafe in the center would be ideal, but I don't know the options well enough to recommend any. Just to provide a default suggestion in case nobody has a preference, let's say we meet at Cafe Aalto.

Comment by Erebus on Starting a LW meet-up is easy. · 2011-02-04T09:06:31.625Z · LW · GW

According to these statistics, Helsinki has the eighth largest population of LW-readers out of all the cities in the world. Even if that number is for some reason bloated compared to other cities in the list, I think it'd be a good idea to try and have a meet-up here. So, is anyone else from around Helsinki interested? A couple of answers in this thread should be enough for us to settle on a date (I prefer a weekend in March) and post an announcement on the front page.

Comment by Erebus on What is Bayesianism? · 2010-03-04T17:24:29.968Z · LW · GW

"Infinity is mysterious" was intended as a paraphrase of Jaynes' chapter on "paradoxes" of probability theory, and I intended "mysterious" precisely in the sense of inherently mysterious. As far as I know, Jaynes didn't use the word himself. But he certainly claims that rules of reasoning about infinity (which he conveniently ignores) are not to be trusted and that they lead to paradoxes.

Comment by Erebus on What is Bayesianism? · 2010-03-04T10:33:33.946Z · LW · GW

Just remember that Jaynes was not a mathematician and many of his claims about pure mathematics (as opposed to computations and their applications) in the book are wrong. In particular, infinity is not mysterious.

Comment by Erebus on For progress to be by accumulation and not by random walk, read great books · 2010-03-04T09:51:31.261Z · LW · GW

Do you have any specific examples in mind, or is this an expression of the general idea that academia is mad?

Comment by Erebus on You're Entitled to Arguments, But Not (That Particular) Proof · 2010-02-18T06:18:57.995Z · LW · GW

Would you expect to see evolutionary biologists discuss the methodological errors of creationist arguments in private correspondence?

(I don't think this is the place for this, since I don't think we're getting anywhere.)

Comment by Erebus on You're Entitled to Arguments, But Not (That Particular) Proof · 2010-02-17T19:46:11.398Z · LW · GW

You're still talking about how the e-mails fit into the scenario of fraudulent climate scientists, that is, P(E|A) in my notation. I specifically said that I feel P(E|B) is being ignored by those who claim the e-mails are evidence of misconduct. Your link, for example, mostly lists things like climatologists talking about discrediting journals that publish AGW-sceptical stuff, which is exactly what they would do if they, in good faith, thought that AGW-scepticism is based on quack science. Reading the e-mails and concluding that sceptical papers are being suppressed without merit seems like merely assuming the conclusion.

(Regarding the FOI requests, that might indeed be something that might reasonably set off alarms and significantly reduce P(E|B) - if you believe the sceptics' commentaries accompanying the relevant quotes. But googling for "mcintyre foi harassment" and doing some reading gives a different story.)

(EDIT: Fixed notation, as in the parent.)

Comment by Erebus on You're Entitled to Arguments, But Not (That Particular) Proof · 2010-02-17T08:54:04.477Z · LW · GW

For the most part, I don't think you're quite answering my question.

You present two explanations for the lack of peer-reviewed articles that are sceptical of the scientific consensus on global warming. The first is that there is unjust suppression of such views. The second is that such scepticism is based on bad science. You say that you think the leaked emails support the first explanation, and that there is sufficient evidence of biased (I'm guessing "biased" means "unmerited by the quality of the science" here) selection by journals. What is that sufficient evidence? More specifically, how does the information conveyed by the leaked emails distinguish between the first and second scenarios?

Now I'm sure the AGW believers feel that they are rejecting bad science rather than rejecting conclusions they don't like but emails like the above certainly make it appear that it is the conclusions as much as the methods that they are actually objecting to.

This addresses my questions, but I was asking for more specifics. Let A = "AGW sceptics are being suppressed from journals without proper evaluation of their science" and B = "AGW sceptics are being suppressed from journals because their science is unsound". Let E be the information provided by the email leaks. How do you get to the conclusion that the likelihood ratio P(E|A)/P(E|B) is significantly above 1?

Personally I can't see how the likelihood ratio would be anything but about 1, and it seems to me that those who act as if the ratio is significantly greater than 1 are simply ignoring the estimation of P(E|B) because their prior P(B) is small.
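In odds form this is just the statement that posterior odds equal prior odds times the likelihood ratio, so evidence with a ratio near 1 leaves beliefs wherever the priors put them. A minimal sketch, with illustrative numbers entirely of my own choosing:

```python
from fractions import Fraction

# Posterior odds = prior odds * likelihood ratio (Bayes' rule in odds form).
def posterior_odds(prior_odds, likelihood_ratio):
    return prior_odds * likelihood_ratio

prior_odds_A_vs_B = Fraction(1, 100)  # an observer who starts out doubting A
lr_uninformative = Fraction(1, 1)     # P(E|A) ~ P(E|B): the e-mails fit both
lr_strong = Fraction(20, 1)           # what genuinely discriminating evidence would look like

print(posterior_odds(prior_odds_A_vs_B, lr_uninformative))  # 1/100: unchanged
print(posterior_odds(prior_odds_A_vs_B, lr_strong))         # 1/5: beliefs actually move
```

With a likelihood ratio of about 1, whatever posterior you end up with was already encoded in your prior, which is the point of the paragraph above.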

(EDIT: I originally wrote P(A|E) and P(B|E) instead P(E|A) and P(E|B). My text was still, apparently, clear enough that this wrong notation didn't cause confusion. I've now fixed the notation.)

Comment by Erebus on You're Entitled to Arguments, But Not (That Particular) Proof · 2010-02-16T11:03:20.960Z · LW · GW

What, specifically, is "damning" about those quotes?

Suppose creationists took over a formerly respected biology journal. Wouldn't you expect to find quotes like the above (with climate sceptics replaced by creationists) from the private correspondence of biologists?

Comment by Erebus on Open Thread: January 2010 · 2010-01-05T10:45:18.975Z · LW · GW

Inspired by reading this blog for quite some time, I started reading E.T. Jaynes' Probability Theory. I've read most of the book by now, and I have incredibly mixed feelings about it.

On one hand, the development of probability calculus starting from the needs of plausible inference seems very appealing as far as the needs of statistics, applied science and inferential reasoning in general are concerned. The Bayesian viewpoint of (applied) probability is developed with such elegance and clarity that alternative interpretations can hardly be considered appealing next to it.

On the other hand, the book is very painful reading for the pure mathematician. The repeated pontification about how wrong mathematicians are for desiring rigor and generality is strange, distracting and useless. What could possibly be wrong about the desire to make the steps and assumptions of deductive reasoning as clear and explicit as possible? Contrary to what Jaynes says, or at least very strongly implies (in Appendix B and elsewhere), clarity and explicitness of mathematical arguments are not in opposition; in my experience, they are complementary.

Even worse, Jaynes makes several strong claims about mathematics that seem to admit no favorable interpretation: they are simply wrong. All of the "paradoxes" surrounding the concepts of infinity that he gives in Chapter 15 (*) are so fundamentally flawed that even a passing familiarity with what measure theory actually says dispels them as mere word-plays caused by fuzzy or shifting definitions, or simply as erroneous applications of the theory. Intuitionism and other finitist positions are certainly consistent philosophical positions, but they aren't made appealing by advocates like Jaynes, who claim to find errors in standard mathematics while simply misunderstanding what the standard theory says.

Also, Jaynes' claims about mathematics that I know to be wrong make it very difficult to take him seriously when he goes into rant mode about other things I know less about (such as "orthodox" statistics or thermodynamics).

I'm extremely frustrated by the book, but I still find it valuable. But I definitely wouldn't recommend it to anyone who didn't know enough mathematics to correct Jaynes' errors in the "paradoxes" he gives. So, why haven't I seen qualifications, disclaimers or warnings in recommendations of the book here? Are the matters concerning pure mathematics just not considered important by those recommending the book here?

(*) I admit I only glanced at the longer ones, "tumbling tetrahedron" and the "marginalization paradox". They seemed to be more about the interpretation of probability than about supposed problems with the concepts of infinity; and given how Jaynes misunderstands and/or misrepresents the mathematical theories of measure and infinities in general elsewhere in the book, I wouldn't expect them to contain any real problems with mathematics anyway.